Keras 2 : examples : Computer Vision – Convolutional Autoencoder for Image Denoising (Translation/Commentary)
Translation: ClassCat Co., Ltd. Sales Information
Date: 11/01/2021 (keras 2.6.0)
* This page is a translation of the following Keras documentation, with supplementary explanations added where appropriate:
- Code examples : Computer Vision : Convolutional autoencoder for image denoising (Author: Santiago L. Valdarrama)
* The sample code has been verified to run; where necessary, it has been supplemented or modified.
Keras 2 : examples : Convolutional Autoencoder for Image Denoising

Introduction

This example demonstrates how to implement a deep convolutional autoencoder for image denoising, mapping noisy digit images from the MNIST dataset to clean digit images. This implementation is based on the original blog post titled Building Autoencoders in Keras by François Chollet.

Setup
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
def preprocess(array):
    """
    Normalizes the supplied array and reshapes it into the appropriate format.
    """
    array = array.astype("float32") / 255.0
    array = np.reshape(array, (len(array), 28, 28, 1))
    return array


def noise(array):
    """
    Adds random noise to each image in the supplied array.
    """
    noise_factor = 0.4
    noisy_array = array + noise_factor * np.random.normal(
        loc=0.0, scale=1.0, size=array.shape
    )

    return np.clip(noisy_array, 0.0, 1.0)


def display(array1, array2):
    """
    Displays ten random images from each one of the supplied arrays.
    """
    n = 10

    indices = np.random.randint(len(array1), size=n)
    images1 = array1[indices, :]
    images2 = array2[indices, :]

    plt.figure(figsize=(20, 4))
    for i, (image1, image2) in enumerate(zip(images1, images2)):
        ax = plt.subplot(2, n, i + 1)
        plt.imshow(image1.reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

        ax = plt.subplot(2, n, i + 1 + n)
        plt.imshow(image2.reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

    plt.show()
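As a quick sanity check (an addition, not part of the original example), we can confirm that noise keeps pixel values inside the valid [0, 1] range thanks to np.clip:

# Hypothetical sanity check: values should remain in [0, 1] after adding noise.
sample = np.random.rand(2, 28, 28, 1).astype("float32")
noisy = noise(sample)
print(noisy.min() >= 0.0, noisy.max() <= 1.0)  # True True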
Prepare the data
# Since we only need images from the dataset to encode and decode, we
# won't use the labels.
(train_data, _), (test_data, _) = mnist.load_data()
# Normalize and reshape the data
train_data = preprocess(train_data)
test_data = preprocess(test_data)
# Create a copy of the data with added noise
noisy_train_data = noise(train_data)
noisy_test_data = noise(test_data)
# Display the train data and a version of it with added noise
display(train_data, noisy_train_data)
Build the autoencoder

We are going to use the Functional API to build our convolutional autoencoder.
input = layers.Input(shape=(28, 28, 1))
# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)
# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 28, 28, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 14, 14, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 14, 14, 32) 9248 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 7, 7, 32) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 14, 14, 32) 9248 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32) 9248 _________________________________________________________________ conv2d_2 (Conv2D) (None, 28, 28, 1) 289 ================================================================= Total params: 28,353 Trainable params: 28,353 Non-trainable params: 0 _________________________________________________________________
Now we can train our autoencoder using train_data as both our input data and target. Notice we are setting up the validation data using the same format.
autoencoder.fit(
    x=train_data,
    y=train_data,
    epochs=50,
    batch_size=128,
    shuffle=True,
    validation_data=(test_data, test_data),
)
Epoch 1/50
469/469 [==============================] - 20s 8ms/step - loss: 0.1330 - val_loss: 0.0739
Epoch 2/50
469/469 [==============================] - 3s 7ms/step - loss: 0.0720 - val_loss: 0.0698
Epoch 3/50
469/469 [==============================] - 3s 7ms/step - loss: 0.0696 - val_loss: 0.0684
...
Epoch 48/50
469/469 [==============================] - 3s 7ms/step - loss: 0.0624 - val_loss: 0.0622
Epoch 49/50
469/469 [==============================] - 3s 7ms/step - loss: 0.0624 - val_loss: 0.0621
Epoch 50/50
469/469 [==============================] - 3s 7ms/step - loss: 0.0624 - val_loss: 0.0621
CPU times: user 2min 57s, sys: 18.1 s, total: 3min 15s
Wall time: 3min 23s
<keras.callbacks.History at 0x7f23cd9a9650>
Let's predict on our test dataset and display the original images together with the predictions from our autoencoder. Notice how the predictions are pretty close to the original images, although not quite the same.
predictions = autoencoder.predict(test_data)
display(test_data, predictions)
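To put a rough number on "pretty close" (an addition, not part of the original example), we can compute the mean squared error between the originals and their reconstructions:

# Hypothetical quantitative check: average per-pixel squared error.
mse = np.mean(np.square(test_data - predictions))
print(f"Reconstruction MSE: {mse:.6f}")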
Now that we know that our autoencoder works, let's retrain it using the noisy data as our input and the clean data as our target. We want our autoencoder to learn how to denoise the images.
autoencoder.fit(
    x=noisy_train_data,
    y=train_data,
    epochs=100,
    batch_size=128,
    shuffle=True,
    validation_data=(noisy_test_data, test_data),
)
Epoch 1/100
469/469 [==============================] - 3s 7ms/step - loss: 0.1001 - val_loss: 0.0931
Epoch 2/100
469/469 [==============================] - 3s 7ms/step - loss: 0.0927 - val_loss: 0.0909
Epoch 3/100
469/469 [==============================] - 3s 7ms/step - loss: 0.0910 - val_loss: 0.0898
...
Epoch 98/100
469/469 [==============================] - 3s 7ms/step - loss: 0.0842 - val_loss: 0.0840
Epoch 99/100
469/469 [==============================] - 3s 7ms/step - loss: 0.0842 - val_loss: 0.0843
Epoch 100/100
469/469 [==============================] - 3s 7ms/step - loss: 0.0842 - val_loss: 0.0841
<keras.callbacks.History at 0x7f233604b450>
Now we can predict on the noisy data and display the results of our autoencoder. You will see that the autoencoder does an amazing job at removing the noise from the input images.
predictions = autoencoder.predict(noisy_test_data)
display(noisy_test_data, predictions)
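In practice you often want to denoise a single image rather than a whole array. Here is a minimal sketch (an addition, not part of the original example): predict() expects a batch dimension, so we keep it with a slice and drop it afterwards.

# Hypothetical single-image usage: the 0:1 slice preserves the batch axis.
single_noisy = noisy_test_data[0:1]              # shape (1, 28, 28, 1)
denoised = autoencoder.predict(single_noisy)[0]  # shape (28, 28, 1)
plt.imshow(denoised.reshape(28, 28), cmap="gray")
plt.show()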