Soc DL Manual
Name ________________________________________Branch___________________
Regd No ________________________________Year of Study ___________________
Subject________________________________________________________________
-------------------------------------------------------------------------------------------------------
VIGNAN’S NIRULA INSTITUTE OF TECHNOLOGY AND SCIENCE FOR WOMEN
PEDAPALAKALURU ROAD, GUNTUR-522005.
(Affiliated to JNTU KAKINADA, Kakinada)
2023-2024
CERTIFICATE
Signature of Head of the Department          Signature of Laboratory In Charge

Signature of External Examiner
CONTENTS
9. Autoencoders
10. GAN
[4]: plt.figure(figsize=(8,8))
     for i in range(11):
         plt.subplot(6,5,i+1)
         plt.xticks([])
         plt.yticks([])
         plt.grid(False)
         plt.imshow(train_images[i])
         # The CIFAR labels happen to be arrays,
         # which is why you need the extra index
         plt.xlabel(class_names[train_labels[i][0]])
     plt.show()
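The plotting cell above assumes `train_images`, `train_labels`, and `class_names` already exist. A minimal setup sketch: the commented loader call is the standard Keras one, while the `normalize` helper is our own name for the usual pixel scaling.

```python
import numpy as np

# CIFAR-10 class names in label-index order
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

def normalize(images):
    """Scale uint8 pixel values into [0, 1] float32 for the network."""
    return images.astype('float32') / 255.0

# Typical usage (requires tensorflow):
# from tensorflow.keras import datasets
# (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# train_images, test_images = normalize(train_images), normalize(test_images)
```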
[5]: model = models.Sequential()
     model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.summary()
     model.add(layers.Flatten())
     model.add(layers.Dense(64, activation='relu'))
     model.add(layers.Dense(10))
     model.summary()
     model.compile(optimizer='adam',
                   loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                   metrics=['accuracy'])
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 30, 30, 32)        896
 max_pooling2d (MaxPooling2D)    (None, 15, 15, 32)        0
 conv2d_1 (Conv2D)               (None, 13, 13, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 6, 6, 64)          0
 conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 30, 30, 32)        896
 max_pooling2d (MaxPooling2D)    (None, 15, 15, 32)        0
 conv2d_1 (Conv2D)               (None, 13, 13, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 6, 6, 64)          0
 conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
 flatten (Flatten)               (None, 1024)              0
 dense (Dense)                   (None, 64)                65600
 dense_1 (Dense)                 (None, 10)                650
=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
history = model.fit(train_images, train_labels, epochs=5,
                    validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
Epoch 1/5
1563/1563 [==============================] - 78s 49ms/step - loss: 1.5272 -
accuracy: 0.4397 - val_loss: 1.2652 - val_accuracy: 0.5497
Epoch 2/5
1563/1563 [==============================] - 72s 46ms/step - loss: 1.1659 -
accuracy: 0.5832 - val_loss: 1.0670 - val_accuracy: 0.6209
Epoch 3/5
1563/1563 [==============================] - 68s 44ms/step - loss: 1.0325 -
accuracy: 0.6364 - val_loss: 1.0053 - val_accuracy: 0.6485
Epoch 4/5
1563/1563 [==============================] - 68s 43ms/step - loss: 0.9521 -
accuracy: 0.6652 - val_loss: 0.9844 - val_accuracy: 0.6575
Epoch 5/5
1563/1563 [==============================] - 70s 45ms/step - loss: 0.8847 -
accuracy: 0.6904 - val_loss: 0.9195 - val_accuracy: 0.6846
313/313 - 3s - loss: 0.9195 - accuracy: 0.6846 - 3s/epoch - 11ms/step
0.6845999956130981
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, InputLayer
from keras.utils import to_categorical
import imageio  # To read images
from PIL import Image  # For image resizing
!unzip agedetectiontrain.zip
!unzip agedetectiontest.zip
train = pd.read_csv('/content/train.csv')
test = pd.read_csv('/content/test.csv')
# Once both data sets are read successfully, we can display a random movie
# character along with their age group to verify the ID against the Class
# value, as shown below:
np.random.seed(125)
idx = np.random.choice(train.index)
img_name = train.ID[idx]
img = imageio.imread(os.path.join('/content/Train', img_name))
print('Age group:', train.Class[idx])
plt.imshow(img)
plt.axis('off')
plt.show()
# Let us reshape and transform the training data first, as shown below:
temp = []
for img_name in train.ID:
    img_path = os.path.join('Train', img_name)
    img = imageio.imread(img_path)
    img = np.array(Image.fromarray(img).resize((32, 32))).astype('float32')
    temp.append(img)
train_x = np.stack(temp)
# Next, let us reshape and transform the testing data, as shown below:
temp = []
for img_name in test.ID:
    img_path = os.path.join('Test', img_name)
    img = imageio.imread(img_path)
    img = np.array(Image.fromarray(img).resize((32, 32))).astype('float32')
    temp.append(img)
test_x = np.stack(temp)
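Before training, the stacked frames are typically scaled and the string age-group labels one-hot encoded. The helper name `encode_classes` below is ours, a numpy-only stand-in for the `LabelEncoder` + `to_categorical` pair imported earlier:

```python
import numpy as np

def encode_classes(class_values):
    """Map string labels (e.g. 'YOUNG', 'MIDDLE', 'OLD') to one-hot rows."""
    classes = sorted(set(class_values))
    index = {c: i for i, c in enumerate(classes)}
    ids = np.array([index[c] for c in class_values])
    return np.eye(len(classes), dtype='float32')[ids]

# Typical usage on the arrays built above:
# train_x = train_x / 255.0
# test_x = test_x / 255.0
# train_y = encode_classes(train.Class)
```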
# Next, let us define a network with one input layer, one hidden layer,
# and one output layer, as shown below (unit sizes inferred from the
# parameter counts in the model summary):
input_num_units = (32, 32, 3)
hidden_num_units = 500
output_num_units = 3
model = Sequential([
    InputLayer(input_shape=input_num_units),
    Flatten(),
    Dense(units=hidden_num_units, activation='relu'),
    Dense(units=output_num_units, activation='softmax'),
])
model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 flatten (Flatten)               (None, 3072)              0
 dense (Dense)                   (None, 500)               1536500
 dense_1 (Dense)                 (None, 3)                 1503
=================================================================
Total params: 1,538,003
Trainable params: 1,538,003
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
156/156 [==============================] - 5s 30ms/step - loss: 0.9060
- accuracy: 0.5631
Epoch 2/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8446
- accuracy: 0.6030
Epoch 3/5
156/156 [==============================] - 4s 25ms/step - loss: 0.8277
- accuracy: 0.6151
Epoch 4/5
156/156 [==============================] - 5s 32ms/step - loss: 0.8160
- accuracy: 0.6234
Epoch 5/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8053
- accuracy: 0.6296
<keras.callbacks.History at 0x79d9ae713550>
Epoch 1/5
125/125 [==============================] - 7s 57ms/step - loss: 0.8004
- accuracy: 0.6323 - val_loss: 0.7988 - val_accuracy: 0.6339
Epoch 2/5
125/125 [==============================] - 5s 44ms/step - loss: 0.7945
- accuracy: 0.6373 - val_loss: 0.7961 - val_accuracy: 0.6306
Epoch 3/5
125/125 [==============================] - 4s 36ms/step - loss: 0.7886
- accuracy: 0.6377 - val_loss: 0.7750 - val_accuracy: 0.6472
Epoch 4/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7827
- accuracy: 0.6454 - val_loss: 0.7748 - val_accuracy: 0.6504
Epoch 5/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7814
- accuracy: 0.6434 - val_loss: 0.7712 - val_accuracy: 0.6570
<keras.callbacks.History at 0x79d9d0978550>
plt.figure(figsize=(8,8))
for i in range(11):
    plt.subplot(6,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 30, 30, 32)        896
 max_pooling2d (MaxPooling2D)    (None, 15, 15, 32)        0
 conv2d_1 (Conv2D)               (None, 13, 13, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 6, 6, 64)          0
 conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 30, 30, 32)        896
 max_pooling2d (MaxPooling2D)    (None, 15, 15, 32)        0
 conv2d_1 (Conv2D)               (None, 13, 13, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 6, 6, 64)          0
 conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
 flatten (Flatten)               (None, 1024)              0
 dense (Dense)                   (None, 64)                65600
 dense_1 (Dense)                 (None, 10)                650
=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/5
1563/1563 [==============================] - 80s 50ms/step - loss:
1.5439 - accuracy: 0.4401 - val_loss: 1.2851 - val_accuracy: 0.5487
Epoch 2/5
1563/1563 [==============================] - 72s 46ms/step - loss:
1.1840 - accuracy: 0.5778 - val_loss: 1.0758 - val_accuracy: 0.6210
Epoch 3/5
1563/1563 [==============================] - 71s 45ms/step - loss:
1.0403 - accuracy: 0.6328 - val_loss: 1.0632 - val_accuracy: 0.6348
Epoch 4/5
1563/1563 [==============================] - 71s 45ms/step - loss:
0.9461 - accuracy: 0.6668 - val_loss: 0.9557 - val_accuracy: 0.6647
Epoch 5/5
1563/1563 [==============================] - 72s 46ms/step - loss:
0.8781 - accuracy: 0.6920 - val_loss: 0.9039 - val_accuracy: 0.6878
313/313 - 6s - loss: 0.9039 - accuracy: 0.6878 - 6s/epoch - 18ms/step
0.6877999901771545
history = model.fit(train_images, train_labels, epochs=25,
                    validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
Epoch 1/25
1563/1563 [==============================] - 82s 52ms/step - loss:
0.8176 - accuracy: 0.7149 - val_loss: 0.8911 - val_accuracy: 0.6933
Epoch 2/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.7691 - accuracy: 0.7296 - val_loss: 0.8554 - val_accuracy: 0.7091
Epoch 3/25
1563/1563 [==============================] - 71s 45ms/step - loss:
0.7314 - accuracy: 0.7421 - val_loss: 0.8980 - val_accuracy: 0.6951
Epoch 4/25
1563/1563 [==============================] - 74s 48ms/step - loss:
0.6924 - accuracy: 0.7572 - val_loss: 0.8811 - val_accuracy: 0.7007
Epoch 5/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.6575 - accuracy: 0.7704 - val_loss: 0.8910 - val_accuracy: 0.7031
Epoch 6/25
1563/1563 [==============================] - 73s 46ms/step - loss:
0.6268 - accuracy: 0.7810 - val_loss: 0.9326 - val_accuracy: 0.6919
Epoch 7/25
1563/1563 [==============================] - 71s 45ms/step - loss:
0.5953 - accuracy: 0.7911 - val_loss: 0.9428 - val_accuracy: 0.7029
Epoch 8/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.5670 - accuracy: 0.8001 - val_loss: 0.9305 - val_accuracy: 0.6983
Epoch 9/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.5401 - accuracy: 0.8090 - val_loss: 0.9055 - val_accuracy: 0.7093
Epoch 10/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.5195 - accuracy: 0.8162 - val_loss: 0.9583 - val_accuracy: 0.7046
Epoch 11/25
1563/1563 [==============================] - 73s 46ms/step - loss:
0.4877 - accuracy: 0.8275 - val_loss: 0.9495 - val_accuracy: 0.7113
Epoch 12/25
1563/1563 [==============================] - 71s 46ms/step - loss:
0.4647 - accuracy: 0.8364 - val_loss: 0.9852 - val_accuracy: 0.7002
Epoch 13/25
1563/1563 [==============================] - 70s 45ms/step - loss:
0.4471 - accuracy: 0.8411 - val_loss: 1.0472 - val_accuracy: 0.6985
Epoch 14/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.4204 - accuracy: 0.8507 - val_loss: 1.0665 - val_accuracy: 0.6970
Epoch 15/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.4031 - accuracy: 0.8572 - val_loss: 1.0335 - val_accuracy: 0.7113
Epoch 16/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3815 - accuracy: 0.8638 - val_loss: 1.0957 - val_accuracy: 0.7042
Epoch 17/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3664 - accuracy: 0.8680 - val_loss: 1.1560 - val_accuracy: 0.6975
Epoch 18/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3442 - accuracy: 0.8768 - val_loss: 1.2238 - val_accuracy: 0.6899
Epoch 19/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3341 - accuracy: 0.8812 - val_loss: 1.2090 - val_accuracy: 0.7040
Epoch 20/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.3114 - accuracy: 0.8881 - val_loss: 1.2891 - val_accuracy: 0.6945
Epoch 21/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3003 - accuracy: 0.8929 - val_loss: 1.2742 - val_accuracy: 0.7007
Epoch 22/25
1563/1563 [==============================] - 75s 48ms/step - loss:
0.2873 - accuracy: 0.8966 - val_loss: 1.3051 - val_accuracy: 0.6978
Epoch 23/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.2718 - accuracy: 0.9010 - val_loss: 1.3597 - val_accuracy: 0.6971
Epoch 24/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.2608 - accuracy: 0.9064 - val_loss: 1.3995 - val_accuracy: 0.7030
Epoch 25/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.2517 - accuracy: 0.9094 - val_loss: 1.4640 - val_accuracy: 0.6935
313/313 - 5s - loss: 1.4640 - accuracy: 0.6935 - 5s/epoch - 15ms/step
0.6934999823570251
Sequence prediction
[ ]: import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
[ ]: tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
total_words = len(tokenizer.word_index) + 1
[ ]: input_sequences = []
     for line in text.split('\n'):
         token_list = tokenizer.texts_to_sequences([line])[0]
         for i in range(1, len(token_list)):
             n_gram_sequence = token_list[:i+1]
             input_sequences.append(n_gram_sequence)
[ ]: max_sequence_len = max(len(seq) for seq in input_sequences)
     input_sequences = np.array(pad_sequences(input_sequences,
                                maxlen=max_sequence_len, padding='pre'))
     X = input_sequences[:, :-1]
     y = input_sequences[:, -1]
[ ]: y = np.array(tf.keras.utils.to_categorical(y, num_classes=total_words))
[ ]: model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))
model.add(LSTM(150))
model.add(Dense(total_words, activation='softmax'))
print(model.summary())
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 embedding (Embedding)           (None, 12, 100)           1300
 lstm (LSTM)                     (None, 150)               150600
 dense (Dense)                   (None, 13)                1963
=================================================================
Total params: 153863 (601.03 KB)
Trainable params: 153863 (601.03 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
[ ]: model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
     model.fit(X, y, epochs=10)
Epoch 1/10
1/1 [==============================] - 2s 2s/step - loss: 2.5643 - accuracy:
0.1667
Epoch 2/10
1/1 [==============================] - 0s 21ms/step - loss: 2.5531 - accuracy:
0.4167
Epoch 3/10
1/1 [==============================] - 0s 22ms/step - loss: 2.5419 - accuracy:
0.3333
Epoch 4/10
1/1 [==============================] - 0s 20ms/step - loss: 2.5303 - accuracy:
0.4167
Epoch 5/10
1/1 [==============================] - 0s 26ms/step - loss: 2.5178 - accuracy:
0.3333
Epoch 6/10
1/1 [==============================] - 0s 22ms/step - loss: 2.5041 - accuracy:
0.4167
Epoch 7/10
1/1 [==============================] - 0s 21ms/step - loss: 2.4885 - accuracy:
0.4167
Epoch 8/10
1/1 [==============================] - 0s 28ms/step - loss: 2.4705 - accuracy:
0.4167
Epoch 9/10
1/1 [==============================] - 0s 24ms/step - loss: 2.4492 - accuracy:
0.4167
Epoch 10/10
1/1 [==============================] - 0s 30ms/step - loss: 2.4237 - accuracy:
0.2500
[ ]: <keras.src.callbacks.History at 0x78ebff570c10>
[ ]: seed_text = "hello there!"
     next_words = 13
     for _ in range(next_words):
         token_list = tokenizer.texts_to_sequences([seed_text])[0]
         token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
         predicted = int(np.argmax(model.predict(token_list), axis=-1)[0])
         seed_text += " " + tokenizer.index_word[predicted]
     print(seed_text)
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 20ms/step
1/1 [==============================] - 0s 25ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 20ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
hello there! cse cse cse c c renuka cse c am renuka from c am am 4th
!git clone https://github.com/surmenok/keras_lr_finder.git keras_lr_finder_repo
!mv keras_lr_finder_repo/keras_lr_finder keras_lr_finder
!git clone https://github.com/google/bi-tempered-loss.git
!mv bi-tempered-loss/tensorflow/loss.py loss.py
!rm -r keras_lr_finder_repo bi-tempered-loss
import ssl
cce = CategoricalCrossentropy(from_logits=False)
%timeit cce(np.random.rand(1000, 10).astype(np.float32), np.random.rand(1000, 10).astype(np.float32))
2.11 ms ± 86.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit bi_tempered_logistic_loss(np.random.rand(1000, 10).astype(np.float32), np.random.rand(1000, 10).astype(np.float32), 0.5, 2.)
9.54 ms ± 100 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Wrapper
Let's define a loss function wrapper to pass to Keras during training. The wrapper defaults to standard logistic loss if both temperatures are set to 1; otherwise it uses the bi-tempered version.
class BiTemperedWrapper:
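The class body is not reproduced above. A numpy sketch of the dispatch it describes, with the bi-tempered function injected as a parameter (matching the `(activations, labels, t1, t2)` signature used in the benchmark), could look like this; the original notebook's wrapper presumably calls the TensorFlow implementations instead:

```python
import numpy as np

def logistic_loss(activations, labels):
    """Standard softmax cross-entropy: the t1 = t2 = 1 special case."""
    shifted = activations - activations.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -(labels * log_probs).sum(axis=-1)

class BiTemperedWrapper:
    """Dispatch to logistic loss when both temperatures are 1,
    otherwise to the supplied bi-tempered loss function."""
    def __init__(self, t1=1.0, t2=1.0, bi_tempered_loss_fn=None):
        self.t1, self.t2, self.fn = t1, t2, bi_tempered_loss_fn

    def __call__(self, labels, activations):
        labels = np.asarray(labels)
        activations = np.asarray(activations)
        if self.t1 == 1.0 and self.t2 == 1.0:
            return logistic_loss(activations, labels)
        return self.fn(activations, labels, self.t1, self.t2)
```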
Synthetic dataset
The synthetic dataset is a reproduction of the one presented in the original paper. It consists of
two rings of points separated by a margin.
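The `generate_points(n, r_min, r_max)` helper is not shown. Given the two-rings description, a plausible implementation (our assumption about the notebook's helper) samples n points with uniform angle and radius in the given band:

```python
import numpy as np

def generate_points(n, r_min, r_max):
    """Sample n 2-D points with radius in [r_min, r_max] and uniform angle."""
    rng = np.random.default_rng(0)
    r = rng.uniform(r_min, r_max, n)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
```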
blue_train = generate_points(1200, 0, 5)
red_train = generate_points(1200, 6, 8)
blue_train_margin_noise, red_train_margin_noise = np.copy(blue_train), np.copy(red_train)
blue_train_global_noise, red_train_global_noise = np.copy(blue_train), np.copy(red_train)
plt.figure(figsize=(17, 5))
plt.subplot(1, 3, 1)
plt.title("Training data\nwithout noise")
plot_points([blue_train, red_train], 8)
plt.subplot(1, 3, 2)
plt.title("Training data\n20% margin noise")
plot_points([blue_train_margin_noise, red_train_margin_noise], 8)
plt.subplot(1, 3, 3)
plt.title("Training data\n20% global noise")
plot_points([blue_train_global_noise, red_train_global_noise], 8)
plt.show()
plt.figure(figsize=(6, 5))
blue_valid = generate_points(300, 0, 5)
red_valid = generate_points(300, 6, 8)
plt.title("Validation data")
plot_points([blue_valid, red_valid], 8)
plt.show()
The data has to be transformed prior to being passed to the model. The labels are converted into
numerical values (0 or 1) and one-hot encoded.
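This transformation can be sketched in plain numpy; the `make_dataset` helper name and the stacking of the two point clouds into one array are our assumptions about how the notebook combines the rings:

```python
import numpy as np

def make_dataset(blue, red):
    """Stack the two point clouds; one-hot encode blue as 0, red as 1."""
    X = np.concatenate([blue, red], axis=0).astype('float32')
    labels = np.array([0] * len(blue) + [1] * len(red))
    y = np.eye(2, dtype='float32')[labels]
    return X, y
```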
Now, we're going to define our model. It's a simple fully connected network with two hidden
layers.
def synthetic_model():
return Sequential([
InputLayer(input_shape=X_train[0].shape),
Dense(32, activation='relu'),
Dense(32, activation='relu'),
Dense(2)
])
The network is trained on all three datasets (without noise, with margin noise, and with global noise) for two different sets of temperatures: (1.0, 1.0), which is tantamount to standard logistic loss, and (0.2, 4.0) for the bi-tempered alternative.
Epoch 1/20
75/75 [==============================] - 2s 9ms/step - loss: 0.5583 -
accuracy: 0.6183 - val_loss: 0.5046 - val_accuracy: 0.7200
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.4416 -
accuracy: 0.7958 - val_loss: 0.3621 - val_accuracy: 0.8833
Epoch 3/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2902 -
accuracy: 0.9112 - val_loss: 0.2183 - val_accuracy: 0.9400
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1723 -
accuracy: 0.9629 - val_loss: 0.1351 - val_accuracy: 0.9583
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1103 -
accuracy: 0.9821 - val_loss: 0.0868 - val_accuracy: 0.9983
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0765 -
accuracy: 0.9937 - val_loss: 0.0606 - val_accuracy: 0.9967
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0553 -
accuracy: 0.9979 - val_loss: 0.0464 - val_accuracy: 0.9967
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0428 -
accuracy: 0.9992 - val_loss: 0.0345 - val_accuracy: 1.0000
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0354 -
accuracy: 0.9992 - val_loss: 0.0270 - val_accuracy: 1.0000
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0279 -
accuracy: 1.0000 - val_loss: 0.0218 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0226 -
accuracy: 1.0000 - val_loss: 0.0189 - val_accuracy: 1.0000
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0187 -
accuracy: 0.9996 - val_loss: 0.0161 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0156 -
accuracy: 1.0000 - val_loss: 0.0140 - val_accuracy: 1.0000
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0131 -
accuracy: 1.0000 - val_loss: 0.0113 - val_accuracy: 1.0000
Epoch 15/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0114 -
accuracy: 1.0000 - val_loss: 0.0091 - val_accuracy: 1.0000
Epoch 16/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0092 -
accuracy: 1.0000 - val_loss: 0.0077 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0081 -
accuracy: 1.0000 - val_loss: 0.0067 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0072 -
accuracy: 1.0000 - val_loss: 0.0061 - val_accuracy: 1.0000
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0065 -
accuracy: 1.0000 - val_loss: 0.0052 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0055 -
accuracy: 1.0000 - val_loss: 0.0044 - val_accuracy: 1.0000
Epoch 1/20
75/75 [==============================] - 4s 11ms/step - loss: 0.2790 -
accuracy: 0.4967 - val_loss: 0.2615 - val_accuracy: 0.5350
Epoch 2/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2527 -
accuracy: 0.6004 - val_loss: 0.2445 - val_accuracy: 0.6633
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2303 -
accuracy: 0.7342 - val_loss: 0.2137 - val_accuracy: 0.8017
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1925 -
accuracy: 0.8746 - val_loss: 0.1725 - val_accuracy: 0.9000
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1559 -
accuracy: 0.9350 - val_loss: 0.1406 - val_accuracy: 0.9517
Epoch 6/20
75/75 [==============================] - 0s 4ms/step - loss: 0.1288 -
accuracy: 0.9683 - val_loss: 0.1177 - val_accuracy: 0.9667
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1086 -
accuracy: 0.9862 - val_loss: 0.1010 - val_accuracy: 0.9700
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0929 -
accuracy: 0.9917 - val_loss: 0.0862 - val_accuracy: 0.9967
Epoch 9/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0826 -
accuracy: 0.9958 - val_loss: 0.0772 - val_accuracy: 0.9983
Epoch 10/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0744 -
accuracy: 0.9983 - val_loss: 0.0701 - val_accuracy: 0.9983
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0677 -
accuracy: 0.9992 - val_loss: 0.0643 - val_accuracy: 0.9983
Epoch 12/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0626 -
accuracy: 0.9996 - val_loss: 0.0593 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0580 -
accuracy: 0.9992 - val_loss: 0.0554 - val_accuracy: 0.9983
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0542 -
accuracy: 0.9996 - val_loss: 0.0519 - val_accuracy: 1.0000
Epoch 15/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0509 -
accuracy: 1.0000 - val_loss: 0.0489 - val_accuracy: 1.0000
Epoch 16/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0481 -
accuracy: 1.0000 - val_loss: 0.0462 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0454 -
accuracy: 1.0000 - val_loss: 0.0439 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0433 -
accuracy: 1.0000 - val_loss: 0.0418 - val_accuracy: 1.0000
Epoch 19/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0413 -
accuracy: 1.0000 - val_loss: 0.0401 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0396 -
accuracy: 1.0000 - val_loss: 0.0386 - val_accuracy: 0.9983
Epoch 1/20
75/75 [==============================] - 1s 6ms/step - loss: 0.5866 -
accuracy: 0.6121 - val_loss: 0.5048 - val_accuracy: 0.7417
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.4486 -
accuracy: 0.8129 - val_loss: 0.3604 - val_accuracy: 0.8900
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.3186 -
accuracy: 0.9104 - val_loss: 0.2374 - val_accuracy: 0.8983
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2247 -
accuracy: 0.9371 - val_loss: 0.1598 - val_accuracy: 0.9417
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1812 -
accuracy: 0.9454 - val_loss: 0.1156 - val_accuracy: 0.9917
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1544 -
accuracy: 0.9563 - val_loss: 0.0883 - val_accuracy: 0.9900
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1438 -
accuracy: 0.9583 - val_loss: 0.0812 - val_accuracy: 1.0000
Epoch 8/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1390 -
accuracy: 0.9579 - val_loss: 0.0687 - val_accuracy: 1.0000
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1343 -
accuracy: 0.9604 - val_loss: 0.0675 - val_accuracy: 0.9900
Epoch 10/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1335 -
accuracy: 0.9575 - val_loss: 0.0579 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1321 -
accuracy: 0.9608 - val_loss: 0.0548 - val_accuracy: 0.9967
Epoch 12/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1357 -
accuracy: 0.9575 - val_loss: 0.0541 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1336 -
accuracy: 0.9592 - val_loss: 0.0494 - val_accuracy: 0.9983
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1307 -
accuracy: 0.9592 - val_loss: 0.0487 - val_accuracy: 0.9967
Epoch 15/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1289 -
accuracy: 0.9604 - val_loss: 0.0469 - val_accuracy: 0.9967
Epoch 16/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1303 -
accuracy: 0.9625 - val_loss: 0.0496 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1285 -
accuracy: 0.9600 - val_loss: 0.0441 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1296 -
accuracy: 0.9575 - val_loss: 0.0492 - val_accuracy: 0.9967
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1250 -
accuracy: 0.9617 - val_loss: 0.0446 - val_accuracy: 0.9983
Epoch 20/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1256 -
accuracy: 0.9600 - val_loss: 0.0413 - val_accuracy: 0.9983
Epoch 1/20
75/75 [==============================] - 5s 12ms/step - loss: 0.2631 -
accuracy: 0.5533 - val_loss: 0.2520 - val_accuracy: 0.5967
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2430 -
accuracy: 0.6767 - val_loss: 0.2251 - val_accuracy: 0.7650
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2074 -
accuracy: 0.8429 - val_loss: 0.1798 - val_accuracy: 0.8850
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1647 -
accuracy: 0.9162 - val_loss: 0.1382 - val_accuracy: 0.9567
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1347 -
accuracy: 0.9458 - val_loss: 0.1130 - val_accuracy: 0.9767
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1169 -
accuracy: 0.9504 - val_loss: 0.0968 - val_accuracy: 0.9917
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1057 -
accuracy: 0.9546 - val_loss: 0.0850 - val_accuracy: 0.9967
Epoch 8/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0972 -
accuracy: 0.9579 - val_loss: 0.0762 - val_accuracy: 0.9983
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0912 -
accuracy: 0.9575 - val_loss: 0.0705 - val_accuracy: 0.9967
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0866 -
accuracy: 0.9579 - val_loss: 0.0648 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0828 -
accuracy: 0.9600 - val_loss: 0.0613 - val_accuracy: 1.0000
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0803 -
accuracy: 0.9596 - val_loss: 0.0580 - val_accuracy: 0.9983
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0777 -
accuracy: 0.9592 - val_loss: 0.0542 - val_accuracy: 1.0000
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0748 -
accuracy: 0.9613 - val_loss: 0.0536 - val_accuracy: 0.9933
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0733 -
accuracy: 0.9600 - val_loss: 0.0498 - val_accuracy: 0.9983
Epoch 16/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0718 -
accuracy: 0.9600 - val_loss: 0.0479 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0705 -
accuracy: 0.9592 - val_loss: 0.0470 - val_accuracy: 0.9983
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0681 -
accuracy: 0.9629 - val_loss: 0.0511 - val_accuracy: 0.9783
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0689 -
accuracy: 0.9571 - val_loss: 0.0431 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0666 -
accuracy: 0.9604 - val_loss: 0.0415 - val_accuracy: 1.0000
Epoch 1/20
75/75 [==============================] - 1s 5ms/step - loss: 0.6941 - accuracy: 0.5292 - val_loss: 0.5760 - val_accuracy: 0.6067
Epoch 2/20
75/75 [==============================] - 0s 2ms/step - loss: 0.6213 - accuracy: 0.6642 - val_loss: 0.5109 - val_accuracy: 0.8433
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5847 - accuracy: 0.7179 - val_loss: 0.4254 - val_accuracy: 0.8900
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5547 - accuracy: 0.7487 - val_loss: 0.3641 - val_accuracy: 0.9517
Epoch 5/20
75/75 [==============================] - 0s 2ms/step - loss: 0.5442 - accuracy: 0.7767 - val_loss: 0.3299 - val_accuracy: 0.9417
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5376 - accuracy: 0.7725 - val_loss: 0.3217 - val_accuracy: 0.9583
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5336 - accuracy: 0.7812 - val_loss: 0.3145 - val_accuracy: 0.9767
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5314 - accuracy: 0.7821 - val_loss: 0.3073 - val_accuracy: 0.9833
Epoch 9/20
75/75 [==============================] - 0s 2ms/step - loss: 0.5304 - accuracy: 0.7908 - val_loss: 0.3173 - val_accuracy: 0.9933
Epoch 10/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5310 - accuracy: 0.7867 - val_loss: 0.2976 - val_accuracy: 0.9833
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5286 - accuracy: 0.7917 - val_loss: 0.2768 - val_accuracy: 0.9450
Epoch 12/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5255 - accuracy: 0.7850 - val_loss: 0.2892 - val_accuracy: 0.9850
Epoch 13/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5258 - accuracy: 0.7892 - val_loss: 0.2890 - val_accuracy: 0.9883
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5283 - accuracy: 0.7900 - val_loss: 0.2840 - val_accuracy: 0.9733
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5203 - accuracy: 0.7917 - val_loss: 0.2802 - val_accuracy: 0.9833
Epoch 16/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5219 - accuracy: 0.7908 - val_loss: 0.2845 - val_accuracy: 0.9917
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5222 - accuracy: 0.7921 - val_loss: 0.2642 - val_accuracy: 0.9667
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5283 - accuracy: 0.7900 - val_loss: 0.2680 - val_accuracy: 0.9700
Epoch 19/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5233 - accuracy: 0.7867 - val_loss: 0.2659 - val_accuracy: 0.9850
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5229 - accuracy: 0.7875 - val_loss: 0.2710 - val_accuracy: 0.9667
Epoch 1/20
75/75 [==============================] - 4s 11ms/step - loss: 0.2855 - accuracy: 0.5063 - val_loss: 0.2636 - val_accuracy: 0.5550
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2767 - accuracy: 0.5692 - val_loss: 0.2527 - val_accuracy: 0.6517
Epoch 3/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2689 - accuracy: 0.6212 - val_loss: 0.2378 - val_accuracy: 0.7733
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2586 - accuracy: 0.6954 - val_loss: 0.2180 - val_accuracy: 0.8367
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2474 - accuracy: 0.7267 - val_loss: 0.1972 - val_accuracy: 0.8850
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2375 - accuracy: 0.7487 - val_loss: 0.1806 - val_accuracy: 0.9517
Epoch 7/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2293 - accuracy: 0.7671 - val_loss: 0.1625 - val_accuracy: 0.9433
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2227 - accuracy: 0.7767 - val_loss: 0.1500 - val_accuracy: 0.9750
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2170 - accuracy: 0.7900 - val_loss: 0.1392 - val_accuracy: 0.9600
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2136 - accuracy: 0.7892 - val_loss: 0.1299 - val_accuracy: 0.9917
Epoch 11/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2101 - accuracy: 0.7933 - val_loss: 0.1217 - val_accuracy: 0.9950
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2071 - accuracy: 0.7954 - val_loss: 0.1125 - val_accuracy: 0.9867
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2047 - accuracy: 0.7967 - val_loss: 0.1073 - val_accuracy: 0.9950
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2030 - accuracy: 0.7975 - val_loss: 0.1026 - val_accuracy: 0.9983
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2015 - accuracy: 0.8000 - val_loss: 0.0978 - val_accuracy: 0.9983
Epoch 16/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2004 - accuracy: 0.8000 - val_loss: 0.0941 - val_accuracy: 0.9983
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1993 - accuracy: 0.8000 - val_loss: 0.0914 - val_accuracy: 0.9983
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1984 - accuracy: 0.8000 - val_loss: 0.0881 - val_accuracy: 0.9967
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1973 - accuracy: 0.7996 - val_loss: 0.0866 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1966 - accuracy: 0.7996 - val_loss: 0.0828 - val_accuracy: 0.9933
Having trained the models, let's plot their responses and accuracy metrics.
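One way to compare the runs (a minimal sketch; it assumes each `model.fit(...)` call above was captured as a History object, which this excerpt does not show) is to overlay the validation-accuracy curves:

```python
import matplotlib.pyplot as plt

def plot_histories(histories, labels):
    """Overlay validation-accuracy curves from several training runs.

    Each element of `histories` may be a Keras History object or a plain
    dict shaped like History.history, so the helper also works without
    TensorFlow installed."""
    fig, ax = plt.subplots(figsize=(8, 5))
    for hist, label in zip(histories, labels):
        h = hist if isinstance(hist, dict) else hist.history
        acc = h["val_accuracy"]
        ax.plot(range(1, len(acc) + 1), acc, label=label)
    ax.set_xlabel("epoch")
    ax.set_ylabel("validation accuracy")
    ax.legend()
    return ax
```

Calling `plot_histories([hist1, hist2, hist3], ["run 1", "run 2", "run 3"])` followed by `plt.show()` would plot the three runs logged above; the `hist*` names are placeholders.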
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPool2D, BatchNormalization,
                                     Flatten, Dense)

model = Sequential()
# Block 1: two 64-filter convolutions, then 2x2 max pooling
model.add(Conv2D(input_shape=[224, 224, 3], filters=64, kernel_size=[3, 3],
                 padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))
# Block 2: two 64-filter convolutions with batch normalization
model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))
# Block 3: three 128-filter convolutions with batch normalization
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))
# Block 4: three 256-filter convolutions
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))
# Block 5: three 512-filter convolutions with batch normalization
model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same',
                 activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))
# Classifier head: two 4096-unit dense layers, then a 38-way softmax output
model.add(Flatten())
model.add(Dense(units=4096, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(units=4096, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(units=38, activation="softmax"))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_13 (Conv2D) (None, 224, 224, 64) 1792
=================================================================
Total params: 127,601,254
Trainable params: 127,579,750
Non-trainable params: 21,504
_________________________________________________________________
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
ls
train/ valid/
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The generator definitions are not shown in this excerpt; rescaling the
# pixel values to [0, 1] is an assumption.
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory('train',
                                                 target_size=(224, 224),
                                                 batch_size=512,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory('valid',
                                            target_size=(224, 224),
                                            batch_size=512,
                                            class_mode='categorical')
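The excerpt stops before the training call itself. A minimal sketch (assuming the `model`, `training_set`, and `test_set` defined above; the epoch count is illustrative) would be:

```python
import math

def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    # Round up so the final, possibly partial, batch is still seen each epoch.
    return math.ceil(num_samples / batch_size)

# Hypothetical training call -- `model`, `training_set`, and `test_set`
# come from the cells above:
# model.fit(training_set,
#           steps_per_epoch=steps_per_epoch(training_set.samples, 512),
#           validation_data=test_set,
#           validation_steps=steps_per_epoch(test_set.samples, 512),
#           epochs=10)
```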
Autoencoders are neural network models used for various machine-learning tasks such as
dimensionality reduction, anomaly detection, and image denoising. The following program
demonstrates the application of autoencoders to image denoising using TensorFlow and Keras:
it trains an autoencoder to remove noise from images.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
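The noise-corruption step that produces the training pairs is not shown in this excerpt. A minimal sketch (the `add_noise` helper name and the `noise_factor` value are assumptions, not the notebook's own code):

```python
import numpy as np

def add_noise(images, noise_factor=0.5, seed=0):
    """Corrupt images scaled to [0, 1] with Gaussian noise, then clip the
    result back into [0, 1] so it remains a valid image tensor."""
    rng = np.random.default_rng(seed)
    noisy = images + noise_factor * rng.normal(size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

# The autoencoder is then trained to map add_noise(x_train) back to
# x_train, producing a loss curve like the one below.
```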
Epoch 1/10
469/469 [==============================] - 15s 9ms/step - loss: 0.1716 - val_loss: 0.1197
Epoch 2/10
469/469 [==============================] - 4s 8ms/step - loss: 0.1155 - val_loss: 0.1104
Epoch 3/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1091 - val_loss: 0.1060
Epoch 4/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1056 - val_loss: 0.1033
Epoch 5/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1033 - val_loss: 0.1014
Epoch 6/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1018 - val_loss: 0.1003
Epoch 7/10
469/469 [==============================] - 3s 6ms/step - loss: 0.1007 - val_loss: 0.0992
Epoch 8/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0998 - val_loss: 0.0988
Epoch 9/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0992 - val_loss: 0.0983
Epoch 10/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0986 - val_loss: 0.0976
<keras.callbacks.History at 0x781450f52170>