
SKILL ORIENTED COURSE

DEEP LEARNING TECHNIQUES USING PYTHON

Department of Information Technology

Name _________________________________________ Branch ____________________

Regd No _______________________________________ Year of Study ____________

Subject ___________________________________________________________________

-------------------------------------------------------------------------------------------------------
VIGNAN’S NIRULA INSTITUTE OF TECHNOLOGY AND SCIENCE FOR WOMEN
PEDAPALAKALURU ROAD, GUNTUR-522005.
(Affiliated to JNTU KAKINADA, Kakinada)
2023-2024
VIGNAN’S NIRULA INSTITUTE OF TECHNOLOGY AND SCIENCE FOR WOMEN
PEDAPALAKALURU ROAD, GUNTUR-522005.
(Affiliated to JNTU KAKINADA, Kakinada)

CERTIFICATE

This is to certify that __________________________________________________


bearing the Regd. No ________________________ is a student of _______________ B. Tech
_______________ Semester and has completed _________________ experiments in
_____________________________________ Laboratory during the academic year 2023-2024.

Signature of Head of the Department                Signature of Laboratory In Charge

Signature of
External Examiner
CONTENTS

Exp. No.  Name of the Experiment                               Page No.  Signature

1.   Build CNN for image recognition
2.   ANN to identify age group of an actor
3.   CNN hyperparameter tuning for image recognition
4.   RNN for sequence prediction
5.   MLP for image denoising
6.   Object Detection using YOLO
7.   Robust Bi-Tempered Logistic Loss
8.   AlexNET using advanced CNN
9.   Autoencoders
10.  GAN
11.  Capstone Project


cnn

November 10, 2023

[1]: import tensorflow as tf


from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

[2]: (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz


170498071/170498071 [==============================] - 10s 0us/step

[3]: # Normalize pixel values to be between 0 and 1
     train_images, test_images = train_images / 255.0, test_images / 255.0
     class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                    'dog', 'frog', 'horse', 'ship', 'truck']

[4]: plt.figure(figsize=(8,8))
     for i in range(11):
         plt.subplot(6, 5, i + 1)
         plt.xticks([])
         plt.yticks([])
         plt.grid(False)
         plt.imshow(train_images[i])
         # The CIFAR labels happen to be arrays,
         # which is why you need the extra index
         plt.xlabel(class_names[train_labels[i][0]])
     plt.show()

[5]: model = models.Sequential()
     model.add(layers.Conv2D(32, (3, 3), activation='relu',
                             input_shape=(32, 32, 3)))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.summary()
     model.add(layers.Flatten())
     model.add(layers.Dense(64, activation='relu'))
     model.add(layers.Dense(10))
     model.summary()
     model.compile(optimizer='adam',
                   loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                   metrics=['accuracy'])

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2 (None, 15, 15, 32) 0


D)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

2
max_pooling2d_1 (MaxPoolin (None, 6, 6, 64) 0
g2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2 (None, 15, 15, 32) 0


D)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPoolin (None, 6, 6, 64) 0


g2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

flatten (Flatten) (None, 1024) 0

dense (Dense) (None, 64) 65600

dense_1 (Dense) (None, 10) 650

=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
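As a sanity check, the parameter counts in the summary can be reproduced by hand. A minimal sketch of the arithmetic (plain Python, no Keras calls):

# Conv2D params = (kernel_h * kernel_w * in_channels + 1 bias) * filters
conv2d   = (3 * 3 * 3  + 1) * 32    # 896
conv2d_1 = (3 * 3 * 32 + 1) * 64    # 18496
conv2d_2 = (3 * 3 * 64 + 1) * 64    # 36928

# Dense params = (inputs + 1 bias) * units; Flatten yields 4 * 4 * 64 = 1024
dense   = (1024 + 1) * 64           # 65600
dense_1 = (64 + 1) * 10             # 650

print(conv2d + conv2d_1 + conv2d_2 + dense + dense_1)   # 122570, as reported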

[6]: history = model.fit(train_images, train_labels, epochs=5,
                         validation_data=(test_images, test_labels))
     plt.plot(history.history['accuracy'], label='accuracy')
     plt.plot(history.history['val_accuracy'], label='val_accuracy')
     plt.xlabel('Epoch')
     plt.ylabel('Accuracy')
     plt.ylim([0.5, 1])
     plt.legend(loc='lower right')
     test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
     print(test_acc)

Epoch 1/5
1563/1563 [==============================] - 78s 49ms/step - loss: 1.5272 -
accuracy: 0.4397 - val_loss: 1.2652 - val_accuracy: 0.5497
Epoch 2/5
1563/1563 [==============================] - 72s 46ms/step - loss: 1.1659 -
accuracy: 0.5832 - val_loss: 1.0670 - val_accuracy: 0.6209
Epoch 3/5
1563/1563 [==============================] - 68s 44ms/step - loss: 1.0325 -
accuracy: 0.6364 - val_loss: 1.0053 - val_accuracy: 0.6485
Epoch 4/5
1563/1563 [==============================] - 68s 43ms/step - loss: 0.9521 -
accuracy: 0.6652 - val_loss: 0.9844 - val_accuracy: 0.6575
Epoch 5/5
1563/1563 [==============================] - 70s 45ms/step - loss: 0.8847 -
accuracy: 0.6904 - val_loss: 0.9195 - val_accuracy: 0.6846
313/313 - 3s - loss: 0.9195 - accuracy: 0.6846 - 3s/epoch - 11ms/step
0.6845999956130981
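Because the final Dense layer emits raw logits (the loss was built with from_logits=True), predictions need a softmax before they can be read as class probabilities. A minimal sketch:

# Convert logits to probabilities for the first few test images
probs = tf.nn.softmax(model.predict(test_images[:5]), axis=-1).numpy()
for p in probs:
    print(class_names[p.argmax()], '(%.2f)' % p.max())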

import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import utils
from keras.models import Sequential
from keras.layers import Dense, Flatten, InputLayer
import imageio.v2 as imageio  # v2 API; avoids the ImageIO v3 deprecation warning
from PIL import Image  # for image resizing

!unzip agedetectiontrain.zip
!unzip agedetectiontest.zip

train = pd.read_csv('/content/train.csv')
test = pd.read_csv('/content/test.csv')

# Once both data sets are read successfully, we can display a random movie
# character along with their age group to verify the ID against the Class
# value, as shown below:
np.random.seed(125)
idx = np.random.choice(train.index)
img_name = train.ID[idx]
img = imageio.imread(os.path.join('/content/Train', img_name))
print('Age group:', train.Class[idx])
plt.imshow(img)
plt.axis('off')
plt.show()


Age group: MIDDLE


# Next, we transform the data sets after resizing all the images
# to a size of 32 x 32 x 3.

# Let us reshape and transform the training data first, as shown below:
temp = []
for img_name in train.ID:
    img_path = os.path.join('Train', img_name)
    img = imageio.imread(img_path)
    img = np.array(Image.fromarray(img).resize((32, 32))).astype('float32')
    temp.append(img)
train_x = np.stack(temp)


# Next, let us reshape and transform the testing data, as shown below:
temp = []
for img_name in test.ID:
    img_path = os.path.join('Test', img_name)
    img = imageio.imread(img_path)
    img = np.array(Image.fromarray(img).resize((32, 32))).astype('float32')
    temp.append(img)
test_x = np.stack(temp)


# Normalizing the images


train_x = train_x / 255.
test_x = test_x / 255.

# Encoding the categorical variable to numeric


lb = LabelEncoder()
train_y = lb.fit_transform(train.Class)
train_y = utils.to_categorical(train_y)
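For intuition, here is what the two encoding steps produce, assuming the three age groups in train.Class are MIDDLE, OLD and YOUNG (only MIDDLE is confirmed by the output above): LabelEncoder maps the strings to integers in alphabetical order, and to_categorical one-hot encodes those integers.

demo = LabelEncoder()
ints = demo.fit_transform(['MIDDLE', 'OLD', 'YOUNG'])   # [0, 1, 2]
print(utils.to_categorical(ints))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]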

#Specifying all the parameters we will be using in our network


input_num_units = (32, 32, 3)
hidden_num_units = 500
output_num_units = 3
epochs = 5
batch_size = 128

# Next, let us define a network with one input layer, one hidden layer,
# and one output layer, as shown below:
model = Sequential([
    InputLayer(input_shape=input_num_units),
    Flatten(),
    Dense(units=hidden_num_units, activation='relu'),
    Dense(units=output_num_units, activation='softmax'),
])

# Printing model summary


model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 3072) 0

dense (Dense) (None, 500) 1536500

dense_1 (Dense) (None, 3) 1503

=================================================================
Total params: 1,538,003
Trainable params: 1,538,003
Non-trainable params: 0
_________________________________________________________________

# Compiling and Training Network


model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Now, let us train the model using the fit() method:

model.fit(train_x, train_y, batch_size=batch_size, epochs=epochs,
          verbose=1)

Epoch 1/5
156/156 [==============================] - 5s 30ms/step - loss: 0.9060
- accuracy: 0.5631
Epoch 2/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8446
- accuracy: 0.6030
Epoch 3/5
156/156 [==============================] - 4s 25ms/step - loss: 0.8277
- accuracy: 0.6151
Epoch 4/5
156/156 [==============================] - 5s 32ms/step - loss: 0.8160
- accuracy: 0.6234
Epoch 5/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8053
- accuracy: 0.6296

<keras.callbacks.History at 0x79d9ae713550>

# The following code holds out 20 percent of the training data as a
# validation data set:

# Training model along with validation data
model.fit(train_x, train_y, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_split=0.2)

Epoch 1/5
125/125 [==============================] - 7s 57ms/step - loss: 0.8004
- accuracy: 0.6323 - val_loss: 0.7988 - val_accuracy: 0.6339
Epoch 2/5
125/125 [==============================] - 5s 44ms/step - loss: 0.7945
- accuracy: 0.6373 - val_loss: 0.7961 - val_accuracy: 0.6306
Epoch 3/5
125/125 [==============================] - 4s 36ms/step - loss: 0.7886
- accuracy: 0.6377 - val_loss: 0.7750 - val_accuracy: 0.6472
Epoch 4/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7827
- accuracy: 0.6454 - val_loss: 0.7748 - val_accuracy: 0.6504
Epoch 5/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7814
- accuracy: 0.6434 - val_loss: 0.7712 - val_accuracy: 0.6570
<keras.callbacks.History at 0x79d9d0978550>

# Predicting and exporting the results to a CSV file

# pred = model.predict_classes(test_x)  # removed in modern Keras; use argmax instead
pred = np.argmax(model.predict(test_x), axis=1)
pred = lb.inverse_transform(pred)
test['Class'] = pred
test.to_csv('out.csv', index=False)

208/208 [==============================] - 1s 5ms/step

# Visual inspection of predictions

idx = 2481
img_name = test.ID[idx]
img = imageio.imread(os.path.join('/content/Test', img_name))
plt.imshow(np.array(Image.fromarray(img).resize((128, 128))))
pred = np.argmax(model.predict(test_x), axis=1)
# Note: the test set ships without ground-truth labels, so this compares
# against the training label at the same index; treat it as a rough sanity
# check only.
print('Original:', train.Class[idx], 'Predicted:',
      lb.inverse_transform([pred[idx]]))


208/208 [==============================] - 2s 9ms/step


Original: MIDDLE Predicted: ['MIDDLE']
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 3s 0us/step

# Normalize pixel values to be between 0 and 1

train_images, test_images = train_images / 255.0, test_images / 255.0
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(8,8))
for i in range(11):
    plt.subplot(6, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2 (None, 15, 15, 32) 0


D)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPoolin (None, 6, 6, 64) 0


g2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2 (None, 15, 15, 32) 0


D)

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPoolin (None, 6, 6, 64) 0


g2D)

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928


flatten (Flatten) (None, 1024) 0

dense (Dense) (None, 64) 65600

dense_1 (Dense) (None, 10) 650

=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

history = model.fit(train_images, train_labels, epochs=5,
                    validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)

Epoch 1/5
1563/1563 [==============================] - 80s 50ms/step - loss:
1.5439 - accuracy: 0.4401 - val_loss: 1.2851 - val_accuracy: 0.5487
Epoch 2/5
1563/1563 [==============================] - 72s 46ms/step - loss:
1.1840 - accuracy: 0.5778 - val_loss: 1.0758 - val_accuracy: 0.6210
Epoch 3/5
1563/1563 [==============================] - 71s 45ms/step - loss:
1.0403 - accuracy: 0.6328 - val_loss: 1.0632 - val_accuracy: 0.6348
Epoch 4/5
1563/1563 [==============================] - 71s 45ms/step - loss:
0.9461 - accuracy: 0.6668 - val_loss: 0.9557 - val_accuracy: 0.6647
Epoch 5/5
1563/1563 [==============================] - 72s 46ms/step - loss:
0.8781 - accuracy: 0.6920 - val_loss: 0.9039 - val_accuracy: 0.6878
313/313 - 6s - loss: 0.9039 - accuracy: 0.6878 - 6s/epoch - 18ms/step
0.6877999901771545
history = model.fit(train_images, train_labels, epochs=25,
                    validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)

Epoch 1/25
1563/1563 [==============================] - 82s 52ms/step - loss:
0.8176 - accuracy: 0.7149 - val_loss: 0.8911 - val_accuracy: 0.6933
Epoch 2/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.7691 - accuracy: 0.7296 - val_loss: 0.8554 - val_accuracy: 0.7091
Epoch 3/25
1563/1563 [==============================] - 71s 45ms/step - loss:
0.7314 - accuracy: 0.7421 - val_loss: 0.8980 - val_accuracy: 0.6951
Epoch 4/25
1563/1563 [==============================] - 74s 48ms/step - loss:
0.6924 - accuracy: 0.7572 - val_loss: 0.8811 - val_accuracy: 0.7007
Epoch 5/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.6575 - accuracy: 0.7704 - val_loss: 0.8910 - val_accuracy: 0.7031
Epoch 6/25
1563/1563 [==============================] - 73s 46ms/step - loss:
0.6268 - accuracy: 0.7810 - val_loss: 0.9326 - val_accuracy: 0.6919
Epoch 7/25
1563/1563 [==============================] - 71s 45ms/step - loss:
0.5953 - accuracy: 0.7911 - val_loss: 0.9428 - val_accuracy: 0.7029
Epoch 8/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.5670 - accuracy: 0.8001 - val_loss: 0.9305 - val_accuracy: 0.6983
Epoch 9/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.5401 - accuracy: 0.8090 - val_loss: 0.9055 - val_accuracy: 0.7093
Epoch 10/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.5195 - accuracy: 0.8162 - val_loss: 0.9583 - val_accuracy: 0.7046
Epoch 11/25
1563/1563 [==============================] - 73s 46ms/step - loss:
0.4877 - accuracy: 0.8275 - val_loss: 0.9495 - val_accuracy: 0.7113
Epoch 12/25
1563/1563 [==============================] - 71s 46ms/step - loss:
0.4647 - accuracy: 0.8364 - val_loss: 0.9852 - val_accuracy: 0.7002
Epoch 13/25
1563/1563 [==============================] - 70s 45ms/step - loss:
0.4471 - accuracy: 0.8411 - val_loss: 1.0472 - val_accuracy: 0.6985
Epoch 14/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.4204 - accuracy: 0.8507 - val_loss: 1.0665 - val_accuracy: 0.6970
Epoch 15/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.4031 - accuracy: 0.8572 - val_loss: 1.0335 - val_accuracy: 0.7113
Epoch 16/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3815 - accuracy: 0.8638 - val_loss: 1.0957 - val_accuracy: 0.7042
Epoch 17/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3664 - accuracy: 0.8680 - val_loss: 1.1560 - val_accuracy: 0.6975
Epoch 18/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3442 - accuracy: 0.8768 - val_loss: 1.2238 - val_accuracy: 0.6899
Epoch 19/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3341 - accuracy: 0.8812 - val_loss: 1.2090 - val_accuracy: 0.7040
Epoch 20/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.3114 - accuracy: 0.8881 - val_loss: 1.2891 - val_accuracy: 0.6945
Epoch 21/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.3003 - accuracy: 0.8929 - val_loss: 1.2742 - val_accuracy: 0.7007
Epoch 22/25
1563/1563 [==============================] - 75s 48ms/step - loss:
0.2873 - accuracy: 0.8966 - val_loss: 1.3051 - val_accuracy: 0.6978
Epoch 23/25
1563/1563 [==============================] - 72s 46ms/step - loss:
0.2718 - accuracy: 0.9010 - val_loss: 1.3597 - val_accuracy: 0.6971
Epoch 24/25
1563/1563 [==============================] - 73s 47ms/step - loss:
0.2608 - accuracy: 0.9064 - val_loss: 1.3995 - val_accuracy: 0.7030
Epoch 25/25
1563/1563 [==============================] - 74s 47ms/step - loss:
0.2517 - accuracy: 0.9094 - val_loss: 1.4640 - val_accuracy: 0.6935
313/313 - 5s - loss: 1.4640 - accuracy: 0.6935 - 5s/epoch - 15ms/step
0.6934999823570251
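Note what the longer run reveals: training accuracy climbs to about 0.91 while validation accuracy stays near 0.70 and validation loss rises from roughly 0.89 to 1.46, a classic overfitting pattern. A common next tuning step is regularization; the sketch below adds Dropout layers (an illustrative choice, not the manual's prescribed configuration):

tuned = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10),
])
tuned.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])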
sequence-prediction

November 10, 2023

[ ]: import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

[ ]: with open('/content/ABC.txt', 'r', encoding='utf-8') as file:
         text = file.read()

[ ]: tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
total_words = len(tokenizer.word_index) + 1

[ ]: input_sequences = []
     for line in text.split('\n'):
         token_list = tokenizer.texts_to_sequences([line])[0]
         for i in range(1, len(token_list)):
             n_gram_sequence = token_list[:i+1]
             input_sequences.append(n_gram_sequence)
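To see what this loop produces, consider a hypothetical one-line corpus (illustrative only; ABC.txt's real contents are not shown here):

demo_tok = Tokenizer()
demo_tok.fit_on_texts(['i am renuka'])
seq = demo_tok.texts_to_sequences(['i am renuka'])[0]    # e.g. [1, 2, 3]
print([seq[:i + 1] for i in range(1, len(seq))])         # [[1, 2], [1, 2, 3]]
# Every growing prefix becomes one training sample; after padding, the last
# token of each prefix is the prediction target.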

[ ]: max_sequence_len = max([len(seq) for seq in input_sequences])
     input_sequences = np.array(pad_sequences(input_sequences,
                                              maxlen=max_sequence_len, padding='pre'))

[ ]: X = input_sequences[:, :-1]
y = input_sequences[:, -1]

[ ]: y = np.array(tf.keras.utils.to_categorical(y, num_classes=total_words))

[ ]: model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))
model.add(LSTM(150))
model.add(Dense(total_words, activation='softmax'))
print(model.summary())

Model: "sequential"
_________________________________________________________________

1
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 12, 100) 1300

lstm (LSTM) (None, 150) 150600

dense (Dense) (None, 13) 1963

=================================================================
Total params: 153863 (601.03 KB)
Trainable params: 153863 (601.03 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None

[ ]: model.compile(loss='categorical_crossentropy', optimizer='adam',
                   metrics=['accuracy'])
     model.fit(X, y, epochs=10, verbose=1)

Epoch 1/10
1/1 [==============================] - 2s 2s/step - loss: 2.5643 - accuracy:
0.1667
Epoch 2/10
1/1 [==============================] - 0s 21ms/step - loss: 2.5531 - accuracy:
0.4167
Epoch 3/10
1/1 [==============================] - 0s 22ms/step - loss: 2.5419 - accuracy:
0.3333
Epoch 4/10
1/1 [==============================] - 0s 20ms/step - loss: 2.5303 - accuracy:
0.4167
Epoch 5/10
1/1 [==============================] - 0s 26ms/step - loss: 2.5178 - accuracy:
0.3333
Epoch 6/10
1/1 [==============================] - 0s 22ms/step - loss: 2.5041 - accuracy:
0.4167
Epoch 7/10
1/1 [==============================] - 0s 21ms/step - loss: 2.4885 - accuracy:
0.4167
Epoch 8/10
1/1 [==============================] - 0s 28ms/step - loss: 2.4705 - accuracy:
0.4167
Epoch 9/10
1/1 [==============================] - 0s 24ms/step - loss: 2.4492 - accuracy:
0.4167
Epoch 10/10
1/1 [==============================] - 0s 30ms/step - loss: 2.4237 - accuracy:
0.2500

[ ]: <keras.src.callbacks.History at 0x78ebff570c10>

[ ]: seed_text = "hello there!"
     next_words = 40

     for _ in range(next_words):
         token_list = tokenizer.texts_to_sequences([seed_text])[0]
         token_list = pad_sequences([token_list], maxlen=max_sequence_len-1,
                                    padding='pre')
         predicted = np.argmax(model.predict(token_list), axis=-1)
         output_word = ""
         for word, index in tokenizer.word_index.items():
             if index == predicted:
                 output_word = word
                 break
         seed_text += " " + output_word

     print(seed_text)

1/1 [==============================] - 1s 514ms/step


1/1 [==============================] - 0s 22ms/step
1/1 [==============================] - 0s 26ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 25ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 25ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 22ms/step
1/1 [==============================] - 0s 20ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 30ms/step
1/1 [==============================] - 0s 22ms/step

1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 20ms/step
1/1 [==============================] - 0s 25ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 20ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 23ms/step
1/1 [==============================] - 0s 21ms/step
1/1 [==============================] - 0s 21ms/step
hello there! cse cse cse c c renuka cse c am renuka from c am am 4th
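A side note on the lookup: the inner for-loop scans word_index once per generated word. Keras tokenizers also expose the reverse map directly, so inside the loop above one could write (a sketch):

predicted_id = int(np.argmax(model.predict(token_list), axis=-1)[0])
output_word = tokenizer.index_word.get(predicted_id, '')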

!git clone https://github.com/surmenok/keras_lr_finder.git keras_lr_finder_repo
!mv keras_lr_finder_repo/keras_lr_finder keras_lr_finder
!git clone https://github.com/google/bi-tempered-loss.git
!mv bi-tempered-loss/tensorflow/loss.py loss.py
!rm -r keras_lr_finder_repo bi-tempered-loss

Cloning into 'keras_lr_finder_repo'...
remote: Enumerating objects: 68, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 68 (delta 1), reused 2 (delta 0), pack-reused 62
Cloning into 'bi-tempered-loss'...
remote: Enumerating objects: 152, done.
remote: Counting objects: 100% (30/30), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 152 (delta 15), reused 11 (delta 5), pack-reused 122

!git clone https://github.com/DumalaAnveshini/bitempered.git

from bitempered.utils import generate_points, mix_cifar, mix_points, points_to_data, get_best_lr

Cloning into 'bitempered'...
remote: Enumerating objects: 8, done.
remote: Counting objects: 100% (8/8), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0

from bitempered.plotting import plot_cifar, plot_points, plot_synthetic_results, plot_cifar_results, plot_lr_finding_results

from loss import _internal_bi_tempered_logistic_loss as bi_tempered_logistic_loss
from keras_lr_finder.lr_finder import LRFinder

import ssl

import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications import resnet50
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.layers import Dense, Flatten, InputLayer, UpSampling2D
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.utils import to_categorical

Bi-tempered logistic loss analysis

Performance

Standard logistic loss:

cce = CategoricalCrossentropy(from_logits=False)
%timeit cce(np.random.rand(1000, 10).astype(np.float32),
            np.random.rand(1000, 10).astype(np.float32))

2.11 ms ± 86.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Bi-tempered logistic loss:

%timeit bi_tempered_logistic_loss(np.random.rand(1000, 10).astype(np.float32),
                                  np.random.rand(1000, 10).astype(np.float32),
                                  0.5, 2.)

9.54 ms ± 100 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
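The extra cost buys heavier-tailed behavior. Per the bi-tempered loss paper, the loss is built from tempered generalizations of log and exp; a minimal NumPy sketch of the two primitives, valid for t != 1 (both reduce to the ordinary log/exp as t approaches 1):

def log_t(x, t):
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))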

Wrapper
Let's define a loss function wrapper that will be passed to Keras during training. The
wrapper defaults to standard logistic loss if both temperatures are set to 1; otherwise
it uses the bi-tempered version.

class BiTemperedWrapper:
    def __init__(self, t1=1., t2=1.):
        self.t1 = t1
        self.t2 = t2
        self.cce = CategoricalCrossentropy(from_logits=True)

    def __call__(self, labels, activations):
        if self.t1 == 1. and self.t2 == 1.:
            return self.cce(labels, activations)
        return bi_tempered_logistic_loss(activations, labels, self.t1, self.t2)

Synthetic dataset
The synthetic dataset is a reproduction of the one presented in the original paper. It consists of
two rings of points separated by a margin.
blue_train = generate_points(1200, 0, 5)
red_train = generate_points(1200, 6, 8)
blue_train_margin_noise, red_train_margin_noise = np.copy(blue_train), np.copy(red_train)
blue_train_global_noise, red_train_global_noise = np.copy(blue_train), np.copy(red_train)

mix_points(blue_train_margin_noise, red_train_margin_noise, 4, 7, 0.2)
mix_points(blue_train_global_noise, red_train_global_noise, 0, 8, 0.2)

plt.figure(figsize=(17, 5))
plt.subplot(1, 3, 1)
plt.title("Training data\nwithout noise")
plot_points([blue_train, red_train], 8)

plt.subplot(1, 3, 2)
plt.title("Training data\n20% margin noise")
plot_points([blue_train_margin_noise, red_train_margin_noise], 8)

plt.subplot(1, 3, 3)
plt.title("Training data\n20% global noise")
plot_points([blue_train_global_noise, red_train_global_noise], 8)
plt.show()

plt.figure(figsize=(6, 5))
blue_valid = generate_points(300, 0, 5)
red_valid = generate_points(300, 6, 8)
plt.title("Validation data")
plot_points([blue_valid, red_valid], 8)
plt.show()
The data has to be transformed prior to being passed to the model. The labels are converted into
numerical values (0 or 1) and one-hot encoded.

X_train, y_train = points_to_data(blue_train, red_train)
X_train_margin, y_train_margin = points_to_data(blue_train_margin_noise, red_train_margin_noise)
X_train_global, y_train_global = points_to_data(blue_train_global_noise, red_train_global_noise)

X_valid, y_valid = points_to_data(blue_valid, red_valid)
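points_to_data ships with the cloned bitempered repository. A plausible minimal equivalent, shown only to make the data layout concrete (an assumption, not the repo's actual code):

def points_to_data_sketch(blue, red):
    X = np.concatenate([blue, red]).astype(np.float32)
    y = np.concatenate([np.zeros(len(blue)), np.ones(len(red))])   # blue = 0, red = 1
    return X, to_categorical(y, num_classes=2)                     # one-hot labels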

Now, we're going to define our model. It's a simple fully connected network with two hidden
layers.

def synthetic_model():
    return Sequential([
        InputLayer(input_shape=X_train[0].shape),
        Dense(32, activation='relu'),
        Dense(32, activation='relu'),
        Dense(2)
    ])
The network is trained on all three datasets (without noise, with margin noise, and with
global noise) for two different sets of temperatures: (1.0, 1.0), tantamount to using
standard logistic loss, and (0.2, 4.0) for the bi-tempered alternative.

datasets = {'noise free': (X_train, y_train),
            'margin': (X_train_margin, y_train_margin),
            'global': (X_train_global, y_train_global)}
temp_vals = [(1., 1.), (0.2, 4.0)]
results = {}
for dataset_name, (X_data, y_data) in datasets.items():
    results[dataset_name] = {}
    for temps in temp_vals:
        results[dataset_name][temps] = {}
        model = synthetic_model()
        model.compile(loss=BiTemperedWrapper(*temps),
                      optimizer=Adam(), metrics=['accuracy'])
        history = model.fit(X_data, y_data, validation_data=(X_valid, y_valid),
                            epochs=20)
        results[dataset_name][temps]['model'] = model
        results[dataset_name][temps]['history'] = history.history

Epoch 1/20
75/75 [==============================] - 2s 9ms/step - loss: 0.5583 -
accuracy: 0.6183 - val_loss: 0.5046 - val_accuracy: 0.7200
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.4416 -
accuracy: 0.7958 - val_loss: 0.3621 - val_accuracy: 0.8833
Epoch 3/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2902 -
accuracy: 0.9112 - val_loss: 0.2183 - val_accuracy: 0.9400
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1723 -
accuracy: 0.9629 - val_loss: 0.1351 - val_accuracy: 0.9583
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1103 -
accuracy: 0.9821 - val_loss: 0.0868 - val_accuracy: 0.9983
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0765 -
accuracy: 0.9937 - val_loss: 0.0606 - val_accuracy: 0.9967
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0553 -
accuracy: 0.9979 - val_loss: 0.0464 - val_accuracy: 0.9967
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0428 -
accuracy: 0.9992 - val_loss: 0.0345 - val_accuracy: 1.0000
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0354 -
accuracy: 0.9992 - val_loss: 0.0270 - val_accuracy: 1.0000
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0279 -
accuracy: 1.0000 - val_loss: 0.0218 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0226 -
accuracy: 1.0000 - val_loss: 0.0189 - val_accuracy: 1.0000
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0187 -
accuracy: 0.9996 - val_loss: 0.0161 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0156 -
accuracy: 1.0000 - val_loss: 0.0140 - val_accuracy: 1.0000
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0131 -
accuracy: 1.0000 - val_loss: 0.0113 - val_accuracy: 1.0000
Epoch 15/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0114 -
accuracy: 1.0000 - val_loss: 0.0091 - val_accuracy: 1.0000
Epoch 16/20
75/75 [==============================] - 0s 2ms/step - loss: 0.0092 -
accuracy: 1.0000 - val_loss: 0.0077 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0081 -
accuracy: 1.0000 - val_loss: 0.0067 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0072 -
accuracy: 1.0000 - val_loss: 0.0061 - val_accuracy: 1.0000
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0065 -
accuracy: 1.0000 - val_loss: 0.0052 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0055 -
accuracy: 1.0000 - val_loss: 0.0044 - val_accuracy: 1.0000
Epoch 1/20
75/75 [==============================] - 4s 11ms/step - loss: 0.2790 -
accuracy: 0.4967 - val_loss: 0.2615 - val_accuracy: 0.5350
Epoch 2/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2527 -
accuracy: 0.6004 - val_loss: 0.2445 - val_accuracy: 0.6633
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2303 -
accuracy: 0.7342 - val_loss: 0.2137 - val_accuracy: 0.8017
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1925 -
accuracy: 0.8746 - val_loss: 0.1725 - val_accuracy: 0.9000
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1559 -
accuracy: 0.9350 - val_loss: 0.1406 - val_accuracy: 0.9517
Epoch 6/20
75/75 [==============================] - 0s 4ms/step - loss: 0.1288 -
accuracy: 0.9683 - val_loss: 0.1177 - val_accuracy: 0.9667
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1086 -
accuracy: 0.9862 - val_loss: 0.1010 - val_accuracy: 0.9700
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0929 -
accuracy: 0.9917 - val_loss: 0.0862 - val_accuracy: 0.9967
Epoch 9/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0826 -
accuracy: 0.9958 - val_loss: 0.0772 - val_accuracy: 0.9983
Epoch 10/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0744 -
accuracy: 0.9983 - val_loss: 0.0701 - val_accuracy: 0.9983
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0677 -
accuracy: 0.9992 - val_loss: 0.0643 - val_accuracy: 0.9983
Epoch 12/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0626 -
accuracy: 0.9996 - val_loss: 0.0593 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0580 -
accuracy: 0.9992 - val_loss: 0.0554 - val_accuracy: 0.9983
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0542 -
accuracy: 0.9996 - val_loss: 0.0519 - val_accuracy: 1.0000
Epoch 15/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0509 -
accuracy: 1.0000 - val_loss: 0.0489 - val_accuracy: 1.0000
Epoch 16/20
75/75 [==============================] - 0s 5ms/step - loss: 0.0481 -
accuracy: 1.0000 - val_loss: 0.0462 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0454 -
accuracy: 1.0000 - val_loss: 0.0439 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0433 -
accuracy: 1.0000 - val_loss: 0.0418 - val_accuracy: 1.0000
Epoch 19/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0413 -
accuracy: 1.0000 - val_loss: 0.0401 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0396 -
accuracy: 1.0000 - val_loss: 0.0386 - val_accuracy: 0.9983
Epoch 1/20
75/75 [==============================] - 1s 6ms/step - loss: 0.5866 -
accuracy: 0.6121 - val_loss: 0.5048 - val_accuracy: 0.7417
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.4486 -
accuracy: 0.8129 - val_loss: 0.3604 - val_accuracy: 0.8900
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.3186 -
accuracy: 0.9104 - val_loss: 0.2374 - val_accuracy: 0.8983
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2247 -
accuracy: 0.9371 - val_loss: 0.1598 - val_accuracy: 0.9417
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1812 -
accuracy: 0.9454 - val_loss: 0.1156 - val_accuracy: 0.9917
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1544 -
accuracy: 0.9563 - val_loss: 0.0883 - val_accuracy: 0.9900
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1438 -
accuracy: 0.9583 - val_loss: 0.0812 - val_accuracy: 1.0000
Epoch 8/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1390 -
accuracy: 0.9579 - val_loss: 0.0687 - val_accuracy: 1.0000
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1343 -
accuracy: 0.9604 - val_loss: 0.0675 - val_accuracy: 0.9900
Epoch 10/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1335 -
accuracy: 0.9575 - val_loss: 0.0579 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1321 -
accuracy: 0.9608 - val_loss: 0.0548 - val_accuracy: 0.9967
Epoch 12/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1357 -
accuracy: 0.9575 - val_loss: 0.0541 - val_accuracy: 1.0000
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1336 -
accuracy: 0.9592 - val_loss: 0.0494 - val_accuracy: 0.9983
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1307 -
accuracy: 0.9592 - val_loss: 0.0487 - val_accuracy: 0.9967
Epoch 15/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1289 -
accuracy: 0.9604 - val_loss: 0.0469 - val_accuracy: 0.9967
Epoch 16/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1303 -
accuracy: 0.9625 - val_loss: 0.0496 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1285 -
accuracy: 0.9600 - val_loss: 0.0441 - val_accuracy: 1.0000
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1296 -
accuracy: 0.9575 - val_loss: 0.0492 - val_accuracy: 0.9967
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1250 -
accuracy: 0.9617 - val_loss: 0.0446 - val_accuracy: 0.9983
Epoch 20/20
75/75 [==============================] - 0s 2ms/step - loss: 0.1256 -
accuracy: 0.9600 - val_loss: 0.0413 - val_accuracy: 0.9983
Epoch 1/20
75/75 [==============================] - 5s 12ms/step - loss: 0.2631 -
accuracy: 0.5533 - val_loss: 0.2520 - val_accuracy: 0.5967
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2430 -
accuracy: 0.6767 - val_loss: 0.2251 - val_accuracy: 0.7650
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2074 -
accuracy: 0.8429 - val_loss: 0.1798 - val_accuracy: 0.8850
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1647 -
accuracy: 0.9162 - val_loss: 0.1382 - val_accuracy: 0.9567
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1347 -
accuracy: 0.9458 - val_loss: 0.1130 - val_accuracy: 0.9767
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1169 -
accuracy: 0.9504 - val_loss: 0.0968 - val_accuracy: 0.9917
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1057 -
accuracy: 0.9546 - val_loss: 0.0850 - val_accuracy: 0.9967
Epoch 8/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0972 -
accuracy: 0.9579 - val_loss: 0.0762 - val_accuracy: 0.9983
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0912 -
accuracy: 0.9575 - val_loss: 0.0705 - val_accuracy: 0.9967
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0866 -
accuracy: 0.9579 - val_loss: 0.0648 - val_accuracy: 1.0000
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0828 -
accuracy: 0.9600 - val_loss: 0.0613 - val_accuracy: 1.0000
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0803 -
accuracy: 0.9596 - val_loss: 0.0580 - val_accuracy: 0.9983
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0777 -
accuracy: 0.9592 - val_loss: 0.0542 - val_accuracy: 1.0000
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0748 -
accuracy: 0.9613 - val_loss: 0.0536 - val_accuracy: 0.9933
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0733 -
accuracy: 0.9600 - val_loss: 0.0498 - val_accuracy: 0.9983
Epoch 16/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0718 -
accuracy: 0.9600 - val_loss: 0.0479 - val_accuracy: 1.0000
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.0705 -
accuracy: 0.9592 - val_loss: 0.0470 - val_accuracy: 0.9983
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0681 -
accuracy: 0.9629 - val_loss: 0.0511 - val_accuracy: 0.9783
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0689 -
accuracy: 0.9571 - val_loss: 0.0431 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.0666 -
accuracy: 0.9604 - val_loss: 0.0415 - val_accuracy: 1.0000
Epoch 1/20
75/75 [==============================] - 1s 5ms/step - loss: 0.6941 -
accuracy: 0.5292 - val_loss: 0.5760 - val_accuracy: 0.6067
Epoch 2/20
75/75 [==============================] - 0s 2ms/step - loss: 0.6213 -
accuracy: 0.6642 - val_loss: 0.5109 - val_accuracy: 0.8433
Epoch 3/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5847 -
accuracy: 0.7179 - val_loss: 0.4254 - val_accuracy: 0.8900
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5547 -
accuracy: 0.7487 - val_loss: 0.3641 - val_accuracy: 0.9517
Epoch 5/20
75/75 [==============================] - 0s 2ms/step - loss: 0.5442 -
accuracy: 0.7767 - val_loss: 0.3299 - val_accuracy: 0.9417
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5376 -
accuracy: 0.7725 - val_loss: 0.3217 - val_accuracy: 0.9583
Epoch 7/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5336 -
accuracy: 0.7812 - val_loss: 0.3145 - val_accuracy: 0.9767
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5314 -
accuracy: 0.7821 - val_loss: 0.3073 - val_accuracy: 0.9833
Epoch 9/20
75/75 [==============================] - 0s 2ms/step - loss: 0.5304 -
accuracy: 0.7908 - val_loss: 0.3173 - val_accuracy: 0.9933
Epoch 10/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5310 -
accuracy: 0.7867 - val_loss: 0.2976 - val_accuracy: 0.9833
Epoch 11/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5286 -
accuracy: 0.7917 - val_loss: 0.2768 - val_accuracy: 0.9450
Epoch 12/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5255 -
accuracy: 0.7850 - val_loss: 0.2892 - val_accuracy: 0.9850
Epoch 13/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5258 -
accuracy: 0.7892 - val_loss: 0.2890 - val_accuracy: 0.9883
Epoch 14/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5283 -
accuracy: 0.7900 - val_loss: 0.2840 - val_accuracy: 0.9733
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5203 -
accuracy: 0.7917 - val_loss: 0.2802 - val_accuracy: 0.9833
Epoch 16/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5219 -
accuracy: 0.7908 - val_loss: 0.2845 - val_accuracy: 0.9917
Epoch 17/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5222 -
accuracy: 0.7921 - val_loss: 0.2642 - val_accuracy: 0.9667
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5283 -
accuracy: 0.7900 - val_loss: 0.2680 - val_accuracy: 0.9700
Epoch 19/20
75/75 [==============================] - 0s 4ms/step - loss: 0.5233 -
accuracy: 0.7867 - val_loss: 0.2659 - val_accuracy: 0.9850
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.5229 -
accuracy: 0.7875 - val_loss: 0.2710 - val_accuracy: 0.9667
Epoch 1/20
75/75 [==============================] - 4s 11ms/step - loss: 0.2855 -
accuracy: 0.5063 - val_loss: 0.2636 - val_accuracy: 0.5550
Epoch 2/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2767 -
accuracy: 0.5692 - val_loss: 0.2527 - val_accuracy: 0.6517
Epoch 3/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2689 -
accuracy: 0.6212 - val_loss: 0.2378 - val_accuracy: 0.7733
Epoch 4/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2586 -
accuracy: 0.6954 - val_loss: 0.2180 - val_accuracy: 0.8367
Epoch 5/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2474 -
accuracy: 0.7267 - val_loss: 0.1972 - val_accuracy: 0.8850
Epoch 6/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2375 -
accuracy: 0.7487 - val_loss: 0.1806 - val_accuracy: 0.9517
Epoch 7/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2293 -
accuracy: 0.7671 - val_loss: 0.1625 - val_accuracy: 0.9433
Epoch 8/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2227 -
accuracy: 0.7767 - val_loss: 0.1500 - val_accuracy: 0.9750
Epoch 9/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2170 -
accuracy: 0.7900 - val_loss: 0.1392 - val_accuracy: 0.9600
Epoch 10/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2136 -
accuracy: 0.7892 - val_loss: 0.1299 - val_accuracy: 0.9917
Epoch 11/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2101 -
accuracy: 0.7933 - val_loss: 0.1217 - val_accuracy: 0.9950
Epoch 12/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2071 -
accuracy: 0.7954 - val_loss: 0.1125 - val_accuracy: 0.9867
Epoch 13/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2047 -
accuracy: 0.7967 - val_loss: 0.1073 - val_accuracy: 0.9950
Epoch 14/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2030 -
accuracy: 0.7975 - val_loss: 0.1026 - val_accuracy: 0.9983
Epoch 15/20
75/75 [==============================] - 0s 4ms/step - loss: 0.2015 -
accuracy: 0.8000 - val_loss: 0.0978 - val_accuracy: 0.9983
Epoch 16/20
75/75 [==============================] - 0s 3ms/step - loss: 0.2004 -
accuracy: 0.8000 - val_loss: 0.0941 - val_accuracy: 0.9983
Epoch 17/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1993 -
accuracy: 0.8000 - val_loss: 0.0914 - val_accuracy: 0.9983
Epoch 18/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1984 -
accuracy: 0.8000 - val_loss: 0.0881 - val_accuracy: 0.9967
Epoch 19/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1973 -
accuracy: 0.7996 - val_loss: 0.0866 - val_accuracy: 1.0000
Epoch 20/20
75/75 [==============================] - 0s 3ms/step - loss: 0.1966 -
accuracy: 0.7996 - val_loss: 0.0828 - val_accuracy: 0.9933

Having trained the models, let's plot their responses and accuracy metrics.

plot_synthetic_results(*datasets['noise free'], results['noise free'], 'Noise free dataset')
plot_synthetic_results(*datasets['margin'], results['margin'], '20% margin noise')
plot_synthetic_results(*datasets['global'], results['global'], '20% global noise')

82/82 [==============================] - 0s 1ms/step


82/82 [==============================] - 0s 1ms/step
82/82 [==============================] - 0s 2ms/step
82/82 [==============================] - 0s 2ms/step

82/82 [==============================] - 0s 2ms/step


82/82 [==============================] - 0s 2ms/step
from keras.models import Sequential
from keras.layers import Conv2D,MaxPool2D, BatchNormalization, Dropout
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

model = Sequential()

model.add(Conv2D(input_shape=[224, 224, 3], filters=64, kernel_size=[3, 3],
                 padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))

model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))

model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))

model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))

model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=512, kernel_size=[3, 3], padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=[2, 2], strides=[2, 2]))

model.add(Flatten())
model.add(Dense(units=4096, activation='relu'))
model.add(BatchNormalization())

model.add(Dense(units=4096, activation='relu'))
model.add(BatchNormalization())

model.add(Dense(units=38, activation="softmax"))
model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_13 (Conv2D) (None, 224, 224, 64) 1792

conv2d_14 (Conv2D) (None, 224, 224, 64) 36928

max_pooling2d_5 (MaxPooling (None, 112, 112, 64) 0


2D)

conv2d_15 (Conv2D) (None, 112, 112, 64) 36928

batch_normalization_12 (Bat (None, 112, 112, 64) 256


chNormalization)

conv2d_16 (Conv2D) (None, 112, 112, 64) 36928

batch_normalization_13 (Bat (None, 112, 112, 64) 256


chNormalization)

max_pooling2d_6 (MaxPooling (None, 56, 56, 64) 0


2D)

conv2d_17 (Conv2D) (None, 56, 56, 128) 73856

batch_normalization_14 (Bat (None, 56, 56, 128) 512


chNormalization)
conv2d_18 (Conv2D) (None, 56, 56, 128) 147584

batch_normalization_15 (Bat (None, 56, 56, 128) 512


chNormalization)

conv2d_19 (Conv2D) (None, 56, 56, 128) 147584

batch_normalization_16 (Bat (None, 56, 56, 128) 512


chNormalization)

max_pooling2d_7 (MaxPooling (None, 28, 28, 128) 0


2D)

conv2d_20 (Conv2D) (None, 28, 28, 256) 295168

batch_normalization_17 (Bat (None, 28, 28, 256) 1024


chNormalization)

conv2d_21 (Conv2D) (None, 28, 28, 256) 590080

batch_normalization_18 (Bat (None, 28, 28, 256) 1024


chNormalization)

conv2d_22 (Conv2D) (None, 28, 28, 256) 590080

max_pooling2d_8 (MaxPooling (None, 14, 14, 256) 0


2D)

conv2d_23 (Conv2D) (None, 14, 14, 512) 1180160

batch_normalization_19 (Bat (None, 14, 14, 512) 2048


chNormalization)

conv2d_24 (Conv2D) (None, 14, 14, 512) 2359808

batch_normalization_20 (Bat (None, 14, 14, 512) 2048


chNormalization)

conv2d_25 (Conv2D) (None, 14, 14, 512) 2359808

batch_normalization_21 (Bat (None, 14, 14, 512) 2048


chNormalization)

max_pooling2d_9 (MaxPooling (None, 7, 7, 512) 0


2D)

flatten_1 (Flatten) (None, 25088) 0


dense_3 (Dense) (None, 4096) 102764544

batch_normalization_22 (Bat (None, 4096) 16384


chNormalization)

dense_4 (Dense) (None, 4096) 16781312

batch_normalization_23 (Bat (None, 4096) 16384


chNormalization)

dense_5 (Dense) (None, 38) 155686

=================================================================
Total params: 127,601,254
Trainable params: 127,579,750
Non-trainable params: 21,504
_________________________________________________________________

from keras.optimizers import Adam

adam = Adam(learning_rate=0.001)

model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

cd /kaggle/input/new-plant-diseases-dataset/'New Plant Diseases Dataset(Augmented)'/'New Plant Diseases Dataset(Augmented)'/

/kaggle/input/new-plant-diseases-dataset/New Plant Diseases Dataset(Augmented)/New Plant Diseases Dataset(Augmented)

ls

train/  valid/

training_set = train_datagen.flow_from_directory('train',
                                                 target_size=(224, 224),
                                                 batch_size=512,
                                                 class_mode='categorical')

test_set = test_datagen.flow_from_directory('valid',
                                            target_size=(224, 224),
                                            batch_size=512,
                                            class_mode='categorical')

Found 70295 images belonging to 38 classes.


Found 17572 images belonging to 38 classes.

from keras.callbacks import EarlyStopping

# Instantiate the callback; the original cell only referenced the class.
# patience=3 here is an illustrative choice.
early_stopping = EarlyStopping(monitor='val_loss', patience=3)

# fit the model (Model.fit supports generators; fit_generator is deprecated)
r = model.fit(
    training_set,
    validation_data=test_set,
    epochs=10,
    # callbacks=[early_stopping],
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set)
)

Epoch 1/10
Autoencoders are neural network models used for various tasks in machine learning, such as
dimensionality reduction, anomaly detection, and image denoising. This program demonstrates
the application of autoencoders for image denoising using TensorFlow and Keras: it trains
an autoencoder to remove noise from images.

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist

# Load the MNIST dataset (noise is added in the next cell)

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 2s 0us/step

# Add Gaussian noise to the images

noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)

# Clip pixel values to be in the range [0, 1]


x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)

# Define the autoencoder model


input_img = Input(shape=(28, 28, 1))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)


x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)


autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
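A quick shape check on the architecture: the two pooling layers halve 28 x 28 down to 7 x 7, and the two upsampling layers restore it, ending in a single-channel sigmoid map that pairs naturally with the binary cross-entropy loss on [0, 1] pixel values. Expected shapes (a sketch; run the summary to confirm):

autoencoder.summary()
# Encoder: (28, 28, 32) -> (14, 14, 32) -> (14, 14, 32) -> (7, 7, 32)
# Decoder: (7, 7, 32) -> (14, 14, 32) -> (14, 14, 32) -> (28, 28, 32) -> (28, 28, 1)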

# Train the autoencoder

autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))

Epoch 1/10
469/469 [==============================] - 15s 9ms/step - loss: 0.1716
- val_loss: 0.1197
Epoch 2/10
469/469 [==============================] - 4s 8ms/step - loss: 0.1155
- val_loss: 0.1104
Epoch 3/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1091
- val_loss: 0.1060
Epoch 4/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1056
- val_loss: 0.1033
Epoch 5/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1033
- val_loss: 0.1014
Epoch 6/10
469/469 [==============================] - 3s 7ms/step - loss: 0.1018
- val_loss: 0.1003
Epoch 7/10
469/469 [==============================] - 3s 6ms/step - loss: 0.1007
- val_loss: 0.0992
Epoch 8/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0998
- val_loss: 0.0988
Epoch 9/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0992
- val_loss: 0.0983
Epoch 10/10
469/469 [==============================] - 3s 7ms/step - loss: 0.0986
- val_loss: 0.0976

<keras.callbacks.History at 0x781450f52170>

# Denoise some test images


denoised_images = autoencoder.predict(x_test_noisy)

313/313 [==============================] - 1s 2ms/step

# Display noisy and denoised images

n = 10  # number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Display noisy input image
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display denoised image
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(denoised_images[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
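Alongside the visual check, a simple quantitative measure helps when comparing runs. A sketch computing the mean PSNR of the reconstructions against the clean test images (pixel values are already in [0, 1]):

mse = np.mean((denoised_images.reshape(-1, 28, 28) - x_test) ** 2, axis=(1, 2))
psnr = 10 * np.log10(1.0 / mse)
print('mean PSNR: %.2f dB' % psnr.mean())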
