Deep Learning (R20A6610)
DEEP LEARNING
LAB MANUAL
B.TECH
(CSE (DATA SCIENCE))
INDEX
9. Use the concept of Data Augmentation to increase the data size from a single image.
10. Design and implement a CNN model to classify CIFAR10 image dataset. Use the concept of Data Augmentation while designing the CNN model. Record the accuracy corresponding to the number of epochs.
12. Implement the standard VGG-16 & 19 CNN architecture model to classify multi category image dataset and check the accuracy.
Week-1
a. Design a single-unit perceptron for classification of a linearly separable binary dataset (placement.csv)
without using pre-defined models; then compare with the Perceptron() from sklearn.
Program
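A minimal from-scratch sketch, assuming placement.csv holds two numeric feature columns followed by a 0/1 label column (the column positions, learning rate, and epoch count here are assumptions):

# From-scratch single-unit perceptron (no pre-defined model)
import numpy as np
import pandas as pd
data = pd.read_csv('placement.csv')          # assumed: 2 feature columns + label
X = data.iloc[:, :2].values
y = data.iloc[:, -1].values
def perceptron(X, y, lr=0.1, epochs=1000):
    X = np.insert(X, 0, 1, axis=1)           # prepend a bias input of 1
    weights = np.ones(X.shape[1])
    for _ in range(epochs):
        i = np.random.randint(0, X.shape[0])            # pick one random sample
        y_hat = 1 if np.dot(X[i], weights) > 0 else 0   # step activation
        weights = weights + lr * (y[i] - y_hat) * X[i]  # perceptron update rule
    return weights[0], weights[1:]
bias, w = perceptron(X, y)
print('bias:', bias, 'weights:', w)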
OUTPUT:
Exercise:
Design a single-unit perceptron for classification of a linearly separable binary dataset without using pre-defined
models; then compare with the Perceptron() from sklearn.
b. Identify the problem with the single-unit Perceptron. Classify using OR-, AND- and XOR-ed data and analyze the
result.
Program
# Perceptron on OR-, AND- and XOR-ed data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import Perceptron
or_data = pd.DataFrame()
and_data = pd.DataFrame()
xor_data = pd.DataFrame()
or_data['input1'] = [1,1,0,0]
or_data['input2'] = [1,0,1,0]
or_data['output'] = [1,1,1,0]
and_data['input1'] = [1,1,0,0]
and_data['input2'] = [1,0,1,0]
and_data['output'] = [1,0,0,0]
xor_data['input1'] = [1,1,0,0]
xor_data['input2'] = [1,0,1,0]
xor_data['output'] = [0,1,1,0]
# Train a perceptron on each truth table and report training accuracy;
# OR and AND are linearly separable, XOR is not.
for name, data in [('OR', or_data), ('AND', and_data), ('XOR', xor_data)]:
    clf = Perceptron(max_iter=1000)
    clf.fit(data[['input1', 'input2']], data['output'])
    print(name, 'accuracy:', clf.score(data[['input1', 'input2']], data['output']))
OUTPUT:
Exercise
Identify the problem with single unit Perceptron. Classify using Not- and XNOR-ed data and analyze the result.
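The truth tables for the exercise can be built the same way; a short sketch (note that NOT has a single input, and XNOR is XOR with the outputs inverted):

import pandas as pd
not_data = pd.DataFrame({'input1': [1, 0], 'output': [0, 1]})
xnor_data = pd.DataFrame({'input1': [1, 1, 0, 0],
                          'input2': [1, 0, 1, 0],
                          'output': [1, 0, 0, 1]})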
Week-2
Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same
using appropriate data sets. Vary the activation functions used and compare the results.
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
X, y = datasets.load_iris(return_X_y=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
model.add(Dense(2, input_shape=(4,)))
model.add(Activation('sigmoid'))   # vary this activation and compare the results
model.add(Dense(3))                # one output unit per iris class
model.add(Activation('softmax'))
#sgd = SGD(learning_rate=0.0001, momentum=0.9, nesterov=True)
#model.compile(loss='sparse_categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Compile the model and calculate its accuracy:
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# Print a summary of the Keras model:
model.summary()
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)
OUTPUT:
Exercise:
Note down the accuracies for the following set of experiments on the given NN and compare the results;
make the modifications required. Take the training data percentage as 30% and the test data percentage as 70%, as shown in the sketch below.
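A sketch of the modified program (only the split ratio and the activation argument change; 'relu' here is one example to compare against 'sigmoid' and 'tanh'):

# Week-2 network with a 30/70 train/test split and a swappable activation
from keras.models import Sequential
from keras.layers import Dense, Activation
from sklearn import datasets
from sklearn.model_selection import train_test_split
X, y = datasets.load_iris(return_X_y=True)
# test_size=0.70 leaves 30% of the data for training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.70)
activation = 'relu'            # try 'sigmoid', 'tanh', 'relu' and compare
model = Sequential()
model.add(Dense(2, input_shape=(4,)))
model.add(Activation(activation))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)
print(model.evaluate(X_test, y_test))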
Week-3
Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm and test the same
using appropriate data sets. Use the number of hidden layers >=4.
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
X, y = datasets.load_iris(return_X_y=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Use four hidden layers (the task asks for >= 4); the input shape is
# specified only on the first layer.
model = Sequential()
model.add(Dense(8, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(8))
model.add(Activation('sigmoid'))
model.add(Dense(8))
model.add(Activation('sigmoid'))
model.add(Dense(8))
model.add(Activation('sigmoid'))
model.add(Dense(3))              # one output unit per iris class
model.add(Activation('softmax'))
# Compile, train, and evaluate the model:
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)
OUTPUT:
Exercise:
Modify the above NN model to run on the Ionosphere dataset with number of hidden layers >= 4. Take the training data
percentage as 30% and the test data percentage as 70%, number of epochs = 100, activation function ReLU, and optimizer Adam; a loading sketch follows.
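A loading sketch, assuming a local ionosphere.csv without a header row (the file name is an assumption; the dataset has 34 numeric features and a final 'g'/'b' class column):

import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv('ionosphere.csv', header=None)
X = data.iloc[:, :-1].values
y = (data.iloc[:, -1] == 'g').astype(int).values   # encode g=1, b=0
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.70)
# Build the deep model as above with input_shape=(34,), activation='relu'
# on the hidden layers, a Dense(1, activation='sigmoid') output,
# loss='binary_crossentropy', optimizer='adam', and fit with epochs=100.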
Week-4
Design and implement an Image classification model to classify a dataset of images using Deep Feed Forward
NN. Record the accuracy corresponding to the number of epochs. Use the MNIST dataset.
Program
#load required packages
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras import Input
from keras.layers import Dense
import pandas as pd
import numpy as np
import sklearn
from sklearn.metrics import classification_report
import matplotlib
import matplotlib.pyplot as plt
# Load the MNIST dataset and print shapes
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
print("Shape of X_train: ", X_train.shape)
print("Shape of y_train: ", y_train.shape)
print("Shape of X_test: ", X_test.shape)
print("Shape of y_test: ", y_test.shape)
# Display images of the first 10 digits in the training set and their true labels
fig, axs = plt.subplots(2, 5, sharey=False, tight_layout=True, figsize=(12,6), facecolor='white')
n = 0
for i in range(0,2):
    for j in range(0,5):
        axs[i,j].matshow(X_train[n])
        axs[i,j].set(title=y_train[n])
        n = n + 1
plt.show()
# Flatten each 28x28 image into a 784-element vector and scale pixels to [0,1]
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32') / 255
# Print shapes
print("New shape of X_train: ", X_train.shape)
print("New shape of X_test: ", X_test.shape)
#Design the Deep FF Neural Network architecture
model = Sequential(name="DFF-Model") # Model
model.add(Input(shape=(784,), name='Input-Layer')) # Input layer - need to specify the shape of inputs
model.add(Dense(128, activation='relu', name='Hidden-Layer-1', kernel_initializer='HeNormal'))
model.add(Dense(64, activation='relu', name='Hidden-Layer-2', kernel_initializer='HeNormal'))
model.add(Dense(32, activation='relu', name='Hidden-Layer-3', kernel_initializer='HeNormal'))
model.add(Dense(10, activation='softmax', name='Output-Layer'))
# Printing the parameters: this Deep Feed Forward Neural Network contains more than 100K
model.summary()
# Compile and train the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=128)
# Predicted class labels for the training and test sets
pred_labels_tr = np.argmax(model.predict(X_train), axis=1)
pred_labels_te = np.argmax(model.predict(X_test), axis=1)
print("")
print('---------- Evaluation on Training Data ----------- ')
print(classification_report(y_train, pred_labels_tr))
print("")
print('---------- Evaluation on Test Data -----------')
print(classification_report(y_test, pred_labels_te))
print("")
OUTPUT:
Exercise:
Design and implement an image classification model to classify a dataset of images using a Deep Feed Forward
NN. Record the accuracy for 5 and 50 epochs. Use the CIFAR10/Fashion MNIST datasets. [You can use CIFAR10
available in the keras package.] Make the necessary changes wherever required; a sketch of the CIFAR-10 changes follows.
Below note down only the changes made and the accuracies obtained.
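A sketch of the CIFAR-10 changes, assuming the rest of the Week-4 program is reused:

# CIFAR-10 changes: 32x32 RGB images flatten to 32*32*3 = 3072 inputs
from keras.datasets import cifar10
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
y_train, y_test = y_train.flatten(), y_test.flatten()   # labels come as (n,1)
X_train = X_train.reshape(X_train.shape[0], 3072).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 3072).astype('float32') / 255
# In the model, only the input layer changes:
# model.add(Input(shape=(3072,), name='Input-Layer'))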
Week-5
Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the MNIST, CIFAR-10 datasets.
Program
import keras
from keras.datasets import mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load MNIST, add the channel dimension, scale pixels to [0,1],
# and one-hot encode the labels
(train_X,train_Y), (test_X,test_Y) = mnist.load_data()
train_X = train_X.reshape(-1, 28,28, 1)
test_X = test_X.reshape(-1, 28,28, 1)
print(train_X.shape)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(64, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)
plt.show()
OUTPUT:
Exercise:
Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets.
Record the accuracy for 10 and 100 epochs. Use the CIFAR10/Fashion MNIST datasets.
Make the necessary changes wherever required. Below note down only the changes made and the accuracies
obtained.
Week-6
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the Fashion MNIST datasets. Record the time
required to run the program, using CPU as well as using GPU in Colab.
Program-
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
# Four convolution layers (the task asks for 4+)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
# Train, then evaluate on the test set
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
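To record the run time, the fit call above can be wrapped in a wall-clock timer; run once on a CPU runtime and once on a GPU runtime (Colab: Runtime > Change runtime type) and compare:

import time
import tensorflow as tf
print('GPUs visible:', tf.config.list_physical_devices('GPU'))
start = time.time()
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
print('Training time (seconds):', time.time() - start)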
OUTPUT:
Exercise:
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Use the MNIST/CIFAR-10 datasets. Set the number of epochs to 5, 10, and 20. Make the necessary changes wherever
required. Record the accuracy corresponding to the number of epochs. Record the time required to run the
program, using CPU as well as using GPU in Colab. Below note down only the changes made and the accuracies
obtained.
Week-7
Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of padding and Batch Normalization while designing the CNN model. Record the accuracy
corresponding to the number of epochs. Use the Fashion MNIST datasets.
Program
# Batch-Normalization and padding
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
# Two convolution layers with 'same' padding, each followed by Batch Normalization
model = Sequential()
model.add(Conv2D(64, (3,3), padding='same', input_shape=(28, 28, 1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
OUTPUT:
Exercise:
Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of Batch-Normalization and padding while designing the CNN model. Record the accuracy
for 5, 25, and 225 epochs. Make the necessary changes wherever required. Use the
MNIST/CIFAR-10 datasets. Below note down only the changes made and the accuracies obtained.
Week-8
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the Fashion MNIST datasets.
Record the Training accuracy and Test accuracy corresponding to the following architectures:
a. Base Model
b. Model with L1 Regularization
c. Model with L2 Regularization
d. Model with Dropout
e. Model with both L2 (or L1) and Dropout
Program
a. Base Model: modify the program from experiment b by commenting out the kernel_regularizer=l1(0.01) argument. See the
program below for reference.
b.
# L1 Regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l1
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1),
                 kernel_regularizer=l1(0.01)   # comment this out for the base model
                 ))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(28, (3,3),
                 kernel_regularizer=l1(0.01)
                 ))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
# Train, evaluate, then inspect one prediction
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
c.
# L2 regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l2
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1),
                 kernel_regularizer=l2(0.01)
                 ))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(28, (3,3),
                 kernel_regularizer=l2(0.01)
                 ))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
d.
#Dropout
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))   # randomly drop 20% of activations during training
model.add(Flatten())
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
e.
# L2 regularization combined with Dropout (imports as in d., plus:)
from keras.regularizers import l2
# Load Fashion MNIST and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1),
                 kernel_regularizer=l2(0.01)
                 ))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(28, (3,3),
                 kernel_regularizer=l2(0.01)
                 ))
model.add(Activation('relu'))
model.add(Dropout(0.20))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
OUTPUT:
a. Base Model:
Exercise:
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the MNIST dataset. Modify the
program as and when needed. Record the Training accuracy and Test accuracy corresponding to the following
architectures:
a. Base Model
b. Model with both L2 (or L1) and Dropout
Week-9
Use the concept of data augmentation to increase the data size from a single image.
Program-
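A minimal sketch using ImageDataGenerator, assuming an image file named bird.jpg in the working directory (a hypothetical name; any local image works). Each draw from the iterator yields a new randomly transformed copy of the single input image:

# Data augmentation from a single image
from numpy import expand_dims
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator
import matplotlib.pyplot as plt
img = load_img('bird.jpg')                 # assumed local image file
data = img_to_array(img)
samples = expand_dims(data, 0)             # the generator expects a batch axis
datagen = ImageDataGenerator(width_shift_range=0.2, horizontal_flip=True,
                             rotation_range=40)
it = datagen.flow(samples, batch_size=1)
for i in range(9):                         # draw 9 augmented variants
    plt.subplot(3, 3, i + 1)
    batch = next(it)
    plt.imshow(batch[0].astype('uint8'))
plt.show()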
OUTPUT:
Exercise:
Use the concept of data augmentation to increase the data size from a single image. Use any random image of your
choice. Apply variations of the ImageDataGenerator() arguments, e.g. height_shift_range=0.5,
horizontal_flip=True, rotation_range=90, brightness_range=[0.2,1.0], zoom_range=[0.5,1.0], and analyze the
output images.
Week-10
Design and implement a CNN model to classify CIFAR10 image dataset. Use the concept of Data Augmentation
while designing the CNN model. Record the accuracy corresponding to the number of epochs.
Program-
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
num_classes = 10
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# A small CNN for CIFAR-10
model_1 = Sequential()
model_1.add(Conv2D(32, (3,3), padding='same', activation='relu', input_shape=x_train.shape[1:]))
model_1.add(MaxPooling2D(pool_size=(2,2)))
model_1.add(Conv2D(64, (3,3), padding='same', activation='relu'))
model_1.add(MaxPooling2D(pool_size=(2,2)))
model_1.add(Flatten())
model_1.add(Dense(512, activation='relu'))
model_1.add(Dropout(0.5))
model_1.add(Dense(num_classes, activation='softmax'))
model_1.summary()
model_1.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 32
datagen = ImageDataGenerator(
    featurewise_center=False,             # set input mean to 0 over the dataset
    samplewise_center=False,              # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,   # divide each input by its std
    zca_whitening=False,                  # apply ZCA whitening
    rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
    width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,                 # randomly flip images horizontally
    vertical_flip=False)                  # do not flip images vertically
# Fit the augmentation statistics, then train on augmented batches and
# record the accuracy for each epoch
datagen.fit(x_train)
model_1.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
            epochs=10, validation_data=(x_test, y_test))
OUTPUT:
Exercise:
Can you make the above model do better on the same dataset? Can you make it do worse? Experiment with
different settings of the data augmentation while designing the CNN model. Record the accuracy mentioning the
modified settings of data augmentation.
Week-11
Implement the standard LeNet CNN architecture model to classify a multi category image dataset (MNIST) and
check the accuracy.
Program-
# LeNet
import tensorflow as tf
from tensorflow import keras
import numpy as np
(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_x = tf.expand_dims(train_x, 3)
test_x = tf.expand_dims(test_x, 3)
val_x = train_x[:5000]
val_y = train_y[:5000]
lenet_5_model = keras.models.Sequential([
    keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh',
                        input_shape=train_x[0].shape, padding='same'),  #C1
    keras.layers.AveragePooling2D(),  #S2
    keras.layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh',
                        padding='valid'),  #C3
    keras.layers.AveragePooling2D(),  #S4
    keras.layers.Conv2D(120, kernel_size=5, strides=1, activation='tanh',
                        padding='valid'),  #C5
    keras.layers.Flatten(),  #Flatten
    keras.layers.Dense(84, activation='tanh'),  #F6
    keras.layers.Dense(10, activation='softmax')  #Output layer
])
lenet_5_model.compile(optimizer='adam',
                      loss=keras.losses.sparse_categorical_crossentropy,
                      metrics=['accuracy'])
lenet_5_model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y))
lenet_5_model.evaluate(test_x, test_y)
OUTPUT:
Exercise:
Implement the standard LeNet CNN architecture model to classify multi category image dataset (Fashion
MNIST) and check the accuracy. Below note down only the changes made and the accuracies obtained for epochs
5, 50, 250.
Week-12
Implement the standard VGG-16 CNN architecture model to classify a cat and dog image dataset and check the
accuracy.
Program-
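A minimal transfer-learning sketch using the pre-trained VGG-16 from keras.applications, assuming the cat and dog images are arranged under a train/ folder with one sub-folder per class (the folder layout, image size, and epoch count are assumptions):

# VGG-16 with pre-trained ImageNet weights; only the new top layers are trained
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras import layers, models
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained convolutional base
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid')   # cat vs dog
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Assumed layout: train/cats/*.jpg and train/dogs/*.jpg
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'train', image_size=(224, 224), batch_size=32, label_mode='binary')
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
model.fit(train_ds, epochs=5)

For the VGG-19 exercise below, swapping VGG16 for VGG19 (also available in keras.applications) is the only structural change.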
OUTPUT:
Exercise:
Implement the standard VGG-19 CNN architecture model to classify a cat and dog image dataset and check the
accuracy. Make the necessary changes wherever required.
Week-13
Program-
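Assuming this week's task mirrors the exercise below (an RNN with an Embedding layer for sentiment analysis on movie reviews), a minimal sketch on the Keras IMDB dataset:

# SimpleRNN sentiment classifier on the IMDB movie-review dataset
from keras.datasets import imdb
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
vocab_size, maxlen = 10000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=maxlen)   # pad reviews to equal length
x_test = pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length=maxlen))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))         # positive/negative review
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test))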
OUTPUT:
Exercise:
Implement RNN for sentiment analysis on movie reviews. Use the concept of Embedding layer.
Week-14
Implement Bi-directional LSTM for sentiment analysis on movie reviews.
Program-
# Bi-directional LSTM
import numpy as np
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
# Load the IMDB reviews, keeping the 10,000 most frequent words,
# and pad every review to the same length
n_unique_words = 10000
maxlen = 200
batch_size = 128
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=n_unique_words)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(n_unique_words, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=10,
                    validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
print(history.history['loss'])
print(history.history['accuracy'])
from matplotlib import pyplot
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['accuracy'])
pyplot.title('model loss vs accuracy')
pyplot.xlabel('epoch')
pyplot.legend(['loss', 'accuracy'], loc='upper right')
pyplot.show()
OUTPUT:
Exercise:
Implement Bi-directional LSTM on a suitable dataset of your choice. Modify the program as needed.