
B.Tech – CSE (Data Science) R-20

DEEP LEARNING (R20A6610)
LAB MANUAL

B.TECH

(IV YEAR – I SEM)


(2022-23)

DEPARTMENT OF EMERGING TECHNOLOGIES

(CSE(DATA SCIENCE))

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY


(Autonomous Institution – UGC, Govt. of India)

Recognized under 2(f) and 12 (B) of UGC ACT 1956


(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India

INDEX

Deep Learning Programs Using Python

1. a. Design a single unit perceptron for classification of a linearly separable binary dataset without using pre-defined models. Use the Perceptron() from sklearn.
   b. Identify the problem with single unit Perceptron. Classify using Or-, And- and Xor-ed data and analyze the result.
2. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets. Vary the activation functions used and compare the results.
3. Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm and test the same using appropriate data sets. Use the number of hidden layers >= 4.
4. Design and implement an Image classification model to classify a dataset of images using a Deep Feed Forward NN. Record the accuracy corresponding to the number of epochs. Use the MNIST, CIFAR-10 datasets.
5. Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets. Record the accuracy corresponding to the number of epochs. Use the MNIST, CIFAR-10 datasets.
6. Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use the MNIST, Fashion MNIST, CIFAR-10 datasets. Set the number of epochs as 5, 10 and 20, making the necessary changes wherever required. Record the accuracy corresponding to the number of epochs, and the time required to run the program using CPU as well as GPU in Colab.
7. Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets. Use the concept of padding and Batch Normalization while designing the CNN model. Record the accuracy corresponding to the number of epochs. Use the Fashion MNIST/MNIST/CIFAR-10 datasets.
8. Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use the concept of regularization and dropout while designing the CNN model. Use the Fashion MNIST datasets. Record the Training accuracy and Test accuracy corresponding to the following architectures:
   a. Base Model
   b. Model with L1 Regularization
   c. Model with L2 Regularization
   d. Model with Dropout
   e. Model with both L2 (or L1) and Dropout
9. Use the concept of Data Augmentation to increase the data size from a single image.
10. Design and implement a CNN model to classify the CIFAR-10 image dataset. Use the concept of Data Augmentation while designing the CNN model. Record the accuracy corresponding to the number of epochs.
11. Implement the standard LeNet-5 CNN architecture model to classify a multi category image dataset (MNIST, Fashion MNIST) and check the accuracy.
12. Implement the standard VGG-16 & 19 CNN architecture models to classify a multi category image dataset and check the accuracy.
13. Implement RNN for sentiment analysis on movie reviews.
14. Implement Bidirectional LSTM for sentiment analysis on movie reviews.
Week-1

a. Design a single unit perceptron for classification of a linearly separable binary dataset (placement.csv)
without using pre-defined models. Use the Perceptron() from sklearn.

Program
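(The listing for this program is blank in this copy of the manual. The following is a minimal sketch of what the experiment calls for, assuming a synthetic linearly separable dataset generated with make_classification() in place of the placement.csv file, which is not bundled here: a single unit trained from scratch with the perceptron learning rule, followed by sklearn's pre-defined Perceptron() for comparison.)

# Single-unit perceptron: from-scratch learning rule vs sklearn's Perceptron()
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron

# Linearly separable binary dataset (synthetic stand-in for placement.csv)
X, y = make_classification(n_samples=100, n_features=2, n_informative=1,
                           n_redundant=0, n_classes=2, n_clusters_per_class=1,
                           random_state=41, hypercube=False, class_sep=10)

# From-scratch single unit: step activation + perceptron update rule
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for epoch in range(10):
    for xi, target in zip(X, y):
        y_hat = 1 if np.dot(w, xi) + b >= 0 else 0   # step activation
        w += lr * (target - y_hat) * xi              # weights change only on mistakes
        b += lr * (target - y_hat)
pred = (X @ w + b >= 0).astype(int)
print('Scratch perceptron accuracy:', np.mean(pred == y))

# sklearn's pre-defined Perceptron for comparison
clf = Perceptron()
clf.fit(X, y)
print('sklearn Perceptron accuracy:', clf.score(X, y))
print('weights:', clf.coef_, 'bias:', clf.intercept_)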

OUTPUT:

Exercise:

Design a single unit perceptron for classification of a linearly separable binary dataset without using pre-defined
models. Use the Perceptron() from sklearn.

[Hint: use make_classification() from sklearn to generate a binary dataset, e.g.:
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=100, n_features=2,
n_informative=1,n_redundant=0,n_classes=2, n_clusters_per_class=1,
random_state=41,hypercube=False,class_sep=10)
]

b. Identify the problem with single unit Perceptron. Classify using Or-, And- and Xor-ed data and analyze the result.

Program
# Perceptron on Or-, And- and Xor-ed data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mlxtend.plotting import plot_decision_regions  # needed for the XOR plot at the end

or_data = pd.DataFrame()
and_data = pd.DataFrame()
xor_data = pd.DataFrame()

or_data['input1']=[1,1,0,0]
or_data['input2']=[1,0,1,0]
or_data['output']=[1,1,1,0]

and_data['input1']=[1,1,0,0]
and_data['input2']=[1,0,1,0]
and_data['output']=[1,0,0,0]

xor_data['input1']=[1,1,0,0]
xor_data['input2']=[1,0,1,0]
xor_data['output']=[0,1,1,0]

from sklearn.linear_model import Perceptron


clf1=Perceptron()
clf2=Perceptron()
clf3=Perceptron()
clf1.fit(and_data.iloc[:,0:2].values,and_data.iloc[:,-1].values)
print(clf1.coef_)
print(clf1.intercept_)
x=np.linspace(-1,1,5)
y=-x+1        # hand-drawn reference line approximating the learned AND boundary
plt.plot(x,y)
#sns.scatterplot(and_data['input1'],and_data['input2'],hue=and_data['output'],s=200)
clf2.fit(or_data.iloc[:,0:2].values,or_data.iloc[:,-1].values)
print(clf2.coef_)
print(clf2.intercept_)
x1=np.linspace(-1,1,5)
y1=-x1+0.5    # hand-drawn reference line approximating the learned OR boundary (note: x1, not x)
plt.plot(x1,y1)
#sns.scatterplot(or_data['input1'],or_data['input2'],hue=or_data['output'],s=200)
clf3.fit(xor_data.iloc[:,0:2].values,xor_data.iloc[:,-1].values)
print(clf3.coef_)
print(clf3.intercept_)
# XOR is not linearly separable, so a single-unit perceptron cannot classify it
# correctly; the decision-region plot below shows the misclassified points.
plot_decision_regions(xor_data.iloc[:,0:2].values,xor_data.iloc[:,-1].values,
                      clf=clf3, legend=2)

OUTPUT:

Exercise
Identify the problem with single unit Perceptron. Classify using Not- and XNOR-ed data and analyze the result.

Week-2

Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same
using appropriate data sets. Vary the activation functions used and compare the results.

Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
X, y = datasets.load_iris(return_X_y=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
model.add(Dense(2, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# Note: a single sigmoid output trained with mean squared error treats the 0/1/2
# class labels as regression targets; a 3-unit softmax output with categorical
# cross-entropy is the more usual choice for iris (try it in the exercise below).
#sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Compile the model and calculate its accuracy:
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
#model.fit(X_train, y_train, batch_size=32, epochs=3)
# Print a summary of the Keras model:
model.summary()
#model.fit(X_train, y_train)
#model.fit(X_train, y_train, batch_size=32, epochs=300)
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)

OUTPUT:

Exercise:
Note down the accuracies for the following set of experiments on the given NN (modified to have 2 hidden layers) and compare the results, making the required changes each time. Use 30% of the data for training and 70% for testing. A sketch of one parameterized run follows the list.

(1) Iris dataset
    (a) No. of epochs = 100:
        (i) check accuracy using the activation functions sigmoid, relu, tanh
        (ii) check accuracy using the optimizers sgd and adam
        (iii) check accuracy varying the learning rate in sgd as 0.0001, 0.0005, 5
        (iv) check accuracy using the losses mean squared error and categorical cross-entropy
    (b) No. of epochs = 300:
        (i) repeat the same variations as above
(2) Ionosphere dataset
    (a) repeat the same settings as for Iris
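(A minimal sketch of one parameterized run of this experiment grid, assuming a 2-hidden-layer network of 8 units each; the layer widths and the run() helper and its defaults are illustrative choices, not part of the original manual.)

# Hedged sketch: parameterize activation, optimizer, learning rate and loss
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.utils import to_categorical
from sklearn import datasets
from sklearn.model_selection import train_test_split

X, y = datasets.load_iris(return_X_y=True)
# 30% training data, 70% test data, as per the exercise
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.30)

def run(activation='relu', optimizer='adam', epochs=100,
        loss='categorical_crossentropy'):
    model = Sequential()
    model.add(Dense(8, activation=activation, input_shape=(4,)))  # hidden layer 1
    model.add(Dense(8, activation=activation))                    # hidden layer 2
    model.add(Dense(3, activation='softmax'))                     # 3 iris classes
    model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
    model.fit(X_train, to_categorical(y_train), epochs=epochs, verbose=0)
    return model.evaluate(X_test, to_categorical(y_test), verbose=0)[1]

print('relu / adam:', run())
print('sigmoid / sgd, lr=0.0005:', run('sigmoid', SGD(learning_rate=0.0005)))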

Week-3

Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm and test the same
using appropriate data sets. Use the number of hidden layers >=4.

Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
X, y = datasets.load_iris(return_X_y=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
model.add(Dense(2, input_shape=(4,)))   # input_shape is only needed on the first layer
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

#sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)


#model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Compile the model and calculate its accuracy:
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
#model.fit(X_train, y_train, batch_size=32, epochs=3)
# Print a summary of the Keras model:
model.summary()
#model.fit(X_train, y_train)
#model.fit(X_train, y_train, batch_size=32, epochs=300)
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)

OUTPUT:
R-20

Exercise:
Modify the above NN model to run on the Ionosphere dataset with number of hidden layers >= 4. Use 30% of the data for training and 70% for testing, 100 epochs, the relu activation function and the adam optimizer. A hedged sketch follows.
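(A hedged sketch of this exercise, assuming the UCI Ionosphere data is available locally as ionosphere.csv with 34 numeric feature columns and a final 'g'/'b' label column; the hidden-layer width of 16 is an illustrative choice.)

# Deep FF network for Ionosphere: >= 4 hidden layers, relu, adam, 100 epochs
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

df = pd.read_csv('ionosphere.csv', header=None)
X = df.iloc[:, :-1].values.astype('float32')
y = (df.iloc[:, -1] == 'g').astype(int).values   # good = 1, bad = 0

# 30% training data, 70% test data, as per the exercise
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.30)

model = Sequential()
for _ in range(4):                               # >= 4 hidden layers
    model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))        # binary output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, verbose=0)
print(model.evaluate(X_test, y_test))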

Week-4
Design and implement an Image classification model to classify a dataset of images using Deep Feed Forward
NN. Record the accuracy corresponding to the number of epochs. Use the MNIST datasets.

Program
#load required packages
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras import Input
from keras.layers import Dense
import pandas as pd
import numpy as np
import sklearn
from sklearn.metrics import classification_report
import matplotlib
import matplotlib.pyplot as plt

# Load digits data


(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

# Print shapes
print("Shape of X_train: ", X_train.shape)
print("Shape of y_train: ", y_train.shape)
print("Shape of X_test: ", X_test.shape)
print("Shape of y_test: ", y_test.shape)

# Display the first 10 digits in the training set with their true labels
fig, axs = plt.subplots(2, 5, sharey=False, tight_layout=True, figsize=(12,6),
                        facecolor='white')
n = 0
for i in range(0, 2):
    for j in range(0, 5):
        axs[i, j].matshow(X_train[n])
        axs[i, j].set(title=y_train[n])
        n = n + 1
plt.show()

# Reshape and normalize (divide by 255) input data


X_train = X_train.reshape(60000, 784).astype("float32") / 255
X_test = X_test.reshape(10000, 784).astype("float32") / 255

# Print shapes
print("New shape of X_train: ", X_train.shape)
print("New shape of X_test: ", X_test.shape)
#Design the Deep FF Neural Network architecture
model = Sequential(name="DFF-Model")
model.add(Input(shape=(784,), name='Input-Layer'))  # the input layer needs the shape of the inputs
model.add(Dense(128, activation='relu', name='Hidden-Layer-1',
                kernel_initializer='HeNormal'))
model.add(Dense(64, activation='relu', name='Hidden-Layer-2',
                kernel_initializer='HeNormal'))
model.add(Dense(32, activation='relu', name='Hidden-Layer-3',
                kernel_initializer='HeNormal'))
model.add(Dense(10, activation='softmax', name='Output-Layer'))

#Compile keras model


model.compile(optimizer='adam', loss='SparseCategoricalCrossentropy',
              metrics=['accuracy'],  # lowercase 'accuracy' lets Keras pick the matching accuracy variant
              loss_weights=None, weighted_metrics=None, run_eagerly=None,
              steps_per_execution=None)

#Fit keras model on the dataset


model.fit(X_train, y_train, batch_size=10, epochs=5, verbose='auto', callbacks=None,
          validation_split=0.2, shuffle=True, class_weight=None, sample_weight=None,
          initial_epoch=0,  # epoch at which to start training (useful for resuming a previous run)
          steps_per_epoch=None, validation_steps=None, validation_batch_size=None,
          validation_freq=5, max_queue_size=10, workers=1, use_multiprocessing=False)

# apply the trained model to make predictions


# Predict class labels on training data
pred_labels_tr = np.array(tf.math.argmax(model.predict(X_train),axis=1))
# Predict class labels on a test data
pred_labels_te = np.array(tf.math.argmax(model.predict(X_test),axis=1))

#Model Performance Summary


print("")
print(' Model Summary ')
model.summary()
print("")

# Printing the parameters: this Deep Feed Forward network contains more than 100K
# weights, so the dump below is left commented out.
#print(' Weights and Biases ')
#for layer in model.layers:
#    print("Layer: ", layer.name)                             # layer name
#    print(" --Kernels (Weights): ", layer.get_weights()[0])  # kernels (weights)
#    print(" --Biases: ", layer.get_weights()[1])             # biases

print("")
print('---------- Evaluation on Training Data ----------- ')
print(classification_report(y_train, pred_labels_tr))
print("")
print('---------- Evaluation on Test Data -----------')
print(classification_report(y_test, pred_labels_te))
print("")
OUTPUT:

Exercise:

Design and implement an Image classification model to classify a dataset of images using a Deep Feed Forward NN. Record the accuracy corresponding to the numbers of epochs 5 and 50. Use the CIFAR-10/Fashion MNIST datasets (CIFAR-10 is available in the keras package). Make the necessary changes wherever required, and note down only the changes made and the accuracies obtained. A sketch of the minimal CIFAR-10 changes follows.
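(A hedged sketch of the minimal changes for CIFAR-10; the rest of the Week-4 program stays as above. CIFAR-10 images are 32x32x3, so they flatten to 3072 inputs, and the labels arrive with shape (n, 1).)

from tensorflow import keras

(X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train.reshape(50000, 3072).astype("float32") / 255
X_test = X_test.reshape(10000, 3072).astype("float32") / 255
y_train, y_test = y_train.flatten(), y_test.flatten()   # labels come as (n, 1)

# ...and in the model definition, widen the input layer accordingly:
# model.add(Input(shape=(3072,), name='Input-Layer'))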

Week-5

Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the MNIST, CIFAR-10 datasets.

Program

import keras
from keras.datasets import mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
(train_X,train_Y), (test_X,test_Y) = mnist.load_data()
train_X = train_X.reshape(-1, 28,28, 1)
test_X = test_X.reshape(-1, 28,28, 1)
train_X.shape
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(64, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)
plt.show()

OUTPUT:

Exercise:

Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets. Record the accuracy corresponding to the numbers of epochs 10 and 100. Use the CIFAR-10/Fashion MNIST datasets. Make the necessary changes wherever required, and note down only the changes made and the accuracies obtained. A sketch of the minimal CIFAR-10 changes follows.
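(A hedged sketch of the minimal changes for CIFAR-10; the rest of the Week-5 program stays as above.)

from keras.datasets import cifar10

(train_X, train_Y), (test_X, test_Y) = cifar10.load_data()
# CIFAR-10 images already have shape (n, 32, 32, 3), so the reshape lines are not needed,
# and the first convolution becomes:
# model.add(Conv2D(64, (3,3), input_shape=(32, 32, 3)))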

Week-6
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the Fashion MNIST datasets. Record the time
required to run the program, using CPU as well as using GPU in Colab.

Program-
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()

train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255

train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1)))


model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))   # input_shape is only needed on the first layer


model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()
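(To record the CPU vs GPU time the task asks for, one simple approach is to wrap the training call with time.time(); this snippet continues the program above, reusing its model and data, and assumes the runtime type is switched in Colab between the two runs.)

import time
import tensorflow as tf

print('GPUs visible:', tf.config.list_physical_devices('GPU'))   # empty list on a CPU runtime
start = time.time()
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=5)
print('Training time: %.1f seconds' % (time.time() - start))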

OUTPUT:

Exercise:

Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Use the MNIST/ CIFAR-10 datasets. Set the No. of Epoch as 5, 10 and 20. Make the necessary changes whenever
required. Record the accuracy corresponding to the number of epochs. Record the time required to run the
program, using CPU as well as using GPU in Colab. Below note down only the changes made and the accuracies
obtained.

Week-7

Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of padding and Batch Normalization while designing the CNN model. Record the accuracy
corresponding to the number of epochs. Use the Fashion MNIST datasets.

Program
# Batch-Normalization and padding

import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()

train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255

train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1), padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())  # a bare BatchNormalization() call does nothing; it must be added to the model
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))

model.add(Conv2D(128, (3,3), padding='same'))
model.add(Activation('relu'))
#model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))

model.add(Conv2D(64, (3,3), padding='same'))   # input_shape is only needed on the first layer
model.add(Activation('relu'))
#model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))

model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
#model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()

OUTPUT:

Exercise:

Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of Batch-Normalization and padding while designing the CNN model. Record the accuracy
corresponding to the number of epochs 5, 25, 225. Make the necessary changes whenever required. Use the
MNIST/CIFAR-10 datasets. Below note down only the changes made and the accuracies obtained.

Week-8

Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the Fashion MNIST datasets.
Record the Training accuracy and Test accuracy corresponding to the following architectures:
a. Base Model
b. Model with L1 Regularization
c. Model with L2 Regularization
d. Model with Dropout
e. Model with both L2 (or L1) and Dropout

Program

a. Base Model: take the program from experiment b. below and comment out every kernel_regularizer=l1(0.01) argument. See the program below for reference.

b.
# L1 Regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l1
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()

train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255

train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1), kernel_regularizer=l1(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3),
kernel_regularizer=l1(0.01)
))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3),   # input_shape is only needed on the first layer
#kernel_regularizer=l1(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(28, (3,3),
#kernel_regularizer=l1(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()

c.
# L2 regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l2
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()



train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255

train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1), kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3),
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3),   # input_shape is only needed on the first layer
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(28, (3,3),
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()

d.
#Dropout

import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()

train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255

train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))  # a bare Dropout(0.20) call does nothing; it must be added to the model

model.add(Conv2D(128, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
#model.add(Dropout(0.20))

model.add(Conv2D(64, (3,3)))   # input_shape is only needed on the first layer
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
#model.add(Dropout(0.20))

model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
#model.add(Dropout(0.20))
model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()

e.

# L2 regularizer and Dropout


import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.regularizers import l2
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt

(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()

train_X = train_X.reshape(-1, 28,28, 1)


test_X = test_X.reshape(-1, 28,28, 1)

train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)

model = Sequential()

model.add(Conv2D(256, (3,3), input_shape=(28, 28, 1), kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))  # a bare Dropout(0.20) call does nothing; it must be added to the model

model.add(Conv2D(128, (3,3),
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3),   # input_shape is only needed on the first layer
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(28, (3,3),
#kernel_regularizer=l2(0.01)
))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

model.fit(train_X, train_Y_one_hot, epochs=5)

test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))

plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)


plt.show()

OUTPUT:
a. Base Model:

b. Model with L1 Regularization:

c. Model with L2 Regularization:

d. Model with Dropout:

e. Model with both L2 (or L1) and Dropout:

Exercise:

Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the MNIST dataset. Modify the
program as and when needed. Record the Training accuracy and Test accuracy corresponding to the following
architectures:
a. Base Model
b. Model with both L2 (or L1) and Dropout
R-20

Week-9

Use the concept of data augmentation to increase the data size from a single image.

Program-
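(The listing for this program is blank in this copy of the manual. The following is a minimal sketch, assuming any local image file; 'sample.jpg' is a placeholder name. ImageDataGenerator.flow() on a batch containing one image yields an endless stream of randomly transformed copies of it.)

# Data augmentation: many images from a single image
from numpy import expand_dims
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
import matplotlib.pyplot as plt

img = load_img('sample.jpg')                  # any image of your choice
data = img_to_array(img)
samples = expand_dims(data, 0)                # add a batch dimension: (1, h, w, 3)

datagen = ImageDataGenerator(width_shift_range=0.2, height_shift_range=0.2,
                             rotation_range=45, horizontal_flip=True)
it = datagen.flow(samples, batch_size=1)

for i in range(9):                            # draw and plot 9 augmented variants
    plt.subplot(3, 3, i + 1)
    batch = next(it)
    plt.imshow(batch[0].astype('uint8'))
plt.show()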

OUTPUT:

Exercise:

Use the concept of data augmentation to increase the data size from a single image. Use any random image of your choice. Apply variations of the ImageDataGenerator() arguments, e.g. height_shift_range=0.5, horizontal_flip=True, rotation_range=90, brightness_range=[0.2,1.0], zoom_range=[0.5,1.0], and analyze the output images.

Week-10

Design and implement a CNN model to classify CIFAR10 image dataset. Use the concept of Data Augmentation
while designing the CNN model. Record the accuracy corresponding to the number of epochs.

# data augmentation with flow function

from __future__ import print_function

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D

import matplotlib.pyplot as plt


%matplotlib inline

# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

num_classes = 10

y_train = tf.keras.utils.to_categorical(y_train, num_classes)


y_test = tf.keras.utils.to_categorical(y_test, num_classes)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# Let's build a CNN using Keras' Sequential capabilities

model_1 = Sequential()

## 5x5 convolution with 2x2 stride and 32 filters


model_1.add(Conv2D(32, (5, 5), strides = (2,2), padding='same',
input_shape=x_train.shape[1:]))
model_1.add(Activation('relu'))

## Another 5x5 convolution with 2x2 stride and 32 filters


model_1.add(Conv2D(32, (5, 5), strides = (2,2)))
model_1.add(Activation('relu'))

## 2x2 max pooling reduces to 3 x 3 x 32


model_1.add(MaxPooling2D(pool_size=(2, 2)))
model_1.add(Dropout(0.25))

## Flatten turns 3x3x32 into 288x1


model_1.add(Flatten())
model_1.add(Dense(512))
model_1.add(Activation('relu'))
model_1.add(Dropout(0.5))
model_1.add(Dense(num_classes))
model_1.add(Activation('softmax'))

model_1.summary()

batch_size = 32

# initiate RMSprop optimizer


opt = tf.keras.optimizers.legacy.RMSprop(learning_rate=0.0005, decay=1e-6)

# Let's train the model using RMSprop


model_1.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])

datagen = ImageDataGenerator(
    featurewise_center=False,             # set input mean to 0 over the dataset
    samplewise_center=False,              # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,   # divide each input by its std
    zca_whitening=False,                  # apply ZCA whitening
    rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
    width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,                 # randomly flip images horizontally
    vertical_flip=False)                  # do not flip images vertically

# Compute any statistics the generator needs (e.g. for centering) from the training set.
datagen.fit(x_train)

# Fit the model on the batches generated by datagen.flow().
model_1.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
            steps_per_epoch=x_train.shape[0] // batch_size,
            epochs=5,
            validation_data=(x_test, y_test))

test_loss, test_acc = model_1.evaluate(x_test, y_test)


print('Test loss', test_loss)
print('Test accuracy', test_acc)

OUTPUT:

Exercise:

Can you make the above model do better on the same dataset? Can you make it do worse? Experiment with different settings of the data augmentation while designing the CNN model, and record the accuracy along with the modified augmentation settings. One possible stronger-augmentation starting point is sketched below.
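(A hedged sketch: the configuration below swaps in stronger augmentation than the program above uses; the particular ranges are illustrative, not prescribed by the manual. Replace the datagen definition in the program with this and re-run to compare accuracies.)

datagen = ImageDataGenerator(
    rotation_range=15,          # mild rotations often help on CIFAR-10
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)       # vertical flips usually hurt on natural images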

Week-11

Implement the standard LeNet-5 CNN architecture model to classify a multi category image dataset (MNIST) and check the accuracy.

Program-
# LeNet

import tensorflow as tf
from tensorflow import keras
import numpy as np
(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_x = tf.expand_dims(train_x, 3)
test_x = tf.expand_dims(test_x, 3)

val_x = train_x[:5000]   # note: these 5000 samples are also part of the training set below
val_y = train_y[:5000]

lenet_5_model = keras.models.Sequential([
keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh',
input_shape=train_x[0].shape, padding='same'), #C1
keras.layers.AveragePooling2D(), #S2
keras.layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh',
padding='valid'), #C3
keras.layers.AveragePooling2D(), #S4
keras.layers.Conv2D(120, kernel_size=5, strides=1, activation='tanh',
padding='valid'), #C5
keras.layers.Flatten(), #Flatten
keras.layers.Dense(84, activation='tanh'), #F6
keras.layers.Dense(10, activation='softmax') #Output layer
])

lenet_5_model.compile(optimizer='adam',
loss=keras.losses.sparse_categorical_crossentropy, metrics=['accuracy'])
lenet_5_model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y))
lenet_5_model.evaluate(test_x, test_y)

OUTPUT:

Exercise:

Implement the standard LeNet CNN architecture model to classify multi category image dataset (Fashion
MNIST) and check the accuracy. Below note down only the changes made and the accuracies obtained for epochs
5, 50, 250.

Week-12
Implement the standard VGG-16 CNN architecture model to classify the cat-and-dog image dataset and check the accuracy.

Program-
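(The listing for this program is blank in this copy of the manual. The following is a minimal transfer-learning sketch using keras.applications.VGG16 with frozen ImageNet weights; the 'dataset/train' directory layout, with 'cats' and 'dogs' subfolders, and the Dense(256) head are assumptions.)

# VGG-16 via transfer learning on a cats-vs-dogs image folder
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                         # freeze the convolutional base

model = Sequential([base,
                    Flatten(),
                    Dense(256, activation='relu'),
                    Dense(1, activation='sigmoid')])   # binary: cat vs dog
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'dataset/train', target_size=(224, 224), batch_size=32, class_mode='binary')
model.fit(train_gen, epochs=5)

For the VGG-19 exercise, keras.applications.VGG19 can be dropped in the same way.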
OUTPUT:

Exercise:

Implement the standard VGG 19 CNN architecture model to classify cat and dog image dataset and check the
accuracy. Make the necessary changes whenever required.

Week-13

Implement RNN for sentiment analysis on movie reviews.

Program-

# RNN sentiment analysis on movie reviews

from keras.datasets import imdb
from keras.utils import pad_sequences
from keras import Sequential
from keras.layers import Dense, SimpleRNN, Embedding, Flatten
import numpy as np

(X_train, y_train), (X_test, y_test) = imdb.load_data()
X_train = pad_sequences(X_train, padding='post', maxlen=50)
X_test = pad_sequences(X_test, padding='post', maxlen=50)
# SimpleRNN expects 3-D input (batch, timesteps, features), so add a trailing axis;
# feeding raw word indices as numeric features is crude - see the Embedding exercise below
X_train = np.expand_dims(X_train, -1)
X_test = np.expand_dims(X_test, -1)
print(X_train.shape)
model = Sequential()
#model.add(Embedding(10000, 2))
model.add(SimpleRNN(32, input_shape=(50,1), return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X_train, y_train,epochs=5,validation_data=(X_test,y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test loss', test_loss)
print('Test accuracy', test_acc)

OUTPUT:

Exercise:
Implement RNN for sentiment analysis on movie reviews. Use the concept of an Embedding layer. A hedged sketch follows.
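(A hedged sketch for this exercise: the embedding width of 32 and the 10000-word vocabulary cap are illustrative choices.)

# RNN with an Embedding layer for IMDB sentiment analysis
from keras.datasets import imdb
from keras.utils import pad_sequences
from keras import Sequential
from keras.layers import Dense, SimpleRNN, Embedding

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=10000)
X_train = pad_sequences(X_train, padding='post', maxlen=50)
X_test = pad_sequences(X_test, padding='post', maxlen=50)

model = Sequential()
model.add(Embedding(10000, 32, input_length=50))   # word index -> dense 32-d vector
model.add(SimpleRNN(32, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))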

Week-14
Implement Bi-directional LSTM for sentiment analysis on movie reviews.

Program-
# Bi directional LSTM

import numpy as np
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb

n_unique_words = 10000  # keep only the 10000 most frequent words


maxlen = 200
batch_size = 128
(x_train, y_train),(x_test, y_test) = imdb.load_data(num_words=n_unique_words)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
y_train = np.array(y_train)
y_test = np.array(y_test)

model = Sequential()
model.add(Embedding(n_unique_words, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history=model.fit(x_train, y_train, batch_size=batch_size, epochs=10,
validation_data=[x_test, y_test])
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
print(history.history['loss'])
print(history.history['accuracy'])
from matplotlib import pyplot
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['accuracy'])
pyplot.title('model loss vs accuracy')
pyplot.xlabel('epoch')
pyplot.legend(['loss', 'accuracy'], loc='upper right')
pyplot.show()

OUTPUT:

Exercise:
Implement Bi-directional LSTM on a suitable dataset of your choice. Modify the program as needed.
