DLT Lab PDF

The document outlines the implementation and evaluation of neural network models: a feed-forward network for flower recognition, a Multi-Layer Perceptron (MLP) for the XOR problem, recurrent networks (RNN, LSTM) for sequential data and sentiment analysis, CNNs for image classification and traffic object detection, and a Restricted Boltzmann Machine for image augmentation. Each section details the aim, algorithm, and program code needed to build and assess the respective model.


1. Implement a simple feed-forward neural network.

a. Create a basic network
b. Analyze performance by varying the batch size, number of hidden layers, and learning rate.
c. Create a confusion matrix to validate the performance of your model.

Aim: To create and evaluate the performance of a basic feed-forward neural network by varying batch
size, hidden layers, and learning rates.

Algorithm:

1. Import necessary libraries (e.g., Keras, TensorFlow).
2. Initialize the Sequential model.
3. Add Dense layers with a varying number of hidden neurons.
4. Compile the model with an appropriate loss function and optimizer.
5. Train the model and evaluate its performance.
6. Analyze performance using a confusion matrix and accuracy metrics.

Program:

Flower recognition
# Ignore the warnings
import warnings
warnings.filterwarnings('always')
warnings.filterwarnings('ignore')

# Data visualisation and manipulation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import seaborn as sns

# Configure: sets matplotlib to inline and displays graphs below the corresponding cell.
%matplotlib inline
style.use('fivethirtyeight')
sns.set(style='whitegrid', color_codes=True)

# Model selection
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix, roc_curve, roc_auc_score
from sklearn.preprocessing import LabelEncoder

# Preprocessing
from keras.preprocessing.image import ImageDataGenerator

# DL libraries
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam, SGD, Adagrad, Adadelta, RMSprop
from keras.utils import to_categorical

# Specifically for CNN
from keras.layers import Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization

import tensorflow as tf
import random as rn

# Specifically for manipulating zipped images and getting numpy arrays of pixel values of images.
import cv2
import os
from tqdm import tqdm
from random import shuffle
from zipfile import ZipFile
from PIL import Image

X = []
Z = []
IMG_SIZE = 150
FLOWER_DAISY_DIR = '../input/flowers/flowers/daisy'
FLOWER_SUNFLOWER_DIR = '../input/flowers/flowers/sunflower'
FLOWER_TULIP_DIR = '../input/flowers/flowers/tulip'
FLOWER_DANDI_DIR = '../input/flowers/flowers/dandelion'
FLOWER_ROSE_DIR = '../input/flowers/flowers/rose'

def assign_label(img, flower_type):
    return flower_type

def make_train_data(flower_type, DIR):
    for img in tqdm(os.listdir(DIR)):
        label = assign_label(img, flower_type)
        path = os.path.join(DIR, img)
        img = cv2.imread(path, cv2.IMREAD_COLOR)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        X.append(np.array(img))
        Z.append(str(label))

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='Same', activation='relu', input_shape=(150, 150, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='Same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=96, kernel_size=(3, 3), padding='Same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=96, kernel_size=(3, 3), padding='Same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dense(5, activation='softmax'))

batch_size = 128
epochs = 50

from keras.callbacks import ReduceLROnPlateau
red_lr = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.1)

datagen = ImageDataGenerator(
    featurewise_center=False,             # set input mean to 0 over the dataset
    samplewise_center=False,              # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,   # divide each input by its std
    zca_whitening=False,                  # apply ZCA whitening
    rotation_range=10,                    # randomly rotate images in the range (degrees, 0 to 180)
    zoom_range=0.1,                       # randomly zoom images
    width_shift_range=0.2,                # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.2,               # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,                 # randomly flip images horizontally
    vertical_flip=False)                  # do not flip images vertically

datagen.fit(x_train)

model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

History = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                              epochs=epochs, validation_data=(x_test, y_test),
                              verbose=1, steps_per_epoch=x_train.shape[0] // batch_size)


plt.plot(History.history['loss'])
plt.plot(History.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['train', 'test'])
plt.show()

plt.plot(History.history['acc'])
plt.plot(History.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train', 'test'])
plt.show()

# Predict class probabilities, then compare predicted classes against true classes.
result = model.predict(x_test)
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(result, axis=1))
sns.heatmap(cm, annot=True)
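Hyperparameter comparison (part b): the program above trains one fixed configuration. The sketch below is one minimal way to run the requested sweep over batch size, hidden-layer count, and learning rate; it assumes x_train, y_train, x_test, and y_test are prepared as above, and the helper name build_ffnn and the specific values tried are illustrative, not part of the original lab code.

def build_ffnn(n_hidden_layers, learning_rate):
    # Small fully connected (feed-forward) classifier; depth and learning
    # rate are the knobs under study.
    m = Sequential()
    m.add(Flatten(input_shape=(150, 150, 3)))
    for _ in range(n_hidden_layers):
        m.add(Dense(128, activation='relu'))
    m.add(Dense(5, activation='softmax'))
    m.compile(optimizer=Adam(lr=learning_rate),
              loss='categorical_crossentropy', metrics=['accuracy'])
    return m

for bs in (32, 128):
    for n_hidden in (1, 2, 3):
        for lr in (0.001, 0.0001):
            m = build_ffnn(n_hidden, lr)
            h = m.fit(x_train, y_train, batch_size=bs, epochs=10,
                      validation_data=(x_test, y_test), verbose=0)
            # The history key may be 'val_acc' on older Keras versions.
            print(bs, n_hidden, lr, h.history['val_accuracy'][-1])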

OUTPUT:
RESULT:
Successfully created and evaluated the performance of a basic feed-forward neural network by varying
batch size, hidden layers, and learning rates.
2. Solve XOR Problem using Multi-Layer Perceptron
Aim: To solve the XOR problem using a Multi-Layer Perceptron (MLP) and analyze its performance.

Algorithm:

● Import libraries (NumPy, TensorFlow).
● Define the XOR input and output values.
● Initialize a Sequential MLP model.
● Add a hidden Dense layer with 2 neurons and a ReLU activation function.
● Add an output Dense layer with 1 neuron using sigmoid activation.
● Compile the model using binary cross-entropy loss and an optimizer like Adam.
● Train the model and evaluate its accuracy.
● Make predictions on the XOR input.

Program:

# Importing necessary libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# XOR input and output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Building the MLP model
model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))  # Hidden layer with 2 neurons
model.add(Dense(1, activation='sigmoid'))            # Output layer with 1 neuron for binary output

# Compiling the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Training the model
model.fit(X, y, epochs=1000, verbose=0)

# Evaluating the model
_, accuracy = model.evaluate(X, y)
print(f'Accuracy: {accuracy*100:.2f}%')

# Making predictions
predictions = model.predict(X)
predictions = [1 if p > 0.5 else 0 for p in predictions]
print(f'Predictions: {predictions}')

Output:

accuracy: 0.7500 - loss: 0.6050

Accuracy: 75.00%

Predictions: [1, 1, 1, 0]
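Note: 75% accuracy as above is common for this architecture; with only 2 ReLU hidden units the network often settles in a poor local minimum instead of learning XOR. A minimal variant that converges far more reliably, assuming the same X and y as above (the hidden width of 8 is illustrative):

# Wider hidden layer: 8 ReLU units almost always separate XOR within 1000 epochs.
model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)
print([1 if p > 0.5 else 0 for p in model.predict(X)])  # expected: [0, 1, 1, 0]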

Result: Successfully implemented a Multi-Layer Perceptron to solve the XOR problem and evaluated its
performance.
3. Implement Recurrent Neural Network (RNN)

Aim: To implement a Recurrent Neural Network (RNN) for processing sequential data.

Algorithm:

● Load and preprocess sequential data (e.g., time series).
● Define an RNN model using SimpleRNN or LSTM layers.
● Compile the model using MSE or categorical loss depending on the problem.
● Train the model on sequential data.
● Evaluate the model's accuracy and visualize the results.

Program:

import pandas as pd
import matplotlib.pyplot as plt
from numpy import array, hstack
from keras.models import Sequential
from keras.layers import Dense, LSTM, RNN, SimpleRNN, Dropout, Activation
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.optimizers import Adam
from keras.callbacks import LambdaCallback
from sklearn.preprocessing import MinMaxScaler
import os

print(os.listdir("../input/bike-sharing-dataset"))

dataset = pd.read_csv('../input/bike-sharing-dataset/day.csv')

plt.figure(figsize=(15, 10))
plt.plot(dataset['cnt'], color='blue')
plt.show()

temp = dataset[dataset.yr == 1]
temp = temp[temp.mnth == 10]
print(temp.cnt.mean())
temp.head()

one_hot = pd.get_dummies(dataset['weekday'], prefix='weekday')
dataset = dataset.join(one_hot)
one_hot = pd.get_dummies(dataset['weathersit'], prefix='weathersit')
dataset = dataset.join(one_hot)

scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(array(dataset['cnt']).reshape(len(dataset['cnt']), 1))
series = pd.DataFrame(scaled)
series.columns = ['cntscl']
dataset = pd.merge(dataset, series, left_index=True, right_index=True)
dataset.head()

number_of_test_data = 50
number_of_holdout_data = 50
number_of_training_data = len(dataset) - number_of_holdout_data - number_of_test_data
print("total, train, test, holdout:", len(dataset), number_of_training_data,
      number_of_test_data, number_of_holdout_data)

datatrain = dataset[:number_of_training_data]
datatest = dataset[-(number_of_test_data + number_of_holdout_data):-number_of_holdout_data]
datahold = dataset[-number_of_holdout_data:]

# Build train/test/holdout feature matrices: 16 input columns plus the scaled target.
FEATURE_COLS = ['holiday', 'workingday', 'temp', 'atemp', 'hum', 'windspeed',
                'weekday_0', 'weekday_1', 'weekday_2', 'weekday_3', 'weekday_4',
                'weekday_5', 'weekday_6', 'weathersit_1', 'weathersit_2', 'weathersit_3']

def make_feed(frame):
    # Stack each feature column (reshaped to a column vector) plus the scaled target.
    columns = [array(frame[col]).reshape(-1, 1) for col in FEATURE_COLS]
    out_seq = array(frame['cntscl']).reshape(-1, 1)
    return hstack(columns + [out_seq]), out_seq

datatrain_feed, out_seq_train = make_feed(datatrain)
datatest_feed, out_seq_test = make_feed(datatest)
datahold_feed, out_seq_hold = make_feed(datahold)

n_features = datatrain_feed.shape[1]
n_input = 10

generator_train = TimeseriesGenerator(datatrain_feed, out_seq_train, length=n_input,
                                      batch_size=len(datatrain_feed))
generator_test = TimeseriesGenerator(datatest_feed, out_seq_test, length=n_input, batch_size=1)
generator_hold = TimeseriesGenerator(datahold_feed, out_seq_hold, length=n_input, batch_size=1)

print("timesteps, features:", n_input, n_features)


# Output: timesteps, features: 10 17

model = Sequential()
model.add(SimpleRNN(4, activation='relu', input_shape=(n_input, n_features), return_sequences=False))
model.add(Dense(1, activation='relu'))

adam = Adam(lr=0.0001)
model.compile(optimizer=adam, loss='mse')
model.summary()

score = model.fit_generator(generator_train, epochs=3000, verbose=0,
                            validation_data=generator_test)

losses = score.history['loss']
val_losses = score.history['val_loss']
plt.figure(figsize=(10, 5))
plt.plot(losses, label="trainset")
plt.plot(val_losses, label="testset")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
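The "actual vs prediction" plot in the output below is not produced by the code above; a minimal sketch, assuming the generator_hold and out_seq_hold prepared earlier (the first n_input holdout targets have no complete input window, so they are skipped when aligning the two series):

# Predict each holdout window and compare against the scaled actual counts.
predictions = model.predict_generator(generator_hold)
plt.figure(figsize=(10, 5))
plt.plot(out_seq_hold[n_input:], label="actual")
plt.plot(predictions, label="prediction")
plt.legend()
plt.show()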

OUTPUT

Layer (type)               Output Shape   Param #
=================================================
simple_rnn_1 (SimpleRNN)   (None, 4)      88
dense_1 (Dense)            (None, 1)      5
=================================================
Total params: 93
Trainable params: 93
Non-trainable params: 0

[Figure: train vs. test loss]
[Figure: actual vs. prediction on the holdout set]

Result: Successfully implemented a Recurrent Neural Network for processing sequential data and
evaluated its accuracy.
4. Implement Long Short-Term Memory (LSTM)

Aim: To implement and evaluate an LSTM network for processing sequential data.

Algorithm:

● Load the dataset and pad sequences for consistency.
● Build an LSTM model with an Embedding layer followed by LSTM layers.
● Compile the model with binary cross-entropy and Adam optimizer.
● Train and evaluate the model using accuracy as a metric.

Program:

from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.datasets import imdb
import matplotlib.pyplot as plt

print('Loading data...')
# num_words: how many unique words to load into the training and testing datasets
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=20000)

x_train = sequence.pad_sequences(x_train, maxlen=80)
x_test = sequence.pad_sequences(x_test, maxlen=80)

model = Sequential()
model.add(Embedding(20000, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Keep the History object so training curves can be plotted afterwards.
train_history = model.fit(x_train, y_train,
                          batch_size=32,
                          epochs=15, verbose=2,
                          validation_data=(x_test, y_test))

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

show_train_history(train_history, 'accuracy', 'val_accuracy')

score, acc = model.evaluate(x_test, y_test,
                            batch_size=32,
                            verbose=2)
print('Test score:', score)
print('Test accuracy:', acc)
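pad_sequences truncates or left-pads every review to exactly 80 integers so each batch is rectangular; a quick illustration with hypothetical short sequences:

demo = sequence.pad_sequences([[1, 2, 3], [4, 5]], maxlen=4)
print(demo)
# [[0 1 2 3]
#  [0 0 4 5]]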
OUTPUT

Test loss: 0.37
Test accuracy: 0.87

Result: Successfully implemented and evaluated an LSTM network for processing sequential data with
improved prediction accuracy.
5. Neural Network Models using TensorFlow and Keras

Aim: To implement a Convolutional Neural Network (CNN) using TensorFlow and Keras.

Algorithm:

● Load the dataset (e.g., MNIST).
● Build the CNN model with Conv2D, MaxPooling2D, and Dense layers.
● Compile the model using categorical cross-entropy and Adam optimizer.
● Train the model and evaluate its performance.
● Visualize training and testing accuracy (see the sketch after the program).

Program:

# Import necessary libraries
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preprocess the data: normalize the pixel values between 0 and 1
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)).astype('float32') / 255
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)).astype('float32') / 255

# One-hot encode the labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build the neural network model
model = models.Sequential()

# Add convolutional layers followed by max-pooling
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

# Flatten the output and add Dense layers
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model, keeping the History object for plotting
history = model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc}")

OUTPUT:

Test accuracy: 70.45%

Result: Successfully implemented a Convolutional Neural Network using TensorFlow and Keras and
evaluated its performance on the dataset.
6. Implement Text Classifier using RNN

Aim: To build a text classifier using RNN for sentiment analysis.

Algorithm:

● Load text data (e.g., IMDB dataset).
● Vectorize text data and prepare batches.
● Build an RNN model with embedding and LSTM layers.
● Compile the model using binary cross-entropy.
● Train the model and validate performance on test data.

Program:

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt

# Obtain the IMDB review dataset from TensorFlow Datasets
dataset = tfds.load('imdb_reviews', as_supervised=True)

# Separate test and train datasets
train_dataset, test_dataset = dataset['train'], dataset['test']

# Split the test and train data into batches of 32, shuffling the training set
batch_size = 32
train_dataset = train_dataset.shuffle(10000)
train_dataset = train_dataset.batch(batch_size)
test_dataset = test_dataset.batch(batch_size)

example, label = next(iter(train_dataset))
print('Text:\n', example.numpy()[0])
print('\nLabel: ', label.numpy()[0])

# Using the TextVectorization layer to normalize, split, and map strings to integers.
encoder = tf.keras.layers.TextVectorization(max_tokens=10000)
encoder.adapt(train_dataset.map(lambda text, _: text))

# Extracting the vocabulary from the TextVectorization layer.
vocabulary = np.array(encoder.get_vocabulary())

# Encoding a test example and decoding it back.
original_text = example.numpy()[0]
encoded_text = encoder(original_text).numpy()
decoded_text = ' '.join(vocabulary[encoded_text])
print('original: ', original_text)
print('encoded: ', encoded_text)
print('decoded: ', decoded_text)

# Creating the model
model = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Embedding(
        len(encoder.get_vocabulary()), 64, mask_zero=True),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])

# Summary of the model
model.summary()

# Compile the model
model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy']
)

# Training the model and validating it on the test set
history = model.fit(
    train_dataset, epochs=5,
    validation_data=test_dataset,
)

# Plotting the accuracy and loss over time
# Training history
history_dict = history.history

# Separating validation and training accuracy
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']

# Separating validation and training loss
loss = history_dict['loss']
val_loss = history_dict['val_loss']

# Plotting
plt.figure(figsize=(8, 4))

plt.subplot(1, 2, 1)
plt.plot(acc)
plt.plot(val_acc)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(['Accuracy', 'Validation Accuracy'])

plt.subplot(1, 2, 2)
plt.plot(loss)
plt.plot(val_loss)
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Loss', 'Validation Loss'])
plt.show()

sample_text = (
    '''The movie by GeeksforGeeks was so good and the animation are so dope. I would
    recommend my friends to watch it.'''
)
predictions = model.predict(np.array([sample_text]))
print(*predictions[0])

# Print the label based on the prediction (the model outputs logits,
# so a positive value means a positive review)
if predictions[0] > 0:
    print('The review is positive')
else:
    print('The review is negative')


OUTPUT

The review is positive

Result: Successfully built a text classifier using an RNN for sentiment analysis and evaluated its
performance on the test data.
7. Image Classifier using CNN

Aim: To implement an image classifier using a Convolutional Neural Network (CNN).

Algorithm:

● Load and preprocess image data.
● Define the CNN model architecture with Conv2D, MaxPooling, and Dense layers.
● Compile the model with categorical cross-entropy and Adam optimizer.
● Train the model on image data and evaluate it using accuracy metrics.

Program:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc, accuracy_score
from keras.utils import np_utils
import itertools

data = np.load('../input/orl-faces/ORL_faces.npz')

# Load the train images and normalize every image
x_train = data['trainX']
x_train = np.array(x_train, dtype='float32') / 255
x_test = data['testX']
x_test = np.array(x_test, dtype='float32') / 255

# Load the labels of the images
y_train = data['trainY']
y_test = data['testY']

# Show the train and test data format
print('x_train: {}'.format(x_train[:]))
print('y_train: {}'.format(y_train))
print('x_test shape: {}'.format(x_test.shape))

x_train, x_valid, y_train, y_valid = train_test_split(
    x_train, y_train, test_size=.05, random_state=1234)

im_rows = 112
im_cols = 92
batch_size = 512
im_shape = (im_rows, im_cols, 1)

# Change the size of the images
x_train = x_train.reshape(x_train.shape[0], *im_shape)
x_test = x_test.reshape(x_test.shape[0], *im_shape)
x_valid = x_valid.reshape(x_valid.shape[0], *im_shape)
print('x_train size: {}'.format(y_train.shape[0]))
print('y_test shape: {}'.format(y_test.shape))

cnn_model = Sequential([
    Conv2D(filters=36, kernel_size=7, activation='relu', input_shape=im_shape),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=54, kernel_size=5, activation='relu'),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(2024, activation='relu'),
    Dropout(0.5),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(512, activation='relu'),
    Dropout(0.5),
    # 20 is the number of output classes
    Dense(20, activation='softmax')
])

cnn_model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(lr=0.0001),
    metrics=['accuracy']
)

cnn_model.summary()

history = cnn_model.fit(
    np.array(x_train), np.array(y_train), batch_size=512, epochs=250,
    verbose=2, validation_data=(np.array(x_valid), np.array(y_valid))
)

scor = cnn_model.evaluate(np.array(x_test), np.array(y_test), verbose=0)
print('test loss {:.4f}'.format(scor[0]))
print('test acc {:.4f}'.format(scor[1]))

print(history.history.keys())

# Summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# Summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
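confusion_matrix is imported but never used in the program; a minimal sketch to close that loop, assuming the trained cnn_model and the test arrays above (labels are sparse integers, matching the sparse categorical loss):

# Predicted class per test image vs. the integer ground-truth labels.
y_pred = np.argmax(cnn_model.predict(np.array(x_test)), axis=1)
print(confusion_matrix(np.array(y_test), y_pred))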

OUTPUT

test loss 0.3272
test acc 0.9375


Result: Successfully implemented an image classifier using a Convolutional Neural Network and
evaluated its performance with accuracy metrics.
8. Object Detection and Classification for Traffic Analysis using CNN

Aim: To design a CNN-based system for object detection and classification in traffic analysis.

Algorithm:

● Load and preprocess traffic image data.
● Define a custom CNN model with two outputs: one for classification and one for bounding box regression.
● Compile the model with different loss functions for each output.
● Train the model on traffic data.
● Save and evaluate the model.

Program:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Concatenate
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np

# Constants
IMG_WIDTH, IMG_HEIGHT = 224, 224
BATCH_SIZE = 16
EPOCHS = 20
NUM_CLASSES = 5  # For example: car, bus, truck, bike, pedestrian

# Image data generators for traffic images (assumes labels include both classification and bounding boxes)
train_data_dir = "path_to_train_data"
validation_data_dir = "path_to_validation_data"

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(IMG_WIDTH, IMG_HEIGHT),
    batch_size=BATCH_SIZE, class_mode='categorical'
)

validation_generator = train_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(IMG_WIDTH, IMG_HEIGHT),
    batch_size=BATCH_SIZE, class_mode='categorical'
)

# Define a custom CNN model for both classification and bounding box regression
def create_model():
    # Input layer
    inputs = Input(shape=(IMG_WIDTH, IMG_HEIGHT, 3))

    # Feature extraction with CNN
    x = Conv2D(32, (3, 3), activation='relu')(inputs)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(128, (3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Flatten()(x)

    # Fully connected layer
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.5)(x)

    # Classification output (softmax for multi-class classification)
    class_output = Dense(NUM_CLASSES, activation='softmax', name='class_output')(x)

    # Bounding box output (4 values: xmin, ymin, xmax, ymax)
    bbox_output = Dense(4, activation='linear', name='bbox_output')(x)

    # Create the model with two outputs
    model = Model(inputs=inputs, outputs=[class_output, bbox_output])
    return model

# Create the model
model = create_model()

# Compile the model with two loss functions: one for classification and one for bounding box regression
model.compile(
    optimizer=Adam(learning_rate=0.0001),
    loss={
        'class_output': 'categorical_crossentropy',
        'bbox_output': 'mean_squared_error'
    },
    metrics={
        'class_output': 'accuracy',
        'bbox_output': 'mse'
    }
)

# Train the model
history = model.fit(
    train_generator, steps_per_epoch=train_generator.samples // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // BATCH_SIZE
)

# Save the model
model.save('traffic_object_detection_model.h5')
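Note that flow_from_directory yields only class labels, so fitting the two-output model as written would fail; in practice a custom generator must also supply bounding-box targets. A minimal, hypothetical sketch (the names two_output_generator and bbox_for_batch are illustrative; a real pipeline must look up the boxes that match each image in the batch):

def two_output_generator(image_generator, bbox_for_batch):
    # Wrap a Keras directory generator so each batch also carries box targets.
    while True:
        images, class_labels = next(image_generator)
        bboxes = bbox_for_batch(images)  # shape (batch, 4): xmin, ymin, xmax, ymax
        yield images, {'class_output': class_labels, 'bbox_output': bboxes}

# Usage: model.fit(two_output_generator(train_generator, bbox_for_batch), ...)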

OUTPUT:

Result: Successfully designed a CNN-based system for object detection and classification in traffic
analysis and evaluated its effectiveness.
9. Image Augmentation using Deep Restricted Boltzmann Machine (RBM)

Aim: To implement deep RBM for image augmentation and generation.

Algorithm:

● Load the dataset and normalize pixel values.
● Initialize RBM model with visible and hidden units.
● Train the RBM using contrastive divergence (see the update rule below).
● Generate new image samples from the hidden layer.
● Visualize generated images.
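For reference, the CD-1 contrastive divergence update implemented in the program below follows the standard rule, with learning rate $\epsilon$, data statistics on the left, and reconstruction statistics (primed quantities $v'$, $h'$) on the right:

$$\Delta W = \epsilon\,\big(\langle v\,h^\top\rangle_{\text{data}} - \langle v'\,h'^\top\rangle_{\text{recon}}\big), \qquad \Delta b_v = \epsilon\,\langle v - v'\rangle, \qquad \Delta b_h = \epsilon\,\langle h - h'\rangle$$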

Program:

import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt

# Load dataset (MNIST in this example)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train / 255.0                        # Normalize pixel values
x_train = x_train.reshape(x_train.shape[0], -1)  # Flatten images for RBM input

# RBM class
class RBM:
    def __init__(self, visible_units, hidden_units, learning_rate=0.01, epochs=10, batch_size=64):
        self.visible_units = visible_units
        self.hidden_units = hidden_units
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.batch_size = batch_size
        self.weights = np.random.randn(visible_units, hidden_units) * 0.01
        self.h_bias = np.zeros(hidden_units)
        self.v_bias = np.zeros(visible_units)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def forward(self, visible):
        hidden_prob = self.sigmoid(np.dot(visible, self.weights) + self.h_bias)
        hidden_state = (hidden_prob > np.random.rand(self.hidden_units)).astype(np.float32)
        return hidden_prob, hidden_state

    def backward(self, hidden):
        visible_prob = self.sigmoid(np.dot(hidden, self.weights.T) + self.v_bias)
        return visible_prob

    def contrastive_divergence(self, visible):
        hidden_prob, hidden_state = self.forward(visible)
        positive_gradient = np.dot(visible.T, hidden_prob)

        # Reconstruct the visible layer and run the forward pass again
        visible_reconstructed = self.backward(hidden_state)
        hidden_reconstructed_prob, _ = self.forward(visible_reconstructed)
        negative_gradient = np.dot(visible_reconstructed.T, hidden_reconstructed_prob)

        # Update weights and biases
        self.weights += self.learning_rate * (positive_gradient - negative_gradient) / visible.shape[0]
        self.v_bias += self.learning_rate * np.mean(visible - visible_reconstructed, axis=0)
        self.h_bias += self.learning_rate * np.mean(hidden_prob - hidden_reconstructed_prob, axis=0)

    def train(self, data):
        for epoch in range(self.epochs):
            np.random.shuffle(data)
            for batch in range(0, data.shape[0], self.batch_size):
                batch_data = data[batch:batch + self.batch_size]
                self.contrastive_divergence(batch_data)
            print(f"Epoch {epoch + 1}/{self.epochs} completed")

    def generate_image(self, hidden_sample):
        visible_reconstructed = self.backward(hidden_sample)
        return visible_reconstructed

# Parameters
visible_units = x_train.shape[1]
hidden_units = 256
epochs = 10
learning_rate = 0.1
batch_size = 64

# Instantiate and train the RBM
rbm = RBM(visible_units, hidden_units, learning_rate, epochs, batch_size)
rbm.train(x_train)

# Image augmentation by generating new samples from the hidden layer
hidden_samples = np.random.rand(1, hidden_units)  # Random hidden state
generated_image = rbm.generate_image(hidden_samples)

# Display the generated image
generated_image = generated_image.reshape(28, 28)  # Reshape to original image dimensions
plt.imshow(generated_image, cmap="gray")
plt.show()
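A random hidden state yields a generic sample; to augment a specific image, one can instead encode a real training image into a stochastic hidden state and decode it back, producing a slightly varied reconstruction. A minimal sketch, assuming the trained rbm and the flattened x_train from above:

# Encode a real image into a stochastic hidden state, then decode it back.
original = x_train[0:1]                       # one flattened MNIST digit
_, hidden_state = rbm.forward(original)       # stochastic binary hidden sample
augmented = rbm.generate_image(hidden_state)  # reconstruction = augmented variant

plt.subplot(1, 2, 1); plt.imshow(original.reshape(28, 28), cmap='gray'); plt.title('original')
plt.subplot(1, 2, 2); plt.imshow(augmented.reshape(28, 28), cmap='gray'); plt.title('augmented')
plt.show()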

Output:
Result: Successfully implemented a deep RBM for image augmentation and generation, visualizing the
generated images effectively.
10. Sentiment Analysis using LSTM
Aim: To implement sentiment analysis using LSTM.

Algorithm:

● Load and preprocess text data (e.g., IMDB).
● Build an LSTM model with an embedding layer.
● Compile the model using binary cross-entropy and train it.
● Evaluate the model and visualize accuracy and loss across epochs.

Program:

from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.datasets import imdb
import matplotlib.pyplot as plt

print('Loading data...')
# num_words: how many unique words to load into the training and testing datasets
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=20000)

x_train = sequence.pad_sequences(x_train, maxlen=80)
x_test = sequence.pad_sequences(x_test, maxlen=80)

model = Sequential()
model.add(Embedding(20000, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Keep the History object so training curves can be plotted afterwards.
train_history = model.fit(x_train, y_train,
                          batch_size=32,
                          epochs=15, verbose=2,
                          validation_data=(x_test, y_test))

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

show_train_history(train_history, 'accuracy', 'val_accuracy')

score, acc = model.evaluate(x_test, y_test,
                            batch_size=32,
                            verbose=2)
print('Test score:', score)
print('Test accuracy:', acc)

OUTPUT
Result: Successfully implemented sentiment analysis using LSTM and evaluated the model's
performance across epochs.
