CCS355 - Neural Network and Deep Learning Lab Manual
Regulation - 2021
RECORD
NAME :
REG.NO :
YEAR/SEMESTER :
P.T.LEE CHENGALVARAYA NAICKER COLLEGE
OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)
Vallal P.T.Lee Chengalvaraya Naicker Nagar, Oovery, Kanchipuram – 631 502
BONAFIDE CERTIFICATE
3 Implement a perceptron in TensorFlow/Keras Environment

Additional Experiments
14 Implement Object Detection using CNN
15 Implement any simple Reinforcement Algorithm for an NLP problem
1. Jupyter Notebooks:
• Jupyter Notebooks are interactive documents that allow you to write and
execute code in a step-by-step manner.
• They are widely used for data analysis, machine learning, and
experimentation.
• You can install Jupyter using Anaconda or pip.
2. Anaconda:
• Anaconda is a distribution of Python that comes with pre-installed
scientific packages and tools.
• It includes popular libraries like NumPy, pandas, scikit-learn, and more.
• While Anaconda itself is not necessary, it can simplify the setup of your
Python environment.
3. Google Colab:
• Google Colab is a cloud-based Jupyter notebook environment provided by
Google.
• It allows you to run code in the cloud and provides access to GPU/TPU for
free.
• Colab is suitable for resource-intensive tasks like deep learning.
4. Installation of Libraries:
• Most of the experiments listed involve TensorFlow and Keras, which can
be installed using pip in any Python environment.
• Virtual environments can be used to manage dependencies for each
experiment (see the pip example under Recommendations below).
5. Platform-Agnostic:
• The experiments are platform-agnostic and can be executed on Windows,
Linux, or macOS.
Recommendations:
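The libraries used throughout these experiments can be installed with pip; the exact package list below is indicative rather than mandated:

pip install tensorflow numpy scikit-learn matplotlib pandas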
EX.NO:
Implement simple vector addition in TensorFlow
DATE:
Program Objective:
The objective of this program is to perform simple vector addition using
TensorFlow. This involves defining two vectors, adding them element-
wise, and then printing the original vectors as well as the resultant vector.
Problem Statement:
Perform vector addition of two given vectors using TensorFlow.
Aim:
To demonstrate how to use TensorFlow to perform basic vector addition.
Algorithm:
STEP 1: Import the TensorFlow library and enable eager execution.
STEP 2: Define two constant vectors.
STEP 3: Add the two vectors element-wise using tf.add.
STEP 4: Print the original vectors and the resultant vector.
Program:
import tensorflow as tf

# Enable eager execution (already the default in TensorFlow 2.x)
tf.compat.v1.enable_eager_execution()

# Define two vectors
vector1 = tf.constant([1.0, 2.0, 3.0])
vector2 = tf.constant([4.0, 5.0, 6.0])

# Perform element-wise vector addition
result_vector = tf.add(vector1, vector2)

# Print the results
print("Vector 1:", vector1.numpy())
print("Vector 2:", vector2.numpy())
print("Resultant Vector:", result_vector.numpy())
Output:
Vector 1: [1. 2. 3.]
Vector 2: [4. 5. 6.]
Resultant Vector: [5. 7. 9.]
Result:
The program outputs the original vectors (vector1 and vector2)
and the resultant vector after performing vector addition.
EX.NO:
Implement a regression model in Keras
DATE:
Program Objective:
The objective of this program is to implement a simple linear regression model
using the Keras library. The model will be trained on synthetic data, and
predictions will be made on new test data.
Problem Statement:
Create a regression model to predict output values based on input features. The
model should learn the underlying linear relationship in the data.
Aim:
To demonstrate the implementation of a linear regression model using Keras for
predictive modeling.
Algorithm:
STEP 1: Import necessary libraries (NumPy and TensorFlow).
STEP 2: Generate synthetic training data with a linear relationship and some noise.
STEP 3: Build a sequential model in Keras with a single dense layer (linear
activation) for regression.
STEP 4: Compile the model using Stochastic Gradient Descent (SGD) optimizer and
Mean Squared Error (MSE) loss function.
STEP 5: Train the model on the training data for a specified number of epochs.
STEP 6: Generate test data for predictions.
STEP 7: Use the trained model to make predictions on the test data.
STEP 8: Display the predicted output values.
Program:
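The printed listing for this experiment is missing; a minimal sketch that follows the algorithm above (the synthetic relationship y = 2x + 1 plus noise is an assumption) is:

import numpy as np
import tensorflow as tf

# STEP 2: synthetic training data with a linear relationship and some noise
np.random.seed(42)
X_train = np.random.rand(100, 1)
y_train = 2 * X_train + 1 + 0.1 * np.random.randn(100, 1)

# STEP 3: a single dense layer with linear activation performs the regression
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=(1,))
])

# STEP 4: SGD optimizer with mean squared error loss
model.compile(optimizer='sgd', loss='mse')

# STEP 5: train for a fixed number of epochs
model.fit(X_train, y_train, epochs=100, verbose=0)

# STEPs 6-8: predict on new test data and display the results
X_test = np.array([[0.2], [0.5], [0.8]])
predictions = model.predict(X_test)
print("Predictions:", predictions.flatten())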
Result: The program will output the predictions made by the regression model
on the test data.
EX.NO:
Implement a perceptron in TensorFlow/Keras Environment
DATE:
Program Objective:
The objective of this program is to implement a perceptron for binary
classification using TensorFlow/Keras. The perceptron is trained on synthetic
data and evaluated for accuracy on a test set.
Problem Statement:
Create a perceptron model for binary classification based on synthetic data. Train
the model and assess its accuracy on a test set.
Aim:
To demonstrate the implementation and training of a perceptron using
TensorFlow/Keras for binary classification.
Algorithm:
STEP 1: Import necessary libraries (NumPy, TensorFlow, scikit-learn).
STEP 2: Generate synthetic data for a binary classification task.
STEP 3: Split the data into training and testing sets.
STEP 4: Build a perceptron model in Keras with one dense layer and a sigmoid
activation function.
STEP 5: Compile the model using Stochastic Gradient Descent (SGD) optimizer
and binary crossentropy loss.
STEP 6: Train the perceptron on the training set for a specified number of
epochs.
STEP 7: Make predictions on the test set.
STEP 8: Evaluate the accuracy of the model on the test set.
Program:
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate some synthetic data for a binary classification task
np.random.seed(42)
X = np.random.rand(100, 2)               # Input features (2 features for simplicity)
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # Binary label based on a simple condition

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build a perceptron model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

# Train the perceptron
model.fit(X_train, y_train, epochs=50, batch_size=8, verbose=0)

# Make predictions on the test set
predictions = model.predict(X_test)
binary_predictions = (predictions > 0.5).astype(int)

# Evaluate accuracy
accuracy = accuracy_score(y_test, binary_predictions)
print("Accuracy on the test set:", accuracy)
Output:
Result: The program will output the accuracy of the trained perceptron on the
test set.
EX.NO:
Implement a Feed-Forward Network in TensorFlow/Keras.
DATE:
Program Objective:
The objective of this program is to implement a feed-forward neural network
using TensorFlow/Keras for binary classification. The network is trained on
synthetic data and evaluated for accuracy on a test set.
Problem Statement:
Create a feed-forward neural network for binary classification based on synthetic
data. Train the network and assess its accuracy on a test set.
Aim:
To demonstrate the implementation and training of a feed-forward neural
network using TensorFlow/Keras for binary classification.
Algorithm:
STEP 1: Import necessary libraries (NumPy, TensorFlow, scikit-learn).
STEP 2: Generate synthetic data for a binary classification task.
STEP 3: Split the data into training and testing sets.
STEP 4: Build a feed-forward neural network in Keras with two dense layers
(ReLU activation for the first layer, sigmoid activation for the output layer).
STEP 5: Compile the model using the Adam optimizer and binary crossentropy
loss.
STEP 6: Train the neural network on the training set for a specified number of
epochs.
STEP 7: Make predictions on the test set.
STEP 8: Evaluate the accuracy of the model on the test set.
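Program:
The listing for this experiment is missing from the printout; a minimal sketch following the algorithm above (the hidden-layer width of 8 units is an assumption) is:

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification data, as in the perceptron experiment
np.random.seed(42)
X = np.random.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feed-forward network: ReLU hidden layer, sigmoid output layer
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(units=8, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=8, verbose=0)

# Evaluate on the held-out test set
predictions = model.predict(X_test)
binary_predictions = (predictions > 0.5).astype(int)
print("Accuracy on the test set:", accuracy_score(y_test, binary_predictions))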
Output:
1/1 [==============================] - 0s 156ms/step
Accuracy on the test set: 0.9
Result: The program will output the accuracy of the trained feed-forward neural
network on the test set.
EX.NO:
Implement an Image Classifier using CNN in TensorFlow/Keras
DATE:
Program Objective:
The objective of this program is to implement an image classifier using
Convolutional Neural Networks (CNN) in TensorFlow/Keras. The CNN is trained
on the MNIST dataset and evaluated for accuracy on a test set.
Problem Statement:
Create a CNN-based image classifier for recognizing handwritten digits using the
MNIST dataset. Train the model and assess its accuracy on a test set.
Aim:
To demonstrate the implementation and training of a CNN using
TensorFlow/Keras for image classification.
Algorithm:
STEP 1: Import necessary libraries (TensorFlow, Keras, matplotlib).
STEP 2: Load and preprocess the MNIST dataset.
STEP 3: Build a CNN model in Keras with convolutional and pooling layers.
STEP 4: Compile the model using the Adam optimizer and categorical cross-entropy loss.
STEP 5: Train the CNN on the training set for a specified number of epochs.
STEP 6: Evaluate the model on the test set.
STEP 7: Display the test accuracy and plot the training history.
Program:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
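The rest of the listing is not shown; a sketch that follows the algorithm and is consistent with the 5-epoch, 938-steps-per-epoch (batch size 64) output below is:

# Load and preprocess MNIST: scale pixels to [0, 1], one-hot encode labels
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
test_images = test_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# CNN with convolutional and pooling layers
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=5, batch_size=64,
                    validation_data=(test_images, test_labels))

test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)

# Plot the training history
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()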
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Epoch 1/5
938/938 [==============================] - 61s 63ms/step - loss: 0.1887
- accuracy: 0.9410 - val_loss: 0.0621 - val_accuracy: 0.9806
Epoch 2/5
938/938 [==============================] - 56s 60ms/step - loss: 0.0541
- accuracy: 0.9833 - val_loss: 0.0393 - val_accuracy: 0.9878
Epoch 3/5
938/938 [==============================] - 54s 57ms/step - loss: 0.0363
- accuracy: 0.9887 - val_loss: 0.0356 - val_accuracy: 0.9886
Epoch 4/5
938/938 [==============================] - 52s 55ms/step - loss: 0.0287
- accuracy: 0.9907 - val_loss: 0.0382 - val_accuracy: 0.9888
Epoch 5/5
938/938 [==============================] - 53s 56ms/step - loss: 0.0239
- accuracy: 0.9924 - val_loss: 0.0288 - val_accuracy: 0.9908
313/313 [==============================] - 3s 9ms/step - loss: 0.0288 -
accuracy: 0.9908
Test accuracy: 0.9908000230789185
Result: The program will output the test accuracy of the CNN on the MNIST
dataset and display a plot of training accuracy and validation accuracy over
epochs.
EX.NO:
Improve the Deep learning model by fine-tuning hyperparameters
DATE:
Program Objective:
The objective of this program is to fine-tune the hyperparameters of a deep
learning model (CNN) for image classification on the MNIST dataset. The program
aims to improve the model's performance by adjusting hyperparameters such as
the number of filters, units in dense layers, learning rate, and using early stopping
with model checkpointing.
Problem Statement:
Fine-tune the hyperparameters of the CNN model for the MNIST dataset to
improve its accuracy. Utilize techniques such as adjusting the architecture,
learning rate, and implementing early stopping with model checkpointing.
Aim:
To demonstrate the improvement in model performance through fine-tuning
hyperparameters.
Algorithm:
STEP 1: Import necessary libraries (TensorFlow, Keras, matplotlib).
STEP 2: Load and preprocess the MNIST dataset.
STEP 3: Build a CNN model with hyperparameter tuning, including increased
filters, units, and adjusted learning rate.
STEP 4: Compile the model using the Adam optimizer and categorical
crossentropy loss.
STEP 5: Define callbacks for early stopping and model checkpointing.
STEP 6: Train the CNN model with fine-tuned hyperparameters, utilizing the
defined callbacks.
STEP 7: Load the best model from the checkpoint.
STEP 8: Evaluate the best model on the test set.
STEP 9: Display the test accuracy and plot the training history.
Program:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import matplotlib.pyplot as plt
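The remainder of the listing is missing; a sketch that matches the algorithm and the output below (up to 20 epochs, early stopping, and an HDF5 checkpoint; the exact filter/unit counts and learning rate are assumptions) is:

# Load and preprocess MNIST
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
test_images = test_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
train_labels, test_labels = to_categorical(train_labels), to_categorical(test_labels)

# Tuned architecture: more filters and a wider dense layer than the baseline CNN
model = models.Sequential([
    layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Adjusted learning rate for the Adam optimizer
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Early stopping plus checkpointing of the best model seen so far
callbacks = [
    EarlyStopping(monitor='val_loss', patience=3),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True)
]

history = model.fit(train_images, train_labels, epochs=20, batch_size=64,
                    validation_data=(test_images, test_labels), callbacks=callbacks)

# Load the best checkpoint and evaluate it on the test set
best_model = tf.keras.models.load_model('best_model.h5')
test_loss, test_acc = best_model.evaluate(test_images, test_labels)
print('Test accuracy of the best model:', test_acc)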
Output:
Epoch 1/20
938/938 [==============================] - 78s 82ms/step - loss: 0.2188
- accuracy: 0.9339 - val_loss: 0.0786 - val_accuracy: 0.9763
Epoch 2/20
3/938 [...............................] - ETA: 47s - loss: 0.0681 - accuracy:
0.9792/usr/local/lib/python3.10/dist-
packages/keras/src/engine/training.py:3103: UserWarning: You are saving your
model as an HDF5 file via `model.save()`. This file format is considered legacy.
We recommend using instead the native Keras format, e.g.
`model.save('my_model.keras')`.
saving_api.save_model(
938/938 [==============================] - 57s 61ms/step - loss: 0.0694
- accuracy: 0.9784 - val_loss: 0.0539 - val_accuracy: 0.9849
Epoch 3/20
938/938 [==============================] - 62s 66ms/step - loss: 0.0488
- accuracy: 0.9849 - val_loss: 0.0473 - val_accuracy: 0.9856
Epoch 4/20
938/938 [==============================] - 57s 61ms/step - loss: 0.0377
- accuracy: 0.9885 - val_loss: 0.0523 - val_accuracy: 0.9837
Epoch 5/20
938/938 [==============================] - 60s 64ms/step - loss: 0.0301
- accuracy: 0.9904 - val_loss: 0.0537 - val_accuracy: 0.9849
Epoch 6/20
938/938 [==============================] - 58s 62ms/step - loss: 0.0249
- accuracy: 0.9921 - val_loss: 0.0457 - val_accuracy: 0.9867
Epoch 7/20
938/938 [==============================] - 56s 60ms/step - loss: 0.0199
- accuracy: 0.9938 - val_loss: 0.0593 - val_accuracy: 0.9848
Epoch 8/20
938/938 [==============================] - 57s 61ms/step - loss: 0.0167
- accuracy: 0.9944 - val_loss: 0.0480 - val_accuracy: 0.9877
Epoch 9/20
938/938 [==============================] - 56s 60ms/step - loss: 0.0149
- accuracy: 0.9949 - val_loss: 0.0553 - val_accuracy: 0.9843
313/313 [==============================] - 4s 11ms/step - loss: 0.0457 -
accuracy: 0.9867
Test accuracy of the best model: 0.9866999983787537
Result: The program will output the test accuracy of the best model after fine-tuning
hyperparameters and display a plot of training accuracy and validation accuracy over
epochs.
EX.NO:
Implement a transfer learning concept in Image Classification
DATE:
Program Objective:
The objective of this program is to classify CIFAR-10 images using transfer learning: a pretrained VGG16 convolutional base is reused, and only custom classification layers added on top of it are trained.
Problem Statement:
Given the CIFAR-10 dataset, which consists of 60,000 32x32 color images in 10
classes, the task is to develop a deep learning model to classify these images into
their respective categories. The model will utilize transfer learning with a pre-trained
VGG16 model to achieve better performance with less training data. Specifically, the
program will load the CIFAR-10 dataset, preprocess the data, load the pre-trained
VGG16 model without its top classification layer, add custom classification layers on
top of it, freeze the convolutional base, compile the model, train it on the training
data, evaluate its performance on the test data, and print the test accuracy achieved
by the model.
Aim:
The aim of this program is to perform image classification on the CIFAR-10 dataset
using transfer learning with a pre-trained VGG16 model.
Algorithm:
STEP 1: Load the CIFAR-10 dataset and preprocess the data (normalize pixel values, one-hot encode the labels).
STEP 2: Load the pre-trained VGG16 model without its top classification layer.
STEP 3: Freeze the convolutional base.
STEP 4: Add custom classification layers on top of the frozen base.
STEP 5: Compile the model.
STEP 6: Train the model on the training data.
STEP 7: Evaluate the model on the test data and print the test accuracy.
Program:
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load pre-trained VGG16 model without the top classification layer and freeze
# the convolutional base
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
base_model.trainable = False

# Define model architecture
model = models.Sequential([
    base_model,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
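The printed listing ends at model.summary(); training and evaluation per STEPs 6-7 (the 10-epoch count matches the training log below; the batch size is an assumption) might be completed as:

from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

# Load and preprocess CIFAR-10 to match the categorical_crossentropy loss
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

history = model.fit(x_train, y_train, epochs=10, batch_size=64,
                    validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)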
Output:
Model: "sequential"
=================================================================
vgg16 (Functional) (None, 1, 1, 512) 14714688
=================================================================
Epoch 2/10
0.5435
Result:
After training the model for 10 epochs and evaluating it on the test data, the program
will print the test accuracy achieved by the model. This accuracy indicates the
percentage of correctly classified images in the test dataset. Additionally, the
program may display the summary of the model architecture, showing the layers and
the number of parameters.
EX.NO:
Using pretrained model on Keras for Transfer Learning
DATE:
Program Objective:
The objective of this program is to demonstrate transfer learning using a pretrained
VGG16 model on the CIFAR-10 dataset. The program aims to fine-tune the model for
the specific task of image classification on CIFAR-10 by adding custom top layers
while keeping the pretrained weights frozen.
Problem Statement:
Fine-tune a pretrained VGG16 model on the CIFAR-10 dataset for image
classification. Create a custom top architecture for the specific task.
Aim:
To showcase the effectiveness of transfer learning and the integration of a
pretrained VGG16 model for image classification on the CIFAR-10 dataset.
Algorithm:
STEP 1: Load the CIFAR-10 dataset and normalize pixel values to be between 0 and
1.
STEP 2: Load the pretrained VGG16 model without the top (fully connected) layers.
STEP 3: Freeze the weights of the pretrained layers.
STEP 4: Create a new model by adding custom top layers for the specific
classification task (CIFAR-10 has 10 classes).
STEP 5: Compile the model with appropriate settings.
STEP 6: Set up a data generator with data augmentation and validation split.
STEP 7: Train the model using the data generator.
STEP 8: Evaluate the model on the test set.
Program:
from tensorflow.keras.datasets import cifar10 from
tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models, optimizers
# Load the pretrained VGG16 model without the top (fully connected) layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
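The remainder of the listing (freezing, custom top layers, the augmented data generator, training, and evaluation, i.e. STEPs 3-8 of the algorithm) is not shown; a minimal sketch consistent with the 1250-step epochs in the output (batch size 32 with a 0.2 validation split; the augmentation settings are assumptions) is:

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

base_model.trainable = False  # STEP 3: freeze the pretrained weights

# STEP 4: custom top layers for the 10 CIFAR-10 classes
model = models.Sequential([
    base_model,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# STEP 5: compile
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

# STEP 6: data generator with augmentation and a validation split
datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1,
                             horizontal_flip=True, validation_split=0.2)
train_gen = datagen.flow(x_train, y_train, batch_size=32, subset='training')
val_gen = datagen.flow(x_train, y_train, batch_size=32, subset='validation')

# STEP 7: train; STEP 8: evaluate on the held-out test set
model.fit(train_gen, validation_data=val_gen, epochs=10)
model.evaluate(x_test, y_test)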
Model: "sequential"
Epoch 1/10
1250/1250 [==============================] - 522s 416ms/step - loss:
1.5227 - accuracy: 0.1009 - val_loss: 1.2595 - val_accuracy: 0.1067
...
Epoch 10/10
1250/1250 [==============================] - 543s 435ms/step - loss:
1.1144 - accuracy: 0.0960 - val_loss: 1.1163 - val_accuracy: 0.1063
313/313 [==============================] - 100s 321ms/step - loss: 1.1382 -
accuracy: 0.109
Result: The program will output the model architecture summary, training/validation
accuracy and loss for each epoch, and the final test accuracy.
EX.NO:
Implement an LSTM based Autoencoder in TensorFlow/Keras
DATE:
Program Objective:
The objective of this program is to implement an LSTM-based Autoencoder using
TensorFlow/Keras. Autoencoders are a type of neural network designed for
unsupervised learning that learns a compressed, efficient representation of input data.
The LSTM (Long Short-Term Memory) architecture is utilized to capture temporal
dependencies in sequence data.
Problem Statement:
The task is to create an autoencoder model that can effectively learn to encode and decode
sequences. In this specific example, the program generates random sequence data as a
placeholder. The model is trained to reconstruct the input sequences, and the training is
performed using mean squared error (MSE) as the loss function. The LSTM layers are
used for both encoding and decoding, and the overall architecture is summarized, trained,
and evaluated.
This program serves as a foundational example for understanding the implementation of
LSTM-based autoencoders, which can later be adapted for more complex applications
such as sequence-to-sequence prediction, anomaly detection in sequences, or
representation learning for sequential data.
Aim:
To showcase the implementation of an LSTM-based autoencoder in TensorFlow/Keras.
Algorithm:
STEP 1: Import required libraries such as numpy for numerical operations and
tensorflow with its submodules for building and training the LSTM-based autoencoder.
STEP 2: Create or load the input sequence data for training the autoencoder. In this
example, a random sequence of shape (100, 10, 1) is generated, representing 100
sequences, each of length 10 with one feature.
STEP 3: Create a Sequential model.
STEP 4: Add an LSTM layer for encoding with specified parameters like the number of
units (64), activation function ('relu'), and input shape.
STEP 5: Add a RepeatVector layer to repeat the encoded representation across the
sequence length.
STEP 6: Add another LSTM layer for decoding with similar parameters as the encoding
LSTM.
STEP 7: Print or visualize the reconstructed sequences and compare them with the
original input sequences to assess the performance of the autoencoder.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Random sequences: 100 samples, length 10, one feature; the model learns to
# reconstruct them (the 64-unit LSTMs match the 49,985-parameter summary below)
data = np.random.rand(100, 10, 1)
model = Sequential([
    LSTM(64, activation='relu', input_shape=(10, 1)),
    RepeatVector(10),
    LSTM(64, activation='relu', return_sequences=True),
    TimeDistributed(Dense(1))])
model.compile(optimizer='adam', loss='mse')
model.summary()
model.fit(data, data, epochs=10, batch_size=32)
reconstructed = model.predict(data)  # reconstruct the input sequences
Output:
=================================================================
Total params: 49985 (195.25 KB)
Trainable params: 49985 (195.25 KB)
Non-trainable params: 0 (0.00 Byte)
Epoch 1/10
4/4 [==============================] - 3s 14ms/step - loss: 0.3377
Epoch 2/10
4/4 [==============================] - 0s 14ms/step - loss: 0.2896
Epoch 3/10
4/4 [==============================] - 0s 14ms/step - loss: 0.2456
Epoch 4/10
4/4 [==============================] - 0s 14ms/step - loss: 0.1936
Epoch 5/10
4/4 [==============================] - 0s 15ms/step - loss: 0.1377
Epoch 6/10
4/4 [==============================] - 0s 15ms/step - loss: 0.1066
Epoch 7/10
4/4 [==============================] - 0s 14ms/step - loss: 0.1178
Epoch 8/10
4/4 [==============================] - 0s 14ms/step - loss: 0.1005
Epoch 9/10
4/4 [==============================] - 0s 14ms/step - loss: 0.1017
Epoch 10/10
4/4 [==============================] - 0s 19ms/step - loss: 0.1023
4/4 [==============================] - 0s
Result: The program will output the model architecture summary and the reconstruction (mean squared error) loss for each training epoch.
EX.NO:
Image generation using GAN
DATE:
Program Objective:
The objective of this program is to implement a Generative Adversarial Network (GAN)
using TensorFlow/Keras for generating synthetic images resembling the MNIST dataset.
Problem Statement:
Generate realistic-looking handwritten digits resembling the MNIST dataset using a
GAN. The GAN consists of a generator and a discriminator. The generator creates fake
images, and the discriminator distinguishes between real and fake images. The GAN is
trained to improve the generator's ability to produce images that are indistinguishable
from real ones.
Aim:
The aim is to train a GAN to generate convincing handwritten digits by optimizing the
generator and discriminator iteratively.
Algorithm:
STEP 1: Import necessary libraries including NumPy, Matplotlib, and
TensorFlow/Keras.
STEP 2: Define a function build_generator that creates the generator model.
STEP 3: Include a Reshape layer to convert output to (28, 28, 1) - image shape.
STEP 4: Define a function build_discriminator that creates the discriminator model.
STEP 5: Define a function build_gan that creates the GAN model by combining the
generator and discriminator.
STEP 6: Define a function load_dataset to load and preprocess the MNIST dataset.
STEP 7: Define a function train_gan that trains the GAN model.
STEP 8: Generate images using the trained generator.
STEP 9: Execute the main function if the script is run directly.
Program:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers

# Generator model
def build_generator(latent_dim):
    model = tf.keras.Sequential()
    model.add(layers.Dense(128, input_dim=latent_dim, activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.Dense(784, activation='sigmoid'))
    model.add(layers.Reshape((28, 28, 1)))
    return model
# Print progress (inside the training loop)
    if epoch % 100 == 0:
        print(f"Epoch {epoch}, D Loss: {d_loss[0]}, G Loss: {g_loss}")
        save_generated_images(generator, epoch, latent_dim)

generator = build_generator(latent_dim)
discriminator = build_discriminator(img_shape)
gan = build_gan(generator, discriminator)
X_train = load_dataset()

discriminator.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
gan.compile(loss='binary_crossentropy', optimizer='adam')
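The listing references latent_dim, img_shape, build_discriminator, build_gan, load_dataset, and save_generated_images without showing their definitions; minimal sketches consistent with the algorithm (the layer sizes here are assumptions) are:

latent_dim = 100          # assumed size of the generator's noise vector
img_shape = (28, 28, 1)   # MNIST image shape

# Discriminator: distinguishes real from generated images
def build_discriminator(img_shape):
    model = tf.keras.Sequential()
    model.add(layers.Flatten(input_shape=img_shape))
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    return model

# Combined GAN: the generator feeds a frozen discriminator
def build_gan(generator, discriminator):
    discriminator.trainable = False
    return tf.keras.Sequential([generator, discriminator])

# Load MNIST images scaled to [0, 1], shaped (num_samples, 28, 28, 1)
def load_dataset():
    (X_train, _), (_, _) = tf.keras.datasets.mnist.load_data()
    X_train = X_train.astype('float32') / 255.0
    return np.expand_dims(X_train, axis=-1)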
import pathlib
import PIL

dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos.tar', origin=dataset_url, extract=True)
data_dir = pathlib.Path(data_dir).with_suffix('')
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)

roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
batch_size = 32
img_height = 180
img_width = 180

train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names
print(class_names)

import matplotlib.pyplot as plt
normalization_layer = layers.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
num_classes = len(class_names)
model = Sequential([
    layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 4
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
from tensorflow import keras

data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal", input_shape=(img_height, img_width, 3)),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
Output:
228813984/228813984 [==============================] - 1s 0us/step
3670
Found 3670 files belonging to 5 classes.
Using 2936 files for training.
Found 3670 files belonging to 5 classes.
Using 734 files for validation.
['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
(32, 180, 180, 3)
(32, 180, 180, 3)
(32, 180, 180, 3)
... (this batch-shape print repeats once per training batch; the final, partial batch has shape (24, 180, 180, 3))
=================================================================
Total params: 3989285 (15.22 MB)
Trainable params: 3989285 (15.22 MB)
Non-trainable params: 0 (0.00 Byte)
Epoch 1/4
92/92 [==============================] - 104s 1s/step - loss: 1.3069 - accuracy:
0.4441 - val_loss: 1.0472 - val_accuracy: 0.5790
Epoch 2/4
92/92 [==============================] - 100s 1s/step - loss: 0.9803 - accuracy:
0.6097 - val_loss: 0.9344 - val_accuracy: 0.6281
Epoch 3/4
92/92 [==============================] - 116s 1s/step - loss: 0.7882 - accuracy:
0.7006 - val_loss: 0.8806 - val_accuracy: 0.6458
Epoch 4/4
92/92 [==============================] - 103s 1s/step - loss: 0.5535 - accuracy:
0.7871 - val_loss: 0.9246 - val_accuracy: 0.6594
Result:
Thus, the program has been successfully executed for image classification using a
deep learning model.
EX.NO:
Recommendation system from sales data using Deep Learning
DATE:
Problem Statement:
In the rapidly evolving landscape of e-commerce, personalized recommendations play a
pivotal role in enhancing user experience and boosting sales. Our company, XYZ Retail,
aims to leverage deep learning techniques to develop an advanced recommendation
system based on historical sales data. The goal is to provide personalized product
suggestions to users, ultimately increasing customer satisfaction and driving revenue.
Objective:
The objective of this project is to design and implement a deep learning-based
recommendation system that can analyze past sales data and generate accurate product
recommendations for individual users. The system should be capable of understanding
user preferences, identifying patterns, and suggesting items that align with the user's
interests.
Dataset used: sample_sales_data.csv
The dataset is generated with the faker library, which produces random names and words for users and items, respectively.
Step 1: pip install faker
Step 2: Run the following code

import pandas as pd
import numpy as np
from faker import Faker
import random
import datetime

fake = Faker()
# Generate sample users
num_users = 100
users = [fake.name() for _ in range(num_users)]

# Generate sample items
num_items = 50
items = [fake.word() for _ in range(num_items)]

# Generate sample sales data
num_transactions = 500
data = {
'user': [random.choice(users) for _ in range(num_transactions)],
'item': [random.choice(items) for _ in range(num_transactions)],
'purchase': [random.choice([0, 1]) for _ in range(num_transactions)],
'timestamp': [fake.date_time_between(start_date="-1y", end_date="now") for _ in
range(num_transactions)]
}
sales_data = pd.DataFrame(data)

# Save the generated data to a CSV file
sales_data.to_csv('sample_sales_data.csv', index=False)

# Display the first few rows of the generated data
print(sales_data.head())
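The model-definition listing below begins mid-function; the preprocessing and the opening of create_model are missing from the printout. A minimal sketch consistent with the code that follows (the embedding size of 50 is an assumption) is:

from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, Flatten, Concatenate, Dense

# Encode users and items as integer ids, then split the transactions
data = pd.read_csv('sample_sales_data.csv')
data['user_id'] = data['user'].astype('category').cat.codes
data['item_id'] = data['item'].astype('category').cat.codes
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)

def create_model(num_users, num_items, embedding_dim=50):
    user_input = Input(shape=(1,))
    item_input = Input(shape=(1,))
    user_embedding = Embedding(num_users, embedding_dim)(user_input)
    item_embedding = Embedding(num_items, embedding_dim)(item_input)
    # ...the function continues with the layers shown below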
    user_flatten = Flatten()(user_embedding)
    item_flatten = Flatten()(item_embedding)
    concat = Concatenate()([user_flatten, item_flatten])
    dense1 = Dense(128, activation='relu')(concat)
    dense2 = Dense(64, activation='relu')(dense1)
    output = Dense(1, activation='sigmoid')(dense2)
    model = Model(inputs=[user_input, item_input], outputs=output)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
# Create and train the model
num_users = len(data['user_id'].unique())
num_items = len(data['item_id'].unique())
model = create_model(num_users, num_items)
model.summary()

train_user = train_data['user_id'].values
train_item = train_data['item_id'].values
train_labels = train_data['purchase'].values
model.fit([train_user, train_item], train_labels, epochs=5, batch_size=64,
          validation_split=0.2)

# Evaluate the model
test_user = test_data['user_id'].values
test_item = test_data['item_id'].values
test_labels = test_data['purchase'].values
accuracy = model.evaluate([test_user, test_item], test_labels)
print(f'Test Accuracy: {accuracy[1]*100:.2f}%')
Output:
Result:
Thus, the program has been successfully executed for a recommendation system from sales data using Deep Learning.
EX.NO:
Implement Object Detection using CNN
DATE:
Problem Statement:
In the realm of computer vision and image processing, object detection is a fundamental
task that finds applications in various domains, including autonomous vehicles,
surveillance, and augmented reality. Convolutional Neural Networks (CNNs) have proven
to be powerful tools for object detection due to their ability to automatically learn
hierarchical features from images.
Objective:
The objective of this project is to design and implement an Object Detection system using
Convolutional Neural Networks. The system should be capable of accurately detecting
and localizing objects within images, providing bounding box coordinates, class labels,
and confidence scores.
Dataset Used: CIFAR-10 dataset
Program Code:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
# Load CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
# Define the CNN model
model = models.Sequential([
layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
layers.MaxPooling2D((2, 2)),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Conv2D(64, (3, 3), activation='relu')
])
# Add dense layers for classification
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Train the model
history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images,
test_labels))
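The listing ends after training; evaluating the trained classifier on the test set (a short completion consistent with the code above) would be:

# Evaluate the trained model on the test set
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('Test accuracy:', test_acc)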
Result:
Thus, the program has been successfully executed for Object Detection using CNN.
EX.NO:
Implement any simple Reinforcement Algorithm for an NLP problem
DATE:
Problem Statement:
Design a text generation system that can autonomously generate coherent and
meaningful text sequences based on a given initial prompt.
Aim:
The aim of this project is to implement a reinforcement learning-based approach,
specifically Q-learning, to train a model to generate text sequences. The model should
learn to select the most appropriate words to follow a given sequence of words, aiming
to maximize the coherence and relevance of the generated text.
Algorithm:
1. Load Dataset: Load the dataset containing Mastercard stock history from a CSV file into a pandas DataFrame. Drop irrelevant columns like "Dividends" and "Stock Splits". Print the first few rows and descriptive statistics of the dataset.
2. Define Functions:
• train_test_plot: Plot the training and testing data split based on the specified start and end years.
• train_test_split: Split the dataset into training and testing sets based on the specified start and end years.
• split_sequence: Define a function to split a univariate sequence into samples for time-series prediction.
• plot_predictions: Plot the actual vs. predicted stock prices.
• return_rmse: Calculate and print the root mean squared error (RMSE) between the actual and predicted stock prices.
3. Preprocessing:
• Perform Min-Max scaling on the training set.
• Split the training set into input features (X_train) and target variable
(y_train) using the split_sequence function.
• Reshape X_train for compatibility with LSTM input shape.
4. Model Definition:
• Build an LSTM model using the Keras Sequential API.
• Add an LSTM layer with 125 units and "tanh" activation function, followed by a Dense layer with 1 unit.
• Compile the model using the RMSprop optimizer and mean squared error loss function.
• Print the summary of the model architecture.
5. Model Training:
• Fit the LSTM model on the training data for 50 epochs with a batch size of 32.
6. Testing:
• Prepare the test data by taking the last n_steps (60) values from the dataset and scaling them.
• Split the test data into input sequences (X_test) and corresponding target values (y_test).
• Reshape X_test for compatibility with the model's input shape.
• Make predictions on the test data using the trained LSTM model.
• Inverse transform the predicted values to obtain the actual stock prices.
7. Evaluation:
• Plot the actual vs. predicted stock prices using the plot_predictions function.
• Calculate and print the RMSE between the actual and predicted stock prices using the return_rmse function.
Program:
# Importing the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional
from tensorflow.keras.optimizers import SGD
from tensorflow.random import set_seed

set_seed(455)
np.random.seed(455)
dataset = pd.read_csv(
"data/Mastercard_stock_history.csv", index_col="Date", parse_dates=["Date"]
).drop(["Dividends", "Stock Splits"], axis=1)
print(dataset.head())
print(dataset.describe())
dataset.isna().sum()

tstart = 2016
tend = 2020

def train_test_plot(dataset, tstart, tend):
    dataset.loc[f"{tstart}":f"{tend}", "High"].plot(figsize=(16, 4), legend=True)
    dataset.loc[f"{tend+1}":, "High"].plot(figsize=(16, 4), legend=True)
    plt.legend([f"Train (Before {tend+1})", f"Test ({tend+1} and beyond)"])
    plt.title("MasterCard stock price")
    plt.show()

train_test_plot(dataset, tstart, tend)
def train_test_split(dataset, tstart, tend):
    train = dataset.loc[f"{tstart}":f"{tend}", "High"].values
    test = dataset.loc[f"{tend+1}":, "High"].values
    return train, test

training_set, test_set = train_test_split(dataset, tstart, tend)

sc = MinMaxScaler(feature_range=(0, 1))
training_set = training_set.reshape(-1, 1)
training_set_scaled = sc.fit_transform(training_set)
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        end_ix = i + n_steps
        if end_ix > len(sequence) - 1:
            break
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)

n_steps = 60
features = 1
# split into samples
X_train, y_train = split_sequence(training_set_scaled, n_steps)
# Reshaping X_train for model
X_train = X_train.reshape(X_train.shape[0],X_train.shape[1],features)
# The LSTM architecture
model_lstm = Sequential()
model_lstm.add(LSTM(units=125, activation="tanh", input_shape=(n_steps, features)))
model_lstm.add(Dense(units=1))

# Compiling the model
model_lstm.compile(optimizer="RMSprop", loss="mse")
model_lstm.summary()
model_lstm.fit(X_train, y_train, epochs=50, batch_size=32)

dataset_total = dataset.loc[:, "High"]
inputs = dataset_total[len(dataset_total) - len(test_set) - n_steps :].values
inputs = inputs.reshape(-1, 1)
#scaling
inputs = sc.transform(inputs)
# Split into samples
X_test, y_test = split_sequence(inputs, n_steps)
# reshape
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], features)
#prediction
predicted_stock_price = model_lstm.predict(X_test)
#inverse transform the values
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
def plot_predictions(test, predicted):
    plt.plot(test, color="gray", label="Real")
    plt.plot(predicted, color="red", label="Predicted")
    plt.title("MasterCard Stock Price Prediction")
    plt.xlabel("Time")
    plt.ylabel("MasterCard Stock Price")
    plt.legend()
    plt.show()
Output:
Volume    0
dtype: int64
Model: "sequential"
Epoch 50/50
38/38 [==============================] - 1s 30ms/step - loss: 3.1642e-04
>>> The root mean squared error is 6.70.
Result:
The program trains an LSTM model on Mastercard stock price data from 2016 to 2020
and tests its performance on data from 2021 onwards. It then plots the actual vs.
predicted stock prices and calculates the root mean squared error (RMSE) between them.
The root mean squared error is printed out, indicating the level of accuracy of the LSTM
model in predicting the Mastercard stock prices.
CONTENT BEYOND SYLLABUS:
EX.NO:
Implement a basic neural network for binary classification using TensorFlow/Keras
DATE:
Problem Objective:
The objective of this problem is to develop a neural network model for binary classification.
Problem Statement:
Given a dataset with features and binary labels, the task is to design and train a neural network model
that can accurately classify the samples into one of the two classes.
Aim:
To create a neural network model that effectively performs binary classification, achieving high
accuracy on unseen data.
Algorithm:
STEP 1: Prepare the dataset of features and binary labels, and split off a portion for validation.
STEP 2: Create a sequential model with dense layers using ReLU activation functions and an output layer with a sigmoid activation function for binary classification.
STEP 3: Compile the model using the Adam optimizer and binary cross-entropy loss function, with accuracy as the evaluation metric.
STEP 4: Train the model on the training data for a specified number of epochs and batch size, utilizing a portion for validation.
STEP 5: Display training progress including loss and accuracy per epoch, and optionally print accuracy on test data.
Program:
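The listing for this experiment is missing from the printout; a minimal sketch consistent with the output below (1,000 synthetic samples, a 0.2 validation split, and batch size 32 are assumptions inferred from the 25-step epochs) is:

import numpy as np
import tensorflow as tf

# Synthetic binary-classification data (the original data source is not shown)
np.random.seed(0)
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# Dense ReLU layers with a sigmoid output for binary classification
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train for 10 epochs, holding out 20% of the data for validation
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)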
Output:
Epoch 1/10
25/25 [==============================] - 1s 10ms/step - loss: 0.6963 - accuracy:
0.5400 - val_loss: 0.7098 - val_accuracy: 0.4050
Epoch 2/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6839 - accuracy: 0.5650 -
val_loss: 0.7086 - val_accuracy: 0.4350
Epoch 3/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6755 - accuracy: 0.5813 -
val_loss: 0.7070 - val_accuracy: 0.4650
Epoch 4/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6711 - accuracy: 0.5875 -
val_loss: 0.7085 - val_accuracy: 0.5000
Epoch 5/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6670 - accuracy: 0.5813 -
val_loss: 0.7074 - val_accuracy: 0.5100
Epoch 6/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6636 - accuracy: 0.6237 -
val_loss: 0.7114 - val_accuracy: 0.4800
Epoch 7/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6608 - accuracy: 0.5950 -
val_loss: 0.7083 - val_accuracy: 0.4950
Epoch 8/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6549 - accuracy: 0.6350 -
val_loss: 0.7076 - val_accuracy: 0.4900
Epoch 9/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6512 - accuracy: 0.6488 -
val_loss: 0.7097 - val_accuracy: 0.5000
Epoch 10/10
25/25 [==============================] - 0s 3ms/step - loss: 0.6479 - accuracy: 0.6425 -
val_loss: 0.7100 - val_accuracy: 0.4800
<keras.src.callbacks.History at 0x7f888eadcb80>
Result:
After training the neural network model, it achieves a certain level of accuracy on the
validation set, indicating its capability to classify binary data accurately. The accuracy
obtained serves as a measure of the model's performance in classifying unseen data.