AD3511 DL Lab All Lab Manual
DEPARTMENT OF
ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
Regulation 2021
AD3511 DEEP LEARNING LABORATORY
NAME :
REGISTER NO :
YEAR : III B.Tech AI&DS
SEMESTER : V
ACADEMIC YEAR : 2023-2024
BATCH : 2021-2025
SYLLABUS R2021
AD3511 DEEP LEARNING LABORATORY L T P C
0 0 4 2
OBJECTIVES:
To understand the tools and techniques to implement deep neural networks
To apply different deep learning architectures for solving problems
To implement generative models for suitable applications
To learn to build and validate different models
LIST OF EXPERIMENTS:
1. Solving the XOR problem using DNN
2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
OUTCOMES:
At the end of this course, the students will be able to:
CO1 Apply deep neural network for simple problems
CO2 Apply Convolution Neural Network for image processing
CO3 Apply Recurrent Neural Network and its variants for text analysis
CO4 Apply generative models for data augmentation
CO5 Develop real-world solutions using suitable deep neural networks
Program Outcomes:
Graduate Attribute
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex
engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with
appropriate consideration for the public health and safety, and the cultural, societal,
and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data,
and synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.
Program Specific Outcomes:
1. Evolve efficient AI-based, domain-specific processes for effective decision making in
domains such as business and governance.
2. Arrive at actionable foresight, insight, and hindsight from data for solving business and
engineering problems.
3. Create, select, and apply theoretical knowledge of AI and Data Analytics, along with
practical industrial tools and techniques, to manage and solve wicked societal problems.
INDEX
Ex.No Name of the Experiment CO PO/PSO Marks Sign
AVERAGE
AIM:
To solve XOR problem using DNN
CONCEPT:
The XOR problem is a classic problem in the field of machine learning and artificial intelligence.
XOR stands for "exclusive OR," a logical operation that takes two binary inputs (0 or 1)
and returns 1 only when exactly one of the inputs is 1. Otherwise, it returns 0. Here is the
truth table for the XOR operation:
Input A   Input B   Output
0         0         0
0         1         1
1         0         1
1         1         0
For solving the XOR problem, use a Deep Neural Network (DNN) implemented with
TensorFlow and Keras. The XOR problem is a classic binary classification problem that
cannot be linearly separated. A DNN with hidden layers can learn the non-linear patterns
required to solve this problem.
1. Training:
Provide the input data (XOR inputs) and the corresponding labels (XOR outputs) to
the DNN.
The DNN adjusts its internal weights through backpropagation to minimize the loss
function.
The model learns to approximate the XOR function based on the provided
examples.
2. Epochs and Batch Size:
Train the model for multiple epochs (iterations over the entire dataset) to allow the
DNN to adjust its weights and learn the XOR function.
You can choose a suitable batch size for updating weights in each iteration.
3. Evaluation:
After training, evaluate the model's performance using the same XOR data.
Calculate metrics like accuracy to assess how well the model has learned the XOR
function.
4. Inference:
Once trained, the DNN can be used to predict the XOR output for new input
combinations.
STEPS
1 Define the XOR input and output data
  X: Input data for the XOR problem, where each row represents an input sample.
  y: Corresponding target outputs for the XOR problem.
2 Define the DNN architecture
  Input Layer: specifies an input shape of (2,), indicating that the model expects input
  data with two features.
  Hidden Layer 1 (Dense):
PROGRAM
import tensorflow as tf
import numpy as np
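# The remainder of the program is not reproduced in this extract. A minimal sketch
# follows; the hidden-layer size (8 units) and epoch count (1000) are illustrative
# choices, not values prescribed by the manual.

# Step 1: define the XOR input and output data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Step 2: define the DNN architecture (a hidden layer learns the non-linear mapping)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid")
])

# Train the model with backpropagation
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1000, verbose=0)

# Evaluate on the same XOR data and predict the four input combinations
loss, accuracy = model.evaluate(X, y, verbose=0)
print("Accuracy:", accuracy)
print("Predictions:", (model.predict(X) > 0.5).astype(int).ravel())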
RESULT
Thus XOR problem using DNN is solved.
AIM:
To implement character Recognition using CNN
CONCEPT:
STEPS
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
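# The body of the program is not reproduced in this extract. A minimal sketch using
# the MNIST handwritten-digit dataset (imported above) follows; the layer sizes,
# epoch count, and batch size are illustrative choices.

# Load and normalise the MNIST images
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# CNN: convolution + pooling blocks followed by a dense classifier over 10 classes
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax")
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.1)

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("Test accuracy:", test_acc)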
AIM:
To Implement Face recognition using CNN
CONCEPT:
Face recognition using Convolutional Neural Networks (CNNs) is a popular application of deep
learning in computer vision. CNNs are well-suited for image recognition tasks like face
recognition because they can automatically learn hierarchical features from images, which
are essential for discriminating between different faces.
Here's an overview of how to approach face recognition using CNNs:
1. Data Collection and Preprocessing:
Collect a dataset of labeled face images. You need images of individuals you
want the model to recognize.
Preprocess the images: Resize them to a common size (e.g., 224x224 pixels),
normalize pixel values to a certain range (usually [0, 1] or [-1, 1]), and perform data
augmentation (e.g., random cropping, flipping) to increase model robustness.
2. CNN Architecture:
Choose a CNN architecture: Common choices include variants of VGG, ResNet,
Inception, or MobileNet. These architectures are available in libraries like
TensorFlow and PyTorch.
Pretrained Models: Consider using a pretrained CNN model on a large dataset like
ImageNet. Transfer learning allows you to leverage learned features and fine-tune
the model for face recognition.
3. Model Training:
Modify the architecture: Replace the classification head of the pretrained model
with a new one suitable for face recognition. Typically, the new head consists of a
few fully connected layers.
Data Input: Feed the preprocessed face images into the model and train it using
a suitable loss function. Common choices include triplet loss or contrastive loss,
which encourage the network to learn embeddings that make similar faces close
and dissimilar faces far apart in the embedding space.
Optimization: Use an optimizer like Adam or SGD to update the model's
parameters during training.
4. Testing and Verification:
Embedding Extraction: After training, remove the classification head and use the
model to extract embeddings (feature vectors) from face images.
Face Verification: To verify whether two face images belong to the same person,
calculate the similarity (e.g., cosine similarity) between their embeddings. Define a
threshold to decide whether the faces are a match or not.
Face Identification: For identification, compare the embeddings of a query face
against embeddings of all known faces in the database and find the closest
match.
5. Deployment and Testing:
Deploy the trained model to your desired platform (web application, mobile app,
etc.).
Test the model's accuracy and performance on real-world data. Tweak
hyperparameters or augment the dataset as needed to improve accuracy.
Remember that face recognition involves privacy concerns, and ethical considerations should
be taken into account, especially when working with sensitive data.
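The verification step described in point 4 can be illustrated with a short snippet. This is a
minimal sketch: cosine similarity is one common choice of similarity measure, the threshold
of 0.7 is an arbitrary illustrative value, and embedding_a / embedding_b stand for feature
vectors extracted by the trained network.
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(embedding_a, embedding_b, threshold=0.7):
    # Declare a match when the embeddings are sufficiently similar
    return cosine_similarity(embedding_a, embedding_b) >= threshold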
STEPS
1 Set the path to the directory containing the face images
2 Load the face images and labels & Iterate over the face image directory and
load the images
3 Convert the data to NumPy arrays, preprocess the labels to extract the numeric part,
  and convert the labels to one-hot encoded vectors
4 Split the data into training and validation sets
5 Compile and train the CNN model, and save the trained model
PROGRAM
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.model_selection import train_test_split
# Set the path to the directory containing the face images (step 1)
faces_dir = "faces"  # placeholder path; adjust to the location of your dataset

# Load the face images and labels
x_data = []
y_data = []

# Iterate over the face image directory and load the images
for filename in os.listdir(faces_dir):
    if filename.endswith(".jpg"):
        img_path = os.path.join(faces_dir, filename)
        img = load_img(img_path, target_size=(64, 64))  # resize images to 64x64 pixels
        img_array = img_to_array(img)
        x_data.append(img_array)
        label = filename.split(".")[0]  # assuming the filename format is label.jpg
        y_data.append(label)

x_data = np.array(x_data)
y_data = np.array(y_data)
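# Steps 3-5 of the program are not reproduced in this extract. A minimal sketch
# follows; the network architecture, epoch count, batch size, and saved file name
# are illustrative choices.

# Step 3: normalise the images and convert labels to one-hot encoded vectors
x_data = x_data / 255.0
unique_labels = sorted(set(y_data))
label_to_index = {label: i for i, label in enumerate(unique_labels)}
y_indices = np.array([label_to_index[label] for label in y_data])
y_onehot = tf.keras.utils.to_categorical(y_indices, num_classes=len(unique_labels))

# Step 4: split the data into training and validation sets
x_train, x_val, y_train, y_val = train_test_split(
    x_data, y_onehot, test_size=0.2, random_state=42)

# Step 5: compile, train the CNN model and save the trained model
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(len(unique_labels), activation="softmax")
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))
model.save("face_recognition_model.h5")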
OUTPUT
Epoch 1/10
65/65 [==============================] - 5s 68ms/step - loss: 215.6209 - accuracy: 0.0098 - val_loss: 4.7830 - val_accuracy: 0.0039
Epoch 2/10
65/65 [==============================] - 4s 66ms/step - loss: 4.7793 - accuracy: 0.0112 - val_loss: 4.7757 - val_accuracy: 0.0039
Epoch 3/10
65/65 [==============================] - 4s 66ms/step - loss: 4.7717 - accuracy: 0.0122 - val_loss: 4.7694 - val_accuracy: 0.0039
Epoch 4/10
65/65 [==============================] - 4s 66ms/step - loss: 4.7646 - accuracy: 0.0107 - val_loss: 4.7634 - val_accuracy: 0.0039
Epoch 5/10
RESULT
Face recognition using CNN is analysed and implemented.
AIM:
To implement Language modeling using RNN
CONCEPT:
Language modeling is the task of predicting the next word or character in a sequence given
the preceding context. A Recurrent Neural Network (RNN) is well suited to this task because it
processes a sequence one token at a time while maintaining a hidden state that summarizes
the tokens seen so far. Variants such as LSTM and GRU were later developed to address some
of the limitations of traditional RNNs, such as the vanishing gradient problem.
Additionally, modern language models like GPT (Generative Pre-trained Transformer) have
largely surpassed the performance of traditional RNN-based models by utilizing attention
mechanisms and larger-scale architectures.
STEPS
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
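# The text corpus and data-preparation code are not reproduced in this extract.
# A minimal character-level sketch follows; the toy corpus and sequence length
# are illustrative choices.
text = "deep learning with recurrent neural networks for language modeling"
chars = sorted(set(text))
char_to_index = {c: i for i, c in enumerate(chars)}
index_to_char = {i: c for c, i in char_to_index.items()}

# Build (input sequence, next character) training pairs
seq_length = 10
X, y = [], []
for i in range(len(text) - seq_length):
    X.append([char_to_index[c] for c in text[i:i + seq_length]])
    y.append(char_to_index[text[i + seq_length]])
X = np.array(X)
y = np.array(y)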
model = Sequential([
    Embedding(input_dim=len(chars), output_dim=50, input_length=seq_length),
    SimpleRNN(100, return_sequences=False),
    Dense(len(chars), activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
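# Training and generation setup are omitted from this extract; one large batch per
# epoch matches the 1/1 steps shown in the output below (epoch count illustrative)
model.fit(X, y, epochs=50, batch_size=len(X))

seed_text = text[:seq_length]
num_chars_to_generate = 100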
for _ in range(num_chars_to_generate):
    seed_indices = [char_to_index[char] for char in seed_text]
    # Check if the seed sequence length matches the model's input length
    if len(seed_indices) < seq_length:
        diff = seq_length - len(seed_indices)
        seed_indices = [0] * diff + seed_indices
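    # (sketch) prediction step omitted from this extract: predict the next character
    # from the last seq_length indices and append it to the running text
    seed_indices = seed_indices[-seq_length:]
    probs = model.predict(np.array([seed_indices]), verbose=0)[0]
    next_char = index_to_char[int(np.argmax(probs))]
    seed_text += next_char
print(seed_text)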
OUTPUT
Epoch 31/50
1/1 [==============================] - 0s 17ms/step - loss: 0.4871
Epoch 32/50
1/1 [==============================] - 0s 0s/step - loss: 0.4469
Epoch 33/50
1/1 [==============================] - 0s 18ms/step - loss: 0.4099
Epoch 34/50
1/1 [==============================] - 0s 0s/step - loss: 0.3753
Epoch 35/50
1/1 [==============================] - 0s 18ms/step - loss: 0.3430
Epoch 36/50
1/1 [==============================] - 0s 0s/step - loss: 0.3134
Epoch 37/50
1/1 [==============================] - 0s 15ms/step - loss: 0.2865
Epoch 38/50
1/1 [==============================] - 0s 0s/step - loss: 0.2621
Epoch 39/50
1/1 [==============================] - 0s 2ms/step - loss: 0.2399
Epoch 40/50
1/1 [==============================] - 0s 15ms/step - loss: 0.2200
Epoch 41/50
1/1 [==============================] - 0s 1ms/step - loss: 0.2021
Epoch 42/50
1/1 [==============================] - 0s 18ms/step - loss: 0.1860
Epoch 43/50
1/1 [==============================] - 0s 0s/step - loss: 0.1714
Epoch 44/50
1/1 [==============================] - 0s 16ms/step - loss: 0.1580
Epoch 45/50
1/1 [==============================] - 0s 0s/step - loss: 0.1460
Epoch 46/50
1/1 [==============================] - 0s 4ms/step - loss: 0.1353
Epoch 47/50
1/1 [==============================] - 0s 12ms/step - loss: 0.1257
Epoch 48/50
1/1 [==============================] - 0s 933us/step - loss: 0.1170
Epoch 49/50
1/1 [==============================] - 0s 17ms/step - loss: 0.1090
Epoch 50/50
1/1 [==============================] - 0s 0s/step - loss: 0.1017
This is a sample tentrfornlanguags modnging nsing Rgn.rginsrngangrngangnoggrng nsingrnging
ndgg nsinorng ngrngadgsinorng
RESULT
Thus Language modeling using RNN is implemented.
AIM:
To implement Sentiment analysis using LSTM
CONCEPT:
Sentiment analysis using LSTM (Long Short-Term Memory) is a common task in natural language
processing where you aim to determine the sentiment or emotional tone expressed in a given
text. LSTM is a type of recurrent neural network that can capture long-range dependencies in
sequences, making it well-suited for sequence-based tasks like sentiment analysis.
1. Text Representation: In sentiment analysis, text data needs to be converted into a
numerical format that can be processed by neural networks. This is typically done using
techniques like tokenization and word embedding. Tokenization splits the text into
individual words or subwords, while word embedding maps each token to a dense
vector representation in a continuous vector space.
2. Sequence Padding: In order to train LSTM networks efficiently, sequences (sentences or
documents) need to have a consistent length. Since text data can have varying lengths,
padding is applied to make all sequences of the same length. Shorter sequences are
padded with zeros at the beginning or end.
3. LSTM Architecture: An LSTM is a type of recurrent neural network designed to handle
sequential data. It has memory cells and gates that allow it to capture long-term
dependencies in sequences. LSTMs can remember important information over extended
periods, which is crucial for sentiment analysis since sentiments in text can span across
multiple words.
4. Embedding Layer: The input text tokens are passed through an embedding layer, which
converts the discrete token indices into dense vector representations. This layer is
responsible for capturing semantic relationships between words.
5. LSTM Layer: The LSTM layer processes the embedded sequences, updating its internal
state based on the input tokens and previous state. The LSTM's ability to maintain and
update context over time enables it to capture sequential patterns and dependencies
within the text.
6. Classification Layer: After processing the sequence through the LSTM layer, the final
hidden state is passed through a fully connected (dense) layer. This layer performs the
sentiment classification by producing a probability score indicating the likelihood of a
particular sentiment class.
7. Training and Backpropagation: During training, the model's predictions are compared to
the actual sentiment labels using a loss function (such as binary cross-entropy for binary
sentiment classification). The gradients of the loss are propagated back through the
network using backpropagation through time (BPTT), and the model's parameters are
updated using an optimization algorithm (e.g., Adam, SGD).
8. Inference and Prediction: Once the LSTM model is trained, it can be used to predict
sentiment labels for new, unseen text data. The input text is processed through the trained
model, and the final classification layer's output provides the predicted sentiment.
Sentiment analysis using LSTM is a powerful application of deep learning in natural language
processing. It allows the model to learn and capture complex patterns in textual data, making
it capable of understanding and classifying sentiments expressed in various contexts.
STEPS
1 Load the IMDB dataset, which consists of movie reviews labeled with positive or
negative sentiment.
2 Preprocess the data by padding sequences to a fixed length (max_review_length)
and limiting the vocabulary size to the most frequent words (num_words).
3 Build an LSTM-based model. The Embedding layer is used to map word indices to
dense vectors, the LSTM layer captures sequence dependencies, and the Dense layer
produces a binary sentiment prediction.
4 The model is compiled with binary cross-entropy loss and the Adam optimizer.
5 Train the model using the training data. Finally, we evaluate the model on the test
data and print the test accuracy.
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
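# The data-loading and preprocessing code is not reproduced in this extract.
# A minimal sketch follows; max_features, max_len and embedding_size are
# illustrative values (steps 1-2 of the STEPS table).
max_features = 5000   # vocabulary size: keep only the most frequent words
max_len = 500         # pad/truncate every review to this length
embedding_size = 32

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)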
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=max_len))
model.add(LSTM(100)) # LSTM layer with 100 units
model.add(Dense(1, activation='sigmoid'))
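# Compilation, training and evaluation are omitted from this extract (steps 4-5);
# the epoch count and batch size below are illustrative choices.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))

loss, accuracy = model.evaluate(x_test, y_test)
print("Loss:", loss)
print("Accuracy:", accuracy)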
OUTPUT
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 [==============================] - 7s 0us/step
Epoch 1/5
391/391 [==============================] - 286s 727ms/step - loss: 0.4991 - accuracy: 0.7626 - val_loss: 0.3712 - val_accuracy: 0.8412
Epoch 2/5
391/391 [==============================] - 296s 757ms/step - loss: 0.3381 - accuracy: 0.8587 - val_loss: 0.3609 - val_accuracy: 0.8532
Epoch 3/5
391/391 [==============================] - 313s 801ms/step - loss: 0.2642 - accuracy: 0.8945 - val_loss: 0.3168 - val_accuracy: 0.8678
Epoch 4/5
391/391 [==============================] - 433s 1s/step - loss: 0.2263 - accuracy: 0.9142 - val_loss: 0.3119 - val_accuracy: 0.8738
Epoch 5/5
391/391 [==============================] - 302s 774ms/step - loss: 0.1982 - accuracy: 0.9247 - val_loss: 0.3114 - val_accuracy: 0.8745
782/782 [==============================] - 74s 95ms/step - loss: 0.3114 - accuracy: 0.8745
Loss: 0.3113741874694824
Accuracy: 0.8745200037956238
RESULT
Thus Sentiment analysis using LSTM is implemented.
AIM:
To implement Parts of speech tagging using Sequence to Sequence architecture
CONCEPT:
Parts of speech (POS) tagging is a natural language processing task where each word in a
sentence is assigned a specific grammatical category, such as noun, verb, adjective, etc.
Sequence-to-Sequence (Seq2Seq) architecture, which was originally designed for machine
translation tasks, can also be adapted for POS tagging. The Seq2Seq architecture consists of
two main components: an encoder and a decoder. Here's how you can use Seq2Seq for POS
tagging:
1. Encoder-Decoder Setup: In the context of POS tagging, the encoder takes in the input
sentence (sequence of words) and encodes it into a fixed-size context vector. The
decoder then generates the POS tags based on this context vector.
2. Encoder Component: The encoder can be implemented using a recurrent neural
network (RNN) such as LSTM or GRU. The input sequence of words (tokens) is passed
through the encoder RNN, and the final hidden state of the encoder captures the
contextual information of the entire sentence.
3. Decoder Component: The decoder is another RNN that takes the encoder's final hidden
state as an initial hidden state and generates POS tags one at a time. At each step, the
decoder produces a probability distribution over possible POS tags for the current word.
4. Training: During training, the model is given pairs of input sentences and corresponding
POS tag sequences. The encoder generates the context vector, which is then used as
the initial state of the decoder. The decoder generates the predicted POS tags. The
model is trained to minimize the cross-entropy loss between the predicted and actual
POS tags.
5. Inference: During inference (testing or prediction), the model is given an input sentence,
and the encoder generates the context vector. The decoder then generates POS tags
one by one, using the context vector and the previously generated tag. This process
continues until an end-of-sentence token is generated or a maximum sequence length
is reached.
STEPS
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences
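# The tagged sentences used by the manual are not reproduced in this extract;
# the tiny corpus below is an illustrative stand-in.
input_texts = ["the cat sat", "a dog ran", "the dog sat", "a cat ran"]
target_texts = ["DET NOUN VERB", "DET NOUN VERB", "DET NOUN VERB", "DET NOUN VERB"]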
# Create a set of all unique words and POS tags in the dataset
input_words = set()
target_words = set()
for input_text, target_text in zip(input_texts, target_texts):
    input_words.update(input_text.split())
    target_words.update(target_text.split())
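# The index mappings, one-hot encoding, encoder-decoder training model and the
# training call are omitted from this extract. A minimal sketch follows; the
# latent dimension, epoch count and batch size are illustrative choices.
input_words = sorted(input_words)
target_words = sorted(target_words)
num_encoder_tokens = len(input_words)
num_decoder_tokens = len(target_words)
input_token_index = {w: i for i, w in enumerate(input_words)}
target_token_index = {t: i for i, t in enumerate(target_words)}
max_encoder_seq_length = max(len(t.split()) for t in input_texts)
max_decoder_seq_length = max(len(t.split()) for t in target_texts)

# One-hot encode the word sequences (encoder input), the tag sequences shifted
# right by one step (decoder input; an all-zeros vector stands in for a start
# token) and the tag sequences themselves (decoder target)
encoder_input_data = np.zeros((len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32")
decoder_input_data = np.zeros((len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
decoder_target_data = np.zeros((len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, word in enumerate(input_text.split()):
        encoder_input_data[i, t, input_token_index[word]] = 1.0
    for t, tag in enumerate(target_text.split()):
        decoder_target_data[i, t, target_token_index[tag]] = 1.0
        if t + 1 < max_decoder_seq_length:
            decoder_input_data[i, t + 1, target_token_index[tag]] = 1.0

# Encoder-decoder training model
latent_dim = 64
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=4, epochs=50, validation_split=0.2)

# Encoder model for inference; the decoder inference model is built below
encoder_model = Model(encoder_inputs, encoder_states)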
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
OUTPUT
Epoch 1/50
1/1 [==============================] - 7s 7s/step - loss: 1.3736 - accuracy: 0.0000e+00 - val_loss: 1.1017 - val_accuracy: 0.0000e+00
Epoch 2/50
1/1 [==============================] - 0s 63ms/step - loss: 1.3470 - accuracy: 0.7500 - val_loss: 1.1068 - val_accuracy: 0.0000e+00
Epoch 3/50
1/1 [==============================] - 0s 65ms/step - loss: 1.3199 - accuracy: 0.7500 - val_loss: 1.1123 - val_accuracy: 0.0000e+00
Epoch 4/50
:
Epoch 44/50
1/1 [==============================] - 0s 58ms/step - loss: 0.0882 - accuracy: 0.7500
:
Epoch 50/50
1/1 [==============================] - 0s 60ms/step - loss: 0.0751 - accuracy: 0.7500 - val_loss: 2.2554 - val_accuracy: 0.0000e+00
RESULT
Thus Parts of speech tagging using Sequence to Sequence architecture is implemented.
AIM:
To implement Machine Translation using Encoder-Decoder model
CONCEPT:
STEPS
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences
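# The parallel English-German sentence pairs used by the manual are not reproduced
# in this extract; the tiny corpus below is an illustrative stand-in. "<start>" and
# "<end>" mark the beginning and end of each target sentence.
input_texts = ["This is a pen", "This is a book", "That is a dog", "That is a cat"]
target_texts = ["<start> Das ist ein Stift <end>",
                "<start> Das ist ein Buch <end>",
                "<start> Das ist ein Hund <end>",
                "<start> Das ist eine Katze <end>"]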
# Create a set of all unique words in the input and target sequences
input_words = set()
target_words = set()
for input_text, target_text in zip(input_texts, target_texts):
    input_words.update(input_text.split())
    target_words.update(target_text.split())
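# The remainder of the program (one-hot encoding, the encoder-decoder model,
# training and greedy decoding) is omitted from this extract. A minimal sketch
# follows; the latent dimension, epoch count and decoding loop are illustrative.
input_words = sorted(input_words)
target_words = sorted(target_words)
num_encoder_tokens = len(input_words)
num_decoder_tokens = len(target_words)
input_token_index = {w: i for i, w in enumerate(input_words)}
target_token_index = {w: i for i, w in enumerate(target_words)}
reverse_target_index = {i: w for w, i in target_token_index.items()}
max_encoder_seq_length = max(len(t.split()) for t in input_texts)
max_decoder_seq_length = max(len(t.split()) for t in target_texts)

# One-hot encode source sentences, decoder inputs and (left-shifted) decoder targets
encoder_input_data = np.zeros((len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32")
decoder_input_data = np.zeros((len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
decoder_target_data = np.zeros((len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, word in enumerate(input_text.split()):
        encoder_input_data[i, t, input_token_index[word]] = 1.0
    for t, word in enumerate(target_text.split()):
        decoder_input_data[i, t, target_token_index[word]] = 1.0
        if t > 0:
            decoder_target_data[i, t - 1, target_token_index[word]] = 1.0

# Encoder-decoder training model
latent_dim = 128
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=2, epochs=300, verbose=0)

# Inference models: encode once, then decode one target word at a time
encoder_model = Model(encoder_inputs, encoder_states)
state_h_in = Input(shape=(latent_dim,))
state_c_in = Input(shape=(latent_dim,))
dec_out, dec_h, dec_c = decoder_lstm(decoder_inputs, initial_state=[state_h_in, state_c_in])
decoder_model = Model([decoder_inputs, state_h_in, state_c_in],
                      [decoder_dense(dec_out), dec_h, dec_c])

def translate(sentence):
    # Greedy decoding: start from "<start>" and stop at "<end>" or max length
    enc_in = np.zeros((1, max_encoder_seq_length, num_encoder_tokens), dtype="float32")
    for t, word in enumerate(sentence.split()):
        enc_in[0, t, input_token_index[word]] = 1.0
    h, c = encoder_model.predict(enc_in, verbose=0)
    target_seq = np.zeros((1, 1, num_decoder_tokens), dtype="float32")
    target_seq[0, 0, target_token_index["<start>"]] = 1.0
    decoded = []
    for _ in range(max_decoder_seq_length):
        out, h, c = decoder_model.predict([target_seq, h, c], verbose=0)
        word = reverse_target_index[int(np.argmax(out[0, -1, :]))]
        if word == "<end>":
            break
        decoded.append(word)
        target_seq = np.zeros((1, 1, num_decoder_tokens), dtype="float32")
        target_seq[0, 0, target_token_index[word]] = 1.0
    return " ".join(decoded)

print("Input: This is a pen")
print("Translated Text:", translate("This is a pen"))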
OUTPUT
Input: This is a pen
Translated Text: ist ein Stift Coden ein
RESULT
Thus Machine Translation using Encoder-Decoder model is implemented.
AIM:
To implement Image augmentation using GANs
CONCEPT:
STEPS
3 Set the training hyperparameters; the training loop performs the following steps:
Generate a batch of fake images
Train the discriminator
Train the generator
Print the progress and save samples
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, Dropout
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
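# The data loading and the generator/discriminator definitions are not reproduced
# in this extract. A minimal DCGAN-style sketch follows; the layer sizes and the
# 100-dimensional latent vector are illustrative choices.

# Load MNIST and scale the images to [-1, 1] to match the tanh generator output
(x_train, _), (_, _) = mnist.load_data()
x_train = (x_train.astype("float32") - 127.5) / 127.5
x_train = np.expand_dims(x_train, axis=-1)  # shape: (60000, 28, 28, 1)

latent_dim = 100

# Generator: latent vector -> 28x28x1 image
generator = Sequential([
    Dense(7 * 7 * 128, input_dim=latent_dim),
    Reshape((7, 7, 128)),
    BatchNormalization(),
    Conv2DTranspose(64, kernel_size=4, strides=2, padding="same", activation="relu"),
    Conv2DTranspose(1, kernel_size=4, strides=2, padding="same", activation="tanh"),
])

# Discriminator: image -> real/fake probability (completed by the two lines below)
discriminator = Sequential()
discriminator.add(Conv2D(64, kernel_size=4, strides=2, padding="same",
                         input_shape=(28, 28, 1), activation="relu"))
discriminator.add(Dropout(0.3))
discriminator.add(Conv2D(128, kernel_size=4, strides=2, padding="same", activation="relu"))
discriminator.add(Dropout(0.3))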
discriminator.add(Flatten())
discriminator.add(Dense(1, activation='sigmoid'))
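# (sketch continued) Compile the discriminator, then freeze it inside a combined
# generator + discriminator model used to train the generator; the optimizer
# settings are illustrative choices.
discriminator.compile(optimizer=Adam(0.0002, 0.5), loss="binary_crossentropy",
                      metrics=["accuracy"])

discriminator.trainable = False
z = Input(shape=(latent_dim,))
gan = Model(z, discriminator(generator(z)))
gan.compile(optimizer=Adam(0.0002, 0.5), loss="binary_crossentropy")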
# Training hyperparameters
epochs = 100
batch_size = 128
sample_interval = 10
# Training loop
for epoch in range(epochs):
    # Randomly select a batch of real images
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]
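    # (sketch) The remaining training-loop steps are omitted from this extract;
    # the label conventions and sampling below are illustrative choices.

    # Generate a batch of fake images from random latent vectors
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)

    # Train the discriminator on real and fake batches
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train the generator through the combined model (it tries to make fakes look real)
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    # Print the progress and save samples
    if epoch % sample_interval == 0:
        print("Epoch:", epoch, "Discriminator Loss:", d_loss[0])
        print("Generator Loss:", g_loss)
        samples = generator.predict(np.random.normal(0, 1, (16, latent_dim)), verbose=0)
        samples = 0.5 * samples.reshape(-1, 28, 28) + 0.5  # rescale to [0, 1] for display
        fig, axs = plt.subplots(4, 4)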
        count = 0
        for i in range(4):
            for j in range(4):
                axs[i, j].imshow(samples[count, :, :], cmap='gray')
                axs[i, j].axis('off')
                count += 1
        plt.show()
OUTPUT
Epoch: 90 Discriminator Loss: 0.03508808836340904
Generator Loss: 1.736445483402349e-06
RESULT
Thus Image augmentation using GANs is implemented.