
ADHI COLLEGE OF ENGINEERING AND TECHNOLOGY
Sankarapuram, Near Walajabad, Kancheepuram Dist., Pin: 631605

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

STUDENT NAME :

REGISTER NO :

SUBJECT CODE :

SUBJECT NAME : DEEP LEARNING LABORATORY

YEAR / SEMESTER : III / IV (2023 - 2024)

Sankarapuram, Near Walajabad, Kancheepuram Dist., Pin: 631605
DEPARTMENT OF

ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

LABORATORY RECORD NOTE BOOK

2022 – 2023

This is to certify that this is a bonafide record of the work done by

Mr./Ms. ____________________________ of the ________ year B.E. / B.Tech.,

Department of _____________________________________________________

in the ____________________________ Laboratory in the ________ Semester.

Staff In-Charge Head of the Department


Submitted to the University Examination held on _____________________.

INTERNAL EXAMINER EXTERNAL EXAMINER
TABLE OF CONTENTS

EX. NO   DATE   LIST OF EXPERIMENTS                                    PAGE NO   SIGN

1               Solving XOR Problem Using DNN
2               Character Recognition Using CNN
3               Face Recognition Using CNN
4               Language Modeling Using RNN
5               Sentiment Analysis Using LSTM
6               Parts of Speech Tagging Using Sequence-to-Sequence Architecture
7               Machine Translation Using Encoder-Decoder Model
8               Image Augmentation Using GANs
Ex. No: 1
Solving XOR Problem Using DNN
Date:

AIM
To write a Python program to solve the XOR problem using a DNN.

ALGORITHM
1. Data Preparation:
 Create a dataset with XOR input and output pairs: (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1,
(1, 1) -> 0.
2. Neural Network:
 Design a feedforward neural network with:
 Input layer: 2 neurons.
 Hidden layers: You can use 1 or more hidden layers with ReLU activation functions.
 Output layer: 1 neuron with a sigmoid activation function (for binary classification).
3. Training:
 Initialize the network's weights and biases randomly.
 Use binary cross-entropy as the loss function.
 Train the network using an optimization algorithm (e.g., SGD, Adam) for a sufficient
number of epochs until it converges.
4. Evaluation:
 Assess the model's performance using metrics like accuracy, precision, recall, or F1-
score.
5. Testing:
 Verify the model's predictions on new inputs to ensure it correctly predicts the XOR
operation.

PROGRAM
import numpy as np
import tensorflow as tf

# Data preparation: the four XOR input/output pairs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Neural network architecture: one 2-unit ReLU hidden layer, sigmoid output
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2, input_dim=2, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

# Loss function and optimization
model.compile(loss='binary_crossentropy', optimizer='adam')

# Training (note: the sample output below was captured from a 10-epoch run;
# this small network typically needs thousands of epochs to fit XOR)
model.fit(X, y, epochs=10000)

# Testing
predictions = model.predict(X)
print(predictions)

OUTPUT
Epoch 1/10
1/1 [==============================] - 1s 912ms/step - loss: 0.7392
Epoch 2/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7382
Epoch 3/10
1/1 [==============================] - 0s 8ms/step - loss: 0.7372
Epoch 4/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7363
Epoch 5/10
1/1 [==============================] - 0s 8ms/step - loss: 0.7353
Epoch 6/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7343
Epoch 7/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7334
Epoch 8/10
1/1 [==============================] - 0s 8ms/step - loss: 0.7325
Epoch 9/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7315
Epoch 10/10
1/1 [==============================] - 0s 7ms/step - loss: 0.7306
1/1 [==============================] - 0s 80ms/step
[[0.5043481 ]
 [0.36242732]
 [0.42150375]
 [0.286842  ]]
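
Note: the predictions above are all close to 0.5 because the run shown lasted only 10 epochs, and a two-unit ReLU hidden layer can get stuck on XOR. The following variant is a sketch (not part of the original record) that typically converges; the wider hidden layer and longer, silent training run are the only changes.

import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype='float32')
y = np.array([[0], [1], [1], [0]], dtype='float32')

# A wider hidden layer (8 units) makes convergence on XOR far more reliable
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=2000, verbose=0)  # train silently for many epochs

# Rounded predictions should now match [0, 1, 1, 0]
print(np.round(model.predict(X)).flatten())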

RESULT
Thus the program for solving the XOR problem using a DNN was executed and verified successfully.

Ex. No: 2
Character Recognition Using CNN
Date:

AIM
To write a Python program for character recognition using a CNN.

ALGORITHM
1. Data Collection and Preprocessing:
 Collect a dataset of characters with associated labels (e.g., images of letters or digits).
 Preprocess the data:
 Resize images to a consistent size (e.g., 28x28 pixels).
 Normalize pixel values (typically scale them to the range [0, 1]).
 Augment the dataset if necessary by applying transformations like rotation,
translation, or noise to increase diversity.
2. Data Splitting:
 Split the dataset into three parts: training, validation, and test sets.
 Typical splits might be 70-80% for training, 10-15% for validation, and 10-15% for
testing.
3. Neural Network Architecture:
 Define the CNN architecture:
 Convolutional layers: These layers learn feature maps.
 Pooling layers: Reduce spatial dimensions while retaining important features.
 Fully connected layers: Flatten the output from convolutional layers and feed
it into one or more dense layers.
 Output layer: Use softmax activation for multi-class classification.
4. Loss Function:
 Choose a suitable loss function for character recognition, such as categorical cross-
entropy.
5. Model Compilation:
 Compile the model:
 Select an optimizer (e.g., Adam, SGD).

 Set the chosen loss function.
 Specify evaluation metrics (e.g., accuracy).
6. Model Training:
 Train the CNN using the training dataset:
 Iterate through the dataset in mini-batches.
 Compute the loss and update model weights through backpropagation.
 Use the validation set to monitor the model's performance and apply early
stopping if necessary to prevent overfitting.
 Train for an appropriate number of epochs until convergence.
7. Model Evaluation:
 Evaluate the model on the test dataset:
 Calculate accuracy and other relevant metrics to assess its performance.
8. Inference:
 Use the trained model to recognize characters in new images:
 Preprocess the input image (resize, normalize, etc.).
 Pass the preprocessed image through the trained CNN to obtain character
predictions.
9. Post-processing (Optional):
 Post-processing steps may include correcting recognized characters or formatting the
output as needed, depending on your application.
10. Deployment:
 Deploy the trained model in your application for character recognition.

PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical
# Load and preprocess the CIFAR-10 dataset (CIFAR-10 contains objects, not characters;
# MNIST or EMNIST would be a closer fit for character recognition, but the pipeline is identical)
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# Build a simple CNN model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax')) # 10 classes in CIFAR-10
# Compile the model
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Train the model
model.fit(train_images, train_labels, epochs=5, batch_size=64)

# Evaluate the model on test data
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print(f'Test Accuracy: {test_accuracy * 100:.2f}%')

OUTPUT
Epoch 1/5
782/782 [==============================] - 46s 56ms/step - loss: 1.5831 - accuracy:
0.4202
Epoch 2/5
782/782 [==============================] - 46s 59ms/step - loss: 1.2138 - accuracy:
0.5681
Epoch 3/5
782/782 [==============================] - 46s 59ms/step - loss: 1.0725 - accuracy:
0.6196
Epoch 4/5
782/782 [==============================] - 43s 55ms/step - loss: 0.9716 - accuracy:
0.6585
Epoch 5/5
782/782 [==============================] - 35s 44ms/step - loss: 0.8996 - accuracy:
0.6845
313/313 [==============================] - 2s 7ms/step - loss: 0.9725 - accuracy: 0.6650
Test Accuracy: 66.50%
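
Step 8 of the algorithm (inference on a new image) is not exercised by the program above. A minimal sketch, assuming the trained model from this program is still in memory and reusing a test image as the "new" input:

import numpy as np

# CIFAR-10 class names in label order
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Take one preprocessed test image and add a batch dimension
img = np.expand_dims(test_images[0], axis=0)  # shape (1, 32, 32, 3)
probs = model.predict(img)[0]                 # softmax probabilities
print('Predicted class:', class_names[np.argmax(probs)])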

RESULT

Thus the program for character recognition using a CNN was executed and verified successfully.

Ex. No: 3
Face Recognition Using CNN
Date:

AIM
To write a Python program for face recognition using a CNN.

ALGORITHM
1. Data Preprocessing:
 Collect and preprocess face images, ensuring uniform size and data augmentation.
2. Data Split:
 Divide the dataset into training, validation, and test sets.
3. CNN Model:
 Design a CNN model architecture, possibly using pre-trained models like VGG or ResNet.
4. Model Training:
 Train the model with training data using categorical cross-entropy loss and an optimizer (e.g.,
Adam).
5. Model Evaluation:
 Assess model performance on the test set using accuracy and other relevant metrics.
6. Face Recognition:
 Feed an input face image to the trained model for recognition.
7. Post-processing:
 Apply thresholding or other post-processing to refine recognition results.
8. Deployment:
 Deploy the model in your application for real-time recognition.
9. Continuous Improvement:
 Periodically retrain the model with new data to enhance accuracy.
10. Privacy and Security:
 Consider privacy, security, and ethical implications in your implementation.

PROGRAM
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import accuracy_score
import itertools

# Load the dataset (ORL faces; the labels below cover 20 subjects)
data = np.load("ORL_faces.npz")

# Load the train images and normalize every image to [0, 1]
x_train = data['trainX']
x_train = np.array(x_train, dtype='float32') / 255
x_test = data['testX']
x_test = np.array(x_test, dtype='float32') / 255

# Load the labels of the images
y_train = data['trainY']
y_test = data['testY']

# Show the train and test data format
print('x_train : {}'.format(x_train[:]))
print('Y-train shape: {}'.format(y_train))
print('x_test shape: {}'.format(x_test.shape))

# Hold out 5% of the training data for validation
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train, y_train, test_size=.05, random_state=1234)

im_rows = 112
im_cols = 92
batch_size = 512
im_shape = (im_rows, im_cols, 1)

# Reshape the flat pixel vectors into 112x92 grayscale images
x_train = x_train.reshape(x_train.shape[0], *im_shape)
x_test = x_test.reshape(x_test.shape[0], *im_shape)
x_valid = x_valid.reshape(x_valid.shape[0], *im_shape)
print('x_train shape: {}'.format(y_train.shape[0]))
print('x_test shape: {}'.format(y_test.shape))

# filters = the depth of the output feature maps (kernels)
cnn_model = Sequential([
    Conv2D(filters=36, kernel_size=7, activation='relu', input_shape=im_shape),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=54, kernel_size=5, activation='relu'),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(2024, activation='relu'),
    Dropout(0.5),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(512, activation='relu'),
    Dropout(0.5),
    # 20 is the number of outputs (one per subject)
    Dense(20, activation='softmax')
])

cnn_model.compile(
    loss='sparse_categorical_crossentropy',  # integer labels, so the sparse variant
    optimizer=Adam(learning_rate=0.0001),    # 'lr' is deprecated in newer Keras
    metrics=['accuracy']
)
cnn_model.summary()

history = cnn_model.fit(
    np.array(x_train), np.array(y_train), batch_size=512,
    epochs=10, verbose=2,
    validation_data=(np.array(x_valid), np.array(y_valid)),
)
scor = cnn_model.evaluate(np.array(x_test), np.array(y_test), verbose=0)

OUTPUT
x_train : [[0.1882353 0.19215687 0.1764706 ... 0.18431373 0.18039216 0.18039216]
[0.23529412 0.23529412 0.24313726 ... 0.1254902 0.13333334 0.13333334]
[0.15294118 0.17254902 0.20784314 ... 0.11372549 0.10196079 0.11372549]
...
[0.44705883 0.45882353 0.44705883 ... 0.38431373 0.3764706 0.38431373]
[0.4117647 0.4117647 0.41960785 ... 0.21176471 0.18431373 0.16078432]
[0.45490196 0.44705883 0.45882353 ... 0.37254903 0.39215687 0.39607844]]
Y-train shape: [ 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1
 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3
 4 4 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 5 5
 6 6 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 7 7
 8 8 8 8 8 8 8 8 8 8 8 8 9 9 9 9 9 9 9 9 9 9 9 9
 10 10 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11 11 11
 12 12 12 12 12 12 12 12 12 12 12 12 13 13 13 13 13 13 13 13 13 13 13 13
 14 14 14 14 14 14 14 14 14 14 14 14 15 15 15 15 15 15 15 15 15 15 15 15
 16 16 16 16 16 16 16 16 16 16 16 16 17 17 17 17 17 17 17 17 17 17 17 17
 18 18 18 18 18 18 18 18 18 18 18 18 19 19 19 19 19 19 19 19 19 19 19 19]
x_test shape: (160, 10304)
x_train shape: 228
x_test shape: (160,)
Model: "sequential_12"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_18 (Conv2D) (None, 106, 86, 36) 1800
max_pooling2d_17 (MaxPooli (None, 53, 43, 36) 0
ng2D)
conv2d_19 (Conv2D) (None, 49, 39, 54) 48654
max_pooling2d_18 (MaxPooli (None, 24, 19, 54) 0
ng2D)

flatten_9 (Flatten) (None, 24624) 0

dense_29 (Dense) (None, 2024) 49841000

dropout_6 (Dropout) (None, 2024) 0


dense_30 (Dense) (None, 1024) 2073600
dropout_7 (Dropout) (None, 1024) 0

15
dense_31 (Dense) (None, 512) 524800
dropout_8 (Dropout) (None, 512) 0
dense_32 (Dense) (None, 20) 10260
=================================================================
Total params: 52500114 (200.27 MB)
Trainable params: 52500114 (200.27 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/10
1/1 - 4s - loss: 2.9976 - accuracy: 0.0702 - val_loss: 3.3532 - val_accuracy: 0.0833 - 4s/epoch - 4s/step
Epoch 2/10
1/1 - 3s - loss: 3.7595 - accuracy: 0.0526 - val_loss: 3.1484 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 3/10
1/1 - 3s - loss: 3.3301 - accuracy: 0.0789 - val_loss: 3.0502 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 4/10
1/1 - 3s - loss: 3.0069 - accuracy: 0.0570 - val_loss: 3.0185 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 5/10
1/1 - 3s - loss: 3.0039 - accuracy: 0.0702 - val_loss: 3.0031 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 6/10
1/1 - 3s - loss: 2.9852 - accuracy: 0.0789 - val_loss: 3.0038 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 7/10
1/1 - 3s - loss: 2.9818 - accuracy: 0.1053 - val_loss: 3.0061 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 8/10
1/1 - 3s - loss: 2.9769 - accuracy: 0.0570 - val_loss: 3.0060 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 9/10
1/1 - 3s - loss: 2.9736 - accuracy: 0.1096 - val_loss: 3.0057 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
Epoch 10/10
1/1 - 3s - loss: 2.9650 - accuracy: 0.0789 - val_loss: 3.0050 - val_accuracy: 0.0000e+00 - 3s/epoch - 3s/step
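
The program computes scor but never prints it, and the validation accuracy above stays at zero, which suggests the model has not converged with this batch size and epoch budget. A short evaluation sketch (assuming the objects from the program above are still in memory):

# Print held-out test loss and accuracy
print('Test loss: {:.4f}, test accuracy: {:.4f}'.format(scor[0], scor[1]))

# Per-class report using the classification_report import from the program
y_pred = np.argmax(cnn_model.predict(np.array(x_test)), axis=1)
print(classification_report(y_test, y_pred))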

RESULT
Thus the program for face recognition using a CNN was executed and verified successfully.

Ex. No: 4
Language Modeling Using RNN
Date:

AIM
To write a Python program for language modeling using an RNN.

ALGORITHM
1. Data Preparation:
 Collect a text corpus as your dataset. The corpus can be any large text document or collection
of text data.
 Preprocess the text data, including tokenization (splitting text into words or subwords),
lowercasing, and removing punctuation and special characters.
2. Vocabulary Building:
 Create a vocabulary of unique words or subwords present in the corpus. Assign a unique
index to each word in the vocabulary.
 You may use a pre-built tokenizer (e.g., TensorFlow's Tokenizer or spaCy) to handle this
step.
3. Text Encoding:
 Convert the text data into numerical sequences by mapping words to their corresponding
indices in the vocabulary.
 Create input-output pairs where each input is a sequence of words and the corresponding
output is the next word in the sequence.
4. Model Architecture:
 Build an RNN-based model. Common choices include simple RNN, LSTM (Long Short-
Term Memory), or GRU (Gated Recurrent Unit).
 The model architecture typically consists of an embedding layer, the recurrent layer, and a
dense layer for output.
5. Training Data Preparation:
 Create training batches by dividing the dataset into sequences of fixed length.
 Batch sequences for more efficient training.
6. Model Training:
 Train the RNN model using the training data, with the input sequence as features and the
target (next word) as the label.
 Use a categorical cross-entropy loss function and an optimizer like Adam or SGD.
 Train the model for multiple epochs.
7. Model Evaluation:
 Monitor the model's performance on a validation dataset, using metrics like perplexity or
accuracy.
 Early stopping can be used to prevent overfitting.
8. Text Generation:
 Use the trained model to generate text. Start with a seed sequence and predict the next word.
Append the predicted word to the sequence and repeat.
 You can use techniques like temperature to control the randomness of text generation.

9. Fine-Tuning (Optional):
 Fine-tune the model with different hyperparameters or by modifying the architecture to
improve performance.
10. Deployment:
 Deploy the trained language model for various NLP tasks, including text generation, language
translation, sentiment analysis, and more.
11. Continuous Improvement:
 Regularly update and retrain the model with new data to improve its language modeling
capabilities.

PROGRAM

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Define a sample text for training
text = """
This is a sample text for training an RNN-based language model.
You can replace it with your own text data for better results.
"""

# Create a vocabulary and map characters to integers
chars = sorted(list(set(text)))
char_to_int = {char: i for i, char in enumerate(chars)}
int_to_char = {i: char for i, char in enumerate(chars)}
vocab_size = len(chars)

# Prepare training data: each sample is seq_length characters, the label is the next character
seq_length = 100
X_data = []
y_data = []
for i in range(0, len(text) - seq_length, 1):
    seq_in = text[i:i + seq_length]
    seq_out = text[i + seq_length]
    X_data.append([char_to_int[char] for char in seq_in])
    y_data.append(char_to_int[seq_out])

X = np.reshape(X_data, (len(X_data), seq_length, 1))
X = X / float(vocab_size)
y = tf.keras.utils.to_categorical(y_data)

# Build the RNN model: two stacked LSTM layers and a softmax output over the vocabulary
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(LSTM(256))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Train the model
model.fit(X, y, epochs=10, batch_size=64)

OUTPUT
Epoch 1/10
1/1 [==============================] - 5s 5s/step - loss: 3.3397
Epoch 2/10
1/1 [==============================] - 1s 577ms/step - loss: 3.2726
Epoch 3/10
1/1 [==============================] - 1s 531ms/step - loss: 3.1765
Epoch 4/10
1/1 [==============================] - 1s 536ms/step - loss: 2.9643
Epoch 5/10
1/1 [==============================] - 1s 523ms/step - loss: 2.7935
Epoch 6/10
1/1 [==============================] - 1s 529ms/step - loss: 2.6736
Epoch 7/10
1/1 [==============================] - 1s 540ms/step - loss: 2.6244
Epoch 8/10
1/1 [==============================] - 1s 546ms/step - loss: 2.6235
Epoch 9/10
1/1 [==============================] - 1s 539ms/step - loss: 2.5957
Epoch 10/10
1/1 [==============================] - 1s 543ms/step - loss: 2.5634
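
Step 8 of the algorithm (text generation) is not covered by the program above. A minimal greedy-decoding sketch, assuming the trained model, char_to_int, int_to_char, vocab_size, and seq_length from this program:

# Seed with the first seq_length characters of the training text
pattern = [char_to_int[c] for c in text[:seq_length]]
generated = ''
for _ in range(100):  # generate 100 characters
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(vocab_size)
    probs = model.predict(x, verbose=0)[0]
    index = int(np.argmax(probs))  # greedy choice; sample instead for more variety
    generated += int_to_char[index]
    pattern.append(index)
    pattern = pattern[1:]  # slide the window forward by one character
print(generated)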

RESULT

Thus the program for language modeling using an RNN was executed and verified successfully.

Ex. No: 5
Sentiment Analysis Using LSTM
Date:
AIM

To write a Python program for sentiment analysis using an LSTM.

ALGORITHM
1. Data Preparation:
 Gather a labeled dataset with text samples and sentiment labels (e.g., positive/negative).
 Preprocess the text data, including tokenization and data cleaning.
2. Model Architecture:
 Build an LSTM-based model with the following layers:
 Embedding Layer: Convert tokenized text into dense vectors.
 LSTM Layer(s): Capture sequential patterns in the text.
 Dense Layer with Softmax Activation: Provide sentiment predictions.
3. Data Splitting:
 Split the dataset into training, validation, and test sets (e.g., 80-10-10 split).
4. Model Training:
 Compile the model with categorical cross-entropy loss and an optimizer (e.g., Adam).
 Train the model on the training data for a suitable number of epochs.
5. Model Evaluation:
 Assess the model's performance using metrics like accuracy, precision, recall, and F1-score on the
validation and test datasets.
6. Hyperparameter Tuning (Optional):
 Experiment with hyperparameters such as LSTM units, learning rate, and dropout rates to
optimize model performance.
7. Model Deployment:
 Deploy the trained sentiment analysis model for real-world applications, such as sentiment
classification in user-generated text.
8. Continuous Improvement:
 Continuously update and fine-tune the model with new data to adapt to evolving sentiment
patterns.
9. Monitoring and Maintenance:
 Monitor the model's performance and user feedback for ongoing quality assurance and relevance.

PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Sample dataset for sentiment analysis (you can replace it with your own dataset)
texts = [
    "I love this movie!",
    "This is great.",
    "I dislike it.",
    "This is terrible.",
]
labels = np.array([1, 1, 0, 0])  # 1 for positive, 0 for negative sentiment

# Tokenize and pad the sequences
max_words = 1000
max_sequence_length = 20
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
sequences = pad_sequences(sequences, maxlen=max_sequence_length)

# Build the LSTM model
model = Sequential()
model.add(Embedding(input_dim=max_words, output_dim=32, input_length=max_sequence_length))
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))  # binary classification (sigmoid activation)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train the model
model.fit(sequences, labels, epochs=10, batch_size=2)

# Perform sentiment analysis on new text
new_texts = [
    "This movie is fantastic!",
    "I can't stand this.",
]
new_sequences = tokenizer.texts_to_sequences(new_texts)
new_sequences = pad_sequences(new_sequences, maxlen=max_sequence_length)
predictions = model.predict(new_sequences)
for i, text in enumerate(new_texts):
    sentiment = "positive" if predictions[i] > 0.5 else "negative"
    print(f'Text: "{text}" - Sentiment: {sentiment} (Confidence: {predictions[i][0]:.4f})')

OUTPUT
Epoch 1/10
2/2 [==============================] - 2s 13ms/step - loss: 0.6908 - accuracy: 0.5000
Epoch 2/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6879 - accuracy: 1.0000
Epoch 3/10
2/2 [==============================] - 0s 11ms/step - loss: 0.6875 - accuracy: 0.7500
Epoch 4/10
2/2 [==============================] - 0s 14ms/step - loss: 0.6853 - accuracy: 0.7500
Epoch 5/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6852 - accuracy: 0.7500
Epoch 6/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6830 - accuracy: 0.7500
Epoch 7/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6792 - accuracy: 1.0000
Epoch 8/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6746 - accuracy: 1.0000
Epoch 9/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6788 - accuracy: 1.0000
Epoch 10/10
2/2 [==============================] - 0s 13ms/step - loss: 0.6718 - accuracy: 1.0000
1/1 [==============================] - 0s 246ms/step
Text: "This movie is fantastic!" - Sentiment: positive (Confidence: 0.5097)
Text: "I can't stand this." - Sentiment: negative (Confidence: 0.4962)

RESULT

Thus the program for sentiment analysis using an LSTM was executed and verified successfully.

Ex. No: 6
Parts of Speech Tagging Using Sequence-to-Sequence Architecture
Date:

AIM
To write a Python program for parts-of-speech tagging using a sequence-to-sequence architecture.

ALGORITHM

1. Data Preparation:
 Collect a dataset of sentences with associated POS tags.
2. Tokenization and Vocabulary Building:
 Tokenize sentences into words or subword tokens.
 Create a vocabulary of unique tokens from the training data.
3. Data Encoding:
 Convert tokens and POS tags into numerical representations.
4. Sequence-to-Sequence Model:
 Construct a sequence-to-sequence model, such as an RNN or transformer.
 The encoder processes token embeddings to create a hidden representation.
 The decoder generates POS tags for each token.
5. Training:
 Train the model using the dataset.
 Utilize a loss function like cross-entropy to measure prediction accuracy.
 Employ teacher forcing during training, using true POS tags as inputs.
6. Inference:
 For inference, input a sentence.
 Start with a "start of sequence" token and feed it to the decoder.
 Predict the next POS tag and repeat until an "end of sequence" token is generated.
7. Post-processing:
 Convert numerical POS tags back to human-readable forms.
 Remove any special tokens used for sequence generation.
8. Evaluation:
 Assess the model on a separate validation or test dataset with metrics like accuracy, precision,
recall, and F1-score.
9. Hyperparameter Tuning:
 Experiment with different model architectures, learning rates, batch sizes, and sequence
lengths to optimize performance.
10. Deployment:

 Deploy the model to tag POS in new, unseen sentences.

PROGRAM
import spacy

# Load the English model
nlp = spacy.load("en_core_web_sm")

# Sample text for POS tagging
text = "The quick brown fox jumps over the lazy dog."

# Process the text using spaCy
doc = nlp(text)

# Extract and print POS tags
for token in doc:
    print(f"Token: {token.text}, POS Tag: {token.pos_}")

OUTPUT
Token: The, POS Tag: DET
Token: quick, POS Tag: ADJ
Token: brown, POS Tag: ADJ
Token: fox, POS Tag: NOUN
Token: jumps, POS Tag: VERB
Token: over, POS Tag: ADP
Token: the, POS Tag: DET
Token: lazy, POS Tag: ADJ
Token: dog, POS Tag: NOUN
Token: ., POS Tag: PUNCT
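
Note that the program above relies on spaCy's pretrained tagger rather than a model trained with the sequence-to-sequence steps in the algorithm. As a companion, the following is a toy Keras sequence-labeling sketch (an encoder-only simplification of the encoder-decoder idea); the tiny dataset and vocabularies are illustrative assumptions, not part of the original record:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, TimeDistributed, Dense

# Toy parallel data: token sequences and their POS tags (illustrative only)
sentences = [['the', 'dog', 'barks'], ['a', 'cat', 'sleeps']]
tags = [['DET', 'NOUN', 'VERB'], ['DET', 'NOUN', 'VERB']]

word_index = {w: i + 1 for i, w in enumerate(sorted({w for s in sentences for w in s}))}
tag_index = {t: i for i, t in enumerate(sorted({t for ts in tags for t in ts}))}

X = np.array([[word_index[w] for w in s] for s in sentences])
y = np.array([[tag_index[t] for t in ts] for ts in tags])

# Embedding -> LSTM over the token sequence -> one softmax per time step
model = Sequential([
    Embedding(input_dim=len(word_index) + 1, output_dim=16),
    LSTM(32, return_sequences=True),
    TimeDistributed(Dense(len(tag_index), activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
model.fit(X, y, epochs=200, verbose=0)

inv_tag = {i: t for t, i in tag_index.items()}
pred = np.argmax(model.predict(X, verbose=0), axis=-1)
print([[inv_tag[i] for i in row] for row in pred])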

RESULT

Thus the program for parts-of-speech tagging using a sequence-to-sequence architecture was executed and verified successfully.

Ex. No: 7
Machine Translation Using Encoder-Decoder Model
Date:

AIM
To write a Python program for machine translation using an encoder-decoder model.

ALGORITHM
1. Data Preparation:
 Collect a parallel corpus with source language sentences and their corresponding translations
in the target language.
2. Tokenization and Vocabulary Building:
 Tokenize source and target sentences.
 Create separate vocabularies for the source and target languages.
3. Data Encoding:
 Convert tokens into numerical representations, e.g., using embeddings.
4. Encoder-Decoder Model:
 Define an encoder-decoder architecture.
 The encoder processes the source sentence to create a hidden representation.
 The decoder generates the target sentence based on the encoder's output.
5. Training:
 Train the model on the parallel corpus.
 Use a loss function (e.g., cross-entropy) to measure the difference between predicted
translations and actual translations.
 Implement teacher forcing during training, using true target translations as decoder inputs.
6. Inference:
 For translation, input a source sentence into the encoder.
 Start the decoder with a "start of sequence" token and use the encoder's output.
 Predict the next token and continue until an "end of sequence" token is generated or a
maximum sequence length is reached.
7. Post-processing:
 Convert numerical tokens back into human-readable translations.
 Remove any special tokens used for sequence generation.
8. Evaluation:
 Evaluate translation quality with metrics such as BLEU score or human assessment.
9. Hyperparameter Tuning:
 Experiment with different hyperparameters (model architecture, learning rate, batch size) to
optimize translation quality.
10. Deployment:
 Deploy the trained model for translating text between source and target languages.

PROGRAM
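
No program was included in the record for this experiment. The following is a minimal character-level encoder-decoder sketch in Keras that follows the algorithm above (one-hot encoding, teacher forcing during training, greedy decoding at inference); the toy English-French pairs are an illustrative assumption, not the original submission.

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# Toy parallel corpus (illustrative)
pairs = [('hi', 'salut'), ('go', 'va'), ('run', 'cours'), ('stop', 'arrete')]
src_texts = [s for s, _ in pairs]
tgt_texts = ['\t' + t + '\n' for _, t in pairs]  # \t = start token, \n = end token

src_chars = sorted({c for s in src_texts for c in s})
tgt_chars = sorted({c for t in tgt_texts for c in t})
src_idx = {c: i for i, c in enumerate(src_chars)}
tgt_idx = {c: i for i, c in enumerate(tgt_chars)}
max_src = max(len(s) for s in src_texts)
max_tgt = max(len(t) for t in tgt_texts)

# One-hot encode source, decoder input, and decoder target (shifted by one step)
enc_in = np.zeros((len(pairs), max_src, len(src_chars)), dtype='float32')
dec_in = np.zeros((len(pairs), max_tgt, len(tgt_chars)), dtype='float32')
dec_out = np.zeros((len(pairs), max_tgt, len(tgt_chars)), dtype='float32')
for i, (s, t) in enumerate(zip(src_texts, tgt_texts)):
    for j, c in enumerate(s):
        enc_in[i, j, src_idx[c]] = 1.
    for j, c in enumerate(t):
        dec_in[i, j, tgt_idx[c]] = 1.
        if j > 0:
            dec_out[i, j - 1, tgt_idx[c]] = 1.  # teacher forcing target

latent = 64
encoder_inputs = Input(shape=(None, len(src_chars)))
_, state_h, state_c = LSTM(latent, return_state=True)(encoder_inputs)
decoder_inputs = Input(shape=(None, len(tgt_chars)))
decoder_lstm = LSTM(latent, return_sequences=True, return_state=True)
dec_seq, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_dense = Dense(len(tgt_chars), activation='softmax')

model = Model([encoder_inputs, decoder_inputs], decoder_dense(dec_seq))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit([enc_in, dec_in], dec_out, epochs=300, verbose=0)

# Greedy decoding at inference, reusing the trained layers
encoder_model = Model(encoder_inputs, [state_h, state_c])
dh, dc = Input(shape=(latent,)), Input(shape=(latent,))
dout, h_out, c_out = decoder_lstm(decoder_inputs, initial_state=[dh, dc])
decoder_model = Model([decoder_inputs, dh, dc], [decoder_dense(dout), h_out, c_out])

def translate(text):
    x = np.zeros((1, max_src, len(src_chars)), dtype='float32')
    for j, ch in enumerate(text):
        x[0, j, src_idx[ch]] = 1.
    h, c = encoder_model.predict(x, verbose=0)
    target = np.zeros((1, 1, len(tgt_chars)), dtype='float32')
    target[0, 0, tgt_idx['\t']] = 1.  # start-of-sequence token
    result = ''
    for _ in range(max_tgt):
        probs, h, c = decoder_model.predict([target, h, c], verbose=0)
        ch = tgt_chars[int(np.argmax(probs[0, -1]))]
        if ch == '\n':  # end-of-sequence token
            break
        result += ch
        target = np.zeros((1, 1, len(tgt_chars)), dtype='float32')
        target[0, 0, tgt_idx[ch]] = 1.
    return result

print('go ->', translate('go'))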

OUTPUT

RESULT
Thus the program for machine translation using an encoder-decoder model was executed and verified successfully.

Ex. No: 8
Image Augmentation Using GANs
Date:

AIM
To write a Python program for image augmentation using GANs.

ALGORITHM
1. Data Preparation:
 Gather a dataset of images that you want to augment.
2. GAN Training for Image Generation:
Generator Network:
 Define the generator network, which will create synthetic images. This network typically uses
convolutional layers and may be based on architectures like DCGAN or StyleGAN.
Discriminator Network:
 Create a discriminator network to distinguish between real and generated images. The
discriminator is trained to classify images as real or fake.
Loss Function:
 Implement the adversarial loss function, which encourages the generator to create images that
are indistinguishable from real images, and the discriminator to correctly classify real and
fake images.
Training:
 Train the GAN by optimizing both the generator and discriminator networks alternatively
until the generator creates convincing synthetic images. This process is iterative.
3. Image Generation:
 Use the trained generator to create synthetic images. You can sample from the generator by
providing random noise as input.
4. Image Blending:
 Blend the synthetic images with your original dataset. You can control the extent of blending,
e.g., by specifying the ratio of synthetic to real images in your augmented dataset.
5. Data Augmentation:
 Apply traditional data augmentation techniques to the augmented dataset, such as rotation,
scaling, cropping, and flipping, to create additional variations.
6. Label Propagation:
 Ensure that labels are correctly propagated for the synthetic images in your augmented
dataset. You may need to apply the same labels as the real images or assign appropriate labels
based on the problem context.
7. Training:
 Train your machine learning model, e.g., a convolutional neural network (CNN), on the
augmented dataset. This helps the model generalize better, as it has seen more variations in
the data.
8. Evaluation:
 Evaluate the performance of your machine learning model on a validation or test dataset to
assess the impact of GAN-based augmentation on the model's accuracy and robustness.
9. Fine-tuning (Optional):

 If your model doesn't perform as expected, consider fine-tuning the GAN, adjusting
hyperparameters, or using different architectures to generate more diverse synthetic data.
10. Deployment:
 Deploy your machine learning model for its intended application.

PROGRAM
# Importing the necessary libraries
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Defining the generator model
def generator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(256, input_shape=(100,)))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
    model.add(tf.keras.layers.BatchNormalization(momentum=0.8))
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
    model.add(tf.keras.layers.BatchNormalization(momentum=0.8))
    model.add(tf.keras.layers.Dense(1024))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
    model.add(tf.keras.layers.BatchNormalization(momentum=0.8))
    model.add(tf.keras.layers.Dense(np.prod(img_shape), activation='tanh'))
    model.add(tf.keras.layers.Reshape(img_shape))
    model.summary()
    return model

# Defining the discriminator model
def discriminator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=img_shape))
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
    model.add(tf.keras.layers.Dense(256))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    model.summary()
    return model

# Defining the combined GAN model (generator followed by discriminator)
def gan_model(generator, discriminator):
    model = tf.keras.Sequential()
    model.add(generator)
    model.add(discriminator)
    model.summary()
    return model

# Defining the training process
def train(epochs, batch_size, sample_interval):
    # Loading the dataset
    (X_train, _), (_, _) = tf.keras.datasets.mnist.load_data()
    # Rescaling the images to [-1, 1] to match the tanh generator output
    X_train = X_train / 127.5 - 1.
    X_train = np.expand_dims(X_train, axis=3)
    # Defining the labels
    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    # Defining the training loop
    for epoch in range(epochs):
        # Training the discriminator on a real batch and a generated batch
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]
        noise = np.random.normal(0, 1, (batch_size, 100))
        gen_imgs = generator.predict(noise)
        d_loss_real = discriminator.train_on_batch(imgs, real)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        # Training the generator (via the combined model, with "real" labels)
        noise = np.random.normal(0, 1, (batch_size, 100))
        g_loss = gan.train_on_batch(noise, real)
        # Printing the losses
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
        # Saving the generated images
        if epoch % sample_interval == 0:
            sample_images(epoch)

# Defining the function to generate and save images
def sample_images(epoch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    # Rescaling the images back to [0, 1] for display
    gen_imgs = 0.5 * gen_imgs + 0.5
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    plt.show()

# Defining the parameters
img_rows = 28
img_cols = 28
channels = 1
img_shape = (img_rows, img_cols, channels)
latent_dim = 100

# Building the models
generator = generator_model()
discriminator = discriminator_model()
gan = gan_model(generator, discriminator)

# Compiling the models
# (In a standard GAN setup, discriminator.trainable would be set to False before
# compiling the combined model so gan.train_on_batch updates only the generator.)
discriminator.compile(loss='binary_crossentropy',
                      optimizer=tf.keras.optimizers.Adam(0.0002, 0.5),
                      metrics=['accuracy'])
gan.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(0.0002, 0.5))

# Training the model
train(epochs=10, batch_size=32, sample_interval=200)

35
OUTPUT
Model: "sequential"
_________________________________________________________________
 Layer (type)                                Output Shape        Param #
=================================================================
 dense (Dense)                               (None, 256)         25856
 leaky_re_lu (LeakyReLU)                     (None, 256)         0
 batch_normalization (BatchNormalization)    (None, 256)         1024
 dense_1 (Dense)                             (None, 512)         131584
 leaky_re_lu_1 (LeakyReLU)                   (None, 512)         0
 batch_normalization_1 (BatchNormalization)  (None, 512)         2048
 dense_2 (Dense)                             (None, 1024)        525312
 leaky_re_lu_2 (LeakyReLU)                   (None, 1024)        0
 batch_normalization_2 (BatchNormalization)  (None, 1024)        4096
 dense_3 (Dense)                             (None, 784)         803600
 reshape (Reshape)                           (None, 28, 28, 1)   0
=================================================================
Total params: 1493520 (5.70 MB)
Trainable params: 1489936 (5.68 MB)
Non-trainable params: 3584 (14.00 KB)
_________________________________________________________________
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                                Output Shape        Param #
=================================================================
 flatten (Flatten)                           (None, 784)         0
 dense_4 (Dense)                             (None, 512)         401920
 leaky_re_lu_3 (LeakyReLU)                   (None, 512)         0
 dense_5 (Dense)                             (None, 256)         131328
 leaky_re_lu_4 (LeakyReLU)                   (None, 256)         0
 dense_6 (Dense)                             (None, 1)           257
=================================================================
Total params: 533505 (2.04 MB)
Trainable params: 533505 (2.04 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                                Output Shape        Param #
=================================================================
 sequential (Sequential)                     (None, 28, 28, 1)   1493520
 sequential_1 (Sequential)                   (None, 1)           533505
=================================================================
Total params: 2027025 (7.73 MB)
Trainable params: 2023441 (7.72 MB)
Non-trainable params: 3584 (14.00 KB)
_________________________________________________________________

1/1 [==============================] - 0s 362ms/step
0 [D loss: 0.875573, acc.: 35.94%] [G loss: 0.802773]
1/1 [==============================] - 0s 215ms/step
1/1 [==============================] - 0s 48ms/step
1 [D loss: 0.371252, acc.: 84.38%] [G loss: 0.676736]
1/1 [==============================] - 0s 48ms/step
2 [D loss: 0.374005, acc.: 67.19%] [G loss: 0.537242]
1/1 [==============================] - 0s 38ms/step
3 [D loss: 0.395670, acc.: 60.94%] [G loss: 0.489162]
1/1 [==============================] - 0s 40ms/step
4 [D loss: 0.416983, acc.: 59.38%] [G loss: 0.426387]
1/1 [==============================] - 0s 42ms/step
5 [D loss: 0.455045, acc.: 54.69%] [G loss: 0.470056]
1/1 [==============================] - 0s 43ms/step
6 [D loss: 0.461933, acc.: 54.69%] [G loss: 0.454022]
1/1 [==============================] - 0s 39ms/step
7 [D loss: 0.454959, acc.: 57.81%] [G loss: 0.506700]
1/1 [==============================] - 0s 43ms/step
8 [D loss: 0.456054, acc.: 57.81%] [G loss: 0.480617]
1/1 [==============================] - 0s 37ms/step
9 [D loss: 0.480623, acc.: 51.56%] [G loss: 0.534445]
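
Steps 3 and 4 of the algorithm (generating synthetic images and blending them into the real dataset) are not shown in the program above. A short sketch, assuming the trained generator and latent_dim from the program; the MNIST data is reloaded here because X_train is local to the training function:

# Generate 1000 synthetic images from random noise
noise = np.random.normal(0, 1, (1000, latent_dim))
synthetic = generator.predict(noise)

# Blend synthetic and real images into one augmented (unlabeled) training set
(X_train, _), (_, _) = tf.keras.datasets.mnist.load_data()
X_train = np.expand_dims(X_train / 127.5 - 1., axis=3)
augmented = np.concatenate([X_train, synthetic], axis=0)
np.random.shuffle(augmented)
print('Augmented set shape:', augmented.shape)  # real + synthetic samples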

RESULT

Thus the program for image augmentation using GANs was executed and verified successfully.

