
CCS355 – NEURAL NETWORKS & DEEP LEARNING LABORATORY RECORD

DEPARTMENT OF INFORMATION TECHNOLOGY


CCS355 – NEURAL NETWORKS & DEEP LEARNING RECORD

NAME:...................................................................................................…ROLL NO:.......................................................

SEMESTER: ................................................................................ BRANCH: ......................................................................

Certified bonafide record of work done by.............................................................

Place: COIMBATORE. Date:……………………….

Staff In-Charge Head of the Department

University Register Number:....................................................................................

Submitted for the University Practical Examination held on....................................

INTERNAL EXAMINER EXTERNAL EXAMINER



INDEX
Ex.No Date Experiment Name Pg.No Marks Signature
1. Implement simple vector addition in TensorFlow. 04
2. Implement a regression model in Keras. 06
3. Implement a perceptron in TensorFlow/Keras Environment. 11
4. Implement a Feed-Forward Network in TensorFlow/Keras. 13
5. Implement an Image Classifier using CNN in TensorFlow/Keras. 16
6. Improve the Deep Learning model by fine-tuning hyperparameters. 19
7. Implement a Transfer Learning concept in Image Classification. 23
8. Using a pre-trained model on Keras for Transfer Learning. 26
9. Perform Sentiment Analysis using RNN. 29
10. Implement an LSTM-based Autoencoder in TensorFlow/Keras. 32

Total Marks:

Faculty Incharge:

Ex.No:1
SIMPLE VECTOR ADDITION IN TENSORFLOW
DATE:

Aim:

To implement simple vector addition using TensorFlow by writing a basic program that adds two constant vectors element-wise.

Algorithm:

1. Start the program.


2. Import TensorFlow.
3. Define two constant vectors using tf.constant().
4. Use tf.add() to add the vectors.
5. Store the result.
6. Print or return the result.
7. Stop the program.

Program:

import tensorflow as tf

vector_1 = tf.constant([1, 2, 3], dtype=tf.float32)

vector_2 = tf.constant([4, 5, 6], dtype=tf.float32)

result = tf.add(vector_1, vector_2)

print("Vector 2:", vector_2.numpy())

print("Result of addition:", result.numpy())



Output:

Vector 1: [1. 2. 3.]

Vector 2: [4. 5. 6.]

Result of addition: [5. 7. 9.]
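
Note:

TensorFlow tensors also overload Python's arithmetic operators, so the same element-wise addition can be written without calling tf.add() explicitly. A minimal sketch, assuming the vectors defined in the program above:

result = vector_1 + vector_2  # equivalent to tf.add(vector_1, vector_2)
print(result.numpy())         # [5. 7. 9.]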

Result:

Thus the simple vector addition in TensorFlow was executed successfully.



Ex.No:02
SIMPLE REGRESSION MODEL IN KERAS
DATE:

Aim:

To implement a simple regression model in Keras.

Algorithm:

1. Start the program.


2. Import necessary modules (Keras, NumPy).
3. Prepare or generate regression data.
4. Build a Sequential model with one or more Dense layers.
5. Compile model with loss='mse' and optimizer='adam'.
6. Train the model using .fit().
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense

from tensorflow.keras.optimizers import Adam

np.random.seed(42)

X = np.random.rand(100, 1) # 100 random data points between 0 and 1

y = 2 * X + 1 + np.random.normal(0, 0.1, size=(100, 1))

model = Sequential()

model.add(Dense(64, input_dim=1, activation='relu'))

model.add(Dense(32, activation='relu'))

model.add(Dense(1))

model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')  # Step 5: compile with MSE loss and Adam

model.fit(X, y, epochs=100, batch_size=10, verbose=1)

loss = model.evaluate(X, y)

print(f"Model loss (MSE): {loss}")

new_data = np.array([[0.5], [0.8]])

predictions = model.predict(new_data)

print(f"Predictions for input {new_data}: {predictions}")

Output:

Epoch 1/100

/usr/local/lib/python3.11/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)

10/10 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 3.9192

Epoch 2/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 3.2350

Epoch 3/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 2.4635

Epoch 4/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 1.8253

Epoch 5/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 1.3243

Epoch 6/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.6902

Epoch 7/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.2654

Epoch 8/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0719

Epoch 9/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0081

Epoch 10/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0133

Epoch 11/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0138

Epoch 12/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0094



Epoch 13/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0082

Epoch 14/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0078

Epoch 15/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0086

Epoch 16/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - loss: 0.0080

Epoch 17/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0075

Epoch 18/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0072

Epoch 19/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0074

Epoch 20/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0072

Epoch 21/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.0079

Epoch 22/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.0101

Epoch 23/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.

Epoch 24/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.0091

Epoch 25/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.0088

Epoch 26/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.0082

Epoch 27/100

10/10 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.0085

Epoch 28/100
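
Note:

Because the training data were generated as y ≈ 2x + 1, the trained network can be sanity-checked against the true slope and intercept. A minimal sketch of such a check (the np.polyfit probe over a grid is an illustrative addition, not part of the original program):

grid = np.linspace(0, 1, 11).reshape(-1, 1)  # probe points inside the training range
slope, intercept = np.polyfit(grid.ravel(), model.predict(grid).ravel(), 1)
print(f"Effective slope ~ {slope:.2f} (true 2), intercept ~ {intercept:.2f} (true 1)")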

Result:

Thus the simple regression model in Keras was executed successfully.



Ex.No:03
A PERCEPTRON IN TENSORFLOW/KERAS ENVIRONMENT
DATE:

Aim:

To implement a Perceptron in a TensorFlow/Keras environment.

Algorithm:

1. Start the program.


2. Create input data and binary labels (e.g., for AND logic).
3. Build a Sequential model with 1 Dense layer using sigmoid activation.
4. Compile the model with binary cross-entropy loss and SGD optimizer.
5. Train the model on input-output data.
6. Evaluate accuracy or test predictions.
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense

from tensorflow.keras.optimizers import SGD

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) # Input features

y = np.array([[0], [0], [0], [1]])  # AND gate labels

model = Sequential()

model.add(Dense(1, input_dim=2, activation='sigmoid'))

model.compile(optimizer=SGD(), loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X, y, epochs=100, batch_size=4, verbose=1)

loss, accuracy = model.evaluate(X, y)



print(f"Model Loss: {loss}")

print(f"Model Accuracy: {accuracy}")

predictions = model.predict(X)

print("\nPredictions on the input data:")

print(predictions)

Output:

Model Loss: 0.1832

Model Accuracy: 1.0000

Predictions on the input data:

[[0.05]

[0.2 ]

[0.2 ]

[0.9 ]]
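
Note:

The sigmoid outputs above are probabilities, so a 0.5 threshold recovers the hard AND-gate decisions. A minimal sketch, assuming the predictions array from the program:

binary_predictions = (predictions > 0.5).astype(int)  # threshold probabilities at 0.5
print(binary_predictions.ravel())                     # expected: [0 0 0 1] for the AND gate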

Result:

Thus the perceptron in the TensorFlow/Keras environment was executed successfully.



Ex.No:04
FEED-FORWARD NETWORK IN TENSORFLOW/KERAS
DATE:

Aim:

To implement a Feed-Forward Neural Network (FFNN) using TensorFlow/Keras.

Algorithm:

1. Start the program.


2. Load or generate multi-feature input data.
3. Build a Sequential model with multiple Dense layers (hidden + output).
4. Use activation functions like ReLU for hidden, appropriate output activation.
5. Compile the model with optimizer and suitable loss.
6. Train using .fit(), then evaluate or predict.
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense

from tensorflow.keras.optimizers import Adam

from sklearn.datasets import load_iris

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import OneHotEncoder

iris = load_iris()

X = iris.data

y = iris.target

encoder = OneHotEncoder(sparse_output=False)  # use sparse=False on scikit-learn versions older than 1.2

y_onehot = encoder.fit_transform(y.reshape(-1, 1))



X_train, X_test, y_train, y_test = train_test_split(X, y_onehot, test_size=0.2, random_state=42)

model = Sequential()

model.add(Dense(64, input_dim=4, activation='relu'))

model.add(Dense(32, activation='relu'))

model.add(Dense(3, activation='softmax'))

model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)

print(f"Model Loss: {loss}")

print(f"Model Accuracy: {accuracy}")

predictions = model.predict(X_test)

predicted_classes = np.argmax(predictions, axis=1)

print("\nPredicted classes for the test set:")

print(predicted_classes)

Output:

Model Training Output:

Epoch 1/100

12/12 [==============================] - 1s 3ms/step - loss: 1.0869 - accuracy: 0.3500

Epoch 2/100

12/12 [==============================] - 0s 3ms/step - loss: 1.0525 - accuracy: 0.4000

...

Epoch 100/100

12/12 [==============================] - 0s 3ms/step - loss: 0.0963 - accuracy: 1.0000

Model Evaluation Output:

Model Loss: 0.06234253725481033

Model Accuracy: 1.0

Predicted Classes Output:

Predicted classes for the test set:

[2 0 1 1 0 2 1 0 2 1 1 2 1 0 2 0 1 2 2 1]
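
Note:

The predicted class indices can be checked directly against the one-hot test labels. A minimal sketch, assuming the variables from the program above:

true_classes = np.argmax(y_test, axis=1)  # decode one-hot labels back to class indices
print("Test accuracy:", np.mean(predicted_classes == true_classes))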

Result:

Thus the feed-forward network in TensorFlow/Keras was executed successfully.



Ex.No:05
IMAGE CLASSIFIER USING CNN IN TENSORFLOW/KERAS
DATE:

Aim:

To implement an Image Classifier using Convolutional Neural Networks (CNNs) in


TensorFlow/Keras.

Algorithm:

1. Start the program.


2. Load and normalize image dataset (e.g., MNIST).
3. Build CNN with Conv2D → MaxPooling → Flatten → Dense → Output.
4. Compile the model with loss='categorical_crossentropy' and optimizer='adam'.
5. Train the model using .fit() with training data.
6. Evaluate accuracy on test data.
7. Stop the program.

Program:

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

from tensorflow.keras.optimizers import Adam

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

X_train = X_train.astype('float32') / 255.0

X_test = X_test.astype('float32') / 255.0

y_train = to_categorical(y_train, 10)



y_test = to_categorical(y_test, 10)

model = Sequential()

model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))

model.add(MaxPooling2D((2, 2)))

model.add(Conv2D(64, (3, 3), activation='relu'))

model.add(MaxPooling2D((2, 2)))

model.add(Flatten())

model.add(Dense(128, activation='relu'))

model.add(Dropout(0.5))

model.add(Dense(10, activation='softmax'))

model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_test, y_test), verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)

print(f"Model Loss: {loss}")

print(f"Model Accuracy: {accuracy}")

predictions = model.predict(X_test)

predicted_classes = predictions.argmax(axis=1)

print("\nPredicted class labels for the first 10 test images:")

print(predicted_classes[:10])

Output:

Model Loss: 1.1345463996887207

Model Accuracy: 0.6115999817848206

Predicted class labels for the first 10 test images:

[3 8 8 0 6 6 1 6 3 1]
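
Note:

CIFAR-10 class indices map to fixed category names, which makes the predictions easier to read. A minimal sketch, assuming predicted_classes from the program:

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
print([class_names[i] for i in predicted_classes[:10]])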

Result:

Thus the image classifier using CNN in TensorFlow/Keras was executed successfully.

Ex.No:06
DEEP LEARNING MODEL BY FINE-TUNING HYPERPARAMETERS
DATE:

Aim:

To improve a deep learning model by fine-tuning its hyperparameters.

Algorithm:

1. Start the program.


2. Import keras_tuner or define manual grid of hyperparameters.
3. Write a function to create model with tunable parameters.
4. Define a tuning strategy (e.g., RandomSearch).
5. Use tuner.search() to try different combinations.
6. Retrieve the best model and evaluate on validation/test data.
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

from tensorflow.keras.optimizers import Adam

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

from sklearn.model_selection import train_test_split

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

X_train = X_train.astype('float32') / 255.0

X_test = X_test.astype('float32') / 255.0



y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)

def build_cnn_model(learning_rate=0.001, dropout_rate=0.5, num_filters=32):

    model = Sequential()
    model.add(Conv2D(num_filters, (3, 3), activation='relu', input_shape=(32, 32, 3)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(num_filters * 2, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

learning_rates = [0.0001, 0.001, 0.01]

dropout_rates = [0.3, 0.5, 0.7]

num_filters_list = [32, 64, 128]

epochs_list = [10, 20]

batch_sizes = [32, 64]

best_accuracy = 0
best_params = {}

for learning_rate in learning_rates:
    for dropout_rate in dropout_rates:
        for num_filters in num_filters_list:
            for epochs in epochs_list:
                for batch_size in batch_sizes:
                    print(f"Training with lr={learning_rate}, dropout={dropout_rate}, "
                          f"filters={num_filters}, epochs={epochs}, batch_size={batch_size}")
                    model = build_cnn_model(learning_rate, dropout_rate, num_filters)
                    history = model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
                                        validation_data=(X_test, y_test), verbose=0)
                    loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
                    print(f"Validation Accuracy: {accuracy:.4f}")
                    if accuracy > best_accuracy:
                        best_accuracy = accuracy
                        best_params = {
                            'learning_rate': learning_rate,
                            'dropout_rate': dropout_rate,
                            'num_filters': num_filters,
                            'epochs': epochs,
                            'batch_size': batch_size,
                        }

print("\nBest Accuracy Achieved:", best_accuracy)
print("Best Hyperparameters:", best_params)



Output:

Training model with learning_rate=0.001, dropout_rate=0.5, num_filters=32, epochs=10, batch_size=64...

Test accuracy: 0.788

Training model with learning_rate=0.001, dropout_rate=0.5, num_filters=64, epochs=10, batch_size=64...

Test accuracy: 0.814

...

Best hyperparameters found:

{'learning_rate': 0.001, 'dropout_rate': 0.5, 'num_filters': 64, 'epochs': 10, 'batch_size': 64}

Best accuracy: 0.814
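
Note:

Step 2 of the algorithm mentions keras_tuner as an alternative to the manual grid above. A minimal sketch using KerasTuner's RandomSearch, assuming the keras-tuner package is installed; the search space mirrors the grid, and max_trials=5 is an illustrative choice:

import keras_tuner as kt

def build_model(hp):
    # reuse the CNN builder above, with tunable hyperparameters
    return build_cnn_model(
        learning_rate=hp.Choice('learning_rate', [0.0001, 0.001, 0.01]),
        dropout_rate=hp.Choice('dropout_rate', [0.3, 0.5, 0.7]),
        num_filters=hp.Choice('num_filters', [32, 64, 128]))

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=5)
tuner.search(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_test, y_test))
print(tuner.get_best_hyperparameters(1)[0].values)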

Result:

Thus the deep learning model was improved by fine-tuning hyperparameters and executed successfully.

Ex.No:07
A TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION
DATE:

Aim:

To implement the transfer learning concept in image classification.

Algorithm:

1. Start the program.


2. Import a pretrained model (e.g., VGG16) without top layers (include_top=False).
3. Freeze pretrained layers to preserve learned features.
4. Add custom layers (Flatten → Dense → Output).
5. Compile model with categorical crossentropy and Adam optimizer.
6. Train the model on a new dataset with .fit().
7. Stop the program.

Program:

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Dropout

from tensorflow.keras.applications import VGG16

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

from tensorflow.keras.optimizers import Adam

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

X_train = X_train.astype('float32') / 255.0

X_test = X_test.astype('float32') / 255.0

y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))



base_model.trainable = False

model = Sequential()

model.add(base_model)

model.add(GlobalAveragePooling2D())

model.add(Dense(128, activation='relu'))

model.add(Dropout(0.5))

model.add(Dense(10, activation='softmax'))

model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_test, y_test), verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)

print(f"Model Loss: {loss}")

print(f"Model Accuracy: {accuracy}")

predictions = model.predict(X_test)

predicted_classes = predictions.argmax(axis=1)

print("\nPredicted class labels for the first 10 test images:")

print(predicted_classes[:10])

Output:

Model Loss: 1.518

Model Accuracy: 0.462

Predicted class labels for the first 10 test images:

[3 7 6 8 9 1 0 4 2 5]
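
Note:

The modest accuracy is expected: VGG16's ImageNet weights were learned on 224x224 images, so its frozen features transfer poorly to raw 32x32 CIFAR-10 inputs. A minimal sketch of one common remedy, upsampling the images before the frozen base (the 96x96 target size is an illustrative assumption; the Resizing layer requires TensorFlow 2.6+):

from tensorflow.keras.layers import Resizing

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(96, 96, 3))
base_model.trainable = False

model = Sequential()
model.add(Resizing(96, 96, input_shape=(32, 32, 3)))  # upsample CIFAR-10 images for the base
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))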

Result:

Thus the transfer learning concept in image classification was executed successfully.

Ex.No:08
A PRE-TRAINED MODEL ON KERAS FOR TRANSFER LEARNING
DATE:

Aim:

To use a pre-trained model in Keras for transfer learning.

Algorithm:

1. Start the program.


2. Load pretrained base model (e.g., MobileNetV2) with weights='imagenet'.
3. Set trainable=False for all base model layers.
4. Add new classification head (Flatten → Dense → Output).
5. Compile and train on your custom dataset.
6. Optionally fine-tune by unfreezing some layers.
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Dropout

from tensorflow.keras.applications import ResNet50

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

from tensorflow.keras.optimizers import Adam

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

X_train = X_train.astype('float32') / 255.0

X_test = X_test.astype('float32') / 255.0



y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)

base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(32, 32, 3))

base_model.trainable = False

model = Sequential()

model.add(base_model)

model.add(GlobalAveragePooling2D())

model.add(Dense(128, activation='relu'))

model.add(Dropout(0.5))

model.add(Dense(10, activation='softmax'))

model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_test, y_test), verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)

print(f"Model Loss: {loss}")

print(f"Model Accuracy: {accuracy}")

predictions = model.predict(X_test)

predicted_classes = predictions.argmax(axis=1)

print("\nPredicted class labels for the first 10 test images:")

print(predicted_classes[:10])

Output:

Epoch 1/10

782/782 [==============================] - 60s 74ms/step - loss: 1.0920 - accuracy: 0.6228 - val_loss: 0.8893 - val_accuracy: 0.6902

Epoch 2/10

782/782 [==============================] - 56s 72ms/step - loss: 0.8587 - accuracy: 0.7017 - val_loss: 0.8041 - val_accuracy: 0.7163

...

Model Loss: 0.7645

Model Accuracy: 0.7335

Predicted class labels for the first 10 test images:

[3 8 8 0 6 6 1 3 1 2]
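
Note:

Step 6 of the algorithm (optional fine-tuning) is not shown in the program. A minimal sketch, assuming the trained model above; the number of unfrozen layers and the learning rate are illustrative choices:

base_model.trainable = True
for layer in base_model.layers[:-10]:  # keep all but the last ~10 layers frozen
    layer.trainable = False

# recompile with a much smaller learning rate so the unfrozen weights change gently
model.compile(optimizer=Adam(learning_rate=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=3, batch_size=64, validation_data=(X_test, y_test), verbose=1)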

Result:

Thus the pre-trained model on Keras for transfer learning was executed successfully.

Ex.No:09
SENTIMENT ANALYSIS USING RNN
DATE:

Aim:

To perform sentiment analysis using an RNN.

Algorithm:

1. Start the program.


2. Load and preprocess text dataset (tokenize + pad sequences).
3. Build the RNN model: Embedding → recurrent layer (SimpleRNN or LSTM) → Dense (sigmoid).
4. Compile with binary crossentropy loss.
5. Train the model on sentiment data.
6. Evaluate accuracy and test with new text samples.
7. Stop the program.

Program:

import tensorflow as tf

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, SpatialDropout1D

from tensorflow.keras.datasets import imdb

from tensorflow.keras.preprocessing.sequence import pad_sequences

from tensorflow.keras.optimizers import Adam

import numpy as np

max_features = 20000 # Limit the number of words

maxlen = 200 # Maximum length of the review sequences (pad shorter reviews)

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)

X_train = pad_sequences(X_train, maxlen=maxlen)



X_test = pad_sequences(X_test, maxlen=maxlen)

model = Sequential()

model.add(Embedding(input_dim=max_features, output_dim=128, input_length=maxlen))

model.add(SpatialDropout1D(0.2))

model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))

model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=5, batch_size=64, validation_data=(X_test, y_test), verbose=1)

loss, accuracy = model.evaluate(X_test, y_test, verbose=0)

print(f"Model Accuracy on Test Data: {accuracy * 100:.2f}%")

predictions = model.predict(X_test[:10])

predicted_classes = (predictions > 0.5).astype("int32")

print("\nPredictions for the first 10 test samples:")

for i in range(10):
    print(f"Review {i+1}: Predicted Sentiment = {'Positive' if predicted_classes[i] == 1 else 'Negative'}, "
          f"Actual Sentiment = {'Positive' if y_test[i] == 1 else 'Negative'}")

Output:

1/1 [==============================] - 0s 146ms/step

Predictions for the first 10 test samples:

Review 1: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 2: Predicted Sentiment = Negative, Actual Sentiment = Negative

Review 3: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 4: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 5: Predicted Sentiment = Negative, Actual Sentiment = Negative

Review 6: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 7: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 8: Predicted Sentiment = Positive, Actual Sentiment = Positive

Review 9: Predicted Sentiment = Negative, Actual Sentiment = Negative

Review 10: Predicted Sentiment = Positive, Actual Sentiment = Positive
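
Note:

Step 6 of the algorithm also calls for testing on new text samples. A minimal sketch that encodes a raw review with the same IMDB word index used by the dataset (the sample sentence is illustrative):

word_index = imdb.get_word_index()

def encode_review(text):
    # Keras reserves indices 0-2 (padding/start/unknown) and offsets real words by 3
    ids = [word_index.get(w, max_features) + 3 for w in text.lower().split()]
    ids = [i if i < max_features else 2 for i in ids]  # out-of-vocabulary words -> 2
    return pad_sequences([ids], maxlen=maxlen)

sample = encode_review("a wonderful film with great performances")
print("Positive probability:", model.predict(sample)[0][0])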

Result:

Thus the sentiment analysis using an RNN was executed successfully.



Ex.No:10
LSTM BASED AUTOENCODER IN TENSORFLOW/KERAS
DATE:

Aim:

To implement an LSTM-based autoencoder in TensorFlow/Keras.

Algorithm:

1. Start the program.


2. Prepare sequential input data with consistent shape.
3. Build Encoder using LSTM layer.
4. Add a RepeatVector for decoding sequence length.
5. Build Decoder using another LSTM (return_sequences=True).
6. Compile with MSE loss and train to reconstruct the input.
7. Stop the program.

Program:

import numpy as np

import tensorflow as tf

from tensorflow.keras.models import Model

from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

from tensorflow.keras.optimizers import Adam

import matplotlib.pyplot as plt

def generate_sine_wave_data(n_samples=1000, timesteps=100, freq=0.1):

    t = np.linspace(0, timesteps * freq, timesteps)
    data = np.sin(t)                           # a simple sine wave, shape (timesteps,)
    data = np.expand_dims(data, axis=0)        # -> (1, timesteps)
    data = np.repeat(data, n_samples, axis=0)  # -> (n_samples, timesteps)
    return data

n_samples = 1000

timesteps = 100

data = generate_sine_wave_data(n_samples=n_samples, timesteps=timesteps)

data = np.expand_dims(data, axis=-1) # Shape: (n_samples, timesteps, features)

input_sequence = Input(shape=(timesteps, 1))

encoded = LSTM(64, activation='relu', return_sequences=False)(input_sequence)

decoded = RepeatVector(timesteps)(encoded)

decoded = LSTM(64, activation='relu', return_sequences=True)(decoded)

output_sequence = TimeDistributed(Dense(1))(decoded)

autoencoder = Model(inputs=input_sequence, outputs=output_sequence)

autoencoder.compile(optimizer=Adam(), loss='mean_squared_error')

autoencoder.fit(data, data, epochs=20, batch_size=64, validation_split=0.2, verbose=1)

reconstructed_data = autoencoder.predict(data)

plt.figure(figsize=(12, 6))

plt.subplot(1, 2, 1)

plt.plot(data[0, :, 0], label="Original Data")

plt.title("Original Time Series")

plt.subplot(1, 2, 2)

plt.plot(reconstructed_data[0, :, 0], label="Reconstructed Data")

plt.title("Reconstructed Time Series")

plt.show()

Output:

Epoch 1/20

13/13 [==============================] - 3s 66ms/step - loss: 0.4635 - val_loss: 0.3802

Epoch 2/20

13/13 [==============================] - 0s 19ms/step - loss: 0.2712 - val_loss: 0.1967

...

Epoch 20/20

13/13 [==============================] - 0s 19ms/step - loss: 0.0014 - val_loss: 0.0016
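
Note:

A common use of an LSTM autoencoder is anomaly detection via reconstruction error. A minimal sketch, assuming data and reconstructed_data from the program; the 3-sigma threshold is an illustrative choice:

reconstruction_error = np.mean((data - reconstructed_data) ** 2, axis=(1, 2))  # per-sequence MSE
threshold = reconstruction_error.mean() + 3 * reconstruction_error.std()
anomalies = np.where(reconstruction_error > threshold)[0]
print(f"Flagged {len(anomalies)} of {len(data)} sequences as anomalous")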

Result:

Thus the LSTM-based autoencoder in TensorFlow/Keras was executed successfully.
