Neural DEEP

The document outlines various exercises implementing different machine learning models using TensorFlow and Keras, including vector addition, regression models, perceptrons, feed-forward networks, CNNs for image classification, and hyperparameter tuning. Each exercise includes an aim, an algorithm, program code, output, and a result confirming successful execution. The exercises demonstrate practical applications of deep learning techniques in Python programming.

Ex. No: 01
Date:
Implement simple vector addition in TensorFlow

Aim:
To implement a basic vector addition using TensorFlow.

Algorithm:
1. Import the TensorFlow library.
2. Define two vectors as constants (e.g., vector1 and vector2).
3. Use the tf.add operation to add the two vectors element-wise.
4. Print the result of the vector addition.

Program:
import tensorflow as tf

# Define two constant vectors
vector1 = tf.constant([1.0, 2.0, 3.0])
vector2 = tf.constant([4.0, 5.0, 6.0])

# Element-wise addition
result = tf.add(vector1, vector2)

print("Result of vector addition:")
print(result.numpy())

Output:
Result of vector addition:
[5. 7. 9.]
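
As a side note, the same addition can be written with the overloaded + operator, which TensorFlow maps to tf.add:

# Operator form: equivalent to tf.add(vector1, vector2)
print((vector1 + vector2).numpy())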

Result:
Thus the above program for basic vector addition using TensorFlow was executed and the output was verified successfully.

Ex. No: 02
Date:
Implement a regression model in Keras.

Aim:
To implement a regression model using Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Generate synthetic data for training the regression model.
3. Define the architecture of the neural network for regression.
4. Compile the model, specifying the loss function and optimizer.
5. Train the model using the synthetic data.
6. Evaluate the model's performance.
7. Make predictions using the trained model.

Program:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

# Synthetic data: y = 3x + 2 plus Gaussian noise
np.random.seed(42)
X = np.random.rand(100, 1)
y = 3 * X + 2 + 0.1 * np.random.randn(100, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# A single linear neuron: equivalent to ordinary linear regression
model = Sequential()
model.add(Dense(1, input_dim=1, activation='linear'))

model.compile(optimizer='sgd', loss='mean_squared_error')

model.fit(X_train, y_train, epochs=50, batch_size=16, verbose=1)

loss = model.evaluate(X_test, y_test)
print(f"Mean Squared Error on Test Data: {loss}")

predictions = model.predict(X_test)

print("\nSample Predictions:")
for i in range(5):
    print(f"Actual: {y_test[i][0]}, Predicted: {predictions[i][0]}")

Output:

Epoch 50/50

5/5 [==============================] - 0s 7ms/step - loss: 0.2651

1/1 [==============================] - 0s 250ms/step - loss: 0.2895

Mean Squared Error on Test Data: 0.28948336839675903

1/1 [==============================] - 0s 207ms/step

Sample Predictions:

Actual: 2.2563304117214535, Predicted: 3.0480055809020996

Actual: 4.634134485871327, Predicted: 3.892982006072998

Actual: 4.193039236802164, Predicted: 3.7683780193328857

Actual: 3.8966281075824716, Predicted: 3.656846046447754

Actual: 2.8554431395043554, Predicted: 3.246446371078491
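
Because the model is a single linear neuron, its learned slope and intercept can be compared directly with the generating function y = 3x + 2; a small hedged check:

# The trained weight and bias approximate the true slope (3) and intercept (2)
weight, bias = model.layers[0].get_weights()
print(f"Learned slope: {weight[0][0]:.3f}, learned intercept: {bias[0]:.3f}")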

Result:
Thus the above program for the implementation of a regression model in Keras was executed and the output was verified successfully.

Ex. No: 03
Date:
Implement a perceptron in the TensorFlow/Keras environment.

Aim:
To implement a perceptron in the TensorFlow/Keras environment.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Generate synthetic data for training the perceptron.
3. Define the architecture of the perceptron with a single layer and one output neuron.
4. Compile the model, specifying the binary cross-entropy loss and the stochastic
gradient descent (SGD) optimizer.
5. Train the perceptron using the synthetic data.
6. Evaluate the model's performance.
7. Make predictions using the trained perceptron.

Program:

import numpy as np
import tensorflow as tf
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import SGD

# Synthetic, linearly separable data: label is 1 when x1 + x2 > 1
np.random.seed(42)
X = np.random.rand(100, 2)
y = np.where(X[:, 0] + X[:, 1] > 1, 1, 0)

# A single sigmoid neuron: a perceptron trained with SGD
model = Sequential()
model.add(Dense(units=1, activation="sigmoid", input_shape=(2,)))
model.compile(loss="binary_crossentropy", optimizer=SGD(learning_rate=0.1),
              metrics=["accuracy"])

model.fit(X, y, epochs=100, verbose=0)

loss, accuracy = model.evaluate(X, y)
print("Test Loss: ", loss)
print("Test Accuracy: ", accuracy)

prediction = model.predict(X)
prediction_labels = np.round(prediction)
print("predicted Labels: ", np.squeeze(prediction_labels))

Output:

4/4 [==============================] - 0s 5ms/step - loss: 0.4255 - accuracy: 0.9500

Test Loss: 0.42546337842941284

Test Accuracy: 0.949999988079071

4/4 [==============================] - 0s 3ms/step

predicted Labels: [1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0.

0. 1. 1. 1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 1. 0. 0. 1. 0. 0. 1. 1. 0. 1. 1.

0. 0. 0. 0. 1. 1. 0. 0. 1. 1. 1. 1. 1. 0. 0. 1. 0. 0. 0. 1. 1. 1. 1. 0.

0. 1. 0. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0.

1. 0. 1. 1.]
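
Since the true decision boundary is x1 + x2 = 1, the learned parameters can be inspected to see how close the perceptron came; a small hedged check:

# For the boundary x1 + x2 = 1, we expect w1 and w2 to be roughly equal
# and the bias to be close to their negative average
weights, bias = model.layers[0].get_weights()
print("w1, w2:", weights.ravel(), "bias:", bias)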

Result:
Thus the above program for the implementation of a perceptron was executed and the output was verified successfully.

Ex. No: 04

Date:
Implement a Feed-Forward Network in TensorFlow/Keras.

Aim:
To implement a feed-forward neural network using TensorFlow/Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Generate synthetic data for training the neural network.
3. Define the architecture of the feed-forward neural network with multiple layers.
4. Compile the model, specifying the loss function and optimizer.
5. Train the neural network using the synthetic data.
6. Evaluate the model's performance.
7. Make predictions using the trained neural network.

Program:
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic binary classification problem with 20 features
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2,
                           random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Feed-forward network: one hidden ReLU layer, sigmoid output
model = keras.Sequential(
    [
        keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ]
)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)

test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {test_acc}")
print(f"Test Loss: {test_loss}")

Output:

Epoch 10/10

20/20 [==============================] - 0s 6ms/step - loss: 0.3337 - accuracy: 0.8813 -


val_loss: 0.2906 - val_accuracy: 0.9000

7/7 [==============================] - 0s 3ms/step - loss: 0.3663 - accuracy: 0.8700

Test Accuracy: 0.8700000047683716

Test Loss: 0.366344034671783
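
Step 7 of the algorithm (making predictions) can be completed with a couple of lines; a brief hedged sketch:

# Sigmoid outputs are probabilities; threshold at 0.5 for class labels
probs = model.predict(X_test[:5])
predicted_labels = (probs >= 0.5).astype(int).ravel()
print("Predicted:", predicted_labels, "Actual:", y_test[:5])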

Result:
Thus the above program for the implementation of a feed-forward network was executed and the output was verified successfully.

Ex. No: 05

Date:
Implement an Image Classifier using CNN in TensorFlow/Keras.

Aim:
To implement an image classifier using a Convolutional Neural Network (CNN) with
TensorFlow/Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Load and preprocess image data.
3. Define the architecture of the CNN with convolutional and pooling layers.
4. Compile the model, specifying the loss function, optimizer, and metrics.
5. Train the CNN using the image data.
6. Evaluate the model's performance.
7. Make predictions using the trained CNN.

Program:

import tensorflow as tf
from keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load CIFAR-10 and scale pixel values to [0, 1]
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]

# Preview the first 25 training images with their labels
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

# Convolutional base: three conv layers with max pooling
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation="relu"))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation="relu"))

model.summary()

# Classification head: flatten, dense layer, 10-way logits
model.add(layers.Flatten())
model.add(layers.Dense(64, activation="relu"))
model.add(layers.Dense(10))

model.summary()

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

history = model.fit(
    train_images, train_labels, epochs=10,
    validation_data=(test_images, test_labels)
)

# Plot training vs. validation accuracy per epoch
plt.plot(history.history["accuracy"], label="accuracy")
plt.plot(history.history["val_accuracy"], label="val_accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.ylim([0.5, 1])
plt.legend(loc="lower right")

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)

Output:

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPooling2D) (None, 6, 6, 64) 0

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

=================================================================
Total params: 56,320
Trainable params: 56,320
Non-trainable params: 0
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896

max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0

conv2d_1 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_1 (MaxPooling2D) (None, 6, 6, 64) 0

conv2d_2 (Conv2D) (None, 4, 4, 64) 36928

flatten_3 (Flatten) (None, 1024) 0

dense_3 (Dense) (None, 64) 65600

dense_4 (Dense) (None, 10) 650

=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________
Epoch 10/10
1563/1563 [==============================] - 104s 66ms/step - loss: 0.6062 -
accuracy: 0.7883 - val_loss: 0.8608 - val_accuracy: 0.7181
313/313 - 4s - loss: 0.8608 - accuracy: 0.7181 - 4s/epoch - 13ms/step
0.7181000113487244
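
Step 7 of the algorithm (making predictions) is not shown in the listing; a brief hedged sketch using the trained CNN, whose final layer outputs logits:

# Predict class probabilities for the first 5 test images
probs = tf.nn.softmax(model.predict(test_images[:5]), axis=-1).numpy()
for p, label in zip(probs, test_labels[:5]):
    print(f"Predicted: {class_names[p.argmax()]}, "
          f"Actual: {class_names[label[0]]}")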

Result:
Thus the above program for the implementation of an image classifier using a CNN was executed and the output was verified successfully.

Ex. No: 06
Date:
Improve the Deep Learning model by fine-tuning hyperparameters.

Aim:
To improve the deep learning model by fine-tuning hyperparameters.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Generate synthetic data for training the neural network.
3. Define a function to create the model with hyperparameters as arguments.
4. Compile the model, specifying the loss function and optimizer.
5. Train the neural network using the synthetic data.
6. Evaluate the model's performance.
7. Fine-tune hyperparameters and repeat steps 3-6.

Program:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

# Synthetic binary classification data: 10 features, random labels
np.random.seed(42)
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=(100,))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

def create_model(hidden_layers, neurons_per_layer, learning_rate,
                 activation_function):
    """Build a feed-forward classifier from the given hyperparameters."""
    model = Sequential()
    model.add(Dense(neurons_per_layer, input_dim=10,
                    activation=activation_function))

    for _ in range(hidden_layers - 1):
        model.add(Dense(neurons_per_layer, activation=activation_function))

    model.add(Dense(1, activation='sigmoid'))

    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer, loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Baseline hyperparameters
initial_hidden_layers = 2
initial_neurons_per_layer = 8
initial_learning_rate = 0.001
initial_activation_function = 'relu'

model = create_model(initial_hidden_layers, initial_neurons_per_layer,
                     initial_learning_rate, initial_activation_function)
model.fit(X_train, y_train, epochs=50, batch_size=16, verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)
print(f"Initial Model - Loss: {loss}, Accuracy: {accuracy * 100:.2f}%")

# Fine-tuned hyperparameters: wider layers, longer training
fine_tuned_neurons_per_layer = 16
fine_tuned_epochs = 100

fine_tuned_model = create_model(initial_hidden_layers,
                                fine_tuned_neurons_per_layer,
                                initial_learning_rate,
                                initial_activation_function)
fine_tuned_model.fit(X_train, y_train, epochs=fine_tuned_epochs,
                     batch_size=16, verbose=1)

fine_tuned_loss, fine_tuned_accuracy = fine_tuned_model.evaluate(X_test, y_test)
print(f"\n\nFine-Tuned Model - Loss: {fine_tuned_loss}, "
      f"Accuracy: {fine_tuned_accuracy * 100:.2f}%")

Output:

Epoch 100/100

5/5 [==============================] - 0s 4ms/step - loss: 0.4938 - accuracy: 0.7500

1/1 [==============================] - 0s 121ms/step - loss: 1.0410 - accuracy: 0.5000

Fine-Tuned Model - Loss: 1.0410325527191162, Accuracy: 50.00%
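
Manual one-off changes like the above scale poorly; a small hedged grid-search loop over the same create_model helper (the candidate values are illustrative, not part of the original exercise):

# Try every combination of a few candidate hyperparameter values
best = (None, 0.0)
for neurons in [8, 16, 32]:
    for lr_value in [0.01, 0.001]:
        candidate = create_model(2, neurons, lr_value, 'relu')
        candidate.fit(X_train, y_train, epochs=50, batch_size=16, verbose=0)
        _, acc = candidate.evaluate(X_test, y_test, verbose=0)
        if acc > best[1]:
            best = ((neurons, lr_value), acc)
print(f"Best (neurons, lr): {best[0]}, accuracy: {best[1] * 100:.2f}%")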

Result:
Thus the above program for improving the deep learning model by fine-tuning hyperparameters was executed and the output was verified successfully.

Ex. No: 07

Date:
Implement a Transfer Learning concept in Image Classification.

Aim:
To implement Transfer Learning in Image Classification.

Algorithm:
1. Load and preprocess the image dataset (here, rugby and soccer images).
2. Load the pre-trained MobileNetV2 model.
3. Modify the model by adding new layers for the specific classification task.
4. Compile the model, specifying the loss function, optimizer, and metrics.
5. Train the model using the image data.
6. Evaluate the model's performance.
7. Make predictions using the trained model.

Program:
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
import os
import numpy as np

labels = ["rugby", "soccer"]
img_size = 224

def get_data(data_dir):
    """Read images from <data_dir>/<label>, convert BGR to RGB, and resize."""
    data = []
    for label in labels:
        path = os.path.join(data_dir, label)
        class_num = labels.index(label)
        for img in os.listdir(path):
            try:
                img_arr = cv2.imread(os.path.join(path, img))[..., ::-1]
                resized_arr = cv2.resize(img_arr, (img_size, img_size))
                data.append([resized_arr, class_num])
            except Exception as e:
                print(e)
    return np.array(data)

train = get_data("/content/input/train")
val = get_data("/content/input/test")

# Count class occurrences in the training set
l = []
for i in train:
    if i[1] == 0:
        l.append("rugby")
    else:
        l.append("soccer")

sns.set_style("darkgrid")
sns.countplot(x=l, hue=l, palette="Set3")
plt.show()

# Show one sample from each class
plt.figure(figsize=(5, 5))
plt.imshow(train[1][0])
plt.title(labels[train[0][1]])

plt.figure(figsize=(5, 5))
plt.imshow(train[-1][0])
plt.title(labels[train[-1][1]])

plt.show()
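
The listing above stops after loading and visualizing the data. A minimal hedged sketch of the remaining steps from the algorithm (building and training a transfer model on a frozen MobileNetV2 base) might look like the following; the layer sizes and training settings are assumptions, not part of the original program:

from keras.applications import MobileNetV2
from keras.models import Sequential
from keras.layers import Dense, GlobalAveragePooling2D

# Split the loaded arrays into features and labels
x_train = np.array([x[0] for x in train]) / 255.0
y_train = np.array([x[1] for x in train])
x_val = np.array([x[0] for x in val]) / 255.0
y_val = np.array([x[1] for x in val])

# Frozen MobileNetV2 base with a small binary classification head
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(img_size, img_size, 3))
base.trainable = False

model = Sequential([
    base,
    GlobalAveragePooling2D(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)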

Output:
(Figures: class-distribution count plot and sample rugby and soccer images, as plotted above.)
Result:
Thus the above program for the implementation of the Transfer Learning concept in Image Classification was executed and the output was verified successfully.

Ex. No: 08

Date:
Using a pre trained model on Keras for Transfer Learning

Aim:
To use a pre-trained model on Keras for Transfer Learning.

Algorithm:
1. Load and preprocess the new dataset.
2. Load the pre-trained VGG16 model without the top (classification) layer.
3. Add a new classification layer suited to your task on top of the pre-trained model.
4. Compile the model.
5. Train the model on the new dataset.
6. Evaluate the model's performance.

Program:

import numpy as np
from keras.models import Model
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.layers import Dense, Flatten, Input
from keras.utils import load_img, img_to_array
from keras.preprocessing.image import ImageDataGenerator

# Pre-trained VGG16 base without its classification head
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the convolutional base so only the new head is trained
for layer in vgg.layers:
    layer.trainable = False

flatten_layer = Flatten()(vgg.output)
output_layer = Dense(4, activation="softmax")(flatten_layer)

model = Model(inputs=vgg.input, outputs=output_layer)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augment the training images; only rescale the test images
train_data_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    shear_range=0.5,
    zoom_range=0.7,
    horizontal_flip=True,
    vertical_flip=True,
)
test_data_gen = ImageDataGenerator(rescale=1.0 / 255)

train_data = train_data_gen.flow_from_directory(
    "/content/Wether/Wether/Train", target_size=(224, 224),
    class_mode="categorical"
)
test_data = test_data_gen.flow_from_directory(
    "/content/Wether/Wether/Test", target_size=(224, 224),
    class_mode="categorical"
)

# model.fit accepts generators directly (fit_generator is deprecated)
history = model.fit(train_data, validation_data=test_data, epochs=5)

# Classify a single image with the fine-tuned model
my_image = load_img("weather_test.jpg", target_size=(224, 224))
my_image = img_to_array(my_image)
my_image = my_image.reshape(
    (1, my_image.shape[0], my_image.shape[1], my_image.shape[2])
)
my_image = preprocess_input(my_image)

prediction = model.predict(my_image)
res = [np.round(x) for x in prediction]
print(res)

Output:
Epoch 1/5
36/36 [==============================] - 19s 511ms/step - loss: 1.0154 - accuracy:
0.6107 - val_loss: 0.5228 - val_accuracy: 0.7920
Epoch 2/5
36/36 [==============================] - 19s 540ms/step - loss: 0.4699 - accuracy:
0.8320 - val_loss: 0.4626 - val_accuracy: 0.8400
Epoch 3/5
36/36 [==============================] - 19s 512ms/step - loss: 0.3671 - accuracy:
0.8693 - val_loss: 0.1625 - val_accuracy: 0.9600
Epoch 4/5
36/36 [==============================] - 18s 499ms/step - loss: 0.3070 - accuracy:
0.9004 - val_loss: 0.2483 - val_accuracy: 0.9360
Epoch 5/5
36/36 [==============================] - 18s 502ms/step - loss: 0.3277 - accuracy:
0.8782 - val_loss: 0.1385 - val_accuracy: 0.9520
1/1 [==============================] - 1s 603ms/step
[array([0., 0., 0., 1.], dtype=float32)]
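
To read the one-hot prediction above as a class name, the generator's class_indices mapping can be inverted; a small hedged sketch (the class names come from the training directory structure):

# Invert the generator's class-to-index mapping to decode predictions
index_to_class = {v: k for k, v in train_data.class_indices.items()}
predicted_class = index_to_class[int(np.argmax(prediction))]
print("Predicted class:", predicted_class)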

Result:
Thus the above program for using a pre-trained model on Keras for Transfer Learning was executed and the output was verified successfully.

Ex. No: 09
Date:
Perform Sentiment Analysis using RNN.

Aim:
To perform sentiment analysis using a Recurrent Neural Network (RNN) with
TensorFlow/Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Load and preprocess text data for sentiment analysis.
3. Tokenize and pad the text sequences.
4. Define the architecture of the RNN model.
5. Compile the model, specifying the loss function and optimizer.
6. Train the RNN using the text data.
7. Evaluate the model's performance.
8. Make predictions using the trained RNN.

Program:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split

# Tiny sentiment dataset: 1 = positive, 0 = negative
texts = ["I love this product!", "Not satisfied with the service.",
         "Amazing experience.", "Disappointing quality."]
labels = [1, 0, 1, 0]

# Tokenize the texts and pad each sequence to length 10
tokenizer = Tokenizer(num_words=1000, oov_token="<OOV>")
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
padded_sequences = pad_sequences(sequences, maxlen=10, padding='post',
                                 truncating='post')

labels = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(padded_sequences, labels,
                                                    test_size=0.2,
                                                    random_state=42)

# Embedding -> SimpleRNN -> sigmoid classifier
model = Sequential()
model.add(Embedding(input_dim=1000, output_dim=16, input_length=10))
model.add(SimpleRNN(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=50, batch_size=2, verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)
print(f"Binary Cross-Entropy Loss on Test Data: {loss}")
print(f"Accuracy on Test Data: {accuracy * 100:.2f}%")

predictions = model.predict(X_test)
print("\nSample Predictions:")
for i in range(len(predictions)):
    print(f"Actual: {y_test[i]}, Predicted: {round(predictions[i][0])}")

Output:

Epoch 50/50

2/2 [==============================] - 0s 22ms/step - loss: 0.5604 - accuracy: 1.0000

1/1 [==============================] - 0s 196ms/step - loss: 0.5549 - accuracy: 1.0000

Binary Cross-Entropy Loss on Test Data: 0.5549091696739197

Accuracy on Test Data: 100.00%

1/1 [==============================] - 0s 137ms/step

Sample Predictions:

Actual: 1, Predicted: 1
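
A quick hedged example of scoring an unseen sentence with the trained RNN (the sentence text is illustrative):

# Tokenize and pad a new review exactly like the training data
new_text = ["The product quality is amazing"]
new_seq = pad_sequences(tokenizer.texts_to_sequences(new_text),
                        maxlen=10, padding='post', truncating='post')
score = model.predict(new_seq)[0][0]
print("Positive" if score >= 0.5 else "Negative", f"(score={score:.3f})")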

Result:
Thus the above program for performing Sentiment Analysis using an RNN was executed and the output was verified successfully.

Ex. No: 10

Date:
Implement an LSTM based Autoencoder in TensorFlow/Keras.

Aim:
To implement an LSTM-based Autoencoder using TensorFlow/Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Load and preprocess sequential data.
3. Define the architecture of the LSTM-based Autoencoder.
4. Compile the model, specifying the loss function and optimizer.
5. Train the Autoencoder using the sequential data.
6. Evaluate the model's performance.
7. Make predictions using the trained Autoencoder.

Program:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# 100 random sequences of length 10 with a single feature
np.random.seed(42)
sequence_data = np.random.rand(100, 10, 1)

model = Sequential()

# Encoder: compress each sequence into a 4-dimensional code
model.add(LSTM(8, activation='relu', input_shape=(10, 1),
               return_sequences=True))
model.add(LSTM(4, activation='relu', return_sequences=False))
model.add(RepeatVector(10))

# Decoder: reconstruct the original sequence from the code
model.add(LSTM(4, activation='relu', return_sequences=True))
model.add(LSTM(8, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')

# Autoencoder training: input and target are the same sequences
model.fit(sequence_data, sequence_data, epochs=50, batch_size=16, verbose=1)

loss = model.evaluate(sequence_data, sequence_data)
print(f"Mean Squared Error on Training Data: {loss}")

encoded_data = model.predict(sequence_data)

print("\nSample Predictions:")
for i in range(3):
    print(f"Original Sequence:\n{sequence_data[i]}")
    print(f"Encoded and Decoded Sequence:\n{encoded_data[i]}")
    print("\n")
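
After training, the encoder half can be pulled out on its own to obtain the 4-dimensional codes; a minimal hedged sketch, assuming the layer order above:

from keras.models import Model

# The second LSTM layer outputs the bottleneck code
encoder = Model(inputs=model.input, outputs=model.layers[1].output)
codes = encoder.predict(sequence_data)
print(codes.shape)  # (100, 4)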

Output:
Epoch 50/50

7/7 [==============================] - 0s 67ms/step - loss: 0.0798

4/4 [==============================] - 1s 11ms/step - loss: 0.0796

Mean Squared Error on Training Data: 0.07958578318357468

4/4 [==============================] - 0s 9ms/step

Sample Predictions:

Original Sequence:

[[0.37454012]
 [0.95071431]
 [0.73199394]
 [0.59865848]
 [0.15601864]
 [0.15599452]
 [0.05808361]
 [0.86617615]
 [0.60111501]
 [0.70807258]]

Encoded and Decoded Sequence:

[[0.35951287]
 [0.48757628]
 [0.5377357 ]
 [0.548638  ]
 [0.54050046]
 [0.5242156 ]
 [0.5055392 ]
 [0.4873782 ]
 [0.47106645]
 [0.45708445]]

Original Sequence:

[[0.02058449]
 [0.96990985]
 [0.83244264]
 [0.21233911]
 [0.18182497]
 [0.18340451]
 [0.30424224]
 [0.52475643]
 [0.43194502]
 [0.29122914]]

Encoded and Decoded Sequence:

[[0.26717597]
 [0.3793814 ]
 [0.43721834]
 [0.46050552]
 [0.4647041 ]
 [0.45918518]
 [0.44929692]
 [0.43799573]
 [0.42685196]
 [0.4166411 ]]

Original Sequence:

[[0.61185289]
 [0.13949386]
 [0.29214465]
 [0.36636184]
 [0.45606998]
 [0.78517596]
 [0.19967378]
 [0.51423444]
 [0.59241457]
 [0.04645041]]

Encoded and Decoded Sequence:

[[0.26825362]
 [0.38092735]
 [0.43878633]
 [0.4619585 ]
 [0.46601555]
 [0.46036348]
 [0.45036083]
 [0.4389659 ]
 [0.42774758]
 [0.41747904]]

Result:
Thus the above program for the implementation of an LSTM-based Autoencoder was executed and the output was verified successfully.

Ex. No: 11

Date:
Image generation using GAN (Additional Experiments).

Aim:
To experiment with image generation using a Generative Adversarial Network (GAN).

Algorithm:
1. Import the necessary libraries: PyTorch and torchvision.
2. Define the generator and discriminator models.
3. Create the GAN model by combining the generator and discriminator.
4. Compile the GAN model.
5. Train the GAN model on a dataset.
6. Generate new images using the trained GAN.

Program:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms as T
import matplotlib.pyplot as plt
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# CIFAR-10 training images as tensors in [0, 1]
train_dataset = datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32,
                                         shuffle=True)

# Hyperparameters
latent_dim = 100
lr = 0.0002
beta1 = 0.5
beta2 = 0.999
num_epochs = 10

class Generator(nn.Module):
    """Maps a latent vector to a 3x32x32 image via upsampling convolutions."""

    def __init__(self, latent_dim):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128, momentum=0.78),
            nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64, momentum=0.78),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        img = self.model(z)
        return img

class Discriminator(nn.Module):
    """Classifies 3x32x32 images as real or fake."""

    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.25),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ZeroPad2d((0, 1, 0, 1)),
            nn.BatchNorm2d(64, momentum=0.82),
            nn.LeakyReLU(0.25),
            nn.Dropout(0.25),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128, momentum=0.82),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.25),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(256, momentum=0.8),
            nn.LeakyReLU(0.25),
            nn.Dropout(0.25),
            nn.Flatten(),
            nn.Linear(256 * 5 * 5, 1),
            nn.Sigmoid(),
        )

    def forward(self, img):
        validity = self.model(img)
        return validity

generator = Generator(latent_dim).to(device)
discriminator = Discriminator().to(device)

adversarial_loss = nn.BCELoss()
optimizer_G = optim.Adam(generator.parameters(), lr=lr, betas=(beta1, beta2))
optimizer_D = optim.Adam(discriminator.parameters(), lr=lr,
                         betas=(beta1, beta2))

for epoch in range(num_epochs):
    for i, batch in enumerate(dataloader):
        real_images = batch[0].to(device)
        valid = torch.ones(real_images.size(0), 1, device=device)
        fake = torch.zeros(real_images.size(0), 1, device=device)

        # Train the discriminator on real and generated images
        optimizer_D.zero_grad()
        z = torch.randn(real_images.size(0), latent_dim, device=device)
        fake_images = generator(z)

        real_loss = adversarial_loss(discriminator(real_images), valid)
        fake_loss = adversarial_loss(discriminator(fake_images.detach()),
                                     fake)
        d_loss = (real_loss + fake_loss) / 2
        d_loss.backward()
        optimizer_D.step()

        # Train the generator to fool the discriminator
        optimizer_G.zero_grad()
        gen_images = generator(z)
        g_loss = adversarial_loss(discriminator(gen_images), valid)
        g_loss.backward()
        optimizer_G.step()

        if (i + 1) % 100 == 0:
            print(
                f"Epoch [{epoch+1}/{num_epochs}] "
                f"Batch {i+1}/{len(dataloader)} "
                f"Discriminator Loss: {d_loss.item():.4f} "
                f"Generator Loss: {g_loss.item():.4f}"
            )

    # Show a 4x4 grid of generated samples every 10 epochs
    if (epoch + 1) % 10 == 0:
        with torch.no_grad():
            z = torch.randn(16, latent_dim, device=device)
            generated = generator(z).detach().cpu()
            grid = torchvision.utils.make_grid(generated, nrow=4,
                                               normalize=True)
            plt.imshow(np.transpose(grid, (1, 2, 0)))
            plt.axis("off")
            plt.show()

Output:

Epoch [10/10] Batch 1500/1563 Discriminator Loss: 0.4170 Generator Loss: 1.1472
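
To keep the trained generator for later sampling, its weights can be saved and restored; a brief hedged sketch (the file name is illustrative):

# Save the trained generator and reload it for inference
torch.save(generator.state_dict(), "generator.pth")

restored = Generator(latent_dim).to(device)
restored.load_state_dict(torch.load("generator.pth", map_location=device))
restored.eval()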

Result:
Thus the above program for image generation using a GAN (additional experiments) was executed and the output was verified successfully.

Ex. No: 12
Date:
Train a Deep Learning model to classify a given image using a pre-trained model.

Aim:
To train a deep learning model to classify a given image using a pre-trained model.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Load and preprocess the dataset.
3. Load a pre-trained model (MobileNetV2) without the top classification layer.
4. Add a new classification layer suitable for the target dataset.
5. Compile the model, specifying the loss function, optimizer, and metrics.
6. Train the model on the target dataset using transfer learning.
7. Evaluate the model's performance.
8. Make predictions using the trained model.

Program:
import tensorflow as tf
from keras import layers, models
from keras.applications import MobileNetV2
from keras.datasets import cifar10
from keras.utils import to_categorical

# Load CIFAR-10, scale pixels, and one-hot encode the labels
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0
train_labels = to_categorical(train_labels, num_classes=10)
test_labels = to_categorical(test_labels, num_classes=10)

# Pre-trained MobileNetV2 base without its classification head
base_model = MobileNetV2(weights='imagenet', include_top=False,
                         input_shape=(32, 32, 3))

# Freeze the base so only the new head is trained
for layer in base_model.layers:
    layer.trainable = False

model = models.Sequential()
model.add(base_model)
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=5,
          validation_data=(test_images, test_labels))

accuracy = model.evaluate(test_images, test_labels)[1]
print(f"Accuracy on Test Data: {accuracy * 100:.2f}%")

Output:

Epoch 5/5

1563/1563 [==============================] - 18s 11ms/step - loss: 1.7067 - accuracy: 0.3797 -


val_loss: 1.7828 - val_accuracy: 0.3570

313/313 [==============================] - 3s 9ms/step - loss: 1.7828 - accuracy: 0.3570

Accuracy on Test Data: 35.70%
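
The modest 35.70% accuracy is largely because MobileNetV2's ImageNet weights were learned on much larger inputs than 32x32. One hedged way to improve this (an assumption, not part of the original exercise) is to upsample the images before the pre-trained base:

# Upsample 32x32 CIFAR-10 images to 96x96 before the pre-trained base
resize_base = MobileNetV2(weights='imagenet', include_top=False,
                          input_shape=(96, 96, 3))
resize_base.trainable = False

resized_model = models.Sequential([
    layers.Resizing(96, 96, input_shape=(32, 32, 3)),
    resize_base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(10, activation='softmax'),
])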

Result:
Thus the above program for training a deep learning model to classify a given image using a pre-trained model was executed and the output was verified successfully.

Ex. No: 13
Date:
Recommendation system from sales data using Deep Learning.

Aim:
To implement a recommendation system using deep learning on sales data.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Prepare the sales data, which typically includes user-item interactions and purchase
history.
3. Build a matrix factorization model for collaborative filtering using deep learning.
4. Compile the model, specifying the appropriate loss function and optimizer.
5. Train the model using the sales data.
6. Use the trained model to make recommendations.

Program:
import numpy as np
import pandas as pd
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Embedding, Dot, Flatten
from sklearn.model_selection import train_test_split

# Toy sales data: which users purchased which items
sales_data = pd.DataFrame({
    'user_id': [1, 1, 2, 2, 3, 3, 4, 4],
    'item_id': [101, 102, 101, 103, 102, 104, 101, 104],
    'purchase': [1, 1, 1, 1, 1, 1, 1, 1]
})

# Map raw IDs to contiguous embedding indices
user_mapping = {user_id: idx for idx, user_id in
                enumerate(sales_data['user_id'].unique())}
item_mapping = {item_id: idx for idx, item_id in
                enumerate(sales_data['item_id'].unique())}

sales_data['user_index'] = sales_data['user_id'].map(user_mapping)
sales_data['item_index'] = sales_data['item_id'].map(item_mapping)

train_data, test_data = train_test_split(sales_data, test_size=0.2,
                                         random_state=42)

num_users = len(user_mapping)
num_items = len(item_mapping)
embedding_dim = 10

# Matrix factorization: combine user and item embeddings
user_input = Input(shape=(1,), name='user_input')
item_input = Input(shape=(1,), name='item_input')

user_embedding = Embedding(input_dim=num_users, output_dim=embedding_dim,
                           input_length=1)(user_input)
item_embedding = Embedding(input_dim=num_items, output_dim=embedding_dim,
                           input_length=1)(item_input)

dot_product = Dot(axes=1)([user_embedding, item_embedding])
flat = Flatten()(dot_product)

model = Model(inputs=[user_input, item_input], outputs=flat)
model.compile(optimizer='adam', loss='mean_squared_error')

model.fit([train_data['user_index'], train_data['item_index']],
          train_data['purchase'], epochs=500, batch_size=1)

loss = model.evaluate([test_data['user_index'], test_data['item_index']],
                      test_data['purchase'])
print(f"Mean Squared Error on Test Data: {loss}")

# Score every item the target user has not purchased yet
user_id_to_recommend = 1
items_not_purchased = sales_data.loc[
    ~((sales_data['user_id'] == user_id_to_recommend) &
      (sales_data['purchase'] == 1)), 'item_index'].unique()

recommendation_input = pd.DataFrame({
    'user_index': [user_mapping[user_id_to_recommend]] * len(items_not_purchased),
    'item_index': items_not_purchased})

predictions = model.predict([recommendation_input['user_index'],
                             recommendation_input['item_index']])

recommendations = pd.DataFrame({
    'item_index': items_not_purchased,
    'predicted_purchase': np.mean(predictions, axis=1)})
recommendations = recommendations.sort_values(by='predicted_purchase',
                                              ascending=False)

top_recommendations = recommendations.head(5)
top_recommendations['item_id'] = top_recommendations['item_index'].map(
    {idx: item_id for item_id, idx in item_mapping.items()})
print("\nTop Recommendations:")
print(top_recommendations[['item_id', 'predicted_purchase']])

36
Output:

Epoch 500/500

6/6 [==============================] - 0s 5ms/step - loss: 4.4114e-04

1/1 [==============================] - 0s 82ms/step - loss: 0.0014

Mean Squared Error on Test Data: 0.001383352093398571

1/1 [==============================] - 0s 43ms/step

Top Recommendations:

   item_id  predicted_purchase
0      101            0.968562
2      102            0.958725
1      103            0.933026
3      104            0.929990
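
With input_length=1, each embedding has shape (batch, 1, 10), so Dot(axes=1) actually forms a 10x10 outer product of the two embedding vectors, which is then flattened. A hedged alternative that yields a single scalar affinity per user-item pair is to dot along the embedding axis instead:

# Dot along the embedding dimension gives one score per pair
scalar_score = Dot(axes=2)([user_embedding, item_embedding])  # (batch, 1, 1)
scalar_score = Flatten()(scalar_score)                        # (batch, 1)
scalar_model = Model(inputs=[user_input, item_input], outputs=scalar_score)
scalar_model.compile(optimizer='adam', loss='mean_squared_error')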

Result:
Thus the above program for a recommendation system from sales data using Deep Learning was executed and the output was verified successfully.

Ex. No: 14
Date:
Implement Object Detection using CNN.

Aim:
To implement object detection using a Convolutional Neural Network (CNN) in
TensorFlow/Keras.

Algorithm:
1. Import the necessary libraries: TensorFlow and Keras.
2. Load and pre-process image and annotation data.
3. Define the architecture of the SSD model.
4. Compile the model, specifying the appropriate loss function and optimizer.
5. Train the SSD model using the image and annotation data.
6. Evaluate the model's performance on a test dataset.
7. Use the trained model for object detection on new images.

Program:
import numpy as np
import tensorflow as tf
from keras import layers
from keras.models import Model
from keras.optimizers import Adam
from keras.applications import MobileNetV2

def create_ssd_model(input_shape, num_classes):
    """Minimal SSD-style head: a conv layer over the MobileNetV2 feature map."""
    base_model = MobileNetV2(input_shape=input_shape, include_top=False)

    ssd_output = layers.Conv2D(4, (3, 3), padding='same',
                               activation='sigmoid',
                               name='ssd_output')(base_model.output)

    model = Model(inputs=base_model.input, outputs=ssd_output)
    return model

input_shape = (224, 224, 3)
num_classes = 1

ssd_model = create_ssd_model(input_shape, num_classes)
ssd_model.compile(optimizer=Adam(), loss='binary_crossentropy',
                  metrics=['accuracy'])

# Random images and random 7x7x4 target maps stand in for real annotations
X_train = np.random.rand(100, 224, 224, 3)
y_train = np.random.randint(0, 2, size=(100, 7, 7, 4))

ssd_model.fit(X_train, y_train, epochs=50, batch_size=32,
              validation_split=0.2)

X_test = np.random.rand(20, 224, 224, 3)
y_test = np.random.randint(0, 2, size=(20, 7, 7, 4))

evaluation = ssd_model.evaluate(X_test, y_test)
print(f"Loss: {evaluation[0]}, Accuracy: {evaluation[1]}")

Output:

Epoch 50/50

3/3 [==============================] - 1s 224ms/step - loss: 0.0038 - accuracy: 0.5314 -


val_loss: 2.7050 - val_accuracy: 0.2724

1/1 [==============================] - 0s 130ms/step - loss: 2.5840 - accuracy: 0.2357

Loss: 2.5839765071868896, Accuracy: 0.23571428656578064
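
Running detection on a new image is just a forward pass; a hedged sketch of reading the 7x7x4 output map (the confidence threshold is an assumption):

# Predict a 7x7x4 map for one image and keep confident grid cells
new_image = np.random.rand(1, 224, 224, 3)
pred_map = ssd_model.predict(new_image)[0]        # shape (7, 7, 4)
confident_cells = np.argwhere(pred_map.max(axis=-1) > 0.5)
print("Grid cells with confident detections:\n", confident_cells)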

Result:
Thus the above program for the implementation of Object Detection using a CNN was executed and the output was verified successfully.

Ex. No: 15

Date:
Implement any simple Reinforcement Algorithm for an NLP problem

Aim:
To implement a simple reinforcement learning algorithm for an NLP problem.

Algorithm:
1. Define the environment, states, actions, and rewards.
2. Initialize Q-values for state-action pairs.
3. Implement the Q-learning algorithm:
   a. Select an action based on the current state and an exploration-exploitation strategy.
   b. Execute the action and observe the reward and the next state.
   c. Update the Q-value for the selected action based on the observed reward and the Q-learning formula.
   d. Repeat until convergence or a specified number of iterations.
4. Train the chatbot using the Q-learning algorithm (a minimal tabular sketch follows below).
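
The program below actually trains a supervised character-level LSTM rather than an RL agent, so as a bridge to the algorithm above, here is a minimal hedged sketch of tabular Q-learning on the same toy task: states are positions in the word, actions are the next character to emit, and the reward is 1 for matching the target word "hello ". All names and parameter values here are illustrative.

import numpy as np

vocab = ['h', 'e', 'l', 'o', ' ']
target = "hello "

# Q-table: rows = current position in the word, columns = candidate next char
Q = np.zeros((len(target), len(vocab)))
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    for state in range(len(target) - 1):
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(len(vocab))
        else:
            action = int(np.argmax(Q[state]))

        # Reward 1 if the emitted character matches the target's next char
        reward = 1.0 if vocab[action] == target[state + 1] else 0.0
        next_state = state + 1

        # Q-learning update rule
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action]
        )

# Greedy rollout from 'h' reproduces the learned word
result = ['h']
for state in range(len(target) - 1):
    result.append(vocab[int(np.argmax(Q[state]))])
print("Learned sequence:", ''.join(result))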

Program:

import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.optimizers import Adam

# Character vocabulary and lookup tables
vocab = ['h', 'e', 'l', 'o', ' ']
char_to_index = {char: i for i, char in enumerate(vocab)}
index_to_char = {i: char for i, char in enumerate(vocab)}

# Next-character prediction: each input char maps to the following char
input_sequence = ['h', 'e', 'l', 'l', 'o']
target_sequence = ['e', 'l', 'l', 'o', ' ']

X_train = np.array([char_to_index[char] for char in input_sequence]).reshape(-1, 1)
y_train = np.array([char_to_index[char] for char in target_sequence]).reshape(-1, 1)

model = Sequential([
    Embedding(input_dim=len(vocab), output_dim=5, input_length=1),
    LSTM(10, return_sequences=True),
    Dense(len(vocab), activation='softmax')
])

model.compile(optimizer=Adam(), loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=150, verbose=0)

def generate_sequence(seed, model, length):
    """Greedily extend the seed character one prediction at a time."""
    result = [seed]
    for _ in range(length):
        idx = np.array([[char_to_index[result[-1]]]])
        prediction = model.predict(idx)
        next_char = index_to_char[np.argmax(prediction)]
        result.append(next_char)
    return ''.join(result)

generated_sequence = generate_sequence('h', model, length=4)
print("Generated Sequence:", generated_sequence)

Output:

Epoch 150/150

1/1 [==============================] - 1s 676ms/step

1/1 [==============================] - 0s 30ms/step

1/1 [==============================] - 0s 35ms/step

1/1 [==============================] - 0s 42ms/step

Generated Sequence: hello

Result:
Thus the above program for the implementation of a simple Reinforcement Algorithm for an NLP problem was executed and the output was verified successfully.

