CCS355 - NN&DL Lab Manual
DEPARTMENT OF
INFORMATION TECHNOLOGY
NAME: ...................................................................................   ROLL NO: .......................................................
INDEX
Ex.No  Date  Experiment Name  Marks  Signature
1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in TensorFlow/Keras environment.
4. Implement a feed-forward network in TensorFlow/Keras.
5. Implement an image classifier using CNN in TensorFlow/Keras.
6. Improve the deep learning model by fine-tuning hyperparameters.
7. Implement a transfer learning concept in image classification.
8. Use a pre-trained model on Keras for transfer learning.
9. Perform sentiment analysis using RNN.
10. Implement an LSTM-based autoencoder in TensorFlow/Keras.
Total Marks:
Faculty Incharge:
Ex.No:01
SIMPLE VECTOR ADDITION IN TENSORFLOW
DATE:
Aim:
To implement simple vector addition using TensorFlow by writing a program that adds two vectors element-wise.
Algorithm:
1. Import the TensorFlow library.
2. Create two constant vectors of the same length.
3. Add the vectors element-wise using tf.add (or the + operator).
4. Print the resulting vector.
Program:
import tensorflow as tf
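# A minimal completion sketch: the vector values below are assumed
# purely for illustration.
v1 = tf.constant([1.0, 2.0, 3.0])
v2 = tf.constant([4.0, 5.0, 6.0])
result = tf.add(v1, v2)   # element-wise addition; equivalent to v1 + v2
print(result.numpy())     # expected: [5. 7. 9.]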
Output:
Result:
Thus the simple vector addition in TensorFlow was executed successfully.
Ex.No:02
SIMPLE REGRESSION MODEL IN KERAS
DATE:
Aim:
To implement a simple regression model in Keras.
Algorithm:
1. Import NumPy and the required Keras classes.
2. Prepare the input data X and the target values y.
3. Build a Sequential model with a hidden Dense layer and a single linear output unit.
4. Compile the model with mean squared error loss and train it.
5. Evaluate the loss and predict the output for new data.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

np.random.seed(42)
X = np.random.rand(100, 1)                     # synthetic inputs (assumed)
y = 3 * X + 2 + 0.1 * np.random.randn(100, 1)  # noisy linear targets (assumed)
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(1,)))
model.add(Dense(1))                            # linear output for regression
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=100, batch_size=16)
loss = model.evaluate(X, y)
new_data = np.array([[0.5]])                   # example query point (assumed)
predictions = model.predict(new_data)
print(predictions)
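Note: the final Dense(1) layer has no activation, so the network ends in a linear unit and can predict any continuous value; mean squared error is the standard loss for this setting.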
Output:
Epoch 1/100
Epoch 2/100
...
Epoch 100/100
Result:
Thus the simple regression model in Keras was executed successfully.
Ex.No:03
A PERCEPTRON IN TENSORFLOW/KERAS ENVIRONMENT
DATE:
Aim:
To implement a perceptron in the TensorFlow/Keras environment.
Algorithm:
1. Import NumPy and the required Keras classes.
2. Define the input features X and their target labels y.
3. Build a Sequential model with a single Dense unit and a sigmoid activation (the perceptron).
4. Compile and train the model.
5. Predict on the inputs and print the results.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # Input features
y = np.array([[0], [0], [0], [1]])  # AND-gate targets (assumed from the output below)
model = Sequential()
model.add(Dense(1, input_dim=2, activation='sigmoid'))  # a single perceptron unit
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=500, verbose=0)
predictions = model.predict(X)
print(predictions)
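Note: a single sigmoid unit can only learn linearly separable functions such as AND or OR; a lone perceptron cannot fit XOR, which requires at least one hidden layer.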
Output:
[[0.05]
[0.2 ]
[0.2 ]
[0.9 ]]
Result:
Thus the perceptron in the TensorFlow/Keras environment was executed successfully.
Ex.No:04
FEED-FORWARD NETWORK IN TENSORFLOW/KERAS
DATE:
Aim:
To implement a feed-forward network in TensorFlow/Keras.
Algorithm:
1. Load the Iris dataset and one-hot encode the class labels.
2. Split the data into training and test sets.
3. Build a feed-forward network with a hidden Dense layer and a softmax output layer.
4. Compile with the Adam optimizer and categorical cross-entropy loss, then train.
5. Predict the classes of the test set.
Program:
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

iris = load_iris()
X = iris.data
y = iris.target
encoder = OneHotEncoder(sparse_output=False)  # use sparse=False on scikit-learn < 1.2
y_onehot = encoder.fit_transform(y.reshape(-1, 1))
X_train, X_test, y_train, y_test = train_test_split(X, y_onehot, test_size=0.2, random_state=42)
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(4,)))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=8)
predictions = model.predict(X_test)
predicted_classes = predictions.argmax(axis=1)
print(predicted_classes)
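Note: the labels are one-hot encoded because categorical_crossentropy expects one probability column per class; with integer labels, sparse_categorical_crossentropy could be used instead and the encoder dropped.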
Output:
Epoch 1/100
Epoch 2/100
...
Epoch 100/100
[2 0 1 1 0 2 1 0 2 1 1 2 1 0 2 0 1 2 2 1]
Result:
Thus the feed-forward network in TensorFlow/Keras was executed successfully.
Ex.No:05
IMAGE CLASSIFIER USING CNN IN TENSORFLOW/KERAS
DATE:
Aim:
To implement an image classifier using CNN in TensorFlow/Keras.
Algorithm:
1. Load the image dataset and normalize the pixel values.
2. Build a CNN with convolution, pooling, flatten, dense, and dropout layers.
3. Compile and train the model.
4. Predict the classes of the test images.
Program:
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()  # CIFAR-10 assumed
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_test, y_test))
predictions = model.predict(X_test)
predicted_classes = predictions.argmax(axis=1)
print(predicted_classes[:10])
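Note: each convolution-and-pooling stage halves the spatial resolution while deepening the feature maps, and the Dropout layer before the softmax output helps reduce overfitting.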
Output:
[3 8 8 0 6 6 1 6 3 1]
Result:
Thus the image classifier using CNN in TensorFlow/Keras was executed successfully.
Ex.No:06
DEEP LEARNING MODEL BY FINE-TUNING HYPERPARAMETERS
DATE:
Aim:
To improve a deep learning model by fine-tuning its hyperparameters.
Algorithm:
1. Load and preprocess the dataset.
2. Define a function that builds and compiles the CNN from a given set of hyperparameters.
3. Loop over candidate learning rates, dropout rates, and filter counts.
4. Train each candidate model and record its validation accuracy.
5. Report the hyperparameter combination with the best accuracy.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()  # dataset assumed
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

def create_model(learning_rate, dropout_rate, num_filters):
    model = Sequential()
    model.add(Conv2D(num_filters, (3, 3), activation='relu', input_shape=(32, 32, 3)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(num_filters * 2, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

best_accuracy = 0
best_params = {}
# Candidate values are assumed; the original grid is not shown
for learning_rate in [0.001, 0.0001]:
    for dropout_rate in [0.3, 0.5]:
        for num_filters in [32, 64]:
            epochs, batch_size = 10, 64
            model = create_model(learning_rate, dropout_rate, num_filters)
            history = model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
                                validation_data=(X_test, y_test), verbose=0)
            accuracy = history.history['val_accuracy'][-1]
            if accuracy > best_accuracy:
                best_accuracy = accuracy
                best_params = {
                    'learning_rate': learning_rate,
                    'dropout_rate': dropout_rate,
                    'num_filters': num_filters,
                    'epochs': epochs,
                    'batch_size': batch_size,
                }
print(best_params)
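Note: the search keeps whichever combination maximizes validation accuracy; an exhaustive grid grows multiplicatively with every added hyperparameter, so in practice the candidate lists are kept short or replaced with random search.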
Output:
...
{'learning_rate': 0.001, 'dropout_rate': 0.5, 'num_filters': 64, 'epochs': 10, 'batch_size': 64}
Result:
Thus the deep learning model was improved by fine-tuning its hyperparameters, and the program was executed successfully.
Ex.No:07
A TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION
DATE:
Aim:
To implement the transfer learning concept in image classification.
Algorithm:
1. Load and preprocess the image dataset.
2. Load a pre-trained convolutional base without its top layers and freeze its weights.
3. Stack a pooling layer and a new dense classification head on top.
4. Compile and train the model, then predict the classes of the test images.
Program:
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()  # dataset assumed
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))  # base network assumed
base_model.trainable = False  # freeze the pre-trained convolutional base
model = Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=64)
predictions = model.predict(X_test)
predicted_classes = predictions.argmax(axis=1)
print(predicted_classes[:10])
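Note: with base_model.trainable = False only the new pooling and dense layers are updated during training, so the pre-trained ImageNet features act as a fixed feature extractor.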
Output:
[3 7 6 8 9 1 0 4 2 5]
Result:
Thus the transfer learning concept in image classification was executed successfully.
Ex.No:08
A PRE TRAINED MODEL ON KERAS FOR TRANSFER LEARNING
DATE:
Aim:
To use a pre-trained model on Keras for transfer learning.
Algorithm:
1. Load and preprocess the image dataset.
2. Load a pre-trained Keras Applications model without its top layers and freeze it.
3. Add a pooling layer and a new classification head.
4. Compile, train, and predict on the test set.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()  # dataset assumed
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)
# MobileNetV2 is assumed here; any Keras Applications model can be substituted
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
base_model.trainable = False  # keep the ImageNet weights fixed
model = Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=64)
predictions = model.predict(X_test)
predicted_classes = predictions.argmax(axis=1)
print(predicted_classes[:10])
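Note: once the new head has converged, a common refinement is to unfreeze the top few layers of the base model and continue training with a much smaller learning rate (fine-tuning).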
Output:
Epoch 1/10
Epoch 2/10
...
[3 8 8 0 6 6 1 3 1 2]
Result:
Thus transfer learning using a pre-trained model on Keras was executed successfully.
Ex.No:09
SENTIMENT ANALYSIS USING RNN
DATE:
Aim:
To perform sentiment analysis using an RNN.
Algorithm:
1. Load the review dataset and pad every sequence to a fixed length.
2. Build a model with an embedding layer, dropout, a recurrent layer, and a sigmoid output.
3. Compile with binary cross-entropy loss and train the model.
4. Predict the sentiment of sample reviews.
Program:
import tensorflow as tf
import numpy as np
maxlen = 200  # Maximum length of the review sequences (pad shorter reviews)
model = Sequential()
# (the embedding and recurrent layers are omitted in this listing; see the completed sketch below)
model.add(SpatialDropout1D(0.2))
model.add(Dense(1, activation='sigmoid'))
predictions = model.predict(X_test[:10])
for i in range(10):
    print('positive' if predictions[i][0] > 0.5 else 'negative')
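A complete, runnable version of this program is sketched below; the IMDB review dataset, the 5,000-word vocabulary, the embedding width, and the LSTM size are all assumptions, since the listing above does not show them.

import tensorflow as tf
import numpy as np
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SpatialDropout1D, LSTM, Dense

max_features = 5000  # vocabulary size (assumed)
maxlen = 200  # maximum length of the review sequences (pad shorter reviews)
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)

model = Sequential()
model.add(Embedding(max_features, 128))   # learned word vectors (width assumed)
model.add(SpatialDropout1D(0.2))          # drops whole embedding channels for regularization
model.add(LSTM(64))                       # recurrent layer (size assumed)
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=3, batch_size=64, validation_split=0.2)

predictions = model.predict(X_test[:10])
for i in range(10):
    print('positive' if predictions[i][0] > 0.5 else 'negative')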
Output:
Result:
Thus sentiment analysis using an RNN was executed successfully.
Ex.No:10
LSTM BASED AUTOENCODER IN TENSORFLOW/KERAS
DATE:
Aim:
To implement an LSTM-based autoencoder in TensorFlow/Keras.
Algorithm:
1. Generate a dataset of one-dimensional sequences.
2. Build an encoder LSTM that compresses each sequence into a latent vector.
3. Repeat the latent vector across all timesteps and decode it with a second LSTM.
4. Train the autoencoder to reconstruct its own input.
5. Plot an original sequence next to its reconstruction.
Program:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.optimizers import Adam

def generate_data(n_samples, timesteps):
    x = np.linspace(0, 2 * np.pi, timesteps)
    data = np.array([np.sin(x + np.random.rand()) for _ in range(n_samples)])  # sine sequences (assumed)
    return data

n_samples = 1000
timesteps = 100
data = generate_data(n_samples, timesteps).reshape(n_samples, timesteps, 1)
inputs = Input(shape=(timesteps, 1))
encoded = LSTM(32)(inputs)                           # encoder: sequence -> latent vector
decoded = RepeatVector(timesteps)(encoded)           # repeat the latent vector per timestep
decoded = LSTM(32, return_sequences=True)(decoded)   # decoder LSTM
output_sequence = TimeDistributed(Dense(1))(decoded)
autoencoder = Model(inputs, output_sequence)
autoencoder.compile(optimizer=Adam(), loss='mean_squared_error')
autoencoder.fit(data, data, epochs=20, batch_size=32)
reconstructed_data = autoencoder.predict(data)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(data[0].squeeze())                          # original sequence
plt.subplot(1, 2, 2)
plt.plot(reconstructed_data[0].squeeze())            # reconstruction
plt.show()
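Note: RepeatVector is what bridges the encoder and decoder; the encoder squeezes the whole sequence into one latent vector, and repeating that vector once per timestep gives the decoder LSTM a sequence-shaped input to reconstruct from.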
Output:
Epoch 1/20
Epoch 2/20
...
Epoch 20/20
Result:
Thus the LSTM-based autoencoder in TensorFlow/Keras was executed successfully.