Lab record
Ex. No:1
IMPLEMENT SIMPLE VECTOR ADDITION IN TENSORFLOW
AIM:
To write a python program for simple vector addition in TensorFlow.
ALGORITHM:
Step 1: Start.
Step 2: Import the TensorFlow library.
Step 3: Define the input vectors that you want to add together. These vectors can be represented as lists, NumPy arrays, or TensorFlow tensors.
Step 4: Convert the input vectors into TensorFlow constant tensors using the tf.constant function. This step ensures that the input data is compatible with TensorFlow operations.
Step 5: Perform the addition using the tf.add function, which adds the two input tensors element-wise, combining corresponding elements from each tensor.
Step 6: Retrieve the result using the numpy() method to convert the TensorFlow tensor to a NumPy array.
Step 7: Output the result of the addition operation, which represents the sum of the two input vectors.
Step 8: End.
PROGRAM:
import tensorflow as tf
scalar = tf.constant(7)  # example value
scalar.ndim
vector = tf.constant([1.0, 2.0, 3.0])  # create a vector (example values)
matrix = tf.constant([[1, 2], [3, 4]])  # creating a matrix (example values)
matrix1 = tf.constant([[5, 6], [7, 8]])
print(matrix + matrix1)
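The listing above demonstrates scalar and matrix tensors; the vector addition described in the algorithm can be sketched as follows (the vector values are illustrative assumptions):

import tensorflow as tf

# Steps 3-4: define the input vectors and convert them to constant tensors
v1 = tf.constant([1.0, 2.0, 3.0])
v2 = tf.constant([4.0, 5.0, 6.0])

# Step 5: element-wise addition with tf.add
result = tf.add(v1, v2)

# Steps 6-7: convert back to a NumPy array and print the sum
print(result.numpy())  # [5. 7. 9.]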
OUTPUT:
RESULT:
Thus the program for simple vector addition in TensorFlow was executed successfully.
Ex. No:2
IMPLEMENT A REGRESSION MODEL IN KERAS
Date:6-2-2024
AIM:
To write a python program to implement a regression model in Keras.
ALGORITHM:
Step 1: Start
Step 2: Import the NumPy and TensorFlow libraries. Specifically, TensorFlow's Keras API is imported to define and train the neural network model.
Step 3: Generate random data for regression using NumPy. X represents the features, and y represents the labels. The labels (y) are generated from a linear relationship with some added noise.
Step 4: Define a sequential model using Keras. It consists of two dense layers. The first layer has 10 neurons with the ReLU activation function and expects input of shape (1,). The second layer has 1 neuron, which is the output neuron for regression.
Step 5: The model is compiled using the Adam optimizer and mean squared error loss
function, which are commonly used for regression tasks.
Step 6: The model is trained on the generated data for 100 epochs with a batch size of 32. The
training process aims to minimize the mean squared error loss.
Step 7: Once the model is trained, predictions are made on the same data X used for training.
Step 8: A loop is used to print the predictions made by the model along with the
corresponding true labels(y). This allows for a visual comparison of the model's
performance.
Step 9: Stop.
PROGRAM:
import numpy as np
import tensorflow as tf
np.random.seed(0)
X = np.random.rand(100, 1) # Features
# Make predictions
predictions = model.predict(X)
# Print some predictions and true labels for comparison
for i in range(5):
    print("Predicted:", predictions[i][0], "True label:", y[i][0])
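The recorded listing omits the label generation, model definition, and training between the lines above. A minimal runnable sketch of the full procedure (the linear coefficients and noise scale are assumptions):

import numpy as np
import tensorflow as tf

np.random.seed(0)
X = np.random.rand(100, 1)                      # features
y = 3 * X + 2 + 0.1 * np.random.randn(100, 1)   # labels: assumed linear relation plus noise

# Two dense layers: 10 ReLU units, then one output neuron for regression
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X, y, epochs=100, batch_size=32, verbose=0)

predictions = model.predict(X)
for i in range(5):
    print("Predicted:", predictions[i][0], "True label:", y[i][0])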
OUTPUT:
RESULT:
Thus the program for a regression model in Keras was executed successfully.
Ex. No:3
IMPLEMENT A PERCEPTRON IN TENSORFLOW/KERAS
Date:6-2-2024
AIM:
To write a python program to implement a perceptron in TensorFlow/Keras.
ALGORITHM:
Step 1: Start
Step 2: NumPy and TensorFlow libraries are imported. Specifically, TensorFlow's Keras API
is imported to define and train the neural network model.
Step 3: Example data for a logical OR operation is generated. X contains input binary
vectors, and y contains corresponding output labels.
Step 4: A sequential model is defined using Keras. It consists of a single dense layer with one
neuron. The input shape is (2,), matching the shape of the input vectors. The
activation function used is sigmoid, suitable for binary classification tasks like logical
OR.
Step 5: The model is compiled using the Adam optimizer and binary cross-entropy loss
function, which are common choices for binary classification tasks. Accuracy is also
set as a metric to monitor during training.
Step 6: The model is trained on the example data (X and y) for 1000 epochs. The training
process aims to minimize the binary cross-entropy loss.
Step 7: Once training is complete, the model is evaluated on the same dataset it was trained
on. The loss and accuracy metrics are printed.
Step 8: Finally, the trained model is used to make predictions on the input data X, and the
predictions are printed.
Step 9: Stop
PROGRAM:
import numpy as np
import tensorflow as tf
y = np.array([0, 1, 1, 1])
print("Loss:", loss)
print("Accuracy:", accuracy)
# Make predictions
predictions = model.predict(X)
print("Predictions:", predictions.flatten())
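The recorded listing omits the input data and model definition. A minimal runnable sketch that follows the algorithm above:

import numpy as np
import tensorflow as tf

# Input vectors and labels for the logical OR operation (Step 3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

# A single dense neuron with sigmoid activation: a perceptron (Step 4)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(2,), activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print("Loss:", loss)
print("Accuracy:", accuracy)
print("Predictions:", model.predict(X).flatten())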
OUTPUT:
Accuracy: 0.75
RESULT:
Thus the program for a perceptron in TensorFlow/Keras was executed successfully.
Ex. No:4
IMPLEMENT A FEED FORWARD NETWORK IN TENSORFLOW/KERAS
Date:13-2-2024
AIM:
To write a python program to implement a feed forward network in TensorFlow/Keras.
ALGORITHM:
Step 1: Start.
Step 2: Import the required libraries and set the random seeds for reproducibility.
Step 3: Load and split the MNIST dataset into training, validation, and test sets.
Step 4: Print the shapes of the training, validation, and test sets to check the data dimensions.
Step 5: Plot a few samples from the training set using matplotlib.pyplot.
Step 6: Reshape the input images to a 1D array (flatten) for feeding into the neural network.
Step 7: Load the Fashion MNIST dataset and print the labels of the first few samples to check.
Step 8: Convert the integer labels to one-hot encoded format using to_categorical.
Step 9: Stop.
PROGRAM:
import random
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
random.seed(SEED_VALUE)
np.random.seed(SEED_VALUE)
tf.random.set_seed(SEED_VALUE)
X_valid = X_train_all[:10000]
X_train = X_train_all[10000:]
y_valid = y_train_all[:10000]
y_train = y_train_all[10000:]
print(X_train.shape)
print(X_valid.shape)
print(X_test.shape)
plt.figure(figsize=(18, 5))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.axis(True)
    plt.imshow(X_train[i], cmap='gray')
plt.subplots_adjust(wspace=0.2, hspace=0.2)
X_valid = X_valid.reshape((X_valid.shape[0], 28 * 28))
y_train_onehot = to_categorical(y_train_fashion[0:9])
print(y_train_onehot)
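The recorded listing covers only the data preparation. A minimal sketch that carries it through a small feed-forward classifier (the dataset-loading lines follow the algorithm; the seed value, network architecture, and epoch count are assumptions):

import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical

SEED_VALUE = 42  # assumed seed value
random.seed(SEED_VALUE)
np.random.seed(SEED_VALUE)
tf.random.set_seed(SEED_VALUE)

# Load MNIST and carve out a validation split (Step 3)
(X_train_all, y_train_all), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_valid, X_train = X_train_all[:10000], X_train_all[10000:]
y_valid, y_train = y_train_all[:10000], y_train_all[10000:]

# Flatten the images and scale to [0, 1] (Step 6)
X_train = X_train.reshape((X_train.shape[0], 28 * 28)) / 255.0
X_valid = X_valid.reshape((X_valid.shape[0], 28 * 28)) / 255.0

# One-hot encode the labels (Step 8) and train a small feed-forward network
y_train_onehot = to_categorical(y_train)
y_valid_onehot = to_categorical(y_valid)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(28 * 28,)),  # assumed width
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train_onehot, epochs=5, batch_size=64,
          validation_data=(X_valid, y_valid_onehot))

The record also loads Fashion MNIST to demonstrate to_categorical on its labels; the same call applies unchanged there.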
OUTPUT:
RESULT:
Thus the program for a Feed Forward neural network using TensorFlow/Keras was
executed successfully.
Ex. No:5
IMPLEMENT A IMAGE CLASSIFIER USING CNN IN TENSORFLOW/KERAS
Date:13-2-2024
AIM:
To write a python program to implement an image classifier using CNN in TensorFlow/Keras.
ALGORITHM:
Step 1: Start.
Step 2: Import TensorFlow and matplotlib, then load the CIFAR-10 dataset and normalize the pixel values.
Step 3: Define the class names and plot a sample of the training images with their labels.
Step 4: Build a CNN model with convolution, pooling, flatten, and dense layers.
Step 5: Compile the model and train the model on the training data.
Step 6: Plot the training history (accuracy vs. epochs) and evaluate the model on the test data.
Step 7: Stop.
PROGRAM:
import tensorflow as tf
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer','dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.MaxPooling2D((2, 2)))
model.summary()
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
print(test_acc)
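For reference, a consolidated runnable sketch of the whole experiment (the convolution layers and epoch count follow the standard Keras CIFAR-10 example that the fragment appears to be based on):

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)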
OUTPUT:
RESULT:
Thus the python program for an image classifier using CNN in TensorFlow/Keras was executed successfully.
Ex. No:6
IMPROVE THE DEEP LEARNING MODEL BY TUNING HYPERPARAMETERS
Date:13-2-2024
AIM:
To write a python program to improve the deep learning model by fine-tuning hyperparameters.
ALGORITHM:
Step 1: Start.
Step 2: Import NumPy and the required scikit-learn modules, then generate a synthetic classification dataset.
Step 3: This creates a dataset with 1000 samples and 20 features, 10 of which are informative, and 2 classes. Then define the parameter distribution for hyperparameter tuning.
Step 4: Initialize the decision tree classifier and perform hyperparameter tuning using RandomizedSearchCV.
Step 5: Print the best parameters and best score found during the hyperparameter tuning process.
Step 6: Stop.
PROGRAM:
import numpy as np
tree = DecisionTreeClassifier()
tree_cv.fit(X, y)
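The listing above is fragmentary; a minimal runnable sketch of the steps in the algorithm (the parameter distribution and search settings are assumptions):

import numpy as np
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

# Step 3: synthetic dataset - 1000 samples, 20 features (10 informative), 2 classes
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=2, random_state=42)

# Parameter distribution for the randomized search (example ranges)
param_dist = {
    "max_depth": randint(1, 20),
    "min_samples_split": randint(2, 20),
    "criterion": ["gini", "entropy"],
}

# Step 4: randomized hyperparameter search over the decision tree
tree = DecisionTreeClassifier()
tree_cv = RandomizedSearchCV(tree, param_dist, n_iter=20, cv=5, random_state=42)
tree_cv.fit(X, y)

# Step 5: report the best configuration found
print("Best parameters:", tree_cv.best_params_)
print("Best score:", tree_cv.best_score_)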
OUTPUT:
RESULT:
Thus the program to improve the deep learning model by fine-tuning hyperparameters was executed successfully.
Ex. No:7
IMPLEMENT A TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION
Date:26-2-2024
AIM:
To write a python program to implement a transfer learning concept in image classification.
ALGORITHM:
Step 1: Start.
Step 2: Define the class names and directory containing training images.
Step 3: Create an ImageDataGenerator with rescaling and augmentation settings for the training images.
Step 4: Use flow_from_directory to load and augment the training images from the specified directory.
Step 5: Load the pre-trained VGG16 model (excluding the top layer) and freeze some layers.
Step 6: Add custom classification layers on top of the VGG16 base model.
Step 7: Compile and train the model and save the trained model.
Step 8: Use the model for predictions on a sample image.
Step 9: Stop.
PROGRAM:
import tensorflow as tf
import numpy as np
train_dir = r'C:\Users\LENOVO\PycharmProjects\nn\train'
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
train_dir,
batch_size=32,
class_mode='categorical')
layer.trainable = False
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
transfer_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
transfer_model.summary()
print("Training started...")
print("Training completed.")
transfer_model.save(r'C:\Users\LENOVO\PycharmProjects\nn\transfer_learning_resnet50_model.h5')
print("Making predictions...")
img_path = r'C:\Users\LENOVO\PycharmProjects\nn\pet.jpg'  # update with the path to the image you want to classify
img_array = tf.keras.preprocessing.image.img_to_array(img)
predictions = transfer_model.predict(img_array)
predicted_class = np.argmax(predictions[0])
predicted_class_name = class_names[predicted_class]
plt.imshow(img)
plt.axis('off')
plt.show()
print("Prediction completed.")
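The recorded listing omits the base-model loading, model assembly, training, and image loading. A minimal self-contained sketch (the class names, image size, and epoch count are assumptions):

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

class_names = ['Cats', 'Dogs']  # assumed class names
train_dir = r'C:\Users\LENOVO\PycharmProjects\nn\train'

train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.2, zoom_range=0.2,
                                   horizontal_flip=True, fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=32, class_mode='categorical')

# Pre-trained VGG16 base without its classifier head; freeze its layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
for layer in base_model.layers:
    layer.trainable = False

# Custom classification head on top of the frozen base
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(train_generator.num_classes, activation='softmax')(x)
transfer_model = models.Model(base_model.input, outputs)

transfer_model.compile(optimizer='adam', loss='categorical_crossentropy',
                       metrics=['accuracy'])
transfer_model.fit(train_generator, epochs=10)  # assumed epoch count

# Predict on a sample image
img = image.load_img(r'C:\Users\LENOVO\PycharmProjects\nn\pet.jpg', target_size=(150, 150))
img_array = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
predicted_class = np.argmax(transfer_model.predict(img_array)[0])
print("Predicted:", class_names[predicted_class])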
OUTPUT:
RESULT:
Thus the python program for a transfer learning concept in image classification was
executed successfully.
Ex. No:8
USING A PRE-TRAINED MODEL IN KERAS FOR TRANSFER LEARNING
Date:26-2-2024
AIM:
To write a python program to use a pre-trained model in Keras for transfer learning.
ALGORITHM:
Step 1: Start.
Step 2: Define the class names (in this case, 'Cats' and 'Dogs') and specify the directory containing the training images.
Step 3: Create an ImageDataGenerator with rescaling and augmentation settings for the training images.
Step 4: Use flow_from_directory to load and augment the training images from the specified directory.
Step 5: Load the pre-trained VGG16 model from Keras applications, excluding its top layer (fully connected layers).
Step 6: Optionally, freeze some layers of the base VGG16 model to prevent their weights from being updated during training.
Step 7: Add custom layers on top of the VGG16 base model to adapt it to the binary classification task.
Step 8: Create a new model using models.Model with the VGG16 base model's input and the custom classification layers as output.
Step 9: Compile the transfer learning model using the Adam optimizer, binary cross-entropy loss for binary classification, and accuracy as the metric.
Step 10: Train the model, then save it and display the image along with the predicted class name to visualize the classification result.
PROGRAM:
import tensorflow as tf
import numpy as np
train_dir = r'C:\Users\LENOVO\PycharmProjects\nn\train'
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
layer.trainable = False
x = layers.Flatten()(base_model.output)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
transfer_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
transfer_model.summary()
print("Training started...")
print("Training completed.")
transfer_model.save(r'C:\Users\LENOVO\PycharmProjects\nn\transfer_learning_model1.keras')
img_path = r'C:\Users\LENOVO\PycharmProjects\nn\pet.jpg'
img_array = image.img_to_array(img)
print("Making predictions...")
predictions = transfer_model.predict(img_array)
predicted_class = predictions[0][0]  # since it's binary, you can directly take the first element of the prediction array
plt.axis('off')
plt.show()
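As in the previous exercise, the listing is fragmentary. A minimal sketch of the binary variant described in the algorithm (image size, epoch count, and the class-index mapping are assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_dir = r'C:\Users\LENOVO\PycharmProjects\nn\train'
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.2, zoom_range=0.2,
                                   horizontal_flip=True, fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=32, class_mode='binary')

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
for layer in base_model.layers:
    layer.trainable = False

# Flatten head with a single sigmoid output for the binary task
x = layers.Flatten()(base_model.output)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
transfer_model = models.Model(base_model.input, outputs)
transfer_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
transfer_model.fit(train_generator, epochs=10)  # assumed epoch count
transfer_model.save(r'C:\Users\LENOVO\PycharmProjects\nn\transfer_learning_model1.keras')

img = image.load_img(r'C:\Users\LENOVO\PycharmProjects\nn\pet.jpg', target_size=(150, 150))
img_array = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
p = transfer_model.predict(img_array)[0][0]
print('Dogs' if p > 0.5 else 'Cats', p)  # classes sorted alphabetically by flow_from_directory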
OUTPUT:
RESULT:
Thus the python program for pre-trained model on keras for transfer learning was
executed successfully.
Ex. No:9
PERFORM SENTIMENT ANALYSIS USING RNN
Date:27-2-2024
AIM:
To write a python program to perform sentiment analysis using RNN.
ALGORITHM:
Step 1: Start.
Step 2: Import NumPy and TensorFlow, and set the vocabulary size (max_features), maximum review length (maxlen), and batch size.
Step 3: Load the IMDB movie-review dataset, keeping only the max_features most frequent words.
Step 4: Pad (or truncate) each review sequence to maxlen so all inputs have the same length.
Step 5: Define a Sequential model with an Embedding layer, a SimpleRNN layer with 32 units, and a Dense output layer with sigmoid activation for binary sentiment classification.
Step 6: Compile the model with binary cross-entropy loss and accuracy as the metric, and print the model summary.
Step 7: Train the model on the training data.
Step 8: Evaluate the model on the test data and print the loss and accuracy.
Step 9: Stop.
PROGRAM:
import numpy as np
import tensorflow as tf
max_features = 10000
maxlen = 500
batch_size = 32
print('Loading data...')
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
print('Training...')
print('Evaluating...')
loss, accuracy = model.evaluate(x_test, y_test)
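The recorded listing omits the data loading, padding, compilation, and training calls. A minimal runnable sketch (the IMDB dataset is implied by the fragment; the optimizer and epoch count are assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

max_features = 10000   # vocabulary size
maxlen = 500           # cut reviews after 500 words
batch_size = 32

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

print('Training...')
model.fit(x_train, y_train, epochs=3, batch_size=batch_size, validation_split=0.2)

print('Evaluating...')
loss, accuracy = model.evaluate(x_test, y_test)
print('Loss:', loss, 'Accuracy:', accuracy)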
OUTPUT:
RESULT:
Thus the program for sentiment analysis using RNN was executed successfully.
Ex. No:10
IMPLEMENT AN LSTM BASED AUTOENCODER IN TENSORFLOW/KERAS
Date:27-2-2024
AIM:
To write a python program to implement an LSTM based auto-encoder in TensorFlow/Keras.
ALGORITHM:
Step 1: Start.
Step 2: Create random data for demonstration purposes. The data consists of 1000 sequences, each of length 10, with 1 feature.
Step 3: Define an input layer matching the sequence shape (10 time steps, 1 feature).
Step 4: Use an LSTM layer with 4 units for encoding the input sequences (encoded), and repeat the encoded vector across the sequence length with RepeatVector.
Step 5: Decode the repeated representation using another LSTM layer with 4 units and a time-distributed dense layer to reconstruct the original input shape.
Step 6: Create the auto-encoder model using the defined input and output layers and compile the auto-encoder model with the Adam optimizer and mean squared error (MSE) loss function.
Step 7: Print the model summary and train the auto-encoder on the random data.
Step 8: Stop.
PROGRAM:
import numpy as np
import tensorflow as tf
encoded = LSTM(4)(inputs)
encoded = RepeatVector(10)(encoded)
decoded = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))(decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
decoder_layer = autoencoder.layers[-2](encoded_input)
decoder_layer = autoencoder.layers[-1](decoder_layer)
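The listing above omits the data creation and model wiring. A minimal sketch of the auto-encoder described in the algorithm (the training settings are assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

# Random demonstration data: 1000 sequences of length 10 with 1 feature
data = np.random.rand(1000, 10, 1)

inputs = Input(shape=(10, 1))
encoded = LSTM(4)(inputs)                     # encode each sequence into 4 units
encoded = RepeatVector(10)(encoded)           # repeat the code across 10 time steps
decoded = LSTM(4, return_sequences=True)(encoded)
decoded = TimeDistributed(Dense(1))(decoded)  # reconstruct the original input shape

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
autoencoder.fit(data, data, epochs=10, batch_size=32, verbose=0)

# Standalone decoder reusing the trained layers, as in the fragment above
encoded_input = Input(shape=(10, 4))
decoder_layer = autoencoder.layers[-2](encoded_input)
decoder_layer = autoencoder.layers[-1](decoder_layer)
decoder = Model(encoded_input, decoder_layer)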
OUTPUT:
RESULT:
Thus the program for an LSTM (Long short-Term Memory) based auto-encoder
was executed successfully.
Ex. No:11
IMAGE GENERATION USING GAN
Date:27-2-2024
AIM:
To write a python program for image generation using GAN.
ALGORITHM:
Step 2: Load the MNIST dataset and preprocess it by normalizing the pixel values to the
range [-1, 1] and adding a channel dimension.
Step 3: Create the generator model using a Sequential model with layers for dense, reshape,
and transpose convolution operations.
Step 4: Create the discriminator model using another Sequential model with convolution
layers followed by a dense layer for binary classification.
Step 5: Compile the discriminator model with binary cross-entropy loss and the Adam
optimizer.
Step 6: Set discriminator.trainable = False to freeze the discriminator's weights during GAN training.
Step 7: Create the GAN model by connecting the generator and discriminator in a sequential
manner.
Step 8: Compile the GAN model with binary cross-entropy loss and the Adam optimizer.
Step 9: END.
PROGRAM:
import tensorflow as tf
import numpy as np
from tensorflow import keras
generator = keras.Sequential([
keras.layers.Dense(7 * 7 * 128,
input_shape=(100,)),
keras.layers.Reshape((7, 7, 128)),
keras.layers.LeakyReLU(alpha=0.2),
])
discriminator = keras.Sequential([
keras.layers.LeakyReLU(alpha=0.2),
keras.layers.Flatten(),
keras.layers.Dense(1, activation='sigmoid')
])
discriminator.compile(loss='binary_crossentropy',
optimizer=keras.optimizers.Adam(learning_rate=0.0002),
metrics=['accuracy'])
discriminator.trainable = False
gan_input = keras.Input(shape=(100,))
generated_image = generator(gan_input)
gan_output = discriminator(generated_image)
gan = keras.Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy',
optimizer=keras.optimizers.Adam(learning_rate=0.0002))
batch_size = 64
epochs = 10
sample_interval = 1000
real_images = X_train[idx]
fake_images = generator.predict(noise)
if epoch % sample_interval == 0:
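The recorded listing breaks off at the training loop, and the generator and discriminator definitions are truncated. A minimal self-contained sketch of the full GAN (the transpose-convolution widths, training step count, and loop structure are assumptions consistent with Steps 2-8):

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Load and scale MNIST to [-1, 1], add a channel dimension (Step 2)
(X_train, _), _ = keras.datasets.mnist.load_data()
X_train = (X_train.astype('float32') - 127.5) / 127.5
X_train = np.expand_dims(X_train, -1)

# Generator: dense -> reshape -> transpose convolutions up to 28x28x1 (Step 3)
generator = keras.Sequential([
    keras.layers.Dense(7 * 7 * 128, input_shape=(100,)),
    keras.layers.LeakyReLU(alpha=0.2),
    keras.layers.Reshape((7, 7, 128)),
    keras.layers.Conv2DTranspose(64, 4, strides=2, padding='same'),
    keras.layers.LeakyReLU(alpha=0.2),
    keras.layers.Conv2DTranspose(1, 4, strides=2, padding='same', activation='tanh'),
])

# Discriminator: convolution followed by a sigmoid classifier (Step 4)
discriminator = keras.Sequential([
    keras.layers.Conv2D(64, 4, strides=2, padding='same', input_shape=(28, 28, 1)),
    keras.layers.LeakyReLU(alpha=0.2),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy',
                      optimizer=keras.optimizers.Adam(learning_rate=0.0002),
                      metrics=['accuracy'])
discriminator.trainable = False  # frozen inside the combined model (Step 6)

gan_input = keras.Input(shape=(100,))
gan = keras.Model(gan_input, discriminator(generator(gan_input)))
gan.compile(loss='binary_crossentropy',
            optimizer=keras.optimizers.Adam(learning_rate=0.0002))

batch_size = 64
steps = 3000            # assumed number of training steps
sample_interval = 1000

for step in range(steps):
    # discriminator update on real and generated batches
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    real_images = X_train[idx]
    noise = np.random.normal(0, 1, (batch_size, 100))
    fake_images = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # generator update through the frozen discriminator
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    if step % sample_interval == 0:
        print(f"step {step}: d_real={d_loss_real[0]:.3f} d_fake={d_loss_fake[0]:.3f} g={g_loss:.3f}")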
OUTPUT:
RESULT:
Thus the program for image generation using GAN (Generative Adversarial
Network) was executed successfully.
Ex. No:12
TRAIN A DEEP LEARNING MODEL TO CLASSIFY A GIVEN IMAGE USING A PRE-TRAINED MODEL
Date:27-2-2024
AIM:
To write a python program to train a deep learning model to classify a given image using a pre-trained model.
ALGORITHM:
Step 1: Start.
Step 2: Import the required libraries and mount Google Drive to access the dataset directory.
Step 3: Load the VGG16 model pre-trained on ImageNet, excluding the fully connected layers.
Step 4: Freeze the weights of the pre-trained layers so they are not updated during training.
Step 5: Create a new Sequential model and add the pre-trained VGG16 model as a layer.
Step 6: Flatten the output of VGG16 and add fully connected layers for classification, including dropout layers for regularization.
Step 7: Set the number of classes in your dataset and add an output layer with softmax activation for multi-class classification.
Step 8: Compile the model with the Adam optimizer, categorical cross-entropy loss for multi-class classification, and accuracy as a metric.
Step 9: Use ImageDataGenerator to load and preprocess the data, rescaling pixel values to the range [0, 1].
Step 10: Create data generators for training and validation data, specifying target size, batch size, class mode, and shuffle parameters.
Step 11: Train the model and evaluate it on the validation data.
Step 12: Stop.
PROGRAM:
import tensorflow as tf
from google.colab import drive
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
drive.mount('/content/drive')
data_dir = '/content/drive/MyDrive/Collab'
vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in vgg_model.layers:
    layer.trainable = False
model = Sequential()
model.add(vgg_model)
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
num_classes = 2
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer=Adam(learning_rate=1e-4), loss='categorical_crossentropy',
              metrics=['accuracy'])
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)
class_labels = train_generator.class_indices
model.fit(train_generator,
          steps_per_epoch=train_generator.samples // train_generator.batch_size,
          validation_data=validation_generator,
          validation_steps=validation_generator.samples // validation_generator.batch_size,
          epochs=10)  # assumed epoch count
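Step 11 calls for evaluating the model on the validation data; a short sketch of that step, assuming the generators defined above:

val_loss, val_acc = model.evaluate(
    validation_generator,
    steps=validation_generator.samples // validation_generator.batch_size)
print("Validation accuracy:", val_acc)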
OUTPUT:
RESULT:
Thus the program for a deep learning model to classify a given image using a pre-trained model was executed successfully.
Ex. No:13
BUILD THE RECOMMENDATION SYSTEM FROM SALES DATA USING DEEP LEARNING
AIM:
To write a python program to build the recommendation system from sales data
using deep learning.
ALGORITHM:
Step 1: Start.
Step 2: Import NumPy and TensorFlow, and set the number of users, items, and samples.
Step 3: Generate random user IDs, item IDs, and ratings for training, validation, and testing sets.
Step 4: Create a class CollaborativeFilteringModel that inherits from tf.keras.Model.
Step 5: Implement the call method in CollaborativeFilteringModel to compute the dot product of user and item embeddings.
Step 6: Create an instance of CollaborativeFilteringModel with the specified number of users, items, and embedding size.
Step 7: Use the fit method to train the model on the training data and use the evaluate method to evaluate the model on the test data.
Step 8: Stop.
PROGRAM:
import tensorflow as tf
import numpy as np
num_users = 1000
num_items = 500
num_samples = 10000
class CollaborativeFilteringModel(tf.keras.Model):
self.dot = tf.keras.layers.Dot(axes=1)
user_embedding = self.user_embedding(user_id)
item_embedding = self.item_embedding(item_id)
embedding_size = 50
model.compile(optimizer='adam', loss='mean_squared_error')
epochs=10, batch_size=64)
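The class body in the listing above is incomplete. A minimal self-contained sketch of the collaborative-filtering model (the random interaction data stands in for real sales data, and the epoch and batch settings match the fragment):

import numpy as np
import tensorflow as tf

num_users, num_items, num_samples = 1000, 500, 10000
embedding_size = 50

# Random interaction data (stand-in for real sales data)
user_ids = np.random.randint(0, num_users, num_samples)
item_ids = np.random.randint(0, num_items, num_samples)
ratings = np.random.uniform(1, 5, num_samples)

class CollaborativeFilteringModel(tf.keras.Model):
    def __init__(self, num_users, num_items, embedding_size):
        super().__init__()
        self.user_embedding = tf.keras.layers.Embedding(num_users, embedding_size)
        self.item_embedding = tf.keras.layers.Embedding(num_items, embedding_size)
        self.dot = tf.keras.layers.Dot(axes=1)

    def call(self, inputs):
        user_id, item_id = inputs
        user_embedding = self.user_embedding(user_id)
        item_embedding = self.item_embedding(item_id)
        # predicted rating is the dot product of the two embeddings
        return self.dot([user_embedding, item_embedding])

model = CollaborativeFilteringModel(num_users, num_items, embedding_size)
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit([user_ids, item_ids], ratings, epochs=10, batch_size=64)
print("Test loss:", model.evaluate([user_ids, item_ids], ratings, verbose=0))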
OUTPUT:
RESULT:
Thus the program for recommendation system from sales data using deep learning
was executed successfully.
Ex. No:14
OBJECT DETECTION USING CNN
AIM:
To write a python program for object detection using CNN.
ALGORITHM:
Step 1: Start.
Step 2: Import NumPy, TensorFlow, OpenCV, and matplotlib, and load the MNIST dataset.
Step 3: Set the grid size for the mask and define functions for creating colored digits and data.
Step 4: Pick a random digit from the MNIST dataset, then make the digit colorful by multiplying it with random values.
Step 5: Create empty arrays for images (X) and labels (y). Call the make_numbers function to populate X and y with colorful digits and their respective labels.
Step 6: Define a function to assign colors based on probabilities.
Step 7: Define a function show_predict to visualize predictions.
Step 8: Show predictions for the generated sample using the show_predict function.
Step 9: Stop.
PROGRAM:
import numpy as np
import tensorflow as tf
import cv2
for _ in range(3):
idx = np.random.randint(len(X_num))
kls = y_num[idx]
channels = y[my][mx]
if channels[0] > 0:
channels[0] = 1.0
def make_data(size=64):
for i in range(size):
make_numbers(X[i], y[i])
X = np.clip(X, 0.0, 1.0)
return X, y
def get_color_by_probability(p):
    # assumed color thresholds: red for low, yellow for mid, green for high probability
    if p < 0.3:
        return (1.0, 0.0, 0.0)
    if p < 0.7:
        return (1.0, 1.0, 0.0)
    return (0.0, 1.0, 0.0)
X = X.copy()
for mx in range(8):
for my in range(8):
channels = y[my][mx]
continue
color = get_color_by_probability(prob)
kls = np.argmax(channels[5:])
plt.imshow(X)
X, y = make_data(size=1)
show_predict(X[0], y[0])
plt.show()
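The recorded listing is heavily fragmented. A minimal sketch of the data-generation part (the 64x64 canvas, 8x8 grid, and the channel layout of objectness, box, and one-hot class are assumptions inferred from the fragment):

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# MNIST digits to paste onto a larger canvas
(X_num, y_num), _ = tf.keras.datasets.mnist.load_data()
X_num = X_num / 255.0

GRID = 8  # 8x8 label grid over an assumed 64x64 canvas (8 pixels per cell)

def make_numbers(X, y):
    # Paste up to 3 randomly colored digits and fill the label grid
    for _ in range(3):
        idx = np.random.randint(len(X_num))
        digit, kls = X_num[idx], y_num[idx]
        px, py = np.random.randint(0, 64 - 28, size=2)  # top-left corner
        color = np.random.uniform(0.3, 1.0, size=3)     # random digit color
        for c in range(3):
            X[py:py + 28, px:px + 28, c] += digit * color[c]
        # write the label into the grid cell containing the digit's center
        mx, my = (px + 14) // 8, (py + 14) // 8
        channels = y[my][mx]
        channels[0] = 1.0                               # object probability
        channels[1:5] = [(px + 14) / 64, (py + 14) / 64, 28 / 64, 28 / 64]  # box
        channels[5 + kls] = 1.0                         # one-hot digit class

def make_data(size=64):
    X = np.zeros((size, 64, 64, 3))
    y = np.zeros((size, GRID, GRID, 5 + 10))
    for i in range(size):
        make_numbers(X[i], y[i])
    X = np.clip(X, 0.0, 1.0)
    return X, y

X, y = make_data(size=1)
plt.imshow(X[0])
plt.show()

The get_color_by_probability and show_predict helpers in the fragment visualize predicted cells over this same grid layout.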
OUTPUT:
RESULT:
Thus the python program for object detection using CNN (Convolutional Neural Network) was executed successfully.
Ex. No:15
IMPLEMENT A REINFORCEMENT ALGORITHM FOR AN NLP PROBLEM
AIM:
To write a python program to implement a reinforcement algorithm for an NLP problem.
ALGORITHM:
Step 1: Start.
Step 2: Initialize the Q-table with zeros for the given numbers of states and actions, and set the learning rate (alpha), discount factor (gamma), and exploration rate (epsilon).
Step 3: Train the agent with Q-learning: in each episode, choose an action (exploring at random with probability epsilon), observe the reward and next state, and update the Q-table.
Step 4: Define generate_response to pick the best action for a given dialogue state from the Q-table.
Step 5: Run an interactive dialogue loop that reads a context from the user, generates a response action, and handles invalid input.
Step 6: Stop.
PROGRAM:
import numpy as np
num_states = 10
num_actions = 10
Q = np.zeros((num_states, num_actions))
alpha = 0.1
gamma = 0.9
epsilon = 0.1
reward = 0
def train_q_learning(num_episodes):
for _ in range(num_states):
else:
state = next_state
def generate_response(state):
return action
def interactive_dialogue():
try:
context = int(input())
response_action = generate_response(context)
else:
except ValueError:
num_episodes = 1000
train_q_learning(num_episodes)
interactive_dialogue()
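The listing above is heavily fragmented. A minimal self-contained sketch of the Q-learning dialogue agent (the toy reward function, which rewards the action matching the state, is an assumption):

import numpy as np

num_states = 10
num_actions = 10
Q = np.zeros((num_states, num_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step_environment(state, action):
    # Toy environment: reward 1 when the action matches the state, else 0 (assumed)
    reward = 1.0 if action == state else 0.0
    next_state = np.random.randint(num_states)
    return reward, next_state

def train_q_learning(num_episodes):
    for _ in range(num_episodes):
        state = np.random.randint(num_states)
        for _ in range(num_states):
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = np.random.randint(num_actions)
            else:
                action = int(np.argmax(Q[state]))
            reward, next_state = step_environment(state, action)
            # Q-learning update rule
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state

def generate_response(state):
    action = int(np.argmax(Q[state]))
    return action

def interactive_dialogue():
    print("Enter a dialogue state (0-9); any non-number quits.")
    while True:
        try:
            context = int(input())
            if 0 <= context < num_states:
                print("Response action:", generate_response(context))
            else:
                print("State out of range.")
        except ValueError:
            break

num_episodes = 1000
train_q_learning(num_episodes)
interactive_dialogue()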
OUTPUT:
RESULT:
Thus the python program for reinforcement algorithm for an NLP problem was
executed successfully.