NNDL Manual

Ex No: 1
IMPLEMENT SIMPLE VECTOR ADDITION IN TENSORFLOW
Date:

AIM:
To implement simple vector addition in TensorFlow

ALGORITHM:
Step 1: Import the necessary packages for the program.
Step 2: Create the vectors for the addition operation.
Step 3: Perform the addition operation.
Step 4: Print the result for the given input.
PROGRAM:

# (1) importing packages
import tensorflow as tf

# creating two tensors
matrix = tf.constant([[1, 2], [3, 4]])
matrix1 = tf.constant([[2, 4], [6, 8]])

# addition of two matrices
print(matrix + matrix1)

# (2) adding two vectors
vector1 = tf.constant([10, 10])
vector2 = tf.constant([20, 20])

# addition of the two vectors
print(vector1 + vector2)

# (3) adding a Python list and a scalar (both are converted to tensors)
x = [1, 2, 3, 4, 5]
y = 1
print(tf.add(x, y))

x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
print(x + y)

# (4) adding a Python list and a tensor
x = [1, 2, 3, 4, 5]
y = tf.constant([1, 2, 3, 4, 5])
print(tf.add(x, y))

# (5) adding with an explicit dtype (the list is cast to int8, which can overflow)
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**7 + 1, 2**7 + 2]
print(tf.add(x, y))

# (6) broadcasting NumPy arrays of different shapes
import numpy as np

x = np.ones(6).reshape(1, 2, 1, 3)
y = np.ones(6).reshape(2, 1, 3, 1)
print(tf.add(x, y).shape.as_list())
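
As a quick sanity check, the overloaded + operator and tf.add produce the same tensor; a minimal sketch (the expected values follow from ordinary element-wise arithmetic):

# Sanity check: the + operator on tensors performs the same element-wise addition as tf.add
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[2, 4], [6, 8]])
tf.debugging.assert_equal(a + b, tf.add(a, b))  # raises if the two results differ
print(tf.add(a, b))  # element-wise sums: [[3, 6], [9, 12]]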

RESULT:

Thus the program to implement simple vector addition in TensorFlow was executed successfully.
Ex No: 2
IMPLEMENT A REGRESSION MODEL IN KERAS
Date:

AIM:
To implement a regression model in Keras

ALGORITHM:

Step 1: Import the necessary packages and libraries
Step 2: Load the data
Step 3: Define the model architecture
Step 4: Compile the model
Step 5: Train the model
Step 6: Evaluate the model
PROGRAM:

#1. Import the necessary packages and libraries:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

#2. Load the data:

data = np.loadtxt('data.csv', delimiter=',')


X_train = data[:, :-1]
y_train = data[:, -1]

#3. Define the model architecture:

model = Sequential()
model.add(Dense(16, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='linear'))

#4. Compile the model:

model.compile(loss='mean_squared_error', optimizer='adam')

#5. Train the model:

model.fit(X_train, y_train, epochs=100, batch_size=32, validation_split=0.2)


#6. Evaluate the model:

mse = model.evaluate(X_train, y_train)


print('MSE: %.2f' % mse)
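
The listing above assumes a local data.csv file with the target in the last column. If such a file is not available, the same pipeline can be exercised on synthetic data; a minimal sketch (the linear relationship and noise level below are assumptions chosen only for illustration):

# Hypothetical synthetic data: 500 samples, 3 features, noisy linear target
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(500)
# The same model definition, compile, fit and evaluate steps from the program
# above can then be run unchanged on this X_train and y_train.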

RESULT:

Thus the program to implement a regression model in Keras was executed successfully.


Ex No: 3(a)
IMPLEMENT A PERCEPTRON IN TENSORFLOW
Date:

AIM:
To implement a perceptron in TensorFlow

ALGORITHM:

Step 1: Import the packages for the program.
Step 2: Define the perceptron with input x, weights, and training data.
Step 3: Define the perceptron's training process.
Step 4: Test the perceptron's predictions.
Step 5: Print the output and then stop the program.

PROGRAM:

import tensorflow as tf
# Define the perceptron function
def perceptron(X, weights, bias):
    sum = tf.matmul(X, weights) + bias  # linear combination
    output = tf.math.sign(sum)  # activation function
    return output

# Define training data
X_train = tf.constant([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=tf.float32)
y_train = tf.constant([[-1], [-1], [-1], [1]], dtype=tf.float32)

# Define the perceptron's variables
weights = tf.Variable(tf.zeros([2, 1]), dtype=tf.float32)
bias = tf.Variable(tf.zeros([1]), dtype=tf.float32)

# Define the perceptron's training process
learning_rate = 0.1
epochs = 20
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = perceptron(X_train, weights, bias)
        loss = tf.reduce_mean(tf.square(y_train - y_pred))
    gradients = tape.gradient(loss, [weights, bias])
    weights.assign_sub(learning_rate * gradients[0])
    bias.assign_sub(learning_rate * gradients[1])
    print('Epoch:', epoch + 1, ' Loss:', loss.numpy())

# Test the perceptron's predictions
X_test = tf.constant([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=tf.float32)
y_test = tf.constant([[-1], [-1], [-1], [1]], dtype=tf.float32)
y_pred = perceptron(X_test, weights, bias)
print('Predictions:', y_pred.numpy())
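
Note that the gradient of tf.math.sign is zero almost everywhere, so the tape-based update above may leave the weights at their initial values. A minimal sketch of the classical perceptron learning rule, shown as an assumed alternative rather than part of the prescribed program, updates the weights directly from misclassified samples:

# Classical perceptron rule: for each misclassified sample, nudge the weights toward it
w = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
for epoch in range(20):
    for i in range(4):
        xi = X_train[i:i + 1]                        # shape (1, 2)
        yi = y_train[i, 0]                           # scalar label (-1 or +1)
        pred = tf.sign(tf.matmul(xi, w) + b)[0, 0]
        if pred != yi:
            w.assign_add(learning_rate * yi * tf.transpose(xi))
            b.assign_add(learning_rate * tf.reshape(yi, [1]))
print('Perceptron-rule predictions:', tf.sign(tf.matmul(X_train, w) + b).numpy().ravel())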

RESULT:

Thus the program to implement a perceptron in TensorFlow was executed successfully.


Ex No: 3(b)
IMPLEMENT A PERCEPTRON IN KERAS
Date:

AIM :
To implement a perceptron in Keras

ALGORITHM:
Step 1: Start the program.
Step 2: Create the sequential model.
Step 3: Add a single dense layer with a single output.
Step 4: Add an activation layer with a sigmoid activation function.
Step 5: Compile and train the model.
Step 6: Evaluate the model and then stop the program.

PROGRAM:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
# create a sequential model
model = Sequential()
# add a single dense layer with a single output and input_dim of 2
model.add(Dense(units=1, input_dim=2))
# add an activation layer with a sigmoid activation function
model.add(Activation('sigmoid'))
# compile the model
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
# train the model on sample data (AND truth table)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([0, 0, 0, 1])
model.fit(X, Y, epochs=1000, batch_size=4, verbose=0)
# evaluate the model on test data
loss, accuracy = model.evaluate(X, Y, verbose=0)
print("Accuracy: ", accuracy)

RESULT:

Thus the program to implement a perceptron in Keras was executed successfully.


Ex No: 4(a)
IMPLEMENT A FEED-FORWARD NETWORK IN TENSORFLOW
Date:

AIM :
To implement a Feed-Forward network in TensorFlow

ALGORITHM:

Step 1: Load the dataset for the program.
Step 2: Normalize the data in the dataset.
Step 3: Define the model.
Step 4: Define the loss function and optimizer.
Step 5: Compile the model.
Step 6: Train and evaluate the model.

PROGRAM:

import tensorflow as tf
# Load dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Normalize data
x_train, x_test = x_train / 255.0, x_test / 255.0
# Define model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)])
# Define loss function and optimizer
loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
# Compile the model
model.compile(optimizer=optimizer,loss=loss_fn,metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=5)
# Evaluate the model
model.evaluate(x_test, y_test, verbose=2)
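
Because the final Dense(10) layer returns raw logits (hence from_logits=True in the loss), class probabilities are not produced directly; a minimal sketch that wraps the trained model with a softmax layer to obtain them:

# Wrap the trained model with a Softmax layer to convert logits into probabilities
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
print(probability_model(x_test[:1]).numpy())  # class probabilities for the first test image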

RESULT:

Thus the program to implement a Feed-Forward network in TensorFlow was executed successfully.


Ex No: 4(b)
IMPLEMENT A FEED-FORWARD NETWORK IN KERAS
Date:

AIM :
To implement a Feed-Forward network in Keras

ALGORITHM:

Step 1: Import the packages.
Step 2: Generate the dummy data.
Step 3: Create a sequential model.
Step 4: Create the different layers with appropriate inputs and activation functions.
Step 5: Compile and train the model.
Step 6: Predict the data and stop the program.

PROGRAM:
# implement a feed-forward neural network in Keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# Generate some dummy data
X = np.random.rand(100, 10) # 100 samples with 10 features each
y = np.random.randint(2, size=100) # Binary labels (0 or 1)
# Create a sequential model
model = Sequential()
# Add layers to the model
model.add(Dense(units=64, activation='relu', input_dim=10)) # Input layer with 10 input features
model.add(Dense(units=32, activation='relu')) # Hidden layer with 32 units
model.add(Dense(units=1, activation='sigmoid')) # Output layer with 1 unit and sigmoid activation
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X, y, epochs=10, batch_size=32)
# Make predictions
sample_input = np.random.rand(1, 10) # Input for a single sample
prediction = model.predict(sample_input)
print("Prediction:", prediction)

RESULT:

Thus the program to implement a Feed-Forward network in Keras was executed successfully.


Ex No: 5(a)
IMPLEMENT AN IMAGE CLASSIFIER USING CNN IN TENSORFLOW
Date:

AIM:
To implement an image classifier using CNN in TensorFlow

ALGORITHM:

Step 1: Import the packages.
Step 2: Load the dataset.
Step 3: Normalize the pixel values between 0 and 1.
Step 4: Define the model using CNN layers.
Step 5: Compile the model with an appropriate loss function, optimizer and metrics.
Step 6: Train and evaluate the model, then plot the training and validation accuracy.

PROGRAM:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load the dataset


(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values between 0 and 1


train_images, test_images = train_images / 255.0, test_images / 255.0

# Define the model using CNN

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

# Compile the model with appropriate loss function, optimizer and metrics

model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])

# Train the model on the training set

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Evaluate the model on the test set


test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)

# Plot the training and validation accuracy

plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='test accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
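
To see a single prediction in human-readable form, the logits can be mapped to CIFAR-10 class names; a minimal sketch (the class-name list follows the standard CIFAR-10 label ordering):

# Map the arg-max of the logits to the corresponding CIFAR-10 class name
import numpy as np

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
logits = model.predict(test_images[:1])
print('Predicted class:', class_names[int(np.argmax(logits))])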

RESULT:

Thus the program to implement an image classifier using CNN in TensorFlow was executed successfully.
Ex No: 5(b)
IMPLEMENT AN IMAGE CLASSIFIER USING CNN IN KERAS
Date:

AIM:
To implement an image classifier using CNN in Keras.
ALGORITHM:

Step 1: Import the necessary packages.
Step 2: Load the dataset.
Step 3: Normalize the pixel values between 0 and 1.
Step 4: Define the model using CNN layers.
Step 5: Compile the model with an appropriate loss function, optimizer and metrics.
Step 6: Train and evaluate the model.

PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Load the dataset
(train_images, train_labels), (test_images, test_labels) = keras.datasets.cifar10.load_data()
# Normalize pixel values between 0 and 1
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0
# Define the model using CNN
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)
])
# Compile the model with appropriate loss function, optimizer and metrics
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Train the model on the training set
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
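
Predicted classes can be compared against the ground-truth labels by taking the arg-max of the logits; a minimal sketch:

# Compare arg-max predictions with the true labels for a few test images
import numpy as np

logits = model.predict(test_images[:5])
print("Predicted:", np.argmax(logits, axis=1))
print("Actual:   ", test_labels[:5].ravel())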

RESULT:

Thus the program to implement an image classifier using CNN in Keras was executed successfully.
Ex No: 9
PERFORM SENTIMENT ANALYSIS USING RNN
Date:

AIM:
To perform sentiment analysis using an RNN.
ALGORITHM:
Step 1: Import the packages.
Step 2: Set the hyperparameters and load the dataset.
Step 3: Pad the sequences to make them the same length.
Step 4: Define and compile the model.
Step 5: Train and evaluate the model.

PROGRAM:

import keras
from keras.datasets import imdb
from keras.layers import Dense, Embedding, SimpleRNN
from keras.models import Sequential
from keras.preprocessing import sequence

# Set hyperparameters

max_features = 20000 # Number of words to consider as features


maxlen = 100 # Maximum length of each sequence
batch_size = 32 # Number of samples per gradient update
epochs = 10 # Number of epochs to train the model

# Load the dataset

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences to make them same length

x_train = sequence.pad_sequences(x_train, maxlen=maxlen)


x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

# Define the model

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))

# Compile the model

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])

# Train the model


hist = model.fit(x_train, y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 validation_data=(x_test, y_test))

# Evaluate the model

loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)


print(f'Test loss: {loss}, Test accuracy: {acc}')
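
Individual reviews can be scored with the trained model; a minimal sketch (treating a probability above 0.5 as positive sentiment, which is an assumed convention rather than part of the manual):

# Score a few padded test reviews and map probabilities to sentiment labels
probs = model.predict(x_test[:3]).ravel()
for p in probs:
    print('positive' if p > 0.5 else 'negative', f'(p={p:.2f})')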

RESULT:
Thus the program to perform sentiment analysis using an RNN was executed successfully.
Ex No: 10(a)
IMPLEMENT AN LSTM BASED AUTOENCODER IN TENSORFLOW
Date:

AIM:
To implement an LSTM based autoencoder in TensorFlow.
ALGORITHM:
Step 1: Import the necessary packages.
Step 2: Set the hyperparameters.
Step 3: Define the autoencoder model.
Step 4: Compile the autoencoder.
Step 5: Train the autoencoder using random data.
Step 6: Test the autoencoder and then stop.

PROGRAM:

import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

# Set hyperparameters

timesteps = 10 # Number of timesteps in each sequence


input_dim = 1 # Dimensionality of each input element
latent_dim = 64 # Dimensionality of the latent space representation
batch_size = 32
epochs = 10

# Define the autoencoder model

inputs = Input(shape=(timesteps, input_dim))


encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

autoencoder = Model(inputs, decoded)

# Compile the autoencoder

autoencoder.compile(optimizer='adam', loss='mse')

# Train the autoencoder using random data

import numpy as np

X_train = np.random.random((1000, timesteps, input_dim))


autoencoder.fit(X_train, X_train, batch_size=batch_size, epochs=epochs)
# Test the autoencoder on new data

X_test = np.random.random((10, timesteps, input_dim))


decoded_output = autoencoder.predict(X_test)
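
The quality of the reconstructions can be summarized with a mean squared error; a minimal sketch:

# Compare reconstructions against the original test sequences
reconstruction_mse = np.mean((X_test - decoded_output) ** 2)
print('Reconstruction MSE:', reconstruction_mse)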

RESULT:

Thus the program to implement an LSTM based autoencoder in TensorFlow was executed successfully.


Ex No: 10(b)
IMPLEMENT AN LSTM BASED AUTOENCODER IN KERAS
Date:

AIM:

To implement an LSTM based autoencoder in Keras.

ALGORITHM:
Step 1: Import the packages for the program.
Step 2: Set the hyperparameters.
Step 3: Define the autoencoder model.
Step 4: Compile the autoencoder.
Step 5: Train the autoencoder using random data.
Step 6: Test the autoencoder and then stop the program.

PROGRAM:

from keras.layers import Input, LSTM, RepeatVector


from keras.models import Model

# Set hyperparameters

timesteps = 10 # Number of timesteps in each sequence


input_dim = 1 # Dimensionality of each input element
latent_dim = 64 # Dimensionality of the latent space representation
batch_size = 32
epochs = 10

# Define the autoencoder model

inputs = Input(shape=(timesteps, input_dim))


encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

autoencoder = Model(inputs, decoded)

# Compile the autoencoder

autoencoder.compile(optimizer='adam', loss='mse')

# Train the autoencoder using random data

import numpy as np

X_train = np.random.random((1000, timesteps, input_dim))


autoencoder.fit(X_train, X_train, batch_size=batch_size, epochs=epochs)

# Test the autoencoder on new data


X_test = np.random.random((10, timesteps, input_dim))
decoded_output = autoencoder.predict(X_test)
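
Because the encoder layers are shared with the trained autoencoder, a standalone encoder can be built from the same tensors to inspect the latent representation; a minimal sketch:

# A standalone encoder that reuses the trained LSTM layer to produce latent vectors
encoder = Model(inputs, encoded)
latent_vectors = encoder.predict(X_test)
print(latent_vectors.shape)  # expected: (10, 64), one 64-dimensional code per sequence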

RESULT:

Thus the program to implement an LSTM based autoencoder in Keras was executed successfully.


Ex No: 11
IMAGE GENERATION USING GAN
Date:

AIM:
To implement image generation using a GAN.

ALGORITHM:

Step 1: Import the necessary packages for the program.
Step 2: Define the build_generator model with its layers.
Step 3: Define the build_discriminator model and return the model.
Step 4: Give the necessary inputs (random noise vectors) to the generator.
Step 5: Plot the generated images using matplotlib.
Step 6: Show the results and then stop the program.

PROGRAM:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_generator():
    model = keras.Sequential([
        layers.Dense(7 * 7 * 256, input_shape=(100,)),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 256)),
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same'),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same'),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', activation='tanh'),
    ])
    return model

def build_discriminator():
    model = keras.Sequential([
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=(28, 28, 1)),
        layers.LeakyReLU(alpha=0.2),
        layers.Dropout(0.3),
        layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
        layers.LeakyReLU(alpha=0.2),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1, activation='sigmoid'),
    ])
    return model

generator = build_generator()
discriminator = build_discriminator()

def build_gan(generator, discriminator):
    # Freeze the discriminator while the generator is trained through the combined model
    discriminator.trainable = False
    model = keras.Sequential([generator, discriminator])
    return model

# Generate images from random noise vectors
num_images = 10
noise = np.random.normal(0, 1, (num_images, 100))
generated_images = generator.predict(noise)

# Display the generated images
for i in range(num_images):
    plt.subplot(1, num_images, i + 1)
    plt.imshow(generated_images[i, :, :, 0], cmap='gray')
    plt.axis('off')
plt.show()
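
Note that the generator above has not been trained, so the displayed images are essentially noise, and build_gan is defined but never used. A minimal sketch of how the two networks could be compiled for adversarial training (the binary cross-entropy losses and Adam optimizer are standard DCGAN choices, not something this listing prescribes):

# Compile the discriminator on its own, then the combined GAN with the discriminator frozen
discriminator.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
gan = build_gan(generator, discriminator)
gan.compile(optimizer='adam', loss='binary_crossentropy')
# A full training loop would alternate between training the discriminator on real/fake
# batches and training the GAN (i.e. the generator) on noise labelled as "real".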

RESULT:

Thus the program for image generation using a GAN was executed successfully.
