DL LAB MANUAL Mugesh

The document outlines various experiments conducted using neural networks, including testing activation functions, solving the XOR problem with a Multilayer Perceptron, recognizing characters and digits from images, utilizing autoencoders for image reconstruction, and building a Convolutional Neural Network for speech recognition. Each experiment includes an aim, algorithm, program code, and results demonstrating successful implementation. Overall, the document showcases the application of different neural network architectures and techniques on various datasets.


EXP.NO: 1
ACCURACY OF VARIOUS ACTIVATION FUNCTIONS
DATE:

AIM :
To test the accuracies of various activation functions.

ALGORITHM :

 Import Libraries: Load TensorFlow, NumPy, and Matplotlib.
 Load MNIST Data: Get training and test data from MNIST.
 Normalize Data: Scale image pixel values to [0, 1].
 Build Model: Create a neural network with Flatten, Dense, Dropout, and output layers.
 Compile Model: Set the optimizer, loss function, and evaluation metric.
 Train Model: Train the model for 8 epochs.
 Evaluate Model: Test the model on unseen data.
 Print Results: Show the loss and accuracy values.
 Plot Chart: Create a bar chart to compare model accuracies.
 Show Chart: Display the bar chart.

PROGRAM :

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
import numpy as np
import matplotlib.pyplot as plt

# Load and preprocess data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define the model with the changed activation function
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='tanh'),  # Changed from 'relu' to 'tanh'
    Dropout(0.2),
    Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=8)

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)

# Print summary
print("\nSummary of results:")
print(f"Loss: {loss}")
print(f"Accuracy: {accuracy}")

# Test accuracies (%) recorded for the four activation/width configurations
neuroacti = ["sigmoid 64", "sigmoid 128", "tanh 64", "tanh 128"]
acc = [97.68, 98.55, 97.56, 98.44]

# Plot accuracy of the models
plt.bar(neuroacti, acc)
plt.title('Accuracy of models')
plt.xlabel('Models')
plt.ylabel('Accuracy (%)')
plt.ylim(97.0, 100.0)
plt.show()
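Note: the bar-chart accuracies above are hard-coded from earlier runs of the four configurations. A minimal sketch for collecting them programmatically, assuming the imports, data, and compile settings of the program above, is:

# Sketch (not part of the original program): train one model per
# activation/width pair and record its test accuracy.
results = {}
for activation in ['sigmoid', 'tanh']:
    for width in [64, 128]:
        m = Sequential([
            Flatten(input_shape=(28, 28)),
            Dense(width, activation=activation),
            Dropout(0.2),
            Dense(10, activation='softmax')
        ])
        m.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
        m.fit(x_train, y_train, epochs=8, verbose=0)
        _, a = m.evaluate(x_test, y_test, verbose=0)
        results[f"{activation} {width}"] = a * 100
print(results)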

OUTPUT :
RESULT :

Thus the accuracies of various activation functions have been studied successfully.

EXP.NO: 2
MLP NEURAL NETWORK TO SOLVE THE XOR PROBLEM
DATE:

AIM :
To solve the given XOR problem using Multilayer Perceptron Neural Network.

ALGORITHM :

 Import TensorFlow and NumPy.
 Define the XOR inputs and outputs.
 Create an MLP with one hidden (ReLU) layer and one output (tanh) layer.
 Compile with the Adam optimizer and binary crossentropy loss.
 Train for 1000 epochs.
 Test predictions on the XOR inputs.
 Print the results.

PROGRAM :
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# XOR truth table: inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))  # Increased neurons from 6 to 8
model.add(Dense(1, activation='tanh'))  # Changed activation from sigmoid to tanh
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)

loss, accuracy = model.evaluate(X, y)
print(f'Accuracy: {accuracy * 100:.2f}%')

predictions = model.predict(X)
predictions_int = [int(round(pred[0])) for pred in predictions]
print('Predictions:', predictions_int)
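To check the trained network against the full truth table, a short verification loop (a sketch reusing X and predictions_int from above) can print each input next to its rounded prediction:

# Compare each XOR input with the model's rounded prediction
for inp, pred in zip(X, predictions_int):
    print(f"Input {inp.tolist()} -> predicted {pred}")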

OUTPUT :

RESULT :

Thus, an MLP was built to solve the XOR problem.


EXP.NO: 3
ARTIFICIAL NEURAL NETWORK TO RECOGNIZE CHARACTERS AND DIGITS FROM IMAGES
DATE:

AIM :
To build an Artificial Neural Network to recognize characters and digits from images.

ALGORITHM :
 Import Libraries: Use Keras and NumPy.
 Load Data: Load the MNIST dataset (28x28 images).
 Preprocess Data: Normalize the inputs to [0, 1].
 Build Model: Add Flatten, Dense (128 neurons, ReLU), Dense (64 neurons, ReLU), and Dense (10 neurons, softmax) layers.
 Compile: Use the Adam optimizer and sparse categorical crossentropy loss.
 Train Model: Train for 5 epochs with a validation split.
 Evaluate Model: Test accuracy on the test set.
 Predict: Predict the digit in an uploaded hand-drawn image and display the result.

PROGRAM :
# Import necessary libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
from google.colab import files
from PIL import Image, ImageOps

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the data
x_train = x_train / 255.0
x_test = x_test / 255.0

# Build the feed-forward network model
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=5, validation_split=0.1)

# Evaluate the model
test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {test_accuracy * 100:.2f}%")

# Save the model
model.save("mnist_ffnn_model.h5")

# Upload and validate with a custom hand-drawn image
print("Upload a hand-drawn digit image (28x28 grayscale, or it will be resized):")
uploaded = files.upload()

for filename in uploaded.keys():
    # Load the uploaded image
    img = Image.open(filename).convert('L')  # Convert to grayscale
    img = ImageOps.invert(img)  # Invert the colors
    img = img.resize((28, 28))  # Resize to 28x28

    # Display the uploaded image
    plt.imshow(img, cmap='gray')
    plt.title("Uploaded Image")
    plt.axis('off')
    plt.show()

    # Preprocess the image
    img_array = np.array(img) / 255.0  # Normalize pixel values
    img_array = img_array.reshape(1, 28, 28)  # Reshape for model input

    # Make a prediction
    prediction = model.predict(img_array)
    predicted_label = np.argmax(prediction)
    print(f"Predicted Digit: {predicted_label}")

OUTPUT :
RESULT :
Thus, an Artificial Neural Network to recognize characters and digits from images was built successfully.
EXP.NO: 4
AUTOENCODERS FOR IMAGE RECONSTRUCTION
DATE:

AIM :
To write a program using autoencoders to analyze images for image reconstruction tasks.

ALGORITHM :
 Import Libraries: Load NumPy, Matplotlib, OpenCV, and Keras modules.
 Load Data: Load the Fashion-MNIST dataset (grayscale 28×28 images).
 Preprocess Data:
1. Resize images to 32×32.
2. Convert grayscale to RGB (3 channels).
3. Normalize pixel values to [0, 1].
 Build Autoencoder Model:
1. Encoder: Apply Conv2D and MaxPooling2D layers for feature extraction.
2. Decoder: Use Conv2D and UpSampling2D layers to reconstruct the images.
 Compile & Train Model:
1. Use the Adam optimizer and binary crossentropy loss.
2. Train for 5 epochs with batch size 256.
 Evaluate & Visualize Results:
1. Predict reconstructed images on the test data.
2. Plot original vs. reconstructed images for comparison.

PROGRAM :
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.datasets import fashion_mnist
import cv2

# Load Fashion-MNIST dataset
(X_train, _), (X_test, _) = fashion_mnist.load_data()

# Resize images from (28x28) to (32x32) and convert grayscale to 3-channel RGB-like format
X_train = np.array([cv2.cvtColor(cv2.resize(img, (32, 32)), cv2.COLOR_GRAY2RGB) for img in X_train])
X_test = np.array([cv2.cvtColor(cv2.resize(img, (32, 32)), cv2.COLOR_GRAY2RGB) for img in X_test])

# Normalize pixel values
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# Now shape is correct: (60000, 32, 32, 3)
print("Train shape:", X_train.shape)
print("Test shape:", X_test.shape)

# Encoder
input_img = Input(shape=(32, 32, 3))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(16, (3, 3), activation='tanh', padding='same')(x)  # Changed activation
encoded = MaxPooling2D((2, 2), padding='same')(x)

# Decoder
x = Conv2D(16, (3, 3), activation='tanh', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)  # Output has 3 channels

# Build autoencoder model
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Train the autoencoder
autoencoder.fit(X_train, X_train, epochs=5, batch_size=256, shuffle=True,
                validation_data=(X_test, X_test))

# Generate decoded images
decoded_imgs = autoencoder.predict(X_test)

# Visualize original and reconstructed images
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # Original images
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(X_test[i])
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Reconstructed images
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i])
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
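As a side note, the encoder half can be pulled out as a standalone model to inspect the compressed representation. A minimal sketch reusing input_img, encoded, and X_test from the program above:

# Build a standalone encoder from the layers defined above
encoder = Model(input_img, encoded)

# For a 32x32x3 input, two 2x2 poolings give an 8x8x16 latent feature map
latent = encoder.predict(X_test[:1])
print("Latent shape:", latent.shape)  # (1, 8, 8, 16)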
OUTPUT:

RESULT :
Thus, a program using autoencoders to analyze images for image reconstruction tasks was built successfully.
EXP.NO: 5
CONVOLUTIONAL NEURAL NETWORK (CNN) FOR SPEECH RECOGNITION
DATE:

AIM :
To build a Convolutional Neural Network for a speech recognition task.

ALGORITHM :
 Import Libraries: Load TensorFlow, NumPy, and the required Keras modules.
 Generate Data: Create random dummy data with shape (num_samples, input_length, 1).
 Preprocess Data: One-hot encode labels and split the data into training and testing sets.
 Build CNN Model:
 Add Conv1D layers with tanh and LeakyReLU activations.
 Use MaxPooling1D, BatchNormalization, Flatten, Dense, and Dropout layers.
 Compile Model: Use the Adam optimizer and categorical cross-entropy loss.
 Train Model: Train for 7 epochs with a batch size of 32.
 Evaluate Model: Compute accuracy on the test data.
 Make Predictions: Predict and display the first 10 classes.

PROGRAM :
import tensorflow as tf
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import LeakyReLU

def generate_dummy_data(num_samples=1000, input_length=16000, num_classes=10):
    X = np.random.rand(num_samples, input_length, 1)
    y = np.random.randint(0, num_classes, num_samples)
    y = to_categorical(y, num_classes=num_classes)
    return X, y

def build_cnn_model(input_shape, num_classes):
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=input_shape),
        tf.keras.layers.Conv1D(64, kernel_size=3, activation='tanh'),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv1D(128, kernel_size=3),
        tf.keras.layers.LeakyReLU(alpha=0.1),  # Using LeakyReLU instead of ReLU
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='tanh'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation='softmax')
    ])
    return model

num_samples = 1000
input_length = 16000
num_classes = 10
X, y = generate_dummy_data(num_samples, input_length, num_classes)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

input_shape = (input_length, 1)
model = build_cnn_model(input_shape=input_shape, num_classes=num_classes)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

epochs = 7  # Changed from 10 to 7
batch_size = 32
print("Training the model...")
history = model.fit(X_train, y_train, validation_split=0.2, epochs=epochs,
                    batch_size=batch_size)

print("Evaluating the model...")
test_loss, test_accuracy = model.evaluate(X_test, y_test, verbose=2)
print(f"Test Accuracy: {test_accuracy * 100:.2f}%")

print("Making predictions on test data...")
predictions = model.predict(X_test)
predicted_classes = np.argmax(predictions, axis=1)
print("First 10 Predictions:")
print(predicted_classes[:10])
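The program trains on random dummy waveforms. To feed a real recording through the same model, the audio must be decoded and padded or trimmed to input_length samples. A minimal sketch, where the file name speech.wav is an assumption for illustration and not part of the original manual:

import tensorflow as tf
import numpy as np

# Hypothetical input file (assumption): a 16 kHz mono WAV recording
audio_bytes = tf.io.read_file("speech.wav")

# decode_wav pads or trims the signal to desired_samples, matching input_length
waveform, sample_rate = tf.audio.decode_wav(audio_bytes,
                                            desired_channels=1,
                                            desired_samples=16000)

# waveform has shape (16000, 1); add a batch axis to get (1, 16000, 1)
batch = tf.expand_dims(waveform, axis=0)
prediction = model.predict(batch)  # model from the program above
print("Predicted class:", np.argmax(prediction))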
OUTPUT :

RESULT :

Thus, a Convolutional Neural Network for speech recognition was built successfully.
