Final DL

The document outlines a series of practical exercises focused on neural networks using Python and PyTorch, covering topics such as basic neural network construction, binary classification, sentiment analysis, regression tasks, and autoencoders. Each practical includes code examples, training loops, and conclusions summarizing the learning outcomes. The document also includes a rubric for evaluating the exercises based on various criteria.

Practical – 1

Aim: To learn neural network fundamentals and construct an initial network using Python and NumPy. Utilize the contemporary deep learning framework PyTorch to develop multi-layer neural networks and analyze real-world data.

Program:

1. Using NumPy

import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of sigmoid (expects the sigmoid output as input)
def sigmoid_derivative(x):
    return x * (1 - x)

# Input and output (XOR problem)
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
y = np.array([[0],
              [1],
              [1],
              [0]])

# Initialize weights randomly
np.random.seed(1)
weights_1 = np.random.rand(2, 4)
weights_2 = np.random.rand(4, 1)

# Training the network
for epoch in range(10000):
    # Forward pass
    layer1 = sigmoid(np.dot(X, weights_1))
    output = sigmoid(np.dot(layer1, weights_2))

    # Backpropagation
    error = y - output
    d_output = error * sigmoid_derivative(output)
    d_layer1 = d_output.dot(weights_2.T) * sigmoid_derivative(layer1)

    # Update weights
    weights_2 += layer1.T.dot(d_output)
    weights_1 += X.T.dot(d_layer1)

print("Final output:")
print(output)

2. Using PyTorch

import torch
import torch.nn as nn
import torch.optim as optim

# Dataset (same XOR problem)
X = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y = torch.tensor([[0], [1], [1], [0]], dtype=torch.float32)

# Define the model
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.hidden = nn.Linear(2, 4)
        self.output = nn.Linear(4, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.hidden(x))
        x = self.sigmoid(self.output(x))
        return x

model = SimpleNN()

# Loss and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Training loop
for epoch in range(10000):
    # Forward pass
    outputs = model(X)
    loss = criterion(outputs, y)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 1000 == 0:
        print(f'Epoch {epoch}, Loss: {loss.item():.4f}')

# Final prediction
with torch.no_grad():
    predicted = model(X).round()
    print("\nPredicted outputs:")
    print(predicted)

Output:

Conclusion: We built a tiny neural network using only NumPy to understand how neurons connect and learn. It showed how the forward pass and backpropagation work when implemented manually. This is a good starting point for beginners learning neural network basics.
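
As an added sanity check (not part of the original program), the trained NumPy network's outputs can be thresholded at 0.5 and compared against the XOR targets. This is a minimal sketch assuming the output and y arrays from the NumPy listing above are still in scope.

# Added check (assumes output and y from the NumPy training loop above)
import numpy as np

predictions = (output > 0.5).astype(int)   # threshold the sigmoid outputs
accuracy = np.mean(predictions == y)       # fraction of correct XOR predictions
print("Thresholded predictions:\n", predictions)
print(f"Training accuracy: {accuracy:.2f}")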

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 2
Aim: To design a neural network (NN) model with a single hidden layer for
addressing classification problems.

Program:

import numpy as np

# Sigmoid activation
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of sigmoid (expects the sigmoid output as input)
def sigmoid_derivative(x):
    return x * (1 - x)

# Binary classification dataset [Study Hours, Sleep Hours]
X = np.array([[1, 0],
              [2, 1],
              [3, 1],
              [4, 2],
              [5, 3]])
# Labels: 0 = fail, 1 = pass
y = np.array([[0], [0], [1], [1], [1]])

np.random.seed(42)
input_size = 2
hidden_size = 4
output_size = 1

weights_1 = np.random.randn(input_size, hidden_size)
weights_2 = np.random.randn(hidden_size, output_size)

for epoch in range(10000):
    # Forward pass
    layer1 = sigmoid(np.dot(X, weights_1))
    output = sigmoid(np.dot(layer1, weights_2))

    # Backpropagation
    error = y - output
    d_output = error * sigmoid_derivative(output)
    d_layer1 = d_output.dot(weights_2.T) * sigmoid_derivative(layer1)

    # Update weights (missing in the original listing; without this step the network cannot learn)
    weights_2 += layer1.T.dot(d_output)
    weights_1 += X.T.dot(d_layer1)

    if epoch % 1000 == 0:
        loss = np.mean(np.square(error))
        print(f"Epoch {epoch}, Loss: {loss:.4f}")

print("\nFinal predictions:")
print(np.round(output, 2))

Output:

Conclusion:

We created a neural network with one hidden layer to classify data. This model learned the patterns and predicted correct classes. It's a simple introduction to how deeper networks work.
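
To see how the trained weights could be used for inference, the following added sketch scores a new, hypothetical student. It assumes the weights_1, weights_2, and sigmoid names from the listing above; the input values are made up for illustration.

# Hypothetical new sample: 3.5 study hours, 2 sleep hours
# (assumes weights_1, weights_2 and sigmoid() from the training code above)
import numpy as np

new_x = np.array([[3.5, 2]])
hidden = sigmoid(np.dot(new_x, weights_1))
prob = sigmoid(np.dot(hidden, weights_2))
print(f"Predicted pass probability: {prob[0, 0]:.2f}")
print("Predicted class:", int(prob[0, 0] > 0.5))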

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 3
Aim: Design a neural network for classifying movie reviews (binary classification) using the IMDB dataset.
Program:

import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import IMDB
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence

# Tokenizer
tokenizer = get_tokenizer("basic_english")

# Build vocabulary
def yield_tokens(data_iter):
    for label, text in data_iter:
        yield tokenizer(text)

train_data = list(IMDB(split='train'))
vocab = build_vocab_from_iterator(yield_tokens(train_data), specials=["<pad>"])
vocab.set_default_index(vocab["<pad>"])

# Pipelines
def encode(text):
    return torch.tensor(vocab(tokenizer(text)), dtype=torch.long)

def label_to_num(label):
    return 1 if label == 'pos' else 0

# Collate function: pad variable-length reviews and collect labels
def collate(batch):
    texts = [encode(text) for label, text in batch]
    texts = pad_sequence(texts, batch_first=True, padding_value=vocab["<pad>"])
    labels = torch.tensor([label_to_num(label) for label, text in batch], dtype=torch.float32)
    return texts, labels

# DataLoader
train_loader = DataLoader(train_data, batch_size=16, shuffle=True, collate_fn=collate)

# Simple model
class SentimentNet(nn.Module):
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.embedding(x).mean(1)  # Average word embeddings
        x = self.fc(x)
        return self.sigmoid(x).squeeze()

# Initialize
model = SentimentNet(len(vocab), embed_dim=64)
loss_fn = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(3):
    total_loss = 0
    for texts, labels in train_loader:
        preds = model(texts)
        loss = loss_fn(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")

Output:
Conclusion:
We built a binary classifier for IMDB reviews (positive or negative). The
network learned the meaning of words using embedding layers. It showed
how deep learning handles text classification.
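
As an added illustration of how the trained classifier could be used, the sketch below scores a new review. It assumes the model and encode names from the listing above; the review text is made up for illustration.

# Hypothetical inference on a new review
# (assumes model, encode() and torch from the listing above)
model.eval()
review = "This movie was absolutely wonderful"
with torch.no_grad():
    tokens = encode(review).unsqueeze(0)   # shape: (1, seq_len)
    prob = model(tokens).item()            # sigmoid output in [0, 1]
print(f"Positive probability: {prob:.2f}")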

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 4
Aim: To implement a multilayer perceptron (MLP) model for prediction tasks, such as house prices (Boston Housing Price dataset).

Program:
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.datasets import load_boston  # Note: removed in scikit-learn 1.2+, requires an older version
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load dataset
boston = load_boston()
X = boston.data
y = boston.target

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert to tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32).view(-1, 1)
y_test = torch.tensor(y_test, dtype=torch.float32).view(-1, 1)

# Define MLP model
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.hidden = nn.Linear(13, 64)
        self.relu = nn.ReLU()
        self.output = nn.Linear(64, 1)

    def forward(self, x):
        x = self.relu(self.hidden(x))
        return self.output(x)

model = MLP()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
# Training loop
for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    outputs = model(X_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
    if (epoch+1) % 10 == 0:
        print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')

# Testing
model.eval()
predictions = model(X_test).detach().numpy()
print("\nSample Predictions (first 5):")
print(predictions[:5].flatten())

Output:

Conclusion:

We used an MLP (multi-layer neural network) to predict house prices. The model learned relationships between house features and price. It's a good start for regression tasks using neural networks.
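
The listing only prints a few sample predictions; an added evaluation sketch, assuming the model, criterion, X_test, and y_test names from above, would report the test-set error:

# Added evaluation step (assumes model, criterion, X_test, y_test and torch from above)
model.eval()
with torch.no_grad():
    test_preds = model(X_test)
    test_mse = criterion(test_preds, y_test).item()
print(f"Test MSE: {test_mse:.2f}")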

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 5
Aim: To develop a conventional feedforward neural network on the MNIST dataset using (a) the Sequential class and (b) the Model class API.
Program:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# MNIST data
transform = transforms.ToTensor()
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)

# Define model using Sequential
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28*28, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10)
)

# Loss and optimizer


criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training with the Sequential model
for epoch in range(3):
    total_loss = 0
    for images, labels in train_loader:
        preds = model(images)
        loss = criterion(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")
# Define model using custom class
class FFNN(nn.Module):
    def __init__(self):
        super(FFNN, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(28*28, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = FFNN()

# Loss and optimizer


criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training with the Model class API
for epoch in range(3):
    total_loss = 0
    for images, labels in train_loader:
        preds = model(images)
        loss = criterion(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")
Output:

Conclusion:

We trained a simple feedforward network on handwritten digits (MNIST), first using the Sequential class and then using the Model class API. This showed two ways to build models easily in PyTorch.
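
Neither listing evaluates on the MNIST test split. The added sketch below loads it and reports accuracy for whichever of the two models is currently bound to model, assuming the names from the listings above.

# Added evaluation sketch (assumes model, datasets, transforms and torch from above)
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1000)

correct, total = 0, 0
model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images)
        predicted = outputs.argmax(dim=1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f"Test accuracy: {100 * correct / total:.2f}%")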

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 6
Aim: To implement Auto Encoders for Dimensionality Reduction

Program:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Load MNIST data
transform = transforms.ToTensor()
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=128, shuffle=True)

# Define Autoencoder
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # Encoder: compress from 784 -> 32
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28*28, 128),
            nn.ReLU(),
            nn.Linear(128, 32)
        )
        # Decoder: decompress from 32 -> 784
        self.decoder = nn.Sequential(
            nn.Linear(32, 128),
            nn.ReLU(),
            nn.Linear(128, 28*28),
            nn.Sigmoid()  # Output between 0 and 1
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

model = Autoencoder()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Train Autoencoder
for epoch in range(5):
    total_loss = 0
    for images, _ in train_loader:  # labels are not needed
        outputs = model(images)
        loss = criterion(outputs, images.view(-1, 28*28))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")

Output:

Conclusion:

We built an autoencoder to compress and decompress data. It learned a smaller representation of the data without losing important information. Autoencoders are powerful for feature reduction and noise removal.
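
Since the aim is dimensionality reduction, a natural follow-up (not in the original listing) is to use only the encoder to obtain the 32-dimensional codes. A minimal sketch, assuming the model and train_data names from above:

# Added sketch: extract 32-dimensional codes with the trained encoder
# (assumes model, train_data and torch from the listing above)
model.eval()
with torch.no_grad():
    images = torch.stack([train_data[i][0] for i in range(10)])  # first 10 digits
    codes = model.encoder(images)                                # shape: (10, 32)
print("Compressed representation shape:", codes.shape)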

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 7
Aim: Build a Convolutional Neural Network (CNN) for MNIST handwritten digit classification.

Program:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Load MNIST dataset
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())

train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
test_loader = DataLoader(test_data, batch_size=1000)

# Simple CNN model
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)  # (B, 1, 28, 28) -> (B, 8, 28, 28)
        self.pool = nn.MaxPool2d(2, 2)             # (B, 8, 28, 28) -> (B, 8, 14, 14)
        self.fc = nn.Linear(8 * 14 * 14, 10)       # Final layer

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        x = x.view(-1, 8 * 14 * 14)
        return self.fc(x)

model = SimpleCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training (the epoch/batch loop is missing in the original listing; restored here with an assumed 3 epochs)
for epoch in range(3):
    for images, labels in train_loader:
        outputs = model(images)
        loss = loss_fn(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")

# Testing
correct = 0
total = 0
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f"Test Accuracy: {100 * correct / total:.2f}%")

Output:

Conclusion:

We designed a basic CNN to classify handwritten digits. CNNs are good at understanding images by detecting patterns. This model achieved high accuracy even with simple layers.
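
As an added illustration (assuming the model and test_data names from the listing above), a single test image can be classified like this:

# Added sketch: classify one test image (assumes model, test_data and torch from above)
model.eval()
image, label = test_data[0]                 # image shape: (1, 28, 28)
with torch.no_grad():
    logits = model(image.unsqueeze(0))      # add batch dimension -> (1, 10)
    predicted = logits.argmax(dim=1).item()
print(f"True label: {label}, predicted: {predicted}")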

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 8
Aim: To perform hyperparameter tuning with Long Short-Term Memory
(LSTM) networks using time series data.

Program:

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

# Generate sine wave data


x = np.linspace(0, 100, 1000)
y = np.sin(x)

# Create sequences
def create_seq(data, seq_len):
    xs, ys = [], []
    for i in range(len(data) - seq_len):
        xs.append(data[i:i+seq_len])
        ys.append(data[i+seq_len])
    return np.array(xs), np.array(ys)

seq_len = 20
X, Y = create_seq(y, seq_len)

# Train/test split
train_size = int(len(X) * 0.8)
X_train, X_test = X[:train_size], X[train_size:]
Y_train, Y_test = Y[:train_size], Y[train_size:]

# Convert to tensors
X_train = torch.tensor(X_train, dtype=torch.float32).unsqueeze(-1)
Y_train = torch.tensor(Y_train, dtype=torch.float32).unsqueeze(-1)
X_test = torch.tensor(X_test, dtype=torch.float32).unsqueeze(-1)
Y_test = torch.tensor(Y_test, dtype=torch.float32).unsqueeze(-1)

# Define LSTM Model
class LSTM(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        # LSTM layer (missing in the original listing; input_size=1 for the scalar sine values)
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

# Hyperparameter tuning
hidden_sizes = [8, 16]
learning_rates = [0.01, 0.001]

for hidden in hidden_sizes:
    for lr in learning_rates:
        print(f"\nTraining with hidden_size={hidden}, lr={lr}")
        model = LSTM(hidden_size=hidden)
        criterion = nn.MSELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)

        # Train
        for epoch in range(20):
            model.train()
            out = model(X_train)
            loss = criterion(out, Y_train)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"Final Train Loss: {loss.item():.4f}")

        # Test
        model.eval()
        with torch.no_grad():
            pred = model(X_test)
            test_loss = criterion(pred, Y_test)
        print(f"Test Loss: {test_loss.item():.4f}")

Output:
Conclusion:
We built an LSTM model for time series prediction and tuned its settings. We
tested different hidden sizes and learning rates. This helped find the best
model configuration to predict future values.
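
Since matplotlib is already imported in the listing, a quick visual check is natural. The added sketch below assumes the model, X_test, and Y_test names from above (it plots whichever configuration was trained last).

# Added sketch: compare LSTM predictions with the true sine values
# (assumes model, X_test, Y_test, torch and matplotlib.pyplot as plt from above)
model.eval()
with torch.no_grad():
    pred = model(X_test).squeeze().numpy()

plt.plot(Y_test.squeeze().numpy(), label="Actual")
plt.plot(pred, label="Predicted")
plt.legend()
plt.title("LSTM sine-wave prediction on the test split")
plt.show()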

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 9
Aim: To train a Neural Network on the MNIST dataset and employ
SHAP (SHapley Additive exPlanations) to interpret its predictions.
Program:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import shap
import matplotlib.pyplot as plt

# 1. Load MNIST dataset
transform = transforms.ToTensor()
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_data, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=True)

# 2. Define a simple Neural Network
class SimpleNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28*28, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        x = x.view(-1, 28*28)  # Flatten
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = SimpleNN()
# 3. Train the model (a single pass over the training data)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

model.train()
for images, labels in train_loader:
    outputs = model(images)
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("Training Done")

# 4. Explain a prediction with SHAP


model.eval()
image, label = next(iter(test_loader))

background = image.repeat(100, 1, 1, 1) # Background data

explainer = shap.DeepExplainer(model, background)


shap_values = explainer.shap_values(image)

# 5. Plot
shap.image_plot(shap_values, image.numpy())

Output:

Conclusion:
After training a simple MNIST classifier, we used SHAP to explain predictions.
SHAP showed which pixels influenced the model's decision. This made the
neural network’s thinking process easier to understand.
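
Since SHAP explains a particular prediction, it helps to also print what that prediction was. The added sketch below assumes the model, image, and label tensors from the listing above.

# Added sketch: compare the explained image's prediction with its true label
# (assumes model, image, label and torch from the listing above)
with torch.no_grad():
    logits = model(image)                  # image shape: (1, 1, 28, 28)
    predicted = logits.argmax(dim=1).item()
print(f"True label: {label.item()}, predicted: {predicted}")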

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:

Practical – 10
Aim: Implement reinforcement learning on any data set.

Program:

import gym
import numpy as np

# 1. Create environment
env = gym.make('FrozenLake-v1', is_slippery=False)  # Non-slippery for simplicity

# 2. Initialize Q-Table
q_table = np.zeros([env.observation_space.n, env.action_space.n])

# 3. Set hyperparameters
learning_rate = 0.8
discount_factor = 0.95
episodes = 1000

# 4. Training loop
for episode in range(episodes):
    state = env.reset()[0]
    done = False

    while not done:
        # Choose the greedy action plus decaying random noise (exploration)
        action = np.argmax(q_table[state, :] +
                           np.random.randn(1, env.action_space.n) * (1.0 / (episode + 1)))
        new_state, reward, done, truncated, _ = env.step(action)

        # Update Q-Table
        q_table[state, action] = q_table[state, action] + learning_rate * (
            reward + discount_factor * np.max(q_table[new_state, :]) - q_table[state, action])

        state = new_state

print("\nTraining complete")

# 5. Greedy rollout with the learned Q-Table (the original listing omits the reset before this loop)
state = env.reset()[0]
done = False
while not done:
    action = np.argmax(q_table[state, :])
    state, reward, done, truncated, _ = env.step(action)
    env.render()

Output:

Conclusion:
We trained a simple agent using Q-learning in the FrozenLake environment. The
agent learned to reach the goal by trial and error. It showed how
reinforcement learning works without needing a dataset.
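
To quantify how well the learned policy performs, the greedy rollout can be repeated and the goal-reaching rate averaged. This is an added sketch, assuming the env and q_table objects from the listing above.

# Added sketch: estimate success rate of the greedy policy
# (assumes env, q_table and numpy as np from the listing above)
successes = 0
eval_episodes = 100
for _ in range(eval_episodes):
    state = env.reset()[0]
    done = truncated = False
    while not (done or truncated):
        action = np.argmax(q_table[state, :])
        state, reward, done, truncated, _ = env.step(action)
    successes += reward  # reward is 1 only when the goal is reached
print(f"Success rate: {successes / eval_episodes:.2f}")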

Rubric wise marks obtained:

Rubrics: Knowledge (2) | Problem Recognition (2) | Logic Building (2) | Completeness and Accuracy (2) | Ethics (2) | Total
Scale: Good (2) / Avg. (1) for each rubric
Marks:
