DL Experiments
EAEPE20
PROJECT FILE
2024-2025
Branch: ECAM
Section: 2
S.No. TITLE DATE SIGN
1. Experiment 1: Backpropagation
2. Experiment 2: RMSProp (Root Mean Square Propagation)
3. Experiment 3: CNN (Convolutional Neural Network) for MNIST Classification
4. Experiment 4: RNN (Recurrent Neural Network) for Time Series Prediction
5. Experiment 5: U-Net for Image Segmentation
Experiment 1
TOPIC: Backpropagation
THEORY:
Backpropagation (backward propagation of errors) is a fundamental learning method that computes the gradient of the loss function with respect to every weight in the network by applying the chain rule layer by layer; the weights are then updated in the direction that reduces the error.
Key Concepts:
● Weighted input: z = w·x + b
● Activation: a = σ(z)
● The error is propagated backwards through these equations with the chain rule to obtain the gradient for each weight.
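As a small illustration of these concepts, the sketch below computes the gradient of a squared-error loss for a single sigmoid neuron with the chain rule and checks it numerically (the scalar values w, b, x, y are illustrative assumptions, not part of the experiment):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Illustrative single neuron: z = w*x + b, a = sigmoid(z), loss L = 0.5*(a - y)^2
w, b, x, y = 0.5, 0.1, 1.0, 1.0

def loss(w):
    a = sigmoid(w * x + b)
    return 0.5 * (a - y) ** 2

# Chain rule: dL/dw = (a - y) * σ'(z) * x, with σ'(z) = a * (1 - a)
a = sigmoid(w * x + b)
analytic = (a - y) * a * (1 - a) * x

# Numerical check with a central finite difference
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(analytic, numeric)  # the two values should agree closely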
Simple Experiment:
Let's implement a simple neural network with one hidden layer to learn the
XOR function.
XOR Problem
The XOR function is a classic problem that a single perceptron cannot solve, because its outputs are not linearly separable; a hidden layer is required.
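For reference, the XOR truth table used as the training data in the code below:

x1  x2 | y
 0   0 | 0
 0   1 | 1
 1   0 | 1
 1   1 | 0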
CODE:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is already a sigmoid output, so the derivative is x * (1 - x)
    return x * (1 - x)

# Input dataset (XOR inputs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Output dataset
y = np.array([[0],
              [1],
              [1],
              [0]])

# Hyperparameters
learning_rate = 0.1
epochs = 10000

# Random initialisation (assumed architecture: 2 inputs -> 2 hidden units -> 1 output)
hidden_weights = np.random.uniform(size=(2, 2))
hidden_bias = np.random.uniform(size=(1, 2))
output_weights = np.random.uniform(size=(2, 1))
output_bias = np.random.uniform(size=(1, 1))

# Training loop
for epoch in range(epochs):
    # Forward pass
    hidden_layer_input = np.dot(X, hidden_weights) + hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_input)
    output_layer_input = np.dot(hidden_layer_output, output_weights) + output_bias
    predicted_output = sigmoid(output_layer_input)

    # Backpropagation
    # Output layer error
    error = y - predicted_output
    output_delta = error * sigmoid_derivative(predicted_output)
    # Hidden layer error, propagated back through the output weights
    hidden_delta = output_delta.dot(output_weights.T) * sigmoid_derivative(hidden_layer_output)

    # Update weights
    output_weights += hidden_layer_output.T.dot(output_delta) * learning_rate
    output_bias += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
    hidden_weights += X.T.dot(hidden_delta) * learning_rate
    hidden_bias += np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate

print("Predictions after training:\n", predicted_output)
OUTPUT:
Experiment 2
TOPIC: RMSProp (Root Mean Square Propagation)
THEORY:
RMSprop is an adaptive learning rate optimization algorithm designed to
address the limitations of vanilla Stochastic Gradient Descent (SGD). It
adjusts the learning rate for each parameter based on the magnitude of
recent gradients.
Why Use RMSprop?
Vanilla SGD applies one global learning rate to every parameter, which can be too large where gradients are big and too small where they are tiny. RMSprop keeps a running average of each parameter's squared gradients and scales that parameter's step accordingly, giving stable, per-parameter step sizes.
Simple Experiment:
● RMSprop (Root Mean Square Propagation) adapts the learning rate
per parameter.
● It divides the gradient by a running average of its recent magnitude.
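Before the Keras experiment, here is a minimal from-scratch sketch of the RMSprop update rule (the toy objective f(theta) = theta², the decay rho = 0.9 and the other constants are assumptions chosen only for illustration):

import numpy as np

def rmsprop_step(theta, grad, avg_sq, lr=0.01, rho=0.9, eps=1e-8):
    # Running average of the squared gradient
    avg_sq = rho * avg_sq + (1 - rho) * grad ** 2
    # Divide the step by the root of that average (per-parameter scaling)
    theta = theta - lr * grad / (np.sqrt(avg_sq) + eps)
    return theta, avg_sq

# Toy example: minimise f(theta) = theta^2, whose gradient is 2*theta
theta, avg_sq = 5.0, 0.0
for _ in range(500):
    theta, avg_sq = rmsprop_step(theta, 2 * theta, avg_sq)
print(theta)  # moves towards the minimum at 0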
CODE:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

# Assumed dataset: the XOR problem from Experiment 1 (the original data is not shown)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small assumed model: one hidden layer, sigmoid output for binary labels
model = Sequential([Dense(8, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
optimizer = RMSprop(learning_rate=0.01)
model.compile(optimizer=optimizer, loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=500, verbose=0)
print(model.predict(X).round(3))
OUTPUT:
Experiment 3
TOPIC: CNN (Convolutional Neural Network) for MNIST Classification
THEORY:
CNNs are designed for grid-structured data (images, videos) using local
connectivity and weight sharing to detect spatial hierarchies.
Core Components:
● Convolutional layers: small shared filters slide over the image and produce feature maps.
● Pooling layers: downsample the feature maps and add tolerance to small shifts.
● Fully connected layers: combine the extracted features to classify whole objects.
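As an illustration of local connectivity and weight sharing, the sketch below slides one shared 3×3 filter over a small random image (the image and filter values are assumptions chosen only for illustration):

import numpy as np

def conv2d_valid(image, kernel):
    # One shared kernel (weight sharing) slides over the image; each output
    # value depends only on a local patch of the input (local connectivity)
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(6, 6)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])   # simple vertical-edge detector
print(conv2d_valid(image, kernel).shape)  # (4, 4) feature map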
Limitations:
● Needs large labelled datasets and substantial compute to train well.
● Not inherently invariant to large rotations or changes in scale.
Simple Experiment:
Train a CNN to recognize handwritten digits (0-9) from the MNIST dataset.
The CNN learns hierarchical features (edges → curves → digits) through
convolutional and pooling layers, achieving high accuracy (~99%).
CODE:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Load and normalise MNIST (28x28 grayscale digits)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train
model.fit(x_train, y_train, epochs=5, validation_split=0.1)

# Evaluate
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy:", test_acc)
OUTPUT:
Experiment 4
TOPIC: RNN (Recurrent Neural Network) for Time Series Prediction.
THEORY:
RNNs process sequential data (time series, text) by maintaining a hidden
state that captures temporal dependencies.
Mathematical Formulation:
1. Hidden state: h_t = tanh(W_hh·h_(t-1) + W_xh·x_t + b_h)
2. Output: y_t = W_hy·h_t + b_y
(A single-step sketch of these equations follows the applications below.)
Applications:
● Time-series forecasting.
● Speech recognition.
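A minimal sketch of one recurrent step, using the formulation above with small assumed sizes (the weight values are random placeholders, not trained parameters):

import numpy as np

# One recurrent step: h_t = tanh(W_hh·h_(t-1) + W_xh·x_t + b_h), y_t = W_hy·h_t + b_y
hidden_size, input_size = 4, 3
rng = np.random.default_rng(0)
W_hh = rng.normal(size=(hidden_size, hidden_size))
W_xh = rng.normal(size=(hidden_size, input_size))
W_hy = rng.normal(size=(1, hidden_size))
b_h, b_y = np.zeros(hidden_size), np.zeros(1)

h_prev = np.zeros(hidden_size)        # previous hidden state
x_t = rng.normal(size=input_size)     # current input
h_t = np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)   # new hidden state
y_t = W_hy @ h_t + b_y                            # output
print(h_t, y_t)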
Limitations:
● Vanishing and exploding gradients make long-range dependencies hard to learn.
● Processing is inherently sequential, so training is slow on long sequences.
Simple Experiment:
Train a SimpleRNN to predict the next value of a synthetic one-dimensional time series from a window of the previous 10 values.
CODE:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

# Generate a synthetic time series (assumed here to be a noisy sine wave)
def generate_data(n=1000):
    t = np.arange(n)
    y = np.sin(0.02 * t) + 0.1 * np.random.randn(n)
    return y

y = generate_data()

# Prepare sequences: each input is a window of 10 past values, the target is the next value
def create_dataset(data, window=10):
    X, Y = [], []
    for i in range(len(data)-window):
        X.append(data[i:i+window])
        Y.append(data[i+window])
    return np.array(X), np.array(Y)

X, Y = create_dataset(y)
X = X.reshape(-1, 10, 1)

model = Sequential([
    SimpleRNN(32, input_shape=(10, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')

# Train
model.fit(X, Y, epochs=10, batch_size=32, verbose=0)

# Predict the next value for the last 10 windows
predictions = model.predict(X[-10:])
print(predictions.flatten())
OUTPUT:
Experiment 5
TOPIC: U-Net for Image Segmentation
THEORY:
Image segmentation assigns a class label to each pixel (e.g., tumor in MRI
scans). U-Net is a CNN architecture with skip connections for precise
localization.
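As a tiny illustration of per-pixel labelling (the 4×4 sizes and label values are assumptions for illustration only), an image and its segmentation mask share the same spatial shape, with one class label stored per pixel:

import numpy as np

image = np.random.rand(4, 4)             # grayscale pixel intensities
mask = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])           # 0 = background, 1 = object
print(image.shape, mask.shape)            # both (4, 4): one label per pixel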
U-Net Architecture:
● Encoder (contracting path): repeated convolution and max-pooling blocks that capture context while reducing resolution.
● Bottleneck: the deepest convolutional block between encoder and decoder.
● Decoder (expanding path): upsampling blocks whose feature maps are concatenated with the matching encoder features (skip connections) to recover precise localization.
Limitations:
● Requires pixel-level annotated masks, which are expensive to produce.
● Memory-intensive for large, high-resolution images.
Simple Experiment:
Train a small U-Net so that every pixel of an input image is assigned to one of two classes (object vs. background).
CODE:
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Concatenate
from tensorflow.keras.models import Model
import numpy as np

# Assumed input size, since the original dataset is not shown
inputs = Input((64, 64, 1))

# Encoder
c1 = Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
p1 = MaxPooling2D((2, 2))(c1)
c2 = Conv2D(32, (3, 3), activation='relu', padding='same')(p1)
p2 = MaxPooling2D((2, 2))(c2)

# Bottleneck
b = Conv2D(64, (3, 3), activation='relu', padding='same')(p2)

# Decoder with skip connections
u1 = UpSampling2D((2, 2))(b)
u1 = Concatenate()([u1, c2])
c3 = Conv2D(32, (3, 3), activation='relu', padding='same')(u1)
u2 = UpSampling2D((2, 2))(c3)
u2 = Concatenate()([u2, c1])
c4 = Conv2D(16, (3, 3), activation='relu', padding='same')(u2)

# Per-pixel binary mask output (assumed two-class setup)
outputs = Conv2D(1, (1, 1), activation='sigmoid')(c4)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Synthetic stand-in data (assumed): random images with random binary masks
X_train = np.random.rand(32, 64, 64, 1)
Y_train = np.random.randint(0, 2, (32, 64, 64, 1)).astype("float32")
X_test = np.random.rand(8, 64, 64, 1)
Y_test = np.random.randint(0, 2, (8, 64, 64, 1)).astype("float32")

# Train
model.fit(X_train, Y_train, epochs=10, batch_size=8,
          validation_data=(X_test, Y_test))

# Predict
pred_mask = model.predict(X_test[0:1])
print("Predicted mask shape:", pred_mask.shape)
OUTPUT: