Experiment 9
THEORY:-
1. Introduction/Overview:
This experiment aims to provide a deep understanding of the inner workings of neural networks by
implementing a simple feedforward neural network from scratch using NumPy. Students will learn
about fundamental concepts like forward propagation, backpropagation, and gradient descent. This
hands-on experience will demystify the "black box" nature of neural networks.
2. Learning Objectives:
Understand the basic architecture of a feedforward neural network.
Implement forward propagation and activation functions (sigmoid; see the sketch after this list).
Implement backpropagation to calculate gradients.
Implement gradient descent for weight updates.
Train a simple neural network on a small dataset.
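The network class in the Procedure below calls two helper functions, sigmoid and sigmoid_derivative, that the code listing itself does not define. A minimal sketch consistent with that usage is shown here; note that the derivative is written in terms of the sigmoid's output, because the class applies it to values that have already been passed through the sigmoid.
Python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # s is assumed to already be a sigmoid output, i.e. s = sigmoid(x)
    return s * (1 - s)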
3. Procedure:
3.1. Dataset Creation (XOR):
We'll use the XOR problem, a classic example of a problem that is not linearly separable: no single layer of weights can separate its classes, which is why the network needs a hidden layer with a non-linear activation function.
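A minimal sketch of the dataset (the variable names X and y are illustrative, not fixed by the listing below):
Python
import numpy as np

# XOR truth table: the output is 1 exactly when the two inputs differ
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # 4 samples, 2 features each
y = np.array([[0], [1], [1], [0]])              # targets, shape (4, 1)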
3.2. Network Implementation:
Python
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        # Initialize weights randomly
        self.weights1 = np.random.rand(input_size, hidden_size)   # weights between input and hidden layer
        self.weights2 = np.random.rand(hidden_size, output_size)  # weights between hidden and output layer

    def forward(self, X):
        # Forward propagation: input -> hidden -> output, sigmoid at each layer
        self.hidden_layer_output = sigmoid(np.dot(X, self.weights1))
        self.output = sigmoid(np.dot(self.hidden_layer_output, self.weights2))
        return self.output

    def backward(self, X, y, learning_rate):
        # Backpropagation: error and delta at the output layer
        output_error = y - self.output
        output_delta = output_error * sigmoid_derivative(self.output)
        # Propagate the error back to the hidden layer
        hidden_layer_error = output_delta.dot(self.weights2.T)
        hidden_layer_delta = hidden_layer_error * sigmoid_derivative(self.hidden_layer_output)
        # Gradient-descent weight updates (move weights to reduce the error)
        self.weights2 += self.hidden_layer_output.T.dot(output_delta) * learning_rate
        self.weights1 += X.T.dot(hidden_layer_delta) * learning_rate
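A short training sketch tying the pieces together; the hyperparameters here (a 2-4-1 architecture, 10,000 epochs, learning rate 0.1) are illustrative choices rather than values prescribed by this experiment.
Python
# Assumes the NeuralNetwork class and the X, y arrays defined above
nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)

for epoch in range(10000):
    nn.forward(X)                         # forward pass stores layer activations
    nn.backward(X, y, learning_rate=0.1)  # backpropagation and weight update

print(np.round(nn.forward(X), 3))  # predictions should move toward [0, 1, 1, 0]

Because this network has no bias terms, convergence on XOR is not guaranteed for every random initialisation; re-running with a different seed or adding biases is a natural extension (see questions 4 and 5 below).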
RESULT:- Hence, the core components of a neural network (forward propagation, backpropagation, and gradient descent) were implemented from scratch using NumPy.
PRACTICAL ASSIGNMENT:
1. Why is a non-linear activation function (like sigmoid) necessary?
2. How do the weights change during training?
3. What is the effect of the learning rate? Experiment with different learning rates.
4. What are the limitations of this simple neural network?
5. How can this code be extended to handle more complex datasets and network architectures?