
EXPERIMENT NO. :- 9

AIM :- Implementing a Neural Network from Scratch

THEORY:-

1. Introduction/Overview:
This experiment aims to provide a deep understanding of the inner workings of neural networks by
implementing a simple feedforward neural network from scratch using NumPy. Students will learn
about fundamental concepts like forward propagation, backpropagation, and gradient descent. This
hands-on experience will demystify the "black box" nature of neural networks.
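
For reference, the computation built in this experiment can be summarised with a minimal sketch (assuming a squared-error loss, which is what the backward pass in the code below effectively minimises; η denotes the learning rate):

LaTeX
h = \sigma(X W_1), \qquad \hat{y} = \sigma(h W_2), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}
L = \tfrac{1}{2}\,\lVert y - \hat{y} \rVert^2, \qquad W \leftarrow W - \eta\,\frac{\partial L}{\partial W}

Forward propagation evaluates the first line; backpropagation applies the chain rule to obtain the gradient of L with respect to each weight matrix; gradient descent is the final update rule.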

Learning Objectives:
Understand the basic architecture of a feedforward neural network.
Implement forward propagation and activation functions (sigmoid).
Implement backpropagation to calculate gradients.
Implement gradient descent for weight updates.
Train a simple neural network on a small dataset.

2. Materials and Equipment:


Computer with Python 3.7+ installed.
Jupyter Notebook or similar IDE.
NumPy library (install using pip install numpy).
A small sample dataset (we'll use a simple XOR dataset).

3. Procedure:
3.1. Dataset Creation (XOR):
We'll use the XOR problem, a classic example of a dataset that is not linearly separable and therefore cannot be learned without a hidden layer and a non-linear activation function.
Python
import numpy as np

# Input features (XOR inputs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Target outputs (XOR outputs)
y = np.array([[0], [1], [1], [0]])

3.2. Neural Network Implementation:


Python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Note: x is expected to already be a sigmoid output s, so this returns s * (1 - s)
    return x * (1 - x)

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        # Initialize weights randomly
        self.weights1 = np.random.rand(input_size, hidden_size)   # Weights between input and hidden layer
        self.weights2 = np.random.rand(hidden_size, output_size)  # Weights between hidden and output layer

    def forward(self, X):
        self.hidden_layer_output = sigmoid(np.dot(X, self.weights1))
        self.output = sigmoid(np.dot(self.hidden_layer_output, self.weights2))
        return self.output

    def backward(self, X, y, learning_rate):
        # Calculate error at the output layer
        output_error = y - self.output
        output_delta = output_error * sigmoid_derivative(self.output)

        # Propagate the error back to the hidden layer
        hidden_layer_error = output_delta.dot(self.weights2.T)
        hidden_layer_delta = hidden_layer_error * sigmoid_derivative(self.hidden_layer_output)

        # Update weights (gradient descent step)
        self.weights2 += self.hidden_layer_output.T.dot(output_delta) * learning_rate
        self.weights1 += X.T.dot(hidden_layer_delta) * learning_rate

    def train(self, X, y, epochs, learning_rate):
        for _ in range(epochs):
            self.forward(X)
            self.backward(X, y, learning_rate)

# Create a neural network with 2 inputs, 2 hidden units, and 1 output
nn = NeuralNetwork(2, 2, 1)

# Train the neural network
nn.train(X, y, epochs=10000, learning_rate=0.1)

# Test the trained network
print("Predictions after training:")
for i in range(len(X)):
    print(f"Input: {X[i]}, Prediction: {nn.forward(np.array([X[i]]))}")
3.3. Explanation of the Code:
sigmoid() and sigmoid_derivative(): These functions implement the sigmoid activation function and its derivative. Note that sigmoid_derivative() expects a value that is already a sigmoid output, which is why it is written as x * (1 - x).
NeuralNetwork class:
__init__(): Initializes the weight matrices with random values.
forward(): Performs forward propagation through the hidden and output layers.
backward(): Performs backpropagation to compute the error terms and update the weights by gradient descent.
train(): Repeats the forward and backward passes for a given number of epochs.
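
The train() loop above runs silently. A minimal sketch of how convergence might be monitored, assuming the NeuralNetwork class, X, and y defined above (the mean-squared-error metric and the 1000-epoch printing interval are illustrative choices, not part of the original code):

Python
nn_monitored = NeuralNetwork(2, 2, 1)
for epoch in range(10000):
    output = nn_monitored.forward(X)
    nn_monitored.backward(X, y, learning_rate=0.1)
    if epoch % 1000 == 0:
        # Mean squared error between targets and current predictions
        mse = np.mean((y - output) ** 2)
        print(f"Epoch {epoch}, MSE: {mse:.4f}")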

RESULT :- Hence, the core components of a neural network (forward propagation, backpropagation, and gradient descent) were implemented from scratch and trained on the XOR dataset.
PRACTICAL ASSIGNMENT:
1. Why is a non-linear activation function (like sigmoid) necessary?
2. How do the weights change during training?
3. What is the effect of the learning rate? Experiment with different learning rates (a starting-point sketch is given after this list).
4. What are the limitations of this simple neural network?
5. How can this code be extended to handle more complex datasets and network architectures?
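
For question 3, one possible starting point is the sketch below, which trains a fresh network for each of several learning rates and reports the final mean squared error (the particular rates and the metric are illustrative assumptions; the NeuralNetwork class and XOR data from above are reused):

Python
for lr in [0.01, 0.1, 0.5, 1.0]:
    nn_lr = NeuralNetwork(2, 2, 1)
    nn_lr.train(X, y, epochs=10000, learning_rate=lr)
    # Final mean squared error on the training data for this learning rate
    final_mse = np.mean((y - nn_lr.forward(X)) ** 2)
    print(f"learning_rate={lr}: final MSE = {final_mse:.4f}")

Because the weights are initialised randomly, results vary between runs; averaging over several runs per learning rate gives a fairer comparison.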
