
Experiment No : 09

**************************************************************************************************

Subject: ML LAB Class: TE Branch: CSE(AIML)


Course Code: CSL604
Aim : Learning and Implementation of Error Backpropagation Perceptron Training
Algorithm.
**************************************************************************************************

Theory :

Backpropagation, also known as "backward propagation of errors", is a method used to train neural networks. Its goal is to reduce the difference between the model's predicted output and the actual output by adjusting the weights and biases in the network.
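Concretely, for a squared-error loss E = \frac{1}{2}(y_{target} - y_{out})^2, each weight is moved a small step against the gradient of the error; this is the standard gradient-descent update, with \eta the learning rate:

w_{i,j} \leftarrow w_{i,j} - \eta \frac{\partial E}{\partial w_{i,j}}

The rest of this section works this update out by hand for a small network.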

Example of Backpropagation in Machine Learning

Let's walk through an example of backpropagation in machine learning. Consider a small network with two inputs x_1 = 0.35 and x_2 = 0.7, two hidden nodes h1 and h2, and one output node O3. The initial weights, used in the calculations below, are w_{1,1} = w_{2,1} = 0.2 into h1, w_{1,2} = w_{2,2} = 0.3 into h2, and w_{1,3} = 0.3, w_{2,3} = 0.9 into O3. Assume the neurons use the sigmoid activation function for the forward and backward pass. The target output is 0.5, and the learning rate is 1.

Forward Propagation

1. Initial Calculation

The weighted sum at each node is calculated using:

a_j = \sum_i (w_{i,j} \cdot x_i)

Where,

 a_j is the weighted sum of all the inputs and weights arriving at node j

 w_{i,j} is the weight connecting the i-th input to the j-th neuron

 x_i is the value of the i-th input

2. Sigmoid Function

The sigmoid function returns a value between 0 and 1, introducing non-linearity into the model.

y_j = \frac{1}{1 + e^{-a_j}}
3. Computing Outputs

At node h1:

a_1 = (w_{1,1} \cdot x_1) + (w_{2,1} \cdot x_2) = (0.2 \times 0.35) + (0.2 \times 0.7) = 0.21

Having calculated a_1, we can now find y_3:

y_j = F(a_j) = \frac{1}{1 + e^{-a_1}}

y_3 = F(0.21) = \frac{1}{1 + e^{-0.21}} = 0.56

Similarly, find the value of y_4 at h2 and of y_5 at O3:

a_2 = (w_{1,2} \cdot x_1) + (w_{2,2} \cdot x_2) = (0.3 \times 0.35) + (0.3 \times 0.7) = 0.315

y_4 = F(0.315) = \frac{1}{1 + e^{-0.315}} \approx 0.59

a_3 = (w_{1,3} \cdot y_3) + (w_{2,3} \cdot y_4) = (0.3 \times 0.56) + (0.9 \times 0.59) = 0.699

y_5 = F(0.699) = \frac{1}{1 + e^{-0.699}} \approx 0.67
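This forward pass is easy to check numerically. Below is a minimal sketch (NumPy assumed available); the text rounds intermediate values to two decimals, so the exact results differ slightly:

import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

x1, x2 = 0.35, 0.7
a1 = 0.2 * x1 + 0.2 * x2   # 0.21
y3 = sigmoid(a1)           # ~0.552
a2 = 0.3 * x1 + 0.3 * x2   # 0.315
y4 = sigmoid(a2)           # ~0.578
a3 = 0.3 * y3 + 0.9 * y4   # ~0.686
y5 = sigmoid(a3)           # ~0.665
print(y3, y4, y5)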

4. Error Calculation

The target output is 0.5, but we obtained 0.67. The error can be calculated with the formula below:

Error_j = y_{target} - y_5

Error = 0.5 - 0.67 = -0.17

Using this error value, we will backpropagate.

Backpropagation

1. Calculating Gradients

The change in each weight is calculated as:

\Delta w_{i,j} = \eta \times \delta_j \times O_i

Where:

 \delta_j is the error term of the downstream unit j,

 O_i is the output of the upstream unit i feeding the weight,

 \eta is the learning rate.

2. Output Unit Error

For O3:

\delta_5 = y_5(1 - y_5)(y_{target} - y_5)

= 0.67 \times (1 - 0.67) \times (-0.17) = -0.0376

3. Hidden Unit Error

For h1:

\delta_3 = y_3(1 - y_3)(w_{1,3} \times \delta_5)

= 0.56 \times (1 - 0.56) \times (0.3 \times -0.0376) \approx -0.0027

For h2:

\delta_4 = y_4(1 - y_4)(w_{2,3} \times \delta_5)

= 0.59 \times (1 - 0.59) \times (0.9 \times -0.0376) \approx -0.0082

4. Weight Updates

For the weights from hidden to output layer:

\Delta w_{2,3} = 1 \times (-0.0376) \times 0.59 = -0.022184

New weight:

w_{2,3}(new) = 0.9 + (-0.022184) = 0.877816

For weights from input to hidden layer:

\Delta w_{1,1} = 1 \times (-0.0027) \times 0.35 = -0.000945

New weight:

w_{1,1}(new) = 0.2 + (-0.000945) = 0.199055

Similarly, the other weights are updated, each as w(new) = w(old) + \eta \times \delta \times input:

 w_{1,2}(new) = 0.3 + 1 \times (-0.0082) \times 0.35 \approx 0.29713

 w_{1,3}(new) = 0.3 + 1 \times (-0.0376) \times 0.56 \approx 0.27894

 w_{2,1}(new) = 0.2 + 1 \times (-0.0027) \times 0.7 \approx 0.19811

 w_{2,2}(new) = 0.3 + 1 \times (-0.0082) \times 0.7 \approx 0.29426
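These delta terms and weight updates can be double-checked with a few lines of Python. A minimal sketch reusing the rounded values from the text (the variable names are ours):

eta = 1.0
y3, y4, y5, target = 0.56, 0.59, 0.67, 0.5
w13, w23 = 0.3, 0.9

d5 = y5 * (1 - y5) * (target - y5)   # ~-0.0376
d3 = y3 * (1 - y3) * (w13 * d5)      # ~-0.0028
d4 = y4 * (1 - y4) * (w23 * d5)      # ~-0.0082

w23_new = w23 + eta * d5 * y4        # ~0.8778
w11_new = 0.2 + eta * d3 * 0.35      # ~0.1990
print(d5, d3, d4, w23_new, w11_new)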

After updating the weights, the forward pass is repeated, yielding approximately:

 y_3 \approx 0.55

 y_4 \approx 0.58

 y_5 \approx 0.66

The error is then recomputed with the new output:

Error = y_{target} - y_5 = 0.5 - 0.66 = -0.16

Since y_5 is still not the target output, the error is backpropagated again, and the cycle of forward pass, error calculation, and weight update repeats. This demonstrates how backpropagation iteratively adjusts the weights, shrinking the error until the network produces the desired output.
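This iterate-until-converged loop is easy to express directly. Below is a minimal sketch of the worked example above; the weight names follow the text, while the 0.01 stopping threshold and the 1000-step cap are our assumptions:

import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

x1, x2, target, eta = 0.35, 0.7, 0.5, 1.0
w11, w21, w12, w22 = 0.2, 0.2, 0.3, 0.3   # input -> hidden
w13, w23 = 0.3, 0.9                        # hidden -> output

for step in range(1000):
    # Forward pass
    y3 = sigmoid(w11 * x1 + w21 * x2)
    y4 = sigmoid(w12 * x1 + w22 * x2)
    y5 = sigmoid(w13 * y3 + w23 * y4)
    error = target - y5
    if abs(error) < 0.01:                  # assumed convergence criterion
        break
    # Backward pass: delta terms as defined in the text
    d5 = y5 * (1 - y5) * error
    d3 = y3 * (1 - y3) * (w13 * d5)
    d4 = y4 * (1 - y4) * (w23 * d5)
    # Weight updates: delta_w = eta * delta * upstream activation
    w13 += eta * d5 * y3
    w23 += eta * d5 * y4
    w11 += eta * d3 * x1
    w21 += eta * d3 * x2
    w12 += eta * d4 * x1
    w22 += eta * d4 * x2

print(step, y5, error)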

Code :

import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the sigmoid's output
def sigmoid_derivative(x):
    return x * (1 - x)

# Neural network with a single hidden layer
class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights randomly in [0, 1)
        self.weights_input_hidden = np.random.rand(input_size, hidden_size)
        self.weights_hidden_output = np.random.rand(hidden_size, output_size)

    # Forward pass
    def forward_pass(self, X):
        # Input to hidden layer
        self.hidden_input = np.dot(X, self.weights_input_hidden)
        self.hidden_output = sigmoid(self.hidden_input)
        # Hidden to output layer
        self.output_input = np.dot(self.hidden_output, self.weights_hidden_output)
        self.output = sigmoid(self.output_input)
        return self.output

    # Backward pass (error backpropagation); the learning rate is implicitly 1
    def backward_pass(self, X, y, output):
        # Output-layer error and delta term
        self.output_error = y - output
        delta_output = self.output_error * sigmoid_derivative(output)
        # Hidden-layer delta: must be computed with the hidden-to-output
        # weights *before* they are updated
        delta_hidden = np.dot(delta_output, self.weights_hidden_output.T) * sigmoid_derivative(self.hidden_output)
        # Weight updates: delta_w = (upstream activation)^T . delta
        self.weights_hidden_output += np.dot(self.hidden_output.T, delta_output)
        self.weights_input_hidden += np.dot(X.T, delta_hidden)

    # Training function
    def train(self, X, y, epochs):
        for epoch in range(epochs):
            output = self.forward_pass(X)       # forward pass
            self.backward_pass(X, y, output)    # backward pass
            # Calculate and print the mean squared error for this epoch
            mse = np.mean(np.square(y - output))
            print(f'Epoch {epoch+1}/{epochs}, Mean Squared Error: {mse}')

# Example usage:
# Input data (4 features per sample), target output (1 output per sample)
X = np.array([[0, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
y = np.array([[0], [1], [1], [0]])  # Target output

# Initialize and train the neural network
input_size = 4
hidden_size = 3
output_size = 1
epochs = 100

nn = NeuralNetwork(input_size, hidden_size, output_size)
nn.train(X, y, epochs)
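After training, it can be instructive to print the network's final predictions for each input row. This check is not part of the original listing:

# Inspect the learned outputs (they should move toward [0, 1, 1, 0])
print(np.round(nn.forward_pass(X), 3))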

Output:-

Epoch 1/100, Mean Squared Error: 0.3494659684777398

Epoch 2/100, Mean Squared Error: 0.3189853011409361

Epoch 3/100, Mean Squared Error: 0.29093550494635323

Epoch 4/100, Mean Squared Error: 0.2705716199108813

Epoch 5/100, Mean Squared Error: 0.25901187295651007

Epoch 6/100, Mean Squared Error: 0.2534997527779032

Epoch 7/100, Mean Squared Error: 0.25100013303060187

Epoch 8/100, Mean Squared Error: 0.2497610944012798

Epoch 9/100, Mean Squared Error: 0.24900413495360807

Epoch 10/100, Mean Squared Error: 0.24841707168951


Epoch 11/100, Mean Squared Error: 0.24787790619613054

Epoch 12/100, Mean Squared Error: 0.24733761373063928

Epoch 13/100, Mean Squared Error: 0.24677439830457354

Epoch 14/100, Mean Squared Error: 0.24617635086290224

Epoch 15/100, Mean Squared Error: 0.24553493104617813

Epoch 16/100, Mean Squared Error: 0.24484251266376245

Epoch 17/100, Mean Squared Error: 0.244091447326119

Epoch 18/100, Mean Squared Error: 0.24327369611522293

Epoch 19/100, Mean Squared Error: 0.2423806775274162

Epoch 20/100, Mean Squared Error: 0.24140320385719663

Epoch 21/100, Mean Squared Error: 0.24033146221768037

Epoch 22/100, Mean Squared Error: 0.239155028228179

Epoch 23/100, Mean Squared Error: 0.2378629126922508

Epoch 24/100, Mean Squared Error: 0.23644364636149756

Epoch 25/100, Mean Squared Error: 0.23488540939547964

Epoch 26/100, Mean Squared Error: 0.2331762117126351

Epoch 27/100, Mean Squared Error: 0.2313041283047088

Epoch 28/100, Mean Squared Error: 0.22925758963859189

Epoch 29/100, Mean Squared Error: 0.22702572143294852

Epoch 30/100, Mean Squared Error: 0.22459872066315137

Epoch 31/100, Mean Squared Error: 0.2219682464951976

Epoch 32/100, Mean Squared Error: 0.21912779755248357

Epoch 33/100, Mean Squared Error: 0.2160730426063176

Epoch 34/100, Mean Squared Error: 0.21280207262946754

Epoch 35/100, Mean Squared Error: 0.20931554955583787

Epoch 36/100, Mean Squared Error: 0.205616740754089

Epoch 37/100, Mean Squared Error: 0.20171144565859783

Epoch 38/100, Mean Squared Error: 0.1976078378754657

Epoch 39/100, Mean Squared Error: 0.19331625755006532

Epoch 40/100, Mean Squared Error: 0.1888489913380021

Epoch 41/100, Mean Squared Error: 0.1842200702746558

Epoch 42/100, Mean Squared Error: 0.17944510164194746

Epoch 43/100, Mean Squared Error: 0.17454113416273642

Epoch 44/100, Mean Squared Error: 0.16952654142345516


Epoch 45/100, Mean Squared Error: 0.16442089990587594

Epoch 46/100, Mean Squared Error: 0.15924483676427084

Epoch 47/100, Mean Squared Error: 0.1540198277510242

Epoch 48/100, Mean Squared Error: 0.1487679352974101

Epoch 49/100, Mean Squared Error: 0.14351148801408084

Epoch 50/100, Mean Squared Error: 0.13827271334160068

Epoch 51/100, Mean Squared Error: 0.13307334298815343

Epoch 52/100, Mean Squared Error: 0.12793421520491802

Epoch 53/100, Mean Squared Error: 0.12287489871348037

Epoch 54/100, Mean Squared Error: 0.11791336066434413

Epoch 55/100, Mean Squared Error: 0.11306569620814821

Epoch 56/100, Mean Squared Error: 0.10834593110637415

Epoch 57/100, Mean Squared Error: 0.10376590228128973

Epoch 58/100, Mean Squared Error: 0.09933521513380808

Epoch 59/100, Mean Squared Error: 0.09506127143391857

Epoch 60/100, Mean Squared Error: 0.09094935794043427

Epoch 61/100, Mean Squared Error: 0.08700278372213958

Epoch 62/100, Mean Squared Error: 0.08322305332658902

Epoch 63/100, Mean Squared Error: 0.07961006324416427

Epoch 64/100, Mean Squared Error: 0.07616231024896357

Epoch 65/100, Mean Squared Error: 0.07287710186131639

Epoch 66/100, Mean Squared Error: 0.06975076109586374

Epoch 67/100, Mean Squared Error: 0.06677881961349887

Epoch 68/100, Mean Squared Error: 0.0639561952253113

Epoch 69/100, Mean Squared Error: 0.06127735130121523

Epoch 70/100, Mean Squared Error: 0.05873643696471062

Epoch 71/100, Mean Squared Error: 0.05632740799654834

Epoch 72/100, Mean Squared Error: 0.054044129139194506

Epoch 73/100, Mean Squared Error: 0.05188045902248514

Epoch 74/100, Mean Squared Error: 0.04983031925816596

Epoch 75/100, Mean Squared Error: 0.04788774941827213

Epoch 76/100, Mean Squared Error: 0.04604694965832631

Epoch 77/100, Mean Squared Error: 0.0443023127052984

Epoch 78/100, Mean Squared Error: 0.04264844683068805


Epoch 79/100, Mean Squared Error: 0.04108019129370288

Epoch 80/100, Mean Squared Error: 0.039592625585671136

Epoch 81/100, Mean Squared Error: 0.03818107364724794

Epoch 82/100, Mean Squared Error: 0.03684110407347915

Epoch 83/100, Mean Squared Error: 0.03556852717414888

Epoch 84/100, Mean Squared Error: 0.034359389621477554

Epoch 85/100, Mean Squared Error: 0.03320996729586668

Epoch 86/100, Mean Squared Error: 0.032116756833482855

Epoch 87/100, Mean Squared Error: 0.031076466286708383

Epoch 88/100, Mean Squared Error: 0.03008600522900017

Epoch 89/100, Mean Squared Error: 0.029142474568347003

Epoch 90/100, Mean Squared Error: 0.028243156277011364

Epoch 91/100, Mean Squared Error: 0.027385503198271717

Epoch 92/100, Mean Squared Error: 0.026567129052176487

Epoch 93/100, Mean Squared Error: 0.025785798730693643

Epoch 94/100, Mean Squared Error: 0.02503941894701944

Epoch 95/100, Mean Squared Error: 0.02432602928323574

Epoch 96/100, Mean Squared Error: 0.023643793664142255

Epoch 97/100, Mean Squared Error: 0.022990992272213507

Epoch 98/100, Mean Squared Error: 0.022366013908620426

Epoch 99/100, Mean Squared Error: 0.02176734879758962

Epoch 100/100, Mean Squared Error: 0.021193581825607835
