DWDM Lab1

Title: Back Propagation Algorithm

Objective: To understand the working of a perceptron for predicting the output of an OR gate from its two binary inputs.

Theory

Perceptron: A perceptron is the simplest form of an artificial neural network unit. It takes
multiple binary inputs, applies weights to them, sums the weighted inputs, and then passes
the result through an activation function to produce an output. The perceptron is the
building block of more complex neural network architectures.
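
For instance, a perceptron can compute the OR function directly once suitable weights are chosen. The sketch below uses hand-picked weights and a bias (illustrative assumptions, not values learned by the lab code) together with a step activation:

import numpy as np

# Hand-picked (assumed) weights and bias that realize OR; in practice these are learned.
w = np.array([1.0, 1.0])
b = -0.5

def step(s):
    # Step activation: output 1 if the weighted sum is positive, otherwise 0.
    return 1 if s > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = np.dot(w, np.array(x)) + b   # weighted sum of inputs plus bias
    print(x, "->", step(s))          # prints 0, 1, 1, 1 -- the OR truth table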

Forward Propagation: Forward propagation is the process in which input data is processed
through the neural network layer by layer to generate an output. Each layer performs a
weighted sum of its inputs, applies an activation function, and passes the result to the next
layer. This sequential flow of data through the network constitutes forward propagation.
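
As a rough sketch of this layer-by-layer flow (the layer sizes and the sigmoid activation below are assumptions chosen for illustration, not part of the lab code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Assumed architecture: 2 inputs -> 3 hidden units -> 1 output unit.
W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((3, 1)), np.zeros(1)

x = np.array([0.0, 1.0])              # one input pattern
hidden = sigmoid(x @ W1 + b1)         # weighted sum + activation at the hidden layer
output = sigmoid(hidden @ W2 + b2)    # weighted sum + activation at the output layer
print(output)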

Backward Propagation: Backward propagation, or back propagation, is an optimization algorithm used to minimize the error in a neural network by adjusting the weights. It involves calculating the gradient of the loss function with respect to the weights and updating the weights in the opposite direction of the gradient. This process is repeated iteratively to improve the network's performance.
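
A minimal sketch of this idea for a single sigmoid unit, assuming a squared-error loss (the values below are invented for illustration and are not taken from the lab code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One training example for a single sigmoid unit (assumed values).
x, t = np.array([1.0, 0.0]), 1.0
w, b, alpha = np.array([0.2, -0.1]), 0.0, 0.5

for _ in range(3):
    y = sigmoid(np.dot(w, x) + b)        # forward pass
    delta = (t - y) * y * (1.0 - y)      # error term derived from the gradient of the loss
    w += alpha * delta * x               # adjust weights opposite to the gradient
    b += alpha * delta                   # adjust the bias the same way
    print(round(y, 4))                   # the output moves toward t = 1 on each iteration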

Mathematical Expressions

1) Calculate output
For each input-layer unit j, the output equals its input: Oj = Ij
For each hidden-layer and output-layer unit j, the net input is the weighted sum of the outputs Ok of the previous layer plus the bias θj:
    Ij = Σk wkj Ok + θj
and the unit's output is obtained by applying the activation function f to this net input: Oj = f(Ij).

2) Calculate error
For each output unit k, the error term is
    δk = (tk − yk) f′(y_in k)
where tk is the target output and yk the actual output.
For each hidden unit j, the back-propagated error input is
    δ_in j = Σk δk wjk
and the error information term is
    δj = δ_in j f′(z_in j)

3) Update weights and biases
For each output unit yk, the weight correction term is
    Δwjk = α δk zj
and the bias correction term is
    Δw0k = α δk
Therefore
    wjk(new) = wjk(old) + Δwjk
    w0k(new) = w0k(old) + Δw0k
For each hidden unit zj (j = 1 to p), update its bias and weights (i = 0 to n): the weight correction term is
    Δvij = α δj xi
and the bias correction term is
    Δv0j = α δj
Therefore
    vij(new) = vij(old) + Δvij
    v0j(new) = v0j(old) + Δv0j
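
To make the notation concrete, the sketch below applies the output-layer rules δk = (tk − yk) f′(y_in k) and Δwjk = α δk zj for one sigmoid output unit; the numeric values are assumptions chosen only to illustrate the update:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

alpha = 0.1                          # learning rate
z = np.array([0.6, 0.4])             # hidden-layer outputs z_j (assumed values)
w = np.array([0.3, -0.2])            # weights w_jk into the output unit
w0 = 0.1                             # bias w_0k
t = 1.0                              # target t_k

y_in = np.dot(z, w) + w0             # net input y_in_k
y = sigmoid(y_in)                    # actual output y_k
delta_k = (t - y) * y * (1.0 - y)    # delta_k = (t_k - y_k) * f'(y_in_k) for sigmoid f
dw = alpha * delta_k * z             # weight correction: delta w_jk = alpha * delta_k * z_j
dw0 = alpha * delta_k                # bias correction:   delta w_0k = alpha * delta_k
print(w + dw, w0 + dw0)              # updated weights and bias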

Code:
import numpy as np
import matplotlib.pyplot as plt

class Perceptron:
    def __init__(self, input_size, learning_rate=0.01, epochs=5):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = np.random.rand(input_size)
        self.bias = np.random.rand(1)
        self.errors = []

    def predict(self, inputs):
        # Weighted sum of inputs plus bias, passed through a step activation
        summation = np.dot(inputs, self.weights) + self.bias
        return 1 if summation > 0 else 0

    def train(self, training_inputs, labels):
        for epoch in range(self.epochs):
            total_error = 0
            for inputs, label in zip(training_inputs, labels):
                prediction = self.predict(inputs)
                error = label - prediction
                total_error += abs(error)
                # Perceptron learning rule: adjust weights and bias by the error
                self.weights += self.learning_rate * error * inputs
                self.bias += self.learning_rate * error
            self.errors.append(total_error)

    def plot_errors(self):
        plt.plot(range(1, self.epochs + 1), self.errors, marker='o')
        plt.xlabel('Epoch')
        plt.ylabel('Total Absolute Error')
        plt.title('Perceptron Training Progress')
        plt.show()

# Training data for the OR gate
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 1, 1, 1])

# Create a perceptron with 2 input neurons
perceptron = Perceptron(input_size=2, learning_rate=0.1, epochs=10)

# Train the perceptron
perceptron.train(inputs, labels)

# Plot the training progress
perceptron.plot_errors()

# Test the perceptron
test_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for test_input in test_inputs:
    prediction = perceptron.predict(test_input)
    print(f"Input: {test_input}, Prediction: {prediction}")

OUTPUT:

DISCUSSION:
Backpropagation revolutionized neural network training, enabling efficient parameter
adjustments to minimize prediction errors. Despite challenges like vanishing gradients, it's
crucial for effective deep learning model training. Ongoing research aims to address
limitations and enhance performance.

CONCLUSION:
Backpropagation is pivotal for optimizing neural networks by iteratively adjusting parameters
to minimize prediction errors. Despite challenges, it remains essential for training deep
learning models effectively. Ongoing research seeks to overcome limitations and improve
overall performance.
