DWDM Lab 1
Objective: To understand the working of a perceptron for predicting the output of an OR gate from each combination of its two binary inputs.
Theory
Perceptron: A perceptron is the simplest form of an artificial neural network unit. It takes
multiple binary inputs, applies weights to them, sums the weighted inputs, and then passes
the result through an activation function to produce an output. The perceptron is the
building block of more complex neural network architectures.
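For a concrete picture, the short sketch below evaluates a perceptron on the OR gate using hand-picked weights and bias (illustrative values only; the training code later in this report learns its own parameters):

import numpy as np

# Illustrative perceptron for the OR gate.
# The weights and bias are chosen by hand for demonstration,
# not learned by the code in the Code section.
w = np.array([1.0, 1.0])   # one weight per input
b = -0.5                   # bias (threshold term)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = np.dot(x, w) + b       # weighted sum of inputs plus bias
    y = 1 if s > 0 else 0      # step activation function
    print(x, "->", y)          # matches the OR truth table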
Forward Propagation: Forward propagation is the process in which input data is processed
through the neural network layer by layer to generate an output. Each layer performs a
weighted sum of its inputs, applies an activation function, and passes the result to the next
layer. This sequential flow of data through the network constitutes forward propagation.
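The layer-by-layer flow can be sketched as follows; the two-layer network, its sigmoid activation, and the random weight matrices are purely illustrative and are not part of the lab code:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative forward pass through one hidden layer and one output layer.
x  = np.array([1.0, 0.0])      # input layer: O_j = I_j
W1 = np.random.rand(2, 3)      # input -> hidden weights (placeholder values)
b1 = np.random.rand(3)         # hidden-layer biases
W2 = np.random.rand(3, 1)      # hidden -> output weights
b2 = np.random.rand(1)         # output-layer bias

h = sigmoid(x @ W1 + b1)       # hidden layer: weighted sum + activation
y = sigmoid(h @ W2 + b2)       # output layer: weighted sum + activation
print(y)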
Mathematical Expressions
1) Calculate the output
For the input layer: O_j = I_j
For the hidden layer and the output layer:
I_j = Σ_k O_k * w_kj + θ_j,   O_j = f(I_j)
where f is the activation function and θ_j is the bias of unit j.
2) Calculate the error
For the output layer:
δ_k = (t_k − y_k) * f'(y_in_k)
For the hidden layer, the error propagated back from the output layer is:
δ_in_j = Σ_k δ_k * w_jk
and the error information term is calculated as:
δ_j = δ_in_j * f'(z_in_j)
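To make these terms concrete, here is a small numeric sketch assuming a sigmoid activation f; the target, net inputs, and weight values are illustrative and do not come from the lab code:

import numpy as np

def f(x):                 # sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

def f_prime(x):           # derivative of the sigmoid
    s = f(x)
    return s * (1.0 - s)

# Illustrative values for one output unit k and one hidden unit j.
t_k, y_in_k = 1.0, 0.4                      # target and net input of output unit
y_k = f(y_in_k)                             # actual output
delta_k = (t_k - y_k) * f_prime(y_in_k)     # output-layer error term

w_jk, z_in_j = 0.3, 0.2                     # weight j->k and net input of hidden unit
delta_in_j = delta_k * w_jk                 # error reaching hidden unit j
delta_j = delta_in_j * f_prime(z_in_j)      # hidden-layer error information term
print(delta_k, delta_j)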
Code:
import numpy as np
import matplotlib.pyplot as plt

class Perceptron:
    def __init__(self, input_size, learning_rate=0.01, epochs=5):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = np.random.rand(input_size)   # random initial weights
        self.bias = np.random.rand(1)               # random initial bias
        self.errors = []                             # total error per epoch

    def predict(self, inputs):
        # Weighted sum of inputs plus bias, passed through a step activation.
        summation = np.dot(inputs, self.weights) + self.bias
        return 1 if summation > 0 else 0

    def train(self, training_inputs, labels):
        for epoch in range(self.epochs):
            total_error = 0
            for inputs, label in zip(training_inputs, labels):
                prediction = self.predict(inputs)
                error = label - prediction
                total_error += abs(error)
                # Perceptron learning rule: adjust weights and bias by the error.
                self.weights += self.learning_rate * error * inputs
                self.bias += self.learning_rate * error
            self.errors.append(total_error)

    def plot_errors(self):
        plt.plot(range(1, self.epochs + 1), self.errors, marker='o')
        plt.xlabel('Epoch')
        plt.ylabel('Total Absolute Error')
        plt.title('Perceptron Training Progress')
        plt.show()

# OR-gate truth table: the output is 1 unless both inputs are 0.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 1, 1, 1])
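The original listing stops after defining the training data; a minimal driver along the following lines can be used to exercise the class (the learning rate and epoch count shown here are illustrative choices, not values specified in the listing):

# Train the perceptron on the OR-gate data and inspect its predictions.
perceptron = Perceptron(input_size=2, learning_rate=0.1, epochs=10)
perceptron.train(inputs, labels)
for x in inputs:
    print(x, "->", perceptron.predict(x))
perceptron.plot_errors()   # plot the total absolute error per epoch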
OUTPUT:
DISCUSSION:
Backpropagation revolutionized neural network training by enabling efficient, gradient-based parameter adjustments that minimize prediction error. In this experiment only the simpler perceptron learning rule is needed: the OR gate is linearly separable, so the perceptron's weights and bias can converge to a correct decision boundary. Despite challenges such as vanishing gradients, backpropagation remains crucial for training deep models effectively, and ongoing research aims to address its limitations and enhance performance.
CONCLUSION:
In this lab, a single perceptron was trained with an error-driven weight-update rule to predict the output of the OR gate from its two inputs. Backpropagation generalizes the same idea, iteratively adjusting the parameters of multi-layer networks to minimize prediction error; it remains essential for training deep learning models effectively, and ongoing research continues to address its limitations and improve overall performance.