Lecture-20 21 22 (ANN)
Mr. Tapan Kumar Dey (MUJ) Artificial Neural Network February 27, 2025 1 / 30
Outline
3 Perceptron
4 Multi-layered perceptron
Introduction
Artificial Neuron
The most fundamental unit of a deep neural network is called an
artificial neuron.
Why is it called a neuron? Where does the inspiration come from?
The inspiration comes from biology (more specifically, from the brain)
Biological neurons = neural cells = neural processing units.
We will first see what a biological neuron looks like.
Biological Neuron
McCulloch-Pitts Neuron
Properties of McCulloch-Pitts Neurons
Binary Model: Each neuron produces a binary output (0 or 1), depending
on whether its input surpasses a threshold.
Input and Weights: Inputs are binary (0 or 1), and each input is associated
with a weight that determines its significance.
Threshold Logic Unit: The neuron activates (outputs 1) if the weighted
sum of inputs exceeds a pre-defined threshold:
\[
\text{Output} =
\begin{cases}
1, & \text{if } \sum_i w_i x_i \ge \text{Threshold} \\
0, & \text{otherwise}
\end{cases}
\]
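As a minimal sketch, the threshold rule above can be coded directly; the function name and the gate thresholds below are illustrative, not part of the slide:

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With two unit-weight inputs, threshold 2 realizes AND and threshold 1 realizes OR.
```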
Perceptron
Frank Rosenblatt, an American psychologist, proposed the classical
perceptron model (1958).
A more general computational model than the McCulloch-Pitts neuron.
Main differences: the introduction of numerical weights for inputs and a
mechanism for learning these weights (a weight-adjustment, or learning,
algorithm).
Inputs are no longer limited to boolean values.
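A sketch of this more general unit, with real-valued inputs, numerical weights, and a bias (names and example values are illustrative):

```python
def perceptron(x, w, b):
    # Weighted sum plus bias, followed by a hard threshold at zero.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Example: weights [1.0, 1.0] with bias -1.5 realize the AND function.
```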
Difference between McCulloch-Pitts Neurons and Perceptron
Perceptron
A perceptron can only solve linearly separable problems
(e.g., AND, OR)
Perceptron Learning Algorithm
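The standard perceptron learning rule updates the weights by w <- w + lr * (t - y) * x after each example; a minimal sketch (function name and hyperparameters are illustrative):

```python
def train_perceptron(data, lr=0.1, epochs=20):
    # data: list of (input vector, binary target) pairs.
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in data:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = t - y
            # Update only when the prediction is wrong (err is -1, 0, or +1).
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn AND from its truth table.
w, b = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
```

For linearly separable data such as AND, the perceptron convergence theorem guarantees this loop settles on a separating weight vector.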
Limitation of perceptron
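The classic limitation is XOR, which is not linearly separable, so no single threshold unit can compute it. A brute-force sketch (the helper and the weight grid are illustrative) finds a unit for AND but none for XOR:

```python
from itertools import product

def realizable(truth_table, grid):
    # Try every (w1, w2, b) on the grid; return True if some single
    # threshold unit reproduces the whole truth table.
    for w1, w2, b in product(grid, repeat=3):
        if all((1 if w1 * x1 + w2 * x2 + b >= 0 else 0) == t
               for (x1, x2), t in truth_table.items()):
            return True
    return False

grid = [i / 4 for i in range(-8, 9)]   # weights and bias from -2.0 to 2.0
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```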
Example
Suppose we want to predict whether a web user will click on an ad for a
refrigerator, given four features:
Recently searched for "refrigerator repair" (assume x1 = 0).
Recently searched for "refrigerator reviews" (assume x2 = 1).
Recently bought a refrigerator (assume x3 = 0).
Has clicked on any ad in the recent past (assume x4 = 0).
These are all binary features (each value is either 0 or 1).
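A sketch of how a perceptron would score this user; the weights and threshold below are hypothetical, since the slide gives only the feature values:

```python
# Feature values from the example; weights and threshold are assumed
# for illustration only -- they are not given in the source.
features = {"searched_repair": 0, "searched_reviews": 1,
            "bought_fridge": 0, "clicked_ad_recently": 0}
weights = {"searched_repair": 0.3, "searched_reviews": 0.4,
           "bought_fridge": -0.5, "clicked_ad_recently": 0.2}
threshold = 0.35

z = sum(weights[name] * value for name, value in features.items())
prediction = 1 if z >= threshold else 0
```

Note the negative weight on "bought_fridge": someone who just bought a refrigerator is presumably less likely to click, which a perceptron can express directly through its numerical weights.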
Introduction to MLP
A Multilayer Perceptron (MLP) is a type of artificial neural network that
consists of multiple layers of neurons, each fully connected to the neurons
in the subsequent layer. It is commonly used for supervised learning tasks
like classification and regression.
Example:
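The layer-by-layer computation described above can be sketched as follows; layer sizes, names, and the sigmoid choice are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, W, b):
    # One fully connected layer: each row of W holds one neuron's weights.
    return [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def mlp_forward(x, layers):
    # layers: list of (W, b) pairs, applied in order (input -> hidden -> output).
    for W, b in layers:
        x = dense(x, W, b)
    return x

# Tiny example: 2 inputs -> 2 hidden neurons -> 1 output neuron.
out = mlp_forward([1.0, 0.0],
                  [([[0.5, 0.5], [0.5, -0.5]], [0.0, 0.0]),
                   ([[1.0, 1.0]], [0.0])])
```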
Comparison between Multilayer Perceptron and Feedforward Neural Network
Definition: A feed-forward neural network is one of the simplest types of artificial
neural networks devised. In this network, information moves in only one direction:
forward, from the input nodes, through the hidden nodes (if any), to the output
nodes.
Feature comparison: Multilayer Perceptron (MLP) vs. Feedforward Neural Network (FNN)

Definition:
  MLP: a type of FNN with only fully connected layers.
  FNN: a broad class of neural networks with unidirectional data flow (no cycles).

Architecture:
  MLP: consists of fully connected layers only.
  FNN: can include fully connected, convolutional, pooling, and other specialized layers.
Feedforward Neural Network
Backpropagation Algorithm
Backpropagation: Worked Example
Network Architecture:
Input layer: 2 neurons (x1, x2).
Hidden layer: 2 neurons (h1, h2) with sigmoid activation.
Output layer: 1 neuron (y) with sigmoid activation.
Given:
Weights:
\[
W_1 = \begin{pmatrix} 0.15 & 0.2 \\ 0.25 & 0.3 \end{pmatrix}, \qquad
W_2 = \begin{pmatrix} 0.4 \\ 0.45 \end{pmatrix}
\]
Biases:
\[
b_1 = \begin{pmatrix} 0.35 \\ 0.35 \end{pmatrix}, \qquad b_2 = 0.6
\]
Hidden Layer:
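The forward pass with these weights can be sketched as follows; the inputs are not specified on the slide, so x1 = 0.05 and x2 = 0.10 below are assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# ASSUMED inputs -- the slide gives only weights and biases.
x1, x2 = 0.05, 0.10

# Hidden layer: rows of W1 feed h1 and h2, each with bias 0.35.
h1 = sigmoid(0.15 * x1 + 0.20 * x2 + 0.35)   # sigmoid(0.3775)
h2 = sigmoid(0.25 * x1 + 0.30 * x2 + 0.35)   # sigmoid(0.3925)

# Output layer: W2 weighs h1 and h2, with bias 0.6.
y = sigmoid(0.40 * h1 + 0.45 * h2 + 0.60)
```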
Example
δ3 = (y − ytarget ) · σ ′ (z3 )
Example
Gradients:
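The output-layer gradients follow from the error term delta_3 = (y - y_target) * sigma'(z3), with sigma'(z) = sigma(z)(1 - sigma(z)). A sketch using the slide's weights; the inputs x1, x2 and the target are not given there, so the values below are assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# ASSUMED inputs and target -- not specified on the slide.
x1, x2, y_target = 0.05, 0.10, 0.01

# Forward pass with the slide's weights and biases.
h1 = sigmoid(0.15 * x1 + 0.20 * x2 + 0.35)
h2 = sigmoid(0.25 * x1 + 0.30 * x2 + 0.35)
z3 = 0.40 * h1 + 0.45 * h2 + 0.60
y = sigmoid(z3)

# Error term at the output, using sigma'(z3) = y * (1 - y).
delta3 = (y - y_target) * y * (1.0 - y)

# Gradient of the loss w.r.t. each output weight is delta3 times the
# incoming hidden activation; w.r.t. the bias it is delta3 itself.
grad_w2 = [delta3 * h1, delta3 * h2]
grad_b2 = delta3
```

Each weight would then be updated as w <- w - lr * grad, with lr the learning rate.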