Week-3 Module-2 Neural Network
Unit No: 2 Neural Networks
Lecture No: 9
Network Architecture
• Network architecture deals with how neurons form layers and how the layers are interconnected.
• Neural networks are of two types: single-layer or multilayer.
• A layer is formed by combining processing elements.
• A layer is a stage that links the input stage and the output stage.
1. Single-layer Feed-forward Network
• When a layer is formed, inputs are connected to output nodes with various weights; this results in a single-layer feed-forward network.
2. Multilayer Feed-forward Network
• A multilayer feed-forward network contains one or more hidden layers between the input layer and the output layer.
3. Feedback Network
• When outputs are connected back as inputs to the same or a preceding layer, the result is a feedback network.
• If the feedback of the output is directed back to the same layer, it is called lateral feedback.
4. Recurrent Networks
• Recurrent networks are feedback networks with closed loops.
Lecture No: 10
Activation Functions
A general neuron symbol
• The variable net is defined as a scalar product of the weight and input vectors.
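• In symbols: net = w^T x = w_1x_1 + w_2x_2 + … + w_nx_n, where w is the weight vector and x is the input vector.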
Bipolar activation functions
• Notice that as λ → ∞, the continuous function approaches the sgn(net) function.
• "Bipolar" implies that both positive and negative responses of neurons are produced by this activation function.
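• The commonly used bipolar continuous activation is f(net) = 2 / (1 + exp(-λ·net)) - 1, whose hard-limiting counterpart is f(net) = sgn(net).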
Unipolar activation functions
• The neuron as a processing node obtains net and performs the nonlinear operation f(net) through its activation function,
• where λ > 0 is proportional to the neuron gain, determining the steepness of the continuous function f(net) near net = 0.
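• The commonly used unipolar continuous activation is the logistic function f(net) = 1 / (1 + exp(-λ·net)); as λ → ∞ it approaches the unipolar hard-limiting (unit step) function.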
Continuous activation functions for various values of λ:
Hard-limiting (binary) activation functions describe the discrete neuron model, while soft-limiting activation functions describe the continuous neuron model; a sketch of both families follows.
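A minimal Python sketch (not from the original slides; function names are illustrative) of the four activation-function families discussed above:

```python
import math

def unipolar_continuous(net, lam=1.0):
    # Logistic function: output in (0, 1); lam sets steepness near net = 0
    return 1.0 / (1.0 + math.exp(-lam * net))

def bipolar_continuous(net, lam=1.0):
    # Scaled logistic: output in (-1, +1)
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

def unipolar_binary(net):
    # Hard-limiting unipolar activation (unit step)
    return 1 if net >= 0 else 0

def bipolar_binary(net):
    # Hard-limiting bipolar activation, sgn(net)
    return 1 if net >= 0 else -1

# As lam grows, each continuous function approaches its hard-limiting counterpart
for lam in (1, 5, 50):
    print(lam, round(bipolar_continuous(0.5, lam), 4))  # tends to sgn(0.5) = +1
```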
Unit No: 2 Neural Networks
Lecture No: 11
MP Model
McCulloch-Pitts Neuron Model
• The first formal definition of a synthetic neuron model was formulated by McCulloch and Pitts (1943).
• The McCulloch-Pitts model of the neuron is as shown below:
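• The model computes o = 1 if w_1x_1 + w_2x_2 + … + w_nx_n ≥ T and o = 0 otherwise, where the inputs x_i and the output o are binary, the w_i are fixed weights (positive for excitatory, negative for inhibitory connections), and T is the firing threshold (denoted θ in the gate examples below).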
McCulloch-Pitts Neuron Model: Limitations
• ANNs employ a variety of neuron models that have more diversified features than this model.
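• Commonly cited limitations include: inputs and outputs are restricted to binary values; the weights and threshold are fixed in advance rather than learned; and a single unit can realize only linearly separable functions, as the XOR example below shows.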
Logic Gates with MP Neurons
• We shall see explicitly how one can construct simple networks that perform NOT,
AND, and OR.
• It is a well-known result from logic that any logical function can be constructed from these three operations.
• The resulting networks, however, will usually have a much more complex
architecture than a simple Perceptron.
Implementation of Logical NOT, AND, and OR
• Logical OR
x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   1
Network: x1 and x2 connect to the output neuron y with weights 2 and 2; the neuron threshold is θ = 2.
Implementation of Logical NOT, AND, and OR
• Logical AND
x1  x2  y
0   0   0
0   1   0
1   0   0
1   1   1
Network: x1 and x2 connect to the output neuron y with weights 1 and 1; the neuron threshold is θ = 2.
Implementation of Logical NOT, AND, and OR
• Logical NOT
x1  y
0   1
1   0
Network: x1 connects to the output neuron y with weight -1, and a constant bias input of 1 connects with weight 2; the neuron threshold is θ = 2.
Implementation of Logical NOT, AND, and OR
• Logical ANDNOT (y = x1 AND NOT x2)
x1  x2  y
0   0   0
0   1   0
1   0   1
1   1   0
Network: x1 and x2 connect to the output neuron y with weights 2 and -1; the neuron threshold is θ = 2.
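As a quick check (not part of the original slides), the following Python sketch verifies the weight and threshold choices above against each truth table; mp_neuron is an illustrative helper implementing the M-P firing rule:

```python
from itertools import product

def mp_neuron(inputs, weights, theta):
    # McCulloch-Pitts firing rule: output 1 iff the weighted sum reaches the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

# (weights, theta) pairs taken from the slides above
gates = {
    "OR":     lambda x1, x2: mp_neuron((x1, x2), (2, 2), 2),
    "AND":    lambda x1, x2: mp_neuron((x1, x2), (1, 1), 2),
    "ANDNOT": lambda x1, x2: mp_neuron((x1, x2), (2, -1), 2),
}

for name, gate in gates.items():
    # Inputs in order (0,0), (0,1), (1,0), (1,1)
    print(name, [gate(x1, x2) for x1, x2 in product((0, 1), repeat=2)])

# NOT: weight -1 on x, weight 2 on a constant bias input of 1, theta = 2
print("NOT", [mp_neuron((x, 1), (-1, 2), 2) for x in (0, 1)])
```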
Logical XOR
• Logical XOR
x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   0
Network: the weights on x1 and x2 are left as "?"; as shown below, no single choice of weights and threshold can reproduce this table.
Logical XOR
• Each training pattern produces a linear inequality for the output in terms of the
inputs and the network parameters. These can be used to compute the weights
and thresholds.
Finding the Weights Analytically
• We have two weights w1 and w2 and the threshold θ, and for each training pattern we need to satisfy one linear inequality, listed below.
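With the firing rule y = 1 iff w1·x1 + w2·x2 ≥ θ, the four XOR patterns require:
(0, 0) → 0:  0 < θ
(0, 1) → 1:  w2 ≥ θ
(1, 0) → 1:  w1 ≥ θ
(1, 1) → 0:  w1 + w2 < θ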
Finding the Weights Analytically
• The second and third inequalities give w1 + w2 ≥ 2θ, while the fourth requires w1 + w2 < θ; since the first forces θ > 0, the system is contradictory. Hence no single M-P neuron can implement XOR.
Unit No: 2 Neural Networks
Lecture No: 12
Linear Separability
Geometric Interpretation of MP Model
The M-P neuron just learnt a linear decision boundary! The M-P neuron splits the input points into two classes, positive and negative. Positive ones (which output 1) are those that lie ON or ABOVE the decision boundary, and negative ones (which output 0) are those that lie BELOW the decision boundary.
AND Function
In this case, the decision boundary equation is x_1 + x_2 = 2. The only input point that lies ON or ABOVE this boundary is (1, 1), and it is the only one that outputs 1 when passed through the AND-function M-P neuron. It fits! The decision boundary works!
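Checking all four inputs against the boundary: (0, 0) gives 0 < 2, and (0, 1) and (1, 0) each give 1 < 2, so these three points fall below the line and output 0; (1, 1) gives 2 = 2, lies on the line, and outputs 1, matching the AND truth table.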
Linear Separability
• Linear separability is the concept wherein the separation of the input space into regions is based on whether the network response is positive or negative.
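To connect this back to the gate examples, here is a small Python sketch (an illustration, not from the slides) that searches a grid of integer weights and thresholds for a single M-P neuron realizing each gate; it finds solutions for AND and OR but none for XOR, since XOR is not linearly separable:

```python
from itertools import product

def mp_output(x1, x2, w1, w2, theta):
    # Single M-P neuron: fire iff the weighted sum reaches the threshold
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

truth = {
    "AND": {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
    "OR":  {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1},
    "XOR": {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
}

search = range(-3, 4)  # a small integer grid suffices for this demonstration
for name, table in truth.items():
    found = next(
        ((w1, w2, t) for w1, w2, t in product(search, repeat=3)
         if all(mp_output(x1, x2, w1, w2, t) == y for (x1, x2), y in table.items())),
        None,
    )
    print(name, "->", found)  # XOR prints None: no separating line exists
```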