
Artificial Neural Networks - 3

Dr. Aditya Abhyankar

Recap

ANN: definition
Resemblance with BNN
BNN vs. ANN comparison
ANN Terminology
Neuron Models
Learning!!

ANN Terminology
ANN: a highly simplified model of the BNN
ANN has interconnected processing units
The summing part receives N inputs, weights each value, and computes the weighted sum
Weighted sum → activation value
Positive weight → excitatory input
Negative weight → inhibitory input

Neuron Model
McCulloch-Pitts (Simplistic) Neuron Model

The network function of a neuron is a weighted sum of its input signals plus a bias term.
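
For reference, this net function can be written out explicitly. The symbols below (inputs a_i, weights w_i, and a bias written as w_0 to avoid clashing with the target b used later in these slides) are the notation assumed here:

x = \sum_{i=1}^{M} w_i a_i + w_0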

Neuron Model
The net function is a linear or nonlinear mapping from the input data space to an intermediate feature space (in terms of pattern classification).
The most common form is a hyperplane
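
The corresponding decision boundary, in the same notation, is the set of inputs for which the net function is zero, i.e. the hyperplane

\sum_{i=1}^{M} w_i a_i + w_0 = 0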

Models of Neuron
McCulloch-Pitts Model (MP)
Rosenblatt's Perceptron Model
Adaline Model

MP Model
McCulloch-Pitts (Simplistic) Neuron Model

The network function of a neuron is a weighted sum of its input signals plus a bias term.

MP Model Limitations
Weights fixed
Incapable of learning
Original model allows ONLY:
Binary output steps
Operations at discrete time steps
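
As a quick illustration of these limitations, here is a minimal sketch of an MP-style unit (Python; the weights, bias, and the choice of the AND function are hand-picked assumptions, not values from the slides):

# McCulloch-Pitts-style neuron: fixed weights, binary output, no learning.
# The weights and bias below are chosen by hand so the unit computes
# logical AND of two binary inputs.

def mp_neuron(inputs, weights, bias):
    # weighted sum of inputs plus bias, followed by a hard threshold
    x = sum(w * a for w, a in zip(weights, inputs)) + bias
    return 1 if x >= 0 else 0

weights = [1, 1]   # fixed by design: the model cannot adjust them
bias = -1.5        # fixed by design

for a1 in (0, 1):
    for a2 in (0, 1):
        print(a1, a2, mp_neuron((a1, a2), weights, bias))
# prints 1 only for the input (1, 1), i.e. the AND function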

Perceptron

[Block diagram: sensory units A1 ... AM feed the association unit, whose outputs a1 ... aM are weighted by w1 ... wM in the summing unit to produce the activation x; the output unit applies s = f(x).]

Perceptron

Activation:
x = \sum_{i=1}^{M} w_i a_i

Output:
s = f(x)

Error:
\delta = b - s

Weight change:
\Delta w_i = \eta \, \delta \, a_i = \eta (b - s) a_i

Perceptron - Advantages
The perceptron learning law gives a step-by-step process for adjusting the weights
The perceptron convergence theorem guarantees that this process converges whenever the classes are linearly separable

Widrow's Adaline

[Block diagram: same structure as the perceptron (sensory units A1 ... AM, association-unit outputs a1 ... aM, weights w1 ... wM, summing unit producing the activation x), but with a linear output unit: s = f(x) = x.]

Adaline (ADAptive LINear Element)

Activation:
x = \sum_{i=1}^{M} w_i a_i

Output:
s = f(x) = x

Error:
\delta = b - s = b - x

Weight change:
\Delta w_i = \eta (b - x) a_i

Widrow's Adaline
The analog activation value x is compared directly with the target output b
(equivalently, the output is a linear function of x)
LMS (least mean squares) learning law
Gradient descent algorithm
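
A minimal sketch of this LMS / gradient-descent update (Python; the learning rate and the bipolar AND training set are illustrative assumptions, not values from the slides):

# Adaline / LMS: linear output s = x, weights adjusted by gradient
# descent on the squared error (b - x)^2.

eta = 0.1                                    # assumed learning rate
data = [((1, 1), 1), ((1, -1), -1),
        ((-1, 1), -1), ((-1, -1), -1)]       # bipolar AND patterns

w = [0.0, 0.0]
w0 = 0.0                                     # bias weight

for epoch in range(20):
    for (a1, a2), b in data:
        x = w[0] * a1 + w[1] * a2 + w0       # activation = output
        err = b - x                          # error against the target
        w[0] += eta * err * a1               # LMS weight updates
        w[1] += eta * err * a2
        w0 += eta * err

print(w, w0)  # approaches the least-squares fit to the AND targets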

Heat and Cold Ex.

Hebb Rule
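
The Hebb rule referred to here, written in the notation of the earlier slides (target output b, bias treated as a weight w_0 on a constant input a_0 = 1, learning rate often taken as 1):

\Delta w_i = \eta \, a_i \, b, \qquad \Delta w_0 = \eta \, b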

Examples

AND logic
OR logic
AND-NOT logic
3-D example where Hebb fails
XOR!! (a short Hebb-rule sketch for the AND and XOR cases follows below)
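
A minimal sketch of the Hebb rule on two of these cases, with bipolar inputs and targets (the code and resulting numbers are illustrative, not worked examples copied from the slides):

# Hebb rule over one pass of bipolar patterns:
# delta_w_i = a_i * target, delta_w0 = target (bias input a0 = 1).

def hebb_train(data):
    w, w0 = [0.0, 0.0], 0.0
    for (a1, a2), t in data:
        w[0] += a1 * t
        w[1] += a2 * t
        w0 += t
    return w, w0

AND = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
XOR = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]

print(hebb_train(AND))  # ([2.0, 2.0], -2.0): a valid separating plane for AND
print(hebb_train(XOR))  # ([0.0, 0.0], 0.0): the updates cancel, so Hebb fails here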

Concepts
A discriminating hyperplane is formed by the combination of the summing unit and the output unit.
Targets of 0 are difficult to learn!
Bipolar notation is preferred
The Hebb rule doesn't give the direction of learning!
Concept of a bias!!

Perceptron Learning
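
A minimal runnable sketch of the perceptron learning law from the earlier slide, applied to the bipolar AND problem stated next (zero initial weights, eta = 1, and a sign threshold are the assumptions used here):

# Perceptron learning rule: delta_w_i = eta * (b - s) * a_i,
# applied to bipolar AND until an epoch passes with no errors.

def sign(x):
    return 1 if x >= 0 else -1

data = [((1, 1), 1), ((1, -1), -1),
        ((-1, 1), -1), ((-1, -1), -1)]          # bipolar AND

eta = 1.0
w = [0.0, 0.0]
w0 = 0.0                                        # bias weight

for epoch in range(10):
    changed = False
    for (a1, a2), b in data:
        s = sign(w[0] * a1 + w[1] * a2 + w0)    # output
        if s != b:                              # update only on error
            w[0] += eta * (b - s) * a1
            w[1] += eta * (b - s) * a2
            w0 += eta * (b - s)
            changed = True
    if not changed:                             # converged
        break

print(w, w0)  # for this data and start: [2.0, 2.0], -2.0, a separating plane for AND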

Problem!
AND function, bipolar inputs, bipolar targets,
η = 1, initial bias b = w1 = w2 = 0

Problem!
AND function, binary inputs, bipolar targets,
η = 1, initial weights and bias b = 0, θ = 0.2

Problem!
AND function, binary inputs, bipolar targets,
η = 1, initial weights and bias b = 0, θ = 0.2

Results for the third epoch

Problem!
AND function, binary inputs, bipolar targets,
η = 1, initial weights and bias b = 0, θ = 0.2

Results for the tenth epoch
