ANN 2a

The document discusses foundational concepts in soft computing, focusing on the McCulloch-Pitts neural model and the perceptron model, which are early artificial neural network designs. It highlights the limitations of the perceptron in handling non-linearly separable data, exemplified by the XOR problem, and introduces the concept of multi-layer perceptrons as a solution. The document serves as a primer on these models, emphasizing their logical operations and learning capabilities.


Department of Information Technology

SOFT COMPUTING /
ADVANCE SOFT COMPUTING
(IT 1746 / CS405A8 / CS317A8 / CS513A3)

• McCulloch & Pitts model
• Logic-performing McCulloch-Pitts neural nets
• Perceptron model
• Linear separability
  - The XOR problem
THE McCULLOCH–PITTS NEURAL MODEL contd.
• The earliest artificial neural model was proposed by McCulloch
and Pitts in 1943.

• It consists of a number of input units connected to a single output unit.

• The interconnecting links are unidirectional.

• There are two kinds of inputs, namely, excitatory inputs and inhibitory inputs.
  – Excitatory inputs are connected to the output unit through positively weighted interconnections. All excitatory weights have the same positive magnitude.
  – Inhibitory inputs are connected to the output unit through negatively weighted interconnections. All inhibitory weights have the same negative magnitude.
THE McCULLOCH–PITTS NEURAL MODEL contd.

• Simple McCulloch-Pitts neurons can be designed to perform conventional logical operations.

• For this purpose one has to select the appropriate number of inputs, the interconnection weights, and the appropriate activation function.
Logic-performing McCulloch-Pitts neural nets
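
The worked gate constructions from the original slides do not survive in this text. As an illustration, the following Python sketch realises AND, OR and NOT gates with McCulloch-Pitts neurons; the particular thresholds and the absolute-inhibition behaviour are assumptions consistent with the model described above:

# A minimal McCulloch-Pitts neuron: all excitatory inputs share one
# positive weight magnitude, any active inhibitory input vetoes firing
# (absolute inhibition), and the unit fires when the weighted sum of
# excitatory inputs reaches the threshold theta.
def mp_neuron(excitatory, inhibitory, weight, theta):
    if any(inhibitory):              # absolute inhibition: veto firing
        return 0
    net = weight * sum(excitatory)
    return 1 if net >= theta else 0

def AND(x1, x2):
    # Two excitatory inputs of weight 1; fires only when both are 1.
    return mp_neuron([x1, x2], [], weight=1, theta=2)

def OR(x1, x2):
    # Fires when at least one input is 1.
    return mp_neuron([x1, x2], [], weight=1, theta=1)

def NOT(x):
    # A constant excitatory drive that x, as an inhibitory input, vetoes.
    return mp_neuron([1], [x], weight=1, theta=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))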
THE PERCEPTRON
• The perceptron is one of the earliest neural network models, proposed by Frank Rosenblatt in 1962.

• Early neural network enthusiasts were very fond of the perceptron due to its simple structure, pattern-classifying behaviour, and learning ability.

• As far as the study of neural networks is concerned, the perceptron is a very good starting point.
THE PERCEPTRON contd.

• The net input y_in to the output unit Y of the perceptron is the algebraic sum of the weighted inputs.
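
The slide's equation itself does not survive in this text. A standard form, assuming the bias is absorbed as weight w_0 on a constant input x_0 = 1 (matching the indexing W = (w_0, …, w_m) used in the learning-rule slides below), is

    y_{in} = \sum_{i=0}^{m} x_i w_i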
THE PERCEPTRON contd.

• The perceptron was only able to work with linearly separable data points.

• A linearly separable set of patterns is one that can be completely partitioned into two classes by a decision plane.

• The nice thing about perceptrons is that, for any given set of linearly separable patterns, it is always possible to find a perceptron that solves the corresponding classification problem.
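
Formally (a standard definition, not stated explicitly on the original slides): two classes C_1 and C_2 are linearly separable if there exist a weight vector \mathbf{w} and a threshold \theta such that

    \mathbf{w} \cdot \mathbf{x} > \theta \quad \forall\, \mathbf{x} \in C_1, \qquad \mathbf{w} \cdot \mathbf{x} < \theta \quad \forall\, \mathbf{x} \in C_2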
THE PERCEPTRON contd.

[Figure omitted: linear separation of data points by a perceptron. Source: Deep Learning Lecture notes, U.K. Chakraborty]
The XOR Problem
• Real-life classification problems, however, rarely offer such well-behaved linearly separable data as required by a perceptron.

• Minsky and Papert [1969] showed that no perceptron can learn to compute even a trivial function like a two-bit XOR.

• The reason is that there is no single straight line that can separate the 1-producing patterns {(0, 1), (1, 0)} from the 0-producing patterns {(0, 0), (1, 1)}.
The XOR Problem - Is it possible to overcome this limitation?

• There are two ways:
  i) Draw a curved decision surface between the two sets of patterns. However, a perceptron cannot model any curved surface.
The XOR Problem - Is it possible to overcome this limitation? contd.

  ii) The other way is to employ two decision lines instead of one.
Multi-layer Perceptron
• Using this idea, it is possible to design a multi-layer perceptron to solve the XOR problem.
• Such a multi-layer perceptron is sketched below. Here the first perceptron Y fires only when the input is (1, 1).
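
The figure from the original slide is not reproduced in this text. The following Python sketch hand-wires one such two-layer arrangement; the specific weights and thresholds are illustrative assumptions that realise the construction just described (a hidden unit Y that fires only on (1, 1) and then inhibits an OR-like output unit):

# A hand-wired two-layer perceptron for XOR. The hidden unit y fires
# only for input (1, 1); the output unit computes OR(x1, x2) but is
# strongly inhibited by y, which yields XOR overall.
def step(net, theta):
    return 1 if net >= theta else 0

def xor(x1, x2):
    y = step(x1 + x2, theta=2)          # AND: fires only on (1, 1)
    z = step(x1 + x2 - 2 * y, theta=1)  # OR, vetoed when y fires
    return z

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", xor(a, b))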
Perceptron Learning Rule

• Now, let X = (x0, …, xm) be a training vector for which the output of the perceptron is expected to be t, where t = 1, 0, or −1.

• The current combination of the weights is given by the weight vector W = (w0, …, wm).
Perceptron Learning Rule contd.
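
The update equations from the original slides do not survive in this text. A standard statement of the rule, consistent with the notation above, is: if the computed output y differs from the target t, update each weight as w_i ← w_i + α t x_i (with learning rate α); otherwise leave the weights unchanged. The Python sketch below applies this rule; the threshold-band activation (so that y can be 1, 0, or −1), the AND training set, and the parameter values are illustrative assumptions:

# Perceptron learning rule sketch. Targets t are bipolar (1 or -1),
# while the activation uses a threshold band so the output y can be
# 1, 0, or -1, matching the convention above.
def activate(y_in, theta=0.2):
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

# Logical AND with bipolar encoding; x0 = 1 is the bias input.
training = [((1, 1, 1), 1), ((1, 1, -1), -1),
            ((1, -1, 1), -1), ((1, -1, -1), -1)]
w = [0.0, 0.0, 0.0]   # W = (w0, ..., wm), initially zero
alpha = 1.0           # learning rate

changed = True
while changed:        # repeat until an epoch makes no weight change
    changed = False
    for x, t in training:
        y = activate(sum(wi * xi for wi, xi in zip(w, x)))
        if y != t:    # apply the rule only when the output is wrong
            w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
            changed = True

print("learned weights:", w)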
