Mod 2 2
Uploaded by annaj

Perceptron

Original perceptron network



The perceptron learning rule is more powerful than the Hebb rule

Under suitable conditions, its learning procedure can be
proved to converge to the correct weights

Connections from the sensory units to the associator
units have fixed weights of -1, 0, or +1

Sensory units
 Binary activations

Associator units
 Binary activations

Response units
 Activations of +1, 0, or -1

The activation of the associator units is a binary step
function with an arbitrary but fixed threshold

Weights from the associator units to the response units are
adjusted by the perceptron learning rule

The output of the perceptron is y = f(y_in)
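The output function can be sketched as follows. The exact thresholded form below is an assumption (the slide's own formula is not shown), using the standard three-valued step function with θ the fixed non-negative threshold of the response unit:

```python
# Sketch of the perceptron output y = f(y_in), assuming the standard
# three-valued step function with a fixed non-negative threshold theta.
def perceptron_output(y_in, theta=0.2):
    if y_in > theta:        # firmly positive net input
        return 1
    elif y_in < -theta:     # firmly negative net input
        return -1
    else:                   # inside the undecided band [-theta, theta]
        return 0
```

The value θ = 0.2 here matches the threshold used in the AND example later in the slides.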



The net did not distinguish between an error in which
the calculated output is 0 and the target is -1 and an
error in which the calculated output is +1 and the
target is -1


When an error occurs, the weights and bias are updated as

w_i(new) = w_i(old) + α t x_i
b(new) = b(old) + α t

where the target t is +1 or -1

and α is the learning-rate parameter

If no error occurred, the weights were not changed

Training continued until no error occurred.
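The training procedure above can be sketched as a loop that repeats over all training pairs until a full epoch passes with no error (a minimal sketch; the function name and the epoch cap are illustrative, not from the slides):

```python
def train_perceptron(samples, alpha=1.0, theta=0.2, max_epochs=100):
    """Perceptron learning rule: cycle through all (x, t) pairs,
    updating weights only when an error occurs, until an entire
    epoch produces no error. Weights and bias start at 0."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(max_epochs):
        error = False
        for x, t in samples:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))
            # three-valued step activation with fixed threshold theta
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                    # learn only when an error occurs
                error = True
                b += alpha * t
                # inputs with x_i = 0 leave their weight unchanged
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
        if not error:                     # a clean epoch: training is done
            break
    return w, b

# Illustrative run on bipolar AND data: converges in two epochs here.
samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = train_perceptron(samples)   # w = [1.0, 1.0], b = -1.0
```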

The perceptron learning rule convergence
theorem states that if weights exist that allow the
net to respond correctly to all training patterns,
then the rule's procedure for adjusting the
weights will find values such that the net does
respond correctly to all training patterns.

The net will find this final set of weights in a finite
number of steps.

Only the weights connected to active inputs are
updated

The threshold of the response unit is a fixed non-negative
value.
Perceptron for the AND
function: binary inputs and bipolar
targets

α = 1, initial weights and bias set to 0, and θ = 0.2
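A worked run with exactly these settings (binary inputs, bipolar targets, α = 1, zero initial weights and bias, θ = 0.2) can be sketched as:

```python
# Binary inputs, bipolar targets for AND; pattern order as is
# conventional for this example.
samples = [((1, 1), 1), ((1, 0), -1), ((0, 1), -1), ((0, 0), -1)]
w, b, alpha, theta = [0.0, 0.0], 0.0, 1.0, 0.2

for epoch in range(20):
    error = False
    for x, t in samples:
        y_in = b + w[0] * x[0] + w[1] * x[1]
        y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
        if y != t:                 # update only on error
            error = True
            w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
            b += alpha * t
    if not error:
        break
# converges to w = [2.0, 3.0], b = -4.0 with this pattern order

# after convergence every training pattern is classified correctly
for x, t in samples:
    y_in = b + w[0] * x[0] + w[1] * x[1]
    assert (1 if y_in > theta else (-1 if y_in < -theta else 0)) == t
```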
AND function: bipolar inputs and
targets
Architecture of a perceptron with
several output classes
