The net does not distinguish between an error in which the calculated output is 0 and the target is -1, and an error in which the calculated output is +1 and the target is -1.
In both cases the weights are updated in the same way, where the target t is +1 or -1 and α is the learning rate parameter.
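Stated as an equation (the standard perceptron update; the bias term is included on the assumption that the net uses a bias unit, as in the AND examples below):

\[
w_i(\text{new}) = w_i(\text{old}) + \alpha\, t\, x_i,
\qquad
b(\text{new}) = b(\text{old}) + \alpha\, t
\]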
If no error occurs, the weights are not changed.
Training continues until no error occurs on any training pattern.
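The training procedure above can be summarized in a short sketch. This is an illustrative implementation only: the names (perceptron_train, activation) are not from the original material, and it assumes the fixed-threshold response unit and bias described below.

```python
# Minimal sketch of the perceptron learning rule (illustrative, not from the slides).

def activation(y_in, theta):
    """Three-valued threshold function: +1, 0, or -1."""
    if y_in > theta:
        return 1
    elif y_in < -theta:
        return -1
    return 0

def perceptron_train(samples, alpha=1.0, theta=0.2, max_epochs=100):
    """Train weights and bias until no error occurs on any training pattern.

    samples: list of (inputs, target) pairs, where target is +1 or -1.
    Returns (weights, bias, epochs_used).
    """
    n = len(samples[0][0])
    weights = [0.0] * n          # initial weights set to 0
    bias = 0.0                   # initial bias set to 0
    for epoch in range(1, max_epochs + 1):
        error_occurred = False
        for x, t in samples:
            y_in = bias + sum(w_i * x_i for w_i, x_i in zip(weights, x))
            y = activation(y_in, theta)
            if y != t:           # learn only when an error occurs
                # alpha * t * x_i is 0 for x_i = 0, so only weights
                # connecting active inputs actually change
                weights = [w_i + alpha * t * x_i for w_i, x_i in zip(weights, x)]
                bias += alpha * t
                error_occurred = True
        if not error_occurred:   # stop once a full pass is error-free
            return weights, bias, epoch
    return weights, bias, max_epochs
```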
The perceptron learning rule convergence theorem states that if weights exist that allow the net to respond correctly to all training patterns, then the rule's procedure for adjusting the weights will find values such that the net responds correctly to all training patterns.
The net will find this final set of weights in a finite number of steps.
Only the weights connecting active inputs are updated.
The threshold of the response unit is a fixed non-negative value.
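With a fixed non-negative threshold θ, a standard form of the response unit's activation (the form assumed in the sketch above) is:

\[
f(y_{in}) =
\begin{cases}
1 & \text{if } y_{in} > \theta \\
0 & \text{if } -\theta \le y_{in} \le \theta \\
-1 & \text{if } y_{in} < -\theta
\end{cases}
\]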
Perceptron for the AND function: binary inputs and bipolar targets
α = 1, initial weights and bias set to 0, and θ = 0.2
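As an illustration only, and assuming the perceptron_train sketch above is in scope, this example could be run as:

```python
# AND with binary inputs and bipolar targets, alpha = 1, theta = 0.2
and_binary = [
    ((1, 1), 1),
    ((1, 0), -1),
    ((0, 1), -1),
    ((0, 0), -1),
]
w, b, epochs = perceptron_train(and_binary, alpha=1.0, theta=0.2)
print(w, b, epochs)
```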
AND function: bipolar inputs and targets
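The bipolar version differs only in the input patterns; with no zero inputs, every weight is adjusted whenever an error occurs, so training typically finishes in fewer epochs. A sketch, again assuming perceptron_train from above:

```python
# AND with bipolar inputs and targets, same parameters
and_bipolar = [
    ((1, 1), 1),
    ((1, -1), -1),
    ((-1, 1), -1),
    ((-1, -1), -1),
]
w, b, epochs = perceptron_train(and_bipolar, alpha=1.0, theta=0.2)
print(w, b, epochs)
```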
Architecture of a perceptron with several output classes
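For several output classes, one common arrangement (assumed here, not taken from the original slides) is a single layer with one output unit per class, each with its own weight vector and bias, trained independently with the same rule. A sketch reusing perceptron_train from above:

```python
# Sketch of a perceptron layer with one output unit per class (illustrative).
def perceptron_multi_train(samples, n_classes, alpha=1.0, theta=0.2, max_epochs=100):
    """samples: list of (inputs, targets), where targets is a tuple of +1/-1
    values, one per output class. Returns a list of (weights, bias) per class."""
    per_class = []
    for j in range(n_classes):
        # train the j-th output unit on the j-th target component
        unit_samples = [(x, t[j]) for x, t in samples]
        w, b, _ = perceptron_train(unit_samples, alpha, theta, max_epochs)
        per_class.append((w, b))
    return per_class
```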