Ann2013 L19
[Figure: a three-unit recurrent net. Each unit computes oi = sgn(neti), and the outputs o1, o2, o3 feed back as inputs to the other units. The weights recoverable from the figure residue and the table are w12 = w21 = 1, w13 = w31 = -1, w23 = w32 = -1, with no self-connections.]

case           |    1    |    2    |    3    |    4
unit no.       | 1  2  3 | 1  2  3 | 1  2  3 | 1  2  3
Present output | 1  1  1 | 1  1 -1 |-1  1 -1 | 1 -1 -1
Net input      | 0  0 -2 | 2  2 -2 | 2  0  0 | 0  2  0
sgn(net)       | x  x -1 | 1  1 -1 | 1  x  x | x  1  x
Next output    | 1  1 -1 | 1  1 -1 | 1  1 -1 | 1  1 -1

('x' marks sgn(0), where the unit's output is left unchanged.) In every case the net moves to the same stable state (1, 1, -1).
(Zurada, pages 10-13)
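The state transitions in the table above can be checked with a short simulation. This is a minimal sketch, assuming the weight matrix reconstructed from the table (w12 = w21 = 1, w13 = w31 = -1, w23 = w32 = -1, zero diagonal) and a synchronous update that leaves a unit unchanged when its net input is 0:

```python
import numpy as np

# Weight matrix reconstructed from the state-transition table
# (an assumption based on the net-input row; no self-connections).
W = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]])

def update(state):
    """One synchronous update: o_i <- sgn(sum_j w_ij * o_j),
    leaving o_i unchanged when the net input is 0 (the 'x' entries)."""
    net = W @ state
    return np.where(net > 0, 1, np.where(net < 0, -1, state))

# The four starting states from the table all reach (1, 1, -1):
for start in ([1, 1, 1], [1, 1, -1], [-1, 1, -1], [1, -1, -1]):
    print(start, '->', update(np.array(start)).tolist())
```

Running this reproduces the "Next output" row of the table: each case lands in the stable state (1, 1, -1).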
So far, the nets we have seen are entirely feed-forward. Biological neural nets, however, have many feedback connections. Feedforward + feedback = recurrent.
John Hopfield: professor of biology and chemistry at Caltech, also at AT&T Bell Labs. He re-energised ANN research in 1982 by studying systems of interconnected neurons with stable states, which the system enters if started in a nearby state; the network can learn to set up such stable states. This gives auto-association and content-addressable memory.
Hopfield Net
Increase the weight between two nodes if both have the same activity; otherwise decrease it.
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
D. O. Hebb (1949)
The Hopfield network employs Hebbian learning, a mathematical abstraction of the principle of synaptic modulation first articulated by Hebb (1949). Hebb's law says that if one neuron repeatedly stimulates a second neuron while that neuron is firing, the connection between the two cells is strengthened. Mathematically this is expressed as: ∆wij = ai aj, i.e. the change in the weight between units i and j is the product of their activations.
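The Hebbian rule above can be sketched as an outer product: storing a pattern a sets wij = ai aj, with the diagonal zeroed since Hopfield units have no self-connections. This is a minimal illustration, not the full multi-pattern learning rule:

```python
import numpy as np

def hebbian_weights(pattern):
    """Hebbian weight construction for one stored pattern:
    w_ij = a_i * a_j, with w_ii = 0 (no self-connections)."""
    a = np.array(pattern)
    W = np.outer(a, a)       # every w_ij is the product a_i * a_j
    np.fill_diagonal(W, 0)   # remove self-connections
    return W

# Storing the pattern (1, 1, -1) from the worked example yields
# w12 = 1 and w13 = w23 = -1, matching the table's net inputs.
print(hebbian_weights([1, 1, -1]))
```

For several stored patterns the weight matrices are simply summed, which is why Hebbian storage in a Hopfield net is purely local and single-pass.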