
Ann2013 L19

Recurrent neural networks allow for feedback connections between neurons, unlike traditional feedforward networks. Dr. John Hopfield studied recurrent networks that could learn stable states and perform content-addressable memory. Hopfield networks use Hebbian learning and the Hebb rule to store patterns such that when presented with a partial or distorted pattern, the network will retrieve the closest stored pattern. They employ a simple mathematical model where the weight between two neurons is increased based on their simultaneous activation, following Hebb's principle of synaptic plasticity.

Uploaded by

Vipul Shrivastav
Copyright
© Attribution Non-Commercial (BY-NC)

Recurrent Networks

[Figure: a three-unit recurrent network. Each unit forms a weighted sum of the outputs fed back from the other units, passes it through a sgn(.) nonlinearity, and emits outputs o1, o2, o3.]

State transitions for four starting cases (x means the net input is zero, so the unit keeps its previous output):

case                 1          2          3          4
unit no.          1  2  3   1  2  3   1  2  3   1  2  3
Present output    1  1  1   1  1 -1  -1  1 -1   1 -1 -1
net input         0  0 -2   2  2 -2   2  0  0   0  2  0
sgn(net)          x  x -1   1  1 -1   1  x  x   x  1  x
Next output       1  1 -1   1  1 -1   1  1 -1   1  1 -1

Every case converges to the stable state (1, 1, -1).

Zurada (pages 10-13)
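The table above can be reproduced with a short sketch. The net inputs are consistent with the symmetric weights w12 = 1, w13 = -1, w23 = -1 with zero self-connections; this weight matrix is an inference from the table, not stated explicitly in the slide.

```python
# Weight matrix inferred from the table's net inputs (an assumption,
# not given explicitly in the slide): symmetric, zero diagonal.
W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]

def update(state):
    """One synchronous update: o_i <- sgn(sum_j w_ij * o_j).
    A zero net input leaves the unit's output unchanged ('x' in the table)."""
    nets = [sum(W[i][j] * state[j] for j in range(3)) for i in range(3)]
    return [state[i] if n == 0 else (1 if n > 0 else -1)
            for i, n in enumerate(nets)]

for present in ([1, 1, 1], [1, 1, -1], [-1, 1, -1], [1, -1, -1]):
    print(present, '->', update(present))
# Each of the four cases maps to the stable state [1, 1, -1].
```

Running the update once from any of the four table states lands in (1, 1, -1), matching the "Next output" row.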

So far, the nets we have seen are entirely feed-forward. Biological neural nets have a lot of feedback connections. Feedforward + feedback = recurrent.

Dr. John J. Hopfield

Professor of biology/chemistry, Caltech / AT&T Bell Labs. Re-energised ANN research in 1982. Studied systems of interconnected neurons with stable states that are entered if the system starts in a nearby state; the network can learn to set up these stable states. Auto-association, content-addressable memory.

Content Addressable Memory


Suppose we store in memory the item:
H.A. Kramers and G.H. Wannier, Phys. Rev., 60, 252 (1941)
A content-addressable memory (CAM) could retrieve it from a partial query such as:
and Wannier (1941)
and a CAM tolerant of distortions could retrieve it even from:
Vannier, (1941)

Associative Memory Problem:


Store a set of patterns in such a way that when presented with a new pattern, the network responds by producing whichever of the stored patterns most closely resembles the new pattern.
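The associative-memory behaviour described above can be sketched in a few lines: store a bipolar pattern with the Hebb rule, then recall it from a corrupted probe. The pattern and the number of flipped bits here are illustrative assumptions.

```python
# Minimal associative-memory sketch: Hebbian storage, iterative recall.
def train(patterns):
    """Hebbian weights: w_ij = sum over patterns of x_i * x_j, zero diagonal."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += x[i] * x[j]
    return W

def recall(W, probe, steps=10):
    """Repeated synchronous sgn updates until the state stops changing.
    A zero net input leaves a unit's output unchanged."""
    s = list(probe)
    for _ in range(steps):
        nets = [sum(W[i][j] * s[j] for j in range(len(s)))
                for i in range(len(s))]
        new = [s[i] if n == 0 else (1 if n > 0 else -1)
               for i, n in enumerate(nets)]
        if new == s:
            break
        s = new
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]   # an illustrative bipolar pattern
W = train([stored])
probe = list(stored)
probe[0], probe[3] = -probe[0], -probe[3]   # distort two bits
print(recall(W, probe))  # recovers the stored pattern
```

Presenting the distorted probe, the network settles back into the stored pattern, which is exactly the "closest stored pattern" behaviour the problem statement asks for.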

Hopfield Net

A Simple Hopfield Net

The Hebb rule: neurons that fire together wire together.

Increase the weight between two nodes if both have the same activity; otherwise decrease it.

"When an axon of cell A is near enough to excite cell B and repeatedly and persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficacy in firing B is increased."

D. O. Hebb (1949)

The Hopfield network employs Hebbian learning, a mathematical abstraction of the principle of synaptic modulation first articulated by Hebb (1949). Hebb's law says that if one neuron repeatedly stimulates another while the receiving neuron is firing, the strength of the connection between the two cells is strengthened. Mathematically, the weight increment is expressed as: Δwij = ai aj
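Applying the increment Δwij = ai aj across all pairs of units for a single bipolar pattern gives the full Hebbian weight matrix. A minimal sketch, using an illustrative three-unit pattern:

```python
# Hebbian weight matrix for one bipolar pattern: w_ij = a_i * a_j,
# with zero diagonal (units have no self-connections).
a = [1, -1, 1]   # an illustrative activation pattern
n = len(a)
W = [[0 if i == j else a[i] * a[j] for j in range(n)] for i in range(n)]
for row in W:
    print(row)
# The matrix is symmetric: units with agreeing activations get weight +1,
# disagreeing units get weight -1.
```

Note that the weight depends only on the product of the two activations, so the matrix is automatically symmetric (wij = wji), one of the defining properties of a Hopfield net.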
