Lecture 11 - Supervised Learning - Hopfield Networks - (Part 4)

A Hopfield network is a type of recurrent neural network useful for associative memory. It consists of fully connected neurons with symmetric weights. Each neuron has a binary output of 1 or -1. The network dynamically settles into attractor states that represent stored memories. It can recall incomplete or corrupted input patterns by converging to the closest attractor state. Hopfield networks are widely used for content-addressable memory and solving optimization problems by reaching energy minima.

Uploaded by Ammar Alkindy

Dr. Qadri Hamarsheh

Supervised Learning in Neural Networks (Part 4)


Hopfield Networks
 Neural networks were designed by analogy with the brain. The brain’s
memory, however, works by association.
 Multilayer neural networks trained with the back-propagation algorithm are
used for pattern recognition problems. However, to emulate the human
memory’s associative characteristics we need a different type of network: a
recurrent neural network.
 A recurrent neural network has feedback loops from its outputs to its inputs.
The presence of such loops has a profound impact on the learning capability
of the network.
 John Hopfield in 1982 formulated the physical principle of storing
information in a dynamically stable network (Content-Addressable
Memory).
Single-layer n-neuron Hopfield network

Figure 1
Common Properties and Architecture:
 Every neuron is connected to every other neuron.
 Fully connected with symmetric weights: wij = wji.
 No self-loops: wii = 0.
 Each neuron has a single input from the outside world.
 In the Hopfield model the neurons have a binary output taking the values –1
and 1.
 Neurons are updated one at a time, in random (asynchronous) order.
 A Hopfield Network is a model of associative memory. It is based on
Hebbian learning but uses binary neurons.


 A Hopfield Network provides a formal model that can be analyzed to determine the storage capacity of the network.
 An associative memory can be thought of as a set of attractors, each with its own basin of attraction.
 The space of all possible states of the network is called the configuration
space.
 Basins of attraction: Division of the configuration space by stored patterns.
 Stored patterns should be attractors.

Figure 2
 Memories are attractors in state space as shown in the figure.

Figure 3
 The dynamics of the system carries starting points into one of the attractors as
shown in the above figure.
 A Hopfield net is composed of binary threshold units with recurrent
connections between them. Recurrent networks of non-linear units are
generally very hard to analyze. They can behave in many different ways:
 Settle to a stable state.
 Oscillate.
 Follow chaotic trajectories that cannot be predicted.
 But Hopfield realized that if the connections are symmetric, there is a global
energy function.
 Each “configuration” of the network has an energy.
 The binary threshold decision rule causes the network to settle to an energy
minimum.
 The network thus acts as a pattern recognizer.
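This settling behaviour can be illustrated with a short sketch (Python with NumPy; the standard Hopfield energy E = -1/2·yᵀWy + θ·y and the helper names are my additions, not from the lecture). Starting one bit away from a stored pattern, each binary threshold update leaves the energy no higher than before:

```python
import numpy as np

def energy(W, y, theta):
    """Hopfield's global energy: E = -0.5 * y^T W y + theta . y"""
    return -0.5 * y @ W @ y + theta @ y

# Symmetric weights with zero diagonal; this W stores (1,1,1) and (-1,-1,-1),
# matching the worked example later in the notes
W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]], dtype=float)
theta = np.zeros(3)

y = np.array([-1.0, 1.0, 1.0])        # start one bit away from a stored pattern
for _ in range(3):                    # a few asynchronous sweeps
    for i in range(3):
        e_before = energy(W, y, theta)
        y[i] = 1.0 if W[i] @ y - theta[i] >= 0 else -1.0
        # binary threshold updates never raise the energy
        assert energy(W, y, theta) <= e_before
```

Because symmetric weights guarantee this monotone descent, the network cannot oscillate or wander chaotically under asynchronous updates; it must come to rest in an energy minimum.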


 Hopfield networks have two applications. First, they can act as associative
memories. Second, they can be used to solve optimization problems.
According to figure 1:
 The Hopfield model starts with the standard McCulloch-Pitts model of a neuron with the sign activation function:

Y = sign(X), where sign(x) = +1 if x > 0 and -1 if x < 0 (a zero input leaves the neuron's output unchanged).
 The current state is determined by the current outputs of all neurons, y1, y2, . . ., yn. Thus, for a single-layer n-neuron network, the state can be defined by the state vector:

Y = [y1 y2 . . . yn]^T
 Synaptic weights between neurons are represented in matrix form as follows:

W = ∑ (from m = 1 to M) Ym Ym^T − M I
 where M is the number of states to be memorized by the network, Ym is the n-dimensional binary vector of the m-th state, I is the n × n identity matrix, and superscript T denotes matrix transposition.
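This storage rule can be sketched in a few lines of Python (NumPy assumed; the function name `store_patterns` is mine, not from the lecture):

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian storage rule: W = sum over m of Ym Ym^T, minus M*I.

    patterns: list of 1-D sequences with bipolar (+1/-1) entries.
    Returns the n x n symmetric weight matrix with zero diagonal.
    """
    n = len(patterns[0])
    W = np.zeros((n, n))
    for y in patterns:
        y = np.asarray(y, dtype=float).reshape(n, 1)
        W += y @ y.T                      # outer product Ym Ym^T
    W -= len(patterns) * np.eye(n)        # subtracting M*I zeroes the diagonal
    return W

# Storing (1, 1, 1) and (-1, -1, -1) reproduces the worked example below
W = store_patterns([[1, 1, 1], [-1, -1, -1]])
```

Subtracting M·I works because each bipolar outer product Ym Ym^T has ones on its diagonal, so removing M copies of the identity enforces the no-self-loop condition wii = 0.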
Possible states for the three-neuron Hopfield network


 The stable state-vertex is determined by the weight matrix W, the current input vector X, and the threshold matrix 𝜽. If the input vector is partially incorrect or incomplete, the initial state will converge to the stable state-vertex after a few iterations.
Example:
 Suppose that our network is required to memorize two states, (1, 1, 1) and (-1, -1, -1). Thus,

Y1 = [1 1 1]^T and Y2 = [-1 -1 -1]^T,

where Y1 and Y2 are the three-dimensional vectors.
 The 3 × 3 identity matrix I is

I = | 1 0 0 |
    | 0 1 0 |
    | 0 0 1 |

 Weight matrix calculation:

W = Y1 Y1^T + Y2 Y2^T − 2I = | 0 2 2 |
                             | 2 0 2 |
                             | 2 2 0 |

 The network is tested by the sequence of input vectors, X1 and X2, which are equal to the output (or target) vectors Y1 and Y2, respectively, using the following equation:
𝒀𝒎 = 𝒔𝒊𝒈𝒏(𝑾𝑿𝒎 − 𝜽), 𝒎 = 𝟏, 𝟐, . . . , 𝑴
where 𝜽 is the threshold vector. The testing process includes the following steps:
1) Activate the network by applying the input vector X.
2) Calculate the actual output vector Y.
3) Compare the result with the initial input vector X.
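The three testing steps can be sketched as follows (Python with NumPy; the variable names are my own, and for the stored patterns the net inputs are never zero, so the sign(0) convention does not matter here):

```python
import numpy as np

W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]])   # weight matrix from the example
theta = np.zeros(3)                                # zero thresholds assumed

def recall(x):
    """One update Y = sign(W X - theta); here sign(0) is taken as +1."""
    return np.where(W @ np.asarray(x) - theta >= 0, 1, -1)

# Steps 1-3: activate with X, compute Y, compare Y with X
for x in ([1, 1, 1], [-1, -1, -1]):
    y = recall(x)                      # steps 1 and 2
    assert np.array_equal(y, x)        # step 3: stored patterns reproduce themselves
```

Both stored patterns pass the comparison in step 3, confirming that (1, 1, 1) and (-1, -1, -1) are stable states of the network.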


 The remaining six states are all unstable. However, stable states (also
called fundamental memories) are capable of attracting states that
are close to them.
 The fundamental memory (1, 1, 1) attracts unstable states (-1, 1, 1),
(1,-1, 1) and (1, 1, -1). Each of these unstable states represents a
single error, compared to the fundamental memory (1, 1, 1).
 The fundamental memory (-1, -1, -1) attracts unstable states (-1, -1, 1), (-1, 1, -1) and (1, -1, -1).
 Thus, the Hopfield network can act as an error correction network.
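This error-correcting behaviour can be checked with a short sketch (Python with NumPy; I update neurons one at a time and, as is common, leave a neuron unchanged when its net input is exactly zero — the lecture does not spell out this tie-breaking convention, so treat it as an assumption):

```python
import numpy as np

W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]])   # weights storing (1,1,1) and (-1,-1,-1)

def settle(x, sweeps=5):
    """Asynchronous recall: update neurons one at a time until the state is stable.
    A zero net input leaves the neuron in its previous state."""
    y = np.array(x)
    for _ in range(sweeps):
        for i in range(len(y)):
            net = W[i] @ y
            if net != 0:
                y[i] = 1 if net > 0 else -1
    return y

# Every single-error state is pulled back to its fundamental memory
for corrupted in ([-1, 1, 1], [1, -1, 1], [1, 1, -1]):
    assert np.array_equal(settle(corrupted), [1, 1, 1])
for corrupted in ([-1, -1, 1], [-1, 1, -1], [1, -1, -1]):
    assert np.array_equal(settle(corrupted), [-1, -1, -1])
```

All six single-error states fall into the basin of the nearer fundamental memory, which is exactly the error-correction property described above.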
