
Hopfield Neural Network

Last Updated : 30 Aug, 2024

The Hopfield Neural Network, invented by Dr. John J. Hopfield, consists of one layer of 'n' fully connected recurrent neurons. It is generally used for auto-association and optimization tasks. Its output is computed through a converging iterative process, which makes it behave quite differently from an ordinary feed-forward neural network.

Discrete Hopfield Network


It is a fully interconnected neural network where each unit is connected to every other unit. It behaves in a discrete manner, i.e. it gives finite, distinct outputs, generally of two types:

Binary (0/1)
Bipolar (-1/1)

The weights associated with this network are symmetric and have the following properties:

1. $w_{ij} = w_{ji}$
2. $w_{ii} = 0$

Structure & Architecture of Hopfield Network

Each neuron has an inverting and a non-inverting output.

Being fully connected, the output of each neuron is an input to all other neurons, but not to itself.

The figure below shows a sample representation of a Discrete Hopfield Network architecture having the following elements.

Discrete Hopfield Network Architecture

$[x_1, x_2, \ldots, x_n]$ → input to the n given neurons.
$[y_1, y_2, \ldots, y_n]$ → output obtained from the n given neurons.
$w_{ij}$ → weight associated with the connection between the i-th and the j-th neuron.

Training Algorithm

For storing a set of input patterns $S(p)$ [$p = 1$ to $P$], where $S(p) = s_1(p) \ldots s_i(p) \ldots s_n(p)$, the weight matrix is given by:

For binary patterns:

$w_{ij} = \sum_{p=1}^{P} [2s_i(p) - 1][2s_j(p) - 1] \quad (\text{for all } i \neq j)$

For bipolar patterns:

$w_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p) \quad (\text{where } w_{ij} = 0 \text{ for all } i = j)$

(i.e. the weights here have no self-connection)
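
To make the storage rule concrete, here is a minimal NumPy sketch of the bipolar rule; the function name `store_patterns` is our own and not part of the article:

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian storage for bipolar (+1/-1) patterns:
    w_ij = sum_p s_i(p) * s_j(p), with w_ii = 0 (no self-connections)."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)      # accumulate s_i(p) * s_j(p) for each pattern
    np.fill_diagonal(W, 0.0)     # enforce w_ii = 0
    return W

# Storing the single bipolar pattern used in the worked example below:
W = store_patterns([[1, 1, 1, -1]])
print(W)
# [[ 0.  1.  1. -1.]
#  [ 1.  0.  1. -1.]
#  [ 1.  1.  0. -1.]
#  [-1. -1. -1.  0.]]
```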

The steps involved in the training of a Hopfield network are as follows:

1. Initialize the weights $w_{ij}$ to store the patterns (using the training algorithm above).
2. For each input vector, perform steps 3 to 7.
3. Set the initial activations of the network equal to the external input vector $x$:

   $y_i = x_i \quad (\text{for } i = 1 \text{ to } n)$

4. For each unit $y_i$, perform steps 5 to 7.
5. Calculate the total input $y_{in_i}$ of the network using the equation below:

   $y_{in_i} = x_i + \sum_j y_j w_{ji}$

6. Apply the activation function over the total input to calculate the output:

   $y_i = \begin{cases} 1 & \text{if } y_{in_i} > \theta_i \\ y_i & \text{if } y_{in_i} = \theta_i \\ 0 & \text{if } y_{in_i} < \theta_i \end{cases}$

   (where the threshold $\theta_i$ is normally taken as 0)

7. Feed the obtained output $y_i$ back to all other units, thus updating the activation vector.
8. Test the network for convergence (see the sketch after this list).
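
A minimal NumPy sketch of this update loop, assuming binary units and threshold 0 (the helper name `recall` and the sweep cap are our own choices):

```python
import numpy as np

def recall(W, x, theta=0.0, max_sweeps=100):
    """Asynchronous update of binary units until no unit changes.
    y_in_i = x_i + sum_j y_j * w_ji, then threshold against theta."""
    x = np.asarray(x, dtype=float)
    y = x.copy()                                   # step 3: y_i = x_i
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(y)):                    # update order doesn't matter
            y_in = x[i] + y @ W[:, i]              # step 5: total input to unit i
            new = 1.0 if y_in > theta else (0.0 if y_in < theta else y[i])
            if new != y[i]:                        # step 6: activation
                y[i] = new                         # step 7: feed back immediately
                changed = True
        if not changed:                            # step 8: converged
            break
    return y
```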

Consider the following problem. We are required to create a Discrete Hopfield Network in which the input vector [1 1 1 -1] (bipolar representation), or [1 1 1 0] (binary representation), is stored. We then test the network with missing entries in the first and second components of the stored vector (i.e. [0 0 1 0]).
Given the input vector $x = [1\ 1\ 1\ {-1}]$ (bipolar), we initialize the weight matrix $w_{ij}$ as:

$w_{ij} = \sum_p s^T(p)\, s(p) = \begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & -1 \\ 1 & 1 & 1 & -1 \\ 1 & 1 & 1 & -1 \\ -1 & -1 & -1 & 1 \end{bmatrix}$

and the weight matrix with no self-connection is:

$w_{ij} = \begin{bmatrix} 0 & 1 & 1 & -1 \\ 1 & 0 & 1 & -1 \\ 1 & 1 & 0 & -1 \\ -1 & -1 & -1 & 0 \end{bmatrix}$

As per the question, the input vector with missing entries is $x = [0\ 0\ 1\ 0]$ ($[x_1\ x_2\ x_3\ x_4]$, binary). Set $y = x = [0\ 0\ 1\ 0]$ ($[y_1\ y_2\ y_3\ y_4]$).

Choose a unit $y_i$ (the order doesn't matter) and update its activation, taking the i-th column of the weight matrix for the calculation. (We repeat the following steps for every unit $y_i$ and check whether the network has converged.)
$y_{in_1} = x_1 + \sum_{j=1}^{4} y_j w_{j1} = 0 + \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 1 \\ -1 \end{bmatrix} = 0 + 1 = 1$

Applying the activation, $y_{in_1} > 0 \implies y_1 = 1$.

Feeding this back to the other units, we get $y = [1\ 0\ 1\ 0]$, which is not equal to the input vector $x = [1\ 1\ 1\ 0]$. Hence, no convergence.

For the next unit, we take the updated value via feedback (i.e. $y = [1\ 0\ 1\ 0]$):

$y_{in_3} = x_3 + \sum_{j=1}^{4} y_j w_{j3} = 1 + \begin{bmatrix} 1 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \\ -1 \end{bmatrix} = 1 + 1 = 2$

Applying the activation, $y_{in_3} > 0 \implies y_3 = 1$.

Feeding this back to the other units, we get $y = [1\ 0\ 1\ 0]$, which is not equal to the input vector $x = [1\ 1\ 1\ 0]$. Hence, no convergence.

For the next unit, we again take the updated value via feedback (i.e. $y = [1\ 0\ 1\ 0]$):

$y_{in_4} = x_4 + \sum_{j=1}^{4} y_j w_{j4} = 0 + \begin{bmatrix} 1 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ -1 \\ -1 \\ 0 \end{bmatrix} = 0 + (-1) + (-1) = -2$

Applying the activation, $y_{in_4} < 0 \implies y_4 = 0$.

Feeding this back to the other units, we get $y = [1\ 0\ 1\ 0]$, which is not equal to the input vector $x = [1\ 1\ 1\ 0]$. Hence, no convergence.

For the next unit, we take the updated value via feedback (i.e. $y = [1\ 0\ 1\ 0]$):

$y_{in_2} = x_2 + \sum_{j=1}^{4} y_j w_{j2} = 0 + \begin{bmatrix} 1 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \\ -1 \end{bmatrix} = 0 + 1 + 1 = 2$

Applying the activation, $y_{in_2} > 0 \implies y_2 = 1$.

Feeding this back to the other units, we get $y = [1\ 1\ 1\ 0]$, which is equal to the input vector $x = [1\ 1\ 1\ 0]$. Hence, the network converges to the stored vector $x$.
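
The same result can be reproduced with the hypothetical `store_patterns` and `recall` helpers sketched earlier:

```python
W = store_patterns([[1, 1, 1, -1]])   # store the bipolar pattern
x = [0, 0, 1, 0]                      # binary probe with missing entries
print(recall(W, x))                   # -> [1. 1. 1. 0.], the stored vector
```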

Continuous Hopfield Network


Unlike the discrete Hopfield network, here the time parameter is treated as a continuous variable. So, instead of binary/bipolar outputs, we obtain values that lie between 0 and 1. This network can be used to solve constrained optimization and associative memory problems. The output is defined as:

$v_i = g(u_i)$

where:

$v_i$ = output from the continuous Hopfield network
$u_i$ = internal activity of a node in the continuous Hopfield network

Energy Function

Hopfield networks have an energy function associated with them, which either diminishes or remains unchanged on each update (feedback) iteration. The energy function for a continuous Hopfield network is defined as:

$E = -0.5 \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} v_i v_j - \sum_{i=1}^{n} \theta_i v_i$

To determine whether the network will converge to a stable configuration, we check that the energy function reaches its minimum:

$\frac{dE}{dt} \leq 0$
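
As a sketch, this energy can be computed directly in NumPy (the helper name `energy` is our own):

```python
import numpy as np

def energy(W, v, theta):
    """E = -0.5 * sum_ij w_ij v_i v_j - sum_i theta_i v_i."""
    v = np.asarray(v, dtype=float)
    return -0.5 * v @ W @ v - np.asarray(theta, dtype=float) @ v
```

Evaluating `energy` after each update of a converging network should yield a non-increasing sequence of values.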

The network is bound to converge if the activity of each neuron with respect to time is given by the following differential equation:

$\frac{du_i}{dt} = -\frac{u_i}{\tau} + \sum_{j=1}^{n} w_{ij} v_j + \theta_i$

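A simple Euler-integration sketch of these dynamics, taking $g$ to be a logistic sigmoid (one common choice; the article does not fix a particular $g$, and the function name `simulate` is ours):

```python
import numpy as np

def simulate(W, theta, u0, tau=1.0, dt=0.01, steps=5000):
    """Euler integration of du_i/dt = -u_i/tau + sum_j w_ij v_j + theta_i."""
    u = np.asarray(u0, dtype=float)
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-u))     # v_i = g(u_i), outputs in (0, 1)
        u = u + dt * (-u / tau + W @ v + theta)
    return 1.0 / (1.0 + np.exp(-u))      # final continuous outputs v_i
```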