Neural Networks and Physical Systems With Emergent Collective Computational Abilities
J. J. Hopfield
PNAS 1982;79;2554-2558
doi:10.1073/pnas.79.8.2554
Biophysics
J. J. HOPFIELD
Division of Chemistry and Biology, California Institute of Technology, Pasadena, California 91125; and Bell Laboratories, Murray Hill, New Jersey 07974
ABSTRACT Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple
equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase
space flow of the state of a system. A model of such a system is
given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce
a content-addressable memory which correctly yields an entire
memory from any subpart of sufficient size. The algorithm for the
time evolution of the state of the system is based on asynchronous
parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition,
categorization, error correction, and time sequence retention.
The collective properties are only weakly sensitive to details of the
modeling or the failure of individual devices.
Given the dynamical electrochemical properties of neurons and
their interconnections (synapses), we readily understand schemes
that use a few neurons to obtain elementary useful biological
behavior (1-3). Our understanding of such simple circuits in
electronics allows us to plan larger and more complex circuits
which are essential to large computers. Because evolution has
no such plan, it becomes relevant to ask whether the ability of
large collections of neurons to perform "computational" tasks
may in part be a spontaneous collective consequence of having
a large number of interacting simple neurons.
In physical systems made from a large number of simple elements, interactions among large numbers of elementary components yield collective phenomena such as the stable magnetic
orientations and domains in a magnetic system or the vortex
patterns in fluid flow. Do analogous collective phenomena in
a system of simple interacting neurons have useful "computational" correlates? For example, are the stability of memories,
the construction of categories of generalization, or time-sequential memory also emergent properties and collective in
origin? This paper examines a new modeling of this old and fundamental question (4-8) and shows that important computational properties spontaneously arise.
All modeling is based on details, and the details of neuroanatomy and neural function are both myriad and incompletely
known (9). In many physical systems, the nature of the emergent collective properties is insensitive to the details inserted
in the model (e.g., collisions are essential to generate sound
waves, but any reasonable interatomic force law will yield appropriate collisions). In the same spirit, I will seek collective
properties that are robust against change in the model details.
The model could be readily implemented by integrated circuit hardware. The conclusions suggest the design of a delocalized content-addressable memory.
Each neuron i has two states, Vi = 0 ("not firing") and Vi = 1 ("firing at maximum rate"). When neuron i has a connection made to it from neuron j, the strength of connection is defined as Tij. (Nonconnected neurons have Tij = 0.) The instantaneous state of the system is specified by listing the N values of Vi, so it is represented by a binary word of N bits.
The state changes in time according to the following algorithm. For each neuron i there is a fixed threshold U_i. Each neuron i readjusts its state randomly in time but with a mean attempt rate W, setting

    V_i \to 1 \text{ if } \sum_{j \ne i} T_{ij} V_j > U_i
    V_i \to 0 \text{ if } \sum_{j \ne i} T_{ij} V_j < U_i    [1]
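Read as an algorithm, this asynchronous rule is easy to sketch in Python with NumPy; the function names and the random-scan driver below are mine, not the paper's:

```python
import numpy as np

def update_neuron(V, T, U, i):
    """Readjust neuron i: compare its summed input from all other neurons
    with its threshold U[i]; an input exactly at threshold leaves V[i] as is."""
    h = T[i] @ V - T[i, i] * V[i]      # sum over j != i of T_ij V_j
    if h > U[i]:
        V[i] = 1
    elif h < U[i]:
        V[i] = 0
    return V

def run_async(V, T, U, steps, rng):
    """Asynchronous parallel processing: each neuron readjusts randomly in
    time, approximated here by picking one neuron uniformly per step."""
    for _ in range(steps):
        update_neuron(V, T, U, rng.integers(len(V)))
    return V
```

Because each device evaluates on its own schedule, no global clock is needed; the uniform random scan above only caricatures the mean attempt rate W.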
Suppose we wish to store the set of states V^s, s = 1, ..., n. We use the storage prescription

    T_{ij} = \sum_s (2V_i^s - 1)(2V_j^s - 1)    [2]

but with T_{ii} = 0. From this definition

    \sum_j T_{ij} V_j^{s'} = \sum_s (2V_i^s - 1)\Big[\sum_j V_j^{s'}(2V_j^s - 1)\Big] \equiv H_i^{s'}    [3]

The mean value of the bracketed term is 0 unless s = s', for which the mean is N/2. This pseudoorthogonality yields

    \langle H_i^{s'} \rangle \approx (2V_i^{s'} - 1)\, N/2    [4]

which is positive if V_i^{s'} = 1 and negative if V_i^{s'} = 0; except for the noise of the s ≠ s' terms, any assigned memory is thus stable under the processing algorithm.
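The storage prescription of Eq. 2 amounts to an outer-product (Hebbian) sum over the stored states; a minimal sketch, with names of my choosing:

```python
import numpy as np

def store(patterns):
    """T_ij = sum_s (2 V_i^s - 1)(2 V_j^s - 1), with T_ii forced to 0."""
    S = 2 * np.asarray(patterns, dtype=float) - 1   # map {0,1} states to {-1,+1}
    T = S.T @ S
    np.fill_diagonal(T, 0)
    return T
```

For a stored state, the summed input to neuron i then carries, on average, the sign of (2Vi - 1), so each assigned memory is a fixed point of the processing rule with thresholds taken as zero.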
[Figure: abscissa labeled "Membrane Potential (Volts) or 'Input'".]
Consider the special case T_{ij} = T_{ji}, and define

    E \equiv -\tfrac{1}{2} \sum_{i \ne j} T_{ij} V_i V_j    [7]

The change \Delta E due to changing a single V_i is

    \Delta E = -\Delta V_i \sum_{j \ne i} T_{ij} V_j    [8]
Thus, the algorithm for altering Vi causes E to be a monotonically decreasing function. State changes will continue until a
least (local) E is reached. This case is isomorphic with an Ising
model. Tij provides the role of the exchange coupling, and there
is also an external local field at each site. When Tij is symmetric
but has a random character (the spin glass) there are known to
be many (locally) stable states (29).
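The monotone decrease of E is easy to verify numerically for a symmetric, zero-diagonal T; this sketch (notation mine) implements the energy and the zero-threshold update it governs:

```python
import numpy as np

def energy(V, T):
    """E = -1/2 sum_{i != j} T_ij V_i V_j, for symmetric T with zero diagonal."""
    return -0.5 * V @ T @ V

def descend(V, T, steps, rng):
    """Asynchronous zero-threshold updates. Each readjustment changes E by
    -Delta V_i * sum_j T_ij V_j, which is never positive, so E only falls."""
    for _ in range(steps):
        i = rng.integers(len(V))
        h = T[i] @ V                   # T_ii = 0, so the j = i term drops out
        if h > 0:
            V[i] = 1.0
        elif h < 0:
            V[i] = 0.0
    return V
```

State changes therefore continue until a local minimum of E is reached, which is the spin-glass picture invoked in the text.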
Monte Carlo calculations were made on systems of N = 30
and N = 100, to examine the effect of removing the Tij = Tji
restriction. Each element of Tij was chosen as a random number
between -1 and 1. The neural architecture of typical cortical
regions (30, 31) and also of simple ganglia of invertebrates (32)
suggests the importance of 100-10,000 cells with intense mutual interconnections in elementary processing, so our scale of
N is slightly small.
The dynamics algorithm was initiated from randomly chosen starting configurations. For N = 30 the system never displayed an ergodic wandering through state space. Within a time of about 4/W it settled into limiting behaviors, the commonest being a stable state. When 50 trials were examined for a particular such random matrix, all would result in one of two or three end states. A few stable states thus collect the flow from most of the initial state space. A simple cycle also occurred occasionally, for example A \to B \to A \to B \to \cdots.
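That experiment is straightforward to reproduce in outline. Everything below (the scale N = 8, the step budget, the names) is my own choice, but the qualitative observation is the same: many random starts funnel into a few end states.

```python
import numpy as np

def end_states(T, trials, steps, rng):
    """Run many random starting configurations on one fixed T and collect
    the final configurations reached by random asynchronous updates."""
    N = len(T)
    finals = set()
    for _ in range(trials):
        V = rng.integers(0, 2, N)
        for _ in range(steps):
            i = rng.integers(N)
            h = T[i] @ V
            if h > 0:
                V[i] = 1
            elif h < 0:
                V[i] = 0
        finals.add(tuple(V))
    return finals
```

With a nonsymmetric random T there is no energy function, so a trial can also end mid-cycle (the A \to B \to A \to B behavior noted above) rather than in a true stable state.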
    -\sum_i p_i \ln p_i    [9]

    P = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{N/2}^{\infty} e^{-x^2 / 2\sigma^2}\, dx    [10]
[Figure: curves for N = 100, with n = 1 and n = 10; horizontal scale in bits.]
The phase space flow is apparently dominated by attractors which are the nominally assigned memories, each of which dominates a substantial region around it. The flow is not entirely deterministic, and the system responds to an ambiguous starting state by a statistical choice between the memory states it most resembles.
Were it desired to use such a system in an Si-based content-addressable memory, the algorithm should be used and modified to hold the known bits of information while letting the others adjust.
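A sketch of that modification, with clamped known bits and zero thresholds (the sweep schedule and all names are mine):

```python
import numpy as np

def recall_with_clamps(T, known, sweeps, rng):
    """Hold the known bits fixed while the remaining bits readjust.
    `known` maps neuron index -> its clamped value (0 or 1)."""
    N = len(T)
    V = rng.integers(0, 2, N)          # unknown bits start at random values
    for i, b in known.items():
        V[i] = b                       # clamped bits never change below
    free = np.array([i for i in range(N) if i not in known])
    for _ in range(sweeps):
        for i in rng.permutation(free):
            h = T[i] @ V
            if h > 0:
                V[i] = 1
            elif h < 0:
                V[i] = 0
    return V
```

A sufficiently large clamped fragment biases every free neuron's input toward the one stored memory consistent with it, which is what yields an entire memory from a subpart of sufficient size.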
    \sum_{j=1}^{N} C_{ij} x_j    [12]
    \Delta T_{ij} = A \sum_s (2V_i^{s+1} - 1)(2V_j^s - 1)    [13]
18. Longuet-Higgins, H. C. (1968) Nature (London) 217, 104-105.
19. Kohonen, T. (1977) Associative Memory-A System-Theoretic Approach (Springer, New York).
20. Willwacher, G. (1976) Biol. Cybern. 24, 181-198.
21. Anderson, J. A. (1977) Psychol. Rev. 84, 413-451.
22. Perkel, D. H. & Bullock, T. H. (1969) Neurosci. Res. Symp. Summ. 3, 405-527.
23. John, E. R. (1972) Science 177, 850-864.
24. Roney, K. J., Scheibel, A. B. & Shaw, G. L. (1979) Brain Res. Rev. 1, 225-271.
25. Little, W. A. & Shaw, G. L. (1978) Math. Biosci. 39, 281-289.
26. Shaw, G. L. & Roney, K. J. (1979) Phys. Lett. A 74, 146-150.
27. Hebb, D. O. (1949) The Organization of Behavior (Wiley, New York).