
Critical Dynamics and Cyclic Memory Retrieval in Non-reciprocal Hopfield Networks

Shuyue Xue¹,², Mohammad Maghrebi¹, George I. Mias¹,³,⁴ and Carlo Piermarocchi*¹

arXiv:2501.00983v1 [cond-mat.dis-nn] 2 Jan 2025

¹ Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan 48824, USA
² Department of Computational Mathematics, Science and Engineering, Michigan State University, East Lansing, Michigan 48824, USA
³ Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan 48824, USA
⁴ Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824, USA

Abstract
We study Hopfield networks with non-reciprocal couplings that induce switches between memory patterns. Dynamical phase transitions occur between phases of no memory retrieval, retrieval of multiple point attractors, and limit-cycle attractors. The limit cycle phase is bounded by two critical regions: a Hopf bifurcation line and a fold bifurcation line, each with unique dynamical critical exponents and sensitivity to perturbations. A Master Equation approach numerically verifies the critical behavior predicted analytically. We discuss how these networks could model biological processes near a critical threshold of cyclic instability, evolving through multi-step transitions.

Contents

1 Introduction
2 Cyclic Hopfield networks
  2.1 Mean field solution
  2.2 Near cusp dynamics
3 Critical Dynamics
  3.1 Near the cusp with βλ− = 0 and βλ+ ≈ 1
  3.2 Hopf bifurcation line: βλ− ≠ 0 while βλ+ = 1
  3.3 Limit cycle phase near the Hopf line
  3.4 Fold line
  3.5 Near the fold line
  3.6 External drive
4 Master Equation
  4.1 Exact diagonalization of Liouvillian
5 Glauber Dynamics
  5.1 Numerical tests of critical behavior
6 Conclusions

* Email: [email protected]

1 Introduction
The Hopfield model [1] is a spin glass model introduced to describe neural networks. It addresses the
issue of content-addressable, or associative, memory, i.e., how some complex extended systems are
able to recover a host of memories using only partial or noisy information. The statistical properties of
the Hopfield models have been extensively investigated (see, e.g., [2, 3] for a review of earlier works).
In typical Hopfield models, neural interactions are symmetric, but as Hopfield pointed out [1], the
introduction of asymmetric interactions can result over time in transitions between memory patterns.
In addition to the Hebbian coupling,
$$J_{ij} = \frac{1}{N}\sum_{\nu=1}^{p} \xi_i^{\nu}\,\xi_j^{\nu}, \qquad (1)$$
where ξ_i^ν, with ν = 1, . . . , p, are spin memory patterns, one can introduce asymmetric interactions of
the form
$$J'_{ij} = \frac{\lambda}{N}\sum_{\nu=1}^{q} \xi_i^{\nu+1}\,\xi_j^{\nu}, \qquad (2)$$
with q < p. With this modification, some of the spin memory patterns become metastable and can
be replaced in time by other patterns. This allows for the storage and retrieval of a limited number
of temporal sequences of spin patterns. While incoherent asymmetry acts as a noise mechanism that
can help stabilize memory retrieval [4], asymmetric interactions of the form in Eq. 2 enable coherent
pattern evolution in time. Moreover, the addition of terms of the form in Eq. 2 makes the spin system
non-reciprocal. Recent studies have examined how non-reciprocity can induce novel classes of phase
transitions that cannot be described using a free energy [5].
Dynamical spin models that can describe coherent temporal sequences, such as the class of Hopfield
models above, are particularly interesting in the study of out-of-equilibrium processes. These models
have recently been applied beyond modeling brain functions to many biological and biomedical systems, such as models of cell reprogramming [6, 7], classification of disease subtypes [8], or disease
progression models [9, 10]. In particular, Szedlak et al. [11] used a Hopfield model with both terms
in Eqs. 1 and 2 to describe the dynamics of gene expression patterns in the cell cycle of cancer and
yeast cells. A key finding of the paper was the necessity of finely adjusting the model’s parameters,
specifically the noise level and the relative strength between symmetric and asymmetric interactions,
embodied by the parameter λ in Eq. 2. This adjustment guarantees that the model maintains cyclic behavior while remaining sufficiently responsive to perturbations, such as targeted inhibitions that result
in observable changes. This parameter tuning aligns with the idea of operating at the "edge of chaos",
where biological systems exhibit both maximal robustness and sensitivity to external conditions [12].
Here, we study two-memory Hopfield networks with N sites, characterized by asymmetric interactions that drive the system toward a critical threshold of oscillatory instability. The non-reciprocity
leads to time-reversal symmetry breaking and introduces an extended region of criticality in the phase
diagram, a feature typically observed in biological systems [13, 14]. Similar behavior can be observed

in other classes of non-reciprocal kinetic Ising models with on-site interactions between two different
types of spin [15]. These asymmetric models can exhibit noise-induced interstate switching leading to
non-equilibrium currents or oscillations [16]. Biological networks often operate far from the N → ∞
limit. For instance, the cell cycle program only involves a few hundred genes. The role of fluctuations
and their dependence on N becomes, therefore, critical in their dynamical behavior. Here, we focus on
the role of fluctuations in dynamical phase transitions to limit cycles. We find that the limit cycle phase
is bounded by two critical lines: a Hopf bifurcation line and a fold bifurcation line. The autocorrelation
function C(τ) on these lines scales as C ∼ C̃(τ/N ζ ), where C̃ are universal scale-invariant functions
and ζ is a dynamical critical exponent previously introduced to characterize out-of-equilibrium critical
behavior [17]. The dynamical exponent ζ = 1/2 on the Hopf line and ζ = 1/3 on the fold line. The
sensitivity to an external perturbation of strength F in these two critical regions also differs. On the
Hopf line, the system exhibits enhanced sensitivity to periodic perturbations resonant with the limit
cycle frequency and features a response time that scales as |F |−2/3 . In contrast, an external bias on the
fold line can only induce switches between memory patterns in a limited and controlled way, without
ever pushing the state into sustained limit cycles. Moreover, the characteristic response time is faster
and scales as |F|^{−1/2}. While it was established that Hopf oscillators form a dynamic universality class relevant in biology, such as in the sensitivity of hair cells in the cochlea [18, 19], the fold line identifies a distinct critical behavior that could help in understanding transitions from stable points to cycles or to more complex multi-step biological programs.
In Sect. 2, we introduce a two-memory non-reciprocal Hopfield model and analyze its phase diagram in a mean-field approximation. We show that the dynamical phase diagram is characterized by a cyclic behavior phase, bounded by two critical lines: the Hopf and fold bifurcation lines. In Sect. 3, we examine the critical properties of the system on and near these two lines using analytical methods. We introduce an exact form of the Master Equation for the system in Sect. 4, explicitly accounting for the spin symmetry under pattern exchange. Based on this Master Equation approach, we explore the system in the large N limit using a Glauber Monte Carlo procedure in Sect. 5. In Sect. 5.1, we numerically test the critical behavior and dynamic critical exponents derived analytically in Sect. 3. Finally, we summarize conclusions in Sect. 6.

2 Cyclic Hopfield networks


We consider Hopfield networks with N Ising spins σ_i = ±1 interacting through non-reciprocal couplings J_{ij} ≠ J_{ji}. We focus on a network encoding two memory patterns, ξ_i^1 and ξ_i^2, with couplings of the form

$$J_{ij} = \frac{\lambda_+}{N}\left(\xi_i^1 \xi_j^1 + \xi_i^2 \xi_j^2\right) + \frac{\lambda_-}{N}\left(\xi_i^1 \xi_j^2 - \xi_i^2 \xi_j^1\right). \qquad (3)$$
The term proportional to λ+ describes the Hebbian coupling, while λ− introduces a bias between the two memory patterns. By applying the Mattis gauge transformation [20] to the spins, σ_i → ξ_i^1 σ_i, J_{ij} reduces to
$$J_{ij} = \frac{\lambda_+}{N}\left(1 + \xi_i \xi_j\right) + \frac{\lambda_-}{N}\left(\xi_j - \xi_i\right), \qquad (4)$$
where ξ_i = ξ_i^1 ξ_i^2, which is equivalent to setting the first memory pattern to all spins up. The symmetric case, with λ− = 0, has been previously introduced and solved by van Hemmen [21]. In this two-memory Hopfield network, the N spins separate into two sub-networks, which we call the similarity (S) and differential (D) subnetworks [22]: S corresponds to spins with ξ_i = 1 (i.e., ξ_i^1 = ξ_i^2) and D to spins with ξ_i = −1 (i.e., ξ_i^1 = −ξ_i^2). We can then define two magnetizations along the two memory patterns:

$$m_1 = \frac{1}{N}\sum_{j\in S\cup D} \sigma_j \qquad (5)$$
$$m_2 = \frac{1}{N}\sum_{j\in S} \sigma_j - \frac{1}{N}\sum_{j\in D} \sigma_j \qquad (6)$$

To describe the dynamics of this Ising system statistically, we introduce a master equation for the probability distribution p_S(σ, t) of a similarity spin σ in S,

$$\frac{\partial p_S(\sigma,t)}{\partial t} = -p_S(\sigma)\, w_S(\sigma) + p_S(-\sigma)\, w_S(-\sigma),$$
where w_S(σ) is the spin-flip transition rate. Assuming the system is in a thermal reservoir with inverse
temperature β = 1/kB T , the spin-flip transition rates take the form

$$w_S(\sigma) = \frac{1 - \sigma \tanh \beta h_S}{2\tau_0}, \qquad (7)$$

where the field hS is the same for all spins in the subnetwork S and can be written as

$$h_S = (\lambda_+ - \lambda_-)\, m_1 + (\lambda_+ + \lambda_-)\, m_2, \qquad (8)$$

while τ0 is an arbitrary constant that determines the time scale of Ising dynamics, originally introduced
in one dimension by Glauber [23] and extended to higher dimensions by Suzuki and Kubo [24]. The
master equation for p D (σ, t), the probability distribution for a spin σ in the differential subnetwork D,
is similar to the one in Eq.(7) but with a field:

$$h_D = (\lambda_+ + \lambda_-)\, m_1 - (\lambda_+ - \lambda_-)\, m_2. \qquad (9)$$
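The structure of the fields in Eqs. 8 and 9 can be checked directly: building J_{ij} from Eq. 4 for random patterns and spins, and assuming the standard Hopfield local field h_i = Σ_j J_{ij} σ_j, the field collapses onto h_S or h_D depending only on whether spin i belongs to S or D. A minimal numerical sketch (system size and coupling values below are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam_p, lam_m = 400, 1.2, 0.5

xi = rng.choice([-1, 1], size=N)      # xi_i = xi^1_i xi^2_i after the Mattis gauge
sigma = rng.choice([-1, 1], size=N)   # arbitrary spin configuration

# Couplings of Eq. 4: J_ij = (lam_p/N)(1 + xi_i xi_j) + (lam_m/N)(xi_j - xi_i)
J = (lam_p / N) * (1 + np.outer(xi, xi)) + (lam_m / N) * (xi[None, :] - xi[:, None])

# Magnetizations of Eqs. 5 and 6
m1 = sigma.mean()
m2 = (sigma * xi).mean()

h = J @ sigma                                        # local fields h_i
h_S = (lam_p - lam_m) * m1 + (lam_p + lam_m) * m2    # Eq. 8
h_D = (lam_p + lam_m) * m1 - (lam_p - lam_m) * m2    # Eq. 9

assert np.allclose(h[xi == 1], h_S)   # every spin in S sees the same field
assert np.allclose(h[xi == -1], h_D)  # every spin in D sees the same field
```

The collapse onto just two field values is what reduces the N-spin problem to the two magnetizations m1 and m2.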

2.1 Mean field solution


A mean-field system of equations for 〈m1 〉 and 〈m2 〉 can then be obtained from the master equations
for pS (σ, t) and p D (σ, t) by replacing the magnetizations in Eqs.(8, 9) with their expectation values:

$$\langle \dot m_1 \rangle = -\frac{\langle m_1\rangle}{\tau_0} + \frac{1}{2\tau_0}\Big\{\tanh \beta\big[\lambda_a \langle m_1\rangle + \lambda_s \langle m_2\rangle\big] + \tanh \beta\big[\lambda_s \langle m_1\rangle - \lambda_a \langle m_2\rangle\big]\Big\}, \qquad (10)$$
$$\langle \dot m_2 \rangle = -\frac{\langle m_2\rangle}{\tau_0} + \frac{1}{2\tau_0}\Big\{\tanh \beta\big[\lambda_a \langle m_1\rangle + \lambda_s \langle m_2\rangle\big] - \tanh \beta\big[\lambda_s \langle m_1\rangle - \lambda_a \langle m_2\rangle\big]\Big\}, \qquad (11)$$

where the coupling constants λa = λ+ −λ− and λs = λ+ +λ− account for the asymmetric and symmetric
components of the interaction, respectively. To simplify the notation, we drop the 〈. . . 〉 and assume
mean field variables.
In Fig. 1, the phase portrait of the mean-field equations is presented. For λ− = 0, a phase transition occurs at βλ+ = 1. This transition separates the paramagnetic phase (see orbits in a) from the memory retrieval phase (see orbits in f).
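The mean-field flow of Eqs. 10 and 11 can be integrated directly to reproduce this behavior; a forward-Euler sketch (step size, initial condition, and parameter values are illustrative choices):

```python
import numpy as np

def mean_field_step(m1, m2, b_lp, b_lm, dt=0.01, tau0=1.0):
    """One Euler step of Eqs. 10-11; b_lp, b_lm are beta*lambda_+ and beta*lambda_-."""
    la, ls = b_lp - b_lm, b_lp + b_lm          # beta*lambda_a, beta*lambda_s
    tp = np.tanh(la * m1 + ls * m2)
    tm = np.tanh(ls * m1 - la * m2)
    dm1 = (-m1 + 0.5 * (tp + tm)) / tau0
    dm2 = (-m2 + 0.5 * (tp - tm)) / tau0
    return m1 + dt * dm1, m2 + dt * dm2

def evolve(b_lp, b_lm, steps=20000, m0=(0.5, 0.1)):
    m1, m2 = m0
    for _ in range(steps):
        m1, m2 = mean_field_step(m1, m2, b_lp, b_lm)
    return m1, m2

# Paramagnetic regime (as in panel a): the magnetization decays to zero.
m1, m2 = evolve(0.8, 0.0)
assert abs(m1) < 1e-3 and abs(m2) < 1e-3

# Memory retrieval regime (as in panel f): one pattern is retrieved, |m1| > 0.
m1, m2 = evolve(1.3, 0.0)
assert abs(m1) > 0.5
```

For λ− = 0 the flow decouples in the variables m1 ± m2, which is why the trajectory relaxes onto a pure pattern rather than a mixed state.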

[Figure 1, left: six phase portraits in the (m1, m2) plane at (a) βλ+ = 0.8, βλ− = 0.0; (b) βλ+ = 0.9, βλ− = 0.85; (c) βλ+ = 1.0, βλ− = 0.85; (d) βλ+ = 1.3, βλ− = 0.17; (e) βλ+ = 1.3, βλ− = 0.1; (f) βλ+ = 1.3, βλ− = 0.0. Right: phase diagram in the (βλ+, βλ−) plane showing the Limit Cycles, Memory Retrieval, and Paramagnetic regions, bounded by the fold and Hopf bifurcation lines.]

Figure 1: Phase portraits (left) and the phase diagram (right) for the two-memory cyclic Hopfield model. Left: The panels show the dynamical behavior for different values of λ+ and λ−. The six phase portraits (a to f) show the trajectory dynamics, with green arrows indicating the vector fields of the derivatives. In e and f, empty circles represent saddle points, while solid circles denote stable points (sinks). Right: The phase diagram is divided into three regions of different dynamical behavior: Limit Cycles (diagonal lines with a purple background), Memory Retrieval (dotted dark green background), and Paramagnetic (vertical stripes with a light green background). These phases are bounded by bifurcation lines: fold bifurcation lines (dark green lines) and the Hopf bifurcation (purple vertical line). The positions of the six trajectory plots (a–f) are indicated by corresponding labels on the phase diagram.

In the latter phase, either m1 ≠ 0 or m2 ≠ 0, and |m_{1(2)}| → 1 when βλ+ ≫ 1. Symmetric steady-state solutions with m1 = m2 ≠ 0, with |m1| ≤ 1/2, are observed for βλ+ > 1. These solutions, represented as empty circles in f, are mixed memory states equidistant from the two patterns. Such symmetric states are saddle-point solutions and are always unstable in a two-memory scenario. In this reciprocal two-memory model, mixed asymmetric solutions are not permissible, in contrast to what is observed in Hopfield networks with more than two patterns, as shown by Amit et al. [25].
Next, we explore how these stable and unstable fixed points change in the presence of the asymmetric interaction λ−. We rewrite Eqs. 10 and 11 in compact form as ṁ = F(m, λ+, λ−), where m is the vector of magnetizations. In a neighborhood of a fixed point m∗, which is a solution of F(m∗, λ+, λ−) = 0, we can linearize the mean field equations as
$$\dot{\mathbf m} = A \cdot (\mathbf m - \mathbf m^*) \qquad (12)$$
where the Jacobian matrix A, evaluated at the fixed point, can be expressed as
$$A = \begin{pmatrix} -1 + \beta\lambda_+ \Delta + 2\beta\lambda_- \Gamma & \beta\lambda_- \Delta - 2\beta\lambda_+ \Gamma \\ -\beta\lambda_- \Delta - 2\beta\lambda_+ \Gamma & -1 + \beta\lambda_+ \Delta - 2\beta\lambda_- \Gamma \end{pmatrix} \qquad (13)$$
with Δ = 1 − m1² − m2² and Γ = m1 m2.
The stability of steady-state solutions is determined by the eigenvalues of the Jacobian matrix A. This matrix has either two real or two complex conjugate eigenvalues. In the scenario where m = 0, the eigenvalues of A are µ± = βλ+ − 1 ± iβλ−, revealing that for βλ+ < 1 and λ− ≠ 0 the solution m = 0 is stable, with focus-type orbits (see b). The line βλ+ = 1, where the eigenvalues become purely imaginary, is a Hopf bifurcation line. This line is the projection onto the (λ+, λ−) plane of the curve defined in (m1, m2, λ+, λ−) by F(m, λ+, λ−) = 0 and Tr[A(m, λ+, λ−)] = 0 [26].
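The eigenvalues µ± at m = 0 (where Δ = 1 and Γ = 0) can be checked numerically from the Jacobian of Eq. 13; a quick sketch using the parameter values of panel b of Fig. 1:

```python
import numpy as np

b_lp, b_lm = 0.9, 0.85          # beta*lambda_+, beta*lambda_- (panel b of Fig. 1)
m1, m2 = 0.0, 0.0               # fixed point m = 0

Delta = 1 - m1**2 - m2**2
Gamma = m1 * m2

# Jacobian of Eq. 13
A = np.array([[-1 + b_lp * Delta + 2 * b_lm * Gamma,  b_lm * Delta - 2 * b_lp * Gamma],
              [-b_lm * Delta - 2 * b_lp * Gamma,     -1 + b_lp * Delta - 2 * b_lm * Gamma]])

mu = np.linalg.eigvals(A)
expected = np.array([b_lp - 1 + 1j * b_lm, b_lp - 1 - 1j * b_lm])
assert np.allclose(sorted(mu, key=lambda x: x.imag),
                   sorted(expected, key=lambda x: x.imag))
# Re(mu) < 0 for beta*lambda_+ < 1: the paramagnetic focus is stable.
assert np.all(mu.real < 0)
```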
When βλ+ > 1, the phase diagram splits into two distinct regions, contingent upon the existence of solutions with m ≠ 0. The boundary between these regions is a fold bifurcation line (also called a saddle-node bifurcation line), derived by projecting the curve in (m1, m2, λ+, λ−) defined by F(m, λ+, λ−) = 0 and Det[A(m, λ+, λ−)] = 0 onto the (λ+, λ−) plane [26]. Along this line, a single real eigenvalue transitions to zero while its counterpart maintains a negative value. This behavior can be interpreted as a merging of the memory retrieval fixed points, which are stable node-type, and the mixed memory states, which are saddle points. In this system, the non-reciprocal parameter λ− shifts the fixed points, causing the four memory retrieval fixed points to progressively approach the mixed memory states. This convergence facilitates a circular directionality in the orbits, acting as a harbinger of the limit cycle solutions apparent above the fold bifurcation line, where only the unstable solution m = 0 persists.

2.2 Near cusp dynamics


The limit cycle phase can be better described by introducing a complex variable z(t) = m1 − i m2. By approximating tanh(x) ≈ x − x³/3 in Eqs. (10) and (11), we obtain an equation for z(t/τ0):
$$\dot z = (\Lambda - 1)\, z - \frac{\Lambda^2 \bar\Lambda}{2}\, z^2 \bar z + \frac{\bar\Lambda^3}{6}\, \bar z^3, \qquad (14)$$
where Λ = β(λ+ + iλ−) and the bar indicates complex conjugation. The last term in Eq. (14) is an anti-resonant term, which can be eliminated using a smooth change of variables:
$$w = z - \frac{h(\Lambda, \bar\Lambda)}{6}\, \bar z^3, \qquad (15)$$
where h(Λ, Λ̄) = Λ̄³/(3Λ̄ − Λ − 2).

By substituting Eq. (15) into Eq. (14) and retaining only terms up to cubic order in w, we obtain the Poincaré normal form:
$$\dot w = (\Lambda - 1)\, w - \frac{\Lambda^2 \bar\Lambda}{2}\, w^2 \bar w. \qquad (16)$$
For ρ(t) = |w(t)|, we can then write:
$$\dot\rho = \rho\left[\beta\lambda_+ - 1 - \frac{(\beta\lambda_+)^2 + (\beta\lambda_-)^2}{2}\,(\beta\lambda_+)\,\rho^2\right], \qquad (17)$$
which indicates that non-zero steady solutions exist for βλ+ > 1. Focusing near the cusp point at βλ+ = 1 and βλ− = 0, and retaining terms only up to first order in (βλ+ − 1) and βλ−, we find that the amplitude of the limit cycles increases with βλ+ as:
$$\rho_0^2 = 2(\beta\lambda_+ - 1). \qquad (18)$$
This expression gives the amplitude of the limit cycles above the fold line and the amplitude of the memory retrieval below the fold line. To observe the change in the dynamical behavior corresponding to the fold line, we can write the equation for the phase θ(t) = arg[z(t)] from Eq. (14), which keeps the anti-resonant term proportional to z̄³. Then, retaining terms up to first order in (βλ+ − 1) and βλ−, we have:
$$\dot\theta = \beta\lambda_- - \frac{\beta\lambda_+ - 1}{3}\,\sin 4\theta. \qquad (19)$$

Using this equation, we can determine the period of the limit cycles as:
$$\frac{T}{\tau_0} = \frac{1}{4\beta\lambda_-}\int_0^{8\pi} \frac{d\theta}{1 - \alpha \sin\theta} = \frac{1}{\beta\lambda_-}\int_0^{2\pi} \frac{d\theta}{1 - \alpha \sin\theta} = \frac{2\pi}{\beta\lambda_-}\,\frac{1}{\sqrt{1-\alpha^2}}, \qquad (20)$$
where α = (βλ+ − 1)/(3βλ−). Near the vertical Hopf line, the period is determined only by βλ−. As we move right in the region with βλ+ > 1, the period increases and then diverges as we approach the fold line, which, near the cusp point, corresponds to¹
$$\beta\lambda_- = \frac{\beta\lambda_+ - 1}{3}. \qquad (21)$$

3 Critical Dynamics
To study the effect of fluctuations near the critical lines, we modify Eq. (14) as
$$\dot z = (\Lambda - 1)\, z - \frac{|\Lambda|^2 \Lambda}{2}\, |z|^2 z + \frac{\bar\Lambda^3}{6}\, \bar z^3 + \frac{1}{\sqrt{N}}\, \zeta(t), \qquad (22)$$
where we have included the complex-valued white noise variable ζ(t),
$$\langle \zeta(t)\, \bar\zeta(t') \rangle = D\,\delta(t - t'), \qquad (23)$$
to account for noise beyond the mean-field equation. In this section, we set τ0 = 1 to simplify the notation. The constant D is phenomenological, and the scaling with the system size N is chosen to match the standard mean-field equation plus noise for collective models (see, e.g., [17]).
Let us consider the following distinct regions.

3.1 Near the cusp with βλ− = 0 and βλ+ ≈ 1


In this case, the equation is better written in terms of m1 and m2. In the absence of the asymmetric term, the dynamics is governed by a free energy F as
$$\dot m_i = -\frac{\partial F}{\partial m_i} + \frac{1}{\sqrt{N}}\, \xi_i(t), \qquad (24)$$
with a real white noise
$$\langle \xi_i(t)\, \xi_j(t') \rangle = D\,\delta_{ij}\,\delta(t - t'), \qquad (25)$$
where
$$F = \frac{r}{2}\left(m_1^2 + m_2^2\right) + u_1 m_1^4 + u_2 m_2^4 + 2 u_{12}\, m_1^2 m_2^2. \qquad (26)$$
Here r = 1 − βλ+ and u1 = u2 = (βλ+)³/6 ≈ 1/6, u12 = (βλ+)³/2 ≈ 1/2. Note that the model exhibits a Z2 × Z2 symmetry. The phase diagram is determined by the sign of r and of u1 u2 − u12². Since u1 u2 < u12², there are only three phases: m1 = m2 = 0 when βλ+ < 1, and either m1 ≠ 0 = m2 or m1 = 0 ≠ m2 when βλ+ > 1. This is analogous to a multi-critical point in a spin system where anisotropies break the O(n) symmetry along more than one direction (see, e.g., Sect. 4.6 of Ref. [28]).

¹The zero temperature (β = ∞) limit of Eqs. 10 and 11 can be studied by noting that tanh βx → sgn x for β → ∞. The only possible values for the steady states m1 and m2 are the ones compatible with the half sum of two sign functions, which can only give 0, ±1, or ±1/2. The solutions with |m1| = 1 and |m2| = 0, and vice versa, describe perfect memory retrieval. Consider the solution m1 = 1 and m2 = 0. By replacing these values in Eq. 10, we find 1 = (sgn λa + sgn λs)/2, which is possible only for λ+ > λ−. If this condition is violated, the dynamics has limit cycles [27]. The equation for the fold line, which is given by Eq. 21 near the critical temperature, changes asymptotically to a line of slope one as β approaches ∞. Also, the unstable mixed memory solutions with |m1| = |m2| = 1/2 exist in this limit only for λ− = 0. This can be verified, for instance, by replacing the solution m1 = m2 = 1/2 in Eqs. 10 and 11, which gives sgn λ+ + sgn λ− = 1 and sgn λ+ − sgn λ− = 1, possible only for λ− = 0.
To understand the critical behavior at the critical point βλ+ = 1 (and λ− = 0), we consider the stochastic Langevin equation
$$\dot m_1 = -\left(4 u_1 m_1^3 + 4 u_{12}\, m_1 m_2^2\right) + \frac{1}{\sqrt{N}}\, \xi(t), \qquad (27)$$
and a similar equation for m2. The linear term vanishes since r = 0 at the critical point. Now, a rescaling of time and field variables,
$$\tilde t = t/N^{1/2}, \qquad \tilde m_i = N^{1/4} m_i, \qquad (28)$$
leads to a scale-invariant equation (i.e., independent of N),
$$d\tilde m_1/d\tilde t = -\left(4 u_1 \tilde m_1^3 + 4 u_{12}\, \tilde m_1 \tilde m_2^2\right) + \xi(\tilde t), \qquad (29)$$
and similarly for m2. This observation leads to useful scaling relations. For example, the two-time correlation function for t ≫ τ0 can be written as
$$C_{ij}(t, \tau) = \langle m_i(t+\tau)\, m_j(t) \rangle \sim \delta_{ij}\, N^{-1/2}\, \tilde C(\tau/\sqrt{N}), \qquad (30)$$
where C̃ is a universal scaling function. The Kronecker delta follows from the Z2 × Z2 symmetry of the model.
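The rescaling of Eq. 28 can be illustrated on the deterministic part of Eq. 27 with m2 = 0, where the cubic flow ṁ = −4u₁m³ has the exact solution m(t) = m₀/√(1 + 8u₁m₀²t). Fixing the initial condition in the rescaled variable, the rescaled trajectory is independent of N; a sketch (noise switched off, u₁ = 1/6 as at the critical point):

```python
import numpy as np

u1 = 1.0 / 6.0                 # u1 = (beta*lambda_+)^3 / 6 at the critical point

def m_exact(t, m0):
    """Exact solution of dm/dt = -4*u1*m^3 (Eq. 27 without noise, m2 = 0)."""
    return m0 / np.sqrt(1 + 8 * u1 * m0**2 * t)

t_tilde = np.linspace(0, 10, 101)
m0_tilde = 1.0                  # fixed initial condition in the rescaled variable

curves = []
for N in (100, 10000):
    m0 = m0_tilde / N**0.25                  # m = N^{-1/4} m_tilde   (Eq. 28)
    t = t_tilde * N**0.5                     # t = N^{1/2} t_tilde    (Eq. 28)
    curves.append(N**0.25 * m_exact(t, m0))  # back to the rescaled field

# The rescaled trajectories collapse onto a single N-independent curve.
assert np.allclose(curves[0], curves[1])
```

The same collapse, with noise included, underlies the scaling form of Eq. 30 tested numerically in Sect. 5.1.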

3.2 Hopf bifurcation line: βλ− ̸= 0 while βλ+ = 1


In this case, we use the transformation w = e^{−iβλ− t} z, and we assume that the oscillation is sufficiently fast to neglect the anti-resonant terms, which can be viewed as a rotating wave approximation. The resulting equation for w becomes
$$\dot w = -r w - \frac{1}{2}\,|w|^2 w + \frac{1}{\sqrt{N}}\, \zeta(t), \qquad (31)$$
where we have replaced Λ ≈ 1 in the nonlinear term. Note that the rotating wave approximation is equivalent to the Poincaré transformation in Eq. 15 for large βλ− and βλ+ = 1. While the Poincaré method is more general and also valid in the small βλ− limit, we will discuss this case using the rotating wave ansatz, which provides a more intuitive interpretation. Interestingly, the symmetry is now O(2) rather than Z2 × Z2. We can still describe the dynamics by a free energy defined as
$$\tilde F = r\,|w|^2 + \frac{1}{4}\,|w|^4. \qquad (32)$$
Similar approaches have appeared before [29, 15, 30, 31]. Scaling relations similar to the ones in Eq. 28 at the critical point r = 0,
$$\tilde t = t/N^{1/2}, \qquad \tilde w = N^{1/4} w, \qquad (33)$$
lead to a scale-invariant equation, and to
$$\langle w(t)\, \bar w(0) \rangle = N^{-1/2}\, \tilde C(t/\sqrt{N}). \qquad (34)$$

The universal scaling function C̃ differs from the previous case because the underlying symmetries
and the dynamics are different. We will show in Sect. 5.1 that this scaling behavior is consistent with
numerical simulations.

3.3 Limit cycle phase near the Hopf line
The continuous O(2) symmetry breaking in the ordered (limit cycle) phase, in the regime where the rotating wave approximation applies, results in a Goldstone mode, which is susceptible to noise. Defining w = ρ0 e^{iϑ}, the dynamics of the phase is given by
$$\dot\vartheta = \frac{1}{\rho_0 \sqrt{N}}\, \xi(t), \qquad \langle \xi(t)\,\xi(t')\rangle = D\,\delta(t - t'). \qquad (35)$$

It then follows that ⟨(ϑ(t) − ϑ(0))²⟩ ∼ [D/(ρ0² N)]\,t. Therefore, the perfect oscillations in the limit cycle phase are suppressed by noise at any finite N as (restoring z = e^{iβλ− t} w)
$$C_z(t, \tau) = \langle z(t+\tau)\, \bar z(t) \rangle = \rho_0^2\, e^{\,i\beta\lambda_- \tau - D\tau/(2\rho_0^2 N)}. \qquad (36)$$

Therefore, the oscillations in the correlation function are damped with a characteristic time T ∼ ρ02 N .
Below, we will show how the exact master equation approach and Glauber simulations reproduce this
damping effect for finite systems. Deep in the limit cycle phase and/or closer to the fold line, the limit
cycle dynamics is not uniform (i.e., not governed by a single frequency). However, we later show that
the oscillations are similarly damped.
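The damping envelope follows from Gaussian phase diffusion under Eq. 35, since ⟨e^{iΔϑ}⟩ = e^{−⟨Δϑ²⟩/2} for a Gaussian Δϑ, giving e^{−Dτ/(2ρ0²N)}. An Euler–Maruyama sketch (parameter values are arbitrary; the tolerance reflects sampling error over a finite ensemble):

```python
import numpy as np

rng = np.random.default_rng(1)
D, rho0, N = 1.0, 1.0, 100.0
dt, steps, samples = 0.1, 500, 20000
t_final = dt * steps            # tau = 50

# Eq. 35: dtheta = xi(t) dt / (rho0 sqrt(N)), with <xi xi'> = D delta(t - t').
theta = np.zeros(samples)
for _ in range(steps):
    theta += rng.normal(0.0, np.sqrt(D * dt) / (rho0 * np.sqrt(N)), size=samples)

# Envelope of the correlation function: <cos(theta(tau) - theta(0))>
C_num = np.cos(theta).mean()
C_pred = np.exp(-D * t_final / (2 * rho0**2 * N))
assert abs(C_num - C_pred) < 0.02
```

The decay time grows with N, so the limit cycle becomes perfectly coherent only in the thermodynamic limit, as stated in the text.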

3.4 Fold line


Fluctuations on the fold line can be studied by adding a white noise term to Eq. 19. As one approaches the transition line βλ− = (βλ+ − 1)/3, the frequency of the limit cycle vanishes, and θ describes a soft mode. On the other hand, the amplitude is a fast variable that relaxes to the constant value ρ0² = 2(βλ+ − 1), as shown above. After making the transformation θ → θ + π/8 and rescaling the noise strength and time using appropriate powers of βλ−, we obtain the stochastic equation
$$\dot\theta = 1 - \cos(4\theta) + \frac{1}{\sqrt{N}}\, \xi(t). \qquad (37)$$
Expanding around a fixed point, say θ = 0, we find to the first nonzero order
$$\dot\theta \approx 8\theta^2 + \frac{1}{\sqrt{N}}\, \xi(t). \qquad (38)$$
N
It follows from this equation that a small negative θ slowly converges to θ = 0, while a small positive θ slowly diverges from θ = 0 before a quick phase slip occurs from 0⁺ to π/2⁻. For an initial condition with θ(t = 0) < 0, the phase variable converges to θ = 0 as
$$\theta(t) \sim -\frac{1}{8t}, \qquad t \to \infty. \qquad (39)$$
The divergence for θ0 = θ (t = 0) > 0 is slow as well: the phase variable spends a time of the order
t ∼ 2/θ0 near θ = 0 before a quick escape to a value close to, but below, θ = π/2. Without noise,
depending on the initial condition, the phase variable converges to one of the fixed points (in the above
scenario, it would be θ = 0, π/2). However, the noise qualitatively changes this picture.
As shown above, without nonlinearity, the noise will induce a mean square displacement given
by 〈(θ (t) − θ0 )2 〉 = 2Dt/N . Therefore, even with θ0 < 0, noise would induce excursions to θ > 0,
followed by a long plateau, and then a quick slip to π/2− just below π/2. This is again followed by
a noise-induced excursion to π/2+ slightly above π/2, another long plateau, and then a phase slip to
π− , and so on. The resulting effect is a slow net rotation of the complex order parameter. Note that
this rotation disappears as N → ∞ since the noise is suppressed. The following argument gives the
dynamical scaling behavior in N : Suppose we are close to θ = 0. At short times, the nonlinearity is

[Figure 2: trajectories θ(t) for t ∈ [0, 100].]

Figure 2: Representative trajectories at the fold transition as a function of t, with N = 1300 and D = 1. One can notice several features that are absent deep in the limit cycle phase. First, there is a larger variation between different trajectories, highlighting the role of noise in inducing phase slips. Second, the period of jumps (or the frequency of the limit cycle rotation) is roughly of order T ∼ 10, while deep in the limit cycle phase it is of order 1. Again, this is due to noise, as the period should diverge when N → ∞.

unimportant, while the noise induces a displacement of the order of θ(t)² ∼ t/N. At a sufficiently long time t*, when θ is sufficiently large, and importantly also positive, the nonlinearity becomes relevant, making the phase variable diverge from 0⁺. A slow dynamics of duration of order 1/θ(t*) is followed by a quick phase slip before arriving at π/2⁻. The time scale t* (or rather θ* = θ(t*)) is determined by minimizing (dropping constant factors for simplicity)
$$t_{\mathrm{tot}} = N\theta_*^2 + \frac{1}{\theta_*}. \qquad (40)$$
It follows that θ* ∼ N^{−1/3} and
$$t_* \sim N^{1/3}. \qquad (41)$$
This means that the frequency of oscillations (at the critical point) goes to zero as N^{−1/3}. This behavior
is also reflected in the correlation function, which for t ≫ τ0 scales as
$$C_z(t, \tau) \sim \rho_0^2\, \tilde C(\tau/N^{1/3}), \qquad (42)$$
with C̃ being a scale-free function. This scaling behavior is verified, and the form of the scaling functions is calculated numerically, in Sect. 5.1.
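The optimization behind Eqs. 40 and 41 can be sketched numerically: minimizing t_tot = Nθ² + 1/θ over θ on a grid for several N and fitting log t* against log N recovers the exponent 1/3 (the N values below are illustrative):

```python
import numpy as np

theta = np.logspace(-4, 0, 200001)          # trial values of theta_*

Ns = np.array([1e3, 1e4, 1e5, 1e6])
t_stars = []
for N in Ns:
    t_tot = N * theta**2 + 1.0 / theta      # Eq. 40 (constant factors dropped)
    t_stars.append(t_tot.min())             # optimal escape time t_*

# Slope of log(t_*) vs. log(N) should approach 1/3 (Eq. 41).
slope = np.polyfit(np.log(Ns), np.log(t_stars), 1)[0]
assert abs(slope - 1.0 / 3.0) < 0.01
```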

3.5 Near the fold line


Our discussion has focused on the phase transition exactly at the fold line. We next discuss what happens slightly away from this line, into either the limit cycle or the fixed memory retrieval phase. In the limit cycle phase, away from the phase transition another scale appears, ε = βλ− − (βλ+ − 1)/3, described by a modification of Eq. 37:
$$\dot\theta = 1 + \varepsilon - \cos(4\theta) + \frac{1}{\sqrt{N}}\, \xi(t). \qquad (43)$$

The motion is highly non-harmonic and resembles a step-wise rather than a smooth linear increase of the phase variable (hence, it is not described by a single frequency). Next, we investigate whether the argument leading to Eq. 36 still holds and a damping with a characteristic time T ∼ N appears. The noiseless version of Eq. 43 admits an exact solution. While the precise form of this solution is not directly used in the following discussion, we report it for completeness:
$$\theta_0(t) = \frac{1}{2}\tan^{-1}\left[\frac{\varepsilon \tan\!\left(2\sqrt{\varepsilon(\varepsilon+2)}\,(t - t_0)\right)}{\sqrt{\varepsilon(\varepsilon+2)}}\right]. \qquad (44)$$
p
The characteristic oscillation frequency can then be obtained as ω0 ∝ 2√(ε(ε+2)). To describe small fluctuations around this (noiseless) solution, we can take t0 → −f(t) and expand the equation of motion to first order in f(t). Since f(t) = const is an exact solution, the expansion only involves the time derivative, and we obtain
$$\frac{4\varepsilon(\varepsilon+2)}{\cos\!\left(4t\sqrt{\varepsilon(\varepsilon+2)}\right) + \varepsilon + 1}\,\dot f + O(f^2) = \frac{1}{\sqrt{N}}\, \xi(t). \qquad (45)$$

Also, we note that
$$\theta(t) = \theta_0(t) + \frac{4\varepsilon(\varepsilon+2)}{\cos\!\left(4t\sqrt{\varepsilon(\varepsilon+2)}\right) + \varepsilon + 1}\, f(t) + O(f^2). \qquad (46)$$
The same prefactor appears in both equations above, and we will denote it by θ1(t). We can then show that:
$$C_z(t, \tau) \sim \rho_0^2\, \exp\!\left[-\frac{D}{2N}\,\theta_1^2(t)\int_t^{t+\tau} \frac{dt'}{\theta_1^2(t')}\right]. \qquad (47)$$
The last term decays with time roughly exponentially (when coarse-graining the features over each cycle), approximately as exp(−Dτ/2N), as in Eq. 36. We conclude that the latter equation is more general than the assumptions that were used to derive it, and is likely valid throughout the limit cycle phase. Indeed, the above equation suggests that the anharmonicity in the oscillations can be made uniform by reparametrizing time as dt̃ = dt/θ1²(t), leading to an equation similar to Eq. 35 that describes the dynamics of θ(τ).
Near the fold line, the limit cycle frequency scales as ω0 ∼ √ε. Comparing this with the behavior on the critical line, where ω ∼ N^{−1/3}, a rescaled variable εN^{2/3} emerges that governs the crossover between the two limits. This scaling follows from an application of the Arrhenius law on the other side of the phase transition, where the point memory retrieval phase emerges. To this end, we consider ε < 0, describing the point memory phase.
We can approximate the dynamics by introducing a tilted sine-Gordon effective potential:
$$V(\theta) = -(1 - |\varepsilon|)\,\theta + \frac{1}{4}\sin 4\theta. \qquad (48)$$
For ε < 0, a small barrier emerges, whose height scales² as ΔV ∼ |ε|^{3/2}. Now, according to the Arrhenius law, we find the decay rate Γ ∼ exp(−β_eff ΔV), where the effective temperature characterizing the noise strength scales as β_eff ∼ N. Therefore, Γ ∼ exp(−A N|ε|^{3/2}) and
$$C_z(t, \tau) \sim e^{-\Gamma \tau}. \qquad (49)$$
Note that the same scaling variable (|ε|^{3/2} N) governs both sides of the fold transition.

²More precisely, V(−x0) − V(x0) ∼ |ε|^{3/2}, where V′(±x0) = 0.
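The barrier scaling can be checked from Eq. 48: setting V′(±x0) = 0 gives cos(4x0) = 1 − |ε|, and a short calculation yields a barrier |ΔV| = (sin a − a cos a)/2 with a = 4x0 = arccos(1 − |ε|), which behaves as a³/6 ∼ |ε|^{3/2} for small |ε|. A log–log fit recovers the exponent 3/2 (the ε values below are illustrative):

```python
import numpy as np

def barrier(eps):
    """Barrier height of V(theta) = -(1-|eps|)*theta + sin(4*theta)/4 (Eq. 48).

    The extrema nearest theta = 0 satisfy V'(+-x0) = 0, i.e.
    cos(4*x0) = 1 - |eps|; the barrier is |V(x0) - V(-x0)|.
    """
    a = np.arccos(1.0 - abs(eps))           # a = 4*x0
    return (np.sin(a) - a * np.cos(a)) / 2.0

eps = np.array([1e-4, 1e-3, 1e-2])
dV = barrier(eps)

# Log-log slope: Delta V ~ |eps|^{3/2}
slope = np.polyfit(np.log(eps), np.log(dV), 1)[0]
assert abs(slope - 1.5) < 0.05
```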

[Figure 3: effective potential V(θ) vs. θ for ε = −0.5, 0, 0.5.]

Figure 3: Effective potential V(θ) for different values of ε. Depending on its sign, ε alters the steepness of V(θ), thereby affecting the system's stability. A negative ε initiates an uphill start in V, which poses a potential hill that keeps θ in a metastable state until it overcomes the hill. For ε = 0, V starts flat, then slips down at a faster rate than in the negative-ε scenario. A positive ε triggers an immediate downhill movement in V, swiftly driving the system into the oscillatory phase at an even faster rate.

3.6 External drive


Let us now consider the effect of an external drive on systems at the two critical lines. We assume that the external drive has the form F e^{iωt}, where ω is nearly resonant with the cycle frequency, determined by λ− near the Hopf line and approaching zero near the fold line. On the Hopf line, we can shift to a frame rotating at ω by setting w = e^{−iωt} z, which gives an equation similar to Eq. 31,
$$\dot w = -i\delta w - \frac{1}{2}\,|w|^2 w + F, \qquad (50)$$
with F replacing the noise term, and the detuning δ = βλ− − ω. By rescaling to units t̃ = tF^{2/3} and w̃ = wF^{−1/3}, Eq. (50) leads to
$$w(t) \propto F^{1/3}\, \tilde w(tF^{2/3},\, \delta F^{-2/3}), \qquad (51)$$
where w̃ is a parameter-free function. This suggests that, at δ = 0, the response gain w/F ∼ F −2/3
diverges for small perturbations. Therefore, near the Hopf line, the system behaves like a filter with
larger gain for weaker perturbations. This enhanced sensitivity at criticality is known to be relevant
in biological functions, such as in the auditory sensitivity of hair cells in the cochlea [19]. The scaling
analysis also shows that the dynamics at criticality is slowed down by a factor F^{−2/3}: although weaker perturbations yield a higher gain, the oscillator also takes longer to respond to the external drive.
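The diverging gain at δ = 0 can be illustrated by integrating Eq. 50 to its steady state. The sketch below uses a simple forward-Euler scheme with illustrative drive strengths; at δ = 0 the stationary amplitude solves |w|²w/2 = F, so the gain |w|/F scales as F^{−2/3}:

```python
import math

def steady_amplitude(F, delta=0.0, dt=0.05, t_max=5000.0):
    """Integrate w' = -i*delta*w - (1/2)|w|^2 w + F (Eq. 50) to its steady state."""
    w = 0.0 + 0.0j
    for _ in range(int(t_max / dt)):
        w += dt * (-1j * delta * w - 0.5 * abs(w) ** 2 * w + F)
    return abs(w)

F1, F2 = 1e-3, 1e-1
g1, g2 = steady_amplitude(F1) / F1, steady_amplitude(F2) / F2
# At delta = 0 the fixed point is |w| = (2F)^(1/3), so the gain |w|/F ~ F^(-2/3).
print(g1 / g2, (F1 / F2) ** (-2.0 / 3.0))
```

The weaker drive produces the larger gain, as stated in the text: here the gain ratio follows (F1/F2)^{−2/3}.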
The response behavior on the fold line is qualitatively different. Since, without noise, the system
is frozen on the fold line, we consider a constant complex drive F with a phase related to its relative
strength on m1 and m2 . Retaining only the phase dynamics, we find a modified Eq. 43

θ̇ = 1 + ε − cos(4θ) + Im(F e^{−iθ}).   (52)

The term e^{−iθ} can only be ±1 or ±i, except during a fast switch between memory patterns. Consider then a system initially frozen on the fold line (without noise) or in the memory retrieval phase with a small ε < 0. The last term in Eq. 52 can shift the value of ε by ±Re F or ±Im F, and a switch happens only if the result is positive. Since the sign of the shift is state-dependent, the maximum number of memory switches is limited to two. In other words, a static drive F can never push the system at criticality into a phase with sustained limit cycles. Such a drive can only switch between memory patterns in a limited and controlled way. Finally, near an equilibrium position, a scaling analysis shows that
θ(t) ∝ F^{1/2} θ̃(t F^{1/2}, ε/F)   (53)

where θ̃ is parameter-independent. This suggests that the response time on the fold line scales as F^{−1/2} and is faster than the F^{−2/3} dependence on the Hopf line.
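The two-switch bound can be seen in a minimal sketch: integrating Eq. 52 with a small negative ε and an illustrative static drive F, chosen here so that both Im F > 0 and −Re F > 0 (so the shift of ε is positive at θ = 0 and θ = π/2 but negative at θ = π), the phase advances from 0 to just below π and then locks, rather than entering a limit cycle. The value of F is an assumption for demonstration only.

```python
import cmath
import math

def drive_phase(eps, F, t_max=200.0, dt=1e-3):
    """Forward-Euler integration of the driven phase equation (Eq. 52):
    theta' = 1 + eps - cos(4*theta) + Im(F * e^{-i*theta}), starting at theta = 0."""
    th, th_max = 0.0, 0.0
    for _ in range(int(t_max / dt)):
        th += dt * (1.0 + eps - math.cos(4.0 * th) + (F * cmath.exp(-1j * th)).imag)
        th_max = max(th_max, th)
    return th, th_max

# Illustrative static drive with Im F > 0 and -Re F > 0.
eps = -0.02
F = 0.2 * (-1.0 + 1.0j) / math.sqrt(2.0)
th_final, th_max = drive_phase(eps, F)
print(th_final)  # settles just below pi: two switches, then the system refreezes
```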

4 Master Equation
We now introduce a formulation for the Master Equation to describe the full dynamics of the network,
allowing us to explore exactly the critical behavior studied in the previous section. Taking into ac-
count the separation of the full network into similarity and differential networks, we can rewrite the
probability distribution at time t for a given configuration of all spins (σ1 , σ2 , · · · , σN ) = {σi } as:

P({σi } , t) = P̃(MS , M D , t), (54)

where the variables

MS(D) ∈ {−NS(D), −NS(D) + 2, · · · , NS(D)}

identify the sum of the spin configuration {σi} over the subnetworks S and D. Each value of (MS, MD) is associated with a number of equivalent spin configurations given by:

g(MS, MD) = (NS choose N⁺_{MS}) · (ND choose N⁺_{MD}),

where N±_{MS(D)} = (NS(D) ± MS(D))/2 indicate the number of spins up or down for a given MS(D). This degeneracy can be taken into account by defining a probability distribution

P(MS, MD, t) = g(MS, MD) P̃(MS, MD, t),   (55)

which satisfies:

Σ_{MS,MD} P(MS, MD, t) = 1,   (56)

and its dynamics are determined by the Master Equation:

∂P(MS, MD, t)/∂t = I_in − I_out,   (57)
where

I_in = N⁺_{MS+2} w⁺_S(MS + 2, MD) P(MS + 2, MD, t) + N⁻_{MS−2} w⁻_S(MS − 2, MD) P(MS − 2, MD, t)
     + N⁺_{MD+2} w⁺_D(MS, MD + 2) P(MS, MD + 2, t) + N⁻_{MD−2} w⁻_D(MS, MD − 2) P(MS, MD − 2, t),

I_out = [N⁺_{MS} w⁺_S(MS, MD) + N⁻_{MS} w⁻_S(MS, MD) + N⁺_{MD} w⁺_D(MS, MD) + N⁻_{MD} w⁻_D(MS, MD)] P(MS, MD, t),   (58)

with I_in (I_out) as the flux into (out of) the state (MS, MD), and the ± spin-flip transition rates defined as

w±_S(MS, MD) = (1/(2τ0)) {1 ∓ tanh[(2/N)(βλ+(MS ∓ 1) − βλ− MD)]}
             = (1/(2τ0)) {1 ∓ tanh[(2β/N) h±_S(MS, MD)]},   (59)

w±_D(MS, MD) = (1/(2τ0)) {1 ∓ tanh[(2/N)(βλ+(MD ∓ 1) + βλ− MS)]}
             = (1/(2τ0)) {1 ∓ tanh[(2β/N) h±_D(MS, MD)]}.   (60)

The terms MS(D) ∓ 1 in Eqs. 59 and 60 take into account the exclusion of the spin self-interaction.
Previous studies have explored the effect of including versus omitting self-interaction terms in Hopfield
dynamics [32, 33] and exclusion of self-interactions has been shown to lead to larger information
storage capacities [33]. These ∓1 shifts, of order 1/N, are irrelevant in the mean-field solutions discussed above, and for the remainder of the paper we focus on the case that omits self-interaction.
The single spin-flip rates in Eqs. 59 and 60 can be rewritten in terms of the local energy change δε±_{S(D)} due to a spin flip:

w±_{S(D)} = (1 + e^{β δε±_{S(D)}})^{−1},   (61)

where δε±_{S(D)} = −h±_{S(D)}(MS, MD) δM±_{S(D)}, with δM±_{S(D)} = ∓2. Any cyclic process that starts from a given
spin configuration and involves flipping only spins within either subnetwork S or D conserves the total
energy, resulting in a net energy change of zero. However, when processes involve spins from both
subnetworks S and D, the energy change depends on the cycle path. Consider, for instance, the two-
spin cycle

(MS, MD) → (MS − 2, MD) → (MS − 2, MD − 2) → (MS, MD − 2) → (MS, MD),   (62)

where two spins up are sequentially flipped down and then back up, with the S spin flipped before the
D spin. The total energy change in this case is δε = −16λ− /N . In contrast, the time-reversed process
in which the spin in D is flipped before the one in S results in δε = +16λ− /N . This path dependence
implies the violation of Kolmogorov’s criterion for the transition rates [34] and, therefore, breaking of
the detailed balance principle.
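The violation of Kolmogorov's criterion can be checked directly from the rates of Eqs. 59 and 60: the product of transition rates around the cycle of Eq. 62 differs from the product along the time-reversed path whenever λ− ≠ 0 (the degeneracy factors N± cancel in the ratio). A short sketch:

```python
import math

def w_S(s, MS, MD, blp, blm, N):
    """Rate of Eq. 59 (tau0 = 1): s = +1 flips an up spin in S (MS -> MS - 2),
    s = -1 flips a down spin (MS -> MS + 2); blp = beta*lambda_+, blm = beta*lambda_-."""
    return 0.5 * (1.0 - s * math.tanh((2.0 / N) * (blp * (MS - s) - blm * MD)))

def w_D(s, MS, MD, blp, blm, N):
    """Rate of Eq. 60, the analogous flip in subnetwork D."""
    return 0.5 * (1.0 - s * math.tanh((2.0 / N) * (blp * (MD - s) + blm * MS)))

def cycle_ratio(MS, MD, blp, blm, N):
    """Product of rates along the cycle of Eq. 62 (S flipped before D) divided by
    the product along the time-reversed path; the degeneracy factors cancel."""
    fwd = (w_S(+1, MS, MD, blp, blm, N) * w_D(+1, MS - 2, MD, blp, blm, N)
           * w_S(-1, MS - 2, MD - 2, blp, blm, N) * w_D(-1, MS, MD - 2, blp, blm, N))
    rev = (w_D(+1, MS, MD, blp, blm, N) * w_S(+1, MS, MD - 2, blp, blm, N)
           * w_D(-1, MS - 2, MD - 2, blp, blm, N) * w_S(-1, MS - 2, MD, blp, blm, N))
    return fwd / rev

print(cycle_ratio(4, 2, 1.3, 0.17, 20))  # != 1: Kolmogorov's criterion violated
print(cycle_ratio(4, 2, 1.3, 0.00, 20))  # == 1: detailed balance restored
```

For any (MS, MD) the log-ratio works out to 16βλ−/N, matching the path-dependent energy change δε = ∓16λ−/N quoted above.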

4.1 Exact diagonalization of Liouvillian


By enumerating the states using a single index k = (MS, MD), we can rewrite the Master Equation in Eq. 57 as:

Ṗ(k, t) = −Σ_{k′} L_{k,k′} P(k′, t)   (63)

where L is the Liouvillian matrix. The all-ones vector is always a left eigenvector of the nonsymmetric
matrix L with eigenvalue Λ1 = 0, which guarantees the probability conservation in Eq. 56, and, for
finite N , all the eigenvalues of the Liouvillian have a positive real part. To study the system’s phase
diagram, we focus in Fig. 4 on the second smallest eigenvalue Λ2 and its dependence on N.
Note that the real part of Λ2 remains nonzero in the region βλ+ < 1 of the phase diagram in Fig. 1,
corresponding to the paramagnetic phase. For βλ+ > 1, the real part of Λ2 converges to zero, allowing
for the memory retrieval of a constant magnetization value as N → ∞. The imaginary part of Λ2 , on
the other hand, changes its behavior as a function of N for βλ+ ∼ 1.3, which is near the fold line of the

Figure 4: Real and imaginary parts of the second smallest eigenvalue of the Liouvillian matrix, Λ2, as a function of βλ+ for a fixed value of βλ− = 0.17 and NS = ND = N/2.

mean-field model, separating the limit cycle and the memory retrieval phases where the oscillations
disappear. Observing the sharp features of the diagram in Fig. 1 by analyzing the eigenvalues of the
Liouvillian is computationally demanding, and even for N = 80, resulting in an L of dimensions 1681
by 1681, the transitions in Fig. 4 are not sharply defined. Below, we will implement a Glauber Monte
Carlo algorithm that allows us to explore significantly larger N .
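A sketch of this construction for a small network (subnet sizes and couplings are illustrative): the Liouvillian is assembled directly from the rates of Eqs. 59 and 60, and the two spectral properties stated above, probability conservation (Λ1 = 0 with the all-ones left eigenvector) and nonnegative real parts, can be verified numerically:

```python
import numpy as np

def liouvillian(NS, ND, blp, blm, tau0=1.0):
    """Liouvillian matrix of Eq. 63, built from the rates of Eqs. 59 and 60.

    States are k = (MS, MD) with MS in {-NS, -NS+2, ..., NS} (similarly for MD).
    With dP/dt = -L P, every column of L sums to zero (probability conservation).
    """
    N = NS + ND
    states = [(a, b) for a in range(-NS, NS + 1, 2) for b in range(-ND, ND + 1, 2)]
    idx = {s: i for i, s in enumerate(states)}
    L = np.zeros((len(states), len(states)))
    for (MS, MD), k in idx.items():
        # (target state, number of flippable spins x single-spin-flip rate)
        moves = [
            ((MS - 2, MD), (NS + MS) / 2 * (1 - np.tanh((2 / N) * (blp * (MS - 1) - blm * MD)))),
            ((MS + 2, MD), (NS - MS) / 2 * (1 + np.tanh((2 / N) * (blp * (MS + 1) - blm * MD)))),
            ((MS, MD - 2), (ND + MD) / 2 * (1 - np.tanh((2 / N) * (blp * (MD - 1) + blm * MS)))),
            ((MS, MD + 2), (ND - MD) / 2 * (1 + np.tanh((2 / N) * (blp * (MD + 1) + blm * MS)))),
        ]
        for target, rate in moves:
            if target in idx:
                r = rate / (2 * tau0)
                L[k, k] += r             # outflow from k
                L[idx[target], k] -= r   # inflow into the target state
    return L

L = liouvillian(10, 10, 1.3, 0.17)
ev = np.linalg.eigvals(L)
print(np.allclose(L.sum(axis=0), 0.0))  # all-ones vector is a left eigenvector
print(ev.real.min() >= -1e-10)          # no eigenvalue with negative real part
```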
The Liouvillian matrix can be used to calculate exact expectation and correlation functions. For
instance, given a probability distribution at t = 0, P(k, 0), the average magnetization along the first
memory pattern as a function of time can be calculated as
⟨m1(t)⟩ = (1/N) Σ_k [M1]_k P(k, t),   (64)

where

P(k, t) = Σ_{k′} [e^{−Lt}]_{k,k′} P(k′, 0),   (65)

and M1 = MS + MD.
Similarly, the two-time correlation function for M1 can be defined as

C1,1(t, τ) = (1/N²) Σ_{k,k̄} [M1]_{k̄} [M1]_k P(k̄, t + τ; k, t).   (66)

The joint probability P(k̄, t + τ; k, t) can be rewritten as

P(k̄, t + τ; k, t) = P(k̄, t + τ | k, t) P(k, t),   (67)

where P(k̄, t + τ | k, t) is the conditional probability of the system being in state k̄ at time t + τ, given it was in state k at time t. This conditional probability can be calculated by shifting the initial condition t → 0 and using

P(k̄, t + τ | k, t) = Σ_{k′} [e^{−Lτ}]_{k̄,k′} P(k′, 0),   (68)

with the initial probability set to P(k′, 0) = δ_{k′,k}. The two-time correlation can then be expressed as [24]

C1,1(t, τ) = (1/N²) Σ_k ⟨M1(τ)⟩_k [M1]_k P(k, t),   (69)

where

⟨M1(τ)⟩_k = Σ_{k̄} [M1]_{k̄} [e^{−Lτ}]_{k̄,k}   (70)

is the expectation of M1 at time τ, given that the system was in configuration k at time 0.


Similar averages and two-time correlations can be defined for other quantities such as M2 and
Z = M1 − i M2 .
Fig. 5 shows the exact 〈m2 (t)〉 for different values of N calculated using the Liouvillian. The initial
state was configured such that m1 (0) = 1 and m2 (0) = 0, with the parameters βλ+ = 1.3 and βλ− =
0.17. This positions the system slightly above the fold line in the phase diagram of Fig. 1.
Fig. 6 shows the two-time correlation function for M2, denoted as C2,2(t, τ), which also depends on N. In smaller systems (N = 50, 100), C2,2(t, τ) quickly drops to zero, indicating that M2(t + τ) becomes uncorrelated with M2(t) as τ increases due to fluctuations, while in larger systems, oscillations in C2,2(t, τ) emerge.


Figure 5: Exact 〈m2 (t)〉 solved from the master equation for different system sizes N with βλ+ = 1.3
and βλ− = 0.17. As N increases, the oscillations become slower and more pronounced.


Figure 6: Two-time correlation function C2,2(t, τ) for M2 at t = 15, βλ+ = 1.3 and βλ− = 0.17, for various N. As N increases, C2,2(t, τ) exhibits well-defined oscillations.

5 Glauber Dynamics
In parallel with deriving the master equations for P(MS, MD, t), we implemented a Glauber dynamics that utilizes the division into subnets, rather than relying on random spin flips across the entire
network. This adaptation not only provides a direct comparison with the predictions of the master
equations but also allows us to examine much larger systems. Specifically, in our implementation we
consider the total magnetizations M1 = MS + M D and M2 = MS − M D . Each Monte Carlo step involves
a probabilistic decision to flip a spin within one of the two subnets, with the selection between S and
D being randomized. The corresponding transition rates, as defined in Eqs. 59 and 60, incorporate the
effects of λ+ and λ− .
Our implementation tracks these magnetizations at intervals of N iterations. Below we show results
from our simulations where we varied network sizes N , with additional adjustments in interaction
strengths λ+ and λ− . We focused on assessing the system’s finite-size effects and convergence towards
the mean field solutions.
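A minimal sketch of this subnet-resolved Monte Carlo scheme (system size, sweep count, and seed are our own illustrative choices; the couplings are those of Fig. 7). Because the rates of Eqs. 59 and 60 depend only on (MS, MD), the update can track the subnet magnetizations directly:

```python
import math
import random

def glauber_m2(N=2000, blp=1.3, blm=0.17, sweeps=400, seed=7):
    """Subnet-resolved Glauber dynamics: each step picks a random spin in S or D and
    flips it with the probabilities of Eqs. 59 and 60 (tau0 = 1, one sweep = N steps).
    Starts from m1 = 1, m2 = 0 and returns the trajectory of m2 = (MS - MD)/N."""
    rng = random.Random(seed)
    NS = ND = N // 2
    MS, MD = NS, ND                 # all spins up: m1 = (MS + MD)/N = 1
    traj = []
    for _ in range(sweeps):
        for _ in range(N):
            if rng.random() < 0.5:                        # subnetwork S
                s = 1 if rng.random() < (NS + MS) / (2 * NS) else -1  # up or down spin?
                h = blp * (MS - s) - blm * MD
                if rng.random() < 0.5 * (1 - s * math.tanh(2 * h / N)):
                    MS -= 2 * s
            else:                                         # subnetwork D
                s = 1 if rng.random() < (ND + MD) / (2 * ND) else -1
                h = blp * (MD - s) + blm * MS
                if rng.random() < 0.5 * (1 - s * math.tanh(2 * h / N)):
                    MD -= 2 * s
        traj.append((MS - MD) / N)
    return traj

traj = glauber_m2()
print(min(traj), max(traj))  # in the limit-cycle phase m2 swings through both signs
```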


Figure 7: Magnetization m2 (t) in Glauber dynamics as it approaches the mean field solution. Each
dashed line with markers represents a single realization for system sizes N = 100 (blue circles), 1, 000
(orange triangles), 10, 000 (red crosses), and 100, 000 (green diamonds) at βλ+ = 1.3 and βλ− = 0.17,
compared to the mean field solution (solid black line). As N increases, the simulations match the mean
field predictions, with larger systems nearly overlapping with the mean field curve.

Fig. 7 illustrates the convergence of the Glauber dynamics toward the mean field solution for N →
∞. The observations are consistent in both m1 (t) and m2 (t). At N = 100, deviations from the mean
field solution are notable, particularly in the oscillation frequency and noise levels. As N increases to
1, 000 and 10, 000, the discrepancies between the simulations and mean field solutions decrease, with
progressively smoother magnetization dynamics.
At N = 10, 000 and 100, 000, stochastic effects significantly recede. In these larger systems, the
dynamics closely resemble those of an infinite, continuous medium.
While individual realizations of Glauber dynamics for very large N align well with the mean-field
solution, smaller systems exhibit significant variability. A comparison between ensemble-averaged sim-
ulations and the mean-field solution reveals damping as a net result of averaging over realizations. In Fig. 8, the averaged m2(t) displays oscillation damping that remains pronounced even in relatively large systems (e.g., N = 1,000). As expected, larger systems recover the mean field oscillation amplitude and maintain persistent oscillations over an extended range.
Fig. 9 contrasts the ensemble-averaged magnetization m2 (t) from Glauber dynamics simulations
with the exact Liouvillian solution of Fig. 5. As the sampling increases, the stochastic ensemble mean
converges towards the Liouvillian dynamics, demonstrating the equivalence between the statistical
expectations of stochastic processes and the deterministic predictions derived from the master equation.
For larger sampling (purple and green trajectories), the decoherence among individual dynamics leads
to destructive interference and damped oscillations. A single realization (blue trajectory) still preserves
the characteristic oscillation within the limit cycle regime, albeit with inconsistent periods. This can be
attributed to the high susceptibility to noise in smaller systems. Our simulation was limited to N = 200,
a relatively small configuration, due to the computational expense associated with the Liouvillian matrix
calculation, as discussed above.


Figure 8: Average magnetization, m2 (t) in Glauber dynamics over 1,000 realizations. The N = 100
system shows a significant decay within approximately two periods. At N = 1, 000, the oscillation
initially matches the mean field period but begins to shorten around t/τ0 = 400 while damping out. A
larger system exhibits prolonged oscillation persistence, yet still with a noticeable damping. The largest
N (the green curve) approximates an infinite system and more closely recovers the oscillations of the
mean field solution.


Figure 9: Equivalence of the master equation solution and averages of Glauber dynamics. System parameters and initial conditions are the same as those in Figs. 7 and 8: βλ+ = 1.3, βλ− = 0.17, m1(0) = 1, and m2(0) = 0. The system has 100 spins in each subnet S and D, totaling N = 200. Colored dashed curves represent the averages from Monte Carlo simulation runs. As the sampling size increases, the average trajectory of all stochastic paths converges to the Liouvillian solution.

5.1 Numerical tests of critical behavior
In this last section, we test the predictions obtained from the Langevin equations in Sect. 3 with Glauber
numerical simulations. We focus first on predictions related to the fold line. The first observation
from Eq. 42 is that the oscillations of the autocorrelation function for a system exactly on the fold
line are purely driven by fluctuations and are characterized by a period that scales as N 1/3 . We show
this behavior in Fig. 10, where after rescaling the delay time τ by N 1/3 , the autocorrelation functions
calculated numerically with N ranging from N = 1000 to N = 50000 collapse to a single universal
function. The autocorrelation is calculated starting at t = 100τ0 to remove transients related to the
choice of the initial conditions. The expected scaling behavior is observed for the real and imaginary
components of the autocorrelation of z = m1 − im2 .
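A lightweight illustration of the N^{1/3} scaling, using a one-variable Langevin sketch rather than the full network: on the fold line the phase obeys θ̇ = 1 − cos(4θ) plus a noise term whose strength we take, as an assumption, to scale as D = 1/N. The mean time to drift-diffuse past each marginal point θ = kπ/2 should then grow as N^{1/3}:

```python
import math
import random

def mean_quarter_period(N, eps=0.0, dt=0.01, n_pass=40, seed=1):
    """Euler-Maruyama integration of the fold-line phase equation,
    d(theta) = (1 + eps - cos 4*theta) dt + sqrt(2D) dW with D = 1/N,
    measuring the mean time between crossings of theta = k*pi/2."""
    rng = random.Random(seed)
    D = 1.0 / N
    th, t, t_last = 0.0, 0.0, 0.0
    times = []
    next_gate = math.pi / 2.0
    while len(times) < n_pass:
        th += (1.0 + eps - math.cos(4.0 * th)) * dt \
              + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        t += dt
        if th >= next_gate:
            times.append(t - t_last)
            t_last = t
            next_gate += math.pi / 2.0
    return sum(times) / len(times)

T1 = mean_quarter_period(10**3)
T2 = mean_quarter_period(10**6)
print(T2 / T1)  # close to (10**6 / 10**3)**(1/3) = 10 if the N^(1/3) scaling holds
```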


Figure 10: Autocorrelation on the fold line at βλ+ = 1.25 and βλ− = 0.1025, for different values of N. The time axis is scaled according to Eq. 42 to show the collapse onto a single universal function.

The second prediction relates to the response of a system on the fold line to an external drive and its
dependence on the strength of the drive, F . According to Eq. 53, we expect that in the limit of large N
where the noise-induced switching is suppressed, the characteristic time for switching scales as F −1/2 .
We tested this behavior in Fig. 11, where we show the rotation of the angle θ = arctan m2 /m1 right
after the activation of a constant field F in a system initially at m2 = 1. The constant F pushes the state
towards m1 , and the amplitude of rotation and its time dependence scale as predicted by Eq. 53 in the
limit of small θ .
We also find N -dependent damped oscillations for the autocorrelation function on the Hopf bi-
furcation line. This is consistent with the scaling relation obtained in Eq. 34 using a rotating wave
z̃ = e−iβλ− t z.
Fig. 12 shows how, by rescaling the autocorrelation in amplitude and time, simulation runs for
networks of different sizes N collapse into a universal function. We have verified that this behavior
holds for different values of βλ− along the Hopf line.
Finally, we numerically studied the behavior of the autocorrelation functions slightly outside the
critical lines, identifying two distinct behaviors. Near the Hopf line and above the fold line, the damping
of the autocorrelation is associated with a characteristic time T that scales linearly with N , following
the analytical predictions of Eqs. 36 and 47. In contrast, below the fold line, in the regime where
memory retrieval is effective, the characteristic time increases exponentially with N , as described by
Eq. 49. Fig. 13 presents the results of numerical simulations where the decay of the autocorrelation
function was fitted to an exponential model with a characteristic time T .

Figure 11: θ rotations following the activation of a constant F for a system of N = 106 on the fold line
with βλ+ = 1.25 and βλ− = 0.1025. Time and angles are rescaled according to Eq. 53, which is valid
for θ ≪ 1.


Figure 12: Hopf critical exponent at βλ+ = 1.0, βλ− = 1.7. The time axis and autocorrelation are scaled according to Eq. 34 to show the collapse onto a single universal function.

Figure 13: Characteristic decay times T(N) of the autocorrelation near the Hopf line and above the fold line grow linearly with N, following the predictions of Eqs. 36 and 47; below the fold line, the decay time grows exponentially with N, following Eq. 49. The fitted models are T_near Hopf = 0.07N − 38.77, T_above Fold = 0.17N + 147.06, and T_below Fold = 97.54 · e^{0.000247N}. Errors in the data are too small to be visible and have been omitted for clarity. The fits for all three regimes indicate strong statistical agreement between the data and the respective fitted models, with R² values of 0.9687 (near Hopf), 0.9795 (above fold), and 0.9930 (below fold).

6 Conclusions
Several biological processes evolve through multi-step sequential transitions. Hematopoiesis, for in-
stance, is a multi-step cascade that starts with stem cells and progresses through oligopotent and
lineage-committed progenitors. Similarly, central pattern generators are neural circuits producing
rhythmic or periodic functions such as breathing or walking. Another example is the cell cycle, which
consists of a finely-tuned sequence of cellular phases. From a theoretical perspective, developing and
understanding effective models that can address questions related to these biological sequential transi-
tions is important. For instance, are these transitions controlled by intrinsic or extrinsic factors? What
is the role of stochasticity, and how does it scale with the number of involved components? Are there
critical regions that separate phases of different behaviors and exhibit some scale invariance properties?
Are there critical regions of the phase space with enhanced sensitivity to external perturbations?
The two-memory non-reciprocal Hopfield model studied here addresses many of the above ques-
tions. Switching is encoded through non-reciprocal interactions that modify Hebbian coupling. In this
N -body system, we explore the effects of the number of components, N , and noise. We found that two
distinct regions of critical behavior emerge at the interface of different dynamical phases. We identi-
fied and studied these regions, which correspond to Hopf bifurcations and fold bifurcations. Previous
studies have explored the hypothesis that some biological systems operate at Hopf bifurcation critical-
ity. However, behaviors near the fold line could explain other biological phenomena involving state
switching. The dynamic scaling behavior, marked by different critical exponents ζ in the autocorre-
lation function, suggests these two regimes are qualitatively distinct. Furthermore, we showed that
sensitivity to external signals varies significantly. Specifically, in the Hopf bifurcation line, the system
is sensitive to perturbations resonant with the limit cycle frequency. In contrast, perturbations to a
system in the fold line do not induce sustained limit cycles but enable controlled state switching. The
time required to respond to perturbations also differs, scaling faster in the fold line than in the Hopf
line.
The model studied here can be generalized to more than two patterns. For a system encoding p
patterns, the N spins partition into 2^{p−1} subnetworks, analogous to the division into similarity (S) and
differential (D) spins introduced earlier. For instance, in Ref. [11], four patterns were used to extend
the Z2 × Z2 (C4 ) model considered here to the C8 symmetry case. A modification of the interaction
using a Moore-Penrose pseudoinverse matrix of spins and patterns [33] was also used in that paper to
reduce errors due to correlation among the memory patterns. However, for larger p, the model’s ability
to recover sequences of patterns quickly diminishes [3]. One way to address this limitation involves
introducing a delay in the switching term [35], which could be realized through a modulation of the
interaction, as recently explored by Herron et al. [36]. Hopfield networks do not need to be complete
networks for memory retrieval. For instance, in random asymmetric networks, memory retrieval is
preserved when the average network connectivity is above a critical value [37]. This property can be
exploited to integrate the models with additional biological information. For example, in Ref. [22], the
wiring of gene regulatory networks was combined with the memory retrieval property of the Hopfield
model to identify bottleneck genes more susceptible to cell state switching. Another exciting extension
involves defining branching points for memory patterns. Instead of cycles or fixed points, one can
represent dynamics in which a memory pattern ξ1 can transition into ξ2 or ξ3 patterns. This can be
implemented by adding a random switch in the Glauber dynamics that randomly chooses between ξ2
and ξ3 in the dynamics. This approach was implemented in Ref.[10] to model the random switching
between clonal states in disease progression.
While the present study has been motivated by biological questions, Hopfield networks with dilute
memory patterns (i.e., p < log2 (N )) have been explored in the presence of a transverse field on the x-
axis, which renders the system quantum mechanical [38]. Although non-reciprocity in physical systems
is less common than in biological settings, integrated photonics systems can be engineered to exhibit
real space asymmetric coupling [39]. Non-reciprocity resulting from quantum mechanical effects in

coupled parametric oscillators has also been recently demonstrated [16]. Studying these physical sys-
tems in critical regions near oscillatory instability could help understand the effects of noise and driving
in truly out-of-equilibrium systems. Hopfield networks and their modern improvements [40] have also
received renewed attention due to their connection to machine learning and artificial intelligence.
For instance, new message-passing algorithms for Restricted Boltzmann Machines (RBM) have been
proposed based on the mapping of Hopfield networks to RBM by a Hubbard-Stratonovich Gaussian
transformation [41]. Moreover, Hopfield networks have been suggested as a better alternative to the
attention mechanism used in transformers [42]. Since the attention mechanism is the key innovation
of the transformer architecture [43], a fundamental understanding of the properties of symmetric and
asymmetric Hopfield neural networks could suggest more powerful architectures for AI applications.

Acknowledgements
S.X. and C.P. acknowledge support by NIH R35GM149261.

Code Availability
The code developed for this manuscript is available at https://github.com/shuyue13/non-reciprocal-Hopfield

References
[1] John J Hopfield. Neural networks and physical systems with emergent collective computational
abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
[2] John Hertz, Anders Krogh, Richard G Palmer, and Heinz Horner. Introduction to the theory of
neural computation, 1991.
[3] DJ Amit. Modeling brain function: The world of attractor neural networks. Cambridge: Cambridge
University Press, 1989.
[4] JA Hertz, G Grinstein, and SA Solla. Irreversible spin glasses and neural networks. In Heidelberg
Colloquium on Glassy Dynamics: Proceedings of a Colloquium on Spin Glasses, Optimization and
Neural Networks Held at the University of Heidelberg June 9–13, 1986, pages 538–546. Springer,
1987.
[5] Michel Fruchart, Ryo Hanai, Peter B Littlewood, and Vincenzo Vitelli. Non-reciprocal phase tran-
sitions. Nature, 592(7854):363–369, 2021.
[6] Ryan Hannam, Alessia Annibale, and Reimer Kühn. Cell reprogramming modelled as transitions
in a hierarchy of cell cycles. Journal of Physics A: Mathematical and Theoretical, 50(42):425601,
2017.
[7] Alex H Lang, Hu Li, James J Collins, and Pankaj Mehta. Epigenetic landscapes explain par-
tially reprogrammed cells and identify key reprogramming genes. PLoS Computational Biology,
10(8):e1003734, 2014.
[8] Laura Cantini and Michele Caselle. Hope4Genes: a Hopfield-like class prediction algorithm for
transcriptomic data. Scientific Reports, 9(1):337, 2019.
[9] Atefeh Taherian Fard and Mark A Ragan. Modeling the attractor landscape of disease progression:
a network-based approach. Frontiers in Genetics, 8:48, 2017.
[10] Sergii Domanskyi, Alex Hakansson, Giovanni Paternostro, and Carlo Piermarocchi. Modeling
disease progression in multiple myeloma with Hopfield networks and single-cell RNA-seq. In
2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 2129–2136.
IEEE, 2019.
[11] Anthony Szedlak, Spencer Sims, Nicholas Smith, Giovanni Paternostro, and Carlo Piermarocchi.
Cell cycle time series gene expression data encoded as cyclic attractors in Hopfield systems. PLoS
Computational Biology, 13(11):e1005849, 2017.
[12] Stuart A Kauffman and Sonke Johnsen. Coevolution to the edge of chaos: Coupled fitness land-
scapes, poised states, and coevolutionary avalanches. Journal of Theoretical Biology, 149(4):467–
505, 1991.
[13] Giuseppe Longo and Maël Montévil. Perspectives on organisms. Springer, 2014.
[14] Thierry Mora and William Bialek. Are biological systems poised at criticality? Journal of Statistical
Physics, 144:268–302, 2011.
[15] Yael Avni, Michel Fruchart, David Martin, Daniel Seara, and Vincenzo Vitelli. The non-reciprocal
Ising model. arXiv preprint arXiv:2311.05471, 2024.
[16] C Han, M Wang, B Zhang, MI Dykman, and HB Chan. Coupled parametric oscillators: From
disorder-induced current to asymmetric Ising model. Physical Review Research, 6(2):023162,
2024.

[17] Daniel A Paz and Mohammad F Maghrebi. Driven-dissipative Ising model: An exact field-
theoretical analysis. Physical Review A, 104(2):023713, 2021.

[18] AJ Hudspeth, Frank Jülicher, and Pascal Martin. A critique of the critical cochlea: Hopf—a bifur-
cation—is better than none. Journal of Neurophysiology, 104(3):1219–1229, 2010.

[19] Sébastien Camalet, Thomas Duke, Frank Jülicher, and Jacques Prost. Auditory sensitivity provided
by self-tuned critical oscillations of hair cells. Proceedings of the National Academy of Sciences,
97(7):3183–3188, 2000.

[20] DC Mattis. Solvable spin systems with random interactions. Physics Letters A, 56(5):421–422,
1976.

[21] Jan L van Hemmen. Classical spin-glass model. Physical Review Letters, 49(6):409, 1982.

[22] Anthony Szedlak, Giovanni Paternostro, and Carlo Piermarocchi. Control of asymmetric Hopfield
networks and application to cancer attractors. PLoS ONE, 9(8):e105842, 2014.

[23] Roy J Glauber. Time-dependent statistics of the Ising model. Journal of Mathematical Physics,
4(2):294–307, 1963.

[24] Masuo Suzuki and Ryogo Kubo. Dynamics of the Ising model near the critical point. I. Journal of
the Physical Society of Japan, 24(1):51–60, 1968.

[25] Daniel J Amit, Hanoch Gutfreund, and Haim Sompolinsky. Spin-glass models of neural networks.
Physical Review A, 32(2):1007, 1985.

[26] Yuri A Kuznetsov. Elements of applied bifurcation theory. Springer, New York, 1998.

[27] Masatoshi Shiino, Hidetoshi Nishimori, and Masaya Ono. Nonlinear Master Equation Approach
to Asymmetrical Neural Networks of the Hopfield-Hemmen Type. Journal of the Physical Society
of Japan, 58(3):763–766, 1989.

[28] Paul M Chaikin and Tom C Lubensky. Principles of condensed matter physics. Cambridge University
Press, Cambridge, 1995.

[29] Ching-Kit Chan, Tony E Lee, and Sarang Gopalakrishnan. Limit-cycle phase in driven-dissipative
spin systems. Physical Review A, 91(5):051601, 2015.

[30] Romain Daviet, Carl Philipp Zelle, Achim Rosch, and Sebastian Diehl. Nonequilibrium criticality
at the onset of time-crystalline order. Physical Review Letters, 132(16):167102, 2024.

[31] Carl Philipp Zelle, Romain Daviet, Achim Rosch, and Sebastian Diehl. Universal phenomenology
at critical exceptional points of nonequilibrium O(N) models. Physical Review X, 14(2):021052,
2024.

[32] L Personnaz, I Guyon, and G Dreyfus. Information storage and retrieval in spin-glass like neural
networks. Journal de Physique Lettres, 46(8):359–365, 1985.

[33] I Kanter and Haim Sompolinsky. Associative recall of memory without errors. Physical Review A,
35(1):380, 1987.

[34] Frank P Kelly. Reversibility and stochastic networks. Cambridge University Press, 2011.

[35] Haim Sompolinsky and Ido Kanter. Temporal association in asymmetric neural networks. Physical
Review Letters, 57(22):2861, 1986.

[36] Lukas Herron, Pablo Sartori, and BingKan Xue. Robust retrieval of dynamic sequences through
interaction modulation. PRX Life, 1(2):023012, 2023.

[37] Bernard Derrida, Elizabeth Gardner, and Anne Zippelius. An exactly solvable asymmetric neural
network model. Europhysics Letters, 4(2):167, 1987.

[38] Rongfeng Xie and Alex Kamenev. Quantum Hopfield model with dilute memories. Physical Review
A, 110(3):032418, 2024.

[39] Ogulcan E Orsel, Jiho Noh, Penghao Zhu, Jieun Yim, Taylor L Hughes, Ronny Thomale, and
Gaurav Bahl. Giant non-reciprocity and gyration through modulation-induced Hatano-Nelson
coupling in integrated photonics. arXiv preprint arXiv:2410.10079, 2024.

[40] Dmitry Krotov and John J Hopfield. Dense associative memory for pattern recognition. Advances
in Neural Information Processing Systems, 29, 2016.

[41] Marc Mézard. Mean-field message-passing equations in the Hopfield model and its generaliza-
tions. Physical Review E, 95(2):022117, 2017.

[42] Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas
Adler, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, et al. Hopfield
networks is all you need. arXiv preprint arXiv:2008.02217, 2020.

[43] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing
Systems, 30, 2017.
