The Elephant Random Walk in The Triangular Array Setting
1–13 (2025)
doi:10.1017/jpr.2024.106
RAHUL ROY,∗ Indian Statistical Institute and Indraprastha Institute of Information Technology, Delhi
MASATO TAKEI,∗∗ Yokohama National University
HIDEKI TANEMURA,∗∗∗ Keio University
Abstract
Gut and Stadtmüller (2021, 2022) initiated the study of the elephant random walk with
limited memory. Aguech and El Machkouri (2024) extended the results of Gut and
Stadtmüller (2022) to an ‘increasing memory’ version of the elephant random walk
without stops. Here we present a formal definition, based on the triangular array
setting, of the process that was hinted at by Gut and Stadtmüller. We give a positive
answer to the open problem in Gut and Stadtmüller (2022) for the elephant random
walk, possibly with stops. We also obtain the central limit theorem for the supercritical
case of this model.
Keywords: Random walk with memory; limit theorems; phase transition
2010 Mathematics Subject Classification: Primary 60K50
Secondary 60F05
1. Introduction
In recent years there has been a lot of interest in the study of the elephant random walk
(ERW) since it was introduced in [15]; see the excellent thesis [13] for a detailed bibliography.
The standard ERW is described as follows. Let p ∈ (0, 1) and s ∈ [0, 1]. We consider a sequence
X_1, X_2, … of random variables taking values in {+1, −1} given by
\[
X_1 = \begin{cases} +1 & \text{with probability } s, \\ -1 & \text{with probability } 1 - s; \end{cases} \tag{1.1}
\]
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust.
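For readers who want to experiment, the dynamics can be simulated directly. A minimal sketch in Python, assuming the memory rule of [15] (each later step repeats a uniformly chosen past step with probability p and reverses it otherwise); the function name, seed, and parameter values are illustrative:

```python
import random

def simulate_erw(n, p, s, seed=0):
    """Simulate n steps of the standard ERW: the first step is +1 with
    probability s; each later step repeats a uniformly chosen past step
    with probability p and reverses it with probability 1 - p."""
    rng = random.Random(seed)
    steps = [1 if rng.random() < s else -1]
    for _ in range(1, n):
        remembered = rng.choice(steps)
        steps.append(remembered if rng.random() < p else -remembered)
    return steps

walk = simulate_erw(1000, p=0.6, s=0.5)
position = sum(walk)  # the walker's position W_n after n = 1000 steps
```

Running this across seeds for p > 3/4 versus p < 3/4 illustrates the phase transition between the superdiffusive and diffusive regimes discussed below.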
Gut and Stadtmüller [9, 10] studied a variation of this model as described in [9, Section 3.2];
[1] also studied a similar variation of the model as described in [1, Section 2]. We present
the two different formalizations of models given in [1, 10]; our work is based on the first
formalization.
The error in [1, Theorem 2(3)] is due to the use of the linear setting for their equation (3.20), while
working in the triangular array setting. In particular, there is a mistake in the expression of M_∞
on [1, p. 14], which was fixed in the subsequent corrigendum. Their results in the corrected
version agree with the results obtained here, although the methods used are different; this paper
also provides additional results not obtained by them.
In the next section we present the statement of our results, and in Sections 3 and 4 we prove
the results. In Section 5 and thereafter we study similar questions about the ERW with stops
and present our results.
\[
\lim_{n\to\infty} \frac{W_n}{n} = 0 \quad \text{almost surely (a.s.) and in } L^2. \tag{2.1}
\]
• For α ∈ (−1, 1/2), i.e. p ∈ (0, 3/4),
\[
\frac{W_n}{\sqrt{n}} \xrightarrow{d} N\Bigl(0, \frac{1}{1-2\alpha}\Bigr) \quad \text{as } n \to \infty, \tag{2.2}
\]
\[
\limsup_{n\to\infty} \Bigl(\pm\frac{W_n}{\sqrt{2n \log\log n}}\Bigr) = \frac{1}{\sqrt{1-2\alpha}} \quad \text{a.s.} \tag{2.3}
\]
• For α = 1/2, i.e. p = 3/4,
\[
\frac{W_n}{\sqrt{n \log n}} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty, \tag{2.4}
\]
\[
\limsup_{n\to\infty} \Bigl(\pm\frac{W_n}{\sqrt{2n \log n \log\log\log n}}\Bigr) = 1 \quad \text{a.s.} \tag{2.5}
\]
• For α ∈ (1/2, 1), i.e. p ∈ (3/4, 1), there exists a random variable M such that
\[
\lim_{n\to\infty} \frac{W_n}{n^{\alpha}} = M \quad \text{a.s. and in } L^2, \tag{2.6}
\]
\[
\gamma_n := \frac{m_n}{n}, \qquad \lim_{n\to\infty} \gamma_n = \gamma \in [0, 1]. \tag{2.8}
\]
(i) If α ∈ (−1, 1/2), i.e. p ∈ (0, 3/4), then
\[
\frac{\gamma_n T_n}{\sqrt{m_n}} \xrightarrow{d} N\Bigl(0, \frac{\{\gamma + \alpha(1-\gamma)\}^2}{1-2\alpha} + \gamma(1-\gamma)\Bigr) \quad \text{as } n \to \infty. \tag{2.9}
\]
for k ∈ (m_n, n] ∩ N. From (1.5), we can see that the conditional distribution of X_k^{(n)} for k ∈
(m_n, n] ∩ N is given by
\[
P\bigl(X_k^{(n)} = \pm 1 \mid \mathcal{G}_{k-1}\bigr) = \frac{1}{2}\Bigl(1 \pm \alpha \cdot \frac{W_{m_n}}{m_n}\Bigr).
\]
(This corresponds to [10, (2.2)].) From this we have that
\[
E[X_k^{(n)} \mid \mathcal{F}_\infty] = \alpha \cdot \frac{W_{m_n}}{m_n} \tag{3.1}
\]
for each k ∈ (m_n, n] ∩ N, and
\[
E[T_n - W_{m_n} \mid \mathcal{F}_\infty] = \sum_{k=m_n+1}^{n} E[X_k^{(n)} \mid \mathcal{F}_\infty] = \alpha(n - m_n) \cdot \frac{W_{m_n}}{m_n}. \tag{3.2}
\]
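Equation (3.2) can be checked numerically by drawing the conditional i.i.d. steps directly. A minimal sketch, where `w_m` stands in for W_{m_n} and all numerical values (w_m = 30, m_n = 100, n = 500, α = 0.4) are illustrative:

```python
import random

def tail_steps(w_m, m_n, n, alpha, rng):
    """Draw X_k^(n) for k in (m_n, n]: i.i.d. given the first m_n steps,
    equal to +1 with probability (1 + alpha * w_m / m_n) / 2, else -1."""
    p_plus = 0.5 * (1 + alpha * w_m / m_n)
    return [1 if rng.random() < p_plus else -1 for _ in range(n - m_n)]

rng = random.Random(1)
w_m, m_n, n, alpha = 30, 100, 500, 0.4   # illustrative values
trials = 5000
avg_tail = sum(sum(tail_steps(w_m, m_n, n, alpha, rng))
               for _ in range(trials)) / trials
predicted = alpha * (n - m_n) * w_m / m_n   # right-hand side of (3.2), here 48
```

Averaged over many draws, `avg_tail` settles near `predicted`, in line with (3.2).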
We introduce
\[
A_n := E[T_n \mid \mathcal{F}_\infty], \qquad B_n := T_n - A_n. \tag{3.3}
\]
Noting that
\[
A_n = W_{m_n} + E[T_n - W_{m_n} \mid \mathcal{F}_\infty] = \frac{W_{m_n}}{\gamma_n} \cdot \{\gamma_n + \alpha(1 - \gamma_n)\}, \tag{3.4}
\]
we have
\[
\gamma_n T_n = \gamma_n (A_n + B_n) = c_n W_{m_n} + \gamma_n B_n, \tag{3.5}
\]
where c_n = c_n(α) := γ_n + α(1 − γ_n).
First, we prove Theorem 2.1(i). Assume that α ∈ (−1, 1/2). By (3.4) and (2.2),
\[
\frac{\gamma_n A_n}{\sqrt{m_n}} = \frac{c_n W_{m_n}}{\sqrt{m_n}} \xrightarrow{d} \{\gamma + \alpha(1-\gamma)\} \cdot N\Bigl(0, \frac{1}{1-2\alpha}\Bigr) \quad \text{as } n \to \infty.
\]
In terms of characteristic functions, this is equivalent to, for ξ ∈ R,
\[
E\Bigl[\exp\Bigl(\frac{i\xi \gamma_n A_n}{\sqrt{m_n}}\Bigr)\Bigr] \to \exp\Bigl(-\frac{\xi^2}{2} \cdot \frac{\{\gamma + \alpha(1-\gamma)\}^2}{1-2\alpha}\Bigr) \quad \text{as } n \to \infty. \tag{3.6}
\]
Now we turn to {B_n}. Unless specified otherwise, all the results on {B_n} hold for all
α ∈ (−1, 1). Since, for each n ∈ N, the X_k^{(n)} for k ∈ (m_n, n] ∩ N are independent and identically
distributed under P(· | \mathcal{F}_\infty),
\[
B_n = \sum_{k=m_n+1}^{n} \bigl\{X_k^{(n)} - E[X_k^{(n)} \mid \mathcal{F}_\infty]\bigr\} \tag{3.7}
\]
is a sum of centered i.i.d. random variables. The conditional variance of X_k^{(n)} is given by
\[
V[X_k^{(n)} \mid \mathcal{F}_\infty] = E[(X_k^{(n)})^2 \mid \mathcal{F}_\infty] - \bigl(E[X_k^{(n)} \mid \mathcal{F}_\infty]\bigr)^2
= \begin{cases}
0 & \text{for } k \in [1, m_n] \cap \mathbb{N}, \\
1 - \alpha^2 \cdot \Bigl(\dfrac{W_{m_n}}{m_n}\Bigr)^2 & \text{for } k \in (m_n, n] \cap \mathbb{N}.
\end{cases} \tag{3.8}
\]
We have
\[
V\Bigl[\frac{\gamma_n B_n}{\sqrt{m_n}} \,\Big|\, \mathcal{F}_\infty\Bigr] = \frac{(\gamma_n)^2}{m_n} \cdot (n - m_n) \cdot \Bigl(1 - \alpha^2 \cdot \Bigl(\frac{W_{m_n}}{m_n}\Bigr)^2\Bigr) = \gamma_n(1 - \gamma_n) \cdot \Bigl(1 - \alpha^2 \cdot \Bigl(\frac{W_{m_n}}{m_n}\Bigr)^2\Bigr), \tag{3.9}
\]
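The second equality in (3.9) rests on the identity γ_n²(n − m_n)/m_n = γ_n(1 − γ_n), which holds exactly because γ_n = m_n/n. A numerical spot check on hypothetical (m_n, n) pairs:

```python
# Verify gamma_n^2 (n - m_n) / m_n == gamma_n (1 - gamma_n), the
# simplification used in (3.9); it is an exact consequence of
# gamma_n = m_n / n, checked here on illustrative pairs.
for m_n, n in [(100, 500), (250, 1000), (9, 10)]:
    gamma_n = m_n / n
    lhs = gamma_n ** 2 * (n - m_n) / m_n
    rhs = gamma_n * (1 - gamma_n)
    assert abs(lhs - rhs) < 1e-12
```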
Proof. Because B_n is the sum (3.7) of centered i.i.d. random variables under P(· | \mathcal{F}_\infty), by
(3.8) we have
\[
E\Bigl[\exp\Bigl(\frac{i\xi \gamma_n B_n}{\sqrt{m_n}}\Bigr) \,\Big|\, \mathcal{F}_\infty\Bigr] = \Bigl(1 - \frac{\xi^2 \gamma_n}{2n} \cdot \Bigl(1 - \alpha^2 \cdot \Bigl(\frac{W_{m_n}}{m_n}\Bigr)^2\Bigr) + o\Bigl(\frac{\gamma_n}{n}\Bigr)\Bigr)^{n - m_n} \quad \text{as } n \to \infty \text{ a.s.}
\]
\[
\frac{\gamma_n A_n}{\sqrt{m_n \log m_n}} = \frac{c_n W_{m_n}}{\sqrt{m_n \log m_n}} \xrightarrow{d} \frac{1+\gamma}{2} \cdot N(0, 1) \quad \text{as } n \to \infty.
\]
\[
\frac{\gamma_n A_n}{(m_n)^{\alpha}} = \frac{c_n W_{m_n}}{(m_n)^{\alpha}} \to \{\gamma + \alpha(1-\gamma)\} M \tag{3.12}
\]
and
\[
P(|\gamma_n B_n| \ge x) \le 2 \exp\Bigl(-\frac{x^2}{2\gamma_n(1-\gamma_n) m_n}\Bigr) \quad \text{for } x > 0.
\]
Taking x = \sqrt{2(1+\varepsilon)\gamma_n(1-\gamma_n) m_n \log n} for some ε > 0, we have
\[
\sum_{n=1}^{\infty} P\Bigl(|\gamma_n B_n| \ge \sqrt{2(1+\varepsilon)\gamma_n(1-\gamma_n) m_n \log n}\Bigr) \le \sum_{n=1}^{\infty} \frac{2}{n^{1+\varepsilon}} < \infty.
\]
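The choice of x is exactly what turns the exponential bound into a summable sequence, since x²/(2γ_n(1−γ_n)m_n) = (1+ε) log n gives the bound 2/n^{1+ε}. A quick numerical illustration with ε = 0.5; the constant used for the full series is 2ζ(3/2) (approximate value):

```python
# Substituting x_n = sqrt(2 (1+eps) gamma_n (1-gamma_n) m_n log n) into the
# exponential bound gives P(|gamma_n B_n| >= x_n) <= 2 / n^(1+eps), which is
# summable. Partial sums for eps = 0.5 stay below 2 * zeta(3/2):
eps = 0.5
partial = sum(2 / k ** (1 + eps) for k in range(1, 100_001))
full = 2 * 2.6123753486854883   # 2 * zeta(3/2), the value of the full series (approx.)
assert partial < full
```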
This, together with the Borel–Cantelli lemma, implies (4.1). To obtain (4.2) note that, for
c ∈ (1/2, 1),
\[
\frac{2\gamma_n(1-\gamma_n) m_n \log n}{(m_n)^{2c}} = \frac{2(1-\gamma_n) \log n}{n (m_n)^{2c-2}} \le \frac{2(1-\gamma_n) \log n}{n^{2c-1}} \to 0 \quad \text{as } n \to \infty,
\]
where we used 2c − 2 < 0 < 2c − 1 and m_n ≤ n.
We now prove (2.14) in Theorem 2.2. Equation (2.13) is readily derived from (2.14). For
the case α ∈ (−1, 1/2), from (2.3) and (3.4) we have
\[
\limsup_{n\to\infty} \frac{\gamma_n A_n}{\sqrt{2 m_n \log\log m_n}} \le \frac{\gamma + \alpha(1-\gamma)}{\sqrt{1-2\alpha}} \quad \text{a.s.}
\]
For the case α = 1/2, we similarly have
\[
\limsup_{n\to\infty} \frac{\gamma_n A_n}{\sqrt{2 m_n \log m_n \log\log\log m_n}} \le \frac{1+\gamma}{2} \quad \text{a.s.}
\]
By (4.2), if α ∈ (−1, 1/2] then (2.14) holds for any c ∈ (1/2, 1). As for the case α ∈ (1/2, 1),
almost-sure convergence in (2.11) follows from (3.12) and (4.2). Thus, (2.14) holds for any
c ∈ (α, 1).
\[
\lim_{n\to\infty} \frac{\Sigma_n}{n^{\beta}} = \Sigma > 0 \quad \text{a.s. and in } L^2, \tag{5.3}
\]
where Σ has a Mittag–Leffler distribution with parameter β. We turn to the central limit
theorem for {W_n} in [5].
• For α ∈ [−β, β/2),
\[
\frac{W_n}{\sqrt{\Sigma_n}} \xrightarrow{d} N\Bigl(0, \frac{\beta}{\beta - 2\alpha}\Bigr) \quad \text{as } n \to \infty. \tag{5.4}
\]
• For α = β/2,
\[
\frac{W_n}{\sqrt{\Sigma_n \log \Sigma_n}} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty. \tag{5.5}
\]
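For experimentation, the walk with stops can be simulated as well. A minimal sketch, assuming the dynamics of [12] (each new step copies a uniformly chosen past step multiplied by +1, −1, or 0) with the parametrization α = p − q, β = p + q, which is consistent with the constraints α ∈ [−β, β] and β ∈ (0, 1) used here; all names and numerical values are illustrative:

```python
import random

def erw_with_stops(n, p, q, seed=0):
    """ERW with stops in the spirit of [12]: each new step copies a uniformly
    chosen past step multiplied by +1 w.p. p, by -1 w.p. q, and by 0 w.p.
    1 - p - q, so zero steps can be remembered and propagate."""
    rng = random.Random(seed)
    steps = [1]                                # take the first step to be +1
    for _ in range(1, n):
        u = rng.random()
        factor = 1 if u < p else (-1 if u < p + q else 0)
        steps.append(factor * rng.choice(steps))
    return steps

p, q, n = 0.5, 0.2, 2000                       # alpha = p - q = 0.3, beta = p + q = 0.7
steps = erw_with_stops(n, p, q)
moves = sum(1 for x in steps if x != 0)        # the number of moves up to time n
scaled = moves / n ** (p + q)                  # compare with the n^beta scaling in (5.3)
```

Across seeds, `scaled` fluctuates rather than settling at a deterministic constant, reflecting the random Mittag–Leffler limit in (5.3).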
The strong law of large numbers and its refinement can also be obtained for the ERW with
stops.
Theorem 5.3. Under the same conditions as in Theorem 5.1, we have (2.13). In addition, (2.14)
holds for c ∈ (max{α, 1/2}, 1).
Remark 5.2. Assume that β ∈ (1/2, 1). As a by-product of the proof of Theorem 5.3, we can
prove the a.s. convergence in (5.12). The a.s. convergence in (5.10) is valid for α ∈ (1/2, β].
From this,
\[
V\Bigl[\frac{\gamma_n B_n}{\sqrt{\Sigma_{m_n}}} \,\Big|\, \mathcal{F}_\infty\Bigr] = \gamma_n(1 - \gamma_n) \cdot \Bigl(\beta - \alpha^2 \cdot \frac{(m_n)^{\beta}}{\Sigma_{m_n}} \cdot \Bigl(\frac{W_{m_n}}{(m_n)^{(1+\beta)/2}}\Bigr)^2\Bigr). \tag{6.6}
\]
For any β ∈ (0, 1) and α ∈ [−β, β], we show that
\[
\lim_{n\to\infty} \frac{W_{m_n}}{(m_n)^{(1+\beta)/2}} = 0 \quad \text{a.s.} \tag{6.7}
\]
Indeed, if α ∈ [−β, β/2) then
\[
\limsup_{n\to\infty} \frac{W_n}{\sqrt{2 n^{\beta} \log\log n}} = \sqrt{\frac{\beta \Sigma}{\beta - 2\alpha}} \quad \text{a.s.} \tag{6.8}
\]
by [5, (3.5)]. If α = β/2 then
\[
\limsup_{n\to\infty} \frac{W_n}{\sqrt{2 n^{\beta} \log n \log\log\log n}} = \sqrt{\beta \Sigma} \quad \text{a.s.} \tag{6.9}
\]
by [5, (3.13)]. If α ∈ (β/2, β] then W_{m_n}/(m_n)^{α} → M as n → ∞ a.s. by (5.6), and (1 + β)/2 > α
since 2α − β ≤ β < 1. In any case we have (6.7). Since (m_n)^{β}/Σ_{m_n} → 1/Σ as n → ∞ a.s. by
(5.3), we see that (6.6) converges to βγ(1 − γ) as n → ∞ a.s., and
\[
V\Bigl[\frac{\gamma_n B_n}{\sqrt{m_n \log m_n}} \,\Big|\, \mathcal{F}_\infty\Bigr] \to 0 \quad \text{as } n \to \infty \text{ a.s.}
\]
This together with (6.4) gives (5.9). As for the case α ∈ (β/2, β], by (3.4) and (5.6),
\[
\frac{\gamma_n A_n}{(m_n)^{\alpha}} = \frac{c_n W_{m_n}}{(m_n)^{\alpha}} \to \{\gamma + \alpha(1-\gamma)\} M \quad \text{as } n \to \infty \text{ a.s. and in } L^2.
\]
Now (5.10) follows from Lemma 6.1(ii). The proof of (5.11) is almost identical to that of
(2.12): use (3.5), (5.7), and (6.3).
\[
\frac{\gamma_n A_n}{(m_n)^{\beta}} = \frac{c_n(\beta) \Sigma_{m_n}}{(m_n)^{\beta}} \to \{\gamma + \beta(1-\gamma)\} \cdot \Sigma \quad \text{as } n \to \infty \text{ a.s. and in } L^2.
\]
Noting that β > 0, this shows that γ_n B_n/(m_n)^{β} → 0 as n → ∞ in L^2, which completes the
proof.
Acknowledgements
R.R. thanks Keio University for its hospitality during multiple visits. M.T. and H.T. thank
the Indian Statistical Institute for its hospitality.
Funding information
M.T. is partially supported by JSPS KAKENHI Grant Numbers JP19H01793, JP19K03514,
and JP22K03333. H.T. is partially supported by JSPS KAKENHI Grant Numbers
JP19K03514, JP21H04432, and JP23H01077.
Competing interests
There were no competing interests to declare which arose during the preparation or
publication process of this article.
References
[1] AGUECH, R. AND EL MACHKOURI, M. (2024). Gaussian fluctuations of the elephant random walk with gradually increasing memory. J. Phys. A 57, 065203. Corrigendum, J. Phys. A 57, 349501.
[2] AZUMA, K. (1967). Weighted sums of certain dependent random variables. Tôhoku Math. J. (2) 19, 357–367.
[3] BAUR, E. AND BERTOIN, J. (2016). Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134.
[4] BERCU, B. (2018). A martingale approach for the elephant random walk. J. Phys. A 51, 015201.
[5] BERCU, B. (2022). On the elephant random walk with stops playing hide and seek with the Mittag–Leffler distribution. J. Statist. Phys. 189, 12.
[6] COLETTI, C. F., GAVA, R. J. AND SCHÜTZ, G. M. (2017). Central limit theorem for the elephant random walk. J. Math. Phys. 58, 053303.
[7] COLETTI, C. F., GAVA, R. J. AND SCHÜTZ, G. M. (2017). A strong invariance principle for the elephant random walk. J. Statist. Mech. 2017, 123207.
[8] GUÉRIN, H., LAULIN, L. AND RASCHEL, K. (2023). A fixed-point equation approach for the superdiffusive elephant random walk. Preprint, arXiv:2308.14630.
[9] GUT, A. AND STADTMÜLLER, U. (2021). Variations of the elephant random walk. J. Appl. Prob. 58, 805–829.
[10] GUT, A. AND STADTMÜLLER, U. (2022). The elephant random walk with gradually increasing memory. Statist. Prob. Lett. 189, 109598.
[11] KUBOTA, N. AND TAKEI, M. (2019). Gaussian fluctuation for superdiffusive elephant random walks. J. Statist. Phys. 177, 1157–1171.
[12] KUMAR, N., HARBOLA, U. AND LINDENBERG, K. (2010). Memory-induced anomalous dynamics: emergence of diffusion. Phys. Rev. E 82, 021101.
[13] LAULIN, L. (2022). Autour de la marche aléatoire de l’éléphant [About the elephant random walk]. Thesis, Université de Bordeaux.
[14] QIN, S. (2023). Recurrence and transience of multidimensional elephant random walks. Preprint, arXiv:2309.09795.
[15] SCHÜTZ, G. M. AND TRIMPER, S. (2004). Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101.