
J. Appl. Probab. 1–13 (2025)
doi:10.1017/jpr.2024.106

THE ELEPHANT RANDOM WALK IN THE TRIANGULAR ARRAY SETTING

RAHUL ROY ,∗ Indian Statistical Institute and Indraprastha Institute of Information Technology,
Delhi
MASATO TAKEI ,∗∗ Yokohama National University
HIDEKI TANEMURA ,∗∗∗ Keio University

Abstract
Gut and Stadtmüller (2021, 2022) initiated the study of the elephant random walk with
limited memory. Aguech and El Machkouri (2024) discussed an extension of the results
of Gut and Stadtmüller (2022) for an 'increasing memory' version of the elephant random
walk without stops. Here we present a formal definition of the process that was hinted
at by Gut and Stadtmüller. This definition is
based on the triangular array setting. We give a positive answer to the open problem in
Gut and Stadtmüller (2022) for the elephant random walk, possibly with stops. We also
obtain the central limit theorem for the supercritical case of this model.
Keywords: Random walk with memory; limit theorems; phase transition
2010 Mathematics Subject Classification: Primary 60K50
Secondary 60F05

1. Introduction
In recent years there has been a lot of interest in the study of the elephant random walk
(ERW) since it was introduced in [15]; see the excellent thesis [13] for a detailed bibliography.
The standard ERW is described as follows. Let p ∈ (0, 1) and s ∈ [0, 1]. We consider a sequence
X_1, X_2, … of random variables taking values in {+1, −1} given by

    X_1 = +1 with probability s,
          −1 with probability 1 − s;    (1.1)

{U_n : n ≥ 1} a sequence of independent random variables, independent of X_1, with U_n having
a uniform distribution over {1, …, n}; and, for n ∈ N := {1, 2, …},

    X_{n+1} = +X_{U_n} with probability p,
              −X_{U_n} with probability 1 − p.    (1.2)

The ERW {W_n} is defined by W_n = Σ_{k=1}^{n} X_k for n ∈ N.
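The recursion (1.1)–(1.2) is straightforward to simulate; the following minimal Python sketch (function and parameter names are ours, not from the paper) samples one path of {W_n}:

```python
import random

def erw_path(n, p, s, rng=random):
    """Sample W_1, ..., W_n of the standard ERW defined by (1.1)-(1.2)."""
    x = [1 if rng.random() < s else -1]           # X_1 as in (1.1)
    for k in range(1, n):
        xu = x[rng.randrange(k)]                  # X_{U_k}, U_k uniform on {1,...,k}
        x.append(xu if rng.random() < p else -xu)  # repeat or flip, as in (1.2)
    path, w = [], 0
    for xi in x:
        w += xi
        path.append(w)                            # W_k = X_1 + ... + X_k
    return path
```

With p close to 1 the walk strongly repeats its early steps; with p close to 0 it keeps reversing them.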

Received 5 March 2024; accepted 31 October 2024.


∗ Postal address: 7 SJS Sansanwal Marg, New Delhi 110016, India. Email: [email protected]
∗∗ Postal address: 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan. Email: [email protected]
∗∗∗ Postal address: 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan. Email: [email protected]

© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust.

https://doi.org/10.1017/jpr.2024.106 Published online by Cambridge University Press



Gut and Stadtmüller [9, 10] studied a variation of this model as described in [9, Section 3.2];
[1] also studied a similar variation of the model as described in [1, Section 2]. We present
the two different formalizations of the models given in [1, 10]; our work is based on the first
formalization.

1.1. Triangular array setting


Consider a sequence {m_n : n ∈ N} of positive integers satisfying

    1 ≤ m_n ≤ n for each n ∈ N.    (1.3)

Let X_1, X_2, … be the sequence defined by (1.1) and (1.2). We define a triangular array of
random variables {{S_k^{(n)} : 1 ≤ k ≤ n} : n ∈ N} as follows. Let {Y_k^{(n)} : 1 ≤ k ≤ n} be random variables
with

    Y_k^{(n)} = X_k        for 1 ≤ k ≤ m_n,
                X_k^{(n)}  for m_n < k ≤ n,    (1.4)

where, for m_n < k ≤ n,

    X_k^{(n)} = +X_{U_{k,n}} with probability p,
                −X_{U_{k,n}} with probability 1 − p.    (1.5)

Here, U_n := {U_{k,n} : m_n < k ≤ n} is an independent and identically distributed (i.i.d.) collection
of uniform random variables over {1, …, m_n}, and {U_n : n ∈ N} is an independent collection.
Finally, for 1 ≤ k ≤ n let S_k^{(n)} := Σ_{i=1}^{k} Y_i^{(n)}. We note that for fixed n ∈ N, the sequence
{S_k^{(n)} : 1 ≤ k ≤ n} is a random walk with increments in {+1, −1}. However, the sequence {S_n^{(n)} :
n ∈ N} does not have such a representation. We study properties of the sequence {T_n : n ∈ N}
given by

    T_n := S_n^{(n)}.    (1.6)

The process {T_n : n ∈ N} was called the ERW with gradually increasing memory in [10], where

    lim_{n→∞} m_n = +∞.    (1.7)
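For simulation purposes, T_n can be sampled directly from (1.4)–(1.6); a minimal sketch (names are ours), assuming a fixed memory size m = m_n:

```python
import random

def triangular_erw_T(n, m, p, s, rng=random):
    """One sample of T_n = S_n^{(n)} for memory size m = m_n (1 <= m <= n).

    The first m increments are the standard-ERW steps X_1, ..., X_m; each later
    increment repeats (w.p. p) or flips (w.p. 1 - p) a uniformly chosen step
    among X_1, ..., X_m only, as in (1.4)-(1.5).
    """
    # Standard ERW increments X_1, ..., X_m
    x = [1 if rng.random() < s else -1]
    for k in range(1, m):
        xu = x[rng.randrange(k)]
        x.append(xu if rng.random() < p else -xu)
    t = sum(x)
    # Increments Y_k^{(n)} for m < k <= n resample from X_1, ..., X_m
    for _ in range(n - m):
        xu = x[rng.randrange(m)]
        t += xu if rng.random() < p else -xu
    return t
```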

1.2. Linear setting



In this setting the ERW W_{n+1} := W_n + Z_{n+1} is given by the increments

    W_1 = Z_1 = +1 with probability s,
                −1 with probability 1 − s,

    Z_{n+1} = +Z_{V_n} with probability p,
              −Z_{V_n} with probability 1 − p,

where V_n is a uniform random variable over {1, …, m_n}, and {V_n : n ∈ N} is an independent
collection.
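For contrast, the linear setting reuses one increment sequence {Z_k} across all times, with Z_{n+1} drawn from the first m_n increments; a minimal sketch (the callable m supplying m_n is our device for illustration):

```python
import random

def linear_erw(n, m, p, s, rng=random):
    """Path W_1, ..., W_n in the linear setting: Z_{k+1} repeats or flips
    Z_{V_k}, with V_k uniform on {1, ..., m_k}; m is a callable giving m_k,
    e.g. m = lambda k: max(1, k // 2)."""
    z = [1 if rng.random() < s else -1]
    w = [z[0]]
    for k in range(1, n):
        mk = min(m(k), k)                 # enforce 1 <= m_k <= k, cf. (1.3)
        zv = z[rng.randrange(mk)]         # Z_{V_k}
        z.append(zv if rng.random() < p else -zv)
        w.append(w[-1] + z[-1])
    return w
```

Unlike the triangular array, the increments here are not resampled for each n, which is exactly the difference discussed in Remark 1.1.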
Remark 1.1. We note here that the dependence structures in the definitions of T_n and W_n are
different and, as such, results obtained for T_n need not carry over to those obtained for W_n. The
error in [1, Theorem 2 (3)] is due to the use of the linear setting for their equation (3.20), while
working in the triangular array setting. In particular, there is a mistake in the expression of M_∞
on [1, p. 14], which was fixed in the subsequent corrigendum. Their results in the corrected
version agree with the results obtained here, although the methods used are different; this paper
also provides additional results not obtained by them.
In the next section we present the statement of our results, and in Sections 3 and 4 we prove
the results. In Section 5 and thereafter we study similar questions about the ERW with stops
and present our results.

2. Results for the ERW in the triangular array setting


Before we state our results, we give a short synopsis of the results for the standard ERW
{W_n} [3, 4, 6–8, 11, 14]. Let α := 2p − 1.

• For α ∈ (−1, 1), i.e. p ∈ (0, 1),

    lim_{n→∞} W_n/n = 0 almost surely (a.s.) and in L².    (2.1)

• For α ∈ (−1, 1/2), i.e. p ∈ (0, 3/4),

    W_n/√n →d N(0, 1/(1 − 2α)) as n → ∞,    (2.2)
    limsup_{n→∞} (±W_n)/√(2n log log n) = 1/√(1 − 2α) a.s.    (2.3)

• For α = 1/2, i.e. p = 3/4,

    W_n/√(n log n) →d N(0, 1) as n → ∞,    (2.4)
    limsup_{n→∞} (±W_n)/√(2n log n log log log n) = 1 a.s.    (2.5)

• For α ∈ (1/2, 1), i.e. p ∈ (3/4, 1), there exists a random variable M such that

    lim_{n→∞} W_n/n^α = M a.s. and in L²,    (2.6)

where E[M] = β/(α + 1), E[M²] > 0, P(M = 0) = 0, and

    (W_n − Mn^α)/√n →d N(0, 1/(2α − 1)) as n → ∞.    (2.7)
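The trichotomy above can be probed by simulation. As a crude Monte Carlo check of the diffusive CLT (2.2) (all parameter values are illustrative choices of ours), the sample variance of W_n/√n should be near 1/(1 − 2α):

```python
import random

def erw_endpoint(n, p, s=0.5, rng=random):
    # endpoint W_n of a standard ERW, as in (1.1)-(1.2)
    x = [1 if rng.random() < s else -1]
    for k in range(1, n):
        xu = x[rng.randrange(k)]
        x.append(xu if rng.random() < p else -xu)
    return sum(x)

# For p = 0.6 (alpha = 0.2) the limit variance in (2.2) is 1/(1 - 2*alpha) = 5/3.
rng = random.Random(0)
n, reps, p = 2000, 300, 0.6
vals = [erw_endpoint(n, p, rng=rng) / n**0.5 for _ in range(reps)]
mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / reps
print(round(var, 2))  # should land in the vicinity of 5/3, up to sampling error
```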

Our first result improves and extends [10, Theorem 3.1].

Theorem 2.1. Let p ∈ (0, 1) and α = 2p − 1. Assume that {m_n : n ∈ N} satisfies (1.3), (1.7), and

    γ_n := m_n/n, lim_{n→∞} γ_n = γ ∈ [0, 1].    (2.8)

(i) If α ∈ (−1, 1/2), i.e. p ∈ (0, 3/4), then

    γ_nT_n/√m_n →d N(0, {γ + α(1 − γ)}²/(1 − 2α) + γ(1 − γ)) as n → ∞.    (2.9)

(ii) If α = 1/2, i.e. p = 3/4, then

    γ_nT_n/√(m_n log m_n) →d N(0, (1 + γ)²/4) as n → ∞.    (2.10)

(iii) If α ∈ (1/2, 1), i.e. p ∈ (3/4, 1), then

    lim_{n→∞} γ_nT_n/(m_n)^α = {γ + α(1 − γ)}M a.s. and in L²,    (2.11)

where M is the random variable in (2.6). Moreover,

    (γ_nT_n − M{γ_n + α(1 − γ_n)}(m_n)^α)/√m_n →d N(0, {γ + α(1 − γ)}²/(2α − 1) + γ(1 − γ)) as n → ∞.    (2.12)
Remark 2.1. If α = γ = 0 then the right-hand side of (2.9) is N(0,0), which we interpret as
the delta measure at 0. Our result (2.12) differs from [1, Theorem 2 (3)]; the reason for this is
given in Remark 1.1.
The next theorem concerns the strong law of large numbers and its refinement.

Theorem 2.2. Let p ∈ (0, 1) and α = 2p − 1. Assume that {m_n : n ∈ N} satisfies (1.3), (1.7), and
(2.8). Then

    lim_{n→∞} T_n/n = 0 a.s.    (2.13)

Actually, we obtain the following sharper result: if c ∈ (max{α, 1/2}, 1) then

    lim_{n→∞} γ_nT_n/(m_n)^c = 0 a.s.    (2.14)

3. Proof of Theorem 2.1


Throughout this section we assume that (1.3), (1.7), and (2.8) hold.
Proof. Let F_n be the σ-algebra generated by X_1, …, X_n. For n ∈ N, the conditional
distribution of X_{n+1} given the history up to time n is

    P(X_{n+1} = ±1 | F_n) = (#{k = 1, …, n : X_k = ±1}/n)·p + (#{k = 1, …, n : X_k = ∓1}/n)·(1 − p)
                          = (1/2)·(1 ± α·W_n/n).

For each n ∈ N, let G_{m_n}^{(n)} = F_∞ := σ({X_i : i ∈ N}) = σ({X_1} ∪ {U_i : i ∈ N}) and

    G_k^{(n)} := σ({X_i : i ∈ N} ∪ {X_i^{(n)} : m_n < i ≤ k}) = σ({X_1} ∪ {U_i : i ∈ N} ∪ {U_{i,n} : m_n < i ≤ k})

for k ∈ (m_n, n] ∩ N. From (1.5), we can see that the conditional distribution of X_k^{(n)} for k ∈
(m_n, n] ∩ N is given by

    P(X_k^{(n)} = ±1 | G_{k−1}^{(n)}) = (1/2)·(1 ± α·W_{m_n}/m_n).

(This corresponds to [10, (2.2)].) From this we have that

    E[X_k^{(n)} | F_∞] = α·W_{m_n}/m_n    (3.1)

for each k ∈ (m_n, n] ∩ N, and

    E[T_n − W_{m_n} | F_∞] = Σ_{k=m_n+1}^{n} E[X_k^{(n)} | F_∞] = α(n − m_n)·W_{m_n}/m_n.    (3.2)

We introduce

    A_n := E[T_n | F_∞], B_n := T_n − A_n.    (3.3)

Noting that

    A_n = W_{m_n} + E[T_n − W_{m_n} | F_∞] = (W_{m_n}/γ_n)·{γ_n + α(1 − γ_n)},    (3.4)

we have

    γ_nT_n = γ_n(A_n + B_n) = c_nW_{m_n} + γ_nB_n,    (3.5)

where c_n = c_n(α) := γ_n + α(1 − γ_n).
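The decomposition (3.3)–(3.5) is easy to check numerically: conditionally on X_1, …, X_{m_n}, each later increment has mean α·W_{m_n}/m_n, so the conditional mean of T_n − W_{m_n} is given by (3.2). A minimal Monte Carlo sketch (all numbers are illustrative choices of ours):

```python
import random

rng = random.Random(2)
p, alpha = 0.7, 0.4                      # alpha = 2p - 1
m, n, reps = 50, 400, 2000

# Fix one realization of X_1, ..., X_m (any +-1 values will do for this check).
x = [rng.choice([-1, 1]) for _ in range(m)]
w_m = sum(x)

# Conditionally on F_infinity, T_n - W_{m_n} is a sum of n - m i.i.d. steps,
# each with mean alpha * W_m / m, so its conditional mean is given by (3.2).
total = 0.0
for _ in range(reps):
    t = 0
    for _ in range(n - m):
        xu = x[rng.randrange(m)]
        t += xu if rng.random() < p else -xu
    total += t
empirical = total / reps
predicted = alpha * (n - m) * w_m / m    # right-hand side of (3.2)
print(empirical, predicted)
```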
First, we prove Theorem 2.1(i). Assume that α ∈ (−1, 1/2). By (3.4) and (2.2),

    γ_nA_n/√m_n = c_nW_{m_n}/√m_n →d {γ + α(1 − γ)}·N(0, 1/(1 − 2α)) as n → ∞.

In terms of characteristic functions, this is equivalent to, for ξ ∈ R,

    E[exp(iξγ_nA_n/√m_n)] → exp(−(ξ²/2)·{γ + α(1 − γ)}²/(1 − 2α)) as n → ∞.    (3.6)

Now we turn to {B_n}. Unless specified otherwise, all the results on {B_n} hold for all
α ∈ (−1, 1). Since, for each n ∈ N, the X_k^{(n)} for k ∈ (m_n, n] ∩ N are independent and identically
distributed under P(· | F_∞),

    B_n = Σ_{k=m_n+1}^{n} {X_k^{(n)} − E[X_k^{(n)} | F_∞]}    (3.7)

is a sum of centered i.i.d. random variables. The conditional variance of Y_k^{(n)} is

    V[Y_k^{(n)} | F_∞] = E[(Y_k^{(n)})² | F_∞] − (E[Y_k^{(n)} | F_∞])²
        = 0                          for k ∈ [1, m_n] ∩ N,
        = 1 − α²·(W_{m_n}/m_n)²      for k ∈ (m_n, n] ∩ N.    (3.8)


We have

    V[γ_nB_n/√m_n | F_∞] = (γ_n²/m_n)·(n − m_n)·{1 − α²·(W_{m_n}/m_n)²} = γ_n(1 − γ_n)·{1 − α²·(W_{m_n}/m_n)²},    (3.9)

which converges to γ(1 − γ) as n → ∞ a.s. by (2.1). Based on this observation, we prove the
following result.
Lemma 3.1. For γ ∈ [0, 1],

    E[exp(iξγ_nB_n/√m_n) | F_∞] → exp(−(ξ²/2)·γ(1 − γ)) as n → ∞ a.s.    (3.10)

Proof. Because B_n is the sum (3.7) of centered i.i.d. random variables under P(· | F_∞), by
(3.8) we have

    E[exp(iξγ_nB_n/√m_n) | F_∞] = (1 − (ξ²γ_n/2n)·{1 − α²·(W_{m_n}/m_n)²} + o(γ_n/n))^{n−m_n} as n → ∞ a.s.

Note that γ_n/n → 0 and (γ_n/n)·(n − m_n) = γ_n(1 − γ_n) → γ(1 − γ) as n → ∞. Now (3.10)
follows from this together with (2.1). □
From (3.5), (3.6), and (3.10), together with the bounded convergence theorem, we can see
that

    E[exp(iξγ_nT_n/√m_n)] = E[exp(iξγ_nA_n/√m_n)·E[exp(iξγ_nB_n/√m_n) | F_∞]]

converges to

    exp(−(ξ²/2)·{γ + α(1 − γ)}²/(1 − 2α))·exp(−(ξ²/2)·γ(1 − γ))

as n → ∞. This gives (2.9).
The proof of Theorem 2.1(ii) is along the same lines as that of (i), and is actually simpler.
Assume that α = 1/2. As c_n(1/2) = (1 + γ_n)/2, from (3.4) and (2.4) we have

    γ_nA_n/√(m_n log m_n) = c_nW_{m_n}/√(m_n log m_n) →d ((1 + γ)/2)·N(0, 1) as n → ∞.

Also, from (3.9) and (2.1), we have

    E[(γ_nB_n/√m_n)²] = γ_n(1 − γ_n)·{1 − α²·E[(W_{m_n}/m_n)²]} → γ(1 − γ) as n → ∞.    (3.11)

This implies that γ_nB_n/√(m_n log m_n) → 0 as n → ∞ in L². By Slutsky's lemma, we obtain
(2.10).
Finally, we prove Theorem 2.1(iii). Assume that α ∈ (1/2, 1). By (3.4) and (2.6),

    γ_nA_n/(m_n)^α = c_nW_{m_n}/(m_n)^α → {γ + α(1 − γ)}M    (3.12)

as n → ∞ a.s. and in L². Noting that γ_nB_n/(m_n)^α → 0 as n → ∞ in L² by (3.11), we obtain the
L²-convergence in (2.11). From (4.2), which will be proved in the next section, the almost-sure
convergence in (2.11) follows. To prove (2.12), by (3.5) we have

    γ_nT_n − c_n·M·(m_n)^α = c_n{W_{m_n} − M·(m_n)^α} + γ_nB_n.    (3.13)

Note that M is F_∞-measurable. Using (2.7), (3.10), and (3.13), we obtain (2.12) similarly to
the proof of (2.9). □

4. Proof of Theorem 2.2


In this section we assume that (1.3), (1.7), and (2.8) hold.
Proof. First we give almost-sure bounds for {B_n}.

Lemma 4.1. For any α ∈ (−1, 1) and γ ∈ [0, 1],

    limsup_{n→∞} γ_nB_n/√(2γ_n(1 − γ_n)m_n log n) ≤ 1 a.s.    (4.1)

In particular, for any c ∈ (1/2, 1),

    lim_{n→∞} γ_nB_n/(m_n)^c = 0 a.s.    (4.2)

Proof. Note that each increment X_k^{(n)} − E[X_k^{(n)} | F_∞] takes values in an interval of
length 2. For λ ∈ R, it follows from Azuma's inequality [2] that

    E[exp(λγ_nB_n) | F_∞] = E[exp(λγ_n Σ_{k=m_n+1}^{n} {X_k^{(n)} − E[X_k^{(n)} | F_∞]}) | F_∞] ≤ exp((λγ_n)²(n − m_n)/2),

and, since γ_n²(n − m_n) = γ_n(1 − γ_n)m_n,

    P(|γ_nB_n| ≥ x) ≤ 2 exp(−x²/(2γ_n(1 − γ_n)m_n)) for x > 0.

Taking x = √(2(1 + ε)γ_n(1 − γ_n)m_n log n) for some ε > 0, we have

    Σ_{n=1}^{∞} P(|γ_nB_n| ≥ √(2(1 + ε)γ_n(1 − γ_n)m_n log n)) ≤ Σ_{n=1}^{∞} 2/n^{1+ε} < ∞.

This, together with the Borel–Cantelli lemma, implies (4.1). To obtain (4.2) note that, for
c ∈ (1/2, 1),

    2γ_n(1 − γ_n)m_n log n/(m_n)^{2c} = 2(1 − γ_n) log n/(n(m_n)^{2c−2}) ≤ 2(1 − γ_n) log n/n^{2c−1} → 0 as n → ∞,

where we used 2c − 2 < 0 < 2c − 1 and m_n ≤ n. □
We now prove (2.14) in Theorem 2.2. Equation (2.13) is readily derived from (2.14). For
the case α ∈ (−1, 1/2), from (2.3) and (3.4) we have

    limsup_{n→∞} γ_nA_n/√(2m_n log log m_n) ≤ {γ + α(1 − γ)}/√(1 − 2α) a.s.

For the case α = 1/2, from (2.5) and (3.4) we have

    limsup_{n→∞} γ_nA_n/√(2m_n log m_n log log log m_n) ≤ (1 + γ)/2 a.s.

By (4.2), if α ∈ (−1, 1/2] then (2.14) holds for any c ∈ (1/2, 1). As for the case α ∈ (1/2, 1),
almost-sure convergence in (2.11) follows from (3.12) and (4.2). Thus, (2.14) holds for any
c ∈ (α, 1). □

5. The ERW with stops in the triangular array setting


Let s ∈ [0, 1], and assume that p, q, r ∈ [0, 1) satisfy p + q + r = 1. We consider a sequence
X_1, X_2, … of random variables taking values in {+1, 0, −1} given by

    X_1 = +1 with probability s,
          −1 with probability 1 − s;    (5.1)

{U_n : n ≥ 1} a sequence of independent random variables, independent of X_1, with U_n having
a uniform distribution over {1, …, n}; and, for n ∈ N,

    X_{n+1} = X_{U_n}   with probability p,
              −X_{U_n}  with probability q,
              0         with probability r.    (5.2)

The ERW with stops {W_n} is defined by W_n = Σ_{k=1}^{n} X_k for n ∈ N. Note that if r = 0 then it is
the standard ERW defined in Section 1. Hereafter we assume that r ∈ (0, 1).

The ERW with stops was introduced in [12]. To describe the limit theorems obtained in [5],
it is convenient to introduce the new parameters α := p − q and β := 1 − r, where β ∈ (0, 1)
and α ∈ [−β, β]. Let ℓ_n denote the number of moves up to time n, i.e.

    ℓ_n := Σ_{k=1}^{n} 1{X_k ≠ 0} = Σ_{k=1}^{n} X_k² for n ∈ N.
k=1 k=1

It is shown in [5] that

    lim_{n→∞} ℓ_n/n^β = Λ > 0 a.s. and in L²,    (5.3)

where Λ has a Mittag–Leffler distribution with parameter β. We turn to the central limit
theorem for {W_n} in [5].

• For α ∈ [−β, β/2),

    W_n/√ℓ_n →d N(0, β/(β − 2α)) as n → ∞.    (5.4)

• For α = β/2,

    W_n/√(ℓ_n log n) →d N(0, 1) as n → ∞.    (5.5)

• For α ∈ (β/2, β], there exists a random variable M such that

    lim_{n→∞} W_n/n^α = M a.s. and in L²    (5.6)

and

    (W_n − Mn^α)/√ℓ_n →d N(0, β/(2α − β)) as n → ∞,    (5.7)

where P(M > 0) > 0.
Next, we define the sequence {T_n} as in (1.6); however, Y_k^{(n)} and X_k^{(n)} of (1.4) and (1.5) are
defined with {X_i} as in (5.1) and (5.2) instead of (1.1) and (1.2). We call this the ERW with
stops in the triangular array setting.

Our first result of this section is an extension of [10, Theorem 4.1]. We note here that [10]
allows X_1 to take the value 0 with probability r, unlike this paper. As such, they have an extra δ_0 in
their results for the case γ = 0.
Theorem 5.1. Let β ∈ (0, 1) and α ∈ [−β, β]. Assume that {m_n : n ∈ N} satisfies (1.3), (1.7),
and (2.8).

(i) If α ∈ [−β, β/2) then

    γ_nT_n/√ℓ_{m_n} →d N(0, β{γ + α(1 − γ)}²/(β − 2α) + βγ(1 − γ)) as n → ∞.    (5.8)

(ii) If α = β/2 then

    γ_nT_n/√(ℓ_{m_n} log m_n) →d N(0, {γ + β(1 − γ)/2}²) as n → ∞.    (5.9)

(iii) If α ∈ (β/2, β] then

    lim_{n→∞} γ_nT_n/(m_n)^α = {γ + α(1 − γ)}M in L²,    (5.10)

where M is the random variable in (5.6). Moreover,

    (γ_nT_n − M·{γ_n + α(1 − γ_n)}·(m_n)^α)/√ℓ_{m_n} →d N(0, β{γ + α(1 − γ)}²/(2α − β) + βγ(1 − γ)) as n → ∞.    (5.11)

Remark 5.1. Unlike the results in [10], we have a random normalization in the results above.
This is because we consider the general case γ ∈ [0, 1]. We can obtain the L⁴-convergence in
(5.10) using Burkholder's inequality as in [1, (3.15)].
We also consider the process {L_n : n ∈ N} defined by L_n := Σ_{k=1}^{n} {Y_k^{(n)}}² for n ∈ N. The
next theorem is an improvement of [10, Theorem 4.2].

Theorem 5.2. Under the same conditions as in Theorem 5.1, we have

    lim_{n→∞} γ_nL_n/(m_n)^β = {γ + β(1 − γ)}Λ in L²,    (5.12)

where Λ is defined in (5.3).
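The Mittag–Leffler normalization can be probed numerically: writing ℓ_n for the number of moves of the ERW with stops up to time n, (5.3) together with [5, (4.4)] gives E[ℓ_n]/n^β → 1/Γ(1 + β). A minimal Monte Carlo sketch (all parameter values are illustrative choices of ours):

```python
import math
import random

def moves_count(n, p, q, s=0.5, rng=random):
    # number of nonzero steps of the ERW with stops up to time n, cf. (5.1)-(5.2)
    x = [1 if rng.random() < s else -1]
    for k in range(1, n):
        u = rng.random()
        xu = x[rng.randrange(k)]
        x.append(xu if u < p else -xu if u < p + q else 0)
    return sum(xi != 0 for xi in x)

rng = random.Random(3)
p, q = 0.30, 0.15                  # r = 0.55, so beta = 1 - r = 0.45
beta, n, reps = 0.45, 2000, 200
avg = sum(moves_count(n, p, q, rng=rng) for _ in range(reps)) / reps
print(avg / n**beta, 1 / math.gamma(1 + beta))   # empirical vs E[Lambda]
```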


The strong law of large numbers and its refinement can also be obtained for the ERW with
stops.

Theorem 5.3. Under the same conditions as in Theorem 5.1, we have (2.13). In addition, (2.14)
holds for c ∈ (max{α, 1/2}, 1).

Remark 5.2. Assume that β ∈ (1/2, 1). As a by-product of the proof of Theorem 5.3, we can
prove the a.s. convergence in (5.12). The a.s. convergence in (5.10) is valid for α ∈ (1/2, β].

6. Proof of Theorem 5.1


Proof. Noting that p = (β + α)/2 and q = (β − α)/2, for n ∈ N we have

    P(X_{n+1} = ±1 | F_n) = (#{k = 1, …, n : X_k = ±1}/n)·p + (#{k = 1, …, n : X_k = ∓1}/n)·q
                          = (1/2)·(β·ℓ_n/n ± α·W_n/n).

For k ∈ (m_n, n] ∩ N, we have

    P(X_k^{(n)} = ±1 | G_{k−1}^{(n)}) = (1/2)·(β·ℓ_{m_n}/m_n ± α·W_{m_n}/m_n),    (6.1)
    P({X_k^{(n)}}² = 1 | G_{k−1}^{(n)}) = β·ℓ_{m_n}/m_n.    (6.2)

From (6.1), we see that (3.1) and (3.2) continue to hold in this setting. Defining {A_n} and {B_n}
by (3.3), we note that they satisfy (3.4) and (3.5).

We prepare a lemma about {B_n}.

Lemma 6.1. Under the assumptions of Theorem 5.1:

(i) For α ∈ [−β, β] and ξ ∈ R,

    E[exp(iξγ_nB_n/√ℓ_{m_n}) | F_∞] → exp(−(ξ²/2)·βγ(1 − γ)) as n → ∞ a.s.,    (6.3)

    E[exp(iξγ_nB_n/√(ℓ_{m_n} log m_n)) | F_∞] → 1 as n → ∞ a.s.    (6.4)

(ii) If α ∈ (β/2, β] then γ_nB_n/(m_n)^α → 0 as n → ∞ in L².


Proof. Note that E[B_n | F_∞] = 0. By (6.1) and (6.2),

    V[X_k^{(n)} | F_∞] = β·ℓ_{m_n}/m_n − α²·(W_{m_n}/m_n)²

for k ∈ (m_n, n] ∩ N. As in (3.9), we have

    V[B_n | F_∞] = (n − m_n)·{β·ℓ_{m_n}/m_n − α²·(W_{m_n}/m_n)²} = ((1 − γ_n)/γ_n)·{β·ℓ_{m_n} − α²·(W_{m_n})²/m_n}.    (6.5)

From this,

    V[γ_nB_n/√ℓ_{m_n} | F_∞] = γ_n(1 − γ_n)·{β − α²·((m_n)^β/ℓ_{m_n})·(W_{m_n}/(m_n)^{(1+β)/2})²}.    (6.6)

For any β ∈ (0, 1) and α ∈ [−β, β], we show that

    lim_{n→∞} W_{m_n}/(m_n)^{(1+β)/2} = 0 a.s.    (6.7)

Indeed, if α ∈ [−β, β/2) then

    limsup_{n→∞} W_n/√(2n^β log log n) = √(β/(β − 2α)) a.s.    (6.8)

by [5, (3.5)]. If α = β/2 then

    limsup_{n→∞} W_n/√(2n^β log n log log log n) = √β a.s.    (6.9)

by [5, (3.13)]. If α ∈ (β/2, β] then W_{m_n}/(m_n)^α → M as n → ∞ a.s. by (5.6), and (1 + β)/2 > α
since 2α − β ≤ β < 1. In any case we have (6.7). Since (m_n)^β/ℓ_{m_n} → 1/Λ as n → ∞ a.s. by
(5.3), we see that (6.6) converges to βγ(1 − γ) as n → ∞ a.s., and

    V[γ_nB_n/√(ℓ_{m_n} log m_n) | F_∞] → 0 as n → ∞ a.s.

By a computation similar to that in Lemma 4.1, we obtain (6.3) and (6.4) in (i).

Next, we consider (ii). By (6.5),

    E[(γ_nB_n/(m_n)^α)²] = γ_n(1 − γ_n)·{β·E[ℓ_{m_n}]/(m_n)^{2α} − α²·E[(W_{m_n})²]/(m_n)^{1+2α}}.

From [5, (A.6)], E[(W_n)²] ∼ n^{2α}/{(2α − β)Γ(2α)} as n → ∞. On the other hand, from [5,
(4.4)] we can see that E[ℓ_n] ∼ n^β/Γ(1 + β) as n → ∞. Noting that β < 2α, we have (ii). □

Assume that α ∈ [−β, β/2). By (3.4) and (5.4), we have

    γ_nA_n/√ℓ_{m_n} = c_nW_{m_n}/√ℓ_{m_n} →d {γ + α(1 − γ)}·N(0, β/(β − 2α)) as n → ∞.

Combining this and (6.3), we can prove (5.8) by the same method as for (2.9). Next, we
consider the case α = β/2. By (3.4) and (5.5), we have

    γ_nA_n/√(ℓ_{m_n} log m_n) = c_nW_{m_n}/√(ℓ_{m_n} log m_n) →d {γ + α(1 − γ)}·N(0, 1) as n → ∞.

This together with (6.4) gives (5.9). As for the case α ∈ (β/2, β], by (3.4) and (5.6),

    γ_nA_n/(m_n)^α = c_nW_{m_n}/(m_n)^α → {γ + α(1 − γ)}M as n → ∞ a.s. and in L².

Now (5.10) follows from Lemma 6.1(ii). The proof of (5.11) is almost identical to that of
(2.12): use (3.5), (5.7), and (6.3). □


7. Proof of Theorem 5.2


Proof. Put A′_n := E[L_n | F_∞] and B′_n := L_n − A′_n, where L_n := Σ_{k=1}^{n} {Y_k^{(n)}}² denotes the
number of moves of the triangular-array walk up to time n. Using (6.2), we can see that
γ_nA′_n = c_n(β)·ℓ_{m_n}, which together with (5.3) implies

    γ_nA′_n/(m_n)^β = c_n(β)·ℓ_{m_n}/(m_n)^β → {γ + β(1 − γ)}·Λ as n → ∞ a.s. and in L².

As for B′_n, again by (6.2) we can see that

    V[γ_nB′_n/(m_n)^β | F_∞] = (γ_n²/(m_n)^{2β})·Σ_{k=m_n+1}^{n} V[{X_k^{(n)}}² | F_∞]
        = (γ_n²/(m_n)^{2β})·(n − m_n)·(β·ℓ_{m_n}/m_n)·(1 − β·ℓ_{m_n}/m_n),

and hence

    E[(γ_nB′_n/(m_n)^β)²] = (βγ_n(1 − γ_n)/(m_n)^β)·E[(ℓ_{m_n}/(m_n)^β)·(1 − β·ℓ_{m_n}/m_n)].

Since β < 1 and ℓ_{m_n}/(m_n)^β converges to Λ in L² by (5.3), we have

    E[(ℓ_{m_n}/(m_n)^β)·(1 − β·ℓ_{m_n}/m_n)] = E[ℓ_{m_n}/(m_n)^β] − (β/(m_n)^{1−β})·E[(ℓ_{m_n}/(m_n)^β)²] → E[Λ] as n → ∞.

Noting that β > 0, this shows that γ_nB′_n/(m_n)^β → 0 as n → ∞ in L², which completes the
proof. □

8. Proof of Theorem 5.3


Proof. The proof of Lemma 4.1 is based on the fact that the increments X_k^{(n)} − E[X_k^{(n)} | F_∞]
are bounded. Thus, {B_n} for the ERW with stops in the triangular array setting also satisfies (4.2)
for any c ∈ (1/2, 1). If α ∈ [−β, β/2] then, from (3.4), (6.8), and (6.9), we can see that
γ_nA_n = o((m_n)^c) for any c ∈ (β/2, 1). If α ∈ (β/2, β] then (3.4) and (5.6) imply that
γ_nA_n = o((m_n)^c) for any c ∈ (α, 1). In any case, (2.14) holds for c ∈ (max{α, 1/2}, 1). □


Acknowledgements
R.R. thanks Keio University for its hospitality during multiple visits. M.T. and H.T. thank
the Indian Statistical Institute for its hospitality.

Funding information
M.T. is partially supported by JSPS KAKENHI Grant Numbers JP19H01793, JP19K03514,
and JP22K03333. H.T. is partially supported by JSPS KAKENHI Grant Numbers
JP19K03514, JP21H04432, and JP23H01077.

Competing interests
The authors have no competing interests to declare that arose during the preparation or
publication of this article.


References
[1] Aguech, R. and El Machkouri, M. (2024). Gaussian fluctuations of the elephant random walk with gradually increasing memory. J. Phys. A 57, 065203. Corrigendum: J. Phys. A 57, 349501.
[2] Azuma, K. (1967). Weighted sums of certain dependent random variables. Tôhoku Math. J. (2) 19, 357–367.
[3] Baur, E. and Bertoin, J. (2016). Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134.
[4] Bercu, B. (2018). A martingale approach for the elephant random walk. J. Phys. A 51, 015201.
[5] Bercu, B. (2022). On the elephant random walk with stops playing hide and seek with the Mittag–Leffler distribution. J. Statist. Phys. 189, 12.
[6] Coletti, C. F., Gava, R. J. and Schütz, G. M. (2017). Central limit theorem for the elephant random walk. J. Math. Phys. 58, 053303.
[7] Coletti, C. F., Gava, R. J. and Schütz, G. M. (2017). A strong invariance principle for the elephant random walk. J. Statist. Mech. 2017, 123207.
[8] Guérin, H., Laulin, L. and Raschel, K. (2023). A fixed-point equation approach for the superdiffusive elephant random walk. Preprint, arXiv:2308.14630.
[9] Gut, A. and Stadtmüller, U. (2021). Variations of the elephant random walk. J. Appl. Prob. 58, 805–829.
[10] Gut, A. and Stadtmüller, U. (2022). The elephant random walk with gradually increasing memory. Statist. Prob. Lett. 189, 109598.
[11] Kubota, N. and Takei, M. (2019). Gaussian fluctuation for superdiffusive elephant random walks. J. Statist. Phys. 177, 1157–1171.
[12] Kumar, N., Harbola, U. and Lindenberg, K. (2010). Memory-induced anomalous dynamics: emergence of diffusion. Phys. Rev. E 82, 021101.
[13] Laulin, L. (2022). Autour de la marche aléatoire de l'éléphant [About the elephant random walk]. Thesis, Université de Bordeaux.
[14] Qin, S. (2023). Recurrence and transience of multidimensional elephant random walks. Preprint, arXiv:2309.09795.
[15] Schütz, G. M. and Trimper, S. (2004). Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101.
