
Quaestiones Mathematicae

ISSN: (Print) (Online) Journal homepage: www.tandfonline.com/journals/tqma20

Complete convergence for the maximal partial sums without maximal inequalities

Fakhreddine Boukhari, Nguyen Chi Dzung & Lê Vǎn Thành

To cite this article: Fakhreddine Boukhari, Nguyen Chi Dzung & Lê Vǎn Thành (06 Mar 2024): Complete convergence for the maximal partial sums without maximal inequalities, Quaestiones Mathematicae, DOI: 10.2989/16073606.2024.2323150

To link to this article: https://doi.org/10.2989/16073606.2024.2323150

Published online: 06 Mar 2024.

© 2024 NISC (Pty) Ltd
Quaestiones Mathematicae 2024: 1–16. This is the final version of the article that is published ahead of the print and online issue.
https://doi.org/10.2989/16073606.2024.2323150

COMPLETE CONVERGENCE FOR THE MAXIMAL PARTIAL SUMS WITHOUT MAXIMAL INEQUALITIES

Fakhreddine Boukhari
Laboratoire de Statistique et Modélisations Aléatoires, Faculty of Sciences, Abou Bekr Belkaid University, BP 119, Tlemcen 13000, Algeria.
E-mail: [email protected]

Nguyen Chi Dzung
Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi 10307, Vietnam.
E-mail: [email protected]

Lê Vǎn Thành*
Department of Mathematics, Vinh University, 182 Le Duan, Vinh, Nghe An, Vietnam.
E-mail: [email protected]

Abstract. This work provides the necessary and sufficient conditions for complete
convergence for the maximal partial sums of dependent random variables. The results
are proved without using maximal inequalities. The main theorems can be applied
to sequences of (i) m-pairwise negatively dependent random variables and (ii) m-
extended negatively dependent random variables. While the result for case (i) unifies
and improves many existing ones, the result for case (ii) complements the main
theorem of Chen et al. [J. Appl. Probab., 2010]. Affirmative answers to open
questions raised by Chen et al. [J. Math. Anal. Appl., 2014], and Wu and Rosalsky
[Glas. Mat. Ser. III, 2015] are also given. Two examples illustrating the sharpness
of the main result are presented.

Mathematics Subject Classification (2020): 60F15.


Key words: Complete convergence, rate of convergence, maximal inequality, dependent
random variables, regularly varying function.

* Corresponding author.

Quaestiones Mathematicae is co-published by NISC (Pty) Ltd and Informa UK Limited (trading as the Taylor & Francis Group).

1. Introduction and the main result. This work is an improvement of the arXiv preprint [10]. Let {X, Xn, n ≥ 1} be a sequence of pairwise independent and identically distributed (p.i.i.d.) random variables. Etemadi [11] was the first author to prove the Kolmogorov strong law of large numbers (SLLN)
\[
\lim_{n\to\infty} \frac{\sum_{i=1}^{n}(X_i - \mathbb{E}X_i)}{n} = 0 \quad \text{almost surely (a.s.)}
\]

under the optimal moment condition E|X| < ∞ without using the maximal inequalities. The Etemadi result was further extended to random fields by Fazekas and Tómács [12], in which the authors also obtained the rate of convergence. The problem of proving the Marcinkiewicz–Zygmund SLLN for p.i.i.d. random variables under an optimal moment condition is more challenging, and Etemadi's method in [11] does not seem to work if the normalizing constants are of the form n^{1/p} with p > 1. Let 1 < p < 2. Martikainen [17] proved that if E(|X|^p log^β |X|) < ∞ for some β > max{0, 4p − 6}, then the Marcinkiewicz–Zygmund SLLN holds, i.e.,
\[
\lim_{n\to\infty} \frac{\sum_{i=1}^{n}(X_i - \mathbb{E}X_i)}{n^{1/p}} = 0 \quad \text{a.s.} \tag{1.1}
\]
Here and hereafter, for x ≥ 0 and β ∈ R, we denote the natural logarithm of max{x, e} by log x, and write log^β x = (log x)^β. As far as we know, Rio [20] is the first author who proved the Marcinkiewicz–Zygmund SLLN (1.1) under the optimal moment condition E|X|^p < ∞. Anh et al. [1] recently proved the Marcinkiewicz–Zygmund-type SLLN with norming constants of the form n^{1/p} L̃(n^{1/p}), n ≥ 1, where L̃(·) is the de Bruijn conjugate of a slowly varying function L(·). However, the proof in [1] is based on a maximal inequality for negatively associated random variables, which is no longer available even for pairwise independent random variables.
In this paper, we use Rio's method [20] and the theory of regularly varying functions to derive rates of convergence in the SLLN under optimal moment conditions. Although Rio's result was extended by Thành [24], that work only considered sums of p.i.i.d. random variables. The motivation of the present paper is that many other dependence structures, such as pairwise negative dependence and extended negative dependence, do not enjoy a Kolmogorov-type maximal inequality (see, e.g., [4, 8, 22, 29] and the references therein). In contrast to [24], we explore the scenario where the involved family of random variables is not necessarily stochastically dominated, and establish a Baum–Katz-type theorem under a uniformly bounded moment condition. We also provide a necessary condition for the convergence of the Baum–Katz series.
The main result of this paper is the following theorem. To the best of our knowledge, Theorem 1.1 and Corollary 1.2 are new even when the underlying sequence is comprised of independent random variables.
Theorem 1.1. Let 1 ≤ p < 2, and let {Xn, n ≥ 1} be a sequence of random variables. Assume that there exists a universal constant C such that for all nondecreasing functions fi, i ≥ 1,
\[
\operatorname{Var}\left(\sum_{i=k+1}^{k+\ell} f_i(X_i)\right) \le C \sum_{i=k+1}^{k+\ell} \operatorname{Var}(f_i(X_i)), \quad k \ge 0,\ \ell \ge 1, \tag{1.2}
\]
provided the variances exist. Let L(·) be a slowly varying function defined on [0, ∞) and let L̃(·) be the de Bruijn conjugate of L(·). When p = 1, we assume further that L(x) ≥ 1 and is increasing on [0, ∞). If
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p L^p(|X_n|) \log(|X_n|) \log^2(\log|X_n|)\right) < \infty, \tag{1.3}
\]
then for all α ≥ 1/p, we have
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{1.4}
\]
Conversely, if
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - c_i)\right| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\right) < \infty \quad \text{for all } \varepsilon > 0, \tag{1.5}
\]
where {ci, i ≥ 1} is a sequence of real numbers, then
\[
\sum_{n\ge 1} n^{\alpha p-2} \sum_{i=1}^{n} \mathbb{P}\left(|X_i - c_i| > n^{\alpha}\tilde{L}(n^{\alpha})\right) < \infty. \tag{1.6}
\]

Remark 1. (i) Many dependence structures enjoy (1.2), such as negative association, pairwise independence, pairwise negative dependence, extended negative dependence, various mixing sequences, etc. The SLLN for sequences and fields of random variables satisfying these dependence structures was studied by many authors. We refer to [2, 9, 12, 14, 15, 16, 19] and the references therein.

(ii) Theorem 1.1 can fail if Condition (1.2) is not satisfied (see Example 2.1).

(iii) Condition (1.3) is very sharp. Even when the involved random variables are independent, Example 3.1 in Section 3 shows that Theorem 1.1 may fail if (1.3) is weakened to
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p L^p(|X_n|) \log(|X_n|) \log(\log|X_n|)\right) < \infty.
\]

Considering the special interesting case α = 1/p and L(x) ≡ 1, we obtain the following corollary.

Corollary 1.2. Let 1 ≤ p < 2, and let {Xn, n ≥ 1} be a sequence of random variables satisfying Condition (1.2). If
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p \log(|X_n|) \log^2(\log|X_n|)\right) < \infty, \tag{1.7}
\]
then
\[
\sum_{n=1}^{\infty} n^{-1}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon n^{1/p}\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{1.8}
\]

Remark 2. (i) Since the sequence {max_{1≤j≤n} |Σ_{i=1}^{j}(Xi − EXi)|, n ≥ 1} is nondecreasing, it follows from (1.8) that the SLLN (1.1) holds.

(ii) For the SLLN under the uniformly bounded moment condition, Baxter et al. [3] proved (1.1) under the assumptions that the sequence {Xn, n ≥ 1} is independent and sup_{n≥1} E|Xn|^r < ∞ for some r > p. This condition is much stronger than (1.7). Baxter et al. [3] studied the SLLN for weighted sums, and their method does not give the rate of convergence as in Corollary 1.2.

(iii) For a sequence of p.i.i.d. random variables {X, Xn, n ≥ 1}, Chen et al. [7] obtained (1.8) under the condition that E(|X|^p log^r |X|) < ∞ for some 1 < p < r < 2. In Corollary 1.2, the moment Condition (1.7) is weaker than that of Chen et al. [7].
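The rate in Corollary 1.2 can also be observed numerically. The sketch below is an illustration only, not part of the paper's argument; it assumes i.i.d. standard normal variables (our choice, which satisfies (1.7) for every 1 ≤ p < 2) and computes the normalized maximal partial sum appearing in (1.8) along a single trajectory:

```python
import random

# Illustration only (not from the paper): for i.i.d. N(0,1) variables, which
# satisfy the moment condition (1.7) for every 1 <= p < 2, the normalized
# maximal partial sum max_{1<=j<=n} |S_j| / n^{1/p} should be small for large
# n, consistent with the complete convergence statement (1.8).
random.seed(2024)
p = 1.5
n = 100_000
s = 0.0
max_abs_s = 0.0
for _ in range(n):
    s += random.gauss(0.0, 1.0)
    max_abs_s = max(max_abs_s, abs(s))

# By the central limit theorem, max_abs_s is of order n^{1/2}, so the ratio
# decays like n^{1/2 - 1/p} (here n^{-1/6}).
ratio = max_abs_s / n ** (1.0 / p)
```

The point of Corollary 1.2 is that the summability of the tail probabilities in (1.8) holds under (1.7) without any maximal inequality; the simulation merely shows the decay one expects.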
The rest of the paper is arranged as follows. Section 2 presents a complete con-
vergence result for sequences of dependent random variables with regularly varying
normalizing constants under a stochastic domination condition. The proof of The-
orem 1.1 is given in Section 3. Finally, Section 4 contains corollaries and remarks
comparing our results and the ones in the literature. As for the notation, we shall
write un = o(vn ) (resp., un ≍ vn ) to indicate that un /vn → 0 as n tends to infinity
(resp., c1 un ≤ vn ≤ c2 un for large values of n and some positive constants c1 , c2 ).

2. Complete convergence for the maximal partial sums of dependent random variables with regularly varying normalizing constants under a stochastic domination condition. In this section, we use Rio's method [20] to obtain complete convergence for sums of dependent random variables with regularly varying normalizing constants under a stochastic domination condition.

A family of random variables {Xi, i ∈ I} is said to be stochastically dominated by a random variable X if
\[
\sup_{i\in I} \mathbb{P}(|X_i| > t) \le \mathbb{P}(|X| > t) \quad \text{for all } t \ge 0. \tag{2.1}
\]
We note that many authors use an apparently weaker definition of {Xi, i ∈ I} being stochastically dominated by a random variable Y, namely that
\[
\sup_{i\in I} \mathbb{P}(|X_i| > t) \le C\, \mathbb{P}(|Y| > t) \quad \text{for all } t \ge 0 \tag{2.2}
\]
for some constant C ∈ (0, ∞). However, it is shown by Rosalsky and Thành [21] that (2.1) and (2.2) are indeed equivalent.
Let ρ ∈ R. A real-valued function R(·) is said to be regularly varying (at infinity) with index of regular variation ρ if it is a positive and measurable function on [A, ∞) for some A ≥ 0, and for each λ > 0,
\[
\lim_{x\to\infty} \frac{R(\lambda x)}{R(x)} = \lambda^{\rho}.
\]
A regularly varying function with index of regular variation ρ = 0 is called slowly varying (at infinity). If L(·) is a slowly varying function, then by Theorem 1.5.13 in Bingham et al. [5], there exists a slowly varying function L̃(·), unique up to asymptotic equivalence, satisfying
\[
\lim_{x\to\infty} L(x)\tilde{L}(xL(x)) = 1 \quad \text{and} \quad \lim_{x\to\infty} \tilde{L}(x)L(x\tilde{L}(x)) = 1. \tag{2.3}
\]
The function L̃ is called the de Bruijn conjugate of L, and (L, L̃) is called a (slowly varying) conjugate pair (see, e.g., p. 29 in Bingham et al. [5]). If L(x) = log^γ x or L(x) = log^γ(log x) for some γ ∈ R, then L̃(x) = 1/L(x). In particular, if L(x) ≡ 1, then L̃(x) ≡ 1.
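As a quick sanity check (ours, not from [5]), the first identity in (2.3) can be verified directly for the conjugate pair L(x) = log^γ x, L̃(x) = log^{−γ} x:

```latex
L(x)\,\tilde{L}\big(xL(x)\big)
  = \log^{\gamma}\!x \cdot \log^{-\gamma}\!\big(x\log^{\gamma}\!x\big)
  = \left(\frac{\log x}{\log x + \gamma\,\log\log x}\right)^{\!\gamma}
  \longrightarrow 1 \quad (x\to\infty),
```

since log log x = o(log x); the second identity in (2.3) is checked in the same way.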
Here and hereafter, for a slowly varying function L(·), we denote the de Bruijn conjugate of L(·) by L̃(·). Throughout, we will assume, without loss of generality, that L(x) and L̃(x) are both continuous on [0, ∞) and differentiable on [A, ∞) for some A ≥ 0, and that x^γ L(x) and x^γ L̃(x) are both strictly increasing on [0, ∞) for all γ > 0 (see Thành [26, p. 578]). We also assume that (see Lemma 2.2 in Anh et al. [1])
\[
\lim_{x\to\infty} \frac{xL'(x)}{L(x)} = 0 \quad \text{and} \quad \lim_{x\to\infty} \frac{x\tilde{L}'(x)}{\tilde{L}(x)} = 0. \tag{2.4}
\]

The following theorem establishes complete convergence for the maximal partial sums of dependent random variables without using Kolmogorov-type maximal inequalities. For the special case where L(x) = log^α x with α ≥ 0, Miao et al. [18] proved Theorem 2.1 for sequences of negatively associated random variables, which do enjoy the Kolmogorov maximal inequality. The main contribution of our result is that it can be applied to dependence structures for which Kolmogorov-type maximal inequalities may not hold.
Theorem 2.1. Let 1 ≤ p < 2 and let {Xn, n ≥ 1} be a sequence of random variables satisfying Condition (1.2). Let L(·) be as in Theorem 1.1. If {Xn, n ≥ 1} is stochastically dominated by a random variable X, and
\[
\mathbb{E}\left(|X|^p L^p(|X|)\right) < \infty, \tag{2.5}
\]
then for all α ≥ 1/p, we have
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{2.6}
\]

We only sketch the proof of Theorem 2.1 and refer the reader to the proof of Theorem 1 in Thành [24] for details. The main difference here is that we have to work with nonnegative random variables so that, after applying certain truncation techniques (see (2.8) and (2.9) below), the new random variables still satisfy Condition (1.2).
Sketch proof of Theorem 2.1. Since {Xn^+, n ≥ 1} and {Xn^−, n ≥ 1} satisfy the assumptions of the theorem and Xn = Xn^+ − Xn^−, n ≥ 1, without loss of generality we can assume that Xn ≥ 0 for all n ≥ 1. For n ≥ 1, set
\[
b_n = n^{\alpha}\tilde{L}(n^{\alpha}), \tag{2.7}
\]
\[
X_{i,n} = X_i\,\mathbf{1}(X_i \le b_n) + b_n\,\mathbf{1}(X_i > b_n), \quad 1 \le i \le n, \tag{2.8}
\]
and
\[
Y_{i,m} = X_{i,2^m} - X_{i,2^{m-1}} - \mathbb{E}\left(X_{i,2^m} - X_{i,2^{m-1}}\right), \quad m \ge 1,\ i \ge 1. \tag{2.9}
\]
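A point worth making explicit (our observation): because the Xi are now nonnegative, the truncation (2.8) can be rewritten as

```latex
X_{i,n} = X_i\,\mathbf{1}(X_i \le b_n) + b_n\,\mathbf{1}(X_i > b_n) = \min\{X_i, b_n\},
```

and x ↦ min{x, b_n} is nondecreasing; likewise Y_{i,m} is, up to centering, the nondecreasing function min{x, b_{2^m}} − min{x, b_{2^{m−1}}} of X_i. Hence the truncated arrays inherit Condition (1.2) with the same constant C, which is exactly what (2.14) and (2.15) require.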
Since x L̃(x) is strictly increasing on [0, ∞), {bn, n ≥ 0} is a strictly increasing sequence. It is easy to see that (2.6) is equivalent to
\[
\sum_{n=1}^{\infty} 2^{n(\alpha p-1)}\, \mathbb{P}\left(\max_{1\le j< 2^n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_{2^n}\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{2.10}
\]

It follows from the stochastic domination condition and the definition of b_n that
\[
0 \le \mathbb{E}\left(X_{i,2^m} - X_{i,2^{m-1}}\right) \le \mathbb{E}\left(|X|\,\mathbf{1}(|X| > b_{2^{m-1}})\right). \tag{2.11}
\]
Using (2.11) and proceeding in a similar manner as in Thành [24, Equation (23)], the proof of (2.10) will be completed if we can show that
\[
\sum_{n=1}^{\infty} 2^{n(\alpha p-1)}\, \mathbb{P}\left(\max_{1\le j< 2^n}\left|\sum_{i=1}^{j}(X_{i,2^n} - \mathbb{E}X_{i,2^n})\right| \ge \varepsilon b_{2^{n-1}}\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{2.12}
\]
For m ≥ 0, set S_{0,m} = 0 and
\[
S_{j,m} = \sum_{i=1}^{j}(X_{i,2^m} - \mathbb{E}X_{i,2^m}), \quad j \ge 1.
\]
For 1 ≤ j < 2^n and 0 ≤ m ≤ n, let k_{j,m} = ⌊j/2^m⌋ be the greatest integer which is less than or equal to j/2^m, and let j_m = k_{j,m} 2^m. Then (see Thành [24, Equation (28)])
\[
\max_{1\le j<2^n} |S_{j,n}| \le \sum_{m=1}^{n} \max_{0\le k<2^{n-m}} \left|\sum_{i=k2^m+1}^{k2^m+2^{m-1}} \left(X_{i,2^{m-1}} - \mathbb{E}X_{i,2^{m-1}}\right)\right| + \sum_{m=1}^{n} \max_{0\le k<2^{n-m}} \left|\sum_{i=k2^m+1}^{(k+1)2^m} Y_{i,m}\right| + \sum_{m=1}^{n} 2^{m+1}\, \mathbb{E}\left(|X|\,\mathbf{1}(|X| > b_{2^{m-1}})\right). \tag{2.13}
\]

Combining (1.2), (2.8), and (2.9), we have for all m ≥ 1,
\[
\mathbb{E}\left(\sum_{i=k+1}^{k+\ell}\left(X_{i,2^{m-1}} - \mathbb{E}X_{i,2^{m-1}}\right)\right)^{2} \le C \sum_{i=k+1}^{k+\ell} \mathbb{E}X_{i,2^{m-1}}^{2}, \quad k \ge 0,\ \ell \ge 1, \tag{2.14}
\]
and
\[
\mathbb{E}\left(\sum_{i=k+1}^{k+\ell} Y_{i,m}\right)^{2} \le C \sum_{i=k+1}^{k+\ell} \mathbb{E}Y_{i,m}^{2}, \quad k \ge 0,\ \ell \ge 1. \tag{2.15}
\]
By using (2.13)–(2.15) and proceeding in a similar manner as in pages 1236–1238 of Thành [24], we obtain (2.12). □
Remark 3. When 0 < p < 1, we can show that Theorems 1.1 and 2.1 hold irrespective of the dependence structure of the underlying sequence of random variables (see Theorems 3.1 and 3.2 of Boukhari [6]). However, for the case 1 ≤ p < 2, the following simple example shows that these theorems can fail if the involved random variables do not satisfy (1.2).

Example 2.1. Let Xn ≡ X, where X is a Bernoulli random variable with P(X = ±1) = 1/2. It is easy to see that Condition (1.2) fails. Let 1 ≤ p < 2 and consider the case where L(x) ≡ 1 and α = 1/p ≤ 1. Since X is a bounded random variable, Conditions (1.3) and (2.5) are both satisfied. We have with probability 1 that
\[
|X_1 + \cdots + X_n| = n|X| = n \ge n^{\alpha},
\]
and therefore for all 0 < ε ≤ 1,
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\right) = \sum_{n=1}^{\infty} n^{-1}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j} X_i\right| > \varepsilon n^{\alpha}\right) \ge \sum_{n=1}^{\infty} n^{-1} = \infty.
\]
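The divergence in Example 2.1 is elementary enough to check mechanically. The sketch below (an illustration only, not part of the paper) fixes the realization X = 1, confirms that the maximal partial sum equals n, and checks that partial sums of the resulting series Σ 1/n grow without bound like log N:

```python
import math

# Illustration of Example 2.1 (ours, not from the paper). On the event X = 1
# (the case X = -1 is symmetric), S_j = j*X = j, so max_{1<=j<=n} |S_j| = n,
# which exceeds n^alpha for alpha <= 1; hence for 0 < eps <= 1 every term of
# the Baum-Katz series with alpha*p = 1 reduces to 1/n.
X = 1
n = 1000
max_abs_partial_sum = max(abs(X * j) for j in range(1, n + 1))

# The series therefore dominates the harmonic series, whose partial sums grow
# like log N and hence diverge.
N = 10 ** 5
harmonic_partial_sum = sum(1.0 / k for k in range(1, N + 1))
```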

The next theorem shows that the moment condition (2.5) in Theorem 2.1 is optimal.

Theorem 2.2. Let 1 ≤ p < 2, 1/p ≤ α ≤ 1, and let {X, Xn, n ≥ 1} be a sequence of identically distributed random variables satisfying (1.2). Let L(·) be a slowly varying function defined on [0, ∞). When α = 1, we assume further that L(x) ≥ 1 and is increasing on [0, ∞). If for some constant c,
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - c)\right| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\right) < \infty \quad \text{for all } \varepsilon > 0, \tag{2.16}
\]
then E(|X|^p L^p(|X|)) < ∞ and EX = c.

Proof. Let b_n = n^α L̃(n^α), n ≥ 1. Note again that we can assume that b_n is strictly increasing (see, e.g., Proposition B.1.9 in [13]). A direct application of the second portion of Theorem 1.1 with ci ≡ c yields
\[
\sum_{n\ge 1} n^{\alpha p-1}\, \mathbb{P}\left(|X - c| > b_n\right) < \infty.
\]
Employing Proposition 2.6 of [1], we obtain
\[
\mathbb{E}\left(|X - c|^p L^p(|X - c|)\right) < \infty. \tag{2.17}
\]
Since L(·) is slowly varying and c is a constant, (2.17) implies E(|X|^p L^p(|X|)) < ∞. Applying Theorem 2.1, we obtain
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_n\right) < \infty \quad \text{for all } \varepsilon > 0. \tag{2.18}
\]
Let ε > 0 be arbitrary. Since αp ≥ 1 and 0 < b_n ↑, we have from (2.18) that
\[
\begin{aligned}
\infty &> \sum_{n=1}^{\infty} n^{\alpha p-2}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_n\right) \\
&\ge \sum_{n=1}^{\infty} n^{-1}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_n\right) \\
&= \sum_{k=1}^{\infty} \sum_{n=2^{k-1}}^{2^k-1} n^{-1}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_n\right) \\
&\ge \frac{1}{2} \sum_{k=1}^{\infty} \mathbb{P}\left(\max_{1\le j\le 2^{k-1}}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon b_{2^k}\right). \tag{2.19}
\end{aligned}
\]

By applying the Borel–Cantelli lemma, (2.19) implies
\[
\lim_{k\to\infty} \frac{\max_{1\le j\le 2^{k-1}}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right|}{b_{2^k}} = 0 \quad \text{a.s.} \tag{2.20}
\]
It is clear that b_{2n}/b_n ≍ 1. Therefore, we infer from (2.20) and the identical distribution assumption that
\[
\lim_{n\to\infty}\left(\frac{\sum_{i=1}^{n} X_i}{b_n} - n^{1-\alpha}\tilde{L}^{-1}(n^{\alpha})\,\mathbb{E}X\right) = \lim_{n\to\infty} \frac{\sum_{i=1}^{n}(X_i - \mathbb{E}X_i)}{b_n} = 0 \quad \text{a.s.} \tag{2.21}
\]
Similarly, we obtain from (2.16) that
\[
\lim_{n\to\infty}\left(\frac{\sum_{i=1}^{n} X_i}{b_n} - n^{1-\alpha}\tilde{L}^{-1}(n^{\alpha})\,c\right) = 0 \quad \text{a.s.} \tag{2.22}
\]
Combining (2.21) and (2.22) yields
\[
\lim_{n\to\infty} n^{1-\alpha}\tilde{L}^{-1}(n^{\alpha})(\mathbb{E}X - c) = 0 \quad \text{a.s.} \tag{2.23}
\]
If α < 1, then n^{1−α} L̃^{−1}(n^α) → ∞. If α = 1, then by (2.3), we have n^{1−α} L̃^{−1}(n^α) = L̃^{−1}(n) ∼ L(nL̃(n)) ≥ 1. Therefore, we conclude from (2.23) that c = EX. □

3. Proof of Theorem 1.1. In this section, we present a result on the stochastic domination condition via the theory of regularly varying functions, and use it to prove Theorem 1.1. We need the following simple lemma; see Rosalsky and Thành [21] for a proof.

Lemma 3.1. Let g : [0, ∞) → [0, ∞) be a measurable function with g(0) = 0 which is bounded on [0, A] and differentiable on [A, ∞) for some A ≥ 0. If ξ is a nonnegative random variable, then
\[
\mathbb{E}(g(\xi)) = \mathbb{E}\left(g(\xi)\,\mathbf{1}(\xi \le A)\right) + g(A)\,\mathbb{P}(\xi > A) + \int_{A}^{\infty} g'(x)\, \mathbb{P}(\xi > x)\, dx. \tag{3.1}
\]
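For orientation (a standard special case, not stated in [21]): taking A = 0 and g(x) = x^p in (3.1), the first two terms vanish because g(0) = 0, and one recovers the familiar tail formula for moments of a nonnegative random variable ξ:

```latex
\mathbb{E}(\xi^{p}) = p\int_{0}^{\infty} x^{p-1}\,\mathbb{P}(\xi > x)\,dx .
```

It is this mechanism, with g replaced by the function h in the proof of Theorem 3.2 below, that converts tail bounds into moment bounds.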
The following result generalizes and unifies Theorem 2.5 (ii) and (iii) of Rosalsky and Thành [21]. The proof is similar to that of Theorem 2.6 in [25].

Theorem 3.2. Let {Xi, i ∈ I} be a family of random variables and let L(·) be a slowly varying function. If
\[
\sup_{i\in I} \mathbb{E}\left(|X_i|^p L(|X_i|) \log(|X_i|) \log^2(\log|X_i|)\right) < \infty \quad \text{for some } p > 0, \tag{3.2}
\]

then there exists a nonnegative random variable X with distribution function F(x) = 1 − sup_{i∈I} P(|Xi| > x), x ∈ R, such that {Xi, i ∈ I} is stochastically dominated by X and
\[
\mathbb{E}\left(X^p L(X)\right) < \infty. \tag{3.3}
\]

Proof. By (3.2) and Theorem 2.5 (i) of Rosalsky and Thành [21], we get that {Xi, i ∈ I} is stochastically dominated by a nonnegative random variable X with distribution function
\[
F(x) = 1 - \sup_{i\in I} \mathbb{P}(|X_i| > x), \quad x \in \mathbb{R}.
\]

Let
\[
g(x) = x^p L(x) \log(x) \log^2(\log x), \quad h(x) = x^p L(x), \quad x \ge 0.
\]
By the first half of (2.4), there exists B large enough such that g(·) and h(·) are strictly increasing on [B, ∞), and
\[
\frac{xL'(x)}{L(x)} \le \frac{p}{2}, \quad x > B.
\]
Therefore,
\[
h'(x) = p x^{p-1} L(x) + x^p L'(x) = x^{p-1} L(x)\left(p + \frac{xL'(x)}{L(x)}\right) \le \frac{3p\, x^{p-1} L(x)}{2}, \quad x > B. \tag{3.4}
\]
By Lemma 3.1, (3.2), and (3.4), there exists a constant C1 such that
\[
\begin{aligned}
\mathbb{E}(h(X)) &= \mathbb{E}\left(h(X)\,\mathbf{1}(X \le B)\right) + h(B)\,\mathbb{P}(X > B) + \int_{B}^{\infty} h'(x)\,\mathbb{P}(X > x)\,dx \\
&\le C_1 + \frac{3p}{2}\int_{B}^{\infty} x^{p-1} L(x)\,\mathbb{P}(X > x)\,dx \\
&= C_1 + \frac{3p}{2}\int_{B}^{\infty} x^{p-1} L(x) \sup_{i\in I}\mathbb{P}(|X_i| > x)\,dx \\
&\le C_1 + \frac{3p}{2}\int_{B}^{\infty} x^{-1}\log^{-1}(x)\log^{-2}(\log x)\sup_{i\in I}\mathbb{E}\left(g(|X_i|)\right)dx \\
&= C_1 + \frac{3p}{2}\sup_{i\in I}\mathbb{E}\left(g(|X_i|)\right)\int_{B}^{\infty} x^{-1}\log^{-1}(x)\log^{-2}(\log x)\,dx \\
&< \infty.
\end{aligned}
\]
The theorem is proved. □
Remark 4. The contribution of the slowly varying function L(x) in Theorem 3.2 is that it allows us to unify Theorem 2.5 (ii) and (iii) of Rosalsky and Thành [21]. Letting L(x) = log^{−1}(x) log^{−2}(log x), x ≥ 0, we see by Theorem 3.2 that the condition
\[
\sup_{i\in I} \mathbb{E}|X_i|^p < \infty \quad \text{for some } p > 0
\]
implies that the family {Xi, i ∈ I} is stochastically dominated by a nonnegative random variable X satisfying
\[
\mathbb{E}\left(X^p \log^{-1}(X)\log^{-2}(\log X)\right) < \infty.
\]
This slightly improves Theorem 2.5 (ii) in Rosalsky and Thành [21]. Similarly, by letting L(x) ≡ 1, we obtain an improvement of Theorem 2.5 (iii) in Rosalsky and Thành [21].
We now recall a two-sided inequality stated in [4] to derive the necessary conditions for the validity of the weak law of large numbers under appropriate dependence restrictions. In the following lemma, we apply Theorem 2.3 in [4] to the random variables X'_n = Xn − cn, n ≥ 1.

Lemma 3.3. Let {Xn, n ≥ 1} be a sequence of random variables fulfilling (1.2) and let {cn, n ≥ 1} be a sequence of real numbers. For t > 0 and n ≥ 1, put
\[
I_n(t) = \mathbb{P}\left(\max_{1\le i\le n} |X_i - c_i| > t\right) \quad \text{and} \quad J_n(t) = \sum_{i=1}^{n} \mathbb{P}(|X_i - c_i| > t).
\]
Then
\[
\frac{1}{2}\cdot\frac{J_n(t)}{2C + J_n(t)} \le I_n(t) \le J_n(t), \quad n \ge 1,
\]
where C is given by (1.2). In particular, if I_n(u_n) = o(1) for some positive sequence {un, n ≥ 1}, then I_n(u_n) ≍ J_n(u_n).
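For independent random variables, (1.2) holds with C = 1 and both sides of Lemma 3.3 are available in closed form, so the bound can be checked directly. The sketch below is an illustration only; the i.i.d. indicator setting and the grid of parameter values are our choices. We take c_i = 0 and P(|X_i| > t) = q, so that I_n(t) = 1 − (1 − q)^n and J_n(t) = nq:

```python
# Closed-form check of the two-sided bound in Lemma 3.3 (illustration only):
# for independent X_i with c_i = 0 and P(|X_i| > t) = q, Condition (1.2) holds
# with C = 1, I_n(t) = 1 - (1 - q)^n, and J_n(t) = n*q. The lemma asserts
#   (1/2) * J_n / (2C + J_n)  <=  I_n  <=  J_n.
C = 1.0
checked = 0
for n in [2, 5, 50, 500]:
    for q in [1e-4, 1e-2, 0.1, 0.5, 0.9, 1.0]:
        I = 1.0 - (1.0 - q) ** n  # P(max_i |X_i| > t)
        J = n * q                 # sum of individual tail probabilities
        assert 0.5 * J / (2.0 * C + J) <= I <= J
        checked += 1
```

The upper bound is just the union bound; the lower bound follows in this setting from (1 − q)^n ≤ 1/(1 + nq).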

Proof of Theorem 1.1. By applying Theorem 3.2, we have from (1.3) that the sequence {Xn, n ≥ 1} is stochastically dominated by a nonnegative random variable X with
\[
\mathbb{E}\left(|X|^p L^p(|X|)\right) < \infty.
\]
Applying Theorem 2.1, we immediately obtain (1.4).
We now turn to the proof of the second part of the theorem. Assume that (1.5) is met. Let b_n = n^α L̃(n^α), n ≥ 1, and let ε > 0 be arbitrary. For n ≥ 1 and t > 0, let S_0 = 0, S_n = Σ_{i=1}^{n}(Xi − ci), and let I_n(t) and J_n(t) be as in Lemma 3.3. From the relation |Xn − cn| ≤ |Sn| + |Sn−1|, n ≥ 1, we infer that
\[
I_n(\varepsilon b_n) \le \mathbb{P}\left(\max_{1\le k\le n} |S_k| > \frac{\varepsilon b_n}{2}\right).
\]
Joining this with (1.5), we obtain
\[
\sum_{n\ge 1} n^{\alpha p-2}\, I_n(\varepsilon b_n) < \infty. \tag{3.5}
\]
Besides, since the sequence {bn, n ≥ 1} is increasing, we obtain from (3.5) that
\[
n^{\alpha p-1}\, I_n(\varepsilon b_{2n}) \le \sum_{k=n+1}^{2n} k^{\alpha p-2}\, I_k(\varepsilon b_k) = o(1),
\]
which, in view of the range of α and Lemma 2.1(ii) of [6], leads to I_n(εb_n) = o(1). By invoking Lemma 3.3, we conclude that I_n(εb_n) ≍ J_n(εb_n), which together with (3.5) implies
\[
\sum_{n\ge 1} n^{\alpha p-2}\, J_n(\varepsilon b_n) < \infty.
\]
This establishes (1.6) and completes the proof of the theorem. □

The following example illustrates the sharpness of Theorem 1.1 (and Corollary 1.2). It shows that in Theorem 1.1, (1.4) may fail if (1.3) is weakened to
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p L^p(|X_n|) \log(|X_n|) \log(\log|X_n|)\right) < \infty. \tag{3.6}
\]
It therefore also shows that the main result of Sung [23, Theorem 3.1] may fail if the underlying random variables are not identically distributed.
Example 3.1. Let 1 ≤ p < 2 and let L(·) be a positive slowly varying function such that g(x) = x^p L^p(x) is strictly increasing on [A, ∞) for some A > 0. Let B = ⌊A + g(A)⌋ + 1, let h(x) be the inverse function of g(x), x ≥ B, and let {Xn, n ≥ B} be a sequence of independent random variables such that for all n ≥ B,
\[
\mathbb{P}(X_n = 0) = 1 - \frac{1}{n\log(n)\log(\log n)}, \quad \mathbb{P}\left(X_n = \pm h(n)\right) = \frac{1}{2n\log(n)\log(\log n)}.
\]
By (2.3), we can choose (uniquely up to asymptotic equivalence)
\[
\tilde{L}(x) = \frac{h(x^p)}{x}, \quad x \ge B.
\]
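With this choice the normalizing sequence of Theorem 1.1 collapses to the atoms of X_n (a computation the example uses implicitly, spelled out here):

```latex
n^{1/p}\,\tilde{L}\big(n^{1/p}\big) = n^{1/p}\cdot\frac{h\big((n^{1/p})^{p}\big)}{n^{1/p}} = h(n),
```

so the event |X_n| > n^{1/p} L̃(n^{1/p})/2 appearing below is exactly the event |X_n| > h(n)/2, i.e. X_n ≠ 0, which has probability 1/(n log(n) log(log n)).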
Since L̃(·) is a slowly varying function,
\[
\log\big(\tilde{L}(n^{1/p})\big) = o(\log n),
\]
and so
\[
\log h(n) = \log\left(n^{1/p}\tilde{L}(n^{1/p})\right) = \frac{1}{p}\log n + o(\log n).
\]
It thus follows that
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p L^p(|X_n|)\log(|X_n|)\log^2(\log|X_n|)\right)
= \sup_{n\ge 1} \mathbb{E}\left(g(|X_n|)\log(|X_n|)\log^2(\log|X_n|)\right)
= \sup_{n\ge 1} \frac{\log(h(n))\log^2(\log h(n))}{\log(n)\log(\log n)} = \infty,
\]
and
\[
\sup_{n\ge 1} \mathbb{E}\left(|X_n|^p L^p(|X_n|)\log(|X_n|)\log(\log|X_n|)\right)
= \sup_{n\ge 1} \mathbb{E}\left(g(|X_n|)\log(|X_n|)\log(\log|X_n|)\right)
= \sup_{n\ge 1} \frac{\log(h(n))\log(\log h(n))}{\log(n)\log(\log n)} < \infty.
\]
Therefore (1.3) fails but (3.6) holds.


Now, if (1.4) holds, then by letting α = 1/p, we have
\[
\lim_{n\to\infty} \frac{\sum_{i=B}^{n} X_i}{n^{1/p}\tilde{L}(n^{1/p})} = 0 \quad \text{a.s.} \tag{3.7}
\]
It follows from (3.7) that
\[
\lim_{n\to\infty} \frac{X_n}{n^{1/p}\tilde{L}(n^{1/p})} = 0 \quad \text{a.s.} \tag{3.8}
\]

Since the sequence {Xn, n ≥ 1} is comprised of independent random variables, the Borel–Cantelli lemma and (3.8) ensure that
\[
\sum_{n=B}^{\infty} \mathbb{P}\left(|X_n| > n^{1/p}\tilde{L}(n^{1/p})/2\right) < \infty. \tag{3.9}
\]
However, we have
\[
\sum_{n=B}^{\infty} \mathbb{P}\left(|X_n| > n^{1/p}\tilde{L}(n^{1/p})/2\right) = \sum_{n=B}^{\infty} \mathbb{P}\left(|X_n| > h(n)/2\right) = \sum_{n=B}^{\infty} \frac{1}{n\log(n)\log(\log n)} = \infty,
\]
contradicting (3.9). Therefore, (1.4) must fail.


Now, if we choose L(x) ≡ 1, then all assumptions of Theorem 3.1 of Sung [23] are met except for the identical distribution hypothesis. It follows from the above argument that (3.7) also fails (with L̃(x) ≡ 1). Therefore, this shows that Theorem 3.1 of Sung [23] may fail if the underlying random variables are not identically distributed.

4. Corollaries and remarks. In this section, we apply Theorems 1.1, 2.1, and 2.2 to sequences of (i) m-pairwise negatively dependent random variables and (ii) m-extended negatively dependent random variables. The results obtained in this section are new even when L(x) ≡ 1. We also present some remarks comparing our results with existing ones.

4.1. m-pairwise negatively dependent random variables. The Baum–Katz theorem and the Marcinkiewicz–Zygmund SLLN for sequences of m-pairwise negatively dependent random variables were studied by Wu and Rosalsky [27]. Let m ≥ 1 be a fixed integer. A sequence of random variables {Xn, n ≥ 1} is said to be m-pairwise negatively dependent if for all positive integers j and k with |j − k| ≥ m, Xj and Xk are negatively dependent, i.e.,
\[
\mathbb{P}(X_j \le x,\, X_k \le y) \le \mathbb{P}(X_j \le x)\,\mathbb{P}(X_k \le y) \quad \text{for all } x, y \in \mathbb{R}.
\]
When m = 1, this reduces to the usual concept of pairwise negative dependence. It is well known that if {Xn, n ≥ 1} is a sequence of m-pairwise negatively dependent random variables and {fn, n ≥ 1} is a sequence of nondecreasing functions, then {fn(Xn), n ≥ 1} is also a sequence of m-pairwise negatively dependent random variables.
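A minimal concrete instance (ours, not from [27]): X ~ Bernoulli(1/2) and Y = 1 − X are negatively dependent, and since the pair takes only two values, the defining inequality can be verified exactly by enumeration:

```python
# Exact check (illustration only) that X ~ Bernoulli(1/2) and Y = 1 - X are
# negatively dependent: P(X <= x, Y <= y) <= P(X <= x) * P(Y <= y) for all x, y.
outcomes = [(0, 1), (1, 0)]  # the two equally likely values of (X, Y)

def prob(event):
    """Probability of an event under the uniform law on `outcomes`."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# A grid straddling the support {0, 1} covers every distinct case of (x, y).
grid = [-0.5, 0.0, 0.5, 1.0, 1.5]
negatively_dependent = all(
    prob(lambda o: o[0] <= x and o[1] <= y)
    <= prob(lambda o: o[0] <= x) * prob(lambda o: o[1] <= y)
    for x in grid
    for y in grid
)
```

Composing X and Y with nondecreasing functions preserves these inequalities, which is the closure property quoted above.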
The following corollary is the first result in the literature on complete convergence for sequences of m-pairwise negatively dependent random variables under the optimal condition, even when m = 1 and L(x) ≡ 1.

Corollary 4.1. Let 1 ≤ p < 2, α ≥ 1/p, let {Xn, n ≥ 1} be a sequence of m-pairwise negatively dependent random variables, and let L(·) be as in Theorem 1.1.
(i) If (1.3) holds, then we obtain (1.4).
(ii) If {Xn, n ≥ 1} is stochastically dominated by a random variable X satisfying (2.5), then we obtain (2.6). Conversely, if α ≤ 1 and the random variables X, X1, X2, . . . are identically distributed, then (2.6) implies (2.5).

Proof. From Lemma 2.1 in Wu and Rosalsky [27], it is easy to see that m-pairwise negatively dependent random variables satisfy Condition (1.2). Therefore, Part (i) follows from Theorem 1.1, and Part (ii) follows from Theorems 2.1 and 2.2. □

Remark 5. (i) Consider the special case where α = 1/p, 1 < p < 2, and L(x) ≡ 1 in Corollary 4.1 (ii). Under the condition E(|X|^p) < ∞, we obtain
\[
\sum_{n=1}^{\infty} n^{-1}\, \mathbb{P}\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(X_i - \mathbb{E}X_i)\right| > \varepsilon n^{1/p}\right) < \infty \quad \text{for all } \varepsilon > 0, \tag{4.1}
\]
and therefore
\[
\lim_{n\to\infty} \frac{\sum_{i=1}^{n}(X_i - \mathbb{E}X_i)}{n^{1/p}} = 0 \quad \text{a.s.} \tag{4.2}
\]
(ii) For 1 < p < 2, Sung [23] considered the pairwise independent case and obtained (4.2) under the slightly stronger condition that
\[
\mathbb{E}\left(|X|^p (\log\log|X|)^{2(p-1)}\right) < \infty. \tag{4.3}
\]
Furthermore, one cannot obtain the rate of convergence (4.1) by using the method of Sung [23]. In Chen et al. [7, Theorem 3.6], the authors proved (4.1) assuming E(|X|^p log^r |X|) < ∞ for some r > p. They raised an open question as to whether or not (4.1) holds under (4.3) (see [7, Remark 3.1]). For the case where the random variables are m-pairwise negatively dependent, Wu and Rosalsky [27] obtained (4.1) and (4.2) under the condition E(|X|^p log^r |X|) < ∞ for some r > 1 + p. Wu and Rosalsky [27] then raised an open question as to whether or not (4.2) holds under Condition (4.3). For p = 1 and m-pairwise negatively dependent random variables, Wu and Rosalsky [27, Remark 3.6] stated another open question as to whether or not (4.1) (with p = 1) holds under the condition E|X| < ∞. Therefore, a very special case of Corollary 4.1 gives affirmative answers to the open questions raised by Chen et al. [7] and Wu and Rosalsky [27].

4.2. Extended negatively dependent random variables. The Kolmogorov SLLN for extended negatively dependent random variables was first studied by Chen et al. [8]. A collection of random variables {X1, . . . , Xn} is said to be extended negatively dependent if there exists M > 0 such that for all x1, . . . , xn ∈ R,
\[
\mathbb{P}(X_1 \le x_1, \ldots, X_n \le x_n) \le M\, \mathbb{P}(X_1 \le x_1)\cdots \mathbb{P}(X_n \le x_n)
\]
and
\[
\mathbb{P}(X_1 > x_1, \ldots, X_n > x_n) \le M\, \mathbb{P}(X_1 > x_1)\cdots \mathbb{P}(X_n > x_n).
\]
A sequence of random variables {Xi, i ≥ 1} is said to be extended negatively dependent if for all n ≥ 1, the collection {Xi, 1 ≤ i ≤ n} is extended negatively dependent.
Let m be a positive integer. The notion of m-extended negative dependence was introduced by Wu and Wang [28]. A sequence {Xi, i ≥ 1} of random variables is said to be m-extended negatively dependent if for any n ≥ 2 and any i1, i2, . . . , in such that |ij − ik| ≥ m for all 1 ≤ j < k ≤ n, the random variables {Xi1, . . . , Xin} are extended negatively dependent. If {Xi, i ≥ 1} is a sequence of m-extended negatively dependent random variables and {fi, i ≥ 1} is a sequence of nondecreasing functions, then {fi(Xi), i ≥ 1} is a sequence of m-extended negatively dependent random variables. We note that neither the classical Kolmogorov maximal inequality nor the classical Rosenthal maximal inequality is available for extended negatively dependent random variables (see Wu and Wang [28], Wu et al. [29]).

Corollary 4.2. Corollary 4.1 holds if {Xn, n ≥ 1} is a sequence of m-extended negatively dependent random variables.

Proof. Lemma 3.3 of Wu and Wang [28] implies that the sequence {Xn, n ≥ 1} satisfies Condition (1.2). Corollary 4.2 then follows from Theorems 1.1, 2.1, and 2.2. □

Remark 6. Chen et al. [8] proved the Kolmogorov SLLN for sequences of extended negatively dependent and identically distributed random variables {X, Xn, n ≥ 1} under the condition E|X| < ∞. They used Etemadi's method from [11], which does not seem to work in the case of the Marcinkiewicz–Zygmund SLLN. To the best of our knowledge, Corollary 4.2 is the first result in the literature on the Baum–Katz theorem for sequences of m-extended negatively dependent random variables under the optimal moment condition, even when L(x) ≡ 1 and m = 1.

Acknowledgement. The research of Fakhreddine Boukhari is a contribution to the


Project PRFU C00L03UN130120210002, funded by the DGRSDT-MESRS-Algeria.
The research of Lê Vǎn Thành was supported by the Ministry of Education and
Training, grant no. B2022-TDV-01.

References

1. V.T.N. Anh, N.T.T. Hien, L.V. Thành, and V.T.H. Van, The Marcinkiewicz–Zygmund-type strong law of large numbers with general normalizing sequences, Journal of Theoretical Probability 34 (2021), 331–348.
2. P. Bai, P. Chen, and S.H. Sung, On complete convergence and the strong law of large numbers for pairwise independent random variables, Acta Mathematica Hungarica 142 (2014), 502–518.
3. J. Baxter, R. Jones, M. Lin, and J. Olsen, SLLN for weighted independent
identically distributed random variables, Journal of Theoretical Probability 17 (2004),
165–181.
4. I. Bernou and F. Boukhari, Limit theorems for dependent random variables with
infinite means, Statistics and Probability Letters 189 (2022), 109563.
5. N.H. Bingham, C.M. Goldie, and J.L. Teugels, Regular variation, Encyclopedia
of Mathematics and its Applications, Vol. 27, Cambridge University Press, Cam-
bridge, 1989.
6. F. Boukhari, On convergence rates in the Marcinkiewicz–Zygmund
strong law of large numbers, Results in Mathematics 76(4) (2021), 174.
https://doi.org/10.1007/s00025-021-01487-2
7. P. Chen, P. Bai, and S.H. Sung, The von Bahr–Esseen moment inequality for
pairwise independent random variables and applications, Journal of Mathematical
Analysis and Applications 419 (2014), 1290–1302.
8. Y. Chen, A. Chen, and K.W. Ng, The strong law of large numbers for extended
negatively dependent random variables, Journal of Applied Probability 47 (2010),
908–922.
9. E.B. Czerebak-Mrozowicz, O.I. Klesov, and Z. Rychlik, Marcinkiewicz-type
strong law of large numbers for pairwise independent random fields, Probability and
Mathematical Statistics 22 (2002), 127–139.
10. N.C. Dzung and L.V. Thành, On the complete convergence for sequences of
dependent random variables via stochastic domination conditions and regularly
varying functions theory, arXiv:2107.12690, Manuscript, pp. 1–18, 2021.
11. N. Etemadi, An elementary proof of the strong law of large numbers, Zeitschrift für
Wahrscheinlichkeitstheorie und Verwandte Gebiete 55 (1981), 119–122.
12. I. Fazekas and T. Tómács, Strong laws of large numbers for pairwise independent
random variables with multidimensional indices, Publicationes Mathematicae
Debrecen 53 (1998), 149–161.
13. L. de Haan and A. Ferreira, Extreme Value Theory: An Introduction, Springer,
New York, 2006.
14. O. Klesov, I. Fazekas, C. Noszály, and T. Tómács, Strong laws of large numbers
for sequences and fields, Theory of Stochastic Processes 5 (1999), 91–104.
15. M.H. Ko, Complete convergence for weighted sums of ρ∗ -mixing random fields, Rocky
Mountain Journal of Mathematics 44 (2014), 1595–1605.
16. A. Kuczmaszewska and Z. Lagodowski, Convergence rates in the SLLN for some
classes of dependent random fields, Journal of Mathematical Analysis and Applications 380 (2011), 571–584.
17. A. Martikainen, On the strong law of large numbers for sums of pairwise
independent random variables, Statistics and Probability Letters 25 (1995), 21–26.
18. Y. Miao, J. Mu, and J. Xu, An analogue for Marcinkiewicz–Zygmund strong law
of negatively associated random variables, Revista de la Real Academia de Ciencias
Exactas, Fı́sicas y Naturales. Serie A. Matemáticas 111 (2017), 697–705.
19. H. Naderi, P. Matula, M. Amini, and A. Bozorgnia, On stochastic dominance
and the strong law of large numbers for dependent random variables, Revista de la
Real Academia de Ciencias Exactas, Fı́sicas y Naturales. Serie A. Matemáticas 110
(2016), 771–782.
20. E. Rio, Vitesses de convergence dans la loi forte pour des suites dépendantes (Rates
of convergence in the strong law for dependent sequences), Comptes Rendus de
l’Académie des Sciences. Série I, Mathématique 320 (1995), 469–474.
21. A. Rosalsky and L.V. Thành, A note on the stochastic domination condition and
uniform integrability with applications to the strong law of large numbers, Statistics
and Probability Letters 178 (2021), 109181.
22. Y. Shen, X.J. Wang, W.Z. Yang, and S.H. Hu, Almost sure convergence theorem
and strong stability for weighted sums of NSD random variables, Acta Mathematica
Sinica. English Series 29 (2013), 743–756.
23. S.H. Sung, Marcinkiewicz–Zygmund type strong law of large numbers for pairwise
i.i.d. random variables, Journal of Theoretical Probability 27 (2014), 96–106.
24. L.V. Thành, On the Baum–Katz theorem for sequences of pairwise independent
random variables with regularly varying normalizing constants, Comptes Rendus
Mathématique. Académie des Sciences. Paris 358 (2020), 1231–1238.
25. L.V. Thành, On a new concept of stochastic domination and the laws of large
numbers, TEST 32 (2023), 74–106.
26. L.V. Thành, On weak laws of large numbers for maximal partial sums of
pairwise independent random variables, Comptes Rendus Mathématique. Académie
des Sciences. Paris 361 (2023), 577–585.
27. Y. Wu and A. Rosalsky, Strong convergence for m-pairwise negatively quadrant
dependent random variables, Glasnik Matematički, Series III 50 (2015), 245–259.
28. Y. Wu and X. Wang, Strong laws for weighted sums of m-extended negatively
dependent random variables and its applications, Journal of Mathematical Analysis and Applications 494 (2021),
124566.
29. Y. Wu, X. Wang, T.C. Hu, M. Ordóñez Cabrera, and A. Volodin, Limiting
behaviour for arrays of rowwise widely orthant dependent random variables under
conditions of R-h-integrability and its applications, Stochastics 91 (2019), 916–944.

Received 19 July, 2023 and in revised form 27 October, 2023.
