Complete convergence for the maximal partial sums without maximal inequalities
Fakhreddine Boukhari, Nguyen Chi Dzung and Lê Văn Thành
Quaestiones Mathematicae (2024), DOI: 10.2989/16073606.2024.2323150
Abstract. This work provides the necessary and sufficient conditions for complete
convergence for the maximal partial sums of dependent random variables. The results
are proved without using maximal inequalities. The main theorems can be applied
to sequences of (i) m-pairwise negatively dependent random variables and (ii)
m-extended negatively dependent random variables. While the result for case (i) unifies
and improves many existing ones, the result for case (ii) complements the main
theorem of Chen et al. [J. Appl. Probab., 2010]. Affirmative answers to open
questions raised by Chen et al. [J. Math. Anal. Appl., 2014], and Wu and Rosalsky
[Glas. Mat. Ser. III, 2015] are also given. Two examples illustrating the sharpness
of the main result are presented.
under the optimal moment condition E|X| < ∞ without using maximal inequalities. The Etemadi result was further extended to random fields by Fazekas and Tómács [12], in which the authors also obtained the rate of convergence. The problem of proving the Marcinkiewicz–Zygmund SLLN for p.i.i.d. random variables under an optimal moment condition is more challenging, and Etemadi's method in [11] does not seem to work if the normalizing constants are of the form n^{1/p} with p > 1. Let 1 < p < 2. Martikainen [17] proved that if E(|X|^p log^β |X|) < ∞ for some β > max{0, 4p − 6}, then the Marcinkiewicz–Zygmund SLLN holds, i.e.,
\[
\lim_{n\to\infty}\frac{\sum_{i=1}^{n}(X_i-EX_i)}{n^{1/p}} = 0 \ \text{ a.s.} \tag{1.1}
\]
Here and hereafter, for x ≥ 0 and β ∈ R, we denote the natural logarithm of max{x, e} by log x, and write log^β x = (log x)^β. As far as we know, Rio [20] is the first author who proved the Marcinkiewicz–Zygmund SLLN (1.1) under the optimal moment condition E|X|^p < ∞. Anh et al. [1] recently proved the Marcinkiewicz–Zygmund-type SLLN with norming constants of the form n^{1/p}L̃(n^{1/p}), n ≥ 1, where L̃(·) is the de Bruijn conjugate of a slowly varying function L(·). However, the proof in [1] is based on a maximal inequality for negatively associated random variables, which is no longer available even for pairwise independent random variables.
In this paper, we use Rio’s method [20] and the theory of regularly varying
functions to derive rates of convergence in the SLLN under optimal moment condi-
tions. Although Rio's result was extended by Thành [24], that extension only considered sums of p.i.i.d. random variables. The motivation of the present paper is that many other dependence structures, such as pairwise negative dependence and extended negative dependence, do not enjoy a Kolmogorov-type maximal inequality (see, e.g., [4, 8, 22, 29] and the references therein). In contrast to [24], we
explore the scenario where the involved family of random variables is not neces-
sarily stochastically dominated and establish a Baum–Katz-type theorem under a
uniformly bounded moment condition. We also provide a necessary condition for
the convergence of the Baum–Katz series.
The main result of this paper is the following theorem. To the best of our knowledge, Theorem 1.1 and Corollary 1.2 are new even when the underlying sequence is comprised of independent random variables.
Theorem 1.1. Let 1 ≤ p < 2, let α satisfy 1/p ≤ α ≤ 1, and let {X_n, n ≥ 1} be a sequence of random variables. Assume that there exists a universal constant C such that for all nondecreasing functions f_i, i ≥ 1,
\[
\operatorname{Var}\Bigl(\sum_{i=k+1}^{k+\ell} f_i(X_i)\Bigr) \le C \sum_{i=k+1}^{k+\ell} \operatorname{Var}\bigl(f_i(X_i)\bigr), \quad k \ge 0,\ \ell \ge 1, \tag{1.2}
\]
provided the variances exist. Let L(·) be a slowly varying function defined on [0, ∞) and let L̃(·) be the de Bruijn conjugate of L(·). When p = 1, we assume further that L(x) ≥ 1 and is increasing on [0, ∞). If
\[
\sup_{n\ge 1} E\bigl(|X_n|^p L^p(|X_n|)\log(|X_n|)\log^2(\log|X_n|)\bigr) < \infty, \tag{1.3}
\]
then
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{1.4}
\]
Conversely, if for some sequence of real numbers {c_n, n ≥ 1},
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-c_i)\Bigr| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\Bigr) < \infty \quad\text{for all } \varepsilon > 0, \tag{1.5}
\]
then
\[
\sum_{n\ge 1} n^{\alpha p-2}\sum_{i=1}^{n} P\bigl(|X_i-c_i| > n^{\alpha}\tilde{L}(n^{\alpha})\bigr) < \infty. \tag{1.6}
\]
Remark 1. (i) Many dependence structures enjoy (1.2), such as negative association, pairwise independence, pairwise negative dependence, extended negative dependence, various mixing sequences, etc. The SLLN for sequences and fields of random variables satisfying these dependence structures was studied by many authors. We refer to [2, 9, 12, 14, 15, 16, 19] and the references therein.
(ii) Theorem 1.1 can fail if Condition (1.2) is not satisfied (see Example 2.1).
(iii) Condition (1.3) is very sharp. Even when the involved random variables are independent, Example 3.1 in Section 3 shows that Theorem 1.1 may fail if (1.3) is weakened to
\[
\sup_{n\ge 1} E\bigl(|X_n|^p L^p(|X_n|)\bigr) < \infty.
\]

Applying Theorem 1.1 with L(x) ≡ 1 and α = 1/p yields the following corollary.

Corollary 1.2. Let 1 ≤ p < 2 and let {X_n, n ≥ 1} be a sequence of random variables satisfying Condition (1.2). If
\[
\sup_{n\ge 1} E\bigl(|X_n|^p \log(|X_n|)\log^2(\log|X_n|)\bigr) < \infty, \tag{1.7}
\]
then
\[
\sum_{n=1}^{\infty} n^{-1}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon n^{1/p}\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{1.8}
\]
Remark 2. (i) Since the sequence {max_{1≤j≤n} |Σ_{i=1}^{j}(X_i − EX_i)|, n ≥ 1} is nondecreasing, it follows from (1.8) that the SLLN (1.1) holds (a sketch of this standard argument is given after this remark).
(ii) For the SLLN under a uniformly bounded moment condition, Baxter et al. [3] proved (1.1) under the assumptions that the sequence {X_n, n ≥ 1} is independent and sup_{n≥1} E|X_n|^r < ∞ for some r > p. This condition is much stronger than (1.7). Baxter et al. [3] studied the SLLN for weighted sums, and their method does not give the rate of convergence as in Corollary 1.2.
(iii) For a sequence of p.i.i.d. random variables {X, X_n, n ≥ 1}, Chen et al. [7] obtained (1.8) under the condition that E(|X|^p log^r |X|) < ∞ for some 1 < p < r < 2. In Corollary 1.2, the moment Condition (1.7) is weaker than that of Chen et al. [7].
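The step from (1.8) to the SLLN (1.1) in part (i) is the usual subsequence argument; here is a minimal sketch, writing M_n = max_{1≤j≤n} |Σ_{i=1}^{j}(X_i − EX_i)|. Since M_n is nondecreasing and n^{1/p} ≤ 2^{k/p} for 2^{k−1} ≤ n < 2^k, (1.8) gives
\[
\infty > \sum_{n=1}^{\infty} n^{-1}\, P\bigl(M_n > \varepsilon n^{1/p}\bigr)
\ge \sum_{k=1}^{\infty}\ \sum_{n=2^{k-1}}^{2^k-1} n^{-1}\, P\bigl(M_{2^{k-1}} > \varepsilon 2^{k/p}\bigr)
\ge \frac{1}{2}\sum_{k=1}^{\infty} P\bigl(M_{2^{k-1}} > \varepsilon 2^{k/p}\bigr),
\]
so that, by the Borel–Cantelli lemma, M_{2^{k−1}}/2^{k/p} → 0 a.s. For 2^{k−1} ≤ n < 2^k this yields M_n/n^{1/p} ≤ 2^{2/p} M_{2^k}/2^{(k+1)/p} → 0 a.s., and (1.1) follows from |Σ_{i=1}^{n}(X_i − EX_i)| ≤ M_n.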
The rest of the paper is arranged as follows. Section 2 presents a complete convergence result for sequences of dependent random variables with regularly varying normalizing constants under a stochastic domination condition. The proof of Theorem 1.1 is given in Section 3. Finally, Section 4 contains corollaries and remarks comparing our results with the ones in the literature. As for the notation, we shall write u_n = o(v_n) (resp., u_n ≍ v_n) to indicate that u_n/v_n → 0 as n tends to infinity (resp., c_1 u_n ≤ v_n ≤ c_2 u_n for large values of n and some positive constants c_1, c_2).
for some constant C ∈ (0, ∞). However, it is shown by Rosalsky and Thành [21]
that (2.1) and (2.2) are indeed equivalent.
Let ρ ∈ R. A real-valued function R(·) is said to be regularly varying (at
infinity) with index of regular variation ρ if it is a positive and measurable function
on [A, ∞) for some A ≥ 0, and for each λ > 0,
\[
\lim_{x\to\infty}\frac{R(\lambda x)}{R(x)} = \lambda^{\rho}.
\]
A regularly varying function with the index of regular variation ρ = 0 is called
slowly varying (at infinity). If L(·) is a slowly varying function, then by Theorem
1.5.13 in Bingham et al. [5], there exists a slowly varying function L̃(·), unique up
to asymptotic equivalence, satisfying
\[
\lim_{x\to\infty} L(x)\tilde{L}\bigl(xL(x)\bigr) = 1 \quad\text{and}\quad \lim_{x\to\infty} \tilde{L}(x)L\bigl(x\tilde{L}(x)\bigr) = 1. \tag{2.3}
\]
The function L̃ is called the de Bruijn conjugate of L, and (L, L̃) is called a (slowly varying) conjugate pair (see, e.g., p. 29 in Bingham et al. [5]). If L(x) = log^γ x or L(x) = log^γ(log x) for some γ ∈ R, then L̃(x) = 1/L(x). In particular, if L(x) ≡ 1, then L̃(x) ≡ 1.
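For instance, for the pair L(x) = log^γ x, L̃(x) = log^{−γ} x, the first identity in (2.3) can be checked directly:
\[
L(x)\tilde{L}\bigl(xL(x)\bigr) = \log^{\gamma} x\cdot\log^{-\gamma}\bigl(x\log^{\gamma} x\bigr)
= \Bigl(\frac{\log x}{\log x + \gamma\log\log x}\Bigr)^{\gamma} \longrightarrow 1 \quad (x\to\infty),
\]
since log(x log^γ x) = log x + γ log log x ∼ log x.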
Here and thereafter, for a slowly varying function L(·), we denote the de Bruijn conjugate of L(·) by L̃(·). Throughout, we will assume, without loss of generality, that L(x) and L̃(x) are both continuous on [0, ∞), differentiable on [A, ∞) for some A ≥ 0, and that x^γ L(x) and x^γ L̃(x) are both strictly increasing on [0, ∞) for all γ > 0 (see Thành [26, p. 578]). We also assume that (see Lemma 2.2 in Anh et al. [1])
\[
\lim_{x\to\infty}\frac{xL'(x)}{L(x)} = 0 \quad\text{and}\quad \lim_{x\to\infty}\frac{x\tilde{L}'(x)}{\tilde{L}(x)} = 0. \tag{2.4}
\]
The following theorem establishes complete convergence for the maximal partial sums of dependent random variables without using Kolmogorov-type maximal inequalities. For the special case where L(x) = log^α x with α ≥ 0, Miao et al. [18] proved Theorem 2.1 for sequences of negatively associated random variables, which do enjoy the Kolmogorov maximal inequality. The main contribution of our result is that it can be applied to dependence structures for which Kolmogorov-type maximal inequalities may not hold.
Theorem 2.1. Let 1 ≤ p < 2 and let {X_n, n ≥ 1} be a sequence of random variables satisfying Condition (1.2). Let L(·) be as in Theorem 1.1. If {X_n, n ≥ 1} is stochastically dominated by a random variable X, and
\[
E\bigl(|X|^p L^p(|X|)\bigr) < \infty, \tag{2.5}
\]
then
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{2.6}
\]
We only sketch the proof of Theorem 2.1 and refer the reader to the proof of Theorem 1 in Thành [24] for details. The main difference here is that we have to consider nonnegative random variables so that, after applying certain truncation techniques (see (2.8) and (2.9) below), the new random variables still satisfy Condition (1.2).
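The point is that (1.2) is stated for nondecreasing transformations; for instance, for a truncation level b > 0 the map f_b(x) = min{x, b} is nondecreasing, so (1.2) applies verbatim to the truncated variables:
\[
\operatorname{Var}\Bigl(\sum_{i=k+1}^{k+\ell}\min\{X_i,b\}\Bigr) \le C\sum_{i=k+1}^{k+\ell}\operatorname{Var}\bigl(\min\{X_i,b\}\bigr), \quad k\ge 0,\ \ell\ge 1.
\]
Here f_b is only an illustrative choice; the truncations actually used are those in (2.8) and (2.9).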
Sketch proof of Theorem 2.1. Since {X_n^+, n ≥ 1} and {X_n^−, n ≥ 1} satisfy the assumptions of the theorem and X_n = X_n^+ − X_n^−, n ≥ 1, without loss of generality we can assume that X_n ≥ 0 for all n ≥ 1. For n ≥ 1, set
\[
b_n = n^{\alpha}\tilde{L}(n^{\alpha}). \tag{2.7}
\]
Since xL̃(x) is strictly increasing on [0, ∞), {b_n, n ≥ 1} is a strictly increasing sequence. It is easy to see that (2.6) is equivalent to
\[
\sum_{n=1}^{\infty} 2^{n(\alpha p-1)}\, P\Bigl(\max_{1\le j<2^n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_{2^n}\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{2.10}
\]
Using (2.11) and proceeding in a similar manner as in Thành [24, Equation (23)], the proof of (2.10) will be completed if we can show that
\[
\sum_{n=1}^{\infty} 2^{n(\alpha p-1)}\, P\Bigl(\max_{1\le j<2^n}\Bigl|\sum_{i=1}^{j}\bigl(X_{i,2^n}-EX_{i,2^n}\bigr)\Bigr| \ge \varepsilon b_{2^{n-1}}\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{2.12}
\]
For m ≥ 0, set S_{0,m} = 0 and
\[
S_{j,m} = \sum_{i=1}^{j}\bigl(X_{i,2^m} - EX_{i,2^m}\bigr), \quad j \ge 1.
\]
For 1 ≤ j < 2^n and 0 ≤ m ≤ n, let k_{j,m} = ⌊j/2^m⌋ be the greatest integer less than or equal to j/2^m, and let j_m = k_{j,m}2^m. Then (see Thành [24, Equation (28)])
\[
\max_{1\le j<2^n}|S_{j,n}| \le \sum_{m=1}^{n}\max_{0\le k<2^{n-m}}\Bigl|\sum_{i=k2^m+1}^{k2^m+2^{m-1}}\bigl(X_{i,2^{m-1}}-EX_{i,2^{m-1}}\bigr)\Bigr|
+ \sum_{m=1}^{n}\max_{0\le k<2^{n-m}}\Bigl|\sum_{i=k2^m+1}^{(k+1)2^m} Y_{i,m}\Bigr|
+ \sum_{m=1}^{n} 2^{m+1}\, E\bigl(|X|\mathbf{1}(|X|>b_{2^{m-1}})\bigr). \tag{2.13}
\]
By Condition (1.2), we have
\[
E\Bigl(\sum_{i=k+1}^{k+\ell}\bigl(X_{i,2^{m-1}}-EX_{i,2^{m-1}}\bigr)\Bigr)^2 \le C\sum_{i=k+1}^{k+\ell} EX_{i,2^{m-1}}^2, \quad k\ge 0,\ \ell\ge 1, \tag{2.14}
\]
and
\[
E\Bigl(\sum_{i=k+1}^{k+\ell} Y_{i,m}\Bigr)^2 \le C\sum_{i=k+1}^{k+\ell} EY_{i,m}^2, \quad k\ge 0,\ \ell\ge 1. \tag{2.15}
\]
Remark 3. When 0 < p < 1, we can show that Theorems 1.1 and 2.1 hold
irrespective of the dependence structure of the underlying sequence of random
variables (see Theorems 3.1 and 3.2 of Boukhari [6]). However, for the case 1 ≤
p < 2, the following simple example shows that these theorems can fail if the
involved random variables do not satisfy (1.2).
Example 2.1. Let 1 ≤ p < 2, α = 1/p, L(x) ≡ 1, and let X be a random variable with P(X = 1) = P(X = −1) = 1/2. Set X_n = X for all n ≥ 1. Then EX_n = 0 and (1.3) holds trivially since the X_n are bounded, but Var(Σ_{i=1}^{n} X_i) = n² while Σ_{i=1}^{n} Var(X_i) = n, so Condition (1.2) fails. Since α ≤ 1, for all n ≥ 1,
\[
|X_1 + \cdots + X_n| = n|X| = n \ge n^{\alpha},
\]
and therefore for all 0 < ε ≤ 1,
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon n^{\alpha}\tilde{L}(n^{\alpha})\Bigr)
= \sum_{n=1}^{\infty} n^{-1}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}X_i\Bigr| > \varepsilon n^{\alpha}\Bigr)
\ge \sum_{n=1}^{\infty} n^{-1} = \infty.
\]
The next theorem shows that the moment condition in (2.5) in Theorem 2.1 is
optimal.
Since L(·) is slowly varying and c is a constant, (2.17) implies E(|X|^p L^p(|X|)) < ∞. Applying Theorem 2.1, we obtain
\[
\sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_n\Bigr) < \infty \quad\text{for all } \varepsilon > 0. \tag{2.18}
\]
Let ε > 0 be arbitrary. Since αp ≥ 1 and 0 < b_n ↑, we have from (2.18) that
\[
\begin{aligned}
\infty &> \sum_{n=1}^{\infty} n^{\alpha p-2}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_n\Bigr)\\
&\ge \sum_{n=1}^{\infty} n^{-1}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_n\Bigr)\\
&= \sum_{k=1}^{\infty}\,\sum_{n=2^{k-1}}^{2^k-1} n^{-1}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_n\Bigr)\\
&\ge \frac{1}{2}\sum_{k=1}^{\infty} P\Bigl(\max_{1\le j\le 2^{k-1}}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_{2^k}\Bigr).
\end{aligned} \tag{2.19}
\]
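The last inequality in (2.19) uses the monotonicity of b_n and of the maxima: for 2^{k−1} ≤ n < 2^k,
\[
\sum_{n=2^{k-1}}^{2^k-1} n^{-1} \ge 2^{k-1}\cdot 2^{-k} = \frac{1}{2}
\quad\text{and}\quad
P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_n\Bigr)
\ge P\Bigl(\max_{1\le j\le 2^{k-1}}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon b_{2^k}\Bigr).
\]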
If α < 1, then n^{1−α}L̃^{−1}(n^α) → ∞. If α = 1, then by the second limit in (2.3), we have n^{1−α}L̃^{−1}(n^α) = L̃^{−1}(n) ∼ L(nL̃(n)) ≥ 1. Therefore, we conclude from (2.23) that c = EX. □
The following result generalizes and unifies Theorem 2.5 (ii) and (iii) of Rosalsky and Thành [21]. The proof is similar to that of Theorem 2.6 in [25].

Theorem 3.2. Let {X_i, i ∈ I} be a family of random variables and let L(·) be a slowly varying function. If
\[
\sup_{i\in I} E\bigl(|X_i|^p L(|X_i|)\log(|X_i|)\log^2(\log|X_i|)\bigr) < \infty \quad\text{for some } p > 0, \tag{3.2}
\]
then {X_i, i ∈ I} is stochastically dominated by a nonnegative random variable X with E(|X|^p L(|X|)) < ∞.
Proof. Let
\[
g(x) = x^p L(x)\log(x)\log^2(\log x), \qquad h(x) = x^p L(x), \qquad x \ge 0.
\]
Applying the first half of (2.4), there exists B large enough such that g(·) and h(·) are strictly increasing on [B, ∞), and
\[
\frac{xL'(x)}{L(x)} \le \frac{p}{2}, \quad x > B.
\]
Therefore,
\[
h'(x) = px^{p-1}L(x) + x^p L'(x) = x^{p-1}L(x)\Bigl(p + \frac{xL'(x)}{L(x)}\Bigr) \le \frac{3p}{2}\,x^{p-1}L(x), \quad x > B. \tag{3.4}
\]
By Lemma 3.1, (3.2) and (3.4), there exists a constant C_1 such that
\[
\begin{aligned}
E(h(X)) &= E\bigl(h(X)\mathbf{1}(X\le B)\bigr) + h(B) + \int_B^{\infty} h'(x)\,P(X>x)\,dx\\
&\le C_1 + \frac{3p}{2}\int_B^{\infty} x^{p-1}L(x)\,P(X>x)\,dx\\
&= C_1 + \frac{3p}{2}\int_B^{\infty} x^{p-1}L(x)\,\sup_{i\in I}P(|X_i|>x)\,dx\\
&\le C_1 + \frac{3p}{2}\int_B^{\infty} x^{-1}\log^{-1}(x)\log^{-2}(\log x)\,\sup_{i\in I}E\bigl(g(|X_i|)\bigr)\,dx\\
&= C_1 + \frac{3p}{2}\,\sup_{i\in I}E\bigl(g(|X_i|)\bigr)\int_B^{\infty} x^{-1}\log^{-1}(x)\log^{-2}(\log x)\,dx\\
&< \infty.
\end{aligned}
\]
The theorem is proved. □
Remark 4. The contribution of the slowly varying function L(x) in Theorem 3.2 helps us to unify Theorem 2.5 (ii) and (iii) of Rosalsky and Thành [21]. Letting L(x) = log^{−1}(x) log^{−2}(log x), x ≥ 0, then by Theorem 3.2, the condition
\[
\sup_{i\in I} E|X_i|^p < \infty \quad\text{for some } p > 0
\]
implies that {X_i, i ∈ I} is stochastically dominated by a nonnegative random variable X with E(|X|^p log^{−1}(|X|) log^{−2}(log |X|)) < ∞. This slightly improves Theorem 2.5 (ii) in Rosalsky and Thành [21]. Similarly, by letting L(x) ≡ 1, we obtain an improvement of Theorem 2.5 (iii) in Rosalsky and Thành [21].
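Indeed, with this choice of L the logarithmic factors in (3.2) cancel:
\[
|X_i|^p L(|X_i|)\log(|X_i|)\log^2(\log|X_i|)
= |X_i|^p\cdot\frac{\log(|X_i|)\log^2(\log|X_i|)}{\log(|X_i|)\log^2(\log|X_i|)} = |X_i|^p,
\]
so (3.2) reduces exactly to sup_{i∈I} E|X_i|^p < ∞.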
We now recall a two-sided inequality stated in [4] to derive necessary conditions for the validity of the weak law of large numbers under appropriate dependence restrictions. In the following lemma, we apply Theorem 2.3 in [4] to the random variables X′_n = X_n − c_n, n ≥ 1.
Lemma 3.3. Let {X_n, n ≥ 1} be a sequence of random variables fulfilling (1.2) and let {c_n, n ≥ 1} be a sequence of real numbers. For t > 0 and n ≥ 1, put
\[
I_n(t) = P\Bigl(\max_{1\le i\le n}|X_i-c_i| > t\Bigr) \quad\text{and}\quad J_n(t) = \sum_{i=1}^{n} P\bigl(|X_i-c_i| > t\bigr).
\]
Then
\[
\frac{1}{2}\cdot\frac{J_n(t)}{2C + J_n(t)} \le I_n(t) \le J_n(t), \quad n \ge 1,
\]
where C is given by (1.2). In particular, if I_n(u_n) = o(1) for some positive sequence {u_n, n ≥ 1}, then I_n(u_n) ≍ J_n(u_n).
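The "in particular" part is immediate from the two-sided bound: if I_n(u_n) → 0, then
\[
\frac{J_n(u_n)}{2C + J_n(u_n)} \le 2\, I_n(u_n) \to 0, \quad\text{hence}\quad J_n(u_n) \to 0,
\]
so that eventually 2C + J_n(u_n) ≤ 2C + 1 and
\[
\frac{J_n(u_n)}{2(2C+1)} \le I_n(u_n) \le J_n(u_n),
\]
i.e., I_n(u_n) ≍ J_n(u_n).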
Proof of Theorem 1.1. By applying Theorem 3.2 (with L replaced by L^p, which is also slowly varying), we have from (1.3) that the sequence {X_n, n ≥ 1} is stochastically dominated by a nonnegative random variable X with
\[
E\bigl(|X|^p L^p(|X|)\bigr) < \infty.
\]
Applying Theorem 2.1, we immediately obtain (1.4).
We now turn to the proof of the second part of the theorem. Assume that (1.5) is met. Let b_n = n^α L̃(n^α), n ≥ 1, and let ε > 0 be arbitrary. For n ≥ 1 and t > 0, let S_0 = 0, S_n = Σ_{i=1}^{n}(X_i − c_i), and let I_n(t) and J_n(t) be as in Lemma 3.3. From the relation |X_n − c_n| ≤ |S_n| + |S_{n−1}|, n ≥ 1, we infer that
\[
I_n(\varepsilon b_n) \le P\Bigl(\max_{1\le k\le n}|S_k| > \frac{\varepsilon b_n}{2}\Bigr).
\]
Joining this with (1.5), we obtain
\[
\sum_{n\ge 1} n^{\alpha p-2}\, I_n(\varepsilon b_n) < \infty. \tag{3.5}
\]
Besides, since the sequence {b_n, n ≥ 1} is increasing, we obtain from (3.5) that
\[
n^{\alpha p-1}\, I_n(\varepsilon b_{2n}) \le 2^{2-\alpha p}\sum_{k=n+1}^{2n} k^{\alpha p-2}\, I_k(\varepsilon b_k) = o(1),
\]
which, in view of the range of α and Lemma 2.1(ii) of [6], leads to I_n(εb_n) = o(1). By invoking Lemma 3.3, we conclude that I_n(εb_n) ≍ J_n(εb_n), implying, via (3.5),
\[
\sum_{n\ge 1} n^{\alpha p-2}\, J_n(\varepsilon b_n) < \infty.
\]
This establishes (1.6) and completes the proof of the theorem. □
The following example illustrates the sharpness of Theorem 1.1 (and Corollary 1.2). It shows that in Theorem 1.1, (1.4) may fail if (1.3) is weakened to
\[
\sup_{n\ge 1} E\bigl(|X_n|^p L^p(|X_n|)\bigr) < \infty.
\]
It therefore also shows that the main result of Sung [23, Theorem 3.1] may fail if the underlying random variables are not identically distributed.
Example 3.1. Let 1 ≤ p < 2 and let L(·) be a positive slowly varying function such that g(x) = x^p L^p(x) is strictly increasing on [A, ∞) for some A > 0. Let B = ⌊A + g(A)⌋ + 1, let h(x) be the inverse function of g(x), x ≥ B, and let {X_n, n ≥ B} be a sequence of independent random variables such that for all n ≥ B,
\[
P(X_n = 0) = 1 - \frac{1}{n\log(n)\log(\log n)}, \qquad P\bigl(X_n = \pm h(n)\bigr) = \frac{1}{2n\log(n)\log(\log n)}.
\]
By (2.3), we can choose (unique up to asymptotic equivalence)
\[
\tilde{L}(x) = \frac{h(x^p)}{x}, \quad x \ge B.
\]
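A quick check that this choice is compatible with (2.3): writing y = h(x^p), so that g(y) = y^p L^p(y) = x^p, i.e. yL(y) = x, we get
\[
\tilde{L}(x) = \frac{h(x^p)}{x} = \frac{y}{yL(y)} = \frac{1}{L(y)}, \qquad\text{hence}\qquad L(y)\,\tilde{L}\bigl(yL(y)\bigr) = \frac{L(y)}{L(y)} = 1.
\]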
Since L̃(·) is a slowly varying function,
\[
\log\bigl(\tilde{L}(n^{1/p})\bigr) = o(\log n),
\]
and so
\[
\log h(n) = \log\bigl(n^{1/p}\tilde{L}(n^{1/p})\bigr) = \frac{1}{p}\log n + o(\log n).
\]
It thus follows that
\[
\sup_{n\ge B} E\bigl(|X_n|^p L^p(|X_n|)\bigr) < \infty \qquad\text{but}\qquad
\sup_{n\ge B} E\bigl(|X_n|^p L^p(|X_n|)\log(|X_n|)\log^2(\log|X_n|)\bigr) = \infty,
\]
so that (1.3) fails while the weakened condition above holds. If (1.4) held with α = 1/p, then, as in Remark 2 (i), we would have
\[
\lim_{n\to\infty}\frac{X_n}{n^{1/p}\tilde{L}(n^{1/p})} = 0 \ \text{ a.s.} \tag{3.8}
\]
However, we have
\[
\sum_{n=B}^{\infty} P\bigl(|X_n| > n^{1/p}\tilde{L}(n^{1/p})/2\bigr)
= \sum_{n=B}^{\infty} P\bigl(|X_n| > h(n)/2\bigr)
= \sum_{n=B}^{\infty} \frac{1}{n\log(n)\log(\log n)} = \infty,
\]
and hence, since the X_n are independent, the Borel–Cantelli lemma gives P(|X_n| > n^{1/p}L̃(n^{1/p})/2 infinitely often) = 1, which contradicts (3.8). Therefore (1.4) fails.
Proof. From Lemma 2.1 in Wu and Rosalsky [27], it is easy to see that m-pairwise negatively dependent random variables satisfy Condition (1.2). Therefore, Part (i) follows from Theorem 1.1, and Part (ii) follows from Theorems 2.1 and 2.2. □
Remark 5. (i) We consider a special case where α = 1/p, 1 < p < 2, and L(x) ≡ 1 in Corollary 4.1 (ii). Under the condition E(|X|^p) < ∞, we obtain
\[
\sum_{n=1}^{\infty} n^{-1}\, P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}(X_i-EX_i)\Bigr| > \varepsilon n^{1/p}\Bigr) < \infty \quad\text{for all } \varepsilon > 0, \tag{4.1}
\]
and therefore
\[
\lim_{n\to\infty}\frac{\sum_{i=1}^{n}(X_i-EX_i)}{n^{1/p}} = 0 \ \text{ a.s.} \tag{4.2}
\]
(ii) For 1 < p < 2, Sung [23] considered the pairwise independent case and obtained (4.2) under the slightly stronger condition that
\[
E\bigl(|X|^p(\log\log|X|)^{2(p-1)}\bigr) < \infty. \tag{4.3}
\]
Furthermore, one cannot obtain the rate of convergence (4.1) by the method of Sung [23]. In Chen et al. [7, Theorem 3.6], the authors proved (4.1)
assuming E(|X|^p log^r |X|) < ∞ for some r > p. They raised an open question as to whether or not (4.1) holds under (4.3) (see [7, Remark 3.1]). For the case where the random variables are m-pairwise negatively dependent, Wu and Rosalsky [27] obtained (4.1) and (4.2) under the condition E(|X|^p log^r |X|) < ∞ for some r > 1 + p. Wu and Rosalsky [27] then raised an open question as to whether or not (4.2) holds under Condition (4.3). For p = 1 and m-pairwise negatively dependent random variables, Wu and Rosalsky [27, Remark 3.6] stated another open question as to whether or not (4.1) (with p = 1) holds under the condition E|X| < ∞. Therefore, a very special case of Corollary 4.1 gives affirmative answers to the mentioned open questions raised by Chen et al. [7] and Wu and Rosalsky [27].
A finite collection of random variables {X_1, . . . , X_n} is said to be extended negatively dependent if there exists a constant M > 0 such that for all x_1, . . . , x_n ∈ R,
\[
P(X_1 \le x_1, \ldots, X_n \le x_n) \le M\, P(X_1 \le x_1)\cdots P(X_n \le x_n)
\]
and
\[
P(X_1 > x_1, \ldots, X_n > x_n) \le M\, P(X_1 > x_1)\cdots P(X_n > x_n).
\]
A sequence of random variables {X_i, i ≥ 1} is said to be extended negatively dependent if for all n ≥ 1, the collection {X_i, 1 ≤ i ≤ n} is extended negatively dependent.
Let m be a positive integer. The notion of m-extended negative dependence was introduced in Wu and Wang [28]. A sequence {X_i, i ≥ 1} of random variables is said to be m-extended negatively dependent if for any n ≥ 2 and any i_1, i_2, . . . , i_n such that |i_j − i_k| ≥ m for all 1 ≤ j ≠ k ≤ n, the random variables X_{i_1}, . . . , X_{i_n} are extended negatively dependent. If {X_i, i ≥ 1} is a sequence of m-extended negatively dependent random variables and {f_i, i ≥ 1} is a sequence of nondecreasing functions, then {f_i(X_i), i ≥ 1} is a sequence of m-extended negatively dependent random variables. We note that neither the classical Kolmogorov maximal inequality nor the classical Rosenthal maximal inequality is available for extended negatively dependent random variables (see Wu and Wang [28], Wu et al. [29]).
Proof. Lemma 3.3 of Wu and Wang [28] implies that the sequence {X_n, n ≥ 1} satisfies Condition (1.2). Corollary 4.2 then follows from Theorems 1.1, 2.1, and 2.2. □
Remark 6. Chen et al. [8] proved the Kolmogorov SLLN for sequences of extended negatively dependent and identically distributed random variables {X, X_n, n ≥ 1} under the condition that E|X| < ∞. They used the method of Etemadi [11], which does not seem to work for the case of the Marcinkiewicz–Zygmund SLLN.
To the best of our knowledge, Corollary 4.2 is the first result in the literature on the Baum–Katz theorem for sequences of m-extended negatively dependent random variables under the optimal moment condition, even when L(x) ≡ 1 and m = 1.
References
1. V.T.N. Anh, N.T.T. Hien, L.V. Thành, and V.T.H. Van, The Marcinkiewicz–
Zygmund-type strong law of large numbers with general normalizing sequences, Jour-
nal of Theoretical Probability 34 (2021), 331–348.
2. P. Bai, P. Chen, and S.H. Sung, On complete convergence and the strong law of
large numbers for pairwise independent random variables, Acta Mathematica Hun-
garica 142 (2014), 502–518.
3. J. Baxter, R. Jones, M. Lin, and J. Olsen, SLLN for weighted independent
identically distributed random variables, Journal of Theoretical Probability 17 (2004),
165–181.
4. I. Bernou and F. Boukhari, Limit theorems for dependent random variables with
infinite means, Statistics and Probability Letters 189 (2022), 109563.
5. N.H. Bingham, C.M. Goldie, and J.L. Teugels, Regular variation, Encyclopedia
of Mathematics and its Applications, Vol. 27, Cambridge University Press, Cam-
bridge, 1989.
6. F. Boukhari, On convergence rates in the Marcinkiewicz–Zygmund
strong law of large numbers, Results in Mathematics 76(4) (2021), 174.
https://doi.org/10.1007/s00025-021-01487-2
7. P. Chen, P. Bai, and S.H. Sung, The von Bahr–Esseen moment inequality for
pairwise independent random variables and applications, Journal of Mathematical
Analysis and Applications 419 (2014), 1290–1302.
8. Y. Chen, A. Chen, and K.W. Ng, The strong law of large numbers for extended
negatively dependent random variables, Journal of Applied Probability 47 (2010),
908–922.
9. E.B. Czerebak-Mrozowicz, O.I. Klesov, and Z. Rychlik, Marcinkiewicz-type
strong law of large numbers for pairwise independent random fields, Probability and
Mathematical Statistics 22 (2002), 127–139.
10. N.C. Dzung and L.V. Thành, On the complete convergence for sequences of dependent random variables via stochastic domination conditions and regularly varying functions theory, arXiv:2107.12690, 2021.
11. N. Etemadi, An elementary proof of the strong law of large numbers, Zeitschrift für
Wahrscheinlichkeitstheorie und Verwandte Gebiete 55 (1981), 119–122.
12. I. Fazekas and T. Tómács, Strong laws of large numbers for pairwise independent
random variables with multidimensional indices, Publicationes Mathematicae Debre-
cen 53 (1998), 149–161.