
A RÉNYI ENTROPY INTERPRETATION OF ANTI-CONCENTRATION AND NONCENTRAL SECTIONS OF CONVEX BODIES

JAMES MELBOURNE, TOMASZ TKOCZ, AND KATARZYNA WYCZESANY

Abstract. We extend Bobkov and Chistyakov's (2015) upper bounds on concentration functions of sums of independent random variables to a multivariate entropic setting. The approach is based on pointwise estimates on densities of sums of independent random vectors uniform on centred Euclidean balls. In this vein, we also obtain sharp bounds on volumes of noncentral sections of isotropic convex bodies.

2020 Mathematics Subject Classification. Primary 60E05, 60E15; Secondary 52A40.

Key words. Concentration function, Sums of independent random variables, Rényi entropy, Anti-concentration, Sections of convex bodies, Pointwise lower bounds on convolutions.

Date: June 16, 2024. TT's research supported in part by NSF grant DMS-2246484.

1. Introduction

Anti-concentration is the phenomenon that a random variable has a "small" probability of falling within any given range; in other words, it quantifies the scatter of the values of the random variable. In particular, one is interested in the rate of increase of anti-concentration of sums of independent random variables, which we further address in this note in an entropic multivariate setting.

More precisely, for a random vector X taking values in R^d, we define its concentration function Q_X : [0, +∞) → [0, 1] as

\[ Q_X(\lambda) = \sup_{x\in\mathbb{R}^d} \mathbb{P}\left(|X - x| \le \lambda\right), \qquad \lambda \ge 0, \]

where |·| is the standard Euclidean norm on R^d. The anti-concentration phenomenon has been quantified in a number of classical results, and can be traced back
to works of Doeblin, Lévy, Kolmogorov [13, 20, 22]. Rogozin's inequality from [31] strengthened all those; it states that there is a universal positive constant C such that for independent random variables X_1, ..., X_n, their sum S = X_1 + ··· + X_n and positive parameters λ_1, ..., λ_n, we have

\[ Q_S(\lambda) \le C\left(\sum_{j=1}^n \lambda_j^2\left(1 - Q_{X_j}(\lambda_j)\right)\right)^{-1/2}, \qquad \lambda \ge \max_{j\le n}\lambda_j. \]

Esseen in [14] offered an analytic approach based on characteristic functions. This led to further improvements, by Kesten in [16, 17], as well as Postnikova and Yudin in [30], culminating in a bound improving upon all previous ones, established by Miroshnikov and Rogozin in [26], which gives

\[ Q_S(\lambda) \le C\left(\sum_{j=1}^n \lambda_j^2\, D_{X_j}\!\left(\tfrac12\lambda_j\right) Q_{X_j}(\lambda_j)^{-2}\right)^{-1/2}, \qquad \lambda \ge \tfrac12\max_{j\le n}\lambda_j, \]

where D_X(λ) = λ^{-2} E[min{|X|, λ}²]. Note that D_X ≤ 1. Recently, Bobkov and Chistyakov in [5] have further strengthened this inequality by removing the factors D_{X_j}, at the expense of shrinking the domain λ ≳ max_j λ_j to λ ≳ (Σ_j λ_j²)^{1/2}, which is necessary for such a modified inequality to hold (see their remark before Theorem 1.2 in [5]). Namely, they obtain the inequality

\[ Q_S(\lambda) \le C\left(\sum_{j=1}^n \lambda_j^2\, Q_{X_j}(\lambda_j)^{-2}\right)^{-1/2}, \qquad \lambda \ge \Big(\sum_{j=1}^n \lambda_j^2\Big)^{1/2}, \tag{1} \]

with a universal positive constant C. They were motivated by two-sided bounds on the concentration function of sums of log-concave random variables. Crucially for their approach, they obtained a uniform bound on the density of a sum of independent uniform random variables, which can be naturally restated in geometric terms as the statement that the volume of a hyperplane section of the cube, as soon as it is nontrivial, is large (at least a universal fraction of the volume of the cube).

The aim of this note is to extend these results to higher dimensions, as well as provide a new extension of those to Rényi entropies, which continues the recent body of work devoted to developing subadditivity properties for sums of independent random variables in various settings, see for instance [4, 6, 7, 23]. Our approach has incidentally led us to a curious sharp lower bound on noncentral sections of isotropic convex bodies, which may be of independent interest.

1.1. Noncentral sections. Our first main result is the following uniform bound.
Theorem 1. Let d ≥ 1. Let U_1, U_2, ... be i.i.d. random vectors uniform on the unit Euclidean ball B_2^d in R^d. There is a positive constant c_d depending only on d such that for every n ≥ 1 and real numbers a_1, ..., a_n with Σ_{j=1}^n a_j² = 1, we have

\[ \inf_{x\in B_2^d} p(x) \ge c_d, \]

where p is the density of the random vector Σ_{j=1}^n a_j U_j.

In the 1-dimensional case d = 1, this was discovered by Bobkov and Chistyakov in [5] (Proposition 3.2), as alluded to earlier (they obtained c_1 = 0.00095..). Motivated by applications to noncentral sections of the cube and polydisc, König and Rudelson in [21] studied the cases d = 1 and d = 2 and obtained that one can take c_1 = 1/34 = 0.029.. and c_2 = 1/(27π) = 0.011... As we shall present, without too much additional work, their probabilistic approach essentially yields the claimed result for arbitrary d with

\[ c_d = \frac{1}{100\cdot 2^d\,\omega_d}, \tag{2} \]

where as usual ω_d stands for the volume of the unit ball in R^d,

\[ \omega_d = \mathrm{vol}_d(B_2^d) = \frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}. \]
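To get a concrete feel for the constant (2), here is a short numerical sketch (purely illustrative, not part of any proof) evaluating c_d and ω_d for small d, next to the sharper low-dimensional constants of [21] quoted above.

```python
# Evaluate the constant (2), c_d = 1/(100 * 2^d * omega_d), for small d.
from math import pi, gamma

def omega(d):
    # volume of the unit Euclidean ball B_2^d
    return pi ** (d / 2) / gamma(d / 2 + 1)

for d in range(1, 6):
    c_d = 1 / (100 * 2 ** d * omega(d))
    print(f"d = {d}: omega_d = {omega(d):.4f}, c_d = {c_d:.6f}")

# For comparison, [21] gives c_1 = 1/34 = 0.029... and c_2 = 1/(27 pi) = 0.011...,
# so (2) is weaker in low dimensions but is valid for every d.
print(1 / 34, 1 / (27 * pi))
```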

Pursuing a more geometric direction, we extend the Bobkov-Chistyakov result to a sharp bound for all even log-concave densities. Recall that a random vector X in R^d with density f is called log-concave when f = e^{−φ} for a convex function φ : R^d → (−∞, +∞] (for background, see for instance [1]).

Theorem 2. Let f : R → [0, +∞) be an even log-concave probability density. Let

\[ \sigma = \sqrt{\int_{\mathbb{R}} x^2 f(x)\,dx} \]

be its standard deviation. Then

\[ \sigma f(\sigma\sqrt{3}) \ge \frac{1}{\sqrt{2}}\,e^{-\sqrt{6}} = 0.061... \tag{3} \]

(The equality is attained for the symmetric exponential density.)
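The equality case is easy to check numerically: the symmetric exponential density of variance 1 is f(x) = e^{−√2|x|}/√2, and σf(σ√3) then equals e^{−√6}/√2 exactly. A minimal sketch:

```python
# The equality case of (3): the Laplace density f(x) = exp(-|x|/b)/(2b) has
# variance 2b^2, so b = 1/sqrt(2) gives the symmetric exponential of variance 1.
from math import exp, sqrt

b = 1 / sqrt(2)
f = lambda x: exp(-abs(x) / b) / (2 * b)
sigma = sqrt(2 * b * b)                  # = 1

print(sigma * f(sigma * sqrt(3)))        # left-hand side of (3): 0.06100...
print(exp(-sqrt(6)) / sqrt(2))           # right-hand side of (3): 0.06100...
```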

The example of the symmetric uniform distribution shows that in the parameter σ√3, the constant √3 cannot be replaced with any larger number for such a lower bound to continue to hold (uniformly over all even log-concave densities). The parameter σ√3 can be loosely thought of as the effective support of f, as stems from the following basic lemma (see, e.g. Theorem 5 in [25] for a generalisation to Rényi entropies).

Lemma 3. Let f : R → [0, +∞) be an even log-concave probability density of variance 1. Then the support of f, that is, the set supp(f) = {x ∈ R : f(x) > 0}, contains the interval [−√3, √3].

Proof. Suppose that supp(f) = [−a, a] with a < √3. Let g(x) = \frac{1}{2\sqrt{3}}\,1_{[-\sqrt{3},\sqrt{3}]}(x) be the uniform density on [−√3, √3] with variance 1. We only need to use that f is even and nonincreasing on [0, +∞). Since ∫x²f = ∫x²g, by said monotonicity, f intersects g on [0, +∞) at a point c ∈ [0, a], with f − g ≥ 0 on [0, c] and f − g ≤ 0 on [c, a]. Using also that ∫_0^a (f − g) = ∫_a^{√3} g, we get

\[ 0 = \int_0^\infty x^2(f(x) - g(x))\,dx \le c^2\int_0^c (f-g) + c^2\int_c^a (f-g) + \int_a^{\sqrt{3}} x^2(-g(x))\,dx = \int_a^{\sqrt{3}} (c^2 - x^2)\,g(x)\,dx < 0, \]

a contradiction. □

Theorem 2 readily yields a sharp lower bound for the volume of noncentral sections of isotropic symmetric convex bodies (on their effective support). For a recent survey on this topic, see [28]. Recall that a convex body K in R^d is called (centrally) symmetric if K = −K, and in that special case it is called isotropic if it has volume 1 and covariance matrix proportional to the identity matrix,

\[ \left(\int_K x_i x_j\,dx\right)_{i,j\le d} = L_K^2\,\mathrm{Id}_{d\times d}, \]

and the proportionality constant L_K > 0 is called the isotropic constant of K.

Corollary 4. Let K be a symmetric isotropic convex body in R^d with isotropic constant L_K. For every hyperplane H in R^d with distance at most L_K√3 to the origin, we have

\[ \mathrm{vol}_{d-1}(K \cap H) \ge \frac{1}{L_K}\cdot\frac{1}{\sqrt{2}}\,e^{-\sqrt{6}}. \tag{4} \]

Remark 5. This bound is sharp, in that for every ε > 0, there is d and a symmetric isotropic convex body K in R^d which admits a hyperplane H at distance L_K√3 to the origin for which

\[ \mathrm{vol}_{d-1}(K \cap H) < \frac{1}{L_K}\left(\frac{1}{\sqrt{2}}\,e^{-\sqrt{6}} + \varepsilon\right). \]
1.2. Subadditivity of Rényi entropy. To state our second main result and elucidate the connection between Rényi entropies and the concentration function, we begin by recalling the necessary definitions.

For a random vector X in R^d with density f on R^d, and p ∈ [0, +∞], we define the p-Rényi entropy of X as

\[ h_p(X) = \frac{1}{1-p}\log\left(\int_{\mathbb{R}^d} f(x)^p\,dx\right), \]

with the cases p ∈ {0, 1, ∞} treated by limiting expressions: h_0(X) = log vol_d(supp(f)), h_1(X) = −∫_{R^d} f log f, and h_∞(X) := −log ‖f‖_∞, provided the relevant integrals exist (in the Lebesgue sense). We define the Rényi entropy power to be

\[ N_p(X) = e^{2h_p(X)/d}. \]

Finally, the maximum functional M for X is defined by

\[ M(X) = \|f\|_\infty \]

and we have

\[ N_\infty(X) = M(X)^{-2/d}. \]
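To make the definitions concrete, the following small computation (an illustration of ours, not used later) evaluates h_p and N_p numerically for the standard normal density on R and compares with the classical closed form N_p = 2πp^{1/(p−1)}, whose p → ∞ limit is N_∞ = M^{−2} = 2π.

```python
# Renyi entropy powers of the standard normal density on R (d = 1).
from math import pi, sqrt, exp, log

def f(x):
    return exp(-x * x / 2) / sqrt(2 * pi)

def h_p(p, a=-12.0, b=12.0, n=200001):
    # h_p = (1/(1-p)) * log(integral of f^p), by a plain Riemann sum
    dx = (b - a) / (n - 1)
    integral = sum(f(a + i * dx) ** p for i in range(n)) * dx
    return log(integral) / (1 - p)

for p in (2, 5, 50):
    print(p, exp(2 * h_p(p)), 2 * pi * p ** (1 / (p - 1)))  # N_p since d = 1

M = f(0)                      # maximum functional M(X) = ||f||_inf
print(M ** (-2.0), 2 * pi)    # N_inf computed two ways, both 6.2831...
```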
We observe that if U is uniform on the unit ball B_2^d and independent of X, then the concentration function of X is, up to scaling factors, the maximum functional of the smoothed variable X + λU; that is, plainly,

\[ Q_X(\lambda) = \sup_{x\in\mathbb{R}^d} \mathbb{P}(|X - x| \le \lambda) = \sup_{x\in\mathbb{R}^d} \int_{\mathbb{R}^d} 1_{\{|y-x|\le\lambda\}}\,f(y)\,dy = \lambda^d\,\omega_d\,M(X + \lambda U) \tag{5} \]

and, consequently,

\[ N_\infty(X + \lambda U) = \omega_d^{2/d}\,\lambda^2\,Q_X(\lambda)^{-2/d}. \tag{6} \]
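Identity (5) can be verified numerically as well. For X standard normal in R², Q_X(λ) = P(|X| ≤ λ) = 1 − e^{−λ²/2} (the supremum is attained at x = 0 since the density is radially decreasing), while M(X + λU) is the density of X + λU at the origin. A Monte Carlo sketch (ours, with the arbitrary choice λ = 0.7):

```python
# Monte Carlo check of (5) in d = 2 for X standard normal.
import numpy as np

rng = np.random.default_rng(0)
lam, d = 0.7, 2
omega_2 = np.pi                          # area of the unit disc

# sample U uniform on the unit disc
n = 10**6
ang = rng.uniform(0, 2 * np.pi, n)
rad = np.sqrt(rng.uniform(0, 1, n))
U = np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))

phi = lambda z: np.exp(-np.sum(z**2, axis=1) / 2) / (2 * np.pi)
M_est = phi(lam * U).mean()              # density of X + lam*U at the origin

print(lam**d * omega_2 * M_est)          # ~ Q_X(lam) by (5)
print(1 - np.exp(-lam**2 / 2))           # exact Q_X(lam) = 0.2172...
```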

This relationship allows us to rewrite the anti-concentration bound (1) in terms of the maximum functional, or ∞-Rényi entropy power, of smoothed densities. It turns out that, thanks to Theorem 1, the same continues to hold for p-Rényi entropies of random vectors in R^d.

Theorem 6. Let p > 1. For all independent random vectors X_1, ..., X_n in R^d, their sum S = Σ_{j=1}^n X_j and positive parameters λ_1, ..., λ_n with Σ_{j=1}^n λ_j² = 1, we have

\[ N_p(S + U_0) \ge \frac{1}{C_{p,d}}\sum_{j=1}^n N_p(X_j + \lambda_j U_j), \tag{7} \]

where U_0, U_1, ..., U_n are independent random vectors uniform on the unit ball in R^d, also independent of the X_j's. One can take C_{p,d} = e\cdot 2^{\frac{2p}{p-1}\cdot\frac{d+7}{d}}.

As a corollary, we get an extension of (1) to multivariate random variables.

Corollary 7. For all independent random vectors X_1, ..., X_n in R^d, their sum S = X_1 + ··· + X_n, positive parameters λ_1, ..., λ_n and λ ≥ (Σ_{j=1}^n λ_j²)^{1/2}, we have

\[ Q_S(\lambda) \le (2\lambda + 1)^d\, e^{d/2}\, 2^{\frac{p(d+7)}{p-1}} \left(\sum_{j=1}^n \lambda_j^2\, Q_{X_j}(\lambda_j)^{-2/d}\right)^{-d/2}. \]

The next sections present the proofs of our main results, Theorems 1, 2 and 6. The last section is devoted to remarks on reverse bounds in the log-concave setting.

2. Sums of uniforms: Proof of Theorem 1

Throughout this section we fix d ≥ 1, let U_1, U_2, ... be i.i.d. random vectors uniform on the Euclidean unit ball B_2^d in R^d and let ξ_1, ξ_2, ... be i.i.d. random vectors uniform on the Euclidean unit sphere S^{d+1} in R^{d+2}. We also fix n ≥ 1 and real numbers a_1, ..., a_n with Σ_{j=1}^n a_j² = 1. Theorem 1 holds trivially for n = 1. Thus we shall assume in all the statements of this section that n ≥ 2 with all the a_j nonzero.

2.1. A probabilistic formula. One of the key ingredients is the following probabilistic formula for the density p of Σ_{j=1}^n a_j U_j, established in [21] via a delicate Fourier analytic argument when d = 1, 2 (Proposition 3.2 in [21]). We extend it to all dimensions and give an elementary, direct and short proof.

Lemma 8. For every x ∈ R^d, we have

\[ p(x) = \frac{1}{\omega_d}\,\mathbb{E}\left[\Big|\sum_{j=1}^n a_j\xi_j\Big|^{-d}\, 1_{\left\{\left|\sum_{j=1}^n a_j\xi_j\right| > |x|\right\}}\right]. \]

The crux is an intimate connection between the uniform measure on the sphere and its projection onto a codimension 2 subspace, which turns out to be uniform on the ball. This is folklore which, specialised to the two-dimensional sphere, amounts to Archimedes' hat-box theorem. We refer to Corollary 4 in [3] for a generalisation to ℓ_p balls.
Lemma 9. Let d ≥ 1 and let X = (X_1, ..., X_d, X_{d+1}, X_{d+2}) be a random vector uniform on the unit Euclidean sphere S^{d+1} in R^{d+2}. The random vector X̃ = (X_1, ..., X_d) in R^d is uniform on the unit Euclidean ball B_2^d.

Proof. Let P : S^{d+1} → B_2^d be the projection map P(x_1, ..., x_d, x_{d+1}, x_{d+2}) = (x_1, ..., x_d). The preimage of a point x ∈ B_2^d with |x| = r under P is a circle x_{d+1}² + x_{d+2}² = 1 − r² of radius √(1 − r²). Using cylindrical coordinates (r, x_{d+1}, x_{d+2}), the preimage on S^{d+1} of an infinitesimal volume element dr under P then has (d+1)-volume on S^{d+1} equal to

\[ 2\pi\sqrt{1-r^2}\,\sqrt{\left(d\big(\sqrt{1-r^2}\big)\right)^2 + (dr)^2} = 2\pi\,dr, \]

which is uniform on B_2^d (i.e. does not depend on r). □
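Lemma 9 invites a quick Monte Carlo check (an illustration of ours): projecting uniform samples from S^{d+1} in R^{d+2} onto the first d coordinates should reproduce the radial moments E|Y|^k = d/(d + k) of the uniform distribution on B_2^d.

```python
# Monte Carlo check of Lemma 9 via radial moments.
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 10**6

X = rng.standard_normal((n, d + 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform on S^{d+1}
R = np.linalg.norm(X[:, :d], axis=1)            # radius of the projection

for k in (1, 2, 4):
    print(k, (R**k).mean(), d / (d + k))        # the two columns should agree
```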


Proof of Lemma 8. Let X = Σ_{j=1}^n a_jξ_j, Y = Σ_{j=1}^n a_jU_j and let P : S^{d+1} → B_2^d be the projection map P(x_1, ..., x_{d+2}) = (x_1, ..., x_d). Note that P(X) = Σ_{j=1}^n a_j P(ξ_j), and by Lemma 9, each P(ξ_j) has the same distribution as U_j. Therefore, Y has the same distribution as P(X). For a Borel set A in R^d we thus have

\[ \mathbb{P}(Y \in A) = \mathbb{P}(P(X) \in A) = \mathbb{P}\left(X \in A\times\mathbb{R}^2\right). \]

Since X is rotationally invariant, we can write X = |X|θ, where θ is a random vector uniform on S^{d+1}, independent of |X|. Using this independence, we condition on the values of |X| and continue the calculation as follows:

\[ \mathbb{P}\left(X \in A\times\mathbb{R}^2\right) = \mathbb{E}_{|X|}\,\mathbb{P}_\theta\left(\theta \in \tfrac{1}{|X|}\left(A\times\mathbb{R}^2\right)\right) = \mathbb{E}_{|X|}\,\mathbb{P}_\theta\left(\theta \in \tfrac{1}{|X|}A\times\mathbb{R}^2\right), \]

since for dilates of the set A × R², we plainly have λ(A × R²) = (λA) × R², λ > 0. Using Lemma 9 again, and a change of variables, we obtain

\[ \mathbb{P}_\theta\left(\theta \in \tfrac{1}{|X|}A\times\mathbb{R}^2\right) = \mathbb{P}_\theta\left(P(\theta) \in \tfrac{1}{|X|}A\right) = \frac{1}{\omega_d}\int_{\mathbb{R}^d} 1_{\{x\in A/|X|,\ |x|\le 1\}}\,dx = \frac{1}{\omega_d}\,|X|^{-d}\int_A 1_{\{|x|\le|X|\}}\,dx. \]

Consequently,

\[ \mathbb{P}(Y \in A) = \frac{1}{\omega_d}\int_A \mathbb{E}\left(|X|^{-d}\,1_{\{|X|\ge|x|\}}\right)dx. \]

This means that Y has density on R^d given by p(x) = \frac{1}{\omega_d}\,\mathbb{E}|X|^{-d}1_{\{|X|\ge|x|\}}. □
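The formula of Lemma 8 can also be tested numerically. In the sketch below (ours; the choices d = 2, n = 3, a_j = 1/√3, x = (0.5, 0) and ε = 0.05 are arbitrary), the right-hand side is estimated from samples of the ξ_j and compared with a crude local estimate of the density p at x.

```python
# Monte Carlo check of Lemma 8 in d = 2 with n = 3 and a = (1,1,1)/sqrt(3).
import numpy as np

rng = np.random.default_rng(2)
d, n, N = 2, 3, 10**6
a = np.ones(n) / np.sqrt(n)

def sphere(N, dim):
    # uniform samples on the unit sphere S^{dim-1} in R^dim
    g = rng.standard_normal((N, dim))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

x = np.array([0.5, 0.0])
r = np.linalg.norm(x)

# right-hand side of Lemma 8 at x, from samples of X = sum_j a_j xi_j
R = np.linalg.norm(sum(a[j] * sphere(N, d + 2) for j in range(n)), axis=1)
rhs = np.mean((R > r) * R ** (-2.0)) / np.pi            # omega_2 = pi

# direct estimate of p at x; by Lemma 9, the first two coordinates of each
# xi_j are distributed exactly as U_j, uniform on the unit disc
Y = sum(a[j] * sphere(N, d + 2)[:, :d] for j in range(n))
eps = 0.05
lhs = np.mean(np.linalg.norm(Y - x, axis=1) <= eps) / (np.pi * eps**2)

print(lhs, rhs)   # two estimates of p(x); they should approximately agree
```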

We mention in passing that, alternatively, Lemma 8 can also be derived from a result of Baernstein II and Culverhouse, (6.5) in [2].
Lemma 10. The random variables |Σ_{j=1}^n a_jξ_j| and |Σ_{j=1}^n a_jU_j| have densities, say, f : [0, +∞) → R and g : [0, +∞) → R, respectively, which satisfy

\[ g(r) = d\,r^{d-1}\int_r^\infty s^{-d} f(s)\,ds, \qquad r \ge 0. \]

König and Rudelson's proof of this in [21] relies on the Fourier inversion formula and a subtle calculation. Curiously, going the other way around, Lemma 10 can be readily obtained from Lemma 8. We sketch the argument for completeness.
Proof. Let X = Σ_{j=1}^n a_jξ_j, Y = Σ_{j=1}^n a_jU_j. To see that |X| has a density, let S = Σ_{j=1}^{n−1} a_jξ_j and note that |X|² = |S|² + 2|S|a_nθ + a_n², where θ has the distribution of, say, the first coordinate of ξ_n and is independent of S. Thus |X|² has a density. By Lemma 8, the density p of Y is given by

\[ p(x) = \frac{1}{\omega_d}\,\mathbb{E}|X|^{-d}1_{\{|X|\ge|x|\}} = \frac{1}{\omega_d}\int_{|x|}^\infty s^{-d} f(s)\,ds. \]

Integration in polar coordinates finishes the argument. □

2.2. Probabilistic bounds. We will use the following bounds established by König and Rudelson, see Propositions 5.1 and 5.4 in [21].

Proposition 11 (König-Rudelson, [21]). We have

\[ \mathbb{P}\left(\Big|\sum_{j=1}^n a_j\xi_j\Big| \ge 1\right) \ge 0.1, \tag{8} \]

and, for t > 1,

\[ \mathbb{P}\left(\Big|\sum_{j=1}^n a_j\xi_j\Big| \ge t\right) \le t^{d+2}\exp\left(\frac{d+2}{2}\left(1 - t^2\right)\right). \tag{9} \]

Note that under our normalisation, E|Σ_{j=1}^n a_jξ_j|² = 1. Inequality (9) quantifies the strong concentration of |Σ_{j=1}^n a_jξ_j|. Bound (8) is of anti-concentration type; it is sometimes referred to as the Stein property, see [10]; it can robustly be approached by moment estimates (Paley-Zygmund-type inequalities), see [32], and has been very well studied for random signs, see [11, 29]; for a generalisation to matricial coefficients, see Theorem 2 in [12].

We are now ready to prove Theorem 1. Since we do not try to optimise the values of constants involved, we forsake potentially more precise calculations in favour of simplicity of the ensuing arguments.
Proof of Theorem 1. We fix x ∈ B_2^d and let X = |Σ_{j=1}^n a_jξ_j|. By Lemma 8, we want to lower bound

\[ p(x) = \frac{1}{\omega_d}\,\mathbb{E}\left[X^{-d}1_{\{X>|x|\}}\right]. \]

Crudely,

\[ \mathbb{E}\left[X^{-d}1_{\{X>|x|\}}\right] \ge 2^{-d}\,\mathbb{P}(|x| < X < 2) \ge 2^{-d}\,\mathbb{P}(1 < X < 2), \]

and by Proposition 11,

\[ \mathbb{P}(1 < X < 2) = \mathbb{P}(X \ge 1) - \mathbb{P}(X \ge 2) \ge 0.1 - (2e^{-3/2})^{d+2} \ge 0.1 - (2e^{-3/2})^3 > 0.01, \]

thus finishing the proof. □
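The closing arithmetic deserves a one-line check: since 2e^{−3/2} < 1, the quantity (2e^{−3/2})^{d+2} is largest at d = 1, where 0.1 − (2e^{−3/2})³ ≈ 0.011 > 0.01. A trivial sketch:

```python
# 0.1 - (2*exp(-3/2))**(d+2) exceeds 0.01 for every d >= 1.
from math import exp

q = 2 * exp(-1.5)
print(q)                               # 0.4462... < 1
for d in (1, 2, 3, 10):
    print(d, 0.1 - q ** (d + 2))       # 0.0111... at d = 1, increasing in d
```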

3. Noncentral sections on effective support: Proofs of Theorem 2 and Corollary 4

3.1. Proof of Theorem 2. Employing the localisation method of degrees of freedom for log-concave functions developed by Fradelizi and Guédon in [15], it suffices to prove the theorem for densities of the following form:

\[ f(x) = c\left(1_{[0,a]}(|x|) + e^{-\gamma(|x|-a)}\,1_{[a,a+b]}(|x|)\right), \qquad x \in \mathbb{R}, \tag{10} \]

where a, b ≥ 0 are not both 0, γ ≥ 0 and c is determined by ∫_R f = 1. We refer to [25] for the details of the argument (the only difference is that [25] deals with the minimisation of the entropy f ↦ −∫ f log f instead of the functional f ↦ f(σ√3) under the constraint that σ is fixed).

Note that the functional f ↦ σf(σ√3) is invariant under replacing f(·) with λf(λ·) for any λ > 0. Therefore, it suffices to only consider γ = 1 (the case γ = 0 is formally contained in the case b = 0 with any γ > 0). For f as above, we have

\[ 1 = \int_{\mathbb{R}} f = 2c\left(a + 1 - e^{-b}\right) \]

and

\[ \sigma^2 = 2c\left(\frac{a^3}{3} + \int_0^b (x+a)^2 e^{-x}\,dx\right). \]

For a, b ≥ 0 not both 0, we define

\[ A = A(a,b) = a + 1 - e^{-b}, \qquad B = B(a,b) = \sqrt{\frac{a^3 + 3\int_0^b (x+a)^2 e^{-x}\,dx}{A}}, \]

so that B = σ√3 and c = 1/(2A).
Claim 1. For all a, b ≥ 0 not both 0, we have B(a, b) ≥ A(a, b). In particular, B(a, b) ≥ a and B(a, b) ≥ 1 − e^{−b}.

Proof. The claim is equivalent to AB² − A³ ≥ 0. Note that for a fixed a > 0,

\[ \partial_b\left(AB^2 - A^3\right) = 3(a+b)^2 e^{-b} - 3A^2 e^{-b} = 3e^{-b}(a+b-A)(a+b+A), \]

which is positive for every b > 0, since a + b − A = b − 1 + e^{−b} > 0. Thus AB² − A³ ≥ (AB² − A³)|_{b=0} = 0. □

By Claim 1, B ≥ a; moreover B ≤ a + b, since by Lemma 3 (applied after rescaling to variance 1) the support [−(a+b), a+b] of f contains [−σ√3, σ√3] = [−B, B]. Thus, when evaluating f at B, we take the exponential part of f, that is, f(B) = ce^{−(B−a)} = (2A)^{−1}e^{a−B} = (2A)^{−1}e^{A−1+e^{−b}−B}, and (3) becomes

\[ \frac{B}{A}\,e^{A-1+e^{-b}-B} \ge \sqrt{6}\,e^{-\sqrt{6}}. \]

We introduce the function

\[ \psi(x) = x - 1 - \log x, \qquad x > 0, \]

as it will be convenient to rewrite the last inequality equivalently, taking the logarithms of both sides, as

\[ \psi(B) \le e^{-b} + \psi(A) + \psi(\sqrt{6}). \tag{11} \]

Let h(a, b) be the difference between the right-hand side and the left-hand side,

\[ h(a,b) = e^{-b} + \psi(A) + \psi(\sqrt{6}) - \psi(B). \]
The proof is concluded through the following two claims. 

Claim 2. For every a > 0, b ↦ h(a, b) is nonincreasing on (0, +∞).

Claim 3. For every a ≥ 0, we have lim_{b→∞} h(a, b) ≥ 0, with equality if and only if a = 0.
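Before turning to the proofs, inequality (11), equivalently h(a, b) ≥ 0, is easy to probe numerically; the sketch below (an illustrative sanity check of ours) scans a grid, using the closed form ∫_0^b (x+a)²e^{−x}dx = (a² + 2a + 2) − e^{−b}((a+b)² + 2(a+b) + 2) obtained by integration by parts.

```python
# Grid sanity check that h(a, b) >= 0, i.e. inequality (11).
from math import exp, log, sqrt

def psi(x):
    return x - 1 - log(x)

def h(a, b):
    A = a + 1 - exp(-b)
    I = (a * a + 2 * a + 2) - exp(-b) * ((a + b) ** 2 + 2 * (a + b) + 2)
    B = sqrt((a ** 3 + 3 * I) / A)
    return exp(-b) + psi(A) + psi(sqrt(6)) - psi(B)

vals = [h(a / 10, b / 10) for a in range(0, 51) for b in range(1, 101)]
print(min(vals))   # positive; it tends to 0 as a -> 0 and b -> infinity
```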

Proof of Claim 2. We fix a > 0 and differentiate with respect to b. We have ∂_b A = e^{−b} and, using B² = \frac{a^3 + 3\int_0^b (x+a)^2 e^{-x}\,dx}{A},

\[ 2B\,\partial_b B = \frac{3(a+b)^2 e^{-b}}{A} - \frac{a^3 + 3\int_0^b (a+x)^2 e^{-x}\,dx}{A^2}\,e^{-b} = \frac{e^{-b}}{A}\left(3(a+b)^2 - B^2\right). \]

Plainly, ψ′(x) = 1 − 1/x. Thus,

\[ e^b\,\partial_b h = -1 + \psi'(A) - \psi'(B)\,e^b\,\partial_b B = -\frac{1}{A} - \left(1 - \frac{1}{B}\right)\frac{1}{2AB}\left(3(a+b)^2 - B^2\right). \]
Since A > 0, ∂_b h ≤ 0 is therefore equivalent to the inequality

\[ (1 - B)\left(3(a+b)^2 - B^2\right) \le 2B^2. \]

We observe that 3(a+b)² − B² ≥ 0. Indeed,

\[ AB^2 = a^3 + 3\int_0^b (a+x)^2 e^{-x}\,dx \le 3(a+b)^2 a + 3(a+b)^2\left(1 - e^{-b}\right) = 3(a+b)^2 A. \]

As a result, if B ≥ 1, we conclude that ∂_b h ≤ 0. When B < 1, ∂_b h ≤ 0 is equivalent to the inequality

\[ B^2 \ge 3(a+b)^2\left(1 + \frac{2}{1-B}\right)^{-1}. \]

The right-hand side as a function of B ∈ (0, 1) is plainly decreasing. Using the bound B ≥ 1 − e^{−b} from Claim 1, it thus suffices to show that

\[ B^2 \ge 3(a+b)^2\left(1 + 2e^b\right)^{-1}, \]

or, equivalently,

\[ AB^2 - 3A(a+b)^2\left(1 + 2e^b\right)^{-1} \ge 0. \]

We fix a > 0. There is equality at b = 0. We take the derivative in b of the left-hand side, which reads

\[ 3(a+b)^2 e^{-b} - 3e^{-b}(a+b)^2\left(1 + 2e^b\right)^{-1} - 6A(a+b)\left(1 + 2e^b\right)^{-1} + 6A(a+b)^2\left(1 + 2e^b\right)^{-2} e^b = \frac{6(a+b)^2}{1 + 2e^b}\left(1 - \frac{A}{a+b} + \frac{A\,e^b}{1 + 2e^b}\right). \]

Clearly, a + b ≥ a + 1 − e^{−b} = A. Consequently the above expression is positive, which finishes the proof. □

Proof of Claim 3. We readily have

\[ A(a, \infty) = a + 1, \qquad B(a, \infty) = \sqrt{\frac{a^3 + 3(a^2 + 2a + 2)}{a+1}} = \sqrt{\frac{(a+1)^3 + 3(a+1) + 2}{a+1}}. \]

As a result, setting x = a + 1 and

\[ f(x) = \sqrt{x^2 + 3 + \frac{2}{x}}, \qquad x \ge 1, \]

we obtain

\[ h(a, \infty) = \psi(x) + \psi(\sqrt{6}) - \psi\left(f(x)\right), \]

where, recall, ψ(u) = u − 1 − log u. Note that the right-hand side vanishes at x = 1. To conclude, we show that its derivative is positive for every x > 1. The derivative reads

\[ \left(1 - \frac{1}{x}\right) - \left(1 - \frac{1}{f(x)}\right)f'(x) = 1 - \frac{1}{x} - \frac{f(x)-1}{f(x)^2}\left(x - \frac{1}{x^2}\right) = \frac{x-1}{x}\cdot\frac{f(x)-1}{f(x)^2}\left(\frac{f(x)^2}{f(x)-1} - x - 1 - \frac{1}{x}\right). \]

Plainly, f(x) > 1. Moreover,

\[ \frac{f(x)^2}{f(x)-1} > f(x) + 1 = \sqrt{x^2 + 3 + \frac{2}{x}} + 1 > \sqrt{x^2 + 2 + \frac{1}{x^2}} + 1 = x + \frac{1}{x} + 1, \]

which shows that the derivative is positive and finishes the proof. □

3.2. Proof of Corollary 4. This is a standard argument. We fix a unit vector θ in R^d and consider the section function by hyperplanes orthogonal to θ,

\[ f(t) = \mathrm{vol}_{d-1}\left(K \cap (t\theta + \theta^\perp)\right), \qquad t \in \mathbb{R}. \]

By the Brunn-Minkowski inequality, this defines a log-concave function. Since K is symmetric, f is even. In particular, it is nonincreasing on [0, +∞), so it suffices to show that L_K f(L_K√3) ≥ (1/√2)e^{−√6}. Since K is of volume 1 with isotropic constant L_K, we have ∫_R f = 1 and

\[ \sigma = \sqrt{\int_{\mathbb{R}} t^2 f(t)\,dt} = \sqrt{\int_K \langle x, \theta\rangle^2\,dx} = L_K. \]

Theorem 2 yields the result.

To see that Corollary 4 is indeed sharp, we present the following construction


confirming Remark 5.

3.3. Proof of Remark 5. Given λ = (λ_1, λ_2) ∈ (0, ∞)², we define a double cone K_λ in R^{d+1} by

\[ K_\lambda = \left\{(x, t) \in \mathbb{R}^d\times\mathbb{R} : |t| \le \lambda_2 d,\ |x| \le \lambda_1\left(1 - \frac{|t|}{\lambda_2 d}\right)\right\}. \]

We have by direct computation

\[ \mathrm{vol}(K_\lambda) = \frac{2d\,\omega_d\,\lambda_1^d\,\lambda_2}{d+1}. \]

Setting

\[ \lambda_1 = L_d\sqrt{\frac{(d+2)(d+3)}{d+1}}, \qquad \lambda_2 = L_d\sqrt{\frac{(d+2)(d+3)}{2d^2}}, \]

with

\[ L_d = \left(\frac{(d+1)^{d+2}}{2\left((d+3)(d+2)\right)^{d+1}\omega_d^2}\right)^{\frac{1}{2(d+1)}}, \]

the body K_λ is in isotropic position with isotropic constant L_d and, in particular, vol_{d+1}(K_λ) = 1. Moreover,

\[ \frac{L_d}{\lambda_2} \xrightarrow[d\to\infty]{} \sqrt{2} \]

and

\[ \omega_d\,\lambda_1^{d+1} = \sqrt{\frac{d+1}{2}}. \]

Thus,

\[ L_d\,\mathrm{vol}_d\left(K_\lambda \cap \{t = L_d\sqrt{3}\}\right) = L_d\,\omega_d\,\lambda_1^d\left(1 - \frac{L_d\sqrt{3}}{\lambda_2 d}\right)^d = \sqrt{\frac{d+1}{(d+2)(d+3)}}\,\omega_d\,\lambda_1^{d+1}\left(1 - \frac{L_d\sqrt{3}}{\lambda_2 d}\right)^d = \sqrt{\frac{(d+1)^2}{2(d+2)(d+3)}}\left(1 - \frac{L_d\sqrt{3}}{\lambda_2 d}\right)^d. \]

Taking the limit, we see that

\[ \lim_{d\to\infty} L_d\,\mathrm{vol}_d\left(K_\lambda \cap \{t = L_d\sqrt{3}\}\right) = \frac{1}{\sqrt{2}}\,e^{-\sqrt{6}}. \]
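This limit can be watched numerically; in the sketch below (ours), logarithms are used to avoid the overflow of (d+1)^{d+2} and the underflow of ω_d for large d.

```python
# The double-cone construction: L_d * vol_d(K_lambda cut by {t = L_d*sqrt(3)})
# approaches exp(-sqrt(6))/sqrt(2) = 0.06100... as d grows.
from math import pi, lgamma, log, sqrt, exp

def log_omega(d):
    # log of the volume of the unit ball B_2^d
    return (d / 2) * log(pi) - lgamma(d / 2 + 1)

def section(d):
    logL = ((d + 2) * log(d + 1) - log(2)
            - (d + 1) * log((d + 3) * (d + 2)) - 2 * log_omega(d)) / (2 * (d + 1))
    L = exp(logL)
    lam1 = L * sqrt((d + 2) * (d + 3) / (d + 1))
    lam2 = L * sqrt((d + 2) * (d + 3) / (2 * d ** 2))
    # log of L * omega_d * lam1^d * (1 - L*sqrt(3)/(lam2*d))^d
    return exp(logL + log_omega(d) + d * log(lam1)
               + d * log(1 - L * sqrt(3) / (lam2 * d)))

for d in (5, 50, 500, 5000):
    print(d, section(d))               # 0.078..., 0.063..., decreasing
print("limit:", exp(-sqrt(6)) / sqrt(2))
```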

Remark 12. Given t_0 ∈ [0, √3], consider the problem

\[ \inf\left\{f(t_0) : f \text{ is an even log-concave density on } \mathbb{R} \text{ with variance } 1\right\}. \tag{12} \]

Theorem 2 asserts that at t_0 = √3 the infimum equals (1/√2)e^{−√6} and is attained for the symmetric exponential density. It is a well-known result going back to Moriguti's work [27] that for an arbitrary probability density f on R, we have

\[ \|f\|_\infty^2 \ge \frac{1}{12}\left(\int_{\mathbb{R}} x^2 f(x)\,dx\right)^{-1}, \]

with equality attained for symmetric uniform densities. As a result, when specialised to even log-concave densities f of variance 1, we get that at the point t_0 = 0 the infimum (12) equals 1/(2√3) and is attained for the symmetric uniform density.

Fix t_0 ∈ (0, √3). Using log-concavity, interpolating the previous two bounds gives

\[ f(t_0) \ge \left(\frac{1}{2\sqrt{3}}\right)^{1 - t_0/\sqrt{3}}\left(\frac{1}{\sqrt{2}}\,e^{-\sqrt{6}}\right)^{t_0/\sqrt{3}}. \]

Since the right-hand side is strictly greater than the minimum of the two bounds, neither the symmetric uniform nor the exponential density attains (12). From the proof of Theorem 2, this infimum is attained at a density of the form (10). We do not have a good prediction for such a density.
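For illustration, at t_0 = 1 the interpolated bound evaluates as follows (a sketch of ours); both classical candidate densities stay strictly above it, consistently with the remark.

```python
# The interpolated lower bound of Remark 12 at t_0 = 1, versus the values of
# the symmetric uniform and symmetric exponential densities (variance 1) there.
from math import exp, sqrt

t0 = 1.0
theta = t0 / sqrt(3)
u = 1 / (2 * sqrt(3))                    # the bound at t_0 = 0
e = exp(-sqrt(6)) / sqrt(2)              # the bound at t_0 = sqrt(3)

bound = u ** (1 - theta) * e ** theta    # interpolated lower bound: 0.117...
uniform_val = u                          # uniform density at t_0: 0.288...
expo_val = exp(-sqrt(2) * t0) / sqrt(2)  # exponential density at t_0: 0.171...
print(bound, uniform_val, expo_val)
```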
4. Rényi entropy: Proof of Theorem 6

The argument simply relies on combining Theorem 1 with the following subadditivity result for Rényi entropies, extending the classical entropy power inequality.

Theorem 13 (Bobkov-Chistyakov, [4]). Let p ≥ 1. For independent random vectors X_1, ..., X_n in R^d, we have

\[ N_p(X_1 + \cdots + X_n) \ge e^{-1}\sum_{i=1}^n N_p(X_i). \]

In fact, they obtained the better constant c_p = e^{−1}p^{1/(p−1)} in place of e^{−1}. Moreover, as established in [24], the case p = ∞ admits an optimal dimensionally dependent constant

\[ c_\infty(d) = \frac{\Gamma\left(\frac{d}{2}+1\right)^{2/d}}{\frac{d}{2}+1}, \qquad d \ge 2. \]

However, for simplicity of expression, and as our other computations do not attempt to approach optimal constants, nor do the larger attainable constants affect the asymptotics of the corollaries to come, we will not make use of this sharpening.
Proof of Theorem 6. By Theorem 1 and (2), the density function of Σ_{j=1}^n λ_j U_j is bounded below, pointwise, by 1/(100·2^d) times the density function of U_0. It follows that Σ_{j=1}^n (X_j + λ_j U_j) has a density function bounded pointwise below by 1/(100·2^d) times the density function of S + U_0. Thus,

\[ N_p(S + U_0) \ge \left(100\cdot 2^d\right)^{-\frac{2p}{d(p-1)}}\, N_p\left(\sum_{j=1}^n (X_j + \lambda_j U_j)\right). \]

By Theorem 13,

\[ N_p\left(\sum_{j=1}^n (X_j + \lambda_j U_j)\right) \ge e^{-1}\sum_{j=1}^n N_p(X_j + \lambda_j U_j). \]

Combining the two inequalities yields the result with

\[ C_{p,d} = e\cdot\left(100\cdot 2^d\right)^{\frac{2p}{d(p-1)}} < e\cdot 2^{\frac{2p}{p-1}\cdot\frac{d+7}{d}}. \]

The same argument can be applied, with sharpened constants, in the case p = ∞. □
Proof of Corollary 7. By homogeneity, we can assume that Σ_{j=1}^n λ_j² = 1. When λ = 1, in view of (6), the corollary follows immediately, with constant

\[ \left(e\cdot 2^{\frac{2p}{p-1}\cdot\frac{d+7}{d}}\right)^{d/2} = e^{d/2}\,2^{\frac{p(d+7)}{p-1}}, \]

by setting p = ∞ in Theorem 6. When λ ≥ 1, using the union bound, we get Q_S(λ) ≤ (2λ + 1)^d Q_S(1), because by a standard volumetric argument a ball of radius λ ≥ 1 can be covered by at most (2λ + 1)^d unit balls (see, e.g. Theorem 4.1.13 in [1]), and the corollary follows from the previous case. □
5. Reversals under log-concavity

It turns out that in the one-dimensional case, the variance of a log-concave random variable X is a good proxy for its maximum functional; more precisely,

\[ \frac{1}{12} \le \mathrm{Var}(X)\,M(X)^2 \le 1, \]

see Proposition 2.1 in [5]. Building on this and the additivity of variance under independence, Bobkov and Chistyakov ([5], Corollary 2.2) derived two-sided matching bounds on the concentration function of sums of independent log-concave random variables.

In higher dimensions, such a proxy with good tensorisation properties seems to be a holy grail. If, however, we restrict our attention to isotropic random vectors, that is, the centred ones with identity covariance matrix, then the maximum functional is directly related to the isotropic constant, which is well studied in geometric functional analysis (see e.g. [1, 9]).

More specifically, if X is a random vector in R^d, its isotropic constant L_X is defined to be

\[ L_X = \left(\det[\mathrm{Cov}(X)]\right)^{\frac{1}{2d}}\, M(X)^{\frac{1}{d}}. \]

By a standard argument, L_X ≥ κ_d, where κ_d is the isotropic constant of a random vector uniform on the unit-volume Euclidean ball ω_d^{−1/d} B_2^d. Moreover, κ_d ≥ 1/12.
Let

\[ K_d = \sup L_X, \]

the supremum taken over all log-concave isotropic random vectors X in R^d. Bourgain's famous slicing conjecture, originating in [8], asks whether K_d is upper-bounded by a universal constant, see also [1, 9, 19]. The best result to date is Klartag's bound by O(√(log d)), [18].

Since covariance matrices add up for sums of independent random variables, we get
two-sided bounds as in the one-dimensional case, modulo bounds on the isotropic
constant.

Theorem 14. Let X_1, ..., X_n be independent log-concave random vectors in R^d and let S be their sum. For λ > 0, we have

\[ \frac{\kappa_d^d\,\omega_d\,\lambda^d}{\left[\det\left(\frac{\lambda^2}{d+2}\,I + \sum_{j=1}^n \mathrm{Cov}(X_j)\right)\right]^{1/2}} \le Q_S(\lambda) \le \frac{K_d^d\,\omega_d\,\lambda^d}{\left[\det\left(\frac{\lambda^2}{d+2}\,I + \sum_{j=1}^n \mathrm{Cov}(X_j)\right)\right]^{1/2}}. \]
Proof. For a log-concave random vector X, by the definitions of the isotropic constant and of the constants κ_d and K_d, we have

\[ \frac{\kappa_d^d}{\sqrt{\det[\mathrm{Cov}(X)]}} \le M(X) \le \frac{K_d^d}{\sqrt{\det[\mathrm{Cov}(X)]}}. \]

Crucially, sums of independent log-concave random vectors are log-concave, and uniform distributions on convex sets are log-concave. Therefore, we can apply this double-sided bound to X = S + λU, where U is a random vector uniform on the unit ball, independent of the X_j's. Using (5) and noting that

\[ \mathrm{Cov}(X) = \mathrm{Cov}(\lambda U) + \sum_{j=1}^n \mathrm{Cov}(X_j) = \frac{\lambda^2}{d+2}\,I + \sum_{j=1}^n \mathrm{Cov}(X_j), \]

where I stands for the d × d identity matrix, we arrive at the desired bounds. □
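As a closing illustration (ours, not from the paper), one can watch Theorem 14 in action for d = 1, where the one-dimensional facts quoted above suggest κ_1 = 1/√12 and K_1 = 1 (the latter from the equality case of Var(X)M(X)² ≤ 1), ω_1 = 2, and Cov(λU) = λ²/3 for U uniform on [−1, 1].

```python
# Monte Carlo illustration of Theorem 14 in d = 1 for a sum of three
# independent Laplace variables (each log-concave with variance 2).
import numpy as np

rng = np.random.default_rng(3)
lam, n, N = 1.0, 3, 10**6

S = rng.laplace(0.0, 1.0, (N, n)).sum(axis=1)
Q = np.mean(np.abs(S) <= lam)     # the sup over x is at 0: S is symmetric
                                  # and unimodal

det = lam**2 / 3 + n * 2.0        # det(lambda^2/(d+2) I + sum_j Cov(X_j))
lower = (1 / np.sqrt(12)) * 2 * lam / np.sqrt(det)   # kappa_1 * omega_1 * lam
upper = 1.0 * 2 * lam / np.sqrt(det)                 # K_1 * omega_1 * lam
print(lower, Q, upper)            # roughly 0.23 <= 0.36 <= 0.79
```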

References

[1] Artstein-Avidan, S.; Giannopoulos, A.; Milman, V. D., Asymptotic geometric analysis. Part
I. Mathematical Surveys and Monographs, 202. American Mathematical Society, Providence,
RI, 2015.
[2] Baernstein II, A.; Culverhouse, R.; Majorization of sequences, sharp vector Khinchin inequal-
ities, and bisubharmonic functions. Studia Math. 152 (2002), no. 3, 231–248.
[3] Barthe, F.; Guédon, O.; Mendelson, S.; Naor, A., A probabilistic approach to the geometry of the ℓ_p^n-ball. Ann. Probab. 33 (2005), no. 2, 480–513.
[4] Bobkov, S. G.; Chistyakov, G. P., Entropy power inequality for the Rényi entropy. IEEE Trans.
Inform. Theory 61 (2015), no. 2, 708–714.
[5] Bobkov, S.; Chistyakov, G., On concentration functions of random variables. J. Theoret.
Probab. 28 (2015), no. 3, 976–988.
[6] Bobkov, S. G.; Marsiglietti, A., Variants of the entropy power inequality. IEEE Trans. Inform.
Theory 63 (2017), no. 12, 7747–7752.
[7] Bobkov, S. G.; Marsiglietti, A; Melbourne, J., Concentration functions and entropy bounds
for discrete log-concave distributions. Combin. Probab. Comput. 31 (2022), no. 1, 54–72.
[8] Bourgain, J., On high-dimensional maximal functions associated to convex bodies. Amer. J.
Math. 108 (1986), no. 6, 1467–1476.
[9] Brazitikos, S.; Giannopoulos, A.; Valettas, P.; Vritsiou, B-H., Geometry of isotropic convex
bodies. Mathematical Surveys and Monographs, 196. American Mathematical Society, Provi-
dence, RI, 2014.
[10] Burkholder, D. L., Independent sequences with the Stein property. Ann. Math. Statist. 39
(1968), 1282–1288.
[11] Dvorak, V.; Klein, O., Probability mass of Rademacher sums beyond one standard deviation. SIAM J. Discrete Math. 36 (2022), no. 3, 2393–2410.
[12] Chasapis, G.; Liu, R.; Tkocz, T., Rademacher-Gaussian tail comparison for complex coefficients and related problems. Proc. Amer. Math. Soc. 150 (2022), no. 3, 1339–1349.
[13] Doeblin, W., Sur les sommes d’un grand nombre des variables aléatoires independantes. Bull.
Sci. Math. 63 (1939), 23–64.
[14] Esseen, C. G., On the Kolmogorov-Rogozin inequality for the concentration function.
Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 5 (1966), 210–216.
[15] Fradelizi, M., Guédon, O., A generalized localization theorem and geometric inequalities for
convex bodies. Adv. Math., 204(2):509–529, 2006.
[16] Kesten, H., A sharper form of the Doeblin-Lévy-Kolmogorov-Rogozin inequality for concen-
tration functions. Math. Scand. 25 (1969), 133–144.
[17] Kesten, H., Sums of independent random variables—without moment conditions. Ann. Math.
Statist. 43 (1972), 701–732.
[18] Klartag, B., Logarithmic bounds for isoperimetry and slices of convex sets. Ars Inven. Anal.
2023, Paper No. 4, 17 pp.
[19] Klartag, B., Milman, V., The slicing problem by Bourgain. To appear in Analysis at Large,
a collection of articles in memory of Jean Bourgain, edited by A. Avila, M. Rassias and Y.
Sinai, Springer, 2022.
[20] Kolmogorov, A., Sur les propriétés des fonctions de concentrations de M. P. Lévy. Ann. Inst.
H. Poincaré 16 (1958), 27–34.
[21] König, H.; Rudelson, M., On the volume of non-central sections of a cube. Adv. Math. 360
(2020), 106929, 30 pp.
[22] Lévy, P., Theorie de l’addition des variables aléatoires, Paris, 1937.
[23] Madiman, M.; Melbourne, J.; Roberto, C., Bernoulli sums and Rényi entropy inequalities.
Bernoulli 29 (2023), no. 2, 1578–1599.
[24] Madiman, M.; Melbourne, J.; Xu, P., Rogozin’s convolution inequality for locally compact
groups. Preprint (2017): arXiv:1705.00642.
[25] Madiman, M.; Nayar, P.; Tkocz, T., Sharp moment-entropy inequalities and capacity bounds
for symmetric log-concave distributions. IEEE Trans. Inform. Theory 67 (2021), no. 1, 81–94.
[26] Mirošnikov, A. L.; Rogozin, B. A., Inequalities for concentration functions. Teor. Veroyatnost.
i Primenen. 25 (1980), no. 1, 178–183.
[27] Moriguti, S., A lower bound for a probability moment of any absolutely continuous distribution with finite variance. Ann. Math. Stat. 23 (1952), 286–289.
[28] Nayar, P.; Tkocz, T., Extremal sections and projections of certain convex bodies: a survey.
Harmonic analysis and convexity, 343–390, Adv. Anal. Geom., 9, De Gruyter, Berlin, 2023.
[29] Oleszkiewicz, K., On the Stein property of Rademacher sequences. Probab. Math. Statist. 16
(1996), no. 1, 127–130.
[30] Postnikova, L. P.; Judin, A. A., A sharpened form of an inequality for the concentration
function. Teor. Verojatnost. i Primenen. 23 (1978), no. 2, 376–379.
[31] Rogozin, B. A., On the increase of dispersion of sums of independent random variables. Teor.
Verojatnost. i Primenen. 6 (1961), 106–108.
[32] Veraar, M., A note on optimal probability lower bounds for centered random variables, Colloq.
Math 113 (2008), 231–240.

(JM) Department of Probability and Statistics, Centro de Investigación en Matemáticas (CIMAT), Mexico.

(TT) Carnegie Mellon University; Pittsburgh, PA 15213, USA.

(KW) Carnegie Mellon University; Pittsburgh, PA 15213, USA.

Email address: [email protected]
