
Convexity properties of sections of 1-symmetric bodies and Rademacher sums

Joseph Kalarickal, David Rotunno, Salil Singh∗, Tomasz Tkocz†

Carnegie Mellon University, Pittsburgh, PA 15213, USA

Abstract

We establish a monotonicity-type property of the volume of central hyperplane sections of 1-symmetric convex bodies, with applications to chessboard cutting. We parallel this for projections with a new convexity-type property for Rademacher sums.

2020 Mathematics Subject Classification. Primary 52A20; Secondary 60E15.

Key words. hyperplane sections, 1-symmetric convex bodies, Rademacher sums, dual logarithmic Brunn-Minkowski inequality

1 Introduction and results

A line can intersect at most 2N − 1 squares of the standard N × N chessboard, and this is achieved by a diagonal line pushed down a bit. It is only recently that this fact has been generalised to higher dimensions and arbitrary convex bodies.
Specifically, given a convex body K in Rn and N ≥ 1, consider the (open) cells of the lattice $\frac{1}{N}\mathbb{Z}^n$, that is, the cubes $z + (0, \tfrac{1}{N})^n$, $z \in \frac{1}{N}\mathbb{Z}^n$, and let CK(N) be the maximal number of cells contained in K that a hyperplane in Rn can intersect.

∗ Email: [email protected]
† Research supported in part by the NSF grant DMS-2246484.
For the standard cube, K = [0, 1]n, we simply write $C_n(N) = C_{[0,1]^n}(N)$, so C2(N) = 2N − 1. Bárány and Frankl in [4] showed that $C_3(N) \le \frac{9}{4}N^2 + 2N + 1$ for all N ≥ 1 and $C_3(N) \ge \frac{9}{4}N^2 + N - 5$ for all N sufficiently large.
the companion work [5], they established the exact asymptotics of CK (N ) as
N → ∞ for a fixed body K. Their main result is that

$$C_K(N) = \beta_K N^{\,n-1}(1 + o(1)), \qquad N \to \infty,$$

with the constant βK of the leading term given by

$$\beta_K = \max_{a \in \mathbb{R}^n \setminus \{0\}}\ \max_{t \in \mathbb{R}}\ \frac{\|a\|_1}{|a|}\,\mathrm{vol}_{n-1}\big(K \cap (ta + a^\perp)\big). \tag{1}$$

Here and throughout, |x| is the standard Euclidean norm, whereas ‖x‖p is the ℓp norm of a vector x in Rn, so |x| = ‖x‖2. It is a consequence of the Brunn-Minkowski inequality that when K is symmetric, say about the origin, then given an outer-normal vector a, the maximal volume section vol_{n−1}(K ∩ (ta + a⊥)) is the central one, at t = 0. Thus we define the 0-homogeneous function

$$V_K(a) = \frac{\|a\|_1}{|a|}\,\mathrm{vol}_{n-1}(K \cap a^\perp), \qquad a \in \mathbb{R}^n \setminus \{0\}, \tag{2}$$

and for origin-symmetric K, we have βK = max_a VK(a).

Bárány and Frankl in [5] conjectured that for the unit cube Qn = [−1/2, 1/2]^n, the maximum of VQn is attained at diagonal vectors. This was confirmed by Aliev in [2]:

$$\beta_{Q_n} = V_{Q_n}\big((1, \ldots, 1)\big).$$
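The planar base case C2(N) = 2N − 1 can be checked by brute force. The sketch below is our illustration, not code from the paper (the function name is ours); it verifies, in exact rational arithmetic, that the diagonal line pushed down by 1/2 meets 2N − 1 open cells of the N × N board.

```python
from fractions import Fraction

def cells_hit_by_line(N, slope, intercept):
    """Count open unit cells (i, i+1) x (j, j+1), 0 <= i, j < N, whose
    interior meets the line y = slope*x + intercept (slope must be nonzero;
    exact rational arithmetic avoids boundary errors)."""
    count = 0
    for i in range(N):
        # y-values of the line over the open column strip x in (i, i+1)
        y0 = slope * i + intercept
        y1 = slope * (i + 1) + intercept
        lo, hi = min(y0, y1), max(y0, y1)
        for j in range(N):
            # nondegenerate overlap of the open intervals (lo, hi) and (j, j+1)
            if max(lo, j) < min(hi, j + 1):
                count += 1
    return count

# The diagonal "pushed down a bit", y = x - 1/2, meets 2N - 1 cells.
for N in (1, 2, 5, 8):
    assert cells_hit_by_line(N, Fraction(1), Fraction(-1, 2)) == 2 * N - 1
```

This only exhibits the extremal construction; it does not prove the matching upper bound, which is the content of the results quoted above.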

We refine this result to a Schur-convexity statement (for background on majorisation, we refer for instance to Chapter II of Bhatia's book [6]). In fact, this holds not only for the cube, but for all 1-symmetric convex bodies. A convex body K in Rn is called 1-symmetric if it is symmetric with respect to every coordinate hyperplane {x ∈ Rn : xj = 0}, j ≤ n, and is invariant under permutations of the coordinates.

Theorem 1. Let K be a 1-symmetric convex body in Rn. Then the function a ↦ VK(a) defined in (2) is Schur concave on Rn+. In particular, for the chessboard cutting constant defined in (1), we have $\beta_K = \sqrt{n}\,\mathrm{vol}_{n-1}\big(K \cap (1, \ldots, 1)^\perp\big)$.

Our short proof crucially relies on Busemann’s theorem from [8], combined with
the symmetries of the body. In contrast, Aliev’s approach from [2] employs
Busemann’s theorem in a further geometric argument on the plane which did
not seem to allow for the present generalisation to Schur-convexity. We record
Busemann’s theorem for future use.

Theorem 2 (Busemann, [8]). Let K be an origin-symmetric convex body in Rn. Then the function

$$N_K(x) = \frac{|x|}{\mathrm{vol}_{n-1}(K \cap x^\perp)}, \qquad x \ne 0, \tag{3}$$

extended at 0 by 0, defines a norm on Rn.

We also refer to Theorem 3.9 in [11] for a generalisation to lower-dimensional


sections, as well as to Theorem 5 in [3] for an extension to log-concave functions.

With Busemann's theorem in hand, we can motivate our next result. Hyperplane sections of the unit volume cube Qn = [−1/2, 1/2]^n admit a curious probabilistic formula: if we let ξ1, ξ2, . . . be i.i.d. random vectors uniform on the unit sphere S² in R³, then for a unit vector a ∈ Rn, we have

$$\mathrm{vol}_{n-1}(Q_n \cap a^\perp) = \mathbb{E}\left[\,\big|a_1\xi_1 + \cdots + a_n\xi_n\big|^{-1}\right],$$

see [10]. Thus Busemann's theorem in particular asserts that the function

$$x \mapsto \frac{|x|}{\mathrm{vol}_{n-1}(Q_n \cap x^\perp)} = \frac{|x|}{\mathbb{E}\Big|\sum_{j=1}^n \frac{x_j}{|x|}\,\xi_j\Big|^{-1}} = \left(\mathbb{E}\,\Big|\sum_{j=1}^n x_j\,\xi_j\Big|^{-1}\right)^{-1}$$
is convex on Rn. A perhaps much simpler (geometrically dual) analogue of this fact is that for i.i.d. Rademacher random variables ε1, ε2, . . . (random signs, P(εj = ±1) = 1/2), the function

$$x \mapsto \mathbb{E}\,|x_1\varepsilon_1 + \cdots + x_n\varepsilon_n|$$

is plainly convex on Rn. Resisting great efforts and prompting significant activity across geometric functional analysis, the conjectured logarithmic Brunn-Minkowski inequality posed in [7] can be equivalently stated as a convexity property of sections of the cube (see [12]), which in particular would imply that the function

$$t \mapsto -\log \mathbb{E}\left[\big|e^{t_1}\xi_1 + \cdots + e^{t_n}\xi_n\big|^{-1}\right] \tag{4}$$

is convex on Rn . To the best of our knowledge, even this apparent “toy-case”


remains unproved. Driven by the analogy with random signs, we establish the
following result.

Theorem 3. Let ε1, ε2, . . . be independent Rademacher random variables. For every n ≥ 1 and p ≥ 1, the function

$$\Phi(t_1, \ldots, t_n) = \log \mathbb{E}\,\big|e^{t_1}\varepsilon_1 + \cdots + e^{t_n}\varepsilon_n\big|^p$$

is convex on Rn.

Our proof leverages the usual Hölder duality, nontrivially restricted to random variables having nonnegative correlations with the random signs.

We present the proofs in the next section. The final section is devoted to further remarks. In particular, with the same method, we obtain an extension of Aliev's result from [1]. We also make precise the geometric duality alluded to above and a connection of our Theorem 3 to Saroglou's result from [14].

2 Proofs

2.1 Proof of Theorem 1

First note that by the symmetries of K, the function VK is symmetric (under permuting the coordinates of the input), as well as unconditional, that is, VK(a1, . . . , an) = VK(|a1|, . . . , |an|). Fix x, y ∈ Rn+ such that x ≺ y, that is, y majorises x. In particular, ‖x‖1 = ‖y‖1. Thus, to show VK(x) ≥ VK(y), it is equivalent to show that

$$\frac{1}{|x|}\,\mathrm{vol}_{n-1}(K \cap x^\perp) \ge \frac{1}{|y|}\,\mathrm{vol}_{n-1}(K \cap y^\perp),$$

that is, N(x) ≤ N(y) with

$$N(a) = \frac{|a|}{\mathrm{vol}_{n-1}(K \cap a^\perp)}.$$

By Theorem 2, N is convex, and by the symmetries of K, N is symmetric. Since x ≺ y, we have $x = \sum_\sigma \lambda_\sigma y_\sigma$ for some nonnegative weights λσ adding up to 1, where the sum is over all permutations and yσ = (yσ(1), . . . , yσ(n)). By the convexity of N and its symmetry,

$$N(x) = N\Big(\sum_\sigma \lambda_\sigma y_\sigma\Big) \le \sum_\sigma \lambda_\sigma N(y_\sigma) = \sum_\sigma \lambda_\sigma N(y) = N(y).$$

This finishes the proof. □
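The one-line Schur argument can be illustrated numerically; the toy check below is our illustration (the Euclidean norm stands in for the symmetric convex Busemann norm N). For x = (2, 2, 2) ≺ y = (3, 2, 1), x happens to be the average of all permutations of y, so convexity and symmetry immediately give f(x) ≤ f(y).

```python
import itertools
import math

# x = (2, 2, 2) is majorised by y = (3, 2, 1); indeed x is the average of
# all 6 permutations of y (weights lambda_sigma = 1/6, a special case of
# the Birkhoff-type decomposition used in the proof).
y = (3.0, 2.0, 1.0)
perms = list(itertools.permutations(y))
x = tuple(sum(p[i] for p in perms) / len(perms) for i in range(3))
assert x == (2.0, 2.0, 2.0)

# f: symmetric and convex (the Euclidean norm, a stand-in for N)
def f(v):
    return math.sqrt(sum(c * c for c in v))

# convexity: f(sum lambda_sigma y_sigma) <= sum lambda_sigma f(y_sigma),
# and symmetry: f(y_sigma) = f(y), hence f(x) <= f(y)
assert f(x) <= sum(f(p) for p in perms) / len(perms) + 1e-12
assert f(x) <= f(y)
```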

2.2 Proof of Theorem 3

Fix p ≥ 1 and let q ∈ [1, ∞] be its conjugate, 1/p + 1/q = 1. Let Bq be the closed unit ball in Lq (of the underlying probability space, with the norm ‖Y‖q = (E|Y|^q)^{1/q}). For t ∈ Rn, we denote

$$X_t = \sum_j e^{t_j}\varepsilon_j.$$

Thanks to Hölder's inequality, we have

$$\|X_t\|_p = \max_{Y \in B_q} \mathbb{E}[X_t Y], \qquad t \in \mathbb{R}^n,$$

with the maximum attained at

$$Y_*(t) = \frac{1}{\|X_t\|_p^{\,p-1}}\,\mathrm{sgn}(X_t)\,|X_t|^{p-1}.$$

The main idea is to consider the subset Aq of Bq of random variables having nonnegative correlations with all εj,

$$A_q = \big\{Y \in B_q : \mathbb{E}[Y\varepsilon_j] \ge 0,\ j = 1, \ldots, n\big\}.$$

Claim. Y∗(t) ∈ Aq for every t ∈ Rn.

As a result,

$$\|X_t\|_p = \max_{Y \in A_q} \mathbb{E}[X_t Y], \qquad t \in \mathbb{R}^n,$$

which allows us to finish the proof in one line. We have

$$\frac{1}{p}\,\Phi(t) = \log \|X_t\|_p = \max_{Y \in A_q} \log \mathbb{E}[X_t Y] = \max_{Y \in A_q} \log\Big(\sum_{j=1}^n e^{t_j}\,\mathbb{E}[Y\varepsilon_j]\Big).$$

The functions $t \mapsto \log \sum_{j=1}^n e^{t_j}\,\mathbb{E}[Y\varepsilon_j]$ are convex (as sums of log-convex functions are log-convex), so their pointwise maximum over Y ∈ Aq is also convex.

Proof of the claim. Let f(x) = sgn(x)|x|^{p−1}, which is nondecreasing. Fix j ≤ n and note that evaluating the expectation against εj gives

$$\|X_t\|_p^{\,p-1}\,\mathbb{E}[Y_*(t)\varepsilon_j] = \mathbb{E}[f(X_t)\varepsilon_j] = \frac{1}{2}\,\mathbb{E}\left[f\Big(e^{t_j} + \sum_{i \ne j} e^{t_i}\varepsilon_i\Big) - f\Big(-e^{t_j} + \sum_{i \ne j} e^{t_i}\varepsilon_i\Big)\right].$$

The square bracket is nonnegative, as f(v + u) ≥ f(v − u) for every u ≥ 0 and v ∈ R, by monotonicity. □
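Theorem 3 is easy to test numerically, since for fixed t the expectation is a finite average over the 2^n sign patterns. The sketch below is our illustration (the function name is ours); it checks midpoint convexity of Φ along a segment for several values of p.

```python
import itertools
import math

def Phi(t, p):
    """Phi(t) = log E|sum_j e^{t_j} eps_j|^p, computed exactly by averaging
    over all 2^n sign patterns of the Rademacher vector (eps_1, ..., eps_n)."""
    n = len(t)
    total = 0.0
    for eps in itertools.product((-1.0, 1.0), repeat=n):
        s = sum(math.exp(tj) * ej for tj, ej in zip(t, eps))
        total += abs(s) ** p
    return math.log(total / 2 ** n)

# midpoint convexity of Phi along a segment, for several p >= 1
s = (0.3, -1.2, 0.7, 0.0)
t = (-0.5, 0.4, 1.1, -2.0)
mid = tuple((a + b) / 2 for a, b in zip(s, t))
for p in (1.0, 1.5, 2.0, 3.0):
    assert Phi(mid, p) <= (Phi(s, p) + Phi(t, p)) / 2 + 1e-12
```

For p = 2 the check is transparent: E|Σ e^{tj}εj|² = Σ e^{2tj} by independence, so Φ is a log-sum-exp function, which is convex.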
Remark 4. We have crucially used that the class of log-convex functions is stable under summation, or more generally: if {fα(x)}α∈A is a family of log-convex functions on, say, Rn, then the function

$$x \mapsto \int_A f_\alpha(x)\,d\mu(\alpha) \tag{5}$$

is also log-convex on Rn, where µ is a nonnegative measure on A. This readily follows from Hölder's inequality. As a result, Theorem 3 instantly extends to sums of independent symmetric random variables (a random variable X is symmetric if −X and X have the same distribution).

Corollary 5. Let X1, X2, . . . be independent symmetric random variables. For every n ≥ 1 and p ≥ 1, the function

$$\Phi(t_1, \ldots, t_n) = \log \mathbb{E}\,\big|e^{t_1}X_1 + \cdots + e^{t_n}X_n\big|^p$$

is convex on Rn.

For the proof, note that by the symmetry of the Xj, they have the same distribution as εj|Xj|, respectively, where ε1, . . . , εn are independent Rademacher random variables (independent of the Xj). Thus, it suffices to use (5) with µ given by the distribution of (|X1|, . . . , |Xn|).

3 Concluding remarks

3.1 Monotonicity under `∞ normalisation

Aliev in Lemma 2 of [1] showed that for the unit cube Qn, its Busemann norm $N_{Q_n}(x) = \frac{|x|}{\mathrm{vol}_{n-1}(Q_n \cap x^\perp)}$ (see Theorem 2) is maximised over the unit ℓ∞-sphere at its vertices. Since the maximum of a convex function over a convex body is attained at an extreme point, Aliev's lemma extends verbatim to all origin-symmetric convex bodies. Moreover, since an even convex function on the real line is nondecreasing on [0, ∞), for 1-symmetric bodies we obtain a stronger monotonicity property.

Theorem 6. Let K be a 1-symmetric convex body in Rn. Then the function x ↦ NK(x) defined in (3) is nondecreasing in each coordinate on Rn+. In particular, max_{x∈[0,1]^n} NK(x) = NK(1, . . . , 1).
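For K = Q2 the Busemann norm has a closed form, which makes Theorem 6 easy to check: the central section of the square by the line a⊥ has length |a|/max(|a1|, |a2|) (an elementary chord computation, our illustration, not from the paper), so NQ2(a) = max(|a1|, |a2|).

```python
import math

def N_Q2(a1, a2):
    """Busemann norm of the square Q2 = [-1/2, 1/2]^2: |a| divided by the
    length of the central section Q2 ∩ a^perp; the chord through the centre
    has length |a| / max(|a1|, |a2|), so N_Q2(a) = max(|a1|, |a2|)."""
    a1, a2 = abs(a1), abs(a2)
    chord = math.hypot(a1, a2) / max(a1, a2)
    return math.hypot(a1, a2) / chord

# nondecreasing in each coordinate on a grid of R^2_+ ...
grid = [k / 10 for k in range(1, 11)]
for x1 in grid:
    for x2 in grid[:-1]:
        assert N_Q2(x1, x2) <= N_Q2(x1, x2 + 0.1) + 1e-12
# ... and maximised over [0, 1]^2 at the vertex (1, 1), as in Theorem 6
assert max(N_Q2(x1, x2) for x1 in grid for x2 in grid) == N_Q2(1.0, 1.0)
```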

3.2 Dual logarithmic Brunn-Minkowski inequality

Saroglou's dual log Brunn-Minkowski inequality, Theorem 6.2 from [14], essentially states that the (n − 1)-volume of the polytope

$$P_t = \mathrm{conv}\big\{\pm e^{t_j}\,\mathrm{Proj}_{(1,\ldots,1)^\perp}\, e_j,\ j \le n\big\} = \mathrm{Proj}_{(1,\ldots,1)^\perp}\,\mathrm{conv}\big\{\pm e^{t_j} e_j,\ j \le n\big\}$$

is log-convex in t. As usual, for a subspace H in Rn, ProjH denotes the orthogonal projection onto H. On the other hand, the 2^n facets of the stretched cross-polytope conv{±e^{tj} ej} are all congruent, with outer normal vectors $\big(\sum_j e^{-2t_j}\big)^{-1/2}\,[\varepsilon_j e^{-t_j}]_{j=1}^n$, $\varepsilon \in \{-1,1\}^n$, and (n − 1)-volume $\frac{1}{(n-1)!}\big(\sum_j e^{-2t_j}\big)^{1/2}\, e^{\sum_j t_j}$,

so from Cauchy's formula (see for instance [9, 13]), we get

$$\mathrm{vol}_{n-1}(P_t) = \frac{2^{n-1}}{\sqrt{n}\,(n-1)!}\; e^{\sum_j t_j}\;\mathbb{E}\,\Big|\sum_j e^{-t_j}\varepsilon_j\Big|.$$

Thus the convexity of the function

$$t \mapsto \log \mathbb{E}\,\Big|\sum_j e^{t_j}\varepsilon_j\Big| \tag{6}$$

is a special case of Saroglou's result. In that sense, Theorem 3 can be viewed as a probabilistic extension of the dual logarithmic Brunn-Minkowski inequality. In analogy to Theorem 3, we thus conjecture that the following extension of (4) holds: for every 0 < q ≤ 1, the function

$$t \mapsto -\log \mathbb{E}\left[\big|e^{t_1}\xi_1 + \cdots + e^{t_n}\xi_n\big|^{-q}\right]$$

is convex on Rn.

3.3 A vector-valued extension

As an immediate corollary to Theorem 3, we obtain its extension to vector-valued coefficients in a Hilbert space.

Corollary 7. Let ε1, ε2, . . . be independent Rademacher random variables. Let H be a separable Hilbert space with norm ‖ · ‖. Let p ≥ 1 and let v1, . . . , vn be vectors in H. Then

$$\Phi(t_1, \ldots, t_n) = \log \mathbb{E}\,\big\|e^{t_1}\varepsilon_1 v_1 + \cdots + e^{t_n}\varepsilon_n v_n\big\|^p$$

is convex on Rn.

Proof. We use a standard embedding (see, e.g., Remark 3 in [15]): we fix an orthonormal basis (uk)k≥1 in H, take i.i.d. standard Gaussian random variables g1, g2, . . . , independent of the εj, and set $G = \sum_{k \ge 1} g_k u_k$, so as to have

$$\|x\|^p = \frac{1}{\mathbb{E}|g_1|^p}\,\mathbb{E}\,\big|\langle x, G\rangle\big|^p, \qquad x \in H.$$

This gives

$$\Phi(t_1, \ldots, t_n) = -\log \mathbb{E}|g_1|^p + \log\, \mathbb{E}_G\, \mathbb{E}_\varepsilon\Big|\sum_{j=1}^n e^{t_j}\langle v_j, G\rangle\,\varepsilon_j\Big|^p.$$

The result follows from Theorem 3, for, conditioned on the value of G, the function $t \mapsto \mathbb{E}_\varepsilon\big|\sum_{j=1}^n e^{t_j}\langle v_j, G\rangle\,\varepsilon_j\big|^p$ is log-convex, and sums of log-convex functions are log-convex. □

3.4 A Representation as a maximum

We finish with an elementary representation of the L1 norm of Rademacher


sums as a maximum of linear forms with nonnegative ordered coefficients. This
besides being of independent interest gives an alternative proof of the convexity
of (6), as explained at the end of this subsection. For n ≥ 1, we let

Tn = {x ∈ Rn , x1 ≥ x2 ≥ . . . xn ≥ 0}

be the cone in Rn of nonincreasing nonnegative sequences.

Lemma 8. For every n ≥ 1, there is a finite subset An of Tn such that for all
x ∈ Tn , we have
n
X n
X
E xj εj = max aj xj .
a∈An
j=1 j=1

Proof. Changing the order of summation, we write

$$\mathbb{E}\,\Big|\sum_{j=1}^n x_j\varepsilon_j\Big| = \mathbb{E}\left[\mathrm{sgn}\Big(\sum_{j=1}^n x_j\varepsilon_j\Big)\Big(\sum_{j=1}^n x_j\varepsilon_j\Big)\right] = \sum_{j=1}^n x_j\,\mathbb{E}\left[\varepsilon_j\,\mathrm{sgn}\Big(\sum_{i=1}^n x_i\varepsilon_i\Big)\right],$$

where we use the standard signum function, sgn(t) = |t|/t for t ≠ 0, sgn(0) = 0, which is odd and nondecreasing. It is thus natural to define the function α = (α1, . . . , αn) : Rn → Rn,

$$\alpha_j(x) = \mathbb{E}\left[\varepsilon_j\,\mathrm{sgn}\Big(\sum_{i=1}^n x_i\varepsilon_i\Big)\right].$$

Note that since sgn(·) is odd, we have

$$\alpha_j(x) = \mathbb{E}\left[\mathrm{sgn}\Big(x_j + \sum_{i \ne j} x_i\varepsilon_i\Big)\right], \qquad x \in \mathbb{R}^n.$$

We set

$$A_n = \alpha(T_n)$$

and, to finish the proof, we claim that

(1) An is a finite set,

(2) An ⊂ Tn,

(3) $\mathbb{E}\,|\sum_{j=1}^n x_j\varepsilon_j| = \max_{a \in A_n} \sum_{j=1}^n a_j x_j$ for every x ∈ Tn.

Claim (1) holds because αj(x) takes only finitely many values (for any x, αj(x) is a sum of 2^n terms, each equal to ±1/2^n or 0).

To show (2), we fix x ∈ Tn and 1 ≤ k ≤ n − 1. To argue that αk(x) ≥ αk+1(x), we write

$$\alpha_k(x) - \alpha_{k+1}(x) = \mathbb{E}\,\mathrm{sgn}\Big(x_k + \varepsilon_{k+1}x_{k+1} + \sum_{i \ne k,k+1} x_i\varepsilon_i\Big) - \mathbb{E}\,\mathrm{sgn}\Big(x_{k+1} + \varepsilon_k x_k + \sum_{i \ne k,k+1} x_i\varepsilon_i\Big)$$
$$= \frac{1}{2}\,\mathbb{E}\left[\mathrm{sgn}\Big(x_k - x_{k+1} + \sum_{i \ne k,k+1} x_i\varepsilon_i\Big) - \mathrm{sgn}\Big(x_{k+1} - x_k + \sum_{i \ne k,k+1} x_i\varepsilon_i\Big)\right]$$

and the monotonicity of sgn(·) finishes the argument. We also need to show that αn(x) ≥ 0. Taking the expectation with respect to εn in the definition of αn, we have

$$\alpha_n(x) = \frac{1}{2}\,\mathbb{E}\left[\mathrm{sgn}\Big(x_n + \sum_{i<n} x_i\varepsilon_i\Big) - \mathrm{sgn}\Big(-x_n + \sum_{i<n} x_i\varepsilon_i\Big)\right]$$

and the expression inside the expectation is nonnegative because sgn(v + u) ≥ sgn(v − u) for every u ≥ 0 and v ∈ R, by monotonicity.

Finally, to prove (3), we fix x ∈ Tn, take arbitrary a ∈ An, say a = α(y) with y ∈ Tn, and note that

$$\sum_j a_j x_j = \sum_j \alpha_j(y)\,x_j = \mathbb{E}\left[\sum_j \varepsilon_j\,\mathrm{sgn}\Big(\sum_i y_i\varepsilon_i\Big)\,x_j\right] = \mathbb{E}\left[\mathrm{sgn}\Big(\sum_i y_i\varepsilon_i\Big)\Big(\sum_j x_j\varepsilon_j\Big)\right] \le \mathbb{E}\,\Big|\sum_j x_j\varepsilon_j\Big|,$$

proving that $\max_{a \in A_n} \sum_j a_j x_j \le \mathbb{E}\,|\sum_j x_j\varepsilon_j|$, with the equality plainly attained for a = α(x). □
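Lemma 8 can be checked by exhaustive enumeration for small n. The sketch below is our illustration (function names are ours); it computes α(x) exactly, verifies claims (2) and (3) at x = (5, 3, 2, 1) ∈ T4, and checks ⟨α(y), x⟩ ≤ E|Σ xjεj| for a few other y ∈ T4.

```python
import itertools

def sgn(t):
    return (t > 0) - (t < 0)

def alpha(x):
    """alpha_j(x) = E[eps_j sgn(sum_i x_i eps_i)], exact average over 2^n signs."""
    n = len(x)
    out = [0.0] * n
    for eps in itertools.product((-1, 1), repeat=n):
        s = sgn(sum(xi * ei for xi, ei in zip(x, eps)))
        for j in range(n):
            out[j] += eps[j] * s
    return [v / 2 ** n for v in out]

def E_abs(x):
    """E|sum_j x_j eps_j|, again by exhaustive enumeration."""
    n = len(x)
    return sum(abs(sum(xi * ei for xi, ei in zip(x, eps)))
               for eps in itertools.product((-1, 1), repeat=n)) / 2 ** n

x = (5, 3, 2, 1)                      # a point of the cone T_4
a = alpha(x)
# claim (2): alpha(x) is nonincreasing and nonnegative, i.e. alpha(x) in T_4
assert all(a[k] >= a[k + 1] for k in range(3)) and a[-1] >= 0
# claim (3), equality case: E|sum x_j eps_j| = <alpha(x), x>
assert abs(E_abs(x) - sum(aj * xj for aj, xj in zip(a, x))) < 1e-12
# claim (3), upper bound: <alpha(y), x> <= E|sum x_j eps_j| for y in T_4
for y in ((4, 2, 1, 1), (1, 1, 1, 1), (7, 5, 0, 0)):
    assert sum(aj * xj for aj, xj in zip(alpha(y), x)) <= E_abs(x) + 1e-12
```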

If we now account for all possible orderings by taking the maximum over all permutations in the symmetric group Sn on {1, . . . , n}, we obtain a representation for arbitrary coefficients.

Corollary 9. Let n ≥ 1 and let An be the finite subset provided by Lemma 8. For every x ∈ Rn+, we have

$$\mathbb{E}\,\Big|\sum_{j=1}^n x_j\varepsilon_j\Big| = \max_{a \in A_n,\ \sigma \in S_n} \sum_{j=1}^n a_j x_{\sigma(j)}.$$

Proof. Fix x ∈ Rn+ and let σ∗ be a permutation such that $x_{\sigma^*(1)} \ge \cdots \ge x_{\sigma^*(n)}$. By Lemma 8,

$$\mathbb{E}\,\Big|\sum_{j=1}^n x_j\varepsilon_j\Big| = \mathbb{E}\,\Big|\sum_{j=1}^n x_{\sigma^*(j)}\varepsilon_j\Big| = \max_{a \in A_n} \sum_{j=1}^n a_j x_{\sigma^*(j)}.$$

Moreover, by the rearrangement inequality, for an arbitrary permutation σ and arbitrary a ∈ An, we get

$$\sum_{j=1}^n a_j x_{\sigma^*(j)} \ge \sum_{j=1}^n a_j x_{\sigma(j)},$$

since both sequences (aj) and (x_{σ∗(j)}) are nonincreasing. This finishes the proof. □

To see that the function in (6) is convex, note that from Corollary 9, we have

$$\Phi(t_1, \ldots, t_n) = \max_{\sigma \in S_n,\ a \in A_n} \log \sum_{j=1}^n a_j e^{t_{\sigma(j)}}.$$

For fixed a ∈ An and σ ∈ Sn, the function

$$t \mapsto \log \sum_{j=1}^n a_j e^{t_{\sigma(j)}}$$

is convex (as sums of log-convex functions are log-convex). Thus so is their pointwise maximum. □

References
[1] Aliev, I., Siegel’s lemma and sum-distinct sets. Discrete Comput. Geom. 39
(2008), no. 1-3, 59–66.

[2] Aliev, I., On the volume of hyperplane sections of a d-cube. Acta Math.
Hungar. 163 (2021), no. 2, 547–551.

[3] Ball, K., Logarithmically concave functions and sections of convex sets in
Rn . Studia Math. 88 (1988), no. 1, 69–84.

[4] Bárány, I., Frankl, P., How (not) to cut your cheese. Amer. Math. Monthly
128 (2021), no. 6, 543–552.

[5] Bárány, I., Frankl, P., Cells in the box and a hyperplane. J. Eur. Math. Soc.
(JEMS) 25 (2023), no. 7, 2863–2877.

[6] Bhatia, R., Matrix analysis. Graduate Texts in Mathematics, 169. Springer-
Verlag, New York, 1997.

[7] Böröczky, K., Lutwak, E., Yang, D., Zhang, G., The log-Brunn-Minkowski
inequality. Adv. Math. 231 (2012), no. 3-4, 1974–1997.

[8] Busemann, H., A theorem on convex bodies of the Brunn-Minkowski type.


Proc. Nat. Acad. Sci. U.S.A. 35 (1949), 27–31.

[9] Gardner, R. J., Geometric tomography. Second edition. Encyclopedia of


Mathematics and its Applications, 58. Cambridge University Press, New
York, 2006.

[10] König, H., Koldobsky, A., Volumes of low-dimensional slabs and sections
in the cube, Adv. Appl. Math. 47(4) (2011), 894–907.

[11] Milman, V. D., Pajor, A., Isotropic position and inertia ellipsoids and
zonoids of the unit ball of a normed n-dimensional space. Geometric as-
pects of functional analysis (1987–88), 64–104, Lecture Notes in Math., 1376,
Springer, Berlin, 1989.

[12] Nayar, P., Tkocz, T., On a convexity property of sections of the cross-
polytope. Proc. Amer. Math. Soc. 148 (2020), no. 3, 1271–1278.

[13] Nayar, P., Tkocz, T., Extremal sections and projections of certain convex
bodies: a survey. Harmonic analysis and convexity, 343–390, Adv. Anal.
Geom., 9, De Gruyter, Berlin, 2023.

[14] Saroglou, C., More on logarithmic sums of convex bodies. Mathematika 62


(2016), no. 3, 818–841.

[15] Szarek, S. J., On the best constants in the Khinchin inequality. Studia
Math. 58 (1976), no. 2, 197–208.
