
Random Matrices
and Random Partitions
Normal Convergence



World Scientific Series on Probability Theory and Its
Applications

Series Editors: Zhenghu Li (Beijing Normal University, China)


Yimin Xiao (Michigan State University, USA)

Vol. 1 Random Matrices and Random Partitions: Normal Convergence


by Zhonggen Su (Zhejiang University, China)



World Scientific Series on
Probability Theory and
Its Applications
Volume 1

Random Matrices
and Random Partitions
Normal Convergence

Zhonggen Su
Zhejiang University, China

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TA I P E I • CHENNAI


Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data


Su, Zhonggen.
Random matrices and random partitions normal convergence / by Zhonggen Su (Zhejiang
University, China).
pages cm. -- (World scientific series on probability theory and its applications ; volume 1)
Includes bibliographical references and index.
ISBN 978-9814612227 (hardcover : alk. paper)
1. Random matrices. 2. Probabilities. I. Title.
QA273.43.S89 2015
519.2'3--dc23
2015004842

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

Copyright © 2015 by World Scientific Publishing Co. Pte. Ltd.


All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.

Printed in Singapore




To Yanping and Wanning


Preface

This book is intended to provide an introduction to remarkable probability limit theorems in random matrices and random partitions, which look rather different at first glance but have many surprising similarities from a probabilistic viewpoint.
Both random matrices and random partitions play a ubiquitous role in mathematics and its applications. There has been a great deal of research activity around them, and enormous, exciting advances have been seen in the last three decades. A number of excellent, comprehensive books have come out in recent years. However, the work on these two objects is so rich and colourful in theoretical results, practical applications and research techniques that no single book is able to cover all the existing material. Needless to say, these are rapidly developing, evergreen research fields, and a number of interesting new works have emerged in the literature only recently. For instance, building on Johansson's work on deformed Gaussian unitary ensembles, two groups led respectively by Erdős-Yau and Tao-Vu successfully solved, around 2010, the long-standing Dyson-Gaudin-Mehta-Wigner conjecture of bulk universality in random matrices by developing new techniques like comparison principles and rigidity properties. As another example, with the help of the concept of determinantal point processes, coined by Borodin and Olshanski around 2000 in the study of symmetric groups and random partitions, a big breakthrough has been made in understanding universality properties of random growth processes. Each of these topics is worthy of a book of its own.
This book is mainly concerned with normal convergence, namely central limit theorems, of various statistics from random matrices and random partitions as the model size tends to infinity. For ease of writing and learning, we shall focus only on the simplest models, among which are the circular
unitary ensemble, the Gaussian unitary ensemble, random uniform partitions and random Plancherel partitions. As a matter of fact, many of the results addressed in this book remain valid for more general models. This book consists of three parts, as follows.
We shall first give a brief survey of normal convergence in Chapter 1. It includes the well-known laws of large numbers and central limit theorems for independent identically distributed random variables, together with a few methods widely used in dealing with normal convergence. Indeed, central limit theorems are arguably among the most important universality principles describing the laws of random phenomena. Most of this material can be found in any standard graduate-level probability text. Because neither the eigenvalues of a random matrix with independent entries nor the parts of a random partition are independent of each other, we need new tools to treat statistics of dependent random variables. Taking this into account, we briefly review the central limit theorems for martingale difference sequences and for Markov chains. In addition, we review some basic concepts and properties of convergence of random processes: in the study of random matrices and random partitions, the statistic of interest is sometimes a functional of a certain random process, and we can make use of functional central limit theorems if the random process under consideration converges weakly. Even under the stochastic equicontinuity condition, a slightly weaker condition than uniform tightness, the Gikhman-Skorokhod theorem can be used to guarantee convergence in distribution for a wide class of integral functionals.
In Chapters 2 and 3 we shall treat the circular unitary ensemble and the Gaussian unitary ensemble, respectively. A common feature is that there exists an explicit joint probability density function for the eigenvalues of each matrix model. This is a classic result due to Weyl as early as the 1930s. Such an explicit formula is our starting point, and it makes delicate analysis possible. Our focus is upon the second-order fluctuation, namely the asymptotic distribution of a certain class of linear eigenvalue statistics. Under certain smoothness conditions, a linear eigenvalue statistic satisfies the central limit theorem without the normalizing constant √n, which appears in the classical Lévy-Feller central limit theorem for independent identically distributed random variables. On the other hand, neither the indicator function nor the logarithm function satisfies such a smoothness condition. It turns out that the number of eigenvalues in an interval and the logarithm of the characteristic polynomial do still satisfy the central limit theorem after suitable normalization by √(log n). This √(log n) phenomenon is worthy of more
attention, since it will also appear in the study of other similar models. In addition to the circular and Gaussian unitary ensembles, we shall consider their extensions, the circular β ensembles and Hermite β ensembles, where β > 0 is a model parameter. These models were introduced and studied at length by Dyson in the early 1960s to investigate energy level behaviors in complex dynamical systems. A remarkable contribution in this direction is that there is a five-diagonal (resp. tridiagonal) sparse matrix model representing the circular β ensemble (resp. Hermite β ensemble).
In Chapters 4 and 5 we shall deal with random uniform partitions and random Plancherel partitions. The study of integer partitions dates back to Euler as early as the 1750s, who laid the foundation of partition theory by determining the number of distinct partitions of a natural number. A probability space arises naturally by assigning a probability to each partition of a natural number; the uniform measure and the Plancherel measure are the two best-studied choices. Young diagrams and Young tableaux are effective geometric representations for analyzing the algebraic, combinatorial and probabilistic properties of a partition. Particularly interesting is that there exists a nonrandom limit shape (curve) for suitably scaled Young diagrams under both the uniform and the Plancherel measure; from the probabilistic viewpoint, this is a kind of weak law of large numbers. To proceed, we shall further investigate the second-order fluctuation of a random Young diagram around its limit shape. We need to treat three different cases separately: at the edge, in the bulk and integrated. It is remarkable that the Gumbel law, the normal law and the Tracy-Widom law can all be found simultaneously in the study of random integer partitions. A basic strategy of analysis is to construct a larger probability space (a grand ensemble) and use a conditioning argument. By enlarging the probability space, we produce a family of independent geometric random variables and a family of determinantal point processes, respectively, to which many well-known techniques and results become applicable.
Random matrices and random partitions lie at the interface of many branches of science, and they are fast-growing research fields. It is a formidable and confusing task for a new learner to access the research literature, to become acquainted with the terminology, and to understand the theorems and techniques. Throughout the book, I try to state and prove each theorem using language and ways of reasoning from standard probability theory. I hope the book will prove suitable for graduate students in mathematics or related sciences who have mastered graduate-level probability theory, and for those with interest in these fields. The choice of results and references is to a large
extent subjective, determined by my personal point of view and taste in research. The references at the end of the book are far from exhaustive and in fact are rather limited; there is no claim of completeness.
This book grew out of lecture notes used in seminars on random matrices and random partitions for graduate students at Zhejiang University over the years. I would like to thank all participants for their attendance and comments. This book is a by-product of my research projects; I am grateful to the National Science Foundation of China and Zhejiang Province for their generous support over the past ten years. I also take this opportunity to express particular gratitude to my teachers, past and present, for introducing me to the joy of mathematics. Last, but not least, I wish to deeply thank my family for their kindness and love, which were indispensable in completing this project.
I apologize for all omissions and errors, and invite readers to report any remarks, mistakes and misprints.

Zhonggen Su
Hangzhou
December 2014

Contents

Preface vii

1. Normal Convergence 1
1.1 Classical central limit theorems . . . . . . . . . . . . . . . 1
1.2 The Stein method . . . . . . . . . . . . . . . . . . . . . . 19
1.3 The Stieltjes transform method . . . . . . . . . . . . . . 25
1.4 Convergence of stochastic processes . . . . . . . . . . . . . 29

2. Circular Unitary Ensemble 33


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Symmetric groups and symmetric polynomials . . . . . . 38
2.3 Linear functionals of eigenvalues . . . . . . . . . . . . . . 47
2.4 Five diagonal matrix models . . . . . . . . . . . . . . . . . 57
2.5 Circular β ensembles . . . . . . . . . . . . . . . . . . . . . 78

3. Gaussian Unitary Ensemble 89


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.2 Fluctuations of Stieltjes transforms . . . . . . . . . . . . . 98
3.3 Number of eigenvalues in an interval . . . . . . . . . . . . 112
3.4 Logarithmic law . . . . . . . . . . . . . . . . . . . . . . . 125
3.5 Hermite β ensembles . . . . . . . . . . . . . . . . . . . . . 136

4. Random Uniform Partitions 153


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.2 Grand ensembles . . . . . . . . . . . . . . . . . . . . . . . 160
4.3 Small ensembles . . . . . . . . . . . . . . . . . . . . . . . 169


4.4 A functional central limit theorem . . . . . . . . . . . . . 180


4.5 Random multiplicative partitions . . . . . . . . . . . . . . 200

5. Random Plancherel Partitions 207


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.2 Global fluctuations . . . . . . . . . . . . . . . . . . . . . . 220
5.3 Fluctuations in the bulk . . . . . . . . . . . . . . . . . . . 237
5.4 Berry-Esseen bounds for character ratios . . . . . . . . . 244
5.5 Determinantal structure . . . . . . . . . . . . . . . . . . . 253

Bibliography 261

Index 267

Chapter 1

Normal Convergence

1.1 Classical central limit theorems

Throughout the book, unless otherwise specified, we assume that (Ω, A, P) is a probability space rich enough to support all the random variables under study, and E denotes mathematical expectation with respect to P.
Let us begin with Bernoulli's law, which is widely recognized as the first mathematical theorem in the history of probability theory. In modern terminology, the Bernoulli law reads as follows. Assume that ξ_n, n ≥ 1 is a sequence of independent and identically distributed (i.i.d.) random variables with P(ξ_n = 1) = p and P(ξ_n = 0) = 1 − p, where 0 < p < 1. Denote S_n = Σ_{k=1}^n ξ_k. Then we have

    S_n/n →^P p,  n → ∞.    (1.1)

In other words, for any ε > 0,

    P( |S_n/n − p| > ε ) → 0,  n → ∞.
It is this law that first provided a mathematically rigorous interpretation of the probability p with which an event A occurs in a random experiment. To get a feeling for the true (unknown) value p, all we need to do is repeat a trial independently n times (with n large enough) and count the number of occurrences of A. According to the law, the larger n is, the higher the precision.
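The Bernoulli law is easy to watch numerically. A minimal simulation sketch in Python (the value p = 0.3, the sample size and the seed are arbitrary illustrative choices):

```python
import random

random.seed(12345)

p = 0.3       # the "unknown" probability of the event A (arbitrary choice)
n = 200_000   # number of independent trials

# S_n counts how many of the n Bernoulli trials produce the event A
s_n = sum(1 for _ in range(n) if random.random() < p)
frequency = s_n / n   # the frequency S_n / n appearing in (1.1)

print(frequency)  # close to p for large n
```

Rerunning with larger n shrinks the typical deviation of the frequency from p, in line with (1.1).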
Given the Bernoulli law, it is natural to ask how accurately the frequency S_n/n approximates the probability p, and how many times one should repeat the trial to attain a specified precision, that is, how big n should be.
With this problem in mind, De Moivre considered the case p = 1/2 and

proved the following statement:

    P( a ≤ (S_n − n/2)/(√n/2) ≤ b ) ≈ (1/√(2π)) ∫_a^b e^{−x²/2} dx.    (1.2)

Later on, Laplace extended De Moivre's work to the case p ≠ 1/2, obtaining

    P( a ≤ (S_n − np)/√(np(1 − p)) ≤ b ) ≈ (1/√(2π)) ∫_a^b e^{−x²/2} dx.    (1.3)

Formulas (1.2) and (1.3) are now known as the De Moivre-Laplace central limit theorem (CLT).
Note that ES_n = np and Var(S_n) = np(1 − p), so (S_n − np)/√(np(1 − p)) is a normalized random variable with mean zero and variance one. Denote φ(x) = e^{−x²/2}/√(2π), x ∈ R. This is a very nice function from the viewpoint of functional analysis. It is sometimes called the bell curve, since its graph looks like a bell, as shown in Figure 1.1.

Fig. 1.1 Bell curve

The Bernoulli law and the De Moivre-Laplace CLT have become an indispensable part of our modern daily life. See Billingsley (1999a, b), Chow (2003), Chung (2000), Durrett (2010) and Fischer (2011) for the history of the central limit theorem and the link to modern probability theory. But what is the proof? Any trick? Let us turn to De Moivre's original proof of (1.2). To control the left-hand side of (1.2), De Moivre used the binomial formula

    P(S_n = k) = C(n, k) / 2^n,  where C(n, k) = n!/(k!(n − k)!),
and invented, together with Stirling, the well-known Stirling formula (which should arguably be called the De Moivre-Stirling formula)

    n! = n^n e^{−n} √(2πn) (1 + o(1)).

Setting k = n/2 + √n x_k/2, where a ≤ x_k ≤ b, we have

    P( (S_n − n/2)/(√n/2) = x_k )
        = (1/2^n) · n^n e^{−n} √(2πn) / ( k^k e^{−k} √(2πk) · (n − k)^{n−k} e^{−(n−k)} √(2π(n − k)) ) · (1 + o(1))
        = √(2/(πn)) e^{−x_k²/2} (1 + o(1)).

Since the points x_k are spaced 2/√n apart, taking the sum over k yields the integral on the right-hand side of (1.2).
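The local approximation produced by Stirling's formula can be checked directly: the sketch below compares the exact binomial probability with √(2/(πn)) e^{−x_k²/2} for one arbitrarily chosen n and k (both values are illustrative assumptions):

```python
import math

n = 1000   # number of fair-coin tosses (arbitrary)
k = 520    # a value of S_n near n/2 (arbitrary)

# k = n/2 + sqrt(n) x_k / 2, i.e. x_k = (2k - n)/sqrt(n)
x_k = (2 * k - n) / math.sqrt(n)

exact = math.comb(n, k) / 2 ** n
approx = math.sqrt(2 / (math.pi * n)) * math.exp(-x_k ** 2 / 2)

rel_error = abs(exact - approx) / exact
print(exact, approx, rel_error)  # relative error well below 1% at this n
```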
Given a random variable X, denote by F_X(x) its distribution function under P. Let X, X_n, n ≥ 1 be a sequence of random variables. If for each continuity point x of F_X,

    F_{X_n}(x) → F_X(x),  n → ∞,

then we say X_n converges in distribution to X, and write X_n →^d X. In this terminology, (1.3) is written as

    (S_n − np)/√(np(1 − p)) →^d N(0, 1),  n → ∞,

where N(0, 1) stands for a standard normal random variable.
As the reader may notice, the Bernoulli law only deals with frequency and probability, i.e., Bernoulli random variables. In practice, however, people are faced with far more general random variables. For instance, consider measuring the length of a metal rod. Its length, µ, is intrinsic and unknown. How do we get to know the value of µ? Each measurement is only a realization of µ. Suppose that we measure the rod repeatedly n times and record the observed values ξ_1, ξ_2, ..., ξ_n. It is believed that Σ_{k=1}^n ξ_k / n gives us a good feeling for how long the rod is. It turns out that a claim similar to the Bernoulli law is also valid in this general setting. Precisely speaking, assume that ξ is a random variable with mean µ, and that ξ_n, n ≥ 1 is a sequence of i.i.d. copies of ξ. Let S_n = Σ_{k=1}^n ξ_k. Then

    S_n/n →^P µ,  n → ∞.    (1.4)

This is called the Khinchine law of large numbers. It is as important as the Bernoulli law; as a matter of fact, it provides solid theoretical support for a great deal of activity in daily life and scientific research.
The proof of (1.4) is completely different from that of (1.1), since we do not know the exact distribution of the ξ_k. To prove (1.4), we need to invoke the following Chebyshev inequality: if X is a random variable with finite mean µ and variance σ², then for any x > 0

    P( |X − µ| > x ) ≤ σ²/x².

More generally,

    P(X > x) ≤ E f(X) / f(x),

where f : R ↦ R is a nonnegative nondecreasing function. We remark that the Chebyshev inequalities play a fundamental role in proving limit theorems like the law of large numbers.
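The Chebyshev bound is easy to sanity-check against a distribution whose tail is known in closed form; the uniform distribution on (0, 1) below is an arbitrary illustrative choice:

```python
# Chebyshev's inequality tested on X ~ Uniform(0, 1):
# mean mu = 1/2, variance sigma^2 = 1/12, and the exact tail is
# P(|X - 1/2| > x) = 1 - 2x for 0 < x < 1/2 (and 0 for x >= 1/2).
var = 1.0 / 12.0

def exact_tail(x):
    return max(0.0, 1.0 - 2.0 * x)

# the bound sigma^2 / x^2 dominates the exact tail at every x > 0
checks = [exact_tail(x) <= var / x ** 2 for x in (0.1, 0.2, 0.3, 0.4, 0.5, 1.0)]
print(checks)
```

The bound is loose for small x (it can exceed 1), which is typical: Chebyshev trades sharpness for complete generality.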
Having (1.4), we next naturally wonder what the second-order fluctuation of S_n/n around µ is. In other words, is there a normalizing constant a_n → ∞ such that a_n(S_n − nµ)/n converges in distribution to a certain random variable? And what is the distribution of the limit variable? To attack these problems, we need to develop new tools and techniques, since the De Moivre argument using the binomial distribution is no longer applicable.
Given a random variable X with distribution function F_X, define for every t ∈ R

    ψ_X(t) = E e^{itX} = ∫_R e^{itx} dF_X(x).

We call ψ_X(t) the characteristic function of X; it is the Fourier transform of F_X(x). In particular, if X has a probability density function p_X(x), then

    ψ_X(t) = ∫_R e^{itx} p_X(x) dx;
while if X takes only finitely or countably many values, P(X = x_k) = p_k, k ≥ 1, then

    ψ_X(t) = Σ_{k=1}^∞ e^{itx_k} p_k.

Note that the characteristic function of a random variable is always well defined, no matter whether its expectation exists.

Example 1.1. (i) If X is a normal random variable with mean µ and variance σ², then ψ_X(t) = e^{iµt − σ²t²/2};
(ii) If X is a Poisson random variable with parameter λ, then ψ_X(t) = e^{λ(e^{it} − 1)};
(iii) If X is a standard Cauchy random variable, then ψ_X(t) = e^{−|t|}.
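A closed form such as (i) can be compared with the empirical characteristic function, i.e. the sample average of e^{itX}. A sketch for the normal case (the parameters, the point t and the seed are arbitrary assumptions):

```python
import cmath
import random

random.seed(7)

mu, sigma = 1.0, 2.0   # arbitrary normal parameters
t = 0.5
n = 200_000

# empirical characteristic function: average of e^{itX} over n samples
samples = (random.gauss(mu, sigma) for _ in range(n))
empirical = sum(cmath.exp(1j * t * x) for x in samples) / n

# closed form from Example 1.1 (i)
exact = cmath.exp(1j * mu * t - sigma ** 2 * t ** 2 / 2)

error = abs(empirical - exact)
print(error)  # shrinks like 1/sqrt(n)
```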
Some basic properties are listed below.
(i) ψ_X(0) = 1 and |ψ_X(t)| ≤ 1 for any t ∈ R.
(ii) ψ_X(t) is uniformly continuous in any finite closed interval of R.
(iii) ψ_X(t) is nonnegative definite.
According to Bochner's theorem, any function satisfying (i), (ii) and (iii) above must be the characteristic function of some random variable.
(iv) ψ_X(−t) equals the complex conjugate of ψ_X(t) for any t ∈ R.
(v) If E|X|^k < ∞, then ψ_X(t) is k times differentiable, and

    ψ_X^{(k)}(0) = i^k E X^k.    (1.5)
Hence we have the Taylor expansion at zero: for any k ≥ 1

    ψ_X(t) = Σ_{l=0}^{k} (i^l m_l / l!) t^l + o(|t|^k)    (1.6)

and

    ψ_X(t) = Σ_{l=0}^{k−1} (i^l m_l / l!) t^l + β_k θ_k |t|^k / k!,

where m_l := m_l(X) = E X^l, β_k = E|X|^k and |θ_k| ≤ 1.


(vi) If X and Y are independent random variables, then

    ψ_{X+Y}(t) = ψ_X(t) ψ_Y(t).

Obviously, this product formula extends to any finite collection of independent random variables.
(vii) The distribution function of a random variable is uniquely determined by its characteristic function. Specifically, we have the following inversion formula: for any F_X-continuity points x_1 and x_2,

    F_X(x_2) − F_X(x_1) = lim_{T→∞} (1/2π) ∫_{−T}^{T} ( (e^{−itx_1} − e^{−itx_2}) / (it) ) ψ_X(t) dt.

In particular, if ψ_X(t) is absolutely integrable, then X has density function

    p_X(x) = (1/2π) ∫_{−∞}^{∞} e^{−itx} ψ_X(t) dt.

On the other hand, if ψ_X(t) = Σ_{k=1}^∞ a_k e^{itx_k} with a_k > 0 and Σ_{k=1}^∞ a_k = 1, then X is a discrete random variable with

    P(X = x_k) = a_k,  k = 1, 2, ....
In addition to the basic properties above, we have the following Lévy continuity theorem, which plays an important role in the study of convergence in distribution.
Theorem 1.1. (i) X_n →^d X if and only if ψ_{X_n}(t) → ψ_X(t) for any t ∈ R.
(ii) If ψ_{X_n} converges pointwise to a function ψ, and ψ is continuous at t = 0, then ψ is the characteristic function of some random variable, say X, and so X_n →^d X.

With the preceding preparation, we can easily obtain the following CLT for sums of independent random variables.

Theorem 1.2. Let ξ, ξ_n, n ≥ 1 be a sequence of i.i.d. random variables with mean µ and variance σ². Let S_n = Σ_{k=1}^n ξ_k. Then

    (S_n − nµ) / (σ√n) →^d N(0, 1),  n → ∞.    (1.7)
This is often referred to as the Feller-Lévy CLT. Its proof is purely analytic. For the sake of comparison, we quickly review it.

Proof. Without loss of generality, we may and do assume µ = 0 and σ² = 1. By hypothesis,

    ψ_{S_n/√n}(t) = ( ψ_ξ(t/√n) )^n.

Also, using (1.6) yields

    ψ_ξ(t/√n) = 1 − t²/(2n) + O(1/n)

for each t. Hence we have

    ψ_{S_n/√n}(t) = ( 1 − t²/(2n) + O(1/n) )^n → e^{−t²/2}.

By Theorem 1.1 and (i) of Example 1.1, we conclude the desired (1.7). □
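The limit used at the end of the proof can be watched numerically; the sketch below drops the O(1/n) term and evaluates the product at one arbitrary t:

```python
import math

t = 1.5   # any fixed t (arbitrary choice)
target = math.exp(-t ** 2 / 2)

# (1 - t^2/(2n))^n for growing n, ignoring the O(1/n) correction
errors = []
for n in (10, 100, 1_000, 10_000):
    value = (1 - t ** 2 / (2 * n)) ** n
    errors.append(abs(value - target))

print(errors)  # decreasing toward 0
```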

Remark 1.1. Under the assumption that ξ_n, n ≥ 1 are i.i.d. random variables, the condition σ² < ∞ is also necessary for (1.7) to hold. See Chapter 10 of Ledoux and Talagrand (2011) for a proof.

In many applications it is too restrictive to require that the ξ_n, n ≥ 1 be i.i.d., and we therefore need to extend the Feller-Lévy CLT to non-i.i.d. cases. In fact, there has been a great deal of work in this direction. For the sake of reference, we review below some of the most commonly used CLTs, including the Lindeberg-Feller CLT for independent, not necessarily identically distributed random variables, the martingale CLT, and the CLT for ergodic stationary Markov chains.
Assume that ξ_n, n ≥ 1 is a sequence of independent random variables with Eξ_n = µ_n and Var(ξ_n) = σ_n² < ∞. Let S_n = Σ_{k=1}^n ξ_k and B_n = Σ_{k=1}^n σ_k². Assume further that B_n → ∞. Introduce the following two conditions.
Feller condition:

    (1/B_n) max_{1≤k≤n} σ_k² → 0.

Lindeberg condition: for any ε > 0,

    (1/B_n) Σ_{k=1}^n E (ξ_k − µ_k)² 1(|ξ_k − µ_k| ≥ ε√B_n) → 0.

Obviously, the Feller condition is a consequence of the Lindeberg condition. Moreover, we have the following Lindeberg-Feller CLT.

Theorem 1.3. Under the Feller condition, the ξ_n satisfy the CLT, that is,

    ( S_n − Σ_{k=1}^n µ_k ) / √B_n →^d N(0, 1),  n → ∞,

if and only if the Lindeberg condition holds.

It is easy to see that if there is a δ > 0 such that

    (1/B_n^{1+δ/2}) Σ_{k=1}^n E|ξ_k − µ_k|^{2+δ} → 0,    (1.8)

then the Lindeberg condition is satisfied. Condition (1.8) is sometimes called the Lyapunov condition.

Corollary 1.1. Assume that ξ_n, n ≥ 1 is a sequence of independent Bernoulli random variables with P(ξ_n = 1) = p_n and P(ξ_n = 0) = 1 − p_n. If Σ_{n=1}^∞ p_n(1 − p_n) = ∞, then

    Σ_{k=1}^n (ξ_k − p_k) / √( Σ_{k=1}^n p_k(1 − p_k) ) →^d N(0, 1),  n → ∞.    (1.9)
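Corollary 1.1 lends itself to simulation. In the sketch below the p_k vary over [0.1, 0.9] (an arbitrary illustrative choice, so that Σ p_k(1 − p_k) = ∞ along the sequence), and the empirical distribution of the normalized sum is compared with the standard normal at one point:

```python
import math
import random

random.seed(42)

# Independent, non-identically distributed Bernoulli variables
n = 2_000
p = [0.1 + 0.8 * 0.5 * (1.0 + math.sin(k)) for k in range(1, n + 1)]
b_n = sum(pk * (1.0 - pk) for pk in p)   # variance of S_n
mean = sum(p)                            # mean of S_n

reps = 1_500
vals = []
for _ in range(reps):
    s = sum(1 for pk in p if random.random() < pk)
    vals.append((s - mean) / math.sqrt(b_n))

# empirical CDF at 1.0 versus Phi(1) ≈ 0.8413
frac = sum(1 for v in vals if v <= 1.0) / reps
print(frac)
```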

The Lindeberg-Feller theorem has a wide range of applications. In particular, it explains why the normal law appears so universally in nature and human society. For instance, the error in a measurement may be caused by a large number of independent factors, each contributing only a very small part, with none playing a dominant role; the total error then obeys approximately a normal law.
Next we turn to the martingale CLT. First, recall some notions and basic properties of martingales. Assume that A_n, n ≥ 0 is a non-decreasing sequence of sub-σ-fields of A. Let X_n, n ≥ 0 be a sequence of random variables with X_n measurable with respect to A_n and E|X_n| < ∞. If for each n ≥ 1

    E( X_n | A_{n−1} ) = X_{n−1}  a.e.,

we call {X_n, A_n, n ≥ 0} a martingale. If A_n = σ{X_0, X_1, ..., X_n}, we simply say {X_n, n ≥ 0} is a martingale. If {X_n, A_n, n ≥ 0} is a martingale, setting d_n := X_n − X_{n−1}, then {d_n, A_n, n ≥ 0} forms a martingale difference sequence, namely

    E( d_n | A_{n−1} ) = 0  a.e.

Conversely, given a martingale difference sequence {d_n, A_n, n ≥ 0}, we can form a martingale {X_n, A_n, n ≥ 0} by X_n = Σ_{k=1}^n d_k.

Example 1.2. (i) Assume that ξ_n, n ≥ 1 is a sequence of independent random variables with mean zero. Let S_0 = 0, S_n = Σ_{k=1}^n ξ_k, A_0 = {∅, Ω} and A_n = σ{ξ_1, ..., ξ_n}, n ≥ 1. Then {S_n, A_n, n ≥ 0} is a martingale.
(ii) Assume that X is a random variable with finite expectation, and let A_n, n ≥ 0 be a non-decreasing sequence of sub-σ-fields of A. Let X_n = E(X | A_n) a.e. Then {X_n, A_n, n ≥ 0} is a martingale.

We now state a martingale CLT due to Brown (1971).

Theorem 1.4. Assume that {d_n, A_n, n ≥ 0} is a martingale difference sequence; set S_n = Σ_{k=1}^n d_k and B_n = Σ_{k=1}^n E d_k². If the following three conditions are satisfied:
(i) (1/B_n) max_{1≤k≤n} E( d_k² | A_{k−1} ) →^P 0, n → ∞;
(ii) (1/B_n) Σ_{k=1}^n E( d_k² | A_{k−1} ) →^P 1, n → ∞;
(iii) for any ε > 0, (1/B_n) Σ_{k=1}^n E( d_k² 1(|d_k| ≥ ε√B_n) | A_{k−1} ) →^P 0, n → ∞;
then

    S_n/√B_n →^d N(0, 1),  n → ∞.
An improved version was presented in 1974 under the following slightly weaker conditions:
(i′) there is a constant M > 0 such that (1/B_n) E max_{1≤k≤n} d_k² ≤ M for all n ≥ 1;
(ii′) (1/B_n) max_{1≤k≤n} d_k² →^P 0, n → ∞;
(iii′) (1/B_n) Σ_{k=1}^n d_k² →^P 1, n → ∞.

The interested reader is referred to Hall and Heyde (1980) for many other limit theorems related to martingales.
Let E be a set of at most countably many points. Assume that X_n, n ≥ 0 is a random sequence with state space E. If for any states i and j and any time n ≥ 0,

    P( X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, ..., X_0 = i_0 )
        = P( X_{n+1} = j | X_n = i )
        = P( X_1 = j | X_0 = i ),    (1.10)

then we call X_n, n ≥ 0 a time-homogeneous Markov chain. Condition (1.10), called the Markov property, says that given the present state, the future is independent of the past.
Denote by p_ij the transition probability

    p_ij = P( X_{n+1} = j | X_n = i ).

The matrix P := (p_ij) is called the transition matrix. It turns out that the distribution of X_0 and the transition matrix P completely determine the law of a Markov chain. Denote the n-step transition matrix by P^(n) = (p_ij^(n)), where p_ij^(n) = P(X_n = j | X_0 = i). Then a simple chain-rule manipulation shows

    P^(n+m) = P^(n) P^(m).
This is the well-known Chapman-Kolmogorov equation. Moreover, P^(n) = P^n.
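The Chapman-Kolmogorov equation is easy to verify for a small chain; the 3×3 transition matrix below is an arbitrary example:

```python
# An arbitrary transition matrix on E = {0, 1, 2}; each row sums to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, n):
    out = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        out = matmul(out, A)
    return out

# Chapman-Kolmogorov: P^(n+m) = P^(n) P^(m), here with n = 2, m = 3
lhs = matpow(P, 5)
rhs = matmul(matpow(P, 2), matpow(P, 3))
max_diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
print(max_diff)  # zero up to floating-point rounding
```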
(n)
State j is accessible from state i, denoted by i → j, if pij > 0 for
some n ≥ 1. State i and j communicate with each other, denoted by
i ↔ j, if i → j and j → i. A Markov chain is irreducible if any two states
communicate each other.
The period di of a state i is the greatest common divisor of all n that
(n)
satisfy pii > 0. State i is aperiodic if di = 1, and otherwise it is periodic.
Denote by τi the hitting time

τi = min{n ≥ 1; Xn = i}.

A state i is transient if P(τ_i = ∞ | X_0 = i) > 0, and recurrent if P(τ_i < ∞ | X_0 = i) = 1. A recurrent state i is positive recurrent if E(τ_i | X_0 = i) < ∞. An irreducible, aperiodic, positive recurrent Markov chain is called ergodic.
If a probability distribution π on E satisfies the equation

    π = πP,

then we call π a stationary distribution. If we choose π as the initial distribution, then X_n is a stationary Markov chain. If, in addition, for any i, j

    π_i p_ij = π_j p_ji,

then X_n is reversible; in particular,

    (X_0, X_1) =^d (X_1, X_0).

An irreducible aperiodic Markov chain is ergodic if and only if it has a stationary distribution.

Theorem 1.5. Assume that X_n, n ≥ 0 is an ergodic Markov chain with stationary distribution π, and that f : E ↦ R is such that Σ_{i∈E} f(i)π_i converges absolutely. Then

    lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(X_i) = Σ_{i∈E} f(i)π_i  a.e.
This is a type of law of large numbers for Markov chains. See Serfozo (2009)
for a proof.
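Theorem 1.5 can be illustrated with a two-state chain whose stationary distribution is known explicitly; the transition probabilities below are arbitrary choices:

```python
import random

random.seed(2024)

# Two-state chain: from 0 stay with prob. 0.9, from 1 stay with prob. 0.8.
# Solving pi = pi P gives pi = (2/3, 1/3).
stay = (0.9, 0.8)

n = 300_000
x = 0
visits_to_1 = 0
for _ in range(n):
    visits_to_1 += x        # f(X_i) with f(i) = i, so the sum counts visits to 1
    if random.random() >= stay[x]:
        x = 1 - x

time_average = visits_to_1 / n
print(time_average)  # near pi_1 = 1/3
```

The time average converges although the samples are dependent, which is exactly the content of the theorem.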
Let L_2^0 be the subspace of L_2(π) consisting of functions f : E ↦ R with E_π f := Σ_{i∈E} f(i)π_i = 0. We shall give a sufficient condition under which the linear sum S_n(f) := Σ_{i=0}^{n−1} f(X_i) satisfies the CLT. To this end, we introduce the transition operator

    T g(i) = Σ_{j∈E} g(j) p_ij.

Trivially, T g(i) = E( g(X_1) | X_0 = i ). Assume that there exists a function g such that

    f = g − T g.    (1.11)

Then it easily follows from the martingale CLT that

    S_n(f)/√n →^d N(0, σ_f²)

with limit variance

    σ_f² = ‖g‖_2² − ‖T g‖_2²,

where ‖·‖_2 denotes the L_2(π)-norm.
If f is such that the series

    Σ_{n=0}^∞ T^n f    (1.12)

converges in L_2(π), then a solution of equation (1.11) does exist; indeed, g = Σ_{n=0}^∞ T^n f solves the equation.
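For a finite state space, the series g = Σ_{n≥0} T^n f can be computed directly and the equation f = g − Tg checked; the two-state chain and the centered function f below are arbitrary illustrative choices:

```python
# T acts on a function (a vector) by (Tg)(i) = sum_j p_ij g(j).
P = [[0.9, 0.1], [0.2, 0.8]]     # arbitrary ergodic two-state chain
pi = [2.0 / 3.0, 1.0 / 3.0]      # its stationary distribution

def T(g):
    return [sum(P[i][j] * g[j] for j in range(2)) for i in range(2)]

# f in L_2^0: E_pi f = (2/3)(-1/3) + (1/3)(2/3) = 0
f = [-1.0 / 3.0, 2.0 / 3.0]

# g = sum_{n>=0} T^n f, truncated; T^n f decays geometrically here
g = [0.0, 0.0]
h = f[:]
for _ in range(200):
    g = [g[i] + h[i] for i in range(2)]
    h = T(h)

Tg = T(g)
residual = max(abs(f[i] - (g[i] - Tg[i])) for i in range(2))   # f = g - Tg

def norm2_sq(v):                 # squared L_2(pi)-norm
    return sum(pi[i] * v[i] ** 2 for i in range(2))

sigma_f_sq = norm2_sq(g) - norm2_sq(Tg)
print(residual, sigma_f_sq)     # residual ~ 0, variance positive
```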
It is, however, too restrictive to require that the series in (1.12) converge in L_2. An improved version is the following.

Theorem 1.6. Let X_n, n ≥ 0 be an ergodic Markov chain with stationary distribution π. Assume f ∈ L_2^0 satisfies the following two conditions:
(i) T^n f → 0 in L_2, n → ∞;
(ii) Σ_{n=0}^∞ ( ‖T^n f‖_2² − ‖T^{n+1} f‖_2² )^{1/2} < ∞.
Then we have

    S_n(f)/√n →^d N(0, σ_f²)

with limit variance

    σ_f² = lim_{n→∞} ( ‖ Σ_{k=0}^{n} T^k f ‖_2² − ‖ Σ_{k=1}^{n+1} T^k f ‖_2² ).

In the preceding paragraphs, we have seen that the characteristic func-
tion is a powerful tool in proving convergence in distribution and iden-
tifying the limit distribution. It is particularly successful in the study of
partial sums of independent or asymptotically independent random vari-
ables. However, it is sometimes not an easy task to compute the charac-
teristic function of a random variable of interest. In the rest of this section
and the next sections we will briefly introduce other methods and techniques,
among which are the moment method, the replacement trick, the Stein
method and the Stieltjes transform method.
The moment method is closely related to an interesting old problem. Is
the distribution of X uniquely determined by its moments? If not, what
extra conditions do we require? Suppose X has finite moments of all or-
ders. Then according to (1.5), \psi_X^{(k)}(0) = i^k m_k, where m_k = EX^k, k ≥ 0.
However, it does not necessarily follow that

    \psi_X(t) = \sum_{k=0}^{\infty} \frac{i^k m_k}{k!} t^k.            (1.13)

Example 1.3. Consider two random variables X and Y, whose probability
density functions are as follows:

    p_X(x) = \begin{cases} \frac{1}{\sqrt{2\pi}\,x} e^{-(\log x)^2/2}, & x > 0, \\ 0, & x \le 0, \end{cases}

and

    p_Y(x) = \begin{cases} \frac{1}{\sqrt{2\pi}\,x} e^{-(\log x)^2/2}\big(1 + \sin(2\pi \log x)\big), & x > 0, \\ 0, & x \le 0. \end{cases}

Then it is easy to check that X and Y have all moments finite and EX^k =
EY^k for any k ≥ 1.
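The claimed moment equality can be checked numerically. After the substitution x = e^t the kth moment of X becomes ∫ e^{kt}φ(t) dt, while that of Y picks up the extra factor 1 + sin(2πt); a midpoint-rule sketch (the integration range and step count are ad hoc choices of ours) shows the sin term contributes nothing:

```python
import math

def moment(k, perturbed):
    # E X^k = int_R e^{k t} phi(t) [1 + sin(2 pi t)] dt after x = e^t,
    # where phi is the standard normal density
    lo, hi, steps = -12.0, 16.0, 100_000
    h = (hi - lo) / steps
    total = 0.0
    for j in range(steps):
        t = lo + (j + 0.5) * h               # midpoint rule
        dens = math.exp(k * t - t * t / 2) / math.sqrt(2 * math.pi)
        if perturbed:
            dens *= 1.0 + math.sin(2 * math.pi * t)
        total += dens * h
    return total

# both variables have kth moment exp(k^2/2); the perturbation integrates to 0
diffs = [abs(moment(k, True) - moment(k, False)) for k in (1, 2, 3)]
```

The moments of the lognormal grow like e^{k²/2}, too fast for the Carleman condition below, which is why distinct distributions can share them all.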

If the following Carleman condition

    \sum_{k=1}^{\infty} m_{2k}^{-1/2k} = \infty                        (1.14)

is satisfied, then (1.13) holds, and so the distribution of X is uniquely
determined by its moments. A slightly stronger condition for (1.14) to hold
is

    \liminf_{k\to\infty} \frac{1}{k}\, m_{2k}^{1/2k} < \infty.

Example 1.4. (i) Assume X ∼ N(0, σ²); then for k ≥ 1

    m_{2k}(X) = \sigma^{2k}(2k-1)!!, \quad m_{2k+1}(X) = 0.

(ii) Assume X is a Poisson random variable with parameter λ; then for
k ≥ 1

    EX(X-1)\cdots(X-k+1) = \lambda^k.

(iii) Assume X is a random variable with density function given by

    \rho_{sc}(x) = \begin{cases} \frac{1}{2\pi}\sqrt{4-x^2}, & |x| \le 2, \\ 0, & \text{otherwise}, \end{cases}         (1.15)

then for k ≥ 1

    m_{2k}(X) = \frac{1}{k+1}\binom{2k}{k}, \quad m_{2k+1}(X) = 0.

(iv) Assume X is a random variable with density function given by

    \rho_{MP}(x) = \begin{cases} \frac{1}{2\pi}\sqrt{\frac{4-x}{x}}, & 0 < x \le 4, \\ 0, & \text{otherwise}, \end{cases}       (1.16)

then for k ≥ 1

    m_k(X) = \frac{1}{k+1}\binom{2k}{k}.

Fig. 1.2 Wigner semicircle law



Fig. 1.3 Marchenko-Pastur law

We remark that ρsc and ρM P (see Figures 1.2 and 1.3) are often called
Wigner semicircle law and Marchenko-Pastur law in random matrix litera-
ture. They are respectively the expected spectrum distributions of Wigner
random matrices and sample covariance matrices in the large-dimensional limit.
It is now easy to verify that these moments satisfy the Carleman condition
(1.14). Therefore normal distribution, Poisson distribution, Wigner semi-
circle law and Marchenko-Pastur law are all uniquely determined by their
moments.
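The Catalan-number moments of the semicircle law are easy to confirm numerically. The sketch below integrates x^p ρ_sc(x) over [−2, 2] by the midpoint rule (the step count is an arbitrary choice):

```python
import math

def sc_moment(p, steps=200_000):
    # numerically integrate x^p * rho_sc(x) over [-2, 2] by the midpoint rule
    h = 4.0 / steps
    total = 0.0
    for j in range(steps):
        x = -2.0 + (j + 0.5) * h
        total += x**p * math.sqrt(4.0 - x * x) / (2.0 * math.pi) * h
    return total

def catalan(k):
    # Catalan number C_k = binom(2k, k) / (k + 1)
    return math.comb(2 * k, k) // (k + 1)
```

Even moments match the Catalan numbers 1, 2, 5, ..., while odd moments vanish by symmetry; the total mass integrates to 1.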

Theorem 1.7. Let Xn , n ≥ 1 be a sequence of random variables with


all moments finite. Let X be a random variable whose law is uniquely
determined by its moments. If for each k ≥ 1
mk (Xn ) → mk (X), n→∞
d
then Xn −→ X.

When applying Theorem 1.7 in practice, it is often easier to work with


cumulants rather than moments. Let X be a random variable with all
moments finite. Expand log ψ_X(t) at t = 0 as follows:

    \log \psi_X(t) = \sum_{k=1}^{\infty} \frac{i^k \tau_k}{k!} t^k.

We call τ_k the kth cumulant of X.

Example 1.5. (i) If X ∼ N (µ, σ 2 ), then


τ1 = µ, τ2 = σ 2 , τk = 0, ∀k ≥ 3.

(ii) If X is a Poisson random variable with parameter λ, then


τk = λ, ∀k ≥ 1.

The cumulants possess the following nice properties. Fix a constant a.
(i) shift equivariance:

    \tau_1(X + a) = \tau_1(X) + a;

(ii) shift invariance:

    \tau_k(X + a) = \tau_k(X), \quad \forall k \ge 2;

(iii) homogeneity:

    \tau_k(aX) = a^k \tau_k(X), \quad \forall k \ge 2;

(iv) additivity: if X and Y are independent random variables, then

    \tau_k(X + Y) = \tau_k(X) + \tau_k(Y), \quad \forall k \ge 2;

(v) relations between cumulants and moments:

    \tau_k = m_k - \sum_{l=1}^{k-1} \binom{k-1}{l-1} \tau_l m_{k-l}
           = \sum_{\alpha_1,\cdots,\alpha_k} (-1)^{\alpha_1+\cdots+\alpha_k-1} (\alpha_1+\cdots+\alpha_k-1)!\, k! \prod_{l=1}^{k} \frac{m_l^{\alpha_l}}{\alpha_l!\,(l!)^{\alpha_l}},     (1.17)

where the summation is extended over all nonnegative integer solutions of
the equation α_1 + 2α_2 + \cdots + kα_k = k, and

    m_k = \sum_{\lambda} \prod_{B\in\lambda} \tau_{|B|},

where λ runs through the set of all partitions of {1, 2, \cdots, k}, B ∈ λ means
one of the blocks into which the set is partitioned, and |B| is the size of the
block.
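The recursion in (v) is straightforward to implement. The sketch below recovers the cumulants of N(0,1) and of a Poisson variable with λ = 2 from their first few moments (the moment values are standard and hard-coded here):

```python
import math

def cumulants_from_moments(m, K):
    # tau_k = m_k - sum_{l=1}^{k-1} C(k-1, l-1) tau_l m_{k-l}, where m[k] = E X^k
    tau = [0.0] * (K + 1)
    for k in range(1, K + 1):
        tau[k] = m[k] - sum(math.comb(k - 1, l - 1) * tau[l] * m[k - l]
                            for l in range(1, k))
    return tau[1:]

# N(0,1): m_{2j} = (2j-1)!!, odd moments vanish; cumulants are (0, 1, 0, 0, ...)
tau_normal = cumulants_from_moments([1, 0, 1, 0, 3, 0, 15], 6)

# Poisson(2): moments 2, 6, 22, 94; every cumulant equals lambda = 2
tau_poisson = cumulants_from_moments([1, 2, 6, 22, 94], 4)
```

Both outputs agree with Example 1.5: the normal has only the first two cumulants nonzero, and the Poisson has all cumulants equal to λ.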
Since knowledge of the moments of a random variable is interchangeable
with knowledge of its cumulants, Theorem 1.7 can be reformulated as

Theorem 1.8. Let Xn , n ≥ 1 be a sequence of random variables with


all moments finite. Let X be a random variable whose law is uniquely
determined by its moments. If for each k ≥ 1
τk (Xn ) → τk (X), n→∞
d
then Xn −→ X.

This theorem is of particular value when proving asymptotic normality.


Namely, if

τ1 (Xn ) → 0, τ2 (Xn ) → 1

and for k ≥ 3

τk (Xn ) → 0,
d
then Xn −→ N (0, 1).
To conclude this section, we will make a brief review about Lindeberg
replacement strategy by reproving the Feller-Lévy CLT. To this end, we
need an equivalent version of convergence in distribution, see Section 1.4
below.

Lemma 1.1. Let X, Xn , n ≥ 1 be a sequence of random variables. Then


d
Xn −→ X if and only if for each bounded thrice continuously differentiable
function f with kf (3) k∞ < ∞,

Ef (Xn ) −→ Ef (X), n → ∞.

Theorem 1.9. Let ξ_n, n ≥ 1 be a sequence of i.i.d. random variables with
mean zero, variance 1 and E|ξ_n|^3 < ∞. Let S_n = \sum_{k=1}^{n} \xi_k. Then it
follows that

    \frac{S_n}{\sqrt n} \xrightarrow{d} N(0, 1), \quad n \to \infty.

Proof. Let η_n, n ≥ 1 be a sequence of i.i.d. normal random variables
with mean zero and variance 1, and let T_n = \sum_{k=1}^{n} \eta_k. Trivially,

    \frac{T_n}{\sqrt n} \sim N(0, 1).

According to Lemma 1.1, it suffices to show that for any bounded thrice
continuously differentiable function f with ‖f^{(3)}‖_∞ < ∞,

    Ef\Big(\frac{S_n}{\sqrt n}\Big) - Ef\Big(\frac{T_n}{\sqrt n}\Big) \longrightarrow 0, \quad n \to \infty.

To do this, set

    R_{n,k} = \sum_{l=1}^{k-1} \xi_l + \sum_{l=k+1}^{n} \eta_l, \quad 1 \le k \le n.

Then

    Ef\Big(\frac{S_n}{\sqrt n}\Big) - Ef\Big(\frac{T_n}{\sqrt n}\Big)
      = \sum_{k=1}^{n}\Big[ Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\xi_k)\Big) - Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\eta_k)\Big) \Big]
      = \sum_{k=1}^{n}\Big[ Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\xi_k)\Big) - Ef\Big(\frac{1}{\sqrt n}R_{n,k}\Big) \Big]
        - \sum_{k=1}^{n}\Big[ Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\eta_k)\Big) - Ef\Big(\frac{1}{\sqrt n}R_{n,k}\Big) \Big].      (1.18)

Applying the Taylor expansion of f at R_{n,k}/\sqrt n, we have by hypothesis

    Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\xi_k)\Big) - Ef\Big(\frac{1}{\sqrt n}R_{n,k}\Big)
      = Ef'\Big(\frac{1}{\sqrt n}R_{n,k}\Big)\frac{\xi_k}{\sqrt n} + \frac{1}{2}Ef''\Big(\frac{1}{\sqrt n}R_{n,k}\Big)\frac{\xi_k^2}{n} + \frac{1}{6}Ef^{(3)}(R^{*})\frac{\xi_k^3}{n^{3/2}}
      = \frac{1}{2n}Ef''\Big(\frac{1}{\sqrt n}R_{n,k}\Big) + \frac{1}{6n^{3/2}}Ef^{(3)}(R^{*})\xi_k^3,      (1.19)

where R^{*} is between (R_{n,k}+\xi_k)/\sqrt n and R_{n,k}/\sqrt n.
Similarly, we also have

    Ef\Big(\frac{1}{\sqrt n}(R_{n,k}+\eta_k)\Big) - Ef\Big(\frac{1}{\sqrt n}R_{n,k}\Big)
      = \frac{1}{2n}Ef''\Big(\frac{1}{\sqrt n}R_{n,k}\Big) + \frac{1}{6n^{3/2}}Ef^{(3)}(R^{**})\eta_k^3,      (1.20)

where R^{**} is between (R_{n,k}+\eta_k)/\sqrt n and R_{n,k}/\sqrt n.
Putting (1.19) and (1.20) back into (1.18) yields

    Ef\Big(\frac{S_n}{\sqrt n}\Big) - Ef\Big(\frac{T_n}{\sqrt n}\Big)
      = \sum_{k=1}^{n}\Big[ \frac{1}{6n^{3/2}}Ef^{(3)}(R^{*})\xi_k^3 - \frac{1}{6n^{3/2}}Ef^{(3)}(R^{**})\eta_k^3 \Big].

Noting ‖f^{(3)}‖_∞ < ∞, E|ξ_k|^3 < ∞ and E|η_k|^3 < ∞, we obtain

    Ef\Big(\frac{S_n}{\sqrt n}\Big) - Ef\Big(\frac{T_n}{\sqrt n}\Big) = O\big(n^{-1/2}\big).

The assertion is now concluded. □
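The O(n^{-1/2}) rate can be observed numerically. For ±1 signs (a convenient special case of ours, not the general hypothesis of the theorem) the expectation Ef(S_n/√n) is an exact binomial sum, and its distance to Ef(Z) shrinks as n grows; f = tanh(· − 0.3) below is just one admissible smooth bounded test function:

```python
import math

def E_f_scaled_sum(f, n):
    # exact E f(S_n / sqrt(n)) for S_n a sum of n independent +-1 signs
    tot = 2 ** n
    return sum(math.comb(n, j) / tot * f((2 * j - n) / math.sqrt(n))
               for j in range(n + 1))

def E_f_gauss(f, lo=-8.0, hi=8.0, steps=40_000):
    # E f(Z) for Z ~ N(0,1), midpoint rule
    h = (hi - lo) / steps
    total = 0.0
    for j in range(steps):
        x = lo + (j + 0.5) * h
        total += f(x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * h
    return total

f = lambda x: math.tanh(x - 0.3)     # bounded, with bounded derivatives
gauss = E_f_gauss(f)
d25 = abs(E_f_scaled_sum(f, 25) - gauss)
d400 = abs(E_f_scaled_sum(f, 400) - gauss)
```

The discrepancy at n = 400 is markedly smaller than at n = 25, in line with the proof's bound.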

Proof of Theorem 1.2. To apply Theorem 1.9, we need to use the trun-
cation technique. For any constant a > 0, define

    \bar\xi_k(a) = \xi_k 1_{(|\xi_k| \le a)}, \quad 1 \le k \le n.

Obviously, \bar\xi_k(a), 1 ≤ k ≤ n are i.i.d. bounded random variables. Let
μ_k(a) = E\bar\xi_k(a) and \bar\sigma_k^2(a) = Var(\bar\xi_k(a)) for 1 ≤ k ≤ n. So according to
Theorem 1.9, it follows that

    \frac{\sum_{k=1}^{n}(\bar\xi_k(a) - \mu_k(a))}{\bar\sigma_1(a)\sqrt n} \xrightarrow{d} N(0, 1), \quad n \to \infty.

Since a > 0 is arbitrary, by a selection principle there is a sequence of
constants a_n > 0 such that a_n → ∞ and

    \frac{\sum_{k=1}^{n}(\bar\xi_k(a_n) - \mu_k(a_n))}{\bar\sigma_1(a_n)\sqrt n} \xrightarrow{d} N(0, 1).      (1.21)

In addition, it is easy to see

    \mu_k(a_n) \to 0, \quad \bar\sigma_1(a_n) \to 1.

Hence by (1.21),

    \frac{\sum_{k=1}^{n}(\bar\xi_k(a_n) - \mu_k(a_n))}{\sqrt n} \xrightarrow{d} N(0, 1).      (1.22)

Finally, it follows from the Chebyshev inequality that

    \frac{1}{\sqrt n}\Big( S_n - \sum_{k=1}^{n}(\bar\xi_k(a_n) - \mu_k(a_n)) \Big) \xrightarrow{P} 0.      (1.23)

Combining (1.22) and (1.23), we conclude the assertion (1.7). □

Remark 1.2. The Lindeberg replacement strategy makes clear the fact
that the CLT is a local phenomenon. By this we mean that the structure
of the CLT does not depend on the behavior of any fixed number of the
increments. Only recently was it successfully used to establish the Four
Moment Comparison theorem for eigenvalues of random matrices, which
in turn solves certain long-standing conjectures related to universality of
eigenvalues of random matrices. See Tao and Vu (2010).

1.2 The Stein method

The Stein method was initially conceived by Stein (1970, 1986) to provide
error bounds for the normal approximation of the distribution of a sum
of dependent random variables with a certain structure. However,
the ideas presented are sufficiently abstract and powerful to work
well beyond that purpose, applying to the approximation of more general
random variables by distributions other than normal. Besides, the Stein
method is a highly original technique, useful for quantifying the error in
the approximation of one distribution by another in a variety of metrics.
This subsection serves as a basic introduction to the fundamentals of the
Stein method. The interested reader is referred to nice books and surveys,
say, Chen, Goldstein and Shao (2010), Ross (2011).
A basic starting point is the following Stein equation.

Lemma 1.2. Assume that ξ is a random variable with mean zero and vari-
ance σ 2 . Then ξ is normal if and only if for every bounded continuously
differentiable function f (kf k∞ , kf 0 k∞ < ∞),
Eξf (ξ) = σ 2 Ef 0 (ξ). (1.24)

Proof. Without loss of generality, assume σ² = 1. If ξ ∼ N(0, 1), then
by the integration by parts formula, we easily obtain (1.24).
Conversely, assume (1.24) holds. Fix z ∈ R and consider the following
first order ordinary differential equation:

    f'(x) - x f(x) = 1_{(-\infty,z]}(x) - \Phi(z),      (1.25)

where Φ(·) denotes the standard normal distribution function. Then a
simple argument shows that there exists a unique bounded solution:

    f_z(x) = e^{x^2/2} \int_x^{\infty} e^{-t^2/2} \big( \Phi(z) - 1_{(-\infty,z]}(t) \big)\, dt.

Note

    1 - \Phi(z) \sim \frac{1}{\sqrt{2\pi}\,z} e^{-z^2/2}, \quad z \to \infty.

It is not hard to see ‖f_z‖_∞ ≤ \sqrt{\pi/2} and ‖f_z'‖_∞ ≤ 2. By hypothesis, it
follows that

    P(\xi \le z) - \Phi(z) = Ef_z'(\xi) - E\xi f_z(\xi) = 0.

We now conclude the proof. □
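The characterizing identity (1.24) is easy to check numerically for a specific smooth f; here f = tanh (an arbitrary bounded test function of ours), with both sides computed by a midpoint rule:

```python
import math

def gauss_expect(g, lo=-8.0, hi=8.0, steps=40_000):
    # E g(xi) for xi ~ N(0,1), midpoint rule over a truncated range
    h = (hi - lo) / steps
    total = 0.0
    for j in range(steps):
        x = lo + (j + 0.5) * h
        total += g(x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * h
    return total

f = math.tanh
fprime = lambda x: 1.0 - math.tanh(x) ** 2      # derivative of tanh

lhs = gauss_expect(lambda x: x * f(x))          # E xi f(xi)
rhs = gauss_expect(fprime)                      # E f'(xi)
```

The two expectations agree to quadrature precision, as the Stein equation demands for a standard normal ξ.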

As an immediate corollary to the proof, we can derive the following Stein


continuity theorem.

Theorem 1.10. Assume that ξn , n ≥ 1 is a sequence of random variables


with mean zero and variance 1. If for every bounded continuously differen-
tiable function f ,

Eξn f (ξn ) − Ef 0 (ξn ) → 0, n→∞


d
then ξn −→ N (0, 1).

Remark 1.3. The above Stein equation can be extended to a non-normal
random variable. Assume that ξ has a finite (q+2)th moment and f is (q+1)
times bounded continuously differentiable; then

    E\xi f(\xi) = \sum_{k=0}^{q} \frac{\tau_{k+1}}{k!} Ef^{(k)}(\xi) + \varepsilon_q,

where τ_k is the kth cumulant of ξ, and the remainder term admits the bound

    |\varepsilon_q| \le c_q \|f^{(q+1)}\|_\infty E|\xi|^{q+2}, \quad c_q \le \frac{1 + (3+2q)^{q+2}}{(q+1)!}.
As the reader may notice, if we replace the indicator function 1_{(-∞,z]} by
a smooth function in the preceding differential equation (1.25), then its
solution will have a nicer regularity property. Let H be the family of 1-
Lipschitz functions, namely

    H = \{ h : \mathbb{R} \mapsto \mathbb{R},\; |h(x) - h(y)| \le |x - y| \}.

Consider the following differential equation:

    f'(x) - x f(x) = h(x) - Eh(\xi),      (1.26)

where ξ ∼ N(0, 1).

Lemma 1.3. Assume h ∈ H. There exists a unique bounded solution of (1.26):

    f_h(x) = e^{x^2/2} \int_x^{\infty} e^{-t^2/2} \big( Eh(\xi) - h(t) \big)\, dt.

Moreover, f_h satisfies the following properties:

    \|f_h\|_\infty \le 2, \quad \|f_h'\|_\infty \le \sqrt{\frac{2}{\pi}}, \quad \|f_h''\|_\infty \le 2.

We omit the proof, which can be found in Chen, Goldstein and Shao (2010).
It is easily seen that for any random variable W of interest,

    Eh(W) - Eh(\xi) = Ef_h'(W) - EW f_h(W).

Given two random variables X and Y, the Wasserstein distance is defined
by

    d_W(X, Y) = \sup_{h \in H} |Eh(X) - Eh(Y)|.

Note the Wasserstein distance is widely used in describing distributional
approximations. In particular,

    \sup_{x \in \mathbb{R}} |P(X \le x) - P(Y \le x)| \le \sqrt{\frac{\pi}{2}\, d_W(X, Y)}.

Let G be the family of bounded continuously differentiable functions with
bounded first and second derivatives, namely

    G = \Big\{ f : \mathbb{R} \mapsto \mathbb{R},\; \|f\|_\infty \le 2,\; \|f'\|_\infty \le \sqrt{\frac{2}{\pi}},\; \|f''\|_\infty \le 2 \Big\}.
Taking Lemma 1.3 into account, we immediately get

Theorem 1.11. Let ξ ∼ N (0, 1), W a random variable. Then we have


dW (W, ξ) ≤ sup |EW f (W ) − Ef 0 (W )|.
f ∈G

To illustrate the use of the preceding Stein method, let us take a look at
the normal approximation of sums of independent random variables below.

Example 1.6. Suppose that ξ_n, n ≥ 1 is a sequence of independent random
variables with mean zero, variance 1 and E|ξ_n|^3 < ∞. Let S_n = \sum_{i=1}^{n} \xi_i;
then

    d_W\Big(\frac{S_n}{\sqrt n}, \xi\Big) \le \frac{4}{n^{3/2}} \sum_{i=1}^{n} E|\xi_i|^3.      (1.27)

Proof. Writing W_n = S_n/\sqrt n, we need only control the supremum
of |EW_n f(W_n) - Ef'(W_n)| over G. Set W_{n,i} = (S_n - \xi_i)/\sqrt n. Then by
independence and noting E\xi_i = 0,

    EW_n f(W_n) = \frac{1}{\sqrt n}\sum_{i=1}^{n} E\xi_i f(W_n)
                = \frac{1}{\sqrt n}\sum_{i=1}^{n} E\xi_i\big( f(W_n) - f(W_{n,i}) \big).      (1.28)

Using the Taylor expansion of f at W_{n,i}, we have

    f(W_n) - f(W_{n,i}) = f'(W_{n,i})\frac{\xi_i}{\sqrt n} + \frac{1}{2} f''(W_{n,i}^{*})\frac{\xi_i^2}{n},      (1.29)

where W_{n,i}^{*} is between W_n and W_{n,i}.
Inserting (1.29) into (1.28) and noting E\xi_i^2 = 1 yields

    EW_n f(W_n) = \frac{1}{n}\sum_{i=1}^{n} Ef'(W_{n,i}) + \frac{1}{2n^{3/2}}\sum_{i=1}^{n} Ef''(W_{n,i}^{*})\xi_i^3.

Subtracting Ef'(W_n) on both sides gives

    EW_n f(W_n) - Ef'(W_n) = \frac{1}{n}\sum_{i=1}^{n} E\big( f'(W_{n,i}) - f'(W_n) \big)
                             + \frac{1}{2n^{3/2}}\sum_{i=1}^{n} Ef''(W_{n,i}^{*})\xi_i^3.      (1.30)

It follows from the mean value theorem that

    f'(W_{n,i}) - f'(W_n) = -f''(W_{n,i}^{\ddagger})\frac{\xi_i}{\sqrt n},      (1.31)

where W_{n,i}^{\ddagger} is between W_n and W_{n,i}.
Thus combining (1.30) and (1.31), and noting ‖f''‖_∞ ≤ 2 and E|ξ_i| ≤
1 ≤ E|ξ_i|^3, we have (1.27) as desired. □
Recall the ordered pair (W, W 0 ) of random variables is exchangeable if
d
(W, W 0 ) = (W 0 , W ).
d
Trivially, if (W, W 0 ) is an exchangeable pair, then W = W 0 . Also, assuming
g(x, y) is antisymmetric, namely g(x, y) = −g(y, x), then
Eg(W, W 0 ) = 0
if the expectation exists.

Theorem 1.12. Assume that (W, W') is an exchangeable pair, and assume
there exists a constant 0 < τ ≤ 1 such that

    E(W' \,|\, W) = (1 - \tau)W.

If EW² = 1, then

    d_W(W, \xi) \le \sqrt{\frac{2}{\pi}} \sqrt{ E\Big( 1 - \frac{E(|W'-W|^2 \,|\, W)}{2\tau} \Big)^2 } + \frac{1}{3\tau} E|W' - W|^3.

Proof. Given f ∈ G, define F(x) = \int_0^x f(t)\, dt. Then it obviously follows
that

    EF(W') - EF(W) = 0.      (1.32)

On the other hand, using the Taylor expansion of F at W, we obtain

    F(W') - F(W) = F'(W)(W' - W) + \frac{1}{2} F''(W)(W' - W)^2 + \frac{1}{6} F'''(W^{*})(W' - W)^3
                 = f(W)(W' - W) + \frac{1}{2} f'(W)(W' - W)^2 + \frac{1}{6} f''(W^{*})(W' - W)^3,

where W^{*} is between W and W'.
This together with (1.32) in turn implies

    Ef(W)(W' - W) + \frac{1}{2} Ef'(W)(W' - W)^2 + \frac{1}{6} Ef''(W^{*})(W' - W)^3 = 0.

Note by hypothesis E(W' | W) = (1 - τ)W,

    Ef(W)(W' - W) = E\big( E( f(W)(W' - W) \,|\, W ) \big)
                  = E\big( f(W)\, E(W' - W \,|\, W) \big)
                  = -\tau\, EW f(W).

Hence we have

    EW f(W) = \frac{1}{2\tau} Ef'(W)(W' - W)^2 + \frac{1}{6\tau} Ef''(W^{*})(W' - W)^3.

Subtracting Ef'(W) yields

    EW f(W) - Ef'(W) = -Ef'(W)\Big( 1 - \frac{1}{2\tau} E\big( (W' - W)^2 \,|\, W \big) \Big)
                       + \frac{1}{6\tau} Ef''(W^{*})(W' - W)^3.

Thus it follows from the Cauchy-Schwarz inequality and the facts ‖f'‖_∞ ≤
\sqrt{2/\pi} and ‖f''‖_∞ ≤ 2 that

    |EW f(W) - Ef'(W)| \le \sqrt{\frac{2}{\pi}} \sqrt{ E\Big( 1 - \frac{1}{2\tau} E\big( (W' - W)^2 \,|\, W \big) \Big)^2 } + \frac{1}{3\tau} E|W' - W|^3.

The proof is complete. □

As an application, we shall revisit the normal approximation of sums
of independent random variables, using the notation of Example 1.6. A key in-
gredient is to construct W_n' in such a way that (W_n, W_n') is an exchangeable
pair and E(W_n' | W_n) = (1 - τ)W_n for some 0 < τ ≤ 1.
Let {ξ'_n, n ≥ 1} be an independent copy of {ξ_n, n ≥ 1}. Let I be a
uniform random variable taking values 1, 2, \cdots, n, independent of all other
random variables. Define

    W_n' = \frac{1}{\sqrt n}\big( S_n - \xi_I + \xi_I' \big).

Let A_n = σ{ξ_1, \cdots, ξ_n}. Trivially, A_n is a sequence of increasing σ-fields
and W_n ∈ A_n. Some simple manipulation shows

    E(W_n' - W_n \,|\, A_n) = \frac{1}{\sqrt n} E(-\xi_I + \xi_I' \,|\, A_n) = -\frac{1}{n} W_n,
which implies τ = 1/n. In addition,

    E\big( (W_n' - W_n)^2 \,\big|\, A_n \big) = \frac{1}{n} E\big( (\xi_I - \xi_I')^2 \,\big|\, A_n \big)
                                             = \frac{1}{n} + \frac{1}{n^2}\sum_{i=1}^{n} \xi_i^2

and

    E\big( (W_n' - W_n)^2 \,\big|\, W_n \big) = \frac{1}{n} + \frac{1}{n^2}\sum_{i=1}^{n} E(\xi_i^2 \,|\, W_n).

Hence we have

    E\Big( 1 - \frac{1}{2\tau} E\big( (W_n' - W_n)^2 \,\big|\, W_n \big) \Big)^2
      = E\Big( \frac{1}{2} - \frac{1}{2n}\sum_{i=1}^{n} E(\xi_i^2 \,|\, W_n) \Big)^2
      = \frac{1}{4n^2} E\Big( \sum_{i=1}^{n} E\big( (\xi_i^2 - E\xi_i^2) \,\big|\, W_n \big) \Big)^2
      \le \frac{1}{4n^2} E\Big( \sum_{i=1}^{n} (\xi_i^2 - E\xi_i^2) \Big)^2
      = \frac{1}{4n^2}\sum_{i=1}^{n} E(\xi_i^2 - E\xi_i^2)^2 \le \frac{1}{4n^2}\sum_{i=1}^{n} E\xi_i^4.

Finally, note

    E|W_n' - W_n|^3 = \frac{1}{n^{3/2}} E|\xi_I - \xi_I'|^3 \le \frac{8}{n^{3/2}} \cdot \frac{1}{n}\sum_{i=1}^{n} E|\xi_i|^3.

Applying Theorem 1.12, we immediately obtain

    d_W\Big( \frac{S_n}{\sqrt n}, \xi \Big) \le \sqrt{\frac{1}{2\pi}}\, \frac{1}{n}\sqrt{ \sum_{i=1}^{n} E\xi_i^4 } + \frac{8}{3n^{3/2}}\sum_{i=1}^{n} E|\xi_i|^3.

In particular, when ξ_n, n ≥ 1 are i.i.d. random variables with Eξ_n^4 < ∞,
then

    d_W\Big( \frac{S_n}{\sqrt n}, \xi \Big) \le \frac{A}{\sqrt n}

for some numerical constant A.

1.3 The Stieltjes transform method

Stieltjes transforms, also called Cauchy transforms, of functions of bounded
variation are another important tool in the study of convergence of probabil-
ity measures. They play a particularly significant role in the asymp-
totic spectrum theory of random matrices. Given a probability measure µ
on the real line R, its Stieltjes transform is defined by

    s_\mu(z) = \int_{\mathbb{R}} \frac{1}{x - z}\, d\mu(x)

for any z outside the support of µ. In particular, it is well defined for all
complex numbers z in C \ R. Some elementary properties of s_µ(z) are
listed below. Set z = a + iη.
(i)

    s_\mu(\bar z) = \overline{s_\mu(z)}.

So we may and do focus on the upper half plane, namely η > 0 in the
sequel.
(ii)

    |s_\mu(z)| \le \frac{1}{|\eta|}.

(iii)

    \mathrm{Im}(s_\mu(z)) = \mathrm{Im}(z) \int_{\mathbb{R}} \frac{1}{|x - z|^2}\, d\mu(x).

So Im(s_µ(z)) has the same sign as Im(z).
(iv)

    s_\mu(z) = -\frac{1}{z}(1 + o(1)), \quad z \to \infty.

(v) If m_k(µ) := \int_{\mathbb{R}} x^k\, d\mu(x) exists and is finite for every k ≥ 0, then it
follows that

    s_\mu(z) = -\frac{1}{z} \sum_{k=0}^{\infty} \frac{m_k(\mu)}{z^k}.

So s_µ(z) is closely related to the moment generating function of µ.
So sµ (z) is closely related to the moment generating function of µ.
(vi) sµ (z) is holomorphic outside the support of µ.
Example 1.7. (i) Let µ := µ_sc be the probability measure on R with density
function ρ_sc given by (1.15). Then its Stieltjes transform is

    s_{sc}(z) = -\frac{z}{2} + \frac{\sqrt{z^2 - 4}}{2}.      (1.33)

(ii) Let µ := µ_MP be the probability measure on R with density function
given by (1.16). Then its Stieltjes transform is

    s_{MP}(z) = -\frac{1}{2} + \frac{\sqrt{z(z - 4)}}{2z}.
Theorem 1.13. Let µ be a probability measure with Stieltjes transform
s_µ(z). Then for any µ-continuity points a, b (a < b),

    \mu(a, b) = \lim_{\eta\to 0} \frac{1}{\pi} \int_a^b \frac{s_\mu(\lambda + i\eta) - s_\mu(\lambda - i\eta)}{2i}\, d\lambda
              = \lim_{\eta\to 0} \frac{1}{\pi} \int_a^b \mathrm{Im}(s_\mu(\lambda + i\eta))\, d\lambda.

Proof. Let X be a random variable whose law is µ. Let Y be a standard
Cauchy random variable, namely Y has probability density function

    p_Y(y) = \frac{1}{\pi(1 + y^2)}, \quad y \in \mathbb{R}.

Then the random variable Z_η := X + ηY has probability density function

    p_\eta(\lambda) = \int_{\mathbb{R}} \frac{1}{\eta}\, p_Y\Big( \frac{\lambda - y}{\eta} \Big)\, d\mu(y)
                = \frac{1}{\pi} \int_{\mathbb{R}} \frac{\eta}{(\lambda - y)^2 + \eta^2}\, d\mu(y)
                = \frac{1}{\pi} \mathrm{Im}(s_\mu(\lambda + i\eta)).

d
Note that Zη −→ X as η → 0. This implies
lim P (Zη ∈ (a, b)) = P (X ∈ (a, b)).
η→0

We now conclude the proof. 


Compared with the Fourier transform, an important advantage of the Stieltjes
transform is that one can easily recover the density function of a probability
measure via the Stieltjes transform. In fact, let F (x) be a distribution
function induced by µ and x0 ∈ R. Suppose that limz→x0 Im(sµ (z)) exists,
denoted by Im(sµ (x0 )). Then F is differentiable at x0 , and F 0 (x0 ) =
Im(sµ (x0 ))/π.
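This inversion recipe works transparently for the semicircle law: evaluating the closed form (1.33) just above the real axis recovers ρ_sc. A small sketch (the test point x₀ and the height η are arbitrary choices):

```python
import cmath, math

def s_sc(z):
    # closed-form Stieltjes transform of the semicircle law, principal branch
    return (-z + cmath.sqrt(z * z - 4)) / 2

x0 = 0.7
rho = math.sqrt(4 - x0 * x0) / (2 * math.pi)          # rho_sc(x0)
recovered = s_sc(complex(x0, 1e-6)).imag / math.pi    # Im s(x0 + i eta) / pi
```

As η shrinks, Im s_sc(x₀ + iη)/π converges to the density value ρ_sc(x₀) ≈ 0.298.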
The Stieltjes continuity theorem reads as follows.

Theorem 1.14. (i) If µ, µn , n ≥ 1 is a sequence of probability measures


on R such that
µn ⇒ µ, n→∞
then for each z ∈ C \ R,
sµn (z) → sµ (z), n → ∞.
(ii) Assume that µn , n ≥ 1 is a sequence of probability measures on R such
that as n → ∞,
sµn (z) → s(z), ∀z ∈ C \ R
for some s(z). Then there exists a sub-probability measure µ (µ(R) ≤ 1)
such that

    s(z) = \int_{\mathbb{R}} \frac{1}{x - z}\, d\mu(x)

and for any continuous function f decaying to 0 at infinity,

    \int_{\mathbb{R}} f(x)\, d\mu_n(x) \to \int_{\mathbb{R}} f(x)\, d\mu(x), \quad n \to \infty.

(iii) Assume µ is a deterministic probability measure, and µ_n, n ≥ 1 is a
sequence of random probability measures on R. If for any z ∈ C \ R,

    s_{\mu_n}(z) \xrightarrow{P} s_\mu(z), \quad n \to \infty,

then µ_n weakly converges in probability to µ. Namely, for any bounded
continuous function f,

    \int_{\mathbb{R}} f(x)\, d\mu_n(x) \xrightarrow{P} \int_{\mathbb{R}} f(x)\, d\mu(x), \quad n \to \infty.

The reader is referred to Anderson, Guionnet and Zeitouni (2010), Bai and
Silverstein (2010), Tao (2012) for its proof and more details.
To conclude, let us quickly review the Riesz transform of a continual
diagram. Let

    \Omega(u) = \begin{cases} \frac{2}{\pi}\big( u \arcsin\frac{u}{2} + \sqrt{4 - u^2} \big), & |u| \le 2, \\ |u|, & |u| > 2; \end{cases}      (1.34)

its Riesz transform is defined by

    R_\Omega(z) = -\frac{1}{z} \exp\Big( \int_{-2}^{2} \frac{\big( (\Omega(u) - |u|)/2 \big)'}{u - z}\, du \Big)

for each z ∉ [−2, 2].
It is easy to compute

    R_\Omega(z) = -\frac{z}{2} + \frac{\sqrt{z^2 - 4}}{2},
which implies by (1.33)

RΩ (z) = ssc (z). (1.35)

This is not a coincidence! As we will see in Chapter 5, Ω(u) (see Figure


1.4 below) turns out to be the limit shape of a typical Plancherel Young
diagram. The equation (1.35) provides further evidence that there is a
close link between random matrices and random Plancherel partitions.
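Normalizations of the Riesz transform vary across the literature. As a numerical sanity check, the sketch below uses the normalization σ(u) = (Ω(u) − |u|)/2, for which −(1/z)·exp(∫₋₂² σ′(u)/(u−z) du) agrees with s_sc(z); here σ′(u) = ((2/π)arcsin(u/2) − sign u)/2 by direct differentiation of (1.34):

```python
import math

def riesz(z, steps=200_000):
    # -(1/z) * exp( int_{-2}^{2} sigma'(u) / (u - z) du ), midpoint rule,
    # with sigma(u) = (Omega(u) - |u|)/2
    h = 4.0 / steps
    total = 0.0
    for j in range(steps):
        u = -2.0 + (j + 0.5) * h
        sig = ((2.0 / math.pi) * math.asin(u / 2.0)
               - math.copysign(1.0, u)) / 2.0       # sigma'(u)
        total += sig / (u - z) * h
    return -math.exp(total) / z

z = 3.0
s_sc_at_z = -z / 2 + math.sqrt(z * z - 4) / 2       # s_sc(3) = (-3 + sqrt 5)/2
```

At z = 3 both sides equal (−3 + √5)/2 ≈ −0.382, illustrating the identity (1.35) under this normalization.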

Fig. 1.4 Ω curve



1.4 Convergence of stochastic processes

Let (S, ρ) be a metric space, S the σ-field generated by its topology. A
mapping X : Ω ↦ S is said to be measurable if for any A ∈ S,

    X^{-1}(A) = \{\omega : X(\omega) \in A\} \in \mathcal{A}.

We also call X an S-valued random element. The most commonly studied
random elements include real (complex) random variable, random vector,
random processes, Banach-valued random variable. Denote by PX the law
of X under P :
PX (A) := P ◦ X −1 (A) = P (ω ∈ Ω : X(ω) ∈ A), A ∈ S.
By definition, a sequence of random variables Xn weakly converges to a
random variable X if PXn ⇒ PX , and write simply Xn ⇒ X. The following
five statements are equivalent:
(i) for any bounded continuous function f ,
Ef (Xn ) → Ef (X), n → ∞;
(ii) for any bounded uniformly continuous function f ,
Ef (Xn ) → Ef (X), n → ∞;
(iii) for any closed set F ,
lim sup P (Xn ∈ F ) ≤ P (X ∈ F );
n→∞

(iv) for any open set G,


lim inf P (Xn ∈ G) ≥ P (X ∈ G);
n→∞

(v) for any measurable X-continuity set A,


lim P (Xn ∈ A) = P (X ∈ A).
n→∞

The reader is referred to Billingsley (1999a) for the proof and more details.
In addition, (ii) can be replaced by
(ii0 ) for any bounded infinitely differentiable function f ,
Ef (Xn ) → Ef (X), n → ∞.
It can even be replaced by
(ii00 ) for any continuous function f with compact support,
Ef (Xn ) → Ef (X), n → ∞.

In the special cases S = R and R^k, X_n ⇒ X is equivalent to X_n \xrightarrow{d} X. In
the case S = R^∞, X_n ⇒ X if and only if for each k ≥ 1,

    X_n \circ \pi_k^{-1} \Rightarrow X \circ \pi_k^{-1}, \quad n \to \infty,

where π_k denotes the projection from R^∞ to R^k.
The case of C[0, 1] is more interesting and challenging. Assume X_n ⇒
X. Then for any k ≥ 1 and any k points t_1, t_2, \cdots, t_k ∈ [0, 1],

    X_n \circ \pi_{t_1,\cdots,t_k}^{-1} \Rightarrow X \circ \pi_{t_1,\cdots,t_k}^{-1}, \quad n \to \infty,      (1.36)

where \pi_{t_1,\cdots,t_k} is the projection from C[0, 1] to R^k, i.e., \pi_{t_1,\cdots,t_k}(x) = (x(t_1), \cdots, x(t_k)).
However, the condition (1.36) is not a sufficient condition for Xn to
weakly converge to X. We shall require additional conditions. The Xn
is said to be weakly relatively compact if every subsequence has a further
convergent subsequence in the sense of weak convergence. According to the
subsequence convergence theorem, Xn is weakly convergent if all the limit
variables are identical in law. Another closely related concept is uniform
tightness. The Xn is uniformly tight if for any ε > 0, there is a compact
subset Kε in (S, ρ) such that
P (Xn ∈
/ Kε ) < ε, for all n ≥ 1.
The celebrated Prohorov’s theorem tells that the Xn must be weakly rel-
atively compact if Xn is uniformly tight. The converse is also true in a
separable complete metric space. A major point of this theorem is that the
weak convergence of probability measures rely on how they concentrate in
a compact subset in a metric space. In C[0, 1], the so-called Ascoli-Arzelà
lemma completely characterizes a relatively compact subset: K ⊆ C[0, 1]
is relatively compact if and only if
(i) uniform boundedness:

    \sup_{x\in K} \sup_{0 \le t \le 1} |x(t)| < \infty;

(ii) equi-continuity: for any ε > 0 there is a δ > 0 such that

    \sup_{x\in K} \sup_{|s-t| \le \delta} |x(s) - x(t)| < \varepsilon.

Note that under condition (ii), (i) can be replaced by the condition
(i')

    \sup_{x\in K} |x(0)| < \infty.

Combining the Ascoli-Arzelà lemma and Prohorov’s theorem, we can read-


ily give a criterion for Xn to weakly converge to X in C[0, 1]. Assume that
we are given a sequence of continuous random processes X and Xn , n ≥ 1
in C[0, 1]. Then Xn ⇒ X if and only if
(i) finite dimensional distributions converge, namely (1.36) holds;
(ii) for any ε > 0, there is a finite positive constant M such that
P (|Xn (0)| > M ) < ε, for all n ≥ 1; (1.37)
(iii) for any ε > 0 and η > 0, there is a δ > 0 such that

    P\Big( \sup_{|s-t| < \delta} |X_n(s) - X_n(t)| > \eta \Big) < \varepsilon, \quad \text{for all } n \ge 1.      (1.38)

To illustrate how to use the above general framework, we shall state and
prove the Donsker invariance principle.

Theorem 1.15. Let ξ_n, n ≥ 1 be a sequence of i.i.d. real random vari-
ables defined on a common probability space (Ω, A, P), with Eξ_1 = 0 and
Var(ξ_1) = 1. Define S_n = \sum_{i=1}^{n} \xi_i, n ≥ 1, and define

    X_n(t) = \frac{1}{\sqrt n} S_{[nt]} + \frac{nt - [nt]}{\sqrt n}\, \xi_{[nt]+1}, \quad 0 \le t \le 1.      (1.39)

Then

    X_n \Rightarrow B, \quad n \to \infty,

where B = (B(t), 0 ≤ t ≤ 1) is a standard Brownian motion.

Proof. We need to verify the conditions (1.36), (1.37) and (1.38). In-
deed, (1.36) directly follows from the Feller-Lévy CLT. (1.37) is trivial
since Xn (0) = 0, and (1.38) follows from the Lévy maximal inequality for
sums of independent random variables. The detail is left to the reader. 
We remark that the random process constructed in (1.39) is a polygonal line
going through the points (k/n, S_k/\sqrt n). It is often referred to as a partial sum
process. The Donsker invariance principle has found a large number of
process. The Donsker invariance principle has found a large number of
applications in a wide range of fields. To apply it, we usually need the
following mapping theorem. Let (S1 , ρ1 ) and (S2 , ρ2 ) be two metric spaces,
h : S1 7→ S2 a measurable mapping. Assume that X, Xn , n ≥ 1 is a
sequence of S1 -valued random elements and Xn ⇒ X. It is natural to ask
under what hypothesis the h(Xn ) still weakly converges to h(X). Obviously,
if h is continuous, then we have h(Xn ) ⇒ h(X). Moreover, the same still
holds if h is a measurable mapping and PX (Dh ) = 0 where Dh is the set
of all discontinuity points of h. As a simple example, we can compute the


limiting distribution of \max_{1\le k\le n} S_k/\sqrt n. Indeed, let h(x) = \sup_{0\le t\le 1} x(t)
for x ∈ C[0, 1]. Then h is continuous and

    h(X_n) = \frac{1}{\sqrt n} \max_{1\le k\le n} S_k, \quad h(B) = \sup_{0\le t\le 1} B(t).

Hence it follows that

    \frac{1}{\sqrt n} \max_{1\le k\le n} S_k \xrightarrow{d} \sup_{0\le t\le 1} B(t), \quad n \to \infty.
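One can test this limit by simulation: the reflection principle gives P(sup_{0≤t≤1} B(t) ≤ x) = 2Φ(x) − 1, and a crude Monte Carlo run (Gaussian increments; sample sizes are chosen for speed rather than accuracy) lands close to it:

```python
import math, random

random.seed(1)

def running_max_prob(x, n=300, trials=6000):
    # Monte Carlo estimate of P(max_{k<=n} S_k / sqrt(n) <= x)
    count = 0
    for _ in range(trials):
        s, m = 0.0, 0.0
        for _ in range(n):
            s += random.gauss(0.0, 1.0)
            if s > m:
                m = s
        if m / math.sqrt(n) <= x:
            count += 1
    return count / trials

# reflection principle: P(sup_{0<=t<=1} B(t) <= 1) = 2*Phi(1) - 1
target = 2 * 0.5 * (1 + math.erf(1 / math.sqrt(2))) - 1    # about 0.683
est = running_max_prob(1.0)
```

The estimate carries both Monte Carlo noise and a small discretization bias (the grid maximum slightly undershoots the continuous supremum), so only rough agreement should be expected at these sample sizes.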
Another example is to compute the limiting distribution of the weighted
sum \sum_{k=1}^{n} k\xi_k / n^{3/2}. Let h(x) = \int_0^1 x(t)\, dt for x ∈ C[0, 1]. Then h is
continuous and

    h(X_n) = \frac{1}{n^{3/2}} \sum_{k=1}^{n} k\xi_k + o_p(1), \quad h(B) = \int_0^1 B(t)\, dt,

where o_p(1) is negligible. Hence it follows that

    \frac{1}{n^{3/2}} \sum_{k=1}^{n} k\xi_k \xrightarrow{d} \int_0^1 B(t)\, dt.
More interesting examples can be found in Billingsley (1999a). In addi-
tion to R∞ and C[0, 1], one can also consider weak convergence of random
processes in D[0, 1], C(0, ∞) and D(0, ∞).
As the reader might notice, in proving the weak convergence of Xn in
C[0, 1], the most difficult part is to verify the uniform tightness condition
(1.38). A weaker version than (1.38) is stochastic equicontinuity: for every
ε > 0 and η > 0, there is a δ > 0 such that
sup P (|Xn (s) − Xn (t)| > η) < ε, for all n ≥ 1. (1.40)
|s−t|<δ
Although (1.40) does not guarantee that the process Xn converges weakly,
we can formulate a limit theorem for comparatively narrow class of func-
tionals of integral form.
Theorem 1.16. Suppose that X = (X(t), 0 ≤ t ≤ 1) and X_n = (X_n(t), 0 ≤
t ≤ 1), n ≥ 1, is a sequence of random processes satisfying (1.36) and (1.40). Sup-
pose that g(t, x) is a continuous function and there is a nonnegative function
h(x) such that h(x) ↑ ∞ as |x| → ∞ and

    \lim_{a\to\infty} \sup_{0\le t\le 1} \sup_{|x| > a} \frac{|g(t, x)|}{h(x)} = 0.

If \sup_{n\ge 1} \sup_{0\le t\le 1} E|h(X_n(t))| < \infty, then as n → ∞,

    \int_0^1 g\big(t, X_n(t)\big)\, dt \xrightarrow{d} \int_0^1 g\big(t, X(t)\big)\, dt.
This theorem is sometimes referred to as the Gikhman-Skorohod theorem. Its
proof and applications can be found in Chapter 9 of Gikhman and Skorohod
(1996).

Chapter 2

Circular Unitary Ensemble

2.1 Introduction

For n ∈ N, a complex n × n matrix Un is said to be unitary if


Un∗ Un = Un Un∗ = In .
This is equivalent to saying Un is nonsingular and Un∗ = Un−1 . The set Un
of unitary matrices forms a remarkable and important set, a compact Lie
group, which is generally referred to as the unitary group. This group has
a unique regular probability measure µn that is invariant under both left
and right multiplication by unitary matrices. Such a measure is called Haar
measure. Thus we have induced a probability space (Un , µn ), which is now
known as Circular Unitary Ensemble (CUE).
By definition the columns of an n×n random unitary matrix are orthog-
onal vectors in the n dimensional complex space Cn . This implies that the
matrix elements are not independent and thus are statistically correlated.
Before discussing statistical correlation properties, we shall have a quick
look at how to generate a random unitary matrix.
Form an n × n random matrix Z_n = (z_{ij})_{n×n} with i.i.d. complex stan-
dard normal entries. Recall Z = X + iY is a complex standard
normal random variable if X and Y are i.i.d. real normal random variables
with mean 0 and variance 1/2. The Z_n is almost surely of full rank, so ap-
ply Gram-Schmidt orthonormalization to its columns: normalize the first
column to have norm one, subtract from the second column its projection
onto the first and normalize to have norm one, and so on. Let T_n be the map induced
by the Gram-Schmidt algorithm; then the resulting matrix T_n(Z_n) is unitary
and is even distributed according to Haar measure. This is easy to prove and un-
derstand. Indeed it holds that T_n(U_n Z_n) = U_n T_n(Z_n) for any unitary matrix
U_n ∈ U_n. Since U_n Z_n \stackrel{d}{=} Z_n, we get U_n T_n(Z_n) \stackrel{d}{=} T_n(Z_n), as required.
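The construction is easy to carry out in code. The sketch below (pure-Python Gram-Schmidt with n = 3, an illustrative size) produces a matrix whose columns are orthonormal, i.e. U*U = I up to rounding:

```python
import random

random.seed(7)
n = 3

def cgauss():
    # complex standard normal: real and imaginary parts are N(0, 1/2)
    return complex(random.gauss(0, 0.5 ** 0.5), random.gauss(0, 0.5 ** 0.5))

Z = [[cgauss() for _ in range(n)] for _ in range(n)]   # Ginibre matrix

def inner(u, v):
    # Hermitian inner product <u, v> = sum conj(u_i) v_i
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Gram-Schmidt on the columns of Z
cols = [[Z[i][j] for i in range(n)] for j in range(n)]
q = []
for v in cols:
    w = v[:]
    for u in q:
        c = inner(u, w)                        # projection coefficient
        w = [wi - c * ui for wi, ui in zip(w, u)]
    norm = abs(inner(w, w)) ** 0.5
    q.append([wi / norm for wi in w])

# the orthonormalized vectors form the columns of a unitary U; check U* U = I
err = max(abs(inner(q[j], q[k]) - (1.0 if j == k else 0.0))
          for j in range(n) for k in range(n))
```

In practice one would use a library QR decomposition instead, with the caveat that the unnormalized QR output needs a phase correction to be exactly Haar distributed; the column-by-column Gram-Schmidt above matches the description in the text.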


Given a unitary matrix Un , consider the equation


Un v = λv,
where λ is a scalar and v is a vector in Cn . If a scalar λ and a nonzero
vector v happen to satisfy this equation, then λ is called an eigenvalue
of Un and v is called an eigenvector associated with λ. The eigenvalues
of Un are zeros of the characteristic polynomial det(zIn − Un ). It turns
out that all eigenvalues are on the unit circle T := {z ∈ C; |z| = 1} and
are almost surely distinct with respect to product Lebesgue measure. Call
these {eiθ1 , · · · , eiθn } with 0 ≤ θ1 , · · · , θn < 2π. Note that for any sequence
of n points on T there are matrices in Un with these points as eigenvalues.
The collection of all matrices with the same set of eigenvalues constitutes
a conjugacy class in Un .
The main question of interest in this chapter is: pick a Un ∈ Un ac-
cording to Haar measure, how are {eiθ1 , · · · , eiθn } distributed? The most
celebrated result is the following Weyl formula.

Theorem 2.1. The joint probability density for the unordered eigenvalues
of a Haar distributed random matrix in U_n is

    p_n(e^{iθ_1}, ..., e^{iθ_n}) = (1/((2π)^n n!)) ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|²,    (2.1)

where the product is by convention 1 when n = 1.

The proof is omitted; the reader is referred to Chapter 11 of Mehta (2004).
See also Chapter 2 of Forrester (2010).
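Formula (2.1) can be sanity-checked numerically: for n = 3 the density should integrate to 1 over [0, 2π]³. Since the integrand is a low-degree trigonometric polynomial, an equispaced Riemann sum is exact up to rounding (an illustrative check, not part of the text):

```python
import numpy as np
from math import factorial

# Check that the Weyl density (2.1) integrates to 1 for n = 3.
m = 24                                  # grid points per coordinate
grid = 2 * np.pi * np.arange(m) / m     # equispaced points on [0, 2*pi)
t1, t2, t3 = np.meshgrid(grid, grid, grid, indexing="ij")
# |e^{ia} - e^{ib}|^2 = 2 - 2*cos(a - b)
prod = (2 - 2 * np.cos(t1 - t2)) * (2 - 2 * np.cos(t1 - t3)) * (2 - 2 * np.cos(t2 - t3))
# integral of p_3 = mean(integrand) * (2*pi)^3 / ((2*pi)^3 * 3!) = mean / 3!
total_mass = prod.mean() / factorial(3)
```

For a trigonometric polynomial of degree below the grid size, the equispaced average equals the true mean, so `total_mass` should be 1 to machine precision.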
Weyl’s formula is the starting point of the following study of the CUE.
In particular, it gives a simple way to perform averages on U_n. For a class
function f, that is, one constant on conjugacy classes,

    ∫_{U_n} f(U_n) dµ_n = ∫_{[0,2π]^n} f(e^{iθ_1}, ..., e^{iθ_n}) p_n(e^{iθ_1}, ..., e^{iθ_n}) dθ_1 ... dθ_n.

Obviously, U1 has only one eigenvalue and is uniformly distributed on T.


U_2 has two eigenvalues whose joint probability density is

    p_2(e^{iθ_1}, e^{iθ_2}) = (1/(2(2π)²)) |e^{iθ_1} − e^{iθ_2}|².
It is easy to compute the marginal density for each eigenvalue by integrating
out the other argument. In particular, each eigenvalue is also uniformly
distributed on T. But these  two eigenvalues are not independent of each
other; indeed p2 eiθ1 , eiθ2 tends to zero as θ1 and θ2 approach each other.

Interestingly, these properties hold for general n. To see this, for any
n-tuple of complex numbers (x_1, ..., x_n) let

    ∆(x_1, ..., x_n) = det(x_k^{j−1})_{1≤j,k≤n}.

Then the Vandermonde identity shows

    ∆(x_1, ..., x_n) = ∏_{1≤j<k≤n} (x_k − x_j).

Define

    S_n(θ) = e^{−i(n−1)θ/2} Σ_{k=0}^{n−1} e^{ikθ} = sin(nθ/2)/sin(θ/2),    (2.2)

where by convention S_n(0) = n. Using the fact that a matrix and its
transpose have the same determinant, we have

    ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|² = |∆(e^{iθ_1}, ..., e^{iθ_n})|² = det(S_n(θ_k − θ_j))_{1≤j,k≤n}.
This formula is very useful for computing eigenvalue statistics. In order
to compute the m-dimensional marginal density, we also need a formula of
Gaudin (see Conrey (2005)), which states

    ∫_0^{2π} S_n(θ_j − θ) S_n(θ − θ_k) dθ = 2π S_n(θ_j − θ_k).

As a consequence, we have

    ∫_0^{2π} det(S_n(θ_j − θ_k))_{1≤j,k≤n} dθ_n = 2π det(S_n(θ_j − θ_k))_{1≤j,k≤n−1}.
Repeating this yields easily

    p_{n,m}(e^{iθ_1}, ..., e^{iθ_m}) := ∫_{[0,2π]^{n−m}} p_n(e^{iθ_1}, ..., e^{iθ_n}) dθ_{m+1} ... dθ_n
        = (1/(n!(2π)^n)) ∫_{[0,2π]^{n−m}} det(S_n(θ_j − θ_k))_{n×n} dθ_{m+1} ... dθ_n
        = ((n−m)!/n!) (1/(2π)^m) det(S_n(θ_j − θ_k))_{1≤j,k≤m}.

In particular, the first two marginal densities are

    p_{n,1}(e^{iθ}) = 1/(2π),    0 ≤ θ ≤ 2π,    (2.3)

and

    p_{n,2}(e^{iθ_1}, e^{iθ_2}) = (1/(n(n−1))) (1/(2π)²) (n² − (S_n(θ_1 − θ_2))²).    (2.4)
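Both Gaudin's reproducing identity and the normalization of the two-point density (2.4) can be verified numerically. Because everything in sight is a trigonometric polynomial (take n odd so that S_n itself is 2π-periodic), equispaced sums are again exact (illustrative check, not from the text):

```python
import numpy as np

def S(n, theta):
    """S_n(theta) = sin(n*theta/2)/sin(theta/2) with S_n(0) = n, cf. (2.2)."""
    theta = np.asarray(theta, dtype=float)
    out = np.full_like(theta, float(n))
    mask = np.abs(np.sin(theta / 2)) > 1e-12
    out[mask] = np.sin(n * theta[mask] / 2) / np.sin(theta[mask] / 2)
    return out

n, m = 5, 256
theta = 2 * np.pi * np.arange(m) / m
a, b = 0.7, 2.1

# Gaudin: integral over [0, 2*pi] of S_n(a - t) S_n(t - b) dt = 2*pi*S_n(a - b)
lhs = (S(n, a - theta) * S(n, theta - b)).mean() * 2 * np.pi
gaudin_error = abs(lhs - 2 * np.pi * S(n, np.array([a - b]))[0])

# the two-point density (2.4) integrates to 1
t1, t2 = np.meshgrid(theta, theta, indexing="ij")
p2 = (n * n - S(n, t1 - t2) ** 2) / (n * (n - 1) * (2 * np.pi) ** 2)
p2_total = p2.mean() * (2 * np.pi) ** 2
```

The n² term in (2.4) is essential here: the grid average of S_n(θ_1 − θ_2)² is exactly n, so the total mass comes out to (n² − n)/(n(n − 1)) = 1.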

Hence each eigenvalue is still uniformly distributed on T.


The CUE and its eigenvalue distributions naturally appear in a va-
riety of problems from particle physics to analytic number theory. It is
indeed a very special example of three ensembles introduced and studied
by Dyson in 1962 with a view to simplifying the study of energy level be-
havior in complex quantum systems, see Dyson (1962). More generally,
consider n identically charged particles confined to move on T in the complex
plane. Each interacts with the others through the usual Coulomb
potential −log|e^{iθ_j} − e^{iθ_k}|, which gives rise to the Hamiltonian

    H_n(θ_1, ..., θ_n) = − Σ_{1≤j<k≤n} log|e^{iθ_j} − e^{iθ_k}|.

This induces the Gibbs measure with parameters n and β > 0

    p_{n,β}(e^{iθ_1}, ..., e^{iθ_n}) = (1/((2π)^n Z_{n,β})) ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^β,    (2.5)

where n is the number of particles, β stands for the inverse temperature,
and Z_{n,β} is given by

    Z_{n,β} = Γ(βn/2 + 1) / [Γ(β/2 + 1)]^n.
The family of probability measures defined by (2.5) is called Circular β En-
semble (CβE). The CUE corresponds to β = 2. Viewed from the opposite
perspective, one may say that the CUE provides a matrix model for the
Coulomb gas at the inverse temperature β = 2. In Section 2.5 we shall see
a matrix model for general β.
In this chapter we will be particularly interested in the asymptotic be-
haviours of various eigenvalue statistics as n tends to infinity. Start with
the average spectral measures. Let eiθ1 , · · · , eiθn be eigenvalues of a Haar
distributed unitary matrix Un . Put them together as a probability measure
on T:

    ν_n = (1/n) Σ_{k=1}^n δ_{e^{iθ_k}}.

Theorem 2.2. As n → ∞,

    ν_n ⇒ µ in P,

where µ is the uniform measure on T.



The proof is basically along the lines of Diaconis and Shahshahani (1994).
We need a second-order moment estimate as follows.

Lemma 2.1. For any integer l ≠ 0,

    E|Σ_{k=1}^n e^{ilθ_k}|² = min(|l|, n).    (2.6)

Proof. It is easy to see

    E|Σ_{k=1}^n e^{ilθ_k}|² = n + n(n−1) E e^{il(θ_1−θ_2)}.    (2.7)

In turn, by virtue of (2.2) and (2.4) it follows that

    n(n−1) E e^{il(θ_1−θ_2)}
        = (1/(2π)²) ∫_0^{2π} ∫_0^{2π} e^{il(θ_1−θ_2)} (n² − (S_n(θ_1 − θ_2))²) dθ_1 dθ_2
        = −(1/(2π)²) ∫_0^{2π} ∫_0^{2π} e^{il(θ_1−θ_2)} |Σ_{k=1}^n e^{ik(θ_1−θ_2)}|² dθ_1 dθ_2
        = −(1/(2π)²) ∫_0^{2π} ∫_0^{2π} e^{il(θ_1−θ_2)} (n + Σ_{1≤m≠k≤n} e^{i(m−k)(θ_1−θ_2)}) dθ_1 dθ_2
        = −#{(m, k) : m − k = −l, 1 ≤ m ≠ k ≤ n}
        = |l| − n if |l| ≤ n, and 0 if |l| > n.    (2.8)

Here the second equality uses (S_n(θ))² = |Σ_{k=1}^n e^{ikθ}|² together with the fact
that the n² term integrates to zero since l ≠ 0; likewise the n term contributes
nothing in the fourth equality, and the count of pairs is n − |l| when |l| ≤ n.
Substituting (2.8) into (2.7) immediately yields (2.6). 
Proof of Theorem 2.2. It is enough to show that the Fourier transform
of ν_n converges in probability to that of µ. For each integer l ≠ 0 let

    ν̂_n(l) = ∫_0^{2π} e^{−ilθ} dν_n(θ),    µ̂(l) = ∫_0^{2π} e^{−ilθ} dµ(θ).

Trivially µ̂(l) = 0. We need only prove

    E ν̂_n(l) = 0,    E|ν̂_n(l)|² → 0.    (2.9)

Since each eigenvalue is uniform over T,

    E ν̂_n(l) = (1/n) Σ_{k=1}^n E e^{−ilθ_k} = E e^{−ilθ_1} = 0.

Also, it follows from (2.6) that

    E|ν̂_n(l)|² = min(|l|, n)/n² = |l|/n²

whenever n ≥ |l|. Thus (2.9) holds as n → ∞. The proof is concluded. □
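The moment identity (2.6) is easy to probe by simulation, using the QR-based Haar sampler sketched in the Introduction (illustrative Monte Carlo code; the function name and sample sizes are our choices):

```python
import numpy as np

def haar_unitary(n, rng):
    """Approximately Haar-distributed unitary matrix via phase-corrected QR."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(1)
n, reps = 6, 2000
powers = (1, 3, 8)
acc = {l: 0.0 for l in powers}
for _ in range(reps):
    ev = np.linalg.eigvals(haar_unitary(n, rng))
    for l in powers:
        acc[l] += abs(np.sum(ev ** l)) ** 2
# Lemma 2.1 predicts E|sum_k e^{il theta_k}|^2 = min(|l|, n),
# i.e. values close to 1, 3, 6 for l = 1, 3, 8 with n = 6.
estimates = {l: acc[l] / reps for l in powers}
```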
Theorem 2.2 means that for every bounded continuous function f,

    (1/n) Σ_{k=1}^n f(e^{iθ_k}) →^P (1/(2π)) ∫_0^{2π} f(e^{iθ}) dθ.

This is a kind of law of large numbers, and is very similar to the Khinchine
law of large numbers for sums of i.i.d. random variables in standard prob-
ability theory, see (1.4). One cannot see from such a first-order average the
difference between eigenvalues and sample points chosen at random from
the unit circle T. However, a significant feature will appear in the second-
order fluctuation, which is the main content of the following sections.

2.2 Symmetric groups and symmetric polynomials

We shall first introduce the irreducible characters of symmetric groups and


state without proofs character relations of two kinds. Then we shall define
four classes of symmetric polynomials and establish a Schur orthonormality
formula of Schur polynomials and power polynomials with respect to Haar
measure. Most of the materials can be found in Macdonald (1995) and
Sagan (2000).
Throughout the section, n is a fixed natural number. Consider the
symmetric group S_n, consisting of all permutations of {1, 2, ..., n}, with
composition as the multiplication. For σ ∈ S_n, denote by r_k the number
of cycles of length k in σ. The cycle type, or simply the type, of σ is
an expression of the form

    (1^{r_1}, 2^{r_2}, ..., n^{r_n}).




For example, if σ ∈ S_5 is given by

    σ(1) = 2, σ(2) = 3, σ(3) = 1, σ(4) = 4, σ(5) = 5,

then it has cycle type (1², 2⁰, 3¹, 4⁰, 5⁰). The cycle type of the identity
permutation is (1^n).
In S_n, permutations σ and τ are conjugate if

    σ = ϑτϑ^{−1}

for some ϑ ∈ S_n. The set of all permutations conjugate to a given σ is
called the conjugacy class of σ and is denoted by K_σ. Conjugacy is an
equivalence relation, so the distinct conjugacy classes partition S_n. It is not
hard to see that two permutations are in the same conjugacy class if and
only if they have the same cycle type, and the size of a conjugacy class with
cycle type (1^{r_1}, 2^{r_2}, ..., n^{r_n}) is given by n!/∏_{i=1}^n i^{r_i} r_i!.
Let M_d be the set of all d × d matrices with complex entries, and L_d
the group of all invertible d × d matrices under multiplication. A matrix
representation of S_n is a group homomorphism

    X : S_n −→ L_d.

Equivalently, to each σ ∈ S_n is assigned X(σ) ∈ L_d such that
(i) X(1^n) = I_d, the identity matrix, and
(ii) X(στ) = X(σ)X(τ) for all σ, τ ∈ S_n.
The parameter d is called the degree of the representation. Given a
matrix representation X of degree d, let V be the vector space of all column
vectors of length d. Then we can multiply v ∈ V by σ ∈ Sn using
σv = X(σ)v.
This makes V into a Sn -module of dimension d. If a subspace W ⊆ V is
closed under the action of Sn , that is,
w ∈ W ⇒ σw ∈ W for all σ ∈ Sn ,
then we say W is a Sn -submodule of V .
A non-zero matrix representation X of degree d is reducible if the S_n-
module V contains a nontrivial submodule W. Otherwise, X is said to be
irreducible. Equivalently, X is reducible if V has a basis B in which every
σ ∈ S_n is assigned a block matrix of the form

    X(σ) = ( A(σ)    0
              0     B(σ) ),

where the A(σ) are square matrices, all of the same size, and 0 is a nonempty
matrix of zeros.
Two representations X and Y are equivalent if and only if there exists
a fixed matrix T such that
Y (σ) = T X(σ)T −1 , for all σ ∈ Sn .
The number of inequivalent irreducible representations is equal to the num-
ber of conjugacy classes of Sn .
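A concrete, if reducible, example is the defining (permutation) representation, in which X(σ) permutes the coordinates of C^n. The following sketch (illustrative code, not from the text) verifies the homomorphism property (ii) on all of S_3 and shows that the character, here the number of fixed points, is a class function:

```python
import numpy as np
from itertools import permutations

def perm_matrix(sigma):
    """X(sigma)[i, j] = 1 iff sigma sends j to i, so X(sigma) e_j = e_{sigma(j)}."""
    n = len(sigma)
    m = np.zeros((n, n), dtype=int)
    for j, i in enumerate(sigma):
        m[i, j] = 1
    return m

def compose(s, t):
    """(s t)(x) = s(t(x)), composition as in the text."""
    return tuple(s[t[x]] for x in range(len(s)))

perms = list(permutations(range(3)))

# homomorphism property (ii): X(st) = X(s) X(t) for all s, t
homo_ok = all(
    np.array_equal(perm_matrix(compose(s, t)), perm_matrix(s) @ perm_matrix(t))
    for s in perms for t in perms
)

# the character tr X(sigma) equals the number of fixed points of sigma,
# which depends only on the cycle type, hence is a class function
char = {s: int(np.trace(perm_matrix(s))) for s in perms}
fixed_ok = all(char[s] == sum(1 for x in range(3) if s[x] == x) for s in perms)
```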

A classical theorem of Maschke implies that every matrix representation
of S_n having positive dimension is completely reducible. In particular, let
X be a matrix representation of S_n of degree d > 0; then there is a fixed
matrix T such that every matrix X(σ), σ ∈ S_n, has the block diagonal form

    T X(σ) T^{−1} = diag(X^{(1)}(σ), X^{(2)}(σ), ..., X^{(k)}(σ)),

where each X^{(i)} is an irreducible matrix representation of S_n.
where each X (i) is an irreducible matrix representation of Sn .
To every matrix representation X one assigns a simple statistic, the character,
defined by

    χ_X(σ) = tr X(σ),

where tr denotes the trace of a matrix. Otherwise put, χ_X is the map
σ ↦ tr X(σ) from S_n to C.
It turns out that the character contains much of the information about the
matrix representation. Here are some elementary properties of characters.

Lemma 2.2. Let X be a matrix representation of Sn of degree d with char-


acter χX , then
(i) χX (1n ) = d;
(ii) χX is a class function, that is, χX (σ) = χX (τ ) if σ and τ are in the
same conjugacy class;
(iii) if Y is a matrix representation equivalent to X, then their characters
are identical: χX ≡ χY .

Let χ and ψ be any two functions from S_n to C. The inner product of χ
and ψ is defined by

    ⟨χ, ψ⟩ = (1/n!) Σ_{σ∈S_n} χ(σ) ψ̄(σ).

In particular, if χ and ψ are characters, then

    ⟨χ, ψ⟩ = (1/n!) Σ_{σ∈S_n} χ(σ) ψ̄(σ).    (2.10)

Theorem 2.3. If χ and ψ are irreducible characters, then we have the
character relation of the first kind

    ⟨χ, ψ⟩ = δ_{χ,ψ},    (2.11)

where δ_{χ,ψ} stands for the Kronecker delta.

Since a character is a class function, we can rewrite (2.10) and (2.11) as

    ⟨χ, ψ⟩ = (1/n!) Σ_K |K| χ(K) ψ̄(K) = δ_{χ,ψ},

where χ(K) and ψ(K) denote the common values of χ and ψ on K, |K|
denotes the size of K, and the sum is over all conjugacy classes of S_n.
This implies that the modified character table

    U = ( √(|K|/n!) χ(K) )_{χ,K}

has orthonormal rows. Hence U, being square, is a unitary matrix and
has orthonormal columns. Thus we have proven the character relation of
the second kind as follows.

Theorem 2.4.

    Σ_χ χ(K) χ̄(L) = (n!/|K|) δ_{K,L},    (2.12)

where the sum is taken over all irreducible characters.
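Both character relations can be checked on S_3 with its standard character table (quoted here without derivation; the characters of S_n are real, so the conjugates above can be dropped). An illustrative sketch:

```python
import numpy as np

# Irreducible characters of S3 on the conjugacy classes (1^3), (2,1), (3),
# whose sizes are 1, 3, 2.  Rows: trivial, sign, standard representation.
class_sizes = np.array([1, 3, 2])
n_fact = 6
table = np.array([
    [1,  1,  1],
    [1, -1,  1],
    [2,  0, -1],
])

# first kind (2.11): (1/n!) * sum_K |K| chi(K) psi(K) = delta_{chi,psi}
first = (table * class_sizes) @ table.T / n_fact

# second kind (2.12): sum_chi chi(K) chi(L) = (n!/|K|) delta_{K,L}
second = table.T @ table
expected_second = np.diag(n_fact / class_sizes)
```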

A partition of n is a sequence λ = (λ_1, λ_2, ..., λ_l) of non-increasing natural
numbers such that

    Σ_{i=1}^l λ_i = n,

where the λ_i are called parts and l is called the length, denoted l(λ).
If λ = (λ_1, λ_2, ..., λ_l) is a partition of n, then we write λ ⊢ n. We
also use the notation |λ| = Σ_{i=1}^l λ_i. The cycle type of a permutation in
S_n naturally gives a partition of n. Conversely, given λ ⊢ n, let r_k =
#{i : λ_i = k}; then we have a cycle type (1^{r_1}, 2^{r_2}, ..., n^{r_n}). Thus there is a
natural one-to-one correspondence between partitions of n and conjugacy
classes of S_n. As a consequence, the number of irreducible characters is
equal to the number of partitions of n.
Let P_n be the set of all partitions of n. We need an ordering on
P_n. Since each partition is a sequence of integers, a natural
choice is the ordinary lexicographic order. Let λ = (λ_1, λ_2, ..., λ_l) and
µ = (µ_1, µ_2, ..., µ_m) be partitions of n. Then λ < µ in lexicographic order
if, for some index i,

    λ_j = µ_j for j < i and λ_i < µ_i.

This is a total ordering on P_n. For instance, on P_6 we have

    (1⁶) < (2, 1⁴) < (2², 1²) < (2³) < (3, 1³) < (3, 2, 1)
        < (3²) < (4, 1²) < (4, 2) < (5, 1) < (6).
Another ordering is the following dominance order. If

    λ_1 + λ_2 + ... + λ_i ≥ µ_1 + µ_2 + ... + µ_i

for all i ≥ 1, then λ is said to dominate µ, written λ ⊵ µ. The lexicographic
order is a refinement of the dominance order in the sense that
λ ≥ µ whenever λ, µ ∈ P_n satisfy λ ⊵ µ.
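The refinement claim is easy to verify exhaustively on small n; the sketch below (illustrative code, with our own partition generator) checks it on all of P_6, where tuple comparison in Python is exactly the lexicographic order:

```python
def partitions(n, max_part=None):
    """Generate the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dominates(lam, mu):
    """lam dominates mu: every partial sum of lam is >= that of mu."""
    a = b = 0
    for i in range(max(len(lam), len(mu))):
        a += lam[i] if i < len(lam) else 0
        b += mu[i] if i < len(mu) else 0
        if a < b:
            return False
    return True

P6 = list(partitions(6))
num_partitions_6 = len(P6)   # p(6) = 11, matching the displayed chain
# lexicographic order refines dominance: lam >= mu whenever lam dominates mu
refines = all(lam >= mu for lam in P6 for mu in P6 if dominates(lam, mu))
```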
Next we shall describe a graphic representation of a partition. Suppose
λ = (λ_1, λ_2, ..., λ_l) ⊢ n. The Young diagram (shape) of λ is an array of n
boxes in l left-justified rows, with row i containing λ_i boxes for 1 ≤ i ≤ l.
The box in row i and column j has coordinates (i, j), as in a matrix; see
Figure 2.1.

Fig. 2.1 Young diagram

A Young tableau of shape λ, denoted t^λ, is an array obtained by putting the
numbers 1, 2, ..., n into the boxes bijectively. A Young tableau t^λ is
standard if the rows are increasing from left to right and the columns are
increasing from top to bottom. Let t_{i,j} stand for the entry of t in position
(i, j). Clearly there are n! Young tableaux for any shape λ ⊢ n. Two
tableaux t^λ_1 and t^λ_2 are row equivalent, t^λ_1 ∼ t^λ_2, if corresponding rows of the
two tableaux contain the same elements. A tabloid of shape λ is

    {t^λ} = {t^λ_1 : t^λ_1 ∼ t^λ}.

The number of tableaux in any given equivalence class is λ! := λ_1! λ_2! ... λ_l!.
Thus the number of tabloids of shape λ is just n!/λ!.

Given σ ∈ S_n, define

    σt^λ = (σ t_{i,j}).

To illustrate, if σ = (1, 2, 3) ∈ S_3 and λ = (2, 1) ⊢ 3 with

    t^λ = 1 2
          3 ,

then

    σt^λ = (σ t_{i,j}) = 2 3
                         1 .

This induces an action on tabloids by letting

    σ{t^λ} = {σt^λ}.
 

Suppose that the tableau t^λ has columns C_1, C_2, ..., C_{λ_1}. Let

    κ_{C_j} = Σ_{σ_j ∈ S_{C_j}} sgn(σ_j) σ_j,

where S_{C_j} is the symmetric group of permutations of the numbers in C_j. Let

    κ_{t^λ} = κ_{C_1} κ_{C_2} ... κ_{C_{λ_1}}.

This is a linear combination of elements of S_n, so κ_{t^λ} ∈ C[S_n]. Now we
can pass from the tabloid {t^λ} to the polytabloid

    e_{t^λ} = κ_{t^λ} {t^λ}.


Some basic properties are summarized in the following lemma.



Lemma 2.3. (i) For any λ ⊢ n, the set {e_{t^λ} : t^λ is a standard Young tableau} is
linearly independent;
(ii) For any λ ⊢ n,

    S^λ := span{e_{t^λ} : t^λ is a standard Young tableau}
         = span{e_{t^λ} : t^λ is a Young tableau};

(iii) The S^λ, λ ⊢ n, form a complete list of irreducible S_n-modules.

Let χ^λ be the character of the matrix representation associated with S^λ, and
d_λ the number of standard Young tableaux of shape λ. Then we have

Theorem 2.5.

    χ^λ(1^n) = dim S^λ = d_λ

and

    Σ_{λ⊢n} χ^λ(1^n)² = Σ_{λ⊢n} d_λ² = n!.    (2.13)
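The dimensions d_λ can be computed by the hook length formula, which we quote without proof (it is standard, e.g. in Sagan (2000)); this gives a quick numerical check of the Burnside identity (2.13). An illustrative sketch:

```python
from math import factorial

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_dim(lam):
    """Number of standard Young tableaux of shape lam, via hook lengths."""
    n = sum(lam)
    # conj[j] = number of boxes in column j (the conjugate partition)
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            # hook length of box (i, j): arm + leg + 1
            prod *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // prod

n = 6
burnside = sum(hook_dim(lam) ** 2 for lam in partitions(n))  # should equal 6! = 720
```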

Formula (2.13) is often referred to as the Burnside identity. Some more


information about partitions will be found in Chapters 4 and 5.
Consider the ring Z[x_1, ..., x_n] of polynomials in n independent variables
x_1, ..., x_n with rational integer coefficients. The symmetric group S_n
acts on this ring by permuting the variables, and a polynomial is symmetric
if it is invariant under this action. Let Λ_n := Λ_n[x_1, ..., x_n] be the subring
formed by the symmetric polynomials. We will list four classes of widely
used symmetric polynomials, all indexed by partitions.
• Elementary symmetric polynomials. For each integer r ≥ 0, the rth
elementary symmetric polynomial e_r is the sum of all products of r distinct
variables x_i, so that e_0 = 1 and for r ≥ 1

    e_r = Σ_{1≤i_1<i_2<...<i_r≤n} x_{i_1} x_{i_2} ... x_{i_r}.

For each partition λ = (λ_1, λ_2, ..., λ_l) define

    e_λ = e_{λ_1} e_{λ_2} ... e_{λ_l}.

The e_r, 1 ≤ r ≤ n, are algebraically independent over Z, and every element of Λ_n is
uniquely expressible as a polynomial in them.
• Complete symmetric polynomials. For each integer r ≥ 0, the rth complete
symmetric polynomial h_r is the sum of all monomials of total degree r in the
variables x_1, x_2, ..., x_n. In particular, h_0 = 1 and h_1 = e_1. By convention,
h_r and e_r are defined to be zero for r < 0. Define

    h_λ = h_{λ_1} h_{λ_2} ... h_{λ_l}

for any partition λ = (λ_1, λ_2, ..., λ_l). The h_r, 1 ≤ r ≤ n, are algebraically
independent over Z, and

    Λ_n = Z[h_1, h_2, ..., h_n].
• Schur symmetric polynomials. For each partition λ = (λ_1, λ_2, ..., λ_l)
with length l ≤ n (setting λ_j = 0 for l < j ≤ n), consider the determinant

    det(x_i^{λ_j + n − j})_{1≤i,j≤n}.

This is divisible in Z[x_1, x_2, ..., x_n] by each of the differences x_j − x_i (1 ≤
i < j ≤ n), and hence by their product ∏_{1≤i<j≤n}(x_j − x_i), which is the
Vandermonde determinant det(x_i^{n−j})_{1≤i,j≤n}. Define

    s_λ := s_λ(x_1, ..., x_n) = det(x_i^{λ_j+n−j})_{1≤i,j≤n} / det(x_i^{n−j})_{1≤i,j≤n},    (2.14)

where s_λ = 0 if the numbers λ_j + n − j (1 ≤ j ≤ n) are not all distinct.

The quotient (2.14) is a symmetric and homogeneous polynomial of
degree |λ|, that is, it lies in Λ_n. It is called the Schur polynomial in the variables
x_1, x_2, ..., x_n corresponding to the partition λ. The Schur polynomials
s_λ(x_1, ..., x_n) with l(λ) ≤ n form a Z-basis of Λ_n.

Each Schur polynomial can be expressed as a polynomial in the elementary
symmetric polynomials e_r, and as a polynomial in the complete
symmetric polynomials h_r. The formulas are:

    s_λ = det(h_{λ_i − i + j})_{1≤i,j≤n},

where l(λ) ≤ n, and

    s_λ = det(e_{λ′_i − i + j})_{1≤i,j≤n},

where λ′ is the conjugate partition, with l(λ′) ≤ n.
In particular, we have

    s_{(n)} = h_n,    s_{(1^n)} = e_n.
• Power sum polynomials. For each r ≥ 1, the rth power sum polynomial is

    p_r := p_r(x_1, ..., x_n) = Σ_{i=1}^n x_i^r.

We define

    p_λ = p_{λ_1} p_{λ_2} ... p_{λ_l}

for each partition λ = (λ_1, λ_2, ..., λ_l). Note that the p_λ, λ ⊢ n, do not form a
Z-basis of Λ_n. For instance,

    h_2 = (1/2) p_2 + (1/2) p_1²

does not have integral coefficients when expressed in terms of the p_λ. In
general, for any partition λ = (1^{r_1}, 2^{r_2}, ...) of n, define

    z_λ = ∏_{i≥1} i^{r_i} r_i!.    (2.15)

Then we can express h_n and e_n as linear combinations of the p_λ as follows:

    h_n = Σ_{λ⊢n} z_λ^{−1} p_λ

and

    e_n = Σ_{λ⊢n} (−1)^{n−l(λ)} z_λ^{−1} p_λ.
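The constants z_λ of (2.15) are exactly n!/|K_λ|, where K_λ is the conjugacy class of cycle type λ, so Σ_{λ⊢n} 1/z_λ = 1; equivalently, this is h_n = Σ z_λ^{−1} p_λ evaluated at x = (1, 0, ..., 0), where every p_λ = 1 and h_n = 1. An illustrative exact-arithmetic check:

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z(lam):
    """z_lambda = prod_i i^{r_i} r_i!, formula (2.15)."""
    prod = Fraction(1)
    for i in set(lam):
        r = lam.count(i)
        prod *= Fraction(i) ** r * factorial(r)
    return prod

n = 7
total = sum(Fraction(1) / z(lam) for lam in partitions(n))           # = 1
class_total = sum(Fraction(factorial(n)) / z(lam) for lam in partitions(n))  # = n!
```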

The Schur polynomial s_λ can also be expressed as a linear combination of
the p_ρ:

    s_λ = Σ_{ρ⊢|λ|} z_ρ^{−1} χ^λ(ρ) p_ρ,

where χ^λ(ρ) is the value of the irreducible character χ^λ at permutations of
cycle type ρ. Conversely, we have the following inverse formula, which is an
expression of Schur-Weyl duality.

Theorem 2.6.

    p_λ = Σ_{ρ⊢|λ|} χ^ρ(λ) s_ρ.    (2.16)
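For |λ| = 2, (2.16) reads p_{(2)} = s_{(2)} − s_{(1,1)}, since the trivial and sign characters of S_2 take the values 1 and −1 on a 2-cycle. This can be checked numerically by computing Schur polynomials directly from the determinant ratio (2.14); the sample points below are arbitrary (illustrative sketch):

```python
import numpy as np

def schur(lam, x):
    """s_lambda(x_1, ..., x_n) via the ratio of determinants in (2.14)."""
    n = len(x)
    lam = list(lam) + [0] * (n - len(lam))
    num = np.array([[xi ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x])
    den = np.array([[xi ** (n - 1 - j) for j in range(n)] for xi in x])
    return np.linalg.det(num) / np.linalg.det(den)

x = np.array([0.3, 1.7, -0.9])        # arbitrary distinct sample points
p2 = np.sum(x ** 2)                    # the power sum p_(2)
rhs = schur((2,), x) - schur((1, 1), x)
schur_weyl_error = abs(p2 - rhs)

# also check s_(1,1,1) = e_3 = x_1 x_2 x_3 for three variables
e3_error = abs(schur((1, 1, 1), x) - np.prod(x))
```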

We now define an inner product on Λ_n. For f ∈ Λ_n, let

    f(U_n) = f(e^{iθ_1}, ..., e^{iθ_n}),

where U_n is an n × n unitary matrix with eigenvalues e^{iθ_1}, ..., e^{iθ_n}. Thus
f : U_n → C is invariant under unitary conjugation.

Given two symmetric polynomials f, g ∈ Λ_n, their inner product is
defined by

    ⟨f, g⟩ = ∫_{U_n} f(U_n) ḡ(U_n) dµ_n,

where for g with real coefficients ḡ(U_n) = g(e^{−iθ_1}, ..., e^{−iθ_n}).
It turns out that the Schur polynomials are orthonormal with respect to this
inner product, which is referred to as Schur orthonormality. In particular,
we have

Theorem 2.7.

    ⟨s_λ, s_τ⟩ = δ_{λ,τ} 1(l(λ) ≤ n).    (2.17)

Proof. According to (2.1), we have

    ⟨f, g⟩ = ∫_{U_n} f(U_n) ḡ(U_n) dµ_n
           = (1/((2π)^n n!)) ∫_{[0,2π]^n} f(e^{iθ_1}, ..., e^{iθ_n}) g(e^{−iθ_1}, ..., e^{−iθ_n})
             · ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|² dθ_1 ... dθ_n.

If λ and τ are two partitions of length ≤ n, then by (2.14), with the two
Vandermonde denominators cancelling against ∏ |e^{iθ_j} − e^{iθ_k}|²,

    ⟨s_λ, s_τ⟩ = (1/((2π)^n n!)) ∫_{[0,2π]^n} s_λ(e^{iθ_1}, ..., e^{iθ_n}) s_τ(e^{−iθ_1}, ..., e^{−iθ_n})
                 · ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|² dθ_1 ... dθ_n
               = (1/((2π)^n n!)) ∫_{[0,2π]^n} det(e^{i(λ_k+n−k)θ_j}) det(e^{−i(τ_k+n−k)θ_j}) dθ_1 ... dθ_n
               = (1/n!) [det(e^{i(λ_k+n−k)θ_j}) det(e^{−i(τ_k+n−k)θ_j})]_1,    (2.18)

where [f]_1 denotes the constant term of f. A simple algebra shows

    [det(e^{i(λ_k+n−k)θ_j}) det(e^{−i(τ_k+n−k)θ_j})]_1 = n! δ_{λ,τ},

which together with (2.18) implies

    ⟨s_λ, s_τ⟩ = δ_{λ,τ}.

We conclude the proof. □
Having Schur orthonormality (2.17), we can further compute the inner
product of power sum polynomials. For any partitions µ and ν, applying
the Schur-Weyl duality (2.16) immediately yields

    ⟨p_µ, p_ν⟩ = Σ_{ρ⊢|µ|} Σ_{σ⊢|ν|} χ^ρ(µ) χ^σ(ν) ⟨s_ρ, s_σ⟩
               = Σ_{ρ⊢|µ|} Σ_{σ⊢|ν|} χ^ρ(µ) χ^σ(ν) δ_{ρ,σ} 1(l(ρ) ≤ n)
               = δ_{|µ|,|ν|} Σ_{ρ⊢|µ|} χ^ρ(µ) χ^ρ(ν) 1(l(ρ) ≤ n).

When |µ| = |ν| ≤ n, the indicator is identically 1 since l(ρ) ≤ |ρ| ≤ n, the
sum is taken over all partitions of |µ|, and the character relation (2.12) of
the second kind shows

    ⟨p_µ, p_ν⟩ = z_µ δ_{µ,ν},    (2.19)

where z_µ is defined as in (2.15).
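Identity (2.19) has a direct matrix-integral meaning: for µ = (2, 1), ⟨p_µ, p_µ⟩ = E|tr(U²) tr(U)|² should equal z_{(2,1)} = 2 once n ≥ 3, while mixed moments such as E[tr(U²) conj(tr U)²] vanish. A Monte Carlo sketch, assuming the QR-based Haar sampler sketched earlier (illustrative, not from the text):

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(7)
n, reps = 5, 4000
acc_diag = 0.0   # estimates <p_(2,1), p_(2,1)> = z_(2,1) = 2
acc_off = 0.0    # estimates <p_(2), p_(1,1)> = 0
for _ in range(reps):
    U = haar_unitary(n, rng)
    t1, t2 = np.trace(U), np.trace(U @ U)
    acc_diag += abs(t2 * t1) ** 2
    acc_off += t2 * np.conj(t1) ** 2
diag_est = acc_diag / reps
off_est = acc_off / reps
```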

2.3 Linear functionals of eigenvalues

Let f : T → R be a square integrable function, that is, f ∈ L²(T, dµ).
Define the Fourier coefficients by

    f̂_l = (1/(2π)) ∫_0^{2π} f(e^{iθ}) e^{−ilθ} dθ,    −∞ < l < ∞,

so that f̂_0 is the average of f over T. Since f is real, f̂_{−l} is the complex
conjugate of f̂_l.

In this section we shall focus on the fluctuation of the linear eigenvalue
statistic Σ_{k=1}^n f(e^{iθ_k}) around the average n f̂_0.
Theorem 2.8. If f ∈ L²(T, dµ) is such that Σ_{l=1}^∞ l |f̂_l|² < ∞, then

    Σ_{k=1}^n f(e^{iθ_k}) − n f̂_0 →^d N(0, σ_f²),    n → ∞,    (2.20)

where σ_f² = 2 Σ_{l=1}^∞ l |f̂_l|².
This theorem goes back to Szegő in the 1950s and is now known
as Szegő's strong limit theorem. There exist at least six different proofs,
under slightly different assumptions on f, in the literature; the most classical
one uses the orthogonal polynomials on the unit circle T. Here we prefer to
prove the theorem using the moment method of Diaconis and Evans (2001)
and Diaconis (2003). The interested reader is referred to Simon (2004) for the other
five proofs. See also a recent survey of Deift, Its and Krasovsky (2012) for
extensions and applications.
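Before turning to the proof, Theorem 2.8 is easy to see in simulation. For f(e^{iθ}) = 2cos θ one has f̂_{±1} = 1, so σ_f² = 2 and the unnormalized sum Σ_k 2cos θ_k should have variance about 2 for every n, whereas for n i.i.d. uniform points the same sum has variance 2n. A Monte Carlo sketch, assuming the QR-based Haar sampler introduced earlier (illustrative code):

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(3)
n, reps = 12, 3000
cue_sums, iid_sums = [], []
for _ in range(reps):
    angles = np.angle(np.linalg.eigvals(haar_unitary(n, rng)))
    cue_sums.append(np.sum(2 * np.cos(angles)))
    iid_sums.append(np.sum(2 * np.cos(rng.uniform(0, 2 * np.pi, n))))
cue_var = np.var(cue_sums)   # about sigma_f^2 = 2, independent of n
iid_var = np.var(iid_sums)   # about 2n = 24, growing with n
```

The bounded variance of the CUE sums is exactly the "no normalizing constant" phenomenon discussed in Remark 2.1 below.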
Lemma 2.4. Suppose that Z = X + iY is a complex standard normal
random variable. Then for any non-negative integers a and b,

    E Z^a Z̄^b = a! δ_{a,b}.

Proof. Z can clearly be written in polar coordinates as

    Z = γ e^{iθ},

where γ and θ are independent, θ is uniform over [0, 2π], and γ has density
function 2r e^{−r²}, r ≥ 0. It easily follows that

    E Z^a Z̄^b = E γ^{a+b} e^{iθ(a−b)} = E γ^{a+b} E e^{iθ(a−b)} = E γ^{2a} δ_{a,b} = a! δ_{a,b},

as desired. □
Lemma 2.5. (i) Suppose that Z_l, l ≥ 1, is a sequence of i.i.d. complex
standard normal random variables. Then for any m ≥ 1 and any non-negative
integers a_1, a_2, ..., a_m and b_1, b_2, ..., b_m,

    E ∏_{l=1}^m (Σ_{k=1}^n e^{ilθ_k})^{a_l} ∏_{l=1}^m (Σ_{k=1}^n e^{−ilθ_k})^{b_l}
        = E ∏_{l=1}^m (√l Z_l)^{a_l} ∏_{l=1}^m (√l Z̄_l)^{b_l}    (2.21)

whenever n ≥ max(Σ_{l=1}^m l a_l, Σ_{l=1}^m l b_l).
(ii) For any integers j, l ≥ 1,

    E (Σ_{k=1}^n e^{ilθ_k}) (Σ_{k=1}^n e^{−ijθ_k}) = δ_{j,l} min(l, n).    (2.22)

Proof. Recall the lth power sum polynomial

    p_l(e^{iθ_1}, ..., e^{iθ_n}) = Σ_{k=1}^n e^{ilθ_k}.

Then it follows that

    ∏_{l=1}^m (Σ_{k=1}^n e^{ilθ_k})^{a_l} = p_λ(e^{iθ_1}, ..., e^{iθ_n})

and

    ∏_{l=1}^m (Σ_{k=1}^n e^{−ilθ_k})^{b_l} = p_µ(e^{−iθ_1}, ..., e^{−iθ_n}),

where λ = (1^{a_1}, 2^{a_2}, ..., m^{a_m}) and µ = (1^{b_1}, 2^{b_2}, ..., m^{b_m}).
According to the orthogonality relation (2.19), we have

    E ∏_{l=1}^m (Σ_{k=1}^n e^{ilθ_k})^{a_l} ∏_{l=1}^m (Σ_{k=1}^n e^{−ilθ_k})^{b_l}
        = ⟨p_λ, p_µ⟩ = δ_{λ,µ} ∏_{l=1}^m l^{a_l} a_l!    (2.23)

whenever |λ|, |µ| ≤ n. By Lemma 2.4 and the independence of the Z_l, the
right hand side of (2.21) equals the same quantity, which yields the identity (2.21).
Turn to (2.22). We immediately know from (2.23) that the expectation
is zero if j ≠ l, while the case j = l has been proven in (2.6). □
As an immediate consequence, we have the following.

Theorem 2.9. For each integer m ≥ 1,

    (Σ_{k=1}^n e^{ilθ_k}, 1 ≤ l ≤ m) →^d (√l Z_l, 1 ≤ l ≤ m),    n → ∞.

In particular, it holds that

    Σ_{l=−m}^m f̂_l Σ_{k=1}^n e^{−ilθ_k} − n f̂_0 →^d N(0, σ_{m,f}²),    n → ∞,

where σ_{m,f}² = 2 Σ_{l=1}^m l |f̂_l|².

Proof of Theorem 2.8. Since Σ_{l=−∞}^∞ |f̂_l|² < ∞, we can express f
in terms of its Fourier series

    f(e^{iθ}) = Σ_{l=−∞}^∞ f̂_l e^{−ilθ}

(valid in this form because f is real, so that f̂_{−l} is the conjugate of f̂_l),
from which it follows that

    Σ_{k=1}^n f(e^{iθ_k}) = n f̂_0 + Σ_{l=1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k} + Σ_{l=1}^∞ f̂_{−l} Σ_{k=1}^n e^{ilθ_k}.
It is sufficient to establish the following statements: there exists a
sequence of numbers m_n with m_n → ∞ and m_n/n → 0 such that
(i)

    Σ_{l=m_n+1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k} →^P 0;    (2.24)

(ii)

    Σ_{l=1}^{m_n} f̂_l Σ_{k=1}^n e^{−ilθ_k} →^d (Σ_{l=1}^∞ l |f̂_l|²)^{1/2} N_C(0, 1),    (2.25)

where N_C(0, 1) denotes a complex standard normal random variable.
where NC (0, 1) denotes a complex standard normal random variable.
Indeed, for any sequence of numbers m_n with m_n → ∞, we have by (2.22)

    E|Σ_{l=m_n+1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k}|² = Σ_{l=m_n+1}^∞ min(l, n) |f̂_l|²
        ≤ Σ_{l=m_n+1}^∞ l |f̂_l|² → 0,

which directly implies (2.24) using the Markov inequality.
For (2.25), note that the moment identity (2.21) is applicable to yield

    E (Σ_{l=1}^{m_n} f̂_l Σ_{k=1}^n e^{−ilθ_k})^a (Σ_{l=1}^{m_n} f̂_{−l} Σ_{k=1}^n e^{ilθ_k})^b
        = E (Σ_{l=1}^{m_n} f̂_l √l Z̄_l)^a (Σ_{l=1}^{m_n} f̂_{−l} √l Z_l)^b
        = (Σ_{l=1}^{m_n} l |f̂_l|²)^a a! δ_{a,b}    (2.26)

whenever n ≥ m_n(a + b).
If m_n/n → 0, then for any non-negative integers a and b the
assumption n ≥ m_n(a + b) holds for sufficiently large n. Now we
can conclude claim (2.25) by letting n → ∞ in (2.26). The proof is now
complete. □

Remark 2.1. A remarkable feature of Theorem 2.8 is that there is no normalizing
constant on the left hand side of (2.20). Recall that there is a normalizing
constant 1/√n in the central limit theorem for sums of i.i.d. random
variables with finite variance. This further manifests that the eigenvalues of
the CUE spread out more regularly on the unit circle T than independent
uniform points. This phenomenon also appears in the central limit theorem
for linear functionals of eigenvalues of the Gaussian Unitary Ensemble
(GUE); see Chapter 3 below.

The following result shows that even when

    Σ_{l=1}^∞ l |f̂_l|² = ∞,

the central limit theorem for Σ_{k=1}^n f(e^{iθ_k}), after proper scaling, still holds
under a weak additional assumption. Recall that a positive sequence {c_k}
is said to be slowly varying if for any α > 0

    lim_{k→∞} c_{⌊αk⌋}/c_k = 1.

Theorem 2.10. Suppose that f ∈ L²(T, dµ) is such that

    B_n := Σ_{l=1}^n l |f̂_l|²,    n ≥ 1,

is slowly varying. Then

    (1/√(2B_n)) (Σ_{k=1}^n f(e^{iθ_k}) − n f̂_0) →^d N(0, 1).

Proof. As in the proof of Theorem 2.8, it follows that

    Σ_{k=1}^n f(e^{iθ_k}) − n f̂_0 = Σ_{l=1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k} + Σ_{l=1}^∞ f̂_{−l} Σ_{k=1}^n e^{ilθ_k}.

It is enough to prove

    (1/√B_n) Σ_{l=1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k} →^d N_C(0, 1).

Because B_n, n ≥ 1, is slowly varying, there must be a sequence of integers
m_n such that, as n → ∞,

    m_n → ∞,    m_n/n → 0

and

    B_{m_n}/B_n → 1.    (2.27)

We shall establish the following statements:
(i)

    (1/√B_n) Σ_{l=m_n+1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k} →^P 0;    (2.28)

(ii)

    (1/√B_n) Σ_{l=1}^{m_n} f̂_l Σ_{k=1}^n e^{−ilθ_k} →^d N_C(0, 1).    (2.29)
According to (2.22),

    E|Σ_{l=m_n+1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k}|² = Σ_{l=m_n+1}^∞ |f̂_l|² min(l, n)
        = Σ_{l=m_n+1}^n l |f̂_l|² + n Σ_{l=n+1}^∞ |f̂_l|².

Summing by parts,

    Σ_{l=n+1}^∞ |f̂_l|² = Σ_{l=n}^∞ (B_{l+1} − B_l)/(l + 1)
        = Σ_{l=n+1}^∞ B_l/(l(l+1)) − B_n/(n+1).

Since B_n, n ≥ 1, is slowly varying,

    (n/B_n) Σ_{l=n+1}^∞ B_l/(l(l+1)) → 1,

so that n Σ_{l=n+1}^∞ |f̂_l|² = o(B_n); moreover, Σ_{l=m_n+1}^n l |f̂_l|² = B_n − B_{m_n} = o(B_n)
by (2.27). Putting these together implies

    (1/B_n) E|Σ_{l=m_n+1}^∞ f̂_l Σ_{k=1}^n e^{−ilθ_k}|² → 0,

which in turn implies (2.28).



Turn to (2.29). Fix non-negative integers a and b. Since m_n/n → 0,
we have n ≥ m_n(a + b) for sufficiently large n, so (2.21) is applicable to yield

    E (Σ_{l=1}^{m_n} f̂_l Σ_{k=1}^n e^{−ilθ_k})^a (Σ_{l=1}^{m_n} f̂_{−l} Σ_{k=1}^n e^{ilθ_k})^b
        = (Σ_{l=1}^{m_n} l |f̂_l|²)^a a! δ_{a,b} = B_{m_n}^a a! δ_{a,b}.    (2.30)

Thus (2.29) follows from (2.27) and (2.30). The proof is now complete. □
To conclude this section, we shall look at two interesting examples. The
first one is the distribution of values taken by the logarithm of the characteristic
polynomial of a random unitary matrix. Recall that the characteristic
polynomial of a matrix U_n is defined by the determinant

    det(zI_n − U_n).

Fix z = e^{iθ_0} and assume U_n is from the CUE. Since e^{iθ_0} is almost surely
not an eigenvalue of U_n,

    det(e^{iθ_0} I_n − U_n) = ∏_{k=1}^n (e^{iθ_0} − e^{iθ_k}) ≠ 0.

It is fascinating that the logarithm of det(e^{iθ_0} I_n − U_n), properly scaled,
weakly converges to a normal distribution, analogous to Selberg's result on
the normal distribution of values of the logarithm of the Riemann zeta
function. This was first observed by Keating and Snaith (2000), who
argued that the Riemann zeta function on the critical line could be modelled
by the characteristic polynomial of a random unitary matrix.

Theorem 2.11. As n → ∞,

    (1/√(log n)) (log det(e^{iθ_0} I_n − U_n) − inθ_0) →^d N_C(0, 1),

where log denotes the usual branch of the logarithm defined on C \ {z :
Re(z) ≤ 0}.

Proof. First observe

    log det(e^{iθ_0} I_n − U_n) − inθ_0 = Σ_{k=1}^n log(1 − e^{i(θ_k−θ_0)}).

According to Weyl's formula,

    (e^{i(θ_k−θ_0)}, 1 ≤ k ≤ n) =^d (e^{iθ_k}, 1 ≤ k ≤ n).

Hence it suffices to prove

    (1/√(log n)) Σ_{k=1}^n log(1 − e^{iθ_k}) →^d N_C(0, 1).

Note that for any n ≥ 1,

    Σ_{k=1}^n log(1 − e^{iθ_k}) = − Σ_{l=1}^∞ (1/l) Σ_{k=1}^n e^{ilθ_k},    a.s.    (2.31)

Indeed, for any real r > 1,

    log(1 − e^{iθ_k}/r) = − Σ_{l=1}^∞ (1/(l r^l)) e^{ilθ_k},

and so

    Σ_{k=1}^n log(1 − e^{iθ_k}/r) = − Σ_{l=1}^∞ (1/(l r^l)) Σ_{k=1}^n e^{ilθ_k}.

Thus we have by virtue of (2.22)

    E|Σ_{l=1}^∞ (1/l)(1/r^l − 1) Σ_{k=1}^n e^{ilθ_k}|² = Σ_{l=1}^∞ (1/l²)(1/r^l − 1)² min(l, n)
        = Σ_{l=1}^n (1/l)(1/r^l − 1)² + n Σ_{l=n+1}^∞ (1/l²)(1/r^l − 1)².

Letting r → 1+ easily yields

    E|Σ_{l=1}^∞ (1/l)(1/r^l − 1) Σ_{k=1}^n e^{ilθ_k}|² → 0,

which in turn implies (2.31).


Now we need only prove
∞ n
1 X 1 X ilθk d
√ e −→ NC (0, 1).
log n l=1 l k=1

The proof is very similar to that of Theorem 2.8. Let mn = n/log n so that
mn → ∞ and mn /n → 0. We shall establish the following statements:
(i)
∞ n
1 X 1 X ilθk P
√ e −→ 0; (2.32)
log n l=m +1 l k=1
n

(ii)

    (1/√(log n)) Σ_{l=1}^{m_n} (1/l) Σ_{k=1}^n e^{ilθ_k} →^d N_C(0, 1).    (2.33)

According to (2.22), it holds that

    E|Σ_{l=m_n+1}^∞ (1/l) Σ_{k=1}^n e^{ilθ_k}|² = Σ_{l=m_n+1}^n (1/l) + n Σ_{l=n+1}^∞ (1/l²) = O(log log n)

(the first sum equals log(n/m_n) + O(1) = log log n + O(1)),
which together with the Markov inequality directly implies (2.32).
To prove (2.33), note that for any non-negative integers a and b,

    E (Σ_{l=1}^{m_n} (1/l) Σ_{k=1}^n e^{ilθ_k})^a (Σ_{l=1}^{m_n} (1/l) Σ_{k=1}^n e^{−ilθ_k})^b
        = (Σ_{l=1}^{m_n} 1/l)^a a! δ_{a,b} = (1 + o(1)) (log n)^a a! δ_{a,b},

as desired. □
The second example of interest is the number of eigenvalues lying in an arc. For $0\le a<b<2\pi$, write $N_n(a,b)$ for the number of eigenvalues $e^{i\theta_k}$ with $\theta_k\in[a,b]$. That is,
$$N_n(a,b)=\sum_{k=1}^{n}1_{(a,b)}(\theta_k).$$
Since each eigenvalue $e^{i\theta_k}$ is uniform over $\mathbb{T}$,
$$EN_n(a,b)=\frac{n(b-a)}{2\pi}.$$
The following theorem, due to Wieand (1998) (see also Diaconis and Evans (2001)), shows that the fluctuation of $N_n(a,b)$ around its mean is asymptotically normal. It is worth mentioning that the asymptotic variance $\log n$ (up to a constant) is very typical in the study of counts of points, such as eigenvalues, in an interval. The reader will again see it in the study of the GUE and random Plancherel partitions.

Theorem 2.12. For $0\le a<b<2\pi$, as $n\to\infty$
$$\frac{N_n(a,b)-\frac{n(b-a)}{2\pi}}{\frac{1}{\pi}\sqrt{\log n}}\xrightarrow{d} N(0,1).\tag{2.34}$$
Proof. (2.34) is actually a direct consequence of Theorem 2.10. Indeed, set
$$f\big(e^{i\theta}\big)=1_{(a,b)}(\theta).$$
Then a simple calculation shows
$$\hat f_0=\frac{b-a}{2\pi},\qquad \hat f_l=\frac{1}{2\pi il}\big(e^{-ila}-e^{-ilb}\big),\quad l\neq0,$$
and so
$$B_n:=\sum_{l=1}^{n}l|\hat f_l|^{2}=\frac{1}{4\pi^{2}}\sum_{l=1}^{n}\frac{1}{l}\big|e^{-ila}-e^{-ilb}\big|^{2}=\frac{1}{4\pi^{2}}\sum_{l=1}^{n}\frac{1}{l}\big(2-2\cos l(b-a)\big).$$
On the other hand, an elementary calculation shows
$$\frac{1}{\log n}\sum_{l=1}^{n}\frac{\cos l(b-a)}{l}\to0,\qquad n\to\infty.$$
Hence $B_n$ is slowly varying and
$$\frac{B_n}{\log n}\to\frac{1}{2\pi^{2}},\qquad n\to\infty.$$
The proof is complete. $\square$
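The limit $B_n/\log n\to 1/(2\pi^2)$ is deterministic and can be checked directly. A minimal sketch (plain Python; the arc length and the value of $n$ are arbitrary test values):

```python
import math

delta = 1.0        # arc length b - a (arbitrary)
n = 1_000_000

# B_n = (1/4pi^2) * sum_{l=1}^n (2 - 2 cos(l*delta)) / l, as in the proof
Bn = sum((2 - 2 * math.cos(l * delta)) / l for l in range(1, n + 1)) / (4 * math.pi ** 2)
ratio = Bn / math.log(n)
print(ratio, 1 / (2 * math.pi ** 2))   # the two values agree up to O(1/log n)
```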
The above theorem deals only with the number of eigenvalues in a single arc. In a very similar way, employing the Cramér-Wold device, one may prove finite dimensional normal convergence for multiple arcs.
Theorem 2.13. As $n\to\infty$, the finite dimensional distributions of the processes
$$\Big\{\frac{N_n(a,b)-\frac{n(b-a)}{2\pi}}{\frac{1}{\pi}\sqrt{\log n}},\ 0\le a<b<2\pi\Big\}$$
converge to those of a centered Gaussian process $\{Z(a,b):0\le a<b<2\pi\}$ with the covariance structure
$$EZ(a,b)Z(a',b')=\begin{cases}1,&\text{if } a=a'\text{ and } b=b',\\ \tfrac12,&\text{if } a=a'\text{ and } b\neq b',\\ \tfrac12,&\text{if } a\neq a'\text{ and } b=b',\\ -\tfrac12,&\text{if } b=a',\\ 0,&\text{otherwise.}\end{cases}$$

Proof. See Theorem 6.1 of Diaconis (2001). $\square$

2.4 Five diagonal matrix models

This section aims to establish a five diagonal sparse matrix model for the CUE and to provide an alternative approach to the asymptotic normality of the characteristic polynomials and of the number of eigenvalues lying in an arc. We first introduce basic notions of orthogonal polynomials and Verblunsky coefficients associated to a finitely supported measure on the unit circle, and quickly review some well-known facts, including the Szegö recurrence equations and Verblunsky's theorem. The measure we will be concerned with is the spectral measure induced by a unitary matrix and a cyclic vector. Two matrices of interest to us are the upper triangular Hessenberg matrix and the CMV five diagonal matrix, whose Verblunsky coefficients can be expressed in a simple way. Then we turn to a random unitary matrix distributed with Haar measure. Of particular interest, the associated Verblunsky coefficients are independent $\Theta_v$-distributed complex random variables. Thus, as a consequence of Verblunsky's theorem, we naturally get a five diagonal matrix model for the CUE. Lastly, we rederive Theorems 2.11 and 2.12 via a purely probabilistic approach: using only the classical central limit theorems for sums of independent random variables and martingale difference sequences.
Assume we are given a probability measure $d\nu$ supported on exactly $n$ points $e^{i\theta_1},e^{i\theta_2},\cdots,e^{i\theta_n}$ with masses $\nu_1,\nu_2,\cdots,\nu_n$, where $\nu_i>0$ and $\sum_{i=1}^{n}\nu_i=1$. Let $L^2(\mathbb{T},d\nu)$ be the space of square integrable functions on $\mathbb{T}$ with respect to $d\nu$, with the inner product given by
$$\langle f,g\rangle=\int_{\mathbb{T}}f\big(e^{i\theta}\big)\overline{g\big(e^{i\theta}\big)}\,d\nu.$$
Applying the Gram-Schmidt algorithm to the ordered set $\{1,z,\cdots,z^{n-1}\}$, we can get a sequence of orthogonal polynomials $\Phi_0,\Phi_1,\cdots,\Phi_{n-1}$, where
$$\Phi_0(z)=1,\qquad \Phi_k(z)=z^{k}+\text{lower order}.$$
Define the Szegö dual by
$$\Phi_k^{*}(z)=z^{k}\overline{\Phi_k\big(\bar z^{-1}\big)}.$$
Namely,
$$\Phi_k(z)=\sum_{l=0}^{k}c_l z^{l}\ \Longrightarrow\ \Phi_k^{*}(z)=\sum_{l=0}^{k}\bar c_{k-l}z^{l}.$$
As Szegö discovered, there exist complex constants $\alpha_0,\alpha_1,\cdots,\alpha_{n-2}\in\mathbb{D}$, where $\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}$, such that for $0\le k\le n-2$
$$\Phi_{k+1}(z)=z\Phi_k(z)-\bar\alpha_k\Phi_k^{*}(z)\tag{2.35}$$
and
$$\Phi_{k+1}^{*}(z)=\Phi_k^{*}(z)-\alpha_k z\Phi_k(z).\tag{2.36}$$
Expanding $z^{n}$ in this basis shows that there exists an $\alpha_{n-1}$, say $\alpha_{n-1}=e^{i\eta}\in\mathbb{T}$, $0\le\eta<2\pi$, such that if we let
$$\Phi_n(z)=z\Phi_{n-1}(z)-\bar\alpha_{n-1}\Phi_{n-1}^{*}(z),\tag{2.37}$$
then
$$\Phi_n(z)=0\quad\text{in } L^2(\mathbb{T},d\nu).$$
Define
$$\rho_k=\sqrt{1-|\alpha_k|^{2}},\qquad 0\le k\le n-1;$$
then it follows from the recurrence relations (2.35) and (2.36) that
$$\|\Phi_0\|=1,\qquad \|\Phi_k\|=\prod_{l=0}^{k-1}\rho_l,\quad k\ge1.\tag{2.38}$$

The orthonormal polynomial $\phi_k$ is defined by
$$\phi_k(z)=\frac{\Phi_k(z)}{\|\Phi_k\|}.$$
We call αk , 0 ≤ k ≤ n − 1 the Verblunsky coefficients associated to the
measure dν, which play an important role in the study of unitary matrices.
We sometimes write αk (dν) for αk to emphasize the dependence on the
underlying measure dν. A basic fact we need below is

Theorem 2.14. There is a one-to-one correspondence between finitely supported probability measures $d\nu$ on $\mathbb{T}$ and complex numbers $\alpha_0,\alpha_1,\cdots,\alpha_{n-1}$ with $\alpha_0,\alpha_1,\cdots,\alpha_{n-2}\in\mathbb{D}$ and $\alpha_{n-1}\in\mathbb{T}$.

This theorem is now called Verblunsky's theorem (also called Favard's theorem for the circle). The reader is referred to Simon (2004) for the proof (at least four proofs are presented).
It is very expedient to encode the Szegö recurrence relations (2.35) and (2.36). Let
$$B_k(z)=z\frac{\Phi_k(z)}{\Phi_k^{*}(z)}.$$
It easily follows that
$$B_0(z)=z,\qquad B_{k+1}(z)=zB_k(z)\,\frac{1-\bar\alpha_k\overline{B_k(z)}}{1-\alpha_k B_k(z)},\qquad z\in\mathbb{T},\tag{2.39}$$
which shows that the $B_k(z)$ can be completely expressed in terms of the Verblunsky coefficients $\alpha_k$.
Note that in view of (2.39), $B_k$ is a finite Blaschke product of degree $k+1$. Define a continuous function $\psi_k(\theta):[0,2\pi)\to\mathbb{R}$ via
$$B_k\big(e^{i\theta}\big)=e^{i\psi_k(\theta)}.\tag{2.40}$$
$\psi_k(\theta)$ is the absolute Präfer phase of $B_k$, so the set of points
$$\big\{e^{i\theta}:B_{n-1}(e^{i\theta})=\bar\alpha_{n-1}\big\}=\big\{e^{i\theta}:\psi_{n-1}(\theta)\in2\pi\mathbb{Z}+\eta\big\}$$
is the support of $d\nu$.
Also, $\psi_k(\theta)$ is a strictly increasing function of $\theta$. To avoid ambiguity, we may choose a branch of the logarithm in (2.39) so that
$$\psi_0(\theta)=\theta,\qquad \psi_{k+1}(\theta)=\psi_k(\theta)+\theta+\Upsilon(\psi_k,\alpha_k),\tag{2.41}$$
where $\Upsilon(\psi,\alpha)=-2\operatorname{Im}\log\big(1-\alpha e^{i\psi}\big)$.
Let $\mathbb{C}[z]$ be the vector space of complex polynomials in the variable $z$. Consider the multiplication operator $\Pi:f(z)\mapsto zf(z)$ on $\mathbb{C}[z]$. We easily obtain an explicit expression of $\Pi$ in the basis of orthonormal polynomials $\phi_k$, $0\le k\le n-1$. In particular,
$$\Pi\begin{pmatrix}\phi_0\\ \phi_1\\ \vdots\\ \phi_{n-2}\\ \phi_{n-1}\end{pmatrix}=H_n^{L}\begin{pmatrix}\phi_0\\ \phi_1\\ \vdots\\ \phi_{n-2}\\ \phi_{n-1}\end{pmatrix}+\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \frac{\Phi_n}{\|\Phi_{n-1}\|}\end{pmatrix},$$
where $H_n^{L}=\big(H_{ij}^{L}\big)_{0\le i,j\le n-1}$ is a lower triangular Hessenberg matrix given by
$$H_{ij}^{L}=\begin{cases}-\alpha_{j-1}\bar\alpha_i\prod_{l=j+1}^{i}\rho_l,& j\le i-1,\\ -\alpha_{i-1}\bar\alpha_i,& j=i,\\ \rho_i,& j=i+1,\\ 0,& j>i+1.\end{cases}$$
A simple algebra further shows that the characteristic polynomial of $H_n^{L}$ is equal to the $n$th polynomial $\Phi_n(z)$ defined in (2.37). Namely,
$$\det\big(zI_n-H_n^{L}\big)=\Phi_n(z),\tag{2.42}$$
which implies that $e^{i\theta_1},\cdots,e^{i\theta_n}$ are the spectrum of $H_n^{L}$. So the spectral analysis of $H_n^{L}$ can give relations between the zeros of orthogonal polynomials and the Verblunsky coefficients. However, $H_n^{L}$ is far from a sparse matrix, and its entries $H_{ij}^{L}$ depend on the Verblunsky coefficients $\alpha_k$ and $\rho_k$ in a complicated way. This makes the task difficult. Moreover, the numerical computation of zeros of high degree orthogonal polynomials becomes a nontrivial problem due to the Hessenberg structure of $H_n^{L}$.
To overcome this difficulty, Cantero, Moral, and Velázquez (2003) used a simple and ingenious idea. Applying the Gram-Schmidt procedure to the first $n$ terms of the ordered set $\{1,z,z^{-1},z^{2},z^{-2},\cdots\}$ rather than $\{1,z,\cdots,z^{n-1}\}$, we can get a sequence of orthogonal Laurent polynomials, denoted by $\chi_k(z)$, $0\le k\le n-1$. We will refer to the $\chi_k$ as the standard right orthonormal L-polynomials with respect to the measure $d\nu$. Interestingly, the $\chi_k$ can be expressed in terms of the orthonormal polynomial $\phi_k$ and its Szegö dual $\phi_k^{*}$ as follows:
$$\chi_{2k+1}(z)=z^{-k}\phi_{2k+1}(z),\qquad \chi_{2k}(z)=z^{-k}\phi_{2k}^{*}(z).$$
Similarly, applying the Gram-Schmidt procedure to the first $n$ terms of the ordered set $\{1,z^{-1},z,z^{-2},z^{2},\cdots\}$, we can get another sequence of orthogonal Laurent polynomials, denoted by $\chi_{k*}$. We call the $\chi_{k*}$ the standard left orthogonal L-polynomials. It turns out that the $\chi_k$ and $\chi_{k*}$ are closely related to each other through the equation
$$\chi_{k*}(z)=\overline{\chi_k\big(\bar z^{-1}\big)}.$$
Define
$$\Xi_k=\begin{pmatrix}\bar\alpha_k&\rho_k\\ \rho_k&-\alpha_k\end{pmatrix}$$
for $0\le k\le n-2$, while $\Xi_{-1}=(1)$ and $\Xi_{n-1}=(\bar\alpha_{n-1})$ are $1\times1$ matrices. Then it readily follows from the Szegö recurrence relations that
$$\begin{pmatrix}\chi_{2k}(z)\\ \chi_{2k*}(z)\end{pmatrix}=\frac{1}{\rho_{2k-1}}\begin{pmatrix}-\alpha_{2k-1}&1\\ 1&-\bar\alpha_{2k-1}\end{pmatrix}\begin{pmatrix}\chi_{2k-1}(z)\\ \chi_{2k-1*}(z)\end{pmatrix},$$
$$\begin{pmatrix}\chi_{2k-1}(z)\\ \chi_{2k}(z)\end{pmatrix}=\Xi_{2k-1}\begin{pmatrix}\chi_{2k-1*}(z)\\ \chi_{2k*}(z)\end{pmatrix},\qquad z\begin{pmatrix}\chi_{2k*}(z)\\ \chi_{2k+1*}(z)\end{pmatrix}=\Xi_{2k}\begin{pmatrix}\chi_{2k}(z)\\ \chi_{2k+1}(z)\end{pmatrix}.$$
These can be further written as a five term recurrence equation:
$$z\chi_0(z)=\bar\alpha_0\chi_0(z)+\rho_0\chi_1(z),$$
$$z\begin{pmatrix}\chi_{2k-1}(z)\\ \chi_{2k}(z)\end{pmatrix}=\begin{pmatrix}\rho_{2k-2}\bar\alpha_{2k-1}&-\alpha_{2k-2}\bar\alpha_{2k-1}\\ \rho_{2k-2}\rho_{2k-1}&-\alpha_{2k-2}\rho_{2k-1}\end{pmatrix}\begin{pmatrix}\chi_{2k-2}(z)\\ \chi_{2k-1}(z)\end{pmatrix}+\begin{pmatrix}\rho_{2k-1}\bar\alpha_{2k}&\rho_{2k-1}\rho_{2k}\\ -\alpha_{2k-1}\bar\alpha_{2k}&-\alpha_{2k-1}\rho_{2k}\end{pmatrix}\begin{pmatrix}\chi_{2k}(z)\\ \chi_{2k+1}(z)\end{pmatrix}.$$
Construct now the $n\times n$ block diagonal matrices
$$L=\mathrm{diag}(\Xi_0,\Xi_2,\Xi_4,\cdots),\qquad M=\mathrm{diag}(\Xi_{-1},\Xi_1,\Xi_3,\cdots)$$
and define
$$C_n=ML,\qquad C_n^{\tau}=LM.\tag{2.43}$$
It is easy to check that $L$ and $M$ are symmetric unitary matrices, and so both $C_n$ and $C_n^{\tau}$ are unitary.
A direct manipulation of the matrix product shows that $C_n$ is a five diagonal sparse matrix. Specifically speaking, if $n=2k$, then $C_n$ is equal to
$$\begin{pmatrix}
\bar\alpha_0&\rho_0&0&0&0&\cdots&0&0\\
\rho_0\bar\alpha_1&-\alpha_0\bar\alpha_1&\rho_1\bar\alpha_2&\rho_1\rho_2&0&\cdots&0&0\\
\rho_0\rho_1&-\alpha_0\rho_1&-\alpha_1\bar\alpha_2&-\alpha_1\rho_2&0&\cdots&0&0\\
0&0&\rho_2\bar\alpha_3&-\alpha_2\bar\alpha_3&\rho_3\bar\alpha_4&\cdots&0&0\\
0&0&\rho_2\rho_3&-\alpha_2\rho_3&-\alpha_3\bar\alpha_4&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&0&\cdots&-\alpha_{n-3}\bar\alpha_{n-2}&-\alpha_{n-3}\rho_{n-2}\\
0&0&0&0&0&\cdots&\rho_{n-2}\bar\alpha_{n-1}&-\alpha_{n-2}\bar\alpha_{n-1}
\end{pmatrix},$$
while if $n=2k+1$, then $C_n$ is equal to
$$\begin{pmatrix}
\bar\alpha_0&\rho_0&0&0&\cdots&0&0&0\\
\rho_0\bar\alpha_1&-\alpha_0\bar\alpha_1&\rho_1\bar\alpha_2&\rho_1\rho_2&\cdots&0&0&0\\
\rho_0\rho_1&-\alpha_0\rho_1&-\alpha_1\bar\alpha_2&-\alpha_1\rho_2&\cdots&0&0&0\\
0&0&\rho_2\bar\alpha_3&-\alpha_2\bar\alpha_3&\cdots&0&0&0\\
0&0&\rho_2\rho_3&-\alpha_2\rho_3&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&0&\cdots&\rho_{n-3}\bar\alpha_{n-2}&-\alpha_{n-3}\bar\alpha_{n-2}&\rho_{n-2}\bar\alpha_{n-1}\\
0&0&0&0&\cdots&\rho_{n-3}\rho_{n-2}&-\alpha_{n-3}\rho_{n-2}&-\alpha_{n-2}\bar\alpha_{n-1}
\end{pmatrix}.$$
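This block structure is easy to sanity-check numerically. A minimal numpy sketch (the coefficients, size and seed are arbitrary test values) that assembles $L$, $M$ and $C_n=ML$ as in (2.43) and verifies that $C_n$ is unitary and five diagonal:

```python
import numpy as np

def cmv(alphas):
    """Build C_n = M L from alpha_0,...,alpha_{n-2} in the disk and alpha_{n-1} on the circle."""
    n = len(alphas)
    def xi(k):
        if k == -1:
            return np.array([[1.0 + 0j]])
        if k == n - 1:
            return np.array([[np.conj(alphas[k])]])
        a = alphas[k]
        rho = np.sqrt(1 - abs(a) ** 2)
        return np.array([[np.conj(a), rho], [rho, -a]])
    def blockdiag(blocks):
        m = sum(len(b) for b in blocks)
        out = np.zeros((m, m), dtype=complex)
        i = 0
        for b in blocks:
            out[i:i + len(b), i:i + len(b)] = b
            i += len(b)
        return out
    L = blockdiag([xi(k) for k in range(0, n, 2)])
    M = blockdiag([xi(k) for k in range(-1, n, 2)])
    return M @ L

rng = np.random.default_rng(0)
n = 8
alphas = rng.standard_normal(n) + 1j * rng.standard_normal(n)
alphas /= 2 * np.abs(alphas).max()                 # keep |alpha_k| < 1
alphas[-1] = np.exp(2j * np.pi * rng.random())     # alpha_{n-1} unimodular
C = cmv(alphas)
i, j = np.indices(C.shape)
print(np.allclose(C.conj().T @ C, np.eye(n)))      # True: C_n is unitary
print(np.allclose(C[abs(i - j) > 2], 0))           # True: C_n is five diagonal
```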
The multiplication operator $\Pi:f(z)\mapsto zf(z)$ can be explicitly expressed in the basis $\chi_k$, $0\le k\le n-1$, as follows. If $n=2k$, then
$$\Pi\begin{pmatrix}\chi_0\\ \chi_1\\ \vdots\\ \chi_{n-2}\\ \chi_{n-1}\end{pmatrix}=C_n\begin{pmatrix}\chi_0\\ \chi_1\\ \vdots\\ \chi_{n-2}\\ \chi_{n-1}\end{pmatrix}+\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \frac{\Phi_n}{z^{k-1}\|\Phi_{n-1}\|}\end{pmatrix},\tag{2.44}$$
while if $n=2k+1$, then
$$\Pi\begin{pmatrix}\chi_0\\ \chi_1\\ \vdots\\ \chi_{n-3}\\ \chi_{n-2}\\ \chi_{n-1}\end{pmatrix}=C_n\begin{pmatrix}\chi_0\\ \chi_1\\ \vdots\\ \chi_{n-3}\\ \chi_{n-2}\\ \chi_{n-1}\end{pmatrix}+\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \rho_{n-2}\frac{\Phi_n}{z^{k}\|\Phi_{n-1}\|}\\ -\alpha_{n-2}\frac{\Phi_n}{z^{k}\|\Phi_{n-1}\|}\end{pmatrix}.\tag{2.45}$$
The analog in the basis $\chi_{k*}$, $0\le k\le n-1$, holds with $C_n$ replaced by $C_n^{\tau}$. Call $C_n$ and $C_n^{\tau}$ the CMV matrices associated to $\alpha_0,\alpha_1,\cdots,\alpha_{n-1}$.
Similarly to the equation (2.42), we have

Lemma 2.6. In the above notations,
$$\det(zI_n-C_n)=\Phi_n(z).$$

Proof. If $n=2k$, then by (2.44)
$$(zI_n-C_n)\begin{pmatrix}\chi_0\\ \chi_1\\ \vdots\\ \chi_{n-2}\\ \chi_{n-1}\end{pmatrix}=\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \frac{\Phi_n(z)}{z^{k-1}\|\Phi_{n-1}\|}\end{pmatrix}.$$
Denote by $C_{n,k}$ the $k\times k$ principal submatrix of $C_n$. Applying Cramér's rule to solve the above system for $\chi_{n-1}(z)$, we get
$$\chi_{n-1}(z)=\frac{1}{\det(zI_n-C_n)}\det\begin{pmatrix}zI_{n-1}-C_{n,n-1}&\begin{matrix}0\\ \vdots\\ 0\end{matrix}\\ \cdots&\frac{\Phi_n(z)}{z^{k-1}\|\Phi_{n-1}\|}\end{pmatrix}=\frac{\Phi_n(z)}{z^{k-1}\|\Phi_{n-1}\|}\cdot\frac{\det(zI_{n-1}-C_{n,n-1})}{\det(zI_n-C_n)}.$$
Since $\chi_{n-1}(z)=z^{-(k-1)}\Phi_{n-1}(z)/\|\Phi_{n-1}\|$, this implies
$$\frac{\det(zI_n-C_n)}{\det(zI_{n-1}-C_{n,n-1})}=\frac{\Phi_n(z)}{\Phi_{n-1}(z)}.$$
Similarly, if $n=2k+1$, applying Cramér's rule to solve the initial system (2.45) for $\chi_{n-2}(z)$ gives
$$\chi_{n-2}(z)=\frac{1}{\det(zI_n-C_n)}\det\begin{pmatrix}zI_{n-2}-C_{n,n-2}&\begin{matrix}0&0\\ \vdots&\vdots\\ 0&0\end{matrix}\\ \begin{matrix}\cdots\\ \cdots\end{matrix}&\begin{matrix}\rho_{n-2}\frac{\Phi_n(z)}{z^{k}\|\Phi_{n-1}\|}&-\rho_{n-2}\bar\alpha_{n-1}\\ -\alpha_{n-2}\frac{\Phi_n(z)}{z^{k}\|\Phi_{n-1}\|}&z+\alpha_{n-2}\bar\alpha_{n-1}\end{matrix}\end{pmatrix}=z\rho_{n-2}\frac{\Phi_n(z)}{z^{k}\|\Phi_{n-1}\|}\cdot\frac{\det(zI_{n-2}-C_{n,n-2})}{\det(zI_n-C_n)},$$
and so, using $\chi_{n-2}(z)=z^{-(k-1)}\Phi_{n-2}(z)/\|\Phi_{n-2}\|$ and $\|\Phi_{n-1}\|=\rho_{n-2}\|\Phi_{n-2}\|$,
$$\frac{\det(zI_n-C_n)}{\det(zI_{n-2}-C_{n,n-2})}=\frac{\Phi_n(z)}{\Phi_{n-2}(z)}.$$
Thus we find by induction that for $n\ge1$
$$\frac{\det(zI_n-C_n)}{\det(z-C_{n,1})}=\frac{\Phi_n(z)}{\Phi_1(z)}.$$
Since
$$\Phi_1(z)=z-\bar\alpha_0=\det(z-C_{n,1}),$$
the claim follows. $\square$
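Lemma 2.6 is easy to confirm numerically for a small matrix. The sketch below (numpy; the coefficients are arbitrary test values) compares $\det(zI_n-C_n)$ with $\Phi_n(z)$ generated by the recurrences (2.35)-(2.37) at a few sample points:

```python
import numpy as np

alphas = np.array([0.3 + 0.2j, -0.4j, 0.25 + 0j, 0.1 - 0.3j, np.exp(0.7j)])
n = len(alphas)   # |alpha_k| < 1 for k < n-1; alpha_{n-1} lies on the circle

def xi(k):
    if k == -1:
        return np.array([[1.0 + 0j]])
    if k == n - 1:
        return np.array([[np.conj(alphas[k])]])
    a = alphas[k]
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def blockdiag(blocks):
    m = sum(len(b) for b in blocks)
    out = np.zeros((m, m), dtype=complex)
    i = 0
    for b in blocks:
        out[i:i + len(b), i:i + len(b)] = b
        i += len(b)
    return out

# C_n = M L as in (2.43)
C = blockdiag([xi(k) for k in range(-1, n, 2)]) @ blockdiag([xi(k) for k in range(0, n, 2)])

def Phi(z):
    p, ps = 1 + 0j, 1 + 0j            # Phi_0 and its Szego dual
    for a in alphas:                  # (2.35)-(2.36); the last step is (2.37)
        p, ps = z * p - np.conj(a) * ps, ps - a * z * p
    return p

for z in [0.7 + 0.2j, -1.3j, 2.0 + 0j, 0.5 - 0.8j]:
    assert abs(np.linalg.det(z * np.eye(n) - C) - Phi(z)) < 1e-8
print("characteristic polynomial of C_n matches Phi_n")
```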
In what follows we will be concerned with the spectral measure of a unitary matrix. Let $U_n$ be a unitary matrix from $\mathcal{U}_n$ and $e_1=(1,0,\cdots,0)^{\tau}$ a cyclic vector. Construct a probability measure $d\nu$ on $\mathbb{T}$ such that
$$\int_{\mathbb{T}}z^{m}\,d\nu=\langle U_n^{m}e_1,e_1\rangle,\qquad m\ge0.$$
Note that $d\nu$ is of finite support. Indeed, let $e^{i\theta_1},e^{i\theta_2},\cdots,e^{i\theta_n}$ be the eigenvalues of $U_n$; then there must exist a unitary matrix $V_n$ such that
$$U_n=V_n\begin{pmatrix}e^{i\theta_1}&0&\cdots&0\\ 0&e^{i\theta_2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&e^{i\theta_n}\end{pmatrix}V_n^{*}.$$
Furthermore, $V_n$ may be chosen to consist of eigenvectors $v_1,v_2,\cdots,v_n$. If we further require that $v_{11}:=q_1>0$, $v_{12}:=q_2>0$, $\cdots$, $v_{1n}:=q_n>0$, then $V_n$ is uniquely determined. In addition, it easily follows that
$$q_1^{2}+q_2^{2}+\cdots+q_n^{2}=1$$
because of the orthonormality of $V_n$. Thus it follows that
$$\langle U_n^{m}e_1,e_1\rangle=\sum_{j=1}^{n}q_j^{2}e^{im\theta_j},\qquad m\ge0.$$
So $d\nu$ is supported on $e^{i\theta_1},e^{i\theta_2},\cdots,e^{i\theta_n}$ and
$$\nu\big(e^{i\theta_j}\big)=q_j^{2},\qquad j=1,2,\cdots,n.$$
Having the measure $d\nu$ on $\mathbb{T}$, we can produce the Verblunsky coefficients $\alpha_k(d\nu)$. We shall below write $\alpha_k(U_n,e_1)$ for $\alpha_k(d\nu)$ to indicate the underlying matrix and cyclic vector. The following lemmas provide us with two nice examples of unitary matrices, whose proofs can be found in Simon (2004).

Lemma 2.7. Given a sequence of complex numbers $\alpha_0,\alpha_1,\cdots,\alpha_{n-2}\in\mathbb{D}$ and $\alpha_{n-1}\in\mathbb{T}$, construct an upper triangular Hessenberg matrix $H_n^{U}=\big(H_{ij}^{U}\big)_{0\le i,j\le n-1}$ by letting
$$H_{ij}^{U}=\begin{cases}-\alpha_{i-1}\bar\alpha_j\prod_{l=i}^{j-1}\rho_l,& i<j,\\ -\alpha_{i-1}\bar\alpha_i,& i=j,\\ \rho_j,& i=j+1,\\ 0,& i>j+1.\end{cases}\tag{2.46}$$
Then $\alpha_k\big(H_n^{U},e_1\big)=\alpha_k$, $0\le k\le n-1$.

Lemma 2.8. Given a sequence of complex numbers $\alpha_0,\alpha_1,\cdots,\alpha_{n-2}\in\mathbb{D}$ and $\alpha_{n-1}\in\mathbb{T}$, construct a CMV matrix $C_n$ as in (2.43). Then $\alpha_k(C_n,e_1)=\alpha_k$, $0\le k\le n-1$.

What is the distribution of the $\alpha_k(U_n,e_1)$ if $U_n$ is chosen at random from the CUE? To answer this question, we need to introduce the notion of a $\Theta_v$-distributed random variable. A complex random variable $Z$ is said to be $\Theta_v$-distributed ($v>1$) if for any bounded measurable $f$
$$Ef(Z)=\frac{v-1}{2\pi}\int_{\mathbb{D}}f(z)\big(1-|z|^{2}\big)^{(v-3)/2}\,dz.$$
For $v\ge2$ an integer, there is an intuitive geometric interpretation for $Z$.

Lemma 2.9. If $X=(X_1,\cdots,X_n,X_{n+1})\in\mathbb{R}^{n+1}$ is uniform over the $n$-dimensional unit sphere $S^{n}$, then for any $1\le k\le n$,
$$Ef(X_1,\cdots,X_k)=\frac{\Gamma(\frac{n+1}{2})}{\pi^{k/2}\Gamma(\frac{n-k+1}{2})}\int_{B_k}f(x_1,\cdots,x_k)\big(1-x_1^{2}-\cdots-x_k^{2}\big)^{(n-k-1)/2}\,dx_1\cdots dx_k,\tag{2.47}$$
where $B_k=\big\{(x_1,\cdots,x_k):x_1^{2}+\cdots+x_k^{2}<1\big\}$.
In particular, $X_1^{2}$ is Beta$(1/2,n/2)$-distributed and $X_1+iX_2$ is $\Theta_n$-distributed.
Proof. (2.47) is actually a direct consequence of the following change-of-variables formula using the matrix volume:
$$\int_{V}f(v)\,dv=\int_{U}(f\circ\phi)(u)\,\mathrm{vol}\,J_{\phi}(u)\,du,\tag{2.48}$$
where $V\subseteq\mathbb{R}^{n}$ and $U\subseteq\mathbb{R}^{m}$ with $n\ge m$, $f$ is integrable on $V$, $\phi:U\to V$ is a sufficiently well-behaved function, $dv$ and $du$ denote respectively the volume elements in $V$ and $U$, and $\mathrm{vol}\,J_{\phi}(u)$ is the volume of the Jacobian matrix $J_{\phi}(u)$.
To apply (2.48) in our setting, let
$$\phi_k(x_1,\cdots,x_n)=x_k,\qquad 1\le k\le n,$$
and
$$\phi_{n+1}(x_1,\cdots,x_n)=\big(1-x_1^{2}-\cdots-x_n^{2}\big)^{1/2}.$$
So the upper hemisphere of $S^{n}$ is the graph of $B_n$ under the mapping $\phi=(\phi_1,\cdots,\phi_{n+1})$; for a function of $x_1,\cdots,x_k$ only, the lower hemisphere contributes equally. The Jacobian matrix of $\phi$ is
$$J_{\phi}=\begin{pmatrix}1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&\cdots&1&0\\ 0&0&\cdots&0&1\\ \frac{\partial\phi_{n+1}}{\partial x_1}&\frac{\partial\phi_{n+1}}{\partial x_2}&\cdots&\frac{\partial\phi_{n+1}}{\partial x_{n-1}}&\frac{\partial\phi_{n+1}}{\partial x_n}\end{pmatrix}.$$
This is an $(n+1)\times n$ rectangular matrix, whose volume is computed by
$$\mathrm{vol}\,J_{\phi}=\sqrt{\det\big(J_{\phi}^{\tau}J_{\phi}\big)}=\big(1-x_1^{2}-\cdots-x_n^{2}\big)^{-1/2}.$$
Hence, according to (2.48) and accounting for the two hemispheres, we have
$$Ef(X_1,\cdots,X_k)=\int_{S^{n}}f(x_1,\cdots,x_k)\,ds=\frac{\Gamma(\frac{n+1}{2})}{\pi^{(n+1)/2}}\int_{B_n}f(x_1,\cdots,x_k)\big(1-x_1^{2}-\cdots-x_n^{2}\big)^{-1/2}\,dx_1\cdots dx_n$$
$$=\frac{\Gamma(\frac{n+1}{2})}{\pi^{(n+1)/2}}\int_{B_k}f(x_1,\cdots,x_k)\,dx_1\cdots dx_k\int_{x_{k+1}^{2}+\cdots+x_n^{2}<1-x_1^{2}-\cdots-x_k^{2}}\big(1-x_1^{2}-\cdots-x_n^{2}\big)^{-1/2}\,dx_{k+1}\cdots dx_n,\tag{2.49}$$
where $ds$ denotes the uniform measure on $S^{n}$.
On the other hand, it is easy to compute
$$\int_{x_{k+1}^{2}+\cdots+x_n^{2}<1-x_1^{2}-\cdots-x_k^{2}}\big(1-x_1^{2}-\cdots-x_n^{2}\big)^{-1/2}\,dx_{k+1}\cdots dx_n$$
$$=\big(1-x_1^{2}-\cdots-x_k^{2}\big)^{(n-k-1)/2}\int_{B_{n-k}}\big(1-x_1^{2}-\cdots-x_{n-k}^{2}\big)^{-1/2}\,dx_1\cdots dx_{n-k}=\frac{\pi^{(n-k+1)/2}}{\Gamma(\frac{n-k+1}{2})}\big(1-x_1^{2}-\cdots-x_k^{2}\big)^{(n-k-1)/2}.\tag{2.50}$$
Substituting (2.50) into (2.49) immediately gives (2.47). $\square$

Remark 2.2. Lemma 2.9 can also be proved by using the following well-known fact: let $g_1,\cdots,g_{n+1}$ be a sequence of i.i.d. standard normal random variables; then
$$(X_1,\cdots,X_{n+1})\stackrel{d}{=}\frac{1}{(g_1^{2}+\cdots+g_{n+1}^{2})^{1/2}}(g_1,\cdots,g_{n+1}).$$
To keep notation consistent, $Z$ is said to be $\Theta_1$-distributed if $Z$ is uniform on the unit circle.
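Remark 2.2 gives an immediate $\Theta_v$ sampler. A minimal numpy sketch (the value of $v$, the sample size and the seed are arbitrary) checking the moment identities $E|Z|^2=2/(v+1)$ and $E|Z|^4=8/((v+1)(v+3))$ stated in Lemma 2.12 below:

```python
import numpy as np

def sample_theta(v, size, rng):
    """Theta_v sample via Remark 2.2: normalize (v+1) i.i.d. Gaussians to get a
    uniform point on S^v and keep the first two coordinates as Z = X_1 + i X_2."""
    g = rng.standard_normal((size, v + 1))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g[:, 0] + 1j * g[:, 1]

rng = np.random.default_rng(42)
v = 5
z = sample_theta(v, 200_000, rng)
m2, m4 = np.mean(np.abs(z) ** 2), np.mean(np.abs(z) ** 4)
print(m2, 2 / (v + 1))                   # both close to 1/3
print(m4, 8 / ((v + 1) * (v + 3)))       # both close to 1/6
```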

Theorem 2.15. Assume that $U_n$ is a unitary matrix chosen from $\mathcal{U}_n$ at random according to the Haar measure $d\mu_n$. Then the Verblunsky coefficients $\alpha_k(U_n,e_1)$ are independent $\Theta_{2(n-k-1)+1}$-distributed complex random variables.

The key to proving Theorem 2.15 is the Householder transform, which transfers a unitary matrix into an upper triangular Hessenberg form. Write $U_n=(u_{ij})_{n\times n}$. Let $w=(w_1,w_2,\cdots,w_n)^{\tau}$, where
$$w_1=0,\qquad w_2=-\frac{u_{21}}{|u_{21}|}\Big(\frac12-\frac{|u_{21}|}{2\alpha}\Big)^{1/2},\qquad w_l=-\frac{u_{l1}}{(2\alpha^{2}-2\alpha|u_{21}|)^{1/2}},\quad l\ge3,$$
where $\alpha>0$ and
$$\alpha^{2}:=|u_{21}|^{2}+|u_{31}|^{2}+\cdots+|u_{n1}|^{2}=1-|u_{11}|^{2}.$$
Trivially, it follows that
$$w^{*}w=1.$$
Define
$$R_n=I_n-2ww^{*}=\begin{pmatrix}1&0&\cdots&0\\ 0&&&\\ \vdots&&V_{n-1}&\\ 0&&&\end{pmatrix}.$$
This is a reflection through the plane perpendicular to $w$. It is easy to check that
$$R_n^{-1}=R_n^{*}=R_n$$
and
$$R_nU_nR_n=\begin{pmatrix}u_{11}&(u_{12},u_{13},\cdots,u_{1n})V_{n-1}\\ V_{n-1}\begin{pmatrix}u_{21}\\ u_{31}\\ \vdots\\ u_{n1}\end{pmatrix}&V_{n-1}U_{n,n-1}V_{n-1}\end{pmatrix},$$
where $U_{n,n-1}$ is the $(n-1)\times(n-1)$ submatrix of $U_n$ obtained by deleting the first row and the first column.
Take a closer look at the first column. The first element of $R_nU_nR_n$ is unchanged, $u_{11}$; while the second is
$$\big(1-2w_2w_2^{*},\ -2w_2w_3^{*},\ \cdots,\ -2w_2w_n^{*}\big)\begin{pmatrix}u_{21}\\ u_{31}\\ \vdots\\ u_{n1}\end{pmatrix}=\alpha\frac{u_{21}}{|u_{21}|},$$
and the third and below are zeros. So far we have described one step of the usual Householder algorithm. To make the second entry nonnegative, we need to add one further conjugation. Let $D_n$ differ from the identity matrix by having $(2,2)$-entry $e^{-i\phi}$ with $\phi$ chosen appropriately, and form $D_nR_nU_nR_nD_n^{*}$. Then we get the desired matrix
$$\begin{pmatrix}u_{11}&(u_{12},u_{13},\cdots,u_{1n})V_{n-1}\\ \begin{matrix}\sqrt{1-|u_{11}|^{2}}\\ 0\\ \vdots\\ 0\end{matrix}&V_{n-1}U_{n,n-1}V_{n-1}\end{pmatrix}.$$
Proof of Theorem 2.15. We shall apply the above refined Householder
algorithm to a random unitary matrix Un . To do this, we need the following
realization of Haar measure: choose the first column at random from the unit complex sphere; then choose the second column from the unit sphere of vectors orthogonal to the first; then the third column, and so forth. In this way we get a Haar matrix because it is invariant under left multiplication by any unitary matrix.
Now the first column of $U_n$ is a random vector from the unit complex sphere. After applying the above refined Householder algorithm, the new first column takes the form $(\bar\alpha_0,\rho_0,0,\cdots,0)^{\tau}$, where $\bar\alpha_0=u_{11}$ is the original $(1,1)$ entry of $U_n$ and so is, by Lemma 2.9, $\Theta_{2n-1}$-distributed, while $\rho_0=\sqrt{1-|\alpha_0|^{2}}$, as desired. The other columns are still orthogonal to the first column and form a random orthogonal basis for the orthogonal complement of the first column. Remember that Haar measure is invariant under both right and left multiplication by a unitary.
For the subsequent columns the procedure is similar. Assume the $(k-1)$th column is
$$\begin{pmatrix}\bar\alpha_{k-2}\rho_0\rho_1\cdots\rho_{k-3}\\ -\bar\alpha_{k-2}\alpha_0\rho_1\cdots\rho_{k-3}\\ \vdots\\ -\bar\alpha_{k-2}\alpha_{k-3}\\ \rho_{k-2}\\ 0\\ \vdots\\ 0\end{pmatrix}.$$
Let
$$X=\begin{pmatrix}\rho_0\rho_1\cdots\rho_{k-2}\\ -\alpha_0\rho_1\cdots\rho_{k-2}\\ \vdots\\ -\alpha_{k-3}\rho_{k-2}\\ -\alpha_{k-2}\\ 0\\ \vdots\\ 0\end{pmatrix};$$
then $X$ is a unit vector orthogonal to the first $k-1$ columns. Namely, $X$ is an element of the linear vector space spanned by the last $n-k+1$ columns of $U_n$. Its inner product with the $k$th column, denoted by $\bar\alpha_{k-1}$, is distributed as the entry of a random vector from the $2(n-k+1)$-sphere and is independent of $\alpha_0,\alpha_1,\cdots,\alpha_{k-2}$. This implies that $\alpha_{k-1}$ is $\Theta_{2(n-k)+1}$-distributed.
We now multiply the matrix at hand from the left by the appropriate reflection and rotation to bring the $k$th column into the desired form. Note that neither of these operations alters the top $k$ rows, and so the inner product of the $k$th column with $X$ is unchanged. But now the $k$th column is uniquely determined; it must be $\bar\alpha_{k-1}X+\rho_{k-1}e_{k+1}$, where $e_{k+1}=(0,\cdots,0,1,0,\cdots,0)^{\tau}$. We then multiply on the right by $RD^{*}$, but this leaves the first $k$ columns unchanged, while orthogonally intermixing the other columns. In this way, we obtain a matrix whose first $k$ columns conform to the structure of $H_n^{U}$, while the remaining columns form a random basis for the orthogonal complement of the span of those $k$ columns.
Proceeding inductively, we finally reach the last column. It must be a random orthonormal basis for the one-dimensional space orthogonal to the preceding $n-1$ columns, and hence a random unimodular multiple, say $\bar\alpha_{n-1}$, of $X$. This is why the last Verblunsky coefficient is $\Theta_1$-distributed.
We have now conjugated $U_n$ to a matrix in the Hessenberg form of Lemma 2.7. Note that the vector $e_1$ is unchanged under the action of each of the conjugating matrices; then
$$\alpha_k(U_n,e_1)=\alpha_k\big(H_n^{U},e_1\big)=\alpha_k.$$
We conclude the proof. $\square$


Combining Lemma 2.8 and Theorem 2.15 together, we immediately have

Theorem 2.16. Let α0 , α1 , · · · , αn−1 be a sequence of independent com-


plex random variables and αk is Θ2(n−k−1)+1 -distributed. Define the CMV
matrix Cn as in (2.43), then its eigenvalues are distributed according to
(2.1).

Cn is called a five diagonal matrix model of the CUE. It first appeared in


the work of Killip and Nenciu (2004).
The rest of this section will be used to rederive Theorems 2.11 and
2.12 with help of the Verblunsky coefficients αk and the Präfer phase ψk
introduced above.
Start by an identity in law due to Bourgade, Hughes, Nikeghbali and
Yor (2008).

Lemma 2.10. Let $V_n\in\mathcal{U}_n$ be a random matrix with the first column $v_1$ uniformly distributed on the $n$-dimensional unit complex sphere. If $U_{n-1}\in\mathcal{U}_{n-1}$ is distributed with Haar measure $d\mu_{n-1}$ and is independent of $V_n$, then
$$U_n:=V_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\tag{2.51}$$
is distributed with Haar measure $d\mu_n$ on $\mathcal{U}_n$.

Proof. We shall prove that for a fixed matrix $M\in\mathcal{U}_n$
$$MU_n\stackrel{d}{=}U_n.$$
Namely,
$$MV_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\stackrel{d}{=}V_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}.$$
Write $V_n=(v_1,v_2,\cdots,v_n)$. Since $v_1$ is uniform, so is $Mv_1$. By conditioning on $v_1=v$ and $Mv_1=v$, it suffices to show
$$(v,Mv_2,\cdots,Mv_n)\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\stackrel{d}{=}(v,v_2,\cdots,v_n)\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}.\tag{2.52}$$
Choose a unitary matrix $A$ such that $Av=e_1$. Since $A(v,Mv_2,\cdots,Mv_n)$ is unitary, it must be equal to
$$\begin{pmatrix}1&0\\ 0&X_{n-1}\end{pmatrix}$$
for some $X_{n-1}\in\mathcal{U}_{n-1}$. Similarly,
$$A(v,v_2,\cdots,v_n)=\begin{pmatrix}1&0\\ 0&Y_{n-1}\end{pmatrix}$$
for some $Y_{n-1}\in\mathcal{U}_{n-1}$. It is now easy to see that
$$\begin{pmatrix}1&0\\ 0&X_{n-1}\end{pmatrix}\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\stackrel{d}{=}\begin{pmatrix}1&0\\ 0&Y_{n-1}\end{pmatrix}\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix},$$
since $X_{n-1}U_{n-1}\stackrel{d}{=}U_{n-1}$ and $Y_{n-1}U_{n-1}\stackrel{d}{=}U_{n-1}$ by the rotation invariance of Haar measure. Thus, by the invertibility of $A$, (2.52) immediately follows, which concludes the proof. $\square$

Lemma 2.11. Let $U_n$ be a random matrix from the CUE with the Verblunsky coefficients $\alpha_0,\alpha_1,\cdots,\alpha_{n-1}$. Then
$$\det(I_n-U_n)\stackrel{d}{=}\prod_{k=0}^{n-1}(1-\alpha_k).\tag{2.53}$$

Proof. To apply Lemma 2.10, we choose a particular $V_n$ as follows. Let $v_1$ be a random vector uniformly distributed on the $n$-dimensional unit complex sphere. Define
$$V_n=\big(v_1,\ e_2+a_2(v_1-e_1),\ \cdots,\ e_n+a_n(v_1-e_1)\big),$$
where $e_1,\cdots,e_n$ are the standard basis vectors and $a_2,\cdots,a_n$ are such that $V_n$ is unitary, that is,
$$a_k=\frac{\langle v_1,e_k\rangle}{\langle v_1-e_1,e_1\rangle},\qquad k=2,3,\cdots,n.$$
According to Lemma 2.10, it follows that
$$\det(I_n-U_n)\stackrel{d}{=}\det\Big(I_n-V_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\Big),\tag{2.54}$$
where $U_{n-1}$ is distributed with Haar measure $d\mu_{n-1}$ independently of $v_1$. It remains to compute the determinant on the right hand side of (2.54). Note
$$\det\Big(I_n-V_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\Big)=\det\Big(\begin{pmatrix}1&0\\ 0&U_{n-1}^{*}\end{pmatrix}-V_n\Big)\det(U_{n-1}).\tag{2.55}$$
Set $U_{n-1}^{*}=(u_2,u_3,\cdots,u_n)$ and $w_1=v_1-e_1$. Then
$$\begin{pmatrix}1&0\\ 0&U_{n-1}^{*}\end{pmatrix}-V_n=\Big(-w_1,\ \begin{pmatrix}0\\ u_2\end{pmatrix}-(e_2+a_2w_1),\ \cdots,\ \begin{pmatrix}0\\ u_n\end{pmatrix}-(e_n+a_nw_1)\Big).$$
So, by the multi-linearity of the determinant in its columns,
$$\det\Big(\begin{pmatrix}1&0\\ 0&U_{n-1}^{*}\end{pmatrix}-V_n\Big)=\det\begin{pmatrix}-w_{11}&0\\ \begin{matrix}-w_{21}\\ \vdots\\ -w_{n1}\end{matrix}&U_{n-1}^{*}-I_{n-1}\end{pmatrix}=-w_{11}\det\big(U_{n-1}^{*}-I_{n-1}\big).\tag{2.56}$$
Substituting (2.56) into (2.55), we get
$$\det\Big(I_n-V_n\begin{pmatrix}1&0\\ 0&U_{n-1}\end{pmatrix}\Big)=-w_{11}\det(I_{n-1}-U_{n-1}).$$
Observe that $w_{11}=v_{11}-1$ and $v_{11}$ is $\Theta_{2n-1}$-distributed. Thus it follows that
$$\det(I_n-U_n)\stackrel{d}{=}(1-\alpha_0)\det(I_{n-1}-U_{n-1}).$$
Proceeding in this manner, we have
$$\det(I_n-U_n)\stackrel{d}{=}\prod_{k=0}^{n-1}(1-\alpha_k),$$
as required. $\square$
Remark 2.3. The identity (2.53) can also be deduced using the recurrence relation of the orthogonal polynomials $\Phi_k(z)$. Indeed, according to Theorem 2.16,
$$\det(I_n-U_n)\stackrel{d}{=}\det(I_n-C_n).\tag{2.57}$$
On the other hand, by Lemma 2.6 and (2.37),
$$\det(I_n-C_n)=\Phi_n(1)=\Phi_{n-1}(1)-\bar\alpha_{n-1}\Phi_{n-1}^{*}(1).$$
Note by (2.40)
$$\frac{\Phi_{n-1}^{*}(1)}{\Phi_{n-1}(1)}=e^{-i\psi_{n-1}(0)}.$$
Hence we have
$$\det(I_n-C_n)=\Phi_{n-1}(1)\big(1-\bar\alpha_{n-1}e^{-i\psi_{n-1}(0)}\big).$$
Inductively, using (2.35) we get
$$\det(I_n-C_n)=\prod_{k=0}^{n-1}\big(1-\bar\alpha_k e^{-i\psi_k(0)}\big).$$
Observe that $\psi_0(0)=0$ and $\psi_k(0)$ depends only on $\alpha_0,\alpha_1,\cdots,\alpha_{k-1}$. Using a conditioning argument and the rotation invariance of the $\alpha_k$, we get
$$\det(I_n-C_n)\stackrel{d}{=}\prod_{k=0}^{n-1}(1-\alpha_k),\tag{2.58}$$
which together with (2.57) implies (2.53). It is worth mentioning that the identity (2.58) is still valid for the C$\beta$E discussed in the next section.

We also need some basic estimates for the moments of $\Theta_v$-distributed random variables.

Lemma 2.12. Assume $Z$ is $\Theta_v$-distributed for some $v\ge1$.
(i) $|Z|$ and $\arg Z$ are independent real random variables. Moreover, $\arg Z$ is uniform over $(0,2\pi)$ and $|Z|$ is distributed with density function
$$p_{|Z|}(r)=(v-1)r\big(1-r^{2}\big)^{(v-3)/2},\qquad 0<r<1.$$
(ii)
$$EZ=EZ^{2}=0,\qquad E|Z|^{2}=\frac{2}{v+1},\qquad E|Z|^{4}=\frac{8}{(v+1)(v+3)}.$$

Proof. (i) directly follows from (2.47), while a simple computation easily yields (ii). $\square$
Proof of Theorem 2.11. Without loss of generality, we may and do assume $\theta_0=0$. According to Lemma 2.11, it suffices to prove the following asymptotic normality:
$$\frac{1}{\sqrt{\log n}}\sum_{k=0}^{n-1}\log(1-\alpha_k)\xrightarrow{d} N_{\mathbb{C}}(0,1).\tag{2.59}$$
Since $|\alpha_k|<1$ almost surely for $k=0,1,\cdots,n-2$,
$$\log(1-\alpha_k)=-\sum_{l=1}^{\infty}\frac{1}{l}\alpha_k^{l}.$$
Taking the summation over $k$, we have
$$\sum_{k=0}^{n-1}\log(1-\alpha_k)=-\sum_{l=1}^{\infty}\sum_{k=0}^{n-2}\frac{1}{l}\alpha_k^{l}+\log(1-\alpha_{n-1})$$
$$=-\sum_{k=0}^{n-2}\alpha_k-\frac{1}{2}\sum_{k=0}^{n-2}\alpha_k^{2}-\sum_{l=3}^{\infty}\sum_{k=0}^{n-2}\frac{1}{l}\alpha_k^{l}+\log(1-\alpha_{n-1})=:Z_{n,1}+Z_{n,2}+Z_{n,3}+Z_{n,4}.$$
Firstly, we shall prove
$$\frac{Z_{n,1}}{\sqrt{\log n}}\xrightarrow{d} N_{\mathbb{C}}(0,1).\tag{2.60}$$
This is equivalent to proving
$$\frac{1}{\sqrt{\log n}}\sum_{k=0}^{n-2}|\alpha_k|\cos\eta_k\xrightarrow{d} N\Big(0,\frac12\Big)\tag{2.61}$$
and
$$\frac{1}{\sqrt{\log n}}\sum_{k=0}^{n-2}|\alpha_k|\sin\eta_k\xrightarrow{d} N\Big(0,\frac12\Big),\tag{2.62}$$
where $\eta_k=\arg\alpha_k$.
We only prove (2.61), since (2.62) is similar. In view of Lemma 2.12,
$$E|\alpha_k|^{2}=\frac{1}{n-k},\qquad E|\alpha_k|^{4}=\frac{2}{(n-k)(n-k+1)}$$
and
$$E\cos^{2}\eta_k=\frac12,\qquad E\cos^{4}\eta_k\le1.$$
Since $\eta_k$ is uniform, it is easy to check that
$$\frac{1}{\log n}\sum_{k=0}^{n-2}E|\alpha_k|^{2}E\cos^{2}\eta_k\to\frac12$$
and
$$\sum_{k=0}^{n-2}E|\alpha_k|^{4}\le\frac{\pi^{2}}{3}.$$
Hence (2.61) is now a direct consequence of the Lyapunov CLT.
Secondly, to deal with $Z_{n,2}$, note by Lemma 2.12
$$E\Big|\sum_{k=0}^{n-2}\alpha_k^{2}\Big|^{2}=\sum_{k=0}^{n-2}E|\alpha_k|^{4}\le\frac{\pi^{2}}{3}.$$
This, together with the Markov inequality, easily implies
$$\frac{1}{\sqrt{\log n}}Z_{n,2}\xrightarrow{P}0.\tag{2.63}$$
Thirdly, it is easy to check that $EZ_{n,3}=0$ and
$$E|Z_{n,3}|^{2}\le\sum_{l=3}^{\infty}\sum_{k=0}^{n-2}\frac{1}{l}E|\alpha_k|^{2l}\le\frac{2\pi^{2}}{3}.$$
So it follows that
$$\frac{1}{\sqrt{\log n}}Z_{n,3}\xrightarrow{P}0.\tag{2.64}$$
Finally, for $Z_{n,4}$, note that $\alpha_{n-1}=e^{i\eta}$ is uniformly distributed on $\mathbb{T}$; then almost surely $\alpha_{n-1}\neq1$, and so $|\log(1-\alpha_{n-1})|<\infty$. Hence it follows that
$$\frac{1}{\sqrt{\log n}}Z_{n,4}\xrightarrow{P}0.\tag{2.65}$$
Gathering (2.60), (2.63), (2.64) and (2.65) together implies (2.59). $\square$
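The normalized sum in (2.59) is easy to simulate. The sketch below (numpy; the sizes and the seed are arbitrary test values) samples the Verblunsky coefficients through Lemma 2.12(i), using a uniform phase and $|\alpha_k|^2\sim\mathrm{Beta}(1,(v-1)/2)$ for $v>1$ (which follows from the stated density of $|Z|$), and checks that the real part of the sum has variance roughly $\tfrac12\log n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 400, 4000

def theta_sample(v, size):
    """Sample Theta_v: uniform phase; |Z|^2 ~ Beta(1, (v-1)/2) for v > 1,
    while for v = 1 the variable is uniform on the unit circle."""
    r = np.ones(size) if v == 1 else np.sqrt(rng.beta(1.0, (v - 1) / 2, size))
    return r * np.exp(2j * np.pi * rng.random(size))

S = np.zeros(trials, dtype=complex)
for k in range(n):
    S += np.log(1 - theta_sample(2 * (n - k - 1) + 1, trials))
ratio = np.var(S.real) / np.log(n)
print(ratio)   # roughly 1/2; the O(1/log n) corrections are still visible at this n
```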
Turn to Theorem 2.12. Let $\alpha_k=\alpha_k(U_n,e_1)$ be the Verblunsky coefficients associated to $(U_n,e_1)$, and construct the $B_k$ and $\psi_k$ as in (2.39) and (2.40); then
$$\big\{e^{i\theta}:B_{n-1}(e^{i\theta})=e^{-i\eta}\big\}=\big\{e^{i\theta}:\psi_{n-1}(\theta)\in2\pi\mathbb{Z}+\eta\big\}$$
is the set of eigenvalues of $U_n$. In particular, the number of angles lying in the arc $(a,b)\subseteq[0,2\pi)$ is approximately $(\psi_{n-1}(b)-\psi_{n-1}(a))/2\pi$; indeed, it follows that
$$\Big|N_n(a,b)-\frac{\psi_{n-1}(b)-\psi_{n-1}(a)}{2\pi}\Big|\le1.$$
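This phase-counting inequality can be checked directly on a sampled CMV matrix. A numpy sketch (the size, seed and arc endpoints are arbitrary test values), with the phase computed through the recursion (2.41):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
alphas = rng.standard_normal(n) + 1j * rng.standard_normal(n)
alphas /= 2 * np.abs(alphas).max()                 # |alpha_k| < 1
alphas[-1] = np.exp(2j * np.pi * rng.random())     # alpha_{n-1} on the circle

def xi(k):
    if k == -1:
        return np.array([[1.0 + 0j]])
    if k == n - 1:
        return np.array([[np.conj(alphas[k])]])
    a = alphas[k]
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def blockdiag(blocks):
    m = sum(len(b) for b in blocks)
    out = np.zeros((m, m), dtype=complex)
    i = 0
    for b in blocks:
        out[i:i + len(b), i:i + len(b)] = b
        i += len(b)
    return out

# C_n = M L as in (2.43); its eigenangles are the theta_j
C = blockdiag([xi(k) for k in range(-1, n, 2)]) @ blockdiag([xi(k) for k in range(0, n, 2)])
angles = np.angle(np.linalg.eigvals(C)) % (2 * np.pi)

def psi(theta):
    """Absolute Praefer phase psi_{n-1}(theta) via the recursion (2.41)."""
    p = theta
    for k in range(n - 1):
        p += theta - 2 * np.imag(np.log(1 - alphas[k] * np.exp(1j * p)))
    return p

a, b = 1.0, 4.0
count = int(np.sum((angles > a) & (angles < b)))
approx = (psi(b) - psi(a)) / (2 * np.pi)
print(count, approx)   # the two differ by at most 1
```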
In this way, it suffices to show that, asymptotically, $\psi_{n-1}(b)$ and $\psi_{n-1}(a)$ follow a joint normal law. This is the approach that Killip and Nenciu (2008) employed in the study of the general C$\beta$E.

Lemma 2.13. Assume $a,b\in\mathbb{R}$ and $\alpha\sim\Theta_v$. Define
$$\Upsilon(a,\alpha)=-2\operatorname{Im}\log\big(1-\alpha e^{ia}\big),\qquad \tilde\Upsilon(a,\alpha)=2\operatorname{Im}\big(\alpha e^{ia}\big).$$
Then we have
$$E\Upsilon(a,\alpha)=E\tilde\Upsilon(a,\alpha)=0,\tag{2.66}$$
$$E\tilde\Upsilon(a,\alpha)\tilde\Upsilon(b,\alpha)=\frac{4}{v+1}\cos(b-a),\tag{2.67}$$
$$E\tilde\Upsilon(a,\alpha)^{4}=\frac{48}{(v+1)(v+3)},\tag{2.68}$$
$$E\big|\Upsilon(a,\alpha)-\tilde\Upsilon(a,\alpha)\big|^{2}\le\frac{16}{(v+1)(v+3)},\tag{2.69}$$
$$E\big|\Upsilon(a,\alpha)\big|^{2}\le\frac{8}{v+1}.\tag{2.70}$$
Proof. The fact that $\alpha$ follows a rotationally invariant law immediately implies
$$E\Upsilon(a,\alpha)=0.$$
Specifically, for any $0\le r<1$,
$$\frac{1}{2\pi}\int_0^{2\pi}\log\big(1-re^{i(\theta+a)}\big)\,d\theta=0$$
by the mean value principle for harmonic functions. $E\tilde\Upsilon(a,\alpha)=0$ is similar and simpler.
For (2.67), note
$$\operatorname{Im}\big(\alpha e^{ia}\big)=|\alpha|\sin(a+\arg\alpha),$$
and so it follows by Lemma 2.12 that
$$E\tilde\Upsilon(a,\alpha)\tilde\Upsilon(b,\alpha)=4E|\alpha|^{2}\sin(a+\arg\alpha)\sin(b+\arg\alpha)$$
$$=\frac{4(v-1)}{2\pi}\int_0^{1}\int_0^{2\pi}r^{3}\big(1-r^{2}\big)^{(v-3)/2}\sin(\theta+a)\sin(\theta+b)\,d\theta\,dr=\frac{4}{v+1}\cos(b-a).$$
Similarly, we have
$$E\tilde\Upsilon(a,\alpha)^{4}=\frac{16(v-1)}{2\pi}\int_0^{1}\int_0^{2\pi}r^{5}\big(1-r^{2}\big)^{(v-3)/2}\sin^{4}(\theta+a)\,d\theta\,dr=\frac{48}{(v+1)(v+3)}.$$
Applying Plancherel's theorem to the power series formula for $\Upsilon$ gives
$$E\big|\Upsilon(a,\alpha)-\tilde\Upsilon(a,\alpha)\big|^{2}=\sum_{l=2}^{\infty}\frac{2}{l^{2}}E|\alpha|^{2l}\le2\Big(\frac{\pi^{2}}{6}-1\Big)E|\alpha|^{4};$$
(2.69) then easily follows from Lemma 2.12.
Lastly, combining (2.67) and (2.69) implies (2.70). $\square$

Lemma 2.14. Assume that a_k, Θ_k, γ_k, k ≥ 0, are real valued sequences satisfying

    Θ_{k+1} = Θ_k + δ + γ_k

with 0 < δ < 2π. Then we have

    |(1 − e^{iδ}) Σ_{k=1}^n a_k e^{iΘ_k}| ≤ 2 max_{1≤k≤n} |a_k| + Σ_{k=1}^n |a_k − a_{k+1}| + Σ_{k=1}^n |a_k γ_k|.   (2.71)

Proof. Note

    (e^{iδ} − 1) Σ_{k=1}^n a_k e^{iΘ_k} = Σ_{k=1}^n a_k (e^{i(Θ_k+δ)} − e^{iΘ_k})
        = Σ_{k=1}^n a_k (e^{iΘ_{k+1}} − e^{iΘ_k}) − Σ_{k=1}^n a_k (e^{iΘ_{k+1}} − e^{i(Θ_k+δ)}).

Then (2.71) easily follows using summation by parts and the fact |1 − e^{iγ_k}| ≤ |γ_k|. □
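The bound (2.71) can be exercised on random data; the sketch below (our own illustration, not from the book) builds sequences satisfying the recursion and checks the inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 200, 0.7
a = rng.normal(size=n + 1)            # a_1, ..., a_{n+1}
gamma = 0.1 * rng.normal(size=n)      # gamma_1, ..., gamma_n
Theta = np.zeros(n)
Theta[0] = rng.uniform(0, 2 * np.pi)
for k in range(n - 1):                # Theta_{k+1} = Theta_k + delta + gamma_k
    Theta[k + 1] = Theta[k] + delta + gamma[k]

lhs = abs((1 - np.exp(1j * delta)) * np.sum(a[:n] * np.exp(1j * Theta)))
rhs = (2 * np.abs(a[:n]).max()
       + np.abs(np.diff(a)).sum()     # sum of |a_k - a_{k+1}|
       + np.abs(a[:n] * gamma).sum()) # sum of |a_k gamma_k|
```

The summation-by-parts argument guarantees lhs ≤ rhs for any such data.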

Proof of Theorem 2.12. Start with 1-dimensional convergence. Note for any 0 < a < 2π

    ψ_{n−1}(a) − na = Σ_{k=1}^{n−1} (ψ_k(a) − ψ_{k−1}(a) − a) = Σ_{k=0}^{n−2} Υ(ψ_k(a), α_k),

where Υ(ψ_k(a), α_k) is defined as in (2.41). Since ψ_k(a) depends only on α_0, · · · , α_{k−1}, it follows from Lemma 2.13 that

    E(Υ(ψ_k(a), α_k) | α_0, · · · , α_{k−1}) = 0.

Namely, Υ(ψ_k(a), α_k), 0 ≤ k ≤ n − 2, is a martingale difference sequence. Define

    Υ̃(ψ_k(a), α_k) = −2Im(α_k e^{iψ_k(a)}).

Similarly, Υ̃(ψ_k(a), α_k), 0 ≤ k ≤ n − 2, is also a martingale difference sequence. Moreover, by Lemma 2.13 again,

    Σ_{k=0}^{n−2} E|Υ(ψ_k(a), α_k) − Υ̃(ψ_k(a), α_k)|^2 ≤ 3π^2.

Thus it suffices to prove

    (1/√(2 log n)) Σ_{k=0}^{n−2} Υ̃(ψ_k(a), α_k) →^d N(0, 1).

It is in turn sufficient to verify

    (1/(2 log n)) Σ_{k=0}^{n−2} E(Υ̃(ψ_k(a), α_k)^2 | α_0, · · · , α_{k−1}) →^P 1

and

    (1/(log n)^2) Σ_{k=0}^{n−2} EΥ̃(ψ_k(a), α_k)^4 → 0.

These directly follow from Lemma 2.12.


Next turn to 2-dimensional convergence. We need to verify

    (1/log n) Σ_{k=0}^{n−2} E(Υ̃(ψ_k(a), α_k) Υ̃(ψ_k(b), α_k) | α_0, · · · , α_{k−1}) →^P 0.   (2.72)

For distinct numbers a and b, we have by Lemma 2.12

    E(Υ̃(ψ_k(a), α_k) Υ̃(ψ_k(b), α_k) | α_0, · · · , α_{k−1}) = (2/(n − k)) cos(ψ_k(b) − ψ_k(a)).

Define

    a_k = 1/(n − k),   Θ_k = ψ_k(b) − ψ_k(a),   δ = b − a

and

    γ_k = Υ(ψ_k(b), α_k) − Υ(ψ_k(a), α_k);

then by (2.41)

    Θ_{k+1} = Θ_k + δ + γ_k.

Applying Lemma 2.14 and noting

    E|γ_k| ≤ 8/(n − k)^{1/2},

we have

    |1 − e^{i(b−a)}| · |Σ_{k=0}^{n−2} (1/(n − k)) E e^{i(ψ_k(b)−ψ_k(a))}| ≤ 9.

(2.72) is now valid. Thus by the martingale CLT

    (1/√(2 log n)) (ψ_{n−1}(a) − na, ψ_{n−1}(b) − nb) →^d (Z_1, Z_2),

where Z_1, Z_2 are independent standard normal random variables. □

2.5 Circular β ensembles

The goal of this section is to extend the five diagonal matrix representation
to the CβE.
Recall that the CβE represents a family of probability measures on
n points of T with density function pn,β (eiθ1 , · · · , eiθn ) defined by (2.5).
As observed in Section 2.1, pn,2 describes the joint probability density of
eigenvalues of a unitary matrix chosen from Un according to Haar measure.
Similarly, pn,1 (pn,4 ) describes the joint probability density of eigenvalues of
an orthogonal (symplectic) matrix chosen from On (Sn ) according to Haar
measure. However, no analog holds for general β > 0.

The following five-diagonal matrix model discovered by Killip and Nenciu (2004) plays an important role in the study of the CβE.

Theorem 2.17. Assume that α_0, α_1, · · · , α_{n−1} are independent complex random variables with α_k being Θ_{β(n−k−1)+1}-distributed, for β > 0. Construct the CMV matrix C_n as in (2.43); then the eigenvalues of C_n obey the same law as p_{n,β}.

The rest of this section is to prove the theorem. The proof is actually an ordinary use of the change of variables in standard probability theory. Let H_n^U be as in (2.46); then we have by Lemmas 2.7 and 2.8

    α_k(C_n, e_1) = α_k(H_n^U, e_1) = α_k.

So it suffices to prove the claim for H_n^U. Denote the ordered eigenvalues of H_n^U by e^{iθ_1}, · · · , e^{iθ_n}. Then there must be a unitary matrix V_n = (v_{ij})_{n×n} such that

    H_n^U = V_n diag(e^{iθ_1}, e^{iθ_2}, · · · , e^{iθ_n}) V_n^*.   (2.73)

V_n may be chosen to consist of eigenvectors v_1, v_2, · · · , v_n. We further require that v_{11} := q_1 > 0, v_{12} := q_2 > 0, · · · , v_{1n} := q_n > 0; thus V_n is uniquely determined. It easily follows that

    q_1^2 + q_2^2 + · · · + q_n^2 = 1   (2.74)

because of orthonormality of V_n.
The following lemma gives an elegant identity between the eigenvalues
and eigenvectors and the Verblunsky coefficients.

Lemma 2.15.

    ∏_{l=1}^n q_l^2 ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^2 = ∏_{l=0}^{n−2} (1 − |α_l|^2)^{n−l−1}.

Proof. Define A and Q by

    A = (e^{i(j−1)θ_k})_{j,k=1,··· ,n}   and   Q = diag(q_1^2, q_2^2, · · · , q_n^2);

then it follows that

    ∏_{l=1}^n q_l^2 ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^2 = det(AQA^*).

On the other hand, define

    B = (Φ_{j−1}(e^{iθ_k}))_{j,k=1,··· ,n},

where Φ_0, Φ_1, · · · , Φ_{n−1} are the monic orthogonal polynomials associated to the Verblunsky coefficients α_0, α_1, · · · , α_{n−1}. Then it is trivial to see that

    det(A) = det(B).

In addition, from the orthogonality property of the Φ_l it follows that

    BQB^* = diag(‖Φ_0‖^2, ‖Φ_1‖^2, · · · , ‖Φ_{n−1}‖^2),

where ‖Φ_l‖^2 = Σ_{j=1}^n q_j^2 |Φ_l(e^{iθ_j})|^2. Hence, according to (2.38),

    det(AQA^*) = det(BQB^*) = ∏_{l=0}^{n−1} ‖Φ_l‖^2 = ∏_{l=0}^{n−2} (1 − |α_l|^2)^{n−l−1},

just as required. □
A key ingredient of the proof of Theorem 2.17 is to look for a proper change of variables and to compute explicitly the corresponding Jacobian determinant. For any |t| < 1, it follows from (2.73) that

    (I_n − tH_n^U)^{−1} = V_n diag((1 − te^{iθ_1})^{−1}, · · · , (1 − te^{iθ_n})^{−1}) V_n^*.   (2.75)

Applying the Taylor expansion of (1 − x)^{−1}, and equating powers of t in the (1, 1) entries on both sides of (2.75), we can get the following system of equations:

    ᾱ_0 = Σ_{j=1}^n q_j^2 e^{iθ_j}
    ∗ + ρ_0^2 ᾱ_1 = Σ_{j=1}^n q_j^2 e^{i2θ_j}
    ∗ + ρ_0^2 ρ_1^2 ᾱ_2 = Σ_{j=1}^n q_j^2 e^{i3θ_j}                         (2.76)
        ···
    ∗ + ρ_0^2 ρ_1^2 · · · ρ_{n−2}^2 ᾱ_{n−1} = Σ_{j=1}^n q_j^2 e^{inθ_j},

where each ∗ denotes terms involving only variables already having appeared on the left hand side of the preceding equations.

In this way, we can naturally get a one-to-one mapping from (α_0, α_1, · · · , α_{n−1}) to (e^{iθ_1}, · · · , e^{iθ_n}, q_1, · · · , q_{n−1}). Recall that each α_k, 0 ≤ k ≤ n − 2, has independent real and imaginary parts, while α_{n−1} and the e^{iθ_j} have unit modulus. We see that the number of real variables on each side is equal to 2n − 1. In particular, let α_k = a_k + ib_k and define J to be the determinant of the Jacobian matrix for the change of variables, namely

    J = (⋀_{k=0}^{n−2} da_k ∧ db_k ∧ dᾱ_{n−1}) / (⋀_{l=1}^{n−1} dq_l ∧ ⋀_{j=1}^n dθ_j),

where ∧ stands for the wedge product.

We shall compute J explicitly below, following Forrester and Rains (2006). First, taking differentials on both sides of (2.74) immediately yields

    q_n dq_n = − Σ_{j=1}^{n−1} q_j dq_j.   (2.77)

Similarly, taking differentials on both sides of (2.76) gives

    dᾱ_0 = 2 Σ_{j=1}^n e^{iθ_j} q_j dq_j + i Σ_{j=1}^n q_j^2 e^{iθ_j} dθ_j
    ρ_0^2 dᾱ_1 = 2 Σ_{j=1}^n e^{i2θ_j} q_j dq_j + i2 Σ_{j=1}^n q_j^2 e^{i2θ_j} dθ_j
    ρ_0^2 ρ_1^2 dᾱ_2 = 2 Σ_{j=1}^n e^{i3θ_j} q_j dq_j + i3 Σ_{j=1}^n q_j^2 e^{i3θ_j} dθ_j   (2.78)
        ···
    ρ_0^2 ρ_1^2 · · · ρ_{n−2}^2 dᾱ_{n−1} = 2 Σ_{j=1}^n e^{inθ_j} q_j dq_j + in Σ_{j=1}^n q_j^2 e^{inθ_j} dθ_j.

Forming the complex conjugates of all these equations but the last, we get

    dα_0 = 2 Σ_{j=1}^n e^{−iθ_j} q_j dq_j − i Σ_{j=1}^n q_j^2 e^{−iθ_j} dθ_j
    ρ_0^2 dα_1 = 2 Σ_{j=1}^n e^{−i2θ_j} q_j dq_j − i2 Σ_{j=1}^n q_j^2 e^{−i2θ_j} dθ_j
    ρ_0^2 ρ_1^2 dα_2 = 2 Σ_{j=1}^n e^{−i3θ_j} q_j dq_j − i3 Σ_{j=1}^n q_j^2 e^{−i3θ_j} dθ_j   (2.79)
        ···
    ρ_0^2 ρ_1^2 · · · ρ_{n−3}^2 dα_{n−2} = 2 Σ_{j=1}^n e^{−i(n−1)θ_j} q_j dq_j − i(n − 1) Σ_{j=1}^n q_j^2 e^{−i(n−1)θ_j} dθ_j.

Now taking the wedge products of both sides of these 2n − 1 equations in (2.78) and (2.79), and using (2.77), shows

    ρ_0^2 ρ_1^2 · · · ρ_{n−2}^2 ∏_{l=0}^{n−2} ρ_l^{4(n−l−2)} ⋀_{l=0}^{n−2} (dᾱ_l ∧ dα_l) ∧ dᾱ_{n−1}
        = (2i)^{n−1} q_n^2 ∏_{l=1}^{n−1} q_l^3 · D(e^{iθ_1}, · · · , e^{iθ_n}) ⋀_{l=1}^{n−1} dq_l ⋀_{j=1}^n dθ_j,   (2.80)

where D(e^{iθ_1}, · · · , e^{iθ_n}) is defined by

    D(x_1, · · · , x_n) = det ( [x_k^j − x_n^j]_{j=1,··· ,n−1; k=1,··· ,n−1}       [jx_k^j]_{j=1,··· ,n−1; k=1,··· ,n}
                                [x_k^{−j} − x_n^{−j}]_{j=1,··· ,n−1; k=1,··· ,n−1}  [−jx_k^{−j}]_{j=1,··· ,n−1; k=1,··· ,n}
                                [x_k^n − x_n^n]_{k=1,··· ,n−1}                      [nx_k^n]_{k=1,··· ,n} ).   (2.81)

Lemma 2.16. We have

    D(x_1, · · · , x_n) = (−1)^{(n−1)(n−2)/2} ∏_{1≤j<k≤n} (x_j − x_k)^4 / ∏_{j=1}^n x_j^{2n−3}.

Proof. By inspection, the determinant D(x_1, · · · , x_n) is a symmetric function of x_1, · · · , x_n which is homogeneous of degree n. Upon multiplying columns 1 and n by x_1^{2n−3}, we see that D(x_1, · · · , x_n) becomes a polynomial in x_1, so it must be of the form

    p(x_1, · · · , x_n) / ∏_{j=1}^n x_j^{2n−3},

where p(x_1, · · · , x_n) is a symmetric polynomial of x_1, · · · , x_n of degree 2n(n − 1).
We see immediately from (2.81) that D(x_1, · · · , x_n) = 0 when x_1 = x_2. Furthermore, it is straightforward to verify that

    (x_1 ∂/∂x_1)^j D(x_1, · · · , x_n) = 0,   j = 1, 2, 3,

when x_1 = x_2. This is equivalent to saying

    (∂^j/∂x_1^j) D(x_1, · · · , x_n) = 0,   j = 1, 2, 3,

when x_1 = x_2. The polynomial p(x_1, · · · , x_n) must thus contain (x_1 − x_2)^4 as a factor, and so ∏_{1≤j<k≤n}(x_j − x_k)^4 by symmetry. As this is of degree 2n(n − 1), it follows that p(x_1, · · · , x_n) must in fact be proportional to ∏_{1≤j<k≤n}(x_j − x_k)^4, which gives

    D(x_1, · · · , x_n) = c_n ∏_{1≤j<k≤n}(x_j − x_k)^4 / ∏_{j=1}^n x_j^{2n−3}

for some constant c_n.


To decide the c_n, let us look at the coefficient of the term ∏_{j=1}^n x_j^{−(2n−3)} ∏_{j=1}^n x_j^{4(j−1)} in the determinant D(x_1, · · · , x_n). For the sake of clarity, we consider two cases separately: n even and n odd.

Assume n = 2k. Let us add n − 1 times the first column to the nth column. Then we see the coefficient of x_1^{−(2n−3)} is given by a cofactor of the following 2 × 2 matrix:

    ( x_1^{−(n−2)} − x_n^{−(n−2)}   x_1^{−(n−2)} − (n − 1)x_n^{−(n−2)} )
    ( x_1^{−(n−1)} − x_n^{−(n−1)}   −(n − 1)x_n^{−(n−1)}               ).

In the cofactor, we add n − 3 times the first column to the (n − 1)th column. Then we see the coefficient of x_2^{−(2n−7)} is given by a cofactor of the following 2 × 2 matrix:

    ( x_2^{−(n−4)} − x_n^{−(n−4)}   x_2^{−(n−4)} − (n − 1)x_n^{−(n−4)} )
    ( x_2^{−(n−3)} − x_n^{−(n−3)}   −(n − 3)x_n^{−(n−3)}               ).
Proceeding in this manner, we see the coefficient of x_{k−1}^{−5} is given by the determinant of the (n + 1) × (n + 1) matrix

    ( x_k − x_n              · · ·  x_{n−1} − x_n              x_k             · · ·  x_n             )
    ( x_k^{−1} − x_n^{−1}    · · ·  x_{n−1}^{−1} − x_n^{−1}    −x_k^{−1}       · · ·  −x_n^{−1}       )
    ( x_k^2 − x_n^2          · · ·  x_{n−1}^2 − x_n^2          2x_k^2          · · ·  2x_n^2          )
    (   ···                                                                                          )
    ( x_k^{n−1} − x_n^{n−1}  · · ·  x_{n−1}^{n−1} − x_n^{n−1}  (n − 1)x_k^{n−1} · · · (n − 1)x_n^{n−1} )
    ( x_k^n − x_n^n          · · ·  x_{n−1}^n − x_n^n          nx_k^n          · · ·  nx_n^n          ).

Interchange the top two rows to get

    ( x_k^{−1} − x_n^{−1}    · · ·  x_{n−1}^{−1} − x_n^{−1}    −x_k^{−1}       · · ·  −x_n^{−1}       )
    ( x_k − x_n              · · ·  x_{n−1} − x_n              x_k             · · ·  x_n             )
    ( x_k^2 − x_n^2          · · ·  x_{n−1}^2 − x_n^2          2x_k^2          · · ·  2x_n^2          )
    (   ···                                                                                          )
    ( x_k^{n−1} − x_n^{n−1}  · · ·  x_{n−1}^{n−1} − x_n^{n−1}  (n − 1)x_k^{n−1} · · · (n − 1)x_n^{n−1} )
    ( x_k^n − x_n^n          · · ·  x_{n−1}^n − x_n^n          nx_k^n          · · ·  nx_n^n          ).   (2.82)
We postpone deciding the coefficient of x_k^{−1}, and turn first to the term x_n^{2n−1}. In the determinant of (2.82), we first subtract the kth column from columns 1, 2, · · · , k − 1 to get

    ( x_k^{−1} − x_{n−1}^{−1}  · · ·  x_{n−1}^{−1} − x_n^{−1}  −x_k^{−1}  · · ·  −x_n^{−1} )
    ( x_k − x_{n−1}            · · ·  x_{n−1} − x_n            x_k        · · ·  x_n       )
    ( x_k^2 − x_{n−1}^2        · · ·  x_{n−1}^2 − x_n^2        2x_k^2     · · ·  2x_n^2    )
    (   ···                                                                               )
    ( x_k^n − x_{n−1}^n        · · ·  x_{n−1}^n − x_n^n        nx_k^n     · · ·  nx_n^n    ).

Then we add n times the kth column to the (n + 1)th column to see that the coefficient of x_n^{2n−1} is given by the determinant of the (n − 1) × (n − 1) matrix

    ( x_k^{−1} − x_{n−1}^{−1}    · · ·  x_{n−2}^{−1} − x_{n−1}^{−1}    −x_k^{−1}           · · ·  −x_n^{−1}             )
    ( x_k − x_{n−1}              · · ·  x_{n−2} − x_{n−1}              x_k                 · · ·  x_n                   )
    ( x_k^2 − x_{n−1}^2          · · ·  x_{n−2}^2 − x_{n−1}^2          2x_k^2              · · ·  2x_n^2                )
    (   ···                                                                                                            )
    ( x_k^{n−3} − x_{n−1}^{n−3}  · · ·  x_{n−2}^{n−3} − x_{n−1}^{n−3}  (n − 3)x_k^{n−3}    · · ·  (n − 3)x_{n−1}^{n−3}  )
    ( x_k^{n−2} − x_{n−1}^{n−2}  · · ·  x_{n−2}^{n−2} − x_{n−1}^{n−2}  (n − 2)x_k^{n−2}    · · ·  (n − 2)x_{n−1}^{n−2}  ).
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 85

Circular Unitary Ensemble 85

Repeating this operation, we get that the coefficient of x_{k+1}^3 is −x_k^{−1}. In summary, the coefficient of ∏_{j=1}^n x_j^{−(2n−3)} ∏_{j=1}^n x_j^{4(j−1)} is (−1)^{k−1}.

Assume n = 2k + 1. Then we can almost completely repeat the procedure above to see that the coefficient of ∏_{j=1}^n x_j^{−(2n−3)} ∏_{j=1}^n x_j^{4(j−1)} is (−1)^k.

Finally, note that the coefficient of ∏_{j=1}^n x_j^{4(j−1)} in ∏_{1≤j<k≤n}(x_j − x_k)^4 is 1. So it follows

    c_n = (−1)^{(n−1)(n−2)/2},

as desired. □

Proceed to computing the determinant J. We have

Lemma 2.17.

    |J| = ∏_{l=0}^{n−2} (1 − |α_l|^2) / (q_n ∏_{l=1}^n q_l).

Proof. Note

    dα_k = da_k + i db_k,   dᾱ_k = da_k − i db_k.

It easily follows that

    dᾱ_k ∧ dα_k = det( 1  −i ; 1  i ) da_k ∧ db_k = 2i da_k ∧ db_k.   (2.83)

Inserting (2.83) into (2.80) gives

    J = (q_n^2 ∏_{l=1}^{n−1} q_l^3) / (∏_{l=0}^{n−2} ρ_l^2 ∏_{l=0}^{n−2} ρ_l^{4(n−l−2)}) · D(e^{iθ_1}, · · · , e^{iθ_n}).

According to Lemmas 2.15 and 2.16, it immediately follows that

    |J| = (q_n^2 ∏_{l=1}^{n−1} q_l^3) / (∏_{l=0}^{n−2} ρ_l^2 ∏_{l=0}^{n−2} ρ_l^{4(n−l−2)}) ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^4
        = ∏_{l=0}^{n−2} (1 − |α_l|^2) / (q_n ∏_{l=1}^n q_l).

The proof is now complete. □

Lemma 2.18. Let

    Δ_n = {(q_1, q_2, · · · , q_{n−1}) : q_i > 0, q_1^2 + · · · + q_{n−1}^2 < 1}.

Then

    ∫_{Δ_n} (1/q_n) ∏_{j=1}^n q_j^{β−1} dq_1 · · · dq_{n−1} = Γ(β/2)^n / (2^{n−1} Γ(βn/2)),

where q_n = √(1 − q_1^2 − · · · − q_{n−1}^2).

Proof. We only consider the case n ≥ 2, since the other case is trivial. Start with n = 2. Using the change of variable t = q_1^2, we have

    ∫_{Δ_2} (1/q_2) ∏_{j=1}^2 q_j^{β−1} dq_1 = ∫_0^1 (1 − q_1^2)^{β/2−1} q_1^{β−1} dq_1
        = (1/2) ∫_0^1 (1 − t)^{β/2−1} t^{β/2−1} dt
        = Γ(β/2)^2 / (2Γ(β)).

Assume by induction that the claim is valid for some n ≥ 2. It easily follows that

    ∫_{Δ_{n+1}} (1/q_{n+1}) ∏_{i=1}^{n+1} q_i^{β−1} dq_1 · · · dq_n
        = ∫_{q_1^2+···+q_{n−1}^2 ≤ 1−q_n^2} (1 − q_n^2 − q_1^2 − · · · − q_{n−1}^2)^{β/2−1} ∏_{i=1}^n q_i^{β−1} dq_1 · · · dq_n
        = ∫_0^1 (1 − q_n^2)^{β/2−1} q_n^{β−1} dq_n ∫_{q_1^2+···+q_{n−1}^2 ≤ 1−q_n^2} ∏_{i=1}^{n−1} q_i^{β−1}
          · (1 − q_1^2/(1 − q_n^2) − · · · − q_{n−1}^2/(1 − q_n^2))^{β/2−1} dq_1 · · · dq_{n−1}.   (2.84)

Making a change of variable, the inner integral becomes

    (1 − q_n^2)^{(n−1)β/2} ∫_{Δ_n} (1 − q_1^2 − · · · − q_{n−1}^2)^{β/2−1} ∏_{i=1}^{n−1} q_i^{β−1} dq_1 · · · dq_{n−1},

which is in turn equal to

    (1 − q_n^2)^{(n−1)β/2} Γ(β/2)^n / (2^{n−1} Γ(nβ/2))

using the induction hypothesis. Substituting into (2.84) gives

    ∫_{Δ_{n+1}} (1/q_{n+1}) ∏_{i=1}^{n+1} q_i^{β−1} dq_1 · · · dq_n
        = (Γ(β/2)^n / (2^{n−1} Γ(nβ/2))) ∫_0^1 (1 − q_n^2)^{nβ/2−1} q_n^{β−1} dq_n
        = Γ(β/2)^{n+1} / (2^n Γ((n + 1)β/2)).

We conclude the proof by induction. □
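The base case n = 2 can be confirmed numerically: after the substitution the integral is a Beta integral, and for β ≥ 2 the integrand is smooth, so a simple trapezoidal rule suffices. This is an illustrative check only, not part of the proof.

```python
import math
import numpy as np

def lhs(beta, m=20001):
    # integral of (1 - q^2)^{beta/2 - 1} q^{beta - 1} over [0, 1]
    q = np.linspace(0.0, 1.0, m)
    f = (1 - q**2) ** (beta / 2 - 1) * q ** (beta - 1)
    h = q[1] - q[0]
    return (f[:-1] + f[1:]).sum() * h / 2      # trapezoidal rule

results = {beta: (lhs(beta), math.gamma(beta / 2) ** 2 / (2 * math.gamma(beta)))
           for beta in (2.0, 4.0)}
```

For β = 2 both sides equal 1/2, and for β = 4 both equal 1/12, matching Lemma 2.18 with n = 2.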
Proof of Theorem 2.17. As remarked above, (2.73) naturally induces a one-to-one mapping from (e^{iθ_1}, · · · , e^{iθ_n}, q_1, · · · , q_{n−1}) to (a_0, b_0, · · · , a_{n−2}, b_{n−2}, α_{n−1}). Let f_{n,β} and h_{n,β} be their respective joint probability density functions. Then it follows by Lemmas 2.17 and 2.15 that

    f_{n,β}(e^{iθ_1}, · · · , e^{iθ_n}, q_1, · · · , q_{n−1})
        = h_{n,β}(a_0, b_0, · · · , a_{n−2}, b_{n−2}, α_{n−1}) |J|
        = (n − 1)! (β^{n−1}/(2π)^n) ∏_{l=0}^{n−2} (1 − |α_l|^2)^{β(n−l−1)/2} · 1/(q_n ∏_{l=1}^n q_l)
        = (n − 1)! (β^{n−1}/(2π)^n) ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^β (1/q_n) ∏_{l=1}^n q_l^{β−1}.

This trivially implies that (e^{iθ_1}, · · · , e^{iθ_n}) is independent of (q_1, · · · , q_{n−1}). Integrating out the q_j over Δ_n, we get by Lemma 2.18

    g_{n,β}(e^{iθ_1}, · · · , e^{iθ_n}) := ∫_{Δ_n} f_{n,β}(e^{iθ_1}, · · · , e^{iθ_n}, q_1, · · · , q_{n−1}) dq_1 · · · dq_{n−1}
        = (n!/((2π)^n Z_{n,β})) ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^β.

Dividing by n! to eliminate the ordering of eigenvalues, we conclude the proof as desired. □
Having a CMV matrix model, we can establish the following asymptotic normal fluctuations for the CβE.

Theorem 2.18. Let e^{iθ_1}, · · · , e^{iθ_n} be chosen on the unit circle according to the CβE. Then as n → ∞
(i) for any θ_0 with 0 ≤ θ_0 < 2π,

    (1/√((2/β) log n)) Σ_{j=1}^n log(1 − e^{i(θ_j−θ_0)}) →^d N_C(0, 1);

(ii) for any 0 < a < b < 2π,

    (N_n(a, b) − n(b − a)/2π) / √((2/(βπ^2)) log n) →^d N(0, 1),

where N_n(a, b) denotes the number of the angles θ_j lying in the arc between a and b.

Proof. Left to the reader. □



Chapter 3

Gaussian Unitary Ensemble

3.1 Introduction

Let H_n be the set of all n × n Hermitian matrices. To each matrix H ∈ H_n assign a probability measure as follows:

    P_n(H) dH = (2^{n(n−1)/2}/(2π)^{n^2/2}) e^{−tr H^2/2} dH,   (3.1)

where dH is the Lebesgue measure on the algebraically independent entries of H. P_n is clearly invariant under unitary transforms, namely P_n(UHU^*) = P_n(H) for every unitary matrix U; see Chapter 2 of Deift and Gioev (2009) for a proof. The probability space (H_n, P_n) is called the Gaussian Unitary Ensemble (GUE). It is the most studied object in random matrix theory. As a matter of fact, the GUE is a prototype of a large number of matrix models and related problems.

Note that the GUE can be realized in the following way. Let z_{ii}, 1 ≤ i ≤ n, be a sequence of i.i.d. real standard normal random variables, and z_{ij}, 1 ≤ i < j ≤ n, an array of i.i.d. complex standard normal random variables independent of the z_{ii}'s. Then A_n := (z_{ij})_{n×n}, where z_{ji} = z̄_{ij} for i < j, induces the probability measure given by (3.1) on H_n.
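The realization just described can be coded directly; the sketch below (our own illustration) fills the strict upper triangle with complex standard normals, mirrors the conjugates below, and puts real N(0, 1) variables on the diagonal.

```python
import numpy as np

def sample_gue(n, rng):
    # off-diagonal: i.i.d. complex standard normal (Re and Im each N(0, 1/2))
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    a = np.triu(z, 1)
    a = a + a.conj().T                    # enforce z_{ji} = conj(z_{ij})
    a = a + np.diag(rng.normal(size=n))   # diagonal: i.i.d. real N(0, 1)
    return a

rng = np.random.default_rng(2)
A = sample_gue(5, rng)
# empirical second moment of one off-diagonal entry over many samples
offdiag = np.array([sample_gue(2, rng)[0, 1] for _ in range(20000)])
```

The matrix is Hermitian by construction, and E|z_{ij}|^2 = 1 for the off-diagonal entries.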
A remarkable feature of the GUE is that the eigenvalues have an explicit, nice probability density function. Let λ_1, · · · , λ_n be the n real unordered eigenvalues of A_n; then they are almost surely distinct from each other and are absolutely continuous with respect to Lebesgue measure on R^n. In particular, we have

Theorem 3.1. Let p_n(x) denote the joint probability density function of λ = (λ_1, · · · , λ_n); then

    p_n(x) = (1/((2π)^{n/2} ∏_{k=1}^n k!)) ∏_{1≤i<j≤n} |x_j − x_i|^2 ∏_{k=1}^n e^{−x_k^2/2},   (3.2)

where x = (x_1, · · · , x_n) ∈ R^n.
This theorem, due to Weyl, plays an important role in the study of the GUE. Its proof can be found in the textbooks Anderson, Guionnet and Zeitouni (2010) and Deift and Gioev (2009). (3.2) should be interpreted as follows. Let f : H_n → R be an invariant function, i.e., f(H) = f(UHU^*) for each H ∈ H_n and unitary matrix U. Then

    Ef(H) = ∫_{R^n} f(x) p_n(x) dx.

It is worth remarking that there are two factors on the righthand side of (3.2). One is the product of n standard normal density functions, while the other is the square of the Vandermonde determinant. The probability that two eigenvalues come very close to each other is therefore very small. Hence, intuitively speaking, the eigenvalues should be arranged more neatly than i.i.d. normal random points on the real line. It is the objective of this chapter to take a closer look at the arrangement of eigenvalues from the viewpoint of global behaviour.
In order to analyze the precise asymptotics of p_n(x), we need to introduce the Hermite orthogonal polynomials and the associated wave functions. Let h_l(x), l ≥ 0, be the sequence of monic orthogonal polynomials with respect to the weight function e^{−x^2/2}, with h_0(x) = 1. Then

    h_l(x) = (−1)^l e^{x^2/2} (d^l/dx^l) e^{−x^2/2} = l! Σ_{i=0}^{[l/2]} (−1)^i x^{l−2i}/(2^i i! (l − 2i)!),   l ≥ 1.   (3.3)

Define

    ϕ_l(x) = (2π)^{−1/4} (l!)^{−1/2} h_l(x) e^{−x^2/4}   (3.4)

so that we have

    ∫_{−∞}^∞ ϕ_l(x) ϕ_m(x) dx = δ_{l,m},   ∀ l, m ≥ 0.
Now a simple matrix manipulation directly yields

    ∏_{1≤i<j≤n} (x_j − x_i) = det((x_k^{j−1})_{j,k=1,··· ,n}) = det((h_{j−1}(x_k))_{j,k=1,··· ,n}).   (3.5)

Furthermore, substituting (3.5) into (3.2) and noting (3.4) immediately leads to the following determinantal expression for p_n(x).

Proposition 3.1.

    p_n(x) = (1/n!) det(K_n(x_i, x_j))_{n×n},   (3.6)

where K_n is defined by

    K_n(x, y) = Σ_{l=0}^{n−1} ϕ_l(x) ϕ_l(y).   (3.7)

Such an expression as (3.6) turns out to be very useful in the study of asymptotics of eigenvalues. In fact, the GUE is one of the first examples of so-called determinantal point processes (see Section 3.3 below for more details). A nice observation about the kernel K_n is the following:

    ∫_{−∞}^∞ K_n(x, z) K_n(z, y) dz = K_n(x, y).

As an immediate consequence, we can easily obtain any k-dimensional marginal density. Let p_{n,k}(x_1, · · · , x_k) be the probability density function of (λ_1, · · · , λ_k); then it follows that

    p_{n,k}(x_1, · · · , x_k) = ((n − k)!/n!) det(K_n(x_i, x_j))_{k×k}.   (3.8)

In particular, we have

    p_{n,1}(x) = (1/n) K_n(x, x).
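The reproducing property of K_n and the normalization of p_{n,1} can be verified numerically. The sketch below (our own illustration) builds the wave functions on a grid by the three-term recurrence of Lemma 3.1 (i) below and discretizes the integrals.

```python
import numpy as np

def wave_functions(n, x):
    # phi_0, ..., phi_{n-1} on the grid x, via the recurrence
    # phi_{l+1} = (x phi_l - sqrt(l) phi_{l-1}) / sqrt(l + 1)
    phi = np.empty((n, x.size))
    phi[0] = (2 * np.pi) ** (-0.25) * np.exp(-x**2 / 4)
    if n > 1:
        phi[1] = x * phi[0]
    for l in range(1, n - 1):
        phi[l + 1] = (x * phi[l] - np.sqrt(l) * phi[l - 1]) / np.sqrt(l + 1)
    return phi

n = 8
x = np.linspace(-12, 12, 1201)
h = x[1] - x[0]
phi = wave_functions(n, x)
K = phi.T @ phi                        # K[i, j] = K_n(x_i, x_j)
repro_err = np.max(np.abs(K @ K * h - K))   # reproducing property
mass = np.diag(K).sum() * h / n             # integral of p_{n,1}
```

Both the reproducing identity and the total mass come out correct to high accuracy on this grid.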
We collect some basic properties of the Hermite wave functions ϕ_l(x) below. See Szegö (1975) for more information.

Lemma 3.1. For l ≥ 1, it follows that
(i) recurrence equation

    x ϕ_l(x) = √(l + 1) ϕ_{l+1}(x) + √l ϕ_{l−1}(x);

(ii) differential relations

    ϕ_l'(x) = −(x/2) ϕ_l(x) + √l ϕ_{l−1}(x),
    ϕ_l''(x) = −(l + 1/2 − x^2/4) ϕ_l(x);

(iii) Christoffel-Darboux identities

    Σ_{m=0}^{l−1} ϕ_m(x) ϕ_m(y) = √l · (ϕ_l(x)ϕ_{l−1}(y) − ϕ_l(y)ϕ_{l−1}(x))/(x − y),   x ≠ y,   (3.9)
    Σ_{m=0}^{l−1} ϕ_m^2(x) = √l · (ϕ_l'(x)ϕ_{l−1}(x) − ϕ_l(x)ϕ_{l−1}'(x));   (3.10)

(iv) boundedness

    κ := sup_{l≥0} ‖ϕ_l‖_∞ < ∞.   (3.11)
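The identities (3.9) and (3.10) are exact algebraic relations and can be checked pointwise; the sketch below (our own illustration) generates ϕ_0, · · · , ϕ_l by the recurrence (i) and evaluates derivatives via the differential relation (ii).

```python
import numpy as np

def phi_list(l_max, x):
    # phi_0, ..., phi_{l_max} at the scalar x
    p = [(2 * np.pi) ** (-0.25) * np.exp(-x**2 / 4)]
    p.append(x * p[0])
    for l in range(1, l_max):
        p.append((x * p[l] - np.sqrt(l) * p[l - 1]) / np.sqrt(l + 1))
    return p

l, x, y = 12, 0.7, -1.3
px, py = phi_list(l, x), phi_list(l, y)

# Christoffel-Darboux (3.9)
cd_lhs = sum(px[m] * py[m] for m in range(l))
cd_rhs = np.sqrt(l) * (px[l] * py[l - 1] - py[l] * px[l - 1]) / (x - y)

# confluent form (3.10), with phi_l'(x) = -(x/2) phi_l(x) + sqrt(l) phi_{l-1}(x)
dpl = -x / 2 * px[l] + np.sqrt(l) * px[l - 1]
dplm1 = -x / 2 * px[l - 1] + np.sqrt(l - 1) * px[l - 2]
conf_lhs = sum(px[m] ** 2 for m in range(l))
conf_rhs = np.sqrt(l) * (dpl * px[l - 1] - px[l] * dplm1)
```

Both sides agree to machine precision at the chosen points.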

The next lemma, known as the Plancherel-Rotach formulae, provides asymptotic behavior formulae for the Hermite orthogonal polynomials.

Lemma 3.2. We have as n → ∞
(i) for |x| < 2 − δ with δ > 0,

    n^{1/4} ϕ_{n+k}(√n x) = √(2/π) (4 − x^2)^{−1/4} cos(nα(θ) + ((k + 1)/2)θ − π/4) + O(n^{−1}),

where x = 2 cos θ, α(θ) = θ − sin 2θ/2, k = −1, 0, 1, and the convergence is uniform in x. The asymptotics is also valid for |x| < 2 − δ_n with δ_n^{−1} = o(n^{2/3});
(ii) for x = ±2 + ζ n^{−2/3} with ζ ∈ R,

    n^{1/12} ϕ_n(√n x) = 2^{1/4} (Ai(ζ) + O(n^{−3/4})),

where Ai(ζ) stands for the standard Airy function;
(iii) for |x| > 2 + δ,

    n^{1/4} ϕ_n(√n x) = (1/√(π sinh θ)) e^{−(2n+1)β(θ)/2} (1 + O(n^{−1})),

where x = 2 cosh θ with θ > 0 and β(θ) = sinh 2θ/2 − θ.

As a direct application, we obtain a limit of the first marginal probability density after suitable scaling.

Proposition 3.2. Define

    p̄_n(x) = √n p_{n,1}(√n x).

(i) We have as n → ∞

    p̄_n(x) → ρ_sc(x)

uniformly on any closed interval of (−2, 2), where ρ_sc was defined by (1.15).
(ii) Given s > 0, there exist positive constants c_1 and c_2 such that for each n ≥ 1

    p̄_n(2 + s/n^{2/3}) ≤ (c_1/(n^{1/3} s)) e^{−c_2 s^{3/2}},   (3.12)
    p̄_n(−2 − s/n^{2/3}) ≤ (c_1/(n^{1/3} s)) e^{−c_2 s^{3/2}}.

(iii) Given |x| > 2 + δ with δ > 0, there exist positive constants c_3 and c_4 such that for each n ≥ 1

    p̄_n(x) ≤ c_3 e^{−c_4 n |x|^{3/2}}.

Proof. To start with the proof of (i), note that it follows from (ii) and (iii) of Lemma 3.1 that

    p̄_n(x) = (1/√n) K_n(√n x, √n x)
        = ϕ_n'(√n x) ϕ_{n−1}(√n x) − ϕ_n(√n x) ϕ_{n−1}'(√n x)
        = √n ϕ_{n−1}(√n x)^2 − √(n − 1) ϕ_n(√n x) ϕ_{n−2}(√n x).   (3.13)

Fix a δ > 0. For each x such that |x| ≤ 2 − δ, we have by Lemma 3.2 (i)

    √n ϕ_{n−1}(√n x)^2 = (2/(π(4 − x^2)^{1/2})) cos^2(nα(θ) − π/4) + O(n^{−1})

and

    √n ϕ_n(√n x) ϕ_{n−2}(√n x) = (2/(π(4 − x^2)^{1/2})) cos(nα(θ) + θ/2 − π/4) cos(nα(θ) − θ/2 − π/4) + O(n^{−1}),

where x = 2 cos θ, α(θ) = θ − sin 2θ/2, and the convergence is uniform in x. Now a simple algebra yields

    p̄_n(x) = (1/2π) √(4 − x^2) + O(n^{−1}),

as desired.
Proceed to prove (ii). Note that it follows by Lemma 3.1 that

    (d/dx) K_n(x, x) = √n (ϕ_n''(x) ϕ_{n−1}(x) − ϕ_n(x) ϕ_{n−1}''(x)) = −√n ϕ_n(x) ϕ_{n−1}(x).

Since K_n(x, x) vanishes at ∞, we have

    K_n(x, x) = √n ∫_x^∞ ϕ_n(u) ϕ_{n−1}(u) du,

from which it follows

    p̄_n(x) = √n ∫_x^∞ ϕ_n(√n u) ϕ_{n−1}(√n u) du.

By the Cauchy-Schwarz inequality,

    p̄_n(x) ≤ √n (∫_x^∞ ϕ_n(√n u)^2 du)^{1/2} (∫_x^∞ ϕ_{n−1}(√n u)^2 du)^{1/2}.

Let x = 2 + s/n^{2/3}. We will below control

    √n ∫_{2+s/n^{2/3}}^∞ ϕ_n(√n u)^2 du = (1/n^{1/6}) ∫_s^∞ ϕ_n(√n (2 + u/n^{2/3}))^2 du

from above. Write

    cosh θ = 1 + u/(2n^{2/3}),   β(θ) = (1/2) sinh 2θ − θ.

Then it follows by using the asymptotic formula of Lemma 3.2 (iii) that

    (1/n^{1/6}) ∫_s^∞ ϕ_n(√n (2 + u/n^{2/3}))^2 du = (1 + o(1)) (1/π) ∫_{θ_s}^∞ e^{−(2n+1)β(θ)} dθ,

where cosh θ_s = 1 + s/(2n^{2/3}).
Since β'(θ) = 2(sinh θ)^2 is increasing, we have for s → ∞ and n → ∞

    (1/π) ∫_{θ_s}^∞ e^{−(2n+1)β(θ)} dθ ≤ (1/(2πnβ'(θ_s))) e^{−(2n+1)β(θ_s)}.

Note the elementary inequality

    θ^2/2 ≤ cosh θ − 1 ≤ (sinh θ)^2,

from which one can readily derive

    β'(θ_s) = 2(sinh θ_s)^2 ≥ 2(cosh θ_s − 1) = s/n^{2/3}

and

    β(θ_s) = 2 ∫_0^{θ_s} (sinh x)^2 dx ≥ (2/θ_s) (∫_0^{θ_s} sinh x dx)^2 ≥ s^{3/2}/(2n).

Thus

    √n ∫_{2+s/n^{2/3}}^∞ ϕ_n(√n u)^2 du ≤ ((1 + o(1))/(2πn^{1/3} s)) e^{−(1+o(1)) s^{3/2}}.   (3.14)

Similarly, the upper bound of (3.14) holds for the integral of ϕ_{n−1}(√n u)^2, and so (3.12) is proven.
Last, we turn to (iii). The proof is completely similar to (ii). We now conclude the proposition. □

Now we are ready to state and prove the celebrated Wigner semicircle law for the GUE. Define the empirical spectral distribution for the normalized eigenvalues by

    F_n(x) = (1/n) Σ_{k=1}^n 1(λ_k ≤ √n x),   −∞ < x < ∞.   (3.15)

Proposition 3.2 gives the limit of the mean spectral density. In fact, we can further prove the following.

Theorem 3.2. We have as n → ∞

    F_n →^d ρ_sc in P,   (3.16)

where ρ_sc was defined by (1.15).
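The semicircle law is easy to observe in simulation. The sketch below (our own illustration) draws one large GUE sample, normalizes the eigenvalues by √n, and compares the empirical mass of [−1, 1] with the semicircle mass ∫_{−1}^1 √(4 − x²)/(2π) dx = 1/3 + √3/(2π).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
A = np.triu(z, 1)
A = A + A.conj().T + np.diag(rng.normal(size=n))
lam = np.linalg.eigvalsh(A) / np.sqrt(n)     # normalized eigenvalues

frac_bulk = np.mean(np.abs(lam) <= 1.0)      # semicircle mass ~ 0.6090
frac_support = np.mean(np.abs(lam) <= 2.1)   # essentially all eigenvalues
```

At n = 600 the empirical fraction in [−1, 1] is already within a couple of percent of the limiting value, and virtually no eigenvalue escapes a small neighbourhood of [−2, 2].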

Proof. The statement (3.16) means that for any bounded continuous function f,

    ∫_{−∞}^∞ f(x) dF_n(x) →^P ∫_{−∞}^∞ f(x) ρ_sc(x) dx,   n → ∞.   (3.17)

Note that f in (3.17) can be replaced by any bounded Lipschitz function.
Let f be a bounded 1-Lipschitz function. We will prove the following claims:

    E (1/n) Σ_{k=1}^n f(λ_k/√n) → ∫_{−∞}^∞ f(x) ρ_sc(x) dx   (3.18)

and

    Var((1/n) Σ_{k=1}^n f(λ_k/√n)) → 0.   (3.19)

First, we prove (3.18). Note

    E f(λ_k/√n) = ∫_{−∞}^∞ f(x/√n) p_{n,1}(x) dx = ∫_{−∞}^∞ f(x) p̄_n(x) dx.   (3.20)

Fix a small δ > 0 and let δ_n = s_n/n^{2/3} satisfy δ_n n^{2/3} → ∞ and δ_n n^{1/2} → 0. Write the integral on the righthand side of (3.20) as the sum of integrals I_k, k = 1, 2, 3, 4, over the sets A_1 = {x : |x| < 2 − δ}, A_2 = {x : 2 − δ ≤ |x| < 2 − δ_n}, A_3 = {x : 2 − δ_n ≤ |x| < 2 + δ_n} and A_4 = {x : 2 + δ_n ≤ |x| < ∞}.

We will below estimate each integral separately. First, it clearly follows from Proposition 3.2 (i) that

    I_1 = ∫_{A_1} f(x) p̄_n(x) dx → ∫_{A_1} f(x) ρ_sc(x) dx.

It remains to show that I_k, k = 2, 3, 4, are asymptotically as small as δ. Note f is bounded, so each I_k is bounded by the corresponding integral of p̄_n(x) over A_k.
Since s_n → ∞, then according to Lemma 3.2 (i), we have for x ∈ A_2

    n^{1/4} ϕ_{n+k}(√n x) = (1/√(π sin θ)) cos(nα(θ) + ((k + 1)/2)θ − π/4) (1 + o(1)),

where x = 2 cos θ, α(θ) = θ − sin 2θ/2. Hence it follows

    ∫_{A_2} p̄_n(x) dx = ∫_{A_2} (n^{1/4} ϕ_{n−1}(√n x))^2 dx − √((n − 1)/n) ∫_{A_2} n^{1/2} ϕ_n(√n x) ϕ_{n−2}(√n x) dx = O(δ).

To estimate the integral over A_3, we note (3.13) and use the bound in (3.11). Then

    ∫_{A_3} p̄_n(x) dx ≤ 2 ‖p̄_n‖_∞ δ_n ≤ 4κ^2 n^{1/2} δ_n → 0.

To estimate the integral over A_4, we use Proposition 3.2 (ii) to get

    ∫_{A_4} p̄_n(x) dx ≤ ∫_{s_n}^∞ (p̄_n(2 + s/n^{2/3}) + p̄_n(−2 − s/n^{2/3})) ds → 0.

Combining the above four estimates together yields

    lim_{n→∞} E f(λ_1/√n) = ∫_{−2+δ}^{2−δ} f(x) ρ_sc(x) dx + O(δ).

Letting δ → 0, we can conclude the proof of (3.18).
Next we turn to the proof of (3.19). Observe

    Var(Σ_{k=1}^n f(λ_k/√n)) = n [E f(λ_1/√n)^2 − (E f(λ_1/√n))^2]
        + n(n − 1) [E f(λ_1/√n) f(λ_2/√n) − (E f(λ_1/√n))^2].   (3.21)

Note

    E f(λ_1/√n)^2 = ∫_{−∞}^∞ f(x)^2 p̄_n(x) dx = (1/√n) ∫_{−∞}^∞ f(x)^2 K_n(√n x, √n x) dx
        = ∫_{−∞}^∞ ∫_{−∞}^∞ f(x)^2 K_n(√n x, √n y)^2 dx dy   (3.22)

and

    E f(λ_1/√n) f(λ_2/√n) = ∫_{−∞}^∞ ∫_{−∞}^∞ f(x) f(y) n p_{n,2}(√n x, √n y) dx dy
        = (1/(n − 1)) ∫_{−∞}^∞ ∫_{−∞}^∞ f(x) f(y) det(K_n(√n x, √n y))_{2×2} dx dy
        = (1/(n − 1)) (∫_{−∞}^∞ f(x) K_n(√n x, √n x) dx)^2
          − (1/(n − 1)) ∫_{−∞}^∞ ∫_{−∞}^∞ f(x) f(y) K_n(√n x, √n y)^2 dx dy.   (3.23)

Substituting (3.22) and (3.23) into (3.21) yields

    Var(Σ_{k=1}^n f(λ_k/√n)) = (n/2) ∫_{−∞}^∞ ∫_{−∞}^∞ (f(x) − f(y))^2 K_n(√n x, √n y)^2 dx dy
        = (n/2) ∫_{−∞}^∞ ∫_{−∞}^∞ ((f(x) − f(y))/(x − y))^2 (ϕ_n(√n x)ϕ_{n−1}(√n y) − ϕ_n(√n y)ϕ_{n−1}(√n x))^2 dx dy.

Since f is a 1-Lipschitz function, it follows by the orthogonality of the ϕ_l that

    Var(Σ_{k=1}^n f(λ_k/√n)) ≤ (n/2) ∫_{−∞}^∞ ∫_{−∞}^∞ (ϕ_n(√n x)ϕ_{n−1}(√n y) − ϕ_n(√n y)ϕ_{n−1}(√n x))^2 dx dy
        = 1,   (3.24)

which implies the claim (3.19).
Now we conclude the proof of Theorem 3.2. □
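The bound (3.24) is in fact attained: for the 1-Lipschitz function f(x) = x the linear statistic is Σ_k λ_k/√n = tr A_n/√n, a sum of n independent N(0, 1) diagonal entries divided by √n, so its variance is exactly 1. The Monte Carlo sketch below (our own illustration) confirms this.

```python
import numpy as np

rng = np.random.default_rng(4)

def linear_stat(n):
    # sum_k f(lambda_k / sqrt(n)) with f(x) = x, i.e. tr(A_n)/sqrt(n)
    w = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    A = np.triu(w, 1)
    A = A + A.conj().T + np.diag(rng.normal(size=n))
    return np.linalg.eigvalsh(A).sum() / np.sqrt(n)

vals = np.array([linear_stat(30) for _ in range(3000)])
var_est = vals.var()
```

The empirical variance hovers around 1, in agreement with the upper bound in (3.24).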

3.2 Fluctuations of Stieltjes transforms

Let A_n be the standard GUE matrix as in the Introduction. Consider the normalized matrix H_n = A_n/√n and denote by λ_1, λ_2, · · · , λ_n its eigenvalues. Define the Green function

    G_n(z) = (G_{ij}(z))_{n×n} = (H_n − z)^{−1}

and its normalized trace

    m_n(z) = (1/n) tr G_n(z) = (1/n) tr (H_n − z)^{−1} = (1/n) Σ_{i=1}^n G_{ii}(z).

It obviously follows that

    m_n(z) = ∫_{−∞}^∞ (1/(x − z)) dF_n(x) = s_{F_n}(z),

where F_n is defined by (3.15).
In this section we shall first estimate E m_n(z) and Var(m_n(z)), and then prove a central limit theorem for m_n(z). Start with some basic facts and lemmas about the Green function and the trace of a matrix. We occasionally suppress the dependence of functions on z when the context is clear; for example, we may write G_{ij} instead of G_{ij}(z), and so on.

Lemma 3.3. Let H_n = (H_{ij})_{n×n} be a Hermitian matrix and G_n(z) = (G_{ij}(z))_{n×n} its Green function. Then it follows that
(i) matrix identity

    G_{ij} = −δ_{ij}/z + (1/z) Σ_{k=1}^n G_{ik} H_{kj};

(ii) for z = a + iη where η ≠ 0 and k ≥ 1,

    sup_{1≤i,j≤n} |(G_n^k)_{ij}(z)| ≤ 1/|η|^k,   |(1/n) tr G_n^k| ≤ 1/|η|^k;

(iii) differential relations

    ∂G_{kl}/∂H_{ii} = −G_{ki} G_{il}

and for i ≠ j

    ∂G_{kl}/∂ReH_{ij} = −(G_{ki}G_{jl} + G_{kj}G_{il}),   ∂G_{kl}/∂ImH_{ij} = −i(G_{ki}G_{jl} − G_{kj}G_{il}).

Proof. (i) trivially follows from the fact G_n(H_n − z) = I_n. To prove (ii), let U = (u_{ij})_{n×n} be a unitary matrix such that

    H_n = U diag(λ_1, λ_2, · · · , λ_n) U^*.   (3.25)

Then

    G_n^k = U diag((λ_1 − z)^{−k}, (λ_2 − z)^{−k}, · · · , (λ_n − z)^{−k}) U^*.

Hence we have

    (G_n^k)_{ij} = Σ_{l=1}^n u_{il} (λ_l − z)^{−k} u_{lj}^*,

from which it follows

    |(G_n^k)_{ij}| ≤ (1/|η|^k) Σ_{l=1}^n |u_{il}||u_{lj}^*| ≤ 1/|η|^k.

We conclude (ii).
Finally, (iii) easily follows from the Sherman-Morrison equation

    (H_n + δA − z)^{−1} − (H_n − z)^{−1} = −δ (H_n + δA − z)^{−1} A (H_n − z)^{−1}. □
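The differential relations of (iii) can be checked against finite differences; the sketch below (our own illustration) perturbs one diagonal entry and one real off-diagonal part of a random Hermitian matrix and compares with the formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
n, zz, eps = 6, 0.3 + 1.0j, 1e-6
w = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
H = np.triu(w, 1)
H = H + H.conj().T + np.diag(rng.normal(size=n))

def resolvent(M):
    return np.linalg.inv(M - zz * np.eye(n))

G = resolvent(H)
i, k, l = 2, 0, 4

# dG_{kl}/dH_{ii}: perturb a diagonal entry
Hp = H.copy(); Hp[i, i] += eps
num_diag = (resolvent(Hp)[k, l] - G[k, l]) / eps

# dG_{kl}/dReH_{pq}: bump the real part of H_{pq} and its conjugate H_{qp}
p, q = 1, 3
Hq = H.copy(); Hq[p, q] += eps; Hq[q, p] += eps
num_re = (resolvent(Hq)[k, l] - G[k, l]) / eps
```

The finite differences agree with −G_{ki}G_{il} and −(G_{kp}G_{ql} + G_{kq}G_{pl}) up to O(eps).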
The next lemma collects some important properties of Gaussian random variables that will be used below.

Lemma 3.4. Assume that g_1, g_2, · · · , g_m are independent centered normal random variables with Eg_k^2 = σ_k^2. Denote σ^2 = max_{1≤k≤m} σ_k^2.
(i) Stein equation: if F : R^m → C is a differentiable function, then

    E g_k F(g_1, · · · , g_m) = σ_k^2 E (∂F/∂g_k)(g_1, · · · , g_m).

(ii) Poincaré-Nash upper bound: if F : R^m → C is a differentiable function, then

    E|F(g_1, · · · , g_m) − EF(g_1, · · · , g_m)|^2 ≤ σ^2 E|∇F|^2,

where ∇F stands for the gradient of F.
(iii) Concentration of measure inequality: if F : R^m → C is a Lipschitz function, then for any t > 0,

    P(|F(g_1, · · · , g_m) − EF(g_1, · · · , g_m)| > t) ≤ e^{−t^2/(2σ^2 ‖F‖_{lip}^2)}.

Now we can use the above two lemmas to get a rough estimate for E m_n(z) and Var(m_n(z)).

Proposition 3.3. For each z with Im z ≠ 0, it follows that
(i)

    E m_n(z) = m_sc(z) + O(n^{−2}),   (3.26)

where m_sc(z) denotes the Stieltjes transform of ρ_sc, namely

    m_sc(z) = −z/2 + (1/2)√(z^2 − 4);

(ii)

    E|m_n(z) − E m_n(z)|^2 = O(n^{−2}).   (3.27)
Proof. Start with the proof of (3.27). Note that m_n is a function of the independent centered normal random variables {H_{ii}, 1 ≤ i ≤ n} and {ReH_{ij}, ImH_{ij}, 1 ≤ i < j ≤ n}. We use the Poincaré-Nash upper bound to get

    E|m_n − Em_n|^2 ≤ (1/n) [Σ_{i=1}^n E|∂m_n/∂H_{ii}|^2 + Σ_{i<j} E|∂m_n/∂ReH_{ij}|^2 + Σ_{i<j} E|∂m_n/∂ImH_{ij}|^2].   (3.28)

It easily follows from the differential relations in Lemma 3.3 (iii) that

    ∂m_n/∂H_{ii} = −(1/n) Σ_{l=1}^n G_{li} G_{il},
    ∂m_n/∂ReH_{ij} = −(1/n) Σ_{l=1}^n (G_{li}G_{jl} + G_{lj}G_{il}),
    ∂m_n/∂ImH_{ij} = −(i/n) Σ_{l=1}^n (G_{li}G_{jl} − G_{lj}G_{il}).

In turn, according to Lemma 3.3 (ii), we have (3.27).
Proceed to the proof of (3.26). First, use the matrix identity to get

    E m_n = (1/n) Σ_{i=1}^n E G_{ii} = (1/n) Σ_{i=1}^n E(−1/z + (1/z) Σ_{k=1}^n G_{ik} H_{ki})
        = −1/z + (1/(zn)) Σ_{i=1}^n E G_{ii}H_{ii} + (1/(zn)) Σ_{i≠k} E G_{ik}(ReH_{ki} + iImH_{ki}).   (3.29)

Also, we have by Lemma 3.3 (iii)
\[
EG_{ii}H_{ii} = \frac1n E\frac{\partial G_{ii}}{\partial H_{ii}} = -\frac1n EG_{ii}^2, \tag{3.30}
\]
\[
EG_{ik}\mathrm{Re}H_{ki} = \frac1{2n}E\frac{\partial G_{ik}}{\partial\mathrm{Re}H_{ki}} = -\frac1{2n}E\big(G_{ik}^2 + G_{ii}G_{kk}\big), \tag{3.31}
\]
\[
EG_{ik}\mathrm{Im}H_{ki} = \frac1{2n}E\frac{\partial G_{ik}}{\partial\mathrm{Im}H_{ki}} = -\frac i{2n}E\big(G_{ik}^2 - G_{ii}G_{kk}\big). \tag{3.32}
\]
Substituting (3.30)-(3.32) into (3.29) immediately yields
\[
Em_n = -\frac1z - \frac1z Em_n^2. \tag{3.33}
\]
According to (ii),
\[
Em_n^2 - (Em_n)^2 = O(n^{-2}).
\]
Hence it follows that
\[
Em_n = -\frac1z - \frac1z(Em_n)^2 + O(n^{-2}). \tag{3.34}
\]
Recall that $m_{sc}(z)$ satisfies the equation
\[
m_{sc} = -\frac1z - \frac1z m_{sc}^2.
\]
It is now easy to see
\[
Em_n(z) = m_{sc}(z) + O(n^{-2}),
\]
as desired. $\square$

Remark 3.1. (3.27) can be extended to any linear eigenvalue statistic (see (3.52) below) with a differentiable test function; see Proposition 2.4 of Lytova and Pastur (2009).

As a direct consequence of Proposition 3.3, we obtain

Corollary 3.1. For each $z$ with $\mathrm{Im}\,z \ne 0$,
\[
m_n(z) \stackrel{P}{\longrightarrow} m_{sc}(z), \quad n\to\infty. \tag{3.35}
\]
According to Theorem 1.14, the Stieltjes continuity theorem, (3.35) is in turn equivalent to saying that as $n\to\infty$
\[
F_n \stackrel{d}{\longrightarrow} \rho_{sc} \quad\text{in } P.
\]
Thus we have derived the Wigner semicircle law using the Green function approach.
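The convergence $m_n(z)\to m_{sc}(z)$ is easy to observe numerically. The following sketch (matrix size and spectral parameter are arbitrary choices) samples a normalized GUE matrix and compares the empirical Stieltjes transform with $m_{sc}(z)$, computed as the root of $m^2 + zm + 1 = 0$ whose imaginary part has the same sign as $\mathrm{Im}\,z$.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_eigenvalues(n, rng):
    # Normalized GUE: diagonal variance 1/n, off-diagonal |H_ij|^2 of mean
    # 1/n, so the spectrum asymptotically fills [-2, 2].
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.linalg.eigvalsh((A + A.conj().T) / (2 * np.sqrt(n)))

def m_sc(z):
    # Root of m^2 + z m + 1 = 0 with Im m(z) * Im z > 0 (the Stieltjes
    # transform of a probability measure has this sign property).
    s = np.sqrt(z * z - 4 + 0j)
    m = (-z + s) / 2
    if m.imag * z.imag < 0:
        m = (-z - s) / 2
    return m

n, z = 1000, 1.0 + 0.5j
lam = gue_eigenvalues(n, rng)
m_n = np.mean(1.0 / (lam - z))
m = m_sc(z)
assert abs(m * m + z * m + 1) < 1e-12   # self-consistency of m_sc
print(abs(m_n - m))                     # small: O(1/n) fluctuation
```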

In the following we shall be devoted to refining the estimates of $Em_n$ and $\mathrm{Var}(m_n)$ given in Proposition 3.3. A basic tool is still Stein's equation. As above, we will repeatedly use the Stein equation to get the precise coefficients of the $n^{-2}$ terms. The main results read as follows.

Theorem 3.3. For each $z$ with $\mathrm{Im}\,z \ne 0$, it follows that
(i)
\[
Em_n(z) = m_{sc}(z) + \frac{1}{2(z^2-4)^{5/2}}\frac{1}{n^2} + o(n^{-2});
\]
(ii)
\[
E\big(m_n(z) - Em_n(z)\big)^2 = \frac{1}{(z^2-4)^2}\frac{1}{n^2} + o(n^{-2}), \tag{3.36}
\]
\[
\mathrm{Cov}\big(m_n(z_1), m_n(z_2)\big) = \frac{1}{2(z_1-z_2)^2}\Big(\frac{z_1z_2-4}{\sqrt{z_1^2-4}\cdot\sqrt{z_2^2-4}} - 1\Big)\frac{1}{n^2} + o(n^{-2}). \tag{3.37}
\]
Proof. One can directly get (i) from (ii) by noting the equation (3.33). We shall mainly focus on the computation of (3.36), since (3.37) is similar. To do this, note
\[
Em_n^2 = \frac1n\sum_{i=1}^n Em_nG_{ii} = \frac1n\sum_{i=1}^n Em_n\Big(-\frac1z + \frac1z\sum_{k=1}^n G_{ki}H_{ik}\Big)
= -\frac1z Em_n + \frac{1}{zn}\sum_{i=1}^n Em_nG_{ii}H_{ii} + \frac{1}{zn}\sum_{i\ne k}Em_nG_{ki}\mathrm{Re}H_{ik} + \frac{i}{zn}\sum_{i\ne k}Em_nG_{ki}\mathrm{Im}H_{ik}.
\]
Using Lemma 3.3 (iii) and some simple algebra we get
\[
Em_n^2 = -\frac1z Em_n - \frac1z Em_n^3 - \frac{1}{zn^3}\sum_{i,j,k}EG_{ij}G_{jk}G_{ki}. \tag{3.38}
\]
Hence we have
\[
Em_n^2 - (Em_n)^2 = -\frac1z Em_n - \frac1z Em_n^3 - (Em_n)^2 - \frac{1}{zn^3}\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki}
= -\frac2z Em_n\big(Em_n^2 - (Em_n)^2\big) - \frac{1}{zn^3}\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki} - \frac1z E\big(m_n - Em_n\big)^3.
\]

Solving this equation further yields
\[
Em_n^2 - (Em_n)^2 = -\frac{1}{z+2Em_n}\,\frac{1}{n^3}\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki} + o(n^{-2}). \tag{3.39}
\]
We remark that the sum on the right-hand side of (3.39) is asymptotically as small as $n^{-2}$ by Lemma 3.3 (ii). It remains to estimate this sum precisely. For this, we first observe
\[
\frac1n\sum_{i,k=1}^n EG_{ik}G_{ki} = \frac1n\sum_{i,k=1}^n EG_{ik}\Big(-\frac1z\delta_{i,k} + \frac1z\sum_{l=1}^n G_{kl}H_{li}\Big)
= -\frac1z Em_n + \frac{1}{zn}\sum_{i,k,l}EG_{ik}G_{kl}H_{li}
= -\frac1z Em_n - \frac2z E\,m_n\frac1n\sum_{i,k=1}^n G_{ik}G_{ki}
= -\frac1z Em_n - \frac2z Em_n\,\frac1n\sum_{i,k=1}^n EG_{ik}G_{ki} + O(n^{-1}),
\]
and so we have
\[
\frac1n\sum_{i,k=1}^n EG_{ik}G_{ki} = -\frac{Em_n}{z+2Em_n}\big(1+O(n^{-1})\big). \tag{3.40}
\]
In the same spirit, we have
\[
\frac1n\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki} = -\frac{1}{zn}\sum_{i,j=1}^n EG_{ij}G_{ji} + \frac{1}{zn}\sum_{i,j,k,l=1}^n EG_{ij}G_{jk}G_{kl}H_{li}
= -\frac{1}{zn}\sum_{i,j=1}^n EG_{ij}G_{ji} - \frac1z E\Big(\frac1n\sum_{i,j=1}^n G_{ij}G_{ji}\Big)^2 - \frac2z Em_n\,\frac1n\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki} + O(n^{-1}).
\]
Solving this equation and noting (3.40) yields
\[
\frac1n\sum_{i,j,k=1}^n EG_{ij}G_{jk}G_{ki} = \frac{1+o(1)}{z+2Em_n}\Big[\frac1n\sum_{i,k=1}^n EG_{ik}G_{ki} + \Big(\frac1n\sum_{i,j=1}^n EG_{ij}G_{ji}\Big)^2\Big]
= -\frac{zEm_n + (Em_n)^2}{(z+2Em_n)^3}\big(1+o(1)\big) = \frac{1+o(1)}{(z+2Em_n)^3},
\]

where in the last step we used the fact $zEm_n + (Em_n)^2 = -1 + o(1)$.

Next, we turn to prove (3.37). Since the proof is very similar to that of (3.36), we only give the main steps. It follows by the matrix identity that
\[
Em_n(z_1)m_n(z_2) = \frac1n\sum_{i=1}^n Em_n(z_1)G_{ii}(z_2) = \frac1n\sum_{i=1}^n Em_n(z_1)\Big(-\frac1{z_2} + \frac1{z_2}\sum_{k=1}^n G_{ik}(z_2)H_{ki}\Big)
= -\frac1{z_2}Em_n(z_1) + \frac{1}{z_2n}\sum_{i,k=1}^n Em_n(z_1)G_{ik}(z_2)H_{ki}. \tag{3.41}
\]
Applying the Stein equation to $m_n(z_2)$,
\[
\frac1n\sum_{i,k=1}^n Em_n(z_1)G_{ik}(z_2)H_{ki} = -Em_n(z_1)m_n(z_2)^2 - \frac1{n^3}\sum_{i,k,l}EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2). \tag{3.42}
\]
Substituting (3.42) into (3.41) yields
\[
Em_n(z_1)m_n(z_2) = -\frac1{z_2}Em_n(z_1) - \frac1{z_2}Em_n(z_1)m_n(z_2)^2 - \frac1{z_2n^3}\sum_{i,k,l}EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2).
\]
We now have
\[
Em_n(z_1)m_n(z_2) - Em_n(z_1)Em_n(z_2) = -\frac1{z_2}Em_n(z_1) - \frac1{z_2}Em_n(z_1)m_n(z_2)^2 - Em_n(z_1)Em_n(z_2) - \frac1{z_2n^3}\sum_{i,k,l=1}^n EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2).
\]
By virtue of Proposition 3.3 and (3.33), it follows that
\[
Em_n(z_1)m_n(z_2) - Em_n(z_1)Em_n(z_2) = -\frac{1}{z_2+2Em_n(z_2)}\,\frac1{n^3}\sum_{i,k,l=1}^n EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2) + o(n^{-2}). \tag{3.43}
\]

It remains to estimate the sum on the right-hand side of (3.43). To this end, note
\[
\frac1n\sum_{k,l=1}^n EG_{kl}(z_1)G_{lk}(z_2) = \frac1n\sum_{k,l=1}^n EG_{kl}(z_1)\Big(-\frac1{z_2}\delta_{k,l} + \frac1{z_2}\sum_{m=1}^n G_{lm}(z_2)H_{mk}\Big)
= -\frac1{z_2}Em_n(z_1) + \frac{1}{z_2n}\sum_{k,l,m=1}^n EG_{kl}(z_1)G_{lm}(z_2)H_{mk}
\]
\[
= -\frac1{z_2}Em_n(z_1) - \frac1{z_2}E\,m_n(z_1)\frac1n\sum_{k,l=1}^n G_{kl}(z_1)G_{lk}(z_2) - \frac1{z_2}E\,m_n(z_2)\frac1n\sum_{k,l=1}^n G_{kl}(z_1)G_{lk}(z_2)
\]
\[
= -\frac1{z_2}Em_n(z_1) - \frac1{z_2}Em_n(z_1)\,\frac1n\sum_{k,l=1}^n EG_{kl}(z_1)G_{lk}(z_2) - \frac1{z_2}Em_n(z_2)\,\frac1n\sum_{k,l=1}^n EG_{kl}(z_1)G_{lk}(z_2) + o(1),
\]
which immediately gives
\[
\frac1n\sum_{k,l=1}^n EG_{kl}(z_1)G_{lk}(z_2) = -\frac{Em_n(z_1)}{z_2 + Em_n(z_1) + Em_n(z_2)}\big(1+o(1)\big).
\]
Applying once again the matrix identity and the Stein equation, we obtain
\[
\frac1n\sum_{i,k,l=1}^n EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2) = -\frac{1}{z_2n}\sum_{k,l=1}^n EG_{kl}(z_1)G_{lk}(z_1)
- \frac1{z_2}E\Big(\frac1n\sum_{k,l=1}^n G_{kl}(z_1)G_{lk}(z_1)\Big)E\Big(\frac1n\sum_{k,l=1}^n G_{kl}(z_1)G_{lk}(z_2)\Big)
\]
\[
- \frac1{z_2}Em_n(z_1)\,\frac1n\sum_{i,k,l=1}^n EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2)
- \frac1{z_2}Em_n(z_2)\,\frac1n\sum_{i,k,l=1}^n EG_{kl}(z_1)G_{li}(z_1)G_{ik}(z_2) + o(1).
\]

In combination, we have
\[
Em_n(z_1)m_n(z_2) - Em_n(z_1)Em_n(z_2)
= -\frac{1+o(1)}{z_2+2Em_n(z_2)}\,\frac{Em_n(z_1)}{z_2+Em_n(z_2)+Em_n(z_1)}
\times\frac{1}{z_1+2Em_n(z_1)}\Big(1 - \frac{Em_n(z_1)}{z_2+Em_n(z_2)+Em_n(z_1)}\Big)\frac1{n^2}. \tag{3.44}
\]
To simplify the right-hand side of (3.44), we observe the following asymptotic formulae:
\[
Em_n(z) = m_{sc}(z)\big(1+o(1)\big)
\]
and
\[
\frac{Em_n(z_1)}{z_2+Em_n(z_2)+Em_n(z_1)} = \frac{Em_n(z_2)}{z_1+Em_n(z_2)+Em_n(z_1)}\big(1+o(1)\big).
\]
Thus a simple calculation now easily yields
\[
Em_n(z_1)m_n(z_2) - Em_n(z_1)Em_n(z_2)
= \frac{1+o(1)}{\sqrt{z_1^2-4}\sqrt{z_2^2-4}}\,\frac{2}{\sqrt{z_1^2-4}+\sqrt{z_2^2-4}+z_1-z_2}\times\frac{2}{\sqrt{z_1^2-4}+\sqrt{z_2^2-4}-(z_1-z_2)}\,\frac1{n^2}
\]
\[
= \frac{1+o(1)}{\sqrt{z_1^2-4}\sqrt{z_2^2-4}}\,\frac{2}{z_1z_2-4+\sqrt{(z_1^2-4)(z_2^2-4)}}\,\frac1{n^2}
= \frac{1+o(1)}{2\sqrt{z_1^2-4}\sqrt{z_2^2-4}}\,\frac{z_1z_2-4-\sqrt{(z_1^2-4)(z_2^2-4)}}{(z_1-z_2)^2}\,\frac1{n^2},
\]
as desired. $\square$
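The two algebraic identities used in this final simplification, namely $(s_1+s_2+d)(s_1+s_2-d) = 2(z_1z_2-4+s_1s_2)$ and $2/(z_1z_2-4+s_1s_2) = (z_1z_2-4-s_1s_2)/(2d^2)$, where $s_i = \sqrt{z_i^2-4}$ and $d = z_1-z_2$, hold for any fixed branch of the square root and can be spot-checked at a pair of sample points (the points below are arbitrary):

```python
import cmath

z1, z2 = 0.3 + 0.8j, -0.5 + 1.2j      # arbitrary non-real sample points
s1 = cmath.sqrt(z1 * z1 - 4)
s2 = cmath.sqrt(z2 * z2 - 4)
d = z1 - z2

# (s1 + s2 + d)(s1 + s2 - d) = 2 (z1 z2 - 4 + s1 s2)
assert abs((s1 + s2 + d) * (s1 + s2 - d) - 2 * (z1 * z2 - 4 + s1 * s2)) < 1e-12

# 2/(z1 z2 - 4 + s1 s2) = (z1 z2 - 4 - s1 s2)/(2 d^2),
# since (z1 z2 - 4)^2 - (z1^2-4)(z2^2-4) = 4 (z1 - z2)^2.
lhs = 2 / (z1 * z2 - 4 + s1 * s2)
rhs = (z1 * z2 - 4 - s1 * s2) / (2 * d * d)
assert abs(lhs - rhs) < 1e-12
print("identities verified")
```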
We have so far proved a kind of law of large numbers for $m_n(z)$ and provided precise estimates of $Em_n(z)$ and $\mathrm{Var}(m_n(z))$. Having these, one may ask how $m_n(z)$ fluctuates around its average. In the rest of this section we deal with this issue. It turns out that $m_n(z)$ asymptotically exhibits a normal fluctuation. More precisely, we have

Theorem 3.4. Define a random process by
\[
\zeta_n(z) = n\big(m_n(z) - Em_n(z)\big),\quad z\in\mathbb C\setminus\mathbb R.
\]
Then there is a Gaussian process $\Xi = \{\Xi(z), z\in\mathbb C\setminus\mathbb R\}$ with the covariance structure
\[
\mathrm{Cov}\big(\Xi(z_1),\Xi(z_2)\big) = \frac{1}{2(z_1-z_2)^2}\Big(\frac{z_1z_2-4}{\sqrt{z_1^2-4}\cdot\sqrt{z_2^2-4}} - 1\Big)
\]
such that
\[
\zeta_n \Rightarrow \Xi,\quad n\to\infty.
\]

Proof. We use a standard argument from the theory of weak convergence of processes; that is, we will verify both finite-dimensional distribution convergence and uniform tightness below.

Start by proving the uniform tightness. As in (3.28), we have
\[
E|m_n(z_1) - m_n(z_2)|^2 \le \frac1n E\big|\nabla(m_n(z_1)-m_n(z_2))\big|^2
= \frac1n\Big[\sum_{i=1}^n E\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial H_{ii}}\Big|^2
+ \sum_{i<j}E\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial\mathrm{Re}H_{ij}}\Big|^2
+ \sum_{i<j}E\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial\mathrm{Im}H_{ij}}\Big|^2\Big]. \tag{3.45}
\]
Observe the eigendecomposition (3.25). Then it follows that
\[
\frac{\partial\lambda_k}{\partial H_{ii}} = u_{ik}u_{ik}^*,\qquad
\frac{\partial\lambda_k}{\partial\mathrm{Re}H_{ij}} = u_{ik}u_{jk}^* + u_{ik}^*u_{jk} = 2\mathrm{Re}(u_{ik}u_{jk}^*),\qquad
\frac{\partial\lambda_k}{\partial\mathrm{Im}H_{ij}} = i\big(u_{ik}^*u_{jk} - u_{ik}u_{jk}^*\big) = 2\mathrm{Im}(u_{ik}u_{jk}^*).
\]
Hence we have
\[
\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial H_{ii}} = \sum_{k=1}^n\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial\lambda_k}\cdot\frac{\partial\lambda_k}{\partial H_{ii}}
= -\frac1n\sum_{k=1}^n\Big(\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big)u_{ik}u_{ik}^*,
\]
and so
\[
\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial H_{ii}}\Big|^2
\le \frac1{n^2}\sum_{k,l=1}^n\Big(\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big)
\Big(\frac{1}{(\lambda_l-\bar z_1)^2} - \frac{1}{(\lambda_l-\bar z_2)^2}\Big)u_{ik}u_{ik}^*u_{il}u_{il}^*. \tag{3.46}
\]

Similarly,
\[
\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial\mathrm{Re}H_{ij}}\Big|^2
\le \frac4{n^2}\sum_{k,l=1}^n\Big(\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big)
\Big(\frac{1}{(\lambda_l-\bar z_1)^2} - \frac{1}{(\lambda_l-\bar z_2)^2}\Big)\mathrm{Re}(u_{ik}u_{jk}^*)\mathrm{Re}(u_{il}u_{jl}^*) \tag{3.47}
\]
and
\[
\Big|\frac{\partial(m_n(z_1)-m_n(z_2))}{\partial\mathrm{Im}H_{ij}}\Big|^2
\le \frac4{n^2}\sum_{k,l=1}^n\Big(\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big)
\Big(\frac{1}{(\lambda_l-\bar z_1)^2} - \frac{1}{(\lambda_l-\bar z_2)^2}\Big)\mathrm{Im}(u_{ik}u_{jk}^*)\mathrm{Im}(u_{il}u_{jl}^*). \tag{3.48}
\]
Substituting (3.46)-(3.48) into (3.45) yields
\[
E|m_n(z_1)-m_n(z_2)|^2 \le \frac4{n^3}\sum_{i,j,k,l=1}^n\Big(\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big)
\Big(\frac{1}{(\lambda_l-\bar z_1)^2} - \frac{1}{(\lambda_l-\bar z_2)^2}\Big)u_{ik}u_{jk}^*u_{il}^*u_{jl}.
\]
Note by orthonormality that
\[
\sum_{i=1}^n u_{ik}u_{il}^* = \delta_{k,l},\qquad \sum_{j=1}^n u_{jk}u_{jl}^* = \delta_{k,l},
\]
so that
\[
\sum_{i,j,k,l=1}^n u_{ik}u_{jk}^*u_{il}^*u_{jl} = n.
\]
We have
\[
E|m_n(z_1)-m_n(z_2)|^2 \le \frac4{n^3}\sum_{k=1}^n\Big|\frac{1}{(\lambda_k-z_1)^2} - \frac{1}{(\lambda_k-z_2)^2}\Big|^2 \le \frac{4}{n^2\eta^6}|z_1-z_2|^2,
\]
from which we can establish the uniform tightness for $m_n(z)$.

Proceed to proving finite-dimensional distribution convergence. Fix $z_1, z_2, \cdots, z_q \in \mathbb C\setminus\mathbb R$. It is enough to prove
\[
\big(\zeta_n(z_1),\cdots,\zeta_n(z_q)\big) \stackrel{d}{\longrightarrow} \big(\Xi(z_1),\cdots,\Xi(z_q)\big),\quad n\to\infty.
\]

Equivalently, for any $c_1,\cdots,c_q$,
\[
\sum_{l=1}^q c_l\zeta_n(z_l) \stackrel{d}{\longrightarrow} \sum_{l=1}^q c_l\Xi(z_l),\quad n\to\infty.
\]
Let
\[
X_n = \sum_{l=1}^q c_l\zeta_n(z_l).
\]
We shall prove that for any $t\in\mathbb R$,
\[
EX_ne^{itX_n} - itEX_n^2\,Ee^{itX_n} \to 0.
\]
This will again be done using the Stein equation. For simplicity and clarity, we only deal with the one-dimensional case below. In particular, we shall prove
\[
E\zeta_ne^{it\zeta_n} - itE\zeta_n^2\,Ee^{it\zeta_n} \to 0.
\]
Namely,
\[
nEm_ne^{it\zeta_n} - nEm_n\,Ee^{it\zeta_n} - itn^2\big(Em_n^2 - (Em_n)^2\big)Ee^{it\zeta_n} \to 0. \tag{3.49}
\]
Following the strategy of the proof of Proposition 3.3, it follows that
\[
Em_ne^{it\zeta_n} = -\frac1z Ee^{it\zeta_n} - \frac1z Em_n^2e^{it\zeta_n} - \frac{it}{zn^2}E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n}.
\]
We have by virtue of (3.33)
\[
n\big(Em_ne^{it\zeta_n} - Em_n\,Ee^{it\zeta_n}\big) = -\frac nz\big(Em_n^2e^{it\zeta_n} - Em_n^2\,Ee^{it\zeta_n}\big) - \frac{it}{zn}E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n}. \tag{3.50}
\]
Likewise, it follows that
\[
Em_n^2e^{it\zeta_n} = -\frac1z Em_ne^{it\zeta_n} - \frac1z Em_n^3e^{it\zeta_n} - \frac1{zn^3}E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n} - \frac{it}{zn^2}E\,m_n\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n}.
\]
We have by virtue of (3.38)
\[
Em_n^2e^{it\zeta_n} - Em_n^2\,Ee^{it\zeta_n} = -\frac1z E(m_n - Em_n)e^{it\zeta_n} - \frac1z E\big(m_n^3 - Em_n^3\big)e^{it\zeta_n}
- \frac1{zn^3}\Big(E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n} - E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}\,Ee^{it\zeta_n}\Big)
- \frac{it}{zn^2}E\,m_n\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n}.
\]

Also, using some simple algebra and Proposition 3.3 yields
\[
E\big(m_n^3 - Em_n^3\big)e^{it\zeta_n} = 3(Em_n)^2E(m_n - Em_n)e^{it\zeta_n} + o(n^{-2}).
\]
In turn, this implies
\[
Em_n^2e^{it\zeta_n} - Em_n^2\,Ee^{it\zeta_n} = -\frac{1+3(Em_n)^2}{z}E(m_n - Em_n)e^{it\zeta_n} - \frac{itEm_n}{zn^2}E\sum_{i,k,l=1}^n G_{ik}G_{kl}G_{li}e^{it\zeta_n} + o(n^{-2}). \tag{3.51}
\]
Substituting (3.51) into (3.50) and solving the equation,
\[
\Big(1 - \frac{1+3(Em_n)^2}{z^2}\Big)n\big(Em_ne^{it\zeta_n} - Em_n\,Ee^{it\zeta_n}\big)
= it\Big(\frac{Em_n}{z^2} - \frac1z\Big)\frac1n\sum_{i,k,l=1}^n EG_{ik}G_{kl}G_{li}\,Ee^{it\zeta_n} + o(1).
\]
Note by (3.34)
\[
1 - \frac{1+3(Em_n)^2}{z^2} = -(z+2Em_n)\Big(\frac{Em_n}{z^2} - \frac1z\Big) + o(1).
\]
In combination with (3.39), it is now easy to see that (3.49) holds true, as desired. $\square$
To conclude this section, let us turn to linear eigenvalue statistics. This is a very interesting and well-studied object in random matrix theory. Let $f: \mathbb R\to\mathbb R$ be a real-valued measurable function. A linear eigenvalue statistic with test function $f$ is defined by
\[
T_n(f) = \frac1n\sum_{i=1}^n f(\lambda_i), \tag{3.52}
\]
where the $\lambda_i$'s are the eigenvalues of the normalized GUE matrix $H_n$.
As shown in Theorem 3.2, if $f$ is bounded and continuous, then
\[
T_n(f) \stackrel{P}{\longrightarrow} \int_{-2}^2 f(x)\rho_{sc}(x)dx,\quad n\to\infty.
\]
This is a weak law of large numbers for the eigenvalues. From a probabilistic viewpoint, the next natural issue is to take a closer look at the fluctuation: under what conditions does one have asymptotic normality? As a matter of fact, this is usually a crucial problem in statistical inference theory.
As an immediate application of Theorem 3.4, we can easily derive a central limit theorem for a class of analytic test functions.

Theorem 3.5. Suppose that $f: \mathbb R\to\mathbb R$ is a bounded continuous function which is analytic in a region containing the real line. Then
\[
n\Big(T_n(f) - \int_{-2}^2 f(x)\rho_{sc}(x)dx\Big) \stackrel{d}{\longrightarrow} N\big(0,\sigma_f^2\big),\quad n\to\infty, \tag{3.53}
\]
where the variance $\sigma_f^2$ is given by
\[
\sigma_f^2 = \frac1{4\pi^2}\int_{-2}^2\int_{-2}^2\frac{4-xy}{\sqrt{4-x^2}\sqrt{4-y^2}}\,\frac{(f(x)-f(y))^2}{(x-y)^2}\,dxdy. \tag{3.54}
\]
Proof. Without loss of generality, we may and do assume that $f$ is analytic in the region $\{z = x+i\eta: x\in\mathbb R, |\eta|\le1\}$. According to the Cauchy integral formula,
\[
f(x) = \frac{1}{2\pi i}\oint_{|z|=1}\frac{f(z)}{x-z}\,dz,
\]
which in turn implies
\[
T_n(f) = \frac{1}{2\pi i}\oint_{|z|=1}f(z)m_n(z)dz.
\]
Hence it follows from Proposition 3.3 that
\[
ET_n(f) = \frac{1}{2\pi i}\oint_{|z|=1}f(z)Em_n(z)dz = \frac{1}{2\pi i}\oint_{|z|=1}f(z)m_{sc}(z)\big(1+O(n^{-2})\big)dz = \int_{-2}^2 f(x)\rho_{sc}(x)dx + O(n^{-2}).
\]
In addition, it also follows from Theorem 3.4 that
\[
n\big(T_n(f) - ET_n(f)\big) = \frac{1}{2\pi i}\oint_{|z|=1}f(z)\,n\big(m_n(z) - Em_n(z)\big)dz \stackrel{d}{\longrightarrow} \frac{1}{2\pi i}\oint_{|z|=1}f(z)\Xi(z)dz,
\]
where the convergence is a standard application of the continuous mapping theorem.
To get the variance, note the following integral identity:
\[
\mathrm{Cov}\big(\Xi(z_1),\Xi(z_2)\big) = \frac{1}{2(z_1-z_2)^2}\Big(\frac{z_1z_2-4}{\sqrt{z_1^2-4}\cdot\sqrt{z_2^2-4}} - 1\Big)
= \frac1{4\pi^2}\int_{-2}^2\int_{-2}^2\frac{xy-4}{(x-y)^2\sqrt{4-x^2}\sqrt{4-y^2}}\Big(\frac{1}{z_1-x} - \frac{1}{z_1-y}\Big)\Big(\frac{1}{z_2-x} - \frac{1}{z_2-y}\Big)dxdy.
\]

Therefore we have
\[
\sigma_f^2 = \frac{1}{(2\pi i)^2}\oint_{|z_1|=1}\oint_{|z_2|=1}f(z_1)f(z_2)\mathrm{Cov}\big(\Xi(z_1),\Xi(z_2)\big)dz_1dz_2
\]
\[
= \frac1{4\pi^2}\int_{-2}^2\int_{-2}^2\frac{xy-4}{(x-y)^2\sqrt{4-x^2}\sqrt{4-y^2}}\,\frac{1}{(2\pi i)^2}\oint_{|z_1|=1}\oint_{|z_2|=1}f(z_1)f(z_2)\Big(\frac{1}{z_1-x} - \frac{1}{z_1-y}\Big)\Big(\frac{1}{z_2-x} - \frac{1}{z_2-y}\Big)dz_1dz_2\,dxdy
\]
\[
= \frac1{4\pi^2}\int_{-2}^2\int_{-2}^2\frac{4-xy}{\sqrt{4-x^2}\sqrt{4-y^2}}\,\frac{(f(x)-f(y))^2}{(x-y)^2}\,dxdy.
\]
The proof is now complete. $\square$
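For a concrete test function, the double integral (3.54) can be evaluated by Gauss-Chebyshev quadrature, which handles the weight $1/\sqrt{4-x^2}$ exactly for polynomial integrands. The sketch below (the node count $N$ is an arbitrary choice) takes $f(x) = x^2$ and recovers $\sigma_f^2 = 2$, which agrees with the elementary computation of $\mathrm{Var}(\mathrm{tr}\,H_n^2)$ directly from the independent Gaussian entries of $H_n$.

```python
import numpy as np

# Gauss-Chebyshev quadrature on [-2, 2]: (pi/N) * sum g(2 cos t_k) equals the
# integral of g(x)/sqrt(4 - x^2) exactly for polynomials of modest degree.
N = 400
t = (2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)
x = 2 * np.cos(t)
w = np.pi / N

X, Y = np.meshgrid(x, x)
# For f(u) = u^2, (f(x) - f(y))^2 / (x - y)^2 simplifies to (x + y)^2.
sigma2 = (w * w * ((4 - X * Y) * (X + Y) ** 2).sum()) / (4 * np.pi ** 2)
print(sigma2)   # -> 2.0
```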
It has been an interesting problem to study fluctuations of linear eigenvalue statistics for as wide a class of test functions as possible. In Theorem 3.5, the analyticity hypothesis was only required in order to use the Cauchy integral formula. This condition can be replaced by other regularity properties. For instance, Lytova and Pastur (2009) proved that Theorem 3.5 remains valid for a bounded continuously differentiable test function with bounded derivative. Johansson (1998) studied the global fluctuation of eigenvalues to manifest the regularity of the eigenvalue distribution. In particular, assume that $f: \mathbb R\to\mathbb R$ is not too large for large values of $x$:
(i) $|f(x)| \le L(x^2+1)$ for some constant $L$ and all $x\in\mathbb R$;
(ii) $|f'(x)| \le q(x)$ for some polynomial $q(x)$ and all $x\in\mathbb R$;
(iii) for each $x_0$, there exists an $\alpha>0$ such that $f(x)\psi_{x_0}(x)\in H^{2+\alpha}$, where $H^{2+\alpha}$ is the standard Sobolev space and $\psi_{x_0}(x)$ is an infinitely differentiable function such that $|\psi_{x_0}(x)|\le1$ and
\[
\psi_{x_0}(x) = \begin{cases}1, & |x|\le x_0,\\ 0, & |x| > x_0+1.\end{cases}
\]
Then (3.53) is also valid with $\sigma_f^2$ given by
\[
\sigma_f^2 = \frac1{4\pi^2}\int_{-2}^2\int_{-2}^2\frac{f'(x)f(y)}{y-x}\sqrt{\frac{4-x^2}{4-y^2}}\,dxdy. \tag{3.55}
\]
Here we note that the right-hand sides of (3.54) and (3.55) are equal.

3.3 Number of eigenvalues in an interval

In this section we are further concerned with the question of how many eigenvalues lie in a given interval. Consider the standard GUE

matrix $A_n$, and denote by $\lambda_1, \lambda_2, \cdots, \lambda_n$ its eigenvalues. For $[a,b]\subseteq(-2,2)$, define $N_n(a,b)$ to be the number of normalized eigenvalues lying in $[a,b]$, namely
\[
N_n(a,b) = \#\big\{1\le i\le n: \sqrt n\,a \le \lambda_i \le \sqrt n\,b\big\}.
\]
According to the Wigner semicircle law, Theorem 3.2,
\[
\frac{N_n(a,b)}{n} \stackrel{P}{\longrightarrow} \int_a^b\rho_{sc}(x)dx,\quad n\to\infty.
\]
In fact, using the asymptotic behavior of Hermite orthogonal polynomials as in Section 3.1, we can further obtain

Proposition 3.4.
\[
EN_n(a,b) = n\int_a^b\rho_{sc}(x)dx + O(1) \tag{3.56}
\]
and
\[
\mathrm{Var}\big(N_n(a,b)\big) = \frac1{2\pi^2}\log n\,\big(1+o(1)\big). \tag{3.57}
\]
Proof. (3.56) is trivial since the average spectral density function $\bar p_n(x)$ converges uniformly to $\rho_{sc}(x)$ on $[a,b]$.
To prove (3.57), note the following variance formula:
\[
\mathrm{Var}(N_n(a,b)) = n\int_a^b K_n\big(\sqrt nx,\sqrt nx\big)dx - n\int_a^b\int_a^b K_n\big(\sqrt nx,\sqrt ny\big)^2dxdy
= n\int_a^b\int_b^\infty K_n\big(\sqrt nx,\sqrt ny\big)^2dxdy + n\int_a^b\int_{-\infty}^a K_n\big(\sqrt nx,\sqrt ny\big)^2dxdy =: I_1+I_2.
\]
We shall estimate the integrals $I_1$ and $I_2$ below. The focus is on $I_1$, since $I_2$ is completely similar. A change of variables easily gives
\[
I_1 = n\int_{b-a}^\infty dv\int_0^{b-a}K_n\big(\sqrt n(b-u),\sqrt n(b-u+v)\big)^2du + n\int_0^{b-a}dv\int_0^v K_n\big(\sqrt n(b-u),\sqrt n(b-u+v)\big)^2du =: I_{1,1}+I_{1,2}.
\]

It is easy to control $I_{1,1}$ from above. In fact, when $v\ge b-a$,
\[
K_n\big(\sqrt nx,\sqrt ny\big)^2 \le \frac{1}{(b-a)^2}\big(\varphi_n(\sqrt nx)\varphi_{n-1}(\sqrt ny) - \varphi_n(\sqrt ny)\varphi_{n-1}(\sqrt nx)\big)^2,
\]
where $x = b-u$, $y = b-u+v$. Hence we have by the orthogonality of $\varphi_n$ and $\varphi_{n-1}$
\[
I_{1,1} \le \frac{1}{(b-a)^2}\,n\int_{-\infty}^\infty\int_{-\infty}^\infty\big(\varphi_n(\sqrt nx)\varphi_{n-1}(\sqrt ny) - \varphi_n(\sqrt ny)\varphi_{n-1}(\sqrt nx)\big)^2dxdy \le \frac{2}{(b-a)^2}.
\]
Turn to estimating $I_{1,2}$. Note
\[
\lim_{y\to x}K_n(x,y) = n\big(\varphi_n'(x)\varphi_{n-1}(x) - \varphi_n(x)\varphi_{n-1}'(x)\big)
\]
and
\[
\|\varphi_l\|_\infty \le \kappa,\qquad \|\varphi_l'\|_\infty \le l\kappa.
\]
So
\[
n\int_0^{1/n}dv\int_0^v K_n\big(\sqrt n(b-u),\sqrt n(b-u+v)\big)^2du = O(1).
\]
For the integral over $(1/n, b-a)$, we use Lemma 3.2 to get
\[
n^{1/2}\big(\varphi_n(\sqrt nx)\varphi_{n-1}(\sqrt ny) - \varphi_n(\sqrt ny)\varphi_{n-1}(\sqrt nx)\big)
= \frac2\pi\frac{1}{(4-x^2)^{1/4}(4-y^2)^{1/4}}\Big[\cos\Big(n\alpha(\theta_1)+\frac{\theta_1}2-\frac\pi4\Big)\cos\Big(n\alpha(\theta_2)-\frac\pi4\Big)
- \cos\Big(n\alpha(\theta_2)+\frac{\theta_2}2-\frac\pi4\Big)\cos\Big(n\alpha(\theta_1)-\frac\pi4\Big)\Big] + O(n^{-1})
\]
\[
= \frac1{2\pi}\frac{(4-xy)^{1/2}}{(4-x^2)^{1/4}(4-y^2)^{1/4}} + O(n^{-1}).
\]
Thus with $x = b-u$ and $y = b-u+v$,
\[
n\int_{1/n}^{b-a}dv\int_0^v K_n\big(\sqrt n(b-u),\sqrt n(b-u+v)\big)^2du
= \frac1{4\pi^2}\int_{1/n}^{b-a}dv\int_0^v\frac{4-xy}{v^2(4-x^2)^{1/2}(4-y^2)^{1/2}}du + O(n^{-1})\int_{1/n}^{b-a}dv\int_0^v\frac{1}{v^2}du.
\]

Trivially, it follows that
\[
\int_{1/n}^{b-a}dv\int_0^v\frac{1}{v^2}du = O(\log n).
\]
We also note
\[
\sup_{x,y\in(a,b)}\frac{4-xy}{(4-x^2)^{1/2}(4-y^2)^{1/2}} \le C_{a,b}
\]
for some positive constant $C_{a,b}$. Then for any $\varepsilon>0$
\[
\int_\varepsilon^{b-a}dv\int_0^v\frac{4-xy}{v^2(4-x^2)^{1/2}(4-y^2)^{1/2}}du = O(|\log\varepsilon|). \tag{3.58}
\]
On the other hand, it is easy to see that
\[
\frac{4-xy}{(4-x^2)^{1/2}(4-y^2)^{1/2}} = 1 + O(\varepsilon^2),\quad 0<v<\varepsilon.
\]
Hence we have
\[
\int_{1/n}^\varepsilon dv\int_0^v\frac{4-xy}{v^2(4-x^2)^{1/2}(4-y^2)^{1/2}}du = \big(1+O(\varepsilon^2)\big)\big(\log n - |\log\varepsilon|\big). \tag{3.59}
\]
Combining (3.58) and (3.59) together yields
\[
\int_{1/n}^{b-a}dv\int_0^v\frac{4-xy}{v^2(4-x^2)^{1/2}(4-y^2)^{1/2}}du = \big(1+O(\varepsilon^2)\big)\log n + O(|\log\varepsilon|),
\]
and letting $\varepsilon\to0$ after $n\to\infty$ gives (3.57). $\square$

We remark that the linear eigenvalue statistic $\sum_{i=1}^n f(\lambda_i/\sqrt n)$ has variance at most 1 whenever $f$ is a 1-Lipschitz test function; see (3.24). On the other hand, the counting function is not a 1-Lipschitz function. Proposition 3.4 provides a $\log n$ estimate for the size of the variance of $N_n(a,b)$.

Having the proposition, one would expect asymptotically normal fluctuations for $N_n(a,b)$. Below is the main result of this section.

Theorem 3.6. Under the above assumptions, as $n\to\infty$,
\[
\frac{1}{\sqrt{\frac{1}{2\pi^2}\log n}}\Big(N_n(a,b) - n\int_a^b\rho_{sc}(x)dx\Big) \stackrel{d}{\longrightarrow} N(0,1).
\]
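Proposition 3.4 and Theorem 3.6 can be explored by simulation. The sketch below (matrix size, interval and trial count are arbitrary choices) estimates $EN_n(a,b)$ and $\mathrm{Var}(N_n(a,b))$ for a modest $n$: the mean matches $n\int_a^b\rho_{sc}$ to within $O(1)$, while the variance is of order $\log n$, i.e. far smaller than the Poisson guess $\approx EN_n$. Convergence of the variance constant is logarithmically slow, so only a loose comparison is meaningful at this size.

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials, a, b = 300, 200, -1.0, 1.0

def semicircle_mass(a, b):
    # integral of the semicircle density sqrt(4 - x^2)/(2 pi) over [a, b]
    F = lambda t: (t * np.sqrt(4 - t * t) / 2 + 2 * np.arcsin(t / 2)) / (2 * np.pi)
    return F(b) - F(a)

counts = np.empty(trials)
for i in range(trials):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lam = np.linalg.eigvalsh((A + A.conj().T) / (2 * np.sqrt(n)))
    counts[i] = np.sum((lam >= a) & (lam <= b))

print(counts.mean() - n * semicircle_mass(a, b))  # O(1), per (3.56)
print(counts.var())  # order log n, not order n
```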

The rest of this section is devoted to the proof of Theorem 3.6. In fact, we shall prove the theorem in a more general setting. To do this, we need to introduce some basic definitions and properties of determinantal point processes. Recall that a point process $X$ on $\mathbb R$ is a random configuration such that any bounded set contains only finitely many points. The law of $X$ is usually characterized by the family of integer-valued random variables $\{N_X(A), A\in\mathcal B\}$, where $N_X(A)$ denotes the number of points of $X$ in $A$. Besides, the correlation function has become a very useful concept in describing the properties of point processes. The so-called correlation function was first introduced to the study of point processes by Macchi (1975). Given a point process $X$, its $k$-point correlation function is defined by
\[
\rho_k(x_1,\cdots,x_k) = \lim_{\delta\to0}\frac{1}{(2\delta)^k}P\big((x_i-\delta, x_i+\delta)\cap X \ne \emptyset,\ 1\le i\le k\big),
\]
where $x_1,\cdots,x_k\in\mathbb R$. Here we only consider the continuous case; the corresponding discrete case will be given in Chapter 4.

It turns out that correlation functions are a powerful and convenient tool for computing moments of $N_X(A)$. In fact, it is easy to see that
\[
E\big(N_X(A)\big)^{\downarrow k} = \int_{A^{\otimes k}}\rho_k(x_1,\cdots,x_k)dx_1\cdots dx_k, \tag{3.60}
\]
where $m^{\downarrow k} = m(m-1)\cdots(m-k+1)$, and
\[
E\prod_{i=1}^k N_X(A_i) = \int_{A_1\times\cdots\times A_k}\rho_k(x_1,\cdots,x_k)dx_1\cdots dx_k.
\]
A point process $X$ is said to be determinantal if there exists a kernel function $K_X: \mathbb R\times\mathbb R\mapsto\mathbb R$ such that
\[
\rho_k(x_1,\cdots,x_k) = \det\big(K_X(x_i,x_j)\big)_{k\times k}
\]
for any $k\ge1$ and $x_1,\cdots,x_k\in\mathbb R$.

Determinantal point processes have attracted a lot of attention in the past two decades, and more and more interesting examples have been found in seemingly distinct problems. For instance, the GUE model $A_n$ is a determinantal point process with $X = \{\lambda_1, \lambda_2, \cdots, \lambda_n\}$ and kernel function $K_X = K_n$ given by (3.7). Another well-known example is the Poisson point process on $\mathbb R$. Let $P$ be a Poisson point process with intensity function $\varrho(x)$; then $P$ can be viewed as a determinantal process having $K_P(x,y) = \varrho(x)\delta_{x,y}$. Note that a Poisson point process is an independent point process, that is, a two-point correlation function is equal

to the product of two one-point correlation functions. However, a general determinantal point process is a negatively associated process, since $\rho_2(x,y)\le\rho_1(x)\rho_1(y)$.

Note that no claim has been made about the existence or uniqueness of a determinantal point process for a given kernel $K$. To address these issues, we need to make some additional assumptions below. The kernel $K$ is required to be symmetric and nonnegative definite, that is, $K(x,y) = K(y,x)$ for every $x,y\in\mathbb R$ and $\det(K(x_i,x_j))_{k\times k}\ge0$ for any $x_1, x_2, \cdots, x_k\in\mathbb R$. We further assume that $K$ is locally square integrable on $\mathbb R^2$; this means that for any compact $D\subseteq\mathbb R$, we have
\[
\int_{D^2}|K(x,y)|^2dxdy < \infty.
\]
Then we may use $K$ as an integral kernel to define an associated integral operator
\[
\mathcal Kf(x) = \int_{\mathbb R}K(x,y)f(y)dy
\]
for functions $f\in L^2(\mathbb R, dx)$ with compact support.

For a compact set $D$, the restriction of $\mathcal K$ to $D$ is the bounded linear operator $\mathcal K_D$ on $L^2(D)$ defined by
\[
\mathcal K_Df(x) = \int_D K(x,y)f(y)dy,\quad x\in D.
\]
Thus $\mathcal K_D$ is a self-adjoint compact operator. Let $q_n^D$, $n\ge1$, be the nonnegative eigenvalues of $\mathcal K_D$; the corresponding eigenfunctions $\phi_n^D$ form an orthonormal basis of $L^2(D, dx)$. We say that $\mathcal K_D$ is of trace class if
\[
\sum_{n=1}^\infty|q_n^D| < \infty.
\]
If $\mathcal K_D$ is of trace class for every compact subset $D$, then we say that $\mathcal K$ is locally of trace class. The following two lemmas characterize the existence and uniqueness of a determinantal point process with a given kernel.

Lemma 3.5. Let $X$ be a determinantal point process with kernel $K_X$. If $EN_X(A) < \infty$, then $Ee^{tN_X(A)} < \infty$ for any $t\in\mathbb R$. Consequently, for each compact set $D$, the distribution of $N_X(D)$ is uniquely determined by $K_X$.
Proof. It easily follows that
\[
Ee^{tN_X(A)} = E\big(1 + (e^t-1)\big)^{N_X(A)} = \sum_{k=0}^\infty\frac{(e^t-1)^k}{k!}E\big(N_X(A)\big)^{\downarrow k}.
\]

Also, by (3.60) and the Hadamard inequality for nonnegative definite matrices,
\[
E\big(N_X(A)\big)^{\downarrow k} = \int_{A^{\otimes k}}\det\big(K_X(x_i,x_j)\big)dx_1\cdots dx_k \le \Big(\int_A K_X(x_1,x_1)dx_1\Big)^k = \big(EN_X(A)\big)^k.
\]
Therefore, we have
\[
Ee^{tN_X(A)} \le \sum_{k=0}^\infty\frac{|e^t-1|^k}{k!}\big(EN_X(A)\big)^k < \infty.
\]
For any compact set $D$, $EN_X(D)<\infty$ since $\int_D K_X(x,x)dx < \infty$, so $Ee^{tN_X(D)}<\infty$ for all $t\in\mathbb R$. $\square$

Lemma 3.6. Assume that $K$ is a symmetric and nonnegative definite kernel function such that the associated integral operator $\mathcal K$ is locally of trace class. Then $K$ defines a determinantal point process on $\mathbb R$ if and only if the spectrum of $\mathcal K$ is contained in $[0,1]$.
Proof. See Theorem 4.5.5 of Soshnikov (2000). $\square$

Theorem 3.7. Let $X_n$, $n\ge1$, be a sequence of determinantal point processes with kernels $K_{X_n}$ on $\mathbb R$, and let $I_n$, $n\ge1$, be a sequence of intervals on $\mathbb R$. Assume that $K_{X_n}\cdot1_{I_n}$ defines an integral operator of locally trace class. Set $N_n = N_{X_n}(I_n)$. If $\mathrm{Var}(N_n)\to\infty$ as $n\to\infty$, then
\[
\frac{N_n - EN_n}{\sqrt{\mathrm{Var}(N_n)}} \stackrel{d}{\longrightarrow} N(0,1).
\]
The theorem was first proved by Costin and Lebowitz (1995) in a very special case: they considered only the sine point process, with kernel $K_{Sine}(x,y) = \sin(x-y)/(x-y)$, and Widom suggested that it would hold for the GUE model. Later on, Soshnikov (2002) extended it to general determinantal random point fields, including the Bessel and Airy point processes.

The proof of the theorem is quite interesting. A basic strategy is to use the moment method, namely Theorem 1.8. Set
\[
X_n = \frac{N_n - EN_n}{\sqrt{\mathrm{Var}(N_n)}}.
\]
Trivially, $\tau_1(X_n) = 0$, $\tau_2(X_n) = 1$ and
\[
\tau_k(X_n) = \frac{\gamma_k(N_n)}{(\mathrm{Var}N_n)^{k/2}},\quad k\ge3.
\]

Then it suffices to show
\[
\gamma_k(N_n) = o\big(\gamma_2(N_n)^{k/2}\big),\quad k\ge3,
\]
provided $\gamma_2(N_n) = \mathrm{Var}(N_n)\to\infty$.
Proof. For the sake of clarity, we write $\gamma_k$ for $\gamma_k(N_n)$. A key ingredient is to express each $\gamma_k$ in terms of the correlation functions, and so of the kernel functions, of $X_n$. Start with $\gamma_3$. It follows from (1.17) that
\[
\gamma_3 = EN_n^3 - 3EN_n^2\,EN_n + 2(EN_n)^3
= E(N_n)^{\downarrow3} + 3E(N_n)^{\downarrow2} + EN_n - 3E(N_n)^{\downarrow2}EN_n - 3(EN_n)^2 + 2(EN_n)^3.
\]
Also, we have by (3.60) and a simple algebra
\[
\gamma_3 = 2\int_{I_n^{\otimes3}}K_n(x_1,x_2)K_n(x_2,x_3)K_n(x_3,x_1)dx_1dx_2dx_3
- 3\int_{I_n^{\otimes2}}K_n(x_1,x_2)K_n(x_2,x_1)dx_1dx_2 + \int_{I_n}K_n(x_1,x_1)dx_1.
\]
To obtain a general formula for $\gamma_k$, we need to introduce the $k$-point cluster function, namely
\[
\alpha_k(x_1,\cdots,x_k) = \sum_G(-1)^{l-1}(l-1)!\prod_{j=1}^l\rho_{|G_j|}\big(\bar x(G_j)\big),
\]
where $1\le l\le k$, $G = (G_1,\cdots,G_l)$ runs over the partitions of the set $\{1,2,\cdots,k\}$, $|G_j|$ stands for the size of $G_j$, and $\bar x(G_j) = \{x_i, i\in G_j\}$.
Using the Möbius inversion formula, we can express the correlation functions in terms of the Ursell functions as follows:
\[
\rho_k(x_1,\cdots,x_k) = \sum_G\prod_{j=1}^l\alpha_{|G_j|}\big(\bar x(G_j)\big).
\]
Moreover, we have an elegant formula in the setting of determinantal point processes:
\[
\alpha_k(x_1,\cdots,x_k) = (-1)^{k-1}\sum_\sigma K_{X_n}(x_1,x_{\sigma(1)})K_{X_n}(x_{\sigma(1)},x_{\sigma(2)})\cdots K_{X_n}(x_{\sigma(k)},x_1), \tag{3.61}
\]
where the sum is over all cyclic permutations $(\sigma(1),\cdots,\sigma(k))$ of $(1,2,\cdots,k)$. Define
\[
\beta_k = \int_{I_n^{\otimes k}}\alpha_k(x_1,\cdots,x_k)dx_1\cdots dx_k,\quad k\ge1.
\]

Then it is not hard to see that
\[
\beta_k = \sum_G(-1)^{l-1}(l-1)!\prod_{j=1}^l\int_{I_n^{\otimes|G_j|}}\rho_{|G_j|}\big(\bar x(G_j)\big)d\bar x(G_j)
= \sum_G(-1)^{l-1}(l-1)!\prod_{j=1}^l E(N_n)^{\downarrow|G_j|}
= \sum_{\tau\vdash k}\frac{k!}{\prod_i\tau_i!\,m_{\tau_i}!}(-1)^{l-1}(l-1)!\prod_{j=1}^l E(N_n)^{\downarrow\tau_j},
\]
where $\tau = (\tau_1,\cdots,\tau_l)\vdash k$ is an integer partition of $k$, and $m_{\tau_i}$ stands for the multiplicity of $\tau_i$ in $\tau$. We can derive from (1.17)
\[
\sum_{k=1}^\infty\frac{\beta_k}{k!}(e^t-1)^k = \log\sum_{k=0}^\infty\frac{E(N_n)^{\downarrow k}}{k!}(e^t-1)^k = \log Ee^{tN_n} = \sum_{k=1}^\infty\frac{\gamma_k}{k!}t^k. \tag{3.62}
\]
Comparing the coefficients of the term $t^k$ on both sides of (3.62), we obtain
\[
\gamma_k = \sum_{l=1}^k\frac{\beta_l}{l!}\sum_{\tau_1+\cdots+\tau_l=k}\frac{k!}{\tau_1!\cdots\tau_l!}.
\]
Equivalently,
\[
\gamma_k = \beta_k + \sum_{j=1}^{k-1}b_{k,j}\gamma_j, \tag{3.63}
\]
where the coefficients $b_{k,j}$ are given by
\[
b_{k,1} = (-1)^k(k-1)!,\qquad b_{k,k} = -1,\quad k\ge2,
\]
and
\[
b_{k,j} = b_{k-1,j-1} - (k-1)b_{k-1,j},\quad 2\le j\le k-1.
\]
Since it follows from (3.63) that
\[
\gamma_k = \beta_k + (-1)^k(k-1)!\gamma_1 + \sum_{j=2}^{k-1}b_{k,j}\gamma_j,\quad k\ge3,
\]
it suffices to show
\[
\beta_k + (-1)^k(k-1)!\gamma_1 = o\big(\gamma_2^{k/2}\big),\quad k\ge3. \tag{3.64}
\]

To do this, use (3.61) to get
\[
\beta_k = (-1)^{k-1}\sum_\sigma\int_{I_n^{\otimes k}}K_{X_n}(x_1,x_{\sigma(1)})\cdots K_{X_n}(x_{\sigma(k)},x_1)dx_1\cdots dx_k
= (-1)^{k-1}(k-1)!\int_{I_n^{\otimes k}}K_{X_n}(x_1,x_2)\cdots K_{X_n}(x_k,x_1)dx_1\cdots dx_k,
\]
and so
\[
\beta_k + (-1)^k(k-1)!\gamma_1 = (-1)^k(k-1)!\Big(\int_{I_n}K_{X_n}(x_1,x_1)dx_1 - \int_{I_n^{\otimes k}}K_{X_n}(x_1,x_2)\cdots K_{X_n}(x_k,x_1)dx_1\cdots dx_k\Big).
\]
Define an integral operator $\mathcal K_{I_n}: L^2(I_n, dx)\to L^2(I_n, dx)$ by
\[
\mathcal K_{I_n}f(x) = \int_{I_n}f(y)K_{I_n}(x,y)dy,\quad x\in I_n.
\]
Then it follows that
\[
\beta_k + (-1)^k(k-1)!\gamma_1 = (-1)^k(k-1)!\big(\mathrm{tr}\,\mathcal K_{I_n} - \mathrm{tr}\,\mathcal K_{I_n}^k\big)
= (-1)^k(k-1)!\sum_{l=2}^k\mathrm{tr}\,\mathcal K_{I_n}^{l-2}\big(\mathcal K_{I_n} - \mathcal K_{I_n}^2\big).
\]
According to Lemma 3.6, we have
\[
\big|\beta_k + (-1)^k(k-1)!\gamma_1\big| \le k!\big(\mathrm{tr}\,\mathcal K_{I_n} - \mathrm{tr}\,\mathcal K_{I_n}^2\big) = k!\,\gamma_2,
\]
which gives (3.64). Now we conclude the proof of the theorem. $\square$

As the reader may see, Theorem 3.7 enjoys great universality for determinantal point processes, in the sense that almost no requirement is placed on the kernel function. However, the theorem itself does not tell what the expectation and variance of $N_{X_n}(I_n)$ look like. To evaluate the expectation and variance numerically, one usually needs to know more about the kernel function. In the case of the GUE, the kernel function is given by Hermite orthogonal polynomials, so that we can give precise estimates of the expectation and variance. This was already done in Proposition 3.4.

It is believed that Theorem 3.7 admits a wide range of applications. We only mention the work of Gustavsson (2005), in which he studied the $k$th greatest eigenvalue $\lambda_{(k)}$ of the GUE model and used Theorem 3.7 to prove that $\lambda_{(k_n)}$, after proper scaling, has a Gaussian fluctuation around its average when $k_n/n\to a\in(0,1)$. He also dealt with the cases $k_n\to\infty$ and

$k_n/n\to0$. These results complement the Tracy-Widom law for the largest eigenvalues; see Section 4.1 and Figure 5.2.

At the end of this section, we shall provide a conceptual proof of Theorem 3.7. This is based on the following expression for the number of points lying in a set as a sum of independent Bernoulli random variables.

Let $K$ be a kernel function such that the integral operator $\mathcal K$ is locally of trace class, and let $X$ be a determinantal point process with $K$ as its kernel. Let $I$ be a bounded Borel set on $\mathbb R$; then $K\cdot1_I$ is of trace class. Denote by $q_k$, $k\ge1$, the eigenvalues of $K\cdot1_I$; the corresponding eigenfunctions $\varphi_k$ form an orthonormal basis in $L^2(I)$. Define a new kernel function $K^I$ by
\[
K^I(x,y) = \sum_{k=1}^\infty q_k\varphi_k(x)\varphi_k(y),
\]
which is a mixture of the $q_k$ and $\varphi_k$.

It is evident that the point process $X\cap I$ is determinantal. The following proposition implies that its kernel is given by $K^I$.

Proposition 3.5. It holds almost everywhere with respect to Lebesgue measure that
\[
K(x,y) = K^I(x,y).
\]
Furthermore, assume that $\xi_k$, $k\ge1$, is a sequence of independent Bernoulli random variables with
\[
P(\xi_k=1) = q_k,\qquad P(\xi_k=0) = 1-q_k.
\]
Then we have
\[
N_X(I) \stackrel{d}{=} \sum_{k=1}^\infty\xi_k. \tag{3.65}
\]
Proof. By the assumption of trace class,
\[
\int_I\sum_{k=1}^\infty q_k\varphi_k(x)^2dx = \sum_{k=1}^\infty q_k < \infty.
\]
This shows that the series $\sum_{k=1}^\infty q_k\varphi_k(x)^2$ converges, and in particular that it converges pointwise for every $x\in I\setminus I_0$ for some set $I_0$ of zero measure. By the Cauchy-Schwarz inequality,
\[
\Big(\sum_{k=n}^\infty q_k\varphi_k(x)\varphi_k(y)\Big)^2 \le \Big(\sum_{k=n}^\infty q_k\varphi_k(x)^2\Big)\Big(\sum_{k=n}^\infty q_k\varphi_k(y)^2\Big). \tag{3.66}
\]
Hence the series $\sum_{k=1}^\infty q_k\varphi_k(x)\varphi_k(y)$ converges absolutely.
Let f ∈ L^2(I). Writing f in terms of the orthonormal basis {φ_k}, we get
for any x ∈ I \ I_0

    f(x) = \sum_{k=1}^{\infty} \Big(\int_I f(y) φ_k(y) dy\Big) φ_k(x),

and so

    Kf(x) = \sum_{k=1}^{\infty} \Big(\int_I f(y) φ_k(y) dy\Big) Kφ_k(x)
          = \int_I f(y) \sum_{k=1}^{\infty} q_k φ_k(y) φ_k(x) \, dy
          = \int_I f(y) K^I(x, y) \, dy.

This implies that we must have

    K(x, y) = K^I(x, y)  a.e.

Turn to prove (3.65). We shall prove below that

    E e^{t N_X(I)} = E e^{t \sum_{k=1}^{\infty} ξ_k},    t ∈ R.

First, it is easy to see that

    E e^{t \sum_{k=1}^{\infty} ξ_k} = \prod_{k=1}^{\infty} E e^{t ξ_k}
        = \prod_{k=1}^{\infty} \big(1 + q_k(e^t - 1)\big)
        = 1 + \sum_{k=1}^{\infty} \sum_{1 \le i_1 < \cdots < i_k < \infty} q_{i_1} \cdots q_{i_k} (e^t - 1)^k.

Second, to compute E e^{t N_X(I)}, we use the following formula:

    E e^{t N_X(I)} = \sum_{k=0}^{\infty} E\binom{N_X(I)}{k} (e^t - 1)^k
        = \sum_{k=0}^{\infty} \frac{(e^t - 1)^k}{k!} \int_{\mathbb{R}^k} \det\big(K^I(x_i, x_j)\big)_{k \times k} \, dx_1 \cdots dx_k.    (3.67)
For k ≥ 1,

    \big(K^I(x_i, x_j)\big)_{k \times k}
    = \begin{pmatrix} q_1 φ_1(x_1) & q_2 φ_2(x_1) & \cdots & q_n φ_n(x_1) & \cdots \\ q_1 φ_1(x_2) & q_2 φ_2(x_2) & \cdots & q_n φ_n(x_2) & \cdots \\ \vdots & \vdots & & \vdots & \\ q_1 φ_1(x_k) & q_2 φ_2(x_k) & \cdots & q_n φ_n(x_k) & \cdots \end{pmatrix}
      \times \begin{pmatrix} φ_1(x_1) & φ_1(x_2) & \cdots & φ_1(x_k) \\ φ_2(x_1) & φ_2(x_2) & \cdots & φ_2(x_k) \\ \vdots & \vdots & & \vdots \\ φ_n(x_1) & φ_n(x_2) & \cdots & φ_n(x_k) \\ \cdots & \cdots & \cdots & \cdots \end{pmatrix}
    =: AB.    (3.68)

Then, according to the Cauchy-Binet formula,

    \det\big(K^I(x_i, x_j)\big)_{k \times k} = \sum_{1 \le i_1 < \cdots < i_k < \infty} \det(A_k B_k),    (3.69)

where A_k is the k × k matrix consisting of rows 1, ..., k and columns i_1,
..., i_k of A, and B_k is the k × k matrix consisting of columns 1, ..., k
and rows i_1, ..., i_k of B.
Using the orthonormality of the φ_i, we have

    \int_{\mathbb{R}^k} \det(A_k B_k) \, dx_1 \cdots dx_k = k! \, q_{i_1} \cdots q_{i_k}.    (3.70)

Combining (3.67), (3.69) and (3.70) yields

    E e^{t N_X(I)} = 1 + \sum_{k=1}^{\infty} \sum_{1 \le i_1 < \cdots < i_k < \infty} q_{i_1} \cdots q_{i_k} (e^t - 1)^k.

Thus we prove (3.65), and so conclude the proof. □


Proof of Theorem 3.7. Having the identity in law (3.65), the classic
Lyapunov theorem (see (1.9)) can be used to establish the central limit
theorem for N_n. Indeed, applying Proposition 3.5, we get an array
{ξ_{n,k}, n ≥ 1, k ≥ 1} of independent Bernoulli random variables, so it
suffices to show that the central limit theorem holds for the sums
\sum_{k=1}^{\infty} ξ_{n,k}. In turn, note that

    \frac{\sum_{k=1}^{\infty} E|ξ_{n,k} - Eξ_{n,k}|^3}{\big(Var\big(\sum_{k=1}^{\infty} ξ_{n,k}\big)\big)^{3/2}} \le \frac{1}{\big(Var(N_n)\big)^{1/2}} \to 0,

provided Var(N_n) → ∞. Thus the Lyapunov condition is satisfied. □
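Theorem 3.7 can be illustrated numerically. The following minimal Python sketch (an addition for illustration, not part of the original text) samples GUE matrices normalized so that the density is proportional to e^{-tr A^2/2}, counts the eigenvalues on the positive half-line, and inspects the fluctuation of the count: the mean is n/2 by symmetry, while the variance grows only like log n, so the counts concentrate sharply around n/2.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue(n, rng):
    # GUE with density proportional to exp(-tr A^2/2):
    # N(0,1) diagonal, complex variance-1 off-diagonal entries
    x = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (x + x.conj().T) / np.sqrt(2)

n, reps = 100, 300
counts = np.array([np.sum(np.linalg.eigvalsh(gue(n, rng)) > 0) for _ in range(reps)])
print(counts.mean(), counts.var())  # mean near n/2 = 50, variance of order log n
```

Replacing the half-line by a shrinking or moving interval probes the same central limit theorem in the other regimes mentioned in Gustavsson's work above.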
3.4 Logarithmic law

In this section we are concerned with the asymptotic behavior of the
logarithm of the determinant of the GUE matrix. Let A_n = (z_{ij})_{n×n} be
the standard GUE matrix as given in the Introduction, and denote its
eigenvalues by λ_1, λ_2, ..., λ_n. Then we have

Theorem 3.8. As n → ∞,

    \frac{1}{\sqrt{\frac12 \log n}} \Big(\log|\det A_n| - \frac12 \log n! + \frac14 \log n\Big) \xrightarrow{d} N(0, 1).    (3.71)

The theorem is sometimes called the logarithmic law in the literature. We
remark that \log|\det A_n| = \sum_{i=1}^{n} \log|λ_i| is a linear eigenvalue
statistic with test function f(x) = \log|x|. However, the function \log|x|
is not regular enough for the results discussed in Section 3.2 to apply
directly. The theorem was first proved by Girko in the 1970s using a
martingale argument; see Girko (1979, 1990, 1998) and references therein for
more details. Recently, Tao and Vu (2012) provided a new proof, which is
based on a tridiagonal matrix representation due to Trotter (1984). We shall
present their proof below. Before that, we want to give a parallel result
for the Ginibre model.
Let M_n = (y_{ij})_{n×n} be an n × n random matrix whose entries are all
independent complex standard normal random variables. This is a rich and
well-studied matrix model in random matrix theory as well. Let ν_1, ν_2,
..., ν_n be its eigenvalues; then the joint probability density function is
given by

    \varrho_n(z_1, \cdots, z_n) = \frac{1}{Z_n} \prod_{i<j} |z_i - z_j|^2 \prod_{i=1}^{n} e^{-|z_i|^2/2},    z_i ∈ \mathbb{C},    (3.72)

where Z_n is a normalizing constant. Define the bivariate empirical
distribution function

    F_n(x, y) = \frac{1}{n} \sum_{i=1}^{n} 1_{(\mathrm{Re}\,ν_i \le x,\ \mathrm{Im}\,ν_i \le y)}.

Then it follows that

    F_n \xrightarrow{d} \varrho_c \quad \text{in } P,    (3.73)

where \varrho_c stands for the uniform law on the unit disk in the plane.
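The circular law (3.73) is easy to visualize numerically. The sketch below is an added illustration (not from the book); it assumes the normalization E|y_{ij}|^2 = 1 and rescales the eigenvalues by \sqrt{n} so that the limiting support is the unit disk. Under the uniform law on the disk, P(|ν| ≤ r) = r^2, so about a quarter of the rescaled eigenvalues should have modulus at most 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# complex Ginibre matrix: i.i.d. standard complex normal entries (E|y_ij|^2 = 1)
M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
ev = np.linalg.eigvals(M) / np.sqrt(n)   # rescale: limiting support is the unit disk

inside = np.mean(np.abs(ev) <= 1.05)     # essentially all eigenvalues in the disk
frac_half = np.mean(np.abs(ev) <= 0.5)   # uniform law predicts about 1/4
print(inside, frac_half)
```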
We leave the proofs of (3.72) and (3.73) to the reader. More information can
be found in Ginibre (1965) and Mehta (2004). As far as the determinant is
concerned, a classic and interesting result is
Proposition 3.6. As n → ∞,

    \frac{1}{\sqrt{\frac14 \log n}} \Big(\log|\det M_n| - \frac12 \log n! + \frac14 \log n\Big) \xrightarrow{d} N(0, 1).    (3.74)

Proof. First, observe the following identity in law:

    |\det M_n| \stackrel{d}{=} \frac{1}{2^{n/2}} \prod_{i=1}^{n} χ_{2i},    (3.75)

where χ_{2i} is a chi random variable with 2i degrees of freedom, and all
the chi random variables are independent.
Indeed, let y_1, y_2, ..., y_n denote the row vectors of M_n. The absolute
value of the determinant of M_n is equal to the volume of the parallelepiped
spanned by the vectors y_1, y_2, ..., y_n. In turn, this volume is equal to

    |y_1| \cdot |(I - P_1) y_2| \cdots |(I - P_{n-1}) y_n|,    (3.76)

where P_i is the orthogonal projection onto the subspace spanned by the
vectors {y_1, y_2, ..., y_i}, 1 ≤ i ≤ n − 1. Note that P_i is an idempotent
projection with rank i, so (I − P_i) y_{i+1} is, conditionally on
{y_1, ..., y_i}, a complex standard normal vector in the (n − i)-dimensional
orthogonal complement, and is independent of {y_1, ..., y_i}. Then letting
χ_{2n} = \sqrt{2}|y_1| and χ_{2(n-i)} = \sqrt{2}|(I - P_i) y_{i+1}|,
1 ≤ i ≤ n − 1, concludes the desired identity.
Second, note that χ_i has density function

    \frac{2^{1-i/2}}{\Gamma(i/2)} x^{i-1} e^{-x^2/2},    x > 0;

then it is easy to get

    E χ_i^k = 2^{k/2} \frac{\Gamma((i+k)/2)}{\Gamma(i/2)}

and the following asymptotic estimates:

    E \log χ_i = \frac12 \log i - \frac{1}{2i} + O(i^{-2}),    Var(\log χ_i) = \frac{1}{2i} + O(i^{-2}).

In addition, for each positive integer k ≥ 1,

    E(\log χ_i - E \log χ_i)^{2k} = O(i^{-k}).

Lastly, note that by (3.75),

    \log|\det M_n| \stackrel{d}{=} -\frac{\log 2}{2} n + \sum_{i=1}^{n} \log χ_{2i}.

Now (3.74) follows directly from the classic Lyapunov CLT for sums of
independent random variables. □
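The identity (3.75) can be checked numerically. The sketch below is an added check (not from the book) under the convention E|y_{ij}|^2 = 1; it compares a Monte Carlo estimate of E log|det M_n| against the exact value implied by (3.75), namely E log|det M_n| = -(n/2) log 2 + Σ_i E log χ_{2i} = (1/2) Σ_{i=1}^n ψ(i), using E log χ_k = (1/2)(log 2 + ψ(k/2)) and ψ(i) = -γ + H_{i-1} for integer i.

```python
import numpy as np
from math import log

rng = np.random.default_rng(2)
n, reps = 20, 2000

# exact value implied by (3.75): (1/2) * sum_{i=1}^n psi(i), psi the digamma function
gamma = 0.5772156649015329
psi_int = lambda i: -gamma + sum(1.0 / j for j in range(1, i))
exact = 0.5 * sum(psi_int(i) for i in range(1, n + 1))

vals = []
for _ in range(reps):
    M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    vals.append(np.linalg.slogdet(M)[1])   # log|det M|
mc = float(np.mean(vals))
print(mc, exact)   # the two values agree within Monte Carlo error
```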
The proof of Proposition 3.6 is simple and elegant. The hypothesis that all
entries are independent plays an essential role. This is no longer true for
A_n, since A_n is Hermitian. We need to adopt a completely different method
to prove Theorem 3.8. Start with a tridiagonal matrix representation of A_n.
Let a_n, n ≥ 1 be a sequence of independent real standard normal random
variables, and b_n, n ≥ 1 a sequence of independent random variables with
each b_n distributed like χ_n. In addition, assume the a_n's and b_n's are
all independent. For each n ≥ 1, construct a tridiagonal matrix

    D_n = \begin{pmatrix} a_n & b_{n-1} & 0 & 0 & \cdots & 0 \\ b_{n-1} & a_{n-1} & b_{n-2} & 0 & \cdots & 0 \\ 0 & b_{n-2} & a_{n-2} & b_{n-3} & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & b_2 & a_2 & b_1 \\ 0 & 0 & 0 & \cdots & b_1 & a_1 \end{pmatrix}.    (3.77)

Lemma 3.7. The eigenvalues of D_n are distributed according to (3.2). In
particular,

    \det A_n \stackrel{d}{=} \det D_n.    (3.78)

Proof. We shall obtain the D_n in (3.77) from A_n through a series of
Householder transforms. Write

    A_n = \begin{pmatrix} z_{11} & \mathbf{z}_1 \\ \mathbf{z}_1^* & A_{n,n-1} \end{pmatrix},

where \mathbf{z}_1 = (z_{12}, \cdots, z_{1n}). Let

    w_1 = 0,    w_2 = -\frac{z_{21}}{|z_{21}|} \Big(\frac12\Big(1 - \frac{|z_{21}|}{\alpha}\Big)\Big)^{1/2},    w_l = -\frac{z_{l1}}{\big(2\alpha(\alpha - |z_{21}|)\big)^{1/2}},  l ≥ 3,

where α > 0 and

    \alpha^2 = |z_{21}|^2 + |z_{31}|^2 + \cdots + |z_{n1}|^2.

Define the Householder transform by

    V_n = I_n - 2 w_n w_n^* = \begin{pmatrix} 1 & 0 \cdots 0 \\ 0 & \\ \vdots & V_{n,n-1} \\ 0 & \end{pmatrix},

where w_n = (w_1, w_2, \cdots, w_n)^\tau.
It is easy to check that V_n is a unitary matrix and

    V_n A_n V_n = \begin{pmatrix} z_{11} & (z_{12}, z_{13}, \cdots, z_{1n}) V_{n,n-1} \\ V_{n,n-1} (z_{12}, \cdots, z_{1n})^* & V_{n,n-1} A_{n,n-1} V_{n,n-1} \end{pmatrix}
    = \begin{pmatrix} z_{11} & \alpha \frac{z_{12}}{|z_{12}|} & 0 \\ \alpha \frac{z_{21}}{|z_{21}|} & & \\ 0 & & V_{n,n-1} A_{n,n-1} V_{n,n-1} \end{pmatrix}.

To make the second entry in the first column nonnegative, we need one
further conjugation. Let R_n differ from the identity matrix by having
(2,2)-entry e^{-i\phi}, with φ chosen appropriately, and form
R_n V_n A_n V_n R_n^*. Then we get the desired matrix

    \begin{pmatrix} z_{11} & \alpha & 0 \\ \alpha & & \\ 0 & & V_{n,n-1} A_{n,n-1} V_{n,n-1} \end{pmatrix},

where \alpha^2 = |z_{21}|^2 + |z_{31}|^2 + \cdots + |z_{n1}|^2.
Define a_n = z_{11} and b_{n-1} = α. Since V_{n,n-1} is a unitary matrix
independent of A_{n,n-1}, the block V_{n,n-1} A_{n,n-1} V_{n,n-1} is an
(n − 1) × (n − 1) GUE matrix. Repeating the preceding procedure yields the
desired matrix D_n. The proof is complete. □
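Lemma 3.7 lends itself to a quick numerical check (added here; not from the book): by (3.78), log|det D_n| and log|det A_n| have the same distribution, so their Monte Carlo means should agree within sampling error.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 30, 1500

def tridiag_dn(n, rng):
    # D_n from (3.77): N(0,1) diagonal, off-diagonal b_{n-1},...,b_1 with b_k ~ chi_k
    a = rng.standard_normal(n)
    b = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1)))
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def gue(n, rng):
    x = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (x + x.conj().T) / np.sqrt(2)

ld_tri = float(np.mean([np.linalg.slogdet(tridiag_dn(n, rng))[1] for _ in range(reps)]))
ld_full = float(np.mean([np.linalg.slogdet(gue(n, rng))[1] for _ in range(reps)]))
print(ld_tri, ld_full)   # the two means agree within Monte Carlo error
```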
According to (3.78), it suffices to prove (3.71) for \log|\det D_n| below.
Let d_n = \det D_n. It is easy to see the following recurrence relations:

    d_n = a_n d_{n-1} - b_{n-1}^2 d_{n-2},    (3.79)
    d_{n-1} = a_{n-1} d_{n-2} - b_{n-2}^2 d_{n-3}.    (3.80)

Let e_n = d_n/\sqrt{n!} and c_n = (b_n^2 - n)/\sqrt{n}. Note that c_{n-k} is
asymptotically normal as n − k → ∞. So we deduce from (3.79) and (3.80) that

    e_n = \frac{a_n}{\sqrt{n}} e_{n-1} - \Big(1 + \frac{c_{n-1}}{\sqrt{n}} - \frac{1}{2n}\Big) e_{n-2} + \epsilon_1,    (3.81)
    e_{n-1} = \frac{a_{n-1}}{\sqrt{n}} e_{n-2} - \Big(1 + \frac{c_{n-2}}{\sqrt{n}} - \frac{1}{2n}\Big) e_{n-3} + \epsilon_2,    (3.82)

where, here and in the sequel, \epsilon_1, \epsilon_2 denote small
negligible quantities whose values may differ from line to line.
In addition, substituting (3.82) into (3.81), we have

    e_n = \Big(-1 - \frac{c_{n-1}}{\sqrt{n}} + \frac{a_n a_{n-1}}{n} + \frac{1}{2n}\Big) e_{n-2} - \Big(\frac{a_n}{\sqrt{n}} + \frac{a_n c_{n-2}}{n}\Big) e_{n-3} + \epsilon_1.

In terms of vectors, we have the following recurrence formula:

    \begin{pmatrix} e_n \\ e_{n-1} \end{pmatrix} = \Big(-I_2 - \frac{1}{\sqrt{n}} S_{n,1} + \frac{1}{n} S_{n,2}\Big) \begin{pmatrix} e_{n-2} \\ e_{n-3} \end{pmatrix} + \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \end{pmatrix},    (3.83)

where

    S_{n,1} = \begin{pmatrix} c_{n-1} & a_n \\ -a_{n-1} & c_{n-2} \end{pmatrix},    S_{n,2} = \frac12 I_2 + \begin{pmatrix} a_{n-1} a_n & -a_n c_{n-2} \\ 0 & 0 \end{pmatrix}.
Let r_n^2 = e_{2n}^2 + e_{2n-1}^2. It turns out that \log r_n satisfies a
CLT after proper scaling. This is stated as

Lemma 3.8. As n → ∞,

    \frac{\log r_n + \frac14 \log n}{\sqrt{\frac12 \log n}} \xrightarrow{d} N(0, 1).    (3.84)

Proof. Use (3.83) to get

    r_n^2 = (e_{2n}, e_{2n-1}) \begin{pmatrix} e_{2n} \\ e_{2n-1} \end{pmatrix}
    = (e_{2n-2}, e_{2n-3}) \Big(I_2 + \frac{1}{\sqrt{2n}} (S_{2n,1} + S_{2n,1}^\tau) + \frac{1}{2n} (S_{2n,1}^\tau S_{2n,1} - S_{2n,2} - S_{2n,2}^\tau) + \epsilon_0\Big) \begin{pmatrix} e_{2n-2} \\ e_{2n-3} \end{pmatrix},

where \epsilon_0 denotes a small negligible quantity whose value may differ
from line to line. Define

    \xi_n = \frac{1}{r_{n-1}^2} \frac{1}{\sqrt{2n}} (e_{2n-2}, e_{2n-3}) \big(S_{2n,1} + S_{2n,1}^\tau\big) \begin{pmatrix} e_{2n-2} \\ e_{2n-3} \end{pmatrix},

    \eta_n = \frac{1}{r_{n-1}^2} \frac{1}{2n} (e_{2n-2}, e_{2n-3}) \big(S_{2n,1}^\tau S_{2n,1} - S_{2n,2} - S_{2n,2}^\tau\big) \begin{pmatrix} e_{2n-2} \\ e_{2n-3} \end{pmatrix}.

Then we have the recursive relation

    r_n^2 = (1 + \xi_n + \eta_n + \epsilon_0) r_{n-1}^2.
Let \mathcal{F}_n = \sigma\{a_1, \cdots, a_{2n}; b_1, \cdots, b_{2n-1}\}.
Then it follows that for n ≥ 1,

    E(\xi_n | \mathcal{F}_{n-1}) = 0,    E(\xi_n^2 | \mathcal{F}_{n-1}) = \frac{2}{n},    E(\xi_n^4 | \mathcal{F}_{n-1}) = O(n^{-2}),    (3.85)

and

    E(\eta_n | \mathcal{F}_{n-1}) = \frac{1}{2n},    E(\eta_n^2 | \mathcal{F}_{n-1}) = O(n^{-2}).    (3.86)

Using the Taylor expansion of \log(1+x), we obtain

    \log r_n^2 = \log r_{n-1}^2 + \xi_n + \eta_n - \frac{\xi_n^2}{2} + \epsilon_0.

Let m_n be a sequence of integers such that m_n/\log n \to 1. Then

    \log r_n^2 = \sum_{l=m_n+1}^{n} \xi_l + \sum_{l=m_n+1}^{n} \Big(\eta_l - \frac{\xi_l^2}{2} + \epsilon_0\Big) + \log r_{m_n}^2.

By the choice of m_n, we have

    \frac{\log r_{m_n}^2}{\sqrt{\log n}} \xrightarrow{P} 0.

Also, by the Markov inequality, (3.85) and (3.86),

    \frac{1}{\sqrt{\log n}} \sum_{l=m_n+1}^{n} \Big(\eta_l - \frac{\xi_l^2}{2} + \frac{1}{2l} + \epsilon_0\Big) \xrightarrow{P} 0.

Finally, by (3.85) and the martingale CLT, we have

    \frac{1}{\sqrt{2 \log n}} \sum_{l=m_n+1}^{n} \xi_l \xrightarrow{d} N(0, 1).

In combination, we have so far proven (3.84). □

The above lemma describes asymptotically the magnitude of the vector
(e_{2n}, e_{2n-1}). In order to obtain each component, we also need
information about the phase of the vector.

Lemma 3.9. Let \theta_n \in (0, 2\pi) be such that
(e_{2n}, e_{2n-1}) = r_n(\cos\theta_n, \sin\theta_n). Then as n → ∞,

    \theta_n \xrightarrow{d} \Theta,    (3.87)

where \Theta \sim U(0, 2\pi).
Proof. Let us first look at the difference between \theta_n and
\theta_{n-1}. Rewrite (3.83) as

    r_n \begin{pmatrix} \cos\theta_n \\ \sin\theta_n \end{pmatrix} = r_{n-1} (-I_2 + D_n) \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix},

where

    D_n = -\frac{1}{\sqrt{n}} S_{n,1} + \frac{1}{n} S_{n,2} + \epsilon

and \epsilon is a negligible matrix. It then follows that

    \frac{r_n}{r_{n-1}} \begin{pmatrix} \cos\theta_n \\ \sin\theta_n \end{pmatrix} = -\begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix} + D_n \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix}.    (3.88)

Note that (\cos\theta_{n-1}, \sin\theta_{n-1}) and
(-\sin\theta_{n-1}, \cos\theta_{n-1}) form an orthonormal basis. It is easy
to see that

    D_n \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix} = x_n \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix} + y_n \begin{pmatrix} -\sin\theta_{n-1} \\ \cos\theta_{n-1} \end{pmatrix},    (3.89)

where the coefficients x_n and y_n are given by

    x_n = (\cos\theta_{n-1}, \sin\theta_{n-1}) D_n \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix},    y_n = (-\sin\theta_{n-1}, \cos\theta_{n-1}) D_n \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix}.

Substituting (3.89) back into (3.88) yields

    \frac{r_n}{r_{n-1}} \begin{pmatrix} \cos\theta_n \\ \sin\theta_n \end{pmatrix} = (-1 + x_n) \begin{pmatrix} \cos\theta_{n-1} \\ \sin\theta_{n-1} \end{pmatrix} + y_n \begin{pmatrix} -\sin\theta_{n-1} \\ \cos\theta_{n-1} \end{pmatrix}.

Thus it is clear that

    \tan(\theta_n - \theta_{n-1}) = \frac{y_n}{-1 + x_n},

which in turn leads to

    \theta_n - \theta_{n-1} = \arctan\frac{y_n}{-1 + x_n}.

Next we estimate x_n and y_n. There is a constant \varsigma > 0 such that

    x_n = O_P(n^{-\varsigma}),    y_n = O_P(n^{-\varsigma}).

In addition, for some \iota > 1,

    E(x_n | \theta_{n-1}) = O(n^{-\iota}),    E(y_n | \theta_{n-1}) = O(n^{-\iota})
and

    E(y_n^2 | \theta_{n-1}) = \frac{1}{2n} + O(n^{-\iota}).

Using the Taylor expansions of \arctan x and (1+x)^{-1}, we obtain

    \theta_n - \theta_{n-1} = -(y_n - x_n y_n) + O(y_n^3),

and so

    e^{ik\theta_n} = e^{ik\theta_{n-1}} \Big(1 - iky_n + iky_n x_n - \frac{k^2}{2} y_n^2 + O\big(y_n^2 (x_n + y_n)\big)\Big).

Hence, taking expectations,

    E e^{ik\theta_n} = E\Big[e^{ik\theta_{n-1}} \Big(1 - iky_n + iky_n x_n - \frac{k^2}{2} y_n^2 + O\big(y_n^2 (x_n + y_n)\big)\Big)\Big].

Moreover, using a conditioning argument, we get

    E e^{ik\theta_n} = E e^{ik\theta_{n-1}} \Big(1 - \frac{k^2}{4n} + O(n^{-\iota})\Big).    (3.90)

Let m_n → ∞ and m_n/n → 0. Then, for each fixed integer k ≠ 0, repeatedly
using (3.90) gives

    E e^{ik\theta_n} = \prod_{l=m_n+1}^{n} \Big(1 - \frac{k^2}{4l}\Big) E e^{ik\theta_{m_n}} + O\Big(\sum_{l=m_n+1}^{n} l^{-\iota}\Big) \to 0,    n → ∞,

where in the last limit we used \iota > 1 and the fact that
\prod_{l=m_n+1}^{n} (1 - \frac{k^2}{4l}) \to 0, since
\sum_{l=m_n+1}^{n} \frac{k^2}{4l} \to \infty. Thus we complete the proof of
(3.87). □

Proof of Theorem 3.8. It easily follows from Lemma 3.9 that

    P\Big(|\cos\theta_n| < \frac{1}{\log n}\Big) \to 0,    n → ∞.

This in turn implies

    \frac{\log|\cos\theta_n|}{\sqrt{\log n}} \xrightarrow{P} 0,    n → ∞.    (3.91)

On the other hand, we have

    \log|e_{2n}| = \log r_n + \log|\cos\theta_n|.

Then, according to Lemma 3.8 and (3.91),

    \frac{\log|e_{2n}| + \frac14 \log n}{\sqrt{\frac12 \log n}} \xrightarrow{d} N(0, 1).

The analog is valid for e_{2n-1}. Therefore we have proven Theorem 3.8. □
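Theorem 3.8 can be probed numerically through the tridiagonal model (3.77), which by Lemma 3.7 has the same determinant law and is cheap to sample. The sketch below is an added illustration: it standardizes log|det D_n| as in (3.71) and checks that the resulting sample has mean near 0 and variance near 1; since the convergence is only logarithmic in n, the agreement is rough.

```python
import numpy as np
from math import lgamma, log, sqrt

rng = np.random.default_rng(4)
n, reps = 400, 400

def tridiag_dn(n, rng):
    a = rng.standard_normal(n)
    b = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1)))
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

stats = []
for _ in range(reps):
    ld = np.linalg.slogdet(tridiag_dn(n, rng))[1]
    # standardization from (3.71): center by (1/2) log n! - (1/4) log n,
    # scale by sqrt((1/2) log n)
    stats.append((ld - 0.5 * lgamma(n + 1.0) + 0.25 * log(n)) / sqrt(0.5 * log(n)))
stats = np.array(stats)
print(stats.mean(), stats.std())   # roughly 0 and 1 for large n
```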
To conclude this section, we mention the following variants of the
logarithmic law. Given z ∈ R, we are interested in the asymptotic behavior
of the characteristic polynomial of A_n at z. We need to deal with two cases
separately: z either outside or inside the support of the Wigner semicircle
law.
Theorem 3.9. If z ∈ R \ [−2, 2], then

    \log|\det(A_n - z\sqrt{n})| - \frac{n}{2} \log n - n\mu_z \xrightarrow{d} N(0, \sigma_z^2),

where \mu_z and \sigma_z^2 are given by

    \mu_z = \frac12 \cdot \frac{z - \sqrt{z^2-4}}{z + \sqrt{z^2-4}} + \log\frac{\sqrt{z^2-4} + z}{2},

    \sigma_z^2 = \log\frac{|z| + \sqrt{z^2-4}}{2} - \frac12 \log(z^2 - 4).
Proof. This is a corollary of Theorem 3.5. Given z ∈ R \ [−2, 2], define
f_z(x) = \log|z - x|. This is analytic outside a certain neighbourhood of
[−2, 2]. In addition, a direct and lengthy computation shows

    \mu_z = \int_{-2}^{2} f_z(x) \rho_{sc}(x) dx = \frac12 \cdot \frac{z - \sqrt{z^2-4}}{z + \sqrt{z^2-4}} + \log\frac{\sqrt{z^2-4} + z}{2}

and

    \sigma_z^2 = \frac{1}{4\pi^2} \int_{-2}^{2} \int_{-2}^{2} \frac{f_z'(x) f_z(y)}{y - x} \sqrt{\frac{4-x^2}{4-y^2}} \, dx dy = \log\frac{|z| + \sqrt{z^2-4}}{2} - \frac12 \log(z^2 - 4).  □
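The closed form for μ_z in Theorem 3.9 can be verified by direct numerical integration. The snippet below is an added check (for z = 3): it evaluates ∫ log|z−x| ρ_sc(x) dx via the substitution x = 2 cos θ, under which ρ_sc(x) dx = (2/π) sin²θ dθ, and compares it with the closed form.

```python
import numpy as np

z = 3.0
s = np.sqrt(z * z - 4)
closed = 0.5 * (z - s) / (z + s) + np.log((s + z) / 2)   # mu_z from Theorem 3.9

# numerical integral with x = 2 cos(theta): rho_sc(x) dx = (2/pi) sin^2(theta) dtheta
theta = np.linspace(0.0, np.pi, 200001)
f = np.log(np.abs(z - 2 * np.cos(theta))) * (2 / np.pi) * np.sin(theta) ** 2
h = theta[1] - theta[0]
numeric = float(np.sum((f[:-1] + f[1:]) / 2) * h)        # trapezoidal rule
print(closed, numeric)   # the two values coincide to high accuracy
```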
Theorem 3.10. If z ∈ (−2, 2), then

    \frac{1}{\sqrt{\frac12 \log n}} \Big(\log|\det(A_n - z\sqrt{n})| - \frac{n}{2} \log n - \frac{n}{2} \Big(\frac{z^2}{2} - 1\Big)\Big) \xrightarrow{d} N(0, 1).    (3.92)

To prove Theorem 3.10, we need a precise estimate on powers of the
characteristic polynomial for the GUE due to Krasovsky (2007).
Proposition 3.7. Fix z ∈ (−2, 2). The following estimate holds:

    E|\det(A_n - z\sqrt{n})|^{2\alpha} = C(\alpha) \, 2^{\alpha n} \Big(1 - \frac{z^2}{4}\Big)^{\alpha^2/2} \Big(\frac{n}{2}\Big)^{\alpha n + \alpha^2} e^{(z^2/2 - 1)\alpha n} (1 + \varepsilon_{\alpha,n})    (3.93)

uniformly on any fixed compact set in the half plane \mathrm{Re}\,\alpha > -1/2. Here

    C(\alpha) = \frac{1}{\Gamma(\alpha + \frac12)} \exp\Big(2 \int_0^{\alpha} \log\Gamma\Big(s + \frac12\Big) ds + \alpha^2\Big) = 2^{2\alpha^2} \frac{G(\alpha+1)^2}{G(2\alpha+1)},    (3.94)

where G(\alpha) is Barnes's G-function. The remainder term
\varepsilon_{\alpha,n} = O(\log n/n) is analytic in \alpha.

The proof is omitted. For positive integer α, (3.93) was found by Brézin and
Hikami (2000) and Forrester and Frankel (2004). For such α,
E|\det(A_n - z\sqrt{n})|^{2\alpha} can be reduced to the Hermite polynomials
and their derivatives at the point z. However, this is not the case for
noninteger α. In order to obtain (3.93), Krasovsky (2007) used the
Riemann-Hilbert problem approach to compute asymptotics of the determinant
of a Hankel matrix whose weight is supported on the real line and possesses
a power-like singularity.
Proof of Theorem 3.10. We start by computing the expectation and variance of
\log|\det(A_n - z\sqrt{n})|. For simplicity, write M(\alpha) for
E|\det(A_n - z\sqrt{n})|^{2\alpha} below, and set

    M(\alpha) = A(\alpha) B(\alpha),

where

    A(\alpha) = 2^{\alpha n} \Big(1 - \frac{z^2}{4}\Big)^{\alpha^2/2} \Big(\frac{n}{2}\Big)^{\alpha n + \alpha^2} e^{(z^2/2 - 1)\alpha n},    B(\alpha) = C(\alpha)(1 + \varepsilon_{\alpha,n}).

It obviously follows that

    E \log|\det(A_n - z\sqrt{n})|^2 = M'(0)

and

    E\big(\log|\det(A_n - z\sqrt{n})|^2\big)^2 = M''(0).

Thus we need only evaluate M'(0) and M''(0). A direct calculation shows

    M'(\alpha) = \Big(n \log n + 2\alpha \log\frac{n}{2} + \Big(\frac{z^2}{2} - 1\Big) n + \alpha \log\Big(1 - \frac{z^2}{4}\Big)\Big) A(\alpha) B(\alpha) + A(\alpha) B'(\alpha)
and

    M''(\alpha) = 2 \Big(n \log n + 2\alpha \log\frac{n}{2} + \Big(\frac{z^2}{2} - 1\Big) n + \alpha \log\Big(1 - \frac{z^2}{4}\Big)\Big) A(\alpha) B'(\alpha)
      + \Big(n \log n + 2\alpha \log\frac{n}{2} + \Big(\frac{z^2}{2} - 1\Big) n + \alpha \log\Big(1 - \frac{z^2}{4}\Big)\Big)^2 A(\alpha) B(\alpha)
      + \Big(2 \log\frac{n}{2} + \log\Big(1 - \frac{z^2}{4}\Big)\Big) A(\alpha) B(\alpha) + A(\alpha) B''(\alpha).

It is easy to see that M(0) = A(0) = 1, and so B(0) = 1. Furthermore, by the
analyticity of B(\alpha) for \mathrm{Re}\,\alpha > -1/2 and using Cauchy's
theorem, we have

    B'(0) = C'(0) + O(n^{-1} \log n),    B''(0) = C''(0) + O(n^{-1} \log n).

Similarly, it follows from (3.94) that

    C'(\alpha) = C(\alpha) \Big(4\alpha \log 2 + 2 \frac{G'(\alpha+1)}{G(\alpha+1)} - 2 \frac{G'(2\alpha+1)}{G(2\alpha+1)}\Big).

Note that

    \frac{G'(\alpha+1)}{G(\alpha+1)} = \frac12 \log(2\pi) + \frac12 - \alpha + \alpha \frac{\Gamma'(\alpha)}{\Gamma(\alpha)}
      = \frac12 \log(2\pi) - \frac12 - (\gamma + 1)\alpha + \frac{\pi^2}{6} \alpha^2 + O(\alpha^3),

where \gamma is the Euler constant. So we have

    C'(0) = 0,    C''(0) = 4 \log 2 + 2(\gamma + 1).

In combination, we obtain

    M'(0) = n \log n + \Big(\frac{z^2}{2} - 1\Big) n + B'(0)

and

    M''(0) = \Big(n \log n + \Big(\frac{z^2}{2} - 1\Big) n\Big)^2 + 2 \Big(n \log n + \Big(\frac{z^2}{2} - 1\Big) n\Big) B'(0) + 2 \log\frac{n}{2} + \log\Big(1 - \frac{z^2}{4}\Big) + B''(0).

This in turn gives

    E \log|\det(A_n - z\sqrt{n})| = \frac12 \Big(n \log n + \Big(\frac{z^2}{2} - 1\Big) n\Big) + o(1)    (3.95)

and

    Var\big(\log|\det(A_n - z\sqrt{n})|\big) = \frac12 \log n + \frac14 \log\Big(1 - \frac{z^2}{4}\Big) + \frac12 \log 2 + \frac{\gamma + 1}{2} + o(1).    (3.96)
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 136

136 Random Matrices and Random Partitions

Next we turn to the proof of (3.92). Define for t ∈ R

    m_n(t) = E \exp\Big(t \, \frac{\log|\det(A_n - z\sqrt{n})| - E \log|\det(A_n - z\sqrt{n})|}{\sqrt{Var(\log|\det(A_n - z\sqrt{n})|)}}\Big).

It is sufficient to prove that

    m_n(t) \to e^{t^2/2},    n → ∞.    (3.97)

Indeed, using (3.95) and (3.96), we have

    m_n(t) = \exp\Big(-\frac{n \log n + (\frac{z^2}{2} - 1) n}{\sqrt{2 \log n}} t\Big) M\Big(\frac{t}{\sqrt{2 \log n}}\Big) (1 + o(1))
           = e^{t^2/2} C\Big(\frac{t}{\sqrt{2 \log n}}\Big) (1 + o(1)).

As is known, the Barnes G-function is entire and G(1) = 1. It follows that
C(t/\sqrt{2 \log n}) \to 1 as n → ∞, and then we get (3.97) as desired. The
proof is complete. □

3.5 Hermite β ensembles

In the last section of this chapter, we turn to the study of the Hermite β
Ensemble (HβE), which is a natural extension of the GUE. By the HβE we mean
an n-point process on the real line R with the following joint probability
density function:

    p_{n,\beta}(x_1, \cdots, x_n) = Z_{n,\beta} \prod_{1 \le i < j \le n} |x_i - x_j|^{\beta} \prod_{j=1}^{n} e^{-x_j^2/2},    (3.98)

where x_1, \cdots, x_n ∈ R, β > 0 is a model parameter and

    Z_{n,\beta} = \frac{1}{(2\pi)^{n/2}} \, \frac{\Gamma(\frac{\beta}{2})^n}{n! \prod_{j=1}^{n} \Gamma(\frac{\beta j}{2})}

by Selberg's integral. This model was first introduced by Dyson (1962) in
the study of the Coulomb lattice gas in the early sixties. The formula
(3.98) can be rewritten as

    p_{n,\beta}(x_1, \cdots, x_n) \propto e^{-\beta H_n(x_1, \cdots, x_n)},

where

    H_n(x_1, \cdots, x_n) = \frac{1}{2\beta} \sum_{j=1}^{n} x_j^2 - \frac12 \sum_{i \ne j} \log|x_i - x_j|
is a Hamiltonian, and β may be viewed as an inverse temperature. The
quadratic part means the points fall independently on the real line with
normal law, while the extra logarithmic part indicates that the points repel
each other. The special cases β = 1, 2, 4 correspond to the GOE, GUE and GSE
respectively.
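For n = 2 the normalizing constant Z_{n,β} can be checked against a brute-force evaluation of the Selberg (Mehta) integral. The sketch below is a numerical check added here: it compares 1/Z_{2,β} = (2π) · 2! · ∏_{j=1}^{2} Γ(βj/2)/Γ(β/2)² with a grid evaluation of ∫∫ |x−y|^β e^{−(x²+y²)/2} dx dy for β = 1, 2, 4.

```python
import numpy as np
from math import gamma, pi

def inv_Z(beta, n=2):
    # 1/Z_{n,beta} = (2 pi)^{n/2} n! prod_j Gamma(beta j/2) / Gamma(beta/2)^n, here n = 2
    num = (2 * pi) ** (n / 2) * 2 * np.prod([gamma(beta * j / 2) for j in range(1, n + 1)])
    return float(num / gamma(beta / 2) ** n)

t = np.linspace(-8.0, 8.0, 1601)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t)
results = {}
for beta in (1.0, 2.0, 4.0):
    integral = float(np.sum(np.abs(X - Y) ** beta * np.exp(-(X ** 2 + Y ** 2) / 2)) * h * h)
    results[beta] = (integral, inv_Z(beta))
    print(beta, integral, inv_Z(beta))
```

For β = 2 both sides equal 4π, which can also be seen directly from E(X−Y)² = 2 for independent standard normals.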
In the study of the HβE, a remarkable contribution was made by Dumitriu and
Edelman (2002), in which a tridiagonal matrix representation was discovered.
Specifically, let a_n, n ≥ 1 be a sequence of independent normal random
variables with mean 0 and variance 2. Let b_n, n ≥ 1 be a sequence of
independent chi random variables, each b_n having density function

    \frac{2^{1 - \beta n/2}}{\Gamma(\frac{\beta n}{2})} x^{\beta n - 1} e^{-x^2/2},    x > 0.

In addition, all the a_n's and b_n's are assumed to be independent. Define a
tridiagonal matrix

    D_{n,\beta} = \frac{1}{\sqrt{2}} \begin{pmatrix} a_n & b_{n-1} & 0 & 0 & \cdots & 0 \\ b_{n-1} & a_{n-1} & b_{n-2} & 0 & \cdots & 0 \\ 0 & b_{n-2} & a_{n-2} & b_{n-3} & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & b_2 & a_2 & b_1 \\ 0 & 0 & 0 & \cdots & b_1 & a_1 \end{pmatrix}.

Then we have

Theorem 3.11. The eigenvalues of D_{n,\beta} are distributed according to
(3.98).

As we saw in Lemma 3.7, an explicit Householder transform can be used to
produce D_{n,2} from the GUE matrix model. The general case will be proved
below using the eigendecomposition of a tridiagonal matrix and the change of
variables formula.
Given two sequences of real numbers x_1, x_2, \cdots, x_n and
y_1, y_2, \cdots, y_{n-1}, construct a tridiagonal matrix X_n as follows:

    X_n = \begin{pmatrix} x_n & y_{n-1} & 0 & 0 & \cdots & 0 \\ y_{n-1} & x_{n-1} & y_{n-2} & 0 & \cdots & 0 \\ 0 & y_{n-2} & x_{n-2} & y_{n-3} & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & y_2 & x_2 & y_1 \\ 0 & 0 & 0 & \cdots & y_1 & x_1 \end{pmatrix}.
Let \lambda_1^{(n)}, \lambda_2^{(n)}, \cdots, \lambda_n^{(n)} be the
eigenvalues of X_n, and assume that
\lambda_1^{(n)} > \lambda_2^{(n)} > \cdots > \lambda_n^{(n)}. Write

    X_n = Q \Lambda Q^\tau =: Q \, \mathrm{diag}\big(\lambda_1^{(n)}, \lambda_2^{(n)}, \cdots, \lambda_n^{(n)}\big) \, Q^\tau    (3.99)

for the eigendecomposition, where Q is the eigenvector matrix, such that
Q Q^\tau = Q^\tau Q = I_n and the first row q = (q_1, q_2, \cdots, q_n) is
strictly positive. Note that once q_1, q_2, \cdots, q_n are specified, the
other components of Q are uniquely determined by the eigenvalues and X_n.
Conversely, starting from \Lambda and q, one can reconstruct the matrix X_n.

Lemma 3.10.

    \prod_{1 \le i < j \le n} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big) = \frac{\prod_{i=1}^{n-1} y_i^{\,i}}{\prod_{i=1}^{n} q_i}.
1≤i<j≤n i=1 qi

Proof. We similarly define X_k using x_1, x_2, \cdots, x_k and
y_1, y_2, \cdots, y_{k-1} for 2 ≤ k ≤ n. Let P_k(\lambda) be the
characteristic polynomial of X_k, and let
\lambda_1^{(k)} > \lambda_2^{(k)} > \cdots > \lambda_k^{(k)} be its
eigenvalues. Then it is easy to see the following recursive formula:

    P_n(\lambda) = (x_n - \lambda) P_{n-1}(\lambda) - y_{n-1}^2 P_{n-2}(\lambda).    (3.100)

We can deduce from (3.100) that for any 1 ≤ j ≤ n − 1,

    \prod_{i=1}^{n} \big(\lambda_i^{(n)} - \lambda_j^{(n-1)}\big) = -y_{n-1}^2 \prod_{i=1}^{n-2} \big(\lambda_i^{(n-2)} - \lambda_j^{(n-1)}\big).

Hence it follows by induction that

    \prod_{j=1}^{n-1} \prod_{i=1}^{n} \big(\lambda_i^{(n)} - \lambda_j^{(n-1)}\big) = (-1)^{n-1} y_{n-1}^{2(n-1)} \prod_{j=1}^{n-1} \prod_{i=1}^{n-2} \big(\lambda_i^{(n-2)} - \lambda_j^{(n-1)}\big)
      = (-1)^{n(n-1)/2} \prod_{l=1}^{n-1} y_l^{2l}.    (3.101)

On the other hand, note the following identity:

    \frac{P_{n-1}(\lambda)}{P_n(\lambda)} = \sum_{i=1}^{n} \frac{q_i^2}{\lambda_i^{(n)} - \lambda},

that is,

    P_{n-1}(\lambda) = \sum_{i=1}^{n} q_i^2 \prod_{l \ne i} \big(\lambda_l^{(n)} - \lambda\big).

This obviously implies that for each 1 ≤ j ≤ n,

    \prod_{i=1}^{n-1} \big(\lambda_i^{(n-1)} - \lambda_j^{(n)}\big) = q_j^2 \prod_{i \ne j} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big).

Hence it follows that

    \prod_{j=1}^{n} \prod_{i=1}^{n-1} \big(\lambda_i^{(n-1)} - \lambda_j^{(n)}\big) = \prod_{j=1}^{n} q_j^2 \prod_{i \ne j} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big)
      = (-1)^{n(n-1)/2} \prod_{j=1}^{n} q_j^2 \prod_{1 \le i < j \le n} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big)^2.    (3.102)

Combining (3.101) and (3.102) together yields

    \prod_{1 \le i < j \le n} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big)^2 = \frac{\prod_{i=1}^{n-1} y_i^{2i}}{\prod_{i=1}^{n} q_i^2}.

The proof is complete. □
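Lemma 3.10 is an exact algebraic identity, so it can be verified in floating point for a random instance. The sketch below (added here for illustration) builds a 6 × 6 tridiagonal X_n, extracts the decreasingly ordered eigenvalues and a strictly positive first row q of the eigenvector matrix, and compares both sides of the lemma.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
x = rng.standard_normal(n)            # x_1, ..., x_n
y = rng.uniform(0.5, 2.0, n - 1)      # y_1, ..., y_{n-1} > 0

# X_n as in the text: diagonal (x_n, ..., x_1), off-diagonal (y_{n-1}, ..., y_1)
X = np.diag(x[::-1]) + np.diag(y[::-1], 1) + np.diag(y[::-1], -1)
lam, Q = np.linalg.eigh(X)            # ascending order
order = np.argsort(-lam)              # reorder decreasingly as in the lemma
lam, Q = lam[order], Q[:, order]
q = np.abs(Q[0, :])                   # fix signs: first row strictly positive

vand = np.prod([lam[i] - lam[j] for i in range(n) for j in range(i + 1, n)])
rhs = np.prod([y[i - 1] ** i for i in range(1, n)]) / np.prod(q)
print(vand, rhs)   # identical up to rounding error
```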


Consider the eigendecomposition (3.99). The 2n − 1 variables

    x = (x_1, x_2, \cdots, x_n),    y = (y_1, y_2, \cdots, y_{n-1})

can be put into a one-to-one correspondence with the 2n − 1 variables
(\Lambda, q). Let J denote the determinant of the Jacobian for the change of
variables from (x, y) to (\Lambda, q). Then we have

Lemma 3.11.

    J = \frac{\prod_{i=1}^{n-1} y_i}{q_n \prod_{i=1}^{n} q_i}.    (3.103)

Proof. Observe the following identity:

    \big((I_n - \lambda X_n)^{-1}\big)_{11} = \sum_{i=1}^{n} \frac{q_i^2}{1 - \lambda \lambda_i^{(n)}}.

Using the Taylor expansion of (1 − x)^{-1}, we get

    1 + \sum_{k=1}^{\infty} \lambda^k \big(X_n^k\big)_{11} = \sum_{k=0}^{\infty} \sum_{i=1}^{n} q_i^2 \big(\lambda_i^{(n)}\big)^k \lambda^k.
Hence we have for each k ≥ 1

    \big(X_n^k\big)_{11} = \sum_{i=1}^{n} q_i^2 \big(\lambda_i^{(n)}\big)^k.

In particular,

    x_n = \sum_{i=1}^{n} q_i^2 \lambda_i^{(n)},
    \ast + y_{n-1}^2 = \sum_{i=1}^{n} q_i^2 \big(\lambda_i^{(n)}\big)^2,
    \ast + y_{n-1}^2 x_{n-1} = \sum_{i=1}^{n} q_i^2 \big(\lambda_i^{(n)}\big)^3,    (3.104)
    \cdots
    \ast + y_{n-1}^2 \cdots y_1^2 x_1 = \sum_{i=1}^{n} q_i^2 \big(\lambda_i^{(n)}\big)^{2n-1},

where \ast stands for terms that have already appeared in the preceding
equations.
Taking differentials on both sides of the equations in (3.104), and noting
that q_n^2 = 1 - \sum_{i=1}^{n-1} q_i^2, yields

    A \, (dx_n, dx_{n-1}, \cdots, dx_1, dy_{n-1}, \cdots, dy_1)^\tau = B_2 \, (d\lambda_1^{(n)}, \cdots, d\lambda_n^{(n)})^\tau + B_1 \, (dq_1, \cdots, dq_{n-1})^\tau,

where

    A = \mathrm{diag}\Big(1, \ y_{n-1}^2, \ \cdots, \ \prod_{i=1}^{n-1} y_i^2, \ 2y_{n-1}, \ 2y_{n-2} y_{n-1}^2, \ \cdots, \ 2y_1 \prod_{l=2}^{n-1} y_l^2\Big),

    B_1 = \Big(2 q_j \big(\big(\lambda_j^{(n)}\big)^i - \big(\lambda_n^{(n)}\big)^i\big)\Big)_{1 \le i \le 2n-1, \ 1 \le j \le n-1},    B_2 = \Big(i \, q_j^2 \big(\lambda_j^{(n)}\big)^{i-1}\Big)_{1 \le i \le 2n-1, \ 1 \le j \le n}.

Hence a direct computation gives

    J = \det\Big(\frac{\partial(x, y)}{\partial(\lambda^{(n)}, q)}\Big) = \frac{1}{q_n \prod_{i=1}^{n} q_i} \cdot \frac{\prod_{i=1}^{n-1} y_i \ \prod_{i=1}^{n} q_i^4}{\prod_{i=1}^{n-1} y_i^{4i}} \prod_{1 \le i < j \le n} \big(\lambda_i^{(n)} - \lambda_j^{(n)}\big)^4.

Now, according to Lemma 3.10, (3.103) holds as desired. □
Proof of Theorem 3.11. Denote by
\lambda_{1,\beta}, \lambda_{2,\beta}, \cdots, \lambda_{n,\beta} the
eigenvalues of D_{n,\beta}. For clarity, we first assume that
\lambda_{1,\beta} > \lambda_{2,\beta} > \cdots > \lambda_{n,\beta}, and
write

    D_{n,\beta} = Q \Lambda Q^\tau =: Q \, \mathrm{diag}(\lambda_{1,\beta}, \lambda_{2,\beta}, \cdots, \lambda_{n,\beta}) \, Q^\tau    (3.105)

for the eigendecomposition, where Q is the eigenvector matrix, such that
Q Q^\tau = Q^\tau Q = I_n and the first row q = (q_1, q_2, \cdots, q_n) is
strictly positive.
As remarked above, such an eigendecomposition is unique, and the map T
between D_{n,\beta} and (\Lambda, q) is a one-to-one correspondence whose
Jacobian determinant is given by (3.103). Hence the joint probability
density p(\lambda, q) of (\lambda_{1,\beta}, \cdots, \lambda_{n,\beta}) and
q = (q_1, \cdots, q_{n-1}) is equal to

    \frac{1}{(2\pi)^{n/2}} \frac{2^{n-1}}{\prod_{i=1}^{n-1} \Gamma(\frac{\beta i}{2})} \prod_{i=1}^{n} e^{-\frac12 x_i^2} \prod_{i=1}^{n-1} y_i^{\beta i - 1} e^{-y_i^2} \, |J|
    = \frac{1}{(2\pi)^{n/2}} \frac{2^{n-1}}{\prod_{i=1}^{n-1} \Gamma(\frac{\beta i}{2})} \prod_{i=1}^{n} e^{-\frac12 \lambda_i^2} \prod_{i=1}^{n-1} y_i^{\beta i - 1} \, \frac{\prod_{i=1}^{n-1} y_i}{q_n \prod_{i=1}^{n} q_i}
    = \frac{1}{(2\pi)^{n/2}} \frac{2^{n-1}}{\prod_{i=1}^{n-1} \Gamma(\frac{\beta i}{2})} \prod_{i=1}^{n} e^{-\frac12 \lambda_i^2} \prod_{1 \le i < j \le n} (\lambda_i - \lambda_j)^{\beta} \, \frac{1}{q_n} \prod_{i=1}^{n} q_i^{\beta - 1},    (3.106)

where in the second equality we used
\sum_i x_i^2 + 2 \sum_i y_i^2 = \sum_i \lambda_i^2 and in the third
Lemma 3.10.
We see from (3.106) that (\lambda_{1,\beta}, \cdots, \lambda_{n,\beta}) and
q = (q_1, \cdots, q_{n-1}) are independent, and so the joint probability
density p_{n,\beta}(\lambda) of
(\lambda_{1,\beta}, \cdots, \lambda_{n,\beta}) can be obtained by
integrating out the variable q:

    p_{n,\beta}(\lambda_1, \cdots, \lambda_n)
    = \frac{1}{(2\pi)^{n/2}} \frac{2^{n-1}}{\prod_{i=1}^{n-1} \Gamma(\frac{\beta i}{2})} \prod_{i=1}^{n} e^{-\lambda_i^2/2} \prod_{1 \le i < j \le n} (\lambda_i - \lambda_j)^{\beta} \int_{\{q_i > 0, \ \sum_{i=1}^{n} q_i^2 = 1\}} \frac{1}{q_n} \prod_{i=1}^{n} q_i^{\beta - 1} \, dq_1 \cdots dq_{n-1}
    = \frac{1}{(2\pi)^{n/2}} \frac{\Gamma(\frac{\beta}{2})^n}{\prod_{i=1}^{n} \Gamma(\frac{\beta i}{2})} \prod_{1 \le i < j \le n} (\lambda_i - \lambda_j)^{\beta} \prod_{i=1}^{n} e^{-\lambda_i^2/2},    (3.107)

where we used Lemma 2.18 to compute the integral in the last equality.
Finally, to obtain the joint probability density of the unordered
eigenvalues, we only need to multiply (3.107) by the factor 1/n!. The proof
is now concluded. □
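A cheap consistency check of Theorem 3.11 (added here, not from the book) works at the level of second spectral moments: from the tridiagonal entries, E Σ λ_i² = E tr D_{n,β}² = (1/2)(Σ E a_i² + 2 Σ E b_i²) = n + β n(n−1)/2, and a Monte Carlo average of Σ λ_i² over sampled D_{n,β} should reproduce this value.

```python
import numpy as np

rng = np.random.default_rng(6)

def d_n_beta(n, beta, rng):
    # D_{n,beta}: (1/sqrt 2) tridiag(a; b) with a_i ~ N(0,2) and b_i ~ chi_{beta i}
    a = np.sqrt(2.0) * rng.standard_normal(n)
    b = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    return (np.diag(a) + np.diag(b, 1) + np.diag(b, -1)) / np.sqrt(2.0)

n, beta, reps = 10, 4.0, 4000
m2 = float(np.mean([np.sum(np.linalg.eigvalsh(d_n_beta(n, beta, rng)) ** 2)
                    for _ in range(reps)]))
predicted = n + beta * n * (n - 1) / 2
print(m2, predicted)   # both close to 190 for n = 10, beta = 4
```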
The HβE is a rich and well-studied model in random matrix theory. It
possesses many nice properties similar to those of the GUE. In particular,
there have been many new advances in the study of the asymptotic behaviour
of point statistics since the tridiagonal matrix model was discovered. We
quickly review below some results related to limit laws, without proofs. The
interested reader is referred to the original papers for more information.
To make the eigenvalues asymptotically fall in the interval
(-2\sqrt{n}, 2\sqrt{n}), we consider
H_{n,\beta} := \sqrt{\frac{2}{\beta}} \, D_{n,\beta}. Denote by
\lambda_{1,\beta}, \lambda_{2,\beta}, \cdots, \lambda_{n,\beta} the
eigenvalues of H_{n,\beta}; the corresponding empirical distribution
function is

    F_{n,\beta}(x) = \frac{1}{n} \sum_{i=1}^{n} 1_{(\lambda_{i,\beta} \le \sqrt{n}\,x)}.

Dumitriu and Edelman (2002) used moment methods to prove the Wigner
semicircle law:

    F_{n,\beta} \xrightarrow{d} \rho_{sc} \quad \text{in } P.

In particular, for each bounded continuous function f,

    \frac{1}{n} \sum_{i=1}^{n} f\Big(\frac{\lambda_{i,\beta}}{\sqrt{n}}\Big) \xrightarrow{P} \int_{-2}^{2} f(x) \rho_{sc}(x) dx.

Moreover, if f satisfies a certain regularity condition, then according to
Johansson (1998) the central limit theorem holds. Namely,

    \sum_{i=1}^{n} f\Big(\frac{\lambda_{i,\beta}}{\sqrt{n}}\Big) - n \int_{-2}^{2} f(x) \rho_{sc}(x) dx \xrightarrow{d} N\big(0, \sigma_{\beta,f}^2\big),

where \sigma_{\beta,f}^2 is given by
    \sigma_{\beta,f}^2 = \Big(\frac{2}{\beta} - 1\Big) \Big(\frac14 \big(f(2) + f(-2)\big) - \int_{-2}^{2} f(x) \rho_{sc}'(x) dx\Big) - \frac{1}{2\pi^2 \beta} \int_{-2}^{2} \int_{-2}^{2} \frac{f'(x) f(y) \rho_{sc}(x)}{(x - y) \rho_{sc}(y)} \, dx dy.
Following the line of the proof of Theorem 3.8, one could also prove the
logarithmic law

    \frac{1}{\sqrt{\frac{1}{\beta} \log n}} \Big(\log|\det H_{n,\beta}| - \frac12 \log n! + \frac14 \log n\Big) \xrightarrow{d} N(0, 1).
As for the counting functions of the eigenvalue point process, it is worth
mentioning the following two results, at the edge and inside the bulk. Let
u_n be a sequence of real numbers. Define for x ∈ R

    N_{n,\beta}(x) = \#\big\{1 \le i \le n : n^{1/6}(\lambda_{i,\beta} - u_n) \text{ falls between } 0 \text{ and } x\big\}.
Based on variational analysis, Ramírez, Rider and Virág (2011) proved that,
under the assumption n^{1/6}(2\sqrt{n} - u_n) \to a ∈ R,

    N_{n,\beta}(x) \xrightarrow{d} N_{Airy_\beta}(x),    x ∈ R,

where Airy_\beta is defined as −1 times the point process of eigenvalues of
the stochastic Airy operator with parameter β, and N_{Airy_\beta}(x) is the
number of its points between 0 and x.
In the same spirit, Valkó and Virág (2009) considered the eigenvalues around
any location away from the spectral edge. Let u_n be a sequence of real
numbers such that n^{1/6}(2\sqrt{n} - |u_n|) \to \infty. Define for x ∈ R

    N_{n,\beta}(x) = \#\big\{1 \le i \le n : \sqrt{4n - u_n^2} \, (\lambda_{i,\beta} - u_n) \text{ falls between } 0 \text{ and } x\big\};

then

    N_{n,\beta}(x) \xrightarrow{d} N_{Sine_\beta}(x),    x ∈ R,

where Sine_\beta is a translation invariant point process given by the
Brownian carousel.
As the reader may see, the point process of the HβE is no longer
determinantal except in special cases. Thus Theorem 3.7, the
Costin-Lebowitz-Soshnikov theorem, is not applicable. However, we can follow
the strategy of Valkó and Virág (2009) to prove a central limit theorem
for the number of points of the HβE lying to the right of the origin,
see and (2010).
Theorem 3.12. Let N_n(0, \infty) be the number of points of the HβE lying to
the right of the origin. Then it follows that

    \frac{1}{\sqrt{\frac{1}{\beta \pi^2} \log n}} \Big(N_n(0, \infty) - \frac{n}{2}\Big) \xrightarrow{d} N(0, 1).    (3.108)

We remark that the number N_n(0, \infty), sometimes called the index, is a
key object of interest to physicists. Cavagna, Garrahan and Giardina (2000)
calculated the distribution of the index for the GOE by means of the replica
method and obtained a Gaussian distribution with asymptotic variance of
order \log n/\pi^2. Majumdar, Nadal, Scardicchio and Vivo (2009) further
computed analytically the probability distribution of the number
N_n[0, \infty) of positive points for the HβE using the partition function
and a saddle point analysis. They computed the variance
\log n/\beta\pi^2 + O(1), which agrees with the corresponding variance in
(3.108), although they argued that the distribution is not strictly Gaussian
due to an unusual logarithmic singularity in the rate function.
The rest of this section is devoted to the proof of Theorem 3.12. The proof
relies largely on the phase evolution of eigenvectors invented by Valkó and
Virág (2009). Let H_{n,\beta} = \sqrt{2/\beta} \, D_{n,\beta}. We only need
to consider the number of positive eigenvalues of H_{n,\beta}. A key idea is
to derive from the tridiagonal matrix model a recurrence relation for a real
number \Lambda to be an eigenvalue, which yields an evolution relation for
eigenvectors. Specifically, let s_j = \sqrt{n - j - \frac12}. Define

    O_n = \mathrm{diag}(d_{11}, d_{22}, \cdots, d_{nn}),

where

    d_{11} = 1,    d_{ii} = d_{i-1,i-1} \frac{b_{n+1-i}}{\sqrt{\beta} \, s_{i-1}},    2 \le i \le n.

Let

    X_i = \frac{a_{n-i}}{\sqrt{\beta}},    0 \le i \le n-1,

and

    Y_i = \frac{b_{n-1-i}^2}{\beta s_{i+1}} - s_i,    0 \le i \le n-2.

Then

    O_n^{-1} H_{n,\beta} O_n = \begin{pmatrix} X_0 & s_0 + Y_0 & 0 & \cdots & 0 \\ s_1 & X_1 & s_1 + Y_1 & \cdots & 0 \\ 0 & s_2 & X_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & X_{n-1} \end{pmatrix}

obviously has the same eigenvalues as H_{n,\beta}. There is, however, a
significant difference between these two matrices: the rows of
O_n^{-1} H_{n,\beta} O_n are independent of each other, while H_{n,\beta} is
symmetric, so its rows are not independent.
X0 s0 + Y0 0 ··· 0
 
 s1 X1 s1 + Y1 · · · 0 
 
On−1 Hn,β On =  0
 s2 X2 · · · 0  
 . .. .. .. .. 
 .. . . . . 
0 0 0 · · · Xn−1
obviously have the same eigenvalues as Hn,β . However, there is a significant
difference between these two matrices. The rows between On−1 Hn,β On are
independent of each other, while Hn,β is symmetric so that the rows are
not independent.
Assume that Λ is an eigenvalue of On−1 Hn,β On , then by definition there
exists a nonzero eigenvector v = (v1 , v2 , · · · , vn )τ such that
On−1 Hn,β On v = Λv.
Without loss of generality, we can assume v1 = 1. Thus, Λ is an eigenvalue
if and only there exists an eigenvector vτ = (1, v2 , · · · , vn ) such that
X0 s0 + Y0 0 ··· 0 1 1
    
 s1 X1 s1 + Y1 · · · 0   v2   v2 
    
 0
 s2 X2 · · · 0    v3  = Λ  v3  .
   
 . .. .. .. . .  . 
 .. . ..   ..   .. 
  
. .
0 0 0 · · · Xn−1 vn vn
Gaussian Unitary Ensemble 145

It can be equivalently rewritten as the row-by-row system
\[
s_l v_l + X_l v_{l+1} + (s_l + Y_l)\, v_{l+2} = \Lambda v_{l+1}, \qquad 0 \le l \le n-1,
\]
with the conventions $v_0 = 0$ and $v_{n+1} = 0$. Define $r_l = v_{l+1}/v_l$, $0 \le l \le n$. Thus we have
the following necessary and sufficient condition for Λ to be an eigenvalue
in terms of evolution:
\[
r_0 = \infty, \qquad r_n = 0,
\]
and
\[
r_{l+1} = \frac{1}{1 + Y_l/s_l}\left( -\frac{1}{r_l} + \frac{\Lambda - X_l}{s_l} \right), \qquad 0 \le l \le n-1. \tag{3.109}
\]
Since the $(X_l, Y_l)$'s are independent, $r_0, r_1, \cdots, r_{n-1}, r_n$ forms a
Markov chain with ∞ as initial state and 0 as destination state, and the
next state $r_{l+1}$, given the present state $r_l$, is attained through a random
fractional linear transform.
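The eigenvalue-counting idea behind such a scalar recursion has a classical numerical analogue: by Sylvester's law of inertia, the signs of the pivots of the $LDL^\tau$ factorization of a symmetric tridiagonal matrix $T - \sigma I$ (themselves produced by a one-step rational recursion) count the eigenvalues below the level σ. A minimal illustrative sketch, not taken from the book (function and variable names are ours):

```python
import numpy as np

def count_above(diag, off, sigma=0.0):
    # Pivots of the LDL^T factorization of T - sigma*I; by Sylvester's law of
    # inertia the number of negative pivots equals the number of eigenvalues
    # below sigma, so the remaining ones lie above (ties occur with probability 0).
    q = diag[0] - sigma
    neg = 1 if q < 0 else 0
    for k in range(1, len(diag)):
        q = diag[k] - sigma - off[k - 1] ** 2 / q   # rational one-step recursion
        neg += 1 if q < 0 else 0
    return len(diag) - neg

rng = np.random.default_rng(0)
n = 200
a = rng.normal(size=n)          # diagonal entries
b = rng.normal(size=n - 1)      # off-diagonal entries
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
exact = int((np.linalg.eigvalsh(T) > 0.0).sum())
assert count_above(a, b) == exact
```

The pivot recursion plays the same role as the ratio chain $r_l$ above: eigenvalue counts are read off from a one-dimensional sweep through the matrix rather than from a full diagonalization.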
Next we turn to the description of the phase evolution. Let $\mathbb{H}$ denote
the upper half plane and $\mathbb{U}$ the Poincaré disk model, and define the bijection
\[
U : \bar{\mathbb{H}} \to \bar{\mathbb{U}}, \qquad z \mapsto \frac{i - z}{i + z},
\]
which is also a bijection between the boundaries. As r moves on the boundary
$\partial\mathbb{H} = \mathbb{R} \cup \{\infty\}$, its image under U will move along $\partial\mathbb{U}$.
In order to follow the number of times this image circles $\mathbb{U}$, we need to
extend the action from $\partial\mathbb{U}$ to its universal cover $\mathbb{R}' = \mathbb{R}$, where the prime is
used to distinguish this from $\partial\mathbb{H}$. For an action T on $\mathbb{R}'$, the three actions
are denoted by
\[
\bar{\mathbb{H}} \to \bar{\mathbb{H}} : z \mapsto z \bullet T, \qquad
\bar{\mathbb{U}} \to \bar{\mathbb{U}} : z \mapsto z \circ T, \qquad
\mathbb{R}' \to \mathbb{R}' : z \mapsto z \ast T.
\]
Let Q(α) denote the rotation by α in U about 0, i.e.,
ϕ∗ Q(α) = ϕ + α.
For $a, b \in \mathbb{R}$, let $A(a, b)$ be the affine map $z \mapsto a(z + b)$ in $\mathbb{H}$. Furthermore,
define
\[
W_l = A\!\left( \frac{1}{1 + Y_l/s_l},\; -\frac{X_l}{s_l} \right)
\]
and
\[
R_{l,\Lambda} = Q(\pi)\, A\!\left(1, \frac{\Lambda}{s_l}\right) W_l, \qquad 0 \le l \le n-1.
\]
With these notations, the evolution of r in (3.109) becomes
rl+1 = rl• Rl,Λ , 0≤l ≤n−1
and Λ is an eigenvalue if and only if
∞• R0,Λ · · · Rn−1,Λ = 0.
For 0 ≤ l ≤ n define
\[
\hat\varphi_{l,\Lambda} = \pi \ast R_{0,\Lambda} \cdots R_{l-1,\Lambda}, \qquad
\hat\varphi^{\odot}_{l,\Lambda} = 0 \ast R_{n-1,\Lambda}^{-1} \cdots R_{l,\Lambda}^{-1};
\]
then
\[
\hat\varphi_{l,\Lambda} = \hat\varphi^{\odot}_{l,\Lambda} \mod 2\pi.
\]
The following lemma summarizes nice properties of $\hat\varphi$ and $\hat\varphi^{\odot}$; its
proof can be found in Valkó and Virág (2012).

Lemma 3.12. With the above notations, we have

(i) $r_{l,\Lambda} \bullet U = e^{i\hat\varphi_{l,\Lambda}}$;
(ii) $\hat\varphi_{0,\Lambda} = \pi$, $\hat\varphi^{\odot}_{n,\Lambda} = 0$;
(iii) for each $0 < l \le n$, $\hat\varphi_{l,\Lambda}$ is analytic and strictly increasing in Λ.
For $0 \le l < n$, $\hat\varphi^{\odot}_{l,\Lambda}$ is analytic and strictly decreasing in Λ;
(iv) for any $0 \le l \le n$, Λ is an eigenvalue of $H_{n,\beta}$ if and only if $\hat\varphi_{l,\Lambda} - \hat\varphi^{\odot}_{l,\Lambda} \in 2\pi\mathbb{Z}$.
Fix $-2 < x < 2$ and let $n_0 = n(1 - x^2/4) - 1/2$. Let $\Lambda = x\sqrt{n} + \lambda/(2\sqrt{n_0})$ and
recycle the notation $r_{l,\lambda}$, $\hat\varphi_{l,\lambda}$, $\hat\varphi^{\odot}_{l,\lambda}$ for the quantities $r_{l,\Lambda}$, $\hat\varphi_{l,\Lambda}$, $\hat\varphi^{\odot}_{l,\Lambda}$.
Note that there is a macroscopic term $Q(\pi)A(1, \Lambda/s_l)$ in the evolution
operator $R_{l,\Lambda}$. So the phase function $\varphi_{l,\Lambda}$ exhibits fast oscillation in l. Let
\[
J_l = Q(\pi)\, A\!\left(1, \frac{x\sqrt{n}}{s_l}\right)
\]
and
\[
\rho_l = \sqrt{\frac{n x^2/4}{n x^2/4 + n_0 - l}} + i\, \sqrt{\frac{n_0 - l}{n x^2/4 + n_0 - l}}.
\]
Thus $J_l$ is a rotation since $\rho_l \bullet J_l = \rho_l$. We separate $J_l$ from the evolution
operator R to get
\[
R_{l,\lambda} = J_l\, L_{l,\lambda}\, W_l, \qquad
L_{l,\lambda} = A\!\left(1, \frac{\lambda}{2\sqrt{n_0}\, s_l}\right).
\]
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 147

Gaussian Unitary Ensemble 147

Note that for any finite λ, Ll,λ and Wl become infinitesimal in the n → ∞
limit while $J_l$ does not. Let
\[
T_l = A\!\left( \frac{1}{\mathrm{Im}(\rho_l)},\; -\mathrm{Re}(\rho_l) \right);
\]
then
\[
J_l = Q(-2\arg(\rho_l))^{T_l},
\]
where $A^B = B^{-1} A B$. Define
\[
Q_l = Q(2\arg(\rho_0)) \cdots Q(2\arg(\rho_l))
\]
and
\[
\varphi_{l,\lambda} = \hat\varphi_{l,\lambda} \ast T_l Q_{l-1}, \qquad
\varphi^{\odot}_{l,\lambda} = \hat\varphi^{\odot}_{l,\lambda} \ast T_l Q_{l-1}.
\]
The following lemma is a variant of Lemma 3.12.
Lemma 3.13. With the above notations, we have for $1 \le l \le n-1$

(i′) $\varphi_{0,\lambda} = \pi$;
(ii′) $\varphi_{l,\lambda}$ and $-\varphi^{\odot}_{l,\lambda}$ are analytic and strictly increasing in λ, and are also
independent;
(iii′) with $S_{l,\lambda} = T_l^{-1} L_\lambda W_\lambda T_{l+1}$ and $\eta_l = \rho_0^2 \rho_1^2 \cdots \rho_l^2$, we have
\[
\Delta\varphi_{l,\lambda} := \varphi_{l+1,\lambda} - \varphi_{l,\lambda} = \mathrm{ash}\big( S_{l,\lambda},\, -1,\, e^{i\varphi_{l,\lambda}} \bar\eta_l \big);
\]
(iv′) $\hat\varphi_{l,\lambda} = \varphi_{l,\lambda} \ast Q_{l-1}^{-1} T_l^{-1}$;
(v′) for any $\lambda < \lambda'$ we have a.s.
\[
N_n\!\left( x\sqrt{n} + \frac{\lambda}{2\sqrt{n_0}},\; x\sqrt{n} + \frac{\lambda'}{2\sqrt{n_0}} \right)
= \#\big( (\varphi_{l,\lambda} - \varphi^{\odot}_{l,\lambda},\; \varphi_{l,\lambda'} - \varphi^{\odot}_{l,\lambda'}] \cap 2\pi\mathbb{Z} \big). \tag{3.110}
\]
The difference $\Delta\varphi_{l,\lambda}$ can be estimated as follows. Let
\[
Z_{l,\lambda} = i \bullet S_{l,\lambda}^{-1} - i
= i \bullet T_{l+1}^{-1} (L_\lambda W_\lambda)^{-1} T_l - i
= v_{l,\lambda} + V_l,
\]
where
\[
v_{l,\lambda} = -\frac{\lambda}{2\sqrt{n_0}\sqrt{n_0 - l}} + \frac{\rho_{l+1} - \rho_l}{\mathrm{Im}(\rho_l)},
\qquad
V_l = \frac{X_l + \rho_{l+1} Y_l}{\sqrt{n_0 - l}}.
\]
Then
\[
\Delta\varphi_{l,\lambda} = \mathrm{ash}(S_{l,\lambda}, -1, z\bar\eta)
= \mathrm{Re}\left( -(1 + \bar z\eta) Z - \frac{i(1 + \bar z\eta)^2}{4}\, Z^2 \right) + O(Z^3)
= -\mathrm{Re}\, Z + \frac{\mathrm{Im}(Z^2)}{4} + \eta\ \text{terms} + O(Z^3),
\]
where we used $Z = Z_{l,\lambda}$, $\eta = \eta_l$ and $z = e^{i\varphi_{l,\lambda}}$.
Lemma 3.14. Assume $\lambda = \lambda_n = o(\sqrt{n})$. For $l \le n_0$, we have
\[
E\big( \Delta\varphi_{l,\lambda} \,\big|\, \varphi_{l,\lambda} = x \big)
= \frac{1}{n_0}\, b_n + \frac{1}{n_0}\, \mathrm{osc}_1 + O\big( (n_0 - l)^{-3/2} \big), \tag{3.111}
\]
\[
E\big( (\Delta\varphi_{l,\lambda})^2 \,\big|\, \varphi_{l,\lambda} = x \big)
= \frac{1}{n_0}\, a_n + \frac{1}{n_0}\, \mathrm{osc}_2 + O\big( (n_0 - l)^{-3/2} \big), \tag{3.112}
\]
\[
E\big( |\Delta\varphi_{l,\lambda}|^d \,\big|\, \varphi_{l,\lambda} \big) = O\big( (n_0 - l)^{-3/2} \big), \qquad d > 2, \tag{3.113}
\]
where
\[
b_n = \frac{\sqrt{n_0}\, \lambda}{2\sqrt{n_0 - l}} - \frac{\mathrm{Re}\frac{d\rho}{dt}}{\mathrm{Im}\rho} + \frac{\sqrt{n_0}\, \mathrm{Im}(\rho^2)}{2\beta\sqrt{n_0 - l}},
\qquad
a_n = \frac{2 n_0}{\beta(n_0 - l)} + \frac{n_0 (3 + \mathrm{Re}\rho^2)}{\beta(n_0 - l)}.
\]
The oscillatory terms are
\[
\mathrm{osc}_1 = \mathrm{Re}\Big[ \Big( -v_\lambda - i\, \frac{q_n}{2} \Big) e^{-ix} \eta_l \Big]
+ \frac{1}{4}\, \mathrm{Re}\big[ i\, q_n e^{-2ix} \eta_l^2 \big],
\]
\[
\mathrm{osc}_2 = p_n\, \mathrm{Re}\big[ e^{-ix} \eta_l \big]
+ \frac{1}{2}\, \mathrm{Re}\big[ q_n \big( e^{-ix} \eta_l + e^{-2ix} \eta_l^2 \big) \big],
\]
where
\[
p_n = \frac{4 n_0}{\beta(n_0 - l)}, \qquad q_n = \frac{2 n_0 (1 + \rho_l)}{\beta(n_0 - l)}.
\]
Lemma 3.15. We have for $0 < l \le n_0$:

(i) $\hat\varphi_{l,\infty} = \pi$, $\hat\varphi^{\odot}_{l,\infty} = -2(n-l)\pi$;
(ii) $\varphi_{l,\infty} = (l+1)\pi$, $\varphi^{\odot}_{l,\infty} = -2n\pi + 3l\pi$;
(iii) $\varphi^{\odot}_{l,0} = \hat\varphi^{\odot}_{l,0} + l\pi$, where $\hat\varphi^{\odot}_{l,0} = 0 \ast R_{n-1,0}^{-1} \cdots R_{l,0}^{-1}$.
Proof. First, we prove (i). Recall
\[
\hat\varphi_{l,\infty} = \pi \ast R_{0,\infty} \cdots R_{l-1,\infty}, \qquad
\hat\varphi^{\odot}_{l,\infty} = 0 \ast R_{n-1,\infty}^{-1} \cdots R_{l,\infty}^{-1},
\]
where $R_{l,\infty} = Q(\pi) A(1, \infty) W_l$.

Note that the affine transformation $A(1, \infty) W_l$ maps any z to ∞, and the
image of ∞ under the Möbius transform U is −1, which in turn corresponds
to $\pi \in \mathbb{R}'$. Thus it easily follows that $\hat\varphi_{l,\infty} = \pi$.

As for $\hat\varphi^{\odot}_{l,\infty}$, note
\[
\hat\varphi^{\odot}_{l,\infty} = \hat\varphi^{\odot}_{l+1,\infty} \ast R_{l,\infty}^{-1},
\]
where $R_{l,\infty}^{-1} = W_l^{-1} A(1, -\infty) Q(-\pi)$.
By the angular shift formula, we have
\[
\hat\varphi^{\odot}_{l,\infty} = \hat\varphi^{\odot}_{l+1,\infty} \ast W_l^{-1} A(1, -\infty) - \pi
= \hat\varphi^{\odot}_{l+1,\infty} + \mathrm{ash}\big( W_l^{-1} A(1, -\infty),\, -1,\, e^{i\hat\varphi^{\odot}_{l+1,\infty}} \big) - \pi
\]
\[
= \hat\varphi^{\odot}_{l+1,\infty}
+ \arg_{[0,2\pi)}\!\left( \frac{e^{i\hat\varphi^{\odot}_{l+1,\infty}} \circ W_l^{-1} A(1,-\infty)}{-1 \circ W_l^{-1} A(1,-\infty)} \right)
- \arg_{[0,2\pi)}\!\left( \frac{e^{i\hat\varphi^{\odot}_{l+1,\infty}}}{-1} \right) - \pi
\]
\[
= \hat\varphi^{\odot}_{l+1,\infty} + \arg_{[0,2\pi)}(1) - \arg_{[0,2\pi)}\big( -e^{i\hat\varphi^{\odot}_{l+1,\infty}} \big) - \pi,
\]
from which and the fact $\hat\varphi^{\odot}_{n,\infty} = 0$ we can easily derive
\[
\hat\varphi^{\odot}_{n-1,\infty} = -2\pi, \quad \hat\varphi^{\odot}_{n-2,\infty} = -4\pi, \quad \cdots, \quad \hat\varphi^{\odot}_{l,\infty} = -2(n-l)\pi.
\]
Next we turn to the proof of (ii) and (iii). Since x = 0, we have $\rho_l = i$, and so
$T_l$ is the identity transform and $Q_{l-1} = Q(l\pi)$ for each $0 < l \le n_0$. Thus
we have, by the fact that Q is a rotation,
\[
\varphi_{l,\lambda} = \hat\varphi_{l,\lambda} \ast T_l Q_{l-1}
= \hat\varphi_{l,\lambda} \ast Q(l\pi)
= \hat\varphi_{l,\lambda} + l\pi, \tag{3.114}
\]
where $0 \le \lambda \le \infty$. Similarly, $\varphi^{\odot}_{l,\lambda} = \hat\varphi^{\odot}_{l,\lambda} + l\pi$. □
Proof of Theorem 3.12. Let x = 0 and $n_0 = n - 1/2$. Taking $l = \lfloor n_0 \rfloor = n - 1$,
we have by (3.110)
\[
N_n(0, \infty) = \#\big( (\varphi_{n-1,0} - \varphi^{\odot}_{n-1,0},\; \varphi_{n-1,\infty} - \varphi^{\odot}_{n-1,\infty}) \cap 2\pi\mathbb{Z} \big),
\]
from which it readily follows that
\[
\Big| N_n(0, \infty) - \frac{1}{2\pi}\Big( \big(\varphi_{n-1,\infty} - \varphi^{\odot}_{n-1,\infty}\big) - \big(\varphi_{n-1,0} - \varphi^{\odot}_{n-1,0}\big) \Big) \Big| \le 1.
\]
Applying Lemma 3.15 to $l = n - 1$ immediately yields
\[
\varphi_{n-1,\infty} - \varphi^{\odot}_{n-1,\infty} = 3\pi
\]
and
\[
\varphi^{\odot}_{n-1,0} = 0 \ast R_{n-1,0}^{-1} + (n-1)\pi.
\]
Also, it follows that
\[
\frac{0 \ast R_{n-1,0}^{-1}}{\sqrt{\log n}} \xrightarrow{P} 0.
\]
In combination, we need only to prove
\[
\frac{\varphi_{n-1,0}}{\sqrt{\log n}} \xrightarrow{d} N\Big( 0, \frac{4}{\beta} \Big).
\]
To this end, we shall use the following CLT for Markov chains. Recall that
$\pi = \varphi_{0,0}, \varphi_{1,0}, \cdots, \varphi_{n-1,0}$ forms a Markov chain. Let
\[
z_{l+1} = \Delta\varphi_{l,0} - E\big( \Delta\varphi_{l,0} \,\big|\, \varphi_{l,0} \big).
\]
Then $z_1, z_2, \cdots, z_{n-1}$ forms a martingale difference sequence. The martingale CLT implies: if the following conditions are satisfied:
(i)
\[
B_n := \sum_{l=1}^{n-1} E z_l^2 \to \infty, \tag{3.115}
\]
(ii)
\[
\frac{1}{B_n} \sum_{l=1}^{n-1} E\big( z_l^2 \,\big|\, \varphi_{l-1,0} \big) \xrightarrow{P} 1, \tag{3.116}
\]
(iii) for any ε > 0
\[
\frac{1}{B_n} \sum_{l=1}^{n-1} E\big( z_l^2\, 1_{(|z_l| > \varepsilon\sqrt{B_n})} \,\big|\, \varphi_{l-1,0} \big) \xrightarrow{P} 0, \tag{3.117}
\]
then we have
\[
\frac{1}{\sqrt{B_n}} \sum_{l=1}^{n-1} z_l \xrightarrow{d} N(0, 1).
\]
We will next verify conditions (3.115)–(3.117) by the asymptotic estimates
(3.111)–(3.113) for the increments. Start with $B_n$. Note
\[
E\big( \Delta\varphi_{l,0} \,\big|\, \varphi_{l,0} \big) = O\big( (n_0 - l)^{-3/2} \big)
\]
and
\[
E(\Delta\varphi_{l,0})^2 = E\Big( E\big( (\Delta\varphi_{l,0})^2 \,\big|\, \varphi_{l,0} \big) \Big)
= \frac{4}{\beta(n_0 - l)} + \frac{4}{n_0 - l}\, E\, \mathrm{Re}\big( (-1)^{l+1} e^{-i\varphi_{l,0}} \big)
+ O\big( (n_0 - l)^{-3/2} \big).
\]
Hence a direct computation yields
\[
B_n = \sum_{l=1}^{n-1} \Big( E(\Delta\varphi_{l,0})^2 - E\big( E(\Delta\varphi_{l,0} \,|\, \varphi_{l,0}) \big)^2 \Big)
= \frac{4}{\beta} \log n + O(1) \to \infty
\]
and
\[
\frac{1}{B_n} \sum_{l=1}^{n-1} E\big( z_l^2 \,\big|\, \varphi_{l-1,0} \big) - 1
= \frac{1}{B_n} \sum_{l=1}^{n-1} \Big( E\big( z_l^2 \,\big|\, \varphi_{l-1,0} \big) - E z_l^2 \Big)
\]
\[
= \frac{1}{B_n} \sum_{l=1}^{n-1} \Big( E\big( (\Delta\varphi_{l-1,0})^2 \,\big|\, \varphi_{l-1,0} \big) - E(\Delta\varphi_{l-1,0})^2 \Big)
+ \frac{1}{B_n} \sum_{l=1}^{n-1} \Big( E\big( E(\Delta\varphi_{l-1,0} \,|\, \varphi_{l-1,0}) \big)^2 - \big( E(\Delta\varphi_{l-1,0} \,|\, \varphi_{l-1,0}) \big)^2 \Big)
\xrightarrow{P} 0.
\]
It also follows from (3.113) that
\[
\sum_{l=1}^{n-1} E |\Delta\varphi_{l-1,0}|^3 = O(1),
\]
which in turn immediately implies the Lindeberg condition (3.117). Thus
we have completed the proof of the theorem. □
Chapter 4

Random Uniform Partitions

4.1 Introduction
The theory of partitions is one of the very few branches of mathematics
that can be appreciated by anyone who is endowed with little more than a
lively interest in the subject. Its applications are found wherever discrete
objects are to be counted or classified, whether in the molecular and the
atomic studies of matter, in the theory of numbers, or in combinatorial
problems from all sources.
Let n be a natural number. A partition of n is a finite nonincreasing
sequence of positive integers $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_l > 0$ such that $\sum_{j=1}^{l} \lambda_j = n$.
Set
\[
r_k = \#\{ 1 \le j \le l : \lambda_j = k \}.
\]
Trivially,
\[
\sum_{k=1}^{\infty} r_k = l, \qquad \sum_{k=1}^{\infty} k r_k = n.
\]
As we remarked in Section 2.2, there is a close connection between partitions
and permutations.
The set of all partitions of n is denoted by $\mathcal{P}_n$, and the set of all partitions
by $\mathcal{P}$, i.e., $\mathcal{P} = \cup_{n=0}^{\infty} \mathcal{P}_n$. Here by convention the empty sequence
forms the only partition of zero. Among the most important and fundamental
questions is that of enumerating various sets of partitions. Let p(n)
be the number of partitions of n. Trivially, p(0) = 1, and p(n) increases
quite rapidly with n. In fact, p(10) = 42, p(20) = 627, p(50) = 204226,
p(100) = 190569292, p(200) = 3972999029388.
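These values are easy to reproduce with Euler's pentagonal number recurrence
$p(n) = \sum_{k \ge 1} (-1)^{k+1}\big( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) \big)$,
a standard consequence of Euler's product formula for the generating function. A short sketch (our own code, not the book's):

```python
def partition_numbers(N):
    # p[m] = number of partitions of m, via Euler's pentagonal number recurrence
    p = [1] + [0] * N
    for m in range(1, N + 1):
        k, sign, total = 1, 1, 0
        while k * (3 * k - 1) // 2 <= m:
            total += sign * p[m - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= m:
                total += sign * p[m - k * (3 * k + 1) // 2]
            k, sign = k + 1, -sign
        p[m] = total
    return p

p = partition_numbers(200)
assert (p[10], p[20], p[50]) == (42, 627, 204226)
assert (p[100], p[200]) == (190569292, 3972999029388)
```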
The study of p(n) dates back to Euler as early as the 1750s, who
proved many beautiful and significant partition theorems, and so laid the
foundations of the theory of partitions. Many of the other great
mathematicians have contributed to the development of the theory. The reader
is referred to Andrews (1976), which is a first thorough survey of this field
with many informative historic notes.
It turns out that the generating function is a powerful tool for studying
p(n). Define the generating function of the p(n) by
\[
\mathcal{F}(z) = \sum_{n=0}^{\infty} p(n) z^n. \tag{4.1}
\]
Euler started the analytic theory of partitions by providing the explicit
formula
\[
\mathcal{F}(z) = \prod_{k=1}^{\infty} \frac{1}{1 - z^k}. \tag{4.2}
\]
We remark that on the one hand, for many problems it suffices to consider
F(z) as a formal power series in z; on the other hand, much asymptotic
work requires that F(z) be an analytic function of the complex variable z.
The asymptotic theory starts 150 years after Euler, with the celebrated
letters of Ramanujan to Hardy in 1913. In a celebrated series of
memoirs published in 1917 and 1918, Hardy and Ramanujan found (in a form
later perfected by Rademacher) very precise estimates for p(n). In particular, we have
Theorem 4.1.
\[
p(n) = \frac{1}{4\sqrt{3}\, n}\, e^{2c\sqrt{n}} \big( 1 + o(1) \big), \tag{4.3}
\]
where, here and in the sequel, $c = \pi/\sqrt{6}$.
The complete proof of Theorem 4.1 can be found in §2.7 of the book by
Postnikov (1988). Instead, we prefer to give a rough sketch of the proof, without
justifying anything. First, using the Cauchy integral formula for the
coefficients of a power series, we obtain from (4.1) and (4.2)
\[
p(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} r^{-n} e^{-in\theta} \mathcal{F}(re^{i\theta})\, d\theta.
\]
Choose $\theta_n > 0$ and split the integral expression for p(n) into two parts:
\[
p(n) = \frac{1}{2\pi} \left( \int_{|\theta| \le \theta_n} + \int_{|\theta| > \theta_n} \right) r^{-n} e^{-in\theta} \mathcal{F}(re^{i\theta})\, d\theta.
\]
If $\theta_n = n^{-3/4+\varepsilon}$, ε > 0, then it holds that
\[
p(n) \approx \frac{1}{2\pi} \int_{|\theta| \le \theta_n} r^{-n} e^{-in\theta} \mathcal{F}(re^{i\theta})\, d\theta.
\]
By Taylor's formula with two terms, for $|\theta| \le \theta_n$ we have
\[
\log \mathcal{F}(re^{i\theta}) \approx \log \mathcal{F}(r) + i\theta \big( \log \mathcal{F}(r) \big)' - \frac{\theta^2}{2} \big( \log \mathcal{F}(r) \big)'',
\]
and so
\[
p(n) \approx \frac{\mathcal{F}(r)}{2\pi r^n} \int_{|\theta| \le \theta_n} e^{-i\theta(n - (\log \mathcal{F}(r))')}\, e^{-\theta^2 (\log \mathcal{F}(r))''/2}\, d\theta.
\]
Up to this point, r has been a free parameter. We now choose r so that
$-n + (\log \mathcal{F}(r))' = 0$; i.e., we must choose $r = e^{-v}$ to satisfy the equation
\[
n = \sum_{k=1}^{\infty} \frac{k}{e^{kv} - 1}.
\]
Note
\[
\sum_{k=1}^{\infty} \frac{k}{e^{kv} - 1}
= \frac{1}{v^2} \sum_{k=1}^{\infty} v\, \frac{kv}{e^{kv} - 1}
\approx \frac{1}{v^2} \int_0^{\infty} \frac{x}{e^x - 1}\, dx
= \frac{c^2}{v^2}.
\]
Thus we must take v so that $n \approx c^2/v^2$, i.e., $v \approx c/\sqrt{n}$. For such a choice,
\[
p(n) \approx \frac{\mathcal{F}(r)}{2\pi r^n} \int_{|\theta| \le \theta_n} e^{-\theta^2 (\log \mathcal{F}(r))''/2}\, d\theta
\approx \frac{\mathcal{F}(r)}{2\pi r^n} \int_{-\infty}^{\infty} e^{-\theta^2 (\log \mathcal{F}(r))''/2}\, d\theta
= \frac{\mathcal{F}(r)}{r^n \sqrt{2\pi (\log \mathcal{F}(r))''}}, \tag{4.4}
\]
using the classical normal integral, the derivatives being taken with respect to $\log r$. To evaluate (4.4), we need
the following lemma; see Postnikov (1988).
Lemma 4.1. Assume $\mathrm{Re}\, z > 0$ and $z \to 0$, staying within some angle lying
in the right half-plane. Then
\[
\log \mathcal{F}(e^{-z}) = \frac{c^2}{z} + \frac{1}{2} \log \frac{z}{2\pi} + O(|z|). \tag{4.5}
\]
As a consequence, it easily follows with $r = e^{-c/\sqrt{n}}$ that
\[
\log \mathcal{F}(r) \approx c\sqrt{n} + \frac{1}{4} \log \frac{1}{24n} \tag{4.6}
\]
and
\[
(\log \mathcal{F}(r))'' \approx \frac{2}{c}\, n^{3/2}. \tag{4.7}
\]
Substituting (4.6) and (4.7) into (4.4) yields the desired result (4.3).
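As a numerical sanity check of (4.3), one can compare exact values of p(n) (computed with Euler's pentagonal recurrence) against the Hardy–Ramanujan approximation; the ratio tends to 1 slowly. A sketch (tolerances are our own choices):

```python
import math

def partition_numbers(N):
    # exact p(n) via Euler's pentagonal number recurrence
    p = [1] + [0] * N
    for m in range(1, N + 1):
        k, sign, total = 1, 1, 0
        while k * (3 * k - 1) // 2 <= m:
            total += sign * p[m - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= m:
                total += sign * p[m - k * (3 * k + 1) // 2]
            k, sign = k + 1, -sign
        p[m] = total
    return p

c = math.pi / math.sqrt(6)
p = partition_numbers(1000)

def approx(n):
    # leading Hardy-Ramanujan term (4.3)
    return math.exp(2 * c * math.sqrt(n)) / (4 * math.sqrt(3) * n)

r200, r1000 = p[200] / approx(200), p[1000] / approx(1000)
assert abs(r200 - 1) < 0.05
assert abs(r1000 - 1) < abs(r200 - 1)   # the approximation improves with n
```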
Another effective elementary device for studying partitions is graphical
representation. To each partition λ is associated its Young diagram (shape),
which can be formally defined as the set of points (i, j) ∈ Z2 such that
1 ≤ j ≤ λi . In drawing such diagrams, by convention, the first coordinate
i (the row index) increases as one goes downwards, the second coordinate
j (the column index) increases as one goes from the left to the right and
these points are left justified. More often it is convenient to replace the
nodes by unit squares, see Figure 2.1.
Such a representation is extremely useful when we consider applications
of partitions to plane partitions or Young tableaux. Sometimes we prefer
the representation to be upside down, in consistency with Cartesian
coordinate geometry.
The conjugate of a partition λ is the partition λ′ whose diagram is the
transpose of the diagram of λ, i.e., the diagram obtained by reflection in the
main diagonal. Hence $\lambda'_i$ is the number of squares in the ith column of
λ, or equivalently,
\[
\lambda'_i = \sum_{k=i}^{\infty} r_k. \tag{4.8}
\]
In particular, $\lambda'_1 = l(\lambda)$ and $\lambda_1 = l(\lambda')$. Obviously, $\lambda'' = \lambda$.
We have so far defined the set $\mathcal{P}_n$ of partitions of n and seen how to
count its size p(n). Now we want to equip this set with a probability measure.
As we will see, this set bears various natural measures. The first
natural one is certainly the uniform measure, i.e., a partition is chosen at random
with equal probability. Let $P_{u,n}$ be the uniform measure defined by
\[
P_{u,n}(\lambda) = \frac{1}{p(n)}, \qquad \lambda \in \mathcal{P}_n, \tag{4.9}
\]
where the subscript u stands for uniform. The primary goal of this chapter
is to study the asymptotic behaviour of a typical partition under $(\mathcal{P}_n, P_{u,n})$
as its size n → ∞. The first remarkable feature is that a typical Young
diagram, properly scaled, has a limit shape. To be precise, define
\[
\varphi_\lambda(t) = \sum_{k \ge t} r_k, \qquad t \ge 0. \tag{4.10}
\]
In particular, $\varphi_\lambda(i) = \lambda'_i$, and $\varphi_\lambda(t)$ is a nonincreasing step function such
that $\int_0^{\infty} \varphi_\lambda(t)\, dt = n$.
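In concrete terms, $\varphi_\lambda(i)$ counts the parts of λ of size at least i, i.e., the i-th part of the conjugate partition. A tiny sketch (helper names are ours):

```python
# conjugate-partition profile phi_lambda for a concrete partition
lam = [5, 3, 3, 1]                                   # a partition of 12

def phi(lam, t):
    return sum(1 for part in lam if part >= t)       # number of parts >= t

conj = [phi(lam, i) for i in range(1, lam[0] + 1)]   # the conjugate partition lambda'
assert conj == [4, 3, 3, 1, 1]
assert sum(conj) == sum(lam) == 12                   # area of the Young diagram
```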
Theorem 4.2. Under $(\mathcal{P}_n, P_{u,n})$ we have as n → ∞
\[
\sup_{a \le t \le b} \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) - \Psi(t) \Big| \xrightarrow{P} 0, \tag{4.11}
\]
where $0 < a < b < \infty$ and
\[
\Psi(t) = \int_t^{\infty} \frac{e^{-cu}}{1 - e^{-cu}}\, du = -\frac{1}{c} \log\big( 1 - e^{-ct} \big). \tag{4.12}
\]
We remark that the curve Ψ(t) was first conjectured by Temperley (1952),
who studied the number of ways in which a given amount of energy can be
shared out among the different possible states of an assembly. The rigorous
argument was given by Vershik (1994, 1996). In fact, Vershik and his school
have been recognized as the first group to start a systematic study
of limit shapes of various random geometric objects. We also note that the
curve Ψ(t) has two asymptotes: s = 0 and t = 0, see Figure 4.1.
Fig. 4.1 Temperley-Vershik curve
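The two expressions for Ψ(t) in (4.12) can be cross-checked numerically with a crude Riemann sum; a small sketch (step size and tolerance are ours):

```python
import math

c = math.pi / math.sqrt(6)

def Psi(t):
    # closed form in (4.12)
    return -math.log(1 - math.exp(-c * t)) / c

# left Riemann sum of the integral form of (4.12) on [t, 40]; the tail beyond 40
# is of order e^{-40c} and therefore negligible at this tolerance
t, h = 0.5, 1e-4
steps = int((40 - t) / h)
integral = h * sum(
    math.exp(-c * (t + i * h)) / (1 - math.exp(-c * (t + i * h)))
    for i in range(steps)
)
assert abs(integral - Psi(t)) < 1e-3
```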
From the probabilistic viewpoint, Theorem 4.2 is a kind of weak law


of large numbers. Next, it is natural to ask what the fluctuation is of a
typical Young diagram around the limit shape. That is the problem of the
second order fluctuation. It turns out that we need to deal with two cases
separately: at the edge and in the bulk. Let us first treat the edge case of
ϕλ (k), k ≥ 0.
Note that λ and λ′ have the same likelihood; it then follows by duality that
\[
\varphi_\lambda(k) = \lambda'_k \stackrel{d}{=} \lambda_k, \qquad k \ge 1.
\]
Hence it is sufficient to study the asymptotic distribution of the λk ’s, the


largest parts of λ. Let us begin with the following deep and interesting
result, due to Erdös and Lehner (1941).

Theorem 4.3. As n → ∞, we have for each x ∈ R
\[
P_{u,n}\Big( \frac{c}{\sqrt{n}}\, \lambda_1 - \log \frac{\sqrt{n}}{c} \le x \Big) \longrightarrow e^{-e^{-x}}. \tag{4.13}
\]
Note that the limit distribution in the right hand side of (4.13) is the
famous Gumbel distribution, which appears widely in the study of extremal
statistics for independent random variables.
Besides, one can further consider the joint asymptotic distribution of
the first m largest parts. Fristedt (1993) obtained the following.

Theorem 4.4. As n → ∞, we have for $x_1 > x_2 > \cdots > x_m$
\[
\lim_{n\to\infty} P_{u,n}\Big( \frac{c}{\sqrt{n}}\, \lambda_i - \log \frac{\sqrt{n}}{c} \le x_i,\ 1 \le i \le m \Big)
= \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_m} p_0(x_1) \prod_{i=2}^{m} p(x_{i-1}, x_i)\, dx_1 \cdots dx_m, \tag{4.14}
\]

where p0 and p are defined as follows


−x
p0 (x) = e−e −x
, x∈R
and
( −x
−e−y −y
ee , x > y,
p(x, y) =
0, x ≤ y.

To understand the limit (4.14), we remark the following nice fact. Let
η1 , η2 , · · · be a sequence of random variables with (4.14) as their joint
distribution functions, then for x1 > x2 > · · · > xm

P ηm = xm ηm−1 = xm−1 , · · · , η1 = x1 = p(xm−1 , xm ).
Hence the ηk ’s form a Markov chain with p(x, y) as the transition density.

Next, let us turn to the bulk case, i.e., treat ϕλ ( nt) where 0 < t < ∞.
Define
1 √ √ 
Xn (t) = 1/4 ϕλ ( nt) − nΨ(t) , t > 0.
n
An interesting result is the following central limit theorem due to Pittel
(1997).
Theorem 4.5. Under $(\mathcal{P}_n, P_{u,n})$ we have as n → ∞
\[
X_n \Rightarrow X \tag{4.15}
\]
in terms of finite dimensional distributions. Here X(t), t > 0, is a centered
Gaussian process with the covariance structure
\[
\mathrm{Cov}\big( X(t_1), X(t_2) \big) = \sigma_{t_2}^2 - s_{t_1} s_{t_2}, \qquad t_1 \le t_2, \tag{4.16}
\]
where
\[
\sigma_t^2 = \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du = \frac{e^{-ct}}{c(1 - e^{-ct})}
\]
and
\[
s_t = \int_t^{\infty} \frac{u\, e^{-cu}}{(1 - e^{-cu})^2}\, du
= \frac{t\, e^{-ct}}{c(1 - e^{-ct})} - \frac{1}{c^2} \log\big( 1 - e^{-ct} \big).
\]
What happens if $t = t_n$ tends to infinity? It turns out that a similar central
limit theorem holds when $t_n$ grows slowly. In particular, we have

Theorem 4.6. Assume $t_n$, n ≥ 1, is a sequence of positive numbers such
that
\[
t_n \to \infty, \qquad t_n - \frac{1}{2c} \log n \to -\infty. \tag{4.17}
\]
Let
\[
X_n(t_n) = \frac{e^{c t_n/2}}{n^{1/4}} \big( \varphi_\lambda(\sqrt{n}\, t_n) - \sqrt{n}\, \Psi(t_n) \big);
\]
then under $(\mathcal{P}_n, P_{u,n})$, as n → ∞,
\[
X_n(t_n) \xrightarrow{d} N(0, 1). \tag{4.18}
\]

Note that ectn /2 /n1/4 goes to zero under the assumption (4.17). We have
so far seen many interesting probability limit theorems for random uniform
partitions. In the next two sections we shall provide rigorous proofs. A
basic strategy is as follows. First, we will in Section 4.2 construct a larger
probability space (P, Qq ) where 0 < q < 1 is a model parameter, under
which the rk ’s are independent geometric random variables. Thus we can
P
directly apply the classical limit theory to the partial sums k rk . Second,
we will in Section 4.3 transfer to the desired space (Pn , Pu,n ) using the fact
Pu,n is essentially the restriction of Qq to Pn . It is there that we develop
a conditioning argument, which is consistent with the so-called transition
between grand and small ensembles in the physics literature.
Having the convergence of finite dimensional distributions, one might
expect weak convergence of the processes $(X_n(t), t > 0)$. To this end, it
is required to check the uniform tightness condition: for any ε > 0,
\[
\lim_{\delta \to 0} \lim_{n \to \infty} P_{u,n}\Big( \sup_{|t-s| \le \delta} |X_n(t) - X_n(s)| > \varepsilon \Big) = 0.
\]
However, we are not able to find a good way to verify such a condition.
Instead, we shall in Section 4.4 state and prove a weaker stochastic
equicontinuity condition: for any ε > 0,
\[
\lim_{\delta \to 0} \limsup_{n \to \infty} \sup_{|t-s| \le \delta} P_{u,n}\big( |X_n(t) - X_n(s)| > \varepsilon \big) = 0.
\]

This together with Theorem 1.16 immediately implies that a functional central
limit theorem holds for a certain class of integral statistics of $X_n(t)$. We
shall also give two examples at the end of Section 4.4. To conclude this
chapter, we shall briefly discuss generalized multiplicative random partitions
induced by a family of analytic functions.
Throughout the chapter, c1 , c2 , · · · denote positive numeric constants,
whose exact values are not of importance.

4.2 Grand ensembles

In this section we shall study unrestricted random partitions with multiplicative
measures. Let 0 < q < 1, and define the multiplicative measure by
\[
Q_q(\lambda) = \frac{q^{|\lambda|}}{\mathcal{F}(q)}, \qquad \lambda \in \mathcal{P}, \tag{4.19}
\]
where |λ| denotes the size of the partition λ.
Note by (4.19) and (4.1),
\[
\sum_{\lambda \in \mathcal{P}} Q_q(\lambda)
= \frac{1}{\mathcal{F}(q)} \sum_{\lambda \in \mathcal{P}} q^{|\lambda|}
= \frac{1}{\mathcal{F}(q)} \sum_{n=0}^{\infty} \sum_{\lambda \in \mathcal{P}_n} q^n = 1.
\]
Thus we can induce a probability space $(\mathcal{P}, Q_q)$, which is called a grand
ensemble with parameter q. Surprisingly, this $Q_q$ has the following elegant property.
Lemma 4.2. Under $(\mathcal{P}, Q_q)$, $r_1, r_2, \cdots$ is a sequence of independent geometric
random variables. In particular, we have
\[
Q_q(\lambda \in \mathcal{P} : r_k = j) = (1 - q^k)\, q^{jk}, \qquad j = 0, 1, 2, \cdots.
\]

Proof. The proof is easy. Indeed, note $\lambda = (1^{r_1}, 2^{r_2}, \cdots)$, so $|\lambda| = \sum_{k=1}^{\infty} k r_k$. Thus we have
\[
Q_q(\lambda) = \prod_{k=1}^{\infty} q^{k r_k} (1 - q^k),
\]
as desired. □
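Lemma 4.2 also gives a simple way to sample from the grand ensemble: draw each multiplicity $r_k$ independently as a geometric random variable. A minimal sketch (cutoff and seed choices are ours):

```python
import math
import random

def sample_grand(n, seed=0):
    # Draw a partition lambda from Q_{q_n} with q_n = e^{-c/sqrt(n)}: by Lemma 4.2
    # the multiplicities r_k are independent with P(r_k = j) = (1 - q^k) q^{jk}.
    rng = random.Random(seed)
    c = math.pi / math.sqrt(6)
    q = math.exp(-c / math.sqrt(n))
    lam, k = [], 1
    while q ** k > 1e-12:                 # truncate once P(r_k > 0) is negligible
        r = 0
        while rng.random() < q ** k:      # P(r_k >= j+1 | r_k >= j) = q^k
            r += 1
        lam += [k] * r
        k += 1
    return sorted(lam, reverse=True)

# |lambda| concentrates around n (fluctuations of order n^{3/4}, cf. Theorem 4.7)
sizes = [sum(sample_grand(400, seed=s)) for s in range(50)]
assert 300 < sum(sizes) / 50 < 500
```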
This lemma will play a fundamentally important role in the study of random
uniform partitions. It will enable us to apply the classical limit theorems
for sums of independent random variables. Denote by $E_q$ the expectation with
respect to $Q_q$. As a direct consequence, we have
\[
E_q r_k = \frac{q^k}{1 - q^k}, \qquad \mathrm{Var}_q(r_k) = \frac{q^k}{(1 - q^k)^2}
\]
and
\[
E_q z^{r_k} = \frac{1 - q^k}{1 - z q^k}.
\]
Under $(\mathcal{P}, Q_q)$, the size |λ| is itself a random variable. Let $q_n = e^{-c/\sqrt{n}}$;
then it is easy to see
\[
\mu_n := E_{q_n} |\lambda| = \sum_{k=1}^{\infty} k\, E_{q_n} r_k
= \sum_{k=1}^{\infty} \frac{k\, q_n^k}{1 - q_n^k}
= \sum_{k=1}^{\infty} \frac{k\, e^{-ck/\sqrt{n}}}{1 - e^{-ck/\sqrt{n}}}
= n \int_0^{\infty} \frac{u\, e^{-cu}}{1 - e^{-cu}}\, du + O(\sqrt{n})
= n + O(\sqrt{n}), \tag{4.20}
\]
where in the last step we used the fact
\[
\int_0^{\infty} \frac{u\, e^{-cu}}{1 - e^{-cu}}\, du = 1.
\]
Similarly,
\[
\sigma_n^2 := \mathrm{Var}_{q_n}(|\lambda|) = \sum_{k=1}^{\infty} k^2\, \mathrm{Var}_{q_n}(r_k)
= \sum_{k=1}^{\infty} \frac{k^2 q_n^k}{(1 - q_n^k)^2}
= \sum_{k=1}^{\infty} \frac{k^2 e^{-ck/\sqrt{n}}}{(1 - e^{-ck/\sqrt{n}})^2}
= n^{3/2} \int_0^{\infty} \frac{u^2 e^{-cu}}{(1 - e^{-cu})^2}\, du + O(n)
= \frac{2}{c}\, n^{3/2} + O(n). \tag{4.21}
\]
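The asymptotics (4.20) and (4.21) are easy to verify numerically by truncating the series; a small sketch (truncation point and tolerance constants are ours):

```python
import math

n = 10_000
c = math.pi / math.sqrt(6)
q = math.exp(-c / math.sqrt(n))

# truncate the series once q^k is negligible
K = 200 * int(math.sqrt(n))
mu = sum(k * q ** k / (1 - q ** k) for k in range(1, K))
var = sum(k * k * q ** k / (1 - q ** k) ** 2 for k in range(1, K))

assert abs(mu - n) < 5 * math.sqrt(n)            # (4.20): mu_n = n + O(sqrt(n))
assert abs(var - (2 / c) * n ** 1.5) < 50 * n    # (4.21): sigma_n^2 = (2/c) n^{3/2} + O(n)
```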
Fristedt (1993) obtained the following refinement.

Theorem 4.7. Under $(\mathcal{P}, Q_{q_n})$, |λ| normally concentrates around n.
Namely,
\[
\frac{|\lambda| - n}{n^{3/4}} \xrightarrow{d} N\Big( 0, \frac{2}{c} \Big).
\]
Moreover, we have the local limit theorem
\[
Q_{q_n}\big( |\lambda| = n \big) = \frac{1}{(96)^{1/4}\, n^{3/4}} \big( 1 + o(1) \big). \tag{4.22}
\]
Proof. By virtue of (4.20) and (4.21), it suffices to prove
\[
\frac{|\lambda| - \mu_n}{\sigma_n} \xrightarrow{d} N(0, 1). \tag{4.23}
\]
In turn, this will be proved using characteristic functions below. Let
\[
f_n(x) = E_{q_n} \exp\Big( \frac{ix|\lambda|}{\sigma_n} \Big).
\]
Then it follows from Lemma 4.2 that
\[
f_n(x) = E_{q_n} \exp\Big( \frac{ix}{\sigma_n} \sum_{k=1}^{\infty} k r_k \Big)
= \prod_{k=1}^{\infty} \frac{1 - q_n^k}{1 - q_n^k e^{ikx/\sigma_n}}.
\]
Observe the following elementary Taylor formulas:
\[
\log(1 + z) = z - \frac{z^2}{2} + O(|z|^3), \qquad |z| \to 0,
\]
and
\[
e^{ix} = 1 + ix - \frac{x^2}{2} + O(|x|^3), \qquad |x| \to 0.
\]
We have
\[
\log\Big( 1 - \frac{q_n^k (e^{ikx/\sigma_n} - 1)}{1 - q_n^k} \Big)
= -\frac{q_n^k (e^{ikx/\sigma_n} - 1)}{1 - q_n^k}
- \frac{1}{2}\, \frac{q_n^{2k} (e^{ikx/\sigma_n} - 1)^2}{(1 - q_n^k)^2}
+ O\Big( \frac{q_n^{3k} |e^{ikx/\sigma_n} - 1|^3}{(1 - q_n^k)^3} \Big)
\]
\[
= -\frac{ix}{\sigma_n} \cdot \frac{k q_n^k}{1 - q_n^k}
+ \frac{x^2}{2\sigma_n^2} \cdot \frac{k^2 q_n^k}{(1 - q_n^k)^2}
+ O\Big( \frac{k^3 q_n^{3k}}{\sigma_n^3 (1 - q_n^k)^3} \Big).
\]
Taking summation over k yields
\[
\sum_{k=1}^{\infty} \log\Big( 1 - \frac{q_n^k (e^{ikx/\sigma_n} - 1)}{1 - q_n^k} \Big)
= -\frac{ix}{\sigma_n} \sum_{k=1}^{\infty} \frac{k q_n^k}{1 - q_n^k}
+ \frac{x^2}{2\sigma_n^2} \sum_{k=1}^{\infty} \frac{k^2 q_n^k}{(1 - q_n^k)^2}
+ O\Big( \frac{1}{\sigma_n^3} \sum_{k=1}^{\infty} \frac{k^3 q_n^{3k}}{(1 - q_n^k)^3} \Big). \tag{4.24}
\]
It follows by (4.21) that
\[
\frac{1}{\sigma_n^3} \sum_{k=1}^{\infty} \frac{k^3 q_n^{3k}}{(1 - q_n^k)^3} = O\big( n^{-1/4} \big), \tag{4.25}
\]
which implies that the above Taylor expansions are legitimate.
Therefore we see from (4.24) and (4.25) that
\[
\log f_n(x) = -\sum_{k=1}^{\infty} \log \frac{1 - q_n^k e^{ikx/\sigma_n}}{1 - q_n^k}
= -\sum_{k=1}^{\infty} \log\Big( 1 - \frac{q_n^k (e^{ikx/\sigma_n} - 1)}{1 - q_n^k} \Big)
= \frac{ix\mu_n}{\sigma_n} - \frac{x^2}{2} + o(1).
\]
We now conclude the desired assertion (4.23).
Next we turn to the proof of (4.22). To this end, we use the inversion
formula for lattice random variables to get
\[
Q_{q_n}(|\lambda| = n) = Q_{q_n}\Big( \frac{|\lambda|}{\sigma_n} = \frac{n}{\sigma_n} \Big)
= \frac{1}{2\pi\sigma_n} \int_{-\pi\sigma_n}^{\pi\sigma_n} e^{-ixn/\sigma_n} f_n(x)\, dx. \tag{4.26}
\]
Let $\rho(n) = \pi\sigma_n^{1/3}$. Then for $|x| < \rho(n)$,
\[
\log|f_n(x)| = -\frac{1}{2} \sum_{k=1}^{\infty} \log\Big( 1 + \frac{2 q_n^k (1 - \cos kx/\sigma_n)}{(1 - q_n^k)^2} \Big)
\le -\frac{1}{2} \sum_{\sigma_n^{2/3}/2 < k \le \sigma_n^{2/3}} \log\Big( 1 + \frac{c_1 x^2}{\sigma_n^{2/3}} \Big)
\le -c_2 x^2 \sigma_n^{-2/3} \sum_{\sigma_n^{2/3}/2 < k \le \sigma_n^{2/3}} 1
\le -c_3 x^2.
\]
Thus we get
\[
|f_n(x)| \le e^{-c_3 x^2}, \qquad |x| \le \rho(n).
\]
For $|x| > \rho(n)$, let $S_x = \big\{ k : k \le \sigma_n^{2/3},\ \cos kx/\sigma_n \le 0 \big\}$. Then
\[
\log|f_n(x)| \le -\frac{1}{2} \sum_{k \in S_x} \log\Big( 1 + \frac{2 q_n^k (1 - \cos kx/\sigma_n)}{(1 - q_n^k)^2} \Big)
\le -c_4 \sigma_n^{2/3},
\]
which implies
\[
\sup_{\rho(n) < |x| \le \pi\sigma_n} |f_n(x)| \le e^{-c_4 \sigma_n^{2/3}} = o\big( \sigma_n^{-1} \big). \tag{4.27}
\]
Next we estimate the integral on the right hand side of (4.26).
Split the interval $(-\pi\sigma_n, \pi\sigma_n)$ into two disjoint subsets, $\{|x| \le \rho(n)\}$ and
$\{\rho(n) < |x| \le \pi\sigma_n\}$, and evaluate the integral over each one. Since
$e^{-ixn/\sigma_n} f_n(x) \to e^{-x^2/2}$, by the dominated convergence theorem,
\[
\int_{|x| \le \rho(n)} e^{-ixn/\sigma_n} f_n(x)\, dx \longrightarrow \int_{-\infty}^{\infty} e^{-x^2/2}\, dx = \sqrt{2\pi}.
\]
Also, we have by (4.27)
\[
\int_{\rho(n) < |x| \le \pi\sigma_n} e^{-ixn/\sigma_n} f_n(x)\, dx = o(1).
\]
In combination, we get the desired assertion. □
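The local limit theorem (4.22) can be checked numerically at a moderate n: $Q_{q_n}(|\lambda| = n) = p(n) q_n^n / \mathcal{F}(q_n)$ is computable exactly from the pentagonal recurrence and a truncated product. A sketch (truncation and tolerance are ours):

```python
import math

def partition_numbers(N):
    # exact p(n) via Euler's pentagonal number recurrence
    p = [1] + [0] * N
    for m in range(1, N + 1):
        k, sign, total = 1, 1, 0
        while k * (3 * k - 1) // 2 <= m:
            total += sign * p[m - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= m:
                total += sign * p[m - k * (3 * k + 1) // 2]
            k, sign = k + 1, -sign
        p[m] = total
    return p

n = 2000
c = math.pi / math.sqrt(6)
q = math.exp(-c / math.sqrt(n))
p = partition_numbers(n)

# log F(q) by direct summation of -log(1 - q^k), truncated once q^k is negligible
logF = -sum(math.log1p(-q ** k) for k in range(1, 200 * int(math.sqrt(n))))
logQ = math.log(p[n]) + n * math.log(q) - logF      # log Q_{q_n}(|lambda| = n), exact
target = -math.log(96) / 4 - 0.75 * math.log(n)     # log of the right side of (4.22)
assert abs(logQ - target) < 0.3
```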

Theorem 4.8. Under $(\mathcal{P}, Q_{q_n})$ we have as n → ∞
\[
\sup_{a \le t \le b} \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) - \Psi(t) \Big| \xrightarrow{P} 0, \tag{4.28}
\]
where $0 < a < b < \infty$.
Proof. We first prove the convergence in (4.28) for each fixed t > 0.
Indeed, it follows by (4.20) that
\[
E_{q_n} \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t)
= \frac{1}{\sqrt{n}} \sum_{k \ge \sqrt{n} t} E_{q_n} r_k
= \frac{1}{\sqrt{n}} \sum_{k \ge \sqrt{n} t} \frac{q_n^k}{1 - q_n^k}
= \int_t^{\infty} \frac{e^{-cu}}{1 - e^{-cu}}\, du + o(1) = \Psi(t) + o(1).
\]
Similarly, by (4.21),
\[
\mathrm{Var}_{q_n}\Big( \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) \Big) = O\big( n^{-1/2} \big).
\]
Therefore, according to the Markov inequality, we immediately have
\[
\frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) - \Psi(t) \xrightarrow{P} 0. \tag{4.29}
\]
Turn to the uniform convergence. Fix $0 < a < b < \infty$. For any ε > 0, there
is an m ≥ 0 and $a = t_0 < t_1 < \cdots < t_m < t_{m+1} = b$ such that
\[
\max_{0 \le i \le m} |\Psi(t_i) - \Psi(t_{i+1})| \le \varepsilon.
\]
Also, by virtue of the monotonicity of $\varphi_\lambda$, we have
\[
\sup_{a \le t \le b} \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) - \Psi(t) \Big|
\le 3 \max_{0 \le i \le m} \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t_i) - \Psi(t_i) \Big|
+ \max_{0 \le i \le m} |\Psi(t_i) - \Psi(t_{i+1})|. \tag{4.30}
\]
Hence it follows from (4.29) and (4.30) that
\[
Q_{q_n}\Big( \sup_{a \le t \le b} \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t) - \Psi(t) \Big| > 4\varepsilon \Big)
\le m \max_{0 \le i \le m} Q_{q_n}\Big( \Big| \frac{1}{\sqrt{n}}\, \varphi_\lambda(\sqrt{n}\, t_i) - \Psi(t_i) \Big| > \varepsilon \Big)
\to 0, \qquad n \to \infty.
\]
The proof is complete. □

Theorem 4.9. For any x ∈ R, we have as n → ∞
\[
Q_{q_n}\Big( \frac{c}{\sqrt{n}}\, \lambda_1 - \log \frac{\sqrt{n}}{c} \le x \Big) \longrightarrow e^{-e^{-x}}. \tag{4.31}
\]
Proof. Let $A_{n,x} = \frac{\sqrt{n}}{c}\big( \log\frac{\sqrt{n}}{c} + x \big)$. Since $\lambda_1$ is the largest part of
λ, it is easy to see
\[
Q_{q_n}\Big( \frac{c}{\sqrt{n}}\, \lambda_1 - \log \frac{\sqrt{n}}{c} \le x \Big)
= Q_{q_n}\big( r_k = 0,\ \forall k \ge A_{n,x} \big).
\]
It follows by Lemma 4.2 that
\[
Q_{q_n}\Big( \bigcap_{k \ge A_{n,x}} \{r_k = 0\} \Big) = \prod_{k \ge A_{n,x}} \big( 1 - q_n^k \big). \tag{4.32}
\]
For each x ∈ R, $q_n^k \to 0$ whenever $k \ge A_{n,x}$. Hence we have as n → ∞,
\[
\sum_{k \ge A_{n,x}} \log\big( 1 - q_n^k \big)
= -(1 + o(1)) \sum_{k \ge A_{n,x}} q_n^k
= -(1 + o(1))\, \frac{q_n^{\lceil A_{n,x} \rceil}}{1 - q_n}
\to -e^{-x},
\]
which together with (4.32) implies (4.31). □
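The tail computation at the end of this proof can be checked numerically for a large but finite n; a sketch (the values of n, x and the cutoff are our own choices):

```python
import math

n = 10 ** 8
c = math.pi / math.sqrt(6)
q = math.exp(-c / math.sqrt(n))
x = 0.7
A = (math.sqrt(n) / c) * (math.log(math.sqrt(n) / c) + x)

# sum log(1 - q^k) over k >= A until the remaining terms are negligible
s, qk = 0.0, q ** math.ceil(A)
while qk > 1e-18:
    s += math.log1p(-qk)
    qk *= q
assert abs(s - (-math.exp(-x))) < 0.01   # the limit -e^{-x} in the proof
```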
Theorem 4.10. For $x_1 > \cdots > x_m$,
\[
\lim_{n\to\infty} Q_{q_n}\Big( \frac{c}{\sqrt{n}}\, \lambda_i - \log\frac{\sqrt{n}}{c} \le x_i,\ 1 \le i \le m \Big)
= \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_m} p_0(x_1) \prod_{i=2}^{m} p(x_{i-1}, x_i)\, dx_1 \cdots dx_m,
\]
where $p_0$ and p are as in Theorem 4.4.

Proof. For simplicity of notation, we only prove the statement in the
case m = 2. Let $A_{n,x} = \frac{\sqrt{n}}{c}\big( \log\frac{\sqrt{n}}{c} + x \big)$. Then it is easy to see
\[
Q_{q_n}\Big( \frac{c}{\sqrt{n}}\, \lambda_i - \log\frac{\sqrt{n}}{c} \le x_i,\ i = 1, 2 \Big)
= Q_{q_n}\Big( \bigcap_{k > A_{n,x_2}} \{r_k = 0\} \Big)
+ \sum_{A_{n,x_2} < j \le A_{n,x_1}} Q_{q_n}\Big( \{r_j = 1\} \cap \bigcap_{k > A_{n,x_2}, k \ne j} \{r_k = 0\} \Big). \tag{4.33}
\]
By Lemma 4.2, it follows for each $A_{n,x_2} < j \le A_{n,x_1}$ that
\[
Q_{q_n}\Big( \{r_j = 1\} \cap \bigcap_{k > A_{n,x_2}, k \ne j} \{r_k = 0\} \Big)
= q_n^j \prod_{k > A_{n,x_2}} \big( 1 - q_n^k \big).
\]
A simple calculation shows
\[
\sum_{A_{n,x_2} < j \le A_{n,x_1}} q_n^j
= q_n^{\lceil A_{n,x_2} \rceil}\, \frac{1 - q_n^{\lceil A_{n,x_1} \rceil - \lceil A_{n,x_2} \rceil}}{1 - q_n}
\to e^{-x_2} - e^{-x_1}, \qquad n \to \infty.
\]
Also, according to the proof of Theorem 4.9,
\[
\lim_{n\to\infty} Q_{q_n}\Big( \bigcap_{k > A_{n,x}} \{r_k = 0\} \Big) = e^{-e^{-x}}, \qquad x \in \mathbb{R}.
\]
Therefore it follows from (4.33) that
\[
\lim_{n\to\infty} Q_{q_n}\Big( \frac{c}{\sqrt{n}}\, \lambda_i - \log\frac{\sqrt{n}}{c} \le x_i,\ i = 1, 2 \Big)
= e^{-e^{-x_2}} + \big( e^{-x_2} - e^{-x_1} \big)\, e^{-e^{-x_2}}.
\]
This is the integral of $p_0(x_1) p(x_1, x_2)$ over the region $\{(x_1, x_2) : x_1 > x_2\}$.
The proof is complete. □

Theorem 4.11. Under $(\mathcal{P}, Q_{q_n})$,
\[
X_n \Rightarrow G
\]
in terms of finite dimensional distributions. Here G is a centered Gaussian
process with the covariance structure
\[
\mathrm{Cov}\big( G(s), G(t) \big) = \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du, \qquad s < t.
\]
Proof. First, one can prove, in a way completely similar to that of Theorem
4.7, that for each t > 0
\[
X_n(t) \xrightarrow{d} G(t).
\]
Next we turn to the 2-dimensional case. Assume $0 < s < t < \infty$. Then for
any $x_1$ and $x_2$,
\[
x_1 X_n(s) + x_2 X_n(t)
= \frac{x_1}{n^{1/4}} \Big( \sum_{\sqrt{n} s \le k < \sqrt{n} t} r_k - \sqrt{n}\big( \Psi(s) - \Psi(t) \big) \Big)
+ \frac{x_1 + x_2}{n^{1/4}} \Big( \sum_{k \ge \sqrt{n} t} r_k - \sqrt{n}\, \Psi(t) \Big). \tag{4.34}
\]
Since the two summands on the right hand side of (4.34) are independent and
each converges weakly to a normal random variable, $x_1 X_n(s) + x_2 X_n(t)$
must converge weakly to a normal random variable with variance $\sigma_{s,t}^2$ given
by
\[
x_1^2 \int_s^t \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du + (x_1 + x_2)^2 \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du
\]
\[
= x_1^2 \int_s^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du
+ x_2^2 \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du
+ 2 x_1 x_2 \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du.
\]
Therefore
\[
\big( X_n(s), X_n(t) \big) \xrightarrow{d} \big( G(s), G(t) \big),
\]
where $(G(s), G(t))$ is jointly normally distributed with covariance
\[
\mathrm{Cov}\big( G(s), G(t) \big) = \int_t^{\infty} \frac{e^{-cu}}{(1 - e^{-cu})^2}\, du.
\]
The m-dimensional case can be proved analogously. □
To conclude this section, we investigate the asymptotic behaviour when $t_n$
tends to ∞.

Theorem 4.12. Assume that $t_n$ is a sequence of positive numbers such
that
\[
t_n \to \infty, \qquad t_n - \frac{1}{2c} \log n \to -\infty. \tag{4.35}
\]
Then under $(\mathcal{P}, Q_{q_n})$,
\[
\frac{e^{c t_n/2}}{n^{1/4}} \big( \varphi_\lambda(\sqrt{n}\, t_n) - \sqrt{n}\, \Psi(t_n) \big) \xrightarrow{d} N(0, 1). \tag{4.36}
\]

Proof. First, compute mean and variance of ϕλ ( ntn ). According to
the definition of ϕλ ,
√ X X qnk
Eqn ϕλ ( ntn ) = Eqn rk =
√ √ 1 − qnk
k≥ ntn k≥ ntn
Z ∞
√ e−cu
du 1 + O(n−1/2 )

= n −cu
tn 1 − e

= nΨ(tn ) 1 + O(n−1/2 )


and
√ X X qnk
V arqn ϕλ ( ntn ) = V arqn (rk ) =
√ √ (1 − qnk )2
k≥ ntn k≥ ntn
Z ∞
√ e−cu
du 1 + O(n−1/2 )

= n −cu 2
tn (1 − e )
√ −ctn −1/2

= ne 1 + O(n ) .

The condition (4.35) guarantees ne−ctn → ∞. Next, we verify the Linde-
berg condition. For any ε > 0,
X X X X
j 2 qnjk (1 − qnk ) ≤ j 2 qnjk
√ √
k≥ ntn j≥εσn k≥ ntn j≥εσn
X X
= j2 qnjk

j≥εσn k≥ ntn

jd ntn e
X q n
= j2 .
j≥εσn
1 − qnj

It is now easy to see



jd ntn e
1 X 2 qn
j →0
σn2 1 − qnj
j≥εσn

under the condition (4.35). So the is satisfied, and we conclude (4.36). 

4.3 Small ensembles

This section is devoted to the proofs of the main results given in the Introduction. A basic strategy is to use a conditioning argument on the event $\{|\lambda|=n\}$. The following lemma, due to Vershik (1996), characterizes the relation between grand ensembles and small ensembles.

Lemma 4.3. For any $0<q<1$ and $n\ge 0$, we have:

(i) $P_{u,n}$ is the conditional probability measure induced on $\mathcal{P}_n$ by $Q_q$, i.e., $Q_q|_{\mathcal{P}_n} = P_{u,n}$;
(ii) $Q_q$ is a convex combination of the measures $P_{u,n}$, i.e.,
\[
Q_q = \frac{1}{\mathcal{F}(q)}\sum_{n=0}^{\infty} p(n)q^n P_{u,n}.
\]

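Part (i) of the lemma underlies a standard exact sampler for $P_{u,n}$: draw the multiplicities $r_k$ independently as geometric variables under $Q_q$ and accept the outcome only when $|\lambda|=n$. A minimal sketch, assuming the choice $q=e^{-c/\sqrt n}$ with $c=\pi/\sqrt 6$ (any fixed $0<q<1$ would also be exact, just slower):

```python
import math
import random

def sample_uniform_partition(n, rng=None):
    """Exact uniform sampling from P_n via Lemma 4.3(i): under Q_q the
    multiplicities r_k are independent with P(r_k = j) = q^{jk}(1 - q^k),
    and conditioning on sum k*r_k = n gives the uniform measure P_{u,n}."""
    rng = rng or random.Random()
    q = math.exp(-math.pi / math.sqrt(6 * n))
    while True:
        total, parts = 0, []
        for k in range(1, n + 1):
            p = q ** k
            while rng.random() < p:  # geometric draw of r_k
                parts.append(k)
                total += k
                if total > n:
                    break
            if total > n:
                break
        if total == n:
            return sorted(parts, reverse=True)
```

The acceptance probability $Q_{q_n}(|\lambda|=n)$ is of order $n^{-3/4}$ (cf. (4.45)), so the expected number of rejection rounds grows only polynomially in $n$.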
Let $W_n(\lambda)$ be a function of $\lambda$ taking values in $\mathbb{R}^{b_n}$, where $b_n<\infty$ or $b_n=\infty$. $W_n$ can be regarded as a random variable on $(\mathcal{P}, Q_{q_n})$. When restricted to $\mathcal{P}_n$, $W_n$ is also a random variable on $(\mathcal{P}_n, P_{u,n})$. Denote by $Q_{q_n}\circ W_n^{-1}$ and $P_{u,n}\circ W_n^{-1}$ the measures induced by $W_n$, respectively. The total variation distance is defined by
\[
d_{TV}\big(Q_{q_n}\circ W_n^{-1}, P_{u,n}\circ W_n^{-1}\big) = \sup_{B\subset \mathbb{R}^{b_n}} \big|Q_{q_n}\circ W_n^{-1}(B) - P_{u,n}\circ W_n^{-1}(B)\big|.
\]
Lemma 4.4. If there exists a sequence of subsets $B_n\subseteq\mathbb{R}^{b_n}$ such that
(i)
\[
Q_{q_n}\circ W_n^{-1}(B_n) \to 1, \tag{4.37}
\]
(ii)
\[
\sup_{w_n\in B_n}\Big|\frac{Q_{q_n}(|\lambda|=n \mid W_n=w_n)}{Q_{q_n}(|\lambda|=n)} - 1\Big| \to 0, \tag{4.38}
\]
then it follows that
\[
d_{TV}\big(Q_{q_n}\circ W_n^{-1}, P_{u,n}\circ W_n^{-1}\big) \to 0, \qquad n\to\infty. \tag{4.39}
\]

Proof. Observe that for any $B\subseteq\mathbb{R}^{b_n}$,
\[
Q_{q_n}\circ W_n^{-1}(B) = Q_{q_n}\circ W_n^{-1}(B\cap B_n) + Q_{q_n}\circ W_n^{-1}(B\cap B_n^c)
\]
and
\[
P_{u,n}\circ W_n^{-1}(B) = P_{u,n}\circ W_n^{-1}(B\cap B_n) + P_{u,n}\circ W_n^{-1}(B\cap B_n^c).
\]
Since $Q_{q_n}\circ W_n^{-1}(B_n^c)\to 0$ by (4.37), we need only estimate
\[
\big|Q_{q_n}\circ W_n^{-1}(B\cap B_n) - Q_{q_n}\circ W_n^{-1}\big(B\cap B_n \,\big|\, |\lambda|=n\big)\big| \le \sum_{w_n\in B_n}\big|Q_{q_n}(W_n=w_n) - Q_{q_n}\big(W_n=w_n \,\big|\, |\lambda|=n\big)\big|
\]
\[
= \sum_{w_n\in B_n} Q_{q_n}(W_n=w_n)\,\Big|\frac{Q_{q_n}\big(|\lambda|=n \,\big|\, W_n=w_n\big)}{Q_{q_n}(|\lambda|=n)} - 1\Big|. \tag{4.40}
\]
It follows from (4.38) that the right-hand side of (4.40) tends to 0. Thus we conclude the desired assertion (4.39). $\square$

Lemma 4.5. Assume that $K_n$ is a sequence of sets of positive integers such that
\[
\sum_{k\in K_n}\frac{k^2 q_n^k}{(1-q_n^k)^2} = o\big(n^{3/2}\big). \tag{4.41}
\]
Then for $W_n:\lambda\mapsto(r_k(\lambda), k\in K_n)$,
\[
d_{TV}\big(Q_{q_n}\circ W_n^{-1}, P_{u,n}\circ W_n^{-1}\big) \longrightarrow 0.
\]

Proof. We will construct $B_n$ such that (i) and (ii) of Lemma 4.4 hold. First, observe that there is a sequence $a_n$ such that
\[
\sum_{k\in K_n}\frac{k^2 q_n^k}{(1-q_n^k)^2} = o\big(a_n^2\big), \qquad a_n = o\big(n^{3/4}\big). \tag{4.42}
\]
Define
\[
B_n = \Big\{(x_k, k\in K_n): \Big|\sum_{k\in K_n} k x_k - \sum_{k\in K_n} k E_{q_n} r_k\Big| \le a_n\Big\}.
\]
Then by (4.42) and Chebyshev's inequality,
\[
Q_{q_n}\circ W_n^{-1}\big(B_n^c\big) = Q_{q_n}\Big(\Big|\sum_{k\in K_n} k r_k - \sum_{k\in K_n} k E_{q_n} r_k\Big| > a_n\Big) \le \frac{Var_{q_n}\big(\sum_{k\in K_n} k r_k\big)}{a_n^2} = \frac{1}{a_n^2}\sum_{k\in K_n}\frac{k^2 q_n^k}{(1-q_n^k)^2} \to 0.
\]

It remains to show that
\[
\frac{Q_{q_n}\big(|\lambda|=n \,\big|\, W_n=w_n\big)}{Q_{q_n}(|\lambda|=n)} \to 1 \tag{4.43}
\]
uniformly in $w_n\in B_n$.
Fix $w_n = (x_k, k\in K_n)$. Then by the independence of the $r_k$'s,
\[
Q_{q_n}\big(|\lambda|=n \,\big|\, W_n=w_n\big) = Q_{q_n}\Big(\sum_{k=1}^{\infty} k r_k = n \,\Big|\, r_k=x_k, k\in K_n\Big) = Q_{q_n}\Big(\sum_{k\notin K_n} k r_k = n - \sum_{k\in K_n} k x_k \,\Big|\, r_k=x_k, k\in K_n\Big)
\]
\[
= Q_{q_n}\Big(\sum_{k\notin K_n} k r_k = n - \sum_{k\in K_n} k x_k\Big). \tag{4.44}
\]

It follows from (4.21) and (4.42) that
\[
Var_{q_n}\Big(\sum_{k\notin K_n} k r_k\Big) = \sum_{k=1}^{\infty} k^2 Var_{q_n}(r_k) - \sum_{k\in K_n} k^2 Var_{q_n}(r_k) = \sigma_n^2\big(1+o(1)\big) \to \infty.
\]
Hence, as in Theorem 4.7, one can prove that under $(\mathcal{P}, Q_{q_n})$
\[
\frac{\sum_{k\notin K_n} k(r_k - E_{q_n} r_k)}{\sqrt{Var_{q_n}\big(\sum_{k\notin K_n} k r_k\big)}} \stackrel{d}{\longrightarrow} N(0,1)
\]
and so
\[
\frac{1}{\sigma_n}\sum_{k\notin K_n} k\big(r_k - E_{q_n} r_k\big) \stackrel{d}{\longrightarrow} N(0,1).
\]
Note that
\[
\frac{1}{\sigma_n}\Big(n - \sum_{k=1}^{\infty} k E_{q_n} r_k - \sum_{k\in K_n} k\big(x_k - E_{q_n} r_k\big)\Big) \to 0
\]
uniformly in $(x_k, k\in K_n)\in B_n$. Then using the inversion formula as in Theorem 4.7, we have
\[
Q_{q_n}\Big(\sum_{k\notin K_n} k r_k = n - \sum_{k\in K_n} k x_k\Big) = Q_{q_n}\Big(\sum_{k\notin K_n} k\big(r_k - E_{q_n} r_k\big) = n - \sum_{k=1}^{\infty} k E_{q_n} r_k - \sum_{k\in K_n} k\big(x_k - E_{q_n} r_k\big)\Big)
\]
\[
= \frac{1}{(96)^{1/4} n^{3/4}}\big(1+o(1)\big). \tag{4.45}
\]
Combining (4.44), (4.45) and (4.22) yields (4.43), as desired. $\square$
Now we are ready to prove Theorems 4.3, 4.4 and 4.6.
Proof of Theorem 4.3. As in Theorem 4.9, let $A_{n,x} = \sqrt{n}\big(\log(\sqrt{n}/c) + x\big)/c$. Define $K_n = \{k: k\ge A_{n,x}\}$; then it is easy to see that
\[
\sum_{k\in K_n}\frac{k^2 q_n^k}{(1-q_n^k)^2} = o\big(n^{3/2}\big)
\]
since $A_{n,x}/\sqrt{n}\to\infty$. Hence applying Lemma 4.5 to $W_n:\lambda\mapsto(r_k(\lambda), k\in K_n)$ yields
\[
Q_{q_n}\big(\lambda\in\mathcal{P}: r_k=0, k\in K_n\big) - P_{u,n}\big(\lambda\in\mathcal{P}_n: r_k=0, k\in K_n\big) \to 0.
\]
According to Theorem 4.9, we in turn have for any $x\in\mathbb{R}$
\[
P_{u,n}\Big(\lambda\in\mathcal{P}_n: \frac{c}{\sqrt{n}}\lambda_1 - \log\frac{\sqrt{n}}{c} \le x\Big) = P_{u,n}\big(\lambda\in\mathcal{P}_n: r_k=0, k\in K_n\big) \longrightarrow e^{-e^{-x}}.
\]
The proof is complete. $\square$
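The Gumbel limit in this proof is also visible numerically: under $Q_{q_n}$ the event $\{r_k=0,\ k\ge A_{n,x}\}$ has probability $\prod_{k\ge A_{n,x}}(1-q_n^k)$, which should approach $e^{-e^{-x}}$. A quick sketch with a helper of our own naming:

```python
import math

C = math.pi / math.sqrt(6)

def prob_no_large_parts(n, x):
    """Q_{q_n}(r_k = 0 for all k >= A_{n,x}) = prod_{k >= A_{n,x}} (1 - q_n^k)."""
    q = math.exp(-C / math.sqrt(n))
    k = math.ceil(math.sqrt(n) * (math.log(math.sqrt(n) / C) + x) / C)
    log_p = 0.0
    while True:
        qk = q ** k
        if qk < 1e-18:
            break
        log_p += math.log1p(-qk)
        k += 1
    return math.exp(log_p)
```

For $n = 10^6$, `prob_no_large_parts(n, 0.0)` is close to $e^{-1}\approx 0.368$, matching the limit $e^{-e^{-x}}$ at $x=0$.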
Proof of Theorem 4.4. Similar to the proof of Theorem 4.3. 
Proof of Theorem 4.6. Define $K_n = \{k: k\ge\sqrt{n}\,t_n\}$; then it is easy to see that
\[
\sum_{k\in K_n}\frac{k^2 q_n^k}{(1-q_n^k)^2} = o\big(n^{3/2}\big)
\]
since $t_n\to\infty$. Hence applying Lemma 4.5 to $W_n:\lambda\mapsto(r_k(\lambda), k\in K_n)$ yields
\[
Q_{q_n}\big(\lambda\in\mathcal{P}: W_n\in B\big) - P_{u,n}\big(\lambda\in\mathcal{P}_n: W_n\in B\big) \to 0,
\]
where
\[
B = \Big\{(x_k, k\in K_n): \frac{e^{ct_n/2}}{n^{1/4}}\Big(\sum_{k\in K_n} x_k - \sqrt{n}\,\Psi(t_n)\Big) \le x\Big\}.
\]
We now obtain the desired (4.18) according to Theorem 4.12. $\square$


However, for any fixed $t\ge 0$, the condition (4.41) is not satisfied by $K_n = \{k: k\ge\sqrt{n}\,t\}$. Thus one cannot directly derive Theorem 4.2 or Theorem 4.5 from Lemma 4.5.
The rest of this section is devoted to proving Theorems 4.2 and 4.5 following Pittel (1997); the focus is on the latter, since the former can be proved in a similar and simpler way.
For simplicity of notation, we only consider the two-dimensional case below. Assume $0<t_1<t_2$; we shall prove that for any $x_1, x_2\in\mathbb{R}$,
\[
x_1 X_n(t_1) + x_2 X_n(t_2) \stackrel{d}{\longrightarrow} N\big(0, \sigma_{x_1,x_2}^2\big), \tag{4.46}
\]
where
\[
\sigma_{x_1,x_2}^2 = (x_1, x_2)\,\Sigma_{t_1,t_2}\binom{x_1}{x_2}.
\]
Here $\Sigma_{t_1,t_2}$ is the covariance matrix of $X$ given by (4.16). Indeed, it suffices to prove that (4.46) holds for
\[
\xi_n(x,t) := \frac{x_1}{n^{1/4}}\sum_{k\ge\sqrt{n}\,t_1}\big(r_k - E_{q_n} r_k\big) + \frac{x_2}{n^{1/4}}\sum_{k\ge\sqrt{n}\,t_2}\big(r_k - E_{q_n} r_k\big).
\]

This will in turn be done by proving
\[
E_{u,n}\, e^{\xi_n(x,t)} \to \exp\Big(\frac{\sigma_{x_1,x_2}^2}{2}\Big), \qquad n\to\infty. \tag{4.47}
\]
In doing this, a main ingredient is the following proposition. We need additional notation. Define for any $1\le k_1<k_2<\infty$
\[
m(k_1,k_2) = \sum_{k=k_1}^{k_2-1}\frac{q_n^k}{1-q_n^k}, \qquad \sigma^2(k_1,k_2) = \sum_{k=k_1}^{k_2-1}\frac{q_n^k}{(1-q_n^k)^2},
\]
\[
m(k_2) = \sum_{k=k_2}^{\infty}\frac{q_n^k}{1-q_n^k}, \qquad \sigma^2(k_2) = \sum_{k=k_2}^{\infty}\frac{q_n^k}{(1-q_n^k)^2},
\]
and
\[
s(k_1,k_2) = \sum_{k=k_1}^{k_2-1}\frac{k q_n^k}{(1-q_n^k)^2}, \qquad s(k_2) = \sum_{k=k_2}^{\infty}\frac{k q_n^k}{(1-q_n^k)^2}.
\]
Proposition 4.1. For $u_1, u_2\in\mathbb{R}$,
\[
E_{u,n}\exp\Big(\frac{u_1}{n^{1/4}}\sum_{k_1\le k<k_2}\big(r_k-E_{q_n}r_k\big) + \frac{u_2}{n^{1/4}}\sum_{k\ge k_2}\big(r_k-E_{q_n}r_k\big)\Big)
\]
\[
= \exp\Big(\frac{u_1^2}{2n^{1/2}}\sigma^2(k_1,k_2) + \frac{u_2^2}{2n^{1/2}}\sigma^2(k_2) - \frac{c}{4n^2}\big(u_1 s(k_1,k_2) + u_2 s(k_2)\big)^2\Big)\cdot\big(1+o(1)\big). \tag{4.48}
\]

The proof of Proposition 4.1 will consist of several lemmas. Note that for any $0<q<1$ and $z_1, z_2$,
\[
E_q \prod_{k_1\le k<k_2} z_1^{r_k}\prod_{k\ge k_2} z_2^{r_k} = \prod_{k_1\le k<k_2} E_q z_1^{r_k}\prod_{k\ge k_2} E_q z_2^{r_k} = \prod_{k_1\le k<k_2}\frac{1-q^k}{1-z_1 q^k}\prod_{k\ge k_2}\frac{1-q^k}{1-z_2 q^k}.
\]
On the other hand, it follows by Lemma 4.3 that
\[
E_q \prod_{k_1\le k<k_2} z_1^{r_k}\prod_{k\ge k_2} z_2^{r_k} = \frac{1}{\mathcal{F}(q)}\sum_{n=0}^{\infty} p(n)q^n E_{u,n}\prod_{k_1\le k<k_2} z_1^{r_k}\prod_{k\ge k_2} z_2^{r_k}.
\]

Thus we have for each $0<q<1$
\[
\sum_{n=0}^{\infty} p(n)q^n E_{u,n}\prod_{k_1\le k<k_2} z_1^{r_k}\prod_{k\ge k_2} z_2^{r_k} = \mathcal{F}(q)\prod_{k_1\le k<k_2}\frac{1-q^k}{1-z_1 q^k}\prod_{k\ge k_2}\frac{1-q^k}{1-z_2 q^k} =: F(z;q), \tag{4.49}
\]
where $z=(z_1,z_2)$.
Note that equation (4.49) remains valid for all complex numbers $q$ with $|q|<1$. Hence using the Cauchy integral formula yields
\[
E_{u,n}\prod_{k_1\le k<k_2} z_1^{r_k}\prod_{k\ge k_2} z_2^{r_k} = \frac{1}{2\pi p(n)}\int_{-\pi}^{\pi} r^{-n}e^{-in\theta}F\big(z; re^{i\theta}\big)\,d\theta, \tag{4.50}
\]
where $r$ is a free parameter.

Lemma 4.6.
\[
\big|F\big(z, re^{i\theta}\big)\big| \le c_5 F(z,r)\exp\Big(\frac{2r(1+r)(\cos\theta-1)}{(1-r)^3 + 2r(1-r)(1-\cos\theta)}\Big).
\]
Proof. Observe the elementary inequality
\[
\Big|\frac{1}{1-q}\Big| \le \frac{1}{1-|q|}\exp\big(\mathrm{Re}\,q - |q|\big), \qquad |q|<1.
\]
We have for $0<r<1$
\[
\big|F(z; re^{i\theta})\big| \le F(z;r)\exp\Big(\sum_{k=1}^{\infty} r^k(\cos k\theta - 1)\Big) = F(z;r)\exp\Big(\mathrm{Re}\,\frac{1}{1-re^{i\theta}} - \frac{1}{1-r}\Big).
\]
Also, it is easy to see that
\[
\mathrm{Re}\,\frac{1}{1-re^{i\theta}} - \frac{1}{1-r} = \frac{2r(1+r)(\cos\theta-1)}{(1-r)^3 + 2r(1-r)(1-\cos\theta)}.
\]
The proof is complete. $\square$
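The elementary inequality used at the start of this proof, $|1-q|^{-1}\le e^{\mathrm{Re}\,q-|q|}/(1-|q|)$ for $|q|<1$, is easy to spot-check on a grid of complex points; a throwaway sketch of our own:

```python
import cmath
import math

def inequality_holds(q):
    """Check |1/(1-q)| <= exp(Re q - |q|) / (1 - |q|) for a complex q, |q| < 1."""
    lhs = abs(1.0 / (1.0 - q))
    rhs = math.exp(q.real - abs(q)) / (1.0 - abs(q))
    return lhs <= rhs + 1e-12

# sweep radii and angles over the open unit disk
ok = all(
    inequality_holds(r * cmath.exp(1j * math.pi * a / 50))
    for r in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99)
    for a in range(-50, 51)
)
```

Equality is attained at $\theta=0$ (real positive $q$), consistent with expanding $\log(1/(1-q))$ into the series $\sum_k q^k/k$.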

We shall asymptotically estimate $r^{-n}F(z;r)$ below. Define for $t$ and $z$
\[
s^{(1)}(t,z) = \sum_{k=k_1}^{k_2-1}\frac{k e^{-tk}}{(1-ze^{-tk})(1-e^{-tk})}
\]
and
\[
s^{(2)}(t,z) = \sum_{k=k_2}^{\infty}\frac{k e^{-tk}}{(1-ze^{-tk})(1-e^{-tk})}.
\]

Lemma 4.7. Let $r = e^{-\tau}$, where
\[
\tau = \tau^*\Big(1 + \frac{1}{2n}\sum_{b=1}^{2}(z_b-1)s^{(b)}(\tau^*, z_b)\Big), \qquad \tau^* = \frac{c}{\sqrt{n}}.
\]
Then we have for $z_1 = e^{u_1/n^{1/4}}$ and $z_2 = e^{u_2/n^{1/4}}$
\[
r^{-n}F(z;r) = \frac{e^{2c\sqrt{n}}}{(24n)^{1/4}}\big(1+O(n^{-1/4})\big) \cdot \exp\Big(\frac{u_1}{n^{1/4}}m(k_1,k_2) + \frac{u_1^2}{2n^{1/2}}\sigma^2(k_1,k_2)\Big)
\]
\[
\cdot \exp\Big(\frac{u_2}{n^{1/4}}m(k_2) + \frac{u_2^2}{2n^{1/2}}\sigma^2(k_2)\Big) \cdot \exp\Big(-\frac{c}{4n^2}\big(u_1 s(k_1,k_2) + u_2 s(k_2)\big)^2\Big).
\]
Proof. Let
\[
H(z;t) = nt + \frac{c^2}{t} + \sum_{k=k_1}^{k_2-1}\log\frac{1-e^{-tk}}{1-z_1 e^{-tk}} + \sum_{k=k_2}^{\infty}\log\frac{1-e^{-tk}}{1-z_2 e^{-tk}}.
\]
A simple calculus shows
\[
H_t(z;t) = n - \frac{c^2}{t^2} - \sum_{b=1}^{2}(z_b-1)s^{(b)}(t,z_b)
\]
and
\[
H_{tt}(z;t) = \frac{2c^2}{t^3} - \sum_{b=1}^{2}(z_b-1)s_t^{(b)}(t,z_b).
\]
Making the Taylor expansion at $\tau$, we have
\[
H(z;\tau^*) = H(z;\tau) + H_t(z;\tau)(\tau^*-\tau) + \frac{1}{2}H_{tt}\big(z;\tilde t\,\big)(\tau^*-\tau)^2
\]
for a $\tilde t$ between $\tau^*$ and $\tau$. Hence it follows that
\[
H(z;\tau) = H(z;\tau^*) - H_t(z;\tau)(\tau^*-\tau) - \frac{1}{2}H_{tt}\big(z;\tilde t\,\big)(\tau^*-\tau)^2. \tag{4.51}
\]
We shall estimate each summand on the right-hand side of (4.51) below. Begin with $H(z;\tau^*)$. As in Theorem 4.7, we have by the Taylor expansion
\[
\sum_{k=k_1}^{k_2-1}\log\frac{1-q_n^k}{1-z_1 q_n^k} = (z_1-1)\sum_{k=k_1}^{k_2-1}\frac{q_n^k}{1-q_n^k} + \frac{(z_1-1)^2}{2}\sum_{k=k_1}^{k_2-1}\frac{q_n^{2k}}{(1-q_n^k)^2} + O\Big(|z_1-1|^3\sum_{k=k_1}^{k_2-1}\frac{q_n^{3k}}{(1-q_n^k)^3}\Big)
\]
\[
= \frac{u_1}{n^{1/4}}\sum_{k=k_1}^{k_2-1}\frac{q_n^k}{1-q_n^k} + \frac{u_1^2}{2n^{1/2}}\sum_{k=k_1}^{k_2-1}\frac{q_n^k}{(1-q_n^k)^2} + O\big(n^{-1/4}\big) = \frac{u_1}{n^{1/4}}m(k_1,k_2) + \frac{u_1^2}{2n^{1/2}}\sigma^2(k_1,k_2) + O\big(n^{-1/4}\big).
\]
Similarly,
\[
\sum_{k=k_2}^{\infty}\log\frac{1-q_n^k}{1-z_2 q_n^k} = \frac{u_2}{n^{1/4}}m(k_2) + \frac{u_2^2}{2n^{1/2}}\sigma^2(k_2) + O\big(n^{-1/4}\big).
\]
This immediately gives
\[
H(z;\tau^*) = 2c\sqrt{n} + \frac{u_1}{n^{1/4}}m(k_1,k_2) + \frac{u_1^2}{2n^{1/2}}\sigma^2(k_1,k_2) + \frac{u_2}{n^{1/4}}m(k_2) + \frac{u_2^2}{2n^{1/2}}\sigma^2(k_2) + O\big(n^{-1/4}\big). \tag{4.52}
\]
Turn to the second term $H_t(z;\tau)$. Note that for $b=1,2$
\[
z_b - 1 = \frac{u_b}{n^{1/4}} + O\big(n^{-1/2}\big)
\]
and
\[
s^{(b)}(\tau^*, z_b) = O(n), \qquad s^{(b)}(\tau, z_b) = O(n).
\]
Then we have
\[
H_t(z;\tau) = n - n\Big(1 + \frac{1}{2n}\sum_{b=1}^{2}(z_b-1)s^{(b)}(\tau^*,z_b)\Big)^{-2} - \sum_{b=1}^{2}(z_b-1)s^{(b)}(\tau,z_b)
\]
\[
= \sum_{b=1}^{2}(z_b-1)\big(s^{(b)}(\tau^*,z_b) - s^{(b)}(\tau,z_b)\big) + O\big(n^{1/2}\big) = O\big(n^{1/2}\big). \tag{4.53}
\]

To evaluate the third term, note that for any $\tilde t$ between $\tau^*$ and $\tau$
\[
H_{tt}\big(z;\tilde t\,\big) = \frac{2}{c}n^{3/2} + O\big(n^{5/4}\big)
\]
and
\[
\tau^* - \tau = -\frac{\tau^*}{2n}\sum_{b=1}^{2}(z_b-1)s^{(b)}(\tau^*,z_b) = -\frac{c}{2n^{7/4}}\sum_{b=1}^{2}u_b s^{(b)} + O\big(n^{-1}\big).
\]
We have
\[
H_{tt}\big(z;\tilde t\,\big)(\tau^*-\tau)^2 = \frac{c}{2n^2}\Big(\sum_{b=1}^{2}u_b s^{(b)}\Big)^2 + O\big(n^{-1/2}\big). \tag{4.54}
\]
Inserting (4.52)-(4.54) into (4.51) yields
\[
H(z;\tau) = 2c\sqrt{n} + \frac{u_1}{n^{1/4}}m(k_1,k_2) + \frac{u_1^2}{2n^{1/2}}\sigma^2(k_1,k_2) + \frac{u_2}{n^{1/4}}m(k_2) + \frac{u_2^2}{2n^{1/2}}\sigma^2(k_2) - \frac{c}{4n^2}\big(u_1 s^{(1)} + u_2 s^{(2)}\big)^2 + O\big(n^{-1/4}\big).
\]
Finally, with the help of $\tau - \tau^* = O(n^{-3/4})$, we have
\[
\frac{1}{2}\log\frac{\tau}{2\pi} = \log\frac{1}{(24n)^{1/4}} + O\big(n^{-1/4}\big),
\]
and so
\[
\mathcal{F}(r) = \mathcal{F}\big(e^{-\tau}\big) = \frac{e^{c^2/\tau}}{(24n)^{1/4}}\big(1+O(n^{-1/4})\big).
\]
To conclude the proof, we need only note that
\[
r^{-n}F(z;r) = \mathcal{F}(r)\exp\Big(-\frac{c^2}{\tau} + H(z;\tau)\Big). \qquad \square
\]
To estimate the integral over $(-\pi,\pi)$, we split the interval into two subsets: $|\theta|\le\theta_n$ and $|\theta|\ge\theta_n$, where $\theta_n = n^{-3/4}\log n$. The following lemma shows that the overall contribution to the integral in (4.50) made by large $\theta$'s is negligible.

Lemma 4.8. Assume $r = e^{-\tau}$ is as in Lemma 4.7. Then
\[
\int_{|\theta|\ge\theta_n} r^{-n}\big|F\big(z; re^{i\theta}\big)\big|\,d\theta \le r^{-n}F(z;r)\exp\big(-c_7\log^2 n\big). \tag{4.55}
\]

Proof. If $r=e^{-\tau}$, then for all $n\ge 1$
\[
\frac{2r(1+r)(\cos\theta-1)}{(1-r)^3 + 2r(1-r)(1-\cos\theta)} \le -\frac{c_6\theta^2}{n^{-3/2} + \theta^2 n^{-1/2}}.
\]
By Lemma 4.6,
\[
\int_{|\theta|\ge\theta_n} r^{-n}\big|F\big(z;re^{i\theta}\big)\big|\,d\theta \le r^{-n}F(z;r)\int_{|\theta|\ge\theta_n}\exp\Big(-\frac{c_6\theta^2}{n^{-3/2}+\theta^2 n^{-1/2}}\Big)\,d\theta.
\]
To estimate the integral, we consider two cases separately: $\theta_n\le|\theta|\le n^{-1/2}$ and $|\theta|>n^{-1/2}$. We have
\[
\int_{\theta_n\le|\theta|\le n^{-1/2}}\exp\Big(-\frac{c_6\theta^2}{n^{-3/2}+\theta^2 n^{-1/2}}\Big)\,d\theta \le \int_{\theta_n\le|\theta|\le n^{-1/2}} e^{-c_6 n^{3/2}\theta_n^2/2}\,d\theta \le \exp\big(-c_6\log^2 n/3\big)
\]
and
\[
\int_{|\theta|>n^{-1/2}}\exp\Big(-\frac{c_6\theta^2}{n^{-3/2}+\theta^2 n^{-1/2}}\Big)\,d\theta \le \int_{-\pi}^{\pi} e^{-c_6 n^{1/2}/2}\,d\theta \le \exp\big(-c_6\log^2 n/3\big).
\]
In combination, (4.55) holds for a new constant $c_7>0$. $\square$
Turn now to the major contribution to the integral, which comes from small $\theta$'s.

Lemma 4.9. Assume $r=e^{-\tau}$ is as in Lemma 4.7. Then
\[
\int_{|\theta|\le\theta_n} r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big)\,d\theta = r^{-n}F(z;r)\,\frac{\pi}{6^{1/4}n^{3/4}}\big(1+o(1)\big). \tag{4.56}
\]
Proof. First, observe that
\[
r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big) = \mathcal{F}\big(e^{-(\tau-i\theta)}\big)\exp\Big(-\frac{c^2}{\tau-i\theta} + H(z;\tau-i\theta)\Big). \tag{4.57}
\]
Also, it follows from (4.5) that
\[
\mathcal{F}\big(e^{-(\tau-i\theta)}\big) = \exp\Big(\frac{c^2}{\tau-i\theta} + \frac{1}{2}\log\frac{\tau-i\theta}{2\pi} + O\big(|\tau-i\theta|\big)\Big). \tag{4.58}
\]
It is easy to see that
\[
\sup\big\{|H_{ttt}(z;t)|: |t-\tau|\le\theta_n\big\} = O\big(n^2\big).
\]
Then the Taylor expansion at $\tau$ gives
\[
H(z;\tau-i\theta) = H(z;\tau) - iH_t(z;\tau)\theta - \frac{1}{2}H_{tt}(z;\tau)\theta^2 + O\big(n^2\theta_n^3\big). \tag{4.59}
\]
Hence combining (4.57)-(4.59) implies
\[
r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big) = r^{-n}F(z;r)\exp\Big(-i\theta H_t(z;\tau) - \frac{\theta^2}{2}H_{tt}(z;\tau)\Big)\big(1+o(1)\big),
\]
which in turn gives
\[
\int_{|\theta|\le\theta_n} r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big)\,d\theta = r^{-n}F(z;r)\int_{|\theta|\le\theta_n}\exp\Big(-i\theta H_t(z;\tau) - \frac{\theta^2}{2}H_{tt}(z;\tau)\Big)\,d\theta\,\big(1+o(1)\big).
\]
Note that for a $c_8>0$
\[
\frac{H_t^2(z;\tau)}{H_{tt}(z;\tau)} = O\big(n^{-1/2}\big), \qquad \theta_n^2 H_{tt}(z;\tau) \ge c_8\log^2 n.
\]
Hence it follows that
\[
\int_{|\theta|\le\theta_n}\exp\Big(-i\theta H_t(z;\tau) - \frac{\theta^2}{2}H_{tt}(z;\tau)\Big)\,d\theta = \Big(\int_{-\infty}^{\infty} - \int_{|\theta|>\theta_n}\Big)\exp\Big(-i\theta H_t(z;\tau) - \frac{\theta^2}{2}H_{tt}(z;\tau)\Big)\,d\theta
\]
\[
= \sqrt{\frac{2\pi}{H_{tt}(z;\tau)}}\exp\Big(-\frac{H_t^2(z;\tau)}{2H_{tt}(z;\tau)}\Big) + O\Big(H_{tt}^{-1/2}e^{-\theta_n^2 H_{tt}(z;\tau)/2}\Big) = \frac{\pi}{6^{1/4}n^{3/4}}\big(1+o(1)\big),
\]
where in the second equality we used the standard normal integral formula. Hence
\[
\int_{|\theta|\le\theta_n} r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big)\,d\theta = r^{-n}F(z;r)\,\frac{\pi}{6^{1/4}n^{3/4}}\big(1+o(1)\big). \qquad \square
\]
Proof of Proposition 4.1. Let $z_1 = e^{u_1/n^{1/4}}$ and $z_2 = e^{u_2/n^{1/4}}$, and choose $r = e^{-\tau}$ in (4.50). Combining (4.55) and (4.56) yields
\[
\int_{-\pi}^{\pi} r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big)\,d\theta = \Big(\int_{|\theta|\le\theta_n} + \int_{|\theta|\ge\theta_n}\Big) r^{-n}e^{-in\theta}F\big(z;re^{i\theta}\big)\,d\theta = r^{-n}F(z;r)\,\frac{\pi}{6^{1/4}n^{3/4}}\big(1+o(1)\big).
\]
Taking Theorem 4.1 and Lemma 4.7 into account, we conclude (4.48), as desired. $\square$

Proof of Theorem 4.5. Take $u_1 = x_1$, $u_2 = x_1+x_2$, $k_1 = \lceil\sqrt{n}\,t_1\rceil$ and $k_2 = \lceil\sqrt{n}\,t_2\rceil$. Note that
\[
\sigma^2(k_1,k_2) = \sqrt{n}\int_{t_1}^{t_2}\frac{e^{-cu}}{(1-e^{-cu})^2}\,du\,(1+o(1)), \qquad \sigma^2(k_2) = \sqrt{n}\int_{t_2}^{\infty}\frac{e^{-cu}}{(1-e^{-cu})^2}\,du\,(1+o(1)),
\]
\[
s(k_1,k_2) = n\int_{t_1}^{t_2}\frac{u e^{-cu}}{(1-e^{-cu})^2}\,du\,(1+o(1)), \qquad s(k_2) = n\int_{t_2}^{\infty}\frac{u e^{-cu}}{(1-e^{-cu})^2}\,du\,(1+o(1)).
\]
Substituting these into (4.48) of Proposition 4.1, we easily get (4.47), as required. This concludes the proof. $\square$

4.4 A functional central limit theorem

In this section we shall first prove a theorem that may allow us to get the
distributional results in the case when the functional of λ depends primarily
on the moderate-sized parts. Then we shall use the functional central limit
theorem to prove the asymptotic normality for character ratios and the
log-normality for dλ .
Introduce an integer-valued function
\[
k_n(t) = \Big\lceil\frac{\sqrt{n}}{c}\log\frac{1}{1-t}\Big\rceil, \qquad t\in(0,1).
\]
Let
\[
t_0(n) = \frac{c}{2\sqrt{n}}, \qquad t_1(n) = n^{-\delta_0}, \qquad t_2(n) = 1 - e^{-c\sqrt{n}},
\]
where $\delta_0\in(0,1/8)$. Define
\[
Y_n(t) = \begin{cases} \dfrac{t}{n^{1/4}}\sum_{k\ge k_n(t)}\big(r_k - E_{q_n}r_k\big), & t\in[t_0(n),1),\\ 0, & 0\le t<t_0(n) \text{ or } t=1.\end{cases}
\]
Let $Y(t)$, $0\le t\le 1$, be a separable centered Gaussian process with the covariance function given by
\[
EY(s)Y(t) = \frac{1}{c}\Big(s(1-t) - \frac{1}{2}\,l(s)l(t)\Big), \qquad 0<s\le t<1, \tag{4.60}
\]
where
\[
l(t) = \frac{1}{c}\big(t\log t - (1-t)\log(1-t)\big).
\]
The so-called functional central limit theorem reads as follows.

Theorem 4.13. (i) With probability 1, $Y(t)$ is uniformly continuous on $[0,1]$.
(ii) $Y_n$ converges to $Y$ in terms of finite dimensional distributions.
(iii) Let $g(t,x)$ be continuous for $(t,x)\in D := (0,1)\times\mathbb{R}$ and such that
\[
|g(t,x)| \le c_{10}\,\frac{|x|^\gamma}{t^\alpha(1-t)^\beta} \tag{4.61}
\]
for some $\gamma>0$, $\alpha<1+\gamma/2$, $\beta<1+\gamma/6$, uniformly for $(t,x)\in D$. Then under $(\mathcal{P}_n, P_{u,n})$
\[
\int_0^1 g\big(t, Y_n(t)\big)\,dt \stackrel{d}{\longrightarrow} \int_0^1 g\big(t, Y(t)\big)\,dt.
\]

We shall prove the theorem following the line of Pittel (2002) by applying
the Gikhman-Skorohod theorem, namely Theorem 1.16. A main step is to
verify the stochastic equicontinuity for Yn (t), 0 ≤ t ≤ 1. As the reader may
notice, a significant difference between Yn and Xn is that there is an extra
factor t in Yn besides parametrization. This factor is added to guarantee
that Yn satisfies the stochastic equicontinuity property and so the limit
process Y has continuous sample paths.

Lemma 4.10. For $u\in\mathbb{R}$,
\[
E_{q_n}\exp\Big(\frac{u}{n^{1/4}}\sum_{k_1\le k<k_2}\big(r_k-E_{q_n}r_k\big)\Big) = \exp\Big(\frac{u^2}{2n^{1/2}}\sum_{k_1\le k<k_2}\frac{q_n^k}{(1-q_n^k)^2} + O\big(n^{3/4-2\delta}\big)\Big), \tag{4.62}
\]
where the error term holds uniformly over $n^\delta\le k_1<k_2\le\infty$ with $\delta<1/2$.
Proof. The proof is completely similar to that of Theorem 4.7. It follows by independence that
\[
E_{q_n}\exp\Big(\frac{u}{n^{1/4}}\sum_{k_1\le k<k_2} r_k\Big) = \prod_{k_1\le k<k_2} E_{q_n}\exp\Big(\frac{u}{n^{1/4}}\, r_k\Big) = \prod_{k_1\le k<k_2}\frac{1-q_n^k}{1-e^{u/n^{1/4}}q_n^k} = \exp\Big(\sum_{k_1\le k<k_2}\log\frac{1-q_n^k}{1-e^{u/n^{1/4}}q_n^k}\Big).
\]
Using the Taylor expansion, we obtain
\[
\sum_{k_1\le k<k_2}\log\frac{1-q_n^k}{1-e^{u/n^{1/4}}q_n^k} = \big(e^{u/n^{1/4}}-1\big)\sum_{k_1\le k<k_2}\frac{q_n^k}{1-q_n^k} + \frac{1}{2}\big(e^{u/n^{1/4}}-1\big)^2\sum_{k_1\le k<k_2}\frac{q_n^{2k}}{(1-q_n^k)^2} + O\Big(\big|e^{u/n^{1/4}}-1\big|^3\sum_{k_1\le k<k_2}\frac{q_n^{3k}}{(1-q_n^k)^3}\Big).
\]
Note that
\[
\big|e^{u/n^{1/4}}-1\big|^3\sum_{k_1\le k<k_2}\frac{q_n^{3k}}{(1-q_n^k)^3} = O\Big(\frac{|u|^3}{n^{1/4}}\int_{k_1/\sqrt{n}}^{\infty}\frac{e^{-3cx}}{(1-e^{-cx})^3}\,dx\Big) = O\big(n^{3/4-2\delta}\big),
\]
and the contribution proportional to $u^3$ that comes from the first two terms is of a lesser order of magnitude. We conclude the proof. $\square$

Lemma 4.11. For $u\in\mathbb{R}$,
\[
E_{u,n}\exp\Big(\frac{u}{n^{1/4}}\sum_{k_1\le k<k_2}\big(r_k-E_{q_n}r_k\big)\Big) \le c_{11}\exp\Big(\frac{u^2}{2n^{1/2}}\sum_{k_1\le k<k_2}\frac{q_n^k}{(1-q_n^k)^2} + O\big(n^{3/4-2\delta}\big)\Big), \tag{4.63}
\]
where the error term holds uniformly over $n^\delta\le k_1<k_2\le\infty$ with $\delta>3/8$.

Proof. This can be proved by a slight modification of Theorem 4.1. For any $0<q<1$,
\[
E_q\, x^{\sum_{k_1\le k<k_2} r_k} = \frac{1}{\mathcal{F}(q)}\sum_{n=0}^{\infty} q^n p(n)\, E_{u,n}\, x^{\sum_{k_1\le k<k_2} r_k}.
\]
On the other hand,
\[
E_q\, x^{\sum_{k_1\le k<k_2} r_k} = \prod_{k_1\le k<k_2}\frac{1-q^k}{1-xq^k}.
\]
Thus we have for any $0<q<1$
\[
\sum_{n=0}^{\infty} q^n p(n)\, E_{u,n}\, x^{\sum_{k_1\le k<k_2} r_k} = \mathcal{F}(q)\prod_{k_1\le k<k_2}\frac{1-q^k}{1-xq^k}.
\]
Indeed, the above equation holds for any complex number $|q|<1$. Using the Cauchy integral formula, we obtain
\[
E_{u,n}\, x^{\sum_{k_1\le k<k_2} r_k} = \frac{1}{2\pi p(n)}\int_{-\pi}^{\pi} r^{-n}e^{-in\theta}F_n\big(x; re^{i\theta}\big)\,d\theta,
\]
where
\[
F_n(x;q) := \mathcal{F}(q)\prod_{k_1\le k<k_2}\frac{1-q^k}{1-xq^k}.
\]
Choosing $r = q_n = e^{-c/\sqrt{n}}$, with the help of (4.5) we further obtain
\[
E_{u,n}\, x^{\sum_{k_1\le k<k_2} r_k} \le c_8\prod_{k_1\le k<k_2}\frac{1-q_n^k}{1-xq_n^k}.
\]
Letting $x = e^{u/n^{1/4}}$, we have by (4.62)
\[
E_{u,n}\exp\Big(\frac{u}{n^{1/4}}\sum_{k_1\le k<k_2} r_k\Big) \le c_8\exp\Big(\frac{u}{n^{1/4}}\sum_{k_1\le k<k_2} E_{q_n} r_k\Big)\cdot\exp\Big(\frac{u^2}{2n^{1/2}}\sum_{k_1\le k<k_2}\frac{q_n^k}{(1-q_n^k)^2} + O\big(n^{3/4-2\delta}\big)\Big),
\]
which immediately implies (4.63). The proof is complete. $\square$

Lemma 4.12. (i) For $\varepsilon>0$,
\[
\lim_{\delta\to 0}\limsup_{n\to\infty}\ \sup_{|t-s|\le\delta} P_{u,n}\big(|Y_n(t)-Y_n(s)|>\varepsilon\big) = 0. \tag{4.64}
\]
(ii) For any $m\ge 1$ and $0<\rho<1/12-2\delta_0/3$,
\[
E_{u,n}|Y_n(t)|^m \le c_9\big(t^{m/2}(1-t)^{m/2} + n^{-m\rho}\big), \qquad t\ge t_0(n).
\]
Proof. Observe that if $t>t_2(n)$ then $k_n(t)>n$, so $Y_n(t)=0$; while if $t<t_1(n)$ then $Y_n(t) = O_p\big(n^{1/4-\delta_0}\log n\big)$. So it suffices to prove (4.64) uniformly for $t,s\in[t_1(n), t_2(n)]$. Assume $s<t$. We use Lemma 4.11 to obtain
\[
E_{u,n}\exp\Big(\frac{us}{n^{1/4}}\sum_{k_n(s)\le k<k_n(t)}\big(r_k-E_{q_n}r_k\big)\Big) \le c_9\exp\Big(\frac{u^2s^2}{2n^{1/2}}\sum_{k_n(s)\le k<k_n(t)}\frac{q_n^k}{(1-q_n^k)^2}\Big) \le c_{10}\exp\Big(\frac{u^2}{2c}(t-s)\Big) \tag{4.65}
\]
and
\[
E_{u,n}\exp\Big(\frac{u(t-s)}{n^{1/4}}\sum_{k\ge k_n(t)}\big(r_k-E_{q_n}r_k\big)\Big) \le c_{11}\exp\Big(\frac{u^2(t-s)^2}{2n^{1/2}}\sum_{k\ge k_n(t)}\frac{q_n^k}{(1-q_n^k)^2}\Big) \le c_{12}\exp\Big(\frac{u^2}{2c}(t-s)\Big). \tag{4.66}
\]
Note that
\[
Y_n(t)-Y_n(s) = \frac{t-s}{n^{1/4}}\sum_{k\ge k_n(t)}\big(r_k-E_{q_n}r_k\big) - \frac{s}{n^{1/4}}\sum_{k_n(s)\le k<k_n(t)}\big(r_k-E_{q_n}r_k\big).
\]
It follows by the Cauchy-Schwarz inequality, (4.65) and (4.66) that
\[
E_{u,n}\exp\big(u(Y_n(t)-Y_n(s))\big) \le \Big(E_{u,n}\exp\Big(-\frac{2us}{n^{1/4}}\sum_{k_n(s)\le k<k_n(t)}\big(r_k-E_{q_n}r_k\big)\Big)\Big)^{1/2}\cdot\Big(E_{u,n}\exp\Big(\frac{2u(t-s)}{n^{1/4}}\sum_{k\ge k_n(t)}\big(r_k-E_{q_n}r_k\big)\Big)\Big)^{1/2}
\]
\[
\le c_{13}\exp\Big(\frac{2u^2}{c}(t-s)\Big).
\]
A standard argument now yields
\[
P_{u,n}\big(|Y_n(t)-Y_n(s)|>\varepsilon\big) \le \begin{cases} e^{-c\varepsilon^2/8(t-s)}, & \varepsilon\le n^\rho(t-s),\\ e^{-cn^\rho\varepsilon/8}, & \varepsilon\ge n^\rho(t-s).\end{cases}
\]
Therefore we have
\[
P_{u,n}\big(|Y_n(t)-Y_n(s)|>\varepsilon\big) \le e^{-c\varepsilon^2/8\delta} + e^{-cn^\rho\varepsilon/8}
\]
uniformly for $s,t\in[t_1(n),t_2(n)]$ with $|t-s|\le\delta$. This verifies the stochastic equicontinuity property (4.64).
We can analogously obtain
\[
P_{u,n}\big(|Y_n(t)|>x\big) \le \begin{cases} e^{-cx^2/8t(1-t)}, & x\le n^\rho t(1-t)/c,\\ e^{-cn^\rho x/8}, & x\ge n^\rho t(1-t)/c.\end{cases}
\]
Therefore it follows by integration by parts that
\[
E_{u,n}|Y_n(t)|^m = m\int_0^\infty x^{m-1} P_{u,n}\big(|Y_n(t)|>x\big)\,dx \le c_{15}\big(t^{m/2}(1-t)^{m/2} + n^{-m\rho}\big),
\]
as desired. $\square$
Proof of Theorem 4.13. Begin with the continuity of the sample paths of $Y(t)$. Note that
\[
E\big(Y(t)-Y(s)\big)^2 = EY(t)^2 - 2EY(s)Y(t) + EY(s)^2 = \frac{1}{c}\Big(t-s-(t-s)^2 - \frac{1}{2}\big(l(t)-l(s)\big)^2\Big) \le \frac{1}{c}(t-s).
\]
Since $Y(t)-Y(s)$ is Gaussian with zero mean, we have
\[
E\big(Y(t)-Y(s)\big)^4 \le \frac{3}{c^2}(t-s)^2.
\]
This implies, by Kolmogorov's continuity criterion, that there exists a separable continuous version of $Y(\cdot)$ on $[0,1]$.
Turn to the proof of (ii). The asymptotic normality directly follows from Theorem 4.5. Indeed, making a change of the time parameter, we see that $tX_n\big(\frac{1}{c}\log\frac{1}{1-t}\big)$, $0<t<1$, converges weakly to $tX\big(\frac{1}{c}\log\frac{1}{1-t}\big)$, $0<t<1$, in terms of finite dimensional distributions. If we let $Y(t) = tX\big(\frac{1}{c}\log\frac{1}{1-t}\big)$, then a simple calculus shows that $Y(t)$, $0<t<1$, has the desired covariance structure (4.60).
Finally, we show (iii). Fix $g$. Without loss of generality, we assume that $\alpha,\beta>1$. Introduce $\varepsilon_m = 1/m$, $m\ge 1$, and break up the integration interval into three subsets: $(0,\varepsilon_m)$, $(\varepsilon_m, 1-\varepsilon_m)$, and $(1-\varepsilon_m,1)$. Let
\[
Z_n = \int_0^1 g\big(t,Y_n(t)\big)\,dt, \qquad Z_{n,m} = \int_{\varepsilon_m}^{1-\varepsilon_m} g\big(t,Y_n(t)\big)\,dt
\]
and
\[
Z = \int_0^1 g\big(t,Y(t)\big)\,dt, \qquad Z_m = \int_{\varepsilon_m}^{1-\varepsilon_m} g\big(t,Y(t)\big)\,dt.
\]
Then using Lemma 4.12 and Theorem 1.16, it is not difficult to check the following three statements:
(a) for any $\varepsilon>0$,
\[
\lim_{m\to\infty}\lim_{n\to\infty} P_n\big(|Z_n - Z_{n,m}|>\varepsilon\big) = 0;
\]
(b) for each $m\ge 1$,
\[
Z_{n,m}\stackrel{d}{\longrightarrow} Z_m;
\]
(c) for any $\varepsilon>0$,
\[
\lim_{m\to\infty} P\big(|Z_m - Z|>\varepsilon\big) = 0.
\]
Here we prefer to leave the detailed computation to the reader; see also Pittel (2002).
Having (a), (b) and (c) above, Theorem 4.2 of Chapter 1 of Billingsley (1999a) guarantees $Z_n\stackrel{d}{\longrightarrow} Z$, which concludes the proof. $\square$
To illustrate, we shall give two examples. The first treats the character ratios of the symmetric group $S_n$. Fix a transposition $\tau\in S_n$. Define the character ratio by
\[
\gamma_\tau(\lambda) = \frac{\chi^\lambda(\tau)}{d_\lambda}, \qquad \lambda\in\mathcal{P}_n,
\]
where $\chi^\lambda$ is the irreducible character associated with the partition $\lambda\vdash n$ and $d_\lambda$ is the dimension of $\chi^\lambda$, i.e., $d_\lambda = \chi^\lambda(1^n)$.
The ratio function played an important role in the well-known analysis of the card-shuffling problem performed by Diaconis and Shahshahani (1981). In fact, Diaconis and Shahshahani proved that the eigenvalues of this random walk are the character ratios, each occurring with multiplicity $d_\lambda^2$. Character ratios also play a crucial role in work on moduli spaces of curves; see Eskin and Okounkov (2001), Okounkov and Pandharipande (2004). The following theorem can be found at the end of the paper of Diaconis and Shahshahani (1981).
Theorem 4.14. Under $(\mathcal{P}_n, P_{u,n})$,
\[
n^{3/4}\gamma_\tau(\lambda) \stackrel{d}{\longrightarrow} N\big(0,\sigma_\tau^2\big),
\]
where
\[
\sigma_\tau^2 = \frac{4}{c^4}\int_0^1\int_0^1\frac{EY(s)Y(t)}{s(1-s)t(1-t)}\log\frac{1-s}{s}\log\frac{1-t}{t}\,ds\,dt,
\]
with $EY(s)Y(t)$ given by (4.60).
Proof. Recall the following classical identity due to Frobenius (1903):
\[
\gamma_\tau(\lambda) = \frac{1}{n(n-1)}\sum_k\big(\lambda_k^2 - (2k-1)\lambda_k\big) = \binom{n}{2}^{-1}\sum_k\Big(\binom{\lambda_k}{2} - \binom{\lambda_k'}{2}\Big).
\]
It follows from the second expression that $\gamma_\tau(\lambda') = -\gamma_\tau(\lambda)$, and hence $E_{u,n}\gamma_\tau(\lambda) = 0$ since $P_{u,n}(\lambda) = P_{u,n}(\lambda')$.
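The two forms of Frobenius' identity are mechanical to cross-check for small partitions; a sketch with helper names of our own:

```python
from math import comb

def conjugate(lam):
    """Conjugate partition: lam'_j = #{i : lam_i >= j}."""
    return [sum(1 for part in lam if part >= j) for j in range(1, (lam[0] if lam else 0) + 1)]

def gamma_sum_form(lam):
    """(1/(n(n-1))) * sum_k (lam_k^2 - (2k-1) lam_k)."""
    n = sum(lam)
    return sum(l * l - (2 * k - 1) * l for k, l in enumerate(lam, 1)) / (n * (n - 1))

def gamma_binom_form(lam):
    """binom(n,2)^{-1} * sum_k [binom(lam_k,2) - binom(lam'_k,2)]."""
    n = sum(lam)
    s = sum(comb(l, 2) for l in lam) - sum(comb(l, 2) for l in conjugate(lam))
    return s / comb(n, 2)
```

One can also verify $\gamma_\tau(\lambda') = -\gamma_\tau(\lambda)$ directly from the second form, since conjugation swaps the two binomial sums.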
To prove the central limit theorem, we observe that
\[
\sum_k\big(\lambda_k'^2 - (2k-1)\lambda_k'\big) = n + U_{n,1} + U_{n,2} + U_{n,3},
\]
where
\[
U_{n,1} = \sum_k\big(m(k)^2 - 2km(k)\big), \qquad U_{n,2} = \sum_k\big(\varphi_\lambda(k)-m(k)\big)^2, \qquad U_{n,3} = 2\sum_k\big(m(k)-k\big)\big(\varphi_\lambda(k)-m(k)\big).
\]
It is easy to check that
\[
U_{n,1} = O_p\big(n\log^2 n\big), \qquad U_{n,2} = O_p\big(n\log^2 n\big).
\]
Turn to $U_{n,3}$. Switching to integration, we get
\[
U_{n,3} = 2\int_0^\infty\big(m(x)-x\big)\big(\varphi_\lambda(x)-m(x)\big)\,dx + O_p\big(n\log^2 n\big).
\]
Substituting $x = \sqrt{n}\,|\log(1-t)|/c$, a simple calculus shows that
\[
\int_0^\infty\big(m(x)-x\big)\big(\varphi_\lambda(x)-m(x)\big)\,dx = \frac{n^{5/4}}{c^2}\int_0^1\frac{Y_n(t)}{t(1-t)}\log\frac{1-t}{t}\,dt + O_p\big(n^{3/4}\log n\big).
\]
Set
\[
g(t,x) = \frac{x}{t(1-t)}\log\frac{1-t}{t}.
\]
Then $g(t,x)$ obviously satisfies the condition (4.61) of Theorem 4.13 with parameters $\gamma=1$ and $\alpha=\beta=3/2$. So we have
\[
\int_0^1\frac{Y_n(t)}{t(1-t)}\log\frac{1-t}{t}\,dt \stackrel{d}{\longrightarrow} \int_0^1\frac{Y(t)}{t(1-t)}\log\frac{1-t}{t}\,dt.
\]
Note that the limit variable is a centered Gaussian random variable with variance
\[
\int_0^1\int_0^1\frac{EY(s)Y(t)}{s(1-s)t(1-t)}\log\frac{1-s}{s}\log\frac{1-t}{t}\,ds\,dt,
\]
where $EY(s)Y(t)$ is given by (4.60).
In combination, we obtain
\[
n^{3/4}\gamma_\tau(\lambda) \stackrel{d}{\longrightarrow} N\big(0,\sigma_\tau^2\big),
\]
as desired. $\square$
The second example we shall treat is $d_\lambda$. It turns out that the logarithm of $d_\lambda$ satisfies the central limit theorem. Introduce
\[
\kappa(t) = \int_0^\infty\frac{\log|\log x|}{(1-t-tx)^2}\,dx. \tag{4.67}
\]
Theorem 4.15. Under $(\mathcal{P}_n, P_{u,n})$,
\[
\frac{1}{n^{3/4}}\Big(\log d_\lambda - \frac{1}{2}n\log n + An\Big) \stackrel{d}{\longrightarrow} N\big(0,\sigma_d^2\big).
\]
Here $A$ and $\sigma_d^2$ are given by
\[
A = 1 - \log c + \frac{1}{c^2}\int_0^\infty\frac{y\log y}{e^y-1}\,dy
\]
and
\[
\sigma_d^2 = \frac{1}{c^2}\int_0^1\int_0^1 EY(s)Y(t)\,\kappa(s)\kappa(t)\,ds\,dt, \tag{4.68}
\]
where $EY(s)Y(t)$ is given by (4.60). Numerically, $\sigma_d^2 = 0.3375\cdots$.

The theorem was first proved by Pittel (2002). The proof will use the following classical identities (see also Chapter 5):
\[
d_\lambda = n!\,\frac{\prod_{1\le i<j\le l}(\lambda_i-\lambda_j+j-i)}{\prod_{1\le i\le l}(\lambda_i-i+l)!} \tag{4.69}
\]
and
\[
d_\lambda = \frac{n!}{H_\lambda} := \frac{n!}{\prod_{\square\in\lambda} h_\square}, \tag{4.70}
\]
where $h_\square = \lambda_i - i + \lambda_j' - j + 1$ is the hook length of the $(i,j)$ square.
It follows directly from (4.70) and (4.69) that
\[
d_\lambda = d_{\lambda'} = n!\,\frac{\prod_{1\le i<j\le\lambda_1}(\lambda_i'-\lambda_j'+j-i)}{\prod_{1\le i\le\lambda_1}(\lambda_i'-i+\lambda_1)!}.
\]
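Both dimension formulas are easy to implement and cross-check for small $n$; a sketch (function names are ours):

```python
from math import factorial

def dim_hook(lam):
    """d_lambda = n! / product of hook lengths, formula (4.70)."""
    n = sum(lam)
    conj = [sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1)]
    prod = 1
    for i, row in enumerate(lam, 1):
        for j in range(1, row + 1):
            prod *= row - i + conj[j - 1] - j + 1  # hook length of square (i, j)
    return factorial(n) // prod

def dim_det(lam):
    """d_lambda via the quotient formula (4.69)."""
    n, l = sum(lam), len(lam)
    num = 1
    for i in range(l):
        for j in range(i + 1, l):
            num *= lam[i] - lam[j] + j - i
    den = 1
    for i in range(l):
        den *= factorial(lam[i] - (i + 1) + l)
    return factorial(n) * num // den
```

A classical consistency check is $\sum_{\lambda\vdash n} d_\lambda^2 = n!$; for $n=4$ the five partitions give $1+9+4+9+1=24$.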
Consequently, we obtain
\[
\log d_\lambda - \log n! = \sum_{1\le i<j\le\lambda_1}\log\big(\lambda_i'-\lambda_j'+j-i\big) - \sum_{1\le i\le\lambda_1}\log\big(\lambda_i'-i+\lambda_1\big)! =: M_n - N_n. \tag{4.71}
\]
The bulk of the argument consists of computing $M_n$ and $N_n$. We need some basic estimates on $\lambda_i'-\lambda_j'$ below. Let $0<\delta<1/2$ and define
\[
K = \big\{(k_1,k_2): n^\delta\le k_1\le k_2-n^\delta\big\}.
\]
Lemma 4.13. Let
\[
\ell(x) = \frac{\sqrt{n}}{c}\log\frac{1}{1-e^{-cx/\sqrt{n}}}, \qquad x>0,
\]
and denote $\ell(x,y) = \ell(x)-\ell(y)$ for any $0<x\le y$. Then we have:
(i)
\[
m(k_1,k_2) = \big(1+O(n^{-\delta})\big)\ell(k_1,k_2)
\]
uniformly for $(k_1,k_2)\in K$, and, for $x_i = ck_i/\sqrt{n}$,
\[
\sigma^2(k_1,k_2) = \big(1+O(n^{-\delta})\big)\frac{\sqrt{n}}{c}\Big(\frac{1}{1-e^{-x_1}} - \frac{1}{1-e^{-x_2}}\Big);
\]
(ii)
\[
\frac{\sigma(k_1,k_2)}{m(k_1,k_2)} = O\big(n^{-(\delta-a)/2}\big)
\]
uniformly for $(k_1,k_2)\in K$ and $k_1\le a\sqrt{n}\log n/c$, where $a<\delta$;
(iii)
\[
\lim_{n\to\infty}\frac{\sigma^2(k_1,k_2)}{m(k_1,k_2)} = 1
\]
uniformly for $(k_1,k_2)\in K$ and $k_1\ge a\sqrt{n}\log n/c$, where $a<\delta$.

Proof. See Lemmas 1 and 2 of Pittel (2002). $\square$

Lemma 4.14. Given $\varepsilon>0$ and $0<a<\delta<1/2$, there is an $n_0(a,\delta,\varepsilon)\ge 1$ such that:
(i)
\[
Q_{q_n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| > \sigma(k_1,k_2)\sqrt{\varepsilon\log n}\big) \le n^{-\varepsilon/3}
\]
uniformly for $(k_1,k_2)\in K$ and $k_1\le a\sqrt{n}\log n/c$;
(ii)
\[
Q_{q_n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| > (\sigma(k_1,k_2)+\log n)\sqrt{\varepsilon\log n}\big) \le n^{-\varepsilon/3}
\]
uniformly for $(k_1,k_2)\in K$ and $k_1\ge a\sqrt{n}\log n/c$.
Proof. Let $\eta$ be small enough to ensure that $e^\eta q_n^k<1$ for all $k\in[k_1,k_2)$. For instance, select $|\eta| = \sqrt{\varepsilon\log n}/\sigma(k_1,k_2)$. We have
\[
E_{q_n}\exp\big(\eta(\lambda_{k_1}'-\lambda_{k_2}')\big) = \exp\Big(\eta\, m(k_1,k_2) + \frac{\eta^2}{2}\sigma^2(k_1,k_2)\Big)\cdot\exp\Big(O\Big(|\eta|^3\sum_{k_1\le k<k_2}\frac{q_n^{3k}}{(1-q_n^k)^3}\Big)\Big). \tag{4.72}
\]
Moreover, a delicate analysis shows that the remainder term is in fact of order $n^{-\delta/2}\log^{3/2}n = o(1)$. Using (4.72) and Markov's inequality, we easily get
\[
Q_{q_n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| \ge \sigma(k_1,k_2)\sqrt{\varepsilon\log n}\big) \le 2\exp\Big(-\frac{\varepsilon}{2}\log n + O\big(n^{-\delta/2}\log^{3/2}n\big)\Big) \le n^{-\varepsilon/3}.
\]
This concludes the proof of (i). Turn to (ii). Set
\[
|\eta| = \frac{\sqrt{\varepsilon\log n}}{\sigma(k_1,k_2)+\log n}.
\]
Then $|\eta|\to 0$. Using only the first order expansion, we obtain
\[
E_{q_n}\exp\big(\eta(\lambda_{k_1}'-\lambda_{k_2}')\big) = \exp\big((e^\eta-1)m(k_1,k_2)\big)\cdot\exp\Big(O\Big(\eta^2\sum_{k_1\le k<k_2}\frac{q_n^{2k}}{(1-q_n^k)^2}\Big)\Big). \tag{4.73}
\]
Note that
\[
\eta^2\sum_{k_1\le k<k_2}\frac{q_n^{2k}}{(1-q_n^k)^2} = O\big(n^{-a}\log n\big).
\]
Using (4.73) and the Markov inequality, and noting that $\sigma^2(k_1,k_2)/m(k_1,k_2) = 1+o(1)$, we have
\[
Q_{q_n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| \ge (\sigma(k_1,k_2)+\log n)\sqrt{\varepsilon\log n}\big) \le n^{-\varepsilon/3}.
\]
The proof is complete. $\square$


Lemma 4.14 can be used to obtain the following concentration-type bound for $\lambda_i'-\lambda_j'$ under $(\mathcal{P}_n, P_{u,n})$.

Proposition 4.2. With $P_{u,n}$-probability at least $1-O(n^{-1/4})$,
\[
\big|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)\big| \le 3\big(\sigma(k_1,k_2)+\log n\big)(\log n)^{1/2} \tag{4.74}
\]
uniformly for $(k_1,k_2)\in K$. If, in addition, $k_1\le an^{1/2}\log n/c$ where $a<\delta$, then the summand $\log n$ can be dropped.
Proof. Take $\varepsilon=9$ in Lemma 4.14 to conclude
\[
Q_{q_n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| \ge 3(\sigma(k_1,k_2)+\log n)(\log n)^{1/2}\big) \le n^{-3}.
\]
On the other hand, note by Lemma 4.2 that
\[
P_{u,n}(B) = Q_{q_n}\big(B \,\big|\, |\lambda|=n\big) \le \frac{Q_{q_n}(B)}{Q_{q_n}(|\lambda|=n)}.
\]
Then according to (4.22),
\[
P_{u,n}\big(|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)| \ge 3(\sigma(k_1,k_2)+\log n)(\log n)^{1/2}\big) \le n^{-9/4}
\]
for each $(k_1,k_2)\in K$ with $1\le k_1,k_2\le n$. This immediately implies (4.74), as asserted. $\square$
Having these basic estimates, we are now ready to compute $M_n$ and $N_n$. Let us start with $N_n$. Define
\[
\mu_k = \lceil\ell(k)\rceil, \qquad \mathcal{N}_n = \sum_{1\le k\le\lambda_1}\log\big(\mu_k-k+\lambda_1\big)!,
\]
\[
R_n = \sum_{l_n\le k<k_n}\big(\lambda_k'-m(k)\big)\log\big(m(k)-k+\lambda_1\big),
\]
where $l_n = [n^\delta]$ and $k_n = \lceil\sqrt{n}\log n/c\rceil$.

Lemma 4.15. Under $(\mathcal{P}_n, P_{u,n})$, we have for $1/8<\delta<1/4$
\[
N_n = \mathcal{N}_n + R_n + o_p\big(n^{3/4}\big). \tag{4.75}
\]

Proof. Let $\phi(x) = x\log x - x$. Then by the Stirling formula for factorials,
\[
N_n = \sum_{k=1}^{\lambda_1}\phi\big(\lambda_k'-k+\lambda_1\big) + \Delta(\lambda_1), \tag{4.76}
\]
where $\Delta(\lambda_1) = O_p\big(n^{1/2}\log^2 n\big)$. Define
\[
\bar N_n = \sum_{k=1}^{\lambda_1}\phi\big(m(k)-k+\lambda_1\big).
\]
We shall compare $N_n$ with $\bar N_n$ below. For this, we break up the sum in (4.76) into $N_n^{(1)}$, $N_n^{(2)}$ and $N_n^{(3)}$ for $k\in[1,l_n)$, $k\in[l_n,k_n]$ and $k\in(k_n,\infty)$, respectively. We similarly define $\bar N_n^{(1)}$, $\bar N_n^{(2)}$ and $\bar N_n^{(3)}$.
Observe that uniformly for $1\le k\le\lambda_1$,
\[
\log\big(\lambda_k'-k+\lambda_1\big) = O_p(\log n), \qquad \log\big(m(k)-k+\lambda_1\big) = O_p(\log n),
\]
and so
\[
\phi\big(\lambda_k'-k+\lambda_1\big) = O_p\big(n^{1/2}\log^2 n\big), \qquad \phi\big(m(k)-k+\lambda_1\big) = O_p\big(n^{1/2}\log^2 n\big).
\]
Then we have
\[
N_n^{(1)} = O_p\big(n^{1/2+\delta}\log^2 n\big), \qquad \bar N_n^{(1)} = O_p\big(n^{1/2+\delta}\log^2 n\big),
\]
which implies
\[
N_n^{(1)} - \bar N_n^{(1)} = O_p\big(n^{1/2+\delta}\log^2 n\big) = o_p\big(n^{3/4}\big). \tag{4.77}
\]
Turn to $N_n^{(2)} - \bar N_n^{(2)}$. With the help of (4.74), we expand $\phi(x)$ at $x = m(k)-k+\lambda_1$:
\[
\phi\big(\lambda_k'-k+\lambda_1\big) = \phi\big(m(k)-k+\lambda_1\big) + \big(\lambda_k'-m(k)\big)\log\big(m(k)-k+\lambda_1\big) + O_p\Big(\frac{\sigma^2(k)\log n}{m(k)-k+\lambda_1}\Big).
\]
It follows from Lemma 4.13 that the remainder term is controlled by $O_p(\log n)$. Hence
\[
N_n^{(2)} - \bar N_n^{(2)} = \sum_{k=l_n}^{k_n}\big(\lambda_k'-m(k)\big)\log\big(m(k)-k+\lambda_1\big) + O_p\big(n^{1/2}\log^2 n\big). \tag{4.78}
\]
As for the third term, we analogously use the Taylor expansion to obtain
\[
N_n^{(3)} - \bar N_n^{(3)} = \sum_{k>k_n}\big(\lambda_k'-m(k)\big)\log x_k^*,
\]
where $x_k^*$ lies between $m(k)-k+\lambda_1$ and $\lambda_k'-k+\lambda_1$. It follows from (4.74) that
\[
N_n^{(3)} - \bar N_n^{(3)} = o_p\big(n^{3/4}\big). \tag{4.79}
\]
Putting (4.77)-(4.79) together yields
\[
N_n - \bar N_n = R_n + o_p\big(n^{3/4}\big).
\]
To conclude the proof, we observe that
\[
\bar N_n = \mathcal{N}_n + O_p\big(n^{1/2}\log n\big).
\]
Now the assertion (4.75) is valid. $\square$
Now the assertion (4.75) is valid. 


We shall next turn to computing $M_n$. Let
\[
g(y) = \log\big(e^y-1\big), \qquad v(x) = -\int_0^\infty e^{-y}\log\big|g(y)-g(x)\big|\,dy, \quad x>0,
\]
\[
\mathcal{M}_n = \sum_{1\le i<j\le\lambda_1}\log\big(\mu(i,j)+j-i\big),
\]
\[
S_n = \sum_{2l_n\le k\le\lambda_1-l_n}\big(\lambda_k'-m(k)\big)\big(v(y_k) + \log(g(y_{\lambda_1})-g(y_k))\big),
\]
where $y_k = ck/\sqrt{n}$ and $\mu(i,j) = \sum_{l=i}^{j-1}\mu_l$.

Lemma 4.16. Under $(\mathcal{P}_n, P_{u,n})$, we have
\[
M_n = \mathcal{M}_n + S_n + o_p\big(n^{3/4}\big). \tag{4.80}
\]

Proof. Denote $K = \{(k_1,k_2): l_n\le k_1\le k_2-l_n\}\subseteq[1,\lambda_1]\times[1,\lambda_1]$. Obviously, it follows that
\[
M_n = \sum_{(k_1,k_2)\in K}\log\big(\lambda_{k_1}'-\lambda_{k_2}'+k_2-k_1\big) + O_p\big(n^{1/2+\delta}\log^2 n\big). \tag{4.81}
\]
By (4.74), with high probability,
\[
\frac{|\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)|}{m(k_1,k_2)+k_2-k_1} \le 3\,\frac{\sigma(k_1,k_2)}{m(k_1,k_2)+k_2-k_1} = o(1)
\]
for all $(k_1,k_2)\in K$. So uniformly for $(k_1,k_2)\in K$,
\[
\log\big(\lambda_{k_1}'-\lambda_{k_2}'+k_2-k_1\big) = \log\big(m(k_1,k_2)+k_2-k_1\big) + \frac{\lambda_{k_1}'-\lambda_{k_2}'-m(k_1,k_2)}{m(k_1,k_2)+k_2-k_1} + O_p\Big(\frac{\sigma^2(k_1,k_2)\log^2 n + \log^4 n}{(m(k_1,k_2)+k_2-k_1)^2}\Big). \tag{4.82}
\]
Take a closer look at each term on the right-hand side of (4.82). First, observe that
\[
\sum_{(k_1,k_2)\in K}\frac{\sigma^2(k_1,k_2)\log^2 n + \log^4 n}{(m(k_1,k_2)+k_2-k_1)^2} = o_p\big(n^{3/4}\big). \tag{4.83}
\]

Second, a simple algebra shows
\[
\sum_{(k_1,k_2)\in K}\frac{\lambda'_{k_1} - \lambda'_{k_2} - m(k_1,k_2)}{m(k_1,k_2) + k_2 - k_1} = S_n^{(1)} + S_n^{(2)}, \tag{4.84}
\]
where
\[
S_n^{(1)} = \sum_{k=l_n}^{2l_n}\big(\lambda'_k - m(k)\big)\sum_{j=k+l_n}^{\lambda_1}\frac{1}{m(j,k) + j - k}
\]
and
\[
S_n^{(2)} = \sum_{k=2l_n}^{\lambda_1}\big(\lambda'_k - m(k)\big)\sum_{l_n\le j\le\lambda_1,\,|j-k|\ge l_n}\frac{1}{m(j,k) + j - k}.
\]
It follows from Proposition 4.2 that with high probability $S_n^{(1)}$ is of smaller order than $n^{3/4}$. To study $S_n^{(2)}$, we need a delicate approximation (see pp. 200-202 of Pittel (2002) for the lengthy and laborious computation):
\[
\sum_{l_n\le j\le\lambda_1,\,|j-k|\ge l_n}\frac{1}{m(j,k) + j - k} = v(y_k) + \log\big(g(y_{\lambda_1}) - g(y_k)\big) + O_p\big(n^{-1/2}\log n\big).
\]




Then (4.84) becomes
\[
\sum_{k=2l_n}^{\lambda_1}\big(\lambda'_k - m(k)\big)\big[v(y_k) + \log\big(g(y_{\lambda_1}) - g(y_k)\big)\big] + o_p\big(n^{3/4}\big). \tag{4.85}
\]
Inserting (4.82) into (4.81) and noting (4.83) and (4.85),
\[
M_n = \sum_{(k_1,k_2)\in K}\log\big(m(k_1,k_2) + k_2 - k_1\big) + \sum_{k=2l_n}^{\lambda_1}\big(\lambda'_k - m(k)\big)\big[v(y_k) + \log\big(g(y_{\lambda_1}) - g(y_k)\big)\big] + o_p\big(n^{3/4}\big).
\]
To conclude the proof, we observe
\[
\sum_{(k_1,k_2)\in K^c}\log\big(m(k_1,k_2) + k_2 - k_1\big) = O_p\big(n^{1/2+\delta}\log^2 n\big)
\]
and
\[
\Big|\sum_{1\le k_1<k_2\le\lambda_1}\Big[\log\big(m(k_1,k_2) + k_2 - k_1\big) - \log\big(\mu(k_1,k_2) + k_2 - k_1\big)\Big]\Big| \le 2\sum_{1\le k_1<k_2\le\lambda_1}\frac{1}{k_2 - k_1} = O_p\big(n^{1/2}\log^2 n\big). \qquad\Box
\]

Lemma 4.17. Under $(\mathcal{P}_n, P_{u,n})$, we have
\[
\log d_\lambda - \log n! = \log f(\mu) + T_n + o_p\big(n^{3/4}\big), \tag{4.86}
\]
where
\[
f(\mu) = \frac{\prod_{1\le i<j\le\lambda_1}(\mu_i - \mu_j + j - i)}{\prod_{1\le i\le\lambda_1}(\mu_i - i + \lambda_1)!}
\]
and
\[
T_n = \sum_{k=2l_n}^{\lambda_1} v(y_k)\big(\lambda'_k - m(k)\big). \tag{4.87}
\]

Proof. By (4.16), (4.75) and (4.80), we have
\[
\log d_\lambda - \log n! = M_n - N_n = \bar M_n - \bar N_n + S_n - R_n + o_p\big(n^{3/4}\big) = \log f(\mu) + S_n - R_n + o_p\big(n^{3/4}\big).
\]
On the other hand, it trivially follows that
\[
S_n - R_n - T_n = -\sum_{k>\lambda_1-l_n} v(y_k)\big(\lambda'_k - m(k)\big) + \sum_{2l_n\le k\le\lambda_1-l_n}\log\big(g(y_{\lambda_1}) - g(y_k)\big)\big(\lambda'_k - m(k)\big) - \sum_{l_n\le k\le k_n}\log\big(m(k) - k + \lambda_1\big)\big(\lambda'_k - m(k)\big). \tag{4.88}
\]
Therefore we need only prove that the terms on the right-hand side of (4.88) are negligible. First, according to Lemma 5 of Pittel (2002), there is a constant $c_6>0$ such that
\[
|v(x)| \le c_6\Big(\log x + \frac1x\Big), \qquad |v'(x)| \le \frac{c_6}{x}. \tag{4.89}
\]
We easily get
\[
\sum_{k>\lambda_1-l_n} v(y_k)\lambda'_k = O\big(n^\delta\lambda_1\log n\big) = O_p\big(n^{1/2+\delta}\log^2 n\big).
\]
Also, it is even simpler to check
\[
\sum_{k>\lambda_1-l_n} v(y_k)m(k) = O_p\big(n^{1/2}\log^2 n\big).
\]
Thus we have
\[
\sum_{k>\lambda_1-l_n} v(y_k)\big(\lambda'_k - m(k)\big) = o_p\big(n^{3/4}\big).
\]

Second, observe the following simple facts:
\[
\sum_{k=1}^{\lambda_1}\lambda'_k = n
\]
and
\[
\sum_{k=1}^{\lambda_1} m(k) = \frac{n}{c^2}\int_0^\infty\log\frac{1}{1-e^{-y}}\,dy + O\big(n^{1/2}\log n\big) = n + O\big(n^{1/2}\log n\big).
\]
Then we easily have
\[
\sum_{k=1}^{\lambda_1}\big(\lambda'_k - m(k)\big) = O_p\big(n^{1/2}\log n\big). \tag{4.90}
\]
On the other hand, it follows from (4.74) that
\[
\sum_{k\notin[2l_n,\,k_n]}\big(\lambda'_k - m(k)\big) = O_p\big(n^{3/4-\varepsilon}\big),
\]
which together with (4.90) in turn implies
\[
\sum_{2l_n\le k\le k_n}\big(\lambda'_k - m(k)\big) = O_p\big(n^{3/4-\varepsilon}\big). \tag{4.91}
\]
Besides, we have by (4.74)
\[
\sum_{2l_n\le k\le k_n}\big|\lambda'_k - m(k)\big| = O_p\big(n^{3/4}\log n\big). \tag{4.92}
\]
By the definition of $g(x)$ and $m(k)$,
\[
\frac{\sqrt n}{c}\big(g(y_{\lambda_1}) - g(y_k)\big) = m(k) - k + \lambda_1 + \frac{\sqrt n}{c}\log\big(1 - e^{-c\lambda_1/\sqrt n}\big) = m(k) - k + \lambda_1 + O_p(1).
\]
Therefore for $k\le k_n$,
\[
\log\big(g(y_{\lambda_1}) - g(y_k)\big) - \log\big(m(k) - k + \lambda_1\big) = \log\frac{c}{\sqrt n} + O_p\big(n^{-1/2}\log^{-1}n\big).
\]
Thus by (4.91) and (4.92),
\[
\sum_{2l_n\le k\le k_n}\big[\log\big(g(y_{\lambda_1}) - g(y_k)\big) - \log\big(m(k) - k + \lambda_1\big)\big]\big(\lambda'_k - m(k)\big) = \log\frac{c}{\sqrt n}\sum_{2l_n\le k\le k_n}\big(\lambda'_k - m(k)\big) + O_p\big(n^{-1/2}\log^{-1}n\big)\sum_{2l_n\le k\le k_n}\big|\lambda'_k - m(k)\big| = o_p\big(n^{3/4}\big).
\]
The proof is complete. □

To proceed, we need to treat the constant term $\log f(\mu)$.

Lemma 4.18. Under $(\mathcal{P}_n, P_{u,n})$, we have
\[
\log f(\mu) = -\frac12\,n\log n - n\int_0^\infty\frac{t\log t}{e^{ct}-1}\,dt + o_p\big(n^{3/4}\big).
\]

Proof. Form a Young diagram $\mu = \big(\mu(1),\mu(2),\cdots,\mu(\lambda_1)\big)$ and denote its dual by $\nu = \big(\nu(1),\nu(2),\cdots,\nu(\mu(1))\big)$, where
\[
\nu(i) = \max\{1\le k\le\lambda_1: \mu(k)\ge i\}, \quad 1\le i\le\mu(1).
\]
We remark that $\nu$ can be viewed as an approximation to the random diagram $\lambda$ since $\mu$ is an approximation of $\lambda'$. Now apply the hook formula (4.70) to the diagram $\mu$ to obtain
\[
\log f(\mu) = -\sum_{i\le\mu(1),\,k\le\nu(i)}\log\big(\nu(i) - k + \mu(k) - i + 1\big). \tag{4.93}
\]
As a first step, we need to replace asymptotically $\mu(\cdot)$ and $\nu(\cdot)$ by $\ell(\cdot)$ in (4.93). Indeed, by the definitions of $\ell(\cdot)$ and $\mu(\cdot)$, it follows that
\[
\ell(k) - 1 \le \nu(k) \le \ell(k-1), \quad 1<k\le\mu(1).
\]
Define
\[
D = \big\{(i,k): i\le\min\{\mu(k),\ell(k)\},\ k\le\min\{\nu(i),\ell(i)\}\big\},
\]
then
\[
\log f(\mu) = -\sum_{(i,k)\in D}\log\big(\nu(i) - k + \mu(k) - i + 1\big) + O_p\big(n^{1/2}\log^2 n\big).
\]
Moreover,
\[
-\sum_{(i,k)\in D}\log\big(\nu(i) - k + \mu(k) - i + 1\big) = -\sum_{(i,k)\in D}\log\big(\ell(i) - k + \ell(k) - i + 1\big) + O\Big(\sum_{(i,k)\in D}\Big[\frac{|\ell(i)-\nu(i)|}{\min\{\nu(i),\ell(i)\} - k + 1} + \frac{|\ell(k)-\mu(k)|}{\min\{\mu(k),\ell(k)\} - i + 1}\Big]\Big)
\]
\[
= -\sum_{(i,k)\in D}\log\big(\ell(i) - k + \ell(k) - i + 1\big) + O_p\big(n^{1/2}\log^2 n\big).
\]
The same argument results in another $O_p\big(n^{1/2}\log^2 n\big)$ error term if we replace further $D$ by $D^* = \{(i,k): i\le\ell(k),\ k\le\ell(i)\}$. Thus
\[
\log f(\mu) = -\sum_{(i,k)\in D^*}\log\big(\ell(i) - k + \ell(k) - i + 1\big) + O_p\big(n^{1/2}\log^2 n\big).
\]

The next step is to switch the sum into an integral. Let



Hn = (x, y) : 0 < x, y ≤ `(1), x ≤ `(y), y ≤ `(x) ,
then
Z Z

log f (µ) = − log `(x) − y + `(y) − x + 1 dxdy
(x,y)∈Hn
1/2
log2 n .

+O n
Furthermore, if letting

H∞ = (x, y) : x, y > 0, x ≤ `(y), y ≤ `(x) ,
then we have
Z Z

log f (µ) = − log `(x) − y + `(y) − x + 1 dxdy
(x,y)∈H∞
1/2
log2 n .

+O n (4.94)
To see this, make a change of variables
`(x) − y `(y) − x
u= , v= .
n1/2 n1/2
Then in terms of u, v, the domain H∞ becomes {(u, v) : u ≥ 0, v ≥ 0}, and
the inverse transform is
√ √
n ec(u+v) − 1 n ec(u+v) − 1
x= log c(u+v) , y = log .
c e − ecv c ec(u+v) − ecu
So the Jacobian determinant is
!
∂x ∂x
∂u ∂v n
det ∂y ∂y = c(u+v) .
∂u ∂v
e −1
The difference between the integrals over H∞ and Hn is the double integral
over H∞ \ Hn :
Z Z

log `(x) − y + `(y) − x + 1 dxdy
(x,y)∈H∞ \Hn
Z Z √
log( n(u + v) + 1)
≤n dudv
0≤u≤n−1/2 ,v≥0 ec(u+v) − 1
1/2 2

=O n log n .
The last step is to explicitly calculate the double integral value over H∞ in
(4.94). Via the substitutions, the integral is equal to
Z ∞
1 t log t
dt + O n1/2 ,

n log n + n ct
2 0 e −1
as claimed. 

To conclude, we shall show that the linearized weighted sum $T_n$ in (4.87) admits an integral representation up to a negligible error term, and so converges in distribution to a normal random variable.

Lemma 4.19. Under $(\mathcal{P}_n, P_{u,n})$, we have
\[
\frac{T_n}{n^{3/4}} \xrightarrow{d} N\big(0,\sigma_T^2\big),
\]
where $\sigma_T^2 = \sigma_d^2$ is given by (4.68).

Proof. Start with an integral representation for $T_n$. Using the second inequality of (4.89),
\[
v(y_k) - v\big(cx/\sqrt n\big) = O\Big(\frac{y_k - y_{k-1}}{y_k}\Big) = O\big(n^{-\delta}\big)
\]
uniformly for $x\in[k-1,k)$ and $k\ge 2l_n$. Also, it is easy to see
\[
m(k) - m(x) = O\big((e^{y_k}-1)^{-1/2}\big)
\]
uniformly for $x\in[k-1,k)$ and $k\ge 2l_n$. Therefore we have
\[
T_n = \int_{2l_n-1}^\infty v\big(cx/\sqrt n\big)\sum_{k\ge x}\big(r_k - E_{q_n}r_k\big)\,dx + \Delta T_n, \tag{4.95}
\]
where $\Delta T_n$ is of order
\[
n^{-\delta}\sum_{k=2l_n}^\infty\big|\lambda'_k - m(k)\big| + \sum_{k=2l_n}^\infty\frac{\log(y_k + y_{k-1}) + n^\delta}{e^{y_k}-1}.
\]
By virtue of Proposition 4.2, the whole order is actually $O_p\big(n^{3/4-\delta}\log^{1/2}n\big)$. Neglecting $\Delta T_n$, we can equate $T_n$ with the integral on the right side of (4.95). Furthermore, we also extend the integration to $[x_n,\infty)$, where
\[
x_n = -\frac{\sqrt n}{c}\log\Big(1-\frac{c}{2\sqrt n}\Big).
\]
Making the substitution
\[
x = \frac{\sqrt n}{c}\log\frac{1}{1-t}
\]
in the last integral and using the definition of the process $Y_n(t)$, we obtain
\[
T_n = \frac{n^{3/4}}{c}\int_0^1\frac{v(-\log(1-t))}{t(1-t)}\,Y_n(t)\,dt + O\big(n^{3/4-\varepsilon}\big).
\]

Now we are in a position to apply Theorem 4.13 to the function
\[
g(t,x) := \frac{v(-\log(1-t))}{t(1-t)}\,x,
\]
which clearly meets the condition (4.61) with $\mu = \alpha = \beta = 1$. Therefore
\[
\frac{T_n}{n^{3/4}} \xrightarrow{d} \frac1c\int_0^1\frac{v(-\log(1-t))}{t(1-t)}\,Y(t)\,dt.
\]
The limit variable on the right-hand side is a centered Gaussian random variable with variance
\[
\sigma_T^2 := \frac1{c^2}\int_0^1\int_0^1 EY(s)Y(t)\,\frac{v(-\log(1-s))}{s(1-s)}\,\frac{v(-\log(1-t))}{t(1-t)}\,ds\,dt.
\]
To conclude the proof, we note
\[
v(-\log(1-t)) = -\int_0^\infty e^{-y}\log\Big|\log\frac{1-t}{t} + \log(e^y-1)\Big|\,dy = -t(1-t)\kappa(t),
\]
where $\kappa(t)$ is given by (4.67). The proof is complete. □

Proof of Theorem 4.15. Putting Lemmas 4.17, 4.18 and 4.19 together, we can conclude the proof. □
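The change of variables above rests on the elementary identity $g(-\log(1-t)) = \log\frac{t}{1-t}$ for $g(y) = \log(e^y-1)$, so that $g(y)-g(x) = \log(e^y-1) + \log\frac{1-t}{t}$ at $x = -\log(1-t)$. A quick numerical sanity check (an illustration only, not part of the book's argument):

```python
# Check of the substitution used at the end of the proof of Lemma 4.19:
# with g(x) = log(e^x - 1) and x = -log(1 - t), g(x) = log(t/(1-t)).
from math import exp, log

def g(x):
    return log(exp(x) - 1)

for t in (0.1, 0.3, 0.7, 0.9):
    x = -log(1 - t)
    # g(-log(1-t)) coincides with log(t/(1-t))
    assert abs(g(x) - log(t / (1 - t))) < 1e-9
    # hence g(y) - g(x) = log(e^y - 1) + log((1-t)/t) for any y > 0
    y = 1.5
    assert abs((g(y) - g(x)) - (log(exp(y) - 1) + log((1 - t) / t))) < 1e-9
```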

4.5 Random multiplicative partitions

In this section we shall introduce a class of multiplicative measures as exten-


sion of uniform measure and describe briefly the corresponding limit shape
and second order fluctuation around the shape. The reader is referred to
Su (2014) for detailed proofs and more information.
Consider a sequence of functions $g_k(z)$, $k\ge1$, analytic in the open disk $D_\varrho = \{z\in\mathbb{C}: |z|<\varrho\}$, $\varrho = 1$ or $\varrho = \infty$, such that $g_k(0) = 1$. Assume further that
(i) the Taylor series
\[
g_k(z) = \sum_{j=0}^\infty s_k(j)z^j
\]
have all coefficients $s_k(j)\ge0$, and
(ii) the infinite product
\[
G(z) = \prod_{k=1}^\infty g_k(z^k)
\]
converges in $D_\varrho$.
Now define the measure $P_{m,n}$ on $\mathcal{P}_n$ by
\[
P_{m,n}(\lambda) = \frac{\prod_{k=1}^\infty s_k(r_k)}{Z_{m,n}}, \quad \lambda\in\mathcal{P}_n,
\]
where
\[
Z_{m,n} = \sum_{\lambda\in\mathcal{P}_n}\prod_{k=1}^\infty s_k(r_k).
\]
Here m in the subscript stands for multiplicative.


We also define a family of probability measures $Q_{m,q}$, $q\in(0,\varrho)$, on $\mathcal{P}$ in the following way:
\[
Q_{m,q}(\lambda) = \frac{\prod_{k=1}^\infty s_k(r_k)}{G(q)}\,q^{|\lambda|}, \quad \lambda\in\mathcal{P}.
\]
It is easy to see
\[
Q_{m,q}\big(\lambda\in\mathcal{P}: r_k(\lambda) = j\big) = \frac{s_k(j)q^{kj}}{g_k(q^k)}, \quad j\ge0,\ k\ge1,
\]
and so different occupation numbers are independent. The measure $Q_{m,q}$ is called multiplicative.
According to Vershik (1996), an analog of Lemma 4.3 is valid for $Q_{m,q}$ and $P_{m,n}$. This will enable us to make full use of the conditioning argument.
Note that the generating function $G(z)$, along with its decomposition $G(z) = \prod_{k=1}^\infty g_k(z^k)$, completely determines such a family. It actually contains many important examples; see Vershik (1996), Vershik and Yakubovich (2006). A particularly interesting example is the $G(z)$ generated by $g_k(z) = 1/(1-z)^{k^\beta}$, $\beta>-1$. In this special case, the convergence radius of $g_k$ and $G$ is $\varrho = 1$. We also write $Q_{\beta,q}$, $P_{\beta,n}$ for probabilities and $E_{\beta,q}$, $E_{\beta,n}$ for expectations respectively. Set
\[
G_\beta(z) = \prod_{k=1}^\infty\frac{1}{(1-z^k)^{k^\beta}}.
\]
Vershik (1996), Vershik and Yakubovich (2006) treat $Q_{\beta,q}$ and $P_{\beta,n}$ as generalized Bose-Einstein models of ideal gas, while in combinatorics and number theory they have long been known as weighted partitions.

Remark 4.1. $P_{0,n}$ corresponds to the uniform measure $P_{u,n}$ on $\mathcal{P}_n$, and $Z_{0,n}$ is the Euler function $p(n)$: the number of partitions of $n$. In the case $\beta = 1$, $G_\beta(z)$ is the generating function for the numbers $p_3(n)$ of 3-dimensional plane partitions of $n$ (see Andrews (1976)):
\[
\sum_{n\ge0}p_3(n)z^n = \prod_{k\ge1}\frac{1}{(1-z^k)^k}.
\]
However, $P_{1,n}$ is completely different from the uniform measure on 3-dimensional plane partitions of $n$.
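The two expansions mentioned in the remark can be checked by multiplying out the products numerically. The helper below is an illustrative sketch (not from the book); it expands $\prod_{k}(1-z^k)^{-k^\beta}$ for integer $\beta\ge0$ using repeated division by $1-z^k$:

```python
# Coefficient check: beta = 0 gives the partition numbers p(n),
# beta = 1 gives the plane-partition numbers p_3(n) = 1, 1, 3, 6, 13, 24, ...
def series(beta, N):
    # coefficients of prod_{k=1}^{N} 1/(1 - z^k)^{k^beta}, truncated at z^N
    coef = [0] * (N + 1)
    coef[0] = 1
    for k in range(1, N + 1):
        for _ in range(k ** beta):
            # multiplying by 1/(1 - z^k) is a running sum with stride k
            for i in range(k, N + 1):
                coef[i] += coef[i - k]
    return coef

assert series(0, 8) == [1, 1, 2, 3, 5, 7, 11, 15, 22]      # p(n)
assert series(1, 6) == [1, 1, 3, 6, 13, 24, 48]            # p_3(n)
```

Parts larger than $N$ do not affect coefficients up to $z^N$, so truncating the product at $k = N$ is exact.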

Vershik (1996), in an attempt to capture various limiting results concerning particular functionals in a unified framework, posed the question of evaluating the limit shape for $\varphi_\lambda(t)$ under $P_{\beta,n}$. In particular, we have

Theorem 4.16. Assume $\beta\ge0$, and let $h_n = \big(\frac{\Gamma(\beta+2)\zeta(\beta+2)}{n}\big)^{1/(\beta+2)}$. Consider the scaled function
\[
\varphi_{\beta,n}(t) = h_n^{\beta+1}\varphi_\lambda\Big(\frac{t}{h_n}\Big), \quad t\ge0.
\]
Then it follows that
\[
\varphi_{\beta,n} \to \Psi_\beta
\]
in the sense of uniform convergence on compact sets, where $\Psi_\beta$ is the function defined by
\[
\Psi_\beta(t) = \int_t^\infty\frac{u^\beta e^{-u}}{1-e^{-u}}\,du.
\]
More precisely, for any $\varepsilon>0$ and $0<a<b<\infty$, there exists an $n_0$ such that for $n>n_0$ we have
\[
P_{\beta,n}\Big(\lambda\in\mathcal{P}_n: \sup_{a\le t\le b}\big|\varphi_{\beta,n}(t) - \Psi_\beta(t)\big| > \varepsilon\Big) < \varepsilon.
\]

Remark 4.2. The value of $h_n$ is in essence determined so that $E_{\beta,q}|\lambda|\sim n$, where $q = e^{-h_n}$. For $\beta = 0$, the scaling constants along both axes are $\sqrt n/c$. Moreover, $\Psi_0(t)$ is equal to $\Psi(t)$ of (4.12). For $\beta>0$, however, two distinct scaling constants must be adopted. In fact, the values on the $s$ axis are more compressed than the indices on the $t$ axis. Also, it is worth noting that
\[
\Psi_\beta(0) = \int_0^\infty\frac{u^\beta e^{-u}}{1-e^{-u}}\,du < \infty
\]
by virtue of $\beta>0$.
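In the $\beta = 0$ case mentioned in the remark, the limit shape integral has the closed form $\Psi_0(t) = \int_t^\infty \frac{e^{-u}}{1-e^{-u}}\,du = -\log\big(1-e^{-t}\big)$. A crude numerical check of this identity (the quadrature rule, truncation point and tolerance below are ad hoc choices, not from the book):

```python
# Numerical check that Psi_0(t) = -log(1 - e^{-t}).
from math import exp, log

def psi0(t, upper=60.0, steps=100000):
    # composite midpoint rule on [t, upper]; the tail beyond 60 is negligible
    h = (upper - t) / steps
    s = 0.0
    for i in range(steps):
        u = t + (i + 0.5) * h
        s += exp(-u) / (1.0 - exp(-u))
    return s * h

for t in (0.5, 1.0, 2.0):
    assert abs(psi0(t) - (-log(1 - exp(-t)))) < 1e-4
```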

Having the limit shape, we will continue to study the second order fluctuation of Young diagrams around it. This will be discussed separately according to two cases: at the edge and in the bulk. First, let us look at the asymptotic distribution of the largest part of a partition under $P_{\beta,n}$. The following result, due to Vershik and Yakubovich (2006), is an extension of Erdös and Lehner's theorem.

Theorem 4.17.
\[
\lim_{n\to\infty}P_{\beta,n}\Big(\lambda\in\mathcal{P}_n: \lambda_1 - \frac{A_n}{h_n} \le \frac{x}{h_n}\Big) = e^{-e^{-x}},
\]
where
\[
A_n = \frac{\beta+1}{\beta+2}\log n + \beta\,\frac{\beta+1}{\beta+2}\log\log n + \beta\log\frac{\beta+1}{\beta+2} - \log\Gamma(\beta+2)\zeta(\beta+2).
\]

Remark 4.3. As known to us, $\lambda \stackrel{d}{=} \lambda'$, and so $\lambda'_1$ and $\lambda_1$ have the same asymptotic distribution under $(\mathcal{P}_n, P_{u,n})$. But such an elegant property is no longer valid under $(\mathcal{P}_n, P_{\beta,n})$. In fact, we have the following asymptotic normality for $\lambda'_1$ instead of a Gumbel distribution.
Let $\sigma_n^2 = h_n^{-(\beta+1)}$ and define for $k\ge1$
\[
\mu_{n,k} = \sum_{j=k}^\infty j^\beta\,\frac{e^{-h_n j}}{1-e^{-h_n j}}, \qquad \sigma_{n,k}^2 = \sum_{j=k}^\infty j^\beta\,\frac{e^{-h_n j}}{(1-e^{-h_n j})^2},
\]
where $h_n$ is as in Theorem 4.16.

Theorem 4.18. (i) Under $P_{\beta,n}$ with $\beta>1$, we have as $n\to\infty$,
\[
\frac{\lambda'_k - \mu_{n,k}}{\sigma_n} \xrightarrow{d} N\big(0,\kappa_\beta^2(0)\big),
\]
where
\[
\kappa_\beta^2(0) = \Gamma(\beta+1)\zeta(\beta+1,0) - \frac{\Gamma(\beta+2)\zeta^2(\beta+2,0)}{\zeta(\beta+2)}
\]
and
\[
\zeta(r+1,0) := \frac{1}{\Gamma(r+1)}\int_0^\infty\frac{u^r e^{-u}}{(1-e^{-u})^2}\,du \quad\text{for } r>1.
\]
(ii) Under $P_{1,n}$, we have as $n\to\infty$,
\[
\frac{\lambda'_k - \mu_{n,k}}{\sigma_n\sqrt{|\log h_n|}} \xrightarrow{d} N(0,1).
\]

Theorem 4.18 corresponds to the edge of partitions. We consider the fluctuations in the deep bulk of partitions below. Let
\[
X_{\beta,n}(t) = \frac{1}{\sigma_n}\Big(\varphi_\lambda\Big(\frac{t}{h_n}\Big) - \mu_{n,\lceil t/h_n\rceil}\Big), \quad t\ge0.
\]

Theorem 4.19. Under $P_{\beta,n}$ with $\beta>-1$, we have as $n\to\infty$:
(i) for each $t>0$,
\[
X_{\beta,n}(t) \xrightarrow{d} X_\beta(t),
\]
where $X_\beta(t)$ is a normal random variable with zero mean and variance
\[
\kappa_\beta^2(t) = \sigma_\beta^2(t) - \frac{1}{\Gamma(\beta+3)\zeta(\beta+2)}\big(\sigma_{\beta+1}^2(t)\big)^2;
\]
(ii) for $0<t_1<t_2<\cdots<t_m<\infty$,
\[
\big(X_{\beta,n}(t_1), X_{\beta,n}(t_2), \cdots, X_{\beta,n}(t_m)\big) \xrightarrow{d} \big(X_\beta(t_1), X_\beta(t_2), \cdots, X_\beta(t_m)\big),
\]
where $\big(X_\beta(t_1), X_\beta(t_2), \cdots, X_\beta(t_m)\big)$ is a Gaussian vector with covariance structure
\[
\mathrm{Cov}\big(X_\beta(s), X_\beta(t)\big) = \sigma_\beta^2(t) - \frac{\sigma_{\beta+1}^2(s)\,\sigma_{\beta+1}^2(t)}{\Gamma(\beta+3)\zeta(\beta+2)}, \quad s<t;
\]
(iii) each separable version of $X_\beta$ is continuous on $(0,\infty)$.

Next we give the limiting distribution of $d_\lambda$ after proper scaling.

Theorem 4.20. Under $(\mathcal{P}_n, P_{\beta,n})$ with $\beta>1$, we have as $n\to\infty$
\[
h_n^{(\beta+3)/2}\Big(\log\frac{d_\lambda}{(n!)^{1/(\beta+2)}} - b_n\Big) \xrightarrow{d} N\big(0,\sigma_{\beta,d}^2\big),
\]
where the normalizing constant $b_n$ and limiting variance $\sigma_{\beta,d}^2$ are given by
\[
b_n = \frac{\beta+1}{\beta+2}\sum_{k=1}^\infty\mu_{n,k}\log n - \sum_{k=1}^\infty\mu_{n,k}\log\mu_{n,k} + \sum_{k=1}^\infty\mu_{n,k} - \frac{\beta+1}{\beta+2}\,n - \frac{\beta+1}{\beta+2}\log\big(\Gamma(\beta+2)\zeta(\beta+2)\big)\Big(\sum_{k=1}^\infty\mu_{n,k} - n\Big)
\]
and
\[
\sigma_{\beta,d}^2 = \int_0^\infty\int_0^\infty\mathrm{Cov}\big(X_\beta(s), X_\beta(t)\big)\log\Psi_\beta(s)\log\Psi_\beta(t)\,ds\,dt.
\]

To conclude this chapter, we want to mention another interesting example of multiplicative measure, given by an exponential generating function. Let $a = (a_k, k\ge1)$ be a parameter function determined by $g(x) = \exp\big(\sum_{k\ge1}a_kx^k\big)$. Define a probability $P_{a,n}$ on $\mathcal{P}_n$ by
\[
P_{a,n}(\lambda) = \frac{1}{Z_{a,n}}\prod_{k=1}^\infty\frac{a_k^{r_k}}{r_k!}, \quad \lambda\in\mathcal{P}_n,
\]
where $Z_{a,n}$ is the partition function.
In terms of the form of the parameter function, the measure $P_{a,n}$ differs substantially from both $P_{u,n}$ and $P_{\beta,n}$. The reader is referred to Erlihson and Granovsky (2008) and the references therein for the limit shape and functional central limit theorem for the fluctuation.



Chapter 5

Random Plancherel Partitions

5.1 Introduction

In this chapter we shall consider another probability measure, namely


Plancherel measure, in Pn and study its asymptotic properties as n → ∞.
As in Chapter 4, our main concerns are again the fluctuations of a typical
Plancherel partition around its limit shape.
To start, let us recall the classic elegant Burnside identity
\[
\sum_{\lambda\in\mathcal{P}_n}d_\lambda^2 = n!,
\]
where $d_\lambda$ is the number of standard Young tableaux with shape $\lambda$; see (2.13). This naturally induces a probability measure
\[
P_{p,n}(\lambda) = \frac{d_\lambda^2}{n!}, \quad \lambda\in\mathcal{P}_n,
\]
where p in the subscript stands for Plancherel. $P_{p,n}$ is often referred to as the Plancherel measure because the Fourier transform
\[
L^2\big(S_n,\mu_{s,n}\big) \xrightarrow{\ \mathrm{Fourier}\ } L^2\big(\widehat S_n, P_{p,n}\big)
\]
is an isometry just like in the classical Plancherel theorem, where $\mu_{s,n}$ is the uniform measure on $S_n$ and $\widehat S_n$ is the set of irreducible representations of $S_n$.
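The Burnside identity can be verified directly for small $n$, computing $d_\lambda$ by the hook length formula (4.70). The enumeration below is an illustrative sketch, not the book's proof:

```python
# Check sum_{lambda in P_n} d_lambda^2 = n! for small n,
# with d_lambda computed from the hook length formula.
from math import factorial

def partitions(n, largest=None):
    # generate the partitions of n as non-increasing tuples
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim(la):
    # hook length formula: d_lambda = n! / (product of hook lengths)
    conj = [sum(1 for part in la if part > j) for j in range(la[0])]
    hooks = 1
    for i, part in enumerate(la):
        for j in range(part):
            hooks *= part - j + conj[j] - i - 1
    return factorial(sum(la)) // hooks

for n in (3, 4, 5, 6):
    assert sum(dim(la) ** 2 for la in partitions(n)) == factorial(n)
```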
Plancherel measure naturally arises in many representation-theoretic,
combinatorial and probabilistic problems. To illustrate, we consider
the length of longest increasing subsequences in Sn . For a given π =
(π1 , π2 , · · · , πn ) ∈ Sn and i1 < i2 < · · · < ik , we say πi1 , πi2 , · · · , πik
is an increasing subsequence if πi1 < πi2 < · · · < πik . Let `n (π) be the
length of longest increasing subsequences of π. For example, let n = 10,


$\pi = (7,2,8,1,3,4,10,6,9,5)$. Then $\ell_n(\pi) = 5$, and the longest increasing subsequences are $1,3,4,6,9$ and $2,3,4,6,9$.
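For a concrete computation, $\ell_n(\pi)$ can be found in $O(n\log n)$ time by patience sorting; the sketch below (an illustration, not from the book) reproduces $\ell_{10}(\pi) = 5$ for the example above:

```python
# Longest increasing subsequence length via patience sorting.
import bisect

def lis_length(seq):
    # piles[i] holds the smallest possible tail value of an
    # increasing subsequence of length i + 1 seen so far
    piles = []
    for x in seq:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

pi = (7, 2, 8, 1, 3, 4, 10, 6, 9, 5)
print(lis_length(pi))  # -> 5
```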
The study of $\ell_n(\pi)$ dates back to Erdös and Szekeres in the 1930s. A celebrated theorem states that every $\pi$ of $S_n$ contains an increasing and/or decreasing subsequence of length at least $\sqrt n$ (see Steele (1995, 1997)).
This can be proved by an elementary pigeon-hole principle. But it also
follows from an algorithm developed by Robinson, Schensted and Knuth
(see Sagan (2000)) to obtain Young tableaux with the help of permutations.
Let $T_n$ be the set of standard Young tableaux with $n$ squares. According to this algorithm, for any $n\ge1$ there is a bijection, the so-called RSK correspondence, between $S_n$ and pairs of tableaux $T, T'\in T_n$ with the same shape:
\[
S_n \ni \pi\ \stackrel{\mathrm{RSK}}{\longleftrightarrow}\ \big(T(\pi), T'(\pi)\big) \in T_n\times T_n.
\]


The RSK correspondence is very intricate and has no obvious algebraic meaning at all, but it is very deep and allows us to understand many things.
In particular, it gives an explicit proof of the Burnside identity (2.13). More
interestingly, `n (π) is exactly the number of squares in the first row of T (π)
or T 0 (π), namely `n (π) = λ1 (T (π)). Consequently,
\[
\mu_{s,n}\big(\pi\in S_n: \ell_n(\pi) = k\big) = \frac{|\{\pi\in S_n: \ell_n(\pi) = k\}|}{n!} = \sum_{\lambda\in\mathcal{P}_n:\,\lambda_1=k}\frac{d_\lambda^2}{n!} = P_{p,n}\big(\lambda\in\mathcal{P}_n: \lambda_1 = k\big).
\]
In words, the Plancherel measure Pp,n on Pn is the push-forward of the
uniform measure µs,n on Sn . Thus the analysis of `n (π) is equivalent to a
statistical problem in the geometry of the Young diagram. See an excellent
survey Deift (2000) for more information.
A remarkable feature is that there also exists a limit shape for random
Plancherel partitions. Define for λ ∈ P
ψλ (0) = λ1 , ψλ (x) = λdxe , x > 0.

RNote that ψλ (x), x ≥ 0 is a nonincreasing step function such that



0
ψ λ (x)dx = |λ|. Also, ψλ0 (x) = ϕλ (x) where λ0 is a dual partition
of λ and ϕλ was defined by (4.10).
The so-called limit shape is a function y = ω(x) defined as follows:
2
x = (sin θ − θ cos θ), y = x + 2 cos θ
π
where 0 ≤ θ ≤ 2π is a parameter, see Figure 5.1 below.

Fig. 5.1 ω curve

Logan and Shepp (1977) used a variational argument to prove

Theorem 5.1. Under $(\mathcal{P}_n, P_{p,n})$, the rescaled function $\psi_\lambda(\sqrt n\,x)/\sqrt n$ converges to the function $\omega(x)$ in the sense of weak convergence in a certain metric $d$. Here the metric $d$ is defined by (1.15) of Logan and Shepp (1977).

We remark that one cannot derive from Theorem 5.1 that
\[
\frac1{\sqrt n}\psi_\lambda\big(\sqrt n\,x\big) \xrightarrow{P} \omega(x)
\]
for every $x\ge0$.
Independently, Vershik and Kerov (1977, 1985) developed a slightly different strategy to establish uniform convergence for $\psi_\lambda(\sqrt n\,x)/\sqrt n$. To state their results, it is more convenient to use the rotated coordinate system
\[
u = x - y, \qquad v = x + y.
\]
Then in the $(u,v)$-plane, the step function $\psi_\lambda(x)$ transforms into a piecewise linear function $\Psi_\lambda(u)$. Note $\Psi'_\lambda(u) = \pm1$, $\Psi_\lambda(u)\ge|u|$, and $\Psi_\lambda(u) = |u|$ for sufficiently large $|u|$. Likewise, $\omega(x)$ transforms into $\Omega(u)$ (see (1.34) and Figure 1.4):
\[
\Omega(u) = \begin{cases}\dfrac2\pi\Big(u\arcsin\dfrac u2 + \sqrt{4-u^2}\Big), & |u|\le2,\\[6pt] |u|, & |u|\ge2.\end{cases} \tag{5.1}
\]
Define
\[
\Psi_{n,\lambda}(u) = \frac1{\sqrt n}\Psi_\lambda\big(\sqrt n\,u\big), \qquad \Delta_{n,\lambda}(u) = \Psi_{n,\lambda}(u) - \Omega(u). \tag{5.2}
\]
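As a consistency check of (5.1) (illustrative only, not from the book), one can verify numerically that the parametric curve for $\omega$, rewritten in the rotated coordinates $u = x-y$, $v = x+y$, traces exactly the graph $v = \Omega(u)$:

```python
# The parametric Logan-Shepp curve x = (2/pi)(sin t - t cos t),
# y = x + 2 cos t, t in [0, pi], should satisfy v = Omega(u) after
# the rotation u = x - y, v = x + y.
from math import sin, cos, asin, sqrt, pi

def omega_rot(u):
    if abs(u) >= 2:
        return abs(u)
    return (2 / pi) * (u * asin(u / 2) + sqrt(4 - u * u))

for k in range(1, 100):
    t = pi * k / 100
    x = (2 / pi) * (sin(t) - t * cos(t))
    y = x + 2 * cos(t)
    u, v = x - y, x + y
    assert abs(v - omega_rot(u)) < 1e-9
```

In particular $u = -2\cos\theta$ sweeps $(-2,2)$ as $\theta$ runs over $(0,\pi)$, which is why the parameter range $[0,\pi]$ suffices.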

Theorem 5.2. Under $(\mathcal{P}_n, P_{p,n})$,
\[
\sup_{-\infty<u<\infty}\Big|\frac1{\sqrt n}\Psi_\lambda\big(\sqrt n\,u\big) - \Omega(u)\Big| \xrightarrow{P} 0, \quad n\to\infty. \tag{5.3}
\]
As an immediate corollary, we can improve Logan-Shepp's result to uniform convergence.

Corollary 5.1. Under $(\mathcal{P}_n, P_{p,n})$,
\[
\sup_{0\le x<\infty}\Big|\frac1{\sqrt n}\psi_\lambda\big(\sqrt n\,x\big) - \omega(x)\Big| \xrightarrow{P} 0, \quad n\to\infty. \tag{5.4}
\]

We remark that $\omega(0) = 2$ and $\omega(2) = 0$. Compared to $\Psi(x)$ of (4.12), $\omega(x)$ looks more balanced. This can be seen from the definition of the Plancherel measure. Roughly speaking, the more balanced a Young diagram is, the more likely it appears. For instance, fix $n = 10$ and consider the two partitions $\lambda^{(1)} = (1^{10})$ and $\lambda^{(2)} = (1,2,3,4)$. Then
\[
P_{p,10}\big(\lambda^{(1)}\big) = \frac{1}{10!}, \qquad P_{p,10}\big(\lambda^{(2)}\big) = \frac{256}{1575} \approx \frac16.
\]
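Both probabilities can be reproduced with the hook length formula; the short computation below is an illustration (the tuple `(4, 3, 2, 1)` writes $\lambda^{(2)}$ in non-increasing order):

```python
# Reproduces P(1^10) = 1/10! and P((4,3,2,1)) = 256/1575 under Plancherel.
from fractions import Fraction
from math import factorial

def d_lambda(la):
    # hook length formula, la a non-increasing tuple of parts
    conj = [sum(1 for part in la if part > j) for j in range(la[0])]
    hooks = 1
    for i, part in enumerate(la):
        for j in range(part):
            hooks *= part - j + conj[j] - i - 1
    return factorial(sum(la)) // hooks

p1 = Fraction(d_lambda((1,) * 10) ** 2, factorial(10))
p2 = Fraction(d_lambda((4, 3, 2, 1)) ** 2, factorial(10))
print(p1, p2)  # -> 1/3628800 256/1575
```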
Corollary 5.2. Under $(\mathcal{P}_n, P_{p,n})$,
\[
\frac{\lambda_1}{\sqrt n} \xrightarrow{P} 2, \qquad \frac{\lambda'_1}{\sqrt n} \xrightarrow{P} 2, \quad n\to\infty.
\]
Consequently,
\[
\frac{\ell_n(\pi)}{\sqrt n} \xrightarrow{P} 2, \quad n\to\infty. \tag{5.5}
\]
(5.5) provides a satisfactory solution to Ulam's problem.
The rest of this section shall be devoted to a rigorous proof of Theorem 5.2, due to Vershik and Kerov (1977). It will consist of a series of lemmas. A key technical ingredient is to prove that a certain quadratic integral attains its minimum at $\Omega$. Start with a rough upper bound.

Lemma 5.1.
\[
P_{p,n}\big(\max\{\lambda_1,\lambda'_1\} \ge 2e\sqrt n\big) \le e^{-2e\sqrt n}. \tag{5.6}
\]

Proof. We need an equivalent representation of $\ell_n(\pi)$. Let $X_1, X_2, \cdots, X_n$ be a sequence of i.i.d. uniform random variables on $[0,1]$, and let $\ell_n(X)$ be the length of the longest increasing subsequences of $X_1, X_2, \cdots, X_n$. Trivially,
\[
\ell_n(\pi) \stackrel d= \ell_n(X),
\]
from which we in turn derive
\[
\lambda_1 \stackrel d= \ell_n(\pi) \stackrel d= \ell_n(X).
\]
Then it follows that
\[
P_{p,n}(\lambda_1\ge k) = P\big(\ell_n(X)\ge k\big) \le \binom nk\frac1{k!}.
\]
In particular,
\[
P_{p,n}\big(\lambda_1\ge 2e\sqrt n\big) \le e^{-2e\sqrt n}.
\]
We conclude the proof. □
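The binomial bound in the proof can be tested numerically: at $k = \lceil 2e\sqrt n\rceil$ the quantity $\binom nk/k!$ is already far below $e^{-k}$. The sample sizes below are ad hoc choices for illustration, not part of the proof:

```python
# Numerical illustration of P(lambda_1 >= k) <= C(n,k)/k! at k ~ 2e sqrt(n).
from fractions import Fraction
from math import comb, factorial, exp, sqrt, ceil, e

for n in (100, 400, 900):
    k = ceil(2 * e * sqrt(n))
    # exact rational value of the bound; factorial(k) overflows float
    bound = Fraction(comb(n, k), factorial(k))
    assert bound < Fraction(exp(-k))
```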
Next let us take a look at $d_\lambda^2/n!$.

Lemma 5.2. As $n\to\infty$,
\[
P_{p,n}\Big(-\log\frac{d_\lambda^2}{n!} > 2c\sqrt n\Big) \to 0, \tag{5.7}
\]
where $c = \pi/\sqrt6$ as in Chapter 4.

Proof. Denote by $A_n$ the event in (5.7). Then by (4.3)
\[
P_{p,n}(A_n) = \sum_{\lambda\in A_n}\frac{d_\lambda^2}{n!} \le \sum_{\lambda\in A_n}e^{-2c\sqrt n} \le p(n)e^{-2c\sqrt n} \to 0,
\]
as desired. □
Observe that it follows from the hook formula (4.70) that
\[
\frac{d_\lambda^2}{n!} = \frac{n!}{H_\lambda^2}.
\]
So we have
\[
-\log\frac{d_\lambda^2}{n!} = -\log\frac{n!}{H_\lambda^2} = -\log n! + 2\log H_\lambda = -\log n! + 2\sum_{(i,j)\in\lambda}\log\big(\lambda_i - i + \lambda'_j - j + 1\big)
\]
\[
= -\log n! + n\log n + 2\sum_{(i,j)\in\lambda}\log\frac1{\sqrt n}\big(\lambda_i - i + \lambda'_j - j + 1\big).
\]

For simplicity of notation, write $\psi_{n,\lambda}(x)$ for $\psi_\lambda(\sqrt n\,x)/\sqrt n$. Then
\[
\log\frac1{\sqrt n}\big(\lambda_i - i + \lambda'_j - j + 1\big) = \log\frac1{\sqrt n}\iint_\square\big(\psi_\lambda(x) - x + \psi_\lambda^{-1}(y) - y\big)\,dx\,dy \ge \iint_\square\log\frac1{\sqrt n}\big(\psi_\lambda(x) - x + \psi_\lambda^{-1}(y) - y\big)\,dx\,dy,
\]
where $\square$ stands for the $(i,j)$th unit square, $\psi_\lambda^{-1}$ denotes the inverse of $\psi_\lambda$, and the last inequality follows from the concavity of the logarithmic function.
Hence we obtain
\[
-\log\frac{d_\lambda^2}{n!} \ge -\log n! + n\log n + 2n\iint_{0\le y<\psi_{n,\lambda}(x)}\log\big(\psi_{n,\lambda}(x) - x + \psi_{n,\lambda}^{-1}(y) - y\big)\,dx\,dy =: nI(\psi_{n,\lambda}) + \epsilon_n,
\]
where $\epsilon_n = O(\log n)$ and
\[
I(\psi_{n,\lambda}) = 1 + 2\iint_{0\le y<\psi_{n,\lambda}(x)}\log\big(\psi_{n,\lambda}(x) - x + \psi_{n,\lambda}^{-1}(y) - y\big)\,dx\,dy.
\]

As a direct consequence of Lemma 5.2, it follows that for any $\varepsilon>0$,
\[
P_{p,n}\big(I(\psi_{n,\lambda}) > \varepsilon\big) \to 0. \tag{5.8}
\]
Making a change of variables, we have
\[
I(\psi_{n,\lambda}) = 1 + \frac12\iint_{v<u}\log(u-v)\big(1-\Psi'_{n,\lambda}(u)\big)\big(1+\Psi'_{n,\lambda}(v)\big)\,du\,dv =: J(\Psi_{n,\lambda}).
\]
In terms of $J(\Psi_{n,\lambda})$, (5.8) can be written as
\[
P_{p,n}\big(J(\Psi_{n,\lambda}) > \varepsilon\big) \to 0. \tag{5.9}
\]
Similarly, define
\[
J(\Omega) = 1 + \frac12\iint_{v<u}\log(u-v)\big(1-\Omega'(u)\big)\big(1+\Omega'(v)\big)\,du\,dv.
\]

A remarkable contribution due to Vershik and Kerov (1977, 1985) is the


following

Lemma 5.3. With the notations above, we have
(i)
\[
J(\Omega) = 0; \tag{5.10}
\]
(ii)
\[
J(\Psi_{n,\lambda}) = -\frac14\int_{-\infty}^\infty\int_{-\infty}^\infty\log|u-v|\,\Delta'_{n,\lambda}(u)\Delta'_{n,\lambda}(v)\,du\,dv + \int_{|u|>2}\big(\Psi_{n,\lambda}(u) - |u|\big)\,\mathrm{arccosh}\,\frac{|u|}2\,du. \tag{5.11}
\]
Consequently,
\[
J(\Psi_{n,\lambda}) \ge -\frac14\int_{-\infty}^\infty\int_{-\infty}^\infty\log|u-v|\,\Delta'_{n,\lambda}(u)\Delta'_{n,\lambda}(v)\,du\,dv. \tag{5.12}
\]

Proof. Start with the proof of (5.10). Let
\[
\varrho_0(x) = -\log|x|, \qquad \varrho_1(x) = \int_0^x\varrho_0(y)\,dy, \qquad \varrho_2(x) = \int_0^x\varrho_1(y)\,dy.
\]
A simple calculus shows
\[
\varrho_1(x) = x - x\log|x|, \qquad \varrho_2(x) = \frac{3x^2}{4} - \frac{x^2}{2}\log|x|,
\]
and
\[
\varrho_1(-x) = -\varrho_1(x), \qquad \varrho_2(-x) = \varrho_2(x).
\]
Note $\Omega'(u) = 1$ for $u\ge2$ and $\Omega'(u) = -1$ for $u\le-2$. Then
\[
J(\Omega) = 1 - \frac14\int_{-2}^2\int_{-2}^2\varrho_0(u-v)\,du\,dv + \frac12\int_{-2}^2\Big(\int_{-2}^u\varrho_0(u-v)\,dv\Big)\Omega'(u)\,du - \frac12\int_{-2}^2\Big(\int_v^2\varrho_0(u-v)\,du\Big)\Omega'(v)\,dv + \frac14\int_{-2}^2\int_{-2}^2\varrho_0(u-v)\,\Omega'(u)\Omega'(v)\,du\,dv. \tag{5.13}
\]

First, it is easy to see
\[
\int_{-2}^2\int_{-2}^2\varrho_0(u-v)\,du\,dv = 2\varrho_2(4).
\]
Also, it follows that
\[
\int_{-2}^u\varrho_0(u-v)\,dv = \varrho_1(2+u), \qquad \int_v^2\varrho_0(u-v)\,du = \varrho_1(2-v).
\]

To calculate the last three double integrals on the right-hand side of (5.13), set
\[
H_2(u) = \int_{-2}^2\varrho_1(u-v)\,\Omega''(v)\,dv, \quad -\infty<u<\infty.
\]
Then
\[
H_2''(u) = \frac2\pi\int_{-2}^2\frac{dv}{(u-v)\sqrt{4-v^2}} = 0, \quad -2\le u\le2,
\]
and so for each $-2\le u\le2$,
\[
H_2'(u) = H_2'(0) = \frac2\pi\int_{-2}^2\frac{\log|v|}{\sqrt{4-v^2}}\,dv = 0.
\]
This in turn implies
\[
H_2(u) = H_2(0) = -\int_{-2}^2\varrho_1(v)\,\Omega''(v)\,dv = 0 \tag{5.14}
\]
since $\varrho_1(v)$ is odd.
It follows by integration by parts that
\[
\int_{-2}^2\Big(\int_v^2\varrho_0(u-v)\,du\Big)\Omega'(v)\,dv = \int_{-2}^2\varrho_1(2-v)\,\Omega'(v)\,dv = -\varrho_2(4) + \int_{-2}^2\varrho_2(2-v)\,\Omega''(v)\,dv = -\varrho_2(4) + \int_{-2}^2\varrho_2(v)\,\Omega''(v)\,dv = 2 - \varrho_2(4),
\]
where we used (5.14) and the fact
\[
\int_{-2}^2\varrho_2(v)\,\Omega''(v)\,dv = 2.
\]
Similarly,
\[
\int_{-2}^2\Big(\int_{-2}^u\varrho_0(u-v)\,dv\Big)\Omega'(u)\,du = \int_{-2}^2\varrho_1(2+u)\,\Omega'(u)\,du = \varrho_2(4) - 2.
\]
Again, by (5.14),
\[
\int_{-2}^2\varrho_0(u-v)\,\Omega'(v)\,dv = \varrho_1(2-u) - \varrho_1(2+u) + \int_{-2}^2\varrho_1(u-v)\,\Omega''(v)\,dv = \varrho_1(2-u) - \varrho_1(2+u).
\]

Hence we have
\[
\int_{-2}^2\int_{-2}^2\varrho_0(u-v)\,\Omega'(u)\Omega'(v)\,du\,dv = \int_{-2}^2\big[\varrho_1(2-u) - \varrho_1(2+u)\big]\Omega'(u)\,du = 4 - 2\varrho_2(4).
\]
In combination, we have proven (5.10).
Turn to the proof of (5.11). First, observe that there is an $a = a_n$ (which may depend on $\lambda$) such that $[-a,a]$ contains the support of $\Delta_{n,\lambda}(u)$. Hence we have
\[
J(\Psi_{n,\lambda}) = 1 - \frac14\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,du\,dv + \frac12\int_{-a}^a\Big(\int_{-a}^u\varrho_0(u-v)\,dv\Big)\Psi'_{n,\lambda}(u)\,du - \frac12\int_{-a}^a\Big(\int_v^a\varrho_0(u-v)\,du\Big)\Psi'_{n,\lambda}(v)\,dv + \frac14\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,\Psi'_{n,\lambda}(u)\Psi'_{n,\lambda}(v)\,du\,dv.
\]
A simple calculus shows
\[
\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,du\,dv = 2\varrho_2(2a),
\]
\[
\int_{-a}^a\Big(\int_{-a}^u\varrho_0(u-v)\,dv\Big)\Psi'_{n,\lambda}(u)\,du = \int_{-a}^a\varrho_1(a+u)\,\Psi'_{n,\lambda}(u)\,du = \varrho_2(2a),
\]
\[
\int_{-a}^a\Big(\int_v^a\varrho_0(u-v)\,du\Big)\Psi'_{n,\lambda}(v)\,dv = \int_{-a}^a\varrho_1(a-v)\,\Psi'_{n,\lambda}(v)\,dv = -\varrho_2(2a).
\]
On the other hand, it follows that
\[
-\frac14\int_{-\infty}^\infty\int_{-\infty}^\infty\log|u-v|\,\Delta'_{n,\lambda}(u)\Delta'_{n,\lambda}(v)\,du\,dv = \frac14\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,\Omega'(u)\Omega'(v)\,du\,dv - \frac12\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,\Omega'(u)\Psi'_{n,\lambda}(v)\,du\,dv + \frac14\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,\Psi'_{n,\lambda}(u)\Psi'_{n,\lambda}(v)\,du\,dv.
\]

Note $\Omega''(u) = 0$ for $|u|>2$, and so
\[
\int_{-a}^a\varrho_0(u-v)\,\Omega'(u)\,du = \varrho_1(a-v) - \varrho_1(a+v) + H_2(v). \tag{5.15}
\]
Since $H_2(v) = 0$ for $|v|\le2$, we have
\[
\int_{-a}^a\int_{-a}^a\varrho_0(u-v)\,\Omega'(u)\Omega'(v)\,du\,dv = \int_{-a}^a\varrho_1(a-v)\,\Omega'(v)\,dv - \int_{-a}^a\varrho_1(a+v)\,\Omega'(v)\,dv - \int_{-a}^{-2}H_2(u)\,du + \int_2^a H_2(u)\,du.
\]

Also, it is easy to see
\[
\int_{-a}^a\varrho_1(a-v)\,\Omega'(v)\,dv = -\varrho_2(2a) + \int_{-2}^2\varrho_2(a-v)\,\Omega''(v)\,dv = 2 - \varrho_2(2a) + \int_{-2}^2\big[\varrho_2(a-v) - \varrho_2(2-v)\big]\Omega''(v)\,dv
\]
\[
= 2 - \varrho_2(2a) + \int_2^a\int_{-2}^2\varrho_1(u-v)\,\Omega''(v)\,dv\,du = 2 - \varrho_2(2a) + \int_2^a H_2(u)\,du,
\]
and similarly
\[
\int_{-a}^a\varrho_1(a+v)\,\Omega'(v)\,dv = \varrho_2(2a) - 2 + \int_{-a}^{-2}H_2(u)\,du.
\]
By (5.15),
\[
\int_{-a}^a\Big(\int_{-a}^a\varrho_0(u-v)\,\Omega'(u)\,du\Big)\Psi'_{n,\lambda}(v)\,dv = -2\varrho_2(2a) + \int_2^a H_2(v)\,\Psi'_{n,\lambda}(v)\,dv + \int_{-a}^{-2}H_2(v)\,\Psi'_{n,\lambda}(v)\,dv.
\]

In combination, we get
\[
J(\Psi_{n,\lambda}) = -\frac14\int_{-\infty}^\infty\int_{-\infty}^\infty\log|u-v|\,\Delta'_{n,\lambda}(u)\Delta'_{n,\lambda}(v)\,du\,dv + \frac12\int_2^a H_2(u)\big(\Psi_{n,\lambda}(u) - u\big)'\,du + \frac12\int_{-a}^{-2}H_2(u)\big(\Psi_{n,\lambda}(u) + u\big)'\,du.
\]

To proceed, note that for $u>2$,
\[
H_2'(u) = \int_{-2}^2\varrho_0(u-v)\,\Omega''(v)\,dv = -\frac2\pi\int_{-2}^2\frac{\log(u-v)}{\sqrt{4-v^2}}\,dv = -2\,\mathrm{arccosh}\,\frac u2.
\]
Thus by integration by parts, using the facts $\Psi_{n,\lambda}(a) = a$ and $H_2(2) = 0$,
\[
\frac12\int_2^a H_2(u)\big(\Psi_{n,\lambda}(u) - u\big)'\,du = -\frac12\int_2^a H_2'(u)\big(\Psi_{n,\lambda}(u) - u\big)\,du = \int_2^a\big(\Psi_{n,\lambda}(u) - u\big)\,\mathrm{arccosh}\,\frac u2\,du.
\]
Similarly, it follows that
\[
\frac12\int_{-a}^{-2}H_2(u)\big(\Psi_{n,\lambda}(u) + u\big)'\,du = -\frac12\int_{-a}^{-2}H_2'(u)\big(\Psi_{n,\lambda}(u) + u\big)\,du = \int_{-a}^{-2}\big(\Psi_{n,\lambda}(u) + u\big)\,\mathrm{arccosh}\,\frac{|u|}2\,du.
\]
In combination, we now conclude the proof of (5.11).
Finally, (5.12) holds true since $\Psi_{n,\lambda}(u)\ge|u|$ for all $u\in\mathbb{R}$. □
The following lemma is interesting and useful since it introduces the Sobolev norm into the study of random partitions. Define
\[
\|f\|_s^2 = \int_{-\infty}^\infty\int_{-\infty}^\infty\Big(\frac{f(u)-f(v)}{u-v}\Big)^2\,du\,dv,
\]
where s in the subscript stands for Sobolev.

Lemma 5.4.
\[
-\int_{-\infty}^\infty\int_{-\infty}^\infty\log|u-v|\,f'(u)f'(v)\,du\,dv = \frac12\|f\|_s^2. \tag{5.16}
\]
Proof. Denote by $H(f)$ the Hilbert transform, namely
\[
H(f)(v) = \int_{-\infty}^\infty\frac{f(u)}{v-u}\,du.
\]
Then it is easy to see
\[
\widehat{H(f)}(\omega) = \int_{-\infty}^\infty e^{i2\pi\omega v}H(f)(v)\,dv = i\,\mathrm{sgn}\,\omega\,\widehat f(\omega),
\]
where $\widehat f$ is the Fourier transform of $f$. Then by the integration by parts formula and the Parseval-Plancherel identity,
\[
\text{LHS of (5.16)} = \int_{-\infty}^\infty H(f)(v)\,f'(v)\,dv = \int_{-\infty}^\infty\widehat{H(f)}(\omega)\,\overline{\widehat{f'}(\omega)}\,d\omega = \int_{-\infty}^\infty i\,\mathrm{sgn}\,\omega\,\widehat f(\omega)\,\overline{i\omega\widehat f(\omega)}\,d\omega = \int_{-\infty}^\infty|\omega|\,\big|\widehat f(\omega)\big|^2\,d\omega.
\]
To finish the proof, we need a key observation due to Vershik and Kerov (1985):
\[
\int_{-\infty}^\infty|\omega|\,\big|\widehat f(\omega)\big|^2\,d\omega = \frac12\|f\|_s^2.
\]
The proof is complete. □
Combining (5.12) and (5.16) yields
\[
J(\Psi_{n,\lambda}) \ge \frac18\|\Delta_{n,\lambda}\|_s^2. \tag{5.17}
\]
Now we are ready to give the proof.

Proof of Theorem 5.2. In view of Lemma 5.1, we can and do consider only the case in which the support of $\Delta_{n,\lambda}$ is contained in a finite interval, say $[-a,a]$. We divide the double integral into two parts:
\[
\|\Delta_{n,\lambda}\|_s^2 = \int_{-a}^a\int_{-a}^a\Big(\frac{\Delta_{n,\lambda}(u)-\Delta_{n,\lambda}(v)}{u-v}\Big)^2\,du\,dv + 4a\int_{-a}^a\frac{\Delta_{n,\lambda}^2(u)}{a^2-u^2}\,du,
\]
which together with (5.17) implies
\[
\int_{-a}^a\Delta_{n,\lambda}^2(u)\,du \le a^2\int_{-a}^a\frac{\Delta_{n,\lambda}^2(u)}{a^2-u^2}\,du \le \frac a4\|\Delta_{n,\lambda}\|_s^2 \le 2a\,J(\Psi_{n,\lambda}).
\]
Also, since $|\Delta'_{n,\lambda}(u)|\le2$, then
\[
\sup_{-a\le u\le a}\big|\Delta_{n,\lambda}(u)\big| \le 6^{1/3}\Big(\int_{-a}^a\Delta_{n,\lambda}^2(u)\,du\Big)^{1/3} \le (12a)^{1/3}J(\Psi_{n,\lambda})^{1/3}.
\]

By virtue of (5.9), it follows that
\[
\sup_{-a\le u\le a}\big|\Delta_{n,\lambda}(u)\big| \xrightarrow{P} 0.
\]
The proof is complete. □


We have so far proven that the limit shape exists and found its explicit form. To proceed, it is natural to look at the fluctuations of a typical Plancherel partition around the limit curve. This question was first raised by Logan and Shepp in 1977. The following words are excerpted from page 211 of Logan and Shepp (1977):

"It is of course natural to expect that for appropriate normalizing constants $c_n\to\infty$ (perhaps $c_n = n^{1/4}$ would do) the stochastic processes
\[
\bar\lambda_n(t) = c_n\big(\lambda_n(t) - f_0(t)\big), \quad t\ge0,
\]
would tend weakly to a nonzero limiting process $W(t)$, $t\ge0$, as $c_n\to\infty$. It would be of interest to know what the process $W$ is. It is clear only that $W$ integrates pathwise to zero and that $W(t)\ge0$ for $t\ge2$. Perhaps $W(t) = 0$ for $t\ge2$ and is the Wiener process in $0\le t\le2$ conditioned to integrate to zero over $[0,2]$ and to vanish at 0 and 2, but this is just a guess."
This turns out to be an interesting and challenging problem. To see
the fluctuation at a fixed point, we need to consider two cases separately:
at the edge and in the bulk. At x = 0, ψλ (0) is equal to λ1 , the largest
part of a partition. Around 2000, several important articles, say Baik,
Deift and Johansson (1999), Johansson (2001), Okounkov (2000), were de-
voted to studying the asymptotic distribution of the λ1 after appropriately
normalized. It was proved that

λ1 − 2 n d
−→ F2 , n → ∞
n1/6
where F2 is the Tracy-Widom law, which was first discovered by Tracy
and Widom in the study of random matrices, see Tracy and Widom (1994,
2002). The analogs were proven to hold for each λk , k ≥ 2. By symmetry,
one can also discuss the limiting distribution at x = 2. The graph of F2 is
shown in Figure 5.2 below. The picture looks completely different in the
bulk. It will be proved in Section 5.3 that for each 0 < x < 2,
$$\frac{\psi_\lambda(\sqrt{n}\,x) - \sqrt{n}\,\omega(x)}{\frac{1}{2\pi}\sqrt{\log n}} \xrightarrow{d} \xi(x), \qquad n\to\infty,$$

where ξ(x) is a centered normal random variable. Note that the normalizing constant $\sqrt{\log n}$ is much smaller than $n^{1/6}$. In addition, we will also see that ξ(x), 0 < x < 2, constitutes a white noise, namely $\mathrm{Cov}\big(\xi(x_1), \xi(x_2)\big) = 0$

Fig. 5.2 Tracy-Widom law

for two distinct points x1 and x2 . Thus, one cannot expect a kind of
weak convergence of the stochastic process. However, we will in Section
5.2 establish a functional central limit theorem, namely Kerov’s integrated
central limit theorem for $\psi_\lambda(\sqrt{n}\,x) - \sqrt{n}\,\omega(x)$.
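The edge behavior quoted above is easy to probe numerically (this example is my own addition, not part of the original text): by the RSK correspondence, λ₁ equals the length of the longest increasing subsequence of a uniform random permutation, which patience sorting computes in O(n log n). The assertion below only checks the coarse 2√n scaling, not the F₂ law itself.

```python
import bisect
import random

def lis_length(perm):
    """Longest increasing subsequence length = lambda_1 of the RSK shape
    (patience sorting, O(n log n))."""
    piles = []
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

random.seed(0)
n = 4000
l1 = lis_length(random.sample(range(n), n))
# lambda_1 concentrates at 2*sqrt(n), with fluctuations of order n^(1/6)
assert abs(l1 - 2*n**0.5) < 10*n**(1/6)
```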

5.2 Global fluctuations

In this section we shall establish an integrated central limit theorem, which is used to describe the global fluctuation of $\Psi_{n,\lambda}(\sqrt{n}\,u)$ around the limit shape Ω(u). Let $u_k(u)$, k ≥ 0 be a sequence of modified Chebyshev polynomials, i.e.,

$$u_k(u) = \sum_{j=0}^{[k/2]}(-1)^j\binom{k-j}{j}u^{k-2j}. \tag{5.18}$$

Note

$$u_k(2\cos\theta) = \frac{\sin(k+1)\theta}{\sin\theta}$$

and

$$\int_{-2}^{2}u_k(u)\,u_l(u)\,\rho_{sc}(u)\,du = \delta_{k,l}.$$
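As a quick numerical sanity check (my own illustration, assuming only the formulas above), the expansion (5.18), the trigonometric identity and the orthonormality with respect to the semicircle density $\rho_{sc}(x)=\sqrt{4-x^2}/(2\pi)$ can be verified with scipy:

```python
import math
from scipy.integrate import quad

def u_poly(k, x):
    """Modified Chebyshev polynomial u_k(x) = U_k(x/2), via (5.18)."""
    return sum((-1)**j * math.comb(k - j, j) * x**(k - 2*j)
               for j in range(k // 2 + 1))

# u_k(2 cos t) = sin((k+1) t) / sin(t)
t = 0.7
for k in range(8):
    assert abs(u_poly(k, 2*math.cos(t)) - math.sin((k+1)*t)/math.sin(t)) < 1e-10

# orthonormality for the semicircle density on [-2, 2]
def rho_sc(x):
    return math.sqrt(4 - x*x) / (2*math.pi)

for k in range(4):
    for l in range(4):
        val, _ = quad(lambda x: u_poly(k, x)*u_poly(l, x)*rho_sc(x), -2, 2)
        assert abs(val - (1.0 if k == l else 0.0)) < 1e-6
```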

Theorem 5.3. Define

$$X_{n,k}(\lambda) = \int_{-\infty}^{\infty}u_k(u)\big(\Psi_\lambda(\sqrt{n}\,u) - \sqrt{n}\,\Omega(u)\big)\,du, \qquad \lambda\in\mathcal{P}_n.$$

Then under $(\mathcal{P}_n, P_{p,n})$ as n → ∞

$$\big(X_{n,k},\ k\ge 1\big) \xrightarrow{d} \Big(\frac{2}{\sqrt{k+1}}\,\xi_k,\ k\ge 1\Big).$$

Here $\xi_k$, k ≥ 1 is a sequence of independent standard normal random variables, and the convergence holds in the sense of finite dimensional distributions.

This theorem is referred to as the Kerov CLT, since it was Kerov who first presented it and outlined the main ideas of the proof in Kerov (1993). A complete and rigorous proof was only given in 2002, by Ivanov and Olshanski (2002). The proof essentially uses the moment method and involves a lot of combinatorial and algebraic techniques, though the theorem is stated in standard probability terminology. We need to introduce some basic notation and lemmas.
Begin with Frobenius coordinates. Let λ = (λ₁, λ₂, ···, λ_l) be a partition from P. Define

$$\bar a_i = \lambda_i - i, \qquad \bar b_i = \lambda'_i - i, \qquad i = 1, 2, \cdots, \ell, \tag{5.19}$$

and

$$a_i = \lambda_i - i + \frac12, \qquad b_i = \lambda'_i - i + \frac12, \qquad i = 1, 2, \cdots, \ell, \tag{5.20}$$

where ℓ := ℓ(λ) is the length of the main diagonal in the Young diagram of λ. The natural numbers $\{\bar a_i, \bar b_i,\ i=1,\cdots,\ell\}$ are called the usual Frobenius coordinates, while the half integer numbers $\{a_i, b_i,\ i=1,\cdots,\ell\}$ are called the modified Frobenius coordinates. We sometimes represent λ = (a₁, a₂, ···, a_ℓ | b₁, b₂, ···, b_ℓ).
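For concreteness (a small sketch of my own, assuming only (5.19)–(5.20)), the coordinates are easy to compute from the parts of a partition; a standard consequence worth checking is that the modified coordinates satisfy $\sum_i (a_i + b_i) = |\lambda|$:

```python
from fractions import Fraction

def frobenius(parts):
    """Usual and modified Frobenius coordinates (5.19)-(5.20) of a partition."""
    conj = [sum(1 for p in parts if p > j) for j in range(parts[0])]  # lambda'
    ell = sum(1 for i, p in enumerate(parts) if p > i)                # diagonal length
    a_bar = [parts[i] - (i + 1) for i in range(ell)]
    b_bar = [conj[i] - (i + 1) for i in range(ell)]
    a = [Fraction(2*x + 1, 2) for x in a_bar]
    b = [Fraction(2*x + 1, 2) for x in b_bar]
    return a_bar, b_bar, a, b

lam = [4, 3, 1, 1]                       # a partition of 9
a_bar, b_bar, a, b = frobenius(lam)
assert a_bar == [3, 1] and b_bar == [3, 0]
assert sum(a) + sum(b) == sum(lam)       # sum_i (a_i + b_i) = |lambda|
```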

Lemma 5.5. For any λ ∈ P

$$\Phi(z;\lambda) := \prod_{i=1}^{\infty}\frac{z+i-\frac12}{z-\lambda_i+i-\frac12} = \prod_{i=1}^{\ell}\frac{z+b_i}{z-a_i}. \tag{5.21}$$

Proof. First, observe that the infinite product in (5.21) is actually finite, because λ_i = 0 when i is large enough. Second, the second product is an irreducible rational function, since the numbers a₁, a₂, ···, a_ℓ, −b₁, −b₂, ···, −b_ℓ are pairwise distinct. The identity (5.21) follows from a classical Frobenius lemma; see Proposition 1.4 of Ivanov and Olshanski (2002). □
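The identity (5.21) can also be confirmed numerically at a test point (a sketch of my own; the coordinates below are those of λ = (4, 3, 1, 1) computed from (5.20)):

```python
lam = [4, 3, 1, 1]
a = [3.5, 1.5]   # modified Frobenius coordinates a_i of lam
b = [3.5, 0.5]   # modified Frobenius coordinates b_i of lam

z = 10.0
lhs = 1.0
for i in range(1, 60):                    # factors with lambda_i = 0 equal 1
    li = lam[i - 1] if i <= len(lam) else 0
    lhs *= (z + i - 0.5) / (z - li + i - 0.5)
rhs = 1.0
for ai, bi in zip(a, b):
    rhs *= (z + bi) / (z - ai)
assert abs(lhs - rhs) < 1e-12             # both sides of (5.21) agree
```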

As a direct consequence, it follows that

$$\log\Phi(z;\lambda) = \sum_{k=1}^{\infty}\frac{\bar p_k(\lambda)}{k}\,z^{-k}, \tag{5.22}$$

where

$$\bar p_k(\lambda) = \sum_{i=1}^{\ell}\big(a_i^k - (-b_i)^k\big). \tag{5.23}$$
Motivated by (5.23), we introduce the algebra A over R generated by
p̄1 , p̄2 , · · · . By convention, 1 ∈ A.
Lemma 5.6. The generators p̄k ∈ A are algebraically independent, so that
A is isomorphic to R[p̄1 , p̄2 , · · · ].
Proof. See Proposition 1.5 of Ivanov and Olshanski (2002). 
Recall that the algebra of symmetric functions, denoted as F, is the graded
algebra defined as the projective limit of Λn , where Λn is the algebra of
symmetric polynomials in n variables defined in Section 2.2.
Sending $F\ni p_k\mapsto \bar p_k\in A$, we get an algebra isomorphism $F\to A$. We call it the canonical isomorphism, and call the grading in A inherited from that of F the canonical grading of A.
For each λ ∈ P, we define the functions $\tilde p_2, \tilde p_3, \cdots$ by setting

$$\tilde p_k(\lambda) = k(k-1)\int_{-\infty}^{\infty}u^{k-2}\,\frac{\Psi_\lambda(u)-|u|}{2}\,du. \tag{5.24}$$

Similarly, define

$$\tilde p_k(\Psi_{n,\lambda}) = k(k-1)\int_{-\infty}^{\infty}u^{k-2}\,\frac{\Psi_{n,\lambda}(u)-|u|}{2}\,du$$

and

$$\tilde p_k(\Omega) = k(k-1)\int_{-\infty}^{\infty}u^{k-2}\,\frac{\Omega(u)-|u|}{2}\,du. \tag{5.25}$$
Lemma 5.7. For each k ≥ 2 we have
(i)

$$\tilde p_k(\Omega) = \begin{cases} \dfrac{(2m)!}{(m!)^2}, & k = 2m,\\[1mm] 0, & k = 2m+1; \end{cases} \tag{5.26}$$

(ii)

$$\tilde p_k(\lambda) = \sum_{i=1}^{q+1}x_i^k - \sum_{i=1}^{q}y_i^k, \qquad \lambda\in\mathcal{P}, \tag{5.27}$$

where the $x_i$'s are the local minima and the $y_j$'s are the local maxima of the function $\Psi_\lambda$ and $x_1 < y_1 < x_2 < \cdots < x_q < y_q < x_{q+1}$, see Figure 5.3 below.

Fig. 5.3 Frobenius representation

Proof. (i) Note, integrating by parts twice,

$$\tilde p_k(\Omega) = \int_{-\infty}^{\infty}u^k\,\Big(\frac{\Omega(u)-|u|}{2}\Big)''\,du = \int_{-2}^{2}\frac{u^k}{\pi\sqrt{4-u^2}}\,du.$$

(5.26) now easily follows, since the right hand side is the k-th moment of the arcsine distribution on [−2, 2].

Turn to (ii). Note

$$\frac12\big(\Psi_\lambda(u)-|u|\big)'' = \sum_{i=1}^{q+1}\delta_{x_i} - \sum_{j=1}^{q}\delta_{y_j} - \delta_0.$$

Then we have

$$\tilde p_k(\Psi_\lambda) = \int_{-\infty}^{\infty}u^k\,\frac12\big(\Psi_\lambda(u)-|u|\big)''\,du = \int_{-\infty}^{\infty}u^k\Big(\sum_{i=1}^{q+1}\delta_{x_i} - \sum_{j=1}^{q}\delta_{y_j} - \delta_0\Big)\,du = \sum_{i=1}^{q+1}x_i^k - \sum_{i=1}^{q}y_i^k,$$

as desired. □
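Lemma 5.7(ii) is concrete enough to test by machine (my own sketch, assuming only the definitions above). The rotated profile at an integer point m equals |m| plus twice the number of boxes on the diagonal j − i = m; two exact consequences of (5.27) are that the first moments of minima and maxima agree and that $\tilde p_2(\lambda) = 2|\lambda|$ (each box contributes area 2 between $\Psi_\lambda$ and |u|):

```python
def Psi(lam, m):
    """Rotated profile Psi_lambda at integer u = m."""
    cnt = sum(1 for i, p in enumerate(lam, 1) if i + m >= 1 and p >= i + m)
    return abs(m) + 2*cnt

def extrema(lam):
    """Interlacing local minima x_i and maxima y_j of Psi_lambda, as in (5.27)."""
    lo, hi = -(len(lam) + 2), lam[0] + 2
    xs, ys = [], []
    for m in range(lo + 1, hi):
        left, c, right = Psi(lam, m - 1), Psi(lam, m), Psi(lam, m + 1)
        if c < left and c < right:
            xs.append(m)
        if c > left and c > right:
            ys.append(m)
    return xs, ys

lam = [4, 2, 2, 1]
xs, ys = extrema(lam)
assert len(xs) == len(ys) + 1                                        # interlacing
assert sum(xs) == sum(ys)                                            # first moments agree
assert sum(x**2 for x in xs) - sum(y**2 for y in ys) == 2*sum(lam)   # p~_2 = 2|lambda|
```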
Lemma 5.8. The functions $\tilde p_2, \tilde p_3, \cdots$ belong to the algebra A. In particular, we have for any λ ∈ P,

$$\frac{\tilde p_{k+1}(\lambda)}{k+1} = \sum_{j=0}^{[k/2]}\frac{1}{2^{2j}(2j+1)}\binom{k}{2j}\,\bar p_{k-2j}(\lambda), \qquad k\ge 1. \tag{5.28}$$

Proof. Fix λ ∈ P and let $x_1, \cdots, x_{q+1}, y_1, \cdots, y_q$ be as in (5.27). Then, according to Proposition 2.6 in Ivanov and Olshanski (2002), the following identity holds:

$$\frac{\Phi\big(z-\frac12;\lambda\big)}{\Phi\big(z+\frac12;\lambda\big)} = \frac{\prod_{j=1}^{q}\big(1-\frac{y_j}{z}\big)}{\prod_{i=1}^{q+1}\big(1-\frac{x_i}{z}\big)}.$$

This in turn implies

$$\log\Phi\Big(z-\frac12;\lambda\Big) - \log\Phi\Big(z+\frac12;\lambda\Big) = \sum_{j=1}^{q}\log\Big(1-\frac{y_j}{z}\Big) - \sum_{i=1}^{q+1}\log\Big(1-\frac{x_i}{z}\Big). \tag{5.29}$$

By (5.22), the left hand side of (5.29) equals

$$\log\Phi\Big(z-\frac12;\lambda\Big) - \log\Phi\Big(z+\frac12;\lambda\Big) = \sum_{l=1}^{\infty}\frac{\bar p_l}{l}\Big(\Big(z-\frac12\Big)^{-l} - \Big(z+\frac12\Big)^{-l}\Big). \tag{5.30}$$

Also, by Lemma 5.7, the right hand side of (5.29) equals

$$\sum_{k=1}^{\infty}\frac{\tilde p_k}{k}\,z^{-k}. \tag{5.31}$$

By comparing the coefficients of $z^{-k}$ in (5.30) and (5.31), we easily get (5.28). □
In a simpler way, (5.28) can be interpreted as

$$\frac{\tilde p_{k+1}(\lambda)}{k+1} = \bar p_k + \text{a linear combination of }\bar p_1, \bar p_2, \cdots, \bar p_{k-1}, \qquad k\ge 1.$$

Conversely,

$$\bar p_k(\lambda) = \frac{\tilde p_{k+1}(\lambda)}{k+1} + \text{a linear combination of }\tilde p_2, \tilde p_3, \cdots, \tilde p_k, \qquad k\ge 1.$$

Hence the functions $\tilde p_2(\lambda), \tilde p_3(\lambda), \cdots$ are algebraically independent generators of the algebra A:

$$A = \mathbb{R}[\tilde p_2, \tilde p_3, \cdots].$$

The weight grading of A is defined as

$$\mathrm{wt}(\tilde p_k) = k, \qquad k = 2, 3, \cdots.$$

Equivalently, the weight grading is the image of the standard grading of F under the algebra morphism

$$\mathbb{R}[p_1, p_2, \cdots] = F \to A = \mathbb{R}[\tilde p_2, \tilde p_3, \cdots], \qquad p_1\mapsto 0, \quad p_k\mapsto \tilde p_k, \quad k = 2, 3, \cdots.$$

The weight grading induces a filtration in A, which we call the weight filtration and denote by the same symbol wt(·). In particular,

$$\mathrm{wt}(\bar p_k) = k+1, \qquad k\ge 1,$$

since the top homogeneous component of $\bar p_k$ is $\tilde p_{k+1}/(k+1)$. Define for each k ≥ 1

$$p^\sharp_k(\lambda) = \begin{cases} n^{\downarrow k}\,\dfrac{\chi^\lambda(k,1^{n-k})}{d_\lambda}, & \lambda\in\mathcal{P}_n,\ n\ge k,\\[2mm] 0, & \lambda\in\mathcal{P}_n,\ n<k, \end{cases} \tag{5.32}$$

where $n^{\downarrow k} = n(n-1)\cdots(n-k+1)$.

Lemma 5.9. Fix k ≥ 1 and λ ∈ P. Then p]k (λ) equals the coefficient of
z −1 in the expansion of the function
1 1 ↓k Φ(z; λ)
− z− (5.33)
k 2 Φ(z − k; λ)
in descending powers of z about the point z = ∞.

Proof. We treat two cases separately. First, assume λ ∈ Pn where n < k.


Then by definition, p]k (λ) = 0. Also, by Lemma 5.5, it is easy to see that
the function of (5.33) is indeed a polynomial of z. So the claim is true.
Next, consider the case n ≥ k. Recall the following formula due to
Frobenius (see Example 1.7.7 of Macdonald (1995) and Ingram (1950)):
p]k (λ) equals the coefficients of z −1 in the expansion of the function
n
1 Y z − λi − n + i − k
F (z) = − z ↓k
k i=1
z − λi − n + i
about z = ∞. Namely, p]k (λ) = −Res F (z), z = ∞ . A simple transfor-


mation yields
Φ z − n + 21 ; λ

1 ↓k
F (z) = − (z − n) .
k Φ z − n + 12 − k; λ
Note the residue at z = ∞ will not change under the shift z 7→ z + n − 1/2.
Consequently,
 1 1 ↓k Φ(z; λ) 
p]k (λ) = −Res − z− ,z = ∞ .
k 2 Φ(z − k; λ)
The proof is complete. 
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 226

226 Random Matrices and Random Partitions

We shall employ the following notation. Given a formal series A(t), let
[tk ]{A(t)} = the coefficient of tk in A(t).
Lemma 5.10. The functions p]k (λ) belong to the algebra A. In particular,
it can be described through the generators p̄1 , p̄2 , · · · as follows
n 1Y k
1 
p]k (λ) = [tk+1 ] − 1 − (j − )t
k j=1 2

X p̄j (λ)tj o
· exp 1 − (1 − kt)−j . (5.34)
j=1
j

Proof. This is a direct consequence of Lemmas 5.5 and 5.9. 


The expression (5.34) can be written in the form

1 n  X o
p]k (λ) = − [tk+1 ] (1 + ε0 (t)) exp − k p̄j (λ)tj+1 (1 + ε1 (t))
k j=1
∞ ∞
1 n X (−1)m  X m o
= − [tk+1 ] (1 + ε0 (t)) k p̄j (λ)tj+1 (1 + ε1 (t)) .
k m=0
m! j=1

Here each εr (t) is a power series of the form c1 t + c2 t2 + · · · , where the


coefficients c1 , c2 , · · · do not involve the generators p̄1 , p̄2 , · · · .
We can now readily evaluate the top homogeneous component of p]k (λ)
with respect to both the canonical grading and the weight grading in A. In
the canonical grading, the highest term of p]k equals p̄k :
p]k = p̄k + lower terms;
while in the weight grading, the top homogeneous component of p]k has
weight k + 1 and can be written as
p̃k+1
p]k = + f (p̃2 , · · · , p̃k ) + lower terms, (5.35)
k+1
where f (p̃2 , · · · , p̃k ) is a homogeneous polynomial in p̃2 , · · · , p̃k of total
weight k + 1.
Now we invert (5.35) to get
Lemma 5.11. For k = 2, 3, · · ·
P
X k ↓ ri Y ] ri
p̃k (λ) = Q pi−1 (λ) + lower terms , (5.36)
ri !
i≥2
where the sum is taken over all r2 , r3 , · · · with 2r2 +3r3 +· · · = k, and lower
terms means a polynomial in p]1 , p]2 , · · · , p]k−2 of total weight ≤ k −1, where
wt(p]i ) = i + 1.
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 227

Random Plancherel Partitions 227

The proof is left to the reader. See also Proposition 3.7 of Ivanov and
Olshanski (2002), which contains a general inversion formula.
We next extend p]k in (5.32) to any partition ρ on P. Let |ρ| = r, define
( n−r
] n↓r χλ (ρ,1

)
, λ ∈ Pn , n ≥ r,
pρ (λ) = (5.37)
0, λ ∈ Pn , n < r.
The following lemma lists some basic properties. The reader is referred to
its proof and more details in Kerov and Olshanski (1993), Okounkov and
Olshanski (1998), Vershik and Kerov (1985).

Lemma 5.12. (i) For any partition ρ, the function p]ρ is an element of A.
(ii) In the canonical grading,
p]ρ (λ) = p̄ρ (λ) + lower terms,
where λ = (1r1 , 2r2 , · · · ) and
Y
p̄ρ (λ) = p̄i (λ)ri .
i=1

(iii) The functions p]ρ


form a basis in A.
(iv) For any partitions σ and τ , in the canonical grading
p]σ p]τ = p]σ∪τ + lower terms. (5.38)

We remark that the basis p]ρ is inhomogeneous both in the canonical grading
and weight grading. For each f ∈ A, let (f )ρ be the structure constants of
f in the basis of p]ρ . Namely,
X
f (λ) = (f )ρ p]ρ (λ).
ρ

Define for any index set J ⊆ N


X
kρkJ = |ρ| + rj (ρ), degJ (f ) = max kρkJ .
ρ:(f )ρ 6=0
j∈J

We will be particularly interested in J = ∅, {1} and N below. For simplicity,


denote
kρk0 = kρk∅ , kρk1 = kρk1 , kρk∞ = kρkN
and
deg0 (f ) = deg∅ (f ), deg1 (f ) = deg1 (f ), deg∞ (f ) = degN (f ).

Lemma 5.13. For any partition σ,


p]σ p]1 = p]σ∪1 + |σ| · p]σ . (5.39)
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 228

228 Random Matrices and Random Partitions

Proof. This directly follows from the definition (5.37). Indeed, we need
only to show for each λ ∈ Pn with n ≥ |σ| since otherwise both sides are
equal to 0. When n = |σ| and λ ∈ Pn ,
p]σ (λ)p]1 (λ) = n · p]σ (λ) = |σ| · p]σ (λ).
Next assume n ≥ |σ| + 1. Setting k = |σ|, then
χλ (σ, 1n−k )
p]σ (λ)p]1 (λ) = n↓(k) · n .

Hence the claim follows from a simple relation
n↓k · n = n↓(k+1) + n↓k · k. 

Lemma 5.14. For any partitions σ and τ ,


p]σ p]τ = p]σ∪τ + lower terms, (5.40)
where lower terms means a linear combination of p]ρ with kρkN < kσkN +
kτ kN .

Proof. Set
X
p]σ p]τ = (p]σ p]τ )ρ p]ρ .
ρ

We claim that only partitions ρ with kρkN ≤ kσkN + kτ kN can really con-
tribute. Indeed, assume (p]σ p]τ )ρ 6= 0, and fix a set X of cardinality |ρ| and a
permutation s : X → X whose cycle structure is given by ρ. Then according
to Proposition 4.5 of Ivanov and Olshanski (2002) (see also Proposition 6.2
and Theorem 9.1 of Ivanov and Kerov (2001)), there must exist a quadruple
{X1 , s1 , X2 , s2 } such that
(i) X1 ⊆ X, X2 ⊆ X, X1 ∪ X2 = X;
(ii) |X1 | = |σ| and x1 : X1 7→ X1 is a permutation of cycle structure σ;
(iii) |X2 | = |τ | and x2 : X2 7→ X2 is a permutation of cycle structure τ ;
(iv) denoting by s̄1 : X → X and s̄2 : X → X the natural extensions of s1,2
from X1,2 to the whole X. I.e., s̄1,2 is trivial on X \ X1,2 , then s̄1 s̄2 = s.
Fix any such quadruple and decompose each of the permutations s, s1 , s2
into cycles. Let CN (s1 ) denote the set of all cycles of s1 , AN (s1 ) the subset
of those cycles of s1 that entirely contained in X1 \ X2 , BN (s1 ) the subset
of those cycles of s1 that have a nonempty intersection with X1 ∩ X2 . Then
CN (s1 ) = AN (s1 ) + BN (s1 ).
Define similarly CN (s2 ), AN (s2 ) and BN (s2 ), then we have
CN (s2 ) = AN (s2 ) + BN (s2 ).
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 229

Random Plancherel Partitions 229

Similarly again, let CN (s) denote the set of all cycles of s, BN (s) the subset
of those cycles of s that intersect both X1 and X2 . Then
CN (s) = AN (s1 ) + AN (s2 ) + BN (s).
The claimed inequality kρkN ≤ kσkN + kτ kN is equivalent to
|X| + |BN (s)| ≤ |X1 | + |BN (s1 )| + |X2 | + |BN (s2 )|. (5.41)
To prove (5.41), it suffices to establish a stronger inequality:
|BN (s)| ≤ |X1 ∩ X2 |. (5.42)
To see (5.42), it suffices to show each cycle ∈ o ∈ BN (s) contains a point
of X1 ∩ X2 . By the definition of BN (s), cycle o contains both points of X1
and X2 . Therefore there exist points x1 ∈ X1 ∩ c and x2 ∈ X2 ∩ o such that
sx1 = x2 . By (iv), it follows that either x1 or x2 lies in X1 ∩ X2 . Thus the
claim is true, as desired.
Now assume (p]σ p]τ )ρ 6= 0 and kρkN = kσkN + kτ kN , then both BN (s1 )
and BN (s2 ) are empty, which implies X1 ∩ X2 = ∅. Therefore ρ = σ ∪ τ .
Finally, by (5.38),
(p]σ p]τ )σ∪τ = 1.
It concludes the proof. 

Lemma 5.15. (i) For any two partitions σ and τ with no common part,
p]σ p]τ = p]σ∪τ + lower terms,
where lower terms means terms with deg1 (·) < kσ ∪ τ k1 .
(ii) For any partition σ ∈ P and k ≥ 2, if rk (σ) ≥ 1, then
p]σ p]k = p]σ∪k + krk (σ)p](σ\k)∪1k + lower terms, (5.43)
where lower terms means terms with deg1 (·) < kσk1 + k.

Proof. (i) can be proved in a way similar to that of Lemma 5.14 with
minor modification. Turn to (ii). Set
X
p]σ p]τ = (p]σ p]τ )ρ p]ρ .
ρ

Again, only partitions ρ with kρk1 ≤ kσk1 + k can really contribute. We


need below only search for partitions ρ such that (p]σ p]τ )ρ 6= 0 and kρk1 =
kσk1 + k. As in Lemma 5.14, we get
B1 (s1 ) = ∅, B1 (s2 ) = ∅, |B1 (s)| = |X1 ∩ X2 |.
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 230

230 Random Matrices and Random Partitions

This means that X1 ∩ X2 = ∅ or X1 ∩ X2 entirely consists of common


nontrivial cycles of the permutations s1 and s−1
2 . The first possibility,
X1 ∩ X2 = ∅, means that ρ = σ ∪ (k). Furthermore, by (5.38),

(p]σ p]τ )σ∪k = 1.

The second possibility means X1 ⊇ X2 because s−1 2 reduces to a single


k-cycle which is also a k-cycle of s1 . This in turn implies that rk (σ) ≥ 1
and ρ = (σ \ k) ∪ 1k .
It remains to evaluate (p]σ p]τ )ρ = krk (σ). Note that the number of ways
to choose a k-cycle inside a k +r1 (σ)-point set equals (k +r1 (σ))!/k(r1 (σ))!.
According to Proposition 6.2 and Theorem 9.1 of Ivanov and Kerov (2001),
we know
kzσ (k + r1 (σ))!
(p]σ p]τ )ρ =
zρ k(r1 (σ))!
= krk (σ),

where zλ is defined by (2.15) for a partition λ. The proof is complete. 

In the preceding paragraphs we have briefly described the structure of al-


gebra A and its three families of bases including {p̄k }, {p̃k } and {p]ρ }. Next
we need to take average operation with respect to Pp,n for elements of A.
A basic result is as follows.

Lemma 5.16. Let |ρ| = r, n ≥ r. Then


 ↓r
n , ρ = (1r ),
Ep,n p]ρ = (5.44)
0, otherwise.

Proof. By (5.37) and (2.12)


χλ (ρ, 1n−r )
Ep,n p]ρ = Ep,n n↓r

n↓r X
= χλ (ρ, 1n−r )dλ
n!
λ∈Pn

n↓r X
= χλ (ρ, 1n−r )χλ (e)
n!
λ∈Pn
 ↓r
n , ρ = (1r ),
=
0, otherwise.
The proof is complete. 
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 231

Random Plancherel Partitions 231

To proceed, we shall prove a weak form of limit shape theorem.

Theorem 5.4. Define


Z ∞
uk Ψn,λ (u) − Ω(u) du,

Yn,k (λ) = λ ∈ Pn .
−∞

Then for each k ≥ 0


P
Yn,k −→ 0, n → ∞.

Proof. Note by (5.24) and (5.25),


Z ∞ Z ∞
uk Ψn,λ (u) − |u| du − uk Ω(u) − |u| du
 
Yn,k (λ) =
−∞ −∞
2  p̃
k+2 (λ)

= − p̃k+2 (Ω) .
(k + 2)(k + 1) n(k+2)/2
Hence it suffices to prove for each k ≥ 2
p̃k (λ) P
− p̃k (Ω) −→ 0, n → ∞.
nk/2
Equivalently,
p̃k (λ) P
−→ p̃k (Ω), n → ∞.
nk/2
In turn, this will be done by checking as n → ∞
Ep,n p̃k (λ)
−→ p̃k (Ω) (5.45)
nk/2
and
Ep,n p̃2k (λ)
−→ p̃2k (Ω). (5.46)
nk
Expand p̃k in the basis of p]ρ
X
p̃k (λ) = (p̃k )ρ p]ρ (λ), (5.47)
ρ

where (p̃k )ρ denotes the structure coefficient. Note by Lemmas 5.11 and
5.14 deg∞ (p̃k ) = k so that the summation in (5.47) is over all ρ with
kρk∞ ≤ k. Then it follows from (5.44)
X
Ep,n p̃k = (p̃k )ρ Ep,n p]ρ
kρk∞ ≤k
X
= (p̃k )1r n↓r .
2r≤k
March 3, 2015 14:1 9197-Random Matrices and Random Partitions ws-book9x6 page 232

232 Random Matrices and Random Partitions

In addition, according to (5.36) and (5.39), if k = 2r


k ↓r
(p̃k )1r = . (5.48)
r!
Hence for k = 2m,
Ep,n p̃2m (2m)!
m
→ ,
n (m!)2
while for k = 2m + 1,
Ep,n p̃k
→ 0.
nk/2
This proves (5.45). Analogously, we can prove (5.46). Indeed, it follows
from (5.47)
X
p̃2k (λ) = (p̃k )ρ (p̃k )σ p]ρ (λ)p]σ (λ)
kρk∞ ≤k,kσk∞ ≤k

which in turn implies


X
Ep,n p̃2k = (p̃k )ρ (p̃k )σ Ep,n p]ρ p]σ .
kρk∞ ≤k,kσk∞ ≤k

Also, by (5.40),
Ep,n p]ρ p]σ = Ep,n p]ρ∪σ + Ep,n lower terms
where lower terms means a linear combination of p]τ ’s with kτ k∞ < kρk∞ +
kσk∞ .
Again by (5.48),
 ↓r
] n , ρ ∪ σ = (1r ),
Ep,n pρ∪σ =
0, otherwise.
In summary, we have
X
Ep,n p̃2k = (p̃k )1r1 (p̃k )1r2 n↓(r1 +r2 ) + Ep,n lower terms
2r1 ≤k,2r2 ≤k

In particular, we have
 (2m)↓m 2
Ep,n p̃22m = n↓2m + O n2m−1

m!
and
Ep,n p̃22m+1 = O n2m .


Therefore it follows
( 2
(2m)↓m
Ep,n p̃2k , k = 2m,
→ m!
nk 0, k = 2m + 1.
The proof is now complete. 
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 233

Random Plancherel Partitions 233

Remark 5.1. One can derive the strong form of limit shape theorem, i.e.,
Theorem 5.2, using the equivalence between weak topology and uniform
topology. The interested reader is referred to Theorem 5.5 of Ivanov and
Olshanski (2002).

Theorem 5.5. Define


p]k (λ)
Zn,k (λ) = , λ ∈ Pn .
nk/2
Then under (Pn , Pp,n ) as n → ∞
 d √ 
Zn,k , k ≥ 2 −→ k ξk , k≥2 .

Here ξk , k ≥ 2 is a sequence of standard normal random variables, the


convergence holds in terms of finite dimensional distribution.

Proof. For simplicity of notations, we will mainly focus on the 1-


dimensional case. Namely, we shall below prove for each k ≥ 2
d √
Zn,k −→ k ξk , n → ∞.
Adapt the moment method. Fix l ≥ 1, we need to check
Ep,n (ηk )l → Eξkl , n → ∞

where ηk = Zn,k / k. This is equivalent to proving
Ep,n hl (ηk ) −→ Ehl (ξk ), n→∞
where hl is a classical Hermite √orthogonal polynomial of order l with respect
−x2 /2
to the weight function e / 2π, see (3.3). Trivially, by the orthogonality
property, Ehl (ξk ) = 0. So we shall only prove
Ep,n hl (ηk ) → 0. (5.49)
Note by (5.43),
p]kl p]k = p]kl+1 + klp]kl−1 ∪1k + lower terms, (5.50)
where lower terms means a term with deg1 (·) < k(l + 1).
Also, by definition
n↓kl χλ (k l−1 ∪ 1k , 1n−kl )
p]kl−1 ∪1k (λ) =

n↓kl ]
= ↓k(l−1) pkl−1 (λ), λ ∈ Pn . (5.51)
n
March 3, 2015 14:29 9197-Random Matrices and Random Partitions ws-book9x6 page 234

234 Random Matrices and Random Partitions


Inserting (5.51) back into (5.50) and dividing by ( knk/2 )l+1 , we get
n↓kl 1
ηkl ηk = ηkl+1 + l ηkl−1 + √ lower terms, (5.52)
n↓k(l−1) nk
( kn )l+1
k/2

Recall that hl (x) is characterized by the recurrence relation


xhl (x) = hl+1 (x) + lhl−1 (x)
together with the initial data h0 = 1 and h1 (x) = x. Hence we make
repeatedly use of (5.52) to yield
1
ηkl = hl (ηk ) + √ lower terms
( knk/2 )l
where lower terms means a term with deg1 (·) < kl.
In particular, it follows
1
hl (ηk ) = ηkl + √ lower terms.
( knk/2 )l
Thus by (5.44)
1
Ep,n hl (ηk ) = Ep,n ηkl + √ Ep,n lower terms
( knk/2 )l
= O(n−1/2 ),
which proves (5.49) as desired.
To treat m-dimensional case, we need to prove for any positive integers
l2 , · · · , lm
m
Y Ym
lk
Ep,n ηk → Eξklk = 0, n → ∞.
k=2 k=2
Equivalently, for hl2 , · · · , hlm
Ym m
Y
Ep,n hlk (ηk ) → Ehlk (ξk ), n → ∞.
k=2 k=2
The details are left to the reader. 
Now we are ready to prove Theorem 5.3. Define q1 = 0 and for any k ≥ 2
1
p̃k+1 (λ) − n(k+1)/2 p̃k+1 (Ω) , λ ∈ Pn . (5.53)

qk (λ) =
(k + 1)nk/2
Lemma 5.17. For any k ≥ 2,
[(k−1)/2]   ]
X k pk−2j (λ)
qk (λ) =
j=0
j n(k−2j)/2
1
+ lower terms with deg1 (·) ≤ k − 1. (5.54)
nk/2
March 3, 2015 14:29 9197-Random Matrices and Random Partitions ws-book9x6 page 235

Random Plancherel Partitions 235

Proof. Using Lemma 5.11, one can express p̃k+1 as a polynomial


p]1 , p]2 , · · · up to terms of lower weight. In particular, we have

[(k−1)/2]  
X k ]
p̃k+1 (λ) = (k + 1) nj p (λ) + n(k+1)/2 p̃k+1 (Ω)
j=0
j k−2j
+lower terms with deg1 (·) ≤ k − 1. (5.55)

A nontrvial point in (5.55) is the switch between two weight filtrations. Its
proof is left to the reader. See also Proposition 7.3 of Ivanov and Olshanski
(2002) for details. 

Inverting (5.54) easily gives

Lemma 5.18. For any k ≥ 2,


[(k−1)/2]
p]k (λ)
 
X k k−j
= (−1)j qk−2j (λ),
nk/2 j=0
k−j j
1
+ lower terms with deg1 (·) ≤ k − 1. (5.56)
nk/2
Proof. Recall the following combinatorial inversion formula due to Rior-
dan (1968): assume that α0 , α1 , · · · ; β0 , β1 , · · · are two families of formal
variables, then
[k/2]  
X k
αk = βk−2j , k = 0, 1, · · ·
j=0
j
m
[k/2]  
X k k−j
βk = (−1)j αk−2j , k = 0, 1, · · · .
k−j j
j=0

Set α0 = α1 = 0, αk = qk (λ), k ≥ 2; β0 = β1 = 0, βk = p]k (λ)/nk/2 , k ≥ 2.


If we neglect the lower terms in (5.54), then it obviously follows

[(k−1)/2]
p]k (λ)
 
X k k−j
= (−1)j qk−2j .
nk/2 j=0
k−j j

The appearance of remainder terms affect only similar remainder terms in


the reverse relations. 
March 3, 2015 14:29 9197-Random Matrices and Random Partitions ws-book9x6 page 236

236 Random Matrices and Random Partitions

Proof of Theorem 5.3. By (5.18) and integrating term by term, we


obtain
[k/2] ∞ √ √
 Z
X k−j
(−1)j uk−2j Ψλ ( nu) − nΩ(u) du

Xn,k (λ) =
j=0
j −∞

[k/2] ∞ √ √
 h Z
X k−j
(−1)j uk−2j Ψλ ( nu) − n|u| du

=
j=0
j −∞
Z ∞ √  i
− uk−2j n (Ω(u) − |u|) du
−∞
[k/2]  
X k−j 2
= (−1)j
j (k + 2 − 2j)(k + 1 − 2j)
j=0
 p̃
k+2−2j (λ) √ 
· (k+1−2j)/2 − np̃k+2−2j (Ω) .
n
By the definition of (5.53) and noting
   
1 k−j 1 k+1−j
= ,
k + 2 − 2j j k+1−j j
we further get
[k/2]  
X
j 2 k+1−j
Xn,k (λ) = (−1) qk+1−2j (λ).
k+1−j j
j=0

By (5.56),

2p]k+1 (λ)
Xn,k (λ) =
(k + 1)n(k+1)/2
1
+ (k+1)/2 lower terms with deg1 (·) ≤ k.
n
Since the remainder terms of negative degree do not affect the asymptotics,
then we can use Theorem 5.5 to conclude the proof. 
To conclude this section, we remark that Theorem 5.5 for character ra-
tios is of independent interest. Another elegant approach was suggested by
Hora (1998), in which a central limit theorem was established for adjacency
operators on the infinite symmetric group. Still, Fulman (2005, 2006) de-
veloped the Stein method and martingale approach to prove asymptotic
normality for character ratios.
March 3, 2015 14:29 9197-Random Matrices and Random Partitions ws-book9x6 page 237

Random Plancherel Partitions 237

5.3 Fluctuations in the bulk

In this section we shall turn to the study of fluctuations of a typical


Plancherel Young diagrams around their limit shape in the bulk of the
partition spectrum. Here we use the term spectrum informally by analogy
with the GUE, to refer to the variety of partition’s terms λi ∈ λ. Recall
ψλ (x) and ω(x) introduced in Section 5.1 and define the random process
√  √
Ξn (x) = ψλ n x − n ω(x), x ≥ 0, λ ∈ Pn .
According to the Corollary 5.1, it follows under (Pn , Pp,n )
1 P
√ sup |Ξn (x)| −→ 0, n → ∞.
n x≥0
This is a kind of weak law of large numbers. The following theorem de-
scribes the second order fluctuation of Ξn at each fixed 0 < x < 2.

Theorem 5.6. Under (Pn , Pp,n )


Ξn (x) d
−→ N 0, %2 (x) ,

1
√ n→∞ (5.57)

log n
for each 0 < x < 2, where %−2 (x) = 1
π arccos |ω(x)−x|
2 .

The asymptotics of finite dimensional distributions of the random process


Ξn (x) reads as follows.

Theorem 5.7. Assume 0 < x1 < · · · < xm < 2, then under (Pn , Pp,n ),
1  d
1
√ Ξn (xi ), 1 ≤ i ≤ m −→ (ξi , 1 ≤ i ≤ m), n → ∞
2π log n
where ξi , 1 ≤ i ≤ m are independent normal random variables.

Remark 5.2. (i) The work of this section, in particular Theorems 5.6
and 5.7, are motivated by Gustavsson (2005), in which he investigated
the Gaussian fluctuation of eigenvalues in the GUE. There is a surprising
similarity between Plancherel random partitions and GUE from the view-
point of asymptotics, though no direct link exists between two finite models.
(ii) Compared with the uniform random partitions, the normalizing con-

stant log n is much smaller than n1/4 , see Theorem 4.5. This means that
Plancherel Young diagrams concentrated more stably around their limit
shape.
(iii) The random process Ξn (x) weakly converges to a Gaussian white
noise in the finite dimensional sense. Thus one cannot expect a usual pro-
cess convergence for Ξn in the space of continuous functions on [0, 2].
March 31, 2015 12:3 9197-Random Matrices and Random Partitions ws-book9x6 page 238

238 Random Matrices and Random Partitions

(iv) As we will see, if xn → x where 0 < x < 2, then (5.57) still holds
for Ξn (xn ), namely
Ξn (xn ) d
−→ N 0, %2 (x) ,


1 n → ∞. (5.58)

log n
(v) Since %(x) → ∞ as x → 0 or 2, the normal fluctuation is no longer
true at either 0 or 2. In fact, it was proved

ψλ (0) − 2 n d
−→ F2 , n → ∞
n1/6
where F2 is Tracy-Widom law.

It is instructive to reformulate Theorems 5.6 and 5.7 in the rotated coordi-


nates u and v. Recall Ψλ (u) and Ω(u) are rotated versions of ψλ (x) and
ω(x). Define
√ √
Υn (u) = Ψλ ( n u) − n Ω(u), −∞ < u < ∞. (5.59)
We can restate Theorem 5.6 in the following elegant version, whereby—
quite surprisingly—the normalization does not depend on the location in
the spectrum.

Theorem 5.8. Under (Pn , Pp,n )


Υn (u) d
1
√ −→ N (0, 1), n→∞
π
log n
for −2 < u < 2.

Proof. Fix −2 < u < 2 and assume λ ∈ Pn . A key step is to express the
√ √
error Ψλ ( n u) − nΩ(u) in terms of ψλ and ω. Let local extrema consist
of two interlacing sequences of points
ǔ1 < û1 < ǔ2 < û2 < · · · < ǔm < ûm < ǔm+1 ,
where ǔi ’s are the local minima and ûi ’s are the local maxima of the function

Ψλ ( n ·). Without loss of generality, we may and will assume that u is

between ûk and ǔk+1 for some 1 ≤ k ≤ m. Denote by n(xn , xn ) and
√ √ √ √
n(x∗n , x∗n ) the projections of ( n u, Ψλ ( n u)) and n(u, Ω(u)) in the
line u = v, respectively. Then we obviously have
√ √ √
Ψλ ( n u) − n Ω(u) = 2 n(xn − x∗n ).
According to Theorem 5.2, it follows
P
xn − x∗n −→ 0, n→∞
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 239

Random Plancherel Partitions 239

and so
P 1
xn −→ x := (Ω(u) + u).
2
√ √
On the other hand, if we let 2hn be the distance between n(xn , xn ) and
√ √
( n u, Ψλ ( n u)), then we have
√ √
n(xn − x∗n ) = hn − n ω(x∗n )
√ √
= hn − n ω(xn ) + n(ω(xn ) − ω(x∗n )). (5.60)
Now using the Taylor expansion for the function ω at xn and solving equa-
tion (5.60), we obtain

√ hn − n ω(xn )
n(xn − x∗n ) = ,
1 − ω 0 (x̃n )
where x̃n is between xn and x∗n .
P
Since xn , x∗n −→ x ∈ (0, 2), then it holds
1 P 1 1 ω(x) − x
−→ = arccos .
1 − ω 0 (x̃n ) 1 − ω 0 (x) π 2

Hence it suffices to prove hn − n ω(xn ) after properly scaled converges in
distribution. Observe that
√ √
ψλ ( n xn + 1) ≤ hn ≤ ψλ ( n xn )
since u is between ûk and ǔk+1 .
P
Note xn −→ x ∈ (0, 2). Then for each subsequence {n0 } of integers
there exists a further subsequence {n00 } ⊆ {n0 } such that xn00 → x a.e.
Thus by (5.58) it holds
√ √
ψλ ( n00 xn00 ) − n00 ω(xn00 ) d
−→ N 0, %2 (x) .

1
√ 00
2π log n
By a standard subsequence argument, it holds
√ √
ψλ ( n xn ) − n ω(xn ) d
−→ N 0, %2 (x) .

1

2π log n

Similarly, since n ω(xn + √1n ) − ω(xn ) = Op (1),

√ √
ψλ ( n xn + 1) − n ω(xn ) d
−→ N 0, %2 (x) .

1

2π log n
In combination, we have

hn − n ω(xn ) d
−→ N 0, %2 (x) .

1

2π log n
We conclude the proof. 
February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 240

240 Random Matrices and Random Partitions

Let us now turn to the proof of Theorems 5.6 and 5.7. A basic strat-
egy is to adapt the conditioning argument—the Poissonization and de-
Poissonization techniques. Define the Poissonized Plancherel measure Qp,θ
on P as follows:
 d 2
λ
Qp,θ (λ) = e−θ θ|λ|
|λ|!

X θn
= e−θ Pp,n (λ)1(λ∈Pn ) , λ ∈ P (5.61)
n=0
n!
where θ > 0 is a model parameter. This is a mixture of a Poisson random
variable with mean θ and the Plancherel measures. Let Ξθ (x) be given by
(5.57) with n replaced by θ, namely
√  √
Ξθ (x) = ψλ θ x − θ ω(x), x ≥ 0, λ ∈ P. (5.62)

Theorem 5.9. Under (P, Qp,θ ),


Ξθ (x) d
−→ N 0, %2 (x) ,

1
√ θ→∞

log θ
for each 0 < x < 2.

Theorem 5.10. Assume 0 < x1 < · · · < xm < 2. Under (P, Qp,θ ),
1  d
1
√ Ξθ (xi ), 1 ≤ i ≤ m −→ (ξi , 1 ≤ i ≤ m), θ → ∞

log θ
where ξi , 1 ≤ i ≤ m are independent normal random variables.

Before giving the proof of Theorems 5.9 and 5.10, let us prove Theorems
5.6 and 5.7 with the help of the de-Poissonization technique.

Lemma 5.19. For 0 < α < 1, define


p
θn± = n ± n(log n)α .
Then uniformly in x ≥ 0 and z ∈ R,
Qp,θn+ (λ ∈ P : ψλ (x) ≤ z) − εn ≤ Pp,n (λ ∈ Pn : ψλ (x) ≤ z)
≤ Qp,θn− (λ ∈ P : ψλ (x) ≤ z) + εn (5.63)
where εn → 0 as n → ∞.

Proof. We need only give the proof of the lower bound, since the upper
bound is similar. Let X be a Poisson random variable with mean θn+ . Then
we have EX = θn+ , V arX = θn+ and the following tail estimate
p
εn := P |X − θn+ | > n(log n)α


= O log−α n .

February 2, 2015 10:5 9197-Random Matrices and Random Partitions ws-book9x6 page 241

Random Plancherel Partitions 241

It follows by (5.61)
Qp,X (λ) = EPp,X (λ), λ∈P
and so for any event A ⊆ P
Qp,X (A) = EPp,X (A), A ⊆ P.
Note
EPp,X (A) = EPp,X (A)1(X<n) + EPp,X (A)1(X≥n) .
Trivially,
EPp,X (A)1(X<n) ≤ P (X < n) ≤ εn . (5.64)
In addition, set A = {λ ∈ P : ψλ (x) ≤ z}. Then using a similar argument
to that of Lemma 2.4 of Johansson (1998b), A is a monotonoic event under
(Pp,n , n ≥ 1), namely Pp,n+1 (A) ≤ Pp,n (A). Hence it follows
EPp,X (A)1(X≥n) ≤ Pp,n (A). (5.65)
Combining (5.64) and (5.65) together implies the lower bound. 
Proof of Theorem 5.6. Set
√ √
nx nx
+
xn = p , x−
n =p
θn+ θn−
where θn± are as in Lemma 5.19. Trivially, x+ −
n , xn → x. Also, since 0 <
α < 1, then it follows
p √
θn± − n
√ → 0, n → ∞.
log n
Note
√ √ p p p
ψλ ( n x) − n ω(x) ψλ ( θn± x±n)− θn± ω(x±
n) log θn±
√ = p · √
log n log θn± log n
p √
θn± ω(x± ) − nω(x)
+ √n (5.66)
log n
and as n → ∞
p √ p
θn± ω(x±
n)− n ω(x) log θn±
√ → 0, √ → 1. (5.67)
log n log n

On the other hand, by Theorem 5.9, under (P, Q_{p,θ_n^±}),

    (ψ_λ(√(θ_n^±) x_n^±) − √(θ_n^±) ω(x_n^±))/√(log θ_n^±)  →_d  N(0, ϱ²(x)).


Hence it follows from (5.66) and (5.67) that under (P, Q_{p,θ_n^±}),

    (ψ_λ(√n x) − √n ω(x))/√(log n)  →_d  N(0, ϱ²(x)).

Taking Lemma 5.19 into account, we now conclude the proof. □
To prove Theorem 5.9, we will again apply the Costin-Lebowitz-
Soshnikov theorem for determinantal point processes, see Theorem 3.7. To
do this, we need the following lemma due to Borodin, Okounkov and Ol-
shanski (2000), in which they proved the Tracy-Widom law for the largest
parts. Set for λ = (λ_1, λ_2, · · · , λ_l) ∈ P

    X(λ) = {λ_i − i, 1 ≤ i ≤ l}.                                        (5.68)

For k = 1, 2, · · ·, the k-point correlation function ρ_k is defined by

    ρ_k(x_1, x_2, · · · , x_k) = Q_{p,θ}(λ ∈ P : x_1, x_2, · · · , x_k ∈ X(λ)),

where x_1, x_2, · · · , x_k are distinct integers.

Lemma 5.20. ρ_k has a determinantal structure as follows:

    ρ_k(x_1, x_2, · · · , x_k) = det(K_θ(x_i, x_j))_{1≤i,j≤k},          (5.69)

with the kernel K_θ of the form

    K_θ(x, y) = √θ (J_x J_{y+1} − J_{x+1} J_y)/(x − y),   x ≠ y,
    K_θ(x, x) = √θ (J'_x J_{x+1} − J'_{x+1} J_x),                       (5.70)

where J_m ≡ J_m(2√θ) is the Bessel function of integer order m.
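Lemma 5.20 can be checked numerically at small intensity. The sketch below (an illustration of ours; θ = 1 and the cutoffs nmax, smax are arbitrary choices) computes the one-point function ρ_1(x) = Q_{p,θ}(x ∈ X(λ)) by brute-force enumeration of partitions, using Q_{p,θ}(λ) = e^{−θ}θ^{|λ|}(d_λ/|λ|!)², and compares it with the kernel diagonal via the expansion K_θ(j, j) = Σ_{s≥1} J_{j+s}(2√θ)² that is employed in the moment computation below.

```python
from math import exp, factorial, lgamma, log, sqrt

theta = 1.0   # small intensity, so the sum over partitions truncates well

def J(m, w, terms=60):
    """Bessel J_m(w), integer order m >= 0, power series with log-space terms."""
    return sum((-1) ** j * exp((m + 2 * j) * log(w / 2) - lgamma(j + 1) - lgamma(j + m + 1))
               for j in range(terms))

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim(lam):
    """Number of standard Young tableaux of shape lam (hook length formula)."""
    n = sum(lam)
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []
    hooks = 1
    for i, p in enumerate(lam):
        for j in range(p):
            hooks *= (p - j) + (conj[j] - i) - 1
    return factorial(n) // hooks

def rho1_bruteforce(x, nmax=14):
    """Q_{p,theta}(x in X(lambda)) by enumeration; the tail |lambda| > nmax is negligible."""
    total = 0.0
    for n in range(nmax + 1):
        for lam in partitions(n):
            if any(lam[i] - (i + 1) == x for i in range(len(lam))):
                total += exp(-theta) * theta ** n * (dim(lam) / factorial(n)) ** 2
    return total

def kernel_diag(x, smax=60):
    """K_theta(x, x) via the expansion sum_{s>=1} J_{x+s}(2 sqrt(theta))^2."""
    z = 2 * sqrt(theta)
    return sum(J(x + s, z) ** 2 for s in range(1, smax + 1))

diffs = [abs(rho1_bruteforce(x) - kernel_diag(x)) for x in (0, 1, 2)]
assert max(diffs) < 1e-8
```

Only nonnegative sites are tested, since for x ≥ 0 membership x ∈ X(λ) involves only the rows 1 ≤ i ≤ l.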

We will postpone the proof to Section 5.5. Now we are ready to give

Proof of Theorem 5.9. Fix 0 < x < 2. It suffices to show that for any
z ∈ ℝ,

    Q_{p,θ}(Ξ_θ(x) ≤ ϱ(x)√(log θ) z) → Φ(z),   θ → ∞,                  (5.71)

where Φ denotes the standard normal distribution function. Equivalently,
it suffices to show that for any z ∈ ℝ,

    Q_{p,θ}(ψ_λ(√θ x) − ⌈√θ x⌉ ≤ a_θ) → Φ(z),

where

    a_θ := a_θ(x, z) = √θ(ω(x) − x) + ϱ(x)√(log θ) z.                  (5.72)

Consider the semi-infinite interval I_θ := [a_θ, ∞) and let N_θ be the number
of points λ_i − i ∈ X(λ) contained in I_θ. Using that the sequence λ_i − i
is strictly decreasing, it is easy to see that relation (5.71) reduces to

    Q_{p,θ}(N_θ ≤ ⌈√θ x⌉) → Φ(z).                                      (5.73)

In this situation, one can apply the Costin-Lebowitz-Soshnikov theorem as
in Section 3.4. Since X(λ) is determinantal by Lemma 5.20,

    (N_θ − E_{p,θ}N_θ)/√(Var_{p,θ}(N_θ))  →_d  N(0, 1)                 (5.74)

provided that Var_{p,θ}(N_θ) → ∞ as θ → ∞.
    In order to derive (5.73) from (5.74), we need some basic asymptotic
estimates of the first two moments of the random variable N_θ. These are
given explicitly in Lemma 5.21 below. Modulo Lemmas 5.20 and 5.21, the
proof is complete. □

Lemma 5.21. Fix 0 < x < 2, z ∈ ℝ and let I_θ = [a_θ, ∞) with a_θ as in
(5.72). Then as θ → ∞,

    E_{p,θ}N_θ = √θ x − (z/(2π))√(log θ) + O(1)

and

    Var_{p,θ}(N_θ) = (log θ/(4π²))(1 + o(1)).
The proof of Lemma 5.21 essentially involves the computation of moments
of the number of points lying in an interval for a discrete determinantal
point process. Let k ∈ ℤ be an integer (possibly depending on the model
parameter θ), and let N_k be the number of points of X(λ) lying in [k, ∞).
Then we have by Lemma 5.20

    E_{p,θ}N_k = Σ_{j=k}^∞ Q_{p,θ}(λ ∈ P : j ∈ X(λ))
              = Σ_{j=k}^∞ K_θ(j, j)
              = Σ_{j=k}^∞ Σ_{s=1}^∞ J_{j+s}(2√θ)²
              = Σ_{m=k}^∞ (m − k) J_m(2√θ)²

and

    Var_{p,θ}(N_k) = Σ_{i=k}^∞ K_θ(i, i) − Σ_{i,j=k}^∞ K_θ(i, j)²
                  = Σ_{i=k}^∞ Σ_{j=−∞}^{k−1} K_θ(i, j)²
                  = Σ_{i=k}^∞ Σ_{j=−∞}^{k−1} (θ/(i − j)²) (J_i(2√θ)J_{j+1}(2√θ) − J_{i+1}(2√θ)J_j(2√θ))².

To evaluate these infinite sums, we need rather precise asymptotic be-
haviours of the Bessel functions on the whole real line. They behave quite
differently in three regions, so we must take extra care in treating the
sums near the two critical values. The lengthy computation will appear in
the forthcoming paper, see Bogachev and Su (2015).
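The interchange of summation behind the last expression for E_{p,θ}N_k can be sanity-checked numerically; this is a small stdlib sketch of ours (θ, k and the truncation limits M, S are arbitrary choices).

```python
from math import exp, lgamma, log, sqrt

def J(m, w, terms=80):
    """Bessel J_m(w), integer order, via its power series (log-space terms)."""
    if m < 0:
        return (-1) ** (-m) * J(-m, w, terms)
    return sum((-1) ** j * exp((m + 2 * j) * log(w / 2) - lgamma(j + 1) - lgamma(j + m + 1))
               for j in range(terms))

theta, k = 2.5, -3
z = 2 * sqrt(theta)
M, S = 80, 60          # truncation limits; J_m(z)^2 decays super-exponentially in m

# sum_{j>=k} sum_{s>=1} J_{j+s}^2, truncated
lhs = sum(J(j + s, z) ** 2 for j in range(k, M) for s in range(1, S))
# sum_{m>=k} (m - k) J_m^2, truncated
rhs = sum((m - k) * J(m, z) ** 2 for m in range(k, M + S))
assert abs(lhs - rhs) < 1e-8
```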

5.4 Berry-Esseen bounds for character ratios

This section is devoted to the study of the convergence rate of random
character ratios. Define for n ≥ 2

    W_n(λ) = (n − 1)χ_λ(1^{n−2}, 2)/(√2 d_λ),   λ ∈ P_n.

Note by (5.32) W_n(λ) = p_2^♯(λ)/√(2n). It was proved in Section 5.2 that

    W_n →_d N(0, 1),   n → ∞,

using the moment method. Namely,

    sup_{−∞<x<∞} |P_{p,n}(W_n(λ) ≤ x) − Φ(x)| → 0,   n → ∞.            (5.75)

Having (5.75), it is natural to ask how fast it converges. This was first
studied by Fulman. In fact, in a series of papers he developed a Stein
method and martingale approach to the study of the Plancherel measure. In
particular, Fulman (2005, 2006) obtained a speed of n^{−s} for any 0 < s < 1/2
and conjectured that the correct speed is n^{−1/2}. Following this, Shao and
Su (2006) confirmed the conjecture and obtained the optimal rate. The main
result reads as follows.

Theorem 5.11.

    sup_{−∞<x<∞} |P_{p,n}(λ ∈ P_n : W_n(λ) ≤ x) − Φ(x)| = O(n^{−1/2}).

The basic strategy of the proof is to construct a W'_n(λ) such that
(W_n(λ), W'_n(λ)) is an exchangeable pair and to apply the Stein method
to (W_n(λ), W'_n(λ)). Let us begin with a Bratteli graph, namely an oriented
graded graph G = (V, E). Here the vertex set is V = P = ∪_{n=0}^∞ P_n, and
there is an oriented edge from λ ∈ P_n to Λ ∈ P_{n+1} if Λ can be obtained
from λ by adding one square, denoted by λ ↗ Λ, see Figure 5.4 below.

Fig. 5.4 Bratteli graph

Lemma 5.22. The Plancherel measure is coherent on G, namely

    P_{p,n}(λ) = Σ_{Λ: λ↗Λ} (d_λ/d_Λ) P_{p,n+1}(Λ).

Proof. According to the hook formula (4.70), it suffices to prove

    Σ_{Λ: λ↗Λ} H_λ/H_Λ = 1.

Let us compute the quotient H_λ/H_Λ. Assume that the new square is located
in the rth row and sth column of the diagram Λ. Since the squares outside
the rth row or sth column have equal hook lengths in the diagrams λ and
Λ, we have by the hook formula

    H_λ/H_Λ = Π_{i=1}^{s−1} [h_{ri}(λ)/(h_{ri}(λ) + 1)] · Π_{j=1}^{r−1} [h_{js}(λ)/(h_{js}(λ) + 1)],

where h_□(λ) denotes the hook length of the square □ in λ.

Next we want to express the quotient in terms of local extrema. Let λ
have interlacing local extrema

    x_1 < y_1 < x_2 < y_2 < · · · < x_q < y_q < x_{q+1}

and suppose the square that distinguishes Λ from λ is attached at the
minimum x_k of λ, see Figure 5.5.

Fig. 5.5 λ ↗ Λ

Then it follows that

    Π_{i=1}^{s−1} h_{ri}(λ)/(h_{ri}(λ) + 1) = Π_{m=1}^{k−1} (x_k − y_m)/(x_k − x_m)

and

    Π_{j=1}^{r−1} h_{js}(λ)/(h_{js}(λ) + 1) = Π_{m=k+1}^{q+1} (x_k − y_{m−1})/(x_k − x_m).

Thus we may rewrite

    H_λ/H_Λ = Π_{m=1}^{k−1} [(x_k − y_m)/(x_k − x_m)] · Π_{m=k+1}^{q+1} [(x_k − y_{m−1})/(x_k − x_m)] =: a_k.

It remains to check

    Σ_{k=1}^{q+1} a_k = 1.                                              (5.76)

To do this, note that these numbers coincide with the coefficients of the
partial fraction expansion

    Π_{i=1}^q (u − y_i) / Π_{i=1}^{q+1} (u − x_i) = Σ_{k=1}^{q+1} a_k/(u − x_k).

Multiplying both sides by u and letting u → ∞ yields (5.76). □
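The hook-ratio identity Σ_{Λ:λ↗Λ} H_λ/H_Λ = 1 is easy to test exactly; here is a small stdlib sketch (the sample shapes are our own choices).

```python
from fractions import Fraction

def conjugate(lam):
    return [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []

def hook_product(lam):
    """Product of all hook lengths of the diagram lam."""
    conj = conjugate(lam)
    H = 1
    for i, p in enumerate(lam):
        for j in range(p):
            H *= (p - j) + (conj[j] - i) - 1
    return H

def covers(lam):
    """All partitions obtained from lam by adding one square."""
    lam = list(lam)
    out = []
    for i in range(len(lam) + 1):
        new = lam[:i] + [(lam[i] if i < len(lam) else 0) + 1] + lam[i + 1:]
        if i == 0 or new[i] <= new[i - 1]:
            out.append(tuple(new))
    return out

checks = []
for lam in [(1,), (2, 1), (4, 2, 2, 1), (5, 3, 3, 2, 1, 1)]:
    s = sum(Fraction(hook_product(lam), hook_product(mu)) for mu in covers(lam))
    checks.append(s == 1)
assert all(checks)
```

Exact rational arithmetic (`Fraction`) avoids any rounding issue in the check.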

To construct an exchangeable pair, we introduce a Markov chain X_0^{(n)},
X_1^{(n)}, · · · , X_k^{(n)}, · · · with state space P_n and transition probability

    p_n(λ, µ) := P(X_1^{(n)} = µ | X_0^{(n)} = λ)
             = (d_µ/((n + 1)d_λ)) ♯P(λ, µ),                             (5.77)

where P(λ, µ) = {τ ∈ P_{n−1} : τ ↗ λ, τ ↗ µ}.

Lemma 5.23. (i) p_n(λ, µ) is a well-defined transition probability:

    Σ_{µ∈P_n} p_n(λ, µ) = 1.                                            (5.78)

(ii) P_{p,n} is a stationary distribution of the Markov chain X^{(n)}, namely

    P_{p,n}(µ) = Σ_{λ∈P_n} P_{p,n}(λ) p_n(λ, µ).

(iii) The Markov chain X^{(n)} is reversible, namely for any λ, µ ∈ P_n,

    P_{p,n}(λ) p_n(λ, µ) = P_{p,n}(µ) p_n(µ, λ).

Proof. Note the following formula (see the note following the proof of
Lemma 3.6 in Fulman (2005)):

    ♯P(λ, µ) = (1/n!) Σ_{π∈S_n} χ_µ(π)χ_λ(π)(r_1(π) + 1),               (5.79)

where r_1(π) is the number of fixed points of π. Hence it follows from (5.77)
that

    Σ_{µ∈P_n} p_n(λ, µ)
        = Σ_{µ∈P_n} (d_µ/((n + 1)d_λ)) (1/n!) Σ_{π∈S_n} χ_µ(π)χ_λ(π)(r_1(π) + 1)
        = (1/((n + 1)d_λ)) Σ_{π∈S_n} [(1/n!) Σ_{µ∈P_n} d_µ χ_µ(π)] χ_λ(π)(r_1(π) + 1).   (5.80)

By (2.11),

    (1/n!) Σ_{µ∈P_n} d_µ χ_µ(π) = 1 if π = 1^n, and 0 if π ≠ 1^n.       (5.81)

Inserting this into (5.80) easily yields (5.78), as desired.
    (ii) is a direct consequence of Lemma 5.22, while (iii) follows from (5.77)
and the Frobenius formula. □

Lemma 5.24. Given λ ∈ P_n,

    E W'_n(λ) = (1 − 2/(n + 1)) W_n(λ).                                 (5.82)

Consequently,

    E_{p,n}W_n = 0.                                                     (5.83)

Proof. By definition,

    E W'_n(λ) = E(W_n(X_1^{(n)}) | X_0^{(n)} = λ)
             = Σ_{µ∈P_n} W_n(µ) p_n(λ, µ)
             = ((n − 1)/((n + 1)√2 d_λ)) Σ_{µ∈P_n} χ_µ(1^{n−2}, 2) ♯P(λ, µ).   (5.84)

Substituting (5.79) and noting (2.12), (5.84) becomes

    ((n − 1)/((n + 1)√2 d_λ)) Σ_{π∈S_n} [(1/n!) Σ_{µ∈P_n} χ_µ(1^{n−2}, 2)χ_µ(π)] χ_λ(π)(r_1(π) + 1)
        = (1 − 2/(n + 1)) W_n(λ).

This completes the proof of (5.82).
    To see (5.83), note

    E_{p,n}W_n = E_{p,n}W'_n = E_{p,n}E W'_n(λ) = (1 − 2/(n + 1)) E_{p,n}W_n.

The conclusion holds. □

Lemma 5.25.

    E(W'_n(λ))² = 1 − 1/n + (2(n − 1)(n − 2)²/(n(n + 1))) · χ_λ(1^{n−3}, 3)/d_λ
                  + ((n − 1)(n − 2)(n − 3)²/(2n(n + 1))) · χ_λ(1^{n−4}, 2²)/d_λ.   (5.85)

Proof. Similarly to (5.84), it follows that

    E(W'_n(λ))² = Σ_{µ∈P_n} W_n²(µ) p_n(λ, µ)
               = ((n − 1)²/(2(n + 1)d_λ)) Σ_{µ∈P_n} (χ_µ²(1^{n−2}, 2)/d_µ) ♯P(λ, µ).   (5.86)

Substituting (5.79), (5.86) becomes

    ((n − 1)²/(2(n + 1)d_λ)) Σ_{π∈S_n} [(1/n!) Σ_{µ∈P_n} (χ_µ²(1^{n−2}, 2)/d_µ) χ_µ(π)] χ_λ(π)(r_1(π) + 1).

To proceed, we need the following identity (see Exercise 6.76 of Stanley
(1999)):

    (1/n!) Σ_{µ∈P_n} (χ_µ²(1^{n−2}, 2)/d_µ) χ_µ(π) = C(n,2)^{−2} ♯(π),   (5.87)

where ♯(π) is the number of pairs (σ, τ) such that σ and τ both come from
the conjugacy class (1^{n−2}, 2) and σ ◦ τ = π. See also Lemma 3.4 of
Fulman (2005).
    Note that σ ◦ τ can assume only values in three distinct conjugacy
classes: (1^n), (1^{n−3}, 3), (1^{n−4}, 2²), and that ♯(π), χ_λ(π) and r_1(π) are
all class functions. It is easy to see that

    ♯(1^n) = C(n, 2),
    ♯(1^{n−3}, 3) = 2(n − 2) C(n, 2),
    ♯(1^{n−4}, 2²) = C(n, 2) C(n − 2, 2).

In combination, we easily get the desired conclusion (5.85). □
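The three class counts can be confirmed by brute force in a small symmetric group. The sketch below (n = 6 is our own choice) counts all pairs of transpositions (σ, τ) according to the conjugacy class of σ ◦ τ, reproducing the displayed values read as totals over each class.

```python
from itertools import combinations
from math import comb

n = 6
transpositions = list(combinations(range(n), 2))   # transpositions of S_n on {0,...,n-1}

def as_perm(t):
    p = list(range(n))
    p[t[0]], p[t[1]] = p[t[1]], p[t[0]]
    return tuple(p)

def compose(a, b):                                 # permutation a after b
    return tuple(a[b[i]] for i in range(n))

def cycle_type(p):
    seen, lens = [False] * n, []
    for i in range(n):
        if not seen[i]:
            j, c = i, 0
            while not seen[j]:
                seen[j] = True
                j, c = p[j], c + 1
            lens.append(c)
    return tuple(sorted(lens, reverse=True))

counts = {}
for s in transpositions:
    for t in transpositions:
        ct = cycle_type(compose(as_perm(s), as_perm(t)))
        counts[ct] = counts.get(ct, 0) + 1

assert counts[(1,) * n] == comb(n, 2)                                  # class (1^n)
assert counts[(3,) + (1,) * (n - 3)] == 2 * (n - 2) * comb(n, 2)       # class (1^{n-3}, 3)
assert counts[(2, 2) + (1,) * (n - 4)] == comb(n, 2) * comb(n - 2, 2)  # class (1^{n-4}, 2^2)
```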
As a direct consequence, we obtain the following

Corollary 5.3.

    Var_{p,n}(W_n) = Var_{p,n}(W'_n) = 1 − 1/n.
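Both (5.83) and Corollary 5.3 can be verified exactly for small n by summing over P_n, with W_n computed through the Frobenius formula W_n(λ) = (√2/n) Σ_i [C(λ_i, 2) − C(λ'_i, 2)] recalled in the next proof (a sketch of ours; n = 6 is arbitrary).

```python
from fractions import Fraction
from math import comb, factorial

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    return [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []

def dim(lam):  # hook length formula
    n = sum(lam)
    conj = conjugate(lam)
    H = 1
    for i, p in enumerate(lam):
        for j in range(p):
            H *= (p - j) + (conj[j] - i) - 1
    return factorial(n) // H

n = 6
mean_T, mean_T2 = Fraction(0), Fraction(0)   # W_n = (sqrt(2)/n) * T, with T integer-valued
for lam in partitions(n):
    T = sum(comb(p, 2) for p in lam) - sum(comb(q, 2) for q in conjugate(lam))
    prob = Fraction(dim(lam) ** 2, factorial(n))   # Plancherel measure of lam
    mean_T += prob * T
    mean_T2 += prob * T * T

assert mean_T == 0                                        # (5.83): E W_n = 0
assert Fraction(2, n * n) * mean_T2 == 1 - Fraction(1, n) # Corollary 5.3: Var W_n = 1 - 1/n
```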
The last lemma we need controls the difference between W_n(λ) and
W'_n(λ).

Lemma 5.26. Let ∆_n(λ) = W_n(λ) − W'_n(λ). Then

    P_{p,n}(|∆_n(λ)| ≥ 4e√2/√n) ≤ 2e^{−2e√n}.

Consequently,

    E_{p,n}|∆_n(λ)|² 1_{(|∆_n(λ)|≥4e√2/√n)} = O(n^{−1/2}).              (5.88)

Proof. Recall the Frobenius formula

    W_n(λ) = (√2/n) Σ_i [C(λ_i, 2) − C(λ'_i, 2)].

Given X_0^{(n)} = λ, X_1^{(n)} = µ only when µ is obtained from λ by moving
a box from row i and column j to row s and column t. Then

    ∆_n(λ) = W_n(λ) − W_n(µ) = (√2/n)(λ_i + λ'_t − λ_s − λ'_j).

Hence we have

    |∆_n(λ)| ≤ (2√2/n) max{λ_1, λ'_1}.

According to Lemma 5.1, we have (5.88), as desired. The proof is now
complete. □
Having made the preceding preparations, we are now ready to prove
Theorem 5.11. The proof is based on the following refinement of Stein's
result for exchangeable pairs.

Theorem 5.12. Let (W, W') be an exchangeable pair of real-valued random
variables such that

    E(W'|W) = (1 − τ)W,

with 0 < τ < 1, and set ∆ = W − W'. Assume E(W²) ≤ 1. Then for any
a > 0,

    sup_{−∞<x<∞} |P(W ≤ x) − Φ(x)| ≤ [E(1 − (1/(2τ))E(∆²|W))²]^{1/2} + a³/τ
                                      + 2a + E∆² 1_{(|∆|>a)}.
Proof. See Theorem 2.1 of Shao and Su (2006). 


Proof of Theorem 5.11. This is a direct application of Theorem 5.12
to the exchangeable pair (W_n, W'_n). Set τ_n = 2/(n + 1), a_n = 4e√2/√n
and ∆_n = W_n − W'_n. In view of (5.88), we need only prove

    E_{p,n}(1 − (1/(2τ_n))E(∆_n²|W_n))² = O(n^{−1}).

In fact, simple algebra yields

    E_{p,n}(1 − (1/(2τ_n))E(∆_n²|W_n))² = (3n² − 5n + 6)/(4n³).         (5.89)

To see this, note

    E_{p,n}(1 − (1/(2τ_n))E(∆_n²|W_n))² = 1 − (1/τ_n) E_{p,n}E(∆_n²|W_n)
                                          + (1/(4τ_n²)) E_{p,n}(E(∆_n²|W_n))².

By Lemma 5.24, we have

    E(∆_n²|W_n) = E(W_n² + (W'_n)² − 2W_nW'_n | W_n)
               = (4/(n + 1) − 1) W_n² + E((W'_n)²|W_n),

and so by Corollary 5.3,

    E_{p,n}E(∆_n²|W_n) = (4/(n + 1))(1 − 1/n).
Again, by Lemma 5.25,

    E(∆_n²|W_n) = A + B + C + D,

where

    A = 1 − 1/n,
    B = (2(n − 1)(n − 2)²/(n(n + 1))) · χ_λ(1^{n−3}, 3)/d_λ,
    C = ((n − 1)(n − 2)(n − 3)²/(2n(n + 1))) · χ_λ(1^{n−4}, 2²)/d_λ,
    D = (4/(n + 1) − 1) · ((n − 1)²/2) · χ_λ²(1^{n−2}, 2)/d_λ².
What we next need is to compute E_{p,n}(A + B + C + D)² explicitly. We
record some data as follows.

    E_{p,n}AB = E_{p,n}AC = 0;

    E_{p,n}AD = ((n − 1)²/n²)(4/(n + 1) − 1);

    E_{p,n}BC = ((n − 1)²(n − 2)³(n − 3)²/(n²(n + 1)²)) E_{p,n}[(χ_λ(1^{n−3}, 3)/d_λ)(χ_λ(1^{n−4}, 2²)/d_λ)]
             = ((n − 1)²(n − 2)³(n − 3)²/(n²(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ(1^{n−3}, 3)χ_λ(1^{n−4}, 2²)
             = 0;

 
    E_{p,n}BD = −((n − 1)³(n − 2)²(n − 3)/(n(n + 1)²)) E_{p,n}[(χ_λ(1^{n−3}, 3)/d_λ)(χ_λ²(1^{n−2}, 2)/d_λ²)]
             = −((n − 1)³(n − 2)²(n − 3)/(n(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ(1^{n−3}, 3)χ_λ²(1^{n−2}, 2)/d_λ
             = −12(n − 1)(n − 2)²(n − 3)/(n³(n + 1)²);

    E_{p,n}CD = −((n − 1)³(n − 2)(n − 3)³/(4n(n + 1)²)) E_{p,n}[(χ_λ(1^{n−4}, 2²)/d_λ)(χ_λ²(1^{n−2}, 2)/d_λ²)]
             = −((n − 1)³(n − 2)(n − 3)³/(4n(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ(1^{n−4}, 2²)χ_λ²(1^{n−2}, 2)/d_λ
             = −2(n − 1)(n − 2)(n − 3)³/(n³(n + 1)²);

    E_{p,n}B² = (4(n − 1)²(n − 2)⁴/(n²(n + 1)²)) E_{p,n}[χ_λ²(1^{n−3}, 3)/d_λ²]
             = (4(n − 1)²(n − 2)⁴/(n²(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ²(1^{n−3}, 3)
             = 12(n − 1)(n − 2)³/(n³(n + 1)²);

    E_{p,n}C² = ((n − 1)²(n − 2)²(n − 3)⁴/(4n²(n + 1)²)) E_{p,n}[χ_λ²(1^{n−4}, 2²)/d_λ²]
             = ((n − 1)²(n − 2)²(n − 3)⁴/(4n²(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ²(1^{n−4}, 2²)
             = 2(n − 1)(n − 2)(n − 3)³/(n³(n + 1)²);

    E_{p,n}D² = ((n − 1)⁴(n − 3)²/(4(n + 1)²)) E_{p,n}[χ_λ⁴(1^{n−2}, 2)/d_λ⁴]
             = ((n − 1)⁴(n − 3)²/(4(n + 1)²)) (1/n!) Σ_{λ∈P_n} χ_λ⁴(1^{n−2}, 2)/d_λ²
             = (2(n − 1)(n − 3)²/(n³(n + 1)²)) (C(n, 2) + 6(n − 2) + (n − 2)(n − 3)).

In combination, we obtain (5.89). The proof is now complete. □

5.5 Determinantal structure

The goal of this section is to provide a self-contained proof of Lemma 5.20
following the line of Borodin, Okounkov and Olshanski (2000). Let us first
prove an equivalent form of Lemma 5.20. Let λ = (λ_1, λ_2, · · · , λ_l) ∈ P.
Recall the ordinary Frobenius coordinates

    ā_i = λ_i − i,   b̄_i = λ'_i − i,   1 ≤ i ≤ ℓ,

and the modified Frobenius coordinates

    a_i = ā_i + 1/2,   b_i = b̄_i + 1/2,   1 ≤ i ≤ ℓ,

where ℓ is the length of the main diagonal in the Young diagram of λ.
Define

    F(λ) = {a_i, −b_i, 1 ≤ i ≤ ℓ}.                                      (5.90)

This is a finite set of half-integers, F(λ) ⊂ ℤ + 1/2. Note that F(λ) con-
sists of equally many positive and negative half-integers. Interestingly,
F(λ) has a nice determinantal structure under the Poissonized Plancherel
measure. In particular, denote the k-point correlation function

    ϱ_k(x_1, · · · , x_k) = Q_{p,θ}(λ ∈ P : x_1, · · · , x_k ∈ F(λ)),

where x_i ∈ ℤ + 1/2, 1 ≤ i ≤ k. Then we have
Lemma 5.27.

    ϱ_k(x_1, · · · , x_k) = det(M(x_i, x_j))_{k×k},

where the kernel function is

    M(x, y) = √θ K_+(|x|, |y|)/(|x| − |y|),   xy > 0,
    M(x, y) = √θ K_−(|x|, |y|)/(x − y),       xy < 0.

Here

    K_+(x, y) = J_{x−1/2} J_{y+1/2} − J_{x+1/2} J_{y−1/2},
    K_−(x, y) = J_{x−1/2} J_{y−1/2} + J_{x+1/2} J_{y+1/2}.
Its proof is based on the following three lemmas. The first one shows that
the Poissonized Plancherel measure can be expressed by a determinant.
Set

    L(x, y) = 0,   xy > 0,
    L(x, y) = (1/(x − y)) · θ^{(|x|+|y|)/2}/(Γ(|x| + 1/2)Γ(|y| + 1/2)),   xy < 0.

Lemma 5.28.

    Q_{p,θ}(λ) = det(L(x_i, x_j))_{2ℓ×2ℓ}/det(1 + L),                   (5.92)

where x_i = a_i, x_{i+ℓ} = −b_i, 1 ≤ i ≤ ℓ.

We remark that the numerator in (5.92) is the determinant of a 2ℓ × 2ℓ
matrix, while the denominator det(1 + L) is interpreted as the Fredholm
determinant

    det(1 + L) = Σ_{X⊆ℤ+1/2} det(L(X)),

where det(L(X)) = 0 unless X consists of equally many positive and neg-
ative half-integers.

Proof. Recall a classical determinant formula for d_λ:

    d_λ/|λ|! = det(1/((ā_i + b̄_j + 1) ā_i! b̄_j!))_{ℓ×ℓ}.

Thus, letting A be the ℓ × ℓ matrix

    A = (L(x_i, x_{ℓ+j}))_{1≤i,j≤ℓ},

we have

    (d_λ/|λ|!)² = det( 0   A
                      −A'  0 ).

It follows by the definition of L that

    Q_{p,θ}(λ) = Q_{p,θ}({x_1, · · · , x_{2ℓ}}) = e^{−θ} det(L(x_i, x_j))_{2ℓ×2ℓ}.

As a direct consequence,

    e^θ = e^θ Σ_{λ∈P} Q_{p,θ}(λ) = Σ_X det(L(X)) = det(1 + L).

We now conclude the proof. □
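The classical determinant formula for d_λ used above can be checked exactly; the following sketch of ours (the test shapes are arbitrary) compares the Frobenius-coordinate determinant with the hook length formula.

```python
from fractions import Fraction
from math import factorial

def conjugate(lam):
    return [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []

def dim(lam):  # hook length formula
    n = sum(lam)
    conj = conjugate(lam)
    H = 1
    for i, p in enumerate(lam):
        for j in range(p):
            H *= (p - j) + (conj[j] - i) - 1
    return factorial(n) // H

def det(m):    # exact determinant by Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

checked = []
for lam in [(3, 2), (4, 4, 2, 1), (5, 3, 3, 1)]:
    conj = conjugate(lam)
    ell = sum(1 for i, p in enumerate(lam) if p > i)     # length of the main diagonal
    a = [lam[i] - (i + 1) for i in range(ell)]           # ordinary Frobenius coordinates a-bar
    b = [conj[i] - (i + 1) for i in range(ell)]          # ordinary Frobenius coordinates b-bar
    M = [[Fraction(1, (a[i] + b[j] + 1) * factorial(a[i]) * factorial(b[j]))
          for j in range(ell)] for i in range(ell)]
    checked.append(det(M) == Fraction(dim(lam), factorial(sum(lam))))
assert all(checked)
```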


We shall below prove that the point process F(λ) is of determinantal. Let
L
ML = .
1+L
Then we have

Lemma 5.29. Given x1 , · · · , xk ∈ Z + 21 ,



%k (x1 , · · · , xk ) = det ML (xi , xj ) k×k
.

Proof. Assume that g : ℤ + 1/2 → ℝ vanishes except at finitely many
points. According to Lemma 5.28, it follows that

    E_{p,θ} Π_{x∈F(λ)}(1 + g(x)) = Σ_{X⊆ℤ+1/2} Π_{x∈X}(1 + g(x)) Q_{p,θ}(F(λ) = X)
        = Σ_{X⊆ℤ+1/2} Π_{x∈X}(1 + g(x)) det(L(X))/det(1 + L)
        = Σ_{X⊆ℤ+1/2} det((1 + g)L(X))/det(1 + L)
        = det(1 + L + gL)/det(1 + L) = det(1 + gM_L)
        = Σ_{X⊆ℤ+1/2} Π_{x∈X} g(x) det(M_L(X)),                         (5.93)

where in the last two equalities we used the properties of Fredholm deter-
minants. On the other hand,

    E_{p,θ} Π_{x∈F(λ)}(1 + g(x)) = Σ_{X⊆ℤ+1/2} Π_{x∈X}(1 + g(x)) Q_{p,θ}(F(λ) = X)
        = Σ_{X⊆ℤ+1/2} Σ_{Y⊆X} Π_{x∈Y} g(x) Q_{p,θ}(F(λ) = X)
        = Σ_{Y⊆ℤ+1/2} Π_{x∈Y} g(x) Σ_{X: Y⊆X} Q_{p,θ}(F(λ) = X)
        = Σ_{Y⊆ℤ+1/2} Π_{x∈Y} g(x) ϱ(Y).                                (5.94)

Thus comparing (5.93) with (5.94) yields

    ϱ(X) = det(M_L(X))

since g is arbitrary. □
To conclude the proof of Lemma 5.27, it only remains to show that the
M_L above is exactly equal to the M of (5.91).

Lemma 5.30.

    M_L(x, y) = M(x, y).

Proof. Fix x, y ∈ ℤ + 1/2 and set z = √θ, so that M and L are functions
of z. We need to prove that for all z ≥ 0,

    M + ML − L = 0.                                                     (5.95)

Obviously, (5.95) is true at z = 0. We shall below prove

    Ṁ + ṀL + ML̇ − L̇ = 0,                                              (5.96)

where Ṁ = ∂M/∂z and L̇ = ∂L/∂z.
    It easily follows by definition that

    L̇(x, y) = 0,   xy > 0,
    L̇(x, y) = sgn(x) z^{|x|+|y|−1}/(Γ(|x| + 1/2)Γ(|y| + 1/2)),   xy < 0.

To compute Ṁ, we use the following formulas:

    (∂/∂z)J_x(2z) = −2J_{x+1}(2z) + (x/z)J_x(2z)
                  = 2J_{x−1}(2z) − (x/z)J_x(2z).

Then

    Ṁ(x, y) = J_{|x|−1/2}J_{|y|+1/2} + J_{|x|+1/2}J_{|y|−1/2},   xy > 0,
    Ṁ(x, y) = sgn(x)(J_{|x|−1/2}J_{|y|−1/2} − J_{|x|+1/2}J_{|y|+1/2}),   xy < 0.

It remains to verify (5.96). To do this, recall the following identities: for
any ν ≠ 0, −1, −2, · · · and any z ≠ 0 we have

    Γ(ν)J_ν(2z) = z^ν Σ_{m=0}^∞ (1/(m + ν)) (z^m/m!) J_m(2z),

    Γ(ν)J_{ν−1}(2z) = z^{ν−1} − z^ν Σ_{m=0}^∞ (1/(m + ν)) (z^m/m!) J_{m+1}(2z).

Now the verification of (5.96) becomes a straightforward application, except
for the occurrence of a singularity at negative integer ν. This singularity
is resolved using the following identity due to Lommel:

    J_ν(2z)J_{1−ν}(2z) + J_{−ν}(2z)J_{ν−1}(2z) = sin(πν)/(πz).

This concludes the proof of Lemma 5.30, and so of Lemma 5.27. □
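Lommel's identity, and the first of the two series identities recalled above, can both be checked numerically. This is a stdlib sketch of ours; the values ν, z are arbitrary, and the Bessel function is evaluated directly from its power series.

```python
from math import exp, lgamma, log, pi, sin

def J(nu, w, terms=60):
    """Bessel J_nu(w) by its power series; negative integer orders via reflection."""
    if nu < 0 and nu == int(nu):
        return (-1) ** int(-nu) * J(-nu, w, terms)
    return sum((-1) ** j * exp((nu + 2 * j) * log(w / 2) - lgamma(j + 1) - lgamma(nu + j + 1))
               for j in range(terms))

# Lommel's identity: J_nu(2z) J_{1-nu}(2z) + J_{-nu}(2z) J_{nu-1}(2z) = sin(pi nu)/(pi z)
nu, z = 0.3, 1.7
lhs = J(nu, 2 * z) * J(1 - nu, 2 * z) + J(-nu, 2 * z) * J(nu - 1, 2 * z)
rhs = sin(pi * nu) / (pi * z)
assert abs(lhs - rhs) < 1e-10

# Gamma(nu) J_nu(2z) = z^nu sum_{m>=0} z^m J_m(2z) / ((m + nu) m!)
from math import factorial, gamma
nu2, z2 = 0.4, 0.9
lhs2 = gamma(nu2) * J(nu2, 2 * z2)
rhs2 = z2 ** nu2 * sum(z2 ** m / ((m + nu2) * factorial(m)) * J(m, 2 * z2)
                       for m in range(40))
assert abs(lhs2 - rhs2) < 1e-10
```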
Turn to the proof of Lemma 5.20. A key observation is the following link
between X(λ) and F(λ), due to Frobenius (see (5.68) and (5.90)):

    F(λ) = (X(λ) + 1/2) △ (ℤ_{≤0} − 1/2).                               (5.97)

Given x_1, · · · , x_k ∈ ℤ, denote X = {x_1 + 1/2, · · · , x_k + 1/2}. Divide X
into positive half-integers and negative half-integers: X_+ = X ∩ (ℤ_{≥0} + 1/2)
and X_− = X ∩ (ℤ_{≤0} − 1/2). If X ⊆ X(λ) + 1/2, then by (5.97), X_+ ⊆ F(λ)
and X_− ∩ F(λ) = ∅; in other words, the negative part of F(λ) is a finite
subset S ⊆ (ℤ_{≤0} − 1/2) \ X_−. This implies

    Q_{p,θ}(X ⊆ X(λ) + 1/2) = Q_{p,θ}(X_+ ⊆ F(λ), X_− ∩ F(λ) = ∅).      (5.98)

By the inclusion-exclusion principle, the right hand side of (5.98) becomes

    Σ_{S⊆X_−} (−1)^{|S|} Q_{p,θ}(X_+ ∪ S ⊆ F(λ))
        = Σ_{S⊆X_−} (−1)^{|S|} ϱ(X_+ ∪ S)
        = Σ_{S⊆X_−} (−1)^{|S|} det(M(X_+ ∪ S)),                         (5.99)

where in the last equality we used Lemma 5.27.
where in the last equation we used Lemma 5.27.


Define a new as follows:
x ∈ Z≥0 + 12 ,

 M (x, y),
x ∈ Z≥0 + 21 , y ∈ Z≤0 − 21 ,

−M (x, y),

M 4 (x, y) =

 −M (x, y), x, y ∈ Z≤0 − 12 , x 6= y,
1 − M (x, x), x, y ∈ Z≤0 − 21 , x = y.

Lemma 5.31. Given x, y ∈ ℤ,

    M^△(x + 1/2, y + 1/2) = ε(x)ε(y)K_θ(x, y),

where ε(x) = sgn(x)^{x+1}.
Proof. It suffices to show

    M(x + 1/2, y + 1/2) = sgn(x)ε(x)ε(y)K_θ(x, y),   x ≠ y,
    M(x + 1/2, x + 1/2) = K_θ(x, x),                 x = y > 0,
    M(x + 1/2, x + 1/2) = 1 − K_θ(x, x),             x = y < 0.

Using the relation

    J_{−n}(2√θ) = (−1)^n J_n(2√θ)

and the definition of M, one can easily verify the case x ≠ y. Also, the claim
remains valid for x = y > 0. It remains to consider the case x = y < 0. In
this case, writing J(x, y) = K_θ(x, y) for the discrete Bessel kernel, we have
to show that

    1 − M(x + 1/2, x + 1/2) = J(x, x),   x ∈ ℤ_{≤0}.

Equivalently,

    1 − J(k, k) = J(−k − 1, −k − 1),   k ∈ ℤ_{≥0}.                      (5.100)

Note that for any k ∈ ℤ,

    J(k, k) = Σ_{m=0}^∞ (−1)^m ((2k + m + 2)↑m/((k + m + 1)!)²) (θ^{k+m+1}/m!),

where we use the symbol (x)↑m = x(x + 1) · · · (x + m − 1). We need to show

    1 − Σ_{m=0}^∞ (−1)^m ((2k + m + 2)↑m/((k + m + 1)!)²) (θ^{k+m+1}/m!)
        = Σ_{l=0}^∞ (−1)^l ((−2k + l)↑l/(Γ(−k + l + 1))²) (θ^{−k+l}/l!).   (5.101)

Examine the right hand side of (5.101). The terms with l = 0, 1, · · · , k − 1
vanish because then 1/Γ(−k + l + 1) = 0. The term with l = k is equal to 1.
Next, the terms with l = k + 1, · · · , 2k vanish because for these values of l
the expression (−2k + l)↑l vanishes. Finally, for l ≥ 2k + 1, say l = 2k + 1 + m,

    (−1)^l ((−2k + l)↑l/(Γ(−k + l + 1))²) (θ^{−k+l}/l!)
        = (−1)^{m+1} ((m + 1)↑l/((k + m + 1)!)²) (θ^{k+m+1}/(2k + 1 + m)!)
        = (−1)^{m+1} ((2k + m + 2)↑m/((k + m + 1)!)²) (θ^{k+m+1}/m!).

Thus we have proved (5.100). □
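The complement relation (5.100) can also be checked numerically, using the expansion J(k, k) = K_θ(k, k) = Σ_{s≥1} J_{k+s}(2√θ)² employed in Section 5.3 (an illustration of ours; θ and the truncation cutoffs are arbitrary).

```python
from math import exp, lgamma, log, sqrt

def J(m, w, terms=80):
    """Bessel J_m(w), integer order, power series; negative orders via reflection."""
    if m < 0:
        return (-1) ** (-m) * J(-m, w, terms)
    return sum((-1) ** j * exp((m + 2 * j) * log(w / 2) - lgamma(j + 1) - lgamma(j + m + 1))
               for j in range(terms))

theta = 3.0
z = 2 * sqrt(theta)

def Jdiag(k, smax=120):
    """K_theta(k, k) = sum_{s>=1} J_{k+s}(2 sqrt(theta))^2, truncated."""
    return sum(J(k + s, z) ** 2 for s in range(1, smax + 1))

# (5.100): 1 - J(k, k) = J(-k-1, -k-1); equivalent to sum_{m in Z} J_m(z)^2 = 1
errs = [abs((1 - Jdiag(k)) - Jdiag(-k - 1)) for k in range(5)]
assert max(errs) < 1e-8
```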
Proof of Lemma 5.20. Fix X = {x_1, x_2, · · · , x_k} ⊆ ℤ. Then, according
to (5.97), (5.98) and (5.99), we have

    ρ_k(x_1, · · · , x_k) = Q_{p,θ}(λ ∈ P : x_1, · · · , x_k ∈ X(λ))
                        = Σ_{S⊆X_−} (−1)^{|S|} det(M(X_+ ∪ S)).         (5.102)

To compute the alternating sum in (5.102), write

    M(X_+ ∪ S) = ( (X_+, X_+)   (X_+, S)
                   (S, X_+)     (S, S)  ),

where (X_+, X_+) stands for the matrix (M(x_i + 1/2, x_j + 1/2)) with x_i + 1/2,
x_j + 1/2 ∈ X_+; the other blocks are similar. Then by definition

    M^△(X_+ ∪ S) = ( (X_+, X_+)    (X_+, S)
                     −(S, X_+)   1 − (S, S) ).

A simple matrix determinant manipulation shows

    Σ_{S⊆X_−} (−1)^{|S|} det(M(X_+ ∪ S)) = det(M^△(X)).

It follows in turn from Lemma 5.31 that

    det(M^△(x_i + 1/2, x_j + 1/2))_{k×k} = det(K_θ(x_i, x_j))_{k×k}.

This concludes the proof. □

Bibliography

Anderson, G.W., Guionnet, A. and Zeitouni, O. (2010). An Introduction to Ran-


dom Matrices, Cambridge University Press.
Andrews, G. E. (1976). The Theory of Partitions, Encyclopedia of Mathematics
and its Applications, 2, Addison-Wesley, Reading, MA.
Bai, Z. D. and Silverstein, J. (2010). Spectral Analysis of Large Dimensional
Random Matrices, Springer Series in Statistics, Science Press.
Baik, J., Deift, P. and Johansson, K. (1999). On the distribution of the length
of the longest increasing subsequence in a random permutation, J. Amer.
Math. Soc. 12, 1119-1178.
Bao, Z. G. and Su, Z. G. (2010). Local semicircle law and Gaussian fluctuation
for Hermite β ensemble, arXiv:1104.3431 [math.PR].
Billingsley, P. (1999a). Convergence of Probability Measures, 2nd edition, Wiley-
Interscience.
Billingsley, P. (1999b). Probability and Measure, 3rd edition, Wiley-Interscience.
Bogachev, L. V. and Su, Z.G. (2015). Gaussian fluctuations for random Plancherel
partitions, in preparation.
Borodin, A., Okounkov, A. and Olshanski, G. (2000). Asymptotics of Plancherel
measures for symmetric groups, J. Amer. Math. Soc. 13, 481-515.
Bourgade, P., Hughes, C. P., Nikeghbali, A. and Yor, M. (2008). The characteristic
polynomial of a random unitary matrix: a probabilistic approach, Duke
Math. J. 145, 45-69.
Brézin, E. and Hikami, S. (2000). Characteristic polynomials of random matrices,
Comm. Math. Phys. 214, 111-135.
Brown, B. M. (1971). Martingale central limit theorems, Ann. Math. Stat. 42,
59-66.
Cantero, M. J., Moral, L. and Velázquez, L. (2003). Five-diagonal matrices and
zeros of orthogonal polynomials on the unit circle, Linear Algebra Appl.
362, 29-56.
Cavagna, A., Garrahan, J. P. and Giardina, I. (2000). Index distribution of ran-
dom matrices with an application to disordered system, Phys. Rev. B 61,
3960-3970.
Chen, L. H. Y., Goldstein, L. and Shao, Q. M. (2010). Normal Approximation


by Stein’s Method. Probability and its Applications, Springer-Verlag, New


York.
Chow, Y. S. (2003). Probability Theory: Independence, Interchangeability, Mar-
tingales, Springer Texts in Statistics, 3rd edition, Springer.
Chung, K. L. (2000). A Course in Probability Theory, 3rd edition, Academic
Press.
Conrey, B. (2005). Notes on eigenvalue distributions for the classical compact
groups. Recent Perspectives in Random Matrix Theory and Number Theory,
LMS Lecture Note Series (No. 322), 111-146, Cambridge University Press.
Costin, O. and Lebowitz, J. (1995). Gaussian fluctuations in random matrices,
Phys. Rev. Lett. 75, 69-72.
Deift, P. (2000). Integrable systems and combinatorial theory, Notices of Amer.
Math. Soc. 47, 631-640.
Deift, P. and Gioev, D. (2009). Random Matrix Theory: Invariant Ensembles and
Universality, Amer. Math. Soc. Providence, RI.
Deift, P., Its. A. and Krasovsky, I. (2012). Toeplitz matrices and Toeplitz determi-
nants under the impetus of the Ising model: Some history and some recent
results, Comm. Pure Appl. Math. 66, 1360-1438.
Diaconis, P. (2003). Patterns in eigenvalues: The 70th Josiah Willard Gibbs Lec-
ture, Bull. Amer. Math. Soc. 40, 155-178.
Diaconis, P. and Evans, S. N. (2001). Linear functionals of eigenvalues of random
matrices, Trans. Amer. Math. Soc. 353, 2615-2633.
Diaconis, P. and Shahshahani, M. (1981). Generating a random permutation with
random transpositions, Z. Wahrsch. Verw. Gebiete 57, 159-179.
Diaconis, P. and Shahshahani, M. (1994). On the eigenvalues of random matrices,
J. Appl. Probab. 31A, 49-62.
Durrett, R. (2010). Probability: Theory and Examples, Cambridge University
Press.
Dumitriu, I. and Edelman, A. (2002). Matrix models for beta ensembles, J. Math.
Phys. 43, 5830-5847.
Dyson, F. J. (1962). The threefold way: Algebraic structure of symmetry groups
and ensembles in quantum mechanics, J. Math. Phys. 3, 1199-1215.
Dyson, F. J. (1962). Statistical theory of energy levels of complex systems II, J.
Math. Phys. 3, 157-165.
Erdös, P., Lehner, J. (1941). The distribution of the number of summands in the
partition of a positive integer, Duke Math. J. 8, 335-345.
Erlihson, M. M. and Granovsky, B. L. (2008). Limit shapes of Gibbs distributions
on the set of integer partitions: The expansive case, Ann. Inst. H. Poincare
Probab. Statist., 44, 915-945.
Eskin, A. and Okounkov, A. (2001). Asymptotics of numbers of branched coverings
of a torus and volumes of moduli spaces of holomorphic differentials, Invent.
Math. 145, 59-103.
Feller, W. (1968). An Introduction to Probability Theory and Its Applications, 1,
3rd edition, John Wiley & Sons, Inc.
Feller, W. (1971). An Introduction to Probability Theory and Its Applications, 2,
2nd edition, John Wiley & Sons, Inc.

Fischer, H. (2011). A History of the Central Limit Theorem: From Classical to
Modern Probability Theory, Springer.
Forrester, P. J. (2010). Log-Gas and Random Matrices, Princeton University
Press.
Forrester, P. J. and Frankel, N. E. (2004). Applications and generalizations of
Fisher-Hartwig asymptotics, J. Math. Phys. 45, 2003-2028.
Forrester, P. J. and Rains, E. M. (2006). Jacobians and rank 1 perturbations
relating to unitary Hessenberg matrices, Int. Math. Res. Not. 2006, 1-36.
Frame, J. S., de B. Robinson, G. and Thrall, R. M. (1954). The hook graphs of
the symmetric groups, Canada J. Math. 6, 316-324.
Fristedt, B. (1993). The structure of random partitions of large integers, Trans.
Amer. Math. Soc. 337, 703-735.
Frobenius, G. (1903). Über die charactere der symmetrischen gruppe, Sitzungsber
Preuss, Aadk. Berlin, 328-358.
Fulman, J. (2005). Stein's method and Plancherel measure of the symmetric group,
Trans. Amer. Math. Soc. 357, 555-570.
Fulman, J. (2006). Martingales and character ratios, Trans. Amer. Math. Soc.
358, 4533-4552.
Gikhman, I. I. and Skorohod, A. V. (1996). Introduction to The theory of Random
Processes, Dover Publications, New York.
Ginibre, J. (1965). Statistical ensembles of complex, quaternion, and real matrices,
J. Math. Phys. 6, 440-449.
Girko, V. L. (1979). The central limit theorem for random determinants (Russian),
Translation in Theory Probab. Appl. 24, 729-740.
Girko, V. L. (1990). Theory of Random Determinants, Kluwer Academic Publishers
Group, Dordrecht.
Girko, V. L. (1998). A refinement of the central limit theorem for random deter-
minants (Russian), Translation in Theory Probab. Appl. 42, 121-129.
Gustavsson, J. (2005). Gaussian fluctuations of eigenvalues in the GUE, Ann.
Inst. H. Poincaré Probab. Stat. 41, 151-178.
Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application,
Academic Press.
Hardy, G. H. and Ramanujan, S. (1918). Asymptotic formulae in combinatory
analysis, Proc. London Math. Soc. 17, 75-115.
Hora, A. (1998). Central limit theorem for the adjacency operators on the infinite
symmetric group, Comm. Math. Phys. 195, 405-416.
Ingram, R. E. (1950). Some characters of the symmetric group, Proc. Amer. Math.
Soc. 1, 358-369.
Ivanov, V. and Kerov, S. (2001). The algebra of conjugacy classes in symmetric
groups, and partial permutations, J. Math. Sci. (New York), 107, 4212-4230.
Ivanov, V. and Olshanski, G. (2002). Kerov’s central limit theorem for the
Plancherel measure on Young diagrams, Symmetric Functions 2001: Sur-
veys of Developments and Perspectives (S. Fomin, ed.), 93-151, NATO Sci.
Ser. II Math. Phys. Chem. 74, Kluwer Acad. Publ., Dordrecht.
Johansson, K. (1998a). On fluctuations of eigenvalues of random Hermitian ma-
trices, Duke Math. J. 91, 151-204.

Johansson, K. (1998b). The longest increasing subsequence in a random permu-


tation and a unitary random matrix model, Math. Res. Lett. 5, 63-82.
Johansson, K. (2001). Discrete orthogonal polynomial ensembles and the
Plancherel measure, Ann. Math. 153, 259-296.
Keating, J. P. and Snaith, N. C. (2000). Random matrix theory and ζ(1/2 + it),
Commun. Math. Phys. 214, 57-89.
Kerov, S. V. (1993). Gaussian limit for the Plancherel measure of the symmetric
group, C. R. Acad. Sci. Paris, Sér. I Math. 316, 303-308.
Kerov, S. V. and Olshanski, G. (1994). Polynomial functions on the set of Young
diagrams, Comptes Rendus Acad. Sci. Paris Sér. I 319, 121-126.
Killip, R. (2008). Gaussian fluctuations for β ensembles, Int. Math. Res. Not.
2008, 1-19.
Killip, R. and Nenciu, I. (2004). Matrix models for circular ensembles, Int. Math.
Res. Not. 2004, 2665-2701.
Krasovsky, I. V. (2007). Correlations of the characteristic polynomials in the
Gaussian unitary ensembles or a singular Hankel determinant, Duke Math.
J. 139, 581-619.
Ledoux, M. and Talagrand, M. (2011). Probability in Banach Spaces: Isoperimetry
and Processes, Springer Verlag.
Logan, B. F. and Shepp, L. A. (1977). A variational problem for random Young
tableaux, Adv. Math. 26, 206-222.
Lytova, A. and Pastur, L. (2009). Central limit theorem for linear eigenvalue
statistics of random matrices with independent entries, Ann. Prob. 37, 1778-
1840.
Macchi, O. (1975). The coincidence approach to stochastic point processes, Adv.
Appl. Prob. 7, 83-122.
Macdonald, I. (1995). Symmetric Functions and Hall Polynomials, 2nd edition,
Clarendon Press, Oxford.
Majumdar, S. N., Nadal, C., Scardicchio, A. and Vivo, P. (2009). The index
distribution of Gaussian random matrices, Phys. Rev. Lett. 103, 220603.
McLeish, D. L. (1974). Dependent central limit theorems and invariance principles, Ann. Probab. 2, 620-628.
Mehta, M. L. (2004). Random Matrices, 3rd edition, Academic Press.
Okounkov, A. (2000). Random matrices and random permutations, Int. Math.
Res. Not. 2000, 1043-1095.
Okounkov, A. (2001). Infinite wedge and random partitions, Selecta Math. 7,
57-81.
Okounkov, A. (2003). The use of random partitions, XIVth ICMP, 379-403, World
Sci. Publ., Hackensack, New Jersey.
Okounkov, A. and Olshanski, G. (1998). Shifted Schur functions, St. Petersburg
Math. J. 9, 239-300.
Okounkov, A. and Pandharipande, R. (2005). Gromov-Witten theory, Hurwitz
numbers, and matrix models, Algebraic geometry—Seattle 2005, Part I, 325-
414, Proc. Sympos. Pure Math., 80, Part I, Amer. Math. Soc., Providence,
RI, 2009.
Pastur, L. and Shcherbina, M. (2011). Eigenvalue Distribution of Large Random Matrices, Mathematical Surveys and Monographs, 171, Amer. Math. Soc.
Petrov, V. (1995). Limit Theorems of Probability Theory, Sequences of Indepen-
dent Random Variables, Oxford University Press.
Pittel, B. (1997). On a likely shape of the random Ferrers diagram, Adv. Appl.
Math. 18, 432-488.
Pittel, B. (2002). On the distribution of the number of Young tableaux for a uni-
formly random diagram, Adv. Appl. Math. 29, 184-214.
Postnikov, A. G. (1988). Introduction to analytic number theory, Translation of
Mathematical Monographs, 68, Amer. Math. Soc., Providence, RI.
Ramírez, J., Rider, B. and Virág, B. (2011). Beta ensembles, stochastic Airy spectrum, and a diffusion, J. Amer. Math. Soc. 24, 919-944.
Riordan, J. (1968). Combinatorial identities, Wiley, New York.
Ross, N. (2011). Fundamentals of Stein's method, Probab. Surveys 8, 210-293.
Sagan, B. E. (2000). The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions, 2nd edition, GTM 203, Springer.
Serfozo, R. (2009). Basics of Applied Stochastic Processes, Probability and its
Applications, Springer.
Shao, Q. M. and Su, Z. G. (2006). The Berry-Esseen bound for character ratios,
Proc. Amer. Math. Soc. 134, 2153-2159.
Simon, B. (2004). Orthogonal Polynomials on the Unit Circle, 1, AMS Collo-
quium Series, Amer. Math. Soc., Providence, RI.
Soshnikov, A. (2000). Determinantal random point fields, Russian Math. Surveys 55, 923-975.
Soshnikov, A. (2002). Gaussian limit for determinantal random point fields, Ann.
Probab. 30, 171-180.
Stanley, R. (1999). Enumerative combinatorics, 2, Cambridge Studies in Ad-
vanced Mathematics, 62, Cambridge University Press.
Steele, J. M. (1995). Variations on monotone subsequence problem of Erdös and
Szekeres, In Discrete Probability and Algorithms (Aldous, Diaconis, and
Steele, Eds.) 111-132, Springer Publishers, New York.
Steele, J. M. (1997). Probability theory and combinatorial optimization, CBMS-NSF Regional Conference Series in Applied Mathematics, 69, SIAM.
Stein, C. (1970). A bound for the error in the normal approximation to the distri-
bution of a sum of dependent random variables, Proc. Sixth Berkeley Symp.
Math. Statist. Prob. 2, University of California Press.
Stein, C. (1986). Approximate Computation of Expectations, IMS, Hayward, Cal-
ifornia.
Su, Z. G. (2014). Normal convergence for random partitions with multiplicative measures, Teor. Veroyatnost. i Primenen. 59, 97-129.
Szegö, G. (1975). Orthogonal Polynomials, 23, Colloquium Publications, 4th edi-
tion, Amer. Math. Soc., Providence, RI.
Tao, T. (2012). Topics in random matrix theory, Graduate Studies in Mathemat-
ics, 132, Amer. Math. Soc..
Tao, T. and Vu, V. (2012). A cnetral limit theorem for the determinant of a
Wigner matrix, Adv. Math. 231, 74-101.
Tao, T. and Vu, V. (2010). Random matrices: universality of local eigenvalue statistics, Acta Math. 206, 127-204.
Temperley, H. (1952). Statistical mechanics and the partition of numbers, II. The form of crystal surfaces, Proc. Cambridge Philos. Soc. 48, 683-697.
Tracy, C. and Widom, H. (1994). Level spacing distributions and the Airy kernel,
Comm. Math. Phys. 159, 151-174.
Tracy, C. and Widom, H. (2002). Distribution functions for largest eigenvalues
and their applications, Proc. ICM, 1, 587-596, Higher Education Press,
Beijing.
Trotter, H. (1984). Eigenvalue distributions of large Hermitian matrices:
Wigner’s semicircle law and a theorem of Kac, Murdock, and Szegö, Adv.
Math. 54, 67-82.
Valkó, B. and Virág, B. (2009). Continuum limits of random matrices and the
Brownian carousel, Invent. Math. 177, 463-508.
Vershik, A. M. (1994). Asymptotic combinatorics and algebraic analysis, Proc.
ICM, Zürich, 2, 1384-1394, Birkhäuser, Basel, 1995.
Vershik, A. M. (1996). Statistical mechanics of combinatorial partitions, and their
limit configurations, Funct. Anal. Appl. 30, 90–105.
Vershik, A. M. and Kerov, S. V. (1977). Asymptotics of the Plancherel measure
of the symmetric group and the limiting form of Young tables, Soviet Math.
Doklady 18, 527–531.
Vershik, A. M. and Kerov, S. V. (1985a). Asymptotics of the largest and the typical dimensions of irreducible representations of a symmetric group, Funct. Anal. Appl. 19, 21-31.
Vershik, A. M. and Kerov, S. V. (1985b). Asymptotic theory of characters of the symmetric group, Funct. Anal. Appl. 15, 246-255.
Vershik, A. M. and Yakubovich, Yu. (2006). Fluctuations of the maximal particle energy of the quantum ideal gas and random partitions, Comm. Math. Phys. 261, 759-769.
Wieand, K. L. (1998). Eigenvalue distributions of random matrices in the permutation group and compact Lie groups, Ph.D. thesis, Harvard University.
Index

Airy function, 92
Airy operator, 143
Airy point process, 118
algebra isomorphism, 222
Anderson, 28, 90
Andrews, 154, 202
angular shift formula, 149
aperiodic, 10
Ascoli-Arzelà lemma, 30
average spectral measure, 36
Bai, 28
Baik, 219
Bao, 143
Barnes G-function, 134, 136
bell curve, 2
Bernoulli law, 1, 3
Bernoulli random variable, 3, 124
Berry-Esseen bound, 244
Bessel function, 242
Bessel point process, 118
Billingsley, 2, 29, 32, 186
binomial distribution, 4
binomial formula, 2
Blaschke product, 59
Bochner theorem, 5
Bogachev, 244
Borodin, 242, 253
Bose-Einstein model, 201
Bourgade, 69
Brézin, 134
Bratelli graph, 245
Brown, 8
Brownian carousel, 143
Brownian motion, 31
Burnside identity, 44, 207, 208
canonical grading, 222, 226
canonical isomorphism, 222
Cantero, 60
Carleman condition, 12
Cauchy integral formula, 111, 154, 174
Cauchy random variable, 4
Cauchy theorem, 135
Cauchy transform, 25
Cauchy-Binet formula, 124
Cauchy-Schwarz inequality, 23, 94, 122, 184
Cavagna, 143
Chapman-Kolmogorov equation, 10
character, 40
character ratio, 186, 236, 244
character table, 41
characteristic function, 4, 162
characteristic polynomial, 34, 53, 59, 133
Chebyshev inequality, 4, 18
Chebyshev polynomial, 220
Chen, 19, 21
chi random variable, 126
Chow, 2
Chung, 2
Circular β ensemble, 36

Circular unitary ensemble, 33
CMV matrix, 64, 87
complete symmetric polynomial, 44
complex normal random variable, 33, 48, 50, 125
Concentration of measure inequality, 99
conjugacy class, 34, 39, 249
conjugate partition, 45
Conrey, 35
continual diagram, 28
converges in distribution, 3, 239
correlation function, 116, 253
Costin, 118
Costin-Lebowitz-Soshnikov theorem, 143, 242, 243
Cramér rule, 62
Cramér-Wald device, 56
cumulant, 14
cycle type, 38
De Moivre-Laplace CLT, 2
Deift, 48, 89, 90, 208, 219
determinantal point process, 91, 116, 122, 242, 243
Diaconis, 37, 48, 55, 186
dominance order, 42
Donsker invariance principle, 31
Dumitriu, 137, 142
Durrett, 2
Dyson, 36, 136
Edelman, 137, 142
eigendecomposition, 107, 137, 139
elementary symmetric polynomial, 44
empirical spectral distribution, 95
Erdös, 158
ergodic, 10
Erlihson, 205
Eskin, 186
Euler constant, 135
Evans, 48, 55
exchangeable pair, 22, 245
exclusion-inclusion principle, 257
Favard theorem, 58
Feller condition, 7
Feller-Lévy CLT, 6, 16, 31
Fischer, 2
five diagonal matrix model, 69
Forrester, 34, 81, 134
Four moment comparison theorem, 18
Fourier coefficient, 47
Fourier transform, 27, 37, 207, 218
fractional linear transform, 145
Frankel, 134
Fredholm determinant, 254, 255
Fristedt, 158, 162
Frobenius, 187
Frobenius coordinate, 221, 253
Frobenius formula, 247, 250
Fulman, 236, 244, 247, 249
Garrahan, 143
Gaussian unitary ensemble, 51, 89
geometric random variable, 161
Giardina, 143
Gibbs measure, 36
Gikhman, 32
Gikhman-Skorohod theorem, 32, 181
Ginibre, 125
Ginibre model, 125
Gioev, 89, 90
Girko, 125
Goldstein, 19, 21
Gram-Schmidt algorithm, 33, 57
grand ensemble, 160, 169
Granovsky, 205
Green function, 98
Guionnet, 28, 90
Gumbel distribution, 158, 203
Gustavsson, 121, 237
Haar measure, 33, 66
Hadamard inequality, 118
Hall, 9
Hankel matrix, 134
Hermite β ensemble, 136
Hermite orthogonal polynomial, 90, 92, 113, 233
Hermite wave function, 91
Hessenberg matrix, 59, 64
Heyde, 9
Hikami, 134
Hilbert space, 57
hook formula, 211, 245
Hora, 236
Householder transform, 66, 127, 137
Hughes, 69
idempotent projection, 126
Ingram, 225
integrating-out formula, 91
irreducible, 10, 39
Its, 48
Ivanov, 221, 222, 224, 227, 228, 230, 233, 235
Jacobian matrix, 65, 81
Johansson, 112, 142, 219, 241
Keating, 53
kernel function, 116, 257
Kerov, 209, 210, 212, 218, 221, 227, 228, 230
Kerov CLT, 221
Khinchine law, 3, 38
Killip, 69, 75, 79
Knuth, 208
Kolmogorov continuity criterion, 185
Krasovsky, 48, 133, 134
Lévy continuity theorem, 5
Lévy maximal inequality, 31
Lebesgue measure, 34, 89, 122
Lebowitz, 118
Ledoux, 6
left orthogonal L-polynomial, 60
Lehner, 158
lexicographic order, 41
Lindeberg condition, 7, 151, 169
Lindeberg replacement strategy, 16
Lindeberg-Feller CLT, 7
Logan, 209, 219
Lyapunov CLT, 74, 126
Lyapunov condition, 7, 124
Lytova, 101, 112
Möbius transform, 149
Möbius inversion formula, 119
Macchi, 116
Macdonald, 38, 225
Majumdar, 143
Marchenko-Pastur law, 14
Markov chain, 9, 145, 150, 158, 247
Markov inequality, 50, 55, 74, 165, 190
Markov property, 9
martingale, 8
martingale CLT, 8, 130, 150
martingale difference sequence, 8, 77, 150
matrix representation, 39
McLeish, 9
Mehta, 34, 125
moment generating function, 26
Moral, 60
multiplicative measure, 160, 200
Nadal, 143
Nenciu, 69, 75, 79
Nikeghbali, 69
normal random variable, 4, 240
Okounkov, 186, 219, 227, 242, 253
Olshanski, 221, 222, 224, 227, 228, 233, 235, 242, 253
Pandharipande, 186
partition, 41, 153
Pastur, 101, 112
Pasval-Plancherel identity, 218
Pittel, 158, 173, 181, 188, 189, 194
Plancherel measure, 207, 210, 240, 244
Plancherel theorem, 76, 207
Plancherel-Rotach formula, 92
Poincaré disk model, 145
Poincaré-Nash upper bound, 99
Poisson point process, 116
Poisson random variable, 4, 240
Poissonized Plancherel measure, 240, 253
polar coordinate, 48
polytabloid, 43
positive recurrent, 10
Postnikov, 154, 155
Prüfer phase, 59, 69
Prohorov theorem, 30
projective limit, 222
Rains, 81
Ramírez, 143
random Plancherel partition, 28, 207
random uniform partition, 161
recurrent, 10
reversible, 247
Rider, 143
Riemann zeta function, 53
Riemann-Hilbert problem, 134
Riesz transform, 28
right orthonormal L-polynomial, 60
Riordan, 235
Robinson, 208
Ross, 19
RSK correspondence, 208
Sagan, 38, 208
Scardicchio, 143
Schensted, 208
Schur orthonormality, 46
Schur polynomial, 45
Schur-Weyl duality, 46
selection principle, 18
Serfozo, 11
Shahshahani, 37, 186
Shao, 19, 21, 244, 250
Shepp, 209, 219
Sherman-Morrison equation, 99
Silverstein, 28
Simon, 48, 58
Skorohod, 32
slowly varying, 52
small ensemble, 169
Snaith, 53
Sobolev norm, 217
Sobolev space, 112
Soshnikov, 118
stationary distribution, 10, 247
Steele, 208
Stein, 19
Stein continuity theorem, 20
Stein equation, 19, 99, 102, 105
Stein method, 19, 236, 244, 245
Stieltjes continuity theorem, 27, 101
Stieltjes transform, 25, 100
Stirling formula, 3, 191
stochastic equicontinuity, 32, 160, 181
Su, 143, 200, 244, 250
symmetric group, 38
Szegö, 91
Szegö dual, 57
Szegö recurrence relation, 58
Szegö strong limit theorem, 48
tabloid, 42
Talagrand, 6
Tao, 18, 28, 125
Temperley, 157
Tracy, 219
Tracy-Widom law, 122, 219, 238, 242
transient, 10
transition density, 158
transition matrix, 9
transition probability, 9, 247
tridiagonal matrix model, 142, 144
Trotter, 125
Ulam problem, 210
uniform measure, 36, 66, 200, 207
uniform topology, 233
unitary, 33
Ursell function, 119
Valkó, 143, 146
Vandermonde determinant, 44, 90
Velázquez, 60
Verblunsky coefficient, 58, 64, 69, 74, 79
Verblunsky theorem, 58
Vershik, 157, 169, 201–203, 209, 210, 212, 218, 227
Virág, 143, 144, 146
Vivo, 143
Vu, 18, 125
Wasserstein distance, 21
weak topology, 233
wedge product, 81
weight filtration, 225, 235
weight grading, 224, 226
Weyl formula, 34, 53
Widom, 219
Wieand, 55
Wigner semicircle law, 14, 95, 113, 133, 142

Yakubovich, 201, 203


Yor, 69
Young diagram, 28, 42, 156, 210, 221, 237, 253
Young tableau, 42

Zeitouni, 28, 90
