
Chapter II

Formal matrix integrals

In this chapter we introduce the notion of a formal matrix integral, which is very
useful for combinatorics, as it turns out to be identical to the generating function of
maps of chapter I.
A formal integral is a formal series (an asymptotic series) whose coefficients are
Gaussian matrix integrals; it is not necessarily convergent (in fact it is definitely not
convergent, it has a vanishing radius of convergence).
Then, using Wick's theorem to compute Gaussian integrals in a combinatorial way,
we relate formal matrix integrals to generating functions for maps.
The relationship between formal matrix integrals and maps was first noticed by
't Hooft in the context of the study of strong nuclear interactions [?], and then really
introduced as a tool for studying maps by Brezin-Itzykson-Parisi-Zuber in 1978 [?].

1 Definition of a formal matrix integral


1.1 Introductory example: 1-matrix model and quadrangulations
Consider the following polynomial moment of a gaussian integral over the set of hermitian N × N matrices:
$$A_k(N) = \frac{N^k}{k!\,4^k}\int_{H_N} dM\;(\mathrm{Tr}\,M^4)^k\; e^{-N\,\mathrm{Tr}\frac{M^2}{2}}$$
where M is an N × N hermitian matrix, with measure
$$dM = \frac{1}{2^{N/2}\,(\pi/N)^{N^2/2}}\;\prod_{i=1}^N dM_{ii}\;\prod_{i<j} d\mathrm{Re}M_{ij}\, d\mathrm{Im}M_{ij}$$

normalized so that $\int dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2}} = 1$.
We shall see below that $A_k(N)$ is a polynomial in N and 1/N, so it can be continued to any $N \in \mathbb{C}^*$.
With the sequence $A_k(N)$, $k = 0, 1, 2, \dots, \infty$, we define a formal power series (asymptotic series) in powers of a variable which we choose to call $t_4$ because it is associated to $\mathrm{Tr}\, M^4$ (later we shall associate $t_n$ to $\mathrm{Tr}\, M^n$):
$$Z_N(t_4) = \sum_{k=0}^\infty t_4^k\, A_k(N).$$

$Z_N(t_4)$ is well defined as a formal power series in $t_4$; in other words, $Z_N(t_4)$ is nothing but a notation which summarizes all the coefficients $A_k(N)$ in only one symbol $Z_N(t_4)$. This means that every time we are going to write properties or equations for $Z_N(t_4)$, we actually mean properties of the coefficients in the small $t_4$ expansion. Writing the equations in terms of $Z_N(t_4)$ is merely a shorter way of writing equations for $A_k(N)$ $\forall k$.
We are never going to consider $Z_N(t_4)$ as a usual function of $t_4$, and in fact, for $t_4 > 0$ the series $Z_N(t_4)$ is never convergent (in the Borel sense for instance).
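As a sanity check on these definitions (not from the book), one can estimate the first coefficient numerically: with the normalized measure above, $A_1(N) = \frac{N}{4}\langle\mathrm{Tr}\,M^4\rangle$, and Wick's theorem (section 2) gives the exact value $(2N^2+1)/4$. The following Python sketch (sampler name and parameters are illustrative) compares a Monte Carlo estimate against that value.

```python
# Monte Carlo sanity check of A_1(N) = (N/4) <Tr M^4> = (2 N^2 + 1)/4
# for the normalized Gaussian measure with <M_ij M_kl> = delta_il delta_jk / N.
import numpy as np

def sample_hermitian(N, rng):
    """Hermitian N x N matrix with covariance <M_ij M_kl> = delta_il delta_jk / N."""
    M = np.diag(rng.normal(scale=1.0 / np.sqrt(N), size=N)).astype(complex)
    x = rng.normal(scale=1.0 / np.sqrt(2 * N), size=(N, N))
    y = rng.normal(scale=1.0 / np.sqrt(2 * N), size=(N, N))
    iu = np.triu_indices(N, k=1)
    M[iu] = x[iu] + 1j * y[iu]                 # upper triangle
    M[(iu[1], iu[0])] = x[iu] - 1j * y[iu]     # lower triangle = complex conjugate
    return M

N, n_samples = 6, 20000
rng = np.random.default_rng(0)
tr_m4 = []
for _ in range(n_samples):
    M = sample_hermitian(N, rng)
    M2 = M @ M
    tr_m4.append(np.trace(M2 @ M2).real)

A1_mc = (N / 4) * np.mean(tr_m4)
A1_exact = (2 * N**2 + 1) / 4
print(A1_mc, A1_exact)   # the two numbers should agree within Monte Carlo error
```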

1.2 Comparison with convergent integrals


The definition of a formal matrix integral $Z_N(t_4)$ is not to be confused with the hermitian convergent matrix integral:
$$Z_{\mathrm{conv}}(t_4, N) = \int_{H_N} dM\; e^{-N\,\mathrm{Tr}\left(\frac{M^2}{2} - t_4\frac{M^4}{4}\right)} = \int_{H_N} dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2}}\;\sum_{k=0}^\infty t_4^k\,\frac{N^k}{k!\,4^k}\,(\mathrm{Tr}\, M^4)^k$$

One should notice that $Z_{\mathrm{conv}}(t_4, N)$ is well defined only for $\mathrm{Re}(t_4) < 0$.
The existence and nature of large N asymptotics of hermitian convergent matrix integrals is a difficult problem which has been solved in a few cases, and which remains an open question in many cases at the time this book is being written (the 2-matrix model for instance).
The only difference between the definitions of $Z_N(t_4)$ and $Z_{\mathrm{conv}}(t_4, N)$ is that the order of the sum over k and the integral over $H_N$ has been exchanged. In general, the sum and the integral do not commute, and in general:
$$Z_N(t_4) \ne Z_{\mathrm{conv}}(t_4, N)$$

in other words:
$$\sum_{k=0}^\infty t_4^k\,\frac{N^k}{k!\,4^k}\int_{H_N} dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2}}\,(\mathrm{Tr}\, M^4)^k \;\ne\; \int_{H_N} dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2}}\;\sum_{k=0}^\infty t_4^k\,\frac{N^k}{k!\,4^k}\,(\mathrm{Tr}\, M^4)^k$$

Those two definitions of a matrix integral differ even after Borel resummation: after analytic continuation from $t_4 > 0$ (which is the interesting regime for combinatorics) to $t_4 < 0$ (where $Z_{\mathrm{conv}}(t_4)$ is well defined), they do not necessarily coincide; they might still differ by exponentially small terms.
Convergent matrix integrals are not the topic of this book, and readers interested in asymptotic properties of large matrix integrals can refer, for instance, to [].

1.3 Formal integrals


So far, we have studied the example of a formal matrix integral with quartic potential; now let us give the general definition of a formal integral of the form:
$$\int_{\mathrm{formal}} e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\; dM.$$

The idea is to expand (as a Taylor series) the exponential of the non-quadratic part of $V(M)$, write the integral as an infinite sum of polynomial moments of a gaussian integral, and then exchange the integral and the summation.
More precisely, let
$$V(M) = \frac{M^2}{2} - \sum_{j=3}^d \frac{t_j}{j}\, M^j$$

be called the potential; then we define the following polynomial moment of a Gaussian integral:
$$A_k = \frac{1}{k!}\,\frac{N^k}{t^k}\int_{H_N} dM\; e^{-\frac{N}{t}\,\mathrm{Tr}\frac{M^2}{2}}\;\left(\sum_{j=3}^d \frac{t_j}{j}\,\mathrm{Tr}\, M^j\right)^k.$$
j

Lemma 1.1 $A_k$ is a polynomial in t such that:
$$A_k = \sum_{m=k/2}^{[(d-2)k/2]} A_{k,m}\; t^m.$$

proof:
A monomial moment of a Gaussian integral vanishes if the degree of the monomial is odd, and is proportional to t to the power half the degree if the degree is even. The polynomial $\left(\sum_{j=3}^d \frac{t_j}{j}\,\mathrm{Tr}\, M^j\right)^k$ can be decomposed into a finite sum of monomials in M of the form:
$$\prod_{j=3}^d (\mathrm{Tr}\, M^j)^{n_j}, \qquad \sum_{j=3}^d n_j = k,$$
i.e. of degree $\sum_j j\, n_j$. Therefore such a term contributes to $A_k$ with a power $t^m$ equal to:
$$m = -k + \frac{1}{2}\sum_{j=3}^d j\, n_j = \frac{1}{2}\sum_{j=3}^d (j-2)\, n_j \;\ge\; \frac{1}{2}\sum_{j=3}^d n_j = \frac{k}{2}.$$
The upper bound $m \le (d-2)k/2$ is easily obtained because $j \le d$ and $\sum_j n_j = k$. □
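These degree bounds are easy to check by brute-force enumeration. Here is a small Python sketch (not from the book; the function name is merely illustrative) that lists, for given d and k, all powers m of t that can appear in $A_k$ and verifies $k/2 \le m \le (d-2)k/2$.

```python
# Enumeration check of the degree bounds in Lemma 1.1: for every choice of
# n_3,...,n_d >= 0 with sum n_j = k, the power of t contributed is
# m = -k + (1/2) sum_j j n_j = (1/2) sum_j (j-2) n_j (odd total degree gives a
# vanishing Gaussian moment), and it satisfies k/2 <= m <= (d-2) k/2.
from itertools import product

def t_powers(d, k):
    """Return the set of powers m of t that can appear in A_k for a degree-d potential."""
    powers = set()
    js = range(3, d + 1)
    for ns in product(range(k + 1), repeat=d - 2):   # all (n_3,...,n_d)
        if sum(ns) != k:
            continue
        total_degree = sum(j * n for j, n in zip(js, ns))
        if total_degree % 2:            # odd moment of a Gaussian vanishes
            continue
        powers.add(-k + total_degree // 2)
    return powers

for d in (3, 4, 5):
    for k in range(1, 7):
        ps = t_powers(d, k)
        assert all(k / 2 <= m <= (d - 2) * k / 2 for m in ps), (d, k, ps)
print("degree bounds of Lemma 1.1 verified for small d, k")
```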
This lemma allows us to define:
$$\tilde A_m = \sum_{k=0}^{2m} A_{k,m}.$$

Definition 1.1 The formal integral is the formal power series in t:
$$Z(t) = \int_{\mathrm{formal}} e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)} = \sum_{m=0}^\infty t^m\, \tilde A_m.$$

Remark 1.1 It is also a formal power series in each $t_j$ with $3 \le j \le d$. We may choose t = 1 and expand in powers of $t_3$ or $t_4$..., as we did for quadrangulations. It is clear that t can be absorbed by a redefinition $M \to \sqrt{t}\, M$ and $t_j \to t^{\frac{j}{2}-1}\, t_j$, exactly like in section 2.3 of chapter I.

Remark 1.2 The formal integral and the convergent matrix integral differ by the order of integration and summation. In general the two operations do not commute, and the formal integral and the convergent integral are different:
$$\int_{\mathrm{formal}} e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\; dM \;\ne\; \int_{H_N} e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\; dM.$$

Remark 1.3 We shall see below in section II.2.2 that each $A_{k,m}$ is a Laurent polynomial in N:
$$A_{k,m} = \sum_{g=-g_{\min}(k,m)}^{g_{\max}(k,m)} N^g\, A_{k,m}^{(g)}$$
so that each $\tilde A_m$ is also a Laurent polynomial in N, and thus, at any given order $t^m$, the formal integral is a Laurent polynomial in N, and thus a formal matrix integral always has a 1/N expansion.
In other words, the question of a 1/N expansion is trivial for formal integrals, whereas it is a difficult question for convergent integrals (mostly unsolved for multi-matrix integrals with complex potentials).

Remark 1.4 Most physicists' works in so-called "2d quantum gravity" actually use that "formal" definition of a matrix integral (in fact almost all works in quantum field theory after Feynman's work use formal integrals). Most of the works initiated by Brezin-Itzykson-Parisi-Zuber [?] in 1978 assume the formal definition of matrix integrals, and are correct and rigorous only with that definition; they are often wrong if one uses convergent hermitian matrix integrals instead.

2 Wick’s theorem and combinatorics


2.1 Generalities about Wick’s theorem
Wick's theorem is a very useful theorem for combinatorics. It gives a combinatorial way of computing Gaussian expectation values, or conversely, it gives an algebraic and analytical way of enumerating graphs.

Let A be a positive definite $n \times n$ symmetric matrix, and let $x_1, \dots, x_n$ be n Gaussian random variables, with probability measure:
$$d\mu(x_1, \dots, x_n) = \frac{\sqrt{\det A}}{(2\pi)^{n/2}}\; e^{-\frac{1}{2}\sum_{i,j} A_{i,j}\, x_i x_j}\; dx_1\, dx_2 \dots dx_n$$
and let
$$B = A^{-1} \qquad(\text{II-2-1})$$
which we call the propagator.
Let us denote expectation values with brackets (this is the usual notation in physics):
$$\langle f(x_1, \dots, x_n)\rangle \;\stackrel{\mathrm{def}}{=}\; \int f(x_1, \dots, x_n)\; d\mu(x_1, \dots, x_n).$$

Wick's theorem states that:

Theorem 2.1 (Wick's theorem) []
The expectation value of a product of gaussian random variables is the sum over all pairings of products of expectation values of pairs.
We have
$$\langle x_i x_j\rangle = B_{i,j} = \text{propagator}$$
and the expectation value of any odd number of variables is zero, and:
$$\langle x_{i_1} x_{i_2} \cdots x_{i_{2m}}\rangle = \sum_{\text{pairings}}\;\prod_{\text{pairs}\,(k,l)} B_{i_k, i_l}.$$

Example:
$$\langle x_{i_1} x_{i_2} x_{i_3} x_{i_4}\rangle = B_{i_1,i_2} B_{i_3,i_4} + B_{i_1,i_3} B_{i_2,i_4} + B_{i_1,i_4} B_{i_2,i_3}.$$

Wick's theorem becomes even more interesting when the indices $i_1, \dots, i_{2m}$ are not distinct. For instance:
$$\langle x_{i_1}^2 x_{i_2}^2\rangle = B_{i_1,i_1} B_{i_2,i_2} + 2\, B_{i_1,i_2} B_{i_1,i_2}.$$

Graphs
The best way to write Wick's theorem is diagrammatically. Associate to each pair $(i_k, i_l)$ an edge with weight $B_{i_k,i_l}$. If an index $i_k$ is repeated, i.e. if it appears as $x_{i_k}^{p_k}$, then we associate to it a vertex with $p_k$ half-edges. Wick's theorem says that the expectation value is the sum over all possible ways to link vertices by edges, of the product of propagators corresponding to edges. In other words, draw all possible graphs with the given vertices, and weight each graph by the product of its edge propagators.
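Before turning to the diagrammatic example, here is a quick numerical illustration (not from the book) of the formula $\langle x_{i_1}^2 x_{i_2}^2\rangle = B_{i_1,i_1}B_{i_2,i_2} + 2B_{i_1,i_2}^2$ quoted above: sample the Gaussian measure for a chosen 2×2 matrix A (the numbers below are arbitrary) and compare the empirical moment with the Wick prediction.

```python
# Monte Carlo check of Wick's theorem for two correlated Gaussian variables:
# <x1^2 x2^2> = B11 B22 + 2 B12^2, where B = A^{-1} is the propagator
# (equivalently the covariance of the measure exp(-x^T A x / 2)).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.7],
              [0.7, 1.5]])          # positive definite, arbitrary choice
B = np.linalg.inv(A)                # propagator = covariance matrix

samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=B, size=500_000)
x1, x2 = samples[:, 0], samples[:, 1]

mc = np.mean(x1**2 * x2**2)
wick = B[0, 0] * B[1, 1] + 2 * B[0, 1]**2
print(mc, wick)                     # should agree within Monte Carlo error
```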
Example (eq. II-2-2): in the expectation value $\langle x_{i_1}^3\, x_{i_2}^5\rangle$, one of the pairings corresponds to the graph in which the three half-edges of the 3-valent vertex $i_1$ are linked to three half-edges of the 5-valent vertex $i_2$, and the two remaining half-edges of $i_2$ are linked together; there are 104 other pairings. This graph has weight
$$B_{i_1,i_2}^3\; B_{i_2,i_2}.$$

In other words, Wick's theorem allows one to count the number of ways of gluing vertices (of given valence) by their half-edges. Such graphs are called Feynman graphs. A Feynman graph is a graph, with given vertices, to which we associate a value, which is the product of the propagators $B_{i,j}$ of its edges:
$$\prod_{e\in\mathrm{edges}} B_{i_e, j_e}.$$

Symmetry factors
The total number of possible graphs with m edges is the number of pairings of 2m half-edges, namely:
$$(2m-1)!! = (2m-1)(2m-3)(2m-5)\cdots 1.$$
However, many of the graphs obtained are topologically identical; they have the same weight, and it may be more convenient to write only non-topologically equivalent graphs, and associate to them an integer factor (the symmetry factor).
For example, the graph displayed in eq. II-2-2 is obtained 60 times, and the only other topological graph is obtained 45 times, which makes a total of $60 + 45 = 105 = 7\cdot5\cdot3 = 7!!$:
$$\langle x_{i_1}^3\, x_{i_2}^5\rangle = 60\; B_{i_1,i_2}^3\, B_{i_2,i_2} + 45\; B_{i_1,i_2}\, B_{i_1,i_1}\, B_{i_2,i_2}^2. \qquad(\text{II-2-3})$$

Notice, in that example, that both 60 and 45 divide $3!\cdot 5!$:
$$\frac{\langle x_{i_1}^3\, x_{i_2}^5\rangle}{3!\,5!} = \frac{1}{12}\; B_{i_1,i_2}^3\, B_{i_2,i_2} + \frac{1}{16}\; B_{i_1,i_2}\, B_{i_1,i_1}\, B_{i_2,i_2}^2. \qquad(\text{II-2-4})$$

This is something general: the number of relabelings which leave a graph invariant (i.e. the number of times we obtain the same graph) is equal to the order of the group of relabelings, divided by the number of automorphisms of the graph.
What we call the symmetry factor is the number of automorphisms of a graph; it appears in the denominator.

To summarize, one may say that Gaussian expectation values are generating functions for counting (weighted by the inverse of their integer symmetry factor) the number of graphs with given vertices.
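The counting 60 + 45 = 105 of eq. (II-2-3) can be checked by brute force. The following Python sketch (not from the book; helper names are illustrative) enumerates all pairings of the 8 half-edges of $x_{i_1}^3 x_{i_2}^5$ and tallies the resulting propagator monomials.

```python
# Enumerate all (2m-1)!! = 105 pairings of the 8 half-edges of x1^3 x2^5 and
# record each pairing's weight as exponents (n11, n12, n22) of B11, B12, B22.
from collections import Counter

def pairings(elems):
    """Yield all perfect matchings of the list elems (even length assumed)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

half_edges = [1] * 3 + [2] * 5          # indices carried by the 8 half-edges
weights = Counter()
for p in pairings(half_edges):
    key = tuple(sum(1 for a, b in p if sorted((a, b)) == list(pair))
                for pair in ((1, 1), (1, 2), (2, 2)))
    weights[key] += 1

print(weights)
# expected: {(0, 3, 1): 60, (1, 1, 2): 45}, i.e.
# <x1^3 x2^5> = 60 B12^3 B22 + 45 B11 B12 B22^2, and 60 + 45 = 105 = 7!!
```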

2.2 Matrix gaussian integrals
Let us now apply Wick's theorem to the computation of gaussian matrix integrals. In that case, the Feynman graphs are going to be fatgraphs, also called ribbon graphs, or maps, or discrete surfaces.

Application of Wick's theorem to matrix integrals

Consider a random hermitian matrix M of size N, with Gaussian probability measure:
$$d\mu_0(M) = \frac{1}{Z_0}\; e^{-\frac{N}{2t}\,\mathrm{Tr}\, M^2}\;\prod_{i=1}^N dM_{i,i}\;\prod_{i<j} d\mathrm{Re}M_{i,j}\, d\mathrm{Im}M_{i,j}$$
in other words, the variables $M_{i,i}$, $\mathrm{Re}M_{i,j}$, $\mathrm{Im}M_{i,j}$ are independent gaussian random variables. $Z_0$ is the normalization constant such that $\int d\mu_0(M) = 1$:
$$Z_0 = 2^{N/2}\,(\pi t/N)^{N^2/2} \qquad(\text{II-2-5})$$
Since $\mathrm{Tr}\, M^2 = \sum_{i,j} M_{i,j} M_{j,i}$, Wick's propagator (defined in eq. II-2-1) is easily computed:
$$\langle M_{i,j}\, M_{k,l}\rangle_0 = \frac{t}{N}\,\delta_{i,l}\,\delta_{j,k}$$
where $\langle\,\rangle_0$ means the expectation value with the measure $d\mu_0$.
As a first example, let us compute $\langle\mathrm{Tr}\, M^4\rangle_0 = \sum_{i,j,k,l}\langle M_{i,j} M_{j,k} M_{k,l} M_{l,i}\rangle_0$, which we represent as a vertex with 4 double-line half-edges, the indices $i, j, k, l$ being carried by the single lines.
We write the half-edges as double lines, and associate to each single line its index. Because of the trace, the indices are constant along single lines.
Since the propagator is $\langle M_{i,j} M_{k,l}\rangle_0 = \frac{t}{N}\,\delta_{i,l}\,\delta_{j,k}$, it is going to be used to glue together half-edges carrying the same oriented pair of indices; we can represent it as a double-line edge whose two single lines carry the identified indices $i = l$ and $j = k$.
So, let us compute $\frac{N}{4t}\,\langle\mathrm{Tr}\, M^4\rangle_0$:
$$\frac{N}{4t}\,\langle\mathrm{Tr}\, M^4\rangle_0 = \frac{N}{4t}\sum_{i,j,k,l}\langle M_{i,j} M_{j,k} M_{k,l} M_{l,i}\rangle_0$$
$$= \frac{N}{4t}\sum_{i,j,k,l}\Big[\langle M_{i,j} M_{j,k}\rangle_0\,\langle M_{k,l} M_{l,i}\rangle_0 + \langle M_{i,j} M_{l,i}\rangle_0\,\langle M_{j,k} M_{k,l}\rangle_0 + \langle M_{i,j} M_{k,l}\rangle_0\,\langle M_{j,k} M_{l,i}\rangle_0\Big]$$
(the three terms are the three ways of gluing the four half-edges of the vertex pairwise)
$$= \frac{N}{4t}\sum_{i,j,k,l}\Big[\frac{t}{N}\delta_{i,k}\delta_{j,j}\;\frac{t}{N}\delta_{k,i}\delta_{l,l} + \frac{t}{N}\delta_{i,i}\delta_{j,l}\;\frac{t}{N}\delta_{j,l}\delta_{k,k} + \frac{t}{N}\delta_{i,l}\delta_{j,k}\;\frac{t}{N}\delta_{j,i}\delta_{k,l}\Big]$$
$$= \frac{N t}{4}\,\Big[\frac{1}{N^2}\,N^3 + \frac{1}{N^2}\,N^3 + \frac{1}{N^2}\,N\Big] = \frac{t}{4}\,\big(N^2 + N^2 + N^0\big) = \frac{t N^2}{2} + \frac{t N^0}{4}$$

Notice that there are two steps in that computation:
- the first one consists in applying Wick's theorem, i.e. representing each term as one way of gluing together half-edges of the 4-valent vertex with propagators.
- the second step consists in performing the summation over the indices. Notice that the special form of the propagator, with δ-functions of indices, ensures that there is exactly one independent index per single line. The sum over all indices is thus equal to N to the power the number of single lines, i.e. the number of faces of the graph.
Since we also have a factor 1/N per propagator, i.e. per edge, and a factor N in front of the trace, i.e. a factor N per vertex, in the end the total N dependence for a given graph is:
$$N^{\#\mathrm{vertices} - \#\mathrm{edges} + \#\mathrm{faces}} = N^{\chi}$$
where χ is a topological invariant of the graph, called its Euler characteristic.
It should now be clear to the reader that this is something general. The fact that the power of N is a topological invariant, first discovered by 't Hooft [?], is the origin of the name "topological expansion".
Wick's theorem ensures that each term in the expectation value corresponds to one way of gluing vertices by their half-edges, and the sum over indices coming from the traces ensures that the total power of N for each graph is precisely its Euler characteristic, which we summarize as:
$$\left\langle\prod_{k=1}^m \big(N\,\mathrm{Tr}\, M^{p_k}\big)\right\rangle_0 = \sum_{\text{labeled fat graphs } G} N^{\chi(G)}\; t^{\#\mathrm{edges}}$$
where the sum is over the set of (labeled) oriented fat graphs having vertices of valence $p_1, \dots, p_m$, obtained by gluing together half-edges.
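The statement that the power of N is the Euler characteristic can be checked programmatically on the example above. In the sketch below (not from the book; the encoding is a standard permutation encoding of fatgraphs, and names are illustrative), a one-vertex fatgraph is described by the cyclic order s of its half-edges, a Wick pairing by an involution a, and the faces are the cycles of the composition s∘a; summing $N^{\chi}\,t^{\#\mathrm{edges}}$ over the three pairings of a 4-valent vertex reproduces the computation of $\frac{N}{4t}\langle\mathrm{Tr}\,M^4\rangle_0$.

```python
# Recompute (N/(4t)) <Tr M^4>_0 by enumerating pairings and counting index
# loops (faces) as cycles of the permutation s∘a on half-edges.
import sympy as sp

N, t = sp.symbols("N t")

def pairings(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def count_faces(p, pairing):
    s = {h: (h + 1) % p for h in range(p)}   # cyclic order around the single vertex
    a = {}
    for x, y in pairing:
        a[x], a[y] = y, x
    seen, faces = set(), 0
    for h in range(p):
        if h in seen:
            continue
        faces += 1
        while h not in seen:
            seen.add(h)
            h = s[a[h]]                       # follow the face around the corners
    return faces

p = 4
total = 0
for pairing in pairings(list(range(p))):
    E, V, F = p // 2, 1, count_faces(p, pairing)
    total += N**(V - E + F) * t**E            # N per vertex and face, t/N per edge
print(sp.simplify(total / (4 * t)))           # -> t*(2*N**2 + 1)/4 = t N^2/2 + t/4
```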

One should make some remarks:
• the graphs in that sum may be disconnected;
• several graphs may be topologically equivalent in the sum, i.e. if we remove the labelling of indices. The order of the group of relabellings is:
$$\prod_{k=1}^m p_k\;\prod_p \big(\#\{p_i \,|\, p_i = p\}\big)!$$
indeed, at each vertex of valence $p_k$ one can make $p_k$ rotations of the indices, and if several vertices have the same valence they can be permuted.
Therefore, it is better to rewrite:
$$\left\langle\prod_{k=1}^m \frac{1}{n_k!}\left(\frac{N}{k}\,\mathrm{Tr}\, M^k\right)^{n_k}\right\rangle_0 = \sum_{\text{Fat Graphs } G} N^{\chi(G)}\;\frac{t^{\#\mathrm{edges}}}{\#\mathrm{Aut}(G)} \qquad(\text{II-2-6})$$
where now the sum is over non-topologically equivalent graphs made with $n_k$ k-valent vertices, and $\#\mathrm{Aut}(G)$ is the number of automorphisms of the graph G.

From graphs to maps

Instead of summing over fatgraphs, let us sum over their duals, using the obvious bijection between a graph and its dual. The dual of a k-valent vertex is a k-gon: gluing together vertices by their half-edges is clearly equivalent to gluing (oriented) polygons together by their sides, and thus we obtain a map. Equation II-2-6 can thus be rewritten:
$$\left\langle\prod_{k=1}^m \frac{1}{n_k!}\left(\frac{N}{kt}\,\mathrm{Tr}\, M^k\right)^{n_k}\right\rangle_0 = \sum_{\text{Maps } \Sigma} N^{\chi(\Sigma)}\;\frac{t^{\#\mathrm{edges}-\#\mathrm{faces}}}{\#\mathrm{Aut}(\Sigma)}$$
where now the sum is over maps made with $n_k$ k-gons, and $\#\mathrm{Aut}(\Sigma)$ is the number of automorphisms of the map Σ. We have:
• vertices of G ↔ faces of Σ
• edges of G ↔ edges of Σ
• faces of G ↔ vertices of Σ
Notice that the Euler characteristic of a graph and that of its dual are the same. The Euler characteristic is
$$\chi = \#\mathrm{vertices} - \#\mathrm{edges} + \#\mathrm{faces}$$

in other words, the power of t is also:
$$t^{\#\mathrm{vertices} - \chi}$$
i.e.

Theorem 2.2
$$\left\langle\prod_{k=1}^m \frac{1}{n_k!}\left(\frac{N}{kt}\,\mathrm{Tr}\, M^k\right)^{n_k}\right\rangle_0 = \sum_{\text{Maps } \Sigma} \frac{t^{\#\mathrm{vertices}(\Sigma)}}{\#\mathrm{Aut}(\Sigma)}\,\left(\frac{N}{t}\right)^{\chi(\Sigma)} \qquad(\text{II-2-7})$$
where the sum is over all maps (not necessarily connected) having exactly m faces, with given degrees $n_k$, $k = 1, \dots, m$.

3 Generating functions of maps and matrix integrals

3.1 Generating functions for closed maps
Theorem 2.2 implies that the generating function $Z_N$ of eq. I-2-5, which counts non-connected maps, is nothing but the formal integral:

Proposition 3.1
$$Z_N(t; t_3, t_4, \dots, t_d) = \int_{\mathrm{formal}} dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\; e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)}$$
$$= \sum_{\text{n.c. closed maps } \Sigma} \left(\frac{N}{t}\right)^{\chi(\Sigma)}\; t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)} \cdots t_d^{n_d(\Sigma)}\;\frac{t^{\#\mathrm{vertices}(\Sigma)}}{\#\mathrm{Aut}(\Sigma)}$$
where again, formal integral means that we Taylor expand the exponentials of all non-quadratic terms, and exchange the Taylor series and the integration. In other words, we perform a formal small t (or also $t_3, t_4, \dots, t_d$) asymptotic expansion, and order by order we get the number of corresponding maps. The coefficient of $t^j$ is the finite sum over (n.c. = non-connected) closed maps such that $\frac{1}{2}\sum_i (i-2)\, n_i = j = \#\mathrm{vertices} - \chi$.

Connected maps
When we have a formal generating series counting disconnected objects multiplicatively, it is well known that the logarithm is the generating function which counts only the connected objects, i.e. it is the generating function of eq. I-2-5:
$$\ln\big(Z_N(t; t_3, t_4, \dots, t_d)\big) = F(t; t_3, t_4, \dots, t_d; N)$$
$$= \sum_{\text{closed connected maps } \Sigma}\left(\frac{N}{t}\right)^{2-2g(\Sigma)}\; t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)} \cdots t_d^{n_d(\Sigma)}\;\frac{t^{\#\mathrm{vertices}}}{\#\mathrm{Aut}(\Sigma)}.$$

Again, the coefficient of $t^j$ is the finite sum over connected closed maps such that $\frac{1}{2}\sum_i (i-2)\, n_i = j = \#\mathrm{vertices} - \chi$. And the Euler characteristic of a connected map is $\chi = 2-2g$ where g is the genus.

Topological expansion: maps of given genus


We thus see that, order by order in the small t expansion, the coefficients of $t^j$ in $N^{-2}F$ are polynomials in $N^{-2}$, and thus we can define generating series of the coefficients of a given power $N^{-2g}$; we define:
$$F = \sum_{g=0}^\infty \left(\frac{N}{t}\right)^{2-2g} F_g$$
where again we emphasize that this is an equality of formal series in powers of t: order by order, the sum over g is finite, and the coefficients are polynomials in $N^{-2}$. $F_g$ is obtained by collecting the coefficients of $N^{-2g}$, and its computation does not involve any large N limit.
We recognize the generating function of connected closed maps of genus g of def. I-2-4:
$$F_g(t; t_3, t_4, \dots, t_d) = \sum_v t^v \sum_{\Sigma\in\mathcal{M}_0^{(g)}(v)} t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)} \cdots t_d^{n_d(\Sigma)}\;\frac{1}{\#\mathrm{Aut}(\Sigma)}.$$
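As a small illustration (not from the book) of how $F_g$ is extracted: at each order in the $t_j$'s the free energy is a Laurent polynomial in N, and one simply collects powers of N. The sympy sketch below does this for the quartic expansion worked out in Exercise 1 at the end of this chapter (with t = 1).

```python
# Extract F_0 and F_1 from the quartic free energy, order by order in t4:
# F = (2N^2+1) t4/4 + (9N^2+15) t4^2/8 + O(t4^3), a polynomial in N^2 at each order.
import sympy as sp

N, t4 = sp.symbols("N t4")
F = (2 * N**2 + 1) * t4 / 4 + (9 * N**2 + 15) * t4**2 / 8 + sp.O(t4**3)

F_poly = sp.expand(F.removeO())
F0 = F_poly.coeff(N, 2)    # genus 0: coefficient of N^2
F1 = F_poly.coeff(N, 0)    # genus 1: coefficient of N^0
print(F0)   # t4/2 + 9*t4**2/8
print(F1)   # t4/4 + 15*t4**2/8
```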

4 Maps with boundaries or marked faces


4.1 One boundary
So far, we have seen how formal matrix integrals, thanks to Wick's theorem, count closed maps where all polygons played similar roles. Now let us count maps with some marked faces.
Consider the following formal matrix integral:
$$\langle\mathrm{Tr}\, M^l\rangle = \frac{\displaystyle\int_{\mathrm{formal}} dM\;\mathrm{Tr}\, M^l\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\; e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)}}{\displaystyle\int_{\mathrm{formal}} dM\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\; e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)}} \qquad(\text{II-4-1})$$
The bracket $\langle\,\cdot\,\rangle$ now denotes the expectation value with respect to the formal measure $\frac{1}{Z}\, e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\, e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)}\, dM$, whereas in the previous section $\langle\,\cdot\,\rangle_0$ meant the expectation value with respect to the gaussian measure $\frac{1}{Z_0}\, e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\, dM$.
The numerator in eq. II-4-1 is
$$\int_{\mathrm{formal}} dM\;\mathrm{Tr}\, M^l\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\; e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)}$$

$$= \sum_{n_3,\dots,n_d}\frac{N^{n_3}\, t_3^{n_3}}{3^{n_3}\, n_3!}\;\frac{N^{n_4}\, t_4^{n_4}}{4^{n_4}\, n_4!}\cdots\frac{N^{n_d}\, t_d^{n_d}}{d^{n_d}\, n_d!}\;\frac{1}{t^{\sum_j n_j}}\int dM\;\mathrm{Tr}\, M^l\,(\mathrm{Tr}\, M^3)^{n_3}(\mathrm{Tr}\, M^4)^{n_4}\cdots(\mathrm{Tr}\, M^d)^{n_d}\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}$$

it can be computed using Wick's theorem, and it gives a sum over all fatgraphs (or maps) with $n_3$ triangles, $n_4$ squares, ..., $n_d$ d-gons, and one marked l-gon. The sum may include non-connected maps, and the role of the denominator in eq. (II-4-1) is precisely to kill all non-connected maps (see section 2.5 in chapter I).
There should be a symmetry factor $1/\#\mathrm{Aut}(\Sigma)$ counting automorphisms which preserve the marked face, and since there is no factor $\frac{1}{l}$ in front of $\mathrm{Tr}\, M^l$, we get l times the number of maps with no marked edge on the marked face, i.e. we get the number of maps with one marked edge on the marked face. Since there is no N accompanying the $\mathrm{Tr}\, M^l$, the power of N is $\chi - 1 = 2 - 2g - 1$, which is the Euler characteristic of a surface with one boundary. Therefore we recognize the generating function $T_l$ of eq. (I-2-2) in chapter I:
$$\langle\mathrm{Tr}\, M^l\rangle = T_l = \sum_{\text{maps } \Sigma \text{ with 1 boundary of length } l}\left(\frac{N}{t}\right)^{1-2g}\; t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)} \cdots t_d^{n_d(\Sigma)}\;\frac{t^{\#\mathrm{vertices}}}{\#\mathrm{Aut}(\Sigma)} = -\mathop{\mathrm{Res}}_{x\to\infty}\; x^l\, W_1(x)\, dx.$$
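The residue convention in the last equality is easy to get wrong; the following tiny sympy sketch (not from the book) checks on a truncated series that $-\mathop{\mathrm{Res}}_{x\to\infty} x^l\, W_1(x)\, dx$ indeed returns $T_l$ (recall that the residue at infinity of $f(x)\,dx$ is minus the coefficient of $1/x$ in $f$).

```python
# Check T_l = -Res_{x->infinity} x^l W_1(x) dx on a truncated resolvent.
import sympy as sp

x = sp.symbols("x")
T = sp.symbols("T0:4")                             # T_0, ..., T_3 as formal symbols
W1 = sum(T[l] / x**(l + 1) for l in range(4))      # truncated W_1(x)

for l in range(4):
    # residue at infinity of f(x) dx is minus the coefficient of 1/x in f
    res_at_inf = -sp.expand(x**l * W1).coeff(x, -1)
    assert -res_at_inf == T[l]
print("T_l = -Res_{x->oo} x^l W_1(x) dx verified on the truncation")
```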

4.2 Several boundaries

The previous subsection can be immediately generalized to:
$$\langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\cdots\mathrm{Tr}\, M^{l_k}\rangle = \frac{1}{Z}\int_{\mathrm{formal}} dM\;\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\cdots\mathrm{Tr}\, M^{l_k}\; e^{-N\,\mathrm{Tr}\frac{M^2}{2t}}\; e^{\frac{N}{t}\,\mathrm{Tr}\left(\frac{t_3}{3}M^3 + \frac{t_4}{4}M^4 + \dots + \frac{t_d}{d}M^d\right)} = \frac{1}{Z}\, T^*_{l_1,\dots,l_k} \qquad(\text{II-4-2})$$
where $T^*_{l_1,\dots,l_k}$ is the generating function of not necessarily connected maps with k boundaries of lengths $l_1, \dots, l_k$, of all genus.

One obtains connected maps by computing cumulants (see section 2.5 of chapter I), for instance:
$$\langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\rangle_c = \langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\rangle - \langle\mathrm{Tr}\, M^{l_1}\rangle\,\langle\mathrm{Tr}\, M^{l_2}\rangle$$
And thus the cumulants compute connected maps with k boundaries of lengths $l_1, \dots, l_k$:
$$T_{l_1,\dots,l_k} = \langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\cdots\mathrm{Tr}\, M^{l_k}\rangle_c = \sum_{\Sigma \text{ with } k \text{ boundaries of lengths } l_1,\dots,l_k}\left(\frac{N}{t}\right)^{2-2g-k}\; t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)} \cdots t_d^{n_d(\Sigma)}\;\frac{t^{\#\mathrm{vertices}}}{\#\mathrm{Aut}(\Sigma)}.$$

4.3 Topological expansion for bounded maps of given genus
The Euler characteristic of a connected surface of genus g with k boundaries is: $\chi = 2 - 2g - k$.
Therefore we have:
$$\langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\cdots\mathrm{Tr}\, M^{l_k}\rangle_c = \sum_{g=0}^\infty\left(\frac{N}{t}\right)^{2-2g-k}\, T^{(g)}_{l_1,\dots,l_k} \qquad(\text{II-4-3})$$
where $T^{(g)}_{l_1,\dots,l_k}$ is the generating function defined in chapter I, eq. (I-2-2), which counts connected maps of genus g with k boundaries of lengths $l_1, \dots, l_k$.
Once more we emphasize that this equality holds term by term in the powers of t, and for each power the sum over g is finite, i.e. both the left-hand side and the right-hand side are Laurent polynomials in N.
In other words, eq. II-4-3 is not a large N expansion, it is a small t expansion.

4.4 Resolvents
We define the resolvent:
$$W_1(x) = \sum_{l=0}^\infty \frac{1}{x^{l+1}}\, T_l = \sum_{l=0}^\infty \left\langle\mathrm{Tr}\,\frac{M^l}{x^{l+1}}\right\rangle$$
and conversely:
$$T_l = -\mathop{\mathrm{Res}}_{x\to\infty}\; x^l\, W_1(x)\, dx$$
Very often (in particular in the physicists' literature), the resolvent is written:
$$W_1(x) = \left\langle\mathrm{Tr}\,\frac{1}{x-M}\right\rangle$$
which holds in the formal sense, i.e. to each given power of t the sum over l is finite, and each coefficient in the small t or $t_j$'s expansion is a polynomial in 1/x.
More generally:
$$W_k(x_1, \dots, x_k) = \sum_{l_1,\dots,l_k=0}^\infty \frac{1}{x_1^{l_1+1}\cdots x_k^{l_k+1}}\, T_{l_1,\dots,l_k} = \sum_{l_1,\dots,l_k=0}^\infty\left\langle\mathrm{Tr}\,\frac{M^{l_1}}{x_1^{l_1+1}}\cdots\mathrm{Tr}\,\frac{M^{l_k}}{x_k^{l_k+1}}\right\rangle_c = \left\langle\mathrm{Tr}\,\frac{1}{x_1-M}\cdots\mathrm{Tr}\,\frac{1}{x_k-M}\right\rangle_c = \sum_g\left(\frac{N}{t}\right)^{2-2g-k}\, W_k^{(g)}(x_1, \dots, x_k) \qquad(\text{II-4-4})$$
The $W_k^{(g)}$ are the same as those of definition 2.2 in chapter I.

5 Loop equations
In this section, we derive a matrix-model proof of Tutte's equations of chapter I. In the matrix model framework, those equations are called "loop equations" [?].
Loop equations merely arise from the fact that an integral is invariant under a change of variable, or alternatively from integration by parts. They are sometimes called Schwinger–Dyson equations.
Although loop equations are equivalent to Tutte's equations, it is often easier to integrate by parts in a matrix integral than to find bijections between sets of maps, and it is much faster to derive loop equations from matrix models than from combinatorics.

Consider the following polynomial expectation value of degree $l = l_1 + \dots + l_k$:
$$\langle G^*(M)\rangle = \frac{\displaystyle\int dM\; G^*(M)\; e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}}{\displaystyle\int dM\; e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}}\;, \qquad G^*(M) = \prod_{j=1}^k \mathrm{Tr}\, M^{l_j}$$
where $\int$ means either the convergent or the formal matrix integral (i.e., to any order in t, a finite sum of convergent gaussian integrals).
We shall derive a recursion relation on the degree $l = (l_1, \dots, l_k)$.
The method is called loop equations, and it is nothing but integration by parts. It is based on the observation that the integral of a total derivative vanishes, and thus, if G(M) is any matrix-valued polynomial function of M (for instance $G(M) = M^{l_1}\prod_{j=2}^k \mathrm{Tr}\, M^{l_j}$), we have:

$$0 = \sum_{i<j}\int dM\;\frac{\partial}{\partial\mathrm{Re}M_{i,j}}\left[(G(M))_{ij}\; e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\right] - i\sum_{i<j}\int dM\;\frac{\partial}{\partial\mathrm{Im}M_{i,j}}\left[(G(M))_{ij}\; e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\right] + \sum_{i=1}^N\int dM\;\frac{\partial}{\partial M_{i,i}}\left[(G(M))_{ii}\; e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\right] \qquad(\text{II-5-1})$$
Choosing
$$G(M) = M^{l_1}\,\prod_{j=2}^k \mathrm{Tr}\, M^{l_j}$$

and after computing the derivatives we get:
$$\sum_{j=0}^{l_1-1}\left\langle\mathrm{Tr}\, M^j\;\mathrm{Tr}\, M^{l_1-1-j}\,\prod_{i=2}^k\mathrm{Tr}\, M^{l_i}\right\rangle + \sum_{j=2}^k l_j\left\langle\mathrm{Tr}\, M^{l_j+l_1-1}\,\prod_{i=2,\,i\ne j}^k\mathrm{Tr}\, M^{l_i}\right\rangle = \frac{N}{t}\left\langle\mathrm{Tr}\,\big(M^{l_1}\, V'(M)\big)\,\prod_{i=2}^k\mathrm{Tr}\, M^{l_i}\right\rangle \qquad(\text{II-5-2})$$
t i=2

Again, we emphasize that this equation is valid for both convergent matrix integrals
and formal matrix integrals, indeed it is valid for gaussian integrals, and thus for any

36
finite linear combination of gaussian integrals, i.e. formal integrals. In case of formal
integrals, those equations are valid, of course, only order by order in t. In other words
the loop equations are independent of the order of the integral and the Taylor series
expansion.
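As a concrete illustration (not from the book), the simplest instance of eq. (II-5-2), with k = 1, $l_1 = 3$ and the purely Gaussian potential $V(M) = M^2/2$, reads $2N\langle\mathrm{Tr}\,M^2\rangle + \langle(\mathrm{Tr}\,M)^2\rangle = \frac{N}{t}\langle\mathrm{Tr}\,M^4\rangle$, both sides being equal to $2tN^2 + t$. The Monte Carlo sketch below (sampler and parameters are illustrative) checks this numerically.

```python
# Monte Carlo check of the loop equation (II-5-2) for k=1, l_1=3, V(M)=M^2/2:
# 2N <Tr M^2> + <(Tr M)^2> = (N/t) <Tr M^4>, both sides equal to 2 t N^2 + t.
import numpy as np

def sample_hermitian(N, t, rng):
    """Hermitian M with <M_ij M_kl> = (t/N) delta_il delta_jk."""
    M = np.diag(rng.normal(scale=np.sqrt(t / N), size=N)).astype(complex)
    x = rng.normal(scale=np.sqrt(t / (2 * N)), size=(N, N))
    y = rng.normal(scale=np.sqrt(t / (2 * N)), size=(N, N))
    iu = np.triu_indices(N, k=1)
    M[iu] = x[iu] + 1j * y[iu]
    M[(iu[1], iu[0])] = x[iu] - 1j * y[iu]
    return M

N, t, n_samples = 5, 1.0, 40000
rng = np.random.default_rng(1)
lhs, rhs = [], []
for _ in range(n_samples):
    M = sample_hermitian(N, t, rng)
    tr1 = np.trace(M).real
    tr2 = np.trace(M @ M).real
    tr4 = np.trace(M @ M @ M @ M).real
    lhs.append(2 * N * tr2 + tr1**2)
    rhs.append(N / t * tr4)
print(np.mean(lhs), np.mean(rhs), 2 * t * N**2 + t)   # all three should be close
```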
Using the notations of eq. (II-4-3), we may rewrite the loop equation eq. (II-5-2):

Theorem 5.1 Loop equations. ∀g:
$$\sum_{j=0}^{l_1-1}\left[\sum_{h=0}^g\sum_{J\subset L} T^{(h)}_{j,J}\, T^{(g-h)}_{l_1-1-j,\,L\setminus J} + T^{(g-1)}_{j,\,l_1-1-j,\,L}\right] + \sum_{j=2}^k l_j\, T^{(g)}_{l_j+l_1-1,\,L\setminus\{l_j\}} = T^{(g)}_{l_1+1,\,L} - \sum_{j=3}^d t_j\, T^{(g)}_{l_1+j-1,\,L} \qquad(\text{II-5-3})$$
where we denote collectively $L = \{l_2, \dots, l_k\}$.
We recall that $T^{(g)}_{l_1,l_2,\dots,l_k}$ is the generating function which counts the number of connected maps of genus g with k boundaries of perimeters $l_1, \dots, l_k$; therefore we have re-derived the generalized Tutte equation eq. (I-3-2) of chapter I.

It is interesting to rewrite the loop equations of eq. (II-5-3) in terms of the resolvents $W_k^{(g)}$ defined in eq. (II-4-4). We merely multiply eq. (II-5-3) by $\prod_{i=1}^k 1/x_i^{l_i+1}$ and sum over $l_1, \dots, l_k$ (to any given power of t, the sum is finite).

Theorem 5.2 Loop equations. For any k and g, and $L = \{x_2, \dots, x_k\}$, we have:
$$\sum_{h=0}^g\sum_{J\subset L} W^{(h)}_{1+\#J}(x_1, J)\, W^{(g-h)}_{k-\#J}(x_1, L\setminus J) + W^{(g-1)}_{k+1}(x_1, x_1, L) + \sum_{j=2}^k \frac{\partial}{\partial x_j}\,\frac{W^{(g)}_{k-1}(x_1, L\setminus\{x_j\}) - W^{(g)}_{k-1}(L)}{x_1 - x_j} = V'(x_1)\, W^{(g)}_k(x_1, L) - P^{(g)}_k(x_1, L) \qquad(\text{II-5-4})$$
where $P^{(g)}_k(x_1, L)$ is a polynomial in $x_1$, of degree $d-3$ (except $P^{(0)}_1$ which is of degree $d-2$):
$$P^{(g)}_k(x_1, x_2, \dots, x_k) = -\sum_{j=2}^{d-1} t_{j+1}\sum_{i=0}^{j-1} x_1^i\sum_{l_2,\dots,l_k=1}^\infty \frac{T^{(g)}_{j-1-i,\,l_2,\dots,l_k}}{x_2^{l_2+1}\cdots x_k^{l_k+1}} + t\,\delta_{g,0}\,\delta_{k,1}$$

proof:
Indeed, if we expand both sides of eq. (II-5-4) in powers of $x_1 \to \infty$ and identify the coefficients on both sides, we find that the negative powers of the $x_i$'s give precisely the loop equations eq. (II-5-3), whereas the coefficients of positive powers of $x_1$ cancel due to the definition of $P^{(g)}_k$, which is exactly the positive part of $V'(x_1)\, W^{(g)}_k$:
$$P^{(g)}_k(x_1, x_2, \dots, x_k) = \mathop{\mathrm{Pol}}_{x_1\to\infty}\left[V'(x_1)\; W^{(g)}_k(x_1, x_2, \dots, x_k)\right]$$
where Pol means that we keep only the polynomial part, i.e. the positive part of the Laurent series at $x_1 \to \infty$. □

6 Loop equations and Virasoro constraints

We have seen two derivations of the loop equations: one combinatorial proof in chapter I, based on Tutte's method, corresponding to recursively removing a marked edge, and one proof based on integration by parts in the formal matrix integral in chapter II. However, there exist other possible derivations.
In particular, in string theory and quantum gravity, it is known that partition functions must satisfy Virasoro constraints. Here, we show how to rewrite the loop equations for generating functions of maps as Virasoro constraints.
We write the potential:
$$V(x) = -\sum_{j\ge1} \frac{t_j}{j}\, x^j$$
In the end, we will be interested in $t_1 = 0$, $t_2 = -1$ and $t_j = 0$ if $j > d$.

It is easy to see from the definitions of our generating functions, and particularly from the formal matrix integral, that:
$$T^{(g)}_j = j\,\frac{\partial F_g}{\partial t_j}\;, \qquad T^{(g)}_{j_1,j_2} = j_1\, j_2\,\frac{\partial^2 F_g}{\partial t_{j_1}\partial t_{j_2}}$$
and therefore the loop equations eq. (II-5-3) for k = 1 can be rewritten:
$$\forall k \ge -1\;, \qquad V_k\,.\,Z = 0 \qquad(\text{II-6-1})$$
where we have defined the operator:
$$V_k = \frac{1}{N^2}\sum_{j=1}^{k-1} j\,(k-j)\,\frac{\partial}{\partial t_j}\frac{\partial}{\partial t_{k-j}} + \sum_{j=2}^d (k+j)\, t_j\,\frac{\partial}{\partial t_{k+j}}\,.$$

The differential operators $V_k$ form a representation of (the positive part of) the Virasoro algebra; indeed one easily verifies that they satisfy:
$$[V_k, V_j] = (k-j)\, V_{k+j}$$
This method has been extensively used by physicists, but we shall not pursue that direction in this book.
An important property is that eq. (II-6-1) is a linear equation for Z, and thus linear combinations of solutions are also solutions.

7 Summary: maps and matrix integrals
Let us summarize the concepts introduced in this chapter:
• Formal integral
$$Z_N = \int_{\mathrm{formal}} e^{-\frac{N}{t}\,\mathrm{Tr}\, V(M)}\; dM\;, \qquad V(M) = \frac{M^2}{2} - \sum_{k=3}^d \frac{t_k}{k}\, M^k$$
where $\int_{\mathrm{formal}}$ means that we exchange the order of the integral and the Taylor expansion of the exponentials of the $t_k$'s.
• $\mathcal{M}_0^{(g)}(v)$ = finite set of connected maps of genus g and no boundary, with v vertices, obtained by gluing $n_3$ triangles, $n_4$ squares, $n_5$ pentagons, ..., $n_d$ d-gons.
Generating function:
$$\ln Z_N = \sum_{g=0}^\infty\left(\frac{N}{t}\right)^{2-2g} F_g = \sum_{j=0}^\infty t^j\sum_{v+2g-2=j} N^{2-2g}\sum_{\Sigma\in\mathcal{M}_0^{(g)}(v)}\frac{t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)}\cdots t_d^{n_d(\Sigma)}}{\#\mathrm{Aut}(\Sigma)}$$
We also denote:
$$F_g = W_0^{(g)}.$$
• $\mathcal{M}_k^{(g)}(v)$ = connected maps of genus g with v vertices, obtained by gluing $n_3$ triangles, $n_4$ squares, $n_5$ pentagons, ..., and with k boundaries of lengths $l_1, \dots, l_k$.
Generating function:
$$\langle\mathrm{Tr}\, M^{l_1}\,\mathrm{Tr}\, M^{l_2}\cdots\mathrm{Tr}\, M^{l_k}\rangle_c = \sum_{j=0}^\infty t^j\sum_{v+2g+k-2=j} N^{2-2g-k}\sum_{\Sigma\in\mathcal{M}_k^{(g)}(v),\;\partial\Sigma=\{l_1,\dots,l_k\}}\frac{t_3^{n_3(\Sigma)}\, t_4^{n_4(\Sigma)}\cdots t_d^{n_d(\Sigma)}}{\#\mathrm{Aut}(\Sigma)} = \sum_g\left(\frac{N}{t}\right)^{2-2g-k} T^{(g)}_{l_1,\dots,l_k}.$$

• Resolvents for connected maps of genus g and with k boundaries.
Generating function:
$$W_k(x_1, \dots, x_k) = \sum_g\left(\frac{N}{t}\right)^{2-2g-k} W_k^{(g)}(x_1, \dots, x_k) = \left\langle\mathrm{Tr}\,\frac{1}{x_1-M}\cdots\mathrm{Tr}\,\frac{1}{x_k-M}\right\rangle_c = \sum_{j=0}^\infty t^j\sum_{v+2g+k-2=j} N^{2-2g-k}\sum_{\Sigma\in\mathcal{M}_k^{(g)}(v)}\frac{t_3^{n_3}\, t_4^{n_4}\cdots t_d^{n_d}}{x_1^{l_1+1}\cdots x_k^{l_k+1}}\;\frac{1}{\#\mathrm{Aut}(\Sigma)}.$$

• Loop equations (Tutte's equations):
$$\sum_{j=0}^{l_1-1}\left[\sum_{h=0}^g\sum_{J\subset L} T^{(h)}_{j,J}\, T^{(g-h)}_{l_1-1-j,\,L\setminus J} + T^{(g-1)}_{j,\,l_1-1-j,\,L}\right] + \sum_{j=2}^k l_j\, T^{(g)}_{l_j+l_1-1,\,L\setminus\{l_j\}} = T^{(g)}_{l_1+1,\,L} - \sum_{j=3}^d t_j\, T^{(g)}_{l_1+j-1,\,L}$$
where $L = \{l_2, \dots, l_k\}$. Equivalently, the loop equations can be written in terms of the $W_k^{(g)}$'s, with $L = \{x_2, \dots, x_k\}$:
$$\sum_{h=0}^g\sum_{J\subset L} W^{(h)}_{1+\#J}(x_1, J)\, W^{(g-h)}_{k-\#J}(x_1, L\setminus J) + W^{(g-1)}_{k+1}(x_1, x_1, L) + \sum_{j=2}^k \frac{\partial}{\partial x_j}\,\frac{W^{(g)}_{k-1}(x_1, L\setminus\{x_j\}) - W^{(g)}_{k-1}(L)}{x_1 - x_j} = V'(x_1)\, W^{(g)}_k(x_1, L) - P^{(g)}_k(x_1, L)$$
where $L = \{x_2, \dots, x_k\}$, and $P^{(g)}_k(x_1, L) = \mathop{\mathrm{Pol}}_{x_1} V'(x_1)\, W^{(g)}_k(x_1, L)$ is a polynomial in the variable $x_1$, of degree $d-3$, except $P^{(0)}_1$ which is of degree $d-2$.

8 Exercises
Exercise 1:
For the quartic formal matrix integral
$$Z = \frac{1}{Z_0}\int_{\mathrm{formal}} dM\; e^{-\frac{N}{t}\,\mathrm{Tr}\left(\frac{M^2}{2} - t_4\frac{M^4}{4}\right)} = 1 + \frac{N t_4}{4t}\,\big\langle\mathrm{tr}\, M^4\big\rangle_0 + \frac{1}{2}\left(\frac{N t_4}{4t}\right)^2\big\langle(\mathrm{tr}\, M^4)^2\big\rangle_0 + O(t_4^3)$$
using Wick's theorem, recover the generating function of quadrangulations
$$\ln Z = F = (2N^2+1)\,\frac{t_4}{4} + \frac{t_4^2}{8}\,(9N^2+15) + O(t_4^3).$$
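A reader who prefers to let the computer do the diagram counting can check the second coefficient by enumeration. The sketch below (not from the book; it reuses the permutation encoding of faces from section 2.2, with t = 1) sums $N^{\chi}$ over the connected pairings of two 4-valent vertices (the cumulant only keeps connected gluings) and recovers $(9N^2+15)/8$.

```python
# Exact enumeration of the t_4^2 term of Exercise 1 (t = 1): glue two 4-valent
# vertices in all 105 ways, keep connected gluings, weight by N^(V - E + F),
# and divide by 2! * 4^2.  Faces are cycles of s∘a on the 8 half-edges.
import sympy as sp

N = sp.symbols("N")

def pairings(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

# half-edges 0..3 around vertex A, 4..7 around vertex B (cyclic orders)
s = {0: 1, 1: 2, 2: 3, 3: 0, 4: 5, 5: 6, 6: 7, 7: 4}
vertex_of = {h: 0 if h < 4 else 1 for h in range(8)}

total = 0
for p in pairings(list(range(8))):
    # with only two vertices, connected <=> at least one edge joins A and B
    if not any(vertex_of[x] != vertex_of[y] for x, y in p):
        continue
    a = {}
    for x, y in p:
        a[x], a[y] = y, x
    seen, faces = set(), 0
    for h in range(8):
        if h in seen:
            continue
        faces += 1
        while h not in seen:
            seen.add(h)
            h = s[a[h]]
    total += N ** (2 - 4 + faces)              # V = 2, E = 4

print(sp.simplify(total / (sp.factorial(2) * 4**2)))   # -> 9*N**2/8 + 15/8
```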

Exercise 2: Prove that with any potential:
$$\langle\mathrm{Tr}\, V'(M)\rangle = 0\;, \qquad \frac{t}{N}\,\langle\mathrm{Tr}\, M\, V'(M)\rangle = t^2$$
Hint: this is a loop equation; use integration by parts.
Exercise 3: Prove that for quadrangulations (i.e. with $V(M) = \frac{M^2}{2} - t_4\frac{M^4}{4}$):
$$\frac{\partial F}{\partial t} = \frac{N t_4}{4 t^2}\; T_4$$
Hint: use exercise 2, and don't forget the t dependence of the normalization factor $Z_0$ in eq. (II-2-5).
