


Complex Analysis with Applications Notes
Alex Nelson∗
Email: [email protected]
September 29, 2011

Abstract
These are notes from Dr Dmitry B. Fuchs’ course (Math 185B) from Spring 2009.
This covers advanced complex analysis. The “applications” are to various fields of
mathematics.
NOTE: any errors are due to me, and not Dr Fuchs; and corrections are welcome.

Contents
Introduction 3

Lecture 1 4
1.1 Riemann Mapping Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Lecture 2 5

Lecture 3 7

Lecture 4 8

Lecture 5: Analytic Continuation 10

Lecture 6 12

Lecture 7 12
7.1 Reflection Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Homework 1 14

Lecture 8 15
8.1 Argument Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Lecture 9 16
9.1 Rouché’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Homework 2 19

Lecture 10 19

Lecture 11 22

Homework 3 24
∗ This is a page from https://pqnelson.github.io/notebk/
Compiled: January 31, 2016 at 4:43pm (PST)

Lecture 12 25
12.1 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

Lecture 13 28

Homework 4 31

Lecture 14 31
14.1 Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Lecture 15 34

Lecture 16 36

Homework 5 38

Lecture 17 39
17.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Lecture 18 40

Lecture 19 42

Midterm 44

Lecture 20 45
20.1 Some Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
20.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Lecture 21 47

Homework 6 49

Lecture 22 49

Lecture 23 50

Lecture 24 51

Lecture 25 52

References 53

Introduction
These are a collection of notes on complex analysis, mostly from the course Math 185B taught Spring quarter 2009 by Dmitry Fuchs. Of course, there are several peculiarities
with my notes worth mentioning.
First, I do use diagrams. This tends to make most people cringe (the puritans do
not like pictures). But my diagrams include more than pictures: they include commutative
diagrams! For example
a = b
‖   ‖          (0.1)
A = B
would be used in place of four equations

a=b (0.2a)
b=B (0.2b)
a=A (0.2c)
A=B (0.2d)

which, in my modest opinion, takes up far too much room. My diagrams litter the notes;
hopefully they are useful.
Second, these notes are currently written in the more "Russian" style. That is, they gloss over a
lot of material, giving the intuition underlying the theorems and only a few proofs. The idea is that
the reader is mathematically intelligent enough to supply the proofs instantly.

Lecture 1
A “Conformal Map”, for us, is a smooth map from a plane into itself that preserves
angles. Let A ⊆ R2 , B ⊆ R2 , and
f: A→B (1.1)
be smooth and bijective. Consider the following:
[Figure: curves γ1, γ2 and a tangent vector ~v in A, together with their images f(γ1), f(γ2), f(~v) in B.]
Additionally we have
‖f(~v)‖ / ‖~v‖ = constant.    (1.2)
It is understood that we mean an orientation-preserving map; the correct term is an "orientation-preserving conformal map". These correspond precisely to analytic functions of a complex
variable with nonzero first derivative.
In complex analysis this is equivalent to: A, B ⊆ C, with z ∈ A, ω ∈ B, and f
holomorphic. If
f′(z) ≠ 0    (1.3)
for all z ∈ A, then f is conformal.
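The angle-preservation claim can be sanity-checked numerically; here is a minimal sketch (the map z ↦ z², the base point, and the two directions are arbitrary choices):

```python
import cmath

# Numerically check that a holomorphic map with nonzero derivative
# preserves the angle between two directions at a point.
def f(z):
    return z * z

def angle_between(u, v):
    """Unsigned angle between two complex 'vectors' u and v."""
    return abs(cmath.phase(v / u))

z0 = 1 + 1j                      # base point; f'(z0) = 2*z0 != 0
h = 1e-6
u, v = 1, cmath.exp(1j * 0.7)    # two directions at z0

# Push the directions forward with difference quotients.
fu = (f(z0 + h * u) - f(z0)) / h
fv = (f(z0 + h * v) - f(z0)) / h

before = angle_between(u, v)
after = angle_between(fu, fv)
print(abs(before - after) < 1e-5)   # True: the angle is preserved
```

The difference quotients approximate f′(z0)·u and f′(z0)·v, so the angle between them matches the original up to O(h).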
NON-Example (Inversion). We have some disc of radius R with center O. Inversion would be z ↦ 1/z̄ in the
complex plane (if we make the origin O and dilate by 1/R); it preserves angles but
does not preserve orientation. In the complex plane, f(z) = 1/z
is a bit more interesting: it is holomorphic, so it preserves both angles and
orientation. This situation is doodled on the right.
Inversion changes lines into circles if we have lines that do
not pass through the origin. How can we see this? Consider

γ(t) = z0 + z1 t (1.4)

where z0, z1 ∈ C − {0} are nonzero constants (with z0/z1 ∉ R, so that γ is a line that does not pass
through the origin). What happens when we invert it? It becomes
γ̃(t) = 1/(z0 + z1 t) = (z̄0 + z̄1 t)/‖z0 + z1 t‖²    (1.5)

We see that the denominator is
‖z0 + z1 t‖² = z0 z̄0 + (z̄1 z0 + z1 z̄0)t + z̄1 z1 t²    (1.6a)
            = r0² + 2 Re(z̄1 z0)t + r1² t²    (1.6b)
where r0 = ‖z0‖ and r1 = ‖z1‖ are positive real numbers. Observe, then, that
lim_{t→+∞} γ̃(t) = 0    (1.7)

since it's of the form
γ̃(t) ∝ (const + z̄1 t)/(const + ‖z1‖² t²)    (1.8)
and the denominator grows faster than the numerator. Likewise, for precisely
the same reason, we have
lim_{t→−∞} γ̃(t) = 0    (1.9)
and
γ̃(0) = z0⁻¹.    (1.10)
Is this convincing? Well, yes and no.
Consider the stereographic projection to the sphere. We will consider longitudinal lines
on the sphere. What happens when we consider the inverse to the stereographic projection?
We leave this to the reader. . .

1.1 Riemann Mapping Theorem


Suppose that you have two domains in the plane that are not empty and not the whole
plane. Is it possible to find some conformal map from one to the other? If so, how many are
there?
Theorem 1.1 (Incomplete Version of the Riemann Mapping theorem). If we have two nonempty, simply
connected subsets of the plane (neither being the whole plane), and we prescribe z0 ↦ f(z0) and
demand orientation be preserved, then there exists a unique conformal map from one to the other.
For existence, we need some extra conditions.

Lecture 2
We stated previously the first nontrivial theorem of the course:
Riemann Mapping Theorem. If U ⊂ C is a simply connected domain, and U ≠ ∅, C,
then there exists a conformal map f : U → D where D = {z ∈ C | ‖z‖ < 1} is the unit disc.
Additionally, if we fix some point u ∈ U, demand it be mapped to the origin 0 ∈ D, and some
direction specified in U be mapped to some direction in D, then f is unique.
Existence is a more delicate matter than uniqueness. The domains need not be bounded
(e.g., the upper half plane of C). There are some particular cases we need to consider, e.g.
when U = D.
Uniqueness can be argued as follows. Consider two such maps f, g : U → D. Then
h = g ∘ f⁻¹ : D → D, and this composite takes 0 ↦ 0. The Schwarz lemma gives
‖h(z)‖ ≤ ‖z‖; we can deduce this from the Cauchy integral formula. We can
apply the same reasoning to h⁻¹ = f ∘ g⁻¹ : D → D, which gives the inequality
‖h⁻¹(ω)‖ ≤ ‖ω‖ where ω = h(z) and z = h⁻¹(ω). Together these imply
‖z‖ ≤ ‖h(z)‖ ≤ ‖z‖, hence ‖h(z)‖ = ‖z‖. This forces h(z) = λz
with ‖λ‖ = 1; in other words, h is just a rotation. By fixing the orientation
we get g ∘ f⁻¹ = h = id, hence f = g. This is simply the Schwarz lemma.
There is a notion of a "Fractional Linear Map" where, given some a, b, c, d ∈ C with
|A| = det( a b ; c d ) = ad − bc ≠ 0,    (2.1)
we are interested in the domain and the range of the map
fA(z) = (az + b)/(cz + d).    (2.2)

Observe that it is singular at z = −d/c, and we also see that fA(z) ≠ a/c.
We see that the composition of two linear fractional maps is itself a fractional linear
map given by the product of the matrices.
We can consider the matrices as being unique up to multiplication by a nonzero complex
number, so let's think of matrices with determinant 1. These are determined up to a
sign, and this gives
SL(2, C)/{±1} ≅ PSL(2, C).    (2.3)
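The correspondence between composition of fractional linear maps and matrix multiplication can be checked numerically; a minimal sketch (the two matrices and the sample point are arbitrary choices):

```python
# Composing two fractional linear maps corresponds to multiplying
# their 2x2 coefficient matrices.
def mobius(mat):
    (a, b), (c, d) = mat
    return lambda z: (a * z + b) / (c * z + d)

def matmul(m1, m2):
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

A = ((1, 2j), (3, 4))      # coefficients of f_A (arbitrary)
B = ((2, -1), (1j, 5))     # coefficients of f_B (arbitrary)

z = 0.3 + 0.7j
lhs = mobius(A)(mobius(B)(z))      # f_A(f_B(z))
rhs = mobius(matmul(A, B))(z)      # f_{AB}(z)
print(abs(lhs - rhs) < 1e-12)      # True
```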
We want to say now that every fractional linear transformation maps circles and straight
lines into circles and straight lines.
Theorem 2.1. Fractional linear transformations preserve circles and lines.
We can explicitly write
f(z) = (az + b)/(cz + d) = (1/c) [ a + (bc − ad)/(cz + d) ]    (2.4)
by considering the following:
f1 (z) = cz
f2 (z) = z + d
f3 (z) = 1/z
f4 (z) = (bc − ad)z
f5 (z) = a + z
f6 (z) = z/c, assuming c ≠ 0.
Each of these maps, with one exception, is just a translation or multiplication by a complex
number; these preserve circles and straight lines. The inversion operation, f3(z), after
some thinking can be shown to preserve "generalized circles". . . that is to say, straight lines
may become circles and circles may become straight lines.
Consider the unit disc. What does a "fractional linear map" map it to? Well, there are
several possibilities. The boundary can be mapped to a line or to a circle,
and the interior may be mapped to several places.
It depends on where the point mapped to infinity lives. It can live on the boundary. If the
singularity of the map is on the boundary, the circle is mapped to a line, and two distinct
points in the interior of the disc are mapped to the same side of the line, so the disc is
mapped to a half-plane. If the singularity is in the interior of the disc, the disc is mapped to
the exterior of the boundary's image in the range. The last case of interest is if the point
mapped to infinity is outside the closed disc; then the disc is mapped to a disc.
Take the map
f(z) = λ (z − a)/(1 − āz)    (2.5)
where ‖a‖ < 1 and ‖λ‖ = 1. This maps D → D, and covers all we need for the map to exist
by the Riemann mapping theorem. We see first that f(a) = 0. We see that
f′(z) = λ [ (1 − āz) + ā(z − a) ] / (1 − āz)²    (2.6a)
      = λ (1 − āa)/(1 − āz)²    (2.6b)
Observe that
f′(0) = λ(1 − aā)    (2.7)
So by choosing λ suitably, the prescribed direction is mapped to the positive real direction.
If ‖z‖ = 1, then z z̄ = 1 and
‖f(z)‖ = ‖f(z)‖ · ‖z̄‖    (2.8a)
       = ‖λ‖ · ‖(z z̄ − z̄a)/(1 − āz)‖    (2.8b)
       = ‖λ‖ · ‖(1 − z̄a)/(1 − āz)‖    (2.8c)
       = ‖λ‖ = 1,    (2.8d)
since 1 − z̄a and 1 − āz are complex conjugates.
How interesting.
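The two properties of (2.5) — that a goes to 0 and the unit circle goes to the unit circle — can be sanity-checked numerically; a minimal sketch, with a and λ chosen arbitrarily:

```python
import cmath

# Numeric check of the disc automorphism f(z) = λ(z - a)/(1 - conj(a) z).
a = 0.4 - 0.3j               # any point with |a| < 1
lam = cmath.exp(1j * 1.1)    # any λ with |λ| = 1

def f(z):
    return lam * (z - a) / (1 - a.conjugate() * z)

print(abs(f(a)) < 1e-15)     # True: f sends a to 0
on_circle = [cmath.exp(1j * t / 7) for t in range(44)]
print(all(abs(abs(f(z)) - 1) < 1e-12 for z in on_circle))  # True: |f| = 1 on |z| = 1
```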

Lecture 3
We will discuss the scheme of two proofs of existence. Given some U ⊂ R² that is
simply connected, U ≠ R² and U ≠ ∅, there exists some function
f : U → D    (3.1)
which is conformal (where D is the open unit disc in C), i.e. holomorphic with f′ nonvanishing. We
fix some a ∈ U.
Consider the set of maps
S = {f : U → D conformal | f(a) = 0, f′(a) > 0}.    (3.2)
First we see that S is nonempty.


We may suppose that 0 ∉ U (translating if necessary). Since U is simply connected, each circle around 0 has
some point not in U; otherwise we could not contract it.
We consider a branch of √− : U → C, and we see that √−(U) contains some disc.
If ω ∈ √−(U), then −ω ∉ √−(U).
For every f ∈ S, f′(a) ∈ R⁺ is bounded: there exists some M > 0 such that f′(a) < M
for all f ∈ S.
Now we use a standard trick in analysis. Let

M = sup{f 0 (0) | f ∈ S} (3.3)

If we have some sequence {fi } ⊂ S, we can find some convergent subsequence. We wish to
deduce that ∃f ∈ S such that f 0 (a) = M . This is our f that maps U to D injectively.
Suppose that B ⊂ D, 0 ∈ B, B ≠ D, and B is simply connected. We claim that there is a
conformal map g : B → D. Then, if f(U) = B, the composite g ∘ f : U → D is itself conformal.
Now for the second proof. Suppose we have a domain U, which we do not demand to be simply connected. For
every continuous function on ∂U there is a harmonic function on U with that boundary value; i.e.,
∀h : ∂U → R continuous,
∃u : Ū → R, harmonic in U, such that h = u|∂U.    (3.4)
We take
h(z) = ln ‖z − a‖.    (3.5)

Now this u may be supplemented with a harmonic conjugate v such that u + iv is holomorphic. Then we take
f(z) = (z − a) exp(−(u + iv)).    (3.6)
Our claim is that this is our function: it bijectively maps U to D.
If z ∈ ∂U, then ‖f(z)‖ = 1. We see this by direct computation:
‖f(z)‖ = ‖z − a‖ · ‖e^{−u(z)}‖    (3.7a)
       = ‖z − a‖ / ‖z − a‖    (3.7b)
       = 1.    (3.7c)
The only thing that requires work is showing that f′(z) is nonzero. We cannot use any
information about the boundary of U.

Is it possible to extend the Riemann mapping theorem from U ≅ D to Ū ≅ D̄? Not
necessarily. It is possible if ∂U is a continuous closed curve.
If U, V are simply connected and bounded domains, then what? Well, suppose we have
a map
f : Ū → R²    (3.8)
such that f|U is conformal, and additionally that f(∂U) = ∂V. Then
f : U → V.    (3.9)
It is sufficient to let V = D; then
f : Ū → D̄    (3.10)
and z ∈ Ū implies ‖f(z)‖ ≤ 1.
Let us consider conformal maps between D and the upper half plane. Consider
 
z ↦ i (1 − z)/(1 + z)    (3.11)
which maps 1 ↦ 0, −1 ↦ ∞, i ↦ 1, and indeed we can note
Im[ i (1 − z)/(1 + z) ] = (1 − ‖z‖²)/‖1 + z‖²,    (3.12)
which is positive on the unit disc.
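A quick numeric sanity check of this map (the sample points are arbitrary choices): the open disc should land in the upper half plane, and the unit circle on the real axis.

```python
import cmath

# w = i(1 - z)/(1 + z): unit disc -> upper half plane,
# unit circle -> real axis.
def w(z):
    return 1j * (1 - z) / (1 + z)

inside = [0.9 * cmath.exp(1j * t) * r for t in range(8) for r in (0.1, 0.5, 0.9)]
print(all(w(z).imag > 0 for z in inside))            # True

circle = [cmath.exp(1j * t / 5) for t in range(31)]  # avoids z = -1
print(all(abs(w(z).imag) < 1e-12 for z in circle))   # True
```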

Lecture 4
It is sometimes important to give an explicit conformal equivalence between two domains;
there are whole books on the subject.
Example 4.1. One of the important functions that is a conformal mapping is the exponential
mapping
f (z) = ez . (4.1)
It maps R × [0, 2π) ⊂ C to C − {0}. We shade the domain in gray, and consider how
horizontal and vertical lines behave under this mapping.
Note that horizontal lines are mapped to rays from the origin, and vertical lines are mapped to circles.
The imaginary axis (that is, z(t) = ti, which is purely imaginary) is mapped to the unit circle;
the vertical lines with x < 0 are mapped to concentric circles inside it, and the vertical lines
to the right are mapped to circles with radius exp(x) > 1. So the region
{(x, y) ∈ C | x ≥ 0, 0 ≤ y ≤ 2π}
is mapped to the region outside the unit circle, since exp(x) ≥ exp(0) = 1.
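The claim that vertical lines go to circles can be checked numerically; a minimal sketch (the choice x0 = −0.5 is arbitrary):

```python
import cmath

# exp maps the vertical line x = x0 to the circle of radius e^{x0}.
x0 = -0.5
line = [complex(x0, y / 10) for y in range(63)]   # y ranges over [0, 2π)
image = [cmath.exp(z) for z in line]
r = cmath.exp(x0).real                            # e^{x0}
print(all(abs(abs(wz) - r) < 1e-12 for wz in image))  # True
```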
Example 4.2. Consider
f(z) = z + 1/z.    (4.2)
How does this behave? Let's consider a nice subdomain of the upper half plane: the part lying outside the unit disc.

We see that f(1) = 2 and f(−1) = −2 by direct computation. When ‖z‖ = 1, writing
z = exp(iθ) we see that 1/z = exp(−iθ), and more generally 1/z = z̄ on the unit circle. Thus
f(z) = z + z̄ = 2 Re(z). So we see that this boundary is mapped into the real line, and that
this maps the domain to the upper half-plane. But to make things interesting, we cut up the domain in the manner
we have doodled. The circular arcs are mapped to elliptical arcs. The lines are mapped to
hyperbolas. We have a family of ellipses given by the relation
x²/(a + t) + y²/(b + t) = 1    (4.3)
where t ∈ R is "some parameter". There are places where it is badly behaved, but that's
okay. We also have imaginary ellipses when t < −a or t < −b.
There are many beautiful properties of confocal families (conics with common foci) which we
will not pursue here. I believe my good friend, Dmitry B. Fuchs, has beautifully examined
them in Mathematical Omnibus: Thirty Lectures on Classic Mathematics. Additionally, Serge
Tabachnikov's Geometry and Billiards (American Mathematical Society, 2005) is a good
resource.
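The behavior of f(z) = z + 1/z on circles can be checked numerically; a minimal sketch (the radius R = 1.7 is an arbitrary choice):

```python
import cmath

# Two checks for the map f(z) = z + 1/z: the unit circle lands in the
# real segment [-2, 2], and the circle |z| = R lands on the ellipse with
# semi-axes R + 1/R and R - 1/R (foci at ±2).
def f(z):
    return z + 1 / z

circle = [cmath.exp(1j * t / 9) for t in range(57)]
print(all(abs(f(z).imag) < 1e-12 and -2 <= f(z).real <= 2 for z in circle))  # True

R = 1.7
a, b = R + 1 / R, R - 1 / R
ellipse_eq = [(f(R * z).real / a) ** 2 + (f(R * z).imag / b) ** 2 for z in circle]
print(all(abs(v - 1) < 1e-12 for v in ellipse_eq))  # True
```

This matches the direct computation f(Re^{iθ}) = (R + 1/R)cos θ + i(R − 1/R)sin θ.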
There is one more transformation to be considered which is fairly
beautiful, so we will consider it. Suppose we have a convex n-gon (n-sided
polygon); there is a conformal map from the upper half plane to
this n-gon.
Let p be a convex n-gon with exterior angles α1 π, . . . , αn π, as doodled
to the right. We see that α1 + · · · + αn = 2, and 0 < αj < 1. Consider the following: fix
n − 1 points x1, . . . , xn−1 on the real line, ordered from left to right, with distances left
unspecified.
Consider
f(z) = b + a ∫_{z0}^{z} (ξ − x1)^{−α1} · · · (ξ − xn−1)^{−αn−1} dξ    (4.4)

where a, b ∈ C. Why consider only (n − 1) such α’s? Well, the nth is determined completely
by our relations above.
When we consider f(z) on the real line, what happens when ξ approaches, e.g., x1?
Does this converge or diverge as an improper integral? Well, since we have
0 < αj < 1    (4.5)
we see the integral converges, since
∫₀¹ dx/√x = 2    (4.6)
is finite: we can change variables from ξ to ξ − xj, and we get something like our integral
in Eq (4.6). Near infinity, it really looks like
f(z) ∝ ∫_{z0}^{z} ξ^{−2+αn} dξ    (4.7)
and this converges. So this function f(z) is defined at the dangerous points x1, . . . , xn−1 and
±∞. Let's set b = 0 and a = 1 for now.
The fundamental theorem of calculus says
f′(z) = (z − x1)^{−α1} · · · (z − xn−1)^{−αn−1}.    (4.8)
Let xj < z < xj+1. We are interested in
arg(f′(z)) = ?    (4.9)
We see that arg(x) = 0 for positive real numbers x. By direct computation we find
arg(f′(z)) = (−αj+1 − · · · − αn−1)π.    (4.10)
This means that the direction is constant on each interval (xj, xj+1).


[Figure: images f(xj), f(xj+1), f(xj+2) along the polygon boundary, with ∆arg = αj+1 π at the vertex f(xj+1).]
We see that as we move along the real axis, the argument changes in "discrete chunks". If
z < x1, then what may we say about arg(f′(z))? Well well well, we see that
arg(f′(z)) = (−α1 − · · · − αn−1)π = (αn − 2)π ≡ αn π    (4.11)
The last step is because 2π ≡ 0, since we mod out by 2π.


[Figure: the polygon with vertices f(x1), . . . , f(xn−1) and f(∞), with angle αn π at f(∞).]
We have no xn specified on the real axis because in effect f(∞) "=" f(xn); we have the
identification f(∞) = f(−∞).
What did we just do? We verified that the boundary is mapped to the boundary, but what about the interior?
Consider the rectangle: the holomorphic function obtained by our construction is an elliptic integral.
Remark 4.3 (Further Reading). For more on this, see Stein [5], chapter 8 §4 “Conformal
mappings onto polygons.”

Lecture 5: Analytic Continuation


Applications of conformal maps to partial differential
equations will not be covered. Instead we will skip ahead to
ANALYTIC CONTINUATION! We have a domain U and
a domain V, and a function f : U → C.
We wish to extend f to g : V → C. That is, when we
restrict g to U we recover f. The natural questions that arise
are: does such an extension exist, and if so, is it unique?
The famous example is the Riemann conjecture about the ζ function:
the real part of every nontrivial zero of the zeta function is 1/2, where the zeta function is
ζ(z) = Σ_{n=1}^{∞} n^{−z}.    (5.1)

Note it converges for Re(z) > 1. Also observe for z = 1 we have the divergent harmonic
series.
Consider the Gamma function
Γ(µ) = ∫₀^∞ x^{µ−1} e^{−x} dx;    (5.2)
if Re(µ) ≤ 0 we are in trouble, so we will consider Re(µ) > 0. Using this, we can express the zeta
function as
ζ(µ) = (1/Γ(µ)) ∫₀^∞ x^{µ−1}/(e^x − 1) dx.    (5.3)
Nifty!
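Equation (5.3) can be sanity-checked numerically at µ = 2, where ζ(2) = π²/6 and Γ(2) = 1; a rough midpoint-rule sketch (the truncation point and step count are arbitrary choices):

```python
import math

# Check ζ(µ) = (1/Γ(µ)) ∫₀^∞ x^{µ-1}/(e^x - 1) dx at µ = 2.
# The integrand is integrable at 0 since x/(e^x - 1) → 1.
def integrand(x, mu=2):
    return x ** (mu - 1) / math.expm1(x)

n, upper = 200000, 40.0        # truncate the tail; e^{-40} is negligible
h = upper / n
integral = sum(integrand((k + 0.5) * h) for k in range(n)) * h

print(abs(integral - math.pi ** 2 / 6) < 1e-4)  # True
```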
Lemma 5.1. Let f, g : U → C (where U is connected) be analytic. Let V ⊂ U, f |V = g|V ,
and V be open and nonempty. Then f = g.
Lemma 5.2 (“Sublemma”). For the same f, g, U. Suppose that we have a sequence zi ∈ U,
zi ≠ zj if i ≠ j, and suppose
lim_{i→∞} zi = z0 ∈ U.    (5.4)

If f (zi ) = g(zi ) for all i ∈ N, then f = g.


Proof of Sublemma. Let h := f − g, which is also analytic. Suppose h is not identically zero; then we can write
h(z) = a(z − z0)^k + · · ·    (5.5)
with a ≠ 0, and it follows that
(|a| − ε) · ‖z − z0‖^k < ‖h(z)‖ < (|a| + ε) · ‖z − z0‖^k    (5.6)
in a small neighborhood of z0. So ‖h(z)‖ > 0, i.e. h(z) ≠ 0 in the neighborhood
with the point z0 removed, contradicting h(zi) = 0 for the zi accumulating at z0.
Proof of Lemma. Let
V = {z0 ∈ U | f (z) = g(z) in some neighborhood of z0 }. (5.7)
Well then, V is open (it's obvious), and it follows from the sublemma that V is closed in U: it
contains its boundary points. Since U is connected, either V = U or V = ∅; but by hypothesis V ≠ ∅.
Suppose we have two functions f : U → C and g : V → C. We demand that U ∩ V ≠ ∅;
we do not demand simple connectedness. Suppose also that the functions agree on the overlap,
i.e. f|U∩V = g|U∩V. Now if we "combine" these two functions, we get an analytic
function on U ∪ V:
h(z) = { f(z)  if z ∈ U
       { g(z)  if z ∈ V.    (5.8)
Our lemma implies that if U, V are connected, then f determines g.
Suppose again we have some function f : U → C, and let z0 ∈ ∂U. We wish to define a
function g : Bε(z0) → C on a small neighborhood Bε(z0) of z0 that agrees with f on the overlap:
f|B∩U = g|B∩U. If no such g exists, it is impossible to extend f to V = U ∪ Bε(z0).
On the other hand, if g exists, we can extend f to this new domain.
If f is so badly behaved that f is not defined on the boundary of U, it cannot be extended at all.

Example 5.3. Consider the function
f(z) = Σ_{n=0}^{∞} z^{n!}.    (5.9)
It is defined on the unit disc in C and analytic there, but it cannot be extended, and it is ill
defined on the boundary.
Remark 5.4. For several complex variables, we cannot naively carry over our knowledge of single-variable concepts.
Of course any domain of interest is conformally equivalent to the unit disc, and this
interesting open domain will (by the Riemann mapping theorem) always have a function which
cannot be extended past the boundary.
Consider the function f(z) = √z, so that f(z)² = z. If
we extend it on some patchwork, going around the
patchwork we end up back in the shaded region of our
original domain. This may be seen as doodled on the
left. However, the resulting extension is not our beloved
f! We do not end up with a function with a domain in
the plane, no! We see this is a Riemann surface!
Aside. Riemann surfaces are fairly interesting as a subject. Note that for the most part,
we will be working with discs (which are "the same" as the upper half complex plane).
They are homotopically equivalent to spheres with finitely many punctures. So, in short,
all we care about is how we glue together the discs arising from analytically continuing a given
function. Provided, of course, that it does not "close" under continuation.

Lecture 6
I was sick for this lecture and missed it. It covered Riemann surfaces, and introduced
terminology: the family of discs which constitute a Riemann surface we will call
"Relatives", and if they overlap they are "Close Relatives" (note this is our
own terminology, not found in the literature). For a good reference, see Teleman's notes:
http://math.berkeley.edu/~teleman/math/Riemann.pdf

Lecture 7
We discussed the Riemann surface of
h(z) = √((z − 1)z(z + 1)).    (7.1)
This is doodled thus:



Consider
ω 3 − ω + z = 0, (7.2)
we see that
z = ω − ω3 , (7.3)
so we may write z = z(ω). We can invert this to find ω = ω(z). More generally we can
suppose that p(ω) = z and its inverse is q(z) = ω. We can now consider the Riemann surface
of this function, and the graph of p (roughly doodled below to the left).
The inverse function has several values, so in the complex analogue we get a Riemann surface
from the multivalued projection. In this case, for h(z), the Riemann surface is homeomorphic to a torus.
The reader should make a mental note of the importance of branch cuts in this method of
constructing Riemann surfaces. Also note that we are projecting onto the Riemann sphere,
which is distinguished from the notion of a Riemann surface. The Riemann sphere is
C ∪ {∞} as a sphere, obtained from stereographic projection.
Proposition 7.1 (Fact from geometry). An orientable, closed, compact surface is homeomorphic
to a sphere with p handles (for p = 1, a torus).
How do we find the genus of a Riemann surface? (I.e., how do we find the value of
p?) We have n sheets; we count how many times we glue points together, and subtract the
number of boundary points of the cuts. So we have
χ = 2n − #(boundary points of the cuts)    (7.4)
which is precisely the Euler characteristic. The genus is
genus = (2 − χ)/2.    (7.5)
This is for polynomials, however.
Riemann surfaces are defined for algebraic functions. Consider the famous example
of the logarithm function, it covers the complex plane infinitely many times. When we
consider the Riemann surface, it’s like an infinite Helix. This is the logarithmic staircase.
See Penrose’s Road to Reality for a good doodle.
7.1 Reflection Principle
We are nonetheless interested in extending functions. We have the Reflection Principle.
We have some domain U whose boundary contains part of the x (real) axis. We consider
some function f : U → C, and we extend f to another function f̃ on
U ∪ I, where I is the real part of ∂U, i.e., I = R ∩ ∂U. We demand
that f̃ be continuous, and demand that f̃|I be real. We consider
the complex conjugate of U, doodled to the right, which is
Ū = {z̄ | z ∈ U}. We introduce a function f̂ such that
f̂(z) = f(z) if z ∈ U,
f̂(z) = the complex conjugate of f(z̄) if z ∈ Ū,
f̂(z) = f̃(z) if z ∈ I.

We have then V = U ∪ I ∪ Ū, and we see that f̂ is continuous on V. We see that if f̂ is
analytic on U, then f̂ is analytic on Ū.
Let z = x + iy ∈ U, and write
f̂(x + iy) = u(x, y) + iv(x, y);    (7.6)
then for z ∈ Ū we have
f̂(z) = u(x, −y) − iv(x, −y).    (7.7)
By the chain rule we see that f̂ satisfies the Cauchy-Riemann equations on Ū.
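The reflected formula can be sanity-checked numerically; a minimal sketch using f = exp, which is real on the real axis (so the reflected definition must agree with f itself):

```python
import cmath

# If f is analytic and real on the real axis, then conj(f(conj(z)))
# agrees with f(z), so the reflected definition glues across R.
def reflected(f, z):
    return f(z.conjugate()).conjugate()

pts = [0.3 + 0.8j, -1.2 + 0.1j, 2.0 + 2.5j]   # arbitrary sample points
print(all(abs(reflected(cmath.exp, z) - cmath.exp(z)) < 1e-12 for z in pts))  # True
```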
A General Statement. We will use the diagram doodled on the right for reference: a domain W
divided by a curve γ into W1 and W2. Consider ϕ : W → C such that ϕ is continuous
on W. If ϕ|W1 and ϕ|W2 are analytic, then ϕ is analytic on W.
Lemma 7.2. Suppose we have in some domain U a continuous function
f : U → C
such that for every closed contractible curve γ in U,
∫_γ f(z) dz = 0.
Then the function is analytic.
We see that the integral over a closed contractible path is zero if it's contained
entirely in W1 or W2. We just need to check paths that cross γ; we treat such a path by
breaking it up into pieces.

Homework 1
x EXERCISE 1
Prove that if a conformal map of a domain U in R2 takes straight lines parallel to the x axis into
parallel lines, then it takes any straight lines into straight lines and circles into circles.
x EXERCISE 2
Prove that if a conformal map of a domain U in R2 takes straight lines into straight lines, then, at
least locally, f coincides with a similarity map (a combination of a rotation and a dilation). (Hint:
triangles.)
x EXERCISE 3
Prove that if a smooth map f : U → V between domains in the plane preserves perpendicularity
(that is, if two curves γ, γ′ in U intersect each other at a right angle, then so do their images),
then it is conformal.
x EXERCISE 4
Prove that for any three different points u, v, w ∈ C ∪ {∞} there exists a unique fractional linear
transformation f such that f(u) = 0, f(v) = 1, f(w) = ∞. [Hint One: it is sufficient to consider
u = 0, v = 1, w = ∞. Hint Two: cross-ratio (Page 331 of the book); Hint Two is helpful, but
not necessary.]
x EXERCISE 5
Prove that (a) a fractional linear transformation must have at least one fixed point;
(b) if a fractional linear transformation has no finite (different from ∞) fixed points, then it is a
translation, f (z) = z + b.
x EXERCISE 6
Let f(z) = z + 1/z. Prove that f takes the concentric circles ‖z‖ = R, R ≥ 1, and the straight rays
z = tα, ‖α‖ = 1, t ≥ 1, into ellipses and hyperbolas with the foci (2, 0) and (−2, 0). See the picture.
(Comment: this is an "honest" picture; the figure on the right is obtained by the given transformation
from the figure on the left.)

Lecture 8
A confocal family of conics — every point lies in a hyperbola and an ellipse which are
perpendicular to each other.
Now, let us continue considerations of the reflection principle. If f : U → C is analytic
and real on I, and we explicitly reiterate the notion of reflection by letting
Ū = {z̄ | z ∈ U}    (8.1)
be the reflection of U, then we can analytically continue f to Ū.


We can similarly do this on the unit circle. How? Well, we can use one of our beloved
conformal mappings ψ, which maps the region U to a domain ψ(U) in the unit disc with
part of its boundary ψ(I) on the boundary of the unit circle. Let i be inversion in the
circle on the source side, and j inversion on the image side; the continuation is then
f(z) = j(f(i(z))).    (8.2)
We may construct a fractional linear transformation ϕ and simply replace f by ψ⁻¹ ∘ f ∘ ϕ.
This generalizes the reflection principle to circles.
And Now, For Something Completely Different


The Riemann mapping theorem may be generalized to non-simply connected regions. We can extract it from the
circle reflection principle. How to do this witchcraft? Well,
suppose we have two annuli, with inner radii r and R and common outer radius 1.
Then we have a conformal mapping from one to the other
only if r = R. The boundary circles are mapped to the boundary circles; and moreover f is a
rotation.
We can see that inversion, as doodled on the right, takes the radially shaded
annulus to the angularly shaded one. We can iterate over and over again.
To be more precise, we do not invert the annulus: we reflect in the inner circle of radius r
(or R), over and over, and in the limit we end up with D − {0}, the unit disc
missing the origin. We have a holomorphic map
f : D − {0} → D − {0}    (8.3)
which is bijective. We interpret this as mapping the center to the center (the singularity
at 0 is removable), so we have a conformal map f : D → D such that 0 ↦ 0. The only
undetermined property is how the mapping treats orientation, but since it is conformal,
orientation is preserved. The only such map is a rotation! Quite ingenious.
What happens if the domain is not a concentric annulus? It becomes more of a challenge
to prove that two such domains are conformally equivalent. Some examples would look like:

8.1 Argument Principle


We will consider several theorems whose names change but whose results remain the same. (I
believe Led Zeppelin had a song with this title!) Suppose we have some domain, and we
have a meromorphic function on it that is not zero on the boundary of the domain and has no poles
on the boundary; so all zeroes and poles are inside the domain. We compute:
Z = number of zeroes
P = number of poles    (8.4)
If we travel f along the boundary, when we return to our departure point ‖f‖ is the same
but the argument differs by 2πk (k ∈ Z).
We take the unit disc and the function f(z) = z. We start at 1. We travel
along the boundary and when we get back to the point of departure we find
∆ arg(f(z)) = 2π · 1.    (8.5)

In general we let γ be the boundary of the domain. We have, then, in general

∆γ arg(f (z)) = 2π(Z − P ) (8.6)

If we know the behavior of the function on the boundary, we have a good clue to the number
of zeroes. Also note, if we were to move clockwise then our formula changes to

∆γ arg(f (z)) = 2π(P − Z) (8.7)

instead. Note the difference between eq (8.6) and (8.7).


Theorem 8.1. Suppose we have a meromorphic function f : U → C, and U is our domain,
and also suppose f has zeroes and poles. Consider a closed curve γ in this domain not
passing through zeroes or poles. Then

∆γ arg(f (z)) = 2π(Z − P ) (8.8)

holds for our oriented closed curve γ.


Theorem 8.2. We have
∫_γ (f′(z)/f(z)) dz = 2πi [ Σ_{zeroes a} I(γ, a) − Σ_{poles b} I(γ, b) ]    (8.9)

Observe that this integral is the same as
∫_γ (f′(z)/f(z)) dz = ∫_γ d(log(f(z))).    (8.10)

Remark 8.3. This second theorem implies the first. The second theorem follows directly
from the residue theorem.
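Theorem 8.1 can be illustrated numerically by computing the winding of f ∘ γ around 0; a minimal sketch (the test function and step count are arbitrary choices):

```python
import cmath

# Argument principle on the unit circle for
# f(z) = z^2 (z - 0.5)/(z - 2): zeroes inside (with multiplicity)
# Z = 3, poles inside P = 0, so the winding of f∘γ about 0 is Z - P = 3.
def f(z):
    return z ** 2 * (z - 0.5) / (z - 2)

n = 4000
pts = [f(cmath.exp(2j * cmath.pi * k / n)) for k in range(n + 1)]

total = 0.0
for w0, w1 in zip(pts, pts[1:]):
    total += cmath.phase(w1 / w0)   # small phase increment per step

winding = total / (2 * cmath.pi)
print(round(winding))  # 3
```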

Lecture 9
We have an analytic function f : U → C (or more precisely meromorphic), U is simply
connected, and γ is an oriented closed curve in U that doesn't pass through poles or zeroes;
we stated
∫_γ (f′(z)/f(z)) dz = 2πi [ Σ_{zeroes a} I(γ, a) − Σ_{poles b} I(γ, b) ]    (9.1)
(A meromorphic map is f : U → C ∪ {∞}, i.e. locally the ratio of two holomorphic functions.)

Remark 9.1. Remember $\gamma$ does not pass through any zeroes or poles of $f$.
For a point $z_0$, we have "locally"
$$f(z) = (z - z_0)^k\varphi(z) \tag{9.2}$$
for some nonzero $k \in \mathbb{Z}$ and an analytic $\varphi(z)$ with neither pole nor zero at $z_0$. For $k > 0$, we
see that $z_0$ is a zero of $f$, and for $k < 0$ that $f$ has a pole at $z_0$. So we can see that
$$\frac{f'(z)}{f(z)} = \frac{k(z-z_0)^{k-1}\varphi(z) + (z-z_0)^k\varphi'(z)}{(z-z_0)^k\varphi(z)} \tag{9.3a}$$
$$= \frac{k}{z-z_0} + \frac{\varphi'(z)}{\varphi(z)}. \tag{9.3b}$$
We see that $k$ is the residue of $f'(z)/f(z)$ at $z_0$, so by our theorem we have
$$\int_\gamma \frac{f'(z)}{f(z)}\,dz = 2\pi i\left(\sum_{\substack{\text{distinct}\\ \text{zeroes } a}} \mu(a)\,I(\gamma,a) - \sum_{\substack{\text{distinct}\\ \text{poles } b}} \mu(b)\,I(\gamma,b)\right), \tag{9.4}$$
where $\mu$ denotes multiplicity.

For the winding number, we have the following rule: each crossing of the curve contributes
$+1$ or $-1$ according to its orientation (illustrated in lecture by a crossing diagram).
We have this integral also be, by the fundamental theorem of calculus,
$$\int_\gamma \frac{f'(z)}{f(z)}\,dz = \int_\gamma d\log(f(z)), \tag{9.5}$$
but beware! The logarithm is multivalued. We see that
$$\log(\omega) = \log\|\omega\| + i\arg(\omega). \tag{9.6}$$
So we use a notation
$$\int_\gamma d\log(f(z)) = \underbrace{\Delta_\gamma\log\|f(z)\|}_{\text{no change if }\gamma\text{ is closed, so it's }0} + i\,\Delta_\gamma\arg(f(z)) = i\,\Delta_\gamma\arg(f(z)). \tag{9.7}$$
We can then write
$$i\,\Delta_\gamma\arg(f(z)) = 2\pi i\Big(\sum I(\gamma,a) - \sum I(\gamma,b)\Big) \implies \Delta_\gamma\arg(f(z)) = 2\pi\Big(\sum I(\gamma,a) - \sum I(\gamma,b)\Big). \tag{9.8}$$
We can write
$$\gamma: I \to \mathbb{C}, \tag{9.9}$$
which is $\gamma$ given parametrically, then write $f\circ\gamma$ or $f(\gamma(t))$. If we integrate over $I$ we get
back $2\pi$ times a number. We can write, then,
$$\Delta_\gamma\arg(f(z)) = 2\pi\Big(\sum I(\gamma,a) - \sum I(\gamma,b)\Big) = 2\pi\, I(f\circ\gamma, 0). \tag{9.10}$$

If $\gamma$ is a simple curve, we can write
$$I(f\circ\gamma, 0) = \begin{pmatrix}\text{zeroes within}\\ \text{the bdry}\end{pmatrix} - \begin{pmatrix}\text{poles within}\\ \text{the bdry}\end{pmatrix} = Z - P. \tag{9.11}$$
Recall a simple curve is any nonintersecting curve.
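The argument principle lends itself to a quick numerical sanity check (a sketch of our own, not from the lecture; the function `f` below is an arbitrary example with known zeroes and poles): sample $f\circ\gamma$ around the unit circle and accumulate the continuous change of argument.

```python
import cmath

# f has two zeroes inside the unit circle (0.3 and -0.2+0.1i), a double zero
# outside (at 3), and one pole inside (0.5i): expect Z - P = 2 - 1 = 1.
def f(z):
    return (z - 0.3) * (z + 0.2 - 0.1j) * (z - 3)**2 / (z - 0.5j)

n = 20000
total = 0.0
prev = cmath.phase(f(1 + 0j))
for k in range(1, n + 1):
    z = cmath.exp(2j * cmath.pi * k / n)
    cur = cmath.phase(f(z))
    # unwrap the principal-valued argument into a continuous one
    total += (cur - prev + cmath.pi) % (2 * cmath.pi) - cmath.pi
    prev = cur

winding = total / (2 * cmath.pi)
print(round(winding))   # -> 1 = Z - P
```

With fine enough sampling, the accumulated argument change is $2\pi(Z - P)$, exactly as in (9.10)–(9.11).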

9.1 Rouché’s Theorem


Suppose that in a simply connected domain $U$ we have meromorphic functions
$$f, g: U \to \mathbb{C} \tag{9.12}$$
and let
$$\|f(z) - g(z)\| < \|f(z)\| \quad \forall z \in \gamma, \tag{9.13}$$
where $\gamma$ is an oriented closed curve in $U$. Then
$$\Delta_\gamma\arg(f) = \Delta_\gamma\arg(g) \tag{9.14}$$
is our claim.

What this means is that $f\circ\gamma$ is a curve in $\mathbb{C}$, and for each point $f(z)$ on it, $g(z)$ lies in
the disc of radius $\|f(z)\|$ about $f(z)$. We wish to argue that the distance from $f(\gamma(t))$
to $g(\gamma(t))$ is less than the distance from $f(\gamma(t))$ to the origin; so we have the
inequality $\|f(\gamma(t)) - g(\gamma(t))\| < \|f(\gamma(t))\|$. (This situation was doodled in lecture: a
dark red line from $f(\gamma(t))$ to the origin, and a dark blue line from $f(\gamma(t))$ to $g(\gamma(t))$.)
Now every domain of mathematics has its cherished proof of the fundamental theorem of algebra,
and we now have the tools to prove it. Let
$$f(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0 \tag{9.15}$$
and
$$g(z) = z^n. \tag{9.16}$$
Now we take
$$\|f(z)-g(z)\| = \|a_{n-1}z^{n-1} + \cdots + a_0\| \le \|a_{n-1}\|\cdot\|z^{n-1}\| + \cdots + \|a_0\|, \tag{9.17}$$
then for some radius $R$ we consider the curve
$$\gamma = \{z \mid \|z\| = R\}. \tag{9.18}$$
We claim that on this curve
$$\|f(z)-g(z)\| < \|g(z)\|. \tag{9.19}$$
Indeed, the ratio
$$\frac{\|f(z)-g(z)\|}{\|g(z)\|} \le \|a_{n-1}\|\frac{1}{R} + \|a_{n-2}\|\frac{1}{R^2} + \cdots + \|a_0\|\frac{1}{R^n} \tag{9.20}$$
is very interesting. We let
$$h(z) = \|a_{n-1}\|z + \|a_{n-2}\|z^2 + \cdots + \|a_0\|z^n, \tag{9.21}$$
then we find a $\delta > 0$ such that
$$\|z\| < \delta \implies \|h(z)\| < 1. \tag{9.22}$$
We take $R = 1/\delta$; then Rouché's theorem gives $f$ the same number of zeroes inside $\gamma$ as $g$, namely $n$.
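The Rouché argument above can be illustrated numerically (our own sketch, with an arbitrary example polynomial): once $R$ is past the bound, the winding number of the image of $\|z\| = R$ about $0$ already equals the degree.

```python
import cmath

def p(z):
    # an arbitrary degree-4 example polynomial (coefficients our own choice)
    return z**4 + 2 * z**3 - z + 5

def winding_about_zero(func, radius, n=20000):
    """Total change of arg(func) around the circle |z| = radius, in full turns."""
    total = 0.0
    prev = cmath.phase(func(complex(radius, 0)))
    for k in range(1, n + 1):
        z = radius * cmath.exp(2j * cmath.pi * k / n)
        cur = cmath.phase(func(z))
        # unwrap the principal value so the argument varies continuously
        total += (cur - prev + cmath.pi) % (2 * cmath.pi) - cmath.pi
        prev = cur
    return total / (2 * cmath.pi)

# |2z^3 - z + 5| <= 2005 < 10^4 = |z^4| on |z| = 10, so Rouche applies:
print(round(winding_about_zero(p, 10.0)))   # -> 4, the degree
```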

Homework 2

Exercise 7. Let $U$ be the domain obtained from $\mathbb{C}$ by deleting all real $x$ with $|x| \ge 1$. Describe (explicitly) a
conformal map of $U$ onto the upper half-plane.

Exercise 8. Prove the following result of Karl Weierstrass. Let
$$f(z) = \sum_{n=0}^\infty z^{n!}.$$
Then $f$ cannot be analytically continued to any open set properly containing
$$A = \{z \in \mathbb{C} \mid \|z\| < 1\}.$$
Hint: first consider $z = r\exp(i2\pi p/q)$ where $p$ and $q$ are integers.

Exercise 9. Describe, in terms of cutting and pasting, the Riemann surface of the function $w = w(z)$ given by
the implicit formula $w^3 - w + z = 0$ (you can use the algebraic fact that a complex number $a$ is a
multiple root of an algebraic equation $p(x) = 0$ if and only if $p(a) = p'(a) = 0$). Try to generalize to
$w^n - w + z = 0$.

Exercise 10. Let $f$ be a conformal map of the half-disk $D^+ = \{z \in \mathbb{C} \mid \|z\| < 1, \operatorname{Im}(z) > 0\}$ onto itself which can
be extended to a continuous map of $D^+ \cup (-1, 1)$ onto itself. Prove that
$$f(z) = \frac{z-a}{1-az}$$
for some $a \in (-1, 1)$. (Hint: Schwarz's Reflection Principle.) (By the way, is the extendability
condition really needed? The answer can be deduced immediately from the Riemann mapping
theorem.)

Exercise 11. Let $d_1$, $d_2$ be two closed discs contained in the open unit disk $D$. Prove that any conformal map
$D - \{d_1\} \to D - \{d_2\}$ can be extended to a fractional linear map. Try to deduce a condition on $d_1$,
$d_2$ under which such $f$ can exist. (Hint: the circular Schwarz Reflection Principle, page 370.)

Lecture 10
Today, we are going to prove a couple of theorems.

Theorem 10.1. Suppose we have an analytic function on some domain $U$ which is injective
(at least locally). So if $f: U \to \mathbb{C}$ is injective, then $f'(z) \ne 0$ for all $z \in U$.

Note that the analogous statement is false in real analysis: $x \mapsto x^3$ is injective on $\mathbb{R}$, yet its derivative vanishes at $0$.

Proof. Suppose for contradiction this is not true, so $f'(z_0) = 0$ for some $z_0 \in U$. Then
$g(z) = f(z) - f(z_0)$ is zero at $z_0$ and its derivative is zero there: $g$ has a zero of multiplicity
$k \ge 2$ at $z_0$. There is a punctured neighborhood of $z_0$, a disc $d$ with boundary $\gamma$, in which
neither $g(z)$ nor $g'(z)$ vanishes. There exists an $a$ such that
$$0 < a < \inf_{z\in\gamma}\|f(z) - f(z_0)\|. \tag{10.1}$$
Consider
$$h(z) = f(z) - f(z_0) - a. \tag{10.2}$$
By Rouché's theorem (comparing $h$ with $g$ on $\gamma$), $h$ has precisely $k \ge 2$ zeroes within the
disc. If $h(z) = 0$ for a $z \in d$, then its derivative $h'(z) = f'(z) \ne 0$ is nonvanishing (we see in fact
$h(z_0) = -a \ne 0$, so $z \ne z_0$); it follows that all these zeroes are simple, hence distinct. At each one,
$$h(z) = 0 \implies f(z) = f(z_0) + a = \text{constant}, \tag{10.3}$$
so $f$ takes the same value at two distinct points. This contradicts the premise that $f(z)$ is
injective, and we reject our initial supposition.
Theorem 10.2. Let $f: U \to \mathbb{C}$ be nonconstant and analytic, such that for some $z_0 \in U$,
$f(z_0) = a$ and $f - a$ has a zero of multiplicity $k$ at $z_0$. Then $f$ is locally $k$-to-one near $z_0$.

Consider
$$\zeta(s) = \sum_{n=1}^\infty n^{-s}. \tag{10.4}$$
We know $\zeta(2k)$ is related to the Bernoulli numbers. Try to do the computation; we will
discuss it next time.

Consider infinite products $\prod a_n$; we consider the partial products
$$p_n = \prod_{k=1}^n a_k. \tag{10.5}$$
If
$$\lim_{n\to\infty} p_n = p \tag{10.6}$$
exists and is finite, then
$$\prod_{n=1}^\infty a_n = p. \tag{10.7}$$
It is tempting to say this. But consider
$$\prod_{n=2}^\infty\left(1 - \frac1n\right) = \lim_{N\to\infty}\frac12\cdot\frac23\cdot\frac34\cdots\frac{N-1}{N} = \lim_{N\to\infty}\frac1N = 0. \tag{10.8}$$
Is this true? No! The product diverges: by convention, a product whose partial products tend to zero is said to diverge (to zero).


Suppose
$$\prod_{n=1}^\infty a_n \quad\text{with}\quad a_n \ne 0 \text{ for all } n, \tag{10.9}$$
and further suppose
$$\lim_{N\to\infty}\prod_{n=1}^N a_n \ne 0. \tag{10.10}$$
We could argue that it converges. If we extend this to include finitely many $a_n = 0$, but
after some $m$ all $a_n \ne 0$ for $n \ge m$, then we require
$$\lim_{N\to\infty}\prod_{n=m}^N a_n \ne 0. \tag{10.11}$$
We just start from where $a_n$ will not be zero.


Consider
$$\prod_{n=1}^\infty(1 + z_n) \tag{10.12}$$
and suppose $z_n \ne -1$ for all $n$. Suppose this converges:
$$\prod_{n=1}^N(1+z_n) \to p, \tag{10.13}$$
so it is also true that
$$\prod_{n=1}^N\|1+z_n\| \to \|p\|. \tag{10.14}$$
We write
$$\sum_{n=1}^N\log\|1+z_n\| \to \log\|p\|, \tag{10.15}$$
so it means
$$S_N = \sum_{n=1}^N\log\|1+z_n\| \tag{10.16}$$
converges. But for the sum to converge, the summands form a sequence which converges to
zero, i.e.,
$$\lim_{n\to\infty}\log\|1+z_n\| = 0. \tag{10.17}$$
But this happens if and only if
$$\|1+z_n\| \to 1. \tag{10.18}$$
How interesting!
Euler showed that
$$\sin(z) = z\prod_{n=1}^\infty\left(1 - \frac{z^2}{n^2\pi^2}\right); \tag{10.19}$$
the zeroes of sine are $0, \pm\pi, \pm2\pi, \ldots, \pm k\pi, \ldots$; we could (formally) write this out as a product:
$$z(z-\pi)(z+\pi)(z-2\pi)(z+2\pi)\cdots = z(z^2 - \pi^2)(z^2 - (2\pi)^2)\cdots \tag{10.20a}$$
$$= z\left(\frac{z^2}{\pi^2} - 1\right)\left(\frac{z^2}{2^2\pi^2} - 1\right)\cdots \tag{10.20b}$$
$$= \sum_{n=0}^\infty\frac{(-1)^n z^{2n+1}}{(2n+1)!}, \tag{10.20c}$$
where the last step is justified by setting equals to equals (the product definition of sine to
the Taylor series definition of sine).

Remark 10.3. Note that going from Eq (10.20a) to Eq (10.20b) is a little confusing for
neophytes (read: the author); each quadratic factor is rescaled by a constant. However, it is a
common trick to write it in this form so the infinite product looks like
$$\sin(\ldots) = z_0\prod_n(1 - z_n),$$
which is the form of the infinite product we study thoroughly!

We may set coefficients of the same power of $z$ equal to each other; we find
$$\sum_{n=1}^\infty\frac{1}{\pi^2 n^2} = \frac{1}{3!} \tag{10.22a}$$
$$\sum_{1\le k_1<k_2}\frac{1}{\pi^4 k_1^2 k_2^2} = \frac{1}{5!} \tag{10.22b}$$
and so on. How do we get this? Well, it has to do with picking the terms intelligently from
the infinite product.

Lecture 11
The Riemann zeta function is investigated further in the theory of $p$-adic numbers. In real variables,
we have the Bernoulli numbers, which obey
$$\frac{x}{e^x - 1} = 1 - \frac x2 + \sum_{k=1}^\infty(-1)^{k-1}B_k\frac{x^{2k}}{(2k)!}, \tag{11.1}$$
where
$$B_k = \frac16,\ \frac1{30},\ \frac1{42},\ \frac1{30},\ \frac5{66},\ \frac{691}{2730},\ \ldots \tag{11.2}$$
We have
$$\sum_{n=1}^\infty\frac{1}{n^{2k}} = B_k\,\frac{2^{2k-1}\pi^{2k}}{(2k)!}. \tag{11.3}$$
So by plugging in $k = 1$ we obtain
$$\sum_{n=1}^\infty\frac{1}{n^2} = \frac{\pi^2}{6}. \tag{11.4}$$
We also assert that
$$\sum_{1\le k_1<\cdots<k_s}(k_1k_2\cdots k_s)^{-2} = \frac{\pi^{2s}}{(2s+1)!}. \tag{11.5}$$
This is not so obvious (which is why we assert it for the moment).


If we consider the following:
$$\left(\sum\frac{1}{n^2}\right)^2 = \sum\frac{1}{n^4} + 2\sum_{k_1<k_2}\frac{1}{k_1^2k_2^2}, \qquad\text{i.e.}\qquad \frac{\pi^4}{36} = \sum\frac{1}{n^4} + \frac{\pi^4}{60}, \tag{11.6}$$
which then gives us
$$\sum\frac{1}{n^4} = \frac{\pi^4}{90}. \tag{11.7}$$
We can go further:
$$\left(\sum\frac{1}{n^2}\right)^3 = \sum\frac{1}{n^6} + 3\sum_{k\ne\ell}\frac{1}{k^4\ell^2} + 6\sum_{k_1<k_2<k_3}(k_1k_2k_3)^{-2} \tag{11.8}$$
and take
$$\sum\frac{1}{n^4}\cdot\sum\frac{1}{n^2} = \sum\frac{1}{n^6} + \sum_{k\ne\ell}\frac{1}{k^4\ell^2}, \qquad\text{where}\quad \sum\frac{1}{n^4}\cdot\sum\frac{1}{n^2} = \frac{\pi^4}{90}\cdot\frac{\pi^2}{6}. \tag{11.9}$$

We subtract these results:
$$\left(\sum\frac{1}{n^2}\right)^3 - 3\sum\frac{1}{n^4}\sum\frac{1}{n^2} = \left(\frac{1}{6^3} - \frac{3}{6\cdot90}\right)\pi^6 \tag{11.10a}$$
$$= -2\sum\frac{1}{n^6} + 6\sum_{k_1<k_2<k_3}(k_1k_2k_3)^{-2}. \tag{11.10b}$$
We know
$$\sum_{k_1<k_2<k_3}(k_1k_2k_3)^{-2} = \frac{\pi^6}{7!}, \tag{11.11}$$
so we plug this in and we find
$$\sum\frac{1}{n^6} = \frac{\pi^6}{2}\left(\frac{6}{7!} + \frac{1}{180} - \frac{1}{6^3}\right) = \frac{\pi^6}{945}. \tag{11.12}$$
We may continue iterating, but there is no closed form expression in general.
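The sums appearing here are easy to check numerically (our own sketch), using Newton's identities to produce the multi-index sums from the ordinary power sums:

```python
from math import pi

N = 4000
z2 = sum(1.0 / n**2 for n in range(1, N + 1))
z4 = sum(1.0 / n**4 for n in range(1, N + 1))
z6 = sum(1.0 / n**6 for n in range(1, N + 1))

# Newton's identities give the strictly-increasing-index sums:
# e2 = sum_{k1<k2} (k1 k2)^{-2} = (z2^2 - z4)/2,  e3 = (z2^3 - 3 z2 z4 + 2 z6)/6
e2 = (z2**2 - z4) / 2
e3 = (z2**3 - 3 * z2 * z4 + 2 * z6) / 6

print(abs(z4 - pi**4 / 90) < 1e-9)      # zeta(4) = pi^4/90
print(abs(z6 - pi**6 / 945) < 1e-12)    # zeta(6) = pi^6/945
print(abs(e2 - pi**4 / 120) < 1e-3)     # = pi^4/5!
print(abs(e3 - pi**6 / 5040) < 1e-3)    # = pi^6/7!
```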
Now, suppose we have a sequence of complex numbers $a_1, a_2, \ldots$, and we do not
demand they are distinct. Can we construct an entire function with zeroes at $a_1, \ldots$; would
this be possible? Why not take the product
$$f(z) = (z - a_1)(z - a_2)\cdots? \tag{11.13}$$
Why not?
Remark 11.1. If $f$ has no poles in a neighborhood, then $f$ is continuous there; near an
essential singularity, however, a function takes (in any neighborhood of the singularity) all
values with possibly two exceptions, such as $0$ and $\infty$. This is Picard's theorem.
If we have a function without singularities, and a sequence of points $a_1, \ldots$: can we
have a function $f(z)$ such that $f(a_k) = 0$ for all $k \in \mathbb{N}$? We can construct
$$f(z) = \prod_k\left(1 - \frac{z}{a_k}\right). \tag{11.14}$$
When will this work? We need to impose the condition that $\|a_k\| \to \infty$, but we need more.
This will converge absolutely if
$$\sum\frac{1}{\|a_n\|} \quad\text{converges.} \tag{11.15}$$
This is actually too strong a condition. What about $a_n = n$? The sum is harmonic and
diverges!
Consider
$$f(z) = z^k\prod_j\left(1 - \frac{z}{a_j}\right); \tag{11.16}$$
we write instead
$$f(z) = \prod_n\left(1 - \frac{z}{a_n}\right)e^{z/a_n}, \tag{11.17}$$
which converges if
$$\sum\frac{1}{\|a_n\|^2} \quad\text{converges.} \tag{11.18}$$
Why? Because
$$\left(1 - \frac{z}{a_n}\right)e^{z/a_n} = \left(1 - \frac{z}{a_n}\right)\left(1 + \frac{z}{a_n} + \cdots\right) \tag{11.19a}$$
$$= 1 + \frac{z^2}{a_n^2}\left(-1 + \frac12\right) + \cdots + \frac{z^k}{a_n^k}\left(\frac{1}{k!} - \frac{1}{(k-1)!}\right) + \cdots \tag{11.19b}$$
So we are considering evaluating products of the form
$$\prod_{n=1}^\infty(1 + \omega_n), \tag{11.20}$$
where $\omega_n = (z/a_n)^2(-1 + \tfrac12) + \cdots$; we have
$$\|\omega_n\| < \sum_{k=2}^\infty\left\|\frac{z}{a_n}\right\|^k\underbrace{\left|\frac{1}{k!} - \frac{1}{(k-1)!}\right|}_{<1} \tag{11.21}$$
and note that the sum on the right hand side begins with $k = 2$. If $\|z\| < \|a_n\|$, then
$$\|\omega_n\| < \frac{\|z/a_n\|^2}{1 - \|z/a_n\|} \tag{11.22}$$
by the geometric series. So it follows that $\prod(1+\omega_n)$ converges if the aforementioned series
$\sum\|a_n\|^{-2}$ converges. Remember we have
$$1 + \omega_n = \left(1 - \frac{z}{a_n}\right)e^{z/a_n}. \tag{11.23}$$
We have our product converging, uniformly on discs $\|z\| \le R$, with zeroes at $a_1, \ldots$.
Theorem 11.2. If $a_1, \ldots$ is a sequence where $a_i \ne 0$ for every $i$, and $\sum\|a_n\|^{-2}$ converges,
then
$$f(z) = \prod_n\left(1 - \frac{z}{a_n}\right)e^{z/a_n} \tag{11.24}$$
converges and is entire with zeroes at $a_1, \ldots$.

Note that this is an existence theorem, not a uniqueness theorem. It is unique up to a
factor of $\exp(g(z))$ for arbitrary entire $g(z)$.
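A small numerical illustration (our own, not from the notes) of why the exponential convergence factors matter, taking $a_n = n$: the bare partial products keep drifting toward zero, while the regularized ones settle down.

```python
from math import exp, gamma

z = 1.5   # sample point (any non-integer works; our own choice)

def partials(N):
    bare, reg = 1.0, 1.0
    for n in range(1, N + 1):
        bare *= 1 - z / n                   # plain product: drifts to 0 like N^(-z)
        reg *= (1 - z / n) * exp(z / n)     # with convergence factors: converges
    return bare, reg

b1, r1 = partials(10_000)
b2, r2 = partials(20_000)
print(abs(b2) < abs(b1), abs(r2 - r1) < 1e-3)   # bare shrinks; regularized settles
```

(In fact the regularized product converges to $e^{\gamma z}/\Gamma(1-z)$, which follows from the product formula for $1/\Gamma$ developed in the next lecture.)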

Homework 3

Exercise 12. Let $f$, $g$ be two analytic functions in a domain $U$ bounded by a closed curve $\gamma$, continuous in the
closure of $U$, and let $0 < \|g(z)\| < \|f(z)\|$ for all $z \in \gamma$. Prove that the numbers (with multiplicities)
of solutions of the equations $f(z) = g(z)$ and $f(z) = 0$ are the same.

Exercise 13. How many zeroes does $z^6 - 4z^5 + z^2 - 1$ have in the disk $D = \{z \mid \|z\| < 1\}$?

Exercise 14. Show that if $p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_1z + a_0$ then there must be a point $z$ with $\|z\| = 1$ and
$\|p(z)\| \ge 1$.

Exercise 15. Let $g_n(z) = \sum_{k=0}^n \frac{z^k}{k!}$. Let $D(0; R)$ be the disk of radius $R > 0$. Show that for $n$ large enough $g_n$ has no
zeroes in $D(0; R)$.

Exercise 16. Let $\alpha$ be a complex number.
(a) How many values at a generic $z$ does the function $z^\alpha$ $(= e^{\alpha\log z})$ have? (The answer depends
on $\alpha$.)
(b) Describe the Riemann surface of this function.
(c) In terms of $\|z\|$ and $\arg z$ (and standard elementary functions from Real Analysis) list all
the values of $z^i$.

Lecture 12
So recall our theorem:

Theorem 12.1. If $a_1, \ldots$ is a sequence where $a_i \ne 0$ for every $i$, and $\sum\|a_n\|^{-2}$ converges,
then
$$f(z) = \prod_n\left(1 - \frac{z}{a_n}\right)e^{z/a_n} \tag{12.1}$$
converges and is entire with zeroes at $a_1, \ldots$.

We know
$$\sin(z) = z\prod_{\substack{n=-\infty\\ n\ne 0}}^{\infty}\left(1 - \frac{z}{\pi n}\right)e^{z/(\pi n)}. \tag{12.2}$$
We collect terms:
$$\sin(z) = z\prod_{n=1}^\infty\left(1 - \frac{z}{\pi n}\right)\left(1 + \frac{z}{\pi n}\right)e^{z/(\pi n)}e^{-z/(\pi n)} \tag{12.3a}$$
$$= z\prod_{n=1}^\infty\left(1 - \frac{z^2}{\pi^2n^2}\right). \tag{12.3b}$$

If we wrote
$$f(z) = \prod_{n\ne0}\left(1 - \frac{z}{\pi n}\right) \tag{12.4}$$
instead, then the product diverges entirely. This problem is similar to how
$$\sum(-1)^n = \sum 1 + \sum(-1) \tag{12.5}$$
diverges but
$$\sum(-1)^n = \sum(1 - 1) = 0 \tag{12.6}$$
converges (it is a matter of how the terms are grouped). So if we disregard the contribution of
$\exp[z/(\pi n)]$, the series diverges very much as the harmonic series diverges. Consider
$$\sum_{n\ne0}\frac{1}{z+n}, \tag{12.7}$$
which diverges, but
$$\sum_{n\ne0}\left(\frac{1}{z+n} - \frac1n\right) \tag{12.8}$$
converges! But
$$\sum_{n\ne0}\frac1n = 0 \tag{12.9}$$
(formally, pairing $n$ with $-n$), which changes nothing.

12.1 Gamma Function

We will begin with examining the $\Gamma$ function. What do we know about it? Well,
$$\Gamma(n+1) = n! \tag{12.10}$$
and the values $\Gamma(-n)$ are singular (simple poles, really). We also know
$$\Gamma(\mu+1) = \mu\,\Gamma(\mu). \tag{12.11}$$

There is a lot of information on the $\Gamma$ function in Marsden [4].

Consider its definition: the $\Gamma$ function is defined by
$$\Gamma(\mu) = \int_0^\infty x^{\mu-1}e^{-x}\,dx. \tag{12.12}$$

Consider some $x \in \mathbb{R}$, $\mu \in \mathbb{C}$. But are these arbitrary? When will the integral be defined?
Well, let's consider
$$x^{a+ib} = x^a x^{ib} \tag{12.13a}$$
$$= x^a e^{ib\log(x)} \tag{12.13b}$$
$$= x^a e^{i\log(x^b)}. \tag{12.13c}$$
Note that $\|x^{ib}\| = 1$ for $a, b \in \mathbb{R}$. We are not interested in how our integrand behaves as
$x \to \infty$, since $e^{-x}$ wins out (i.e., vanishes faster than $x^{\mu-1}$ explodes). So we are interested
in the behavior of the integrand near zero: the condition is that for $\operatorname{Re}(\mu) > 0$, the integral is
defined.
We need to employ our favorite phrase:
analytic continuation. We see that when
$\mu = 0$, the integral diverges. Similarly for
$\mu = -1, -2, \ldots$, the integral diverges too.
There are poles but no zeroes, and for
this reason $\Gamma$ is not entire; but its reciprocal
$1/\Gamma(z)$ is entire.

We will consider the reciprocal and bring
in the infinite product for the sine. Why?
Well, we know the zeroes of $G(z) = 1/\Gamma(z)$:
they are the negative integers, all of multiplicity one,
so we'll sloppily use $\mathbb{N}$ to index the zeroes. This is
a little bit sloppy, but meh, that's life. Now
we know how to construct the product for
$G(z)$ since we know its zeroes. We construct it explicitly as
$$G(z) = \prod_{n=1}^\infty\left(1 + \frac zn\right)e^{-z/n}; \tag{12.14}$$
observe $G(0) = 1$. We know the relation between $G(z)$ and $\sin(z)$ by the product series.
Indeed, observe
$$\sin(\pi z) = \pi z\,G(z)G(-z). \tag{12.15}$$
Let's see to what extent $\Gamma(\mu+1) = \mu\Gamma(\mu)$ is preserved in $G(z)$; i.e., set $H(z) = G(z-1)$. We see
by inspecting the zeroes of $H(z)$ that
$$H(z) = e^{g(z)}\,z\,G(z) \tag{12.16}$$
for some unknown factor $g(z)$.

Theorem 12.2. The factor $g(z)$ is a constant, denoted as $\gamma$, shorthand for
$$\gamma = \lim_{N\to\infty}\left(\sum_{n=1}^N\frac1n - \log(N)\right). \tag{12.17}$$

This is certainly one definition of $\gamma$. (It was drawn in lecture as the area between the
harmonic-sum staircase and the curve $1/x$.)



Note that numerically,
$$\gamma \approx 0.57721\,56649\,01532\,86060. \tag{12.18}$$
We will now define the $\Gamma$ function as
$$\Gamma(z) = \left[ze^{\gamma z}G(z)\right]^{-1}. \tag{12.19}$$

We need to prove that $g(z) = \gamma$ is constant though. We do the following:
$$\frac{d}{dz}\log(H(z)) = \frac{d}{dz}\log(G(z-1)). \tag{12.20}$$
We observe
$$\log(H(z)) = g(z) + \log(z) + \log(G(z)) \tag{12.21a}$$
$$= g(z) + \log(z) + \sum_{n=1}^\infty\left(\log\left(1 + \frac zn\right) - \frac zn\right). \tag{12.21b}$$
We see
$$\frac{d}{dz}\log(H(z)) = \frac{d}{dz}\left(g(z) + \log(z) + \sum_{n=1}^\infty\left(\log\left(1 + \frac zn\right) - \frac zn\right)\right) \tag{12.22a}$$
$$= g'(z) + \frac1z + \sum_{n=1}^\infty\left(\frac{1}{z+n} - \frac1n\right). \tag{12.22b}$$
Now we consider calculations involving $G(z)$ thus:
$$\log(G(z-1)) = \log\prod_{n=1}^\infty\left(1 + \frac{z-1}{n}\right)e^{(1-z)/n} \tag{12.23a}$$
$$= \sum_{n=1}^\infty\left(\log\left(1 + \frac{z-1}{n}\right) - \frac{z-1}{n}\right) \tag{12.23b}$$
$$\implies \frac{d}{dz}\log(G(z-1)) = \sum_{n=1}^\infty\left(\frac{1}{z+n-1} - \frac1n\right) \tag{12.23c}$$
$$= \frac1z + \sum_{n=1}^\infty\left(\frac{1}{z+n} - \frac1n\right). \tag{12.23d}$$
This implies that
$$g'(z) = 0 \implies g(z) = \text{constant}, \tag{12.24}$$
which concludes the proof of the theorem.
How to find the value of $g(z)$? Take $H(z)$ and set $z = 1$, so
$$H(1) = e^{g(1)}G(1), \tag{12.25}$$
which gives us a new problem: what is $G(1)$? Well,
$$G(1) = \lim_{N\to\infty}\prod_{n=1}^N\left(1 + \frac1n\right)e^{-1/n}. \tag{12.26}$$
Observe that
$$\prod_{n=1}^N\left(1 + \frac1n\right) = N + 1, \tag{12.27}$$
which means
$$G(1) = \lim_{N\to\infty}(N+1)\,e^{-\sum_{n=1}^N(1/n)}. \tag{12.28}$$
So
$$\log(G(1)) = \lim_{N\to\infty}\left(\log(N+1) - \sum_{n=1}^N\frac1n\right) \tag{12.29a}$$
$$= -\gamma. \tag{12.29b}$$
This concludes this lecture.
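Both limits here are easy to approximate numerically (a sketch of our own, not from the lecture):

```python
from math import log, exp

N = 10**6
H = sum(1.0 / n for n in range(1, N + 1))   # harmonic number H_N
g = H - log(N)            # tends to Euler's gamma = 0.57721 56649... as N grows
G1 = (N + 1) * exp(-H)    # partial product for G(1), tending to e^{-gamma}
print(g, G1)
```

The partial sums approach $\gamma$ only at rate $\approx 1/(2N)$, which is why a large $N$ is needed for even a handful of digits.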

Lecture 13
We will continue on the $\Gamma$ function. Remember Stirling's approximation
$$n! \approx \sqrt{2\pi n}\left(\frac ne\right)^n. \tag{13.1}$$
We see that $5! = 120$ but
$$\sqrt{10\pi}\,(5/e)^5 \approx 118.019168, \tag{13.2}$$
which is reasonably accurate. There are more accurate formulas approximating $n!$, though.
We saw that
$$\sin(\pi z) = \pi z\,G(z)G(-z). \tag{13.3}$$
We should regard $G(z)$ as the material for building the $\Gamma$ function. Observe that
$$G(z-1) = e^{\gamma}zG(z), \tag{13.4}$$
where $\gamma \approx 0.57721$ is Euler's constant. We replace $z \mapsto z+1$, which gives us
$$G(z) = (z+1)e^{\gamma}G(z+1) \implies G(z+1) = (z+1)^{-1}e^{-\gamma}G(z). \tag{13.5}$$
This gives us our definition
$$\Gamma(z) = \left(ze^{\gamma z}G(z)\right)^{-1}. \tag{13.6}$$
This is defined for all values of $z$ (away from the poles). We see first of all that
$$\Gamma(1) = \left(1\cdot e^{\gamma}G(1)\right)^{-1}, \tag{13.7}$$
where
$$G(1) = e^{-\gamma}G(0) = e^{-\gamma}; \tag{13.8}$$
together both imply
$$\Gamma(1) = \left(1\cdot e^{\gamma}e^{-\gamma}\right)^{-1} = 1. \tag{13.9}$$
We can immediately see that
$$\Gamma(z+1) = \left((z+1)e^{\gamma(z+1)}G(z+1)\right)^{-1} \tag{13.10a}$$
$$= \left((z+1)e^{\gamma(z+1)}e^{-\gamma}(z+1)^{-1}G(z)\right)^{-1} \tag{13.10b}$$
$$= \left(e^{\gamma z}G(z)\right)^{-1} \tag{13.10c}$$
$$= z\left(ze^{\gamma z}G(z)\right)^{-1} \tag{13.10d}$$
$$= z\,\Gamma(z). \tag{13.10e}$$

Consider the formula
$$\sin(\pi z) = \pi z\,G(z)G(-z). \tag{13.11}$$

We want to use Γ(z) instead of G(z), which permits us to see that


sin(πz) = π(zeγz G(z))(−ze−γz G(−z)) (13.12)
This allows us to calculate
Γ(z)Γ(1 − z) = [(zeγz G(z))((1 − z)eγ(1−z) G(1 − z))]−1 (13.13a)
γz −γz γ −γ −1 −1
= [ze e G(z)e (1 − z)(e (1 − z) G(−z))] (13.13b)
−1 −1
= [zG(z)G(−z)] = [sin(πz)/π] (13.13c)
π
= (13.13d)
sin(πz)
We find by accident a cute relation

Γ(1/2)2 = π =⇒ Γ(1/2) = π. (13.14)
So what? Well, we have
$$\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}, \tag{13.15}$$
which we get from all our information about $\Gamma(1/2)$, as well as exotic numbers like $e$ and $\pi$.
How is this even possible? Well, we see
$$\Gamma(z) = \int_0^\infty t^{z-1}e^{-t}\,dt, \tag{13.16}$$
so we see
$$\Gamma(1/2) = \int_0^\infty t^{-1/2}e^{-t}\,dt \tag{13.17a}$$
$$= 2\int_0^\infty e^{-u^2}\,du \tag{13.17b}$$
by choosing $u^2 = t$. Now we can deduce the Gamma function's action on half-integral values:
$$\Gamma(3/2) = \Gamma(1 + \tfrac12) = \frac{\Gamma(1/2)}{2} = \frac{\sqrt{\pi}}{2}, \tag{13.18}$$
and so on.
We are now interested in this integral $\int t^{z-1}e^{-t}\,dt$. Most textbooks insist on integrating
by parts, but we want to compute it. We will rewrite it as
$$\int_0^\infty t^{z-1}e^{-t}\,dt = \lim_{N\to\infty}\int_0^N t^{z-1}\left(1 - \frac tN\right)^N dt. \tag{13.19}$$
We just substitute in the limit definition of the exponential. But we will change variables
$t = Ns$ so we may write
$$\lim_{N\to\infty}\int_0^N t^{z-1}\left(1 - \frac tN\right)^N dt = \lim_{N\to\infty}\int_0^1(Ns)^{z-1}(1-s)^N N\,ds = \lim_{N\to\infty}N^z\int_0^1 s^{z-1}(1-s)^N\,ds. \tag{13.20}$$
We will now perform integration by parts. We do it once to find
$$\lim_{N\to\infty}N^z\int_0^1 s^{z-1}(1-s)^N ds = \lim_{N\to\infty}N^z\left(\underbrace{\left[\frac{s^z}{z}(1-s)^N\right]_0^1}_{=0} + \frac Nz\int_0^1(1-s)^{N-1}s^z\,ds\right) \tag{13.21}$$
and again to find
$$\lim_{N\to\infty}N^z\int_0^1 s^{z-1}(1-s)^N ds = \lim_{N\to\infty}N^z\,\frac{N(N-1)}{z(z+1)}\int_0^1(1-s)^{N-2}s^{z+1}\,ds. \tag{13.22}$$
After $N$ integrations by parts, the repeated derivatives of $(1-s)^N$ contribute the factor
$N(N-1)\cdots1 = N!$, the powers of $s$ contribute $z(z+1)\cdots(z+N-1)$ in the denominator,
and we are left with
$$\int_0^1 s^{z+N-1}\,ds = \frac{1}{z+N}. \tag{13.23}$$
We therefore claim that
$$\Gamma(z) = \lim_{N\to\infty}\frac{N!\,N^z}{z(z+1)\cdots(z+N)}. \tag{13.24}$$

Let us prove this.
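Before the proof, the claim (13.24) can be tested against a library Γ (our own sketch, not from the notes):

```python
from math import gamma

def gamma_limit(z, N):
    # Euler's limit: N! N^z / (z (z+1) ... (z+N)), multiplied factor by factor
    val = N**z / z
    for k in range(1, N + 1):
        val *= k / (z + k)
    return val

for z in (0.5, 3.3):
    print(z, abs(gamma_limit(z, 200_000) - gamma(z)))  # small for large N
```

The convergence is slow (the relative error decays roughly like $1/N$), which is why $N$ here is large.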


Consider
$$\frac{1}{\Gamma(z)} = zG(z)e^{\gamma z}; \tag{13.25a}$$
so we substitute in the definitions of $\gamma$ and $G(z)$:
$$= \lim_{N\to\infty}z\left[\prod_{n=1}^N\left(1 + \frac zn\right)e^{-z/n}\right]\exp\left[\sum_{n=1}^N\frac zn - z\ln(N)\right]; \tag{13.25b}$$
we gather terms:
$$= \lim_{N\to\infty}z\,e^{-\ln(N)z}\prod_{n=1}^N\left(1 + \frac zn\right); \tag{13.25c}$$
then by using the law of logarithms to simplify:
$$= \lim_{N\to\infty}z\,N^{-z}\prod_{n=1}^N\frac{n+z}{n}; \tag{13.25d}$$
and then expanding out the product yields
$$= \lim_{N\to\infty}z\,N^{-z}\,\frac{(z+1)(z+2)\cdots(z+N)}{N!} \tag{13.25e}$$
$$= \lim_{N\to\infty}\frac{z(z+1)(z+2)\cdots(z+N)}{N^z\,N!}. \tag{13.25f}$$
Thus we are done!
Now for Stirling's approximation, we have
$$n! \sim \sqrt{2\pi n}\,(n/e)^n, \tag{13.26}$$
where
$$a_n \sim b_n \quad\text{means}\quad \lim_{n\to\infty}\frac{a_n}{b_n} = 1. \tag{13.27}$$
We can really write
$$n! = \sqrt{2\pi n}\,(n/e)^n\left(1 + \frac{1}{12n} + \frac{1}{288n^2} + \cdots\right) \tag{13.28}$$
if we want to use equality instead of an equivalence relation.

Homework 4

Exercise 17. Show that
$$\prod_{n=2}^\infty\left(1 - \frac{1}{n^2}\right) = \frac12.$$
Relate this to the infinite product formula for sine.

Exercise 18. Show that $\displaystyle\prod_{n=2}^\infty\left(1 - \frac{2}{n^3+1}\right) = \frac23$.

Exercise 19. It is obvious that if $x$ is real (and not a pole of $\Gamma$), then $\Gamma(x)$ is real. What is the set $\{x \in \mathbb{R} \mid \Gamma(x) > 0\}$? (The answer can be seen on the picture on page 415, but a proof is requested.)

Exercise 20. Find explicitly $\displaystyle\prod_{n=1}^\infty\left(1 + \frac{1}{n^2}\right)$.

Lecture 14
There are several things about the $\Gamma$ function which we can still discuss. There are two
properties of interest: computation of its residues, and Gauss' formula.

Gauss' formula says $\Gamma(z)\Gamma(z+\frac1n)\cdots\Gamma(z+\frac{n-1}{n})$ is related to $\Gamma(nz)$. To guess the
Gauss formula, choose $z = m \in \mathbb{N}$. So we want to relate $\Gamma(m)\Gamma(m+\frac1n)\cdots\Gamma(m+\frac{n-1}{n})$. To
begin with, $n = 1$ is uninteresting. So let's begin with $n = 2$; we have
$$\Gamma(m)\Gamma\left(m+\frac12\right) = (m-1)!\cdot\Gamma(1/2)\,\frac12\cdot\frac32\cdots\frac{2m-1}{2} \tag{14.1a}$$
$$= \Gamma(1/2)\,\frac{(2m-1)!}{2^{2m-1}} \tag{14.1b}$$
$$= \frac{(2m-1)!\,\sqrt{\pi}}{2^{2m-1}}. \tag{14.1c}$$
Now for $n = 3$ what do we have? Well, we find
$$\Gamma(m)\Gamma\left(m+\frac13\right)\Gamma\left(m+\frac23\right) = \Gamma(1/3)\Gamma(2/3)\,\frac{(3m-1)!}{3^{3m-1}} \tag{14.2a}$$
$$= \frac{(3m-1)!}{3^{3m-1}}\cdot\frac{2\pi}{\sqrt3}. \tag{14.2b}$$
We see that for some general $n$,
$$\Gamma(m)\Gamma\left(m+\frac1n\right)\cdots\Gamma\left(m+\frac{n-1}{n}\right) = \Gamma(1/n)\Gamma(2/n)\cdots\Gamma([n-1]/n)\,\frac{(mn-1)!}{n^{mn-1}}. \tag{14.3}$$

We can compute the product of $\Gamma$ terms by first squaring it and rearranging terms to read:
$$\left(\Gamma(1/n)\cdots\Gamma([n-1]/n)\right)^2 = \Gamma\left(\frac1n\right)\Gamma\left(\frac{n-1}{n}\right)\Gamma\left(\frac2n\right)\Gamma\left(\frac{n-2}{n}\right)\cdots\Gamma\left(\frac{n-1}{n}\right)\Gamma\left(\frac1n\right)$$
$$= \frac{\pi^{n-1}}{\sin(\pi/n)\sin(2\pi/n)\cdots\sin([n-1]\pi/n)}. \tag{14.4}$$
So we find
$$\sin(\pi/n)\cdots\sin\left(\frac{(n-1)\pi}{n}\right) = \frac{n}{2^{n-1}}; \tag{14.5}$$
we can rewrite this as
$$\Gamma(m)\Gamma\left(m+\frac1n\right)\cdots\Gamma\left(m+\frac{n-1}{n}\right) = \frac{\Gamma(mn)}{n^{mn-1}}\cdot\frac{\pi^{(n-1)/2}\,2^{(n-1)/2}}{\sqrt n}. \tag{14.6}$$
We then replace $m \to z$ and that is Gauss' formula.
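Gauss' formula in the final form (14.6), with $m$ replaced by a non-integer $z$, checks out numerically (a sketch of our own):

```python
from math import gamma, pi, sqrt, prod

def lhs(z, n):
    # Gamma(z) Gamma(z + 1/n) ... Gamma(z + (n-1)/n)
    return prod(gamma(z + k / n) for k in range(n))

def rhs(z, n):
    # Gamma(nz) / n^{nz-1} * pi^{(n-1)/2} 2^{(n-1)/2} / sqrt(n)
    return gamma(n * z) / n**(n * z - 1) * (2 * pi)**((n - 1) / 2) / sqrt(n)

for n in (2, 3, 5):
    print(n, abs(lhs(0.7, n) / rhs(0.7, n) - 1) < 1e-10)
```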


Now to compute the residues, choose $-m$ where $m = 0, 1, 2, \ldots$; the residue is
$$\lim_{z\to-m}(z+m)\Gamma(z) = \lim_{z\to-m}(z+m)\frac{\Gamma(z+1)}{z} \tag{14.7a}$$
$$= \lim_{z\to-m}(z+m)\frac{\Gamma(z+2)}{z(z+1)} \tag{14.7b}$$
$$= \lim_{z\to-m}(z+m)\frac{\Gamma(z+m+1)}{z(z+1)\cdots(z+m)} \tag{14.7c}$$
$$= \lim_{z\to-m}\frac{\Gamma(z+m+1)}{z(z+1)\cdots(z+m-1)} \tag{14.7d}$$
$$= \frac{(-1)^m}{m!}. \tag{14.7e}$$
This is the residue of the $\Gamma$ function at $z = -m$.

14.1 Asymptotics
Asymptotics are different than approximations; we will begin with asymptotics of
factorials. Consider $N!$ for some large $N$: the number of decimal digits of the number is more or less
$\log_{10}(N!)$. Let's compare $n!$ to $n^n$; we take
$$\frac{n!}{n^n} = \frac1n\cdot\frac2n\cdots\frac nn. \tag{14.8}$$
We take the logarithm of this to make the product into a sum:
$$\ln\left(\frac{n!}{n^n}\right) = \sum_{k=1}^n\ln(k/n), \tag{14.9}$$
but this sum is not well behaved. To remedy the situation, we divide through by $n$:
$$\frac1n\ln(n!/n^n) = \frac1n\left(\ln\left(\frac1n\right) + \cdots + \ln\left(\frac nn\right)\right) \tag{14.10a}$$
$$\approx \int_0^1\ln(x)\,dx = \lim_{\varepsilon\to0}\Big[x\ln(x) - x\Big]_\varepsilon^1 \tag{14.10b}$$
$$= (0 - 1) - (0 - 0) = -1. \tag{14.10c}$$
We see that
$$\ln\left(\sqrt[n]{\frac{n!}{n^n}}\right) \approx -1, \tag{14.11}$$
so we exponentiate both sides:
$$\sqrt[n]{\frac{n!}{n^n}} \approx e^{-1}, \tag{14.12}$$
then we raise both sides to the $n$th power:
$$\frac{n!}{n^n} \approx e^{-n} \implies n! \approx \left(\frac ne\right)^n, \tag{14.13}$$
which is a very rough asymptotic formula, but it was the first asymptotic formula for the
factorial.
There are more precise asymptotic formulas, of which Stirling's is the most famous.
It states
$$n! \sim \sqrt{2\pi n}\left(\frac ne\right)^n. \tag{14.14}$$
What about the relative error of using $\sqrt{2\pi n}\,(n/e)^n$ instead of $n!$, that is, the error in terms
of percents? Consider a more precise form of Stirling's approximation:
$$n! \sim \sqrt{2\pi n}\left(1 + \frac{1}{12n}\right)\left(\frac ne\right)^n. \tag{14.15}$$
There is in fact an infinite series; the next asymptotic would be
$$n! \sim \sqrt{2\pi n}\left(1 + \frac{1}{12n} + \frac{1}{288n^2}\right)\left(\frac ne\right)^n \tag{14.16}$$
and so on.
Consider $n = 10$; then
$$10! = 3\,628\,800 \tag{14.17a}$$
$$\sqrt{2\pi\cdot10}\left(\frac{10}{e}\right)^{10} \approx 3\,598\,695 \tag{14.17b}$$
and
$$\sqrt{2\pi\cdot10}\left(1 + \frac{1}{120}\right)\left(\frac{10}{e}\right)^{10} \approx 3\,628\,685, \tag{14.17c}$$
and the next expansion would be precise to one more digit, probably.
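The numbers in (14.17) are easy to reproduce (our own sketch):

```python
from math import factorial, pi, sqrt, e

n = 10
exact = factorial(n)                          # 3 628 800
s0 = sqrt(2 * pi * n) * (n / e)**n            # ~ 3 598 695
s1 = s0 * (1 + 1 / (12 * n))                  # ~ 3 628 685
s2 = s0 * (1 + 1 / (12 * n) + 1 / (288 * n**2))
print(round(s0), round(s1), round(s2))
# each correction shrinks the relative error by roughly another factor of 1/n
print(abs(s2 - exact) < abs(s1 - exact) < abs(s0 - exact))
```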


Consider the asymptotics for the number of primes less than $n$. This is a special function
denoted
$$\pi(n) = \text{number of primes less than } n. \tag{14.18}$$
We have two estimates: the elementary estimate
$$\pi(n) \sim \frac{n}{\log(n)} \tag{14.19}$$
and the logarithmic integral function ("li's integral")
$$\pi(n) \sim \int_2^n\frac{dt}{\ln(t)}. \tag{14.20}$$
The latter is better. Consider $n = 10^9$; what happens? Well, we see (truncating to integer
values) that
$$n = 10^9 \tag{14.21a}$$
$$\pi(n) = 50\,847\,534 \tag{14.21b}$$
$$\frac{n}{\ln(n)} \approx 48\,254\,942 \tag{14.21c}$$
$$\int_2^n\frac{dt}{\ln(t)} \approx 50\,849\,235. \tag{14.21d}$$
Can we say anything about the error? Yes and no: yes because yes, and no because no.
Most theorems about the error depend on the Riemann hypothesis being true.

If the Riemann hypothesis is true, then
$$|\operatorname{Li}(n) - \pi(n)| < \frac{\sqrt n\,\log(n)}{8\pi}, \tag{14.22}$$
where
$$\operatorname{Li}(n) := \int_2^n\frac{dt}{\log(t)}. \tag{14.23}$$
The difference $\operatorname{Li}(n) - \pi(n)$ changes sign infinitely many times (John Littlewood proved this
fact in 1914); the first sign change is at most around $10^{349}$ (although a more recent estimate puts this around
$10^{316}$).

The partition function $p(n)$ counts the number of ways to write $n$ as a sum of positive
numbers. So for example,
$$p(4) = 5 \tag{14.24}$$
since
$$4,\quad 3+1,\quad 2+2,\quad 2+1+1,\quad 1+1+1+1 \tag{14.25}$$
are the five distinct sums. We have an asymptotic formula for it (the margin cites Rademacher's formula):
$$p(n) \sim \frac{1}{4n\sqrt3}\,e^{2\pi\sqrt{n/6}}. \tag{14.26}$$
This grows faster than any polynomial, but slower than any exponential.
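We can compare the exact $p(n)$, via Euler's pentagonal-number recurrence, with the asymptotic just quoted (a sketch of our own, not from the lecture):

```python
from math import exp, pi, sqrt

def partitions(N):
    """p(0..N) by Euler's pentagonal number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2     # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(200)
print(p[4])                                         # 5, matching the list above
approx = exp(2 * pi * sqrt(200 / 6)) / (4 * 200 * sqrt(3))
print(p[200], approx / p[200])   # leading asymptotic is within a few percent here
```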

Lecture 15
A function (for us a meromorphic function of a complex variable) may be written
asymptotically as
$$f(z) \sim a_0 + \frac{a_1}{z} + \frac{a_2}{z^2} + \frac{a_3}{z^3} + \cdots \tag{15.1}$$
Our example from last time was
$$\Gamma(z+1) \sim \sqrt{2\pi z}\left(\frac ze\right)^z\left(1 + \frac{1}{12z} + \cdots\right). \tag{15.2}$$
First, the series diverges in many important cases; and second, the behavior at the poles is
misleading.

We make our first assumption: $\Gamma(z) \sim \sqrt{2\pi z}(\ldots)$ holds NOT for all
values of $z$ but for a sector. That is, we confine the argument of $z$ to
satisfy
$$\alpha \le \arg(z) \le \beta. \tag{15.3}$$
We then commit our second assumption, that we deal with a finite sum
$$S_N = a_0 + \frac{a_1}{z} + \cdots + \frac{a_N}{z^N}; \tag{15.4}$$
we demand that
$$\lim_{\|z\|\to\infty}\|z\|^N\cdot\|f(z) - S_N\| = 0. \tag{15.5}$$
So for each $\varepsilon > 0$ there exists an $R_N > 0$ such that $\alpha \le \arg(z) \le \beta$ and $\|z\| \ge R_N$ implies
$$\|f(z) - S_N\| < \frac{\varepsilon}{\|z\|^N}. \tag{15.6}$$

Within a portion of a sector the asymptotic formula is very good.
Outside of it, it becomes a bad approximation. (This was doodled in lecture,
with a gray region marking where $S_N$ is very good.)
Consider an example:
$$f(x) = \int_x^\infty t^{-1}e^{x-t}\,dt; \tag{15.7}$$
now consider the formula
$$f(x) \sim \frac1x - \frac{1}{x^2} + \frac{2!}{x^3} - \frac{3!}{x^4} + \frac{4!}{x^5} - \cdots \tag{15.8}$$
This is the canonical example of a series with radius of convergence zero.
We will now show how to relate the two, or more precisely how to go from the integral,
Eq (15.7), to the sum, Eq (15.8). We write
$$f(x) = \int_x^\infty t^{-1}e^{x-t}\,dt \tag{15.9}$$
and integrate by parts letting
$$u = t^{-1} \quad\text{and}\quad dv = e^{x-t}\,dt; \tag{15.10}$$
thus
$$f(x) = \left[-t^{-1}e^{x-t}\right]_x^\infty - \int_x^\infty t^{-2}e^{x-t}\,dt = x^{-1}e^{x-x} - \int_x^\infty t^{-2}e^{x-t}\,dt. \tag{15.11}$$
So we can simplify:
$$f(x) = \frac1x - \int_x^\infty t^{-2}e^{x-t}\,dt, \tag{15.12}$$
and by iterating we obtain
$$f(x) = \frac1x - \frac{1}{x^2} + \frac{2!}{x^3} - \frac{3!}{x^4} + \cdots + \frac{(-1)^{n-1}(n-1)!}{x^n} + \underbrace{(-1)^n n!\int_x^\infty t^{-(n+1)}e^{x-t}\,dt}_{\text{remainder term}}. \tag{15.13}$$
We will not show it, but writing $t/x$ instead of $t$, we get the remainder term bounded
by $n!/x^{n+1}$. So if $x$ is really big, written
$$x \gg n!, \tag{15.14}$$
we have a fairly good approximation within a sector. So the more terms in this asymptotic
series, the smaller the domain where the approximation is good.
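This divergent-but-useful behavior is visible numerically (our own sketch; the integral is evaluated by an ordinary trapezoid rule after substituting $t = x + u$): for fixed $x$ the error of $S_n$ first shrinks, then the factorials take over.

```python
from math import exp, factorial

def f(x, umax=60.0, steps=400_000):
    # f(x) = int_0^inf e^{-u}/(x+u) du  (substituting t = x + u), trapezoid rule
    h = umax / steps
    total = 0.5 * (1.0 / x + exp(-umax) / (x + umax))
    for i in range(1, steps):
        u = i * h
        total += exp(-u) / (x + u)
    return total * h

def S(x, n):
    return sum((-1) ** (k - 1) * factorial(k - 1) / x**k for k in range(1, n + 1))

x = 5.0
fx = f(x)
errs = [abs(fx - S(x, n)) for n in range(1, 13)]
best = min(range(12), key=errs.__getitem__) + 1
print(best, errs[best - 1], errs[-1])   # smallest error near n = x, then growth
```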
We have two things left to discuss today. First, the method of steepest descent. This
may be considered a generalization of Stirling's formula.

Remark 15.1. We wrote $f(x) \sim a_0 + \frac{a_1}{x} + \cdots$; we should write more generally that
$f(x) \sim g(x)(a_0 + \cdots)$, so we consider
$$\lim_{x\to\infty}\frac{f(x)}{g(x)} = a_0, \tag{15.15}$$
and this first term usually contains a lot of information.
The following has nothing to do with complex analysis: it is absolutely real.

We have some function $h: (0,\infty) \to \mathbb{R}$; we will consider
$$f(x) = \int_0^\infty e^{xh(t)}\,dt. \tag{15.16}$$
We suppose it converges. Our conditions are that $h(t)$ has a maximum at $t_0$:
$h'(t_0) = 0$ and $h''(t_0) < 0$. (A plot of $h(t)$, with maximum at $t_0$, accompanied this.)
So then we find
$$f(x) \sim \frac{e^{xh(t_0)}}{\sqrt x}\cdot\frac{\sqrt{2\pi}}{\sqrt{-h''(t_0)}}. \tag{15.17}$$
This generalizes Stirling's formula. How? Well, observe
$$\Gamma(x+1) = \int_0^\infty t^x e^{-t}\,dt; \tag{15.18a}$$
change variables $xu = t$, so $x\,du = dt$; thus
$$= \int_0^\infty(xu)^x e^{-xu}\,x\,du \tag{15.18b}$$
$$= x^{x+1}\int_0^\infty u^x e^{-xu}\,du \tag{15.18c}$$
$$= x^{x+1}\int_0^\infty\exp\big(x(\ln(u) - u)\big)\,du, \tag{15.18d}$$
where we let $h(u) := \ln(u) - u$. We end up with
$$\frac{\Gamma(x+1)}{x^{x+1}} = \int_0^\infty e^{x(\ln(u)-u)}\,du \sim \frac{e^{-x}}{\sqrt x}\cdot\frac{\sqrt{2\pi}}{\sqrt1} \tag{15.19}$$
(here $h$ has its maximum at $u_0 = 1$, with $h(1) = -1$ and $h''(1) = -1$); thus we get
Stirling's formula back entirely: $\Gamma(x+1) \sim \sqrt{2\pi x}\,(x/e)^x$.
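The Laplace estimate (15.17) can be checked directly for this very $h(u) = \ln(u) - u$ (our own sketch; the integral is done with a plain rectangle rule):

```python
from math import log, exp, sqrt, pi

def laplace_integral(x, umax=30.0, steps=200_000):
    # int_0^inf e^{x(ln u - u)} du; the integrand vanishes at both endpoints
    h = umax / steps
    return sum(exp(x * (log(i * h) - i * h)) for i in range(1, steps)) * h

ratios = []
for x in (10.0, 40.0):
    predicted = exp(-x) * sqrt(2 * pi / x)   # e^{x h(1)} sqrt(2 pi / (x |h''(1)|))
    ratios.append(laplace_integral(x) / predicted)
print(ratios)   # both near 1; larger x is closer (next correction is 1/(12x))
```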

Lecture 16
We consider the function
$$f(x) = \int_0^\infty e^{xh(\zeta)}\,d\zeta \tag{16.1}$$
and we want to consider it for big values of $x$. We see that for any fixed $\zeta \ne \zeta_0$,
$$\|h(\zeta_0) - h(\zeta)\| = \alpha > 0, \tag{16.2}$$
and since $\zeta_0$ is the location of the maximum of $h$ we see
$$h(\zeta) - h(\zeta_0) = -\alpha. \tag{16.3}$$
Thus we have
$$e^{xh(\zeta)} = e^{xh(\zeta_0)}e^{-x\alpha}, \tag{16.4}$$
which is negligible compared with $e^{xh(\zeta_0)}$ for big $x$. We see that
$$f(x) = \int_0^\infty e^{xh(\zeta)}\,d\zeta \underset{\exp}{\sim} \int_{\zeta_0-\varepsilon}^{\zeta_0+\varepsilon}e^{xh(\zeta)}\,d\zeta, \tag{16.5}$$
which is an exponential equivalence, not an "asymptotically behaves as". The "exponential
equivalence" of $f$ and $g$ means that they have the same asymptotics; it is considered to
be "very strong".

Notation. We will indicate that $f$ is exponentially equivalent to $g$ by the notation
$$f \underset{\exp}{\sim} g. \tag{16.6}$$
It is odd notation, but it is our odd notation!


Now, we have assumed that $h$ has an extremum at $\zeta_0$, so let us Taylor expand about $\zeta_0$:
$$h(\zeta) = h(\zeta_0) + \frac12 h''(\zeta_0)\cdot(\zeta - \zeta_0)^2 + \cdots, \tag{16.7}$$
from just our simple Taylor expansion. We then have
$$f \underset{\exp}{\sim} \int_{\zeta_0-\varepsilon}^{\zeta_0+\varepsilon}e^{xh(\zeta)}\,d\zeta \tag{16.8a}$$
as we have discussed, so we change variables $\zeta = \zeta_0 + z$:
$$\int_{\zeta_0-\varepsilon}^{\zeta_0+\varepsilon}e^{xh(\zeta)}\,d\zeta = \int_{-\varepsilon}^{\varepsilon}e^{xh(\zeta_0+z)}\,dz \tag{16.8b}$$
and then plug in the Taylor expansion of $h$ about $\zeta_0$ (up to second order; additional orders
would be negligible):
$$\int_{-\varepsilon}^{\varepsilon}e^{xh(\zeta_0+z)}\,dz \sim \int_{-\varepsilon}^{\varepsilon}e^{x[h(\zeta_0)+h''(\zeta_0)z^2/2]}\,dz \tag{16.8c}$$
and we observe that
$$\int_{-\varepsilon}^{\varepsilon}e^{x[h(\zeta_0)+h''(\zeta_0)z^2/2]}\,dz = e^{xh(\zeta_0)}\int_{-\varepsilon}^{\varepsilon}e^{xh''(\zeta_0)z^2/2}\,dz \tag{16.8d}$$
$$= e^{xh(\zeta_0)}\,\sqrt{\pi}\Big/\sqrt{-xh''(\zeta_0)/2}. \tag{16.8e}$$
Thus we conclude
$$f(x) \sim \frac{e^{xh(\zeta_0)}\sqrt{2\pi}}{\sqrt x\,\sqrt{-h''(\zeta_0)}} \tag{16.9}$$
as the asymptotic behavior for $f(x)$.
This steepest descent method holds for $f(z)$ provided that $z$ lies in
a sector $\alpha \le \arg(z) \le \beta$ of $\mathbb{C}$ (the shaded region of the lecture's diagram).
Now we should remember
$$h(\zeta) = h(\zeta_0) + \underbrace{\tfrac12 h''(\zeta_0)(\zeta - \zeta_0)^2 + \cdots}_{=-\omega(z)^2}\,. \tag{16.10}$$
In a reasonable neighborhood of $0$, $\omega(z)$ is invertible. We have from our
computations
$$f(x) \underset{\exp}{\sim} \int_{-\varepsilon}^{\varepsilon}e^{xh(\zeta_0+z)}\,dz \tag{16.11a}$$
$$= e^{xh(\zeta_0)}\int_{-\varepsilon}^{\varepsilon}e^{-\omega^2x}\,dz, \tag{16.11b}$$
where we have used the fact that $\omega(z)$ is invertible in a neighborhood of $0$, so we get $z = z(\omega)$
and $dz = z'(\omega)\,d\omega$; thus
$$f(x) \sim e^{xh(\zeta_0)}\int_{\omega(-\varepsilon)}^{\omega(\varepsilon)}e^{-\omega^2x}\,z'(\omega)\,d\omega. \tag{16.11c}$$

Now we suppose that we can power-series expand $z(\omega) = \sum a_n\omega^n$, and thus we obtain
$$f(x) \sim e^{xh(\zeta_0)}\int_{\omega(-\varepsilon)}^{\omega(\varepsilon)}e^{-\omega^2x}\,z'(\omega)\,d\omega = e^{xh(\zeta_0)}\sum_{n=1}^\infty a_n n\int_{\omega(-\varepsilon)}^{\omega(\varepsilon)}\omega^{n-1}e^{-\omega^2x}\,d\omega \tag{16.11d}$$
$$\sim e^{xh(\zeta_0)}\sum_{n=1}^\infty a_n n\int_{-\infty}^{\infty}\omega^{n-1}e^{-\omega^2x}\,d\omega, \tag{16.11e}$$
and only the even integrands survive, so we get
$$= e^{xh(\zeta_0)}\sum_{n=1}^\infty a_{2n-1}(2n-1)\int_{-\infty}^{\infty}\omega^{(2n-1)-1}e^{-\omega^2x}\,d\omega. \tag{16.11f}$$

We recall that
$$\int_{-\infty}^{\infty}t^ne^{-t^2}\,dt = \underbrace{\left[\frac{t^{n+1}}{n+1}e^{-t^2}\right]_{-\infty}^{+\infty}}_{=0} + \frac{2}{n+1}\int_{-\infty}^{\infty}e^{-t^2}\,t^{n+2}\,dt \tag{16.12}$$
and thus inductively we find
$$\int_{-\infty}^{\infty}t^{2m}e^{-t^2}\,dt = \frac{(2m)!\,\sqrt\pi}{m!\,2^{2m}}. \tag{16.13}$$
When we plug this result into the integral (rescaling $t = \omega\sqrt x$), we find precisely
$$f(x) \sim e^{xh(\zeta_0)}\sum_{m=0}^\infty a_{2m+1}(2m+1)\,\frac{(2m)!\,\sqrt\pi}{m!\,2^{2m}}\,x^{-m-1/2}. \tag{16.14}$$
Note that this method is useful when considering the classical limit in path integral quantization.

Homework 5
x EXERCISE 21
How many digits (in the standard decimal presentation) does the number 1000! have? Find as many of its leading digits as you can.
x EXERCISE 22
Prove that for any real a, b, |Γ(a + bi)| ≤ |Γ(a)|.
x EXERCISE 23
Prove that $\Gamma(x) = \dfrac{1}{x}\displaystyle\int_0^{\infty} e^{-t^{1/x}}\,dt$. (This is a generalization of the equality $\Gamma\!\left(\tfrac12\right) = 2\displaystyle\int_0^{\infty} e^{-t^2}\,dt$.)

x EXERCISE 24
Let $f(x) = \displaystyle\int_x^{\infty} \frac{e^{x-t}}{t}\,dt$ and $S_n(x) = \dfrac{1}{x} - \dfrac{1}{x^2} + \dots + \dfrac{(-1)^{n-1}(n-1)!}{x^n}$. The "relative error" of the approximation $f(x)\approx S_n(x)$ is $\dfrac{f(x)-S_n(x)}{f(x)}$.
(a) Prove that there exists a sequence εₙ, limₙ→∞ εₙ = 0, such that if x > n, then the relative error of the approximation f(x) ≈ Sₙ(x) does not exceed εₙ.
(b) Is it true (or likely to be true) that the inequality x > n in the previous statement can be replaced by x > n?

Lecture 17
Last time, we wrote the formula

$$f(x) = \int_0^{\infty} e^{xh(\zeta)}\,d\zeta \tag{17.1}$$

where h(ζ₀) is a maximum and h″(ζ₀) < 0. We know from last time that

$$f(x) \sim e^{xh(\zeta_0)}\sqrt{\frac{\pi}{x}}\left(a_1 + \frac{1\cdot 3}{2x}\,a_3 + \frac{1\cdot 3\cdot 5}{(2x)^2}\,a_5 + \dots\right) \tag{17.2}$$

We wrote

$$h(\zeta) = h(\zeta_0) - \omega(\zeta)^2 \tag{17.3}$$

so

$$\omega(\zeta) = \sqrt{h(\zeta_0) - h(\zeta)} \ge 0 \tag{17.4}$$

since h(ζ₀) is the maximum of h. We see that ω(ζ) is smooth, so we can invert it and express ζ as a function of ω:

$$\zeta(\omega) = \zeta_0 + a_1\omega + a_2\omega^2 + \dots \tag{17.5}$$

Certainly this could be applied to the Γ function, which is very famous.
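As a sanity check (not part of the lecture), we can apply Eq. (16.9) to the Γ function numerically. Writing Γ(x+1) = ∫₀^∞ e^{x ln t − t} dt and substituting t = xu gives Γ(x+1) = x^{x+1}∫₀^∞ e^{x(ln u − u)} du, so h(u) = ln u − u has its maximum at u = 1 with h(1) = −1 and h″(1) = −1; Eq. (16.9) then reproduces Stirling's formula. A minimal sketch:

```python
import math

def gamma_laplace(x):
    # Leading-order steepest descent (Eq. 16.9) applied to
    # Gamma(x+1) = x^(x+1) * integral of exp(x*(log u - u)) du:
    # h(u) = log u - u has its maximum at u = 1, h(1) = -1, h''(1) = -1.
    h0, hpp = -1.0, -1.0
    return x ** (x + 1) * math.exp(x * h0) * math.sqrt(2 * math.pi) / math.sqrt(-x * hpp)

x = 50.0
exact = math.gamma(x + 1)     # Gamma(51) = 50!
approx = gamma_laplace(x)     # this is exactly Stirling: sqrt(2*pi*x) * (x/e)^x
print(approx / exact)         # relative error should be about 1/(12x)
```

The next term of the expansion (the a₃ correction in (17.2)) would shrink the error from order 1/x to order 1/x².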

17.1 The Laplace Transform


The Laplace transform is used to solve differential equations; it is a version of the Fourier transform. It takes sums to sums and convolutions to products.
Let

$$S = \{z \in \mathbb{C} \mid |z| = 1\} \tag{17.6}$$

The characters of a group G are the homomorphisms G → S; they form a group denoted

$$G^* = \mathrm{Char}(G). \tag{17.7}$$

If G = S, then the characters are exactly the maps raising to integral powers, z ↦ zⁿ, so G* ≅ Z. Let f be a function on G and f̂ a function on G*; we write

$$\hat f(\chi) = \int_G \chi(g)\,f(g)\,dg \tag{17.8}$$

What is dg? It is a Haar measure: on a topological group there is a translation-invariant way to integrate functions. We need to consider integrable functions; if the group is noncompact (e.g. R) we could get divergent integrals. For G = S, the values f̂(χ) are the Fourier coefficients cₙ.
Remark 17.1. What we are doing is considering the dual group Ĝ in the sense of Pontryagin duality.
We change now to

$$G = \mathbb{R},\qquad f\colon \mathbb{R} \to \mathbb{C} \tag{17.9a,b}$$

and we write

$$\hat f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{itx} f(x)\,dx \tag{17.9c}$$

This is the Fourier transform. If we have two functions f₁, f₂: R → C, their convolution is

$$(f_1 * f_2)(h) = \int_G f_1(g)\,f_2(g^{-1}h)\,dg \tag{17.11}$$

and in a reasonable setting this convolution is a commutative, associative product on functions. Let us return to discuss the Laplace transform.
Consider f: (0, ∞) → C (we can generalize to any half line); the Laplace transform is defined as

$$\tilde f(t) = \int_0^{\infty} e^{-tx} f(x)\,dx \tag{17.12}$$

The only condition we have on f is that

$$|f(x)| < Ae^{Bx} \tag{17.13}$$

where A, B ∈ R are "Big Constants".


So, after all, we are doing asymptotics. If f has two properties:
1. f ∈ C^∞(0, ∞), and
2. |f^(n)(x)| < Aₙ exp(Bₙx),
then we have the following asymptotics:

$$\tilde f(x) \sim \frac{f(0)}{x} + \frac{f'(0)}{x^2} + \frac{f''(0)}{x^3} + \dots + \frac{f^{(n-1)}(0)}{x^n} + \dots \tag{17.14}$$

Observe that we do not claim that this series converges: the derivatives f^(n)(0) may grow factorially, and the terms are not weighted by 1/n! to compensate.
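The expansion (17.14) is easy to test numerically. For f(t) = cos t the transform has a closed form, f̃(x) = x/(x²+1), and the series gives 1/x − 1/x³ + 1/x⁵ − ⋯. A small sketch (the quadrature parameters below are ad hoc choices):

```python
import math

def laplace_cos(x, T=20.0, n=200_000):
    # trapezoid rule for the integral of exp(-x t) cos(t) over [0, T];
    # the tail beyond T is negligible because exp(-x*T) is tiny
    h = T / n
    s = 0.5 * (1.0 + math.exp(-x * T) * math.cos(T))
    for k in range(1, n):
        t = k * h
        s += math.exp(-x * t) * math.cos(t)
    return s * h

x = 3.0
numeric = laplace_cos(x)             # closed form is x/(x^2+1) = 0.3
series = 1/x - 1/x**3 + 1/x**5       # f(0)/x + f'(0)/x^2 + f''(0)/x^3 + ...
print(numeric, series)
```

The truncated series agrees with the integral to within the size of the first omitted term, which is exactly the behavior an asymptotic (not convergent) series promises.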

Lecture 18
The steepest descent method may be generalized from

$$f(x) = \int_0^{\infty} e^{xh(\zeta)}\,d\zeta \sim \sqrt{\frac{2\pi}{x}}\;\frac{e^{xh(\zeta_0)}}{\sqrt{-h''(\zeta_0)}} \tag{18.1}$$

to

$$f(x) = \int_0^{\infty} e^{xh(\zeta)}\,g(\zeta)\,d\zeta \tag{18.2}$$

where g(ζ) is bounded and g(ζ₀) ≠ 0. We see that this doesn't change anything, since we're working in a small neighborhood of ζ₀, so we have essentially

$$f(x) \sim \sqrt{\frac{2\pi}{x}}\;\frac{g(\zeta_0)\,e^{xh(\zeta_0)}}{\sqrt{-h''(\zeta_0)}} \tag{18.3}$$

We now make a small change in the integrand and the limits of integration:

$$f(x) = \int_a^b e^{ixh(\zeta)}\,g(\zeta)\,d\zeta \tag{18.4}$$

We no longer insist that h(ζ₀) is a maximum; we relax this to

$$h'(\zeta_0) = 0 \quad\text{and}\quad h''(\zeta_0) \ne 0. \tag{18.5}$$

Here g(ζ) is not really all that important.


We recall from calculus that

$$e^{ixh(\zeta)} = \cos(xh(\zeta)) + i\sin(xh(\zeta)) \tag{18.6}$$

How does cos(xh(ζ)) look around ζ₀? In some neighborhood of ζ₀ we have

$$\cos(xh(\zeta)) \;\text{“}{\approx}\text{”}\; \cos(\mathrm{constant}) = \mathrm{constant} \tag{18.7}$$

locally in that neighborhood. A doodle to the left shows the intuitive picture. The interval where it is (nearly) constant is (ζ₀ − ε, ζ₀ + ε), where ε ∼ 1/√x; since x is huge, we can think of x as the frequency of the wave.
The contribution of the oscillations outside this neighborhood is ≈ 0, since destructive interference makes everything negligible in comparison with the contribution from the (ζ₀ − ε, ζ₀ + ε) neighborhood.
We then have, on the one hand,

$$f(x) = \int_a^b e^{ixh(\zeta)}\,g(\zeta)\,d\zeta \tag{18.8a}$$

$$\sim \frac{e^{ixh(\zeta_0)}}{\sqrt{x}}\,\sqrt{\frac{2\pi}{|h''(\zeta_0)|}}\;e^{\operatorname{sgn}(h''(\zeta_0))\,i\pi/4} \tag{18.8b}$$

and on the other hand we have

$$f(x) \sim \int_{\zeta_0-\varepsilon}^{\zeta_0+\varepsilon} e^{ix[h(\zeta_0)+h''(\zeta_0)(\zeta-\zeta_0)^2/2]}\,d\zeta \tag{18.9a}$$

$$= e^{ixh(\zeta_0)}\int_{\zeta_0-\varepsilon}^{\zeta_0+\varepsilon} e^{ixh''(\zeta_0)(\zeta-\zeta_0)^2/2}\,d\zeta \tag{18.9b}$$

change variables to u = ζ − ζ₀:

$$= e^{ixh(\zeta_0)}\int_{-\varepsilon}^{\varepsilon} e^{ixh''(\zeta_0)u^2/2}\,du \tag{18.9c}$$

$$= c_0\,e^{ixh(\zeta_0)}\int_{-\varepsilon'}^{+\varepsilon'} e^{\pm iv^2}\,dv \tag{18.9d}$$

where $v = \sqrt{x|h''(\zeta_0)|/2}\,u$ (so the rescaled limits ±ε′ → ±∞ as x → ∞, and the sign in the exponent is sgn h″(ζ₀)). Note that we know

$$\int_{-\infty}^{\infty} e^{iv^2}\,dv = (1+i)\sqrt{\pi/2} \tag{18.10}$$

so we use it!
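The Fresnel value (18.10) can itself be checked numerically. The oscillatory integral is only conditionally convergent, so the sketch below inserts a small damping factor e^{−εv²}: the regularized integral equals √(π/(ε − i)), which tends to (1+i)√(π/2) as ε → 0⁺. The parameter values are ad hoc:

```python
import cmath, math

def fresnel(eps, R=30.0, n=400_000):
    # trapezoid rule for the damped integral of exp((i - eps) v^2) over [-R, R];
    # the damping factor exp(-eps v^2) makes the integral absolutely convergent
    h = 2 * R / n
    total = cmath.exp((1j - eps) * R * R)  # two equal endpoint terms, each with weight 1/2
    for k in range(1, n):
        v = -R + k * h
        total += cmath.exp((1j - eps) * v * v)
    return total * h

approx = fresnel(eps=0.03)
exact = (1 + 1j) * math.sqrt(math.pi / 2)
print(abs(approx - exact))  # small, of order eps
```

The remaining discrepancy is dominated by the damping (order ε), not by the quadrature.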
Let's return now to the Laplace transform. We have our function

$$f\colon (0, \infty) \to \mathbb{R} \tag{18.11}$$

and suppose it grows at most exponentially. More precisely, there are numbers A, B ∈ R such that

$$|f(x)| \le Ae^{Bx} \tag{18.12}$$

where A, B ≠ 0. We have

$$\tilde f(x) = \int_0^{\infty} e^{-xt} f(t)\,dt \tag{18.13}$$

and we require x > B; otherwise we don't know anything about convergence. If we allow x ∈ C, the requirement becomes Re(x) > B.
Now, what about asymptotics for this function? We need f ∈ C^∞ and |f^(n)(t)| ≤ Aₙ exp(Bₙt); we can then write

$$\tilde f(x) = \int_0^{\infty} e^{-xt} f(t)\,dt \tag{18.14a}$$

then integrate by parts:

$$= \left[\frac{-1}{x}e^{-xt}f(t)\right]_{t=0}^{t=\infty} + \frac{1}{x}\int_0^{\infty} e^{-xt} f'(t)\,dt \tag{18.14b}$$

$$= \frac{f(0)}{x} + \frac{1}{x}\int_0^{\infty} e^{-xt} f'(t)\,dt \tag{18.14c}$$

integration by parts again yields

$$= \frac{f(0)}{x} + \frac{f'(0)}{x^2} + \frac{1}{x^2}\int_0^{\infty} e^{-xt} f''(t)\,dt \tag{18.14d}$$

and inductively we obtain

$$\tilde f(x) = \frac{f(0)}{x} + \dots + \frac{f^{(n)}(0)}{x^{n+1}} + \frac{1}{x^{n+1}}\int_0^{\infty} e^{-xt} f^{(n+1)}(t)\,dt \tag{18.14e}$$

We can consider the last term

$$R(x) := \frac{1}{x^{n+1}}\int_0^{\infty} e^{-xt} f^{(n+1)}(t)\,dt \tag{18.15}$$

as the remainder term.

Lecture 19
Let's continue with the Laplace transform. Consider

$$f\colon (0, \infty) \to \mathbb{R}. \tag{19.1}$$

The expression

$$\tilde f(z) = \int_0^{\infty} e^{-tz} f(t)\,dt \tag{19.2}$$

converges under mild conditions on f; namely, f should satisfy

$$|f(t)| < A\exp(Bt) \tag{19.3}$$

where A, B ∈ R − {0}. We demand

$$\operatorname{Re}(z) > B \tag{19.4}$$

and the imaginary part of z won't affect convergence. Then we are working with the half-plane Re(z) > B doodled to the left.
Now, last time we considered, for f ∈ C^∞(0, ∞) with

$$|f^{(n)}(t)| \le A_n\exp(B_n t), \tag{19.5}$$

the asymptotics for its Laplace transform:

$$\tilde f(x) \sim \frac{f(0)}{x} + \frac{f'(0)}{x^2} + \dots + \frac{f^{(n)}(0)}{x^{n+1}} + \frac{1}{x^{n+1}}\int_0^{\infty} e^{-xt} f^{(n+1)}(t)\,dt. \tag{19.6}$$

If g = f′ and we know f̃, how can we express g̃? Well, we go back to the definition of the Laplace transform:

$$\tilde g(z) = \int_0^{\infty} e^{-tz} g(t)\,dt = \int_0^{\infty} e^{-tz} f'(t)\,dt \tag{19.7a,b}$$

integrate by parts:

$$= \left[e^{-tz}f(t)\right]_0^{\infty} - \int_0^{\infty} f(t)\left(\frac{d}{dt}e^{-tz}\right)dt \tag{19.7c}$$

$$= 0 - f(0) + z\int_0^{\infty} f(t)\,e^{-tz}\,dt = -f(0) + z\tilde f(z) \tag{19.7d,e}$$

This is the relation between f̃(z) and g̃(z).
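This derivative rule is easy to confirm numerically; a sketch (with ad hoc quadrature parameters) taking f(t) = sin t, so that g = f′ = cos t:

```python
import math

def laplace(fn, z, T=40.0, n=200_000):
    # naive trapezoid-rule Laplace transform on [0, T]
    h = T / n
    s = 0.5 * (fn(0.0) + math.exp(-z * T) * fn(T))
    for k in range(1, n):
        t = k * h
        s += math.exp(-z * t) * fn(t)
    return s * h

z = 2.0
f_tilde = laplace(math.sin, z)   # exact value: 1/(z^2+1) = 0.2
g_tilde = laplace(math.cos, z)   # exact value: z/(z^2+1) = 0.4
# Eq. (19.7e): the transform of f' equals z*f_tilde - f(0)
print(g_tilde, z * f_tilde - math.sin(0.0))
```

Both printed numbers agree to quadrature accuracy, as the rule predicts.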


Consider

$$g(t) = f(t)\,e^{-at} \tag{19.8}$$

for some a ∈ C. This is Laplace transformed into a shift, g̃(z) = f̃(z + a):

$$\tilde g(z) = \int_0^{\infty} e^{-tz} e^{-at} f(t)\,dt = \int_0^{\infty} e^{-t(z+a)} f(t)\,dt \tag{19.9b,c}$$

let z̃ = z + a:

$$= \int_0^{\infty} e^{-t\tilde z} f(t)\,dt = \tilde f(\tilde z) = \tilde f(z+a) \tag{19.9d,e}$$

Consider

$$g(t) = \begin{cases} 0 & \text{if } t \le -a \\ f(t+a) & \text{if } t \ge -a \end{cases} \tag{19.10}$$

we see then that

$$\tilde g(z) = \tilde f(z)\,e^{-az} \tag{19.11}$$

If we have h(t) defined such that

$$\tilde h(z) = \tilde f(z)\,\tilde g(z), \tag{19.12}$$

what h(t) could do this? A convolution of f and g, that is

$$h(t) = (f*g)(t) = \int_0^{\infty} f(t-\tau)\,g(\tau)\,d\tau \tag{19.13}$$

Suppose that g is bounded and

$$f(x) = \frac{1}{2\varepsilon} \quad\text{when } x \in (-\varepsilon, \varepsilon) \tag{19.14}$$

for "small" 0 < ε ≪ 1 (and f = 0 elsewhere). Then we have

$$\int_0^{\infty} f(t-\tau)\,g(\tau)\,d\tau = \int_{t-\varepsilon}^{t+\varepsilon} f(t-\tau)\,g(\tau)\,d\tau \tag{19.15a}$$

$$\approx \tfrac{1}{2}\left(g(t+\varepsilon) + g(t-\varepsilon)\right) \tag{19.15b}$$

$$\approx g(t) \quad\text{as } \varepsilon \to 0. \tag{19.15c}$$
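The convolution rule (19.12)–(19.13) can also be checked numerically. Taking f(t) = e^{−t} and g ≡ 1, the convolution is (f∗g)(t) = 1 − e^{−t} by hand; a sketch with ad hoc quadrature parameters:

```python
import math

def lap(fn, z, T=60.0, n=200_000):
    # naive trapezoid-rule Laplace transform on [0, T]
    h = T / n
    s = 0.5 * (fn(0.0) + math.exp(-z * T) * fn(T))
    for k in range(1, n):
        t = k * h
        s += math.exp(-z * t) * fn(t)
    return s * h

f = lambda t: math.exp(-t)
g = lambda t: 1.0
conv = lambda t: 1.0 - math.exp(-t)   # (f*g)(t), computed by hand

z = 1.5
lhs = lap(conv, z)            # transform of the convolution
rhs = lap(f, z) * lap(g, z)   # product of the transforms
print(lhs, rhs)               # both should equal 1/(z(z+1)) = 1/3.75
```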

That concludes today’s lecture.



Midterm
1. Find all conformal maps of the upper half plane {z | Im(z) > 0} onto the lower
half plane {z | Im(z) < 0}.

2. Describe the Riemann surface of the function f(z) = √(z² − z).


3. Does the infinite product $\displaystyle\prod_{n=2}^{\infty}\left(1 + \frac{(-1)^n}{n}\right)$ converge? If yes, find the value of the product. Does it converge absolutely?

4. Let p(z) = aₙzⁿ + aₙ₋₁zⁿ⁻¹ + ⋯ + a₁z + a₀ be not identically 0, and assume that, for some k,
$$|a_k| > |a_0| + |a_1| + \dots + |a_{k-1}| + |a_{k+1}| + \dots + |a_n|.$$
Prove that the polynomial p has precisely k roots (counted with their multiplicities) within the disk {z : |z| < 1}.

5. Find an expression, in terms of elementary functions, for Γ(5 + z)Γ(5 − z).

6. Find an asymptotic formula for $\binom{n}{k}$ (I mean an expression F(n) in terms of elementary functions whose ratio with the given binomial coefficient has limit 1 when n → ∞. But I do not mind if you get a formula like this: $\binom{n}{k} \sim [???]\cdot\left(1 \pm \frac{1}{?n} + \dots\right)$.)

Lecture 20
Remark 20.1 (On Midterm). If we wish to map the upper half of C to the lower half of C, we use a fractional linear transformation with real coefficients. Also note that the product cannot be evaluated by grouping the factors pairwise, that is

$$\prod\left(1 + \frac{(-1)^n}{n}\right) \ne \prod\left(1 - \frac{1}{n(n+1)}\right) \tag{20.1}$$

otherwise, by the same reasoning,

$$\sum (-1)^n = \sum (1-1) = 0 \tag{20.2}$$

would be correct, which it most certainly is not.


So what do we know about the Laplace transform? It's something that looks like

$$\tilde f(z) = \int_0^{\infty} e^{-zt} f(t)\,dt \tag{20.3}$$

What else can we say? First we fix notation; we will write

$$\mathcal{L}[f(t)](z) = \tilde f(z) = \int_0^{\infty} e^{-zt} f(t)\,dt. \tag{20.4}$$

Let us list out the properties of the Laplace transform:
1. If f is differentiable and its derivative is not growing too fast, then $\widetilde{(f')}(z) = z\tilde f(z) - f(0)$.
2. We see that $\widetilde{(tf(t))}(z) = -(\tilde f)'(z)$.
3. A shift corresponds to multiplication by an exponential: $\mathcal{L}[e^{-at}f(t)](z) = \tilde f(z+a)$, where a ∈ C is arbitrary.
4. We consider, for t ≥ a, $\mathcal{L}[f(t-a)](z) = e^{-az}\tilde f(z)$.
5. We have this strange correspondence between convolution and the product: $\mathcal{L}[f*g](z) = \tilde f(z)\,\tilde g(z)$.
6. Consider a specific example, for a > −1: $\mathcal{L}[t^a] = \Gamma(a+1)/z^{a+1}$.
7. An example where a ∈ C is arbitrary: $\mathcal{L}[e^{-at}](z) = 1/(z+a)$.
8. For cosine we have $\mathcal{L}[\cos(at)](z) = z/(z^2+a^2)$.
9. For sine we have $\mathcal{L}[\sin(at)](z) = a/(z^2+a^2)$.
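Entries 6, 8 and 9 of this list can be spot-checked numerically; the quadrature helper and the parameter values (z = 2, exponent 1/2, frequency 3) are ad hoc choices:

```python
import math

def L_num(fn, z, T=50.0, n=200_000):
    # naive trapezoid-rule Laplace transform on [0, T]
    h = T / n
    s = 0.5 * (fn(0.0) + math.exp(-z * T) * fn(T))
    for k in range(1, n):
        t = k * h
        s += math.exp(-z * t) * fn(t)
    return s * h

z = 2.0
p6 = L_num(lambda t: t ** 0.5, z)         # entry 6: expect Gamma(1.5)/z^1.5
p8 = L_num(lambda t: math.cos(3 * t), z)  # entry 8: expect z/(z^2+9) = 2/13
p9 = L_num(lambda t: math.sin(3 * t), z)  # entry 9: expect 3/(z^2+9) = 3/13
print(p6, p8, p9)
```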

20.1 Some Proofs


Let us consider some proofs of the properties just asserted.
Proposition 20.2. For a > −1 we have $\mathcal{L}[t^a] = \Gamma(a+1)/z^{a+1}$.

Proof. We see that

$$\mathcal{L}[t^a] = \int_0^{\infty} e^{-zt}\,t^a\,dt \tag{20.5}$$

we let u = zt, so t = u/z and dt = du/z; then

$$\int_0^{\infty} e^{-zt}\,t^a\,dt = \int_0^{\infty} e^{-u}\left(\frac{u}{z}\right)^a\frac{du}{z} = \frac{1}{z^{a+1}}\,\Gamma(a+1) \tag{20.6}$$

where we have used the definition of the Γ function, specifically Eq (12.12).

Example 20.3. We see $\mathcal{L}[1] = z^{-1}$, $\mathcal{L}[t] = z^{-2}$, etc. Consider

$$f(t) = f(0) + f'(0)\,t + \frac{1}{2}f''(0)\,t^2 + \dots \tag{20.7}$$

then, transforming term by term,

$$\mathcal{L}[f](z) = \frac{f(0)}{z} + \frac{f'(0)}{z^2} + \dots + \frac{f^{(n)}(0)}{z^{n+1}} + \dots \tag{20.8}$$

This is a bit of a joke: the termwise approach is unreliable. The result is correct, however.
Proposition 20.4 (Transform of Cosine). For some a ∈ C, we have

$$\mathcal{L}[\cos(at)](z) = \frac{z}{z^2+a^2} \tag{20.9}$$

Proof. By direct computation we see

$$\mathcal{L}[\cos(at)](z) = \mathcal{L}\!\left[\frac{e^{iat}+e^{-iat}}{2}\right](z) \tag{20.10}$$

and by linearity we obtain

$$\mathcal{L}\!\left[\frac{e^{iat}+e^{-iat}}{2}\right](z) = \frac{1}{2}\mathcal{L}\big[e^{iat}\big](z) + \frac{1}{2}\mathcal{L}\big[e^{-iat}\big](z). \tag{20.11}$$

We use the result from computing $\mathcal{L}[e^{-at}\cdot 1]$ to find

$$\frac{1}{2}\mathcal{L}\big[e^{iat}\big](z) + \frac{1}{2}\mathcal{L}\big[e^{-iat}\big](z) = \frac{1}{2}\left(\frac{1}{z-ia} + \frac{1}{z+ia}\right) \tag{20.12}$$

and by gathering terms we have

$$\frac{1}{2}\left(\frac{1}{z-ia} + \frac{1}{z+ia}\right) = \frac{1}{2}\cdot\frac{2z}{z^2+a^2} = \frac{z}{z^2+a^2} \tag{20.13}$$

which proves the proposition.


Proposition 20.5 (Transform of Sine). For some a ∈ C, we have

$$\mathcal{L}[\sin(at)](z) = \frac{a}{z^2+a^2} \tag{20.14}$$

Proof. We may use the fact

$$\sin(at) = -\frac{1}{a}\,\frac{d}{dt}\cos(at) \tag{20.15}$$

take the Laplace transform of both sides (using property 1), and we immediately get the result.

20.2 Applications
It is tempting, by the first property, to apply this to differential equations; we'd end up with the answer, but Laplace transformed. We'd need to invert the transform to get the answer.
Let us first consider a very simple thing. If we have, e.g.,

$$\tilde f(z) = \frac{1}{z-a}, \tag{20.16}$$

what would we expect to find for f(t)? We expect f(t) = exp(at).
If we instead have

$$\tilde f(z) = \frac{1}{(z-1)(z-2)} \tag{20.17}$$

we'd very much like to use partial fraction decomposition

$$\tilde f(z) = \frac{1}{z-2} - \frac{1}{z-1} \tag{20.18}$$

to obtain the result

$$f(t) = e^{2t} - e^{t}. \tag{20.19}$$
More generally, if we have a rational function

$$\tilde f(z) = \frac{P(z)}{Q(z)} \tag{20.20}$$

where

$$\deg(P) < \deg(Q) \tag{20.21}$$

and we demand that all roots of Q are distinct (so Q has simple roots). Let z₁, …, zₘ be the roots of Q. We write

$$f(t) = \sum_{k=1}^{m} \frac{P(z_k)}{Q'(z_k)}\,e^{z_k t}. \tag{20.22}$$
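Formula (20.22) is short enough to implement directly; a sketch, checked against the example (20.17)–(20.19) above:

```python
import cmath, math

def invert_rational(P, Qprime, roots, t):
    # Eq. (20.22): f(t) = sum_k P(z_k)/Q'(z_k) * exp(z_k t), for Q with simple roots
    return sum(P(z) / Qprime(z) * cmath.exp(z * t) for z in roots)

# the example above: F(z) = 1/((z-1)(z-2)), so P = 1 and Q'(z) = 2z - 3
f = lambda t: invert_rational(lambda z: 1.0, lambda z: 2 * z - 3, [1.0, 2.0], t)

t = 0.7
print(f(t).real, math.exp(2 * t) - math.exp(t))  # the two should agree
```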

Let F be some function, F = f̃; in other words, F is the function whose Laplace transform we will try to invert. Let

$$f(t) = \sum \operatorname{res}\left(e^{zt}F(z)\right) \tag{20.23}$$

But the sum of residues is an integral! We see that

$$f(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} e^{zt}F(z)\,dz \tag{20.24}$$

All the poles should be on the left of a, and there should be only finitely many poles.

Lecture 21
Now, last time we covered the inverse Laplace transform. Let F(z) be analytic in C with possibly only finitely many poles. Then F = f̃, and we obtain the original function by

$$f(t) = \sum\left(\text{residues of } e^{tz}F(z)\right) \tag{21.1}$$

The poles of this function come entirely from F(z), since e^{tz} has no poles. We suppose that F(z) is analytic on C except for a finite number of isolated singularities, and that for some σ ∈ R, F is analytic on the half-plane {z ∈ C | Re(z) > σ}.
The requirements: there are 3 positive constants M, R, β > 0 such that if |z| > R then

$$|F(z)| < \frac{M}{|z|^{\beta}} = M|z|^{-\beta} \tag{21.2}$$

That is, on a big contour, as |z| → ∞, |F(z)| is "really small".
What do we do? We create a rectangle Γ which is big enough to contain all the singularities of F. This is doodled to the left; the × marks indicate singularities of F. We break Γ up into two bits: γ, which contains all the singularities, and γ̃, which is everything else. Since all the singularities live inside γ, we see that

$$\int_{\gamma} e^{zt}F(z)\,dz = 2\pi i\,f(t) \tag{21.3}$$

How can we check that this is correct?

We take its Laplace transform:

$$2\pi i\,\tilde f(z) = \int_0^{\infty} e^{-zt}\left(\int_{\gamma} e^{\zeta t}F(\zeta)\,d\zeta\right)dt \tag{21.4}$$

and change the order of integration:

$$2\pi i\,\tilde f(z) = \int_{\gamma}\int_0^{\infty} e^{-zt}e^{\zeta t}F(\zeta)\,dt\,d\zeta.$$

This is a little bit sloppy; it is really

$$2\pi i\,\tilde f(z) = \lim_{r\to\infty}\int_{\gamma}\int_0^{r} e^{(\zeta-z)t}F(\zeta)\,dt\,d\zeta \tag{21.5}$$

We then evaluate the inner integral and find

$$2\pi i\,\tilde f(z) = \lim_{r\to\infty}\int_{\gamma}\left(e^{(\zeta-z)r} - 1\right)\frac{F(\zeta)}{\zeta-z}\,d\zeta = \int_{\gamma}\frac{-F(\zeta)}{\zeta-z}\,d\zeta \tag{21.6}$$

(the exponential term vanishes in the limit, since Re(ζ − z) < 0 for ζ on γ when Re(z) is large). We want to show that F(z) = f̃(z).


We use the fact that

$$\int_{\gamma}(\dots) = \int_{\Gamma}(\dots) + \int_{\tilde\gamma}(\dots) \tag{21.7}$$

to deduce

$$-2\pi i\,\tilde f(z) = -\int_{\gamma}\frac{F(\omega)}{\omega-z}\,d\omega \tag{21.8a}$$

$$= -\int_{\Gamma}\frac{F(\omega)}{\omega-z}\,d\omega - \int_{\tilde\gamma}\frac{F(\omega)}{\omega-z}\,d\omega \tag{21.8b}$$

$$= -\int_{\Gamma}\frac{F(\omega)}{\omega-z}\,d\omega - 2\pi i\,F(z) \tag{21.8c}$$

and we see that

$$\int_{\Gamma}\frac{F(\omega)}{\omega-z}\,d\omega \approx 0 \tag{21.9}$$

when |ω − z| ∼ R and R becomes huge; we basically divide by "infinity". So we have f̃ = F.
Remark 21.1. The derivative is convolution with the derivative of the delta function.
This theorem has many corollaries. We see

$$f(t) = \sum\left(\text{residues of } e^{tz}F(z)\right) \tag{21.10}$$

so we can write this as an integral (thanks to the Residue theorem):

$$f(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} e^{zt}F(z)\,dz \tag{21.11}$$

This formula is very close to the Laplace transform itself, and we derived the various properties of the Laplace transform using only integration by parts (which means the inverse transform has analogous properties).
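The contour formula (21.11) can be evaluated numerically when F decays fast enough for the truncated line integral to converge; a sketch for F(z) = 1/(z−1)², whose inverse transform is t e^t (the truncation T and step count are ad hoc):

```python
import cmath, math

def bromwich(F, t, a=2.0, T=400.0, n=400_000):
    # trapezoid rule for (1/(2 pi i)) * integral of e^{zt} F(z) dz along Re(z) = a,
    # truncated at |Im z| <= T; since dz = i dy, the i cancels against 1/(2 pi i)
    h = 2 * T / n
    zlo, zhi = complex(a, -T), complex(a, T)
    total = 0.5 * (cmath.exp(zlo * t) * F(zlo) + cmath.exp(zhi * t) * F(zhi))
    for k in range(1, n):
        z = complex(a, -T + k * h)
        total += cmath.exp(z * t) * F(z)
    return (total * h / (2 * math.pi)).real

value = bromwich(lambda z: 1.0 / (z - 1.0) ** 2, t=1.0)
print(value, math.e)  # inverse transform of 1/(z-1)^2 is t*e^t, so expect e at t=1
```

The line Re(z) = 2 lies to the right of the pole at z = 1, as the residue formula requires.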

Homework 6
x EXERCISE 25
Let

$$f(t) = \begin{cases} 0, & \text{if } t \le 0,\\ \lfloor t\rfloor + 1, & \text{if } t > 0 \end{cases} \tag{21.12}$$

(where ⌊t⌋ denotes the maximal integer in the interval (−∞, t]). Find (an explicit formula for) the Laplace transform f̃(z). What is σ?
x EXERCISE 26
The same for the "saw-function"

$$f(t) = \begin{cases} 0 & \text{if } t \le 0\\ t - 2n & \text{if } 2n \le t < 2n+1\\ 2n+2-t & \text{if } 2n+1 \le t < 2n+2 \end{cases} \tag{21.13}$$
x EXERCISE 27
Let

$$f(x) = \begin{cases} 0, & \text{if } x < 0,\\ 1, & \text{if } x \ge 0, \end{cases} \qquad g(x) = \begin{cases} 0, & \text{if } x < 0,\\ x, & \text{if } x \ge 0. \end{cases} \tag{21.14}$$

Find f ∗ f, f ∗ g, g ∗ g.
x EXERCISE 28
Under the appropriate assumptions (you need to formulate them), prove the identities f ∗ g = g ∗ f and (f ∗ g)′ = f ∗ g′ = f′ ∗ g.

Lecture 22
The two fields of interest were really (1) Riemann surfaces and (2) the theory of conformal mappings. Geometry merely refers to measuring lengths and angles. We will briefly discuss the Riemann zeta function. First recall the fundamental theorem of arithmetic.

Theorem 22.1 (Fundamental Theorem of Arithmetic). If m is a positive integer, it can be written uniquely (up to order of factors) as a product of prime powers

$$m = p_1^{q_1}\cdots p_k^{q_k} \tag{22.1}$$

where p₁, …, pₖ are all distinct primes.


Now, the Riemann zeta function

$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} \tag{22.2}$$

can be analytically continued to C − {1}. We see that, by the fundamental theorem of arithmetic, we may group terms:

$$\left(1 + \frac{1}{2^s} + \frac{1}{2^{2s}} + \dots\right)\left(1 + \frac{1}{3^s} + \dots\right)\left(1 + \frac{1}{5^s} + \dots\right)(\dots) = \zeta(s) \tag{22.3}$$

This is just from

$$\sum_n \frac{1}{n^s} = \sum \frac{1}{\left(p_1^{k_1}\cdots p_m^{k_m}\right)^s} \tag{22.4}$$

and thus

$$\zeta(s) = \prod_{\text{prime } p}\frac{1}{1 - p^{-s}} = \prod\frac{p^s}{p^s-1} = \prod\left(1 + \frac{1}{p^s-1}\right) \tag{22.5}$$

This is the "Euler Product". We can represent ζ as an integral; first note the notation [x] for the integer part of x. Then

$$\zeta(s) = \frac{1}{\Gamma(s)}\int_0^{\infty}\frac{x^{s-1}}{e^x-1}\,dx \tag{22.6a}$$

$$= s\int_1^{\infty}\frac{[x]}{x^{s+1}}\,dx \tag{22.6b}$$

$$= \frac{s}{s-1} - s\int_1^{\infty}\frac{x-[x]}{x^{s+1}}\,dx \tag{22.6c}$$

and the last integral converges for Re(s) > 0, extending the preceding representations. So we obtain a functional relationship

$$\zeta(s) = 2^s\pi^{s-1}\sin\!\left(\frac{s\pi}{2}\right)\Gamma(1-s)\,\zeta(1-s) \tag{22.7}$$

and with this functional relationship we may uniquely extend the zeta function to a larger domain.
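The Euler product (22.5) is easy to check numerically at a point where everything converges, e.g. s = 2, where ζ(2) = π²/6 (the truncation cutoffs below are ad hoc):

```python
import math

def primes_upto(N):
    # simple sieve of Eratosthenes
    is_p = [True] * (N + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, N + 1, p):
                is_p[m] = False
    return [p for p in range(2, N + 1) if is_p[p]]

s = 2.0
zeta_sum = sum(1.0 / n ** s for n in range(1, 200_001))   # truncated Eq. (22.2)
euler = 1.0
for p in primes_upto(200):                                # truncated Eq. (22.5)
    euler *= 1.0 / (1.0 - p ** (-s))
print(zeta_sum, euler, math.pi ** 2 / 6)                  # zeta(2) = pi^2/6
```

Both truncations approach π²/6 from below, the product more slowly than the sum.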

Lecture 23
We are referred to:
• E. Titchmarsh,
The Zeta-Function of Riemann.
for further reading on the Riemann zeta function. We will continue analytically continuing ζ(s) to C − {1}. Now what happens to Eq (22.7) when we solve it for ζ(1 − s)? It becomes

$$\zeta(1-s) = 2(2\pi)^{-s}\cos\!\left(\frac{s\pi}{2}\right)\Gamma(s)\,\zeta(s). \tag{23.1}$$

Notice that for s = 2n + 1, n a positive integer, the cosine factor vanishes, so

$$\zeta(1-s) = \zeta(-2n) = 0. \tag{23.2}$$

The Γ factor has poles at s = 0, −1, −2, …, but those at the negative odd integers are killed off by the zeroes of cos(πs/2), so it's all good. These are the trivial zeroes: ζ(−2) = ζ(−4) = ⋯ = 0.
Now, the most famous unsolved problem in mathematics: the Riemann hypothesis. The only zeroes (in addition to the trivial zeroes) are at

$$s = \frac{1}{2} + i(\text{something}). \tag{23.3}$$

Note $\zeta(\bar s) = \overline{\zeta(s)}$, so the nontrivial zeroes come in conjugate pairs.
Proposition 23.1. There are infinitely many "nontrivial zeroes" of ζ(s).

Proposition 23.2. If γₙ is the nth nontrivial zero, then limₙ→∞(γₙ − γₙ₋₁) = 0.

Conjecture 23.3. If N(T) is the number of zeroes γ with Im(γ) < T, then N(T) ∼ T ln(T) is the asymptotic behavior.

The first nontrivial zero is at

$$\gamma_1 \approx \frac{1}{2} + 14.134725141734\ldots i. \tag{23.4}$$
So what is the distribution of these nontrivial zeroes? The number of k ∈ N with Im(γₖ) < n is described in the following table:

n | max{k ∈ N | Im(γₖ) < n}
100 | 29
1000 | 649
10,000 | 10,142
1,000,000 | 1,747,146

Let

$$\pi(N) := \#\{\text{primes} < N\} \tag{23.5}$$

We have

$$\pi(N) \sim \operatorname{Li}(N) \tag{23.6}$$

where

$$\operatorname{Li}(x) = \int_2^x \frac{du}{\ln(u)} \tag{23.7}$$

is the logarithmic integral function. The Riemann hypothesis is then equivalent to the existence of a constant C > 0 such that

$$|\pi(N) - \operatorname{Li}(N)| < C\sqrt{N}\ln(N) \tag{23.8}$$

How beautiful.
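We can watch (23.6) and (23.8) numerically for small N; a sketch using a sieve for π(N) and the trapezoid rule for Li (the cutoff N = 10⁵ and step counts are ad hoc):

```python
import math

def prime_count(N):
    # pi(N) by the sieve of Eratosthenes
    is_p = [True] * (N + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, N + 1, p):
                is_p[m] = False
    return sum(is_p)

def Li(x, n=100_000):
    # Eq. (23.7): Li(x) = integral from 2 to x of du/ln(u), trapezoid rule
    h = (x - 2.0) / n
    s = 0.5 * (1.0 / math.log(2.0) + 1.0 / math.log(x))
    for k in range(1, n):
        s += 1.0 / math.log(2.0 + k * h)
    return s * h

N = 100_000
pi_N, li_N = prime_count(N), Li(float(N))
print(pi_N, li_N, abs(pi_N - li_N) / (math.sqrt(N) * math.log(N)))
```

At N = 10⁵ the difference |π(N) − Li(N)| is a tiny fraction of √N ln N, consistent with (23.8) (though of course no finite computation proves it).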

Lecture 24
We will cover several complex variables and analytic continuation; first the latter. We have some boundary, and a function defined inside the region. When can it not be extended beyond the region? Well, consider one point: what function cannot be extended past a boundary point z₁? Well, 1/(z − z₁). For n points z₁, …, zₙ we could have

$$f(z) = \sum_{k=1}^{n}\frac{1}{z-z_k} \tag{24.1}$$

We could replace the sum by an integral over the boundary, but that'd be hard. Why not take an infinite sum? Well, why not work with a dense (countable) set of points on the boundary? This approach is better, since it's countable. We take {zₖ} to be dense in the boundary of the region. We can write out the sum

$$f(z) = \sum_{k=1}^{\infty}\frac{1}{z-z_k} \tag{24.2}$$

but it won't converge. We then generalize this to

$$f(z) = \sum_{k=1}^{\infty}\frac{a_k}{z-z_k} \tag{24.3}$$

where {aₖ} is a rapidly decreasing sequence. Although it'd converge on the interior of the region, there is no guarantee of convergence on the boundary.
We then specify

$$a_n = 2^{-n}\min_{m<n}\left(1, |z_m - z_n|\right) \tag{24.4}$$

and so, near a boundary point zₙ, we have one singular term plus a sum

$$\left|\sum_{k\ne n}\frac{a_k}{z-z_k}\right| < \sum_{k\ne n} 2^{-k} \tag{24.5}$$

which converges.
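A concrete instance of this construction can be played with numerically. In the sketch below the boundary is the unit circle and the dense sequence is generated by golden-ratio rotations (that particular choice of dense points is an assumption for illustration, not from the lecture); the partial sums of (24.3) converge quickly at an interior point:

```python
import cmath, math

# a dense sequence on the unit circle (the boundary of the unit disk):
# multiples of the golden ratio mod 1 are equidistributed, hence dense
z_pts = [cmath.exp(2j * math.pi * ((k * 0.6180339887) % 1.0)) for k in range(1, 2001)]

def a_coeff(n):
    # Eq. (24.4): a_n = 2^{-n} min(1, min_{m<n} |z_m - z_n|)
    d = min([1.0] + [abs(z_pts[m] - z_pts[n]) for m in range(n)])
    return 2.0 ** (-n) * d

def f_partial(z, N):
    # partial sums of Eq. (24.3)
    return sum(a_coeff(n) / (z - z_pts[n]) for n in range(N))

z = 0.3 + 0.4j                # a point inside the disk, |z| = 0.5
s1, s2 = f_partial(z, 30), f_partial(z, 60)
print(abs(s2 - s1))           # tail bounded by sum of 2^{-n}/(1 - |z|): tiny
```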

Lecture 25
Suppose we have a region Γ in C² for which we want a holomorphic function singular at a boundary point (z₀, ω₀). Suppose we take

$$f(z, \omega) = \frac{1}{z + \omega - z_0 - \omega_0} \tag{25.1}$$

This is singular on

$$z = a + z_0,\qquad \omega = -a + \omega_0 \tag{25.2}$$

for every a ∈ C. This is a complex line (since a varies over all of C). Suppose we want it to not pierce the region… then we must demand that the region be convex for this to be true.
The hyperplane tangent at the point (z₁, …, zₙ) is defined by an equation

$$a_1z_1 + \dots + a_nz_n + b = 0 \tag{25.3}$$

So back to the original problem in C²: the function (a₁z + a₂ω + b)⁻¹ is regular in the domain and has a pole on the surface. We do what we did last time: take a dense selection of such points, and so on. We can weaken the condition of convexity to pseudoconvexity (i.e., the tangent plane may possibly intersect the domain).
Consider a surface described by

$$F(z_1, \dots, z_n) = 0 \tag{25.4}$$

We can rewrite it as

$$F(x_1, y_1, \dots, x_n, y_n) = 0. \tag{25.5}$$

Let

$$\partial_k = \frac{\partial}{\partial z_k} = \frac{1}{2}\left(\frac{\partial}{\partial x_k} - i\frac{\partial}{\partial y_k}\right),\qquad \bar\partial_k = \frac{\partial}{\partial \bar z_k} = \frac{1}{2}\left(\frac{\partial}{\partial x_k} + i\frac{\partial}{\partial y_k}\right) \tag{25.6}$$

For the second derivatives: the matrix $[\partial_i\bar\partial_j F]$ is Hermitian, and when the corresponding Hermitian form is positive semidefinite the domain is a "holomorphic domain" (domain of holomorphy). They are very important!
In Cⁿ (for n ≥ 2) the region between two concentric spheres is not a holomorphic domain: every holomorphic function on it extends to the central ball. Recall that if f is analytic inside a domain with boundary γ, then

$$f(z_0) = \frac{1}{2\pi i}\int_{\gamma}\frac{f(\zeta)}{\zeta-z_0}\,d\zeta. \tag{25.7}$$

Suppose that f were merely continuous on γ; then

$$g(z_0) = \int_{\gamma}\frac{f(\zeta)}{\zeta-z_0}\,d\zeta \tag{25.8}$$

is still a differentiable function of z₀. Very briefly, the idea is to take your domain, stack an infinite number of discs there, and perform Cauchy integration on each disc. We get a function in one variable, and this function turns out to tell us that f = g in the domain. So f can be extended. This approach can be applied to many other domains.

References
[1] John B. Conway,
Functions of one complex variable.
Springer–Verlag, 1978.

[2] John B. Conway,


Functions of one complex variable II.
Springer–Verlag, 1995.
[3] Serge Lang,
Complex Analysis.
Springer–Verlag, Fourth edition, 1998.
[4] Jerrold E. Marsden and Michael J. Hoffman,
Basic Complex Analysis.
W. H. Freeman; Third Edition edition, 1999.

[5] Elias M. Stein, Rami Shakarchi,


Complex Analysis.
Princeton University Press, 2003.
[6] Hermann Weyl,
The Concept of a Riemann Surface.
Dover Books on Mathematics, Third edition, 2009.

Note that [4] was the “official” text for the course, although we didn’t touch it in math
185B. It seems like most of the material was drawn from [3], specifically chapters 7–16.
Some other references which may be useful:
1. Walter Rudin,
Real and Complex Analysis.
Third edition, Boston: McGraw Hill, 1987.

The homological definition of integrals of the form $\int g(x)\,e^{f(x)}\,dx$ is discussed in
2. Albert Schwarz, Ilya Shapiro,
“Twisted de Rham cohomology, homological definition of the integral and ‘Physics over
a ring’ ”.
Nucl.Phys.B809:547–560,2009. Eprint: arXiv:0809.0086v1 [math.AG]

