Complex Analysis Applications
Johar M. Ashfaque
Contents
1 What are Complex Numbers?
1.1 The Complex Number
1.2 The Complex Product
1.3 Euler's Formula
1.4 Argument, Magnitude & Conjugate
2 Differentiation
2.1 The Cauchy-Riemann Equations
3 Integration
3.1 Cauchy's Differentiation Formula
4 Applications
4.1 Fourier Series: Complex Form
4.1.1 Parseval's Theorem
4.2 Virasoro Algebra: Cauchy's Integral Theorem
4.3 Some Properties of the Riemann Zeta Function
4.3.1 Introduction
4.3.2 The Euler Product Formula for ζ(s)
4.3.3 The Bernoulli Numbers
4.3.4 Relationship to the Zeta Function
4.3.5 The Gamma Function
4.3.6 The Euler Reflection Formula
4.4 The Hurwitz Zeta Function
4.4.1 Variants of the Hurwitz Zeta Function
4.4.2 Sums of the Hurwitz Zeta Function
4.5 Epstein Zeta Function
4.5.1 Introduction
4.5.2 The Functional Equation
4.6 The Mellin Transform
4.6.1 The Mellin Transform and its Properties
4.6.2 Relation to Laplace Transform
4.6.3 Inversion Formula
4.6.4 Scaling Property for a > 0
4.6.5 Multiplication by t^a
4.6.6 Derivative
4.6.7 Another Property of the Derivative
4.6.8 Integral
4.6.9 Example 1
4.6.10 Example 2
1 What are Complex Numbers?
1.1 The Complex Number
A complex number, z, is an ordered pair of real numbers, similar to a point in the real plane R². The first and second components of z are called the real and imaginary parts respectively,
ℜ(z) = x,  ℑ(z) = y.
Equivalently, the complex number can be expressed in terms of its components as
z = x + iy.
1.2 The Complex Product
The plane of complex numbers, C, differs from R² in that the product of two complex numbers is defined to be
z1 · z2 = (x1 + iy1)(x2 + iy2) = x1x2 + iy1x2 + iy2x1 − y1y2 = x1x2 − y1y2 + i(y1x2 + y2x1).
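As a quick numerical sanity check, the component formula above agrees with the built-in complex multiplication (a minimal Python sketch; the sample values are arbitrary):

```python
# Compare Python's built-in complex product with the component formula
# z1*z2 = (x1*x2 - y1*y2) + i(y1*x2 + y2*x1).
x1, y1 = 2.0, 3.0    # arbitrary sample values
x2, y2 = -1.0, 4.0
z1, z2 = complex(x1, y1), complex(x2, y2)
manual = complex(x1*x2 - y1*y2, y1*x2 + y2*x1)
assert z1 * z2 == manual
```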
1.4 Argument, Magnitude & Conjugate
For a complex number z in polar coordinates, it is customary to call the angle the argument of z, denoted
ϕ = arg(z),
and r is known as the magnitude of z, denoted
r = |z|.
The complex conjugate is an operation which flips the sign of the imaginary part of the complex number on which it acts. For z = x + iy, the complex conjugate, denoted by z*, is simply
z* = x − iy.
In polar coordinates, we know that flipping the sign of the argument will flip the sign of the imaginary part, since
ℑ(z) = r sin ϕ
and sine is an odd function; hence z* = r e^{−iϕ}. Writing z1 = r1 e^{iθ1} and z2 = r2 e^{iθ2}, the product becomes
z1 · z2 = r1 r2 e^{i(θ1 + θ2)}.
From this it can be easily seen that the geometric interpretation of the complex product is that it
multiplies the magnitudes and adds the arguments.
Taking the product of a complex number with its complex conjugate, we find
z · z* = r² = |z|².
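The two polar facts above, that the product multiplies magnitudes and adds arguments and that z·z* = |z|², can be checked numerically (a Python sketch with arbitrary sample values):

```python
import cmath

# Geometric interpretation of the product: magnitudes multiply, arguments add.
z1 = cmath.rect(2.0, 0.5)   # r = 2, phi = 0.5 (arbitrary sample values)
z2 = cmath.rect(3.0, 1.0)   # r = 3, phi = 1.0
p = z1 * z2
assert abs(abs(p) - 6.0) < 1e-12           # |z1| |z2|
assert abs(cmath.phase(p) - 1.5) < 1e-12   # arg z1 + arg z2

# z * conj(z) = |z|^2
z = complex(3.0, -4.0)
assert abs(z * z.conjugate() - abs(z)**2) < 1e-12
```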
2 Differentiation
One might expect to define the complex derivative as in the case of the reals. However, there is an
additional complication because C forms a plane, and therefore we must specify a direction to step with
the differential element h when taking the derivative just as in the case of the directional derivative in R2 .
In general, the derivative at a single point can take on many values depending on the direction in which we choose to step. But if we restrict attention to holomorphic (or analytic) functions, which have a single-valued derivative at every point in some region, we might be in luck.
The derivative is defined as
df/dz = lim_{h→0} ( f(z + h) − f(z) ) / h.
If we pick h to be real,
df/dz = lim_{h→0} ( f(x + h, y) − f(x, y) ) / h = ∂f/∂x,
while if we pick h = il to be purely imaginary,
df/dz = lim_{l→0} ( f(x, y + l) − f(x, y) ) / (il) = (1/i) ∂f/∂y.
2.1 The Cauchy-Riemann Equations
Defining
u ≡ ℜ(f(z)),  v ≡ ℑ(f(z))
and requiring that the derivative be the same in both directions gives
∂u/∂x + i ∂v/∂x = (1/i) ( ∂u/∂y + i ∂v/∂y ) = ∂v/∂y − i ∂u/∂y.
Equating real and imaginary parts yields the Cauchy-Riemann equations,
∂u/∂x = ∂v/∂y,  ∂v/∂x = −∂u/∂y.
If the partial derivatives of a function are continuous in some region and satisfy the Cauchy-Riemann equations in that region, then the function is holomorphic in that region.
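The direction-independence of the complex derivative can be illustrated numerically for a simple holomorphic function (a finite-difference sketch, not part of the original notes; f(z) = z² and the sample point are my choices):

```python
# For the holomorphic f(z) = z^2, the difference quotient gives the same
# answer whether we step along the real axis or the imaginary axis.
def f(z):
    return z * z

z0 = complex(1.3, 0.7)   # arbitrary sample point
h = 1e-6
d_real = (f(z0 + h) - f(z0)) / h            # step with real h
d_imag = (f(z0 + 1j*h) - f(z0)) / (1j*h)    # step with imaginary h = il
assert abs(d_real - d_imag) < 1e-5
assert abs(d_real - 2*z0) < 1e-5            # exact derivative is 2z
```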
3 Integration
4 Applications
4.1 Fourier Series: Complex Form
Complex exponential functions give a convenient way to rewrite Fourier series. Considering Fourier series in complex form allows us to work with a single set of coefficients {c_n}_{n=−∞}^{∞}, all given by one formula. Then
a_n cos(nx) + b_n sin(nx) = ( (1/π) ∫_{−π}^{π} f(x) cos(nx) dx ) (e^{inx} + e^{−inx})/2 + ( (1/π) ∫_{−π}^{π} f(x) sin(nx) dx ) (e^{inx} − e^{−inx})/(2i)
= ( (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx ) e^{inx} + ( (1/2π) ∫_{−π}^{π} f(x) e^{inx} dx ) e^{−inx}
and
a_0/2 = (1/2π) ∫_{−π}^{π} f(x) dx.
When speaking of the Fourier series of f in complex form, one simply means the infinite series appearing in
f ∼ Σ_{n=−∞}^{∞} c_n e^{inx}.
Note. There is an easy way to pass from the Fourier coefficients with respect to sine and cosine functions to the coefficients for the Fourier series in complex form. In fact,
c_0 = a_0/2
and for n ∈ N,
c_n = (a_n − i b_n)/2,  c_{−n} = (a_n + i b_n)/2.
On the other hand, if the Fourier coefficients in complex form are known, then
a_n = c_n + c_{−n},  b_n = i(c_n − c_{−n}).
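The conversion between the two sets of coefficients can be checked on a small example; here f(x) = cos x + 2 sin x is my own illustrative choice, so a_1 = 1 and b_1 = 2 with every other coefficient zero:

```python
import cmath, math

# f(x) = cos x + 2 sin x: a1 = 1, b1 = 2, all other coefficients vanish.
a1, b1 = 1.0, 2.0
c1  = (a1 - 1j*b1) / 2
cm1 = (a1 + 1j*b1) / 2

# The trigonometric and complex forms agree pointwise.
x = 0.7   # arbitrary sample point
f_trig = a1*math.cos(x) + b1*math.sin(x)
f_cplx = c1*cmath.exp(1j*x) + cm1*cmath.exp(-1j*x)
assert abs(f_trig - f_cplx) < 1e-12
```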
4.1.1 Parseval’s Theorem
Assume that the function f ∈ L²(−π, π) has the Fourier coefficients {a_n}_{n=0}^{∞}, {b_n}_{n=1}^{∞}, or in complex form {c_n}_{n=−∞}^{∞}. Then
(1/2π) ∫_{−π}^{π} |f(x)|² dx = (1/4)|a_0|² + (1/2) Σ_{n=1}^{∞} ( |a_n|² + |b_n|² ) = Σ_{n=−∞}^{∞} |c_n|².
Conversely, if Σ_{n=−∞}^{∞} |c_n|² < ∞, then
f(x) = Σ_{n=−∞}^{∞} c_n e^{inx}
defines a function in L²(−π, π).
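Parseval's identity can be verified numerically for a simple function; f(x) = cos x is my own illustrative choice, for which both sides equal 1/2:

```python
import math

# Parseval for f(x) = cos x on (-pi, pi): a1 = 1, every other coefficient
# vanishes, so both sides of the identity equal 1/2.
N = 20000
h = 2*math.pi / N
lhs = sum(math.cos(-math.pi + (k + 0.5)*h)**2 for k in range(N)) * h / (2*math.pi)
rhs = 0.5 * 1.0**2   # (1/2)(|a1|^2 + |b1|^2) with a1 = 1, b1 = 0
assert abs(lhs - rhs) < 1e-9
```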
4.2 Virasoro Algebra: Cauchy's Integral Theorem
We know
L_m = ∮ (dz/2πi) z^{m+1} T(z)
and
L_n = ∮ (dω/2πi) ω^{n+1} T(ω).
[L_m, L_n] = ∮ (dz/2πi) z^{m+1} ( T(z) L_n − L_n T(z) )
= ∮_{|z|>|ω|} (dz/2πi)(dω/2πi) z^{m+1} ω^{n+1} T(z)T(ω) − ∮_{|ω|>|z|} (dz/2πi)(dω/2πi) z^{m+1} ω^{n+1} T(ω)T(z)
= ∮_0 (dω/2πi) ∮_ω (dz/2πi) z^{m+1} ω^{n+1} [ (c/2)/(z − ω)^4 + 2T(ω)/(z − ω)² + ∂T(ω)/(z − ω) ]
= ∮ (dω/2πi) ω^{n+1} [ (1/6)(d³/dz³ z^{m+1})(c/2) + (d/dz z^{m+1}) 2T(ω) + z^{m+1} T′(ω) ]_{z=ω}
= ∮ (dω/2πi) [ (c/12) m(m+1)(m−1) ω^{m+n−1} + 2(m+1) ω^{m+n+1} T(ω) + ω^{m+n+2} T′(ω) ]
= (c/12) m(m+1)(m−1) δ_{m+n,0} + 2(m+1) L_{m+n} − (m+n+2) L_{m+n}
= (c/12) m(m² − 1) δ_{m+n,0} + (m − n) L_{m+n}.
4.3 Some Properties of the Riemann Zeta Function
4.3.1 Introduction
Leonhard Euler lived from 1707 to 1783 and is, without a doubt, one of the most influential mathemati-
cians of all time. His work covered many areas of mathematics including algebra, trigonometry, graph
theory, mechanics and, most relevantly, analysis.
Although Euler demonstrated an obvious genius as a child, it took until 1735 for his talents to be fully recognised. It was then that he solved what was known as the Basel problem, a problem posed in 1644 and named after Euler's home town [?]. This problem asks for an exact expression for the limit of the series
Σ_{n=1}^{∞} 1/n² = 1 + 1/4 + 1/9 + 1/16 + ...,   (1)
which Euler calculated to be exactly equal to π²/6. Going beyond this, he also calculated that
Σ_{n=1}^{∞} 1/n⁴ = 1 + 1/16 + 1/81 + 1/256 + ... = π⁴/90,   (2)
among other specific values of a series that later became known as the Riemann zeta function, which is
classically defined in the following way.
Definition 4.1 For ℜ(s) > 1, the Riemann zeta function is defined as
ζ(s) = Σ_{n=1}^{∞} 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + ...
This allows us to write the sums in the above equations (1) and (2) simply as ζ(2) and ζ(4) respectively.
A few years later Euler constructed a general proof that gave exact values for all ζ(2n) for n ∈ N. These
were the first instances of the special values of the zeta function and are still amongst the most interesting.
However, when they were discovered, it was still unfortunately the case that analysis of ζ(s) was restricted
only to the real numbers. It wasn’t until the work of Bernhard Riemann that the zeta function was to
be fully extended to all of the complex numbers by the process of analytic continuation and it is for this
reason that the function is commonly referred to as the Riemann zeta function. From this, we are able
to calculate more special values of the zeta function and understand its nature more completely.
We will be discussing some classical values of the zeta function as well as some more modern ones and will
require no more than an undergraduate understanding of analysis (especially Taylor series of common
functions) and complex numbers. The only tool that we will be using extensively in addition to these
will be the ‘Big Oh’ notation, that we shall now define.
Definition 4.2 We say that f(x) = g(x) + O(h(x)) as x → k if there exists a constant C > 0 such that |f(x) − g(x)| ≤ C|h(x)| when x is close enough to k.
This may seem a little alien at first and it is true that, to the uninitiated, it can take a little while to
digest. However, its use is more simple than its definition would suggest and so we will move on to more
important matters.
4.3.2 The Euler Product Formula for ζ(s)
Although we will not give a complete proof here (one is referred to [?]), it is worth noting that
Π_{n=1}^{m} (1 + a_n) = exp{ Σ_{n=1}^{m} ln(1 + a_n) }.
Theorem 4.4 Let p denote the prime numbers. For ℜ(s) > 1,
ζ(s) = Π_p (1 − p^{−s})^{−1}.
If we continue this process of siphoning off primes we can see that, by the Fundamental Theorem of Arithmetic,
ζ(s) Π_p (1 − p^{−s}) = 1,
which is equivalent to the statement of the theorem.
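A truncated Euler product can be compared against the known value ζ(2) = π²/6 (a numerical sketch; the truncation bound 10000 is an arbitrary choice):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            for j in range(i*i, n + 1, i):
                is_p[j] = False
    return [i for i in range(2, n + 1) if is_p[i]]

# Truncated Euler product for zeta(2) converges to pi^2/6.
prod = 1.0
for p in primes_up_to(10000):
    prod *= 1.0 / (1.0 - p**-2)
assert abs(prod - math.pi**2 / 6) < 1e-4
```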
4.3.3 The Bernoulli Numbers
We will now move on to the study of the Bernoulli numbers, a sequence of rational numbers that pops up frequently when considering the zeta function. We are interested in them because they are intimately related to some special values of the zeta function and are present in some rather remarkable identities. We already have an understanding of Taylor series and the analytic power that they provide, and so we can now begin with the definition of the Bernoulli numbers. This section will follow Chapter 6 in [?].
Definition 4.5 The Bernoulli numbers B_n are defined to be the coefficients in the series expansion
x/(e^x − 1) = Σ_{n=0}^{∞} B_n x^n / n!.
It is a result from complex analysis that this series converges for |x| < 2π but, other than this, we cannot gain much of an understanding from the implicit definition. Note also that although the left hand side would appear to become infinite at x = 0, it does not.
Corollary 4.6 We can calculate the Bernoulli numbers by the recursion formula
0 = Σ_{j=0}^{k−1} C(k, j) B_j,  k ≥ 2,
where B_0 = 1 and C(k, j) denotes the binomial coefficient.
Proof. Let us first replace e^x − 1 with its Taylor series to see that
x = ( Σ_{j=1}^{∞} x^j/j! ) ( Σ_{n=0}^{∞} B_n x^n/n! ).
Comparing the coefficients of x^k on both sides, the left hand side contributes nothing for k ≥ 2, so
0 = Σ_{j=0}^{k−1} B_j / ( j! (k − j)! ) = (1/k!) Σ_{j=0}^{k−1} C(k, j) B_j.
Note that the overall 1/k! term is irrelevant to the recursion formula. This completes the proof.
The first few Bernoulli numbers are therefore
B0 = 1, B1 = −1/2, B2 = 1/6, B3 = 0,
B4 = −1/30, B5 = 0, B6 = 1/42, B7 = 0.
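The recursion of Corollary 4.6 can be implemented directly with exact rational arithmetic, reproducing the values listed above (a minimal sketch; the function name is my own):

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """B_0, ..., B_{n_max} via 0 = sum_{j=0}^{k-1} C(k,j) B_j for k >= 2."""
    B = [Fraction(1)]   # B_0 = 1
    for k in range(2, n_max + 2):
        s = sum((comb(k, j) * B[j] for j in range(k - 1)), Fraction(0))
        B.append(-s / k)   # solve for B_{k-1}; C(k, k-1) = k
    return B

B = bernoulli(7)
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6)
assert B[3] == 0 and B[4] == Fraction(-1, 30)
assert B[5] == 0 and B[6] == Fraction(1, 42) and B[7] == 0
```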
Lemma 4.7 The values of the odd Bernoulli numbers (except B_1) are zero.
Proof. As we know the values of B_0 and B_1, we can remove the first two terms from Definition 4.5 and rearrange to get
x/(e^x − 1) + x/2 = 1 + Σ_{n=2}^{∞} B_n x^n/n!,
which then simplifies to give
(x/2) (e^x + 1)/(e^x − 1) = 1 + Σ_{n=2}^{∞} B_n x^n/n!.
We can then multiply both the numerator and denominator of the left hand side by exp(−x/2) to get
(x/2) (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = 1 + Σ_{n=2}^{∞} B_n x^n/n!.   (3)
By substituting x → −x into the left hand side of this equation we can see that it is an even function and hence invariant under this transformation. Hence, as the odd Bernoulli numbers multiply odd powers of x, the right hand side can only be invariant under the same transformation if the values of the odd coefficients are all zero.
4.3.4 Relationship to the Zeta Function
As we have already discussed, Euler found a way of calculating exact values of ζ(2n) for n ∈ N. He did this using the properties of the Bernoulli numbers, although he originally did it using the infinite product for the sine function. The relationship between the zeta function and the Bernoulli numbers is not obvious, but the proof of it is quite satisfying.
Theorem 4.8 For n ∈ N,
ζ(2n) = (2π)^{2n} |B_{2n}| / ( 2 (2n)! ).
To prove this theorem, we will be using the original proof attributed to Euler and reproduced in [?]. This will be done by finding two separate expressions for z cot(z) and then comparing them. We will be using a real analytic proof, which is slightly longer than a complex analytic proof, an example of which can be found in [?].
Proof. Substitute x = 2iz into equation (3) and observe that, because the odd Bernoulli numbers are zero, we can write this as
(x/2)(e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = iz (e^{iz} + e^{−iz})/(e^{iz} − e^{−iz}) = 1 + Σ_{n=1}^{∞} (−4)^n B_{2n} z^{2n}/(2n)!.
Noting that the left hand side is equal to z cot(z) completes the proof, giving
z cot(z) = 1 + Σ_{n=1}^{∞} (−4)^n B_{2n} z^{2n}/(2n)!.   (4)
Proof. Recall that 2 cot(2z) = cot(z) + cot(z + π/2). If we continually iterate this formula we will find that
cot(z) = (1/2^n) Σ_{j=0}^{2^n − 1} cot( (z + jπ)/2^n ),
which can be proved by induction. Removing the j = 0 and j = 2^{n−1} terms and recalling that cot(z + π/2) = −tan(z) gives us
cot(z) = (1/2^n) cot(z/2^n) − (1/2^n) tan(z/2^n) + (1/2^n) [ Σ_{j=1}^{2^{n−1}−1} cot((z + jπ)/2^n) + Σ_{j=2^{n−1}+1}^{2^n−1} cot((z + jπ)/2^n) ].
All we have to do now is observe that, as cot(z + π) = cot(z), we can say that
Σ_{j=2^{n−1}+1}^{2^n−1} cot((z + jπ)/2^n) = Σ_{j=1}^{2^{n−1}−1} cot((z − jπ)/2^n),
so that
cot(z) = (1/2^n) cot(z/2^n) − (1/2^n) tan(z/2^n) + (1/2^n) Σ_{j=1}^{2^{n−1}−1} [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].   (5)
Proof. In order to obtain this, we first multiply both sides of equation (5) by z to get
z cot(z) = (z/2^n) cot(z/2^n) − (z/2^n) tan(z/2^n) + (1/2^n) Σ_{j=1}^{2^{n−1}−1} z [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].   (7)
Let us now take the limit of the right hand side as n tends to infinity. First recall that the Taylor series for x cot(x) and x tan(x) can be respectively expressed as
x cot(x) = 1 + O(x²)
and
x tan(x) = x² + O(x⁴).
Hence, if we substitute x = z/2^n into both of these we can see that
lim_{n→∞} (z/2^n) cot(z/2^n) = 1   (8)
and
lim_{n→∞} (z/2^n) tan(z/2^n) = 0.   (9)
Now we have dealt with the expressions outside the summation and so we need to consider the ones
inside. To make things slightly easier for the moment, let us consider both of the expressions at the same
time. Using Taylor series again, we can see that
(z/2^n) cot( (z ± jπ)/2^n ) = z/(z ± jπ) + O(4^{−n}).   (10)
Substituting equations (8), (9) and (10) into the right hand side of equation (7) gives
z cot(z) = 1 + lim_{n→∞} Σ_{j=1}^{2^{n−1}−1} [ z/(z + jπ) + z/(z − jπ) + O(4^{−n}) ].   (11)
The error terms cause no trouble in the limit, since
Σ_{j=1}^{2^{n−1}−1} |O(4^{−n})| ≤ C (2^{n−1} − 1) 4^{−n} → 0 as n → ∞,
and so
z cot(z) = 1 + Σ_{j=1}^{∞} [ z/(z + jπ) + z/(z − jπ) ].
Proof. Take the summand of equation (11) and multiply both the numerator and denominator by (jπ)^{−2} to obtain
z cot(z) = 1 − 2 Σ_{j=1}^{∞} (z/jπ)² / ( 1 − (z/jπ)² ).
But we can note that the summand can be expanded as an infinite geometric series. Hence we can write this as
z cot(z) = 1 − 2 Σ_{j=1}^{∞} Σ_{n=1}^{∞} (z/jπ)^{2n},
which equals
1 − 2 Σ_{n=1}^{∞} ζ(2n) (z/π)^{2n}
as long as the geometric series converges (i.e. |z| < π). Note that exchanging the summations in such a way is valid as both of the series are absolutely convergent.
Now, we can complete the proof of Theorem 4.8 by equating equation (4) with the expansion of z cot(z) in powers of z just obtained, to see that
1 + Σ_{n=1}^{∞} (−4)^n B_{2n} z^{2n}/(2n)! = 1 − 2 Σ_{n=1}^{∞} ζ(2n) (z/π)^{2n}.
If we then strip away the 1 terms and compare the summands we obtain the identity
(−4)^n B_{2n} z^{2n}/(2n)! = −2 ζ(2n) z^{2n}/π^{2n},
so that
ζ(2n) = (2π)^{2n} |B_{2n}| / ( 2 (2n)! ),
where we have used that the sign of B_{2n} is (−1)^{n+1}.
For example,
ζ(8) = Σ_{n=1}^{∞} 1/n⁸ = (2π)⁸ |B₈| / (2 · 8!) = π⁸/9450,
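The closed form can be checked against direct summation of the series (a numerical sketch for the case n = 1, i.e. ζ(2), with B_2 = 1/6):

```python
import math

# zeta(2) from the Bernoulli formula: (2 pi)^2 |B_2| / (2 * 2!) with B_2 = 1/6.
zeta2_formula = (2*math.pi)**2 * (1/6) / (2 * math.factorial(2))
zeta2_series = sum(1.0 / n**2 for n in range(1, 100000))
assert abs(zeta2_formula - math.pi**2 / 6) < 1e-12
assert abs(zeta2_series - zeta2_formula) < 1e-4
```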
etc. This is a beautiful formula and it is unfortunate that no similar formula has been discovered for
ζ(2n + 1).
However, that’s not to say that there aren’t interesting results regarding these values! There have been
recent results concerning the values of the zeta function for odd integers. For example, Apéry’s proof of
the irrationality of ζ(3) in 1979 or Matilde Lalı́n’s integral representations of ζ(3) and ζ(5) by the use of
Mahler Measures.
Mercifully, special values for ζ(−2n) and ζ(−2n + 1) have been found, the latter of which also involves Bernoulli numbers! To get to them, we will first have to take a whirlwind tour through some of the properties of the Gamma function.
4.3.5 The Gamma Function
The Gamma function is closely related to the zeta function and as such warrants some exploration of its more relevant properties. We will only be discussing a few of the qualities of this function, but the reader should note that it has many applications in statistics (Gamma and Beta distributions) and orthogonal functions (Bessel functions). It was first investigated by Euler when he was considering factorials of non-integer numbers, and it was through this study that many of its classical properties were established. This section will follow Chapter 8 in [?] with sprinklings from [?]. First, let us begin with a definition from Euler.
Definition. For ℜ(s) > 0, the Gamma function is defined as
Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt.
Now, this function initially looks rather daunting and irrelevant. We will see, however, that it does have many fascinating properties. Among the most basic are the following two identities ...
Proof. We can prove this by performing a basic integration by parts on the Gamma function. Note that
Γ(s + 1) = ∫_0^∞ t^s e^{−t} dt = [−t^s e^{−t}]_0^∞ + s ∫_0^∞ t^{s−1} e^{−t} dt = 0 + s Γ(s)
as required.
Remark. We can use the fact that Γ(s) = Γ(s + 1)/s to see that, as s tends to 0, Γ(s) → ∞. We
can also use this recursive relation to prove that the Gamma function has poles at all of the negative
integers. However, the more beautiful proof of this is to come in Section 6.
If we switch to polar co-ordinates using the change of variables x = r cos(θ), y = r sin(θ), noting that dy dx = r dθ dr, we have
I = ∫_0^{2π} ∫_0^∞ r exp(−r²) dr dθ = π ∫_0^∞ 2r exp(−r²) dr = π [−exp(−r²)]_0^∞ = π.
We can then separate the original integral into two separate integrals to obtain
I = ∫_{−∞}^{∞} exp(−x²) dx ∫_{−∞}^{∞} exp(−y²) dy = π.
Noting that the two integrals are identical and are also both even functions, we can see that integrating one of them from zero to infinity completes the proof as required.
Corollary 4.17 Consider the double factorial n!! = n(n − 2)(n − 4)⋯, which terminates at 1 or 2 depending on whether n is odd or even respectively. Then for n ∈ N,
Γ( (2n + 1)/2 ) = √π (2n − 1)!! / 2^n.
Noting that the leftmost and rightmost equalities are equal by definition completes the proof.
Remark. We can use the relationship Γ(s + 1) = sΓ(s) to see, for example, that
Γ(5/2) = 3√π/4,  Γ(7/2) = 15√π/8,
etc.
This chapter will use a slightly different definition of the Gamma function and will follow source [?]. First, let us consider the definition of the very important Euler constant γ.
Definition 4.19 The Euler constant is defined as
γ = lim_{h→∞} ( 1 + 1/2 + ⋯ + 1/h − log(h) ).
We will then use Gauss' definition for the Gamma function, which can be written as follows.
Definition 4.20 For h ∈ N,
Γ_h(s) = h^s / ( s(1 + s)(1 + s/2)⋯(1 + s/h) )
and
Γ(s) = lim_{h→∞} Γ_h(s).
This does not seem immediately obvious but the relationship is true and is proven for ℜ(s) > 0 in [?]. So now that we have these definitions we can work on a well known theorem.
Theorem 4.21 The Gamma function can be written as the following infinite product;
1/Γ(s) = s e^{γs} Π_{n=1}^{∞} (1 + s/n) e^{−s/n}.
Proof. Before we start with the derivation, let us note that the infinite product is convergent because
the exponential term forces it. Now that we have cleared that from our conscience, we will begin by using
Definition 4.20 and say that
Γ_h(s) = h^s / ( s(1 + s)(1 + s/2)⋯(1 + s/h) ).
Now we can also see that
h^s = exp( s log(h) ) = exp{ s( log(h) − 1 − 1/2 − ⋯ − 1/h ) } exp{ s( 1 + 1/2 + ⋯ + 1/h ) }.
We can then observe that
Γ_h(s) = (1/s) ( e^s/(1 + s) ) ( e^{s/2}/(1 + s/2) ) ⋯ ( e^{s/h}/(1 + s/h) ) exp{ s( log(h) − 1 − 1/2 − ⋯ − 1/h ) },
All we need to do now is to take the limit of this as h tends to infinity and use Definition 4.19 to prove
the theorem as required.
4.3.6 The Euler Reflection Formula
This theorem is very interesting as it allows us to prove two really quite beautiful identities, known as the Euler reflection formulae. But before we do this, we are going to need another way of dealing with the sine function. It should be noted that the method of approach that we are going to use is not completely rigorous. However, it can be proven rigorously using the Weierstrass Factorisation Theorem, a discussion of which can be found in [?].
Theorem 4.23 The Gamma function has the following reflective relation:
1/( Γ(s) Γ(1 − s) ) = sin(πs)/π.
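The reflection formula can be verified numerically at a few non-integer points (a quick sketch using the standard library's Gamma function; the sample points are arbitrary):

```python
import math

# 1/(Gamma(s) Gamma(1-s)) = sin(pi s)/pi at a few non-integer points.
for s in (0.3, 0.5, 1.7, -0.25):
    lhs = 1.0 / (math.gamma(s) * math.gamma(1 - s))
    rhs = math.sin(math.pi * s) / math.pi
    assert abs(lhs - rhs) < 1e-12
```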
Corollary 4.24 The Gamma function also has the reflectional formula
1/( Γ(s) Γ(−s) ) = −s sin(πs)/π.
Proof. This can easily be shown using a slight variation of the previous proof. However, an alternate
proof can be constructed by considering Theorem 4.23 and Corollary 4.14.
4.4 The Hurwitz Zeta Function
Definition 4.25 For 0 < a ≤ 1 and ℜ(s) > 1, we define ζ(s, a), the Hurwitz zeta function, as
ζ(s, a) = Σ_{n=0}^{∞} 1/(n + a)^s.
Remark. It is obvious that ζ(s, 1) = ζ(s) and from this we can see that if we can prove results for the
Hurwitz zeta function that are valid when a = 1, then we obtain results for the regular zeta function
automatically. Let us then begin with our first big result.
Theorem 4.26 For ℜ(s) > 1, the Hurwitz zeta function can be expressed as the infinite integral
ζ(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.   (15)
Proof 4.28 Simply substitute a = 1 into equation (15) to complete the proof.
Now we will prove a slight variation of this integral.
Proposition 4.29 The following identity holds for the zeta function:
(2^s − 1) ζ(s) = ζ(s, 1/2) = (2^s/Γ(s)) ∫_0^∞ x^{s−1} e^x / (e^{2x} − 1) dx.
Proof. First note that
(2^s − 1) ζ(s) = 2^s (1 − 2^{−s}) ζ(s) = 2^s + (2/3)^s + (2/5)^s + (2/7)^s + ⋯ = ζ(s, 1/2).
We have from (15) that
ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−x/2} / (1 − e^{−x}) dx.
We can then multiply the numerator and denominator by exp(x/2) and perform the substitution x = 2y to obtain the identity
ζ(s, 1/2) = (2/Γ(s)) ∫_0^∞ (2y)^{s−1} e^y / (e^{2y} − 1) dy = (2^s/Γ(s)) ∫_0^∞ y^{s−1} e^y / (e^{2y} − 1) dy
as required.
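The series identity (2^s − 1)ζ(s) = ζ(s, 1/2) can be confirmed by direct summation (a numerical sketch; s = 3 and the truncation bound are arbitrary choices):

```python
# (2^s - 1) zeta(s) = zeta(s, 1/2), checked by direct summation.
s = 3.0
N = 200000
zeta = sum(1.0 / n**s for n in range(1, N))
hurwitz_half = sum(1.0 / (n + 0.5)**s for n in range(N))   # zeta(s, 1/2)
assert abs(hurwitz_half - (2**s - 1) * zeta) < 1e-6
```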
4.4.1 Variants of the Hurwitz Zeta Function
There are many variants of the Hurwitz zeta function and we will only prove identities involving the more obvious ones.
Definition 4.32 We define the alternating Hurwitz zeta function as
ζ̄(s, a) = Σ_{n=0}^{∞} (−1)^n / (n + a)^s,
and the alternating zeta function as ζ̄(s) = ζ̄(s, 1).
Theorem 4.33 For ℜ(s) > 1, the alternating Hurwitz zeta function can also be written as
ζ̄(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 + e^{−x}) dx.
Corollary 4.34 The alternating zeta function can also be written as
ζ̄(s) = (1 − 2^{1−s}) ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x + 1) dx.
Proof 4.35 We have already done the hard work in Theorem 4.33 and as such, all that remains to be proven is that ζ̄(s) = (1 − 2^{1−s}) ζ(s), which can be easily shown by expanding the series.
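The relation between the alternating zeta function and ζ(s) can be checked by direct summation (a numerical sketch; s = 2.5 and the truncation bound are arbitrary choices):

```python
# The alternating zeta function equals (1 - 2^{1-s}) zeta(s).
s = 2.5
N = 100000
zeta = sum(1.0 / n**s for n in range(1, N))
alt = sum((-1)**n / (n + 1)**s for n in range(N))   # sum_{n>=0} (-1)^n/(n+1)^s
assert abs(alt - (1 - 2**(1 - s)) * zeta) < 1e-6
```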
4.4.2 Sums of the Hurwitz Zeta Function
Proposition 4.38 For a > 1,
Σ_{s=2}^{∞} ζ(s, a) = 1/(a − 1).
Noting that the first term inside the curved brackets is just the Taylor series for e^x − 1 then gives us
Σ_{s=2}^{∞} ζ(s, a) = ∫_0^∞ (e^x − 1)/( e^{ax}(1 − e^{−x}) ) dx − lim_{k→∞} ∫_0^∞ ( e^x − Σ_{s=1}^{k} x^{s−1}/(s − 1)! ) e^{−ax}/(1 − e^{−x}) dx.
Now, we can see that the expression inside the limit tends to zero as k tends to infinity, which then leaves us with the rather inspiring identity
Σ_{s=2}^{∞} ζ(s, a) = ∫_0^∞ (e^x − 1)/( e^{ax}(1 − e^{−x}) ) dx = ∫_0^∞ e^{(1−a)x} dx = 1/(a − 1).
17
Corollary 4.40 The sums of the regular zeta function diverge.
Proof 4.41 Simply substitute a = 1 into the previous result to complete the proof.
We can also prove a set of similar results using the same technique as in Proposition 4.38. The two easiest
examples are given in the propositions below.
Proof 4.45 This can be shown by subtracting equation (17) from (16).
The final identity that we will prove requires a little more work than the previous one, and we will first require a nice Lemma.
Proposition 4.48 If H′_n represents the n-th alternating harmonic number then, for an integer a > 2, it is true that
Σ_{s=2}^{∞} ζ̄(s, a) = (−1)^a ( ln(4) − 2H′_{a−2} ) − 1/(a − 1).
Proof 4.49 If we employ the same methods as used in the proof of Proposition 4.38 we can easily see that
Σ_{s=2}^{∞} ζ̄(s, a) = ∫_0^∞ e^{−ax} (e^x − 1)/(1 + e^{−x}) dx.
We can then make the substitution x = ln(y), dx = dy/y, to see that this transforms to
Σ_{s=2}^{∞} ζ̄(s, a) = ∫_1^∞ (y − 1)/( y^a (y + 1) ) dy.
We can then expand the integrand in partial fractions and separate the integrals to find that the above equation
= ∫_1^∞ [ 2(−1)^{a+1} ( 1/(y + 1) − 1/y ) − 1/y^a + 2(−1)^{a+1} Σ_{k=2}^{a−1} (−1)^k/y^k ] dy.
If we now integrate term by term (assuming that we can exchange the summation and integral) we see that
Σ_{s=2}^{∞} ζ̄(s, a) = [ 2(−1)^{a+1} ( ln(y + 1) − ln(y) ) + 1/( (a − 1) y^{a−1} ) + 2(−1)^{a+1} Σ_{k=2}^{a−1} (−1)^{k+1}/( (k − 1) y^{k−1} ) ]_1^∞
= (−1)^a ln(4) − 1/(a − 1) + 2(−1)^{a+1} H′_{a−2}
as required.
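The closed form of Proposition 4.48 can be checked numerically; summing the geometric series in s first turns the double sum into a single alternating sum (a sketch for the sample value a = 3):

```python
import math

# Sum over s >= 2 of the alternating Hurwitz zeta at a = 3; summing the
# geometric series in s first gives sum_{n>=0} (-1)^n / ((n+a)(n+a-1)).
a = 3
direct = sum((-1)**n / ((n + a) * (n + a - 1)) for n in range(10**6))
H_alt = sum((-1)**(k + 1) / k for k in range(1, a - 1))   # H'_{a-2}
closed = (-1)**a * (math.log(4) - 2 * H_alt) - 1 / (a - 1)
assert abs(direct - closed) < 1e-6
```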
Corollary 4.50 The sum of the alternating Hurwitz zeta functions is irrational.
Corollary 4.51 The sum of the regular alternating zeta functions diverges.
4.5 Epstein Zeta Function
4.5.1 Introduction
Let a, b and c be real numbers with a > 0 and D = 4ac − b² > 0, so that the quadratic form Q(m, n) = am² + bmn + cn² is positive definite.
The Epstein zeta function Z(s) is then defined by the double series
Z(s) = Σ_{(m,n)≠(0,0)} 1/Q(m, n)^s.
Recall 4.52
τ = ( b + √(b² − 4ac) ) / (2a).
Setting
x = b/(2a),  y = √D/(2a),  τ = x + iy = (b + i√D)/(2a)
we have
τ + τ̄ = b/a,  τ τ̄ = c/a,
so that
Q(m, n) = am² + bmn + cn² = a(m + nτ)(m + nτ̄) = a|m + nτ|²
and
Z(s) = Σ_{(m,n)≠(0,0)} 1/( a^s |m + nτ|^{2s} ),  σ > 1,
where σ = ℜ(s).
4.5.2 The Functional Equation
We wish to evaluate the sum over n ≠ 0 and therefore apply the Poisson summation formula
Σ_{m=−∞}^{∞} f(m) = Σ_{m=−∞}^{∞} ∫_{−∞}^{∞} f(u) cos(2mπu) du
to the function
f(t) = 1/|t + τ|^{2s}
to obtain
Σ_{m=−∞}^{∞} 1/|m + τ|^{2s} = Σ_{m=−∞}^{∞} ∫_{−∞}^{∞} cos(2mπu)/|u + τ|^{2s} du
= Σ_{m=−∞}^{∞} ∫_{−∞}^{∞} cos(2mπu)/{ (u + x)² + y² }^s du
= Σ_{m=−∞}^{∞} ∫_{−∞}^{∞} cos(2mπ(t − x))/(t² + y²)^s dt
= Σ_{m=−∞}^{∞} cos(2mπx) ∫_{−∞}^{∞} cos(2mπt)/(t² + y²)^s dt
since the integrals involving the sine function vanish, which yields
Σ_{m=−∞}^{∞} 1/|m + τ|^{2s} = (2/y^{2s−1}) ∫_0^∞ dt/(1 + t²)^s + (2/y^{2s−1}) Σ_{m=1}^{∞} cos(2mπx) ∫_{−∞}^{∞} cos(2mπyt)/(1 + t²)^s dt,  σ > 1.
Now, we wish to evaluate the two integrals by first making the substitution
u = t²/(1 + t²)
which gives
1/(1 + t²) = 1 − u,  du = 2t dt/(1 + t²)² = 2u^{1/2}(1 − u)^{3/2} dt,
and therefore
∫_0^∞ dt/(1 + t²)^s = (1/2) ∫_0^1 (1 − u)^{s−3/2} u^{−1/2} du = (1/2) B( s − 1/2, 1/2 ) = Γ(s − 1/2)√π/(2Γ(s)).
Thus
Σ_{m=−∞}^{∞} 1/|m + τ|^{2s} = √π Γ(s − 1/2)/( y^{2s−1} Γ(s) ) + ( 4√π/( y^{2s−1} Γ(s) ) ) Σ_{m=1}^{∞} (mπy)^{s−1/2} cos(2mπx) K_{s−1/2}(2mπy),  σ > 1,
and, replacing τ by nτ for n ≥ 1,
Σ_{m=−∞}^{∞} 1/|m + nτ|^{2s} = √π Γ(s − 1/2)/( (ny)^{2s−1} Γ(s) ) + ( 4√π/( (ny)^{2s−1} Γ(s) ) ) Σ_{m=1}^{∞} (mnπy)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).
Hence
Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} ( √π Γ(s − 1/2)/Γ(s) ) ζ(2s − 1) + ( 8 a^{−s} y^{1/2−s} π^s/Γ(s) ) Σ_{n=1}^{∞} Σ_{m=1}^{∞} n^{1−2s} (mn)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).
Collecting the terms with mn = k, we obtain
Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} ( √π Γ(s − 1/2)/Γ(s) ) ζ(2s − 1) + ( 8 a^{−s} y^{1/2−s} π^s/Γ(s) ) Σ_{k=1}^{∞} ( Σ_{n|k} n^{1−2s} ) k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy),
that is
Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} ( √π Γ(s − 1/2)/Γ(s) ) ζ(2s − 1) + ( 2 a^{−s} y^{1/2−s} π^s/Γ(s) ) H(s)
where
H(s) = 4 Σ_{k=1}^{∞} σ_{1−2s}(k) k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy),
with σ_ν(k) = Σ_{n|k} n^ν.
By the functional equation of the Riemann zeta function, we have
π^{1/2−s} Γ( s − 1/2 ) ζ(2s − 1) = π^{s−1} Γ(1 − s) ζ(2 − 2s)
and
(ay/π)^s Γ(s) Z(s) = 2(y/π)^s Γ(s) ζ(2s) + 2(y/π)^{1−s} Γ(1 − s) ζ(2 − 2s) + 2y^{1/2} H(s).
Recall 4.53
K_{−ν}(y) = K_ν(y)
and
k^{−ν/2} σ_ν(k) = k^{ν/2} σ_{−ν}(k),
leading to
H(s) = H(1 − s).
For
φ(s) = (ay/π)^s Γ(s) Z(s)
we have
φ(s) = φ(1 − s).
Since
ay = √D/2,
we have
( √D/(2π) )^s Γ(s) Z(s) = ( √D/(2π) )^{1−s} Γ(1 − s) Z(1 − s),
which is the functional equation of Z(s).
4.6 The Mellin Transform
4.6.1 The Mellin Transform and its Properties
The Mellin transform is extremely useful for certain applications, including solving Laplace's equation. The Mellin transform of a function f is defined as
M(f(t); p) = ∫_0^∞ f(t) t^{p−1} dt ≡ F(p).
4.6.2 Relation to Laplace Transform
Substituting t = e^{−x} into the definition gives
M(f(t); p) = ∫_{−∞}^{∞} f(e^{−x}) e^{−px} dx,
which is by definition the (two-sided) Laplace transform of f(e^{−x}), that is L{f(e^{−x})}.
4.6.4 Scaling Property for a > 0
We have
M(f(at); p) = ∫_0^∞ f(at) t^{p−1} dt.
Substituting x = at, where dx = a dt, we obtain
M(f(at); p) = a^{−p} ∫_0^∞ f(x) x^{p−1} dx = a^{−p} F(p).
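The scaling property can be checked numerically for f(t) = e^{−t}, whose Mellin transform is F(p) = Γ(p), so that M(e^{−at}; p) = a^{−p} Γ(p) (a crude midpoint-rule sketch; the grid parameters are arbitrary choices):

```python
import math

# M(e^{-at}; p) = a^{-p} Gamma(p): check the scaling property numerically.
def mellin_exp(a, p, n=200000, t_max=60.0):
    """Midpoint-rule approximation of the Mellin transform of exp(-a t)."""
    h = t_max / n
    return sum(math.exp(-a * (k + 0.5) * h) * ((k + 0.5) * h)**(p - 1) * h
               for k in range(n))

a, p = 2.0, 3.0
assert abs(mellin_exp(a, p) - a**-p * math.gamma(p)) < 1e-4
```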
4.6.5 Multiplication by t^a
Directly from the definition,
M(t^a f(t); p) = ∫_0^∞ f(t) t^{p+a−1} dt = F(p + a).
4.6.6 Derivative
We have
M(f′(t); p) = ∫_0^∞ f′(t) t^{p−1} dt = [ t^{p−1} f(t) ]_0^∞ − (p − 1) ∫_0^∞ f(t) t^{p−2} dt,
which gives
M(f′(t); p) = −(p − 1) M(f(t); p − 1)
provided t^{p−1} f(t) → 0 as t → 0 and as t → ∞. For the n-th derivative this produces
M(f^{(n)}(t); p) = (−1)^n (p − 1)(p − 2)(p − 3)⋯(p − n) F(p − n)
provided that the extension of these conditions as t → 0 and as t → ∞ holds up to the (n − 1)-th derivative.
Knowing that
(p − 1)(p − 2)(p − 3)⋯(p − n) = (p − 1)!/(p − n − 1)! = Γ(p)/Γ(p − n),
the expression for the n-th derivative can be expressed as
M(f^{(n)}(t); p) = (−1)^n ( Γ(p)/Γ(p − n) ) F(p − n).
4.6.7 Another Property of the Derivative
M(t^n f^{(n)}(t); p) = (−1)^n ( (p + n − 1)!/(p − 1)! ) F(p).
4.6.8 Integral
By making use of the derivative property of the Mellin transform we can easily derive this property. We begin by writing f(t) = ∫_0^t h(u) du, so that f′(t) = h(t). As a result, we obtain
M(h(t); p) = −(p − 1) M( ∫_0^t h(u) du; p − 1 ).
Rearranging gives
−(1/(p − 1)) M(h(t); p) = M( ∫_0^t h(u) du; p − 1 ).
Substituting p + 1 for p, we arrive at the desired identity
−(1/p) M(h(t); p + 1) = M( ∫_0^t h(u) du; p ).
4.6.9 Example 1
4.6.10 Example 2