Advanced Integration Techniques
Advanced approaches for solving complex integrals using special functions, transformations and methods from complex analysis
Third Version
ZAID ALYAFEAI
YEMEN
[email protected]
Contents
1.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.5.1 Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.6.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.11 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.13 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.14 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4 Beta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.1 Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.9 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.10 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Digamma function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.9 Gauss Digamma theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.11 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.12 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.13 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.14 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.15 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6 Zeta function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.4 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8 Polylogarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
8.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
8.5 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.6 Dilogarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.6.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.6.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.6.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.6.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
9.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
9.3 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
9.5 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10 Error Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
10.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
10.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
10.9 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
10.10 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
11.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
11.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
11.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
11.7 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
12.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
12.5 Identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13 Euler sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
13.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
20.5 Loggamma integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
22.6.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
In the name of Allah, the Most Gracious, the Most Merciful.
Acknowledgement
I want to offer my sincerest gratitude to all those who supported me during my journey to finish this book, especially my parents, sisters and friends who supported the idea of this book. I also want to thank my math teachers at King Fahd University, because I wouldn't have been able to learn the advanced material without knowledge of the elementary. I also want to extend my thanks to all my friends on the different math forums like MMF, MHB and Stack Exchange, without whom I wouldn't have learned anything.
Reviewers
A special thanks to Mohammad Nather Shaaban for reviewing some parts of the book.
What is new?
The new version is all about contour integration using concepts from complex analysis. One might shy away from such approaches because of the heavy theory behind them, but I tried to give a brief introduction. I have a plan to add many other sections. Basically I'll try to focus on transformations like the Mellin and Fourier transforms, and also on other functions like the Jacobi theta function and q-series.
Introduction
This book is a summary of around five years of working on advanced integration. It collects many examples that I gathered during that period. The approaches taken to solve the integrals aren't necessarily the only or the best methods, but they are offered for the sake of explaining the topic. Most of the content of this book I already wrote on mathhelpboards.com during the past three years, but I thought that publishing it as a PDF would make it easier to read and distribute. The motivation behind this book is to allow those who are interested in solving complicated integrals to use the different methods efficiently. When I started learning about these techniques I struggled to find enough information about all the required approaches, so I tried to collect everything in just one book. You are free to distribute this book and to use any of the methods to solve the integrals. The methods used are not necessarily new or ground-breaking, but as I said, they introduce the concepts as simply as possible.
To follow this book you need to know the basic integration techniques: integration by parts, by substitution and by partial fractions. I don't assume that the reader knows any other material from advanced courses. Details that require deep knowledge of analysis or other advanced topics are left out or only touched upon lightly, to give the reader some hints without overwhelming them.
After reading this book you should be able to solve many advanced integrals that you might face in engineering courses. I hope you enjoy reading this book, and if you have any suggestions or comments you can reach me on mathhelpboards.com, where I can reply to you directly using LaTeX.
1 Differentiation under the integral sign
This is one of the most commonly used techniques for solving a wide range of integrals. Suppose we have a function defined by the integral

F(y) = ∫_a^b f(x, y) dx

Then we can differentiate with respect to y, provided that f is continuous and has a continuous partial derivative f_y:

F′(y) = ∫_a^b f_y(x, y) dx

Using this in a given problem is not always obvious; you have to think carefully, because many integrals come in one variable only, so you need to introduce the second variable yourself and decide where to place it.
1.1 Example
∫_0^1 (x² − 1)/log(x) dx

That seems very difficult to solve, but using this technique we can solve it easily. The crux move is to decide where to put the second variable! The problem with this integral is the logarithm in the denominator, which makes it so difficult to tackle. Remember that differentiating a power produces a logarithm, ∂/∂a x^a = x^a log(x), so consider

F(a) = ∫_0^1 (x^a − 1)/log(x) dx
Now we take the partial derivative with respect to a
F′(a) = ∫_0^1 ∂/∂a [(x^a − 1)/log(x)] dx = ∫_0^1 x^a dx = 1/(a + 1)
Integrate with respect to a
F (a) = log (a + 1) + C
To find the value of the constant put a = 0
F (0) = log(1) + C Ô⇒ C = 0
∫_0^1 (x^a − 1)/log(x) dx = log(a + 1)
By this powerful method we not only solved the integral, we also found a general formula for a whole family of integrals. Putting a = 2 gives

∫_0^1 (x² − 1)/log(x) dx = log(2 + 1) = log(3)
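If you want to check the general formula F(a) = log(a + 1) numerically, here is a short sketch (not part of the original derivation; it assumes the mpmath Python library is available):

```python
from mpmath import mp, quad, log

mp.dps = 30  # working precision

def F(a):
    # integrand (x^a - 1)/log(x) on (0, 1); the endpoint singularities are removable
    return quad(lambda x: (x**a - 1) / log(x), [0, 1])

print(F(2))       # ~ 1.0986...
print(log(3))     # log(3) = 1.0986...
```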
1.2 Example
∫_0^{π/2} x/tan(x) dx

So where do we put the variable a here? That doesn't seem to be straightforward, so how do we proceed? Consider

F(a) = ∫_0^{π/2} arctan(a tan(x))/tan(x) dx
Now differentiate with respect to a
F′(a) = ∫_0^{π/2} 1/(1 + (a tan(x))²) dx
It can be proved that
∫_0^{π/2} 1/(1 + (a tan(x))²) dx = π/(2(1 + a))
Now Integrate both sides
F(a) = (π/2) log(1 + a) + C
Substitute a = 0 to find C = 0
∫_0^{π/2} arctan(a tan(x))/tan(x) dx = (π/2) log(1 + a)
Put a = 1 in order to get our original integral
∫_0^{π/2} x/tan(x) dx = (π/2) log(2)
1.3 Example
∫_0^∞ sin(x)/x dx

This problem can be solved in many ways, but here we will try to solve it by differentiation. As I showed in the previous examples, it is generally not easy to find the right function to differentiate. This step might require some trial and error until we get the desired result, so don't give up if the first attempt fails. A natural first try is

F(a) = ∫_0^∞ sin(ax)/x dx
If we differentiated with respect to a we would get the following
F′(a) = ∫_0^∞ cos(ax) dx

But unfortunately this integral doesn't converge, so this is not the correct choice. The previous attempt suggests damping the integrand instead:

F(a) = ∫_0^∞ sin(x) e^{−ax}/x dx
Take the derivative
F′(a) = −∫_0^∞ sin(x) e^{−ax} dx = −1/(a² + 1)
Integrate both sides
F (a) = − arctan(a) + C
To find the value of the constant take the limit as a grows large
C = lim_{a→∞} [F(a) + arctan(a)] = π/2
So we get our F (a) as the following
F(a) = −arctan(a) + π/2
For a = 0 we have
∫_0^∞ sin(x)/x dx = π/2
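Here is a quick numerical sketch of the damped integral F(a) = π/2 − arctan(a) and of the a → 0 limit (an illustrative check, not part of the original text; the mpmath library is assumed):

```python
from mpmath import mp, quad, quadosc, sin, exp, atan, pi

mp.dps = 25

def F(a):
    # F(a) = integral of sin(x) e^(-a x) / x over (0, inf)
    return quad(lambda x: sin(x) * exp(-a * x) / x, [0, mp.inf])

print(F(1), pi/2 - atan(1))   # both ~ 0.785398
# the undamped a = 0 case is the Dirichlet integral itself
print(quadosc(lambda x: sin(x) / x, [0, mp.inf], period=2*pi))  # ~ pi/2
```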
2 Laplace Transform
Laplace transform is a powerful integral transform. It can be used in many applications. For example, it
can be used to solve differential equations, and its rules can be used to solve integration problems. It is defined by

F(s) = L(f(t)) = ∫_0^∞ e^{−st} f(t) dt
2.1.1 Example
1. f(t) = 1

F(s) = ∫_0^∞ e^{−st} dt = 1/s

2. f(t) = t^n

F(s) = ∫_0^∞ e^{−st} t^n dt = n!/s^{n+1}

3. f(t) = cos(at)

F(s) = ∫_0^∞ e^{−st} cos(at) dt = s/(s² + a²)
2.2 Example
∫_0^∞ e^{−2t} t³ dt

Recall that

∫_0^∞ e^{−st} t^n dt = n!/s^{n+1}

Here we have s = 2 and n = 3, so

∫_0^∞ e^{−2t} t³ dt = 3!/2^{3+1} = 3/8
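The same computation can be reproduced symbolically; a small sketch with SymPy (assumed available, not referenced in the original):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L{t^3} = 3!/s^4
print(sp.laplace_transform(t**3, t, s))                   # (6/s**4, 0, True)
# the integral of this example, with s = 2 and n = 3
print(sp.integrate(sp.exp(-2*t) * t**3, (t, 0, sp.oo)))   # 3/8
```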
2.3 Convolution
The convolution of two functions f and g is defined by

(f ∗ g)(t) = ∫_0^t f(s) g(t − s) ds

2.4 Inverse Laplace Transform

Basically, you are given F(s) and you want to get back f(t); this is denoted by f(t) = L⁻¹(F(s)).
2.4.1 Example
1. F(s) = 1/s³

We know that L(t²) = 2!/s³, hence (1/2) L(t²) = 1/s³.

Now take the inverse of both sides

t²/2 = L⁻¹(1/s³)

2. F(s) = s/(s² + 4)

cos(2t) = L⁻¹(s/(s² + 4))
Exercises

Find the Laplace transform of sin(at).
2.5.1 Example
β(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt = Γ(x)Γ(y)/Γ(x + y)

Here β is the Beta function and Γ is the Gamma function. We will take enough time and examples later to explain both functions.
proof
Take f(t) = t^x and g(t) = t^y. Hence we get

(t^x ∗ t^y)(t) = ∫_0^t s^x (t − s)^y ds
Taking the Laplace transform of both sides and using the convolution theorem,

L(t^x ∗ t^y) = L(t^x) L(t^y) = (x! · y!)/s^{x+y+2}

Notice that we need to apply the inverse Laplace transform L⁻¹:

L⁻¹(L(t^x ∗ t^y)) = L⁻¹((x! · y!)/s^{x+y+2}) = (x! · y!)/(x + y + 1)! · t^{x+y+1}

So we have the following

(t^x ∗ t^y)(t) = (x! · y!)/(x + y + 1)! · t^{x+y+1}

By definition we have

(x! · y!)/(x + y + 1)! · t^{x+y+1} = ∫_0^t s^x (t − s)^y ds

Putting t = 1 gives

(x! · y!)/(x + y + 1)! = ∫_0^1 s^x (1 − s)^y ds

Writing the factorials as gamma functions,

∫_0^1 s^x (1 − s)^y ds = Γ(x + 1)Γ(y + 1)/Γ(x + y + 2)

which can be written as

∫_0^1 s^{x−1} (1 − s)^{y−1} ds = Γ(x)Γ(y)/Γ(x + y)
2.5.2 Example
∫_0^∞ f(t)/t dt = ∫_0^∞ L(f(t)) ds
proof
∫_0^∞ L(f(t)) ds = ∫_0^∞ (∫_0^∞ e^{−st} f(t) dt) ds

Swapping the order of integration,

∫_0^∞ f(t) (∫_0^∞ e^{−st} ds) dt

The inner integral is

∫_0^∞ e^{−st} ds = 1/t

Now substitute this value in the integral to get

∫_0^∞ f(t)/t dt
2.5.3 Example
∫_0^∞ sin(t)/t dt

This is not the first time we see this integral, and it won't be the last. We have already found it using differentiation under the integral sign; here we use the previous result:

∫_0^∞ sin(t)/t dt = ∫_0^∞ L(sin(t)) ds

Since

L(sin(t)) = 1/(s² + 1)

we substitute in our integral

∫_0^∞ ds/(1 + s²) = arctan(s)∣_{s=∞} − arctan(s)∣_{s=0} = π/2
3 Gamma Function
The gamma function is used to solve many interesting integrals; here we try to define some basic properties.
3.1 Definition
Γ(x + 1) = ∫_0^∞ e^{−t} t^x dt
At first glance this looks just like the Laplace transform; indeed, the two are closely related. For a positive integer n,

Γ(n + 1) = ∫_0^∞ e^{−t} t^n dt

and by the Laplace transform of t^n evaluated at s = 1,

∫_0^∞ e^{−t} t^n dt = n!/s^{n+1} ∣_{s=1} = n!

So we see that there is a relation between the gamma function and the factorial. We will assume for the moment that

n! = Γ(n + 1)

This definition is somewhat limited, but it will soon be replaced by a stronger one.
3.2 Example
∫_0^∞ e^{−t} t^4 dt = Γ(4 + 1) = 4! = 24
3.3 Example
1. ∫_0^∞ e^{−t²} t dt

We need a substitution before we go ahead, so let us start by putting x = t²; the integral becomes

(1/2) ∫_0^∞ e^{−x} dx = (1/2) Γ(1) = 1/2

2. ∫_0^1 log(t) t² dt

Using the substitution t = e^{−x/2},

∫_0^1 log(t) t² dt = −(1/4) ∫_0^∞ e^{−3x/2} x dx = −(1/9) ∫_0^∞ e^{−x} x dx = −Γ(2)/9 = −1/9
It is important to get used to the symbol Γ. I am sure you are thinking that this seems elementary, but my main aim here is to let you practice the new symbol and get used to solving some basic integrals with it.
3.4 Exercises
Prove that

Γ(5) · Γ(2)/Γ(7) = 1/30

Find the following integral

∫_0^∞ e^{−t/60} t^{20} dt
3.5 Extension
For simplicity we assumed that the gamma function only works for positive integers. This definition
was so helpful as we assumed the relation between gamma and factorial. Actually, this restricts the
gamma function, we want to exploit the real strength of this function. Hence, we must extend the gamma
function to work for all real numbers except for some values. Actually we will see soon that we can
extend it to work for all complex numbers except where the function has poles.
3.5.1 Theorem
Using the integral representation we can extend the gamma function to x > −1.
proof
The integral ∫_0^∞ e^{−t} t^x dt converges at infinity thanks to the factor e^{−t}, while near the origin

∣∫_0 e^{−t} t^x dt∣ ∼ ∫_0 t^x dt < ∞ for x > −1

so the definition makes sense for all x > −1.
Γ(x + 1) = xΓ(x)
This can be proved through integration by parts for x > 0. This representation then allows us to extend the gamma function to all real numbers except the non-positive integers; in terms of complex analysis, Γ extends to all complex numbers except the poles at 0, −1, −2, … One can also derive the limit representation

Γ(z) = lim_{n→∞} (n^z n!)/(z(z + 1)⋯(z + n)) = lim_{n→∞} (n^z/z) ∏_{k=1}^n k/(k + z)
proof
Note that
n
Γ(z + n + 1) = Γ(z + 1) ∏ (k + z)
k=1
23
Which indicates that
n
Γ(z + n + 1)
∏ (k + z) =
k=1 zΓ(z)
Also note that
n
∏ k = n!
k=1
Hence we have
nz n k nz × n!
lim ∏ = Γ(z) lim
k=1 k + z
n→∞ z n→∞ Γ(z + n + 1)
nz × n!
lim =1
n→∞ Γ(z + n + 1)
√
Γ(z + n + 1) ∼ 2π(n + z)n+z+1/2 e−(n+z)
and
√
n! ∼ 2πnn+1/2 e−n
Hence we have by
√
nz × ( 2πnn+1/2 e−n ) nn
lim √ = lim =1
n→∞ 2π(n + z)n+z+1/2 e−(n+z) n→∞ (n + z)n e−z
Note that
z n
lim (1 + ) = ez
n→∞ n
To prove the other product formula note that
z
n
1 z ∏nk=1 (1 + k)
∏ (1 + ) = n = (n + 1)z ∼ nz
k=1 k ∏k=1 k z
Hence we deduce
z z
∏k=1 (1 + k1 ) n 1 1 ∞ (1 + k )
n 1
nz n k
lim ∏ = lim ∏ = ∏
k=1 k + z k=1 1 + k z k=1 1 + kz
n→∞ z z
n→∞ z
24
3.6.2 Example
Prove that
∞
Γ(x)Γ(y) z z
= ∏ [(1 + ) (1 − )]
Γ(x + z)Γ(y − z) k=0 x+k y+k
proof
Start by
nz n k
Γ(z) = lim ∏
k=1 k + z
n→∞ z
We have
x y
Γ(x)Γ(y) ( nx ∏nk=1 k
k+x
) ( ny n
∏k=1 k
k+y
)
= lim
Γ(x + z)Γ(y − z) n→∞ ( nx+z ∏n
k=1 k+x+z ) ( y−z
ny−z
x+z
k
∏k=1
n k
k+y−z
)
By simplifications we have
∞
Γ(x)Γ(y) (k + x + z)(k + y − z) ∞ z z
=∏ = ∏ [(1 + ) (1 − )]
Γ(x + z)Γ(y − z) k=0 (k + x)(k + y) k=0 x+k y+k
Γ(z) = (e^{−γz}/z) ∏_{n=1}^∞ (1 + z/n)^{−1} e^{z/n}
where γ is the Euler constant
proof
n n
z
log zΓ(z) = lim z ∑ (log (1 + k) − log(k)) − ∑ log (1 + )
n→∞
k=1 k=1 k
Note the alternating sum
n
∑ (log (1 + k) − log(k)) = log(n + 1)
k=1
Hence we have
25
n
z
log zΓ(z) = lim z log(n + 1) − ∑ log (1 + )
n→∞
k=1 k
Now we can use the harmonic numbers
n
1
Hn = ∑
k=1 n
Add and subtract zHn+1
n
z −1 z z
log zΓ(z) = lim z log(n + 1) − zHn+1 + ∑ [log (1 + ) + ] +
n→∞
k=1 k k n +1
The last term goes to zero and by definition we have the Euler constant
γ = lim Hn − log(n)
n→∞
∞
z −1 z
log zΓ(z) = −zγ + ∑ log (1 + ) +
k=1 k k
By taking the exponent of both sides
∞
z −1 z
zΓ(z) = e−γz ∏ (1 + ) e k
k=1 k
∞
Γ(k) (1) k
Γ(z + 1) = ∑ z
k=0 k!
For the first term
f (0) = Γ(1 + 0) = 1
f ′ (0)
= Γ′ (1)
1!
26
To find the derivative, note that by the Weierstrass representation
∞
z −1 z
log Γ(z) = −γz − log(z) + ∑ log (1 + ) +
n=1 n n
By taking the derivaive we have
Γ′ (z) 1 ∞
1
= −γ − + z ∑
Γ(z) z k=1 k(z + k)
Hence we have
∞
1
Γ′ (1) = −γ − 1 + ∑ = −γ − 1 + 1 = −γ
k=1 k(1 + k)
∞
1
Γ′′ (1) = (Γ′ (z))2 + 1 + ∑ = γ 2 + ζ(2)
k=1 (1 + k) 2
1 2
Γ(z + 1) = 1 − γz + (γ + ζ(2))z 2 + O(z 3 )
2!
Dividing by z we get our result.
3.8 Example
∫_0^∞ e^{−t}/√t dt

According to our definition this is equal to Γ(1/2), but this value can also be expressed in closed form as follows. Let us first make the substitution √t = x, so the integral becomes

2 ∫_0^∞ e^{−x²} dx
Now to find this integral we need to do a simple trick, start by the following
(∫_0^∞ e^{−x²} dx)² = (∫_0^∞ e^{−x²} dx) · (∫_0^∞ e^{−x²} dx) = (∫_0^∞ e^{−x²} dx) · (∫_0^∞ e^{−y²} dy)
Now since they are two independent variables we can do the following
∫_0^∞ ∫_0^∞ e^{−(x²+y²)} dy dx

Converting to polar coordinates (x = r cos θ, y = r sin θ, dy dx = r dr dθ),

∫_0^{π/2} ∫_0^∞ e^{−r²} r dr dθ

The inner integral equals 1/2, so this is

∫_0^{π/2} (1/2) dθ = π/4
So we have
(∫_0^∞ e^{−x²} dx)² = π/4
Take the square root of both sides

∫_0^∞ e^{−x²} dx = √π/2

so that

2 ∫_0^∞ e^{−x²} dx = √π

and hence

Γ(1/2) = √π
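A one-line numerical confirmation of Γ(1/2) = √π (an illustrative check, using only the Python standard library):

```python
import math

# gamma(0.5) and sqrt(pi) agree to machine precision
print(math.gamma(0.5), math.sqrt(math.pi))   # both ~ 1.7724538509055159
```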
We can use the reduction formula and the value of Γ(1/2) to deduce other values. Assume that we want
to find
Γ(3/2)

If we use this property we get

Γ(3/2) = Γ(1 + 1/2) = (1/2) · Γ(1/2) = √π/2
The result will not always reduce to a simpler form as in the previous example. For instance, we don't know how to express Γ(1/4) in a simpler form, but we can approximate its value

Γ(1/4) ≈ 3.6256 ⋯
Hence we sometimes just express an integral in terms of the gamma function, since we don't know a simpler form; for example

∫_0^∞ e^{−t} t^{1/4} dt = Γ(5/4) = Γ(1/4)/4
We have seen that Γ(1/2) = √π, but what about Γ(−1/2)? Using the recursion Γ(x + 1) = x Γ(x),

Γ(1 − 1/2) = (−1/2) Γ(−1/2)

so we have that

Γ(−1/2) = −2√π
Then we can prove that any fraction whose denominator equals 2 and whose numerator is odd can be reduced to

Γ((2n + 1)/2) = C Γ(1/2) , C ∈ Q , n ∈ Z
3.10 Legendre Duplication Formula
Γ(1/2 + n) = ((2n)!/(4^n n!)) √π
proof
For the proof we use induction by assuming n ≥ 0. If n = 0 we have our basic identity
Γ(1/2) = √π

Now, by the recursion,

Γ(1/2 + n + 1) = ((1 + 2n)/2) Γ(1/2 + n)

By the inductive step we have

((1 + 2n)/2) Γ(1/2 + n) = ((1 + 2n)/2) · ((2n)!/(4^n n!)) √π

Multiplying and dividing by 2n + 2 gives ((2n + 2)!/(4^{n+1}(n + 1)!)) √π, which completes the induction.
3.11 Example
∫_0^∞ e^{−t} cosh(a√t)/√t dt
Recall the series expansion of the hyperbolic cosine

cosh(x) = ∑_{n=0}^∞ x^{2n}/(2n)!

Let x = a√t

cosh(a√t) = ∑_{n=0}^∞ a^{2n} t^n/(2n)!
Hence the integral becomes

∫_0^∞ e^{−t} ∑_{n=0}^∞ a^{2n} t^n/((2n)! √t) dt
Now since the series is always positive we can swap the integral and the series
∑_{n=0}^∞ (a^{2n}/(2n)!) [∫_0^∞ e^{−t} t^{n−1/2} dt] = ∑_{n=0}^∞ a^{2n} Γ(1/2 + n)/(2n)!

Using LDF (Legendre Duplication Formula) we get

∑_{n=0}^∞ (a^{2n}/(2n)!) ((2n)!/(4^n n!)) √π
By further simplification

√π ∑_{n=0}^∞ a^{2n}/(4^n n!)

Recall that

∑_{n=0}^∞ z^n/n! = e^z

Putting z = a²/4 and multiplying by √π we get

√π ∑_{n=0}^∞ (a²/4)^n/n! = √π e^{a²/4}
So we have finally that
∫_0^∞ e^{−t} cosh(a√t)/√t dt = √π e^{a²/4}
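A numerical check of the closed form √π e^{a²/4} for a sample value of a (an illustrative sketch, not part of the original text; mpmath assumed):

```python
from mpmath import mp, quad, exp, cosh, sqrt, pi

mp.dps = 25
a = mp.mpf('1.3')   # arbitrary test value

lhs = quad(lambda t: exp(-t) * cosh(a * sqrt(t)) / sqrt(t), [0, mp.inf])
rhs = sqrt(pi) * exp(a**2 / 4)
print(lhs, rhs)     # the two values agree
```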
Next we establish Euler's reflection formula. Using the sine product formula,

π/sin(πz) = (1/z) ∏_{n=1}^∞ (1 − z²/n²)^{−1}
Now we start by noting that
Γ(z)Γ(1 − z) = −zΓ(z)Γ(−z)
Expanding Γ(z)Γ(−z) with the product representation one finds

−zΓ(z)Γ(−z) = (1/z) ∏_{n=1}^∞ (1 − z²/n²)^{−1} = π/sin(πz)

and hence Γ(z)Γ(1 − z) = π/sin(πz).
3.13 Example
1.
3 1
Γ( ) Γ( )
4 4
1 1
Γ (1 − ) Γ ( )
4 4
1 1 π √
Γ (1 − ) Γ ( ) = = 2π
4 4 sin ( 4 )
π
2.
1+i 1−i
Γ( ) Γ( )
2 2
1+i 1+i
Γ( ) Γ(1 − )
2 2
π π
=
sin ( π(1+i) ) cos ( iπ
2
)
2
32
By geometry to hyperbolic conversions we get
π π
= π sech ( )
cosh ( 2 )
π 2
3.14 Example
∫_a^{a+1} log Γ(x) dx

Define

f(a) = ∫_a^{a+1} log Γ(x) dx

Differentiating and using Γ(a + 1) = a Γ(a) gives f′(a) = log Γ(a + 1) − log Γ(a) = log(a), hence

f(a) = a log(a) − a + C

Let a → 0. We have

C = ∫_0^1 log Γ(x) dx

By the reflection formula Γ(x)Γ(1 − x) = π/sin(πx),

∫_0^1 log Γ(x) dx = ∫_0^1 log(π) dx − ∫_0^1 log sin(πx) dx − ∫_0^1 log Γ(1 − x) dx

so that

2 ∫_0^1 log Γ(x) dx = ∫_0^1 log(π) dx − ∫_0^1 log sin(πx) dx = log(2π) − ∫_0^1 log ∣2 sin(πx)∣ dx

The last integral vanishes:

∫_0^1 log ∣2 sin(πx)∣ dx = (1/(2π)) ∫_0^{2π} log ∣2 sin(x/2)∣ dx = −(1/(2π)) Cl₂(2π) = 0

Hence we finally get

∫_0^1 log Γ(x) dx = (1/2) log(2π)

and therefore ∫_a^{a+1} log Γ(x) dx = a log(a) − a + (1/2) log(2π).
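The value ∫_0^1 log Γ(x) dx = (1/2) log(2π) is easy to confirm numerically; a small sketch assuming mpmath (not part of the original text):

```python
from mpmath import mp, quad, loggamma, log, pi

mp.dps = 25
print(quad(loggamma, [0, 1]))   # ~ 0.9189385
print(log(2*pi) / 2)            # (1/2) log(2*pi) = 0.9189385...
```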
4 Beta Function
4.1 Representations
β(x, y) = Γ(x)Γ(y)/Γ(x + y)
We have proved this identity earlier when we discussed convolution.
β(x, y) = β(y, x)
Beta function has many other representations all can be deduced through substitutions
4.2 Example
∞ 1 π
∫ dz =
0 z2 + 1 2
proof
√
Put z = t
1 ∞ t2
−1
∫ dt
2 0 t+1
35
We can use the second integral representation by finding the values of x and y
−1 1
x−1= ⇒ x=
2 2
1
x+y =1 ⇒ y =
2
Hence we have
√ √
1 ∞ t2 B( 21 , 12 ) Γ( 12 ) Γ( 12 ) π⋅ π π
−1
∫ dt = = = =
2 0 (t + 1) 2 2 2 2
4.3 Example
∞ 1
∫ dz
0 (z 2 + 1)2
Using the same substitution as the previous example we get
1 ∞ t2
−1
∫ dt
2 0 (t + 1)2
Then we can find the values of x and y
−1 1
x−1= ⇒ x=
2 2
3
x+y =2 ⇒ y =
2
Then
1 ∞ t2 B( 12 , 32 ) Γ( 21 ) Γ( 32 ) Γ( 21 ) Γ( 12 ) π
−1
∫ dt = = = =
2 0 (t + 1)2 2 2 4 4
4.4 Example
∫_0^∞ 1/(x² + 1)^n dx , ∀ n > 1/2

Using the same substitution (x² = t) again

(1/2) ∫_0^∞ t^{−1/2}/(1 + t)^n dt

Then we can find the values of x and y

x − 1 = −1/2 ⟹ x = 1/2 , x + y = n ⟹ y = n − 1/2

Then

(1/2) ∫_0^∞ t^{−1/2}/(1 + t)^n dt = Γ(1/2) · Γ(n − 1/2)/(2Γ(n))

Now by LDF

Γ(n − 1/2) = ((2n − 2)! √π)/(4^{n−1} (n − 1)!) = (Γ(2n − 1) √π)/(4^{n−1} Γ(n))

Substituting in our integral we have the following

(1/2) ∫_0^∞ t^{−1/2}/(1 + t)^n dt = (2π · Γ(2n − 1))/(4^n · Γ²(n))

∫_0^∞ 1/(x² + 1)^n dx = (π · Γ(2n − 1))/(2^{2n−1} · Γ²(n))
It is easy to see that for n ∈ Z+ we get a π multiplied by some rational number.
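The closed form can be checked numerically for a few values of n, including non-integer ones (an illustrative sketch, not part of the original; mpmath assumed):

```python
from mpmath import mp, quad, gamma, pi

mp.dps = 25

def closed_form(n):
    # pi * Gamma(2n-1) / (2^(2n-1) * Gamma(n)^2)
    return pi * gamma(2*n - 1) / (2**(2*n - 1) * gamma(n)**2)

for n in (1, 2, 3, mp.mpf('2.5')):
    numeric = quad(lambda x: 1 / (x**2 + 1)**n, [0, mp.inf])
    print(n, numeric, closed_form(n))
```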
4.5 Example
1 zn (2n)!!
∫ √ dz = 2 ⋅
0 1−z (2n + 1)!!
Where the double factorial !! is defined as the following
⎧
⎪
⎪
⎪
⎪n ⋅ (n − 2) ⋯ 5 ⋅ 3 ⋅ 1 ; if n is odd
⎪
⎪
⎪
⎪
⎪
⎪
n!! = ⎨n ⋅ (n − 2) ⋯ 6 ⋅ 4 ⋅ 2 ; if n is even
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪1 ; if n = 0
⎩
The integral in hand can be rewritten as
1
z n ⋅ (1 − z)− 2 dz
1
∫
0
37
x−1=n ⇒ x=n+1
−1 1
y−1= ⇒ y=
2 2
This can be written as
1 1
z n ⋅ (1 − z)− 2 dz = B(n + 1, )
1
∫
0 2
By some simplifications
√
1 Γ ( 12 ) Γ(n + 1) π Γ(n + 1)
B (n + 1, ) = =
2 Γ (n + 2 )
3
(n + 12 )Γ (n + 12 )
Now you shall realize that we must use LDF
√ √
πΓ(n + 1) 2 π n! 2 ⋅ 22n (n!)2
= √ =
(n + 21 )Γ (n + 12 ) (2n + 1) π (2n)!
4n n!
(2n)!
Now we should separate odd and even terms in the denominator
22n (n ⋅ (n − 1) ⋯ 3 ⋅ 2 ⋅ 1)2
2⋅
(2n ⋅ (2n − 2) ⋯ 4 ⋅ 2)((2n + 1) ⋅ (2n − 1) ⋯ 3 ⋅ 1)
We insert 22n into the square to obtain
4.6 Example
−n
∞ x2 2
∫ (1 + ) dx
−∞ n−1
First we shall realize the evenness of the integral
−n
∞ x2 2
2∫ (1 + ) dx
0 n−1
x2
Let t = n−1
√ ∞ t− 2
1
n − 1∫ n dt
0 (1 + t) 2
38
Now we see that our integral becomes so familiar
√
√ 1 n−1 π(n − 1)Γ ( n−1 )
n − 1B ( , )= 2
2 2 Γ ( n2 )
4.7 Example
∞ x−p
∫ dx
0 x3 + 1
Let us do the substitution x3 = t
∞ t− 3
p+2
1
∫ dt
3 0 t+1
Now we should find x, y
1−p
x=
3
1−p
y+x=1 ⇒ y =1−
3
so we have our beta representation of the integral
Γ ( 1−p ) Γ (1 − 1−p
) π π π − πp
3 3
= = csc ( )
3 3 sin ( π(1−p)
3
) 3 3
4.8 Example
π
2
√
∫ sin3 z dz
0
Rewrite as
π
2 3
∫ sin 2 z cos0 z dx
0
39
This is the Geometric representation
3 5
2x − 1 = ⇒ x=
2 4
1
2y − 1 = 0 ⇒ y =
2
Then
π
2 3 B ( 45 , 12 ) Γ ( 54 ) Γ ( 12 )
∫ sin 2 z dz = =
0 2 2Γ ( 74 )
4.9 Example
1+i
2x − 1 = i ⇒ x =
2
1−i
2y − 1 = −i ⇒ y =
2
Then
1 1+i 1−i
Γ( ) Γ( )
2 2 2
Now we see that we have to use ERF
1 1+i 1+i π π π
Γ( ) Γ (1 − )= π(1+i)
= sech ( )
2 2 2 2 sin ( 2 ) 2 2
4.10 Exercise
Prove
∞ x2m+1 m! (n − m − 2)!
∫ dx =
0 (ax + c)
2 n 2(n − 1)! am+1 cn−m−1
40
5 Digamma function
5.1 Definition
Γ′ (x)
ψ(x) =
Γ(x)
We call digamma function the logarithmic derivative of the gamma function. Using this we can define
5.2 Example
Γ(2x + 1)
f (x) =
Γ(x)
We can use the differentiation rule for quotients
proof
Γ(x)Γ(1 − x) = π csc(πx)
41
ψ(x)Γ(x)Γ(1 − x) − ψ(1 − x)Γ(x)Γ(1 − x) = −π 2 csc(πx) cot(πx)
1
ψ(1 + x) − ψ(x) =
x
proof
Γ(1 + x)
=x
Γ(x)
Now differentiate both sides
Γ(1 + x)
(ψ(1 + x) − ψ(x)) = 1
Γ(x)
Which simplifies to
Γ(x) 1
ψ(1 + x) − ψ(x) = =
Γ(1 + x) x
5.4 Example
∫_0^∞ log(x)/(1 + x²)² dx

Consider the general case

F(a) = ∫_0^∞ x^a/(1 + x²)² dx

Use the following substitution x² = t

F(a) = (1/2) ∫_0^∞ t^{(a−1)/2}/(1 + t)² dt

By the beta function this is equivalent to

F(a) = (1/2) B((a + 1)/2, 2 − (a + 1)/2) = (1/2) Γ((a + 1)/2) Γ(2 − (a + 1)/2)

Differentiate with respect to a

F′(a) = (1/4) ∫_0^∞ log(t) t^{(a−1)/2}/(1 + t)² dt = (1/4) Γ((a + 1)/2) Γ(2 − (a + 1)/2) [ψ((a + 1)/2) − ψ(2 − (a + 1)/2)]

Now put a = 0

(1/4) ∫_0^∞ log(t) t^{−1/2}/(1 + t)² dt = (1/4) Γ(1/2) Γ(3/2) [ψ(1/2) − ψ(3/2)]

Now we use our second difference formula

ψ(1/2) − ψ(3/2) = −(ψ(1 + 1/2) − ψ(1/2)) = −2

Also by some gamma manipulation we have

Γ(1/2) Γ(3/2) = (1/2) Γ²(1/2) = π/2

The integral reduces to

(1/4) ∫_0^∞ log(t) t^{−1/2}/(1 + t)² dt = −π/4

Putting x² = t back we have our result

∫_0^∞ log(x)/(1 + x²)² dx = −π/4
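A quick numerical confirmation of the value −π/4 (illustrative, not from the original text; mpmath assumed):

```python
from mpmath import mp, quad, log, pi

mp.dps = 25
val = quad(lambda x: log(x) / (1 + x**2)**2, [0, 1, mp.inf])
print(val, -pi/4)   # both ~ -0.7853981
```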
We start by taking the logarithm of the Weierstrass representation of the gamma function
∞
x x
log (Γ(x)) = −γ x − log(x) + ∑ − log (1 + )+
n=1 n n
43
Now we shall differentiate with respect to x
∞ −1
1 1
ψ(x) = −γ − +∑ nx +
x n=1 1 + n n
Further simplification will result in the following
∞
1 x
ψ(x) = −γ − +∑
x n=1 n(n + x)
1. ψ(1)
∞
1
ψ(1) = −γ − 1 + ∑
n=1 n(n + 1)
∞
1
∑ =1
n=1 n(n + 1)
Hence we have
ψ(1) = −γ
2. ψ(1/2)

ψ(1/2) = −γ − 2 + ∑_{n=1}^∞ 1/(n(2n + 1))

We need to find

∑_{n=1}^∞ 1/(n(2n + 1))

We can start by

∑_{n=1}^∞ x^n/n = −log(1 − x)

from which one can show

∑_{n=1}^∞ 1/(n(2n + 1)) = 2 − 2 log(2)

Hence

ψ(1/2) = −γ − 2 log(2)
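Both special values ψ(1) = −γ and ψ(1/2) = −γ − 2 log(2) can be verified directly (a small sketch assuming mpmath, not part of the original text):

```python
from mpmath import mp, digamma, euler, log

mp.dps = 25
print(digamma(1), -euler)                 # psi(1) = -gamma
print(digamma(0.5), -euler - 2*log(2))    # psi(1/2) = -gamma - 2 log 2
```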
5.7 Example
Prove that
1 1 1
∫ + dx = γ
0 log(x) 1 − x
proof
Let x = e−t
∞ 1 e−t
∫ − dt
0 et − 1 t
Let the following
∞ ts
F (s) = ∫ − ts−1 e−t dt = ζ(s + 1)Γ(s + 1) − Γ(s)
0 et − 1
Hence the limit
1 1
lim Γ(s + 1) (ζ(s + 1) − ) = lim ζ(s + 1) −
s→0 s s→0 s
Use the expansion of the zeta function
1 ∞ γn
ζ(s + 1) = +∑ (−s)n
s n=0 n!
Hence the limit is equal to γ0 = γ.
45
∞ t ∞ e−z − e−tz
∫ ∫ e−xz dx dz = ∫ dz
0 1 0 z
Using fubini theorem we also have
t ∞ t 1
∫ ∫ e−xz dz dx = ∫ dx = log t
1 0 1 x
Hence we have the following
∞ e−z − e−tz
∫ dz = log(t)
0 z
We also know that
∞
Γ′ (a) = ∫ ta−1 e−t log t dt
0
Hence we have
∞ 1 −z ∞ a−1 −t ∞
Γ′ (a) = ∫ (e ∫ t e dt − ∫ ta−1 e−t(z+1) dt) dz
0 z 0 0
∞
∫ ta−1 e−t(z+1) dt = Γ(a) (z + 1)−a
0
Aslo we have
∞
∫ ta−1 e−t dt = Γ(a)
0
∞ e−z − (1 + z)−a
Γ′ (a) = Γ(a) ∫ dz
0 z
46
5.8.2 Second Integral representation
1 1 − xs
ψ(s + 1) = −γ + ∫ dx
0 1−x
proof
∞
s
ψ(s + 1) = −γ + ∑
n=1 n(n + s)
It is left as an exercise to prove that
∞ 1 1 − xs
s
∑ =∫ dx
n=1 n(n + s) 0 1−x
Let e−t = x
1 1 xa−1
∫ − − dx
0 log(x) 1 − x
By adding and subtracting 1
1 1 1 1 1 − xa−1
−∫ + dx + ∫ dx
0 log(x) 1 − x 0 1−x
Using the second integral representation
1 1 1
−∫ + dx + γ + ψ(a)
0 log(x) 1 − x
We have already proved that
1 1 1
∫ + dx = γ
0 log(x) 1 − x
Finally we get
∞ e−t e−(at)
∫ − dt = −γ + γ + ψ(a) = ψ(a)
0 t 1 − e−t
47
5.8.4 Fourth Integral representation
Prove that
1 ∞ t
ψ(z) = log(z) − − 2∫ dt ; Rez > 0
2z 0 (t2 + z 2 )(e2π − 1)
We prove that
∞ t 1
2∫ dt = log(z) − − ψ(z)
0 (t2 + z 2 )(e2π − 1) 2z
First note that
2
= coth(πt) − 1
e2πt −1
Also note that
1 2t ∞ 1
coth(πt) = + ∑ 2 2
πt π k=1 k + t
Hence we conclude that
2t 1 2t2 ∞ 1
= − t + ∑
e2πt − 1 π π k=1 k 2 + t2
Substitute the value in the integral
∞ 1 1 2t2 ∞ 1
∫ { −t+ ∑ } dt
0 t2 +z π
2 π k=1 k 2 + t2
The first integral
1 ∞ 1 1
∫ dt =
π 0 t +z
2 2 2z
Since the second integral is divergent we put
N t 1
∫ dt = log(N 2 + z 2 ) − log(z)
0 t2 +z 2 2
Also for the series
2 N ∞ t2 N
1
∑∫ dt = ∑
π k=1 0 (t2 + z 2 )(t2 + k 2 ) k=1 k + z
Which simplifies to
48
N N
1 1 N z N
z
∑ =∑ −∑ = HN − ∑
k=1 k + z k=1 k k=1 k(k + z) k=1 k(k + z)
N
1 z
lim − log(N 2 + z 2 ) + log(z) + HN − ∑
N →∞ 2 k=1 k(k + z)
Or
∞
z
lim HN − log(N ) + log(z) − ∑
N →∞ k=1 k(k + z)
This simplifies to
∞
z 1
log(z) + γ − ∑ = log(z) − − ψ(z)
k=1 k(k + z) z
Collecting the results we have
∞ t 1 1 1
2∫ dt = log(z) + − − ψ(z) = log(z) − − ψ(z)
0 (t2 + z 2 )(e2π − 1) 2z z 2z
q/2−1
p π p 2πpk πk
ψ ( ) = −γ − log(2q) − cot ( π) + 2 ∑ cos ( ) log [sin ( )]
q 2 q k=1 q q
proof
q/2−1
1 π π 2πk πk
ψ ( ) = −γ − log(2q) − cot ( ) + 2 ∑ cos ( ) log [sin ( )]
q 2 q k=1 q q
So for example
49
1 1 √
ψ ( ) = (−6γ − π 3 − 9 log(3))
3 6
1 1
ψ ( ) = (−2γ − π − 6 log(2))
4 2
1 1√ 3
ψ ( ) = −γ − 3π − 2 log(2) − log(3)
6 2 2
5.11 Example
∞
∫ e−at log(t) dt
0
We start by considering
∞
F (b) = ∫ e−at tb dt
0
1 ∞ x b
F (b) = ∫ e−x ( ) dx
a 0 a
We can use the gamma function
1 ∞ b
Γ(b + 1)
−x x
F (b) = ∫ e ( ) dx =
a 0 a ab+1
Now differentiate with respect to b
5.12 Example
50
First note that since there is a log in the denominator that gives as an idea to use differentiation under
Let
1 (1 − xa )(1 − xb )(1 − xc )
F (c) = ∫ dx
0 (1 − x)(− log x)
Differentiate with respect to c
1 (1 − xa )(1 − xb )xc
F ′ (c) = ∫ dx
0 (1 − x)
By expanding
F (c) = − log [Γ(c + 1)] + log [Γ(a + c + 1)] + log [Γ(b + c + 1)] − log [Γ(a + b + c + 1)] + e
Which reduces to
Γ(a + c + 1)Γ(b + c + 1)
log [ ]+e
Γ(c + 1)Γ(a + b + c + 1)
Now put c = 0 we have
Γ(a + 1)Γ(b + 1)
0 = log [ ]+e
Γ(a + b + 1)
The constant
51
Γ(a + 1)Γ(b + 1)
e = − log [ ]
Γ(a + b + 1)
So we have the following
5.13 Example
∞ 1 dx
∫ (e−bx − )
0 1 + ax x
Let us first use the substitution t = ax
∞ 1 dt
(e− a −
bt
∫ )
0 1+t t
Add and subtract e−t
∞ 1 dt
(e−t − e−t + e− a −
bt
∫ )
0 1+t t
Separate into two integrals
∞ e− a − e−t
bt
∞ 1 dt
−t
∫ (e − ) +∫ dt
0 1+t t 0 t
The first integral is a representation of the Euler constant when a = 1
∞ 1 dt
∫ (e−t − ) = −γ
0 1+t t
We also proved
e− a − e−t
bt
∞ b a
∫ dt = − log ( ) = log ( )
0 t a b
Hence the result
∞ 1 dx a
∫ (e−bx − ) = log ( ) − γ
0 1 + ax x b
52
5.14 Example
∞ 1
∫ e−ax ( − coth(x)) dx
0 x
By using the exponential representation of the hyperbolic functions
∞ 1 1 + e−2x
∫ e−ax ( − ) dx
0 x 1 − e−2x
Now let 2x = t so we have
∞ 1 1 + e−t
e−( 2 ) ( −
at
∫ ) dt
0 t 2(1 − e−t )
e−t e−( 2 )
at
∞ a
∫ − −t
dt = ψ ( )
0 t 1−e 2
The second integral reduces to
e−( 2 ) − e−t
at
∞ a
∫ dt = − log ( )
0 t 2
By collecting the results
∞ 1 a a 1
∫ e−ax ( − coth(x)) dx = ψ ( ) − log ( ) +
0 x 2 2 a
53
5.15 Example
Prove that
∞ x dx 1 1 a 1 a 2
∫ = ψ( + )− ψ( )− a
0 x2 + 1 sinh(ax) 2 2 2π 2 2π π
proof
∞ x dx ∞ x dx
∫ =∫
0 x2 + 1 sinh(ax) 0 x + a sinh(x)
2 2
∞ ∞ sin(xt)
=∫ ∫ e−at dt dx
0 0 sinh(x)
∞ ∞ sin(xt)
=∫ e−at ∫ dx dt
0 0 sinh(x)
π ∞ π
= ∫ e−at tanh ( t) dt
2 0 2
∞ 2
−zx
=∫ e tanh(x) dt ; z = a
0 π
∞ e−zx (1 − e−2x )
=∫ dx
0 e−2x + 1
∞ e−zx ∞
∫ −2x
dx = ∑ ∫ e−x(2n+z) dx
0 e +1 n≥0 0
(−1)n
=∑
n≥0 2n + z
1 1 z z
= (ψ ( + ) − ψ ( ))
4 2 4 4
∞ −e−x(z+2) (−1)n
−∫ dx = − ∑
0 e−2x + 1 n≥0 z + 2 + 2n
1 1 z z
= − (−ψ ( + ) + ψ (1 + ))
4 2 4 4
Hence we have
∞ x dx 1 1 z z z
∫ = (2ψ ( + ) − ψ (1 + ) − ψ ( ))
0 x2 + 1 sinh(ax) 4 2 4 4 4
1 1 z 1 z
= ψ( + )− ψ( )−z
2 2 4 2 4
1 1 a 1 a 2
= ψ( + )− ψ( )− a
2 2 2π 2 2π π
54
Let a = π/2
∞ x dx 1 1 1 1 1
∫ = ψ( + )− ψ( )−1
0 x2 + 1 sinh( π2 x) 2 2 4 2 4
π
= cot(π/4) − 1
2
π
= −1
2
55
6 Zeta function
Zeta function is one of the most important mathematical functions. The study of zeta function isn’t
exclusive to analysis. It also extends to number theory and the most celebrating theorem of Riemann.
6.1 Definition
∞
1
ζ(s) = ∑ s
n=1 n
∞
x Bk k
= ∑ x
e − 1 k=0 k!
x
Now let us derive some values for the Bernoulli numbers , rewrite the power series as
∞
Bk k
x = (ex − 1) ∑ x
k=0 k!
By expansion
1 2 1 3 1 4 B2 2 B3 3
x = (x + x + x + x + ⋯) ⋅ (B0 + B1 x + x + x + ⋯)
2! 3! 4! 2! 3!
By multiplying we get
B0 2 B0 B2 B1 3 B0 B1 B2 B3 4
x = B0 x + (B1 + )x + ( + + )x + ( + + + )x + ⋯
2! 3! 2! 2! 4! 3! 2! 2! 3!
By comparing the terms we get the following values
1 1 1
B0 = 1 , B 1 = − + B2 = , B 3 = 0 , B 4 = − , ⋯
2 6 30
Actually we also deduce that
B2k+1 = 0 , ∀ k ∈ Z+
56
6.3 Relation between zeta and Bernoulli numbers
22k−1 2k
ζ(2k) = (−1)k−1 B2k π
(2k)!
proof
sin(z) ∞ z2
= ∏ (1 − 2 2 )
z n=1 n π
Take the logarithm to both sides
∞
z2
log(sin(z)) − log(z) = ∑ log (1 − )
n=1 n2 π 2
By differentiation with respect to z
∞ z
1 2 2
cot(z) − = −2 ∑ n πz2
z n=1 1 − n2 π 2
∞
z2 ⎛ 1 ⎞
z cot(z) = 1 − 2 ∑ 2 π2 ⎝ z2 ⎠
n=1 n 1 − π 2 n2
Now using the power series expansion
∞
1 1
z2
=∑ z 2k , ∣z∣ < π n
1− π 2 n2 k=0 n 2k π 2k
z2 ⎛ 1 ⎞ ∞ 1 ∞
1
= ∑ 2k+2 2k+2 z 2k+2 = ∑ 2k 2k z 2k
n π ⎝ 1 − π2 n2 ⎠ k=0 n
2 2 z 2
π k=1 n π
∞ ∞
1 z 2k
z cot(z) = 1 − 2 ∑ ∑ 2k π 2k
n=1 k=1 n
∞ ∞ ∞
1 z 2k ζ(2k) 2k
z cot(z) = 1 − 2 ∑ ∑ 2k π 2k
= 1 − 2 ∑ 2k
z
k=1 n=1 n k=1 π
Euler didn’t stop here, he used power series for z cot(z) using the Bernoulli numbers.
57
∞
x Bk k
= ∑ x
e − 1 k=0 k!
x
∞
2iz Bk k
= ∑ (2iz)
e2iz − 1 k=0 k!
Which can be reduced directly to the following by noticing that B2k+1 = 0
∞
22k 2k
z cot(z) = 1 − ∑ (−1)k−1 B2k z
k=1 (2k)!
The result is immediate by comparing the two different representations.
6.4 Exercise
ζ(4) , ζ(6) , B5 , B6
∞ e−t ts−1
∫ dt
0 1 − e−t
Using the power expansion
∞
1
= ∑ e−nt
1 − e−t n=0
Hence we have
∞ ∞
∫ e−t ts−1 ( ∑ e−nt ) dt
0 n=0
∞ ∞ ∞
1
∑∫ ts−1 e−(n+1)t dt = Γ(s) ∑ = Γ(s)ζ(s)
n=0 (n + 1)
0 s
n=0
58
6.6 Hurwitz zeta and polygamma functions
6.6.1 Definition
∞
1
ζ(a, z) = ∑ ; ζ(a, 1) = ζ(a)
n=0 (n + z)a
Let us define the polygamma function as the function produced by differentiating the Digamma function
ψn (z) ∀ n ≥ 0
So we have
∀n≥1
22n−2 2n
ψ2n−1 (1) = (−1)n−1 B2n π
n
proof
1 ∞ z
ψ0 (z) = −γ − +∑
z n=1 n(n + z)
This can be written as the following
∞
1 1
ψ0 (z) = −γ + ∑ −
k=0 k + 1 k+z
By differentiating with respect to z
59
∞
1
ψ1 (z) = ∑
k=0 (k + z)
2
∞
1
ψ2 (z) = −2 ∑
k=0 (k + z)3
∞
1
ψ3 (z) = 2 ⋅ 3 ∑
k=0 (k + z)
4
∞
1
ψ4 (z) = −2 ⋅ 3 ⋅ 4 ∑
k=0 (k + z)
5
∞
1
ψn (z) = (−1)n+1 n! ∑
k=0 (k + z)n+1
We realize the RHS is just the Hurwitz zeta function
22k−1 2k
ζ(2k) = (−1)k−1 B2k π
(2k)!
we can easily verify the following
22n−1 2n 22n−2 2n
ψ2n−1 (1) = (2n − 1)! (−1)n−1 B2n π = (−1)n−1 B2n π
(2n)! n
This can be used to deduce some values for the polygamma function
π2 π4
ψ1 (1) = , ψ3 (1) =
6 15
60
6.7 Example
Prove that
π
2 π π3
∫ x sin(x) cos(x) log(sin x) log(cos x) dx = −
0 16 192
proof
π π
2 π 2
∫ x sin(x) cos(x) log(sin x) log(cos x) dx = ∫ sin(x) cos(x) log(sin x) log(cos x) dx
0 4 0
We need to find
π
2
∫ sin(x) cos(x) log(sin x) log(cos x) dx
0
π
2 Γ(a)Γ(b)
F (a, b) = 2 ∫ sin2a−1 (x) cos2b−1 (x) dx =
0 Γ(a + b)
Now let us differentiate with respect to a
π
∂ 2
(Fa (a, b)) = 8 ∫ sin2a−1 (x) cos2b−1 (x) log(sin x) log(cos x)dx
∂b 0
Now to evaluate ψ1 (2) , we have to use the zeta function we have already established the following
relation
61
∞
1
ψ1 (z) = ∑
k=0 (n + z)
2
∞
1
ψ1 (2) = ∑
k=0 (k + 2)2
Let us write the first few terms in the expansion
∞
1 1 1 1
∑ = 2 + 2 + 2 +⋯
k=0 (k + 2) 2 2 3 4
we see this is similar to ζ(2) but we are missing the first term
π2
ψ1 (2) = ζ(2) − 1 = −1
6
Collecting all these results together we have
π
2 1 π2
∫ sin(x) cos(x) log(sin x) log(cos x)dx = −
0 4 48
Finally we get our result
π
2 π π3
∫ x sin(x) cos(x) log(sin x) log(cos x) dx = −
0 16 192
62
7 Dirichlet eta function
7.1 Definition

η(s) = ∑_{n=1}^∞ (−1)^{n−1}/n^s
The alternating form of the zeta function is easy to compute once we have established the main results for the zeta function, because the two are related through

η(s) = (1 − 2^{1−s}) ζ(s)

proof
∞
1 1 ∞ 1
∑ s
− ∑
n=1 n 2s−1 n=1 ns
∞ ∞
1 1
∑ − 2 ∑
n=1 n s
n=1 (2n) s
Clearly we can see that we are subtracting even terms twice , this is equivalent to
∞ ∞
1 1
∑ −∑
n=1 (2n − 1) s
n=1 (2n) s
1 1 1 1 1
(1 + s
+ s + ⋯) − ( s + s + s + ⋯)
3 5 2 4 6
Rearranging the terms we establish the alternating form
∞
1 1 1 (−1)n−1
1− + − + ⋯ = ∑ = η(s)
2s 3s 4s n=1 ns
63
7.3 Integral representation
η(s) Γ(s) = ∫_0^∞ t^{s−1}/(e^t + 1) dt
proof
∞ ∞
∫ e−t ts−1 dt ( ∑ (−1)n e−nt )
0 n=0
∞ ∞
∑ (−1) ∫
n
e−(n+1)t ts−1 dt
n=0 0
∞ Γ(s)
∫ e−(n+1)t ts−1 dt =
0 (n + 1)s
Hence we have the following
∞ ∞
(−1)n (−1)n−1
Γ(s) ∑ = Γ(s) ∑ = Γ(s) η(s)
n=0 (n + 1)
s ns
n=1
As an application, for s = 2,

∫_0^∞ t/(e^t + 1) dt = Γ(2) η(2) = π²/12
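A numerical check of the s = 2 case (an illustrative sketch; mpmath assumed, with altzeta denoting the Dirichlet eta function):

```python
from mpmath import mp, quad, exp, gamma, altzeta, pi

mp.dps = 25
print(quad(lambda t: t / (exp(t) + 1), [0, mp.inf]))   # ~ 0.8224670
print(gamma(2) * altzeta(2), pi**2 / 12)               # Gamma(2) * eta(2) = pi^2/12
```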
8 Polylogarithm
8.1 Definition
∞
zk
Lin (z) = ∑ n
k=1 k
The name contains two parts, (poly) because we can choose different n and produce many functions and
∞
1
Lin (1) = ∑ n
= ζ(n)
k=1 k
π2
Li2 (1) = ζ(2) =
6
Also we can relate it to the eta function though z = −1
∞
(−1)k
Lin (−1) = ∑ n
= −η(n)
k=1 k
∞
zk
Li 1 (z) = ∑
n=1 k
∞
zk
∑ = − log(1 − z)
k=1 k
z 1 ∞ tk ∞
1 z ∞
zk
∫ ( ∑ n ) dt = ∑ n ∫ tk−1 dt = ∑ n+1 = Li n+1 (z)
0 t k=1 k k=1 k 0 k=1 k
65
8.4 Square formula
proof
∞
z k ∞ (−z)k
∑ n
+∑ n
k=1 k k=1 k
z2 z3 z2 z3
z+ + + ⋯ + (−z + − + ⋯)
2n 3n 2n 3n
The odd terms will cancel
z2 z4 z6
2 + 2 + 2 +⋯
2n 4n 6n
Take 21−n as a common factor
∞
(z 2 )2 (z 2 )3 (z 2 )k
21−n (z 2 + + + ⋯) = 2 1−n
∑ = 21−n Li n (z 2 )
2n 3n k=1 k n
8.5 Exercise
Prove that
z log(1 − t)
Li 2 (z) = − ∫ dt
0 t
66
8.6 Dilogarithms
Of all polylogarithms Li2 (z) is the most interesting one, in this section we will see why!
8.6.1 Definition
∞ z log(1 − t)
zk
Li2 (z) = ∑ 2
= −∫ dt
k=1 k 0 t
The curious reader should try to prove the integral representation using the recursive definition we
−1 1 π2
Li2 ( ) + Li2 (−z) = − log2 (z) −
z 2 6
proof
−1 log(1 − t)
−1
z
Li2 ( ) = −∫ dt
z 0 t
Differentiate with respect to z
−1 −z log(1 − t) 1 1
Li2 ( )=∫ dt − log2 (z) + C = −Li2 (−z) − log2 (z) + C
z 0 t 2 2
To find the constant C let z = 1
C = 2Li2 (−1)
−π 2
C = 2Li2 (−1) = −2η(2) =
6
Which proves the result by simple rearrangement
−1 1 π2
Li2 ( ) + Li2 (−z) = − log2 (z) −
z 2 6
67
8.6.3 Second functional equation
π2
Li2 (z) + Li2 (1 − z) = − log(z) log(1 − z) , 0 < z < 1
6
proof
z log(1 − t)
Li2 (z) = − ∫ dt
0 t
Now integrate by parts to obtain
z log(t)
Li2 (z) = − ∫ dt − log(z) log(1 − z)
0 1−t
By the change of variable t = 1 − x we get
1 log(1 − x)
∫ dx = −Li2 (1) + Li2 (1 − z)
1−z x
Which implies that
π2
Li2 (z) + Li2 (1 − z) = − log(z) log(1 − z)
6
We can easily deduce that for z = 1/2

2 Li₂(1/2) = π²/6 − log²(1/2)

Li₂(1/2) = π²/12 − (1/2) log²(1/2)
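A quick check of this special value (illustrative, not part of the original text; mpmath assumed):

```python
from mpmath import mp, polylog, pi, log

mp.dps = 25
print(polylog(2, 0.5))                  # Li2(1/2)
print(pi**2/12 - log(0.5)**2 / 2)       # both ~ 0.5822405
```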
8.6.4 Third functional equation
z 1
Li2 (z) + Li2 ( ) = − log2 (1 − z) z < 1
z−1 2
proof
z−1 log(1 − t)
z
z
Li2 ( ) = −∫ dt
z−1 0 t
Differentiate both sides with respect to z
d z 1 ⎛ log (1 − z−1
z
)⎞
Li2 ( )=
dz z−1 (z − 1) ⎝
2 z
z−1 ⎠
Upon simplification we obtain
d z − log(1 − z)
Li2 ( )=
dz z−1 z(z − 1)
Using partial fractions decomposition
d z log(1 − z) log(1 − z)
Li2 ( )= +
dz z−1 1−z z
Integrate both sides with respect to z
z 1
Li2 ( ) = − log2 (1 − z) − Li2 (z) + C
z−1 2
Put z = −1 to find the constant
1 1
Li2 ( ) = − log2 (2) − Li2 (−1) + C
2 2
Remember that
1 π2 1 1 π2
Li2 ( ) = − log2 ( ) , Li2 (−1) = −
2 12 2 2 12
Hence we deduce that C = 0
z 1
Li2 ( ) = − log2 (1 − z) − Li2 (z)
z−1 2
Which can be written as
z 1
Li2 ( ) + Li2 (z) = − log2 (1 − z)
z−1 2
69
8.6.5 Example
Prove that
√ √
5−1 π2 5−1
Li2 ( )= − log (
2
)
2 10 2
proof
z 1 1
Li2 ( ) + Li2 (z 2 ) − Li2 (−z) = − log2 (1 − z)
z−1 2 2
√
Now let z = 1− 5
2
√ √ √ √
1−2 5+5 3− 5 z 5−1 3− 5
z2 = = Ô⇒ = √ =
4 2 z−1 1+ 5 2
Hence we have
√ √ √
3 3− 5 5−1 1 5+1
Li2 ( ) − Li2 ( ) = − log (
2
)
2 2 2 2 2
We already established the following functional equation
π2
Li2 (z) + Li2 (1 − z) = − log(z) log(1 − z)
6
√
Put z = 3− 5
2
√ √ √ √
3− 5 5−1 π2 3− 5 5−1
Li2 ( ) + Li2 ( )= − log ( ) log ( )
2 2 6 2 2
√
Solving the two representations for Li2 ( 5−1
2
) we get our result.
8.6.6 Example
1 log(1 − x) log(x)
I =∫ dx
0 x
Integrate by parts
∞
1 Li2 (t) 1
I = − log(x)Li2 (x)∣10 + ∫ dt = Li3 (1) = ∑ 3 = ζ(3)
0 t k=1 k
70
8.6.7 Example
x log2 (1 − t)
∫ dt , 0 < x < 1
0 t
Integrating by parts we get the following
x log2 (1 − t) x Li (t)
2
∫ dt = − log(1 − x)Li2 (x) − ∫ dt
0 t 0 1−t
Now we are left with the following integral
x Li2 (t) 1 Li (1 − t)
2
∫ dt = ∫ dt
0 1−t 1−x t
Using the first functional equation
π2
1
6
− Li2 (t) − log(1 − t) log(t)
∫ dt
1−x t
1 Li2 (t)
∫ dt = Li3 (1) − Li3 (1 − x)
1−x t
The second integral is the same as the first exercise
1 log(1 − t) log(t)
∫ = Li3 (1) + log(1 − x)Li2 (1 − x) − Li3 (1 − x)
1−x t
Collecting the results together we obtain
x Li2 (t) π2
∫ dt = − log(1 − x) − Li2 (1 − x) log(1 − x) + 2 Li3 (1 − x) − 2ζ(3)
0 1−t 6
Finally we have
x log2 (1 − t) π2
∫ dt = − log(1 − x)Li2 (x) + log(1 − x) + Li2 (1 − x) log(1 − x) − 2 Li3 (1 − x) + 2ζ(3)
0 t 6
71
8.6.8 Example
a x
I(a) = ∫ dx
0 ex − 1
1
Start by the power expansion of 1−e−x
a a ∞
x
I(a) = ∫ dx = ∫ xe−x ∑ e−nx
0 ex −1 0 n=0
∞ a
I(a) = ∑ ∫ xe−x(n+1) dx
n=0 0
∞
1 e−(n+1)a ae−(n+1)a
I(a) = ∑ − −
n=0 (n + 1) (n + 1)2 (n + 1)
2
72
9 Ordinary Hypergeometric function
Ordinary or sometimes called the Gauss hypergoemtric function is a generalization of the power expan-
sion definition. Before we start with the definition we will explain some notations.
9.1 Definition
⎧
⎪
⎪
⎪
⎪ 1 ∶n=0
(z)n = ⎨
⎪
⎪
⎪ Γ(z+n)
⎪ ∶n>0
⎩ Γ(z)
Using this definition have
∞
(a)n (b)n z n
2 F1 (a, b; c; z) =∑
n=0 (c)n n!
1. Logarithm
∞ ∞ ∞
(1)n (1)n (−z)n n! z n+1
z 2 F1 (1, 1; 2; −z) = z ∑ = ∑ (−1)n z n+1 = ∑ (−1)n = log(1 + z)
n=0 (2)n n! n=0 (n + 1)! n=0 n+1
2. Power function
∞ ∞
(a)n (1)n z n (a)n n
2 F1 (a, 1; 1; z) =∑ =∑ z = (1 − z)−a
n=0 (1)n n! n=0 n!
3. Sine inverse
∞ ( 12 )n ( 12 )n z 2n+1 ∞( 12 )n z 2n+1 ∞
1 ⋅ 3 ⋅ 5⋯(2n − 1) 2n+1
z 2 F1 ( 21 , 12 ; 32 ; z 2 ) =∑ =∑ =∑ z
n=0 ( 32 )n n! n=0 2n + 1 n! n=0 2 n! (2n + 1)
n
∞
(2n)! ∞ (2n )
z 2 F1 ( 21 , 12 ; 32 ; z 2 ) = ∑ z 2n+1
= ∑
n
z 2n+1 = arcsin(z)
n=0 4 n (n!) 2 (2n + 1) n=0 4 n (2n + 1)
Now we consider converting the Taylor expansion into the equivalent hypergeomtric representation
73
∞
2 F1 (a, b; c; z) = ∑ tk z k , t 0 = 1
k=0
tk+1 (k + a)(k + b)
= z
tk (k + c)(k + 1)
Using this definition, we can easily find the terms a, b, c.
1. Exponential function
f (z) = ez
∞
zk
f (z) = ez = ∑
k=0 k!
Hence we have
tk+1 z
=
tk k+1
ez = 2 F1 (−, −; −; z)
2. Cosine function
∞
(−1)k z 2k
f (z) = cos(z) = ∑
k=0 (2k)!
tk+1 1 1 −z 2
=− z=
tk (2k + 2)(2k + 1) (k + 1) (k + 21 ) 4
Hence we have
1 −z 2
cos(z) = 2 F1 (−, −; ; )
2 4
74
3. Power function
∞
(a)k k
f (z) = (1 − z)−a = ∑ z
k=0 k!
tk+1 (k + a) (k + a)(k + 1)
= z= z
tk (k + 1) (k + 1)(k + 1)
Hence we have
(1 − z)−a = 2 F1 (a, 1; 1; z)
9.3 Exercise
arcsin(z), sin(z)
1
∫ tb−1 (1 − t)c−b−1 (1 − tz)−a dt
0
∞
1 (a)k
∫ tb−1 (1 − t)c−b−1 ∑ (tz)k
0 k=0 k!
Interchanging the integral with the series
∞
(a)k k 1
∑ z ∫ tk+b−1 (1 − t)c−b−1 dt
k=0 k! 0
∞
(a)k Γ(k + b)Γ(c − b) z k
∑
n=0 Γ(k + c) k!
75
Using the identity that
Γ(b)Γ(c − b)
β(c − b, b) =
Γ(c)
and
Γ(z + k)
= (z)k
Γ(z)
We deduce that
∞ ∞
(a)k Γ(k + b)Γ(c) z k (a)k (b)k z k
β(c − b, b) ∑ = β(c − b, b) ∑
k=0 Γ(b)Γ(k + c) k! k=0 (c)k k!
9.5 Transformations
1. Pfaff transformations
z
2 F1 (a, b; c; z) = (1 − z)−a 2 F1 (a, c − b; c; )
z−1
and
z
2 F1 (a, b; c; z) = (1 − z)−b 2 F1 (c − a, b; c; )
z−1
proof
1 1 tb−1 (1 − t)c−b−1
2 F1 (a, b; c; z) = ∫ dt
β(b, c − b) 0 (1 − tz)a
(1 − z)−a 1 z −a
∫ t c−b−1
(1 − t) b−1
(1 − t ) dt
β(b, c − b) 0 z−1
76
z
(1 − z)−a 2 F1 (a, c − b; c; )
z−1
2 F1 (a, b; c; z) = 2 F1 (b, a; c; z)
We deduce that
z
2 F1 (a, b; c; z) = (1 − z)−b 2 F1 (c − a, b; c; )
z−1
2. Euler transformation
2 F1 (a, b; c; z) = (1 − z)c−a−b 2 F1 (c − a, c − b; c; z)
proof
z
2 F1 (a, b; c; ) = (1 − z)−a 2 F1 (a, c − b; c; z)
z−1
and
z
2 F1 (a, b; c; ) = (1 − z)−b 2 F1 (c − a, b; c; z)
z−1
2 F1 (a, b; c; z) = (1 − z)c−a−b 2 F1 (c − a, c − b; c; z)
77
3. Quadratic transformation
a a 1 z2
= (1 − z)− 2 2 F1 ( , b − ; b + ;
a
2 F1 (a, b; 2b; z) )
2 2 2 4z − 4
4. Kummer
2 F1 (a, b; c; z) = 2 F1 (a, b; 1 + a + b − c; 1 − z)
1. At z = 1
Γ(c)Γ(c − a − b)
2 F1 (a, b; c; 1) =
Γ(c − a)Γ(c − a)
1 1
2 F1 (a, b; c; 1) = ∫ t (1 − t)
b−1 c−b−a−1
dt
β(c − b, b) 0
Now we can use the first integral representation of the beta function
1 Γ(b)Γ(c − b − a)
2 F1 (a, b; c; 1) = ⋅
β(c − b, b) Γ(c − a)
2. At z = −1
Γ(1 + a − b)Γ (1 + a2 )
2 F1 (a, b; 1 + a − b; −1) =
Γ(1 + a)Γ(1 + a
2
− b)
78
10 Error Function
The error function is an interesting function that has many applications in probability, statistics and
physics.
10.1 Definition
2 x
erf(x) = √ ∫ e−t dt
2
π 0
erfc(x) = 1 − erf(x)
erfi(x) = −ierf(ix)
10.4 Properties
2 −x 2 x
e−t dt = − √ ∫ e−t dt = −erf(x)
2 2
erf(−x) = √ ∫
π 0 π 0
erf(z) + erf(z̄)
R erf(z) =
2
erf(z) − erf(z̄)
I erf(z) =
2i
1. Hypergeomtric function
2x 1 3
erf(x) = √ 1 F1 ( , , −x2 )
π 2 2
79
proof
2x ∞ ( ) (−x2 )k
1
2x 1 3
√ 1 F1 ( , , −x2 ) = √ ∑ 23 k
π 2 2 π k=0 ( 2 ) k!
k
2x 1 3 x ∞ (−x2 )k
√ 1 F1 ( , , −x2 ) = √ ∑ 1
π 2 2 π k=0 ( 2 + k)k!
2 x 2 x ∞ (−x2 )k 2 ∞ (−x)2k+1
√ ∫ e−t dt = √ ∫ ∑
2
dt = √ ∑
π 0 π 0 k=0 k! π k=0 (2k + 1)k!
Γ ( 21 , x2 )
erf(x) = 1 − √
π
proof
1 ∞
t− 2 e−t dt
1
Γ ( , x2 ) = ∫
2 x 2
Let t = y 2
1 ∞
e−y dy
2
Γ ( , x2 ) = 2 ∫
2 x
1 ∞ x
e−y dy − 2 ∫ e−y dy
2 2
Γ ( , x2 ) = 2 ∫
2 0 0
∞ √
e−y dy =
2
2∫ π
0
80
1 √ √
Γ ( , x2 ) = π − πerf(x)
2
10.6 Example
I = ∫_0^x e^{t²} dt

The function has no elementary anti-derivative, so we represent it using the error function.

erfi(x) = −i (2/√π) ∫_0^{ix} e^{−t²} dt

By differentiating both sides we have

d/dx erfi(x) = (2/√π) e^{x²}

Hence we have

e^{x²} = d/dx [(√π/2) erfi(x)]

By integrating both sides we have

∫_0^x e^{t²} dt = (√π/2) erfi(x)
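A sample-point check of the identity (illustrative sketch, not from the original; mpmath assumed):

```python
from mpmath import mp, quad, exp, erfi, sqrt, pi

mp.dps = 25
x = mp.mpf('1.7')   # arbitrary test point
print(quad(lambda t: exp(t**2), [0, x]))
print(sqrt(pi)/2 * erfi(x))   # the two values agree
```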
10.7 Example
Prove that
∞ 1
∫ erfc(x) dx = √
0 π
proof
∞
∫ (1 − erf(x)) dx
0
81
Integrating by parts we have
2 ∞
∞
xe−x dx
2
I = x(1 − erf(x)) ]0 + √ ∫
π 0
Now we compute erf(∞)
∞
√
2 −t2 2 π
erf(∞) = √ ∫ e dt = √ × =1
π 0 π 2
So the first term will go to zero. The integral can be solved by substitution
2 ∞ 1
xe−x dx = √
2
I=√ ∫
π 0 π
10.8 Example
Prove that

∫_0^∞ erfc²(x) dx = (2 − √2)/√π

proof

Integrate by parts

I = x erfc²(x) ]_0^∞ − 2 ∫_0^∞ x erfc′(x) erfc(x) dx = −2 ∫_0^∞ x erfc′(x) erfc(x) dx

where

erfc′(x) = (1 − erf(x))′ = −(2/√π) e^{−x²}

That results in

I = (4/√π) ∫_0^∞ x e^{−x²} erfc(x) dx

Integrate by parts again

I = (2/√π) [−e^{−x²} erfc(x)]_0^∞ − (4/π) ∫_0^∞ e^{−2t²} dt

At infinity the boundary term goes to 0; at 0 we get

(2/√π) e^{−x²} erfc(x) ∣_{x=0} = (2/√π)(1 − erf(0)) = 2/√π

The remaining integral evaluates to

∫_0^∞ e^{−2t²} dt = √π/(2√2)

Collecting the results together we have

I = 2/√π − (4/π) · √π/(2√2) = (2 − √2)/√π
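Numerically (an illustrative sketch, assuming mpmath):

```python
from mpmath import mp, quad, erfc, sqrt, pi

mp.dps = 25
print(quad(lambda x: erfc(x)**2, [0, mp.inf]))   # ~ 0.3305
print((2 - sqrt(2)) / sqrt(pi))
```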
10.9 Example
Prove that
√
∞ π − 2 coth−1 2
∫ sin(x )erfc(x) dx =
2
√
0 4 2π
proof
√
Using the substitution x = t
1 ∞ √
sin(t)t− 2 erfc( t) dt
1
∫
2 0
Consider the function
1 ∞ √
sin(t)t− 2 erfc(a t) dt
1
I(a) = ∫
2 0
Differentiating with respect to a we have
−1 ∞ −1 1
I ′ (a) = √ ∫ sin(t)e−a t dt = √ ⋅ 4
2
π 0 π a +1
Now integrating with respect to a
−1 a dx
I(a) = √ ∫ +C
π 0 x4 + 1
To evaluate the constant we take a → ∞
−1 ∞ dx
I(∞) = √ ∫ +C
π 0 x4 + 1
83
The function has an anti-derivative and the value is
∞ dx
√
−1 π
√ ∫ =− √
π 0 x +1
4
2 2
Note that
√
π
erfc(∞) = 0 Ô⇒ C= √
2 2
Finally we get
√
−1 a dx π
I(a) = √ ∫ + √
π 0 x4 + 1 2 2
Let a = 1 in the integral
∞
√ 1 dx
π 1
I(1) = ∫ sin(x2 )erfc(x) dx = √ − √ ∫
0 2 2 π 0 x4 + 1
Also knowing that
√
1 1 dx π + 2 coth−1 2
√ ∫ = √
π 0 x4 + 1 4 2π
Hence we have the result
√
∞ π − 2 coth−1 2
∫ sin(x )erfc(x) dx =
2
√
0 4 2π
10.10 Exercise
∞
∫ erfc3 (x) dx =?
0
∞
∫ erfc4 (x) dx =?
0
What about
∞
∫ erfcn (x) dx =?
0
84
11 Exponential integral function
11.1 Definition
∞ e−t ∞ e−xt
E(x) = ∫ dt = ∫ dt
x t 1 t
11.2 Example
Prove that
proof
∞
∞
E(x) = e−t log(t) ]x + ∫ log(t)e−t dt
x
∞
E(x) = −e−x log(x) + ∫ log(t)e−t dt
x
∞
lim [log(x) + E(x)] = lim (log(x) − e−x log(x)) + ∫ log(t)e−t dt
x→0 x→0 0
∞
lim [log(x) + E(x)] = ∫ log(t)e−t dt = ψ(1) = −γ
x→0 0
11.3 Example
∞ Γ(p)
∫ xp−1 E(ax) dx =
0 pap
proof
85
∞ 1 1 ∞
∞
∫ xp−1 E(ax) dx = xp E(ax) ]0 + ∫ xp−1 e−ax dx
0 p ap 0
The first limit goes to 0
1 ∞ 1 ∞ Γ(p)
∫ xp−1 e−ax dx = p ∫ xp−1 e−x dx =
ap 0 pa 0 p ap
11.4 Example
∞ π Γ(p)
∫ xp−1 eax E(ax) dx = ⋅
0 sin(aπ) ap
proof
∞ ∞ e−t
∫ xp−1 eax ∫ dt dx
0 ax t
Use the substitution t = ax y
∞ ∞ e−ax(y−1)
∫ ∫ xp−1 dy dx
0 1 y
By switching the two integrals
∞ 1 ∞
∫ ∫ xp−1 e−ax(y−1) dx dy
1 y 0
By the Laplace identities
Γ(p) ∞ 1
∫ dy
ap 1 y(y − 1)p
Now let y = 1/x
Γ(p) 1 p−1 −p
∫ x (1 − x) dx
ap 0
Using the reflection formula for the Gamma function
86
11.5 Example
Prove that
∞ π2
∫ ez E 2 (z) dz =
0 6
proof
∞ ∞ e−xz e−yz
E 2 (z) = ∫ ∫ dx dy
1 1 xy
∞ ∞ ∞ ∞ e−z(x+y−1)
∫ ez E 2 (z) dz = ∫ ∫ ∫ dx dy dz
0 0 1 1 xy
Swap the integrals
∞ 1 ∞ 1 ∞
∫ ∫ ∫ e−z(x+y−1) dz dx dy
1 y 1 x 0
∞ 1 ∞ 1
∫ ∫ dx dy
1 y 1 x(x + y − 1)
The inner integral is an elementary integral
∞ 1 log(y)
∫ dx = −
1 x(x + y − 1) 1−y
The integral becomes
∞ log(y)
∫ dy
1 y(y − 1)
Now use the substitution y = 1/x
1 log(x) 1 log(1 − x) π2
−∫ dx = − ∫ dx = Li2 (1) =
0 (1 − x) 0 x 6
11.6 Example
Prove that
∞ 2Γ(p)
∫ z p−1 E 2 (z) dz = 2 F1 (p, p; p + 1; −1)
0 p2
proof
87
Consider
∞ e−t
E 2 (z) = ∫ dt
z t
By differentiation with respect to z
2 e−z E(z)
2E ′ (z)E(z) =
z
Knowing that we can return to our integral by integration by parts
2 ∞ p−1 −z
∫ z e E(z) dz
p 0
Write the integral representation
2 ∞ p−1 −z ∞ e−zt
∫ z e ∫ dtdz
p 0 1 t
Swap the two integrals
2 ∞ 1 ∞ p−1 −z(1+t)
∫ ∫ z e dz dt
p 1 t 0
The inner integral reduces to
2Γ(p) ∞ dt
∫
p 1 t(1 + t)p
Use the substitution t = 1/x
2Γ(p) 1 xp−1
∫ dx
p 0 (1 + x)p
1 xb−1 (1 − x)c−b−1
β(c − b, b) 2 F1 (a, b; c; z) = ∫ dx
0 (1 − xz)a
Let c = p + 1, b = p, a = p, z = −1
1 xp−1
β(1, p) 2 F1 (p, p; p + 1; −1) = ∫ dx
0 (1 + x)p
Hence the result
∞ 2Γ(p)
∫ z p−1 E 2 (z) dz = 2 F1 (p, p; p + 1; −1)
0 p2
88
Where
Γ(p) 1
β(1, p) = =
Γ(p + 1) p
11.7 Exercise
∞
∫ xn E 2 (x) dx
0
89
12 Complete Elliptic Integral
π 1 1
E(k) = 2 F1 ( , − , 1, k )
2
2 2 2
proof
1 tb−1 (1 − t)c−b−1
β(c − b, b) 2 F1 (a, b, c, z) = ∫ dt
0 (1 − tz)a
Now use the substitution t = x2 and z = k 2
1 x2b−1 (1 − x2 )c−b−1
β(c − b, b) 2 F1 (a, b, c, k 2 ) = 2 ∫ dx
0 (1 − k 2 x2 )a
Put a = 1
2
;b= 1
2
and c = 1
1 dx 1 1 1
∫ √ √ = β(1/2, 1/2) 2 F1 ( , , 1, k 2 )
0 1 − x2 1−k x
2 2 2 2 2
By the beta function we have
1 1 1 π
β(1/2, 1/2) = Γ2 ( ) =
2 2 2 2
Hence the result
π 1 1
K(k) = 2 F1 ( , , 1, k )
2
2 2 2
90
By the same approach we have
π 1 1
E(k) = 2 F1 ( , − , 1, k )
2
2 2 2
12.4 Example
Prove that

∫_0^1 K(k) dk = 2G

where G is Catalan's constant.

proof

I = ∫_0^1 ∫_0^1 1/(√(1 − x²) √(1 − k²x²)) dx dk

Switching the two integrals

I = ∫_0^1 1/√(1 − x²) [∫_0^1 1/√(1 − k²x²) dk] dx = ∫_0^1 arcsin(x)/(x √(1 − x²)) dx

Now let arcsin(x) = t, hence x = sin(t)

I = ∫_0^{π/2} t/sin(t) dt

The previous integral is a known representation of Catalan's constant:

G = I/2 ⟹ I = 2G
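This is easy to confirm numerically; note that mpmath's ellipk takes the parameter m = k² rather than the modulus k used in this section (an illustrative sketch, mpmath assumed):

```python
from mpmath import mp, quad, ellipk, catalan

mp.dps = 25
# K(k) in the notation of this section is ellipk(k**2) in mpmath
print(quad(lambda k: ellipk(k**2), [0, 1]))   # ~ 1.8319312
print(2 * catalan)                            # 2G
```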
12.5 Identities
1. For k ≥ 1
√
√ ⎛ k ⎞ 1
K( k) = √ K
1 − k ⎝ k − 1⎠
and
91
√
√ √ ⎛ k ⎞
E( k) = 1 − k E
⎝ k − 1⎠
proof
1 dx
K(k) = ∫ √ √
0 1 − x2 1 − k 2 x2
√
Use the substitution x = 1 − y2
1 y dy
∫ √ √ √
0 1 − y2 1 − (1 − y 2 ) 1 − k 2 (1 − y 2 )
1 dy
∫ √ √
0 1 − y2 1 − k2 + k2 y2
√
Take 1 − k 2 as a common factor
1 dy
∫ √ √ √
k2
0
1 − k2 1 − y2 1− k2 −1
y2
√
⎛ k2 ⎞ 1
K(k) = √ K
1 − k2 ⎝ k − 1 ⎠
2
√
We can finish by k → k
√
√ ⎛ k ⎞ 1
K( k) = √ K
1 − k ⎝ k − 1⎠
√
√ 1 1 − kx2
E( k) = ∫ √ dx
0 1 − x2
√
By using that x = 1 − y2
√ √
√ √ 1 1 − k−1
k
y √ ⎛ k ⎞
E( k) = 1−k∫ √ dy = 1 − k E
0 1−y 2 ⎝ k − 1⎠
92
2.
√
1 2 k
K(k) = K( )
k+1 1+k
proof
4z 1 1
2 F1 (a, b, 2b, ) = (1 + z)2a 2 F1 (a, a − b + , b + , z 2 ) .
(1 + z)2 2 2
√
2 k
K( ) = (1 + k)K(k)
1+k
Or we have
√
1 2 k
K(k) = K( )
k+1 1+k
3.
√ √
2 k 1+k 2 −k
K( )= K( )
1+k 1−k 1−k
and
√ √
2 k 1−k 2 −k
E( )= E( )
1+k 1+k 1−k
proof
√
2 k 1 1
K( )=∫ √ √ dx
1+k 0 1 − x2 1 − 4k
x2
(1+k)2
√
2 k 1 1
E( ) = (1 + k) ∫ √ √ dx
1+k 0 1 − x (1 + k)2 − 4k x2
2
√
Use x = 1 − y2
1 1+k 1+k 1 1
∫ √ √ dy = ∫ √ √ dy
0 1 − y 2 (1 + k)2 − 4k (1 − y 2 ) 1−k 0 1 − y2 1 + 4k
y2
(1−k)2
93
Hence we have
√ √
2 k 1+k 2 −k
K( )= K( )
1+k 1−k 1−k
Similarly we have
√ √
2 k 1−k 2 −k
E( )= E( )
1+k 1+k 1−k
1.
1 1
K(i) = √ Γ2 ( )
4 2π 4
and
Γ2 ( 41 ) Γ2 ( 34 )
E(i) = √ + √
4 2π 2π
proof
By definition we have
1 dx 1 dx
K(i) = ∫ √ √ =∫ √
0 1−x 1+x
2 2 0 1 − x4
√
Let x = t we have dx = 14 t
4 −3
4 dt
1 1 −3
K(i) = ∫ t 4 (1 − t) 2 dt
−1
4 0
By beta function
Γ ( 41 ) Γ ( 12 )
K(i) =
4Γ ( 34 )
By reflection formula
1 3 π √
Γ ( ) Γ ( ) = π csc ( ) = π 2
4 4 4
Γ2 ( 41 ) Γ ( 12 ) Γ2 ( 14 )
K(i) = √ = √
4π 2 4 2π
94
Γ2 ( 41 ) Γ2 ( 34 )
E(i) = √ + √
4 2π 2π
By definition we have
√
1 1 + x2
E(i) = ∫ √ dx
0 1 − x2
11 + x2 1 1 1 x2
E(i) = ∫ √ dx = ∫ √ dx + ∫ √ dx
0 1 − x4 0 1 − x4 0 1 − x4
√
The first integral is K(i) for the second integral use x = 4 t
1 1 34 −1 −1
Γ ( 43 ) Γ ( 12 ) Γ2 ( 34 )
∫ t (1 − t) 2 dt = = √
4 0 4Γ ( 54 ) 2π
Hence we have
Γ2 ( 3 ) Γ2 ( 1 ) Γ2 ( 3 )
E(i) = K(i) + √ 4 = √ 4 + √ 4
2π 4 2π 2π
2.
1 1 1
K ( √ ) = √ Γ2 ( )
2 4 π 4
and
1 Γ2 ( 1 ) Γ2 ( 3 )
E ( √ ) = √4 + √4
2 8 π 2 π
proof
√
√ 1 ⎛ k ⎞
K( k) = √ K
1 − k ⎝ k − 1⎠
1 1
K(i) = √ K ( √ )
2 2
95
1 √ 1 1
K ( √ ) = 2K(i) = √ Γ2 ( )
2 4 π 4
Similarly we have
1 Γ2 ( 1 ) Γ2 ( 3 )
E ( √ ) = √4 + √4
2 8 π 2 π
3.
√ √
√ 1+ 2 2 1
K (2 −4 + 3 2) = √ Γ ( )
4 2π 4
and
√ √
√ 2 ⎡⎢ Γ ( ) Γ ( ) ⎤⎥
2 1 2 3
E (2 −4 + 3 2) = √ ⎢ √4 + √ 4 ⎥
1 + 2 ⎢⎣ 8 π π ⎥⎦
proof
√
1 2 k
K(k) = K( )
k+1 1+k
Hence we have for k = √1
2
√ √ √
√ 1+ 2 1 1+ 2 2 1
K (2 −4 + 3 2) = √ K ( √ ) = √ Γ ( )
2 2 4 2π 4
−1
For the elliptic integral of second kind using the hypergeomtric representation with a = 2
and
b= 1
2
4z
2 F1 (−1/2, 1/2, 1, ) = (1 + z)−1 2 F1 (−1/2, −1/2, 1, z 2 )
(1 + z)2
The later hypergeometric series can be written in terms of elliptic integrals using some general
contiguity relations
2
2 F1 (−1/2, −1/2, 1, z )= (2E(k) + (k 2 − 1)K(k))
2
π
So we have
√
2 k
2E(k) + (k 2 − 1)K(k) = (k + 1)E ( )
1+k
96
For k = √1
2
√ √
√ 2 ⎡⎢ Γ ( ) Γ ( ) ⎤⎥
2 1 2 3
E (2 −4 + 3 2) = √ ⎢ √4 + √ 4 ⎥
1 + 2 ⎢⎣ 8 π π ⎥⎦
4.
√ √ √
√ π π 2− 2
K (2 −4 − 3 2) = ⋅
4 Γ2 ( 43 )
and
√
√ √ (2 + 2) (π 2 + 4Γ4 ( 34 ))
E (2 −4 − 3 2) = √
4 πΓ2 ( 34 )
proof
√ √
2 k 1+k 2 −k
K( )= K( )
1+k 1−k 1−k
√
Let x = 1/ 2
√ √ √
√ π π 2− 2
K (2 −4 − 3 2) = ⋅
4 Γ2 ( 43 )
√
√ √ (2 + 2) (π 2 + 4Γ4 ( 34 ))
E (2 −4 − 3 2) = √
4 πΓ2 ( 34 )
Note We should remove the variable k and denote elliptic integrals E and K once there is no confusion.
Interestingly the derivative of elliptic integrals can be written in terms of elliptic integrals
√
d 1 ∂
1 − k 2 x2
E=∫ ∂k
√ dx
dk 0 1 − x2
d 1 −k x2
E=∫ √ √ dx
dk 0 1 − x2 1 − k 2 x2
Adding and subtracting 1 results in
√
1 1 1 − k 2 x2 1 1 dx
∫ √ dx − ∫ √ √
k 0 1−x 2 k 0 1 − x 1 − k 2 x2
2
97
Upon realizing the relation to elliptic integrals we conclude
d E−K
E=
dk k
For the complete elliptic integral of first kind we need more work
d 1 1 ∂ 1
K=∫ √ [√ ] dx
dk 0 1−x2 ∂ k 1 − k 2 x2
d 1 kx2
K=∫ √ √ dx
dk 0 1 − x2 1 − k 2 x2 (1 − k 2 x2 )
Adding and subtracting 1 we have
−1 1 1 − kx2 − 1 1 1 dx K
∫ √ √ dx = ∫ √ √ −
k 0 1 − x2 1 − k 2 x2 (1 − k 2 x2 ) k 0 1 − x2 1 − k 2 x2 (1 − k 2 x2 ) k
Let us focus on the first integral
1 1
∫ √ 3
dx
0 1 − x2 (1 − k 2 x2 ) 2
√
Let x = t and we have dx = 1
√
2 t
dt
t− 2
1
1 1
∫ √ 3 dx
2 0 1 − t(1 − k 2 t) 2
Using the hypergeometric integral representation
t− 2
1
1 1 π 3 1
∫ √ 3 = 2 F1 ( , , 1, k )
2
2 0 1 − t(1 − k 2 t) 2 2 2 2
Using the linear transformation
2 F1 (a, b, c, z) = (1 − z)c−a−b 2 F1 (c − a, c − b, c, z)
√
We get by putting k ′ = 1 − k2
π 3 1 1 π 1 1 E
2 F1 ( , , 1, k ) = 2 F1 (− , , 1, k ) = ′2
2 2
2 2 2 1−k 2
2 2 2 k
So finally we get
d 1 E
K = ( ′2 − K)
dk k k
98
13 Euler sums
13.1 Definition
∞ (p)
(Hk )r
Sp r ,q = ∑
k=1 kq
Where we define the general harmonic number
k k
(p) 1 (1) 1
Hk =∑ p
; H k ≡ Hk = ∑
n=1 n n=1 n
k=1 1−x
Proof
(p)
Start by writing Hk as a sum
∞ ∞ k
(p) 1 k
∑ Hk x = ∑ ∑
k
p
x
k=1 k=1 n=1 n
∞ ∞ ∞
xk 1 ∞ k
∑ ∑ p
= ∑ p ∑
x
n=1 k=n n n=1 n k=n
1 ∞ xn Lip (x)
∑ =
1 − x n=1 np 1−x
We can use this to generate some more functions by integrating.
∞
1 1 − xn 1
∫ dx = ∑ ∫ xk − xn+k dx
0 1−x k=0 0
99
∞
1 1
∑ − = Hn
k=0 k + 1 n+k+1
13.4 Example
∞
Hn
∑ 2
= 2ζ(3)
n=1 n
proof
1 1 ∞ 1 − xn 1 ζ(2) − Li (x)
2
∫ ∑ dx = ∫ dx
0 1 − x n=1 n 2 0 1−x
Now use the functional equation
Hence we have
1 Li2 (1 − x)
∫ = Li3 (1) = ζ(3)
0 1−x
The Second integral using integration by parts
∞
Hn
∑ 2
= ζ(3) + ζ(3) = 2ζ(3)
n=1 n
13.5 Example
∞
Hk k 1
∑ 2
x = Li3 (x) − Li3 (1 − x) + log(1 − x)Li2 (1 − x) + log(x) log2 (1 − x) + ζ(3)
k=1 k 2
proof
100
∞
log(1 − x)
∑ Hk x = −
k
k=1 1−x
Divide by x and integrate to get
∞
Hk k 1
∑ x = Li2 (x) + log2 (1 − x)
k=1 k 2
Now divide by x and integrate again
∞
Hk k 1 x log2 (1 − t)
∑ 2
x = Li3 (x) + ∫ dt
k=1 k 2 0 t
Now let us look at the integral
x log2 (1 − t)
∫ dt
0 t
Integrating by parts
x log2 (1 − t) x Li (t)
2
∫ dt = − log(1 − x)Li2 (x) − ∫ dt
0 t 0 1−t
Use a change of variable in the integral
x Li2 (t) 1 Li (1 − t)
2
∫ dt = ∫ dt
0 1−t 1−x t
Now we can use the second functional equation of the dilogarithm
π2
1
6
− Li2 (t) − log(1 − t) log(t)
∫ dt
1−x t
Separate the integrals
1 Li2 (t)
∫ dt = Li3 (1) − Li3 (1 − x)
1−x t
Use integration by parts in the second integral
1 log(1 − t) log(t)
∫ = Li3 (1) + log(1 − x)Li2 (1 − x) − Li3 (1 − x)
1−x t
Collecting the results together we obtain
101
x Li2 (t) π2
∫ dt = − log(1 − x) − Li2 (1 − x) log(1 − x) + 2 Li3 (1 − x) − 2ζ(3)
0 1−t 6
Hence we solved the integral
x log2 (1 − t) π2
∫ dt = − log(1 − x)Li2 (x) + log(1 − x) + Li2 (1 − x) log(1 − x) − 2 Li3 (1 − x) + 2ζ(3)
0 t 6
∞
Hk k 1 π2
∑ 2
x = Li3 (x) + (− log(1 − x)Li2 (x) + log(1 − x) + Li2 (1 − x) log(1 − x) − 2 Li3 (1 − x) + 2ζ(3))
k=1 k 2 6
∞
Hk k 1
∑ 2
x = Li3 (x) − Li3 (1 − x) + log(1 − x)Li2 (1 − x) + log(x) log2 (1 − x) + ζ(3)
k=1 k 2
13.7 Example
1 log2 (1 − x) log(x) π4
∫ =−
0 x 180
proof
∞
log(1 − x)
∑ Hk x
k−1
=−
k=1 x(1 − x)
By integrating both sides
∞
Hk k 1
∑ x = Li2 (x) + log2 (1 − x)
k=1 k 2
Or
∞
Hk k
log2 (1 − x) = 2 ∑ x − 2Li2 (x)
k=1 k
102
1 ∞
Hk k log(x)
2∫ (∑ x − Li2 (x)) dx
0 k=1 k x
Which simplifies to
∞ 1 1
Hk log(x)
∫ log(x)x dx − 2 ∫ Li2 (x)
k−1
2∑ dx
k=1 k 0 0 x
The second integral
1 log(x) 1 Li (x)
3
−2 ∫ Li2 (x) dx = 2 ∫ dx = 2ζ(4)
0 x 0 x
The first integral
∞ 1
Hk k−1
2∑ ∫ x log(x) dx
k=1 k 0
∞
Hk
−2 ∑ 3
= −5ζ(4) + ζ 2 (2)
k=1 k
Finally we get
1 log2 (1 − x) log(x)
∫ = −5ζ(4) + ζ 2 (2) + 2ζ(4) = ζ 2 (2) − 3ζ(4)
0 x
13.8 Example
Show that
∞ log t log(a2 + b2 ) b
∫ e−at sin(bt) dt = − ( + γ) arctan ( )
0 t 2 a
Proof
∞
I(s) = ∫ ts−1 e−at sin(bt)dt
0
∞ ∞
(−1)n (bt)2n+1
I(s) = ∫ ts−1 e−at ∑
0 n=0 Γ(2n + 2)
By swapping the summation and integration
103
∞
(−1)n (b)2n+1 ∞ 1 ∞ (−1)n (b)2n+1 Γ(s + 2n + 1)
I(s) = ∑ ∫ ts+2n e−at dt = s ∑
n=0 Γ(2n + 2) 0 a n=0 Γ(2n + 2)a2n+1
By differentiating and plugging s = 0 we have
∞ ∞
(−1)n ψ0 (2n + 1) b 2n+1 (−1)n b 2n+1
I ′ (0) = ∑ ( ) − log(a) ∑ ( )
n=0 2n + 1 a n=0 2n + 1 a
∞
(−1)n H2n − γ b 2n+1 b
I ′ (0) = ∑ ( ) − log(a) arctan ( )
n=0 2n + 1 a a
∞
(−1)n H2n b 2n+1 b
I ′ (0) = ∑ ( ) − (γ + log(a)) arctan ( )
n=0 2n + 1 a a
Now we look at the harmonic sum
∞ ∞ 1 1 − t2k
∑ (−1) H2k x = ∑ (−1) x ∫
k 2k k 2k
dt
k=0 k=0 0 1−t
Use the integral representation
1 1 ∞
∑ (−1) x (1 − t ) dt
k 2k 2k
∫
0 1 − t k=0
Swap the series and the integral
1 1 ∞
∑ (−1) (x − (xt) ) dt
k 2k 2k
∫
0 1 − t k=0
Evaluate the geometric series
1 1 1 1 −x2 1 (1 − t2 )
∫ ( − ) dt = ∫ dt
0 1 − t 1 + x2 1 + t 2 x2 1 + x2 0 (1 − t)(1 + t2 x2 )
which simplifies to
−1
(2x arctan(x) + log(1 + x2 ))
2(1 + x2 )
Using this we conclude by integrating
∞
(−1)k H2k 2k 1
∑ x = − log(1 + x2 ) arctan(x)
k=0 2k + 1 2
104
Hence the following
∞
(−1)k H2k b 2k+1 1 a2 + b2 b
∑ ( ) = − log ( ) arctan ( )
k=0 2k + 1 a 2 a2 a
Substituting that in our integral
∞ log t 1 a2 + b2 b
∫ e−at sin(bt) dt = − ( log ( 2
) + γ + log(a)) arctan ( )
0 t 2 a a
log(a2 + b2 ) b
= −( + γ) arctan ( )
2 a
13.9 Example
proof
∞
1 Hk
C (α, k) = ∑ ; C (1, k) =
n=1 nα (n + k) k
This can be solved using
∞
1 1 1
C (α, k) = ∑ ( − )
n=1 k n
α−1 n n+k
1 1
= ζ(α) − C (α − 1, k)
k k
1 1 1
= ζ(α) − 2 ζ(α − 1) + 2 C (α − 2, k)
k k k
⋅
⋅
1 1 ζ(2) 1
= ζ(α) − 2 ζ(α − 1) + ⋯ + (−1)α α−1 + α−1 C (α − (α − 1), k)
k k k k
α−1
ζ(α − n + 1) Hk
= ∑ (−1)n−1 n
+ (−1)α−1 α
n=1 k k
105
Hence we have the general formula
α−1
ζ(α − n + 1) Hk
C (α, k) = ∑ (−1)n−1 n
+ (−1)α−1 α
n=1 k k
β
Dividing by k and summing w.r.t to k
∞ ∞
C (α, k) α−1 Hk
∑ β
= ∑ (−1)n−1 ζ(α − n + 1)ζ(β + n) + (−1)α−1 ∑ α+β
k=1 k n=1 k=1 k
Now we use the general formula
∞
Hn q 1 q−2
∑ q
= (1 + ) ζ(q + 1) − ∑ ζ(k + 1)ζ(q − k)
n=1 n 2 2 k=1
Hence we have
∞
Hn α+β 1 α+β−2
∑ α+β
= (1 + ) ζ(α + β + 1) − ∑ ζ(k + 1)ζ(α + β − k)
n=1 n 2 2 k=1
And the generalization is the following formula
∞
C (α, k) α−1 1 α+β−2
∑ = ∑ (−1) n−1
ζ(α − n + 1)ζ(β + n) − ∑ (−1) ζ(n + 1)ζ(α + β − n)
α−1
∞ ∞ ∞
1 C (p, k)
∑∑ q np (n + k)
= ∑
k=1 n=1 k k=1 kq
(p) ψp−1 (k + 1)
Hk = ζ(p) + (−1)p−1
(p − 1)!
proof
k ∞
(p) 1 1
Hk =∑ p
= ζ(p) − ∑ p
n=1 n n=k+1 n
Now change the index in the sum n = i + k + 1
k ∞
(p) 1 1
Hk =∑ = ζ(p) − ∑
n=1 n p
i=0 (i + k + 1)p
106
We know that
ψp−1 (k + 1) ∞ 1
(−1)p =∑ p≥1
(p − 1)! i=0 (i + k + 1)
p
Hence we have
(p) ψp−1 (k + 1)
Hk = ζ(p) + (−1)p−1
(p − 1)!
We can use that to obtain a nice integral representation.
Note that
1 1 − xa
ψ0 (a + 1) = ∫ dx
0 1−x
By differentiating with respect to a , p times we have
∂ 1 1 − xa
ψp (a + 1) = ∫ dx
∂ap 0 1 − x
1 xa log(x)p
ψp (a + 1) = − ∫ dx
0 1−x
Let a = k
1 xk log(x)p−1
ψp−1 (k + 1) = − ∫ dx
0 1−x
Use the relation to polygamma
1 1 xk log(x)p−1
(p)
Hk = ζ(p) + (−1)p ∫ dx
(p − 1)! 0 1−x
Now divide by k q and sum with respect to k
107
13.12 Symmetric formula
∞ (p)
∞ H (q)
Hk
∑ q
+ ∑ k
p
= ζ(p)ζ(q) + ζ(p + q)
k=1 k k=1 k
proof
Take the leftmost series and swap the finite and infinite sums
∞ ∞ ∞ ∞ ∞
1 1 1 1 1 i−1 1
∑∑ p q
= ∑ ∑ p q
− ∑ p ∑ q
i=1 k=i i k i=1 k=1 i k i=1 i k=1 k
∞ ∞
1 i−1 1 1 i
1 1
∑ p ∑ q = ∑ p (∑ q − p )
i=1 i k=1 k i=1 i k=1 k i
By separating and changing the index we get
∞ (q)
Hk
∑ p
− ζ(p + q)
k=1 k
Hence we have
∞ (p) ∞ H (q)
H
∑ kq = ζ(p)ζ(q) − ∑ kp + ζ(p + q)
k=1 k k=1 k
∞ (p)
∞ H (q)
Hk
∑ q
+ ∑ kp = ζ(p)ζ(q) + ζ(p + q)
k=1 k k=1 k
∞ (n)
Hk ζ 2 (n) + ζ(2n)
∑ n
=
k=1 k 2
13.13 Example
∞ (3)
Hk 11ζ(5)
∑ 2
= − 2ζ(2)ζ(3)
k=1 k 2
proof
∞ (3) ∞ H (2)
H
∑ k2 = ζ(2)ζ(3) + ζ(5) − ∑ k3
k=1 k k=1 k
108
Using integration by parts on the integral
1 Li2 (x)Li2 (1 − x)
∫ dx
0 x
Using the duplication formula
1 Li2 (x)
ζ(2) ∫ dx = ζ(2)ζ(3)
0 x
The third integral
∞ (2)
Hk 3 1 Li22 (x)
∑ 3
= ∫ dx
k=1 k 2 0 x
Hence we finally get that
∞ (3)
Hk 3 1 Li22 (x)
∑ 2
= ζ(2)ζ(3) + ζ(5) − ∫ dx
k=1 k 2 0 x
Let us solve the integral
1 Li22 (x)
∫ dx
0 x
By series expansion
109
∞ ∞ 1 ∞ ∞
1 1
∑∑ ∫ x n+k−1
dx = ∑ ∑
n=1 k=1 (nk) n=1 k=1 (nk) (n + k)
2 0 2
∞
1 ∞ k ∞
1 ∞ 1 ∞
1 ∞ 1
∑ 3 ∑ 2 (n + k)
= ∑ 3 ∑ 2
− ∑ 3 ∑ n(n + k)
k=1 k n=1 n k=1 k n=1 n k=1 k n=1
∞
1 Li22 (x) Hk
∫ dx = ζ(2)ζ(3) − ∑ 4
0 x k=1 k
∞
Hk
∑ 4
= 3ζ(5) − ζ(2)ζ(3)
k=1 k
Hence
1 Li22 (x)
∫ dx = 2ζ(2)ζ(3) − 3ζ(5)
0 x
Finally we get
∞ (3)
H 3 11ζ(5)
∑ k2 = ζ(2)ζ(3) + ζ(5) − (2ζ(2)ζ(3) − 3ζ(5)) = − 2ζ(2)ζ(3)
k=1 k 2 2
110
14 Sine Integral function
14.1 Definition
z sin(x)
Si(z) = ∫ dx
0 x
A closely related function is the following
∞ sin(x)
si(z) = − ∫ dx
z x
These functions are related through the equation
π
Si(z) = si(z) +
2
A closely related function is the sinc function
⎧
⎪
⎪
⎪
⎪ x=0
⎪1
sinc(x) = ⎨
⎪
⎪
⎪
⎪
⎪
sin(x)
x≠0
⎩ x
Using that we conclude
d
Si(x) = sinc(x)
dx
For integration we have
14.2 Example
Show that
∞ π
∫ sin(x)si(x) dx = −
0 4
proof
111
Let 2x = t
1 ∞ sin(t) π
− ∫ dx = −
2 0 t 4
14.3 Example
Prove
∞ Γ(α) πα
∫ xα−1 si(x) dx = − sin ( )
0 α 2
proof
∞ ∞ sin(t)
−∫ xα−1 ∫ dt dx
0 x t
Let xy = t
∞ ∞ sin(xy)
−∫ xα−1 ∫ dy dx
0 1 y
Switching the integrals we get
∞ 1 ∞
−∫ ∫ xα−1 sin(xy) dx dy
1 y 0
Now let xy = t
∞ 1 ∞
−∫ ∫ tα−1 sin(t) dt dy
1 y α+1 0
∞ πs
Ms (sin(x)) = ∫ xs−1 sin(x) dx = Γ(s) sin ( )
0 2
Hence we conclude that
πα ∞ 1 Γ(α) πα
−Γ(α) sin ( )∫ α+1
=− sin ( )
2 1 y α 2
112
14.4 Example
Show that
∞ arctan(α)
∫ e−α x si(x) dx = −
0 α
proof
∞ ∞ sin(t)
−∫ e−α x ∫ dt dx
0 x t
Let xy = t
∞ ∞ sin(xy)
−∫ e−α x ∫ dy dx
0 1 y
Switching the integrals
∞ 1 ∞
−∫ ∫ e−α x sin(xy) dx dy
1 y 0
The inner integral is the laplace transform of the sine function
a
Ls (sin(at)) =
s2 + a2
Hence we conclude that
∞ 1 arctan(α)
−∫ dy = −
1 y 2 + α2 α
14.5 Example
∞
∫ si(x) log(x) dx = γ + 1
0
∞ Γ(α) πα
∫ xα−1 si(x) dx = − sin ( )
0 α 2
Differentiate with respect to α
113
∞ Γ(α) πα Γ(α)ψ(α) πα πΓ(α) πα
∫ xα−1 si(x) log(x)dx = 2
sin ( ) − sin ( ) − cos ( )
0 α 2 α 2 2α 2
Let α → 1
∞
∫ si(x) log(x)dx = 1 − ψ(1) = 1 − (−γ) = 1 + γ
0
14.6 Example
∞
∫ si(x) sin(px) dx
0
solution
∞
si(x) cos(px) 1 ∞ sin(x)
[− ] + ∫ cos(px) dx
p 0
p 0 x
Taking the limits
si(x) cos(px)
lim − =0
x→∞ p
Hence we get
π 1 ∞ sin(x)
− + ∫ cos(px) dx
2p p 0 x
The integral
π π
I= − =0
4 4
114
If p − 1 < 0
1 ∞ sin(2x) π
I= ∫ dx + 0 =
2 0 x 4
Finally we get
⎧
⎪
⎪
⎪
⎪− 2p
π
p>1
⎪
⎪
⎪
∞ ⎪
⎪
⎪
∫ si(x) sin(px)dx = ⎨− π p=1
0 ⎪
⎪
⎪
4p
⎪
⎪
⎪
⎪
⎪
⎪
⎪0 p<1
⎩
14.7 Example
∞ π
∫ si2 (x) cos(ax) dx = log(a + 1)
0 2a
proof
∞
si2 (x) sin(ax) 2 ∞ si(x) sin(x)
[ ] − ∫ sin(ax) dx
a 0
a 0 x
Taking the limits
∞ si(x) sin(x)
I(a) = ∫ sin(ax) dx
0 x
Differentiate with respect to a
∞
I ′ (a) = ∫ si(x) sin(x) cos(ax) dx
0
115
Now use the product to sum trigonometric rules
1 ∞
I ′ (a) = ∫ si(x)(sin((a + 1)x) − sin((a − 1)x)) dx
2 0
From the previous exercise we have
∞ −π
∫ si(x) sin((a + 1)x)dx = ;a>0
0 4(a + 1)
∞
∫ si(x) sin((a + 1)x)dx = 0 ; a < 2
0
π
I ′ (a) = −
4(a + 1)
Integrate with respect to a
π
I(a) = − log(a + 1) + C
4
Let a → 0
I(0) = 0 + C → C = 0
Hence we have
∞ si(x) sin(x) π
∫ sin(ax) dx = − log(a + 1)
0 x 4
Which implies that
∞ −2 π π
∫ si2 (x) cos(ax) dx = (− log(a + 1)) = log(a + 1)
0 a 4 2a
14.8 Example
∞
∫ si(x) cos(ax) dx
0
solution
116
1 ∞ sin(x) sin(ax)
∫ dx
a 0 x
Let the integral
∞ sin(x) sin(ax)
I(t) = ∫ e−tx dx
0 x
Differentiate with respect to t
∞
I ′ (t) = − ∫ e−tx sin(x) sin(ax) dx
0
1 ∞ −tx
I ′ (t) = ∫ e (cos((a + 1)x) − cos((a − 1)x)) dx
2 0
Now we can use the Laplace transform
1 t t
I ′ (t) = ( 2 − 2 )
2 t + (a + 1) 2 t + (a − 1)2
Integrate with respect to t
1 t2 + (a + 1)2
I(t) = − log ( 2 )+C
4 t + (a − 1)2
After verifying the constant goes to 0, we have
∞ 1 a+1 2
∫ si(x) cos(ax) dx = − log ( )
0 4a a−1
117
15 Cosine Integral function
15.1 Definition
Define
∞ cos(t)
ci(x) = − ∫ dt
x t
A related function is the following
x 1 − cos(t)
Cin(x) = ∫ dt
0 t
The derivative is
d cos(x)
ci(x) =
dx x
The integral
Prove that
proof
z 1 − cos(t)
lim ∫ dt − log z
z→∞ 0 t
Can be written
z 1 − cos(t) z 1 ∞ 1 cos(t)
lim ∫ dt − ∫ dt = ∫ − dt
z→∞ 0 t 0 1+t 0 t(1 + t) t
This is equivalent to
∞ ts−1
lim ∫ − ts−1 cos(t)dt
s→0 0 (1 + t)
The first integral
118
∞ ts−1
∫ = Γ(s)Γ(1 − s)
0 (1 + t)
The second integral
∞
∫ ts−1 cos(t)dt = Γ(s) cos(πs/2)
0
Γ(1 − s) − cos(πs/2)
lim
s→0 s
Use L’Hospital rule
15.3 Example
Start by
x 1 − cos(t)
Cin(x) = ∫ dt
0 t
Rewrite as
∞ 1 − cos(t) ∞ 1 − cos(t)
Cin(x) = ∫ dt − ∫ dt
0 t x t
Which simplifies to
z 1 − cos(t)
Cin(x) = lim [∫ dt − log(z)] − ci(x) + log(x)
z→∞ 0 t
The limit goes to the Euler Maschorinit constant
119
15.4 Example
∞
∫ ci(x) cos(px) dx
0
solution
∞
ci(x) sin(px) 1 ∞ cos(x)
[ ] − ∫ sin(px) dx
p 0
p 0 x
Taking the limits
ci(x) sin(px)
lim =0
x→0 p
ci(x) sin(px)
lim =0
x→∞ p
Hence we get
1 ∞ cos(x)
− ∫ sin(px) dx
p 0 x
The integral
π π π
I= + =
4 4 2
If p − 1 < 0
π π
I= − =0
4 4
If p = 1 we have
120
1 ∞ sin(2x) π
I= ∫ dx + 0 =
2 0 x 4
Finally we get
⎧
⎪
⎪
⎪
⎪− 2p
π
p>1
⎪
⎪
⎪
∞ ⎪
⎪
⎪
∫ ci(x) cos(px) dx = ⎨− π p=1
0 ⎪
⎪
⎪
4p
⎪
⎪
⎪
⎪
⎪
⎪
⎪0 p<1
⎩
15.5 Example
∞
∫ ci(px)ci(x) dx
0
solution
Let
∞
I(p) = ∫ ci(px)ci(x) dx
0
1 ∞
I ′ (p) = ∫ cos(px)ci(x) dx
p 0
If p > 1 from the previous example we conclude that
1 −π π
I ′ (p) = ( )=− 2
p 2p 2p
Integrate with respect to p
π
I(p) = +C
2p
Take the limit p → ∞, so C = 0.
15.6 Example
Prove that
∞ Γ(α) απ
∫ xα−1 ci(x) dx = − cos ( )
0 α 2
121
proof
∞ ∞ cos(t)
−∫ xα−1 ∫ dt dx
0 x t
Let t = yx
∞ ∞ cos(yx)
−∫ xα−1 ∫ dy dx
0 1 y
Switch the integrals
∞ 1 ∞
−∫ ∫ xα−1 cos(yx) dx dy
1 y 0
Using the Mellin transform we get
απ ∞ 1 Γ(α) απ
−Γ(α) cos ( )∫ 1+α
dy = − cos ( )
2 1 y α 2
15.7 Example
Prove that
∞ π
∫ ci(x) log(x) dx =
0 2
proof
∞ Γ(α) απ
∫ xα−1 ci(x) dx = − cos ( )
0 α 2
Differentiate with respect to α
∞ π π π
∫ ci(x) log(x) dx = 0 − 0 + sin ( ) =
0 2 2 2
122
15.8 Example
Show that
∞ 1 √
∫ ci(x)e−αx dx = − log 1 + α2
0 α
proof
∞ ∞ cos(yx)
−∫ e−αx ∫ dy dx
0 1 y
Switch the integrals
∞ 1 ∞
−∫ ∫ e−αx cos(yx) dx dy
1 y 0
Use the Laplace transformation
∞ α 1 1 √
−∫ dy = − log(1 + α2 ) = − log 1 + α2
1 y(α2+y )
2 2α α
123
16 Integrals involving Cosine and Sine Integrals
16.1 Example
solution
∞ ∞ cos(yx)
−∫ si(qx) ∫ dy dx
0 1 y
Switch the integrals
∞ 1 ∞
−∫ ∫ si(qx) cos(yx) dx dy
1 y 0
We also showed that
∞ 1 a+1
∫ si(x) cos(ax) dx = − log ( )
0 2a a−1
Let a = y/q
∞ q y+q
∫ si(x) cos(yx/q) dx = − log ( )
0 2y y−q
Let x = tq
∞ 1 y+q
∫ si(qt) cos(yt) dx = − log ( )
0 2y y−q
Substitue the value of the itnegral
1 ∞ 1 y+q
∫ log ( ) dy
2 1 y 2 y−q
We can prove that the anti-derivative
∞
log(y) 1 1 y+q
[ − log(y 2 − q 2 ) − log ( )]
q 2q 2y y−q 1
Which simplifies
∞
1 y2 − q2 1 y+q
[− log ( )− log ( )]
2q y2 2y y−q 1
124
The limit y → ∞
1 y2 − q2 1 y+q
lim log ( )+ log ( )=0
y→∞ 2q y 2 2y y−q
The limit y → 1
1 1 1+q
log (1 − q 2 ) + log ( )
2q 2 1−q
Can be written as
1 2 1 1+q 2
log (1 − q 2 ) + log ( )
4q 4 1−q
16.2 Example
Prove that
∞ ci(αx) 1
∫ dx = − {si(αβ)2 + ci(αβ)2 }
0 x+β 2
proof
∞ ci(αx)
I(α) = ∫ dx
0 x+β
Differentiate with respect to α
1 ∞ cos(αx)
I ′ (α) = ∫ dx
α 0 x+β
Let x + β = t
1 ∞ cos(α(t − β))
I ′ (α) = ∫ dt
α β t
Use trignometric rules
125
cos(αβ) sin(αβ)
I ′ (α) = − ci(αβ) − si(αβ)
α α
Integrate with respect to α
1
I(α) = − {si(αβ)2 + ci(αβ)2 } + C
2
If α → ∞ we have C = 0.
126
17 Logarithm Integral function
17.1 Definition
Define =
x dt
li(x) = ∫
0 log(t)
The derivative is
d 1
li(z) =
dz log(z)
The integral
z x
∫ li(z) dz = zli(z) − ∫ dx
0 log(x)
In the integral let −2 log(x) = t
∞ e−t
∫ li(z) dz = zli(z) + ∫ dt = zli(z) − Ei(2 log z)
−2 log(z) t
17.2 Example
Prove that
1
∫ li(x) dx = − log(2)
0
proof
1 x e−a log(t) dt
I(a) = ∫ ∫ dx
0 0 log(t)
Differentiate with respect to a
1 x
I ′ (a) = − ∫ ∫ t−a dt dx
0 0
127
1 1
I ′ (a) = 1−a
∫ x dx
a−1 0
Which reduces to
1 1 1
I ′ (a) = = −
(a − 1)(2 − a) 2 − a 1 − a
Integrate with respect to a
1−a
I(a) = log ( )+C
2−a
Take the limit a → ∞ we get C = 0
1 1
∫ li(x) dx = log ( ) = − log(2)
0 2
solution
1 x e−a log(t) dt
I(a) = ∫ xp−1 ∫ dx
0 0 log(t)
Differentiate with respect to a
1 x
I ′ (a) = − ∫ xp−1 ∫ t−a dt dx
0 0
1 1
I ′ (a) = ∫ x
p−a
dx
a−1 0
Which reduces to
1 1 1 1
I ′ (a) = = { − }
(a − 1)(p − a + 1) p p − a + 1 1 − a
Integrate with respect to a
128
1 1−a
I(a) = log ( )+C
p p−a+1
Take the limit a → ∞ we get C = 0
1 x e−a log(t) dt 1 1 1
∫ xp−1 ∫ dx = log ( ) = − log (p + 1)
0 0 log(t) p p+1 p
e−b log(t) dt
1
1 x
I(b) = ∫ sin(a log(x)) ∫ dx
0 0 log(t)
Differentiate with respect to b
1
1
I ′ (b) = − ∫ t−b dt dx
x
sin(a log(x)) ∫
0 0
1 1
I ′ (b) = b−1
∫ x sin(a log(x)) dx
b−1 0
Let log(x) = −t
1 ∞
I ′ (b) = ∫ e−tb sin(at) dt
1−b 0
Using the Laplace transform
a
I ′ (b) =
(b − 1)(a2 + b2 )
Integrate with respect to b
129
π
0= +C
2(a2 + 1)
Hence we have
1 a log(a2 ) π 1 π
∫ sin(a log(x))li(x) dx = − = (a log(a) − )
0 2a2 + 2 2(a2 + 1) a2 + 1 2
17.5 Example
1 li(x) 1
∫ logp−1 ( ) dx
0 x x
proof
1 1 x e−a log(t) 1
I(a) = ∫ [∫ dt] logp−1 ( ) dx
0 x 0 log(t) x
Differentiate with respect to a
1 1 x 1
I ′ (a) = − ∫ [∫ t−a dt] logp−1 ( ) dx
0 x 0 x
1 1
p−1 1
I ′ (a) = −a
∫ x log ( ) dx
a−1 0 x
Let − log(x) = t
1 ∞
I ′ (a) = ∫ e−(1−a)t tp−1 dt
a−1 0
Γ(p) Γ(p)
I ′ (a) = − =−
(1 − a)(1 − a)p (1 − a)p+1
Integrate with respect to a
Γ(p)
I(a) = −
p(1 − a)p
130
Let a → 0, Hence
1 li(x) 1 Γ(p)
∫ logp−1 ( ) dx = −
0 x x p
17.6 Example
Prove that
∞ 1 π
∫ li ( ) logp−1 (x) dx = − Γ(p)
1 x sin(πp)
proof
∞
I(a) = ∫ li (x−a ) logp−1 (x) dx
1
x−a
−a
d d x dt
li (x−a ) = ∫ =
da da 0 log(t) a
Hence we have
1 ∞
I ′ (a) = ∫ x−a logp−1 (x) dx
a 1
Let log(x) = t
1 ∞
I ′ (a) = ∫ e−(a−1)t tp−1 dt
a 0
Using the Laplace transform
1
I ′ (a) = Γ(p)
a(a − 1)p
Take the integral
∞ ∞ 1
∫ I ′ (a)da = Γ(p) ∫ da
1 1 a(a − 1)p
The left hand-side
∞ 1
I(∞) − I(1) = Γ(p) ∫ da
1 a(a − 1)p
131
Now since I(∞) = 0
∞ 1
I(1) = −Γ(p) ∫ da
1 a(a − 1)p
Which implies that
∞ 1 ∞ 1
∫ li ( ) logp−1 (x) dx = −Γ(p) ∫ da
1 x 1 a(a − 1)p
Now let t = a − 1
∞ t−p
∫ dt
0 t+1
Using the beta integral x + y = 1 and x − 1 = −p which implies that x = 1 − p, y = p
Hence we have
∞ t−p π
∫ dt = β(p, 1 − p) = Γ(p)Γ(1 − p) =
0 t+1 sin(πp)
Finally we get
∞ 1 π
∫ li ( ) logp−1 (x) dx = − Γ(p)
1 x sin(πp)
132
18 Clausen functions
18.1 Definition
Define
⎧
⎪
⎪
⎪ ∞ sin(kθ)
⎪∑k=1
⎪ km
m is even
clm (θ) = ⎨
⎪
⎪
⎪ ∞
⎪ cos(kθ)
⎩∑k=1
⎪ km
m is odd
proof
If m is even then
∞ ∞
sin(kπ − kθ) sin(kθ)
clim (π − θ) = ∑ m
= − ∑ (−1)k
k=1 k k=1 km
This implies
∞
sin(kθ) ∞ sin(kθ) 1 ∞ sin(2kθ)
cl2 (θ) + clim (π − θ) = ∑ (−1)k m
+∑ m
= m−1 ∑
k=1 k k=1 k 2 k=1 km
This implies that
If m is odd then
∞
cos(kπ − kθ) ∞ cos(kθ)
clim (π − θ) = ∑ m
= ∑ (−1)k
k=1 k k=1 km
∞
cos(kθ) ∞ k cos(kθ) 1 ∞ sin(2kθ)
∑ + ∑ (−1) = ∑
k=1 km k=1 km 2m−1 k=1 k m
Which implies that
133
18.3 Example
π
∫ clm (θ)dθ
0
solution
π ∞ sin(kθ)
∫ ∑ dθ
0 k=1 km
Swap the integral and the series
∞ π
1
∑ m ∫ sin(kθ)dθ
k=1 k 0
The integral
π 1 π
−(−1)k + 1
∫ sin(kθ)dθ = − [ cos(kθ)] =
0 k 0 k
We get the summation
∞
−(−1)k + 1
∑ = ζ(m + 1) + η(m + 1)
k=1 k m+1
Now use that
∞
−(−1)k + 1
∑ m+1
= ζ(m + 1) + (1 − 2−m )ζ(m + 1) = ζ(m + 1)(2 − 2−m )
k=1 k
18.4 Example
∞
∫ clm (θ)e−nθ dθ
0
∞ ∞ sin(kθ) −nθ
∫ ∑ e dθ
0 k=1 km
134
Swap the integral and the series
∞ ∞
1
∑ m ∫ sin(kθ)e−nθ dθ
k=1 k 0
∞
1
∑ m−1 (k 2 + n2 )
k=1 k
Add and subtract k 2 and divide by n2
1 ∞ k 2 + n2 − k 2
∑
n2 k=1 k m−1 (k 2 + n2 )
Distribute the numerator
1 1 ∞ 1
ζ(m − 1) − ∑ m−3 2
n 2 2
n k=1 k (k + n2 )
Continue this approach to conclude that
j
1 (−1)j ∞ 1
∑(−1) − (2l − +
l−1
ζ(m 1)) ∑ m−(2j+1) 2
l=1 n 2l n 2j
k=1 k (k + n2 )
Let m − 2j − 1 = 1 which implies that j = m/2 − 1
m/2−1
1 (−1)m/2−1 ∞ 1
∑ (−1) − (2l − +
l−1
ζ(m 1)) ∑
l=1 n2l nm−2 k=1 k(k 2 + n2 )
Now let us look at the sum
∞ ∞
1 1 1 1
∑ =∑ { − }
k=1 k(k 2 + n ) k=1 2ink k − in k + in
2
∞
1 1 ∞ 1 in −in
∑ = ∑ { + }
k=1 k(k + n ) 2n k=1 k k + in k − in
2 2 2
∞
1 1
∑ = 2 {γ + ψ(1 + in) + ψ(1 − in) + γ}
k=1 k(k 2 + n ) 2n
2
which simplifies to
∞
1 ψ(1 − in) + ψ(1 + in) + 2γ
∑ 2 + n2 )
=
k=1 k(k 2n2
135
Now we we can verify ψ(1 − in) = ψ(1 + in)
∞
1 2R {ψ(1 + in)} + 2γ R {ψ(1 + in)} + γ)
∑ 2 + n2 )
= =
k=1 k(k 2n2 n2
This concludes to
m/2−1
ζ(m − (2l − 1)) R {ψ(1 + in)} + γ
∑ (−1) + (−1)m/2−1
l−1
l=1 n2l nm
136
19 Clausen Integral function
19.1 Definiton
We define
sin(kx)
cl2 (x) = ∑
k=1 k2
∞
eikθ ∞ cos(kθ) ∞
sin(kθ)
Li2 (eiθ ) = ∑ 2
= ∑ 2
+ i ∑
k=1 k k=1 k k=1 k2
By the integral definition of the dilogarithm
eiθ log(1 − x)
Li2 (eiθ ) − ζ(2) = − ∫ dx
1 x
Let x = eiφ
θ
Li2 (eiθ ) − ζ(2) = −i ∫ log(1 − eiφ )dφ
0
Which simplifies to
θ
Li2 (eiθ ) − ζ(2) = −i ∫ log [2 sin(φ/2)e−(i/2)(π−φ) ] dφ
0
θ 1 1
Li2 (eiθ ) − ζ(2) = −i ∫ log [2 sin(φ/2)] dφ + (π − θ)2 − π 2
0 4 4
137
By equating the imaginary parts we have our result.
proof
2θ t
cl2 (2θ) = − ∫ log [2 sin ( )] dt
0 2
Let t = 2φ
θ
−2 ∫ log [2 sin (φ)] dφ
0
θ φ φ
−2 ∫ log [4 sin ( ) cos ( )] dφ
0 2 2
Separate the logarithms
θ φ θ φ
−2 ∫ log [2 sin ( )] dφ − 2 ∫ log [2 cos ( )] dφ
0 2 0 2
We can verify that
θ φ
cl2 (π − θ) = ∫ log [2 cos ( )] dφ
0 2
Hence
138
3π π
cl2 (3π) = 2cl2 ( ) − 2cl2 (− )
2 2
Since cl2 (3π) = 0
3π π π
cl2 ( ) = cl2 (− ) = −cl2 ( ) = −G
2 2 2
19.4 Example
Prove that
2π π5
∫ cl2 (x)2 dx =
0 90
Using the series representation
∞ ∞ 2π
1
∑∑ ∫ sin(kx) sin(nx)dx
k=1 n=1 (nk)
2 0
2π 1 2π
∫ sin(kx) sin(nx) dx = ∫ cos((k − n)x) − cos((k + n)x) dx
0 2 0
We have two cases
If n = k then
1 2π
∫ 1 − cos(2nx) dx = π
2 0
If n ≠ k
2π
1 2π 1 sin((k − n)x) sin((k + n)x)
∫ cos((k − n)x) − cos((k + n)x) dx = [ − ] =0
2 0 2 k−n k+n 0
Hence we have
⎧
⎪
⎪
⎪
⎪ n≠k
2π ⎪0
∫ sin(kx) sin(nx) dx = ⎨
0 ⎪
⎪
⎪
⎪
⎪π n=k
⎩
We can write the series as
∞ ∞ ∞ ∞
1 1 1
∑∑ = ∑ + ∑
n=1 k=1 (nk) n≠k (nk)
2 2 4
n=1 n
139
∞
1 π5
π∑ 4
= πζ(4) =
n=1 n 90
19.5 Example
Prove that
π/2 7 π2
∫ x log(sin x) dx = ζ(3) − log 2
0 16 8
proof
π π
2 2 π2
I =∫ x log(sin x) dx = ∫ x log(2 sin x) − log(2)
0 0 8
The integral reduces to
π π
2 1 2
∫ x log(2 sin x)dx = ∫ Cl2 (2θ) dθ
0 2 0
π ∞
1 2 sin(2nθ)
= ∫ ∑ dθ
2 0 n=1 n2
1 (−1)n 1 1
=− ∑ 3
+ ∑ 3
4 n=1 n 4 n=1 n
7
= ζ(3)
16
7 π2
I= ζ(3) − log(2)
16 8
19.6 Example
Prove that
π/4 π
∫ x cot(x) dx = − log(2) + G/2
0 8
proof
π/4 π π/4
∫ x cot(x) dx = log(2) − ∫ log(sin x) dx
0 8 0
In the integral
140
π/4
∫ log(sin x) dx
0
Let x → t/2
1 π/2
∫ log(sin t/2) dt
2 0
Which can be written as
1 π/2 1 π/2
∫ log(2 sin t/2) dt − ∫ log(2) dt
2 0 2 0
Using the Clausen integral function we get
1 π
− cl2 (π/2) − log(2)
2 4
Note that cl2 (π/2) = G
We deduce that
π/4 π
∫ log(sin x) dx = −G/2 − log(2)
0 4
Collecting the results we have
π/4 π π π
∫ x cot(x) dx = log(2) + G/2 − log(2) = − log(2) + G/2
0 8 4 8
Prove that
1 log(x)
cl2 (θ) = − sin(θ) ∫ dx
0 x2 − 2 cos(θ)x + 1
proof
Note that
This implies
1 1 1 1
= iθ −iθ
{ − }
x2 − 2 cos(θ)x + 1 e − e x−e iθ x − e−iθ
141
Note that eiθ − e−iθ = 2i sin(θ)
1 1 1 1
= { − }
x2 − 2 cos(θ)x + 1 2i sin(θ) x − e iθ x − e−iθ
Now use the geometric series
∞ ∞
1 −1 k −i(k+1)θ
= { ∑ x e − ∑ xk ei(k+1)θ }
x2 − 2 cos(θ)x + 1 2i sin(θ) k=0 k=0
∞
1 1
= k−1
∑ x sin(kθ)
x2 − 2 cos(θ)x + 1 sin(θ) k=1
That implies
1 ∞ 1
log(x)
− sin(θ) ∫ dx = − ∑ sin(kθ) ∫ xk−1 log(x) dx = cl2 (θ)
0 x − 2 cos(θ)x + 1
2
k=1 0
19.8 Example
2π
cl2 ( )
3
proof
√
3 1 log(x)
cl2 (2π/3) = − ∫ dx
2 0 x2 + x + 1
Use that
x3 − 1 = (x − 1)(x2 + x + 1)
Hence
√
3 1 x−1
cl2 (2π/3) = − ∫ log(x)dx
2 0 x3 − 1
Let x3 = t
1 1 t1/3−1 (t1/3 − 1)
cl2 (2π/3) = − √ ∫ log(t)dx
6 3 0 t−1
Note that
142
1 xs−1
ψ ′ (s) = ∫ log(x) dx
0 1−x
We deduce that
1
cl2 (2π/3) = − √ (ψ ′ (2/3) − ψ ′ (1/3))
6 3
143
20 Barnes G function
20.1 Definition
z + z 2 (1 + γ) ∞ z n z2
G(z + 1) = (2π)z/2 exp (− ) ∏ {(1 + ) exp ( − z)}
2 n=1 n 2n
Prove that
G(z + 1) = Γ(z)G(z)
proof
G(z + 1) √ γ ∞ k+z k 2z − 1 − 2k
= 2π exp (−z − γz + ) ∏ ( ) exp ( ).
G(z) 2 k=1 k + z − 1 2k
This can be written as
G(z + 1) √ γ ∞ k+z k 1 + 2k z
= zΓ(z) 2π exp (−z + ) ∏ ( ) exp (− ) (1 + )
G(z) 2 k=1 k + z − 1 2k k
It suffices to prove that
√ γ ∞ k+z k 1 + 2k z
z 2π exp (−z + ) ∏ ( ) exp (− ) (1 + ) = 1
2 k=1 k + z − 1 2k k
or
∞
k+z k 1 + 2k z exp (z − γ2 )
∏( ) exp (− ) (1 + ) = √
k=1 k+z−1 2k k z 2π
Start by
N
k+z k 1 + 2k z
lim ∏ ( ) exp (− ) (1 + )
N →∞ k=1 k+z−1 2k k
Notice
144
∏k=1 (k + z)k ∏k=1 (1 + kz )
N N
N
k+z k z
∏( ) (1 + ) =
k=1 k+z−1 k ∏k=1 (k + z − 1)k
N
−1 N −1
(N + z)N +1 ∏N
k=1 (k + z) ∏k=1 (k + z)
k
= −1
k=1 (k + z)
zN ! ∏N k+1
−1
(N + z)N +1 ∏N
k=1 (k + z)
k+1
= −1
k=1 (k + z)
zN ! ∏N k+1
(N + z)N +1
=
zN !
N
1 + 2k N
1 + 2k
) = e− 2 HN −N
1
∏ exp (− ) = exp (− ∑
k=1 2k k=1 2k
Hence we have the following
(N + z)N +1
e− 2 HN −N
1
zN !
According to Stirling formula we have
(N + z)N +1 (N + z)N +1 1
e− 2 HN −N ∼ e− 2 HN −N
1 1
×√
zN ! z(N /e)N
2πN
By some simplifications we have
lim Hn − log(n) = γ
n→∞
and
z N
lim (1 + ) = ez
n→∞ N
145
proof
This simplifies to
∞
G(1 − z) (n − z)n 2z
= (2π)−z ez ∏ e
G(1 + z) n=1 (n + z)
n
∞
G(1 − z) (n − z)n 2z
log { } = −z log(2π) + z + log { ∏ e }
G(1 + z) n=1 (n + z)
n
∞ ∞
(n − z)n 2z
f (z) = log { ∏ e } = ∑ n log(n − z) − n log(n + z) + 2z
n=1 (n + z)
n
n=1
∞ ∞
−n n −n(n + z) − n(n − z) + 2(n2 − z 2 )
f ′ (z) = ∑ − +2= ∑
n=1 n − z n+z n=1 n2 − z 2
Hence we have
∞
−2z 2
∑
n=1 n − z
2 2
∞
−2z 2
zπ cot πz = 1 + ∑
n=1 n − z
2 2
f ′ (z) = zπ cot πz − 1
z
f (z) = ∫ xπ cot(πx) dx − z
0
Hence we have
146
G(1 − z) z
log { } = −z log(2π) + ∫ zπ cot(πx) dx
G(1 + z) 0
z z
∫ xπ cot(πx) dx = z log(sin πz) − ∫ log(sin πx)dx (1)
0 0
z
= z log(2 sin πz) − ∫ log(2 sin πx)dx (2)
0
1 2πz x
= z log(2 sin πz) − ∫ log (2 sin ) dx (3)
2π 0 2
cl2 (2πz)
= z log(2 sin πz) + (4)
2π
(5)
That implies
Prove that
n−1
G(n) = ∏ Γ(k)
k=1
proof
n−1
G(n) = ∏ Γ(k)
k=1
We want to show
n
G(n + 1) = ∏ Γ(k)
k=1
n−1 n
G(n + 1) = Γ(n)G(n) = Γ(n) ∏ Γ(k) = ∏ Γ(k)
k=1 k=1
147
20.4 Relation to Hyperfactorial function
n
H(n) = ∏ k k
k=1
(n!)n
G(n + 1) =
H(n)
proof
suppose that
Γ(n)n−1
G(n) =
H(n − 1)
we want to show that
Γ(n)n−1
G(n + 1) = Γ(n)G(n) = Γ(n)
H(n − 1)
Notice that
n
n−1
∏ k k H(n)
H(n − 1) = ∏ k k = =
nn nn
We deduce that
Γ(n)n × nn (n!)n
G(n + 1) = Γ(n)G(n) = =
H(n) H(n)
Prove that
z z z(z − 1)
∫ log Γ(x)dx = log(2π) + + z log Γ(z) − log G(z + 1)
0 2 2
proof
z z + z 2 (1 + γ) ∞ z z2
log G(z + 1) = log(2π) − + ∑ n log (1 + ) + −z
2 2 n=1 n 2n
148
Let the following
∞
z z2
f (z) = ∑ n log (1 + )+ −z
n=1 n 2n
Differentiate with respect to z
∞ ∞
n z z2
f ′ (z) = ∑ + −1= ∑
n=1 z + n n n=1 n(n + z)
1 ∞ z
ψ(z) = −γ − +∑
z n=1 n(n + z)
which implies that
∞
z2
∑ = zψ(z) + γz + 1
n=1 n(n + z)
Hence we have
f ′ (z) = zψ(z) + γz + 1
z γz 2
f (z) = ∫ xψ(x)dx + +z
0 2
which implies that
z γz 2
f (z) = z log Γ(z) − ∫ log Γ(x)dx + +z
0 2
Hence we have
z z + z 2 (1 + γ) z γz 2
log G(z + 1) = log(2π) − + z log Γ(z) − ∫ log Γ(x)dx + +z
2 2 0 2
By some rearrangements we have
z z z(z − 1)
∫ log Γ(x)dx = log(2π) + + z log Γ(z) − log G(z + 1)
0 2 2
149
20.6 Glaisher-Kinkelin constant
H(n)
A = lim
n→∞ nn2 /2+n/2+1/12 e−n2 /4
Prove that
G(n + 1) e1/12
lim =
n→∞ (2π)n/2 nn /2−1/12 e−3n /4
2 2
A
proof
(n!)n
lim
n→∞ H(n)(2π)n/2 nn /2−1/12 e−3n /4
2 2
2
+n/2 −n2 +1/12
(n!)n ∼ (2π)n/2 nn e
2
/2+n/2+1/12 −n2 /4
nn e e1/12
e1/12 lim =
n→∞ H(n) A
20.8 Example
Prove that
π2
ζ ′ (2) = (log(2π) + γ − 12 log A)
6
We already proved that
G(n + 1) 1
log [ ]∼ − log A
(2π)n/2 nn2 /2−1/12 e−3n2 /4 12
150
Let the following
G(n + 1)
f (n) = log [ ]
(2π)n/2 nn2 /2−1/12 e−3n2 /4
Use the series representation of the Barnes functions
⎡ ⎤
⎢ (2π)n/2 exp (− n+n 2(1+γ) ) ∏∞
2
n k n2
⎢ k=1 {(1 + k ) exp ( 2k − n)} ⎥
⎥
f (n) = log ⎢ ⎥
⎢ (2π) n/2 nn2 /2−1/12 e−3n2 /4 ⎥
⎢ ⎥
⎣ ⎦
Which reduces to
n + n2 (1 + γ) ∞ n n2 n2 1 3n2
f (n) = − + ∑ {k log (1 + ) + − n} − ( − ) log(n) +
2 k=1 k 2k 2 12 4
Differentiate with respect to n
1 n 1 3n
f ′ (n) = − − n − γn + nψ(n) + γn + 1 − n log(n) − + +
2 2 12n 2
Note that we already showed that
d ∞ n n2
∑ {k log (1 + ) + − n} = nψ(n) + γn + 1
dn k=1 k 2k
By simplifications we have
1 1
f ′ (n) = nψ(n) − n log(n) + +
12n 2
Now use that
1 ∞ zdz
ψ(n) = log(n) − − 2∫ dz
2n 0 (n2 + z 2 )(e2πz − 1)
Hence we deduce that
∞ nzdz 1
f ′ (n) = −2 ∫ dz +
0 (n2 + z 2 )(e2πz − 1) 12n
Integrate with respect to n
∞ z log(n2 + z 2 ) 1
f (n) = − ∫ dz + log(n) + C
0 (e 2πz − 1) 12
Take the limit n → 0
1 ∞ z log(z 2 )
C = lim f (n) − log(n) + ∫ dz
n→0 12 0 (e2πz − 1)
151
Hence we have the limit
G(n + 1) 1 G(n + 1)
lim − log(n) = lim log =0
n→0 (2π)n/2 nn2 /2−1/12 e−3n2 /4 12 n→0 (2π) nn2 /2 e−3n2 /4
n/2
∞ z log(z)
C = 2∫ dz
0 (e2πz − 1)
Finally we have
∞ z log(n2 + z 2 ) 1 ∞ z log(z)
f (n) = − ∫ dz + log(n) + 2 ∫ dz
0 (e 2πz − 1) 12 0 (e2πz − 1)
z2
∞ z log (1 + n2
) ∞ z 1 ∞ z log(z)
f (n) = − ∫ dz − log(n2 ) ∫ dz + log(n) + 2 ∫ dz
0 (e2πz − 1) 0 (e2πz − 1) 12 0 (e2πz − 1)
Also we have
∞ z 1
∫ dz =
0 (e2πz − 1) 24
That simplifies to
z2
∞ z log (1 + n2
) ∞ z log(z)
f (n) = − ∫ dz + 2 ∫ dz
0 (e2πz − 1) 0 (e2πz − 1)
Take the limit n → ∞
∞ z log(z) 1
2∫ dz = − log A
0 (e 2πz − 1) 12
Now use that
∞ z log(z) ∞ z log(z) 1
2∫ dz = 2 ∫ × dz
0 (e 2πz − 1) 0 e 2πz 1 − e−2πz
∞ ∞
=2∑∫ e−2πz(n+1) z log(z) dz
n=0 0
∞
ψ(2) − log(2π) + log(n)
=∑
n=1 2π 2 n2
(ψ(2) − log(2π))ζ(2) + ζ ′ (2)
=
2π 2
1
ζ ′ (2) = (log(2π) − ψ(2))ζ(2) + 2π 2 ( − log A) = ζ(2)(log(2π) + γ − 12 log A)
12
152
20.9 Example
Prove that
1
ζ ′ (−1) = − log A
12
proof
Start by
m
m1−s m−s sm−s−1
ζ(s) = lim ( ∑ k −s − − + ) , Re(s) > −3.
m→∞
k=1 1−s 2 12
Differentiate with respect to s
m
m1−s m1−s m−s m−s−1 m−s−1
ζ ′ (s) = lim (− ∑ k −s log(k) − + log(m) + log(m) + − log(m))
m→∞
k=1 (1 − s) 2 1−s 2 12 12
Now let s → −1
m
m2 m2 m 1 1
ζ ′ (−1) = lim (− ∑ k log(k) − + log(m) + log(m) + − log(m))
m→∞
k=1 4 2 2 12 12
Take the exponential of both sides
2
/2+m/2−1/12 −m2 /4 2
/2+m/2−1/12 −m2 /4
ζ ′ (−1) mm e mm e e1/12
e =e 1/12
lim ∑m k log(k)
=e 1/12
lim =
m→∞ e k=1 m→∞ H(m) A
We conclude that
1
ζ ′ (−1) = − log A
12
Prove that
proof
153
z log(z) z 2 log(z) z 2 ∞ x log(x2 + z 2 ) + 2z arctan(x/z)
ζ ′ (−1, z) = − + − +∫ dx
2 2 4 0 (e2πx − 1)
Now use that
1 ∞ x
ψ(z) = log(z) − − 2∫ dx
2z 0 (z 2 + x2 )(e2πx − 1)
Which implies that
∞ 2zx 1
∫ dx = z log(z) − − zψ(z)
0 (z 2 + x2 )(e2πx − 1) 2
By taking the integral
∞ x log(x2 + z 2 ) − x log(x2 ) z z z
∫ dx = ∫ x log(x) dx − ∫ xψ(x) dx −
0 (e 2πx − 1) 0 0 2
Which simplifies to
∞ x log(x2 + z 2 ) z2 1 2 z z
′
∫ dx = ζ (−1) − + z log(z) − z log Γ(z) + ∫ log Γ(x) dx −
0 (e 2πx − 1) 4 2 0 2
Also we have
∞ x 1
2∫ dx = log(z) − − ψ(z)
0 (x2 + z 2 )(e2πx − 1) 2z
By integration we have
∞ arctan(x/z) log(z)
2∫ dx = z + − z log(z) + log Γ(z) + C
0 (e2πx − 1) 2
Let z → 1 to evaluate the constant
∞ arctan(x/z) log(z) 1
2∫ dx = z + − z log(z) + log Γ(z) − log(2π)
0 (e 2πx − 1) 2 2
Multiply by z
∞ z arctan(x/z) z log(z) z
2∫ dx = z 2 + − z 2 log(z) + z log Γ(z) − log(2π)
0 (e2πx − 1) 2 2
Substitute both integrals in our formula
z z z(1 − z) ′ ′
∫ log Γ(x) dx = log(2π) + − ζ (−1) + ζ (−1, z)
0 2 2
We also showed that
154
z z z(1 − z)
∫ log Γ(x) dx = log(2π) + + z log Γ(z) − log G(z + 1)
0 2 2
By equating the equations we get our result.
20.11 Example
Prove that
1
G ( ) = 21/24 π −1/4 e1/8 A−3/2
2
proof
We know that
Note that
1
ζ (s, ) = (2s − 1)ζ(s)
2
Which implies that
1 log(2) 1
ζ ′ (−1, ) = ζ(−1) − ζ ′ (−1)
2 2 2
Hence we have
1 1 1 3 log(2)
log G ( ) + log Γ ( ) = ζ ′ (−1) − ζ(−1)
2 2 2 2 2
Using that we have
1
G ( ) = 21/24 π −1/4 e 2 ζ (−1)
3 ′
2
Note that
1
ζ(−1) = −
12
This can be proved by the functional equation of the zeta function.
155
21 Complex Analysis
The idea of complex numbers originated from trying to solve polynomials like those in the form x2 +1 = 0.
If we do a simple algebra then it is easy to induce that x2 = −1 but we know that there is no real number
whose square is a negative number. Which implies that this polynomial has no solutions in the set of
real numbers. Hence, we need to expand the set of real numbers R to the set of complex numbers C. By
definition a complex number can be written in the form z = x + iy where x, y ∈ R. We say the real part of
R(z) = x and the imaginary part I(z) = y. We can think of i as a special symbol with the property i2 = −1
√
or i = −1. Now let us define some algebraic operations.
Let a = x1 + iy1 and b = x2 + iy2 be two complex numbers then we define the following operations
a ± b = (x1 + x2 ) ± i(y1 + y2 )
a × b = x1 x2 − y1 y2 + i(x1 y2 + x2 y1 )
For division we need that b ≠ 0 which by definition means both x2 and y2 can’t be zero at the same
time. Also we need to define the complex conjugate which is ā = x1 − iy1 then we can easily see that
a a b̄ ab̄
= × =
b b b̄ x22 + y22
This representation allows a better representation of both multiplication and division. Let z1 = r1 eiφ1
z1 × z2 = r1 r2 ei(φ1 +φ2 )
z1 r1 i(φ1 −φ2 )
= e
z2 r2
156
I{z}
z = x + iy
∣z∣
φ R{z}
The absolute value of complex number or the length of the vector representation is an important property
of a complex number. Note that ∣z∣ = 1 implies that z is unit vector. One of the most important properties
is the triangle inequality which will be very handy for us in the coming sections
Note also
Finally we have
n n
∣ ∑ zk ∣ ≤ ∑ ∣zk ∣
k=1 k=1
157
Note that geometrically in terms of vectors the triangle inequality implies that the length of two vectors
is greater than the length of their sum. This also holds in triangles in Euclidean geometry where the sum
the plain is called an analytic function. You can see a specialized complex analysis book to understand
The exponential function f = exp(z) is an entire function (analytic/differentiable in the whole plain). We
Which implies that ∣ez ∣ = ex . The function acts smoothly with the usual derivative (ez )′ = ez and the
expansion
∞
zk
ez = ∑
k=0 k!
One also can note that the function is periodic with period 2kπi.
These functions are also entire in the complex plane. They almost have the same properties as the real
counterparts but one essential difference is boundedness. We know in the real case that ∣ sin(x)∣ ≤ 1 but
it is not the case for ∣ sin(z)∣. You can see that since sin(z) = 1
2i
(eiz − e−iz ) . Now since ∣e−iz ∣ = ey we see
that ey is unbounded near infinity then sin(z) is also unbounded. You can deduce the same for cos(z).
∞
(−1)n 2k+1
sin(z) = ∑ z
k=0 (2k + 1)!
∞
(−1)n 2k
cos(z) = ∑ z
k=0 (2k)!
The hyperbolic functions can be defined in terms of the sine and cosine functions using the relations
158
cosh(z) = cos(iz) , sinh(z) = −i sin(iz)
You can then deduce the derivatives and the series expansions around zero. The exponential represen-
ez + e−z ez − e−z
cosh(z) = , sinh(z) =
2 2
Usually we think of logarithm as the inverse of the exponential function away from zero. As in real
analysis we have log(ex ) = x but it is completely risky to think the same for complex analysis. The
problem with complex exponential is its periodicity. Note that we have ez+2kπi = ez e2kπi = ez , hence the
function is not one to one. We have infinite values that map to the same value. This makes the complex
Some authors might use ln(z) interchangeably with log(z). Here we usually use log(z) to denote the
solution
The easiest way is to use the polar representation. Note that we have eiπ = −1 which implies 2eiπ+2kπi =
Similarly we have
π π
log(i) = log(1) + i ( + 2kπ) = i ( + 2kπ)
2 2
√ iπ
Finally since 1 + i = 2e 4 which implies
√ π
log(1 + i) = log( 2) + i ( + 2kπ)
4
Properties
2. log(z n ) = n log(z)
159
3. elog(z) = z
proof
= log(z1 ) + log(z2 )
3. Finally we have
Some properties don’t necessarily hold for example log(ez ) ≠ z since ez+2kπi = ez and that implies z =
z + 2kπi which is obviously can’t be true unless we choose k = 0. This suggests that we can make nice
properties by choosing proper values of the argument. This raises the concept of Branches of logarithm.
The definition log(z) = log ∣z∣ + i(φ + 2kπ) makes the complex logarithm multi valued as explained earlier.
Each value for k raises a different branch of the logarithm that makes it single valued. One interesting
where we define −π < Arg(z) ≤ π. Note that we usually use log(z) to denote the principal value when it
is clear from the context. Clearly this makes the function single valued.
solution
The main trick it to find the value of the argument that falls in the interval (−π, π]. Using that we
conclude
log(−2) = log(2) + iπ
Similarly we have
160
√ π
log(1 + i) = log( 2) + i
4
The usual properties described in the previous section don’t usually carry on for the principal value. For
Now let us define the concept of a Branch cut. In order to make the complex logarithm an analytic
function we have to make it differentiable in a certain neighborhood. Note that the principal value of the
logarithm by choosing a certain branch the function became one-to-one but this is not enough because
the function Log(z) doesn’t behave well on the negative real axis.
I{z}
π
R{z}
−π
We see from the graph in Figure 2 that the function approach different value as we approach the negative
real axis. This causes a discontinuity that prevents the function from being analytic on that line.
1
is analytic and the derivative is z
.
Using that we can expand the function along values away from the branch cut. For instance for ∣z∣ < 1
we have
∞
zk
log(1 − z) = − ∑
k=1 k
161
We are not going to go into details of the proofs of such claims because they require a more firm under-
In this section we establish the foundations of contour integration by studying the theory of Laurent
expansion, residues and poles. It is preferable if you have a basic knowledge in real analysis because
many theorems carry on from the real case to the complex plane. We start by the following theorem
∞
f (k) (z0 )
f (z) = ∑ (z − z0 )k
k=0 k!
is a valid representation in the largest circle with radius ∣z − z0 ∣ contained in D.
solution
Since the function is analytic in the whole plane except at the origin. Then we can use the Theorem to
deduce that
(−1)k k!
f (k) (2) =
2k+1
You can verify that by taking multiple derivatives and substitute the value of z0 = 2. Hence we deduce
that
1 ∞ (−1)k k!
=∑ (z − 2)k
z k=0 2k+1
And the radius of convergence is ∣z − 2∣ < 2 which represents a ball centered at z0 = 2 with radius less
Note that we cannot include zero because the function is not defined there. So the maximum circle is
that of radius 2.
Theorem 3 Let f be an analytic function in the punctured disk r < ∣z −z0 ∣ < R . Then f has a series representation
∞
f (z) = ∑ ak (z − z0 )k
k=−∞
The theorem is actually saying that if a function is not analytic on a point z0 but analytic around it
then we can expand the function using what we call a Laurent expansion. The difference between
162
0 1 2 3 4
Laurent expansion and Taylor is that Laurent allows negative indices of the series. We can think of it as a
generalization of the Taylor theorem which is a special case where coefficients with negative indices are
zero.
solution
∞
f (z) = ∑ ak (z − z0 )k
k=−∞
We need to expand both functions in the domain 0 < ∣z∣ < 1. Note that the function f (z) = 1
z
is a
Laurent expansion around zero in the punctured disk ∣z∣ > 0 what is remaining is to expand the function
f (z) = 1
1−z
in the domain ∣z∣ < 1. Hence the intersection of both domains is actually 0 < ∣z∣ < 1. Note that
∞
1
= ∑ zk
1 − z k=0
is a Taylor expansion valid in ∣z∣ < 1 hence we have
1 ∞ k ∞ k−1 ∞
f (z) = ∑z = ∑z = ∑ zk
z k=0 k=0 k=−1
solution
Note that
∞
(−1)k 2k+1
sin(z) = ∑ z
k=0 (2k + 1)!
163
Which is a valid expansion in the domain z ∈ C. Also 1
z2
is a valid expansion in the domain ∣z∣ > 0. Hence
∞
(−1)k 2k−1
f (z) = ∑ z
k=0 (2k + 1)!
solution
∞
zk
ez = ∑
k=0 k!
∞
1
ez−1 = ∑ (z − 1)k
k=0 k!
Which implies that
1 ∞ 1
ez = ∑ (z − 1)
k
e k=0 k!
is a valid expansion in the whole domain C. Finally we have
1 ∞ 1
f (z) = ∑ (z − 1)
k−1
e k=0 k!
in the punctured disk ∣z − 1∣ > 0.
Here we discuss the of types of points where a complex function is not defined at. Usually they are called
An isolated singular point is a point where the function is analytic in the punctured disk by removing
that point.
1. Removable singularity A singular point is removable if limz→z0 f (z) exists. Equivalently, if all the
negative indexed terms in the Laurent expansion are zeros,namely a−k = 0 for all k > 0.
2. Poles a function f has a pole of order m > 0 if a−m ≠ 0 and a−k = 0 for all k > m.
3. Essential singularity The Laurent expansion has an infinite number of non-positive indexed terms.
164
sin(z)
1. f (z) = z
at z = 0
ez
2. f (z) = (z−1)2
at z = 1
3. f (z) = e1/z at z = 0
solution
sin(z)
lim =1
z→0 z
2. By expanding around 1 we have
1 ∞ 1
f (z) = ∑ (z − 1)
k−2
e k=0 k!
This implies 1 is a pole of order 2.
3. Note that
∞
1
e1/z = ∑ k
k=0 k!z
Then z = 0 is an essential singularity.
g(z)
f (z) =
(z − z0 )m
where g(z0 ) ≠ 0.
The proof is done by writing the Laurent expansion of f and take (z − z0 )−m as a common factor.
165
Residues are closely related to the Laurent expansion of a function. They will become really handy when
we start discussion about contours. We usually use the following notation Res(f, z0 ) as the residue of
Definition. Let f (z) be an analytic function in a domain D with the Laurent expansion
∞
f (z) = ∑ ak (z − z0 )k
k=−∞
Res(f, z0 ) = a−1
sin(z)
1. f (z) = z
ez
2. f (z) = z2
3. f (z) = e1/z
solution
1. Note that since f has a removable singularity and since a−k = 0 for all k > 0 then we have Res(f, 0) = 0.
1
Res(f, 0) =
e
3. Using the Taylor expansion of ez and letting z → 1/z and rewrite the expansion as
‘
0
1
e1/z = ∑ zk
k=−∞ (−k)!
Hence we have
Res(f, 0) = 1
Usually we don’t have to find the Laurent expansion in order to find the residues at a specific point. Let
us first note that the residues of an entire function is zero at any point. Now let us see theorems that help
166
Theorem 5 Suppose z0 is a pole of order m of f (z) then
1 dm−1
Res(f, z0 ) = lim (z − z0 )m f (z)
z→z0 (m − 1)! dz m−1
proof
a−m a−1
f (z) = +⋯+ +⋯
(z − z0 ) m (z − z0 )
Multiply by (z − z0 )m
dm−1
(z − z0 )m f (z) = (m − 1)!a−1 + O((z − z0 ))
dz m−1
We finish by Letting z → z0
1 dm−1
lim (z − z0 )m f (z) = a−1
(m − 1)! z→z0 dz m−1
f (z)
h(z) =
g(z)
where z0 is a simple zero of g(z) and f (z0 ) ≠ 0 then
f (z0 )
Res(h, z0 ) =
g ′ (z0 )
proof
167
21.6 Integration around paths
Since in the complex plane C every complex number z ∈ C has two components real and complex. We
can not use the same process as in real analysis to evaluate integrals. Now, integration happens around
paths. We will be concerned about a family paths. Mainly those curves that have continuous derivatives.
These are called smoothed curves. Mainly these curves don’t have a peak where the derivative doesn’t
exist, hence named smooth. We can attach a finite number of smooth curves to obtain piece-wise smooth
curves. Also those curves must be simple, that means it doesn’t cross itself.
y y
b
C1 C2 C3 C
a b x a x
Note that the orientation of a curve is essential to the evaluation of a path integral. Usually we work
Theorem 7 Let f be continuous on a smooth curve γ given by γ(t) = x(t) + iy(t) , where t ∈ [a, b] then
b
∫ f (z)dz = ∫ f (γ(t))γ ′ (t)dt
γ a
Example
solution
Denote the curve γ(t) = eit where t ∈ [0, 2π). Now since z 2 is continuous on the simple smooth path we
have
2π 1 2πi
∫ z dz = i ∫
2
e2it eit dt = (e − e0 ) = 0
γ 0 3
This seems to imply that if we integrate around a simple closed curve the integration will be always zero.
This is not always the case, consider integrating z −1 around the same curve
2π 2π
−1
∫ z dz = i ∫ e−it eit dt = i ∫ dt = 2πi
γ 0 0
168
One might notice that the function f (z) = z −1 has a simple pole inside the closed path while f (z) = z 2
is analytic in and on the contour. This is indeed the case as the Cauchy’s theorem illustrates. We first
define simply connected domains. A domain is simply connected if every closed contour/path can be
shrunk into one point. That implies the domain can’t contain holes. Let us know take a look at the
Cauchy-Grousat Theorem.
Theorem 8 Let f is an analytic in a simpley connected domain D. Then for every simple closed contour C in D
we have
∮ f (z)dz = 0
C
Example. Choose a closed simple contour γ(t) in the complex plain then find the integration of f (z) =
solution
We don’t have to worry about parametrizing the integral since the function is entire. We apply the
∫ f (z) = ∫ (z + 3z + 1) = 0
2 n
γ C
One might think that regardless of the path of integration we can deduce that Independence of paths
Theorem 9 For any analytic function f in a connected domain D for any simple paths γ1 , γ2 inside D we have
∫ f (z)dz = ∫ f (z)dz
γ1 γ2
a b x
By independence of paths we can use a simple contour (line) that connect the same points a and b. Define
169
1 b2 − a2 b2 − a2
∫ zdz = (b − a) ∫ bt + (1 − t)a dt = b2 − a2 − =
γ 0 2 2
One can also use the anti-derivative
b
z2 b2 − a2
∫ zdz = [ ] =
γ 2 a 2
This is indeed true by the fundamental theorem.
Theorem 10 For any analytic function f in a connected domain D for any simple paths γ(t), t ∈ [a, b] inside D
we have
So if the function is analytic and by the independence of path theorem we can find the anti-derivative
Now we look at the approach to evaluate functions that have poles inside the contour. First let us look
Theorem 11 Let f be an analytic function in a simply connected domain D then for any contour γ ∈ D we have
f (z)
∮ dz = 2πi f (z0 )
γ z − z0
proof
First we can deform the contour γ into a circular contour C. Then we have the following
f (z) − f (z0 )
∫ dz = 0
C z − z0
Hence we conclude
f (z) dz
∫ dz = f (z0 ) ∫ = 2πi f (z0 )
γ z − z0 C z − z0
Using that we can conclude the Cauchy’s Integral formula for Derivatives
170
This theorem implies that the evaluation of a closed contour depends on the residue evaluation. Since
∞
f (k) (z0 )
f (z) = ∑ (z − z0 )k
k=0 k!
Then we have
We generalize the theorem by the following theorem Cauchy Residue Theorem or the residue theorem.
Theorem 12 Let f be an analytic function on a connected domain D except for some isolated points z1 , ⋯, zn then
for any simple closed contour γ ⊂ D that contains the singularities we have
n
∫ f (z)dz = 2πi ∑ Res(f (z), zk )
γ k=1
proof
Deform the contour γ into a set of circular contours γ1 , ⋯, γn around each singularity
∫ f (z)dz = 2πiRes(f, zk )
γk
Hence we deduce the result as sum of the residues. This is very powerful since it explains that the
evaluation of a function with some singularity around a contour containing them is no more than the
sum of residues. This relationship will be essential when we try to evaluate real integrals.
Example. Let P (z) be a polynomial with the leading coefficient as 1. Then find the following
dz
∫
γ P (z)
where γ contains the zeros of P .
171
solution
dz dz 1
∫ =∫ = 2πiRes ( , z0 ) = 0
γ P (z) γ (z − z0 )2 (z − z0 )2
Note that
(z − z0 ) 1
Res(1/P (z), z0 ) = lim =
z→z0 (z − z0 )(z − z1 ) z0 − z1
(z − z1 ) 1
Res(1/P (z), z1 ) = lim =
z→z1 (z − z0 )(z − z1 ) z1 − z0
Hence we conclude that also
dz
∫ =0
γ P (z)
Can you generalize that ?
Some integrals on contours are too difficult to evaluate so we better find a good bound or prove the
∣∮ f (z)dz∣ ≤
γ
So the integral can be made arbitrary close hence it goes to 0. We start by an important bound called the
Estimation lemma.
Theorem 13 Let f be a complex-valued, continuous function on the contour c if f is bounded by M for all z on c
then
∫ f (z)dz ≤ M L
c
172
proof
Now assume that ∣f (z)∣ ≤ M that means the function is bounded on the curve
b
∫ ∣dz∣ = ∫ ∣γ ′ (t)∣ dt
c a
b
∫ ∣γ ′ (t)∣ dt = L
a
1
lim ∫ dz = 0
R→∞ CR z2 +1
Where CR is a semi-circle of radius R and centered at the origin.
proof
First note that for any point on the curve we have ∣z∣ = R hence
1 1
∣ ∣≤ 2
z2 +1 R −1
Where we used the Triangle inequality to show ∣z∣ − 1 ≥ R − 1 for large R. Also note that length of the
1 πR
∣∫ dz∣ ≤ 2
CR z2 + 1 R −1
By taking R → ∞ we conclude our result.
The estimation lemma could be used to prove a more generalized form which we call the Jordan’s
lemma.
Theorem 14 Let f be a complex-valued, continuous function on the contour CR which is a semi-circle on the
upper half plane centered at the origin. Let f be defined as the following
173
f (z) = eiaz g(z)
π
∣∫ f (z)dz∣ ≤ MR
CR a
proof
π
∣∫ f (z)dz∣ ≤ RMR ∫ e−aR sin(t) dt
CR 0
π π/2
∫ e−aR sin(t) dt = 2 ∫ e−aR sin(t) dt
0 0
π/2 π/2 π π
2∫ e−aR sin(t) dt ≤ 2 ∫ e−2atR/π dt = (1 − e−aR ) ≤
0 0 aR aR
We finally get the result
π π
∣∫ f (z)dz∣ ≤ × RMR = MR
CR aR a
Example. Prove that
e2iz
lim ∫ dz = 0
R→∞ CR z2 + 1
Where CR is a semi-circle of radius R and centered at the origin.
proof
By Jordan’s lemma
174
e2iz π 1
∣∫ dz∣ ≤
CR z +1
2 2 R2 − 1
It is left as an exercise to show the same result holds for
P (z)
f (z) = eiaz
Q(z)
where P (z) and Q(z) are polynomials and D(P ) ≤ D(Q) + 1. Another remark is that we can generalize
the result to contours in the lower half-plain but wit the condition a < 0.
Now we look at the case where we have a singular point along the path of integration. To define that we
have to look at the concept Cauchy Principal Value. Let use first look at the definition in the real case.
Consider the case where f is continuous on the interval [a, b] except for c in that interval then
b c− b
PV ∫ f (x)dx = lim+ ∫ f (x)dx + ∫ f (x)dx
a →0 a c+
Interestingly we can look at the point x = c as a singular point of the function f (x).
solution
First note that the integral in the usual definition of Riemann doesn’t exist because the function is not
1 1 − 1 1 1 1 1 1 1
PV ∫ dx = lim+ ∫ dx + ∫ dx == lim+ − ∫ dx + ∫ dx = 0
−1 x →0 −1 x x →0 x x
Hence the principal value exists while the integral is infinite on the interval. But if the Riemann integral
exists then the principal value integral exists and they are equal.
We can generalize the case to complex functions. Suppose that the function f around the along the
contour C has a pole at the point z = z0 then we can make a detour (semi-circle) around the pole and
Suppose that f (z) has a singularity at z = 0 and we need to find the integral along the contour C that is
− b
P V ∫ f (z)dz = lim ∫ f (x)dx + ∫ f (z)dx + ∫ f (x)dx
C →0 a C
Let us now look at a simpler way to evaluate the contour around a simple pole instead of taking the
limit.
175
y
Cε
a −ε ε b x
Theorem 15 Let f (z) have a simple pole at z = z0 . Suppose that Cr is an arc starting at an angle θi and ending
f (z)
lim ∫ dz = i(θf − θi )f (z0 )
r→0 Cr z − z0
This theorem is powerful and gives away a simple way to evaluate integrals around arbitrary arcs.
eiz
Example . Find the integral of f (z) = z
around z = 0 traversed contour-clockwise.
solution
eiz
lim ∫ dz = (0 − π)iei0 = −πi
r→0 Cr z−0
Note the negative sign because we are starting at an angle π and up to 0.
176
22 Real integrals using contour integration
In this section we consider solving integrals of functions of the form f (cos θ, sin θ) over the interval
(0, 2π). This can be done by realizing that for any point satisfying ∣z∣ = 1 then we have
z + z −1 = 2 cos θ
z − z −1 = 2i sin θ
z + z −1 z − z −1 dz 2π
∮ f( , ) =∫ f (cos θ, sin θ)dθ
∣z∣=1 2 2i iz 0
This can be done by parameterizing ∣z∣ = 1 as eiθ on the interval θ ∈ (0, 2π). Hence one can use the residue
22.1.1 Example
2π sin θ
∫ dθ
0 sin θ + 2
solution
z−z −1
2π sin θ dz z2 − 1
∫ dθ = ∮ 2i
= −i ∮ dz
0 sin θ + 2 ∣z∣=1 2 + z−z
2i
−1
iz ∣z∣=1 z(z + 4iz − 1)
2
√ √
The function has three poles 0, −2i ± 3i with the only poles inside the contour 0, (−2 + 3)i
z2 − 1
Res(f, 0) = lim =1
z→0 z 2 + 4iz − 1
√
√ √ z2 − 1 −(−2 + 3)2 − 1 2
Res(f, (−2+ 3)i) = lim√ (z−(−2+ 3)i) √ √ = √ √ = −√
z→(−2+ 3)i z(z − (−2 + 3)i)(z − (−2 − 3)i) −2 3(−2 + 3) 3
z2 − 1 2
−i ∮ dz = 2π (1 − √ )
∣z∣=1 z(z 2 + 4iz − 1) 3
177
22.2 Integrating around an ellipse
22.2.1 Example
2π dt 2π
∫ =
0 a2 cos2 t + b2 sin2 t ab
solution
b γ(t)
2π −a sin t + ib cos t
∮ f (z) dz = ∫ dt
γ 0 a cos t + ib sin t
The intgrand simplifies to the following
Hence
178
By equating the imaginary parts
2π dt
iab ∫ = 2πi
0 a2 cos2 t + b2 sin2 t
Which implies the result
2π dt 2π
∫ =
0 a2 cos2 t + b2 sin2 t ab
Theorem. Let f be analytic function in the unit circle ∣z∣ ≤ 1 such that f ≠ 0 . Then
2π
∫ f (eit ) dt = 2π f (0)
0
proof
Since the function f is analytic in and on the contour we have by the Cauchy integral theorem
f (z)
∮ dz = 2πi Res(f /z, 0)
∣z∣=1 z
Use z = eit whenere 0 ≤ t ≤ 2π
2π f (eit ) it
i∫ e dt = 2πi Res(f /z, 0)
0 eit
Note that
f (z) f (z)
Res ( , 0) = lim z = f (0)
z z→0 z
Hence
2π
∫ f (eit ) dt = 2π f (0)
0
22.3.1 Example
f (z) = exp(exp(z))
179
2π
∫ exp(exp(exp(it))) dt = 2π e
0
Note that
2π cos t
∫ ee cos(sin t)
cos(ecos t sin(sin t)) dt = 2π e
0
22.3.2 Example
f (z) = log(2 + z)
Where we take the principal logarithm with the branch cut located at y = 0, x ≤ −2
2π
∫ log(2 + eit ) = 2π log(2)
0
Note that
sin t
log(2 + eit ) = log ∣(2 + cos t)2 + sin2 t∣ + i arctan ( )
1 + cos t
1 sin t
= log(5 + 4 cos t) + i arctan ( )
2 2 + cos t
2π
∫ log(5 + 4 cos t) dt = 4π log(2)
0
22.3.3 Example
We deduce that
180
2π sin t
∫ exp(ecos t cos(sin t))(cos(ecos t sin(sin t)) log(5+4 cos t)−2 arctan ( ) sin(ecos t sin(sin t))) dt = 4πe log(2)
0 2 + cos t
∞ cos(ax)
∫ dx
−∞ P (x)
where P (x) is continuous in the real line and of degree 2 or more. You may notice that it is enough to
eiaz
f (z) =
P (z)
and then take the real part. Generally speaking to solve that we take a semi-circle in the upper half plane
where the contour will enclose some or all of the zeros of P (z). Notice that we may use the Jordan’s
lemma to prove that the circular part goes to zero and what is left is the real integral along the real axis.
22.4.1 Example
∞ cos(ax)
∫ dx
−∞ x2 + 1
proof
eiaz
f (z) =
z2 + 1
and integrate around the contour in Figure 7
R eiax
∫ dx + ∫ f (z)dz = 2πi Res(f, i)
−R x2 + 1 CR
By taking the limit R → ∞ note that by the Jordan’s lemma the integral along the circular part vanishes
∞ eiax
∫ dx = 2πi Res(f, i)
−∞ x2 + 1
181
CR
i
R
−R R
eiaz e−a
Res(f, i) = lim(z − i) =
z→i (z − i)(z + i) 2i
Finally we get
∞ eiax
∫ dx = πe−a
−∞ x2 + 1
and eventually
∞ cos(ax)
∫ dx = πe−a
−∞ x2 + 1
Advice. One might ask how to choose the complex function and the contour. There is no general formula
for that and you can obtain that by experience and trail and error. It is actually a good idea to try different
Assume that we need to integrate a function and we have a pole at the path of integration then we have
22.5.1 Example
∞ sin(x) π
∫ dx =
0 x 2
proof
182
eiz
f (z) =
z
Note that in order to integrate a round a semi-circle in the upper half plane we have to make a detour
CR
cr
−R −r r R x
R eix −r eix
∫ dx + ∫ dx + ∫ f (z)dz + ∫ f (z)dz = 0
r x −R x CR Cr
lim ∫ f (z)dz = 0
R→∞ Cr
eiz
lim ∫ dz = −πie0i = −πi
r→0 Cr z−0
Hence we finally get
∞ eix 0 eix
∫ dx + ∫ dx = πi
0 x −∞ x
183
22.6 Integrals of functions with branch cuts
We have a serious problem when dealing with functions like f (z) = log(z) or z a where we have a branch
point at 0. In order to evaluate the integral we have to avoid both the branch point and the branch cut.
Generally speaking the integral around a branch point will most probably vanish but this is not taken
22.6.1 Example
∞ log(x) π log(2)
∫ = √
0 x2 + 2 4 2
proof
Let us consider the usual integral of a semi-circle with a detour around the branch point at 0. But first
where arg(z) ∈ (−π/2, 3π/2) i.e we are taking the branch cut on the negative imaginary axis. By this
construction we have an analytic function along the path of integration. Now consider the function
log(z)
f (z) =
z2 + 2
√
The zero z = 2i lies inside the semi-circle
CR
√
2i
Cr
−R −r r R x
Branch
184
\[
\int_r^{R} \frac{\log|x|}{x^2+2}\, dx + \int_{-R}^{-r} \frac{\log|x|+i\pi}{x^2+2}\, dx + \int_{C_R} f(z)\, dz + \int_{C_r} f(z)\, dz = 2\pi i\,\operatorname{Res}(f, \sqrt{2}\,i)
\]
Note that $|\log(z)| = |\log|z| + i\arg(z)| \le |\log R| + |\arg(z)|$ on $C_R$. Since the argument is bounded, we can bound the integral over $C_R$ using
\[
\lim_{R\to\infty} R\,\frac{\log R + \mathrm{Const}}{R^2-2} = 0
\]
Similarly we have
\[
\lim_{r\to 0}\, r\,\frac{|\log r| + \mathrm{Const}}{2-r^2} = 0
\]
Hence it follows by the estimation lemma that both circular integrals vanish. Letting $r \to 0$ and $R \to \infty$ and rearranging,
\[
2\int_0^{\infty} \frac{\log|x|}{x^2+2}\, dx + \pi i\int_0^{\infty} \frac{dx}{x^2+2} = 2\pi i\,\operatorname{Res}(f, \sqrt{2}\,i)
\]
The residue evaluation
\[
\operatorname{Res}(f, \sqrt{2}\,i) = \frac{\log(\sqrt{2}\,i)}{2\sqrt{2}\,i} = \frac{\log\sqrt{2} + \frac{\pi}{2}i}{2\sqrt{2}\,i}
\]
Hence
\[
2\int_0^{\infty} \frac{\log|x|}{x^2+2}\, dx + \pi i\int_0^{\infty} \frac{dx}{x^2+2} = \pi\,\frac{\log\sqrt{2} + \frac{\pi}{2}i}{\sqrt{2}}
\]
By equating the real parts we get our result.
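A quick numerical check of the result (an added sketch, assuming the mpmath Python library):

```python
# Check  ∫_0^∞ log(x)/(x²+2) dx = π log 2 / (4√2).
import mpmath as mp

mp.mp.dps = 30
lhs = mp.quad(lambda x: mp.log(x) / (x**2 + 2), [0, 1, mp.inf])  # split at 1 for the log singularity
rhs = mp.pi * mp.log(2) / (4 * mp.sqrt(2))
print(lhs, rhs)   # both ≈ 0.3850...
```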
22.6.2 Example
solution
Lemma
\[
\int_0^{\infty} \frac{\log^3(1+x^2)}{x^2}\, dx = \pi^3 + 3\pi\log^2(4)
\]
proof
\[
\int_0^{\infty} x^{-p}(1+x)^{s-1}\, dx = \frac{\Gamma(1-p)\,\Gamma(p-s)}{\Gamma(1-s)}
\]
Let $x \to x^2$:
\[
\int_0^{\infty} x^{-2p+1}(1+x^2)^{s-1}\, dx = \frac{\Gamma(1-p)\,\Gamma(p-s)}{2\,\Gamma(1-s)}
\]
Let $p = 3/2$:
\[
\int_0^{\infty} \frac{1}{x^2(1+x^2)^{1-s}}\, dx = \frac{\Gamma(-1/2)\,\Gamma(3/2-s)}{2\,\Gamma(1-s)}
\]
By taking the third derivative with respect to $s$ and letting $s \to 1$ we obtain the lemma. Now consider
\[
f(z) = \frac{\log^3(1-iz)}{z^2}
\]
Define the principal logarithm as follows:
\[
\log z = \log\sqrt{x^2+y^2} + i\arctan(y/x)
\]
For the principal logarithm the branch cut is defined by $\Im(1-iz) = 0$, $\Re(1-iz) \le 0$, which for $z = x+iy$ reduces to $x = 0$, $1+y \le 0$. Hence the branch cut lies on the imaginary axis where $x = 0$, $y \le -1$. Also, near $z = 0$ we have $\log(1-iz) \sim -iz$, so $f(z) \sim \frac{(-iz)^3}{z^2} = iz \to 0$.
Hence at $z = 0$ we have a removable singularity. Define the following contour, where we avoid the branch cut.

[Figure: semicircular contour $C_R$ in the upper half plane; the branch cut runs down the imaginary axis from $-i$.]

\[
\int_{C_R} f(z)\, dz + \int_{-R}^{0} \frac{\left(\log\sqrt{1+x^2} - i\arctan x\right)^3}{x^2}\, dx + \int_0^{R} \frac{\left(\log\sqrt{1+x^2} - i\arctan x\right)^3}{x^2}\, dx = 0
\]
After the substitution $x \to -x$ on $(-R, 0)$, this becomes
\[
\int_{C_R} f(z)\, dz + \int_0^{R} \frac{\left(\log\sqrt{1+x^2} - i\arctan x\right)^3}{x^2}\, dx + \int_0^{R} \frac{\left(\log\sqrt{1+x^2} + i\arctan x\right)^3}{x^2}\, dx = 0
\]
CR 0 x2 0 x2
Note that $(a+ib)^3 + (a-ib)^3 = 2a^3 - 6ab^2$, with $a = \log\sqrt{1+x^2}$ and $b = \arctan x$. Combined with the lemma and the vanishing of the integral over $C_R$, this simplifies to the claimed evaluation.
22.6.3 Example
Prove that for $a, b, c, d > 0$
\[
\int_0^{\infty} \frac{\log(a^2+b^2x^2)}{c^2+d^2x^2}\, dx = \frac{\pi}{cd}\,\log\frac{ad+bc}{d}
\]
proof
\[
f(z) = \frac{\log(a-ibz)}{c^2+d^2z^2}
\]
We need the logarithm with the branch cut at $x = 0$, $y \le -\frac{a}{b}$. Note that this corresponds to
\[
\log(a-ibz) = \log\sqrt{(a+by)^2+b^2x^2} + i\theta, \qquad \theta \in \left[-\frac{\pi}{2}, \frac{3\pi}{2}\right)
\]
Consider the contour that avoids the branch cut on the negative imaginary axis.

[Figure: semicircular contour $C_\rho$ of radius $\rho$ in the upper half plane enclosing the pole at $\frac{c}{d}i$; the branch point $-\frac{a}{b}i$ lies below the real axis.]

\[
\log(a \pm ibx) = \log\sqrt{a^2+b^2x^2} \pm i\arctan\!\left(\frac{bx}{a}\right)
\]
Using that, the contributions from the two halves of the real axis combine, since $\log(a-ibx) + \log(a+ibx) = \log(a^2+b^2x^2)$. This implies
\[
\int_{C_\rho} f(z)\, dz + \int_0^{\rho} \frac{\log(a^2+b^2x^2)}{c^2+d^2x^2}\, dx = 2\pi i\,\operatorname{Res}\!\left(f, \frac{c}{d}i\right)
\]
The integral over the circular part vanishes as $\rho \to \infty$. For the residue,
\[
2\pi i\,\operatorname{Res}\!\left(f, \frac{c}{d}i\right) = 2\pi i \lim_{z\to \frac{c}{d}i} \frac{\log(a-ibz)}{2d^2 z} = \frac{\pi}{cd}\,\log\frac{ad+bc}{d}
\]
Hence this simplifies to
\[
\int_0^{\infty} \frac{\log(a^2+b^2x^2)}{c^2+d^2x^2}\, dx = \frac{\pi}{cd}\,\log\frac{ad+bc}{d}
\]
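A quick numerical spot check of this formula (an added sketch, assuming the mpmath Python library, with arbitrarily chosen parameters):

```python
# Check  ∫_0^∞ log(a²+b²x²)/(c²+d²x²) dx = (π/cd) log((ad+bc)/d)  for a=3, b=1, c=2, d=5.
import mpmath as mp

mp.mp.dps = 30
a, b, c, d = map(mp.mpf, (3, 1, 2, 5))
lhs = mp.quad(lambda x: mp.log(a**2 + b**2 * x**2) / (c**2 + d**2 * x**2), [0, 1, mp.inf])
rhs = mp.pi / (c * d) * mp.log((a * d + b * c) / d)
print(lhs, rhs)   # both ≈ 0.3845...
```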
22.6.4 Example
\[
f(z) = \frac{\log(z)}{(z^2+1)^2}\, e^{iz}
\]
Now consider the branch of the logarithm with the cut on the negative imaginary axis, $\arg(z) \in (-\pi/2, 3\pi/2)$.
[Figure: semicircular contour $C_R$ with a small detour $C_r$ around the origin; the pole at $z = i$ lies inside.]

On the large arc $C_R$ the modulus of $\log(z)/(z^2+1)^2$ is bounded by
\[
M_R = \max_{\theta\in[0,\pi]}\left|\frac{\log(R)+i\theta}{\left(R^2e^{2i\theta}+1\right)^2}\right| \le \frac{\log R + 2\pi}{(R^2-1)^2}
\]
It follows by Jordan's lemma that
\[
\lim_{R\to\infty} \int_{C_R} f(z)\, dz = 0
\]
For the small arc,
\[
\int_{C_r} f(z)\, dz = ir\int_0^{\pi} \frac{\log r + i\theta}{\left(r^2e^{2i\theta}+1\right)^2}\, e^{ire^{i\theta}}\, e^{i\theta}\, d\theta
\]
Note that by a similar argument to Jordan's lemma we have $\lim_{r\to 0}\int_{C_r} f(z)\,dz = 0$. It follows then, on letting $r \to 0$ and $R \to \infty$, that the integrals along the real axis equal $2\pi i$ times the residue at $z = i$.
\[
\operatorname*{Res}_{z=i} f(z) = \lim_{z\to i} \frac{d}{dz}\!\left((z-i)^2\,\frac{\log(z)}{(z^2+1)^2}\,e^{iz}\right) = \frac{\pi+i}{4e}
\]
Note that
\[
\int_0^{\infty} \frac{e^{ix}}{(x^2+1)^2}\, dx = \frac{\pi}{2e} + i\,\frac{\operatorname{Ei}(1)}{2e}
\]
Since
\[
2\int_0^{\infty} \frac{\cos(x)}{(x^2+1)^2}\, dx = 2\pi i\left(\frac{-i}{2e}\right)
\]
By integrating around a semi-circle in the upper half plane
\[
\int_0^{\infty} \frac{\cos(x)}{(x^2+1)^2}\, dx = \frac{\pi}{2e}
\]
For the other integral, it is easy to see that
\[
\int_0^{\infty} \frac{\sin(x)}{x^2+a^2}\, dx = \frac{1}{2a}\left[e^{-a}\operatorname{Ei}(a) - e^{a}\operatorname{Ei}(-a)\right]
\]
By differentiating with respect to $a$ and letting $a \to 1$ we have
\[
\int_0^{\infty} \frac{\sin(x)}{(x^2+1)^2}\, dx = \frac{\operatorname{Ei}(1)}{2e}
\]
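A quick numerical check of the two evaluations above (an added sketch, assuming the mpmath Python library; mp.ei is the exponential integral Ei):

```python
# Check  ∫_0^∞ cos x/(x²+1)² dx = π/(2e)  and  ∫_0^∞ sin x/(x²+1)² dx = Ei(1)/(2e).
import mpmath as mp

mp.mp.dps = 30
cos_val = mp.quadosc(lambda x: mp.cos(x) / (x**2 + 1)**2, [0, mp.inf], period=2 * mp.pi)
sin_val = mp.quadosc(lambda x: mp.sin(x) / (x**2 + 1)**2, [0, mp.inf], period=2 * mp.pi)
print(cos_val, mp.pi / (2 * mp.e))        # both ≈ 0.5779...
print(sin_val, mp.ei(1) / (2 * mp.e))     # both ≈ 0.3486...
```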
We deduce that
22.6.5 Example
\[
\int_0^{\infty} \frac{x^{\alpha}}{x+1}\, dx = -\pi\csc(\pi\alpha), \qquad -1 < \alpha < 0
\]
proof
\[
f(z) = \frac{z^{\alpha}}{1+z} = \frac{e^{\alpha\log(z)}}{1+z}
\]
We consider the branch of the logarithm
\[
e^{\alpha\log(z)} = x^{\alpha}e^{i\theta\alpha} =
\begin{cases}
x^{\alpha} & \theta \to 0\\
x^{\alpha}e^{2\pi\alpha i} & \theta \to 2\pi
\end{cases}
\]
Integrating around the contour (called a key-hole contour) in Figure 13:

[Figure 13: key-hole contour consisting of a large circle $C_\rho$, a small circle $C_\epsilon$ around the origin, and two segments along the positive real axis; the pole lies at $z = -1$.]
Which becomes
\[
\int_{C_\rho} f(z)\, dz + \int_{C_\epsilon} f(z)\, dz + \int_{\epsilon}^{\rho} f(z)\, dz - \int_{\epsilon}^{\rho} f(z)\, dz = 2\pi i\,\operatorname{Res}(f, -1)
\]
where the two integrals along the positive real axis are taken just above and just below the branch cut, so
\[
\int_{C_\rho} f(z)\, dz + \int_{C_\epsilon} f(z)\, dz + \int_{\epsilon}^{\rho} \frac{x^{\alpha}}{x+1}\, dx - e^{2\pi i\alpha}\int_{\epsilon}^{\rho} \frac{x^{\alpha}}{x+1}\, dx = 2\pi i\,\operatorname{Res}(f, -1)
\]
\[
\left|\oint_{|z|=\rho} f(z)\, dz\right| \le 2\pi\rho \max_{|z|=\rho}\left|\frac{z^{\alpha}}{1+z}\right| \le 2\pi\rho\,\frac{\rho^{\alpha}}{|\rho-1|} = 2\pi\,\frac{\rho^{\alpha+1}}{|\rho-1|}
\]
Note the triangle inequality $|\omega + 1| \ge \big||\omega| - 1\big|$, hence
\[
\left|\oint_{|z|=\rho} f(z)\, dz\right| \le 2\pi\,\frac{\rho^{\alpha+1}}{|\rho-1|} \xrightarrow{\ \rho\to\infty\ } 0
\]
Similarly we have
\[
\left|\oint_{|z|=\epsilon} f(z)\, dz\right| \le 2\pi\,\frac{\epsilon^{\alpha+1}}{|\epsilon-1|} \xrightarrow{\ \epsilon\to 0\ } 0
\]
We deduce that as $\epsilon \to 0$ and $\rho \to \infty$
\[
\left(1 - e^{2\pi i\alpha}\right)\int_0^{\infty} \frac{x^{\alpha}}{x+1}\, dx = 2\pi i\,\operatorname{Res}(f, -1), \qquad -1 < \alpha < 0
\]
Notice that $\operatorname{Res}(f, -1) = e^{\alpha\log(-1)} = e^{\alpha\pi i}$. Hence we have
\[
\int_0^{\infty} \frac{x^{\alpha}}{x+1}\, dx = \frac{2\pi i\, e^{\alpha\pi i}}{1 - e^{2\alpha\pi i}} = \frac{2\pi i}{e^{-\pi i\alpha} - e^{\pi i\alpha}} = -\pi\csc(\pi\alpha)
\]
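A quick numerical check of this result (an added sketch, assuming the mpmath Python library):

```python
# Check  ∫_0^∞ x^α/(x+1) dx = -π csc(πα)  for -1 < α < 0, here α = -1/3.
import mpmath as mp

mp.mp.dps = 30
alpha = mp.mpf(-1) / 3
lhs = mp.quad(lambda x: x**alpha / (x + 1), [0, 1, mp.inf])
rhs = -mp.pi * mp.csc(mp.pi * alpha)
print(lhs, rhs)   # both ≈ 3.6276...
```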
Note we can deduce the Euler reflection formula
Γ(α)Γ(1 − α) = π csc(πα)
22.6.6 Example
\[
\int_0^{\pi/2} \cos(nt)\cos^m(t)\, dt = \frac{\pi\,\Gamma(m+1)}{2^{m+1}\,\Gamma\!\left(\frac{n+m+2}{2}\right)\Gamma\!\left(\frac{2-n+m}{2}\right)}
\]
proof
\[
f(z) = z^{n-m-1}\left(1+z^2\right)^m
\]
Note that the function $z^{n-m-1} = e^{(n-m-1)\log(z)}$ will have a branch cut on the negative $x$-axis. Also we have
\[
\left(1+z^2\right)^m = e^{m\log(1+z^2)}
\]
The principal branch cut will be on the imaginary axis where $|y| \ge 1$, which implies $y \ge 1$ or $y \le -1$. We integrate around the following contour.

[Figure: the right half of the unit circle $C$ from $-i$ to $i$, closed by the segment of the imaginary axis, with small detours $\gamma_1$, $\gamma_2$, $\gamma_3$ around $i$, $0$ and $-i$.]

\[
\int_{\gamma_1} f(z)\, dz + \int_{\gamma_2} f(z)\, dz + \int_{\gamma_3} f(z)\, dz + \int_{i-i\epsilon_1}^{i\epsilon_2} f(z)\, dz + \int_{-i\epsilon_2}^{-i+i\epsilon_3} f(z)\, dz + \int_{C} f(z)\, dz = 0
\]
Note that on the circular part $|z| = 1$, writing $z = e^{it}$,
\[
\int_{C} f(z)\, dz = i\int_{-\pi/2}^{\pi/2} e^{it}\, e^{int-imt-it}\left(1+e^{2it}\right)^m dt = i\int_{-\pi/2}^{\pi/2} e^{int}\left(e^{-it}+e^{it}\right)^m dt = i\,2^m\int_{-\pi/2}^{\pi/2} e^{int}\cos^m(t)\, dt
\]
As the small detours vanish, the remaining contributions along the imaginary axis give
\[
\int_{i}^{0} x^{n-m-1}\left(1+x^2\right)^m dx + \int_{0}^{-i} x^{n-m-1}\left(1+x^2\right)^m dx = -i\,2^m\int_{-\pi/2}^{\pi/2} e^{int}\cos^m(t)\, dt
\]
\[
-i\int_0^1 (it)^{n-m-1}\left(1-t^2\right)^m dt - i\int_0^1 (-it)^{n-m-1}\left(1-t^2\right)^m dt = -i\,2^m\int_{-\pi/2}^{\pi/2} e^{int}\cos^m(t)\, dt
\]
\[
-i\left(i^{\,n-m-1} + (-i)^{\,n-m-1}\right)\int_0^1 t^{n-m-1}\left(1-t^2\right)^m dt = -i\,2^m\int_{-\pi/2}^{\pi/2} e^{int}\cos^m(t)\, dt
\]
Note that
\[
-\left(i^{\,n-m} - (-i)^{\,n-m}\right) = -\left(e^{i(n-m)\pi/2} - e^{-i(n-m)\pi/2}\right) = -2i\sin\!\left(\frac{n\pi-m\pi}{2}\right)
\]
We deduce that
\[
\int_{-\pi/2}^{\pi/2} e^{int}\cos^m(t)\, dt = 2^{1-m}\sin\!\left(\frac{n\pi-m\pi}{2}\right)\int_0^1 t^{n-m-1}\left(1-t^2\right)^m dt
\]
\[
\int_0^{\pi/2} \cos(nt)\cos^m(t)\, dt = 2^{-m}\sin\!\left(\frac{n\pi-m\pi}{2}\right)\int_0^1 t^{n-m-1}\left(1-t^2\right)^m dt
\]
The remaining integral is a Beta integral,
\[
\int_0^1 t^{n-m-1}\left(1-t^2\right)^m dt = \frac{1}{2}B\!\left(\frac{n-m}{2},\, m+1\right) = \frac{\Gamma\!\left(\frac{n-m}{2}\right)\Gamma(m+1)}{2\,\Gamma\!\left(\frac{n+m+2}{2}\right)}
\]
and by the reflection formula
\[
\Gamma\!\left(\frac{n-m}{2}\right)\Gamma\!\left(1-\frac{n-m}{2}\right) = \frac{\pi}{\sin\!\left(\frac{n\pi-m\pi}{2}\right)}
\]
We deduce then that
\[
\int_0^{\pi/2} \cos(nt)\cos^m(t)\, dt = \frac{\pi\,\Gamma(m+1)}{2^{m+1}\,\Gamma\!\left(\frac{n+m+2}{2}\right)\Gamma\!\left(\frac{2-n+m}{2}\right)}
\]
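A quick numerical spot check of this closed form (an added sketch, assuming the mpmath Python library, with arbitrarily chosen $n$ and $m$):

```python
# Check  ∫_0^{π/2} cos(nt) cos^m(t) dt  against the Gamma formula for n = 3, m = 5.
import mpmath as mp

mp.mp.dps = 30
n, m = mp.mpf(3), mp.mpf(5)
lhs = mp.quad(lambda t: mp.cos(n * t) * mp.cos(t)**m, [0, mp.pi / 2])
rhs = mp.pi * mp.gamma(m + 1) / (2**(m + 1) * mp.gamma((n + m + 2) / 2) * mp.gamma((2 - n + m) / 2))
print(lhs, rhs)   # both ≈ 0.2454  (= 5π/64)
```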
In this section we consider the case of a rectangular contour. Usually with such contours we have to separate the contour into four integrals. Note that such contours are useful for evaluating integrals of hyperbolic functions.
22.7.1 Example
\[
\int_{-\infty}^{\infty} \frac{\cos(ax)}{\cosh(x)}\, dx = \pi\,\operatorname{sech}\!\left(\frac{\pi a}{2}\right)
\]
proof
Consider
\[
f(z) = \frac{e^{iaz}}{\sinh(z)}
\]
If we integrate around a contour of height $\pi$ and stretch it to infinity we get

[Figure: rectangular contour with vertices $\pm T - \frac{\pi i}{2}$ and $\pm T + \frac{\pi i}{2}$, enclosing the pole at $z = 0$.]

By taking $T \to \infty$, consider
\[
\int_{-i\pi/2-\infty}^{-i\pi/2+\infty} \frac{e^{iax}}{\sinh(x)}\, dx
\]
Let x = −πi/2 + y
\[
i\,e^{\frac{\pi a}{2}}\int_{-\infty}^{\infty} \frac{e^{iay}}{\cosh(y)}\, dy
\]
Similarly we have for
\[
\int_{i\pi/2+\infty}^{i\pi/2-\infty} \frac{e^{iax}}{\sinh(x)}\, dx
\]
By letting $x = \pi i/2 + y$
\[
i\,e^{-\frac{\pi a}{2}}\int_{-\infty}^{\infty} \frac{e^{iay}}{\cosh(y)}\, dy
\]
Now consider the sum of the two remaining integrals
\[
\int_{-i\pi/2+T}^{i\pi/2+T} \frac{e^{iax}}{\sinh(x)}\, dx
\]
Let $y = -i(x - T)$. Hence by taking $T \to \infty$ the integrand on the vertical sides decays like $e^{-T}$, so these integrals go to $0$, and
\[
i\left(e^{\frac{\pi a}{2}} + e^{-\frac{\pi a}{2}}\right)\int_{-\infty}^{\infty} \frac{e^{iay}}{\cosh(y)}\, dy = 2\pi i\,\operatorname{Res}(f, 0)
\]
Calculating the residue we have
\[
\operatorname{Res}(f, 0) = \lim_{z\to 0} z\,\frac{e^{iaz}}{\sinh(z)} = \lim_{z\to 0} \frac{e^{iaz}}{\cosh(z)} = 1
\]
Using that we get
\[
\int_{-\infty}^{\infty} \frac{e^{iay}}{\cosh(y)}\, dy = \frac{2\pi}{e^{\frac{\pi a}{2}} + e^{-\frac{\pi a}{2}}}
\]
By taking the real part
\[
\int_{-\infty}^{\infty} \frac{\cos(ay)}{\cosh(y)}\, dy = \pi\,\operatorname{sech}\!\left(\frac{\pi a}{2}\right)
\]
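A quick numerical check of this result (an added sketch, assuming the mpmath Python library):

```python
# Check  ∫_{-∞}^{∞} cos(ax)/cosh(x) dx = π sech(πa/2)  for a = 1.7.
import mpmath as mp

mp.mp.dps = 30
a = mp.mpf('1.7')
lhs = 2 * mp.quad(lambda x: mp.cos(a * x) / mp.cosh(x), [0, mp.inf])   # integrand is even
rhs = mp.pi * mp.sech(mp.pi * a / 2)
print(lhs, rhs)   # both ≈ 0.4329...
```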
22.7.2 Example
\[
\int_{-\infty}^{\infty} \frac{\cosh\!\left(y+\frac{\pi}{4}\right)}{\left(5\pi^2+8\pi y+16y^2\right)\cosh^3(y)}\, dy = \frac{2}{\pi^3}\left(\pi\cosh\frac{\pi}{4} - 4\sinh\frac{\pi}{4}\right)
\]
proof
Consider
\[
f(z) = \frac{\sinh(z)}{z\,\sinh^3\!\left(z-\frac{\pi}{4}\right)}
\]
If we integrate around a contour of height π and stretch it to infinity we get
[Figure: rectangular contour with vertices $\pm T \pm \frac{\pi i}{2}$, enclosing the pole at $z = \frac{\pi}{4}$.]
By taking T → ∞
\[
\int_{-i\pi/2-\infty}^{-i\pi/2+\infty} \frac{\sinh(x)}{x\,\sinh^3\!\left(x-\frac{\pi}{4}\right)}\, dx
\]
Let $x = -\frac{\pi i}{2} + \frac{\pi}{4} + y$:
\[
-\int_{-\infty}^{\infty} \frac{1}{-\frac{i\pi}{2}+\frac{\pi}{4}+y}\,\frac{\cosh\!\left(\frac{\pi}{4}+y\right)}{\cosh^3(y)}\, dy
\]
Similarly we have for
\[
\int_{i\pi/2+\infty}^{i\pi/2-\infty} \frac{\sinh(x)}{x\,\sinh^3\!\left(x-\frac{\pi}{4}\right)}\, dx
\]
By letting x = iπ/2 + π/4 + y
\[
\int_{-\infty}^{\infty} \frac{1}{\frac{i\pi}{2}+\frac{\pi}{4}+y}\,\frac{\cosh\!\left(\frac{\pi}{4}+y\right)}{\cosh^3(y)}\, dy
\]
The integrals over the vertical sides go to $0$; adding the two contributions and combining the fractions gives
\[
-16\pi i\int_{-\infty}^{\infty} \frac{\cosh\!\left(y+\frac{\pi}{4}\right)}{\left(5\pi^2+8\pi y+16y^2\right)\cosh^3(y)}\, dy = 2\pi i\,\operatorname{Res}\!\left(f, \frac{\pi}{4}\right)
\]
Calculating the residue we have
\[
-16\pi i\int_{-\infty}^{\infty} \frac{\cosh\!\left(y+\frac{\pi}{4}\right)}{\left(5\pi^2+8\pi y+16y^2\right)\cosh^3(y)}\, dy = 2\pi i\,\frac{-16\left(\pi\cosh\frac{\pi}{4} - 4\sinh\frac{\pi}{4}\right)}{\pi^3}
\]
Which reduces to our result
\[
\int_{-\infty}^{\infty} \frac{\cosh\!\left(y+\frac{\pi}{4}\right)}{\left(5\pi^2+8\pi y+16y^2\right)\cosh^3(y)}\, dy = \frac{2}{\pi^3}\left(\pi\cosh\frac{\pi}{4} - 4\sinh\frac{\pi}{4}\right)
\]
22.7.3 Example
\[
\int_0^{\infty} \frac{\sin(ax)}{e^{2\pi x}-1}\, dx = \frac{1}{4}\coth\!\left(\frac{a}{2}\right) - \frac{1}{2a}
\]
solution
\[
f(z) = \frac{e^{iaz}}{e^{2\pi z}-1}
\]
The function is analytic in and on the contour, indented at the poles of the function
[Figure: rectangular contour with vertices near $0$, $R$, $R+i$ and $i$, indented by small quarter-circles $\gamma_2$ around $z = 0$ and $\gamma_1$ around $z = i$.]

\[
\int_{\gamma_2} f(z)\, dz + \int_{\epsilon_2}^{R} f(x)\, dx + \int_{R}^{R+i} f(z)\, dz + \int_{R+i}^{i+\epsilon_2} f(z)\, dz + \int_{\gamma_1} f(z)\, dz - \int_{i\epsilon_2}^{i-i\epsilon_1} f(z)\, dz = 0
\]
\[
\lim_{\epsilon_2\to 0}\int_{\gamma_2} f(z)\, dz = -\frac{\pi i}{2}\operatorname{Res}(f, 0) = -\frac{i}{4}
\]
\[
\lim_{\epsilon_1\to 0}\int_{\gamma_1} f(z)\, dz = -\frac{\pi i}{2}\operatorname{Res}(f, i) = -\frac{i}{4}\,e^{-a}
\]
By combining the results together
\[
\left(1-e^{-a}\right)PV\!\int_0^{\infty} \frac{e^{iax}}{e^{2\pi x}-1}\, dx - PV\!\int_0^{1} \frac{\cos(\pi x)}{2\sin(\pi x)}\, e^{-ax}\, dx = i\,\frac{e^{-a}+1}{4} - i\,\frac{1-e^{-a}}{2a}
\]
By equating the imaginary parts
\[
\int_0^{\infty} \frac{\sin(ax)}{e^{2\pi x}-1}\, dx = \frac{1}{4}\coth\!\left(\frac{a}{2}\right) - \frac{1}{2a}
\]
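A quick numerical check of this result (an added sketch, assuming the mpmath Python library):

```python
# Check  ∫_0^∞ sin(ax)/(e^{2πx}-1) dx = (1/4) coth(a/2) - 1/(2a)  for a = 3.
import mpmath as mp

mp.mp.dps = 30
a = mp.mpf(3)
lhs = mp.quad(lambda x: mp.sin(a * x) / (mp.exp(2 * mp.pi * x) - 1), [0, mp.inf])
rhs = mp.coth(a / 2) / 4 - 1 / (2 * a)
print(lhs, rhs)   # both ≈ 0.1095...
```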
22.8.1 Example
\[
\int_0^{\infty} \frac{\cos(x)}{\sqrt{x}}\, dx = \int_0^{\infty} \frac{\sin(x)}{\sqrt{x}}\, dx = \sqrt{\frac{\pi}{2}}
\]
solution
Consider $f(z) = \frac{e^{iz}}{\sqrt{z}}$ and integrate around the following contour.

[Figure: the segment $[r, R]$, the straight path $\gamma$ from $R$ to $iR$, the segment from $iR$ down to $ir$, and the small arc $C_r$ back to $r$.]

\[
\int_{C_r} f(z)\, dz + \int_r^{R} f(x)\, dx + \int_{\gamma} f(z)\, dz + \int_{iR}^{ir} f(z)\, dz = 0
\]
\[
\left|\int_{C_r} f(z)\, dz\right| \le \left|\sqrt{r}\int_0^{\pi/2} e^{it/2}\, e^{ire^{it}}\, dt\right| \le \sqrt{r}\int_0^{\pi/2} \left|e^{-r\sin(t)}\right| dt \to 0 \quad (r\to 0)
\]
\[
\left|\int_{\gamma} f(z)\, dz\right| = \left|R(i-1)\int_0^1 e^{-\frac{1}{2}\log(R(1-t)+iRt)}\, e^{i(1-t)R-Rt}\, dt\right| \le \sqrt{2R}\int_0^1 \frac{e^{-Rt}}{\sqrt[4]{(1-t)^2+t^2}}\, dt
\]
Note that on the interval $[0,1]$ we have $\sqrt[4]{(1-t)^2+t^2} \ge \frac{1}{\sqrt[4]{2}}$, hence
\[
\left|\int_{\gamma} f(z)\, dz\right| \le 2^{3/4}\sqrt{R}\int_0^1 e^{-Rt}\, dt = \frac{2^{3/4}}{\sqrt{R}}\left(1-e^{-R}\right) \xrightarrow{\ R\to\infty\ } 0
\]
Finally, what remains when $r \to 0$ and $R \to \infty$ is
\[
\int_0^{\infty} \frac{e^{ix}}{\sqrt{x}}\, dx = i\int_0^{\infty} (ix)^{-1/2}\, e^{-x}\, dx
\]
\[
\int_0^{\infty} \frac{e^{ix}}{\sqrt{x}}\, dx = i\,e^{-i\pi/4}\int_0^{\infty} x^{-1/2}\, e^{-x}\, dx = i\,e^{-i\pi/4}\sqrt{\pi}
\]
Since $i\,e^{-i\pi/4}\sqrt{\pi} = \sqrt{\frac{\pi}{2}}\,(1+i)$, equating real and imaginary parts gives
\[
\int_0^{\infty} \frac{\cos(x)}{\sqrt{x}}\, dx = \int_0^{\infty} \frac{\sin(x)}{\sqrt{x}}\, dx = \sqrt{\frac{\pi}{2}}
\]
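A quick numerical check of the sine integral (an added sketch, assuming the mpmath Python library; the cosine integral has the same value):

```python
# Check  ∫_0^∞ sin(x)/√x dx = √(π/2).
import mpmath as mp

mp.mp.dps = 30
val = mp.quadosc(lambda x: mp.sin(x) / mp.sqrt(x), [0, mp.inf], period=2 * mp.pi)
print(val, mp.sqrt(mp.pi / 2))   # both ≈ 1.2533...
```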
The residue at infinity is given by
\[
\operatorname{Res}(f, \infty) = -\operatorname{Res}\!\left(\frac{1}{z^2}\, f\!\left(\frac{1}{z}\right), 0\right)
\]
This is useful especially for integrals where we wrap the contour around the whole complex plane by adding the point at infinity. Hence if we exclude a certain region of the complex plane and integrate around the whole complex plane (think of it as a sphere), then we can apply the residue theorem.
22.9.1 Example
Prove that
\[
\int_0^1 \sqrt{x}\,\sqrt{1-x}\, dx = \frac{\pi}{8}
\]
proof
\[
f(z) = \sqrt{z-z^2} = e^{\frac{1}{2}\log(z-z^2)}
\]
Consider the branch cut on the $x$-axis: with $w = z - z^2$ we need $w = x(1-x) \ge 0$, which gives $0 \le x \le 1$.

[Figure: dog-bone contour around the cut $[0,1]$, with small circles $c_0$ and $c_1$ around the branch points.]

\[
\int_{c_0} f(z)\, dz + \int_{c_1} f(z)\, dz + \int_{\epsilon}^{1-\epsilon} e^{\frac{1}{2}\log|x-x^2|}\, dx - \int_{\epsilon}^{1-\epsilon} e^{\frac{1}{2}\log|x-x^2| + \pi i}\, dx = 2\pi i\,\operatorname{Res}(f, \infty)
\]
\[
\sqrt{z-z^2} = i\sqrt{z^2}\sqrt{1-\frac{1}{z}} = iz\sum_{k=0}^{\infty}\binom{1/2}{k}\left(-\frac{1}{z}\right)^k
\]
Hence we deduce that
\[
\operatorname{Res}(f, \infty) = -\frac{i}{8}
\]
That implies
\[
\int_{c_0} f(z)\, dz + \int_{c_1} f(z)\, dz + 2\int_{\epsilon}^{1-\epsilon} \sqrt{x}\,\sqrt{1-x}\, dx = \frac{\pi}{4}
\]
The contours around the branch points go to zero. Finally we get
\[
\int_0^1 \sqrt{x}\,\sqrt{1-x}\, dx = \frac{\pi}{8}
\]
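A quick numerical check of this value (an added sketch, assuming the mpmath Python library); note it is also the Beta value $B(3/2, 3/2)$:

```python
# Check  ∫_0^1 √x √(1-x) dx = π/8.
import mpmath as mp

mp.mp.dps = 30
lhs = mp.quad(lambda x: mp.sqrt(x) * mp.sqrt(1 - x), [0, 1])
print(lhs, mp.pi / 8, mp.beta(1.5, 1.5))   # all ≈ 0.3926990...
```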
22.10 Inverse of Laplace transform
\[
f(t) = \mathcal{L}^{-1}\{F\}(t) = \frac{1}{2\pi i}\lim_{T\to\infty}\int_{\gamma-iT}^{\gamma+iT} e^{st}\, F(s)\, ds
\]
22.10.1 Example
\[
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \Gamma(a+t)\,\Gamma(b-t)\, s^{-t}\, dt = \frac{\Gamma(a+b)}{(1+s)^{a+b}}\, s^{a}
\]
proof
Suppose that $a, b \in \mathbb{R}$ and $a < b$. Note that the Gamma function has a pole of order 1 at each non-positive integer $-n$, with
\[
\operatorname{Res}(\Gamma, -n) = \frac{(-1)^n}{n!}
\]
The integrand $f(t) = \Gamma(a+t)\Gamma(b-t)s^{-t}$ has poles at the points
\[
\ldots,\ -n-a,\ \ldots,\ -1-a,\ -a, \qquad b,\ b+1,\ \ldots,\ b+n,\ \ldots
\]
Notice that the function f is analytic on the region −a < Re(z) < b , hence consider the contour in Figure
19.
[Figure 19: the vertical line from $c-iT$ to $c+iT$, closed by a large arc $C_R$ to the left; it encloses the poles at $-a, -a-1, \ldots$ while the poles at $b, b+1, \ldots$ lie to the right.]
\[
\int_{C_R} f(z)\, dz + \int_{c-iT}^{c+iT} f(z)\, dz = 2\pi i\sum_{k=0}^{n}\operatorname*{Res}_{z=-a-k} f(z)
\]
\[
\lim_{R\to\infty}\int_{C_R} f(z)\, dz + \int_{c-i\infty}^{c+i\infty} f(t)\, dt = 2\pi i\sum_{k=0}^{\infty}\operatorname*{Res}_{z=-a-k} f(z)
\]
Note that the sum of the residues is
\[
\sum_{k=0}^{\infty}\operatorname*{Res}_{z=-a-k} f(z) = s^{a}\sum_{k=0}^{\infty}\frac{\Gamma(a+b+k)}{\Gamma(k+1)}\,(-s)^k = s^{a}\,\Gamma(a+b)\sum_{k=0}^{\infty}(a+b)_k\,\frac{(-s)^k}{k!}
\]
By the definition of the hypergeometric function and the binomial series,
\[
s^{a}\,\Gamma(a+b)\sum_{k=0}^{\infty}(a+b)_k\,\frac{(-s)^k}{k!} = s^{a}\,\Gamma(a+b)\,{}_2F_1(a+b, 1; 1; -s) = \frac{\Gamma(a+b)}{(1+s)^{a+b}}\, s^{a}
\]
Also notice that
\[
\lim_{R\to\infty}\int_{C_R} f(z)\, dz = 0
\]
So we deduce that
\[
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \Gamma(a+t)\,\Gamma(b-t)\, s^{-t}\, dt = \frac{\Gamma(a+b)}{(1+s)^{a+b}}\, s^{a}
\]
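A quick numerical check of this Mellin–Barnes formula (an added sketch, assuming the mpmath Python library; parametrize $t = c + iy$ so that $dt = i\,dy$):

```python
# Check  (1/2πi) ∫ Γ(a+t) Γ(b-t) s^{-t} dt = Γ(a+b) s^a / (1+s)^{a+b}  along Re(t) = c.
import mpmath as mp

mp.mp.dps = 30
a, b, s, c = mp.mpf('0.5'), mp.mpf('1.5'), mp.mpf('0.3'), mp.mpf('0.2')   # need -a < c < b
integrand = lambda y: mp.gamma(a + c + 1j * y) * mp.gamma(b - c - 1j * y) * s**(-(c + 1j * y))
lhs = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)
rhs = mp.gamma(a + b) * s**a / (1 + s)**(a + b)
print(lhs)   # real part ≈ rhs, imaginary part ≈ 0
print(rhs)   # ≈ 0.3241...
```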
If we have an infinite number of poles, then by taking a contour that covers all of them we obtain an infinite sum of residues, which can be used to evaluate series.
22.11.1 Example
Prove that
\[
\sum_{n=1}^{\infty}\frac{H_n}{n^2} = 2\zeta(3)
\]
proof
\[
f(z) = \frac{\left(\psi(-z)+\gamma\right)^2}{z^2}
\]
Note that f has poles at non-negative integers
[Figure: large circle $C_\rho$ of radius $\rho$ enclosing the poles at $0, 1, 2, \ldots$]
Note that
\[
\oint_{C_\rho} f(z)\, dz = 2\pi i\left(\operatorname{Res}(f, 0) + \sum_{n=1}^{\infty}\operatorname{Res}(f, n)\right)
\]
Since the integral over $C_\rho$ vanishes as $\rho \to \infty$,
\[
2\pi i\left(\operatorname{Res}(f, 0) + \sum_{n=1}^{\infty}\operatorname{Res}(f, n)\right) = 0
\]
By expansion near z = 0
Res(f, 0) = −2ζ(3)
For the poles at $z = n$ with $n \ge 1$, expand
\[
\frac{1}{z^2} = \sum_{j=0}^{\infty}(-1)^j\binom{j+1}{1}\frac{(z-n)^j}{n^{2+j}}
\]
\[
\psi(-z)+\gamma = \frac{1}{z-n} + H_n + \sum_{k=1}^{\infty}\left((-1)^k H_n^{(k+1)} - \zeta(k+1)\right)(z-n)^k
\]
\[
\left(\psi(-z)+\gamma\right)^2 \approx \frac{1}{(z-n)^2} + \frac{2H_n}{z-n}
\]
This implies the residue
\[
\operatorname{Res}(f, n) = 2\,\frac{H_n}{n^2} - \frac{2}{n^3}
\]
We then deduce that
\[
2\sum_{n=1}^{\infty}\left[\frac{H_n}{n^2} - \frac{1}{n^3}\right] - 2\zeta(3) = 0
\]
Finally we get
\[
\sum_{n=1}^{\infty}\frac{H_n}{n^2} = 2\zeta(3)
\]
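A quick numerical check of this Euler sum (an added sketch, assuming the mpmath Python library; the series converges slowly, so a partial sum only agrees to a few decimal places):

```python
# Partial sum of  Σ H_n/n²  versus 2ζ(3); the tail is O(log N / N).
import mpmath as mp

mp.mp.dps = 20
N = 50000
H, total = mp.mpf(0), mp.mpf(0)
for n in range(1, N + 1):
    H += mp.mpf(1) / n           # running harmonic number H_n
    total += H / mp.mpf(n)**2
print(total)           # ≈ 2.4039  (partial sum)
print(2 * mp.zeta(3))  # ≈ 2.4041138...
```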