Maths
There are several issues with traditional mathematics education. First, it focuses too much on technical details. For example, students are asked to routinely apply the formula $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ to solve many quadratic equations (e.g. $x^2 - 2x + 1 = 0$, $x^2 + 5x - 10 = 0$, etc., and the
list goes on). Second, the history of mathematics is completely ignored; textbook exposition usually presents a complete reversal of the historical order in which mathematics developed. The
main purpose of the textbooks is to present mathematics with its characteristic logical structure
and its incomparable deductive certainty. That’s why in a calculus class students are taught what a function is, then what a limit is, then what a derivative is, and finally applications. The truth is the reverse: Fermat implicitly used derivatives in solving maxima problems; Newton and Leibniz discovered the derivative; Taylor, the Bernoulli brothers and Euler developed it; Lagrange characterized it; and only at the end of this long period of development, spanning about two hundred years, did Cauchy and Weierstrass define it. Third, there is little opportunity for students to discover (rediscover, to be exact) the mathematics for themselves. Definitions and theorems are presented at the outset; students study the proofs and do applications.
Born and raised in Vietnam in the early 80s, I received such a mathematical education. Lacking books and guidance, I spent most of my time solving countless mathematical exercises. Even though I remember enjoying some of them, admittedly the goal was always to get high marks in exams and, in particular, to pass the university entrance examination. Most of the time, it was clever tricks that were learned, not the true meaning of the mathematical concepts or their applications. Of course, why people came up with those concepts and why these concepts are defined the way they are were not discussed by the teachers (and unfortunately I did not ask these important questions). After my bachelor’s degree, I enrolled in a master’s program. Again, I was on the same educational route: solving as many problems as possible. And, as you can guess, after the master’s came a PhD study in the Netherlands. Though I had the time, freedom and resources to do whatever I felt was needed, the focus was still to pass yet another form of examination: graduation. This time it was measured by the number of research papers published in peer-reviewed journals. To pursue an academic career, I took a postdoctoral job whose main aim was to have as many papers as possible. As you can imagine, I became technically fluent in a narrow field but on a weak foundation.
Eventually, I got a job at a university in 2016. For the first time in my life, I did not have to ‘perform’ but was able to really learn things (university staff still need to meet certain performance criteria, which are vital for probation and promotion). This is when I started reading books outside my research field, and I found that very enjoyable.
The turning point was the book A Mathematician’s Lament by Paul Lockhart, a professional mathematician turned college teacher. Lockhart describes how maths is taught incorrectly in schools and offers better ways to teach it. He continues in Measurement
by showing us how we should learn maths by ‘re-discovering’ it for ourselves. That made me decide to re-learn mathematics, but this time in a (much) more fun and efficient way. A bit of research led me to the book Learning How to Learn by Barbara Oakley and Terry Sejnowski. The biggest lesson taken from their book is that you can learn any subject if you go about it properly.
So, I started learning mathematics from scratch during my free time. It started probably
in 2017. I have read many books on mathematics and physics, as well as books on the history of mathematics. I wrote notes on my iPad recording what I had learned. Then came the COVID-19 pandemic, which locked down Melbourne, the city I live in. That was when I decided to put my iPad notes into book format, to build a coherent story that is not only beneficial to me but, hopefully, helpful to others as well.
This book is a set of notes covering (elementary) algebra, trigonometry, analytic geometry,
calculus of functions of a single variable, and probability. This covers the main content of the mathematics curriculum for high school students, except that Euclidean geometry is not discussed
extensively. These are followed by statistics, calculus of functions of more than one variable,
differential equations, variational calculus, linear algebra and numerical analysis. These topics
are for undergraduate college students majoring in science, technology, engineering and mathematics. Very few such books exist, I believe, as the two target audiences are so different. This one is different because it was written first and mainly for myself. However, I do believe that high school students can benefit from the ‘advanced’ topics by seeing applications of high school mathematics, and extensions or better explanations thereof. On the other hand, college students who do not have a solid background in mathematics can use the elementary parts of this book as a review.
The style of the book, as you might guess, is informal, mostly because I am not a mathematician and also because I like a conversational tone. This is not a traditional mathematics textbook, so it does not include many exercises. Instead it focuses on the mathematical concepts: their origin (why we need them), their definition (why they are defined the way they are) and their extensions. The process leading to proofs and solutions is discussed, as most often it is the first step that is hard; the rest is mostly labor (usually involving algebra). And of course, the history of mathematics is included by presenting major figures in mathematics along with short biographies.
Of course there is no new mathematics in this book as I am not a mathematician; I do not
produce new mathematics. The maths presented is standard, and thus I do not cite exact sources. But I do mention all the books and sources from which I have learned the maths.
The title deserves a bit of explanation. The adjective minimum was used to emphasize that even though the book covers many topics, it also leaves out many. I do not discuss topology, graph theory, abstract algebra or differential geometry, simply because I do not know them (and plan to learn them when the time is right). But the book goes beyond studying mathematics just to apply it to science and engineering. Even so, it seems that no amount of mathematics is ever sufficient: Einstein, just hours before his death, pointed to his equations while lamenting to his son, “If only I had more mathematics”.
And finally, influenced by the fact that I am an engineer, the book introduces programming
from the beginning. Thus, young students can learn mathematics and programming at the same
time! For now, programming is used just to automate some tedious calculations, or to compute an infinite series numerically before attacking it analytically. Or, a bit harder, to solve Newton’s equations to analyse the orbits of planets. But early exposure to programming is vital to their future careers. Not least, coding is fun!
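As a small taste of this kind of automation, here is a minimal sketch, in Python for illustration (the book itself lists Julia among its tools), that numerically sums the series 1/1² + 1/2² + 1/3² + … — the Basel problem discussed later in the book — and compares the partial sums against Euler’s analytical value π²/6. The function name `basel_partial_sum` is mine, not the book’s.

```python
import math

def basel_partial_sum(n_terms):
    """Partial sum of the series 1/1^2 + 1/2^2 + ... + 1/n^2."""
    return sum(1.0 / n**2 for n in range(1, n_terms + 1))

# Euler showed analytically that the infinite sum equals pi^2/6.
exact = math.pi**2 / 6

# Watch the partial sums creep toward the analytical answer.
for n in (10, 1000, 100000):
    s = basel_partial_sum(n)
    print(f"n = {n:>6}: partial sum = {s:.6f}, error = {exact - s:.2e}")
```

Running such an experiment before seeing the proof gives a concrete feel for what ‘the series converges to π²/6’ actually means.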
Acknowledgments
I was lucky to get help from some people. I would like to thank “anh Bé”, who tutored me, for free, in mathematics when I needed help most. To my secondary school math teacher “Thay Dieu”, who refused to receive any tutoring fee, I want to acknowledge his generosity. To my high school math teacher “Thay Son”, whose belief in me made me more confident in myself, I would like to say thank you very much. To my friend Phuong Thao, who taught me not to memorize formulas, I want to express my deepest gratitude, as this simple advice has completely changed the way I have studied since. And finally, to Prof Hung Nguyen-Dang, whose EMMC master program has changed the course of my life and of many other Vietnamese, “em cam on Thay rat nhieu”.
In the learning process, I cannot say thank you enough to some amazing YouTube channels such as 3Blue1Brown, Mathologer, blackpenredpen and Dr. Trefor Bazett. They provide animation-based explanations of many mathematics topics, from which I have learned a lot.
I have received encouragement along this journey, and I would like to thank Miguel Cervera at Universitat Politècnica de Catalunya, whom I have never met, Laurence Brassar at the University of Oxford, and Haojie Lian at Taiyuan University of Technology. To my close friend Chi Nguyen-Thanh (Royal HaskoningDHV Vietnam), thank you very much for your friendship and encouragement in this project.
This book was typeset with LaTeX on a MacBook. Figures are drawn by hand using an iPad or generated using open-source software such as geogebra, processing and julia.
1 Introduction 3
1.1 What is mathematics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Axiom, definition, theorem and proof . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Exercises versus problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Problem solving strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Computing in mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Mathematical anxiety or math phobia . . . . . . . . . . . . . . . . . . . . . . 17
1.7 Millennium Prize Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8 Organization of the book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Algebra 23
2.1 Natural numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Integer numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.1 Negative numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.2 A brief history on negative numbers . . . . . . . . . . . . . . . . . . 30
2.2.3 Arithmetic of negative integers . . . . . . . . . . . . . . . . . . . . . 31
2.3 Playing with natural numbers . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 If and only if: conditional statements . . . . . . . . . . . . . . . . . . . . . . 36
2.5 Sums of whole numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5.1 Sum of the first n whole numbers . . . . . . . . . . . . . . . . . . . . 37
2.5.2 Sum of the squares of the first n whole numbers . . . . . . . . . . . . 41
2.5.3 Sum of the cubes of the first n whole numbers . . . . . . . . . . . . . 42
2.6 Prime numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.6.1 How many primes are there? . . . . . . . . . . . . . . . . . . . . . . 44
2.6.2 The prime number theorem . . . . . . . . . . . . . . . . . . . . . . . 45
2.6.3 Twin primes and the story of Yitang Zhang . . . . . . . . . . . . . . 46
2.7 Rational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.7.1 What is 5/2? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.7.2 Decimal notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.8 Irrational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.8.1 Diagonal of a unit square . . . . . . . . . . . . . . . . . . . . . . . . 51
2.8.2 Arithmetic of the irrationals . . . . . . . . . . . . . . . . . . . . 53
2.8.3 Roots ⁿ√x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.8.4 Golden ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.8.5 Axioms for the real numbers . . . . . . . . . . . . . . . . . . . . . . 59
2.9 Fibonacci numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.10 Continued fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.11 Pythagoras theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.11.1 Pythagorean triples . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.11.2 Fermat’s last theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.11.3 Solving integer equations . . . . . . . . . . . . . . . . . . . . . . . . 70
2.12 Imaginary number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.12.1 Linear equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.12.2 Quadratic equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.12.3 Cubic equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.12.4 How Viète solved the depressed cubic equation . . . . . . . . . . . . 77
2.13 Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.14 Word problems and system of linear equations . . . . . . . . . . . . . . . . . 83
2.15 System of nonlinear equations . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.16 Algebraic and transcendental equations . . . . . . . . . . . . . . . . . . . . . 91
2.17 Powers of 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.18 Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.18.1 Arithmetic series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.18.2 Geometric series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.18.3 Harmonic series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.18.4 Basel problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.18.5 Viète’s infinite product . . . . . . . . . . . . . . . . . . . . . . . . . 106
2.18.6 Sum of differences . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
2.19 Sequences, convergence and limit . . . . . . . . . . . . . . . . . . . . . . . . 110
2.19.1 Some examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.19.2 Rules of limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
2.19.3 Properties of sequences . . . . . . . . . . . . . . . . . . . . . . . . . 114
2.20 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
2.20.1 Simple proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
2.20.2 Inequality of arithmetic and geometric means . . . . . . . . . . . . . 116
2.20.3 Cauchy–Schwarz inequality . . . . . . . . . . . . . . . . . . . . . . 120
2.20.4 Inequalities involving the absolute values . . . . . . . . . . . . . . . 124
2.20.5 Solving inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
2.20.6 Using inequalities to solve equations . . . . . . . . . . . . . . . . . . 126
2.21 Inverse operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
2.22 Logarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.22.1 Why logarithms are useful . . . . . . . . . . . . . . . . . . . . . 130
2.22.2 How Henry Briggs calculated logarithms in 1617 . . . . . . . . . . . 131
2.22.3 Solving exponential equations . . . . . . . . . . . . . . . . . . . . . 133
2.23 Complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.23.1 de Moivre’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . 139
2.23.2 Roots of complex numbers . . . . . . . . . . . . . . . . . . . . . . . 140
2.23.3 Square root of i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
2.23.4 Trigonometry identities . . . . . . . . . . . . . . . . . . . . . . . . . 143
2.23.5 Power of real number with a complex exponent . . . . . . . . . . . . 144
2.23.6 Power of an imaginary number with a complex exponent . . . . . . . 148
2.23.7 A summary of different kinds of numbers . . . . . . . . . . . . . . . 150
2.24 Combinatorics: The Art of Counting . . . . . . . . . . . . . . . . . . . . . . . 150
2.24.1 Product rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.24.2 Factorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.24.3 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
2.24.4 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
2.24.5 Generalized permutations and combinations . . . . . . . . . . . . . . 157
2.24.6 The pigeonhole principle . . . . . . . . . . . . . . . . . . . . . . . . 158
2.25 Binomial theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
2.26 Compounding interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
2.27 Pascal triangle and e number . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2.28 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.28.1 Arithmetics of polynomials . . . . . . . . . . . . . . . . . . . . . . . 169
2.28.2 The polynomial remainder theorem . . . . . . . . . . . . . . . . . . 170
2.28.3 Polynomial evaluation and Horner’s method . . . . . . . . . . . . . . 171
2.28.4 Vieta’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2.29 Modular arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
2.30 Cantor and infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
2.30.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
2.30.2 Finite and infinite sets . . . . . . . . . . . . . . . . . . . . . . . . . . 183
2.30.3 Uncountably infinite sets . . . . . . . . . . . . . . . . . . . . . . . . 185
2.31 Number systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
2.32 Graph theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
2.32.1 The Seven Bridges of Königsberg . . . . . . . . . . . . . . . . . . . 187
2.32.2 Map coloring and the four color theorem . . . . . . . . . . . . . . . . 189
2.33 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
2.33.1 Euclidean algorithm: greatest common divisor . . . . . . . . . . . . . 191
2.33.2 Puzzle from Die Hard . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.34 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
3 Trigonometry 197
3.1 Euclidean geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
3.2 Trigonometric functions: right triangles . . . . . . . . . . . . . . . . . . . . . 202
3.3 Trigonometric functions: unit circle . . . . . . . . . . . . . . . . . . . . . . . 203
3.4 Degree versus radian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
3.5 Some first properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
3.6 Sine table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
3.7 Trigonometry identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
3.8 Inverse trigonometric functions . . . . . . . . . . . . . . . . . . . . . . . . . 218
3.9 Inverse trigonometric identities . . . . . . . . . . . . . . . . . . . . . . . . . 219
3.10 Trigonometry inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
3.11 Trigonometry equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.12 Generalized Pythagoras theorem . . . . . . . . . . . . . . . . . . . . . . . . . 230
3.13 Graph of trigonometry functions . . . . . . . . . . . . . . . . . . . . . . . . . 231
3.14 Hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.15 Applications of trigonometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
3.15.1 Measuring the earth . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
3.15.2 Charting the earth . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
3.16 Infinite series for sine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
3.17 Unusual trigonometric identities . . . . . . . . . . . . . . . . . . . . . . . . . 244
3.18 Spherical trigonometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
3.19 Computer algebra systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.20 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
4 Calculus 251
4.1 Conic sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
4.1.1 Cartesian coordinate system . . . . . . . . . . . . . . . . . . . . . . 255
4.1.2 Circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
4.1.3 Ellipses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
4.1.4 Parabolas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
4.1.5 Hyperbolas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
4.1.6 General form of conic sections . . . . . . . . . . . . . . . . . . . . . 260
4.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
4.2.1 Even and odd functions . . . . . . . . . . . . . . . . . . . . . . . . . 264
4.2.2 Transformation of functions . . . . . . . . . . . . . . . . . . . . . . 265
4.2.3 Function of function . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.2.4 Domain, co-domain and range of a function . . . . . . . . . . . . . . 267
4.2.5 Inverse functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
4.2.6 Parametric curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
4.2.7 History of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
4.2.8 Some exercises about functions . . . . . . . . . . . . . . . . . . . . . 270
4.3 Integral calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.3.1 Areas of simple geometries . . . . . . . . . . . . . . . . . . . . . . . 271
4.3.2 Area of the first curved plane: the lune of Hippocrates . . . . . . . . . 273
4.3.3 Area of a parabola segment . . . . . . . . . . . . . . . . . . . . . . . 274
4.3.4 Circumference and area of circles . . . . . . . . . . . . . . . . . . . 275
4.3.5 Calculation of π . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4.3.6 Definition of an integral . . . . . . . . . . . . . . . . . . . . . . . . . 281
4.3.7 Calculation of integrals using the definition . . . . . . . . . . . . . . 283
4.3.8 Rules of integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
4.3.9 Indefinite integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
4.4 Differential calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
4.4.1 Maxima of Fermat . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
4.4.2 Heron’s shortest distance . . . . . . . . . . . . . . . . . . . . . . . . 287
4.4.3 Uniform vs non-uniform speed . . . . . . . . . . . . . . . . . . . . . 289
4.4.4 The derivative of a function . . . . . . . . . . . . . . . . . . . . . . . 292
4.4.5 Infinitesimals and differentials . . . . . . . . . . . . . . . . . . . . . 293
4.4.6 The geometric meaning of the derivative . . . . . . . . . . . . . . . . 294
4.4.7 Derivative of f(x) = x^n . . . . . . . . . . . . . . . . . . . . . . 296
4.4.8 Derivative of trigonometric functions . . . . . . . . . . . . . . . . . . 297
4.4.9 Rules of derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
4.4.10 The chain rule: derivative of composite functions . . . . . . . . . . . 300
4.4.11 Derivative of inverse functions . . . . . . . . . . . . . . . . . . . . . 301
4.4.12 Derivatives of inverses of trigonometry functions . . . . . . . . . . . 301
4.4.13 Derivatives of a^x and number e . . . . . . . . . . . . . . . . . . 302
4.4.14 Logarithm functions . . . . . . . . . . . . . . . . . . . . . . . . . . 304
4.4.15 Derivative of hyperbolic and inverse hyperbolic functions . . . . . . . 306
4.4.16 High order derivatives . . . . . . . . . . . . . . . . . . . . . . . . . 307
4.4.17 Implicit functions and implicit differentiation . . . . . . . . . . . . . 308
4.4.18 Derivative of logarithms . . . . . . . . . . . . . . . . . . . . . . . . 309
4.5 Applications of derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
4.5.1 Maxima and minima . . . . . . . . . . . . . . . . . . . . . . . . . . 310
4.5.2 Convexity and Jensen’s inequality . . . . . . . . . . . . . . . . . . . 312
4.5.3 Linear approximation . . . . . . . . . . . . . . . . . . . . . . . . . . 316
4.5.4 Newton’s method for solving f(x) = 0 . . . . . . . . . . . . . . . . 317
4.6 The fundamental theorem of calculus . . . . . . . . . . . . . . . . . . . . . . 320
4.7 Integration techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
4.7.1 Integration by substitution . . . . . . . . . . . . . . . . . . . . . . . 325
4.7.2 Integration by parts . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
4.7.3 Trigonometric integrals: sine/cosine . . . . . . . . . . . . . . . . . . 328
4.7.4 Repeated integration by parts . . . . . . . . . . . . . . . . . . . . . . 331
4.7.5 Trigonometric integrals: tangents and secants . . . . . . . . . . . . . 333
4.7.6 Integration by trigonometric substitution . . . . . . . . . . . . . . . . 335
4.7.7 Integration of P(x)/Q(x) using partial fractions . . . . . . . . . . 337
4.7.8 Tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
4.8 Improper integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
4.9 Applications of integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.9.1 Length of plane curves . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.9.2 Areas and volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
4.9.3 Area and volume of a solid of revolution . . . . . . . . . . . . . . . . 348
4.9.4 Gravitation of distributed masses . . . . . . . . . . . . . . . . . . . . 352
4.9.5 Using integral to compute limits of sums . . . . . . . . . . . . . . . . 354
4.10 Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
4.10.1 Definition of the limit of a function . . . . . . . . . . . . . . . . . . . 356
4.10.2 Rules of limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
4.10.3 Continuous functions . . . . . . . . . . . . . . . . . . . . . . . . . . 363
4.10.4 Indeterminate forms . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
4.10.5 Differentiable functions . . . . . . . . . . . . . . . . . . . . . . . . . 367
4.11 Some theorems on differentiable functions . . . . . . . . . . . . . . . . . . . 369
4.11.1 Extreme value and intermediate value theorems . . . . . . . . . . . . 369
4.11.2 Rolle’s theorem and the mean value theorem . . . . . . . . . . . . . . 370
4.11.3 Average of a function and the mean value theorem of integrals . . . . 371
4.12 Polar coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
4.12.1 Polar coordinates and polar graphs . . . . . . . . . . . . . . . . . . . 372
4.12.2 Conic sections in polar coordinates . . . . . . . . . . . . . . . . . . . 374
4.12.3 Length and area of polar curves . . . . . . . . . . . . . . . . . . . . 376
4.13 Bézier curves: fascinating parametric curves . . . . . . . . . . . . . . . . . . 377
4.14 Infinite series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
4.14.1 The generalized binomial theorem . . . . . . . . . . . . . . . . . . . 382
4.14.2 Series of 1/(1 + x) or Mercator’s series . . . . . . . . . . . . . . 385
4.14.3 Geometric series and logarithm . . . . . . . . . . . . . . . . . . . . . 386
4.14.4 Geometric series and inverse tangent . . . . . . . . . . . . . . . . . . 387
4.14.5 Euler’s work on exponential functions . . . . . . . . . . . . . . . . . 388
4.14.6 Euler’s trigonometry functions . . . . . . . . . . . . . . . . . . . . . 389
4.14.7 Euler’s solution of the Basel problem . . . . . . . . . . . . . . . . . 391
4.14.8 Taylor’s series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
4.14.9 Common Taylor series . . . . . . . . . . . . . . . . . . . . . . . . . 395
4.14.10 Taylor’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
4.15 Applications of Taylor’s series . . . . . . . . . . . . . . . . . . . . . 399
4.15.1 Integral evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
4.15.2 Limit evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
4.15.3 Series evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
4.16 Bernoulli numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
4.17 Euler-Maclaurin summation formula . . . . . . . . . . . . . . . . . . . . . . 404
4.18 Fourier series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
4.18.1 Periodic functions with period 2π . . . . . . . . . . . . . . . . . 408
4.18.2 Functions with period 2L . . . . . . . . . . . . . . . . . . . . . . . . 411
4.18.3 Complex form of Fourier series . . . . . . . . . . . . . . . . . . . . . 413
4.19 Special functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
4.19.1 Factorial of 1/2 and the Gamma function . . . . . . . . . . . . . . 414
4.19.2 Zeta function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
4.20 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
5 Probability 419
5.1 A brief history of probability . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
5.2 Classical probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
5.3 Empirical probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
5.4 Buffon’s needle problem and Monte Carlo simulations . . . . . . . . . . . . . 425
5.4.1 Buffon’s needle problem . . . . . . . . . . . . . . . . . . . . . . . . 425
5.4.2 Monte Carlo method . . . . . . . . . . . . . . . . . . . . . . . . . . 426
5.5 A review of set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
5.6 Random experiments, sample space and event . . . . . . . . . . . . . . . . . . 433
5.7 Probability and its axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
5.8 Conditional probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
5.8.1 What is a conditional probability . . . . . . . . . . . . . . . . . . . . 438
5.8.2 P(A|B) is also a probability . . . . . . . . . . . . . . . . . . . . 439
5.8.3 Multiplication rule for conditional probability . . . . . . . . . . . . . 440
5.8.4 Bayes’ formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
5.8.5 The odds form of the Bayes’ rule . . . . . . . . . . . . . . . . . . . . 444
5.8.6 Independent events . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
5.8.7 The gambler’s ruin problem . . . . . . . . . . . . . . . . . . . . . . 451
5.9 The secretary problem or dating mathematically . . . . . . . . . . . . . . . . 454
5.10 Discrete probability models . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
5.10.1 Discrete random variables . . . . . . . . . . . . . . . . . . . . . . . 460
5.10.2 Probability mass function . . . . . . . . . . . . . . . . . . . . . . . . 461
5.10.3 Special distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 462
5.10.4 Cumulative distribution function . . . . . . . . . . . . . . . . . . . . 474
5.10.5 Expected value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
5.10.6 Functions of random variables . . . . . . . . . . . . . . . . . . . . . 477
5.10.7 Linearity of the expectation . . . . . . . . . . . . . . . . . . . . . . . 479
5.10.8 Variance and standard deviation . . . . . . . . . . . . . . . . . . . . 481
5.10.9 Expected value and variance of special distributions . . . . . . . . . . 484
5.11 Continuous probability models . . . . . . . . . . . . . . . . . . . . . . . . . . 485
5.11.1 Continuous random variables . . . . . . . . . . . . . . . . . . . . . . 485
5.11.2 Probability density function . . . . . . . . . . . . . . . . . . . . . . 485
5.11.3 Expected value and variance . . . . . . . . . . . . . . . . . . . . . . 487
5.11.4 Special continuous distributions . . . . . . . . . . . . . . . . . . . . 488
5.12 Joint distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
5.12.1 Two jointly discrete variables . . . . . . . . . . . . . . . . . . . . . . 491
5.12.2 Two joint continuous variables . . . . . . . . . . . . . . . . . . . . . 493
5.12.3 Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
5.13 Inequalities in the theory of probability . . . . . . . . . . . . . . . . . . . . . 497
5.13.1 Markov and Chebyshev inequalities . . . . . . . . . . . . . . . . . . 497
5.13.2 Chernoff’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . . 498
5.14 Limit theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
5.14.1 The law of large numbers . . . . . . . . . . . . . . . . . . . . . . . . 498
5.14.2 Central limit theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 498
5.15 Generating functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
5.15.1 Ordinary generating function . . . . . . . . . . . . . . . . . . . . . . 502
5.15.2 Probability generating functions . . . . . . . . . . . . . . . . . . . . 504
5.15.3 Moment generating functions . . . . . . . . . . . . . . . . . . . . . . 504
5.15.4 Proof of the central limit theorem . . . . . . . . . . . . . . . . . . . 506
5.16 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Bibliography 887
Index 893
Contents
1.1 What is mathematics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Axiom, definition, theorem and proof . . . . . . . . . . . . . . . . . . . . 10
1.3 Exercises versus problems . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Problem solving strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Computing in mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Mathematical anxiety or math phobia . . . . . . . . . . . . . . . . . . . 17
1.7 Millennium Prize Problems . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8 Organization of the book . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Mathematicians study imaginary objects which are called mathematical objects. Some examples
are numbers, functions, triangles, matrices, groups and more complicated things such as vector
spaces and infinite series. These objects are said to be imaginary or abstract as they do not exist in our
physical world. For instance, in geometry a line does not have thickness and a line is perfectly
straight! And certainly mathematicians don’t care if a line is made of steel or wood. There are
no such things in the physical world. Similarly, we cannot hold or taste the number three. When
we write 3 on a beach and touch it, we only touch a representation of the number three.
Philip J. Davis (1923–2018) was an American academic applied mathematician.
Reuben Hersh (1927–2020) was an American mathematician and academic, best known for his writings on the
nature, practice, and social impact of mathematics. His work challenges and complements mainstream philosophy
of mathematics.
Why is working with abstract objects useful? One example from geometry provides a
simple answer. Suppose that we can prove that the area of a (mathematical) circle is π times the
square of the radius; then this fact applies to the area of a circular field, the cross section of
a circular tree trunk, or the floor area of a circular temple.
Having now in their hands some mathematical objects, how do mathematicians deduce
new knowledge? As senses, experimentation and measurement are not sufficient, they rely on
reasoning. Yes, logical reasoning. This started with the Greek mathematicians. It is obvious that
we cannot use our senses to estimate the distance from the Earth to the Sun. It would be more
tedious to measure the area of a rectangular region directly than to measure just its sides and use
mathematics to get the area. And it is very time consuming and error prone to design structures
by pure experimentation. If a bridge were designed in this way, it would only be fair that the
designer be the first to cross it.
What are mathematicians really trying to get from their objects? Godfrey Hardy answered
this best:
A mathematician, like a painter or poet, is a maker of patterns. If his patterns are
more permanent than theirs, it is because they are made with ideas.
Implied by Hardy is that mathematics is a study of patterns of mathematical objects. Let’s
confine ourselves to natural numbers as the mathematical object. The following is one example of how
mathematicians play with their objects. They start with a question: what is the sum of the first n
natural numbers? This sum is mathematically written as
S(n) = 1 + 2 + 3 + ⋯ + n

For example, if n = 3, then the sum is S(3) = 1 + 2 + 3, and if n = 4, then the sum is
S(4) = 1 + 2 + 3 + 4, and so on. Now, mathematicians are lazy creatures; they do not want to
compute the sums for different values of n. They want to find a single formula for the sum that
works for any n. To achieve that they have to see through the problem, or to see the pattern. Thus,
they compute the sum for a few special cases: for n = 1, 2, 3, 4, the corresponding sums are

n = 1:  S(1) = 1
n = 2:  S(2) = 1 + 2 = 3 = (2 × 3)/2
n = 3:  S(3) = 1 + 2 + 3 = 6 = (3 × 4)/2
n = 4:  S(4) = 1 + 2 + 3 + 4 = 10 = (4 × 5)/2

A pattern emerges and they guess the following formula

S(n) = 1 + 2 + 3 + ⋯ + n = n(n + 1)/2    (1.1.1)
Godfrey Harold Hardy (February 1877 – December 1947) was an English mathematician, known for his
achievements in number theory and mathematical analysis. In biology, he is known for the Hardy–Weinberg
principle, a basic principle of population genetics.
What is more interesting is how they prove that their formula is true. They write S(n) in the
usual form, they also write it in reverse order, and then they add the two:

S(n) = 1 + 2 + ⋯ + (n − 1) + n
S(n) = n + (n − 1) + ⋯ + 2 + 1
2S(n) = (n + 1) + (n + 1) + ⋯ + (n + 1) + (n + 1) = n(n + 1)    (n terms)

Hence S(n) = n(n + 1)/2.
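The formula (and the pairing trick behind the proof) is easy to check by machine; here is a minimal Python sketch (the function name is ours):

```python
# Check S(n) = n(n+1)/2 against a direct term-by-term sum.
def gauss_sum(n):
    """Closed-form sum of the first n natural numbers."""
    return n * (n + 1) // 2

for n in range(1, 101):
    direct = sum(range(1, n + 1))   # 1 + 2 + ... + n
    assert direct == gauss_sum(n)

print(gauss_sum(100))  # → 5050
```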
That little narrative is a (humbling) example of the mathematician’s art: asking simple and
elegant questions about their imaginary abstract objects, and crafting satisfying and beautiful
explanations. Now, how did mathematicians know to write S(n) in reverse order and add the
two? How does a painter know where to put his brush? Experience, inspiration, trial and error,
luck. That is the art of it. There is no systematic approach to maths problems. And that’s why it
is interesting; if we do the same thing over and over again, we get bored. In mathematics, you
won’t get bored.
All high school students know that in mathematics we have different territories: algebra,
geometry, analysis, combinatorics, probability and so on. What they usually do not know is that
there are connections between different branches of mathematics. Quite often it is a connection
that we least expect. To illustrate the idea, let us play with circles and see what we can get. Here
is the game and the question: roll a circle with a marked point around another circle of the same
radius, this point traces a curve. What is the shape of this curve? In Fig. 1.1a we rolled the orange
circle around the red circle and we get a beautiful heart-shaped curve, which is called a cardioid.
This beautiful heart-shaped curve shows up in some of the most unexpected places.
Got your coffee? Turn on the flashlight feature of your phone and shine the light into the cup
from the side. The light reflects off the sides of the cup and forms a caustic on the surface of the
coffee. This caustic is a cardioid (Fig. 1.1b). Super interesting, isn’t it ?
They can do this because of the commutative property of addition: Changing the order of addends does not
change the sum.
For more detail check https://divisbyzero.com/2018/04/02/i-heart-cardioids/.
So far, the cardioid appears in geometry and in real life. Where else? How about the times table?
We all know that 2 × 1 = 2, 2 × 2 = 4, 2 × 3 = 6 and so on. Let’s describe this geometrically
and a cardioid will show up! Begin with a circle (of any radius) and mark a certain number
(designated symbolically by N) of evenly spaced points around the circle, and number them
consecutively starting from zero: 0, 1, 2, …, N − 1. Then for each n, draw a line between points
n and 2n mod N. For example, for N = 10, connect 1 to 2, 2 to 4, 3 to 6, 4 to 8, 5 to 0 (this is
similar to a clock: after 12 hours the hour hand returns to where it was pointing), 6 to 2, 7 to
4, 8 to 6, and 9 to 8. Fig. 1.2 shows the results for N = 10, 20, 200, respectively. The envelope of these
lines is a cardioid, clearly visible for large N.
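The construction is also easy to program. A minimal Python sketch (the function name and the unit-circle coordinates are our choices; plotting is left out) that computes the chord endpoints:

```python
import math

def times_table_chords(N, k=2):
    """For each point n on the circle, return the chord joining point n
    to point k*n mod N, as a pair of (x, y) endpoints on the unit circle."""
    def point(i):
        theta = 2 * math.pi * i / N   # N evenly spaced points, starting at angle 0
        return (math.cos(theta), math.sin(theta))
    return [(point(n), point((k * n) % N)) for n in range(N)]

# For N = 10: point 5 connects to 2*5 mod 10 = 0, point 6 to 2, and so on.
chords = times_table_chords(10)
```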
Let’s enjoy another unexpected connection in mathematics. The five most important numbers
in mathematics are 0 and 1 (the foundations of arithmetic); π = 3.14159…, which is the
most important number in geometry; e = 2.71828…, which is the most important number in
calculus; and the imaginary number i, with i² = −1. And they are connected via the following
simple relation:

e^{iπ} + 1 = 0

which is known as Euler’s identity, and it is often called the most beautiful equation in mathematics! Why
is an equation considered beautiful? Because the pursuit of beauty in pure mathematics is a
tenet. Neuroscientists in Great Britain discovered that the same part of the brain that is activated
by art and music was activated in the brains of mathematicians when they looked at math they
regarded as beautiful.
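We can even ask a computer to confirm the identity numerically; a tiny Python check using the standard cmath module:

```python
import cmath

# e^{i*pi} + 1 should be 0, up to floating-point rounding.
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))  # a number on the order of 1e-16
```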
We think these unexpected connections are sufficient for many people to spend time playing
with mathematics. People who do mathematics just for fun are called pure mathematicians. To
get an insight into the mind of a working pure mathematician, there is probably no book better
than Hardy’s essay A Mathematician’s Apology. In this essay Hardy offers a defense of the
pursuit of mathematics. Central to Hardy’s "apology" is an argument that mathematics has value
independent of possible applications. He located this value in the beauty of mathematics.
Below is a mathematical joke that reflects well on how mathematicians think of their field:
But, if you are pragmatic, you will only learn something if it is useful. Mathematics is super
useful. With it, physicists unveil the secrets of our universe; engineers build incredible machines
and structures; biologists study the geometry, topology and other physical characteristics of DNA,
proteins and cellular structures. The list goes on. People who do mathematics with applications
in mind are called applied mathematicians.
And a final note on the usefulness of mathematics. In the 1800s, mathematicians worked on
wave equations for fun. And in 1864, James Clerk Maxwell, a Scottish physicist, used them to
predict the existence of electromagnetic waves. In 1888, Heinrich Rudolf Hertz, a German physicist,
confirmed Maxwell’s predictions experimentally, and in 1896, Guglielmo Giovanni Marconi, an
Italian electrical engineer, made the first radio transmission.
Is the above story of radio waves unique? Of course not. We can cite the story of differential
geometry (a mathematical discipline that uses the techniques of differential calculus, integral
calculus, linear algebra and multilinear algebra to study problems in geometry) by the German
mathematician Georg Friedrich Bernhard Riemann in the 19th century, which was used later by
the German-born theoretical physicist Albert Einstein in the 20th century to develop his general
relativity theory. And the Greeks studied the ellipse more than a millennium before Kepler used
their ideas to predict planetary motions.
The Italian physicist, mathematician, astronomer, and philosopher Galileo Galilei once wrote:
Philosophy [nature] is written in that great book which ever is before our eyes – I
mean the universe – but we cannot understand it if we do not first learn the language
and grasp the symbols in which it is written. The book is written in mathematical
language, and the symbols are triangles, circles and other geometrical figures, with-
out whose help it is impossible to comprehend a single word of it; without which
one wanders in vain through a dark labyrinth.
And if you think mathematics is dry, we hope that Fig. 1.3 will change your mind. These
images are Newton fractals obtained from the equation f(z) = z⁴ − 1 = 0 of a single complex
variable. There are four roots, corresponding to the four colors in the images. A
grid of 200 × 200 points on the complex plane is used as initial guesses in Newton's method for
finding the solutions to f(z) = 0. The points are colored according to the color of the root they
converge to. Refer to Section 4.5.4 for details.
And to whoever said mathematicians are boring: please look at Fig. 1.4. And at Fig. 1.5, where we
start with an equilateral triangle. Subdivide it into four smaller congruent equilateral triangles
and remove the central one. Repeat this with each of the remaining smaller triangles,
infinitely. What we obtain is the Sierpiński triangle.
Let’s now play the “chaos game” and we shall meet Sierpiński triangles again. The process is
simple: (1) Draw an equilateral triangle on a piece of paper and draw a random initial point, (2)
The Polish mathematician Wacław Sierpiński (1882–1969) described the Sierpiński triangle in 1915. But
similar patterns already appeared in the 13th-century Cosmati mosaics in the cathedral of Anagni, Italy.
(a) Un-zoomed image. (b) Zoomed-in image.
Draw the next point midway to one of the vertices of the triangle, chosen randomly, (3) Repeat
step 2 ad infinitum. What is amazing is that when the number of points is large, a pattern emerges,
and it is nothing but the Sierpiński triangle (Fig. 1.6)! If you are interested in making these stunning
images (and those in Fig. 1.7), check Appendix B.11.
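The three steps of the chaos game fit in a few lines of code. The book's figures were made with Processing; here is an illustrative Python sketch (function name and triangle coordinates are our choices):

```python
import random

def chaos_game(n_points, seed=0):
    """Play the chaos game inside an equilateral triangle and
    return the list of generated points."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    x, y = random.random(), random.random()   # (1) a random initial point
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)      # (2) pick a vertex at random...
        x, y = (x + vx) / 2, (y + vy) / 2     # ...and move halfway towards it
        points.append((x, y))                 # (3) repeat
    return points

pts = chaos_game(10000)  # plot these and the Sierpiński triangle appears
```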
To know what mathematics is, there is no better way than to see how mathematicians think
and act. And for that I think mathematical jokes are one good way. Mathematicians Andrej and
Elena Cherkaev from the University of Utah have provided a collection of these jokes at Mathemat-
ical humor, and I use the following one
With just pen and paper and reasoning, mathematics can help us uncover hidden secrets of
many things, from giant objects such as planets to minuscule objects such as bacteria and
everything in between. Let’s study this fascinating language; the language of our universe.
Hey, but what if someone does not want to become an engineer or scientist, does he/she
still have to learn mathematics? We believe he/she should, for the following reasons.
According to Greek, mathematics is learning, and according to Hebrew, it is thinking. So to learn
mathematics is to learn how to think, how to reason, logically. René Descartes once said “I think,
therefore I am”.
Before delving into the world of mathematics, we first need to get familiar with some common
terminology: terms such as axioms, theorems, definitions and proofs. The next section covers
those topics.
Unlike scientists and engineers, who study real things in our physical world and are therefore
restricted by the laws of nature, mathematicians study objects such as numbers and functions,
which live in a mathematical world. Thus, mathematicians have more freedom.
Next come theorems. A theorem is a statement about properties of one or more objects.
One can have this theorem regarding even functions: ‘If f(x) is an even function, then its
derivative is an odd function’. We need to provide a mathematical proof for a mathematical
statement to become a theorem.
The word "proof" comes from the Latin probare (to test). The development of mathematical
proof is primarily the product of ancient Greek mathematics, and one of its greatest achievements.
Thales and Hippocrates of Chios gave some of the first known proofs of theorems in geometry.
Mathematical proof was revolutionized by Euclid (300 BCE), who introduced the axiomatic
method still in use today. Starting with axioms, the method proves theorems using deductive
logic: if A is true, and A implies B, then B is true. Or “All men smoke weed; Sherlock Holmes
is a man; therefore, Sherlock Holmes smokes weed”.
As a demonstration of mathematical proofs, let’s consider the following problem. Given
a ≥ b ≥ c ≥ 0 and a + b + c ≤ 1, prove that a² + 3b² + 5c² ≤ 1.

Proof. We first rewrite the term a² + 3b² + 5c² as (why? how do we know to do this step?)

a² + 3b² + 5c² = a² + b² + c² + 2b² + 2c² + 2c²

Then, using the data that a ≥ b ≥ c ≥ 0, we know that 2b² = 2b·b ≤ 2ab, 2c² ≤ 2bc and 2c² ≤ 2ca, thus

a² + 3b² + 5c² ≤ a² + b² + c² + 2ab + 2bc + 2ca

Now, we recognize that the RHS is nothing but (a + b + c)², because of the well-known identity
(a + b + c)² = a² + b² + c² + 2ab + 2bc + 2ca. Thus, we have

a² + 3b² + 5c² ≤ (a + b + c)²

And if we combine this with the data that a + b + c ≤ 1, we have proved the statement.
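A skeptical reader can also test the inequality numerically before proving it; a quick Python sanity check (the helper name is ours):

```python
import random

def lhs(a, b, c):
    """Left-hand side a^2 + 3b^2 + 5c^2 of the inequality."""
    return a**2 + 3*b**2 + 5*c**2

# Random sanity check: for a >= b >= c >= 0 with a + b + c <= 1,
# the value should never exceed 1 (equality holds at a = b = c = 1/3).
random.seed(1)
for _ in range(100_000):
    a, b, c = sorted((random.random(), random.random(), random.random()), reverse=True)
    t = random.random() / (a + b + c)   # scale so that a + b + c <= 1
    a, b, c = a * t, b * t, c * t
    assert lhs(a, b, c) <= 1 + 1e-12
```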
To indicate the end of a proof several symbolic conventions exist. While some authors
still use the classical abbreviation Q.E.D., which is an initialism of the Latin phrase quod erat
demonstrandum, meaning "which was to be demonstrated", it is relatively uncommon in modern
mathematical texts. Paul Halmos pioneered the use of a solid black square at the end of a proof
as a Q.E.D. symbol, a practice which has become standard (and is followed in this text), although
not universal.
The proof is simple because this is a problem for grade 7/8 students. But how about a proof
with shapes? See Fig. 1.8 for such a geometry-based proof. Essentially this geometry-based
proof is similar to the previous one, but everyone would agree it is easier to understand. We
recommend the book Proofs without words by Roger Nelsen [42] for such elegant proofs. (The
number in brackets refers to the number of the book quoted in the Bibliography at the end of the
book).
We present another problem. Let’s take the case of a triangle inside a semicircle. If we play
with it long enough, we will see one remarkable thing: no matter where on the circle we place
the tip of the triangle, it always forms a nice right triangle (Fig. 1.9a). But is it true? We need a
proof. In the same figure, we present a proof commonly given in high school geometry classes. A
complete proof would be more verbose than what we present here. Does a better (more elegant)
proof exist? See Fig. 1.9b: ABCC′ is a rectangle, and thus ABC is a right triangle!
Not all proofs are as simple as the above ones. For example, in number theory, Fermat’s Last
Theorem states that no three positive integers a, b, and c satisfy the equation aⁿ + bⁿ = cⁿ
for any integer value of n greater than 2. This theorem was first stated as a theorem by Pierre
de Fermat around 1637 in the margin of a copy of Arithmetica; Fermat added that he had a
The expression on the right side of the "=" sign is the right side of the equation and the expression on the left
of the "=" is the left side of the equation. For example, in x + 5 = y + 8, x + 5 is the left-hand side (LHS) and
y + 8 is the right-hand side (RHS).
Figure 1.9: The angle inscribed in a semicircle is always a right angle (90°).
proof that was too large to fit in the margin. After 358 years of effort by countless
mathematicians, the first successful proof was released only recently, in 1994, by Andrew
Wiles (born 1953), an English mathematician. Wiles’ proof is 192 pages long.
Proofs are what separate mathematics from all other sciences. In other sciences, we accept
certain laws because they conform to the real physical world, but those laws can be modified if
new evidence presents itself. One famous example is Newton’s theory of gravity, which was superseded
by Einstein’s theory of general relativity. But in mathematics, if a statement is proved to be true,
then it is true forever. For instance, Euclid proved, over two thousand years ago, that there are
infinitely many prime numbers, and there is nothing that we can do that will ever contradict the
truth of that statement.
In mathematics, a conjecture is a conclusion or a proposition which is suspected to be true
due to preliminary supporting evidence, but for which no proof or disproof has yet been found.
For example, on 7 June 1742, the German mathematician Christian Goldbach wrote a letter to
Leonhard Euler in which he proposed the following conjecture: every even integer greater than 2 can
be written as the sum of two primes. Sounds true: 8 = 5 + 3, 24 = 19 + 5, 64 = 23 + 41, and no
one has yet found an even number for which this statement does not work out. Thus, it became
Goldbach’s conjecture and is one of the oldest and best-known unsolved problems in number
theory and all of mathematics.
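Checking the conjecture for small even numbers is a pleasant exercise; a brute-force Python sketch (function names are ours):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample among small even numbers -- consistent with the conjecture.
assert all(goldbach_pair(n) is not None for n in range(4, 2000, 2))
```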
might not be able to finish it). For example, a typical exercise is using the formula
x = (−b ± √(b² − 4ac))/(2a) to solve a given quadratic equation. Now consider the following equation:

x⁴ − 3x³ + 4x² − 3x + 1 = 0    (1.3.1)
How can we solve this equation? There is no formula for x. After many attempts, we have found
that dividing this equation by x² is a correct direction (actually this was used by Lagrange some
200 years ago):

(x² + 1/x²) − 3(x + 1/x) + 4 = 0    (1.3.2)

Due to the symmetry, we do a change of variable u = x + 1/x; since x² + 1/x² = u² − 2, we obtain

u² − 3u + 2 = 0  ⇒  u = 1, u = 2    (1.3.3)

If we allow only real solutions, then with u = 2, we have x + 1/x = 2, which gives x = 1.
Can we check the result? Substituting x = 1 into the LHS of Eq. (1.3.1) indeed yields
zero;
Can we guess the result? Can we solve it differently? We can, by trial and error, see that
x = 1 is a solution and factor the LHS as (x − 1)(x³ − 2x² + 2x − 1), and proceed from
there.
Can we use the method for some other problem? Yes, we can use the same technique for
equations of the form ax⁴ + bx³ + cx² + bx + a = 0.
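The same technique can be automated. A Python sketch (function name is ours) that solves ax⁴ + bx³ + cx² + bx + a = 0 through the substitution u = x + 1/x, which reduces it to a(u² − 2) + bu + c = 0:

```python
import cmath

def solve_palindromic_quartic(a, b, c):
    """Solve a*x^4 + b*x^3 + c*x^2 + b*x + a = 0 via u = x + 1/x."""
    roots = []
    # Quadratic in u: a*u^2 + b*u + (c - 2a) = 0
    disc = cmath.sqrt(b * b - 4 * a * (c - 2 * a))
    for u in ((-b + disc) / (2 * a), (-b - disc) / (2 * a)):
        # Recover x from x + 1/x = u, i.e. x^2 - u*x + 1 = 0
        d = cmath.sqrt(u * u - 4)
        roots.extend([(u + d) / 2, (u - d) / 2])
    return roots

# Equation (1.3.1): x^4 - 3x^3 + 4x^2 - 3x + 1 = 0, i.e. a=1, b=-3, c=4
roots = solve_palindromic_quartic(1, -3, 4)
```

(The roots come out as complex numbers; x = 1 appears twice, plus a pair of complex conjugates.)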
This step of looking back is actually similar to reflection in our lives. We all know that once in a
while we should stop doing what we are supposed to do and think about what we have done.
Another useful strategy is to get familiar with the problem before solving it. For example,
consider these two simultaneous equations:
There is a routine method for solving such equations, which we do not bother you with here. What
we want to say here is that if we’re asked to solve the following equations by hand, should we
just apply that routine method?
No, we leave that for computers. We’re better. Let’s spend time with the problem first, and we
see something special now:
We see a symmetry in the coefficients of the equations. This guides us to perform operations that
preserve this symmetry: if we sum the two equations we get x + y = … And if we subtract the
first from the second we get x − y = … (we can do the reverse to get y − x = …). Now, the
problem is very easy to solve.
As another example of exploiting the symmetry of a problem, consider this geometry prob-
lem: a square is inscribed in a circle that is inscribed in a square. Find the ratio of the area of
the smaller square over that of the large square. We can introduce symbols to the problem and
use the Pythagorean theorem to solve this problem (Fig. 1.10a). But we can also use symmetry:
if we rotate the smaller square 45 degrees with respect to the center of the circle, we get a new
problem shown in Fig. 1.10b. And it is obvious that the ratio we’re looking for is 1=2.
For problem solving skills, we recommend reading Pólya’s book and the book by Paul Zeitz
[58]. The latter contains more examples at a higher level than Pólya’s book. Another book is
‘Solving Mathematical Problems: A Personal Perspective’ by the Australian-American mathemati-
cian Terence Tao (born 1975). He is widely regarded as one of the greatest living mathematicians. If
you want to learn ‘advanced’ mathematics, his blog is worth checking.
small code shown in Fig. 1.12b to produce the table shown in Fig. 1.12a. The data shown in this
table clearly indicates that the geometric series does converge and its sum is 1.
Third, when it comes to applied mathematics, computers are an invaluable tool. In applied
mathematics, problems are not solved exactly by hand, but approximately using algorithms
which are tedious for hand calculations but suitable for computers. To illustrate what
applied mathematics is about, let’s solve the equation f(x) = cos x − x = 0; i.e., find all
values of x such that f(x) = 0. Hey, there is no formula similar to x = (−b ± √(b² − 4ac))/(2a)
for this equation. That’s why Newton developed a method to get approximate solutions. Starting from an initial
guess x₀, his method iteratively generates better approximations:

x_{n+1} = x_n + (cos x_n − x_n)/(1 + sin x_n)

With only four such iterations, we get x = 0.73908513, which is indeed the solution to
cos x − x = 0.
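Newton's iteration above is a few lines of code; a Python sketch (the function name and the starting guess x₀ = 1 are our choices):

```python
import math

def newton_cos(x0=1.0, steps=4):
    """Newton's method for f(x) = cos(x) - x, using f'(x) = -sin(x) - 1,
    so that x_{n+1} = x_n + (cos x_n - x_n) / (1 + sin x_n)."""
    x = x0
    for _ in range(steps):
        x = x + (math.cos(x) - x) / (1 + math.sin(x))
    return x

root = newton_cos()
print(round(root, 8))  # → 0.73908513
```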
And finally, computers are used to build amazing animations to explain mathematics; see
for example this YouTube video. Among various open source tools to create such animations,
Processing is an easy-to-use tool based on Java, a common programming language. Figs. 1.2,
1.5 and 1.6 were made using Processing.
I have introduced two tools for programming, namely Julia and Processing. This is be-
cause the latter is better suited for making animations while the former is for scientific computing.
For the role of computers in doing mathematics, I refer to the great book Mathematics by
Experiment: Plausible Reasoning in the 21st Century by Jonathan Borwein and David Bailey
[6].
But if you think that computers can replace mathematicians, you are wrong. Even for arith-
metic problems, computers are not better than humans. One example is the computation of a sum
like this (containing 10¹² terms):

S = 1/1 + 1/4 + 1/9 + ⋯ + 1/10²⁴
Even though a powerful computer can compute this sum by adding term by term, it takes
a long time (on my MacBook Pro, Julia crashed when computing this sum!). The result is
S = 1.6449340668482264. Mathematicians developed smarter ways to compute this sum; for
example, this is how Euler computed it in the 18th century:

S ≈ 1/1 + 1/4 + 1/9 + 1/16 + 1/25 + 1/36 + 1/49 + 1/64 + 1/81
  + 1/10 + 1/200 + 1/6000 − 1/(3 × 10⁶)

a sum of only 13 terms, and got 1.644934064499874, a result which is correct up to eight deci-
mals! The story is that while solving the Basel problem (i.e., what is S = 1 + 1/4 + 1/9 + 1/16 +
Available for free at https://processing.org.
And this number is exactly π²/6. Why is π here? It’s super interesting, isn’t it? Check this YouTube video for
an explanation.
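Euler's 13-term shortcut is easy to reproduce; a Python check against π²/6 (variable names are ours):

```python
import math

# Euler's shortcut for S = sum of 1/k^2: the first nine terms plus
# correction terms for the tail starting at k = 10.
head = sum(1 / k**2 for k in range(1, 10))
tail = 1/10 + 1/200 + 1/6000 - 1/(3 * 10**6)
approx = head + tail

print(approx)          # ≈ 1.6449340645, correct to eight decimals
print(math.pi**2 / 6)  # 1.6449340668482264
```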
Considering how many fools can calculate, it is surprising that it should be thought
either a difficult or a tedious task for any other fool to learn how to master the same
tricks. Some calculus-tricks are quite easy. Some are enormously difficult. The fools
who write the textbooks of advanced mathematics — and they are mostly clever
fools — seldom take the trouble to show you how easy the easy calculations are. On
the contrary, they seem to desire to impress you with their tremendous cleverness
by going about it in the most difficult way. Being myself a remarkably stupid fellow,
I have had to unteach myself the difficulties, and now beg to present to my fellow
fools the parts that are not hard. Master these thoroughly, and the rest will follow.
What one fool can do, another can.
Talking about teachers, Nobel-winning physicist Richard Feynman once said "If you find science
boring, you are learning it from the wrong teacher" to emphasize that with a good teacher
you can learn any topic.
Let me get back to those kids who thought they fell behind the math curriculum. What
should you do? I have some tips for you. First, read A Mathematician’s Lament by Paul Lockhart.
After you have finished that book, you will be confident that, if you learn properly, you can enjoy
mathematics. Second, spend lots of time (I spent one summer when I fell behind in the 9th grade)
to learn maths from scratch. Lockhart’s other books (see appendix A) will surely help. And this
book (Chapters 1/2/3 and Appendices A/B) could be useful.
Ok. What one fool can do, another can. What a simple sentence, but it has a tremendous
impact on people who come across it. It has motivated many people to start learning calculus, including
Feynman. And we can start learning maths with it.
The nontrivial zeroes of the zeta function lie on the line Re s = 0.5

Yes, the problem statement is that simple, but its proof has eluded all mathematicians to date.
In 1900, at the International Congress of Mathematicians in Paris, the German mathematician
David Hilbert gave a speech which is perhaps the most influential speech ever given to math-
ematicians, given by a mathematician, or given about mathematics. In it, Hilbert outlined 23
major mathematical problems to be studied in the coming century. And the Riemann hypothesis
was one of them. Hilbert once remarked:
If you’re in the middle of a semester, then spend less time on other topics. You cannot have everything!
If I were to awaken after having slept for a thousand years, my first question would
be: Has the Riemann hypothesis been proven?
Judging by the current rate of progress (on solving the hypothesis), Hilbert may well have to
sleep a little while longer.
It is usually while solving unsolved mathematical problems that mathematicians discover
new mathematics. The new maths also helps to understand the old maths and provides better
solutions to old problems. Some new maths is also discovered by scientists, especially physicists,
while they are trying to unravel the mysteries of our universe. Then, after about 100 or 200 years,
some of the new maths comes into the mathematics curriculum to train the general public.
pages. This is unavoidable as calculus deals with complex problems. But it mainly concerns the
two big concepts: the derivative (f′(x) and those dy, dx) and the integral (∫ₐᵇ f(x) dx).
Chapter 5 presents a short introduction to the mathematical theory of probability. Probability
theory started when mathematicians turned their attention to games of chance (e.g. dice rolling).
Nowadays it is used widely in areas of study such as statistics, mathematics, science, finance,
gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy
to, for example, draw inferences about the expected frequency of events. Probability theory
is also used to describe the underlying mechanics and regularities of complex systems.
Chapter 6 discusses some topics of statistics. Topics include least squares, Markov chains,
After calculus of functions of a single variable comes calculus of functions of multiple variables
(Chapter 7). There are two types of such functions: scalar-valued multivariate functions and
vector-valued multivariate functions. An example of the former is T = g(x, y, z),
which represents the temperature at a point in the Earth. An example of the latter is the velocity
of a fluid particle. We first introduce vectors and vector algebra (rules to do arithmetic with
vectors). Certainly dot product and vector product are the two most important concepts in vector
algebra. Then I present the calculus of these two families of functions. For the former, we will
have partial derivatives and double/triple integrals. The calculus of vector-valued functions is
called vector calculus, which was first developed for the study of electromagnetism. Vector
calculus then finds applications in many problems: fluid mechanics, solid mechanics, etc. In
vector calculus, we will meet divergence, curl, line integrals and Gauss’s theorem.
In Chapter 8, I discuss what is probably the most important application of calculus: differen-
tial equations. These equations are those that describe many physical laws. The attention is on
how to derive these equations more than on how to solve them. Derivations of the heat equation
∂u/∂t = α²(∂²u/∂x²), the wave equation ∂²u/∂t² = c²(∂²u/∂x²), etc. are presented. Also discussed is the problem
of mechanical vibrations.
I then discuss in Chapter 9 the calculus of variations, which is a branch of mathematics that
allows us to find a function y = f(x) that minimizes a functional I = ∫ₐᵇ G(y, y′, y″, x) dx.
For example, it provides answers to questions like ‘what is the plane curve enclosing the maximum
area for a given perimeter’. You might have correctly guessed the answer: in the absence of any
restriction on the shape, the curve is a circle. But the calculus of variations provides a proof and
more. One notable result of variational calculus is variational methods such as the Ritz-Galerkin
method, which led to the finite element method. The finite element method is a popular method
for numerically solving differential equations arising in engineering and mathematical modeling.
Typical problem areas of applications include structural analysis, heat transfer, fluid flow, mass
transport, and electromagnetic potential.
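As a quick numerical illustration of the isoperimetric statement above (a sketch only; the perimeter P = 4 is an arbitrary choice of mine), we can compare the areas enclosed by a circle, a square and an equilateral triangle of equal perimeter:

```python
import math

P = 4.0  # a fixed perimeter, chosen arbitrarily for the comparison

# Areas of three shapes sharing the same perimeter P:
area_circle = P**2 / (4 * math.pi)        # circle: r = P/(2*pi), area = pi*r^2
area_square = (P / 4)**2                  # square: side = P/4
area_triangle = math.sqrt(3) / 36 * P**2  # equilateral triangle: side = P/3

# The circle encloses the most area, as the calculus of variations predicts.
print(area_circle, area_square, area_triangle)
```

Of course, checking three shapes is not a proof; the calculus of variations shows the circle beats *every* admissible curve.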
Chapter 10 is about linear algebra. Linear algebra is central to almost all areas of mathematics.
Linear algebra is also used in most sciences and fields of engineering. Thus, it occupies a vital
part in the university curriculum. Linear algebra is all about matrices, vector spaces, systems of
linear equations, eigenvectors, you name it. It is common that a student of linear algebra can
do the computations (e.g. compute the determinant or the eigenvectors of a matrix) but
usually does not know the why and the what. This chapter hopefully provides some answers to
these questions.
Chapter 11 is all about numerical methods: how to compute a definite integral numerically,
how to interpolate given data, how to solve an ordinary differential equation numerically and
approximately. The basic idea is to use the power of computers to find approximate solutions to
mathematical problems. This is how Katherine Johnson, the main character in the movie Hidden
Figures, helped put a man on the moon. She used Euler's method (a numerical method discussed
in this chapter) to calculate the necessary trajectory from the earth to the moon for
the US Apollo space program. Except that she did it by hand.
The book also contains two appendices. In appendix A I present a reading list of books that I
have enjoyed and learned very much from. I also present a list of actionable advice
on how to learn mathematics. You could probably start reading this appendix first. In appendix
B I present some Julia codes that are used in the main text. The idea is to introduce young
students to programming as early as possible.
When we listen to a song or look at a painting we enjoy the song or the painting
much more if we know just a bit about the author and the story behind the work. In the same
manner, mathematical theorems are poems written by mathematicians, who are human beings.
Behind the mathematics are the stories. To enjoy their poems we should know their stories.
The correspondence between Ramanujan, a 23-year-old Indian clerk on a salary of only £20
per annum, and Hardy, a world-renowned British mathematician at Cambridge, is a touching story.
Or the story of the life of Galois, who said these final words: Ne pleure pas, Alfred ! J'ai
besoin de tout mon courage pour mourir à vingt ans (Don't cry, Alfred! I need all my courage
to die at twenty) to his brother Alfred after being fatally wounded in a duel. His mathematical
legacy, Galois theory and group theory, two major branches of abstract algebra, remains with us
forever. Because of this, biographies and some stories of leading mathematicians
are provided in the book. But I am not a historian. Thus, I recommend that readers consult the MacTutor History
of Mathematics Archive. MacTutor is a free online resource containing biographies of nearly
3000 mathematicians and over 2000 pages of essays and supporting materials.
How should this book be read? For those who do not know where to start, this is how you could read
this book. Start with appendix A to get familiar with some learning tips. Then proceed
with Chapters 2, 3 and 4. That covers more than the high school curriculum. If
you're interested in using the maths to do some science projects, check out Chapter 11, where
you will find techniques (easy to understand and program) to solve simple harmonic problems
(spring-mass or pendulum) and N-body problems (e.g. the Sun-Earth problem, the Sun-Earth-Moon
problem). If you get up to there (and I do not see why you cannot), then feel free to explore the
rest of the book.
Conventions. Equations, figures, tables and theorems are numbered consecutively within each section.
For instance, when we're working in Section 2.2, the fourth equation is numbered (2.2.4),
and this equation is referred to as Equation (2.2.4) in the text. The same convention is used for
figures and tables. I include many code snippets in the appendix, and the numbering convention
is as follows: Listing B.5 refers to the fifth code snippet in Appendix B. Asterisks
(*), daggers (†) and similar symbols indicate footnotes.
Without further ado, let’s get started and learn maths in the spirit of Richard Feynman:
I wonder why. I wonder why
Because a curious mind can lead us far. After all, you see, millions saw the apple fall, but only
Newton asked why.
Contents
2.1 Natural numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Integer numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3 Playing with natural numbers . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 If and only if: conditional statements . . . . . . . . . . . . . . . . . . . . 36
2.5 Sums of whole numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6 Prime numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7 Rational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.8 Irrational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.9 Fibonacci numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.10 Continued fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.11 Pythagoras theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.12 Imaginary number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.13 Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.14 Word problems and system of linear equations . . . . . . . . . . . . . . . 83
2.15 System of nonlinear equations . . . . . . . . . . . . . . . . . . . . . . . . 88
2.16 Algebraic and transcendental equations . . . . . . . . . . . . . . . . . . 91
2.17 Powers of 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.18 Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.19 Sequences, convergence and limit . . . . . . . . . . . . . . . . . . . . . . 110
2.20 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
2.21 Inverse operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
2.22 Logarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Chapter 2. Algebra
Algebra is one of the broad parts of mathematics, together with number theory, geometry
and analysis. In its most general form, algebra is the study of mathematical symbols and the
rules for manipulating these symbols; it is a unifying thread of almost all of mathematics. It
includes everything from elementary equation solving to the study of abstractions such as groups,
rings, and fields. Elementary algebra is generally considered to be essential for any study of
mathematics, science, or engineering, as well as such applications as medicine and economics.
This chapter discusses some topics of elementary algebra. By elementary we mean the
algebra in which the commutative rule of multiplication a × b = b × a holds. There exist other
algebras which violate this rule. There is also matrix algebra, which deals with groups of numbers
(called matrices) instead of single numbers.
Our starting point is not the beginning of the history of mathematics; instead we start with the
concept of positive integers (or natural numbers) along with the two basic arithmetic operations
of addition and multiplication. Furthermore, we begin immediately with the decimal (also called
Hindu-Arabic, or Arabic) number system that employs 10 as the base and requires 10 different
numerals, the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. And finally, we take for granted the liberal use of
symbols such as x, y and write x(10 − x) rather than as

If a person puts such a question to you as: 'I have divided ten into two parts, and
multiplying one of these by the other the result was twenty-one;' then you know that
one of the parts is thing and the other is ten minus thing.

from al-Khwarizmi's "Algebra" (ca. 820 AD).
Our approach is reasonable given that we have a limited lifespan and thus it is impossible
to trace the entire history of mathematics.
multiplication (× or ·). Addition such as 3 + 5 = 8 (8 is called the sum) and multiplication such
as 4 × 5 = 4 · 5 = 20 (where 20 is called the product) are easy to understand.
It can be seen that the addition and multiplication operations have the following properties

3 + 5 = 5 + 3,  3 × 5 = 5 × 3,  3 × (2 + 4) = 3 × 2 + 3 × 4 (= 18)    (2.1.1)

One way to understand why 3 × 5 = 5 × 3 is to use visual representations. Fig. 2.1 provides two
such representations. For 3 × (2 + 4) = 3 × 2 + 3 × 4, see Fig. 2.2.
As there is nothing special about the numbers 3, 5, 2, 4 in Eq. (2.1.1), one can define the following
arithmetic rules for natural numbers a, b and c:
a + b = b + a,  ab = ba  (commutative rules)
(a + b) + c = a + (b + c),  (ab)c = a(bc)  (associative rules)
a(b + c) = ab + ac  (distributive rule)    (2.1.2)

(Do not memorize these rules; focus on the ideas and patterns.) We should pause and appreciate the power of these rules. For
example, the associative rule allows us to put the parentheses anywhere we like (or even omit them).
Once we have recognized how numbers behave, we can take advantage of that. For example,
to compute 571 × 36 + 571 × 64 the naive way, we need two multiplications and one addition.
Using the distributive property we can instead do 571 × (36 + 64) = 571 × 100: one addition and
one easy multiplication. That's a humbling example of the power of recognizing patterns in
mathematics. As another example, to compute the sum 1 + 278 + 99, we can use the commutative and associative rules to
proceed as (1 + 99) + 278 = 100 + 278 = 378. Note also that the distributive rule can be written
as (b + c)a = ba + ca, and this is the rule we implicitly use when we write 5a + 7a = 12a.
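The distributive shortcut can be checked directly; a one-line Python sketch:

```python
# Naive way: two multiplications and one addition.
naive = 571 * 36 + 571 * 64
# Distributive rule a*(b + c) = a*b + a*c: one addition and one easy multiplication.
smart = 571 * (36 + 64)
assert naive == smart == 57100
```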
We must note here that the introduction of a symbol (say a) to label any natural number was
a significant achievement in mathematics, made in about the 16th century. Before that, mathematicians
only worked with specific concrete numbers (e.g. 2 or 10). With symbols comes the
power of generalization: Eq. (2.1.2) covers all natural numbers in one go! Note that there are
infinitely many such numbers, and just one short equation can state a property of all of them. But if
we think deeply we see that we do this all the time in our daily life. We use "man" and "woman"
to represent any man and woman, whereas "John" and "Mary" describe one particular man and
woman!
It should be emphasized that arithmetic is not mathematics. The fact that 3 + 5 is eight is
neither important nor interesting; what is more interesting is 3 + 5 = 5 + 3. Professional mathematicians
are usually bad at arithmetic, as the following true story can testify:
With these rules, we can start doing some algebra. For example, what is the square of a + b,
which is (a + b)(a + b) (think of a square of side a + b; its area is (a + b)(a + b))?
Mathematicians are lazy, so they use the notation (a + b)² for (a + b)(a + b). For a given
integer a, its square is a² = a × a and its cube is a³ = a × a × a; they are examples of powers of
a. In Section 2.17 we will talk more about powers.
Getting back to (a + b)², we proceed as

(a + b)² = (a + b)(a + b) = a² + ab + ba + b² = a² + 2ab + b²
And a geometric proof of this is shown in Fig. 2.3. This was how ancient Greek mathematicians
thought of (a + b)²: they thought in terms of geometry, where any quadratic term of the form ab is
associated with the area of a certain shape. This way of geometric thinking is very useful, as
we will see in this book. We are against memorizing any formula (including this identity);
understanding is what matters.
Let's pause for a moment and think more about (a + b)² = a² +
2ab + b². What else does this tell us? Surprisingly a lot! We can think of
(a + b)² as a hamster (in our mathematical world). If we do not touch it
or talk to it, it does not talk back. And that's why we just see it as (a + b)².
However, when we talk to it by massaging it, it talks back by revealing its
secret: it has another name, and it is a² + 2ab + b². So, we can think of
mathematicians as magicians (but without a trick): while magicians can get
a rabbit out of an empty hat with a trick, mathematicians can too: they poke their numbers and
pop out many interesting facts.
But hey, why is knowing another name for that hamster useful? First, mathematicians, as
human beings, are curious by nature: they want to know everything about mathematical objects.
Second (probably not a very good example): the more you know about your enemy, the
better, don't you think?
In the same manner, we have the following identity (it is called an identity as it holds
for all values of a and b)

a² − b² = (a − b)(a + b)

This identity can help us, for example, in computing 100 002² − 99 998² without a calculator
nearby. Squaring and subtracting would take quite a while, but the identity is of tremendous
help: 100 002² − 99 998² = (100 002 + 99 998)(100 002 − 99 998) = 200 000 × 4 = 800 000.
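A quick Python check of this computation:

```python
a, b = 100_002, 99_998
direct = a**2 - b**2             # square, then subtract
factored = (a + b) * (a - b)     # difference-of-squares identity
assert direct == factored == 800_000
```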
Writing a² − b² as (a − b)(a + b) is called factorizing the term. In mathematics, factorization
or factoring consists of writing a number or another mathematical object as a product of several
factors, usually smaller or simpler objects of the same kind. What for? For a better understanding of
the original object. For example, 3 × 5 is a factorization of the integer 15, and (x − 2)(x + 2) is a
factorization of the polynomial x² − 4. We have more to say about factorization in Section 2.13.
And when we meet other mathematical objects (e.g. matrices) later in the book, we shall see that
mathematicians do indeed spend a significant amount of time just factoring matrices.
How about (a + b + c)²? Of course we can proceed the same way by writing this term as
[(a + b) + c]². However, there is a better way: guessing the result! Our guess is as follows

(a + b + c)² = a² + 2ab + b² + c² + 2ac + 2bc

The terms a² + 2ab + b² are the ones present when c = 0; the remaining terms follow because a, b, c play equal roles:
if there is a², there must be c², and if there is 2ab, there must also be 2ac and 2bc. By working this way we're gradually developing a feeling for
mathematics.
(a + b)(c + d) = ac + ad + bc + bd

And to help students memorize it someone invented the FOIL rule (First-Outer-Inner-Last).
We're against this way of teaching mathematics. This identity is very natural, as it
comes from the arithmetic rules given in Eq. (2.1.2). Let's denote c + d = e (the sum of
two natural numbers is a natural number), so we can write

(a + b)(c + d) = (a + b)e = ae + be = a(c + d) + b(c + d) = ac + ad + bc + bd
Abstraction and representation. As kids we were introduced to natural numbers so early that
most of the time we take them for granted. When we're old enough, we should question
them. From concrete things in life such as five trees, five fishes, five cows etc., human beings
developed the number five to represent the five-ness. This number five is an abstract entity in the
sense that we never see, hear, feel, or taste it. And yet, it has a definite existence for the rest of
our lives. Do not confuse the number five with its representation (5 in our decimal number system),
as there are many representations of a number (e.g. V in the Roman number system).
We observed a pattern (five-ness) and we created an abstract entity from it. This is called
abstraction. And this abstract entity is very powerful. While it is easy to explain a collection
of five or six objects (using your fingers), imagine how awkward it would be to explain a set of
thirty-five objects without using the number 35.
Now that we have the concept of natural numbers, how are we going to represent them? People
used dots to represent numbers; tallies were also used. But it was soon realized that all these
methods are bad at representing large numbers. Only after a long period did we develop the
decimal number system, with only 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) that can represent any number
you can imagine!
Is the decimal number system the only one? Of course not: computers use only two digits,
0 and 1. Is it true that we're comfortable with the decimal number system because we have ten
fingers? We do not know. I posed this question just to demonstrate that even for something as
simple as counting numbers, which we have taken for granted, there are many interesting aspects
to explore. A curious mind can lead us far.
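The distinction between a number and its representations is easy to demonstrate in Python (a small sketch; the number thirty-five is chosen as an example):

```python
n = 35
decimal = str(n)          # '35': the base-ten representation
binary = format(n, 'b')   # '100011': the base-two representation computers use
# Different strings, but converting either back yields the same abstract number.
assert int(decimal, 10) == int(binary, 2) == n
print(decimal, binary)
```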
History of the equal sign. The inventor of the equal sign ‘=’ was the Welsh physician and
mathematician Robert Recorde (c. 1512 – 1558). In 1557, in The Whetstone of Witte, Recorde
used two parallel lines (he used an obsolete word gemowe, meaning ‘twin’) to avoid tedious
repetition of the words ‘is equal to’. He chose that symbol because ‘no two things can be more
equal’. Recorde chose well. His symbol has remained in use for 464 years.
So far so good: addition and multiplication of natural numbers are easy. But what is more important
is this observation: adding (or multiplying) two natural numbers gives us another natural
number. Mathematicians say that the natural numbers are closed under addition and multiplication.
Why do they care about this? Because it ensures security: we never step outside of the familiar
world of natural numbers, until... it comes to subtraction. What is 3 − 5? Well, we can take
3 from 3 and we have nothing (zero). How can we take away two from nothing? It seems impossible.
Shall we only allow subtractions of the form a − b when a ≥ b (this is how mathematicians
say a is larger than or equal to b)?
4 − 1 = 3
4 − 2 = 2
4 − 3 = 1    (2.2.1)
4 − 4 = 0
4 − 5 = ?
Imagine a line on which we put zero at a certain place, and on the right of zero we
place 1, 2, 3 and so on. Now, when we do a subtraction, say 4 − 1, we start from 4 on this
line and walk one step towards zero: we end up at three. Similarly, when we do 4 − 2 we walk
two steps towards zero. Eventually we reach zero when we have walked four steps: 4 − 4 = 0. What
happens then if we walk past zero one step? That is exactly what 4 − 5 means. We would now
be at the position marked by the number one but written in red (to indicate that this position is on the left
side of zero). So, we have solved the problem. Nowadays people write −1 (read
'negative one') instead of using a different color. Thus, 4 − 5 = −1. Now we have two kinds
of numbers: the ones on the right hand side of zero (e.g. 1, 2, ...) and the ones on the left hand
side (e.g. −1, −2, ...). The former are called positive integers and the latter negative integers;
together with zero they form the so-called integers: {..., −3, −2, −1, 0, 1, 2, 3, ...}.
The number line is kind of a two-way street: starting from zero, if we go to the right we go
in the positive direction (for we see positive integers), and if we go to the left, we follow the
negative direction. For every positive integer a, we have a negative counterpart −a. We can
think of − as an operation that flips a to the other side of zero. Why do we have to start with a
positive integer (all numbers should be treated equally)? If we start with a negative number, say
−b (b > 0), then flipping it to the other side of zero gives −(−b), which is b. So we have −(−b) = b
for any integer b, positive or negative. If b > 0 you can think of this as: taking away a debt is an
asset.
While we have no problem accepting positive numbers, it is mentally hard to grasp negative
numbers. What is negative four cookies? This is because negative numbers are more abstract
than positive ones. For a long time, negative solutions to problems were considered "false". In
Hellenistic Egypt, the Greek mathematician Diophantus, in his book Arithmetica, referring
to the equation 4x + 20 = 4 (which has the negative solution −4), said that the equation was
absurd. This is because Greek mathematics was founded on geometrical ideas: a number is a
certain length or area or volume of something; thus a number is always positive.
(−1) + (−1) + (−1) = −3

as, after all, if I borrow one dollar from you each week for three weeks, then I owe you three dollars.
This immediately results in the following

(−1) × 3 := (−1) + (−1) + (−1) = −3 (= 3 × (−1))
And with that we know how to handle (−2) × 10 and so on. But what does maths have to do with
debts? Can we deduce the rules without resorting to debts, which are very negative? OK, let's
compute 5 × (3 + (−3)) in two ways. First, as 3 + (−3) = 0, we have 5 × (3 + (−3)) = 0. But
from Eq. (2.1.2), we also have (distributive rule)

5 × (3 + (−3)) = 5 × 3 + 5 × (−3) = 0 ⟹ 5 × (−3) = −15

Thus, if we insist that the usual arithmetic rules also apply to negative numbers, we deduce
a rule that is consistent with daily experience. From a mathematical viewpoint, mathematicians
always try to have a set of rules that works for as many objects as possible. They had the rules
in Eq. (2.1.2) for positive integers; now they have given birth to negative integers. To make positive
and negative integers live happily together, the negative integers must follow the same rules. (They can
have their own rules, that is fine, but they must obey the old rules.)
If you prefer thinking in geometry, the number line is very useful: (−1) + (−1) + (−1) is walking three
steps in the negative direction from zero, so we must end up at −3.
The rule is: the multiplication of a positive and a negative number yields a negative number whose numerical
value is the product of the two given numerical values. When a positive number a is multiplied by −1 it is flipped
to the other side of zero on the number line, at −a.
But how about (−1) × (−1)? One way to figure out the result is to look at the following

(−1) × 3 = −3
(−1) × 2 = −2
(−1) × 1 = −1    ⟹ (−1) × (−1) = 1
(−1) × 0 = 0
and observe that, going from top to bottom, the numbers on the right hand side increase by one.
Thus (−1) × 0 = 0 should lead to (−1) × (−1) = 0 + 1 = 1. This is certainly not a proof, for we're not sure that
the pattern will repeat; it was just one short explanation. If you are not happy with that,
then note that (−1) × (−1) = 1 is a consequence of our choice to maintain the arithmetic rules, in particular the
distributive rule, in Eq. (2.1.2):

1 + (−1) = 0 ⟹ [1 + (−1)] × (−1) = 0 ⟹ 1 × (−1) + (−1) × (−1) = 0 ⟹ (−1) × (−1) = 1
Coincidentally, it is similar to the ancient proverb: the enemy of my enemy is my friend. If you are
struggling with this, it is OK: the great Swiss mathematician Euler (whom we will meet again
and again in this book) struggled with it too.
Question 1. How many integers are there?
Now that we have the two groups of even and odd numbers, questions about their relations arise. For
instance, is there any relation between even/odd numbers? Yes, for example:
Think of concrete examples such as 2, 4 or 6, 8, and you will see this.
a|b does not mean the same thing as a/b. The latter is a number; the former is a statement about two numbers.
So, you see that after the property has been discovered, the proof might not be so difficult. Now,
we write a counting number with digits a_n, a_{n−1}, ..., a_1, a_0 and expand it as

a_n a_{n−1} ... a_1 a_0 = a_n × 10^n + a_{n−1} × 10^{n−1} + ... + a_1 × 10 + a_0
                       = a_n(10^n − 1 + 1) + a_{n−1}(10^{n−1} − 1 + 1) + ... + a_1(10 − 1 + 1) + a_0
                       = (a_n + a_{n−1} + ... + a_1 + a_0) + 9 × (a_n a_n ... a_n + a_{n−1} a_{n−1} ... a_{n−1} + ... + a_1)

where a_n a_n ... a_n denotes the digit a_n repeated n times, because

10^n − 1 = 99...9 (n nines) ⟹ a_n(10^n − 1) = a_n × 99...9 = 9 × a_n a_n ... a_n (n terms)

Therefore the number is divisible by 9 exactly when its digit sum a_n + a_{n−1} + ... + a_0 is.
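The divisibility rule just derived can be verified by brute force; a short Python sketch:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

# The derivation says: n is divisible by 9 exactly when its digit sum is.
for n in range(1, 10_000):
    assert (n % 9 == 0) == (digit_sum(n) % 9 == 0)
print("checked up to 9999")
```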
A good question is: how was the property discovered in the first place? It is simple: by
playing with numbers very carefully. For example, we all know the times table for 9. If we look
not just at the multiplication but also at the inverse, i.e. the division, we see this:

9 × 1 = 9     9 : 9 = 1
9 × 2 = 18    18 : 9 = 2
9 × 3 = 27    27 : 9 = 3
9 × 4 = 36    36 : 9 = 4
Then, by looking at the numbers 9, 18, 27, 36 and noticing that the digits of each sum to 9, the
divisibility rule for 9 was discovered. The lesson is always to look at a problem from different angles.
For example, if you see the word 'Rivers', it can be the name of a person, not just rivers.
These are only a few interesting facts about natural numbers; there are tons of other interesting
results. If you find them interesting, study them! The study of natural numbers
has earned the reputation of being the "queen of mathematics", according to Gauss, the famous German
mathematician, and many of the greatest mathematicians have devoted study to numbers. You
could become a number theorist (a mathematician who studies natural numbers), or you could
work for a bank in the field of information protection, known as "cryptography". Or you
could become an amateur mathematician like Pierre de Fermat, who was a lawyer but studied
mathematics in his free time for pleasure.
If you do not enjoy natural numbers, that is of course also totally fine. For sciences and
engineering, where real numbers are dominant, a good knowledge of number theory is not
needed. Indeed, before writing this book, I knew just a little about natural numbers and relations
between them.
One of the amazing things about pure mathematics – mathematics done for its own sake,
rather than out of an attempt to understand the "real world" – is that sometimes purely theoretical
discoveries turn out to have practical applications. This happened, for example, when the non-Euclidean
geometries described by the mathematicians Carl Gauss and Bernhard Riemann turned
out to provide a model for space and time in the theory of relativity of Albert Einstein.
Taxicab number 1729. The name is derived from a conversation in about 1919 involving
British mathematician G. H. Hardy and Indian mathematician Srinivasa Ramanujan. As
told by Hardy:
I remember once going to see him [Ramanujan] when he was lying ill at
Putney. I had ridden in taxi-cab No. 1729, and remarked that the number
seemed to be rather a dull one, and that I hoped it was not an unfavorable
omen. "No," he replied, "it is a very interesting number; it is the smallest
number expressible as the sum of two [positive] cubes in two different ways."
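Ramanujan's claim is easy to verify with a small search (a Python sketch; the bound 20 on the cube roots is an arbitrary choice of mine, large enough to include 1729):

```python
from itertools import combinations

# Count how many ways each number is a sum of two distinct positive cubes
# a^3 + b^3 with a < b.
ways = {}
for a, b in combinations(range(1, 20), 2):
    s = a**3 + b**3
    ways[s] = ways.get(s, 0) + 1

# Smallest number with at least two different representations:
taxicab = min(s for s, count in ways.items() if count >= 2)
print(taxicab)  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```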
Let's see some mathematical magic, which, unlike other kinds of magic, can be explained.
Magic numbers.
This magic trick is taken from the interesting book Alex's Adventures in Numberland by
Alex Bellos [5]. The trick is: "I ask you to name a three-digit number for which the first
and last digits differ by at least two. I then ask you to reverse that number to give you a
second number. After that, I ask you to subtract the smaller number from the larger number.
I then ask you to add this intermediary result to its reverse. The result is 1089, regardless
of the number you have chosen." For instance, if you choose 214, the reverse is 412.
Then, 412 − 214 = 198. Adding this intermediary result to its reverse gives
198 + 891, and that equals 1089.
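The whole trick can be verified exhaustively; a Python sketch (the helper name `trick` is mine):

```python
def trick(n):
    """Reverse n, subtract the smaller from the larger, then add the
    reverse of that difference."""
    reverse = int(str(n)[::-1])
    diff = abs(n - reverse)
    # The difference is a multiple of 99; with the first and last digits
    # differing by at least two it always has three digits.
    return diff + int(str(diff).zfill(3)[::-1])

# Check every three-digit number whose first and last digits differ by >= 2.
for n in range(100, 1000):
    if abs(n // 100 - n % 10) >= 2:
        assert trick(n) == 1089
print("all cases give 1089")
```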
there is at least one even number among those integers. Using this fact, the problem now amounts to proving
that among the integers a_1 − 1, a_2 − 2, a_3 − 3, ..., a_n − n there is at least one even number. We
have transformed the given problem into an easier one: instead of dealing with a product of
numbers which we do not know, we now just need to find one even number.
Let's make the problem concrete so that it is easier to deal with. We consider the case n = 5.
We have to prove that among the numbers

a_1 − 1, a_2 − 2, a_3 − 3, a_4 − 4, a_5 − 5

there exists at least one even number. Proving this directly is hard (because it is not clear which one is
even), so we transform the problem into proving that it is impossible for all those numbers to be
odd. If we can prove that, then at least one of them is even. This technique is called proof by
contradiction.
If we assume that all the numbers a_1 − 1, a_2 − 2, a_3 − 3, a_4 − 4, a_5 − 5 are odd, we get that a_1 is
even, a_2 is odd, a_3 is even, a_4 is odd and a_5 is even. Thus, there are three even numbers and two
odd ones. But in 1, 2, 3, 4, 5 there are two evens and three odds! We arrive at a contradiction, thus
our assumption is wrong. We have proved the claim, at least for n = 5.
Nothing is special about 5; the same argument works for 7, 9, ... Indeed, 1, 2, 3, ..., n starts
with 1, an odd number, and thus contains more odd numbers than even ones. But a_1 − 1,
a_2 − 2, ..., a_n − n starts with an even number, and hence would have more evens than odds.
It was a good proof, but what do you think of the following one? Even though the problem
concerns a product, let's consider the sum of a_1 − 1, a_2 − 2, ..., a_n − n:

S = (a_1 − 1) + (a_2 − 2) + (a_3 − 3) + ... + (a_n − n)
  = (a_1 + a_2 + ... + a_n) − (1 + 2 + ... + n)

Why bother with this sum? Because it is zero whatever the values of a_1, a_2, ... Now, the fact that the sum
of an odd number of integers is zero (an even number) leads to the conclusion that one of the
numbers must be even. (Otherwise, the sum would be odd; think of 3 + 5 + 7, which is odd.)
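For n = 5 the claim can also be checked exhaustively by machine; a Python sketch:

```python
from itertools import permutations

# For every rearrangement a1,...,a5 of 1,...,5, the product
# (a1 - 1)(a2 - 2)(a3 - 3)(a4 - 4)(a5 - 5) is even.
for perm in permutations(range(1, 6)):
    product = 1
    for i, a in enumerate(perm, start=1):
        product *= a - i
    assert product % 2 == 0
print("verified for all 120 permutations of 1..5")
```

Such a brute-force check is not a proof for general n, but it builds confidence before one looks for the argument above.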
Why did mathematicians know to look at the sum S instead of the product? I do not know
the exact answer. One thing is sure: sums and products are familiar things to think about. But if that did not
convince you, the following joke tells it best:
A man walking at night finds another on his hands and knees, searching for some-
thing under a streetlight. "What are you looking for?", the first man asks; "I lost a
quarter," the other replies. The first man gets down on his hands and knees to help,
and after a long while asks "Are you sure you lost it here?". "No," replies the second
man, "I lost it down the street. But this is where the light is."
Given a conditional statement "if A then B", we're also interested in the converse: "if B then
A". It is easy to see that the converse is not always true. For example, every number divisible by
four is divisible by two, but the converse fails: the number six is divisible by 2, yet it is not
divisible by 4. When the converse is true, we have a biconditional statement:
S(n) = 1 + 2 + 3 + ... + n    (2.5.1)

The notation S(n) indicates that this is a sum and that its value depends on n. The ellipsis ..., also known
informally as dot-dot-dot, is a series of (usually three) dots that indicates an intentional omission
of a word, sentence, or whole section from a text without altering its original meaning. The word
(plural ellipses) originates from the Ancient Greek élleipsis, meaning 'leave out'. In the above
equation, an ellipsis (raised to the center of the line) used between two operation symbols (+
here) indicates the omission of values in a repeated operation.
There are different ways to compute this sum. I present three of them to demonstrate that there
is usually more than one way to solve a mathematical problem, and the more solutions you
have the better. Among the different ways to solve a problem, one that can be applied to
many other problems is a powerful technique and should be studied.
The first strategy is simple: get your hands dirty by calculating this sum manually for some
cases, n = 1, 2, 3, 4, ..., and try to find a pattern. Then we propose a formula, and if we
can prove it, we have discovered a mathematical truth (if it is significant it will be called a
theorem, and your name is attached to it forever). For n = 1, 2, 3, 4, the corresponding sums are
n = 1: S(1) = 1
n = 2: S(2) = 1 + 2 = 3 = (2 × 3)/2
n = 3: S(3) = 1 + 2 + 3 = 6 = (3 × 4)/2
n = 4: S(4) = 1 + 2 + 3 + 4 = 10 = (4 × 5)/2
From that pattern we can guess the following formula

S(n) = 1 + 2 + 3 + ... + n = n(n + 1)/2    (2.5.2)
You should now double-check this formula for other values of n, and only
when you're convinced that it might be correct, prove it. Why
bother? Because if you do not prove this formula for arbitrary n, it remains
only a conjecture: it is correct for all the n you have manually
checked, but who knows whether it holds for the others. How are we going
to prove it? Mathematicians do not want to prove Eq. (2.5.2) one n at a time;
they are very lazy, which is actually good, as it forces them to come up with clever ways. A
technique suitable for this kind of proof is proof by induction. The steps are: (1) check that S(1) is
correct – this is called the basis step; (2) assume S(k) is correct – this is known as the induction
hypothesis; and (3) prove that S(k + 1) is correct – the induction step. So, the fact that S(1) is
valid leads to S(2) being correct, which in turn leads to S(3), and so on. This is similar to the familiar
domino effect.
Proof by induction of Eq. (2.5.2). It is easy to see that $S(1)$ is true (Eq. (2.5.2) is simply $1 = 1$). Now, assume that it holds for $k$, a natural number; thus we have
$$S(k) = 1 + 2 + 3 + \cdots + k = \frac{k(k+1)}{2}$$
Now, we consider $S(k+1)$, which is $1 + 2 + \cdots + k + (k+1)$, which is $S(k) + (k+1)$. If we can show that $S(k+1) = \frac{1}{2}(k+1)(k+1+1)$, then we're done. Indeed, we have
$$S(k+1) = S(k) + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+1+1)}{2}$$
We present another way, due to the ten-year-old Gauss (who would later become the prince of mathematics and one of the three greatest mathematicians of all time, along with Archimedes and Newton):
$$\begin{aligned}
S &= 1 + 2 + 3 + \cdots + 100\\
S &= 100 + 99 + 98 + \cdots + 1\\
2S &= 101 + 101 + \cdots + 101 = 101 \cdot 100 \qquad (2.5.3)\\
S &= \frac{100 \cdot 101}{2}
\end{aligned}$$
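Gauss's pairing idea can be sketched directly: write the sum forwards and backwards and observe that every column adds to the same value. A small Python illustration (mine, not the book's):

```python
# Pair 1..n with n..1: every one of the n columns sums to n + 1,
# hence 2S = n(n + 1).
def gauss_sum(n):
    forward = range(1, n + 1)
    backward = range(n, 0, -1)
    columns = [a + b for a, b in zip(forward, backward)]
    assert all(c == n + 1 for c in columns)  # every column is identical
    return sum(columns) // 2

print(gauss_sum(100))  # -> 5050
```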
What a great idea! A geometric illustration of Gauss' clever idea is given in the figure: our sum is a triangle, and by adding to this triangle another equal triangle we get a rectangle in which it is easier to count the dots. Why does $1 + 2 + 3 + \cdots$ make a triangle? See Fig. 2.5 for the reason. The lesson here is to try to have different views (or representations) of the same problem. In this problem, we move away from the abstract (the numbers 1, 2, 3, ...) back to the concrete (rocks or dots), and by playing with the dots we can see the way to solve the problem.
$$\begin{aligned}
1 &= 1\\
3 &= 1 + 2\\
6 &= 1 + 2 + 3\\
10 &= 1 + 2 + 3 + 4
\end{aligned}$$
The power of a formula. What is significant about Eq. (2.5.2)? First, it simplifies computation by reducing a large number of additions to three fixed operations: one addition, one multiplication and one division. Second, as we have at our disposal a formula which produces a number if we plug in a number, we can, in theory, compute $S(5/2)$; it is $35/8$. Of course it does not make sense to ask for the sum of the first $5/2$ integers. Still, the formula extends the scope of the original problem to values of the variable other than those for which it was originally defined.
Refer to Section 2.24.2 for details on the factorial.
The notation $\sum_{k=1}^{n} k$ reads "sigma of $k$ for $k$ ranging from $1, 2, 3$, to $n$"; $k$ is called the index of summation. It is a dummy variable in the sense that it does not appear in the actual sum. Indeed, we can use any letter we like; we can write $\sum_{i=1}^{n} i$. The 1 is the starting point of the summation, or the lower limit of the summation; $n$ is the stopping point, or the upper limit of the summation. And $\sum$ is the capital Greek letter sigma, corresponding to S for sum. This summation notation was introduced by Fourier in 1820. You will see that mathematicians introduce weird symbols all the time, usually Greek letters. Note that there is no reason to be scared of them; just as with any human language, we need time to get used to these symbols.
Now comes the art. Out of the blue, mathematicians consider the identity $(k-1)^2 = k^2 - 2k + 1$ to get
$$(k-1)^2 = k^2 - 2k + 1 \implies k^2 - (k-1)^2 = 2k - 1 \qquad (2.5.5)$$
The boxed equation is an identity, i.e., it holds for $k = 1, 2, 3, \ldots$. Now, we substitute $k = 1, 2, \ldots, n$ in the boxed identity to get $n$ equations, and if we add these $n$ equations we're led to the following, which involves $S(n)$:
$$\sum_{k=1}^{n} \left[ k^2 - (k-1)^2 \right] = \sum_{k=1}^{n} (2k - 1) = 2\sum_{k=1}^{n} k - n = 2S(n) - n \qquad (2.5.6)$$
Now if the sum on the left-hand side can be found, we're done. As it turns out, it is super easy to compute this sum; to see that, we just need to write out $\sum_{k=1}^{n} [k^2 - (k-1)^2]$ explicitly:
$$\begin{aligned}
\sum_{k=1}^{n} \left[ k^2 - (k-1)^2 \right] &= (1^2 - 0^2) + (2^2 - 1^2) + (3^2 - 2^2) + \cdots + (n^2 - (n-1)^2)\\
&= 1^2 + 2^2 - 1^2 + 3^2 - 2^2 + \cdots + (n^2 - (n-1)^2) = n^2
\end{aligned}$$
This sum is known as a sum of differences, and it has a telescoping property: its value depends only on the first and the last term, for many terms cancel each other (e.g. the red and blue terms). We will discuss more about sums of differences when we see that they are a powerful technique (as the sum is so easy to compute).
Introducing the above result into Eq. (2.5.6) we can compute S.n/ and the result is identical
to the one that we have obtained using Gauss’ idea and induction.
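The telescoping argument is also easy to check numerically; a short sketch, assuming nothing beyond the identity $k^2 - (k-1)^2 = 2k - 1$:

```python
# The sum of differences k^2 - (k-1)^2 collapses to n^2 (telescoping);
# combining it with sum(2k - 1) = 2 S(n) - n recovers S(n).
n = 50
telescoped = sum(k**2 - (k - 1)**2 for k in range(1, n + 1))
assert telescoped == n**2

S = (telescoped + n) // 2      # from n^2 = 2 S(n) - n
assert S == n * (n + 1) // 2
print(S)  # -> 1275
```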
Believe me, it is what mathematicians do, and it has led to many interesting and beautiful results; one of them is the factorial of 0.5, or $(1/2)! = \sqrt{\pi}/2$. Why $\pi$ here? See Section 4.19.1.
If you are really wondering about the origin of this magical step, Section 2.18.6 provides one answer.
To see why $\sum_{k=1}^{n} (2k-1) = 2\sum_{k=1}^{n} k - n$, go slowly: $\sum_{k=1}^{n} (2k-1) = \sum_{k=1}^{n} 2k - \sum_{k=1}^{n} 1$. Now, $\underbrace{1 + 1 + \cdots + 1}_{n \text{ terms}} = n$, that is $\sum_{k=1}^{n} 1 = n$. For the term $\sum_{k=1}^{n} 2k$, it is $2 \cdot 1 + 2 \cdot 2 + \cdots + 2 \cdot n = 2(1 + 2 + \cdots + n) = 2\sum_{k=1}^{n} k$.
Among the previous three ways, which one can be used now, for the sum of squares $1^2 + 2^2 + \cdots + n^2$? Obviously, Gauss's clever trick is out of luck here. The tedious way of computing the sum for a few cases, finding the pattern, guessing a formula and proving it might work, but the step of finding the formula is hard.
So, we adopt the telescoping sum technique, starting with the identity $(k-1)^3 = k^3 - 3k^2 + 3k - 1$:
$$(k-1)^3 = k^3 - 3k^2 + 3k - 1 \implies k^3 - (k-1)^3 = 3k^2 - 3k + 1 \qquad (2.5.8)$$
It follows then that
$$\sum_{k=1}^{n} \left[ k^3 - (k-1)^3 \right] = 3\sum_{k=1}^{n} k^2 - 3\sum_{k=1}^{n} k + n \qquad (2.5.9)$$
But the telescoping sum on the left-hand side is $n^3$, i.e., $\sum_{k=1}^{n} [k^3 - (k-1)^3] = n^3$. Thus, we can write
$$3S(n) = n^3 + 3\frac{n(n+1)}{2} - n = \frac{n(n+1)}{2}(2n+1) \qquad (2.5.10)$$
where we have used the result from Eq. (2.5.2) for $\sum_{k=1}^{n} k$. Can we understand why the result is as it is? Consider the case $n = 4$, i.e., $S(4) = 1 + 4 + 9 + 16$. We can express this sum as a triangle, shown first in Fig. 2.6a. As the sum does not change if we rotate this triangle, we consider two rotations (the first rotation is an anti-clockwise 120 degrees about the center of the triangle), shown in the two remaining figures. If we sum these three triangles, i.e., $3S(4)$, we get a new triangle shown in Fig. 2.6b. What is the sum of this triangle? It is $9(1 + 2 + 3 + 4)$, and $9 = 2(4) + 1$, so this triangle gives $(2 \cdot 4 + 1)(4)(5)/2$, which is the RHS of Eq. (2.5.10). How did we know that a rotation would solve this problem? Because any triangle in Fig. 2.6a is rotationally symmetric.
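Eq. (2.5.10) can be checked in a few lines; a quick sketch:

```python
# Verify 3 * (1^2 + ... + n^2) = n^3 + 3n(n+1)/2 - n = n(n+1)(2n+1)/2
# for a range of n.
for n in range(1, 200):
    S2 = sum(k * k for k in range(1, n + 1))
    assert 3 * S2 == n**3 + 3 * n * (n + 1) // 2 - n
    assert 3 * S2 == n * (n + 1) * (2 * n + 1) // 2
print("Eq. (2.5.10) holds for n = 1..199")
```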
Figure 2.6
At this point, you certainly know how to tackle this sum. We start with $(k-1)^4$:
$$(k-1)^4 = k^4 - 4k^3 + 6k^2 - 4k + 1 \implies k^4 - (k-1)^4 = 4k^3 - 6k^2 + 4k - 1 \qquad (2.5.12)$$
So,
$$\sum_{k=1}^{n} \left[ k^4 - (k-1)^4 \right] = 4\sum_{k=1}^{n} k^3 - 6\sum_{k=1}^{n} k^2 + 4\sum_{k=1}^{n} k - n \qquad (2.5.13)$$
We know the LHS ($\sum_{k=1}^{n} [k^4 - (k-1)^4] = n^4$), and we know the second and third sums in the RHS (from the previous problems); everything is known except the sum we are looking for (the red term), so we can compute it as:
$$\begin{aligned}
\sum_{k=1}^{n} k &= \frac{n(n+1)}{2} = \frac{n^2}{2} + \frac{n}{2}\\
\sum_{k=1}^{n} k^2 &= \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{3n^2 + n}{6} \qquad (2.5.15)\\
\sum_{k=1}^{n} k^3 &= \frac{n^2(n+1)^2}{4} = \frac{n^4}{4} + \frac{2n^3 + n^2}{4}
\end{aligned}$$
Clearly, we can see a pattern which allows us to write, for any whole number $p$ (we believe in the pattern, that it will hold for $p = 4, 5, \ldots$),
$$\sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + R(n) \qquad (2.5.16)$$
where the ratio of $R(n)$ over $n^{p+1}$ approaches zero when $n$ is infinitely large; see Section 2.19 for a discussion on sequences and limits. This result would become useful in the development of calculus (precisely, in the problem of determining the area under the curve $y = x^p$). If you know calculus, Eq. (2.5.16) is the younger brother of $\int x^n \, dx = \frac{x^{n+1}}{n+1} + C$.
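The claim that $R(n)/n^{p+1} \to 0$ can be probed numerically, say for $p = 4$, a case not treated above; a sketch:

```python
# For p = 4, compare sum k^p with the leading term n^(p+1)/(p+1):
# their ratio should approach 1 as n grows, i.e. R(n)/n^(p+1) -> 0.
p = 4
ratios = []
for n in (10, 100, 1000, 10000):
    s = sum(k**p for k in range(1, n + 1))
    ratios.append(s / (n**(p + 1) / (p + 1)))
print(ratios)  # each entry is closer to 1 than the last
```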
All the sums in Eq. (2.5.15) contain two terms, and we can see why by looking at Fig. 2.7. For $\sum_{k=1}^{n} k$, the term $n^2/2$ is the area of the green triangle, and the term $n/2$ is the area of the pink staircases. Similarly, for $\sum_{k=1}^{n} k^2$, the term $n^3/3$ is the volume of the pyramid. If you're good at geometry you should be able to compute this sum geometrically following this pyramid interpretation. However, for $\sum_{k=1}^{n} k^p$ with $p \ge 3$, it is impossible to use geometry, while algebra always gives you the result, albeit in a more involved way.
Figure 2.7
Question 2. We have found the sums of integral powers up to power of three. One question
arises naturally: is there a general formula that works for any power?
So, we have two groups of natural numbers as far as factorizing them (expressing a number as a product of other numbers) is concerned. In one group (2, 3, 5, 7), the numbers can only be written as a product of one and themselves. Such numbers are called prime numbers. The other group (4, 6, 8, 9) contains non-prime numbers, or composite numbers. Primes are central in number theory because of the fundamental theorem of arithmetic, stating that every natural number greater than one is either a prime itself or can be factorized as a product of primes that is unique up to their order; for instance $2 \cdot 3 \cdot 11 \cdot 113 = 7458$, and each of the numbers 2, 3, 11, 113 is prime. And this prime factorization is unique (the order of the factors does not count). That's why mathematicians decided that 1 is not a prime. If 1 were a prime then we could write $6 = 1 \cdot 2 \cdot 3 = 2 \cdot 3$: the factorization would not be unique! Just as matter is made of atoms, numbers are made of prime numbers!
$N$        $\pi(N)$    $\pi(N)/N$
100        25          0.25
1 000      168         0.168
10 000     1 229       0.123
100 000    9 592       0.096
1 000 000  78 498      0.079

What this table says is that there are 25 primes among the first 100 integers. Among the first 1 000 integers, there are 168 primes, so $\pi(1000) = 168$, and so on. Note that as we considered the first 100, 1 000 and 10 000 integers, the percentage of primes went from 25% down to 12.3%. These examples suggest, and the prime number theorem confirms, that the density of prime numbers at or below a given number decreases as the number gets larger.
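The counts $\pi(N)$ quoted in the text can be reproduced with a short sieve (the book itself uses Julia's Primes.jl; this Python sketch is mine):

```python
# Sieve of Eratosthenes: count the primes up to N and print the density.
def prime_count(N):
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, N + 1, p):
                is_prime[m] = False
    return sum(is_prime)

for N in (100, 1000, 10000):
    print(N, prime_count(N), prime_count(N) / N)
# -> 100 25 0.25 / 1000 168 0.168 / 10000 1229 0.1229
```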
But if we keep counting for bigger $N$ we see that the list of primes goes on. Indeed, there are infinitely many prime numbers, as proved by Euclid more than 2000 years ago. His proof is one of the most famous, most often quoted, and most beautiful proofs in all of mathematics.
His proof is now known as proof by contradiction (also known as the method of reductio ad absurdum, Latin for "reduction to absurdity"). To use this technique, we assume the negation of the statement we are trying to prove and use it to arrive at something impossible. So, we assume that there are finitely many prime numbers, namely $p_1, p_2, \ldots, p_n$. And from this assumption we do something to arrive at something absurd, thus invalidating our starting point.
(The Greek letter $\pi$ makes a "p" sound, and stands for "prime".)
(I used the package Primes.jl, which provides the function isprime(n), to check if a given n is prime or not.)
Euclid considered this number $p$:
$$p = p_1 p_2 \cdots p_n + 1$$
Because we have assumed there are only $n$ primes, $p$ cannot be a prime. Thus, according to the fundamental theorem of arithmetic, $p$ must be divisible by some $p_i$ ($1 \le i \le n$), but the above equation says that dividing $p$ by any $p_i$ always leaves a remainder of 1. A contradiction! So the assumption that there are finitely many primes is wrong, and thus there are infinitely many prime numbers.
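Euclid's argument, and the warning that $p$ itself need not be prime, can both be seen concretely; a small sketch:

```python
# p = p1*p2*...*pn + 1 leaves remainder 1 when divided by each pi,
# yet p itself may be composite (here 30031 = 59 * 509).
from math import prod

primes = [2, 3, 5, 7, 11, 13]
p = prod(primes) + 1
assert all(p % q == 1 for q in primes)
print(p, p == 59 * 509)  # -> 30031 True
```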
Figure 2.8: Plot of the prime counting function $\pi(N)$ for $N = 10^2, 10^3, 10^4$.
We are not Gauss, so we need to visualize the data. We can say $\pi(N)$ is a function and call it the prime counting function. It is a function because when we feed it a number it returns another number. In Fig. 2.8 the plot of $\pi(N)$ is given for $N = 10^2, 10^3, 10^4$. What can we get from these plots? It is clear that as $N$ gets larger and larger, $\pi(N)$ can be considered a smooth function. Among all the functions that we know of, it is $N/\log N$ that best approximates $\pi(N)$. But why log? See Table 2.2 and the red numbers. The red number is exactly $\log 10$. In this table, the third column is $N/\pi(N)$, and the first entry in the fourth column is the difference between the second entry and the first entry in the third column. Let $f(N)$ be the mysterious function for $N/\pi(N)$; then we have $f(10N) = f(N) + 2.3$. A function that turns a product into a sum! That can be a logarithm. Indeed, $\log(10N) = \log N + \log 10$, and $\log 10 = 2.3$. A table like this was probably the one that Gauss merely looked at and correctly guessed the function.
(A misconception is that Euclid's $p$ is always a prime. One counterexample: $2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509$, not a prime.)
Table 2.2: The density of prime numbers. The fourth column is the difference of successive entries in the third column.

$N$        $\pi(N)$   $N/\pi(N)$   difference
100        25         4.0
1 000      168        6.0          2.0
10 000     1 229      8.1          2.1
100 000    9 592      10.4         2.3
1 000 000  78 498     12.7         2.3
Gauss did not prove his conjecture. The theorem was proved independently by Jacques
Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard
Riemann, in particular, the Riemann zeta function (Section 4.19.2).
$\{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97\}$
Mathematicians call the prime pairs $(3, 5)$, $(5, 7)$, $(11, 13)$, etc. twin primes. Thus, we have the following definition:
Definition 2.6.1
A pair of primes $(p, q)$ is said to be twin if $q = p + 2$.
Note that, except for $(2, 3)$, 2 is the smallest possible distance (or gap) between two primes.
Mathematicians then ask the same question: how many twin primes are there? It is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or whether there is a largest pair. The breakthrough work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved. For a list of unsolved maths problems check here.
(Created using the function step of matplotlib.)
It is usually while solving unsolved mathematical problems that mathematicians discover new mathematics. The new maths also helps to understand the old maths and provides better solutions to old problems. Then, after about 100 or 200 years, some of the new maths makes its way into the mathematics curriculum to train the general public.
Yitang Zhang (born February 5, 1955). On April 17, 2013, a paper arrived in the inbox of Annals of Mathematics, one of the discipline's preeminent journals. Written by a mathematician virtually unknown to the experts in the field, a 58-year-old lecturer at the University of New Hampshire named Yitang Zhang, the paper claimed to have taken a huge step forward in understanding the twin primes conjecture, one of mathematics' oldest problems. Just three weeks later Zhang's paper was accepted. Rumors swept through the mathematics community that a great advance had been made by an unknown mathematician, someone whose talents had been so overlooked after he earned his doctorate in 1991 that he had found it difficult to get an academic job, working for several years as an accountant and even in a Subway sandwich shop.
"Basically, no one knows him," said Andrew Granville, a number theorist at the Université de Montréal. "Now, suddenly, he has proved one of the great results in the history of number theory." For Zhang's story, you can watch this documentary movie.
There are many more interesting stories about primes but we stop here, see Fig. 4.69 for a
prime spiral.
Similar to counting discrete objects (one bird, two carrots, etc.), one needs to define a unit before a measurement can be done. For example, how long is a rod? We can define a unit of length to which we assign a value of 1, and then the rod's length is expressed in terms of this unit. If the unit is the meter, the rod is 5 meters. If the unit is the yard, the rod is 5.46807 yards.
One problem arises immediately. Not all quantities can be expressed as integral multiples of a unit. A rod can be one meter and a bit long. To handle this, we define a sub-unit. For example, we can divide 1 meter into 100 equal parts, and each part (which we call a centimeter, by the way) is now a new unit. The rod's length is now 120 centimeters, or 120(1/100) meters. We can generalize this by dividing 1 into $m$ equal parts to obtain $1/m$, the measure of our new sub-unit. Any length can then be expressed as an integral multiple of $1/m$, or $n/m$, a ratio.
And that’s how mathematicians defined rational numbers.
Definition 2.7.1
A rational number is a number that can be written in the form p=q where p and q are integers
and q is not equal to zero.
The requirement that $q$ is not equal to zero comes from the fact that division by zero is meaningless. If we allowed it, we would be able to write $0 \cdot 1 = 0 \cdot 2$, divide both sides by 0, and get $1 = 2$!
We now need to define addition and multiplication for rational numbers. We first present
these rules here (explanations follow immediately):
$$\frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd}, \qquad \frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd} \qquad (2.7.1)$$
Surprisingly, the rule for multiplication is easier to grasp than that for addition. We refer to Fig. 2.9 for an illustration. Imagine a wooden plate of rectangular shape, of which one side is 3 units long and the other side is 2 units long. Thus the area of this plate is 6 (the shaded area). Now we divide the longer side into 3 equal parts, so each part is 1/3 of it. Similarly, we chop the shorter edge into two halves, so each part is 1/2 of it. Now, the area of one piece is $1/3 \times 1/2$ of the plate, and it is equal to 1/6 of it (as there are six equal pieces, and in total they make up the plate of area 6).
It is not hard to add two rational numbers when they have the same denominator:
$$\frac{1}{2} + \frac{3}{2} = \frac{1 + 3}{2} = \frac{4}{2}$$
This is because one half plus three halves is certainly four halves, which is $4/2$. This is similar to one carrot plus two carrots being three carrots; the unit is just a half instead of 1 carrot. For rational numbers having different denominators, the rule is then to convert them to have the same denominator:
$$\frac{1}{2} + \frac{4}{3} = \frac{1 \cdot 3}{2 \cdot 3} + \frac{4 \cdot 2}{3 \cdot 2} = \frac{3}{6} + \frac{8}{6} = \frac{11}{6}$$
Note that this is a geometrical construction problem: given a segment, use a ruler and compass to divide it into $m$ equal parts. From Euclidean geometry we know that this construction can be done.
Rational here does not mean logical or reasonable; it refers to a ratio of two integers.
Figure 2.10: Equality of two rational numbers. The rational 1/2 is said to be in its lowest terms as it is impossible to simplify it further. On the other hand, 2/4 is not in lowest terms.
The conversion is based on the equality of two rational numbers explained in Fig. 2.10.
Percentage. In mathematics, a percentage (from Latin per centum "by a hundred") is a ratio
expressed as a fraction of 100. It is often denoted using the percent sign ("%"), although the
abbreviations "pct.", "pct" and sometimes "pc" are also used. As a ratio, a percentage is a
dimensionless number (pure number); it has no unit of measurement.
Arithmetic is important, but this is more important: we have to check whether the rules for the integers, stated in Eq. (2.1.2), still hold for the new numbers, the rationals. It turns out that the rules do hold. For example, addition is still commutative:
$$\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd} = \frac{bc + ad}{bd} = \frac{bc}{bd} + \frac{ad}{bd} = \frac{c}{d} + \frac{a}{b}$$
Note that in the proof we have used $ad + bc = bc + ad$, as these numbers are integers. Why is this important? Because mathematicians want to see $2 = 2/1$, that is, an integer is a rational number. Thus, the arithmetic for the rationals must obey the same rules as for the integers.
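Python's fractions module implements exactly the arithmetic of Eq. (2.7.1); a quick sketch checking the rules, including commutativity, on random fractions:

```python
# Exact rational arithmetic: check Eq. (2.7.1) and commutativity
# of addition on many random fractions.
from fractions import Fraction
from random import randint

for _ in range(1000):
    a, c = randint(-50, 50), randint(-50, 50)
    b, d = randint(1, 50), randint(1, 50)
    x, y = Fraction(a, b), Fraction(c, d)
    assert x * y == Fraction(a * c, b * d)      # multiplication rule
    assert x + y == Fraction(a * d + b * c, b * d)  # addition rule
    assert x + y == y + x                        # commutativity
print("rules of Eq. (2.7.1) verified on random samples")
```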
$$351 = 3 \cdot 100 + 5 \cdot 10 + 1 \cdot 1 = 3 \cdot 10^2 + 5 \cdot 10^1 + 1 \cdot 10^0$$
which means that the units are in the 0 position, the tens in the 1 position and the hundreds in the 2 position, and the position decides the power of ten. Now, $3/10 = 3 \cdot 10^{-1}$ is zero units and three tenths, thus the digit 3 must be placed in the $-1$ position, which is just after the units: 0 3, but we need something to separate the two digits, otherwise the number is mistaken for 3. The decimal point separates the units column from the tenths column. The number 351.3 is understood as
$$351.3 = 3 \cdot 10^2 + 5 \cdot 10^1 + 1 \cdot 10^0 + 3 \cdot 10^{-1}$$
And thus $3/100$, which is three hundredths, is written as 0.03: the digit 3 is at position $-2$.
The Flemish mathematician Simon Stevin (1548–1620), sometimes called Stevinus, first used a decimal point to represent a fraction with a denominator of ten in 1585. While decimals had been used by both the Arabs and Chinese long before this time, Stevin is credited with popularizing their use in Europe. An English translation of Stevin's work was published in 1608 and titled Disme, The Arts of Tenths or Decimal Arithmetike, and it inspired the third president of the United States, Thomas Jefferson, to propose a decimal-based currency for the United States (for example, one tenth of a dollar is called a dime).
If we do long division for rationals we see the following decimals:
$$\frac{1}{4} = 0.25, \qquad \frac{1}{3} = 0.3333\ldots, \qquad \frac{1}{7} = 0.142857142857\ldots \qquad (2.7.2)$$
First, I introduce some terminology. In decimals, the number of places filled by the digits after (to the right of) the decimal point is called the number of decimal places. Thus, 0.25 has 2 decimal places and 0.2 has 1 decimal place. That's boring (but we need to know the term to understand other people). What's more interesting lies in Eq. (2.7.2): we can see that there are two types of decimals for rational numbers. The decimal 0.25 is a terminating decimal: the (long) division process terminates. On the other hand, $1/3 = 0.3333\ldots$ with infinitely many digits 3, as the division does not terminate. The decimal 0.3333... is called a recurring decimal. How about 1/7? Is it a recurring decimal? Of course it is, you might say. But think about this: how can you be sure that the red digits repeat forever? Could it be like this: $1/7 = 0.142857142857\ldots 142857531\ldots$? But things are not that complicated. Any recurring decimal keeps its pattern forever, and the reason is not hard to see. Let's look at the following divisions of integers by 7:
$$\begin{aligned}
0 &= 0 \cdot 7 + 0, & 6 &= 0 \cdot 7 + 6, & 12 &= 1 \cdot 7 + 5\\
1 &= 0 \cdot 7 + 1, & 7 &= 1 \cdot 7 + 0, & 13 &= 1 \cdot 7 + 6\\
2 &= 0 \cdot 7 + 2, & 8 &= 1 \cdot 7 + 1, & 14 &= 2 \cdot 7 + 0\\
3 &= 0 \cdot 7 + 3, & 9 &= 1 \cdot 7 + 2, & 15 &= 2 \cdot 7 + 1\\
4 &= 0 \cdot 7 + 4, & 10 &= 1 \cdot 7 + 3, & 16 &= 2 \cdot 7 + 2\\
5 &= 0 \cdot 7 + 5, & 11 &= 1 \cdot 7 + 4, & 17 &= 2 \cdot 7 + 3
\end{aligned}$$
Look at the remainders: apart from 0, there are only six of them: $\{1, 2, 3, 4, 5, 6\}$. That's why $1/7 = 0.142857142857\ldots$, which has a cycle of six, the length of the repeating block of digits.
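The long-division argument can be sketched in a few lines: carry out the division digit by digit and watch the remainders, and therefore the digits, repeat.

```python
# Long division of 1 by 7: the remainder at each step determines all
# later digits, and only remainders 1..6 can occur, so the digits
# must cycle with period at most 6.
def long_division_digits(num, den, count):
    digits, remainders = [], []
    r = num % den
    for _ in range(count):
        remainders.append(r)
        r *= 10
        digits.append(r // den)
        r %= den
    return digits, remainders

digits, rems = long_division_digits(1, 7, 12)
print(digits)    # -> [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
print(rems[:6])  # -> [1, 3, 2, 6, 4, 5]
```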
Sometimes you’re asked to find the fraction corresponding to a recurring decimal. For exam-
ple, what is the fraction of 0:2272727 D 0:227 where the bar on 27 is to indicate the repeated
digits. To this end, we write 0:227 D 0:2 C 0:027. Now, we plan to find the fraction for 0:027.
We start with y D 0:27, then taking advantage of the repeating pattern, we will find a linear
equation in terms of y to solve for it:
100y D 27:27
27 27 2 27 5
99y D 27 H) y D H) 0:027 D H) 0:227 D C D
99 990 10 990 22
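The computation can be confirmed with exact rational arithmetic; a sketch:

```python
# Verify 0.2272727... = 5/22 using exact fractions.
from fractions import Fraction

y = Fraction(27, 99)            # y = 0.272727... satisfies 100y = 27 + y
x = Fraction(2, 10) + y / 10    # 0.2 + 0.0272727...
assert x == Fraction(5, 22)
print(float(x))  # -> 0.22727272727272727
```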
Is 0.9999... equal to 1? We all know that $1/3 = 0.\overline{3}$; multiplying both sides by 3, we obtain $1 = 0.\overline{9} = 0.9999\ldots$ And there are many other proofs of this. For example, the following proof is common and easy to get:
$$\begin{aligned}
x &= 0.999\ldots\\
100x &= 99.999\ldots\\
99x &= 99 &\implies x \,(= 0.999\ldots) = 1
\end{aligned}$$
But what is going on here? The problem is at the equal sign and the never-ending 9999. To fully understand this we need to go to infinity, and this will be postponed until Section 2.19.
Figure 2.11: By adding three unit squares to the problem, we suddenly get a symmetrical geometric object. The area of the square ABCD is $d^2$ and this square is twice as large as the unit square. Thus, $d^2 = 2$. On the right is a geometric construction of a line segment of length $\sqrt{2}$. We start with the right triangle OAB with $AO = AB = 1$. The Pythagorean theorem then tells us that $OB = \sqrt{2}$. Now, using a compass, draw a circle centered at O with OB as radius; we get point C with $OC = \sqrt{2}$. And that point C is where the irrational number $\sqrt{2}$ lives.
How are we going to prove that $\sqrt{2}$ is irrational? The only information we have is the definition of an irrational number: a number which is not $a/b$. So, the goal is to prove that $\sqrt{2} \ne a/b$. Where do we begin? It seems easier if we start with $\sqrt{2} = a/b$ and play with this to see if something comes up. We're going to use proof by contradiction. Let's do it.
Assume that $\sqrt{2}$ is a rational number, i.e., $\sqrt{2} = a/b$ or $a^2/b^2 = 2$, where $a, b$ are not both even (if they are, one can always cancel out the factor 2). So $a^2 = 2b^2$, which is an even number (since it is 2 multiplied by some number). Thus, $a$ is an even number (even though this is rather obvious, as always, prove it). Since $a$ is even, we can express it as $a = 2c$ where $c = 1, 2, 3, \ldots$:
$$a = 2c \implies a^2 = 4c^2 \implies 4c^2 = 2b^2 \implies b^2 = 2c^2, \ \text{so } b^2 \text{ is even, hence } b \text{ is even}$$
So, we are led to the fact that both $a$ and $b$ are even, which is in contradiction with $a, b$ being not both even. So the square root of two must be irrational. We used proof by contradiction: to use this technique, we assume the negation of the statement we are trying to prove and use it to arrive at something impossible.
Examples of irrational numbers include square roots of integers that are not perfect squares, e.g. $\sqrt{10}$, cube roots of integers that are not cubes, like $\sqrt[3]{7}$, and so on. Multiplying an irrational number by a nonzero rational coefficient, or adding a rational number to it, again produces an irrational number. The most famous irrational number is $\pi$, the ratio of a circle's circumference to its diameter: $\pi = 3.14159265\ldots$ The decimal portion of $\pi$ is infinitely long and never repeats itself.
Why not simply replace $\sqrt{2}$ with 1.414? There are many reasons. One is that mathematicians love patterns, not just the answer. For example, the Basel problem asked mathematicians to compute a sum of infinitely many terms:
$$S = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$
Anyone can find that the answer is approximately 1.6449. But Euler was not happy with that, and eventually he found out that the exact answer is $\pi^2/6$. Not only is this a beautiful result in itself; Euler also discovered other mathematical results while working on this problem.
2.8.3 Roots $\sqrt[n]{x}$
A square root of a number $x$ is a number $y$ such that $y^2 = x$; in other words, a number $y$ whose square (the result of multiplying the number by itself, or $y \cdot y$) is $x$. For example, 4 and $-4$ are square roots of 16, because $4^2 = (-4)^2 = 16$. Every nonnegative real number $x$ has a unique nonnegative square root, called the principal square root, which is denoted by $\sqrt{x}$, where the symbol $\sqrt{\ }$ is called the radical sign. The term (or number) whose square root is being considered is known as the radicand: the number or expression underneath the radical sign. The radical symbol was first used in print in 1525, in Christoph Rudolff's Coss. It is believed that this was because it resembled a lowercase "r" (for "radix"). The symbol $\sqrt{\ }$ itself is not as important as the concept of square root; however, for the communication of mathematics, we have to get to know and use this symbol since it has become standard.
(For example, assume that $\sqrt{2} + r_1 = r_2$ where $r_1, r_2$ are two rationals; then we get $\sqrt{2} = r_2 - r_1$. But rationals are closed under subtraction, i.e., $r_2 - r_1$ is a rational. Thus we arrive at the absurd conclusion that $\sqrt{2}$ is rational. Therefore, $\sqrt{2} + r_1$ must be irrational.)
(Because the LHS is 5, and the square of 5 is 25, not 13.)
The definition of a square root of $x$ as a number $y$ such that $y^2 = x$ has been generalized in the following way. A cube root of $x$ is a number $y$ such that $y^3 = x$; it is denoted by $\sqrt[3]{x}$. We need a cube root when we know the volume of a cubic box and need to determine its side. Extending to other roots is straightforward. If $n$ is an integer greater than two, an $n$th root of $x$ is a number $y$ such that $y^n = x$; it is denoted by $\sqrt[n]{x}$.
What is $\sqrt{-4}$? It would be a number $y$ such that $y^2 = -4$, which is absurd (the square of a real number is never negative). So, we only compute square roots of positive numbers, at least for now.
Calculation of square roots. What is the value of $\sqrt{5}$? And you have to find that value without using a calculator. Why bother with this? Because you could develop an algorithm for calculating the square root of any positive number by yourself. That in itself is a big achievement (even though someone did it before you). Furthermore, this activity is important if you later follow a career in applied mathematics, science or engineering. In these areas people often use approximate methods to solve problems; for example, they solve the equation $x = \sin x$ approximately using algorithms similar (in nature) to the one we are discussing in this section. If you are lazy and just use a calculator, you will learn nothing!
Perhaps the first algorithm used for approximating $\sqrt{x}$ is known as the Babylonian method. The method is also known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who gave the first explicit description of it in his AD 60 work Metrica. So, what exactly is their algorithm? It starts with an initial guess $x_0$ of the square root and this observation: if $x_0$ is smaller than the true square root of $S$, then $S/x_0$ is larger than the root of $S$. So, an average of these two numbers might be a better approximation:
$$x_1 = \frac{1}{2}\left(x_0 + \frac{S}{x_0}\right) \qquad (2.8.1)$$
And we use $x_1$ to compute $x_2 = 0.5(x_1 + S/x_1)$. The process is repeated until we get the accuracy that we aim for. How good is this algorithm? Using Julia (see Listing B.1) I wrote a small function implementing it. Using it I computed $\sqrt{5}$ with $x_0 = 2$, and the results are given in Table 2.3.
The performance of the algorithm is so good that with three iterations and simple calculations we get the square root of 5 correct to 6 decimals. However, there are many questions to be asked. For example, where did Eq. (2.8.1) come from?
Christoph Rudolff (1499-1545) was the author of the first German textbook on algebra "Coss". Check this.
Table 2.3: Calculation of $\sqrt{5}$ with starting value $x_0 = 2$.

$n$   $x_n$        error $e = x_n - \sqrt{5}$
1     2.25         1.39e-2
2     2.2361111    4.31e-5
3     2.2360680    2.25e-8
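The iteration itself is only a few lines; a minimal Python sketch (the book's own implementation is the Julia function in Listing B.1):

```python
# Babylonian / Heron iteration for sqrt(S): x_{k+1} = (x_k + S / x_k) / 2.
def babylonian_sqrt(S, x0, iterations):
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + S / x)
    return x

x = babylonian_sqrt(5.0, 2.0, 3)
print(x, abs(x - 5**0.5))  # three iterations already give a tiny error
```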
One derivation of Eq. (2.8.1) is as follows. Assume that $x_0$ is close to $\sqrt{S}$, and $e$ is the error in that approximation; then we have $(x_0 + e)^2 = S$. We can solve for $e$ from this equation:
$$(x_0 + e)^2 = S \implies x_0^2 + 2x_0 e + e^2 = S \implies e \approx \frac{S}{2x_0} - \frac{x_0}{2} \qquad (2.8.2)$$
where $e^2$ was omitted as it is negligible. Having obtained $e$, adding it to $x_0$ gives Eq. (2.8.1). Actually, the Babylonian method is an example of a more general method, Newton's method for solving $f(x) = 0$; see Section 4.5.4.
How about the calculation of $\sqrt[n]{x}$? Does Newton's method still work? If so, what should be the initial guess? Is Newton's method fast? Using a small program you can investigate all these questions, and discover some mathematics for yourselves.
Rationalizing denominators. Do you remember when you wrote $1/\sqrt{2}$ and your strict teacher corrected it to $\sqrt{2}/2$? They are the same, so why bother? I think that the reason is historical. Before calculators, it was easier to compute $\sqrt{2}/2$ (as approximately $1.4142135/2$) than to compute $1/1.4142135$. And thus it has become common not to write radicals in denominators. Now we know the why; let's move to the how.
How do we rationalize the denominator of the term $1/(1 + \sqrt{2})$? The secret lies in the identity $(a+b)(a-b) = a^2 - b^2$, and thus $(1+\sqrt{2})(1-\sqrt{2}) = -1$: the radical is gone. So, we multiply the numerator and denominator by $1 - \sqrt{2}$, which is the conjugate radical of $1 + \sqrt{2}$:
$$\frac{1}{1+\sqrt{2}} = \frac{1}{1+\sqrt{2}} \cdot \frac{1-\sqrt{2}}{1-\sqrt{2}} = \frac{1-\sqrt{2}}{-1} = \sqrt{2} - 1$$
And it is exactly the same idea when we have to divide two complex numbers $(a+bi)/(c+di)$: we multiply the numerator and denominator by $c - di$, which is the complex conjugate of $c + di$. This time doing so eliminates $i$ in the denominator, as $i^2 = -1$.
In general, the radical conjugate of $a + b\sqrt{c}$ is $a - b\sqrt{c}$; multiplied together they give $a^2 - b^2 c$. The principle of rationalizing denominators is as simple as that. But let's try this problem: simplify the following expression
$$S = \frac{1}{3+2\sqrt{2}} + \frac{1}{2\sqrt{2}+\sqrt{7}} + \frac{1}{\sqrt{7}+\sqrt{6}} + \frac{1}{\sqrt{6}+\sqrt{5}} + \frac{1}{\sqrt{5}+2} + \frac{1}{2+\sqrt{3}}$$
(The word conjugate comes from Latin and means (literally) "to yoke together", and the idea behind the word is that things that are conjugate are somehow bound to each other.)
A rash application of the technique would work, but in a tedious way. Let's spend time with the expression, and we see something special, a pattern: $3 = \sqrt{9}$, $2\sqrt{2} = \sqrt{8}$ and $2 = \sqrt{4}$. So, we rewrite the expression as
$$S = \frac{1}{\sqrt{9}+\sqrt{8}} + \frac{1}{\sqrt{8}+\sqrt{7}} + \frac{1}{\sqrt{7}+\sqrt{6}} + \frac{1}{\sqrt{6}+\sqrt{5}} + \frac{1}{\sqrt{5}+\sqrt{4}} + \frac{1}{\sqrt{4}+\sqrt{3}}$$
Now, we apply the trick to, say, $1/(\sqrt{9}+\sqrt{8})$ and get the nice result $\sqrt{9} - \sqrt{8}$. Doing the same for the other terms gives us:
$$S = \sqrt{9}-\sqrt{8}+\sqrt{8}-\sqrt{7}+\sqrt{7}-\sqrt{6}+\sqrt{6}-\sqrt{5}+\sqrt{5}-\sqrt{4}+\sqrt{4}-\sqrt{3} = 3 - \sqrt{3}$$
where all terms, except the first and last, cancel, leaving us the neat final result $3 - \sqrt{3}$.
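A quick numerical sketch confirms the cancellation (every denominator has the form $\sqrt{k+1} + \sqrt{k}$ for $k = 3, \ldots, 8$):

```python
# Sum the six terms 1/(sqrt(k+1) + sqrt(k)), k = 3..8; after
# rationalizing, each equals sqrt(k+1) - sqrt(k), so S = 3 - sqrt(3).
from math import sqrt, isclose

S = sum(1 / (sqrt(k + 1) + sqrt(k)) for k in range(3, 9))
assert isclose(S, 3 - sqrt(3))
print(S)
```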
This is called a telescoping sum, and we see this kind of sum again and again in mathematics,
for instance in Section 2.18.4. The name comes from the old collapsible telescopes you see in
pirate movies, the kind of spyglass that can be stretched out or contracted at will. The analogy
is that the original sum appears in its stretched form, and it can be telescoped down to a much more
compact expression.
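The telescoping result is easy to check numerically. Here is a small Python sketch (my own check, not part of the original text):

```python
from math import sqrt, isclose

# The six denominators of S as they appear in the problem
# (3, 2*sqrt(2) and 2 are sqrt(9), sqrt(8) and sqrt(4) in disguise).
denominators = [
    3 + 2 * sqrt(2),
    2 * sqrt(2) + sqrt(7),
    sqrt(7) + sqrt(6),
    sqrt(6) + sqrt(5),
    sqrt(5) + 2,
    2 + sqrt(3),
]

S = sum(1 / d for d in denominators)

# The telescoping argument predicts S = sqrt(9) - sqrt(3) = 3 - sqrt(3).
print(S, 3 - sqrt(3))           # both print the same value, about 1.2679...
print(isclose(S, 3 - sqrt(3)))  # True
```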
Another common exercise is to simplify radicals. For example, what is $\sqrt{4+2\sqrt{3}}$? As we
know that the radicand should be a perfect square, we assume that $4+2\sqrt{3}=(a+\sqrt{3})^2$, and
we're going to find $a$:
$$4+2\sqrt{3}=(a^2+3)+2a\sqrt{3}$$
From that we have two equations by equating the rational and the irrational terms: $4=a^2+3$ and $2=2a$,
which gives us $a=1$. So $\sqrt{4+2\sqrt{3}}=1+\sqrt{3}$. This technique is called the method of
undetermined coefficients.
Now that you have the tool, let's simplify the following
$$\sqrt[6]{26+15\sqrt{3}}-\sqrt[6]{26-15\sqrt{3}}$$
The solution is based on the belief that the radicand must be a perfect square, i.e., it is of the
form $(\cdot)^2$. And when a radicand has four terms, we think of the identity
$(x+y+z)^2=x^2+y^2+z^2+2xy+2yz+2zx$, and this leads to a beautiful compact answer. Well, I leave the
details for you.
A common mistake is to cancel the $3x^2$:
$$\frac{3x^2+6x^4}{3x^2}=6x^4$$
The correct answer is $1+2x^2$. It is clear that $(6+3)/6$ is definitely not $3$! If you're not sure,
one example can clarify the confusion.
Another common mistake is this one:
$$\frac{\sqrt{3x^2+3x^4+3x}}{3x^2}=\frac{\sqrt{x^2+x^4+x}}{x^2}$$
Due to the square root in $\sqrt{3x}$, it is incorrect to cancel the 3 inside the square root. This is clear if
you think of the last term as $\sqrt{3}/3$, forgetting the $x$, and this is definitely not $1$!
$$\frac{a}{b}=\frac{a+b}{a}=\varphi \;\Longrightarrow\; \varphi=1+\frac{1}{\varphi} \quad\text{or}\quad \varphi^2-\varphi-1=0 \qquad (2.8.3)$$
The number $\varphi$ is irrational§. It exhibits many amazing properties. Euclid (325–265 B.C.) in his
classic book Elements gave the first recorded definition of $\varphi$. His own words are 'A straight
line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater
segment, so is the greater to the lesser'. The German astronomer and mathematician Johannes
Kepler once said ‘Geometry has two great treasures: one is the theorem of Pythagoras, the other
the division of a line into extreme and mean ratio. The first we may compare to a mass of gold,
the second we may call a precious jewel.’
Let's start with a square of any side, say x, then construct a rectangle by stretching the square
horizontally by a scale of $\varphi$ (what else?). What is obtained is a golden rectangle. If you put the
square over the rectangle so that the left edges are aligned, you get two areas following the
golden ratio (Fig. 2.12). For the right rectangle (which is also a golden rectangle), split it into
a square and a rectangle; then you get another rectangle, and repeat this infinitely. Starting from
the leftmost square, let's draw a circular arc, then another arc for the next square, etc. What you
obtain is a spiral which appears in nature again and again (Fig. 2.13).
The golden ratio appears in a pentagon as shown in Fig. 2.14. Assume that the sides of the
pentagon are one, and the diagonals are d. From the two similar triangles (shaded), one has
$CE=1/d$, and thus $1/d+1=d$: the short portion of the diagonal AE plus the longer portion
equals the diagonal itself. So, $d=\varphi$. The flake in Fig. 1.7 is also related to the golden ratio. It's
super cool, isn't it?
Figure 2.14: The ratio of a diagonal over a side of a pentagon is the golden ratio.
and all the irrational numbers, such as $\sqrt{2}$, $\varphi$ and so on. The adjective real in this context was
introduced in the 17th century by René Descartes, who distinguished between real and imaginary
roots of polynomials. The set of all real numbers is denoted by $\mathbb{R}$. To do arithmetic with real
numbers, we use the following axioms (accepted on faith) for $a, b, c$ being real numbers:
We use these axioms all the time without realizing that we are actually using them. As an
example, below are two results which are derived from the above axioms:
$$-a=(-1)a,\qquad -(-a)=+a=a,\qquad -(a-b)=-a+b \qquad (2.8.6)$$
The third is known as a rule saying that if a bracket is preceded by a minus sign, change positive
signs within it to negative and vice-versa when removing the bracket.
Always use one example to check: $-(5-2)$, which is $-3$, is equal to $-5+2$, which is $-3$. So the rule is ok.
$$\begin{aligned}
-a&=-a+0 &&(a\cdot 0=0)\\
-a&=-a+0\cdot a &&\text{(Axiom 5)}\\
-a&=-a+(1+(-1))\cdot a &&\text{(Axiom 6)}\\
-a&=-a+a+(-1)\cdot a &&\text{(Axiom 9)}\\
-a&=(-1)\cdot a &&\text{(Axiom 6)}
\end{aligned}$$
With that result, it is not hard to get $-(-a)=(-1)(-a)=(-1)(-1)(a)=a$. For $-(a-b)=-a+b$, we do:
$$\begin{aligned}
-(a-b)&=(-1)(a-b) &&\text{(just proved)}\\
&=(-1)a+(-1)(-b) &&\text{(Axiom 9)}\\
&=-a+(-1)(-b) &&\text{(just proved)}\\
&=-a+b &&((-c)(-d)=cd)
\end{aligned}$$
I did not prove $(-c)(-d)=cd$, but it is reasonable given the fact that we have proved
$(-1)(-1)=+1$.
You might be thinking: are mathematicians crazy? About these proofs of obvious things,
George Pólya once said
Mathematics consists of proving the most obvious thing in the least obvious way.
(George Pólya)
But why did they have to do that? The answer is simple: to make sure the axioms selected are
minimal and yet sufficient to provide a foundation for the theory they're trying to build.
Definition 2.9.1
The Fibonacci sequence starts with 1, 1 and the next number is found by adding up the two
numbers before it:
$$F_n=F_{n-1}+F_{n-2},\quad n\ge 2 \qquad (2.9.1)$$
Table 2.4: Ratios of two consecutive Fibonacci numbers approach the golden ratio $\varphi$.

  n     F_n       F_{n+1}/F_n
  2     2         -
  3     3         1.50000000
  4     5         1.66666667
  ...   ...       ...
  19    6765      -
  20    10946     1.61803400
  21    28657     1.61803399
In a program, we define a function and, within its definition, we call the function itself: this is recursion.
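A minimal Python sketch of that idea (my own illustration, not from the text), using the book's convention $F_0=F_1=1$:

```python
def fib(n):
    """Fibonacci numbers with F_0 = F_1 = 1, as in Eq. (2.9.1)."""
    if n < 2:                        # base cases: the two starting 1's
        return 1
    return fib(n - 1) + fib(n - 2)   # the function calls itself: recursion

print([fib(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
print(fib(20) / fib(19))           # about 1.618..., close to the golden ratio
```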
Call the limiting ratio of consecutive Fibonacci numbers x, so that $F_{n+1}\approx xF_n$ and
$F_{n+2}\approx x^2F_n$. As $F_{n+2}=F_n+F_{n+1}$:
$$F_n+F_{n+1}=x^2F_n,\qquad F_n+xF_n=x^2F_n$$
Now, divide the last equation by $F_n$ and we get $x^2=x+1$: the same quadratic equation that
the golden ratio satisfies. That is why the ratio of consecutive Fibonacci numbers approaches the golden
ratio.
There exists another interesting relation between the golden ratio and Fibonacci numbers; it
is possible to express the powers of the golden ratio in terms of $a+b\varphi$ where $a, b$ are certain
Fibonacci numbers. The procedure is as follows:
$$\begin{aligned}
\varphi^2&=1+\varphi\\
\varphi^3&=\varphi\varphi^2=\varphi(1+\varphi)=\varphi+\varphi^2=1+2\varphi\\
\varphi^4&=\varphi\varphi^3=\varphi(1+2\varphi)=\varphi+2(1+\varphi)=2+3\varphi\\
\varphi^5&=3+5\varphi
\end{aligned}\qquad (2.9.2)$$
That is, starting with $\varphi^2=1+\varphi$, which is just the definition of $\varphi$, we raise the exponent by one
to get $\varphi^3$, and replace $\varphi^2$ by $1+\varphi$. Then, we use $\varphi^3$ to get the fourth power, and so on. The
expression for $\varphi^5$ was not obtained by detailed calculations, but by guessing; again we believe
the pattern we are seeing: the coefficients of the powers of the golden ratio are the Fibonacci
numbers. In general, we can write:
$$\varphi^n=F_{n-2}+F_{n-1}\varphi \qquad (2.9.3)$$
Note that the equation $\varphi=1+1/\varphi$ has two solutions; one is $\varphi$ and the other is $\psi=
\frac{1}{2}(1-\sqrt{5})$, and these two solutions are linked together by $\varphi\psi=-1$. That is, the negative
solution is $-1/\varphi$. If we have Eq. (2.9.3) for $\varphi$, should we also have something similar for $-1/\varphi$,
the other golden ratio? We follow the same procedure done in Eq. (2.9.2). As $-1/\varphi$ is a solution
to $x=1+1/x$, we have
$$-\frac{1}{\varphi}=1-\varphi$$
Squaring both sides of this, and using $\varphi^2=1+\varphi$ and $\varphi=1+1/\varphi$:
$$\begin{aligned}
\left(-\frac{1}{\varphi}\right)^2&=(1-\varphi)^2=2-\varphi=1-\frac{1}{\varphi}\\
\left(-\frac{1}{\varphi}\right)^3&=(1-\varphi)(2-\varphi)=3-2\varphi=1-\frac{2}{\varphi}\\
\left(-\frac{1}{\varphi}\right)^4&=(1-\varphi)(3-2\varphi)=5-3\varphi=2-\frac{3}{\varphi}\\
\left(-\frac{1}{\varphi}\right)^5&=(1-\varphi)(5-3\varphi)=8-5\varphi=3-\frac{5}{\varphi}
\end{aligned}$$
In all the final equalities, we have used $\varphi=1+1/\varphi$ so that the final expressions are written in
terms of $1/\varphi$. Now, we're ready to have the following
$$\left(-\frac{1}{\varphi}\right)^n=F_{n-2}-\frac{F_{n-1}}{\varphi} \qquad (2.9.4)$$
Now comes a nice formula for the Fibonacci sequence, a direct formula, not a recursive one. If
we combine Eqs. (2.9.3) and (2.9.4) we have
$$\left.\begin{aligned}
\varphi^n&=F_{n-2}+F_{n-1}\varphi\\
\left(-\frac{1}{\varphi}\right)^n&=F_{n-2}-\frac{F_{n-1}}{\varphi}
\end{aligned}\right\}
\;\Longrightarrow\;
\varphi^n-\left(-\frac{1}{\varphi}\right)^n=F_{n-1}\left(\varphi+\frac{1}{\varphi}\right)$$
And thus (because $\varphi+1/\varphi=\sqrt{5}$)
$$F_{n-1}=\frac{1}{\sqrt{5}}\left[\varphi^n-\left(-\frac{1}{\varphi}\right)^n\right],\qquad F_n=\frac{1}{\sqrt{5}}\left[\varphi^{n+1}-\left(-\frac{1}{\varphi}\right)^{n+1}\right] \qquad (2.9.5)$$
This equation is now referred to as Binet's formula in honor of the French mathematician,
physicist and astronomer Jacques Philippe Marie Binet (1786–1856), although the same result
was known to Abraham de Moivre a century earlier.
We have one question for you: in Eq. (2.9.5), $\varphi=0.5(1+\sqrt{5})$ is an irrational number, yet
$F_n$ is always a whole number. How is that possible?
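A quick numerical check of Eq. (2.9.5) (my own sketch, not from the text): the irrational parts really do cancel, and whole numbers come out.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def binet(n):
    """F_n from Eq. (2.9.5), with the convention F_0 = F_1 = 1."""
    return (phi ** (n + 1) - (-1 / phi) ** (n + 1)) / sqrt(5)

# Each value is a whole number up to floating point rounding.
for n in range(10):
    print(n, binet(n), round(binet(n)))
```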
The purpose of this section was to present something unexpected in mathematics. Why on
earth is the golden ratio (which seems to belong to geometry) related to a bunch of numbers
coming from the sky like the Fibonacci numbers? But there is more. Eq. (2.9.1) is now referred
to as a difference equation or recurrence equation. And similar equations appear again and again
in mathematics (and in science); for example in probability, as discussed in Section 5.8.7.
$$\frac{45}{16}=2+\frac{13}{16}=2+\frac{1}{16/13}=2+\cfrac{1}{1+\cfrac{3}{13}}=2+\cfrac{1}{1+\cfrac{1}{4+\cfrac{1}{3}}}$$
Ok, so a continued fraction is just another way to write a number. What's special? Let's explore
more. How about an irrational number, like $\sqrt{2}$? How to express the square root of 2 as a
continued fraction? Of course we start by writing $\sqrt{2}=1+\cdots$:
$$\sqrt{2}=1+(\sqrt{2}-1)=1+\frac{1}{1+\sqrt{2}}\qquad\text{because } (\sqrt{2}+1)(\sqrt{2}-1)=1$$
Now, we replace $\sqrt{2}$ in the fraction, i.e., in $1/(1+\sqrt{2})$, by the above equation, and doing so gives
us:
$$\sqrt{2}=1+\frac{1}{1+\sqrt{2}}=1+\cfrac{1}{2+\cfrac{1}{1+\sqrt{2}}}=1+\cfrac{1}{2+\cfrac{1}{2+\cfrac{1}{2+\cdots}}} \qquad (2.10.1)$$
We got an infinite continued fraction. Note that for 45/16, a rational number, we got a finite
continued fraction.
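Truncating the infinite continued fraction (2.10.1) gives better and better rational approximations of $\sqrt{2}$. A small sketch with exact fractions (my own check, not part of the text):

```python
from fractions import Fraction

def sqrt2_convergent(depth):
    """Evaluate 1 + 1/(2 + 1/(2 + ...)) from Eq. (2.10.1),
    truncated after `depth` levels of 2's."""
    x = Fraction(2)
    for _ in range(depth - 1):
        x = 2 + 1 / x
    return 1 + 1 / x

for d in (1, 2, 3, 8):
    c = sqrt2_convergent(d)
    print(c, float(c))   # 3/2, 7/5, 17/12, ... approaching 1.41421...
```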
Using the same idea, we can write the golden ratio as an infinite continued fraction
$$\varphi=1+\frac{1}{\varphi}\;\Longrightarrow\;\varphi=1+\cfrac{1}{1+\cfrac{1}{\varphi}}=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}} \qquad (2.10.2)$$
http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/cfINTRO.html.
And as $\varphi=0.5(1+\sqrt{5})$, we get this beautiful equation:
$$\frac{1+\sqrt{5}}{2}=1+\cfrac{1}{1+\cfrac{1}{1+\cdots}} \qquad (2.10.3)$$
And that is not the end. You have probably seen this: $0.5(1+\sqrt{5})=\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}$.
Here is why:
$$\varphi=1+\frac{1}{\varphi}\;\Longrightarrow\;\varphi^2=1+\varphi\;\Longrightarrow\;\varphi=\sqrt{1+\varphi}\;\Longrightarrow\;\varphi=\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}} \qquad (2.10.4)$$
Fixed point iterations. Now, we're going to compute $\varphi$ using its definition: $\varphi=1+1/\varphi$.
Definition 2.10.1
A fixed point $x^*$ of a function $f(x)$ is such a point that $x^*=f(x^*)$.
In Section 3.15.1, we will see that it was the 10th century Islamic mathematical genius Al-
Biruni who, in an attempt to measure the earth's circumference, developed this technique of fixed
point iterations. He needed it to solve a cubic equation whose solution was not available at
his time. A geometric illustration of a fixed point is shown in Fig. 2.16. Among other things,
this concept can be used to solve equations $g(x)=0$. First, we rewrite the equation in the form
$x=f(x)$; then, starting with $x_0$, we compute a sequence $(x_n)=(x_1,x_2,\ldots,x_n)$ with $x_{n+1}=f(x_n)$.
As shown in Fig. 2.17a, the sequence $(x_n)$ converges to the solution $x^*$, if $x_0$ was chosen properly.
Starting from x0 , draw a vertical line that touches the curve y D f .x/, then go horizontally until
we get to the diagonal y D x. The x-coordinate of this point is x1 , and we repeat the process.
Fig. 2.17(b,c) shows the results of fixed point iterations for the function $y=2.8x(1-x)$. What
we are seeing is called a cobweb.
Figure 2.16: A fixed point of a function $f(x)$ is the intersection of the two curves $y=f(x)$ and $y=x$.
For now, a sequence is nothing but a list of numbers. In Section 2.19, we talk more about sequences.
Figure 2.17: Fixed point iterations for the function $y=2.8x(1-x)$.
We demonstrate how this fixed point iteration scheme works for the golden ratio $\varphi$. In
Table 2.5, we present the data obtained with $\varphi_{n+1}=1+1/\varphi_n$ for two starting points, $\varphi_0=1.0$
and $\varphi_0=-0.4$. Surprisingly, both converge to the same solution of 1.618. Thus, the
second, negative solution of $\varphi=1+1/\varphi$ escaped. In Fig. 2.18, we can see this clearly.
Table 2.5: Fixed point iterations $\varphi_{n+1}=1+1/\varphi_n$ for the two starting points.

  n     φ_{n+1} (φ_0 = 1.0)     φ_{n+1} (φ_0 = −0.4)
  1     2.0                     −1.5
  2     1.5                     0.3333333
  3     1.666666                3.9999999
  4     1.6                     1.25
  5     1.625                   1.8
  ...   ...                     ...
  19    1.618034                1.618034
  20    1.618034                1.618034
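The table is easy to reproduce with a few lines of Python (a sketch of my own, not from the book):

```python
def iterate(f, x0, n):
    """Fixed point iteration x_{k+1} = f(x_k); returns the whole sequence."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

f = lambda x: 1 + 1 / x   # a fixed point of f satisfies phi = 1 + 1/phi

print(iterate(f, 1.0, 5))    # 1.0, 2.0, 1.5, 1.666..., 1.6, 1.625 (Table 2.5, left)
print(iterate(f, -0.4, 5))   # -0.4, -1.5, 0.333..., 4.0, 1.25, 1.8 (Table 2.5, right)
print(iterate(f, 1.0, 20)[-1])   # about 1.618034 from either start
```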
There are many questions remaining to be asked regarding this fixed point method. For example,
for what functions does the method work, and can we prove (to be 100% certain) that the
sequence $(x_n)$ converges to the solution? To answer these questions we need calculus, and thus
we postpone the discussion to Section 4.11.1.
where c represents the length of the hypotenuse and a and b the lengths of the triangle’s other
two sides (Fig. 2.19). The theorem, whose history is the subject of much debate, is named for
the ancient Greek thinker Pythagoras.
Figure 2.19: Pythagorean theorem. The sum of the areas of the two squares on the legs (a and b) equals
the area of the square on the hypotenuse (c).
The theorem has been given numerous proofs – possibly the most for any mathematical
theorem. We present one proof in Fig. 2.20, and we recommend that young students prove this
theorem in as many ways as possible.
Plimpton 322 is a Babylonian clay tablet (Fig. 2.21), believed to have been written about
1800 BC, which has a table of four columns and 15 rows of numbers in the cuneiform script of the
period. This table lists two of the three numbers in what are now called Pythagorean triples.
How to generate Pythagorean triples? There is more than one way, and surprisingly, using
complex numbers is one of them. If you need a brief recall on complex numbers, see Section 2.23.
Let's start with a complex number $z=u+vi$ where $u, v$ are positive integers and $i$ is the number
such that $i^2=-1$. Its modulus is $|z|=\sqrt{u^2+v^2}$. The key point is that the modulus of the
square of $z$ is $u^2+v^2$, which is an integer. So, let's compute $z^2$ and its modulus:
$$z^2=(u+vi)^2=u^2-v^2+2uvi \;\Longrightarrow\; |z^2|=\sqrt{(u^2-v^2)^2+(2uv)^2}=u^2+v^2 \qquad (2.11.2)$$
which indicates that $(u^2-v^2)^2+(2uv)^2=(u^2+v^2)^2$. Thus, the triple $(u^2-v^2, 2uv, u^2+v^2)$ is
a Pythagorean triple! We are going to compute some Pythagorean triples using this, and Table 2.6
presents the result.
  (u, v)    (u² − v², 2uv, u² + v²)
  (2, 1)    (3, 4, 5)
  (4, 2)    (12, 16, 20)
  (3, 2)    (5, 12, 13)
  (4, 3)    (7, 24, 25)
  (5, 4)    (9, 40, 41)

Note that the triples (3, 4, 5) and (12, 16, 20) are related; the latter can be obtained by mul-
tiplying the former by 4. The corresponding right triangles are similar. Generally, if we take
a Pythagorean triple (a, b, c) and multiply it by some other number d, then we obtain a new
Pythagorean triple (da, db, dc). This leads to the so-called primitive Pythagorean triples in
which a; b; c have no common factors. A common factor of a; b and c is a number d so that
each of a, b and c is a multiple of d. For example, 3 is a common factor of 30, 42, and 105, since
$30=3\cdot 10$, $42=3\cdot 14$, and $105=3\cdot 35$, and indeed it is their largest common factor. On
the other hand, the numbers 10, 12, and 15 have no common factor (other than 1).
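The complex-number recipe above is easily turned into code. A short Python sketch of my own (not part of the text), reproducing Table 2.6:

```python
def triple(u, v):
    """Pythagorean triple (u^2 - v^2, 2uv, u^2 + v^2) from z = u + vi, u > v > 0."""
    z = complex(u, v)
    z2 = z * z                      # z^2 = (u^2 - v^2) + 2uv i
    a, b = int(z2.real), int(z2.imag)
    c = u * u + v * v               # |z|^2, which equals |z^2|
    assert a * a + b * b == c * c   # it really is a Pythagorean triple
    return a, b, c

for u, v in [(2, 1), (4, 2), (3, 2), (4, 3), (5, 4)]:
    print((u, v), triple(u, v))
# (2,1)->(3,4,5), (4,2)->(12,16,20), (3,2)->(5,12,13), (4,3)->(7,24,25), (5,4)->(9,40,41)
```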
If (a, b, c) is a primitive Pythagorean triple, it can be shown that
$$a^2+b^2=c^2 \;\Longrightarrow\; b^2=(c-a)(c+a) \qquad (2.11.3)$$
As both a and c are odd, their sum and difference are even. Thus, we can write $c-a=2m$,
$c+a=2n$. Eq. (2.11.3) becomes
or in general, any power higher than the second, into two like powers.” He also wrote that “I
have discovered a truly marvelous proof of this proposition which this margin is too narrow to
contain.” This habit of not revealing his calculations or the proofs of his theorems frustrated his
adversaries: Descartes came to call him a “braggart”, and the Englishman John Wallis referred
to him as "that damn Frenchman". Related to this, there is a story about another mathematician
that goes like this
How can we solve this? Some hints: (1) a and b are symmetric, so if (a, b) is a solution, so
is (b, a); (2) usually squaring is used to get rid of square roots. But we have to first isolate $\sqrt{a}$
before squaring:
$$\sqrt{a}=\sqrt{2009}-\sqrt{b}$$
$$a=2009+b-2\sqrt{2009b} \;\Longrightarrow\; \sqrt{2009b}=c,\quad c\in\mathbb{N}$$
$$7\sqrt{41b}=c \;\Longrightarrow\; b=41m^2$$
The reason for the last step is that only the square root of a perfect square is a natural number:
as $2009=49\cdot 41$, $\sqrt{41b}$ is a natural number only when $b=41m^2$, where $m\in\mathbb{N}$ (this is similar to writing "m is a
natural number", but shorter; we will discuss this notation later). Since a and b are playing
the same role, we also have $a=41n^2$, $n\in\mathbb{N}$. With these findings, Eq. (2.11.5) becomes:
$$n\sqrt{41}+m\sqrt{41}=7\sqrt{41} \;\Longrightarrow\; n+m=7$$
It is interesting that the scary looking equation Eq. (2.11.5) is equivalent to this easy equation
$n+m=7$, which can be solved by kids of seven years old and above by a crude method:
Table 2.7: Solutions to $\sqrt{a}+\sqrt{b}=\sqrt{2009}$.
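The crude method (just trying all pairs with $n+m=7$) is a one-liner in Python; this sketch of mine also double-checks that each pair really solves the original equation (I include n = 0, which is valid if 0 counts as a natural number):

```python
from math import sqrt, isclose

# From n + m = 7 with a = 41 n^2 and b = 41 m^2.
solutions = [(41 * n ** 2, 41 * (7 - n) ** 2) for n in range(8)]

for a, b in solutions:
    assert isclose(sqrt(a) + sqrt(b), sqrt(2009))  # each pair solves the equation

print(solutions)   # includes (0, 2009), (41, 1476), (164, 1025), ...
```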
If we’re skillful enough and lucky –if we transform the equations in just the right way– we
can get them to reveal their secrets. And things become simple. Creativity is required, because
it often isn’t clear which manipulations to perform.
Herein x is called the unknown of the equation. Our job is to solve the equation: find x such
that it satisfies the equation (makes the equation a true statement). For example, $x=5/2$ is the
solution to the first equation, for $2(5/2)=5$.
which is the mathematical expression of the fact that the two planes have traveled a total distance
of 2400 miles. To solve this linear equation, we simply massage it so that x is isolated
on one side of the equality symbol: $x=\cdots$ So, we do (the algebra is based on the arithmetic
rules stated in Eq. (2.1.2); there is nothing to memorize here!)
$$5x+5(x-60)=2400 \iff 10x-300=2400 \iff 10x=2400+300 \iff x=2700/10=270$$
The equation $ax+b=0$ is called a linear equation, for if we plot the function $y=ax+b$ on the Cartesian
plane we get a line. Thus the solution to this equation is the intersection of two lines: $y=ax+b$ and $y=0$, which
is the x axis.
Thus, the speed of one plane is 270 miles per hour and the speed of the other plane is 210 miles
per hour. In the above equation, the symbol $\iff$ means 'equivalent'; that is, the two sides of this
symbol are equivalent statements; sometimes another symbol is used for the same purpose.
Usually solving a linear equation in x is straightforward, but the following equation looks
hard:
$$x+\frac{x}{1+2}+\frac{x}{1+2+3}+\cdots+\frac{x}{1+2+3+\cdots+4021}=4021$$
The solution is $x=2011$. If you cannot solve it, look at the denominators (they all have the same
form, $1+2+\cdots+k=k(k+1)/2$) and ask yourself what that form is.
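As a sanity check of my own (not from the text): with exact fractions, the coefficient of x telescopes to 4021/2011, so the solution comes out exactly.

```python
from fractions import Fraction

# Coefficient of x: sum of 1/(1+2+...+k) for k = 1..4021,
# where 1+2+...+k = k(k+1)/2, so each term is 2/(k(k+1)).
coeff = sum(Fraction(2, k * (k + 1)) for k in range(1, 4022))

x = Fraction(4021) / coeff   # solve coeff * x = 4021
print(coeff, x)              # 4021/2011 2011
```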
time by Babylonian mathematicians around 500 BC. By considering all the terms in the
equation as areas of some rectangles (see next figure), we see that the area of the biggest
square, of side $x+5$, equals the sum of the areas of the smaller rectangles. This leads us to write
$$(x+5)^2=x^2+10x+25=39+25=64=8^2$$
[Figure: parabolas $y=x^2-4x+3$ and $y=u^2-1$.]

Quadratic equations in disguise. Many equations are actually quadratic equations in disguise.
To demonstrate unexpected things in maths, let's consider this equation:
$$\sqrt{5-x}=5-x^2$$
To remove the square root, we follow the old rule: squaring both sides of the equation:
$$5-x=25-10x^2+x^4$$
Oops! We've got a quartic equation! Now comes the magic of maths; when I first saw this it was
like magic. Instead of seeing the equation as a quartic equation in terms of x, how about seeing
it as a quadratic equation in terms of 5? With that in mind, we re-write the equation as
$$5^2-(1+2x^2)\cdot 5+(x^4+x)=0$$
$$ax^3+bx^2+cx+d=0,\qquad a\ne 0 \qquad (2.12.2)$$
The condition $a\ne 0$ is needed, for otherwise Eq. (2.12.2) becomes a quadratic equation (supposing
that $b\ne 0$). As we can always divide Eq. (2.12.2) by a, it suffices to consider the following
cubic equation
$$x^3+bx^2+cx+d=0 \qquad (2.12.3)$$
It turned out that solving a full cubic equation Eq. (2.12.3) was not easy. So, in 1545, the Italian
mathematician Gerolamo Cardano (1501–1576) presented a solution to the following depressed
cubic equation (it is always possible to convert a full cubic equation to a depressed one by
using the change of variable $x=u-b/3$ to get rid of the quadratic term):
$$x^3+px=q \qquad (2.12.4)$$
As Eq. (2.12.5) was successfully used to solve many depressed cubic equations, it was perplexing
that for Eq. (2.12.6) it involves the square root of a negative number, i.e., $\sqrt{-121}$.
So, Cardano stopped there, and it took almost 30 years for someone to make progress. It was
Rafael Bombelli (1526–1572), another Italian, who in 1572 examined Eq. (2.12.7). He knew
that $x=4$ is a solution to Eq. (2.12.6). Thus, he set out to check the validity of the following
identity
$$\sqrt[3]{2+\sqrt{-121}}+\sqrt[3]{2-\sqrt{-121}}\overset{?}{=}4 \qquad (2.12.8)$$
Note that the b, c, d in ?? are different from those in Eq. (2.12.2).
Again, calculus helps to understand this change of variable: $x=-b/3$ is the x-coordinate of the inflection
point of the cubic curve $y=x^3+bx^2+cx+d$. Note, however, that at the time of Cardano calculus had not yet
been invented. But with the success of reducing a quadratic equation to the form $u^2-d=0$, mathematicians were
confident that they should be able to do the same for the cubic equation.
where the LHS is the solution given by the cubic formula (if that formula is correct) and 4 is the true solution. In the
process, he accepted the square root of negative numbers and treated it as an ordinary number.
In his own words, it was a wild thought, as he had no idea what $\sqrt{-121}$ was. He computed the term
$(2+\sqrt{-1})^3$ as
$$(2+\sqrt{-1})^3=8+3(2)^2\sqrt{-1}+3(2)(\sqrt{-1})^2+(\sqrt{-1})^3=8+12\sqrt{-1}-6-\sqrt{-1}=2+11\sqrt{-1}=2+\sqrt{-121} \qquad (2.12.9)$$
Thus, he knew $\sqrt[3]{2+\sqrt{-121}}=2+\sqrt{-1}$. Similarly, he also had $\sqrt[3]{2-\sqrt{-121}}=2-\sqrt{-1}$.
Plugging these into Eq. (2.12.7) indeed gave him four (his intuition was correct):
$$x=\sqrt[3]{2+\sqrt{-121}}+\sqrt[3]{2-\sqrt{-121}}=4 \qquad (2.12.10)$$
Remark 1. Knowing one solution $x=4$, it is straightforward to find the other solutions using
the factorization
$$x^3-15x-4=0 \iff (x-4)(x^2+4x+1)=0$$
If you're not sure of this factorization, please refer to Section 2.28.2. The other solutions can be
found by solving the quadratic equation $x^2+4x+1=0$. That's why we only need to find one
solution to the cubic equation.
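Bombelli's wild thought is routine for Python's built-in complex numbers. A sketch of my own (not from the text):

```python
u = complex(2, 1)            # Bombelli's guess: 2 + sqrt(-1)
print(u ** 3)                # (2+11j), i.e. 2 + sqrt(-121): the guess works

x = u + u.conjugate()        # the sum of the two cube roots in Eq. (2.12.10)
print(x.real)                # 4.0
print(x.real ** 3 - 15 * x.real - 4)   # 0.0, so x = 4 solves x^3 = 15x + 4
```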
del Ferro's method to solve the depressed cubic equation. For unknown reasons, he considered
a solution of the form $x=u+v$. Putting this into the depressed cubic equation, we get:
$$u^3+v^3+(3uv+p)(u+v)=q$$
He needed another equation (as there are two unknowns), so he considered $3uv+p=0$, or
$v=-p/3u$. With this, the above equation becomes $u^3+v^3=q$, or
$$u^3-\frac{p^3}{27u^3}=q$$
It follows that $x=a\cos\theta$ where $a$ and $\theta$ are functions of $p, q$. Substituting this form of x into
the cubic equation we obtain
$$\cos^3\theta=-\frac{p}{a^2}\cos\theta+\frac{q}{a^3} \qquad (2.12.12)$$
Comparing this with the trigonometric identity $\cos^3\theta=\frac{3}{4}\cos\theta+\frac{1}{4}\cos(3\theta)$, which holds
for any $\theta$, we get the following system of equations to solve for $a$ and $\theta$ in terms of $p$ and $q$:
$$-\frac{p}{a^2}=\frac{3}{4},\qquad \frac{q}{a^3}=\frac{1}{4}\cos(3\theta)
\;\Longrightarrow\; a=\frac{2\sqrt{-3p}}{3},\qquad \theta=\frac{1}{3}\cos^{-1}\frac{3\sqrt{3}\,q}{2(-p)\sqrt{-p}} \qquad (2.12.13)$$
Thus, the final solution is
$$x=\frac{2\sqrt{-3p}}{3}\cos\left(\frac{1}{3}\cos^{-1}\frac{3\sqrt{3}\,q}{2(-p)\sqrt{-p}}\right) \qquad (2.12.14)$$
Does Viète's solution work for the case $p=-15$ and $q=4$ (the one that caused trouble with
Cardano's solution)? Using Eq. (2.12.14) with $p=-15$ and $q=4$, we get
$$x=2\sqrt{5}\cos\left(\frac{1}{3}\cos^{-1}\frac{2}{5\sqrt{5}}\right) \qquad (2.12.15)$$
which can be evaluated using a computer (or calculator) to give 4 (with an angle of 1.3909428270).
Note that this equation also gives the other two roots $-3.73205$ (using the angle $1.3909428270+2\pi$)
and $-0.267949$ (using the angle $1.3909428270+4\pi$). And there is no $\sqrt{-1}$ involved! What does this
tell us? The same thing (i.e., the square root of a negative number) can be represented by i and
by cosine/sine functions. Thus, there must be a connection between i and sine/cosine. We shall
see this connection later.
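Here is a small Python sketch of my own implementing Eq. (2.12.14) with all three angles, to confirm the numbers quoted above:

```python
from math import sqrt, cos, acos, pi

def viete_roots(p, q):
    """Three real roots of x^3 + p x = q via Eq. (2.12.14).
    Assumes p < 0 and three real roots (the casus irreducibilis)."""
    a = 2 * sqrt(-3 * p) / 3
    theta = acos(3 * sqrt(3) * q / (2 * (-p) * sqrt(-p))) / 3
    return [a * cos(theta + 2 * pi * k / 3) for k in range(3)]

roots = viete_roots(-15, 4)   # x^3 - 15x = 4, the equation that troubled Cardano
print(roots)                  # about [4.0, -3.732..., -0.2679...]
```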
Seeing how Viète solved the cubic equation, we can unlock de Ferro's solution. de Ferro
used the identity $(u+v)^3=u^3+v^3+3u^2v+3uv^2$. We put this identity and the depressed
cubic equation together:
$$(u+v)^3=3uv(u+v)+u^3+v^3$$
$$x^3=-px+q$$
So, with $x=u+v$ we obtain from the depressed cubic equation $(u+v)^3=-p(u+v)+q$.
Comparing this with the identity, we then get two equations to solve for u and v:
$$p=-3uv,\qquad q=u^3+v^3$$
And voilà! We now understand the solution of de Ferro. Obviously the algebra of his solution
is easy; what is hard is to think of that identity $(u+v)^3=u^3+v^3+3u^2v+3uv^2$ in the first
place.
$$x^3-5x=6$$
Viète could treat the general cubic equation
$$A^3+pA=q$$
where p and q are constants. Note that Viète's version of algebra was still cumbersome
and wordy, as he wrote 'D in R - D in E aequabitur A quad' for $DR-AE=A^2$ in our
notation.
2.13 Factorization
I have discussed a bit about factorization when presenting the identity $a^2-b^2=(a-b)(a+b)$.
Herein, we delve into this topic with more depth. Recall that factorization or factoring consists
of writing a number or another mathematical object as a product of several factors, usually
smaller or simpler objects of the same kind. Factorization was first considered by ancient Greek
mathematicians in the case of integers. They proved the fundamental theorem of arithmetic,
which asserts that every positive integer may be factored into a product of prime numbers, which
cannot be further factored into integers greater than one. For example,
$$48=16\cdot 3=2\cdot 2\cdot 2\cdot 2\cdot 3$$
Then came the systematic use of algebraic manipulations for simplifying expressions (more
specifically equations), dating to the 9th century, with al-Khwarizmi's book The Compendious Book
on Calculation by Completion and Balancing.
The following identities are useful for factorization:
In using these identities, we need to see 1 as $1^2$ or $1^3$; then the identity appears. For example,
$a^3-1$ is $a^3-1^3=(a-1)(a^2+a+1)$. This is similar to trigonometry, where we see 1 as
$\sin^2 x+\cos^2 x$.
The first method for factorization is finding a common factor and using the distributive law
$a(b+c)=ab+ac$. For example,
$$6x^3y^2+8x^4y^3-10x^5y^3=2x^3y^2(3+4xy-5x^2y)$$
Then, factorizing each group, a common factor for the entire expression will show up.
In many cases, we have to look at the expressions carefully so that the identities in Section 4.12.2
appear. For example, let's simplify the following fraction
$$\frac{x^6+a^2x^3y}{x^6-a^4y^2}$$
We can process the numerator as $x^3(x^3+a^2y)$. As for the denominator, we should see it as
$(x^3)^2-(a^2y)^2$; then things become easy, as the denominator becomes $(x^3+a^2y)(x^3-a^2y)$.
And the fraction is simplified to $\dfrac{x^3}{x^3-a^2y}$.
The next exercise about factorization is the following expression:
$$A=\frac{a^3+b^3+c^3-3abc}{(a-b)^2+(b-c)^2+(c-a)^2}$$
Now we make some observations. First, the numerator is of degree three and the denominator
is of degree two. Second, the three variables a, b, c are symmetric. Thus, if that expression
can be factorized into a polynomial, it must be of this form
$$A=pa+qb+rc \;\Longrightarrow\; A=p(a+b+c)$$
The fact that $p=q=r$ stems from the symmetry of a, b, c. To find p, just use $b=c=0$ in
the original expression; we find that $p=0.5$. Thus, one answer might be:
$$A=\frac{a+b+c}{2}$$
And now we just need to check if
$$a^3+b^3+c^3-3abc=\frac{a+b+c}{2}\left[(a-b)^2+(b-c)^2+(c-a)^2\right]$$
And it is indeed the case. Thus, the answer is $0.5(a+b+c)$.
The above method is not the usual one presented in textbooks. Here is the textbook
method:
$$\begin{aligned}
(a^3+b^3)+c^3-3abc&=(a+b)^3-3ab(a+b)+c^3-3abc\\
&=[(a+b)^3+c^3]-3ab(a+b+c)\\
&=[(a+b)+c][(a+b)^2-(a+b)c+c^2]-3ab(a+b+c)\\
&=(a+b+c)(\ldots)
\end{aligned}$$
where in the third equality we have used the identity $x^3+y^3=(x+y)(x^2-xy+y^2)$. Now
you see why in the expression of A we must have the term $-3abc$, not $-4abc$ or anything else. It
must be $-3abc$, otherwise there is nothing to simplify!
Another powerful method to do factorization is to use the difference of squares identity, i.e.,
$X^2-Y^2=(X-Y)(X+Y)$. The thing is, we have to make the form $X^2-Y^2$,
called a difference of squares, appear. One way is to complete the square by adding zero to an expression.
For example, suppose that we need to factorize the following expression:
$$A=x^4+4$$
We add zero to it so that a square appears:
$$\begin{aligned}
A&=(x^2)^2+2^2+4x^2-4x^2\\
&=(x^2+2)^2-(2x)^2\\
&=(x^2+2+2x)(x^2+2-2x)
\end{aligned}$$
Let's solve one challenging problem, in which we will meet a female mathematician and an
identity attached to her name. The problem is: compute the following without a calculator:
$$A=\frac{(10^4+324)(22^4+324)\cdots(58^4+324)}{(4^4+324)(16^4+324)\cdots(52^4+324)}$$
Observe first that $324=4\cdot 81=4\cdot 3^4$. Then all terms in A have this form: $a^4+4b^4$ with
$b=3$. So, let's factorize $a^4+4b^4$:
$$\begin{aligned}
(a^2)^2+(2b^2)^2&=(a^2)^2+4a^2b^2+(2b^2)^2-4a^2b^2\\
&=(a^2+2b^2)^2-4a^2b^2\\
&=(a^2+2b^2+2ab)(a^2+2b^2-2ab)
\end{aligned}\qquad (2.13.2)$$
This identity is known as the Sophie Germain identity, named after the French mathematician,
physicist, and philosopher Marie-Sophie Germain (1776 – 1831). Despite initial opposition from
her parents and difficulties presented by society, she gained education from books in her father’s
library and from correspondence with famous mathematicians such as Lagrange, Legendre, and
Gauss (under the pseudonym of ’Monsieur LeBlanc’). Because of prejudice against her sex, she
was unable to make a career out of mathematics, but she worked independently throughout her
life. Before her death, Gauss had recommended that she be awarded an honorary degree, but
that never occurred!
Now A is making sense: in the above identity we have $a-6$ and $a+6$ (with $b=3$ the two
factors are $a(a+6)+18$ and $a(a-6)+18$), and note that the numbers in the numerator and
denominator in A differ by 6: 10 and 4, 22 and 16, etc. This means that there are many terms
that can be canceled. Indeed, with Eq. (2.13.3), all factors cancel except two:
$$A=\frac{58(58+6)+18}{4(4-6)+18}=\frac{3730}{10}=373$$
Why factorization? Because factored expressions are usually more useful than the correspond-
ing un-factored expressions. For example, we use factorization to simplify fractions. We use
factorization to solve equations. It is hard to see what the solutions of $x^3-6x^2+11x-6=0$ are,
but it is easy with $(x-1)(x-2)(x-3)=0$. Factors can also be helpful for checking expressions.
For instance, consider a triangle of sides a, b, c, whose area is denoted by A; then we have two
equivalent expressions for $16A^2$:
$$16A^2=2b^2c^2+2c^2a^2+2a^2b^2-a^4-b^4-c^4=(a+b+c)(a+b-c)(b+c-a)(c+a-b)$$
As we know, the triangle's area will be zero if $a+b=c$, and the factored expression for
$16A^2$ reveals this clearly while the un-factored expression does not. By the way, the factored
expression above is known as Heron's formula; see Eq. (4.3.1).
Manipulation of algebraic expressions is a useful skill which can be learned. Herein we dis-
cuss some manipulation techniques. An algebraic expression is an expression involving numbers,
parentheses, operation signs ($+,-,\times,\div,\sqrt{\ }$) and variables $a, b, x, y$. Examples of algebraic expres-
sions are: $3x+1$ and $5(x^2+3x)$. Note that the multiplication sign is omitted between letters
and between a number and a letter: so we write $2x$ instead of $2\times x$.
Consider this problem: given that the sum of a number and its reciprocal (i.e., its inverse) is
one, find the sum of the cube of that number and the cube of its reciprocal.
We can proceed as follows. Let's denote by x the number; we then have $x+1/x=1$.
Solving this quadratic equation we get $x=(1\pm i\sqrt{3})/2$. Now, to get $x^3+1/x^3$ we need to
compute $x^3$, which is $(1\pm i\sqrt{3})^3/8$, but that would be difficult. There should be a better way.
This is what we need
$$S=x^3+\frac{1}{x^3}$$
and we have $x+1/x=1$. Let's cube this and S will show up:
$$\left(x+\frac{1}{x}\right)^3=x^3+\frac{1}{x^3}+3x^2\frac{1}{x}+3x\frac{1}{x^2}$$
$$1=S+3\left(x+\frac{1}{x}\right)$$
$$1=S+3\cdot 1 \;\Longrightarrow\; S=-2$$
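We can double-check against the hard way, computing with the complex roots directly; a sketch of my own, not from the text:

```python
import cmath

# The two (complex) numbers with x + 1/x = 1
x1 = (1 + cmath.sqrt(-3)) / 2
x2 = (1 - cmath.sqrt(-3)) / 2

for x in (x1, x2):
    S = x ** 3 + 1 / x ** 3
    print(S)   # about -2 (a tiny imaginary part may remain from rounding)
```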
Let's consider another problem: given two real numbers $x\ne y$ that satisfy
$$x^2=17x+y,\qquad y^2=17y+x$$
what is the value of $S=\sqrt{x^2+y^2+1}$?
The problem is obviously symmetric, so we will perform symmetric operations: we sum
the two given equations, and we subtract the second from the first one:
$$x^2+y^2=18(x+y),\qquad x^2-y^2=16(x-y) \qquad (2.13.4)$$
$$(x^2+y^2)(x^2-y^2)=(16)(18)(x^2-y^2)$$
Thus, $x^2+y^2=(16)(18)=288$ and $S=\sqrt{(16)(18)+1}=\sqrt{289}=17$. Another way (a bit slower) is to solve for $x+y$ from the second
equation of Eq. (2.13.4), and then put it into the first to solve for $x^2+y^2$.
Figure 2.22: Alice, Bob and Charlie pouring concrete into a container.
Let's denote by A, B and C the volume of concrete (in m³) that Alice, Bob and Charlie can
pour into the container within one hour. With this, it is straightforward to translate the sentence 'to
complete a job, it takes Alice and Bob 2 hours' to $2A+2B=100$. So, we have this system of
equations
$$2A+2B=100,\qquad 3A+3C=100,\qquad 4B+4C=100 \qquad (2.14.1)$$
We have a system of three linear equations; that is why we call it a system of linear equations. The solution of this system is the three numbers $A, B, C$ that, when substituted into the system, give true statements. How are we going to solve it? We know how to solve $ax + b = 0$, so the plan is to eliminate two unknowns so that we're left with only one. To eliminate two unknowns, we first eliminate one. To do that we can use any equation, e.g. $B + C = 25$, and write the to-be-removed unknown in terms of the other: for instance $C = 25 - B$. Now $C$ is gone.
We can start by removing any unknown; I start with $C$: from the third equation, we get $C = 25 - B$; putting it into the second equation, we get $3A - 3B = 25$. This and the first equation form the new system (with only two unknowns $A, B$) that we need to solve. We do the same thing again: from $2A + 2B = 100$ we get $B = 50 - A$ (i.e., we're removing $B$); putting that into $3A - 3B = 25$ gives $A = 175/6$. Now we go backward to solve for $B$ and for $C$. Altogether, the solution is $A = 175/6$, $B = 125/6$ and $C = 25/6$. Then, the time $t$ for all three people working together satisfies $(A + B + C)t = 100$. Thus,
$$t = \frac{100}{A + B + C} = \frac{24}{13} \text{ hours} \qquad (2.14.2)$$
This solution is plausible because it is smaller than the two hours that Alice and Bob take by themselves; Charlie should help, even though he is a bit slower than the other two kids.
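The elimination above can be reproduced with exact rational arithmetic. The following sketch (an addition, not from the text) uses Python's `fractions` module:

```python
from fractions import Fraction

# Solve 2A+2B=100, 3A+3C=100, 4B+4C=100 by the elimination steps in the text:
# third eq: C = 25 - B; second eq becomes 3A - 3B = 25; first eq: B = 50 - A
# => 3A - 3(50 - A) = 25 => 6A = 175
A = Fraction(175, 6)
B = 50 - A                      # 125/6
C = 25 - B                      # 25/6
assert 2*A + 2*B == 100 and 3*A + 3*C == 100 and 4*B + 4*C == 100
t = 100 / (A + B + C)           # time when all three work together
print(t)                        # 24/13 (hours)
```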
Did we solve the system? Even though we spent some time and found $A, B, C$ satisfying the system, to be honest with you, we have just found one solution. Of course, if we can prove that this system has only one solution, then our $A, B, C$ are the solution. Can you explain why this system has a unique solution, and when such a system does not have a solution? And can it have more than one solution?
Let's consider another word problem, taken from The Joy of x by the American mathematician Steven Strogatz (born 1959). If the cold faucet can fill a bathtub in half an hour and the hot faucet fills it in one hour, then how long does it take if both faucets fill the bathtub together? At the age of 10 or 11, Strogatz answered 45 minutes when his uncle gave him this problem. What's your solution?
Here is his uncle's solution. In one minute, the cold faucet fills 1/30 of the bathtub and the hot faucet fills 1/60 of the bathtub. So, together they fill 1/30 + 1/60 = 1/20 of the bathtub in one minute. Thus, it takes them 20 minutes. That's the answer. But what if we do not know fractions?
Is it possible to get the same answer without using fractions? Yes, using hours instead of minutes! In one hour the cold faucet can fill two bathtubs, and the hot faucet one bathtub. Together, in one hour they can fill three bathtubs. So, it takes them 1/3 of an hour to fill one bathtub. This is the solution of the older Strogatz. It does not involve fractions, but it involves three bathtubs. We could not think of this solution if our mind were fixed on the image of a real bathtub: one bathtub with two faucets.
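Both solutions boil down to adding rates. Here is the uncle's fraction argument written out in code (just a restatement of the arithmetic, nothing new):

```python
from fractions import Fraction

cold = Fraction(1, 30)   # tubs per minute
hot  = Fraction(1, 60)
together = cold + hot    # 1/20 of the tub per minute
minutes = 1 / together
print(minutes)           # 20
```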
Let's stretch farther: can we solve this problem without doing any maths? Do you still remember Paul Dirac's quote mentioned earlier? This is the way to gain deep understanding. Setting up the equations and solving them without this step of understanding is working like a robot.
Let's try. Ok, we know that the cold faucet fills the tub in 30 minutes, so regardless of the rate of the hot faucet, together they have to fill the tub in less than 30 minutes. On the other hand, if the hot faucet's rate were the same as the cold one's, then together they would do the job in 15 minutes. So, without doing any maths, we know the answer $t$ satisfies $15 < t < 30$. What we have just done is, according to Polya in How to Solve It, considering special cases of the problem that we're trying to solve. We might not be able to solve the original problem, but we can at least solve some simpler problems.
Systems of linear equations in chemistry. Back in high school I did not know how to balance chemical equations like the following one: C₃H₈ + 5 O₂ → 3 CO₂ + 4 H₂O. The problem is to find whole numbers $x_1, x_2, x_3, x_4$ such that
$$x_1\,\mathrm{C_3H_8} + x_2\,\mathrm{O_2} \to x_3\,\mathrm{CO_2} + x_4\,\mathrm{H_2O}$$
That is, to balance the total numbers of carbon (C), hydrogen (H) and oxygen (O) atoms on the left and on the right of the chemical reaction. Now, C, H and O play the role of Alice, Bob and Charlie. There are three kinds of atoms, and conservation of each atom gives one equation:
$$\mathrm{C}:\ 3x_1 = x_3, \qquad \mathrm{H}:\ 8x_1 = 2x_4, \qquad \mathrm{O}:\ 2x_2 = 2x_3 + x_4$$
Again, we see a system of linear equations! Solving it is easy: the elimination technique. There is one catch: we have four unknowns but only three equations. Let $x_4 = n$; then we can solve for $x_1, x_2, x_3$ in terms of $n$: $x_1 = n/4$, $x_3 = 3n/4$, $x_2 = 5n/4$. Take $n = 4$, and we get $x_1 = 1$, $x_2 = 5$, $x_3 = 3$, $x_4 = 4$: exactly the balanced reaction above.
Because atoms are neither destroyed nor created in the reaction.
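The back-substitution above is easy to script. A sketch (an addition of mine), with the conservation equations as comments:

```python
from fractions import Fraction

def balance(n):
    # C: 3*x1 = x3,   H: 8*x1 = 2*x4,   O: 2*x2 = 2*x3 + x4,  with x4 = n
    x1 = Fraction(n, 4)        # from H: x1 = x4/4
    x3 = 3 * x1                # from C
    x2 = (2 * x3 + n) / 2      # from O
    return x1, x2, x3, Fraction(n)

coeffs = balance(4)
print(coeffs)                  # x1 = 1, x2 = 5, x3 = 3, x4 = 4
assert coeffs == (1, 5, 3, 4)
```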
Systems of linear equations. Eq. (2.14.1) is one example of a system of linear equations. In these systems, there are $n$ equations for $n$ unknowns $x_1, x_2, \ldots, x_n$, where all equations are linear in terms of the $x_i$ ($i = 1, 2, \ldots$); i.e., we will not see nonlinear terms like $x_i x_j$. Such systems arise for $n = 2, 3, 4$ and beyond.
If we focus on how to solve these equations, we would come up with the so-called Gaussian elimination method (when we're pressed to solve a system with many unknowns, say $n \ge 6$). On the other hand, if we are interested in the questions of when such a system has a solution, when it does not have a solution, and so on, we could come up with matrices and determinants. For example, we realize that we can put all the coefficients of a system of linear equations in an array like
$$A = \begin{bmatrix} 1 & 2 & 4 & 1 \\ 2 & 1 & 1 & 7 \\ 5 & 1 & 3 & 4 \\ 6 & 7 & 2 & 3 \end{bmatrix} \qquad (2.14.4)$$
and we can play with this array similarly to the way we do with numbers. We can add them,
multiply them, subtract them. And we give it a name: A is a matrix. Matrices, determinants
and how to solve efficiently large systems of linear equations (n in the range of thousands and
millions) belong to a field of mathematics named linear algebra, see Chapter 10.
We’re not sure about the original source of systems of linear equations, but systems of linear
equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in
geometry. In fact, in this new geometry, now called analytical geometry, lines and planes are
represented by linear equations, and computing their intersections amounts to solving systems
of linear equations.
But if systems of linear equations only came from analytical geometry, we would only have systems of three equations (a plane in 3D is of the form $ax + by + cz = 0$), and life would be boring. Systems of linear equations appear again and again in many fields (e.g. physics, biology, economics and mathematics itself). For example, in structural engineering – a sub-discipline of civil engineering which deals with the design of structural elements (beams, columns, trusses) – we see systems of linear equations; actually systems of many linear equations. For example, consider the bridge shown in Fig. 2.23a, which is idealized as a system of trusses, a part of which is shown in Fig. 2.23b. Applying force equilibrium to Fig. 2.23b, we get a system of 9 linear equations for the 9 unknown forces in the nine trusses.
Figure 2.23: (a) a bridge; (b) a part of its truss idealization.
1. Two dogs, each traveling 10 ft/sec, run towards each other from 500 feet apart. As
they run, a flea flies from the nose of one dog to the nose of the other at 25 ft/sec.
The flea flies between the dogs in this manner until it is crushed when the dogs
collide. How far did the flea fly?
2. Alok has three daughters. His friend Shyam wants to know their ages. Alok gives him a first hint: "The product of their ages is 72." Shyam says this is not enough information. Alok gives him a second hint: "The sum of their ages is equal to my house number." Shyam goes out, looks at the house number and says, "I still do not have enough information to determine the ages." Alok admits that Shyam cannot yet guess, and gives him a third hint: "My oldest daughter likes strawberry ice-cream." With this information, Shyam was able to determine all three ages. How old is each daughter?
Regarding the daughter-age problem, we have three unknowns and three hints, so it seems to be a good problem. But did you try to set up the equations? There is only one equation, namely $xyz = 72$, if $x, y, z$ are the ages of the daughters! What if the product of their ages were a smaller number, say 12? Ah, we could simply list out the ages, as there are only a few cases. If that method works for 12, of course it will work for 72; it is just a bit of extra work. If you still cannot find the solution, check this website out. What if the product of their ages were a big number?
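The case-listing approach can be automated. The sketch below (an addition; the variable names are mine) enumerates all age triples and applies the hints: the ambiguous-sum filter encodes hint 2 and the unique-oldest filter encodes hint 3:

```python
from itertools import combinations_with_replacement
from collections import Counter

# All non-decreasing age triples (a, b, c) with product 72
triples = [t for t in combinations_with_replacement(range(1, 73), 3)
           if t[0] * t[1] * t[2] == 72]

# Hint 2: knowing the sum was NOT enough, so the sum must be ambiguous
sums = Counter(sum(t) for t in triples)
ambiguous = [t for t in triples if sums[sum(t)] > 1]

# Hint 3: there IS a single oldest daughter, so the largest age is unique
answer = [t for t in ambiguous if t.count(t[2]) == 1]
print(answer)   # [(3, 3, 8)]
```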
This is a good exercise to show that we should be flexible. Setting up equations is a good method to solve word problems, but it does not solve all problems. There always seem to be problems that defy all existing mathematics. And that is a good thing, as it is these problems that keep mathematicians working late at night.
Algebra is a language of symbols. Now, if we think again about the word problems, we see that algebra is actually a language – a language of symbols (such as $a$, or $A$). What is the advantage of this language? It is compact: it can translate a lengthy, verbose problem into a compact form that the eye can scan quickly and the mind can retain. Compare this
To complete a job, it takes: Alice and Bob 2 hours, Alice and Charlie 3 hours and
Bob and Charlie 4 hours. How long will the job take if all three work together?
Assume that the efficiency of Alice, Bob and Charlie is constant.
and
$$2A + 2B = 100, \qquad 3A + 3C = 100, \qquad 4B + 4C = 100$$
Let's now consider a harder problem: solve the following system of nonlinear equations
$$x^3 + 9x^2y = 10, \qquad y^3 + xy^2 = 2 \qquad (2.15.1)$$
Can we eliminate one variable? It might be possible, but we do not dare follow that path. Try it and you'll see why. There must be a better way. Why? Because this is a maths exercise! High school students should be aware of this fact: nearly all questions in a test/exam have solutions, and they are usually neither too hard nor too time-consuming (as the test duration is finite!). Furthermore, if there is a hard question, its mark is often low. Thus, you do not need to spend all of your time studying to get A grades. Use that time to explore the world.
We present the first solution, which considers $(x + 3y)^3$. Why this term? Because upon expansion, we will have the terms appearing in the two equations:
$$(x + 3y)^3 = x^3 + 9x^2y + 27xy^2 + 27y^3 = (x^3 + 9x^2y) + 27(y^3 + xy^2) = 10 + 27 \cdot 2 = 64$$
Hence $x + 3y = 4$, or $x = 4 - 3y$. Substituting this into the second equation gives
$$y^3 + (4 - 3y)y^2 = 2 \implies y^3 - 2y^2 + 1 = 0 \qquad (2.15.2)$$
Recognizing that $y = 1$ is one solution of the above equation, we can factor its LHS and write§
$$(y - 1)(y^2 - y - 1) = 0 \implies y = 1\ (x = 1), \quad y = \frac{1 \pm \sqrt{5}}{2}\ \left(x = \frac{5 \mp 3\sqrt{5}}{2}\right)$$
Is this solution a good one? Yes, but it is not general, as it cannot be used when the second equation is slightly different, e.g. $y^3 + 5xy^2 = 2$. We need another solution which works for any coefficients.
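We can at least verify the three solution pairs numerically (a check of mine, not in the original text):

```python
import math

# Check the three solution pairs of x^3 + 9x^2*y = 10 and y^3 + x*y^2 = 2
s5 = math.sqrt(5)
pairs = [(1.0, 1.0),
         ((5 - 3*s5) / 2, (1 + s5) / 2),
         ((5 + 3*s5) / 2, (1 - s5) / 2)]
for x, y in pairs:
    assert abs(x**3 + 9 * x**2 * y - 10) < 1e-8
    assert abs(y**3 + x * y**2 - 2) < 1e-8
print("all three pairs satisfy the system")
```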
What is special about Eq. (2.15.1)? We see $x^3$, $x^2y$, $y^3$ and $xy^2$; these terms are all of cubic order! If we do the substitution $y = kx$ (or $x = ky$), all these terms become $x^3$, $kx^3$, $k^3x^3$ and $k^2x^3$, and thus we can factor out $x^3$ and obtain an equation for $k$. That's the trick:
$$x^3 + 9x^2y = 10 \implies x^3(1 + 9k) = 10$$
$$y^3 + xy^2 = 2 \implies x^3(k^3 + k^2) = 2$$
By dividing the first equation by the second one, we get the following cubic equation for $k$:
$$\frac{1 + 9k}{k^3 + k^2} = \frac{10}{2} = 5 \implies 5k^3 + 5k^2 - 9k - 1 = 0$$
One root is easy to guess: $k = 1$, which gives $y = x$; then $x^3(1 + 9) = 10$ yields $x = y = 1$, recovering the solution found before. The other two roots of this cubic lead to the remaining two solution pairs.

Let's consider one more system, this time with square roots:
$$\sqrt{x} + \sqrt{y} = 3, \qquad \sqrt{x + 5} + \sqrt{y + 3} = 5$$
We can isolate the terms involving $y$ and square, to get two equations for $x$:
$$\begin{cases} \sqrt{x} = 3 - \sqrt{y} \\ \sqrt{x + 5} = 5 - \sqrt{y + 3} \end{cases} \implies \begin{cases} x = 9 + y - 6\sqrt{y} \\ x + 5 = 25 + y + 3 - 10\sqrt{y + 3} \end{cases}$$
§This exercise was not about solving cubic equations, so this cubic equation must be easy. That's why guessing one solution is the best technique here.
The squared equations still contain square roots, so we need one more idea. Observe the identity
$$\frac{1}{4}(p + q)^2 = \frac{1}{4}(p - q)^2 + pq \implies \frac{1}{4}\left(p + \frac{r}{p}\right)^2 = \frac{1}{4}\left(p - \frac{r}{p}\right)^2 + r \qquad (2.15.5)$$
And that's what we need: both $x$ and $x + 5$ (and likewise $y$ and $y + 3$) should be perfect squares. So, using Eq. (2.15.5) with $r = 5$ and with $r = 3$, we introduce these changes of variables:
$$\sqrt{x} = \frac{1}{2}\left(a - \frac{5}{a}\right), \quad \sqrt{y} = \frac{1}{2}\left(b - \frac{3}{b}\right) \implies \sqrt{x + 5} = \frac{1}{2}\left(a + \frac{5}{a}\right), \quad \sqrt{y + 3} = \frac{1}{2}\left(b + \frac{3}{b}\right)$$
The original system of equations becomes simply
$$\begin{cases} \dfrac{1}{2}\left(a - \dfrac{5}{a}\right) + \dfrac{1}{2}\left(b - \dfrac{3}{b}\right) = 3 \\[6pt] \dfrac{1}{2}\left(a + \dfrac{5}{a}\right) + \dfrac{1}{2}\left(b + \dfrac{3}{b}\right) = 5 \end{cases} \implies \begin{cases} a + b = 8 \\[2pt] \dfrac{5}{a} + \dfrac{3}{b} = 2 \end{cases}$$
(add the two equations to get the first result, and subtract them to get the second)
which can be solved easily. A correct change of variable goes a long way!
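To close the loop (my sketch, not in the text): from $a + b = 8$ and $5/a + 3/b = 2$ we get $a^2 - 9a + 20 = 0$, i.e. $a = 4$ or $a = 5$. A quick check, including the original radical equations:

```python
import math

# a = 4 or a = 5 (roots of a^2 - 9a + 20 = 0), with b = 8 - a
for a in (4.0, 5.0):
    b = 8 - a
    assert abs(5/a + 3/b - 2) < 1e-12
    x = (0.5 * (a - 5/a))**2        # sqrt(x)  = (1/2)(a - 5/a)
    y = (0.5 * (b - 3/b))**2        # sqrt(y)  = (1/2)(b - 3/b)
    # original system: sqrt(x) + sqrt(y) = 3, sqrt(x+5) + sqrt(y+3) = 5
    assert abs(math.sqrt(x) + math.sqrt(y) - 3) < 1e-12
    assert abs(math.sqrt(x + 5) + math.sqrt(y + 3) - 5) < 1e-12
    print(x, y)     # (121/64, 169/64) and (4, 1)
```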
Sometimes we can solve a hard equation by converting it to a system of equations which is easier to deal with. As one typical example, let's solve the following equation:
$$\sqrt[3]{14 + \sqrt{x}} + \sqrt[3]{14 - \sqrt{x}} = 4$$
If we look at the terms under the cube roots, we see something special: their sum is constant, i.e., free of $x$. So, if we let $u = \sqrt[3]{14 + \sqrt{x}}$ and $v = \sqrt[3]{14 - \sqrt{x}}$, we have $u^3 + v^3 = 28$. And of course, we also have $u + v = 4$ from the original equation. Thus, we have
$$\begin{cases} u + v = 4 \\ u^3 + v^3 = 28 \end{cases}$$
which can be solved to get $u = 3$, $v = 1$, and from that we get $x = 169$. If the equation is slightly changed to $\sqrt[3]{14 + \sqrt{x}} + \sqrt[3]{14 - a\sqrt{x}} = 4$, where $a$ is any number, then our trick would not work. Don't worry, you will not see that in standardized tests. In real life, probably. But then we can just use a numerical method (e.g. Newton's method, discussed in Section 4.5.4, or a graphical method) to find an approximate solution.
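A numerical check of the $(u, v)$ trick and of $x = 169$ (an addition to the text):

```python
u, v = 3.0, 1.0                       # roots of t^2 - 4t + 3 = 0
assert u + v == 4 and u**3 + v**3 == 28
x = (u**3 - 14)**2                    # u^3 = 14 + sqrt(x)  =>  sqrt(x) = 13
print(x)                              # 169.0
f = lambda t: (14 + t**0.5)**(1/3) + (14 - t**0.5)**(1/3) - 4
assert abs(f(169)) < 1e-12            # x = 169 solves the original equation
```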
The linear, quadratic and cubic equations that we have just seen belong to a general class of algebraic equations. Other equations, which contain combinations of polynomials, trigonometric functions, logarithmic functions, exponential functions etc., are called transcendental equations. For example, $x = \cos x$ is a transcendental equation.
Definition 2.16.1
A polynomial equation of the form $f(x) = a_n x^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \cdots + a_1 x + a_0 = 0$ is called an algebraic equation. An equation which contains trigonometric functions, logarithmic functions, exponential functions etc., is called a transcendental equation.
In Section 2.12 we solved linear/quadratic/cubic equations directly. That is, the solutions of these equations can be expressed in terms of radicals of the coefficients of the equations, e.g. $x = (-b \pm \sqrt{b^2 - 4ac})/2a$ in the case of quadratic equations. It is also possible to do the same thing for fourth-order algebraic equations (the formula is too lengthy to be presented here). But, as the French mathematician and political activist Évariste Galois (1811 – 1832) showed us, polynomials of fifth order and beyond have no general closed-form solutions using radicals. Why are fifth-order equations so hard? To answer this question, we need to delve into the so-called abstract algebra – a field about symmetries and groups. I do not know much about this branch of mathematics, so I do not discuss it here. I strongly recommend Ian Stewart's book Why Beauty Is Truth: The History of Symmetry [49].
For transcendental equations, we need to use numerical methods, i.e., methods that give approximate solutions, not exact ones expressed in terms of the coefficients of the equations. For example, a numerical method would give the solution $x = 0.73908513$ to the equation $\cos x - x = 0$. We refer to Section 4.5.4 for a discussion of this topic.
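As an illustration of such a method (my sketch; Section 4.5.4 covers proper methods), fixed-point iteration $x \leftarrow \cos x$ converges to the quoted root:

```python
import math

# Fixed-point iteration for the transcendental equation cos x = x
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(round(x, 8))   # 0.73908513
```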
Associated with algebraic and transcendental equations we have algebraic and transcendental numbers, respectively. An algebraic number is any complex number (including real numbers) that is a root of a non-zero polynomial in one variable with rational coefficients (or equivalently, by clearing denominators, with integer coefficients). All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as $\pi$ and $e$, are called transcendental numbers. If you're fascinated by numbers, check out [43].
2.17 Powers of 2
Two to the power four is two multiplied by itself four times, which is expressed as
$$2^4 := \underbrace{2 \times 2 \times 2 \times 2}_{4 \text{ times}} \qquad (2.17.1)$$
Thus, $2^4$ is nothing but a shorthand for $2 \times 2 \times 2 \times 2$. So, for positive integers as exponents, a power is just a repeated multiplication.
We can deduce rules for common operations with powers. For example, multiplication of two powers of two is given by
$$2^m \times 2^n := \underbrace{(2 \times 2 \times \cdots \times 2)}_{m \text{ times}} \times \underbrace{(2 \times 2 \times \cdots \times 2)}_{n \text{ times}} = 2^{m+n} \qquad (2.17.2)$$
which basically says that to multiply two powers with the same base (2 here), you keep the base and add the exponents. And this is the product rule $a^m a^n = a^{m+n}$ for $m, n \in \mathbb{N}$.
The next thing is certainly division of two powers. Division of two powers of two is written as
$$\frac{2^m}{2^n} = 2^{m-n} \qquad (2.17.3)$$
If that is not clear, we can always check a concrete case. For example,
$$\frac{2^5}{2^3} = \frac{2 \times 2 \times 2 \times 2 \times 2}{2 \times 2 \times 2} = 2 \times 2 = 2^2 = 2^{5-3}$$
How about raising a power, i.e., a power of a power such as $(2^3)^2$? It's $8^2 = 64 = 2^6$. And we generalize this to:
$$(2^m)^n := \underbrace{\underbrace{(2 \cdots 2)}_{m \text{ times}} \times \underbrace{(2 \cdots 2)}_{m \text{ times}} \times \cdots \times \underbrace{(2 \cdots 2)}_{m \text{ times}}}_{n \text{ times}} = 2^{mn} \qquad (2.17.4)$$
And we also have the result $(2^m)^n = (2^n)^m$, as both are equal to $2^{mn}$.
So far so good; we have rules for powers with positive integer exponents. How about zero and negative exponents, e.g. $2^0$ and $2^{-1}$? To answer these questions, again we follow the principle used when extending other operations (e.g. for $(-1)(-1) = 1$): the new rule should be consistent with the old rule. From the data in Table 2.8 we get $2^0 = 1$ and $2^{-1} = 1/2$: in this table, going down from the top row, the value in the third column of any row is obtained by dividing the value of the previous row by two.
The next natural question is how to define powers with a rational exponent, e.g. $2^{1/2}$. We apply the rules that work for integer exponents, e.g. the power-of-a-power rule in Eq. (2.17.4). We do not know yet what $2^{1/2}$ is, but we know its square! Details are as follows:
$$(2^{1/2})^2 = 2^{(1/2) \cdot 2} = 2 \implies 2^{1/2} = \sqrt{2} \qquad (2.17.5)$$
We played the same game before: multiplication (of 2 integers) is a repeated addition. Now, we define a new math object based on repeated multiplication. Why? Because it saves time.
Table 2.8: Powers of two.

 n    2^n       Value
 3    2·2·2     8
 2    2·2       4
 1    2         2
 0    2^0       1
-1    2^{-1}    1/2
which reads '2 to the power of 1/2 is the square root of 2'; nothing new comes up here. In the same manner, $2^{1/3}$ is computed as
$$(2^{1/3})^3 = 2^{(1/3) \cdot 3} = 2 \implies 2^{1/3} = \sqrt[3]{2}$$
We can now generalize these results to have ($n, p, q$ being positive integers, or $n, p, q \in \mathbb{N}$)
$$a^{1/n} = \sqrt[n]{a}, \qquad a^{p/q} = \sqrt[q]{a^p} \qquad (2.17.6)$$
This was obtained by replacing 2 by $a$ – a real number – as in the previous development there is nothing special about 2; what we have done for 2 works exactly the same for any positive real number.
Now we have defined powers with a rational exponent $a^{m/n}$. Do all the rules (e.g. the product rule) still apply to such powers? That is, do we still have $a^{m/n} a^{p/q} = a^{m/n + p/q}$? To gain insight, we can try a few examples. For instance, $3^{1/2} \times 3^{1/2}$ equals 3 (being the square of $\sqrt{3}$), but it is also equal to 3 from $3^{1/2 + 1/2} = 3^1$. Now we need a proof, once and for all!
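Short of a proof, we can at least spot-check the rules numerically (a check of mine, not a proof):

```python
# Spot-check the product rule a^(m/n) * a^(p/q) = a^(m/n + p/q)
a = 3.0
assert abs(a**0.5 * a**0.5 - 3.0) < 1e-9            # 3^(1/2) * 3^(1/2) = 3
assert abs(a**(2/3) * a**(5/6) - a**(2/3 + 5/6)) < 1e-9
# and a^(p/q) agrees with the q-th root of a^p, as in Eq. (2.17.6)
assert abs(2**(3/4) - (2**3)**(1/4)) < 1e-9
print("power rules hold numerically")
```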
A bit of history about the notation of exponents. The notation we use today to denote an exponent was first used by the Scottish mathematician James Hume in 1636. However, he used Roman numerals for the exponents. Using Roman numerals as exponents became problematic since many of the exponents became very large, so Hume's notation didn't last long. A year later, in 1637, René Descartes became the first mathematician to use the Hindu–Arabic numerals of today as exponents. It was Newton who first used powers with negative and rational exponents. Before him, Wallis wrote $1/a^2$ instead of $a^{-2}$.
Refer to Section 2.23.7 for what $\mathbb{N}$ is. Briefly, it is the set (collection) of all natural numbers. Instead of writing the lengthy "$n$ is a natural number", mathematicians write $n \in \mathbb{N}$.
Power with an irrational exponent. For a number raised to a fractional exponent, i.e., $a^{p/q}$, the result is the $q$-th root of the number raised to the power $p$, i.e., $\sqrt[q]{a^p}$. Again, we should ask ourselves this question: what happens when we raise a number to an irrational power? Obviously it is not so simple to break it down like what we have done in e.g. Eq. (2.17.5). What is $2^{\sqrt{2}}$? It cannot be 2 multiplied by itself $\sqrt{2}$ times! So, the definition in Eq. (2.17.1) no longer works. In other words, the starting point that a power is just a repeated multiplication is no longer valid. This situation is similar to how multiplication as repeated addition ($2 \times 3 = 2 + 2 + 2$) does not apply to $2 \times 3.4$.
To see what $2^{\sqrt{2}}$ might be, we can proceed as follows, without a calculator of course; otherwise we would not learn anything interesting, just a meaningless number. We approximate $\sqrt{2}$ successively by 1.4, 1.41, 1.414 etc., and we compute the corresponding powers (e.g. $2^{1.4} = 2^{14/10} = 2^{7/5} = \sqrt[5]{2^7}$). The results given in Table 2.9 show that as a more accurate approximation of the square root of 2 is used, the powers converge to a value. Note that we have used a calculator to compute each approximation of $2^{\sqrt{2}}$, e.g. $2^{14/10} = \sqrt[10]{2^{14}}$. This is not cheating, as the main point here is to get the value of these approximations.
Table 2.9: Calculation of $2^{\sqrt{2}}$.

$2^{1.4} = 2^{14/10} = \sqrt[10]{2^{14}}$ ............ 2.6390158
$2^{1.41} = 2^{141/100} = \sqrt[100]{2^{141}}$ ....... 2.6573716
$2^{1.414} = 2^{1414/1000} = \sqrt[1000]{2^{1414}}$ .. 2.6647496
$2^{1.4142}$ ......................................... 2.6651190
$2^{1.41421}$ ........................................ 2.6651190
$2^{1.41421356}$ ..................................... 2.6651441383063186
$2^{1.414213562}$ .................................... 2.665144142000993
$2^{1.4142135623}$ ................................... 2.665144142555194
$2^{1.41421356237}$ .................................. 2.6651441426845075
p
But, how can we be sure that 2 2 is a number? This can be guaranteed by looking at the
function 2x as shownpin Fig. 2.24. There is no hole in this curve or the function is continuous,
so there must exist 2 2 .
Do the rules of powers still apply for irrational exponents? Do we still have $a^x a^y = a^{x+y}$ with $x, y$ being irrational numbers? If so, we say that the power rules work for real numbers, and we're nearly done (if we did not have complex numbers). How to prove this? One easy but not rigorous way is to say that we can always replace $a^x$ by $a^r$, where $r$ is a rational number close to $x$, and $a^y$ by $a^t$. Thus $a^x a^y \approx a^r a^t = a^{r+t}$.
p
We have calculated
p 2 2 by approximating the square root of 2 with a rational number, e.g.
1000
21414=1000 D 21414 . However, calculating the 1000th root is not an easy task. There must be
a better way to compute 2x for any real number x directly and efficiently. For this, we need cal-
culus (Chapter 4). That is, algebra can only help us so far, to go further we need new mathematics.
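The successive approximations in Table 2.9 are easy to reproduce (my sketch; here the computer plays the role of the calculator):

```python
import math

# Approximate sqrt(2) by truncated decimals and watch 2^r converge to 2^sqrt(2)
for digits in range(1, 9):
    r = math.floor(math.sqrt(2) * 10**digits) / 10**digits   # 1.4, 1.41, ...
    print(r, 2**r)
print(2**math.sqrt(2))   # about 2.6651441426902
```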
Adding up powers of two. Let's consider the summation of powers of two, starting from $2^0$ up to $2^{n-1}$:
$$S(n) = 1 + 2 + 4 + 8 + \cdots + 2^{n-1} = \sum_{i=0}^{n-1} 2^i = \text{?} \qquad (2.17.7)$$
We have added the shorthand notation using the sigma sign $\sum$ just for people not familiar with it to practice using it; it is not essential for our purpose here. To find the expression for $S(n)$, we need to get our hands dirty by computing $S(n)$ for a number of values of $n$. The results for $n = 1, 2, 3, 4$ are tabulated in Table 2.10. From this data we can find a pattern (see columns 3 and 4 of the table). And this brings us to the following conjecture:
$$S(n) = 1 + 2 + 4 + 8 + \cdots + 2^{n-1} = 2^n - 1 \qquad (2.17.8)$$
And if we can prove that this conjecture is correct then we have discovered a theorem.
Table 2.10: Computing $S(n)$ for small $n$.

n    S(n)    pattern    power form
1    1       2 - 1      2^1 - 1
2    3       4 - 1      2^2 - 1
3    7       8 - 1      2^3 - 1
4    15      16 - 1     2^4 - 1
Proof. It is easy to see that $S(1)$ is correct ($1 = 2^1 - 1$). Now, assume that $S(k)$ is correct, i.e.,
$$1 + 2 + 4 + 8 + \cdots + 2^{k-1} = 2^k - 1$$
Then $S(k+1) = S(k) + 2^k = (2^k - 1) + 2^k = 2^{k+1} - 1$, which is exactly the conjectured formula for $n = k + 1$. By induction, Eq. (2.17.8) holds for all $n$. ∎
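The conjecture can also be checked by brute force for many values of $n$, since Python integers are exact (a check of mine, not a substitute for the induction proof):

```python
# Verify S(n) = 2^n - 1 exactly for n = 1..64
for n in range(1, 65):
    assert sum(2**i for i in range(n)) == 2**n - 1
print("S(n) = 2^n - 1 verified for n = 1..64")
```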
Why powers?
I think that the concept of power emerged from practical geometry problems. If you have a square of side 2, what is the area? It is $2 \times 2$, or two squared. If you have a cube of side 2, the volume is $2 \times 2 \times 2$, or two cubed. The notation $2^3$ is just a convenient shortcut for $2 \times 2 \times 2$. Then, mathematicians generalized to $a^n$ for any $n$.
What is $\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}$? It is 2, an integer! You can check this using a calculator and then prove it using the rules of powers that you're now familiar with. Let's go crazy: how about $\pi^{\pi}$?
We know that $x, x^2, x^3$ are called the first, second and third powers of $x$. But we also know that $x^2$ is written/read '$x$ squared' and $x^3$ as '$x$ cubed'. Why? This is because ancient Greek mathematicians saw $x^2$ as the area of a square of side $x$.
Scientific notation.
When working with very large numbers such as 3 trillion, we do not write it as 3 000 000 000 000, as there are too many zeros. Instead, we write it as $3 \times 10^{12}$ (there are 12 zeros explicitly written). Any number can be written as the product of a number between 1 and 10 and a power of ten. For example, we can write 257 as $2.57 \times 10^2$ and 0.00257 as $2.57 \times 10^{-3}$. This system is called scientific notation. Doing arithmetic in this notation is easier thanks to the properties of exponents. For example, when we multiply numbers, we multiply the coefficients and add the exponents:
$$(2 \times 10^3) \times (3 \times 10^4) = (2 \times 3) \times 10^{3+4} = 6 \times 10^7$$
Scientific notation immediately reveals how big a number is. We use the order of magnitude to measure this. Generally, the order of magnitude of a number is the smallest power of 10 used to represent that number. For example, $257 = 2.57 \times 10^2$, so it has an order of magnitude of 2.
2.18 Infinity
This section presents a few things about infinity, the concept of something that is unlimited, endless, without bound. The common symbol for infinity, $\infty$, was invented by the English mathematician John Wallis in 1655. Mathematical infinities occur, for instance, as the number of points on a continuous line, or as the size of the endless sequence of counting numbers: 1, 2, 3, etc.

The symbol $\infty$ essentially means arbitrarily large, or bigger than any positive number. Likewise, the symbol $-\infty$ means less than any negative number.

This section mostly concerns infinite sums, e.g. what is the sum of all positive integers? Such sums are called series. In Section 2.18.1 I present arithmetic series (e.g. $2 + 4 + 6 + \cdots$), in Section 2.18.2 geometric series (e.g. $1 + 2 + 4 + \cdots$), and in Section 2.18.3 the harmonic series $1 + 1/2 + 1/3 + \cdots$. In Section 2.18.4, the famous Basel problem is presented. Section 2.18.5 is about the first infinite product known in mathematics, and the first example of an explicit formula for the exact value of $\pi$.
Why do we have to bother with infinite sums? One reason is that many functions can be expressed as infinite sums. For example,
$$f(x) = a_0 + \sum_{n=1}^{\infty} (a_n \cos nx + b_n \sin nx)$$
$$(1 - x^2)^{1/2} = 1 - \frac{1}{2}x^2 - \frac{1}{8}x^4 - \frac{1}{16}x^6 - \cdots$$
This simple problem exhibits what is called an arithmetic series. After day 1, he has 10 cents. On the second day he gets 13 cents, on the third day 16 cents, and so on. The list of amounts he gets each day,
$$10, 13, 16, 19, 22, \ldots,$$
is called a sequence. When we add up the terms in this sequence to get the total amount he has at some point,
$$10 + 13 + 16 + 19 + 22 + 25 + 28 + 31 + 34 + 37,$$
the result is a series, or precisely a finite series, because the number of terms is finite. Shortly, we shall discuss infinite series, in which the number of terms is infinite. In this particular case, where each term is separated by a fixed amount from the previous one, both the series and the sequence are called arithmetic.
The amount is simply obtained as a sum of ten terms: it is 235. But we need a smarter way to solve this problem, in case we face a question like: what is the amount after a year? Doing the sum of 365 terms is certainly a boring task.
What we want here is a formula that gives us the arithmetic series directly. And mathematicians solve this specific problem by considering a general one (as it turns out, it is easier to handle the general problem with symbols). Let's first define a general arithmetic sequence with $a$ being the first term and $d$ the difference between successive terms. The arithmetic sequence is then
$$a, \; a + d, \; a + 2d, \; \ldots, \; a + (n-1)d, \; \ldots \qquad (2.18.1)$$
where the $n$th term is $a + (n-1)d$. Now, the sum of the first $n$ terms of this sequence is $a + (a+d) + (a+2d) + \cdots + (a+(n-1)d)$. To compute this sum, we follow Gauss, by writing the sum $S$ in the usual order and in reverse order (for 4 terms only, which is enough to see the point):
$$S = a + (a+d) + (a+2d) + (a+3d)$$
$$S = (a+3d) + (a+2d) + (a+d) + a$$
$$2S = (2a+3d) + (2a+3d) + (2a+3d) + (2a+3d)$$
We can see that $2S = 4(2a+3d)$, or $S = (4/2)(2a+3d) = (4/2)\left[a + (a+3d)\right]$. Now we see the pattern, and thus the general arithmetic series is given by
$$a + (a+d) + \cdots + (a+(n-1)d) = \frac{\text{number of terms}}{2}\left(\text{1st term} + \text{final term}\right) \qquad (2.18.2)$$
Thus, with observation, we have developed a formula that just requires us to do one addition and
one multiplication, regardless of the number of terms involved! That’s the power of mathematics.
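Applying Eq. (2.18.2) to the year-long version of the coin problem (my sketch; I assume the same start of 10 cents and difference of 3 cents):

```python
# Arithmetic series via Eq. (2.18.2): num_terms/2 * (first + last)
a, d, n = 10, 3, 365
last = a + (n - 1) * d              # 1102 cents on the last day
S = n * (a + last) // 2
assert S == sum(a + i * d for i in range(n))   # agrees with brute force
assert 10 * (10 + 37) // 2 == 235              # the ten-day total from the text
print(S, "cents after a year")                 # 202940 cents after a year
```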
where the ellipsis '…' means 'and so on forever'. This sum is called a geometric series, that is, a series with a constant ratio (1/2 in this particular case) between successive terms. Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Why 'geometric'? We shall explain shortly.
Hey! What kind of person walks to a door like that? The story is this: you might guess correctly that it came from a philosopher. In the fifth century BC the Greek philosopher Zeno of Elea posed four problems, and the above is one of them, passed on to us by Aristotle. Zeno was wondering about the continuity of space and time.
To have an idea of what $S$ might be, you can compute it for some concrete values of $n$. I did that for $n$ up to 20 (of course using a small Julia code, Listing B.2), and the results, given in Table 2.11, indicate that $S = 1$. Even though the sum involves infinitely many terms, it converges to the finite value of one! And the geometric representation of this sum shown in Fig. 2.25 confirms this. Note that, in the past, Zeno argued that you would never be able to get to the door; motion cannot exist! This is because the Greeks had no notion that an infinite number of terms could have a finite sum.
Table 2.11: $S_n = \sum_{i=1}^{n} 1/2^i$.

Terms    S
1        0.5
2        0.75
3        0.875
⋮        ⋮
10       0.9990234375
20       0.9999990463

Figure 2.25: Geometric visualization of $S = \sum_{i=1}^{\infty} 1/2^i$.
Although we have numerical and geometric evidence that the sum is one, we still need a mathematical proof. We need some algebra tricks here. The idea is: we do not go to infinity (where is it?); instead we consider only $n$ terms of the sum, then we see what happens to this sum when we let $n$ go to infinity (the danger is for $n$, not for us, and this works). That's why mathematicians introduce the partial sum $S_n = \sum_{i=1}^{n} 1/2^i$. With this symbol, they start doing some algebraic manipulations, and the sum reveals its secret. First, they multiply $S_n$ by 1/2 and put $S_n$ and $(1/2)S_n$ together to see the connection:
$$S_n = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^{n-1}} + \frac{1}{2^n}$$
$$\frac{1}{2}S_n = \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \frac{1}{2^{n+1}}$$
What next? Many terms are identical in $S_n$ and half of it, so it is natural to subtract one from the other to cancel the common terms:
$$S_n - \frac{1}{2}S_n = \frac{1}{2} - \frac{1}{2^{n+1}} \implies S_n = 1 - \frac{1}{2^n}$$
Because the series involves infinitely many terms, we now consider the case when $n$ is very large, i.e., $n \to \infty$. For such $n$, the term $1/2^n$ – the inverse of a giant number – is very, very small, and thus $S_n$ approaches one, which means that $S$ approaches one too:
$$S = 1 \quad \text{when } n \to \infty$$
There is nothing special about 1/2, 1/4, … in this series. Thus, we now generalize the above discussion to come up with the general geometric series, with first term $a$ and ratio $r$:
$$S = a + ar + ar^2 + ar^3 + \cdots \qquad (2.18.4)$$
Then, we introduce the partial sum $S_n$ ($n$ is the number of terms) and multiply it by $r$, as follows:
$$S_n = a + ar + ar^2 + ar^3 + \cdots + ar^{n-1}$$
$$rS_n = ar + ar^2 + ar^3 + \cdots + ar^{n-1} + ar^n$$
It follows that
$$(1 - r)S_n = a - ar^n \implies S_n = \frac{a}{1 - r}(1 - r^n)$$
Or,
$$a + ar + ar^2 + ar^3 + \cdots + ar^{n-1} = \frac{a}{1 - r}(1 - r^n) \qquad (2.18.5)$$
For the particular case of $a = 1$, we have this result:
$$\sum_{i=0}^{n-1} r^i = 1 + r + r^2 + r^3 + \cdots + r^{n-1} = \frac{1 - r^n}{1 - r} \qquad (2.18.6)$$
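Eq. (2.18.5) is easy to validate against brute-force summation (a check of mine, not in the original):

```python
# Check Eq. (2.18.5): a + ar + ... + ar^(n-1) = a * (1 - r^n) / (1 - r)
def geom_formula(a, r, n):
    return a * (1 - r**n) / (1 - r)

for a, r, n in [(1, 0.5, 20), (3, -0.7, 15), (1, 2, 10)]:
    brute = sum(a * r**i for i in range(n))
    assert abs(brute - geom_formula(a, r, n)) < 1e-9
# first term 1/2, ratio 1/2, 20 terms: the last entry of Table 2.11
print(geom_formula(0.5, 0.5, 20))   # 0.9999990463...
```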
The Rice And Chessboard Story. There’s a famous legend about the origin of chess that goes
like this. When the inventor of the game showed it to the emperor of India, the emperor was so
impressed by the new game, that he said to the man "Name your reward!". The man responded,
"Oh emperor, my wishes are simple. I only wish for this. Give me one grain of rice for the first
square of the chessboard, two grains for the next square, four for the next, eight for the next
and so on for all 64 squares, with each square having double the number of grains as the square
before."
Let's see how many grains would be needed. It can be seen that the total number of grains is a geometric series with $a = 1$ and $r = 2$, containing 64 terms. Using Eq. (2.18.6), we can compute it:
$$S = 1 + 2 + 4 + \cdots + 2^{63} = \frac{1 - 2^{64}}{1 - 2} = 2^{64} - 1 = 18{,}446{,}744{,}073{,}709{,}551{,}615 \qquad (2.18.7)$$
The total number of grains equals 18,446,744,073,709,551,615 (eighteen quintillion four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen)! Not only is it a very large number, it is also the 64th Mersenne number. A Mersenne number is a number that is one less than a power of two ($2^n - 1$); those Mersenne numbers that happen to be prime are called Mersenne primes (this particular one is not prime: it is divisible by 3). These numbers are named after Marin Mersenne, a French Minim friar, who studied them in the early 17th century.
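The chessboard total is a one-liner with Python's exact integers (a check of mine):

```python
grains = sum(2**i for i in range(64))   # 1 + 2 + 4 + ... + 2^63
assert grains == 2**64 - 1
assert grains % 3 == 0                  # divisible by 3, hence not prime
print(f"{grains:,}")                    # 18,446,744,073,709,551,615
```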
So we have seen two geometric series: one in Eq. (2.18.3) with $r = 1/2 < 1$ and one in the chessboard legend with $r = 2 > 1$. While the first series converges, or is convergent (i.e., as the number of terms gets bigger and bigger the sum does not explode; it settles to a finite value), the second series diverges (or is divergent): more terms result in a bigger sum. The question now is when the geometric series converges. Before delving into that question, note that $r$ can be negative; actually mathematicians want it to be, because they always aim for a general result.
To see why geometric series with |r| < 1 converge, let's look at Eq. (2.18.5). We have the term 1 - r^n, which depends on n. But we also know that if -1 < r < 1 (or compactly |r| < 1 using the absolute value notation), then r^n approaches zero when n gets bigger and bigger. You can try the numbers 0.5^10, 0.5^11, 0.5^12 and you will see that they become smaller and smaller, approaching zero. (On a hand calculator, start with 0.5 and press the x^2 button successively; the display will go to zero.) Not a mathematical proof, but for now it is more than enough. For a proof, we need the concept of limit. (Actually we have seen the idea of limit right in Table 2.11.)
So, for |r| < 1, the term 1 - r^n goes to one when n goes to infinity. From Eq. (2.18.5) the geometric series thus becomes

a + ar + ar^2 + ar^3 + ... = a/(1 - r),  for |r| < 1      (2.18.8)
Note that this formula holds only for |r| < 1. If we use it for |r| > 1, we get absurd results. For example, with r = 2, this formula gives us

1 + 2 + 4 + 8 + ... = -1

which is absurd. Weird things can happen if we apply ordinary algebra to a divergent series! Now we can understand why Niels Henrik Abel said "Divergent series are the devil, and it is a shame to base on them any demonstration whatsoever".
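Both behaviours are easy to see numerically; a small sketch (names are ours), comparing partial sums against the closed form a/(1 - r):

```python
def partial_sum(a, r, n):
    # S_n = a + ar + ... + ar^(n-1)
    return sum(a * r**i for i in range(n))

# Convergent case: a = 1, r = 1/2, limit a/(1 - r) = 2
assert abs(partial_sum(1, 0.5, 60) - 2.0) < 1e-12
# Divergent case: r = 2, the partial sums just keep growing
assert partial_sum(1, 2, 60) > partial_sum(1, 2, 50) > 10**14
```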
Absolute value. When we want to say a number x is smaller than 1 but larger than -1, we write |x| < 1. The notation |x| denotes the absolute value or modulus of a real number x. The absolute value of a number may be thought of as its distance from zero. The notation |x|, with a vertical bar on each side, was introduced by the German mathematician Karl Weierstrass (1815 – 1897) in 1841. He is often cited as the "father of modern analysis" and we will have more to say about him in Chapter 4.
For any real number x, the absolute value or modulus of x is defined as

|x| = x if x ≥ 0, and |x| = -x if x < 0.      (2.18.9)

For example, the absolute value of 3 is 3, and the absolute value of -3 is also 3.
Using the geometric series formula to express repeating decimals. We can use geometric
series to prove that a repeating decimal is a rational number. For example,
0.22222222... = 0.2 + 0.02 + 0.002 + ...
             = 2/10 + 2/100 + 2/1000 + ...      (a = 2/10, r = 1/10)
             = (2/10) / (9/10) = 2/9            (using Eq. (2.18.8))
Niels Henrik Abel (1802 – 1829) was a Norwegian mathematician. His most famous single result is the first
complete proof demonstrating the impossibility of solving the general quintic equation in radicals. He was also an
innovator in the field of elliptic functions, discoverer of Abelian functions. He made his discoveries while living in
poverty and died at the age of 26 from tuberculosis. Most of his work was done in six or seven years of his working
life. Regarding Abel, the French mathematician Charles Hermite said: "Abel has left mathematicians enough to
keep them busy for five hundred years." Another French mathematician, Adrien-Marie Legendre, said: "what a head the young Norwegian has!". The Abel Prize in mathematics, originally proposed in 1899 to complement the Nobel Prizes, is named in his honor.
0.99999... = 0.9 + 0.09 + 0.009 + ... = (9/10) / (9/10) = 1      (2.18.10)
Can you name a number that is larger than 0.999... and smaller than 1? If not, these two numbers
are the same!
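With Python's exact fractions we can check these repeating-decimal computations without any rounding (a sketch; names are ours):

```python
from fractions import Fraction

# 0.222... = 2/10 + 2/100 + ... : a = 2/10, r = 1/10
a, r = Fraction(2, 10), Fraction(1, 10)
assert a / (1 - r) == Fraction(2, 9)      # Eq. (2.18.8)

# 0.999... : a = 9/10, r = 1/10 gives exactly 1
a, r = Fraction(9, 10), Fraction(1, 10)
assert a / (1 - r) == 1

# partial sums of 0.222... get as close to 2/9 as we like
s, term = Fraction(0), Fraction(2, 10)
for _ in range(20):
    s += term
    term /= 10
assert abs(s - Fraction(2, 9)) < Fraction(1, 10**19)
```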
Why is this series called the harmonic series? We can find the following answer everywhere: it is so called because each term of the series, except the first, is the harmonic mean of its two nearest neighbors. And the explanation stops there! This response certainly raises more questions than it answers: What is the harmonic mean? To have a complete understanding, we have to trace back to the origin.
The harmonic mean. We know the arithmetic mean of two numbers a and b is A = 0.5(a + b). The geometric mean is G = √(ab). The harmonic mean is H = 2ab/(a + b), or equivalently, 1/H = 0.5(1/a + 1/b); H is the reciprocal of the average of the reciprocals of a and b. So, 1/n is the harmonic mean of 1/(n - 1) and 1/(n + 1) for n > 1. Now, we are going to unfold the meaning of these means.
It is a simple matter to find the average of two numbers. For example, the average of 6 and 10 is 8. When we do this, we are really finding a number x such that 6, x, 10 forms an arithmetic sequence: 6, 8, 10. In general, if the numbers a, x, b form an arithmetic sequence, then

x - a = b - x  ⟹  x = (a + b)/2      (2.18.12)
Similarly, we can define the geometric mean (GM) of two positive numbers a and b to be the positive number x such that a, x, b forms a geometric sequence. One example is 2, 4, 8 and this helps us to find the formula for the GM:

x/a = b/x  ⟹  x = √(ab)
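The three means are one-liners in code; a quick sketch (function name ours) that also checks the harmonic-mean fact used above:

```python
import math

def means(a, b):
    am = (a + b) / 2           # arithmetic mean
    gm = math.sqrt(a * b)      # geometric mean
    hm = 2 * a * b / (a + b)   # harmonic mean
    return am, gm, hm

am, gm, hm = means(6, 10)
assert am == 8.0 and hm == 7.5
assert hm <= gm <= am          # the classical ordering H <= G <= A
# 1/n is the harmonic mean of 1/(n-1) and 1/(n+1):
n = 5
assert abs(means(1 / (n - 1), 1 / (n + 1))[2] - 1 / n) < 1e-12
```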
Now let's get back to the harmonic series. What is the value of S? I did not know, so I programmed a small function and let the computer compute this sum. For n = 10^11 (more than a hundred billion terms), we got 25.91. In fact, this sum is infinite; the harmonic series is a divergent series.
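Such a function is a few lines of Python (a sketch of our own; summing 10^11 terms in plain Python would take hours, so here we sum a million terms and compare with the well-known approximation H_n ≈ ln n + 0.5772, the Euler–Mascheroni constant):

```python
import math

def harmonic(n):
    # 1 + 1/2 + ... + 1/n, summed smallest-first for floating-point accuracy
    return sum(1.0 / k for k in range(n, 0, -1))

n = 10**6
s = harmonic(n)
print(s)    # about 14.3927
assert abs(s - (math.log(n) + 0.5772156649)) < 1e-5
```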
How can we prove that? The divergence of the harmonic series was first proven in the 14th century by the French philosopher of the later Middle Ages Nicole Oresme (c. 1320 – 1382). Here is what he did:
S = 1 + 1/2 + 1/3 + 1/4 + ...

S > 1 + 1/2 + (1/4 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ...      (replace 1/3 by 1/4)
S > 1 + 1/2 + 1/2 + (1/5 + 1/6 + 1/7 + 1/8) + ...              (1/4 + 1/4 = 1/2)         (2.18.13)
S > 1 + 1/2 + 1/2 + (1/8 + 1/8 + 1/8 + 1/8) + ...              (replace 1/5, 1/6, 1/7 by 1/8)
                     \____________________/
                            = 1/2
So Oresme compared the harmonic series with another one which is divergent and smaller
than the harmonic series. Thus, the harmonic series must diverge. This proof, which used a
comparison test, is considered by many in the mathematical community to be a high point
of medieval mathematics. It is still a standard proof taught in mathematics classes today. Are
there other proofs? How about considering the function y = 1/x and the area under the curve y = 1/x? See Fig. 2.26. The area under this curve is infinite and yet it is smaller than the area of the rectangles in this figure. The area of the rectangles is exactly our sum S.
Figure 2.26: Calculus-based proof of the divergence of the harmonic series. The harmonic series and the area under the curve y = 1/x lead to a famous constant in mathematics. Can you find it?
It is interesting that one can get the harmonic series from a statics problem of hanging blocks (Fig. 2.27a). Let's say that we have two blocks and want to position them one on top of the other so that the top one has the largest overhang, but doesn't topple over. From statics, the way to do that is to place the top block (block 1) precisely halfway across the one underneath. In this way, the center of mass of the top block falls on the left edge of the bottom block. So, with two blocks, we can have a maximum overhang of 1/2.
With three blocks, we first have to find the center of mass of the two blocks 1 and 2. As shown in Fig. 2.27b, this center's x coordinate is 3/4 (check Section 7.8.7 for a refresher on how to determine the center of mass of an object). Now we place block 3 such that its left edge is exactly beneath that center. From that we can deduce that the overhang for the case of three blocks is 1/2 + 1/4. Continuing this way, it can be shown that the overhang is given by

1/2 + 1/4 + 1/6 + ... = (1/2)(1 + 1/2 + 1/3 + ...)

which is half of the harmonic series. Because the harmonic series diverges, it is possible to have an infinite overhang!
Figure 2.27: Stacking identical blocks with maximum overhang and its relation to the harmonic series.
Without loss of generality, the length of each block is one unit.
To understand why similar series possess different properties, let's put the geometric and the harmonic series together:

S_geo = 0 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + ...
S_har = 1 + 1/2 + 1/3 + 1/4 + 1/5  + 1/6  + 1/7 + 1/8 + ...

Now we can observe that the terms in the geometric series shrink much faster than those in the harmonic series: e.g., 1/64 = 0.015625 in the former, while the corresponding harmonic term is just 1/7 ≈ 0.142857.
The problem is named after Basel, the hometown of Euler as well as of the Bernoulli family, who unsuccessfully attacked the problem.
The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series:

S = 1 + 1/4 + 1/9 + 1/16 + ... + 1/k^2 + ... = ?      (2.18.14)
Before computing this series, let's see whether it converges. The idea is to compare this series with a larger series that converges. We compare the following series

S_1 = 1 + 1/(2·2) + 1/(3·3) + 1/(4·4) + ...
S_2 = 1 + 1/(1·2) + 1/(2·3) + 1/(3·4) + ...      (2.18.15)
And if S_2 converges to a finite value, then S_1 must converge to some smaller value, as S_1 < S_2 (term by term, 1/(k·k) < 1/((k-1)·k)). Indeed, we can re-write the partial sum of the second series as a telescoping sum (without the leading 1):

S_2(n) - 1 = 1/(1·2) + 1/(2·3) + 1/(3·4) + ... + 1/(n(n+1))
           = (1 - 1/2) + (1/2 - 1/3) + (1/3 - 1/4) + ... + (1/n - 1/(n+1))      (2.18.16)
           = 1 - 1/(n+1)  ⟹  S_2(n) = 2 - 1/(n+1)
When n approaches infinity, the denominator in 1/(n+1) approaches infinity and thus this fraction approaches zero. So, S_2 converges to two, and therefore S_1 converges to something smaller than two. Indeed, Euler computed this sum, first by considering the first, say, 100 terms, and found the sum was about 1.6349§. Then, using ingenious reasoning, he found that*

S = 1 + 1/4 + 1/9 + 1/16 + ... + 1/k^2 + ... = π^2/6
How did Euler come up with this result? He used the Taylor series expansion of sin x, and the infinite product expansion. See Section 4.14.7 for Euler's proof and Section 3.10 for Cauchy's proof. See Section 2.18.6 to see why mathematicians thought of this way to compute S_2.
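Partial sums of the Basel series are easy to reproduce (a sketch; names are ours). The first 100 terms already give Euler's ≈1.6349, but the tail decays like 1/n, so the digits of π²/6 come very slowly:

```python
import math

def basel_partial(n):
    # 1 + 1/4 + 1/9 + ... + 1/n^2
    return sum(1.0 / k**2 for k in range(1, n + 1))

assert abs(basel_partial(100) - 1.6349) < 1e-3        # Euler's hand value
assert abs(basel_partial(10**6) - math.pi**2 / 6) < 1e-5
```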
§
How Euler did this calculation without a calculator is another story. Note that the series converges very slowly, i.e., we need about one billion terms to get an answer with 8 correct decimals. Euler could not do that. But he was a genius; he had a better way. Check Section 4.17 for details.
* We have to keep in mind that at that time Euler knew, see Section 4.14.4, that another related series has a sum related to π:

π/4 = 1 - 1/3 + 1/5 - 1/7 + ...
In what follows, we present another proof. This proof is based on the following two lemmas:

S = (4/3) Σ_{n=1}^{∞} 1/(2n - 1)^2
∫_0^1 x^m ln x dx = -1/(m + 1)^2

which can be proved straightforwardly. Then, the sum in the Basel problem can be written as

S = (4/3) Σ_{n=1}^{∞} 1/(2n - 1)^2 = (4/3) Σ_{n=0}^{∞} 1/(2n + 1)^2 = -(4/3) Σ_{n=0}^{∞} ∫_0^1 x^{2n} ln x dx
  = -(4/3) ∫_0^1 ln x ( Σ_{n=0}^{∞} x^{2n} ) dx      (the sum is a geometric series)
  = -(4/3) ∫_0^1 ln x / (1 - x^2) dx
where in the first equality, we simply changed the dummy variable 2n - 1 to 2n + 1 as both represent odd numbers. In the second equality, we used the second lemma with 2n playing the role of m. In the third equality, we changed the order of summation and integration and finally we computed the sum, which is a geometric series 1 + x^2 + x^4 + ... Why does the geometric series appear here in the Basel problem? I do not know, but that is mathematics: once we have discovered some maths, it appears again and again, not only in maths but also in physics!
And thus, equating this area to the circle area, we get the following equation

π = 4/√2 = 2√2  ⟹  2/π = √2/2

A_16 = (1/2)(16)(2 sin(π/16) cos(π/16))
     = 8 sin(π/8) = 8 √((1 - √2/2)/2) = 4√2/√(2 + √2)

And thus,

π = 4√2/√(2 + √2)  ⟹  2/π = (√2/2) · (√(2 + √2)/2)
Viète was a typical child of the Renaissance in that he freely mixed the methods of classical Greek geometry with the new algebra and trigonometry. However, Viète did not know the concept of convergence and thus did not worry whether his infinite sequence of operations would blow up or not: one gets different values for π with different numbers of terms adopted (we discuss this issue in Section 2.19). For an engineer or scientist, for whom sloppy engineering mathematics is enough, we just need to write a code to check. But as far as mathematicians are concerned, they need a proof of the convergence/divergence of Viète's formula. The German and Swiss mathematician Ferdinand Rudio (1856–1929) proved the convergence in 1891.
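Rudio's worry can at least be probed numerically; the sketch below (variable names ours) multiplies successive factors of Viète's product 2/π = (√2/2)·(√(2+√2)/2)·... and watches the estimate settle:

```python
import math

t = math.sqrt(2.0)
product = t / 2.0              # first factor: sqrt(2)/2
for _ in range(30):            # each new factor refines the estimate
    t = math.sqrt(2.0 + t)     # sqrt(2), sqrt(2 + sqrt(2)), ...
    product *= t / 2.0
pi_estimate = 2.0 / product
assert abs(pi_estimate - math.pi) < 1e-12
```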
i                0   1   2   3   4   5
i^2              0   1   4   9   16  25
i^2 - (i-1)^2        1   3   5   7   9
In Section 2.5.1, we considered the sum 1 + 2 + 3 + ... + n. This sum consists of the evens and the odds, but we only know the sum of the odds. We can transform the evens to the odds: 2 = 1 + 1, 4 = 3 + 1 and so on:

S = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8
  = 1 + (1 + 1) + 3 + (3 + 1) + 5 + (5 + 1) + 7 + (7 + 1)
  = 2(1 + 3 + 5 + 7) + (1 + 1 + 1 + 1) = 2(8/2)^2 + 8/2

I just wanted to show that the motivation for the trick of considering k^2 - (k-1)^2 = 2k - 1 presented in Section 2.5.1 comes from Table 2.12.
Let’s now consider the sum of an infinite series that Huygens asked Leibniz to solve in 1670 :
When Leibniz went to Paris, he met the Dutch physicist and mathematician Christiaan Huygens. Once he realized that his own knowledge of mathematics and physics was patchy, he began a program of self-study, with Huygens as his mentor, that soon pushed him to make major contributions to both subjects, including discovering his version of the differential and integral calculus.
Since this sum is the sum of differences, it is equal to the difference between the first number
of the first row, which is one, and the last number (which is zero). But S is twice the sum of the
second row, thus S D 2.
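The series Huygens posed was the sum of the reciprocals of the triangular numbers, 1 + 1/3 + 1/6 + 1/10 + ...; a quick exact check of Leibniz's answer S = 2 (a sketch of our own, using the same telescoping idea):

```python
from fractions import Fraction

def reciprocal_triangular_sum(n):
    # T_k = k(k+1)/2, so 1/T_k = 2/(k(k+1)) = 2(1/k - 1/(k+1))
    return sum(Fraction(2, k * (k + 1)) for k in range(1, n + 1))

s = reciprocal_triangular_sum(1000)
assert s == 2 - Fraction(2, 1001)   # telescoped form; tends to 2
```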
And who was this Gottfried Wilhelm Leibniz? He would later become the co-inventor of calculus (the other was Sir Isaac Newton). And in the calculus that he developed, we have this fact: ∫_a^b f(x)dx = F(b) - F(a). What is this? On the LHS we have a sum (∫ is the twin brother of Σ) and on the RHS we have a difference! And this fact was discovered by Leibniz, the guy who played with sums of differences. What a nice coincidence!
where a_1 is the first term, a_2 is the second term, or generally a_n is the nth term. So, we use a_n to denote the nth term of the sequence and (a_n) to denote the whole sequence (instead of writing the longer expression a_1, a_2, a_3, ...). You will also see the notation {a_1, a_2, ..., a_n, a_{n+1}, ...}
in other books.
Let's study the sequence of the partial sums S_n = Σ_{i=1}^{n} 1/2^i of the geometric series in Eq. (2.18.3). That is the infinite list of numbers: S_1, S_2, S_3, ... We compute S_n for n = 1, 2, ..., 15, present the data in Table 2.14, and plot S_n versus n in Fig. 2.29. (Refer to Fig. 2.5 for an explanation of triangular numbers.)
What we observe is that as n gets larger and larger (in this particular example this is when n > 14), S_n gets closer and closer to one. We say that the sequence (S_n) converges to one and its limit is one. In symbols, it is written as

lim_{n→∞} S_n = 1      (2.19.2)
As the limit is finite (in other words the limit exists) the sequence is called convergent. A
sequence that does not converge is said to be divergent.
n     S_n
1     0.5
2     0.75
3     0.875
4     0.9375
5     0.96875
6     0.984375
7     0.992188
...   ...
14    0.999939
15    0.999969

Table 2.14: The sequence (S_n), S_n = Σ_{i=1}^{n} 1/2^i.      Figure 2.29: Plot of S_n versus n.
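Table 2.14 can be regenerated in a couple of lines (a sketch; the function name is ours):

```python
def S(n):
    # partial sum 1/2 + 1/4 + ... + 1/2^n, which equals 1 - 2^(-n) exactly
    return sum(1 / 2**i for i in range(1, n + 1))

for n in (1, 2, 3, 4, 5, 6, 7, 14, 15):
    print(n, S(n))
assert S(15) == 1 - 2**-15
assert abs(S(15) - 0.999969) < 1e-6
```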
In the previous discussion, our language was not precise, as we wrote "when n is larger and larger" (how large?) and "S_n gets closer to one" (how close?). Mathematicians love rigor, so they reword what we have written to define the statement "the limit of the sequence (a_n) is a":
So, the small positive number ε was introduced to precisely quantify how close a_n is to the limit a. The number N was used to precisely state when n is large enough. The symbol ∀ means "for all" or "for any". The symbol ∃ means "there exists".
Now, we can understand why 1 = 0.9999... Let S_1 = 0.9, S_2 = 0.99, S_3 = 0.999, and so on: S_n will stand for the decimal that has the digit 9 occurring n times after the decimal point. Now, in the sequence S_1, S_2, S_3, ..., each number is nearer to 1 than the previous one, and by going far enough along, we can make the difference as small as we like. To see this, consider
10 S_1 = 9    = 10 - 1
10 S_2 = 9.9  = 10 - 0.1
10 S_3 = 9.99 = 10 - 0.01

And thus,

lim_{n→∞} 10 S_n = lim_{n→∞} (10 - 1/10^{n-1}) = 10  ⟹  lim_{n→∞} S_n = 1, or 0.9999... = 1
2. Prove that lim_{n→∞} 1/n^2 = 0.

3. Prove that lim_{n→∞} 1/(n(n - 1)) = 0.

4. Prove that lim_{n→∞} n^2/(n^2 + 1) = 1.
First, we must ensure that we feel comfortable with the fact that all these limits (except the last one) are zero. We do not have to make tables and graphs as we did before (as we can do that in our heads now). The first sequence is 1, 1/2, 1/3, ..., 1/1 000 000, ... and obviously the sequence converges to 0. For engineers and scientists that is enough, but mathematicians need a proof, which is given here to introduce the style of limit proofs.
The proof is based on the definition of a limit, Eq. (2.19.3), of course. So, what are ε and N? We can pick any value for the former, say ε = 0.0001. To choose N we use |a_n| < 0.0001, or 1/n < 0.0001. This occurs for n > 10 000. So we have: with ε = 0.0001, for all n > N = 10 000, |a_n| < ε. Done! Not really; as ε = 0.0001 is just one particular case, mathematicians are not satisfied with this proof; they want a proof that covers all the cases. If we choose ε = 0.00012, then 1/ε = 8 333.333, not an integer; in this case, we just need N = 8 334. That is when the ceiling function comes in handy: ⌈x⌉ is the least integer greater than or equal to x. If there is a ceiling, then there should be a floor; the floor function is ⌊x⌋, which gives the greatest integer smaller than or equal to x.
Here is the complete proof. Let ε be any small positive number and select N as the least integer greater than or equal to 1/ε, i.e., N = ⌈1/ε⌉ using the new ceiling function. Then, for all n > N, we have 1/n < 1/N ≤ ε.
This is what mathematicians want.
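The recipe N = ⌈1/ε⌉ is concrete enough to execute; a small sketch (names ours):

```python
import math

def N_for(eps):
    # least integer N with 1/N <= eps, so that n > N forces 1/n < eps
    return math.ceil(1 / eps)

assert N_for(0.00012) == 8334
for eps in (1e-4, 0.00012, 1e-7):
    N = N_for(eps)
    for n in (N + 1, N + 2, 10 * N):
        assert 1 / n < eps
```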
And we can prove the second limit in the same way. But, we will find it hard to do the same
for the third and fourth limits. In this case, we need to find the rules or the behavior of general
limits (using the definition) first, then we apply them to particular cases. Often it works this way.
And it makes sense: if we know more about something we can have better ways to understand
it. In calculus, we do the same thing: we do not find the derivative of y D tan x directly but via
the derivative of sin x and cos x and the quotient rule.
Proof. Let ε be any small positive number. As (a_n) converges to a, there exists N_1 such that ∀n > N_1, |a_n - a| < ε/2 (why ε/2?). Similarly, as (b_n) converges to b, there exists N_2 such that ∀n > N_2, |b_n - b| < ε/2. Now, let's choose N = max(N_1, N_2) (so that after N terms, both sequences are within ε/2 of their corresponding limits); then ∀n > N, we have

|a_n - a| < ε/2 and |b_n - b| < ε/2  ⟹  |(a_n - a) + (b_n - b)| ≤ |a_n - a| + |b_n - b| < ε/2 + ε/2 = ε

where the middle step uses the triangle inequality.
This is exactly what Diego Maradona did. He kicked soccer balls. What did he do when he saw a tennis ball? He kicked it! Watch this youtube video.
Now equipped with more tools, we can solve other complex limit problems. For example,

lim_{n→∞} n^2/(n^2 + 1) = lim_{n→∞} 1/(1 + 1/n^2)                                  (algebra)
                        = (lim_{n→∞} 1) / (lim_{n→∞} (1 + 1/n^2))                  (quotient rule)
                        = (lim_{n→∞} 1) / (lim_{n→∞} 1 + lim_{n→∞} 1/n^2)          (summation rule for the denominator)
                        = 1/(1 + 0) = 1

We just needed to compute one limit: lim_{n→∞} 1/n^2 = 0. The key step is the first algebraic manipulation.
2.20 Inequalities
In mathematics, an inequality is a relation which makes a non-equal comparison between two
numbers or mathematical expressions. It is used most often to compare two numbers on the
number line by their size. There are several different notations used to represent different kinds
of inequalities:
The notation a < b means that a is less than b.
The notation a > b means that a is greater than b.
The notation a ≤ b means that a is less than or equal to b.
Inequalities are governed by the following properties:
(a) transitivity:    if a ≤ b and b ≤ c then a ≤ c
(b) addition:        if x ≤ y and a ≤ b then x + a ≤ y + b
(c1) multiplication: if x ≤ y and a ≥ 0 then ax ≤ ay      (2.20.1)
(c2) multiplication: if x ≤ y and a ≤ 0 then ax ≥ ay
(d) reciprocals:     if x ≤ y and xy > 0 then 1/x ≥ 1/y
I skip the proof of these simple properties herein. But if you find one which is not obvious you
should convince yourself by proving it.
Section 2.20.1 presents some simple inequality problems. Section 2.20.2 is about inequalities
involving the arithmetic and geometric means. The Cauchy-Schwarz inequality is introduced
in Section 2.20.3. Next, inequalities concerning absolute values are treated in Section 2.20.4.
Solving inequalities, e.g. finding x such that |x - 5| ≤ 3, is presented in Section 2.20.5. And
finally, how inequality can be used to solve equations is given in Section 2.20.6.
3. (10^1999 + 1)/(10^2000 + 1)  ?  (10^1998 + 1)/(10^1999 + 1)

4. 1999^1999  ?  2000^1998
One simple technique is to transform the given inequalities into easier ones. For the first problem, we square both sides:

19 + 99 + 2√(19 · 99)  ?  20 + 98 + 2√(20 · 98)
√(19 · 99)  ?  √(20 · 98)
19 · 99  ?  20 · 98 = (19 + 1) · 98
19 · 99  ?  19 · 98 + 98
19  ?  98

Now we know ? should be <, thus √19 + √99 < √20 + √98.
For the second problem, let's first get rid of the fractions by cross-multiplying:

1998/1999  ?  1999/2000
1998 · 2000  ?  1999^2

Now comes the trick; we replace 1999 by 0.5(1998 + 2000), and the solution follows immediately:

1998 · 2000  ?  ((1998 + 2000)/2)^2
4 · 1998 · 2000 < (1998 + 2000)^2

since (1998 + 2000)^2 - 4 · 1998 · 2000 = (2000 - 1998)^2 > 0; thus 1998/1999 < 1999/2000.
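Because Python integers are exact and unbounded, all four comparison problems can be checked directly (our own verification, not a proof in the book's sense):

```python
import math

# 1. sqrt(19) + sqrt(99) < sqrt(20) + sqrt(98)
assert math.sqrt(19) + math.sqrt(99) < math.sqrt(20) + math.sqrt(98)
# 2. 1998/1999 < 1999/2000, by exact cross-multiplication
assert 1998 * 2000 < 1999**2
# 3. (10^1999 + 1)/(10^2000 + 1) < (10^1998 + 1)/(10^1999 + 1)
assert (10**1999 + 1)**2 < (10**2000 + 1) * (10**1998 + 1)
# 4. 1999^1999 > 2000^1998
assert 1999**1999 > 2000**1998
```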
Now, we show that using the AM-GM inequality for 4 numbers, we can get the AM-GM inequality for 3 numbers. The idea is of course to remove d so that only three numbers a, b, c are left. Using d = (a + b + c)/3 and the AM-GM inequality for 4 numbers, we have

(a + b + c + (a + b + c)/3) / 4 ≥ (abc · (a + b + c)/3)^(1/4)      (2.20.3)

which is equivalent to

(a + b + c)/3 ≥ (abc · (a + b + c)/3)^(1/4)
(a_1 + a_2 + ... + a_n)/n ≥ (a_1 a_2 ... a_n)^(1/n)      (2.20.4)
I present a proof of this inequality by the French mathematician, civil engineer, and physicist Augustin-Louis Cauchy (1789 – 1857), given in his Cours d'analyse. This book is frequently noted as being the first place that inequalities, and ε-δ arguments, were introduced into calculus. Judith Grabiner wrote that Cauchy was "the man who taught rigorous analysis to all of Europe". The AM-GM inequality is a special case of the Jensen inequality discussed in Section 4.5.2.
Cauchy used a forward-backward induction. In the forward step, he proved the AM-GM inequality for n = 2^k for any counting number k. This is a generalization of what we did for the n = 4 case. In the backward step, assuming that the inequality holds for n = k, he proved that it holds for n = k - 1 too.
Proof. Cauchy's forward-backward induction of the AM-GM inequality. Forward step. Assume the inequality holds for n = k; we prove that it holds for n = 2k. As the inequality is true for k numbers, we can write

(a_1 + a_2 + ... + a_k)/k ≥ (a_1 a_2 ... a_k)^(1/k)
(a_{k+1} + a_{k+2} + ... + a_{2k})/k ≥ (a_{k+1} a_{k+2} ... a_{2k})^(1/k)
This is the term we need to appear.
of her husband by her brother she fled to a haven near Tunis. There she asked the local leader,
Yarb, for as much land as could be enclosed by the hide of a bull. Since the deal seemed very
modest, he agreed. Dido cut the hide into narrow strips, tied them together and encircled a
large tract of land which became the city of Carthage (Fig. 2.31). Dido knew the isoperimetric
problem!
Another isoperimetric problem is ‘Among all planar shapes with the same perimeter the
circle has the largest area.’ How can we prove this? We present a simple ‘proof’:
1. Among triangles of the same perimeter, an equilateral triangle has the maximum area;
2. Among quadrilaterals of the same perimeter, a square has the maximum area;
3. Among pentagons of the same perimeter, a regular pentagon has the maximum area;
4. Given the same perimeter, a square has a larger area than an equilateral triangle;
5. Given the same perimeter, a regular pentagon has a larger area than a square
We can verify these results. And we can see where this reasoning leads us: given a perimeter, a regular polygon with infinitely many sides has the largest area, and that special polygon is nothing but our circle!
Table 2.15: Given two whole numbers such that n C m D 10 what is the maximum of nm.
n m nm
1 9 9
2 8 16
3 7 21
4 6 24
5 5 25
Now, let's solve the following problem: assume that a, b, c, d are positive integers with a + b + c + d = 63; find the maximum of ab + bc + cd. This is clearly an isoperimetric problem. The term A = ab + bc + cd is not nice to a and d in the sense that a and d appear only once. So, let's bring justice to them (or make the term symmetrical): A = ab + bc + cd + da - da. A bit of algebra leads to A = (a + c)(b + d) - da.
Now we visualize A as in Fig. 2.32. The problem becomes: maximize the area of the big rectangle and minimize the small area ad. The small area is 1 when a = d = 1. Now the problem becomes easy.
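With numbers this small, the claim can also be brute-forced (our own check, not from the book):

```python
# Maximize ab + bc + cd over positive integers with a + b + c + d = 63.
best = 0
for a in range(1, 61):
    for b in range(1, 62 - a):
        for c in range(1, 63 - a - b):
            d = 63 - a - b - c          # d >= 1 by the loop bounds
            best = max(best, a*b + b*c + c*d)
assert best == 991                       # attained at a = d = 1, b = 31, c = 30
```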
Figure 2.32
The proof of these inequalities is straightforward. Just expand all the terms, and we will end up with (ay - bx)^2 ≥ 0 for the first inequality and (ay - bx)^2 + (az - cx)^2 + (bz - cy)^2 ≥ 0 for the second inequality, which are certainly true. Can we have a geometric interpretation of (ax + by)^2 ≤ (a^2 + b^2)(x^2 + y^2)? Yes, see Fig. 2.33; the area of the parallelogram EFGH is the area of the big rectangle ABCD minus the areas of all the triangles:
3. Example 3. For a, b, c, d > 0, prove that 1/a + 1/b + 4/c + 16/d ≥ 64/(a + b + c + d).

Example 1: use Eq. (2.20.5) for (a^2 b + b^2 c + c^2 a)(ab^2 + bc^2 + ca^2) to get (a^2 b + b^2 c + c^2 a)(ab^2 + bc^2 + ca^2) ≥ ..., and use the 3-variable AM-GM inequality for the ... Example 2: direct application of Eq. (2.20.5) after writing 3(a + b + c) as (1^2 + 1^2 + 1^2)((√a)^2 + (√b)^2 + (√c)^2).
About Example 4, even though we know we have to use the AM-GM inequality and the Cauchy–Schwarz inequality, it is very hard to find the way to apply these inequalities. Then, I thought: why don't I reverse engineer this problem, i.e., generate it from a fundamental fact? Let's do it and see what happens.
Let x, y, z > 0 and xyz = 1; using the AM-GM inequality we then immediately have

x + y + z ≥ 3 (xyz)^(1/3) = 3
Now, to generate a new inequality involving S (we are working out S) from the above fundamental inequality, we do: the LHS is in the form (ax + by + cz)^2, so we think of the Cauchy–Schwarz inequality. Of course we rewrite 1 by something else, 1 = √(y + z)/√(y + z) etc.,

A := ( √(y + z) · x/√(y + z) + √(z + x) · y/√(z + x) + √(x + y) · z/√(x + y) )^2

By Cauchy–Schwarz, A ≤ 2(x + y + z)(x^2/(y + z) + y^2/(x + z) + z^2/(x + y)); since A = (x + y + z)^2 and x + y + z ≥ 3, it follows that

x^2/(y + z) + y^2/(x + z) + z^2/(x + y) ≥ 3/2
And this inequality can be a good exercise for a test, but not for the IMO, as it is too obvious with the square terms. Now a bit of transformation will give us another inequality (note that xyz = 1):
Proof. Now is the time to prove Eq. (2.20.6). Let's start with the simplest case:

(a_1 b_1 + a_2 b_2)^2 ≤ (a_1^2 + a_2^2)(b_1^2 + b_2^2)

We consider the following function||, which is always non-negative:

f(x) = (a_1 x + b_1)^2 + (a_2 x + b_2)^2 ≥ 0 for all x

We expand this function to write it as a quadratic:

f(x) = (a_1^2 + a_2^2) x^2 + 2(a_1 b_1 + a_2 b_2) x + (b_1^2 + b_2^2)

Now we compute the discriminant of this quadratic:

Δ = 4[(a_1 b_1 + a_2 b_2)^2 - (a_1^2 + a_2^2)(b_1^2 + b_2^2)]

As f(x) = 0 has no roots or at most one root, we have Δ ≤ 0. And that concludes the proof. For the general case Eq. (2.20.6), just consider the function f(x) = (a_1 x + b_1)^2 + (a_2 x + b_2)^2 + ... + (a_n x + b_n)^2.
What happened to IMO winners? One important point is that the IMO, like almost all other mathematical olympiad contests, is a timed exam concerning carefully-designed problems with solutions. Real mathematical research is almost never dependent on whether you can find the right idea within the next three hours. In real maths research it might not even be known which questions are the right ones to ask, let alone how to answer them. Producing original mathematics requires creativity, imagination and perseverance, not the mere regurgitation of knowledge and techniques learned by rote memorization.
We should be aware of the phenomenon of 'burn-out', which causes a lot of promising young mathematicians (those who might be privately tutored and entered for the IMO by pushy, ambitious parents) to become disenchanted with mathematics and drop it as an interest before they even reach university. It is best to let the kids follow their interests.
|| How did mathematicians know to consider this particular function? No one knows.
|x| < 3

which means finding all values of x so that the distance of x from zero is less than 3. With a simple picture (Fig. 2.34a), we can see that the solutions are:

-3 < x < 3 or x ∈ (-3, 3)

We have also presented the solutions using the set notation x ∈ (-3, 3). The notation (a, b) indicates all numbers x such that a < x < b. It is called an open interval as the two ends (i.e., -3 and 3) are not included. The symbol ∈ means "belongs to". We will have more to say about sets in Section 2.30.
-6 ≤ 2x + 3 ≤ 6  ⟺  -6 - 3 ≤ 2x ≤ 6 - 3  ⟺  -9/2 ≤ x ≤ 3/2
x ∈ [-9/2, 3/2]
x ≤ -3 or x ≥ 3
Triangle inequality. Now comes probably the most important inequality involving absolute values:
This inequality is used extensively in proving results regarding limits, see Section 4.10. (We actually used it already in Section 2.19.) Why are triangles involved here? It comes from the fact that for a triangle the length of one side is smaller than the sum of the lengths of the other two sides. Using the language of vectors, see Section 10.1, this is expressed as

||a + b|| ≤ ||a|| + ||b||

Note the similarity of Eq. (2.20.9) with the above inequality. That explains the name.
Combined with the condition on x so that the inequality makes sense, we have the final solution: -0.5 ≤ x < 45/8 and x ≠ 0. Alternatively, using the set notation, we can write the solution as (drawing a figure like Fig. 2.34 would help):

x ∈ [-1/2, 0) ∪ (0, 45/8)
The first approach is to square both sides to get rid of the square root. Doing so results in a fourth-order polynomial equation, which is something we should avoid. Let's see if there is an easier way. Note that the RHS is always smaller than or equal to 2. How about the LHS? It is equal to (x + 1)^2 + 2, which is always bigger than or equal to 2. So, we have an equation in which the LHS ≥ 2 and the RHS ≤ 2. The only possibility is both of them being equal to two:

(x + 1)^2 + 2 = 2 and √(4 - x^2) = 2  ⟺  x = -1 and x = 0

There are no real solutions! If you prefer a visual solution: the LHS is a parabola facing up with a vertex at (-1, 2) while the RHS is a semi-circle centered at (0, 0) with radius 2, above the x-axis. These two curves do not intersect! Of course this 'faster' method would not work if the number 3 in the LHS were replaced by another number so that the two curves intersect.
where the right column is the inverse of the operations in the left column. An inverse operation undoes the operation. Starting with the number 2, pressing the x^2 button on a calculator gives you 4, and pressing the √x button (on 4) gives you back 2.
This is a powerful way to see subtraction, division and taking roots. For example, we do not have to worry about subtraction as a totally new operation; in fact subtraction is merely the inverse of addition. Later on, when you learn linear spaces, you will see that only addition is defined for linear spaces. This is because 5 - 3 is simply 5 + (-3). Actually we do inverse operations daily; for example when we put shoes on and take them off.
2.22 Logarithm
The question "which number raised to the power 2 gives 4?" (i.e., x^2 = 4) gave us the square root. And a similar question, "2 raised to which index gives 4?" (that is, find x such that 2^x = 4), gave us the logarithm. We summarize these two questions and the associated operations now

x^2 = 4  ⟹  x = √4
2^x = 4  ⟹  x = log_2 4      (2.22.1)

Looking at this, we can see that logarithm is not a big deal; it is just the inverse of 2^x in the same manner as the square root is the inverse of x^2.
For the notation log_2 4, we read "logarithm base 2 of 4". You can understand these two equations by using a calculator. Starting with the number 2, pressing the x^2 button gives you 4, and pressing the √x button (on 4) gives you back 2; that's why it is an inverse. Similarly, starting with 2, pressing the button 2^x yields 4 and pressing the button log_2 x returns 2. Historically, logarithm was discovered in an attempt to replace multiplication by summation, as the latter is much easier than the former; see Section 2.22.1. It was invented by the Scottish mathematician, physicist, and astronomer John Napier (1550 – 1617) in the early 17th century.
After this new log_a b was discovered, we need to find the rules for it. If you play with logarithms for a while, you will discover those rules yourself. For example, consider the geometric progression (GP) 2, 4, 8, 16, 32, 64, 128 (with r = 2); the corresponding logarithms (base 2) form an arithmetic progression (AP) 1, 2, 3, 4, 5, 6, 7, see Table 2.16.
From this table, we see that log2 32 = log2(4 × 8) = log2 4 + log2 8 and log2(64/2) = log2 64 − log2 2. By playing with them long enough, people (and you could too, given the chance) discovered the following rules for logarithms:
If musicians can unbreak one’s heart, mathematicians can too.
The story is very interesting; see [22] for details. In 1590, James VI of Scotland sailed to Denmark to meet Anne of Denmark, his prospective wife, accompanied by his physician, Dr John Craig. Bad weather forced the party to land on Hven, near Tycho Brahe's observatory. Quite naturally, Brahe demonstrated to the party the process of using trigonometric identities to replace multiplication by summation. And Dr Craig happened to have a particular friend whose name was John Napier. With that, Napier set out on the task of his life: developing a method to ease multiplication. Twenty years later he had succeeded. And we have logarithms.
    x       2   4   8   16   32   64   128
    log2 x  1   2   3    4    5    6     7
(a) log_a a^b = b

(b) Product rule: log_a(bc) = log_a b + log_a c

(c) Quotient rule: log_a(b/c) = log_a b − log_a c        (2.22.2)
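These rules are easy to check numerically. A small Python sketch (standard library only; the bases and arguments are arbitrary choices for illustration):

```python
import math

# Check the three logarithm rules numerically for a few bases and arguments.
for a in (2, 3, 10):
    for b in (0.5, 4, 8):
        # (a) log_a(a^b) = b
        assert math.isclose(math.log(a ** b, a), b, rel_tol=1e-9)
        for c in (2, 5, 7):
            # (b) product rule: log_a(bc) = log_a(b) + log_a(c)
            assert math.isclose(math.log(b * c, a),
                                math.log(b, a) + math.log(c, a), rel_tol=1e-9)
            # (c) quotient rule: log_a(b/c) = log_a(b) - log_a(c)
            assert math.isclose(math.log(b / c, a),
                                math.log(b, a) - math.log(c, a), rel_tol=1e-9)
print("all three rules hold numerically")
```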
We are going to prove these rules. The first one, log_a a^b = b, comes straight from the definition of logarithm: a^x = b ⟹ x = log_a b. To prove the product rule, we first work out a proof for the particular case log2(4 × 8), to get confident that the rule is correct, and then use this particular proof as a template for the general proof.
It is obvious that log2(4 × 8) = log2 32 = 5 because 2^5 = 32. We can also proceed as follows:
And thus we have proved the product rule for the concrete case a = 2, b = 4 and c = 8. The key step in this proof was to rewrite 4 = 2^2 and 8 = 2^3, i.e., expressing 4 and 8 in terms of powers of 2. The same idea is used in the following proof of the product rule:
Proof of the power rule 1. The proof of the power rule 1 uses the product rule (first consider the case where p is a positive integer):

    log_a b^p = log_a (b × b × ⋯ × b)   [p times]
              = log_a b + log_a b + ⋯ + log_a b = p log_a b
Interestingly, this rule also works when p is a negative integer, i.e., p = −q where q is a counting number. To see that we need to observe that log_a(1/b) = −log_a b. Why? See Table 2.17. In this table, we have extrapolated from what we know to be true to the cases we are not sure about. We did this because we believe (again) in patterns. Indeed, log2(1/4) = −log2 4 = −2. Another way to prove log_a(1/b) = −log_a b is this: 0 = log_a 1 = log_a(b · (1/b)) = log_a b + log_a(1/b).
    x       1/4  1/2   1   2   4   8   16
    log2 x   -2   -1   0   1   2   3    4
The proof of the quotient rule uses the product rule and the power rule 1:

    log_a(b/c) = log_a(b · c^(−1)) = log_a b + log_a c^(−1) = log_a b − log_a c
Proof. Proof of the power rule 2 (with rational index). Setting u = b^(m/n), then u^n = b^m. Thus,

    log_a u^n = n log_a u
    log_a b^m = n log_a b^(m/n)        (using u^n = b^m and u = b^(m/n))
    m log_a b = n log_a b^(m/n)

and therefore log_a b^(m/n) = (m/n) log_a b.
It is often the case that we need to change the base of a logarithm. Let's find the formula for that. The idea, as always, is to play with the numbers and find a pattern. So, we compute the logarithm with two bases (2 and 3) for some positive integers and put the results in Table 2.18. But hey, we do not know how to compute, let's say, log3 5! I was cheating here: I used a calculator. We shall come back to this question shortly.
Table 2.18: Logarithms bases 2 and 3 of 3,5,6,7 and their ratios (last row).
From this table, we can see that log2 x / log3 x = α, where α is a constant. We aim to find this constant. Let's denote log2 x = y, thus x = 2^y; then we can compute log3 x in terms of y as

    log3 x = log3 2^y = y log3 2 = log2 x · log3 2

We are cheating a bit here, as we have used the power rule for logarithms, log_a b^p = p log_a b, even when p is not a whole number (y is real here). Luckily for us, this rule is valid for real p, but to show that we need calculus (see Chapter 4, Section 4.4.14). There is nothing special about the bases 2 and 3 here, so we can generalize the above result to arbitrary bases a and b:

    log_a x = log_a b · log_b x,   or   log_a b = log_a x / log_b x
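This is exactly how one computes log3 5 in practice (the 'cheating' above). A quick numerical check of the change-of-base formula, a sketch using only Python's standard library:

```python
import math

# log3(5) obtained from base-2 logarithms via log_b(x) = log_a(x) / log_a(b)
log3_5 = math.log2(5) / math.log2(3)
assert math.isclose(log3_5, math.log(5, 3), rel_tol=1e-12)
print(log3_5)   # about 1.465

# The ratio log2(x) / log3(x) is the same constant for every x, namely log2(3).
alpha = math.log2(3)
for x in (3, 5, 6, 7):
    assert math.isclose(math.log2(x) / math.log(x, 3), alpha, rel_tol=1e-12)
```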
    log8 2 = x  ⟹  2 = 8^x = (2^3)^x = 2^(3x)  ⟹  3x = 1, or x = 1/3

    log3(1/243) = log3 3^(−5) = −5

    log_√3 9 = x  ⟹  3^(x/2) = 3^2  ⟹  x = 4
When the English mathematician Henry Briggs learned in 1616 of the invention
of logarithms by John Napier, he determined to travel the four hundred miles north
to Edinburgh to meet the discoverer and talk to him in person.
A common argument for the use of technology is that it frees students from doing boring,
tedious calculations, and they can focus attention on more interesting and stimulating conceptual
matters. This is wrong. Mastering “tedious” calculations frequently goes hand-in-hand with a
deep connection with important mathematical ideas. And that is what mathematics is all about,
is it not?
To show the usefulness of logarithms, assume we have to compute the product 18793.26 × 54778.18 (without a calculator, of course). Using logarithms turns this multiplication problem into a summation one:

    log10(18793.26 × 54778.18) = log10 18793.26 + log10 54778.18

Assume that we know the logs of 18793.26 and 54778.18 (we will come to how to compute them in a minute; Briggs provided tables for such values, and nowadays we no longer need them), then sum them to get A. Finally, the product we are looking for is simply 10^A (there were/are tables for this too, and thus we obtain the product just by summing two numbers).
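The whole scheme fits in a few lines of Python; here `math.log10` and `10 ** A` stand in for Briggs' log and antilog tables:

```python
import math

a, b = 18793.26, 54778.18

# Look up the two logs (the log table), add them...
A = math.log10(a) + math.log10(b)
# ...and look up the antilog (the 10^A table) to recover the product.
product_via_logs = 10 ** A

assert math.isclose(product_via_logs, a * b, rel_tol=1e-12)
print(product_via_logs, a * b)
```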
Writing x^(1/2^n) = 1 + s, we get

    log x = 2^n log(1 + s) ≈ 2^n α s  ⟹  log x ≈ 2^n α (x^(1/2^n) − 1)        (2.22.4)
Herman Heine Goldstine (1913 – 2004) was a mathematician and computer scientist, who worked as the
director of the IAS machine at Princeton University’s Institute for Advanced Study, and helped to develop ENIAC,
the first of the modern electronic digital computers. He subsequently worked for many years at IBM as an IBM
Fellow, the company’s most prestigious technical position.
Table 2.19: Successive square roots of 10: 10^s with s = 1/2^n.

    n     s = 1/2^n        10^s           s/(10^s − 1)    (10^s − 1)/s
    1     0.5              3.16227766
    2     0.25             1.77827941
    3     0.125            1.33352143
    4     0.0625           1.15478198
    5     0.03125          1.07460783                     2.38745051
    6     0.015625         1.03663293                     2.34450742
    7     0.0078125        1.01815172                     2.32342038
    8     0.00390625       1.00903504                     2.31297148
    ...
    10    0.00097656       1.00225115     0.43380638      2.30517585
    11    0.00048828       1.00112494     0.43405039      2.30387999
    12    0.00024414       1.00056231     0.43417242      2.30323242
    13    0.00012207       1.00028112     0.43423345
    ...
    20    9.53674316e-7    1.00000219     0.434294005     2.30258762
    10^x ≈ 1 + kx   (for small x)        (2.22.5)

which can be seen from Table 2.19. And we have k = 1/α. With calculus, we will learn that k = ln 10 (ln x is the logarithm base e).
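Table 2.19 can be regenerated in a few lines of Python; the last ratio visibly approaches k = ln 10 ≈ 2.302585:

```python
import math

# Successive square roots of 10: s = 1/2^n, so 10^s -> 1 as n grows,
# and (10^s - 1)/s approaches the constant k in 10^x ≈ 1 + k*x.
for n in (1, 5, 10, 20, 30):
    s = 1 / 2 ** n
    print(n, 10 ** s, (10 ** s - 1) / s)

assert abs((10 ** (1 / 2 ** 30) - 1) * 2 ** 30 - math.log(10)) < 1e-6
```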
    4^x − 3 · 2^x + 2 = 0
Looking at the red numbers, you see that they are related: 5 = 10/2. If we pick a = 10, we get nice numbers. Taking log10 of both sides of the equation 16^((x−1)/x) · 5^x = 100:

    4·((x−1)/x)·log10 2 + x log10(10/2) = 2
    ⟺ 4·((x−1)/x)·log10 2 + x(1 − log10 2) = 2
    ⟺ (1 − log10 2)x^2 + (4 log10 2 − 2)x − 4 log10 2 = 0

Finally, we get a quadratic equation in x, even though the coefficients are a bit scary. Don't worry: this is an exercise, so the answers are usually of a compact form. Using the quadratic formula (the discriminant turns out to be exactly 4), we have:

    x = (2 − 4 log10 2 ± 2) / (2(1 − log10 2))  ⟹  x = 2   or   x = −(4 log10 2)/(2 log10 5) = −log10 4 / log10 5

That's it! We used the fundamental property of logarithms to get a quadratic equation. If the numbers 16, 5, 100 are replaced by others, we still get a quadratic equation.
Can we find another, easier solution? Yes. Divide the original equation by 100, factor 100 = 4 × 5^2, and then take logarithm base 10:

    16^((x−1)/x) · 5^x / 100 = 1  ⟺  16^((x−1)/x) · 5^x / (4 × 5^2) = 1
    ⟺  4^((x−2)/x) · 5^(x−2) = 1
    ⟺  ((x−2)/x) log10 4 + (x−2) log10 5 = 0
    ⟺  (x−2)(log10 4 + x log10 5) = 0
Well, this one is non-standard, and using the AM-GM inequality is the key, as the LHS is always greater than or equal to 4! If the RHS were 5 instead of 4, then we would have to use the graphical method (plot the function on the LHS and see where it intersects the horizontal line y = 5) or Newton's method.
That's the key point, as 4 and 5 appear in 16^(1/x) and 5^x. Don't forget that 16 = 4^2.
The solution of the first two equations is x = 2. In the first equation, pay attention to the exponents: they're related! In the second one, it is easy to see that x = 2 is one solution; you need to prove it's the only solution.
Figure 2.35: Complex plane: the horizontal axis represents the real (Re) part and the vertical axis repre-
sents the imaginary (Im) part.
Definition 2.23.1
A complex number z is one of the form z = a + bi, where a and b are real numbers and i = √−1 is the imaginary unit; a is called the real part, and b is called the imaginary part.
Geometrically, a complex number is a point in the complex plane, shown in Fig. 2.35.
The adjective complex in complex numbers indicates that a complex number has more than one part; it does not mean complicated.
As with any new kind of number, we need to define arithmetic rules for complex numbers. We first list the rules for addition/subtraction and multiplication as follows.
How were these rules defined? It depends. In the first way, we assume that the rules of arithmetic for ordinary numbers also apply to complex numbers; then there is no mystery behind Eq. (2.23.1): we treat i as an ordinary number, and whenever we see i^2 we replace it by −1 (hence i^3 = i^2 · i = −i). In the second way, one first defines the addition and multiplication of two vectors. The rule for addition follows the rule of vector addition (known since antiquity from physics), see Fig. 2.36a. It was Wessel's genius to discover/define the multiplication of two vectors: the resulting vector has a length equal to the product of the lengths of the two vectors and a direction equal to the sum of the directions of the two vectors (measured from a horizontal line), see Fig. 2.36b. How did he get this multiplication rule? As I am not good at geometry, I do not want to study his solution. But do not worry: with a new way to represent points on a plane, his rule will reveal its mystery to us!
For a point on a plane, there are many ways to define its location. We have used Cartesian coordinates so far, but we can also use polar coordinates. Polar coordinates lead to the so-called polar form of complex numbers. This is easy to obtain: just relate the Cartesian coordinates (a, b) to the polar coordinates (r, θ).
(a) (b)
Figure 2.37: Polar form of a complex number: z D a C bi D r.cos C i sin / and complex conjugate.
Definition 2.23.2
The polar form of a complex number z is given by z = r(cos θ + i sin θ), where r = √(a^2 + b^2) is called the modulus of z and θ, with tan θ = b/a, is the argument of the complex number, see Fig. 2.37a. More compactly, people also write z = r∠θ.
Using the polar form, the multiplication of two complex numbers z1 = r1(cos α + i sin α) and z2 = r2(cos β + i sin β) is written as

    z1 z2 = r1(cos α + i sin α) · r2(cos β + i sin β)
          = r1 r2 [(cos α cos β − sin α sin β) + i(sin β cos α + sin α cos β)]        (2.23.2)
          = r1 r2 [cos(α + β) + i sin(α + β)]
From this, the geometric meaning of the multiplication of two complex numbers is obtained, effortlessly and without any geometric stroke of genius! With Euler's identity e^{iθ} = cos θ + i sin θ (see Section 2.23.5), it is even easier to see the geometric meaning of complex number multiplication:

    z1 = r1(cos α + i sin α) = r1 e^{iα},  z2 = r2(cos β + i sin β) = r2 e^{iβ}  ⟹  z1 z2 = r1 r2 e^{i(α+β)}
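Python's built-in complex numbers make Wessel's rule easy to check: moduli multiply and arguments add. A small sketch (the two sample numbers are arbitrary, chosen so the sum of arguments stays within (−π, π]):

```python
import cmath
import math

z1 = 2 + 3j
z2 = -1 + 4j
r1, a1 = cmath.polar(z1)   # modulus and argument of z1
r2, a2 = cmath.polar(z2)

r, a = cmath.polar(z1 * z2)

assert math.isclose(r, r1 * r2, rel_tol=1e-12)   # lengths multiply
assert math.isclose(a, a1 + a2, rel_tol=1e-12)   # directions add
```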
But we know where −1 sits: to the left of the origin at a distance of one. In other words, −1 = 1∠180°, thus:

    1∠180° = (r∠θ)^2 = r^2∠2θ  ⟹  r = 1,  θ = 90°

And thus √−1 sits on an axis perpendicular to the horizontal axis, at a unit distance from the origin; there sits √−1, which is now designated by the iconic symbol i (standing for imaginary):

    i := √−1 = 1∠90°

But that is just one i: if we go around once (or any number of rounds) starting from i, we get back to it. So,

    i = cos(π/2 + 2kπ) + i sin(π/2 + 2kπ),  k ∈ N        (2.23.3)
Question 3. If i rotates a vector in the complex plane, then what rotates a vector in 3D space? This was the question that led the Irish mathematician William Hamilton (1805–1865) to the development of quaternions, to be discussed in Section 10.1.6.
Complex conjugate. The complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign (Fig. 2.37b). That is (if x and y are real), the complex conjugate of x + yi is x − yi. The complex conjugate of z is often denoted z̄ (read as 'z bar'). In polar form, the conjugate of re^{iθ} is re^{−iθ}, which can be shown using Euler's formula. The product of a complex number and its conjugate is a real number: (x + yi)(x − yi) = x^2 + y^2. In other words, z z̄ = |z|^2.
Below is a summary of some of the properties of conjugates. The proofs just follow from the definition of the conjugate.
(c) The complex conjugate of a product is the product of the conjugates: the conjugate of zw equals z̄ · w̄
    z^2 = z · z = r^2 [cos 2α + i sin 2α]
    z^3 = z^2 · z = r^3 [cos 3α + i sin 3α]        (2.23.4)

which can be generalized to z^n = r^n [cos nα + i sin nα], where n is any positive integer. When r = 1 this formula simplifies to:

    (cos α + i sin α)^n = cos nα + i sin nα

a useful formula known as de Moivre's formula (also known as de Moivre's theorem and de Moivre's identity), named after the French mathematician Abraham de Moivre (1667–1754). Refer to Section 2.23.5 to see how it leads to the famous Euler's identity: e^{iπ} + 1 = 0.
It is obvious that the next thing to do is to consider negative powers, e.g. z^{−2}. To do so, let's start simple with z^{−1}, which can be computed straightforwardly. We have z = a + bi = r(cos θ + i sin θ). We can compute z^{−1} using algebra as:

    z^{−1} = 1/z = 1/(a + bi) = (a − bi)/(a^2 + b^2) = (1/r)(cos θ − i sin θ)

Thus, we get

    [r(cos θ + i sin θ)]^{−1} = (1/r)(cos θ − i sin θ) = r^{−1}[cos(−θ) + i sin(−θ)]

which shows that de Moivre's formula still works for n = −1.
Alright, we're ready to compute any negative power of a complex number. For example, z^{−2} is given by

    z^{−2} = (z^{−1})^2 = (1/r^2)(cos θ − i sin θ)^2 = (1/r^2)(cos 2θ − i sin 2θ)        (2.23.6)

Now we're confident that de Moivre's formula holds for any integer. If you want to prove it, you can use proof by induction.
    [cos(α/m) + i sin(α/m)]^m = cos α + i sin α        (2.23.7)

which immediately gives us the formula to compute the m-th root of any complex number:

    [r(cos α + i sin α)]^{1/m} = r^{1/m} [cos(α/m) + i sin(α/m)]        (2.23.8)

This is sometimes also referred to as de Moivre's formula.
As the first application of this new formula, we use Eq. (2.23.8) to prove that ∛(2 + √−121) = 2 + i.

Proof. First, we write the number under the cube root in the polar form of a complex number; then we use Eq. (2.23.8) to get the answer:

    z = 2 + √−121 = 2 + 11i = 11.18034 (cos 1.39094283 + i sin 1.39094283)
    z^{1/3} = ∛11.18034 (cos 0.46364761 + i sin 0.46364761) = 2 + i

Note that −121 = 11^2 i^2, thus √−121 = 11i.
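A one-line check with Python's complex arithmetic confirms both the claim and the de Moivre computation:

```python
import cmath

# (2 + i)^3 should equal 2 + 11i, i.e. 2 + i is a cube root of 2 + sqrt(-121)
assert abs((2 + 1j) ** 3 - (2 + 11j)) < 1e-9

# The same root via Eq. (2.23.8): r^(1/3) * (cos(theta/3) + i*sin(theta/3))
r, theta = cmath.polar(2 + 11j)          # r ~ 11.18034, theta ~ 1.39094283
root = r ** (1 / 3) * cmath.exp(1j * theta / 3)
assert abs(root - (2 + 1j)) < 1e-9
```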
As another application of Eq. (2.23.8), we are going to compute the fifth roots of one. We will also do it using algebra, and demonstrate that the two approaches yield identical results. First, we write 1 = cos 2kπ + i sin 2kπ, k = 0, 1, 2, …. Then,

    1^{1/5} = (cos 2kπ + i sin 2kπ)^{1/5} = cos(2kπ/5) + i sin(2kπ/5)
Thus, the four non-trivial fifth roots of 1 are (note that k = 0 gives the obvious answer of 1):

    k = 1:  cos(2π/5) + i sin(2π/5) =  0.309017 + 0.9510565i
    k = 2:  cos(4π/5) + i sin(4π/5) = −0.809017 + 0.5877853i
    k = 3:  cos(6π/5) + i sin(6π/5) = −0.809017 − 0.5877853i        (2.23.9)
    k = 4:  cos(8π/5) + i sin(8π/5) =  0.309017 − 0.9510565i
As can be seen, these five roots are the vertices of a pentagon inscribed in the unit circle, see Fig. 2.38. What else can we say about them? Among the four complex roots, two are in the upper half of the circle and the other two are in the bottom half: they are the conjugates of the ones in the upper half. In Section 2.28.2 a proof is provided.
Figure 2.38: Fifth roots of one are vertices of a pentagon inscribed in the unit circle.
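The five roots (and the picture behind Fig. 2.38) can be generated directly from de Moivre's formula; a short Python sketch:

```python
import cmath

# Fifth roots of 1: cos(2*pi*k/5) + i*sin(2*pi*k/5) for k = 0, ..., 4
roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]

for z in roots:
    assert abs(z ** 5 - 1) < 1e-9        # each one really satisfies z^5 = 1
    assert abs(abs(z) - 1) < 1e-12       # and lies on the unit circle

# Conjugate pairing: k = 1 with k = 4, and k = 2 with k = 3
assert abs(roots[1].conjugate() - roots[4]) < 1e-12
assert abs(roots[2].conjugate() - roots[3]) < 1e-12
```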
We can also find these roots using algebra. To do so, we solve the following equation:

    z^5 − 1 = 0  ⟺  (z − 1)(z^4 + z^3 + z^2 + z + 1) = 0  ⟹  z^4 + z^3 + z^2 + z + 1 = 0

For this quartic equation, we use Lagrange's clever trick of dividing the equation by z^2 to get

    z^2 + z + 1 + 1/z + 1/z^2 = 0  ⟺  (z^2 + 1/z^2) + (z + 1/z) + 1 = 0

Due to the symmetry, we make the change of variable u = z + 1/z (so that z^2 + 1/z^2 = u^2 − 2), and obtain

    u^2 + u − 1 = 0  ⟹  u = 0.618034   or   u = −1.618034
Having obtained u, we can solve for z (a quadratic equation again, from z^2 − uz + 1 = 0). Finally, the four solutions are

    z = u/2 + √(u^2/4 − 1) =  0.309017 + 0.9510565i,   z = −0.809017 + 0.5877853i
    z = u/2 − √(u^2/4 − 1) =  0.309017 − 0.9510565i,   z = −0.809017 − 0.5877853i

which are identical to the solutions given in Eq. (2.23.9).
Thus, we have a and b satisfying the following system of equations (by comparing the real parts and imaginary parts of the two complex numbers):

    a^2 − b^2 = 0,   2ab = 1

whose solutions are a = b = ±√2/2. And we get the same result. We have used the method of undetermined coefficients.
We imply that two complex numbers are equal if they have the same real and imaginary parts, which is reasonable.
And if we denote f(α) = cos α + i sin α, then we observe that (thanks to the above equation) f(α)f(β) = f(α + β).
With that, it is reasonable to appreciate the following equation (see below for a popular proof): e^{iα} = cos α + i sin α, which, when evaluated at α = π, yields one of the most celebrated mathematical formulas, Euler's identity:

    e^{iπ} + 1 = 0        (2.23.16)

which connects the five mathematical constants 0, 1, π, e, i. You have met the numbers 0, 1 and i. We will meet the number e in Section 2.26. And π is of course the ratio of a circle's circumference to its diameter. This identity is influential in complex analysis, the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, and particularly quantum mechanics. Refer to Section 7.12 for an introduction to this fascinating field.
So, it is officially voted by mathematicians that e^{iπ} + 1 = 0 is the most beautiful equation in mathematics! As one limerick (a literary form particularly beloved by mathematicians) puts it
    e^{iα} = cos α + i sin α  ⟹  cos α = (e^{iα} + e^{−iα})/2        (2.23.17)
    e^{−iα} = cos α − i sin α  ⟹  sin α = (e^{iα} − e^{−iα})/(2i)        (2.23.18)
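These two representations are easy to verify numerically; a small Python sketch with an arbitrary angle:

```python
import cmath
import math

alpha = 0.7   # any angle works
cos_from_exp = (cmath.exp(1j * alpha) + cmath.exp(-1j * alpha)) / 2
sin_from_exp = (cmath.exp(1j * alpha) - cmath.exp(-1j * alpha)) / 2j

assert abs(cos_from_exp - math.cos(alpha)) < 1e-12
assert abs(sin_from_exp - math.sin(alpha)) < 1e-12
```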
Proof. Here is one proof of e^{iθ} = cos θ + i sin θ, assuming we know the series of e^x, sin x and cos x. We refer to Sections 4.14.5 and 4.14.6 for a discussion of the series of these functions. Start with the series of e^x, where x is a real number:

    e^x = 1 + x/1! + x^2/2! + x^3/3! + x^4/4! + x^5/5! + ⋯
Replacing x by iθ, which is a complex number (why can we do this? see Section 7.12):
With Euler’s identity, it is possible to derive the trigonometry identity for angle summation
without resorting to geometry; refer to Section 3.7 for such geometry-based derivations. Let’s
denote two complex numbers on a unit circle as z1 D cos ˛ C i sin ˛ D e i ˛ , z2 D cos ˇ C
i sin ˇ D e iˇ , we then can write the product z1 z2 in two ways
Equating the real and imaginary parts of z1 z2 given by both expressions, we can deduce the
summation sine/cosine identities, simultaneously!
Now we can answer the question asked at the beginning of this section: what is z = 2^{3+2i}?

    z = 2^{3+2i} = 2^3 · 2^{2i} = 8 · 4^i = 8 · e^{i ln 4} = 8 (cos(ln 4) + i sin(ln 4))
And finally, it is possible to compute the logarithm of a negative number. For example, start with e^{iπ} = −1 and take the logarithm of both sides:

    e^{iπ} = −1  ⟹  ln(−1) = iπ

Thus, the logarithm of a negative number is an imaginary number. That's why, when we first learned calculus, logarithms of negative numbers were forbidden. This should not be the case, since we accept square roots of negative numbers! To know more about the complex logarithm, check out Section 7.12.
In the story of complex numbers, we have not only Wessel but also Jean-Robert Argand (1768–1822), another amateur mathematician. In 1806, while managing a bookstore in Paris, he published the idea of a geometrical interpretation of complex numbers, known as the Argand diagram, and he is known for the first rigorous proof of the Fundamental Theorem of Algebra. We recommend the interesting book An Imaginary Tale: The Story of √−1 by Paul Nahin [36] for more interesting accounts of i = √−1.
Assume that f(z) = (z + 1)/(z − 1); compute f^1991(2 + i), where f^3(z) = f(f(f(z))). Don't be scared by 1991! Note that this is an exercise to be solved within a certain amount of time, after all. Let's compute f^1(2 + i) and f^2(2 + i), and a pattern will appear that generalizes to whatever year the test is on:

    f(2 + i) = (3 + i)/(1 + i) = 2 − i
    f^2(2 + i) = f(f(2 + i)) = f(2 − i) = 2 + i
    f^3(2 + i) = f(f^2(2 + i)) = f(2 + i) = 2 − i
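The two-cycle is easy to confirm by iterating f in Python:

```python
def f(z):
    return (z + 1) / (z - 1)

z = 2 + 1j
orbit = []
for _ in range(6):
    z = f(z)
    orbit.append(z)

# the orbit alternates: 2 - i, 2 + i, 2 - i, 2 + i, ...
assert abs(orbit[0] - (2 - 1j)) < 1e-12
assert abs(orbit[1] - (2 + 1j)) < 1e-12
# 1991 is odd, so f^1991(2 + i) = 2 - i (same parity as the first iterate)
assert abs(orbit[4] - (2 - 1j)) < 1e-12
```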
1. Find the imaginary part of z^6 with z = cos 12° + i sin 12° + cos 48° + i sin 48°.
3. Evaluate

    ∑_{n=0}^∞ cos(nθ)/2^n

where cos θ = 1/5.
The answers are 0, cos nθ and 6/7, respectively. If the third problem is not clear, see below for a similar problem.
We are now going to solve a problem in which we see the interplay between real numbers and imaginary numbers. That's simply amazing. The problem: given the complex number 2 + i, denote by a_n and b_n the real and imaginary parts of (2 + i)^n, where n is a non-negative integer. The problem is to compute the following sum:

    S = ∑_{n=0}^∞ a_n b_n / 7^n
Let’s find an and bn first. That seems a reasonable thing to do. Power of an imaginary number?
We can use de Moirve’s formula. To this end, we need to convert our number 2 C i to the polar
form:
p 2 1
2 C i D 5.cos C i sin /; cos D p ; sin D p
5 5
Then, its power can be determined and from that an , bn will appear to us:
p p p
.2 C i/n D . 5/n .cos n C i sin n/ H) an D . 5/n cos n; bn D . 5/n sin n
Since a_n b_n = 5^n cos nθ sin nθ = (5^n/2) sin 2nθ, the sum becomes S = (1/2) ∑ (5/7)^n sin 2nθ; this little massage simplifies S. Now comes the good part: we leave the real world and move to the imaginary one, by replacing sin 2nθ with the imaginary part of e^{i2nθ}:

    S = (1/2) ∑_{n=0}^∞ (5/7)^n Im e^{i2nθ}        (2.23.19)
As the sum of the imaginary parts is equal to the imaginary part of the sum, we write S as:

    S = (1/2) Im ∑_{n=0}^∞ ((5/7) e^{i2θ})^n
What is the red term? It is a geometric series, of the form 1, a, a^2, …, with a = (5/7)e^{i2θ}, and we know its sum is 1/(1 − a):

    S = (1/2) Im [1 / (1 − (5/7)e^{i2θ})]
We know e^{iθ}, so we know its square e^{i2θ}, and the above expression is simply 7/16. The details are as follows. First, we find the imaginary part of 1/(1 − (5/7)e^{i2θ}):

    1/(1 − (5/7)e^{i2θ}) = 7/(7 − 5e^{i2θ})
                         = 7/(7 − 5 cos 2θ − 5i sin 2θ)        (e^{iα} = cos α + i sin α)
                         = 7[(7 − 5 cos 2θ) + 5i sin 2θ] / [(7 − 5 cos 2θ)^2 + (5 sin 2θ)^2]        (remove i from the denominator)

    Im [1/(1 − (5/7)e^{i2θ})] = 35 sin 2θ / (74 − 70 cos 2θ)
Thus, S simplifies to

    S = (1/2) · 35 sin 2θ / (74 − 70 cos 2θ) = ⋯ = 7/16

We have skipped some simple calculations in the ⋯
Is there a shorter solution? Yes. Note that S involves the product a_n b_n, so we do not really need to know a_n and b_n separately. From the fact that (2 + i)^n = a_n + i b_n, what do we do to get a_n b_n? Yes, we square the equation: (2 + i)^{2n} = a_n^2 − b_n^2 + 2i a_n b_n. Thus, a_n b_n is half the imaginary part of (2 + i)^{2n}. Plug this into S and we fly off to the result in no time.
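Both solutions are easy to sanity-check numerically; the partial sums converge to 7/16 = 0.4375 very quickly, since |a| = 5/7 < 1:

```python
import math

S = 0.0
z = 1 + 0j                 # (2 + i)^0
for n in range(200):       # (5/7)^n decays fast; 200 terms are far more than enough
    S += z.real * z.imag / 7 ** n    # a_n * b_n / 7^n
    z *= 2 + 1j

assert math.isclose(S, 7 / 16, rel_tol=1e-9)
print(S)   # ~ 0.4375
```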
If this is not clear, one example is of great help: (a1 + b1 i) + (a2 + b2 i) = (a1 + a2) + i(b1 + b2). Thus the sum of the imaginary parts (b1 + b2) equals the imaginary part of the sum.
Herein we accept that the results on geometric series also apply to complex numbers. Note that a has a modulus of 5/7, which is smaller than 1.
So, i^i is a real number! Actually i^i has many values; we have just found one of them:

    i = e^{i(π/2 + 2nπ)}  ⟹  i^i = [e^{i(π/2 + 2nπ)}]^i = e^{−π/2 − 2nπ}
Long before Euler wrote e^{iθ} = cos θ + i sin θ, the Swiss mathematician Johann Bernoulli (1667–1748), one of the many prominent mathematicians in the Bernoulli family and Euler's teacher, had already computed i^i using a clever technique. It is presented here so that we can all enjoy it (assuming you know a bit of calculus). He considered the area of a quarter of a unit circle:

    π/4 = ∫_0^1 √(1 − x^2) dx

Now comes the clever idea: he used the following 'imaginary' substitution using i (note that if we proceed with the standard substitution x = sin θ, we get π/4 = π/4, which is useless; that's why Bernoulli had to turn to i to have something new come up):

    x = −iu  ⟹  dx = −i du,   1 − x^2 = 1 + u^2

And the red integral can be computed (check Section 4.7 if you're not clear):

    π/4 = (i/2) [sec θ tan θ + ln(sec θ + tan θ)]_0

with tan θ = i. Thus, we have

    π/4 = (i/2) ln(i)
And from that the result i^i = e^{−π/2} follows. As we have seen, once i was accepted, mathematicians of the 17th century played with it with joy and obtained interesting results. And of course other mathematicians did similar things; for example, the Italian Giulio Carlo de' Toschi di Fagnano (1682–1766) played with a circle, but with its circumference, and got the same result as Bernoulli [36]. It is similar to how we, ordinary humans, soon invent many new tricks with a new FIFA PlayStation game.
Now comes a surprise. What is 1^π? We have learned that 1^x = 1, so you might guess 1^π = 1. But that gives you only one of the correct answers. To see why, just view 1 as a complex number: 1 = 1 + 0i = e^{i(2nπ)} with n an integer; thus

    1^π = (e^{i(2nπ)})^π = e^{i(2nπ^2)} = cos 2nπ^2 + i sin 2nπ^2

where in the last equality we have used Euler's identity e^{iθ} = cos θ + i sin θ. From this we see that only with n = 0 do we get 1^π = 1, which is real. Other than that, we have complex numbers! Check Section 7.12 for details.
This is because sin 2nπ^2 is different from zero for all integers n ≠ 0. Why? Because π is irrational, a result due to the Swiss polymath Johann Heinrich Lambert (1728–1777). To see why, let's solve sin 2nπ^2 = 0, whose solutions are

    sin 2nπ^2 = 0  ⟺  2nπ^2 = mπ  ⟺  π = m/(2n)

which cannot happen, as π cannot be expressed as m/(2n) because it is an irrational number.
Note that we have introduced different symbols to represent different collections of numbers. Instead of writing 'a is a non-negative integer', mathematicians write a ∈ N. When they do so, they mean that a is a member of the set (collection) of non-negative integers; this set is symbolically denoted by N. The notation Z comes from the German word Zahlen, which means numbers. The notation Q is for quotients.
In mathematics, the notion of a number has been extended
over the centuries to include 0, negative numbers, rational num-
bers such as one third (1=3), real numbers such as the square root
of 5 and , and complex numbers which extend the real numbers
with a square root of -1. Calculations with numbers are done with
arithmetical operations, the most familiar being addition, subtrac-
tion, multiplication, division, and exponentiation. Besides their
practical uses, numbers have cultural significance throughout the
world. For example, in Western society, the number 13 is often regarded as unlucky.
The German mathematician Leopold Kronecker (1823 – 1891) once said, "Die ganzen Zahlen
hat der liebe Gott gemacht, alles andere ist Menschenwerk" ("God made the integers, all else is
the work of man").
At one party each man shook hands with everyone except his spouse, and no handshakes
took place between women. If 13 married couples attended, how many handshakes were
there among these 26 people?
How many ordered, nonnegative integer triples .x; y; z/ satisfy the equation x C y C z D
11?
A circular table has exactly 60 chairs around it. There are N people seated around this table in such a way that the next person to be seated must sit next to someone. What is the smallest possible value of N?
What would you do? While solving them you will see that it involves counting, but it is tedious
sometimes to keep track of all the possibilities. There is a need to develop some smart ways of
counting. This section presents such counting methods. Later in Section 5.2, you will see that to
correctly compute probabilities we need to know how to count correctly and efficiently.
2.24.2 Factorial
Assume that we have to arrange three books on a shelf. The titles of the three books are A, B and C. The question is: how many ways are there to do the arrangement? If we put A on the leftmost position, there are two possibilities for B and C: ABC and ACB. If we put B on the leftmost, then there are also two possibilities: BAC and BCA. Finally, if C is put on the leftmost, then we have CAB and CBA. In summary, we have six ways of arranging three books:

    ABC, ACB, BAC, BCA, CAB, CBA

How about arranging four books A, B, C, D? Again, let's put A on the leftmost position; there are then six ways of arranging the remaining three books (we have just solved that problem!). Similarly, if B is put on the leftmost position, there are six ways of arranging the other three books. Going along this line of reasoning, we can see that there are 4 × 6 = 24 ways.
What if we have to arrange five books? We can see that the number of arrangements is five times the number of arrangements of 4 books. Thus, there are 5 × 24 = 120 ways.
There is a pattern here. To see it clearly, let's denote by A_n the number of arrangements of n books (n ∈ N). We then have A5 = 5A4||, and A4 = 4A3; we continue this way until A1, the number of arrangements of only one book, which is one:

    A5 = 5A4
       = 5 × (4A3)
       = 5 × 4 × 3A2        (2.24.1)
       = 5 × 4 × 3 × 2 × A1 = 5 × 4 × 3 × 2 × 1
with A1 being one as there is only one way to arrange one book. We are now able to give the
definition of factorial.
Definition 2.24.1
For a positive integer n ≥ 1, the factorial of n, denoted by n!, is defined as

    n! = n × (n−1) × (n−2) × ⋯ × 3 × 2 × 1 = ∏_{i=1}^n i

From this definition, it follows that n! = n(n−1)!. Using this for n = 1, we get 1! = 1 × 0!, so 0! = 1. This is similar to how a negative multiplied by a negative is a positive. The notation n! was introduced by the French mathematician Christian Kramp (1760–1826) in 1808. Recall the shorthand ∏ (the pi product notation) that was introduced in Eq. (2.18.21).
To get a feel for the notation n!, let's compute some factorials: 5! = 120, 6! = 720, not so large, but 10! = 3 628 800! How about 50!? It's a number with 65 digits:

    50! = 30 414 093 201 713 378 043 612 608 166 064 768 844 377 641 568 960 512 000 000 000 000

No surprise that Kramp used the exclamation mark for the factorial. Note that I used Julia to compute these large factorials. I could not find an explanation of the name 'factorial', however.
||
Just the translation of "the number of arrangements for 5 books is five times the number of arrangements for 4
books".
Factorions. A factorion is a number which is equal to the sum of the factorials of its digits. For example, 145 is a factorion, because

    145 = 1! + 4! + 5! = 1 + 24 + 120

Can you write a program to find other factorions? The answer is 40 585; see Listing B.4 for the program.
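A brute-force search is only a few lines; here is a Python sketch of the idea (the book's own program is the Julia code in Listing B.4). A full search only needs to go up to 7 × 9! = 2 540 160, since a number with eight or more digits always exceeds the sum of its digit factorials; we stop earlier here for speed:

```python
import math

def is_factorion(n):
    # a number equal to the sum of the factorials of its digits
    return n == sum(math.factorial(int(d)) for d in str(n))

print([n for n in range(1, 100_000) if is_factorion(n)])   # [1, 2, 145, 40585]
```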
One problem involving factorials. Let's consider a problem involving factorials: which of the numbers 50^99 and 99! is larger? A natural first attempt is to consider the ratio of these numbers and write them out explicitly (to see whether the ratio is smaller than one or not):

    50^99 / 99! = (50 × 50 × ⋯ × 50) / (99 × 98 × 97 × ⋯ × 2 × 1)

Now, instead of working directly with the 99 terms in the numerator and the 99 terms in the denominator, we split the 99 factors in the numerator into two groups of 49 terms plus a single 50. Similarly, we split the product in the denominator into two groups of 49 terms plus the single factor 50:

    50^99 / 99! = [50 × (50 ⋯ 50) × (50 ⋯ 50)] / [(99 × 98 ⋯ 51) × 50 × (49 × 48 ⋯ 2 × 1)]

where each parenthesized group has 49 terms.
We cancel the single 50s, and then pair one term of the first group with one term of the other group in such a way that 99 is paired with 1, 98 with 2, and so on (why do that? because 99 + 1 = 100 = 50 × 2||). Now it becomes clear that we just need to compare each term with 1, and it is quite easy to see that all terms are larger than 1, e.g. 50^2/(99 × 1) > 1. This is so because we have

    (a − b)^2 > 0  ⟹  (a + b)^2 > 4ab  ⟹  ((a + b)/2)^2 > ab

and therefore

    ((n + 1)/2)^n > n!
|| Also because pairing numbers is a good technique that we learned from the 10-year-old Gauss.
Another way is to write 99 × 1 = (50 + 49)(50 − 49) = 50^2 − 49^2 < 50^2. In other words, the 99 × 1 rectangle has an area smaller than that of the square of side 50.
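Since Python integers have arbitrary precision, we can also settle the comparison by brute force, and check the pairing inequality behind the argument:

```python
import math

assert 50 ** 99 > math.factorial(99)    # 50^99 really is larger

# Each pair multiplies to less than 50 * 50, by AM-GM with 99 + 1 = ... = 100.
for k in range(1, 50):
    assert (100 - k) * k < 50 * 50
```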
Factorial equation. Let's solve one factorial equation: find $n \in \mathbb{N}$ such that
$$n! = n^3 - n$$
Without any other clue, we proceed by massaging this equation a bit, as we see common factors on the two sides: $n! = n(n-1)(n-2)!$ and $n^3 - n = n(n-1)(n+1)$, hence
$$(n-2)! = n + 1$$
because $n$ and $n-1$ cannot be zero (as $n \in \{0, 1\}$ does not satisfy the equation). At least, now we have another equation, which seems to be less scary (e.g. $n^3$ is gone). What's next then? The next step is to replace $(n-2)!$ by $(n-2)(n-3)!$:
$$(n-2)(n-3)! = n+1 \implies (n-3)! = \frac{n+1}{n-2} = \frac{n-2+3}{n-2} = 1 + \frac{3}{n-2}$$
Doing so gives us a direction to go forward: a factorial of a counting number is always a counting number, thus $1 + 3/(n-2)$ must be a counting number, and that leads to
$$n - 2 \in \{1, 3\} \implies n \in \{3, 5\}$$
Substituting these candidates back into $n! = n^3 - n$, only $n = 5$ works: $5! = 120 = 5^3 - 5$.
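A brute-force search confirms the answer (a small sketch; searching $n$ up to 20 is plenty, since $n!$ quickly outgrows $n^3 - n$):

```python
import math

solutions = [n for n in range(1, 21) if math.factorial(n) == n**3 - n]
print(solutions)  # [5]: indeed 5! = 120 = 125 - 5
```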
Proof of Stirling's approximation. From Section 4.19.1 on the Gamma function, we have the following representation of $n!$:
$$n! = \int_0^\infty x^n e^{-x}\,dx$$
With the change of variable $x = ny$ (so $dx = n\,dy$), this becomes
$$n! = n^{n+1} \int_0^\infty e^{n(\ln y - y)}\,dy$$
What is this integral? If I tell you it is related to the well-known Gaussian integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$, do you believe me? If not, plot $e^{n(\ln y - y)}$ for $n = 5$ and $y \in [0, 5]$ and you will see that the plot resembles the bell curve. Thus, we need to convert $\ln y - y$ to something like $-y^2$. And what allows us to do that? Taylor comes to the rescue. If we look at the function $\ln y - y$ and plot it, we see that it has a maximum of $-1$ at $y = 1$ (plot it and you'll see that), thus using Taylor's series we can write $\ln y - y \approx -1 - (y-1)^2/2$, thus
$$n! \approx n^{n+1} e^{-n} \int_0^\infty e^{-n(y-1)^2/2}\,dy$$
With another change of variable, $t = \sqrt{n}(y-1)/\sqrt{2}$, the integral becomes
$$\int_0^\infty e^{-n(y-1)^2/2}\,dy \approx \sqrt{\frac{2}{n}} \int_{-\infty}^{\infty} e^{-t^2}\,dt = \sqrt{\frac{2\pi}{n}}$$
which gives Stirling's approximation $n! \approx \sqrt{2\pi n}\,(n/e)^n$. Why can the lower integration bound be extended from $0$ to $-\infty$, so that we can still use $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$? This is because the function $e^{n(\ln y - y)}$ quickly decays to zero away from $y = 1$ (plot it and you'll see), thus we can extend the integration from $[0, \infty)$ to $(-\infty, \infty)$ with negligible error. Actually, the method just described to compute this integral is called the Laplace method.
What is the lesson from Stirling’s approximation for nŠ? We have a single object which is
nŠ. We have a definition of it: nŠ D .1/.2/ .n/. But this definition is useless when n is large.
By having another representation of nŠ via the Gamma function, we are able to have a way to
compute nŠ for large n’s.
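We can check numerically how good $n! \approx \sqrt{2\pi n}\,(n/e)^n$ is (a quick sketch; the relative error is known to shrink roughly like $1/(12n)$):

```python
import math

def stirling(n: int) -> float:
    # Stirling's approximation: sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    rel_err = abs(stirling(n) / math.factorial(n) - 1)
    print(n, rel_err)  # shrinks as n grows
```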
2.24.3 Permutations
Now we know that there are nŠ ways to arrange n distinct books. Generally there are nŠ per-
mutations of the elements of a set having n elements. A permutation of a set of n objects is
any rearrangement of the n objects. For example, considering this set f1; 2; 3g, we have these
arrangements (permutations): f1; 2; 3g; f1; 3; 2g; f2; 1; 3g; f2; 3; 1g; f3; 1; 2g and f3; 2; 1g.
We have used the simplest way to count the number of permutations of a set with n elements:
we listed all the possibilities. But we can do another way. Imagine that we have n distinct books
to be placed into n boxes. For the first box, there are n choices, then for each of these n choices
there are n 1 choices for the second box, for the third box there are n 2 choices and so on. In
total there will be n.n 1/.n 2/ .3/.2/.1/ ways. When we multiply all the choices we are
actually using the so-called basic rule of counting. This principle states that if there are p ways
to do one thing, and q ways to do another thing, then there are p q ways to do both things.
Note that we did not add up the choices.
There are $5!$ ways to arrange 5 persons in 5 seats. But how many ways are there to place five people into two seats? There are only $5 \times 4 = 20$ ways, because for the first seat we have 5 choices and for the second seat we have 4 choices. Assuming that the five people are named $A, B, C, D, E$, then the 20 ways are:
AB BC CD DE AC AD AE BD BE CE
BA CB DC ED CA DA EA DB EB EC
Now, what we need to do is to find how the result of 20 is related to 5 people and 2 seats. For 5 people and 5 seats, the answer is $5!$. So, we expect that 20 should be related to the factorials of 5 and 2 (the only information in the problem). Indeed, it can be seen that we can write $20 = 5 \times 4$ in terms of the factorials of 5 and 2:
$$5 \times 4 = \frac{5 \times 4 \times 3 \times 2 \times 1}{3 \times 2 \times 1} = \frac{5!}{3!} = \frac{5!}{(5-2)!}$$
We now generalize this. Assume we have an $n$-set (i.e., a set having $n$ distinct elements) and we need to choose $r$ elements from it ($r \le n$). How many ways are there to do so if order matters? In other words, how many $r$-permutations are there? For example, consider the set $\{A, B, C\}$ from which we choose 2 elements. We have six ways: $\{A, B\}$, $\{B, A\}$, $\{A, C\}$, $\{C, A\}$, $\{B, C\}$, $\{C, B\}$.
The number of $r$-permutations of an $n$-element set is denoted by $P(n,r)$ or sometimes by $P_n^r$, which is defined as:
$$P(n,r) = P_n^r = \frac{n!}{(n-r)!} \qquad (2.24.3)$$
And we can write $P(n,r)$ explicitly as:
$$P(n,r) = \frac{n(n-1)(n-2)\cdots(n-r+1)\,(n-r)!}{(n-r)!} = n(n-1)(n-2)\cdots(n-r+1)$$
This expression is exactly telling us what we have observed. We need to choose r elements;
there are n options for the first element, n 1 options for the second element, ... and n r C 1
options for the last element.
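We can cross-check Eq. (2.24.3) against a brute-force enumeration (a sketch using Python's standard library):

```python
import math
from itertools import permutations

def P(n: int, r: int) -> int:
    # number of r-permutations of an n-element set, Eq. (2.24.3)
    return math.factorial(n) // math.factorial(n - r)

# five people, two seats: 5 * 4 = 20 ordered arrangements
assert P(5, 2) == 20
assert len(list(permutations("ABCDE", 2))) == P(5, 2)
print(P(5, 2))  # 20
```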
2.24.4 Combinations
In permutations, the order matters: AB is different from BA. Now, we move to combinations in
which the order does not matter. Let’s use the old example of placing five people into two seats.
These are 20 arrangements of five people A; B; C; D; E into two seats (there are 5 options for
the 1st seat and 4 options for the second seat):
AB BC CD DE AC AD AE BD BE CE
BA CB DC ED CA DA EA DB EB EC
And if $AB$ is equal to $BA$, i.e., what matters is who sits next to whom, not the order, there are only 10 ways. When order does not matter, we are speaking of a combination. My fruit salad is a combination of apples, grapes and bananas; we do not care what order the fruits are in.
We can observe that:
$$10 = \frac{20}{2} = \frac{5!}{(5-2)!\,2!}$$
which leads to the following $r$-combinations equation:
$$\binom{n}{r} = C_n^r = \frac{n!}{(n-r)!\,r!} = \frac{P_n^r}{r!} \qquad (2.24.4)$$
The last equality shows the relation between permutations and combinations; there are fewer combinations than permutations due to repetitions, and there are $r!$ repetitions. The notation $\binom{n}{r}$ is read "$n$ choose $r$". $\binom{n}{r}$ is also called the binomial coefficient, because the coefficients in the binomial theorem are given by $\binom{n}{r}$ (Section 2.25).
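The relation $C_n^r = P_n^r / r!$ can likewise be verified by enumeration (a sketch):

```python
import math
from itertools import combinations, permutations

def C(n: int, r: int) -> int:
    # Eq. (2.24.4): n! / ((n-r)! r!)
    return math.factorial(n) // (math.factorial(n - r) * math.factorial(r))

# 10 unordered pairs of people versus 20 ordered ones: C = P / r!
assert C(5, 2) == 10
assert len(list(combinations("ABCDE", 2))) == 10
assert len(list(permutations("ABCDE", 2))) == C(5, 2) * math.factorial(2)
print(C(5, 2))  # 10
```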
Question 4. The factorial was defined for positive integers. Is it too restrict? If you’re feeling
this way, that’s very good. What p is the value of .1=2/Š? The result is surprising; it is not an
integer, it is a real number: 0:5 .
aaabb; aabab; abaab; baaab; aabba; ababa; baaba; abbaa; babaa; bbaaa (2.24.5)
That is ten words. The question now is how to derive a formula, as listing works only when
there are few combinations. First, let’s denote by N the number of 5-letter words that can be
made from 3 a’s and 2 b’s. Second, we convert this problem to the problem we’re familiar with:
permutations without repetition, by using $a_1, a_2, a_3$ for the 3 a's and $b_1, b_2$ for the 2 b's. Obviously there are $5!$ 5-letter words made from $a_1, a_2, a_3, b_1, b_2$. We can get these words by starting with Eq. (2.24.5). For each of those words, we add subscripts 1, 2, 3 to the a's (there are $3!$ ways of doing that), and then we add subscripts 1, 2 to the b's (there are $2!$ ways). Thus, in total there are $N \cdot 3!\,2!$ 5-letter words. And of course we have $N \cdot 3!\,2! = 5!$, thus
$$N = \frac{5!}{3!\,2!} = 10$$
Now we generalize the result to the case of $n$ objects which are divided into $k$ groups, in which the first group has $n_1$ identical objects, the second group has $n_2$ identical objects, ..., the $k$th group has $n_k$ identical objects. Certainly, we have $n_1 + n_2 + \cdots + n_k = n$. The number of permutations of these $n$ objects is
$$\frac{n!}{n_1!\,n_2!\cdots n_k!} \qquad (2.24.6)$$
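Eq. (2.24.6) can be verified by listing the distinct words directly (a sketch; the `set` removes duplicate arrangements of identical letters):

```python
import math
from itertools import permutations

def arrangements(*group_sizes: int) -> int:
    # Eq. (2.24.6): n! / (n1! n2! ... nk!)
    n = sum(group_sizes)
    count = math.factorial(n)
    for size in group_sizes:
        count //= math.factorial(size)
    return count

# 5-letter words from 3 a's and 2 b's
assert arrangements(3, 2) == 10
assert len(set(permutations("aaabb"))) == 10
print(arrangements(3, 2))  # 10
```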
For the special case that $k = 2$, we have one group with $r$ identical elements and one group with $n - r$ identical elements:
$$\underbrace{aa\cdots a}_{r}\ \underbrace{bb\cdots b}_{n-r}$$
There are
$$\frac{n!}{r!\,(n-r)!}$$
permutations of such a set. Coincidentally, this is equal to $\binom{n}{r}$:
$$\frac{n!}{r!\,(n-r)!} = \binom{n}{r} \qquad (2.24.7)$$
To remove this confusion between permutations and combinations, we can change how we look at the problem. For example, the problem of making 5-letter words with 3 a's and 2 b's can be seen like this. There are 5 boxes, in which we will place the 3 a's into 3 boxes. The remaining boxes will be reserved for the 2 b's. How many ways are there to select 3 boxes out of 5 boxes? It is $\binom{5}{3}$. Instead of placing the a's first, we can place the b's first. There are $\binom{5}{2}$ ways of doing so. Therefore,
$$\binom{n}{k} = \binom{n}{n-k} \qquad (2.24.8)$$
We can check this identity easily using algebra. But the way we showed it here is interesting in
the sense that we do not need any algebra. This is proof by combinatorial interpretation. The
basic idea is that we count the same thing twice, each time using a different method and then
conclude that the resulting formulas must be equal.
Proof of the generalized pigeonhole principle. Here is the proof of the extended pigeonhole principle: if $p$ pigeons are placed into $h$ holes, then some hole contains at least $\lceil p/h \rceil$ pigeons. We use proof by contradiction: first we assume that no hole contains at least $\lceil p/h \rceil$ pigeons and, based on this assumption, we're then led to something absurd. If no hole contains at least $\lceil p/h \rceil$ pigeons, then every hole contains a maximum of $\lceil p/h \rceil - 1$ pigeons. Thus, the $h$ holes contain a maximum of
$$\left(\lceil p/h \rceil - 1\right) h$$
pigeons. We now show that this number of pigeons is smaller than $p$: since $\lceil p/h \rceil < p/h + 1$, we get $(\lceil p/h \rceil - 1)h < (p/h)h = p$, a contradiction.
This principle is also known as the Dirichlet box principle, named after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805–1859).
1. Every point on the plane is colored either red or blue. Prove that no matter how the
coloring is done, there must exist two points, exactly a mile apart, that are the same
color.
$$\begin{aligned}
(a+b)^0 &= 1\\
(a+b)^1 &= a + b\\
(a+b)^2 &= a^2 + 2ab + b^2\\
(a+b)^3 &= a^3 + 3a^2b + 3ab^2 + b^3\\
(a+b)^4 &= a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4
\end{aligned} \qquad (2.25.1)$$
We find the first trace of the Binomial Theorem in Euclid II, 4, "If a straight line be cut at random,
the square on the whole is equal to the squares on the segments and twice the rectangle of the
segments". This is .a C b/2 D a2 C b 2 C 2ab if the segments are a and b. The coefficients in
these binomial expansions make a triangle, which is usually referred to as Pascal’s triangle. As
shown in Fig. 2.39, this binomial expansion was known by Chinese mathematician Yang Hui
(ca. 1238–1298) long before Pascal.
To build the triangle, start with "1" at the top, then continue placing numbers below it in a
triangular pattern. Each number is the numbers directly above it added together. Can you write
a small program to build the Pascal triangle? This is a good coding exercise.
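Here is one possible small program (a Python sketch; each new row is built from sums of adjacent entries of the previous row, exactly as described above):

```python
def pascal_triangle(num_rows: int) -> list[list[int]]:
    triangle = [[1]]
    for _ in range(num_rows - 1):
        prev = triangle[-1]
        # each inner entry is the sum of the two numbers directly above it
        row = [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
        triangle.append(row)
    return triangle

for row in pascal_triangle(5):
    print(row)  # last row: [1, 4, 6, 4, 1]
```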
Is there a faster way to know the coefficient of a certain term in .a C b/n without going
through the Pascal triangle? To answer that question, let’s consider .a C b/3 . We expand it as
follows
$$\begin{aligned}
(a+b)^3 &= (a+b)(a+b)(a+b)\\
&= (aa + ab + ba + bb)(a+b)\\
&= aaa + aab + aba + abb + baa + bab + bba + bbb
\end{aligned}$$
Every term in the last expression has three components containing only $a$ and $b$ (e.g. $aba$). We also know some of these terms are going to group together; e.g. $aab = aba = baa$, as they are all equal to $a^2b$. Now, there are $\binom{3}{2}$ ways to write a sequence of length three, with only $a$ and $b$, that has precisely two $a$'s in it. Thus, the coefficient of $a^2b$ is $\binom{3}{2} = 3$. Refer to Section 2.24 for this counting.
Question 5. What if the exponent $n$ is not a positive integer? How about $(a+b)^{1/2}$ or $(a+b)^{-3/2}$? For these cases, we have to wait for Newton's discovery of the so-called generalized binomial theorem; see Section 4.14.1.
Question 6. If we have the binomial theorem for $(a+b)^n$, how about $(a+b+c)^n$? The third power of the trinomial $a+b+c$ is given by $(a+b+c)^3 = a^3 + b^3 + c^3 + 3a^2b + 3a^2c + 3b^2a + 3b^2c + 3c^2a + 3c^2b + 6abc$. Is it possible to have a formula for the coefficients of the terms in $(a+b+c)^3$? And how about $(x_1 + x_2 + \cdots + x_m)^n$?
Sum of powers of integers, binomial theorem and Bernoulli numbers. Now we present a
surprising result involving the binomial coefficients. Recall in Section 2.5 that we have computed
the sums of powers of integers. We considered the sums of powers of one, two and three only.
But back in the old days, the German mathematician Johann Faulhaber (1580–1635) did that for powers up to 23. Using that result, Jakob Bernoulli in 1713, and the Japanese mathematician Seki Takakazu (1642–1708) in 1712, independently found a pattern and discovered a general
formula for the sum. With $n, m \in \mathbb{N}$ and $m \ge 1$, let
$$S_m(n) := \sum_{k=1}^{n-1} k^m$$
Then, we have
$$S_m(n) = \frac{1}{m+1} \sum_{k=0}^{m} (-1)^k \binom{m+1}{k} B_k\, n^{m+1-k} \qquad (2.25.3)$$
This is how Jacob Bernoulli derived this. He wrote something similar to Eq. (2.5.15) for m D 1; 2; 3; : : : ; 10.
Then, he looked at the coefficients of nmC1 ; nm ; : : : carefully. A pattern emerged, in connection to Pascal’s triangle,
and he guessed correctly Eq. (2.25.3) believing in the pattern. Thus, he did not prove this formula. It was later
proved by Euler.
where Bk are now called the Bernoulli numbers. Why not Takakazu numbers, or better Bernoulli-
Takakazu numbers? Because history is not what happened, but merely what has been recorded,
and most of what has been recorded in English has a distinctly Western bent. This is particularly
true in the field of mathematical history. The Bernoulli numbers $B_k$ are
$$B_0 = 1,\quad B_1 = \tfrac{1}{2},\quad B_2 = \tfrac{1}{6},\quad B_3 = 0,\quad B_4 = -\tfrac{1}{30},\quad B_5 = 0,\ \ldots$$
What are the significance of these mysterious numbers? It turns out that, as is often the case
in mathematics, the Bernoulli-Takakazu numbers appear in various fields in mathematics, see
Section 4.16 for more detail.
Binomial theorem: a proof. The Pascal triangle is now written using the $\binom{n}{k}$ notation:
$$\begin{matrix}
& & & & \binom{0}{0} & & & &\\
& & & \binom{1}{0} & & \binom{1}{1} & & &\\
& & \binom{2}{0} & & \binom{2}{1} & & \binom{2}{2} & &\\
& \binom{3}{0} & & \binom{3}{1} & & \binom{3}{2} & & \binom{3}{3} &\\
\binom{4}{0} & & \binom{4}{1} & & \binom{4}{2} & & \binom{4}{3} & & \binom{4}{4}
\end{matrix}$$
This identity–known as Pascal’s rule or Pascal’s identity–can be proved algebraically. But that is
just an exercise about manipulating factorials. We need a combinatorial proof so that we better
understand the meaning of the identity.
The left hand side (the red term) in Pascal's identity is the number of $(k+1)$-element subsets taken from a set of $n+1$ elements. Now what we want to prove is that the right hand side also counts these subsets. Fig. 2.40 shows the proof for the case of $n = 3$ and $k = 1$.
I provided only a proof for a special case whereas all textbooks present a general proof. This
results in an impression that mathematicians only do hard things. Not at all. In their unpublished
notes, they usually had proofs for simple cases!
This is typeset using the package tikz.
Figure 2.40: Proof of Pascal's identity for the case of $n = 3$ and $k = 1$. The red term in Eq. (2.25.4) is $\binom{4}{2}$, which is the cardinality of $S$, a set that contains all subsets of two elements taken from the set $ABCX$. We can divide $S$ into two subsets: $S_1$ is the one without $X$ and $S_2$ is the one with $X$.
With this identity, Eq. (2.25.4), we can finally prove the binomial theorem; that is the theorem
is correct for any n 2 N. The technique we use (actually Pascal did it first) is proof by induction.
Observe that the theorem is correct for n D 1. Now, we assume that it is correct for n D k, that
is
$$(a+b)^k = \sum_{j=0}^{k} \binom{k}{j} a^{k-j} b^j = a^k + \binom{k}{1}a^{k-1}b + \cdots + \binom{k}{k-1}ab^{k-1} + b^k \qquad (2.25.5)$$
And our aim is to prove that it is also valid for $n = k+1$, that is:
$$(a+b)^{k+1} = \sum_{j=0}^{k+1} \binom{k+1}{j} a^{k+1-j} b^j = a^{k+1} + \binom{k+1}{1}a^k b + \cdots + \binom{k+1}{k} ab^k + b^{k+1} \qquad (2.25.6)$$
We start from $(a+b)^{k+1} = (a+b)^k(a+b)$ and use the induction hypothesis:
$$\begin{aligned}
(a+b)^{k+1} &= \left[a^k + \binom{k}{1}a^{k-1}b + \cdots + \binom{k}{k-1}ab^{k-1} + b^k\right](a+b)\\
&= a^{k+1} + \binom{k}{1}a^k b + a^k b + \binom{k}{1}a^{k-1}b^2 + \cdots + \binom{k}{k-1}ab^k + ab^k + b^{k+1}
\end{aligned}$$
Collecting like terms and using Pascal's identity $\binom{k}{j} + \binom{k}{j-1} = \binom{k+1}{j}$ yields exactly Eq. (2.25.6).
$$\begin{aligned}
\text{1st month:} &\quad 1000 + \frac{1}{12}\,1000 = \left(1 + \frac{1}{12}\right)1000\\
\text{2nd month:} &\quad \left(1 + \frac{1}{12}\right)\left(1 + \frac{1}{12}\right)1000\\
\text{12th month:} &\quad \underbrace{\left(1 + \frac{1}{12}\right)\cdots\left(1 + \frac{1}{12}\right)}_{12\ \text{times}}\,1000 = \left(1 + \frac{1}{12}\right)^{12} 1000 = 2613.03529
\end{aligned}$$
which is $2 613 and better than the annual compounding. Let’s be more greedy and try with daily,
hourly and minutely compounding. It is a good habit to ask questions ‘what if’ and work hard
investigating these questions. It led to new maths in the past! The corresponding calculations
are given in Table 2.20.
Table 2.20: Amounts of money received with yearly, monthly, daily, hourly and minutely compounding.
From this table we can see that the amount of money increases from \$2 000 and settles at \$2 718.279 242 6. Euler introduced the symbol $e$ to represent the rate of continuous compounding:
$$e := \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n \qquad (2.26.1)$$
The fascinating thing about $e$ is that the more often the interest is compounded, the less your money grows during each period (compare $1 + 1$ versus $1 + 1/12$, for example). Yet it still amounts to something significant after a year, for it is multiplied over so many periods.
In mathematics, there are three most famous irrational numbers and $e$ is one of them. They are $\pi$, $\sqrt{2}$ and $e$. We have met two of them. We will introduce $\pi$ in Chapter 4.
How do we compute $e$? Looking at its definition, we can think of using the binomial theorem in Eq. (2.25.2) with $a = 1$ and $b = 1/n$. We compute $e$ as follows:
$$\begin{aligned}
\left(1 + \frac{1}{n}\right)^n &= \sum_{k=0}^{n} \frac{n!}{k!\,(n-k)!} \frac{1}{n^k}\\
&= 1 + \frac{n!}{(n-1)!\,n} + \frac{n!}{2!\,(n-2)!\,n^2} + \frac{n!}{3!\,(n-3)!\,n^3} + \cdots\\
&= 1 + 1 + \frac{1}{2!}\left(1 - \frac{1}{n}\right) + \frac{1}{3!}\left(1 - \frac{3}{n} + \frac{2}{n^2}\right) + \cdots
\end{aligned} \qquad (2.26.2)$$
$$e := \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots = 2.718281828459045\ldots \qquad (2.26.3)$$
because when $n \to \infty$ all the terms in parentheses approach one, for the terms involving $1/n$ approach zero.
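The compounding computations above are easy to reproduce (a sketch; the monthly figure matches the \$2 613.04 found earlier, and finer compounding approaches $1000e$):

```python
import math

def amount(periods_per_year: int, principal: float = 1000.0) -> float:
    # 100% annual interest, compounded `periods_per_year` times per year
    return principal * (1 + 1 / periods_per_year) ** periods_per_year

print(amount(12))             # monthly: about 2613.04
print(amount(365 * 24 * 60))  # minutely: very close to 1000 * e
print(1000 * math.e)          # the continuous-compounding limit
```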
See Section 4.14.5 for a calculus-based discussion of the fascinating number $e$.
Was Euler selfish in selecting $e$ for this number? Probably not. Note that it was Euler who adopted the symbol $\pi$, in 1737.
Irrationality of e. Similar to Euclid's proof of the irrationality of $\sqrt{2}$, we use a proof by contradiction here. We assume that $e$ is a rational number and this will lead us to a nonsense conclusion.
The plan seems easy, but carrying it out is different. We start with Eq. (2.26.3):
$$1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots = \frac{a}{b}$$
where $a, b \in \mathbb{N}$. The trick is to make $b!$ appear in the LHS of this equation:
$$\left[1 + \frac{1}{1!} + \frac{1}{2!} + \cdots + \frac{1}{b!}\right] + \left[\frac{1}{(b+1)!} + \frac{1}{(b+2)!} + \cdots\right] = \frac{a}{b} \qquad (2.26.4)$$
We can simplify the two bracketed (red and blue) terms. For the red term, using the fact that $b! = b(b-1)(b-2)\cdots 2 \cdot 1$, every term $1/k!$ with $k \le b$ can be put over the common denominator $b!$, so the red term is of the form $c/b!$ where $c \in \mathbb{N}$.
For the blue term, we need to massage it a bit:
$$\frac{1}{(b+1)!} + \frac{1}{(b+2)!} + \cdots = \frac{1}{(b+1)\,b!} + \frac{1}{(b+2)(b+1)\,b!} + \cdots = \frac{1}{b!}\left[\frac{1}{b+1} + \frac{1}{(b+2)(b+1)} + \cdots\right]$$
Denote by $x$ the bracketed term; we are going to show that $0 < x < 1/b$. In other words, $x$ is not an integer. Indeed,
$$x < \frac{1}{b+1} + \frac{1}{(b+1)^2} + \frac{1}{(b+1)^3} + \cdots = \frac{1}{b+1} \cdot \frac{1}{1 - \frac{1}{b+1}} = \frac{1}{b}$$
where we used the formula for the geometric series.
Now Eq. (2.26.4) becomes as simple as:
$$\frac{a}{b} = \frac{c}{b!} + \frac{1}{b!}\,x$$
Multiplying this equation by $b!$ to get rid of the factorials, we have:
$$a\,(b-1)! = c + x$$
And this is equivalent to saying that an integer is equal to the sum of an integer and a number strictly between 0 and 1, which is nonsense!
Question 7. If
$$e = \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n$$
then what is
$$\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^n = \;?$$
Try to guess the result, and check it using a computer.
For a given $n$, if we compute the product of all the binomial coefficients in that row, denoted by $s_n$, something interesting will emerge. We define $s_n$ as:
$$s_n = \prod_{k=0}^{n} \binom{n}{k} \qquad (2.27.1)$$
The first few $s_n$ are shown in Fig. 2.41. The sequence $(s_n)$ grows bigger and bigger. How about the ratio $s_n/s_{n-1}$?
Figure 2.41: Pascal triangle and some $s_n = \prod_{k=0}^{n} \binom{n}{k}$.
Note that when $n$ is very big, $n$ and $n-1$ are pretty much the same. That is why, in the above equation, we have different but equivalent expressions for the limit.
Proof. Herein we prove that Eq. (2.27.2) is true. First, we compute $s_n$:
$$s_n = \prod_{k=0}^{n} \binom{n}{k} = \prod_{k=0}^{n} \frac{n!}{(n-k)!\,k!} = (n!)^{n+1} \prod_{k=0}^{n} \frac{1}{(k!)^2} \qquad (2.27.3)$$
To see the last equality, one can work it out directly for a particular case. For $n = 3$, we have
$$s_3 = \prod_{k=0}^{3} \frac{3!}{(3-k)!\,k!} = \frac{3!}{3!\,0!} \cdot \frac{3!}{2!\,1!} \cdot \frac{3!}{1!\,2!} \cdot \frac{3!}{0!\,3!} = (3!)^4 \prod_{k=0}^{3} \frac{1}{(k!)^2}$$
If, instead of a product, we consider the sum of all the coefficients in the $n$th row, we shall get $2^n$. Check Fig. 2.41, row 3: $1 + 3 + 3 + 1 = 8 = 2^3$.
Table 2.21: $s_n = \prod_{k=0}^{n} \binom{n}{k}$; see Listing B.5 for the code.
n      s_n             r_n = s_n/s_{n-1}    r_n/r_{n-1}
1      1               1                    1
2      2               2                    2
3      9               4.5                  2.25
4      96              10.67                2.37
5      2500            26.042               2.44
6      162000          64.8                 2.49
...    ...             ...                  ...
89     3.45e+1635
90     1.77e+1673      5.13e+37
91     2.46e+1711      1.39e+38             2.70
...    ...             ...                  ...
899    2.22e+174201
900    2.17e+174590    9.74e+388
901    5.74e+174979    2.65e+389            2.71677
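The small table entries can be regenerated with exact integer arithmetic (a sketch; for the very large $n$ in the table one would work with logarithms instead, as the values overflow floating point):

```python
import math

def s(n: int) -> int:
    # product of all binomial coefficients in row n of Pascal's triangle
    prod = 1
    for k in range(n + 1):
        prod *= math.comb(n, k)
    return prod

assert [s(n) for n in range(1, 7)] == [1, 2, 9, 96, 2500, 162000]

# the ratio of ratios r_n / r_{n-1} equals (1 + 1/(n-1))^(n-1), which tends to e
r = lambda n: s(n) / s(n - 1)
print(r(6) / r(5))  # 2.48832, i.e. (1 + 1/5)^5
```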
2.28 Polynomials
A polynomial is an expression consisting of variables (also called indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables:
$$P_n(x) := a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0 \qquad (2.28.1)$$
Assume that an ¤ 0, then n is called the degree of the polynomial (which is the largest degree of
any term with nonzero coefficient). Polynomials of small degree have been given specific names.
A polynomial of degree zero is a constant polynomial (or simply a constant). Polynomials of
degree one, two or three are linear polynomials, quadratic polynomials and cubic polynomials,
respectively. For higher degrees, the specific names are not commonly used, although quartic
polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used.
Thus, the sum of two polynomials is obtained by adding together the coefficients of corresponding powers of $x$. Subtraction of polynomials works the same way. And going from two polynomials to $n$ polynomials is a breeze thanks to Eq. (2.28.2). To see the power of compact notation, let $\sum_{k=0}^{n} a_k x^k$ be the first polynomial and $\sum_{k=0}^{n} b_k x^k$ be the second; then the sum is obviously $\sum_{k=0}^{n} (a_k + b_k)x^k$. It's nice, isn't it?
The next thing is the product of two polynomials:
$$\left(\sum_{k=0}^{n} a_k x^k\right)\left(\sum_{j=0}^{m} b_j x^j\right) = \sum_{k=0}^{n}\sum_{j=0}^{m} a_k b_j\, x^{k+j}$$
which comes from the usual arithmetic rules. What is interesting is that for two polynomials $p$ and $q$, the degree of the product $pq$ is the sum of the degrees of $p$ and $q$:
$$\deg(pq) = \deg p + \deg q$$
The division of one polynomial by another is not typically a polynomial. Instead, such ratios
are a more general family of objects, called rational fractions, rational expressions, or rational
functions, depending on context. This is analogous to the fact that the ratio of two integers is a
rational number. For example, the fraction 2=.1 C x 3 / is not a polynomial; it cannot be written
as a finite sum of powers of the variable x.
Let's divide $x^2 - 3x - 10$ by $x + 2$, and $2x^2 - 5x - 1$ by $x - 3$, using long division. The first division is exact:
$$\frac{x^2 - 3x - 10}{x + 2} = x - 5 \iff x^2 - 3x - 10 = (x+2)(x-5)$$
The second leaves a remainder of 2:
$$\frac{2x^2 - 5x - 1}{x - 3} = 2x + 1 + \frac{2}{x - 3} \iff 2x^2 - 5x - 1 = (x-3)(2x+1) + 2$$
In $2x^2 - 5x - 1 = (x-3)(2x+1) + 2$, the term $2x^2 - 5x - 1$ is called the dividend, $x - 3$ the divisor, and $2x + 1$ the quotient. The term 2 is called the remainder. And we want to understand it.
theorem that $f(x) = (x-1)(\cdots) + f(1)$. But $f(1) = 0$, as 1 is one solution of $f(x) = 0$, so $f(x) = (x-1)(\cdots)$.
In Section 2.23.2 we have observed that the complex roots of z n 1 D 0 come in conjugate
pairs. Now, we can prove that. Suppose that
$$p(x) = a_0 + a_1 x + \cdots + a_n x^n, \quad a_i \in \mathbb{R}$$
is a polynomial with real coefficients. Let $\alpha$ be a complex root (or zero) of $p$, i.e., $p(\alpha) = 0$. We need to prove that the complex conjugate $\bar{\alpha}$ of $\alpha$ is also a root; that is, $p(\bar{\alpha}) = 0$. The starting point is, of course, $p(\alpha) = 0$. So we write $p(\bar{\alpha})$:
$$\begin{aligned}
p(\bar{\alpha}) &= a_0 + a_1 \bar{\alpha} + \cdots + a_n \bar{\alpha}^n\\
&= \bar{a}_0 + \bar{a}_1 \bar{\alpha} + \cdots + \bar{a}_n \bar{\alpha}^n && (\bar{a} = a \text{ if } a \text{ is real})\\
&= \overline{a_0} + \overline{a_1 \alpha} + \cdots + \overline{a_n \alpha^n} && (\overline{ab} = \bar{a}\,\bar{b})\\
&= \overline{a_0 + a_1 \alpha + \cdots + a_n \alpha^n} && (\overline{a+b} = \bar{a} + \bar{b})\\
&= \overline{p(\alpha)} = \bar{0} = 0
\end{aligned}$$
Consider evaluating $f(x) = 2x^3 - 6x^2 + 2x + 1$ at $x = x_0$ naively:
$$f(x_0) = \underbrace{2 \cdot x_0 \cdot x_0 \cdot x_0}_{3\ \text{multiplications}} - \underbrace{6 \cdot x_0 \cdot x_0}_{2\ \text{multiplications}} + \underbrace{2 \cdot x_0}_{1\ \text{multiplication}} + 1$$
which involves 6 multiplications and 3 additions. How about the general $P_n(x_0)$? To count the multiplications/additions, we need to write down the algorithm; Algorithm 1 is such one. Having this algorithm in hand, it is easy to convert it to a program.
additions: $n$; multiplications: $1 + 2 + \cdots + n = n(n+1)/2$
Can we do better? The British mathematician William George Horner (1786–1837) developed a better method, today known as Horner's method. But he attributed it to Joseph-Louis Lagrange, and the method can be traced back many hundreds of years to Chinese and Persian mathematicians.
In Horner's method, we massage $f(x_0)$ a bit, as:
$$f(x_0) = 2x_0^3 - 6x_0^2 + 2x_0 + 1 = \big((2x_0 - 6)x_0 + 2\big)x_0 + 1$$
which requires only 3 multiplications! For $P_n(x_0)$, Horner's method needs just $n$ multiplications. To implement Horner's method, a new sequence of constants is defined recursively as follows:
$$\begin{aligned}
b_3 &= a_3 & b_3 &= 2\\
b_2 &= x_0 b_3 + a_2 & b_2 &= 2x_0 - 6\\
b_1 &= x_0 b_2 + a_1 & b_1 &= x_0(2x_0 - 6) + 2\\
b_0 &= x_0 b_1 + a_0 & b_0 &= x_0\big(x_0(2x_0 - 6) + 2\big) + 1
\end{aligned}$$
where the left column is for a general cubic polynomial whereas the right column is for the specific $f(x) = 2x^3 - 6x^2 + 2x + 1$. Then, $f(x_0) = b_0$. As to finding the consecutive $b$-values, we start by determining $b_n$, which is simply equal to $a_n$. We then work our way down to the other $b$'s, using the recursive formula $b_{n-1} = a_{n-1} + b_n x_0$, until we arrive at $b_0$.
A by-product of Horner's method is that we can also find the division of $f(x)$ by $x - x_0$: the numbers $b_n, b_{n-1}, \ldots, b_1$ are the coefficients of the quotient, and $b_0$ is the remainder. One application is to find all solutions of $P_n(x) = 0$: we use Horner's method together with Newton's method. A good exercise to practice coding is to write a small program to solve $P_n(x) = 0$: the input is $P_n(x)$, and at the press of a button we get all the solutions, nearly instantly!
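A sketch of Horner's method in Python, following the recursion for the $b$-values (coefficients given from highest degree down):

```python
def horner(coeffs: list[float], x0: float) -> float:
    # evaluate a_n x^n + ... + a_1 x + a_0 at x0 with only n multiplications
    b = coeffs[0]           # b_n = a_n
    for a in coeffs[1:]:
        b = b * x0 + a      # b_{k-1} = b_k * x0 + a_{k-1}
    return b                # b_0 = f(x0)

# f(x) = 2x^3 - 6x^2 + 2x + 1 at x0 = 3: 54 - 54 + 6 + 1 = 7
print(horner([2, -6, 2, 1], 3))  # 7
```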
Yes, Horner’s method is faster than the naive method. But mathematicians are still not satisfied with this. Why?
Because there might be another (hidden) method that can be better than Horner’s. Imagine that if they can prove
that Horner’s method is the best then they can stop searching for better ones. And they proved that it is the case.
Details are beyond my capacity.
That is remarkable given that the expression for the roots is quite messy: their sum and product are, however, very simple functions of the coefficients of the quadratic equation. This is known as Vieta's formula, discovered by Viète. Not many high school students (including the author), after learning the well-known quadratic formula, asked this question and discovered this formula for themselves.
Remark 2. Did you notice something special about Eq. (2.28.4)? Note that $x_1 + x_2$ and $x_1 x_2$ will not change if we switch the roots; i.e., $x_2 + x_1$ is exactly $x_1 + x_2$. Is this a coincidence? Of course not. The quadratic equation does not care how we label its roots.
After this, another question should be asked: Do we have the same formula for cubic equa-
tions, or for any polynomial equations? Before answering that question, we need to find a better
way to come up with Vieta’s formula. As x1 and x2 are the roots of the quadratic equation, we
can write that equation in this form
$$(x - x_1)(x - x_2) = 0 \iff x^2 - (x_1 + x_2)x + x_1 x_2 = 0$$
For a cubic equation $ax^3 + bx^2 + cx + d = 0$ with roots $x_1, x_2, x_3$, the same reasoning gives
$$x_1 + x_2 + x_3 = -\frac{b}{a}, \quad x_1 x_2 + x_2 x_3 + x_3 x_1 = \frac{c}{a}, \quad x_1 x_2 x_3 = -\frac{d}{a}$$
Summarizing these results for quadratic and cubic equations, we write (to see the pattern)
$$a_2 x^2 + a_1 x + a_0 = 0: \quad x_1 + x_2 = -\frac{a_1}{a_2}, \quad x_1 x_2 = \frac{a_0}{a_2}$$
$$a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0: \quad x_1 + x_2 + x_3 = -\frac{a_2}{a_3}, \quad x_1 x_2 x_3 = -\frac{a_0}{a_3}$$
Now, we meet a new mathematical object called elementary symmetric sums of polynomials.
The expressions x1 C x2 and x1 x2 , which we see above, are elementary symmetric sums.
Definition 2.28.1
The $k$-th elementary symmetric sum of a set of $n$ numbers is the sum of all products of $k$ of those numbers ($1 \le k \le n$). For example, if $n = 4$ and our set of numbers is $\{a, b, c, d\}$, then:
$$\begin{aligned}
\text{1st symmetric sum} &= S_1 = a + b + c + d\\
\text{2nd symmetric sum} &= S_2 = ab + ac + ad + bc + bd + cd\\
\text{3rd symmetric sum} &= S_3 = abc + abd + acd + bcd\\
\text{4th symmetric sum} &= S_4 = abcd
\end{aligned} \qquad (2.28.5)$$
With this new definition, we can write the general Vieta's formula. For an $n$th order polynomial equation
$$a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0 = 0$$
we have
$$S_j = (-1)^j \frac{a_{n-j}}{a_n}, \quad 1 \le j \le n$$
where Sj is the j -th elementary symmetric sum of a set of n roots. With a proper tool, we
can have a compact Vieta’s formula that encapsulates all symmetric sums of the roots of any
polynomial equation!
If we did not know Vieta's formula, then finding the complex roots of the following system of equations
$$x + y + z = 2, \quad xy + yz + zx = 4, \quad xyz = 8$$
would be hard. But it is nothing but this problem: solving the cubic equation $t^3 - 2t^2 + 4t - 8 = 0$!
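Indeed, factoring $t^3 - 2t^2 + 4t - 8 = t^2(t-2) + 4(t-2) = (t-2)(t^2+4)$ gives the roots $2, 2i, -2i$, which we can verify against the three equations (a quick Python check):

```python
roots = [2, 2j, -2j]  # roots of t^3 - 2t^2 + 4t - 8 = 0

p = lambda t: t**3 - 2 * t**2 + 4 * t - 8
assert all(p(t) == 0 for t in roots)

x, y, z = roots
assert x + y + z == 2              # S1
assert x*y + y*z + z*x == 4        # S2
assert x*y*z == 8                  # S3
print("system solved by", roots)
```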
Some problems using Vieta’s formula.
For the first problem (with $x_1, x_2$ the roots of $x^2 + 3x + 1 = 0$), the idea is to use Vieta's formula, which reads $x_1 + x_2 = -3$ and $x_1 x_2 = 1$. To use $x_1 + x_2$ and $x_1 x_2$, we have to massage $S$ so that these terms show up. For example, for the term $x_1/(x_2 + 1)$, we do (noting that $x_2^2 + 3x_2 + 1 = 0$, thus $x_2^2 + x_2 = -1 - 2x_2$):
$$\frac{x_1}{x_2 + 1} = \frac{x_1 x_2}{x_2^2 + x_2} = \frac{x_1 x_2}{-1 - 2x_2} = \frac{-1}{1 + 2x_2}$$
Do we need to do the same for the second term? No, we have it immediately once we have the above:
$$\frac{x_2}{x_1 + 1} = \frac{-1}{1 + 2x_1}$$
Now, the problem is easier:
$$S = \frac{1}{(1 + 2x_1)^2} + \frac{1}{(1 + 2x_2)^2} = \frac{(1 + 2x_1)^2 + (1 + 2x_2)^2}{\left[(1 + 2x_1)(1 + 2x_2)\right]^2} = 18$$
Figure 2.42: Telling the time using a clock. Imagine that the afternoon times are laid on top of their
respective morning times: 16 is next to 4, so 16 and 4 are the same or congruent (on the clock).
So in this clock world, we only care where we are in relation to the numbers 1 to 12. In this
world, 1; 13; 25; 37; : : : are all thought of as the same thing, as are 2; 14; 26; 38; : : : and so on.
What we are saying is "13 D 1Csome multiple of 12", and "26 D 2Csome multiple of 12",
or, alternatively, "the remainder when we divide 13 by 12 is 1" and "the remainder when we
divide 26 by 12 is 2”. The way mathematicians express this is:
This is read as "13 is congruent to 1 mod (or modulo) 12" and "26 is congruent to 2 mod 12".
But we don't have to work only in mod 12. For example, we can work with mod 7, or mod 10 instead. Now we can better understand the cardioid introduced in Chapter 1, re-given below in Fig. 2.43. Herein, we draw a line from number $n$ to $2n \pmod N$, because on the circle we only have $N$ points. For example, $7 \times 2 = 14$, which is congruent to 4 modulo 10. That's why we drew a line from 7 to 4.
Should we stop with the times table of 2? No, of course not. We can play with the times tables of three, four and so on. Fig. 2.44a shows the result for the case of eight. How about the times table of a non-integer number like 2.5? Why not? See Fig. 2.44b.
So, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around"
when reaching a certain value, called the modulus. The modern approach to modular arithmetic
was developed by Gauss in his book Disquisitiones Arithmeticae, published in 1801.
Now that we have a new kind of arithmetic, the next thing is to find the rules it obeys. Actually, modular arithmetic obeys rules similar to those of ordinary arithmetic: if $a \equiv b \pmod m$ and $c \equiv d \pmod m$, then
(a) addition/subtraction: $a \pm c \equiv b \pm d \pmod m$
(b) multiplication: $ac \equiv bd \pmod m$ \hfill (2.29.1)
(c) exponentiation: $a^p \equiv b^p \pmod m$, $p \in \mathbb{N}$
The proof of these rules is skipped here; note that the exponentiation rule simply follows from the multiplication rule.
Let’s solve some problems using this new mathematics. The first problem is: what is the last
digit (also called the units digit) of the sum
Of course, we can solve this by computing the sum, which is 8221, and from that the answer is
1. But, using modular arithmetic provides a more elegant solution in which we do not have to
add all these numbers.
Note that,
And the units digit of the sum is one. In this method, we had to only add 3,1,8 and 9.
The second problem is: Andy has 44 boxes of soda in his truck. The cans of soda in each
box are packed oddly so that there are 113 cans of soda in each box. Andy plans to pack the
sodas into cases of 12 cans to sell. After making as many complete cases as possible, how many
sodas will he have leftover?
This word problem is mathematically translated as: finding the remainder of the product $44 \times 113$ when divided by 12. We have $44 \equiv 8 \pmod{12}$ and $113 \equiv 5 \pmod{12}$. Thus,
$$44 \times 113 \equiv 8 \times 5 \pmod{12} \equiv 40 \pmod{12} \equiv 4$$
So, the number of sodas left over is four.
In the third problem we shall move from addition to exponentiation. The problem is what
are the tens and units digits of 71942 ? Of course, we find the answers without actually computing
71942 .
Let's first consider a much easier problem: what are the last two digits of 1235, using modular arithmetic? We know that $1235 = 12 \times 100 + 35$, thus $1235 \equiv 35 \pmod{100}$. So, we can work modulo 100 to find the answer. Now, the strategy is to do simple things first: computing the powers of 7 and looking for a pattern:
$$\begin{aligned}
7^1 &= 7 &&: & 7^1 &\equiv 07 \pmod{100}\\
7^2 &= 49 &&: & 7^2 &\equiv 49 \pmod{100}\\
7^3 &= 343 &&: & 7^3 &\equiv 43 \pmod{100}\\
7^4 &= 2401 &&: & 7^4 &\equiv 01 \pmod{100}\\
7^5 &= 16807 &&: & 7^5 &\equiv 07 \pmod{100}\\
7^6 &= 117649 &&: & 7^6 &\equiv 49 \pmod{100}\\
7^7 &= \ldots 43 &&: & 7^7 &\equiv 43 \pmod{100}\\
7^8 &= \ldots &&: & 7^8 &\equiv 01 \pmod{100}\\
7^9 &= \ldots &&: & 7^9 &\equiv 07 \pmod{100}
\end{aligned} \qquad (2.29.2)$$
We definitely see a pattern here, the last two digits of a power of 7 can only be either of
07; 49; 43; 01. Now, as 1942 is an even number, we just focus on even powers that can be divided
into two groups: 2; 6; 10; : : : and 4; 8; 12; : : : The first group can be generally expressed by
2 C 4k for k D 0; 1; 2; : : :. Now, solving
2 C 4k D 1942
gives us $k = 485$. Therefore, the last two digits of $7^{1942}$ are 49. (Note that if you try the second group, the corresponding equation $4 + 4k = 1942$ has no integer solution; i.e., 1942 belongs to the first group.)
Although the answer is correct, there is something fishy in our solution. Note that we only
computed powers of 7 up to 7^9. Nothing guarantees that the pattern repeats forever,
or at least up to the exponent 1942! Of course we can prove that this pattern holds using the
multiplication rule. We can avoid going that way by computing 7^1942 directly, noting that
1942 = 5 × 388 + 2. Why this decomposition of 1942? Because 7^5 ≡ 7 (mod 100). With this,
we can write

7^1942 = (7^5)^388 (7^2) ≡ (7^388)(49) (mod 100)
7^388 ≡ (7^77)(7^3) ≡ (7^15)(7^2)(7^3) ≡ (7^3)(7^2)(7^3) (mod 100)
As can be seen, the idea is simple: replace the large number (1942) by smaller ones!
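We can let the machine confirm both the repeating pattern and the final answer. A small sketch in Python (the book's listings elsewhere are in Julia; Python's built-in pow(base, exp, mod) performs exactly this kind of modular exponentiation):

```python
# Last two digits of 7^1942 via modular exponentiation.
print(pow(7, 1942, 100))                       # 49

# The cycle 07, 49, 43, 01 has period 4 because 7^4 = 2401 ≡ 1 (mod 100).
print([pow(7, n, 100) for n in range(1, 10)])  # [7, 49, 43, 1, 7, 49, 43, 1, 7]
```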
Let's solve another problem, which is harder than the previous ones. Consider a function f
that takes a counting number a and returns a counting number obeying this rule:

f(a) = (sum of the digits of a)^2

The question is to compute f^(2007)(2^2006), i.e., f composed with itself 2007 times and applied
to 2^2006. Why 2006? Because this problem is a question from a math contest held in Hong Kong
in the year 2006.
Before we can proceed, we need to know more about the function f first. Concrete examples
are perfect for this. If a = 321, then

f(321) = (3 + 2 + 1)^2 = 36
Of course we cannot compute 2^2006 (because we are assumed to be in an exam without
access to a calculator), find its digits, sum them and square the sum, then apply the same steps
to the new number, and do this 2007 times! We cannot do all of this without a
calculator. There must be another way.
Because I did not know where to start, I wrote a Julia program, shown in Listing 2.1, to
solve it. The answer is 169.
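The Julia listing itself is not reproduced here, but an equivalent sketch in Python is short; Python's arbitrary-precision integers handle 2^2006 (a number of about 604 digits) directly:

```python
def f(a):
    # f(a) = (sum of the digits of a)^2
    return sum(int(d) for d in str(a)) ** 2

x = 2 ** 2006           # Python big integers cope with the full number
for _ in range(2007):   # compose f with itself 2007 times
    x = f(x)
print(x)  # 169
```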
But without a computer, how can we solve this problem? If we cannot solve this problem, let's
solve a similar but easier one; at least we get some points instead of zero! This technique
is known as specialization, and it is a very powerful strategy. How about computing f^(5)(2^4)?
That can be done, as 2^4 = 16:

f^(1)(2^4) = f(16) = (1 + 6)^2 = 49
f^(2)(2^4) = f(49) = (4 + 9)^2 = 169
f^(3)(2^4) = f(169) = (1 + 6 + 9)^2 = 256
f^(4)(2^4) = f(256) = (2 + 5 + 6)^2 = 169
f^(5)(2^4) = f(169) = (1 + 6 + 9)^2 = 256
Now, consider this problem: find the tens and units digits of 49^971. But wait, isn't it the same problem as
before? Yes, since 7^1942 = 49^971, but you will find that working with powers of 49, instead of 7, is easier.
The calculation was simple because 2^4 is a small number. What's important is that we see a
pattern. With this pattern it is easy to compute f^(n)(2^4) for whatever value of n, n ∈ N.
So far so good. We made progress because we were able to compute 2^4, which is 16; then
we could use the definition of the function f to proceed. For 2^2006, it is impossible to go this way.
Now, we should ask this question: why is the function f defined this way, i.e., depending on the
sum of the digits of the input? Why not the product of the digits? Let's investigate the sum of
the digits of a counting number. For example,

123 ⟹ 1 + 2 + 3 = 6;   4231 ⟹ 4 + 2 + 3 + 1 = 10

If we check the relation between 6 and 123, and between 10 and 4231, we find this:

123 ≡ 6 (mod 9);   4231 ≡ 10 (mod 9)

That is: the sum of the digits of a counting number is congruent to the number modulo 9. And
then, according to the exponentiation rule of modular arithmetic, the square of the sum of the digits
of a counting number is congruent to the number squared modulo 9. For example, 36 ≡ 123^2
(mod 9).
With this useful 'discovery', we can easily do the calculations without having to know the digits
of 2^4 (in other words, without calculating this number; note that our actual target is 2^2006):

f^(1)(2^4) ≡ (2^4)^2 ≡ 4 (mod 9)
f^(2)(2^4) ≡ (2^4)^4 ≡ 7 (mod 9)
f^(3)(2^4) ≡ (2^4)^8 ≡ 4 (mod 9)          (2.29.3)
f^(4)(2^4) ≡ (2^4)^16 ≡ 7 (mod 9)
Now, if we want to compute f^(4)(2^4), we can start with the fact that it is congruent to 7
(mod 9). But wait, there are infinitely many numbers congruent to 7 modulo 9; they are
{7, 16, 25, ..., 169, 178, ...}. We need to do one more thing: if we can find a small upper
bound on f^(4)(2^4), say f^(4)(2^4) < M, we can then remove many options and be able to find
f^(4)(2^4).
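The alternating residues 4, 7, 4, 7 in Eq. (2.29.3) can be double-checked by iterating f on 16 directly; a quick sanity check in Python:

```python
def f(a):
    # f(a) = (sum of the digits of a)^2
    return sum(int(d) for d in str(a)) ** 2

# Iterates of f starting from 2^4 = 16, together with their residues mod 9.
x, residues = 16, []
for _ in range(6):
    x = f(x)
    residues.append(x % 9)
print(residues)  # [4, 7, 4, 7, 4, 7]
```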
Now, we can try the original problem. Note that 2^2006 ≡ 4 (mod 9); then, by reasoning
similar to that behind Eq. (2.29.3), we get

f^(n)(2^2006) ≡ 4 (mod 9), if n is even;   f^(n)(2^2006) ≡ 7 (mod 9), if n is odd          (2.29.4)
And we combine this with the following result (to be proved shortly):
Now, we substitute n = 2005 in Eq. (2.29.4) and get f^(2005)(2^2006) ≡ 7 (mod 9). And because
the sum of the digits of a number is congruent to the number modulo 9, we now also have

sum of digits of f^(2005)(2^2006) ≡ 7 (mod 9)

which, combined with the bound just mentioned, leads to

sum of digits of f^(2005)(2^2006) < 23

Combining the two results on the sum of the digits of f^(2005)(2^2006), we can see that it can only
take one of the following two values: 7 or 16. Hence f^(2006)(2^2006) is either 7^2 = 49 or
16^2 = 256, which in both cases results in

f^(2007)(2^2006) = f(49) = f(256) = 169
Proof. Now comes the proof of Eq. (2.29.5). We start with the fact that 2^2006 < 2^2007 =
8^669 < 10^669. In words, 2^2006 is smaller than a number with 670 digits, so it has at most 669
digits. By the definition of f, we then have

f^(1)(2^2006) < f(99...9 [669 nines]) = (9 × 669)^2 < 10^8

This is because 99...9 with 669 nines is the largest number that is smaller than 10^669, and it has
the maximum possible sum of digits. Next, we do something similar for f^(2)(2^2006), starting now
with 10^8:

f^(2)(2^2006) < f(99...9 [8 nines]) = (9 × 8)^2 < 10^4
A boy is very excited about the number 100. He told me it is an even number, that
101 is an odd number, and that 1 million is an even number. Then the boy asked this
question: "Is infinity even or odd?"

This is a very interesting question, as infinity is something unusual, as we have seen in Sec-
tion 2.18. Let's assume that infinity is an odd number; then two times infinity, which is also
infinity, is even! So, infinity is neither even nor odd!
This section tells the story of the discovery made by a mathematician named Cantor that
there are infinities of different sizes. I recommend the book To Infinity and Beyond: A Cultural
History of the Infinite by Eli Maor [33] for an interesting account on infinity.
2.30.1 Sets
A set is a collection of things. For example, {1, 2, 5} is a set that contains the numbers 1, 2 and
5. These numbers are called the elements of the set. Because the order of the elements in a set
is irrelevant, {2, 1, 5} is the same set as {1, 2, 5}. Furthermore, an element cannot appear more
than once in a set; so {1, 1, 2, 5} is equivalent to {1, 2, 5}.
To say that 2 is a member of the set {1, 2, 5}, mathematicians write 2 ∈ {1, 2, 5}, and to say
that 6 is not a member of this set, they write 6 ∉ {1, 2, 5}.
Of course, the next thing mathematicians do with sets is to compare them. Considering two
sets, {1, 2, 3} and {3, 4, 5, 6}, it is clear that the second set has more elements than the first. We
use the notation |A|, called the cardinality, to indicate the number of elements of the set A. The
cardinality of a set is the size of this set, the number of elements in it. One important set is the
set of natural numbers:

N = {0, 1, 2, 3, ...}
Things become interesting when we compare infinite sets. For example, Galileo wrote in his
Two New Sciences about what is now known as Galileo’s paradox:
1. Some counting numbers are squares such as 1; 4; 9 and 16, and some are not squares such
as 2; 5; 7 and so on.
2. The totality of all counting numbers must be greater than the total of squares, because the
totality of all counting numbers includes squares as well as non-squares.
3. Yet for every counting number, we can have a one-to-one correspondence between num-
bers and squares, for example (a double-headed arrow ↔ is used for this one-to-one
correspondence):

1 ↔ 1,  2 ↔ 4,  3 ↔ 9,  4 ↔ 16,  5 ↔ 25,  6 ↔ 36
4. So, there are, in fact, as many squares as there are counting numbers. This is a contradiction,
as we have said in point 2 that there are more numbers than squares.
The German mathematician Georg Cantor (1845 – 1918) solved this problem by introducing
a new symbol @0 (pronounced aleph-null), using the first letter of the Hebrew alphabet with
the subscript 0. He said that @0 was the cardinality of the set of natural numbers N. Every set
whose members can be put in a one-to-one correspondence with the natural numbers also has
the cardinality @0 .
With this new technique, we can show that the sets N and Z have the same cardinality. Their
one-to-one correspondence is:

1 ↔ 0,  2 ↔ 1,  3 ↔ −1,  4 ↔ 2,  5 ↔ −2,  6 ↔ 3,  7 ↔ −3
The next question is: how about the set of rational numbers Q? Is it larger than or equal to the set
of natural numbers? Between 1 and 2, there are only two natural numbers, but there are infinitely
many rational numbers. Thus, it is tempting for us to conclude that |Q| > |N|. Again, Cantor
proved that we were wrong: |Q| = ℵ0!
For simplicity, we consider only positive rational numbers. A positive rational number is a
number of the form p/q where p, q ∈ N and q ≠ 0. First, Cantor arranged all positive rational
numbers into an infinite array:

1/1  2/1  3/1  4/1  5/1  ...
1/2  2/2  3/2  4/2  5/2  ...
1/3  2/3  3/3  4/3  5/3  ...
1/4  2/4  3/4  4/4  5/4  ...
1/5  2/5  3/5  4/5  5/5  ...
 .    .    .    .    .
where the first row contains all rational numbers with denominator one, the second row
those with denominator two, and so on. Note that this array has duplicated members; for instance
1/1, 2/2, 3/3, ... or 1/2, 3/6, 4/8.
Next, he devised a zigzag way to traverse all the numbers in the above infinite array, once
for each number:
1/1  2/1  3/1  4/1
1/2  2/2  3/2  4/2
1/3  2/3  3/3  4/3
1/4  2/4  3/4  4/4
If we follow this zigzag path all along: one step to the right, then diagonally down, then one step
down, then diagonally up, then again one step to the right, and so on ad infinitum, we will cover
all positive fractions, one by one. In this way we have arranged all positive fractions in a row,
one by one. (Along our path we will encounter fractions that have already been met before under
a different name, such as 2/2, 3/3, 4/4, and so on; these fractions we simply cross out and then
continue our path as before.) In other words, we can find a one-to-one correspondence between
the positive rationals and the natural numbers. This discovery that the rational numbers are
countable, in defiance of our intuition, left such a deep impression on Cantor that he wrote to
Dedekind: "Je le vois, mais je ne le crois pas!" ("I see it, but I don't believe it!").
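Cantor's counting of the positive rationals can be mimicked in code. The sketch below walks the array diagonal by diagonal (p + q constant); this ordering differs slightly from the zigzag in the text, but it also visits every positive fraction exactly once, crossing out the duplicates via a gcd test:

```python
from math import gcd

# Enumerate the first n positive rationals (as (numerator, denominator) pairs)
# by walking diagonals p + q = s, skipping fractions already met under a
# different name (those with gcd(p, q) > 1).
def rationals(n):
    out = []
    s = 2  # smallest diagonal: p + q = 2, i.e. the fraction 1/1
    while len(out) < n:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                out.append((p, q))
                if len(out) == n:
                    break
        s += 1
    return out

print(rationals(8))  # [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2)]
```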
Thus the natural numbers are countable, the integers are countable and the rationals are
countable. It seems as if everything is countable, and therefore all the infinite sets of numbers
you care to mention, even ones our intuition tells us contain more objects than there are natural
numbers, are the same size.
This is not the case.
There are exactly the same number of points in any interval [a, b] as in the whole number line R.
Using the above result, he proved that for the unit interval [0, 1], there is no one-to-one
correspondence between it and the set of natural numbers.
We focus on the second item. You might guess, correctly, that Cantor used a proof by
contradiction. And the proof must go like this: first, he assumed that all the decimals in [0, 1]
are countable; second, he would artificially create a number that is not among those decimals.
The following proof is taken from Bellos's Alex's Adventures in Numberland. It is based on
Hilbert's hotel, a hypothetical hotel named after the German mathematician David Hilbert that
has an infinite number of rooms. One day an infinite number of guests arrive at the
hotel. Each of these guests wears a T-shirt with a never-ending decimal between 0 and 1 (e.g.
0.415783113...). The manager of this hotel is a genius and thus was able to put all the guests
in the rooms:
room 1: 0.4157831134213468...
room 2: 0.1893952093807820...
room 3: 0.7581723828801250...
room 4: 0.7861108557469021...
room 5: 0.638351688264940...
room 6: 0.780627518029137...
   ...
You can prove the first item using ...geometry.
Now what Cantor did was to build one real number that was not in the above list. Cantor used
a diagonal method as follows. First, he constructed the number that has the first decimal place
of the number in Room 1, the second decimal place of the number in Room 2, the third decimal
place of the number in Room 3, and so on. In other words, he was choosing the diagonal digits
that are marked in brackets here:

room 1: 0.[4]157831134213468...
room 2: 0.1[8]93952093807820...
room 3: 0.75[8]1723828801250...
room 4: 0.786[1]108557469021...
room 5: 0.6383[5]1688264940...
room 6: 0.78062[7]518029137...
   ...
That number is 0.488157... Second, he altered all the decimals of this number: he added one to
each of them. The final number is 0.599268.... Now comes the best part: this number is
not in room 1, because its first digit is different from the first digit of the number in room 1. The
number is not in room 2 because its second digit is different from the second digit of the number
in room 2, and we can continue this to see that the number cannot be in any room n. Although
Hilbert's hotel is infinitely large, it is not large enough for the set of real numbers.
So, no matter how big Hilbert's hotel is, it cannot accommodate all the real numbers. The
set of real numbers is said to be uncountable. Now we have countably infinite sets (such as
N, Z, Q) and uncountably infinite sets (such as R). With the right mathematics, Cantor proved
that there are infinities of different sizes.
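The diagonal construction can be played out in code on the six hotel guests above. A sketch in Python; "add one" is taken here as (d + 1) mod 10 so that a 9 would wrap to 0 (the text's example never meets a 9, so this subtlety does not arise there):

```python
# The guests' digit strings, copied from the room list in the text.
rooms = ["4157831134213468", "1893952093807820", "7581723828801250",
         "7861108557469021", "638351688264940", "780627518029137"]

# Take the n-th digit of the guest in room n, then add one to each digit.
diagonal = "".join(r[i] for i, r in enumerate(rooms))
new_number = "".join(str((int(d) + 1) % 10) for d in diagonal)
print(diagonal, new_number)  # 488157 599268

# The new decimal differs from guest n at the n-th place, so it is in no room.
print(all(new_number[i] != rooms[i][i] for i in range(len(rooms))))  # True
```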
There are only 10 types of people in the world: those who understand binary and
those who don’t.
If you got this joke you can skip this section; if you don't, this section is for you.
Computers use only two digits, 0 and 1, which are called the binary digits, from which we
have the word "bit". In that binary world, how do we write the number 2? It is 10. Now you
have understood the above joke. But why does 10 equal 2? To answer that question we need to
go back to the decimal system. For some reason we human beings settled on this system. In this
system there are only ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
How do we write ten then? There is no such digit in our system! Note that we're allowed to
use only 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The solution is to write ten with two digits: 10. To understand
this more, we continue with eleven (11), twelve (12), up to nineteen (19). How about twenty? We
do the same thing: 20. Thus, any positive integer is a combination of powers of 10. Because of
this, 10 is called the base of the decimal system.
For the binary system we do the same thing, but with powers of 2 of course.
For example, 2 in decimal is written 10 in binary; we use subscripts to signify the number
system, so 10₂ denotes the string 10 read in the binary system, i.e., the number two. Refer to the
next figure to see the binary numbers for 1 to 6 in the decimal system. With this, it is
straightforward to convert from binary to decimal. For example, 111₂ = 1 × 2^2 + 1 × 2^1 +
1 × 2^0 = 7₁₀. How about the conversion from decimal to binary?
We use the fact that any number is a combination of powers of two. For example,
75₁₀ = 64 + 8 + 2 + 1 = 2^6 + 0 × 2^5 + 0 × 2^4 + 2^3 + 0 × 2^2 + 2^1 + 2^0 = 1001011₂.
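These conversions are built into most languages; a quick check in Python:

```python
# Binary -> decimal: interpret the digit string in base 2.
print(int("111", 2))    # 1*2^2 + 1*2^1 + 1*2^0 = 7

# Decimal -> binary: 75 = 64 + 8 + 2 + 1.
print(format(75, "b"))  # 1001011
```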
One disadvantage of the binary system is the long strings of 1's and 0's needed
to represent large numbers. To solve this, the "hexadecimal" or simply "hex" number system,
with base 16, was developed. Being a base-16 system, the hexadecimal number system
uses 16 different digits, covering the values 0 through 15. However,
there is a potential problem with this method of digit notation, caused by the fact that the
decimal numerals 10, 11, 12, 13, 14 and 15 are normally written using two adjacent symbols.
For example, if we write 10 in hexadecimal, do we mean the decimal number ten, or the
hexadecimal number sixteen? To get around this tricky problem, the hexadecimal digits for
the values ten, eleven, ..., fifteen are written with the capital letters A, B, C, D, E and F,
respectively.
So, let's convert the hex number E7 to decimal. The old rule applies: a hex
number is a combination of powers of 16. Thus E7 = 7 × 16^0 + 14 × 16^1 = 231.
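The same built-ins handle base 16; in Python:

```python
# Hex -> decimal: E7 = 7*16^0 + 14*16^1.
print(int("E7", 16))     # 231

# Decimal -> hex, using capital letters A-F for the values ten to fifteen.
print(format(231, "X"))  # E7
```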
far from St. Petersburg, home of the famous mathematician Leonhard Euler.
Carl Leonhard Gottlieb Ehler, mayor of Danzig, asked Euler for a solution to the problem in
1736. And this is what Euler replied (from [23]), seeing no connection between this problem and
the mathematics current at the time:
Thus you see, most noble Sir, how this type of solution bears little relationship to
mathematics, and I do not understand why you expect a mathematician to produce it,
rather than anyone else, for the solution is based on reason alone, and its discovery
does not depend on any mathematical principle. Because of this, I do not know why
even questions which bear so little relationship to mathematics are solved more
quickly by mathematicians than by others.
Even though Euler found the problem trivial, he was still intrigued by it. In a letter written
the same year to the Italian mathematician and engineer Giovanni Marinoni, Euler said,
And as is often the case, when Euler paid attention to a problem, he solved it. Since neither
geometry nor algebra was sufficient (in other words, the current mathematics could not solve this
problem), in the process he developed a new mathematics, which we now call graph theory.
The first thing Euler did was to get rid of things that are irrelevant to the problem. Things
such as the color of the bridges, of the water, or how big the landmasses are, are all irrelevant.
Thus, he drew a schematic of the problem, shown on the left of Fig. 2.45. He labeled the landmasses
A, B, C, D and the bridges a, b, c, d, e, f, g. The problem is just about the connections between
these entities. Nowadays, we can go further: it is obvious that we do not have to draw the
landmasses; we can represent them as dots, and the bridges as lines (or curves). In the right
figure of Fig. 2.45 we did that, and the result is called a graph (denoted by G).
Figure 2.45: The schematic of the Seven Bridges of Königsberg and its graph.
What information can we read from a graph? The first things are the number of vertices and
the number of edges. Is that all? If so, how could we differentiate one vertex from another? We
also have to look at the number of edges that meet at a vertex. To save words, mathematicians
of course defined a word for that: it is called the degree of a vertex. For example, vertex
C has a degree of five, whereas vertices A, B, D each have a degree of three.
Now, we are going to solve easier graphs and see the pattern. Then we come back to the
Seven Bridges of Königsberg. We consider five graphs as shown in Fig. 2.46. Now, try to solve
these graphs and fill in a table similar to Table 2.22 and try to see the pattern for yourself before
continuing. Based on the solution given in Fig. 2.47, we can fill in the table.
Figure 2.46: Easy graphs to solve. The number on top each graph is to number them.
Table 2.22: Results for the graphs in Fig. 2.46. An odd vertex is a vertex having an odd degree.

Graph   Odd vertices   Even vertices   Solvable?
1       0              4               Yes
2       2              2               Yes
3       4              0               No
4       4              1               No
5       2              3               Yes
Figure 2.47: Solution to easy graphs in Fig. 2.46. A single arrow indicates the starting vertex and a double
arrow for the finishing vertex.
What do we see from Table 2.22? We can only find a solution whenever the number of
odd vertices is either 0 or 2. The case of 0 is special: we can start at any vertex and we end up
eventually at exactly the same vertex (Fig. 2.47). For the case of two: we start at an odd vertex,
and end up at another odd vertex.
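The odd-vertex count is easy to automate. The sketch below uses a bridge list for the Königsberg graph (a hypothetical labeling consistent with the degrees stated in the text: C has degree five; A, B, D have degree three):

```python
from collections import Counter

# Seven bridges as pairs of landmasses; repeated pairs are multiple bridges.
bridges = [("C", "A"), ("C", "A"), ("C", "B"), ("C", "B"),
           ("C", "D"), ("A", "D"), ("B", "D")]

# The degree of a vertex is the number of bridge ends meeting it.
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree), len(odd))  # four odd vertices -> no solution exists
```

With four odd vertices, the 0-or-2 criterion above says the Seven Bridges walk is impossible.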
Figure 2.48: Coloring a map is equivalent to coloring the vertices of its graph.
In general, given any graph G, a coloring of the vertices is called (not surprisingly) a vertex
coloring. If the vertex coloring has the property that adjacent vertices are colored differently,
then the coloring is called proper. Every graph has a proper vertex coloring; for example, you
can color every vertex with a different color. But that's boring! Don't you agree? To make life
more interesting, we have to limit the number of colors used to a minimum. And we need a term
for that number. The smallest number of colors needed to get a proper vertex coloring is called
the chromatic number of the graph, written χ(G).
We do not try to prove the four color theorem here. No one could do it without using
computers! It was the first major theorem to be proved using a computer (proved in 1976 by
Kenneth Appel and Wolfgang Haken). Instead, we present one mundane application of graph
coloring: exam scheduling. Suppose algebra, physics, chemistry and history are four courses in
a college, and suppose that the following pairs have common students: algebra and chemistry,
algebra and history, chemistry and physics. If the algebra and chemistry exams are held on the
same day, then students taking both courses have to miss at least one exam; they cannot take
both at the same time. How do we schedule the exams in the minimum number of days so that
courses having common students are not held on the same day? You can look at the graph and
see the solution.
Francis Guthrie (1831-1899) was a South African mathematician and botanist who first posed the Four Color
Problem in 1852.
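A greedy pass over the conflict graph finds the schedule; a minimal sketch in Python (the course names come from the text; the greedy visiting order is an implementation choice and does not always achieve the chromatic number in general, though here it does):

```python
# Exam-conflict graph: vertices are courses, an edge means shared students,
# so the two exams need different days (colors).
conflicts = {
    "algebra":   ["chemistry", "history"],
    "chemistry": ["algebra", "physics"],
    "history":   ["algebra"],
    "physics":   ["chemistry"],
}

# Greedy coloring: give each course the smallest day not used by a neighbor.
color = {}
for course in conflicts:
    used = {color[n] for n in conflicts[course] if n in color}
    color[course] = min(c for c in range(len(conflicts)) if c not in used)

print(color, max(color.values()) + 1)  # two days suffice
```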
That's all about graphs for now. The idea is to inspire young students, especially those who
want to major in computer science in the future. If you're browsing the internet, you are using
a graph. The story goes like this. In 1998, two Stanford computer science PhD students, Larry
Page and Sergey Brin, forever changed the World Wide Web as we know it. They created one of
the most widely used websites: Google.com is one of the most successful companies
in the world. What was the basis for its success? It was the Google search engine that made
Larry Page and Sergey Brin millionaires.
The Google search engine is based on a simple algorithm called PageRank. PageRank is
built on a simple graph: the PageRank graph has all of the World Wide Web pages as vertices
and the hyperlinks on the pages as edges. To understand how it works we need not only graphs,
but also linear algebra (Chapter 10), probability (Chapter 5) and optimization theory. Yes, there
is no easy road to prosperity and fame.
2.33 Algorithm
2.33.1 Euclidean algorithm: greatest common divisor
To end this chapter I discuss a bit about algorithms, for they are ubiquitous in our world. Let's
play a game: finding the greatest common divisor/factor (gcd) of two positive integers. The gcd
of two integers is the largest number that divides them both. The manual solution is: (1) list
all the prime factors of the two numbers, (2) take the product of the common factors, and (3)
that product is the gcd. This is illustrated for 210 and 84:

210 = 2 × 3 × 5 × 7 = 42 × 5
84 = 2 × 2 × 3 × 7 = 42 × 2

Thus, the gcd of 210 and 84 is 42: gcd(210, 84) = 42. Obviously, if we need to find the gcd of
two big integers, this solution is terrible. Is there any better way?
If d is a common divisor of both a and b (assuming that a > b ≥ 0), then we can write
a = dm and b = dn where m, n ∈ N. Therefore, a − b = d(m − n). What does this mean?
It means that d | (a − b), i.e., d is also a divisor of a − b. Conversely, if d is a common divisor
of both a − b and b, it can be shown that it is a common divisor of both a and b. Therefore, the
set of common divisors of a and b is exactly the set of common divisors of a − b and b. Thus,
gcd(a, b) = gcd(a − b, b). This is a big deal, because we have replaced the problem with an easier
(or smaller) one, for a − b is smaller than a. So, this is how we proceed: to find gcd(210, 84) we
find gcd(126, 84), and to find gcd(126, 84) we find gcd(42, 84), which is equal to gcd(84, 42):

gcd(210, 84)
= gcd(126, 84)
= gcd(42, 84) = gcd(84, 42)
= gcd(42, 42) = 42

One example: 5 | 10 and 5 | 25, and 5 | (25 − 10), i.e., 5 is a divisor of 15.
We do not have to do this forever, as gcd(a, a) = a for any integer a. This algorithm is better
than the manual solution, but it is slow: imagine we have to find the gcd of 1000 and 3; too many
subtractions. But if we look at the algorithm we can see many repeated subtractions: for example
210 − 84 = 126 and 126 − 84 = 210 − 84 − 84 = 42. We can replace these two subtractions
by a single division with remainder: 42 = 210 mod 84, or 210 = 2 × 84 + 42.
It's time for generalization. The problem is to find gcd(a, b) for a > b > 0. The steps are
repeated divisions: first divide a by b to get the remainder r1, then divide b by r1 to get the
remainder r2, and so on:

gcd(a, b)       (a = q b + r1),      0 ≤ r1 < b
gcd(b, r1)      (b = q1 r1 + r2),    0 ≤ r2 < r1
gcd(r1, r2)     (r1 = q2 r2 + r3),   0 ≤ r3 < r2
...

We have obtained a decreasing sequence of remainders: b > r1 > r2 > r3 > ... ≥ 0.
Since the remainders decrease with every step but can never be negative, eventually we must
meet a zero remainder, at which point the procedure stops. The final nonzero remainder is the
greatest common divisor of a and b.
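The division form of the algorithm is only a few lines in any language; a sketch in Python:

```python
def gcd(a, b):
    # Replace (a, b) by (b, a mod b) until the remainder is zero;
    # the last non-zero value is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(210, 84), gcd(34, 19), gcd(1000, 3))  # 42 1 1
```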
What we have just seen is the Euclidean algorithm, named after the ancient Greek mathemati-
cian Euclid, who first described it in his Elements (c. 300 BC). It is an example of an algorithm,
a step-by-step procedure for performing a calculation according to well-defined rules, and is one
of the oldest algorithms in common use. About it, Donald Knuth wrote in his classic The Art of
Computer Programming: "The Euclidean algorithm is the granddaddy of all algorithms, because
it is the oldest nontrivial algorithm that has survived to the present day."
It is interesting to know that the solution to this problem lies in the Euclidean algorithm. Take
for example the problem of finding gcd(34, 19); using the Euclidean algorithm we do:

34 = 19(1) + 15 ;  gcd(19, 15)
19 = 15(1) + 4  ;  gcd(15, 4)
15 = 4(3) + 3   ;  gcd(4, 3)          (2.33.1)
4  = 3(1) + 1   ;  gcd(3, 1)
3  = 1(3) + 0   ;  gcd(1, 0)

Thus, gcd(34, 19) = 1. Now we go backwards: starting from the second-to-last equation, with the
non-zero remainder 1, which is the gcd of 34 and 19, we express 1 in terms of 34 and 19:
1 = 4 − (1)(3)
  = 4 − (1)(15 − 4(3)) = (4)(4) − (1)(15)          (replaced 3 using the 3rd eq. in Eq. (2.33.1))
  = (4)(19 − (15)(1)) − (1)(15) = 4(19) − (5)(15)  (replaced 4 using the 2nd eq. in Eq. (2.33.1))
  = 4(19) − (5)(34 − 19(1)) = −5(34) + 9(19)       (replaced 15 using the 1st eq. in Eq. (2.33.1))
What did we achieve after all of this boring arithmetic? We have expressed gcd(34, 19), which
is 1, as −5(34) + 9(19). This is known as Bézout's identity: gcd(a, b) = ax + by, where
a, b, x, y ∈ Z. In English, the gcd of two integers a, b can be written as an integral linear
combination of a and b. (A linear combination of a and b is just a nice name for a sum of
multiples of a and multiples of b.)
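The back-substitution above can be mechanized as the extended Euclidean algorithm; a sketch in Python returning the triple (g, x, y) with ax + by = g:

```python
def extended_gcd(a, b):
    # Returns (g, x, y) such that a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(34, 19)
print(g, x, y)          # 1 -5 9
print(34 * x + 19 * y)  # 1, i.e. 1 = -5(34) + 9(19)
```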
How does this identity help us to solve McClane's problem? Let a = 5 (the 5-gallon jug) and
b = 3; then gcd(5, 3) = 1. The Bézout identity tells us that we can always write 1 = 5x + 3y,
and hence 4 = 5x′ + 3y′ (we need 4 as the problem asked for 4 gallons of water). It is easy to see
that one solution of the equation 4 = 5x′ + 3y′ is x′ = 2 and y′ = −2: 4 = 5(2) + 3(−2). This
indicates that we need to fill the 5-gallon jug twice and drain out (subtraction!) the 3-gallon jug
twice. That's the rule for solving the puzzle.
Now it is time for this problem: "With only a 2-gallon jug and a 4-gallon jug, how do you get
one gallon of water?" Here a = 4 and b = 2, so we have gcd(4, 2) = 2. Bézout's identity tells
us that 2 = 4x + 2y (one solution is (1, −1)). But the problem asks for one gallon of water, so
we need to find x′ and y′ such that 1 = 4x′ + 2y′. After having spent quite some time without
success finding those guys x′ and y′, we come to the conjecture that 1 cannot be written as
4x′ + 2y′. And this is true, because the smallest positive integer that can be so written is
gcd(4, 2), which is 2.
2.34 Review
We have done lots of things in this chapter. It’s time to sit back and think deeply about what we
have done. We shall use a technique from Richard Feynman to review a topic. In his famous
lectures on physics [15], he wrote (emphasis is mine)
Details can be seen in the movie or youtube.
Note that d = gcd(a, b) divides ax + by. If c = ax′ + by′ > 0 then d | c, so c = dn ≥ d. Thus d is
the smallest positive integer which can be written as ax + by.
If, in some cataclysm, all of scientific knowledge were to be destroyed, and only
one sentence passed on to the next generations of creatures, what statement would
contain the most information in the fewest words? I believe it is the atomic hypoth-
esis (or the atomic fact, or whatever you wish to call it) that all things are made
of atoms—little particles that move around in perpetual motion, attracting each
other when they are a little distance apart, but repelling upon being squeezed into
one another. In that one sentence, you will see, there is an enormous amount of
information about the world, if just a little imagination and thinking are applied.
I emphasize that using Feynman's review technique is a very efficient way to review any
topic for a good understanding of it (and thus useful for exam review). Only a few key pieces of
information need to be learned by heart; the rest should follow naturally as consequences. This
avoids rote memorization, which is time-consuming and not effective.
I had planned to do a review of algebra starting with just one piece of knowledge, but I soon
realized that it is not easy. So I gave up. Instead, I provide some observations (or reflections) on
what we have done in this chapter (precisely, on what mathematicians have done on the topics
covered here):
By observing objects in our physical world and deducing their patterns, mathematicians
develop mathematical objects (e.g. numbers, shapes, functions, etc.) which are abstract (we
cannot touch them).
Even though mathematical objects are defined by humans, their properties are beyond us.
We cannot impose any property on them; what we can do is just discover them.
Quite often, mathematical objects live in many forms. For example, let's consider 1:
it can be 1^2, 1^3 or sin^2 x + cos^2 x, etc. Using the correct form usually opens the way to
something. And note that we also have many faces too.
Things usually go in pairs: boys/girls, men/women, right/wrong etc. They are opposite of
each other. In mathematics, we have the same: even/odd numbers, addition/subtraction,
multiplication/division, exponential/logarithm, and you will see differentiation/integration
in calculus.
Mathematicians love doing generalization. They first have arithmetic for numbers, then
they have arithmetic for functions, for vectors, for matrices. They have two-dimensional
and three-dimensional vectors (e.g. a force), and then soon they develop n-dimensional vectors,
where n can be any positive integer! Physicists only consider a 20-dimensional space.
But the boldest generalization we have seen in this chapter was when mathematicians
extended the square root of positive numbers to that of negative numbers.
From a practical point of view, all real numbers are rational ones. The distinction between
rational and irrational numbers is only of value to mathematics itself. Our measurements
always yield a terminating decimal, e.g. 3.1456789, which is a rational number.
Is this algebra the only kind of algebra? No, no, no. Later on we shall meet vectors, and
we have vector algebra and its generalization, linear algebra. We shall also meet matrices, and we
have matrix algebra. We have tensors, and we have tensor algebra. Still the list goes on: we have
abstract algebra and geometric algebra.
Contents
3.1 Euclidean geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
3.2 Trigonometric functions: right triangles . . . . . . . . . . . . . . . . . . 202
3.3 Trigonometric functions: unit circle . . . . . . . . . . . . . . . . . . . . . 203
3.4 Degree versus radian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
3.5 Some first properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
3.6 Sine table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
3.7 Trigonometry identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
3.8 Inverse trigonometric functions . . . . . . . . . . . . . . . . . . . . . . . 218
3.9 Inverse trigonometric identities . . . . . . . . . . . . . . . . . . . . . . . 219
3.10 Trigonometry inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . 221
3.11 Trigonometry equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.12 Generalized Pythagoras theorem . . . . . . . . . . . . . . . . . . . . . . 230
3.13 Graph of trigonometry functions . . . . . . . . . . . . . . . . . . . . . . 231
3.14 Hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.15 Applications of trigonometry . . . . . . . . . . . . . . . . . . . . . . . . . 239
3.16 Infinite series for sine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
3.17 Unusual trigonometric identities . . . . . . . . . . . . . . . . . . . . . . . 244
3.18 Spherical trigonometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
3.19 Computer algebra systems . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.20 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Chapter 3. Trigonometry 198
Trigonometry (from Greek trigōnon, "triangle" and metron, "measure") is a branch of mathe-
matics that studies relationships between side lengths and angles of triangles. The field emerged
during the 3rd century BC, from applications of geometry to astronomical studies. This is now
known as spherical trigonometry as it deals with the study of curved triangles, those triangles
drawn on the surface of a sphere. Later, another kind of trigonometry was developed to solve
problems in various fields such as surveying, physics, engineering, and architecture. This field is
called plane trigonometry or simply trigonometry. And it is this trigonometry that is the subject
of this chapter.
In learning trigonometry in high school a student often gets confused by the following facts. First, trigonometric functions are defined using a right triangle (e.g. sine is the ratio of the opposite side and the hypotenuse). Second, trigonometric functions are later redefined using a unit circle. Third, the measure of angles is suddenly switched from degrees to radians without a clear explanation. Fourth, there are too many trigonometric identities. And fifth, why do we have to spend time studying these triangles? In this chapter we try to make these issues clear.
Our presentation of trigonometry does not follow its historical development; nevertheless, we provide some historical perspective on the subject.
We start with Euclidean geometry in Section 3.1. Then, Section 3.2 introduces the trigonometric functions defined using a right triangle (e.g. $\sin x$). Then, trigonometric functions defined on a unit circle are discussed in Section 3.3. A presentation of degree versus radian is given in Section 3.4. We discuss how to compute the sine for angles between 0 and 360 degrees in Section 3.6, without using a calculator of course. Trigonometric identities (e.g. $\sin^2 x + \cos^2 x = 1$ for all $x$) are then presented in Section 3.7, and Section 3.8 outlines inverse trigonometric functions, e.g. $\arcsin x$. Next, inverse trigonometric identities are treated in Section 3.9. We devote Section 3.10 to trigonometric inequalities, a very interesting topic. Then in Section 3.11 we present trigonometric equations and how to solve them. The generalized Pythagorean theorem is treated in Section 3.12. Graphs of trigonometric functions are discussed in Section 3.13. Hyperbolic functions are treated in Section 3.14. Some applications of trigonometry are given in Section 3.15. A power series for the sine function, as discovered by ancient Indian mathematicians, is presented in Section 3.16. With it, it is possible to compute the sine of any angle. An interesting trigonometric identity of the form $\sin\alpha + \sin 2\alpha + \cdots + \sin n\alpha$ is treated in Section 3.17. In Section 3.18 we briefly introduce spherical trigonometry, as this topic has been removed from the high school curriculum. Finally, a brief introduction to CAS (computer algebra systems) is given in Section 3.19, so that students can get acquainted early with this powerful tool.
But this book would be incomplete without mentioning Euclidean geometry, especially Eu-
clid’s The Elements. Why? Because Euclid’s Elements has been referred to as the most success-
ful and influential textbook ever written. It has been estimated to be second only to the Bible
in the number of editions published since the first printing in 1482, the number reaching well
over one thousand. Moreover, without a proper introduction of Euclid’s geometry it would be
awkward to talk about trigonometry–a branch of mathematics which is based on geometry.
Geometry (meaning "earth measurement") is one of the oldest branches of mathematics. It is concerned with properties of space that are related to distance, shape, size, and relative position of figures. A mathematician who works in the field of geometry is called a geometer.
Euclid’s geometry, or Euclidean geometry, is a mathematical system attributed to Alexan-
drian Greek mathematician Euclid, which he described in his textbook The Elements. Written
about 300 B.C., it contains the results produced by fine mathematicians such as Thales, Hippias,
the Pythagoreans, Hippocrates, Eudoxus. The Elements begins with plane geometry: lines, circles, triangles and so on. These shapes are abstractions of the real shapes we observe in nature
(Fig. 3.1). It goes on to the solid geometry of three dimensions. Much of the Elements states
results of what are now called algebra and number theory, explained in geometrical language.
Figure 3.1: Geometry in nature: circle, rectangle and hexagon (from left to right).
Euclid’s geometry operates with basic objects such as points, lines, triangles (and polygons),
and circles (Fig. 3.2). And it then studies the properties of these objects such as the length of a
segment (i.e., a part of a line), the area of a triangle/circle.
Similar to numbers, which are abstract concepts, the points, lines etc. of geometry are also abstract. For example, a point does not have size; a line does not have thickness, and a line in geometry is perfectly straight! And certainly mathematicians do not care whether a line is made of steel or wood. There are no such things in the physical world.
The structure of Euclid's Elements is as follows:
1. Some definitions of the basic concepts: point, line, triangle, circle etc.
2. Ten axioms on which all subsequent reasoning is based. For example, Axiom 1 states that "Two points determine a unique straight line". Axiom 6 is "Things equal to the same thing are equal to each other" (which we now write as: if $a = b$ and $c = b$ then $a = c$).
3. Using the above definitions and axioms, Euclid proceeded to prove many theorems.
To illustrate one theorem and the characteristics of a geometric proof, let's consider the following theorem: an exterior angle of a triangle is greater than either remote interior angle of the triangle. To be precise, in Fig. 3.3 the theorem asserts that angle $D$ is greater than angles $A$ and $B$. Before
Figure 3.3: An exterior angle of a triangle is greater than either remote interior angle of the triangle.
attempting to prove a theorem, we should check whether it is correct: in Fig. 3.4, we try the case where the exterior angle is $90^\circ$, and the theorem holds. The idea of the proof is to draw the line going through $C$ and parallel to $AB$.
Figure 3.4: An exterior angle of a triangle is greater than either remote interior angle of the triangle.
Influence of The Elements. The Elements is still considered a masterpiece in the application
of logic to mathematics. It has proven enormously influential in many areas of science. Many
scientists, the likes of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, Albert Einstein
and Isaac Newton were all influenced by the Elements. When Newton wrote his masterpiece
There are basically two kinds of mathematical thinking, algebraic and geometric. A good mathematician needs to be a master of both. But still he will have a preference for one rather than the other. I prefer the geometric method. It is not mentioned in published work because it is not easy to print diagrams. With the algebraic method one deals with equations between algebraic quantities. Even though I see the consistency and logical connections of the equations, they do not mean very much to me. I prefer the relationships which I can visualize in geometric terms. Of course with complicated equations one may not be able to visualize the relationships, e.g. it may need too many dimensions. But with the simpler relationships one can often get help in understanding them by geometric pictures.
One remarkable thing that happened in Dirac's life is that he learned projective geometry early in his life (in secondary school at Bristol). He wrote: "This had a strange beauty and power which fascinated me". Projective geometry provided Dirac new insight into Euclidean space and into special relativity.
Of course Dirac could not know that his early exposure to projective geometry would be vital to his future career in physics. We simply can't connect the dots looking forward, as Steve Jobs (February 24, 1955 – October 5, 2011), the Apple co-founder, once said in his famous 2005 commencement speech at Stanford University:
You can’t connect the dots looking forward; you can only connect them looking
backwards. So you have to trust that the dots will somehow connect in your future.
You have to trust in something — your gut, destiny, life, karma, whatever. This
approach has never let me down, and it has made all the difference in my life.
Now comes the key point. If we can manage to compute the ratio $AC/AB$ for a given angle $\alpha$, then we can use it to solve any triangle with the angle at $B$ equal to $\alpha$. Thus, we have our very first trigonometric function, the tangent:

$\tan\alpha := \dfrac{AC}{AB}$
Thus a trigonometric function relates an angle of a right-angled triangle to a ratio of two side lengths. And if we have a table of the tangent, i.e., for each angle $\alpha$ we can look up its $\tan\alpha$, we can then solve every right-triangle problem; in Fig. 3.6a we can determine $A_1C_1 = A_1B\tan\alpha$. The first trigonometric table was apparently compiled by Hipparchus of Nicaea (180 – 125 BCE), who is now consequently known as "the father of trigonometry."
Why just the ratio $AC/AB$? All three sides of a triangle should be treated equally, and their ratios are constants for all right triangles with the same angle $\alpha$. If so, from 3 sides we can have six ratios! And voilà, we have six trigonometric functions. Quite often, they are also referred to as the six trigonometric ratios. They include sine, cosine and tangent and their reciprocals, and are defined as (Fig. 3.6b):
$\cos\alpha = \dfrac{\text{adjacent}}{\text{hypotenuse}} = \dfrac{AB}{BC}; \qquad \sec\alpha = \dfrac{BC}{AB} = \dfrac{1}{\cos\alpha}$

$\sin\alpha = \dfrac{\text{opposite}}{\text{hypotenuse}} = \dfrac{AC}{BC}; \qquad \csc\alpha = \dfrac{BC}{AC} = \dfrac{1}{\sin\alpha}$

$\tan\alpha = \dfrac{\text{opposite}}{\text{adjacent}} = \dfrac{AC}{AB}; \qquad \cot\alpha = \dfrac{AB}{AC} = \dfrac{1}{\tan\alpha}$
The secant of ˛ is 1 divided by the cosine of ˛, the cosecant of ˛ is defined to be 1 divided by
the sine of ˛, and the cotangent (cot) of ˛ is 1 divided by the tangent of ˛. These three functions
(secant, cosecant and cotangent) are the reciprocals of the cosine, sine and tangent.
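As a quick sanity check of these definitions, here is a minimal sketch in Python (not part of the book's exposition; the side labels $AB$, $AC$, $BC$ follow Fig. 3.6b, with the angle chosen arbitrarily):

```python
import math

# A right triangle with angle alpha at B: AB is the adjacent side,
# AC the opposite side, BC the hypotenuse (labels as in Fig. 3.6b).
alpha = math.radians(35)
AB = 1.0
AC = AB * math.tan(alpha)      # opposite = adjacent * tan(alpha)
BC = math.hypot(AB, AC)        # hypotenuse from Pythagoras

assert abs(math.sin(alpha) - AC / BC) < 1e-12   # sine = opposite/hypotenuse
assert abs(math.cos(alpha) - AB / BC) < 1e-12   # cosine = adjacent/hypotenuse
assert abs(1 / math.tan(alpha) - AB / AC) < 1e-12   # cotangent = 1/tangent
```

The assertions pass for any acute angle, since the ratios depend only on the angle, not on the size of the triangle.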
Where these names come from is to be explained in the next section.
from Baghdad through Spain, into western Europe in the Latin language, and then to modern
languages such as English and the rest of the world.
Right triangles have a serious limitation: they are excellent for angles up to $90^\circ$, but how about larger angles? And how about negative angles? We now change to a circle, which removes all these limitations.
We consider a unit circle (i.e., a circle with unit radius) centered at the origin of the Cartesian coordinate system (refer to Section 4.1.1 for details). Angles are measured from the positive $x$ axis counterclockwise; thus $90^\circ$ is straight up and $180^\circ$ is to the left (Fig. 3.8). The circle is divided into four quadrants: the first quadrant is for angles $\alpha \in [0^\circ, 90^\circ]$, the second quadrant is for angles $\alpha \in [90^\circ, 180^\circ]$, etc. An angle $\alpha$ corresponds to a point $A$ on the circle, and the $x$-coordinate of this point is $\cos\alpha$ whereas the $y$-coordinate is $\sin\alpha$.
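A small numerical illustration (a sketch; the angle $150^\circ$ is chosen arbitrarily) that the unit-circle definition works beyond $90^\circ$:

```python
import math

# On the unit circle, the point at angle alpha has coordinates (cos a, sin a);
# in the second quadrant the x-coordinate (the cosine) is negative.
alpha = math.radians(150)
x, y = math.cos(alpha), math.sin(alpha)
print(x, y)   # x = -sqrt(3)/2, y = 1/2
```

The right-triangle definition could never produce the negative cosine seen here.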
Mnemonics in trigonometry. The sine, cosine, and tangent ratios in a right triangle can be
remembered by representing them as strings of letters, for instance SOH-CAH-TOA in English:
Sine = Opposite/Hypotenuse
Cosine = Adjacent/Hypotenuse
Tangent = Opposite/Adjacent
He had used the term as early as 1871, while in 1869 the Scottish mathematician Thomas Muir (1844 – 1934) vacillated between the terms rad, radial, and radian. In 1874, after a consultation with James Thomson, Muir adopted radian. The name radian was not universally adopted for some time after this: Longmans' School Trigonometry still called the radian circular measure when published in 1890.
Figure 3.10
the famous Pythagorean theorem. Because the angles $\{0^\circ, 90^\circ, 180^\circ, 270^\circ\}$ coincide with the vertices
$36^\circ$, we then know the sine of $54^\circ$. Using the trigonometric identity $\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \sin\beta\cos\alpha$, to be discussed in Section 3.7, we get the sine of $72^\circ$ with $\alpha = \beta = 36^\circ$, the sine of $18^\circ$ with $\alpha = 72^\circ, \beta = 54^\circ$, and the sine of $75^\circ$ with $\alpha = 30^\circ, \beta = 45^\circ$. With $\alpha = 75^\circ, \beta = 72^\circ$ we get $\sin 3^\circ$. And from that we can get the sines of all multiples of $3^\circ$, i.e., $6^\circ$, $9^\circ$, $12^\circ$ etc. (for example using $\sin 2x = 2\sin x\cos x$).
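The chaining just described is easy to verify numerically; a minimal sketch (not from the book) that builds $\sin 75^\circ$ out of the exact table values for $30^\circ$ and $45^\circ$:

```python
import math

# sin(30 + 45) = sin30*cos45 + cos30*sin45, using exact table values.
s30, c30 = 1 / 2, math.sqrt(3) / 2
s45 = c45 = math.sqrt(2) / 2
s75 = s30 * c45 + s45 * c30
print(s75, math.sin(math.radians(75)))   # the two values agree
```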
Table 3.1: Sines and cosines of some angles from 0 degrees to 360 degrees.
degrees   radians   sine     cosine
0         0         0        1
30        π/6       1/2      √3/2
45        π/4       √2/2     √2/2
60        π/3       √3/2     1/2
90        π/2       1        0
180       π         0        −1
270       3π/2      −1       0
360       2π        0        1
Figure 3.11: Calculation of sine and cosine for $\alpha = \pi/4$, $\alpha = \pi/6$ and $\alpha = \pi/3$.
If we know $\sin 1^\circ$, then we will know $\sin 2^\circ$, $\sin 5^\circ$, $\sin 6^\circ$ etc., and we're done. But Ptolemy could not find $\sin 1^\circ$ directly; he found an approximate method for it (see Section 3.10). The Persian astronomer al-Kashi (c. 1380 – 1429), in his book The Treatise on the Chord and Sine, computed $\sin 1^\circ$ to any accuracy. In the process, he discovered the triple-angle identity often attributed to François Viète in the sixteenth century. Using the triple-angle identity $\sin(3\alpha) = 3\sin\alpha - 4\sin^3\alpha$ (to be discussed in the next section), he related $\sin 1^\circ$ to $\sin 3^\circ$ (which he knew) via the following cubic equation:
$\sin 3^\circ = 3x - 4x^3$, where $x = \sin 1^\circ$. But the cubic would not be solved for another 125 years, by Cardano. Clearly, al-Kashi could not wait that long. What did he do? He rewrote the equation as

$\sin 3^\circ = 3x - 4x^3 \;\Longrightarrow\; x = \dfrac{\sin 3^\circ + 4x^3}{3}$
What is this? This is the fixed-point iteration method discussed in Section 2.10. With only 4 iterations we get $\sin 1^\circ$ to an accuracy of 12 decimal places: $\sin 1^\circ = \sin 0.017453292520 = 0.017452406437$. al-Kashi gave us $\sin 1^\circ$. Is there anything else? Look at the red numbers; what do you see? It seems that we have $\sin x \approx x$, at least for $x = 1^\circ$. This is even more important than the value of $\sin 1^\circ$ itself. Why? Because if that is the case, we can replace $\sin x$, which is a complicated function, by the very simple $x$.
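al-Kashi's iteration is easy to reproduce; here is a sketch (with $\sin 3^\circ$ taken from the math library rather than from a medieval table):

```python
import math

# Fixed-point iteration x_{k+1} = (sin 3deg + 4 x_k^3) / 3 for x = sin 1deg.
sin3 = math.sin(math.radians(3))   # al-Kashi knew this value already
x = 0.0
for _ in range(4):
    x = (sin3 + 4 * x**3) / 3
print(x)                              # converges to sin(1 degree)
print(math.sin(math.radians(1)))      # reference value
```

Four iterations already agree with the library sine to about 12 decimal places, as claimed above.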
The proof of the addition-angle formulae for sine and cosine is shown in Fig. 3.12. The idea is to use the definitions of sine and cosine, and thus construct right triangles containing $\alpha$ and $\beta$ and their sum. The choice $OC = 1$ simplifies the calculations. The formula for $\sin(\alpha - \beta)$ can be obtained from the addition-angle formula by replacing $\beta$ with $-\beta$ and noting that $\sin(-\beta) = -\sin\beta$. Or we can prove the formula for $\cos(\alpha - \beta)$ using the unit circle, as given in Fig. 3.13.
The identity for the addition angle for the tangent is obtained directly from its definition and the available formulae for sine and cosine:

$\tan(\alpha+\beta) = \dfrac{\sin(\alpha+\beta)}{\cos(\alpha+\beta)} = \dfrac{\sin\alpha\cos\beta + \sin\beta\cos\alpha}{\cos\alpha\cos\beta - \sin\alpha\sin\beta} = \dfrac{\tan\alpha + \tan\beta}{1 - \tan\alpha\tan\beta} \qquad (3.7.2)$

where in the last step we divide both the numerator and denominator by $\cos\alpha\cos\beta$ so that tangents appear.
Figure 3.13: Proof of $\cos(\alpha-\beta)$: expressing the distance $d$ two ways (one from the left figure and one from the right figure). Recall that $d^2 = (x_1-x_2)^2 + (y_1-y_2)^2$ is the distance squared between two points $(x_1, y_1)$ and $(x_2, y_2)$. From this, we can get $\cos(\alpha+\beta)$, and $\sin(\alpha-\beta)$ by writing $\sin(\alpha-\beta) = \cos(\pi/2 - (\alpha-\beta)) = \cos[(\pi/2-\alpha)+\beta]$, then using the addition-angle formula for cosine.
From the addition-angle formula for sine, it follows that $\sin(2\alpha) = \sin(\alpha+\alpha) = 2\sin\alpha\cos\alpha$. Similarly, one can get the double angle for cosine. If you do not like this geometry-based derivation, don't forget we have another proof using complex numbers (Section 2.23). Thus, we have the following double-angle identities:

$\sin(2\alpha) = 2\sin\alpha\cos\alpha$
$\cos(2\alpha) = \cos^2\alpha - \sin^2\alpha = 2\cos^2\alpha - 1 = 1 - 2\sin^2\alpha \qquad (3.7.3)$
$\tan(2\alpha) = \dfrac{2\tan\alpha}{1 - \tan^2\alpha}$
The triple-angle formula for sine can be obtained from the addition-angle formula as follows:

$\sin(3\alpha) = \sin(2\alpha + \alpha)$
$\quad = \sin(2\alpha)\cos\alpha + \sin\alpha\cos(2\alpha)$
$\quad = 2\sin\alpha\cos^2\alpha + \sin\alpha(\cos^2\alpha - \sin^2\alpha)$
$\quad = 2\sin\alpha(1 - \sin^2\alpha) + \sin\alpha(1 - \sin^2\alpha - \sin^2\alpha) = 3\sin\alpha - 4\sin^3\alpha$
And the derivation of the triple angle for tangent is straightforward from the definition of tangent:

$\sin(3\alpha) = 3\sin\alpha - 4\sin^3\alpha$
$\cos(3\alpha) = 4\cos^3\alpha - 3\cos\alpha \qquad (3.7.4)$
$\tan(3\alpha) = \dfrac{3\tan\alpha - \tan^3\alpha}{1 - 3\tan^2\alpha}$
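A numerical spot check of the triple-angle identities, with an arbitrarily chosen angle (a sketch, not part of the derivation):

```python
import math

a = 0.4   # an arbitrary test angle in radians
assert abs(math.sin(3*a) - (3*math.sin(a) - 4*math.sin(a)**3)) < 1e-12
assert abs(math.cos(3*a) - (4*math.cos(a)**3 - 3*math.cos(a))) < 1e-12
t = math.tan(a)
assert abs(math.tan(3*a) - (3*t - t**3) / (1 - 3*t**2)) < 1e-12
```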
From the double angle for cosine, $\cos(2\alpha) = \cos^2\alpha - \sin^2\alpha = 2\cos^2\alpha - 1$, we can derive the identities for the half angle. A geometric proof is shown in Fig. 3.14; the proof is simple but requires some knowledge of Euclidean geometry.

$\cos\alpha = \sqrt{\dfrac{1 + \cos(2\alpha)}{2}}, \qquad \sin\alpha = \sqrt{\dfrac{1 - \cos(2\alpha)}{2}} \qquad (3.7.5)$
Phu Nguyen, Monash University © Draft version
(Product identities)

$\sin\alpha\cos\beta = \dfrac{\sin(\alpha+\beta) + \sin(\alpha-\beta)}{2}$
$\cos\alpha\cos\beta = \dfrac{\cos(\alpha+\beta) + \cos(\alpha-\beta)}{2} \qquad (3.7.6)$
$\sin\alpha\sin\beta = \dfrac{\cos(\alpha-\beta) - \cos(\alpha+\beta)}{2}$
The product identities are obtained from the addition/subtraction identities; for example:

$\sin(\alpha+\beta) = \sin\alpha\cos\beta + \sin\beta\cos\alpha$
$\sin(\alpha-\beta) = \sin\alpha\cos\beta - \sin\beta\cos\alpha$
$\Longrightarrow\; \sin(\alpha+\beta) + \sin(\alpha-\beta) = 2\sin\alpha\cos\beta$
Another form of the product identities are the sum-product identities, given by

(Sum-product identities)

$\sin\alpha + \sin\beta = 2\sin\dfrac{\alpha+\beta}{2}\cos\dfrac{\alpha-\beta}{2}$
$\cos\alpha + \cos\beta = 2\cos\dfrac{\alpha+\beta}{2}\cos\dfrac{\alpha-\beta}{2} \qquad (3.7.7)$
$\cos\alpha - \cos\beta = -2\sin\dfrac{\alpha+\beta}{2}\sin\dfrac{\alpha-\beta}{2}$
And finally there are two identities relating sine/cosine to the tangent of the half angle; with $t = \tan(\alpha/2)$:

$\sin\alpha = \dfrac{2t}{1+t^2}, \qquad \cos\alpha = \dfrac{1-t^2}{1+t^2}$
Historically, the product identities, Eq. (3.7.6), were used to perform multiplication before logarithms were invented. Here's how you could use the second one. If you want to multiply $x \cdot y$, use a table to look up the angle $\alpha$ whose cosine is $x$ and the angle $\beta$ whose cosine is $y$. Look up the cosines of the sum $\alpha+\beta$ and the difference $\alpha-\beta$. Average those two cosines: you get the product $xy$! Three table look-ups, and computing a sum, a difference, and an average rather than one multiplication. Tycho Brahe (1546–1601), among others, used this algorithm, known as prosthaphaeresis.
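The algorithm can be sketched in a few lines (the `acos` and `cos` calls below stand in for the historical table look-ups):

```python
import math

# Prosthaphaeresis: multiply x*y using only "table look-ups" of cosine,
# via cos(a)cos(b) = [cos(a+b) + cos(a-b)] / 2.
x, y = 0.25, 0.75
a, b = math.acos(x), math.acos(y)                # look up the angles
product = (math.cos(a + b) + math.cos(a - b)) / 2
print(product)   # 0.1875, i.e. 0.25 * 0.75
```

Note the method as stated needs $|x|, |y| \le 1$; in practice one rescales by powers of ten first.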
If we know vector algebra (Section 10.1), we can derive this identity easily. Consider two unit vectors $\mathbf{a}$ and $\mathbf{b}$; the first makes an angle $\alpha$ with the horizontal axis and the second an angle $\beta$. So, we can express these two vectors as $\mathbf{a} = (\cos\alpha, \sin\alpha)$ and $\mathbf{b} = (\cos\beta, \sin\beta)$. Then, the dot product of these two vectors can be computed in two ways: componentwise it is $\cos\alpha\cos\beta + \sin\alpha\sin\beta$, and geometrically it is $\cos(\alpha-\beta)$.
With $\theta = 36^\circ$ we have $5\theta = 180^\circ$, so that

$2\theta = \pi - 3\theta$
$\sin(2\theta) = \sin(\pi - 3\theta) = \sin(3\theta)$
$2\sin\theta\cos\theta = 3\sin\theta - 4\sin^3\theta$
$4\cos^2\theta - 2\cos\theta - 1 = 0 \;\Longrightarrow\; \cos\theta = \dfrac{1+\sqrt{5}}{4}$
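A quick numerical sanity check of this golden-ratio value (a sketch, not part of the original derivation):

```python
import math

# cos(36 degrees) should equal (1 + sqrt(5)) / 4.
lhs = math.cos(math.radians(36))
rhs = (1 + math.sqrt(5)) / 4
print(lhs, rhs)   # both are about 0.809017
```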
Pascal's triangle again. If we compute $\tan n\alpha$ in terms of $\tan\alpha$ for $n \in \mathbb{N}$, we get the following (only up to $n = 4$), writing $t = \tan\alpha$:

$\tan\alpha = t$
$\tan 2\alpha = \dfrac{2t}{1 - t^2}$
$\tan 3\alpha = \dfrac{3t - t^3}{1 - 3t^2} \qquad (3.7.9)$
$\tan 4\alpha = \dfrac{4t - 4t^3}{1 - 6t^2 + t^4}$
And see what happens: binomial coefficients multiplying the powers $\tan^m\alpha$ show up. The binomial coefficients, corresponding to the numbers in a row of Pascal's triangle, occur in the expression in a zigzag pattern (i.e. coefficients at positions $1, 3, 5, \ldots$ are in the denominator and coefficients at positions $2, 4, 6, \ldots$ are in the numerator, or vice versa), following the binomials in the row of Pascal's triangle in the same order.
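The $n = 4$ entry of (3.7.9), for instance, can be spot-checked numerically (a sketch with an arbitrary angle):

```python
import math

a = 0.2
t = math.tan(a)
lhs = math.tan(4 * a)
rhs = (4*t - 4*t**3) / (1 - 6*t**2 + t**4)   # Pascal row 1, 4, 6, 4, 1
print(lhs, rhs)
```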
Bernoulli's imaginary trick. The way we obtained $\tan n\theta$ in terms of $\tan\theta$ works nicely for small $n$. Is it possible to have a method that works for any $n$? Yes, Bernoulli presented such a method, but it adopted the imaginary number $i$ (with $i^2 = -1$) and the new infinitesimal calculus that Leibniz had just invented. Here is what he did:

$x = \tan\theta, \quad y = \tan n\theta \;\Longrightarrow\; \tan^{-1} y = n\tan^{-1} x$

We refer to Section 3.9 for a discussion of inverse trigonometric functions (e.g. $\tan^{-1} x$. Briefly, given an angle $\theta$, pressing the tangent button gives us $\tan\theta$, and pressing the $\tan^{-1}$ button gives us back the angle). Now, he differentiated $\tan^{-1} y = n\tan^{-1} x$ to get
$\dfrac{dy}{1+y^2} = n\,\dfrac{dx}{1+x^2}$
Then he integrated the equation indefinitely to get:

$\displaystyle\int \dfrac{dy}{1+y^2} = n \int \dfrac{dx}{1+x^2} \qquad (3.7.10)$
Now comes the trick of using $i$:

$\dfrac{1}{1+x^2} = \dfrac{1}{x^2 - i^2} = \dfrac{1}{(x-i)(x+i)} = \dfrac{1}{2i}\left(\dfrac{1}{x-i} - \dfrac{1}{x+i}\right)$
So what he did is called factoring into imaginary components and, in the final step, a partial fraction expansion. With that, it's easy to compute the integral $\int dx/(1+x^2)$:

$\displaystyle\int \dfrac{dx}{1+x^2} = \dfrac{1}{2i}\left(\int\dfrac{dx}{x-i} - \int\dfrac{dx}{x+i}\right) = \dfrac{1}{2i}\left(\ln|x-i| - \ln|x+i|\right) = \dfrac{1}{2i}\ln\left|\dfrac{x-i}{x+i}\right|$
With this result, he could proceed with Eq. (3.7.10):

$\dfrac{1}{2i}\ln\left|\dfrac{y-i}{y+i}\right| = \dfrac{n}{2i}\ln\left|\dfrac{x-i}{x+i}\right| + C \;\Longleftrightarrow\; \ln\left|\dfrac{y-i}{y+i}\right| = n\ln\left|\dfrac{x-i}{x+i}\right| + C' \qquad (3.7.11)$

$\ln\dfrac{y-i}{y+i} = n\ln\dfrac{x-i}{x+i} + \ln\left[(-1)^{n-1}\right] = \ln\left[(-1)^{n-1}\left(\dfrac{x-i}{x+i}\right)^n\right] \qquad (3.7.12)$

Thus, he obtained

$\dfrac{y-i}{y+i} = (-1)^{n-1}\left(\dfrac{x-i}{x+i}\right)^n \qquad (3.7.13)$
which gave him

$\dfrac{y-i}{y+i} = +\left(\dfrac{x-i}{x+i}\right)^n, \qquad n = 1, 3, 5, \ldots$
$\dfrac{y-i}{y+i} = -\left(\dfrac{x-i}{x+i}\right)^n, \qquad n = 2, 4, 6, \ldots$
And solving for $y$ (the above equations are just linear equations in $y$), Bernoulli obtained a nice formula for $y$, i.e. $\tan n\theta$, with $x = \tan\theta$:

$\tan n\theta = i\,\dfrac{(x+i)^n + (x-i)^n}{(x+i)^n - (x-i)^n}, \qquad n = 1, 3, 5, \ldots$
$\tan n\theta = i\,\dfrac{(x+i)^n - (x-i)^n}{(x+i)^n + (x-i)^n}, \qquad n = 2, 4, 6, \ldots \qquad (3.7.14)$
Now we check this result by applying it to $n = 2$, and the above equation (the second one of course, as $n = 2$) indeed leads to the correct formula $\tan 2\theta = 2\tan\theta/(1 - \tan^2\theta)$.
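Bernoulli's closed form is also easy to test with a few lines of complex arithmetic; a sketch (angles chosen arbitrarily):

```python
import math

def tan_n(theta, n):
    # Eq. (3.7.14): x = tan(theta); the sign pattern depends on the parity of n.
    x = math.tan(theta)
    p, m = (x + 1j)**n, (x - 1j)**n
    if n % 2 == 1:
        return (1j * (p + m) / (p - m)).real
    return (1j * (p - m) / (p + m)).real

theta = 0.3
print(tan_n(theta, 5), math.tan(5 * theta))   # the two values agree
```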
Trigonometric identities for angles of plane triangles. Let's consider a plane triangle with three angles denoted by $x, y$ and $z$ (in many books we will see the notations $A$, $B$ and $C$). We thus have the constraint $x + y + z = \pi$. We then have many identities. For example,

$\cot\dfrac{x}{2} + \cot\dfrac{y}{2} + \cot\dfrac{z}{2} = \cot\dfrac{x}{2}\cot\dfrac{y}{2}\cot\dfrac{z}{2}$
Proof. The above formula is equivalent to the following:

$\tan\dfrac{x}{2}\tan\dfrac{y}{2} + \tan\dfrac{y}{2}\tan\dfrac{z}{2} + \tan\dfrac{z}{2}\tan\dfrac{x}{2} = 1$

From $x + y + z = \pi$ we can relate the tangent of $(x+y)/2$ to the tangent of $z/2$; using the addition-angle formula for tangent, we arrive at

$\tan\left(\dfrac{x}{2} + \dfrac{y}{2}\right) = \cot\dfrac{z}{2} = \dfrac{1}{\tan\frac{z}{2}} \;\Longrightarrow\; \dfrac{\tan\frac{x}{2} + \tan\frac{y}{2}}{1 - \tan\frac{x}{2}\tan\frac{y}{2}} = \dfrac{1}{\tan\frac{z}{2}}$

and cross-multiplying gives the stated identity.
The proof follows the same reasoning: use $x + y + z = \pi$ to replace $z$, and use the corresponding identities of Section 3.7.
Proof. This is a proof of $\cos x + \cos y + \cos z = 4\sin\frac{x}{2}\sin\frac{y}{2}\sin\frac{z}{2} + 1$. From $x + y + z = \pi$, we can relate the cosine of $z$ to the cosine of $x + y$, and use the sum-to-product formula on the term $\cos x + \cos y$ to make half angles appear. Also using the double-angle formula $\cos 2u = 2\cos^2 u - 1$:

$\cos x + \cos y + \cos z = 2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2} - \cos(x+y)$
$\quad = 2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2} - 2\cos^2\dfrac{x+y}{2} + 1$
$\quad = 2\sin\dfrac{z}{2}\left(\cos\dfrac{x-y}{2} - \cos\dfrac{x+y}{2}\right) + 1 = 4\sin\dfrac{x}{2}\sin\dfrac{y}{2}\sin\dfrac{z}{2} + 1$
$\sin n\alpha$ for any $n$. In Section 2.23.4 we used de Moivre's formula to derive the formulae for $\sin 2\alpha$, $\sin 3\alpha$ in terms of powers of $\sin\alpha$. In principle, we can follow that way to derive the formula for $\sin n\alpha$ for any $n$, but the process is tedious (try with $\sin 5\alpha$ and you'll understand what I mean). There should be an easier way. The trick is in Eq. (2.23.18), which we rewrite here:

$\sin\alpha = \dfrac{e^{i\alpha} - e^{-i\alpha}}{2i} \qquad (3.7.16)$
$\sin n\alpha = \dfrac{e^{in\alpha} - e^{-in\alpha}}{2i} = \dfrac{(e^{i\alpha})^n - (e^{-i\alpha})^n}{2i}$
$\quad = \dfrac{(\cos\alpha + i\sin\alpha)^n - (\cos\alpha - i\sin\alpha)^n}{2i} \quad$ (using $e^{i\alpha} = \cos\alpha + i\sin\alpha$)
$\quad = \dfrac{\sum_{k=0}^{n}\binom{n}{k}\cos^{n-k}\alpha\,(i\sin\alpha)^k - \sum_{k=0}^{n}\binom{n}{k}\cos^{n-k}\alpha\,(-i\sin\alpha)^k}{2i} \qquad (3.7.17)$
$\quad = \sum_{k=0}^{n}\binom{n}{k}\cos^{n-k}\alpha\,\sin^k\alpha\,\dfrac{i^k - (-i)^k}{2i}$
$\quad = \dfrac{n}{1!}\cos^{n-1}\alpha\,\sin\alpha - \dfrac{n(n-1)(n-2)}{3!}\cos^{n-3}\alpha\,\sin^3\alpha + \cdots$

where in the third equality we have used the binomial theorem to expand $(\cdot)^n$, and the red term $\frac{i^k - (-i)^k}{2i}$ is equal to zero for $k = 0, 2, 4, \ldots$ and alternates between $+1$ and $-1$ for $k = 1, 3, 5, \ldots$ (which produces the alternating signs of the series).
$\cos n\alpha$ for any $n$. If we have something for sine, cosine is jealous. So we do the same analysis for cosine, and get:

$\cos n\alpha = \dfrac{e^{in\alpha} + e^{-in\alpha}}{2} = \sum_{k=0}^{n}\binom{n}{k}\cos^{n-k}\alpha\,\sin^k\alpha\,\dfrac{i^k + (-i)^k}{2}$
$\quad = \cos^n\alpha - \dfrac{n(n-1)}{2!}\cos^{n-2}\alpha\,\sin^2\alpha + \dfrac{n(n-1)(n-2)(n-3)}{4!}\cos^{n-4}\alpha\,\sin^4\alpha - \cdots \qquad (3.7.18)$

where in the second equality we have used the binomial theorem to expand $(\cdot)^n$, and the red term $\frac{i^k + (-i)^k}{2}$ is equal to zero for $k = 1, 3, 5, \ldots$, equal to one for $k = 0, 4, 8, \ldots$, and equal to minus one for $k = 2, 6, 10, \ldots$
With that, we can write the formula for $\cos(n\alpha)$ for the first few values of $n$:

$\cos(0\alpha) = 1$
$\cos(1\alpha) = \cos\alpha$
$\cos(2\alpha) = 2\cos^2\alpha - 1$
$\cos(3\alpha) = 4\cos^3\alpha - 3\cos\alpha \qquad (3.7.19)$
$\cos(4\alpha) = 8\cos^4\alpha - 8\cos^2\alpha + 1$
$\cos(5\alpha) = 16\cos^5\alpha - 20\cos^3\alpha + 5\cos\alpha$
What is the purpose of doing this? The next step is to try to find a pattern in these formulae. One question is: is it possible to compute $\cos(6\alpha)$ without resorting to Eq. (3.7.18)? Let's see how we can get $\cos(2\alpha) = 2\cos^2\alpha - 1$ from $\cos(1\alpha) = \cos\alpha$: we can multiply $\cos(1\alpha)$ by $2\cos\alpha$ and subtract 1, and 1 is $\cos(0\alpha)$. Thus, we can compute $\cos(k\alpha)$ from $\cos(k-1)\alpha$ and $\cos(k-2)\alpha$! The formula is

$\cos(k\alpha) = 2\cos\alpha\cos(k-1)\alpha - \cos(k-2)\alpha$
One application of this formula is to derive the Chebyshev polynomials of the first kind described in Section 11.3.2. What does this have to do with polynomials? Note that from the above equation, $\cos(n\alpha)$ is a polynomial in terms of $\cos\alpha$, e.g. $\cos 3\alpha = 4(\cos\alpha)^3 - 3\cos\alpha$. That's why. If you forget what a polynomial is, check Section 2.28.
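The recurrence can be sketched in a few lines, starting from $\cos(0\alpha)$ and $\cos(1\alpha)$ (the angle is chosen arbitrarily):

```python
import math

a = 0.7                       # an arbitrary angle
c0, c1 = 1.0, math.cos(a)     # cos(0a) and cos(1a)
for k in range(2, 7):
    # cos(ka) = 2 cos(a) cos((k-1)a) - cos((k-2)a)
    c0, c1 = c1, 2 * math.cos(a) * c1 - c0
    print(k, c1, math.cos(k * a))   # recurrence vs. direct evaluation
```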
from the following geometric relationships. When measuring in radians, an angle of $\theta$ radians will correspond to an arc whose length is $r\theta$, where $r$ is the radius of the circle. Thus on the unit circle, "the arc whose cosine is $x$" is the same as "the angle whose cosine is $x$", because the length of the arc is the same as the measure of the angle.
Note that this was how to discover the relation between $\cos(k\alpha)$, $\cos(k-1)\alpha$ and $\cos(k-2)\alpha$. When we know such a formula exists, we can prove it in an easier way; I leave that as a trigonometry exercise.
If there are polynomials of the 1st kind, then where are those of the 2nd kind? They're polynomials related to $\sin n\alpha$. Those related to the cosine are called the 1st kind probably because $\cos\alpha$ is the real part of $e^{i\alpha}$.
Proof. Derivation of Machin's formula Eq. (3.9.2) using Eq. (3.9.1). We start with $\arctan\frac15 + \arctan\frac15$ to get $2\arctan\frac15$:

$2\arctan\dfrac15 = \arctan\dfrac15 + \arctan\dfrac15 = \arctan\dfrac{\frac15 + \frac15}{1 - \frac15\cdot\frac15} = \arctan\dfrac{5}{12} \qquad (3.9.4)$
Now, with $2\arctan\frac15 + 2\arctan\frac15$ we get $4\arctan\frac15$:

$4\arctan\dfrac15 = 2\arctan\dfrac15 + 2\arctan\dfrac15 = \arctan\dfrac{5}{12} + \arctan\dfrac{5}{12} \quad$ (Eq. (3.9.4))
$\quad = \arctan\dfrac{\frac{5}{12} + \frac{5}{12}}{1 - \frac{5}{12}\cdot\frac{5}{12}} = \arctan\dfrac{120}{119}$
Finally, we consider $4\arctan\frac15 - \frac{\pi}{4}$, writing $\pi/4$ as $\arctan\frac11$:

$4\arctan\dfrac15 - \dfrac{\pi}{4} = 4\arctan\dfrac15 - \arctan\dfrac11 = \arctan\dfrac{120}{119} - \arctan\dfrac11$
$\quad = \arctan\dfrac{\frac{120}{119} - 1}{1 + \frac{120}{119}\cdot 1} = \arctan\dfrac{1}{239}$
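The end result, Machin's formula $\pi/4 = 4\arctan\frac15 - \arctan\frac{1}{239}$, can be checked in one line:

```python
import math

machin = 4 * math.atan(1/5) - math.atan(1/239)
print(machin, math.pi / 4)   # the two values agree to machine precision
```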
Compute $\pi$ from thin air. Machin's formula for $\pi$ is great, but there is an unbelievable way to get it, from thin air. To be precise, from $i^2 = -1$. Recall that we have (Section 2.23.6):

$\dfrac{\pi}{4} = -\dfrac{i}{2}\ln(i)$

A bit of algebra to convert $i$ to a fraction form:

$\dfrac{\pi}{4} = -\dfrac{i}{2}\ln(i) = -\dfrac{i}{2}\ln\dfrac{1+i}{1-i} = -\dfrac{i}{2}\left(\ln(1+i) - \ln(1-i)\right)$
Now, we use the power series of the logarithm, written for a complex number $z$:

$\ln(1+z) = z - \dfrac{z^2}{2} + \dfrac{z^3}{3} - \dfrac{z^4}{4} + \cdots$

Thus, we have

$\ln(1+i) = i - \dfrac{i^2}{2} + \dfrac{i^3}{3} - \dfrac{i^4}{4} + \cdots$
$\ln(1-i) = -i - \dfrac{(-i)^2}{2} + \dfrac{(-i)^3}{3} - \dfrac{(-i)^4}{4} + \cdots$
Finally, we get $\pi$:

$\dfrac{\pi}{4} = 1 - \dfrac13 + \dfrac15 - \dfrac17 + \cdots = \displaystyle\sum_{n=1}^{\infty}(-1)^{n+1}\dfrac{1}{2n-1}$
p
Great. We got from 1; a real number from an imaginary one! It seems impossible, so we
should check this result. That’s why we have provided the last expression, which can be coded
in a computer. The outcome of that exercise is that the more terms we use the more close to
=4 D 0:7853981633 : : : we get. However this series is too slow in the sense that we need
too many terms to get an accurate value of . That’s why Machin and other mathematicians
developed other formula.
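A sketch showing just how slow the series is: after 100000 terms the error in $\pi$ is still around $10^{-5}$.

```python
import math

# Partial sum of pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...), 100000 terms.
s = 4 * sum((-1)**(n + 1) / (2*n - 1) for n in range(1, 100001))
print(s, abs(s - math.pi))
```

By contrast, Machin-type formulas reach full double precision with a handful of terms.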
But still this is not Eq. (3.9.3). Don't worry: the German mathematician Karl Heinrich Schellbach (1809–1890) did it in 1832. He used:

$\pi = \dfrac{2}{i}\ln\dfrac{(5+i)^4(-239+i)}{(5-i)^4(-239-i)}$

It is certain that Schellbach was aware of Machin's formula, and that was how he could think of the crazy expression inside the logarithm.
Derivation of Eq. (3.9.1) using complex numbers. If we consider two complex numbers $b_1 + a_1 i$, with angle $\theta_1 = \arctan(a_1/b_1)$, and $b_2 + a_2 i$, with angle $\theta_2 = \arctan(a_2/b_2)$, then their product is $b_1 b_2 - a_1 a_2 + (a_1 b_2 + a_2 b_1)i$, with angle $\theta = \arctan\dfrac{a_1 b_2 + a_2 b_1}{b_1 b_2 - a_1 a_2}$. Then Eq. (3.9.1) is nothing but $\theta = \theta_1 + \theta_2$, a property of complex number multiplication. And this is expected, as we started from the trigonometric identity for angle addition/difference.
Figure 3.15
As we now pay attention to inequalities of trigonometric functions, we turn to the unit circle with sine/cosine/tangent, Fig. 3.15, and we discover these inequalities:

$\sin x < x < \tan x, \qquad 0 < x < \pi/2 \qquad (3.10.1)$

where the first inequality was obtained by comparing the length of the bold line with the length of the arc in the middle figure. The second inequality was obtained by comparing areas (one of the triangle $OAB$ and one of the shaded region) in the right figure.
There is nothing special about $\sin 3^\circ < 3\sin 1^\circ$; if we have this, we should have:

$\dfrac{\sin\alpha}{\sin\beta} < \dfrac{\alpha}{\beta}, \qquad \text{for all } \alpha > \beta, \; \alpha, \beta \in (0, \pi/2] \qquad (3.10.2)$
And of course we need a proof, as for now it is just our guess. Before presenting a proof, let's see how Eq. (3.10.2) was used by Ptolemy to compute $\sin 1^\circ$:

$\alpha = (3/2)^\circ, \; \beta = 1^\circ: \quad \sin 1^\circ > \dfrac{2}{3}\sin(3/2)^\circ$
$\alpha = 1^\circ, \; \beta = (3/4)^\circ: \quad \sin 1^\circ < \dfrac{4}{3}\sin(3/4)^\circ$

From $\sin 3^\circ$ we can compute $\sin(3/2)^\circ$ and $\sin(3/4)^\circ$. Thus, we get $0.017451298871915433 < \sin 1^\circ < 0.01745279409512592$, and so we obtain $\sin 1^\circ = 0.01745$. The accuracy is only 5 decimal places. Can you improve this technique?
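Ptolemy's squeeze is easy to reproduce (a sketch using library sines in place of values derived by hand from $\sin 3^\circ$):

```python
import math

lower = (2/3) * math.sin(math.radians(3/2))
upper = (4/3) * math.sin(math.radians(3/4))
print(lower, upper)   # sin(1 degree) lies between these two bounds
print(math.sin(math.radians(1)))
```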
Proof. We're going to prove Eq. (3.10.2) using algebra and Eq. (3.10.1). There exists a geometric proof by Aristarchus of Samos, an ancient Greek astronomer and mathematician who presented the first known heliocentric model, placing the Sun at the center of the known universe with the Earth revolving around it. This inequality is thus known as Aristarchus's inequality; I refer to Wikipedia for the geometric proof.
First, we use algebra to transform the inequality into a 'better' form:

$\dfrac{\sin\alpha}{\sin\beta} < \dfrac{\alpha}{\beta}$
$\Longleftrightarrow\; \beta\sin\alpha < \alpha\sin\beta \quad$ (all quantities are positive)
$\Longleftrightarrow\; \beta(\sin\alpha - \sin\beta) < (\alpha - \beta)\sin\beta \quad$ (subtract $\beta\sin\beta$ from both sides)
$\Longleftrightarrow\; \dfrac{\sin\alpha - \sin\beta}{\alpha - \beta} < \dfrac{\sin\beta}{\beta}$

The key step is of course the highlighted one, where we brought the term $\beta\sin\beta$ into the game. Why that particular term? Because it led us to the term $\frac{\sin\alpha - \sin\beta}{\alpha - \beta}$. As all steps are equivalent, we just need to prove the final inequality. We use the identity for $\sin\alpha - \sin\beta$ to rewrite the LHS of the last inequality and use $\sin x < x$:
$\dfrac{\sin\alpha - \sin\beta}{\alpha - \beta} = \dfrac{2\sin\frac{\alpha-\beta}{2}\cos\frac{\alpha+\beta}{2}}{\alpha - \beta} < \dfrac{2\cdot\frac{\alpha-\beta}{2}}{\alpha - \beta}\cos\dfrac{\alpha+\beta}{2} = \cos\dfrac{\alpha+\beta}{2}$
The next move is to get rid of $\alpha$ in $\cos\frac{\alpha+\beta}{2}$. For that, we need the fact that $\cos x < \cos y$ if $x > y$ (for $x, y \in [0, \pi/2]$). Because $\alpha > \beta$, we have $\frac{\alpha+\beta}{2} > \frac{\beta+\beta}{2} = \beta$, thus

$\dfrac{\sin\alpha - \sin\beta}{\alpha - \beta} < \cos\dfrac{\alpha+\beta}{2} < \cos\beta$

The last step is to convert from $\cos\beta$ to $\sin\beta$, noting that we have a tool not yet used, namely $\tan\beta > \beta$. Writing $\tan\beta = \sin\beta/\cos\beta$ in that inequality gives $\cos\beta < \sin\beta/\beta$, and we're done.
Actually, if we know calculus, the proof is super easy; it does not require us to be geniuses. The function $f(x) = \sin x / x$ is a decreasing function for $x \in (0, \pi]$ (check its first derivative, using $\tan x > x$); thus, considering two numbers $\alpha > \beta$ in this interval, we immediately have $f(\alpha) < f(\beta)$. Done. Alternatively, if we consider the function $y = \sin x$, we also obtain the inequality; see Fig. 3.16. Compared with Aristarchus's proof, which was based on circles and triangles, the calculus-based proof is straightforward. Why? Because in the old trigonometry sine was attached to angles of triangles, whereas in calculus it is free of angles/triangles; it is simply a function.
Figure 3.16: Calculus-based proof of Aristarchus's inequality $\sin\alpha/\sin\beta < \alpha/\beta$.
$\sin x \approx x$. When we were building our sine table, we discovered that $\sin x \approx x$, at least when $x = 1^\circ = \pi/180$. It turns out that for small $x$ this is always true. And it stems from Eq. (3.10.1), which we rewrite as

$\sin x < x < \tan x \;\Longleftrightarrow\; \cos x < \dfrac{\sin x}{x} < 1$

Now, let $x$ approach zero; then $\cos x$ approaches 1, and thus $\sin x/x$ is squeezed towards 1. This leads to:

$\displaystyle\lim_{x\to 0}\dfrac{\sin x}{x} = 1 \qquad (3.10.3)$
Some inequalities for angles of a triangle. Below are some well-known inequalities involving the angles of a triangle. We label the three angles by $A$, $B$, $C$ this time. For all inequalities, equality occurs for the equilateral triangle, i.e., when $A = B = C = \pi/3$.
Proof. We prove (a) using the Jensen inequality (check Section 4.5.2 if it's new to you), which states that for a convex function $f(x)$, $f\bigl((x+y+z)/3\bigr) \le \frac{1}{3}\bigl(f(x) + f(y) + f(z)\bigr)$, with the inequality reversed for a concave function. As $y = \sin x$ is concave for $0 \le x \le \pi$, we have:
\[
\frac{\sin A + \sin B + \sin C}{3} \le \sin\frac{A+B+C}{3}
\]
Thus,
\[
\sin A + \sin B + \sin C \le 3\sin\frac{\pi}{3} = \frac{3\sqrt{3}}{2}
\]
Proof. You might be thinking the proof of (b) is similar to (a). Unfortunately, the cosine function is harder: its graph consists of two parts, see Fig. 3.18. Only for acute triangles can we use the Jensen inequality as in (a). Hmm. We need another proof that works for all triangles. First, using $\cos\frac{A+B}{2} = \sin\frac{C}{2}$, we convert the sum $\cos A + \cos B + \cos C$ to:
\[
\cos A + \cos B + \cos C = 2\cos\frac{A+B}{2}\cos\frac{A-B}{2} + 1 - 2\sin^2\frac{C}{2}
= 2\sin\frac{C}{2}\cos\frac{A-B}{2} + 1 - 2\sin^2\frac{C}{2}
\]
Then the inequality $\cos A + \cos B + \cos C \le 3/2$ becomes $E \le 0$, where
\[
E = -2\sin^2\frac{C}{2} + 2\cos\frac{A-B}{2}\sin\frac{C}{2} - \frac{1}{2}
\]
This is a quadratic function in terms of $\sin\frac{C}{2}$ with a negative leading coefficient (i.e., $-2$). To show that $E \le 0$ for all $A, B, C$, we just need to check its discriminant $\Delta$. Indeed,
\[
\Delta = 4\cos^2\frac{A-B}{2} - 4 = -4\sin^2\frac{A-B}{2} \le 0
\]
so the parabola never rises above the horizontal axis, and thus $E$ is always smaller than or equal to 0. A consequence of this result is another inequality that reads
\[
\sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2} \le \frac{1}{8}
\]
which is obtained from (b) and the identity $\cos A + \cos B + \cos C = 4\sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2} + 1$.
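These inequalities are easy to stress-test numerically. Here is a small Python sketch (the random sampling scheme and tolerances are our own choices) that checks (a), (b) and the half-angle product bound on random triangles:

```python
import math, random

random.seed(1)
for _ in range(1000):
    # generate a random triangle: A + B + C = pi
    u, v = sorted(random.uniform(0, math.pi) for _ in range(2))
    A, B, C = u, v - u, math.pi - v
    if min(A, B, C) < 1e-6:
        continue   # skip (near-)degenerate triangles
    assert math.sin(A) + math.sin(B) + math.sin(C) <= 3 * math.sqrt(3) / 2 + 1e-12
    assert math.cos(A) + math.cos(B) + math.cos(C) <= 1.5 + 1e-12
    assert math.sin(A / 2) * math.sin(B / 2) * math.sin(C / 2) <= 0.125 + 1e-12
print("inequalities hold on 1000 random triangles")
```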
Proof. We prove (c) using the Jensen inequality for the convex function $\tan x$ on $(0, \pi/2)$. Note that we only need to consider acute triangles (because if one angle is not acute, its tangent and cotangent are negative whereas the others are positive, and thus the inequality holds trivially):
\[
\tan A + \tan B + \tan C \ge 3\tan\frac{\pi}{3} = 3\sqrt{3}
\]
But we have $\tan A + \tan B + \tan C = \tan A\tan B\tan C$, thus
\[
\tan A\tan B\tan C \ge 3\sqrt{3} \iff \frac{1}{\tan A\tan B\tan C} \le \frac{\sqrt{3}}{9}
\]
Proof. We prove (d) as follows. The key point is the identity $\cot A\cot B + \cot B\cot C + \cot C\cot A = 1$. We start with:
\[
(\cot A + \cot B + \cot C)^2 = \cot^2 A + \cot^2 B + \cot^2 C + 2(\cot A\cot B + \cot B\cot C + \cot C\cot A)
\]
And we can relate $\cot^2 A + \cot^2 B + \cot^2 C$ to $\cot A\cot B + \cot B\cot C + \cot C\cot A$:
\[
\begin{cases}
\cot^2 A + \cot^2 B \ge 2\cot A\cot B\\
\cot^2 B + \cot^2 C \ge 2\cot B\cot C\\
\cot^2 C + \cot^2 A \ge 2\cot C\cot A
\end{cases}
\Longrightarrow
\cot^2 A + \cot^2 B + \cot^2 C \ge \cot A\cot B + \cot B\cot C + \cot C\cot A
\]
Therefore,
\[
(\cot A + \cot B + \cot C)^2 \ge 3(\cot A\cot B + \cot B\cot C + \cot C\cot A) = 3
\]
And then, $\cot A + \cot B + \cot C \ge \sqrt{3}$.
Proof. We prove (e) using some algebra and the inequality (b). First, we transform $\sin^2 A + \sin^2 B + \sin^2 C$ into $\cos 2A, \ldots$:
\[
\sin^2 A + \sin^2 B + \sin^2 C = \frac{3}{2} - \frac{1}{2}(\cos 2A + \cos 2B + \cos 2C)
\]
Then, using Eq. (3.7.15), we get:
\[
\sin^2 A + \sin^2 B + \sin^2 C = 2 + 2\cos A\cos B\cos C
\]
If one angle (assuming that angle is $A$, without loss of generality) is not acute, then $\cos A \le 0$ and $\cos B, \cos C > 0$, thus $\cos A\cos B\cos C \le 0$. Therefore $\sin^2 A + \sin^2 B + \sin^2 C \le 2$. If all angles are acute, so that $\cos A, \cos B, \cos C > 0$, we can use the AM-GM inequality:
\[
\sqrt[3]{\cos A\cos B\cos C} \le \frac{1}{3}(\cos A + \cos B + \cos C)
\]
And using the inequality (b), we get:
\[
\cos A\cos B\cos C \le \frac{1}{27}(\cos A + \cos B + \cos C)^3 \le \frac{1}{27}\cdot\frac{27}{8} = \frac{1}{8}
\]
And the result follows immediately:
\[
\sin^2 A + \sin^2 B + \sin^2 C \le 2 + \frac{1}{4} = \frac{9}{4}
\]
Proof. We can prove (f) using the Cauchy-Schwarz inequality and the inequality (d).
Cauchy’s proof of Basel problem. In Section 2.18.4 I have introduced the Basel problem and
one calculus-based proof. Herein, I present Cauchy’s proof using only elementary mathematics.
The plan of his proof goes as follows. The starting point is the inequality $\cot^2\theta < 1/\theta^2 < 1 + \cot^2\theta$ for $0 < \theta < \pi/2$ (Eq. (3.10.5)). He introduced two new positive integer variables $n$ and $N$ such that
\[
\theta = \frac{n\pi}{2N+1}, \quad 1 \le n \le N \tag{3.10.6}
\]
This definition of $\theta$ comes from the requirement that $\theta < \pi/2$: indeed $n\pi/(2N+1) \le N\pi/(2N+1) < \pi/2$. Now Eq. (3.10.5) becomes
\[
\cot^2\frac{n\pi}{2N+1} < \frac{(2N+1)^2}{n^2\pi^2} < 1 + \cot^2\frac{n\pi}{2N+1} \tag{3.10.7}
\]
Now, as the Basel problem is about the summation of the reciprocals of the squares of the natural numbers, i.e., $\sum_n 1/n^2$, he made $1/n^2$ appear:
\[
\frac{\pi^2}{(2N+1)^2}\cot^2\frac{n\pi}{2N+1} < \frac{1}{n^2} < \frac{\pi^2}{(2N+1)^2} + \frac{\pi^2}{(2N+1)^2}\cot^2\frac{n\pi}{2N+1} \tag{3.10.8}
\]
The next step is, of course, to introduce $\sum_n 1/n^2$ by summing over $n = 1, \ldots, N$:
\[
\sum_{n=1}^{N}\frac{\pi^2}{(2N+1)^2}\cot^2\frac{n\pi}{2N+1} < \sum_{n=1}^{N}\frac{1}{n^2} < \frac{N\pi^2}{(2N+1)^2} + \sum_{n=1}^{N}\frac{\pi^2}{(2N+1)^2}\cot^2\frac{n\pi}{2N+1} \tag{3.10.9}
\]
Letting $N \to \infty$, the middle sum becomes $S$, and the extra term $N\pi^2/(2N+1)^2$ vanishes, so
\[
\lim_{N\to\infty}\frac{\pi^2}{(2N+1)^2}\sum_{n=1}^{N}\cot^2\frac{n\pi}{2N+1} \;\le\; S \;\le\; \lim_{N\to\infty}\frac{\pi^2}{(2N+1)^2}\sum_{n=1}^{N}\cot^2\frac{n\pi}{2N+1} \tag{3.10.10}
\]
What Cauchy needed now is to be able to evaluate the sum $\sum_{n=1}^{N}\cot^2\bigl(n\pi/(2N+1)\bigr)$.
And from that we can extract the imaginary part of $(\cot x + i)^n$:
\[
\operatorname{Im}(\cot x + i)^n = \binom{n}{1}\cot^{n-1}x - \binom{n}{3}\cot^{n-3}x + \binom{n}{5}\cot^{n-5}x - \cdots
\]
Using Eq. (3.10.12) and equating the imaginary parts of the two sides, we get:
\[
\frac{\sin nx}{\sin^n x} = \binom{n}{1}\cot^{n-1}x - \binom{n}{3}\cot^{n-3}x + \binom{n}{5}\cot^{n-5}x - \cdots \tag{3.10.13}
\]
This is by itself a trigonometric identity that holds for any $n \in \mathbb{N}$ and any $x$ with $\sin x \ne 0$. Now we take this identity, fix a positive integer $N$, and set $n = 2N+1$ and $x_k = k\pi/(2N+1)$ for $k = 1, 2, \ldots, N$. Why that? Because the LHS of the identity is zero with this choice: $\sin nx_k = \sin\bigl((2N+1)\,k\pi/(2N+1)\bigr) = \sin k\pi = 0$. Therefore Eq. (3.10.13) becomes
\[
0 = \binom{2N+1}{1}\cot^{2N}x_k - \binom{2N+1}{3}\cot^{2N-2}x_k + \cdots + (-1)^N\binom{2N+1}{2N+1} \tag{3.10.14}
\]
for $k = 1, 2, \ldots, N$. The numbers $x_k$ are distinct numbers in the interval $0 < x_k < \pi/2$, so the numbers $t_k = \cot^2 x_k$ are also distinct (and positive). What Eq. (3.10.14) means is that the numbers $t_k$ are the roots of the following $N$th degree polynomial:
\[
p(t) = \binom{2N+1}{1}t^N - \binom{2N+1}{3}t^{N-1} + \cdots + (-1)^N\binom{2N+1}{2N+1} \tag{3.10.15}
\]
Now, Vieta's formula (Section 2.28.4) links everything together: the sum of all the roots is the negative of the ratio of the second coefficient to the first one:
\[
\sum_{k=1}^{N} t_k = \frac{\binom{2N+1}{3}}{\binom{2N+1}{1}} = \frac{(2N)(2N-1)}{6} \tag{3.10.16}
\]
Replacing $t_k$ by its definition and noting that $x_k = k\pi/(2N+1)$, we get what is needed in Eq. (3.10.10):
\[
\sum_{k=1}^{N}\cot^2\frac{k\pi}{2N+1} = \frac{(2N)(2N-1)}{6} \tag{3.10.17}
\]
Substituting Eq. (3.10.17) into Eq. (3.10.10) and letting $N \to \infty$, both bounds converge to the same value:
\[
\frac{\pi^2}{6} \le S \le \frac{\pi^2}{6}
\]
Thus, as $S$ is sandwiched between $\pi^2/6$ and $\pi^2/6$, it must be $\pi^2/6$. And we come to the end of the amazing proof due to the great Cauchy.
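Cauchy's sandwich can be checked numerically. A minimal Python sketch (the value $N = 1000$ and the tolerances are our own choices) verifying both the cotangent-sum identity (3.10.17) and the squeeze toward $\pi^2/6$:

```python
import math

def cot2_sum(N):
    # left-hand side of Eq. (3.10.17): sum of cot^2(k*pi/(2N+1)), k = 1..N
    return sum(1 / math.tan(k * math.pi / (2 * N + 1)) ** 2 for k in range(1, N + 1))

N = 1000
# closed form from Vieta's formula, Eq. (3.10.16)
assert abs(cot2_sum(N) - (2 * N) * (2 * N - 1) / 6) < 1e-3

# the two bounds of Eq. (3.10.9) squeeze the partial sum of 1/n^2
lower = math.pi**2 / (2 * N + 1) ** 2 * cot2_sum(N)
upper = lower + N * math.pi**2 / (2 * N + 1) ** 2
partial = sum(1 / n**2 for n in range(1, N + 1))
assert lower < partial < upper
print(lower, math.pi**2 / 6, upper)   # both bounds hug pi^2/6
```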
\[
\frac{\sin^2 x}{\cos^2 x} + 2\cos^2 2x - 1 = 0
\]
\[
\frac{1 - \cos 2x}{1 + \cos 2x} + 2\cos^2 2x - 1 = 0
\]
\[
\frac{1 - u}{1 + u} + 2u^2 - 1 = 0 \quad (u = \cos 2x)
\]
\[
u(u^2 + u - 1) = 0
\]
When the sum of two non-negative terms is zero, it is only possible when both terms are zero, which requires that $\sin x = 0, \cos x = \pm 1$ or $\cos x = 0, \sin x = \pm 1$. And now you can solve the scary-looking equation $\sin^{2020} x + \cos^{2020} x = 1$.
We think we should pay less attention to solving trigonometric equations because, up to this point, we still do not know how to compute $\sin x$ for an arbitrary $x$. All we know is just Table 3.1. When we use a calculator and press sin 0.1 to get 0.09983341664, how does the calculator compute that? See Section 3.16 for a solution, sort of.
Solving that equation yields $u \in \{0, (-1 \pm \sqrt{5})/2\}$. As $u = \cos 2x$ is always at least $-1$, we do not accept $u = (-1 - \sqrt{5})/2$.
1. Compute the sum $\sin^2 10° + \sin^2 20° + \sin^2 30° + \sin^2 40° + \cdots + \sin^2 90°$.
5. Prove $\cos\frac{\pi}{7} - \cos\frac{2\pi}{7} + \cos\frac{3\pi}{7} = \frac{1}{2}$ (IMO 1963).
Answers are 5, 7.5° and 0.5, respectively. Hints: for the first problem, follow Gauss (see Section 2.5.1 in case you have missed it) by grouping two terms together so that something special appears. For the third problem, do not first find $\cos 36°$ and then $\cos 36°\cos 72°$; with a little massaging you can compute $\cos 36°\cos 72°$ directly. For the final problem, remember how we computed $\sin(\pi/5)$?
In Fig. 3.17a, the proof of $b^2 = c^2 + a^2 - 2ca\cos B$ is obtained by applying the Pythagorean theorem to the right triangle $ADC$. Now, we do some checking of the newly derived formula. First, when $B$ is a right angle its cosine is zero, and we recover the familiar $b^2 = a^2 + c^2$. Second, the term $2ca\cos B$ has the dimension of length squared, which is correct (if that term were $2a^2 b\cos B$, the formula would be wrong because we cannot add a square of a length to a cube of a length; we cannot add area to volume). There is no need to prove the other two formulas: as $a, b, c$ play symmetric roles, from $b^2 = c^2 + a^2 - 2ca\cos B$ we can get the other two by permuting the variables: $a \to b$, $b \to c$, $c \to a$.
The generalized Pythagorean theorem is also known as the law of cosines, and it relates the lengths of the sides of a triangle to the cosine of one of its angles. If there is a law of cosines, there should also be a law of sines. This law is written as (Fig. 3.17b):
\[
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} \tag{3.12.2}
\]
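The two laws are consistent with each other, which is easy to check numerically. A small Python sketch (the angle and side values are assumed example inputs): build a triangle from the law of sines and verify the law of cosines on it.

```python
import math

# a triangle specified by two angles and one side (assumed example values)
A, B = 1.0, 0.8
C = math.pi - A - B
a = 5.0
b = a * math.sin(B) / math.sin(A)   # law of sines
c = a * math.sin(C) / math.sin(A)

# the law of cosines must then hold as well
assert abs(b**2 - (c**2 + a**2 - 2 * c * a * math.cos(B))) < 1e-9
# and the three ratios of Eq. (3.12.2) agree
assert abs(a / math.sin(A) - c / math.sin(C)) < 1e-9
print("laws of sines and cosines are consistent")
```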
Figure 3.17: Proof of the generalized Pythagoras theorem (a) and of the sine law (b).
Even though we postpone the discussion of the concept of mathematical functions to Section 4.2, we present here the graphs of some trigonometric functions, mostly for completeness of this chapter. Loosely speaking, a function is a device that receives a number (mostly a real number), called the input, and returns another number, the output. If we denote by $x$ the input of the sine function, we write $y = \sin x$. By varying $x$ from negative infinity to positive infinity (of course, practically only a finite interval is considered, here $[-4\pi, 4\pi]$), we compute the corresponding $y$'s and plot all the points $(x, y)$ to get the graphs shown in Fig. 3.18.
Figure 3.18: Graphs of sine and cosine functions. Made with Julia using matplotlib.
OK. We have used technology to do the plotting for us (as it does a better job than human beings), but we should be able to 'read' information from the graph; no computer is able to do that. First, the two graphs are confined to the band $[-1, 1]$ (because both sine and cosine are between $-1$ and $1$). Second, where the sine is maximum or minimum, the cosine is zero, and vice versa. Third, by focusing on the interval $[0, 2\pi]$, one can see that the sine starts at zero, increases to 1 (at $\pi/2$), then decreases to zero (at $\pi$), continues decreasing until it gets to $-1$, then increases back to zero (at $2\pi$). After that, the graph repeats. Thus, sine is a periodic function, and its period $T$ is $2\pi$. The cosine function has the same period. The period $T$ satisfies
\[
f(x + T) = f(x), \quad \forall x \tag{3.13.1}
\]
The graph of the tangent function is given in Fig. 3.19. It can be seen that the tangent function is periodic with a period of $\pi$, i.e., $\tan(x + \pi) = \tan x$, which can be proved using the trigonometric identity $\tan(a+b) = (\tan a + \tan b)/(1 - \tan a\tan b)$. As $\tan x = \sin x/\cos x$, the function is not defined for angles $\bar{x}$ such that $\cos\bar{x} = 0$. Solving this equation yields $\bar{x} = \pi/2 + k\pi$, $k = 0, \pm 1, \pm 2, \ldots$ The vertical lines at $\bar{x}$ are the vertical asymptotes of the tangent curve.
Figure 3.19: Graph of the tangent function and its vertical asymptotes.
\[
y(x) = \frac{\sin x}{x} = \frac{1}{x}\sin x
\]
which is obtained by taking the sine function and the function $1/x$ and multiplying them. But hey, why this function? Because it shows up a lot in mathematics. For example, in calculus we need to find the derivative of the sine function. Here is what we do, considering the function $y = \sin t$:
\[
(\sin t)' = \lim_{x\to 0}\frac{\sin(t + x) - \sin t}{x}
= \sin t\lim_{x\to 0}\frac{\cos x - 1}{x} + \cos t\lim_{x\to 0}\frac{\sin x}{x}
\]
We refer to Section 4.4.8 if something is not clear. If this is not enough to get your attention, note that the function $\sin x/x$ is very popular in signal processing. So if you enroll in an electrical engineering course, you will definitely see it.
As is always the case in mathematics, whenever we have a new object (herein the period), we have theorems (facts) about it. Here is one: if $f_1(x)$ and $f_2(x)$ are two functions of the same period $T$, then the function $\alpha f_1(x) + \beta f_2(x)$ also has period $T$.
Actually, this function was named the sinc function by the British mathematician Philip Woodward (1919-2018) in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own".
In Fig. 3.20 we plot $\sin x$, $1/x$ and $\sin x/x$. What can we observe from the graph of $f(x) = \sin x/x$? First, it is symmetric with respect to the $y$-axis (this is because $f(-x) = f(x)$; as mathematicians call it, it is an even function). Second, similar to $\sin x$, the function $\sin x/x$ is also oscillatory, however not between $-1$ and 1: the amplitude of this oscillation decreases as $|x|$ gets larger. Can we find precisely how this amplitude depends on $x$?
Figure 3.20: Graphs of $\sin x$ and $1/x$ (a), and graph of $\sin x/x$ (b).
Yes, we can:
\[
-1 \le \sin x \le 1 \;\Longrightarrow\; -\frac{1}{x} \le \frac{\sin x}{x} \le \frac{1}{x} \quad (x > 0)
\]
This comes from the fact that if $a \le b$ and $c > 0$, then $ac \le bc$. So the above inequality for $\sin x/x$ works only for $x > 0$; but due to the symmetry of this function, the same bounds hold for $x < 0$ as well. Now we see that $\sin x/x$ can never exceed $1/x$ nor go below $-1/x$; these two functions are therefore called the envelopes of $\sin x/x$, see Fig. 3.21a.
Figure 3.21: Envelopes of $\sin x/x$ are $1/x$ and $-1/x$ (a); solving $\tan x = x$ graphically (b).
Is that everything about $\sin x/x$? No, no. There is at least one more thing: where are the stationary points of this function? For that, we need to use calculus, as algebra and geometry are not powerful enough for this task. From calculus we know that at a stationary point the derivative of the function vanishes:
\[
f'(x) = \frac{x\cos x - \sin x}{x^2} \;\Longrightarrow\; f'(x) = 0:\; \tan x = x
\]
How are we going to solve this equation $\tan x = x$, or $g(x) := \tan x - x = 0$? Well, we do not know. So we fall back on a simple approach: the solutions of $\tan x = x$ are the intersection points of the curve $y = \tan x$ and the line $y = x$. From Fig. 3.21b we see that there is one solution $x = 0$, and infinitely many more solutions close to $3\pi/2, 5\pi/2, \ldots$
But the graphical method cannot give accurate solutions. To get them we have to use approximate methods, and one popular method is the Newton-Raphson method described in Section 4.5.4, see Eq. (4.5.9). In this method one begins with a starting point $x_0$ and gets better approximations via:
\[
x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} = x_n - \frac{\tan x_n - x_n}{1/\cos^2 x_n - 1}, \quad n = 0, 1, 2, \ldots \tag{3.13.2}
\]
If you program this and use it, you will see that the method sometimes blows up, i.e., the iterates become very large numbers. This is due to the tangent function, which is very large for $x$ near the points where $\cos x = 0$. So we are better off using the equivalent but numerically better $g(x) := x\cos x - \sin x$, for which $g'(x) = -x\sin x$:
\[
x_{n+1} = x_n - \frac{x_n\cos x_n - \sin x_n}{-x_n\sin x_n} \tag{3.13.3}
\]
With this and starting points close to 0, $3\pi/2$, $5\pi/2$, and $7\pi/2$, we get the first four solutions given in Table 3.2. The third column gives the solutions in terms of multiples of $\pi/2$ to demonstrate the fact that the solutions get closer to the asymptotes of the graph of the tangent function.
Table 3.2: The first four solutions of tan x = x obtained with the Newton-Raphson method.

n | x          | x (in units of π/2)
1 | 0.00000000 | 0
2 | 4.49340946 | 2.86 (π/2)
3 | 7.72525184 | 4.92 (π/2)
4 | 10.9041216 | 6.94 (π/2)
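Eq. (3.13.3) is easy to program. Here is a minimal Python sketch (the iteration count and the starting points are our own choices) that reproduces the values in Table 3.2:

```python
import math

def newton_tanx(x0, iters=50):
    # iterate x_{n+1} = x_n - g(x_n)/g'(x_n) with g(x) = x cos x - sin x,
    # the numerically well-behaved form of tan x = x (Eq. (3.13.3))
    x = x0
    for _ in range(iters):
        g = x * math.cos(x) - math.sin(x)
        gp = -x * math.sin(x)
        x = x - g / gp
    return x

roots = [newton_tanx(x0) for x0 in (4.5, 7.7, 10.9)]
print(roots)   # close to 4.49340946, 7.72525184, 10.9041216
```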
Here are two lessons learned from studying the graph of the nice function $\sin x/x$:
Not all equations can be solved exactly. However, one can always use numerical methods to solve any equation approximately. Mathematicians do that, and particularly scientists and engineers do that all the time;
Period of sin 2x + cos 3x. The problem that we're now interested in is: what is the period of a sum of trigonometric functions? Specifically, of $\sin 2x + \cos 3x$. There is one easy way: plotting the function. Fig. 3.22 reveals that the period of this function is $2\pi$.
Figure 3.22: Plot of sin 2x (red), cos 3x (black) and sin 2x C cos 3x (blue).
Of course, there is another way, without plotting the function. We know that the period of $\sin x$ is $2\pi$, and thus the period of $\sin 2x$ is $2\pi/2 = \pi$. Similarly, the period of $\cos 3x$ is $2\pi/3$. Therefore, we have
\[
\sin 2x \text{ repeats at: } \pi,\; 2\pi,\; 3\pi,\; \ldots
\]
\[
\cos 3x \text{ repeats at: } \frac{2\pi}{3},\; 2\cdot\frac{2\pi}{3},\; 3\cdot\frac{2\pi}{3},\; \ldots
\]
Thus, $\sin 2x + \cos 3x$ repeats for the first time (considering positive $x$ only) when $x = 2\pi$, and that is the period of this function.
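The reasoning above can be confirmed numerically: $2\pi$ shifts the function onto itself, while the individual periods $\pi$ and $2\pi/3$ do not. A short Python sketch (the sample grid and thresholds are our own choices):

```python
import math

def h(x):
    return math.sin(2 * x) + math.cos(3 * x)

xs = [0.01 * k for k in range(700)]
# 2*pi is a period of the sum...
assert all(abs(h(x + 2 * math.pi) - h(x)) < 1e-12 for x in xs)
# ...but pi (the period of sin 2x) and 2*pi/3 (the period of cos 3x) are not
assert any(abs(h(x + math.pi) - h(x)) > 0.1 for x in xs)
assert any(abs(h(x + 2 * math.pi / 3) - h(x)) > 0.1 for x in xs)
print("period of sin 2x + cos 3x is 2*pi")
```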
in which the first term is an even function, i.e., $g(-x) = g(x)$, and the second is an odd function, i.e., $g(-x) = -g(x)$ (see Section 4.2.1).
Applying this decomposition to the exponential function $y = e^x$, we have:
\[
e^x = \frac{1}{2}\left[e^x + e^{-x}\right] + \frac{1}{2}\left[e^x - e^{-x}\right] \tag{3.14.2}
\]
and from that we define the following two functions:
\[
\sinh x = \frac{1}{2}\left(e^x - e^{-x}\right), \qquad \cosh x = \frac{1}{2}\left(e^x + e^{-x}\right) \tag{3.14.3}
\]
They are called the hyperbolic sine and cosine functions, which explains their symbols. We explain the origin of these names shortly. First, the graphs of these two functions, together with $y = 0.5e^x$ and $y = 0.5e^{-x}$, are shown in Fig. 3.23a. The first thing we observe is that for large $x$ the hyperbolic cosine is close to $y = 0.5e^x$; this is because $0.5e^{-x} \to 0$ when $x$ is large. Second, the hyperbolic cosine curve is always above that of $y = 0.5e^x$. Third, $\cosh x \ge 1$. This can be explained using the Taylor series of $e^x$ and $e^{-x}$ (refer to Section 4.14.8 if you're not familiar with Taylor series):
\[
\begin{cases}
e^{x} = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \dfrac{x^4}{4!} + \cdots\\[4pt]
e^{-x} = 1 - x + \dfrac{x^2}{2!} - \dfrac{x^3}{3!} + \dfrac{x^4}{4!} - \cdots
\end{cases}
\;\Longrightarrow\;
\frac{e^x + e^{-x}}{2} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots \ge 1
\]
From Eq. (3.14.3), it can be seen that $\cosh^2 x - \sinh^2 x = 1$. And we have more identities bearing similarity to the trigonometric identities that we're familiar with. For example, we have
Why called hyperbolic trigonometry? Remember the parametric equation of a unit circle centered at the origin? It is given by $x = \cos t$, $y = \sin t$. Similarly, from the identity $\cosh^2 t - \sinh^2 t = 1$, the hyperbola $x^2 - y^2 = 1$ is parameterized as $x = \cosh t$ and $y = \sinh t$. That explains the name 'hyperbolic functions' (Fig. 3.24). Not sure what a hyperbola is? Check out Section 4.1.
Figure 3.23: Plot of the hyperbolic sine and cosine functions along with their exponential components.
Figure 3.24: $\sin x$, $\cos x$ are related to a unit circle; they are circular trigonometric functions. $\sinh x$ and $\cosh x$ are related to the hyperbola $x^2 - y^2 = 1$; they are hyperbolic trigonometric functions.
Another derivation of hyperbolic functions. Start with Euler's identity $e^{i\theta} = \cos\theta + i\sin\theta$, written for $\theta = x$ and $\theta = -x$:
\[
e^{ix} = \cos x + i\sin x, \qquad e^{-ix} = \cos x - i\sin x
\]
We then have (adding the above two equations):
\[
\cos x = \frac{e^{ix} + e^{-ix}}{2} \tag{3.14.5}
\]
Phu Nguyen, Monash University © Draft version
Chapter 3. Trigonometry 238
Now we consider a complex variable $z = x + iy$, and use $z$ in the above equation:
\[
\cos(x + iy) = \frac{e^{i(x+iy)} + e^{-i(x+iy)}}{2}
= \frac{e^{ix-y} + e^{-ix+y}}{2}
= \frac{e^{ix}e^{-y} + e^{-ix}e^{y}}{2}
\]
\[
= \frac{(\cos x + i\sin x)e^{-y} + (\cos x - i\sin x)e^{y}}{2}
= \cos x\,\frac{e^{y} + e^{-y}}{2} - i\sin x\,\frac{e^{y} - e^{-y}}{2}
\]
And you see the hyperbolic sine/cosine show up! With our definitions of them in Eq. (3.14.3), we get $\cos(x + iy) = \cos x\cosh y - i\sin x\sinh y$. And a similar equation is awaiting for sine: $\sin(x + iy) = \sin x\cosh y + i\cos x\sinh y$. These are quite similar to the real trigonometric identities for $\sin(a+b)$ and $\cos(a+b)$! Now, putting $x = 0$ in the above, we get
\[
\cos(iy) = \cosh y, \qquad \sin(iy) = i\sinh y
\]
which means that the cosine of an imaginary angle is real, but the sine of an imaginary angle is imaginary.
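Python's complex math module lets us check the identity $\cos(x + iy) = \cos x\cosh y - i\sin x\sinh y$ directly; a minimal sketch (the values of $x$ and $y$ are arbitrary choices):

```python
import cmath, math

x, y = 0.7, 1.3
lhs = cmath.cos(complex(x, y))
rhs = complex(math.cos(x) * math.cosh(y), -math.sin(x) * math.sinh(y))
assert abs(lhs - rhs) < 1e-12

# the x = 0 special cases: cos(iy) is real, sin(iy) is purely imaginary
assert abs(cmath.cos(1j * y) - math.cosh(y)) < 1e-12
assert abs(cmath.sin(1j * y) - 1j * math.sinh(y)) < 1e-12
print("complex-angle identities verified")
```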
Can sine/cosine be larger than one? We all know that for real angles $x$, $|\sin x| \le 1$. But for complex angles $z$, might we have $\cos z > 1$? Let's find $z$ such that $\cos z = 2$. We start with $\cos(x + iy) = \cos x\cosh y - i\sin x\sinh y = 2$, which requires
\[
\cos x\cosh y = 2, \qquad \sin x\sinh y = 0
\]
From the second equation we get $\sin x = 0$; note that we're not interested in $\sinh y = 0$, i.e., $y = 0$, as we're looking for complex angles, not real ones. With $\sin x = 0$, we then have $\cos x = \pm 1$. But we remove the possibility $\cos x = -1$: from the first equation we know that $\cos x > 0$, as $\cosh y > 0$ for all $y$. So we have $\cos x = 1$ (or $x = 2n\pi$), and with that we have $\cosh y = 2$:
\[
\cosh y = 2 \iff \frac{e^{y} + e^{-y}}{2} = 2
\]
whose solutions are $y = \ln\left(2 \pm \sqrt{3}\right)$. Finally, the angle we're looking for is:
\[
z = 2n\pi + i\ln\left(2 \pm \sqrt{3}\right)
\]
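We can feed the derived angle back into the cosine to confirm it works; a quick Python check (we take $n = 0$):

```python
import cmath, math

# the angles derived in the text: z = 2*n*pi + i*ln(2 +/- sqrt(3)), here n = 0
for sign in (+1, -1):
    z = 1j * math.log(2 + sign * math.sqrt(3))
    assert abs(cmath.cos(z) - 2) < 1e-12
print("cos(z) = 2 for z = i*ln(2 +/- sqrt(3))")
```

Both signs work because $(2+\sqrt{3})(2-\sqrt{3}) = 1$, so the two values of $y$ are negatives of each other and $\cosh$ is even.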
These hyperbolic functions are creations of the human mind, but again they model natural phenomena satisfactorily. For example, in Section 9.2 we shall demonstrate that the hyperbolic cosine is exactly the shape of a hanging chain.
On this sphere there are three special points: the center O, the north pole N and the south pole S. We draw many circles centered at O and passing through S and N (see Fig. 3.26). Each half of such a circle is called a line of longitude, or meridian. Among the many meridians, we single out the prime meridian, the meridian at which longitude is defined to be 0°. The prime meridian divides the sphere into two equal parts: the eastern and western parts.
All points on a meridian have the same longitude, which leads to the introduction of another coordinate. To this end, parallel circles perpendicular to the meridians are drawn on the sphere. One special parallel is the equator, which divides the earth sphere into two equal parts: the northern and southern parts.
Now we can define precisely what longitude and latitude mean. Referring to Fig. 3.26, we
first define a special point A which is the intersection of the equator and the prime meridian.
Now, the longitude is the angle AOB in degrees measured from the prime meridian. Thus a
longitude is an angle ranging from 0°E to 180°E or 0°W to 180°W. Similarly, the latitude is the
angle BOC measured from the equator up (N) or down (S), ranging from 0°N to 90°N or 0°S
to 90°S.
Figure 3.27
Consider two cities located at P and Q having the same longitude, with P on the equator (Fig. 3.27). Now assume that the city located at Q has a latitude of $\varphi$ (in degrees). The question we're interested in is: how far is Q from P (i.e., how far from the equator)? The answer is the arc PQ, which is part of a great circle of radius R, R being the radius of the earth. Thus:
\[
PQ = \frac{\pi R\,\varphi}{180}
\]
Now consider two cities located at Q and M having the same latitude. What is the distance between them traveling along this latitude? This is the arc QM of the small circle centered at O′. If we can determine the radius of this small circle, then we're done. This radius is $O'Q = R\cos\varphi$. Then the distance QM is given by
\[
QM = O'Q\,\frac{\pi\,\Delta\lambda}{180}
\]
where $\Delta\lambda$ is the difference (assuming that these two points are either both on the eastern or both on the western
part) of the longitudes of Q and M . But is this distance the shortest path between Q and M ? No!
The shortest path is the great-circle distance. The great-circle distance or spherical distance is
the shortest distance between two points on the surface of a sphere, measured along the surface
of the sphere.
Fig. 3.28 illustrates how to find such a great-circle distance. The first step is to find $r = O'Q$ as done before. Then, in the triangle O′QM, using the cosine law we can compute the straight-line distance between Q and M, denoted by $d$:
\[
d^2 = r^2 + r^2 - 2r^2\cos\Delta\lambda
\]
Then, using the cosine law again, now for the triangle OQM, we determine the angle $\alpha$:
\[
d^2 = R^2 + R^2 - 2R^2\cos\alpha \;\Longrightarrow\; \alpha = \arccos\frac{2R^2 - d^2}{2R^2}
\]
Knowing the angle $\alpha$ (in degrees) subtending the arc QM in the great circle, it's easy to compute its length:
\[
QM = \frac{\pi R\,\alpha}{180}
\]
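The two-step construction above translates directly into code. A minimal Python sketch (the radius, latitude and longitude difference are assumed example values; angles are kept in radians so the factor $\pi/180$ disappears), confirming that the great-circle path is shorter than the path along the parallel:

```python
import math

R = 6371.0                   # assumed Earth radius in km
phi = math.radians(60.0)     # common latitude of Q and M (assumed)
dlam = math.radians(90.0)    # difference in longitude (assumed)

r = R * math.cos(phi)        # radius of the small circle, O'Q = R cos(phi)
along_parallel = r * dlam    # arc QM along the parallel

d2 = 2 * r * r * (1 - math.cos(dlam))              # chord QM^2, law of cosines in O'QM
alpha = math.acos((2 * R * R - d2) / (2 * R * R))  # central angle, law of cosines in OQM
great_circle = R * alpha     # arc QM along the great circle

assert great_circle < along_parallel   # the great-circle path is shorter
print(round(along_parallel, 1), round(great_circle, 1))
```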
Figure 3.28
How about the great-circle distance between any two points on the surface of the earth? We
do not know (yet) as it requires spherical trigonometry.
Figure 3.29
The exact area of the sector OBAC, subtending an angle $\theta$ (in radians) at the center, is $\theta/2$. This area is approximated as the sum of the areas of the triangles OBC, ABC and ABD (see Fig. 3.29). We first compute the areas of these triangles. The area of the triangle OBC is easy (recall that the circle has unit radius):
\[
OBC = \frac{1}{2}\cdot 2\sin\frac{\theta}{2}\cos\frac{\theta}{2} = \frac{1}{2}\sin\theta
\]
The area of the triangle ABC is also straightforward:
\[
ABC = \frac{1}{2}\cdot 2\sin\frac{\theta}{2}\left(1 - \cos\frac{\theta}{2}\right)
= 2\sin\frac{\theta}{2}\sin^2\frac{\theta}{4} \quad\text{(double angle formula for cosine)} \tag{3.16.1}
\]
We now use the approximation $\sin x \approx x$ for small $x$: the area of ABC is thus approximated as $2\cdot\frac{\theta}{2}\left(\frac{\theta}{4}\right)^2 = \theta^3/16$.
Next we compute the area of the triangle ABD. If we work with Fig. 3.29a, finding this area might be hard, but if we rotate OAB a bit counterclockwise (Fig. 3.29b), we see that ABD is similar to ABC, but with $\theta/2$ in place of $\theta$, thus its area is:
\[
ABD \approx \frac{1}{16}\left(\frac{\theta}{2}\right)^3 = \frac{\theta^3}{128}
\]
Let's sum the areas of all these triangles (ABD counted twice), and we get:
\[
A \approx \frac{1}{2}\sin\theta + \frac{\theta^3}{16} + \frac{\theta^3}{64}
\]
We can see a pattern here, and thus the final formula for the area of the sector is:
\[
A \approx \frac{1}{2}\sin\theta + \frac{\theta^3}{16} + \frac{\theta^3}{64} + \frac{\theta^3}{256} + \cdots
\]
The added terms account for the areas not considered in our approximation of the sector area. The trailing terms look familiar: they form a geometric series, so we can sum them and get a more compact formula for $A$:
\[
A \approx \frac{1}{2}\sin\theta + \frac{\theta^3}{16} + \frac{\theta^3}{64} + \frac{\theta^3}{256} + \cdots
= \frac{1}{2}\sin\theta + \frac{\theta^3}{16}\left(1 + \frac{1}{4} + \frac{1}{16} + \cdots\right)
= \frac{1}{2}\sin\theta + \frac{\theta^3}{12} \quad\text{(geometric series)}
\]
Now we have two expressions for the same area, so we get the following equation, which leads to an approximation for $\sin\theta$:
\[
\frac{\theta}{2} \approx \frac{1}{2}\sin\theta + \frac{\theta^3}{12} \;\Longrightarrow\; \sin\theta \approx \theta - \frac{\theta^3}{6}
\]
Want an even better approximation? Let's apply $\sin x \approx x - x^3/6$ to Eq. (3.16.1) to get $ABC \approx \theta^3/16 - \theta^5/256$, and correspondingly $ABD \approx \theta^3/128 - \theta^5/8192$ (the algebra is indeed a bit messy; we used a CAS to help with this tedious manipulation, see Section 3.19). And we repeat what we have just done to get:
\[
\sin\theta \approx \theta - \frac{\theta^3}{6} + \frac{\theta^5}{120}
\]
And of course we want to do better. What should be the next term after $\theta^5/120$? It is $\theta^7/x$ with $x = 5040$:
\[
\sin\theta \approx \theta - \frac{\theta^3}{6} + \frac{\theta^5}{120} - \frac{\theta^7}{5040}
\]
Are you asking if there is any relation between those numbers in the denominators and the exponents in the numerators? There is! If you have played with factorials (Section 2.24.2) enough, you will recognize that $6 = 3!$, $120 = 5!$ and of course 5040 must be $7!$ (pattern again!), thus
\[
\sin\theta \approx \frac{\theta}{1!} - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \cdots = \sum_{i=0}^{\infty}(-1)^i\frac{\theta^{2i+1}}{(2i+1)!} \tag{3.16.2}
\]
Can we develop a similar formula for cosine? Of course. But for that we need to wait until the 17th century to meet Euler and Taylor, who gave us a systematic way to derive infinite series for trigonometric functions. Refer to Sections 4.14.6 and 4.14.8 if you cannot wait.
Why was Eq. (3.16.2) a significant development in mathematics? Remember that we built a sine table in Section 3.6? It is useful, but only for integral angles, e.g., 30° or 45°. If the angle is not in the table, we have to use interpolation, which is of low accuracy. To have higher accuracy (and thus better solutions to navigation problems in the old days), ancient mathematicians had to find a formula that could give them the value of the sine for any angle. And Eq. (3.16.2) is one such formula; it involves only simple addition/subtraction/multiplication/division.
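Eq. (3.16.2) is exactly how one can compute $\sin\theta$ with elementary arithmetic. A minimal Python sketch (the number of terms is our own choice) comparing the partial sums with the library sine:

```python
import math

def sin_series(x, terms=10):
    # partial sum of Eq. (3.16.2): x - x^3/3! + x^5/5! - ...
    s = 0.0
    for i in range(terms):
        s += (-1) ** i * x ** (2 * i + 1) / math.factorial(2 * i + 1)
    return s

for x in (0.1, math.pi / 6, 1.0):
    assert abs(sin_series(x) - math.sin(x)) < 1e-12
print(sin_series(math.pi / 6))   # close to 0.5
```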
Proof. Proof of Eq. (3.17.2). Remember how the 10-year-old Gauss computed the sum of the first $n$ whole numbers? We follow him here. Denoting by $S$ the sum on the LHS of Eq. (3.17.2), we write
\[
S = \sin\alpha + \sin 2\alpha + \cdots + \sin(n-1)\alpha + \sin n\alpha
\]
\[
S = \sin n\alpha + \sin(n-1)\alpha + \cdots + \sin 2\alpha + \sin\alpha
\]
Then, we sum these two equations:
\[
2S = (\sin\alpha + \sin n\alpha) + (\sin 2\alpha + \sin(n-1)\alpha) + \cdots + (\sin(n-1)\alpha + \sin 2\alpha) + (\sin n\alpha + \sin\alpha)
\]
And now, of course, we use the sum-to-product identity $\sin a + \sin b = 2\sin\frac{a+b}{2}\cos\frac{a-b}{2}$ for each pair (because it helps with the factorization):
\[
2S = 2\sin\frac{(n+1)\alpha}{2}\cos\frac{(1-n)\alpha}{2} + 2\sin\frac{(n+1)\alpha}{2}\cos\frac{(3-n)\alpha}{2} + \cdots + 2\sin\frac{(n+1)\alpha}{2}\cos\frac{(n-3)\alpha}{2} + 2\sin\frac{(n+1)\alpha}{2}\cos\frac{(n-1)\alpha}{2}
\]
A common factor appears, so we factor the above as:
\[
2S = 2\sin\frac{(n+1)\alpha}{2}\left[\cos\frac{(1-n)\alpha}{2} + \cos\frac{(3-n)\alpha}{2} + \cdots + \cos\frac{(n-3)\alpha}{2} + \cos\frac{(n-1)\alpha}{2}\right] \tag{3.17.3}
\]
So far so good. The next move is the key, and we find it thanks to Eq. (3.17.2); so this is definitely not the way the author of this identity came up with it (because he did not know the identity before discovering it). In Eq. (3.17.2) we see the term $\sin\frac{\alpha}{2}$, so we multiply Eq. (3.17.3) by it:
\[
2S\sin\frac{\alpha}{2} = \sin\frac{(n+1)\alpha}{2}\left[2\sin\frac{\alpha}{2}\cos\frac{(1-n)\alpha}{2} + 2\sin\frac{\alpha}{2}\cos\frac{(3-n)\alpha}{2} + \cdots + 2\sin\frac{\alpha}{2}\cos\frac{(n-3)\alpha}{2} + 2\sin\frac{\alpha}{2}\cos\frac{(n-1)\alpha}{2}\right]
\]
Now we want to simplify the term in the bracket. To this end, we use the product-to-sum identity $2\sin\alpha\cos\beta = \sin(\alpha+\beta) + \sin(\alpha-\beta)$:
\[
2S\sin\frac{\alpha}{2} = \sin\frac{(n+1)\alpha}{2}\Biggl[\sin\frac{n\alpha}{2} + \sin\frac{(2-n)\alpha}{2} + \sin\frac{(n-2)\alpha}{2} + \sin\frac{(4-n)\alpha}{2} + \cdots + \sin\frac{(4-n)\alpha}{2} + \sin\frac{(n-2)\alpha}{2} + \sin\frac{(2-n)\alpha}{2} + \sin\frac{n\alpha}{2}\Biggr]
\]
And luckily for us, all terms in the bracket cancel out except the two copies of $\sin\frac{n\alpha}{2}$, since $\sin\frac{(2-n)\alpha}{2} = -\sin\frac{(n-2)\alpha}{2}$ and so on. It's a bit hard to see how the other terms cancel out; one way is to write this out for $n = 3$ and $n = 4$ to check that it is indeed the case. Now, the above equation becomes
\[
2S\sin\frac{\alpha}{2} = 2\sin\frac{(n+1)\alpha}{2}\sin\frac{n\alpha}{2}
\]
And from that we get our identity:
\[
S = \frac{\sin\dfrac{(n+1)\alpha}{2}\,\sin\dfrac{n\alpha}{2}}{\sin\dfrac{\alpha}{2}}
\]
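The closed form is easy to check numerically; a short Python sketch (the test values of $n$ and $\alpha$ are our own choices):

```python
import math

def sine_sum(n, a):
    # left-hand side: sin(a) + sin(2a) + ... + sin(na)
    return sum(math.sin(k * a) for k in range(1, n + 1))

def closed_form(n, a):
    # right-hand side: sin((n+1)a/2) sin(na/2) / sin(a/2)
    return math.sin((n + 1) * a / 2) * math.sin(n * a / 2) / math.sin(a / 2)

for n in (3, 7, 50):
    for a in (0.3, 1.1, 2.5):
        assert abs(sine_sum(n, a) - closed_form(n, a)) < 1e-10
print("sum-of-sines identity verified")
```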
If we have one identity for the sine, we should have one for the cosine and from that one for
the tangent:
The terms in our identities show up in both the sine and the cosine sums! That's the power of complex numbers. Now here is the plan: we will compute $A$ in another way, and from that get its real and imaginary parts. Then we compare with Eq. (3.17.6): equating the imaginary parts gives us the sine formula, and equating the real parts gives us the cosine formula.
It can be seen that $A$ is a geometric series, so it's not hard to compute:
\[
A = e^{i\alpha} + e^{i2\alpha} + \cdots + e^{in\alpha} = e^{i\alpha}\left(1 + e^{i\alpha} + e^{i2\alpha} + \cdots + e^{i(n-1)\alpha}\right) = \frac{e^{i\alpha}\left(1 - e^{in\alpha}\right)}{1 - e^{i\alpha}} \tag{3.17.7}
\]
Of course, now we bring back sine and cosine (because that's what we need), and $A$ becomes:
\[
A = \frac{e^{i\alpha}\left(1 - e^{in\alpha}\right)}{1 - e^{i\alpha}}
= (\cos\alpha + i\sin\alpha)\,\frac{1 - \cos n\alpha - i\sin n\alpha}{1 - \cos\alpha - i\sin\alpha}
\]
\[
= (\cos\alpha + i\sin\alpha)\,\frac{(1 - \cos n\alpha - i\sin n\alpha)(1 - \cos\alpha + i\sin\alpha)}{(1 - \cos\alpha - i\sin\alpha)(1 - \cos\alpha + i\sin\alpha)}
= (\cos\alpha + i\sin\alpha)\,\frac{(1 - \cos n\alpha - i\sin n\alpha)(1 - \cos\alpha + i\sin\alpha)}{2(1 - \cos\alpha)} \tag{3.17.8}
\]
What we have just done is the standard trick to remove $i$ from the denominator; now we can read off the real and imaginary parts of $A$. Let's focus on the imaginary part. Comparing Eq. (3.17.6) with Eq. (3.17.9), we get the sine identity.
Hey, but wait: Euclid would ask, where is the geometry? We can construct the sums $\sin\alpha + \sin 2\alpha + \cdots$ and $\cos\alpha + \cos 2\alpha + \cdots$ as in Fig. 3.30. To ease the presentation we consider only the case $n = 3$. It can be seen that $\sin\alpha + \sin 2\alpha + \cdots$ equals the $y$-coordinate of $P_3$. Now, if we can compute $d$ and $\beta$, then we're done.
Figure 3.30
\[
\beta = \frac{1}{2}(n\alpha - \alpha) \;\Longrightarrow\; \alpha + \beta = \frac{(n+1)\alpha}{2}
\]
Figure 3.31: Central angle theorem: proof can be done with the introduction of the blue line OC1 .
We emphasize that there are no real-life applications of Eq. (3.17.4). If you're asking why we bothered with these formulas, the answer is simple: we had fun playing with them. Is there anything more important than that in life, especially when we're young? Moreover, once again we see the connection between geometry, algebra and complex numbers. And we saw the telescoping sum again.
Example 3.1
This example is taken from the 2021 Oxford MAT admission test: compute the following sum
\[
S = \sin^2(1°) + \sin^2(2°) + \sin^2(3°) + \cdots + \sin^2(89°) + \sin^2(90°) \tag{3.17.10}
\]
There are 44 pairs of the form $\sin^2(x°) + \sin^2(90° - x°)$, each equal to one (why?), and the remaining terms $\sin^2(45°)$ and $\sin^2(90°)$ are easy, thus
\[
S = 44 + 1 + \left(\frac{\sqrt{2}}{2}\right)^2 = 45.5
\]
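A brute-force Python check of the pairing argument (each pair $\sin^2 x + \sin^2(90° - x) = \sin^2 x + \cos^2 x = 1$):

```python
import math

# sum sin^2(k degrees) for k = 1..90, Eq. (3.17.10)
S = sum(math.sin(math.radians(k)) ** 2 for k in range(1, 91))
assert abs(S - 45.5) < 1e-12

# one of the 44 pairs really does sum to 1
k = 17
assert abs(math.sin(math.radians(k)) ** 2
           + math.sin(math.radians(90 - k)) ** 2 - 1) < 1e-12
print(S)
```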
subject is practical, for example, because we live on a sphere. Spherical trigonometry is of great
importance for calculations in astronomy, geodesy, and navigation. For details, we refer to the
textbook of Glen Van Brummelen [7]. Glen Robert Van Brummelen (born 1965) is a Canadian
historian of mathematics specializing in historical applications of mathematics to astronomy. In
his words, he is the “best trigonometry historian, and the worst trigonometry historian” (as he is
the only one).
3.20 Review
This chapter has presented trigonometry as usually taught in high schools, but with less focus on rote memorization of the many trigonometric identities. Briefly, trigonometry was developed as a tool to solve astronomical problems. It was then modified and further developed to solve plane triangle problems, those arising in navigation and surveying. And eventually it became a branch of mathematics, i.e., it is studied for its own sake.
Now that we know a bit of algebra and a bit of trigonometry, it is time to meet calculus. About calculus, the Hungarian-American mathematician and physicist John von Neumann said:
The calculus was the first achievement of modern mathematics and it is difficult to
overestimate its importance. I think it defines more unequivocally than anything
else the inception of modern mathematics; and the system of mathematical analysis,
which is its logical development, still constitutes the greatest technical advance in
exact thinking.
Contents
4.1 Conic sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
4.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
4.3 Integral calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.4 Differential calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
4.5 Applications of derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
4.6 The fundamental theorem of calculus . . . . . . . . . . . . . . . . . . . . 320
4.7 Integration techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
4.8 Improper integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
4.9 Applications of integration . . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.10 Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
4.11 Some theorems on differentiable functions . . . . . . . . . . . . . . . . . 369
4.12 Polar coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
4.13 Bézier curves: fascinating parametric curves . . . . . . . . . . . . . . . . 377
4.14 Infinite series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
4.15 Applications of Taylor's series . . . . . . . . . . . . . . . . . . . . . . . . 399
4.16 Bernoulli numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
4.17 Euler-Maclaurin summation formula . . . . . . . . . . . . . . . . . . . . 404
4.18 Fourier series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
4.19 Special functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
4.20 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Chapter 4. Calculus 252
the next point (infinitesimally nearby). The slope of a line? We know it.
This chapter is devoted to the calculus of functions of a single variable. I use primarily the follow-
ing books for the material presented herein:
Infinite Powers by Steven Strogatz§ [55]. I recommend that anyone read this book before
taking any calculus class;
Our plan in this chapter is as follows. First, in Section 4.1, we briefly discuss analytic
geometry, with the introduction of the Cartesian coordinate system and the association of any curve
with an equation. Second, the concept of a function is introduced (Section 4.2). Then, integral
calculus, of which the most important concept is the integral, is treated in Section 4.3. That is
followed by a presentation of differential calculus, of which the most vital concept is the
derivative (Section 4.4). We then present some applications of the derivative in Section 4.5. The
connection between the integral and the derivative is treated in Section 4.6, followed by methods to
compute integrals in Section 4.7.
Section 4.9 gives some applications of integration. A proper definition of the limit of a
function is then stated in Section 4.10. Some theorems in calculus are presented in Section 4.11.
Polar coordinates are discussed in Section 4.12. Bézier curves–a topic not usually covered in high
school or even college programs–are presented in Section 4.13. Infinite series, and in particular
Taylor series, are the topics of Section 4.14. Applications of Taylor series are given in Section 4.15.
Fourier series are treated in Section 4.18, and Section 4.19 introduces some special functions.
§ Steven Henry Strogatz (born 1959) is an American mathematician and the Jacob Gould Schurman Professor of
Applied Mathematics at Cornell University. He is known for his work on nonlinear systems, including contributions
to the study of synchronization in dynamical systems, for his research in a variety of areas of applied mathematics,
including mathematical biology and complex network theory. Strogatz is probably most famous for his
writings for the general public; one can cite Sync, The Joy of x, and Infinite Powers.
William Gilbert Strang (born 1934) is an American mathematician, with contributions to finite element theory,
the calculus of variations, wavelet analysis and linear algebra. He has made many contributions to mathematics
education, including publishing seven mathematics textbooks and one monograph.
Herbert Ellis Robbins (1915 – 2001) was an American mathematician and statistician. He did research in
topology, measure theory, statistics, and a variety of other fields. The Robbins lemma, used in empirical Bayes
methods, is named after him. Robbins algebras are named after him because of a conjecture that he posed concerning
Boolean algebras.
Morris Kline (1908 – 1992) was a professor of Mathematics, a writer on the history, philosophy, and teaching
of mathematics, and also a popularizer of mathematical subjects.
Two well-known conics are the circle and the ellipse. They arise when the intersection of the
cone and plane is a closed curve (Fig. 4.2a). The circle is a special case of the ellipse in which
the plane is perpendicular to the axis of the cone. If the plane is parallel to a generator line of
the cone, the conic is called a parabola. Finally, if the intersection is an open curve and the plane
is not parallel to generator lines of the cone, the figure is a hyperbola.
Figure 4.2
Conic sections are observed in the paths taken by celestial bodies (e.g. planets). When two
massive objects interact according to Newton’s law of universal gravitation, their orbits are conic
sections if their common center of mass is considered to be at rest. If they are bound together,
they will both trace out ellipses; if they are moving apart, they will both follow parabolas or
hyperbolas (Fig. 4.2b).
Straight lines use 1, x, y. The next curves use x^2, xy, y^2: these give the conics. It is important to
see both the curves and their equations. This section presents the analytic geometry of René
Descartes and Pierre de Fermat, in which the geometry of a curve is connected to the analysis of
the associated equation. Numbers are assigned to points: we speak about the point (1, 2). Euclid
and Archimedes might not have understood, as Strang put it.
History note 4.1: René Descartes (31 March 1596 – 11 February 1650)
René Descartes (Latinized: Renatus Cartesius) was a French philoso-
pher, mathematician, and scientist who spent a large portion of his
working life in the Dutch Republic, initially serving the Dutch States
Army of Maurice of Nassau, Prince of Orange and the Stadtholder of
the United Provinces. One of the most notable intellectual figures of
the Dutch Golden Age, Descartes is also widely regarded as one of the
founders of modern philosophy. His mother died when he was very
young, so he and his brothers were sent to live with his grandmother.
His father believed that a good education was important, so Descartes was sent off to
boarding school at a young age.
In 1637, Descartes published his Discours de la méthode in which he explained his ratio-
nalist approach to the interpretation of nature. La méthode contained three appendices:
La dioptrique, Les météores, and La géométrie. The last of these, The Geometry, was
Descartes’ only published mathematical work. Approximately 100 pages in length, The
Geometry was not a large work, but it presented a new approach in mathematical thinking.
Descartes boasted in his introduction that “Any problem in geometry can easily be reduced
to such terms that a knowledge of the length of certain straight lines is sufficient for
construction.” But Descartes’ La géométrie was difficult to understand and follow. It was
written in French, not the language of scholarly communication at the time, and Descartes’
writing style was often obscure in its meaning. In 1649, Frans van Schooten (1615–1660),
a Dutch mathematician, published a Latin translation of Descartes’ Geometry, adding his
own clarifying explanations and commentaries.
4.1.2 Circles
Definition 4.1.1
A circle is a set of points whose distance to a special point–the center–is constant.
From this definition, we can develop the equation of a circle. Denote the center by (x_c, y_c)
and the radius by r; then we have

\sqrt{(x - x_c)^2 + (y - y_c)^2} = r \implies (x - x_c)^2 + (y - y_c)^2 = r^2   (4.1.1)

When x_c = y_c = 0, i.e., the center of the circle is at the origin, the equation of the circle is much
simplified:

x^2 + y^2 = r^2   (4.1.3)
4.1.3 Ellipses
Definition 4.1.2
The ellipse is the set of all points .x; y/ such that the sum of the distances from .x; y/ to the
foci is constant.
We are going to use the definition of an ellipse to derive its equation. Assume that the ellipse
is centered at the origin, and its foci are located at F_1(-c, 0) and F_2(c, 0). The two vertices on
the horizontal axis are A_1(a, 0) and A_2(-a, 0).
It is clear that the sum of the distances from A_1 (or A_2) to the two foci is 2a. So, pick any
point P(x, y), compute the sum of its distances to the foci, d_1 + d_2, set it to 2a, and after some
algebraic manipulations we have
Figure 4.3: An ellipse centered at the origin. The major axis of an ellipse is its longest diameter: a line
segment that runs through the center and both foci, with ends at the widest points of the perimeter. The
semi-major axis is one half of the major axis. The semi-minor axis is a line segment that is perpendicular
to the semi-major axis and has one end at the center.
d_1 + d_2 = 2a   (definition of ellipse)
\sqrt{(x + c)^2 + y^2} + \sqrt{(x - c)^2 + y^2} = 2a   (definition of distance)
\sqrt{(x + c)^2 + y^2} = 2a - \sqrt{(x - c)^2 + y^2}
(x + c)^2 + y^2 = 4a^2 + (x - c)^2 + y^2 - 4a\sqrt{(x - c)^2 + y^2}
a\sqrt{(x - c)^2 + y^2} = a^2 - xc
(a^2 - c^2)x^2 + a^2 y^2 = a^2(a^2 - c^2)
\frac{x^2}{a^2} + \frac{y^2}{a^2 - c^2} = 1
All steps from the third equality on are purely algebraic, serving to remove the square roots. Now,
the final step is to bring b into play by noting that the distances from B_1 to the foci also sum to 2a
(from the very definition of an ellipse). This gives us b^2 + c^2 = a^2. So, we have

\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, \quad b^2 + c^2 = a^2   (4.1.4)

from which we see that an ellipse reduces to a circle when a = b.
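The defining property d_1 + d_2 = 2a can be checked numerically; a small sketch (with assumed sample values a = 5, b = 3):

```python
import math

# Check the defining property d1 + d2 = 2a for points on the ellipse
# x^2/a^2 + y^2/b^2 = 1 with assumed values a = 5, b = 3 (so c = 4).
a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)                  # foci at (±c, 0), since b^2 + c^2 = a^2
for t in [0.0, 0.7, 1.9, 3.1, 5.0]:
    x, y = a * math.cos(t), b * math.sin(t)   # a point on the ellipse
    d1 = math.hypot(x + c, y)                 # distance to F1(-c, 0)
    d2 = math.hypot(x - c, y)                 # distance to F2(c, 0)
    assert abs(d1 + d2 - 2 * a) < 1e-12
print("d1 + d2 = 2a =", 2 * a)
```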
Ellipses are common in physics, astronomy and engineering. For example, the orbit of each
planet in the solar system is approximately an ellipse with the Sun at one focus point. The same
is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes
of planets and stars are often well described by ellipsoids.
Area of ellipse. If we know the area of a circle is \pi r^2, then what is the area of an ellipse? We
can guess the formula without actually computing it. This area must be of the form f(a, b), with
f(a, b) = f(b, a) and f(a, a) = \pi a^2. A natural form satisfying both conditions is f(a, b) = \pi ab,
and indeed the area of an ellipse is \pi ab.
Reflecting property of ellipses. The ellipse reflection property says that rays of light emanating
from one focus, and then reflected off the ellipse, will pass through the other focus. Now, apart
from being mathematically interesting, what makes this property so fascinating? Well, there
are several reasons. Most notable of which is its significance to physics, primarily optics and
acoustics. Both light and sound are affected in this way. In fact there are many famous buildings
designed to exploit this property. Such buildings are referred to as whisper galleries or whisper
chambers. St. Paul’s Cathedral in London, England was designed by architect and mathematician
Sir Christopher Wren (1632–1723) and contains one such whisper gallery. The effect that such
a room creates is that if one person is standing at one of the foci, a person standing at the other
focus can hear even the slightest whisper spoken by the other. We refer to Section 4.4.2 for a
proof.
4.1.4 Parabolas
When you kick a soccer ball (or shoot an arrow, fire a missile or throw a stone) it arcs up into
the air and comes down again ... following the path of a parabola. A parabola is a curve where
any point is at an equal distance from a fixed point (called the focus) and a fixed straight line
(called the directrix).
Figure 4.4: A parabola is a curve where any point is at an equal distance from: a fixed point (the focus),
and a fixed straight line (the directrix). The vertex V is the lowest point on the parabola.
In Fig. 4.4a, we label the focus as F with coordinates .a; b/, and a horizontal directrix y D k
(of course we can have parabolas with a vertical directrix). Then, the definition of a parabola
gives us:
\sqrt{(y - k)^2} = \sqrt{(x - a)^2 + (y - b)^2}
y^2 - 2yk + k^2 = x^2 - 2ax + a^2 + y^2 - 2yb + b^2
y = \frac{(x - a)^2}{2(b - k)} + \frac{b + k}{2}   (4.1.5)

One can see that (b + k)/2 is the ordinate of the vertex of the parabola. To simplify the equation,
we can put the origin at V, as done in Fig. 4.4b; then we have a = 0 and k = -b, thus

y = \frac{x^2}{4b} \quad \text{or} \quad x^2 = 4by
4.1.5 Hyperbolas
Definition 4.1.3
A hyperbola is the set of all points .x; y/ in a plane such that the difference of the distances
between .x; y/ and the two foci is a positive constant.
Notice that the definition of a hyperbola is very similar to that of an ellipse. The distinction
is that the hyperbola is defined in terms of the difference of two distances, whereas the ellipse is
defined in terms of the sum of two distances. So, the equation of a hyperbola is very similar to
the equation of an ellipse (instead of a plus sign we have a minus sign):
\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1   (4.1.6)
What does the graph of a hyperbola look like? First, we need to re-write the equation in the usual
form y = f(x):

y = \pm bx\sqrt{\frac{1}{a^2} - \frac{1}{x^2}}, \quad |x| \ge a

Thus, there are two branches, one for x \ge a and one for x \le -a. When x \to \pm\infty, y \to \pm\infty, but
more precisely y \to \pm(b/a)x (for positive y, approached from below due to the term 1/a^2 - 1/x^2).
These two lines are therefore called the asymptotes of the hyperbola. We can see all of this in
Fig. 4.5a for a particular case with a = 5 and b = 3. When a = b, the asymptotes are perpendicular,
and we get a rectangular or right hyperbola (Fig. 4.5b).
Figure 4.5: Hyperbolas and their asymptotes: (a) a = 5, b = 3; (b) a rectangular hyperbola with a = b = 2
and asymptotes y = x and y = -x.
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0   (4.1.8)

The proof is based on the fact that we can transform Eq. (4.1.8) to Eq. (4.1.7) by a spe-
cific rotation of axes, to be described in what follows. First we consider axes Ox and Oy.
We then rotate these axes by an angle \theta counterclockwise to obtain OX and OY. Consider a
point P which has coordinates (x, y) in the xy system and (X, Y) in the rotated system. The
aim is now to relate these two sets of coordinates. From the figure, we have these results:
X = r\cos\varphi, \quad Y = r\sin\varphi; \qquad x = r\cos(\varphi + \theta), \quad y = r\sin(\varphi + \theta)

Using the trigonometric identities for \sin(a + b) and \cos(a + b),
we can write x, y in terms of X, Y as

x = X\cos\theta - Y\sin\theta
y = X\sin\theta + Y\cos\theta   (4.1.9)

Substituting these into Eq. (4.1.8) gives

A(X\cos\theta - Y\sin\theta)^2 + B(X\cos\theta - Y\sin\theta)(X\sin\theta + Y\cos\theta) +
C(X\sin\theta + Y\cos\theta)^2 + D(X\cos\theta - Y\sin\theta) + E(X\sin\theta + Y\cos\theta) + F = 0   (4.1.10)

which is of the form

A'X^2 + B'XY + C'Y^2 + D'X + E'Y + F = 0

To eliminate the cross term we choose \theta so that B' = 0:

B\cos 2\theta + (C - A)\sin 2\theta = 0 \implies \cot 2\theta = \frac{A - C}{B}
Example 4.1
Now we show that the equation xy = 1 is a hyperbola. This is of the form in Eq. (4.1.8)
with A = C = 0 and B = 1. Thus, \cot 2\theta = 0, hence \theta = \pi/4. With this rotation angle,
using Eq. (4.1.9) we can write x, y in terms of X, Y as

x = \frac{\sqrt{2}}{2}X - \frac{\sqrt{2}}{2}Y, \quad y = \frac{\sqrt{2}}{2}X + \frac{\sqrt{2}}{2}Y

And therefore xy = 1 becomes

\frac{X^2}{2} - \frac{Y^2}{2} = 1

which is obviously a hyperbola.
Isn't it remarkable that even though A', B', C' are different from A, B, C, certain quantities do
not change. For example, the sum A' + C' is invariant:

A' + C' = A + C

We also have another invariant–the so-called discriminant of the equation, given by
B'^2 - 4A'C' = B^2 - 4AC.
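Both invariants are easy to verify numerically. A sketch: the rotated coefficients below follow from substituting Eq. (4.1.9) into Eq. (4.1.8), and the sample values of A, B, C are arbitrary choices for illustration:

```python
import math

def rotate_coeffs(A, B, C, theta):
    # Quadratic-part coefficients after the rotation x = X cos(t) - Y sin(t),
    # y = X sin(t) + Y cos(t)
    c, s = math.cos(theta), math.sin(theta)
    Ap = A * c * c + B * c * s + C * s * s
    Bp = B * (c * c - s * s) + 2 * (C - A) * s * c
    Cp = A * s * s - B * c * s + C * c * c
    return Ap, Bp, Cp

A, B, C = 2.0, 3.0, -1.0          # sample coefficients: A + C = 1, B^2 - 4AC = 17
for theta in [0.3, 0.8, 1.4]:
    Ap, Bp, Cp = rotate_coeffs(A, B, C, theta)
    assert abs((Ap + Cp) - (A + C)) < 1e-12                   # A' + C' is invariant
    assert abs((Bp**2 - 4*Ap*Cp) - (B**2 - 4*A*C)) < 1e-12    # so is the discriminant
print("both invariants confirmed")
```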
5x^2 + y^2 + y - 8 = 0

Well, you can massage the equation (using the completing-the-square technique) to arrive at

5x^2 + (y + 1/2)^2 = 33/4
D'X + E'Y + F = 0, one can deduce the type of the conic based on the sign of -4A'C'; thus,
for the general form of a conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, we have this theorem:

B^2 - 4AC > 0: hyperbola
B^2 - 4AC < 0: ellipse   (4.1.11)
B^2 - 4AC = 0: parabola
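The theorem translates directly into code; a small sketch (degenerate cases are ignored):

```python
def conic_type(A, B, C):
    # Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by the sign of B^2 - 4AC
    # (degenerate cases, e.g. a pair of lines, are ignored in this sketch)
    disc = B * B - 4 * A * C
    if disc > 0:
        return "hyperbola"
    if disc < 0:
        return "ellipse"
    return "parabola"

print(conic_type(0, 1, 0))  # xy = 1            -> hyperbola
print(conic_type(5, 0, 1))  # 5x^2 + y^2 + ...  -> ellipse
print(conic_type(1, 0, 0))  # x^2 = 4by         -> parabola
```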
4.2 Functions
Consider now Galileo’s experiments on balls rolling down a ramp. He measured how far a
ball went in a certain amount of time. If we denote time by t and distance by s, then we have a
relation between s and t . As s and t are varying quantities, they are called variables. The relation
between these two variables is a function. Loosely stated for the moment, a function is a relation
between variables.
The most effective mathematical representation of a function is what we call a formula. For
example, the distance the ball traveled is written as s = t^2. The formula immediately gives us
the distance at any time; for example, by plugging t = 2 into the formula, the distance traveled is
4. As s depends on t, t is called an independent variable and s a dependent variable. And we speak
of s = t^2 as "s is a function of t".
As we see more and more functions, it is convenient to have a notation specifically invented
for functions. Euler used the notation s = f(t), read "f of t", to describe all functions of a single
variable t. When the independent variable is not time, mathematicians use y = f(x). And this
short notation represents all functions that take one number x and return another number y! It
can be y = x, y = \sin x etc.
In the function y = f(x), for each value of x we have a corresponding value for y (= f(x)).
But what are the possible values of x? That varies from function to function. For y = x, x can
be any real number (mathematicians like to write x \in \mathbb{R} for this). For y = \sqrt{x}, x must be a
real number that is greater than or equal to zero (we do not discuss complex numbers in calculus
in this chapter). That's why when we talk about a function we need to be clear about the range of
the input (called the domain of the function) and also the range of the output. The notation for that
is f : \mathbb{R} \to \mathbb{R} for any function that takes a real number and returns a real number.
Now we consider three common functions: a linear function y D f .x/ D x, a power
function y D x 2 and an exponential function y D 2x . For various values of the input x,
Table 4.1 presents the corresponding outputs. It is obvious that it is hard to get much out
of this table; algebra alone is not sufficient. We need to bring in geometry to get insights. A picture
is worth a thousand words. That's why we plot the points (x, f(x)) in a Cartesian plane and connect the
points by lines and we get the so-called graphs of functions. See Fig. 4.6 for the graphs of the
three functions under consideration.
With a graph you can actually see how the function is changing, where its zeroes and inflection
points are, how it behaves at each point, what its minima are, etc. Compare looking at a graph
Units are not important here and thus skipped.

Table 4.1: Outputs of the three functions; first differences are shown in parentheses.

x    y = x    y = x^2    y = 2^x
0    0        0          1
1    1        1 (1)      2 (1)
2    2        4 (3)      4 (2)
3    3        9 (5)      8 (4)
4    4        16 (7)     16 (8)
5    5        25 (9)     32 (16)
6    6        36 (11)    64 (32)
Just as with numbers, where we have even numbers and odd numbers, we also have even and
odd functions. If we plot an even function y = f(x) we observe that it is symmetrical with
respect to (sometimes the abbreviation w.r.t. is used) the y-axis; the part on one side of the vertical
axis is a reflection of the part on the other side, see Fig. 4.7. This means that f(-x) = f(x).
On the other hand, the graph of an odd function has rotational symmetry with respect to the
origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin.
So, even functions and odd functions are functions which satisfy particular symmetry relations.
Mathematicians define even and odd functions as:
Definition 4.2.1
(a) A function f(x) : \mathbb{R} \to \mathbb{R} is an even function if for any x \in \mathbb{R}: f(-x) = f(x).
Figure 4.7: Graphs of some even and odd functions. Typical even functions are y = x^{2n}, y = \cos x, and
typical odd functions are y = x^{2n+1}, y = \sin x.
Decomposition of a function. Any function f(x) can be decomposed into a sum of an even
function f^e and an odd function f^o, as

f(x) = f^e(x) + f^o(x), \quad f^e(x) = \frac{f(x) + f(-x)}{2}, \quad f^o(x) = \frac{f(x) - f(-x)}{2}

Why is such a decomposition worth studying? One example: as the integral is defined as an area,
from Fig. 4.7 we can deduce the following results:

\int_{-a}^{a} f^e(x)\,dx = 2\int_{0}^{a} f^e(x)\,dx, \quad \int_{-a}^{a} f^o(x)\,dx = 0   (4.2.3)
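A sketch verifying the even-odd split numerically, using the standard formulas f^e(x) = (f(x) + f(-x))/2 and f^o(x) = (f(x) - f(-x))/2:

```python
def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: x**3 + x**2 + 1          # neither even nor odd
fe, fo = even_part(f), odd_part(f)

for x in [-2.0, -0.5, 0.0, 1.3, 2.0]:
    assert abs(fe(x) - fe(-x)) < 1e-12         # f_e is even
    assert abs(fo(x) + fo(-x)) < 1e-12         # f_o is odd
    assert abs(fe(x) + fo(x) - f(x)) < 1e-12   # they sum back to f
print(fe(2.0), fo(2.0))  # 5.0 8.0
```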
Figure 4.8: Translation of a function y = f(x): vertical translation f(x) + c displaces the function a
distance c upward (c > 0), and downward if c < 0. Horizontal translation to the right with f(x - c) and
to the left with f(x + c) for c > 0. Note: the original function is y = x^2, plotted as the blue curve.
And just as we stretch (or squeeze/shrink) a solid object, mathematicians stretch and squeeze
functions. They can do a horizontal stretching with the transformation f(cx) (c < 1) and a
vertical stretching with cf(x) (c > 1). Fig. 4.9 illustrates these scaling transformations for
y = \sin x.
Figure 4.9: Scaling of y = \sin x: horizontal squeezing with \sin(2x) (left) and vertical stretching with
2\sin(x) (right).
For example, consider two functions g(x) = \sin x and f(x) = x^2; composing them as (g \circ f)(x)
we obtain the composite function \sin x^2. If we compose in the reverse order, i.e., (f \circ g)(x), we
get \sin^2 x. So, (g \circ f)(x) \ne (f \circ g)(x). It is interesting to know that, later on in a linear algebra
course, we will see that this fact is why matrix-matrix multiplication is not commutative (Section 10.6).
How about chaining three functions h(x), g(x) and f(x)? It is built on top of composing
two functions:

(f \circ g \circ h)(x) = ((f \circ g) \circ h)(x) = (f \circ (g \circ h))(x)

That is, function composition is not commutative but is associative (similar to (ab)c = a(bc)
for reals a, b, c).
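A quick sketch illustrating both facts, using the same g(x) = sin x and f(x) = x^2 from above:

```python
import math

def compose(f, g):
    # (f ∘ g)(x) = f(g(x))
    return lambda x: f(g(x))

g = math.sin
f = lambda x: x ** 2

gf = compose(g, f)   # sin(x^2)
fg = compose(f, g)   # (sin x)^2

x = 1.3
print(gf(x) == fg(x))   # False: composition is not commutative

h = lambda x: x + 1.0
left = compose(compose(f, g), h)    # (f ∘ g) ∘ h
right = compose(f, compose(g, h))   # f ∘ (g ∘ h)
print(left(x) == right(x))          # True: composition is associative
```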
Figure 4.10: Venn diagram for domain, co-domain and range of a function.
We confine our discussion in this chapter mostly to functions of real numbers. Functions of
complex numbers are left to Chapter 7.
Check Section 5.5 if you're not sure of Venn diagrams.
Example 4.2
One example is sufficient to demonstrate how to find the domain of a function:

f(x) = \frac{2x - 1}{1 - \sqrt{x - 5}}

As we forbid division by zero and only real numbers are considered, the function only makes
sense when:

1 - \sqrt{x - 5} \ne 0 \text{ and } x - 5 \ge 0 \implies x \ne 6 \text{ and } x \ge 5

To say x is a number that is larger than or equal to 5 and different from 6, we can write x \ne 6 and
x \ge 5. Mathematicians seem to write it this way: x \in [5, 6) \cup (6, \infty). This is because
they're thinking this way: consider the number line starting from 5, and make a cut
at 6 (we do not want it). Thus the line is broken into two pieces [5, 6) and (6, \infty). And the
symbol \cup in A \cup B means the union of the sets A and B. The bracket means that that end of the
interval is closed – it includes the endpoint. An open interval (a, b), on the other hand, does not
include the endpoints a and b, and is defined by a < x < b.
Let's say we know the integral of y = x^2 between 0 and 1, i.e., \int_0^1 x^2 dx. What is then
\int_0^1 \sqrt{u}\, du? As the two functions x^2 and \sqrt{u} are inverses of each other, it follows that the sum
of these integrals equals 1 (Fig. 4.12). So, knowing one integral yields the other.
Figure 4.12
x(t) = r\cos t, \quad y(t) = r\sin t, \quad 0 \le t \le 2\pi   (4.2.6)
Curves represented by equations of the form x(t), y(t) are called parametric curves, and the
variable t is called a parameter. How do we get the graph of a parametric curve? That is simple: for
each value of t, we compute x(t) and y(t), which constitute a point in the xy-plane. The locus
of all such points is the graph. Fig. 4.13 shows some parametric curves.
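Tracing Eq. (4.2.6) in code makes the idea concrete; each sampled value of t gives one point of the graph (r = 2 is an assumed sample value):

```python
import math

r = 2.0
ts = [2 * math.pi * k / 8 for k in range(9)]               # sample values of t
points = [(r * math.cos(t), r * math.sin(t)) for t in ts]  # points of the graph

# every sampled point lies on the circle x^2 + y^2 = r^2
for x, y in points:
    assert abs(x * x + y * y - r * r) < 1e-12
print(len(points), "points on a circle of radius", r)
```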
This is from JEE-Advanced 2021 exam. Joint Entrance Examination – Advanced (JEE-
Advanced) is an academic examination held annually in India.
Ok. Assume that we're not sitting any exam, and the ultimate goal is just to get the sum; then it
is super easy: write a few lines of code and you'll see that S = 19. But what if we actually have
to do this without a calculator, let alone a PC? What are we going to do? We pay attention to the
expression of S and we observe a regularity:

S = f\left(\frac{1}{40}\right) + f\left(\frac{2}{40}\right) + \cdots + f\left(\frac{38}{40}\right) + f\left(\frac{39}{40}\right) - f\left(\frac{1}{2}\right)
That is, the arguments pair up so that each pair sums to one (e.g. 1/40 + 39/40 = 1, 2/40 + 38/40 = 1,
3/40 + 37/40 = 1 etc.). Let's compute then f(x) + f(1 - x), and hope that something good
is there. To test this idea, we compute f(0) + f(1) (as these are easy), and it gives us 1:
very promising. Moving on to f(x) + f(1 - x):

f(x) + f(1 - x) = \frac{4^x}{4^x + 2} + \frac{4^{1-x}}{4^{1-x} + 2} = 1

The sum is also 1. Then, S consists of 19 sums of the form f(x) + f(1 - x), each of which is
nothing but one, plus f(20/40) minus f(1/2). As f(20/40) = f(1/2), these two cancel, and the
final result is thus simply 19.
Just use the rule a^{y-z} = a^y/a^z; then 4^{1-x} = 4/4^x.
With this, you can now try this Canadian math problem from 1995. Let f(x) = 9^x/(9^x + 3). Compute
S = \sum_{n=1}^{1995} f(n/1996).
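The "few lines of code" mentioned above might look like this; the Canadian problem from the footnote is checked too, since the same pairing gives 997 pairs plus f(1/2) = 1/2:

```python
def f(x):
    return 4**x / (4**x + 2)

# S = f(1/40) + f(2/40) + ... + f(39/40) - f(1/2)
S = sum(f(n / 40) for n in range(1, 40)) - f(1 / 2)
print(abs(S - 19) < 1e-9)      # True

# the 1995 Canadian problem from the footnote: 997 pairs plus f(1/2) = 1/2
g = lambda x: 9**x / (9**x + 3)
S2 = sum(g(n / 1996) for n in range(1, 1996))
print(abs(S2 - 997.5) < 1e-9)  # True
```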
What we just did is: starting from f_0(x) = x/(x + 1), using f_{n+1}(x) = (f_0 \circ f_n)(x) we compute
f_1(x) (recall that (f_0 \circ f_n)(x) is a composite function), then using f_1(x) we compute f_2(x), and
so on. Lucky for us, we see a pattern. Observe the coefficients in each equation and we
can write

f_n(x) = \frac{x}{(n + 1)x + 1}
Now we prove this formula using ... proof by induction (what else?). The formula works for
n = 0. Now we assume it works for n = k:

f_k(x) = \frac{x}{(k + 1)x + 1}

And we're going to prove that it's also correct for n = k + 1, i.e.,

f_{k+1}(x) = \frac{x}{(k + 2)x + 1}
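Alongside the induction, the closed form can be sanity-checked by direct iteration; a sketch:

```python
f0 = lambda x: x / (x + 1)

def f_iter(n, x):
    # f_n obtained by repeated composition: f_{n+1} = f0 ∘ f_n
    y = f0(x)
    for _ in range(n):
        y = f0(y)
    return y

def f_closed(n, x):
    return x / ((n + 1) * x + 1)

for n in range(6):
    for x in [0.3, 1.0, 2.5]:
        assert abs(f_iter(n, x) - f_closed(n, x)) < 1e-12
print("closed form confirmed for n = 0..5")
```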
Figure 4.14: The area of a triangle is related to the area of the bounding rectangle.
ancient mathematicians computed the areas of new, more complex geometries based on the known
areas of older, simpler geometries.
Heron’s formula. What is the area of a triangle in terms of its sides a; b; c? The formula
is credited to Heron (or Hero) of Alexandria, and a proof can be found in his book, Met-
rica, written c. CE 60. It has been suggested that Archimedes knew the formula over two
centuries earlier. I now present a derivation of this formula using the Pythagorean theorem.
First, the area is computed using the familiar formula "half of the base
multiplied by the height": A = (1/2)ah. Second, the height is expressed
in terms of a, b, c. Referring to the figure, there are 3 equations to determine
x, y, h:

x + y = a
x^2 + h^2 = c^2   \implies   x = \frac{a}{2} - \frac{b^2 - c^2}{2a}, \quad y = \frac{a}{2} + \frac{b^2 - c^2}{2a}, \quad h^2 = c^2 - x^2
y^2 + h^2 = b^2

Hence

4A^2 = a^2(c^2 - x^2)
4A^2 = a^2(c - x)(c + x) = a^2\left(c - \frac{a}{2} + \frac{b^2 - c^2}{2a}\right)\left(c + \frac{a}{2} - \frac{b^2 - c^2}{2a}\right)
4A^2 = a^2 \cdot \frac{2ac - a^2 + b^2 - c^2}{2a} \cdot \frac{2ac + a^2 - b^2 + c^2}{2a}
16A^2 = [b^2 - (a - c)^2][(a + c)^2 - b^2] = (b + a - c)(b - a + c)(a + c + b)(a + c - b)
The final expression for A is symmetric with respect to a, b, c and it has the correct dimension
(the square root of length to the fourth power is length squared–an area). Thus, it seems correct (if it were
A = \sqrt{s(s - 2a)(s - b)(s - c)} or A = \sqrt{s(s - a)^2(s - b)(s - c)}, then it would definitely be wrong).
It has to have a pattern. Why? Because this is a test! It must be answered within a short amount of time.
How do we know that it's correct? Check it for a triangle whose area we know for sure. Note that
using the generalized Pythagorean theorem gives a shorter/easier proof.
What can we do with Heron's formula? We can use it to compute the area of a triangle
given the sides a, b, c, of course. But the power of symbolic algebra is that we can deduce new
information from Eq. (4.3.1). We can pose this question: among all triangles of the same
perimeter, which triangle has the maximum area? Using the AM-GM inequality (Section 2.20),
it's straightforward to show that an equilateral triangle (i.e., a triangle with three equal sides
a = b = c) has the maximum area.
A physicist and a mathematician are sitting in a faculty lounge. Suddenly, the coffee
machine catches on fire. The physicist grabs a bucket, leaps towards the sink,
fills the bucket with water and puts out the fire. The second day, the same two sit in the
same lounge. Again, the coffee machine catches on fire. This time, the mathematician
stands up, gets a bucket, and hands the bucket to the physicist, thus reducing the problem
to a previously solved one.
Figure 4.15: Lune of Hippocrates. The shaded area AEBF is a moon-like crescent shape, and it is called
a lune, deriving from the Latin word luna for moon. Geometrically a lune is the area between two circular
arcs.
Hippocrates wanted to solve the classic problem of squaring the circle, i.e. constructing a
square by means of straightedge and compass, having the same area as a given circle. He proved
that the lune bounded by the arcs labeled E and F in the figure has the same area as triangle
ABO. This afforded some hope of solving the circle-squaring problem, since the lune is bounded
only by arcs of circles. Heath concludes that, in proving his result, Hippocrates was also the
first to prove that the area of a circle is proportional to the square of its diameter.
Figure 4.16
First, the areas of triangles OQR and QBR are identical and equal 1/16. Thus \triangle_2 = 1/8, and
therefore \triangle_2 + \triangle_3 = 1/4. And note that \triangle_1 = 1, so \triangle_2 + \triangle_3 = (1/4)\triangle_1. So, we can write

A = 1 + \frac{1}{4} + \frac{1}{16} + \cdots = \triangle_1\left(1 + \frac{1}{4} + \frac{1}{16} + \cdots\right) = \frac{4}{3}\triangle_1 = \frac{4}{3}   (4.3.2)
Sir Thomas Little Heath (1861 – 1940) was a British civil servant, mathematician, classical scholar, and
historian of ancient Greek mathematics. Heath translated works of Euclid of Alexandria, Apollonius of Perga,
Aristarchus of Samos, and Archimedes of Syracuse into English.
where use was made of the geometric series (Section 2.18.2). From this result, it is simple to
deduce that the area below the parabola is 2 - 4/3 = 2/3.
A student in a calculus course would just use integration and immediately obtain the result,
as
A = 2\int_0^1 (1 - x^2)\,dx = 2\left[x - \frac{x^3}{3}\right]_0^1 = \frac{4}{3}   (4.3.3)
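Both routes to 4/3 can be imitated numerically: partial sums of Archimedes' geometric series, and a midpoint-rule approximation of the integral in Eq. (4.3.3); a sketch:

```python
# Partial sums of 1 + 1/4 + 1/16 + ... approach 4/3
partial, term = 0.0, 1.0
for _ in range(20):
    partial += term
    term /= 4
print(abs(partial - 4 / 3) < 1e-10)   # True

# midpoint-rule approximation of 2 * integral_0^1 (1 - x^2) dx
n = 100_000
integral = 2 * sum((1 - ((k + 0.5) / n) ** 2) / n for k in range(n))
print(abs(integral - 4 / 3) < 1e-6)   # True
```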
This technique has a name because it was widely used by Greek mathematicians: it's called
the method of exhaustion; as we add more and more triangles, they exhaust the area of the
parabola segment. There is a lot to learn from Archimedes' solution to this problem. First, he
also used the area of a simpler geometry (a triangle). Second, and the most important idea: he
used infinitely many triangles! Only when the number of triangles approaches infinity does the
sum of all the triangle areas approach the area of the parabola segment. This sounds similar to the
integral calculus we know of today! But wait: while Eq. (4.3.3) is straightforward, Archimedes'
solution required his genius. For example, how would we have known to use the triangles that he adopted?
Even though Archimedes' solution is less powerful than the integral calculus developed
much later in the 17th century, he and the Greek mathematicians were right in going to infinity. The
main idea of computing something finite, e.g. the area of a certain (curved) shape, is to chop it
into many smaller pieces, handle these pieces, and, as the number of pieces goes to infinity,
adding them up gives the answer. This is what Strogatz called the Infinity Principle in his
book Infinite Powers. It is remarkable that we see Archimedes' legacy in the modern world;
see for instance Fig. 4.17. In computer graphics and in many engineering and science fields, any
shape is approximated by a collection of triangles (sometimes quadrilaterals are also used). The
difference is that we do not go to infinity with this process, as we're seeking an approximation.
Note that Archimedes was trying to get an exact answer.
the area of a circle is proportional to the square of its radius, so A = \pi_2 r^2, assuming that
the proportionality constant is \pi_2;
The third fact implies that \pi_1 = \pi_2 = \pi. A proof of A = (1/2)Cr is shown in Fig. 4.18: the area
of the circle equals the area of an inscribed regular polygon having infinitely many sides. The area of
this polygon is the sum of the areas of all the isosceles triangles OAB; an isosceles triangle is a
triangle that has two sides of equal length. These triangles have a height (OH) equal to the radius
of the circle, and the sum of their bases equals the circle's circumference.
Figure 4.17: Archimedes’ legacy in the modern world: use of triangles and tetrahedra to approximate any
2D and 3D objects.
If the above reasoning was not convincing enough, here is a better one. Let's consider a
regular polygon of n sides inscribed in a circle. Its area is denoted by A_n and its circumference
by C_n; from Fig. 4.18, we can get

A_n = nr^2\sin\frac{\pi}{n}\cos\frac{\pi}{n}, \quad C_n = 2nr\sin\frac{\pi}{n}

Then, we consider the ratio A_n/C_n when n is very large:

\frac{A_n}{C_n} = \frac{1}{2}r\cos\frac{\pi}{n} \implies \lim_{n\to\infty}\frac{A_n}{C_n} = \frac{1}{2}r

See Table 4.2 for supporting data.
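The supporting data is easy to regenerate; a sketch with r = 1, as in Table 4.2:

```python
import math

r = 1.0
n = 4
while n <= 512:
    An = n * r * r * math.sin(math.pi / n) * math.cos(math.pi / n)  # polygon area
    Cn = 2 * n * r * math.sin(math.pi / n)                          # polygon perimeter
    print(n, round(An, 6), round(Cn, 6), round(An / Cn, 6))
    n *= 2
# the last column tends to r/2 = 0.5 as n grows
```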
How did ancient mathematicians come up with the formula A = \pi r^2? The idea for calculating
the area of the circle is the same: breaking the circle into simpler shapes whose areas are
known. This is what ancient mathematicians did, see Fig. 4.19: they chopped a circle into eight
wedge-shaped pieces (like a pizza), and rearranged the wedges. The obtained shape does not look
like any known shape. So, they chopped the circle into twice as many wedges: this time 16 pieces.
This time, something familiar appears! The wedges together look like a rectangle. Being more
confident now, they decided to go to the extreme: divide the circle into an infinite number of wedges.
What they got is a rectangle of sides \pi r (half of the circle's perimeter) and r. Thus, the area of a
circle is \pi r^2. What an amazing idea it was.
4.3.5 Calculation of $\pi$
Table 4.2: Proof of $A = 0.5Cr$ with $r = 1$: using regular polygons of 4 to 512 sides.
$n$ | $A_n$ | $C_n$ | $A_n/C_n$
Starting with a hexagon ($n = 6$), then $n = 12, 24, 48, 96$, Archimedes got
$$3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}$$
This polygonal algorithm dominated for over 1 000 years until infinite series were discovered.
We presented one such infinite series for $\pi$ in Eq. (2.18.17). And there is Machin's formula in
Eq. (3.9.3). And we will present more in this chapter.
$\pi$ is a special number; various books have been written about it. There is even a day called Pi day
(March 14), which is coincidentally also the birthday of Albert Einstein (14 March 1879). People
keep calculating more and more digits of this number. Note that no one cares about the decimal
digits of $\sqrt{2}$. I recommend the book A History of Pi by Petr Beckmann.
Liu Hui’s algorithm. Liu Hui (3rd century CE) was a Chinese mathematician and writer who
lived in the Three Kingdoms period (220–280) of China. In 263, he edited and published a book
with solutions to mathematical problems presented in the famous Chinese book of mathematics
known as The Nine Chapters on the Mathematical Art, in which he was possibly the first mathe-
matician to discover, understand and use negative numbers. Along with Zu Chongzhi (429–500),
Liu Hui was known as one of the greatest mathematicians of ancient China. In this section I
present his method to determine $\pi$.
Liu Hui first derived an inequality for $\pi$ based on the area of
inscribed polygons with $N$ and $2N$ sides. In the diagram, $ABCD$
is an $N$-gon whereas $AEBFCGDH$ is a $2N$-gon, both
inscribed in the circle. Regarding the areas of these polygons and
the circle, we have the following relations:
$$A_N = \text{green area} \tag{4.3.4a}$$
$$A_{2N} = \text{green area} + \text{orange area} \tag{4.3.4b}$$
$$A_{2N} < A_c < A_{2N} + \text{grey area} \tag{4.3.4c}$$
$$\text{grey area} = \text{orange area} \tag{4.3.4d}$$
Therefore, we can deduce that
$$A_{2N} < A_c < 2A_{2N} - A_N \Longrightarrow A_{2N} < \pi < 2A_{2N} - A_N \tag{4.3.5}$$
where the last inequality holds when considering a circle of unit radius.
Liu Hui then computed the area of inscribed polygons with
$N$ and $2N$ sides. To that end, he needed a formula relating the
side of a $2N$-gon, denoted by $m$, with that of an $N$-gon, denoted
by $M$. Using the Pythagorean theorem, he derived this equation
(see figure):
$$m = \sqrt{\left(\frac{M}{2}\right)^2 + \left(r - \sqrt{r^2 - \left(\frac{M}{2}\right)^2}\right)^2}$$
Now, he calculated the area of an $N$-gon approximately as the sum of the areas of all the triangles
making up the polygon:
$$A_N \approx N\cdot\frac{1}{2}Mr = \frac{1}{2}NMr \tag{4.3.7}$$
Now comes the complete algorithm: we start with $N = 6$ (hexagon), thus $M = 1$ (as $r = 1$).
Then, we do:
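A sketch of this doubling scheme in Python (my own reconstruction from the side-doubling formula and inequality (4.3.5), not Liu Hui's original tabulation; the exact kite decomposition gives the area of the $2N$-gon as $\frac{1}{2}NMr$):

```python
import math

def liu_hui(doublings=12):
    """Bound pi by inscribed N-gons and 2N-gons in a unit circle (r = 1)."""
    r = 1.0
    N, M = 6, 1.0                                       # hexagon: side M = r = 1
    A_N = 0.5 * N * M * math.sqrt(r**2 - (M / 2)**2)    # exact hexagon area
    for _ in range(doublings):
        A_2N = 0.5 * N * M * r                          # area of the 2N-gon
        lower, upper = A_2N, 2 * A_2N - A_N             # inequality (4.3.5)
        # side m of the 2N-gon from side M of the N-gon
        m = math.sqrt((M / 2)**2 + (r - math.sqrt(r**2 - (M / 2)**2))**2)
        N, M, A_N = 2 * N, m, A_2N
    return lower, upper

print(liu_hui())  # a narrow bracket around 3.14159...
```

Twelve doublings already pin $\pi$ down to several decimal places, which is why the polygonal algorithm survived for a millennium.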
With only one term, we get $\pi = 3.1415926535897936$! I do not know the derivation of it. But
it is certain that it did not come from the method of ancient mathematicians which relied on
geometry. Ramanujan had in his hands the power of 20th century mathematics. To know more
about Ramanujan, I recommend the 2015 British biographical drama film ’The Man Who Knew
Infinity’. The movie is based on the 1991 book of the same name by Robert Kanigel.
of age. I have had no University education but I have undergone the ordinary school
course. After leaving school I have been employing the spare time at my disposal to
work at Mathematics. I have not trodden through the conventional regular course which is
followed in a University course, but I am striking out a new path for myself. I have made
a special investigation of divergent series in general and the results I get are termed by the
local mathematicians as ‘startling’.
Just as in elementary mathematics you give a meaning to $a^n$ when $n$ is negative and
fractional to conform to the law which holds when $n$ is a positive integer, similarly the
whole of my investigations proceed on giving a meaning to Eulerian Second Integral
for all values of $n$. My friends who have gone through the regular course of University
education tell me that $\int_0^\infty x^{n-1}e^{-x}\,dx = \Gamma(n)$ is true only when $n$ is positive. They
say that this integral relation is not true when $n$ is negative. Supposing this is true only for
positive values of $n$ and also supposing the definition $n\,\Gamma(n) = \Gamma(n+1)$ to be universally
true, I have given meanings to these integrals and under the conditions I state the integral
is true for all values of $n$, negative and fractional. My whole investigations are based upon
this and I have been developing this to a remarkable extent so much so that the local
mathematicians are not able to understand me in my higher flights.
Very recently I came across a tract published by you styled Orders of Infinity in page
36 of which I find a statement that no definite expression has been as yet found for
the number of prime numbers less than any given number. I have found an expression
which very nearly approximates to the real result, the error being negligible. I would
request you to go through the enclosed papers. Being poor, if you are convinced that
there is anything of value I would like to have my theorems published. I have not given
the actual investigations nor the expressions that I get but I have indicated the lines on
which I proceed. Being inexperienced I would very highly value any advice you give me.
Requesting to be excused for the trouble I give you.
I remain, Dear Sir, Yours truly, S. Ramanujan
Figure 4.20: Approximating the area under the curve y D f .x/ by many thin rectangles.
What we need to do now is to compute $A_n$. Luckily that's simple, and it should be because it is
our choice to make this chop! For simplicity, assume that these rectangles have the same base
$\Delta x = (b-a)/n$ (Fig. 4.21). That is, we place $n+1$ equally spaced points $x_0, x_1, \ldots$ over the
interval $[a,b]$; we then have $n$ sub-intervals $[x_i, x_{i+1}]$. Actually we have two ways to build the
slices: one way is to use the left point $x_i$ of $[x_i, x_{i+1}]$ (similar to an inscribed polygon in a
circle); the second way is to use the right point $x_{i+1}$ (similar to a circumscribed polygon).
Note that $i$ here is not the imaginary unit ($i^2 = -1$).
Figure 4.21: Area of $y = f(x)$ by chopping it into an infinite number of thin slices. The interval $[a,b]$
is divided into $n$ sub-intervals $[x_i, x_{i+1}]$ where $x_i = a + i(b-a)/n$. We can either use the left point or
the right point to define the height of one slice.
$$= b^3 \lim_{n\to\infty}\left(\frac{1}{3} + \frac{1}{2n} + \frac{1}{6n^2}\right) = \frac{b^3}{3}$$
He first used the symbol omn, short for omnia which is Latin for sum. On the other hand, Newton did not care
about notation, thus he did not have a systematic notation for the integral.
The red terms vanish when $n$ approaches infinity; they are infinitely small. The result before
going to the limit is quite messy (many terms), but in the limit, the simple result of $b^3/3$ is obtained.
This is similar to how ancient mathematicians found the area of the circle (Fig. 4.19). By the
way, the red terms account for those small triangles above the curve. If $b = 1$, the area is 1/3,
which agrees with Archimedes' finding.
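The squeeze of the left and right sums onto $b^3/3$ can be watched numerically; a minimal Python sketch (an illustration, not part of the original text):

```python
def riemann_sums(b, n):
    """Left and right Riemann sums of y = x^2 over [0, b] with n equal slices."""
    dx = b / n
    left = sum((i * dx)**2 for i in range(n)) * dx           # inscribed slices
    right = sum((i * dx)**2 for i in range(1, n + 1)) * dx   # circumscribed slices
    return left, right

for n in [10, 100, 1000, 10000]:
    print(n, riemann_sums(1.0, n))  # both columns close in on 1/3
```

The left sum is always below $1/3$ and the right sum above, exactly like the inscribed and circumscribed polygons of the circle.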
Let's do another integral for $y = x^3$, and hope that we can see a pattern for $y = x^p$ with
$p$ being a positive integer (because we do not want to repeat this for $y = x^4$, $y = x^5$ etc.;
mathematics would be boring then):
$$\int_0^b x^3\,dx = \lim_{n\to\infty}\sum_{i=1}^n \left(\frac{ib}{n}\right)^3\frac{b}{n} = \lim_{n\to\infty}\frac{b^4}{n^4}\sum_{i=1}^n i^3 = \lim_{n\to\infty}\frac{b^4}{4}\,\frac{n^4 + 2n^3 + n^2}{n^4} = \frac{b^4}{4} \tag{4.3.11}$$
and we have used Eq. (2.5.14) to compute $\sum_{i=1}^n i^3$. We are seeing a pattern here, and thus for
any positive integer $p$, we have the following results
$$\int_0^b x^p\,dx = \frac{b^{1+p}}{1+p} \Longrightarrow \int_0^a x^p\,dx = \frac{a^{1+p}}{1+p} \Longrightarrow \int_a^b x^p\,dx = \frac{b^{1+p} - a^{1+p}}{1+p} \tag{4.3.12}$$
$$\int_0^b x^{1/m}\,dx = \frac{m}{1+m}\,b^{1/m+1}, \quad (m \neq -1) \tag{4.3.13}$$
Figure 4.22
$$\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$$
$$\int_a^a f(x)\,dx = 0 \Longrightarrow \int_a^c f(x)\,dx = -\int_c^a f(x)\,dx$$
$$\int_a^b [\alpha f(x) + \beta g(x)]\,dx = \alpha\int_a^b f(x)\,dx + \beta\int_a^b g(x)\,dx \tag{4.3.14}$$
$$\int_a^b f(x)\,dx > 0 \ \text{ if } f(x) > 0 \ \forall x \in [a,b]$$
The first rule means that we can split the integration interval into sub-intervals, do the
integration over the sub-intervals, and sum them up. The second rule indicates that if we reverse
the integration limits, the sign of the integral changes. The third rule is actually a combination of
two rules: $\int_a^b \alpha f(x)\,dx = \alpha\int_a^b f(x)\,dx$ and $\int_a^b [f(x)+g(x)]\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx$.
The fourth means that if the integrand is positive within an interval, then over this interval the
integral is positive.
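These rules are easy to check numerically. The sketch below uses a crude midpoint-rule integrator (my own helper for illustration, not from the book) to test the interval-splitting and linearity rules on two arbitrary functions:

```python
def integrate(f, a, b, n=20000):
    # midpoint-rule approximation of the integral of f over [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**2
g = lambda x: 3 * x + 1
a, c, b = 0.0, 1.3, 2.0

whole = integrate(f, a, b)                                 # integral over [a, b]
split = integrate(f, a, c) + integrate(f, c, b)            # rule 1: split at c
lin_lhs = integrate(lambda x: 2 * f(x) + 5 * g(x), a, b)   # rule 3: linearity
lin_rhs = 2 * integrate(f, a, b) + 5 * integrate(g, a, b)
print(whole, split, lin_lhs, lin_rhs)
```

For $f(x) = x^2$ on $[0,2]$ the exact value is $8/3$, and the split and unsplit computations agree to well within the integrator's error.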
Another rule (or property) of integrals is the following
$$\text{if } h(x) \le f(x) \le g(x) \ (a \le x \le b) \Longrightarrow \int_a^b h(x)\,dx \le \int_a^b f(x)\,dx \le \int_a^b g(x)\,dx \tag{4.3.15}$$
I have used two notations $f(u)\,du$ and $f(t)\,dt$ to illustrate that $u$ or $t$ can be thought of as dummy
variables; any variable (not $x$) can be used.
That's all we can do with integral calculus, for now. We are not even able to compute the
area of a circle using the integral! We need the other part of calculus, differential calculus, which
is the topic of the next section.
Fermat solved a maxima problem using the idea behind the concept of derivative that we know
of today. As Fermat (and all the mathematicians of his time) did not know the concept of limit,
which is the foundation of calculus, his maths was not rigorous, but it worked in the sense that
it provided correct results. The motivation for the inclusion of Fermat's work is to show that
mathematics was not developed as it is now presented in textbooks, where everything works nicely.
Far from that, there are setbacks, doubts, criticisms and so on. Then in Section 4.4.3 we talk
about uniform and non-uniform speeds as a motivation for the concept of derivative introduced
in Section 4.4.4. As we have already met the limit concept in Section 2.19, I immediately use
limit to define the derivative of a function. But I will postpone a detailed discussion of what a
limit is until Section 4.10 to show that, without limits, 17th century mathematicians with their
intuition could proceed without rigor. This style of presentation will, hopefully, comfort many
students. It took hundreds of years for the best mathematicians to develop the calculus that we
know of today. It is OK for us to be confused, to make mistakes and to have low grades.
thus $M$ is maximum when the red term vanishes, or when $x = a/4$. Thus $y = a/4$, and a square
has the largest area among all rectangles with a given perimeter. One thing to notice here is
that this algebraic way works only for this particular problem. We need something more
powerful which can be, hopefully, applicable to all problems, not just Eq. (4.4.1).
Fermat's reasoning was this: if $x$ is the one that renders $M$ maximum, then adding a small
number $\epsilon$ to $x$ would not change $M$. This gives us the equation $M(x+\epsilon) = M(x)$, and with
Eq. (4.4.1), we get:
$$\frac{a(x+\epsilon)}{2} - (x+\epsilon)^2 = \frac{ax}{2} - x^2 \tag{4.4.3}$$
Why? Imagine you’re climbing up a hill. When you’re not at the top each move increases your altitude.
But when you’re already at the top, then a move will not change your altitude. Actually, it changes, but only
insignificantly (assuming that your step is not giant).
which leads to another equation, by dividing the above equation by $\epsilon$ (this can be done because
$\epsilon \neq 0$):
$$\frac{a}{2} - 2x - \epsilon = 0 \tag{4.4.4}$$
Then, he set $\epsilon = 0$, to get $x$:
$$\frac{a}{2} - 2x = 0 \Longrightarrow x = \frac{a}{4} \tag{4.4.5}$$
To someone who knows calculus, it is easy to recognize that Eq. (4.4.5) is exactly M 0 .x/ D 0 in
our modern notation, where M 0 .x/ is the first derivative of M.x/. Thus Fermat was very close
to the discovery of the derivative concept.
It is important to clearly understand what Fermat did in the above process. First, he
introduced a quantity $\epsilon$ which is initially non-zero. Second, he manipulated this $\epsilon$ as if it were an
ordinary number. Finally, he set it to zero. So, this $\epsilon$ is something and nothing simultaneously!
Newton and Leibniz's derivative, also based on similar procedures, thus lacked a rigorous founda-
tion for 150 years until Cauchy and Weierstrass introduced the concept of limit (Section 4.10).
But Fermat’s solution is correct!
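Fermat's procedure can be mimicked numerically: form the quotient $(M(x+\epsilon)-M(x))/\epsilon$ and watch it approach $a/2 - 2x$, which vanishes at $x = a/4$. A small sketch (the value $a = 8$ is an arbitrary choice for illustration):

```python
a = 8.0

def M(x):
    # area of a rectangle with sides x and y, where x + y = a/2
    return a * x / 2 - x**2

x = a / 4  # Fermat's maximiser
for eps in [0.1, 0.01, 0.001]:
    # the divided difference equals a/2 - 2x - eps, i.e. -eps at the maximum
    print(eps, (M(x + eps) - M(x)) / eps)
```

At the maximum the quotient shrinks like $-\epsilon$, so setting $\epsilon = 0$ at the end, as Fermat did, recovers exactly zero.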
Heron's proof of the shortest distance problem. Referring to Fig. 4.24, Heron created a new
point $B'$ which is the reflection of point $B$ through the horizontal line. Then, the solution is the
intersection of the line $AB'$ and the horizontal line. An elegant solution, no question. But it lacks
generality, while the calculus-based solution is universally applicable to almost any optimization
problem and it does not require the user to be a genius. With calculus, things become routine.
But wait, how did Heron know to create point B’? Inspiration, experience, trial and error,
dumb luck. That’s the art of mathematics, creating these beautiful little poems of thought, the
sonnets of pure reason.
Algebra vs geometry. This problem illustrates the differences between algebra and geometry.
Geometry is intuitive and visual. It appeals to the right side of the brain. With geometry,
beginning an argument requires strokes of genius (like drawing the point B’). On the other
You can also watch this youtube video.
Proof of reflection property of ellipse. The reflective property of an ellipse is simply this: a ray
of light starting at one focus will bounce off the ellipse and go through the other focus. Referring
to Fig. 4.25, we need to prove that a light ray starting from $F_1$ and coming to $P$ bounces off the
ellipse and gets reflected to $F_2$. For the proof, we draw a tangent to the ellipse at $P$. On this tangent
we consider an arbitrary point $Q$. Now we show that the sum of the distances from $Q$ to the foci is larger
than $2a$ (to be done shortly). Thus, $P$ is the point that minimizes the distance from a point on
the tangent to the two foci. From the result of Heron's shortest distance property, $P$ is the point
such that the two shaded angles are equal. Therefore, if a ray leaves $F_1$ and meets $P$, it will reflect
off the ellipse and pass through $F_2$.
Proof of the fact that the sum of the distances from $Q$ to the foci is larger than $2a$:
$$F_1Q + F_2Q = (F_1M + MQ) + F_2Q = F_1M + (MQ + F_2Q) > F_1M + F_2M = 2a$$
(for the sum of two sides of a triangle is greater than the remaining side).
at a certain moment in time if it is moving with a non-uniform velocity. We take the second
approach as there is change inherently in this problem. This was also how Newton developed
his fluxions. Note that Newton was not only a mathematician, but also a physicist.
Let's start simple with a car moving at a constant speed. If it has gone 30 kilometers in
1 hour, we say that its speed is 30 kilometers per hour. To measure this speed, we divide the
distance the car has traveled by the elapsed time. If $s$ measures the distance and $t$ measures time,
then
$$\text{uniform speed} = \frac{\text{distance}}{\text{time interval}} = \frac{s}{t} \tag{4.4.7}$$
The ratio $s/t$ is called a time rate of change of position, i.e., change of position per unit time.
Sometimes it is simply referred to as the rate of change of position. Note that $\Delta$ does not stand
for any number. $\Delta$ stands for 'the change in', that and nothing else. Thus, $\Delta s$ (read Delta $s$) is
used to indicate a change in $s$ and $\Delta t$ (read Delta $t$) is used to indicate a change in $t$.
But life would be boring if everything is moving at constant speed. Then, one would need no
differential calculus. Luckily, non-uniform motions are ubiquitous. Kepler discovered that the
planets moved non-uniformly around their ellipses with the Sun as focus, sometimes hesitating
far from the Sun, sometimes accelerating near the Sun. Likewise, Galileo’s projectiles moved at
ever-changing speeds on their parabolic arcs. They slowed down as they climbed, paused at the
top, then sped up as they fell back to earth. The same was true for pendulums. And a car which
travels 30 miles in an hour does not travel at a speed of 30 miles an hour. If its owner lives in a
big town, the car travels slowly while it is getting out of the town, and makes up for it by doing
50 on the arterial road in the country.
How could one quantify motions in which speed changes from moment to moment? That was
the task Newton set for himself. And to answer that question he invented calculus. We
are trying here to reproduce his work. We use Galileo's experiment of a ball rolling down an
inclined plane (Table 4.3, generated from $s = t^2$) and seek to find the ball's speed at any time instant; the
notation for that is $v(t)$, where $v$ is for velocity.
time [second] 0 1 2 3 4 5 6
distance [feet] 0 1 4 9 16 25 36
Let us first try to find out how fast the ball is going after one second. First of all, it is easy to
see that the ball continually goes faster and faster. In the first second it goes only 1 foot; in the
next second 3 feet; in the third second 5 feet, and so on. As the average speed during the first
second is 1 foot per second, the speed of the ball at 1 second must be larger than that. Similarly,
The modern definition of a function had not yet been created when Newton developed his calculus. The context
for Newton's calculus was a particle "flowing" or tracing out a curve in the x-y plane. The x and y coordinates of
the moving particle are fluents or flowing quantities. The horizontal and vertical velocities are the fluxions (which
we call derivatives) of x and y, respectively, associated with the flux of time.
the average speed during the second second is 3 feet per second, thus the speed of the ball at 1
second must be smaller than that. So, we know 1 < v.1/ < 3.
Can we do better? Yes, if we have a table similar to Table 4.3 but with many many more data
points not at whole seconds. For example, if we consider 0.9 s, 1 s and 1.1 s (Table 4.4), we can
get 1:9 < v.1/ < 2:1. And if we consider 0.99 s, 1 s and 1.01 s, we get 1:99 < v.1/ < 2:01.
And if we take thousandth of a second, we find the speed lies between 1.999 and 2.001. And if
we keep refining the time interval, we find that the only speed satisfying this is 2 feet per second.
Doing the same thing, we find the speed at whole seconds in Table 4.5. If $s = t^2$, then $v = 2t$.
Table 4.4: Galileo experiment of ball rolling down an inclined plane with time increments of 0.1 s.
Table 4.5: Galileo experiment of ball rolling down an inclined plane: instantaneous speed.
time [second] 0 1 2 3 4 5 6
speed [feet/s] 0 2 4 6 8 10 12
So the speed at any moment will not differ very much from the average speed during the
previous tenth of a second. It will differ even less from the average speed for the previous
thousandth of a second. In other words, if we take the average speed for smaller and smaller
lengths of time, we shall get nearer and nearer — as near as we like — to the true speed.
Therefore, the instantaneous speed i.e., the speed at a time instant is defined as the value that the
sequence of average speeds approaches when the time interval approaches zero. We show this
sequence of average speeds in Table 4.6 at the time instant t0 D 2s. Note that this table presents
not only the average speeds from the time instances t0 C h and t0 , but also from t0 h and t0 .
And both sequences converge to the same speed of 4, which is physically reasonable. Later on,
we will see that these correspond to the right and left limits.
But saying 'the value that the sequence of average speeds approaches when the time interval
approaches zero' is verbose; we have a symbol for that, discussed in Section 2.19. Yes, that
value (i.e., the instantaneous speed) is the limit of the average speeds when the time interval
approaches zero. Thus, the instantaneous speed is defined succinctly as
$$\text{instantaneous speed} = s'(t) \text{ or } \dot{s} = \lim_{\Delta t\to 0}\frac{\Delta s}{\Delta t} \tag{4.4.8}$$
where, we recall, the notation $\Delta s$ is used to indicate a change in $s$; herein it indicates the distance
traveled during $\Delta t$. And we use the symbol $s'(t)$ to denote this instantaneous speed and call it the
derivative of $s(t)$. Newton's notation for this derivative is $\dot{s}$, and it is still being used, especially
in physics. This instantaneous speed is the number that the speedometer of your car measures.
Table 4.6: Limit of average speeds when the time interval $h$ is shrunk to zero.

$h$ | $[s(t_0+h) - s(t_0)]/h$ | $[s(t_0) - s(t_0-h)]/h$
$10^{-1}$ | 4.100000000000 | 3.900000000000
$10^{-2}$ | 4.010000000000 | 3.990000000000
$10^{-3}$ | 4.001000000000 | 3.998999999999
$10^{-4}$ | 4.000100000008 | 3.999900000000
$10^{-5}$ | 4.000010000027 | 3.999990000025
$10^{-6}$ | 4.000001000648 | 3.999998999582
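A table of this kind is easy to regenerate; a short Python sketch with $s(t) = t^2$ and $t_0 = 2$:

```python
def s(t):
    return t**2

t0 = 2.0
for k in range(1, 7):
    h = 10**(-k)
    forward = (s(t0 + h) - s(t0)) / h    # average speed over [t0, t0 + h]
    backward = (s(t0) - s(t0 - h)) / h   # average speed over [t0 - h, t0]
    print(h, forward, backward)          # both columns close in on 4
```

For $s = t^2$ the forward quotient is exactly $2t_0 + h$ and the backward one $2t_0 - h$, which is why the two columns bracket the limit 4 symmetrically.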
In words, the derivative $f'(x_0)$ is the limit of the ratio of the change of $f$ (denoted by $\Delta f$) and
the change of $x$ (denoted by $\Delta x$) when $\Delta x$ approaches zero. The term $\Delta f/\Delta x$ is called a difference
quotient.
Instead of focusing on a specific value $x_0$, we can determine the derivative of $f(x)$ at an
arbitrary point $x$, which is denoted by $f'(x)$. For each $x$ we have a corresponding number $f'(x)$,
thus $f'(x)$ is a function in itself. Often we use $h$ in place of $\Delta x$ because it is shorter. Thus, the
derivative is also written as
$$f'(x) = \lim_{h\to 0}\frac{f(x+h) - f(x)}{h}$$
Notations for the derivative. There are many notations for the derivative: (1) Newton's notation
$\dot{f}$, (2) Leibniz's notation $dy/dx$, and (3) Lagrange's notation $f'(x)$.
Let's discuss Lagrange's notation first as it is easy. Note that given a function $y = f(x)$, its
derivative is also a function, which Lagrange called a derived function of $f(x)$. That's the origin
of the name 'derivative' we use today. Lagrange's notation is short, and thus very convenient.
How about Leibniz's notation? I emphasize that when Leibniz developed the concept of
derivative, the concept of limit was not available (it only came to life about 200 years after
Newton and Leibniz!). Leibniz was clear that the derivative was
obtained when $\Delta f$ and $\Delta x$ were very small, thus he used $df$ and $dx$, which he called
infinitesimals (infinitely small quantities) or differentials. An infinitesimal is a hazy thing. It
is supposed to be the tiniest number we can possibly imagine that isn't actually zero. In other
words, an infinitesimal is smaller than everything but greater than nothing (0). On the other
hand, the notation $dy/dx$ has these advantages: (i) it reminds us that the derivative is the rate of
change $\Delta y/\Delta x$ when $\Delta x \to 0$ (the $d$'s remind us of the limit process), (ii) it reveals the unit of
the derivative immediately as it is written as a ratio while $f'(x)$ is not. But the major advantage
is that we can use the differentials $dy$ and $dx$ separately and perform algebraic operations on
them just like ordinary numbers.
$$2.001^3 = 8.012006001$$
So, it is 8 plus a bit. That makes sense: a tiny change from 2 to 2.001 results in a tiny change
from 8 to 8.012006001 (a change of 0.012006001). What is interesting is that we can decompose
this change into a sum of three parts as follows
Now we can see why the change consists of three parts of different sizes. The small but dominant
part is $12\Delta x = 12(0.001) = 0.012$. The remaining parts $6(\Delta x)^2$ and $(\Delta x)^3$ account for the super-
small 0.000006 and the super-super-small 0.000000001. The more factors of $\Delta x$ there are in a part,
the smaller it is. That's why the parts are graded in size. Every additional multiplication by the
tiny factor $\Delta x$ makes a small part even smaller.
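The three graded parts can be checked in a couple of lines of Python (a sketch, with $x = 2$ and $\Delta x = 0.001$ as in the text):

```python
x, dx = 2.0, 0.001
change = (x + dx)**3 - x**3                      # ~ 0.012006001
parts = (3 * x**2 * dx, 3 * x * dx**2, dx**3)    # ~ 0.012, 0.000006, 0.000000001
print(change, parts, sum(parts))                 # parts sum to the full change
```

The binomial expansion guarantees the three parts sum exactly to the change, up to floating-point rounding.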
Now comes the power of Leibniz's notation $dx$ and $dy$. In Eq. (4.4.10), if we replace $\Delta x$ by
$dx$ and call $dy$ the change due to $dx$, and of course we neglect the super and super-super small
parts (i.e., $(dx)^2$ and $(dx)^3$), then we have a nice formula:
Differential operator. Yet another notation for the derivative of $y = f(x)$ at $x_0$ is:
$$\left.\frac{d}{dx}f(x)\right|_{x = x_0} = \lim_{h\to 0}\frac{f(x_0 + h) - f(x_0)}{h}$$
This notation adopts the so-called differential operator $\frac{d}{dx}$. What is an operator? Think of
the square root of a number. Feed in a number $x$, and the square root operator gives another
number $\sqrt{x}$. Similarly, feed in a function $f(x)$, and the operator $d/dx$ gives another function: the
derivative $f'(x)$. For the time being, just think of this operator as another notation that works
better aesthetically (not objectively) for functions whose expression is lengthy. Compare the
following two notations and decide for yourself:
$$\left(\frac{x^2 + 3x + 5}{\sqrt{x^3 - 3x + 1}}\right)', \qquad \frac{d}{dx}\left(\frac{x^2 + 3x + 5}{\sqrt{x^3 - 3x + 1}}\right)$$
Later on, we shall see that mathematicians consider this operator as a legitimate mathematical
object and study its behavior. That is, they remove the functions from the picture and think of
the differentiation process itself (differentiation is the process of finding the derivative).
Nonstandard analysis. The history of calculus is fraught with philosophical debates about the
meaning and logical validity of fluxions and infinitesimals. The standard way to resolve these
debates is to define the operations of calculus using the limit concept rather than infinitesimals.
And that resulted in the so-called real analysis. On the other hand, in 1960, Abraham Robinson
developed nonstandard analysis that reformulates the calculus using a logically rigorous notion
of infinitesimal numbers. This is beyond the scope of the book and my capacity, as I cannot
afford to learn another kind of number, known as the hyperreals (too many already!).
graph of the function $y = f(x)$ on the Cartesian $xy$ plane. We then consider a point $P$ with
coordinates $(x_0, f(x_0))$, cf. Fig. 4.26a. To have change, we consider another point $Q$ with
coordinates $(x_0 + h, f(x_0 + h))$. Then we have the average rate of change of the function at
$P$, that is $\Delta f/h$ where $\Delta f = f(x_0 + h) - f(x_0)$, which is the slope of the secant $PQ$. Now,
the process of considering smaller and smaller $h$, to get the derivative, amounts to considering
points $Q', Q''$ which are closer and closer to $P$. The secants $PQ$, $PQ'$, $PQ''$, ... approach the
line $PP'$ which touches the curve $y = f(x)$ at $P$. $PP'$ is the tangent to the curve at $P$. The
average rate of change $\Delta f/h$ approaches $df/dx$, the derivative of $f(x)$ at $x_0$.
When h approaches 0, the secants approach the tangent and their slopes approach the deriva-
tive. Thus, the derivative of the function at P is the slope of the tangent to f .x/ at the same
point. That’s the geometric meaning of the derivative.
Figure 4.26: The derivative of a function y D f .x/ is the slope of the tangent to the curve at x.
Now we derive the equation for this tangent. It is the line going through the point $P(x_0, y_0)$
with slope equal to $f'(x_0)$; thus the equation for the tangent is:
And this leads to the so-called linear approximation to a function, discussed later in Section 4.5.3.
The idea is to replace a curve–which is hard to work with–by its tangent (which is a line and
easier to work with).
We now understand the concept of the derivative of a function, algebraically and geometri-
cally. Now it is time to actually compute the derivatives of functions that we know: polyno-
mials, trigonometric, exponential etc.
The algebra was simple but there are some points worthy of further discussion. First, if we used
$h = 0$ in the difference quotient $(2x_0 h + h^2)/h$ we would get the form $0/0$, which is mathematically
meaningless. This is so because, to get the derivative, which is a rate of change, we should at least
allow $h$ to be different from zero (so that some change is happening). That's why the derivative
was not defined as the difference quotient with $h = 0$. Instead, it is defined as the limit of this
quotient when $h$ approaches zero. Think of the instantaneous speed (Table 4.6), and things become
clear.
As always, it is good to try to have a geometric interpretation. What we are looking for is
the change of $x^2$ if there is a tiny change in $x$. We think of $x^2$ immediately as the area of
a square of side $x$ (Fig. 4.27). Then, a tiny change $dx$ leads to a change in area of $2x\,dx$, because
the change $(dx)^2$ is so small that it can be neglected.
So, it's up to you whether to prefer the limit approach or the infinitesimal one. If you prefer rigor then
using limits is the way to go. But if you just do not care what infinitesimals mean
(whether they exist, for example), then use $dx$ and $dy$ freely like Leibniz, Euler, and many
seventeenth century mathematicians did. And the results are the same!
Figure 4.27: Geometric derivation of the derivative of $x^2$. The change $(dx)^2$ is small compared with
$2x\,dx$.
$$(x^3)' = 3x^2, \qquad (x^4)' = 4x^3$$
It is hard to resist writing this general formula for all positive integers $n$:
$$(x^n)' = nx^{n-1} \tag{4.4.14}$$
How about the derivative when $n$ is negative? Let's start with $f(x) = x^{-1} = 1/x$. Using
the definition, we can compute its derivative as
$$\left(\frac{1}{x}\right)' = \lim_{h\to 0}\frac{\dfrac{1}{x+h} - \dfrac{1}{x}}{h} = \lim_{h\to 0}\frac{-1}{x(x+h)} = -\frac{1}{x^2}$$
Figure 4.28: Geometric derivation of the derivative of $\sqrt{x}$.
functions are functions of sine/cosine. Let's start with a direct application of the definition of a
derivative for $\sin x$:
$$(\sin x)' = \lim_{h\to 0}\frac{\sin(x+h) - \sin x}{h} = \lim_{h\to 0}\frac{\sin x\cos h + \sin h\cos x - \sin x}{h} = \sin x\lim_{h\to 0}\frac{\cos h - 1}{h} + \cos x\lim_{h\to 0}\frac{\sin h}{h}$$
We need the following limits (proof of the first will be given shortly; for the second limit, check
Eq. (3.10.3)):
$$\lim_{h\to 0}\frac{\cos h - 1}{h} = 0, \qquad \lim_{h\to 0}\frac{\sin h}{h} = 1 \tag{4.4.17}$$
which leads to
$$(\sin x)' = \cos x \tag{4.4.18}$$
We can do the same thing to get the derivative of cosine. But we can also use trigonometric
identities and the chain rule (to be discussed next) to obtain the cosine derivative:
$$(\cos x)' = \frac{d}{dx}\sin\left(\frac{\pi}{2} - x\right) = -\cos\left(\frac{\pi}{2} - x\right) = -\sin x \tag{4.4.19}$$
A geometric derivation of the derivative of $\sin x$, shown in Fig. 4.29, is easier and does not
require the two limits in Eq. (4.4.17).
Using the quotient rule, we can compute the derivative of $\tan x$:
$$(\tan x)' = \left(\frac{\sin x}{\cos x}\right)' = \frac{\cos^2 x + \sin^2 x}{\cos^2 x} = \frac{1}{\cos^2 x} \tag{4.4.20}$$
Proof. Herein, we prove that the limit of $(\cos h - 1)/h$ equals zero. The proof is based on the limit of
$\sin h/h$ and a bit of algebra:
Figure 4.29: Geometric derivation of the derivative of the sine/cosine functions by considering a unit
circle. For a small variation in angle $dx$, we have $AC = dx$. Note that angles are in radians. If that is not
the case, $AC = (\pi\,dx/180)$, and the derivative of $\sin x$ would be $(\pi/180)\cos x$.
Among these rules the chain rule is the hardest (and is left to the next section); the other rules are quite
easy. The function $y = a$ is called a constant function, for $y = a$ for all $x$. Obviously we cannot
have change with this boring function, thus its derivative is zero.
If we follow Eq. (4.4.13) we can see that the derivative of $3x^2$ is $3(2x)$. A bit of thinking
will give us that the derivative of $af(x)$ is $af'(x)$, which can be verified using the definition of
derivative, Eq. (4.4.9). Again, following the steps in Eq. (4.4.13), the derivative of $x^3 + x^4$ is
$3x^2 + 4x^3$, and this leads to: the derivative of $f(x) + g(x)$ is $f'(x) + g'(x)$; the derivative of
the sum of two functions is the sum of the derivatives. This can be verified using the definition of
derivative, Eq. (4.4.9). Now, $af(x)$ is a function and $bg(x)$ is a function, thus the derivative of
$af(x) + bg(x)$ is $(af(x))' + (bg(x))'$, which is $af'(x) + bg'(x)$. And this is our first rule.
The sum rule says that the derivative of the sum of two func-
tions is the sum of the derivatives. Thus Leibniz believed that the
derivative of the product of two functions is the product of the
derivatives. It took him no time (with an easy example, say
$x^3(2x+3)$) to figure out that his guess was wrong, and eventually
he came up with the correct rule. The proof of the product rule is
given in the figure beside. The idea is to consider a rectangle of sides $f$ and $g$ with an area of
$fg$. (Thus implicitly this proof applies to positive functions only.) Now assume that we have
an infinitesimal change $dx$, which results in a change in $f$, denoted by $df = f'(x)dx$, and
a change in $g$, denoted by $dg = g'(x)dx$. We need to compute the change in the area of this
rectangle. It is $g\,df + f\,dg + df\,dg$, which is $g\,df + f\,dg$ as $(df)(dg)$ is minuscule. Thus
the change in the area, which is the change in $fg$, is $[gf'(x) + fg'(x)]dx$. That concludes our
geometric proof.
The proof of the reciprocal rule starts with the function $f(x)\cdot\frac{1}{f(x)} = 1$. Applying the
product rule to this constant function, we get
$$0 = f'(x)\frac{1}{f(x)} + f(x)\frac{d}{dx}\left(\frac{1}{f(x)}\right) \Longrightarrow \frac{d}{dx}\left(\frac{1}{f(x)}\right) = -\frac{f'(x)}{f^2(x)}$$
The quotient rule is obtained from the product rule and the reciprocal rule as shown in
Eq. (4.4.21):
$$\frac{d}{dx}\left(\frac{f}{g}\right) = \frac{d}{dx}\left(f\cdot\frac{1}{g}\right) = \frac{df}{dx}\frac{1}{g} + f\frac{d}{dx}\left(\frac{1}{g}\right) = \frac{df}{dx}\frac{1}{g} - f\,\frac{dg/dx}{g^2} = \frac{f'g - fg'}{g^2} \tag{4.4.21}$$
$$\frac{\Delta f}{\Delta x} = \frac{\Delta f}{\Delta y}\,\frac{\Delta y}{\Delta x} \Longrightarrow \frac{df}{dx} = \frac{df}{dy}\,\frac{dy}{dx} \tag{4.4.22}$$
Note that this rule covers many special cases. For example, taking $a = 1$, $b = -1$, we have $[f(x) -
g(x)]' = f'(x) - g'(x)$. Again, subtraction is secondary, for we can deal with it via addition. Furthermore, even
though our rule is stated for two functions only, it can be extended to any number of functions. For instance,
$[f(x) + g(x) + h(x)]' = f'(x) + g'(x) + h'(x)$; this is so because we can see $f(x) + g(x)$ as a new function
$w(x)$, and we can use the sum rule for the two functions $w(x)$ and $h(x)$.
which means that the derivative of f w.r.t. x is equal to the derivative of f w.r.t. y multiplied by the derivative of y w.r.t. x.
Thus, for f = sin x², its derivative is:

d/dx (sin x²) = (cos x²) · 2x
function    inverse     derivative of the inverse
sin x       arcsin x    1/√(1 − x²)
cos x       arccos x    −1/√(1 − x²)
tan x       arctan x    1/(1 + x²)
cot x       arccot x    −1/(1 + x²)
We present the proof of the derivative of arcsin x. Write y = sin x; then we have dy/dx = cos x. The inverse function is x = arcsin y. Using the rule for the derivative of an inverse function:

dx/dy = 1/(dy/dx) = 1/cos x = 1/√(1 − y²)          (4.4.24)

where in the final step we have converted from x to y, as dx/dy is a function of y. Now, considering the function y = arcsin x, we have dy/dx = 1/√(1 − x²). Proofs for the other inverse trigonometric functions follow similarly.
t    2^t    (2^(t+dt) − 2^t)/dt with dt = 1
1    2      2
2    4      4
3    8      8
4    16     16
We know that this cannot be true as dt is too big. But it gave us a hint that the derivative should be related to 2^t. So, we use the definition of derivative and do some algebra this time so that 2^t shows up:

(2^(t+dt) − 2^t)/dt = 2^t · (2^dt − 1)/dt

The factor (2^dt − 1)/dt does not depend on t, so we compute it for smaller and smaller dt:

dt          (2^dt − 1)/dt
0.1         0.7177346253629313
0.01        0.6955550056718884
0.001       0.6933874625807412
0.00001     0.6931495828199629
0.000001    0.6931474207938493

Table 4.10: (a^dt − 1)/dt with dt = 10⁻⁷, for a = 2, 3, 4, 6, 8.
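The first table is easy to reproduce. Here is a small sketch of my own (assuming the tabulated base is a = 2); the quotient drifts toward 0.6931… = ln 2:

```python
qs = []
for k in (1, 2, 3, 5, 6):          # dt = 0.1, 0.01, 0.001, 0.00001, 0.000001
    dt = 10.0 ** (-k)
    q = (2.0 ** dt - 1.0) / dt     # the factor multiplying 2^t in the derivative
    qs.append(q)
    print(dt, q)
```

The printed values match the table above, and shrink toward ln 2 = 0.69314718…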
there exists a number c within [2, 3] such that the derivative of c^t is c^t itself. It turns out that this function is f(t) = e^t, where e is the Euler number (its value is approximately 2.718) that we have found in the context of continuously compounding interest (Section 2.26). Indeed,
d(e^t)/dt = e^t · lim_{dt→0} (e^dt − 1)/dt = e^t          (4.4.26)

because e is defined as the number that satisfies the following limit:

lim_{dt→0} (e^dt − 1)/dt = 1          (4.4.27)
You can see where this definition of e comes from by looking at Eq. (2.22.5) (in the context that
Briggs calculated his famous logarithm tables). It can be shown that this definition is equivalent
to the definition of e as the rate of continuously compound interest:
e = lim_{dt→0} (1 + dt)^(1/dt) = lim_{n→∞} (1 + 1/n)^n          (4.4.28)
Proof. The proof of the derivative of a^t is simple. Since we know the derivative of e^x, we write a^x in terms of e^x. With a = e^(ln a) we have a^x = e^(x ln a), and the chain rule gives (a^x)′ = (ln a) e^(x ln a) = a^x ln a.
Are there other functions whose derivative is the function itself? No; the only functions with this property are those of the form y = ce^x. The function y = e^x is the only function which is its own derivative and its own integral. About this, there is a joke that goes like this.
And we use the definition of integral to compute f(x) for some values of x. The results are given in Table 4.11. We have used the mid-point rule with 20 000 sub-divisions to compute these integrals. Refer to Section 11.4.1 if you're not sure what the mid-point rule is.

Table 4.11: The area of y = 1/u from 1 to x: f(x) = ∫₁^x du/u.

x        2         4         8         16
f(x)     0.6931    1.3863    2.0794    2.7726
Δf(x)    0.6931    0.6931    0.6931    0.6931
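Table 4.11 can be reproduced in a few lines. This is a sketch of my own, not the note's listing, using the mid-point rule with 20 000 sub-divisions as stated in the text:

```python
def midpoint(f, a, b, n=20000):
    # mid-point rule: sample f at the midpoint of each sub-interval
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

vals = {x: midpoint(lambda u: 1.0 / u, 1.0, x) for x in (2, 4, 8, 16)}
for x, fx in vals.items():
    print(x, round(fx, 4))
```

Note that the computed values satisfy f(4) = 2 f(2) and f(8) = f(4) + f(2), the pattern discussed next.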
Anything special about this table? Ah yes. In the first row we have a geometric progression 2, 4, 8, 16, and in the second row we have an arithmetic progression (indicated by a constant Δf(x) in the last row). Which function has this property? A logarithm! You can check from the values in the table that

f(8) = f(4 · 2) = f(4) + f(2),    ∫₁² du/u = ∫₂⁴ du/u = ∫₄⁸ du/u
And all properties of the logarithm (such as ln ab = ln a + ln b) should follow naturally from this definition. With Fig. 4.31, we can prove ln ab = ln a + ln b as follows:
ln ab = ∫₁^(ab) du/u = ∫₁^b du/u + ∫_b^(ab) du/u = ln b + ln a          (4.4.33)

where use was made of Eq. (4.4.31) to convert ∫_b^(ab) du/u to ∫₁^a du/u = ln a.
We defer the discussion on the derivative of logarithm functions to Section 4.4.18. Fig. 4.32
presents the graph of the exponential and logarithm functions. Both are monotonically increasing
functions. This is so because their derivatives are always positive.
Figure 4.31
We are by now familiar with the concept of inverse operators/functions. So it is natural to consider inverse hyperbolic functions. For brevity, we consider only y = sinh⁻¹ x and y = cosh⁻¹ x. Let's compute the derivative of y = sinh⁻¹ x. We have x = sinh y, and thus dx/dy = cosh y = √(1 + x²). So,

d/dx (sinh⁻¹ x) = 1/(dx/dy) = 1/√(1 + x²)          (4.4.36)
If someone tells you that sinh⁻¹ x is actually a logarithm of x:

y = sinh⁻¹ x = ln(x + √(1 + x²))

do you believe it? Yes. Because the hyperbolic sine function is defined in terms of the exponential e^x, it is reasonable that its inverse is related to ln x, the inverse of e^x. The proof is simple:

x = sinh y = (e^y − e^(−y))/2  ⟹  (e^y)² − (2x)e^y − 1 = 0  ⟹  e^y = x + √(1 + x²)
Fractional derivative. Regarding the n-th order derivative of a function, f⁽ⁿ⁾(x), in a 1695 letter l'Hôpital asked Leibniz about the possibility that n could be something other than an integer, such as n = 1/2. Leibniz responded that "It will lead to a paradox, from which one day useful consequences will be drawn." Leibniz was correct, but it would be centuries before it became clear just how correct he was.
There are two ways to think of f .n/ .x/. The first is the one we all learn in basic calculus:
it’s the function that we obtain when we repeatedly differentiate f n times. The second is more
subtle: we interpret it as an operator whose action on the function f .x/ is determined by the
parameter n. What l’Hopital is asking is what the behavior is of this operator when n is not
an integer. The most natural way to answer this question is to interpret differentiation (and
integration) as transformations that take f and turn it into a new function.
That's all I know about fractional derivatives and fractional calculus. I have presented them here to illustrate the fact that if we break the rules (the order of differentiation is usually a positive integer) we can make new mathematics.
Figure 4.33: One problem on related rates: the balloon is flying up with a constant speed of 3 m/s. While it is doing so, the distance from it to an observer at A, denoted by z, is changing. The question is to find dz/dt when y = 50 m.
We need to relate z.t/ to y.t/, and then differentiate it with respect to time:
[z(t)]² = 100² + [y(t)]²  ⟹  2z (dz/dt) = 2y (dy/dt)  ⟹  dz/dt = 3y/z

When the balloon is 50 m above the ground, z = 50√5 m, so at that time dz/dt = 3(50)/(50√5) = 3√5/5 m/s. The problem is easy using the chain rule, and it is so because
time is present in the problem.
Now we come back to this problem: given x² + y² = 25, what is dy/dx? We can imagine a point with coordinates (x(t), y(t)) moving along the circle of radius 5 centered at the origin. Then, we just differentiate w.r.t. time:

[x(t)]² + [y(t)]² = 25  ⟹  2x (dx/dt) + 2y (dy/dt) = 0  ⟹  2x dx + 2y dy = 0  ⟹  dy/dx = −x/y

Is this result correct? If we write y = √(25 − x²) (for the upper part of the circle), then dy/dx = −x/y, the same result obtained using implicit differentiation. You can see that dt disappears.
y = log_a x  ⟹  x = a^y
The graphs of the function f .x/, the first derivative f 0 .x/ and the second derivative f 00 .x/ are
shown in Fig. 4.34. We can see that:
Figure 4.34: Graph of a fourth-order polynomial with its first and second derivatives. Drawn with Desmos
at https://www.desmos.com/calculator.
The function is decreasing within the interval in which f′(x) < 0. This makes sense noting that f′(x) is the rate of change of f(x): when it is negative the function must be decreasing.
At a point x₀ where f′(x₀) = 0, the function is neither increasing nor decreasing; it is stationary: the tangent is horizontal. There (x₀ = 1, 2, 3), the function attains either a local minimum or a local maximum. It is only a local minimum or maximum, for there are locations where the function attains larger/smaller values. The derivative at a point contains local information about a function around the point (which makes sense from the very definition of the derivative).
A stationary point x₀ is a local minimum when f″(x₀) > 0; the tangent is below the function, or the curve is concave up. Around that point the curve has the shape of a cup ∪.
A stationary point x₀ is a local maximum when f″(x₀) < 0; the tangent is above the function, or the curve is concave down. Around that point the curve has the shape of a cap ∩.
Differential calculus provides us a more efficient way consisting of two steps: (1) finding stationary points, where the first derivative of the function is zero, and (2) evaluating the second derivative at these points.
Snell's law of refraction. We use the derivative to derive Snell's law of refraction. This law is a formula describing the relationship between the angles of incidence and refraction when light (or other waves) passes through a boundary between two different isotropic media, such as water and air. Fermat derived this law in 1673 based on his principle of least time. Referring to Fig. 4.35, we compute the time required for the light to go from A to B:
t = t_AO + t_OB = √(a² + x²)/v₁ + √(b² + (d − x)²)/v₂

Calculating the first derivative of t and setting it to zero gives us (i.e., the light follows a path that minimizes the travel time; light is lazy)

x/(v₁√(a² + x²)) = (d − x)/(v₂√(b² + (d − x)²))  ⟹  sin α₁/v₁ = sin α₂/v₂          (4.5.1)

sin α₁/v₁ = sin α₂/v₂   or   n₁ sin α₁ = n₂ sin α₂          (4.5.2)
Figure 4.35: Snell's law of refraction: v₁ and v₂ are the velocities of light in media 1 and 2. As the velocity is lower in the second medium, the angle of refraction α₂ is smaller than the angle of incidence α₁.
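Fermat's least-time argument can be checked numerically. In this sketch of mine (the geometry a = b = 1, d = 2 and the speeds v₁ = 1, v₂ = 0.7 are made-up numbers) we minimize the travel time by ternary search, which is valid because t(x) is convex:

```python
import math

a, b, d = 1.0, 1.0, 2.0    # heights of A and B, horizontal separation (arbitrary)
v1, v2 = 1.0, 0.7          # light is slower in the second medium

def travel_time(x):
    return math.sqrt(a*a + x*x)/v1 + math.sqrt(b*b + (d - x)**2)/v2

lo, hi = 0.0, d
for _ in range(200):       # ternary search on the convex function t(x)
    m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2
sin_a1 = x / math.sqrt(a*a + x*x)
sin_a2 = (d - x) / math.sqrt(b*b + (d - x)**2)
print(sin_a1 / v1, sin_a2 / v2)   # equal at the minimum: Snell's law
```

At the minimizing x the two ratios coincide, and sin α₂ < sin α₁ since v₂ < v₁.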
Figure 4.36: Convex function (left) and non-convex function (right). The function f(x) is said to be convex if its graph in the interval [a, b] is below the secant line joining the two end points (a, f(a)) and (b, f(b)).
And we ask the question: does this nice inequality hold for 3 points? We need to check this. We use Eq. (4.5.4) to prove the above inequality: first, we split the 3 terms into 2 terms (so that the two-point inequality applies). Nothing stops us from generalizing this inequality to the case of n points:
f(∑ᵢ₌₁ⁿ tᵢxᵢ) ≤ ∑ᵢ₌₁ⁿ tᵢ f(xᵢ),    ∑ᵢ₌₁ⁿ tᵢ = 1          (4.5.5)
And this is known as the Jensen inequality, named after the Danish mathematician Johan Jensen
(1859 – 1925). Jensen was a successful engineer for the Copenhagen Telephone Company and
became head of the technical department in 1890. All his mathematics research was carried out
in his spare time. Of course if the function is concave, the inequality is reversed.
To avoid explicitly stating ∑tᵢ = 1, another form of the Jensen inequality is:

f( (∑ᵢ₌₁ⁿ aᵢxᵢ) / (∑ᵢ₌₁ⁿ aᵢ) )  ≤  (∑ᵢ₌₁ⁿ aᵢ f(xᵢ)) / (∑ᵢ₌₁ⁿ aᵢ)          (4.5.6)

where tᵢ = aᵢ/∑aᵢ, and aᵢ > 0 are weights.
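The inequality (4.5.5) is easy to check numerically. A minimal sketch of mine, with f(x) = x² (a convex function) and made-up points and weights:

```python
f = lambda x: x * x                  # a convex function
x = [-3.0, 0.5, 2.0, 7.0]            # arbitrary sample points
a = [1.0, 2.0, 3.0, 4.0]             # arbitrary positive weights
t = [ai / sum(a) for ai in a]        # normalized so that sum(t) == 1

lhs = f(sum(ti * xi for ti, xi in zip(t, x)))    # f of the weighted mean
rhs = sum(ti * f(xi) for ti, xi in zip(t, x))    # weighted mean of f
print(lhs, rhs)                      # lhs <= rhs, as Jensen promises
```

Swapping in any other convex f keeps lhs ≤ rhs; a concave f reverses it.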
Note that in the equation for y_CM the limits of summation were skipped for the sake of brevity. The nice thing is that the center of mass is always inside the polygon with vertices being the point masses (Section 7.8.7). This leads immediately to y_CM ≥ f(∑ mᵢxᵢ/m).
Figure 4.37: Geometric interpretation of the Jensen inequality for the case of more than 2 points.
Why are convex functions important? Convex functions are important because they have nice
properties. Given a convex function within an interval, if a local minimum (maximum) is found,
it is also the global minimum (maximum). And it leads to convex optimization. Convex opti-
mization is the problem of minimizing a convex function over convex constraints. It is a class
of optimization problems for which there are fast and robust optimization algorithms, both in
theory and in practice.
Now that you have the tool, let's solve this problem: given three positive real numbers a, b, c, prove that

a^a b^b c^c ≥ (abc)^((a+b+c)/3)

The art of using the Jensen inequality lies in choosing the right function. Once you know which f(x) to use, the problem becomes easy.
At x₀ we have Y(x₀) = f(x₀), but the approximation gets worse for x far away from x₀. This is obvious. We need to know the error of this approximation. Let's try with a function and play with the error; we can spot the pattern from this activity. We use y = √x and x₀ = 100 (nothing special about this point except that its square root is 10). We compute the square root of 100 + h for h = {1.0, 0.1, 0.01, 0.001} using Eq. (4.5.7), which yields √(100 + h) ≈ 10 + h/20, and the error associated with the approximation is e(h) := Y(100 + h) − √(100 + h).
Table 4.12: Linear approximations of √x at x₀ = 100 for various h.

h        Y = 10 + h/20    e(h)
1.0      10.05            1.244e-4
0.1      10.005           1.249e-6
0.01     10.0005          1.250e-8
0.001    10.00005         1.250e-10
The results are given in Table 4.12. Looking at this table we can see that e(h) ∼ h². That is, when h decreases by 1/10 the error decreases by 1/100. We can also get this error measure by squaring √(100 + h) ≈ 10 + h/20:

√(100 + h) ≈ 10 + h/20  ⟹  100 + h ≈ 100 + h + h²/400
Some common linear approximations near x = 0 are

e^x ≈ 1 + x,    sin x ≈ x          (4.5.8)
where the approximation for the sine function is used in solving the oscillation of a pendulum.
At the point (xₙ, f(xₙ)), we draw a line tangent to the curve and find xₙ₊₁ as the intersection of this line and the x-axis. Thus, xₙ₊₁ is determined using xₙ, f(xₙ) and f′(xₙ):

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)          (4.5.9)

xₙ₊₁ = xₙ − (xₙ² − a)/(2xₙ) = xₙ/2 + a/(2xₙ) = (1/2)(xₙ + a/xₙ)          (4.5.10)
Note that the final expression was used by the Babylonians thousands of years before Newton. The result of the calculation given in Table 4.13 demonstrates that Newton's method converges quickly. More precisely, it converges quadratically when close to the solution: in the last three iterations, the number of correct digits roughly doubles at each step.
Table 4.13: Solving x² = 2 with x₀ = 1.

n    xₙ             eₙ = √2 − xₙ
1    1.0            4.14e-01
2    1.5            -8.58e-02
3    1.416666666    -2.45e-03
4    1.414215686    -2.12e-06
5    1.414213562    -1.59e-12
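Table 4.13 is reproduced by the Babylonian iteration of Eq. (4.5.10); a sketch of my own:

```python
x = 1.0
for n in range(1, 6):
    x = 0.5 * (x + 2.0 / x)    # x_{n+1} = (x_n + a/x_n)/2 with a = 2
    print(n, x)
```

After a handful of updates x agrees with √2 to machine precision.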
Coding Newton's method. Let's solve the equation f(x) = cos x − x = 0 using a computer. That is, we compute f′(x) = −sin x − 1 explicitly and use Eq. (4.5.9) to get:

xₙ₊₁ = xₙ + (cos xₙ − xₙ)/(1 + sin xₙ)
That is too restrictive. We want to write a function that requires a function f .x/ and a tolerance.
That’s it. It will give us the solution for any input function. The idea is to use an approximation
for the derivative, see Section 11.2.1. The code is given in Listing B.6. In any field (pure or
applied maths, science or engineering), coding has become an essential skill. So, it is better to
learn coding when you’re young. That’s why I have inserted many codes throughout the note.
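The idea can be sketched in a few lines (my own version, not Listing B.6): Newton's method with a central-difference approximation of the derivative, so only f and a tolerance are needed. The step h and iteration cap are my own choices:

```python
import math

def newton(f, x0, tol=1e-10, h=1e-7, max_iter=100):
    # Newton's method with a numerical derivative, see Section 11.2.1
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = (f(x + h) - f(x - h)) / (2 * h)   # central difference f'(x)
        x = x - fx / dfx
    return x

root = newton(lambda x: math.cos(x) - x, 1.0)
print(root)   # approximately 0.739085
```

Any other f(x) can be passed in without writing its derivative by hand.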
Is Newton's method applicable only to f(x) = 0? No! It is used to solve systems of equations with billions of unknowns, see Section 7.4. Actually it is used every day by scientists and engineers. One big application is nonlinear finite element analysis to design machines, buildings, airplanes, you name it.
Exploring Newton's method. With a program implementing Newton's method we can play with it, just to see what happens. For example, in the problem of finding √2 by solving x² − 2 = 0, if we start with x₀ = −1, then the method gives us −√2. Not what we want! But it is also a root of x² − 2 = 0. Thus, the method depends on the initial guess (Fig. 4.40). To find a good x₀ for f(x) = 0 we can use a graphical method: we plot y = f(x), roughly locate the points where it intersects the x-axis, and use one of them for x₀.
Newton's method on the complex plane. We discussed complex numbers in Section 2.23, but we seldom use them. Let's see if we can use Newton's method to solve f(z) = 0, such as z⁴ − 1 = 0, where z is a complex number. Just assume that we can treat functions of a complex variable just as functions of a real variable; then
zₙ₊₁ = zₙ − f(zₙ)/f′(zₙ)          (4.5.11)
Figure: Newton's method iterations for two initial guesses, (a) x₀ = −3 and (b) x₀ = +3.
Let's solve the simplest complex equation, z² + 1 = 0; this equation has two solutions z = ±i. With the initial guess z₀ = 1 + 0.5i, Newton's method converges to z = i (Table 4.14). So, the method works for complex numbers too. Surprised? But happy. If z₀ = 1 − i, the method gives us the other solution z = −i (not shown here). If we pose this question we can discover something
Table 4.14: Solving z² + 1 = 0 with z₀ = 1 + 0.5i. See Listing B.7 for the code.

n    zₙ
1    0.1 + 0.45i
2    −0.185294 + 1.28382i
3    −0.0375831 + 1.02343i
4    −0.000874587 + 0.99961i
5    3.40826e−7 + 1.0i
6    −1.04591e−13 + 1.0i
interesting. The question is: if f(z) = 0 has multiple roots, then which initial guesses z₀ converge to which roots? A computer can help us visualize this. Assume that we know the exact roots and that they are stored in a vector z_exact = [z̄₁, z̄₂, …]. Corresponding to these exact roots are some colors, one color for each root. Then, for each point z₀ in a grid covering a region of the complex plane, we run Newton's method starting from z₀, find the root the iteration converges to, and color the point accordingly.
You can find the code in Listing B.7. Let's apply it to f(z) = z³ − 1 = 0. The roots of f(z) are: 1, −1/2 + i√3/2, and −1/2 − i√3/2. Three roots and thus three colors. Points in the green color converge to the root z̄₁ = 1, those in the purple color to the root z̄₂ = −1/2 + i√3/2, and ones in the red color converge to the remaining root. These three domains are separated by a boundary which is known as the Newton fractal. We see complex numbers very close together, converging to different solutions, arranged in an intricate pattern.
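The root-assignment step can be sketched as follows (my own minimal version, not Listing B.7; the iteration count and the divergence guard are arbitrary choices). Python's built-in complex type does all the work:

```python
def newton_root_index(z0, roots, steps=50):
    # run Newton's method for f(z) = z^3 - 1 from z0, then report which
    # exact root the iteration ended up closest to
    z = z0
    for _ in range(steps):
        if abs(z) < 1e-12:                  # f'(z) = 3z^2 vanishes; give up
            return -1
        z = z - (z**3 - 1) / (3 * z**2)     # one Newton step
    return min(range(len(roots)), key=lambda k: abs(z - roots[k]))

roots = [1, complex(-0.5, 3**0.5 / 2), complex(-0.5, -(3**0.5) / 2)]
print(newton_root_index(2 + 0j, roots))            # index 0: the root 1
print(newton_root_index(complex(-0.5, 1.0), roots))
```

Evaluating this index over a fine grid of starting points, one color per index, produces the Newton-fractal pictures.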
Figure: Basins of attraction of Newton's method for f(z) = z³ − 1 on the complex plane; each starting point is colored by the root it converges to.
Arthur Cayley (1821 – 1895) was a prolific British mathematician who worked mostly on
algebra. He helped found the modern British school of pure mathematics. In 1879 he published
a theorem for the basin of attraction for quadratic complex polynomials. Cayley also considered
complex cubics, but was unable to find an obvious division for the basins of attraction. It was
only later in the early 20th century that French mathematicians Pierre Joseph Louis Fatou (1878 –
1929) and Gaston Maurice Julia (1893 – 1978) began to understand the nature of complex cubic
polynomials. With computers, from the 1980s mathematicians were finally able to create pictures of the basins of attraction of complex cubic functions.
total distance is simply the sum of all these ds, or symbolically ∫₀ᵀ ds. But ds = v dt, so the distance is ∫₀ᵀ v dt. So, the distance is the area under the speed curve v(t). This is not unexpected (Fig. 4.42).
Figure 4.43: Geometric proof of Eq. (4.6.3). The key point is to think of the area problem dynamically.
Imagine sliding x to the right at a constant speed. You could even think of x as time; Newton often did.
Then the area of the crossed region changes continuously as x moves. Because that area depends on x,
it should be regarded as a function of x. Now considering a tiny change of x, denoted by dx. The area
is increased by a tall, thin rectangle of height f .x/ and infinitesimal width dx; this tiny rectangle has
an infinitesimal area f .x/dx. Thus, the rate at which the area accumulates is f .x/. And this leads to
Eq. (4.6.3).
Assume that the speed is v(t) = 8t − t²; what is the distance ∫₀ᵀ v(t)dt? We do not know how to evaluate this integral (not using the definition of integral, of course) but we know that it is a function s(T) such that ds/dT = v(T) = 8T − T², from Eq. (4.6.2). A function like s(T) is called an anti-derivative. We have just met something new here. Before, we were given a function, say y = x³, and asked (or required) to find its derivative: (x³)′ = 3x². Now, we're facing the inverse problem: (?)′ = 3x², that is, finding the function whose derivative is 3x². We know that function; it is x³. Thus, x³ is one anti-derivative of 3x². I used the words one anti-derivative for we have other anti-derivatives. In fact, there are infinitely many anti-derivatives of 3x²; they are x³ + C, where C is called a constant of integration. It is there because the derivative of a constant is zero. Graphically, x³ + C is just a vertical translation of the curve y = x³; the tangent to x³ + C at every point has the same slope as that of x³.
Coming back now to s(T), we can thus write:

∫₀ᵀ (8t − t²)dt = s(T)
ds/dT = 8T − T²
⟹  s(T) = ∫₀ᵀ (8t − t²)dt = 4T² − T³/3 + C          (4.6.4)

To find the integration constant C, we use the fact that s(0) = 0, so C = 0.
It is straightforward to use Eq. (4.6.4) for determining the distance traveled between t₁ and t₂ (we're really trying to compute the general definite integral ∫ₐᵇ f(x)dx here):

∫_{t₁}^{t₂} (8t − t²)dt = s(t₂) − s(t₁)
                        = (4t₂² − t₂³/3 + C) − (4t₁² − t₁³/3 + C)          (4.6.5)
                        = 4t₂² − t₂³/3 − 4t₁² + t₁³/3
There is nothing special about distance and speed, we have, for any function f .x/, the
following result
∫ₐᵇ f(x)dx = F(b) − F(a)   with   dF/dx = f(x)          (4.6.6)
which is known as the fundamental theorem of calculus, often abbreviated as FTC. So, to find a
definite integral we just need to find one anti-derivative of the integrand, evaluate it at two end
points and subtract them. It is this theorem that makes the problem of finding the area of a curve
a trivial exercise for modern high school students. Notice that the same problem once required
the genius of the likes of Archimedes.
While it is easy to understand Eq. (4.6.5) as the distance traveled between t1 and t2 must be
s.t2 / s.t1 /, it is hard to believe that a definite integral which is the sum of all tiny rectangles
eventually equals F .b/ F .a/; only the end points matter. But this can be seen if we use
Leibniz's differential notation:

∫ₐᵇ f(x)dx = ∫ₐᵇ (dF/dx)dx = ∫ₐᵇ dF
           = (F₂ − F₁) + (F₃ − F₂) + (F₄ − F₃) + ⋯ + (Fₙ − Fₙ₋₁)
           = Fₙ − F₁ = F(b) − F(a)
History note 4.4: Sir Isaac Newton (25 December 1642 – 20 March 1726/27)
Sir Isaac Newton was an English mathematician, physicist, astronomer,
and theologian (described in his own day as a "natural philosopher")
who is widely recognized as one of the most influential scientists of
all time and as a key figure in the scientific revolution. His book
Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, established classical mechanics. Newton also made seminal contributions to optics,
and shares credit with Gottfried Wilhelm Leibniz for developing the
infinitesimal calculus.
Newton was born prematurely in 1642 at his family’s home near the town of Grantham,
several months after the death of his father, an illiterate farmer. When Newton was three,
his mother wed a wealthy clergyman, who didn’t want a stepson. Newton’s mother went
to live with her new husband in another village, leaving behind her young son in the care
of his grandparents.
In 1705, Newton was knighted by Queen Anne. By that time, he’d become wealthy after
inheriting his mother’s property following her death in 1679 and also had published
two major works, 1687’s “Mathematical Principles of Natural Philosophy” (commonly
called the “Principia”) and 1704’s “Opticks.” After the celebrated scientist died at age
84 on March 20, 1727, he was buried in Westminster Abbey, the resting place of English
monarchs as well as such notable non-royals as Charles Darwin, Charles Dickens and
explorer David Livingstone.
Let’s see how many ways we can compute integrals (indefinite or definite) using paper and
pencil. The first way is to use the definition of integral as the limit of the sum of all the areas of
the small thin rectangles. The fundamental theorem of calculus saves us from going down this
difficult track. Therefore, the second way is to find an anti-derivative of the integrand function.
Anti-derivatives of many common functions have been determined and tabulated in tables. So,
we just do 'table look-up'. Clearly these tables cannot cover all functions, so we need a third way (or fourth). This section presents integration techniques for functions whose anti-derivatives are not present in tables.
∫ cos x² · 2x dx = sin x² + C,    ∫ √(1 + x²) · 2x dx = (2/3)(1 + x²)^(3/2) + C          (4.7.1)
And you can verify the above equation by differentiating the RHS and you get the integrands in
the LHS. If you look at these two integrals, you will recognize that they are of this form:
∫ₐᵇ f(g(x)) g′(x) dx = ∫_α^β f(u) du,    u = g(x)          (4.7.2)

So, we do a change of variable u = g(x), which leads to du = g′(x)dx; then the LHS of Eq. (4.7.2) becomes the RHS, i.e., ∫ₐᵇ f(g(x))g′(x)dx = ∫_α^β f(u)du. Of course, α = g(a) and β = g(b). Eq. (4.7.2) is called integration by substitution and it is based on the chain rule of
differentiation. Nothing new here, one fact of differentiation leads to another corresponding fact
of integration, because they are related.
Now we can understand Eq. (4.7.1). Let's consider the first integral; we do the substitution u = x², hence du = 2x dx, then:

∫ cos x² · 2x dx = ∫ cos u du = sin u + C = sin x² + C
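Both sides of Eq. (4.7.2) can be compared numerically for this example. A sketch of mine, using the mid-point rule on [0, 1]; the RHS is sin(x²) evaluated at the end points:

```python
import math

def midpoint(f, a, b, n=100000):
    # mid-point rule approximation of the definite integral of f on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = midpoint(lambda x: math.cos(x**2) * 2 * x, 0.0, 1.0)  # the hard-looking side
rhs = math.sin(1.0) - math.sin(0.0)                          # sin(x^2) at the ends
print(lhs, rhs)
```

The two numbers agree to many digits, as the substitution rule predicts.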
Proof. Proof of integration by substitution given in Eq. (4.7.2). We start with a composite function F(g(x)), as we want to use the chain rule. We compute the derivative of this function:

d/dx F(g(x)) = F′(g(x)) g′(x)          (4.7.3)

Now we integrate the two sides of the above equation to get:

∫ₐᵇ d/dx F(g(x)) dx = ∫ₐᵇ F′(g(x)) g′(x) dx

(if we have two identical functions, the areas under the two curves described by these two functions are the same; that's what the above equation means). Now, the FTC tells us that

∫ₐᵇ d/dx F(g(x)) dx = F(g(b)) − F(g(a))          (4.7.4)

Introducing two new numbers α = g(a) and β = g(b), then as a result of the FTC, where u = g(x), we have:

F(β) − F(α) = ∫_α^β F′(u) du          (4.7.5)

From Eqs. (4.7.4) and (4.7.5) we obtain

∫_α^β F′(u) du = ∫ₐᵇ d/dx F(g(x)) dx = ∫ₐᵇ F′(g(x)) g′(x) dx

To make f(x) appear, just introduce f(x) = F′(x); then the above equation becomes

∫ₐᵇ f(g(x)) g′(x) dx = ∫_α^β f(u) du
So, the substitution rule guides us to replace a hard integral by a simpler one. The main challenge is to find an appropriate substitution. For certain integrals, e.g. ∫√(1 − x²)dx, the new variable is clear: x = sin θ, to just get rid of the square root. I present such trigonometric substitutions in Section 4.7.6. For most cases, finding a good substitution is a matter in which practice and ingenuity, in contrast to systematic methods, come into their own.
Let's compute the following integral

I = ∫₀^π (2x³ − 3πx²)/(1 + sin x)² dx
which is the 2015 Cambridge STEP 2. Sixth Term Examination Papers in Mathematics, often
referred to as STEP, are university admissions tests for undergraduate Mathematics courses
developed by the University of Cambridge. STEP papers are typically taken post-interview, as
part of a conditional offer of an undergraduate place. There are also a number of candidates
who sit STEP papers as a challenge. The papers are designed to test ability to answer questions
similar in style to undergraduate Mathematics.
What change of variable should be used? After many unsuccessful attempts, we find that u = π − x looks promising: with x = π − u (and sin x = sin u), the numerator 2x³ − 3πx² becomes 2(π − u)³ − 3π(π − u)², and since (2u³ − 3πu²) + (2(π − u)³ − 3π(π − u)²) = −π³, we get

I = −π³ ∫₀^π du/(1 + sin u)² − I

And what is the red term? It is I, so we have an equation for I, and solving it gives us a new form for I:

I = −(π³/2) ∫₀^π du/(1 + sin u)²
We stop here, as the new integral seems solvable. What we want to say here is that this integral was designed so that the substitution u = π − x works. If we slightly modify the integral as follows
I₁ = ∫₀^{π/2} (2x³ − 3πx²)/(1 + sin x)² dx,   I₂ = ∫₀^π (2x³ − 3x²)/(1 + sin x)² dx,   I₃ = ∫₀^π (3x³ − 3πx²)/(1 + sin x)² dx
our substitution would not work! That's why it was just a trick, even though a favorite one of examiners. How do we integrate these integrals then? We fall back on the very definition of integral as the sum of many, many thin rectangles, but we use the computer to do the boring sum. This is called numerical integration (see Section 11.4 if you're interested; that's how scientists and engineers do integrals).
So, instead of calculating the integral ∫u′(x)v(x)dx, we compute ∫v′(x)u(x)dx, which should be simpler. Basically we transfer the derivative from u to v. The hard thing is to recognize which should be u(x) and which v(x). Some examples are provided to see how to use this technique.
Example 1 is to determine ∫ln x dx. Start with x ln x and differentiate it (then ln x will show up), and we're done:

(x ln x)′ = ln x + 1  ⟹  ∫ln x dx = x ln x − x + C
Example 2 is ∫x cos x dx. Start with x sin x:

(x sin x)′ = sin x + x cos x  ⟹  ∫x cos x dx = x sin x − ∫sin x dx
Example 3 is ∫x²e^x dx, which requires applying the technique two times. First, recognize that the derivative of e^x is itself, so we consider the function x²e^x: its derivative will produce x²e^x (the integrand) and another term with a lower power of x (which is good). So,

(x²e^x)′ = 2xe^x + x²e^x  ⟹  ∫x²e^x dx = x²e^x − 2∫xe^x dx

Now, we have an easier problem to solve: the integral of xe^x. Repeating the same step, we write

(xe^x)′ = e^x + xe^x  ⟹  ∫xe^x dx = xe^x − ∫e^x dx = xe^x − e^x

Should we stop here and move to other integrals? If we stop here and someone comes to ask us to compute the integral ∫x⁵e^x dx or even ∫x²⁰e^x dx, we would struggle to solve these integrals. There is a structure behind Eq. (4.7.7), which we will come back to in Section 4.7.4.
We restrict the discussion in this section to nonnegative p and q. The next section is devoted to negative exponents, and you can see it is about integration of tangents and secants. The integrals in the last three rows are very important; they aren't mere exercises on integrals. They are the basis of Fourier series (Section 4.18).
Before computing these integrals, we would like to calculate the last one without actually calculating it. We know immediately that ∫₀^{2π} sin² 8x dx = π. Why? This is because:

∫₀^{2π} sin² 8x dx + ∫₀^{2π} cos² 8x dx = ∫₀^{2π} dx = 2π          (4.7.8)

and ∫₀^{2π} sin² 8x dx = ∫₀^{2π} cos² 8x dx because of symmetry.
Example 1. Let's compute ∫sin²x cos³x dx. As sin²x + cos²x = 1, we can always replace an even power of cosine (cos²x) in terms of sin²x. We are left with cos x dx, which is fortunately d(sin x). So,

∫sin²x cos³x dx = ∫(sin²x − sin⁴x) d(sin x) = (1/3)sin³x − (1/5)sin⁵x + C          (4.7.10)
Example 2. How about ∫sin⁵x dx? The same idea: sin⁵x = sin⁴x sin x, and sin x dx = −d(cos x):

∫sin⁵x dx = ∫(−1 + 2cos²x − cos⁴x) d(cos x) = −cos x + (2/3)cos³x − (1/5)cos⁵x + C

These two examples cover the integral ∫sin^p x cos^q x dx where p, q ≥ 0 and either p or q is odd.
Example 3 is the integral ∫cos⁴x dx. We can do integration by parts or use trigonometric identities to lower the power:

cos⁴x = ((1 + cos 2x)/2)²
      = (1 + 2cos 2x + cos² 2x)/4
      = 1/4 + (cos 2x)/2 + (1 + cos 4x)/8

Thus, the integral is given by

∫cos⁴x dx = (3/8)x + (1/4)sin 2x + (1/32)sin 4x + C
Example 4 is ∫sin²x cos²x dx. Again, we use trigonometric identities to lower the powers; this time:

sin²x cos²x = ((1 − cos 2x)/2) · ((1 + cos 2x)/2)
            = (1 − cos² 2x)/4          (4.7.11)
            = (1 − cos 4x)/8   ⟹   ∫sin²x cos²x dx = x/8 − (sin 4x)/32 + C
Example 5 is ∫sin 8x cos 6x dx. The best way is to use the product identity, see Eq. (3.7.6), to replace the product with a sum of two sines:

sin 8x cos 6x = (1/2)(sin 14x + sin 2x)
⟹  ∫₀^{2π} sin 8x cos 6x dx = (1/2) ∫₀^{2π} (sin 14x + sin 2x) dx = 0

The result is zero because of the nature of the sine function, see Fig. 4.44.

Figure 4.44: ∫₀^{2π} sin nx dx = 0 for any positive integer n. This is because the positive area is equal to the negative area.
Example 6 is ∫sin 8x sin 6x dx. We follow the strategy of the previous example:

sin 8x sin 6x = (1/2)(cos 2x − cos 14x)
⟹  ∫₀^{2π} sin 8x sin 6x dx = (1/2) ∫₀^{2π} (cos 2x − cos 14x) dx = 0
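These orthogonality results are easy to confirm numerically; a sketch of mine using the mid-point rule:

```python
import math

def midpoint(f, a, b, n=200000):
    # mid-point rule approximation of the definite integral of f on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

two_pi = 2 * math.pi
i1 = midpoint(lambda x: math.sin(8*x) * math.cos(6*x), 0.0, two_pi)  # Example 5
i2 = midpoint(lambda x: math.sin(8*x) * math.sin(6*x), 0.0, two_pi)  # Example 6
i3 = midpoint(lambda x: math.sin(8*x)**2, 0.0, two_pi)               # Eq. (4.7.8)
print(i1, i2, i3)
```

The first two come out as zero (to round-off) and the third as π, exactly the facts behind Fourier series.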
The remaining case, where both p and q are even, is handled as in Examples 3 and 4.
Now, we show that Eq. (4.7.12) can lead to an infinite product for π. Using the above but with integration limits 0 and π/2, we have (the term [−(1/n)sin^(n−1)x cos x]₀^{π/2} = 0):

∫₀^{π/2} sinⁿx dx = ((n − 1)/n) ∫₀^{π/2} sin^(n−2)x dx          (4.7.13)
Now, consider two cases: n is even and n is odd. For the former case (n = 2m), repeated application of Eq. (4.7.13) gives us

∫₀^{π/2} sin^{2m}x dx = ((2m − 1)/2m) ∫₀^{π/2} sin^{2m−2}x dx
                      = ((2m − 1)/2m)((2m − 3)/(2m − 2)) ∫₀^{π/2} sin^{2m−4}x dx          (4.7.14)
                      = ((2m − 1)/2m)((2m − 3)/(2m − 2))((2m − 5)/(2m − 4)) ⋯ (3/4)(1/2)(π/2)
And for odd powers n = 2m + 1, we have

∫₀^{π/2} sin^{2m+1}x dx = (2m/(2m + 1))((2m − 2)/(2m − 1)) ⋯ (4/5)(2/3)          (4.7.15)
From Eqs. (4.7.14) and (4.7.15), we obtain, by dividing the former equation by the latter,

π/2 = (2·2·4·4 ⋯ 2m·2m) / (1·3·3·5 ⋯ (2m − 1)(2m + 1))          (4.7.16)

where we used the fact that ∫₀^{π/2} sin^{2m}x dx / ∫₀^{π/2} sin^{2m+1}x dx → 1 when m approaches infinity (a proof is due in what follows).
To find out the numbers 3/4 and 1/2 in the last equality, just use m = 3. The number π/2 is nothing but ∫₀^{π/2} dx, which appears when the exponent has been reduced to 0.
Proof. We prove that
$$\lim_{m\to\infty} \frac{\int_0^{\pi/2} \sin^{2m} x\,dx}{\int_0^{\pi/2} \sin^{2m+1} x\,dx} = 1 \qquad (4.7.17)$$
As $0 \le x \le \pi/2$, we have $\sin^{2m+1} x \le \sin^{2m} x \le \sin^{2m-1} x$, and thus
$$1 \le \frac{\int_0^{\pi/2} \sin^{2m} x\,dx}{\int_0^{\pi/2} \sin^{2m+1} x\,dx} \le \frac{\int_0^{\pi/2} \sin^{2m-1} x\,dx}{\int_0^{\pi/2} \sin^{2m+1} x\,dx}$$
Now, let's denote the ratio on the RHS of the above equation by A; we want to compute it. First, Eq. (4.7.12) is used to get
$$\int_0^{\pi/2} \sin^{2m+1} x\,dx = \frac{2m}{2m+1}\int_0^{\pi/2} \sin^{2m-1} x\,dx$$
Thus, A is given by
$$A = \frac{\int_0^{\pi/2} \sin^{2m-1} x\,dx}{\int_0^{\pi/2} \sin^{2m+1} x\,dx} = \frac{2m+1}{2m} = 1 + \frac{1}{2m}$$
As $m\to\infty$, A approaches 1, and squeezing between 1 and A gives Eq. (4.7.17).
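We can watch the Wallis product (4.7.16) converge numerically. A minimal Python sketch (our own check, not part of the text's toolchain); the convergence is quite slow:

```python
import math

def wallis(m):
    # partial product 2*2/(1*3) * 4*4/(3*5) * ... * (2m)(2m)/((2m-1)(2m+1))
    p = 1.0
    for k in range(1, m + 1):
        p *= (2*k) * (2*k) / ((2*k - 1) * (2*k + 1))
    return p

for m in (10, 100, 10000):
    print(m, wallis(m))
# the partial products creep up towards pi/2 = 1.5707963...
```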
We can see the structure in the RHS: $x^2 \to 2x \to 2$; that is the result of the repeated differentiation of $x^2$. The alternating signs $+/-/+$ are due to the minus sign appearing in each integration by parts.
With this understanding, without actually doing the integration, we know that
$$\int x^4 e^x\,dx = x^4 e^x - 4x^3 e^x + 12x^2 e^x - 24x e^x + 24 e^x$$
Now we move to the integral $\int_0^\infty x^4 e^{-x}\,dx$. First, replacing $e^x$ by $e^{-x}$ we have the following results:
$$\int x^2 e^{-x}\,dx = -x^2 e^{-x} - 2x e^{-x} - 2e^{-x}$$
$$\int x^4 e^{-x}\,dx = -x^4 e^{-x} - 4x^3 e^{-x} - 12x^2 e^{-x} - 24x e^{-x} - 24 e^{-x}$$
Focus now on the second integral, but now with special integration limits:
$$\int_0^\infty x^4 e^{-x}\,dx = \left[-x^4 e^{-x} - 4x^3 e^{-x} - 12x^2 e^{-x} - 24x e^{-x}\right]_0^\infty - 4!\,e^{-x}\Big|_0^\infty \qquad (4.7.19)$$
All the terms in the brackets are zeroes and $-e^{-x}\big|_0^\infty = 1$, thus we obtain a very interesting result:
$$\int_0^\infty x^4 e^{-x}\,dx = 4! \qquad (4.7.20)$$
This is a stunning result. Can you see why? We will come back to it later in Section 4.19.1.
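Equation (4.7.20) is easy to corroborate numerically: truncating the infinite upper limit at, say, $x = 60$ loses essentially nothing because $x^4 e^{-x}$ is astronomically small there. A hedged Python sketch:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# truncate the infinite upper limit at 60: x^4 e^{-x} is ~1e-19 there
val = simpson(lambda x: x**4 * math.exp(-x), 0, 60)
print(val)  # close to 4! = 24
```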
$$= \frac{\tan^2 x}{2} + \ln|\cos x| \quad (\text{substitution } u = \tan x)$$
Phu Nguyen, Monash University © Draft version
Chapter 4. Calculus 334
Now, we see the way and can do the general $\int \tan^m x\,dx$:
$$\begin{aligned}
\int \tan^m x\,dx &= \int \tan^2 x \,\tan^{m-2} x\,dx\\
&= \int (\sec^2 x - 1)\tan^{m-2} x\,dx\\
&= \int \sec^2 x \,\tan^{m-2} x\,dx - \int \tan^{m-2} x\,dx \qquad (4.7.24)\\
&= \frac{\tan^{m-1} x}{m-1} - \int \tan^{m-2} x\,dx
\end{aligned}$$
That is, we have a formula for $\int \tan^m x\,dx$ that requires $\int \tan^{m-2} x\,dx$, which in turn involves $\int \tan^{m-4} x\,dx$ and so on. Depending on m being odd or even, this leads us to either $\int \tan x\,dx$ or $\int \tan^2 x\,dx$, which we know how to integrate.
Ok. Let's move to the secant function. How are we going to compute the integral $\int \sec x\,dx$? Replacing $\sec x = 1/\cos x$ would not help. Thinking of its friend $\tan x$, we do this:
$$\int \sec x\,dx = \int \frac{\sec x}{1}\,dx = \int \frac{\sec x}{\sec^2 x - \tan^2 x}\,dx$$
We succeeded in bringing in the two friends. Now the next step is just algebra:
$$\int \sec x\,dx = \int \frac{\sec x}{(\sec x - \tan x)(\sec x + \tan x)}\,dx = \frac{1}{2}\int \left(\frac{1}{\sec x - \tan x} + \frac{1}{\sec x + \tan x}\right)dx$$
Now, we switch to $\sin x$ and $\cos x$, as we see something familiar when doing so:
$$\begin{aligned}
\int \sec x\,dx &= \frac{1}{2}\int \left(\frac{\cos x}{1 - \sin x} + \frac{\cos x}{1 + \sin x}\right)dx\\
&= \frac{1}{2}\int \left(-\frac{d(1-\sin x)}{1-\sin x} + \frac{d(1+\sin x)}{1+\sin x}\right)\\
&= \frac{1}{2}\left(\ln(1+\sin x) - \ln(1-\sin x)\right) = \frac{1}{2}\ln\frac{1+\sin x}{1-\sin x}
\end{aligned}$$
We can stop here. However, we can further simplify the result, noting that
$$\frac{1+\sin x}{1-\sin x} = \frac{\sin^2 x/2 + \cos^2 x/2 + 2\sin x/2 \cos x/2}{\sin^2 x/2 + \cos^2 x/2 - 2\sin x/2 \cos x/2} = \left(\frac{\sin x/2 + \cos x/2}{\cos x/2 - \sin x/2}\right)^2 = \left(\frac{1+\sin x}{\cos x}\right)^2 = (\sec x + \tan x)^2$$
And finally, the integral of $\sec x$ is:
$$\int \sec x\,dx = \ln|\sec x + \tan x| + C$$
This one was hard, but $\int \sec^2 x\,dx$ is easy: it is $\int (1 + \tan^2 x)\,dx$. How about $\int \sec^3 x\,dx$? We write
$$\int \sec^3 x\,dx = \int (1 + \tan^2 x)\sec x\,dx = \int \sec x\,dx + \int \tan^2 x \sec x\,dx$$
For the integral $\int \tan^2 x \sec x\,dx$, we use integration by parts with $u = \sec x$ and $v = \tan x$. Finally,
$$\int \sec^3 x\,dx = 0.5\left(\sec x \tan x + \ln|\sec x + \tan x|\right) + C$$
Why bother with this integral? Because it is the answer to the problem of calculating the length of a segment of a parabola (Section 4.9.1).
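Antiderivatives are easy to check: differentiate them. A minimal Python sketch (our own check) using central finite differences to confirm that the two results above differentiate back to $\sec x$ and $\sec^3 x$; the sample points and tolerances are our choices:

```python
import math

def sec(x): return 1.0 / math.cos(x)
def tan(x): return math.tan(x)

def F(x):   # claimed antiderivative of sec x
    return math.log(abs(sec(x) + tan(x)))

def G(x):   # claimed antiderivative of sec^3 x
    return 0.5 * (sec(x) * tan(x) + math.log(abs(sec(x) + tan(x))))

def deriv(f, x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.3, 0.9, 1.2, -0.7):
    print(abs(deriv(F, x) - sec(x)) < 1e-6,
          abs(deriv(G, x) - sec(x)**3) < 1e-5)
```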
Now comes another trigonometric substitution, using the tangent function. The following integral
$$\int_0^\infty \frac{dx}{16 + x^2} \qquad (4.7.26)$$
with
$$x = 4\tan\theta \Longrightarrow \begin{cases} dx = 4\sec^2\theta\,d\theta\\ 16 + x^2 = 16(1 + \tan^2\theta) = 16\sec^2\theta\\ 0 \le \theta \le \pi/2 \end{cases} \qquad (4.7.27)$$
is simplified to
$$\int_0^\infty \frac{dx}{16 + x^2} = \int_0^{\pi/2} \frac{4\sec^2\theta}{16\sec^2\theta}\,d\theta = \frac{\theta}{4}\Big|_0^{\pi/2} = \frac{\pi}{8}$$
Sometimes we see an integral which is a disguised form of $\int_0^\infty \frac{dx}{16+x^2}$, for example:
$$\int \frac{dx}{5x^2 - 10x + 25}$$
In this case, we just need to complete the square, i.e., write $5x^2 - 10x + 25 = 5\left[(x-1)^2 + 4\right]$. Then, the substitution $x - 1 = 2\tan\theta$ is used. So, the steps are:
$$\int \frac{dx}{5x^2 - 10x + 25} = \frac{1}{5}\int \frac{dx}{x^2 - 2x + 5} = \frac{1}{5}\int \frac{d(x-1)}{(x-1)^2 + 4} = \frac{1}{5}\int \frac{du}{u^2 + 4} = \frac{1}{10}\tan^{-1}\frac{x-1}{2} + C$$
The second step is completing the square; the third step is to rewrite it in the familiar form of Eq. (4.7.26).
We present the final trigonometric substitution, which lets us evaluate integrals of any rational function of $\sin x$ and $\cos x$. For example,
$$\int \frac{dx}{3 - 5\sin x}, \qquad \int \frac{dx}{1 + \sin x - \cos x}$$
The substitution (discovered by the German mathematician Karl Weierstrass (1815–1897)) is
$$u = \tan\frac{x}{2}, \qquad dx = \frac{2\,du}{1 + u^2}$$
This is because, as given in Eq. (3.7.8), we can express $\sin x$ and $\cos x$ in terms of u:
$$\sin x = \frac{2u}{1 + u^2}, \qquad \cos x = \frac{1 - u^2}{1 + u^2}$$
Then, $\int \frac{dx}{3 - 5\sin x}$ becomes:
$$\int \frac{dx}{3 - 5\sin x} = 2\int \frac{du}{3u^2 - 10u + 3} \qquad (4.7.28)$$
This integral is of the form $P(u)/Q(u)$, and we discuss how to integrate it in the next section.
It is always a good idea to pause what we're doing and summarize the achievements. We provide such a summary in Table 4.15.
Table 4.15: Summary of trigonometric substitutions.
we can always transform $\frac{4x+16}{x^3-4x}$ into a sum of simpler fractions (called partial fractions):
$$\frac{4x+16}{x^3-4x} = \frac{4x+16}{x(x-2)(x+2)} = \frac{A}{x} + \frac{B}{x-2} + \frac{C}{x+2}$$
where each partial fraction is of the form $p(x)/q(x)$ with the degree of the numerator one less than that of the denominator. This is called the method of Partial Fraction Decomposition. To find the constants A, B, C, we just convert the RHS into the form of the LHS:
$$\frac{A}{x} + \frac{B}{x-2} + \frac{C}{x+2} = \frac{(A+B+C)x^2 + 2(B-C)x - 4A}{x^3 - 4x}$$
As this fraction is equal to $\frac{4x+16}{x^3-4x}$, the two numerators must be the same, thus we have $(A+B+C)x^2 + 2(B-C)x - 4A \equiv 4x + 16$, which leads to
$$A + B + C = 0,\quad 2(B-C) = 4,\quad -4A = 16 \Longrightarrow A = -4,\ B = 3,\ C = 1$$
Now $\int \frac{4x+16}{x^3-4x}\,dx$ can be computed with ease:
$$\int \frac{4x+16}{x^3-4x}\,dx = \int \left(\frac{3}{x-2} + \frac{1}{x+2} - \frac{4}{x}\right)dx \qquad (4.7.29)$$
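A decomposition like this is easy to mis-sign, so it is worth spot-checking at a few sample points away from the poles. A minimal Python sketch (our own check):

```python
# check the decomposition (4x+16)/(x^3-4x) = -4/x + 3/(x-2) + 1/(x+2)
# at a few sample points away from the poles 0, 2, -2
for x in (1.0, 3.0, -1.5, 10.0):
    lhs = (4*x + 16) / (x**3 - 4*x)
    rhs = -4/x + 3/(x - 2) + 1/(x + 2)
    print(abs(lhs - rhs) < 1e-12)
```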
With this new tool we can finish the integral $\int \frac{dx}{3-5\sin x}$, see Eq. (4.7.28):
$$\begin{aligned}
\int \frac{dx}{3-5\sin x} &= 2\int \frac{du}{3u^2 - 10u + 3} = \frac{1}{4}\int \left(\frac{du}{u-3} - \frac{du}{u-1/3}\right)\\
&= \frac{1}{4}\left(\ln|u-3| - \ln|u-1/3|\right) + C\\
&= \frac{1}{4}\left(\ln|\tan x/2 - 3| - \ln|\tan x/2 - 1/3|\right) + C
\end{aligned}$$
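Before reaching for a CAS, we can also check this antiderivative by finite differences: its derivative should reproduce $1/(3 - 5\sin x)$. A Python sketch (our own check; the sample points are chosen away from the singularities where $\sin x = 3/5$):

```python
import math

def F(x):
    # the antiderivative obtained above (constant of integration dropped)
    u = math.tan(x / 2)
    return 0.25 * (math.log(abs(u - 3)) - math.log(abs(u - 1/3)))

def integrand(x):
    return 1.0 / (3 - 5 * math.sin(x))

def deriv(f, x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.1, -0.5, 3.5):
    print(abs(deriv(F, x) - integrand(x)) < 1e-7)
```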
And we can check our result using a CAS (Fig. 4.45).
Figure 4.45: Symbolic evaluation of integrals using the library SymPy in Julia. SymPy is actually a Python library, so we can also use it directly, not necessarily via Julia.
If you were attentive you would observe that the two integrals that we have just considered are of the form $P(x)/Q(x)$ where the degree of the denominator is larger than that of the numerator. These particular rationals are called proper rationals. And we need to pay attention to them only, as the other case can be re-written in this form, for example:
$$\frac{2x^2 - 5x - 1}{x - 3} = 2x + 1 + \frac{2}{x - 3}$$
You should have also noticed that in the considered rationals, $Q(x)$ has distinct roots i.e., it can be factored as $Q(x) = (a_1 x + b_1)(a_2 x + b_2)\cdots(a_n x + b_n)$ where n is the degree of $Q(x)$. In this case, the partial fraction decomposition is:
$$\frac{P(x)}{Q(x)} = \frac{P(x)}{(a_1 x + b_1)(a_2 x + b_2)\cdots(a_n x + b_n)} = \frac{A_1}{a_1 x + b_1} + \frac{A_2}{a_2 x + b_2} + \cdots + \frac{A_n}{a_n x + b_n} \qquad (4.7.30)$$
And it's always possible to find the $A_i$ when $P(x)$ is a polynomial of degree less than n, which is the case for proper rationals.
Now we consider the case where $Q(x)$ has repeated roots, for example the following integral
$$\int \frac{x^2 + 15}{(x+3)^2 (x^2+3)}\,dx$$
where $Q(x) = 0$ has a repeated root of $-3$. The decomposition in this case is a little bit special:
$$\frac{x^2 + 15}{(x+3)^2 (x^2+3)} = \frac{Ax + B}{x^2 + 3} + \frac{C}{x+3} + \frac{D}{(x+3)^2}$$
where the last two terms follow this rule: for $(ax+b)^n$ we need a partial fraction for each exponent from 1 up to n. To understand this decomposition, consider the rational $\frac{1}{(x+3)^2}$. With a new variable $u = x + 3$, a proper rational with denominator $u^2$ has a numerator of degree at most one, say $Bu + C$, and
$$\frac{Bu + C}{u^2} = \frac{B}{u} + \frac{C}{u^2}$$
which is exactly one term for each exponent from 1 up to 2.
To wrap up this section, let's compute the following integral
$$I = \int \frac{dx}{1 + x^4}$$
We first need to factor $1 + x^4$:
$$\frac{1}{1+x^4} = \frac{1}{1 + x^4 + 2x^2 - 2x^2} = \frac{1}{(1+x^2)^2 - (\sqrt{2}x)^2} = \frac{1}{(x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)}$$
The next step is to do a partial fraction decomposition for this, and we're done. See Fig. 4.45 for the result, done by a CAS.
4.7.8 Tricks
This section presents a few tricks to compute some interesting integrals. If you're fascinated by difficult integrals, you can consult YouTube channels by searching for 'MIT integration bee' and the like. Or you can read the book Inside Interesting Integrals by Paul Nahin [39].
The first example is the following integral
$$\int_{-1}^{1} \frac{\cos x}{1 + e^{1/x}}\,dx$$
One example from the MIT integration bee: $\int \sqrt{x\sqrt{x\sqrt{x\cdots}}}\,dx$.
You should ask why the integration limits are $-1$ and $1$, not $1$ and $2$. Note that $\int_{-a}^{a} f(x)\,dx = 0$ if $f(x)$ is an odd function. So, we decompose the integrand into an even and an odd part:
$$f(x) = \underbrace{\frac{f(x) + f(-x)}{2}}_{\text{even}} + \underbrace{\frac{f(x) - f(-x)}{2}}_{\text{odd}}$$
For $f(x) = \cos x/(1 + e^{1/x})$, the even part works out to be $\cos x/2$, because
$$\frac{\cos x}{1 + e^{1/x}} + \frac{\cos x}{1 + e^{-1/x}} = \cos x\left(\frac{1}{1 + e^{1/x}} + \frac{e^{1/x}}{e^{1/x} + 1}\right) = \cos x$$
And we do not care about the odd part, because its integral is zero anyway. So,
$$\int_{-1}^{1} \frac{\cos x}{1 + e^{1/x}}\,dx = \int_{-1}^{1} \frac{\cos x}{2}\,dx = \int_0^1 \cos x\,dx = \sin(1)$$
Feynman's trick. This trick is based on the Leibniz rule, which basically says:
$$I(t) = \int_a^b f(x,t)\,dx \Longrightarrow \frac{dI(t)}{dt} = \int_a^b \frac{\partial f(x,t)}{\partial t}\,dx \qquad (4.7.31)$$
We refer to Section 7.8.7 for a discussion leading to this rule. The symbol $\frac{\partial f(x,t)}{\partial t}$ is the partial derivative of $f(x,t)$ with respect to t while holding x constant.
As the first application of this rule, we can generate new integrals from old ones. For example, we know the following integral (integrals with a limit going to infinity are called improper integrals; they are discussed in Section 4.8)
$$I = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{1}{a}\tan^{-1}\frac{x}{a}\Big|_0^\infty = \frac{\pi}{2a} \qquad (4.7.32)$$
And by considering a as a variable playing the role of t in Eq. (4.7.31), we can write:
$$I(a) = \int_0^\infty \frac{dx}{x^2 + a^2} \Longrightarrow \frac{dI}{da} = -\int_0^\infty \frac{2a}{(x^2 + a^2)^2}\,dx \qquad (4.7.33)$$
And from Eq. (4.7.32)--which says $I = \pi/2a$--we can easily get $dI/da = -\pi/2a^2$, and thus we get the following new integral:
$$-\int_0^\infty \frac{2a}{(x^2 + a^2)^2}\,dx = -\frac{\pi}{2a^2} \Longrightarrow \int_0^\infty \frac{dx}{(x^2 + a^2)^2} = \frac{\pi}{4a^3}$$
Of course, we can go further by computing $d^2 I/da^2$ and get new integrals. But we stop here to do something else.
Suppose we need to evaluate this integral (of which the antiderivative cannot be found in elementary functions)
$$\int_0^1 \frac{x^2 - 1}{\ln x}\,dx \qquad (4.7.34)$$
$$\frac{dI}{I} = -\frac{b}{2}\,db \Longrightarrow \ln|I| = -\frac{b^2}{4} + D \Longrightarrow I = Ce^{-b^2/4} \quad (C = e^D) \qquad (4.7.39)$$
Again, we need to find C; with $b = 0$, we have $I(0) = C = \int_0^\infty e^{-x^2}\,dx = \sqrt{\pi}/2$. So, we get a nice result for our original integral and many more corresponding to different values of b:
$$I(5) = \int_0^\infty e^{-x^2}\cos(5x)\,dx = \frac{\sqrt{\pi}}{2}e^{-25/4}$$
$$I(2) = \int_0^\infty e^{-x^2}\cos(2x)\,dx = \frac{\sqrt{\pi}}{2e} \qquad (4.7.40)$$
How to compute the integral $\int_0^\infty e^{-x^2}\,dx$ is another story, see Section 5.11.4.
Dirichlet integral. Another interesting integral is $\int_0^\infty \frac{\sin x}{x}\,dx$. Let us introduce a parameter b in such a way that differentiating the integrand will give us a simpler integral:
$$I(b) = \int_0^\infty \frac{\sin bx}{x}\,dx \Longrightarrow \frac{dI}{db} = \int_0^\infty \cos(bx)\,dx = \frac{\sin bx}{b}\Big|_0^\infty \qquad (4.7.41)$$
Unfortunately, we got an improper integral that does not converge. So, we need to find another way. We need a function whose derivative brings down an x. That can be $e^{bx}$. But due to the limit of infinity, we have to use $e^{-bx}$ with $b \ge 0$. Thus, we consider the following integral
$$I(b) = \int_0^\infty \frac{\sin x}{x}\,e^{-bx}\,dx \qquad (4.7.42)$$
from which $\int_0^\infty \frac{\sin x}{x}\,dx = I(0)$. Let's differentiate this integral w.r.t. b:
$$\frac{dI}{db} = -\int_0^\infty \sin x\, e^{-bx}\,dx = -A \qquad (4.7.43)$$
And our task is to derive an expression for $\mathrm{Si}(x)$. We have just shown that we cannot compute the integral directly; the Feynman technique only works for definite integrals in which the limits are numbers, not variables. But we have another way, from Newton: we can replace $\sin t$ by its Taylor series, then we can integrate $\sin t/t$ easily:
$$\sin t = t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \cdots \Longrightarrow \frac{\sin t}{t} = 1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \cdots$$
Thus, we can write
$$\int_0^x \frac{\sin t}{t}\,dt = \int_0^x \left(1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \cdots\right)dt = \left[t - \frac{t^3}{3\cdot 3!} + \frac{t^5}{5\cdot 5!} - \cdots\right]_0^x = \frac{x^1}{1\cdot 1!} - \frac{x^3}{3\cdot 3!} + \frac{x^5}{5\cdot 5!} - \cdots$$
Thus, the $\mathrm{Si}(x)$ function is written as:
$$\mathrm{Si}(x) := \int_0^x \frac{\sin t}{t}\,dt = \sum_{i=0}^{\infty} (-1)^i \frac{x^{2i+1}}{(2i+1)(2i+1)!} \qquad (4.7.49)$$
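The series (4.7.49) is easy to evaluate and can be cross-checked against direct numerical integration of $\sin t/t$. A Python sketch (our own check; term count and quadrature resolution are our choices):

```python
import math

def Si_series(x, terms=40):
    # Si(x) = sum_{i>=0} (-1)^i x^(2i+1) / ((2i+1)(2i+1)!)
    s = 0.0
    for i in range(terms):
        n = 2 * i + 1
        s += (-1)**i * x**n / (n * math.factorial(n))
    return s

def Si_quad(x, n=2000):
    # composite Simpson rule on sin(t)/t (the integrand -> 1 as t -> 0)
    f = lambda t: math.sin(t) / t if t != 0 else 1.0
    h = x / n
    s = f(0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

for x in (0.5, 2.0, 10.0):
    print(abs(Si_series(x) - Si_quad(x)) < 1e-9)
```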
With this we can plot this function, see Fig. 4.46, where the graph of $\sin x/x$ is also given.

Figure 4.46: Graph of $\sin x/x$ (a) and graph of $\mathrm{Si}(x) = \int_0^x \frac{\sin t}{t}\,dt$ (b).
We do not know how to evaluate this integral, but we know how to compute $I(b) = \int_1^b \frac{dx}{x^2}$. It is $I(b) = 1 - 1/b$. And by considering different values for b (larger than 1 of course), we have a sequence of integrals, see Fig. 4.47. Let's denote this by $(I_1, I_2, \ldots, I_n)$. It's obvious that this sequence converges to 1 when n approaches infinity. In other words, the area under the curve $y = 1/x^2$ from 1 to infinity is one. Therefore, we define
$$I = \int_1^\infty \frac{dx}{x^2} := \lim_{b\to\infty}\int_1^b \frac{dx}{x^2}$$

Figure 4.47
In the same manner, if the lower integration limit is minus infinity, we have this definition:
$$I = \int_{-\infty}^{b} f(x)\,dx := \lim_{a\to-\infty}\int_a^b f(x)\,dx$$
The next improper integral to be discussed is certainly the one with both integration limits being infinite, like the following
$$I = \int_{-\infty}^{\infty} \frac{dx}{1 + x^2}$$
The strategy is to split this into two improper integrals of the form we already know how to compute:
$$I = \int_{-\infty}^{a} \frac{dx}{1 + x^2} + \int_{a}^{\infty} \frac{dx}{1 + x^2}$$
To ease the computation we will select $a = 0$, just because 0 is an easy number to work with. The above split does not, however, depend on a (as we will show shortly). With the substitution $x = \tan\theta$, see Table 4.15, we can compute the two integrals and thus I as
$$I = [\theta]_{-\pi/2}^{0} + [\theta]_{0}^{\pi/2} = \frac{\pi}{2} + \frac{\pi}{2} = \pi$$
Now, to show that any value of a is fine, we just use a, and compute I as:
$$I = [\arctan x]_{-\infty}^{a} + [\arctan x]_{a}^{\infty} = \arctan a + \frac{\pi}{2} + \frac{\pi}{2} - \arctan a = \pi$$
And what we have done for this particular integral applies to $\int_{-\infty}^{\infty} f(x)\,dx$ in general.
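The independence of the split point a is visible numerically too. A tiny Python sketch (our own check) using the arctan antiderivatives:

```python
import math

# I = [arctan x]_{-inf}^{a} + [arctan x]_{a}^{+inf} for any split point a
def split_value(a):
    left = math.atan(a) - (-math.pi / 2)
    right = math.pi / 2 - math.atan(a)
    return left + right

for a in (0.0, 1.0, -7.3, 100.0):
    print(abs(split_value(a) - math.pi) < 1e-12)
```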
Perimeter of a circle. We only have separate functions for each of the four quarters of a circle, so we compute the length of the first quarter. We write the circle's equation as $y = \sqrt{1 - x^2}$, then a direct application of Eq. (4.9.1) gives
$$\int_0^1 \sqrt{1 + \frac{x^2}{1 - x^2}}\,dx = \int_0^1 \frac{dx}{\sqrt{1 - x^2}} = \int_0^{\pi/2} d\theta = \frac{\pi}{2} \quad (x = \sin\theta)$$
Doing the same for a quarter of an ellipse, however, leads to an integral that we cannot compute unless we use numerical integration (Section 11.4). Be careful that the integrand is infinite at $x = 1$, and thus not all numerical integration methods can be used. There is no simple exact closed formula for the perimeter of an ellipse! We will come back to this problem of the determination of the ellipse perimeter shortly.
Arc length of parametric curves. For a parametric curve given by $(x(t), y(t))$, its length is given by
$$\int_{t_1}^{t_2} \sqrt{(dx/dt)^2 + (dy/dt)^2}\,dt \qquad (4.9.2)$$
We consider again the perimeter of 1/4 of an ellipse. Using Eq. (4.9.2), we do
$$\begin{cases} x = \cos t\\ y = \sqrt{2}\sin t \end{cases} \Longrightarrow \begin{cases} dx/dt = -\sin t\\ dy/dt = \sqrt{2}\cos t \end{cases} \Longrightarrow \int_0^{\pi/2} \sqrt{\sin^2 t + 2\cos^2 t}\,dt$$
Of course we cannot find an antiderivative for this integral. Compared to Section 4.9.1, this one is better, as the integrand does not blow up at the integration limits. Using any numerical quadrature method, we can evaluate this integral easily. This is how an applied mathematician or engineer or scientist would approach the problem: if they cannot find the answer exactly, they adopt numerical methods. But pure mathematicians do not do that. They invent new mathematics to deal with integrals that cannot be expressed using existing (elementary) functions. Recall that they invented negative integers so that we can solve $5 + x = 2$, and introduced $i^2 = -1$, and so on.
Elliptic integrals. Consider an ellipse given by $x^2/a^2 + y^2/b^2 = 1$, with $a > b$; its length is given by
$$C = 4\int_0^{\pi/2} \sqrt{a^2\cos^2 t + b^2\sin^2 t}\,dt$$
With $k = \sqrt{a^2 - b^2}/a$, we can re-write the above integral as
$$C = 4aE(k), \qquad E(k) = \int_0^{\pi/2} \sqrt{1 - k^2\sin^2 t}\,dt$$
The integral $E(k)$ is known as an elliptic integral. The name comes from the integration of the arc length of an ellipse. As there are other kinds of elliptic integrals, the precise name is the elliptic integral of the second kind. What is then the elliptic integral of the first kind? It is defined as
$$K(k) = \int_0^{\pi/2} \frac{dt}{\sqrt{1 - k^2\sin^2 t}}$$
It is remarkable that this integral appears again and again in physics. We will see it in the calculation of the period of a simple pendulum (Section 8.8.6).
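As a check, we can evaluate E(k) by simple quadrature and compare the perimeter 4aE(k) with a brute-force polygonal approximation of the ellipse. A Python sketch (our own check; a = 2, b = 1 is an arbitrary choice):

```python
import math

def E(k, n=20000):
    # elliptic integral of the second kind via the midpoint rule
    h = (math.pi / 2) / n
    return sum(math.sqrt(1 - (k * math.sin((i + 0.5) * h))**2) for i in range(n)) * h

def perimeter_polygon(a, b, n=200000):
    # chord-length approximation of x = a cos t, y = b sin t
    pts = [(a * math.cos(2 * math.pi * i / n), b * math.sin(2 * math.pi * i / n))
           for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

a, b = 2.0, 1.0
k = math.sqrt(a**2 - b**2) / a
print(4 * a * E(k), perimeter_polygon(a, b))  # both ~9.6884
```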
Next, we compute the volume of a cone with radius r and height h. We approximate the cone as a stack of thin slices of thickness dy parallel to the base, see Fig. 4.50. The volume of each slice is $\pi R^2\,dy$, where $R = r(1 - y/h)$ is the radius of the slice at height y, and thus the volume of the cone is:
$$\int_0^h \pi R^2\,dy = \pi r^2\int_0^h \left(1 - \frac{y}{h}\right)^2 dy = \frac{\pi r^2 h}{3}$$
In the same manner, we compute the volume of a sphere as follows (Fig. 4.51). We consider a slice of thickness dy of which the volume is $\pi r^2\,dy$, with $r^2 = R^2 - y^2$ where R is the sphere's radius and y is the distance from the origin to the slice. Thus, the total volume is:
$$2\int_0^R \pi(R^2 - y^2)\,dy = \frac{4\pi R^3}{3} \qquad (4.9.3)$$
Figure 4.52: Solid of revolution: revolving the red curve y = f(x) around an axis (the red axis). Generated using the GeoGebra software.
Area of the surface of a solid of revolution. Using the idea of calculus, to find the area of a surface of revolution, we divide the surface into many tiny pieces whose areas can be computed; then we sum these areas up as the number of pieces approaches infinity. We divide the surface into many thin bands, shown in Fig. 4.53. As a band is thin, it is effectively a truncated cone.

Figure 4.53: A surface of revolution obtained by revolving a curve y = f(x) around the x-axis 360°. To find the surface area, we divide the surface into many tiny bands (orange).

To find the area of a truncated cone, we start from a cone of radius r and slant length s. Its lateral area is $\pi r s$, obtained by flattening the cone out to get a fraction of a circle, see Fig. 4.54. The area of a truncated cone is therefore $\pi r_1 s_1 - \pi r_2 s_2$. It can be seen that this area also equals $2\pi \bar{r} s$ where $\bar{r} = 0.5(r_1 + r_2)$ and s is the slant width of the band.
So, the total surface is the sum of all these areas, and when s is made super small, we get an integral:
$$\text{area of surf. of revolution (x-axis)} = \int_a^b 2\pi y\,ds = \int_a^b 2\pi f(x)\sqrt{1 + [f'(x)]^2}\,dx \qquad (4.9.5)$$

Figure 4.54: Surface area of a truncated cone is $2\pi \bar{r} s$ where $\bar{r}$ is the average radius and s is the slant width.

Figure 4.55: Solid of revolution: revolving y = f(x) around an axis (red axis). Generated using the GeoGebra software.
Now, we assume that $a > b$ (we have to assume this or $a < b$ to use the appropriate trigonometric substitution), and thus we use the following substitution:
$$u = \frac{a}{\sqrt{a^2 - b^2}}\sin\alpha$$
which leads to
$$A = \frac{4\pi b a^2}{\sqrt{a^2 - b^2}}\int_0^{\arcsin\left(\sqrt{a^2 - b^2}/a\right)} \frac{1 + \cos 2\alpha}{2}\,d\alpha = 2\pi\left[b^2 + \frac{a^2 b}{\sqrt{a^2 - b^2}}\arcsin\frac{\sqrt{a^2 - b^2}}{a}\right]$$
Ok, if we now apply this result to concrete cases $a = \ldots$ and $b = \ldots$, then it's fine. But we would miss interesting things. Let's consider the case $a < b$ to see what happens.
Now, consider the case $a < b$; then we write A in a slightly different form:
$$A = 2\cdot 2\pi b\int_0^1 \sqrt{a^2 + (b^2 - a^2)u^2}\,du$$
Now comes a nice observation. The area of an ellipsoid does not care about the relative magnitude of a and b. But then why do we have two different expressions for the same thing? This is because we do not allow square roots of negative numbers. But hey, we know imaginary numbers. Why not use them to get a unified expression? Let's do it.
First, define the following:
$$\sin\theta = \frac{\sqrt{a^2 - b^2}}{a}, \qquad \cos\theta = \frac{b}{a}$$
Then, we can write:
$$\frac{\sqrt{b^2 - a^2}}{a} = \frac{\sqrt{(a^2 - b^2)(-1)}}{a} = \frac{\sqrt{(a^2 - b^2)i^2}}{a} = \frac{i\sqrt{a^2 - b^2}}{a} = i\sin\theta$$
With this, we have two expressions for A:
$$A = 2\pi\left(b^2 + \frac{a^2 b}{\sqrt{a^2 - b^2}}\,\theta\right) \quad (a > b) \qquad (4.9.6)$$
$$A = 2\pi\left(b^2 + \frac{a^2 b}{i\sqrt{a^2 - b^2}}\ln\left(\cos\theta + i\sin\theta\right)\right)$$
And of course the second terms in the above should be the same:
$$\ln\left(\cos\theta + i\sin\theta\right) = i\theta$$
And this is obviously related to Euler’s identity e i D cos C i sin . This logarithmic version
of Euler’s identity was discovered by the English mathematician Roger Cotes (1682 – 1716),
who was known for working closely with Isaac Newton by proofreading the second edition
of the Principia. He was the first Plumian Professor at Cambridge University from 1707 until
his early death. About Cotes’ death, Newton once said “If he had lived, we might have known
something”. The above analysis was inspired by [36].
Gravitational pull of a thin rod. Consider a thin rod of length L whose mass M is uniformly distributed along the length. Beyond one end of the rod, along its axis, a small mass m is placed at a distance a (see Fig. 4.56). Calculate the gravitational pull of the rod on m.

Figure 4.56

Let's consider a small segment dx; its mass is $dm = (M/L)dx$. This small mass dm pulls m with a force dF given by Newton's gravitational theory. The pull of the entire rod is then simply the sum of all these small dF:
$$dF = \frac{GMm}{L}\frac{dx}{(L + a - x)^2} \Longrightarrow F = \frac{GMm}{L}\int_0^L \frac{dx}{(L + a - x)^2} = \frac{GMm}{a(L + a)} \qquad (4.9.8)$$
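Equation (4.9.8) can be verified by numerical quadrature. A Python sketch (our own check, with arbitrary sample values for G, M, m, L and a):

```python
import math

def pull_numeric(G, M, m, L, a, n=200000):
    # midpoint rule for (G*M*m/L) * integral_0^L dx / (L + a - x)^2
    h = L / n
    s = sum(1.0 / (L + a - (i + 0.5) * h)**2 for i in range(n))
    return G * M * m / L * s * h

def pull_closed(G, M, m, L, a):
    # the closed form G*M*m / (a*(L + a)) of Eq. (4.9.8)
    return G * M * m / (a * (L + a))

G, M, m, L, a = 6.674e-11, 5.0, 1.0, 2.0, 0.5
print(pull_numeric(G, M, m, L, a), pull_closed(G, M, m, L, a))
```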
Gravitational pull of a thin rod 2. Consider a thin rod of length 2L whose mass M is uniformly distributed along the length. Above the center of the rod, at a distance h, a small mass m is placed (see Fig. 4.57). Calculate the gravitational pull of the rod on m.

Figure 4.57

Due to symmetry, the horizontal component of the gravitational pull is zero; only the vertical component counts. This component for a small segment dx can be computed, and the total force is just a sum of all these tiny forces, which is of course an integral:
$$dF = \frac{GMm}{2L}\frac{dx}{h^2 + x^2}\cos\theta \Longrightarrow F = \frac{GMm}{L}\,h\int_0^L \frac{dx}{(h^2 + x^2)^{3/2}} \qquad (4.9.9)$$
with $\cos\theta = h/\sqrt{h^2 + x^2}$. To evaluate the integral $\int_0^L \frac{dx}{(h^2 + x^2)^{3/2}}$, we use the trigonometric substitution:
$$x = h\tan\theta \Longrightarrow \begin{cases} dx = h\sec^2\theta\,d\theta\\ h^2 + x^2 = h^2\sec^2\theta\\ 0 \le \theta \le \tan^{-1}(L/h) \end{cases} \qquad (4.9.10)$$
With it, we obtain
$$F = \frac{GMm}{h\sqrt{h^2 + L^2}}$$
Gravitational pull of a thin disk. Consider a thin disk of radius a whose mass M is uniformly distributed. Above the center of the disk, at a distance h, a small mass m is placed (see Fig. 4.58). Calculate the gravitational pull of this disk on m.

Figure 4.58

We consider a ring located at distance r from the center, with thickness dr. We first compute the gravitational pull of this ring on m; then we integrate over all rings to get the total pull of the whole disk on m. Again, due to symmetry, only a downward pull exists. Considering a small dm on this ring, we have
$$dF = \frac{G\,dm\,m}{R^2}\cos\theta \Longrightarrow F_{\text{ring}} = \int dF = \frac{Gm\cos\theta}{R^2}\,m_{\text{ring}} \qquad (4.9.11)$$
This is because R and $\cos\theta$ are constant along the ring. The mass of the ring is $m_{\text{ring}} = tM\left[\pi(r + dr)^2 - \pi r^2\right] = 2\pi r\,tM\,dr$ (ignoring the $(dr)^2$ term). So, the pull of the ring on m is
$$F_{\text{ring}} = \frac{Gm\cos\theta}{R^2}\,2\pi r\,tM\,dr = 2\pi GMmt\,\frac{h\,r\,dr}{(h^2 + r^2)^{3/2}} \qquad (4.9.12)$$
where we used $\cos\theta = h/\sqrt{h^2 + r^2}$ and $R^2 = h^2 + r^2$.
When the interval is [0, 1] and the intervals are equally spaced (i.e., $\Delta x_i = 1/n$), the above becomes
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} f\!\left(\frac{i}{n}\right) = \int_0^1 f(x)\,dx \qquad (4.9.14)$$
Now that we know all the techniques to compute definite integrals, we can use integrals to compute limits of sums. For example, compute the following limit:
$$\lim_{n\to\infty}\sum_{i=1}^{n}\frac{n}{n^2 + i^2}$$
The plan is to rewrite the LHS in the form of a Riemann sum; then Eq. (4.9.14) allows us to equate it to an integral, and we compute that integral. So, we write $\frac{n}{n^2 + i^2} = \frac{1}{n}\cdot\frac{1}{1 + (i/n)^2}$. Thus,
$$\lim_{n\to\infty}\sum_{i=1}^{n}\frac{n}{n^2 + i^2} = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1 + (i/n)^2} = \int_0^1 \frac{dx}{1 + x^2} = \frac{\pi}{4}$$
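Watching the Riemann sums approach $\pi/4 \approx 0.785398$ is a one-liner. A Python sketch (our own check):

```python
import math

def riemann(n):
    # the Riemann sum sum_{i=1}^{n} n/(n^2 + i^2)
    return sum(n / (n**2 + i**2) for i in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, riemann(n))
# the sums approach pi/4 = 0.7853981...
```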
Example 4.3
Evaluate the following limit:
$$\lim_{x\to 0^+}\sum_{k=1}^{\infty}\frac{2x}{1 + k^2 x^2}$$
4.10 Limits
The calculus was invented in the 17th century, and it is based on limits--a concept not made precise until the 19th century. That's why I have intentionally presented the calculus without precisely defining what a limit is. This is mainly to show how mathematics actually evolved. But we cannot avoid working with limits, so we finally discuss this concept in this section.
Let's consider the quadratic function $y = f(x) = x^2$, and we want to define the derivative of this function at $x_0$. We consider a change h in x with a corresponding change in the function $\Delta f = (x_0 + h)^2 - x_0^2$. We now know that Newton, Leibniz and their fellows defined the derivative as the value that the ratio $\Delta f/h$ tends to when h approaches zero. Here is what they did:
$$\frac{\Delta f}{h} = \frac{(x_0 + h)^2 - x_0^2}{h} = \frac{2x_0 h + h^2}{h} = 2x_0 + h = 2x_0 \quad (\text{discarding } h)$$
The trouble is that h must be non-zero for the division to make sense, yet in the last step h is treated as zero. Leibniz realized this and solved the problem by saying that h is a differential--a quantity that is non-zero but smaller than any positive number. Because it's non-zero, the third equation in the above is fine, and because it is a super small number, it's nothing compared with $2x_0$, thus we can ignore it.
Was Leibniz correct? Yes, Table 4.16 confirms that. This table is purely numerics: we computed $\Delta f/h$ for many values of h getting smaller and smaller (and we considered $x_0 = 2$, as we have to give $x_0$ a value).
    h        Δf/h
    10⁻¹     4.100000000000
    10⁻²     4.010000000000
    10⁻³     4.001000000000
    10⁻⁴     4.000100000008
    10⁻⁵     4.000010000027
    10⁻⁶     4.000001000648
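The table is easy to reproduce; the stray digits in the last rows are floating-point round-off, not mathematics. A Python sketch (our own check):

```python
# reproduce Table 4.16: (f(x0+h) - f(x0))/h for f(x) = x^2 at x0 = 2
x0 = 2.0
for k in range(1, 7):
    h = 10.0**(-k)
    ratio = ((x0 + h)**2 - x0**2) / h
    print(f"10^-{k}  {ratio:.12f}")
```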
Now we're ready for the presentation of the limit of a function. The key point here is to see $\Delta f/h$ as a function of h; thus the derivative of $y = f(x)$ at $x_0$ is the limit of the function $g(h) := \Delta f/h$ when h approaches zero:
$$f'(x_0) = \lim_{h\to 0}\frac{(x_0 + h)^2 - x_0^2}{h} = \lim_{h\to 0}(2x_0 + h)$$
And what is this limit? As can be seen from Fig. 4.59, as h tends to zero, $2x_0 + h$ gets closer and closer to $2x_0$. And that's what we call the limit of $2x_0 + h$.
In the preceding discussion we have used the symbol h to denote the change in x when defining the derivative of $y = f(x)$. This led to the limit of another function g(h), with h being the independent variable. It's possible to restate the problem so that the independent variable is always x. We choose a fixed point $x_0$, and we consider another point x; then we have
$$f'(x_0) = \lim_{x\to x_0}\frac{x^2 - x_0^2}{x - x_0} = \lim_{x\to x_0}(x + x_0) = 2x_0$$
more compactly as $|x - a| < \delta$. Similarly, "f(x) gets closer to L" means $|f(x) - L| < \epsilon$, where $\epsilon$ is yet another small positive number. Cauchy and Bernard Bolzano (1781–1848) were the first who used these $\epsilon$ and $\delta$.
There is a little detail here before we present the definition of the limit of a function. We always speak of the limit of a function when x approaches a. This implies that we do not care what happens when $x = a$. For example, the function $y = \frac{x-1}{x^2-1}$ is not defined at $x = 1$, but it is obvious that $\lim_{x\to 1} f(x) = 0.5$. But the classic example is a circle and an n-polygon inscribed in it. When we say the limit of this n-polygon when n approaches infinity is the circle, we mean that n is a very large number. But it is meaningless if n is actually infinity, because in that case we would have a polygon of which each side is of vanishing length.
Thus, "x close to a and not equal to a" is written mathematically as:
$$0 < |x - a| < \delta$$
Definition 4.10.1
We denote the limit of f(x) when x approaches a by $\lim_{x\to a} f(x)$, and this limit is L, i.e.,
$$\lim_{x\to a} f(x) = L$$
when, for any $\epsilon > 0$, there exists a $\delta > 0$ such that $|f(x) - L| < \epsilon$ whenever $0 < |x - a| < \delta$.
This definition was given by the German mathematician Karl Theodor Wilhelm Weierstrass
(1815 –1897) who was often cited as the "father of modern analysis".
The key point here is that $\epsilon$ is the input that indicates the level of accuracy we need for f(x) to approach L, and $\delta$ is the output (thus $\delta$ depends on $\epsilon$). Fig. 4.60 illustrates this; for a smaller $\epsilon$, we have to make x closer to a and thus use a smaller $\delta$.
You can try by plotting this function or making a table similar to Table 4.16 to confirm this.
What is analysis by the way? Analysis is the branch of mathematics dealing with limits
and related theories, such as differentiation, integration, measure, infinite series, and analytic
functions. These theories are usually studied in the context of real and complex numbers
and functions. Analysis evolved from calculus, which involves the elementary concepts and
techniques of analysis.
One-sided limits. If we want to find the limit of the function $\sqrt{x-1}$ when x approaches 1, we'll see that we need to consider only $x \ge 1$, and this leads to the notion of a one-sided limit:
$$\lim_{x\to 1^+}\sqrt{x-1} = \lim_{x\downarrow 1}\sqrt{x-1}$$
which is a right-hand limit, as we approach 1 from above, as indicated by the notation $\downarrow 1$, even though this notation is not popular. And of course, if we have right-hand limits, we have left-hand limits, e.g. $\lim_{x\to 1^-}\sqrt{1-x}$.
If the limit of f(x) when x approaches a exists, it means that the left-hand and right-hand one-sided limits exist and are equal:
$$\lim_{x\to a^-} f(x) = \lim_{x\to a^+} f(x) = \lim_{x\to a} f(x)$$
Infinite limits. If we consider the function $y = 1/x^2$, we realize that for x near 0, y is very large. Thus, we say that:
$$\lim_{x\to 0}\frac{1}{x^2} = \infty$$
This is called an infinite limit: the function becomes arbitrarily large near $x = 0$. We can generalize this to
$$\lim_{x\to a} f(x) = \infty$$
Fig. 4.61 illustrates some infinite limits, and we can see that the lines x = a are the vertical asymptotes of the graphs. This figure suggests the following definition of infinite limits.
Definition 4.10.2
The limit of y = f(x) when x approaches a is infinity, written as
$$\lim_{x\to a} f(x) = \infty$$
when, for any large number M, there exists a $\delta > 0$ such that $f(x) > M$ whenever $0 < |x - a| < \delta$.
Limits when x approaches infinity. Again consider the function $y = 1/x^2$, but now focus on what happens when x approaches infinity, i.e., when x gets bigger and bigger, or minus infinity, when x gets smaller and smaller. It's clear that $1/x^2$ is then getting smaller and smaller. We write $\lim_{x\to+\infty} 1/x^2 = \lim_{x\to-\infty} 1/x^2 = 0$.
Definition 4.10.3
The limit of y = f(x) when x approaches $\infty$ is finite, written as
$$\lim_{x\to\infty} f(x) = a$$
when, for any $\epsilon > 0$, there exists a number $M > 0$ such that $|f(x) - a| < \epsilon$ whenever $x > M$.
We can use this definition to prove that $\lim_{x\to+\infty} 1/x^2 = 0$: select $M = 1/\sqrt{\epsilon}$; then for $x > M$ we have $1/x^2 < \epsilon$.
We soon realize that the definition of the limit of a function is not as powerful as it seems to be. For example, with the definition of limit alone, we're still not able to compute the following limit
$$\lim_{t\to 0}\frac{\sqrt{t^2 + 9} - 3}{t^2}$$
The situation is similar to differentiation. We should now try to find out the rules that limits obey; using them will enable us to evaluate limits of complex functions.
The sum rule basically states that the limit of the sum of two functions is the sum of the limits. And this is plausible: near x = a the first function is close to $L_1$ and the second function to $L_2$, thus f(x) + g(x) is close to $L_1 + L_2$. And of course when we have this rule for two functions, we also have it for any number of functions! Need a proof? Here it is:
$$|f(x) + g(x) - (L_1 + L_2)| \le |f(x) - L_1| + |g(x) - L_2| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$
Now you know why we have used $\epsilon/2$ as the accuracy in Eq. (4.10.2). To summarize, the whole proof uses (1) the triangle inequality $|a + b| \le |a| + |b|$ and (2) a correct accuracy (e.g. $\epsilon/2$ here). Do we need another proof for the difference rule? No! This is because $a - b$ is simply $a + (-b)$. If you're still not convinced, we can do this:
$$|f(x) - g(x) - (L_1 - L_2)| \le |f(x) - L_1| + |g(x) - L_2| < \epsilon$$
It's possible to prove the product rule in the same way as the sum rule, but it's hard. We follow an easier path. First we massage fg a bit:
$$fg = (f - L)(g - M) - LM + Mf + Lg$$
Now, if we can prove that $\lim_{x\to a}(f - L)(g - M) = 0$, then we're done. Indeed, we have
$$0 < |x - a| < \delta_1 \Longrightarrow |f - L| < \sqrt{\epsilon}, \qquad 0 < |x - a| < \delta_2 \Longrightarrow |g - M| < \sqrt{\epsilon}$$
so for $0 < |x - a| < \min(\delta_1, \delta_2)$ we have $|(f - L)(g - M)| < \epsilon$.
For the quotient rule, it suffices to prove that
$$\lim_{x\to a}\frac{1}{g(x)} = \frac{1}{\lim_{x\to a} g(x)} \qquad (4.10.3)$$
This is the crux of the whole proof: it transforms the original problem into proving $\lim_{x\to a}(f - L)(g - M) = 0$, which is much easier.
because then, by the product rule,
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a} f(x)\cdot\lim_{x\to a}\frac{1}{g(x)} = \frac{\lim_{x\to a} f(x)}{\lim_{x\to a} g(x)} \quad (\text{Eq. (4.10.3)})$$
To prove Eq. (4.10.3), let's denote $M = \lim_{x\to a} g(x)$. Then, what we have to prove is that
$$\left|\frac{1}{g(x)} - \frac{1}{M}\right| < \epsilon \quad \text{when } 0 < |x - a| < \delta$$
Or, equivalently,
$$\frac{1}{|M|}\,\frac{1}{|g(x)|}\,|g(x) - M| < \epsilon \quad \text{when } 0 < |x - a| < \delta \qquad (4.10.4)$$
Now we need to bound $\frac{1}{|g(x)|}$ and $|g(x) - M|$. Because $\lim_{x\to a} g(x) = M$, when $0 < |x - a| < \delta_1$ we have
$$|g(x) - M| < |M|/2$$
We can always select $\delta_1$ so that the above inequality holds. You can draw a picture, similar to Fig. 4.60, to convince yourself of this. Thus,
$$|M| = |M - g(x) + g(x)| \le |M - g(x)| + |g(x)| \quad (\text{triangle inequality})$$
$$\le \frac{|M|}{2} + |g(x)| \Longrightarrow |g(x)| \ge \frac{|M|}{2} \Longrightarrow \frac{1}{|g(x)|} < \frac{2}{|M|}$$
Now, based on Eq. (4.10.4), we need $|g(x) - M| < \epsilon M^2/2$. And of course we have it at our disposal, because the limit of g is M; this holds true when $0 < |x - a| < \delta_2$. Now, with $\delta = \min(\delta_1, \delta_2)$, we have
$$\frac{1}{|g(x)|} < \frac{2}{|M|},\quad |g(x) - M| < \frac{\epsilon M^2}{2} \Longrightarrow \frac{1}{|M|}\,\frac{1}{|g(x)|}\,|g(x) - M| < \frac{1}{|M|}\cdot\frac{2}{|M|}\cdot\frac{\epsilon M^2}{2} = \epsilon$$
Sadly, in many textbooks the proof is written in reverse order, which makes students feel that they are stupid. We emphasize again that finding a proof is hard and involves many setbacks. When a proof has been found, the author usually presents it in a polished form, not in the way it was found.
Using the definition of limit, we can see that:
$$\lim_{x\to a} x = a \qquad (4.10.5)$$
$$\lim_{x\to a} x^n = a^n \qquad (4.10.6)$$
If we look again at these two results, we see that the function $y = x^n$ has this nice property: $\lim_{x\to a} f(x) = f(a)$; that is, the limit when x approaches a equals the function value at a. We're now turning our discussion to the functions that have this special property.
With that definition of the continuity of a function at a single point, we have another definition: a function is continuous over an interval if it is continuous everywhere in that interval.
It is not hard to discover these rules for continuity of functions:
(a: sum/diff rule) if f(x) and g(x) are continuous then f ± g is continuous
(b: linearity rule) if f(x) is continuous then cf is continuous
(c: product rule) if f(x) and g(x) are continuous then fg is continuous        (4.10.8)
(d: quotient rule) if f(x) and g(x) are continuous then f/g is continuous
We skip the proof: it's a combination of the definition of continuity and the limit rules in Eq. (4.10.1). Now we're in a position to establish the continuity of many functions we know of.
We start with polynomials, those of the form
$$
P(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_2x^2+a_1x+a_0=\sum_{i=0}^{n}a_ix^i \tag{4.10.9}
$$
They are continuous everywhere. This is because each term $a_ix^i$ is continuous (which in turn follows from the facts that $y=x^n$ is continuous and that $cx^n$ is also continuous).
Next are rational functions $y=P(x)/Q(x)$; they are continuous due to the quotient rule in Eq. (4.10.8). Of course, they are only continuous where $Q(x)\ne0$. Then, trigonometric functions, logarithmic functions and exponential functions are all continuous.
How about composite functions, e.g. $\sin x^2$ or $e^{x^2}$? Our intuition tells us that they are continuous. We can confirm that by drawing them and seeing that their graphs are unbroken (Fig. 4.62). Therefore, we have
$$
\lim_{x\to1}\sin x^2=\sin(1),\qquad\lim_{x\to1}e^{x^2}=e
$$
Figure 4.62: Graphs of (a) $\sin x^2$ and (b) $e^{x^2}$.
We're now finally in a position to compute some interesting limits. For example,
$$
\begin{aligned}
\lim_{t\to0}\frac{\sqrt{t^2+9}-3}{t^2}
&=\lim_{t\to0}\frac{t^2}{t^2(\sqrt{t^2+9}+3)}=\lim_{t\to0}\frac{1}{\sqrt{t^2+9}+3}&&\text{(algebra)}\\
&=\frac{1}{\lim_{t\to0}(\sqrt{t^2+9}+3)}&&\text{(quotient rule with }f(x)=1)\\
&=\frac{1}{\lim_{t\to0}\sqrt{t^2+9}+3}&&\text{(sum rule)}\\
&=\frac{1}{\sqrt{\lim_{t\to0}(t^2+9)}+3}=\frac{1}{\sqrt9+3}=\frac16&&\text{(Eq. (4.10.10))}
\end{aligned}
\tag{4.10.11}
$$
where the first step converts the form $0/0$ to something better.
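The book's snippets are in Julia; as a quick numerical sanity check of this limit, here is an equivalent Python sketch that evaluates the expression at values of $t$ shrinking toward zero:

```python
import math

def f(t):
    # the 0/0 expression from Eq. (4.10.11)
    return (math.sqrt(t * t + 9.0) - 3.0) / (t * t)

# evaluate at t = 0.1, 0.01, ..., 0.00001 and watch the values settle near 1/6
values = [f(10.0 ** -k) for k in range(1, 6)]
print(values)
```

(For very small $t$ the subtraction $\sqrt{t^2+9}-3$ loses floating-point precision, which is another reason the algebraic rewriting above is valuable.)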
$$
f'(a)=\lim_{h\to0}\frac{f(a+h)-f(a)}{h}
$$
Even though we now know the quotient rule of limits, we cannot compute $f'(a)$ as
$$
f'(a)=\lim_{h\to0}\frac{f(a+h)-f(a)}{h}=\frac{\lim_{h\to0}\left(f(a+h)-f(a)\right)}{\lim_{h\to0}h}
$$
because it is of the form $0/0$, which is not defined. A limit of the form $0/0$ is called an indeterminate form, and we list other indeterminate forms in Table 4.17. How do we compute indeterminate forms? Consider this example:
$$
\lim_{x\to\infty}\frac{4x^2+x}{2x^2+x}=\lim_{x\to\infty}\frac{4+\frac1x}{2+\frac1x}=\frac{\lim_{x\to\infty}\left(4+\frac1x\right)}{\lim_{x\to\infty}\left(2+\frac1x\right)}=\frac42=2
$$
Why did we divide both the numerator and the denominator by $x^2$? This is because we know that for a very large $x$, $x$ is nothing (or negligible) compared with $4x^2$ and $2x^2$, so we can write (not mathematically precise, but correct):
$$
\lim_{x\to\infty}\frac{4x^2+x}{2x^2+x}=\lim_{x\to\infty}\frac{4x^2}{2x^2}=2
$$
So to say that $x$ is nothing is equivalent to converting it to the form $1/x$, and that's why we did the division by $x^2$. And there is no value in doing more limits of this form, as we can guess (note that generalization is a good thing to do) the following result for the ratio of any two polynomials:
$$
\lim_{x\to\infty}\frac{P_n(x)}{Q_m(x)}=
\begin{cases}
0&\text{if }n<m\\
\infty&\text{if }n>m\\
\dfrac{a_n}{b_n}&\text{if }n=m
\end{cases}
$$
which is nothing but the fact that this limit depends on whether the numerator or the denominator overtakes the other. If the denominator overtakes the numerator, the limit is zero.
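A numerical illustration of this "overtaking" (a Python sketch; the sample points are ours, not from the book):

```python
def ratio(x):
    # (4x^2 + x) / (2x^2 + x): equal degrees, so the limit is a_n/b_n = 4/2 = 2
    return (4.0 * x * x + x) / (2.0 * x * x + x)

# evaluate at x = 10, 100, ..., 10^6: the values climb toward 2
samples = [ratio(10.0 ** k) for k in range(1, 7)]
print(samples)
```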
L'Hopital's rule. The method of using algebra does not apply to this limit: $\lim_{x\to0}\sin x/x$. To deal with this one we had to use geometry, but isn't that against the spirit of calculus? We need to find a mechanical way so that everyone is able to compute this limit and similar limits without resorting to geometry (which always requires some ingenious idea). What do you think if you see someone doing this?
$$
\lim_{x\to a}\frac{f(x)}{g(x)}=\frac{f'(a)}{g'(a)} \tag{4.10.12}
$$
Actually it is not hard to guess this rule. Recall that for $x$ near $a$ we have the approximations $f(x)\approx f(a)+f'(a)(x-a)$ and $g(x)\approx g(a)+g'(a)(x-a)$; for a $0/0$ form, $f(a)=g(a)=0$. Thus,
$$
\lim_{x\to a}\frac{f(x)}{g(x)}=\lim_{x\to a}\frac{f'(a)(x-a)}{g'(a)(x-a)}=\frac{f'(a)}{g'(a)}
$$
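Here is a minimal Python sketch of the rule (the helper name `lhopital` is ours); it reproduces the geometric result $\lim_{x\to0}\sin x/x=1$ mechanically:

```python
import math

def lhopital(fp, gp, a):
    # for a 0/0 form at x = a, the limit is f'(a)/g'(a), assuming g'(a) != 0
    return fp(a) / gp(a)

# lim_{x->0} sin(x)/x with f'(x) = cos(x) and g'(x) = 1
limit = lhopital(math.cos, lambda x: 1.0, 0.0)
print(limit)  # 1.0
```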
What is the limit of $x^n/n!$ when $n\to\infty$? Why bother with this? Because it is involved in Taylor's theorem (Section 4.14.10), which is a big thing. Let's start simple and concrete with $x=2$:
$$
\lim_{n\to\infty}\frac{2^n}{n!}=\;?
$$
A bit of algebraic manipulation goes a long way (of course we assume $n>2$, as we're interested in the case $n$ goes to infinity):
$$
\frac{2^n}{n!}=\frac{2\cdot2\cdot2\cdots2}{1\cdot2\cdot3\cdots n}=\frac21\cdot\frac22\cdot\frac23\cdot\frac24\cdots\frac2n
$$
As the red terms are all smaller than one (we're multiplying a constant, the blue term, repeatedly with factors smaller than one), we guess that as $n$ approaches infinity, the limit is zero. But, to be sure, we rewrite it as
$$
\frac{2^n}{n!}=\underbrace{\frac21\cdot\frac22\cdot\frac23\cdot\frac24}_{4\text{ terms}}\cdot\underbrace{\frac25\cdot\frac26\cdots\frac2n}_{n-4\text{ terms}}
$$
What is nice with this new form is that all the terms in red are smaller than $1/2$; thus we immediately have
$$
\frac{2^n}{n!}<\underbrace{\frac21\cdot\frac22\cdot\frac23\cdot\frac24}_{k}\cdot\left(\frac12\right)^{n-4}=2^4k\,\frac{1}{2^n}
$$
$$
\lim_{n\to\infty}\frac{2^n}{n!}\le\lim_{n\to\infty}2^4k\,\frac{1}{2^n}=2^4k\lim_{n\to\infty}\frac{1}{2^n}=2^4k\,\frac{1}{\lim_{n\to\infty}2^n}=0
$$
This proof holds for $x=3,4,\ldots$ or even negative integers whose absolute value is larger than one. But how about $x=3.123$? We just note that
$$
\frac{3.123^n}{n!}<\frac{4^n}{n!}
$$
And then we know that for all $x\in\mathbb{R}$, we have $\lim_{n\to\infty}x^n/n!=0$.
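A short Python check of both the sequence and the bound derived above (with $k=(2/1)(2/2)(2/3)(2/4)$):

```python
import math

k = (2 / 1) * (2 / 2) * (2 / 3) * (2 / 4)    # the constant made of the first four factors

for n in (5, 10, 20, 30):
    term = 2.0 ** n / math.factorial(n)
    bound = 2.0 ** 4 * k / 2.0 ** n           # the bound 2^4 * k * (1/2)^n
    print(n, term, bound)                     # term stays below bound; both shrink to 0
```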
$$
f(x)=\sum_{n=0}^{\infty}a^n\cos(b^nx),\quad a\in(0,1) \tag{4.10.13}
$$
Fig. 4.63 gives the plots of two cases: (i) $a=0.2$, $b=0.1$, $n=3$ and (ii) $a=0.2$, $b=7$, $n=3$.
Definition 4.10.6
A function $f:(a,b)\to\mathbb{R}$ is continuously differentiable on $(a,b)$, written $f\in C^1(a,b)$, if it is differentiable on $(a,b)$ and $f':(a,b)\to\mathbb{R}$ is continuous.
Definition 4.10.7
A function $f:(a,b)\to\mathbb{R}$ is said to be $k$-times continuously differentiable on $(a,b)$, written $f\in C^k(a,b)$, if its derivatives of order $j$, where $0\le j\le k$, exist and are continuous functions.
Applications. As an application of the intermediate value theorem, let's consider this problem: 'prove that the equation $x^3+x-1=0$ has solutions.' Let $f(x)=x^3+x-1$; we then have $f(0)=-1$ and $f(1)=1$. According to the intermediate value theorem, there exists a point $c\in(0,1)$ such that $f(c)=0$, because $0$ is an intermediate value between $f(0)=-1$ and $f(1)=1$.
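The intermediate value theorem also suggests an algorithm, bisection: repeatedly halve an interval whose endpoint values have opposite signs. A Python sketch (ours, not from the book):

```python
def f(x):
    return x ** 3 + x - 1

def bisect(f, lo, hi, tol=1e-12):
    # f(lo) and f(hi) must have opposite signs (intermediate value theorem)
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = bisect(f, 0.0, 1.0)
print(root)  # about 0.6823
```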
which is differentiable and g.a/ D g.b/, thus there exists c 2 .a; b/ so that g 0 .c/ D 0. And that
leads to the mean value theorem.
Michel Rolle (1652 – 1719) was a French mathematician. He is best known for Rolle’s theorem.
Rolle, the son of a shopkeeper, received only an elementary education. In spite of his minimal
education, Rolle studied algebra and Diophantine analysis (a branch of number theory) on his
own. Rolle’s fortune changed dramatically in 1682 when he published an elegant solution of
a difficult, unsolved problem in Diophantine analysis. In 1685 he joined the Académie des
Sciences. Rolle was against calculus and ironically the theorem bearing his name is essential for
basic proofs in calculus. Among his several achievements, Rolle helped advance the currently accepted size order for negative numbers. Descartes, for example, viewed $-2$ as smaller than $-5$. Rolle preceded most of his contemporaries by adopting the current convention in 1691.
Rolle’s 1691 proof covered only the case of polynomial functions. His proof did not use the
methods of differential calculus, which at that point in his life he considered to be fallacious.
The theorem was first proved by Cauchy in 1823 as a corollary of a proof of the mean value
theorem. The name "Rolle’s theorem" was first used by Moritz Wilhelm Drobisch of Germany
in 1834 and by Giusto Bellavitis of Italy in 1846.
Analysis of fixed point iterations. In Section 2.10 we saw the fixed point iteration method as a means to solve equations written in the form $x=f(x)$. In the method, we generate a sequence starting from $x_0$: $(x_n)=\{x_1,x_2,\ldots,x_n\}$ using the formula $x_{n+1}=f(x_n)$. We have demonstrated that these numbers converge to $x^*$, the solution of the equation. Now we're going to prove this using the mean value theorem. The whole point of the proof is that if the method works, then the distance from the points $x_1,x_2,\ldots$ to $x^*$ must decrease. So we compute one such distance, $x_n-x^*$; since $x_n=f(x_{n-1})$ and $x^*=f(x^*)$, the mean value theorem gives, for some $\xi$ between $x_{n-1}$ and $x^*$,
$$
x_n-x^*=f(x_{n-1})-f(x^*)=f'(\xi)(x_{n-1}-x^*)
$$
Now there are two cases. First, if $|f'(\xi)|<1$, then $|x_n-x^*|<|x_{n-1}-x^*|$; that is, the distance between $x_n$ and $x^*$ is smaller than that between $x_{n-1}$ and $x^*$. And that tells us that $x_n$ converges to $x^*$. Thus, if we start close to $x^*$, i.e., $x_0\in I=[x^*-\alpha,x^*+\alpha]$, and the absolute value of the derivative of the function is smaller than 1 in that interval $I$, the method works.
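A compact Python sketch of the iteration, applied to $x=\cos x$ (an equation where $|f'|=|\sin x|<1$ near the solution, so the analysis above applies):

```python
import math

def fixed_point(g, x0, iters=100):
    # x_{n+1} = g(x_n)
    x = x0
    for _ in range(iters):
        x = g(x)
    return x

xstar = fixed_point(math.cos, 1.0)
print(xstar)  # about 0.7391
```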
numbers living in that interval! Don't worry, integral calculus is capable of handling just that. Finding an answer to that question led to the concept of the average of a function.
The idea is to use integration. Assume we want to find the average of a function $f(x)$ for $a\le x\le b$. We divide the interval $[a,b]$ into $n$ equal sub-intervals of spacing $\Delta x=(b-a)/n$.
Example. Let's compute the averages of these functions: $y=x$ in $[0,1]$, $y=x^2$ in $[-1,1]$ and $y=\sin^2x$ in $[0,\pi]$. They are given by
$$
f_{\text{average}}=\int_0^1x\,dx=\frac12,\qquad
f_{\text{average}}=\frac12\int_{-1}^1x^2\,dx=\frac13,\qquad
f_{\text{average}}=\frac1\pi\int_0^\pi\sin^2x\,dx=\frac12
$$
Figure 4.66: Averages of functions: $y=x$ in $[0,1]$, $y=x^2$ in $[-1,1]$ and $y=\sin^2x$ in $[0,\pi]$.
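These averages are easy to confirm numerically with a midpoint Riemann sum, which is exactly the "divide into $n$ sub-intervals" idea above (a Python sketch, not the book's Julia):

```python
import math

def average(f, a, b, n=100_000):
    # f_avg ~ (1/(b-a)) * sum of f at midpoints times dx
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx / (b - a)

avg1 = average(lambda x: x, 0.0, 1.0)                     # exact: 1/2
avg2 = average(lambda x: x * x, -1.0, 1.0)                # exact: 1/3
avg3 = average(lambda x: math.sin(x) ** 2, 0.0, math.pi)  # exact: 1/2
print(avg1, avg2, avg3)
```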
Looking at Fig. 4.66, it is easy to see that there exists a point $c$ in $[a,b]$ such that $f(c)$ is the average height of the function (the horizontal line $y=f_{\text{average}}$ always intersects the curve $y=f(x)$). And this is the mean value theorem for integrals:
$$
\exists c\in(a,b)\ \text{s.t.}\ f(c)=\frac{1}{b-a}\int_a^bf(x)\,dx \tag{4.11.3}
$$
For $y=x^2$ we have $c=\pm1/\sqrt3$. These are the Gauss points in the Gauss quadrature method used to numerically evaluate integrals; see Section 11.4 for details.
the more familiar Cartesian coordinate system), cf. Fig. 4.67. The polar coordinate system is used in many fields, including mathematics, physics, engineering, navigation and robotics. It is especially useful in situations where the relationship between two points is most easily expressed in terms of angles and distance. For instance, consider a unit circle centered at the origin: it is $y=\pm\sqrt{1-x^2}$ in Cartesian coordinates, but simply $r=1$ in polar coordinates.
The full history of polar coordinates is described in Origin of Polar Coordinates by the American mathematician and historian Julian Lowell Coolidge (1873 – 1954). The Flemish mathematician Grégoire de Saint-Vincent (1584 – 1667) and the Italian mathematician Bonaventura Cavalieri (1598 – 1647) independently introduced the concepts at about the same time. In Acta eruditorum (1691), Jacob Bernoulli used a system with a point on a line, called the pole and the polar axis, respectively. Coordinates were specified by the distance from the pole and the angle from the polar axis. The actual term polar coordinates has been attributed to the Italian mathematician Gregorio Fontana (1735 – 1803). The term appeared in English in George Peacock's 1816 translation§ of Lacroix's Differential and Integral Calculus‘.
In the Cartesian coordinate system we lay down a grid consisting of horizontal and vertical lines that are at right angles. Two lines are special, as their intersection marks the origin from which other points are located. In a polar coordinate system, we also have two axes and an origin. Concentric circles centered at the origin are used to mark constant distances $r$ from the origin. Also, lines starting from the origin are drawn; every point on such a line has a constant angle $\theta$. So, a point is marked by $(r,\theta)$ (Fig. 4.67).
Curves are described by equations of the form $y=f(x)$ in the Cartesian coordinate system. Similarly, polar curves are written as $r=f(\theta)$. Let's start with the unit circle. Using Cartesian coordinates, it is written as $x^2+y^2=1$. Using polar coordinates, it is simply $r=1$! Fig. 4.68
§
George Peacock (1791 – 1858) was an English mathematician and Anglican cleric. He founded what has been
called the British algebra of logic.
‘
Sylvestre François Lacroix (1765 – 1843) was a French mathematician. Lacroix was the writer of important
textbooks in mathematics and through these he made a major contribution to the teaching of mathematics throughout
France and also in other countries. He published a two volume text Traité de calcul differéntiel et du calcul intégral
(1797-1798) which is perhaps his most famous work.
presents a nice polar curve, a polar rose with as many petals as we want, and a more realistic rose.
What do you think of Fig. 4.69? It is a spiral, from prime numbers! It was created by plotting the points $(r,\theta)=(p,p)$, where $p$ runs over the prime numbers below 20 000. That is, the radius and the angle (in radians) are both prime numbers.
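The book generated Fig. 4.69 with the Julia package Primes; here is a stdlib-only Python sketch of the same point set (the sieve helper is ours, not from the book):

```python
import math

def primes_below(n):
    # simple sieve of Eratosthenes
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_below(20_000)
# each prime p becomes the polar point (r, theta) = (p, p); in Cartesian form:
points = [(p * math.cos(p), p * math.sin(p)) for p in ps]
print(len(points))  # 2262 primes below 20000
```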
Figure 4.68: Polar rose $r(\theta)=a\cos k\theta$ with $a=1$. It is a $k$-petaled rose if $k$ is odd, or a $2k$-petaled rose if $k$ is even. The parameter $a$ represents the length of the petals of the rose. In (c) is a more realistic rose with $r=\theta+2\sin(2\theta)$.
Figure 4.69: Prime numbers from 1 to 20 000 plotted on a polar plane. Generated using Julia package
Primes: the function primes(n) returns all primes from 1 to n.
Figure 4.70
Considering Fig. 4.70, where $F$ (the focus) is at the origin, the directrix is the line parallel to the $y$ axis at a distance $d$ from $F$. Let's denote by $e$ the eccentricity and by $P$ a point on the conic section with coordinates $(x,y)=(r\cos\theta,r\sin\theta)$; then a conic section is defined by $PF/PE=e$, which leads to the equation:
$$
r=e(d-r\cos\theta)\;\Longrightarrow\;r=\frac{ed}{1+e\cos\theta} \tag{4.12.1}
$$
You might not be convinced that this equation is a conic section. We can check that either by using software to draw this equation and seeing what we get, or by transforming it back to a Cartesian form (whose result we already know). We do the latter now. Why bother doing all of this? Because, for certain problems, polar coordinates are more convenient to work with than Cartesian coordinates. Later on we shall use the result in this section to prove Kepler's 1st law that the orbit of a planet around the Sun is an ellipse (Section 7.10.9).
From Eq. (4.12.1), we have $r=e(d-r\cos\theta)=e(d-x)$ for $x=r\cos\theta$; now we square this equation and use $r^2=x^2+y^2$ to get
$$
x^2+y^2=e^2(d-x)^2=e^2(d^2-2dx+x^2)
$$
And with a bit of massaging, we obtain
$$
x^2+\frac{2e^2d}{1-e^2}x+\frac{y^2}{1-e^2}=\frac{e^2d^2}{1-e^2}
$$
Knowing already the Cartesian form of an ellipse ($(x/a)^2+(y/b)^2=1$), we now complete the square for $x$:
$$
\begin{aligned}
\left(x+\frac{e^2d}{1-e^2}\right)^2+\frac{y^2}{1-e^2}&=\frac{e^2d^2}{1-e^2}+\frac{e^4d^2}{(1-e^2)^2}&&\text{(complete the square)}\\
\left(x+\frac{e^2d}{1-e^2}\right)^2+\frac{y^2}{1-e^2}&=\frac{e^2d^2}{(1-e^2)^2}&&\text{(algebra)}
\end{aligned}
$$
The next step is of course to introduce $a$, $b$ and $h$ (now we need $e<1$):
$$
a^2=\frac{e^2d^2}{(1-e^2)^2},\qquad b^2=\frac{e^2d^2}{1-e^2},\qquad h=\frac{e^2d}{1-e^2} \tag{4.12.2}
$$
If we had just learned the quadratic formula by heart, we would forget how to complete a square! With these new symbols, our equation becomes the familiar ellipse:
$$
\frac{(x+h)^2}{a^2}+\frac{y^2}{b^2}=1
$$
But what is $h$? You might guess correctly that it should be related to $c$. Indeed, we know from Section 4.1 that the distance from the center of an ellipse to one focus is $c$, defined by $c^2+b^2=a^2$; thus
$$
c^2=a^2-b^2=\frac{e^2d^2}{(1-e^2)^2}-\frac{e^2d^2}{1-e^2}=\frac{e^4d^2}{(1-e^2)^2}=h^2
$$
Theorem 4.12.1
A polar equation of the form
$$
r=\frac{ed}{1\pm e\cos\theta}\quad\text{or}\quad r=\frac{ed}{1\pm e\sin\theta}
$$
represents a conic section with eccentricity $e$. The conic is an ellipse if $e<1$, a parabola if $e=1$ and a hyperbola if $e>1$.
As we now work with polar coordinates, we need to convert $(x,y)$ to $(r,\theta)$:
$$
x=r\cos\theta,\qquad y=r\sin\theta,\qquad r=f(\theta) \tag{4.12.4}
$$
And that allows us to compute $dx$, $dy$:
$$
\begin{aligned}
dx&=\cos\theta\,dr-r\sin\theta\,d\theta=(\cos\theta\,f'(\theta)-f(\theta)\sin\theta)\,d\theta\\
dy&=\sin\theta\,dr+r\cos\theta\,d\theta=(\sin\theta\,f'(\theta)+f(\theta)\cos\theta)\,d\theta
\end{aligned}
\tag{4.12.5}
$$
That derivation is purely algebraic. Many people prefer geometry. Fig. 4.71 shows that $ds^2=(r\,d\theta)^2+(dr)^2$, which is exactly what we obtained using algebra.
Figure 4.71
This is neither interesting nor new. Do not worry, it is just the beginning. If we now have three points $P_1$, $P_2$ and $P_3$, we get a quadratic curve. de Casteljau developed a recursive algorithm to get that curve. For a fixed $t$, use Eq. (4.13.2) to determine two new points $P_{12}$ and $P_{23}$; then use Eq. (4.13.2) again with the two new points to get $Q$ (Fig. 4.72a). When $t$ varies from 0 to 1, this point $Q$ traces a quadratic curve passing through $P_1$ and $P_3$ (Fig. 4.72d). The points $P_k$, $k=1,2,3$ are called the control points; they are so called because they control the shape of the curve.
Paul de Casteljau (born 19 November 1930) is a French physicist and mathematician. In 1959, while working
at Citroën, he developed an algorithm for evaluating calculations on a certain family of curves, which would later
be formalized and popularized by engineer Pierre Bézier, leading to the curves widely known as Bézier curves.
$$
\begin{aligned}
Q&=(1-t)P_{12}+tP_{23}\\
&=(1-t)[(1-t)P_1+tP_2]+t[(1-t)P_2+tP_3]&&\text{(Eq. (4.13.2))}\\
&=(1-t)^2P_1+2t(1-t)P_2+t^2P_3
\end{aligned}
\tag{4.13.3}
$$
What we see here is that the last equation is a linear combination of some polynomials (the red
terms) and some constant coefficients being the control points.
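de Casteljau's recursion is only a few lines of code. A Python sketch (ours, not the book's Julia), checked against the Bernstein form of Eq. (4.13.3):

```python
def de_casteljau(points, t):
    # repeated linear interpolation; each pass shortens the list by one point
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# a quadratic example with three control points
P1, P2, P3 = (0.0, 0.0), (1.0, 2.0), (3.0, 0.0)
t = 0.5
q = de_casteljau([P1, P2, P3], t)

# the same point from (1-t)^2 P1 + 2t(1-t) P2 + t^2 P3
b = tuple((1 - t) ** 2 * u + 2 * t * (1 - t) * v + t ** 2 * w
          for u, v, w in zip(P1, P2, P3))
print(q, b)  # both are (1.25, 1.0)
```

The same function evaluates cubic and higher-degree curves: just pass more control points.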
Moving on to a cubic curve with four control points $P_0,P_1,P_2,P_3$ (Fig. 4.73). The procedure is the same, and the result is
$$
B(t)=(1-t)^3P_0+3t(1-t)^2P_1+3t^2(1-t)P_2+t^3P_3
$$
Animation of the construction of Bézier curves helps the understanding. A coding exercise for people who like coding is to write a small program to create Fig. 4.73. If you do not like coding, check out GeoGebra, where you can drag and move the control points to see how the curve changes. And this allows free-form geometric modeling.
To see the pattern (for the generalization to curves of higher orders), let's put the quadratic and cubic Bézier curves together. The general Bézier curve of degree $n$ with control points $P_k$ ($k=0,1,2,\ldots,n$) is
$$
B(t)=\sum_{k=0}^{n}\binom nk(1-t)^{n-k}t^k\,P_k=\sum_{k=0}^{n}B_{k,n}P_k \tag{4.13.5}
$$
where $B_{k,n}$ is the Bernstein basis polynomial, given by
$$
B_{k,n}(t)=\binom nk(1-t)^{n-k}t^k,\qquad0\le t\le1 \tag{4.13.6}
$$
The Bernstein basis polynomials possess some nice properties: they are non-negative, and their sum is one, i.e., $\sum_{k=0}^{n}B_{k,n}(t)=1$; see Fig. 4.74a. Because of these two properties we can see that the point $B(t)$ is a weighted average of the control points, hence lies inside the convex hull of those points (Fig. 4.74b).
Figure 4.74: Bernstein cubic polynomials and convex hull property of Bézier curves.
You might be asking: where is the calculus? Ok, let's differentiate the cubic curve $B(t)$ to see what we get:
$$
B'(t)=3(1-t)^2(P_1-P_0)+6t(1-t)(P_2-P_1)+3t^2(P_3-P_2)
$$
What is this equation telling us? It indicates that the tangent to the curve at $P_0$ (or $t=0$) is proportional to the line $P_0P_1$, and the tangent to the curve at $P_3$ (or $t=1$) is proportional to the line $P_2P_3$. This should not be a surprise, as we have actually seen it in Fig. 4.73b. Because of this, and the fact that the curve goes through the starting and ending points, i.e., $B(0)=P_0$ and $B(1)=P_3$, we say that a cubic Bézier curve is completely determined by four pieces of data: the values of the curve at the two end points and the slopes of the curve at these points. And this is where Bézier curves look similar to Hermite interpolation (??).
The vectors extending from $P_0$ to $P_1$ and from $P_3$ to $P_2$ are called handles and can be manipulated in graphics programs like Adobe Photoshop and Illustrator to change the shape of the curve.
Sergei Natanovich Bernstein (5 March 1880 – 26 October 1968) was a Soviet and Russian mathematician of
Jewish origin known for contributions to partial differential equations, differential geometry, probability theory, and
approximation theory.
Why? The binomial theorem is the answer: $\sum_{k=0}^{n}\binom nk(1-t)^{n-k}t^k=\big((1-t)+t\big)^n=1$.
Bézier curves, CAD, and cars. The mathematical origin of Bézier curves comes from a 1912 mathematical discovery: Bernstein discovered (or invented) the now so-called Bernstein basis polynomial, and used it to define the Bernstein polynomial. What was his purpose? Only to prove Weierstrass's approximation theorem (Section 11.3.1). We can say that Bernstein polynomials had no practical applications until ... 50 years later. In the 1960s, through the work of Bézier and de Casteljau, Bernstein basis polynomials came to life in the form of Bézier curves.
de Casteljau’s idea of using mathematics to design car bod-
ies met with resistance from Citroën. The reaction was: Was it
some kind of joke? It was considered nonsense to represent a car
body mathematically. It was enough to please the eye, the word
accuracy had no meaning .... Eventually de Casteljau’s insane
persistence led to an increased adoption of computer-aided de-
sign methods in Citroën from 1963 onward. About his time at
Citroën in his autobiography de Casteljau wrote
It is thanks to people like de Casteljau that we now have a field called computer-aided design (CAD), in which mathematics and computers are used to help design all the things you can imagine: cars, buildings, airplanes, phones and so on.
Regarding the organization, first, ingenious ways to obtain such infinite series are presented
and second, a systematic method, called Taylor’s series, is given.
Pierre Étienne Bézier (1 September 1910 – 25 November 1999) was a French engineer at Renault. Bézier came from a family of engineers. He followed in their footsteps and earned degrees in mechanical engineering from École nationale supérieure d'arts et métiers and electrical engineering from École supérieure d'électricité. At the age of 67 he earned a doctorate in mathematics from Pierre-and-Marie-Curie University.
When $n$ is even, $f_n(x)$ can be found explicitly, since he knows from Wallis that
$$
\int_0^xu^p\,du=\frac{x^{p+1}}{p+1}
$$
Hence,
$$
\begin{aligned}
f_0(x)&=\int_0^xdu=1\cdot\frac x1\\
f_2(x)&=\int_0^x(1-u^2)\,du=1\cdot\frac x1-1\cdot\frac{x^3}{3}\\
f_4(x)&=\int_0^x(1-u^2)^2\,du=1\cdot\frac x1-2\cdot\frac{x^3}{3}+1\cdot\frac{x^5}{5}\\
f_6(x)&=\int_0^x(1-u^2)^3\,du=1\cdot\frac x1-3\cdot\frac{x^3}{3}+3\cdot\frac{x^5}{5}-1\cdot\frac{x^7}{7}
\end{aligned}
\tag{4.14.2}
$$
You can see that the red numbers follow Pascal's triangle (Section 2.25). These results for even $n$ can be generalized to the following:
$$
f_n(x)=\sum_{m=0}^{\infty}a_{mn}(-1)^m\frac{x^{2m+1}}{2m+1} \tag{4.14.3}
$$
where $a_{mn}$ denotes the red coefficients in Eq. (4.14.2); they are called integral binomial coefficients, and $(-1)^m$ is either $+1$ or $-1$ and indicates the alternating plus/minus signs appearing in Eq. (4.14.2). And Newton believed that this formula also works for odd integers $n=1,3,5,\ldots$ So he collected the red coefficients of Eq. (4.14.2) in a table (Table 4.18), and his goal was to find the coefficients for $n=1,3,5,\ldots$, i.e., the boxes in this table. With those coefficients, we know the integrals in Eq. (4.14.1), and by term-wise differentiation we would get the series for $(1-x^2)^n$ for $n=1/2$, $3/2$, etc.
John Wallis (1616 – 1703) was an English clergyman and mathematician who is given partial credit for the
development of infinitesimal calculus.
For example, if $n=0$ and $m=0$, then $a_{mn}=1$, by looking at the first equation in Eq. (4.14.2).
m\n    0     1     2     3     4     5     6
0      1     1     1     1     1     1     1
1      0    1/2    1    3/2    2    5/2    3
2      0     ☐     0     ☐     1     ☐     3
3      0     ☐     0     ☐     0     ☐     1
4      0     ☐     0     ☐     0     ☐     0
5      0     ☐     0     ☐     0     ☐     0

Table 4.18: Integral binomial coefficients. The row $m=0$ is all 1, following Eq. (4.14.2) (the coefficient of the $x$ term is always 1). The rule of this table is (because $a_{mn}$ follows Pascal's triangle): $a_{m,n+2}=a_{m,n}+a_{m-1,n}$ for $m\ge1$ (see the three circled numbers for one example). Note that $a_{1n}=n/2$ for even $n$, and Newton believed it is also the case for odd $n$; that's why he put $1/2$, $3/2$ and $5/2$ in the row $m=1$ for odd $n$.
m\n    0      1        2           3             4              5
0      a      a        a           a             a              a
1      b     a+b     2a+b        3a+b          4a+b           5a+b
2      c     b+c    a+2b+c     3a+3b+c       6a+4b+c       10a+5b+c
3      d     c+d    b+2c+d    a+3b+3c+d    4a+6b+4c+d    10a+10b+5c+d
A complete table for integral binomial coefficients is given in Table 4.19. And we determine
a; b; c; d; : : : by equating the m-th row in Table 4.19 with the corresponding row in Table 4.18,
but only for columns of even n.
For example, considering the third row (the red numbers in Table 4.19), we have the following equations:
$$
\left.\begin{aligned}
c&=0\\
a+2b+c&=0\\
6a+4b+c&=1
\end{aligned}\right\}
\;\Longrightarrow\;
c=0,\;a=\frac14,\;b=-\frac18
\;\Longrightarrow\;
\begin{cases}
a_{21}=b+c=-\dfrac18\\[4pt]
a_{23}=3a+3b+c=\dfrac38\\[4pt]
a_{25}=10a+5b+c=\dfrac{15}{8}
\end{cases}
$$
Similarly, considering now the fourth row, we have
$$
\left.\begin{aligned}
d&=0\\
b+2c+d&=0\\
4a+6b+4c+d&=0\\
20a+15b+6c+d&=1
\end{aligned}\right\}
\;\Longrightarrow\;
a=\frac18,\;b=-\frac18,\;c=\frac1{16},\;d=0
\;\Longrightarrow\;
\begin{cases}
a_{31}=c+d=\dfrac1{16}\\[4pt]
a_{33}=a+3b+3c+d=-\dfrac1{16}\\[4pt]
a_{35}=10a+10b+5c+d=\dfrac{5}{16}
\end{cases}
$$
Phu Nguyen, Monash University © Draft version
Chapter 4. Calculus 384
$$
\begin{aligned}
f_1(x)&=\int_0^x(1-u^2)^{1/2}\,du=x-\frac12\frac{x^3}{3}-\frac18\frac{x^5}{5}-\frac1{16}\frac{x^7}{7}-\cdots\\
f_3(x)&=\int_0^x(1-u^2)^{3/2}\,du=x-\frac32\frac{x^3}{3}+\frac38\frac{x^5}{5}+\frac1{16}\frac{x^7}{7}+\cdots
\end{aligned}
$$
Now we differentiate the two sides of the above equations; for the LHS the fundamental theorem of calculus is used to obtain the result directly, and for the RHS a term-wise differentiation is used:
$$
\begin{aligned}
(1-x^2)^{1/2}&=1-\frac12x^2-\frac18x^4-\frac1{16}x^6-\cdots\\
(1-x^2)^{3/2}&=1-\frac32x^2+\frac38x^4+\frac1{16}x^6+\cdots
\end{aligned}
\tag{4.14.4}
$$
Verification. To test his result, Newton squared the series for $(1-x^2)^{1/2}$ and observed that it became $1-x^2$ plus some remaining terms which vanish. Precisely, Newton squared the quantity $1-\frac12x^2-\frac18x^4-\frac1{16}x^6-\frac5{128}x^8+R(x)$ and obtained $1-x^2+Q(x)$, where the lowest-order term of $Q(x)$ has degree 10, i.e., is very small. Today, we can do this verification easily using Sympy.
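A sketch of that check in Python with exact fractions (rather than Sympy), keeping only the terms Newton kept:

```python
from fractions import Fraction as F

# coefficients of 1 - (1/2)x^2 - (1/8)x^4 - (1/16)x^6 - (5/128)x^8,
# stored in powers of x^2
c = [F(1), F(-1, 2), F(-1, 8), F(-1, 16), F(-5, 128)]

# square the truncated series, keeping only terms up to (x^2)^4;
# every discarded product has degree 10 or more, matching Newton's remark
sq = [sum(c[i] * c[k - i] for i in range(k + 1)) for k in range(len(c))]
print(sq)  # coefficients 1, -1, 0, 0, 0, i.e. exactly 1 - x^2
```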
Now comes the surprising part. We all know the binomial theorem, which says, for $n\in\mathbb{N}$, $(1+x)^n=\sum_{k=0}^{n}\binom nkx^k$. The LHS of Eq. (4.14.4) are of the same form, only with rational exponents. The question is: can Eq. (4.14.4) still be written in the same form as the binomial theorem? That is,
$$
(1-x^2)^m=\sum_{k=0}^{\infty}\binom mk(-1)^kx^{2k} \tag{4.14.5}
$$
The answer is yes. The only difference compared with the integral exponent case is that the binomial expansion is now an infinite series when $m$ is a rational number.
Newton computed $\pi$. He considered the first quarter of a unit circle and calculated its area (even though he knew that it is $\pi/4$; thus he wanted to compete with Archimedes on who would get more digits of $\pi$. Actually he was testing his generalized binomial theorem). The function describing the first quarter of a unit circle is $y=\sqrt{1-x^2}$, and thus its area is
$$
A=\int_0^1\sqrt{1-x^2}\,dx
$$
Now comes the power of Eq. (4.14.4): Newton replaced $\sqrt{1-x^2}$ by its power series, and with $A=\pi/4$ he obtained:
$$
\begin{aligned}
\frac\pi4&=\int_0^1\left(1-\frac12x^2-\frac18x^4-\frac1{16}x^6-\frac5{128}x^8-\cdots\right)dx\\
&=\left[x-\frac12\frac{x^3}{3}-\frac18\frac{x^5}{5}-\frac1{16}\frac{x^7}{7}-\frac5{128}\frac{x^9}{9}-\cdots\right]_0^1\\
\Longrightarrow\;\pi&=4\left(1-\frac1{2\cdot3}-\frac1{8\cdot5}-\frac1{16\cdot7}-\frac5{128\cdot9}-\cdots\right)
\end{aligned}
$$
However, he realized that this series converged quite slowly. Why does it converge slowly? Because in the terms $x^n/n$ we substituted $x=1$. If 1 were replaced by a number smaller than 1, then $x^n/n$ would be much smaller, and the series would converge faster. And that is exactly what Newton did: he integrated only to 0.5, and obtained this series (see next figure):
$$
\frac{\pi}{12}+\frac{\sqrt3}{8}=\frac12-\frac{1}{6\cdot8}-\frac{1}{40\cdot32}-\frac{1}{112\cdot128}-\frac{5}{1152\cdot512}-\cdots
$$
with which he managed to compute $\pi$ to at least 15 digits. He admitted as much in 1666 (at the age of 23) when he wrote, "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time."
As you can see, having the right tool, the calculation of $\pi$ became much easier than with the polygonal method of Archimedes.
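Newton's half-interval trick is easy to replay in Python, using the generalized binomial coefficients $\binom{1/2}{k}$ (the helper below is ours) to build the series for $\sqrt{1-x^2}$ and integrating term by term up to $x=1/2$:

```python
import math

def binom_half(k):
    # generalized binomial coefficient C(1/2, k)
    out = 1.0
    for j in range(k):
        out *= (0.5 - j) / (j + 1)
    return out

# integral of sqrt(1 - x^2) from 0 to 1/2, term by term
x = 0.5
area = sum(binom_half(k) * (-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
           for k in range(40))
# geometrically this area equals pi/12 + sqrt(3)/8, so solve for pi
pi_estimate = 12.0 * (area - math.sqrt(3.0) / 8.0)
print(pi_estimate)
```

Because each term carries a factor $(1/4)^k$, forty terms already reach machine precision, echoing Newton's 15 digits.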
We put all the coefficients in Table 4.20 (left) and want to find the coefficients for the column $n=-1$, assuming that the rules work for $n=-1$ as well. It follows that the coefficients for $n=-1$ given in the right table ensure this rule.

m\n   -1    0    1    2    3    4    5        m\n   -1    0    1    2    3    4    5
0           1    1    1    1    1    1        0     +1    1    1    1    1    1    1
1           0    1    2    3    4    5        1     -1    0    1    2    3    4    5
2           0    0    1    3    6   10        2     +1    0    0    1    3    6   10
3           0    0    0    1    4   10        3     -1    0    0    0    1    4   10
4           0    0    0    0    1    5        4     +1    0    0    0    0    1    5
5           0    0    0    0    0    1        5     -1    0    0    0    0    0    1

Table 4.20: Integral binomial coefficients. The row $m=0$ is all 1, following Eq. (4.14.2) (the coefficient of the $x$ term is always 1). The rule of this table is: $a_{m,n+1}=a_{m,n}+a_{m-1,n}$.
Therefore, we can get the integral, and term-wise differentiation gives the series:
$$
\int_0^x\frac{du}{1+u}=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots
\;\Longrightarrow\;
\frac{1}{1+x}=1-x+x^2-x^3+x^4-\cdots
$$
And we obtain the geometric series!
$$
\int(1+x+x^2+x^3+\cdots)\,dx=\int\frac{dx}{1-x}
\;\Longrightarrow\;
x+\frac{x^2}{2}+\frac{x^3}{3}+\cdots=-\ln(1-x) \tag{4.14.9}
$$
Similarly, the geometric series $1-x+x^2-x^3+\cdots$ gives us $\ln(1+x)$:
$$
1-x+x^2-x^3+\cdots=\frac{1}{1+x}
\;\Longrightarrow\;
\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots \tag{4.14.10}
$$
With this, for the first time we are able to compute $\ln2$ directly using only simple arithmetic operations: $\ln2=\ln(1+1)=1-1/2+1/3-1/4+\cdots$. Using a calculator we know that $\ln2=0.6931471805599453$. Let's see how the series in Eq. (4.14.10) performs. The calculation in Table 4.21 (of course done by a Julia code) indicates that this series is practically not useful, as it converges too slowly: see column 2 of the table; with 1000 terms the value is still not close to $\ln2$.

n        Eq. (4.14.10)   Eq. (4.14.11)
1        1.0             0.666667
2        0.5             0.666667
...      ...             ...
11       0.736544        0.693147
...      ...             ...
1000     0.692647        0.693147
How can we get a series with better convergence? The issue might be the alternating $+/-$ signs in the series. By combining the series for $\ln(1+x)$ and $\ln(1-x)$, we can get rid of the terms with negative sign:
$$
\left.\begin{aligned}
\ln(1+x)&=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots\\
\ln(1-x)&=-x-\frac{x^2}{2}-\frac{x^3}{3}-\frac{x^4}{4}-\cdots
\end{aligned}\right\}
\;\Longrightarrow\;
\ln\frac{1+x}{1-x}=2\left(x+\frac{x^3}{3}+\frac{x^5}{5}+\cdots\right)
\tag{4.14.11}
$$
Using $x=1/3$ (so that $(1+x)/(1-x)=2$), we have $\ln2=2\left(1/3+(1/3)^3/3+\cdots\right)$. The data in column 3 of Table 4.21 confirms that this series converges much better: only 11 terms give us 0.693147. What is more, while Eq. (4.14.10) cannot be used to compute $\ln e$ (because of the requirement $|x|<1$), Eq. (4.14.11) can: for any positive number $y$, $x=(y-1)/(y+1)$ satisfies $|x|<1$.
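The two series are easy to race against each other; here is a Python sketch of the computation behind Table 4.21 (the book's version is in Julia):

```python
import math

def ln2_slow(n):
    # Eq. (4.14.10) at x = 1: 1 - 1/2 + 1/3 - ...
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def ln2_fast(n):
    # Eq. (4.14.11) at x = 1/3: 2 * (x + x^3/3 + x^5/5 + ...)
    x = 1.0 / 3.0
    return 2.0 * sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(n))

slow = ln2_slow(1000)
fast = ln2_fast(11)
print(slow, fast, math.log(2))
```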
With this, we can derive the magical formula for $\pi$ discovered by Gregory and Leibniz (actually re-discovered, as 200 years before Leibniz some Indian mathematicians found it). The angle $\pi/4$ has tangent 1, thus $\tan^{-1}1=\pi/4$; with $x=1$ we have:
$$
\frac\pi4=1-\frac13+\frac15-\frac17+\cdots \tag{4.14.13}
$$
This series converges slowly, i.e., we need to use lots of terms (and thus lots of calculations) to get an accurate result for $\pi$. However, the series is theoretically interesting, as it provides a new way of calculating $\pi$. We can instead use $x<1$: note that $\tan(\pi/6)=1/\sqrt3$, so with $x=1/\sqrt3$ we can compute a better approximation of $\pi$. As $\pi$ is involved, is there any hidden circle in Eq. (4.14.13)? The answer is yes, and to see it, check https://www.youtube.com/watch?v=NaL_Cb42WyY&feature=youtu.be.
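Both routes can be compared numerically; the sketch below sums the $\tan^{-1}$ series at $x=1$ and at $x=1/\sqrt3$:

```python
import math

def arctan_series(x, terms):
    # tan^{-1}(x) = x - x^3/3 + x^5/5 - ...
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

pi_leibniz = 4.0 * arctan_series(1.0, 5000)              # converges very slowly
pi_fast = 6.0 * arctan_series(1.0 / math.sqrt(3.0), 30)  # converges quickly
print(pi_leibniz, pi_fast)
```

Thirty terms of the $1/\sqrt3$ series already give $\pi$ to near machine precision, while five thousand terms of Eq. (4.14.13) still miss in the fourth decimal.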
$$
a^\epsilon=1+k\epsilon \tag{4.14.14}
$$
Now, Euler introduced $a^x$ (this is what we need) with $x=N\epsilon$, where $N$ is a very big number so that $\epsilon$ is a very small number. Now we can write $a^x=a^{N\epsilon}=(a^\epsilon)^N$ and use Eq. (4.14.14):
$$
\begin{aligned}
a^x&=(1+k\epsilon)^N\\
&=1+N(k\epsilon)+\frac{N(N-1)}{2!}(k\epsilon)^2+\frac{N(N-1)(N-2)}{3!}(k\epsilon)^3+\cdots&&\text{(binomial theorem)}\\
&=1+Nk\frac xN+\frac{N(N-1)}{2!}k^2\frac{x^2}{N^2}+\frac{N(N-1)(N-2)}{3!}k^3\frac{x^3}{N^3}+\cdots\\
&=1+\frac{kx}{1!}+\frac{(kx)^2}{2!}+\frac{(kx)^3}{3!}+\cdots
\end{aligned}
\tag{4.14.15}
$$
The last equality is due to the fact that $N\approx N-1\approx N-2$ as $N$ is very large. Now we evaluate Eq. (4.14.15) at $x=1$ to get an equation between $a$ and $k$:
$$
a=1+\frac k{1!}+\frac{k^2}{2!}+\frac{k^3}{3!}+\cdots
$$
Euler defined $e$ as the number for which $k=1$:
$$
e=1+\frac1{1!}+\frac1{2!}+\frac1{3!}+\cdots \tag{4.14.16}
$$
The series on the RHS indeed converges, because $n!$ gets bigger and bigger and $1/n!$ becomes close to zero. A small code computing this series gives us $e=2.718281828459045$. With $k=1$, Eq. (4.14.15) allows us to write $e^x$ as
$$
e^x=\left(1+\frac xN\right)^N=1+\frac1{1!}x+\frac1{2!}x^2+\frac1{3!}x^3+\cdots \tag{4.14.17}
$$
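The "small code" can look like this in Python (the book's version is in Julia):

```python
import math

def e_series(n):
    # Eq. (4.14.16): e = 1 + 1/1! + 1/2! + ... + 1/n!
    return sum(1.0 / math.factorial(k) for k in range(n + 1))

approx = e_series(17)
limit_form = (1.0 + 1.0 / 1_000_000) ** 1_000_000   # the (1 + x/N)^N route at x = 1
print(approx, limit_form)
```

The factorial series reaches full double precision with fewer than twenty terms, while $(1+1/N)^N$ with $N=10^6$ is still only correct to about six digits, a nice illustration of how much the series rearrangement gains.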
to get
$$
\begin{aligned}
\cos(n\alpha)&=\frac12\left[(\cos\alpha+i\sin\alpha)^n+(\cos\alpha-i\sin\alpha)^n\right]\\
i\sin(n\alpha)&=\frac12\left[(\cos\alpha+i\sin\alpha)^n-(\cos\alpha-i\sin\alpha)^n\right]
\end{aligned}
$$
Using the binomial theorem $(a+b)^n=\sum_{k=0}^{n}\binom nka^{n-k}b^k$, we can expand the terms $(\cos\alpha\pm i\sin\alpha)^n$ as
$$
\begin{aligned}
(\cos\alpha+i\sin\alpha)^n&=\cos^n\alpha+in\cos^{n-1}\alpha\sin\alpha-\frac{n(n-1)}{2!}\cos^{n-2}\alpha\sin^2\alpha\\
&\quad-i\frac{n(n-1)(n-2)}{3!}\cos^{n-3}\alpha\sin^3\alpha+\frac{n(n-1)(n-2)(n-3)}{4!}\cos^{n-4}\alpha\sin^4\alpha\\
&\quad+i\frac{n(n-1)(n-2)(n-3)(n-4)}{5!}\cos^{n-5}\alpha\sin^5\alpha+\cdots\\
(\cos\alpha-i\sin\alpha)^n&=\cos^n\alpha-in\cos^{n-1}\alpha\sin\alpha-\frac{n(n-1)}{2!}\cos^{n-2}\alpha\sin^2\alpha\\
&\quad+i\frac{n(n-1)(n-2)}{3!}\cos^{n-3}\alpha\sin^3\alpha+\frac{n(n-1)(n-2)(n-3)}{4!}\cos^{n-4}\alpha\sin^4\alpha\\
&\quad-i\frac{n(n-1)(n-2)(n-3)(n-4)}{5!}\cos^{n-5}\alpha\sin^5\alpha+\cdots
\end{aligned}
$$
Therefore,
$$
\begin{aligned}
\cos(n\alpha)&=\cos^n\alpha-\frac{n(n-1)}{2!}\cos^{n-2}\alpha\sin^2\alpha+\frac{n(n-1)(n-2)(n-3)}{4!}\cos^{n-4}\alpha\sin^4\alpha-\cdots\\
\sin(n\alpha)&=n\cos^{n-1}\alpha\sin\alpha-\frac{n(n-1)(n-2)}{3!}\cos^{n-3}\alpha\sin^3\alpha\\
&\quad+\frac{n(n-1)(n-2)(n-3)(n-4)}{5!}\cos^{n-5}\alpha\sin^5\alpha-\cdots
\end{aligned}
$$
Now comes the magic of Euler. Considering $\alpha=x/N$, where $N$ is a very large positive integer, $\alpha$ is very small, leading to $\cos\alpha\approx1$ and $\sin\alpha\approx\alpha$. Hence, $\cos(n\alpha)$ becomes
$$
\begin{aligned}
\sin x&=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots=\sum_{i=1}^{\infty}(-1)^{i-1}\frac{x^{2i-1}}{(2i-1)!}\\
\cos x&=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots=\sum_{i=0}^{\infty}(-1)^i\frac{x^{2i}}{(2i)!}
\end{aligned}
\tag{4.14.18}
$$
We have included the formulas using the sigma notation. It is not for beauty: those formulas
translate directly into our Julia code, see Listing B.3. Even though this was done by the great
mathematician Euler, we should verify them for ourselves. Let's compute $\sin(\pi/4)$ using the
series. With only 5 terms, we get 0.707106781 (the same as $\sqrt{2}/2$ computed using trigonometry
from high-school maths)! Why does it converge so fast?
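To try the verification yourself, here is a small sketch of the series in Python (the book's version of this code is the Julia Listing B.3):

```python
import math

def sin_series(x, terms):
    # sin x = x - x^3/3! + x^5/5! - ... , truncated after `terms` terms
    return sum((-1)**i * x**(2*i + 1) / math.factorial(2*i + 1)
               for i in range(terms))

approx = sin_series(math.pi / 4, 5)
print(approx)  # ~ 0.70710678, i.e. sqrt(2)/2
```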
With Eq. (4.14.18) we can see that the derivative of sine is cosine: just differentiate the first
series and you will obtain the second. Can we also obtain the identity $\sin^2 x + \cos^2 x = 1$ from
these series? Of course; otherwise they would not deserve to be called the sine/cosine series. Some people are skilful
enough to use Eq. (4.14.18) directly to prove this identity, but it is quite messy. We can go the other way:
\[ g(x) = \sin^2 x + \cos^2 x \implies g'(x) = 2\sin x\cos x - 2\cos x\sin x = 0 \implies g(x) = \text{constant} \]
But we know $g(0) = \sin^2 0 + \cos^2 0 = 1$ (using Eq. (4.14.18) of course). So $g = 1$! We still
have to relate the sine/cosine series to the traditional definition of sine/cosine based on a right
triangle, and finally to identities such as $\sin(x+y) = \sin x\cos y + \sin y\cos x$ and so on (all of this
can be done, but that's enough to demonstrate the idea). You might ask: why bother with all of
this? Because if we can do so, then trigonometric functions can be defined
completely without geometry! Why is that useful? Because it means that trigonometric functions
are more powerful than we once thought. Indeed, later on we shall see how these functions play
an important role in many physical problems that have nothing to do with triangles!
Euler's proof was based on the power series of $\sin x$ (see the previous section), and the
fact that if $f(x) = 0$ has solutions $x_1 = a$, $x_2 = b$, etc., then we can factor it as $f(x) =
(a-x)(b-x)(c-x)\cdots$, or as $f(x) = (1 - x/a)(1 - x/b)(1 - x/c)\cdots$ if all of the solutions are different
from zero.
From the power series of $\sin x$ in Eq. (4.14.18), we obtain
\[ f(x) = \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots \quad (4.14.19) \]
As the non-zero solutions of $f(x) = 0$ are $\pm\pi$, $\pm 2\pi$, $\pm 3\pi$, etc., we can also write it as
\[ f(x) = \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\cdots = 1 - \left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots\right)x^2 + \cdots \quad (4.14.20) \]
Equating the coefficients of $x^2$ in Eqs. (4.14.19) and (4.14.20) gives
\[ \frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots = \frac{1}{3!} \implies 1 + \frac{1}{4} + \frac{1}{9} + \cdots = \frac{\pi^2}{6} \quad (4.14.21) \]
It is easy to verify this by writing a small code to calculate the sum $\sum_{i=1}^{n} 1/i^2$, for example
with $n = 1000$, and seeing that the sum indeed approaches $\pi^2/6$. And with this new toy, Euler
continued and calculated the following sums (note that all involve even powers):
\[ 1 + \frac{1}{4} + \frac{1}{9} + \cdots = \frac{\pi^2}{6} \quad \text{(power 2)} \]
\[ 1 + \frac{1}{16} + \frac{1}{81} + \cdots = \frac{\pi^4}{90} \quad \text{(power 4)} \]
But neither Euler nor any mathematician after him has been able to crack the sums with odd powers. For
example, what is $1 + \frac{1}{2^3} + \frac{1}{3^3} + \frac{1}{4^3} + \cdots$?
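The small code verifying the power-2 sum can be sketched in Python; note how slowly the partial sums approach $\pi^2/6$ (the tail of the series behaves like $1/n$):

```python
import math

n = 1000
partial = sum(1.0 / i**2 for i in range(1, n + 1))
print(partial, math.pi**2 / 6)  # ~1.6439 vs ~1.6449: still off by about 1/n
```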
Wallis' infinite product Euler's method simultaneously leads us to Wallis' infinite product
for $\pi$. The derivation is as follows:
\[ \frac{\sin x}{x} = \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\cdots = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots \]
Evaluating at $x = \pi/2$:
\[ \frac{2}{\pi} = \left(1 - \frac{1}{4}\right)\left(1 - \frac{1}{16}\right)\left(1 - \frac{1}{36}\right)\cdots = \frac{3}{4}\cdot\frac{15}{16}\cdot\frac{35}{36}\cdots = \frac{1\cdot 3}{2\cdot 2}\cdot\frac{3\cdot 5}{4\cdot 4}\cdot\frac{5\cdot 7}{6\cdot 6}\cdots \implies \frac{\pi}{2} = \frac{2\cdot 2}{1\cdot 3}\cdot\frac{4\cdot 4}{3\cdot 5}\cdot\frac{6\cdot 6}{5\cdot 7}\cdots \]
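The product converges slowly, but a small script confirms it; a sketch in Python:

```python
import math

# pi/2 = (2*2)/(1*3) * (4*4)/(3*5) * (6*6)/(5*7) * ... (Wallis' product)
prod = 1.0
for k in range(1, 100001):
    prod *= (2*k) * (2*k) / ((2*k - 1) * (2*k + 1))
print(2 * prod)  # slowly approaches pi
```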
Harmonic series and Euler's constant. Up to now we have met three famous numbers in
mathematics: $\pi$, $e$ and $i$. Now is the time to meet the fourth: $\gamma = 0.577215\ldots$ While
Euler did not discover $\pi$, $e$ and $i$, he gave two of them their names ($\pi$ and $e$). He did
discover $\gamma$, but he did not name it.
Recall that $S(n)$–the $n$-th harmonic number–is the sum of the reciprocals of the first $n$ natural
numbers:
\[ S(n) := 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{i=1}^{n}\frac{1}{i} \quad (4.14.22) \]
Euler considered the difference between $S(n)$ and $\ln n$, which converges to $\gamma$:
\[ A(n) := S(n) - \ln n, \qquad \gamma = \lim_{n\to\infty} A(n) \quad (4.14.23) \]
Using a computer, with $n = 10^7$, I got $\gamma = 0.577215$, correct to six decimals. In 1734, Euler
computed $\gamma$ to five decimals. A few years later he computed it to 16 digits.
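The computation just described takes only a few lines; here is a sketch of it in Python:

```python
import math

n = 10**7
harmonic = sum(1.0 / i for i in range(1, n + 1))  # S(n)
gamma = harmonic - math.log(n)                    # A(n) = S(n) - ln n
print(gamma)  # ~ 0.5772157, correct to about six decimals
```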
But hey! How did Euler think of Eq. (4.14.23)? If someone told you to consider this sequence,
you could write a code to compute $A(n)$ and see for yourself that it converges to a value of
0.577215. And you would discover $\gamma$. Now you see the problems with how mathematics is
currently taught and written. For details on the discovery of $\gamma$, I recommend the book Gamma:
Exploring Euler's Constant by Julian Havil [22], an interesting story about $\gamma$. There are many
books about the great, incomparable Euler, e.g. Euler: The Master of Us All by William Dunham
[12] or Paul Nahin's Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills [37].
\[ f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots = \sum_{n=0}^{\infty} a_n x^n \quad (4.14.25) \]
Differentiating repeatedly and evaluating at $x = 0$ gives the coefficients:
\[ f(0) = a_0,\quad f'(0) = a_1,\quad f''(0) = 2!\,a_2,\quad f'''(0) = 3!\,a_3,\quad \ldots,\quad f^{(n)}(0) = n!\,a_n \quad (4.14.26) \]
And putting these coefficients into Eq. (4.14.25), we obtain the Taylor series of any function $f(x)$||:
\[ f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots = \sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n \quad (4.14.27) \]
where the notation $f^{(n)}(x)$ denotes the $n$-th order derivative of $f(x)$; for $n = 0$ we have $f^{(0)}(x) =
f(x)$ (i.e., the 0th derivative is the function itself). See Fig. 4.75 for a demonstration of the
Taylor series of $\cos x$. The more terms we include, the better the approximation of $\cos x$ we get. What
is interesting is that we use information about $f(x)$ only at $x = 0$, yet the Taylor series (with
enough terms) matches the original function at many more points. A Taylor series expanded around
0 is sometimes known as a Maclaurin series, named after the Scottish mathematician Colin
Maclaurin (1698–1746).
There is nothing special about $x = 0$, and we can expand the function at a point $x = a$:
\[ f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n \quad (4.14.28) \]
||Actually not all functions, but smooth functions that have derivatives of all orders.
Figure 4.75: The graph of $\cos x$ and some of its Taylor expansions: $1 - x^2/2$, $1 - x^2/2 + x^4/4!$ and $1 - x^2/2 + x^4/4! - x^6/6!$.
Equipped with Eq. (4.14.27) it is now an easy job to develop power series for trigonometric
functions, exponential functions, logarithmic functions etc. We list the commonly used Taylor series
below:
\[ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty}\frac{x^n}{n!} \qquad x \in \mathbb{R} \]
\[ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!} \qquad x \in \mathbb{R} \]
\[ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!} \qquad x \in \mathbb{R} \]
\[ \arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1} \qquad x \in [-1, 1] \]
\[ \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots = \sum_{n=1}^{\infty}(-1)^{n+1}\frac{x^n}{n} \qquad x \in (-1, 1) \]
\[ \frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots = \sum_{n=0}^{\infty} x^n \qquad x \in (-1, 1) \]
If we look at the Taylor series of $\cos x$ we do not see odd powers. Why? Because
$\cos(-x) = \cos(x)$: cosine is an even function. Similarly, in the series of the sine we do not
see even powers. In the above equations, for each series a condition, e.g. $x \in [-1, 1]$, was included.
This shows for which values of $x$ we can use the Taylor series to represent the original
function. For example, if $|x| > 1$ then we cannot use $x - x^3/3 + x^5/5 - x^7/7 + \cdots$ to replace
$\arctan x$.
In Fig. 4.76 we plot $e^x$ and $\ln(1+x)$ and their Taylor series with different numbers of terms
$n$. We see that the more terms used, the more accurate the Taylor series is. But how accurate
exactly? You might guess the next thing mathematicians would do is find the error associated
with a truncated Taylor series (we cannot afford to use large $n$, so we use only a small
number of terms; this introduces an error, and we have to be able to quantify it).
Section 4.14.10 is devoted to this topic.
Taylor series of other functions. For functions composed of elementary functions, using the
definition of the Taylor series directly is difficult. We can find the Taylor series of such functions indirectly.
For example, to find the Taylor series of the function
\[ f(x) = \ln(\cos x), \qquad x \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \]
we first rewrite $f(x)$ in the form $\ln(1+t)$, for which a Taylor series is available:
\[ f(x) = \ln(1 + (\cos x - 1)) = (\cos x - 1) - \frac{(\cos x - 1)^2}{2} + \frac{(\cos x - 1)^3}{3} - \frac{(\cos x - 1)^4}{4} + \cdots \quad (4.14.29) \]
Now we use the Taylor series for $\cos x$:
\[ \cos x - 1 = -\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \quad (4.14.30) \]
Figure 4.76: The graphs of $e^x$ and $\ln(1+x)$ together with their Taylor polynomials $T_n(x)$ for various $n$.
Substituting Eq. (4.14.30) into Eq. (4.14.29):
\[ f(x) = \left(-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\right) - \frac{1}{2}\left(-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\right)^2 + \frac{1}{3}\left(-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\right)^3 + \cdots \]
Assuming we ignore terms of order 8 and above, we can compute $f(x)$ as:
\[ \ln(\cos x) = \left(-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!}\right) - \frac{1}{2}\left(-\frac{x^2}{2!} + \frac{x^4}{4!}\right)^2 + \frac{1}{3}\left(-\frac{x^2}{2!}\right)^3 = -\frac{x^2}{2} - \frac{x^4}{12} - \frac{x^6}{45} + O(x^8) \]
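The truncated series is easy to check numerically against the standard library's log and cos; a Python sketch:

```python
import math

x = 0.1
series = -x**2/2 - x**4/12 - x**6/45   # truncated series, error O(x^8)
exact = math.log(math.cos(x))
print(series, exact)  # agree to roughly ten decimal places for small x
```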
Big O notation. In the above equation I have introduced the big O notation ($O(x^8)$): because
we neglected terms of order eight and above, the notation $O(x^8)$ is used. Let's see one
example: the sum of the first $n$ positive integers is
\[ 1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2} = \frac{n^2}{2} + \frac{n}{2} \]
When $n$ is large, the second term is much smaller than the first, so the order of
magnitude of $1 + 2 + \cdots + n$ is $n^2$; the factor $1/2$ is not important. So we write
\[ 1 + 2 + 3 + \cdots + n = O(n^2) \]
To get familiar with this notation, we write below the full Taylor series for $e^x$ and two
truncated series:
\[ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = 1 + \frac{x}{1!} + O(x^2) \]
\[ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = 1 + \frac{x}{1!} + \frac{x^2}{2!} + O(x^3) \]
The notation $O(x^2)$ allows us to express the fact that the error in $e^x \approx 1 + x$ is smaller in
absolute value than some constant times $x^2$ if $x$ is close enough to 0. The big O notation
is also called Landau's symbol, named after the German number theorist Edmund Landau
(1877–1938) who invented it. The letter O is for order.
Splitting the Taylor series at term $n$:
\[ f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n = \underbrace{\sum_{i=0}^{n}\frac{f^{(i)}(a)}{i!}(x-a)^i}_{T_n(x)} + \underbrace{\sum_{i=n+1}^{\infty}\frac{f^{(i)}(a)}{i!}(x-a)^i}_{R_n(x)} \quad (4.14.31) \]
The first sum (which has finitely many terms) is a polynomial of degree $n$ and is thus called a Taylor polynomial,
denoted by $T_n(x)$. The remaining term is called, understandably, the remainder, $R_n(x)$.
Scientists and engineers often make the approximation $f(x) \approx T_n(x)$, because it's
easy to work with a polynomial (differentiation, integration and root finding of a polynomial are
straightforward). In this case $R_n(x)$ becomes the error of this approximation. The remainder admits the following Lagrange form:
Theorem 4.14.1
\[ R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \quad \text{for some } c \text{ between } a \text{ and } x \quad (4.14.32) \]
Example 4.4
The Taylor series for $y = e^x$ at $a = 0$ with the remainder is given by
\[ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + R_n(x), \qquad R_n(x) = \frac{e^c}{(n+1)!}x^{n+1} \]
where $c$ lies between 0 and $x$. The nice thing with $e^x$ is that $R_n(x)$ approaches zero as $n$ grows large. Note
that $|c| < |x|$ and $e^x$ is an increasing function, thus
\[ |R_n(x)| \le \frac{e^{|x|}}{(n+1)!}|x^{n+1}| \implies \lim_{n\to\infty}|R_n(x)| \le e^{|x|}\lim_{n\to\infty}\frac{|x|^{n+1}}{(n+1)!} = 0 \]
See Section 4.10.4 if you're not clear why the final limit is zero.
First, write a small Julia code to verify this formula (use $n = 100$ and compute the RHS
to see if it matches $\pi = 3.1415\ldots$). How on earth did mathematicians discover this kind of
equation? They started with a definite integral whose value involves $\pi$:
\[ \int_0^{1/2}\frac{dx}{x^2 - x + 1} = \frac{\pi}{3\sqrt{3}} \]
If you cannot evaluate this integral: complete the square for $x^2 - x + 1$, then use a
trigonometric substitution ($\tan\theta$). That's not the interesting part. Here is the great stuff:
\[ 1 + x^3 = (1 + x)(x^2 - x + 1) \]
Thus,
\[ I = \int_0^{1/2}\frac{dx}{x^2 - x + 1} = \int_0^{1/2}\frac{x+1}{1+x^3}\,dx = \int_0^{1/2}\frac{x\,dx}{1+x^3} + \int_0^{1/2}\frac{dx}{1+x^3} \]
Of course, now we replace the integrands by the corresponding power series. Starting with the
geometric series
\[ \frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots \]
we then have:
\[ \frac{1}{1+x^3} = 1 - x^3 + x^6 - x^9 + \cdots \]
\[ \frac{x}{1+x^3} = x - x^4 + x^7 - x^{10} + \cdots \quad \text{(obtained from the above times } x\text{)} \]
Now the integral $I$ can be evaluated using these series:
\[ I = \int_0^{1/2}(x - x^4 + x^7 - x^{10} + \cdots)\,dx + \int_0^{1/2}(1 - x^3 + x^6 - x^9 + \cdots)\,dx \]
\[ = \left(\frac{1}{2\cdot 2^2} - \frac{1}{5\cdot 2^5} + \frac{1}{8\cdot 2^8} - \cdots\right) + \left(\frac{1}{1\cdot 2} - \frac{1}{4\cdot 2^4} + \frac{1}{7\cdot 2^7} - \cdots\right) \]
Now we can understand Eq. (4.15.1).
Mysterious function $x^x$. What would the graph of the mysterious function $y = x^x$ look like? Can it
be defined for negative $x$? Is it an increasing or decreasing function? We leave that for you; instead
we focus on the integration of this function. That is, we consider the following integral:
\[ I := \int_0^1 x^x\,dx \]
The key is to write $x^x = (e^{\ln x})^x = e^{x\ln x}$ and expand the exponential as a power series,
$e^{x\ln x} = \sum_{n=0}^{\infty}(x\ln x)^n/n!$, so we need the integrals $\int_0^1 (x\ln x)^n\,dx$. Integration by parts gives
\[ \int_0^1 (x\ln x)^n\,dx = \left[\frac{x^{n+1}(\ln x)^n}{n+1}\right]_0^1 - \frac{n}{n+1}\int_0^1 x^n(\ln x)^{n-1}\,dx = -\frac{n}{n+1}\int_0^1 x^n(\ln x)^{n-1}\,dx \]
This is because $\lim_{x\to 0} x^{n+1}(\ln x)^n = 0$. Now, repeatedly applying integration by parts to
lower the power of $\ln x$, we obtain:
\[ \int_0^1 (x\ln x)^n\,dx = -\frac{n}{n+1}\int_0^1 x^n(\ln x)^{n-1}\,dx = \frac{n}{n+1}\cdot\frac{n-1}{n+1}\int_0^1 x^n(\ln x)^{n-2}\,dx = -\frac{n}{n+1}\cdot\frac{n-1}{n+1}\cdot\frac{n-2}{n+1}\int_0^1 x^n(\ln x)^{n-3}\,dx = \cdots \]
and, after $n$ steps,
\[ \int_0^1 (x\ln x)^n\,dx = (-1)^n\frac{n!}{(n+1)^{n+1}} \]
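Putting the two pieces together (the expansion of $e^{x\ln x}$ and the integral just derived, with the $n!$ cancelling) gives $I = \sum_{n=0}^{\infty}(-1)^n/(n+1)^{n+1}$. A Python sketch checking this series against a crude midpoint-rule estimate of the integral:

```python
# Term-by-term integration of e^{x ln x} = sum (x ln x)^n / n!, using
# ∫₀¹ (x ln x)^n dx = (-1)^n n!/(n+1)^{n+1}, gives sum (-1)^n/(n+1)^{n+1}.
series = sum((-1)**n / (n + 1)**(n + 1) for n in range(20))

# Independent check: crude midpoint rule for ∫₀¹ x^x dx
m = 100000
numeric = sum(((i + 0.5) / m) ** ((i + 0.5) / m) for i in range(m)) / m
print(series, numeric)  # both ~ 0.7834
```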
Consider the limit
\[ \lim_{x\to 0}\frac{x^2 e^x}{\cos x - 1} \]
And again, the idea is to replace $e^x$ and $\cos x$ by their Taylor series, and the
limit comes easily:
\[ A = \frac{x^2\left(1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right)}{-\frac{x^2}{2!} + \frac{x^4}{4!} - \cdots} = \frac{x^2 + \frac{x^3}{1!} + \frac{x^4}{2!} + \frac{x^5}{3!} + \cdots}{-\frac{x^2}{2!} + \frac{x^4}{4!} - \cdots} \implies \lim_{x\to 0} A = -2 \]
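The limit can also be checked numerically; a Python sketch:

```python
import math

# lim_{x->0} x^2 e^x / (cos x - 1) = -2: evaluate at smaller and smaller x
for x in (0.1, 0.01, 0.001):
    print(x, x**2 * math.exp(x) / (math.cos(x) - 1))

ratio = 0.001**2 * math.exp(0.001) / (math.cos(0.001) - 1)  # ~ -2.002
```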
What is the value of the series
\[ 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots = \;? \]
We recognize that this is the series of $e^x$ evaluated at $x = -1$, so the series converges to $1/e$.
Start from the series of $e^x - 1$:
\[ e^x - 1 = \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + O(x^6) \]
Then, we can write $1/(e^x - 1)$ as
\[ \frac{1}{e^x - 1} = \frac{1}{\frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + O(x^6)} = \frac{1}{x}\cdot\frac{1}{1 + \underbrace{\frac{x}{2} + \frac{x^2}{6} + \frac{x^3}{24} + \frac{x^4}{120} + O(x^5)}_{y}} \]
Now, using the series $1/(1+y) = 1 - y + y^2 - y^3 + y^4 - \cdots$ (we stop at $y^4$ as we skip
terms of powers higher than 4), and also using SymPy, we get
\[ \frac{x}{e^x - 1} = 1 - \frac{x}{2} + \frac{x^2}{12} - \frac{x^4}{720} + \cdots = 1\cdot\frac{x^0}{0!} - \frac{1}{2}\cdot\frac{x^1}{1!} + \frac{1}{6}\cdot\frac{x^2}{2!} - \frac{1}{30}\cdot\frac{x^4}{4!} + \cdots = \sum_{n=0}^{\infty} B_n\frac{x^n}{n!} \quad (4.16.1) \]
The second equality is there to introduce $n!$ into the formula, as we want to follow the pattern of the
Taylor series. With that, we obtain a nice series for $x/(e^x - 1)$ in which the Bernoulli numbers show
up again! They are
\[ B_0 = 1,\quad B_1 = -\frac{1}{2},\quad B_2 = \frac{1}{6},\quad B_3 = 0,\quad B_4 = -\frac{1}{30},\quad B_5 = 0,\quad B_6 = \frac{1}{42},\quad B_7 = 0,\;\ldots \]
Recurrence relation between Bernoulli numbers. Recall that we have met the Fibonacci numbers,
which are related to each other by a recurrence. We now ask whether there exists a similar relation between the
Bernoulli numbers. The answer is yes; that's why mathematics is super interesting. The way to
derive this relation is also beautiful. From Eq. (4.16.1), we can write $x$ as the product of $e^x - 1$
and $\sum_{n=0}^{\infty} B_n x^n/n!$:
\[ x = (e^x - 1)\sum_{n=0}^{\infty} B_n\frac{x^n}{n!} = \left(\frac{x}{1!} + \frac{x^2}{2!} + \cdots\right)\sum_{n=0}^{\infty} B_n\frac{x^n}{n!} = \left(\sum_{m=1}^{\infty}\frac{x^m}{m!}\right)\left(\sum_{n=0}^{\infty} B_n\frac{x^n}{n!}\right) = \left(\sum_{m=0}^{\infty}\frac{x^{m+1}}{(m+1)!}\right)\left(\sum_{n=0}^{\infty} B_n\frac{x^n}{n!}\right) \]
The last equality converts the lower limit of the summation $\sum_{m=1}^{\infty} x^m/m!$ from 1 to zero, so that we can
apply the Cauchy product. Now, we use the Cauchy product of two series to get
\[ x = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}\frac{x^{n-k+1}}{(n-k+1)!}\cdot B_k\frac{x^k}{k!}\right) = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}\binom{n+1}{k}B_k\right)\frac{x^{n+1}}{(n+1)!} \]
Refer to Eq. (7.12.2) for the derivation.
This is an identity of power series in $x$: just as $x = b_0 x + (b_1 + b_2)x^2$ for all $x$ forces $b_0 = 1$ and $b_1 + b_2 = 0$, matching coefficients here gives, explicitly,
\[ 1 = B_0 \]
\[ 0 = B_0 + 2B_1 \]
\[ 0 = B_0 + 3B_1 + 3B_2 \]
\[ 0 = B_0 + 4B_1 + 6B_2 + 4B_3 \]
\[ 0 = B_0 + 5B_1 + 10B_2 + 10B_3 + 5B_4 \]
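These equations determine the Bernoulli numbers one after another; a Python sketch of the recurrence using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

# B_0 = 1 and, for n >= 1, sum_{k=0}^{n} C(n+1, k) B_k = 0,
# which lets us solve for B_n from the previous values.
B = [Fraction(1)]
for n in range(1, 9):
    B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / Fraction(n + 1))
print(B)  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```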
Cotangent and Bernoulli numbers. If we consider the function $g(x) = x/(e^x - 1) - B_1 x$, we get (check Section 5.15 for details):
\[ g(x) := \frac{x}{e^x - 1} - B_1 x = \frac{x}{2}\cdot\frac{e^{x/2} + e^{-x/2}}{e^{x/2} - e^{-x/2}} = \sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}x^{2n} \]
If we know the hyperbolic trigonometric functions (see Section 3.14), then it is not hard to see that
the middle expression is $(x/2)\coth(x/2)$; and thus we're led to
\[ \frac{x}{2}\coth\frac{x}{2} = \sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}x^{2n} \]
And to get from coth to cot, just replace $x$ by $ix$, and we get the series for the cotangent function:
\[ \cot x = \sum_{n=0}^{\infty}(-1)^n\frac{2B_{2n}}{(2n)!}(2x)^{2n-1} \]
Now, to simplify the notation, we simply write $S_m$ for $S_m(n)$. And for later use, we list the first
few sums:
\[ S_0 = 1^0 + 2^0 + 3^0 + \cdots + n^0 = B_0 n \]
\[ S_1 = 1^1 + 2^1 + 3^1 + \cdots + n^1 = \frac{1}{2}\left(B_0 n^2 - 2B_1 n\right) \]
\[ S_2 = 1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{1}{3}\left(B_0 n^3 - 3B_1 n^2 + 3B_2 n\right) \quad (4.17.1) \]
\[ S_3 = 1^3 + 2^3 + 3^3 + \cdots + n^3 = \frac{1}{4}\left(B_0 n^4 - 4B_1 n^3 + 6B_2 n^2 + 4B_3 n\right) \]
\[ S_4 = 1^4 + 2^4 + 3^4 + \cdots + n^4 = \frac{1}{5}\left(B_0 n^5 - 5B_1 n^4 + 10B_2 n^3 + 10B_3 n^2 + 5B_4 n\right) \]
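These formulas are easy to test; a Python sketch checking $S_2$ for $n = 100$ with exact rationals:

```python
from fractions import Fraction

# S_2(n) = (1/3)(B0*n^3 - 3*B1*n^2 + 3*B2*n), with B0 = 1, B1 = -1/2, B2 = 1/6
B0, B1, B2 = Fraction(1), Fraction(-1, 2), Fraction(1, 6)
n = 100
formula = (B0 * n**3 - 3 * B1 * n**2 + 3 * B2 * n) / 3
direct = sum(i**2 for i in range(1, n + 1))
print(formula, direct)  # both 338350
```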
The Euler–Maclaurin summation formula involves the sum of a function $y = f(x)$ evaluated
at integer values of $x$ from 1 to $n$. For example, consider $y = x^2$ and the sum $f(1) + f(2) + \cdots + f(n)$,
which is nothing but the $S_2$ we're familiar with. Consider another function, $y = x^2 + 3x + 2$,
and the sum $S := f(1) + f(2) + \cdots + f(n)$, which is nothing but $S_2 + 3S_1 + 2S_0$. To conclude: for
polynomials, $S$ can be written in terms of $S_0, S_1, \ldots$, and we know how to compute $S_0, S_1, \ldots$
using Eq. (4.17.1).
Moving on now to non-polynomial functions such as $\sin x$ or $e^x$. Thanks to Taylor, we can
express these functions as power series, and we return to the business of dealing with
polynomials. For an arbitrary function $f(x)$–which is assumed to have a Taylor
expansion–we can then write
\[ f(x) = c_0 + c_1 x + c_2 x^2 + \cdots \]
Thus, we can compute $S = \sum_{i=1}^{n} f(i)$ in the same manner as we did for polynomials, only this
time we have an infinite sum:
\[ S := \sum_{i=1}^{n} f(i) = c_0 S_0 + c_1 S_1 + c_2 S_2 + c_3 S_3 + \cdots \]
Now, we need to massage $S$ a bit so that it tells us the hidden truth; we group the terms with
$B_0, B_1, \ldots$:
\[ S = B_0\left(c_0 n + \frac{c_1 n^2}{2} + \frac{c_2 n^3}{3} + \frac{c_3 n^4}{4} + \cdots\right) - B_1\left(c_1 n + c_2 n^2 + c_3 n^3 + c_4 n^4 + \cdots\right) + B_2\left(c_2 n + \frac{3}{2}c_3 n^2 + 2c_4 n^3 + \cdots\right) + B_3(\cdots) + \cdots \]
Now comes the magic: the $B_0$ bracket is the integral of $f(x)$||, the $B_1$ bracket is $f(n) - f(0)$,
the $B_2$ bracket is $\frac{1}{2!}\left(f'(n) - f'(0)\right)$, and so on. So we have
\[ S = \int_0^n f(x)\,dx - B_1\left(f(n) - f(0)\right) + \frac{B_2}{2!}\left(f'(n) - f'(0)\right) + \frac{B_3}{3!}\left(f''(n) - f''(0)\right) + \cdots \]
Noting that the odd Bernoulli numbers $B_{2k+1}$ are all zero except $B_1$, and that $B_1 = -1/2$, we can rewrite the above equation
as
\[ \sum_{i=1}^{n} f(i) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(0)\right) \]
Why can this formula be useful when it replaces a finite sum by a definite integral (which can
be computed) plus an infinite sum? You will see that it is a powerful formula for computing sums,
both infinite and finite. It was the powerful weapon that Euler used to compute
$\sum_{k=1}^{\infty} 1/k^2$ in the Basel problem. But first, we need to polish our formula, because there is an
asymmetry in it: on the LHS we start from 1, but on the RHS we start from 0. If we
add $f(0)$ to both sides, we get a nicer formula:
\[ \sum_{i=0}^{n} f(i) = \int_0^n f(x)\,dx + \frac{f(n) + f(0)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(0)\right) \]
Now, why start from 0? What if $f(0)$ is undefined (e.g. for $f(x) = 1/x^2$)? We can in fact
start from any value smaller than $n$. Let's consider $m < n$, and compute two sums:
\[ \sum_{i=0}^{n} f(i) = \int_0^n f(x)\,dx + \frac{f(n) + f(0)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(0)\right) \]
\[ \sum_{i=0}^{m} f(i) = \int_0^m f(x)\,dx + \frac{f(m) + f(0)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(m) - f^{(2k-1)}(0)\right) \]
Now, subtracting the second formula from the first, we get a formula which nearly starts
from $m$ (note that on the LHS we start from $m+1$, because $f(m)$ was removed):
\[ \sum_{i=m+1}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(n) - f(m)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) \]
||$\int_0^n (c_0 + c_1 x + c_2 x^2 + \cdots)\,dx = \left(c_0 x + c_1 x^2/2 + c_2 x^3/3 + \cdots\right)\big|_0^n$.
Using the same trick of adding $f(m)$ to both sides, we finally arrive at
\[ \sum_{i=m}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(n) + f(m)}{2} + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) \quad (4.17.2) \]
And this is the Euler–Maclaurin summation formula, usually abbreviated as EMSF, about which
D. Pengelley wrote "the formula that dances between continuous and discrete". This is the form
without the remainder term, because in this form we do not know where to truncate the
infinite series.
Basel sum. Now we use the EMSF to compute the Basel sum, tracing the footsteps of the great
Euler. We split the sum of the squared reciprocals of the positive integers in two:
\[ \sum_{k=1}^{\infty}\frac{1}{k^2} = \sum_{k=1}^{N-1}\frac{1}{k^2} + \sum_{k=N}^{\infty}\frac{1}{k^2} \quad (4.17.3) \]
The first sum has only a few terms, so we compute it explicitly (i.e., adding term by term); for
the second sum, we use the EMSF in Eq. (4.17.2). With $f(x) = 1/x^2$, the EMSF gives
\[ \sum_{k=N}^{\infty}\frac{1}{k^2} = \frac{1}{N} + \frac{1}{2N^2} + \frac{1}{6N^3} - \frac{1}{30N^5} + \frac{1}{42N^7} - \cdots \]
For example, with $N = 10$ we have (keeping only four terms of the above series)
\[ \sum_{k=1}^{\infty}\frac{1}{k^2} \approx \sum_{k=1}^{9}\frac{1}{k^2} + \frac{1}{N} + \frac{1}{2N^2} + \frac{1}{6N^3} - \frac{1}{30N^5} \]
An infinite sum was computed using only 13 terms! How about the accuracy? The exact
value is $\pi^2/6 = 1.6449340668482264$, and the one based on the EMSF is $1.644934064499874$:
an accuracy of eight decimals. If we did not know the EMSF, we would have had to compute 1 billion
terms to get an accuracy of 8 decimals! Note that $\sum_{k=1}^{9} 1/k^2$ is only $1.539767731166540$.
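The whole computation fits in a few lines; a Python sketch:

```python
import math

N = 10
head = sum(1.0 / k**2 for k in range(1, N))          # 9 explicit terms
tail = 1/N + 1/(2*N**2) + 1/(6*N**3) - 1/(30*N**5)   # four EMSF tail terms
print(head + tail, math.pi**2 / 6)  # agree to about eight decimals
```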
whose proofs are not discussed here. We once wondered, in a boring calculus class, why
we spent a significant amount of our youth computing seemingly useless integrals like the
ones above. It is interesting to realize that these integrals play an important role in mathematics and
hence in our lives.
Now, Fourier believed that it is possible to expand any periodic function $f(x)$ with period
$2\pi$ as a trigonometric infinite series (as mentioned, refer to Sections 8.9 and 8.11 to see why
Fourier came up with this idea; once the idea is there, the remaining steps are usually not hard,
as I can understand them):
\[ f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right) \quad (4.18.2) \]
We do not have $b_0$ because $\sin 0x = 0$. This trigonometric infinite series is called a Fourier
series, and the coefficients $a_n$, $b_n$ are called the Fourier coefficients. Our goal now is to determine
these coefficients.
For $a_0$, we just integrate both sides of Eq. (4.18.2) from $-\pi$ to $\pi$ to get:
\[ \int_{-\pi}^{\pi} f(x)\,dx = \int_{-\pi}^{\pi} a_0\,dx + \sum_{n=1}^{\infty}\left(a_n\int_{-\pi}^{\pi}\cos nx\,dx + b_n\int_{-\pi}^{\pi}\sin nx\,dx\right) \quad (4.18.3) \]
Now the "seemingly useless" integrals in Eq. (4.18.1) come into play: the integrals of $\cos nx$ and $\sin nx$ are all
zero, so
\[ \int_{-\pi}^{\pi} f(x)\,dx = 2\pi a_0 \implies a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx \quad (4.18.4) \]
The result does not change if we integrate from 0 to $2\pi$. In fact, if a function $y = f(x)$ is $T$-periodic, then
\[ \int_a^{a+T} f(x)\,dx = \int_b^{b+T} f(x)\,dx \]
Draw a picture of this periodic function, note that an integral is an area, and you will see why this equation holds.
For $a_n$ with $n \ge 1$, we multiply Eq. (4.18.2) by $\cos mx$ and integrate both sides of the
resulting equation. Doing so gives us:
\[ \int_{-\pi}^{\pi} f(x)\cos mx\,dx = a_0\int_{-\pi}^{\pi}\cos mx\,dx + \sum_{n=1}^{\infty}\left(a_n\int_{-\pi}^{\pi}\cos mx\cos nx\,dx + b_n\int_{-\pi}^{\pi}\cos mx\sin nx\,dx\right) \]
Again, the integrals in Eq. (4.18.1) help us a lot here: the first and last integrals vanish. We're left with
the term
\[ \sum_{n=1}^{\infty} a_n\int_{-\pi}^{\pi}\cos mx\cos nx\,dx \]
As this integral is zero when $n \ne m$ and equal to $\pi$ when $n = m$, the above term equals $\pi a_m$. Thus,
\[ \int_{-\pi}^{\pi} f(x)\cos mx\,dx = \pi a_m \implies a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx \quad (4.18.5) \]
Similarly, for $b_n$ we multiply Eq. (4.18.2) by $\sin mx$ and integrate both sides of the
resulting equation. Doing so gives us:
\[ b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin mx\,dx \quad (4.18.6) \]
Example 1. As the first application of Fourier series, let's try the square wave function given by
\[ f(x) = \begin{cases} 0 & \text{if } -\pi \le x < 0 \\ 1 & \text{if } 0 \le x < \pi \end{cases}, \qquad f(x + 2\pi) = f(x) \quad (4.18.7) \]
Square waves are often encountered in electronics and signal processing, particularly digital
electronics and digital signal processing. Mathematicians call the function in Eq. (4.18.7) a
piecewise continuous function. This is because the function consists of several pieces, each
piece defined on a sub-interval. Within a sub-interval the function is continuous, but at some
points between two neighboring sub-intervals there is a jump.
The determination of the Fourier coefficients for this function is quite straightforward:
\[ a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{2\pi}\int_0^{\pi} dx = \frac{1}{2} \]
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_0^{\pi}\cos nx\,dx = 0 \]
\[ b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\int_0^{\pi}\sin nx\,dx = \frac{1}{n\pi}(1 - \cos n\pi) \]
Note that $b_n$ is non-zero only for odd $n$; in that case $\cos n\pi = -1$ and $b_n = 2/(n\pi)$. Thus, the Fourier series of
this square wave is:
\[ f(x) = \frac{1}{2} + \frac{2}{\pi}\sin x + \frac{2}{3\pi}\sin 3x + \cdots = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2}{(2n-1)\pi}\sin(2n-1)x \quad (4.18.8) \]
Fig. 4.77 plots the square wave along with some of its Fourier series with 1, 3, 5, 7 and 15 terms.
With more than 7 terms, a good approximation is obtained. Note that Taylor series cannot do
this!
Figure 4.77: Representing a square wave function by the finite Fourier series $S_n = \frac{1}{2} + \frac{2}{\pi}\sin x + \cdots + \frac{2}{n\pi}\sin nx$ for $n = 2k-1$, shown for $n = 1, 3, 5, 7, 11$ and $15$.
Let's have some fun with this new toy, and we will rediscover an old series. For $0 \le x < \pi$,
$f(x) = 1$, so we can write $1 = 1/2 + (2/\pi)\sin x + (2/3\pi)\sin 3x + \cdots$. Then a bit of algebra, and
finally choosing $x = \pi/2$, and we see again the well-known series for $\pi/4$:
\[ \frac{1}{2} = \frac{2}{\pi}\sin x + \frac{2}{3\pi}\sin 3x + \cdots \]
\[ \frac{\pi}{4} = \sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \cdots \]
\[ \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \quad \text{(evaluating the above equation at } x = \pi/2\text{)} \]
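Comparing this with a brute-force partial sum shows how slowly the Leibniz series converges; a Python sketch:

```python
import math

# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... (the square-wave series at x = pi/2)
s = sum((-1)**n / (2*n + 1) for n in range(100000))
print(4 * s)  # approaches pi, but very slowly (error ~ 1/n)
```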
The determination of the Fourier coefficients for this function is also straightforward:
\[ a_0 = \frac{1}{2}\int_{-1}^{1}|x|\,dx = \int_0^1 x\,dx = \frac{1}{2} \]
\[ a_n = \int_{-1}^{1}|x|\cos n\pi x\,dx = 2\int_0^1 x\cos n\pi x\,dx = \frac{2}{n^2\pi^2}(\cos n\pi - 1) \]
\[ b_n = \int_{-1}^{1}|x|\sin n\pi x\,dx = 0 \quad (|x|\sin n\pi x \text{ is an odd function}) \]
Of course, we have used integration by parts to compute $a_n$. Note that $a_n$ is non-zero only for
odd $n$; in that case $\cos n\pi = -1$. Thus, the Fourier series of this triangular wave is:
\[ f(x) = \frac{1}{2} - \frac{4}{\pi^2}\cos\pi x - \frac{4}{9\pi^2}\cos 3\pi x - \cdots = \frac{1}{2} - \sum_{n=1}^{\infty}\frac{4}{(2n-1)^2\pi^2}\cos(2n-1)\pi x \quad (4.18.11) \]
A plot of some Fourier series of this function is given in Fig. 4.78. Only four terms and we
obtain a very good approximation.
Figure 4.78: Representing a triangular wave function by the finite Fourier series $S_n = \frac{1}{2} - \frac{4}{\pi^2}\cos\pi x - \cdots - \frac{4}{n^2\pi^2}\cos n\pi x$ for $n = 2k-1$, shown for $n = 1, 3, 5$.
Similarly to example 1, we can also get a nice series related to $\pi$ by considering $f(x)$ and
its Fourier series at $x = 0$:
\[ f(x) = \frac{1}{2} - \frac{4}{\pi^2}\cos\pi x - \frac{4}{9\pi^2}\cos 3\pi x - \frac{4}{25\pi^2}\cos 5\pi x - \cdots \]
\[ 0 = \frac{1}{2} - \frac{4}{\pi^2} - \frac{4}{9\pi^2} - \frac{4}{25\pi^2} - \cdots \implies \frac{\pi^2}{8} = \frac{1}{1} + \frac{1}{9} + \frac{1}{25} + \cdots \]
Now, what is important to consider is the difference between the Fourier series for the square
wave and the triangular wave. I put the two series side by side:
\[ \text{square wave: } f(x) = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2}{(2n-1)\pi}\sin(2n-1)x \]
\[ \text{triangular wave: } f(x) = \frac{1}{2} - \sum_{n=1}^{\infty}\frac{4}{(2n-1)^2\pi^2}\cos(2n-1)\pi x \]
Now we can see why we need fewer terms in the Fourier series to represent the triangular wave
than the square wave. The difference lies in the exponent of $(2n-1)$ in the denominator: the terms in the triangular series
approach zero faster than the terms in the square series. And by looking at the shapes of these
waves, it makes sense: smoother waves are easier for Fourier series to converge to, and
the square wave has discontinuities.
Having another way to look at Fourier series is itself something significant. Still, we can see the
benefits of the complex form: instead of having $a_0$, $a_n$ and $b_n$ and the sines and cosines, we
just have $c_n$ and the complex exponential.
We have more, a lot more, to say about Fourier series, e.g. Fourier transforms, the discrete Fourier
transform, fast Fourier transforms etc. (Section 8.12). We still do not know the meanings of
the $a$'s and $b$'s (or $c_n$). We do not know which functions can have a Fourier series. To answer
these questions we need more maths, such as linear algebra. I have introduced Fourier
series this early for these reasons. First, we learned about Taylor series (which allow us
to represent a function with a power series); now we have something similar: Fourier series,
where a function is represented as a trigonometric series. Second, something like the identity
$\int \sin nx\cos mx\,dx = 0$ looks useless, but it is not.
About Fourier's idea of expressing a function as a trigonometric series, the German mathematician
Bernhard Riemann once said:
Nearly fifty years has passed without any progress on the question of analytic repre-
sentation of an arbitrary function, when an assertion of Fourier threw new light on
the subject. Thus a new era began for the development of this part of Mathematics
and this was heralded in a stunning way by major developments in mathematical
physics.
Recall the integral formula for the factorial:
\[ n! = \int_0^{\infty} t^n e^{-t}\,dt \]
Therefore,
\[ \left(\frac{1}{2}\right)! = \int_0^{\infty} t^{1/2} e^{-t}\,dt = \frac{\sqrt{\pi}}{2} \quad (4.19.4) \]
For the final integral, use the change of variable $u = t^{1/2}$, and we get $2\int_0^{\infty} u^2 e^{-u^2}\,du$.
Next comes the Riemann zeta function,
\[ \zeta(z) := \sum_{k=1}^{\infty}\frac{1}{k^z}, \qquad z \in \mathbb{C} \]
which can be seen as the sum of the reciprocals of the $z$-th powers of the natural numbers.
4.20 Review
It was a long chapter. This is no surprise, for we have covered mathematics developed over
a time span of about 200 years. But, as is always the case, try not to lose the forest for the
trees. The core of calculus is simple, and I am trying to summarize that core now. Understand
that and the rest will follow quite naturally (except the rigorous foundation–that's super hard).
The calculus is the mathematics of change: it provides us with notions, symbols and methods
to talk about changes precisely.
What is better than motion as an example of change? For motion, we need three notions:
(1) position $x(t)$–to quantify where an object is at a particular time, (2) velocity $v(t)$–to
quantify how fast our object is moving, and (3) acceleration $a(t)$–to quantify how fast the
object changes its speed.
Going from (1) to (2) to (3) is called "taking the derivative": the derivative gives us the
way to quantify a time rate of change. The velocity is the rate of change of the
position per unit time. That's why we have the symbols $dx$, $dt$ and $dx/dt$.
Going from (3) to (2) to (1) is called "taking the integral": $x(t) = \int_0^t v\,dt$. Knowing
the speed $v(t)$, consider a very small time interval $dt$ during which the distance the
object has traveled is $v(t)\,dt$; adding up all those tiny distances, we get the total
distance $x(t)$.
So, the calculus is the study of the derivative and the integral. But they are not two independent
things; they are the inverse of each other, like negative/positive numbers, men/women,
war/peace and so on.
When we studied counting numbers we discovered many rules (e.g. odd + odd =
even). The same pattern is observed here: the new toys of mathematicians–the derivative
and the integral–have their own rules. For example, the derivative of a sum is the sum of
the derivatives. Thanks to this rule, we know how to determine the derivative of $x^{10} +
x^5 + 23x^3$, for example, for we know how to differentiate each term.
Calculus does to algebra what algebra does to arithmetic. Arithmetic is about manipulating
numbers (addition, multiplication, etc.). Algebra finds patterns between numbers, e.g. $a^2 -
b^2 = (a-b)(a+b)$. Calculus finds patterns between varying quantities.
Historically, Fermat used the derivative in his calculations without knowing it. Later, Newton
and Leibniz discovered it. Other mathematicians such as Brook Taylor, Euler and Lagrange
developed and characterized it. And only at the end of this long period of development,
which spans about two hundred years, did Cauchy and Weierstrass define it.
Confined to the real numbers, the foundation of the calculus is the concept of limit. This is so
because with limits mathematicians can prove all the theorems in calculus rigorously. That
branch of mathematics is called analysis. This branch focuses not on the computational
aspects of the calculus (e.g. how to evaluate an integral or how to differentiate a function);
instead, it focuses on why calculus works.
In the beginning of this chapter, I quoted Richard Feynman saying that "Calculus is the
language God talks", and Steven Strogatz writing "Without calculus, we wouldn't have cell
phones, computers, or microwave ovens. We wouldn't have radio. Or television. Or ultrasound
for expectant mothers, or GPS for lost travelers. We wouldn't have split the atom, unraveled
the human genome, or put astronauts on the moon." But for that we need to learn multivariable
calculus and vector calculus (Chapter 7)–the generalizations of the calculus discussed in this
chapter–and differential equations (Chapter 8). This is obvious: our world is three-dimensional,
and the things we want to understand depend on many other things; thus $f(x)$ is not sufficient.
But the idea of multivariable calculus and vector calculus is still the mathematics of change: a
small change in one thing leads to a small change in another thing.
Consider a particle of mass $m$ moving under the influence of a force $F$; Newton gave
us the equation $m\,d^2x/dt^2 = F$, which, in conjunction with data about the position
of the particle at $t = 0$, pinpoints exactly the position of the particle at any time $t$. This is
probably the first differential equation–an equation that involves derivatives–ever. This is
the equation that put men on the Moon.
Leaving behind the little bits $dx$, $dy$ and the sum $\int$, our next destination in the mathematical
world is a place called probability. Let's go there to see dice, roulette, lotteries–games of chance–
to see how mathematicians develop mathematics to describe random events, and how they can see
through randomness to reveal its secrets.
Contents
5.1 A brief history of probability . . . . . . . . . . . . . . . . . . . . . . . . . 421
5.2 Classical probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
5.3 Empirical probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
5.4 Buffon’s needle problem and Monte Carlo simulations . . . . . . . . . . 425
5.5 A review of set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
5.6 Random experiments, sample space and event . . . . . . . . . . . . . . . 433
5.7 Probability and its axioms . . . . . . . . . . . . . . . . . . . . . . . . . . 434
5.8 Conditional probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
5.9 The secretary problem or dating mathematically . . . . . . . . . . . . . 454
5.10 Discrete probability models . . . . . . . . . . . . . . . . . . . . . . . . . 457
5.11 Continuous probability models . . . . . . . . . . . . . . . . . . . . . . . 485
5.12 Joint distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
5.13 Inequalities in the theory of probability . . . . . . . . . . . . . . . . . . . 497
5.14 Limit theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
5.15 Generating functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
5.16 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Games of chance are common in our world, including lotteries, roulette, slot machines and
card games. Thus, it is important to know a bit about the mathematics behind them, which is
known as probability theory.
Gambling led Cardano–our Italian friend whom we met in the discussion on cubic equations–
to the study of probability, and he was the first writer to recognize that random events are
governed by mathematical laws. Published posthumously in 1663, Cardano’s Liber de ludo
aleae (Book on Games of Chance) is often considered the major starting point of the study of
mathematical probability.
Since then the theory of probability has become a useful tool in many problems. For ex-
ample, meteorologists use weather patterns to predict the probability of rain. In epidemiology,
probability theory is used to understand the relationship between exposures and the risk of health
effects. Another application of probability is with car insurance. Companies base your insurance
premiums on your probability of having a car accident. To do this, they use information on the
frequency of having a car accident by gender, age, type of car and number of kilometres driven
each year to estimate an individual person’s probability (or risk) of a motor vehicle accident.
Indeed probability is so useful that the famous French mathematician and astronomer (known
as the “Newton of France”) Pierre-Simon Marquis de Laplace once wrote:
We see that the theory of probability is at bottom only common sense reduced to
calculation; it makes us appreciate with exactitude what reasonable minds feel
by a sort of instinct, often without being able to account for it....It is remarkable
that this science, which originated in the consideration of games of chance, should
have become the most important object of human knowledge... The most important
questions of life are, for the most part, really only problems of probability.
This chapter is an introduction to probability and statistics. It was written based on the
following excellent books:
The Unfinished Game: Pascal, Fermat and the letters by Keith Devlin [11]
The History of Statistics: the measurement of uncertainty before 1900 by Stephen Stigler
[51].
A History of Probability and Statistics and Their Applications before 1750, by Anders
Hald [19]
Keith J. Devlin (born 16 March 1947) is a British mathematician and popular science writer. His current
research is mainly focused on the use of different media to teach mathematics to different audiences.
The book is freely available at https://www.probabilitycourse.com/.
Sheldon Ross (April 30, 1943) is the Daniel J. Epstein Chair and Professor at the USC Viterbi School of
Engineering. He is the author of several books in the field of probability. In 1978, he formulated what became
known as Ross’s conjecture in queuing theory, which was solved three years later by Tomasz Rolski at Poland’s
Wroclaw University.
Stephen Mack Stigler (born August 10, 1941) is Ernest DeWitt Burton Distinguished Service Professor at the
Department of Statistics of the University of Chicago. He has authored several books on the history of statistics.
Stigler is also known for Stigler’s law of eponymy which states that no scientific discovery is named after its original
discoverer (whose first formulation he credits to sociologist Robert K. Merton).
Anders Hjorth Hald (1913 – 2007) was a Danish statistician. He was a professor at the University of Copen-
hagen from 1960 to 1982. While a professor, he did research in industrial quality control and other areas, and also
authored textbooks. After retirement, he made important contributions to the history of statistics.
I did not like gambling and did not pay attention to probability. I performed badly in high
school and university when it came to classes on probability; I actually failed the unit. But I do
have company. In 2012, 97 Members of Parliament in London were asked: ‘If you spin a coin
twice, what is the probability of getting two heads?’ The majority, 60 out of 97, could not give
the correct answer.
I did not plan to re-learn probability, but then the Covid pandemic came. People were talking about
the probability of getting Covid and so on, and I wanted to understand what they meant. Therefore,
I decided to study the theory of probability again. I did not have to be scared, as this time I would
not have to take any exam about probability!
ity. Before Laplace, probability theory was solely concerned with developing a mathematical
analysis of games of chance. Laplace applied probabilistic ideas to many scientific and practical
problems. The theory of errors, actuarial mathematics, and statistical mechanics are examples
of some of the important applications of probability theory developed in the 19th century.
Like so many other branches of mathematics, the development of probability theory has been
stimulated by the variety of its applications. Conversely, each advance in the theory has enlarged
the scope of its influence. Mathematical statistics is one important branch of applied probability;
other applications occur in such widely different fields as genetics, psychology, economics, and
engineering. Many workers have contributed to the theory since Laplace’s time; among the most
important are Chebyshev, Markov, von Mises, and Kolmogorov.
One of the difficulties in developing a mathematical theory of probability has been to arrive
at a definition of probability that is precise enough for use in mathematics, yet comprehensive
enough to be applicable to a wide range of phenomena. The search for a widely acceptable
definition took nearly three centuries and was marked by much controversy. The matter was
finally resolved in the 20th century by treating probability theory on an axiomatic basis. In 1933
the Russian mathematician A. Kolmogorov outlined an axiomatic approach that forms the basis
for the modern theory. Since then the ideas have been refined somewhat, and probability theory
is now part of a more general discipline known as measure theory.
The eight equally likely outcomes of three coin tosses:

1st toss  H H H H T T T T
2nd toss  H H T T H H T T
3rd toss  H T H T H T H T
Classical probability is conceptually simple for many situations. However, it is limited, since many situations do not have
finitely many equally likely outcomes. Tossing a weighted die is an example where we have
finitely many outcomes, but they are not equally likely. Studying people’s incomes over time
would be a situation where we need to consider infinitely many possible outcomes, since there
is no way to say what a maximum possible income would be, especially if we are interested in
the future.
n        n(H)     n(H)/n
10       6        0.6
100      48       0.48
1000     492      0.492
2000     984      0.492
10000    5041     0.5041
Limitations of the empirical probability. The limit of this theory of probability lies in
Eq. (5.3.1). How do we know that n(E)/n will converge to some constant limiting value that
will be the same for each possible sequence of repetitions of the experiment? Table 5.2
indicates that the ratio n(E)/n actually oscillates.
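The oscillation of the relative frequency is easy to reproduce. Here is a Python sketch (the book's own code is in Julia; the function name and seed are illustrative assumptions) that estimates P(H) for a fair coin by the relative frequency n(H)/n for increasing n.

```python
import random

# Sketch of the empirical (frequentist) approach: estimate P(H) by the
# relative frequency n(H)/n over n simulated fair-coin tosses.
# The seed is an illustrative assumption, chosen for reproducibility.
def relative_frequency(n, seed=0):
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# n(H)/n drifts toward 0.5 as n grows, but keeps oscillating on the way.
estimates = {n: relative_frequency(n) for n in (10, 100, 1000, 10000)}
```

Running this mirrors Table 5.2: the estimates hover around 0.5 without ever settling exactly, which is precisely the convergence question raised above.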
There is a need for another theory of probability. Axiomatic probability is such a theory; it unifies
the different probability theories. Similar to Euclidean geometry, axiomatic probability starts
with three axioms, called Kolmogorov’s three axioms after the Soviet mathematician
Andrey Nikolaevich Kolmogorov (1903 – 1987).
Now, we plot the function d = (l/2) sin θ on the (θ, d) plane. The cut condition d ≤ (l/2) sin θ then
corresponds to the area of the shaded region in Fig. 5.1. The probability that the needle will intersect
one of these lines is the ratio of that area to the area of the whole sample space [0, π/2] × [0, t/2]:

\[
P = \frac{\int_0^{\pi/2} \frac{l}{2}\sin\theta \, d\theta}{\frac{t}{2}\cdot\frac{\pi}{2}} = \frac{2l}{\pi t}
\]
Georges-Louis Leclerc, Comte de Buffon (1707 – 1788) was a French naturalist, mathematician, cosmologist,
and encyclopédiste.
This is the most important step; without it we cannot proceed further. Why two lines? Because one line is not enough,
and although there are infinitely many lines in the problem, two are sufficient.
It is expected that P is proportional to l (the longer the needle, the more chance it hits the lines)
and inversely proportional to t–the distance between the lines. However, it is unexpected that π
shows up in this problem. No circles involved! We discuss this shortly.
In 1812, the French scholar and polymath Pierre-Simon, Marquis de Laplace (1749 – 1827)
showed that the number π can be approximated by repeatedly throwing a needle onto a lined
sheet of paper N times and counting the number of intersected lines (n):

\[
\frac{2l}{\pi t} = \frac{n}{N} \implies \pi = \frac{2l}{t}\,\frac{N}{n}
\]
In 1901, the Italian mathematician Mario Lazzarini performed Buffon’s needle experiment.
Tossing a needle 3 408 times with t = 3 cm and l = 2.5 cm, he got 1 808 intersections. Thus, he
obtained the well-known approximation 355/113 for π, accurate to six significant digits. How-
ever, Lazzarini’s "experiment" is an example of confirmation bias, as it was set up to replicate
the already well-known approximation 355/113 of π. Here are the details:

\[
\pi \approx \frac{2l}{t}\,\frac{N}{n} = \frac{2 \times 2.5}{3}\cdot\frac{3408}{1808} = \frac{5}{3}\cdot\frac{3\cdot 71\cdot 16}{113\cdot 16} = \frac{5 \cdot 71}{113} = \frac{355}{113} \approx 3.14159292
\]
Guessing Buffon’s formula. Herein, we try to guess the solution without actually solving the
problem. This is a very important skill. We admit that we are doing it only after having seen the
result. As the problem has only two parameters–the needle length l and the distance t between
two lines–the result must be of the form P = c (l/t), where c is a dimensionless number (refer to
Section 8.7.1 for details on dimensions and units). To find c, we reason that the result should
not depend on the shape of the needle. If so, we can consider a needle in the form of a circle of
radius r. The length of this circular needle is 2πr and it must be equal to l, thus its diameter is
d = l/π. The probability is therefore 2l/(πt), noting that a circular needle cuts a line twice.
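Buffon's experiment is easy to simulate. Below is a Python sketch (function name and seed are illustrative assumptions; the book's listings are in Julia) that drops the needle by sampling the centre-to-nearest-line distance d ~ U(0, t/2) and the angle θ ~ U(0, π/2), counting a hit when d ≤ (l/2) sin θ, under the short-needle assumption l ≤ t.

```python
import math
import random

# Monte Carlo sketch of Buffon's needle (assumes a short needle, l <= t).
# A drop is a pair (d, theta): d is the distance from the needle's centre
# to the nearest line, theta the needle's angle with the lines.
# The needle crosses a line exactly when d <= (l/2) * sin(theta).
def buffon_probability(l, t, n_drops, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        d = rng.uniform(0.0, t / 2)
        theta = rng.uniform(0.0, math.pi / 2)
        if d <= (l / 2) * math.sin(theta):
            hits += 1
    return hits / n_drops

# Lazzarini's parameters: l = 2.5, t = 3; theory predicts P = 2l/(pi*t).
p_hat = buffon_probability(l=2.5, t=3.0, n_drops=100_000)
p_theory = 2 * 2.5 / (math.pi * 3.0)
```

With 100 000 drops the estimate typically agrees with 2l/(πt) ≈ 0.5305 to two or three decimal places, which also shows why Lazzarini's six-digit "accuracy" is too good to be an honest stopping point.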
total number of hits by n, then the area is approximately n/N, and thus

\[
\pi \approx 4\,\frac{n}{N}
\]
A Julia code (Listing B.13) was written, and the results are given in Table 5.3 for various N.
These Monte Carlo methods for approximating π are very slow compared with other methods (e.g.
those presented in Section 4.3.5), and they do not provide any information on the exact number of
correct digits obtained. Thus they are never used to approximate π when speed or accuracy is
desired.
Table 5.3: Monte Carlo estimates 4n/N of π for various N (the accompanying figure plots the sampled points against the quarter circle):

N      4n/N
100    3.40000000
200    3.14000000
400    3.16000000
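The book's Listing B.13 is in Julia; the following Python sketch follows the same idea (the function name and seed are illustrative assumptions). Sample N points uniformly in the unit square; the fraction landing inside the quarter circle x² + y² ≤ 1 approximates π/4.

```python
import random

# Quarter-circle Monte Carlo estimate of pi: a point (x, y) uniform in
# [0,1]^2 lands inside x^2 + y^2 <= 1 with probability pi/4, so
# pi ≈ 4 * (hits / N).
def estimate_pi(N, seed=0):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(N)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * hits / N
```

The error shrinks only like 1/√N, which is why Table 5.3 still shows the first digits wobbling at N = 400: to gain one extra correct digit you need roughly one hundred times more samples.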
such as intersections and unions. This section may seem somewhat theoretical and thus less
interesting than the rest of the chapter, but it lays the foundation for what is to come.
A set is a collection of things (called elements). We can either explicitly write out the
elements of a set, as in the set of natural numbers

ℕ = {1, 2, 3, ...}

or we can define a set by stating the properties satisfied by its elements. For example, we
may write

A = {x ∈ ℕ | x ≥ 4},  or  A = {x ∈ ℕ : x ≥ 4}
The symbols | and : are read "such that". Thus, the above set contains all counting numbers
equal to or greater than four. Because the order of the elements in
a set is irrelevant, {2, 1, 5} is the same set as {1, 2, 5}. Furthermore, an element cannot appear
more than once in a set; so {1, 1, 2, 5} is equivalent to {1, 2, 5}.
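These two defining properties are mirrored exactly by Python's built-in set type, which makes for a quick sanity check (a small illustrative sketch; the finite range stands in for the infinite set ℕ):

```python
# Python sets behave like mathematical sets: order is irrelevant and
# duplicates collapse to a single element.
assert {2, 1, 5} == {1, 2, 5}        # order does not matter
assert {1, 1, 2, 5} == {1, 2, 5}     # repeated elements count once

# A set defined by a property, like A = {x in N : x >= 4}, restricted
# here to a finite range purely for illustration:
A = {x for x in range(1, 20) if x >= 4}
```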
Ordered sets. Let A be a set. An order on A is a relation, denoted by <, with the following two
properties:

If x, y ∈ A, then exactly one of the statements x < y, x = y, x > y holds.

If x, y, z ∈ A, then
x < y, y < z ⟹ x < z
Subset, superset and empty set. Set A is a subset of set B if every element of A is also an
element of B. We write A ⊆ B, where the symbol ⊆ indicates "subset". Conversely, B is a superset
of A; we write this as B ⊇ A.
The set with no elements is the null set, designated by ∅. This null set plays a role similar to the number
zero in number theory.
A universal set is the collection of all objects in a particular context. We use the notation S
to label the universal set. Its role is similar to the number line in number theory. When we refer
to a number we visualize it as a point on the number line. In the same manner, we can visualize
a set on the background of the universal set.
The Cartesian product of two sets A and B, denoted by A × B, is defined as the set consisting
of all ordered pairs (a, b) for which a ∈ A and b ∈ B. For example, if A = {x, y} and B =
{3, 6, 9}, then A × B = {(x, 3), (x, 6), (x, 9), (y, 3), (y, 6), (y, 9)}. Note that because the pairs
are ordered, A × B ≠ B × A. An important example of sets obtained using a Cartesian product
is ℝⁿ, where n ∈ ℕ. For n = 2, we have

ℝ² = ℝ × ℝ = {(x, y) | x ∈ ℝ, y ∈ ℝ}

Thus, ℝ² is the set consisting of all points in the two-dimensional plane. Similarly, ℝ³ is the set
of all points in the three-dimensional space that we are living in.
Lower bound and upper bound. Given a set X ⊆ ℝ (e.g. X = [0, 5]), a number u is an upper bound of X if x ≤ u for every x ∈ X, and a number w is a lower bound of X if x ≥ w for every x ∈ X. A set with an upper bound is said to be bounded above; a set with a lower bound is bounded below.
Sups and infs. Suppose that X is bounded above; then there exist infinitely many upper bounds. One can
define the smallest among the upper bounds. The supremum of X, denoted by sup X, is the
smallest upper bound for X; that is,

∀ε > 0, ∃x ∈ X such that x > sup X − ε   (sup X is the smallest upper bound)

Suppose that X is bounded below; then there exist infinitely many lower bounds. One can define the largest
among the lower bounds. The infimum of X, denoted by inf X, is the largest lower bound for
X; that is,

∀ε > 0, ∃x ∈ X such that x < inf X + ε   (inf X is the largest lower bound)
Maximum vs supremum. Are the maximum and the supremum of an ordered set the same? Examples
can answer this. Example 1: consider the set A = {x ∈ ℝ | x < 2}. The maximum
of A is not 2, as 2 is not a member of the set; in fact, the maximum is not well defined. The
supremum, though, is well defined: 2 is clearly the smallest upper bound for the set. Example 2:
B = {1, 2, 3, 4}. The maximum is 4, as that is the largest element. The supremum is also 4, as
four is the smallest upper bound.
Venn diagrams. Venn diagrams are useful in visualizing relations between sets. Venn
diagrams were popularized by the English mathematician, logician and philosopher John
Venn (1834 – 1923) in the 1880s. See Fig. 5.3 for one example of a Venn diagram. In a Venn
diagram, a big rectangle is used to represent the universal set, whereas circles are used to denote sets.
Set operations. Sets can be combined (if we can combine numbers via arithmetic operations,
we can do something similar for sets) via set operations (Fig. 5.4). We can combine two sets in
many different ways. First, the union of two sets A and B is a set, labelled A ∪ B, containing
all elements that are in A or in B. For example, {1, 3, 4} ∪ {3, 4, 5} = {1, 3, 4, 5}. If we have
many sets A₁, A₂, ..., Aₙ, the union is written as ⋃ᵢ₌₁ⁿ Aᵢ.
Second, the intersection of two sets A and B, denoted by A ∩ B, consists of all elements
that are in both A and B. For instance, {1, 2} ∩ {2, 3} = {2}. When the intersection of two sets
is empty, i.e., A ∩ B = ∅, the two sets are called mutually exclusive or disjoint. We now extend
this to more than two sets. If we have n sets A₁, A₂, ..., Aₙ, these sets are disjoint if they are
pairwise disjoint:

Aᵢ ∩ Aⱼ = ∅ for all i ≠ j

Third, the difference of two sets A and B is denoted by A − B and is the set consisting of elements
that are in A but not in B.
Finally, we have another operation on sets, but this operator applies to one single set. The
complement of a set A, denoted by Aᶜ, is the set of all elements that are in the universal set S
but are not in A. The Venn diagrams for the presented set operations are shown in Fig. 5.4.
Note the similarity to the sigma notation ∑ᵢ₌₁ⁿ xᵢ = x₁ + x₂ + ⋯ + xₙ.
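The four operations just described map directly onto Python's set operators, which gives a concrete playground for them (the universal set S below is an illustrative finite choice):

```python
# The four set operations, written with Python's set operators.
# S is an illustrative finite universal set for taking complements.
S = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {2, 3, 5}

union = A | B          # A ∪ B: elements in A or in B
intersection = A & B   # A ∩ B: elements in both A and B
difference = A - B     # A − B: elements in A but not in B
complement = S - A     # A^c: elements of the universal set not in A
```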
Cardinality. The cardinality of a set A is basically the size of the set; it is denoted by |A|. For a
finite set (e.g. the set {1, 3, 5}), its cardinality is simply the number of elements in it. Again,
once a new object is introduced (or discovered) in the mathematical world, there are rules
that it (herein the cardinality of a set) obeys. For instance, we can ask: given
two sets A, B with cardinalities |A| and |B|, what is the cardinality of their union, i.e., |A ∪ B|?
For two sets A and B, we have this rule, called the inclusion-exclusion principle or PIE:

|A ∪ B| = |A| + |B| − |A ∩ B|

When A and B are disjoint, the cardinality of their union is simply the sum of the cardinalities of
A and B. When they are not disjoint, adding |A| and |B| counts the elements in
A ∩ B twice (a Venn diagram would help here), so we need to subtract |A ∩ B| to get the correct
cardinality. The name of the principle comes from the idea that the principle is based on
over-generous inclusion, followed by compensating exclusion.
Mathematicians then generalized this result to the union of n sets. For simplicity,
we first extend this principle to the case of three sets:

|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |B ∩ C| − |C ∩ A| + |A ∩ B ∩ C|
Example 5.1
How many integers from 1 to 100 are multiples of 2 or 3? Let A be the set of integers from
1 to 100 that are multiples of 2; then |A| = 50 (why?). Let B be the set of integers from 1 to
100 that are multiples of 3; then |B| = 33ᵃ. Our question amounts to computing |A ∪ B|.
Certainly, we use the PIE:

|A ∪ B| = |A| + |B| − |A ∩ B|

We then need A ∩ B, which is the set of integers from 1 to 100 that are multiples of both 2 and 3,
i.e., multiples of 6; we have |A ∩ B| = 16. Thus, |A ∪ B| = 50 + 33 − 16 = 67.

ᵃ A number is a multiple of 3 if it can be written as 3m; then 1 ≤ 3m ≤ 100, thus m ≤ ⌊100/3⌋ = 33.
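Example 5.1 can be checked by brute force in a few lines of Python (a small illustrative sketch; the variable names are my own):

```python
# Brute-force check of Example 5.1 via the inclusion-exclusion principle.
A = {k for k in range(1, 101) if k % 2 == 0}   # multiples of 2: |A| = 50
B = {k for k in range(1, 101) if k % 3 == 0}   # multiples of 3: |B| = 33

lhs = len(A | B)                               # direct count of the union
rhs = len(A) + len(B) - len(A & B)             # PIE: 50 + 33 - 16
# lhs == rhs == 67
```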
Generalized principle of inclusion-exclusion. Now we extend the PIE to the case of n sets for
any n. First, we put the two identities for n = 2 and n = 3 together to see the pattern:

|A ∪ B| = |A| + |B| − |A ∩ B|
|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |B ∩ C| − |C ∩ A| + |A ∩ B ∩ C|

To see the pattern, let x belong to all three sets A, B, C. It is then counted in every term on the
RHS of the second equation: 4 times added (the red terms) and 3 times subtracted, adding up to 1.
As a preparation for the move to n sets, we no longer use A, B, C; instead we adopt A₁, A₂, ...
Obviously we would run out of letters, and moreover subscripts allow for compact notation: we can write
A₁ + A₂ + ⋯ = ∑ᵢ Aᵢ. With A, B, ... we simply cannot.
which has 32 summands. Did mathematicians stop with Eq. (5.5.3)? No: that equation is not
in its best form yet. Note that the RHS of that equation involves n sums, and each term in turn
involves a sum of terms. Mathematicians want to write it as ∑ᵢ(∑ⱼ ·). The key to this step
is to discard the subscripts i, j, k and replace them by subscripts with subscripts: i₁, i₂, ...
\[
\Bigl|\bigcup_{i=1}^{n} A_i\Bigr| = \sum_{k=1}^{n} (-1)^{k+1} \Bigl( \sum_{1 \le i_1 < \cdots < i_k \le n} |A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}| \Bigr) \tag{5.5.4}
\]

\[
\Bigl|\bigcup_{i=1}^{n} A_i\Bigr| = \sum_{k=1}^{n} (-1)^{k+1} \sum_{\substack{I \subseteq \{1,2,\ldots,n\} \\ |I| = k}} |A_I|, \qquad A_I = \bigcap_{i \in I} A_i \tag{5.5.5}
\]

The second sum runs over all subsets I of the indices {1, 2, ..., n} which contain exactly k
elements (i.e., |I| = k). At this moment, mathematicians stop, because that form is compact.
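The compact form of Eq. (5.5.5) translates almost line by line into code. Below is a Python sketch (function and variable names are my own) that computes the union's cardinality by summing (−1)^(k+1) |A_I| over all k-element index sets I, and checks it against a direct union.

```python
from functools import reduce
from itertools import combinations

# A sketch of Eq. (5.5.5): for each k, sum |A_I| over all k-element
# index sets I with sign (-1)^(k+1), where A_I is the intersection
# of the sets indexed by I.
def union_size_by_pie(sets):
    n = len(sets)
    total = 0
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            A_I = reduce(lambda x, y: x & y, (sets[i] for i in I))
            total += (-1) ** (k + 1) * len(A_I)
    return total

sets = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
assert union_size_by_pie(sets) == len(set().union(*sets))  # both give 5
```

For n sets the double sum has 2ⁿ − 1 terms, so this is a check of correctness rather than an efficient way to count a union.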
If you play with Venn diagrams you will definitely discover many more identities on sets
similar to Eq. (5.5.2). For example, A = (A ∩ Bᶜ) ∪ (A ∩ B). As is often the case in mathematics, this
seemingly pointless identity will be useful in other contexts.
One example clarifies everything: assume n = 3, k = 2; then I ranges over {1, 2}, {1, 3}, {2, 3}.
Definition 5.5.1
Set A is called countable if one of the following is true:
(a) it is a finite set, or
(b) it can be put in a one-to-one correspondence with the natural numbers. In this case the set
is said to be countably infinite.
A set is called uncountable if it is not countable. One example is the set of real numbers R.
You can check again Section 2.30 on Georg Cantor and infinity if anything mentioned in
this definition is not clear.
De Morgan’s laws state that the complement of the union of two sets is equal to the intersec-
tion of their complements, and the complement of the intersection of two sets is equal to the
union of their complements. The laws are named after Augustus De Morgan (1806 – 1871)–a
British mathematician and logician. He formulated De Morgan’s laws and introduced the term
mathematical induction, making its idea rigorous. For any two finite sets A and B, the laws are

(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ,   (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
We can draw some Venn diagrams to see that the laws are valid, but that is not enough, as we
know that the laws might hold for n > 2 sets, in which case no one can use a Venn diagram for
a check. The generalized version of De Morgan’s first law is

(A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ)ᶜ = A₁ᶜ ∩ A₂ᶜ ∩ ⋯ ∩ Aₙᶜ,  or  (⋃ᵢ₌₁ⁿ Aᵢ)ᶜ = ⋂ᵢ₌₁ⁿ Aᵢᶜ
Proof of De Morgan’s first law for two sets. The plan is to pick an element x in (A ∪ B)ᶜ and
prove it is also an element of Aᶜ ∩ Bᶜ, and vice versa. Let P = (A ∪ B)ᶜ and Q = Aᶜ ∩ Bᶜ. Now,
consider x ∈ P; we are going to prove that x ∈ Q, which means that P ⊆ Q. As x ∈ (A ∪ B)ᶜ,
it is not in A ∪ B:

x ∉ (A ∪ B)
⟹ (x ∉ A) and (x ∉ B)
⟹ (x ∈ Aᶜ) and (x ∈ Bᶜ)
⟹ x ∈ (Aᶜ ∩ Bᶜ), i.e., x ∈ Q  ⟹  P ⊆ Q
Doing something similar with y ∈ Q and then showing y ∈ P, we get Q ⊆ P. Now we have
P ⊆ Q and Q ⊆ P. What does this mean? It means P = Q. You can use proof by induction to
prove the generalized version.
result which we call an outcome. The set of all possible outcomes of a random experiment is
called the sample space. Since this sample space is the biggest space as far as the experiment
is concerned, it is our universal set S. An event is a subset of the sample space. Some examples
are:
Random experiment: toss a coin; the sample space is S = {H, T} (H for head and T for tail),
and one event is E = {H} or E = {T};
Random experiment: roll a six-sided die; the sample space is S = {1, 2, 3, 4, 5, 6}, and one
event can be E = {2, 4, 6} if we are interested in the chance of getting an even number;
Random experiment: toss a coin two times and observe the sequence of heads/tails; the
sample space is

S = {(H, H), (H, T), (T, H), (T, T)}

One event can be E₁ = {(H, H), (T, T)}.
Axiom 3: If two events are disjoint, the probability that either of the two events
happens is the sum of the probabilities that each happens: P(A ∪ B) = P(A) +
P(B) if A ∩ B = ∅.
Union and intersection of events. As events are sets, we can apply set operations to events.
When working with events, intersection means "and", and union means "or". The probability of
the intersection of A and B, P(A ∩ B), is sometimes written as P(AB) or P(A, B).

Probability of intersection: P(A ∩ B) = P(AB) = P(A and B)

Probability of union: P(A ∪ B) = P(A or B)
Example 5.2
We roll a fair six-sided die; what is the probability of getting 1 or 5? The event is E = {1, 5}
and the sample space is S = {1, 2, 3, 4, 5, 6}. We use the three axioms to compute P(E). First,
as the die is fair, the chance of getting any number from 1 to 6 is equal:

P(1) = P(2) = ⋯ = P(6)

where P(1) is short for P({1}). Note that probability is defined only for sets, not for numbers.
Now, we use axioms 2 and 3 together to writeᵈ

1 = P(S) = P(1) + P(2) + ⋯ + P(6)

which results in the probability of getting any number from 1 to 6 being 1/6. Then, using
axiom 3 again for E, we have

P({1, 5}) = P(1) + P(5) = 1/6 + 1/6 = 1/3
Noting that 1/3 = 2/6, we can deduce an important formula:

P({1, 5}) = 1/3 = 2/6 = |{1, 5}| / |S|

Therefore, for a finite sample space S with equally likely outcomes, the probability of an event
A is the ratio of the cardinality of A to that of S:

P(A) = |A| / |S|
ᵈ The symbol (2) above an equals sign indicates that axiom 2 is being used.
Example 5.3
Using the axioms of probability, prove the following:
(a) For any event A, P(Aᶜ) = 1 − P(A).
(d) P(A − B) = P(A) − P(A ∩ B)
Proof of P(Aᶜ) = 1 − P(A). Referring back to Fig. 5.4, we know that A ∪ Aᶜ = S and that A
and Aᶜ are disjoint; thus

P(S) = P(A ∪ Aᶜ)
1 = P(A) + P(Aᶜ)  ⟹  P(A) = 1 − P(Aᶜ)

where use was made of axiom 2 (P(S) = 1) and axiom 3 (P(A ∪ Aᶜ) = P(A) + P(Aᶜ)).
The proof of P(A ∪ B) = P(A) + P(B) − P(A ∩ B) is shown in the figure below. It uses
the result that P(A − B) = P(A) − P(A ∩ B). Recall the inclusion-exclusion principle,
|A ∪ B| = |A| + |B| − |A ∩ B|; the identity P(A ∪ B) = P(A) + P(B) − P(A ∩ B) is the version of
that principle for probability.
The rule (a) can be referred to as the rule of complementary probability. It is very simple and
yet powerful for problems in which finding P .A/ is hard and finding P .Ac / is much easier. We
will use this rule quite often.
Corresponding to the principle of inclusion-exclusion in Eq. (5.5.3), we have the probability
version:

\[
P\Bigl(\bigcup_{i=1}^{n} A_i\Bigr) = \sum_{i=1}^{n} P(A_i) - \sum_{i<j} P(A_i \cap A_j) + \sum_{i<j<k} P(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n-1} P\Bigl(\bigcap_{i=1}^{n} A_i\Bigr)
\]
Example 5.4
Now we consider a classic example that uses the inclusion-exclusion principle. Assume that
a secretary has an equal number of pre-labelled envelopes and business cards (denoted by n).
Suppose that she is in such a rush to go home that she puts each business card in an envelope
at random without checking if it matches the envelope. What is the probability that each of
the business cards will go to a wrong envelope?
Always start simple, so we first assume that n = 3, and we define Aᵢ (i = 1, 2, 3) as the event
that business card i is put into its correct envelope. Now let E be the event that each of the
business cards goes into a wrong envelope. We want to compute P(E). E occurs only when
none of A₁, A₂, A₃ has happened. Thus,

P(E) = 1 − P(Eᶜ) = 1 − P(A₁ ∪ A₂ ∪ A₃)

The next step is to use the PIE to get the term P(A₁ ∪ A₂ ∪ A₃), and thus P(E) is given by

\[
P(E) = 1 - \Bigl( \sum_{i=1}^{3} P(A_i) - \sum_{i<j} P(A_i \cap A_j) + P(A_1 \cap A_2 \cap A_3) \Bigr)
\]
To practice Monte Carlo methods, you are encouraged to implement one for this problem. If you
need help, check the code monte-carlo-pi.jl on my github account.

ᵇ For n = 3 there are a total of 3! outcomes, and to have 2 cards in correct envelopes we just need to care
about the remaining (3 − 2) cards, and for them there are of course (3 − 2)! ways.
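Here is one way such a Monte Carlo check could look, together with the exact inclusion-exclusion answer (a Python sketch with names and seed of my own choosing; the book's own code is in Julia). Shuffling the cards simulates the secretary's random stuffing, and a trial counts when no card lands in its matching envelope.

```python
import random
from math import factorial

# Monte Carlo sketch of the envelope problem: shuffle the cards and count
# the fraction of trials in which no card is in its matching envelope.
def derangement_probability(n, trials=100_000, seed=0):
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        cards = list(range(n))
        rng.shuffle(cards)
        if all(card != env for env, card in enumerate(cards)):
            count += 1
    return count / trials

# Exact answer from inclusion-exclusion: sum_{k=0}^{n} (-1)^k / k!
def derangement_exact(n):
    return sum((-1) ** k / factorial(k) for k in range(n + 1))

# For n = 3 the exact probability is 1 - 1 + 1/2 - 1/6 = 1/3.
```

As n grows, the exact value tends to 1/e ≈ 0.3679, a famous consequence of this inclusion-exclusion expansion.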
Example 5.5
Consider a family that has two children. We are interested in the children’s genders. Our
sample space is S = {(G, G), (G, B), (B, G), (B, B)}. Also assume that all four possible
outcomes are equally likely, each with probability 1/4.
What is the probability that both children are girls?
What is the probability that both children are girls given that the first child is a girl?
What is the probability that both children are girls given that we know at least one of
them is a girl?
Of course the probability that both children are girls is 1/4. The two remaining probabilities
are more interesting and new, and most of us would say the answer is 1/2 for both. Let’s
denote by A the event that both children are girls and by B the event that the first child is a
girl; that is, B = {(G, G), (G, B)}. Given B, the chance of having two girls is 1/2. Let’s
denote by C the event that at least one of the children is a girl; that is, C = {(G, G), (G, B), (B, G)}.
Given C, the chance of having two girls is 1/3.
The probability that both children are girls (event A) given that the first child is a girl (event
B) is called a conditional probability. It is written P(A|B); the vertical line | is read
“given that”. This example clearly demonstrates that when we incorporate existing facts into the
calculations, they can change the probability of an outcome. The sample space is changed!
The next thing we need to do is to find a formula for P .AjB/.
Because B has occurred, it becomes the sample space, and the only way that A can happen
is when the outcome belongs to the set A ∩ B; we thus have

P(A|B) = |A ∩ B| / |B|

Now we can divide the numerator and the denominator by |S|, the cardinality of the original
sample space, to get

P(A|B) = (|A ∩ B| / |S|) / (|B| / |S|) = P(A ∩ B) / P(B)    (5.8.1)
Of course as B has occurred, P .B/ > 0, so there is no danger in dividing something by it. Note
that Eq. (5.8.1) was derived for sample spaces with equally likely outcomes only. For other cases,
take it as a definition for conditional probability.
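Eq. (5.8.1) can be checked exactly on the two-children example by enumerating the finite sample space (a Python sketch; the helper name `cond` is my own):

```python
from itertools import product

# Exact check of Example 5.5 on the finite, equally likely sample space
# of two children: S = {(G,G), (G,B), (B,G), (B,B)}.
S = set(product("GB", repeat=2))
A = {("G", "G")}                       # both children are girls
B = {s for s in S if s[0] == "G"}      # the first child is a girl
C = {s for s in S if "G" in s}         # at least one child is a girl

def cond(A, B):
    # Eq. (5.8.1) for equally likely outcomes: P(A|B) = |A ∩ B| / |B|
    return len(A & B) / len(B)

# cond(A, B) gives 1/2 and cond(A, C) gives 1/3, as in the example.
```

The two different answers come entirely from the two different conditioning sets B and C, which is the "sample space is changed" point made above.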
Axiom 3: If two events are disjoint, the conditional probability that either of the two events
happens is the sum of the conditional probabilities that each happens: P(A ∪ B|F) = P(A|F) +
P(B|F) if A ∩ B = ∅.
Proof. The proof of axiom 2 is as simple as (based on the fact that S ∩ F = F):

P(S|F) = P(SF)/P(F) = P(F)/P(F) = 1    (SF = S ∩ F = F)

The proof of axiom 3 goes like this; I go from the RHS to the LHS, which is just a personal taste:

P(A|F) + P(B|F) = P(AF)/P(F) + P(BF)/P(F)
                = (P(AF) + P(BF))/P(F)
                = P(AF ∪ BF)/P(F)        (AF ∩ BF = ∅)
                = P((A ∪ B)F)/P(F)       ((A ∪ B)F = AF ∪ BF)
                = P(A ∪ B|F)

So, the proof used the given information that A and B are disjoint, hence AF and BF are also
disjoint (why?).
The generalized version of axiom 3 is

\[
P\Bigl(\bigcup_{i=1}^{\infty} A_i \,\Big|\, F\Bigr) = \sum_{i=1}^{\infty} P(A_i|F)
\]

You should prove it. The proof is exactly the same as the one I presented for two events A₁ and
A₂!
If we define Q(E) = P(E|F), then Q(E) may be regarded as a probability function on the
events of S, because it satisfies the three axioms. Hence, all of the propositions previously proved
for probabilities apply to Q(E). For example, all results from Example 5.3 hold for conditional
probabilities:
Want a proof? It is simple: apply the definition of conditional probability to every
term on the RHS except the first one, P(E₁):

P(E₁) · (P(E₁E₂)/P(E₁)) · (P(E₁E₂E₃)/P(E₁E₂)) ⋯ (P(E₁E₂⋯Eₙ)/P(E₁E₂⋯Eₙ₋₁))

where all the terms cancel telescopically except the final numerator, which is the LHS of Eq. (5.8.4).
E = EF ∪ EFᶜ

Since EF and EFᶜ are disjoint, this gives

P(E) = P(EF) + P(EFᶜ) = P(E|F)P(F) + P(E|Fᶜ)P(Fᶜ)    (5.8.5)

which simply states that the probability of event E is a weighted sum of the conditional probabilities of
event E given that event F has or has not occurred. This formula is extremely useful when it
is difficult to compute the probability of an event directly, but it is straightforward to compute
it once we know whether or not some second event (F) has occurred. The following example
demonstrates how to use this formula.
Example 5.6
An insurance company believes that people can be divided into two classes: those who are
accident prone and those who are not. The company’s statistics show that an accident-prone
person will have an accident at some time within a fixed 1-year period with probability
0.4, whereas this probability decreases to 0.2 for a person who is not accident prone. If we
assume that 30 percent of the population is accident prone, what is the probability that a new
policyholder will have an accident within a year of purchasing a policy?
Solution. Let’s denote by E the event a new policyholder will have an accident within a year
of purchasing a policy. We need to find P .E/. This person is either accident-prone or not.
Let’s call F the event that a new policyholder is accident-prone, then F c is the even that this
person is not accident-prone. Then, we have P .F / D 0:3 and P .F c / D 0:7, P .EjF / D 0:4
and P .EjF c / D 0:2. Then Eq. (5.8.5) gives:
$$P(E) = P(E|F)P(F) + P(E|F^c)P(F^c) = (0.4)(0.3) + (0.2)(0.7) = 0.26$$
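The law of total probability is easy to check with a quick Monte Carlo simulation. Below is a minimal Python sketch (the function name and seed are mine, not from the text): draw a random policyholder, then a random year.

```python
import random

def simulate_policyholder(trials=100_000, seed=0):
    """Estimate P(E) by sampling a policyholder's type, then an accident."""
    rng = random.Random(seed)
    accidents = 0
    for _ in range(trials):
        accident_prone = rng.random() < 0.3            # P(F) = 0.3
        p_accident = 0.4 if accident_prone else 0.2    # P(E|F) and P(E|F^c)
        accidents += rng.random() < p_accident
    return accidents / trials

exact = 0.4 * 0.3 + 0.2 * 0.7    # Eq. (5.8.5): P(E) = 0.26
estimate = simulate_policyholder()
```

With 100 000 trials the estimate lands within about one percentage point of the exact value 0.26.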
Now we generalize Eq. (5.8.5). How? Note that in that formula we have two events, F and
F c , which are disjoint and together fill the sample space completely. We just generalize
this to n events. First, assume that we can partition the sample space S into three disjoint sets
B1 , B2 and B3 . Then, we have, see Fig. 5.5
A D .A \ B1 / [ .A \ B2 / [ .A \ B3 /
and A \ B1 , A \ B2 and A \ B3 are mutually disjoint. Thus, we can write P .A/ as
P .A/ D P ..A \ B1 / [ .A \ B2 / [ .A \ B3 //
D P .A \ B1 / C P .A \ B2 / C P .A \ B3 / .axiom 3/
D P .AjB1 /P .B1 / C P .AjB2 /P .B2 / C P .AjB3 /P .B3 /
which is referred to as the law of total probability. This formula states that P .A/ is equal to
a weighted average of P .AjBi /, each term being weighted by the probability of the event on
which it is conditioned.
We’re now deriving the Bayes’s formula or Bayes’s rule that relates P .AjB/ to P .BjA/. We
start with the conditional probability:
P .AjB/P .B/ D P .BjA/P .A/ .D P .A \ B/ D P .B \ A//
Dividing this equation by P .A/ > 0, we get Bayes' formula:
$$P(B|A) = \frac{P(A|B)P(B)}{P(A)}$$
This formula is referred to as Bayes' theorem, Bayes' rule, or Bayes' law, and is the foundation
of the field of Bayesian statistics. Bayes' theorem is also widely used in machine
learning; it is certainly one of the most useful results in conditional probability. The rule is
named after the 18th-century British mathematician Thomas Bayes. The term P .BjA/ is referred
to as the posterior probability and P .B/ as the prior probability.
We can use Eq. (5.8.6) to compute P .A/, and thus obtain the extended form of Bayes'
formula:
$$P(B_j|A) = \frac{P(A|B_j)P(B_j)}{\sum_{i=1}^{n} P(A|B_i)P(B_i)} \qquad (5.8.7)$$
Example 5.7
A certain disease affects about 1 out of 10 000 people. There is a test to check whether the
person has the disease. The test is quite accurate. In particular, we know that the probability
that the test result is positive (i.e., the person has the disease), given that the person does not
have the disease, is only 2 percent; the probability that the test result is negative (i.e., the
person does not have the disease), given that the person has the disease, is only 1 percent.
A random person gets tested for the disease and the result comes back positive. What is the
probability that the person has the disease?
Solution. A person either gets the disease or not. So the sample space is partitioned
into two sets: D for having the disease and D c for not. We have P .D/ D 0:0001 and
P .D c / D 1 − 0:0001 D 0:9999. For event A we use the event that the test result is positive. Thus we have
P .AjD c / D 0:02 and P .Ac jD/ D 0:01, which also yields P .AjD/ D 1 − 0:01 D 0:99 (complementary
probability). The question is now to compute P .DjA/, which is just an application of
Bayes' formula, i.e., Eq. (5.8.7):
$$P(D|A) = \frac{P(A|D)P(D)}{P(A|D)P(D) + P(A|D^c)P(D^c)} = \frac{(0.99)(0.0001)}{(0.99)(0.0001) + (0.02)(0.9999)} \approx 0.0049$$
So even after a positive test, the probability of actually having the disease is only about half a percent, because the disease is so rare.
Example 5.8
The Monty Hall problem is a probability puzzle, loosely based on the American television
game show Let’s Make a Deal and named after its original host, Monty Hall. The problem
was originally posed and solved in a letter by Steve Selvin to the American Statistician in
1975. In the problem, you are on a game show, being asked to choose between three doors.
A car is behind one door and two goats behind the other doors. You choose a door. The host,
Monty Hall, picks one of the other doors, which he knows has a goat behind it, and opens it,
showing you the goat. (You know, by the rules of the game, that Monty will always reveal
a goat.) Monty then asks whether you would like to switch your choice of door to the other
remaining door. Assuming you prefer a car to a goat, do you choose to
switch or not to switch?
Vos Savant’s response was that the contestant should switch to the other door. Many
readers of vos Savant’s column refused to believe switching is beneficial and rejected her
explanation. After the problem appeared in Parade, approximately 10 000 readers, including
nearly 1 000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong. Even
when given explanations, simulations, and formal mathematical proofs, many people still did
not accept that switching is the best strategy. Paul Erdősa remained unconvinced until he was
shown a computer simulation demonstrating vos Savant’s predicted result.
a
Paul Erdős (1913 – 1996) was a renowned Hungarian mathematician. He was one of the most prolific
mathematicians and producers of mathematical conjectures of the 20th century. He devoted his waking hours to
mathematics, even into his later years—indeed, his death came only hours after he solved a geometry problem
at a conference in Warsaw. Erdős published around 1 500 mathematical papers during his lifetime, a figure that
remains unsurpassed. He firmly believed mathematics to be a social activity, living an itinerant lifestyle with the
sole purpose of writing mathematical papers with other mathematicians.
First, we solve this problem using a computer simulation; the code is given in Listing 5.2.
The result shows that the probability of winning without switching is 1=3, which makes sense,
and the probability of winning when switching is 2=3, twice as high. The
code assumes, without loss of generality, that the car is behind door 1. Note that the host will
choose a door that we did not select and that does not hide the car, and open it for us.
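A simulation in the spirit of Listing 5.2 (which is not reproduced here) might look like this Python sketch; the function name and seed are mine, and the car is behind door 1 without loss of generality:

```python
import random

def monty_hall(switch, trials=100_000, seed=1):
    """Estimate the win probability of the stay/switch strategies."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = 1
        pick = rng.choice([1, 2, 3])
        # The host opens a door we did not pick and that hides a goat.
        opened = rng.choice([d for d in (1, 2, 3) if d != pick and d != car])
        if switch:
            pick = next(d for d in (1, 2, 3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running both strategies gives win rates close to 1/3 (stay) and 2/3 (switch), matching vos Savant's answer.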
Another way to see the solution is to explicitly list all the possible outcomes and count
how often we get the car if we stay versus switch. As before, assume without loss of generality
that the car is behind door 1; the outcomes for each initial pick are listed in Table 5.4. In two out of three
cases, we win the car by changing our selection after one of the doors is revealed.
Table 5.4: The Monty Hall problem: listing all possible outcomes. Car behind door 1.

  Our pick    Monty opens     If we stay    If we switch
  door 1      door 2 or 3     car           goat
  door 2      door 3          goat          car
  door 3      door 2          goat          car
Odds of an event. The odds of an event A are defined as
$$O(A) := \frac{P(A)}{P(A^c)} = \frac{P(A)}{1 - P(A)} \qquad (5.8.8)$$
That is, the odds of an event A tell how much more likely it is that A occurs than it is that it does
not occur. For instance, if P .A/ D 2=3, then P .A/ D 2P .Ac /, so the odds are 2. If the odds are
equal to ˛, then it is common to say that the odds are ˛ to 1, or ˛ W 1 in favor of the hypothesis.
Having defined the odds of an event, we now write the Bayes’ formula in the odds form.
To this end, consider now a hypothesis H that is true with probability P .H /, and suppose that
new evidence E is introduced (or equivalently, new data is introduced). Then the conditional
probabilities, given the evidence E, that H is true and that H is not true are respectively given
by
$$P(H|E) = \frac{P(E|H)P(H)}{P(E)}, \qquad P(H^c|E) = \frac{P(E|H^c)P(H^c)}{P(E)} \qquad (5.8.9)$$
Therefore, the new odds after the evidence E has been introduced are obtained by taking the
ratio of P .H jE/ and P .H c jE/:
$$\frac{P(H|E)}{P(H^c|E)} = \frac{P(H)}{P(H^c)}\,\frac{P(E|H)}{P(E|H^c)} \qquad (5.8.10)$$
That is, the new value of the odds of H is the old value, multiplied by the ratio of the conditional
probability of the new evidence given that H is true to the conditional probability given that H
is not true.
Example 5.9
Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate
cookies. Bowl 2 contains 20 of each. Now suppose you choose one of the bowls at random
and, without looking, select a cookie at random. The cookie is vanilla. What is the probability
that it came from Bowl 1?
Solution. Let us denote by H the event that the cookie comes from Bowl 1, and by E the event
that the cookie is vanilla. We have P .H / D P .H c / D 1=2 (without the information that the
chosen cookie was vanilla, it is equally likely to come from either bowl). We also
have P .EjH /, the probability that the cookie is vanilla given that it comes from Bowl 1,
which is 30=40 D 3=4. Similarly, P .EjH c / D 20=40 D 1=2. Then, using the odds
form of Bayes's rule, we have
$$\frac{P(H|E)}{P(H^c|E)} = \frac{P(H)}{P(H^c)}\,\frac{P(E|H)}{P(E|H^c)} = \frac{1/2}{1/2}\cdot\frac{3/4}{1/2} = \frac{3}{2}$$
Therefore, P .H jE/ is 3=5. Of course, we can find this probability without using the odds
form of Bayes' rule: Eq. (5.8.7) gives us
$$P(H|E) = \frac{P(E|H)P(H)}{P(E|H)P(H) + P(E|H^c)P(H^c)} = \frac{(3/4)(1/2)}{(3/4)(1/2) + (1/2)(1/2)} = \frac{3}{5}$$
And this is not unexpected, as the two formulas are equivalent. The odds form is still useful, as
demonstrated in the next example, in cases where we cannot compute the prior
odds.
For a hypothesis H and evidence (or data) E, the Bayes factor is the ratio of the likelihoods:
$$\text{Bayes factor} := \frac{P(E|H)}{P(E|H^c)} \qquad (5.8.11)$$
With this definition, the odds form of Bayes' rule, Eq. (5.8.10), can be succinctly written as
$$\text{posterior odds} = \text{prior odds} \times \text{Bayes factor}$$
From this formula, we see that the Bayes factor (BF) tells us whether the data provides
evidence for or against the hypothesis:
If BF > 1 then the posterior odds are greater than the prior odds. So the data provides
evidence for the hypothesis.
If BF < 1 then the posterior odds are less than the prior odds. So the data provides
evidence against the hypothesis.
If BF D 1 then the prior and posterior odds are equal. So the data provides no evidence
either way.
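These three cases are easy to package as code. The sketch below (Python; the function names are mine) applies the odds form to the cookie problem of Example 5.9, where the prior odds are 1:

```python
def bayes_factor(p_e_given_h, p_e_given_not_h):
    """Eq. (5.8.11): ratio of the likelihoods of the evidence under H and H^c."""
    return p_e_given_h / p_e_given_not_h

def posterior_odds(prior_odds, bf):
    """Odds form of Bayes' rule: posterior odds = prior odds * Bayes factor."""
    return prior_odds * bf

# Cookie example (Example 5.9): P(E|H) = 3/4, P(E|H^c) = 1/2, prior odds = 1.
bf = bayes_factor(3 / 4, 1 / 2)        # BF = 1.5 > 1: evidence favors Bowl 1
odds = posterior_odds(1.0, bf)         # posterior odds = 3/2
p_h_given_e = odds / (1 + odds)        # convert odds back to a probability: 3/5
```

The last line illustrates the general conversion $P = O/(1+O)$, the inverse of Eq. (5.8.8).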
$$P(A|B) = \frac{P(AB)}{P(B)}, \qquad P(B|A) = \frac{P(A|B)P(B)}{P(A)}$$
Example 5.10
Here is another problem from MacKay’s Information Theory, Inference, and Learning
Algorithms: Two people have left traces of their own blood at the scene of a crime. A suspect,
Oliver, is tested and found to have type ‘O’ blood. The blood groups of the two traces are
found to be of type ‘O’ (a common type in the local population, having frequency 60%) and
of type ‘AB’ (a rare type, with frequency 1%). Do these data [bloods of type ‘O’ and ‘AB’
found at the scene] give evidence in favor of the proposition that Oliver was one of the people
who left blood at the scene?
Solution. Let’s call H the hypothesis (or proposition) that Oliver was one of the people who
left blood at the scene. And let E be the evidence that there are bloods of type ‘O’ and ‘AB’
found at the scene. The only formula we have is the odds form of Bayes’ rule:
$$\frac{P(H|E)}{P(H^c|E)} = \frac{P(H)}{P(H^c)}\,\frac{P(E|H)}{P(E|H^c)}$$
It is obvious that we cannot compute P .H /=P .H c /. In fact, we do not need it, because the
question is not about the actual probability that Oliver was one of the people who left blood at
the scene! If we can compute the Bayes factor, then based on whether it is larger or smaller
than one, we can draw a conclusion. What is P .EjH /? If H holds, Oliver left the type 'O' blood
at the scene, so the other person must have type 'AB' blood, which happens with probability 0.01.
Thus, P .EjH / D 0:01. For P .EjH c /, we then have two random people at the scene, and we
want the probability that one has type 'O' and the other type 'AB' blood. Thus, P .EjH c / D 2 × 0.6 × 0.01,
where the factor 2 appears because either of the two unknown people could be the one with
type 'O' blood. Note that we have assumed that the blood types of the two people are independent (so
that we can just multiply the probabilities).
So,
$$\frac{P(E|H)}{P(E|H^c)} = \frac{0.01}{2 \times 0.6 \times 0.01} \approx 0.83$$
Since the Bayes factor is smaller than 1, the evidence does not support the proposition that
Oliver was one of the people who left blood at the scene.
Another suspect, Alberto, is found to have type 'AB' blood. Do the same data give evidence
in favor of the proposition that Alberto was one of the two people at the scene?
$$\frac{P(E|H)}{P(E|H^c)} = \frac{0.6}{2 \times 0.6 \times 0.01} = 50$$
Since the Bayes factor is a lot larger than 1, the data provides strong evidence in favor of
Alberto being at the crime scene.
Two events A and B are said to be independent if P .AB/ D P .A/P .B/. What this
formula says is that for two independent events A and B, the chance that both of them
happen at the same time is equal to the product of the chance that A happens and the chance
that B happens. And this is the multiplication rule of probability that Cardano discovered, check
Section 5.2.
Example 5.11
Suppose that we toss 2 fair six-sided dice. Let E1 denote the event that the sum of the dice
is 6, E2 be the event that the sum of the dice equals 7, and F denote the event that the
first die equals 4. The questions are: are E1 and F independent and are E2 and F independent?
Solution. We just need to check whether the definition of independence of two events i.e.,
P .AB/ D P .A/P .B/ holds. We have
$$P(E_1)P(F) = \frac{5}{36}\cdot\frac{6}{36} = \frac{5}{216}$$
and
$$P(E_1 F) = P(\{(4,2)\}) = \frac{1}{36}$$
Thus, P .E1 F / ¤ P .E1 /P .F /: the two events E1 and F are not independent; we call them
dependent events.
In the same manner, we compute
$$P(E_2)P(F) = \frac{6}{36}\cdot\frac{6}{36} = \frac{1}{36}$$
and
$$P(E_2 F) = P(\{(4,3)\}) = \frac{1}{36}$$
Thus, P .E2 F / D P .E2 /P .F /: the two events E2 and F are independent. Shall we move
on to other problems? Not yet: we had to compute quite a few probabilities to get these answers.
Could we have guessed them intuitively? Let's try. To get a sum of six (event E1 ), the first die
must be one of f1; 2; 3; 4; 5g; it cannot be six. Thus, E1 depends on the outcome of the
first die. On the other hand, to get a sum of seven (event E2 ), the first die can be anything
in f1; 2; 3; 4; 5; 6g, i.e., all the possible outcomes of a die. Therefore, E2 does not depend on the
outcome of the first die.
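We can also let the computer do the counting by enumerating the 36 equally likely outcomes. A Python sketch (variable names are mine):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 (die1, die2) pairs

def prob(event):
    """Probability of an event = number of favorable outcomes / 36."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

E1 = lambda o: o[0] + o[1] == 6   # sum of the dice is 6
E2 = lambda o: o[0] + o[1] == 7   # sum of the dice is 7
F  = lambda o: o[0] == 4          # first die equals 4

p_e1f = prob(lambda o: E1(o) and F(o))   # 1/36, but P(E1)P(F) = 5/216
p_e2f = prob(lambda o: E2(o) and F(o))   # 1/36 = P(E2)P(F)
```

The enumeration confirms the calculation: E1 and F are dependent, while E2 and F are independent.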
Independent events vs disjoint events. Are disjoint events independent or not? If A and
B are two disjoint events with P .A/ > 0 and P .B/ > 0, then AB D ;, thus P .AB/ D 0, whereas
P .A/P .B/ > 0. So P .AB/ ¤ P .A/P .B/: two disjoint events (of positive probability) are dependent.
Some rules of independent events. Given that A and B are two independent events, what can
we say about their complements or unions? Regarding the complementary events, we have this
result: if A and B are independent, then A and B c are also independent (and likewise Ac and B, and Ac and B c ).
Thus, if A and B are independent events, then the probability of A’s occurrence is unchanged
by information as to whether or not B has happened.
Now we are going to generalize the definition of independence of two events to more than
two events. Let’s start simple with three events, and with one concrete example. It motivates our
definition of the independence of three events.
Example 5.12
Two fair 6-sided dice are rolled, one red and one blue. Let A be the event that the red die’s
result is 3. Let B be the event that the blue die’s result is 4. Let C be the event that the sum of
the rolls is 7. Are A; B; C mutually independent?
Solution. It’s clear that A and B are independent. From Example 5.11, we also know that A; C
are independent and B; C are also independent. We’re now checking whether P .ABC / D
P .A/P .B/P .C /. First,
$$P(A)P(B)P(C) = \frac{1}{6}\cdot\frac{1}{6}\cdot\frac{6}{36} = \frac{1}{216}$$
Second,
$$P(ABC) = P(A)P(B|A)P(C|AB) = \frac{1}{6}\cdot\frac{1}{6}\cdot 1 = \frac{1}{36}$$
Since P .ABC / ¤ P .A/P .B/P .C /, the three events are not mutually independent, even
though they are pairwise independent.
Three events A, B, and C are (mutually) independent if all of the following conditions hold:
$$P(AB) = P(A)P(B), \quad P(AC) = P(A)P(C), \quad P(BC) = P(B)P(C)$$
$$P(ABC) = P(A)P(B)P(C)$$
where P .H / D p is the probability that the coin lands on head and P .H c / D q is the probability
that the coin land on tail. What is P .EjH / and P .EjH c /? Now, as the first coin lands on head
(i.e., H ), A has i C 1 coins. Since successive flips are assumed to be independent, we just have
the same game in which A starts with i C 1 coins. Therefore, P .EjH / D Pi C1 . Similarly, if
the first coin shows tail, P .EjH c / D Pi 1 . With that, we can write
$$P_i = p\,P_{i+1} + q\,P_{i-1} \qquad (5.8.16)$$
Now, using the fact that p C q D 1, we have Pi D .p C q/Pi D pPi C qPi . Replace Pi in the
above equation with this, we obtain
$$pP_i + qP_i = pP_{i+1} + qP_{i-1} \implies P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}), \quad i = 1, 2, \ldots, N-1$$
Now, we explicitly write out this equation for i D 1; 2; 3; : : :, with the so-called boundary
condition P0 D 0, i.e., we assume that if player A starts with zero coins, he has already lost:
$$i=1: \quad P_2 - P_1 = \frac{q}{p}(P_1 - P_0) = \frac{q}{p}P_1$$
$$i=2: \quad P_3 - P_2 = \frac{q}{p}(P_2 - P_1) = \left(\frac{q}{p}\right)^2 P_1 \quad \text{(using the result from the row above)}$$
$$i=3: \quad P_4 - P_3 = \frac{q}{p}(P_3 - P_2) = \left(\frac{q}{p}\right)^3 P_1$$
$$\vdots$$
$$i=k-1: \quad P_k - P_{k-1} = \frac{q}{p}(P_{k-1} - P_{k-2}) = \left(\frac{q}{p}\right)^{k-1} P_1$$
What do we do next? We sum up all the above equations, because we see a telescoping sum
.P2 − P1 / C .P3 − P2 / C : for the first three rows, only P4 and P1 survive the
cancellation. Doing this for k D 1; 2; 3; : : : ; N gives
$$P_k = P_1\left(1 + \frac{q}{p} + \left(\frac{q}{p}\right)^2 + \left(\frac{q}{p}\right)^3 + \cdots + \left(\frac{q}{p}\right)^{k-1}\right)$$
And what is the sum on the RHS? It is a (finite) geometric series; recall from Eq. (2.18.5) that
$$a + ar + ar^2 + ar^3 + \cdots + ar^{n-1} = \frac{a(1 - r^n)}{1 - r}$$
Thus, we have a geometric series with a D 1 and r D q=p; hence for i D 1; 2; 3; : : : ; N (switching
back to i instead of k)
$$P_i = \begin{cases} i\,P_1, & \text{if } q/p = 1 \\[2mm] P_1\,\dfrac{1 - (q/p)^i}{1 - q/p}, & \text{if } q/p \ne 1 \end{cases}$$
All is good, but we still do not know P1 . Now we use the other boundary condition, PN D 1,
which allows us to determine P1 and then Pi . Plugging i D N into the above equation, we
obtain
$$P_1 = \begin{cases} \dfrac{1}{N}, & \text{if } p = 1/2 \\[2mm] \dfrac{1 - q/p}{1 - (q/p)^N}, & \text{if } p \ne 1/2 \end{cases}$$
And with that, Pi is given by
$$P_i = \begin{cases} \dfrac{i}{N}, & \text{if } p = 1/2 \\[2mm] \dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \ne 1/2 \end{cases} \qquad (5.8.17)$$
What are the possible outcomes of this gambler's ruin game? The first is that player A wins; the
second is that player B wins. Is that all? Is it possible that the game never ends? To check, we
need to compute the probability that player B wins when A starts with i coins; this probability
is designated by Qi . If Pi C Qi D 1, then the game will definitely end with either A winning
or B winning.
By symmetry, we can get the formula for Qi from Pi by replacing i with N − i (the number of
coins that player B starts with) and p with q:
$$Q_i = \begin{cases} \dfrac{N - i}{N}, & \text{if } p = 1/2 \\[2mm] \dfrac{1 - (p/q)^{N-i}}{1 - (p/q)^N}, & \text{if } p \ne 1/2 \end{cases}$$
I have moved P1 to the RHS.
Note that q=p D 1 is equivalent to p D 1=2.
$$P_i + Q_i = \frac{1 - (q/p)^i}{1 - (q/p)^N} + \frac{1 - (p/q)^{N-i}}{1 - (p/q)^N} = 1$$
Some details were skipped for the sake of brevity. Thus, the game will end with either A winning or
B winning. Let's pause and take stock: we have met a telescoping sum and a
geometric series in a game of coin tossing! Isn't mathematics cool?
Solution using difference equations. Eq. (5.8.16) is a (linear) difference equation (or recurrence
equation), which involves differences between successive values of a function of a
discrete variable. In that equation we have the differences between Pi , Pi C1 and Pi 1 , all
values of a function of the discrete variable i . (A discrete variable is one whose values
can only be integers.) Note that a difference equation is the discrete analog of a differential
equation, discussed in Chapter 8.
To solve Eq. (5.8.16), we rewrite it as
$$pP_{i+1} - P_i + qP_{i-1} = 0 \qquad (5.8.18)$$
and seek solutions of the exponential form
$$P_i = Ar^i \implies P_{i+1} = Ar^{i+1}, \quad P_{i-1} = Ar^{i-1}$$
Substituting into Eq. (5.8.18) and factoring out the common term gives
$$Ar^{i-1}\left(pr^2 - r + q\right) = 0 \implies pr^2 - r + q = 0 \qquad (5.8.19)$$
This is a quadratic equation; for the case p ¤ 1=2 it has two distinct roots (note that q D 1 − p):
$$r_1 = 1, \qquad r_2 = \frac{q}{p}$$
Thus, $A_1 r_1^i + A_2 r_2^i$ is the general solution to Eq. (5.8.18), so we can write
$$P_i = A_1 + A_2 \left(\frac{q}{p}\right)^i \qquad (5.8.20)$$
Now, we determine A1 and A2 using the two boundary conditions P0 D 0 and PN D 1:
$$A_1 + A_2 = 0, \qquad A_1 + A_2\left(\frac{q}{p}\right)^N = 1$$
Why this form? If we start with the simpler equation Pi D qPi 1 , then
$$P_i = q^2 P_{i-2} = q^3 P_{i-3} = \cdots = q^i P_0$$
so solutions are naturally exponential in i .
Let’s see what are the odds playing in a casino. Assume that N = 10 000 Units. Using
Eq. (5.8.17), the odds are calculated for different initial wealth. The results shown in Table 5.5
are all bad news. As we cannot have more money than the casino, we look at the top half of the
table, and the odds are all zero (do not look at the column with p D 0:5; that’s just for reference).
One way to improve our odds is to be bold: instead of betting 1 dollar per game, bet 10 dollars,
for example.
If N D 100 dollars and player A starts with 10 dollars, what is his chance if he bets 10
dollars per game? Think of one coin as 10 dollars; then we can just use Eq. (5.8.17) with i D 1 and
N D 10: $P_1 = \dfrac{1 - q/p}{1 - (q/p)^{10}}$.
Table 5.5: Probabilities of player A breaking the bank with total initial wealth N = 10000 Units.
You are the HR manager of a company and need to hire the best secretary out of
a given number N of candidates. You can interview them one by one, in random
order, and you must accept or reject each candidate right after the interview; a
rejected candidate cannot be recalled.
The first thing we need to do is to translate the problem into mathematics. Let’s assign a
counting number to each candidate. Thus, four candidates John, Sydney, Peter and Laura would
be translated to the list of integers .1; 7; 3; 9/; each integer can be thought of as the score of a candidate.
In general we denote by .a1 ; a2 ; : : : ; aN / this list. The problem now is to find the maximum of
this list, denoted by amax .
If you’re thinking, this is easy, I pick Laura as the best applicant for 9 is the maximum of
.1; 7; 3; 9/. No, you cannot do this for one simple reason: you cannot look ahead. Think of your
dating, you cannot know in advance who will you date in the future! Thus, at the time the HR
manage is interviewing Peter (3) she does not know that there is a better candidate waiting for
her. Note that she has to make a decision (rejecting or accepting) immediately after the interview.
That is the rule of this problem. It might not be real, but mathematicians do not care.
Ok, then I pick the last applicant! But the probability of getting the best this way is only 1=N ;
if N is large, that probability is slim. So, we cannot rely on luck; we need some strategy here. Again,
think of dating, what is the strategy there? The strategy most adults adopt — insofar as they
consciously adopt a strategy — is to date around for a while, gain some experience, figure out
one’s options, and then choose the next best thing that comes around.
We adopt that strategy here. Thus, we scan the first r candidates and record the maximum score,
denoted by $a^\ast = \max\{a_i : 1 \le i \le r\}$, and then select the first subsequent candidate whose
score is larger than $a^\ast$ (Fig. 5.6). Now we compute the probability of success under this
strategy. Obviously, that probability depends on r; once we have that probability, call it P .r/,
we can find the r that maximizes P .r/. If that optimal r were five, the optimal
strategy would be: date 5 persons, reject all of them, and marry the next person who is better than the
best among your five old lovers.
Figure 5.6: Secretary problem: scan the first r candidates, record the maximum score
$a^\ast = \max\{a_i : 1 \le i \le r\}$, and select the first candidate whose score is larger than $a^\ast$.
Suppose the candidate we select is the nth one, i.e., the first candidate after the r rejected
(scanned) ones whose score exceeds $a^\ast$. Of course we need n r C 1 (otherwise we would have
lost the best among the rejected r candidates). The nth candidate is the best overall (i.e.,
$a_n = a_{\max}$) and actually gets selected only if the best of the first n − 1 candidates lies among
the first r (so that no earlier candidate beats $a^\ast$). Therefore, P .r/ is
$$P(r) = \sum_{n=r+1}^{N} P(n\text{th candidate is the best and is the one selected})$$
$$= \sum_{n=r+1}^{N} P(n\text{th is the best})\; P(\text{best of the first } n-1 \text{ is among the first } r)$$
$$= \sum_{n=r+1}^{N} \frac{1}{N}\,\frac{r}{n-1} = \frac{r}{N}\sum_{n=r+1}^{N}\frac{1}{n-1} = \frac{r}{N}\sum_{n=r}^{N-1}\frac{1}{n}$$
The question now is: what value of r maximizes P .r/? Consider N D 10: for r D 1; 2; : : : ; 10
we compute the ten values of P .r/ and plot them (see the figure); P .r/ first increases and then
decreases. So the optimal r is the last one for which moving on to
r C 1 lowers the probability, i.e., we just need to find r such that $P(r+1) \le P(r)$:
$$P(r+1) \le P(r) \iff \frac{r+1}{N}\sum_{n=r+1}^{N-1}\frac{1}{n} \le \frac{r}{N}\sum_{n=r}^{N-1}\frac{1}{n} \iff \sum_{n=r+1}^{N-1}\frac{1}{n} \le 1$$
Recognizing that the last sum is related to the harmonic numbers , we rewrite it as
$$\sum_{n=r+1}^{N-1}\frac{1}{n} = \sum_{n=1}^{N-1}\frac{1}{n} - \sum_{n=1}^{r}\frac{1}{n} \approx \ln(N-1) - \ln r = \ln\frac{N-1}{r}$$
where the first sum is the .N − 1/th harmonic number HN 1 , the second is the rth harmonic
number Hr , and we used the approximation $H_n \approx \ln n + \gamma + O(1/n)$, where $\gamma$
is the Euler-Mascheroni constant defined in Eq. (4.14.24). When N is very large, N − 1 ≈ N,
and thus we need to find r such that
$$\ln\frac{N}{r} \le 1 \iff \frac{N}{r} \le e \implies r \ge \frac{N}{e} \approx 0.37N$$
What this formula tells us is that we should discard 37% of the total number of candidates, then
select the next person that comes along who is better than all of those discarded.
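The 37% rule can be tested by simulation. Below is a Python sketch (function name, parameters and seed are mine): shuffle N candidate scores, reject the first r, then take the first later candidate who beats them all.

```python
import random

def secretary_success(N=100, r=37, trials=20_000, seed=3):
    """Estimate the probability that the 'reject the first r, then pick the first
    candidate better than all of them' rule selects the overall best of N."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        scores = list(range(N))          # the best candidate has score N - 1
        rng.shuffle(scores)              # candidates arrive in random order
        best_rejected = max(scores[:r])
        # First later candidate beating the rejected batch; if none exists,
        # we are stuck with the last candidate interviewed.
        chosen = next((s for s in scores[r:] if s > best_rejected), scores[-1])
        wins += (chosen == N - 1)
    return wins / trials
```

With N = 100 and r = 37 the success frequency comes out near $1/e \approx 0.37$, in line with the analysis above.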
If needed, check Section 4.14.7 for refresh on harmonic numbers.
Kepler and the marriage problem. Kepler–the man who gave us the three laws on planetary
motions–also gave us the marriage problem (now called the secretary problem). On 30 October
1613, Kepler married the 24-year-old Susanna Reuttinger. Following the death of his first wife
Barbara, Kepler had considered 11 different matches over two years. He eventually returned to
Reuttinger (the fifth match) who, he wrote, "won me over with love, humble loyalty, economy
of household, diligence, and the love she gave the stepchildren."
Consider a countable (i.e., finite or countably infinite) sample space
$$S = \{s_1, s_2, s_3, \ldots\}$$
Now if A is an event, then A S , so A is also countable. By the third axiom (of probability),
we have
$$P(A) = P\Big(\bigcup_{s_j \in A} \{s_j\}\Big) = \sum_{s_j \in A} P(s_j) \qquad (5.10.1)$$
Thus in a countable sample space, to find the probability of an event, all we need to do is to sum
the probability of individual elements in that set. How can we find the probability of individual
elements then? We answer this question next.
Finite sample spaces with equally likely outcomes. An important special case of discrete
probability models is when we have a finite sample space S, where each outcome is equally
likely to occur, i.e., $P(s_1) = P(s_2) = \cdots = P(s_N)$. Since these probabilities sum to one,
$$P(s_i) = \frac{1}{N} \quad \text{for all } i = 1, 2, \ldots, N$$
The fourth woman was nice to look at — of "tall stature and athletic build", but Kepler wanted to check out
the next one, who, he’d been told, was "modest, thrifty, diligent and [said] to love her stepchildren," so he hesitated.
He hesitated so long, that both No. 4 and No. 5 got impatient and took themselves out of the running, leaving him
with No. 6, who scared him.
Next, we’re going to calculate P .A/ for event A with jAj D M , we write
0 1
[ X M jAj
P .A/ D P @ sj A D P .sj / D D (5.10.3)
sj 2A sj 2A
N jSj
Thus, finding the probability of A reduces to a counting problem in which we need to count
how many elements are in A and S . We recover the result that Cardano had discovered . And do
we know how to count things... efficiently? Yes, we do (Section 2.24). If your understanding of
factorials, permutations and combinations is not solid (yet), you should review them before
continuing with probability.
The birthday problem deals with the probability that in a set of n randomly selected people,
at least two people share the same birthday. This problem is often referred to as the birthday
paradox because the probability is counter-intuitively high: with only 23 people, the probability
is 50% that at least two people share the same birthday, and with 50 people that chance is about
97%.
90%. The first publication of a version of the birthday problem was by Richard von Mises|| in
1939.
Equipped with probability theory, we’re going to solve this problem. But, we need a few
assumptions. First, we disregard leap years, which simplifies the math and doesn't change the
results by much. We also assume that all birthdays have an equal probability of occurring .
Because leap years are not considered, there are only 365 possible birthdays. And we use the formula
P .Ac / D 1 − P .A/: instead of working directly, we approach the problem indirectly by
asking what is the probability that no two people share the same birthday. This is because doing
so is much easier (note that in the direct problem, handling "at least two" people is not easy, as
there are too many possibilities).
The sample space is f1; 2; : : : ; 365gn , which has a cardinality of 365n . For the first person of
n people, there are 365 choices of birthday; for the second person, only 364; for the third,
363; and for the nth person, 365 − n + 1 choices. Thus the probability that
no two people share the same birthday is
$$\frac{(365)(364)\cdots(365-n+1)}{365^n}$$
Therefore, the probability we're looking for is
$$P(n) = 1 - \frac{(365)(364)\cdots(365-n+1)}{365^n} \qquad (5.10.4)$$
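Eq. (5.10.4) is a one-liner to evaluate numerically. A Python sketch (the function name is mine):

```python
def birthday_collision(n):
    """Eq. (5.10.4): probability that at least two of n people share a birthday."""
    p_distinct = 1.0
    for k in range(n):                   # factors 365/365, 364/365, ...
        p_distinct *= (365 - k) / 365
    return 1.0 - p_distinct
```

Evaluating it confirms the counter-intuitive numbers quoted above: the probability first crosses 50% at n = 23, and at n = 50 it is about 97%.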
Note that Cardano could not prove this formula, and we could, starting from Kolmogorov’s three axioms.
||
Richard Edler von Mises (1883 – 1953) was an Austrian scientist and mathematician who worked on solid
mechanics, fluid mechanics, aerodynamics, aeronautics, statistics and probability theory. In solid mechanics, von
Mises made an important contribution to the theory of plasticity by formulating what has become known as the von
Mises yield criterion. If you want to become a civil/mechanical/aeorspace engineer, you will encounter his name.
The second assumption is not true. But for the first attack to this problem, do not bother too much.
A plot of P .n/ shows that the numerical solutions match well with the analytical solution: the
probability reaches 50% at n D 23 and about 97% at n D 50.
The main reason that this problem is called a paradox is that if you are in a group of 23 and
you compare your birthday with the others, you think you’re making only 22 comparisons. This
means that there are only 22 chances of sharing the birthday with someone. However, we don’t
make only 22 comparisons. That number is much larger and it is the reason that we perceive
this problem as a paradox. Indeed, the comparisons of birthdays will be made between every
possible pair of individuals. With 23 individuals, there are $\binom{23}{2} = (23 \cdot 22)/2 = 253$ pairs to
consider, which is well over half the number of days in a year (182.5 or 183).
Now, we consider the inverse problem of the birthday problem : how many people do we need
(i.e., what is n) so that at least two people will share a birthday with a probability of 0.5? It
seems easy: we just need to solve the following equation for n
$$1 - \frac{(365)(364)\cdots(365-n+1)}{365^n} = 0.5$$
Hmm. How do we solve this equation? It is interesting to realize that a bit of massaging of P .n/
helps. We rewrite P .n/ as follows
$$P(n) = 1 - \frac{365}{365}\cdot\frac{364}{365}\cdots\frac{365-n+1}{365}
= 1 - \frac{365}{365}\left(1 - \frac{1}{365}\right)\left(1 - \frac{2}{365}\right)\cdots\left(1 - \frac{n-1}{365}\right) \qquad (5.10.5)$$
Now comes the art of approximation; recall that for x close to zero, we have
$$e^x \approx 1 + x \implies e^{-x} \approx 1 - x$$
Carl Gustav Jacob Jacobi, a 19th-century mathematician, used this phrase to describe how he thought many
problems in math could be solved by looking at the inverse.
If this is not clear check Taylor series in Section 4.14.8. It is hard to live without calculus!
(Note that Eq. (5.10.5) has terms of the form 1 − x.) Thus, Eq. (5.10.5) becomes
$$P(n) \approx 1 - e^{-1/365}\,e^{-2/365}\cdots e^{-(n-1)/365} = 1 - \exp\left(-\frac{1 + 2 + \cdots + (n-1)}{365}\right) \qquad (5.10.6)$$
$$= 1 - \exp\left(-\frac{n(n-1)}{2\cdot 365}\right) \approx 1 - \exp\left(-\frac{n^2}{2\cdot 365}\right)$$
where use was made of the sum of the first counting numbers formula (Section 2.5.1).
With this approximation, it is easy to find the n such that P .n/ D 0:5:
$$1 - \exp\left(-\frac{n^2}{2\cdot 365}\right) = 0.5 \implies \frac{n^2}{2\cdot 365} = \ln 2 \implies n = \sqrt{(\ln 2)\cdot 730} = 22.494$$
And from that we get n D 23.
Figure 5.8: A random variable is a real-valued function from the sample space S to R.
A continuous random variable is so called because we cannot list its values as we do for a discrete
random variable. (Still remember Hilbert's hotel with infinite rooms and Georg Cantor?) This
section is confined to a discussion of discrete random variables only.
Example 5.13
Toss a coin twice and let X be the number of heads observed. Find the probability mass
function PX . The sample space is S D f.H; H /; .H; T /; .T; H /; .T; T /g. So, the number of
heads X takes values in f0; 1; 2g, and since the four outcomes are equally likely,
$$P_X(0) = P(\{(T,T)\}) = \frac{1}{4}, \quad P_X(1) = P(\{(H,T),(T,H)\}) = \frac{1}{2}, \quad P_X(2) = P(\{(H,H)\}) = \frac{1}{4}$$
So, the probability mass function of a random variable X is the function that takes a num-
ber x 2 R as input and returns the number P .X D x/ as output. (Note that PX .x/ D 0 for
any x that is not among the possible values of X .)
To better visualize the PMF, we can plot it. Fig. 5.9 shows the PMF of the above random
variable X ; the plot on the right is known as a bar plot. As we see, the random variable can take
three possible values 0,1 and 2. The figure also clearly indicates that the event X D 1 is twice
as likely as the other two possible values.
Figure 5.9: The PMF of the random variable X : (a) stem plot of PX .x/ for x D 0; 1; 2; (b) the
same PMF as a bar plot.
Rolling two dice. You either get a double six (with probability of 1=36) or not a double
six (with a chance of 35=36).
Definition 5.10.1
A random variable X is said to be a Bernoulli random variable with parameter p, denoted by
X Ï Bernoul li.p/, if its PMF is given by
$$P_X(x) = \begin{cases} p, & \text{if } x = 1 \\ 1 - p, & \text{if } x = 0 \\ 0, & \text{otherwise} \end{cases} \qquad (5.10.8)$$
Geometric distribution. Assume that we have an unfair coin for which P .H / D p, where
0 < p < 1 and p ¤ 0:5. We toss the coin repeatedly until we observe a head for the first time.
Let X be the total number of coin tosses. Find the distribution of X .
First, we see that X D f1; 2; 3; : : : ; k; : : :g. To find the distribution of X is to find PX .k/ D
P .X D k/ for k D 1; 2; 3; and so on. These probabilities are (as all tosses are independent, the
probability of, say, TH is just the product of the probabilities of getting T and then H )
$$P_X(1) = P(H) = p$$
$$P_X(2) = P(TH) = (1-p)p$$
$$P_X(3) = P(TTH) = (1-p)(1-p)p = (1-p)^2 p$$
$$\vdots$$
$$P_X(k) = P(TT\cdots TH) = (1-p)^{k-1} p$$
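A quick sanity check that these probabilities form a PMF (they must sum to one, cf. Eq. (5.10.7)) can be done numerically. A Python sketch, with p = 0.3 as an arbitrary choice of mine:

```python
p = 0.3                                    # an arbitrary choice for P(H)
ks = range(1, 200)                         # truncate the infinite support
pmf = [(1 - p) ** (k - 1) * p for k in ks]
total = sum(pmf)                           # should be (very nearly) 1
```

The truncation error is the geometric tail $(1-p)^{199}$, which is astronomically small here, so the partial sum is 1 to machine precision.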
Binomial distribution. Suppose that we have a coin for which P(H) = p and thus
P(T) = 1 - p. We toss it five times. What is the probability that we observe exactly k heads
and 5 - k tails?|| To solve this problem, we start with a concrete case: let A be the event that we
observe exactly three heads and two tails. What is P(A)?
||
Of course k = 0, 1, 2, 3, 4, 5.
Because A is the event that we observe exactly three heads and two tails, we can write

A = \{HHHTT, TTHHH, THHHT, \ldots\}

It can be shown that the probability of each member of A is p^3 (1-p)^2. As there are |A| such
members, the probability of A is

P(A) = |A| \, p^3 (1-p)^2

But from Section 2.24.5, we know that |A| = \binom{5}{3}, so

P(A) = \binom{5}{3} p^3 (1-p)^2
With this, we have the following definition of a binomial distribution.
Definition 5.10.2
A random variable X is said to be a binomial random variable with parameters n and p, shown
as X ~ Binomial(n, p), if its PMF is given by^d

P_X(k) = \binom{n}{k} p^k (1-p)^{n-k} \quad \text{for } k = 0, 1, 2, \ldots, n    (5.10.9)

^d How to make sure that this is indeed a PMF? Eq. (5.10.7) is the answer.
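That check is easy to do numerically: the masses of Eq. (5.10.9) over k = 0, ..., n sum to one by the binomial theorem (a Python sketch using the standard library's binomial coefficient):

```python
from math import comb

def binom_pmf(k, n, p):
    """Eq. (5.10.9): C(n, k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 5, 0.6
total = sum(binom_pmf(k, n, p) for k in range(n + 1))
print(abs(total - 1.0) < 1e-12)  # True, by the binomial theorem
```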
Example 5.14
What is the probability that among five families, each with six children, at least three of the
families have four or more girls? Of course, we assume that the probability to have a boy is
0.5.
To solve this problem, first note that the five families are the five trials, and each trial is a
success if that family has at least four girls. If we denote by p_0 the probability of a family
having at least four girls, the probability that at least three of the families have four or more
girls is:

\binom{5}{3} p_0^3 (1-p_0)^2 + \binom{5}{4} p_0^4 (1-p_0) + \binom{5}{5} p_0^5    (5.10.10)

To find p_0, we realize that to get six children, each family has to perform six Bernoulli trials
with p = 0.5 to get a boy or a girl, thus:

p_0 = \binom{6}{4}(0.5)^6 + \binom{6}{5}(0.5)^6 + \binom{6}{6}(0.5)^6 = \frac{11}{32}

Plugging this p_0 into Eq. (5.10.10) we get the answer to this problem. But that number is less
important than the solution process.
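Still, the number itself is a one-liner to obtain (a Python sketch of the two steps above):

```python
from math import comb

# Step 1: probability that one family of six has at least four girls
p0 = sum(comb(6, k) for k in (4, 5, 6)) * 0.5**6   # = 22/64 = 11/32

# Step 2: plug p0 into Eq. (5.10.10)
answer = sum(comb(5, k) * p0**k * (1 - p0) ** (5 - k) for k in (3, 4, 5))
print(p0, answer)   # p0 = 0.34375; answer is a bit below 0.23
```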
We can generalize what we have found in the above example to get a formula for calculating
the probability of a ≤ X ≤ b:

P(a \le X \le b) = \sum_{k=a}^{b} \binom{n}{k} p^k (1-p)^{n-k}    (5.10.11)
Figure 5.11: Visualization of two binomial distributions. Observe that the curves peak at around np.
To have a better understanding of the binomial distribution, we plot some of them in Fig. 5.11.
The curve has an ascending branch from k = 0 to k_max, and a descending branch for
k ≥ k_max. It is possible to determine the value of k_max. First, let's denote b_n(k) = P_X(k), and
compute the ratio of two successive terms (with q = 1 - p):

\frac{b_n(k)}{b_n(k-1)} = \frac{n!}{(n-k)!\,k!} p^k (1-p)^{n-k} \bigg/ \frac{n!}{(n-k+1)!\,(k-1)!} p^{k-1} (1-p)^{n-k+1} = \frac{(n-k+1)p}{kq}    (5.10.12)

To find the peak of the binomial distribution curve, we find k such that the ratio b_n(k)/b_n(k-1) is
larger than or equal to one:

\frac{b_n(k)}{b_n(k-1)} \ge 1 \iff (n+1)p \ge k \implies k_{\max} \approx np    (5.10.13)
Now we can understand why each plot in Fig. 5.11 has a peak near np. And why is np at the
peak? Because it is the expected value of X, i.e., the average value of X, and it should be
the average value that has the highest probability.
Having the ratio between successive terms, it is possible to compute b_n(k) recursively. That
is, we compute the first term, b_n(0), then use it to compute the second term, b_n(1), and so on:

b_n(k) = b_n(k-1)\,\frac{(n-k+1)p}{kq}    (5.10.14)
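This recursion builds the whole PMF with one multiplication per term, starting from b_n(0) = (1-p)^n (a Python sketch):

```python
def binomial_pmf_recursive(n, p):
    """Build b_n(0..n) recursively: b_n(k) = b_n(k-1) * (n-k+1)p / (kq)."""
    q = 1 - p
    b = [q**n]                                   # b_n(0)
    for k in range(1, n + 1):
        b.append(b[-1] * (n - k + 1) * p / (k * q))
    return b

b = binomial_pmf_recursive(10, 0.3)
# The terms sum to one, and the peak sits near np = 3.
print(abs(sum(b) - 1.0) < 1e-9, max(range(11), key=lambda k: b[k]))
```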
John Arbuthnot and Willem Jacob ’s Gravesande. In 1710 John Arbuthnot (1667–1735)
presented a paper titled An Argument for Divine Providence to the London Royal Society, which
is a very early example of statistical hypothesis testing in social science. The paper presents
a table containing the number of baptised children in London for the previous 82 years. One
seemingly spectacular feature of this data was that in each of these 82 years the number of boys
was higher than that of the girls. Willem Jacob ’s Gravesande (1688 – 1742) set out to
find out why.
’s Gravesande first found a representative year by taking the average number of births over
the 82 years in question, which was 11 429. For each year, he then scaled the numbers of births
per sex to that average number. In this scaled data, Gravesande found that the number of boys
had always been between 5 745 and 6 128.
Now, seeing a birth as a Bernoulli trial with p = 0.5, he used Eq. (5.10.11) to compute the
probability of the number of male births falling within this range in a given year as

P = \sum_{k=5745}^{6128} \binom{11429}{k} \left(\frac{1}{2}\right)^{11429}    (5.10.15)

How did ’s Gravesande compute this P in 1710? First, he re-wrote it as follows (using the fact
that the sum of the coefficients of the nth row in Pascal's triangle is 2^n, check Section 2.27)

P = \frac{\sum_{k=5745}^{6128} \binom{11429}{k}}{2^{11429}} = \frac{\sum_{k=5745}^{6128} \binom{11429}{k}}{\sum_{k=0}^{11429} \binom{11429}{k}}    (5.10.16)
The problem now boils down to how to handle the coefficients (and their sum) in a row
of Pascal's triangle when n is large. To show how ’s Gravesande did that, just consider the case
n = 5 (11429 is, like 5, an odd number):

\binom{5}{0} \quad \binom{5}{1} \quad \binom{5}{2} \quad \boxed{\binom{5}{3}} \quad \binom{5}{4} \quad \binom{5}{5}    (5.10.17)
Willem Jacob ’s Gravesande was a Dutch mathematician and natural philosopher, chiefly remembered for
developing experimental demonstrations of the laws of classical mechanics and the first experimental measurement
of kinetic energy. As professor of mathematics, astronomy, and philosophy at Leiden University, he helped to
propagate Isaac Newton’s ideas in Continental Europe.
Today we would write a few lines of code and get the result 0.2873.
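Indeed, with exact big-integer arithmetic the computation that cost ’s Gravesande a whole table of coefficients is now a few lines (a Python sketch; it takes a moment, as the coefficients have thousands of digits):

```python
from math import comb

n = 11429
# Sum the Pascal-triangle coefficients from k = 5745 to 6128, exactly.
total = sum(comb(n, k) for k in range(5745, 6129))
P = total / 2**n    # Python divides the huge integers into a correct float
print(P)            # should be close to the 0.2873 quoted above
```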
Thus we can assign the middle term (boxed in the above equation) to any value; say \binom{5}{3} = a.
We can then compute the next term \binom{5}{4} in terms of a using the identity between adjacent
coefficients, \binom{n}{k+1} = \binom{n}{k}\frac{n-k}{k+1}, and then the next term \binom{5}{5} in terms of a. Adding all these
three terms and multiplying the result by two, we get the sum of all the coefficients in terms of a.
In this way, ’s Gravesande constructed a table containing half of the coefficients in (a+b)^{11429},
starting from the middle term (k = 5 715) up to k = 5 973. Note that the coefficients are
decreasing from the middle term, and from k = 5 973 on, the coefficients are negligible.
Although ’s Gravesande was able to solve this computationally challenging binomial-related
problem, he stopped there. Thus, ’s Gravesande was not a systematic mathematician but rather a
problem solver and a number cruncher.
The next step is to take the natural logarithm of Eq. (5.10.20) to have a sum instead of a product:

\ln A = \ln\frac{m+1}{m-1} + \ln\frac{m+2}{m-2} + \cdots + \ln\frac{m+(m-1)}{m-(m-1)} + \ln 2
      = \ln\frac{1+1/m}{1-1/m} + \ln\frac{1+2/m}{1-2/m} + \cdots + \ln\frac{1+(m-1)/m}{1-(m-1)/m} + \ln 2    (5.10.21)
      = \sum_{i=1}^{m-1} \ln\frac{1+i/m}{1-i/m} + \ln 2

Now we can use the following series for \ln\frac{1+x}{1-x} (check Section 4.14.3 for details),

\ln\frac{1+x}{1-x} = 2\left(x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots\right) = 2\sum_{k=1}^{\infty} \frac{x^{2k-1}}{2k-1}    (5.10.22)

to have

\ln A - \ln 2 = 2\sum_{i=1}^{m-1}\sum_{k=1}^{\infty} \frac{1}{2k-1}\left(\frac{i}{m}\right)^{2k-1} = 2\sum_{k=1}^{\infty} \frac{1}{(2k-1)m^{2k-1}} \sum_{i=1}^{m-1} i^{2k-1}    (5.10.23)
What is the red term, \sum_{i=1}^{m-1} i^{2k-1}? It is the sum of powers of integers that Bernoulli computed some
years before! Using Eq. (2.25.3), we can thus compute it:

\sum_{i=1}^{m-1} i^{2k-1} = \frac{(m-1)^{2k}}{2k} + \frac{(m-1)^{2k-1}}{2} + (2k-1)B_2\frac{(m-1)^{2k-2}}{2} + \cdots    (5.10.24)
Setting t = (m-1)/m, and substituting Eq. (5.10.24) into Eq. (5.10.23), we get \ln A - \ln 2 as

2(m-1)\sum_{k=1}^{\infty} \frac{t^{2k-1}}{(2k-1)2k} + \sum_{k=1}^{\infty} \frac{t^{2k-1}}{2k-1} + \frac{B_2}{m}\sum_{k=1}^{\infty} t^{2k-2} + \cdots    (5.10.25)
Now, we have to compute the three sums in the above expression. The second one is easy; it is
just Eq. (5.10.22):

\sum_{k=1}^{\infty} \frac{t^{2k-1}}{2k-1} = \frac{1}{2}\ln\frac{1+t}{1-t} = \frac{1}{2}\ln(2m-1)    (5.10.26)
The first one is very similar to Eq. (5.10.22). In fact, if we integrate both sides of that equation
we will meet the first sum:

\int \ln\frac{1+x}{1-x}\,dx = 2\sum_{k=1}^{\infty} \int \frac{x^{2k-1}}{2k-1}\,dx = 2\sum_{k=1}^{\infty} \frac{x^{2k}}{(2k-1)(2k)}    (5.10.27)
For the integral \int \ln\frac{1+x}{1-x}\,dx I have used the Python package SymPy, and with that integral
computed, the above equation becomes:

x\ln\frac{1+x}{1-x} + \ln(1-x^2) = 2\sum_{k=1}^{\infty} \frac{x^{2k}}{(2k-1)(2k)}    (5.10.28)
Dividing this by x, we get (after also replacing x by t, and then t by (m-1)/m)

2\sum_{k=1}^{\infty} \frac{t^{2k-1}}{(2k-1)(2k)} = \ln\frac{1+t}{1-t} + t^{-1}\ln(1-t^2)
= \ln(2m-1) + \frac{m}{m-1}\ln\frac{2m-1}{m^2}    (5.10.29)
The third sum involves a geometric series, and the corresponding term can be shown to converge
to 1/12 when m approaches infinity. Similarly, the next sum in Eq. (5.10.23) gives 1/360, and so
on. With all these results we can write \ln A as

\ln A \approx \left(2m - \tfrac{1}{2}\right)\ln(2m-1) - 2m\ln m + \ln 2 + \frac{1}{12} - \frac{1}{360} + \frac{1}{1260} - \frac{1}{1680} + \cdots    (5.10.30)
The logarithm of the last ratio equals (using the approximation \ln(1+x) \approx x for x near 0)

\ln\left(1 - \frac{i}{nq}\right) - \ln\left(1 + \frac{i}{np}\right) \approx -\frac{i}{nq} - \frac{i}{np} = -\frac{i}{npq}    (5.10.36)
For l ≥ 1 and k_max + l ≤ n, we can compute the term at distance l from the middle,
i.e., \ln[b_n(k_{\max}+l)/b_n(k_{\max})], using Eq. (5.10.36), as follows:

\ln\frac{b_n(k_{\max}+l)}{b_n(k_{\max})} = \ln\left[\frac{b_n(k_{\max}+1)}{b_n(k_{\max})}\cdot\frac{b_n(k_{\max}+2)}{b_n(k_{\max}+1)}\cdots\frac{b_n(k_{\max}+l)}{b_n(k_{\max}+l-1)}\right]
= \ln\frac{b_n(k_{\max}+1)}{b_n(k_{\max})} + \ln\frac{b_n(k_{\max}+2)}{b_n(k_{\max}+1)} + \cdots + \ln\frac{b_n(k_{\max}+l)}{b_n(k_{\max}+l-1)}
\approx -\frac{1+2+\cdots+l}{npq} \approx -\frac{l^2}{2npq} \quad (\text{sum of the first } l \text{ integers} = l(l+1)/2)    (5.10.37)
Thus, b_n(k_{\max}+l) is exponentially proportional to b_n(k_{\max}):

b_n(k_{\max}+l) \approx b_n(k_{\max})\exp\left(-\frac{l^2}{2npq}\right)    (5.10.38)

where \exp(x) = e^x is the exponential function. Using Eq. (5.10.34), which is b_n(k_{\max}) for the
case p = q = 1/2 and n even, we get de Moivre's approximation to the symmetric binomial
distribution:

b(n/2+l;\, n,\, 1/2) := b_n(n/2+l) \approx \frac{2}{\sqrt{2\pi n}}\exp\left(-\frac{2l^2}{n}\right)    (5.10.39)

Remarkably, two famous numbers in mathematics, π = 3.1415... and e, appear in this formula!
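How good is this approximation? A quick comparison for n = 100 (a Python sketch) shows the two sides agree to within about one percent, even some distance from the peak:

```python
from math import comb, exp, pi, sqrt

n = 100
exact = lambda l: comb(n, n // 2 + l) * 0.5**n                 # b_n(n/2 + l)
approx = lambda l: 2 / sqrt(2 * pi * n) * exp(-2 * l**2 / n)   # Eq. (5.10.39)

for l in (0, 5, 10):
    print(l, exact(l), approx(l))
```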
Even though de Moivre did not draw his approximation, he mentioned the curve in his "The
Doctrine of Chances" in 1738, when he was 71 years old. He even computed the two inflection
points of the curve. This is probably the first time the normal curve appears.

P(n/2 \le X \le n/2+d) \approx \sum_{l=0}^{d} \frac{2}{\sqrt{2\pi n}}\exp\left(-\frac{2l^2}{n}\right) \approx \frac{2}{\sqrt{2\pi n}}\int_0^d \exp\left(-\frac{2x^2}{n}\right)dx    (5.10.40)
Thus, theoretically this works only for small i.
We use e^x when the term in the exponent is short and \exp(...) when that term is long or complex.
Note that he approximated the sum in his approximate binomial distribution by an integral. Thus,
de Moivre did not think of a probability distribution function.
And from that, it is easy to have, with a factor of two and a change of variable (x = \sqrt{n}\,y):

P(|X - n/2| \le d) \approx \frac{4}{\sqrt{2\pi}}\int_0^{d/\sqrt{n}} \exp(-2y^2)\,dy    (5.10.41)
To evaluate the integral, de Moivre replaced the exponential function by its series and did a
term-by-term integration. This is what Newton and mathematicians in the 18th century did. We
also discussed it in Section 4.15. He obtained a result of 0.682688 for d/\sqrt{n} = 1/2. As we're not
in a calculus class, we can use a library to do this integral for us, see Listing 5.3. The result is
0.682689. Note that what de Moivre computed shows that 68% of the data is within 1 standard
deviation of the mean§.
Listing 5.3: Example of using the QuadGK package for numerical integration.
using QuadGK
integral, err = quadgk(x -> (4/sqrt(2*pi))*exp(-2*x^2), 0, 0.5, rtol=1e-8)
Continuing with d/\sqrt{n} = 1 and d/\sqrt{n} = 3/2, he
obtained what is now referred to as the 68-95-99.7
rule (see figure next‘). Despite de Moivre's scientific
eminence, his main income was as a private tutor of
mathematics and he died in poverty. Desperate to get
a chair in Cambridge he begged Johann Bernoulli to
persuade Leibniz to write a supporting letter for him.
Bernoulli did so in 1710 explaining to Leibniz that de
Moivre was living a miserable life of poverty. Indeed
Leibniz had met de Moivre when he had been in London
in 1673 and tried to obtain a professorship for de Moivre
in Germany, but with no success. Even his influential English friends like Newton and Halley
could not help him obtain a university post.
He was unmarried, and spent his closing years in peaceful study. De Moivre, like Cardano,
is famed for predicting the day of his own death. He found that he was sleeping 15 minutes
longer each night and summing the arithmetic progression, calculated that he would die on the
day that he slept for 24 hours. He was right!
Negative binomial distribution. Suppose that we have a coin with P(H) = p. We toss the
coin until we observe m heads, where m ∈ N. We define X as the total number of coin tosses in
this experiment. Then X is said to have a Pascal distribution with parameters m and p. We write
§ We shall know shortly that the standard deviation is 0.5\sqrt{n}.
‘ Check Listing B.18 for the code. This is the well-known bell-shaped normal curve. It is symmetric about zero:
the part of the curve to the right of zero is a mirror image of the part to the left.
X ~ Pascal(m, p). Note that Pascal(1, p) = Geometric(p), and that by our definition
the range of X is given by R_X = {m, m+1, m+2, ...}. This is because we need to toss at
least m times to get m heads.
Our goal is to find P_X(k) for k ∈ R_X. It's easier to start with a concrete case, say m = 3.
What is P_X(4)? In other words, what is the probability that we have to toss the coin 4 times to
get 3 heads? The fact that we had to toss the coin 4 times indicates that in the first three tosses
we only got 2 heads, and in the final toss (the fourth one) we got a head. This observation is the
key to the solution of this problem. Thus,

P_X(4) = \binom{3}{2} p^2 (1-p) \times p
And with that, it is just one small step to get the general result:
P_X(k) = \binom{k-1}{m-1} p^m (1-p)^{k-m}, \quad k = m, m+1, \ldots    (5.10.42)
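Eq. (5.10.42) can be checked numerically: with m = 1 it reduces to the geometric PMF, and its masses over the range {m, m+1, ...} sum to one (a Python sketch):

```python
from math import comb

def pascal_pmf(k, m, p):
    """Eq. (5.10.42): C(k-1, m-1) p^m (1-p)^(k-m), for k = m, m+1, ..."""
    return comb(k - 1, m - 1) * p**m * (1 - p) ** (k - m)

# Pascal(1, p) is Geometric(p): both give P_X(4) = (1-p)^3 p
print(pascal_pmf(4, 1, 0.3), (1 - 0.3) ** 3 * 0.3)

# The masses sum to one (partial sum; the tail is negligible)
total = sum(pascal_pmf(k, 3, 0.5) for k in range(3, 100))
print(abs(total - 1.0) < 1e-9)  # True
```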
Binomial distribution versus Pascal distribution. A binomial random variable counts the
number of successes in a fixed number of independent trials. On the other hand, a negative
binomial random variable counts the number of independent trials needed to achieve a fixed
number of successes.
Poisson distribution. Herein, we're going to present an approximation to the binomial distri-
bution when n is large, p is small and np is finite. Let's introduce a new symbol λ such that
np = λ. We start with b_n(0), and taking advantage of the fact that n is large, we will use some
approximations:

b_n(0) = (1-p)^n = \left(1 - \frac{\lambda}{n}\right)^n    (5.10.43)

Now, taking the natural logarithm of both sides of the above equation, we get

\ln b_n(0) = n \ln\left(1 - \frac{\lambda}{n}\right)    (5.10.44)

Now, we use an approximation for \ln(1-x); check Taylor's series in Section 4.14.8 if this is
not clear:

\ln(1-x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4} - \cdots

With that approximation, we can now write \ln b_n(0) as (with x = \lambda/n)

\ln b_n(0) = -\lambda - \frac{\lambda^2}{2n} - \frac{\lambda^3}{3n^2} - \cdots    (5.10.45)
Phu Nguyen, Monash University © Draft version
Chapter 5. Probability 473
For very large n's, we get a good approximation of b_n(0) by omitting the terms with n in the
denominator:

\ln b_n(0) \approx -\lambda \implies b_n(0) \approx e^{-\lambda}    (5.10.46)

And of course, we use the recursive formula, Eq. (5.10.14), to get the next term b_n(1) and so
on. But first, we also need an approximation (when n is large) for the ratio b_n(k)/b_n(k-1); using
Eq. (5.10.13) with p = \lambda/n and q = 1 - p:

\frac{b_n(k)}{b_n(k-1)} = \frac{(n-k+1)p}{kq} \approx \frac{\lambda}{k}

With it, we obtain

b_n(k) \approx \frac{\lambda^k e^{-\lambda}}{k!}
And this is now known as the Poisson distribution, named after the French mathematician Siméon
Denis Poisson (1781 – 1840). A random variable X is said to be a Poisson random variable with
parameter λ, shown as X ~ Poisson(λ), if its range is R_X = {0, 1, 2, 3, ...}, and its PMF is
given by

P_X(k) = \frac{\lambda^k e^{-\lambda}}{k!} \quad \text{for } k \in R_X    (5.10.47)
What should we do next after we have discovered the Poisson approximation to the binomial
distribution? We should at least do two things:
1. Check that Eq. (5.10.47) indeed defines a PMF.
2. Justify the need of the Poisson approximation. We're going to do this with one example
next.
Suppose you're trying to get something to happen in a video game that is rare; maybe it
happens 1% of the time you do something. You'd like to know how likely it is to happen at least
Actually we need to check whether \sum_{k=0}^{\infty} P_X(k) = 1, or \sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} = 1.
once if you try, say, 100 times. Here we have p = 1/100, n = 100. So the binomial distribution
gives us an exact answer, namely

P = 1 - \left(1 - \frac{1}{100}\right)^{100}

The result is 0.63396, with a calculator of course. Using the Poisson approximation with λ =
np = 1, that probability is (easier)

P = 1 - e^{-1} = 0.632120
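The two numbers above are easy to reproduce, and the comparison shows how close the approximation is (a Python sketch):

```python
from math import exp

n, p = 100, 1 / 100
lam = n * p                       # λ = 1

exact = 1 - (1 - p) ** n          # the binomial answer
poisson = 1 - exp(-lam)           # the Poisson approximation
print(exact, poisson)             # ≈ 0.6340 vs ≈ 0.6321
```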
F_X(x) = \begin{cases} 0, & \text{if } x < 0 \\ \frac{1}{4}, & \text{if } 0 \le x < 1 \\ \frac{3}{4}, & \text{if } 1 \le x < 2 \\ 1, & \text{if } x \ge 2 \end{cases}
Now that we have seen a CDF, it's time to talk about its properties. By looking at the graph
of this CDF, we can tell that

0 \le F_X(x) \le 1, \quad \lim_{x\to-\infty} F_X(x) = 0, \quad \lim_{x\to+\infty} F_X(x) = 1, \quad F_X \text{ is non-decreasing}

The first property is just a consequence of the second and third properties. The second property
is just another way of saying that the probability of X being smaller than -\infty is zero. Similarly, the
third property is the fact that the probability that something in the sample space occurs is one, as
any X must be smaller than infinity! About the last property, as we're adding up probabilities,
the CDF must be non-decreasing. But we can prove it rigorously using the following result: for
a, b ∈ R such that a < b,

P(a < X \le b) = F_X(b) - F_X(a)

of which a proof is given in Fig. 5.12. As probability is always non-negative, the above results
in F_X(b) - F_X(a) \ge 0, or F_X(b) \ge F_X(a).
Thus, per game, we will lose $5.26. What does this number mean? Obviously for each game, we
either win $100 or lose $100. But in the long run, when we have played many games, on average
we would have lost $5.26 per game.
We can see that this average amount can be computed by adding the product of the probability
of winning $100 and $100 to the product of the probability of losing $100 and -$100:

\frac{9}{19}(\$100) + \frac{10}{19}(-\$100) = -\$5.26
Let's consider another example of rolling a die N times. Assume that n_1 times we observe
1, n_2 times we observe 2, n_3 times we observe 3, and so on. Now we compute the average of all
the numbers observed:

\bar{x} = \frac{\overbrace{(1+1+\cdots+1)}^{n_1} + \overbrace{(2+2+\cdots+2)}^{n_2} + \cdots + \overbrace{(6+6+\cdots+6)}^{n_6}}{N} = \frac{(1)(n_1) + (2)(n_2) + \cdots + (6)(n_6)}{N}

Now, assume that N is large; then n_i/N \approx 1/6, which is the probability that we observe i, for
i = 1, 2, \ldots, 6. Thus,

\bar{x} = (1)\frac{n_1}{N} + (2)\frac{n_2}{N} + \cdots + (6)\frac{n_6}{N} = \frac{1}{6}(1+2+3+4+5+6) = \frac{21}{6} = \frac{7}{2} = 3.5

Thus the average value of rolling a die is 7/2.
Notice that in both examples the average is the sum of the products of the values of the random
variable and their probabilities. This leads to the following definition of the expected value.
Definition 5.10.3
If X is a discrete random variable with values {x_1, x_2, ..., x_n} and PMF P_X(x_k), then
the expected value of X, denoted by E[X], is defined as:

E[X] = x_1 P_X(x_1) + x_2 P_X(x_2) + \cdots = \sum_k x_k P_X(x_k)    (5.10.49)
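Eq. (5.10.49) translates directly into code (a Python sketch, using the PMF of the coin tossed twice from Example 5.13):

```python
def expected_value(pmf):
    """Eq. (5.10.49): E[X] = sum of x * P_X(x) over the range of X."""
    return sum(x * px for x, px in pmf.items())

# X = number of heads in two fair tosses
pmf = {0: 0.25, 1: 0.5, 2: 0.25}
print(expected_value(pmf))  # 1.0
```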
Blaise Pascal was the third of Étienne Pascal’s children. Pascal’s mother
died when he was only three years old. Pascal’s father had unorthodox
educational views and decided to teach his son himself. Étienne Pascal
decided that Blaise was not to study mathematics before the age of 15
and all mathematics texts were removed from their house. His curiosity
aroused by this, Pascal started to work on geometry himself at the age
of 12. He discovered that the sum of the angles of a triangle is two
right angles and, when his father found out, he relented and allowed
Blaise a copy of Euclid. About 1647 Pascal began a series of experiments on atmospheric
pressure. By 1647 he had proved to his satisfaction that a vacuum existed. Rene Descartes
visited Pascal on 23 September. His visit only lasted two days and the two argued about
the vacuum which Descartes did not believe in. Descartes wrote, rather cruelly, in a letter
to Huygens after this visit that Pascal ...has too much vacuum in his head.
Now, we're deriving another formula for the expected value of X, but in terms of the proba-
bility of the members of the sample space:

E[X] = \sum_{s \in S} X(s)\,p(s)    (5.10.50)
We shall prove the important and useful result that the expected value of a sum of random
variables is equal to the sum of their expectations (i.e., E[X + Y] = E[X] + E[Y] for two RVs
X and Y) using Eq. (5.10.50).
Proof of Eq. (5.10.50). Let us denote by S_i the event that X(s) = x_i, for i = 1, 2, \ldots That is,

S_i = \{s : X(s) = x_i\}

For example, in tossing two dice, let X be the total of the two faces; then x_1 = 2 and
x_2 = 3, with S_2 = \{(1,2), (2,1)\} being the outcomes that lead to x_2. Moreover, let p(s) = P(s)
be the probability that s is the outcome of the experiment. The proof then starts with the usual
definition of E[X] and replaces X = x_i by S_i; Fig. 5.8 can be helpful to see the connection
between s, S and X:
E[X] = \sum_i x_i P_X(x_i) = \sum_i x_i P(X = x_i) = \sum_i x_i P(S_i)

We continue by replacing P(S_i) with \sum_{s \in S_i} p(s) (that is, using the third axiom),

E[X] = \sum_i x_i \sum_{s \in S_i} p(s) = \sum_i \sum_{s \in S_i} x_i\,p(s) = \sum_i \sum_{s \in S_i} X(s)\,p(s)

And finally, because S_1, S_2, \ldots are disjoint (mutually exclusive), \sum_i \sum_{s \in S_i} is just \sum_{s \in S}, thus

E[X] = \sum_{s \in S} X(s)\,p(s)
Figure 5.13: Pictorial presentation of sample space S, RV X, function of a RV Y = g(X), and its
PMF.
Example 5.15
Let X be a RV that takes on the values -1, 0, 1, with P(X = 0) = 0.5, and let Y = X^2. Then

P(Y = 0) = P(X = 0) = 0.5
P(Y = 1) = P(X = -1) + P(X = 1) = 0.5
But there is a faster way of doing this. The expected value of g(X), E[g(X)], is simply given
by

E[g(X)] = \sum_i g(x_i) P_X(x_i)    (5.10.52)

And this result is known as the law of the unconscious statistician, or LOTUS.
Before proving this result, let's check that it is in accord with the result obtained directly
using the definition of E[X^2] for the above example. Applying Eq. (5.10.52), we get

E[X^2] = (-1)^2 P_X(-1) + (0)^2 P_X(0) + (1)^2 P_X(1)

which is the same as the direct result. To see why the same result was obtained, we can massage
the above expression a bit; the resulting expression is exactly identical to Eq. (5.10.51). The
proof of Eq. (5.10.52) proceeds similarly.
Proof of Eq. (5.10.52). We start with \sum_i g(x_i) P_X(x_i), then group terms with the same g(x_i),
and then transform it to \sum_j y_j P_Y(y_j), which is E[g(X)], with y_j being all the (different) values of
Y:

\sum_i g(x_i) P_X(x_i) = \sum_j \sum_{i:\,g(x_i)=y_j} g(x_i) P_X(x_i)  \quad (\text{grouping step})
= \sum_j \sum_{i:\,g(x_i)=y_j} y_j P_X(x_i)  \quad (\text{replacing } g(x_i) = y_j)
= \sum_j y_j \sum_{i:\,g(x_i)=y_j} P_X(x_i)
= \sum_j y_j P_Y(y_j)
The notation \sum_{i:\,g(x_i)=y_j} g(x_i) P_X(x_i) means that the sum is over i, but only for those i such that
g(x_i) = y_j; this is indicated by the subscript i: g(x_i) = y_j under the summation sign.
Expected value of a sum of two random variables. Let's roll two dice and denote by S the sum
of the faces. If we denote by X the face of the first die and by Y the face of the second die, then
S = X + Y. Obviously S is a discrete RV, and we can compute its PMF. Now, we can compute
P(S = x_j) for x_j \in \{2, 3, \ldots, 12\}, and then use Eq. (5.10.49) to compute the expected value:
E[S] = 2\cdot\frac{1}{36} + 3\cdot\frac{2}{36} + 4\cdot\frac{3}{36} + 5\cdot\frac{4}{36} + 6\cdot\frac{5}{36} + 7\cdot\frac{6}{36} + 8\cdot\frac{5}{36} + 9\cdot\frac{4}{36} + 10\cdot\frac{3}{36} + 11\cdot\frac{2}{36} + 12\cdot\frac{1}{36} = \frac{252}{36} = 7
You might be asking what is special about this problem. Is it just another application of the
concept of expected value? Hold on. Look at the result of 7 again. Rolling one die, the
expected value is 7/2; rolling two dice, the expected value is 7. We should suspect
that
E[X + Y] = E[X] + E[Y]    (5.10.53)

which says that the expected value of the sum of random variables is equal to the sum of their
individual expected values, regardless of whether they are independent. In calculus, the
derivative of the sum of two functions is the sum of the derivatives; here in the theory of
probability, we see the same rule.
Proof of Eq. (5.10.53). Let X and Y be two random variables and Z = X + Y. We now use
Eq. (5.10.50) for the proof:

E[Z] = \sum_s Z(s)\,p(s) = \sum_s [X(s) + Y(s)]\,p(s) = \sum_s X(s)\,p(s) + \sum_s Y(s)\,p(s) = E[X] + E[Y]
Check the paragraph before definition 5.10.3 if this was not clear.
This proof also reveals that the property holds not only for two RVs but for any number of
RVs. Thus, for n ∈ N, we can write

E[X_1 + X_2 + \cdots + X_n] = E[X_1] + E[X_2] + \cdots + E[X_n]
Why square? Squaring always gives a non-negative value, so the positive and negative deviations
from the mean cannot cancel each other out and the variance of a non-constant RV will not be
zero. A natural question is: the absolute difference also has this property, so why can't we define
the variance as E[|X - \mu|]? Yes, you can! The thing is that the definition in Eq. (5.10.55) prevails
First, from the fact that E[nX] = nE[X], we generalize to E[aX] = aE[X]. We have seen mathematicians
do this many times (e.g. check Section 9.2).
‘
Of course we prefer working with power functions, and (a - b)^2 is the one of lowest power.
You're encouraged to think of an example to see this.
Figure 5.14: Three distributions with the same expected value but different variances.
because it is mathematically easier to work with x^2 than with |x|. Again, just think
about differentiating these two functions and you will see what we mean by that statement.
Note that Var(X) has a different unit than X. For example, if X is measured in meters then
Var(X) is in meters squared. To remedy this, another measure, called the standard deviation
and usually denoted by \sigma_X, is defined as simply the square root of the variance.
Instead of using the definition of the variance directly to compute it, we can use LOTUS to
get a nicer formula for it (recall that \mu = \sum_x x P_X(x)):

Var(X) = E[(X-\mu)^2] = \sum_x (x-\mu)^2 P_X(x)
= \sum_x (x^2 - 2\mu x + \mu^2) P_X(x)
= \sum_x x^2 P_X(x) - 2\mu \sum_x x P_X(x) + \mu^2 \sum_x P_X(x)    (5.10.56)
= E[X^2] - 2\mu^2 + \mu^2 = E[X^2] - (E[X])^2
This formula is useful as we know E[X] (and thus its square) and we know how to compute
E[X^2] using LOTUS. If you want to translate this formula to English, it is: the variance is
the mean of the square minus the square of the mean. Eventually, nothing new is needed; it is
just a combination of all the things we know!
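The definition and the shortcut formula of Eq. (5.10.56) indeed give the same number (a Python sketch, again with the two-coin-toss PMF):

```python
pmf = {0: 0.25, 1: 0.5, 2: 0.25}
mu = sum(x * px for x, px in pmf.items())                      # E[X] = 1.0

var_def = sum((x - mu) ** 2 * px for x, px in pmf.items())     # E[(X - mu)^2]
var_short = sum(x * x * px for x, px in pmf.items()) - mu**2   # E[X^2] - (E[X])^2
print(var_def, var_short)  # 0.5 0.5
```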
Let's now compute Var(aX + b). Why? To see whether the variance is a linear operator or not.
Denoting Y = aX + b, we have \mu_Y = a\mu + b, which is the expected value of Y. Now, we can
write

Var(Y) = E[(Y - \mu_Y)^2] = E[(aX + b - a\mu - b)^2] = E[a^2(X - \mu)^2] = a^2 E[(X - \mu)^2] = a^2 Var(X)    (5.10.57)

Thus, we have

Var(aX + b) = a^2 Var(X) \ne a\,Var(X) + b    (5.10.58)
What else does the above equation tell us? Let’s consider a D 1, that is Y D X C b, then
Var.Y / D Var.X/. Does this make sense? Yes, noting that Y D X C b is a translation of X
(Section 4.2.2), and a translation does not distort the object (or the function), thus the spread of
X is preserved.
Sample variance. Herein we shall meet some terminologies in statistics. For example, if we
want to find out how much the average Australian earns, we do not want to survey everyone in the
population (too many people), so we would choose a small number of people in the population.
For example, you might select 10,000 people. And that is called a sample .
OK. Suppose now that we already have a sample with n observations (or measurements)
x_1, x_2, \ldots, x_n. The question now is: what is the variance of this sample? You might be surprised
to see the following§

S^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad \bar{x} = \frac{\sum_i x_i}{n}

Why n - 1 and not n? In statistics, this is called Bessel's correction, named after Friedrich Bessel.
The idea is that we need S^2 to match the population variance \sigma^2, i.e., to have an unbiased estimator
of \sigma^2. As shown below, with n in the denominator we cannot achieve this, and that's why
n - 1 is used‘.
Proof. First, we have the following identity (some intermediate steps were skipped)

\sum_{i=1}^{n} (x_i - \bar{x})^2 = \sum_{i=1}^{n} (x_i^2 - 2x_i\bar{x} + \bar{x}^2) = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2

Now, we compute the expected value of the LHS of the above equation:

E\left[\sum_{i=1}^{n} \big((x_i - \mu) - (\bar{x} - \mu)\big)^2\right] = \sum_{i=1}^{n} E[(x_i - \mu)^2] - n E[(\bar{x} - \mu)^2]
= \sum_{i=1}^{n} Var(x_i) - n\,Var(\bar{x})    (5.10.59)
Why 10,000, you’re asking? It is not easy to answer that question. That’s why a whole field called design of
experiments was developed, just to have unbiased samples. This is not discussed here.
§
When working with samples, we do not know the probabilities p_i, and thus we cannot use the definitions of
mean and expected value directly. Instead we just include each output x as often as it occurs. We get the empirical
mean instead of the expected mean. Similarly we get the empirical variance.
‘
Another explanation that I found is: one degree of freedom was accounted for in the sample mean. But I do
not understand this.
Thus,

E[S^2] = E\left[\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2\right] = \frac{1}{n-1}\left[\sum_i Var(x_i) - n\,Var(\bar{x})\right] \quad (\text{used Eq. (5.10.59)})

Note that as x_1, x_2, \ldots, x_n are a random sample from a distribution with variance \sigma^2, we have
(check Eq. (5.12.8) for the second result)

Var(x_i) = \sigma^2, \qquad Var(\bar{x}) = \frac{\sigma^2}{n}

Substituting these into E[S^2], we obtain

E[S^2] = \frac{1}{n-1}\left[\sum_{i=1}^{n}\sigma^2 - n\frac{\sigma^2}{n}\right] = \frac{1}{n-1}(n\sigma^2 - \sigma^2) = \sigma^2
Thus the expected value of the sample variance coincides with the population variance, which
justifies Bessel's correction.
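The unbiasedness can be seen without any randomness by averaging S^2 over every possible sample. Below, all 36 ordered samples of size n = 2 from a fair die are enumerated with exact rational arithmetic, and the average of S^2 equals the population variance 35/12 exactly (a Python sketch):

```python
from fractions import Fraction
from itertools import product

pop = range(1, 7)
mu = Fraction(sum(pop), 6)                            # 7/2
sigma2 = sum((x - mu) ** 2 for x in pop) / 6          # population variance 35/12

n = 2
s2_total = Fraction(0)
for sample in product(pop, repeat=n):                 # all 36 ordered samples
    xbar = Fraction(sum(sample), n)
    s2_total += sum((x - xbar) ** 2 for x in sample) / (n - 1)   # Bessel's n-1

print(s2_total / 6**n == sigma2)  # True: E[S^2] equals sigma^2 exactly
```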
Distribution     Description                                      E[X]    Var(X)
Bernoulli(p)     one coin toss, X = 1 if heads                    p       pq
Binomial(n, p)   n coin tosses, X is # of heads observed          np      npq
Geometric(p)     X is # of coin tosses until a H is observed      1/p     q/p^2
Pascal(m, p)     X is # of coin tosses until m heads observed     m/p     mq/p^2
Poisson(λ)                                                        λ       λ
How were they computed? Of course using the definitions of the expected value and variance,
massaging the algebraic expressions until the simplest form is achieved. I am going to give one
example.
Example 5.16
Determine the expected value for the geometric distribution with the PMF given by q^{k-1}p for
k = 1, 2, \ldots (where q = 1 - p). Using Eq. (5.10.49), we can straightforwardly write E[X] as

E[X] = \sum_k x_k P_X(x_k) = \sum_{k=1}^{\infty} k q^{k-1} p = p \sum_{k=1}^{\infty} k q^{k-1}

Now, the trouble is the red sum. To attack it, we need to use the geometric series:

\sum_{k=0}^{\infty} x^k = \frac{1}{1-x} \implies \frac{d}{dx}\left(\sum_{k=0}^{\infty} x^k\right) = \sum_{k=1}^{\infty} k x^{k-1} = \frac{1}{(1-x)^2}

Thus, with x = q,

E[X] = \frac{p}{(1-q)^2} = \frac{p}{p^2} = \frac{1}{p}
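A numerical check of the result E[X] = 1/p (a Python sketch, with p = 0.2 so that 1/p = 5):

```python
p = 0.2
# Partial sum of Eq. (5.10.49) for the geometric PMF; the tail is negligible.
EX = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 500))
print(EX)  # ≈ 5.0 = 1/p
```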
 #      Father    Son
 1      65.00     59.80
 2      63.30     63.20
 3      65.00     63.30
 ...    ...       ...
 1077   70.70     69.30
 1078   70.00     67.00
data observations (in the case of Pearson's data, it is 1078), and for bin j, its frequency f_j is given
by

f_j = \frac{1}{n}\sum_{i=1}^{n} 1\{x_i \in B_j\}, \quad \text{for } j = 1, 2, \ldots, L    (5.11.1)

where 1\{x_i \in B_j\} returns 1 if x_i is in bin B_j and 0 otherwise.
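Eq. (5.11.1) is simple to implement; here is a small Python sketch (the data values below are a toy handful, not Pearson's full set):

```python
def frequencies(data, edges):
    """f_j of Eq. (5.11.1); bin B_j is the half-open interval [edges[j], edges[j+1])."""
    n = len(data)
    return [sum(1 for x in data if lo <= x < hi) / n
            for lo, hi in zip(edges[:-1], edges[1:])]

heights = [65.0, 63.3, 65.0, 70.7, 70.0]
f = frequencies(heights, [60, 65, 70, 75])
print(f)  # [0.2, 0.4, 0.4]; the frequencies sum to one
```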
The final step is to plot the bins and fj . A bar plot where the X -axis represents the bin
ranges while the Y -axis gives information about frequency is used for this. Fig. 5.15a presents a
histogram for the fathers’ heights .
Figure 5.15: Fathers’ height: probability histogram and cumulative distribution function.
It is useful to assume that the CDF of a continuous random variable is a continuous function;
See Listing C.1 for the code. I used Julia packages to compute and plot the histogram. You're encouraged to
code Eq. (5.11.1) if you want to learn programming.
see Fig. 5.15b to see why. Then, recall from Eq. (5.10.48) that

P(a < X \le b) = F_X(b) - F_X(a)
And from the fundamental theorem of calculus (Chapter 4), we know that

F_X(b) - F_X(a) = \int_a^b f_X(x)\,dx, \quad \text{where } f_X(x) = \frac{dF_X(x)}{dx}    (5.11.2)

Thus, we can find the probability that X falls within an interval [a, b] in terms of the new function
f_X(x):

P(a < X \le b) = \int_a^b f_X(x)\,dx, \quad \text{or} \quad P(a \le X \le b) = \int_a^b f_X(x)\,dx    (5.11.3)
The function f_X(x) is called the probability density function or PDF. Why that name? This is
because f_X(x) = dF_X(x)/dx, which is probability per unit length. Note that for a continuous RV,
writing P(a < X \le b) or P(a \le X \le b) is the same because P(X = a) = 0. Actually we have
seen something similar (i.e., probability related to an integral) in Eq. (5.10.41).
The probability density function satisfies the following two properties (which are nothing but
the continuous version of Eq. (5.10.7)):

f_X(x) \ge 0 \quad \forall x \in \mathbb{R}, \qquad \int_{-\infty}^{\infty} f_X(x)\,dx = 1    (5.11.4)

And from that we have the continuous counterparts of the expected value and variance, where
the sum is replaced by an integral and the PDF replaces the PMF:

E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx, \qquad Var(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x)\,dx    (5.11.5)
Standard normal distribution. De Moivre derived an approximation to the binomial distri-
bution, and it involves the exponential function of the form e^{-x^2}. Thus, there is a need to evaluate
the following integral (see Eq. (5.10.40)):

I = \int_{-\infty}^{\infty} e^{-x^2}\,dx

Unfortunately it is impossible to find an antiderivative of e^{-x^2}. Note that if the integral was
\int 2x e^{-x^2}\,dx, then life would be easier. The key point is the factor x in front of e^{-x^2}. If we go to
2D, we can make this factor appear. Let's compute I^2 instead:

I^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \iint e^{-(x^2+y^2)}\,dx\,dy

The next step is to switch to polar coordinates, in which dx\,dy becomes r\,dr\,d\theta (see Sec-
tion 7.8.2), and voilà:

I^2 = \int_0^{2\pi}\left(\int_0^{\infty} e^{-r^2} r\,dr\right) d\theta = \pi \implies I = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}
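The value √π is easy to confirm numerically, for example with a midpoint-rule sum over a wide interval (a Python sketch):

```python
from math import exp, pi, sqrt

a, b, N = -10.0, 10.0, 200_000          # e^{-x^2} is negligible beyond |x| = 10
h = (b - a) / N
I = h * sum(exp(-(a + (i + 0.5) * h) ** 2) for i in range(N))
print(I, sqrt(pi))   # the two agree to many digits
```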
With that, we can define what is called a standard normal variable as follows. A continuous
random variable Z is said to be a standard normal (or standard Gaussian) random variable,
denoted by Z ~ N(0, 1), if its PDF is given by

Z \sim N(0,1): \quad f_Z(z) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)    (5.11.7)

Why this form? Why not (1/\sqrt{\pi})\,e^{-z^2}? That one is also a legitimate PDF; actually it is
the form that Gauss used. However, the one in Eq. (5.11.7) prevails simply because with it the
variance is one (this is to be shown shortly), which is a nice number.
Yes, sometimes by making a problem harder we can find the solution to the simpler problem.
The factor 1/\sqrt{2\pi} before the exponential function is required because of Eq. (5.11.4).
$$F_Z(z) = P(Z \le z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z} \exp\left(-\frac{u^2}{2}\right) du =: \Phi(z) \tag{5.11.8}$$
The integral in Eq. (5.11.8) does not have a closed-form solution. Nevertheless, because of the importance of the normal distribution, the values of this integral have been tabulated; see Table 5.8 for such a table. Nowadays, it is available in calculators and in many programming languages. Moreover, mathematicians introduced the short notation $\Phi$ to replace the lengthy integral expression. Fig. 5.16 plots both $f_Z(z)$ and $\Phi(z)$.
Figure 5.16: Plot of the standard normal curve, of which the area underneath from $-\infty$ to $z$ is the CDF, and plot of the CDF. As the total area under the normal curve is one, half of the area is 0.5, thus $\Phi(0) = 1/2$. Another property: $\Phi(-z) = 1 - \Phi(z)$. This property is useful as we only need to make a table of $\Phi(z)$ for $z \ge 0$. Why do we have this property? Plot the normal curve and mark two points $z$ and $-z$ on the horizontal axis. Then, $1 - \Phi(z)$ is the area under the curve from $z$ to $\infty$ while $\Phi(-z)$ is the area from $-\infty$ to $-z$. The normal curve is symmetric, thus the two areas must be equal.
Now, using Eq. (5.11.5) we’re going to find the expected value and the variance of N.0; 1/.
This means that there is no antiderivative expressible in elementary functions. The situation is similar to the fact that there is no formula for the roots of a polynomial of high degree, e.g. five. This was proved by the French mathematician Joseph Liouville (1809 – 1882).
Why do we need this table? It is useful for inverse problems where we need to find $z^*$ such that $\Phi(z^*) = a$ where $a$ is a given value. This table was generated automatically (even the LaTeX code to typeset it) using a Julia script. For me it was simply a coding exercise for fun.
Table 5.8: values of the standard normal CDF $\Phi(z)$; the row gives the first decimal of $z$, the column the second.

 z    .00     .01     .02     .03     .04     .05     .06     .07     .08
0.0  0.5000  0.5040  0.5080  0.5120  0.5160  0.5199  0.5239  0.5279  0.5319
0.1  0.5398  0.5438  0.5478  0.5517  0.5557  0.5596  0.5636  0.5675  0.5714
0.2  0.5793  0.5832  0.5871  0.5910  0.5948  0.5987  0.6026  0.6064  0.6103
 ⋮
2.1  0.9821  0.9826  0.9830  0.9834  0.9838  0.9842  0.9846  0.9850  0.9854
$$X \sim N(\mu, \sigma^2): \quad f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \tag{5.11.9}$$
How did mathematicians come up with the above form of the PDF for the normal distribution? Here is one way. The standard normal distribution has a mean of zero and a variance of one, and its graph is centered around $z = 0$. Now, to have a distribution of the same shape (exponential curve) but with mean $\mu$ and variance $\sigma^2$ different from one, we need to translate and scale the standard normal curve (Section 4.2.2). This is achieved with $X = \sigma Z + \mu$. We can see that
$$\mathbb{E}[X] = \mathbb{E}[\sigma Z + \mu] = \sigma\,\mathbb{E}[Z] + \mu = 0 + \mu = \mu$$
$$\mathrm{Var}(X) = \mathrm{Var}(\sigma Z + \mu) = \sigma^2\,\mathrm{Var}(Z) = \sigma^2$$
So far so good; now to get Eq. (5.11.9), we start with the CDF of $X$:
$$F_X(x) = P(X \le x) = P(\sigma Z + \mu \le x) = P\left(Z \le \frac{x-\mu}{\sigma}\right) = \Phi\left(\frac{x-\mu}{\sigma}\right)$$
From that we can determine the PDF of $X$:
$$f_X(x) = \frac{d}{dx}F_X(x) = \frac{d}{dx}\Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{\sigma}\Phi'\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{\sigma}f_Z\left(\frac{x-\mu}{\sigma}\right)$$
The first integral is zero because the integrand is an odd function. For the second integral, use integration by parts.
Figure 5.17: Transformation of the standard normal curve $N(0,1)$ to get a normal curve, here $N(2,2)$, with $\mu \ne 0$ and $\sigma \ne 1$.
Figure 5.17 shows this translating (by $\mu$) and scaling (by $\sigma$). Now we can write the CDF:
$$F_X(x) = P(X \le x) = \Phi\left(\frac{x-\mu}{\sigma}\right) \tag{5.11.10}$$
And thus we can compute $P(a \le X \le b)$ as
$$P(a \le X \le b) = \Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right) \tag{5.11.11}$$
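Eq. (5.11.11) is easy to put into code: $\Phi$ can be written in terms of the error function, $\Phi(z) = \tfrac12(1 + \mathrm{erf}(z/\sqrt{2}))$, which the Python standard library provides. A minimal sketch (the book's own scripts are in Julia; this Python version is only illustrative):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function: Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_prob(a, b, mu=0.0, sigma=1.0):
    """P(a <= X <= b) for X ~ N(mu, sigma^2), using Eq. (5.11.11)."""
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

# Phi(0) = 0.5 and Phi(-z) = 1 - Phi(z), as stated in Figure 5.16
print(Phi(0.0))                  # 0.5
print(normal_prob(-1.96, 1.96))  # ~0.95, the familiar 95% interval
```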
Uniform distribution
Similarly, we computed $P(X = 130)$ and $P(X = 131)$, and we put them in the margins of the original joint PMF table (Table 5.10). Because of this, the probability mass functions for $X$ and $Y$ are often referred to as the marginal distributions for $X$ and $Y$.
With that example, we now give the definition of the marginal distribution for X (the one for
Y is similar):
$$P_X(x) := \sum_{y_j \in R_Y} P_{XY}(x, y_j) \quad \text{for any } x \in R_X \tag{5.12.2}$$
From this, we can, in a similar manner, define the joint cumulative distribution function for X
and Y :
$$F_{XY}(x, y) := P(X \le x, Y \le y), \quad \text{for all } x, y \in \mathbb{R} \tag{5.12.3}$$
And of course, from the joint CDF FX Y .x; y/ we can determine the marginal CDFs for X and
Y:
$$F_X(x) := P(X \le x, Y \le \infty) = \lim_{y\to\infty} F_{XY}(x, y)$$
$$F_Y(y) := P(X \le \infty, Y \le y) = \lim_{x\to\infty} F_{XY}(x, y) \tag{5.12.4}$$
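A marginal PMF, Eq. (5.12.2), is just a sum of the joint PMF over the other variable. A small sketch (the joint table here is made up for illustration; it is not the book's Table 5.10):

```python
# Hypothetical joint PMF P_XY(x, y), stored as a dict {(x, y): probability}
joint = {
    (129, 15): 0.12, (129, 16): 0.08,
    (130, 15): 0.30, (130, 16): 0.20,
    (131, 15): 0.10, (131, 16): 0.20,
}

def marginal_X(joint):
    """P_X(x) = sum over y of P_XY(x, y), Eq. (5.12.2)."""
    pX = {}
    for (x, y), p in joint.items():
        pX[x] = pX.get(x, 0.0) + p
    return pX

pX = marginal_X(joint)
print(pX)  # marginal of X; its values add up to 1
```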
Now we get something new: the red term. Let's massage it and see what we get (recalling that $\mathbb{E}[aX] = a\,\mathbb{E}[X]$). If $X = Y$, then the above becomes the variance of $Y$ (or of $X$; if not clear, check Eq. (5.10.56)). And if $X, Y$ are independent, then $\mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y]$, and the red term vanishes. So, what do we call the red term? We call it the covariance of $X$ and $Y$, denoted by $\mathrm{Cov}(X, Y)$ or $\sigma_{XY}$.
Thus, the variance is a measure of the spread of a single variable w.r.t. its mean, while the covariance is a measure of how two variables vary together. The covariance is in units obtained by multiplying the units of the two variables. What are we going to do now? Compute some covariances? That's important but not interesting: Excel can do that. As usual in maths, we will deduce properties of the covariance before actually computing it!
Properties of the covariance. The covariance can be seen as an operator with two inputs, and it looks similar to the dot product of two vectors. If we look at the properties of the dot product in Box 10.2 we can guess that the following are true (the last one not coming from the dot product though). The proof is skipped as it is 100% based on the definition of the covariance, i.e., Eq. (5.12.5). The first property is: if $Y$ always takes on the same values as $X$, we have the covariance of a variable with itself (i.e., $\sigma_{XX}$), which is nothing but the variance.
Example 5.17
We consider the data given in Table 5.9 and use Eq. (5.12.5) to compute X Y . First, we need
the sample means: XN D .129 C 130 C 131/=3 D 130 and YN D .15 C 16/=2 D 15:5. Then,
$\sigma_{XY}$ can be computed as
$$\sigma_{XY} = \sum_{i=1}^{3}\sum_{j=1}^{2} (X_i - \bar{X})(Y_j - \bar{Y})\,P_{ij}$$
Variance of a sum of variables. Suppose we have a sum of several random variables, in particular $Y = X_1 + \cdots + X_n$. The question is: what is $\mathrm{Var}(Y)$? If $Y$ is just the sum of two variables, then we know that
$$\mathrm{Var}(X_1 + X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)$$
With that, it is only a small step to the general case $Y = \sum_{i=1}^n X_i$. It might help if we go slowly with $n = 3$, i.e. $Y = X_1 + X_2 + X_3$; then $\mathrm{Var}(Y) = \mathrm{Cov}(Y, Y)$ can be written as
$$\begin{aligned}
\mathrm{Var}(Y) &= \mathrm{Cov}(X_1 + X_2 + X_3,\ X_1 + X_2 + X_3)\\
&= \mathrm{Cov}(X_1, X_1 + X_2 + X_3) + \mathrm{Cov}(X_2, X_1 + X_2 + X_3) + \mathrm{Cov}(X_3, X_1 + X_2 + X_3)\\
&= \mathrm{Cov}(X_1, X_1) + \mathrm{Cov}(X_1, X_2) + \mathrm{Cov}(X_1, X_3) + \mathrm{Cov}(X_2, X_1 + X_2 + X_3) + \mathrm{Cov}(X_3, X_1 + X_2 + X_3)\\
&= \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + \mathrm{Var}(X_3) + 2\,\mathrm{Cov}(X_1, X_2) + 2\,\mathrm{Cov}(X_1, X_3) + 2\,\mathrm{Cov}(X_2, X_3)
\end{aligned}$$
where in the second equality, we used the distributive property in Eq. (5.12.6). Then, this property
is used again in the third equality. Doing the same thing for Cov.X1 C X2 C X3 ; X2 / and
Cov.X1 C X2 C X3 ; X3 / we then obtain the final expression for Var.Y /. Now, we can go to the
general case:
$$\mathrm{Var}(Y) = \mathrm{Cov}(Y, Y) = \mathrm{Cov}\Big(\sum_i X_i, \sum_j X_j\Big) = \sum_{i=1}^{n}\sum_{j=1}^{n} \mathrm{Cov}(X_i, X_j) = \sum_{i=1}^{n} \mathrm{Var}(X_i) + 2\sum_{i<j} \mathrm{Cov}(X_i, X_j) \tag{5.12.7}$$
We can get this formula as follows: $\mathrm{Var}(Y) = \mathrm{Cov}(Y, Y)$, then use the distributive law of the covariance operator.
If the $X_i$ are uncorrelated, all the $\mathrm{Cov}(X_i, X_j)$ terms vanish, and thus we get the nice identity
$$\mathrm{Var}\Big(\sum_{i=1}^{n} X_i\Big) = \sum_{i=1}^{n} \mathrm{Var}(X_i) \tag{5.12.8}$$
This statement is called the Bienaymé formula and was discovered in 1853. From that we can deduce that $\mathrm{Var}(\bar{X}) = \sigma^2/n$.
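The Bienaymé formula, Eq. (5.12.8), is easy to sanity-check by simulation. A Monte Carlo sketch with arbitrary choices ($n = 10$ independent Uniform(0,1) variables, each with variance $1/12$):

```python
import random

random.seed(0)

n, trials = 10, 20000
# Each Xi ~ Uniform(0, 1) has variance 1/12; by Eq. (5.12.8) the sum of
# n independent copies should have variance n/12.
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]
m = sum(sums) / trials
var_sum = sum((s - m) ** 2 for s in sums) / (trials - 1)
print(var_sum)   # close to n/12 ~ 0.833
```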
Correlation coefficient. Standardize $X$ and $Y$:
$$U = \frac{X - \mathbb{E}[X]}{\sigma_X}, \qquad V = \frac{Y - \mathbb{E}[Y]}{\sigma_Y}$$
$$\rho_{XY} = \mathrm{Cov}(U, V) = \mathrm{Cov}\left(\frac{X - \mathbb{E}[X]}{\sigma_X}, \frac{Y - \mathbb{E}[Y]}{\sigma_Y}\right) = \mathrm{Cov}\left(\frac{X}{\sigma_X}, \frac{Y}{\sigma_Y}\right) = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$$
Now, we are going to show that $-1 \le \rho_{XY} \le 1$. The proof uses Eq. (5.12.7) to compute the variance of $X/\sigma_X \pm Y/\sigma_Y$:
$$\begin{aligned}
\mathrm{Var}\left(\frac{X}{\sigma_X} \pm \frac{Y}{\sigma_Y}\right) &= \mathrm{Var}\left(\frac{X}{\sigma_X}\right) + \mathrm{Var}\left(\frac{Y}{\sigma_Y}\right) \pm 2\,\mathrm{Cov}\left(\frac{X}{\sigma_X}, \frac{Y}{\sigma_Y}\right)\\
&= \frac{1}{\sigma_X^2}\mathrm{Var}(X) + \frac{1}{\sigma_Y^2}\mathrm{Var}(Y) \pm \frac{2}{\sigma_X\sigma_Y}\mathrm{Cov}(X, Y)\\
&= 2 \pm 2\rho_{XY} \qquad (\text{def. of } \rho_{XY})
\end{aligned} \tag{5.12.9}$$
But the variance of $X/\sigma_X \pm Y/\sigma_Y$ is non-negative, thus
$$0 \le 2 \pm 2\rho_{XY} \implies -1 \le \rho_{XY} \le 1$$
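The sample version of $\rho_{XY}$ divides the sample covariance, Eq. (5.12.10), by the two sample standard deviations; on any data set it must land in $[-1, 1]$. A sketch with made-up hours/marks data (the numbers are hypothetical):

```python
def mean(v):
    return sum(v) / len(v)

def sample_cov(xs, ys):
    """Sample covariance with Bessel's correction, Eq. (5.12.10)."""
    xb, yb = mean(xs), mean(ys)
    return sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / (len(xs) - 1)

def corr(xs, ys):
    """Sample correlation coefficient rho_XY = Cov(X, Y) / (s_X s_Y)."""
    return sample_cov(xs, ys) / (sample_cov(xs, xs) * sample_cov(ys, ys)) ** 0.5

hours = [2, 4, 6, 8, 10]       # hypothetical hours studied
marks = [50, 55, 65, 70, 85]   # hypothetical marks obtained
r = corr(hours, marks)
print(r)   # positive and at most 1: more hours, higher marks
```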
If we have two variables $X, Y$ we have a single $\mathrm{Cov}(X, Y)$; what if we have more than two variables? Let's investigate the case of three variables $X$, $Y$ and $Z$. Of course, we would have $\mathrm{Cov}(X, Y)$, $\mathrm{Cov}(X, Z)$, $\mathrm{Cov}(Y, Z)$, and so on. And if we put all of them in a matrix, we get the so-called covariance matrix:
$$\mathbf{C} = \begin{bmatrix} \mathrm{Cov}(X,X) & \mathrm{Cov}(X,Y) & \mathrm{Cov}(X,Z)\\ \mathrm{Cov}(X,Y) & \mathrm{Cov}(Y,Y) & \mathrm{Cov}(Y,Z)\\ \mathrm{Cov}(X,Z) & \mathrm{Cov}(Y,Z) & \mathrm{Cov}(Z,Z) \end{bmatrix}$$
there. Let's see. A $2 \times 2$ covariance matrix is sufficient to reveal the secret. Without loss of generality, we consider only discrete random variables $X$ with mean $\bar{X}$ and $Y$ with mean $\bar{Y}$. Thus, we have
$$\mathbf{C} = \begin{bmatrix} \mathrm{Cov}(X,X) & \mathrm{Cov}(X,Y)\\ \mathrm{Cov}(X,Y) & \mathrm{Cov}(Y,Y) \end{bmatrix}, \qquad \begin{cases} \mathrm{Cov}(X,X) = \sum_i P_i (X_i - \bar{X})^2\\[4pt] \mathrm{Cov}(X,Y) = \sum_i \sum_j P_{ij} (X_i - \bar{X})(Y_j - \bar{Y}) \end{cases}$$
There is an asymmetry between the formulas for $\mathrm{Cov}(X,X)$ and $\mathrm{Cov}(X,Y)$: there is no $P_{ij}$ in the former! Let's make it appear and something wonderful will show up (this is due to $P_i = \sum_j P_{ij}$; check the marginal probability if this is not clear):
$$\mathrm{Cov}(X, X) = \sum_i P_i (X_i - \bar{X})^2 = \sum_i \sum_j P_{ij} (X_i - \bar{X})^2$$
With that, we can have a beautiful formula for $\mathbf{C}$, in which $\mathbf{C}$ is a sum of a bunch of matrices, each matrix multiplied by a non-negative number (i.e., $P_{ij}$):
$$\mathbf{C} = \sum_i \sum_j P_{ij} \begin{bmatrix} (X_i - \bar{X})^2 & (X_i - \bar{X})(Y_j - \bar{Y})\\ (X_i - \bar{X})(Y_j - \bar{Y}) & (Y_j - \bar{Y})^2 \end{bmatrix}$$
What is special about the red matrix? It is equal to $\mathbf{U}\mathbf{U}^\top$, where $\mathbf{U} = (X_i - \bar{X}, Y_j - \bar{Y})$. So what? Every matrix $\mathbf{U}\mathbf{U}^\top$ is positive semidefinite. Thus, $\mathbf{C}$ combines all these positive semidefinite matrices with weights $P_{ij} \ge 0$: it is positive semidefinite. This turns out to be a useful property, exploited in principal component analysis, which is an important tool in statistics.
Sample covariance. If we have $n$ samples and each sample has two measurements $X$ and $Y$, hence $X = (x_1, \ldots, x_n)$ and $Y = (y_1, \ldots, y_n)$, then the sample covariance between $X$ and $Y$ is defined as (noting Bessel's correction $n - 1$ in the denominator)
$$\mathrm{Cov}(X, Y) = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) \tag{5.12.10}$$
What does that actually mean? Assume that $X$ denotes the number of hours studied for a subject and $Y$ the marks obtained in that subject. We can use real data to compute the covariance; assume that the value is 90.34. What does this value mean? A positive covariance indicates that the two variables tend to increase or decrease together, e.g. as the number of hours studied increases, the grades also increase. A negative value, on the other hand, means that while one variable increases the other decreases, or vice versa. And if the covariance is zero, the two variables are uncorrelated.
Check Section 10.10.6 for quadratic forms and positive definiteness of matrices. The proof goes: $\mathbf{x}^\top(\mathbf{U}\mathbf{U}^\top)\mathbf{x} = \|\mathbf{U}^\top\mathbf{x}\|^2 \ge 0$.
Now, we derive the formula for the covariance matrix for the whole data. We start with the sample means:
$$X: x_1, x_2, \ldots, x_n, \quad \bar{x} = \frac{1}{n}\sum_i x_i; \qquad Y: y_1, y_2, \ldots, y_n, \quad \bar{y} = \frac{1}{n}\sum_i y_i$$
Then, we subtract the means from the data, to center the data:
$$\mathbf{A} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n\\ y_1 & y_2 & \cdots & y_n \end{bmatrix} \implies \mathbf{A} = \begin{bmatrix} x_1 - \bar{x} & x_2 - \bar{x} & \cdots & x_n - \bar{x}\\ y_1 - \bar{y} & y_2 - \bar{y} & \cdots & y_n - \bar{y} \end{bmatrix}$$
$$\mathbf{C} = \frac{1}{n-1}\mathbf{A}\mathbf{A}^\top$$
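The formula $\mathbf{C} = \mathbf{A}\mathbf{A}^\top/(n-1)$ is straightforward to code. A sketch in plain Python (made-up data), whose diagonal gives the sample variances and off-diagonal the sample covariance of Eq. (5.12.10):

```python
def mean(v):
    return sum(v) / len(v)

def cov_matrix(xs, ys):
    """2x2 covariance matrix C = A A^T / (n - 1) built from centered rows."""
    n = len(xs)
    a = [x - mean(xs) for x in xs]   # centered row for X
    b = [y - mean(ys) for y in ys]   # centered row for Y
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return [[dot(a, a) / (n - 1), dot(a, b) / (n - 1)],
            [dot(b, a) / (n - 1), dot(b, b) / (n - 1)]]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 1.0, 4.0, 3.0]
C = cov_matrix(xs, ys)
print(C)   # symmetric; diagonal entries are the sample variances
```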
$$\mathbb{E}[X] = \int_0^{\infty} x f_X(x)\,dx \ge \int_a^{\infty} x f_X(x)\,dx \ge a\int_a^{\infty} f_X(x)\,dx = a\,P(X \ge a)$$
$$\text{Markov's inequality: for any non-negative RV } X: \quad P(X \ge a) \le \frac{\mathbb{E}[X]}{a}$$
This is a tail bound because it imposes an upper limit on how big the right tail at a can be.
Now, we apply Markov's inequality to get Chebyshev's inequality. Motivation: Markov's inequality involves the expected value; where is the variance? It is involved in Chebyshev's inequality. Can we guess the form of this inequality? The variance is about the spread of $X$ with respect to the mean. So, we would expect something like $P(|X - \mathbb{E}[X]| \ge b) \le g(\mathrm{Var}(X), b)$. Note that because of symmetry, when talking about spread we have two tails involved: that's where the term $|X - \mathbb{E}[X]| \ge b$ comes into play.
Figure 5.18: The mean of $n$ uniformly distributed RVs $X_i \sim \mathrm{Uniform}(1, 2)$. Note that each $X_i$ has an expected value of 1.5 and a SD of $\sqrt{1/12}$.
It is quite simple to verify the observations on the expected value and SD of $Y$. Indeed, we can compute $\mathbb{E}[Y]$ and $\mathrm{Var}(Y)$ using the linearity of the expected value and the properties of the variance. Let's denote by $\mu$ and $\sigma^2$ the expected value and variance of $X_i$ (all of them have the same). Then,
$$\mathbb{E}[Y] = \mathbb{E}[X_1]/n + \mathbb{E}[X_2]/n + \cdots + \mathbb{E}[X_n]/n = \frac{1}{n}(n\mu) = \mu \tag{5.14.2}$$
and
$$\mathrm{Var}(Y) = \mathrm{Var}\left(\frac{X_1 + X_2 + \cdots + X_n}{n}\right) = n\,\mathrm{Var}\left(\frac{X_i}{n}\right) = \frac{\sigma^2}{n} \tag{5.14.3}$$
See Listing B.19 if you’re interested in how this was done.
where in the second equality, the Bienaymé formula i.e., Eq. (5.12.8) was used to replace the
variance of a sum with the sum of variances.
About the bell-shaped curve of $Y$ when $n$ is large: it is guaranteed by the central limit theorem (CLT). According to this theorem (whose proof is given in Section 5.15.3), $Y \sim N(\mu, \sigma^2/n)$. Therefore, we have, for large $n$ (Eq. (5.11.11)):
$$P(a \le Y \le b) = \Phi\left(\frac{b-\mu}{\sigma/\sqrt{n}}\right) - \Phi\left(\frac{a-\mu}{\sigma/\sqrt{n}}\right) \tag{5.14.4}$$
When is $n$ sufficiently large? Another question that comes to mind is how large $n$ should be so that we can use the CLT. The answer generally depends on the distribution of the $X_i$. Nevertheless, as a rule of thumb it is often stated that if $n$ is larger than or equal to 30, then the normal approximation is very good.
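The experiment of Figure 5.18 is easy to reproduce, and Eq. (5.14.4) can be checked against a simulation. A Python sketch (the book's Listing B.19 is in Julia; interval and sample sizes here are arbitrary):

```python
import random
from math import erf, sqrt

random.seed(1)

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, trials = 30, 20000
mu, sigma = 1.5, sqrt(1.0 / 12.0)   # mean and SD of Uniform(1, 2)
a, b = 1.45, 1.55

# CLT approximation, Eq. (5.14.4)
p_clt = Phi((b - mu) / (sigma / sqrt(n))) - Phi((a - mu) / (sigma / sqrt(n)))

# Monte Carlo estimate of P(a <= Ybar <= b)
hits = 0
for _ in range(trials):
    ybar = sum(random.uniform(1.0, 2.0) for _ in range(n)) / n
    hits += a <= ybar <= b
p_mc = hits / trials
print(p_clt, p_mc)   # the two numbers should be close
```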
Theorem 5.14.1: Central limit theorem
Let $X_1, X_2, \ldots, X_n$ be iid random variables with expected value $\mu$ and variance $\sigma^2$. Then, the random variable
$$Z_n = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} = \frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}}$$
converges in distribution to the standard normal random variable as $n$ goes to infinity. That is,
$$\lim_{n\to\infty} P(Z_n \le z) = \Phi(z) \quad \text{for all } z$$
Example 5.18
Test scores of all high school students in a state have mean 60 and variance 64. A random sample of 100 ($n = 100$) students from one high school had a mean score of 58. Is there evidence to suggest that this high school is inferior to others?
Let $\bar{X}$ denote the mean of $n = 100$ scores from a population with $\mu = 60$ and $\sigma^2 = 64$. We know from the central limit theorem that $(\bar{X} - \mu)/(\sigma/\sqrt{n})$ is (approximately) a standard normal variable. Thus,
$$P(\bar{X} \le 58) = \Phi\left(\frac{58 - \mu}{\sigma/\sqrt{n}}\right) = \Phi\left(\frac{58 - 60}{8/\sqrt{100}}\right) = \Phi(-2.5) = 1 - \Phi(2.5) = 0.0062$$
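The number 0.0062 can be reproduced directly, again writing $\Phi$ in terms of the error function from Python's standard library:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma, n = 60.0, 8.0, 100
z = (58.0 - mu) / (sigma / sqrt(n))   # standardized score, -2.5
p = Phi(z)
print(round(z, 2), round(p, 4))       # -2.5 0.0062
```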
$$B_0 = 1,\ B_1 = -\frac{1}{2},\ B_2 = \frac{1}{6},\ B_3 = 0,\ B_4 = -\frac{1}{30},\ B_5 = 0,\ B_6 = \frac{1}{42},\ B_7 = 0,\ \ldots$$
There are infinitely many of them, and it seems impossible to understand them. But, with Euler's definition, in 1755, of the Bernoulli numbers in terms of the following function
$$\frac{x}{e^x - 1} = \sum_{n=0}^{\infty} B_n \frac{x^n}{n!} \tag{5.15.1}$$
we have discovered the recurrence relation between the $B_n$, Eq. (4.16.2). The function $x/(e^x - 1)$ is called a generating function. It encodes the entire sequence of Bernoulli numbers. Roughly speaking, generating functions transform problems about sequences into problems about functions. And by fooling around with this function we can explore the properties of the sequence it encodes. This is because we've got piles of mathematical machinery for manipulating functions (e.g. differentiation and integration).
Now, we give another example showing the power of a generating function. If we observe carefully we will see that, except for $B_1 = -1/2$, the odd-indexed numbers $B_{2n+1}$ for $n \ge 1$ are zero. Why? Let's fool with the function:
$$g(x) := \frac{x}{e^x - 1} - B_1 x = \frac{x}{e^x - 1} + \frac{x}{2} = \frac{x}{2}\,\frac{e^x + 1}{e^x - 1} = \frac{x}{2}\,\frac{e^x + 1}{e^x - 1}\,\frac{e^{-x/2}}{e^{-x/2}} = \frac{x}{2}\,\frac{e^{x/2} + e^{-x/2}}{e^{x/2} - e^{-x/2}}$$
Why this function? We added the red term so that we can have a symmetric form ($e^x + 1$ is not symmetric but $e^{x/2} + e^{-x/2}$ is). It's easy to see that $g(-x) = g(x)$, thus it is an even function. Therefore, with Eq. (5.15.1),
$$g(x) = 1 + \frac{B_2}{2!}x^2 + \frac{B_3}{3!}x^3 + \cdots \ \text{is an even function} \implies B_{2n+1} = 0 \ (n \ge 1)$$
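The definition Eq. (5.15.1) hides a recurrence: multiplying both sides by $e^x - 1$ and comparing coefficients gives the standard relation $\sum_{k=0}^{n} \binom{n+1}{k} B_k = 0$ for $n \ge 1$, i.e. $B_n = -\frac{1}{n+1}\sum_{k=0}^{n-1}\binom{n+1}{k}B_k$. A sketch that generates the numbers exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    """First N+1 Bernoulli numbers from the recurrence implied by Eq. (5.15.1):
    B_n = -1/(n+1) * sum_{k=0}^{n-1} C(n+1, k) B_k, with B_0 = 1."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / Fraction(n + 1))
    return B

B = bernoulli(7)
print([str(b) for b in B])  # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0']
```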
George Pólya wrote in his book Mathematics and plausible reasoning in 1954 about gener-
ating functions:
A generating function is a device somewhat similar to a bag. Instead of carrying
many little objects detachedly, which could be embarrassing, we put them all in a
bag, and then we have only one object to carry, the bag.
The pattern here is simple: the $n$th term in the sequence (indexing from 0) is the coefficient of $x^n$ in the generating function. There are a few other kinds of generating functions in common use (e.g. $x/(e^x - 1)$, the exponential generating function of the Bernoulli numbers), but ordinary generating functions are enough to illustrate the power of the idea, so we will stick to them; from now on, generating function will mean the ordinary kind.
Remark 3. A generating function is a “formal” power series in the sense that we usually
regard x as a placeholder rather than a number. Only in rare cases will we actually evaluate a
generating function by letting x take a real number value, so we generally ignore the issue of
convergence.
Just looking at this definition, there is no reason to believe that we’ve made any progress
in studying anything. We want to understand a sequence .a0 ; a1 ; a2 ; : : :/; how could it possibly
help to make an infinite series out of these! The reason is that frequently there's a simple, closed-form expression for $G(a_n; x)$. The magic of generating functions is that we can carry out all
sorts of manipulations on sequences by performing mathematical operations on their associated
generating functions. Let’s experiment with various operations and characterize their effects in
terms of sequences.
Example 5.19
The generating function for the sequence $1, 1, 1, \ldots$ is $1/(1-x)$. This is because (if you still remember the geometric series)
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots \quad \text{where the coefficient of every } x^n \text{ is } 1$$
We can create different generating functions from this one. For example, if we replace $x$ by $3x$, we have
$$\frac{1}{1-3x} = 1 + 3x + 9x^2 + 27x^3 + \cdots \quad \text{which generates } 1, 3, 9, 27, \ldots$$
Multiplying this with $x$, we get
$$\frac{x}{1-3x} = 0 + x + 3x^2 + 9x^3 + 27x^4 + \cdots \quad \text{which generates } 0, 1, 3, 9, 27, \ldots$$
which right-shifts the original sequence (i.e., $1, 3, 9, 27, \ldots$) by one. We can multiply the GF by $x^k$ to right-shift the sequence $k$ times.
Solving difference equations. Assume that we have the sequence $1, 3, 7, 15, 31, \ldots$, which can be defined as
$$a_0 = 1, \quad a_1 = 3, \quad a_n = 3a_{n-1} - 2a_{n-2} \ (n \ge 2)$$
The question is: what is the generating function for this sequence? Let's denote by $f(x)$ that function; thus we have (by definition of a generating function)
$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots = \sum_{n=0}^{\infty} a_n x^n$$
Now, the recurrence relation ($a_n = 3a_{n-1} - 2a_{n-2}$) can be re-written as $a_n - 3a_{n-1} + 2a_{n-2} = 0$, so we multiply $f(x)$ by $-3x$, multiply $f(x)$ by $2x^2$, and add everything up, including $f(x)$ itself:
$$f(x)\left[1 - 3x + 2x^2\right] = 1 \implies f(x) = \frac{1}{1 - 3x + 2x^2}$$
where all the columns add up to zero except the first one, because of the recurrence relation $a_n - 3a_{n-1} + 2a_{n-2} = 0$.
But why is having the generating function useful? Because it allows us to find a formula for $a_n$; we then no longer need to use the recurrence relation to get $a_n$ starting from $a_0, a_1, \ldots$ all the way up to $a_{n-1}$. The trick is to re-write $f(x)$ in terms of simpler functions (using the partial fraction decomposition discussed in Section 4.7.7) and then replace these functions by their corresponding power series. Now, we can decompose $f(x)$ easily with the 'apart' function in SymPy:
$$f(x) = \frac{1}{1 - 3x + 2x^2} = \frac{-1}{1 - x} + \frac{2}{1 - 2x}$$
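Expanding $-1/(1-x) + 2/(1-2x)$ as geometric series gives $a_n = -1 + 2\cdot 2^n = 2^{n+1} - 1$, which we can check against the recurrence:

```python
def a_recurrence(n):
    """a_n from a_0 = 1, a_1 = 3, a_n = 3 a_{n-1} - 2 a_{n-2}."""
    a, b = 1, 3
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 3 * b - 2 * a
    return b

# Closed form read off the partial fractions: a_n = 2^(n+1) - 1
for n in range(10):
    assert a_recurrence(n) == 2 ** (n + 1) - 1
print([a_recurrence(n) for n in range(6)])  # [1, 3, 7, 15, 31, 63]
```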
So, we want a Ferrari instead of a Honda CRV.
Check Section 3.19 if you’re not sure about SymPy.
Evaluating sums.
Recall the Cauchy product formula for two power series:
$$\left(\sum_{n=0}^{\infty} a_n x^n\right)\left(\sum_{m=0}^{\infty} b_m x^m\right) = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_k b_{n-k}\right) x^n$$
For the special case with $B(x) = 1/(1-x)$, all the $b_i$ equal one, and thus we have
$$(c_0, c_1, \ldots) \longleftrightarrow \frac{A(x)}{1-x}, \qquad c_n = \sum_{k=0}^{n} a_k \tag{5.15.4}$$
Note that we also know the series of $1/(1 - 3x + 2x^2)$, but that series is simply the RHS of Eq. (5.15.3).
Moments, central moments. To motivate the introduction of moments in probability, let’s look
at how the expected value and the variance were defined:
Moment generating functions. The moment generating function (MGF) of a random variable (discrete or continuous) $X$ is simply the expected value of $e^{tX}$:
$$m(t) := \mathbb{E}[e^{tX}] \tag{5.15.5}$$
Now, we will elaborate $m(t)$ to reveal the reason behind its name (and its definition). The idea is to replace $e^{tX}$ by its Taylor series, then apply the linearity of the expected value, and we shall see all the moments $\mu_k$:
$$\begin{aligned}
m(t) &= \mathbb{E}\left[1 + tX + \frac{(tX)^2}{2!} + \cdots + \frac{(tX)^k}{k!} + \cdots\right]\\
&= 1 + t\,\mathbb{E}[X] + \frac{t^2\,\mathbb{E}[X^2]}{2!} + \cdots + \frac{t^k\,\mathbb{E}[X^k]}{k!} + \cdots\\
&= 1 + \mu_1 t + \frac{\mu_2}{2!}t^2 + \cdots + \frac{\mu_k}{k!}t^k + \cdots
\end{aligned}$$
Compared with Eq. (5.15.2), which is the ordinary generating function for the sequence $(a_0, a_1, a_2, \ldots)$, we can see why $m(t)$, as defined in Eq. (5.15.5), is called the moment generating function: it encodes all the moments $\mu_k$ of $X$. By differentiating $m(t)$ and evaluating at $t = 0$, we can retrieve any moment. For example, $m'(0) = \mu_1$, and $m''(0) = \mu_2$.
We can now give a full definition of the moment generating function of either a discrete or continuous random variable:
$$\text{Discrete RV:}\quad m(t) = \sum_{x} e^{tx} P(x), \qquad \text{Continuous RV:}\quad m(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx \tag{5.15.6}$$
We now look at some examples to see how powerful MGFs are.
Example 5.20
We consider the geometric distribution, compute its moment generating function, and see what we get. With success probability $p$ and $q = 1 - p$, the sum over the PMF is a geometric series:
$$m(t) = \sum_{k=1}^{\infty} e^{tk}\, q^{k-1} p = \frac{p e^t}{1 - q e^t}, \qquad m'(t) = \frac{p e^t}{(1 - q e^t)^2}$$
$$\mathbb{E}[X] = \mu_1 = m'(0) = \frac{p}{(1 - q)^2} = \frac{1}{p}$$
You can compare this with the procedure in Example 5.16 and conclude for yourself which way is easier.
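The conclusion $\mathbb{E}[X] = 1/p$ is easy to sanity-check by simulating the geometric RV directly (number of trials up to and including the first success); a sketch with an arbitrary $p$:

```python
import random

random.seed(2)

def geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p, trials = 0.25, 20000
avg = sum(geometric(p) for _ in range(trials)) / trials
print(avg)   # close to 1/p = 4
```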
Example 5.21
We now determine the MGF of a standard normal variable $Z$. We use the definition, Eq. (5.15.6), to compute it:
$$\begin{aligned}
m(t) &= \int_{-\infty}^{\infty} e^{tz}\,\frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}\,dz = e^{t^2/2}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(z^2 - 2tz + t^2)}\,dz\\
&= e^{t^2/2}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(z - t)^2}\,dz = e^{t^2/2}
\end{aligned}$$
Because the red integral is simply one: it is the probability density function of $N(t, 1)$!
the proof is based on the moment generating function concept. First, the CLT is recalled. Let $X_1, X_2, \ldots, X_n$ be iid random variables with expected value $\mu$ and variance $\sigma^2$. Then, the random variable
$$\bar{S}_n = \frac{S_n - n\mu}{\sigma\sqrt{n}}, \qquad S_n = X_1 + X_2 + \cdots + X_n$$
converges in distribution to $N(0, 1)$. Without loss of generality we can assume $\mu = 0$ and $\sigma = 1$ (replace each $X_i$ by $(X_i - \mu)/\sigma$), so that $\bar{S}_n = \sum_i X_i/\sqrt{n}$.
Why this particular form? Because we have the convolution rule that works for a sum. Using Eq. (5.15.7), the MGF of $\bar{S}_n$ is simply:
$$m_{\bar{S}_n}(t) = \left[m_{X/\sqrt{n}}(t)\right]^n = \left[m_X(t/\sqrt{n})\right]^n \tag{5.15.9}$$
(the second equality uses Eq. (5.15.8) with $a = 1/\sqrt{n}$, $b = 0$).
p
Now, we use Taylor’s series to approximate mX .t= n/ when n is large:
p t t2 t2
m X .t= n/ m X .0/ C mX0 .0/ p C mX00 .0/ D1C
n 2n 2n
n
t2
2 =2
mSn .t/ 1 C et
2n
So, we have proved that when n is large the MGF of Sn is approximately the MGF of N.0; 1/.
Thus, Sn has a standard normal distribution. Q.E.D.
This is because $(1 + a/n)^n \to e^a$ when $n \to \infty$. Check Eq. (4.14.17) if this is not clear.
5.16 Review
I had a bad experience with probability in university. It is quite unbelievable that I have now man-
aged to learn it at the age of 42 to a certain level of understanding. Here are some observations
that I made
Probability had a humble starting point in games of chance. But mathematicians turned it into a rigorous branch of mathematics with some beautiful theorems (e.g. the central limit theorem) and applications in many diverse fields far from gambling;
To learn probability for discrete random variables, we first need a solid understanding of counting methods (e.g. factorials, permutations and so on).
Contents
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Statistics with Julia: Fundamentals for Data Science, Machine Learning and Artificial
Intelligence, by Yoni Nazarathy and Hayden Klok [41];
Chapter 6. Statistics and machine learning
6.1 Introduction
6.1.1 What is statistics
6.1.2 Why study statistics
6.1.3 A brief history of statistics
Let $X$ be a normal random variable with mean $\mu = 100$ and variance $\sigma^2 = 15$. Find the probability that $X > 100$.
In statistical inference problems, the situation is completely different. In real life, we do not know the distribution of the population (i.e., of $X$). Most often, we use the central limit theorem to assume that $X$ has a normal distribution, yet we still do not know the values of $\mu$ and $\sigma^2$.
This brings us to the problem of estimation. We use sample data to estimate, for example, the mean of the population. If we use a single number for the mean, we're doing point estimation, whereas if we provide an interval for the mean, we're doing interval estimation.
By Roger Cotes, Legendre and Gauss. In 1809 Carl Friedrich Gauss published his method (of least squares)
of calculating the orbits of celestial bodies.
Even though this problem can be solved by calculus (i.e., setting the derivatives of $S$ w.r.t. $\alpha$ and $\beta$ to zero), I prefer to use linear algebra to solve it. Why? To understand more about linear algebra! To this end, we introduce the error vector $\mathbf{e} = (e_1, e_2, \ldots, e_n)$ where $e_i = y_i - f(x_i)$. Let's start with the simplest case where $f(x) = \alpha x + \beta$; then we can write the error vector as
$$\mathbf{e} = \begin{bmatrix} e_1\\ e_2\\ \vdots\\ e_n \end{bmatrix} = \underbrace{\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{bmatrix}}_{\mathbf{b}} - \underbrace{\begin{bmatrix} x_1 & 1\\ x_2 & 1\\ \vdots & \vdots\\ x_n & 1 \end{bmatrix}}_{\mathbf{A}} \underbrace{\begin{bmatrix} \alpha\\ \beta \end{bmatrix}}_{\mathbf{x}} \tag{6.5.1}$$
In statistics, the matrix $\mathbf{A}$ is called the design matrix. Usually we have lots of data, thus this matrix is skinny, meaning that it has more rows than columns. Now the problem is to find $\mathbf{x} = (\alpha, \beta)$ to minimize $S$, which is equivalent to minimizing $\|\mathbf{e}\|$ (where $\|\mathbf{v}\|$ is the Euclidean norm), which is equivalent to minimizing $\|\mathbf{b} - \mathbf{A}\mathbf{x}\|$. We have converted the problem to a linear algebra problem of solving $\mathbf{A}\mathbf{x} = \mathbf{b}$, but with a rectangular matrix. This overdetermined system is unsolvable in the traditional sense: no $\mathbf{x}$ would make $\mathbf{A}\mathbf{x}$ equal $\mathbf{b}$. Thus, we ask for a vector $\mathbf{x}^*$ that minimizes $\|\mathbf{b} - \mathbf{A}\mathbf{x}\|$; such a vector is called the least squares solution to $\mathbf{A}\mathbf{x} = \mathbf{b}$. So, we have the following definition:
Definition 6.5.1: Least squares problem
If $\mathbf{A}$ is an $m \times n$ matrix and $\mathbf{b}$ is in $\mathbb{R}^m$, a least squares solution of $\mathbf{A}\mathbf{x} = \mathbf{b}$ is a vector $\mathbf{x}^*$ such that
$$\|\mathbf{b} - \mathbf{A}\mathbf{x}^*\| \le \|\mathbf{b} - \mathbf{A}\mathbf{x}\|$$
for all $\mathbf{x}$ in $\mathbb{R}^n$.
$$\mathbf{A}\mathbf{x}^* = \mathrm{proj}_{C(\mathbf{A})}(\mathbf{b})$$
We do not have to solve this system to get $\mathbf{x}^*$; a bit of algebra (the columns $\mathbf{a}_i$ of $\mathbf{A}$ are orthogonal to the residual) leads to
$$\mathbf{a}_i \cdot (\mathbf{b} - \mathbf{A}\mathbf{x}^*) = 0, \quad i = 1, 2, \ldots, n \implies \mathbf{A}^\top\mathbf{A}\mathbf{x}^* = \mathbf{A}^\top\mathbf{b}$$
$$\mathbf{A}^+ = (\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top$$
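For the straight-line fit, the normal equations $\mathbf{A}^\top\mathbf{A}\mathbf{x} = \mathbf{A}^\top\mathbf{b}$ are a $2\times 2$ system we can solve by hand with Cramer's rule. A sketch with made-up noisy data around $y = 2x + 1$:

```python
def fit_line(xs, ys):
    """Least squares line y = alpha*x + beta via the normal equations
    (A^T A) x = A^T b with rows [x_i, 1] in A and b = [y_i]."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # A^T A = [[sxx, sx], [sx, n]],  A^T b = [sxy, sy]
    det = sxx * n - sx * sx
    alpha = (n * sxy - sx * sy) / det
    beta = (sxx * sy - sx * sxy) / det
    return alpha, beta

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # hypothetical data near y = 2x + 1
alpha, beta = fit_line(xs, ys)
print(alpha, beta)   # roughly 2 and 1
```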
Fitting a cloud of points with a parabola. The least squares method works equally well when $f(x) = \alpha x^2 + \beta x + \gamma$. Everything is the same, except that we have a bigger design matrix and three unknowns to solve for:
$$\mathbf{A} = \begin{bmatrix} x_1^2 & x_1 & 1\\ x_2^2 & x_2 & 1\\ \vdots & \vdots & \vdots\\ x_n^2 & x_n & 1 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} \alpha\\ \beta\\ \gamma \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{bmatrix}$$
Fitting a cloud of 3D points with a plane. So far we just dealt with $y = f(x)$. How about $z = f(x, y)$? No problem, the exact same method works too. Assume that we want to find the best plane $z = \alpha x + \beta y + \gamma$:
$$\mathbf{A} = \begin{bmatrix} x_1 & y_1 & 1\\ x_2 & y_2 & 1\\ \vdots & \vdots & \vdots\\ x_n & y_n & 1 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} \alpha\\ \beta\\ \gamma \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} z_1\\ z_2\\ \vdots\\ z_n \end{bmatrix}$$
Fibonacci-type numbers directly. In this section, a method based on linear algebra is introduced to solve similar problems. We start with one example. To read this section you need linear algebra, particularly matrix diagonalization; check Chapter 10.
Example 6.1
Consider the sequence $(x_n)$ defined by the initial conditions $x_1 = 1$, $x_2 = 5$ and the recurrence relation $x_n = 5x_{n-1} - 6x_{n-2}$ for $n > 2$. Our problem is to derive a direct formula for $x_n$ using matrices. To this end, we introduce the vector $\mathbf{x}_n = (x_n, x_{n-1})$. With this vector, we can write the given recurrence using matrix notation:
$$\mathbf{x}_n = \begin{bmatrix} x_n\\ x_{n-1} \end{bmatrix} = \begin{bmatrix} 5 & -6\\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_{n-1}\\ x_{n-2} \end{bmatrix} = \begin{bmatrix} 5 & -6\\ 1 & 0 \end{bmatrix}\mathbf{x}_{n-1}$$
And we have obtained a recurrence formula $\mathbf{x}_n = \mathbf{A}\mathbf{x}_{n-1}$. With that, we get
$$\mathbf{x}_3 = \mathbf{A}\mathbf{x}_2, \quad \mathbf{x}_4 = \mathbf{A}\mathbf{x}_3 = \mathbf{A}^2\mathbf{x}_2, \ \ldots \implies \mathbf{x}_n = \mathbf{A}^{n-2}\mathbf{x}_2, \quad \mathbf{x}_2 = (5, 1) \tag{6.6.1}$$
Now our task is simply to compute $\mathbf{A}^k$. With the eigenvalues $3, 2$ and eigenvectors $(3, 1)$ and $(2, 1)$, it is easy to do so:
$$\mathbf{A}^k = \begin{bmatrix} 3 & 2\\ 1 & 1 \end{bmatrix}\begin{bmatrix} 3^k & 0\\ 0 & 2^k \end{bmatrix}\begin{bmatrix} 3 & 2\\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 3^{k+1} - 2^{k+1} & -2(3^{k+1}) + 3(2^{k+1})\\ 3^k - 2^k & -2(3^k) + 3(2^k) \end{bmatrix}$$
With that and the boxed equation, we can get $x_n = 3^n - 2^n$.
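The closed form $x_n = 3^n - 2^n$ can be verified against the recurrence directly:

```python
def x_recurrence(n):
    """x_n from x_1 = 1, x_2 = 5, x_n = 5 x_{n-1} - 6 x_{n-2}."""
    a, b = 1, 5          # x_1, x_2
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 5 * b - 6 * a
    return b

for n in range(1, 15):
    assert x_recurrence(n) == 3 ** n - 2 ** n
print([x_recurrence(n) for n in range(1, 6)])  # [1, 5, 19, 65, 211]
```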
Nothing can be simpler but admittedly the maths is boring. Now comes the interesting part. We
rewrite the above using matrix notation, this is what we get
Let’s stop here and introduce some terminologies. What we are dealing with is called a Markov
chain with two states A and B. There are then four possibilities: a person in state A can stay in
that state or he/she can hop to state B and the person in state B can stay in it or move to A. The
probabilities of these four situations are the four numbers put in the matrix P.
And from that we can see that the Markov chain satisfies the recurrence formula $\mathbf{x}_{k+1} = \mathbf{P}\mathbf{x}_k$, for $k = 0, 1, 2, \ldots$. Alternatively, we can write
$$\mathbf{x}_1 = \mathbf{P}\mathbf{x}_0, \quad \mathbf{x}_2 = \mathbf{P}\mathbf{x}_1 = \mathbf{P}^2\mathbf{x}_0, \ \ldots \implies \mathbf{x}_k = \mathbf{P}^k\mathbf{x}_0, \quad k = 1, 2, \ldots$$
where the vectors x k are called state vectors and P is called the transition matrix. Instead of
working directly with the actual numbers of toothpaste users, we can use relative numbers:
" # " #
120=200 0:6
x0 D D W probability vector
80=200 0:4
Why relative numbers? Because they add up to one! That’s why vectors such as x 0 are called
probability vectors.
We're now ready to answer the question: how many people will use each brand after, let's say, 10 months? Using $\mathbf{x}_k = \mathbf{P}^k\mathbf{x}_0$, we can compute $\mathbf{x}_1, \mathbf{x}_2, \ldots$ and get the following result:
$$\mathbf{x}_1 = \begin{bmatrix} 0.5\\ 0.5 \end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix} 0.45\\ 0.55 \end{bmatrix}, \ \ldots, \ \mathbf{x}_9 \approx \begin{bmatrix} 0.4\\ 0.6 \end{bmatrix}, \quad \mathbf{x}_{10} \approx \begin{bmatrix} 0.4\\ 0.6 \end{bmatrix}$$
Two observations can be made based on this result. First, all state vectors are probability vectors (i.e., the components of each vector add up to one). Second, the state vectors converge to a special vector $(0.4, 0.6)$. It is interesting that once this state is reached, the state will never change:
$$\begin{bmatrix} 0.7 & 0.2\\ 0.3 & 0.8 \end{bmatrix}\begin{bmatrix} 0.4\\ 0.6 \end{bmatrix} = \begin{bmatrix} 0.4\\ 0.6 \end{bmatrix}$$
This special vector is called a steady state vector. Thus, a steady state vector x is one such that
Px D x. What does this equation say? It says that x is an eigenvector of P with corresponding
eigenvalue of one.
All these results are of course consequences of the following two properties of the Markov
matrix:
$$\text{Markov matrix } \mathbf{P}: \quad \begin{cases} 1.\ \text{Every entry is positive: } P_{ij} > 0\\ 2.\ \text{Every column adds to 1: } \sum_i P_{ij} = 1 \end{cases}$$
Proof. [State vectors are probability vectors] Start with a state vector $\mathbf{u}$; we need to prove that $\mathbf{x} = \mathbf{P}\mathbf{u}$ is a probability vector, where $\mathbf{P}$ is a Markov matrix. We know that the components of $\mathbf{u}$ sum up to one. We need to translate that to mathematics, which is $u_1 + u_2 + \cdots + u_n = 1$, or better $[1\ 1\ \ldots\ 1]\mathbf{u} = 1$. So, to prove that $\mathbf{x}$ adds up to one, we just need to show that $[1\ 1\ \ldots\ 1](\mathbf{P}\mathbf{u}) = 1$. This is true because $[1\ 1\ \ldots\ 1](\mathbf{P}\mathbf{u}) = ([1\ 1\ \ldots\ 1]\mathbf{P})\mathbf{u} = [1\ 1\ \ldots\ 1]\mathbf{u}$, which is one ($[1\ 1\ \ldots\ 1]\mathbf{P} = [1\ 1\ \ldots\ 1]$ because each column of $\mathbf{P}$ adds up to one).
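Iterating $\mathbf{x}_{k+1} = \mathbf{P}\mathbf{x}_k$ reproduces the toothpaste numbers above and shows the convergence to the steady state $(0.4, 0.6)$:

```python
P = [[0.7, 0.2],
     [0.3, 0.8]]        # transition matrix; each column adds to 1
x = [0.6, 0.4]          # initial probability vector x_0

def step(P, x):
    """One Markov step: x_{k+1} = P x_k."""
    return [P[0][0] * x[0] + P[0][1] * x[1],
            P[1][0] * x[0] + P[1][1] * x[1]]

for k in range(10):
    x = step(P, x)
print([round(v, 4) for v in x])   # approaching the steady state (0.4, 0.6)
```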
6.6.2 Principal component analysis
Now, if we label $\lambda_1$ the maximum of all the eigenvalues of $\mathbf{S}$, then from Section 10.10.6 we know that
$$\lambda_1 = \max_{\|\mathbf{u}\| = 1} \mathbf{u}^\top \mathbf{S}\mathbf{u}$$
And this maximum is attained when $\mathbf{u} = \mathbf{u}_1$, where $\mathbf{u}_1$ is the eigenvector corresponding to $\lambda_1$. Now, we will try to understand the geometric meaning of $\mathbf{u}_1^\top \mathbf{S}\mathbf{u}_1$. To this end, we confine ourselves to the 2D plane, i.e. $m = 2$, and (for centered data) we can then write
$$\mathbf{u}_1^\top \mathbf{S}\mathbf{u}_1 = \mathbf{u}_1^\top \left(\frac{1}{n-1}\begin{bmatrix} \sum_i x_i^2 & \sum_i x_i y_i\\ \sum_i x_i y_i & \sum_i y_i^2 \end{bmatrix}\right)\mathbf{u}_1 = \frac{1}{n-1}\sum_i \mathbf{u}_1^\top \mathbf{x}_i \mathbf{x}_i^\top \mathbf{u}_1 = \frac{1}{n-1}\sum_i (\mathbf{x}_i^\top \mathbf{u}_1)^2, \quad \mathbf{x}_i = (x_i, y_i)$$
Figure 6.3
If we wish, we can find the second axis given by, what else, the second eigenvector $\mathbf{u}_2$ (corresponding to the second largest eigenvalue $\lambda_2$). Along this axis the variance is maximum among directions orthogonal to $\mathbf{u}_1$. And we can continue with other eigenvectors; thus we can project our data points onto a $k$-dimensional space spanned by $\mathbf{u}_1, \ldots, \mathbf{u}_k$. We put these eigenvectors in a matrix $\mathbf{Q}_k$ (an $m \times k$ matrix); then $\mathbf{Y} = \mathbf{Q}_k^\top \mathbf{A}$ is the transformed data living in a $k$-dimensional space where $k \le m$.
Contents
7.1 Multivariable functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
7.2 Derivatives of multivariable functions . . . . . . . . . . . . . . . . . . . . 524
7.3 Tangent planes, linear approximation and total differential . . . . . . . . 526
7.4 Newton’s method for solving two equations . . . . . . . . . . . . . . . . . 527
7.5 Gradient and directional derivative . . . . . . . . . . . . . . . . . . . . . 528
7.6 Chain rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
7.7 Minima and maxima of functions of two variables . . . . . . . . . . . . . 531
7.8 Integration of multivariable functions . . . . . . . . . . . . . . . . . . . . 540
7.9 Parametrized surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
7.10 Newtonian mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
7.11 Vector calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
7.12 Complex analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
7.13 Tensor analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
In Chapter 4 we studied the calculus of functions of one variable, i.e., functions expressed by $y = f(x)$. Basically, we studied curves in a 2D plane, the tangent to a curve at any point on the curve (1st derivative) and the area under the curve (integral). Now it is time for the real world: functions of multiple variables. We will discuss functions of the form $z = f(x, y)$, known as scalar-valued functions of two variables. A plot of $z = f(x, y)$ gives a surface in a 3D space. Of course, we are going to differentiate $z = f(x, y)$, and thus the partial derivatives $\partial f/\partial x$, $\partial f/\partial y$ naturally emerge. We also compute integrals of $z = f(x, y)$, the double integrals $\iint f(x, y)\,dxdy$, which can be visualized as the volume under the surface $f(x, y)$. And triple integrals $\iiint f(x, y, z)\,dxdydz$ appear when we deal with functions of three variables $f(x, y, z)$.
All of this is merely an extension of the calculus we know from Chapter 4. If there are any difficulties, they are technical rather than conceptual, unlike when we first learned about the instantaneous speed of a moving car.
Then come vector-valued functions, used to describe vector fields. For example, if we want to study the motion of a moving fluid, we need to know the velocity of all the fluid particles. The fluid velocity is a vector field and is mathematically expressed as a vector-valued function of the form $\boldsymbol{v}(x, y) = (g(x, y), h(x, y))$ in two dimensions. The particle position is determined by its coordinates $(x, y)$ and its velocity by two functions: $g(x, y)$ for the horizontal component of the velocity and $h(x, y)$ for the vertical component.
And with vector fields, we shall have vector calculus, which consists of the differential calculus of vector fields and the integral calculus of vector fields. In the differential calculus of vector fields, we shall meet the gradient vector of a scalar field $\nabla f$, the divergence of a vector field $\nabla \cdot \boldsymbol{C}$ and the curl of a vector field $\nabla \times \boldsymbol{C}$. In the integral calculus, we have the line integral $\int_C \boldsymbol{F} \cdot d\boldsymbol{s}$, surface integrals $\int_S \boldsymbol{C} \cdot \boldsymbol{n}\,dA$ and volume integrals. And these integrals are linked together via Green's theorem, Stokes' theorem and Gauss' theorem. They are generalizations of the fundamental theorem of calculus (Table 7.1).
Table 7.1: Integral calculus of vector fields: a summary.

FTC:                          $\int_a^b \dfrac{df}{dx}\,dx = f(b) - f(a)$
FTC of line integrals:        $\int_1^2 \nabla\phi \cdot d\boldsymbol{s} = \phi(2) - \phi(1)$ (along $C$)
Green's theorem:              $\int_S \left( \dfrac{\partial C_y}{\partial x} - \dfrac{\partial C_x}{\partial y} \right) dA = \oint (C_x\,dx + C_y\,dy)$
Stokes' theorem:              $\int_S (\nabla \times \boldsymbol{C}) \cdot \boldsymbol{n}\,dA = \oint \boldsymbol{C} \cdot d\boldsymbol{s}$
Gauss's theorem:              $\int_S \boldsymbol{C} \cdot \boldsymbol{n}\,dA = \int_V \nabla \cdot \boldsymbol{C}\,dV$
This chapter starts with a presentation of multivariable functions in Section 7.1. The derivatives of these functions are discussed in Section 7.2. Section 7.3 presents tangent planes and linear approximations. Then, Newton's method for solving a system of nonlinear equations is treated in Section 7.4. The gradient of a scalar function and the directional derivative are given in Section 7.5. The chain rules are introduced in Section 7.6. The problem of finding the extrema of functions of multiple variables is given in Section 7.7. Two and three dimensional integrals are given in Section 7.8, and parametrized surfaces in Section 7.9. Newtonian mechanics is briefly discussed in Section 7.10. Then comes a big section on vector calculus (Section 7.11). A short introduction to the wonderful field of complex analysis is provided in Section 7.12.
Some knowledge of vector algebra and matrix algebra is required to read this chapter. Section 10.1 in Chapter 10 provides an introduction to vectors and matrices.
I use primarily the following books for the material presented herein:
Figure 7.1: (a) Graph of the surface $z(x, y) = x^2 + y^2$ and (b) its intersection with the plane $y = 1$, which is the curve $z = x^2 + 1$ (drawn with GeoGebra).
Figure 7.2: Graph of the surface $z(x, y) = \sin\sqrt{x^2 + y^2} / \sqrt{x^2 + y^2 + 0.000001}$ and its contour plot, which contains all the level curves. When we jump from the innermost level curve (0.2) to the next one, we 'climb' to a higher point on the surface $z(x, y)$. And we do not go up/down by following a level curve; that is why it is so called. Note that closely spaced level curves indicate a steep graph.
Visualizing functions of two variables is more difficult, as they represent surfaces: surfaces formed by the set of all the points $(x, y, z)$ where $z = f(x, y)$, i.e., the set of points $(x, y, f(x, y))$. We need to use software for this task. For example, Fig. 7.1 shows a plot of the function $z = x^2 + y^2$. In Fig. 7.2 we plot another function $z = f(x, y)$ in which the surface is colored according to the value of $z$. In this way it is easy to see where the highest/lowest points of the surface are. Furthermore, coloring is essentially the only way to visualize $T = f(x, y, z)$. This is because the graph of a function $f(x, y, z)$ of three variables would be the set of points $(x, y, z, f(x, y, z))$ in four dimensions, and it is difficult to imagine what such a graph would look like.
Level curves, level surfaces and level sets. Another way of visualizing a function is through level sets, i.e., the sets of points in the domain of a function where the function is constant. The nice part of level sets is that they live in the same dimensions as the domain of the function. A level set of a function of two variables $f(x, y)$ is a curve in the two-dimensional xy-plane, called a level curve (Fig. 7.1). A level set of a function of three variables $f(x, y, z)$ is a surface in three-dimensional space, called a level surface. For a constant value $c$ in the range of $f(x, y, z)$, the level surface of $f$ is the implicit surface given by the graph of $c = f(x, y, z)$.
Domain, co-domain and range of a function. For the function $z = f(x, y): \mathbb{R}^2 \to \mathbb{R}$, we say that the domain of this function is the entire 2D plane, i.e., $\mathbb{R}^2$. Thus, the domain of a function is the set of all inputs. We also say that the co-domain is $\mathbb{R}$: the co-domain is the set of outputs. And finally, the range of a function is a subset of its co-domain which contains the actual outputs. For example, if $f(x, y) = x^2 + y^2$, then its co-domain is all real numbers, but its range is only the non-negative reals.
If we keep one variable, say $y$, constant, then from $z = f(x, y)$ we obtain a function of a single variable $x$, see Fig. 7.1b. We can then apply the calculus we know from Chapter 4 to this function. That leads to partial derivatives. Using these two partial derivatives, we will have the directional derivative $D_u$ that gives the change in $f(x, y)$ along the direction $\boldsymbol{u}$. Other natural extensions of Chapter 4's calculus are summarized in Table 7.2. We will discuss them, but as you will see, they are merely extensions of the calculus of functions of a single variable.
                    $f(x)$             $f(x, y)$
1st derivative      $df/dx$            partial derivatives $\partial f/\partial x$, $\partial f/\partial y$
2nd derivative      $d^2f/dx^2$        second partial derivatives $\partial^2 f/\partial x^2$, $\partial^2 f/\partial y^2$, $\partial^2 f/\partial x\partial y$, $\partial^2 f/\partial y\partial x$
$$\frac{\partial f}{\partial x} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x, y) - f(x, y)}{\Delta x}, \qquad
\frac{\partial f}{\partial y} = \lim_{\Delta y \to 0} \frac{f(x, y + \Delta y) - f(x, y)}{\Delta y} \tag{7.2.1}$$
In words, the partial derivative w.r.t. $x$ is the ordinary derivative w.r.t. $x$ while holding the other variables ($y$) constant. Sometimes, people write $f_x$ for $\partial f/\partial x$.
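Eq. (7.2.1) translates directly into a numerical recipe: hold one variable fixed and difference in the other. A small sketch (the function `f` is an illustrative example, and central differences are used instead of one-sided ones for accuracy):

```python
# Numerical partial derivatives following Eq. (7.2.1): difference in one
# variable while holding the other constant.
def f(x, y):
    return x**2 * y**2 + x * y + y    # illustrative example function

def fx(f, x, y, h=1e-6):
    # central difference in x; y is held constant
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(f, x, y, h=1e-6):
    # central difference in y; x is held constant
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# Exact values: f_x = 2xy^2 + y and f_y = 2x^2 y + x + 1, so at (1, 2)
# we expect f_x = 10 and f_y = 6.
print(round(fx(f, 1.0, 2.0), 4), round(fy(f, 1.0, 2.0), 4))
```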
And of course, nothing can stop us from moving to second derivatives. From $\partial f/\partial x$ we have $\partial^2 f/\partial x^2$ (the derivative w.r.t. $x$ of $\partial f/\partial x$) and $\partial^2 f/\partial x \partial y$ (the derivative w.r.t. $y$ of $\partial f/\partial x$). And from $\partial f/\partial y$ we have $\partial^2 f/\partial y^2$ and $\partial^2 f/\partial y \partial x$. To summarize, we write

$$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}\right) =: \frac{\partial^2 f}{\partial x^2} \ (\text{or } f_{xx}), \qquad
\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) =: \frac{\partial^2 f}{\partial x \partial y} \ (\text{or } f_{xy})$$
$$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) =: \frac{\partial^2 f}{\partial y \partial x} \ (\text{or } f_{yx}), \qquad
\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial y}\right) =: \frac{\partial^2 f}{\partial y^2} \ (\text{or } f_{yy}) \tag{7.2.2}$$
$f_{xy}$ and $f_{yx}$ are called cross derivatives or mixed derivatives. The origin of partial derivatives was partial differential equations such as the wave equation (Section 8.5.1). Briefly, eighteenth century mathematicians and physicists such as Euler, d'Alembert and Daniel Bernoulli were investigating the vibration of strings (to understand music), and there was a need to consider partial derivatives.
Example 1. Let's consider the function $f(x, y) = x^2 y^2 + xy + y$; its first and second (partial) derivatives are

$$f_x = 2xy^2 + y, \quad f_y = 2x^2 y + x + 1, \quad f_{xx} = 2y^2, \quad f_{yy} = 2x^2, \quad f_{xy} = f_{yx} = 4xy + 1$$

The calculations were nothing special, but one special thing is that $\partial^2 f/\partial x \partial y = \partial^2 f/\partial y \partial x$. Is it luck? Let's see another example.
Example 2. Let's consider the function $f(x, y) = e^{xy^2}$; its first and second derivatives are

$$f_x = y^2 e^{xy^2} \qquad f_{xx} = y^4 e^{xy^2} \qquad f_{xy} = 2y e^{xy^2} + 2xy^3 e^{xy^2}$$
$$f_y = 2xy e^{xy^2} \qquad f_{yy} = 2x e^{xy^2} + 4x^2 y^2 e^{xy^2} \qquad f_{yx} = 2y e^{xy^2} + 2xy^3 e^{xy^2}$$

Again, we get $\partial^2 f/\partial x \partial y = \partial^2 f/\partial y \partial x$. Actually, there is a theorem, called Schwarz's theorem or Clairaut's theorem, which states that the mixed derivatives are equal if they are continuous.
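A numerical check (not a proof) of the symmetry of mixed derivatives for $f = e^{xy^2}$, using nested central differences; the sample point $(0.3, 0.5)$ is arbitrary:

```python
import math

# Check numerically that f_xy = f_yx for f = exp(x*y^2).
def f(x, y):
    return math.exp(x * y * y)

def fxy(x, y, h=1e-4):
    # d/dy of (d/dx f), both by central differences
    fx = lambda xx, yy: (f(xx + h, yy) - f(xx - h, yy)) / (2 * h)
    return (fx(x, y + h) - fx(x, y - h)) / (2 * h)

def fyx(x, y, h=1e-4):
    # d/dx of (d/dy f)
    fy = lambda xx, yy: (f(xx, yy + h) - f(xx, yy - h)) / (2 * h)
    return (fy(x + h, y) - fy(x - h, y)) / (2 * h)

# Exact mixed derivative: 2y e^{xy^2} + 2xy^3 e^{xy^2}, evaluated at (0.3, 0.5)
exact = 2 * 0.5 * math.exp(0.3 * 0.25) + 2 * 0.3 * 0.5**3 * math.exp(0.3 * 0.25)
print(abs(fxy(0.3, 0.5) - fyx(0.3, 0.5)) < 1e-5)   # the two orders agree
print(abs(fxy(0.3, 0.5) - exact) < 1e-4)           # and match the formula above
```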
Our task is now to determine the coefficients A and B in terms of $f_x$ and $f_y$ (we believe in the extension of elementary calculus to multiple dimensions). To determine A, we consider the plane $y = y_0$. The intersection of this plane and the surface $z = f(x, y)$ is a curve in the xz-plane, see Fig. 7.4 for one example. The tangent to this curve at $(x_0, y_0)$ is $z = z_0 + f_x(x_0, y_0)(x - x_0)$, and thus $A = f_x(x_0, y_0)$. Similarly, considering the plane $x = x_0$, we get $B = f_y(x_0, y_0)$. The tangent plane is now written as
Alexis Claude Clairaut (13 May 1713 – 17 May 1765) was a French mathematician, astronomer, and geophysi-
cist. He was a prominent Newtonian whose work helped to establish the validity of the principles and results that
Sir Isaac Newton had outlined in the Principia of 1687.
Linear approximation. Around the point $\boldsymbol{x}_0$, we can approximate the (complicated) function $f$ by a simpler function, the equation of the tangent plane:

$$f(\boldsymbol{x}) \approx f(\boldsymbol{x}_0) + (\boldsymbol{x} - \boldsymbol{x}_0)^\top \nabla f(\boldsymbol{x}_0) \tag{7.3.4}$$

We will discuss the notation $\nabla f$ shortly. Note that vector notation is being used: $\boldsymbol{x} = (x_1, x_2, \ldots, x_n)$ is a point in an n-dimensional space; refer to Section 10.1 in Chapter 10 for a discussion of vectors.
Total differential. On the curve $y = f(x)$, a finite change in $x$ is $\Delta x$, and if we climb on the curve, we move an amount $\Delta y$. But if we move an infinitesimal amount along $x$, that is $dx$, and we follow the tangent to the curve, then we move an amount $dy = f'(x)dx$. Now we do the same thing, but we're now climbing on a surface. Using Eq. (7.3.2), we write

$$dz = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy \tag{7.3.5}$$
With that, we update the solution as $x_0 + \Delta x$ and $y_0 + \Delta y$. And the iterative process is repeated until convergence. Newton's method has been applied to solve practical problems that involve millions of unknowns. In the above, $A^{-1}$ means the inverse of the matrix $A$. We refer to Chapter 10 for details.
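The iteration just described can be sketched for a pair of equations. A minimal implementation, with the $2\times 2$ Jacobian inverted in closed form; the system $x^2 + y^2 = 4$, $xy = 1$ and the starting point $(2, 1)$ are illustrative choices, not from the text:

```python
# Newton's method for two nonlinear equations f(x,y) = 0, g(x,y) = 0.
def newton2(f, g, fx, fy, gx, gy, x, y, tol=1e-12, itmax=50):
    for _ in range(itmax):
        J = [[fx(x, y), fy(x, y)], [gx(x, y), gy(x, y)]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Solve J (dx, dy)^T = -(f, g)^T using the explicit 2x2 inverse
        rx, ry = -f(x, y), -g(x, y)
        dx = (J[1][1] * rx - J[0][1] * ry) / det
        dy = (-J[1][0] * rx + J[0][0] * ry) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Illustrative system: x^2 + y^2 = 4 and x*y = 1, with derivatives supplied
x, y = newton2(lambda x, y: x**2 + y**2 - 4,
               lambda x, y: x * y - 1,
               lambda x, y: 2 * x, lambda x, y: 2 * y,
               lambda x, y: y, lambda x, y: x,
               2.0, 1.0)
print(round(x**2 + y**2 - 4, 8), round(x * y - 1, 8))   # both residuals ≈ 0
```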
Then we're stuck, as there is no concrete expression for $f(x, y)$ for us to manipulate. Here we need a change of view. Note that in the above equation $x_0, y_0$ and $u_1, u_2$ are all fixed numbers; only $h$ is a variable. Thus, we can define a new function of a single variable $g(z)$ as

$$g(z) := f(x(z), y(z)), \qquad x(z) = x_0 + u_1 z; \quad y(z) = y_0 + u_2 z$$
What are we going to do with this new function? We differentiate it, using the chain rule (Section 7.6):

$$g'(z) = f_x \frac{dx}{dz} + f_y \frac{dy}{dz} = f_x u_1 + f_y u_2$$
We're on the right track, as we have obtained $f_x u_1 + f_y u_2$, the suspect that we're looking for. From this, we have

$$g'(0) = f_x(x_0, y_0)u_1 + f_y(x_0, y_0)u_2$$

Now, we just need to prove that $g'(0)$ is nothing but the RHS of Eq. (7.5.1). That is,

$$g'(0) \overset{?}{=} \lim_{h \to 0} \frac{f(x_0 + hu_1, y_0 + hu_2) - f(x_0, y_0)}{h}$$

Indeed, we can compute $g'(0)$ using the definition of the derivative and replacing $g$ with $f$ (we need it to appear now):

$$g'(0) = \lim_{h \to 0} \frac{g(h) - g(0)}{h} = \lim_{h \to 0} \frac{f(x_0 + hu_1, y_0 + hu_2) - f(x_0, y_0)}{h}$$
The French mathematician, theoretical physicist, engineer, and philosopher of science Henri Poincaré (1854–1912) once said 'Mathematics is the art of giving the same name to different things'. Herein we see the same expression of $D_u f$, but as the ordinary derivative of $g(z)$. That's the art. This can also be seen in the following joke:
A team of engineers were required to measure the height of a flag pole. They only
had a measuring tape, and were getting quite frustrated trying to keep the tape
along the pole. It kept falling down, etc. A mathematician comes along, finds out
their problem, and proceeds to remove the pole from the ground and measure it
easily. When he leaves, one engineer says to the other: "Just like a mathematician!
We need to know the height, and he gives us the length!"
We now have a rule to compute the directional derivative for any function. But there is one more thing in its formula: $\partial f/\partial x\,u_1 + \partial f/\partial y\,u_2$ is actually the dot product of the vector $\boldsymbol{u}$ with another vector, not yet named, with components $(f_x, f_y)$.
We now give the rule for the directional derivative of a function $f(x, y, z)$ and define the gradient vector, denoted by $\nabla f$ (read nabla f or del f):

$$D_u f = \nabla f \cdot \boldsymbol{u}, \qquad \nabla f = \frac{\partial f}{\partial x}\boldsymbol{i} + \frac{\partial f}{\partial y}\boldsymbol{j} + \frac{\partial f}{\partial z}\boldsymbol{k} \tag{7.5.2}$$
Refer to Section 10.1.2 if you need a refresher on the concept of the dot product of two vectors. In words, the gradient of a function $f(x, y, z)$ at any point is a 3D vector with components $(f_x, f_y, f_z)$. The gradient vector of a scalar function is significant as it gives us the direction of steepest ascent. That is because the directional derivative indicates the change of $f$ in the direction given by $\boldsymbol{u}$. Among all directions, due to the property of the dot product, this change is maximum when $\boldsymbol{u}$ is parallel to $\nabla f$ (note that $\lVert\boldsymbol{u}\rVert = 1$):

$$D_u f = \nabla f \cdot \boldsymbol{u} = \lVert\nabla f\rVert\,\lVert\boldsymbol{u}\rVert\cos\theta = \lVert\nabla f\rVert\cos\theta$$

where $\theta$ is the angle between $\boldsymbol{u}$ and $\nabla f$; the notation $\lVert\nabla f\rVert$ means the Euclidean length of $\nabla f$.
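The steepest-ascent property is easy to see numerically: sweep unit vectors $\boldsymbol{u}$ around the circle and compare the largest directional derivative with $\lVert\nabla f\rVert$. Here $f = x^2 + y^2$ (so $\nabla f = (2x, 2y)$), and the evaluation point $(1, 2)$ is an arbitrary illustrative choice:

```python
import math

# D_u f = grad(f) . u for f = x^2 + y^2, whose gradient is (2x, 2y)
def Du(x, y, u1, u2):
    return 2 * x * u1 + 2 * y * u2

x0, y0 = 1.0, 2.0
gnorm = math.hypot(2 * x0, 2 * y0)          # ||grad f|| at (1, 2)

# Sweep unit directions u = (cos t, sin t) and take the largest D_u f
best = max(Du(x0, y0, math.cos(t), math.sin(t))
           for t in [i * 2 * math.pi / 3600 for i in range(3600)])
print(abs(best - gnorm) < 1e-4)             # the maximum over u is ||grad f||
```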
Let's see what the gradient vector looks like geometrically. For the function $f(x, y) = x^2 + y^2$, we plot its gradient field $2x\boldsymbol{i} + 2y\boldsymbol{j}$ superimposed with the level curves of $f(x, y)$ in the next figure. We can see that the gradient vectors are perpendicular to the level curves. This is because going along a level curve does not change $f$: $D_u f = \nabla f \cdot \boldsymbol{u} = 0$ when $\boldsymbol{u}$ is perpendicular to $\nabla f$.
So far we have considered functions of two variables only. How
about functions of three variables w D f .x; y; z/? We believe that
at any point P D .x0 ; y0 ; z0 / on the level surface f .x; y; z/ D c the
gradient rf is perpendicular to the surface. By this we mean it is perpendicular to the tangent
to any curve that lies on the surface and goes through P. (See figure.)
It is not true that any three numbers make a vector. For example, we cannot make a vector from $(f_{xx}, f_y, f_z)$. How to prove that $(f_x, f_y, f_z)$ is indeed a vector? We use the fact that the dot product of two vectors is a scalar. To this end, we consider two nearby points $P_1(x, y, z)$ and $P_2(x + \Delta x, y + \Delta y, z + \Delta z)$. Assume that the temperature at $P_1$ is $T_1$ and the temperature at $P_2$ is $T_2$. Obviously $T_1$ and $T_2$ are scalars: they are independent of the coordinate system we use. The difference of temperature $\Delta T$ is also a scalar; it is given by

$$\Delta T = \frac{\partial T}{\partial x}\Delta x + \frac{\partial T}{\partial y}\Delta y + \frac{\partial T}{\partial z}\Delta z$$

Since $\Delta T$ is a scalar, and $(\Delta x, \Delta y, \Delta z)$ is a vector (joining $P_1$ to $P_2$), we can deduce that $(T_x, T_y, T_z)$ is a vector.
Case 3: $f(x, y)$ with $x = x(u, v)$, $y = y(u, v)$. By holding $v$ constant and using the chain rule in case 2, we can write $\frac{\partial f}{\partial u} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial u}$. Doing the same thing for $\frac{\partial f}{\partial v}$, and putting these two together, we have:

$$\frac{\partial f}{\partial u} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial u}, \qquad
\frac{\partial f}{\partial v} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial v} \tag{7.6.2}$$
This rule can be re-written in matrix form as:

$$\begin{bmatrix} \dfrac{\partial f}{\partial u} \\[2mm] \dfrac{\partial f}{\partial v} \end{bmatrix} =
\begin{bmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u} \\[2mm] \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v} \end{bmatrix}
\begin{bmatrix} \dfrac{\partial f}{\partial x} \\[2mm] \dfrac{\partial f}{\partial y} \end{bmatrix} \tag{7.6.3}$$
We can generalize this to the case of a function of $n$ variables, $f(x_1, x_2, \ldots, x_n)$, where the variables depend on $m$ other variables, $x_i = x_i(u_1, u_2, \ldots, u_m)$ for $i = 1, 2, \ldots, n$; then we have

$$\frac{\partial f}{\partial u_j} = \frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial u_j} + \frac{\partial f}{\partial x_2}\frac{\partial x_2}{\partial u_j} + \cdots + \frac{\partial f}{\partial x_n}\frac{\partial x_n}{\partial u_j} \quad (1 \le j \le m)$$
$$= \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\frac{\partial x_i}{\partial u_j} = \frac{\partial f}{\partial x_i}\frac{\partial x_i}{\partial u_j} \quad \text{(Einstein's summation rule on dummy index } i\text{)}$$
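The chain rule (7.6.2) can be sanity-checked numerically: compose $f$ with $x(u, v)$, $y(u, v)$, evaluate $f_x x_u + f_y y_u$ analytically, and compare against a direct finite difference in $u$. The maps $x = uv$, $y = u + v$ and $f = x^2 y$ below are illustrative choices:

```python
# Numerical check of the chain rule: dF/du = f_x * x_u + f_y * y_u,
# with illustrative maps x(u,v) = u*v, y(u,v) = u + v and f(x,y) = x^2 * y.
def F(u, v):
    x, y = u * v, u + v
    return x * x * y          # f composed with the maps

u0, v0 = 1.2, 0.7
x0, y0 = u0 * v0, u0 + v0
# Chain rule: f_x = 2xy, x_u = v; f_y = x^2, y_u = 1
chain = 2 * x0 * y0 * v0 + x0 * x0

h = 1e-6
numeric = (F(u0 + h, v0) - F(u0 - h, v0)) / (2 * h)   # direct dF/du
print(abs(chain - numeric) < 1e-6)
```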
local minimum, local maximum, absolute minimum etc.) the tangent planes are horizontal. So, at a stationary point $(x_0, y_0)$ the two first partial derivatives are zero (check Eq. (7.3.2) for the equation of a plane if this is not clear):

$$f_x(x_0, y_0) = f_y(x_0, y_0) = 0 \tag{7.7.1}$$
Figure 7.5: Graph of a function of two variables z D f .x; y/ with a colorbar representing the height
z. Using a colorbar is common in visualizing functions, especially functions of three variables. We can
quickly spot the highest/lowest points based on the color.
Saddle point. If we consider the function $z = y^2 - x^2$, the stationary point is $(0, 0)$ using Eq. (7.7.1). But this point cannot be a minimum or a maximum point, see Fig. 7.6. We can see that $f(0, 0) = 0$ is a maximum along the x-direction but a minimum along the y-direction. Near the origin the graph has the shape of a saddle, and so $(0, 0)$ is called a saddle point of $f$.
Minimum or maximum or saddle point. For $y = f(x)$, we need to use the second derivative at the stationary point $x_0$, $f''(x_0)$, to decide if $x_0$ is a minimum, maximum or inflection point. How did the second derivative help? It decides whether the curve $y = f(x)$ is below the tangent at $x_0$ (i.e., if $y''(x_0) < 0$ then $x_0$ is a maximum point, as we're going downhill) or above the tangent (i.e., if $y''(x_0) > 0$ then $x_0$ is a minimum point). We believe this reasoning also applies to $f(x, y)$. The difficulty is that we now have three second derivatives $f_{xx}, f_{yy}, f_{xy}$, not one!
The idea is to replace the general function $f(x, y)$ with a quadratic function of the form $ax^2 + bxy + cy^2$, for which finding the extremum is straightforward (using only algebra). The means to do this is the Taylor series expansion of $f(x, y)$, see Section 7.7.2, around the stationary point $(x_0, y_0)$ up to second order (as the bending of a surface depends on the second order terms only):

$$f(x, y) \approx \frac{1}{2}\left(ax^2 + 2bxy + cy^2\right), \quad a = f_{xx}(0, 0),\ b = f_{xy}(0, 0),\ c = f_{yy}(0, 0) \tag{7.7.2}$$
This is called the second derivative test. Fig. 7.7 confirms this test. It is helpful to examine the contour plots of the surfaces in Fig. 7.7 to understand geometrically when a function has a min/max/saddle point. Fig. 7.8 tells us that around a max/min point the level curves are ovals, because going in any direction will decrease/increase the function. On the other hand, around a saddle point the level curves are hyperbolas ($xy = c$).
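The test can be packaged as a tiny classifier taking $a = f_{xx}$, $b = f_{xy}$, $c = f_{yy}$ at the stationary point; below it is applied to the two surfaces discussed above (the second derivatives are supplied analytically):

```python
# Second derivative test: classify a stationary point from
# a = f_xx, b = f_xy, c = f_yy evaluated there.
def classify(a, b, c):
    d = a * c - b * b          # determinant of the Hessian [[a, b], [b, c]]
    if d > 0:
        return "minimum" if a > 0 else "maximum"
    if d < 0:
        return "saddle"
    return "inconclusive"      # the test gives no answer when d = 0

# z = x^2 + y^2 at (0,0): f_xx = 2, f_xy = 0, f_yy = 2
print(classify(2, 0, 2))       # minimum
# z = y^2 - x^2 at (0,0): f_xx = -2, f_xy = 0, f_yy = 2 (the saddle of Fig. 7.6)
print(classify(-2, 0, 2))      # saddle
```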
This matrix is special, as it stores all the second derivatives of $f(x, y)$. It must have a special name: it is called the Hessian matrix, named after the German mathematician Ludwig Otto Hesse (22 April 1811 – 4 August 1874).
The second order Taylor polynomial has the general form $T(x, y) = a + bx + cy + dxy + ex^2 + fy^2$. We find the coefficients $a, b, c, \ldots$ by matching the function value at $(0, 0)$ and all derivatives up to second order at $(0, 0)$. The same old idea we met in univariate calculus:
Note that we have not used the conventional $x, y, z$; instead we have used $x_1, x_2, x_3$. This is because if we generalize our quadratic forms to the case of, let's say, 100 variables, we will run out of symbols using $x, y, z, \ldots$

Now, we re-write this quadratic form $Q(x_1, x_2) = a_1 x_1^2 + a_2 x_1 x_2 + a_3 x_2^2$ as follows

$$Q(x_1, x_2) = a_{11} x_1^2 + a_{12} x_1 x_2 + a_{21} x_2 x_1 + a_{22} x_2^2 = \sum_{i=1}^{2}\sum_{j=1}^{2} a_{ij} x_i x_j$$
So we have just demonstrated that any quadratic form can be expressed in the form $\boldsymbol{x}^\top A \boldsymbol{x}$. Let's do that for the particular quadratic form $Q(x_1, x_2) = x_1^2 + 5x_1x_2 + 3x_2^2$:

$$Q(x_1, x_2) = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 4 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 1 & 5/2 \\ 5/2 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

It is certain that we prefer the second matrix, which is symmetric (i.e., $a_{12} = a_{21} = 5/2$), over the non-symmetric one (the first). So, any quadratic form can be expressed in the form $\boldsymbol{x}^\top A \boldsymbol{x}$ where $A$ is a symmetric matrix. We need a proof, because we used the strong words any quadratic form while we just had one example.
Proof. Suppose $\boldsymbol{x}^\top B \boldsymbol{x}$ is a quadratic form where $B$ is not symmetric. Since it is a scalar, we get the same thing when we transpose it, i.e., $\boldsymbol{x}^\top B \boldsymbol{x} = \boldsymbol{x}^\top B^\top \boldsymbol{x}$, thus:

$$\boldsymbol{x}^\top B \boldsymbol{x} = \frac{1}{2}\left(\boldsymbol{x}^\top B \boldsymbol{x} + \boldsymbol{x}^\top B^\top \boldsymbol{x}\right) = \boldsymbol{x}^\top \left[\frac{1}{2}\left(B + B^\top\right)\right] \boldsymbol{x}$$

and $\frac{1}{2}(B + B^\top)$ is symmetric.
Why quadratic forms? Because, for unknown reasons, they show up again and again in mathematics, physics, engineering and economics. The simplest example is $\frac{1}{2}kx^2$, which is the energy of a spring of stiffness $k$.
If you’re not familiar with matrices, refer to Chapter 10.
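The symmetrization argument is concrete enough to check by hand: replacing $B$ by $A = \frac{1}{2}(B + B^\top)$ leaves $\boldsymbol{x}^\top B \boldsymbol{x}$ unchanged for every $\boldsymbol{x}$. Below this is verified for the $2\times 2$ example above:

```python
# Check that x^T B x is unchanged when B is replaced by its symmetric part,
# for the 2x2 example in the text: Q = x1^2 + 5 x1 x2 + 3 x2^2.
def quad(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

B = [[1, 1], [4, 3]]              # non-symmetric matrix for the quadratic form
A = [[1, 5 / 2], [5 / 2, 3]]      # its symmetric part, (B + B^T)/2

for x in [(1.0, 0.0), (0.5, -2.0), (3.0, 1.5)]:
    assert quad(B, x) == quad(A, x)   # identical for every sample x

print(quad(A, (1.0, 1.0)))        # Q(1, 1) = 1 + 5 + 3 = 9.0
```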
At the touching point of two curves, the tangents are the same. In other words, the normal vectors are parallel:

$$\nabla f(x, y) = \lambda \nabla g(x, y), \quad \text{or} \quad \begin{cases} f_x = \lambda g_x \\ f_y = \lambda g_y \end{cases} \tag{7.7.8}$$

where $\lambda$ is a real number. These are two equations to solve for $x, y, \lambda$. But do not forget the constraint $x^2 + y^2 = 1$. Three equations for three unknowns. Perfect.
Without constraints, the necessary condition for a function $f(x, y)$ to be stationary at $(x_0, y_0)$ is $\nabla f(x_0, y_0) = \boldsymbol{0}$. With the constraint $g(x, y) = 0$, we have instead Eq. (7.7.8). With a bit of algebra, we can recover the old criterion of zero gradient. Let's introduce a new function $L(x, y, \lambda)$ as

$$L(x, y, \lambda) := f(x, y) - \lambda g(x, y) \quad \Rightarrow \quad \begin{cases} L_x = f_x - \lambda g_x \\ L_y = f_y - \lambda g_y \end{cases} \tag{7.7.9}$$

The condition $\nabla L = \boldsymbol{0}$ reproduces Eq. (7.7.8) together with $g(x, y) = 0$. So, by adding one more unknown to the problem, and building a new function $L(x, y, \lambda)$, Lagrange turned a constrained minimization problem into an unconstrained one! $\lambda$ is called a Lagrange multiplier, and this method is known as the Lagrange multiplier method. Once Eq. (7.7.9) has been solved, we possibly get a few solutions $(\bar{x}_i, \bar{y}_i)$; the maximum of $f(\bar{x}_i, \bar{y}_i)$ is the maximum we're looking for, and the minimum of $f(\bar{x}_i, \bar{y}_i)$ is the minimum we sought.
As an example, we consider the problem given in Fig. 7.9. Eq. (7.7.8) and the constraint give us the following system of equations to solve for $x, y, \lambda$:

$$2x = 2\lambda x, \qquad 4y = 2\lambda y, \qquad x^2 + y^2 = 1$$

From the first equation we either get $x = 0$ (which leads to $y = \pm 1$ from the constraint) or $\lambda = 1$. From the second equation we obtain either $y = 0$ (which leads to $x = \pm 1$ from the constraint) or $\lambda = 2$. So, we have 4 points $(0, 1; 2)$, $(0, -1; 2)$, $(-1, 0; 1)$, $(1, 0; 1)$. These points are exactly the ones we found graphically in Fig. 7.9b. Evaluating $f$ at these four points:
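The evaluation step is a one-liner in code. Note an assumption: the system $2x = 2\lambda x$, $4y = 2\lambda y$ suggests $f = x^2 + 2y^2$ (the function behind Fig. 7.9 is not shown in this excerpt), so that is what is used here:

```python
# Evaluate f at the four candidate points produced by the Lagrange
# multiplier method on the circle x^2 + y^2 = 1.
# Assumption: f = x^2 + 2*y^2, reconstructed from f_x = 2x, f_y = 4y.
f = lambda x, y: x * x + 2 * y * y

candidates = [(0, 1), (0, -1), (-1, 0), (1, 0)]
values = {p: f(*p) for p in candidates}
print(max(values.values()), min(values.values()))   # maximum 2 at (0, ±1), minimum 1 at (±1, 0)
```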
Two constraints. After one constraint come, of course, two constraints, and then multiple constraints. For two constraints, we have to move to functions of three variables. Otherwise, two constraints $g(x, y) = c_1$ and $h(x, y) = c_2$ already decide what the critical point is. Nothing left for Lagrange to do!
We start with a concrete example. Consider the function $f(x, y, z) = x^2 + y^2 + z^2$ and two constraints $g(x, y, z) = x + y + z = 9$ and $h(x, y, z) = x + 2y + 3z = 20$. Find the maximum/minimum of $f$. The two constraints are two planes and they meet at a line $C$. Now we consider different level surfaces of $f(x, y, z) = x^2 + y^2 + z^2 = c$; they are spheres of radius $\sqrt{c}$. When we increase $c$ from 0 we have expanding spheres, and one of them will touch the line $C$ at a point $P$. At that point $P$, we have:

$$\nabla f = \lambda_1 \nabla g + \lambda_2 \nabla h \quad \Rightarrow \quad \begin{cases} 2x = \lambda_1 + \lambda_2 \\ 2y = \lambda_1 + 2\lambda_2 \\ 2z = \lambda_1 + 3\lambda_2 \end{cases}$$
Inequality constraints.
Proof of the AM-GM inequality. Still remember the AM-GM inequality, which states that

$$\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1 x_2 \cdots x_n}$$

and which was proved by Cauchy with his ingenious backward-forward induction method? Well, with Lagrange and calculus, the proof is super easy. We demonstrate the proof for $n = 3$.
We consider the following function, with its constraint:

$$f(\boldsymbol{x}) = \sqrt[3]{x_1 x_2 x_3} \quad \text{s.t.} \quad x_1 + x_2 + x_3 = c$$
And then, we can compute the derivatives of $L$ with respect to $x_1, x_2, x_3$; then $\nabla L = \boldsymbol{0}$ gives us

$$L_{x_1} = \frac{1}{3}(x_1 x_2 x_3)^{-2/3}\,x_2 x_3 - \lambda = 0$$
$$L_{x_2} = \frac{1}{3}(x_1 x_2 x_3)^{-2/3}\,x_1 x_3 - \lambda = 0$$
$$L_{x_3} = \frac{1}{3}(x_1 x_2 x_3)^{-2/3}\,x_1 x_2 - \lambda = 0$$

Solving this system of equations (easy) gives us $x_1 = x_2 = x_3$; then from the constraint $x_1 + x_2 + x_3 = c$, we get:

$$x_1 = x_2 = x_3 = \frac{c}{3}$$
Therefore, the maximum of $f(\boldsymbol{x})$ is $\sqrt[3]{(c/3)^3}$, which is $c/3$, or $\frac{1}{3}(x_1 + x_2 + x_3)$. In other words,

$$\sqrt[3]{x_1 x_2 x_3} \le \frac{x_1 + x_2 + x_3}{3}$$
Phu Nguyen, Monash University © Draft version
For 1D integrals we divide the interval $[a, b]$ into many sub-intervals and compute the area as a sum of the areas of all the rectangles (Fig. 7.10). We do the same thing here: the region $R$ is divided into many rectangles $\Delta x_i \Delta y_i$. For a point $(x_i, y_i)$ inside such a rectangle, we compute the height $f(x_i, y_i)$ of a box (the 3D counterpart of a rectangle in 2D). Then, the volume is approximated as the sum of the volumes of all these boxes, that is, the sum of $f(x_i, y_i)\Delta x_i \Delta y_i$. When there are infinitely many such boxes, we get the true volume and define it as a double integral:

$$\text{volume} = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i, y_i)\Delta x_i \Delta y_i = \iint_R f(x, y)\,dxdy \tag{7.8.1}$$
To compute a double integral we proceed as shown in Fig. 7.11. First, we consider a plane perpendicular to the x axis and fix it; this plane intersects the 3D region whose volume we're trying to determine. The area of the cross section (crossed area in the referred figure) is $A(x) = \int f(x, y)\,dy$. Multiplying this area by the thickness $dx$ we get a volume $A(x)dx$, and integrating this we get the sought-for volume. For the rectangle $R$ given by $0 \le x \le b$, $0 \le y \le a$:

$$\iint_R f(x, y)\,dxdy = \int_0^b \left[\int_0^a f(x, y)\,dy\right]dx = \int_0^a \left[\int_0^b f(x, y)\,dx\right]dy \tag{7.8.2}$$

And of course, we can do it the other way around; that is why I also wrote the second formula. And that is how mathematicians arrive at the notation with two integral signs. Note that the process has been simplified by considering a rectangle for $R$. In the general case, the integration limits are functions of $y$ and $x$. The next example is going to show how to handle this situation.
Example 7.1
Compute the volume under $f(x, y) = x - 2y$ over the base triangle (see Fig. 7.11b). Using Eq. (7.8.2), we can write:

$$\iint_R (x - 2y)\,dxdy = \int_0^1 \left[\int_0^{1-x} (x - 2y)\,dy\right]dx = \int_0^1 \left[xy - y^2\right]_0^{1-x}dx$$

And finally,

$$\iint_R (x - 2y)\,dxdy = \int_0^1 (-2x^2 + 3x - 1)\,dx = -\frac{1}{6}$$
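The value $-1/6$ can be checked numerically by summing $f$ over small cells, keeping only cells inside the triangle, just as in the definition (7.8.1):

```python
# Midpoint-rule check of Example 7.1: integral of (x - 2y) over the
# triangle 0 <= x <= 1, 0 <= y <= 1 - x. The exact answer is -1/6.
N = 400
h = 1.0 / N
total = 0.0
for i in range(N):
    for j in range(N):
        x = (i + 0.5) * h
        y = (j + 0.5) * h
        if x + y < 1.0:                    # cell centre inside the triangle
            total += (x - 2 * y) * h * h   # f * cell area
print(abs(total - (-1 / 6)) < 0.01)        # agrees with -1/6 up to grid error
```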
Not hard, but still a bit of work. Using polar coordinates (which are suitable for circles) is so much easier. Using polar coordinates, double integrals are given by

$$\iint_R f(x, y)\,dxdy = \iint_S f(r\cos\theta, r\sin\theta)\,r\,drd\theta \tag{7.8.3}$$
For example, for a half disk of unit radius,

$$m = \iint r\,drd\theta = \left[\int_0^1 r\,dr\right]\left[\int_0^\pi d\theta\right] = \frac{\pi}{2}$$
As another example of the usefulness of polar coordinates, let's consider the following integral:

$$A = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$$

which was computed using polar coordinates; see Section 5.11.4 for details.
Figure 7.13: Spherical coordinates. The differential volume $dxdydz$ becomes $r^2\sin\phi\,dr\,d\phi\,d\theta$.
$$U_{\text{sphere}} = -\iiint \frac{Gm\bar{\rho}\,dV}{q} = -Gm\bar{\rho}\iiint \frac{r^2\sin\phi\,dr\,d\phi\,d\theta}{q} \tag{7.8.4}$$

where in the second equality we used spherical coordinates. Using the law of cosines, or the generalized Pythagorean theorem, we can compute $q$ in terms of $D$, $r$ and $\phi$:

$$q^2 = D^2 + r^2 - 2Dr\cos\phi$$

With the substitution $u = q^2$, so that $du = 2Dr\sin\phi\,d\phi$, the integral becomes

$$U_{\text{sphere}} = -\frac{Gm\bar{\rho}}{2D}\iiint \frac{r\,du\,d\theta\,dr}{\sqrt{u}}
= -\frac{Gm\bar{\rho}}{2D}\int_0^R r\left[\int \frac{du}{\sqrt{u}}\right]\left[\int_0^{2\pi} d\theta\right]dr \tag{7.8.6}$$
$$= -\frac{Gm\bar{\rho}}{D}(2\pi)\int_0^R r\left[(D + r) - (D - r)\right]dr$$

where for the integral $\int du/\sqrt{u}$, the limits are $(D - r)^2$ (at $\phi = 0$) and $(D + r)^2$ (at $\phi = \pi$). Finally, we do the integration along the $r$ direction:

$$U_{\text{sphere}} = -\frac{Gm\bar{\rho}}{D}(4\pi)\int_0^R r^2\,dr = -\frac{Gm\bar{\rho}}{D}(4\pi)\frac{R^3}{3} = -\frac{GMm}{D} \tag{7.8.7}$$
bounded by the four straight lines shown in Fig. 7.14 (left). Even though it is possible to directly calculate this integral, it is tedious. We can use a change of variables, as shown in the figure, to simplify the integral. Indeed, the integration limits are then constants.
Another example of a change of variables is given in Fig. 7.15. It demonstrates that straight edges in the uv plane can be transformed into curves in the xy plane. Actually, we have seen a change of variables before: double integrals using polar coordinates.
We again believe in patterns and search for a formula for double integrals based on single-variable ones.
Figure 7.15: Straight edges in the uv plane can be transformed to curved edges in the xy plane.
And our task is to find the unknown red box which plays the role of $g'(u)$ when we replace $dx$ by $du$. This quantity is denoted by $J_{uv}$ and called the Jacobian of the transformation from $uv$ to $xy$. What should $J_{uv}$ be? From the 1D integrals, we guess that $J_{uv}$ should be a function of $f_u, f_v, g_u, g_v$, i.e., of all the first derivatives. If you know linear algebra, precisely linear transformations (Section 10.6), you'll see that $J_{uv}$ is the determinant of a matrix containing all these 1st derivatives. In what follows we explain where this matrix comes from. We note in passing that, for completeness, we have included triple integrals, but we do not have to consider double and triple integrals separately: what works for double integrals will work for triple integrals.
Local linearity of transformations and the Jacobian matrix. Let's come back to the transformation in Fig. 7.14. That is a linear transformation from a square in the uv plane to a rhombus in the xy plane (check Section 10.6 if that term is new to you), and the equation of the transformation is

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & -1/4 \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} \tag{7.8.8}$$

Thus, from linear algebra, the area of the rhombus is the area of the square (which is 16) scaled by the absolute value of the determinant of the transformation matrix (which is $|-1/4|$); thus the area is 4, which is correct.
But most usual transformations are nonlinear (Fig. 7.15 is one of them: lines are transformed to curves). In that case, how can we use linear transformations to find the area? The answer is: linear approximations turn a curve into a line (tangent) and a square into a parallelogram; then the theory of linear transformations can be used.
Now we consider small changes in $u$ and $v$, namely $\Delta u$ and $\Delta v$, and see how $x$ and $y$ change (here $x = f(u, v) = u^2 - v^2$ and $y = g(u, v) = 2uv$):

$$\begin{bmatrix} (u + \Delta u)^2 - (v + \Delta v)^2 \\ 2(u + \Delta u)(v + \Delta v) \end{bmatrix} - \begin{bmatrix} u^2 - v^2 \\ 2uv \end{bmatrix}
\approx \begin{bmatrix} 2u\Delta u - 2v\Delta v \\ 2u\Delta v + 2v\Delta u \end{bmatrix}
= \begin{bmatrix} 2u & -2v \\ 2v & 2u \end{bmatrix}\begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix}$$

As can be seen, since for infinitesimal changes $(\Delta u)^2$ and $(\Delta v)^2$ are negligible, we have obtained an approximation to the changes in $f$ and $g$ in terms of a matrix containing the four partial derivatives: $f_u = 2u$, $f_v = -2v$, $g_u = 2v$, $g_v = 2u$. This matrix is special and it has a name: the Jacobian matrix, named after the German mathematician Carl Gustav Jacob Jacobi (1804–1851). Generally, we then have:
$$\begin{bmatrix} dx \\ dy \end{bmatrix} = \begin{bmatrix} f_u & f_v \\ g_u & g_v \end{bmatrix}\begin{bmatrix} du \\ dv \end{bmatrix} \tag{7.8.9}$$
where the matrix is the Jacobian matrix. Globally the transformation is nonlinear but locally
(when we zoom in) the transformation is linear.
To find $J_{uv}$, consider a point $(u_0, v_0)$ and a rectangle of sides $du$ and $dv$ with one vertex at $(u_0, v_0)$, see Fig. 7.16. The vector $(du, 0)$ becomes $(f_u du, g_u du)$ according to Eq. (7.8.9), whereas the vector $(0, dv)$ becomes $(f_v dv, g_v dv)$. The rectangle in the uv-plane has an area of $dudv$, whereas the transformed rectangle, which is a parallelogram, has an area of $|f_u g_v - f_v g_u|dudv$. Thus,

$$J_{uv} = \left|\det\begin{bmatrix} f_u & f_v \\ g_u & g_v \end{bmatrix}\right| = \left|\frac{\partial f}{\partial u}\frac{\partial g}{\partial v} - \frac{\partial g}{\partial u}\frac{\partial f}{\partial v}\right| \tag{7.8.10}$$

As the determinant can be positive, zero or negative, we need to use its absolute value.
OK. How can we be sure that our $J_{uv}$ is correct? The answer is easy: just apply it to a case that we're familiar with: polar coordinates. In polar coordinates we use $r, \theta$, which play the role of $u, v$:

$$x = r\cos\theta, \quad y = r\sin\theta \quad \Longrightarrow \quad J_{uv} = \left|\det\begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}\right| = r$$

Thus $dxdy = r\,drd\theta$.
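The same check can be done without any calculus by hand: build the Jacobian of the polar map from finite differences and compare its determinant with $r$. The sample point $(r_0, \theta_0) = (1.7, 0.6)$ is arbitrary:

```python
import math

# Finite-difference check that the Jacobian determinant of the polar map
# x = r cos(theta), y = r sin(theta) equals r.
def xmap(r, t): return r * math.cos(t)
def ymap(r, t): return r * math.sin(t)

r0, t0, h = 1.7, 0.6, 1e-6
xr = (xmap(r0 + h, t0) - xmap(r0 - h, t0)) / (2 * h)   # dx/dr
xt = (xmap(r0, t0 + h) - xmap(r0, t0 - h)) / (2 * h)   # dx/dtheta
yr = (ymap(r0 + h, t0) - ymap(r0 - h, t0)) / (2 * h)   # dy/dr
yt = (ymap(r0, t0 + h) - ymap(r0, t0 - h)) / (2 * h)   # dy/dtheta
det = xr * yt - xt * yr
print(abs(det - r0) < 1e-8)    # |det J| = r, as derived above
```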
We come back to the problem in Fig. 7.14. The determinant of the transformation is given by

$$\det\begin{bmatrix} 1/2 & 1/2 \\ 1/4 & -1/4 \end{bmatrix} = -\frac{1}{4}$$
For 2D integrals $J_{uv}$ is related to the determinant of a $2 \times 2$ matrix, and thus for 3D integrals it is related to the determinant of a $3 \times 3$ matrix containing all nine first partial derivatives:

$$J_{uvw} = \left|\det\begin{bmatrix} f_u & f_v & f_w \\ g_u & g_v & g_w \\ h_u & h_v & h_w \end{bmatrix}\right| \tag{7.8.11}$$
which should not be a surprise. And of course we can check this result by applying it to triple integrals using spherical coordinates. We don't provide the details; one just needs to know how to compute the determinant of a $3 \times 3$ matrix.
Using Eq. (7.8.12) and Newton's third law, which states that $\boldsymbol{F}_{12} = -\boldsymbol{F}_{21}$, these two forces cancel out, leaving us only the external forces in $\dot{\boldsymbol{p}}$:
we then can write the system's momentum as if all the mass were concentrated at the center of mass:

$$\boldsymbol{p} = m_1\dot{\boldsymbol{r}}_1 + m_2\dot{\boldsymbol{r}}_2 + \cdots + m_n\dot{\boldsymbol{r}}_n = M\dot{\boldsymbol{R}}_{CM}, \qquad M = \sum_i m_i \tag{7.8.16}$$
In words, $x_{CM}$ is the average of all the x's, if the masses are equal. Now, suppose we have only two masses, one of mass $2m$ and the other of mass $m$. Then we have $x_{CM} = (2x_1 + 1x_2)/3$. In other words, every mass is counted a number of times proportional to its mass. From that it can be seen that $x_{CM}$ lies between the smallest x and the largest x. The same holds for $y_{CM}$ and $z_{CM}$. Thus, the CM lies within the envelope of the masses (Fig. 7.17).
Figure 7.17: The center of mass of n masses lies within the envelope of the masses.
Center of mass of solids. What is the center of mass of a continuous object; e.g. a steel disk?
Of course, integral calculus is the answer. The sums in Eq. (7.8.17) become integrals

x_{CM} = \frac{1}{M}\iiint x\,\underbrace{\rho\,(dx\,dy\,dz)}_{dm},\qquad M = \iiint \rho\,(dx\,dy\,dz)   (7.8.18)

where \rho is the density. Thus for objects with a density that does not vary from point to point, the
geometric centroid and the center of mass coincide.
Recall that for a particle of mass m, its moment of inertia with respect to an axis is I = mr^2,
see Section 10.1.5. Extending this to a system of N particles, we have I = \sum_\alpha m_\alpha r_\alpha^2, and
for a continuum we have dI = r^2\,dm, and thus:
I_z = \int_B \rho\,(x^2 + y^2)\,dV = \iiint \rho\,(x^2 + y^2)\,dx\,dy\,dz   (7.8.19)
And this is the moment of inertia of a solid B when it rotates about the z-axis. Similarly, with
respect to the other two axes, we have:

I_x = \int_B \rho\,(y^2 + z^2)\,dV,\qquad I_y = \int_B \rho\,(x^2 + z^2)\,dV   (7.8.20)
Now, if we consider plane figures, i.e., objects whose thickness is negligible compared
with the other dimensions, we can set z = 0 in Eq. (7.8.20), and thus

I_z = \int_B (x^2 + y^2)\,dA = \int_B x^2\,dA + \int_B y^2\,dA = I_y + I_x   (7.8.21)
which are known as second moments of area. The second moment of area is a measure of
the 'efficiency' of a shape in resisting bending caused by loading perpendicular to the beam axis
(Fig. 7.18). It appeared for the first time in the Euler–Bernoulli theory of slender beams.
Figure 7.18: The second moment of area is a measure of the 'efficiency' of a shape to resist bending
caused by loading perpendicular to the beam axis.
Example 1. Determine the center of gravity and moment of inertia of a semi-circular disk of
radius a made of a material with a constant density \rho.
First we compute the mass. It is given by (using Eq. (7.8.18) and polar coordinates)

M = \iint \rho\, r\,d\theta\,dr = \rho \int_0^a r\,dr \int_0^{\pi} d\theta = \frac{\pi\rho a^2}{2}
Then we determine the center of gravity (due to symmetry, only the y-component is non-zero):

y_{CM} = \frac{1}{M}\iint \rho\, y\, r\,d\theta\,dr = \frac{\rho}{M}\iint r^2\sin\theta\,d\theta\,dr
       = \frac{\rho}{M}\int_0^a r^2\,dr \int_0^{\pi} \sin\theta\,d\theta = \frac{4a}{3\pi}
Fig. 7.19 presents a summary of how to determine the center of mass for discontinuous and
continuous objects. Particularly interesting is the way the center of mass of a compound
object is determined. In Fig. 7.19(d), we have an object consisting of two rectangles. As we can
treat each rectangle as a point mass with its center of mass already known (Fig. 7.19(c)), the CM
of the compound object can be computed using Eq. (7.8.17). As the thickness t is constant, we
can convert from mass to area A and obtain the following equation
x_{CM} = \frac{\sum_i \bar{x}_i A_i}{\sum_i A_i}   (7.8.23)
for the CM of any 2D compound solid. The shape in Fig. 7.19(d) is the cross section of a T-beam
(or tee beam), used in civil engineering. Thus, civil engineers use Eq. (7.8.23) frequently.
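As an illustration of Eq. (7.8.23), here is a minimal sketch (ours, with made-up dimensions) that locates the centroid of a T-section built from a flange and a web:

```python
# Centroid of a T-section via Eq. (7.8.23); the dimensions are invented for
# illustration: a 100x20 flange sitting on top of a 20x80 web (units: mm).
# y is measured from the bottom of the web.
parts = [
    # (area, y-coordinate of the part's own centroid)
    (100 * 20, 80 + 10),  # flange: area 2000, centroid at y = 90
    (20 * 80, 40),        # web: area 1600, centroid at y = 40
]
A_total = sum(A for A, _ in parts)
y_cm = sum(A * y for A, y in parts) / A_total
print(y_cm)  # ≈ 67.8 mm, closer to the heavy flange
```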
Figure 7.19: Center of mass: from particles (a) to continuous objects (b) and compound objects (d).
In many cases, we remove material from a shape to make a new one, see Fig. 7.20. In that
case, the CM of the object is given by
x_{CM} = \frac{\bar{x}_1 A_1 - \bar{x}_2 A_2}{A_1 - A_2}   (7.8.24)
Example 2. Determine the moment of inertia of a rod of length L with \rho = 1 with respect to
various points: the left extreme A and the center O (Fig. 7.21). Can you guess which case has
the lower moment of inertia?
As the rod is very thin, we only have 1D integrals. So, the moments of inertia w.r.t. A and O
are

I_A = \int_0^L x^2\,dx = \frac{L^3}{3},\qquad I_O = \int_{-L/2}^{L/2} x^2\,dx = \frac{L^3}{12}   (7.8.25)
And the fact that I_A > I_O indicates that it is easier to turn the rod around O, its center of
gravity. This is consistent with our daily experience.
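The two integrals in Eq. (7.8.25) are easy to check numerically; the sketch below (ours, using L = 2 and a simple midpoint rule) reproduces L^3/3 and L^3/12:

```python
# Numerical check of Eq. (7.8.25) with L = 2 and rho = 1 (our own sketch).
def integrate(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

L = 2.0
I_A = integrate(lambda x: x**2, 0, L)          # about the end A
I_O = integrate(lambda x: x**2, -L / 2, L / 2) # about the center O
print(I_A, L**3 / 3)   # both ≈ 2.6667
print(I_O, L**3 / 12)  # both ≈ 0.6667
```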
Now, if we ask the following question, various interesting things show up: about
which point along the rod is the moment of inertia minimum? Let's denote by I(t) the moment of
inertia w.r.t. a point located at a distance t from A. We can compute I(t) as
I(t) = \int_0^L (x - t)^2\,dx
     = \int_0^L x^2\,dx + t^2\int_0^L dx - 2\int_0^L xt\,dx   (7.8.26)
     = \frac{L^3}{3} + t^2 L - tL^2
And differential calculus helps us find t such that I(t) is minimum:

\frac{dI(t)}{dt} = 2tL - L^2 = 0 \;\Longrightarrow\; t = \frac{L}{2}   (7.8.27)
The first thing to notice is that instead of integrating and then differentiating, we can do the reverse. That
is, we differentiate the function inside the integral and then do the integration:

\frac{dI(t)}{dt} = \int_0^L \frac{\partial (x - t)^2}{\partial t}\,dx = -2\int_0^L (x - t)\,dx = -L^2 + 2tL
And we have got the same result. So, there must be a theorem about this. It is called the Leibniz
rule for differentiating under the integral sign:

I(t) = \int_a^b f(x, t)\,dx \;\Longrightarrow\; \frac{dI(t)}{dt} = \int_a^b \frac{\partial f(x, t)}{\partial t}\,dx   (7.8.28)
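We can check the Leibniz rule numerically on the rod example, f(x, t) = (x - t)^2; the sketch below (ours, with L = 1 and t = 0.3) computes dI/dt in both orders:

```python
# Checking the Leibniz rule on f(x, t) = (x - t)^2 with L = 1, t = 0.3
# (a sketch of ours; the numbers are arbitrary).
def integrate(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

L, t, h = 1.0, 0.3, 1e-6
# integrate, then differentiate: I(t) = integral of (x - t)^2, then dI/dt
I = lambda t: integrate(lambda x: (x - t) ** 2, 0, L)
dI_outside = (I(t + h) - I(t - h)) / (2 * h)
# differentiate, then integrate: integral of d/dt (x - t)^2 = -2(x - t)
dI_inside = integrate(lambda x: -2 * (x - t), 0, L)
print(dI_outside, dI_inside)  # both ≈ 2tL - L^2 = -0.4
```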
Parallel axis theorem. In the problem of the calculation of the moment of inertia of a rod
of length L, we have I_A = L^3/3 and I_O = L^3/12. If we ask this question: what is the relation
between these two quantities, we will get something interesting. Let’s first compute the difference
between them:
I_A - I_O = \frac{L^3}{3} - \frac{L^3}{12} = \frac{L^3}{4}
And this difference must depend on the distance between A and O, which is L/2; thus we write

I_A - I_O = \frac{L^3}{4} = \left(\frac{L}{2}\right)^2 L
Now, we anticipate the following result: if O' is at a distance d from the CM O, the moment of
inertia w.r.t. O' is given by:

I_{O'} = I_O + d^2 L
Next, we extend this result to 3D objects and obtain the so-called parallel axis theorem, which
facilitates the calculation of the moment of inertia about an arbitrary axis.
Figure 7.22: Parallel axis theorem: two parallel axes, one passing through the CM and the other is a
distance d away.
We consider an object B with density \rho (Fig. 7.22). A set of coordinate axes is used where
O is at the origin. In this coordinate system, the center of mass of the object is located at
(x_{CM}, y_{CM}, z_{CM}). Let I_{CM} be the moment of inertia of B with respect to an axis passing through
the CM. Now we determine the moment of inertia w.r.t. an axis passing through O:
I_z = \int_B \rho\,(x^2 + y^2)\,dV
    = \int_B \rho\,\left[(x_{CM} + x')^2 + (y_{CM} + y')^2\right] dV
    = \int_B \rho\,(x_{CM}^2 + y_{CM}^2)\,dV + \int_B \rho\,(x'^2 + y'^2)\,dV + 2x_{CM}\int_B \rho\,x'\,dV + 2y_{CM}\int_B \rho\,y'\,dV
    = Md^2 + I_{CM} + 0 + 0   (7.8.29)

I_z = I_{CM} + Md^2   (7.8.30)
You can find ICM for many common solids in textbooks, and from that the parallel axis theorem
allows us to compute the moment of inertia about an arbitrary axis.
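The parallel axis theorem is easy to verify by brute force; the sketch below (ours: a unit-density 2 x 1 rectangular plate chopped into a fine grid) compares I about an axis through a corner with I_CM + Md^2:

```python
# Brute-force check of the parallel axis theorem (7.8.30) for a unit-density
# rectangular plate (our sketch; the plate size and grid are arbitrary).
a, b, n = 2.0, 1.0, 400          # plate spans [0, a] x [0, b]
dA = (a / n) * (b / n)
cells = [((i + 0.5) * a / n, (j + 0.5) * b / n)
         for i in range(n) for j in range(n)]
M = len(cells) * dA
xc = sum(x for x, _ in cells) * dA / M   # center of mass: (1, 0.5)
yc = sum(y for _, y in cells) * dA / M
I_cm = sum((x - xc)**2 + (y - yc)**2 for x, y in cells) * dA
I_corner = sum(x**2 + y**2 for x, y in cells) * dA  # axis through the corner
d2 = xc**2 + yc**2                                  # squared axis distance
print(I_corner, I_cm + M * d2)  # the two values agree
```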
But wait: why are the blue integrals in Eq. (7.8.29) zero? This is due to a property of the
CM:

x_{CM} = \frac{\int_B x\,dV}{\int_B dV} \;\Longrightarrow\; \int_B (x - x_{CM})\,dV = 0 \;\Longrightarrow\; \int_B x'\,dV = 0
Actually we know this result without realizing it, see Table 7.3.
Table 7.3: \sum_i (x_i - \bar{x}) = 0, where \bar{x} is the arithmetic average of the x_i's.
The second and third equations convert the barycentric coordinates to Cartesian coordinates.
They are just Eq. (7.8.17).
Now, we need to determine the barycentric coordinates of the three vertices. It is straightfor-
ward to see that the barycentric coords of A are (1, 0, 0): using Eq. (7.8.31) with (1, 0, 0) results
in (x_A, y_A). Another way to see this is that the only way for the center of mass to be at A is
when m_A is very large compared with m_B and m_C; thus \lambda_1 = m_A/M = m_A/m_A = 1. Similarly,
the coords of B are (0, 1, 0) and of C are (0, 0, 1). From this we can see that every point on the edge
BC has \lambda_1 = 0 (this makes sense, as the only case where the center of mass is on BC is when
the mass at A is zero). The point is within the triangle if 0 \le \lambda_1, \lambda_2, \lambda_3 \le 1. If any one of the
coordinates is less than zero or greater than one, the point is outside the triangle. If any of them
is zero, P is on one of the lines joining the vertices of the triangle. See Fig. 7.23.
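Barycentric coordinates give a tidy point-in-triangle test; the sketch below (our own helper, solving the linear system directly) illustrates the sign conditions just described:

```python
# Point-in-triangle test via barycentric coordinates (a minimal sketch of ours).
def barycentric(p, a, b, c):
    """Solve p = l1*a + l2*b + l3*c with l1 + l2 + l3 = 1 (2D points as tuples)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return l1, l2, 1.0 - l1 - l2

A, B, C = (0, 0), (4, 0), (0, 3)
print(barycentric(A, A, B, C))   # (1.0, 0.0, 0.0): the vertex A itself
inside = all(0 <= l <= 1 for l in barycentric((1, 1), A, B, C))
print(inside)                    # True: (1, 1) lies within triangle ABC
```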
Next, we show that the line \lambda_1 = a, e.g. \lambda_1 = 1/3, is parallel to the edge BC, i.e. the line
\lambda_1 = 0. Using Eq. (7.8.31) with \lambda_1 = 1/3, we can obtain (x, y) as

x = \frac{1}{3}x_A + \lambda_2 x_B + \lambda_3 x_C = \frac{1}{3}x_A + \frac{2}{3}x_C + \lambda_2 (x_B - x_C)
y = \frac{1}{3}y_A + \lambda_2 y_B + \lambda_3 y_C = \frac{1}{3}y_A + \frac{2}{3}y_C + \lambda_2 (y_B - y_C)   (7.8.32)

We have learnt in Section 10.1.3 that the above line has the direction vector x_B - x_C, which is
the edge BC. Therefore, the line \lambda_1 = 1/3 is parallel to BC.
Now, we carry out some algebraic manipulations on x_P to show that there is nothing entirely
new about barycentric coordinates. To this end, we replace \lambda_1 by 1 - \lambda_2 - \lambda_3, and we compute
x_P - x_A, which is the relative position of P w.r.t. A:

x_P - x_A = [(1 - \lambda_2 - \lambda_3)x_A + \lambda_2 x_B + \lambda_3 x_C] - x_A = \lambda_2 (x_B - x_A) + \lambda_3 (x_C - x_A)

Or,

\vec{AP} = \lambda_2\,\vec{AB} + \lambda_3\,\vec{AC}   (7.8.33)

So, if we use the vertex A as the origin and the two edges AB and AC as the two basis vectors,
we have an oblique coordinate system; in this system, any point P specified by the two
coordinates (\lambda_2, \lambda_3) is simply a linear combination of these two basis vectors with the coefficients
being \lambda_2 and \lambda_3.
One question arises: why don't we just use Eq. (7.8.33)? If we look at this equation carefully,
one thing strikes us: it is not symmetric! Why should A be the origin? With the
barycentric coordinates (\lambda_1, \lambda_2, \lambda_3), on the other hand, everything is symmetric. There is no origin!
Geometrical meaning. The point P divides the triangle ABC into three sub-triangles PBC,
PAB and PAC. It can be shown that the barycentric coordinates (\lambda_1, \lambda_2, \lambda_3) are actually the
ratios of the areas of these sub-triangles to that of the big triangle:
where \theta and \varphi are used instead of (u, v), with \theta, \varphi \in [0, 2\pi). The second one is

r(u, v) = ((2 + \sin v)\cos u,\ (2 + \sin v)\sin u,\ u + \cos v),\quad u \in [0, 4\pi],\ v \in [0, 2\pi]
First, we fix u, and get a curve C_1 lying on S; the tangent to this curve at P is:

r_v = \frac{\partial x}{\partial v}(u_0, v_0)\,i + \frac{\partial y}{\partial v}(u_0, v_0)\,j + \frac{\partial z}{\partial v}(u_0, v_0)\,k   (7.9.1)

Second, we fix v, and get a curve C_2 lying on S; the tangent to this curve at P is:

r_u = \frac{\partial x}{\partial u}(u_0, v_0)\,i + \frac{\partial y}{\partial u}(u_0, v_0)\,j + \frac{\partial z}{\partial u}(u_0, v_0)\,k   (7.9.2)
N = r_u \times r_v

\text{area of surface} \approx \sum \|T_u \times T_v\|\,\Delta u\,\Delta v

\text{area of surface} = \iint \|T_u \times T_v\|\,du\,dv   (7.9.3)
had a greater impact on the ground, suggesting that the stone picked up more speed as it fell
from the greater height.
Law 1: Each planet orbits in an ellipse with one focus at the sun;
Law 2: The vector from the sun to a planet sweeps out area at a steady rate: dA/dt =
constant.
Law 1: states that if a body is at rest or moving at a constant speed in a straight line, it will
remain at rest or keep moving in a straight line at constant speed unless it is acted upon
by a force.
Law 2: is a quantitative description of the changes that a force can produce on the motion
of a body. It states that the time rate of change of the momentum of a body is equal in both
magnitude and direction to the force imposed on it. The momentum of a body is equal to
the product of its mass and its velocity. In symbols, this law is written as F D ma.
Law 3: states that when two bodies interact, they apply forces to one another that are equal
in magnitude and opposite in direction. The third law is also known as the law of action
and reaction.
The first law is known as the law of inertia and was first formulated by Galileo Galilei. This law
is very counter-intuitive: if we go shopping with a cart and we stop pushing it, it goes for a short
distance and stops. The law of inertia seems wrong! As explained in the wonderful book The Evolution
of Physics by Einstein and Infeld, it was only with imagination that Galilei resolved the problem:
there is actually friction acting on the cart. If we can remove it (by having a very smooth road,
for example) the cart would indeed go further. And on an ideally, perfectly smooth road, it would go on
forever.
We focus now on the 2nd law, which written out fully is

F_x = ma_x = m\frac{d^2 x}{dt^2} = m\frac{dv_x}{dt}
F_y = ma_y = m\frac{d^2 y}{dt^2} = m\frac{dv_y}{dt}   (7.10.1)
F_z = ma_z = m\frac{d^2 z}{dt^2} = m\frac{dv_z}{dt}
How are we going to use it? First we need to know the force, we then resolve it into three
components Fx ; Fy and Fz , and finally we solve Eq. (7.10.1). How to do that is the subject of
the next section.
The equations in Eq. (7.10.1) are what mathematicians refer to as ordinary differential equations, with the well-known
abbreviation ODEs. Precisely, they are second-order ODEs as they contain the second
time derivative d^2x/dt^2. Scientists like to call them dynamical equations because they describe
the evolution in time (i.e., the dynamics) of the system. Chapter 8 discusses differential equations
in detail.
Newton gave us the 2nd law which requires force so he had to give us some forces. And he
did. In Section 7.10.8 I present his force of gravitation. For other forces, he gave us the third law
which in many cases helps us to remove interaction forces (usually unknown) between bodies.
of gravitation, the earth is pulling the object with a force F pointing to the center of the earth
and has a magnitude of
F = \frac{GMm}{(R + h)^2}

Since h is tiny compared with R, we can approximate (R + h)^2 = R^2 + 2Rh + h^2 \approx R^2. Thus,

F = \frac{GM}{R^2}\,m = mg,\qquad g = \frac{GM}{R^2}
where g is called the acceleration of gravity. The quantity mg is called the weight of the object,
which is how hard gravity is pulling on it. With

G = 6.673\times 10^{-11}\ \mathrm{N\,m^2/kg^2},\quad M = 5.972\times 10^{24}\ \mathrm{kg},\quad R = 6.37\times 10^{6}\ \mathrm{m}

these values give g = GM/R^2 \approx 9.8\ \mathrm{m/s^2}.
With the gravitational force known, let’s solve the first real problem using calculus. The
problem is: we are shooting a basket ball or firing a gun; describe its motion. These projectile
motions occur in a plane. Let’s use the xy plane with x being horizontal and y vertical. For
simplicity the initial position of the object (with mass m) is at the origin. The initial velocity of
the object is .v0 cos ˛; v0 sin ˛/ (Fig. 7.30). Our task now is to solve the dynamical equations
given in Fig. 7.30.
Solving the second equation for x(t), we get

\frac{d^2 x}{dt^2} = 0 \;\Longrightarrow\; v_x(t) = v_0\cos\alpha \;\Longrightarrow\; x(t) = (v_0\cos\alpha)\,t   (7.10.2)

which agrees with the law of inertia: with no force in the x direction, the velocity (in the horizontal
direction) is constant. Now, solving the first equation for y(t), we get
\frac{d^2 y}{dt^2} = -g \;\Longrightarrow\; v_y(t) = -gt + v_0\sin\alpha \;\Longrightarrow\; y(t) = (v_0\sin\alpha)\,t - \frac{1}{2}gt^2   (7.10.3)
Putting together x(t) and y(t), we get the complete trajectory of the projectile:

x(t) = (v_0\cos\alpha)\,t,\qquad y(t) = (v_0\sin\alpha)\,t - \frac{1}{2}gt^2   (7.10.4)
Phu Nguyen, Monash University © Draft version
Chapter 7. Multivariable calculus 564
What this equation provides is this: starting with the initial position (which is (0, 0) in this
particular example) and the initial velocity, it predicts the position of the projectile at
any time instant t. One question here is: what is the shape of the trajectory? Eliminating t will
reveal that. From Eq. (7.10.2) we have t = x/(v_0\cos\alpha); substituting that into Eq. (7.10.3) we get
y = (\tan\alpha)\,x - \frac{1}{2}\frac{g}{v_0^2\cos^2\alpha}\,x^2   (7.10.5)
A parabola! We can do a few more things with this: determining when the object hits the ground,
and how far. The power of Newton’s laws of motions is in the prediction of the motion of planets,
see Section 7.10.9 for detail.
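For instance, the flight time follows from Eq. (7.10.4) by setting y(t) = 0, and the range follows from x(t); a small sketch (with made-up numbers v_0 = 20 m/s, \alpha = 45°):

```python
# Flight time and range from Eq. (7.10.4); v0 and alpha are made-up numbers.
import math

g, v0, alpha = 9.81, 20.0, math.radians(45)
t_flight = 2 * v0 * math.sin(alpha) / g     # y(t) returns to 0 at this time
x_range = v0 * math.cos(alpha) * t_flight   # horizontal distance covered
print(t_flight)  # ≈ 2.88 s
print(x_range)   # ≈ 40.77 m, which equals v0^2 * sin(2*alpha) / g
```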
Figure 7.31: Position vector R(t) and a change in the position vector \Delta R(t).
Knowing the function, the first step is to differentiate it, which gives us the velocity
vector v(t). To this end, we consider two time instants: at t the position vector is R(t) and at
t + \Delta t the position vector is R(t + \Delta t). Then, the velocity is computed as (one note about the
notation is in order: vectors are typeset in italic boldface minuscule characters like a)
v(t) = \lim_{\Delta t\to 0}\frac{\Delta R}{\Delta t}
     = \lim_{\Delta t\to 0}\frac{[x(t+\Delta t) - x(t)]\,i + [y(t+\Delta t) - y(t)]\,j + [z(t+\Delta t) - z(t)]\,k}{\Delta t}   (7.10.7)
     = \frac{dx}{dt}\,i + \frac{dy}{dt}\,j + \frac{dz}{dt}\,k
Implicitly, we used the rule of limits: the limit of a sum is the sum of the limits.
What does this equation tell us? It tells us that differentiating a vector-valued function amounts
to differentiating the three component functions (which are ordinary functions of a single variable).
The formula is simple because the unit vectors (i.e., i ,j ,k) are fixed. As we shall see later, this
is not the case with polar coordinates, and the velocity vector has more terms.
The speed (of the object) is then given by kv.t/k, the length of the velocity vector. The
direction of motion is given by the tangent vector T .t/ given by v=kvk. The tangent is a unit
vector, as we’re only interested in the direction.
The acceleration is just the derivative of the velocity:
a(t) = \frac{dv}{dt} = \frac{d^2 R}{dt^2} = \frac{d^2 x}{dt^2}\,i + \frac{d^2 y}{dt^2}\,j + \frac{d^2 z}{dt^2}\,k   (7.10.8)
Now, we generalize the rules of differentiation of ordinary functions to vector functions.
Let’s consider two vector valued functions u.t/ and v.t/ and a scalar function f .t/, we have the
following rules:
(a) \frac{d}{dt}[u + v] = u' + v'
(b) \frac{d}{dt}[f(t)\,u] = f'(t)\,u + f(t)\,u'   (7.10.9)
(c) \frac{d}{dt}[u \cdot v] = u' \cdot v + u \cdot v'
(d) \frac{d}{dt}[u \times v] = u' \times v + u \times v'
These rules can be verified quite straightforwardly. They are just some maths exercises,
but amazingly we shall use rule (d) to prove that the orbit of the earth around the sun is a
plane curve.
And with all of this, we can study a variety of motions such as projectile motion. In what
follows, we present uniform circular motion as an example of application of the maths.
Uniform motion along a circle. Uniform circular motion can be described as the motion of an
object in a circle at a constant speed. This might be a guest on a carousel at an amusement park,
a child on a merry-go-round at a playground, a car with a lost driver navigating a round-about
or "rotary", a yo-yo on the end of a string, a satellite in a circular orbit around the Earth, or the
Earth in a (nearly) circular orbit around our Sun.
At all instances, the object is moving tangentially to the circle. Since the direction of the
velocity vector is the same as the direction of the object’s motion, the velocity vector is directed
tangent to the circle as well. As an object moves in a circle, it is constantly changing its direction.
Therefore, it is accelerating (even though the speed is constant).
Let’s denote by ! the angular velocity of the object (the SI unit of angular velocity is radians
per second). Then, we can write its position vector, and differentiating this vector gives us the
velocity vector, which is then differentiated to give us the acceleration vector (assuming that the
radius of the circular path is r):
" # " # " #
r cos !t r! sin !t r! 2 cos !t
R.t / D H) v.t/ D H) a.t/ D (7.10.10)
r sin !t Cr! cos !t r! 2 sin !t
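Note that Eq. (7.10.10) gives a = -\omega^2 R: the acceleration points toward the center. This can be checked by finite differences (a sketch of ours; r and \omega are arbitrary):

```python
# Finite-difference check of Eq. (7.10.10): a(t) = -omega^2 * R(t)
# (our sketch; r, omega, t are arbitrary numbers).
import math

r, w, t, h = 3.0, 2.0, 0.4, 1e-4
R = lambda t: (r * math.cos(w * t), r * math.sin(w * t))
# second derivative of each position component by central differences
ax = (R(t + h)[0] - 2 * R(t)[0] + R(t - h)[0]) / h**2
ay = (R(t + h)[1] - 2 * R(t)[1] + R(t - h)[1]) / h**2
print(ax, -r * w**2 * math.cos(w * t))  # the two values agree
print(ay, -r * w**2 * math.sin(w * t))
```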
Figure 7.33: Unit vectors in the polar coordinate system. The most important observation is that while \hat{r} and
\hat{\theta} are constant in length (because they are both unit vectors), they are not constant in direction. In other
words, they are vector-valued functions that change from point to point. Note that \|r\| = \sqrt{x^2 + y^2}.
Knowing \hat{r} allows us to determine the unit vector in the tangential direction \hat{\theta}, as the two vectors
are perpendicular to each other. Collectively, they are written as

\hat{r} = +\cos\theta\,i + \sin\theta\,j
\hat{\theta} = -\sin\theta\,i + \cos\theta\,j   (7.10.12)
As both of them are functions of \theta only, their derivatives with respect to r are zero. We need
their derivatives w.r.t. \theta:

\frac{d\hat{r}}{d\theta} = -\sin\theta\,i + \cos\theta\,j = \hat{\theta}
\frac{d\hat{\theta}}{d\theta} = -\cos\theta\,i - \sin\theta\,j = -\hat{r}   (7.10.13)
We're now ready to compute the derivatives of these unit vectors w.r.t. time (following Newton, we
use the notation \dot{f} to denote the time derivative of f(t)):

\frac{d\hat{r}}{dt} = \frac{d\hat{r}}{d\theta}\frac{d\theta}{dt} = \dot{\theta}\,\hat{\theta}
\frac{d\hat{\theta}}{dt} = \frac{d\hat{\theta}}{d\theta}\frac{d\theta}{dt} = -\dot{\theta}\,\hat{r}   (7.10.14)
Now, we proceed to determine the velocity and acceleration. First, the velocity is

r = r\hat{r} \;\Longrightarrow\; \frac{dr}{dt} = \dot{r}\,\hat{r} + r\frac{d\hat{r}}{dt} = \dot{r}\,\hat{r} + r\dot{\theta}\,\hat{\theta}   (7.10.15)
And therefore, the acceleration is

\frac{d^2 r}{dt^2} = \frac{d}{dt}\left(\dot{r}\,\hat{r} + r\dot{\theta}\,\hat{\theta}\right)
 = \ddot{r}\,\hat{r} + \dot{r}\frac{d\hat{r}}{dt} + \dot{r}\dot{\theta}\,\hat{\theta} + r\ddot{\theta}\,\hat{\theta} + r\dot{\theta}\frac{d\hat{\theta}}{dt}   (7.10.16)
 = (\ddot{r} - r\dot{\theta}^2)\,\hat{r} + (2\dot{r}\dot{\theta} + r\ddot{\theta})\,\hat{\theta}
F_r = m(\ddot{r} - r\dot{\theta}^2)
F_\theta = m(2\dot{r}\dot{\theta} + r\ddot{\theta})   (7.10.17)
\hat{r} = e^{i\theta},\qquad \hat{\theta} = ie^{i\theta}   (7.10.18)

As multiplying by i is a 90° rotation, it is clear that \hat{r} is perpendicular to \hat{\theta}. Now, we
can differentiate r = re^{i\theta} w.r.t. time:

r = re^{i\theta} \;\Longrightarrow\; \frac{dr}{dt} = \dot{r}e^{i\theta} + ire^{i\theta}\dot{\theta} = \dot{r}\,\hat{r} + r\dot{\theta}\,\hat{\theta}

which is exactly what we obtained in Eq. (7.10.15). For the acceleration, doing something
similar:

\frac{dr}{dt} = \dot{r}e^{i\theta} + ire^{i\theta}\dot{\theta}
\;\Longrightarrow\;
\frac{d^2 r}{dt^2} = \ddot{r}e^{i\theta} + i\dot{r}e^{i\theta}\dot{\theta} + i\dot{r}e^{i\theta}\dot{\theta} + i^2 re^{i\theta}\dot{\theta}^2 + ire^{i\theta}\ddot{\theta}
 = (\ddot{r} - r\dot{\theta}^2)e^{i\theta} + (2\dot{r}\dot{\theta} + r\ddot{\theta})\,ie^{i\theta}

and we get Eq. (7.10.16).
Henry Cavendish (1731 – 1810) was an English natural philosopher, scientist, and an important experimental
and theoretical chemist and physicist.
From this, we can determine the length of the angular momentum as l = mr^2\omega, where \omega = \dot{\theta},
because \|\hat{r} \times \hat{\theta}\| = 1 for two perpendicular unit vectors. From Fig. 7.34 and following the steps
in Eq. (7.10.24) but without the mass m, we get

\frac{dA}{dt} = \frac{1}{2}\|r \times v\| = \frac{1}{2m}\|r \times p\| = \frac{1}{2}r^2\dot{\theta} = \frac{l}{2m}

Since the angular momentum l is conserved, we arrive at the conclusion that dA/dt is constant.
This proof shows us that as a planet orbits the sun, when it is close to the sun (r is small),
it speeds up (\omega is bigger, as l = mr^2\omega is constant).
Proof of the 1st law. We use Newton's 2nd law in polar coordinates, i.e., Eq. (7.10.17), together with
Newton's universal gravity to deduce Kepler's 1st law. The only force is the Sun's gravitational
pull, written as
F = -\frac{GMm}{r^2}\,\hat{r}   (7.10.25)
Introducing this force into Eq. (7.10.17), we get the following system of two equations:

\ddot{r} - r\dot{\theta}^2 = -\frac{GM}{r^2}   (7.10.26)
2\dot{r}\dot{\theta} + r\ddot{\theta} = 0
The solution of this system of equations is the orbit of the planet, and it should be an equation for an
ellipse (but we need to prove this). From the second equation in Eq. (7.10.26), we have

\frac{d}{dt}(r^2\dot{\theta}) = 0 \;\Longleftrightarrow\; r^2\dot{\theta} = h = \text{constant}

Next, we introduce the new variable q = 1/r:

r = \frac{1}{q} \;\Longrightarrow\; \dot{r} = -\frac{\dot{q}}{q^2} = -\frac{1}{q^2}\frac{dq}{d\theta}\frac{d\theta}{dt} = -h\frac{dq}{d\theta}
Don’t ask me why this new variable. I have no idea.
\frac{d^2 r}{dt^2} = -h\frac{d}{dt}\left(\frac{dq}{d\theta}\right) = -h\frac{d}{d\theta}\left(\frac{dq}{d\theta}\right)\frac{d\theta}{dt} = -h\frac{d^2 q}{d\theta^2}\,\dot{\theta} = -h^2 q^2\frac{d^2 q}{d\theta^2}   (7.10.27)

where, in the last equality, we used the result hq^2 = \dot{\theta}.
We're now ready to re-write the first equation of Eq. (7.10.26) in terms of h, q, \theta:

-h^2 q^2\frac{d^2 q}{d\theta^2} - \frac{1}{q}(hq^2)^2 = -GMq^2
\;\Longrightarrow\;
\boxed{\frac{d^2 q}{d\theta^2} + q = C},\qquad C = \frac{GM}{h^2}   (7.10.28)
The boxed equation is a so-called differential equation (DE). We have more to say about dif-
ferential equations in Chapter 8, but briefly a DE is an equation that contains derivatives of
some function that we're trying to find, e.g. f(x) + f'(x) = 2. How are we going to solve the
above boxed equation? Solving DEs is not easy, but in this case it turns out that the solution is
something we know. What is the boxed equation saying to us? It tells us to find a function (i.e.,
q) such that its second derivative equals minus itself (the constant C is not important). We know
that the cosine is such a function. So, the solution to this equation is q = C + D\cos\theta. Now, forget
q (it's just a means to an end); what we need is r:

r = \frac{1}{C + D\cos\theta}
But this is the equation of a conic section (Section 4.12.2). We need astronomical data to
determine C and D and from that to deduce that this is indeed the equation of an ellipse.
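That q = C + D cos(theta) really solves the boxed equation q'' + q = C is easy to confirm numerically (a sketch of ours; the values of C and D are arbitrary):

```python
# Verifying that q = C + D*cos(theta) satisfies q'' + q = C
# (our sketch; C, D and the sample angles are arbitrary).
import math

C, D = 1.2, 0.5
q = lambda th: C + D * math.cos(th)
h = 1e-4
for th in (0.3, 1.0, 2.5):
    q2 = (q(th + h) - 2 * q(th) + q(th - h)) / h**2  # numerical q''
    print(round(q2 + q(th), 6))  # ≈ C = 1.2 at every theta
```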
At this moment, you might be thinking 'but the orbit of the planets around the Sun was known
to be an ellipse thanks to Kepler'. It is indeed easier to work on a problem whose solution
we know beforehand. But Newton's theory of universal gravity is more powerful than that: it can
predict things that we never knew of.
In the beginning of the year 1665 I found the method of approximating series and
the rule for reducing any dignity [power] of any binomial into such a series. The
same year in May I found the method of tangents of Gregory and Slusius, and in
November had the direct method of fluxions and the next year [1666] in January had
the theory of colours and in May following I had entrance into the inverse method
of fluxions. And the same year I began to think of gravity extending to the orb of the
moon ... All this was in the two plague years of 1665 and 1666, for in those days I
was in the prime of my age for invention and minded Mathematics and Philosophy
more than at any time since.
\nabla \cdot E = \frac{\rho}{\epsilon_0}   (7.11.1a)

\nabla \times E = -\frac{\partial B}{\partial t}   (7.11.1b)

c^2\,\nabla \times B = \frac{\partial E}{\partial t} + \frac{j}{\epsilon_0}   (7.11.1c)

\nabla \cdot B = 0   (7.11.1d)

where \nabla is the gradient vector operator, \nabla \cdot E is the divergence of the electric field E, \nabla \times E is
the curl of E, and B is the magnetic field.
When the electric and magnetic fields do not depend on time, i.e., the charges are per-
manently fixed in space, or, if they do move, they move as a steady flow, all of the terms in
Eq. (7.11.1) which are time derivatives of the fields are zero. And we get two sets of equations.
One for electrostatics:

\nabla \cdot E = \frac{\rho}{\epsilon_0}   (7.11.2a)
\nabla \times E = 0   (7.11.2b)

and one for magnetostatics:

\nabla \times B = \frac{j}{c^2\epsilon_0}   (7.11.3a)
\nabla \cdot B = 0   (7.11.3b)
Looking at these two sets of equations, we can see that electrostatics is a neat example of a
vector field with zero curl and a given divergence. And magnetostatics is a neat example of a
vector field with zero divergence and a given curl.
To summarize, the central object of vector calculus is vector fields. And to this object we
will of course apply differentiation and integration, which leads to differential calculus of vector
fields and integral calculus of vector fields, and connections between them:
the fundamental theorem of calculus that links line integrals to surface integrals and
volume integrals: we have Green's theorem, Stokes' theorem and Gauss' theorem. They
are all generalizations of \int_a^b (df/dx)\,dx = f(b) - f(a).
So, a 3D vector field is similar to three ordinary functions. If the field does not depend on time
t, we have a static field, and in the above equation t is omitted. For a plane vector field we
have F = M(x, y, t)\,i + N(x, y, t)\,j. Fig. 7.35 gives some plane vector fields, which you can
think of as the velocity fields of a fluid.
\text{Gravitational force: } F = -G\frac{Mm}{r^2}\,\hat{r}

\text{Electric force: } F = \frac{1}{4\pi\epsilon_0}\frac{qq_0}{r^2}\,\hat{r}
Remarkably, these two very different forces have the same mathematical form: they are in-
versely proportional to the square of the distance r between the two masses M, m or the two charges q and q_0, and
they are proportional to the product of the two masses or charges. They are known as inverse square
laws. As these forces act along the line connecting the two masses (or charges), they are called
central forces.
Charles-Augustin de Coulomb (1736 – 1806) was a French officer, engineer, and physicist. He is best known
as the eponymous discoverer of what is now called Coulomb’s law, the description of the electrostatic force of
attraction and repulsion. He also did important work on friction. The SI unit of electric charge, the coulomb, was
named in his honor in 1880.
Figure 7.36: Gravitational force between two masses M and m and electric force between two charges
q0 and q.
E = \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}\,\hat{u}   (7.11.5)
Figure 7.37
Fig. 7.38
\underbrace{0.5mv^2}_{\text{K.E.}} + \underbrace{mgh}_{\text{P.E.}} = \text{const}   (7.11.6)
And we want to verify whether this principle is correct. We use Newton's second law
F = ma = m\,dv/dt, but focus on energy aspects. Let's calculate the change of the kinetic
energy T:

\frac{dT}{dt} = \frac{d}{dt}\left(\frac{1}{2}mv^2\right) = mv\frac{dv}{dt} = Fv   (7.11.7)
Since F = -mg and v = dh/dt, we get dT/dt = -mg\,dh/dt = -d(mgh)/dt. So a change in
the kinetic energy turns into potential energy, and thus Eq. (7.11.6) is indeed correct.
So, from Newton’s law we have discovered an interesting fact about energy conservation.
But it was only for the simple problem of free fall. Will this energy principle work for other
cases? Let's check! In 3D, the kinetic energy T for a particle of mass m traveling along a 3D
curve is given by

T = \frac{1}{2}\left(mv_x^2 + mv_y^2 + mv_z^2\right)
Thus, its rate of change is

\frac{dT}{dt} = F \cdot v = F \cdot \frac{ds}{dt}   (7.11.9)
Even though the trajectory is a 3D curve, the only non-zero force component is F_z = -mg,
and thus we have

\frac{dT}{dt} = (-mg)\frac{dz}{dt} = -\frac{d}{dt}(mgz)
And again, energy conservation works.
We have the tiny change of T w.r.t. a tiny change in time, Eq. (7.11.9). Integral calculus gives
us the total change when the particle traverses the entire path, denoted by C. From Eq. (7.11.9)
we obtain dT = F \cdot ds, and integrating this gives us the total change of the kinetic energy

\Delta T = \int_C F \cdot ds   (7.11.10)
This integral (a significant integral) is named a line integral of a vector field. In mechanics, this
integral is called the work done by a force. And Eq. (7.11.10) is known as the work-kinetic
energy theorem: the change in a particle’s KE as it moves from point 1 to point 2 (the end points
of C ) is the work done by the force.
Let's say a few words about the unit of work. As work is defined as force multiplied by
distance, its SI unit is the newton meter, which is one joule.
Don't let the name line integral fool you: the integration path C is actually a curve. As F \cdot ds
is a number, the line integral is simply an extension of \int_a^b f(x)\,dx. Instead of moving along the x
direction from (a, 0) to (b, 0), we now traverse a spatial curve C. Obviously, when this curve
happens to be a horizontal line, the line integral reduces to the ordinary integral. So, actually,
nothing is too new here.
For the evaluation of a line integral it is convenient to use a parametric representation of
the curve C: (x(t), y(t)) for a \le t \le b. Then Eq. (7.11.10) becomes, for a 2D vector field
F = M(x, y)\,i + N(x, y)\,j:

\int_C F \cdot ds = \int_a^b \begin{bmatrix} M(x(t), y(t)) \\ N(x(t), y(t)) \end{bmatrix} \cdot \begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} dt   (7.11.11)

The final integral is simply an integral of the form \int_a^b f(t)\,dt, which can be evaluated using
standard techniques of calculus. In what follows, we present a few examples.
Example 1. Let's consider the vector field F = -y\,i + x\,j (see Fig. 7.35c); the path is the
full unit circle centered at (2, 0), traversed counter-clockwise. First, we parametrize C,
then just apply Eq. (7.11.11):

x = 2 + \cos t,\; y = \sin t \;\Longrightarrow\; dx = -\sin t\,dt,\; dy = +\cos t\,dt

F \cdot ds = (-\sin t)(-\sin t)\,dt + (2 + \cos t)(\cos t)\,dt
So (the symbol \oint designates that the curve is closed),

\oint F \cdot ds = \int_0^{2\pi} (1 + 2\cos t)\,dt = 2\pi
The result is positive which is expected because the force and the path are both counter-
clockwise.
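The value 2\pi is also easy to confirm numerically; a sketch of ours using the midpoint rule on the parametrization of Example 1:

```python
# Numerical evaluation of the line integral of Example 1 via Eq. (7.11.11):
# F = (-y, x) along the unit circle centered at (2, 0) (our sketch).
import math

n = 100000
total, dt = 0.0, 2 * math.pi / 100000
for i in range(n):
    t = (i + 0.5) * dt
    x, y = 2 + math.cos(t), math.sin(t)    # the parametrized path
    dx, dy = -math.sin(t), math.cos(t)     # (x'(t), y'(t))
    total += (-y * dx + x * dy) * dt       # F . ds
print(total)  # ≈ 2*pi ≈ 6.2832
```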
Example 2. Let's consider the vector field F = 2x\,i + 2y\,j (see Fig. 7.35a); the path is again
the full unit circle centered at (2, 0). Note that the vector field F is the gradient of the scalar
field \phi = x^2 + y^2. We have

x = 2 + \cos t,\; y = \sin t \;\Longrightarrow\; dx = -\sin t\,dt,\; dy = +\cos t\,dt

F \cdot ds = (4 + 2\cos t)(-\sin t)\,dt + (2\sin t)(\cos t)\,dt
Thus,

\oint F \cdot ds = -\int_0^{2\pi} 4\sin t\,dt = 4\cos t \Big|_0^{2\pi} = 0   (7.11.12)
So, the line integral of a gradient field along a closed curve is zero! Let's see whether we would also get
zero if the path is not a closed curve. Assume the path is just the first quarter of the circle; then
the line integral is

\int F \cdot ds = -\int_0^{\pi/2} 4\sin t\,dt = -4
Figure 7.39
Now, we suspect that there is something special about the line integral of a gradient vector.
But a line integral is a generalization of \int_a^b f(x)\,dx, which satisfies the fundamental theorem of
calculus:

\int_a^b \frac{dF}{dx}\,dx = F(b) - F(a)

So, the equivalent counterpart for line integrals should look like this:

\int_1^2 \nabla\phi \cdot ds = \phi(2) - \phi(1) \quad \text{(along } C\text{)}
And it turns out that our guess is correct. Suppose that we have a scalar field \phi(x, y) and two
points 1 and 2. We denote by \phi(1) the field at point 1 and similarly by \phi(2) the field at point 2.
A curve C joins these two points (Fig. 7.39). We have the following theorem:
Theorem 7.11.1: Fundamental Theorem For Line Integrals
\int_1^2 \nabla\phi \cdot ds = \phi(2) - \phi(1) \quad \text{(along } C\text{)}   (7.11.13)

which states that the line integral along the curve C of the dot product of a gradient \nabla\phi (a
vector field) with ds (another vector, the infinitesimal line segment) equals the
difference of \phi evaluated at the two end points of the curve C.
It is because of this theorem that the integral in Eq. (7.11.12) is zero, as the two end points
are the same. Also because of this theorem, the line integral of a gradient vector is path-
independent. That is, no matter how we go from point 1 to point 2, the integral is the same.
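This path independence can be observed numerically; the sketch below (ours) integrates F = \nabla\phi with \phi = x^2 + y^2 from (0, 0) to (1, 1) along a straight line and along a parabola:

```python
# Path independence of a gradient field: integrate F = (2x, 2y) = grad(phi),
# phi = x^2 + y^2, from (0,0) to (1,1) along two different paths (our sketch).
def line_integral(path, dpath, n=100000):
    """Midpoint-rule approximation of the line integral of F along the path."""
    total, dt = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        (x, y), (dx, dy) = path(t), dpath(t)
        total += (2 * x * dx + 2 * y * dy) * dt  # F . ds
    return total

straight = line_integral(lambda t: (t, t), lambda t: (1.0, 1.0))
parabola = line_integral(lambda t: (t, t * t), lambda t: (1.0, 2 * t))
print(straight, parabola)  # both ≈ phi(1,1) - phi(0,0) = 2
```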
Proof. [Proof of Theorem 7.11.1]. We use the definition of an integral as a Riemann sum to prove
the above theorem. To this end, we divide the curve C into many segments (Fig. 7.39). Then,
we can write the integral as

\int_1^2 \nabla\phi \cdot ds = \lim_{n\to\infty} \sum_{i=1}^{n} (\nabla\phi \cdot \Delta s)_i
Now, what is \nabla\phi \cdot \Delta s? It is the change of \phi along \Delta s. Remember the directional derivative?
Here the direction is along the curve. So, we can compute this term for all n segments:
Figure 7.40: Work of the gravitational force in moving a mass m from point 1 to point 2 along a curved
path C. Note that ds = (dx, dy, dz)^T.
We compute the work done by the gravity in moving a mass m from point 1 to point 2 along
a curved path C as shown in Fig. 7.40. The origin of the coordinate system is put at the earth of
mass M. As r^2 = x^2 + y^2 + z^2, we have r\,dr = x\,dx + y\,dy + z\,dz, so the work is written as

W = \int_C F \cdot ds = -GMm\int \frac{dr}{r^2} = GMm\left(\frac{1}{r_2} - \frac{1}{r_1}\right)   (7.11.14)
And this work is also independent of the path! And if C is a closed path, W is zero.
We know that the work done is equal to the change in the kinetic energy (that is, $W = \Delta T$). And Eq. (7.11.14) shows that the work done is also a change of something: the RHS of that equation is the difference of two terms, which indicates a change of something (that we label as $-\Delta U$). Let us define that

$$\left.\begin{aligned} W &= +\Delta T\\ W &= -\Delta U \end{aligned}\right\} \implies \Delta(T + U) = 0 \quad \text{(energy is conserved)} \qquad (7.11.15)$$

From Eqs. (7.11.14) and (7.11.15) we can determine the expression for $U$:

$$GMm\left(\frac{1}{r_2} - \frac{1}{r_1}\right) = -\Delta U \implies U(r) = -\frac{GMm}{r} \qquad (7.11.16)$$

And $U(r)$ is called the potential of the gravitational force.
$$Q = k\frac{(T_1 - T_2)A}{d} \quad (\text{W} = \text{J/s}) \qquad (7.11.17)$$

where $k$ is the thermal conductivity of the material (SI unit W/(m·K)). This equation was obtained based on experimental observations that the rate of heat conduction through a slab is proportional to the temperature difference across the slab ($T_1 - T_2$) and the heat transfer area ($A$), but it is inversely proportional to the thickness of the slab $d$.
Now, if we shrink the slab thickness $d$ to zero so that we have the derivative of the temperature, and divide the above equation by $A$ (and thus get rid of that), we get the following differential form of the one-dimensional Fourier law for heat conduction:

$$q = -k\frac{dT}{dx} \quad (\text{W/m}^2) \qquad (7.11.18)$$

where $q$ is the heat flux density. The word flux comes from Latin: fluxus means "flow", and fluere is "to flow".
Now we move to heat conduction in a three-dimensional body of complicated geometry. The generalization of Eq. (7.11.18) is

$$\boldsymbol{q} = -k\nabla T \qquad (7.11.19)$$

where $\nabla T$ denotes the gradient of the temperature field.
Figure 7.41
$$\frac{Q}{A_1/\cos\theta} = \frac{qA_1}{A_1}\cos\theta = q\cos\theta = \boldsymbol{q}\cdot\boldsymbol{n}$$

And the total flux of heat through a surface $S$ is the sum of all the fluxes through the small surface elements $dA$:

$$\text{heat flux} = \int_S \boldsymbol{q}\cdot\boldsymbol{n}\,dA$$

$$\text{flux} = \int_S \boldsymbol{C}\cdot\boldsymbol{n}\,dA \qquad (7.11.20)$$
Imagine that we have a volume V with surface S (Fig. 7.42). Now we cut that volume into
two volumes V1 and V2 by a plane Sab . The first volume is enclosed by surface S1 which consists
of a part of the original surface Sa and Sab . The second volume is bounded by surface S2 which
consists of the other part of the original surface Sb and Sab . If we compute the flux of a vector
field C through the surface S1 and the flux through S2 , we get:
$$\text{flux through } S_1:\ \int_{S_a}\boldsymbol{C}\cdot\boldsymbol{n}\,dA + \int_{S_{ab}}\boldsymbol{C}\cdot\boldsymbol{n}_1\,dA$$

$$\text{flux through } S_2:\ \int_{S_b}\boldsymbol{C}\cdot\boldsymbol{n}\,dA + \int_{S_{ab}}\boldsymbol{C}\cdot\boldsymbol{n}_2\,dA$$

Noting that $\boldsymbol{n}_2 = -\boldsymbol{n}_1$, when we sum these two fluxes, the red terms cancel out, and we obtain
this interesting fact about flux: the flux through the complete outer surface S can be considered
as the sum of the fluxes out of the two pieces into which the volume was broken. And nothing
can stop us from dividing V1 into two little pieces and regardless of how we divide the original
volume we always get that the flux through the original outer surface S is equal to the sum of
the fluxes out of all the little interior pieces.
Figure 7.42
We continue that division process until we get infinitesimally small pieces, each a very small cube. Now, we're going to compute the flux of a vector field $\boldsymbol{C}$ through the faces of an infinitesimal cube. And of course we choose a special cube, one that is aligned with the coordinate axes (Fig. 7.43).

The fluxes through faces 1 and 2, defined by $\int \boldsymbol{C}\cdot\boldsymbol{n}\,dA$, are considered first (note that the normals of these faces are parallel to the $x$ direction, so the other components of $\boldsymbol{C}$ are irrelevant). And as the cube is tiny, the field is constant over these faces. So, for face 1, the field is $C_x(1)$, where 1 is any point on this face.
Figure 7.43: Flux of a vector field C through the faces of an infinitesimal cube.
For face 2, a Taylor expansion about face 1 gives

$$C_x(2) = C_x(1) + \frac{\partial C_x}{\partial x}\Delta x \qquad (7.11.21)$$

which is correct as $\Delta x$ is small. Thus, we can compute the net flux through faces 1/2, and similarly for faces 3/4 and 5/6. They are given by

$$\text{flux through faces 1/2} = \frac{\partial C_x}{\partial x}\Delta x\,\Delta y\,\Delta z$$
$$\text{flux through faces 3/4} = \frac{\partial C_y}{\partial y}\Delta x\,\Delta y\,\Delta z$$
$$\text{flux through faces 5/6} = \frac{\partial C_z}{\partial z}\Delta x\,\Delta y\,\Delta z$$

which gives us the total flux through all six faces of the small cube with surface $S$:

$$\int_S \boldsymbol{C}\cdot\boldsymbol{n}\,dA = \left(\frac{\partial C_x}{\partial x} + \frac{\partial C_y}{\partial y} + \frac{\partial C_z}{\partial z}\right)\Delta V \qquad (7.11.22)$$

where $\Delta V = \Delta x\Delta y\Delta z$ is the volume of the cube. The red term is given a special name: the divergence of $\boldsymbol{C}$. Thus, the divergence of a 3D vector field is defined as

$$\nabla\cdot\boldsymbol{C} := \frac{\partial C_x}{\partial x} + \frac{\partial C_y}{\partial y} + \frac{\partial C_z}{\partial z}$$
What does Eq. (7.11.22) mean? It tells us that, for an infinitesimal cube, the outward flux of the cube is equal to the divergence of the vector field multiplied by the volume of the cube. To better understand the meaning of this new divergence concept, we consider three vector fields and compute the corresponding divergences (Fig. 7.44). Think of these vector fields as the velocities of some moving fluid. Now put a sphere at the origin; the fluid can go in and out of this sphere. In Fig. 7.44a, $\nabla\cdot\boldsymbol{C} > 0$ indicates that, due to Eq. (7.11.22), the fluid is moving out of the sphere. On the contrary, in Fig. 7.44b the fluid is entering the sphere, thus $\nabla\cdot\boldsymbol{C} < 0$. Finally, the fluid in Fig. 7.44c is just swirling around: there is no fluid moving out of the sphere, so $\nabla\cdot\boldsymbol{C} = 0$. If the divergence cannot describe a rotating fluid, then we need another concept. And indeed, the curl of the fluid velocity field does just that (Section 7.11.7).
Figure 7.44: Some 2D vector fields and their divergences: (a) $\nabla\cdot\boldsymbol{C} = 2 > 0$, (b) $\nabla\cdot\boldsymbol{C} = -2 < 0$ and (c) $\nabla\cdot\boldsymbol{C} = 0$. You're recommended to watch this amazing animation for a better understanding of the meaning of the divergence and curl.
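The three fields in Fig. 7.44 are not reproduced here; assuming the plausible choices $\boldsymbol{C} = (x, y)$, $\boldsymbol{C} = (-x, -y)$ and $\boldsymbol{C} = (-y, x)$ for panels (a), (b) and (c), a short symbolic check recovers the stated divergences:

```python
import sympy as sp

x, y = sp.symbols('x y')

def div2d(Cx, Cy):
    # Divergence of a 2D field (Cx, Cy): dCx/dx + dCy/dy
    return sp.simplify(sp.diff(Cx, x) + sp.diff(Cy, y))

div_a = div2d(x, y)      # outward radial flow
div_b = div2d(-x, -y)    # inward radial flow
div_c = div2d(-y, x)     # pure rotation about the origin
print(div_a, div_b, div_c)
```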
And if we sum up all these tiny cubes, the right-hand side of Eq. (7.11.24) is the volume integral of the divergence of $\boldsymbol{C}$. How about the left-hand side? It is the flux of $\boldsymbol{C}$ through the solid surface $S$; see the discussion related to Fig. 7.42. And that is Gauss's theorem, or Gauss's divergence theorem:

$$\text{Gauss's divergence theorem:}\quad \int_S \boldsymbol{C}\cdot\boldsymbol{n}\,dA = \int_V \nabla\cdot\boldsymbol{C}\,dV \qquad (7.11.25)$$

This proof is, however, not mathematically rigorous. It is certainly true that any domain can be cut up into cubes/boxes. But most domains have a curved boundary, so the domain is unlikely to be a union of boxes. It is not uncommon to argue that by taking the boxes to be smaller and smaller we can approximate any reasonable domain better and better, and hence, taking some sort of limit, the divergence theorem follows for any such domain.
In Section 8.5.2 I provide one application of Gauss’ divergence theorem to derive the three
dimensional heat conduction equation.
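Gauss's theorem can be verified exactly on a simple geometry. The following sketch (my own example, not the book's) takes $\boldsymbol{C} = (x^2, y^2, z^2)$ on the unit cube and compares the surface flux with the volume integral of $\nabla\cdot\boldsymbol{C}$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C = (x**2, y**2, z**2)   # a sample field on the unit cube [0,1]^3

# RHS of (7.11.25): volume integral of div C = 2x + 2y + 2z
divC = sum(sp.diff(Ci, v) for Ci, v in zip(C, (x, y, z)))
rhs = sp.integrate(divC, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# LHS of (7.11.25): flux through the six faces (outward normals +/- e_i)
flux = 0
for i, v in enumerate((x, y, z)):
    others = [w for w in (x, y, z) if w != v]
    # face v = 1 has outward normal +e_i; face v = 0 has outward normal -e_i
    flux += sp.integrate(C[i].subs(v, 1), (others[0], 0, 1), (others[1], 0, 1))
    flux -= sp.integrate(C[i].subs(v, 0), (others[0], 0, 1), (others[1], 0, 1))

print(flux, rhs)
```

Both sides evaluate to the same number, as the theorem promises.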
Now, the circulation of the fluid around the rectangle is the line integral, along the rectangle boundary, of the tangential component of the vector field, i.e. of $\boldsymbol{C}\cdot d\boldsymbol{s}$. The line integral is broken into four integrals along the four sides. Take side 1, for example; using the mean value theorem for integrals, i.e. Eq. (4.11.3), we can write

$$\int_{\text{side 1}} \boldsymbol{C}\cdot d\boldsymbol{s} = C_x(1)\Delta x$$

where $C_x(1)$ is the value of $C_x$ evaluated at some point on side 1. The precise location of this point does not matter. Doing similarly for the other sides, the integral is given by

$$\oint \boldsymbol{C}\cdot d\boldsymbol{s} = C_x(1)\Delta x + C_y(2)\Delta y - C_x(3)\Delta x - C_y(4)\Delta y \qquad (7.11.26)$$

Similarly to what we have done to get the divergence, we group the red terms and the blue terms:

$$\text{circulation along sides 1/3} = (C_x(1) - C_x(3))\Delta x = -\frac{\partial C_x}{\partial y}\Delta x\Delta y$$
$$\text{circulation along sides 2/4} = (C_y(2) - C_y(4))\Delta y = +\frac{\partial C_y}{\partial x}\Delta x\Delta y$$
If we plot this velocity field, it looks exactly like the one given in Fig. 7.35c. Then, the red term in Eq. (7.11.27), applied to $\boldsymbol{v}$, is given by

$$\frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} = 2\omega$$

Indeed, that red term is an indication of the rotation.
Instead of considering a rectangle in the $xy$ plane, we can consider rectangles in the $yz$ and $zx$ planes. Altogether, the circulations are given by

$$\text{rectangle in the } xy \text{ plane:}\quad \oint \boldsymbol{C}\cdot d\boldsymbol{s} = \left(\frac{\partial C_y}{\partial x} - \frac{\partial C_x}{\partial y}\right)\Delta x\Delta y$$
$$\text{rectangle in the } yz \text{ plane:}\quad \oint \boldsymbol{C}\cdot d\boldsymbol{s} = \left(\frac{\partial C_z}{\partial y} - \frac{\partial C_y}{\partial z}\right)\Delta y\Delta z$$
$$\text{rectangle in the } zx \text{ plane:}\quad \oint \boldsymbol{C}\cdot d\boldsymbol{s} = \left(\frac{\partial C_x}{\partial z} - \frac{\partial C_z}{\partial x}\right)\Delta z\Delta x$$
The three terms in the brackets are the three Cartesian components of a vector called the curl of $\boldsymbol{C}$, written as $\nabla\times\boldsymbol{C}$ (read "del cross C"), where $\times$ is the cross product (see Section 10.1.5 for a discussion on the cross product between two vectors). One way to memorize the formula for the curl of a vector field is to use the determinant of the following $3\times 3$ matrix:

$$\nabla\times\boldsymbol{C} = \begin{vmatrix} \boldsymbol{i} & \boldsymbol{j} & \boldsymbol{k}\\ \partial/\partial x & \partial/\partial y & \partial/\partial z\\ C_x & C_y & C_z \end{vmatrix} = \left(\frac{\partial C_z}{\partial y} - \frac{\partial C_y}{\partial z}\right)\boldsymbol{i} + \left(\frac{\partial C_x}{\partial z} - \frac{\partial C_z}{\partial x}\right)\boldsymbol{j} + \left(\frac{\partial C_y}{\partial x} - \frac{\partial C_x}{\partial y}\right)\boldsymbol{k} \qquad (7.11.28)$$
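The determinant recipe is easy to check symbolically. In this sketch (the example fields are my own choices), the rotation field $(-y, x, 0)$ has curl $(0, 0, 2)$, while the gradient field $\nabla(xyz) = (yz, xz, xy)$ has zero curl:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(C):
    # Components of the determinant formula in Eq. (7.11.28)
    Cx, Cy, Cz = C
    return (sp.diff(Cz, y) - sp.diff(Cy, z),
            sp.diff(Cx, z) - sp.diff(Cz, x),
            sp.diff(Cy, x) - sp.diff(Cx, y))

print(curl((-y, x, 0)))        # rotation about the z axis
print(curl((y*z, x*z, x*y)))   # a gradient field, grad(xyz)
```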
Now we return to Eq. (7.11.27). The term in the brackets is just the $z$ component of $\nabla\times\boldsymbol{C}$, and $\Delta x\Delta y$ is the area $\Delta a$ of our little square. So

$$\oint \boldsymbol{C}\cdot d\boldsymbol{s} = (\nabla\times\boldsymbol{C})\cdot\boldsymbol{n}\,\Delta a \qquad (7.11.29)$$
Figure 7.46
Thus we have

$$\oint \boldsymbol{C}\cdot d\boldsymbol{s} = \sum_i \oint_i \boldsymbol{C}\cdot d\boldsymbol{s} = \sum_i (\nabla\times\boldsymbol{C})\cdot\boldsymbol{n}\,\Delta a_i = \int_S (\nabla\times\boldsymbol{C})\cdot\boldsymbol{n}\,dA$$

which is the Stokes theorem or the Kelvin–Stokes theorem. It is named after Lord Kelvin and George Stokes.

$$\text{Stokes' theorem:}\quad \int_S (\nabla\times\boldsymbol{C})\cdot\boldsymbol{n}\,dA = \oint \boldsymbol{C}\cdot d\boldsymbol{s} \qquad (7.11.30)$$
If we return to 2D planes, then Stokes' theorem becomes Green's theorem, named after the British mathematical physicist George Green. As $\boldsymbol{C}$ is now a two-dimensional vector field, the integrand in the surface integral is simply the $z$-component of the curl of $\boldsymbol{C}$. Thus, Green's theorem states that

$$\text{Green's theorem:}\quad \int_S \left(\frac{\partial C_y}{\partial x} - \frac{\partial C_x}{\partial y}\right)dA = \oint (C_x\,dx + C_y\,dy) \qquad (7.11.31)$$
That’s how physicists present a theorem. Mathematicians are completely different. Here is how
a mathematician presents Green’s theorem.
The main content is of course the same but with rigor. To use the theorem properly we
need to pay attention to the conditions mentioned in the theorem, especially about the curve C
(Fig. 7.47). For example, if the curve is open, forget Green’s theorem.
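A numerical illustration of Green's theorem (my own example): for $\boldsymbol{C} = (-y, x)$ the integrand on the left of (7.11.31) is 2, so over the unit disk the area integral is $2\pi$; the boundary integral over the unit circle, traversed counterclockwise, gives the same value:

```python
import numpy as np

# Field C = (-y, x): dCy/dx - dCx/dy = 2, so the area integral over
# the unit disk is 2 * pi. Check the boundary integral numerically.
n = 200000
t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
x, y = np.cos(t), np.sin(t)
dt = 2 * np.pi / n
dx, dy = -np.sin(t) * dt, np.cos(t) * dt   # differentials along the circle

rhs = np.sum(-y * dx + x * dy)   # boundary line integral, counterclockwise
lhs = 2 * np.pi                  # area integral of the constant integrand 2
print(lhs, rhs)
```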
Figure 7.47: Illustration of positively oriented, piecewise smooth, simple closed curves.
History note 7.3: George Green (14 July 1793 – 31 May 1841)
George Green (14 July 1793 – 31 May 1841) was a British mathemati-
cal physicist who wrote An Essay on the Application of Mathematical
Analysis to the Theories of Electricity and Magnetism in 1828. The
essay introduced several important concepts, among them a theorem
similar to the modern Green’s theorem, the idea of potential functions
as currently used in physics, and the concept of what are now called
Green’s functions. Green was the first person to create a mathematical
theory of electricity and magnetism and his theory formed the foun-
dation for the work of other scientists such as James Clerk Maxwell, William Thomson,
and others. His work on potential theory ran parallel to that of Carl Friedrich Gauss.
The son of a prosperous miller and a miller by trade himself, Green was almost completely
self-taught in mathematical physics; he published his most important work five years
before he went to the University of Cambridge at the age of 40. He graduated with a BA
in 1838 as a 4th Wrangler (the 4th highest scoring student in his graduating class, coming
after James Joseph Sylvester who scored 2nd).
Now we do something remarkable: we remove $f$ from the above and define a gradient operator as

$$\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)$$

And this operator is a vector. But it is not a vector on its own: we have to attach it to something else so that it has a meaning. What can we do with this vector? Recall that we can multiply a vector with a scalar, we can take the dot product of two vectors and finally we can take the cross product of two vectors. Now, we define all these operations for our new vector $\nabla$ with a scalar $f$ and a vector field $\boldsymbol{C}$:

$$\text{scalar multiplication:}\quad \nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right)$$
$$\text{dot product:}\quad \nabla\cdot\boldsymbol{C} = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)\cdot(C_x, C_y, C_z) = \frac{\partial C_x}{\partial x} + \frac{\partial C_y}{\partial y} + \frac{\partial C_z}{\partial z}$$
$$\text{cross product:}\quad \nabla\times\boldsymbol{C} = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)\times(C_x, C_y, C_z) \qquad (7.11.32)$$

What have we achieved? Except for $\nabla f$ (which is where we started), we have obtained the divergence and curl of a vector field, which match the definitions discovered previously when we were doing physics!
Having now these new operators, we're going to find the rules for them. And of course we base our thinking on the rules that we know for the differentiation of functions of a single variable. For two functions $f(x)$ and $g(x)$, we know the sum and product rules:

$$\text{sum rule:}\quad \frac{d}{dx}(f + g) = \frac{df}{dx} + \frac{dg}{dx}$$
$$\text{product rule:}\quad \frac{d}{dx}(fg) = \frac{df}{dx}g + f\frac{dg}{dx}$$

From this sum rule, now considering $f(x,y,z)$, $g(x,y,z)$ and two vector fields $\boldsymbol{a}$ and $\boldsymbol{b}$, we have the sum rules

$$\text{sum rule 1:}\quad \nabla(f + g) = \nabla f + \nabla g$$
$$\text{sum rule 2:}\quad \nabla\cdot(\boldsymbol{a} + \boldsymbol{b}) = \nabla\cdot\boldsymbol{a} + \nabla\cdot\boldsymbol{b} \qquad (7.11.33)$$
$$\text{sum rule 3:}\quad \nabla\times(\boldsymbol{a} + \boldsymbol{b}) = \nabla\times\boldsymbol{a} + \nabla\times\boldsymbol{b}$$

We have not one sum rule but three because we have three combinations of $\nabla$ with $f$ and $\boldsymbol{a}$, as shown in Eq. (7.11.32). The proof is straightforward, so we just present the proof of the second sum rule.

The pros of this notation are that it saves space and that it works for vectors in $\mathbb{R}^n$ for any $n$, not just three.

Now come the product rules. First, from $\nabla f$ we have $\nabla(fg)$ and $\nabla(\boldsymbol{a}\cdot\boldsymbol{b})$. Second, from $\nabla\cdot\boldsymbol{a}$ we have $\nabla\cdot(f\boldsymbol{a})$ and $\nabla\cdot(\boldsymbol{a}\times\boldsymbol{b})$. Third, from $\nabla\times\boldsymbol{a}$ we have $\nabla\times(f\boldsymbol{a})$ and $\nabla\times(\boldsymbol{a}\times\boldsymbol{b})$. In total, we have six product rules; they are given in Eq. (7.11.34).
Second derivatives The grad, div and curl operators involve only first derivatives. How about second derivatives?

Start with a scalar $f(x,y,z)$; we have $\nabla f$, which is a vector. And to a vector we can apply a div and a curl, so we will have $\nabla\cdot(\nabla f)$ and $\nabla\times(\nabla f)$;

Start with a vector field $\boldsymbol{C}$; we have $\nabla\cdot\boldsymbol{C}$, which is a scalar, and to a scalar we can apply a grad: $\nabla(\nabla\cdot\boldsymbol{C})$;

Start with a vector field $\boldsymbol{C}$; we have $\nabla\times\boldsymbol{C}$, which is a vector, and to a vector we can apply a div: $\nabla\cdot(\nabla\times\boldsymbol{C})$, or a curl: $\nabla\times(\nabla\times\boldsymbol{C})$.

We now compute all these possibilities and see what we get. Let's start with $\nabla\cdot(\nabla f)$:

$$\nabla\cdot(\nabla f) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} = \nabla^2 f = \Delta f$$

So, $\nabla\cdot(\nabla f)$ is a scalar, called the Laplacian of $f$ and denoted by $\nabla^2 f$. This operator appears again and again in physics (and engineering). We can define the Laplacian of a vector field $\boldsymbol{C}$ as a vector field with the components being the Laplacians of the components of the vector:

$$\nabla^2\boldsymbol{C} = (\nabla^2 C_x, \nabla^2 C_y, \nabla^2 C_z)$$

Moving on to $\nabla\times(\nabla f)$, which is the curl of the grad of $f$. It is the zero vector, due to the symmetry of partial derivatives, $\frac{\partial^2 f}{\partial x\partial y} = \frac{\partial^2 f}{\partial y\partial x}$. It is interesting that $\nabla\cdot(\nabla\times\boldsymbol{C})$, which is the div of a curl, is also zero.

You can check the last formula by computing the components of $\nabla\times\boldsymbol{C}$, and then computing the curl of that vector, and you will see the RHS appear. The formula itself is not important; what is important is that the curl of a curl does not give us anything new.
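Both identities, curl of a grad is zero and div of a curl is zero, can be confirmed symbolically; the particular fields below are arbitrary choices of mine:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))

def div(C):
    return sp.diff(C[0], x) + sp.diff(C[1], y) + sp.diff(C[2], z)

def curl(C):
    return (sp.diff(C[2], y) - sp.diff(C[1], z),
            sp.diff(C[0], z) - sp.diff(C[2], x),
            sp.diff(C[1], x) - sp.diff(C[0], y))

f = sp.sin(x) * y**2 * sp.exp(z)       # an arbitrary smooth scalar field
C = (x*y*z, sp.cos(y)*z, x + y**2)     # an arbitrary smooth vector field

curl_grad = tuple(sp.simplify(c) for c in curl(grad(f)))   # should be (0, 0, 0)
div_curl = sp.simplify(div(curl(C)))                       # should be 0
print(curl_grad, div_curl)
```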
$$\nabla\cdot(f\boldsymbol{a}) = f(\nabla\cdot\boldsymbol{a}) + \nabla f\cdot\boldsymbol{a}$$

Integrating both sides of it over a volume $B$ with boundary surface $\partial B$, we get

$$\int_B \nabla\cdot(f\boldsymbol{a})\,dV = \int_B f(\nabla\cdot\boldsymbol{a})\,dV + \int_B \nabla f\cdot\boldsymbol{a}\,dV$$

And using Gauss's divergence theorem for the LHS to convert it to a surface integral on the boundary, we obtain

$$\int_{\partial B} (f\boldsymbol{a})\cdot\boldsymbol{n}\,dS = \int_B f(\nabla\cdot\boldsymbol{a})\,dV + \int_B \nabla f\cdot\boldsymbol{a}\,dV$$

From this result, we can obtain the gradient theorem. Let's consider a constant vector $\boldsymbol{a}$ and a smooth function $u$ in place of $f$. From Eq. (7.11.36) we get ($\nabla\cdot\boldsymbol{a} = 0$)

$$\int_B \nabla u\cdot\boldsymbol{a}\,dV = \int_{\partial B} (u\boldsymbol{a})\cdot\boldsymbol{n}\,dS$$

And since this holds for any constant vector $\boldsymbol{a}$, we get the gradient theorem:

$$\int_V \nabla u\,dV = \int_S u\boldsymbol{n}\,dA \qquad (7.11.37)$$
First identity. Assume two scalar functions $u(x,y)$ and $v(x,y)$ (the extension to $u(x,y,z)$ is straightforward); we then have

$$(vu_x)_x = v_x u_x + v u_{xx}, \qquad (vu_y)_y = v_y u_y + v u_{yy}$$

where the notation $u_x$ means the first derivative of $u$ with respect to $x$. Adding up these identities gives

$$\nabla\cdot(v\nabla u) = \nabla v\cdot\nabla u + v\Delta u \qquad (7.11.38)$$

Integrating both sides of it over a volume $B$ with boundary surface $\partial B$, we get

$$\int_B \nabla\cdot(v\nabla u)\,dV = \int_B \nabla v\cdot\nabla u\,dV + \int_B v\Delta u\,dV$$

Now, using again the Gauss divergence theorem for the LHS, we have

$$\int_{\partial B} (v\nabla u)\cdot\boldsymbol{n}\,dS = \int_B \nabla v\cdot\nabla u\,dV + \int_B v\Delta u\,dV$$

It is convenient to introduce the normal derivative

$$\frac{\partial u}{\partial n} := \nabla u\cdot\boldsymbol{n}$$

With this new term, the first Green's identity can also be written as, for a pair $(u, v)$,

$$\int_B v\Delta u\,dV = -\int_B \nabla v\cdot\nabla u\,dV + \int_{\partial B} v\frac{\partial u}{\partial n}\,dS$$

Second identity. Writing the first Green's identity for two pairs, $(u, v)$ and $(v, u)$, we get

$$\int_B v\Delta u\,dV = -\int_B \nabla v\cdot\nabla u\,dV + \int_{\partial B} v\frac{\partial u}{\partial n}\,dS$$
$$\int_B u\Delta v\,dV = -\int_B \nabla u\cdot\nabla v\,dV + \int_{\partial B} u\frac{\partial v}{\partial n}\,dS$$

What do we do next? We subtract the first from the second, as the red terms cancel each other:

$$\int_B (u\Delta v - v\Delta u)\,dV = \int_{\partial B}\left(u\frac{\partial v}{\partial n} - v\frac{\partial u}{\partial n}\right)dS$$
$$\boldsymbol{a} = a_1\boldsymbol{e}_1 + a_2\boldsymbol{e}_2 + a_3\boldsymbol{e}_3 = a_i\boldsymbol{e}_i \qquad (7.11.39)$$

where we have used the Einstein summation rule in the last equality. We can write the dot product of two vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ as

$$\boldsymbol{a}\cdot\boldsymbol{b} = (a_i\boldsymbol{e}_i)\cdot(b_j\boldsymbol{e}_j) = a_i b_j\,\boldsymbol{e}_i\cdot\boldsymbol{e}_j$$

Now, we know that the three basis vectors are orthonormal, so we can easily compute the dot product of any two of them; it is given by

$$\boldsymbol{e}_i\cdot\boldsymbol{e}_j = \begin{cases} 1 & \text{if } i = j\\ 0 & \text{otherwise} \end{cases} \qquad (7.11.40)$$

This defines the Kronecker delta $\delta_{ij}$, i.e. $\boldsymbol{e}_i\cdot\boldsymbol{e}_j = \delta_{ij}$; with it,

$$\boldsymbol{a}\cdot\boldsymbol{b} = a_i b_j\delta_{ij} = a_i b_i = a_j b_j = a_1 b_1 + a_2 b_2 + a_3 b_3$$
§
We move away from i , j and k and use e i as we are now using indicial notation.
So, the dot product gave us a new symbol $\delta_{ij}$. The cross product should lead to a new symbol too. Let's discover it. The cross product of two vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ is a vector denoted by $\boldsymbol{a}\times\boldsymbol{b}$:

$$\boldsymbol{a}\times\boldsymbol{b} = (a_i\boldsymbol{e}_i)\times(b_j\boldsymbol{e}_j) = a_i b_j\,\boldsymbol{e}_i\times\boldsymbol{e}_j \qquad (7.11.42)$$

And of course we're going to compute $\boldsymbol{e}_i\times\boldsymbol{e}_j$ (we know how to compute the cross product of two vectors). The results are

$$\boldsymbol{e}_1\times\boldsymbol{e}_1 = \boldsymbol{0} \qquad \boldsymbol{e}_1\times\boldsymbol{e}_2 = \boldsymbol{e}_3 \qquad \boldsymbol{e}_1\times\boldsymbol{e}_3 = -\boldsymbol{e}_2$$
$$\boldsymbol{e}_2\times\boldsymbol{e}_1 = -\boldsymbol{e}_3 \qquad \boldsymbol{e}_2\times\boldsymbol{e}_2 = \boldsymbol{0} \qquad \boldsymbol{e}_2\times\boldsymbol{e}_3 = \boldsymbol{e}_1 \qquad (7.11.43)$$
$$\boldsymbol{e}_3\times\boldsymbol{e}_1 = \boldsymbol{e}_2 \qquad \boldsymbol{e}_3\times\boldsymbol{e}_2 = -\boldsymbol{e}_1 \qquad \boldsymbol{e}_3\times\boldsymbol{e}_3 = \boldsymbol{0}$$

This allows us to write

$$\boldsymbol{e}_j\times\boldsymbol{e}_k = \epsilon_{ijk}\boldsymbol{e}_i \qquad (7.11.44)$$

where $\epsilon_{ijk}$ is the permutation symbol or the Levi-Civita symbol, which is defined by

$$\epsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is } (1,2,3),\ (2,3,1), \text{ or } (3,1,2)\\ -1 & \text{if } (i,j,k) \text{ is } (3,2,1),\ (1,3,2), \text{ or } (2,1,3)\\ 0 & \text{if } i = j,\ j = k, \text{ or } k = i \end{cases} \qquad (7.11.45)$$
$$\boldsymbol{a}\times\boldsymbol{b} = a_j b_k\,\boldsymbol{e}_j\times\boldsymbol{e}_k = \epsilon_{ijk}a_j b_k\,\boldsymbol{e}_i \qquad (7.11.46)$$

Denote by $\boldsymbol{c}$ the cross product $\boldsymbol{a}\times\boldsymbol{b}$; then we have $\boldsymbol{c} = \epsilon_{ijk}a_j b_k\boldsymbol{e}_i$, i.e., the components of $\boldsymbol{c}$ are $c_i = \epsilon_{ijk}a_j b_k$, written explicitly

$$c_1 = \epsilon_{1jk}a_j b_k = a_2 b_3 - a_3 b_2$$
$$c_2 = \epsilon_{2jk}a_j b_k = a_3 b_1 - a_1 b_3$$
$$c_3 = \epsilon_{3jk}a_j b_k = a_1 b_2 - a_2 b_1$$
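The component formulas $c_i = \epsilon_{ijk}a_j b_k$ can be checked numerically by building the Levi-Civita symbol from its definition (7.11.45) and contracting it against two sample vectors; the result should match numpy's built-in cross product:

```python
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k] from the definition in Eq. (7.11.45)
# (0-based indices here, so (1,2,3) becomes (0,1,2), etc.)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
for i, j, k in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:
    eps[i, j, k] = -1.0   # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])

# c_i = eps_ijk a_j b_k (Einstein summation over the repeated indices j and k)
c = np.einsum('ijk,j,k->i', eps, a, b)
print(c, np.cross(a, b))
```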
We’re now ready to prove the product rule 4 in Eq. (7.11.34) in a much elegant manner. First
it is necessary to express the curl of a vector using the Levi-Civita symbol:
@
a b D aj bk ij k e i H) r a D bk ij k e i D bk;j ij k e i (7.11.47)
@xj
where the notation bk;j means partial derivative of bk with respect to xj .
@
r .a b/ D .aj bk ij k / D .aj bk ij k /;i
@xi
D ij k aj;i bk C ij k aj bk;i
D .kij aj;i /bk aj j i k bk;i
„ ƒ‚ … „ ƒ‚ …
.ra/b a.rb/
where the minus comes from the fact that ij k D j i k , a property can be directly seen from its
definition.
$$\begin{aligned}\nabla f &= \frac{\partial f}{\partial x}\boldsymbol{i} + \frac{\partial f}{\partial y}\boldsymbol{j}\\ &= \left(\frac{\partial f}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial f}{\partial\theta}\frac{\partial\theta}{\partial x}\right)(\cos\theta\,\hat{\boldsymbol{r}} - \sin\theta\,\hat{\boldsymbol{\theta}}) + \left(\frac{\partial f}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial f}{\partial\theta}\frac{\partial\theta}{\partial y}\right)(\sin\theta\,\hat{\boldsymbol{r}} + \cos\theta\,\hat{\boldsymbol{\theta}})\\ &= \ldots \quad (\text{using Eq. (7.11.49)})\\ &= \frac{\partial f}{\partial r}\hat{\boldsymbol{r}} + \frac{1}{r}\frac{\partial f}{\partial\theta}\hat{\boldsymbol{\theta}}\end{aligned} \qquad (7.11.50)$$
Cylindrical
$$\sin z = \sin(x + iy) = \sin x\cos(iy) + \sin(iy)\cos x = \underbrace{\sin x\cosh y}_{u(x,y)} + i\,\underbrace{\sinh y\cos x}_{v(x,y)}$$

(where we used the identities $\cos(iy) = \cosh y$ and $\sin(iy) = i\sinh y$; check Eq. (3.14.6)).
$$e^z := 1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots \qquad (7.12.1)$$

which is reasonable given the fact that this definition is consistent with the definition of $y = e^x$. Now, we want to check whether $e^{z_1}e^{z_2} = e^{z_1+z_2}$ using Eq. (7.12.1). Why? Because that is the rule that the ordinary exponential function obeys. The new exponential function should obey it too! We have

$$e^{z_1} = 1 + \frac{z_1}{1!} + \frac{z_1^2}{2!} + \frac{z_1^3}{3!} + \cdots$$
$$e^{z_2} = 1 + \frac{z_2}{1!} + \frac{z_2^2}{2!} + \frac{z_2^3}{3!} + \cdots$$

And therefore, the product $e^{z_1}e^{z_2}$:

$$e^{z_1}e^{z_2} = \left(1 + \frac{z_1}{1!} + \frac{z_1^2}{2!} + \cdots\right)\left(1 + \frac{z_2}{1!} + \frac{z_2^2}{2!} + \cdots\right)$$
What we’re currently dealing with is a product of two power series. It’s better
P1to develop a
z1 z2 n
formula
P1 for that and we get back to e e later. Considering two power series nD0 an x , and
m
mD0 m x , their product is given by
b
1
! 1 !
X X
an x n bm x m
nD0 mD0
To get the formula, let’s try the first few terms, and hope for a pattern:
If we look at the term .a0 b1 Ca1 b0 /x 1 we can see that the sum of the indices equals the exponent
of x 1 (a0 b1 has the indices sum to 1 for example). With this, we have discoverred the Cauchy
product formula for two power series
1
! 1 ! 1 n
!
X X X X
an x n bm x m D ak bn k x n (7.12.2)
nD0 mD0 nD0 kD0
With this tool, we go back to tackle the quantity $e^{z_1}e^{z_2}$, writing $e^{z_1}$ as a power series, and using
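A quick numerical check of the Cauchy product (7.12.2) applied to the exponential series (my own sketch, truncating both series at 30 terms): the product of the truncated series of $e^{z_1}$ and $e^{z_2}$ should agree with $e^{z_1+z_2}$:

```python
import math
import cmath

def cauchy_product(a, b):
    # c_n = sum_{k=0}^{n} a_k * b_{n-k}, as in Eq. (7.12.2)
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

N = 30
z1, z2 = 0.7 + 0.3j, -0.2 + 1.1j
a = [z1**n / math.factorial(n) for n in range(N)]   # series coefficients of e^{z1}
b = [z2**n / math.factorial(n) for n in range(N)]   # series coefficients of e^{z2}

product = sum(cauchy_product(a, b))   # truncated series value of e^{z1} e^{z2}
print(product, cmath.exp(z1 + z2))
```

Each Cauchy coefficient equals $(z_1+z_2)^n/n!$ by the binomial theorem, which is exactly the argument in the text.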
$$\ln z = w$$

Writing $z = re^{i\theta}$, we can now express it another way because $z = e^w$:

$$z = e^w = e^{u+iv} = e^u e^{iv}$$

Now, we have the same complex variable written in two forms, $z = re^{i\theta}$ and $z = e^u e^{iv}$, so we can deduce that

$$r = e^u \ (\implies u = \ln r), \qquad v = \theta + 2n\pi$$

Finally, the logarithm of a complex number is given by

$$\ln z = \ln r + i(\theta + 2n\pi) \qquad (7.12.3)$$
Powers. We know how to compute $(3+2i)^n$ using de Moivre's formula. But we do not know what $(3+2i)^{2+3i}$ is. Given a complex variable $z$ and a complex constant $a$, we define $z^a$ in the same manner as for real numbers:

$$z^a := e^{a\ln z}$$

Note that the RHS of this equation is completely meaningful: we know $\ln z$, thus $a\ln z$ and its exponential. Now, using Eq. (7.12.3) for $\ln z$, we obtain the expression for $z^a$.
$$z^{1/n} = \sqrt[n]{z} = \exp\left(\frac{1}{n}\ln r + i\frac{\theta}{n} + i\frac{2m\pi}{n}\right) = \exp\left(\frac{\ln r}{n}\right)\exp\left(i\frac{\theta}{n}\right)\exp\left(i\frac{2m\pi}{n}\right) = \sqrt[n]{r}\,e^{i(\theta/n + 2m\pi/n)}$$

With the special case of $z = 1$ (with $r = 1$, $\theta = 0$), the $n$th roots of one are thus given by

$$\sqrt[n]{1} = e^{i(2\pi/n)m}, \qquad m = 0, 1, \ldots, n-1$$

which are the vertices of a regular $n$-sided polygon inscribed in the unit circle.
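The $n$th roots of one are easy to tabulate numerically; this sketch lists the five 5th roots and confirms that each one, raised to the 5th power, returns 1 and lies on the unit circle:

```python
import cmath

def nth_roots_of_unity(n):
    # m-th root: e^{i (2*pi/n) m}, m = 0, 1, ..., n-1
    return [cmath.exp(1j * 2 * cmath.pi * m / n) for m in range(n)]

roots = nth_roots_of_unity(5)
for w in roots:
    print(w, abs(w**5 - 1))
```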
construct $z = f(x + iy)$;
assign $\arg z$ a hue following the color wheel, and encode the magnitude $|z|$ by other means, such as brightness or saturation (there are many options for this).
The final result is a matrix of pixels of different RGB values. Fig. 7.49 shows the domain coloring plots of $f(z) = \sin z^{-1}$ and $f(z) = \tan z^{-1}$. This way of visualizing complex functions was proposed by Frank Farris, an American mathematician working at Santa Clara University, possibly around 1998.
Figure 7.49: Domain coloring based visualization of complex functions using ComplexPortraits.jl.
Two-plane approach. Instead of using only one plane, we can visualize complex functions using two planes: the $xy$ plane and the $uv$ plane. To demonstrate the idea, consider $f(z) = z^2 + 1$; for the grid line $x = 1$ we have

$$u(1, y) = 2 - y^2, \qquad v(1, y) = 2y$$

which can be combined to get $u = 2 - v^2/4$, which is a parabola. Similarly, consider the grid line $y = 1$; it is mapped to

$$u(x, 1) = x^2, \qquad v(x, 1) = 2x$$

which is also a parabola. It can be shown that these two parabolas are orthogonal. We can repeat this process for other grid lines, and the result is shown in Fig. 7.50, where the grid lines $x = a$ are colored red and the lines $y = b$ blue. The plane in Fig. 7.50a is mapped, or transformed, to the one in Fig. 7.50b.
$$f'(x_0) = \lim_{h\to 0}\frac{f(x_0 + h) - f(x_0)}{h}$$
Phu Nguyen, Monash University © Draft version
Chapter 7. Multivariable calculus 601
Figure 7.50: Visualization of complex functions as a mapping from the xy plane to uv plane (using
desmos). Note that the mapping preserves the angle between the grid lines: the grid lines in the uv plane
are still perpendicular to each other. Such a mapping is called a conformal mapping.
when this limit exists. We mimic this for complex functions: the complex function $f(z) = u(x,y) + iv(x,y)$ with $z = x + iy$ has a derivative at $z_0 = x_0 + iy_0$ defined as

$$f'(z_0) = \frac{\partial v}{\partial y}(x_0, y_0) - i\,\frac{\partial u}{\partial y}(x_0, y_0) \qquad (7.12.6)$$

In order to have $f'(z_0)$, at least the two values given in Eqs. (7.12.5) and (7.12.6) must be equal, because if they are not equal we definitely do not have $f'(z_0)$. And this leads to the following equations

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} \qquad (7.12.7)$$
History note 7.4: Georg Bernhard Riemann (17 September 1826 – 20 July 1866)
Georg Friedrich Bernhard Riemann was a German mathematician
who made significant contributions to analysis, number theory, and
differential geometry. In the field of real analysis, he is mostly known
for the first rigorous formulation of the integral, the Riemann integral,
and his work on Fourier series. His contributions to complex analysis
include most notably the introduction of Riemann surfaces, breaking
new ground in a natural, geometric treatment of complex analysis. His
1859 paper on the prime-counting function, containing the original statement of the Rie-
mann hypothesis, is regarded as a foundational paper of analytic number theory. Through
his pioneering contributions to differential geometry, Riemann laid the foundations of
the mathematics of general relativity. He is considered by many to be one of the greatest
mathematicians of all time.
Contents
8.1 Mathematical models and differential equations . . . . . . . . . . . . . . 604
8.2 Models of population growth . . . . . . . . . . . . . . . . . . . . . . . . . 606
8.3 Ordinary differential equations . . . . . . . . . . . . . . . . . . . . . . . 608
8.4 Partial differential equations: a classification . . . . . . . . . . . . . . . . 615
8.5 Derivation of common PDEs . . . . . . . . . . . . . . . . . . . . . . . . . 615
8.6 Linear partial differential equations . . . . . . . . . . . . . . . . . . . . . 623
8.7 Dimensionless problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.8 Harmonic oscillation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
8.9 Solving the diffusion equation . . . . . . . . . . . . . . . . . . . . . . . . 652
8.10 Solving the wave equation: d’Alembert’s solution . . . . . . . . . . . . . 654
8.11 Solving the wave equation . . . . . . . . . . . . . . . . . . . . . . . . . . 658
8.12 Fourier series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
8.13 Classification of second order linear PDEs . . . . . . . . . . . . . . . . . 663
8.14 Fluid mechanics: Navier Stokes equation . . . . . . . . . . . . . . . . . . 663
In this chapter we discuss what is probably the most important application of calculus: differential equations. These are the equations that describe many laws of nature. In classical physics, we have to mention Newton's second law $F = m\ddot{x}$ that describes motion, Fourier's heat equation $\dot{\theta} = \alpha^2\,\partial^2\theta/\partial x^2$ that describes how heat is transferred in a medium, Maxwell's equations describing electromagnetism and the Navier-Stokes equations that calculate how fluids move. In quantum mechanics, we have the Schrödinger equation. In biology, we can cite the Lotka–Volterra equations, also known as the predator–prey equations, a pair of first-order nonlinear differential equations used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. In finance there is the Black–Scholes equation.
Chapter 8. Differential equations 604
...partial differential equations are the basis of all physical theorems. In the theory of sound in gases, liquids and solids, in the investigations of elasticity, in optics, everywhere partial differential equations formulate basic laws of nature which can be checked against experiments.
The chapter introduces the mathematics used to model the real world. The attention is on
how to derive these equations more than on how to solve them. Yet, some exact solutions
are presented. Numerical solutions to differential equations are treated in Chapter 11. Topics
which are too mathematical such as uniqueness are omitted. Also discussed is the problem of
mechanical vibrations: simple harmonics and waves.
The following excellent books were consulted for the materials presented in this chapter:
Partial differential equations for scientists and engineers by Stanley Farlow [14];
The plan of this chapter is as follows. We start with a toy problem in Section 8.1 to get the
feeling of what mathematical modeling looks like. Then, we become a bit more serious with
a real differential equation describing the population growth (Section 8.2). In Section 8.3, we
discuss ordinary differential equations. Next, we move to partial differential equations (such as the wave equation $u_{tt} = c^2 u_{xx}$). We start with Section 8.4, in which we get familiar with partial differential equations and discuss some terminology. The derivation of common partial differential equations (e.g. the heat equation, the wave equation and so on) is treated in Section 8.5. Dimensionless problems are discussed in Section 8.7. Harmonic oscillation is given in Section 8.8. How to solve the heat (diffusion) equation is presented in Section 8.9. Solutions of the wave equation are given in Sections 8.10 and 8.11. It was when solving these two equations that the idea of Fourier series was born.
simplified model of reality. The first assumption is that the ball is always a sphere. The second assumption is that the density of the snow does not change in time. These assumptions might not be enough for a very good model, but we have to start with something anyway. In summary, we have the following set of facts to build our model:
The rate of change of the mass of the ball is proportional to the surface area of the ball;
At any time the ball is a sphere;
The density of the snow is constant.
All we have to do is to translate the above facts (written in English) to the language of mathemat-
ics. The assumption is that all variables are continuous. Thus, we can use differential calculus
to differentiate them as we want, even though for some problems such as population growth the
population is not continuous! Remember that we’re building a model. As the mass is density
times volume, we can determine the mass with r.t/ representing the radius of the snow ball at
time t . And we also compute its derivative w.r.t t (because the derivative captures changes):
$$M = \frac{4}{3}\pi\rho r^3 \implies \frac{dM}{dt} = 4\pi\rho r^2\frac{dr}{dt} \qquad (8.1.1)$$

Using the experimental data on the rate of change of $M$, we can write

$$\frac{dM}{dt} = -k(4\pi r^2) \implies 4\pi\rho r^2\frac{dr}{dt} = -k(4\pi r^2) \implies \boxed{\frac{dr}{dt} = -\frac{k}{\rho}} \qquad (8.1.2)$$

where $k$ is a constant that can only be experimentally determined. The minus sign reflects the fact that the mass is decreasing. Quantities such as $\rho$ and $k$, whose values do not change in time, are called parameters.
The equation in the box is a differential equation, an equation that contains derivatives. In fact, it is an ordinary differential equation, as there exist partial differential equations that involve partial derivatives. In this example, $t$ is the only independent variable and $r(t)$ is the dependent variable. An ordinary differential equation expresses a relation between a dependent variable (a function), its derivatives (first, second derivatives etc.) and the independent variable: $F(r(t), r', r'', \ldots, r^{(n)}, t) = 0$. If there is more than one independent variable, we have a partial differential equation, as the derivatives are partial derivatives.
Now we have an equation. The next step is to solve it to find the solution. For what purpose? For the prediction of the radius of the snowball at any time instant. It is the prediction of future events that is the ultimate goal of mathematical modeling of either natural phenomena or engineering systems.
For this particular problem, it is easy to find the solution: by integrating both sides of the boxed equation in Eq. (8.1.2):

$$\frac{dr}{dt} = -c, \quad c := \frac{k}{\rho} \implies r(t) = -ct + A \qquad (8.1.3)$$
The notation r.t / is read “r at time t”, and the parentheses tell us that our variable is a function of time.
where $A$ is a real number. But why do we get not one but many solutions? That is because the radius at time $t$ depends, of course, on the initial radius of the ball. So, we must know this initial radius (denote it by $R$); then, by substituting $t = 0$ in Eq. (8.1.3), we get $A = R$. Thus, $r(t) = R - ct$. Now, we can predict when the ball is completely melted: it is when $r(t_m) = 0$, i.e., $t_m = R/c$. And we need to check this against observations. If the prediction and the observation are in good agreement, we have discovered a law. If not, our assumptions are too strict and we need to refine them and refine our model.
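A tiny sketch of the resulting prediction, with made-up values for $R$ and $c$ (the text gives none):

```python
# Snowball model r(t) = R - c*t; hypothetical values R = 10 cm, c = 0.5 cm/day
R, c = 10.0, 0.5

def radius(t):
    # Radius at time t, clamped at zero once the ball has melted
    return max(R - c * t, 0.0)

t_melt = R / c   # the melting time t_m = R/c, where r(t_m) = 0
print(radius(0.0), radius(t_melt), t_melt)
```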
Here the overdot denotes differentiation with respect to time, following Newton. Now, we have to solve the boxed ordinary differential equation. Luckily for us, we can solve this equation. The solution, i.e. $N(t)$, should involve the exponential function $e^{\lambda t}$ (why?). Here is how:

$$\frac{dN}{dt} = \lambda N \implies \frac{dN}{N} = \lambda\,dt \implies \int_{N_0}^{N}\frac{dN}{N} = \int_0^t \lambda\,dt \implies N(t) = N_0 e^{\lambda t} \qquad (8.2.2)$$

where we've assumed that the starting time is $t = 0$. Looking at the solution we can understand why this model is called an exponential growth model.
How good is this model? To answer that (pure mathematicians do not care), scientists use real data. For example, Table 8.1 shows USA population statistics taken from [8]. Of course there is much more data, but we need to use just a small portion of it to calibrate the model. Calibrating a model means finding values for the parameters (or constants) in the model. In the context here, we need to find $N_0$ and $\lambda$ using the data in Table 8.1.
Year   Population (millions)
1790   3.9
1800   5.3
1810   7.2
We have data starting from the year 1790, thus $t = 0$ is that year and then $N_0 = 3.9$ million. For $\lambda$, use the data for 1800; noting that $t$ in the model is in units of 10 years, 1800 corresponds to $t = 1$:

$$5.3 = N(1) = N_0 e^{\lambda} \implies \lambda = \ln\frac{5.3}{3.9} = 0.307$$
Now it is time for prediction. The calibrated model is used to predict the population up to 1870. The results given in Table 8.2 indicate that the model is in good agreement with the data up to about 1870; at that year the error is nearly 20%. It's time for an improved model.
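The calibration above can be reproduced in a few lines. This is a minimal sketch, assuming only the three data points of Table 8.1; the printed predictions are what one would compare against the later census figures in Table 8.2:

```python
import math

# Census data from Table 8.1 (population in millions); t is in decades since 1790
data = {1790: 3.9, 1800: 5.3, 1810: 7.2}

# Calibrate: N0 from the first data point, lambda from the second (t = 1)
N0 = data[1790]
lam = math.log(data[1800] / N0)          # lambda = ln(5.3/3.9) ≈ 0.307

def N(t):
    """Exponential growth model N(t) = N0 * exp(lambda * t)."""
    return N0 * math.exp(lam * t)

# Predict the population up to 1870; these values would be compared
# against the real census figures in Table 8.2
for year in range(1790, 1880, 10):
    t = (year - 1790) / 10
    print(year, round(N(t), 1))
```

Note how well the prediction for 1810 ($t = 2$) already matches the measured 7.2 millions; the model only starts to drift for later decades.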
For students who would like to become scientists trying to understand our world, no one describes better how we, human beings, unravel the mysteries of the world than Richard Feynman in his interesting book The Pleasure of Finding Things Out:
. . . a fun analogy in trying to get some idea of what we’re doing in trying to under-
stand nature, is to imagine that the gods are playing some great game like chess. . .
and you don’t know the rules of the game, but you’re allowed to look at the board, at
least from time to time. . . and from these observations you try to figure out what the
rules of the game are, what the rules of the pieces moving are. You might discover
after a bit, for example, that when there’s only one bishop around on the board that
the bishop maintains its color. Later on you might discover the law for the bishop as
it moves on the diagonal, which would explain the law that you understood before
– that it maintained its color – and that would be analogous to discovering one
law and then later finding a deeper understanding of it. Then things can happen,
everything’s going good, and then all of a sudden some strange phenomenon oc-
curs in some corner, so you begin to investigate that – it’s castling, something you
didn’t expect. We’re always, by the way, in fundamental physics, always trying to
investigate those things in which we don’t understand the conclusions. After we’ve
checked them enough, we’re okay.
\[
\dot{x} = f(x, t) \tag{8.3.1}
\]
In the problem of population growth, $x(t)$ is $N(t)$, the population size. As the highest derivative in the equation is of order one, it is called a first-order ODE. Now we show that we can always convert a higher-order ODE to a system of first-order ODEs. For example, the equation for a damped harmonic oscillator is (Section 8.8)
\[
m\ddot{x} + b\dot{x} + kx = 0 \;\Longleftrightarrow\; \ddot{x} = -\frac{b}{m}\dot{x} - \frac{k}{m}x \tag{8.3.2}
\]
Now, to remove the second derivative, we introduce a variable $x_2 = \dot{x}$; this leads to $\ddot{x} = \dot{x}_2$, and voilà, we have removed the second derivative. And of course instead of $x$ we use $x_1 = x$. Then $\dot{x}_1 = \dot{x} = x_2$, and we can write $\dot{x}_2 = \ddot{x} = -(b/m)x_2 - (k/m)x_1$ from Eq. (8.3.2). Now, using matrix notation, we write
\[
x_1 = x,\quad x_2 = \dot{x} \;\Longrightarrow\;
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{8.3.3}
\]
This is a system of two first-order linear ODEs with a constant coefficient matrix (the matrix does not vary with time). How about a problem with a time-dependent term, like the forced oscillator whose equation is $m\ddot{x} + b\dot{x} + kx = F\sin t$? The idea is the same: introduce another variable to get rid of $t$:
\[
x_1 = x,\quad x_2 = \dot{x},\quad x_3 = t \;\Longrightarrow\;
\begin{cases}
\dot{x}_1 = x_2\\[2pt]
\dot{x}_2 = -\dfrac{k}{m}x_1 - \dfrac{b}{m}x_2 + (F/m)\sin(x_3)\\[2pt]
\dot{x}_3 = 1
\end{cases} \tag{8.3.4}
\]
So we can now just focus on the following system of equations, which provides a general framework to study ODEs:
\[
\begin{aligned}
\dot{x}_1 &= f_1(x_1, \dots, x_n)\\
&\;\;\vdots\\
\dot{x}_n &= f_n(x_1, \dots, x_n)
\end{aligned} \tag{8.3.5}
\]
This general equation covers both linear systems, such as the one in Eq. (8.3.3), and nonlinear ones, e.g. Eq. (8.3.4). However, it is hard to solve nonlinear systems, so in the next section we focus only on systems of linear ODEs.
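As a concrete illustration of this framework, here is a minimal sketch that converts the oscillator of Eq. (8.3.2) into the first-order system of Eq. (8.3.3) and integrates it numerically. The parameter values, the Runge-Kutta scheme and the step size are my own illustrative choices, not from the text:

```python
import math

def f(y):
    """Right-hand side of the first-order system of Eq. (8.3.3): y = (x1, x2)."""
    m, b, k = 1.0, 0.0, 1.0          # undamped here, so the motion is periodic
    x1, x2 = y
    return (x2, -(k / m) * x1 - (b / m) * x2)

def rk4_step(y, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(y)
    k2 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = f(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6 * (a + 2 * b_ + 2 * c + d)
                 for yi, a, b_, c, d in zip(y, k1, k2, k3, k4))

# With m = k = 1 and b = 0, the exact solution for x(0) = 1, x'(0) = 0 is cos(t)
y, h, t = (1.0, 0.0), 0.01, 0.0
while t < 2 * math.pi - 1e-9:
    y = rk4_step(y, h)
    t += h
print(y[0])   # after one full period, x is back near 1
```

The same `f` could encode any system of the form (8.3.5); only the right-hand side changes.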
We use linear algebra (matrices) to solve it, so we rewrite the above as (with $\dot{\mathbf{x}} = (\dot{x}_1, \dot{x}_2)$)
\[
\dot{\mathbf{x}} = \underbrace{\begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}}_{A_1}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad
\dot{\mathbf{x}} = \underbrace{\begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}}_{A_2}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\]
Before solving them, let's make one observation: it is easier to solve the first system than the second one, because the two equations in the former are uncoupled. This is reflected in the diagonal matrix $A_1$ with two zeros (red terms). The solution to the first system is simply $\mathbf{x} = (C_1 e^{2t}, C_2 e^{5t})$. But we can also write this as
\[
\mathbf{x} = C_1 e^{2t}\begin{bmatrix}1\\0\end{bmatrix} + C_2 e^{5t}\begin{bmatrix}0\\1\end{bmatrix}
\]
Note that $2, 5$ are the eigenvalues of the matrix $A_1$, and the two unit vectors are the eigenvectors of $A_1$. Thus, the solution to a system of linear first-order differential equations can be expressed in terms of the eigenvalues and eigenvectors of the coefficient matrix, at least when that matrix is diagonal and the two eigenvalues are different.
Only if we know linear algebra can we appreciate why this form is better. So refresh your linear algebra before continuing.
For the second system $\dot{\mathbf{x}} = A_2\mathbf{x}$, the matrix is not diagonal. But there is a way to diagonalize a matrix (check Section 10.11.4 for matrix diagonalization) using its eigenvalues $\lambda$ and eigenvectors $\mathbf{v}$. For $A_2$ these are
\[
\lambda_1 = 4,\;\mathbf{v}_1 = \begin{bmatrix}2\\3\end{bmatrix}; \qquad \lambda_2 = -1,\;\mathbf{v}_2 = \begin{bmatrix}1\\-1\end{bmatrix}
\]
Let $\mathbf{x} = P\mathbf{y}$, where $P = [\mathbf{v}_1\;\mathbf{v}_2]$, and substitute that into the original system; we get (don't forget that $\dot{\mathbf{x}} = A_2\mathbf{x}$)
\[
\mathbf{x} = P\mathbf{y} \;\Longrightarrow\; \dot{\mathbf{x}} = P\dot{\mathbf{y}} \;\Longrightarrow\; P\dot{\mathbf{y}} = A_2\mathbf{x} = A_2P\mathbf{y} \;\Longrightarrow\; \dot{\mathbf{y}} = P^{-1}A_2P\mathbf{y}
\]
But $P^{-1}A_2P$ is simply a diagonal matrix with the eigenvalues (of $A_2$) on the diagonal, thus we can easily solve for $\mathbf{y}$, and from that we obtain $\mathbf{x}$:
" # " # " # " #
4 0 C1 e 4t 2 1
yP D y H) y D t
H) x D C1 e 4t C C2 e t
(8.3.6)
0 1 C2 e 3 C1
Again, we can write the solution in terms of the eigenvalues and eigenvectors of the coefficient
matrix. To determine C1;2 we need the initial condition x 0 D x.0/; substituting t D 0 into the
boxed equation in Eq. (8.3.6) we can determine C1;2 in terms of x 0 :
" # " # " #" # " # " # 1
2 1 2 1 C1 C1 2 1
x 0 D C1 C C2 D H) D x0
3 C1 3 C1 C2 C2 3 C1
With a given $\mathbf{x}_0$, this equation gives us $C_{1,2}$; put them in the boxed equation in Eq. (8.3.6), and we're finished. Usually as a scientist or engineer we stop here, but mathematicians go further. They see that
\[
\mathbf{x} = C_1 e^{4t}\begin{bmatrix}2\\3\end{bmatrix} + C_2 e^{-t}\begin{bmatrix}1\\-1\end{bmatrix} = \begin{bmatrix}2 & 1\\ 3 & -1\end{bmatrix}\begin{bmatrix}e^{4t} & 0\\ 0 & e^{-t}\end{bmatrix}\begin{bmatrix}2 & 1\\ 3 & -1\end{bmatrix}^{-1}\mathbf{x}_0 \tag{8.3.7}
\]
Is there something useful in this new way of looking at the solution? Yes, the red matrix! It is a matrix of exponentials. What would you do next when you have seen this?
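This eigenvalue-based solution can be verified numerically. A minimal sketch with NumPy (the initial condition is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 2.0]])       # the matrix A2 from the example

# Eigen-decomposition A = P D P^(-1)
lam, P = np.linalg.eig(A)

def x(t, x0):
    """Solution x(t) = P e^{Dt} P^{-1} x0, as in Eq. (8.3.7)."""
    return P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ x0

x0 = np.array([1.0, 0.0])                    # an arbitrary initial condition

# Check 1: the eigenvalues are 4 and -1
print(np.sort(lam.round(6)))

# Check 2: x(t) satisfies x' = A x (compare a finite difference with A x)
t, h = 0.5, 1e-6
deriv = (x(t + h, x0) - x(t - h, x0)) / (2 * h)
print(np.allclose(deriv, A @ x(t, x0)))      # True
```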
For ease of presentation, we discussed systems of only two equations, but as can be seen, the method, and thus the result, extends to systems of $n$ equations ($n$ can be 1000):
\[
\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \vdots\\ \dot{x}_n\end{bmatrix} =
\begin{bmatrix}A_{11} & A_{12} & \cdots & A_{1n}\\ A_{21} & A_{22} & \cdots & A_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ A_{n1} & A_{n2} & \cdots & A_{nn}\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}
\;\Longrightarrow\; \mathbf{x} = \sum_{i=1}^{n} C_i e^{\lambda_i t}\mathbf{x}_i
\]
where the eigenvalues of $A$ are $\lambda_i$ and the eigenvectors are $\mathbf{x}_i$. Note that this solution is only possible when $A$ is diagonalizable, i.e., when the eigenvectors are linearly independent.
It is remarkable to look back at the long journey from the simple equation $\dot{x} = \lambda x$ with the solution $x(t) = C_0 e^{\lambda t}$ to a system of as many equations as you want, where the solution is still of the same form $\sum_{i=1}^{n} C_i e^{\lambda_i t}\mathbf{x}_i$. It is simply remarkable!

But wait. How about non-diagonalizable matrices? The next section answers that question.
Now we do something extraordinary. We start with $\dot{x} = ax$, whose solution is $x(t) = ce^{at}$. Then, considering the linear system $\dot{\mathbf{x}} = A\mathbf{x}$, can we write the solution as $\mathbf{x} = e^{At}\mathbf{x}_0$, with $\mathbf{x}_0$ being a vector? To answer that question, we need to know what the exponential of a matrix means. And mathematicians define $e^{A}$ by analogy with $e^{x}$:
\[
e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \;\Longrightarrow\; e^{A} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots \tag{8.3.8}
\]
On the RHS (of the boxed equation) we have a sum of a bunch of matrices, thus $e^{A}$ is a matrix. If we can compute the powers of a matrix (e.g. $A^2, A^3, \dots$) we can compute the exponential of a matrix! Let's use the matrix $A_2$ and compute $e^{At}$. For simplicity, I drop the subscript 2. The key step is to diagonalize $A$:
\[
A = PDP^{-1}, \qquad P = \begin{bmatrix}2 & 1\\ 3 & -1\end{bmatrix}, \qquad D = \begin{bmatrix}4 & 0\\ 0 & -1\end{bmatrix}
\]
Hey, isn’t this section only for non-diagonalizable matrices? We’re now testing the idea of e A for the case we
know the solution first. If it does not work for this case then forget the idea.
Then, using the definition of $e^{A}$, we can compute $e^{At}$ as follows (with $A^k = PD^kP^{-1}$, $k = 1, 2, \dots$):
\[
\begin{aligned}
e^{At} &= I + At + \frac{A^2}{2!}t^2 + \frac{A^3}{3!}t^3 + \cdots\\
&= PIP^{-1} + PDP^{-1}t + \frac{1}{2!}PD^2P^{-1}t^2 + \frac{1}{3!}PD^3P^{-1}t^3 + \cdots\\
&= P\left(I + Dt + \frac{1}{2!}D^2t^2 + \frac{1}{3!}D^3t^3 + \cdots\right)P^{-1}\\
&= Pe^{Dt}P^{-1} \qquad \text{(the red term is $e^{Dt}$ due to Eq. (8.3.8))}
\end{aligned}
\]
Can we compute $e^{Dt}$? Because if we can, then we're done. Using Eq. (8.3.8), it can be shown that
\[
e^{Dt} = \begin{bmatrix} e^{4t} & 0\\ 0 & e^{-t}\end{bmatrix}
\]
Have we seen this matrix before? Yes, it is exactly the red matrix in Eq. (8.3.7)! Now we have $e^{At}$ as
\[
e^{At} = \begin{bmatrix}2 & 1\\ 3 & -1\end{bmatrix}\begin{bmatrix}e^{4t} & 0\\ 0 & e^{-t}\end{bmatrix}\begin{bmatrix}2 & 1\\ 3 & -1\end{bmatrix}^{-1}
\]
Multiplying by $\mathbf{x}_0$, we get $e^{At}\mathbf{x}_0 = \mathbf{x}$, the solution we're looking for (compare with Eq. (8.3.7)). Now we have reasons to believe that the exponential of a matrix, as we have defined it, works.
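We can also check that the series definition of Eq. (8.3.8) and the diagonalization route agree. A short NumPy sketch (the value of $t$ and the truncation at 40 terms are arbitrary choices):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 2.0]])
t = 0.3

# e^{At} from the diagonalization A = P D P^{-1}: e^{At} = P e^{Dt} P^{-1}
lam, P = np.linalg.eig(A)
expAt_diag = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)

# e^{At} from the defining series of Eq. (8.3.8), truncated after many terms
expAt_series = np.zeros((2, 2))
term = np.eye(2)                       # the k = 0 term, I
for k in range(1, 40):
    expAt_series += term
    term = term @ (A * t) / k          # next term: (At)^k / k!
expAt_series += term

print(np.allclose(expAt_diag, expAt_series))   # True
```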
Is there an easier way to see that $\mathbf{x} = e^{At}\mathbf{x}_0$ is the solution of $\dot{\mathbf{x}} = A\mathbf{x}$? Yes, differentiate $\mathbf{x}$! But only if we're willing to compute the derivative of $e^{At}$. It turns out to be not hard at all:
\[
\begin{aligned}
\frac{d e^{At}}{dt} &= \frac{d}{dt}\left(I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots\right) = 0 + A + A^2t + \frac{1}{2}A^3t^2 + \cdots\\
&= A\left(I + At + \frac{1}{2}A^2t^2 + \cdots\right) = Ae^{At}
\end{aligned}
\]
\[
\dot{\mathbf{x}} = \frac{d(e^{At}\mathbf{x}_0)}{dt} = \frac{d e^{At}}{dt}\mathbf{x}_0 = A(e^{At}\mathbf{x}_0) = A\mathbf{x}
\]
So the differentiation rule $d/dt\,(e^{\alpha t}) = \alpha e^{\alpha t}$ still holds if $\alpha$ is a matrix.
Assume that $\mathbf{x}(t)$ is one solution and that there is another solution $\mathbf{y}(t)$; then build $\mathbf{z} = \mathbf{x} - \mathbf{y}$. Now, letting $\mathbf{v}(t) = e^{-At}\mathbf{z}(t)$, it can be shown that $\dot{\mathbf{v}} = 0$, so $\mathbf{v}(t)$ must be constant. But $\mathbf{v}(0) = \mathbf{z}(0) = 0$, thus $\mathbf{v}(t) = \mathbf{z}(t) = 0$. Therefore, $\mathbf{y} = \mathbf{x}$: the solution is unique.
The matrix $A$ is non-diagonalizable because it has repeated eigenvalues and thus linearly dependent eigenvectors:
\[
\lambda_1 = \lambda_2 = 1, \qquad \mathbf{x}_1 = \begin{bmatrix}1\\1\end{bmatrix}, \qquad \mathbf{x}_2 = \alpha\begin{bmatrix}1\\1\end{bmatrix}
\]
We have to rely on the infinite series in Eq. (8.3.8) to compute $e^{At}$. First, massaging $A$ a bit:
\[
e^{It} = Ie^{t}, \qquad e^{(A-I)t} = I + (A-I)t, \qquad e^{At} = Ie^{t}\left[I + (A-I)t\right] = e^{t}\left[I + (A-I)t\right]
\]
Is this solution correct? We can check! It is easy to see that $y = e^{t}$ and $y = te^{t}$ are two solutions to $y'' - 2y' + y = 0$. Thus, the solution is a linear combination of them. Hence, the solution obtained using the exponential of a matrix is correct.
This method was based on the trick $A = I + (A - I)$ and the fact that $(A - I)^2 = 0$. How could we have known all of this? It's better to have a method that depends less on tricks.
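We can test this closed-form exponential against the defining series. A sketch, assuming the companion matrix of $y'' - 2y' + y = 0$ as the concrete defective matrix (this specific $A$ is my choice; the text only specifies its eigenvalues and eigenvectors):

```python
import numpy as np

# Companion matrix of y'' - 2y' + y = 0: an assumed concrete choice of a
# defective matrix with double eigenvalue 1 and eigenvector (1, 1)
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
I = np.eye(2)

# (A - I)^2 = 0, so the series for e^{(A-I)t} terminates after two terms
print(np.allclose((A - I) @ (A - I), 0))       # True

t = 0.7
expAt_closed = np.exp(t) * (I + (A - I) * t)   # e^t [I + (A - I)t]

# Compare with the truncated defining series of Eq. (8.3.8)
expAt_series, term = np.zeros((2, 2)), np.eye(2)
for k in range(1, 40):
    expAt_series += term
    term = term @ (A * t) / k
expAt_series += term

print(np.allclose(expAt_closed, expAt_series))  # True
```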
Schur factorization. Assume a $2\times 2$ matrix $A$ with an eigenvalue $\lambda$ and the associated eigenvector $\mathbf{v}$, i.e., $A\mathbf{v} = \lambda\mathbf{v}$. Now we select a vector $\mathbf{w}$ such that $\mathbf{v}, \mathbf{w}$ are linearly independent; thus we can write $A\mathbf{w} = c\mathbf{v} + d\mathbf{w}$ for $c, d \in \mathbb{R}$. Now we have
\[
\begin{cases} A\mathbf{v} = \lambda\mathbf{v}\\ A\mathbf{w} = c\mathbf{v} + d\mathbf{w}\end{cases}
\;\Longrightarrow\;
A\begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix} = \begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}\begin{bmatrix}\lambda & c\\ 0 & d\end{bmatrix}
\;\Longrightarrow\;
A = \begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}\begin{bmatrix}\lambda & c\\ 0 & d\end{bmatrix}\begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}^{-1}
\]
So we have proved that for any $2\times 2$ matrix, it is always possible to factor $A$ into the form $PTP^{-1}$ where $T$ is an upper triangular matrix. Now we're interested in the case where $A$ is defective, i.e., it has a double eigenvalue $\lambda_1 = \lambda_2 = \lambda$; then $d = \lambda$ and we have
\[
A = \begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}\begin{bmatrix}\lambda & c\\ 0 & \lambda\end{bmatrix}\begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}^{-1}
\;\Longrightarrow\;
A^k = \begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}\begin{bmatrix}\lambda & c\\ 0 & \lambda\end{bmatrix}^{k}\begin{bmatrix}\mathbf{v} & \mathbf{w}\end{bmatrix}^{-1}
\]
It turns out that it is easy to compute the blue term: a triangular matrix is also nice to work with. Indeed, we can decompose the blue matrix into the sum of a diagonal matrix and a nilpotent matrix. A nilpotent matrix is a square matrix $N$ such that $N^p = 0$ for some positive integer $p$; the smallest such $p$ is called the index of $N$. Using the binomial theorem and the nice property of nilpotent matrices (below, the red matrix is $N$ with $p = 2$), we get
\[
\begin{bmatrix}\lambda & c\\ 0 & \lambda\end{bmatrix}^{k} = \left(\begin{bmatrix}\lambda & 0\\ 0 & \lambda\end{bmatrix} + \begin{bmatrix}0 & c\\ 0 & 0\end{bmatrix}\right)^{k} = \begin{bmatrix}\lambda & 0\\ 0 & \lambda\end{bmatrix}^{k} + k\begin{bmatrix}\lambda & 0\\ 0 & \lambda\end{bmatrix}^{k-1}\begin{bmatrix}0 & c\\ 0 & 0\end{bmatrix} = \begin{bmatrix}\lambda^k & kc\lambda^{k-1}\\ 0 & \lambda^k\end{bmatrix}
\]
The final step is to find $\mathbf{w}$ and we're done. Recall that $A\mathbf{w} = c\mathbf{v} + d\mathbf{w}$ with $d = \lambda$; thus (redefining $\mathbf{w}$ as $(1/c)\mathbf{w}$), we obtain
\[
A\mathbf{w} = c\mathbf{v} + \lambda\mathbf{w} \;\Longleftrightarrow\; A\mathbf{w} = \mathbf{v} + \lambda\mathbf{w} \;\Longleftrightarrow\; (A - \lambda I)\mathbf{w} = \mathbf{v} \;\Longrightarrow\; (A - \lambda I)^2\mathbf{w} = 0
\]
We call $\mathbf{v}$ the eigenvector of $A$; how about $\mathbf{w}$? Let us put the equations of these two vectors together:
\[
(A - \lambda I)^1\mathbf{v} = 0, \qquad (A - \lambda I)^2\mathbf{w} = 0 \tag{8.3.9}
\]
With this, it is no surprise that mathematicians call $\mathbf{w}$ a generalized eigenvector (of order 2) of $A$. Generalized eigenvectors play a similar role for defective matrices that eigenvectors play for diagonalizable matrices. The eigenvectors of a diagonalizable matrix span the whole vector space. The eigenvectors of a defective matrix do not, but the generalized eigenvectors of that matrix do.
We must have $d = \lambda$ as $A$ and the red matrix are similar: they have the same eigenvalues.
\[
u_x = \frac{\partial u}{\partial x}, \quad u_t = \frac{\partial u}{\partial t}, \quad u_{xx} = \frac{\partial^2 u}{\partial x^2}, \quad u_{tt} = \frac{\partial^2 u}{\partial t^2} \tag{8.4.1}
\]
Then, a partial differential equation (PDE) in terms of $u(x,t)$ is an equation relating $u$ and its partial derivatives:
\[
F(x, t, u, u_x, u_t, u_{xx}, u_{xt}, u_{tt}) = 0
\]
Note that partial derivatives of order higher than 2 are not discussed. This is because in physics and engineering we rarely see them in differential equations.
To classify different PDEs, the concepts of order, dimension and linearity of a PDE are introduced:

Order  The order of a PDE is the order of the highest partial derivative; $u_t = u_{xx}$ is a second-order PDE;

Dimension  The dimension of a PDE is the number of independent variables; $u_{tt} = u_{xx} + u_{yy}$ is a 3D PDE as it involves $x, y$ and $t$;

Linearity  A PDE is said to be linear if the function $u$ and all its partial derivatives appear in a linear fashion; i.e., they are not multiplied together, they are not squared, etc.
Table 8.3: Some PDEs with associated order, dimension and linearity.

PDE                          Order   Linear?   Dimension
$u_t = u_{xx}$               2       yes       2
$u_{tt} = u_{xx} + u_{yy}$   2       yes       3
$xu_x + yu_y = u^2$          1       no        2
determining the behavior of the solutions; for example mathematicians are interested in questions
such as whether the solutions are unique or when the solutions exist.
We start with the wave equation in Section 8.5.1, derived centuries ago, in 1746, by d'Alembert. We live in a world of waves. Whenever we throw a pebble into a pond, we see circular ripples formed on its surface which gradually disappear. The water moves up and down, and the effect, the ripple, which is visible to us, looks like an outwardly moving wave. When you pluck the string of a guitar, the string moves up and down, exhibiting a transverse wave: the particles in the string move perpendicular to the direction of the wave propagation. The bump or rattle that we feel during an earthquake is due to seismic S-waves, which move rock particles up and down, perpendicular to the direction of the wave propagation.
We continue in Section 8.5.2 with the heat equation (or diffusion equation) derived by Fourier
in 1807.
So, we consider a string fixed at two ends. At time t D 0, the string is horizontal and un-
stretched (Fig. 8.1). As the string undergoes only transverse motion i.e., motion perpendicular
to the original string, we use u.x; t/ to designate the transverse displacement of point x at time
t . Our task is to find the equation relating u.x; t/ to the physics of the string.
The key idea is to use Newton's 2nd law (what else?) for a small segment of the string. Such a segment, of length $\Delta x$, is shown in Fig. 8.1. What are the forces in the system? First, we have $f(x,t)$ in the vertical direction, which can be gravity or any external force. This is a distributed force, that is, a force per unit length (i.e., the total force acting on the segment is $f\Delta x$). Second, we have the tension force $T(x,t)$ inside the string. We use Newton's 2nd law $F = ma$ in the vertical direction to write, with $a = \partial^2 u/\partial t^2$ and the mass being density times length, $m = \rho\sqrt{(\Delta x)^2 + (\Delta u)^2}$:
\[
\rho\sqrt{(\Delta x)^2 + (\Delta u)^2}\,\frac{\partial^2 u}{\partial t^2} = T(x + \Delta x, t)\sin\theta(x + \Delta x, t) - T(x,t)\sin\theta(x,t) + f(x,t)\Delta x \tag{8.5.1}
\]
Dividing this equation by $\Delta x$ and letting $\Delta x \to 0$, we get
\[
\begin{aligned}
\rho\sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2}\,\frac{\partial^2 u}{\partial t^2} &= \frac{\partial}{\partial x}\left(T(x,t)\sin\theta(x,t)\right) + f(x,t)\\
&= \frac{\partial T}{\partial x}\sin\theta(x,t) + T(x,t)\frac{\partial\theta}{\partial x}\cos\theta(x,t) + f(x,t)
\end{aligned} \tag{8.5.2}
\]
We know that the slope of $u(x,t)$ is $\tan\theta(x,t)$, so we can write
\[
\tan\theta(x,t) = \frac{\partial u}{\partial x}(x,t), \qquad \theta(x,t) = \arctan\frac{\partial u}{\partial x} \tag{8.5.3}
\]
From $\tan\theta(x,t)$ we can compute $\sin\theta(x,t)$ and $\cos\theta(x,t)$, and from the expression for $\theta$ we can compute the derivative of $\theta$:
\[
\sin\theta(x,t) = \frac{\partial u/\partial x}{\sqrt{1 + (\partial u/\partial x)^2}}, \qquad
\cos\theta(x,t) = \frac{1}{\sqrt{1 + (\partial u/\partial x)^2}}, \qquad
\frac{\partial\theta}{\partial x} = \frac{\partial^2 u/\partial x^2}{1 + (\partial u/\partial x)^2} \tag{8.5.4}
\]
Now comes the art of approximation (otherwise the problem would be too complex). We consider only small vibrations, that is, when $|\partial u/\partial x| \ll 1$, and with this condition the above expressions simplify to
\[
\sin\theta(x,t) = \frac{\partial u}{\partial x}, \qquad \cos\theta(x,t) = 1, \qquad \frac{\partial\theta}{\partial x} = \frac{\partial^2 u}{\partial x^2} \tag{8.5.5}
\]
With all this, Eq. (8.5.2) simplifies to
\[
\rho\frac{\partial^2 u}{\partial t^2} = \frac{\partial T}{\partial x}\frac{\partial u}{\partial x} + T(x,t)\frac{\partial^2 u}{\partial x^2} + f(x,t) \tag{8.5.6}
\]
The equation looks much simpler. But it is still unsolvable. Why? Because we have one equation
but two unknowns u.x; t/ and T .x; t/. But wait, we have another Newton’s 2nd law in the
horizontal direction:
\[
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2} \tag{8.5.9}
\]
What does this equation mean? On the LHS we have the acceleration term and on the RHS the second spatial derivative of $u(x,t)$, which measures the concavity of the curve $u(x,t)$. Thus, when the curve is concave downward this term is negative, and the wave equation tells us that the acceleration is also negative: the string accelerates downwards (Fig. 8.2).
Figure 8.2
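Even without an analytical solution, the wave equation can be solved numerically. A minimal finite-difference sketch; the explicit central-difference scheme, the string length, wave speed, grid and the sine-shaped initial pluck are all illustrative choices of mine:

```python
import math

# Explicit central-difference scheme for u_tt = c^2 u_xx on a string fixed
# at both ends; all numbers here are illustrative choices.
L, c, nx = 1.0, 1.0, 101
dx = L / (nx - 1)
dt = 0.5 * dx / c                      # satisfies the CFL stability condition
r2 = (c * dt / dx) ** 2

# Initial shape: one sine hump, released from rest. For this special shape
# the exact solution is u(x, t) = sin(pi x) cos(pi c t), a standing wave.
u_prev = [math.sin(math.pi * i * dx) for i in range(nx)]
# First step uses u_t(x, 0) = 0
u = [0.0] * nx
for i in range(1, nx - 1):
    u[i] = u_prev[i] + 0.5 * r2 * (u_prev[i+1] - 2*u_prev[i] + u_prev[i-1])

nsteps = 200                           # advance to t = (nsteps + 1) * dt
for _ in range(nsteps):
    u_next = [0.0] * nx                # ends stay fixed at zero
    for i in range(1, nx - 1):
        u_next[i] = 2*u[i] - u_prev[i] + r2 * (u[i+1] - 2*u[i] + u[i-1])
    u_prev, u = u, u_next

t = (nsteps + 1) * dt
exact_mid = math.sin(math.pi * 0.5) * math.cos(math.pi * c * t)
print(abs(u[nx // 2] - exact_mid))     # small discretization error
```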
We do not discuss the solution to the wave equation here. But even without it, we can still say something about its solutions. The first thing is that this equation is linear due to the linearity of the differentiation operator. What does this entail? Let $u(x,t)$ and $v(x,t)$ be two solutions to the wave equation, that is,
\[
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}, \qquad \frac{\partial^2 v}{\partial t^2} = c^2\frac{\partial^2 v}{\partial x^2}
\]
then any linear combination of these two, i.e., $\alpha u + \beta v$, where $\alpha$ and $\beta$ are two constants, is also a solution:
\[
\frac{\partial^2(\alpha u + \beta v)}{\partial t^2} = c^2\frac{\partial^2(\alpha u + \beta v)}{\partial x^2}
\]
Why can the wave equation have more than one solution? Actually, any PDE has infinitely many solutions. Think of it this way: the violin string can be bent into any shape you like before it is released and the wave equation takes over. In other words, each initial condition leads to a distinct solution.
3D wave equation. Having derived the 1D wave equation, the question is: what is the 3D version? Let's try to guess what it would be. It should be of the same form as the 1D equation but have components relating to the other dimensions (red terms below):
\[
\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right) \tag{8.5.10}
\]
It is remarkable that a model born in attempts to understand how a string vibrates now has a
wide spectrum of applications. Here are some applications of the wave equation:
The idea is to consider a segment of the bar, e.g. the part of the bar between $x = a$ and $x = b$, and apply the principle of conservation of energy to this segment. The conservation of energy is simple: the rate of change of heat inside the segment is equal to the heat flux entering the left end minus the heat flux going out of the right end. The rate of change of heat is given by
\[
\text{rate of change of heat} = \frac{\partial}{\partial t}\int_a^b c\rho A\,\theta(x,t)\,dx \tag{8.5.11}
\]
while the heat fluxes are
\[
\text{heat fluxes} = AJ(a,t) - AJ(b,t) \tag{8.5.12}
\]
where $J$ is the heat flux density. Now we can write the equation of conservation of heat as
\[
\frac{\partial}{\partial t}\int_a^b c\rho A\,\theta(x,t)\,dx = AJ(a,t) - AJ(b,t) \tag{8.5.13}
\]
Using Leibniz’s rule and the fundamental theorem of calculus, we can elaborate this equation as
Z b Z b
d.x; t/ dJ
cA dx D A dx (8.5.14)
a dt a dx
Z b
@.x; t/ @J
H) c C dx D 0 (8.5.15)
a @t @x
@.x; t/ @J
H)c C D0 (8.5.16)
@t @x
In the third equation, we moved from an integral equation to a partial differential equation. This
is because the segment Œa; b is arbitrary, so the integrand must be identically zero.
You might have guessed that we are still missing a relation between $J$ and $\theta(x,t)$ (one equation with two unknown variables is unsolvable). Indeed, and Fourier carried out experiments to give us just that relation (known as a constitutive equation):
\[
J = -k\frac{\partial\theta}{\partial x} \tag{8.5.17}
\]
where k is known as the coefficient of thermal conductivity. The thermal conductivity provides
an indication of the rate at which heat energy is transferred through a medium by the diffusion
process.
With Eq. (8.5.17), our equation Eq. (8.5.16) becomes (note that $k$ is constant):
\[
c\rho\,\frac{\partial\theta(x,t)}{\partial t} - \frac{\partial}{\partial x}\left(k\frac{\partial\theta}{\partial x}\right) = 0
\;\Longrightarrow\;
\frac{\partial\theta}{\partial t} = \kappa^2\frac{\partial^2\theta}{\partial x^2}, \qquad \kappa^2 = \frac{k}{c\rho} \tag{8.5.18}
\]
which is a linear, second-order (in space) partial differential equation. As it involves the second spatial derivative of $\theta$, we need two boundary conditions on $\theta$: $\theta(0,t) = \theta_1$ and $\theta(L,t) = \theta_2$, where $\theta_{1,2}$ are real numbers. Furthermore, we need one initial condition (as we have the first derivative of $\theta$ w.r.t. time): $\theta(x,0) = g(x)$ for some function $g(x)$ which represents the initial temperature in the bar at $t = 0$. Altogether, the PDE, the boundary conditions and the initial condition make an initial-boundary value problem:
\[
\begin{aligned}
\frac{\partial\theta}{\partial t} &= \kappa^2\frac{\partial^2\theta}{\partial x^2} & 0 < x < L \qquad &(8.5.19)\\
\theta(x,0) &= g(x) & 0 \le x \le L \qquad &(8.5.20)\\
\theta(0,t) &= \theta_1, \quad \theta(L,t) = \theta_2 & t > 0 \qquad &(8.5.21)
\end{aligned}
\]
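The initial-boundary value problem above can also be solved numerically. A minimal explicit finite-difference sketch; the homogeneous boundary values $\theta_1 = \theta_2 = 0$ and the sine initial temperature are illustrative choices, picked so the exact solution is known:

```python
import math

# Explicit finite-difference scheme for theta_t = kappa^2 theta_xx with
# theta(0,t) = theta(L,t) = 0 and theta(x,0) = sin(pi x).
# For this initial condition the exact solution is
#   theta(x,t) = exp(-(pi^2 kappa^2) t) * sin(pi x)
L, kappa2, nx = 1.0, 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / kappa2              # stable: dt <= dx^2 / (2 kappa^2)
r = kappa2 * dt / dx**2

theta = [math.sin(math.pi * i * dx) for i in range(nx)]
nsteps = 500
for _ in range(nsteps):
    new = [0.0] * nx                   # fixed (zero) temperature at both ends
    for i in range(1, nx - 1):
        new[i] = theta[i] + r * (theta[i+1] - 2*theta[i] + theta[i-1])
    theta = new

t = nsteps * dt
exact_mid = math.exp(-math.pi**2 * kappa2 * t) * math.sin(math.pi * 0.5)
print(abs(theta[nx // 2] - exact_mid)) # small discretization error
```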
The intermediate value theorem of integral calculus (Eq. (4.11.3)), applied to the integral on the LHS, gives
\[
c\rho A\,\frac{\partial\theta}{\partial t}(x_1, t)\,\Delta x = AJ(x_0, t) - AJ(x_0 + \Delta x, t) \tag{8.5.23}
\]
where $x_1 \in [x_0, x_0 + \Delta x]$. Dividing both sides by $\Delta x$, we obtain
\[
c\rho A\,\frac{\partial\theta}{\partial t}(x_1, t) = -A\,\frac{J(x_0 + \Delta x, t) - J(x_0, t)}{\Delta x} \tag{8.5.24}
\]
The final step is to let $\Delta x$ go to zero; then $x_1$ becomes $x_0$, and on the RHS we have the derivative of $J$ evaluated at $x_0$:
\[
c\rho\,\frac{\partial\theta(x_0, t)}{\partial t} = -J_x(x_0, t) \tag{8.5.25}
\]
This equation holds for any $x_0$, so we can replace $x_0$ by $x$. And we get the 1D heat diffusion equation.
3D diffusion equation. Having derived the 1D heat equation, it is not hard to derive the 3D equation. Before doing so, let's try to guess what it would be. It should be of the same form as the 1D equation but have components relating to the other dimensions (red terms below):
\[
\frac{\partial\theta}{\partial t} = \kappa^2\left(\frac{\partial^2\theta}{\partial x^2} + \frac{\partial^2\theta}{\partial y^2} + \frac{\partial^2\theta}{\partial z^2}\right) \tag{8.5.26}
\]
We use the Gauss’s theorem, see Section 7.11.6, for the derivation . We consider an arbitrary
domain V with the surface S. The temperature is now given by .x; t/ where x D .x1 ; x2 ; x3 /
is the position vector. The conservation of energy equation is
@
Z Z
cdV D J ndA
@t V S
@
Z Z
cdV D r J dV (Gauss’s theorem) (8.5.27)
@t V V
Z
@
c C r . kr/ dV D 0 .J D kr/
V @t
As the volume domain $V$ is arbitrary, we get the well-known 3D heat equation (assuming $k$ is constant):
\[
\frac{\partial\theta(\mathbf{x},t)}{\partial t} = \kappa^2\,\Delta\theta(\mathbf{x},t), \qquad \Delta\theta := \nabla\cdot(\nabla\theta) = \sum_{i=1}^{3}\frac{\partial^2\theta}{\partial x_i^2} \tag{8.5.28}
\]
where $\Delta$ is the Laplacian operator, named after the French mathematician Pierre-Simon Laplace (1749–1827). We see this operator again and again in physics. Some people say that it is the most important operator in mathematical physics.
In the above derivation, we have used the 3D version of Eq. (8.5.17):
\[
\mathbf{J} = -k\nabla\theta \quad \text{or} \quad \begin{bmatrix}J_x\\ J_y\\ J_z\end{bmatrix} = -\begin{bmatrix}k & 0 & 0\\ 0 & k & 0\\ 0 & 0 & k\end{bmatrix}\begin{bmatrix}\theta_{,x}\\ \theta_{,y}\\ \theta_{,z}\end{bmatrix} \tag{8.5.29}
\]
The matrix form is convenient when the conductivity is not the same in all directions. In that case we say the heat conduction is not isotropic but anisotropic, and we use three different values for the diagonal terms.
Eq. (8.5.26) can also be used to model other diffusion processes (that's why it is referred to as the diffusion equation rather than by the more restrictive term heat equation). For example, if a drop of red dye is placed in a body of water, the dye will gradually spread out and permeate the entire body. If convection effects are negligible, Eq. (8.5.26) will describe the diffusion of the dye through the water; $\theta(\mathbf{x},t)$ is now the concentration of dye at $\mathbf{x}$ and time $t$!
Of course it is possible to consider an infinitesimal cube and follow the same steps done for the long bar. But
the divergence theorem provides a shorter way.
\[
L(u) = \frac{\partial^2 u}{\partial t^2} - c^2\left(\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2}\right)
\]
Now it is time to have a general expression for $L$, which generalizes the concrete instances we have met:
\[
L = a(\mathbf{x}) + \sum_{i=1}^{n} b_i(\mathbf{x})\frac{\partial}{\partial x_i} + \sum_{i,j=1}^{n} c_{ij}(\mathbf{x})\frac{\partial^2}{\partial x_i\,\partial x_j} + \cdots \tag{8.6.1}
\]
But this is not enough for mathematicians: why just two functions $u, v$? So they go for $n$ functions $u_1, u_2, \dots, u_n$, and write $L(a_1u_1 + \cdots + a_nu_n) = a_1L(u_1) + \cdots + a_nL(u_n)$.
dynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity,
cd).
From the seven base (or fundamental) units, we can derive many more units. For example, what is the unit of force in SI? Using Newton's 2nd law, we write
\[
[F] = \text{kg}\cdot\frac{\text{m}}{\text{s}^2} \tag{8.7.1}
\]
And to honour Newton, we invented a new unit called the newton (N); thus 1 N = 1 kg·m/s². Similarly, we have 1 Pa = 1 N/m² as the SI unit of pressure and stress, in honour of Blaise Pascal.
Some common consistent SI units are given in Table 8.4.
Quantity          Definition              SI unit          Dimension
length            -                       m                $[L]$
time              -                       s                $[T]$
mass              -                       kg               $[M]$
force             mass × acceleration     N = 1 kg·m/s²    $[MLT^{-2}]$
pressure/stress   force / area            Pa = 1 N/m²      $[ML^{-1}T^{-2}]$
Table 8.4: Some physical quantities with corresponding dimensions and SI units.
That is not the end of the story about units. Why do we have meters and still need kilometers? The reason is simple: we're unable to handle very large or very small numbers. If the meter were the only unit for length, then for lengths smaller than 1 meter we would have to use decimals, e.g. 0.05 m. To avoid that, sub-units were developed: instead of 0.05 m we say 5 cm. Similarly, for 20 000 m we write 20 km, which is much easier to comprehend. In conclusion, larger and smaller quantities are expressed by using appropriate prefixes with the base unit. Table 8.5 presents all prefixes in SI. One example: the mass of the Earth is 5 972 Yg (yottagrams), which is $5.972\times 10^{24}$ kg.
Table 8.5: Prefixes in SI. Prefix names have mostly been chosen from Greek words (positive powers of 10) or Latin words (negative powers of 10), although recent extensions of the range of powers of 10 have resulted in the use of words from other languages. 'Kilo' comes from the Greek word for 1000 ($10^3$), and 'milli' comes from the Latin word for one thousandth ($10^{-3}$).
\[
f(\alpha x_1) = \frac{f(x_1)}{f(x_2)}f(\alpha x_2) \;\Longrightarrow\; f_1(\alpha x_1)\,x_1 = \frac{f(x_1)}{f(x_2)}f_1(\alpha x_2)\,x_2, \qquad f_1 := \frac{df}{dx}
\]
The above equation holds for any values of $x_1, x_2$ and $\alpha$. Now, setting $\alpha = 1$, we get $x_1f'(x_1)/f(x_1) = x_2f'(x_2)/f(x_2)$; as $x_1$ and $x_2$ are arbitrary, this common value must be a constant, say $k$. Thus,
\[
\int \frac{f'(x)}{f(x)}\,dx = k\int \frac{1}{x}\,dx \;\Longrightarrow\; \ln f(x) = k\ln x + A \;\Longrightarrow\; f(x) = Cx^k
\]
That is good, but why power functions and not other functions that we have spent a lot of time studying in calculus? The reason is simple: we can never have more complicated functions of a dimensional quantity. One simple way to see this is to use Taylor series. For example, the exponential function has the Taylor series
\[
e^{x} = 1 + x + \frac{x^2}{2} + \cdots
\]
If $x$ were a certain length, then $e^{x}$ would require the addition of a length to an area to a volume, which is nonsense. So, if we see, in an equation, $e^{x}$ or $\sin x$ or whatever function (except $x^k$), then $x$ must be a dimensionless number; otherwise the equation is physically wrong.
The next step is to consider physical quantities that depend on more than one quantity. For simplicity, I just consider a quantity $z$ that depends on two other quantities $x, y$: $z = f(x,y)$. Now, doing the same thing, we will have
Example 8.1
The spring-mass system has only two quantities: the spring stiffness $k$ with dimension $[FL^{-1}]$ and the mass $m$ with dimension $[M]$. We know that the dimension of force is $[F] = [MLT^{-2}]$; thus $k$ has dimension $[MT^{-2}]$. We also know that the dimension of $\omega_0$ is $[T^{-1}]$. As this quantity is a function of $m$ and $k$, we have (from the power law above)
\[
\omega_0 = Cm^ak^b
\]
where $a, b$ are determined so that the dimensions of both sides are the same:
\[
[\omega_0] = [M]^a[M]^b[T]^{-2b} \;\Longrightarrow\; [T^{-1}] = [M^{a+b}T^{-2b}]
\]
And this gives us the following system of two linear equations to solve for $a$ and $b$:
\[
a + b = 0, \quad -2b = -1 \;\Longrightarrow\; a = -\frac{1}{2},\; b = \frac{1}{2}
\]
Thus, we obtain the formula for the angular frequency without actually solving the equation of motion:
\[
\omega_0 = C\sqrt{k/m}
\]
But dimensional analysis cannot give us the value of $C$. For that we can either solve the problem (which is usually hard) or do an experiment. It is interesting to rewrite the above equation as
\[
C = \omega_0\sqrt{m/k}
\]
The number $\omega_0\sqrt{m/k}$ is called a dimensionless group. Furthermore, as it is a dimensionless number, its value is invariant under a change of units. Thus, it is called a universal constant.

In summary, this example has three independent dimensional quantities and they need two fundamental dimensions ($[M]$ and $[T]$). The solution shows that there exists one dimensionless group.
Example 8.2
For example, suppose we want to work out how the flow $Q$ of an ideal fluid through a hole of diameter $D$ depends on the pressure difference $\Delta p$. It seems plausible that $Q$ might also depend on the density of the fluid $\rho$, so we look for a relationship of the form
\[
Q = kD^a(\Delta p)^b\rho^c
\]
Now, we write the dimensions of all quantities involved
In summary, this example has four independent dimensional quantities and they need three
fundamental dimensions (ŒM ; ŒL and ŒT ). The solution shows that there exists one dimen-
sionless group.
Example 8.3
In the previous example, we considered only an ideal fluid, i.e., a fluid with zero viscosity. Now suppose that we're dealing with a viscous fluid, with viscosity $\nu$ of dimension $[L^2T^{-1}]$. Now $Q$ is given by
\[
Q = kD^a(\Delta p)^b\rho^c\nu^d
\]
Hence, the dimensional equation is
\[
[L^3T^{-1}] = [L^a][M^bL^{-b}T^{-2b}][M^cL^{-3c}][L^{2d}T^{-d}]
\]
which results in the following system of linear equations (three equations for four unknowns):
\[
\begin{cases} a - b - 3c + 2d = 3\\ b + c = 0\\ -2b - d = -1 \end{cases}
\;\Longleftrightarrow\;
\begin{bmatrix} 1 & -1 & -3 & 2\\ 0 & 1 & 1 & 0\\ 0 & -2 & 0 & -1 \end{bmatrix}
\begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix} =
\begin{bmatrix} 3\\ 0\\ -1 \end{bmatrix}
\]
Using linear algebra from Chapter 10, the rank of the matrix associated with the above system is three, and the system has one free variable. Choosing $b$ as the free variable, we can solve for $a, c, d$ in terms of $b$:
\[
c = -b, \qquad d = 1 - 2b, \qquad a = 1 + 2b
\]
Thus, $Q$ is written as
\[
Q = kD^{1+2b}(\Delta p)^b\rho^{-b}\nu^{1-2b} \tag{8.7.2}
\]
If the pattern we observed in the previous two examples still works, we should have two dimensionless groups. This is so because there are five independent dimensional quantities and they need three fundamental dimensions ($[M]$, $[L]$ and $[T]$). Indeed, we have two dimensionless groups (highlighted red):
\[
\frac{Q}{D\nu} = k\left(\frac{\Delta p\,D^2}{\rho\nu^2}\right)^{b} \tag{8.7.3}
\]
From the three examples presented, there seems to be a relationship between the number of quantities, the number of fundamental dimensions and the number of dimensionless groups. Now we need to prove it. Instead of a general proof, we consider Example 8.3 and prove that there must be two dimensionless numbers in this example. First, we write the dimensions of all quantities involved, but we have to explicitly write the powers of $[M]$, $[L]$ and $[T]$; for example, for $Q$ we write $[Q] = [M^0L^3T^{-1}]$, and similarly for $D$, $\Delta p$, $\rho$ and $\nu$ (Eq. (8.7.4)).
Now, suppose we can build a dimensionless number of the form (power law) $\Pi = Q^{x_1}D^{x_2}(\Delta p)^{x_3}\rho^{x_4}\nu^{x_5}$. Requiring all dimensions to cancel leads to a homogeneous linear system for the exponents:
\[
\underbrace{\begin{bmatrix} 0 & 0 & 1 & 1 & 0\\ 3 & 1 & -1 & -3 & 2\\ -1 & 0 & -2 & 0 & -1 \end{bmatrix}}_{A}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\\ x_5 \end{bmatrix} = \mathbf{0} \tag{8.7.5}
\]
where the first row of $A$ is the powers of $[M]$ in Eq. (8.7.4), the second row is the powers of $[L]$, and so on. Thus, this matrix is called the dimension matrix. Are we going to solve the system in Eq. (8.7.5)? No, no, no. That is the power of mathematics. Using the rank theorem, Theorem 10.5.4, from linear algebra, which says $\operatorname{rank}(A) + \operatorname{nullity}(A) = 5$, and the fact that $\operatorname{rank}(A) = 3$, we deduce that $\operatorname{nullity}(A) = 2$; hence the system has a two-dimensional space of solutions $\mathbf{x} \ne 0$. Therefore, we have two independent dimensionless numbers.
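The rank-nullity argument can be checked directly. A minimal SymPy sketch, where the dimension matrix encodes $[Q] = [L^3T^{-1}]$, $[D] = [L]$, $[\Delta p] = [ML^{-1}T^{-2}]$, $[\rho] = [ML^{-3}]$ and $[\nu] = [L^2T^{-1}]$:

```python
from sympy import Matrix

# Dimension matrix for (Q, D, Δp, rho, nu): rows are the powers of
# [M], [L], [T] in each quantity's dimension
A = Matrix([[0, 0, 1, 1, 0],
            [3, 1, -1, -3, 2],
            [-1, 0, -2, 0, -1]])

print(A.rank())            # 3
null = A.nullspace()       # basis of the solutions of A x = 0
print(len(null))           # 2 -> two independent dimensionless groups
```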
Hey, but why is the rank of the dimension matrix $A$ three and not less? If we use Gauss-Jordan elimination to get the row reduced echelon form of $A$, we get as the first three columns the three unit vectors of $\mathbb{R}^3$: $(1,0,0), (0,1,0), (0,0,1)$. This makes us think of a three-dimensional space. Indeed, the three independent dimensions $[M]$, $[L]$ and $[T]$ form a three-dimensional vector space. In this vector space, a dimensional quantity has the coordinate vector $(x_1, x_2, x_3)$ because we can always write
\[
F(\Pi_1, \Pi_2, \dots, \Pi_{m-r}) = 0
\]
expressed only in terms of the dimensionless quantities.
Note that if the chosen fundamental dimensions are independent, then $r$ is simply the number of these fundamental dimensions.
The dimensionless combinations that we can make in a given problem are not unique: if $\Pi_1$ and $\Pi_2$ are both dimensionless, then so are $\Pi_1\Pi_2$ and $\Pi_1 + \Pi_2$ and, indeed, any function that we want to make out of these two variables.
\[
x = x_c\tilde{x}, \qquad t = t_c\tilde{t} \tag{8.7.11}
\]
Differential operator. As preparation for a discussion of 2nd-order ODEs, in which we need to compute $\ddot{x}$, we introduce the differential operator $\frac{d}{dt}$, to which we supply a function to compute its time derivative:
\[
\frac{d}{dt} = \frac{d}{d\tilde{t}}\frac{d\tilde{t}}{dt} = \frac{1}{t_c}\frac{d}{d\tilde{t}} \tag{8.7.16}
\]
The usefulness of this operator comes in when we compute the second-derivative operator:
\[
\frac{d^2}{dt^2} = \frac{d}{dt}\left(\frac{d}{dt}\right) = \frac{d}{dt}\left(\frac{1}{t_c}\frac{d}{d\tilde{t}}\right) = \frac{1}{t_c}\frac{d}{d\tilde{t}}\left(\frac{1}{t_c}\frac{d}{d\tilde{t}}\right) = \frac{1}{t_c^2}\frac{d^2}{d\tilde{t}^2} \tag{8.7.17}
\]
where we applied Eq. (8.7.16) to the operator $\frac{1}{t_c}\frac{d}{d\tilde{t}}$ in the third equality.
\[
\frac{ax_c}{t_c^2}\frac{d^2\tilde{x}}{d\tilde{t}^2} + \frac{bx_c}{t_c}\frac{d\tilde{x}}{d\tilde{t}} + cx_c\tilde{x} = Af(t_c\tilde{t})
\]
Dividing it by the coefficient of the 2nd derivative, the coefficient of $\tilde{x}$ becomes $ct_c^2/a$; choosing
\[
\frac{ct_c^2}{a} = 1 \;\Longrightarrow\; t_c = \sqrt{\frac{a}{c}} \tag{8.7.19}
\]
And making $At_c^2/(ax_c) = 1$ gives us $x_c = A/c$. The result is
\[
\frac{d^2\tilde{x}}{d\tilde{t}^2} + \frac{b}{\sqrt{ac}}\frac{d\tilde{x}}{d\tilde{t}} + \tilde{x} = F(\tilde{t})
\]
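The nondimensionalization can be sanity-checked numerically: if $\tilde{x}(\tilde{t})$ solves the scaled equation, then $x_c\tilde{x}(t/t_c)$ should solve the original one. A sketch with illustrative parameter values (the forcing $f$ and all constants are my own choices):

```python
import math

# If x~(t~) solves  x~'' + (b/sqrt(ac)) x~' + x~ = F(t~),  F(t~) = f(t_c t~),
# then x(t) = x_c x~(t/t_c) should solve  a x'' + b x' + c x = A f(t).
a, b, c, A = 2.0, 0.6, 8.0, 3.0
t_c, x_c = math.sqrt(a / c), A / c

f = lambda t: math.sin(t)              # a sample forcing

def rk4(g, y, h, n):
    """Integrate y' = g(t, y) with RK4; y = [position, velocity]."""
    t = 0.0
    for _ in range(n):
        k1 = g(t, y)
        k2 = g(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
        k3 = g(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
        k4 = g(t + h, [y[i] + h*k3[i] for i in range(2)])
        y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return y

orig   = lambda t, y: [y[1], (A*f(t) - b*y[1] - c*y[0]) / a]
scaled = lambda s, y: [y[1], f(t_c*s) - (b/math.sqrt(a*c))*y[1] - y[0]]

T, n = 2.0, 4000                       # compare at physical time T
x_direct = rk4(orig, [0.0, 0.0], T/n, n)[0]
x_tilde  = rk4(scaled, [0.0, 0.0], (T/t_c)/n, n)[0]
print(abs(x_direct - x_c * x_tilde))   # agreement up to integration error
```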
d tQ2 ac d tQ
periodic motion or oscillation, is the subject of this section. Understanding periodic motion will
be essential for the study of waves, sound and light.
Observing a ball rolling back and forth in a round bowl or a pendulum that swings back and
forth past its straight-down position (Fig. 8.4), we can see that a body that undergoes periodic
motion always has a stable equilibrium position. When it is moved away from this position and
released, a force or torque comes into play to pull it back toward equilibrium (such a force is
called a restoring force). But by the time it gets there, it has picked up some kinetic energy, so
it overshoots, stopping somewhere on the other side, and is again pulled back (by the restoring
force) toward equilibrium.
When the restoring force is directly proportional to the displacement from equilibrium the
oscillation is called simple harmonic motion, abbreviated SHM or simple harmonic oscillation
(SHO). This section is confined to such oscillations.
We start with the simple harmonic oscillation in Section 8.8.1 where we discuss the equation
of motion of a spring-mass system, its solutions and its natural frequency and period. Damped
oscillations i.e., oscillations that die out due to resistive forces are discussed in Section 8.8.2.
Then, we present forced oscillations (those oscillations that require driving forces to maintain
their motions) in Section 8.8.3. The discussion is confined to sinusoidal driving forces only.
The phenomenon of resonance appears naturally in this context (Section 8.8.4). Forced oscillations
with arbitrary periodic driving forces are treated in Section 8.8.5, where Fourier series are used.
Section 8.8.6 discusses the oscillation of a pendulum.
Figure 8.4: Simple systems that undergo harmonic motion: spring-mass (a) and pendulum (b).
The minus sign is here to express the effect of pulling back: the force is always opposite the
displacement vector. Thus, when the mass is at the left side of O the force is pointing to the
right and thus the spring pushes the mass back to O. In this way we get harmonic oscillation.
Using Newton’s 2nd law we can write
$$m\ddot{x} = -kx \implies \ddot{x} + \omega_0^2 x = 0, \quad \omega_0^2 = \frac{k}{m} \quad (8.8.1)$$
where $\ddot{x} = d^2x/dt^2$. The notation $\omega_0^2$ was introduced instead of $\omega_0$ so that the maths (to be
discussed) takes a simple form. At this stage we do not know its meaning; for now it is just a
notational convenience.
Assume that $x(t)$ is a solution of Eq. (8.8.1); then it is easy to see that $Ax(t)$ is also a
solution for any constant $A$. Now assume that we have two solutions to this
equation, namely $x_1(t)$ and $x_2(t)$, which are independent of each other$^*$; then $Ax_1(t) + Bx_2(t)$
is also a solution$^{**}$. Actually, as it contains two constants $A, B$, it is the general solution to
Eq. (8.8.1). Now we need to find two particular solutions and we are done. They are $\cos(\omega_0 t)$
and $\sin(\omega_0 t)$, which are the only functions whose second derivatives equal minus the
functions themselves. Therefore, the general solution is$^{||}$
$$x(t) = A_1\cos(\omega_0 t) + A_2\sin(\omega_0 t) \quad (8.8.2)$$
with two constants $A_1$ and $A_2$ being real numbers. They are determined using the so-called
initial conditions. The initial conditions specify the state of the system when we start it.
They include the initial position of the mass $x_0$ (which is $x(t)$ evaluated at $t = 0$, i.e.,
$x(0)$) and the initial velocity $v(0)$:
$$x(0) = A_1 = x_0, \quad v(0) = A_2\omega_0 = v_0$$
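Since $x(0) = A_1$ and $\dot{x}(0) = A_2\omega_0$, the constants follow directly from the initial data. A small numerical sketch (the values of $\omega_0$, $x_0$, $v_0$ are hypothetical) checks both the initial conditions and the ODE itself by finite differences:

```python
import numpy as np

omega0, x0, v0 = 2.0, 0.3, -1.2      # hypothetical spring-mass data
A1, A2 = x0, v0/omega0               # from x(0) = A1, v(0) = A2*omega0

x = lambda t: A1*np.cos(omega0*t) + A2*np.sin(omega0*t)

# check the initial velocity and the ODE x'' + omega0^2 x = 0 numerically
h = 1e-5
t = np.linspace(0.0, 10.0, 101)
v_num = (x(t + h) - x(t - h))/(2*h)           # central-difference velocity
a_num = (x(t + h) - 2*x(t) + x(t - h))/h**2   # central-difference acceleration
residual = np.max(np.abs(a_num + omega0**2*x(t)))
```

The residual being at round-off level confirms that the trial solution really solves Eq. (8.8.1) for these constants.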
While the solution in Eq. (8.8.2) is perfectly fine, it does not immediately reveal the amplitude
of the oscillation. Using the trigonometric identity $\cos(a - b) = \cos a\cos b + \sin a\sin b$, we
can re-write that equation in the following form
$$x = \sqrt{A_1^2 + A_2^2}\left(\frac{A_1}{\sqrt{A_1^2 + A_2^2}}\cos(\omega_0 t) + \frac{A_2}{\sqrt{A_1^2 + A_2^2}}\sin(\omega_0 t)\right) = A\cos(\omega_0 t - \phi) \quad (8.8.4)$$
where $A$ is the amplitude of the oscillation, i.e., the maximum displacement of the mass from
equilibrium, either in the positive or negative direction. If needed, we can relate $A$ and $\phi$ to $A_1$
and $A_2$: $A = \sqrt{A_1^2 + A_2^2}$ and $\cos\phi = A_1/A$. $\phi$ is called the phase angle, see Fig. 8.5.
Simple harmonic motion is repetitive. The period $T$ is the time it takes the mass to complete
one oscillation and return to the starting position. Everyone should be familiar with the period
of the orbit of the Earth around the Sun, which is approximately 365 days; it takes 365 days for the
Earth to complete a cycle. We can find the formula for $T$ based on this definition: the position
of the mass at time $t$ is exactly the position at time $t + T$; that is, $A\cos(\omega_0(t + T) - \phi) = A\cos(\omega_0 t - \phi)$. So,
$$\omega_0(t + T) - \phi = \omega_0 t - \phi + 2\pi \implies T = \frac{2\pi}{\omega_0} = 2\pi\sqrt{\frac{m}{k}} \quad (8.8.5)$$

$^*$For example $x_1(t) = \sin t$ and $x_2(t) = 5\sin t$ are not independent. Refer to Chapter 10 for detail.
$^{**}$You should verify this claim.
$^{||}$We should ask why there can't be other solutions. To answer this question we need to use
The unit of $T$ is the second in the SI system.
Next, we mention a related quantity named frequency, usually denoted by $f$. Frequency
answers how often something happens (e.g. how many visits per day). In the case of
SHO, it measures how many cycles occur per unit time. There is a relation between the period $T$
and the frequency $f$. To derive this relation, one example suffices. If it takes $0.1\,\mathrm{s}$ for one cycle
(i.e., $T = 0.1\,\mathrm{s}$), there will then be 10 cycles per second. Thus,
$$f = \frac{1}{T} = \frac{\omega_0}{2\pi} \quad (8.8.6)$$
In the SI system, the unit of $f$ is cycles per second, or Hertz, in honor of the first experimenter
with radio waves (which are electric vibrations). While $f$ is referred to as frequency, $\omega_0$ is
called angular frequency. It is so called because $\omega_0 = 2\pi f$, with the unit of radians per
second. There is no circle, so why angular frequency? There is a circle hidden here: whenever
we deal with sine and cosine we are dealing with the complex exponential, which in turn
involves the unit circle. See Fig. 8.6 for detail. Later on, we will call $\omega_0$ the natural frequency of
the system when the mass is driven by a cyclic force with yet another frequency $\omega$.
Solution using a complex exponential. As it is more convenient to work with the exponential
function than with the sine/cosine functions, we use a complex exponential to solve the SHO
problem. But $x(t)$ is real, not complex: we use complex numbers only to simplify the mathematics,
and we will take $x(t)$ as the real part of the complex solution. Using complex exponentials, we
write $x(t)$ as$^\dagger$
$$x(t) = C_1 e^{i\omega_0 t} + C_2 e^{-i\omega_0 t}, \quad C_1, C_2 \in \mathbb{C} \quad (8.8.7)$$
Heinrich Rudolf Hertz (22 February 1857 – 1 January 1894) was a German physicist who first conclusively
proved the existence of the electromagnetic waves predicted by James Clerk Maxwell’s equations of electromag-
netism.
$^\dagger$This is so because $e^{i\omega_0 t}$ and $e^{-i\omega_0 t}$ are two solutions of Eq. (8.8.1), thus any linear combination of them is
also a solution.
Using $e^{i\theta} = \cos\theta + i\sin\theta$ in Eq. (8.8.7) and comparing with Eq. (8.8.2), we can relate $C_1, C_2$
with $A_{1,2}$:
$$C_1 + C_2 = A_1, \quad i(C_1 - C_2) = A_2 \quad (8.8.8)$$
which indicates that $C_2$ is simply the complex conjugate of $C_1$: $C_2 = \bar{C}_1$. Now, we can proceed
with Eq. (8.8.7) where $C_2$ is replaced by $\bar{C}_1$:
$$\begin{aligned}
x(t) &= C_1 e^{i\omega_0 t} + \bar{C}_1 e^{-i\omega_0 t}\\
&= 2\,\mathrm{Re}[C_1 e^{i\omega_0 t}] \quad (\bar{C}_1 e^{-i\omega_0 t}\ \text{is the conjugate of}\ C_1 e^{i\omega_0 t})^{\S}\\
&= \mathrm{Re}[2C_1 e^{i\omega_0 t}] \quad (\text{with}\ 2C_1 = A_1 - iA_2 = Ae^{-i\phi},\ \text{Fig. 8.6})\\
&= \mathrm{Re}[Ae^{-i\phi}e^{i\omega_0 t}] = A\cos(\omega_0 t - \phi)
\end{aligned} \quad (8.8.10)$$
Figure 8.6: Solving SHO using a complex exponential: the complex number $Ae^{i(\omega_0 t - \phi)}$ moves counter-
clockwise with angular velocity $\omega_0$ around a circle of radius $A$. Its real part, $x(t)$, is the projection of the
complex number onto the real axis. While the complex number goes around the circle, this projection
oscillates back and forth on the $x$ axis.
Geometric meaning of Euler's identity. Recall that we derived Euler's identity
$e^{i\pi} + 1 = 0$ in Eq. (2.23.16). Now, we can give a geometric meaning to it. Refer to Fig. 8.6
but with $A = 1$ (unit circle) and $\phi = 0$. The complex number $e^{i\omega_0 t}$ circulates around the unit circle.
When $\omega_0 t = \pi$, it has traveled half of the circle and arrives at the point $(-1, 0)$, i.e., $-1$. And thus
$e^{i\pi} = -1$.
$^{\S}$If not clear, check Section 2.23 on complex conjugate rules, particularly $\bar{u}\bar{w} = \overline{uw}$.
Plots of displacement, velocity and acceleration. To verify whether our solutions agree with
our intuitive understanding of a SHO, we analyze the displacement $x(t)$, the velocity $\dot{x}$ and the
acceleration $\ddot{x}$ for $A_1 = 1.0$ and $A_2 = 0.0$. That is, we displace the mass (from the equilibrium)
to the right a distance $A_1$ and release it. The plots of $x$, $\dot{x}$ and $\ddot{x}$ are shown in Fig. 8.7.
The mass goes to the left with an increasing speed. When it reaches the
equilibrium point, the velocity is maximum (and so is the kinetic energy). It continues moving
to the left until it reaches $-A$ at $t = 0.5$, at which point the velocity is zero (and the potential
energy is maximum).
Figure 8.7: SHO with $x = A\cos\omega_0 t$: plots of displacement, velocity and acceleration. The frequency is
$\omega_0 = 2\pi$ so that $T = 1$. The amplitude is $A = 1$.
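The curves of Fig. 8.7 take only a few lines to generate. The sketch below uses the same $A = 1$ and $\omega_0 = 2\pi$, and confirms that the acceleration is always $-\omega_0^2$ times the displacement and that the speed peaks at the equilibrium crossing:

```python
import numpy as np

A, omega0 = 1.0, 2*np.pi                  # the values used in Fig. 8.7 (T = 1)
t = np.linspace(0.0, 3.0, 3001)

x = A*np.cos(omega0*t)                    # displacement
v = -A*omega0*np.sin(omega0*t)            # velocity
acc = -A*omega0**2*np.cos(omega0*t)       # acceleration

# index of the largest speed: it occurs where the mass crosses equilibrium
i_fast = np.argmax(np.abs(v))
x_at_fastest = x[i_fast]                  # should be essentially zero
```

Plotting `t` against each of `x`, `v`, `acc` with matplotlib reproduces the three panels of the figure.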
Energy conservation. Let's now compute the kinetic and potential energy of the SHO and check
energy conservation. From Eq. (8.8.4), we have $x$ and thus $\dot{x}$ as
$$x = A\cos(\omega_0 t - \phi) \implies \dot{x} = -A\omega_0\sin(\omega_0 t - \phi)$$
Using them, we can determine the kinetic energy $T$ and the potential energy $U$ as
$$T = \frac{1}{2}m\dot{x}^2 = \frac{1}{2}kA^2\sin^2(\omega_0 t - \phi), \quad U = \frac{1}{2}kx^2 = \frac{1}{2}kA^2\cos^2(\omega_0 t - \phi) \quad (8.8.11)$$
From that, energy conservation is easily seen: $T + U = \frac{1}{2}kA^2$. It's useful to plot the evolution
of the energies in time (Fig. 8.8a) to see the exchange between kinetic and potential energies. In
that plot, I used $A = 0.5$, $\phi = 0$, $m = k = 1$ (thus $\omega_0 = 1$ and $T = 2\pi$).
$$\frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2 = \frac{1}{2}kA^2 \implies \boxed{\frac{\dot{x}^2}{(\omega_0 A)^2} + \frac{x^2}{A^2} = 1}$$
What is the boxed equation? It is an ellipse! So, on the $x$-$\dot{x}$ plane, which is called the phase
plane, the trajectory of the mass is an ellipse (Fig. 8.8b). Think about it: we are dealing with a
mass moving on a line, but we get a circle if we use complex numbers to study this problem,
and we meet an ellipse if we use the phase plane. That's remarkable.
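A numerical confirmation of both claims, with the same values $A = 0.5$, $\phi = 0$, $m = k = 1$: the total energy stays at $kA^2/2$ and the phase-plane point always lies on the ellipse:

```python
import numpy as np

m, k, A, phi = 1.0, 1.0, 0.5, 0.0        # the values used for Fig. 8.8
omega0 = np.sqrt(k/m)
t = np.linspace(0.0, 2*np.pi, 500)

x = A*np.cos(omega0*t - phi)
xd = -A*omega0*np.sin(omega0*t - phi)

T = 0.5*m*xd**2                          # kinetic energy
U = 0.5*k*x**2                           # potential energy

# the phase-plane trajectory: xd^2/(omega0 A)^2 + x^2/A^2 should equal 1
ellipse = (xd/(omega0*A))**2 + (x/A)**2
```

Both checks hold identically at every sampled instant, not just on average.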
Figure 8.8: (a) exchange between the kinetic energy $T$ and the potential energy $U$ in time; (b) the
phase-plane trajectory of the mass, which is an ellipse.
$$\ddot{z} + 2\beta\dot{z} + \omega_0^2 z = 0, \quad z = e^{i\omega t} \quad (8.8.13)$$
Don't forget that $\omega_0^2 = k/m$.
Now comes the reason why we used complex numbers: the derivative of an exponential function
is the product of the function and a constant! Indeed,
$$z = e^{i\omega t}, \quad \dot{z} = i\omega e^{i\omega t} = i\omega z, \quad \ddot{z} = -\omega^2 e^{i\omega t} = -\omega^2 z \quad (8.8.14)$$
Substituting $z$, $\dot{z}$ and $\ddot{z}$ into Eq. (8.8.13), we get the following equation
$$z(-\omega^2 + 2\beta i\omega + \omega_0^2) = 0 \quad (8.8.15)$$
which is valid for all $t$. Thus,
$$-\omega^2 + 2\beta i\omega + \omega_0^2 = 0 \quad (8.8.16)$$
which is a quadratic equation for $\omega$. Solving this equation, we get:
$$\omega = i\beta \pm \sqrt{\omega_0^2 - \beta^2} \quad (8.8.17)$$
Now, we get different solutions depending on the sign of the term under the square root. In what
follows, we discuss these solutions.
Weakly damped is the case when $\omega_0 > \beta$. By setting $\omega_0^d = \sqrt{\omega_0^2 - \beta^2}$, we have $\omega = i\beta \pm \omega_0^d$.
So, $z = e^{i\omega t}$ is written as
$$z = e^{i\omega t} = e^{i(i\beta \pm \omega_0^d)t} = e^{(-\beta \pm i\omega_0^d)t} = e^{-\beta t}e^{\pm i\omega_0^d t} \quad (8.8.18)$$
These are two particular solutions of Eq. (8.8.13): $z_1 = e^{-\beta t}e^{i\omega_0^d t}$ and $z_2 = e^{-\beta t}e^{-i\omega_0^d t}$. Thus,
the general complex solution is
$$z = C_1 e^{-\beta t}e^{i\omega_0^d t} + C_2 e^{-\beta t}e^{-i\omega_0^d t} = e^{-\beta t}\underbrace{\left(C_1 e^{i\omega_0^d t} + C_2 e^{-i\omega_0^d t}\right)}_{z_0} \quad (8.8.19)$$
where $C_1$ and $C_2$ are two complex numbers. Now, we have to express $z$ in the form $x + iy$, so
that we can get its real part, which is the solution we are seeking. We write $z_0$ as
$$\begin{aligned}
z_0 &= \left[\mathrm{Re}(C_1) + i\,\mathrm{Im}(C_1)\right]\left(\cos\omega_0^d t + i\sin\omega_0^d t\right) + \left[\mathrm{Re}(C_2) + i\,\mathrm{Im}(C_2)\right]\left(\cos\omega_0^d t - i\sin\omega_0^d t\right)\\
&= \underbrace{\left(\mathrm{Re}(C_1) + \mathrm{Re}(C_2)\right)}_{A}\cos\omega_0^d t + \underbrace{\left(\mathrm{Im}(C_2) - \mathrm{Im}(C_1)\right)}_{B}\sin\omega_0^d t + i(\ldots)
\end{aligned} \quad (8.8.20)$$
The solution $x(t)$ is the real part of $z$, thus it is given by
$$x(t) = \mathrm{Re}\,z(t) = e^{-\beta t}\left(A\cos\omega_0^d t + B\sin\omega_0^d t\right) = e^{-\beta t}C\cos(\omega_0^d t - \phi) \quad (8.8.21)$$
Is this solution correct, or at least plausibly correct? Answering that question is simple: put
$\beta = 0$ (which is equivalent to $b = 0$) into $x(t)$; if that $x(t)$ has the same form as the
undamped solution, then $x(t)$ is fine. This can be checked to be the case. Furthermore, the term
$e^{-\beta t}$ is indeed a decay term: the oscillation has to come to a stop due to friction.
Example. Let's consider one example with $\omega_0 = 1$, $\beta = 0.05$, $x_0 = 1.0$, $v_0 = 3.0$. We need to
compute $C$ and $\phi$ using the initial conditions. Using Eq. (8.8.21), we have
$$\left.\begin{aligned} x_0 &= x(t = 0) = C\cos\phi\\ v_0 &= \dot{x}(t = 0) = -C\beta\cos\phi + C\omega_0^d\sin\phi\end{aligned}\right\} \implies \begin{cases} C = \dfrac{x_0}{\cos\phi}\\[1ex] \phi = \arctan\dfrac{v_0 + \beta x_0}{\omega_0^d x_0}\end{cases}$$
Now, we can plot x.t/ using Eq. (8.8.21) (Fig. 8.9). The code is given in Listing B.11.
Figure 8.9: Weakly damped oscillation can be seen as a simple harmonic oscillation with an exponentially
decaying amplitude $Ce^{-\beta t}$. The dashed curves are the amplitude envelopes $\pm Ce^{-\beta t}$.
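A sketch of the computation behind Fig. 8.9 (an independent reconstruction, not the book's Listing B.11): it builds $C$ and $\phi$ from the initial conditions above and verifies them with a finite-difference derivative:

```python
import numpy as np

omega0, beta, x0, v0 = 1.0, 0.05, 1.0, 3.0      # the example's data
omega0d = np.sqrt(omega0**2 - beta**2)          # damped frequency

phi = np.arctan((v0 + beta*x0)/(omega0d*x0))    # phase from the ICs
C = x0/np.cos(phi)                              # amplitude from the ICs

x = lambda t: C*np.exp(-beta*t)*np.cos(omega0d*t - phi)   # Eq. (8.8.21)

h = 1e-6
v0_num = (x(h) - x(-h))/(2*h)    # numerical check of the initial velocity
```

Plotting `x(t)` together with the envelopes `±C*exp(-beta*t)` over $0 \le t \le 50$ with matplotlib reproduces the figure.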
Overdamped is the case when $\omega_0 < \beta$. In this case, $\omega = i\beta \pm i\sqrt{\beta^2 - \omega_0^2} = i(\beta \pm \bar{\omega})$, with
$\bar{\omega} = \sqrt{\beta^2 - \omega_0^2}$.
$$\left.\begin{aligned} z_1 &= e^{i\omega_1 t} = e^{-(\beta + \bar{\omega})t}\\ z_2 &= e^{i\omega_2 t} = e^{-(\beta - \bar{\omega})t}\end{aligned}\right\} \implies z(t) = C_1 e^{-(\beta + \bar{\omega})t} + C_2 e^{-(\beta - \bar{\omega})t} \quad (8.8.22)$$
There are two main reasons for the importance of sinusoidal driving forces. First, there are many
important systems in which the driving force is sinusoidal. The second reason is subtler. It turns
out that any periodic force can be built up from sinusoidal forces using Fourier series.
Eq. (8.8.23) can be rewritten as follows
$$\ddot{x} + 2\beta\dot{x} + \omega_0^2 x = f_0\cos(\omega t), \quad \omega_0^2 = \frac{k}{m}, \quad 2\beta = \frac{b}{m}, \quad f_0 = \frac{F_0}{m} \quad (8.8.24)$$
We are going to solve this equation using a complex function $z(t) = x(t) + iy(t)$ satisfying
Eq. (8.8.24):
$$\ddot{z} + 2\beta\dot{z} + \omega_0^2 z = f_0 e^{i\omega t} \quad (8.8.25)$$
It can be seen that the real part of $z(t)$, i.e., $x(t)$, is actually the solution of Eq. (8.8.24). With
$z = Ce^{i\omega t}$, we compute $\dot{z}$, $\ddot{z}$:
$$z = Ce^{i\omega t}, \quad \dot{z} = i\omega Ce^{i\omega t}, \quad \ddot{z} = -\omega^2 Ce^{i\omega t} \quad (8.8.26)$$
And substituting them into Eq. (8.8.25) gives
$$-\omega^2 C + 2\beta i\omega C + \omega_0^2 C = f_0 \quad (8.8.27)$$
which gives us $C$ as follows
$$C = \frac{f_0}{\omega_0^2 - \omega^2 + 2i\omega\beta} = \frac{f_0(\omega_0^2 - \omega^2 - 2i\omega\beta)}{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2} = \bar{f}_0(\omega_0^2 - \omega^2 - 2i\omega\beta), \quad \bar{f}_0 = \frac{f_0}{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2} \quad (8.8.28)$$
Now, we write $z = Ce^{i\omega t}$ explicitly in the form $x(t) + iy(t)$ to find its real part:
$$\begin{aligned}
z = Ce^{i\omega t} &= C(\cos\omega t + i\sin\omega t)\\
&= \bar{f}_0(\omega_0^2 - \omega^2 - 2i\omega\beta)(\cos\omega t + i\sin\omega t)\\
&= \bar{f}_0\left[(\omega_0^2 - \omega^2)\cos\omega t + 2\omega\beta\sin\omega t\right] + i\bar{f}_0\left[(\omega_0^2 - \omega^2)\sin\omega t - 2\omega\beta\cos\omega t\right]
\end{aligned} \quad (8.8.29)$$
Thus, the solution to Eq. (8.8.24), which is the real part of $z(t)$, is given by
$$x(t) = \mathrm{Re}(z) = \frac{f_0(\omega_0^2 - \omega^2)}{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}\cos\omega t + \frac{2f_0\omega\beta}{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}\sin\omega t \quad (8.8.30)$$
Now, we use the trigonometric identity $\cos(a - b) = \cos a\cos b + \sin a\sin b$ to rewrite $x(t)$.
First, we re-arrange $x(t)$ in the form $\cos\cos + \sin\sin$; then we have a compact form for
$x(t)$:
$$\begin{aligned}
x(t) &= \frac{f_0}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}}\left[\frac{(\omega_0^2 - \omega^2)\cos\omega t}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}} + \frac{2\omega\beta\sin\omega t}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}}\right]\\
&= A\cos(\omega t - \delta), \quad A = \frac{f_0}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}}, \quad \tan\delta = \frac{2\omega\beta}{\omega_0^2 - \omega^2}
\end{aligned} \quad (8.8.31)$$
We have just computed the response of the system to the driving force: a sinusoidal driving force
results in a sinusoidal oscillation with an amplitude proportional to the amplitude of the force.
All looks reasonable. But do not forget the natural oscillation response. We're interested in the
weakly damped case only, so the total solution is given by
$$x(t) = A\cos(\omega t - \delta) + Be^{-\beta t}\cos(\omega_0^d t - \phi) \quad (8.8.32)$$
Example. A mass is released from rest at $t = 0$ and $x = 0$. The driving force is $f = f_0\cos\omega t$
with $f_0 = 1000$ and $\omega = 2$. Assume that the natural frequency is $\omega_0 = 5\omega = 10$, and the
damping is $\beta = \omega_0/20 = 1/2$, i.e., a weakly damped oscillation.
We determine $B$ and $\phi$ from the given initial conditions, noting that $A$ and $\delta$ are known:
$A = 1.06$ and $\delta = 0.0208$.
$$\left.\begin{aligned} x_0 &= A\cos\delta + B\cos\phi\\ v_0 &= \omega A\sin\delta + B(-\beta\cos\phi + \omega_0^d\sin\phi)\end{aligned}\right\} \implies \begin{cases} B\cos\phi = x_0 - A\cos\delta\\ -\beta B\cos\phi + B\omega_0^d\sin\phi = v_0 - \omega A\sin\delta\end{cases}$$
which yields $B = -1.056$ and $\phi = 0.054$. Using all these numbers in Eq. (8.8.32) we can
plot the solution as shown in Fig. 8.10. We provide the plot of the driving force, the transient
solution and the total solution $x(t)$. Codes to produce these plots are given in Appendix B.4.
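The same bookkeeping in code (an independent sketch, not the book's Appendix B.4 listing): compute $A$ and $\delta$ from Eq. (8.8.31), then solve the two initial-condition equations for $B\cos\phi$ and $B\sin\phi$ and recover $B$ and $\phi$ with `arctan2`:

```python
import numpy as np

f0, w, w0, beta = 1000.0, 2.0, 10.0, 0.5     # driving force and system data
x0, v0 = 0.0, 0.0                            # released from rest at the origin
w0d = np.sqrt(w0**2 - beta**2)               # damped frequency

# steady-state amplitude and phase, Eq. (8.8.31)
A = f0/np.sqrt((w0**2 - w**2)**2 + 4*w**2*beta**2)
delta = np.arctan2(2*w*beta, w0**2 - w**2)

# transient constants from the initial conditions
P = x0 - A*np.cos(delta)                     # P = B cos(phi)
Q = (v0 - w*A*np.sin(delta) + beta*P)/w0d    # Q = B sin(phi)
B, phi = np.hypot(P, Q), np.arctan2(Q, P)

x = lambda t: A*np.cos(w*t - delta) + B*np.exp(-beta*t)*np.cos(w0d*t - phi)

h = 1e-6
v_init = (x(h) - x(-h))/(2*h)                # numerical initial velocity
```

The asserts check that the total solution really starts from rest at the origin; sampling `x(t)` on a fine grid and plotting it reproduces the bottom panel of Fig. 8.10.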
Figure 8.10: Driven oscillation of a weakly damped spring-mass system: the frequency of the force is $\omega = 2$, and
the natural frequency is $\omega_0 = 10$. After about 3 cycles, the motion is indistinguishable from a pure cosine,
oscillating at exactly the drive frequency. The free oscillation has died out and only the long term motion
remains. In the beginning ($t \lesssim 3$), the effects of the transients are clearly visible: as they oscillate at a faster
frequency they show up as a rapid succession of bumps and dips. In fact, you can see that there are five
such bumps within the first cycle, indicating that $\omega_0 = 5\omega$.
8.8.4 Resonance
By looking at the formula for the oscillation amplitude $A$, we can explain the phenomenon of
resonance. Recall that $A$ is given by
$$A = \frac{f_0}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\omega^2\beta^2}} \quad (8.8.33)$$
which has a maximum value when the denominator is at its minimum. Note that we are not
interested in using a big force to get a large amplitude. With only a relatively small force, but
at the correct frequency, we can get a large oscillation anyway. Moreover, we are only interested
in the case where $\beta$ is small, i.e., weakly damped. It can be seen that $A$ is maximum when $\omega \approx \omega_0$, see
Fig. 8.11a, and the maximum value is
$$A_{\max} \approx \frac{f_0}{2\omega_0\beta} \quad (8.8.34)$$
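A brute-force scan of $A(\omega)$ (the values of $f_0$, $\omega_0$, $\beta$ are hypothetical) showing that the peak sits near $\omega_0$ with a height close to $f_0/(2\omega_0\beta)$:

```python
import numpy as np

f0, w0, beta = 1.0, 10.0, 0.5            # hypothetical weakly damped system
w = np.linspace(0.1, 20.0, 20000)        # scan the driving frequency

A = f0/np.sqrt((w0**2 - w**2)**2 + 4*w**2*beta**2)   # Eq. (8.8.33)

w_peak = w[np.argmax(A)]                 # resonance frequency from the scan
A_max_approx = f0/(2*w0*beta)            # Eq. (8.8.34)
```

For weak damping the exact maximum is at $\omega = \sqrt{\omega_0^2 - 2\beta^2}$, which is why the scan lands just below $\omega_0$.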
Figure 8.11: Resonance curves $A^2(\omega)$ for $\beta = 0.1\omega_0$, $0.2\omega_0$ and $0.3\omega_0$: the smaller the damping, the
sharper and taller the peak near $\omega = \omega_0$.
is now given by
$$m\ddot{x} + b\dot{x} + kx = f(t) \quad (8.8.36)$$
And we replace $f(t)$ by its Fourier series (Section 4.18)
$$f(t) = \sum_{n=0}^{\infty}\left[a_n\cos(n\omega t) + b_n\sin(n\omega t)\right] \quad (8.8.37)$$
How is this new form different from the original problem, Eq. (8.8.36)? Now, we have a damped
SHO with infinitely many driving forces $f_0(t), f_1(t), \ldots$ But for each of these forces we are able
to solve for the solution $x_n(t)$, with $n = 0, 1, \ldots$ (we have assumed that the Fourier series
contains only the cosine terms for simplicity):
$$x_n(t) = A_n\cos(n\omega t - \delta_n), \quad A_n = \frac{a_n}{\sqrt{(\omega_0^2 - n^2\omega^2)^2 + 4n^2\omega^2\beta^2}}, \quad \tan\delta_n = \frac{2n\omega\beta}{\omega_0^2 - n^2\omega^2} \quad (8.8.40)$$
And what is the final solution? It is simply the sum of all the $x_n(t)$. Why? Because our equation
is linear! To see this, let's assume there are only two forces: with $f_1(t)$ we have the solution
$x_1(t)$, and similarly for $f_2(t)$, so we can write:
$$m\ddot{x}_1 + b\dot{x}_1 + kx_1 = f_1(t), \quad m\ddot{x}_2 + b\dot{x}_2 + kx_2 = f_2(t)$$
Adding the two equations gives
$$m(\ddot{x}_1 + \ddot{x}_2) + b(\dot{x}_1 + \dot{x}_2) + k(x_1 + x_2) = f_1(t) + f_2(t)$$
which indicates that $x(t) = x_1(t) + x_2(t)$ is indeed the solution. This is known as the principle
of superposition, which we discussed in Section 8.6. There, the discussion was abstract; here it is concrete.
In summary, we had a hard problem (due to $f(t)$), and we replaced this $f(t)$ with many
easier sinusoidal forces. For each force we solved an easier problem, and we added these
solutions together to get the final solution. It is indeed the spirit of calculus!
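A sketch of the superposition recipe (the cosine coefficients $a_n$ below are hypothetical): build each steady-state mode from Eq. (8.8.40), sum them, and check by finite differences that the sum satisfies $\ddot{x} + 2\beta\dot{x} + \omega_0^2 x = f(t)$:

```python
import numpy as np

w0, beta, w = 10.0, 0.5, 2.0
a = {1: 1.0, 3: 1.0/3.0, 5: 1.0/5.0}     # a few hypothetical cosine coefficients

def mode(n, an, t):
    """Steady-state response to the single force an*cos(n*w*t), Eq. (8.8.40)."""
    An = an/np.sqrt((w0**2 - n**2*w**2)**2 + 4*n**2*w**2*beta**2)
    dn = np.arctan2(2*n*w*beta, w0**2 - n**2*w**2)
    return An*np.cos(n*w*t - dn)

x = lambda t: sum(mode(n, an, t) for n, an in a.items())     # superposition
f = lambda t: sum(an*np.cos(n*w*t) for n, an in a.items())   # total force

# the superposed x(t) must satisfy x'' + 2*beta*x' + w0^2 x = f(t)
t = np.linspace(0.0, 3.0, 200)
h = 1e-5
xd = (x(t + h) - x(t - h))/(2*h)
xdd = (x(t + h) - 2*x(t) + x(t - h))/h**2
residual = np.max(np.abs(xdd + 2*beta*xd + w0**2*x(t) - f(t)))
```

The residual is at finite-difference accuracy, confirming that summing per-mode responses solves the full problem.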
$$\ddot{\theta} + \frac{g}{l}\sin\theta = 0, \quad \text{or} \quad \frac{d^2\theta}{dt^2} + \frac{g}{l}\sin\theta = 0 \quad (8.8.44)$$
For small vibrations, we have $\sin\theta \approx \theta$ (remember the Taylor series for sine?). Thus, our
equation is further simplified to
$$\ddot{\theta} + \omega^2\theta = 0, \quad \omega = \sqrt{\frac{g}{l}} \implies T = 2\pi\sqrt{\frac{l}{g}} \quad (8.8.45)$$
And voilà, we see again the simple harmonic oscillation equation! The natural frequency
(and the period) of a pendulum does not depend on the mass of the bob. And of course it does
not depend on how far it swings, i.e., the initial conditions have no say in this. This fact was first
observed by Galileo Galilei when, as a student of medicine, he watched a swinging chandelier.
A historical note: it was the Dutch mathematician Christiaan Huygens (1629-1695) who first
derived the formula for the period of a pendulum. Note that we can also use dimensional
analysis to come up with $\omega \propto \sqrt{g/l}$.
Pendulum and the elliptic integral of the first kind. Herein I demonstrate how an elliptic integral of
the first kind shows up in the formula for the period of a pendulum when its amplitude is large.
The idea is to start with Eq. (8.8.44) and massage it so that we can have $dt$ as a function of $\theta$.
Then, integrating $dt$ gives the period $T$.
We re-write Eq. (8.8.44) using $\omega$:
$$\frac{d^2\theta}{dt^2} + \omega^2\sin\theta = 0 \quad (8.8.46)$$
Multiplying both sides of this equation by $\dot{\theta}$, we get
$$\frac{d^2\theta}{dt^2}\frac{d\theta}{dt} + \omega^2\sin\theta\frac{d\theta}{dt} = 0 \quad (8.8.47)$$
Now, integrating this equation w.r.t. $t$, we obtain$^*$
$$\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2 - \omega^2\cos\theta = k \quad (8.8.48)$$
$^*$Check Eq. (7.10.17) if this was not clear.
$$L\ddot{q} + R\dot{q} + \frac{1}{C}q = 0 \quad (8.8.52)$$
This has exactly the form of Eq. (8.8.12) for the damped oscillator.
And anything that we know about the damped oscillator will be immediately applicable to the
RLC circuit. In other words, the RLC circuit is an electrical analog of a spring-mass system with
damping.
Mathematicians do not care about physics or applications; what matters to them is the following
nice equation, with $a, b, c \in \mathbb{R}$:
$$a\ddot{y} + b\dot{y} + cy = 0 \quad (8.8.53)$$
which they call a second order ordinary differential equation. But now you understand why
university students have to study it and similar equations: because they describe our world
quite nicely.
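As an illustration of the analogy, a sketch with hypothetical circuit values: map $L$, $R$, $C$ to $\beta$ and $\omega_0$, write down the weakly damped solution for the charge $q(t)$, and verify the circuit equation by finite differences:

```python
import numpy as np

L, R, C = 1.0, 0.2, 0.04                 # hypothetical circuit values
beta, w0 = R/(2*L), 1/np.sqrt(L*C)       # damped-oscillator parameters
wd = np.sqrt(w0**2 - beta**2)            # weakly damped since beta < w0

q0, i0 = 1.0, 0.0                        # initial charge and current
phi = np.arctan((i0 + beta*q0)/(wd*q0))  # same recipe as the mechanical case
amp = q0/np.cos(phi)
q = lambda t: amp*np.exp(-beta*t)*np.cos(wd*t - phi)

# check the circuit equation L q'' + R q' + q/C = 0 by finite differences
t = np.linspace(0.0, 5.0, 100)
h = 1e-5
qd = (q(t + h) - q(t - h))/(2*h)
qdd = (q(t + h) - 2*q(t) + q(t - h))/h**2
residual = np.max(np.abs(L*qdd + R*qd + q(t)/C))
```

Everything derived for the damped spring-mass system carries over verbatim, only the names of the constants change.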
Figure 8.12: A simple system of two coupled oscillators. In the absence of spring 2, the two carts would oscillate
independently of each other. It is spring 2 that couples the two carts.
where $M$ is the mass matrix and $K$ is the spring-constant matrix or stiffness matrix. Note that
these two matrices are symmetric. Also note that, using matrix notation, the equation of motion
of the coupled oscillators, $M\ddot{x} = -Kx$, is a very natural generalization of that of a single oscillator:
with just one degree of freedom, the quantities $x$, $K$ and $M$ are just ordinary numbers and
we had $m\ddot{x} = -kx$.
We use complex exponentials to solve Eq. (8.8.55):
$$z = \begin{bmatrix} z_1\\ z_2\end{bmatrix} = \begin{bmatrix} A_1\\ A_2\end{bmatrix}e^{i\omega t} = \mathbf{a}e^{i\omega t} \implies \ddot{z} = -\omega^2\mathbf{a}e^{i\omega t} \quad (8.8.56)$$
Substituting into the equation of motion gives
$$(K - \omega^2 M)\mathbf{a} = 0 \quad (8.8.57)$$
For a nontrivial $\mathbf{a}$, the determinant must vanish:
$$\det\left(K - \omega^2 M\right) = 0 \quad (8.8.58)$$
This is a quadratic equation for $\omega^2$ and has two solutions for $\omega^2$ (in general). This implies that
there are two frequencies $\omega_{1,2}$ at which the carts oscillate in pure sinusoidal motion. These
frequencies are called normal frequencies. The two sinusoidal motions associated with these
normal frequencies are known as normal modes. The normal modes are determined by solving
Eq. (8.8.57). If you know linear algebra, what we are doing here is essentially a generalized
eigenvalue problem in which $\omega^2$ plays the role of the eigenvalue and $\mathbf{a}$ plays the role of the eigenvector;
refer to Section 10.10 for more detail on eigenvalue problems.
Example 1. Let’s consider the case of equal stiffness springs and equal masses: k1 D k2 D
k3 D k and m1 D m2 D m. Using Eq. (8.8.58) we can determine the normal frequencies:
$$\omega_1 = \sqrt{\frac{k}{m}}, \quad \omega_2 = \sqrt{\frac{3k}{m}} \quad (8.8.59)$$
Check Chapter 10 for a discussion on matrices.
Did you notice anything special about $\omega_1$? We use Eq. (8.8.57) to compute $\mathbf{a}$:
$$\left(\begin{bmatrix} 2k & -k\\ -k & 2k\end{bmatrix} - \begin{bmatrix}\omega^2 m & 0\\ 0 & \omega^2 m\end{bmatrix}\right)\begin{bmatrix} A_1\\ A_2\end{bmatrix} = \begin{bmatrix} 0\\ 0\end{bmatrix} \quad (8.8.60)$$
With $\omega_1$, we solve Eq. (8.8.60) to get $A_1 = A_2 = Ae^{-i\phi_1}$. So, we have $z_1(t)$ and $z_2(t)$ and
from their real parts the actual solutions for mode 1:
$$\left.\begin{aligned} z_1 &= Ae^{-i\phi_1}e^{i\omega_1 t}\\ z_2 &= Ae^{-i\phi_1}e^{i\omega_1 t}\end{aligned}\right\} \implies \begin{aligned} x_1(t) &= A\cos(\omega_1 t - \phi_1)\\ x_2(t) &= A\cos(\omega_1 t - \phi_1)\end{aligned} \quad (8.8.61)$$
As $x_1(t) = x_2(t)$, the two carts oscillate in a way that spring 2 always stays in its unstretched
configuration. In other words, spring 2 is irrelevant and thus the system oscillates with a natural
frequency similar to a single oscillator (i.e., $\omega = \sqrt{k/m}$).
With $\omega_2$, we solve Eq. (8.8.60) to get $A_1 = -A_2 = Be^{-i\phi_2}$. The mode 2 solutions are
$$x_1(t) = +B\cos(\omega_2 t - \phi_2), \quad x_2(t) = -B\cos(\omega_2 t - \phi_2) \quad (8.8.62)$$
These solutions tell us that when cart 1 moves to the left by some distance, cart 2 moves to the right by the
same distance. We say that the two carts oscillate with the same amplitude but are out of phase.
Together, the general solution is:
$$x(t) = A\begin{bmatrix} 1\\ 1\end{bmatrix}\cos(\omega_1 t - \phi_1) + B\begin{bmatrix} 1\\ -1\end{bmatrix}\cos(\omega_2 t - \phi_2) \quad (8.8.63)$$
with the four constants of integration A; B; 1 ; 2 to be determined from four initial conditions.
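The normal frequencies and mode shapes of Example 1 can also be recovered numerically. Since $M = mI$ here, the generalized eigenvalue problem reduces to an ordinary symmetric one for $K/m$ (the value of $k$ is hypothetical):

```python
import numpy as np

k, m = 2.0, 1.0                                  # Example 1: equal springs/masses
K = np.array([[2*k, -k], [-k, 2*k]])             # stiffness matrix

# with M = m*I, K a = w^2 M a becomes the symmetric eigenproblem (K/m) a = w^2 a
w2, modes = np.linalg.eigh(K/m)                  # eigenvalues sorted ascending
omegas = np.sqrt(w2)                             # the two normal frequencies

ratio1 = modes[1, 0]/modes[0, 0]                 # mode 1 shape: +1 (in phase)
ratio2 = modes[1, 1]/modes[0, 1]                 # mode 2 shape: -1 (out of phase)
```

The computed frequencies match Eq. (8.8.59) and the eigenvectors are the $[1, 1]$ and $[1, -1]$ shapes of Eq. (8.8.63).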
Example 2. This case involves equal masses, but the second spring is much less stiff: $k_1 = k_3 = k$, $k_2 \ll k$, $m_1 = m_2 = m$. The two normal frequencies are
$$\omega_1 = \sqrt{\frac{k}{m}}, \quad \omega_2 = \sqrt{\frac{k + 2k_2}{m}} \quad (8.8.64)$$
As we have discussed, spring 2 is irrelevant in mode 1, so we get the same mode 1 frequency as
in Example 1. As $k_2 \ll k$, $\omega_1 \approx \omega_2$, and we can write them in terms of their average $\omega_0$ and half
difference $\epsilon$ (you will see why we did this via Eq. (8.8.67); the basic idea is that we can write
the solutions in two separate terms, one involving $\omega_0$ and one involving $\epsilon$):
$$\omega_1 = \omega_0 - \epsilon, \quad \omega_2 = \omega_0 + \epsilon, \quad \omega_0 = \frac{\omega_1 + \omega_2}{2}, \quad \epsilon = \frac{\omega_2 - \omega_1}{2} \quad (8.8.65)$$
Therefore, the normal modes are
$$\begin{cases} z_1 = C_1 e^{i(\omega_0 - \epsilon)t} = C_1 e^{i\omega_0 t}e^{-i\epsilon t}\\ z_2 = C_1 e^{i(\omega_0 - \epsilon)t} = C_1 e^{i\omega_0 t}e^{-i\epsilon t}\end{cases}\ \text{(mode 1)}; \quad \begin{cases} z_1 = +C_2 e^{i(\omega_0 + \epsilon)t} = +C_2 e^{i\omega_0 t}e^{i\epsilon t}\\ z_2 = -C_2 e^{i(\omega_0 + \epsilon)t} = -C_2 e^{i\omega_0 t}e^{i\epsilon t}\end{cases}\ \text{(mode 2)}$$
where in the last step, we have used the formula that relating sine/cosine to complex exponentials,
see Section 2.23.4 if you do not recall this. And the real solutions are thus given by
And the real solutions are thus given by
$$x(t) = \begin{bmatrix} A\cos(\epsilon t)\cos(\omega_0 t)\\ A\sin(\epsilon t)\sin(\omega_0 t)\end{bmatrix} \quad (8.8.67)$$
and $\epsilon = 1$, and consider a time duration of $2\pi$. First, we
try to understand what $A\sin(\epsilon t)\sin(\omega_0 t)$ means.
(Figure: plots of $x_1(t)$ and $x_2(t)$ over $0 \le t \le 2\pi$.)
Later in Section 8.10 we will see that this is nothing but the beat phenomenon occurring when two
sound waves of similar frequencies meet.
This is so because at $t = 0$, $x_1 = A$ while $\dot{x}_1 = x_2 = \dot{x}_2 = 0$.
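A numerical check of Eq. (8.8.67) (the values of $\omega_0$ and $\epsilon$ are hypothetical): superposing the two modes with the stated initial conditions indeed gives the slowly modulated envelopes:

```python
import numpy as np

A, w0, eps = 1.0, 8.0, 1.0                 # hypothetical carrier and beat data
w1, w2 = w0 - eps, w0 + eps                # the two normal frequencies
t = np.linspace(0.0, 2*np.pi, 2000)

# superposing the two modes so that x1(0) = A and everything else is zero
x1 = 0.5*A*(np.cos(w1*t) + np.cos(w2*t))
x2 = 0.5*A*(np.cos(w1*t) - np.cos(w2*t))

# Eq. (8.8.67): slow envelopes cos(eps*t), sin(eps*t) modulating cos/sin(w0*t)
x1_env = A*np.cos(eps*t)*np.cos(w0*t)
x2_env = A*np.sin(eps*t)*np.sin(w0*t)
```

The agreement is exact: it is the sum-to-product trigonometric identity in disguise, which is precisely the beat phenomenon.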
So, with the technique of separation of variables, we have converted a single second order PDE
into two ODEs. That's the key lesson! What is interesting is that it is
straightforward to solve these two ODEs:
$$g(t) = A_1 e^{-\lambda^2 t}, \quad h(x) = A_2\cos\lambda x + A_3\sin\lambda x \quad (8.9.8)$$
where $A_1, A_2, A_3$ are arbitrary constants. With these functions substituted into Eq. (8.9.4), the
temperature field is given by
$$\theta(x, t) = e^{-\lambda^2 t}(A\cos\lambda x + B\sin\lambda x) \quad (8.9.9)$$
with $A = A_1 A_2$ and $B = A_1 A_3$. We have to find $A$, $B$ and $\lambda$ so that $\theta(x, t)$ satisfies the BCs
and IC. For the BCs, we have
$$\begin{aligned}
\theta(0, t) = 0&: \quad e^{-\lambda^2 t}A = 0 \implies A = 0\\
\theta(1, t) = 0&: \quad e^{-\lambda^2 t}B\sin\lambda = 0 \implies \sin\lambda = 0 \implies \lambda = n\pi, \quad n = 1, 2, \ldots
\end{aligned} \quad (8.9.10)$$
So, we have an infinite number of solutions written as$^{\S}$
$$\theta_n(x, t) = B_n e^{-(n\pi)^2 t}\sin(n\pi x), \quad n = 1, 2, 3, \ldots \quad (8.9.11)$$
All satisfy the boundary conditions (and of course the PDE). It now remains to deal with the initial
condition. First, since the PDE is a linear equation, the sum of all the fundamental solutions is
also a solution; this is known as the principle of superposition. So, we have
$$\theta(x, t) = \sum_{n=1}^{\infty}\theta_n(x, t) = \sum_{n=1}^{\infty} B_n e^{-(n\pi)^2 t}\sin(n\pi x) \quad (8.9.12)$$
Evaluating this solution at $t = 0$ gives us (noting that the initial condition Eq. (8.9.2) says that
at $t = 0$ the temperature is $\phi(x)$):
$$\theta(x, 0) = \sum_{n=1}^{\infty} B_n\sin(n\pi x) = \phi(x) \quad (8.9.13)$$
Now the problem becomes this: if we can express the initial temperature $\phi(x)$ as an infinite
trigonometric series $\sum_{n=1}^{\infty} B_n\sin(n\pi x)$, then we have solved the heat equation! Here Fourier
had to move away from physics to mathematics: he had to find the coefficients $B_n$ in Eq. (8.9.13).
We refer to Section 4.18 for a discussion of how Fourier computed $B_n$. Then the solution to
Eq. (8.9.1) is the following infinite series
$$\theta(x, t) = \sum_{n=1}^{\infty} B_n e^{-(n\pi)^2 t}\sin(n\pi x), \quad B_n = 2\int_0^1\phi(x)\sin(n\pi x)\,dx \quad (8.9.14)$$
Should we worry about the infinity involved in this solution? No, we need not, thanks to
the term $e^{-(n\pi)^2 t}$, which is a decaying term: for large $n$ and/or large $t$, this term
is small. See Fig. 8.14 for an illustration.
$^{\S}$$B = 0$ also satisfies the BCs, but it would result in the boring solution $\theta(x, t) = 0$.
Figure 8.14: Solution of the heat equation at $t = 0$, $t = 0.005$ and $t = 0.05$: higher-order terms vanish
first and thus the wiggles disappear first.
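The series solution Eq. (8.9.14) is easy to evaluate numerically. The sketch below computes the $B_n$ with the trapezoid rule and sums a truncated series; with the single-mode initial temperature $\phi(x) = \sin(\pi x)$ only $B_1 = 1$ survives, so the answer is known in closed form:

```python
import numpy as np

def heat_solution(phi, x, t, n_terms=50):
    """Truncated Fourier-series solution of theta_t = theta_xx on [0, 1]
    with theta(0, t) = theta(1, t) = 0 and theta(x, 0) = phi(x); Eq. (8.9.14)."""
    xs = np.linspace(0.0, 1.0, 2001)       # grid for the Bn integrals
    dx = xs[1] - xs[0]
    theta = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        g = phi(xs)*np.sin(n*np.pi*xs)
        Bn = 2.0*dx*(0.5*g[0] + g[1:-1].sum() + 0.5*g[-1])   # trapezoid rule
        theta += Bn*np.exp(-(n*np.pi)**2*t)*np.sin(n*np.pi*x)
    return theta

phi = lambda x: np.sin(np.pi*x)            # initial temperature: one sine mode
x = np.linspace(0.0, 1.0, 101)
theta = heat_solution(phi, x, t=0.05)

# for this phi the exact answer is exp(-pi^2 t) sin(pi x)
exact = np.exp(-np.pi**2*0.05)*np.sin(np.pi*x)
err = np.max(np.abs(theta - exact))
```

Evaluating `heat_solution` at several times and plotting the curves reproduces the decay of the wiggles seen in Fig. 8.14.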
History note 8.1: Joseph Fourier (21 March 1768 – 16 May 1830)
Jean-Baptiste Joseph Fourier was a French mathematician and physicist
who is best known for initiating the investigation of Fourier series,
which eventually developed into Fourier analysis and harmonic
analysis, and their applications to problems of heat transfer and vi-
brations. The Fourier transform and Fourier’s law of conduction are
also named in his honor. Fourier is also generally credited with the
discovery of the greenhouse effect.
In 1822, Fourier published his work on heat flow in The Analytical
Theory of Heat. There were three important contributions in this work, one purely mathematical,
two essentially physical. In mathematics, Fourier claimed that any function of
a variable, whether continuous or discontinuous, can be expanded in a series of sines of
multiples of the variable. Though this result is not correct without additional conditions,
Fourier’s observation that some discontinuous functions are the sum of infinite series
was a breakthrough. One important physical contribution in the book was the concept of
dimensional homogeneity in equations; i.e. an equation can be formally correct only if
the dimensions match on either side of the equality; Fourier made important contributions
to dimensional analysis. The other physical contribution was Fourier’s proposal of his
partial differential equation for conductive diffusion of heat. This equation is now taught
to every student of mathematical physics.
and move our hand up and down, a wave is created and travels to the right. And that’s a traveling
wave. Now, we need to describe it mathematically. And it turns out not so difficult.
Assume that at time t D 0, we have a wave of which the shape can be described by a
function y D f .x/. Furthermore, assume that the wave travels with a constant velocity c to the
right and its shape does not change in time. Then, at time t , the wave is given by f .x ct /.
To introduce some terminologies, let’s consider the simplest traveling wave; a sine wave.
Sinusoidal waves. Now, consider a sine wave (people prefer to call it a sinusoidal wave) traveling
to the right (along the $x$ direction) with a velocity $c$. Its equation is
$$y(x, t) = A\sin\left[\frac{2\pi}{\lambda}(x - ct)\right] \quad (8.10.1)$$
The amplitude of the wave is $A$, the wavelength is $\lambda$. That is, the function $y(x, t)$ repeats itself
each time $x$ increases by the distance $\lambda$. Thus, the wavelength is the spatial period of a periodic
wave$^*$. It is the distance between consecutive corresponding points of the same phase on the
wave, such as two adjacent crests, troughs, or zero crossings (Fig. 8.15).
So far we have focused on the shape of the entire wave at one particular time instant. Now
we focus on one particular location on the wave, say $x^*$, and let time vary. As time goes on, the
wave passes by the point and makes it move up and down. (Think of a leaf on a pond that bobs
up and down with the motion of the water ripples.) The motion of the point is simple harmonic.
Indeed, we can show this mathematically as follows. Replacing $x$ by $x^*$ in Eq. (8.10.1), we have
$$y(x^*, t) = A\sin\left[\frac{2\pi}{\lambda}(x^* - ct)\right] = -A\sin\left(\frac{2\pi c}{\lambda}t - \frac{2\pi x^*}{\lambda}\right) \quad (8.10.2)$$
$^*$Note that in Section 8.8 we met another period, which is a temporal period. Waves are more complicated than
harmonic oscillations because we have two independent variables $x$ and $t$.
This is indeed the equation of a SHO (Section 8.8.1) with an angular frequency $\omega$ and phase $\phi$:
$$\omega = \frac{2\pi c}{\lambda} = 2\pi f, \quad \phi = \frac{2\pi x^*}{\lambda} \quad (8.10.3)$$
where $f$ is the frequency. Now, we can understand why the wavelength is defined as the
distance between consecutive corresponding points of the same phase on the wave: the phase
is identical for points $x^*$ and $x^* + \lambda$. As each point of the string (e.g. $x^*$) oscillates back and
forth in the transverse direction (not along the direction of the string), this is called a transverse
wave.
Now, I present another form of the sinusoidal wave, which
introduces the concept of the wavenumber, designated by $k$. Obviously
we can write Eq. (8.10.1) in the following form:
$y(x, t) = A\sin(kx - \omega t)$, with $k := 2\pi/\lambda$. Referring to
the figure next to the text, it is obvious that the wavenumber
$k$ tells us how many waves there are in a spatial domain of
length $L$: more precisely, $kL/(2\pi)$ is the number of wavelengths that fit inside
$L$. We can now study what happens when two waves of
similar frequencies meet. For example, if we are listening to two
sounds of similar frequencies, what would we hear? Writing the two sounds as
If we plot the waves as in Fig. 8.16 (!1 =!2 D 8 W 10), we see that where the crests coincide we
get a strong wave and where a trough and crest coincide we get practically zero, and then when
the crests coincide again we get a strong wave again.
d'Alembert's solution. Now, we turn to d'Alembert's solution of the wave equation. We have
shown that a traveling wave (to the right) can be written as $f(x - ct)$. Thus, $f(x - ct)$, as a
wave, must satisfy the wave equation. That is obvious (the chain rule is all we need to verify this):
$$\frac{\partial^2}{\partial t^2}f(x - ct) = c^2\frac{\partial^2}{\partial x^2}f(x - ct)$$
And there is nothing special about a wave traveling to the right; we also have waves traveling
to the left. Such a wave is given by $g(x + ct)$, and it is also a solution of the wave
equation. As the wave equation is linear, $f(x - ct) + g(x + ct)$ is also a solution. But is this
the general solution? We need a proof.
Note the similarity with Eq. (8.8.67).
Figure 8.16
The equation that we want to solve is for an infinitely long string (so that we do not have to worry about what happens at the boundary):
$$u_{tt} = c^2 u_{xx}, \quad -\infty < x < \infty; \qquad u(x,0) = f(x), \quad u_t(x,0) = g(x) \tag{8.10.5}$$
where $f(x)$ is the initial shape of the string, and $g(x)$ is the initial velocity.
We introduce two new variables $\xi$ and $\eta$ as
$$\xi = x + ct, \qquad \eta = x - ct$$
which transform the PDE from $u_{tt} = c^2 u_{xx}$ to $u_{\xi\eta} = 0$, which can be solved easily: $u = F(\xi) + G(\eta)$, i.e. $u(x,t) = F(x + ct) + G(x - ct)$.
Now we have to deal with the initial conditions i.e., Eq. (8.10.5).
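Before going on, a quick finite-difference check (a sketch assuming NumPy; the Gaussian profile, speed $c$ and step $h$ are arbitrary choices) that a right-traveling profile $f(x - ct)$ indeed satisfies $u_{tt} = c^2 u_{xx}$:

```python
import numpy as np

# Any smooth profile f travelling at speed c: u(x, t) = f(x - ct).
# f, c, the sample points and the step h are arbitrary choices.
c = 2.0
f = lambda s: np.exp(-s ** 2)
x = np.linspace(-3.0, 3.0, 7)
t0, h = 0.5, 1e-4

u = lambda xx, tt: f(xx - c * tt)
# central second differences in t and in x
u_tt = (u(x, t0 + h) - 2 * u(x, t0) + u(x, t0 - h)) / h ** 2
u_xx = (u(x + h, t0) - 2 * u(x, t0) + u(x - h, t0)) / h ** 2
residual = float(np.max(np.abs(u_tt - c ** 2 * u_xx)))
```

The residual is at the level of the finite-difference truncation error, as expected.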
mother left the newly born child on the steps of the church of St
Jean Le Rond. The child was quickly found and taken to a home for
homeless children. He was baptised Jean Le Rond, named after the church on whose steps
he had been found. When his father returned to Paris he made contact with his young
son and arranged for him to be cared for by the wife of a glazier, Mme Rousseau. She
would always be d’Alembert’s mother in his own eyes, particularly since his real mother
never recognized him as her son, and he lived in Mme Rousseau’s house until he was
middle-aged. Jean Le Rond d’Alembert was one of the eighteenth century’s preeminent
mathematicians. He was elected to the French Academy of Sciences at the age of only
twenty-three. His important contributions include the d’Alembert formula, describing
how strings vibrate, and the d’Alembert principle, a generalization of one of Newton’s
classical laws of motion.
where the $n$-th term $u_n(x,t)$ is called the $n$-th mode of vibration or the $n$-th harmonic. This solution
satisfies the PDE and the BCs. If we plot these modes of vibration (Fig. 8.17), what we observe
is that the wave doesn’t propagate. It just sits there vibrating up and down in place. Such a wave
is called a standing wave. Points that do not move at any time (zero amplitude of oscillation)
are called nodes. Points where the amplitude is maximum are called antinodes. The simplest
mode of vibration with n D 1 is called the fundamental, and the frequency at which it vibrates
is called the fundamental frequency.
But waves should be traveling, so why do we have standing waves here? To see why, we need to use trigonometry, particularly the product identities in Eq. (3.7.6), e.g. $\sin\alpha\cos\beta = \frac{1}{2}[\sin(\alpha+\beta) + \sin(\alpha-\beta)]$. Using these identities, we can rewrite $u_n(x,t)$ as
$$u_n(x,t) = \frac{A_n}{2}\left[\sin\frac{n\pi}{L}(x+ct) + \sin\frac{n\pi}{L}(x-ct)\right] + \frac{B_n}{2}\left[\cos\frac{n\pi}{L}(x-ct) - \cos\frac{n\pi}{L}(x+ct)\right] \tag{8.11.8}$$
Let's now focus on the terms with $A_n$; we can write
$$u_n(x,t) = \frac{A_n}{2}\sin\frac{n\pi}{L}(x-ct) + \frac{A_n}{2}\sin\frac{n\pi}{L}(x+ct) = \frac{A_n}{2}\sin\left(\frac{2\pi x}{\lambda_n} - \frac{2\pi ct}{\lambda_n}\right) + \frac{A_n}{2}\sin\left(\frac{2\pi x}{\lambda_n} + \frac{2\pi ct}{\lambda_n}\right), \qquad \lambda_n = \frac{2L}{n} \tag{8.11.9}$$
which is obviously the superposition of two traveling waves: the first term is a wave traveling to
the right and the second travels to the left. Both waves have the same amplitude.
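This decomposition is easy to verify numerically; the sketch below (assuming NumPy; $A_n$, $n$, $c$, $L$ and the sample time are arbitrary choices) checks that the standing-wave term with $A_n$ equals the sum of the two half-amplitude traveling waves:

```python
import numpy as np

# Standing mode A_n*cos(n*pi*c*t/L)*sin(n*pi*x/L) vs the sum of the two
# traveling waves in Eq. (8.11.9); An, n, c, L and t are arbitrary choices.
An, n, c, L = 1.3, 3, 2.0, 1.0
x = np.linspace(0.0, L, 101)
t = 0.37

standing = An * np.cos(n * np.pi * c * t / L) * np.sin(n * np.pi * x / L)
to_right = 0.5 * An * np.sin(n * np.pi * (x - c * t) / L)  # travels right
to_left = 0.5 * An * np.sin(n * np.pi * (x + c * t) / L)   # travels left
diff = float(np.max(np.abs(standing - (to_right + to_left))))
```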
All points on the string oscillate at the same frequency but with different amplitudes.
Now we need to consider the initial conditions. By evaluating $u(x,t)$ and its first time derivative at $t = 0$, and using the ICs, we obtain
$$\sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{L} = f(x), \qquad \sum_{n=1}^{\infty} B_n \frac{n\pi c}{L} \sin\frac{n\pi x}{L} = g(x) \tag{8.11.10}$$
Example 8.4
Now, assume that the initial velocity of the string is zero, thus $B_n = 0$; then the solution is
$$u(x,t) = \sum_{n=1}^{\infty} A_n \cos\frac{n\pi ct}{L} \sin\frac{n\pi x}{L}, \qquad A_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx \tag{8.11.11}$$
Figure 8.17: Standing waves un .x; t / for n D 1; 2; 3. Different colors are used to denote un .x; t / for
different times.
What does this mean? We break the initial shape of the string into many small components:
$$f(x) = \sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{L}$$
Then we suddenly release the string and study its motion. As the initial velocity is zero, we just have the $A_n$, which are computed as (Eq. (8.11.11))
$$A_n = \frac{2hL^2}{n^2\pi^2 d(L-d)}\sin\frac{n\pi d}{L}$$
for a string plucked to a height $h$ at $x = d$.
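As a check of this formula, the sketch below (assuming NumPy; the pluck height $h$ and position $d$ are arbitrary choices) compares the closed-form $A_n$ with direct numerical quadrature of Eq. (8.11.11) for the triangular initial shape of a plucked string:

```python
import numpy as np

# Triangular initial shape of a string plucked to height h at x = d
# (h, d, L are arbitrary choices).
L, h, d = 1.0, 0.05, 0.3

def f(x):
    return np.where(x < d, h * x / d, h * (L - x) / (L - d))

def trapezoid(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, L, 200001)
errs = []
for n in range(1, 6):
    A_quad = (2.0 / L) * trapezoid(f(x) * np.sin(n * np.pi * x / L), x)
    A_closed = 2 * h * L**2 / (n**2 * np.pi**2 * d * (L - d)) \
        * np.sin(n * np.pi * d / L)
    errs.append(abs(A_quad - A_closed))
max_err = max(errs)
```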
Now, we extend this definition to a continuous function $f(x)$. Following the same procedure as in Section 4.11.3 when we computed the average of a function, we get
$$\bar f = \left(\frac{1}{L}\int_{-L}^{L}[f(x)]^2\,dx\right)^{1/2} \tag{8.12.2}$$
Recall also that the Fourier series of a periodic function $f(x)$ on $[-L, L]$ is given by
$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right) \tag{8.12.3}$$
Now, we introduce $f_N(x)$, which is a finite Fourier series of $f(x)$. That is, $f_N(x)$ consists of a finite number $N \in \mathbb{N}$ of the cosine and sine terms:
$$f_N(x) = a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right) \tag{8.12.5}$$
With that we compute the RMS of the difference between $f(x)$ and $f_N(x)$:
$$E = \frac{1}{L}\int_{-L}^{L}\big(f(x) - f_N(x)\big)^2\,dx = \frac{1}{L}\big((f,f) - 2(f,f_N) + (f_N,f_N)\big) \tag{8.12.6}$$
Although not necessary, I used the short notation $(f,g)$ to denote the inner product $\int_{-L}^{L} f(x)g(x)\,dx$. The plan is this: if we can compute $(f,f_N)$ and $(f_N,f_N)$, then with the fact that $E \ge 0$, we shall get an inequality, and that inequality is the Bessel inequality. Let's start with $(f_N,f_N)$:
" N
#2
L
nx nx
Z X
.fN ; fN / D a0 C an cos C bn sin dx
L nD1
L L
Z L N Z L N Z L
2 nx nx
X X
2 2 2
D a0 dx C an cos dx C bn sin2 dx
L nD1 L L nD1 L L
N N
!
X X
D L 2a02 C an2 C bn2
nD1 nD1
Similarly for $(f, f_N)$:
$$(f, f_N) = \int_{-L}^{L} f(x)\left[a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right)\right] dx$$
$$= a_0\int_{-L}^{L} f(x)\,dx + \sum_{n=1}^{N} a_n\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx + \sum_{n=1}^{N} b_n\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx$$
$$= L\left(2a_0^2 + \sum_{n=1}^{N} a_n^2 + \sum_{n=1}^{N} b_n^2\right)$$
To arrive at the final result, we just use Eq. (8.12.4) to replace the integrals by the Fourier coefficients. Substituting all these into the second line of Eq. (8.12.6), we obtain
$$E = \frac{1}{L}(f,f) - \left(2a_0^2 + \sum_{n=1}^{N} a_n^2 + \sum_{n=1}^{N} b_n^2\right) \tag{8.12.7}$$
As $E \ge 0$, it follows that
$$2a_0^2 + \sum_{n=1}^{N}\left(a_n^2 + b_n^2\right) \le \frac{1}{L}\int_{-L}^{L} f^2(x)\,dx$$
and letting $N \to \infty$ we obtain
$$\text{Bessel's inequality:}\qquad 2a_0^2 + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) \le \frac{1}{L}\int_{-L}^{L} f^2(x)\,dx \tag{8.12.8}$$
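A quick numerical illustration (a sketch assuming NumPy): for $f(x) = x$ on $[-1, 1]$ we have $a_0 = a_n = 0$ and $b_n = 2(-1)^{n+1}/(n\pi)$, and the partial sums of $\sum b_n^2$ approach, but never exceed, $\frac{1}{L}\int_{-L}^{L} f^2\,dx = 2/3$:

```python
import numpy as np

# Bessel's inequality (8.12.8) for f(x) = x on [-L, L] with L = 1:
# a_0 = a_n = 0 (odd f) and b_n = 2*(-1)**(n+1)/(n*pi).
rhs = 2.0 / 3.0                      # (1/L) * integral of x^2 over [-1, 1]
n = np.arange(1, 1001)
bn = 2.0 * (-1.0) ** (n + 1) / (n * np.pi)
lhs = float(np.sum(bn ** 2))         # partial sum with N = 1000 terms
```

For this particular $f$ the inequality is in fact an equality in the limit (Parseval), so the partial sum gets very close to $2/3$ from below.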
Contents
9.1 Introduction and some history comments . . . . . . . . . . . . . . . . . . 666
9.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
9.3 Variational problems and Euler-Lagrange equation . . . . . . . . . . . . 670
9.4 Solution of some elementary variational problems . . . . . . . . . . . . . 673
9.5 The variational ı operator . . . . . . . . . . . . . . . . . . . . . . . . . . 677
9.6 Multi-dimensional variational problems . . . . . . . . . . . . . . . . . . 679
9.7 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
9.8 Lagrangian mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
9.9 Ritz’ direct method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
9.10 What if there is no functional to start with? . . . . . . . . . . . . . . . . 692
9.11 Galerkin methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
9.12 The finite element method . . . . . . . . . . . . . . . . . . . . . . . . . . 698
This chapter is devoted to the calculus of variations, a branch of mathematics that allows us to find a function $y = f(x)$ that minimizes a functional $I = \int_a^b G(y, y', y''; x)\,dx$. For example, it provides answers to questions like 'what is the closed plane curve of a given perimeter that encloses the maximum area?'. You might have correctly guessed the answer: in the absence of any restriction on the shape, the curve is a circle. But the calculus of variations provides a proof and more.
We use primarily the following books for the material presented herein:
When Least Is Best: How Mathematicians Discovered Many Clever Ways to Make Things
as Small (or as Large) as Possible by Paul Nahin [38];
Paul Joel Nahin (born November 26, 1940) is an American electrical engineer and author who has written
20 books on topics in physics and mathematics, including biographies of Oliver Heaviside, George Boole, and
Chapter 9. Calculus of variations 666
A History of the Calculus of Variations from the 17th through the 19th Century by Herman Goldstine [2];
The Lazy Universe: An Introduction to the Principle of Least Action by Jennifer Coopersmith [9].
Even though Euler developed many techniques to solve the Euler-Lagrange differential equations, it was the physicist Walther Ritz who, in 1908, proposed a direct method to solve variational problems approximately in a systematic manner. The modifier 'direct' means that one can work directly with the functional instead of first finding the associated Euler-Lagrange equation and then solving that equation.
9.2 Examples
We have seen ordinary functions such as $f(x) = x^2$ or $f(x,y) = x^2 + y^2$, but we have not seen a functional before. This section presents some examples so that we get familiar with functionals and variational problems.
Euclidean geodesic problem. Find the shortest path joining two points $(x_1, y_1)$ and $(x_2, y_2)$. To this end, we are finding a curve, mathematically expressed by the function $f(x)$, such that the following integral (or functional)
$$l[f(x)] = \int_{(x_1,y_1)}^{(x_2,y_2)} ds = \int_{x_1}^{x_2}\sqrt{1 + (f'(x))^2}\,dx \tag{9.2.1}$$
is a minimum. We use the notation $l[f(x)]$ to denote a functional $l$ that depends on $f(x)$ (and possibly its derivatives $f'(x), f''(x), \ldots$). In this particular example, our functional depends only on the first derivative of the sought-after function.
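To get a feel for functionals, we can evaluate Eq. (9.2.1) numerically for two candidate curves joining $(0,0)$ and $(1,1)$ (a sketch assuming NumPy; the parabola is an arbitrary competitor):

```python
import numpy as np

# Arc-length functional l[f] = integral of sqrt(1 + f'(x)^2), x in [0, 1],
# evaluated by the trapezoidal rule for two curves joining (0,0) and (1,1).
def length(fprime, a=0.0, b=1.0, m=100001):
    x = np.linspace(a, b, m)
    g = np.sqrt(1.0 + fprime(x) ** 2)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

l_line = length(lambda x: np.ones_like(x))   # f(x) = x    -> length sqrt(2)
l_parab = length(lambda x: 2.0 * x)          # f(x) = x**2 -> longer
```

The straight line gives the smaller value, $\sqrt{2}$, as the Euler-Lagrange analysis later confirms.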
Figure 9.1: A brachistochrone curve is a curve of shortest time or curve of fastest descent.
To this end we need to compute the traveling time, which requires the distance and the velocity. For simplicity we select a coordinate system as shown in Fig. 9.1, where the starting point A is at the origin and the vertical axis points downward. Using the principle of conservation of energy, the speed of the bead at depth $y$ is $v = \sqrt{2gy}$.
Galileo's hanging chain. Galileo Galilei in his Discorsi (1638) described a method of drawing a parabola as: "Drive two nails into a wall at a convenient height and at the same level; ... Over these two nails hang a light chain ... This chain will assume the form of a parabola, ...". Unfortunately, the hanging chain does not assume the form of a parabola, and Galileo's assertion became a discussion point for followers of his work. Prominent mathematicians of the time (Leibniz, Huygens and Johann Bernoulli) studied the hanging chain problem, which can be stated as: find the curve assumed by a loose flexible string hung freely from two fixed points. Every person viewing power lines hanging between supporting poles is seeing Galileo's hanging chain, which is called a catenary, a name derived from the Latin word catena, meaning chain.
How is this problem related to the above variational prob-
lems? In other words, what quantity is to be minimized? The
answer is the potential energy of the chain! Let's consider a flexible chain hung from two points A and B. The chain has a total mass $M$, a total length $L$, and thus a uniform mass per unit length $\rho = M/L$. Consider a very small segment $ds$ of the chain located at height $y(x)$; the potential energy of this segment is $dm\,gy = \rho g y\,ds = \rho g y\sqrt{dx^2 + dy^2}$. Thus, the total potential energy is
$$P.E. = \int_{x_1}^{x_2}\rho g y\sqrt{dx^2 + dy^2} = \int_{x_1}^{x_2}\rho g y\sqrt{1 + (y')^2}\,dx$$
The problem is then: find the curve $y(x)$ passing through $A(x_1,y_1)$ and $B(x_2,y_2)$ such that P.E. is a minimum. Not really! We forgot that not every curve is admissible; only curves of the same length $L$ are. So, the problem must be stated like this: find the curve $y(x)$ passing through $A(x_1,y_1)$ and $B(x_2,y_2)$ such that
$$I[y, y'; x] = \int_{x_1}^{x_2}\rho g y\sqrt{1 + (y')^2}\,dx$$
is a minimum, subject to
$$\int_{x_1}^{x_2}\sqrt{1 + (y')^2}\,dx = L$$
This is certainly a variational problem, but with constraints. As we have learned from calculus, we need Lagrange multipliers to handle the constraints.
Calculus-based solution of the hanging chain problem. Herein we present the calculus-based solution of the hanging chain problem. It was done by Leibniz and Johann Bernoulli before variational calculus was developed. We provide this solution to illustrate two points: (i) how calculus can be used to solve problems and (ii) how the same problem (in this context a mechanics one) can be solved in more than one way.
Figure 9.2
Considering a segment of the chain located between $x$ and $x + \Delta x$ as shown in Fig. 9.2, there are three forces acting on this segment: the tension at the left end $T(x)$, the tension at the right end $T(x + \Delta x)$, and the weight $\rho g\,\Delta s$. As this segment is stationary, i.e. not moving, the sum of the forces acting on it must be zero:
$$\sum F_x = 0:\quad T(x)\cos\alpha(x) = T(x + \Delta x)\cos\alpha(x + \Delta x)$$
$$\sum F_y = 0:\quad T(x + \Delta x)\sin\alpha(x + \Delta x) - T(x)\sin\alpha(x) - \rho g\,\Delta s = 0$$
From the first equation, we deduce that the horizontal component of the tension in the chain is constant:
$$T(x)\cos\alpha(x) = T_0 = \text{constant} \implies T(x) = \frac{T_0}{\cos\alpha(x)}$$
And from the second equation, we get:
$$\Delta\big(T(x)\sin\alpha(x)\big) = \rho g\,\Delta s \implies \frac{\Delta\big(T(x)\sin\alpha(x)\big)}{\Delta x} = \rho g\,\frac{\Delta s}{\Delta x}$$
Replacing $T(x)$ by $T_0/\cos\alpha(x)$, noting that $\tan\alpha = y'$, and taking the limit $\Delta x \to 0$, we then have
$$\frac{d}{dx}\big(T_0\tan\alpha(x)\big) = \rho g\sqrt{1 + (y')^2}$$
Phu Nguyen, Monash University © Draft version
To solve this differential equation, we follow Vincenzo Riccati and introduce a new variable $z$ such that $y' = z$:
$$y' = z \implies T_0 z' = \rho g\sqrt{1 + z^2} \iff k\,\frac{dz}{\sqrt{1 + z^2}} = dx, \qquad k := \frac{T_0}{\rho g}$$
Now, integrating both sides we get (see Section 4.4.15)
$$\int k\,\frac{dz}{\sqrt{1 + z^2}} = \int dx \implies C_1 + k\sinh^{-1} z = x$$
where $C_1$ is a constant of integration. From this we get $z$, and finally, from $z = dy/dx$, we get $y(x)$:
$$z = \sinh\frac{x - C_1}{k} \implies y = k\cosh\frac{x - C_1}{k} + C_2$$
where $C_2$ is yet another constant of integration. If the lowest point of the catenary is at $(0, k)$, it can be seen that $C_1 = C_2 = 0$, and the catenary has the form
$$y = k\cosh\frac{x}{k}$$
We hope that with this hanging chain problem, the introduction of hyperbolic functions into
mathematics is easier to accept. Again, it is remarkable that mathematics, as a human invention,
captures quite well natural phenomena.
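A quick check (a sketch assuming NumPy; $k$ and the sample points are arbitrary choices) that $y = k\cosh(x/k)$ satisfies the governing equation, which with $\tan\alpha = y'$ and $k = T_0/(\rho g)$ reads $k\,y'' = \sqrt{1 + (y')^2}$:

```python
import numpy as np

# Catenary y = k*cosh(x/k) against the ODE k*y'' = sqrt(1 + y'^2).
# k and the sample points are arbitrary choices.
k = 0.7
x = np.linspace(-1.0, 1.0, 11)
yp = np.sinh(x / k)             # y'  = sinh(x/k)
ypp = np.cosh(x / k) / k        # y'' = cosh(x/k)/k
residual = float(np.max(np.abs(k * ypp - np.sqrt(1.0 + yp ** 2))))
```

The residual vanishes to machine precision because $\sqrt{1 + \sinh^2 u} = \cosh u$.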
Figure 9.3: Solution function $y(x)$, a variation $\eta(x)$ with $\eta(a) = \eta(b) = 0$, and one varied curve $y(x) + \epsilon_1\eta(x)$.
With the variation of the solution we proceed to calculate the corresponding change in the functional, denoted by $dI$:
$$dI = \int_a^b F\big(y(x) + \epsilon\eta(x),\, y'(x) + \epsilon\eta'(x);\, x\big)\,dx - \int_a^b F\big(y(x), y'(x); x\big)\,dx$$
$$= \int_a^b \big[F(y + \epsilon\eta, y' + \epsilon\eta'; x) - F(y, y'; x)\big]\,dx \tag{9.3.3}$$
$$= \epsilon\int_a^b\left(\frac{\partial F}{\partial y}\eta + \frac{\partial F}{\partial y'}\eta'\right) dx$$
where in the last equality we have used the Taylor series expansion of $F(y + \epsilon\eta, y' + \epsilon\eta'; x)$ around $\epsilon = 0$, keeping only the first-order term.
Now, as $y(x)$ is the minimizing solution, one has to have $dI/d\epsilon = 0$ (this is similar to $df/dx = 0$ in ordinary differential calculus). Thus, we obtain
$$\int_a^b\left(\frac{\partial F}{\partial y}\eta + \frac{\partial F}{\partial y'}\eta'\right) dx = 0 \tag{9.3.4}$$
In the next step we want to get rid of $\eta'$ (so that we can use a useful lemma, called the fundamental lemma of variational calculus, which exploits the arbitrariness of $\eta$ to obtain a nice result in terms of $y$ alone, with no more $\eta$ and $\eta'$), and of course the trick is integration by parts:
$$\int_a^b\left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\eta\,dx + \left[\frac{\partial F}{\partial y'}\eta\right]_a^b = 0 \tag{9.3.5}$$
As $\eta(a) = \eta(b) = 0$, the boundary term (the last term in the above equation) vanishes and we get the following:
$$\int_a^b\left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\eta\,dx = 0 \tag{9.3.6}$$
Using the fundamental lemma of variational calculus (which states that if $\int_a^b h(x)g(x)\,dx = 0$ for all $g(x)$, then $h(x) = 0$ for $x \in [a,b]$), one obtains the so-called Euler-Lagrange equation:
$$\text{Euler-Lagrange equation:}\qquad \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0 \tag{9.3.7}$$
Euler derived this equation before Lagrange, but his derivation was not as elegant as the one presented herein, which is due to Lagrange. To use Eq. (9.3.7), it should be noted that we treat $y, y', x$ as independent variables when calculating $\partial F/\partial y$ and $\partial F/\partial y'$.
We note that the Euler-Lagrange equation in Eq. (9.3.7) is a second-order ordinary differential equation; this is due to the term $\frac{d}{dx}\frac{\partial F}{\partial y'}$, as we have the derivative of $y'$, and thus $y''$.
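The Euler-Lagrange equation can also be formed symbolically. The sketch below (assuming SymPy is available) does so for the geodesic integrand $F = \sqrt{1 + (y')^2}$; the resulting expression is proportional to $y''$, so the equation reduces to $y'' = 0$:

```python
import sympy as sp

# Euler-Lagrange equation (9.3.7) formed symbolically for F = sqrt(1 + y'^2).
x = sp.symbols('x')
y = sp.Function('y')
yp = y(x).diff(x)

F = sp.sqrt(1 + yp ** 2)
# dF/dy - d/dx(dF/dy'), treating y, y' as independent variables
el = sp.simplify(F.diff(y(x)) - F.diff(yp).diff(x))
# el is -y''/(1 + y'^2)**(3/2): el = 0 forces y'' = 0, i.e. y = a*x + b.
```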
Now, to solve Eq. (9.3.1), Euler solved Eq. (9.3.7). This is referred to as the indirect way of solving variational problems. There is also a direct method that attacks the variational problem Eq. (9.3.1) directly; check Section 9.9. For now, however, we are going to use the indirect method to solve some elementary variational problems.
Stationary curves. Starting with the functional in Eq. (9.3.1), we have assumed that $y(x)$ is a function that minimizes this functional, and found that it satisfies the Euler-Lagrange equation (9.3.7). Is the reverse true? That is, if $y(x)$ satisfies the Euler-Lagrange equation, will it minimize the functional? The answer, learning from ordinary calculus, is: not necessarily. Therefore, functions that satisfy the Euler-Lagrange equation are called stationary functions or stationary curves.
History note 9.1: Joseph-Louis Lagrange (25 January 1736 – 10 April 1813)
Joseph-Louis Lagrange was an Italian mathematician and astronomer,
later naturalized French. He made significant contributions to the fields
of analysis, number theory, and both classical and celestial mechanics.
As his father was a doctor in Law at the University of Torino, a career as
a lawyer was planned out for him by his father, and certainly Lagrange
For a function $y = f(x)$, stationary points are those $x^*$ such that $f'(x^*) = 0$. These points can be a maximum, a minimum, or an inflection point.
And thus
$$\frac{d}{dx}\frac{\partial F}{\partial y'} = \frac{d}{dx}\left[\frac{y'}{\sqrt{1+(y')^2}}\right] = \frac{y''\sqrt{1+(y')^2} - \dfrac{(y')^2\,y''}{\sqrt{1+(y')^2}}}{1+(y')^2} = \frac{y''}{\big(1+(y')^2\big)^{3/2}}$$
Upon substitution into the Euler-Lagrange equation in Eq. (9.3.7), one gets
$$\frac{d}{dx}\frac{\partial F}{\partial y'} = 0 \implies y'' = 0 \implies y = ax + b$$
The solution is a straight line as expected. The two coefficients a and b are determined using the
boundary conditions:
With $y' = dy/dx$, one can solve for $dx$, and from that we have
$$x = \int_0^b\sqrt{\frac{y}{A - y}}\,dy$$
Now, we're back to the old business of integral calculus: using the substitution $y = A\sin^2(\theta/2) = (A/2)(1 - \cos\theta)$, we can evaluate the above integral to get
$$x = \frac{A}{2}(\theta - \sin\theta)$$
The brachistochrone curve is the one defined parametrically as
$$x = \frac{A}{2}(\theta - \sin\theta), \qquad y = \frac{A}{2}(1 - \cos\theta) \tag{9.4.5}$$
One determines A by the boundary condition that the curve passes through B.a; b/. In geometry,
this curve is known as a cycloid. A cycloid is the curve traced by a point on a circle, of radius
A=2, as it rolls along a straight line without slipping (Fig. 9.4). A cycloid is a specific form of
trochoid and is an example of a roulette, a curve generated by a curve rolling on another curve.
Figure 9.4: A cycloid is the curve traced by a point $P$ on a circle as it rolls along a straight line without slipping: illustrated using GeoGebra with $A = 2$ and $\theta \in [0, 2\pi]$. Source: Brian Sterr, Stuyvesant High School in New York.
We refer to the interesting book When Least Is Best: How Mathematicians Discovered Many
Clever Ways to Make Things as Small (or as Large) as Possible by Paul Nahin [38] for more
detail on the cycloid and its various interesting properties.
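We can also compare descent times numerically (a sketch; $g$, the endpoint B and the cycloid parameter are illustrative choices). Along the cycloid (9.4.5), $dt = ds/v$ reduces to $\sqrt{A/2g}\,d\theta$, so $T = \sqrt{A/2g}\,\Theta$; along the straight chord the bead has constant acceleration $g\,y_B/L$, giving $T = \sqrt{2L^2/(g\,y_B)}$:

```python
import math

# Descent from A = (0, 0) to B on the cycloid (9.4.5) vs the straight chord.
# g, the cycloid parameter A and the sweep angle Theta are arbitrary choices;
# y points downward as in Fig. 9.1.
g = 9.8
A, Theta = 2.0, math.pi
xB = (A / 2.0) * (Theta - math.sin(Theta))
yB = (A / 2.0) * (1.0 - math.cos(Theta))

# Cycloid: ds = A*sin(theta/2) dtheta and v = sqrt(2*g*A)*sin(theta/2),
# so dt = sqrt(A/(2*g)) dtheta: the time is simply proportional to Theta.
T_cycloid = math.sqrt(A / (2.0 * g)) * Theta

# Chord: constant acceleration g*yB/L over distance L.
Lc = math.hypot(xB, yB)
T_line = math.sqrt(2.0 * Lc ** 2 / (g * yB))
```

The cycloid wins, as it must for the curve of fastest descent.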
I, Johann Bernoulli, address the most brilliant mathematicians in the world. Noth-
ing is more attractive to intelligent people than an honest, challenging problem,
whose possible solution will bestow fame and remain as a lasting monument. Fol-
lowing the example set by Pascal, Fermat, etc., I hope to gain the gratitude of the
whole scientific community by placing before the finest mathematicians of our time a
problem which will test their methods and the strength of their intellect. If someone
communicates to me the solution of the proposed problem, I shall publicly declare
him worthy of praise.
Bernoulli allowed six months for the solutions but none were received during this period.
At the request of Leibniz, the time was publicly extended for a year and a half. At 4 p.m. on
29 January 1697 when he arrived home from the Royal Mint, Newton found the challenge in
a letter from Johann Bernoulli. Newton stayed up all night to solve it and mailed the solution
anonymously by the next post. Bernoulli, writing to Henri Basnage in March 1697, indicated
that even though its author, "by an excess of modesty", had not revealed his name, yet even
from the scant details supplied it could be recognized as Newton’s work, "as the lion by its
claw" (in Latin, tanquam ex ungue leonem). This story gives some idea of Newton’s power,
since Johann Bernoulli needed two weeks to solve it. Newton also wrote, "I do not love to be dunned [pestered] and teased by foreigners about mathematical things...". Newton had already solved the minimal resistance problem, which is considered the first problem of its kind in the calculus of variations.
In the end, five mathematicians had provided solutions: Newton, Jakob Bernoulli, Gottfried
Leibniz, Ehrenfried Walther von Tschirnhaus and Guillaume de l’Hôpital.
Now, he applied this result to the brachistochrone problem. Referring to the figure, consider a point $P(x,y)$ and draw a tangent line to the curve $y(x)$ at $P$. He computed $\sin\alpha$ in terms of $y'$ as follows:
$$\sin\alpha = \cos\beta = \frac{1}{\sqrt{1 + \tan^2\beta}} = \frac{1}{\sqrt{1 + (y')^2}}$$
And the velocity is $v = \sqrt{2gy}$, and thus Eq. (9.4.6) gave him:
$$\frac{1}{\sqrt{1 + (y')^2}} = c\sqrt{2gy}$$
which is equivalent to Eq. (9.4.4), the solution obtained using variational calculus.
From Eq. (9.3.3) we can compute $\delta F$ easily (recall that $F = F(y, y'; x)$):
$$\delta F = \frac{\partial F}{\partial y}\epsilon\eta + \frac{\partial F}{\partial y'}\epsilon\eta' = \frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y' \tag{9.5.2}$$
Observe the similarity to the total differential $df$ of a function of two variables $f(x,y)$, $df = f_x\,dx + f_y\,dy$, when its variables change by $dx$ and $dy$. We put these two side by side:
$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy, \qquad \delta F = \frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y' \tag{9.5.3}$$
$$\text{variation and differentiation are permutable:}\qquad \frac{d}{dx}\delta y = \delta\frac{dy}{dx}$$
$$\text{variation and integration are permutable:}\qquad \delta\int_a^b F(y, y'; x)\,dx = \int_a^b\delta F\,dx$$
Finally, we can see that $\delta$ is similar to the differential operator $d$ in differential calculus; Eq. (9.5.3) is one example. That is why Lagrange selected the symbol $\delta$. We know that $d(f + g) = df + dg$ and $d(x^2) = 2x\,dx$. We have counterparts for $\delta$, for $u, v$ some functions. Now we can use $\delta$ in the same manner we do with $d$. The proof is easy. For example, consider $F(u) = u^2$; when we vary the function $u$ by $\delta u$, we get a new functional $\bar F = (u + \delta u)^2$. Thus, the variation of the functional is $\delta F = (u + \delta u)^2 - u^2 = 2u\,\delta u$, where the second-order term $(\delta u)^2$ is neglected.
One-dimensional variational problem with second derivatives. Find the function $y(x)$ that makes the following functional
$$J[y] := \int_a^b F(y, y', y'', x)\,dx \tag{9.5.5}$$
stationary, subject to the boundary conditions that $y(a), y(b), y'(a), y'(b)$ are fixed.
We compute the first variation $\delta J$ due to the variation $\delta y$ in $y(x)$ (recall that $\delta y' = \frac{d}{dx}(\delta y)$ and $\delta y'' = \frac{d^2}{dx^2}(\delta y)$):
$$\delta J = \int_a^b\delta F\,dx = \int_a^b\left(\frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y' + \frac{\partial F}{\partial y''}\delta y''\right) dx$$
Now comes the usual integration by parts. For the term with $\delta y'$:
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\delta y\right) = \frac{d}{dx}\frac{\partial F}{\partial y'}\delta y + \frac{\partial F}{\partial y'}\delta y' \implies \int_a^b\frac{\partial F}{\partial y'}\delta y'\,dx = \left[\frac{\partial F}{\partial y'}\delta y\right]_a^b - \int_a^b\frac{d}{dx}\frac{\partial F}{\partial y'}\delta y\,dx$$
Proceeding similarly for the term with $\delta y''$ (integrating by parts twice) and using the boundary conditions, we obtain the Euler-Lagrange equation
$$\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} + \frac{d^2}{dx^2}\frac{\partial F}{\partial y''} = 0$$
And we want to find functions u.x; y/ and v.x; y/ defined on a domain B such that J is mini-
mum. On the boundary @B the functions are prescribed i.e., u D g and v D h, where g; h are
known functions of .x; y/.
The first variation of $J$, $\delta J$, is given by:
$$\delta J = \int_B\left(\frac{\partial F}{\partial u}\delta u + \frac{\partial F}{\partial u_x}\delta u_x + \frac{\partial F}{\partial u_y}\delta u_y + \frac{\partial F}{\partial v}\delta v + \frac{\partial F}{\partial v_x}\delta v_x + \frac{\partial F}{\partial v_y}\delta v_y\right) dx\,dy$$
The next step is certainly integrating by parts the second, third, fifth and sixth terms. We demonstrate only for the second term, starting with:
$$\frac{\partial}{\partial x}\left(\frac{\partial F}{\partial u_x}\delta u\right) = \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial u_x}\right)\delta u + \frac{\partial F}{\partial u_x}\delta u_x$$
And thus,
$$\int_B\frac{\partial F}{\partial u_x}\delta u_x\,dV = \int_B\frac{\partial}{\partial x}\left(\frac{\partial F}{\partial u_x}\delta u\right) dV - \int_B\frac{\partial}{\partial x}\left(\frac{\partial F}{\partial u_x}\right)\delta u\,dV$$
Using the gradient theorem, Eq. (7.11.37), for the first term on the right-hand side, we obtain
$$\int_B\frac{\partial F}{\partial u_x}\delta u_x\,dV = \int_{\partial B}\frac{\partial F}{\partial u_x}n_x\,\delta u\,ds - \int_B\frac{\partial}{\partial x}\left(\frac{\partial F}{\partial u_x}\right)\delta u\,dV$$
Repeating the same calculations for the third, fifth and sixth terms, eventually the variation of $J$ is written as
$$\delta J = \int_B\left[\left(\frac{\partial F}{\partial u} - \frac{\partial}{\partial x}\frac{\partial F}{\partial u_x} - \frac{\partial}{\partial y}\frac{\partial F}{\partial u_y}\right)\delta u + \left(\frac{\partial F}{\partial v} - \frac{\partial}{\partial x}\frac{\partial F}{\partial v_x} - \frac{\partial}{\partial y}\frac{\partial F}{\partial v_y}\right)\delta v\right] dx\,dy$$
$$+ \int_{\partial B}\left(\frac{\partial F}{\partial u_x}n_x + \frac{\partial F}{\partial u_y}n_y\right)\delta u\,ds + \int_{\partial B}\left(\frac{\partial F}{\partial v_x}n_x + \frac{\partial F}{\partial v_y}n_y\right)\delta v\,ds$$
As u; v are specified on the boundary @B, ıu D ıv D 0 there. Using the fundamental lemma of
variational calculus, we obtain the Euler-Lagrange equations:
$$\frac{\partial F}{\partial u} - \frac{\partial}{\partial x}\frac{\partial F}{\partial u_x} - \frac{\partial}{\partial y}\frac{\partial F}{\partial u_y} = 0, \qquad \frac{\partial F}{\partial v} - \frac{\partial}{\partial x}\frac{\partial F}{\partial v_x} - \frac{\partial}{\partial y}\frac{\partial F}{\partial v_y} = 0 \tag{9.6.2}$$
Example 9.1
For example, if $J$ is:
$$J[u(x,y)] := \int_B(u_x^2 + u_y^2)\,dV = \int_B|\nabla u|^2\,dV = \int_B\nabla u\cdot\nabla u\,dV \tag{9.6.3}$$
then Eq. (9.6.2) yields (we need to use the first equation only, as there is no $v$ function in our functional) the Laplace equation:
$$u_{xx} + u_{yy} = 0$$
Example 9.2
In the field of fracture mechanics, we have the following functional concerning a scalar field $\phi(x,y)$, where $G_c, b, c_0$ are real numbers and $\alpha$ is a function depending on $\phi$:
$$J[\phi(x,y)] = \int_B\frac{G_c}{c_0}\left(\frac{1}{b}\alpha(\phi) + b\,\nabla\phi\cdot\nabla\phi\right) dV \tag{9.6.5}$$
then Eq. (9.6.2) yields (we need to use the first equation only, as there is no $v$ function in our functional)
$$\frac{G_c}{c_0 b}\alpha'(\phi) - \frac{2G_c b}{c_0}\nabla^2\phi = 0 \quad\text{in } B \tag{9.6.6}$$
The first variation of the functional $I[y] = \int_a^b F(y, y'; x)\,dx$ reads
$$\delta I = \int_a^b\left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\delta y\,dx + \frac{\partial F}{\partial y'}(b)\,\delta y(b) - \frac{\partial F}{\partial y'}(a)\,\delta y(a) \tag{9.7.1}$$
where the last two terms are the boundary terms. The Euler-Lagrange equation associated with this functional is a second-order ordinary differential equation. Thus it requires two boundary conditions (BCs) to have a unique solution. In many cases it is easy to determine these boundary conditions, but there are also cases where it is very difficult to know the boundary conditions. This is particularly true for fourth-order equations.
It is a particularly beautiful feature of variational problems that they always automatically furnish the right number of boundary conditions and their form. All comes from the first variation of the functional. Getting back to the already mentioned $\delta I$, we have the following cases:
Case 1: both ends are fixed, $y(a) = A$ and $y(b) = B$, and thus $\delta y(a) = \delta y(b) = 0$. The boundary terms vanish identically, and the two imposed BCs are called essential boundary conditions.
Case 2: we fix one end (for example $y(a) = A$, and thus $\delta y(a) = 0$) and allow the other end to be free. As $y(b)$ can be anything, we have $\delta y(b) \ne 0$, so to have $\delta I = 0$ we need $\frac{\partial F}{\partial y'}(b) = 0$. And this is the second BC that the Euler-Lagrange equation has to satisfy. Since this BC is provided by the variational problem, it is called a natural boundary condition. In the case of the brachistochrone, this BC translates to $y'(b) = 0$, which indicates that the tangent to the curve at $x = b$ is horizontal.
Example 1: an elastic bar. Consider an elastic bar of length $L$, modulus of elasticity $E$ and cross-sectional area $A$. We denote by $x$ the independent variable, which runs from 0 to $L$, characterizing the position of a point of the bar. Assume that the bar is fixed at the left end ($x = 0$) and subjected to a distributed axial load $f(x)$ (per unit length) and a point load $P$ at its right end ($x = L$). The axial displacement of the bar $u(x)$ is the function that minimizes the following potential energy
$$\Pi[u(x)] = \int_0^L\left[\frac{EA}{2}\left(\frac{du}{dx}\right)^2 - fu\right] dx - P\,u(L) \tag{9.7.2}$$
where the first term is the strain energy stored in the bar and the second and third terms denote
the work done on the bar by the force f and P , respectively.
To find the Euler-Lagrange equation for this problem, we compute the first variation of the energy functional and set it to zero. The variation is given by
$$\delta\Pi = \int_0^L\left[EA\frac{du}{dx}\frac{d(\delta u)}{dx} - f\,\delta u\right] dx - P\,\delta u(L) \tag{9.7.3}$$
We need to remove $\delta u' = \frac{d}{dx}(\delta u)$; for this we use integration by parts. Noting that
$$\frac{d}{dx}\left(\frac{du}{dx}\delta u\right) = \frac{d^2u}{dx^2}\delta u + \frac{du}{dx}\frac{d(\delta u)}{dx}$$
we have
$$\int_0^L\frac{du}{dx}\frac{d(\delta u)}{dx}\,dx = \left[\frac{du}{dx}\delta u\right]_0^L - \int_0^L\frac{d^2u}{dx^2}\delta u\,dx$$
Eq. (9.7.3) becomes
$$\delta\Pi = -\int_0^L\left[EA\frac{d^2u}{dx^2} + f\right]\delta u\,dx + \left[EA\frac{du}{dx}\delta u\right]_0^L - P\,\delta u(L)$$
$$= -\int_0^L\left[EA\frac{d^2u}{dx^2} + f\right]\delta u\,dx + \left(EA\left.\frac{du}{dx}\right|_{x=L} - P\right)\delta u(L) \tag{9.7.4}$$
where we used $\delta u(0) = 0$, as the bar is fixed at $x = 0$,
which gives the Euler-Lagrange equation
$$EA\frac{d^2u}{dx^2} + f = 0, \qquad 0 < x < L$$
which requires two BCs: one is $u(0) = 0$, the BC that we impose upon the bar, and the other is
$$EA\left.\frac{du}{dx}\right|_{x=L} - P = 0$$
provided by the variational formulation.
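As a check of the bar problem (a sketch assuming SymPy, specialized to a constant load $f(x) = q$, which is an assumption for illustration), the candidate $u(x)$ below satisfies the Euler-Lagrange equation, the essential BC $u(0) = 0$, and the natural BC $EA\,u'(L) = P$:

```python
import sympy as sp

# Bar equation EA*u'' + f = 0 with u(0) = 0 and natural BC EA*u'(L) = P,
# specialized (as an illustration) to a constant load f(x) = q.
x, EA, q, P, L = sp.symbols('x EA q P L', positive=True)

u = (-q * x ** 2 / 2 + (P + q * L) * x) / EA    # candidate solution

ode_residual = sp.simplify(EA * u.diff(x, 2) + q)        # Euler-Lagrange eq.
natural_bc = sp.simplify(EA * u.diff(x).subs(x, L) - P)  # EA*u'(L) - P
essential_bc = u.subs(x, 0)                              # u(0)
```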
Similarly, for a functional containing $y''$, such as the beam energy, the first variation reads
$$\delta\Pi = \int_a^b\left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} + \frac{d^2}{dx^2}\frac{\partial F}{\partial y''}\right)\delta y\,dx + \left[\left(\frac{\partial F}{\partial y'} - \frac{d}{dx}\frac{\partial F}{\partial y''}\right)\delta y\right]_a^b + \left[\frac{\partial F}{\partial y''}\delta y'\right]_a^b \tag{9.7.6}$$
With $F = \frac{k}{2}(y'')^2 - \rho g y$, we get the Euler-Lagrange equation from the first term in $\delta\Pi = 0$: $k\,y'''' = \rho g$.
And ı˘ D 0 provides all BCs that the Euler-Lagrange equation of the beam requires. We
have the following cases:
$$y(0) = 0, \quad y(L) = 0, \qquad y'(0) = 0, \quad y'(L) = 0 \tag{9.7.9}$$
That is we fix the displacement and the rotation at both ends of the beam. As the variations
of fixed quantities are zero, all the terms in Eq. (9.7.8) vanish. No natural BCs have to be
added.
That is, we fix only the displacement of the two ends. Eq. (9.7.8) provides two more natural BCs:
$$y''(0) = 0, \qquad y''(L) = 0 \tag{9.7.11}$$
which indicate that the bending moments are zero at both ends.
That is, we fix both the displacement and the rotation of the left end, but leave the right end free. Eq. (9.7.8) yields the remaining two BCs:
$$y''(L) = 0, \qquad y'''(L) = 0$$
which means that the bending moment at the right end is zero and so is the shear force there.
$$S[x(t)] = \int_{t_1}^{t_2} L(x, \dot x)\,dt, \qquad L = T - U, \quad T = \frac{1}{2}m(\dot x^2 + \dot y^2 + \dot z^2), \quad U = U(x,y,z) \tag{9.8.4}$$
This action is of the form of Eq. (9.8.1); thus its Euler-Lagrange equations can be obtained from Eq. (9.8.2) (replace $F$ by $L$):
$$\frac{\partial L}{\partial x} = \frac{d}{dt}\frac{\partial L}{\partial\dot x}, \qquad \frac{\partial L}{\partial y} = \frac{d}{dt}\frac{\partial L}{\partial\dot y}, \qquad \frac{\partial L}{\partial z} = \frac{d}{dt}\frac{\partial L}{\partial\dot z} \tag{9.8.5}$$
Lagrange thus obtained three equations, and recall that Newton also had three equations. If Lagrange could show that his three equations are exactly Newton's equations, then he would have created a new formulation of mechanics. That part is easy: we first need to compute $\partial L/\partial x$ and $\partial L/\partial\dot x$:
$$\frac{\partial L}{\partial x} = -\frac{\partial U}{\partial x} = F_x; \qquad \frac{\partial L}{\partial\dot x} = \frac{\partial T}{\partial\dot x} = m\dot x \tag{9.8.6}$$
Substituting these into the first of Eq. (9.8.5), we get $F_x = m\ddot x$, which is nothing but Newton's second law.
$$\frac{\partial L}{\partial x_i} = \frac{d}{dt}\frac{\partial L}{\partial\dot x_i}, \qquad i = 1, 2, \ldots, N \tag{9.8.7}$$
Now, we have another set of generalized coordinates q1 ; q2 ; : : : ; qN . We assume that it’s always
possible to go back and forth between the two coordinate systems. That is,
$$x_i = x_i(q_1, q_2, \ldots, q_N, t), \qquad q_i = q_i(x_1, x_2, \ldots, x_N, t) \tag{9.8.8}$$
What we need to prove is: the EL equations hold for the $q_i$:
$$\frac{\partial L}{\partial q_i} = \frac{d}{dt}\frac{\partial L}{\partial\dot q_i}, \qquad i = 1, 2, \ldots, N \tag{9.8.9}$$
Proof. We start from the RHS of Eq. (9.8.9) with
$$\frac{\partial L}{\partial\dot q_m} = \sum_{i=1}^{N}\frac{\partial L}{\partial\dot x_i}\frac{\partial\dot x_i}{\partial\dot q_m}, \qquad m = 1, 2, \ldots, N \tag{9.8.10}$$
where in the third equality, Eq. (9.8.7) was used for one term and, for the other, the order of $d/dt$ and $\partial/\partial q_m$ was switched.
9.8.3 Examples
A bead is free to slide along a friction-less hoop of radius R. The hoop rotates with constant
angular speed ! around a vertical diameter (Fig. 9.6a). Find the equation of motion for the angle
shown.
From Fig. 9.6 we can determine the speed in the hoop direction and the direction perpendic-
ular to the hoop. From that, the kinetic and potential energies are
If it was not clear, here are the details:
$$\frac{d}{dt}\frac{\partial x_i}{\partial q_m} = \sum_{k=1}^{N}\frac{\partial}{\partial q_k}\frac{\partial x_i}{\partial q_m}\dot q_k + \frac{\partial}{\partial t}\frac{\partial x_i}{\partial q_m} = \sum_{k=1}^{N}\frac{\partial}{\partial q_m}\frac{\partial x_i}{\partial q_k}\dot q_k + \frac{\partial}{\partial q_m}\frac{\partial x_i}{\partial t}$$
Thus,
$$\frac{d}{dt}\frac{\partial x_i}{\partial q_m} = \frac{\partial}{\partial q_m}\left[\sum_{k=1}^{N}\frac{\partial x_i}{\partial q_k}\dot q_k + \frac{\partial x_i}{\partial t}\right] = \frac{\partial\dot x_i}{\partial q_m}$$
Figure 9.6
$$T = \frac{1}{2}m\left(R^2\dot\theta^2 + R^2\sin^2\theta\,\omega^2\right), \qquad U = mgR(1 - \cos\theta) \tag{9.8.14}$$
Now, we compute the terms in the EL equation:
$$\frac{\partial L}{\partial\theta} = mR^2\omega^2\sin\theta\cos\theta - mgR\sin\theta, \qquad \frac{\partial L}{\partial\dot\theta} = mR^2\dot\theta \implies \frac{d}{dt}\frac{\partial L}{\partial\dot\theta} = mR^2\ddot\theta \tag{9.8.15}$$
And thus the EL equation yields the equation of motion:
$$\frac{d}{dt}\frac{\partial L}{\partial\dot\theta} = \frac{\partial L}{\partial\theta} \implies \ddot\theta = \left(\omega^2\cos\theta - \frac{g}{R}\right)\sin\theta \tag{9.8.16}$$
It is hard to solve this equation exactly. Still, we can get something out of Eq. (9.8.16). One thing it can tell us is the equilibrium points. An equilibrium point $\theta_0$ is a point such that if we place the bead there at rest (i.e., $\dot\theta = 0$), it remains there. Since the bead remains at $\theta_0$, its velocity must be constant, and thus its acceleration must be zero. So, to find the equilibrium points, we solve $\ddot\theta = 0$, which is:
$$\left(\omega^2\cos\theta - \frac{g}{R}\right)\sin\theta = 0$$
A trigonometric equation! But this one is easy:
$$\theta_{01} = 0, \qquad \theta_{02} = \pi, \qquad \theta_{03,4} = \pm\arccos\frac{g}{R\omega^2} \quad (\text{if } \omega^2 \ge g/R)$$
So, there are four equilibrium points if the hoop spins fast, i.e. $\omega^2 \ge g/R$. Otherwise, there are two equilibrium points $\theta_{01,2}$; they are the bottom and the top of the hoop, as you can predict. But equilibrium points can be stable or unstable. An equilibrium point $\theta_0$ is said to be stable if, when the bead is at that position and is given a small disturbance, it moves back to $\theta_0$. So, our question now is: among these four equilibrium points, which ones are stable?
Consider first $\theta_{01} = 0$ (that is, the bottom of the hoop). Close to 0 we have $\sin\theta \approx \theta$ and $\cos\theta \approx 1$; thus Eq. (9.8.16) becomes
$$\ddot\theta = \left(\omega^2 - \frac{g}{R}\right)\theta = -k\theta, \qquad k := \frac{g}{R} - \omega^2$$
Now, if the hoop spins at a small speed such that $\omega^2 < g/R$, then $k > 0$. The above equation is identical to the one describing simple harmonic oscillations. From the study of these oscillations, we know that the bead will oscillate around the bottom of the hoop. Therefore, the bottom of the hoop is a stable equilibrium point when $\omega^2 < g/R$. However, if $\omega^2 \ge g/R$, then that position is unstable.
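We can also see this stability numerically. The sketch below (semi-implicit Euler integration; $g$, $R$, $\omega$, the time step and the initial nudge are arbitrary choices with $\omega^2 < g/R$) releases the bead slightly off the bottom and checks that it just oscillates around $\theta = 0$:

```python
import math

# theta'' = (w**2*cos(theta) - g/R)*sin(theta), Eq. (9.8.16).
# g, R, w (with w**2 < g/R), dt and the initial nudge are arbitrary choices.
g, R, w = 9.8, 1.0, 1.0
theta, v = 0.1, 0.0          # released slightly off the bottom, at rest
dt = 1e-3
max_theta = 0.0
for _ in range(200000):      # 200 seconds, semi-implicit Euler
    a = (w ** 2 * math.cos(theta) - g / R) * math.sin(theta)
    v += a * dt
    theta += v * dt
    max_theta = max(max_theta, abs(theta))
# The bead never wanders far from the bottom: a stable equilibrium.
```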
Ritz did not follow Euler: he did not derive the Euler-Lagrange equation associated with Eq. (9.9.1). Instead he attacked the functional directly, but looked only for an approximate solution of the following form:
$$\bar y(x) = \alpha + \beta x + \gamma x^2 \tag{9.9.2}$$
We should be aware that even if we can derive the Euler-Lagrange equation, quite often we
cannot solve it, or it does not have solutions expressible in terms of elementary functions.
Still, physicists (and engineers) need a solution, even if not in a nice analytical expression but in the
form of a list of numbers.
Why the form in Eq. (9.9.2)? Note that it is easy to work with polynomials (easy
to differentiate and to integrate, for example), and the first curve we normally think of is a parabola.
So, it is natural to start with this polynomial form.
Because of the boundary conditions $y(0) = y(1) = 1$, $\bar{y}(x)$ has to be of the following form:
\[
\bar{y}(x) = 1 + \beta x - \beta x^2 \tag{9.9.3}
\]
(Using Eq. (9.9.2) at $x = 0$ and $x = 1$ with the given boundary conditions leads to two equations
for $\alpha$, $\beta$ and $\gamma$.) We can proceed with this form of $\bar{y}(x)$. But we pause here a bit to study the
form of Eq. (9.9.3) carefully:
\[
\bar{y}(x) = 1 + \beta x - \beta x^2 = 1 + \beta x(1 - x) \tag{9.9.4}
\]
It can be seen that the function $x(1 - x)$ vanishes at both $x = 0$ and $x = 1$: the boundary
points! And the constant $1$ is exactly the value of $y(x)$ at the boundary. Based on this analysis,
we can, in general, seek $\bar{y}(x)$ in the following general form
\[
\bar{y}(x) = \alpha_0(x) + \sum_{i=1}^{n} c_i\,\alpha_i(x) \tag{9.9.5}
\]
where the $\alpha_i(x)$ must be zero at the boundary points, and $\alpha_0(x)$ is chosen to satisfy the non-zero
boundary conditions. Note that the $c_i$'s are called the Ritz parameters.
From $\bar{y}(x)$ in Eq. (9.9.3) we can determine its derivative $\bar{y}'(x) = \beta(1 - 2x)$; substituting both
into the functional $I[y] = \int_0^1 [y^2 + (y')^2]\,dx$ and carrying out the integration gives
\[
I(\beta) = \frac{11}{30}\beta^2 + \frac{1}{3}\beta + 1 \tag{9.9.7}
\]
which is simply an ordinary function of $\beta$, and we want to minimize $I$, right? That's easy now:
\[
\frac{dI}{d\beta} = 0: \quad \frac{11}{15}\beta + \frac{1}{3} = 0 \;\Longrightarrow\; \beta = -\frac{5}{11} \tag{9.9.8}
\]
Now with $\beta$ determined, we have found the approximate solution:
\[
\bar{y}(x) = 1 - \frac{5}{11}x + \frac{5}{11}x^2 \tag{9.9.9}
\]
How accurate is this solution? We can compare it with the exact solution, which is given by
\[
y^e(x) = \frac{\sinh(x) + \sinh(1 - x)}{\sinh(1)} \tag{9.9.10}
\]
One way to check the accuracy of an approximate solution is to plot both solutions together as in
Fig. 9.7a. The Ritz solution is quite good; however to have a better appreciation of the accuracy,
we can plot the error function defined as the relative difference of the Ritz solution with respect
to the exact one:
\[
\text{error}(x) := \frac{y^e(x) - \bar{y}(x)}{y^e(x)}
\]
Fig. 9.7b shows the plot of this error.
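The one-parameter computation above can be reproduced with SymPy in a few lines. This is my own sketch in Python (the book's own listing, Listing 9.1, uses SymPy from Julia); it rebuilds $I(\beta)$ from Eq. (9.9.4) and recovers $\beta = -5/11$:

```python
import sympy as sp

x, b = sp.symbols('x beta')
y = 1 + b*x*(1 - x)                  # trial function of Eq. (9.9.4): y(0) = y(1) = 1
I = sp.integrate(y**2 + sp.diff(y, x)**2, (x, 0, 1))   # the functional I(beta)
beta = sp.solve(sp.diff(I, b), b)[0]                   # dI/dbeta = 0
print(sp.expand(I))                  # 11*beta**2/30 + beta/3 + 1
print(beta)                          # -5/11
```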
Let’s solve another problem with the Ritz method. Consider a simply supported beam of
length L. Find the deflection of the beam under uniformly distributed transverse load q0 . Recall
from Eq. (9.7.5) that the deflection y.x/ minimizes the following energy functional
\[
\Pi[y(x)] = \int_0^L \left[\frac{k}{2}(y'')^2 - q_0 y\right] dx; \quad k := EI \tag{9.9.11}
\]
What are the boundary conditions? Because the beam is simply supported, its two ends cannot
move down, thus y.0/ D y.L/ D 0.
Figure 9.7: Ritz solution vs exact solution to the variational problem $I[y(x)] = \int_0^1 [y^2 + (y')^2]\,dx$; $y(0) = y(1) = 1$. (a) The two solutions plotted together. (b) The error function.
Before using the Ritz method, note that the exact solution is a fourth-order polynomial:
\[
y^e(x) = \frac{q_0 L^4}{24EI}\left(\frac{x}{L} - 2\frac{x^3}{L^3} + \frac{x^4}{L^4}\right) \tag{9.9.12}
\]
Thus, as a first approximation, we seek a solution of the following form (what if we do not have
the exact solution at hand? Then we have to rely on the functional, Eq. (9.9.11)):
\[
\bar{y}(x) = c_1 x(x - L) + c_2 x^2(x - L) \tag{9.9.13}
\]
This form is chosen because $\alpha_1(x) = x(x - L)$ and $\alpha_2(x) = x^2(x - L)$ vanish at
$x = 0$ and $x = L$. With this $\bar{y}(x)$, I used SymPy to do everything for me, as shown in Listing 9.1.
Listing 9.1: Ritz’s solution for the simply supported beam with Eq. (9.9.13).
using SymPy
@vars x k L q0 c1 c2
y = c1*x*(x-L) + c2*x*x*(x-L)  # approximate solution ybar
ypp = diff(y, x, 2)            # its 2nd derivative
F = 0.5*k*ypp^2 - q0*y         # the integrand in the functional
J = integrate(F, (x, 0, L))    # the functional J
J1 = diff(J, c1)               # derivative of J wrt c1
J2 = diff(J, c2)               # derivative of J wrt c2
solve([J1, J2], [c1, c2])      # solve for c1 and c2
The result (note that $c_2 = 0$) is
\[
\bar{y}(x) = -\frac{q_0 L^2}{24EI}\,x(x - L) = \frac{q_0 L^4}{24EI}\left(\frac{x}{L} - \frac{x^2}{L^2}\right)
\]
We can now check the accuracy. It can be shown that the Ritz maximum deflection, at the middle of
the beam $x = L/2$, is 20% off the exact deflection.
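The same minimization can be carried out with SymPy from Python; this is my own sketch mirroring Listing 9.1, and it also confirms the 20% figure by comparing the midspan deflection with the exact value $5q_0L^4/(384EI)$:

```python
import sympy as sp

x, k, L, q0 = sp.symbols('x k L q0', positive=True)
c1, c2 = sp.symbols('c1 c2')
y = c1*x*(x - L) + c2*x**2*(x - L)          # two-parameter trial of Eq. (9.9.13)
Pi = sp.integrate(sp.Rational(1, 2)*k*sp.diff(y, x, 2)**2 - q0*y, (x, 0, L))
sol = sp.solve([sp.diff(Pi, c1), sp.diff(Pi, c2)], [c1, c2])
ybar = y.subs(sol)
ratio = sp.simplify(ybar.subs(x, L/2) / (5*q0*L**4/(384*k)))  # vs exact midspan
print(sol, ratio)   # c1 = -q0*L**2/(24*k), c2 = 0, ratio = 4/5
```

The ratio $4/5$ is precisely the "20% off" statement in the text.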
Even though programming quickly gave us the solution, it did not tell us everything. So, it
is always a good idea to develop everything manually. Upon introducing Eq. (9.9.13) into
Eq. (9.9.11), we obtain a functional $\Pi$ which is a function of $c_1$ and $c_2$. To minimize it, we
set $d\Pi/dc_1 = 0$ and $d\Pi/dc_2 = 0$. Here is what we get from these two equations:
" #" # " #
A11 A12 c1 b
D 1 (9.9.14)
A21 A22 c2 b2
with Z L Z L
Aij D k˛i00 .x/˛j00 .x/dx; bj D q0 ˛j .x/dx (9.9.15)
0 0
Thus, Ritz converted the problem of solving a PDE (or minimizing a functional) into a linear algebra
problem of finding the solution to $Ac = b$. The matrix is of size $n \times n$, where $n$ is the
number of terms in the Ritz approximation; furthermore, the matrix is symmetric. What is nice
about Eq. (9.9.14) is that it has a pattern: the $i$th row can be written in the form
\[
\sum_j A_{ij} c_j = b_i
\]
which works for any value of $n$. Thus, we have a recipe to build up our system, e.g. $A$ and $b$, to
solve for the $c_i$'s.
To improve the Ritz solution, what should we do? We use a better approximation! A better
approximation can be obtained by adding more terms to $\bar{y}(x)$; we add a new term $c_3 x^3(x - L)$
to the two-parameter approximation:
\[
\bar{y}(x) = c_1 x(x - L) + c_2 x^2(x - L) + c_3 x^3(x - L)
\]
Repeating the same procedure by modifying the code in Listing 9.1, we get
\[
c_1 = -\frac{q_0 L^2}{24EI}; \quad c_2 = -\frac{q_0 L}{24EI}; \quad c_3 = \frac{q_0}{24EI}
\]
Thus, the three-parameter Ritz solution is given by
\[
\bar{y}(x) = \frac{q_0 L^4}{24EI}\left(\frac{x}{L} - 2\frac{x^3}{L^3} + \frac{x^4}{L^4}\right)
\]
which is exactly the exact solution!
If you want to do it manually, then use Eq. (9.9.15) to compute the members $A_{ij}$ of the $3 \times 3$ matrix $A$ and
the $3 \times 1$ vector $b$. Solving the equation $Ac = b$ gives you exactly the same $c_i$'s.
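The recipe of Eq. (9.9.15) is easy to program for any number of terms. The sketch below (my own, in Python with SymPy) assembles $A$ and $b$ for the three-term basis and checks symbolically that the resulting Ritz solution coincides with the exact one:

```python
import sympy as sp

x, k, L, q0 = sp.symbols('x k L q0', positive=True)
n = 3
alphas = [x**(i + 1)*(x - L) for i in range(n)]   # basis vanishing at x = 0 and x = L
A = sp.Matrix(n, n, lambda i, j: sp.integrate(
        k*sp.diff(alphas[i], x, 2)*sp.diff(alphas[j], x, 2), (x, 0, L)))
b = sp.Matrix(n, 1, lambda i, j: sp.integrate(q0*alphas[i], (x, 0, L)))
c = A.solve(b)                                    # Ritz parameters c_i
ybar = sum(ci*ai for ci, ai in zip(c, alphas))
exact = q0*(x*L**3 - 2*L*x**3 + x**4)/(24*k)      # Eq. (9.9.12)
print(sp.simplify(ybar - exact))                  # 0
```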
Figure 9.8: The Euler-Lagrange highway of variational calculus: the forward direction, from a functional to the Euler-Lagrange PDE,
\[
I[y(x)] := \int_a^b F(y, y'; x)\,dx \to \min
\;\Longrightarrow\;
\delta I = \int_a^b \left(\frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y'\right) dx = 0
\;\overset{\text{integration by parts}}{\Longrightarrow}\;
\int_a^b \left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\delta y\,dx = 0
\;\Longrightarrow\;
\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0
\]
and the backward direction, from a PDE to a functional: starting from the Euler-Lagrange equation, multiply by $\delta y$, integrate, undo the integration by parts to get $\delta I = \int_a^b \delta F\,dx = 0$, and recover $I[y(x)] := \int_a^b F(y, y'; x)\,dx \to \min$.
its properties. One example: we have a thin plate and its edge is heated up to a certain
temperature; then we ask this question: what is the temperature inside the plate? That temperature is
the solution to Laplace's equation:
\[
\Delta f = 0 \;\text{ in } B \tag{9.10.1}
\]
where $\Delta$ is the Laplacian operator, see Eq. (7.11.35). Eq. (9.10.1) means that $f(x, y)$ is a
function such that $\Delta f = 0$ for all points in the plate, i.e., $(x, y) \in B$. Recall that
\[
\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}
\]
Now, we start with a partial differential equation, and some mathematicians asked the question:
does there exist a functional associated with this equation? The answer to this
question in the case of Laplace's equation is yes, in terms of Dirichlet's principle. Dirichlet's
principle states that if the function $u$ is the solution to Laplace's equation, Eq. (9.10.1),
with boundary condition $u = g$ on the boundary $\partial B$, then $u$ can be obtained as the minimizer of
the Dirichlet energy functional
\[
E[v] = \int_B \frac{1}{2}|\nabla v|^2\,dV \tag{9.10.2}
\]
over all $v$ equal to $g$ on $\partial B$.
The name "Dirichlet’s principle" is due to Riemann, who applied it in the study of complex
analytic functions.
What is the significance of Dirichlet's principle? It tells us that we can travel the Euler-Lagrange
highway the inverse way; see the right branch of Fig. 9.8. Facing the task of solving a PDE, we
do not solve it directly; we multiply it with $\delta y$, integrate the result, do integration by parts,
and eventually arrive at a functional. Then, we find the minimizer of this functional.
And this was exactly what Walther Heinrich Wilhelm Ritz (1878 – 1909)–a Swiss theoretical
physicist–did when he solved the problem of an elastic plate. Thus Ritz developed
the method which was coined the Ritz method, presented in Section 9.9; this name was due
to Galerkin. The main motivation for Ritz was the announcement of the Prix Vaillant for 1907
by the Academy of Science in Paris. This announcement was sent to him by his friend Paul
Ehrenfest on a postcard. The deformation of an elastic plate under an external force $f(x, y)$
was a very difficult problem at that time; it was first considered by Sophie Germain in several
articles. The breakthrough was achieved by Kirchhoff in the form of the differential equation
\[
\frac{\partial^4 w}{\partial x^4} + 2\frac{\partial^4 w}{\partial x^2 \partial y^2} + \frac{\partial^4 w}{\partial y^4} = f(x, y) \tag{9.10.3}
\]
where $w(x, y)$ is the deflection of the plate. Of course we skip the required boundary conditions.
A compact way to write the bending plate equation is to use the Laplacian operator $\Delta$: the left hand side of Eq. (9.10.3) is just $\Delta(\Delta w)$, so the equation reads
\[
\Delta^2 w = f(x, y) \tag{9.10.4}
\]
Ritz went the Euler-Lagrange highway backwards, and came up with the following functional:
\[
J[w(x, y)] = \int_B \left[\frac{1}{2}(\Delta w)^2 - f w\right] dV \to \min \tag{9.10.5}
\]
Then, he introduced his approximation for the solution function $w(x, y)$, assuming that the
boundary condition is zero deflection on the plate edges:
\[
\bar{w}(x, y) = c_1\phi_1(x, y) + c_2\phi_2(x, y) + \cdots + c_n\phi_n(x, y) \tag{9.10.6}
\]
Substituting this into Eq. (9.10.5), we get $J(c_1, c_2, \ldots)$, and minimizing it gives us a system
of linear equations to solve for the Ritz parameters $c_i$. The effort was high as Ritz did not have a
computer to help him, but of course he managed to get good results.
Because we need the functions $\phi_i(x, y)$ to be zero on the plate boundary, Ritz selected the
easiest plate problem: a square plate of size $2 \times 2$. Thus, $\phi_1(x, y) = (1 - x^2)^2(1 - y^2)^2$, with
the origin of the coordinate system at the plate center, and so on.
Proof of Dirichlet's principle. Assume that $u$ is the solution to Laplace's equation, thus
$\Delta u = 0$ in $B$. Furthermore, we have $u = g$ on $\partial B$. We have to show that $E[u] \le E[w]$ for
any $w$ with $w = g$ on $\partial B$. Write $w = u - v$; then $v = 0$ on $\partial B$, and
\[
\begin{aligned}
E[w] &= \int_B \frac{1}{2}|\nabla(u - v)|^2\,dV \\
&= \frac{1}{2}\int_B \nabla(u - v)\cdot\nabla(u - v)\,dV \\
&= \frac{1}{2}\int_B \left[(\nabla u)^2 + (\nabla v)^2 - 2\nabla u\cdot\nabla v\right] dV \\
&= E[u] + E[v] \ge E[u] \quad (\text{because } E[v] \ge 0)
\end{aligned}
\]
This is because $\int_B \nabla u\cdot\nabla v\,dV = 0$, thanks to the first Green's identity, see Section 7.11.13:
\[
\int_B \nabla u\cdot\nabla v\,dV = \int_{\partial B} (v\nabla u)\cdot n\,dS - \int_B v\,\Delta u\,dV = 0
\]
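Dirichlet's principle can also be sanity-checked numerically. The sketch below is my own construction: $u = x^2 - y^2$ is harmonic on the unit square, $v = \sin(\pi x)\sin(\pi y)$ vanishes on its boundary, and the energies and the cross term $\int_B \nabla u\cdot\nabla v\,dV$ are evaluated by midpoint quadrature. The cross term vanishes, so $E[u + v] = E[u] + E[v] \ge E[u]$:

```python
import math

N = 200
h = 1.0 / N

def grad_u(x, y):
    # u = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return 2.0*x, -2.0*y

def grad_v(x, y):
    # v = sin(pi x) sin(pi y) vanishes on the boundary of the unit square
    return (math.pi*math.cos(math.pi*x)*math.sin(math.pi*y),
            math.pi*math.sin(math.pi*x)*math.cos(math.pi*y))

E_u = E_v = cross = 0.0
for i in range(N):
    for j in range(N):
        x, y = (i + 0.5)*h, (j + 0.5)*h      # midpoint quadrature
        ux, uy = grad_u(x, y)
        vx, vy = grad_v(x, y)
        E_u   += 0.5*(ux*ux + uy*uy)*h*h
        E_v   += 0.5*(vx*vx + vy*vy)*h*h
        cross += (ux*vx + uy*vy)*h*h

# cross ~ 0 by Green's identity, so perturbing u away from the harmonic
# solution (while keeping the boundary values) can only raise the energy
print(E_u, E_v, cross)
```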
\[
\bar{y}(x) = c_1\sin\frac{\pi x}{L} + c_2\sin\frac{3\pi x}{L} \tag{9.11.1}
\]
And the corresponding solution is
\[
\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0
\;\Longrightarrow\;
\int_a^b \left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\delta y\,dx = 0
\;\Longrightarrow\;
\boxed{\int_a^b \left(\frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y'\right) dx = 0}
\]
With the boxed equation, they introduced the usual Ritz approximations for $y$ and $\delta y$ to obtain a
system of linear equations. To demonstrate their method, we solve the bending beam problem
again, starting with the PDE $ky'''' - q_0 = 0$.
Of course, this equation is nothing but the variation of a functional being set to zero. But we do
not need to know the form of that functional if our aim is primarily to find the solution $y(x)$.
Why integration by parts? In theory, we can stop at Eq. (9.11.4) and introduce the Ritz
approximation into it to get a system of equations to solve for the Ritz parameters. However,
it involves $y''''$, thus the Ritz approximation for $y$ must use at least a third order polynomial.
Furthermore, we have asymmetry in the formulation: there is $y''''$ but only $\delta y$. Just one simple
integration by parts, and we get Eq. (9.11.5), in which the order of the derivative of $y(x)$ has been lowered
from four to two, with the balance passed to $\delta y''$. Thus, we have a symmetric formulation. Thanks to
this, the resulting matrix $A$ will be symmetric, i.e., $A_{ij} = A_{ji}$.
Now, Galerkin used the Ritz approximation for $y(x)$. For illustration, only two terms are
used, $\bar{y}(x) = c_1\phi_1(x) + c_2\phi_2(x)$, together with a similar approximation for the variation,
$\delta y = d_1\phi_1(x) + d_2\phi_2(x)$.
What are the di ’s? They are real numbers which can be of any value, because a variation is
anything that is zero at the boundary.
With these approximations introduced into Eq. (9.11.5), we get
\[
\int_0^L \left[k(c_1\phi_1'' + c_2\phi_2'')(d_1\phi_1'' + d_2\phi_2'') - q_0(d_1\phi_1 + d_2\phi_2)\right] dx = 0
\]
which is re-arranged as
\[
\left[\left(\int_0^L k\phi_1''\phi_1''\,dx\right)c_1 + \left(\int_0^L k\phi_1''\phi_2''\,dx\right)c_2 - \int_0^L q_0\phi_1\,dx\right] d_1 +
\left[\left(\int_0^L k\phi_1''\phi_2''\,dx\right)c_1 + \left(\int_0^L k\phi_2''\phi_2''\,dx\right)c_2 - \int_0^L q_0\phi_2\,dx\right] d_2 = 0
\]
In theory, the only requirement is that $\delta y(0) = \delta y(L) = 0$. Thus, it is possible to use another approximation
for it, for example $\delta y = \sum_i d_i\psi_i(x)$ with functions $\psi_i$ different from the $\phi_i$. But that would
come some years after Galerkin's work. Advancements are made in small steps.
Now, because $d_1$ and $d_2$ are arbitrary, we conclude that the two bracketed terms must be zero:
\[
\left(\int_0^L k\phi_1''\phi_1''\,dx\right)c_1 + \left(\int_0^L k\phi_1''\phi_2''\,dx\right)c_2 = \int_0^L q_0\phi_1\,dx
\]
\[
\left(\int_0^L k\phi_1''\phi_2''\,dx\right)c_1 + \left(\int_0^L k\phi_2''\phi_2''\,dx\right)c_2 = \int_0^L q_0\phi_2\,dx
\]
Look at what we have obtained: a system of equations to determine the Ritz coefficients, and
the system is identical to the one obtained from the Ritz method, see Eqs. (9.9.14) and (9.9.15). That's
probably why Galerkin called his method the Ritz method, and nowadays we call what Galerkin
did the Galerkin method!
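The Galerkin system is easy to set up symbolically. The sketch below (my own, in Python with SymPy) uses the sine trial functions of Eq. (9.11.1) for the simply supported beam; thanks to the orthogonality of the sines, the system is diagonal and the coefficients come out in closed form:

```python
import sympy as sp

x, k, L, q0 = sp.symbols('x k L q0', positive=True)
c1, c2 = sp.symbols('c1 c2')
phi = [sp.sin(sp.pi*x/L), sp.sin(3*sp.pi*x/L)]   # trial functions of Eq. (9.11.1)
y = c1*phi[0] + c2*phi[1]
# weak form tested with each trial function in turn (delta y = phi_i)
eqs = [sp.integrate(k*sp.diff(y, x, 2)*sp.diff(p, x, 2) - q0*p, (x, 0, L))
       for p in phi]
sol = sp.solve(eqs, [c1, c2])
print(sol)   # c1 = 4*q0*L**4/(pi**5*k), c2 = 4*q0*L**4/(243*pi**5*k)
```

These are the leading terms of the classical sine-series solution of the simply supported beam, so already two terms give a very accurate deflection.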
Let's summarize the steps of the method, which I refer to as the Bubnov-Galerkin method–a
common term nowadays–in Box 9.1, even though a better term would have been the Ritz-Bubnov-Galerkin
method. What more does this method give us compared with its predecessor that Ritz developed?
It has a wider application, as there are many partial differential equations that are not the
Euler-Lagrange equations of any variational problem.
Derive the weak form (multiply the PDE with $\delta y$, integrate over the domain, integrate by parts):
\[
\int_0^L (k y''\,\delta y'' - q_0\,\delta y)\,dx = 0
\]
Introduce the approximations for $y$ and $\delta y$ and solve the resulting linear system
\[
\sum_j A_{ij} c_j = b_i
\]
Contents
10.1 Vector in R3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
10.2 Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
10.3 System of linear equations . . . . . . . . . . . . . . . . . . . . . . . . . . 719
10.4 Matrix algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
10.5 Subspaces, basis, dimension and rank . . . . . . . . . . . . . . . . . . . . 741
10.6 Introduction to linear transformation . . . . . . . . . . . . . . . . . . . . 747
10.7 Linear algebra with Julia . . . . . . . . . . . . . . . . . . . . . . . . . . 753
10.8 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
10.9 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
10.10 Eigenvectors and eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . 770
10.11 Vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
10.12 Singular value decomposition . . . . . . . . . . . . . . . . . . . . . . . . 807
This chapter is about linear algebra. Linear algebra is central to almost all areas of mathemat-
ics. Linear algebra is also used in most sciences and fields of engineering. Thus, it occupies a
vital part in the university curriculum. Linear algebra is all about matrices, vector spaces, systems
of linear equations, eigenvectors, you name it. It is common that a student of linear algebra can
do the computations (e.g. compute the determinant of a matrix, or the eigenvector), but he/she
usually does not know the why and the what–the theoretical essence of the subject. This chapter
hopefully provides some answers to these questions.
There is one more strong motivation to learn linear algebra: it plays a vital part in machine
learning, which is basically ubiquitous in our modern lives.
The following books were consulted for the materials presented in this chapter:
Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares by Stephen
Boyd and Lieven Vandenberghe;
Introduction to Linear Algebra by the famous maths teacher Gilbert Strang [16]
I follow David Poole's organization of the subject to a great extent. Sometimes I felt lost
reading Strang's [16]; with Poole, I could read from the beginning to the end of his book. Even
though my linear algebra is still shaky (it is a big field and I rarely did exercises),
reading Strang's [54] was useful: that book gives a concise review of the linear algebra required
in applications. If I could understand Strang this time, I can say that I understand
linear algebra.
The chapter starts with the familiar physical vectors in the 2D plane and in the 3D space
we live in (Section 10.1). Nothing is abstract, and it is straightforward to introduce vector-vector
addition and scalar-vector multiplication–the two most important vector operations in
linear algebra. For use in vector calculus (and applications in physics), the cross product of two
3D vectors is also presented. But keep in mind that this product (with a weird definition, and
defined only for two 3D vectors) is not used in linear algebra. The vector description
of lines and planes is discussed, which plays an important role later.
Section 10.2 then presents a generalization of 2D and 3D vectors to vectors in Rn –the n
dimensional space, whatever it is geometrically. The section introduces the important concept
of linear combinations of a set of vectors, which plays a vital role in the treatment of systems of
linear equations.
Systems of linear equations, those of the form $Ax = b$, are the subject of Section 10.3. More
than 2000 years ago Chinese mathematicians already knew how to solve these systems. Due to
its linearity, solving a system of linear equations is not hard. But we introduce the new concept of a
matrix to the subject, and of course the Gaussian elimination method to take the matrix associated
with $Ax = b$ to a (reduced) row echelon form.
And with that we study the algebraic rules of matrices: how to add two matrices, multiply
a matrix with a vector and so on. The subject is known as matrix algebra (Section 10.4).
Also discussed are the transpose of a matrix, the inverse of a matrix, and the LU factorization of a matrix.
Subspaces, basis and dimension are discussed in Section 10.5. A brief introduction to linear transformations is given in Section 10.6.
Stephen Boyd is the Samsung Professor of Engineering, and Professor of Electrical Engineering in the Informa-
tion Systems Laboratory at Stanford University. His current research focus is on convex optimization applications
in control, signal processing, machine learning, and finance.
Lieven Vandenberghe is a Professor of Electrical Engineering at the University of California, Los Angeles.
His lectures are available at https://www.youtube.com/watch?v=ZK3O402wf1c&list=PL49CF3715CB9EF31D&index=1.
David Poole is a professor of mathematics at Trent University. He has been recognized with a number of
awards for his inspirational teaching. His research interests are algebra, discrete mathematics, ring theory and
mathematics education.
10.1 Vector in R3
To begin our journey into vector algebra, let's make some observations about various concepts we
use daily. For example, consider a cube of side 2 cm; its volume is 8 cm3. Now if we rotate
this cube, whatever the rotation angle is, its volume is always 8 cm3. We say that volume is
a direction-independent quantity. Mass, volume, density and temperature are such quantities. The
formal term for them is scalar quantities. To specify a scalar quantity, we need only provide
its magnitude (8 cm3, for example). And we know how to do mathematics with these scalars: we
can add, subtract, multiply, take roots etc. Furthermore, we know the rules of these operations,
see e.g. Eq. (2.1.2).
On the other hand, there are quantities that are direction-dependent. It is not hard to see
that velocity is such a quantity: we need to specify a magnitude (or speed) and a direction
when speaking of a velocity. After all, your car running at 50 km/h north-west is completely
different from it running at 50 km/h south-east. Quantities such as velocity, force, acceleration, and (linear and
angular) momentum are called vectorial quantities; they need a magnitude and a direction.
Geometrically, we use arrows to represent vectors (Fig. 10.1). Symbolically, we can write
$\overrightarrow{AB}$ or a bold-face $\mathbf{a}$–a notation introduced by Josiah Willard Gibbs (1839 – 1903), an American
scientist. We employ Gibbs' notation in this book. So, in what follows $\mathbf{a}$ (and similar symbols
such as $\mathbf{b}$) are vectors. However, in some figures, the old $\overrightarrow{AB}$ still appears, as it is easier to draw an
arrow.
Now, we need to define some operations for vectors similar to what we have done for numbers.
It turns out there are only a few: addition of vectors (two or more), multiplication of a vector
with a scalar, the dot product of two vectors (yielding a scalar) and the cross product of two vectors,
giving a vector (remember the torque in physics?).
Figure 10.1: Vectors are geometrically represented by arrows: for $\overrightarrow{AB}$, A is the tail of the vector and B is its head.
Having defined the addition operation, we need to find the properties that vector addition
obeys. From Fig. 10.2a, we can see immediately that $a + b = b + a$. Furthermore, it can be
seen that $(a + b) + c = a + (b + c)$. That is, addition of vectors follows commutative and
associative rules similar to those for numbers. Why is $(a + b) + c = a + (b + c)$ useful? Because it allows
us not to worry about the order, so we can remove the brackets unambiguously.
Repeated addition leads to multiplication. If we add a vector $a$ to itself we get $2a$, a vector
that has the same direction as $a$ but twice the length. We can generalize this by defining a
scalar multiplication for vectors. Given $\alpha \in \mathbb{R}$, $\alpha a$ is the scaled vector whose length is
$|\alpha|$ times the length of the original vector, lying along the line of $a$. From first
principles of Euclidean geometry (e.g. similar triangles), we can see that $\alpha(a + b) = \alpha a + \alpha b$.
Up to this point we have considered vectors as purely geometrical objects. To simplify the
computations, we adopt the approach of analytic geometry–use algebra to describe geometrical
objects. To this end, we use a Cartesian coordinate system, where each point is described by an
ordered pair of numbers $(x, y)$ in 2D or an ordered triplet of numbers $(x, y, z)$ or $(x_1, x_2, x_3)$
in 3D. A vector is then a directed line segment from the origin to any point in space (Fig. 10.3).
Figure 10.3: With the introduction of a coordinate system, any vector is represented by an ordered pair
of numbers $(x, y)$ in 2D, written as a column vector, or an ordered triplet of numbers $(x, y, z)$ in 3D. To
save space, in text we write $a = (a_1, a_2)^\top$ instead of $a = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$. There is more to say about the transpose
operator $^\top$.
On this plane we see a remarkable thing: any vector, say $a = (a_1, a_2)^\top$, is obtained by going
to the right (from the origin) a distance $a_1$ and then going vertically a distance $a_2$. We can write
this down as
\[
a = a_1\begin{bmatrix} 1 \\ 0 \end{bmatrix} + a_2\begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{10.1.1}
\]
or, with the introduction of two new vectors $i$ and $j$ called the unit coordinate vectors:
\[
a = a_1 i + a_2 j; \quad i := \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \; j := \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{10.1.2}
\]
Of course for 3D, we have three such vectors $i = (1, 0, 0)^\top$, $j = (0, 1, 0)^\top$ and $k = (0, 0, 1)^\top$.
Why write such a trivial equation as Eq. (10.1.2)? Because it says that any vector can be
written as a linear combination of the unit coordinate vectors. In other words, we say that the
two unit coordinate vectors span the 2D space. This is how mathematicians express the idea that
'the two directions–east and north–are sufficient to get us anywhere on a plane'. Note that this
geometric view does not, however, exist when we talk about high-dimensional spaces.
Vector addition is simple with components: to add vectors, add the components. The proof
is straightforward as follows, where $a = (a_1, a_2, a_3)^\top$:
(The word ordered is used because $(x, y)$ is totally different from $(y, x)$.)
\[
a + b = (a_1 i + a_2 j + a_3 k) + (b_1 i + b_2 j + b_3 k) = (a_1 + b_1)i + (a_2 + b_2)j + (a_3 + b_3)k
\]
Similarly, to scale a vector, scale its components: for a vector in 2D, $\alpha a = (\alpha a_1, \alpha a_2)$. Do we
have to define vector subtraction? No! This is because $a - b = a + (-1)b$. Scaling a vector
by a negative number changes its length and flips its direction.
Being able to add vectors and scale them by numbers, it is natural to compute a vector given by
$\alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_n a_n$–a linear combination of $n$ vectors $a_i$. We have seen such a combination
in Eq. (10.1.2).
With components, it is easy to prove $\alpha(a + b) = \alpha a + \alpha b$. Indeed, $\alpha(a_i + b_i) = \alpha a_i + \alpha b_i$.
Similar trivial proofs show up frequently in linear algebra.
Box 10.1 summarizes the laws of vector addition and scalar multiplication. Note that $\mathbf{0}$ is the
zero vector, i.e., $\mathbf{0} = (0, 0, 0)^\top$ for 3D vectors.
\[
a\cdot b = a_1 b_1 + a_2 b_2 + a_3 b_3 \tag{10.1.3}
\]
Why this definition? One way to understand it is to consider the special case where the two
vectors are the same. When $b = a$, we have $a\cdot a = a_1^2 + a_2^2 + a_3^2$, which is the square of
the length of $a$, see Fig. 10.4. So, the dot product gives us the length of a vector, defined by
$\lVert a\rVert := \sqrt{a\cdot a}$. We recall that the notation $|x|$ gives the distance from $x$ to $0$. Note the similarity
in the notations.
The dot product has many applications. For example, the kinetic energy of a 1D point mass
$m$ with speed $v$ is $0.5mv^2$, and its extension to 3D is $0.5m\,v\cdot v$. The work done by a force $F$
along a path is $\int_1^2 F\cdot ds$. And the list goes on.
There is a geometric meaning of this dot product: $a\cdot b = \lVert a\rVert\lVert b\rVert\cos(a, b)$. The notation
$(a, b)$ means the angle between the two vectors $a$ and $b$. The proof is based on the generalized
Pythagorean theorem $c^2 = a^2 + b^2 - 2ab\cos C$ (Section 3.12). We need a triangle here: two
edges are the vectors $a$ and $b$, and the remaining edge is $c = b - a$. For this triangle, we can write
(using the generalized Pythagorean theorem)
\[
\lVert b - a\rVert^2 = (b - a)\cdot(b - a) = b\cdot b + a\cdot a - 2a\cdot b = \lVert a\rVert^2 + \lVert b\rVert^2 - 2a\cdot b \tag{10.1.5}
\]
Comparing this with the generalized Pythagorean theorem gives $a\cdot b = \lVert a\rVert\lVert b\rVert\cos(a, b)$.
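A quick numerical check of this law-of-cosines argument, with two arbitrary vectors of my own choosing (a sketch, not from the book):

```python
import math

def dot(a, b): return sum(x*y for x, y in zip(a, b))
def norm(a):   return math.sqrt(dot(a, a))

a = (3.0, 1.0, 2.0)
b = (1.0, 4.0, -2.0)
c = tuple(bi - ai for ai, bi in zip(a, b))     # the third edge, c = b - a

lhs = norm(c)**2                               # ||b - a||^2 computed directly
rhs = norm(a)**2 + norm(b)**2 - 2*dot(a, b)    # law-of-cosines expansion
cos_ab = dot(a, b)/(norm(a)*norm(b))           # cos of the angle between a and b
print(lhs, rhs, cos_ab)
```

The two sides agree to rounding error, and `cos_ab` always lands in $[-1, 1]$, consistent with $a\cdot b = \lVert a\rVert\lVert b\rVert\cos(a, b)$.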
Again, we observe some properties or laws governing the behavior of the dot product. We
summarize them in Box 10.2. The proofs are quite straightforward and thus skipped. From (a)
and (b) we are going to derive another rule, with $a = e + f$:
\[
a\cdot(b + c) = a\cdot b + a\cdot c \;\Longleftrightarrow\; (e + f)\cdot(b + c) = (e + f)\cdot b + (e + f)\cdot c
\]
And using (a), (b) again, we have
\[
(e + f)\cdot(b + c) = e\cdot b + e\cdot c + f\cdot b + f\cdot c
\]
And what is this? This is the FOIL (First-Outer-Inner-Last) rule of algebra discussed in Section 2.1!
\[
(a + b)\cdot(a + b) = a\cdot a + 2a\cdot b + b\cdot b
\le \lVert a\rVert^2 + 2\lVert a\rVert\lVert b\rVert + \lVert b\rVert^2 \quad (\text{Cauchy-Schwarz inequality})
= (\lVert a\rVert + \lVert b\rVert)^2
\]
As $a\cdot b = \lVert a\rVert\lVert b\rVert\cos\theta$, we have $a\cdot b \le \lVert a\rVert\lVert b\rVert$.
We can use this to prove Pythagoras' theorem: if $a$ is orthogonal to $b$ then $a\cdot b = 0$, thus we have
$(a + b)\cdot(a + b) = a\cdot a + b\cdot b$, which is nothing but $\lVert a + b\rVert^2 = \lVert a\rVert^2 + \lVert b\rVert^2$. And this vector-based proof of
the Pythagoras theorem works for 2D and 3D and actually $n$D.
And if we have something for two vectors, we should extend that to n vectors. First, it’s easy
to see that, for 3 vectors we have
Solving plane geometry problems using vectors. Vectors can be used to easily solve (by algebraic
manipulation of a few vectors only) many plane geometry problems. See Fig. 10.5 for some
examples.
It is obvious that the length of a vector, which is a scalar quantity, is invariant under
translation and rotation. That is, if we rotate a vector, its length does not change. So, we
can define a 'dot product' that applies to a single vector only, i.e., $a\cdot a = a_1^2 + a_2^2 + a_3^2$.
We can thus write
\[
a\cdot a = \lVert a\rVert^2 = \text{constant}; \quad
b\cdot b = \lVert b\rVert^2 = \text{constant}; \quad
(a + b)\cdot(a + b) = \lVert a + b\rVert^2 = \text{constant}
\]
The length of the vector $a + b$ can be evaluated using our dot product definition: expanding
$(a + b)\cdot(a + b)$, we come to the fact that $a_1 b_1 + a_2 b_2 + a_3 b_3$ is also constant. That is why people
came up with this dot product between two vectors: it preserves lengths and angles.
With the dot product we can now write the equation for a plane in 3D. In 2D, a line needs
a point $(x_0, y_0)$ and a slope. For a plane, we also need a point $P_0 = (x_0, y_0, z_0)$ and a normal
$N = (a, b, c)$ (not a slope, as there are infinitely many tangent directions to a plane). For a point $P =
(x, y, z)$ on the plane, the vector from $P_0$ to $P$ is perpendicular to the normal. And of course
perpendicularity is expressed by the dot product of these two vectors:
\[
(x - x_0)a + (y - y_0)b + (z - z_0)c = 0, \;\text{ or }\; ax + by + cz = d \tag{10.1.9}
\]
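The point-plus-normal recipe translates directly into code. A minimal sketch (the sample point and normal are my own choices):

```python
def plane_through(P0, N):
    """Coefficients (a, b, c, d) of a x + b y + c z = d for the plane
    with normal N passing through the point P0 (Eq. 10.1.9)."""
    a, b, c = N
    d = a*P0[0] + b*P0[1] + c*P0[2]
    return a, b, c, d

a, b, c, d = plane_through(P0=(1, 2, 3), N=(4, -1, 2))
print(a, b, c, d)        # 4 -1 2 8
# A point reached from P0 along an in-plane direction (perpendicular to N)
# also satisfies the equation: (1, 4, 0).N = 0, so P = (2, 6, 3) is on the plane.
print(a*2 + b*6 + c*3)   # 8
```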
Using two direction vectors $u, v$, which are not parallel, we can also write the equation for a plane
through $P_0$:
\[
\begin{vmatrix} x - x_0 & u_1 & v_1 \\ y - y_0 & u_2 & v_2 \\ z - z_0 & u_3 & v_3 \end{vmatrix} = 0
\]
Again, a plane passing through the origin can be expressed as a linear combination of two
(direction) vectors:
Lines in 2D (dimension 1): $ax + by = c$, or vector form $x = p + su$.
Lines in 3D (dimension 1): two equations
\[
a_1 x + b_1 y + c_1 z = d_1; \quad a_2 x + b_2 y + c_2 z = d_2
\]
or vector form $x = p + su$.
Planes in 3D (dimension 2): $ax + by + cz = d$, or vector form $x = p + su + tv$.
10.1.4 Projections
Let's denote by $p$ the projection of $v$ on $u$. We have $p = OB\,\dfrac{u}{\lVert u\rVert}$. And considering the right
triangle OBA, we also have $OB = \lVert v\rVert\cos\theta$; now, relating $\cos\theta$ to the dot product of $u, v$,
we can write $p$ as:
\[
p = \lVert v\rVert\cos\theta\,\frac{u}{\lVert u\rVert}
= \lVert v\rVert\,\frac{u\cdot v}{\lVert u\rVert\lVert v\rVert}\,\frac{u}{\lVert u\rVert}
= \frac{u\cdot v}{u\cdot u}\,u
\]
Finding the projection of a vector onto another one has many applications. For example, the calculation
of the distance from a point to a line in space is one of them, though not an important one. As can
be seen, while finding the projection of $v$ on $u$, we also get the vector perpendicular to $u$ (vector
$\overrightarrow{AB}$). This is very useful later on (Section 10.8.6). But I want to show you what will come next.
The vector $p$ is, among all vectors along the line defined by $u$, the closest vector to $v$. This will be
generalized to the best approximation theorem when we extend our 3D space to $n$ dimensional
space ().
The length of the projected vector can be computed as:
\[
\lVert p\rVert = \left\lVert \frac{u\cdot v}{u\cdot u}\,u \right\rVert
= \left|\frac{u\cdot v}{u\cdot u}\right| \lVert u\rVert
= \frac{|u\cdot v|}{\lVert u\rVert}
\]
One application of this formula is to compute the distance from a point $B = (x_0, y_0, z_0)$ to a plane
$P: ax + by + cz = d$:
\[
\text{dist}(B, P) = \frac{|ax_0 + by_0 + cz_0 - d|}{\sqrt{a^2 + b^2 + c^2}}
\]
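Both formulas are a few lines of code. A small sketch (the test vectors and plane are my own, chosen so the answers are obvious):

```python
import math

def dot(u, v): return sum(a*b for a, b in zip(u, v))

def proj(v, u):
    """Projection of v onto u: (u.v / u.u) u."""
    s = dot(u, v)/dot(u, u)
    return tuple(s*ui for ui in u)

def dist_point_plane(B, plane):
    """Distance from point B to the plane a x + b y + c z = d."""
    a, b, c, d = plane
    return abs(a*B[0] + b*B[1] + c*B[2] - d)/math.sqrt(a*a + b*b + c*c)

print(proj((3.0, 4.0, 0.0), (1.0, 0.0, 0.0)))           # (3.0, 0.0, 0.0)
print(dist_point_plane((0.0, 0.0, 4.0), (0, 0, 1, 1)))  # plane z = 1 -> 3.0
```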
First, we consider a two dimensional rotation, i.e., an object circulating in the $xy$
plane (Fig. 10.7). Our analysis is guided by the last row in Table 10.2: we are going to
write the work $\Delta W = F\Delta x$ in terms of $\Delta\theta$.
Assume that at a time instant the object is located at P, which is specified by $(x, y)$ and $(r, \theta)$.
A moment later, it moves to point Q by rotating through a small angle $\Delta\theta$. We compute the changes
in position $\Delta x$ and $\Delta y$ in terms of $\Delta\theta$ (for a pure rotation, $\Delta x = -y\,\Delta\theta$ and $\Delta y = x\,\Delta\theta$). Then, we compute the work $\Delta W = F_x\Delta x + F_y\Delta y =
(xF_y - yF_x)\Delta\theta$. So this term $xF_y - yF_x$ should be defined as the torque $\tau$, which is a kind of force
that makes objects turn.
Figure 10.7
Yes, we have obtained one formula for the torque. But we can also obtain another formula for
it if we recall that work is the tangential force multiplied by the displacement. As seen from Fig. 10.8,
torque can also be defined as the magnitude of the force times the length of the lever arm. And
this formula agrees with our experience with torques: if the force is radial, i.e., $\alpha = 0$ (zero
lever arm length), the torque is zero.
Figure 10.8: Torque is defined as the magnitude of the force times the length of the lever arm.
With forces, we have the linear momentum $p = mv$ and Newton's 2nd law saying that the
external force is equal to the time derivative of the linear momentum: $F^{ext} = \dot{p}$. A question
arises: with torques, do we have another kind of momentum, in the sense that $\tau^{ext} = \dot{L}$? Let's
do the analysis. We start with the formula for the torque, $\tau = xF_y - yF_x$, then we replace $F_x$
and $F_y$ using Newton's 2nd law so that a time derivative appears:
\[
\tau = xF_y - yF_x = xm\frac{dv_y}{dt} - ym\frac{dv_x}{dt} = \frac{d}{dt}(xmv_y - ymv_x) = \frac{d}{dt}(xp_y - yp_x) \tag{10.1.13}
\]
Indeed, the torque is the time rate of change of something. And we call that something, $xp_y - yp_x$,
the angular momentum, denoted by $L$. And by doing the same analysis as done in Fig. 10.8
for the torque, we can see that the angular momentum is the magnitude of the linear momentum
times the length of the lever arm.
We have conservation of linear momentum when the total external force on a system is zero.
Do we have the same principle for angular momentum? As can be seen from Fig. 10.9 for a
system of 2 particles, the torque due to $F_{12}$ cancels the torque due to $F_{21}$. Thus, the rate of
change of the total angular momentum depends only on the external torques:
\[
\left.
\begin{aligned}
\frac{dL_1}{dt} &= \tau_1^{ext} + \tau_{12} \\
\frac{dL_2}{dt} &= \tau_2^{ext} + \tau_{21}
\end{aligned}
\right\}
\;\Longrightarrow\;
\frac{dL}{dt} = \tau_1^{ext} + \tau_2^{ext} \tag{10.1.14}
\]
Thus, if the net torque is zero, the angular momentum is conserved. Indeed, we have an
analog of the principle of conservation of linear momentum. This encourages us to keep moving
on: we have kinetic energy for translational motions; what will it look like for rotational motions?
Kinetic energy is $T = 0.5mv^2$, so we anticipate that for rotations it should be $T =
0.5f(m)\omega^2$. Let's do the maths:
\[
T = \frac{1}{2}mv^2 = \frac{1}{2}(mr^2)\omega^2 \;\Longrightarrow\; I = mr^2 \tag{10.1.15}
\]
The quantity $I = mr^2$ was called the moment of inertia by Leonhard Euler. It is a function of the mass
(of course) but it also depends on $r$, i.e., how far the mass is from the rotation axis; see Fig. 10.10
for an application.
Now, if we repeat the analysis we have just done in the $xy$-plane, but now for the $yz$-plane and the $zx$-plane, we obtain three terms:

$$(yF_z - zF_y,\; zF_x - xF_z,\; xF_y - yF_x)$$

And that is the torque, which is defined from the two vectors $r = (x, y, z)$ and $F$; $xF_y - yF_x$ is just the $z$ component of this torque. Now we generalize that to any two vectors $a$ and $b$:
Figure 10.10: Moment of inertia in rotations: it is a function of mass (of course), but it also depends on $r$, i.e., how far the mass is from the rotation axis. A spinning figure skater pulls in her outstretched arms to spin faster. This is because the angular momentum $L = I\omega$ is conserved: when $I$ is decreased, $\omega$ is increased, i.e., she spins faster.
$$c := a \times b \Longrightarrow c = \begin{bmatrix} a_2b_3 - a_3b_2\\ a_3b_1 - a_1b_3\\ a_1b_2 - a_2b_1 \end{bmatrix} \qquad (10.1.17)$$

From this definition, it can be seen that $b \times a = -\,a \times b$:

$$b \times a = \begin{bmatrix} b_2a_3 - b_3a_2\\ b_3a_1 - b_1a_3\\ b_1a_2 - b_2a_1 \end{bmatrix} = -\,a \times b \qquad (10.1.18)$$
The vector product is not commutative! One consequence is that $a \times a = 0$. Now, we need to know the direction of $a \times b$. Just apply Eq. (10.1.17) to the two special vectors $(1,0,0)$ and $(0,1,0)$: their cross product is $(0,0,1)$, which is perpendicular to both. The rule is: $c$ is perpendicular to both $a$ and $b$. This can be proved simply by calculating the dot product of $a \times b$ with $a$; you will see that it is zero. But does $c$ point up or down? The right-hand rule tells us which direction it follows.
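To make this concrete, here is a quick numerical check (a Python sketch of mine, not part of the text; `cross` and `dot` are helper names I chose) that the component formula of Eq. (10.1.17) gives anticommutativity and perpendicularity:

```python
def cross(a, b):
    # component formula of Eq. (10.1.17)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b = (1, 0, 0), (0, 1, 0)
print(cross(a, b))            # (0, 0, 1): perpendicular to both a and b
print(cross(b, a))            # (0, 0, -1): b x a = -(a x b)
print(dot(cross(a, b), a))    # 0: c is perpendicular to a
```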
We now know the direction of the cross product; how about its length? Let's compute it and see what we shall get:

$$\|a \times b\| = \|a\|\,\|b\|\sin\theta$$

Note the striking similarity with Eq. (10.1.6) about the dot product! A geometric interpretation of this formula is that the length of the cross product of $a$ and $b$ is the area of the parallelogram formed by $a$ and $b$. We also get that the area of a triangle formed by $a$ and $b$ is $0.5\|a \times b\|$. See Fig. 10.11a.
Figure 10.11: A geometric interpretation of the cross product of two vectors: the length of the cross
product of a and b is the area of the parallelogram formed by a and b.
As the area of a triangle formed by $a$ and $b$ is $0.5\|a \times b\|$, if the three vertices are $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$, the area of the triangle, explicitly expressed in terms of the coordinates of its vertices, is given by:

$$A = \frac{1}{2}\det \begin{bmatrix} 1 & 1 & 1\\ x_1 & x_2 & x_3\\ y_1 & y_2 & y_3 \end{bmatrix} \qquad (10.1.20)$$
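As a sanity check (a Python sketch of mine; the vertex values are made up), the determinant formula of Eq. (10.1.20) agrees with the cross-product route, with an absolute value taken so the area comes out positive regardless of vertex order:

```python
def tri_area_det(p1, p2, p3):
    # A = (1/2) det [[1,1,1],[x1,x2,x3],[y1,y2,y3]], expanded along the first row
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = (x2*y3 - x3*y2) - (x1*y3 - x3*y1) + (x1*y2 - x2*y1)
    return abs(det) / 2

def tri_area_cross(p1, p2, p3):
    # half the length of the cross product of the two edge vectors
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    bx, by = p3[0] - p1[0], p3[1] - p1[1]
    return abs(ax*by - ay*bx) / 2

print(tri_area_det((0, 0), (4, 0), (0, 3)))   # 6.0, a 3-4-5 right triangle
```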
Here are some rules regarding the cross product:

$$\begin{aligned}
a \times b &= -\,b \times a\\
a \times a &= 0\\
(\alpha a) \times b &= \alpha(a \times b) = a \times (\alpha b)\\
a \times (b + c) &= a \times b + a \times c\\
(a + b) \times c &= a \times c + b \times c\\
a \times (b \times c) &= b(a \cdot c) - c(a \cdot b)\\
(a \times b)^2 &= a^2 b^2 - (a \cdot b)^2\\
c \cdot (a \times b) &= (c \times a) \cdot b
\end{aligned} \qquad (10.1.21)$$
The first three rules are straightforward. How were the others discovered? Herein, we prove the last rule, known as the scalar triple product of three vectors. As two vectors give us an area, three vectors could give us a volume. So, let's build a box with three sides being our three vectors $a, b, c$ (see Fig. 10.11b); this box is called a parallelepiped. It is seen that the volume of this box is $c \cdot (a \times b)$: consider the base with two sides $a, b$; its area is $\|a \times b\|$; the volume is the base area times the height, that is, $\|a \times b\|\,\|c\|\cos\theta$. As the volume does not change if we consider a different base, the rule of the scalar triple product of three vectors is proved. Of course, a proof using pure algebra exists:

$$\begin{aligned} c \cdot (a \times b) &= c_1(a_2b_3 - a_3b_2) + c_2(a_3b_1 - a_1b_3) + c_3(a_1b_2 - a_2b_1)\\ &= b_1(c_2a_3 - c_3a_2) + b_2(c_3a_1 - c_1a_3) + b_3(c_1a_2 - c_2a_1) = b \cdot (c \times a) \end{aligned}$$
The rule $a \times (b \times c) = b(a \cdot c) - c(a \cdot b)$ is known as the triple product. You're encouraged to prove it using, of course, the definition of the cross product. You will realize that the process is tedious and boring (lengthy algebraic expressions). Refer to Section 7.11.14 for a more elegant proof when we're equipped with more mathematical tools.
And this result is called the law of the moduli by Hamilton: it states that the modulus of the product of two complex numbers is equal to the product of the moduli of the two numbers. We put it below:

$$|z_1 z_2| = |z_1|\,|z_2| \qquad (10.1.23)$$
Hamilton wanted to extend complex numbers–which he called couples, as each complex number contains two real numbers–to triplets. Thus, he considered the triplet $z = a + bi + cj$ with $i^2 = j^2 = -1$ and $ij = ji$ (this is because, at that time, Hamilton still insisted on the commutativity of multiplication). Of course, it is straightforward to add two triplets. But multiplication was not easy, even for a mathematician such as Hamilton. He wrote to his son Archibald shortly before his death:
“Every morning in the early part of the above-cited month, on my coming down to
breakfast, your brother William Edwin and yourself used to ask me, ‘Well, Papa, can
you multiply triplets?’ Whereto I was obliged to reply, with a sad shake of the head,
‘No, I can only add and subtract them.’ ”
Squaring a triplet gives $z^2 = (a + bi + cj)^2 = a^2 - b^2 - c^2 + 2abi + 2acj + 2bc\,ij$, and the term $2bc\,ij$ troubled him. To get a triplet from $z^2$, he needed to have $ij = a_1 + a_2 i + a_3 j$ with $a_i \in \mathbb{R}$. But this is impossible:

$$\begin{aligned} ij &= a_1 + a_2 i + a_3 j\\ i\,ij &= a_1 i + a_2 i^2 + a_3 ij &&\text{(multiplying the above by } i\text{)}\\ -j &= a_1 i - a_2 + a_3 ij && (i^2 = -1)\\ -j &= a_1 i - a_2 + a_3(a_1 + a_2 i + a_3 j) &&\text{(replacing } ij \text{ using the 1st eq.)}\\ -j &= a_1 a_3 - a_2 + (a_1 + a_2 a_3)i + a_3^2 j \end{aligned}$$

The last equation holds only when $a_3^2 = -1$, which is impossible as $a_3$ is a real number. So, $ij$ cannot be a triplet.
But if this term $2bc\,ij$ is zero, then it is simple to see that $|z^2| = a^2 + b^2 + c^2$, which is $|z|\,|z|$. The law of the moduli, Eq. (10.1.23), works! But when is $2bc\,ij$ zero? It is absurd to think that $ij = 0$. So, Hamilton thought that if $ij = -ji$, then it is possible for this term to vanish. Abandoning the commutativity of multiplication, and introducing a fourth unit $k$, he arrived at these rules:

$$i^2 = j^2 = k^2 = -1,\qquad ij = -ji = k,\qquad jk = -kj = i,\qquad ki = -ik = j \qquad (10.1.26)$$
Hamilton now needed to verify that his quaternions satisfy the rule of the modulus (Eq. (10.1.23)). He computed $zz$ and, with Eq. (10.1.26), he got:

$$\begin{aligned} (a + bi + cj + dk)(a + bi + cj + dk) &= a^2 + abi + acj + adk\\ &\quad + abi - b^2 + bc\,ij + bd\,ik\\ &\quad + acj + bc\,ji - c^2 + cd\,jk\\ &\quad + adk + bd\,ki + dc\,kj - d^2\\ &= (a^2 - b^2 - c^2 - d^2) + 2abi + 2acj + 2adk \end{aligned}$$

Thus, the modulus of $zz$ is $|zz| = \sqrt{(a^2 - b^2 - c^2 - d^2)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2} = a^2 + b^2 + c^2 + d^2$. Therefore, we have again the old rule about the modulus: $|zz| = |z|\,|z|$.
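We can let the computer redo Hamilton's verification for two different quaternions (a Python sketch of mine; the sample values are arbitrary). The product below just applies the rules of Eq. (10.1.26) and collects terms:

```python
def qmul(p, q):
    # Hamilton's rules (10.1.26): i^2=j^2=k^2=-1, ij=-ji=k, jk=-kj=i, ki=-ik=j
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm2(q):
    # squared modulus a^2 + b^2 + c^2 + d^2
    return sum(x*x for x in q)

z1, z2 = (1, 2, 3, 4), (5, -1, 2, 0)
# law of the moduli: |z1 z2| = |z1||z2| (checked on squared moduli)
print(norm2(qmul(z1, z2)) == norm2(z1) * norm2(z2))   # True
```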
10.2 Vectors in Rn
So we have seen 2D and 3D vectors. They are easy to grasp as we have counterparts in real life. But mathematicians do not stop there. Or rather, they encounter problems in which they have to stretch their imaginations. One such problem is solving a large system of simultaneous equations, which calls for vectors with $n$ components. For two vectors $a = (a_1, \dots, a_n)$ and $b = (b_1, \dots, b_n)$ in $\mathbb{R}^n$, the familiar operations carry over:

$$\begin{aligned}
\text{addition:}&\quad a + b = (a_1 + b_1, \dots, a_n + b_n)\\
\text{scalar multiplication:}&\quad \alpha a = (\alpha a_1, \alpha a_2, \dots, \alpha a_n)\\
\text{dot product:}&\quad a \cdot b = \sum_{i=1}^n a_i b_i = a_i b_i\\
\text{length (norm):}&\quad \|a\| = \sqrt{a \cdot a} = \Big(\sum_i a_i^2\Big)^{1/2}
\end{aligned}$$
where we have used the Einstein summation rule in $\sum_{i=1}^n a_i b_i = a_i b_i$. According to this rule, when an index variable ($i$ in this example) appears twice in a single term, it implies summation of that term over all the values of the index. The index $i$ is thus named the summation index or dummy index. The word dummy is used because we can replace it by any other symbol: $\sum_{i=1}^n a_i b_i = \sum_{j=1}^n a_j b_j = a_j b_j$.
Remark 4. All the rules about vector addition and scalar-vector multiplication in Box 10.1 still apply for vectors in $\mathbb{R}^n$. And note that we did not define the cross product for vectors living in a space with dimension larger than three! Luckily for us, in the world of linear algebra we do not need the cross product.
Notation $\mathbb{R}^n$. Let's discuss how mathematicians talk about 1D, 2D, 3D and $n$D spaces. When $x$ is a number living on the number line, they write $x \in \mathbb{R}$. When a point $x = (x, y)$ lives on a plane, they write $x \in \mathbb{R}^2$; this is because $x \in \mathbb{R}$ and $y \in \mathbb{R}$. Similarly, they write $x \in \mathbb{R}^3$ and $x \in \mathbb{R}^n$. This notation follows the Cartesian product of two sets discussed in Section 5.5.
We have special numbers, 0 and 1, and we also have special vectors. The zero vector $\mathbf{0}$ (note the bold font) has all components equal to zero, and the ones vector $\mathbf{1}$ has all components equal to one. And the unit vectors (remember $i, j, k$ of the 3D space?):

$$e_1 = \begin{bmatrix}1\\0\\0\\\vdots\\0\end{bmatrix},\quad e_2 = \begin{bmatrix}0\\1\\0\\\vdots\\0\end{bmatrix},\quad \dots,\quad e_n = \begin{bmatrix}0\\0\\0\\\vdots\\1\end{bmatrix} \qquad (10.2.1)$$

That is, vector $e_i$ has all components vanishing except the $i$th component, which is one.
An expression of the form

$$\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_m u_m \qquad (10.2.2)$$

is called a linear combination of the vectors $u_1, \dots, u_m$. The scalars $\alpha_1, \dots, \alpha_m$ are the coefficients of the combination. (Footnote: Of course it is not a requirement to use the Einstein notation in linear algebra; but it can be very useful elsewhere.)
For some special values of $\alpha_i$ we obtain some special combinations. Now, consider the following system of two linear equations:
$$\begin{aligned} 2x - y &= 1\\ x + y &= 5 \end{aligned} \qquad (10.3.1)$$

All of us know the technique to solve it: the elimination method. We keep the first equation, but replace the second by the sum of the second equation and the first (to remove $y$):

$$\begin{aligned} 2x - 1y &= 1\\ 3x + 0y &= 6 \end{aligned} \qquad (10.3.2)$$
Then, we have $x = 2$ from the second equation, and back-substituting $x = 2$ into the first equation gives us $y = 3$. This is pretty easy. What is interesting is the fact that we write the second equation $3x = 6$ as $3x + 0y = 6$. Furthermore, we can work on the two equations without referring to $x, y$ (after all, instead of $x, y$ we can equally use $u, v$ or whatever pleases us); we just need to focus on the numbers $2, -1, 1, 1, 1, 5$. So, we put the numbers appearing on the LHS in a rectangular array with 2 rows and 2 columns, denoted by a capital boldface symbol $A$, the numbers on the RHS in a vector ($b$), and the unknowns in another vector ($x$):

$$\begin{bmatrix} 2 & -1\\ 1 & 1 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 1\\ 5 \end{bmatrix}, \quad\text{or}\quad Ax = b \qquad (10.3.3)$$
and this 2-row, 2-column array is called the coefficient matrix, and the vector on the RHS is called the RHS vector. Note that this is not simply notation. Eq. (10.3.3) says that the matrix $A$ acts on the vector $x$ to produce the vector $b$. Matrices do something, as they are associated with linear transformations. More about this later in Section 10.11.3.

In a matrix there are rows and columns, thus we can view Eq. (10.3.3) from the row picture or the column picture. In the row picture, each row is an equation, which is geometrically a line in a 2D plane. There are two lines, Fig. 10.12-left, and they intersect at $(2, 3)$, which is the solution of the system. And this solution is unique, as there are no other solutions.
Figure 10.12: System of linear equations: row view (left) and column view (right).
In the column picture, we do not see two equations with scalar unknowns $x$ and $y$; we see only one vector equation:

$$x\begin{bmatrix}2\\1\end{bmatrix} + y\begin{bmatrix}-1\\1\end{bmatrix} = \begin{bmatrix}1\\5\end{bmatrix} \qquad (10.3.4)$$

And we are seeking the right linear combination of the columns of the coefficient matrix to get the RHS vector. In Fig. 10.12-right, we see that if we go along the first column two times its length and then follow the second column three times its length, we reach the RHS $(1, 5)$.
For this simple example in 2D, the row picture is easier to work with. However, for a system
of more than three unknowns such a geometric view does not exist.
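The two pictures can be checked with a few lines of Python (a sketch of mine, mirroring the elimination in the text):

```python
# Eq. (10.3.1): 2x - y = 1, x + y = 5.
# Elimination as in the text: add the first equation to the second to remove y.
r1, b1 = [2.0, -1.0], 1.0
r2, b2 = [1.0, 1.0], 5.0
r2 = [r2[0] + r1[0], r2[1] + r1[1]]     # becomes (3, 0)
b2 = b2 + b1                            # becomes 6
x = b2 / r2[0]                          # 3x = 6 -> x = 2
y = (b1 - r1[0]*x) / r1[1]              # 2x - y = 1 -> y = 3
print(x, y)                             # 2.0 3.0
# column picture: x*(2,1) + y*(-1,1) should reach b = (1, 5)
print(x*2 + y*(-1), x*1 + y*1)          # 1.0 5.0
```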
Historically it was the 19th-century English mathematician James Sylvester (1814 – 1897) who first coined the
term matrix, even though Chinese mathematicians knew about matrices from the 10th–2nd century BCE, written in
The Nine Chapters on the Mathematical Art.
No solution and many solutions. Using the row picture it is easy to see that $Ax = b$ either (i) has a unique solution, (ii) has no solution, or (iii) has many solutions. The following systems have no solution and many solutions, respectively:

$$\begin{cases} 2x - y = 1\\ 2x - y = 2 \end{cases}\qquad\qquad \begin{cases} 2x - y = 1\\ 4x - 2y = 2 \end{cases} \qquad (10.3.5)$$

In the first system, the two lines are parallel and thus do not intersect. In the second system, the second equation is just a multiple of the first; we then have just one equation, and all the points on the line of the first equation are solutions, see Fig. 10.13.
$$\begin{bmatrix}1 & 2 & 2 & 2\\ 2 & 4 & 6 & 8\\ 3 & 6 & 8 & 10\end{bmatrix} \begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} = \begin{bmatrix}1\\2\\5\end{bmatrix}\ \text{(underdetermined)};\qquad \begin{bmatrix}1 & 2\\ 2 & 4\\ 2 & 6\\ 7 & 2\end{bmatrix} \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}3\\4\\4\\5\end{bmatrix}\ \text{(overdetermined)}$$

The first matrix has more columns than rows–it is short and wide. The second matrix has more rows than columns–it is thin and tall.
Elementary row operations. It is clear that we can perform some massages to a system of linear equations without altering the solutions. For example, the system in Eq. (10.3.1) is equivalent to the following ones:

$$\begin{cases} x + y = 5\\ 2x - y = 1 \end{cases} \iff \begin{cases} 2(x + y) = 10\\ 2x - y = 1 \end{cases} \iff \begin{cases} x + y = 5\\ 3x = 6 \end{cases}$$

in which the first system was obtained by swapping the two original equations; from it, the second system was obtained by multiplying the first equation by two, and the third system by adding the first equation to the second equation. Using the row picture, what we have done is called elementary row operations. This is because the coefficients of the system are stored in the coefficient matrix, and thus what is done to the equations is done to the rows of this matrix. There are only three types of elementary row operations: swapping two rows; multiplying a row by a non-zero constant; and adding a multiple of one row to another row.
The Gaussian elimination method, discussed in the next section, uses the elementary row opera-
tions to transform the system into a simpler form.
Consider now the following system of three equations:

$$\begin{bmatrix}2 & 4 & -2\\ 4 & 9 & -3\\ -2 & -3 & 7\end{bmatrix} \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}2\\8\\10\end{bmatrix}$$

Once the elimination process–to be discussed shortly–has been done, we get a new form $Ux = c$:

$$U = \begin{bmatrix}2 & 4 & -2\\ 0 & 1 & 1\\ 0 & 0 & 4\end{bmatrix},\qquad c = \begin{bmatrix}2\\4\\8\end{bmatrix}$$
where $U$, a matrix in which all elements below the main diagonal are zero, is called an upper triangular matrix; the non-zero terms form a triangle. All the pivots of this upper triangular matrix are on the diagonal. Obviously, solving $Ux = c$ is super easy: back substitution. The last row gives us $4x_3 = 8$ or $x_3 = 2$; substituting that $x_3$ into the 2nd row, $x_2 + x_3 = 4$, we get $x_2 = 2$. Finally, substituting $x_3, x_2$ into the first row, we get $x_1 = -1$.
The elimination process brings A to U which is in a row echelon form (REF). A matrix is
said to be in row echelon form if all entries below the pivots are zero.
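Back substitution is a one-loop algorithm; here is a Python sketch (mine) applied to the $U$ and $c$ of this example:

```python
# Upper triangular system U x = c; expected solution (-1, 2, 2).
U = [[2.0, 4.0, -2.0],
     [0.0, 1.0,  1.0],
     [0.0, 0.0,  4.0]]
c = [2.0, 4.0, 8.0]

def back_substitute(U, c):
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # last row first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]         # pivots U[i][i] must be nonzero
    return x

print(back_substitute(U, c))                # [-1.0, 2.0, 2.0]
```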
We only mentioned about multiplying a row by a constant, but if the constant is 1=c, where c ¤ 0, then we
also cover division. Similarly, by adding a negative multiple of one row to another, we’re actually subtracting.
Now, I present the elimination process. We start with the elimination of $x_1$ in the second row (equivalently, the entry 4); this is achieved by subtracting two times the first row from the second row (the entry 2, the first non-zero in the row that does the elimination, is called a pivot). Then $x_1$ is eliminated from the third row by adding the first row to it, and finally $x_2$ is eliminated from the third row by subtracting the second row from it:

$$\begin{bmatrix}2 & 4 & -2 & 2\\ 4 & 9 & -3 & 8\\ -2 & -3 & 7 & 10\end{bmatrix} \Longrightarrow \begin{bmatrix}2 & 4 & -2 & 2\\ 0 & 1 & 1 & 4\\ 0 & 1 & 5 & 12\end{bmatrix} \Longrightarrow \begin{bmatrix}2 & 4 & -2 & 2\\ 0 & 1 & 1 & 4\\ 0 & 0 & 4 & 8\end{bmatrix}$$
Gauss would finish here and do back substitution. Jordan continued with elimination until
the left block is the unit matrix: A becomes I. And the obtained form is called the reduced row
Wilhelm Jordan (1842 – 1899) was a German geodesist who conducted surveys in Germany and Africa. He
is remembered among mathematicians for the Gauss–Jordan elimination algorithm, with Jordan improving the
stability of the algorithm so it could be applied to minimizing the squared error in the sum of a series of surveying
observations. This algebraic technique appeared in the third edition (1888) of his Textbook of Geodesy. Wilhelm
Jordan is not to be confused with the French mathematician Camille Jordan (Jordan curve theorem), nor with the
German physicist Pascual Jordan (Jordan algebras).
echelon form; it makes the back substitution super easy. A matrix is said to be in reduced row echelon form (RREF) if all the entries below and above the pivots are zero. What we have to do is make zeros above the pivots and make the pivots ones:

$$\begin{bmatrix}2 & 4 & -2 & 2\\ 0 & 1 & 1 & 4\\ 0 & 0 & 4 & 8\end{bmatrix} \Longrightarrow \begin{bmatrix}2 & 0 & -6 & -14\\ 0 & 1 & 1 & 4\\ 0 & 0 & 4 & 8\end{bmatrix} \Longrightarrow \begin{bmatrix}2 & 0 & 0 & -2\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 2\end{bmatrix} \Longrightarrow \begin{bmatrix}1 & 0 & 0 & -1\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 2\end{bmatrix}$$
The solution is now simply the right block, which is $(-1, 2, 2)$. Note that the columns of $A$ have been transformed into the three unit vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ of $\mathbb{R}^3$ in the reduced row echelon form.
Does this solution make sense? We have three unknowns and three equations; each equation is then a plane in $\mathbb{R}^3$. The intersection of two such planes gives a line, and a line intersects the remaining plane at a single point (if it is not parallel to the plane). This system is similar to Fig. 10.12-left; it is just hard to plot three planes and show their intersection.
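The whole Gauss-Jordan process can be written in a few lines. This Python sketch (mine; the text used Julia for such checks) reduces the augmented matrix of our system to RREF:

```python
def rref(M):
    # Gauss-Jordan: eliminate below AND above each pivot, scale pivots to 1
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c, at or below row r
        p = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]                  # row swap
        M[r] = [v / M[r][c] for v in M[r]]       # scale the pivot to 1
        for i in range(rows):
            if i != r and abs(M[i][c]) > 1e-12:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

aug = [[2.0, 4.0, -2.0, 2.0],
       [4.0, 9.0, -3.0, 8.0],
       [-2.0, -3.0, 7.0, 10.0]]
print(rref(aug))   # right column gives the solution (-1, 2, 2)
```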
$$\begin{aligned} x_1 - x_2 - x_3 + 2x_4 &= 1\\ 2x_1 - 2x_2 - x_3 + 3x_4 &= 3\\ -x_1 + x_2 - x_3 + 0x_4 &= -3 \end{aligned} \Longrightarrow \begin{bmatrix}1 & -1 & -1 & 2 & 1\\ 2 & -2 & -1 & 3 & 3\\ -1 & 1 & -1 & 0 & -3\end{bmatrix} \Longrightarrow \begin{bmatrix}\boxed{1} & -1 & 0 & 1 & 2\\ 0 & 0 & \boxed{1} & -1 & 1\\ 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
where, to save space, we have carried out the Gauss-Jordan elimination process in the final step§. Looking at the RREF, we have the third row full of zeros: it is meaningless, because it is equivalent to the equation $0 = 0$. This indicates that the hyperplane $-1x_1 + 1x_2 - x_3 + 0x_4 = -3$ is just a linear combination of the other hyperplanes. Indeed, the third row of $A$ is equal to three times the first row minus two times the second one.
Now, we have 4 unknowns but only 2 equations; there is so much freedom here. We say that there are $4 - 2 = 2$ free variables. And we also have two pivots (indicated by boxes in the above equation). The columns containing the pivots are called the pivot columns; in this example, they are the 1st and 3rd columns. They are of course the unit vectors $(1,0,0)$ and $(0,1,0)$ of $\mathbb{R}^3$. The other columns are called the non-pivot columns; they are the 2nd and 4th columns.
Now comes an important fact: the non-pivot columns can be written as linear combinations of the pivot columns. Look at the first non-pivot column, which is the second column, $(-1, 0, 0)$. Its nonzero entry must be in the first slot (if that were not the case, it would be a pivot column). Obviously, we can write $(-1, 0, 0) = (-1)\,(1, 0, 0)$: the first non-pivot column is a linear combination of the first pivot column. The second non-pivot column is $(1, -1, 0)$: it has its nonzero entries in the first two slots, thus it is a linear combination of the first two unit vectors (i.e., the first two pivot columns): $(1, -1, 0) = (1)\,(1, 0, 0) + (-1)\,(0, 1, 0)$. To illustrate this point, let's consider a RREF for a $4 \times 6$ matrix with 3 pivots:

§As I did not aim to practice the Gauss-Jordan method, I used Julia to do this for me. The aim was to see the solution of the system.

$$R = \begin{bmatrix}1 & b_{12} & 0 & b_{14} & 0 & b_{16}\\ 0 & 0 & 1 & b_{24} & 0 & b_{26}\\ 0 & 0 & 0 & 0 & 1 & b_{36}\\ 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
Another important fact: in the RREF, the 4th column is the 1st column minus the 3rd column; if this is not clear, check again Eq. (10.2.3). And we have the same relation in $A$: check that the 4th column of $A$ is exactly the 1st column minus the 3rd one. To explain why, we need to consider $Ax = 0$, discussed in Section 10.3.3.
It is a choice we made to select the variables associated with the non-pivot columns as the free variables, and to compute the other variables, called the pivot variables, in terms of the free ones. Thus, $x_2, x_4$ are the free variables and $x_1, x_3$ are the pivot variables. For the free variables we can assign $x_2 = s$ and $x_4 = t$; then

$$\begin{aligned} x_1 - x_2 + x_4 &= 2\\ x_3 - x_4 &= 1 \end{aligned} \Longrightarrow \begin{aligned} x_1 &= 2 + s - t\\ x_3 &= 1 + t \end{aligned} \Longrightarrow x = \begin{bmatrix}2 + s - t\\ s\\ 1 + t\\ t\end{bmatrix} = \begin{bmatrix}2\\0\\1\\0\end{bmatrix} + s\begin{bmatrix}1\\1\\0\\0\end{bmatrix} + t\begin{bmatrix}-1\\0\\1\\1\end{bmatrix} \qquad (10.3.6)$$
This specific example tells us that the number of free variables equals the number of unknowns minus the number of nonzero rows in the echelon form of $A$. Thus, we need to introduce another number that characterizes the matrix better (for a matrix we already have two numbers: the number of rows and the number of columns): that is the concept of the rank of the matrix.
Definition 10.3.1
The rank of a matrix is the number of nonzero rows in its row echelon form (or its reduced
REF). It is also the number of pivots.
The short answer is that Ax D 0 is equivalent to Rx D 0.
The answer to the first question is simple: if $x$ is a solution, we have $Ax = 0$, and thus $A(cx) = 0$ with $c \in \mathbb{R}$; in other words, $cx$ is also a solution. And that's why mathematicians call $Ax = 0$ a homogeneous equation. If the RHS is not $\mathbf{0}$, then we get an inhomogeneous system.
We focus on the third question for now. It is obvious that one possible solution is the zero vector, which is understandably called the trivial solution. This is similar to the equation $5x = 0$. But for the equation $0x = 0$, there are infinitely many solutions. So, $Ax = 0$ either has one unique solution, which is the zero vector, or has infinitely many solutions. From the previous section, we know that only when we have free variables do we have infinitely many solutions.
Theorem 10.3.2
If $[A|\mathbf{0}]$ is a homogeneous system of $m$ linear equations with $n$ unknowns, where $m < n$, then the system has infinitely many solutions.

Proof. Note that the system is always solvable (the zero vector is a solution). Then, by the rank theorem, the number of free variables is $n - \text{rank}(A) \ge n - m > 0$; with at least one free variable, the system has infinitely many solutions.
Definition 10.3.2
If $S = \{v_1, v_2, \dots, v_k\}$ is a set of vectors in $\mathbb{R}^n$, then the set of ALL linear combinations of $v_1, v_2, \dots, v_k$ is called the span of $v_1, v_2, \dots, v_k$, and is denoted by $\text{span}(v_1, v_2, \dots, v_k)$ or $\text{span}(S)$. If $\text{span}(S) = \mathbb{R}^n$, then $S$ is called a spanning set for $\mathbb{R}^n$.

Example 10.1
Show that $\mathbb{R}^2 = \text{span}(\{(2, 1), (1, 3)\})$. What we need to prove is that, for an arbitrary vector $(a, b)$ in $\mathbb{R}^2$, it is possible to write it as a linear combination of $(2, 1)$ and $(1, 3)$. That is, the following system

$$\begin{aligned} 2x + y &= a\\ x + 3y &= b \end{aligned}$$

always has a solution for all $a, b$. We can use Gaussian elimination to solve this system and see that it always has a solution.
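Because the determinant of the coefficient matrix is $2 \cdot 3 - 1 \cdot 1 = 5 \ne 0$, the combination can even be written down explicitly with Cramer's rule; a Python sketch (mine, with an arbitrary target vector):

```python
def coeffs(a, b):
    # solve 2x + y = a, x + 3y = b by Cramer's rule; det = 2*3 - 1*1 = 5 != 0,
    # so every target (a, b) is reachable: (2,1) and (1,3) span R^2
    det = 2*3 - 1*1
    x = (a*3 - b*1) / det
    y = (2*b - 1*a) / det
    return x, y

x, y = coeffs(7.0, -4.0)                # an arbitrary target vector (7, -4)
print(x*2 + y*1, x*1 + y*3)             # 7.0 -4.0: the target is recovered
```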
Example 10.2
Find $\text{span}(\{(1, 0), (0, 1), (2, 3)\})$. We simply use the definition to compute the span:

$$\text{span}(\{(1,0), (0,1), (2,3)\}) = \left\{ c_1\begin{bmatrix}1\\0\end{bmatrix} + c_2\begin{bmatrix}0\\1\end{bmatrix} + c_3\begin{bmatrix}2\\3\end{bmatrix} \right\}$$

What is interesting is that the third vector $(2, 3)$ is nothing new; it is a linear combination of the first two, so the span can be written in terms of only the first two vectors:

$$\text{span}(\{(1,0), (0,1), (2,3)\}) = \left\{ c_1\begin{bmatrix}1\\0\end{bmatrix} + c_2\begin{bmatrix}0\\1\end{bmatrix} + c_3\left(2\begin{bmatrix}1\\0\end{bmatrix} + 3\begin{bmatrix}0\\1\end{bmatrix}\right) \right\} = \left\{ \alpha\begin{bmatrix}1\\0\end{bmatrix} + \beta\begin{bmatrix}0\\1\end{bmatrix} \right\}$$
Linear independence. We have seen that in matrices, it is possible that some columns can be written in terms of others. For example, we can have

$$a_3 = 2a_1 - 3a_2$$

In this case, we say that the three columns, or vectors, are linearly dependent. Note that the above expression is not symmetric, as $a_3$ received special treatment. Thus, mathematicians re-write the above relation as

$$2a_1 - 3a_2 - a_3 = 0$$

And with that we have the following definitions about linear independence/dependence of a set of vectors.
Definition 10.3.3
A collection of $k$ vectors $u_1, \dots, u_k$ is linearly dependent if there exist scalars $\alpha_1, \dots, \alpha_k$, not all zero, such that

$$\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_k u_k = 0$$

Definition 10.3.4
A collection of $k$ vectors $u_1, \dots, u_k$ is linearly independent if it is not linearly dependent. That is,

$$\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_k u_k = 0 \Longrightarrow \alpha_i = 0 \quad (i = 1, 2, \dots, k)$$
Example 10.3
Determine whether the vectors $\{(1, 2, 0), (1, 1, -1), (1, 4, 2)\}$ are linearly independent. This is equivalent to seeing whether the following system

$$\begin{bmatrix}1 & 1 & 1\\ 2 & 1 & 4\\ 0 & -1 & 2\end{bmatrix} \begin{bmatrix}\alpha_1\\ \alpha_2\\ \alpha_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

has only the trivial solution (the zero vector) or not. Using the Gaussian elimination method, we get one zero row; thus this system has infinitely many solutions, and so there is a solution that is not the zero vector. Thus, the vectors are linearly dependent.
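Instead of elimination, the same test can be run with a determinant (a Python sketch of mine; the vectors are the example's set as I read them, and a zero determinant of the matrix of columns is equivalent to a nontrivial solution existing):

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

v1, v2, v3 = (1, 2, 0), (1, 1, -1), (1, 4, 2)
M = [[v1[i], v2[i], v3[i]] for i in range(3)]    # vectors as columns
print(det3(M))                                   # 0: linearly dependent
# indeed v3 = 3*v1 - 2*v2:
print(tuple(3*a - 2*b for a, b in zip(v1, v2)))  # (1, 4, 2)
```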
It can be seen then that in a 2D plane, 3 (or more) vectors are surely linearly dependent. This can be intuitively explained: on a 2D plane, two directions (two vectors which are not parallel) are sufficient to get us anywhere, so the third vector can bring nothing new: it must be a combination of the first two directions. Similarly, in a 3D space, any four vectors are linearly dependent. We can state this fact as the following theorem.

Theorem 10.3.3
Any set of $m$ vectors in $\mathbb{R}^n$ is linearly dependent if $m > n$.
Proof. The proof is based on Theorem 10.3.2, which tells us that a system of equations $Ax = 0$, where $A$ is an $n \times m$ matrix, has a nontrivial solution whenever $n < m$. Thus, we build $A$ with its columns being the set of $m$ vectors in $\mathbb{R}^n$. Because there is a solution $x \ne 0$, the columns of $A$ are linearly dependent.
Figure 10.14: A set of linearly dependent vectors makes a closed polygon. That is why, following them, we return to where we started: the origin.
The size of a matrix gives the number of rows and columns it has. An $m \times n$ matrix has $m$ rows and $n$ columns:

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n}\\ A_{21} & A_{22} & \cdots & A_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{bmatrix},\quad\text{or}\quad A = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}$$

and we denote by $A_{ij}$ the entry at row $i$ and column $j$ of $A$. The columns of $A$ are vectors in $\mathbb{R}^m$ (i.e., they have $m$ components) and the rows of $A$ are vectors in $\mathbb{R}^n$. In the above, the columns of $A$ are $A_i$, $i = 1, 2, \dots, n$. When $m = n$ we have a square matrix. The most special square matrix is the identity matrix $I$, or $I_n$ to explicitly reveal the size, where all the entries on the diagonal are 1: $I_{ii} = 1$:
Do not forget the column picture of Ax D b that x is the coefficients of the linear combination of A’s columns.
pronounced m by n matrix.
$$I = I_n := \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix} \qquad (10.4.1)$$

This matrix is called the identity matrix because $Ix = x$ for all $x$; it is the counterpart of the number one. As can be seen, $I$ consists of all the unit vectors of $\mathbb{R}^n$.
$$(Ax)_i = \sum_{k=1}^n A_{ik} x_k \qquad (10.4.4)$$

That is, the $i$th entry of the result vector is the dot product of the $i$th row of $A$ and $x$. This definition comes directly from the system $Ax = b$. Because the dot product has the distributive property $a \cdot (b + c) = a \cdot b + a \cdot c$, matrix-vector multiplication has the same property:
$$A(a + b) \overset{\text{def}}{=} \begin{bmatrix} \text{row 1 of } A \cdot (a + b)\\ \text{row 2 of } A \cdot (a + b)\\ \vdots\\ \text{row } m \text{ of } A \cdot (a + b) \end{bmatrix} = \begin{bmatrix} \text{row 1 of } A \cdot a + \text{row 1 of } A \cdot b\\ \text{row 2 of } A \cdot a + \text{row 2 of } A \cdot b\\ \vdots\\ \text{row } m \text{ of } A \cdot a + \text{row } m \text{ of } A \cdot b \end{bmatrix} = Aa + Ab$$
Now comes the harder matrix-matrix multiplication. One simple example for motivation: consider the following two linear systems:

$$\begin{aligned} x_1 + 2x_2 &= y_1\\ 0x_1 + 3x_2 &= y_2 \end{aligned};\qquad \begin{aligned} y_1 - y_2 &= z_1\\ 2y_1 + 0y_2 &= z_2 \end{aligned}$$

Substituting the first system into the second gives a direct relation between the $x$'s and the $z$'s:

$$\begin{aligned} x_1 - x_2 &= z_1\\ 2x_1 + 4x_2 &= z_2 \end{aligned} \qquad (10.4.5)$$
Thus, the product of the two matrices in this equation must be another $2 \times 2$ matrix, and this matrix must be, because we know the result from Eq. (10.4.5):

$$\begin{bmatrix}1 & -1\\ 2 & 0\end{bmatrix} \begin{bmatrix}1 & 2\\ 0 & 3\end{bmatrix} = \begin{bmatrix}1 & -1\\ 2 & 4\end{bmatrix}$$
This result can be obtained if we first multiply the left matrix on the LHS with the first column of the right matrix, giving the first column of the RHS matrix. Doing the same with the second column, we get the second column. And with that, we can now define the rule for matrix-matrix multiplication. Assume that $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix; then the product $AB$ is an $m \times p$ matrix of which the $ij$ entry is:

$$(AB)_{ij} = \sum_{k=1}^n A_{ik} B_{kj} \qquad (10.4.6)$$

In words: the entry at row $i$ and column $j$ of the product $AB$ is the dot product of row $i$ of $A$ and column $j$ of $B$. And now we understand why, for matrix-matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix.
It must be emphasized that the above definition of matrix-matrix multiplication is not the only way to look at this multiplication. In Section 10.4.4 other ways are discussed. This definition is used for the actual computation of the matrix-matrix product, but it does not tell us much about what is going on.
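Eq. (10.4.6) translates directly into code; a Python sketch (mine), checked on the motivating $2 \times 2$ example:

```python
def matmul(A, B):
    # (AB)_ij = sum_k A_ik B_kj, Eq. (10.4.6): row i of A dot column j of B
    n = len(B)
    assert all(len(row) == n for row in A)   # cols of A must equal rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# the motivating example: [[1,-1],[2,0]] [[1,2],[0,3]] = [[1,-1],[2,4]]
print(matmul([[1, -1], [2, 0]], [[1, 2], [0, 3]]))
```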
Remark 5. Of course you can define matrix-matrix multiplication in a different way; and in
the process you would create another branch of algebra. However, the presented definition is
compatible with matrix-vector multiplication. Thus, it inherits many nice properties as we shall
discuss shortly.
Thus matrix-matrix multiplication is not actually something entirely new.
Certainly mathematicians ask for proofs. Proving the first three laws is straightforward. This is not unexpected, as these laws are exactly identical to the laws for vector addition and scalar multiplication. If we want, we can think of a matrix as a 'long' vector (and this is actually how computers store matrices).

For the distributive law from the left: we consider one column of $A(B + C)$; it is $A(b_i + c_i)$, which is $Ab_i + Ac_i$ (due to the linearity of matrix-vector multiplication).

After multiplication come powers, so we now define powers of a matrix. With $p$ a positive integer, the $p$th power of a square matrix $A$ is defined as

$$A^p := \underbrace{AA\cdots A}_{p \text{ factors}}$$

And the usual laws of exponents, e.g. $2^m 2^n = 2^{m+n}$, hold for matrix powers:
With two vectors, we can multiply them to get a number with the above dot product. A question should arise: is it possible to get a matrix from the product of two vectors? The answer is yes:

$$a = \begin{bmatrix}1\\2\end{bmatrix},\quad b = \begin{bmatrix}3\\4\end{bmatrix} \Longrightarrow ab^\top = \begin{bmatrix}1\\2\end{bmatrix}\begin{bmatrix}3 & 4\end{bmatrix} = \begin{bmatrix}3 & 4\\ 6 & 8\end{bmatrix}$$

So, a vector $a$ of length $m$ with a vector $b$ of length $n$, via the outer product $ab^\top$, yields an $m \times n$ matrix.
Definition 10.4.2
The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^\top$ obtained by interchanging the rows and columns of $A$. That is, the $i$th column of $A^\top$ is the $i$th row of $A$.
With the introduction of the transpose, we can define a symmetric matrix as:
Definition 10.4.3
A square matrix A of size n n is symmetric if it is equal to its transpose.
Obviously transpose is an operator or a function, and thus it obeys certain rules. Here are
some basic rules regarding the transpose operator for matrices:
$$e^x = \frac{1}{2}\left[e^x + e^{-x}\right] + \frac{1}{2}\left[e^x - e^{-x}\right]$$
Phu Nguyen, Monash University © Draft version
Chapter 10. Linear algebra 734
which led to the definition of the hyperbolic cosine and sine functions. Now, we do the same thing for square matrices. Given a square matrix $A$, we can write

$$A = \frac{1}{2}(A + A^\top) + \frac{1}{2}(A - A^\top)$$

and applying that to the following matrix,

$$A = \begin{bmatrix}1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\end{bmatrix} = \frac{1}{2}\begin{bmatrix}2 & 6 & 10\\ 6 & 10 & 14\\ 10 & 14 & 18\end{bmatrix} + \frac{1}{2}\begin{bmatrix}0 & -2 & -4\\ 2 & 0 & -2\\ 4 & 2 & 0\end{bmatrix}$$
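The split is mechanical; a Python sketch (mine) for the matrix above:

```python
# Split A into symmetric and antisymmetric parts: A = (A+A^T)/2 + (A-A^T)/2
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
n = len(A)
S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
W = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]
print(S)   # symmetric part: S[i][j] == S[j][i]
print(W)   # antisymmetric part: W[i][j] == -W[j][i], zero diagonal
# the two parts add back to A
print(all(S[i][j] + W[i][j] == A[i][j] for i in range(n) for j in range(n)))
```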
$$AB = \begin{bmatrix} (\text{row 1 of } A)\cdot(\text{col 1 of } B) & (\text{row 1 of } A)\cdot(\text{col 2 of } B) & (\text{row 1 of } A)\cdot(\text{col 3 of } B)\\ (\text{row 2 of } A)\cdot(\text{col 1 of } B) & (\text{row 2 of } A)\cdot(\text{col 2 of } B) & (\text{row 2 of } A)\cdot(\text{col 3 of } B)\\ \vdots & \vdots & \vdots \end{bmatrix}$$
Thus, we can split $B$ into three columns $B_1, B_2, B_3$, and $AB$ is equal to the product of $A$ with each column, with the results put together:

$$AB = A\begin{bmatrix}B_1 & B_2 & B_3\end{bmatrix} = \begin{bmatrix}AB_1 & AB_2 & AB_3\end{bmatrix}$$
The form on the right is called the matrix-column representation of the product. What does this representation tell us? It tells us that the columns of $AB$ are linear combinations of the columns of $A$ (e.g. $AB_1$ is a linear combination of the columns of $A$, from the definition of matrix-vector multiplication). It follows that any linear combination of the columns of $AB$ is just a linear combination of the columns of $A$. Later on, this results in $\text{rank}(AB) \le \text{rank}(A)$.
And nothing stops us from partitioning matrix $A$ as well, but we have to split it by rows:

$$AB = \begin{bmatrix}A_1\\ A_2\\ A_3\end{bmatrix} B = \begin{bmatrix}A_1 B\\ A_2 B\\ A_3 B\end{bmatrix}$$

And this is called the row-matrix representation of the product.
It is also possible to partition both matrices, and we obtain the column-row representation of the product:

$$AB = \begin{bmatrix}A_1 & A_2 & A_3\end{bmatrix} \begin{bmatrix}B_1\\ B_2\\ B_3\end{bmatrix} = \underbrace{A_1 B_1 + A_2 B_2 + A_3 B_3}_{\text{sum of rank-1 matrices}}$$

This reminds us of the dot product, but the individual terms are matrices, not scalars, because $A_1 B_1$ is an outer product. For example, $A_1 B_1$ is a $3 \times 3$ matrix, as $A_1$ is a $3 \times 1$ matrix and $B_1$ is a $1 \times 3$ matrix:

$$A_1 B_1 = \begin{bmatrix}A_{11}\\ A_{21}\\ A_{31}\end{bmatrix} \begin{bmatrix}B_{11} & B_{12} & B_{13}\end{bmatrix} = \begin{bmatrix}A_{11}B_{11} & A_{11}B_{12} & A_{11}B_{13}\\ A_{21}B_{11} & A_{21}B_{12} & A_{21}B_{13}\\ A_{31}B_{11} & A_{31}B_{12} & A_{31}B_{13}\end{bmatrix}$$
$$A = \begin{bmatrix}1 & 0 & 0 & 2 & -1\\ 0 & 1 & 0 & -1 & 3\\ 0 & 0 & 1 & 4 & 0\\ 0 & 0 & 0 & 1 & 6\\ 0 & 0 & 0 & 7 & 1\end{bmatrix} = \begin{bmatrix}I & A_{12}\\ 0 & A_{22}\end{bmatrix},\qquad B = \begin{bmatrix}4 & -3 & 1 & 2 & -1\\ -1 & 2 & -2 & 1 & 1\\ 1 & 5 & -3 & 3 & -1\\ 1 & 0 & 0 & 0 & 2\\ 0 & 1 & 0 & 0 & 3\end{bmatrix} = \begin{bmatrix}B_{11} & B_{12} & B_{13}\\ I & 0 & B_{23}\end{bmatrix}$$

where $A$ has been partitioned as a $2 \times 2$ block matrix and $B$ as a $2 \times 3$ block matrix. (Note that $I$ is used to denote the identity matrix but the size of $I$ varies; similarly for $0$.) With these partitions, the product $AB$ can be computed blockwise, as if the entries were numbers:

$$AB = \begin{bmatrix}I & A_{12}\\ 0 & A_{22}\end{bmatrix} \begin{bmatrix}B_{11} & B_{12} & B_{13}\\ I & 0 & B_{23}\end{bmatrix} = \begin{bmatrix}B_{11} + A_{12} & B_{12} & B_{13} + A_{12}B_{23}\\ A_{22} & 0 & A_{22}B_{23}\end{bmatrix}$$
Using Julia it is quick to check that the usual way to compute AB gives the same result as the
way using partitioned matrices.
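Here is such a check in Python rather than Julia (a sketch of mine; the particular entries are illustrative, since the block algebra holds whatever they are):

```python
def matmul(A, B):
    # (AB)_ij = row i of A dot column j of B
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 0, 0, 2, -1], [0, 1, 0, -1, 3], [0, 0, 1, 4, 0],
     [0, 0, 0, 1, 6], [0, 0, 0, 7, 1]]
B = [[4, -3, 1, 2, -1], [-1, 2, -2, 1, 1], [1, 5, -3, 3, -1],
     [1, 0, 0, 0, 2], [0, 1, 0, 0, 3]]

# extract the blocks of the 2x2 / 2x3 partitions
A12 = [row[3:] for row in A[:3]]          # 3x2
A22 = [row[3:] for row in A[3:]]          # 2x2
B11 = [row[:2] for row in B[:3]]          # 3x2
B12 = [row[2:4] for row in B[:3]]         # 3x2
B13 = [row[4:] for row in B[:3]]          # 3x1
B23 = [row[4:] for row in B[3:]]          # 2x1
zero = [[0, 0], [0, 0]]

# blockwise: AB = [[B11 + A12, B12, B13 + A12*B23], [A22, 0, A22*B23]]
top = [r1 + r2 + r3 for r1, r2, r3 in
       zip(madd(B11, A12), B12, madd(B13, matmul(A12, B23)))]
bottom = [r1 + r2 + r3 for r1, r2, r3 in zip(A22, zero, matmul(A22, B23))]
print((top + bottom) == matmul(A, B))     # True: blockwise agrees with direct
```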
press the inverse function arcsin and we have: $\arcsin y = x$. Now, we have a square matrix $A$ and a vector $x$; when $A$ acts on $x$ we get a new vector $b$. To get back to $x$, do:

$$Ax = b \Longrightarrow \underbrace{A^{-1}A}_{I}x = A^{-1}b \Longrightarrow x = A^{-1}b \qquad (10.4.10)$$

The matrix $A^{-1}$ is called the left inverse matrix of $A$. There exists the right inverse matrix of $A$ as well: it is defined by $AA^{-1} = I$. If a matrix is invertible, then its inverse, $A^{-1}$, is the matrix that inverts $A$:

$$A^{-1}A = I \quad\text{and}\quad AA^{-1} = I \qquad (10.4.11)$$
Property 2. The inverse of the product $AB$ is the product of the inverses, but in reverse order:

$$(AB)^{-1} = B^{-1}A^{-1}$$
Elementary matrices. We are going to use matrix multiplication to describe the Gaussian elimination method used in solving $Ax = b$. The key idea is that each elimination step corresponds to the multiplication of an elimination matrix $E$ with the augmented matrix. We reuse the example in Section 10.3.1. We're seeking a matrix $E$ that expresses the process of subtracting two times the first equation from the second equation. To find that matrix, look at the RHS vector: we start with $(2, 8, 10)$ and we get $(2, 4, 10)$ after the elimination step; this can be nearly achieved with:
This property is sometimes called the socks-and-shoes rule: you put on the socks and then the shoes; to undo this, you take off the shoes first, then remove the socks.
Proof: A^{-1}A = I; taking transposes, A^{\top}(A^{-1})^{\top} = I.
Proof for n = 2: A^2(A^{-1})^2 = AAA^{-1}A^{-1} = AIA^{-1} = AA^{-1} = I.
\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix}
=
\begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix}
\]
We need to change this matrix slightly as follows, and we get what we wanted:
\[
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix}
=
\begin{bmatrix} 2 \\ 4 \\ 10 \end{bmatrix}
\]
Thus, starting from the identity matrix I (for which Ib = b), the elimination matrix E_{21} is I with the extra nonzero entry -2 in the (2, 1) position. How do we get that -2 from I? By replacing the second row of I with (row 2) - 2(row 1). But that is exactly the row operation we wanted to apply to b!
Multiplying E21 with A has the same effect:
\[
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix}
=
\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ -2 & -3 & 7 \end{bmatrix}
\]
Definition 10.4.4
An elementary matrix is a matrix that can be obtained from the identity matrix by one single
elementary row operation. Multiplying a matrix A by an elementary matrix E (on the left)
causes A to undergo the elementary row operation represented by E. This can be expressed
by symbols, where R denotes a row operation:
\[
A' = R(A) \iff A' = E_R\,A
\tag{10.4.12}
\]
Now, as the row operation affects the matrix A and the RHS vector b together, we can put the coefficient matrix A and the RHS vector b side by side to get the so-called augmented matrix, and we apply the elimination operation to this augmented matrix by left multiplying it with E_{21}:
\[
E_{21}\begin{bmatrix} A & b \end{bmatrix}
= \begin{bmatrix} E_{21}A & E_{21}b \end{bmatrix}
=
\begin{bmatrix}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
-2 & -3 & 7 & 10
\end{bmatrix}
\tag{10.4.13}
\]
To proceed, we want to eliminate the -2 using the pivot 2. The row operation is: replace row 3 by row 3 + row 1, and that can be achieved with the matrix E_{31} below (obtained from I by replacing its row 3 with row 3 + row 1)
\[
E_{31} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}
\]
to remove x1 in the third equation. Together, the two elimination steps can be expressed as:
\[
E_{31}E_{21}\begin{bmatrix} A & b \end{bmatrix}
= \begin{bmatrix} E_{31}E_{21}A & E_{31}E_{21}b \end{bmatrix}
=
\begin{bmatrix}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
0 & 1 & 5 & 12
\end{bmatrix}
\]
Finally, we use E_{32} as follows (we want to remove the 1, i.e. x_2, in row 3, and that is obtained by replacing row 3 with row 3 minus row 2):
\[
E_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}
\]
\[
E_{32}E_{31}E_{21}\begin{bmatrix} A & b \end{bmatrix}
= \begin{bmatrix} E_{32}E_{31}E_{21}A & E_{32}E_{31}E_{21}b \end{bmatrix}
=
\begin{bmatrix}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
0 & 0 & 4 & 8
\end{bmatrix}
\tag{10.4.14}
\]
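The chain of elimination matrices can be checked numerically; this Python sketch multiplies E_{32}E_{31}E_{21} against the augmented matrix of the example (entries as in the system above) and confirms it reproduces the upper-triangular result.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4, -2], [4, 9, -3], [-2, -3, 7]]
b = [[2], [8], [10]]
E21 = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # row2 <- row2 - 2 row1
E31 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]    # row3 <- row3 + row1
E32 = [[1, 0, 0], [0, 1, 0], [0, -1, 1]]   # row3 <- row3 - row2

Ab = [ra + rb for ra, rb in zip(A, b)]      # augmented matrix [A | b]
U_aug = matmul(E32, matmul(E31, matmul(E21, Ab)))
assert U_aug == [[2, 4, -2, 2], [0, 1, 1, 4], [0, 0, 4, 8]]
```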
And we have obtained the same matrix U that we got before. Notice the pivots along the diagonal.
The inverse of an elementary matrix. The inverse of an elementary matrix E is also an elemen-
tary matrix that undoes the row operation that E has done. For example,
\[
E_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}
\implies
(E_{32})^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}
\]
Finding the inverse: Gauss-Jordan elimination method. We have an invertible matrix and
we want to find its inverse. To illustrate the method, let’s consider a 3 3 matrix A. We know
that its inverse A 1 is a 3 3 matrix such that AA 1 D I. Let’s denote by x 1 ; x 2 ; x 3 the three
columns of A 1 . We’re looking for these columns. The equation AA 1 D I is equivalent to
three systems of linear equations, one for each column:
Ax i D e i ; i D 1; 2; 3 (10.4.15)
where e i are the unit vectors.
We know how to solve a system of linear equations using the Gaussian elimination method. The idea of the Gauss-Jordan elimination method is to solve the three systems in Eq. (10.4.15) all together. So, the augmented matrix is [A | e_1 e_2 e_3], or [A | I], and we perform the usual row operations on it. Let's consider a concrete matrix with 2's on the diagonal and -1's next to the 2's; the augmented matrix is
\[
\left[\begin{array}{ccc|ccc}
2 & -1 & 0 & 1 & 0 & 0 \\
-1 & 2 & -1 & 0 & 1 & 0 \\
0 & -1 & 2 & 0 & 0 & 1
\end{array}\right]
\]
The Gaussian elimination steps are:
\[
\implies
\left[\begin{array}{ccc|ccc}
2 & -1 & 0 & 1 & 0 & 0 \\
0 & 3/2 & -1 & 1/2 & 1 & 0 \\
0 & -1 & 2 & 0 & 0 & 1
\end{array}\right]
\quad (1/2\ \text{row 1 + row 2})
\]
\[
\implies
\left[\begin{array}{ccc|ccc}
2 & -1 & 0 & 1 & 0 & 0 \\
0 & 3/2 & -1 & 1/2 & 1 & 0 \\
0 & 0 & 4/3 & 1/3 & 2/3 & 1
\end{array}\right]
\quad (2/3\ \text{row 2 + row 3})
\]
What we have to do next is to remove the terms above the pivots, making zeros there:
\[
\implies
\left[\begin{array}{ccc|ccc}
2 & -1 & 0 & 1 & 0 & 0 \\
0 & 3/2 & 0 & 3/4 & 3/2 & 3/4 \\
0 & 0 & 4/3 & 1/3 & 2/3 & 1
\end{array}\right]
\quad (3/4\ \text{row 3 + row 2})
\]
\[
\implies
\left[\begin{array}{ccc|ccc}
2 & 0 & 0 & 3/2 & 1 & 1/2 \\
0 & 3/2 & 0 & 3/4 & 3/2 & 3/4 \\
0 & 0 & 4/3 & 1/3 & 2/3 & 1
\end{array}\right]
\quad (2/3\ \text{row 2 + row 1})
\]
Finally, dividing each row by its pivot makes the left half the identity matrix, and the three columns of A^{-1} are then in the right half. Thus, the whole process is
\[
\begin{bmatrix} A & I \end{bmatrix} \implies \begin{bmatrix} I & A^{-1} \end{bmatrix}
\]
This is because the first column after the vertical bar is x_1, the first column of the inverse of A, and similarly for the other columns. Each row operation corresponds to an elementary matrix, so the above can also be written as
\[
E_k \cdots E_2 E_1 \begin{bmatrix} A & I \end{bmatrix} = \begin{bmatrix} I & A^{-1} \end{bmatrix}
\]
From E_k \cdots E_2 E_1 A = I we get
\[
A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}
\tag{10.4.16}
\]
As the inverse of an elementary matrix is also an elementary matrix, this tells us that every invertible matrix can be decomposed as a product of elementary matrices.
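The Gauss-Jordan method is short to code. The sketch below row-reduces [A | I] for the tridiagonal example above, using exact `Fraction` arithmetic; the partial-pivoting line is an extra safeguard not discussed in the text.

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan: row-reduce [A | I] to [I | A^{-1}]."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # choose a pivot row (partial pivoting, for numerical safety)
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]          # scale pivot row
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]                     # clear the rest of the column
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
Ainv = inverse(A)
```

For this matrix the result is A^{-1} = (1/4)[[3, 2, 1], [2, 4, 2], [1, 2, 3]], matching the row-reduction above.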
10.4.6 LU decomposition/factorization
This section shows that the Gaussian elimination process results in a factorization of the matrix
A into two matrices: one lower triangular matrix L and the familiar upper triangular matrix U
that we have met. Recall Eq. (10.4.14):
\[
E_{32}E_{31}E_{21}\begin{bmatrix} A & b \end{bmatrix}
= \begin{bmatrix} E_{32}E_{31}E_{21}A & E_{32}E_{31}E_{21}b \end{bmatrix}
=
\begin{bmatrix}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
0 & 0 & 4 & 8
\end{bmatrix}
\]
From this, with U := E_{32}E_{31}E_{21}A, we can write
\[
A = E_{21}^{-1}E_{31}^{-1}E_{32}^{-1}\,U
\]
From Property 3 of the matrix inverse, we know the inverse matrices E_{21}^{-1}, E_{31}^{-1}, E_{32}^{-1}: they are all lower triangular matrices with 1's on the diagonal. Therefore, their product is also a lower triangular matrix. Thus, we have decomposed A into two matrices:
\[
A =
\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}
= LU
\]
similar to how we can decompose a number, e.g. 12 = 2 × 6. And this is always a good thing: dealing with 2 and 6 is much easier than with 12, and L and U contain many zeros. What are the benefits of this decomposition? It is useful because we replace Ax = b with two problems involving triangular matrices:
\[
Ax = b \iff LUx = b \iff
\begin{cases}
Ly = b \\
Ux = y
\end{cases}
\tag{10.4.17}
\]
in which we first solve for y, then solve for x. Using the LU decomposition method to solve Ax = b is faster than the Gaussian elimination method when we have a constant matrix A but many different RHS vectors b_1, b_2, ..., because we need to factor A into LU only once. Another benefit is the determinant: since L is triangular with 1's on the diagonal,
\[
\det(A) = \det(LU) = \det(L)\det(U) = 1 \cdot \prod_i u_i
\tag{10.4.18}
\]
where u_i are the entries on the diagonal of U (the pivots). There is more to say about determinants in Section 10.9.
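The two triangular solves in Eq. (10.4.17) are just forward and back substitution. The sketch below uses the L and U of the example above (exact `Fraction` arithmetic) to solve Ax = b for b = (2, 8, 10).

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4, -2], [4, 9, -3], [-2, -3, 7]]
L = [[1, 0, 0], [2, 1, 0], [-1, 1, 1]]
U = [[2, 4, -2], [0, 1, 1], [0, 0, 4]]
assert matmul(L, U) == A            # A = LU

b = [2, 8, 10]
# forward substitution: solve L y = b
y = []
for i in range(3):
    y.append(Fraction(b[i]) - sum(L[i][j] * y[j] for j in range(i)))
# back substitution: solve U x = y
x = [Fraction(0)] * 3
for i in reversed(range(3)):
    x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, 3))) / U[i][i]
```

For a new RHS vector, only these two cheap substitution loops are repeated; the factorization is reused.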
10.4.7 Graphs
We have used systems of linear equations as a motivation for matrices, but as is often the case
in mathematics, matrices appear in other problems. For example, in Chapter 8 we have seen
matrices when discussing coupled harmonic oscillators. In Section 11.6 we see matrices when
solving partial differential equations. In Section 6.6 we have seen matrices in linear recurrence
equations and in statistics. Even an image is a matrix. In this section, I present another application
of matrices.
Subspaces. Inside a vector space there might be a subspace: a smaller set of vectors that is still big enough to be a vector space itself. One example demonstrates the idea. A plane passing through the origin can be expressed as the set of linear combinations of two (direction) vectors: P: x = us + tv, where s, v ∈ R³ and u, t ∈ R. Now, considering two vectors x_1 and x_2 lying on this plane, we can write
\[
\begin{aligned}
x_1 &= u_1 s + t_1 v \\
x_2 &= u_2 s + t_2 v
\end{aligned}
\implies
\begin{aligned}
x_1 + x_2 &= (u_1 + u_2)s + (t_1 + t_2)v \in P \\
\alpha x_1 &= \alpha u_1 s + \alpha t_1 v \in P
\end{aligned}
\]
This indicates that if we take two vectors on this plane, their sum is also on this plane and
the product of one vector with a real number is also on the plane. We say that: The plane
going through the origin .0; 0; 0/ is a subspace of R3 . And this example leads to the following
definition of a subspace.
Definition 10.5.1
A subspace of Rⁿ is a set S of vectors in Rⁿ that satisfies two requirements: if u and v are two vectors in the subspace and α is a scalar, then
(a) u + v is in S (closed under addition);
(b) αu is in S (closed under scalar multiplication).
This gives us the following theorem (check definition 10.3.2 for what a span is)
Theorem 10.5.1: Span is a subspace
Let v1 ; v2 ; : : : ; vk be vectors in Rn . Then span.v1 ; v2 ; : : : ; vk / is a subspace of Rn .
And this theorem leads to the following subspaces associated with a matrix: the column space, the row space and the nullspace.
Subspaces associated with matrices. We know that solving Ax = b amounts to finding the linear combination of the columns of A, with coefficients the components of the vector x, that equals b. This leads naturally to the concept of the column space of a matrix. And why not a row space? And there are more. We put all these subspaces related to a matrix in the following definition.
Definition 10.5.2
Let A be an m × n matrix.
(a) The row space of A is the subspace R.A/ of Rn spanned by the rows of A.
(b) The column space of A is the subspace C.A/ of Rm spanned by the columns of A.
(c) The null space of A is the subspace N.A/ of Rn that contains all the solutions to
Ax D 0.
With this definition, we can deduce that Ax D b is solvable if and only if b is in the column
space of A. Therefore, C.A/ describes all the attainable right hand side vectors b.
Basis. A plane through (0, 0, 0) in R³ is spanned by two linearly independent vectors. Fewer than two independent vectors will not work; more than two is not necessary (e.g. with three vectors in R³ such that the third is a combination of the first two, a linear combination of the three is essentially a combination of the first two). We just need a smallest number of independent vectors.
Definition 10.5.3
A basis for a subspace S of Rⁿ is a set of vectors in S that
(a) spans S, and
(b) is linearly independent.
The first requirement makes sure that a sufficient number of vectors is included in a basis; the second ensures that a basis contains the minimum number of vectors that spans the subspace. We do not need more than that.
It is easy to see that the following sets of vectors are bases for R² (because they span R² and they are linearly independent):
\[
\left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right),
\qquad
\left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right)
\]
Even though R2 has many bases, these bases all have the same number of vectors (2). And this
is true for any subspace by the following theorem.
Theorem 10.5.2: The basis theorem
Let S be a subspace of Rn . Then any two bases of S have the same number of vectors.
Any two bases of a subspace of Rⁿ have the same number of vectors. That number must be special; indeed, it is the dimension of the subspace. So, we have the following definition for it.
Express each v_i in terms of u_1, u_2, .... Then build c_1v_1 + ⋯ = 0, which in turn is of the form (⋯)u_1 + (⋯)u_2 + ⋯ = 0. As B is a basis, all the terms in the brackets must be zero. This is equivalent to a linear system Ac = 0 with A ∈ R^{s×r}. This system has a nontrivial solution c due to Theorem 10.3.2.
Definition 10.5.4
Let S be a subspace of Rn , then the number of vectors in a basis for S is called the dimension
of S , denoted by dim.S /. Using the language of set theory, the dimension of S is the cardinality
of one basis of S .
Example 10.4
Find a basis for the row space of
\[
A =
\begin{bmatrix}
1 & 1 & 3 & 1 & 6 \\
2 & -1 & 0 & 1 & -1 \\
-3 & 2 & 1 & -2 & 1 \\
4 & 1 & 6 & 1 & 3
\end{bmatrix}
\]
The way to do this rests on the observation that if we perform a number of elementary row operations on A to get another matrix B, then R(A) = R(B)ᵃ. So, the same old tool of Gauss-Jordan elimination gives us:
\[
R =
\begin{bmatrix}
1 & 0 & 1 & 0 & -1 \\
0 & 1 & 2 & 0 & 3 \\
0 & 0 & 0 & 1 & 4 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
Now, the final row, consisting of all zeros, is useless; the three nonzero rows form a basis for R(A)ᵇ. And we also get dim(R(A)) = 3.
ᵃ The rows of B are simply linear combinations of the rows of A, thus a linear combination of the rows of B is a linear combination of the rows of A. This leads to R(B) ⊆ R(A). But the row operations can be reversed to go from B to A, so we also have R(A) ⊆ R(B).
ᵇ Why? Because the nonzero rows of R are independent.
Example 10.5
Find a basis for the column space of A given in Example 10.4. We have row operations, not column operations. So, one solution is to transpose the matrix to get A^⊤, in which the rows are the columns of A; with A^⊤ we can proceed as in the previous example. But a second way is better, as we just work with A. Note that a basis is about the linear independence of the columns of A, that is, about whether Ax = 0 has only the zero vector as a solution. With this view, we can study Rx = 0 instead, where R is the RREF of A.
There are three pivot columns in R: the 1st, 2nd and 4th columns. These pivot columns are standard unit vectors e_i, so they are linearly independent. The pivot columns also span the column space of Rᵃ. So the pivot columns of R are a basis for the column space of R. And this means that the pivot columns of A are a basis for the column space of A. We also obtain dim(C(A)) = 3ᵇ.
ᵃ This is because the non-pivot columns are linear combinations of the pivot ones; they add nothing new to the span.
ᵇ Be careful that C(A) ≠ C(R).
From the previous examples, we see that the column and row spaces of that specific matrix have the same dimension. In fact this is true for any matrix, so we have the following theorem.
Theorem 10.5.3
The row and column spaces of a matrix have the same dimension.
A nice thing about this theorem is that it allows us to give a better definition of the rank of a matrix: the rank of a matrix is the dimension of its row and column spaces. Compared with the definition of the rank as the number of nonzero rows, this definition is symmetric in rows and columns, as it should be. With this row-column symmetry, it is no surprise that rank(A) = rank(A^⊤).
Suppose that A and B are two matrices such that AB makes sense. From the definition of the matrix-matrix product, we know that the columns of AB are linear combinations of the columns of A; thus C(AB) ⊆ C(A), and therefore rank(AB) ≤ rank(A). Similarly, R(AB) ⊆ R(B), so rank(AB) ≤ rank(B). Finally, rank(AB) ≤ min(rank(A), rank(B)).
Proof. [Proof of Theorem 10.5.3] Consider a matrix A; we need to prove that dim(R(A)) = dim(C(A)). We start with the row space, using the fact that R(A) = R(R), where R is the RREF of A. Thus, dim(R(A)) = dim(R(R)). But dim(R(R)) is equal to the number of unit pivot rows, which equals the number of pivot columns of A. And we know that the pivot columns of A form a basis for C(A), so this number is also dim(C(A)).
We have the dimension for the row space and column space. What about the null space?
Definition 10.5.5
The nullity of a matrix A is the dimension of its null space and is denoted by nullity(A).
Example 10.6
Find a basis for the null space of A given in Example 10.4. This is equivalent to solving the
homogeneous system Ax D 0. We get the RREF as
\[
\begin{bmatrix} A & 0 \end{bmatrix} =
\left[\begin{array}{ccccc|c}
1 & 1 & 3 & 1 & 6 & 0 \\
2 & -1 & 0 & 1 & -1 & 0 \\
-3 & 2 & 1 & -2 & 1 & 0 \\
4 & 1 & 6 & 1 & 3 & 0
\end{array}\right]
\implies
\left[\begin{array}{ccccc|c}
1 & 0 & 1 & 0 & -1 & 0 \\
0 & 1 & 2 & 0 & 3 & 0 \\
0 & 0 & 0 & 1 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\]
Looking at the matrix R, we see that there are two free variables, x_3 and x_5. We then solve for the pivot variables in terms of the free ones, with x_3 = s and x_5 = t:
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}
=
\begin{bmatrix} -s + t \\ -2s - 3t \\ s \\ -4t \\ t \end{bmatrix}
= s \begin{bmatrix} -1 \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}
+ t \begin{bmatrix} 1 \\ -3 \\ 0 \\ -4 \\ 1 \end{bmatrix}
\]
Therefore, the null space of A has a basis consisting of the two vectors above, and the nullity of A is 2.
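It is a one-line check that the two basis vectors of Example 10.6 really lie in the null space of A; this Python sketch verifies Av = 0 for both.

```python
# Matrix A of Example 10.4 and the two null-space basis vectors found above.
A = [[1, 1, 3, 1, 6],
     [2, -1, 0, 1, -1],
     [-3, 2, 1, -2, 1],
     [4, 1, 6, 1, 3]]

n1 = [-1, -2, 1, 0, 0]   # the s-direction
n2 = [1, -3, 0, -4, 1]   # the t-direction

for v in (n1, n2):
    # every row of A is orthogonal to v, i.e. A v = 0
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```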
Theorem 10.5.4: The rank theorem
Let A be an m × n matrix; then
\[
\operatorname{rank}(A) + \operatorname{nullity}(A) = n
\]
Theorem 10.5.5
Let A be an m × n matrix; then rank(A^⊤A) = rank(A).
Proof. Using Theorem 10.5.4 for the matrices A and A^⊤A (both have the same number of columns n), we have
\[
\begin{aligned}
b &= \alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_k u_k \\
b &= \beta_1 u_1 + \beta_2 u_2 + \cdots + \beta_k u_k
\end{aligned}
\implies
0 = (\alpha_1 - \beta_1)u_1 + (\alpha_2 - \beta_2)u_2 + \cdots + (\alpha_k - \beta_k)u_k
\]
which also means that f(αx_1 + βx_2) = αf(x_1) + βf(x_2). The function y = g(x) = ax + b, albeit also called a linear function, does not satisfy these two properties: it is not a linear function, but an affine function.
Any function possessing the linearity property f(αx_1 + βx_2) = αf(x_1) + βf(x_2) is called a linear function, and there exist lots of such functions. But we need to generalize our concept of function. A function f: D → R maps an object of D to an object of R. By objects, we mean anything: a number x, a point in 3D space x = (x, y, z), a point in an n-dimensional space, a function, a matrix, etc.
Of course linear algebra studies vectors and functions that take a vector and return another vector. However, a new term is used: instead of functions, mathematicians speak of transformations. A transformation T turns a vector u ∈ Rⁿ into a new vector v ∈ Rᵐ. For example, we can define T: R² → R³ as:
\[
T\!\left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right)
=
\begin{bmatrix} x_1 + x_2 \\ x_1 - x_2 \\ x_1 x_2 \end{bmatrix}
\]
However, among the many types of transformations, linear algebra focuses on one special kind: linear transformations. This is similar to how ordinary calculus focuses on functions that are differentiable.
Definition 10.6.1
A linear transformation is a transformation T: Rⁿ → Rᵐ satisfying the following two properties:
(a) T(u + v) = T(u) + T(v) for all u, v ∈ Rⁿ;
(b) T(cu) = cT(u) for all u ∈ Rⁿ and all c ∈ R.
For abstract concepts (concepts for objects that do not exist in real life) we need to think about some examples to understand them better. So, in what follows we present some linear transformations.
Some 2D linear transformations. Fig. 10.15 shows a shear transformation. The equation for a 2D shear transformation is
\[
T\!\left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right)
=
\begin{bmatrix} x_1 + \lambda x_2 \\ x_2 \end{bmatrix}
\tag{10.6.1}
\]
If we apply this transformation to the two unit vectors i and j (labeled î in Fig. 10.15, without boldface, as it is inconvenient to handwrite boldface symbols), i is not affected but j is sheared to the right (λ = 1 in the figure). So the unit square is transformed to a parallelogram.
Figure 10.15: Shear transformation is a linear transformation from plane to plane. Side note: a shear
transformation does not change the area. That’s why a parallelogram has the same area as the rectangle
of same base and height.
In Fig. 10.15 we applied the transformation T to all the grid lines of the 2D space. You can see that the (grey) grid lines are transformed to lines (red dashed lines), the origin is kept fixed, and equally spaced points are transformed to equally spaced points. These are consequences of the following properties of any linear transformation.
Let T: Rⁿ → Rᵐ be a linear transformation; then
(a) T(0) = 0;
(b) T(c_1v_1 + c_2v_2 + ⋯ + c_kv_k) = c_1T(v_1) + c_2T(v_2) + ⋯ + c_kT(v_k).
The second property is the mathematical expression of the fact that linear transformations
preserve linear combinations. For example, if v is a certain linear combination of other vectors
s; t, and u, say v D 3s C 5t 2u, then T .v/ is the same linear combination of the images of
those vectors, that is T .v/ D 3T .s/ C 5T .t/ 2T .u/.
The standard matrix associated with a linear transformation. Consider again the linear transformation in Eq. (10.6.1). Now, we choose three vectors: the first two are very special, the unit vectors e_1 = (1, 0) and e_2 = (0, 1); the third is an arbitrary vector a = (1, 2). After the transformation T, we get three new vectors; in particular, since a = 1e_1 + 2e_2,
\[
T(a) = 1\,T(e_1) + 2\,T(e_2)
\]
Knowing that a matrix-vector multiplication is a linear combination of columns, we can write T(a) as a matrix-vector multiplication:
\[
T(a) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix}
\]
Of course, carrying out this matrix-vector multiplication gives the same result as direct use of Eq. (10.6.1). It is even slower. Why bother then? Because a linear transformation T: Rⁿ → Rᵐ determines an m × n matrix A, and conversely, an m × n matrix A determines a linear transformation T: Rⁿ → Rᵐ. This is important: from now on, when we see Ax = b, we do not see a bunch of meaningless numbers; we see a linear transformation A acting on x to bring it to b.
We now just need to generalize what we have done. Let’s consider a linear transformation
T W Rn ! Rm . Now, for a vector u D .u1 ; u2 ; : : : ; un / in Rn , we can always write u as a linear
combination of the standard basis vectors e i (we can use a different basis, but that leads to a
different matrix):
\[
u = u_1 e_1 + u_2 e_2 + \cdots + u_n e_n
\]
So, the linear transformation applied to u can be written as
\[
T(u) = T(u_1 e_1 + u_2 e_2 + \cdots + u_n e_n) = u_1 T(e_1) + u_2 T(e_2) + \cdots + u_n T(e_n)
\tag{10.6.2}
\]
which indicates that the transformed vector T .u/ is a linear combination of the transformed basis
vectors i.e., T .e i /, in which the coefficients are the coordinates of the vector. In other words,
if we know where the basis vectors land after the transformation, we can determine where any
vector u lands in the transformed space.
Now, assume that the n basis vectors in Rn are transformed to n vectors in Rm with coordi-
nates (implicitly assumed that the standard basis for Rm was used)
\[
\begin{aligned}
T(e_1) &= (a_{11}, a_{21}, \ldots, a_{m1}) \\
T(e_2) &= (a_{12}, a_{22}, \ldots, a_{m2}) \\
&\;\;\vdots \\
T(e_n) &= (a_{1n}, a_{2n}, \ldots, a_{mn})
\end{aligned}
\]
So we can characterize a linear transformation by storing T(e_i), i = 1, 2, ..., n, in an m × n matrix like this:
\[
A :=
\begin{bmatrix}
| & | & & | \\
T(e_1) & T(e_2) & \cdots & T(e_n) \\
| & | & & |
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\tag{10.6.3}
\]
That is, each column of this matrix is T(e_i), a vector of length m. This matrix is called the standard matrix representing the linear transformation T. Why standard? Because we have used the standard basis for Rⁿ and the standard basis for Rᵐ.
With this introduction of A, the linear transformation can be rewritten as a matrix-vector product:
\[
T(u) := Au
\tag{10.6.4}
\]
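The recipe "columns of A are the images of the basis vectors" is easy to code. This Python sketch builds the standard matrix of the shear of Eq. (10.6.1) with λ = 1 and checks that applying the matrix agrees with applying T directly.

```python
def T(x):
    """The shear transformation with lambda = 1."""
    x1, x2 = x
    return (x1 + x2, x2)

# columns of the standard matrix = images of the standard basis vectors
e = [(1, 0), (0, 1)]
images = [T(ei) for ei in e]
Amat = [[images[j][i] for j in range(2)] for i in range(2)]

def apply(A, x):
    """Matrix-vector product A x."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

u = (1, 2)
assert Amat == [[1, 1], [0, 1]]
assert apply(Amat, u) == T(u)      # both give (3, 2)
```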
A visual way to understand linear transformations is to use a GeoGebra applet and play with it. In Fig. 10.16, we present some transformations of a small image of Mona Lisa. By changing the transformation matrix M, we can see the effect of the transformation immediately.
Determinants. While playing with the GeoGebra applet we can see that sometimes a transformation enlarges the image and sometimes it shrinks it. Can we quantify this effect of a linear transformation? Let's do it, but in the plane only. We consider a general transformation matrix
\[
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\]
which tells us that the unit vector i is now at (a, c) and j is at (b, d). We are going to compute the area of the parallelogram made up by these two vectors. This parallelogram is what the unit square (which has an area of 1) has been transformed to. Based on the next figure, this area is ad - bc. So, any unit square in the plane is transformed to a parallelogram with an area of ad - bc. What about a 2 × 2 square? It is transformed to a parallelogram of area 4(ad - bc). So, ad - bc is the factor by which the transformation scales areas.
It can be found easily using Google: https://www.geogebra.org/m/pDU4peV5.
We can play the same game in 3D with the matrix
\[
A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}
\]
which is the matrix of a 3D linear transformation. We compute the volume of the parallelepiped
formed by three vectors a D .a; d; g/, b D .b; e; h/ and c D .c; f; i/. We know how to compute
such a volume using the scalar triple product in Section 10.1.5:
Figure 10.17: Determinant of a matrix can be negative. In that case the linear transformation flips the
space or changes the orientation. Look at the orientation of the unit vectors before the transformation and
after.
That’s the most we can do about determinant using geometry. We cannot find out the
formula for the determinant of a 4 4 matrix. How did mathematicians proceed then? We refer
to Section 10.9 for more on the determinant of a square matrix.
the same concept, but our functions are linear transformations. And doing so will reveal the rule for matrix-matrix multiplication.
Assume we have a linear transformation T W Rn ! Rm and a second linear transformation
S W Rm ! Rp . From the previous sub-section, we know that there exists a matrix A, of size
m n, associated with the transformation T and another matrix B (of size p m) associated
with S. Now, consider the composite transformation of first applying T and then applying S to the outcome of the first transformation. Mathematically, we write (S ∘ T)(u) = S(T(u)), which transforms u ∈ Rⁿ to a vector in Rᵖ.
Assume that there exists a matrix C associated with S ∘ T. Then the jth column of C is
\[
C_j = (S \circ T)(e_j) = S(T(e_j)) = S(A_j) = BA_j
\]
where A_j is the jth column of A. Therefore,
\[
C = BA = \begin{bmatrix} BA_1 & BA_2 & \cdots & BA_n \end{bmatrix}
\tag{10.6.5}
\]
How about ABC? From function composition, discussed in Section 4.2.3, we know that composition is associative, so (AB)C = A(BC). This is a nice proof, much better than the one based on the definition of matrix-matrix multiplication (try it to see my point).
With the geometric meaning of the determinant and of the matrix-matrix product, it is easy to see that the determinant of the product of two matrices is the product of their determinants:
\[
\det(AB) = \det(A)\det(B)
\]
This is because AB is associated with first a linear transformation whose area scaling is det B, followed by another transformation whose area scaling is det A; in total the area scaling is det(A) det(B).
We can see this by (S ∘ T)(u) = S(T(u)) = S(Au) = B(Au) = (BA)u.
10.8 Orthogonality
10.8.1 Orthogonal vectors & orthogonal bases
We know that two vectors a and b in R² or R³ are called orthogonal when a · b = 0. We extend this to vectors in Rⁿ: vectors x, y in Rⁿ are said to be orthogonal (denoted x ⊥ y) if x · y = 0, or x^⊤y = 0. We are interested in a bunch of vectors that are orthogonal to each other, as in the following definition.
Definition 10.8.1
A set of vectors a_1, ..., a_k in Rⁿ is an orthogonal set if all pairs of distinct vectors in the set are orthogonal; that is, if
\[
a_i \cdot a_j = 0 \quad \text{whenever } i \neq j, \quad i, j = 1, 2, \ldots, k
\]
The most famous example of an orthogonal set of vectors is the standard basis {e_1, e_2, ..., e_n} of Rⁿ. And we know that these basis vectors are linearly independent. So, we guess that orthogonal vectors are linearly independent. That guess is correct, as stated by the following theorem.
Theorem 10.8.1: Orthogonality-Independence
Given a set of non-zero orthogonal vectors a_1, ..., a_k in Rⁿ, they are linearly independent.
Proof. The idea is to assume the zero vector is expressed as a linear combination of these orthogonal vectors, then take the dot product of both sides with a_i and use the orthogonality to obtain α_i = 0 for i = 1, 2, ...:
\[
\begin{aligned}
&\alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_k a_k = 0 \\
\implies\; & a_i \cdot (\alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_k a_k) = 0 \\
\implies\; & \alpha_i (a_i \cdot a_i) = 0 \\
\implies\; & \alpha_i = 0
\end{aligned}
\tag{10.8.1}
\]
Example 10.7
Consider these three vectors in R³: v_1 = (2, 1, -1), v_2 = (0, 1, 1) and v_3 = (1, -1, 1). We can see that (i) they form an orthogonal set of vectors; thus (ii) by Theorem 10.8.1 they are linearly independent; and so (iii) being three independent vectors in R³, they form a basis for R³. If these vectors form a basis, then we can find the coordinates of any vector in R³ w.r.t. this basis. Find the coordinates of v = (1, 2, 3).
We have to solve the following system:
\[
\begin{bmatrix} 2 & 0 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
\implies
c = \begin{bmatrix} 1/6 \\ 5/2 \\ 2/3 \end{bmatrix}
\]
Solving a 3 × 3 system is not hard, but what if the question concerns a vector in R¹⁰⁰? Is there a better way? The answer is yes, and this is why orthogonal bases are very nice to work with. We first need to define what an orthogonal basis is.
Definition 10.8.2
An orthogonal basis for a subspace S of Rn is a basis of S that is an orthogonal set.
Now, we are going to find the coordinates of v = (1, 2, 3) in an easier way. We write v in terms of the basis vectors and take the dot product of both sides with v_1; due to the orthogonality, all cross terms vanish, and we're left with:
\[
\begin{aligned}
v &= c_1 v_1 + c_2 v_2 + c_3 v_3 \\
\implies v \cdot v_1 &= (c_1 v_1 + c_2 v_2 + c_3 v_3) \cdot v_1 \\
\implies v \cdot v_1 &= c_1 (v_1 \cdot v_1)
\implies c_1 = \frac{v \cdot v_1}{v_1 \cdot v_1}
\end{aligned}
\]
If the last step was not clear, just use a specific a_1, and assume there are only three vectors a_1, a_2, a_3. Then the LHS of the second line in Eq. (10.8.1) is a_1 · (α_1 a_1 + α_2 a_2 + α_3 a_3), which is α_1 a_1·a_1 + α_2 a_1·a_2 + α_3 a_1·a_3 = α_1‖a_1‖² + 0 + 0. Thus we get α_1 = 0. Similarly, we get α_2 = 0 if we start with a_2, and so on.
What does this formula tell us? To find c_1, just compute two dot products: one of v with the first basis vector, and the other the squared length of that basis vector. The ratio of these two numbers is c_1.
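The dot-product formula can be verified on Example 10.7; this Python sketch computes all three coordinates with exact `Fraction` arithmetic and reconstructs v from them.

```python
from fractions import Fraction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v1, v2, v3 = (2, 1, -1), (0, 1, 1), (1, -1, 1)   # orthogonal basis of R^3
v = (1, 2, 3)

# c_i = (v . v_i) / (v_i . v_i)
c = [Fraction(dot(v, vi), dot(vi, vi)) for vi in (v1, v2, v3)]
assert c == [Fraction(1, 6), Fraction(5, 2), Fraction(2, 3)]

# reconstruct v from its coordinates: v = c1 v1 + c2 v2 + c3 v3
recon = tuple(sum(ci * vi[k] for ci, vi in zip(c, (v1, v2, v3)))
              for k in range(3))
assert recon == v
```

No 3 × 3 system needs to be solved: each coordinate costs only two dot products, which is the point of an orthogonal basis.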
Nothing could be simpler. Wait: I wish we did not have to divide by the squared length of v_1. That is possible if each basis vector has unit length. And we can always make a non-unit vector a unit vector simply by dividing it by its length, a process known as normalizing the vector; see Eq. (10.1.7). Thus, we now move from orthogonal bases to orthonormal bases.
Orthonormal vectors a_1, a_2, ... satisfy
\[
a_i \cdot a_j = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases} = \delta_{ij}
\]
where we have introduced the Kronecker delta notation (named after Leopold Kronecker ) ıij .
A vector b in a subspace S with an orthonormal basis v_1, v_2, ..., v_k has coordinates w.r.t. that basis given by
\[
b = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k, \qquad \alpha_i = b \cdot v_i
\tag{10.8.2}
\]
Did we see this before? Remember Monsieur Fourier? What he did was to write a periodic function f(x) as a linear combination of sine/cosine functions:
\[
f(x) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right)
\]
Leopold Kronecker (7 December 1823 – 29 December 1891) was a German mathematician who worked on
number theory, algebra and logic. He criticized Georg Cantor’s work on set theory, and was quoted by Weber (1893)
as having said, "God made the integers, all else is the work of man".
\[
A^{\top}A = I \implies A^{\top} = A^{-1}
\]
And this leads to the following special kind of matrix, whose inverse is simply its transpose. The notation Q is reserved for such matrices.
Definition 10.8.4
An n n matrix Q whose columns form an orthonormal set is called an orthogonal matrix.
We now present an example of an orthogonal matrix. Assume that we want to rotate a point P to P' through an angle β, as shown in Fig. 10.18. The coordinates of P' are given by
\[
x' = Rx, \qquad R = \begin{bmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{bmatrix}
\]
To be historically precise Euler did this before Fourier, even though Euler doubted the idea of trigonometric
expansion of a periodic function.
It is easy to check that the columns of R are orthonormal vectors; therefore R^⊤R = I, which can also be verified directly. We know that any rotation preserves length (that is, ‖x'‖ = ‖x‖, or ‖Rx‖ = ‖x‖), which is known as an isometry in geometry. It turns out that every orthogonal matrix transformation is an isometry. Note also that det R = 1. This is not a coincidence: from the property A^⊤A = I, we can deduce the determinant of A:
\[
\det(A^{\top}A) = \det(A^{\top})\det(A) = (\det A)^2 = \det I = 1 \implies \det A = \pm 1
\]
Here I used det(AB) = det(A) det(B) and det(A^⊤) = det(A). With this special example of an orthogonal matrix (and its properties), we now have a theorem on orthogonal matrices.
Figure 10.18: Rotation in a plane is a matrix transformation that preserves length. The matrix of the
rotation is an orthogonal matrix.
Theorem 10.8.2
Let Q be an n × n matrix. The following statements are equivalent.
(a) Q is orthogonal.
(b) Qx · Qy = x · y for all x, y in Rⁿ.
(c) ‖Qx‖ = ‖x‖ for all x in Rⁿ.
Going from (a) to (b):
\[
Qx \cdot Qy = (Qx)^{\top}(Qy) = (x^{\top}Q^{\top})Qy = x^{\top}(Q^{\top}Q)y = x^{\top}Iy = x^{\top}y = x \cdot y
\]
Going from (b) to (c) is easy: use (b) with y = x. We also need to go backwards, from (c) to (b) to (a), which is left as an exercise. Check Poole's book if stuck.
(a) A vector v in Rⁿ is orthogonal to a subspace W of Rⁿ if v · w = 0 for every w in Wʰ.
(b) The set of all vectors that are orthogonal to W is called the orthogonal complement of W, denoted by W^⊥. That is,
\[
W^{\perp} = \{ v \in \mathbb{R}^n : v \cdot w = 0 \text{ for all } w \in W \}
\]
(c) Two subspaces S and W are said to be orthogonal, i.e. S ⊥ W, if and only if x ⊥ y, i.e. x^⊤y = 0, for all x ∈ S and all y ∈ W.
ʰ For a vector to be orthogonal to a subspace, it just needs to be orthogonal to a spanning set of that subspace.
This definition actually consists of three definitions. The first one extends the idea that we discussed at the beginning of this section. Why do we need W^⊥? Because it is a subspace. We know how to prove that something is a subspace: assume that v_1, v_2 ∈ W^⊥; we need to show that c_1v_1 + c_2v_2 is also in W^⊥:
\[
v_1 \cdot w = 0, \;\; v_2 \cdot w = 0 \implies (c_1 v_1 + c_2 v_2) \cdot w = 0
\]
And the third definition is about the orthogonality of two subspaces. We have gone a long way: from the orthogonality of two vectors in R² to that of two vectors in Rⁿ, then to the orthogonality of a vector and a subspace, and finally to the orthogonality of two subspaces.
The proof is straightforward. The null space of A is all vectors x such that Ax = 0, and from matrix-vector multiplication this is equivalent to saying that x is orthogonal to the rows of A. Now replace A by its transpose, and we have the second result in the theorem above.
To conclude, an m × n matrix A has four subspaces, namely R(A), N(A), C(A), N(A^⊤). They go in pairs: the first two are orthogonal complements in Rⁿ, and the last two are orthogonal complements in Rᵐ.
\[
\mathrm{proj}_v(u) := \frac{u \cdot v}{v \cdot v}\, v \tag{10.8.3}
\]
While projecting $u$ onto the vector $v$, we also get $\mathrm{perp}_v(u) := u - \mathrm{proj}_v(u)$, which is orthogonal to $v$; see Fig. 10.19 (left). This indicates that we can decompose a vector $u$ into two vectors. Is this still an orthogonal projection? We just need to check whether $\mathrm{proj}_{i,j}(u) \cdot i = 0$ and $\mathrm{proj}_{i,j}(u) \cdot j = 0$. The answer is yes, due to the fact that $i \perp j$.
Figure 10.19: Orthogonal projection of a vector onto another vector (or a line) and onto a plane.
Proof: $v \cdot \mathrm{perp}_v(u) = v \cdot \left( u - \dfrac{u \cdot v}{v \cdot v}\, v \right) = v \cdot u - u \cdot v = 0$.
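As a quick numerical check of this decomposition, here is a small sketch in Python (the vectors `u` and `v` are arbitrary examples chosen for illustration):

```python
import numpy as np

def proj(u, v):
    """Orthogonal projection of u onto v: (u.v)/(v.v) * v, as in Eq. (10.8.3)."""
    return (u @ v) / (v @ v) * v

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])
p = proj(u, v)     # component of u along v
q = u - p          # perp_v(u), the component of u orthogonal to v
print(p, q, q @ v) # q @ v is 0: q is indeed orthogonal to v
```

Together, `p + q` recovers `u`, which is exactly the decomposition in the text.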
Definition 10.8.6
Let $W$ be a subspace of $\mathbb{R}^n$ and let $\{v_1, v_2, \ldots, v_k\}$ be an orthogonal basis for $W$. For any vector $v \in \mathbb{R}^n$, the orthogonal projection of $v$ onto $W$ is defined as
\[
\mathrm{proj}_W(v) = \frac{v_1 \cdot v}{v_1 \cdot v_1} v_1 + \frac{v_2 \cdot v}{v_2 \cdot v_2} v_2 + \cdots + \frac{v_k \cdot v}{v_k \cdot v_k} v_k
\]
The component of $v$ orthogonal to $W$ is the vector $\mathrm{perp}_W(v) := v - \mathrm{proj}_W(v)$.
The Gram-Schmidt process builds an orthonormal set out of $\{v_1, v_2, \ldots, v_k\}$: the vectors
\[
\begin{aligned}
u_1 &= v_1, & e_1 &= \frac{u_1}{\lVert u_1 \rVert} \\
u_2 &= v_2 - \mathrm{proj}_{u_1}(v_2), & e_2 &= \frac{u_2}{\lVert u_2 \rVert} \\
u_3 &= v_3 - \mathrm{proj}_{u_1}(v_3) - \mathrm{proj}_{u_2}(v_3), & e_3 &= \frac{u_3}{\lVert u_3 \rVert} \\
&\;\;\vdots \\
u_k &= v_k - \sum_{i=1}^{k-1} \mathrm{proj}_{u_i}(v_k), & e_k &= \frac{u_k}{\lVert u_k \rVert}
\end{aligned}
\]
are linearly independent (because they are orthogonal). And finally, $\{u_1, u_2, \ldots, u_k\}$ forms an orthogonal basis for the subspace $W_k = \mathrm{span}(v_1, v_2, \ldots, v_k)$.
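The procedure above translates directly into code. A minimal sketch in Python using NumPy (the two input vectors are arbitrary examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors: u_k = v_k minus its
    projections onto the previously computed directions."""
    es = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in es:
            u -= (u @ e) * e   # subtract proj_e(v); e is already unit length
        es.append(u / np.linalg.norm(u))
    return es

e1, e2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(e1 @ e2)                 # ~0: the output vectors are orthogonal
print(np.linalg.norm(e1))      # 1.0: and of unit length
```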
10.8.7 QR factorization
The Gauss elimination process for $Ax = b$ results in the LU factorization $A = LU$. Now, the Gram-Schmidt orthogonalization process applied to the linearly independent columns of a matrix $A$ results in another factorization, known as the QR factorization: $A = QR$. To demonstrate this factorization, consider a matrix with three independent columns, $A = [a_1 | a_2 | a_3]$. Applying the Gram-Schmidt orthonormalization to these three vectors we obtain $e_1, e_2, e_3$. We can then write
\[
\begin{aligned}
a_1 &= (e_1, a_1)\, e_1 \\
a_2 &= (e_1, a_2)\, e_1 + (e_2, a_2)\, e_2 \\
a_3 &= (e_1, a_3)\, e_1 + (e_2, a_3)\, e_2 + (e_3, a_3)\, e_3
\end{aligned}
\qquad\Longleftrightarrow\qquad
A = \underbrace{[e_1 | e_2 | e_3]}_{Q} \underbrace{\begin{bmatrix} (e_1, a_1) & (e_1, a_2) & (e_1, a_3) \\ 0 & (e_2, a_2) & (e_2, a_3) \\ 0 & 0 & (e_3, a_3) \end{bmatrix}}_{R}
\]
The matrix Q consists of orthonormal columns and thus is an orthogonal matrix (that explains
why the notation Q was used). The matrix R is an upper triangular matrix.
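A small sketch of this construction in Python: Gram-Schmidt on the columns of a $3\times 3$ matrix, with $R$ collecting the dot products $(e_i, a_j)$. The matrix `A` here is an arbitrary example with independent columns:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Gram-Schmidt on the columns of A gives Q; R stores the coefficients (e_i, a_j)
Q = np.zeros_like(A)
R = np.zeros((3, 3))
for j in range(3):
    u = A[:, j].copy()
    for i in range(j):
        R[i, j] = Q[:, i] @ A[:, j]   # (e_i, a_j)
        u -= R[i, j] * Q[:, i]
    R[j, j] = np.linalg.norm(u)
    Q[:, j] = u / R[j, j]

print(np.allclose(A, Q @ R))            # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q has orthonormal columns
```

In practice one would call `np.linalg.qr(A)`, which uses a numerically more stable algorithm; the loop above mirrors the textbook derivation.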
10.9 Determinant
To derive the formula for the determinant of an $n \times n$ matrix when $n > 3$, we cannot rely on geometry. To proceed, it is better to deduce the properties of the determinant from the special cases of $2 \times 2$ and $3 \times 3$ matrices. From those properties, we can define what a determinant should be. It is not hard to observe the following properties of the determinant of a $2 \times 2$ matrix (they also apply to $3 \times 3$ matrices):
- The determinant of the $2 \times 2$ unit matrix is one; this is obvious because this matrix does not change the unit square at all;
- If the two columns of a $2 \times 2$ matrix are the same, its determinant is zero; this is obvious either from the formula or from the fact that the two transformed basis vectors collapse onto each other, so a domain transforms to a line with zero area;
- If one column is a multiple of the other column, the determinant is also zero; the explanation is similar to the previous property, which this one generalizes;
Additive property:
the matrix A. We did this because from the previous discussion we know that the determinant
depends heavily on the columns of the matrix.
Now, we propose the following properties for $D$, inspired by the properties of the determinants of $3 \times 3$ matrices.
Property 1. $D(I) = 1$.
Property 2. $D(a_1, a_2, \ldots, a_n) = 0$ if $a_i = a_j$ for some $i \neq j$.
Property 3. If $n - 1$ columns of $A$ are held fixed, then $D(A)$ is a linear function of the remaining column. Stated in terms of the $j$th column, this property says that:
This comes from the additive area property and the fact that if we scale one column by $\alpha$, the determinant is scaled by the same factor.
Property 4. $D$ is an alternating function of the columns, i.e., if two columns are interchanged, the value of $D$ changes by a factor of $-1$. Let's focus on the $i$th and $j$th columns, so we write $D(a_i, a_j)$, leaving the other columns untouched and behind the scene. What we need to show is that $D(a_j, a_i) = -D(a_i, a_j)$.
Proof. The proof is based on Property 2 and Property 3. The trick of using Property 2 is to add
zero or subtract zero to a quantity.
\[
\begin{aligned}
D(a_j, a_i) &= D(a_j, a_i) + \underbrace{D(a_i, a_i)}_{0} && (\text{added 0 due to Property 2}) \\
&= D(a_i + a_j, a_i) && (\text{due to Property 3}) \\
&= D(a_i + a_j, a_i) - \underbrace{D(a_i + a_j, a_i + a_j)}_{0} && (\text{subtracted 0 due to Property 2}) \\
&= -D(a_i + a_j, a_j) && (\text{due to Property 3}) \\
&= -D(a_i, a_j) - \underbrace{D(a_j, a_j)}_{0} && (\text{due to Property 3}) \\
&= -D(a_i, a_j) && (\text{due to Property 2})
\end{aligned}
\]
Property 5. If the columns of $A$ are linearly dependent then $D = 0$. One interesting special case: if $A$ has at least one row of all zeros, its determinant is zero.
Proof. Without loss of generality, we can express $a_1$ as $a_1 = \alpha_2 a_2 + \alpha_3 a_3 + \cdots + \alpha_n a_n$. Now, $D(A)$ is computed as
\[
\begin{aligned}
D &= D(a_1, a_2, \ldots, a_n) \\
&= D(\alpha_2 a_2 + \alpha_3 a_3 + \cdots + \alpha_n a_n, a_2, \ldots, a_n) \\
&= D(\alpha_2 a_2, a_2, \ldots, a_n) + D(\alpha_3 a_3, a_2, \ldots, a_n) + \cdots + D(\alpha_n a_n, a_2, \ldots, a_n) \\
&= \alpha_2 D(a_2, a_2, \ldots, a_n) + \alpha_3 D(a_3, a_2, \ldots, a_n) + \cdots + \alpha_n D(a_n, a_2, \ldots, a_n) \\
&= 0 + 0 + \cdots + 0 \quad (\text{Property 2, since each determinant has a repeated column})
\end{aligned}
\]
where Property 3 (additivity) was used in the third equality, and Property 3 again (pulling out the scalars $\alpha_i$) in the fourth equality.
Property 6. Adding a multiple of one column to another one does not change the determinant.
Proof. Suppose we obtain matrix $B$ from $A$ by adding $\alpha$ times column $j$ to column $i$. Then, by Property 3 and Property 2, $D(B) = D(\ldots, a_i + \alpha a_j, \ldots) = D(A) + \alpha\, D(\ldots, a_j, \ldots) = D(A) + 0$, since the second determinant has the column $a_j$ appearing twice.
A geometric explanation of these results is that, for the 2D matrix, shearing a rectangle does not change its area, and for the 3D matrix, shearing a cube does not change its volume. Still, we need an algebraic proof so that it can be extended to larger matrices. For the $3 \times 3$ matrix, the second column can be decomposed as
\[
\begin{bmatrix} d \\ b \\ 0 \end{bmatrix} = \begin{bmatrix} d \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ b \\ 0 \end{bmatrix}
\]
Then, using Property 3, its determinant is given by
\[
\begin{vmatrix} a & d & e \\ 0 & b & f \\ 0 & 0 & c \end{vmatrix} = \begin{vmatrix} a & d & e \\ 0 & 0 & f \\ 0 & 0 & c \end{vmatrix} + \begin{vmatrix} a & 0 & e \\ 0 & b & f \\ 0 & 0 & c \end{vmatrix} = \begin{vmatrix} a & 0 & e \\ 0 & b & f \\ 0 & 0 & c \end{vmatrix}
\]
The first determinant on the right-hand side is zero because of Property 5: its first and second columns are linearly dependent. Now, we do the same thing for the remaining determinant by decomposing column 3:
\[
\begin{vmatrix} a & 0 & e \\ 0 & b & f \\ 0 & 0 & c \end{vmatrix} = \begin{vmatrix} a & 0 & e \\ 0 & b & 0 \\ 0 & 0 & 0 \end{vmatrix} + \begin{vmatrix} a & 0 & 0 \\ 0 & b & f \\ 0 & 0 & 0 \end{vmatrix} + \begin{vmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{vmatrix} = \begin{vmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{vmatrix} = abc
\]
Property 7. The determinant of a triangular matrix is the product of its diagonal entries. One consequence: if $A$ is a triangular matrix, its transpose is also a triangular matrix with the same entries on the diagonal, thus $D(A) = D(A^\top)$. In fact, this identity holds for any square matrix, not just for triangular ones, as the next property shows.
Property 8. $D(A^\top) = D(A)$. The proof goes as follows. If $A$ is invertible, it can be written as a product of elementary matrices:
\[
A = E_1 E_2 \cdots E_k \implies A^\top = E_k^\top \cdots E_2^\top E_1^\top
\]
Thus, with $D(EF) = D(E) D(F)$, we can write
\[
D(A^\top) = D(E_k^\top) \cdots D(E_1^\top) = D(E_k) \cdots D(E_1) = D(E_1) \cdots D(E_k) = D(A)
\]
where the fact that $D(E^\top) = D(E)$ for an elementary matrix $E$ was used. The importance of Property 8 is that it allows us to conclude that all the properties of the determinant that we have stated concerning the columns also work for rows; e.g. if two rows of a matrix are the same, its determinant is zero. This is so because the columns of $A^\top$ are the rows of $A$.
Property 9. If $A$ is invertible then $\det(A^{-1}) = 1/\det(A)$; indeed, $1 = \det(I) = \det(A A^{-1}) = \det(A)\det(A^{-1})$. So, without knowing what $A^{-1}$ is, we can still compute its determinant.
\[
\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{vmatrix} - \begin{vmatrix} a_{21} & a_{22} & a_{23} \\ 0 & a_{12} & a_{13} \\ 0 & a_{32} & a_{33} \end{vmatrix} + \begin{vmatrix} a_{31} & a_{32} & a_{33} \\ 0 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \end{vmatrix} \tag{10.9.1}
\]
The nice thing we get is that all three determinants in the RHS are of this form:
\[
\begin{vmatrix} a_{11} & d_{12} & d_{13} \\ 0 & b_{22} & b_{23} \\ 0 & b_{32} & b_{33} \end{vmatrix}
\]
which can be re-written as (using Property 6 to get a triangular matrix)
\[
\begin{vmatrix} a_{11} & d_{12} & d_{13} \\ 0 & b_{22} & b_{23} \\ 0 & b_{32} & b_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & d_{12} & d_{13} \\ 0 & b_{22} & b_{23} \\ 0 & 0 & b_{33} - b_{23} b_{32} / b_{22} \end{vmatrix} = a_{11} b_{22} \left( b_{33} - \frac{b_{23} b_{32}}{b_{22}} \right) = a_{11} (b_{22} b_{33} - b_{23} b_{32})
\]
Finally, note that each such matrix $B$ is obtained by deleting a certain row and column of $A$. So, we define $A_{ij}$ as the matrix obtained by deleting the $i$th row and $j$th column of $A$. With this definition, the determinant of $A$ can be expressed as:
\[
\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} |A_{11}| - a_{21} |A_{21}| + a_{31} |A_{31}| \tag{10.9.3}
\]
There is a pattern in this formula. Moreover, this formula also works for $2 \times 2$ matrices (you can check it). So, for an $n \times n$ matrix $A$, its determinant is given by:
\[
|A| = a_{11} |A_{11}| - a_{21} |A_{21}| + a_{31} |A_{31}| - \cdots \pm a_{n1} |A_{n1}| = \sum_{i=1}^n (-1)^{i-1} a_{i1} |A_{i1}| \tag{10.9.4}
\]
More generally, expanding along the $j$th column,
\[
|A| = \sum_{i=1}^n (-1)^{i+j} a_{ij} |A_{ij}| \tag{10.9.5}
\]
Why does this definition work? Because it allows us to define the determinant of a matrix inductively: we define the determinant of an $n \times n$ matrix in terms of the determinants of $(n-1) \times (n-1)$ matrices. We begin by defining the determinant of a $1 \times 1$ matrix $A = [a]$ by $\det(A) = a$. Then we proceed to $2 \times 2$ matrices, then $3 \times 3$, and so on. This is similar to how the factorial is defined: $n! = n \cdot (n-1)!$. Note that such a recursive definition is not the best way to compute either the factorial or the determinant.
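The inductive definition can be written down directly as a recursive function. A sketch in Python (cofactor expansion along the first column, as in Eq. (10.9.4); the cost grows like $n!$, so this is only for illustration, not for serious computation):

```python
def det(A):
    """Determinant via cofactor expansion along the first column."""
    n = len(A)
    if n == 1:                       # base case: det([a]) = a
        return A[0][0]
    total = 0
    for i in range(n):
        # minor: delete row i and column 0
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))   # -2
```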
\[
x_1 |A| = |B_1|
\]
which gives us $x_1$:
\[
x_1 = \frac{|B_1|}{|A|}
\]
which is strikingly similar to $x = b/a$ for the linear equation $ax = b$; but now we have to live with determinants. Similarly, we have $x_2 = |B_2|/|A|$. The geometric meaning of Cramer's rule is given in Fig. 10.21 for the case of $2 \times 2$ matrices. The area of the parallelogram formed by $e_1$ and $x$ is $y$ (or $x_2$). After the transformation by $A$, $e_1$ becomes $a_1 = (a_{11}, a_{21})$ and $x$ becomes $b$. The transformed parallelogram's area is thus $\det([a_1 | b])$. But we know that this new area is the original area scaled by the determinant of $A$. Cramer's rule follows.
It is now possible to state Cramer's rule for a system of $n$ equations in $n$ unknowns, provided $|A| \neq 0$:
\[
x_1 = \frac{|B_1|}{|A|}, \quad x_2 = \frac{|B_2|}{|A|}, \quad \ldots \qquad B_j \text{ is the matrix } A \text{ with its } j\text{th column replaced by } b \tag{10.9.6}
\]
It is named after the Genevan mathematician Gabriel Cramer (1704–1752), who published the
rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published
special cases of the rule in 1748 (and possibly knew of it as early as 1729).
Cramer's rule is of more theoretical than practical value, as it is not efficient for solving $Ax = b$; use Gaussian elimination instead. However, it leads to a formula for the inverse of a matrix in terms of determinants. We discuss this now.
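Before moving on, Eq. (10.9.6) transcribes directly into code. A sketch in Python (the $2 \times 2$ system here is an arbitrary example; the result matches `np.linalg.solve`):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's rule: x_j = det(B_j)/det(A)."""
    dA = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b              # B_j: column j of A replaced by b
        x[j] = np.linalg.det(Bj) / dA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))               # same answer as np.linalg.solve(A, b)
```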
Cramer's rule and the inverse of a matrix. Suppose that we want to find the inverse of a $2 \times 2$ matrix $A$. Let's denote
\[
A^{-1} = \begin{bmatrix} x_1 & y_1 \\ x_2 & y_2 \end{bmatrix}
\]
We then solve for $x_1, x_2, y_1, y_2$ such that $A A^{-1} = I$, i.e., two systems of linear equations:
\[
\left( A^{-1} \right)_{ij} = \frac{(-1)^{i+j} \det A_{ji}}{\det A}, \qquad A^{-1} = \frac{1}{\det A} \operatorname{adj} A, \qquad \operatorname{adj} A = \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix} \tag{10.9.8}
\]
where $C_{ij} = (-1)^{i+j} |A_{ij}|$ is the $(i,j)$ cofactor of $A$. Two formulas are presented: the first one is for the $ij$-entry of $A^{-1}$, and the second one is for the entire matrix $A^{-1}$, with the introduction of the so-called adjoint (or adjugate) matrix of $A$. This matrix is the transpose of the matrix of cofactors of $A$.
stretching/shrinking factor. The computation of eigenvectors and eigenvalues for small matrices of sizes $2 \times 2$ and $3 \times 3$ can be done manually in a quite straightforward manner.
However, it is hard to understand why people came up with the idea of eigenvectors. To
present a motivation for eigenvectors, we followed Euler in his study of rotation of rigid bodies.
In this context, eigenvectors appear naturally. So, we discuss briefly angular momentum and
inertia tensor in Section 10.10.1. Then, in Section 10.10.2 we discuss principal axes and principal
moments for a 3D rotating rigid body. From this starting point, we leave mechanics behind, and
move on to the maths of eigenvectors. I have read An Introduction To Mechanics by Daniel
Kleppner, Robert Kolenkow [26] and Classical Mechanics by John Taylor [56] for the materials
in this section.
where $p_\alpha = m_\alpha \omega \times r_\alpha$, and $r_\alpha$ denotes the position vector of mass $m_\alpha$. With the vector identity $a \times (b \times c) = b (a \cdot c) - c (a \cdot b)$, we can elaborate the angular momentum $l$ further as
\[
l = \sum_\alpha m_\alpha r_\alpha \times (\omega \times r_\alpha) = \sum_\alpha \left[ m_\alpha r_\alpha^2 \omega - m_\alpha r_\alpha (r_\alpha \cdot \omega) \right] \tag{10.10.2}
\]
With a coordinate system, the angular velocity and position vector are written as
\[
\omega = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}, \qquad r_\alpha = \begin{bmatrix} x_\alpha \\ y_\alpha \\ z_\alpha \end{bmatrix}
\]
Thus, we can work out explicitly the components of the angular momentum in Eq. (10.10.2) as
\[
\begin{bmatrix} l_x \\ l_y \\ l_z \end{bmatrix} = \sum_\alpha m_\alpha \begin{bmatrix} (y_\alpha^2 + z_\alpha^2) \omega_x - x_\alpha y_\alpha \omega_y - x_\alpha z_\alpha \omega_z \\ - y_\alpha x_\alpha \omega_x + (x_\alpha^2 + z_\alpha^2) \omega_y - y_\alpha z_\alpha \omega_z \\ - z_\alpha x_\alpha \omega_x - z_\alpha y_\alpha \omega_y + (x_\alpha^2 + y_\alpha^2) \omega_z \end{bmatrix}
\]
Eigenvectors appear in many fields, and thus I do not know exactly in what context eigenvalues first appeared. The reason I use the rotation of rigid bodies as a natural context for eigenvalues is that the maths is not hard.
The length of this position vector is denoted by $r_\alpha$.
In matrix form, this reads
\[
\begin{bmatrix} l_x \\ l_y \\ l_z \end{bmatrix} = \underbrace{\sum_\alpha m_\alpha \begin{bmatrix} y_\alpha^2 + z_\alpha^2 & -x_\alpha y_\alpha & -x_\alpha z_\alpha \\ -y_\alpha x_\alpha & x_\alpha^2 + z_\alpha^2 & -y_\alpha z_\alpha \\ -z_\alpha x_\alpha & -z_\alpha y_\alpha & x_\alpha^2 + y_\alpha^2 \end{bmatrix}}_{I} \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}, \qquad \text{i.e., } l = I\omega
\]
which in conjunction with the vector identity $\lVert a \times b \rVert^2 = \lVert a \rVert^2 \lVert b \rVert^2 - (a \cdot b)^2$ becomes
\[
K = \sum_\alpha \frac{m_\alpha \left( r_\alpha^2 \omega^2 - (r_\alpha \cdot \omega)^2 \right)}{2} \tag{10.10.5}
\]
Using the components of $r_\alpha$ and $\omega$, $K$ is written as
\[
K = \frac{1}{2} \sum_\alpha \left[ m_\alpha (y_\alpha^2 + z_\alpha^2) \omega_x^2 + m_\alpha (x_\alpha^2 + z_\alpha^2) \omega_y^2 + m_\alpha (x_\alpha^2 + y_\alpha^2) \omega_z^2 - 2 m_\alpha x_\alpha y_\alpha \omega_x \omega_y - 2 m_\alpha y_\alpha z_\alpha \omega_y \omega_z - 2 m_\alpha x_\alpha z_\alpha \omega_x \omega_z \right] \tag{10.10.6}
\]
which is a quadratic form; check Section 7.7.3 for a refresher. So, we can re-write it as the familiar vector-matrix-vector product, and of course the matrix is $I$:
\[
K = \frac{1}{2} \omega^\top I \omega \tag{10.10.7}
\]
Moment of inertia for continuous bodies. For a continuous body $B$ with density $\rho$, its matrix of moment of inertia is given by (the sum is replaced by an integral, and the mass $m_\alpha$ by $\rho\, dV$):
\[
I_{xx} = \int_B \rho (y^2 + z^2)\, dV, \quad I_{yy} = \int_B \rho (x^2 + z^2)\, dV, \quad I_{zz} = \int_B \rho (x^2 + y^2)\, dV
\]
\[
I_{xy} = -\int_B \rho\, x y\, dV, \quad I_{xz} = -\int_B \rho\, x z\, dV, \quad I_{yz} = -\int_B \rho\, y z\, dV \tag{10.10.8}
\]
To be precise, $I$ is a second-order tensor, and its representation in a coordinate system is a matrix. However, for the discussion herein, the fact that $I$ is a tensor is not important.
Check the discussion around Eq. (10.1.19) if this identity is not clear.
Example 10.8
As the first example, compute the matrix of inertia for a cube of side $a$ and mass $M$ (the mass is uniformly distributed, i.e., the density $\rho$ is constant) for two cases: (a) for a rotation w.r.t. one corner and (b) w.r.t. the center of the cube. The coordinate system axes are parallel to the sides.
For case (a), we have:
\[
I_{xx} = I_{yy} = I_{zz} = \int \rho\, y^2\, dV + \int \rho\, z^2\, dV = 2 \rho \int_0^a dx \int_0^a y^2\, dy \int_0^a dz = \frac{2 M a^2}{3}
\]
\[
I_{xy} = I_{xz} = I_{yz} = -\rho \int_0^a x\, dx \int_0^a y\, dy \int_0^a dz = -\frac{M a^2}{4}
\]
where $M = \rho a^3$. Thus, the inertia matrix is given by (this matrix has determinant $242\,(M a^2 / 12)^3$)
\[
I = \frac{M a^2}{12} \begin{bmatrix} 8 & -3 & -3 \\ -3 & 8 & -3 \\ -3 & -3 & 8 \end{bmatrix} \tag{10.10.9}
\]
Now, we will compute the angular momentum when the cube is rotated about the $x$-axis (due to symmetry it does not matter which axis is chosen) with angular velocity $\omega = (\omega, 0, 0)$. The angular momentum in this case is
\[
l = \frac{M a^2}{12} \begin{bmatrix} 8 & -3 & -3 \\ -3 & 8 & -3 \\ -3 & -3 & 8 \end{bmatrix} \begin{bmatrix} \omega \\ 0 \\ 0 \end{bmatrix} = \frac{M a^2}{12} \begin{bmatrix} 8 \omega \\ -3 \omega \\ -3 \omega \end{bmatrix}
\]
What do we learn from this? Two things: first, the inertia matrix is full, and second, the angular momentum is not parallel to the angular velocity. That is, $I\omega$ points in a different direction than $\omega$. Let's see what we get if the angular velocity is along the diagonal of the cube, i.e., $\omega = (\omega/\sqrt{3})(1, 1, 1)$:
\[
l = \frac{M a^2}{12} \frac{\omega}{\sqrt{3}} \begin{bmatrix} 8 & -3 & -3 \\ -3 & 8 & -3 \\ -3 & -3 & 8 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \frac{M a^2}{12} \frac{\omega}{\sqrt{3}} \begin{bmatrix} 2 \\ 2 \\ 2 \end{bmatrix} = \frac{M a^2}{6} \omega
\]
In this case, the angular momentum is parallel to the angular velocity. In other words, $I \omega = \lambda \omega$ with $\lambda = M a^2 / 6$.
For case (b), we have (the same calculations, with integration limits from $-a/2$ to $a/2$ instead):
\[
I_{xx} = I_{yy} = I_{zz} = \int \rho\, y^2\, dV + \int \rho\, z^2\, dV = 2 \rho \int_{-a/2}^{a/2} dx \int_{-a/2}^{a/2} y^2\, dy \int_{-a/2}^{a/2} dz = \frac{M a^2}{6}
\]
\[
I_{xy} = I_{xz} = I_{yz} = -\rho \int_{-a/2}^{a/2} x\, dx \int_{-a/2}^{a/2} y\, dy \int_{-a/2}^{a/2} dz = 0
\]
Figure 10.22
Actually $I_{xy}$ is zero because the integrand $xy$ is an odd function. Another explanation: looking at Fig. 10.22, we see that the material above the plane $y = 0$ cancels the contribution of the material below this plane (so $I_{xy} = I_{yz} = 0$). Thus, the inertia matrix is given by
\[
I = \frac{M a^2}{6} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
If we now compute the angular momentum for any angular velocity $\omega$, we get $l = (M a^2 / 6)\omega$ because $I$ is a multiple of the identity matrix. So, we see two things: (1) the inertia matrix is diagonal (the off-diagonal entries are all zeros), and (2) the angular momentum is parallel to the angular velocity: $I \omega = \lambda \omega$ with $\lambda = M a^2 / 6$. And this holds for any $\omega$ because of the symmetry of the cube w.r.t. its center.
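The two computations of this example can be checked numerically. A sketch in Python, using the corner inertia matrix of Eq. (10.10.9) in units of $Ma^2$:

```python
import numpy as np

# inertia matrix of the cube about a corner, Eq. (10.10.9), in units of M*a^2
I = (1 / 12) * np.array([[ 8.0, -3.0, -3.0],
                         [-3.0,  8.0, -3.0],
                         [-3.0, -3.0,  8.0]])

w_axis = np.array([1.0, 0.0, 0.0])                 # rotation about the x-axis
w_diag = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # rotation about the diagonal

print(I @ w_axis)                           # (8,-3,-3)/12: not parallel to w_axis
print(np.allclose(I @ w_diag, w_diag / 6))  # True: l = (M a^2/6) w along the diagonal
```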
Example 10.9
The second example is finding the inertia matrix for a spinning top that is a uniform solid cone
(mass M , height h and base radius R) spinning about its tip O; cf. Fig. 10.22. The z-axis is
chosen along the axis of symmetry of the cone.
All the integrals in the inertia matrix are computed using cylindrical coordinates. Due to symmetry, all the off-diagonal terms are zero, and $I_{xx} = I_{yy}$. So, we just need to compute the three diagonal terms. Let's start with $I_{zz}$ rather than $I_{xx}$ (we will see why this saves us some calculations):
\[
I_{zz} = \int \rho (x^2 + y^2)\, dV = \rho \int r^3\, dr\, d\theta\, dz = \rho \int_0^h \left[ \int_0^{z R / h} r^3\, dr \int_0^{2\pi} d\theta \right] dz = \frac{3M}{10} R^2
\]
\[
I_{xx} = \int \rho (y^2 + z^2)\, dV = \int \rho\, y^2\, dV + \int \rho\, z^2\, dV = \frac{3M}{20} R^2 + \rho \int_0^h \left[ \int_0^{z R / h} r\, dr \int_0^{2\pi} d\theta \right] z^2\, dz = \frac{3M}{20} (R^2 + 4 h^2)
\]
where $\int \rho\, y^2\, dV = \frac{1}{2} I_{zz}$ by symmetry.
We get a diagonal matrix. For an angular velocity $(\omega_x, \omega_y, \omega_z)$, the corresponding angular momentum is $(\lambda_1 \omega_x, \lambda_1 \omega_y, \lambda_2 \omega_z)$ with $\lambda_1 = I_{xx} = I_{yy}$ and $\lambda_2 = I_{zz}$. To get something interesting, consider the angular velocity $\omega = (\omega, 0, 0)$ (that is, rotation about the $x$-axis); then the angular momentum is $(\lambda_1 \omega, 0, 0)$, or $\lambda_1 \omega$.
\[
I = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}
\]
where $\lambda_i$ are called the principal moments.
Ok, now we have two problems. The first problem is how to prove that any solid, even a non-symmetric one, has principal axes; the second problem is how to find the principal axes. Herein, we focus on the second problem, being pragmatic. But wait, why is it important that the angular momentum be parallel to the rotation axis? If it were not, people would not have spent time studying this case.
To find the principal axes, we use the fact that for a principal axis through a certain origin $O$, if the angular velocity points along this axis, then the angular momentum is parallel to $\omega$, that is:
\[
I \omega = \lambda \omega \tag{10.10.10}
\]
And this is an eigenvalue equation. A vector $\omega$ satisfying Eq. (10.10.10) is called an eigenvector, and the number $\lambda$ the corresponding eigenvalue. To solve the eigenvalue equation, we re-write it as $(I - \lambda \mathbb{1}) \omega = 0$, where $\mathbb{1}$ is the identity matrix. This equation has a non-zero solution (i.e., $\omega \neq 0$) only when the determinant of the coefficient matrix is zero (if the determinant is not zero, then the only solution is $\omega = 0$, similar to the equation $2x = 0$). That is,
\[
\begin{vmatrix} 8 - \bar{\lambda} & -3 & -3 \\ -3 & 8 - \bar{\lambda} & -3 \\ -3 & -3 & 8 - \bar{\lambda} \end{vmatrix} = 0 \implies (2 - \bar{\lambda})(11 - \bar{\lambda})^2 = 0 \implies \begin{cases} \bar{\lambda}_1 = 2 \\ \bar{\lambda}_2 = 11 \\ \bar{\lambda}_3 = 11 \end{cases}
\]
with the common factor $\mu = M a^2 / 12$, so that the eigenvalues of $I$ are $\lambda_i = \mu \bar{\lambda}_i$. First observation: $\lambda_1 + \lambda_2 + \lambda_3 = 24 \mu$, which is equal to $I_{11} + I_{22} + I_{33}$. Second observation: $\lambda_1 \lambda_2 \lambda_3 = 242 \mu^3$, which is $\det I$. So, at least for this example, the sum of the eigenvalues is equal to the trace of the matrix, and the product of the eigenvalues is equal to the determinant of the matrix.
For the first eigenvalue $\bar{\lambda} = 2$, we have this system of equations:
\[
\begin{bmatrix} 6 & -3 & -3 \\ -3 & 6 & -3 \\ -3 & -3 & 6 \end{bmatrix} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\]
of which the solution is $\omega_1 = \omega_2 = \omega_3$. So, the first principal axis is $e_1 = (1/\sqrt{3})(1, 1, 1)$.
For the second and third eigenvalues $\bar{\lambda} = 11$, we have this system of equations:
\[
\begin{bmatrix} -3 & -3 & -3 \\ -3 & -3 & -3 \\ -3 & -3 & -3 \end{bmatrix} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\]
of which the solution is !1 C !2 C !3 D 0. We are looking for the other two axes, so we think
of vectors perpendicular to the first principal axis i.e., e 1 . So, we write !1 C !2 C !3 D 0 as
! e 1 D 0. This indicates that the other two axes are perpendicular to the first axis. Later on
we shall prove that the eigenvectors corresponding to distinct eigenvalues are orthogonal if the
matrix is symmetric.
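These hand computations can be confirmed with NumPy's symmetric eigensolver. A sketch, with `B` the matrix of Eq. (10.10.9) in units of $Ma^2/12$:

```python
import numpy as np

B = np.array([[ 8.0, -3.0, -3.0],
              [-3.0,  8.0, -3.0],
              [-3.0, -3.0,  8.0]])

vals, vecs = np.linalg.eigh(B)            # eigh is designed for symmetric matrices
print(vals)                               # eigenvalues 2, 11, 11
print(vecs[:, 0])                         # +/- (1,1,1)/sqrt(3): first principal axis
print(np.isclose(vals.sum(), np.trace(B)))  # True: trace = sum of eigenvalues
```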
The German adjective eigen means “own” or “characteristic of”. Eigenvalues and eigenvectors are character-
istic of a matrix in the sense that they contain important information about the nature of the matrix.
Principal stresses and principal planes. It is a fact that the same thing happens again and
again in many different fields. Herein, we demonstrate this by presenting principal stresses and
principal planes from a field called solid mechanics or mechanics of materials. This field is
studied by civil engineers, mechanical engineers, aerospace engineers and those people who
want to design structures and machines.
Similar to $I$, $\omega$ and $l$, in solid mechanics there are the (second-order) stress tensor $\sigma$, the normal vector $n$ and the traction vector $t$. And we also have a relation between them, due to Cauchy:
\[
t = \sigma n \tag{10.10.12}
\]
Again, $t$ is in general not in the same direction as $n$. So, principal planes are those with normal vectors $n$ such that $\sigma n = \lambda n$, with $\lambda$ being called a principal stress (there are three principal stresses).
Example 10.10
Find the eigenvalues and the eigenspaces of
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & -5 & 4 \end{bmatrix}
\]
Figure 10.23: Eigenpicture: x are points on the unit circle (highlighted by blue) and the transformed
vectors Ax, highlighted by red, are plotted head to tail with x. The eigenvector is the one in which the
blue and red vectors are aligned.
Now, to find the eigenvectors for a certain $\lambda$, we search for $x$ such that
\[
(A - \lambda I) x = 0
\]
Thus, the eigenvector $x$ is in the null space of $A - \lambda I$. The set of all eigenvectors together with the zero vector forms a subspace known as an eigenspace, denoted by $E_\lambda$. Now, for $\lambda_1 = \lambda_2 = 1$, we need to find the null space of $A - I$ (using Gauss elimination)$^a$:
\[
A - I = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 2 & -5 & 3 \end{bmatrix} \implies \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \implies E_1 = \operatorname{span}\left( \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right)
\]
Similarly, to find the eigenvectors for $\lambda_3 = 2$, we look for the null space of $A - 2I$:
\[
A - 2I = \begin{bmatrix} -2 & 1 & 0 \\ 0 & -2 & 1 \\ 2 & -5 & 2 \end{bmatrix} \implies \begin{bmatrix} 1 & 0 & -1/4 & 0 \\ 0 & 1 & -1/2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \implies E_2 = \operatorname{span}\left( \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} \right)
\]
Note that $\dim(E_1) = \dim(E_2) = 1$. Let us define the geometric multiplicity of an eigenvalue to be the dimension of its eigenspace. Why do we need this geometric multiplicity? Because of this fact: an $n \times n$ matrix is diagonalizable if and only if the sum of the dimensions of the eigenspaces is $n$, i.e., the matrix has $n$ linearly independent eigenvectors. (Thus, the matrix considered in this example is not diagonalizable.)
$^a$ Why do we see a row full of zeros? Because $A - \lambda I$ is singular, by the definition of eigenvectors.
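The eigenvalues and eigenspace bases found in this example can be double-checked numerically (a sketch):

```python
import numpy as np

A = np.array([[0.0,  1.0, 0.0],
              [0.0,  0.0, 1.0],
              [2.0, -5.0, 4.0]])

print(np.sort(np.linalg.eigvals(A).real))  # eigenvalue 1 (twice) and eigenvalue 2

v1 = np.array([1.0, 1.0, 1.0])   # basis vector of E_1
v2 = np.array([1.0, 2.0, 4.0])   # basis vector of E_2
print(np.allclose(A @ v1, 1 * v1), np.allclose(A @ v2, 2 * v2))  # True True
```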
Proof. [Proof of 5] For simplicity the proof is for a $2 \times 2$ matrix only. The two eigenvectors of $A$ are $x_1, x_2$. Suppose that $c_1 x_1 + c_2 x_2 = 0$. Multiplying it by $A$ yields $c_1 \lambda_1 x_1 + c_2 \lambda_2 x_2 = 0$, and multiplying it by $\lambda_2$ gives $c_1 \lambda_2 x_1 + c_2 \lambda_2 x_2 = 0$. Subtracting the two obtained equations yields
\[
(\lambda_1 - \lambda_2) c_1 x_1 = 0
\]
Now $\lambda_1 \neq \lambda_2$ and $x_1 \neq 0$ (the premises of the problem), thus we must have $c_1 = 0$. Doing the same thing, we also get $c_2 = 0$. Thus, the eigenvectors are linearly independent.
Proof. [Proof of 6]
\[
\det(A - \lambda I) = p(\lambda) = (-1)^n (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda)
\]
We write $A^2 x = A A x = A(A x) = A(\lambda x) = \lambda (A x) = \lambda (\lambda x) = \lambda^2 x$.
This holds because $(A^n)^{-1} = (A^{-1})^n$ for positive integer $n$.
Since $A$ is invertible exactly when $\det A \neq 0$, which is equivalent to $\det(A - 0 \cdot I) \neq 0$, zero is not an eigenvalue of $A$ when it is invertible.
Theorem 10.10.1
If A is a symmetric real matrix, then its eigenvalues are real.
Proof. How are we going to prove this theorem? Let $x$ and $\lambda$ denote an eigenvector and eigenvalue of $A$; $\lambda$ might be a complex number of the form $a + bi$, and the components of $x$ may be complex numbers. Our task is to prove that $\lambda$ is real. One way is to prove that the complex conjugate of $\lambda$, which is $\bar{\lambda} = a - bi$, is equal to $\lambda$; that is, prove $\bar{\lambda} = \lambda$. To this end, we need to extend the notion of complex conjugate to vectors and matrices. It turns out to be easy: just replace the entries of vectors/matrices by their conjugates. That is, if $A = [a_{ij}]$, then its conjugate is $\bar{A} = [\bar{a}_{ij}]$. The properties of complex conjugates discussed in Section 2.23 still apply to matrices/vectors; e.g. $\overline{AB} = \bar{A}\bar{B}$.
We start with $A x = \lambda x$, and to make $\bar{\lambda}$ appear, take the conjugate of this equation to get
\[
A \bar{x} = \bar{A} \bar{x} = \overline{A x} = \overline{\lambda x} = \bar{\lambda} \bar{x}
\]
where we used the fact that $A$ is real (which means $\bar{A} = A$). Now, to use the fact that $A$ is symmetric (which means $A^\top = A$), we transpose the equation $A \bar{x} = \bar{\lambda} \bar{x}$:
\[
\bar{x}^\top A = \bar{\lambda} \bar{x}^\top
\]
Now we have two equations:
\[
A x = \lambda x, \qquad \bar{x}^\top A = \bar{\lambda} \bar{x}^\top
\]
Multiplying the first equation by $\bar{x}^\top$ on the left, and the second equation by $x$ on the right, we obtain
\[
\bar{x}^\top A x = \lambda \bar{x}^\top x, \qquad \bar{x}^\top A x = \bar{\lambda} \bar{x}^\top x \implies (\lambda - \bar{\lambda})\, \bar{x}^\top x = 0
\]
But $\bar{x}^\top x \neq 0$, as $x$ is not the zero vector (it is an eigenvector). Thus, we must have $\lambda = \bar{\lambda}$, i.e., $a + bi = a - bi$, which leads to $b = 0$. Hence, the eigenvalues are real.
We know that for any square matrix, eigenvectors corresponding to distinct eigenvalues are linearly independent. For symmetric matrices, something stronger is true: such eigenvectors are orthogonal$^{||}$. So, we have the following theorem.
$^{||}$ The proof goes as $\lambda_1 x_1 \cdot x_2 = (A x_1) \cdot x_2 = x_1 \cdot (A x_2) = \lambda_2 x_1 \cdot x_2$, thus $(\lambda_1 - \lambda_2) x_1 \cdot x_2 = 0$. But $\lambda_1 \neq \lambda_2$, so $x_1 \cdot x_2 = 0$.
Theorem 10.10.2
If A is a symmetric matrix, then any two eigenvectors corresponding to distinct eigenvalues
of A are orthogonal.
The proof of this theorem is not hard, but how could we have anticipated this result? From Section 10.11.4 on matrix diagonalization, we know that we can decompose $A$ as $A = V \Lambda V^{-1}$. Transposing it gives us $A^\top = (V^{-1})^\top \Lambda V^\top$. As $A$ is symmetric, we then have $V \Lambda V^{-1} = (V^{-1})^\top \Lambda V^\top$. We then guess that $V^\top = V^{-1}$, or $V^\top V = I$: $V$ is an orthogonal matrix!
Next, we derive the so-called spectral decomposition of $A$. To see the point, assume that $A$ is a $2 \times 2$ matrix; we can then write (from the spectral theorem)
\[
A = Q \Lambda Q^\top = \begin{bmatrix} q_1 & q_2 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \begin{bmatrix} q_1^\top \\ q_2^\top \end{bmatrix} = \begin{bmatrix} q_1 & q_2 \end{bmatrix} \begin{bmatrix} \lambda_1 q_1^\top \\ \lambda_2 q_2^\top \end{bmatrix} = \sum_{i=1}^2 \lambda_i q_i q_i^\top \tag{10.10.13}
\]
We have seen quadratic forms (e.g. ax 2 C bxy C cy 2 ) when discussing the extrema of functions
of two variables (Section 7.7) and when talking about the kinetic energy of a 3D rotating body
(Section 10.10.1). Now is the time for a formal definition of quadratic forms:
Definition 10.10.2
A quadratic form in $n$ variables is a function $f : \mathbb{R}^n \to \mathbb{R}$ of the form
\[
f(x) = x^\top A x
\]
where $A$ is a symmetric $n \times n$ matrix and $x \in \mathbb{R}^n$. We refer to $A$ as the matrix associated with the quadratic form $f$.
If $f(x)$ is positive definite, then its associated matrix $A$ is said to be a positive definite matrix. The next problem we have to solve is: when is a quadratic form positive definite? What, then, are the properties of $A$? To answer this question, one observation is that if there is no cross term in $f(x)$, then it is easy to determine its positive definiteness. One example is enough to convince us: $f(x) = 2x^2 + 4y^2$ is positive definite. Furthermore, without the cross term, the associated matrix is diagonal:
" #" #
h i 2 0 x
2 2
f .x/ D 2x C 4y D x y (10.10.15)
0 4 y
Diagonal matrices? We need the spectral theorem, which states that an $n \times n$ real symmetric matrix has the factorization $A = Q \Lambda Q^\top$ with real eigenvalues in $\Lambda$ and orthonormal eigenvectors in the columns of $Q$. Thus, we do a change of variable $x = Q y$ and compute the quadratic form in this new variable $y$; magic will happen:
\[
f(x) = x^\top A x = (Q y)^\top A (Q y) = y^\top \underbrace{Q^\top A Q}_{\Lambda} y = y^\top \Lambda y = \sum_{i=1}^n \lambda_i y_i^2 \tag{10.10.16}
\]
Principal axes theorem and ellipses. Eq. (10.10.16) is the theorem of principal axes. This
theorem tells us that any quadratic form can be written in a form without the cross terms. This
We cannot know this will work, but we have to try and usually pieces of mathematics fit nicely together.
is achieved by using a change of variable x D Qy. Now, we explain the name of the theorem.
Consider the following conic section (Section 4.1.6):
\[
5 x^2 + 8 x y + 5 y^2 = 1 \iff x^\top A x = 1, \quad x = (x, y), \quad A = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}
\]
First, the eigenvalues and eigenvectors of $A$:
\[
\lambda_1 = 1, \quad \lambda_2 = 9; \qquad v_1 = (1/\sqrt{2}, -1/\sqrt{2}), \quad v_2 = (1/\sqrt{2}, 1/\sqrt{2})
\]
Then, the change of variable
\[
x = Q x', \qquad Q = \begin{bmatrix} +1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}, \qquad x' = (y_1, y_2)
\]
transforms the conic into
\[
1 y_1^2 + 9 y_2^2 = 1
\]
Thus, our conic is an ellipse. Now, to graph this ellipse we need to know its axes. To this end, we need to know where the unit vector $e_1' = (1, 0)$ of the $(y_1, y_2)$ coordinate system ends up. Using $x = Q x'$, we have
\[
Q e_1' = \begin{bmatrix} +1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} +1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix}
\]
which is the first eigenvector of $A$. Similarly, $e_2' = (0, 1)$ is mapped to the second eigenvector. Thus, the eigenvectors of $A$ (the matrix associated with a quadratic form) give the directions of the principal axes of the corresponding graph of the quadratic form. This explains why $A = Q \Lambda Q^\top$ is called the principal axes theorem: it displays the axes. What is more, the eigenvalues of $A$ give us the lengths of the axes. The smaller eigenvalue ($1$) gives the length of the semi-major axis ($1/\sqrt{1} = 1$) and the larger eigenvalue ($9$) gives the semi-minor axis (of half-length $1/\sqrt{9} = 1/3$). This geometry will help us solve constrained optimization problems relating to quadratic forms, as explained in what follows.
Constrained optimization problems. I now present one application involving the definiteness of a quadratic form. Assume that a quadratic form $f(x) = x^\top A x$ is positive semi-definite; then, since $f(0) = 0$, the minimum value of $f(x)$ is zero, without calculus. More often, we have to find the maximum/minimum of $f(x)$ with $x$ subject to the constraint $\lVert x \rVert = 1$. Thus, we pose the following constrained optimization problem$^{||}$:
You can reverse the direction of $v_1$.
If you check Section 4.1.6 again you will see that this change of variable is exactly the rotation mentioned in that section. Here, we have $A = C = 5$, thus the rotation angle is $\pi/4$.
\[
\max_{x \neq 0} \frac{x^\top A x}{x^\top x} \qquad \text{or} \qquad \max_{\lVert x \rVert = 1} x^\top A x
\]
The solution to this problem actually lies in Eq. (10.10.16): just look at $f = 1 y_1^2 + 9 y_2^2$ with the constraint $y_1^2 + y_2^2 = 1$; the maximum is $f = 9$, the maximum eigenvalue of the matrix associated with the quadratic form. In general, sort the eigenvalues of $A$ as $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$; then
\[
f(x) = \sum_{i=1}^n \lambda_i y_i^2 = \lambda_1 y_1^2 + \lambda_2 y_2^2 + \cdots + \lambda_n y_n^2 \leq \lambda_1 y_1^2 + \lambda_1 y_2^2 + \cdots + \lambda_1 y_n^2 = \lambda_1 (y_1^2 + y_2^2 + \cdots + y_n^2) = \lambda_1
\]
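This bound on the unit sphere can be observed numerically by sampling the Rayleigh quotient $x^\top A x / x^\top x$. A sketch, using the matrix $A = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}$ from the conic example, whose eigenvalues are $1$ and $9$:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])      # eigenvalues 1 and 9

rng = np.random.default_rng(0)
xs = rng.normal(size=(2, 100_000))                 # 100k random nonzero vectors
# Rayleigh quotient x^T A x / x^T x, computed column by column
r = np.einsum('ij,ij->j', xs, A @ xs) / np.einsum('ij,ij->j', xs, xs)
print(r.max() <= 9.0, r.min() >= 1.0)  # trapped between the smallest/largest eigenvalues
print(r.max())                         # close to 9, attained near the eigenvector v2
```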
Another derivation. From that, it can be seen that the maximum of the Rayleigh quotient $R(x) = x^\top A x / x^\top x$ is $\lambda_1$. What is nice with this form of $R(x)$ is the ease of finding the maximum of $R(x)$ when the constraint is that $x$ is perpendicular to $v_1$: this constraint means that $c_1 = 0$, and thus the maximum becomes $\lambda_2$.
$^{||}$ To see that the two forms are equivalent, note that $R(x)$ is unchanged when $x$ is scaled, so we can always normalize $x$ to $\lVert x \rVert = 1$.
Up to this point we have seen many mathematical objects: numbers, vectors, matrices and functions. Do these different objects share anything in common? Many things, actually. First, we can add two numbers, two vectors, two matrices and, of course, two functions. Second, we can multiply a vector by a scalar, a matrix by a scalar and a function by a scalar. Third, adding two vectors gives us a new vector, adding two matrices returns a matrix, and adding two functions gives us a function (not anything else).
We believe the following equation, showing a vector in $\mathbb{R}^4$, a polynomial of degree less than or equal to 3, and a $2 \times 2$ matrix,
\[
u = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}, \qquad p(x) = a + b x + c x^2 + d x^3, \qquad A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\]
is a good illustration that all these objects are related. After all, they are all represented by four numbers $a, b, c, d$.
It seems reasonable and logical for mathematicians to unify all these seemingly different but similar objects. Here come vector spaces, which constitute the most abstract part of linear algebra. The term vector space is a bit confusing because not all objects in vector spaces are vectors; e.g. matrices are not vectors. A better name would probably be linear spaces. About the power of algebra, Jean le Rond d'Alembert wrote: "Algebra is generous; she often gives more than is asked of her."
To define a vector space, let $V$ be a set of objects $u, v, w, \ldots$ on which two operations, called addition and scalar multiplication, are defined: the sum of $u$ and $v$ is denoted by $u + v$, and if $\alpha$ is a scalar, the scalar multiple of $v$ is denoted by $\alpha v$. Then, $V$ is a vector space (sometimes also referred to as a linear space) if the following ten axioms are satisfied ($\alpha, \beta$ are scalars):
We can view the set of polynomials $p(x) = a + b x + c x^2 + d x^3$ as a space, similar to $\mathbb{R}^n$, with basis $\{1, x, x^2, x^3\}$. Thus, $(a, b, c, d)$ are the coordinates of $p(x)$ with respect to that basis. And $(a, b, c, d)$ can also be seen as the coordinates of a point in $\mathbb{R}^4$!
So, a vector space is a set of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars, such that these vectors satisfy the above ten axioms. Sometimes we see the notation $(V, \mathbb{R}, +, \cdot)$ to denote a vector space $V$ over $\mathbb{R}$ with the two operations of addition and scalar multiplication.
Example 1. Of course $\mathbb{R}^n$ with $n \ge 1$ is a vector space. All ten axioms of a vector space can be verified easily.
Example 2. Let $P_2$ be the set of all polynomials of degree less than or equal to 2 with real coefficients. To see if $P_2$ is a vector space, we first need to define the two basic operations of addition and scalar multiplication. If $p(x), q(x)$ are two objects in $P_2$, then $p(x) = a_0 + a_1x + a_2x^2$ and $q(x) = b_0 + b_1x + b_2x^2$. Addition and scalar multiplication are defined coefficient-wise:
$$(p + q)(x) = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2, \qquad (\alpha p)(x) = \alpha a_0 + \alpha a_1x + \alpha a_2x^2$$
This verifies the last two axioms on closure. The identity element for addition is the polynomial with all coefficients being zero. The additive inverse of $p(x)$ is $-p(x) = -a_0 - a_1x - a_2x^2$. Verification of the other axioms is straightforward as they come from the arithmetic rules of real numbers.
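These coefficient-wise operations are easy to see concretely if we represent each polynomial in $P_2$ by its coefficient vector, as in the following sketch (in Python/NumPy; the book's own listings use Julia):

```python
import numpy as np

# Represent p(x) = a0 + a1*x + a2*x^2 in P2 by its coefficient vector (a0, a1, a2).
p = np.array([1.0, -2.0, 3.0])   # 1 - 2x + 3x^2
q = np.array([4.0, 0.0, -1.0])   # 4 - x^2

# Addition and scalar multiplication act coefficient-wise, so the results
# are again coefficient vectors of polynomials in P2 (closure).
s = p + q        # (p + q)(x) = 5 - 2x + 2x^2
m = 2.0 * p      # (2p)(x)   = 2 - 4x + 6x^2

# Additive identity (zero polynomial) and additive inverse:
zero = np.zeros(3)
assert np.allclose(p + zero, p)
assert np.allclose(p + (-p), zero)
```

Because every operation reduces to arithmetic on the coefficients, the remaining axioms are inherited from the arithmetic rules of real numbers.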
Example 3. Let $F$ denote the set of all real-valued functions defined on the real line. If $f(x)$ and $g(x)$ are two such functions and $\alpha$ is a scalar, then we define $(f + g)(x)$ and $(\alpha f)(x)$ pointwise:
$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x)$$
The zero function is $f(x) = 0$ for all $x$. The negative function is $(-f)(x) = -f(x)$. It can then be seen that $F$ is a vector space, but a vector space of infinite dimension. Usually linear algebra deals with finite-dimensional vector spaces and functional analysis concerns infinite-dimensional vector spaces. But we do not follow this convention and cover both kinds of spaces in this chapter. Similarly, we have another vector space, $F[a,b]$, that contains all real-valued functions defined on the interval $[a,b]$.
Example 4. All rectangular matrices of shape $m\times n$ form a vector space, denoted $\mathbb{R}^{m\times n}$. From Section 10.4.2, we can verify that matrices obey the ten axioms of linear spaces. And the columns of an $m\times n$ matrix are themselves vectors, living in $\mathbb{R}^m$.
If matrices are vectors, then we can form linear combinations of matrices, and we can talk about linearly independent matrices. For example, consider the space $M$ of all $2\times 2$ matrices. It is obvious that we can write any such matrix as:
$$\begin{bmatrix} a & b\\ c & d\end{bmatrix} = a\begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix} + b\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix} + c\begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix} + d\begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix}$$
The four matrices on the right are linearly independent, and they are the basis vectors of $M$; they play the same role as the unit vectors $e_i$ that we're familiar with.
If $a + c = b + c$ then $a = b$ for $a, b, c$ being scalars, $\mathbb{R}^n$ vectors or matrices. Thus, we guess that this holds for any vectors in a vector space. The following theorem is a summary of some properties that vectors in a vector space satisfy. These properties are called the trivial consequences of the axioms as they look obvious.
Theorem 10.11.1
Let $V$ be a vector space, let $a, b, c, v$ be vectors in $V$, and let $\alpha, c$ (in (e)) be scalars. Then, we have
(a) If $a + c = b + c$ then $a = b$.
(b) If $a + b = b$ then $a = 0$.
(c) $\alpha 0 = 0$ and $0v = 0$.
(d) $(-1)v = -v$.
(e) If $cv = 0$, then $c = 0$ or $v = 0$.
Proof of (a):
$$\begin{aligned}
a &= a + 0 &&\text{(axiom 3)}\\
&= a + (c + x) &&\text{($x$ is the additive inverse of $c$)}\\
&= (a + c) + x &&\text{(axiom 2)}\\
&= (b + c) + x &&\text{(given)}\\
&= b + (c + x) &&\text{(axiom 2)}\\
&= b + 0 &&\text{($x$ is the additive inverse of $c$)}\\
&= b &&\text{(axiom 3)}
\end{aligned}$$
Proof of (b):
$$a + b = b = b + 0 \implies a = 0 \quad \text{(using (a))}$$
Proof of (c) is based on axioms 5/6, the axioms that involve the scalar multiplication of vectors:
$$\alpha 0 = \alpha(0 + 0) \overset{\text{ax.5}}{=} \alpha 0 + \alpha 0 \overset{(b)}{\implies} \alpha 0 = 0$$
$$0v = (0 + 0)v \overset{\text{ax.6}}{=} 0v + 0v \overset{(b)}{\implies} 0v = 0$$
Proof of (d) is:
$$v + (-1)v = 1v + (-1)v \overset{\text{ax.6}}{=} (1 + (-1))v = 0v \overset{(c)}{=} 0$$
But we know that $v + (-v) = 0$, thus $(-1)v = -v$. Proof of (e) is (we're interested in the case $c \ne 0$ only, otherwise (e) is simply (c)):
$$v = 1v \overset{\text{ax.8}}{=} \left(\tfrac{1}{c}c\right)v \overset{\text{ax.7}}{=} \tfrac{1}{c}(cv) = \tfrac{1}{c}0 \overset{(c)}{=} 0$$
As an illustration, consider the set $\{1 + x,\; x + x^2,\; 1 + x^2\}$ in $P_2$. To check linear independence, suppose
$$c_1(1 + x) + c_2(x + x^2) + c_3(1 + x^2) = 0$$
This is equivalent to
$$c_1 + c_3 = 0, \quad c_1 + c_2 = 0, \quad c_2 + c_3 = 0 \implies c_1 = c_2 = c_3 = 0$$
To see that the set also spans $P_2$, we solve
$$c_1(1 + x) + c_2(x + x^2) + c_3(1 + x^2) = a + bx + cx^2$$
which is equivalent to
$$c_1 + c_3 = a, \quad c_1 + c_2 = b, \quad c_2 + c_3 = c$$
The coefficient matrix of this system is invertible, thus it always has a solution. As $\{1 + x,\; x + x^2,\; 1 + x^2\}$ is a basis for $P_2$, we deduce that $\dim(P_2) = 3$: $P_2$ is a finite-dimensional vector space. The following definition aims to make this precise.
Definition 10.11.1
A vector space $V$ is called finite-dimensional if it has a basis consisting of finitely many vectors. The dimension of $V$, denoted by $\dim V$, is the number of vectors in a basis for $V$. The dimension of the zero vector space $\{0\}$ is defined to be zero. A vector space that has no finite basis is called infinite-dimensional.
Theorem 10.11.2
Consider a vector space $V$ with a basis $B = \{v_1, v_2, \ldots, v_n\}$. If we have two vectors $u$ and $v$ in $V$ and we know their coordinates $[u]_B$ and $[v]_B$, then we can determine the coordinates of their sum and the coordinates of $\alpha v$:
$$[u + v]_B = [u]_B + [v]_B, \qquad [\alpha v]_B = \alpha[v]_B$$
Example 10.11
Consider a vector in $P_2$: $p(x) = a + bx + cx^2$. If we use the standard basis $B = \{1, x, x^2\}$ for $P_2$, then it is easy to see that the coordinate vector of $p(x)$ w.r.t. $B$ is
$$[p(x)]_B = \begin{bmatrix} a & b & c\end{bmatrix}^\top$$
which is simply a vector in $\mathbb{R}^3$. Thus, $[p(x)]_B$ connects the possibly unfamiliar space $P_2$ with the familiar space $\mathbb{R}^3$. Points in $P_2$ can now be identified by their coordinates in $\mathbb{R}^3$, and every vector-space calculation in $P_2$ is accurately reproduced in $\mathbb{R}^3$ (and vice versa). Note that $P_2$ is not $\mathbb{R}^3$, but it does look like $\mathbb{R}^3$ as a vector space.
What is $[1]_B$? As we can write $1 = (1)(1) + 0(x) + 0(x^2)$, we have $[1]_B = (1, 0, 0) = e_1$. Similarly, $[x]_B = (0, 1, 0) = e_2$. In general, if $B = \{v_1, v_2, \ldots, v_n\}$ is a basis for a vector space, then $[v_i]_B = e_i$.
The above example demonstrates that there is a connection between a vector space $V$ and $\mathbb{R}^n$, and the following theorem states one such connection. We shall use this theorem in definition 10.11.2 when we discuss the change-of-basis matrix and use it to show that this matrix is invertible.
Theorem 10.11.3
Let $B = \{v_1, v_2, \ldots, v_n\}$ be a basis for a vector space $V$ and let $u_1, u_2, \ldots, u_k$ be vectors in $V$. Then $\{u_1, u_2, \ldots, u_k\}$ is linearly independent in $V$ if and only if $\{[u_1]_B, [u_2]_B, \ldots, [u_k]_B\}$ is linearly independent in $\mathbb{R}^n$.
For one direction, suppose $c_1[u_1]_B + c_2[u_2]_B + \cdots + c_k[u_k]_B = 0$. By theorem 10.11.2 this is $[c_1u_1 + c_2u_2 + \cdots + c_ku_k]_B = 0$, which means that the coordinate vector of $c_1u_1 + c_2u_2 + \cdots + c_ku_k$ w.r.t. $B$ is the zero vector. Therefore, we can write $c_1u_1 + c_2u_2 + \cdots + c_ku_k = 0$. Since $\{u_1, u_2, \ldots, u_k\}$ is linearly independent, this equation forces the $c_i$'s to be all zero.
Change of basis. Now, we discuss the topic of changing bases. The reason is simple: it is more convenient to work with some bases than others. We study how to carry out a change of basis herein. Consider the familiar $\mathbb{R}^2$ plane with two nonstandard bases: $B$ with $u_1 = (-1, 2)$ and $u_2 = (2, 1)$; and $C$ with $v_1 = (1, 0)$ and $v_2 = (1, 1)$. Certainly, all these vectors (e.g. $u_1$) are written with respect to the standard basis $(1, 0)$ and $(0, 1)$. The question is: given a vector $x$ with $[x]_B = (1, 3)$, what is $[x]_C$?
The first thing we need to do is to write the basis vectors of $B$ in terms of those of $C$:
$$u_1 = -3v_1 + 2v_2, \qquad u_2 = v_1 + v_2$$
so that
$$x = u_1 + 3u_2 = (-3v_1 + 2v_2) + 3(v_1 + v_2) = 0v_1 + 5v_2 \implies [x]_C = (0, 5)$$
where Theorem 10.11.2 was used in the second step. And with the matrix, denoted for now by $P$, whose columns are the coordinate vectors $[u_1]_C = (-3, 2)$ and $[u_2]_C = (1, 1)$ of the basis vectors in $B$ w.r.t. $C$, the calculation of the $C$-coordinates of any vector is easy: $[x]_C = P[x]_B$.
Thus, we have the following definition of this important matrix.
Definition 10.11.2
Let $B = \{u_1, u_2, \ldots, u_n\}$ and $C = \{v_1, v_2, \ldots, v_n\}$ be bases for a vector space $V$. The $n\times n$ matrix whose columns are the coordinate vectors $[u_1]_C, \ldots, [u_n]_C$ of the vectors in the old basis $B$ with respect to the new basis $C$ is denoted by $P_{C\leftarrow B}$ and is called the change-of-basis matrix from $B$ to $C$.
That matrix allows us to compute the coordinates of a vector in the new basis:
$$[x]_C = P_{C\leftarrow B}[x]_B$$
You can either draw these vectors and see this or you can simply solve two 2-by-2 systems.
The change of basis formula relates the coordinates of one and the same vector in two different bases, whereas a linear transformation relates the coordinates of two different vectors in the same basis. One more thing: $P_{C\leftarrow B}$ is invertible, thus we can always go back and forth between the bases:
$$[x]_B = P_{C\leftarrow B}^{-1}[x]_C$$
Why is the change-of-basis matrix invertible? This is thanks to theorem 10.11.3: the vectors $\{u_1, u_2, \ldots, u_n\}$ are linearly independent in $V$, thus the vectors $\{[u_1]_C, \ldots, [u_n]_C\}$ are linearly independent in $\mathbb{R}^n$: the columns of the change-of-basis matrix are thus linearly independent. Hence, it is invertible.
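The change-of-basis machinery can be checked numerically. The sketch below (Python/NumPy; the book's own listings use Julia) builds the change-of-basis matrix for the two $\mathbb{R}^2$ bases of the example above, $u_1 = (-1, 2)$, $u_2 = (2, 1)$ and $v_1 = (1, 0)$, $v_2 = (1, 1)$, by solving $C p_i = u_i$ for each column:

```python
import numpy as np

# Basis vectors, written w.r.t. the standard basis (numbers from the example).
u1, u2 = np.array([-1.0, 2.0]), np.array([2.0, 1.0])   # basis B
v1, v2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])    # basis C
B = np.column_stack([u1, u2])
C = np.column_stack([v1, v2])

# The columns of the change-of-basis matrix are [u1]_C and [u2]_C,
# i.e. the solutions of C @ p_i = u_i.  All columns at once: P = C^{-1} B.
P = np.linalg.solve(C, B)

xB = np.array([1.0, 3.0])   # coordinates of x in basis B
xC = P @ xB                 # coordinates of x in basis C

# Both coordinate vectors describe one and the same point of R^2:
assert np.allclose(B @ xB, C @ xC)
```

The final assertion is exactly the meaning of the change-of-basis formula: two coordinate vectors, one point.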
Example 1. The differentiation operator, $D(f) = df/dx$, is a linear transformation, because
$$\frac{d(f + g)}{dx} = \frac{df}{dx} + \frac{dg}{dx}, \qquad \frac{d(cf)}{dx} = c\frac{df}{dx}$$
Example 2. Let $F[a,b]$ be the vector space of all real-valued functions defined on the interval $[a,b]$. The integration operator, $S: F[a,b] \to \mathbb{R}$ with $S(f) = \int_a^b f(x)\,dx$, is a linear transformation.
Linear transformation is a fancy term and thus seems scary. Let's get back to the friendly $y = f(x)$: pop in a number $x$ and it is transformed to a new number $f(x)$. Thus, a linear transformation is simply a generalization of the concept of function: instead of taking a single number, it now takes in a vector and gives back another vector. The key difference is that linear transformations are similar to $y = ax$, not $y = \sin x$: the transformation must be linear. In Section 4.2.4 we discussed the concept of the range of a function. We extend that to linear transformations and introduce a new concept: the kernel of the transformation. For $y = f(x)$, the roots of this function are all $x^*$ such that $f(x^*) = 0$. The kernel of a linear transformation is exactly this.
Definition 10.11.3
Let $T: V \to W$ be a linear transformation.
(a) The kernel of $T$, denoted by $\ker(T)$, is the set of all vectors in $V$ that are mapped by $T$ to $0$ in $W$. That is,
$$\ker(T) = \{v \in V : T(v) = 0\}$$
(b) The range of $T$, denoted by $\text{range}(T)$, is the set of all vectors in $W$ that are images of vectors in $V$ under $T$. That is,
$$\text{range}(T) = \{T(v) : v \in V\}$$
Definition 10.11.5
Consider a linear transformation $T: V \to W$.
(a) $T$ is called one-to-one if it maps distinct vectors in $V$ to distinct vectors in $W$. That is, for all $u$ and $v$ in $V$, $u \ne v$ implies that $T(u) \ne T(v)$.
(b) $T$ is called onto if $\text{range}(T) = W$. In other words, the range of $T$ is equal to the codomain of $T$. Or, every vector in the codomain is the output of some input vector. That is, for all $w \in W$, there is at least one $v \in V$ such that $T(v) = w$.
Again, the definition above is not so useful for checking whether a transformation is one-to-one or onto. There exist theorems which provide simpler ways to do that. Below is one such theorem:
The idea is that an isomorphism $T: V \to W$ means that $W$ is "just like" $V$ in the context of any question involving addition and scalar multiplication. The words isomorphism and isomorphic are derived from the Greek words isos, meaning "equal", and morphe, meaning "shape".
Example 10.12
Show that $P_{n-1}$ and $\mathbb{R}^n$ are isomorphic. To this end, we need to prove that there exists a linear transformation $T: P_{n-1} \to \mathbb{R}^n$ that is one-to-one and onto. Actually, we already know such a transformation: the one that gives us the coordinates of a vector in $P_{n-1}$ with respect to a basis of $P_{n-1}$.
Let $E = \{1, x, \ldots, x^{n-1}\}$ be a basis for $P_{n-1}$. Then, any vector $p(x)$ in $P_{n-1}$ can be written as
Matrix associated with a linear transformation. Let $V$ and $W$ be two finite-dimensional vector spaces with bases $B$ and $C$, respectively, where $B = \{v_1, v_2, \ldots, v_n\}$. Now consider a linear transformation $T: V \to W$. Our task is to find the matrix associated with $T$. To this end, consider a vector $u \in V$; we can write it as
$$u = u_1v_1 + u_2v_2 + \cdots + u_nv_n$$
Table 10.3: The parallel universes of $P_2$ and $\mathbb{R}^3$: $P_2$ is isomorphic to $\mathbb{R}^3$ by the coordinate map $T(p(x)) := [p(x)]_E$ where $E = \{1, t, t^2\}$ is the standard basis of $P_2$.

| $P_2$ | $\mathbb{R}^3$ |
|---|---|
| $p(t) = a + bt + ct^2$ | $(a, b, c)^\top$ |
| $(-1 + 2t + 3t^2) + (2 + 4t + 3t^2) = 1 + 6t + 6t^2$ | $(-1, 2, 3)^\top + (2, 4, 3)^\top = (1, 6, 6)^\top$ |
| $3(2 + t + 3t^2) = 6 + 3t + 9t^2$ | $3(2, 1, 3)^\top = (6, 3, 9)^\top$ |
Now $T(u)$ is a vector in $W$, and by linearity its coordinates with respect to the basis $C$ are
$$[T(u)]_C = u_1[T(v_1)]_C + u_2[T(v_2)]_C + \cdots + u_n[T(v_n)]_C = \big[\,[T(v_1)]_C \;\; [T(v_2)]_C \;\cdots\; [T(v_n)]_C\,\big][u]_B$$
The matrix in the last expression, whose columns are the $[T(v_i)]_C$, is called the matrix of $T$ with respect to the bases $B$ and $C$. In particular, for $T: V \to V$ and a single basis $B$, this matrix is denoted $[T]_B$, and any vector $x \in V$ with coordinate vector $[x]_B$ is transformed to the vector $T(x)$ with coordinate vector $[T(x)]_B$:
$$[T(x)]_B = [T]_B[x]_B$$
Now, we look at the transformed vector $T(x)$ but in the basis $C$, by multiplying $[T(x)]_B$ with the change-of-basis matrix $P_{C\leftarrow B}$:
$$\underbrace{P_{C\leftarrow B}[T(x)]_B}_{[T(x)]_C} = P_{C\leftarrow B}[T]_B[x]_B$$
The left hand side is $[T(x)]_C = [T]_C[x]_C$, so
$$[T]_C[x]_C = P_{C\leftarrow B}[T]_B[x]_B \implies [T]_C P_{C\leftarrow B}[x]_B = P_{C\leftarrow B}[T]_B[x]_B$$
This equation holds for any $[x]_B$, thus we get the identity $[T]_C P_{C\leftarrow B} = P_{C\leftarrow B}[T]_B$, from which we obtain
$$[T]_B = P_{C\leftarrow B}^{-1}[T]_C P_{C\leftarrow B} \tag{10.11.6}$$
This is often used when we are trying to find a good basis with respect to which the matrix of a linear transformation is particularly simple (e.g. diagonal). For example, we can ask whether there is a basis $B$ such that the matrix $[T]_B$ of $T: V \to V$ is a diagonal matrix. The next section answers this question.
Example 10.13
Let's consider the following matrix, which is associated with a linear transformation $T$, together with its eigenvalues and eigenvectors:
$$A = \begin{bmatrix} 3 & 1\\ 0 & 2\end{bmatrix}, \quad \lambda_1 = 3, \quad \lambda_2 = 2, \quad v_1 = \begin{bmatrix} 1\\ 0\end{bmatrix}, \quad v_2 = \begin{bmatrix} -1\\ 1\end{bmatrix}$$
Now, we consider two bases: the first basis $C$ is the standard basis with $(1, 0)$ and $(0, 1)$ as the basis vectors, and the second basis $B$ has the eigenvectors $v_1, v_2$ as basis vectors. Now, we have
$$[T]_C = A, \qquad P_{C\leftarrow B} = \begin{bmatrix} v_1 & v_2\end{bmatrix} = \begin{bmatrix} 1 & -1\\ 0 & 1\end{bmatrix}$$
Now, using Eq. (10.11.6), the transformation $T$, which is associated with $A$ w.r.t. $C$, is given w.r.t. the eigenbasis $B$ by:
$$[T]_B = P_{C\leftarrow B}^{-1}[T]_C P_{C\leftarrow B} = \begin{bmatrix} 1 & -1\\ 0 & 1\end{bmatrix}^{-1}\begin{bmatrix} 3 & 1\\ 0 & 2\end{bmatrix}\begin{bmatrix} 1 & -1\\ 0 & 1\end{bmatrix} = \begin{bmatrix} 3 & 0\\ 0 & 2\end{bmatrix}$$
Look at what we have obtained: a diagonal matrix with the eigenvalues on the diagonal! In other words, we have diagonalized the matrix $A$.
$$AV = V\Lambda \implies A = V\Lambda V^{-1}$$
With this form, it is super easy to compute powers of $A$. For example,
$$A^3 = (V\Lambda V^{-1})(V\Lambda V^{-1})(V\Lambda V^{-1}) = V\Lambda(V^{-1}V)\Lambda(V^{-1}V)\Lambda V^{-1} = V\Lambda^3V^{-1}$$
And nothing can stop us from going to $A^k = V\Lambda^kV^{-1}$, whatever $k$ might be: 1000 or 10000. This equation tells us that the eigenvalues of $A^k$ are $\lambda_1^k, \ldots, \lambda_n^k$, and the eigenvectors of $A^k$ are the same as the eigenvectors of $A$.
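Both the diagonalization of Example 10.13 and the power formula can be verified in a few lines (a Python/NumPy sketch; the book's own listings use Julia):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
V = np.array([[1.0, -1.0],
              [0.0,  1.0]])        # columns: eigenvectors v1, v2
Lam = np.diag([3.0, 2.0])          # Lambda: eigenvalues on the diagonal

# Change to the eigenbasis (Eq. 10.11.6 with P = V): V^{-1} A V = Lambda.
assert np.allclose(np.linalg.solve(V, A @ V), Lam)

# Powers become trivial: A^k = V Lambda^k V^{-1}, here with k = 3.
A3 = V @ np.diag([3.0**3, 2.0**3]) @ np.linalg.inv(V)
assert np.allclose(A3, np.linalg.matrix_power(A, 3))
```

Note the use of `np.linalg.solve(V, ...)` instead of forming $V^{-1}$ explicitly; solving a linear system is the numerically preferred way to apply an inverse.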
If this is not clear, check Section 10.4.4 on the matrix-column representation of the product $AB$: $AB = \begin{bmatrix} AB_1 & AB_2 & AB_3\end{bmatrix}$, and $AB_1$ is a linear combination of the columns of $A$ with the coefficients being the components of $B_1$. Here, $A$ is $V$ and $B_1 = (\lambda_1, 0, \ldots)$.
This dot product has these properties: $a\cdot b = b\cdot a$, $a\cdot a \ge 0$, and $(\alpha a + \beta b)\cdot c = \alpha(a\cdot c) + \beta(b\cdot c)$. Now, we define an inner product between two vectors $a, b$ in a vector space $V$, denoted by $\langle a, b\rangle$, which is an operation that assigns to these two vectors a real number such that this product has properties identical to those of the dot product:
Example 10.14
Let $u = (u_1, \ldots, u_n)$ and $v = (v_1, \ldots, v_n)$ be two vectors in $\mathbb{R}^n$ and let $w_1, \ldots, w_n$ be fixed positive numbers. Then the following weighted dot product defines an inner product:
$$\langle u, v\rangle = w_1u_1v_1 + w_2u_2v_2 + \cdots + w_nu_nv_n = u^\top Wv, \qquad W = \begin{bmatrix} w_1 & & 0\\ & \ddots & \\ 0 & & w_n\end{bmatrix}$$
A vector space equipped with an inner product is called an inner product space. Don't be scared: the space $\mathbb{R}^n$ is an inner product space! It must be, as it was the inspiration for mathematicians to generalize to inner product spaces. We shall meet other inner product spaces when we define concrete inner products. But first: with the inner product, similar to how the dot product defines length, distance and orthogonality, we are now able to define these concepts for vectors in an inner product space.
Definition 10.11.7
Let $u$ and $v$ be two vectors in an inner product space $V$.
(a) The length (or norm) of $v$ is $\|v\| = \sqrt{\langle v, v\rangle}$.
Example 10.15
If we consider two functions $f$ and $g$ in $C[a,b]$, the vector space of continuous functions on $[a,b]$, show that
$$\langle f, g\rangle = \int_a^b f(x)g(x)\,dx \tag{10.11.9}$$
is an inner product on $C[a,b]$.
Applying the Gram-Schmidt process to the monomials $\{1, x, x^2, x^3, \ldots\}$ with this inner product yields the Legendre polynomials (note that Legendre polynomials are defined on the interval $[-1, 1]$):
$$\begin{aligned}
L_0(x) &= 1\\
L_1(x) &= x - \frac{\langle 1, x\rangle}{\langle 1, 1\rangle}1 = x - \frac{1}{2}\int_{-1}^{1}x\,dx = x\\
L_2(x) &= x^2 - \frac{\langle 1, x^2\rangle}{\langle 1, 1\rangle}1 - \frac{\langle x, x^2\rangle}{\langle x, x\rangle}x = x^2 - \frac{1}{3}\\
L_3(x) &= x^3 - \frac{\langle 1, x^3\rangle}{\langle 1, 1\rangle}1 - \frac{\langle x, x^3\rangle}{\langle x, x\rangle}x - \frac{\langle x^2, x^3\rangle}{\langle x^2, x^2\rangle}x^2 = x^3 - \frac{3}{5}x
\end{aligned} \tag{10.11.10}$$
Actually, we need to scale these polynomials so that $L_n(1) = 1$; then we have the standard Legendre polynomials as shown in Table 10.4. One surprising fact about Legendre polynomials: their roots are symmetric with respect to $x = 0$, and $L_n(x)$ has $n$ real roots within $[-1, 1]$, see Fig. 10.24. And these roots define the quadrature points in Gauss' rule, a well-known numerical integration rule (Section 11.4.3).
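The Gram-Schmidt construction in Eq. (10.11.10) is mechanical enough to automate. The following sketch (Python/NumPy; the book's own listings use Julia) orthogonalizes the monomials with the integral inner product on $[-1, 1]$, computing each integral exactly via the antiderivative:

```python
from numpy.polynomial import Polynomial as Poly

def inner(p, q):
    """<p, q> = integral of p(x) q(x) over [-1, 1], computed exactly."""
    F = (p * q).integ()          # antiderivative of the product
    return F(1.0) - F(-1.0)

def gram_schmidt(ps):
    """Orthogonalize a list of polynomials w.r.t. the inner product above."""
    basis = []
    for p in ps:
        for L in basis:
            p = p - (inner(L, p) / inner(L, L)) * L
        basis.append(p)
    return basis

monomials = [Poly([1]), Poly([0, 1]), Poly([0, 0, 1]), Poly([0, 0, 0, 1])]
L0, L1, L2, L3 = gram_schmidt(monomials)

# L2 = x^2 - 1/3 and L3 = x^3 - (3/5)x, and they are pairwise orthogonal:
assert abs(inner(L2, L3)) < 1e-12
```

The computed coefficients match Eq. (10.11.10); scaling so that $L_n(1) = 1$ would then give the standard Legendre polynomials of Table 10.4.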
| $n$ | $L_n(x)$ |
|---|---|
| 0 | $1$ |
| 1 | $x$ |
| 2 | $\frac{1}{2}(3x^2 - 1)$ |
| 3 | $\frac{1}{2}(5x^3 - 3x)$ |
| 4 | $\frac{1}{8}(35x^4 - 30x^2 + 3)$ |
| 5 | $\frac{1}{8}(63x^5 - 70x^3 + 15x)$ |

Table 10.4: The first six Legendre polynomials.

Figure 10.24: Plots of some Legendre polynomials $L_0, \ldots, L_5$ on $[-1, 1]$.
Adrien-Marie Legendre (1752 – 1833) was a French mathematician who made numerous
contributions to mathematics. Well-known and important concepts such as the Legendre polyno-
mials and Legendre transformation are named after him.
Now, we focus on the inner product space of polynomials. Because Legendre polynomials are orthogonal to each other, they can serve as a basis for the inner product space of polynomials. For example, any polynomial of degree two can be uniquely written as
$$p_2(x) = c_0L_0(x) + c_1L_1(x) + c_2L_2(x)$$
where the $L_i(x)$ are the orthogonal Legendre polynomials, see Table 10.4. Next, we compute the inner product of $p_2(x)$ with $L_3(x)$, because the result is beautiful:
$$\begin{aligned}
\int_{-1}^{1}L_3(x)p_2(x)\,dx &= \int_{-1}^{1}\left[c_0L_0(x) + c_1L_1(x) + c_2L_2(x)\right]L_3(x)\,dx\\
&= c_0\int_{-1}^{1}L_0(x)L_3(x)\,dx + c_1\int_{-1}^{1}L_1(x)L_3(x)\,dx + c_2\int_{-1}^{1}L_2(x)L_3(x)\,dx\\
&= 0
\end{aligned}$$
This is due to the orthogonality of the Legendre polynomials. We will use this in Section 11.4.3 to derive the famous Gauss-Legendre quadrature rule.
The Cauchy-Schwarz inequality. In Section 2.20.3, we met the Cauchy-Schwarz inequality. At that time, we did not know of $\mathbb{R}^n$. But now, we can see that this inequality is, for two vectors $u$ and $v$ in $\mathbb{R}^n$,
$$|u\cdot v| \le \|u\|\,\|v\|$$
The nice thing about mathematics is that the same inequality holds for two vectors in an inner product space. We just replace the dot product by the more general inner product.
Proof. The proof is pretty similar to the one given in Section 2.20.3. We construct the following function, which is always non-negative:
$$f(t) = \langle u + tv, u + tv\rangle = \langle v, v\rangle t^2 + 2\langle u, v\rangle t + \langle u, u\rangle \ge 0$$
So, $f(t)$ is a quadratic function in $t$; hence its discriminant has to be less than or equal to 0:
$$\Delta = 4\langle u, v\rangle^2 - 4\langle v, v\rangle\langle u, u\rangle \le 0 \implies \langle u, v\rangle^2 \le \langle u, u\rangle\langle v, v\rangle$$
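We can sanity-check the inequality for the integral inner product of Eq. (10.11.9) numerically. In the sketch below (Python/NumPy; the book's own listings use Julia), the functions $f(x) = x$ and $g(x) = e^x$ on $[0, 1]$ are arbitrary choices:

```python
import numpy as np

# Numerical check of Cauchy-Schwarz for the integral inner product on C[0, 1],
# with f(x) = x and g(x) = exp(x) as arbitrary example functions.
x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
f, g = x, np.exp(x)

def inner(u, v):
    w = u * v
    return float(np.sum((w[:-1] + w[1:]) / 2.0) * dx)  # trapezoid rule

def norm(u):
    return inner(u, u) ** 0.5

assert abs(inner(f, g)) <= norm(f) * norm(g)
```

Here $\langle f, g\rangle = \int_0^1 xe^x\,dx = 1$ exactly, while $\|f\|\,\|g\| \approx 1.03$, so the inequality holds with little room to spare for this pair.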
And with this, we also get the triangle inequality for vectors in an inner product space.
A complex $n$-vector has the form
$$z = \begin{bmatrix} a_1 + ib_1 & a_2 + ib_2 & \cdots & a_n + ib_n\end{bmatrix}^\top$$
The first question we have to ask is: how do we compute the length of a complex vector? If $a$ is a real $n$-vector, then its length is $\sqrt{a_1^2 + \cdots + a_n^2}$. Can we use this for complex vectors? Just try it for $z = (1, i)$: then $\|z\| = \sqrt{1^2 + i^2} = 0$, which cannot be correct: a non-zero vector cannot have a zero length!
Definition 10.11.8
If $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ are vectors in $\mathbb{C}^n$, then the complex dot product of them is defined by
$$u\cdot v = \bar u_1v_1 + \bar u_2v_2 + \cdots + \bar u_nv_n$$
where $\bar u_i$ is the complex conjugate of $u_i$. Recall that if $z = a + bi$, then $\bar z = a - bi$.
Definition 10.11.9
A norm on a vector space $V$ is a mapping that associates with each vector $v$ a real number $\|v\|$, called the norm of $v$, such that the following properties are satisfied for all vectors $u$ and $v$ and all scalars $c$:
In the following example, we consider the vector space $\mathbb{R}^n$ and show that there are many norms beyond the usual Euclidean norm.
Example 10.16
Consider $v = (v_1, v_2, \ldots, v_n)$ and the following common norms for $v$:
(a) ($\ell^1$) $\|v\|_1 = |v_1| + |v_2| + \cdots + |v_n|$
(b) ($\ell^2$) $\|v\|_2 = \left(|v_1|^2 + |v_2|^2 + \cdots + |v_n|^2\right)^{1/2}$
(c) ($\ell^\infty$) $\|v\|_\infty = \max_i|v_i|$
where $\|v\|_2$ is the usual Euclidean norm. It is not hard to prove that $\ell^1$, $\ell^2$ and $\ell^\infty$ are indeed norms (we just need to verify the three properties stated in the definition of a norm). For $\ell^p$, the proof is harder and thus skipped. Note that I wrote $|v_1|^2$ instead of $v_1^2$ because the discussion covers complex vectors as well. Thus, the symbol $|\cdot|$ indicates the modulus.
Fig. 10.25 presents the geometry of these norms in $\mathbb{R}^2$. Is this just for fun? Maybe, but it reveals that the different norms are close to each other. Precisely, the norms are all equivalent on $\mathbb{R}^n$ in the sense that, for example,
$$\|v\|_2 \le \|v\|_1 \le \sqrt{n}\,\|v\|_2$$
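A quick numerical check of this equivalence (a Python/NumPy sketch; the book's own listings use Julia):

```python
import numpy as np

# Check ||v||_2 <= ||v||_1 <= sqrt(n) ||v||_2 on many random vectors.
rng = np.random.default_rng(0)
n = 7
for _ in range(1000):
    v = rng.standard_normal(n)
    l1 = np.sum(np.abs(v))
    l2 = np.sqrt(np.sum(v * v))
    assert l2 <= l1 + 1e-12
    assert l1 <= np.sqrt(n) * l2 + 1e-12

# The right-hand bound is attained by a constant vector:
v = np.ones(4)          # ||v||_1 = 4 = sqrt(4) * ||v||_2
```

The constant vector shows the factor $\sqrt{n}$ is sharp; the left bound is attained by any standard unit vector $e_i$.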
know what properties a matrix norm should have. So, we define a matrix norm based on these properties. Later on, once we have found the formula for the norm, we check whether it satisfies all these properties. This is similar to how we defined the determinant of a matrix.
Definition 10.11.10
A norm on the matrix space $M_{n\times n}$ is a mapping that associates with each matrix $A$ a real number $\|A\|$, called the norm of $A$, such that the following properties are satisfied for all matrices $A$ and $B$ and all scalars $c$:
Now we define a matrix norm which is based on a vector norm. Starting with a vector $x$ with a norm $\|\cdot\|$ defined on it, we consider the norm of the transformed vector, that is $\|Ax\|$. One way to measure the magnitude of $A$ is to compute the ratio $\|Ax\|/\|x\|$. We can simplify this ratio as
$$\frac{\|Ax\|}{\|x\|} = \frac{1}{\|x\|}\|Ax\| = \left\|A\frac{x}{\|x\|}\right\| = \|A\hat x\|, \qquad \hat x := \frac{x}{\|x\|}, \quad \|\hat x\| = 1$$
where the scaling property of a vector norm (definition 10.11.9) was used in the second equality. A norm is just one single number, so we are interested only in the maximum of the ratio $\|Ax\|/\|x\|$:
$$\max_{x\ne 0}\frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1}\|Ax\|$$
Mathematicians then define the operator norm of a matrix, induced by the vector norm $\|x\|$, as:
$$\|A\| = \max_{\|x\| = 1}\|Ax\|$$
Of course we have to check the conditions in definition 10.11.10; I skip that part, check [45]. We think of $\|x\|_1$, $\|x\|_2$ and $\|x\|_\infty$ as the important vector norms. Then, we have three corresponding matrix norms:
The definition looks scary, but it turns out that we can actually compute these norms quite straightforwardly, at least for the 1-norm and the $\infty$-norm. For $\|A\|_2$ we need the singular value decomposition, so that norm is postponed to Section 10.12. I want to start with $\|A\|_1$ for simple $2\times 2$ matrices:
$$A = \begin{bmatrix} a & b\\ c & d\end{bmatrix} \implies y := Ax = \begin{bmatrix} ax_1 + bx_2\\ cx_1 + dx_2\end{bmatrix} \implies \|y\|_1 = |ax_1 + bx_2| + |cx_1 + dx_2|$$
Now, to find $\|A\|_1$, we just need to find the maximum of $|ax_1 + bx_2| + |cx_1 + dx_2|$ subject to $|x_1| + |x_2| = 1$:
Thus, $\|A\|_1$ is simply the largest absolute column sum of the matrix. Not satisfied with this simple English, mathematicians write
$$\|A\|_1 = \max_{j = 1,\ldots,n}\sum_{i = 1}^n|A_{ij}|$$
Following the same steps, it can be shown that $\|A\|_\infty$ is the largest absolute row sum of the matrix. For a general $n\times n$ matrix, the proof for $\|A\|_1$ is not hard, but for $\|A\|_\infty$ it is harder.
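These column-sum and row-sum formulas are easy to test against a library implementation (a Python/NumPy sketch; the book's own listings use Julia):

```python
import numpy as np

A = np.array([[1.0, -7.0],
              [3.0,  4.0]])

one_norm = max(np.abs(A).sum(axis=0))   # largest absolute column sum: |-7|+|4| = 11
inf_norm = max(np.abs(A).sum(axis=1))   # largest absolute row sum:    |1|+|-7| = 8

# They agree with NumPy's induced matrix norms:
assert one_norm == np.linalg.norm(A, 1)
assert inf_norm == np.linalg.norm(A, np.inf)
```

`np.linalg.norm` with `ord=1` and `ord=np.inf` computes exactly these induced norms, so the hand formulas and the library match.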
$$A'x' = b \iff (A + \Delta A)(x + \Delta x) = b \implies \Delta x = -A^{-1}\Delta A\,x'$$
Now, we can compute the norm of $\Delta x$: $\|\Delta x\| \le \|A^{-1}\|\,\|\Delta A\|\,\|x'\|$. Thus,
$$\frac{\|\Delta x\|}{\|x'\|} \le \|A^{-1}\|\,\|\Delta A\| = \|A^{-1}\|\,\|A\|\,\frac{\|\Delta A\|}{\|A\|}$$
And the term $\|A^{-1}\|\,\|A\|$ is defined as the condition number of $A$, denoted by $\text{cond}(A)$. Why did we make $\|A\|$ appear in the above? Because only the relative change in the matrix (i.e. $\|\Delta A\|/\|A\|$) makes sense. Thus, the condition number gives an upper bound on the relative change in the solution:
$$\frac{\|\Delta x\|}{\|x'\|} \le \text{cond}(A)\,\frac{\|\Delta A\|}{\|A\|}$$
It is certain that the condition number of a matrix depends on the choice of the norm used. The most commonly used norms are $\|A\|_1$ and $\|A\|_\infty$.
Example 10.17
Find the condition number of the matrix $A$ given in the beginning of this section. We need to compute $A^{-1}$:
$$A = \begin{bmatrix} 1 & 1\\ 1 & 1.0005\end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} +2001 & -2000\\ -2000 & +2000\end{bmatrix}$$
Then,
$$\text{cond}_\infty(A) = \|A\|_\infty\|A^{-1}\|_\infty = (2.0005)(4001) \approx 8004$$
If we compute $\text{cond}_2(A)$ it is about 8002. Thus, when the condition number of a matrix is large for one matrix norm, it will be large for other compatible norms. And that saves us from having to compute different condition numbers! To appreciate that this matrix $A$ has a large condition number, consider the well-behaved matrix in Eq. (10.3.1): its condition number is just three. Matrices such as $A$ with large condition numbers are called ill-conditioned matrices.
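The numbers above can be reproduced with NumPy's built-in condition number routine (a Python sketch; the book's own listings use Julia):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0005]])

c_inf = np.linalg.cond(A, np.inf)   # ||A||_inf * ||A^{-1}||_inf
c_2   = np.linalg.cond(A)           # 2-norm condition number (the default)

# Both are of order 8000: A is ill conditioned whichever norm we pick.
```

Here `c_inf` equals $2.0005 \times 4001 \approx 8004$ and `c_2` is about 8002, matching the hand computation.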
$$\|v - v^*\| \le \|v - w\|$$
for every vector $w$ in $W$.
Thus, $\lambda$ is the squared length of the vector $Av$. So, for a rectangular matrix, we do not have eigenvalues, but we do have singular values, defined via the eigenvalues of $A^\top A$:
Definition 10.12.1
If $A$ is an $m\times n$ matrix, the singular values of $A$ are the square roots of the eigenvalues of $A^\top A$ and are denoted by $\sigma_1, \sigma_2, \ldots, \sigma_n$. It is conventional to arrange the singular values in descending order: $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$.
We can find the rank of $A$ by counting the number of non-zero singular values: from theorem 10.5.5 we have $\text{rank}(A) = \text{rank}(A^\top A)$. Moreover, with $u_i := \sigma_i^{-1}Av_i$,
$$Av_1 = \sigma_1u_1, \quad Av_2 = \sigma_2u_2 \implies A\begin{bmatrix} v_1 & v_2\end{bmatrix} = \begin{bmatrix} \sigma_1u_1 & \sigma_2u_2\end{bmatrix} = \begin{bmatrix} u_1 & u_2\end{bmatrix}\begin{bmatrix} \sigma_1 & 0\\ 0 & \sigma_2\end{bmatrix}$$
Now, we introduce the matrices $V = [v_1\; v_2]$ and $U = [u_1\; u_2]$, and the diagonal matrix $\Sigma$ containing $\sigma_1, \sigma_2$. The above equation then becomes
$$AV = U\Sigma \implies A = U\Sigma V^\top$$
And the decomposition in the box is the singular value decomposition of $A$. Why is $y_1$ orthogonal to $y_2$? To see this, suppose $v_i$ is the eigenvector of $A^\top A$ corresponding to the eigenvalue $\lambda_i$. Then, for $i \ne j$, we have
$$y_i\cdot y_j = (Av_i)\cdot(Av_j) = v_i^\top A^\top Av_j = \lambda_jv_i^\top v_j = 0$$
The final equality is due to the fact that the eigenvectors of the symmetric matrix $A^\top A$ are orthogonal.
If we ask what the length of $y = Ax$ is, then $A^\top A$ appears. Indeed, $\|y\|^2 = (Ax)^\top(Ax) = x^\top A^\top Ax$.
Example 10.18
Find a singular value decomposition of the following matrix:
$$A = \begin{bmatrix} 1 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
The first step is to consider the matrix $A^\top A$ and find its eigenvalues/eigenvectors:
$$A^\top A = \begin{bmatrix} 1 & 1 & 0\\ 1 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} \implies v_1 = \begin{bmatrix} 1/\sqrt 2\\ 1/\sqrt 2\\ 0\end{bmatrix}, \quad v_2 = \begin{bmatrix} 0\\ 0\\ 1\end{bmatrix}, \quad v_3 = \begin{bmatrix} -1/\sqrt 2\\ 1/\sqrt 2\\ 0\end{bmatrix}$$
with corresponding eigenvalues $\lambda_1 = 2$, $\lambda_2 = 1$, $\lambda_3 = 0$. Note that as $\text{rank}(A) = 2$, we have $\text{rank}(A^\top A) = 2$, thus one eigenvalue must be zero. Note also that as $A^\top A$ is symmetric, $\{v_i\}$ is an orthogonal set. Thus, $V$ and $\Sigma$ are given by ($\sigma_i = \sqrt{\lambda_i}$)
$$V = \begin{bmatrix} 1/\sqrt 2 & 0 & -1/\sqrt 2\\ 1/\sqrt 2 & 0 & +1/\sqrt 2\\ 0 & 1 & 0\end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \sqrt 2 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}$$
To find $U$, we find the $u_i$:
$$u_1 = \frac{1}{\sigma_1}Av_1 = (1, 0), \qquad u_2 = \frac{1}{\sigma_2}Av_2 = (0, 1)$$
These two vectors already form an orthonormal basis. Now we have $U$, $V$ and $\Sigma$, and the SVD of $A$ is:
$$\underbrace{\begin{bmatrix} 1 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}}_{A} = \underbrace{\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}}_{U}\underbrace{\begin{bmatrix} \sqrt 2 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}}_{\Sigma}\underbrace{\begin{bmatrix} 1/\sqrt 2 & 1/\sqrt 2 & 0\\ 0 & 0 & 1\\ -1/\sqrt 2 & 1/\sqrt 2 & 0\end{bmatrix}}_{V^\top}$$
Using Julia we can easily verify that the above is correct. Thus, we have computed the singular value decomposition of a rectangular matrix!
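For readers without Julia at hand, the same check in Python/NumPy (note that a library SVD may return singular vectors with signs different from the hand computation, so we compare the reconstructed product rather than the individual factors):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)

# Singular values are sqrt(2) and 1, as computed by hand:
assert np.allclose(s, [np.sqrt(2.0), 1.0])

# U * Sigma * V^T reproduces A (Sigma is 2x3 with s on the diagonal):
Sigma = np.zeros_like(A)
Sigma[[0, 1], [0, 1]] = s
assert np.allclose(U @ Sigma @ Vt, A)
```

The singular values agree with the hand computation, and the three factors multiply back to $A$.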
Hope that this example demonstrates what an SVD is. Now we give the formal definition of it, and then we need to prove that an SVD is always possible for any matrix.
Definition 10.12.2
Let $A$ be an $m\times n$ matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. Let $r$ denote the number of non-zero singular values of $A$. A singular value decomposition of $A$ is the following factorization: $A = U\Sigma V^\top$, where $U$ is an $m\times m$ orthogonal matrix, $V$ is an $n\times n$ orthogonal matrix and $\Sigma$ is an $m\times n$ diagonal matrix whose $i$th diagonal entry is the $i$th singular value $\sigma_i$ for $i = 1, 2, \ldots, r$. All other entries of $\Sigma$ are zero.
Proof. We now prove that we can always do an SVD for $A$. The idea of the proof is to show that for any vector $x \in \mathbb{R}^n$, we have $Ax = U\Sigma V^\top x$. If so, then of course $A = U\Sigma V^\top$. To this end, we start with $V^\top x$, then $\Sigma V^\top x$:
$$V^\top x = \begin{bmatrix} v_1^\top x\\ v_2^\top x\\ \vdots\\ v_n^\top x\end{bmatrix} \implies \Sigma V^\top x = \begin{bmatrix} \sigma_1v_1^\top x\\ \sigma_2v_2^\top x\\ \vdots\\ \sigma_rv_r^\top x\\ 0\\ \vdots\\ 0\end{bmatrix}$$
Now, we consider $U\Sigma V^\top x$, noting that $U$ contains $u_i = \sigma_i^{-1}Av_i$:
$$U\Sigma V^\top x = \sum_{i=1}^{r}\sigma_i(v_i^\top x)u_i = \sum_{i=1}^{r}(v_i^\top x)Av_i = \sum_{i=1}^{n}(v_i^\top x)Av_i = A\left(\sum_{i=1}^{n}v_iv_i^\top\right)x = Ax$$
So, in the third equality we just added a bunch of zero vectors: $Av_i = 0$ for $i > r$ because we have only $r$ non-zero singular values. The final equality comes from the fact that if $\{v_1, \ldots, v_n\}$ is an orthonormal basis then $v_1v_1^\top + \cdots + v_nv_n^\top = I$.
Left and right singular vectors. We have $A^\top A$ with the eigenvectors $v_k$. How about the $u_k$? Are they the eigenvectors of some matrix? The answer is yes: they are the eigenvectors of $AA^\top$. Maths is really nice, isn't it? The proof goes as
$$AA^\top u_k = AA^\top\left(\frac{1}{\sigma_k}Av_k\right) = \frac{1}{\sigma_k}A(A^\top Av_k) = \frac{1}{\sigma_k}A(\lambda_kv_k) = \lambda_ku_k$$
Geometry of the SVD. We have seen in Fig. 10.23 that the linear transformation $Ax$ transforms a circle in $\mathbb{R}^2$ into an ellipse in $\mathbb{R}^2$. With the SVD, it can be proved that an $m\times n$ matrix $A$ maps a unit sphere in $\mathbb{R}^n$ into an ellipsoid in $\mathbb{R}^m$. Consider a unit vector $x \in \mathbb{R}^n$ and its image $y = Ax \in \mathbb{R}^m$:
$$x = x_1v_1 + x_2v_2 + \cdots + x_nv_n \implies y = Ax = x_1\sigma_1u_1 + \cdots + x_r\sigma_ru_r$$
Writing $y = y_1u_1 + \cdots + y_ru_r$ with $y_i = \sigma_ix_i$, we get
$$\left(\frac{y_1}{\sigma_1}\right)^2 + \cdots + \left(\frac{y_r}{\sigma_r}\right)^2 = x_1^2 + \cdots + x_r^2 \le 1$$
The last inequality comes from $x$ being a unit vector. Now, if $r = n$ (i.e., the matrix $A$ has full column rank), then the above inequality becomes an equality, and thus the image $Ax$ is the surface of an ellipsoid. On the other hand, if $r < n$, then the image is a solid ellipsoid in $\mathbb{R}^m$.
We can even give a geometric interpretation of the different matrices in an SVD. For that we have to restrict ourselves to a plane. Start with a unit vector $x \in \mathbb{R}^2$. The transformation $Ax$ is $U\Sigma V^\top x$. From Section 10.6 on linear transformations we know that we're dealing with a composite transformation, and we handle it from right to left. So, we start with $V^\top x$, which is a rotation, thus we get a circle from a circle. But now we see the transformed circle in the plane in which the axes are $v_1$ and $v_2$ (Fig. 10.26). Then comes $\Sigma V^\top x$, which simply stretches (or sometimes shrinks) our circle to an ellipse. Finally, $U$ is a rotation, and we get an oblique ellipse as the final $Ax$.
A byproduct of this is that we are now able to compute $\|A\|_2$: it is simply $\sigma_1$, i.e. $\|A\|_2 = \sigma_1$.
The SVD can be written as a sum of rank-one matrices, $A = \sigma_1u_1v_1^\top + \cdots + \sigma_ru_rv_r^\top$. Similar to what we have done with Taylor series, we truncate this sum to get $A_k$, a rank-$k$ matrix:
$$A_k = \sigma_1u_1v_1^\top + \cdots + \sigma_ku_kv_k^\top$$
And we expect there is a truth relating $A$ and $A_k$. This truth was discovered by Schmidt in 1907, and was later proved by Eckart and Young in 1936 and by Mirsky in 1955. The result is now called the Eckart-Young-Mirsky theorem, stating that $A_k$ is the closest rank-$k$ matrix to $A$. Obviously, we need matrix norms to express this theorem:
Theorem 10.12.1: The Eckart-Young-Mirsky theorem
If $B$ has rank $k$ then
$$\|A - B\| \ge \|A - A_k\|, \qquad A_k = \sigma_1u_1v_1^\top + \cdots + \sigma_ku_kv_k^\top$$
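The theorem is easy to probe numerically: in the 2-norm, the error of the best rank-$k$ approximation is exactly $\sigma_{k+1}$ (a Python/NumPy sketch; the book's own listings use Julia, and the matrix $B$ below is just one crude rank-$k$ competitor):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # truncated SVD: best rank-k

# The 2-norm error of the best rank-k approximation is sigma_{k+1}:
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])

# A crude rank-k competitor (keep only the first k rows) does at least as badly:
B = np.zeros_like(A)
B[:k, :] = A[:k, :]                              # rank(B) <= k
assert np.linalg.norm(A - B, 2) >= np.linalg.norm(A - Ak, 2) - 1e-12
```

The second assertion is exactly the statement of the theorem for this particular competitor $B$.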
SVD in image compression. Suppose that the original image is a gray image of size $(512, 512)$, and we rebuild the image with 50 singular values. Then we only need to save $2\times 512\times 50 + 50$ numbers to rebuild the image, while the original image has $512\times 512$ numbers. Hence this gives us a compression ratio of 19.55% if we don't consider the storage type. Fig. 10.27 presents one example, and the code to produce it is given in Listing B.23.
Figure 10.27: From left to right: original image, 10, 50 and 100 singular values.
Contents
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
11.2 Numerical differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
11.3 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
11.4 Numerical integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
11.5 Numerical solution of ordinary differential equations . . . . . . . . . . . 839
11.6 Numerical solution of partial differential equations . . . . . . . . . . . . 849
11.7 Numerical optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
11.8 Numerical linear algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
Numerical analysis is an area of mathematics that creates, analyzes, and implements algorithms for obtaining numerical solutions to problems involving continuous variables. The Newton-Raphson method to numerically solve the equation $\tan x = x$ is one example. The Gauss quadrature method to numerically evaluate any definite integral $\int_a^b f(x)\,dx$ is another. The finite difference method to solve ordinary and partial differential equations is yet another example.
Numerical solutions are numbers, not closed-form expressions. For example, it is possible to solve the quadratic equation $ax^2 + bx + c = 0$ exactly to get the well-known closed-form solutions $x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. Such solutions do not exist for polynomial equations of fifth order or higher, nor for transcendental equations such as $\tan x = x$. However, the Newton-Raphson method can solve all these equations efficiently; but it only gives us numerical solutions. For example, applied to $\tan x = x$, it gives us 4.49340946.
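As a concrete illustration, here is a minimal Newton-Raphson iteration for $f(x) = \tan x - x$, whose derivative is $\sec^2 x - 1 = \tan^2 x$ (a Python sketch; the starting guess 4.5 is simply chosen near the root):

```python
import math

# Newton-Raphson for f(x) = tan(x) - x, with f'(x) = tan(x)^2.
x = 4.5                        # starting guess near the first nonzero root
for _ in range(100):
    t = math.tan(x)
    step = (t - x) / (t * t)   # f(x) / f'(x)
    x -= step
    if abs(step) < 1e-12:
        break

# x is now approximately 4.49340946, the first positive root of tan x = x
```

Each iteration roughly doubles the number of correct digits once the guess is close, which is why only a handful of steps are needed.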
The following books were consulted for the majority of the material presented in this chapter:
813
Chapter 11. Numerical analysis 814
Finite Difference Computing with PDEs: A Modern Software Approach by Hans Petter
Langtangen and Svein Linge, [32],
Computational Fluid Dynamics: The Basics with Applications by John Anderson [1]
I strongly recommend the book of Anderson; it is so well written and a joy to read. Even though it addresses numerical methods to solve the Navier-Stokes equations (which are of interest mostly to people fascinated by the behavior of fluids), he explains things so clearly.
11.1 Introduction
Suppose we have to compute the following sum, for many values of $\alpha$:
$$f(\alpha) = \sum_{k=0}^{3} a_k \cos(k\alpha) \qquad (11.1.1)$$
The first solution is, of course, to compute term by term and add them up. What would you think if someone told you that there is a much, much better method to compute $f(\alpha)$? The secret is that $\cos\alpha$, $\cos 2\alpha$ and so on are all related. Recall that we have derived such a relation in Eq. (3.7.20), re-given here:
$$\cos k\alpha = 2\cos\alpha\,\cos(k-1)\alpha - \cos(k-2)\alpha \qquad (11.1.2)$$
A name was given to such a formula as it occurs a lot in mathematics: it is known as a three-term recurrence relation because it involves three terms. Even with the hint that this
recurrence is the key to an efficient computation of the mentioned sum, it is really hard to know
where to start. Unless you know where to look for inspiration, and it comes in the name of the
Horner method in polynomial evaluation.
Horner’s method. In Section 2.28.3, the Horner method was presented as an efficient way to
evaluate any polynomial at a point x0 . As a recap, let’s consider a specific cubic polynomial
$p(x) = 2x^3 - 6x^2 + 2x + 1$. In Horner's method, we massage $p(x_0)$ a bit as:
$$\begin{aligned}
b_3 &= a_3 & b_3 &= 2\\
b_2 &= x_0 b_3 + a_2 & b_2 &= 2x_0 - 6\\
b_1 &= x_0 b_2 + a_1 & b_1 &= x_0(2x_0 - 6) + 2\\
b_0 &= x_0 b_1 + a_0 & b_0 &= x_0\big(x_0(2x_0 - 6) + 2\big) + 1
\end{aligned}$$
where the left column is for a general cubic polynomial whereas the right column is for the
specific p.x/ D 2x 3 6x 2 C 2x C 1. Then, p.x0 / D b0 . As to finding the consecutive b-values,
we start with determining b3 , which is simply equal to a3 . We then work our way down to the
other b's, using the recursive formula
$$b_{k-1} = a_{k-1} + b_k x_0$$
until we arrive at $b_0$. This relation can also be written as
$$b_k = a_k + b_{k+1} x_0 \qquad (11.1.3)$$
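The recurrence above fits in a few lines of code. Here is a minimal sketch in Python (the book's own listings are in Julia; the function name `horner` and the coefficient ordering, from $a_0$ up to $a_n$, are my choices):

```python
def horner(coeffs, x0):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n at x0 using the
    recurrence b_k = a_k + b_{k+1} * x0 of Eq. (11.1.3)."""
    b = coeffs[-1]                   # b_n = a_n
    for a in reversed(coeffs[:-1]):
        b = a + b * x0               # b_k = a_k + b_{k+1} x0
    return b                         # b_0 = p(x0)

# p(x) = 2x^3 - 6x^2 + 2x + 1, coefficients listed from a_0 to a_3
print(horner([1, 2, -6, 2], 3.0))    # -> 7.0, i.e. p(3)
```

Note that each step costs one multiplication and one addition, versus the naive evaluation which recomputes every power of $x_0$.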
But what is the relation between the sum in Eq. (11.1.1) and a polynomial? To see that
relation, we need to write the polynomial using the sum notation:
$$p_n(x_0) = \sum_{k=0}^{n} a_k x_0^k, \qquad x^k = x \cdot x^{k-1}$$
Now, we can see that the sum in Eq. (11.1.1) and a polynomial are of the same form
$$f(x) = \sum_{k=0}^{n} a_k\, \phi_k(x) \qquad (11.1.4)$$
where $\phi_k(x)$ has either a three-term recurrence relation or a two-term recurrence relation (in the case $\phi_k(x) = x^k$).
Inspired by Eq. (11.1.3), we define the sequence of $b_k$'s by $b_k = a_k + 2\cos\alpha\, b_{k+1} - b_{k+2}$ (with $b_4 = b_5 = 0$); the only difference from Horner's recurrence is the extra term, which is related to $\cos(k-2)\alpha$ in the three-term recurrence relation (and of course $2\cos\alpha$ replaces $x_0$). Solving for the $a_k$'s:
$$\begin{aligned}
a_3 &= b_3\\
a_2 &= b_2 - 2\cos\alpha\, b_3\\
a_1 &= b_1 + b_3 - 2\cos\alpha\, b_2\\
a_0 &= b_0 + b_2 - 2\cos\alpha\, b_1
\end{aligned}$$
Substituting the $a_i$'s into Eq. (11.1.1) and re-arranging the terms in the form $b_0 + b_1(\cdot) + b_2(\cdot) + b_3(\cdot)$:
$$\begin{aligned}
f(\alpha) &= \sum_{k=0}^{3} a_k \cos(k\alpha)\\
&= (b_0 + b_2 - 2\cos\alpha\, b_1) + (b_1 + b_3 - 2\cos\alpha\, b_2)\cos\alpha\\
&\quad + (b_2 - 2\cos\alpha\, b_3)\cos 2\alpha + b_3 \cos 3\alpha\\
&= b_3(\cos 3\alpha + \cos\alpha - 2\cos\alpha\cos 2\alpha) + b_2(\cos 2\alpha + 1 - 2\cos^2\alpha)\\
&\quad + b_1(-\cos\alpha) + b_0
\end{aligned}$$
Amazingly, the terms multiplying $b_3$ and $b_2$ are zero because of Eq. (11.1.2), thus the scary sum is finally equal to this simple expression:
$$\sum_{k=0}^{3} a_k \cos(k\alpha) = b_0 - b_1 \cos\alpha \qquad (11.1.6)$$
This is Clenshaw's algorithm, named after the English mathematician Charles William Clenshaw (1926–2004), who published this method in 1955.
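The whole algorithm is just the backward recurrence followed by the final combination $b_0 - b_1\cos\alpha$. A minimal Python sketch (function name `cos_sum` is mine; the book's listings are in Julia), checked against the direct term-by-term sum:

```python
import math

def cos_sum(a, alpha):
    """Clenshaw's algorithm for f(alpha) = sum_k a[k] * cos(k*alpha).
    Runs b_k = a_k + 2 cos(alpha) b_{k+1} - b_{k+2} backwards,
    then returns b_0 - b_1 cos(alpha), Eq. (11.1.6)."""
    c = math.cos(alpha)
    b1 = b2 = 0.0
    for a_k in reversed(a[1:]):
        b1, b2 = a_k + 2.0 * c * b1 - b2, b1
    b0 = a[0] + 2.0 * c * b1 - b2
    return b0 - b1 * c

alpha = 0.7
a = [1.0, 0.5, -0.3, 2.0]
direct = sum(ak * math.cos(k * alpha) for k, ak in enumerate(a))
print(abs(cos_sum(a, alpha) - direct) < 1e-12)   # True
```

The gain is that only one cosine, $\cos\alpha$, is ever evaluated, no matter how many terms the sum has.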
typically represent the solution as a discrete approximation that is defined on a grid. Since we
then have to evaluate derivatives at the grid points, we need to be able to come up with methods
for approximating the derivatives at these points, and, this will typically be done using only
values that are defined on that grid.
$$\text{forward difference:}\quad f'(x) \approx \frac{f(x+h) - f(x)}{h}, \qquad \text{error } -\frac{f''(\xi)}{2}\,h$$
$$\text{backward difference:}\quad f'(x) \approx \frac{f(x) - f(x-h)}{h}$$
Since the approximations are obtained by truncating the term $\frac{f''(\xi)}{2!}h^2$ from the exact formula (Eq. (11.2.1)), this term is the error in our approximations, and is called the truncation error.
When the truncation error is of the order of O.h/, we say that the method is a first order method.
We refer to a method as a pth-order method if the truncation error is of the order of O.hp /. The
forward difference was used to develop the famous Euler’s method which is commonly used to
solve ordinary differential equations.
To develop a 2nd-order method we use more terms in the Taylor series, including $f''(x)$:
$$\begin{aligned}
f(x+h) &= f(x) + f'(x)h + \frac{f''(x)}{2!}h^2 + \frac{f'''(\xi_1)}{3!}h^3, & \xi_1 \in (x, x+h)\\
f(x-h) &= f(x) - f'(x)h + \frac{f''(x)}{2!}h^2 - \frac{f'''(\xi_2)}{3!}h^3, & \xi_2 \in (x-h, x)
\end{aligned} \qquad (11.2.2)$$
And subtracting the second from the first, we arrive at
$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{f'''(\xi_1) + f'''(\xi_2)}{12}h^2$$
which yields the so-called centered difference for the 1st derivative:
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h} \qquad (11.2.3)$$
This approximation is a 2nd order method by construction, as the error is $O(h^2)$. To demonstrate the performance of these approximations, consider the function $f(x) = \sin x + \cos x$ and
Table 11.1: Finite difference approximations of f 0 .x/ for f .x/ D sin x C cos x. Errors of one-sided
differences (forward/backward) versus two-sided centered difference.
we compute $f'(0)$ and the errors (noting that the exact value is 1). The results are shown in Table 11.1.
The result clearly indicates that as h is halved, the error of one-sided differences is only
halved (in Table 11.1, starting from the first row and going down, each time h is half of the
previous row), but the error of centered difference is decreased four times.
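The numbers in Table 11.1 are easy to reproduce. A small Python sketch (my own naming; the book's listings are in Julia) that prints the one-sided and centered errors for a sequence of halved $h$'s:

```python
import math

f = lambda x: math.sin(x) + math.cos(x)   # exact f'(0) = 1

def forward(f, x, h):
    return (f(x + h) - f(x)) / h          # first order, Eq. above

def centered(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # second order, Eq. (11.2.3)

# halve h and watch the errors: forward halves, centered quarters
for h in (0.1, 0.05, 0.025):
    print(h, abs(forward(f, 0.0, h) - 1.0), abs(centered(f, 0.0, h) - 1.0))
```

Running this shows the forward error shrinking roughly by 2 and the centered error by 4 each time $h$ is halved, exactly as the table indicates.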
$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2) \qquad (11.2.5)$$
This approximation was used to develop the famous Verlet method, which is commonly used to solve Newton's equations of motion $F = ma$.
11.3 Interpolation
Assume that we are back in the time of no calculators and no formula for calculating sine. Luckily, some people made up a table of sines of $1°, 5°, 10°, 15°,\ldots$ But we need $\sin 2°$. What are we going to do? We will use a method that has evolved into what we know today as interpolation. In the first attempt, we assume that the two data points $(1°, \sin 1°)$ and $(5°, \sin 5°)$ are connected by a line. We can determine the equation of this line, let's call it $f(x)$ (because it is straightforward). Having such an equation, it is a simple task to compute $\sin 2°$: it is $f(2°)$.
Then, we realize that our assumption was too crude. In need of higher accuracy, instead of a line joining two points, we assume a parabola joining three data points. Generally, interpolation is where an approximating function is constructed in such a way as to agree perfectly with the (usually unknown) original function at the given measurement/data points.
Here is another situation where we need interpolation. Suppose that we have a very complex function $y = f(x)$ that we do not want to work with directly. We can generate some data points $(x_i, f(x_i))$ and use them to build an interpolating function that matches $f(x)$ only at the $x_i$. The point is that the interpolating function is usually simple to work with, e.g. it is a polynomial.
where $u(x_1) = 1$, $u(x_2) = 0$ and $u(x_3) = 0$. The following form satisfies the last two conditions:
$$u(x) = \frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)} \qquad (11.3.3)$$
Similarly, we get the expressions for $v(x)$ and $w(x)$:
$$v(x) = \frac{(x-x_1)(x-x_3)}{(x_2-x_1)(x_2-x_3)}, \qquad w(x) = \frac{(x-x_1)(x-x_2)}{(x_3-x_1)(x_3-x_2)}$$
At this point, we should check whether what we have observed, $u(x) + v(x) = 1$, continues to hold; that is, whether $u(x) + v(x) + w(x) \overset{?}{=} 1$. The algebra might be messy, but the identity holds.
Now, we can write the equation for a 17th degree polynomial passing through 18 points. But the equation would be lengthy, so we need to introduce some shorthand notation. First, for $n+1$ points $(x_0, y_0), \ldots, (x_j, y_j), \ldots, (x_n, y_n)$ the interpolating polynomial is given by
$$y(x) = \sum_{i=0}^{n} l_i(x)\, y_i \qquad (11.3.5)$$
What is this? It is (AGAIN!) a linear combination of some functions $l_i(x)$ with coefficients being $y_i$. In this equation, $l_i(x)$ is written as (after examining the form of $u, v, w$; see again Eq. (11.3.3))
$$l_i(x) = \prod_{\substack{j=0\\ j\ne i}}^{n} \frac{x - x_j}{x_i - x_j} \qquad (11.3.6)$$
and are the so-called Lagrange basis polynomials. Plots of linear and quadratic Lagrange polynomials are given in Fig. 11.1. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by the English mathematician Edward Waring (1736–1798). We mention this not to imply that Lagrange is not great; he is one of the greatest of all time. It is just that sometimes credit was not given to the first discoverer. On this topic, some more examples: the Lagrange interpolation formula was discovered by Waring, the Gibbs phenomenon was discovered by Wilbraham, and the Hermite integral formula is due to Cauchy. These are just some of the instances of Stigler's law in approximation theory.
Stigler’s law of eponymy, proposed by statistician Stephen Stigler in his 1980 publication Stigler’s law of
eponymy, states that no scientific discovery is named after its original discoverer. Examples include Hubble’s law,
which was derived by Georges Lemaître two years before Edwin Hubble, the Pythagorean theorem, which was
known to Babylonian mathematicians before Pythagoras, and Halley’s Comet, which was observed by astronomers
since at least 240 BC. Stigler himself named the sociologist Robert K. Merton as the discoverer of "Stigler’s law"
to show that it follows its own decree, though the phenomenon had previously been noted by others.
Figure 11.1: Plots of linear and quadratic Lagrange basis functions in $[-1, 1]$. It is clear that $l_i(x_j) = \delta_{ij}$.
Example. There are 7 data points given in Table 11.2, and we use Lagrange interpolation to find the 6th degree polynomial passing through all these points. As I am lazy (already in my early 40s when doing this), I did not explicitly compute the $l_i(x)$. Instead I wrote a Julia code (Listing B.12) and with it got Fig. 11.2: a nice curve joining all the points.
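Listing B.12 is in Julia; for readers who prefer Python, here is a minimal sketch of Eqs. (11.3.5)-(11.3.6) applied to the Table 11.2 data (the function name `lagrange` is my own):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial, Eq. (11.3.5), at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # basis l_i(x), Eq. (11.3.6)
        total += li * yi
    return total

# the 7 data points of Table 11.2
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794]
print(lagrange(xs, ys, 2.0))   # -> 0.9093, reproducing the data point exactly
```

Evaluating at any node $x_j$ returns $y_j$ exactly, because $l_i(x_j) = \delta_{ij}$.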
Table 11.2: The 7 data points.
x:    0, 1, 2, 3, 4, 5, 6
f(x): 0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794
Figure 11.2: Lagrange interpolating function.
$$f(x) = \frac{1}{1 + 25x^2} \qquad (11.3.7)$$
And we want to use equidistant points $x_i$ between -1 and 1 such that
$$x_i = -1 + \frac{2i}{n}, \qquad i = 0, 1, 2, \ldots, n$$
Phu Nguyen, Monash University © Draft version
to construct Lagrange polynomials that can capture this function. We hope that a 5th degree Lagrange polynomial can fit Runge's function, but it does not do a good job. Well, after all it uses just 6 points. We then use 10 points to get a 9th degree Lagrange polynomial, and this is even worse: there are oscillations at the edges of the interval, even though away from the edges the approximation is quite good.
Figure 11.3: Runge's phenomenon. This happens only for high order polynomials and equi-spaced points.
This is Runge’s phenomenon as it was discovered by the German mathematician Carl David Tolmé Runge
(1856–1927) in 1901 when exploring the behavior of errors when using polynomial interpolation to
approximate certain functions. The discovery was important because it shows that going to higher degrees
does not always improve accuracy. Note that this phenomenon is similar to the Gibbs phenomenon in
Fourier series (Section 4.18).
Figure 11.4: Derivatives of the Runge function $f(x) = 1/(1+25x^2)$: the 2nd and the 6th derivatives (the latter reaching magnitudes of order $10^7$). Note that I used SymPy to automatically compute $f^{(m)}(x)$ and evaluate the resulting expression at sampling points in $[-1, 1]$ to generate these plots. We should take advantage of a CAS to focus on other things.
$$R(x) := f(x) - p(x) = \frac{f^{(m)}(\xi)}{m!}(x-x_1)(x-x_2)\cdots(x-x_m) \qquad (11.3.8)$$
for some $\xi \in [a, b]$. It follows that
$$|f(x) - p(x)| \le \frac{|\pi(x)|}{m!} \max_{y\in[a,b]} |f^{(m)}(y)|, \qquad \pi(x) = \prod_{i=1}^{m}(x - x_i) \qquad (11.3.9)$$
And this theorem explains the Runge phenomenon, in which the derivatives blow up (Fig. 11.4). Note that $\pi(x)$ is a monic polynomial, which is a single-variable polynomial in which the leading coefficient (the nonzero coefficient of highest degree) is equal to 1. An $n$-degree monic polynomial has the form $x^n + c_{n-1}x^{n-1} + \cdots + c_2 x^2 + c_1 x + c_0$.
Properties. If $f(x)$ is a polynomial of degree less than or equal to $n$, and we use $n+1$ points $(x_i, f(x_i))$ to construct a Lagrange interpolating function $y(x)$, then $y(x) \equiv f(x)$; in other words, the Lagrange interpolation is exact. Another property is that the polynomial interpolant is unique. And this uniqueness allows us to state that $\sum_i l_i(x) = 1$ for all $x$, a fact that we have observed for $n = 2$ and $n = 3$. To summarize, the Lagrange basis functions have two properties: $l_i(x_j) = \delta_{ij}$ and $\sum_i l_i(x) = 1$.
Motivation. If you're wondering whether there always exists a polynomial that can approximate a given function, rest assured, thanks to the Weierstrass approximation theorem: let $f$ be a real-valued continuous function defined on an interval $[a, b]$ of $\mathbb{R}$. Then, for any $\epsilon > 0$, there exists a polynomial $p(x)$ such that $|f(x) - p(x)| < \epsilon$ for all $x$ in $[a, b]$.
This theorem does not tell us what the expression of $p(x)$ is; you have to find it for yourself! But it motivates mathematicians: if you work hard, you can find a polynomial that approximates any continuous function well.
Vandermonde matrix. Let's attack the interpolation problem directly. We use an $n$ degree polynomial of the form
$$P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$
to interpolate the $n+1$ points $(x_i, y_i)$, $i = 0, 1, 2, \ldots$ We have this system of linear equations to solve for the coefficients $a_i$:
$$\begin{aligned}
a_0 + a_1 x_0 + a_2 x_0^2 + \cdots + a_n x_0^n &= y_0\\
a_0 + a_1 x_1 + a_2 x_1^2 + \cdots + a_n x_1^n &= y_1\\
&\;\,\vdots\\
a_0 + a_1 x_n + a_2 x_n^2 + \cdots + a_n x_n^n &= y_n
\end{aligned}$$
which can be re-written in matrix notation as
$$\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n\\
1 & x_1 & x_1^2 & \cdots & x_1^n\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{bmatrix}
\begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_n \end{bmatrix} =
\begin{bmatrix} y_0\\ y_1\\ \vdots\\ y_n \end{bmatrix} \qquad (11.3.11)$$
This beautiful matrix is the Vandermonde matrix, named after Alexandre-Théophile Vandermonde (1735–1796), a French mathematician, musician and chemist. Now, as an exercise in determinants, we're going to compute the determinant of the Vandermonde matrix. It's easier to deal with its transpose, so we consider:
$$\mathbf{V} = \begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n\\
1 & x_1 & x_1^2 & \cdots & x_1^n\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{bmatrix}
\Longrightarrow
\mathbf{V}^\top = \begin{bmatrix}
1 & 1 & \cdots & 1\\
x_0 & x_1 & \cdots & x_n\\
x_0^2 & x_1^2 & \cdots & x_n^2\\
\vdots & \vdots & \ddots & \vdots\\
x_0^n & x_1^n & \cdots & x_n^n
\end{bmatrix}$$
Now, we consider a simpler problem with only 4 points:
$$\det \mathbf{V}^\top = \begin{vmatrix}
1 & 1 & 1 & 1\\
x_0 & x_1 & x_2 & x_3\\
x_0^2 & x_1^2 & x_2^2 & x_3^2\\
x_0^3 & x_1^3 & x_2^3 & x_3^3
\end{vmatrix} = \begin{vmatrix}
1 & 1 & 1 & 1\\
0 & x_1 - x_0 & x_2 - x_0 & x_3 - x_0\\
0 & x_1^2 - x_1 x_0 & x_2^2 - x_2 x_0 & x_3^2 - x_3 x_0\\
0 & x_1^3 - x_1^2 x_0 & x_2^3 - x_2^2 x_0 & x_3^3 - x_3^2 x_0
\end{vmatrix}$$
where the second row was replaced by 2nd row minus x0 times the 1st row; the third row by
the third row minus x0 times the second row and so on. Now, of course we expand by the first
column and do some factorizations to get
$$\det \mathbf{V}^\top = \begin{vmatrix}
x_1 - x_0 & x_2 - x_0 & x_3 - x_0\\
x_1^2 - x_1 x_0 & x_2^2 - x_2 x_0 & x_3^2 - x_3 x_0\\
x_1^3 - x_1^2 x_0 & x_2^3 - x_2^2 x_0 & x_3^3 - x_3^2 x_0
\end{vmatrix}
= (x_1 - x_0)(x_2 - x_0)(x_3 - x_0)\begin{vmatrix}
1 & 1 & 1\\
x_1 & x_2 & x_3\\
x_1^2 & x_2^2 & x_3^2
\end{vmatrix}$$
Now the remaining determinant, itself a smaller Vandermonde determinant, should not be a problem for us; we can write the answer immediately:
$$\begin{vmatrix}
1 & 1 & 1\\
x_1 & x_2 & x_3\\
x_1^2 & x_2^2 & x_3^2
\end{vmatrix} = (x_2 - x_1)(x_3 - x_1)\begin{vmatrix} 1 & 1\\ x_2 & x_3 \end{vmatrix}
= (x_2 - x_1)(x_3 - x_1)(x_3 - x_2)$$
As the $x_i$'s are distinct, the determinant is different from zero, thus the Vandermonde matrix is invertible, and Eq. (11.3.11) has a unique solution. In other words, there is only one single polynomial passing through all data points.
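The direct Vandermonde approach is also easy to try numerically. A small Python/NumPy sketch (my own naming; not the book's Julia listing) that builds the matrix of Eq. (11.3.11) for the Table 11.2 data and solves for the coefficients:

```python
import numpy as np

# the Table 11.2 data points
xs = np.array([0.0, 1, 2, 3, 4, 5, 6])
ys = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

V = np.vander(xs, increasing=True)   # rows [1, x_i, x_i^2, ..., x_i^n]
a = np.linalg.solve(V, ys)           # coefficients a_0, ..., a_n

# the resulting polynomial passes through every data point
p = np.polynomial.polynomial.polyval(xs, a)
print(np.allclose(p, ys))            # True
```

In practice Vandermonde matrices become badly conditioned as the degree grows, which is one more reason the Lagrange and barycentric forms are preferred for high degrees.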
The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways; in this section the polynomials are defined by starting with trigonometric functions. Recall that $\cos(n\alpha)$ is a polynomial in terms of $\cos\alpha$, e.g. $\cos 3\alpha = 4(\cos\alpha)^3 - 3\cos\alpha$. For a fixed counting number $n$, the Chebyshev polynomial of the first kind is defined to be that polynomial of cosine:
$$T_n(\cos\alpha) = \cos(n\alpha)$$
With the change of variable $x = \cos\alpha$, we get
$$T_n(x) = \cos(n \arccos x) \qquad (11.3.13)$$
These polynomials were named after Pafnuty Chebyshev. The letter T was used by Bernstein because of the alternative transliterations of the name Chebyshev as Tchebycheff, Tchebyshev (French) or Tschebyschow (German). Pafnuty Lvovich Chebyshev (1821–1894) was a Russian mathematician, considered to be the founding father of Russian mathematics.
The recursive definition of $T_n(x)$ follows from the recursive formula for $\cos n\alpha$:
$$T_n(x) = \begin{cases}
1, & \text{if } n = 0\\
x, & \text{if } n = 1\\
2x\,T_{n-1}(x) - T_{n-2}(x), & \text{if } n \ge 2
\end{cases} \qquad (11.3.14)$$
The first few Chebyshev polynomials, obtained using Eq. (11.3.14), are
$$\begin{aligned}
T_0(x) &= 1\\
T_1(x) &= x\\
T_2(x) &= 2x^2 - 1 = 2^1 x^2 - 1\\
T_3(x) &= 4x^3 - 3x = 2^2 x^3 - 3x\\
T_4(x) &= 8x^4 - 8x^2 + 1 = 2^3 x^4 - 8x^2 + 1
\end{aligned} \qquad (11.3.15)$$
From this, we can see that $T_n(x)$ is an $n$-degree polynomial. Plots of the first few $T_n(x)$ are given in Fig. 11.5. We can see that $|T_n(x)| \le 1$ for $x \in [-1, 1]$. Furthermore, the leading coefficient of $T_n(x)$ is $2^{n-1}$.
Chebyshev nodes are the roots of the Chebyshev polynomial of the first kind of degree $n$. To find the roots, just use Eq. (11.3.13):
$$T_n(x) = 0 \iff \cos(n \arccos x) = 0 \iff n \arccos x = \frac{\pi}{2} + k\pi$$
Therefore, for a given positive integer $n$ the Chebyshev nodes in the interval $(-1, 1)$ are
$$x_k = \cos\left(\frac{k + \tfrac{1}{2}}{n}\pi\right), \qquad k = 0, 1, \ldots, n-1 \qquad (11.3.16)$$
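Both the recurrence (11.3.14) and the nodes (11.3.16) take only a few lines to implement. A Python sketch (function names are mine; the book's listings are in Julia) that also verifies that every node of $T_5$ is indeed a root:

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the three-term recurrence of Eq. (11.3.14)."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0   # T_k = 2x T_{k-1} - T_{k-2}
    return t1

def chebyshev_nodes(n):
    """Roots of T_n, Eq. (11.3.16)."""
    return [math.cos((k + 0.5) * math.pi / n) for k in range(n)]

# every node of T_5 is a root of T_5 (up to rounding)
print(all(abs(chebyshev_T(5, xk)) < 1e-12 for xk in chebyshev_nodes(5)))   # True
```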
Figure 11.5: Plots of the Chebyshev polynomials $T_0, T_1, T_3, T_4, T_5$ on $[-1, 1]$.
Figure 11.6: The Chebyshev nodes $x_k$ in $(-1, 1)$.
$$T_n(x) = 2^{n-1}(x - x_1)(x - x_2)\cdots(x - x_n)$$
If we use the Chebyshev nodes in a polynomial approximation, then Eq. (11.3.9) gives us
$$|f(x) - p(x)| \le \frac{1}{n!\, 2^{n-1}} \max_{y\in[a,b]} |f^{(n)}(y)| \qquad (11.3.17)$$
And we hope that the denominator, with its $n!$ and $2^{n-1}$, will dominate when $n$ is large, so that the error $|f(x) - p(x)|$ decreases to zero and we have a better approximation. We try this with the Runge function, and the results shown in Fig. 11.7 confirm our analysis.
Figure 11.7: Approximation of Runge’s function using Chebyshev nodes: 10 nodes (red points) and 20
nodes. No more oscillation near -1 and 1.
Recall that
$$I = \int_0^{\pi} \cos n\alpha \cos m\alpha \, d\alpha = 0 \quad (m \ne n) \qquad (11.3.18)$$
A change of variable from $\alpha$ to $x$:
$$x = \cos\alpha \Longrightarrow dx = -\sin\alpha\, d\alpha = -\sqrt{1 - x^2}\, d\alpha$$
$$\lambda_1 = \frac{1}{(x_1-x_2)(x_1-x_3)}, \qquad \lambda_2 = \frac{1}{(x_2-x_1)(x_2-x_3)}, \qquad \lambda_3 = \frac{1}{(x_3-x_1)(x_3-x_2)}$$
$$y = l(x)\left[\frac{\lambda_1}{x-x_1}y_1 + \frac{\lambda_2}{x-x_2}y_2 + \frac{\lambda_3}{x-x_3}y_3\right]$$
And thus, for the general case, the new form of the Lagrange interpolation is given by (first done by Jacobi in his PhD thesis)
$$y(x) = l(x)\sum_{i=0}^{n}\frac{\lambda_i}{x-x_i}y_i, \qquad l(x) = \prod_{i=0}^{n}(x-x_i), \qquad \lambda_i = \frac{1}{\prod_{j\ne i}(x_i-x_j)} \qquad (11.3.21)$$
It can be seen that, in this form, the Lagrange basis $l_i(x)$ is written as
$$l_i(x) = l(x)\frac{\lambda_i}{x-x_i} \qquad (11.3.22)$$
To test the efficiency of this new form, one can use random data. For example, in Fig. 11.8, 80 random values $y_i$ in $[-1, 1]$ were generated at 80 Chebyshev nodes. Then, Eq. (11.3.21) was used to compute $y(x)$ at 2001 drawing points to get the interpolating polynomial (the blue curve in the figure). The new form is about 1.5 times faster than the original form.
But that's not the end of the story. We can massage the formula to get more out of it. Using the partition-of-unity (PoU) property of the $l_i(x)$, we can find a formula for $l(x)$:
$$\sum_i l_i(x) = 1 \Longrightarrow l(x)\sum_{i=0}^{n}\frac{\lambda_i}{x-x_i} = 1 \Longrightarrow l(x) = \frac{1}{\sum_{i=0}^{n}\dfrac{\lambda_i}{x-x_i}} \qquad (11.3.23)$$
Substituting this $l(x)$ into Eq. (11.3.21) gives
$$y(x) = \frac{\displaystyle\sum_{i=0}^{n}\frac{\lambda_i}{x-x_i}y_i}{\displaystyle\sum_{i=0}^{n}\frac{\lambda_i}{x-x_i}} \qquad (11.3.24)$$
Figure 11.8: A Lagrange interpolating polynomial through 80 random values at 80 Chebyshev nodes. The
solid red dots are the data points.
What's special about this form, besides the fact that it is more efficient than the previous forms? Actually, this formula has a form that most of us are familiar with. To show that, let's introduce this symbol:
$$w_i = \frac{\lambda_i}{x - x_i} \qquad (11.3.25)$$
Eq. (11.3.24) then becomes
$$y(x) = \frac{\sum_{i=0}^{n} w_i y_i}{\sum_{i=0}^{n} w_i} \qquad (11.3.26)$$
This has exactly the same form as the center of mass in physics (see Eq. (7.8.17)) if we think of the $w_i$ as the masses of particles. Barycenter is the term used in astrophysics for the center of mass of two or more bodies orbiting each other. Therefore, Eq. (11.3.24) is called the barycentric form.
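The barycentric form (11.3.26) can be sketched as follows in Python (my own naming; the book's listings are in Julia). The weights $\lambda_i$ are precomputed once; each evaluation then costs only $O(n)$:

```python
def bary_weights(xs):
    """lambda_i = 1 / prod_{j != i} (x_i - x_j), from Eq. (11.3.21)."""
    w = []
    for i, xi in enumerate(xs):
        p = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                p *= (xi - xj)
        w.append(1.0 / p)
    return w

def bary_eval(xs, ys, lam, x):
    """Second barycentric form, Eq. (11.3.26)."""
    num = den = 0.0
    for xi, yi, li in zip(xs, ys, lam):
        if x == xi:            # exactly at a node: return the data value
            return yi
        t = li / (x - xi)      # the 'mass' w_i of Eq. (11.3.25)
        num += t * yi
        den += t
    return num / den

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]     # y = x^2 + 1 sampled at the nodes
lam = bary_weights(xs)
print(bary_eval(xs, ys, lam, 1.5))   # -> 3.25 (the interpolant reproduces x^2 + 1)
```

The guard `x == xi` is needed because the formula divides by $x - x_i$; at a node the interpolant simply equals the data value.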
$$I(n) = \sum_{i=1}^{n} \frac{1}{n}\cdot\frac{i}{n} = \frac{1}{n^2}\sum_{i=1}^{n} i = \frac{n(n+1)}{2n^2} \qquad (11.4.1)$$
where we have used the formula for the sum of the first $n$ integers, see Eq. (2.5.2). For various values of $n$, the corresponding values of $I(n)$ are given in Table 11.3. We can observe a few things from this table. First, $I(n)$ always overestimates $I$; this should be obvious by looking at Fig. 11.9. Second, we need 500 000 intervals to get an accuracy of 6 decimals, which is not practically useful. Note that for a general function it is impossible to have a final formula for $I(n)$ as in Eq. (11.4.1); instead we have to compute $I(n)$ as $\Delta\sum_i f(x_i)$. With $n = 500\,000$ we need that many function evaluations $f(x_i)$ and as many multiplications. That's a lot of work for a simple function!
Table 11.3: Numerical integration of $\int_0^1 x\,dx$. Exact value is 0.5.
As for any approximation, we need to know the error associated with our numerical integral. Looking at Fig. 11.9, the error is obviously
$$E(3) = E_1 + E_2 + E_3 = \frac{\Delta}{2}\big[(y_1 - y_0) + (y_2 - y_1) + (y_3 - y_2)\big] = \frac{\Delta}{2}(y_3 - y_0) = \frac{\Delta}{2} \qquad (11.4.2)$$
This can be generalized to $E(n) = 0.5\Delta$. The data (last row in Table 11.3) confirm this. Now we can understand why the sequence $I(n)$ converges slowly to 0.5: the error is proportional only to $\Delta$.
$$M(n) = \sum_{i=0}^{n-1} \Delta\, f\!\left(\frac{(2i+1)\Delta}{2}\right) \qquad (11.4.3)$$
We use the symbol $M(n)$ to remind us it is a mid-point rule. It can be seen from Fig. 11.10 that this mid-point rule gives the exact value of $\int_0^1 x\,dx$. We can also get the same value algebraically.
Let's see the performance of the mid-point rule for a harder function: $y = x^2$. The results, given in Table 11.4, indicate that it is a 2nd order method.
Table 11.4: Performance of the mid-point rule for $\int_0^1 x^2\,dx$ (exact value is 1/3).
$$\begin{aligned}
T(n) &= \frac{\Delta}{2}\big[(y_0 + y_1) + (y_1 + y_2) + \cdots + (y_{n-1} + y_n)\big]\\
&= \frac{\Delta}{2}\big[y_0 + 2y_1 + 2y_2 + \cdots + 2y_{n-1} + y_n\big]
\end{aligned} \qquad (11.4.4)$$
In Table 11.5 we compare the mid-point rule and the trapezoidal rule for $\int_0^1 x^2\,dx$. Both are 2nd order methods, but still not efficient, as we need 100 intervals just for an accuracy of 6 decimals. We need better methods. To get better methods, we need to change our point of view: all the methods discussed so far focus on the way the area of each thin slice is computed; the integrand $y = f(x)$ was not touched!
Table 11.5: Performance of the mid-point rule versus the trapezoidal rule for $\int_0^1 x^2\,dx$.
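The comparison in Table 11.5 is easy to reproduce. A Python sketch of Eqs. (11.4.3) and (11.4.4) (my own naming), printing the errors for $\int_0^1 x^2\,dx$ as $n$ doubles:

```python
def midpoint(f, a, b, n):
    """Mid-point rule M(n), Eq. (11.4.3)."""
    d = (b - a) / n
    return d * sum(f(a + (i + 0.5) * d) for i in range(n))

def trapezoid(f, a, b, n):
    """Trapezoidal rule T(n), Eq. (11.4.4)."""
    d = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * d) for i in range(1, n))
    return d * s

f = lambda x: x * x            # exact integral on [0, 1] is 1/3
for n in (10, 20):
    print(n, abs(midpoint(f, 0, 1, n) - 1/3), abs(trapezoid(f, 0, 1, n) - 1/3))
```

Doubling $n$ divides both errors by four, confirming that both rules are 2nd order.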
$$a_2 = \frac{1}{2}\big[f(-1) + f(1)\big] - f(0), \qquad a_1 = \frac{1}{2}\big[f(1) - f(-1)\big], \qquad a_0 = f(0) \qquad (11.4.5)$$
Now, we can approximate the integral on $[-1, 1]$ as
$$\int_{-1}^{1} f(x)\,dx \approx \int_{-1}^{1} g(x)\,dx = \int_{-1}^{1}(a_2 x^2 + a_1 x + a_0)\,dx = \frac{2a_2}{3} + 2a_0 = \frac{1}{3}\big[f(-1) + 4f(0) + f(1)\big] \qquad (11.4.6)$$
where the last step used Eq. (11.4.5).
More often we need to break the interval $[a, b]$ into $n$ equal sub-intervals of length $\Delta = (b-a)/n$ and apply the Simpson rule on each sub-interval:
$$\int_a^b f(x)\,dx = \sum_{i=1}^{n}\int_{a+(i-1)\Delta}^{a+i\Delta} f(x)\,dx \approx \sum_{i=1}^{n}\frac{\Delta}{6}\Big[f\big(a+(i-1)\Delta\big) + 4f\big(a+(i-\tfrac{1}{2})\Delta\big) + f\big(a+i\Delta\big)\Big] \qquad (11.4.8)$$
We test the performance of Simpson's rule for $x^2$, $x^3$ and $x^4$. The Julia code is given in Listing B.10, which is based on Eq. (11.4.8). The error for $y = x^2$ is zero, which is expected. The error is also zero for $y = x^3$, which is a surprise. And the error for $y = x^4$ is proportional to $\Delta^4$; Simpson's rule is a 4th order method, which explains its popularity in calculators and codes.
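Listing B.10 is in Julia; an equivalent Python sketch of Eq. (11.4.8) (my own naming) that exhibits the surprising exactness for cubics:

```python
def simpson(f, a, b, n):
    """Composite Simpson rule, Eq. (11.4.8)."""
    d = (b - a) / n
    s = 0.0
    for i in range(1, n + 1):
        x0 = a + (i - 1) * d
        # parabola through the left end, the midpoint, and the right end
        s += f(x0) + 4.0 * f(x0 + 0.5 * d) + f(x0 + d)
    return s * d / 6.0

# exact for cubics: the integral of x^3 on [0, 1] is 1/4
print(abs(simpson(lambda x: x**3, 0.0, 1.0, 4) - 0.25) < 1e-13)   # True
```

The exactness for cubics comes for free from symmetry: the error of the parabolic fit is an odd function about each sub-interval's midpoint, so it integrates to zero.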
Another derivation. By now we can see that all quadrature rules have this common form
$$\int_a^b f(x)\,dx \approx \sum_i w_i f(x_i) \qquad (11.4.9)$$
that is, the sum of $f(x)$ evaluated at some points $x_i$, each multiplied by a weight $w_i$. In other words, the integral is a weighted sum of function values at specially selected locations. So, we can select a priori the $x_i$'s (the quadrature points) and determine the corresponding weights $w_i$. The first choice is to use equally spaced quadrature points. For example, $\int_{-1}^{1} f(x)\,dx$ can be computed with 3 equally spaced points at $-1, 0, 1$:
$$\int_{-1}^{1} f(x)\,dx = w_1 f(-1) + w_2 f(0) + w_3 f(1) \qquad (11.4.10)$$
The problem is now how to determine the weights $w_i$. We use Simpson's idea of parabolic approximation to replace $f(x)$ by $ax^2 + bx + c$. With this $f(x)$, Eq. (11.4.10) becomes
$$\frac{2}{3}a + 2c = w_1(a - b + c) + w_2(c) + w_3(a + b + c) = a(w_1 + w_3) + b(w_3 - w_1) + c(w_1 + w_2 + w_3)$$
So we have two expressions that are supposed to be identical for all values of $a, b, c$. This can happen only when
$$w_1 + w_3 = \frac{2}{3}, \qquad w_3 - w_1 = 0, \qquad w_1 + w_2 + w_3 = 2 \;\Longrightarrow\; w_1 = w_3 = \frac{1}{3},\quad w_2 = \frac{4}{3}$$
which is the same result we obtained in Eq. (11.4.6).
Newton-Cotes rule. It can be seen that the mid-point rule can be derived similarly to the Simpson rule, by approximating the function $f(x)$ with a constant within each slice. And the trapezoidal rule is obtained when a linear approximation to the function is used. Actually these rules are special cases of the so-called Newton-Cotes rules. Note that, in Newton-Cotes rules, the quadrature points are evenly spaced along the interval and thus known; we just need to find the quadrature weights $w_i$.
Two-point Gauss rule. In the two-point Gauss rule, two quadrature points are used, thus we
write
$$\int_{-1}^{1} f(x)\,dx = w_1 f(x_1) + w_2 f(x_2) \qquad (11.4.11)$$
To determine the 4 unknowns, we need 4 equations. So, the idea is to exactly integrate the functions $1, x, x^2, x^3$. Using Eq. (11.4.11) for these 4 functions, we have
$$\begin{aligned}
f(x) = 1:&\quad 2 = w_1 + w_2\\
f(x) = x:&\quad 0 = w_1 x_1 + w_2 x_2\\
f(x) = x^2:&\quad \frac{2}{3} = w_1 x_1^2 + w_2 x_2^2\\
f(x) = x^3:&\quad 0 = w_1 x_1^3 + w_2 x_2^3
\end{aligned}$$
Four equations and four unknowns should be fine, but the equations are nonlinear. How to solve them? Luckily for us, the equations are symmetric: swapping $w_1$ with $w_2$ does not change them! So we know $w_1 = w_2$, and thus from the first equation they are both equal to one. Symmetry also demands that $x_1 = -x_2$. Then, it is straightforward to get $x_1 = -1/\sqrt{3}$ and $x_2 = 1/\sqrt{3}$. The two-point Gauss rule is thus given by
$$\int_{-1}^{1} f(x)\,dx \approx 1\cdot f\!\left(-\frac{1}{\sqrt{3}}\right) + 1\cdot f\!\left(\frac{1}{\sqrt{3}}\right)$$
So, with two quadrature points (also referred to as Gauss points) Gauss quadrature can integrate
exactly cubic polynomials, by its very definition.
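The exactness for cubics is easy to check numerically. A two-line Python sketch of the rule (my own naming):

```python
import math

def gauss2(f):
    """Two-point Gauss rule on [-1, 1]: w1 = w2 = 1, nodes -+1/sqrt(3)."""
    g = 1.0 / math.sqrt(3.0)
    return f(-g) + f(g)

# exact for any cubic: the integral of x^3 + x^2 over [-1, 1] is 2/3
print(abs(gauss2(lambda x: x**3 + x**2) - 2.0/3.0) < 1e-12)   # True
```

With only two function evaluations we match what the three-point Simpson rule achieves, which is the whole selling point of Gauss quadrature.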
Three-point Gauss rule. In the same manner, we can develop the three-point Gauss rule:
$$\int_{-1}^{1} f(x)\,dx = w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3) \qquad (11.4.12)$$
To determine the 6 unknowns, we need 6 equations. So, the idea is to exactly integrate the six functions $1, x, x^2, x^3, x^4, x^5$. Using Eq. (11.4.12) for these 6 functions, we have
$$\begin{aligned}
f(x) = 1:&\quad 2 = w_1 + w_2 + w_3\\
f(x) = x:&\quad 0 = w_1 x_1 + w_2 x_2 + w_3 x_3\\
f(x) = x^2:&\quad \frac{2}{3} = w_1 x_1^2 + w_2 x_2^2 + w_3 x_3^2\\
f(x) = x^3:&\quad 0 = w_1 x_1^3 + w_2 x_2^3 + w_3 x_3^3\\
f(x) = x^4:&\quad \frac{2}{5} = w_1 x_1^4 + w_2 x_2^4 + w_3 x_3^4\\
f(x) = x^5:&\quad 0 = w_1 x_1^5 + w_2 x_2^5 + w_3 x_3^5
\end{aligned}$$
Again symmetry suggests a solution of the form
$$x_1 = -x,\; w_1 = w; \qquad x_2 = 0,\; w_2 \text{ its own unknown}; \qquad x_3 = x,\; w_3 = w$$
Table 11.7: Gauss-Legendre points and weights.
n    x_i              w_i
1    0.               2.0000000000
2    +-0.5773502692   1.0000000000
3    +-0.7745966692   0.5555555556
     0.               0.8888888889
4    +-0.8611363116   0.3478548451
     +-0.3399810436   0.6521451549
Arbitrary interval. We need $\int_a^b f(x)\,dx$, not $\int_{-1}^{1} f(\xi)\,d\xi$. A simple change of variable is needed: $x = 0.5(1-\xi)a + 0.5(1+\xi)b$. So the $n$-point GL quadrature is given by
$$\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^{1} f(x(\xi))\,d\xi \approx \frac{b-a}{2}\sum_i w_i\, f\!\left(\frac{a+b}{2} + \frac{b-a}{2}\xi_i\right) \qquad (11.4.14)$$
which can exactly integrate any polynomial of degree less than or equal to $2n - 1$.
$$I = \int_{-1}^{1} p_5(x)\,dx \qquad (11.4.15)$$
We do not compute this integral directly; instead we massage $p_5(x)$ a bit: we divide it by the Legendre polynomial $L_3(x)$:
$$p_5(x) = Q_2(x)L_3(x) + R_2(x) \qquad (11.4.16)$$
where $Q_2(x)$ and $R_2(x)$ are polynomials of degree 2 at most. Now, the integral becomes
$$I = \int_{-1}^{1}\big[Q_2(x)L_3(x) + R_2(x)\big]dx = \int_{-1}^{1} R_2(x)\,dx \qquad (11.4.17)$$
We converted an integral of a 5th degree polynomial to the integral of a 2nd degree polynomial! (This is so because $Q_2(x)$ and $L_3(x)$ are orthogonal, i.e., $\int_{-1}^{1} Q_2(x)L_3(x)\,dx = 0$; check Section 10.11.5 if this is not clear.) Now, we tackle the problem of how to compute the integral of $R_2(x)$ without knowing its expression. But we do know $p_5(x)$. So, if we use the roots of $L_3(x)$, denoted by $x_0, x_1, x_2$, we have, from Eq. (11.4.16),
$$p_5(x_i) = Q_2(x_i)L_3(x_i) + R_2(x_i) = R_2(x_i), \qquad i = 0, 1, 2 \qquad (11.4.18)$$
Now, the problem is easier. We build a Lagrange polynomial interpolating the points $(x_i, R_2(x_i))$, or equivalently $(x_i, p_5(x_i))$. But this polynomial is exactly $R_2(x)$, so we have
$$R_2(x) = \sum_{i=0}^{2} l_i(x)\, p_5(x_i) \qquad (11.4.19)$$
Now, we understand why GL points are the roots of Legendre polynomials. You should double
check the values in Table 11.7 using this.
" n #
Z 1 Z 1 Z 1 Z 1 Z 1 X
f .; /d d D f .; /d d D wi f .i ; / d
1 1 1 1 1 i D1
n Z
X 1 n X
X n
D wi f .i ; /d D wi wj f .i ; j /
i D1 1 i D1 j D1
We begin with the simplest method, the Euler method (Section 11.5.1), for first order ODEs. Next, we discuss this method for second order ODEs (e.g. the equations of motion of harmonic oscillators and of planets orbiting the Sun) in Section 11.5.2. Since the Euler method does not conserve energy, it is bad for modeling the long term behavior of oscillatory systems; thus we need a better method, and one of them is the Euler-Aspel-Cromer method presented in Section 11.5.3. Having a good numerical method, we then apply it to the Kepler problem, i.e., we solve the Sun-Earth problem (Section 11.5.4). For what? To rediscover for ourselves that planets do indeed go around the Sun in elliptical orbits. And high school students can achieve that, because the maths behind all of this is simple. In a logical development, we study three-body and $N$-body problems in Section 11.5.5. Although Euler's method and related variants are simple and good, they are only first order methods (i.e., the accuracy is low), so I present a second order method in Section 11.5.6: the Verlet method, a very popular method used to solve Newton's equations of motion, i.e., $F = ma$. Finally, Section 11.5.7 discusses the accuracy of these methods.
$$\dot{x} \approx \frac{x(t + \Delta) - x(t)}{\Delta} \qquad (11.5.2)$$
With that $\dot{x}$ substituted into Eq. (11.5.1), we can get $x(t+\Delta)$:
$$\frac{x(t+\Delta) - x(t)}{\Delta} = f(x, t) \Longrightarrow \boxed{x(t+\Delta) = x(t) + \Delta\, f(x, t)} \qquad (11.5.3)$$
The boxed equation, which is the Euler method, enables the solution $x(t)$ to advance, or march, in time starting from $x(0)$. If you use Euler's method with a small $\Delta$ (which is referred to as the time step) you will find that it works nicely. (Just try it with some first order ODE.) We rush now to second order ODEs, which are more fun.
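Trying Euler's method on a first order ODE really is a few lines of work. A Python sketch (my own naming) using the test problem $\dot{x} = -x$, $x(0) = 1$, whose exact solution is $e^{-t}$:

```python
import math

def euler(f, x0, t_end, dt):
    """March x(t + dt) = x(t) + dt * f(x, t), the boxed Eq. (11.5.3)."""
    n = round(t_end / dt)
    x, t = x0, 0.0
    for _ in range(n):
        x += dt * f(x, t)
        t += dt
    return x

# dx/dt = -x, x(0) = 1; exact x(1) = exp(-1)
for dt in (0.1, 0.05):
    print(dt, abs(euler(lambda x, t: -x, 1.0, 1.0, dt) - math.exp(-1.0)))
```

Halving the time step roughly halves the error, which is the signature of a first order method.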
But how small is small for $\Delta$? Does the numerical solution converge to the exact solution when $\Delta$ goes to zero? What is the accuracy of the method? Those are questions that mathematicians seek answers to. For now, let's have fun first; in Section 11.5.7 we shall try to answer those questions. That's how scientists and engineers approach a problem.
Or if you like you can say that we are using the forward difference formula for the first derivative of x.t/.
They are equivalent.
$$\dot{v} = -\frac{b}{m}v - \frac{k}{m}x := F(v, x) \qquad (11.5.5)$$
Using the Euler method, that is the boxed equation in Eq. (11.5.3), for the position equation $\dot{x} = v$ and the velocity equation $\dot{v} = F$, we obtain
$$x(n+1) = x(n) + \Delta\, v(n), \qquad v(n+1) = v(n) + \Delta\, F(n) \qquad (11.5.6)$$
The Euler method is easy to program. Usually it works nicely, but for some problems it performs badly, and simple harmonic oscillation is one of them (Fig. 11.11). Input data: $k = m = 1$, $x_0 = 1$, $v_0 = 0$ and $b = 0$ (i.e., no damping); the total time is three periods and the time step is $\Delta = 0.01$. The plot of $x(t)$ shows that the amplitude of the oscillation keeps increasing (Fig. 11.11a). This means that the energy also increases, and thus energy conservation is violated. Consequently, the phase portrait is no longer a nice circle (Fig. 11.11b); the orange curve is the exact phase portrait.
Figure 11.11: Euler method applied to the simple harmonic oscillator: (a) $x(t)$; (b) phase portrait.
To understand what went wrong, we need a better notation. Instead of writing $x(n)$, we write $x_n$; the subscript $n$ indicates the time step at which a quantity is evaluated, the discrete time instants being $t_n = n\Delta$ for $n = 0, 1, 2, \ldots$ With the new notation, Eq. (11.5.6) becomes
$$x_{n+1} = x_n + \Delta v_n, \qquad v_{n+1} = v_n + \Delta F_n \qquad (11.5.7)$$
Refer to Fig. 8.8 and the related discussion if the phase portrait is not clear.
As the total energy is wrong, we analyze it. At two iterations or time steps tn and tnC1 , the total
energies are (without loss of generality I used m D k D 1)
1 1
En D vn2 C xn2
2 2 (11.5.8)
1 2 1 2
EnC1 D vnC1 C xnC1
2 2
Now, using Eq. (11.5.7), we compute $E_{n+1}$:
$$E_{n+1} = \frac{1}{2}(v_n + \Delta t F_n)^2 + \frac{1}{2}(x_n + \Delta t v_n)^2 = E_n + \Delta t\,F_n v_n + \frac{\Delta t^2}{2}F_n^2 + \Delta t\,x_n v_n + \frac{\Delta t^2}{2}v_n^2 \qquad (11.5.9)$$
Noting that $F_n = -x_n$, the change in total energy is
$$\Delta E_n := E_{n+1} - E_n = \frac{\Delta t^2}{2}x_n^2 + \frac{\Delta t^2}{2}v_n^2 > 0 \qquad (11.5.10)$$
$$v_{n+1} = v_n + \Delta t\,F_n, \qquad x_{n+1} = x_n + \Delta t\,v_{n+1} \qquad (11.5.11)$$
The only change is in the red term: instead of $v_n$, the updated velocity $v_{n+1}$ is now used. If you modify the code (very slightly) and rerun the SHO problem, you will see that the results are very good. Cromer, in his paper entitled Stable solutions using the Euler approximation (so Cromer did not call his method Cromer's method, and he gave credit to Aspel, albeit only in a footnote), presented a mathematical analysis of why the method works.
The change in total energy is now given by
$$\Delta E_n = \frac{\Delta t^2}{2}\left(v_n^2 - x_n^2\right) - \Delta t^3\,v_n x_n \qquad (11.5.12)$$
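The energy behavior of the two update rules is easy to verify numerically. The book's listings are in Julia; the following equivalent Python sketch uses the same data as Fig. 11.11 (k = m = 1, $x_0 = 1$, $v_0 = 0$, $\Delta t = 0.01$):

```python
def simulate(dt, steps, cromer):
    """Integrate the undamped SHO (m = k = 1, so F = -x); return final energy."""
    x, v = 1.0, 0.0                   # x0 = 1, v0 = 0, as in Fig. 11.11
    for _ in range(steps):
        if cromer:
            v = v - dt * x            # Eq. (11.5.11): velocity first ...
            x = x + dt * v            # ... then position with the NEW velocity
        else:
            x_new = x + dt * v        # Eq. (11.5.7): both updates use old values
            v = v - dt * x
            x = x_new
    return 0.5 * v * v + 0.5 * x * x

E_euler = simulate(0.01, 2000, cromer=False)   # grows: 0.5*(1 + dt^2)^steps
E_cromer = simulate(0.01, 2000, cromer=True)   # stays close to the exact 0.5
```

By Eq. (11.5.10) the Euler energy grows by the factor $(1 + \Delta t^2)$ every step, while the Euler-Cromer energy merely oscillates about the exact value 1/2, as Eq. (11.5.12) suggests.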
$$m\frac{dv_x}{dt} = -\frac{GMm}{r^3}\,x, \qquad m\frac{dv_y}{dt} = -\frac{GMm}{r^3}\,y, \qquad r = \sqrt{x^2 + y^2} \qquad (11.5.13)$$
We have two ODEs, not one. But that's no problem. Using the Euler-Aspel-Cromer method, we have (as the mass of the Sun is much larger than that of a planet, the Sun is assumed stationary)
$$r_n = \sqrt{x_n^2 + y_n^2}$$
$$v_{x,n+1} = v_{x,n} - \Delta t\,\frac{GM}{r_n^3}\,x_n, \qquad v_{y,n+1} = v_{y,n} - \Delta t\,\frac{GM}{r_n^3}\,y_n \qquad (11.5.14)$$
$$x_{n+1} = x_n + \Delta t\,v_{x,n+1}, \qquad y_{n+1} = y_n + \Delta t\,v_{y,n+1}$$
with the initial conditions $(x_0, y_0)$ and $(v_{x0}, v_{y0})$, to be discussed shortly. Remark: the notation has become a bit ugly now: $v_{x,n+1}$ means the x component of the velocity at time step n + 1.
Before we can run the code, there is the matter of the choice of units. As the radius of the Earth's orbit around the Sun is about $1.5 \times 10^{11}$ m, a graph showing this orbit would have labels of $1 \times 10^{11}$ m, $2 \times 10^{11}$ m etc., which is awkward. It is much more convenient to use astronomical units, AU, which are defined as follows. One astronomical unit of length (i.e., 1 AU) is the average distance between the Sun and the Earth, which is about $1.5 \times 10^{11}$ m. For time, it is convenient to measure it in years. What is then the unit of mass? Recall that the Earth's orbit is, to a very good approximation, circular. Thus, there must be a centripetal force equal to $M_E v^2/r$ (r = 1 AU), where v is the Earth's speed, equal to $2\pi r/(1\ \text{yr}) = 2\pi$ AU/yr. Thus, we have
$$\frac{M_E v^2}{r} = \frac{GMM_E}{r^2} \implies GM = v^2 r = 4\pi^2\ \text{AU}^3/\text{yr}^2$$
Now, we discuss the initial position and velocity for Mercury (as we want to see an ellipse). Using astronomical data, we know that the eccentricity of the elliptical orbit of Mercury is e = 0.206, and the semi-major axis is a = 0.39 AU. For the simulation, we assume that the initial position of Mercury is at the aphelion $(x_0, y_0) = (r_1, 0)$ with $r_1 = a(1 + e)$ (check Section 4.12.2 if something is not clear). The initial velocity is $(0, v_1)$. How do we compute this $v_1$? We need two equations: angular momentum conservation and energy conservation evaluated at two points; these two equations involve two unknown velocities $v_1$ and $v_2$. The angular momentum is $r_x p_y - r_y p_x$, evaluated at the two points $(r_1, 0)$ and $(0, r_2)$:
Figure 11.12: Mercury's elliptical orbit.
$$v_1 r_1 = v_2 b \implies v_2 = \frac{v_1 r_1}{b}, \qquad b = a\sqrt{1 - e^2}$$
With m being the mass of Mercury and M the mass of the Sun, conservation of total energy provides us with the second equation:
$$-\frac{GMm}{r_1} + \frac{1}{2}mv_1^2 = -\frac{GMm}{r_2} + \frac{1}{2}mv_2^2$$
Solving these two equations for $v_1$, noting that $r_2 = a$, we get
$$v_1 = \sqrt{\frac{GM}{a}\,\frac{1 - e}{1 + e}}$$
Now, we can really let Mercury go! With the Euler-Aspel-Cromer method and Newton's laws, we are able to get the elliptical orbit of planetary motion (Fig. 11.12). We can determine the period T (how?) etc. Applying the same method to the other planets, we can also discover Kepler's third law: for each planet just compute $T/a^{3/2}$ and you will see that this quantity is approximately one (recall that Kepler told us that this constant should be $k = 2\pi/\sqrt{GM} = 1$). We can also discover Kepler's second law.
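A Python sketch of this Mercury simulation (the book's programs are in Julia; the time step and total time here are illustrative choices):

```python
import math

GM = 4 * math.pi**2            # AU^3/yr^2, from GM = v^2 r with v = 2*pi AU/yr
a, e = 0.39, 0.206             # Mercury: semi-major axis (AU) and eccentricity
r1 = a * (1 + e)               # aphelion distance
v1 = math.sqrt(GM / a * (1 - e) / (1 + e))   # aphelion speed derived above

dt = 1e-4                      # time step in years (an illustrative choice)
x, y = r1, 0.0                 # start at the aphelion ...
vx, vy = 0.0, v1               # ... moving perpendicular to the radius
r_min = r1
for _ in range(int(0.3 / dt)):          # a bit more than one Mercury year
    r = math.hypot(x, y)
    vx -= dt * GM * x / r**3            # Eq. (11.5.14): velocities first ...
    vy -= dt * GM * y / r**3
    x += dt * vx                        # ... then positions with new velocities
    y += dt * vy
    r_min = min(r_min, math.hypot(x, y))
```

A handy correctness check: the closest approach `r_min` should equal the perihelion distance $a(1 - e) \approx 0.31$ AU.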
$$\boldsymbol{F}_1 = \boldsymbol{F}_{12} + \boldsymbol{F}_{13}, \qquad \boldsymbol{F}_{1j} = \frac{Gm_1 m_j}{\|\boldsymbol{r}_{1j}\|^3}\,\boldsymbol{r}_{1j}, \qquad \boldsymbol{r}_{1j} = \boldsymbol{r}_j - \boldsymbol{r}_1 \qquad \left(\|\boldsymbol{x}\| = \sqrt{x_1^2 + x_2^2}\right)$$
$$m_1\frac{d\boldsymbol{v}_1}{dt} = \boldsymbol{F}_1, \qquad \frac{d\boldsymbol{r}_1}{dt} = \boldsymbol{v}_1$$
Using the Euler-Aspel-Cromer method, we update the velocity and position of each mass $m_i$:
$$\boldsymbol{F}_{i,n} = \sum_{j=1,\,j \neq i}^{N} \frac{Gm_j}{\|\boldsymbol{r}_{ij}\|^3}\,\boldsymbol{r}_{ij}, \qquad \boldsymbol{r}_{ij} = \boldsymbol{r}_{j,n} - \boldsymbol{r}_{i,n}$$
$$\boldsymbol{v}_{i,n+1} = \boldsymbol{v}_{i,n} + \Delta t\,\boldsymbol{F}_{i,n}, \qquad \boldsymbol{r}_{i,n+1} = \boldsymbol{r}_{i,n} + \Delta t\,\boldsymbol{v}_{i,n+1} \qquad (11.5.15)$$
Let's have fun with this. From the Wikipedia page on the three-body problem, I obtained the following initial conditions:
And with that we get the beautiful figure-eight in Fig. 11.13a with equal masses (I used $m_1 = m_2 = m_3 = 1$ and G = 1). You can go to the mentioned Wikipedia page to see the animation. Now, with the initial position of mass $m_2$ slightly changed to $\boldsymbol{r}_2(0) = (0.1, 0)$ instead, we get Fig. 11.13b. How about the solution time? With a time step $\Delta t = 0.01$ and a total time of about 6 (in whatever unit it is), that is 600 iterations or steps, the code runtime is about 42 seconds, including the generation of the animations, on a 16 GB RAM Mac mini with an Apple M1 chip.
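The update (11.5.15) carries over almost verbatim into code. Below is a Python sketch; the initial conditions are the commonly quoted figure-eight values (Chenciner-Montgomery), assumed here rather than taken from the book:

```python
# Figure-eight initial conditions (standard published values, an assumption here).
G = 1.0
m = [1.0, 1.0, 1.0]
r = [[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]]
v3 = [-0.93240737, -0.86473146]
v = [[-v3[0] / 2, -v3[1] / 2], [-v3[0] / 2, -v3[1] / 2], v3]  # total momentum 0

dt, steps = 0.001, 6300          # roughly one period of the figure-eight

def forces(r):
    """F_i = sum_j G m_i m_j (r_j - r_i)/|r_j - r_i|^3, as in Eq. (11.5.15)."""
    F = [[0.0, 0.0] for _ in r]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = r[j][0] - r[i][0], r[j][1] - r[i][1]
            d3 = (dx * dx + dy * dy) ** 1.5
            F[i][0] += G * m[i] * m[j] * dx / d3
            F[i][1] += G * m[i] * m[j] * dy / d3
    return F

for _ in range(steps):
    F = forces(r)
    for i in range(3):                    # velocities first (Euler-Aspel-Cromer)
        v[i][0] += dt * F[i][0] / m[i]
        v[i][1] += dt * F[i][1] / m[i]
    for i in range(3):                    # then positions with the new velocities
        r[i][0] += dt * v[i][0]
        r[i][1] += dt * v[i][1]

px = sum(m[i] * v[i][0] for i in range(3))   # total momentum is conserved,
py = sum(m[i] * v[i][1] for i in range(3))   # since the internal forces cancel
```

Checking that the total momentum stays (numerically) zero and that the bodies stay bounded is a cheap sanity test of the implementation.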
[Figure 11.13: (a) the figure-eight orbit of three equal masses; (b) the orbit after the slight change of the initial conditions.]
$$x(t + \Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\,\ddot{x}(t) + \frac{\Delta t^3}{3!}\,\dddot{x}(t) + \cdots$$
$$x(t - \Delta t) = x(t) - \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\,\ddot{x}(t) - \frac{\Delta t^3}{3!}\,\dddot{x}(t) + \cdots \qquad (11.5.16)$$
Adding and subtracting these two equations we obtain
$$x(t + \Delta t) + x(t - \Delta t) = 2x(t) + \Delta t^2\,\ddot{x}(t)$$
$$x(t + \Delta t) - x(t - \Delta t) = 2\Delta t\,\dot{x}(t) \qquad (11.5.17)$$
The program is presented in Appendix B.7.
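The book's program is in Julia (Appendix B.7); for illustration, here is a minimal Python version of the position-Verlet update that follows from the first line of Eq. (11.5.17), applied to the SHO (a = −x). Since only $x_0$ and $v_0$ are given, the first step uses a Taylor expansion:

```python
import math

dt, steps = 0.01, 2000                 # integrate the SHO to t = steps*dt = 20
x_prev = 1.0                           # x_0 = 1
x = x_prev - 0.5 * dt * dt * x_prev    # x_1 from a Taylor step (v_0 = 0)
for n in range(1, steps):
    # Verlet update from Eq. (11.5.17): x_{n+1} = 2 x_n - x_{n-1} + dt^2 a_n
    x_prev, x = x, 2 * x - x_prev + dt * dt * (-x)

# the exact solution is cos(t), so compare at t = 20
err = abs(x - math.cos(steps * dt))
```

Unlike the plain Euler result of Fig. 11.11, the Verlet solution tracks cos(t) closely even after many periods.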
It is named after Loup Verlet (1931 – 2019), a French physicist who pioneered the computer simulation of
molecular dynamics models. In a famous 1967 paper he used what is now known as Verlet integration (a method
for the numerical integration of equations of motion) and the Verlet list (a data structure that keeps track of each
molecule’s immediate neighbors in order to speed computer calculations of molecule to molecule interactions).
$$x(t + \Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\,\ddot{x}(t)$$
$$\dot{x}(t + \Delta t) = \dot{x}(t) + \Delta t\,\frac{\ddot{x}(t) + \ddot{x}(t + \Delta t)}{2} \qquad (11.5.19)$$
The first equation is obtained by eliminating $x(t - \Delta t)$ in Eq. (11.5.18): substitute that term, obtained from the second equation, into the first. The derivation of the velocity update is as follows:
where the independent variable x lies between a and b. Our goal is to study the accuracy of Euler's method. Let's start with an example whose exact solution we know (so that, from it, we can calculate the error in Euler's method):
$$y' = xy, \quad y(0) = 0.1 \implies y(x) = 0.1\,e^{x^2/2} \qquad (11.5.21)$$
We solve this problem using Euler's method with different step sizes h = 0.4, 0.2, 0.1, and we plot these numerical solutions together with the exact solution in one figure. In this way, we can understand the behavior of the method. From the results shown in the figure, we observe that the numerical solutions get closer to the exact one as the step size h gets smaller. The second task is then to quantify the error and show that it indeed decreases with h.
$$y(x + h) = y(x) + y'(x)h + \frac{y''(\xi)}{2}h^2 = y(x) + f(x, y)h + \frac{y''(\xi)}{2}h^2, \qquad \xi \in [x, x + h] \qquad (11.5.22)$$
Up to now, we have been working with the exact solution y(x). Now comes Euler, with the approximate solution. To distinguish the exact and approximate solutions, the latter is denoted by $\tilde{y}(x)$. At x + h, Euler's approximate solution is:
$$\tilde{y}(x + h) = \tilde{y}(x) + f(x, \tilde{y})h \qquad (11.5.23)$$
Putting the exact solutions and Euler’s solution together, we get:
$$y_{n+1} = y_n + f(x_n, y_n)h + \frac{y''(\xi)}{2}h^2, \qquad \tilde{y}_{n+1} = \tilde{y}_n + f(x_n, \tilde{y}_n)h \qquad (11.5.24)$$
With that we can calculate the error, which is the difference between the exact solution and the numerical solution, that is $E_{n+1} := y_{n+1} - \tilde{y}_{n+1}$. The error consists of two parts (assume that the rounding error is zero): the first part is the local truncation error, which occurs because we neglected the red term and is $O(h^2)$; the second part is related to the blue term. Subtracting the second equation from the first in Eq. (11.5.24), we get $E_{n+1}$ as
$$E_{n+1} = E_n + \left[f(x_n, y_n) - f(x_n, \tilde{y}_n)\right]h + \frac{1}{2}y''(\xi)h^2 \qquad (11.5.25)$$
And the triangle inequality (Eq. (2.20.9)) gives us (note that for the error we’re interested in its
magnitude only, thus we need jEnC1 j)
$$|E_{n+1}| \le |E_n| + \left|f(x_n, y_n) - f(x_n, \tilde{y}_n)\right|h + \frac{1}{2}\left|y''(\xi)\right|h^2 \qquad (11.5.26)$$
Phu Nguyen, Monash University © Draft version
Chapter 11. Numerical analysis 849
$$\beta = \frac{1}{2}\max_{x \in [a, b]} \left|y''(x)\right| \qquad (11.5.28)$$
With these conditions, Eq. (11.5.26) is simplified to
$$|E_n| \le \frac{\alpha^n - 1}{\alpha - 1}\,\beta h^2 = \frac{(1 + hL)^n - 1}{L}\,\beta h \qquad (11.5.30)$$
(here $\alpha := 1 + hL$).
This equation gives a bound for jEn j in terms of h, L, ˇ and n. Note that for a fixed h, this error
bound increases with increasing n. This is in agreement with the example of y 0 D xy that we
considered at the beginning of the section.
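This first-order behavior can be checked numerically for the example $y' = xy$, $y(0) = 0.1$: halving h should roughly halve the maximum error. A Python sketch (the step sizes are illustrative choices):

```python
import math

def euler_max_error(h):
    """Euler's method for y' = x*y, y(0) = 0.1 on [0, 2]; return max |error|."""
    n = round(2.0 / h)
    x, y = 0.0, 0.1
    err = 0.0
    for _ in range(n):
        y = y + h * x * y                  # Euler update with f(x, y) = x*y
        x = x + h
        # compare against the exact solution y(x) = 0.1 * exp(x^2/2)
        err = max(err, abs(y - 0.1 * math.exp(x * x / 2)))
    return err

e1 = euler_max_error(0.1)
e2 = euler_max_error(0.05)
ratio = e1 / e2        # should be close to 2, since the error is O(h)
```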
Using the inequality $(1 + hL)^n \le e^{nhL}$ and $nh \le b - a$, we then have
$$|E_n| \le \frac{e^{nhL} - 1}{L}\,\beta h \le \frac{e^{(b-a)L} - 1}{L}\,\beta h =: Kh \qquad (11.5.31)$$
We have just shown that the error at step n is proportional to h, with the proportionality constant K depending on L, $\beta$ and the interval length b − a. With this result, we're now able to talk about the error of Euler's method: it is defined as the maximum of $|E_n|$ over all the steps.
Numerical methods for partial differential equations form the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs). Common methods are the finite difference method (FDM), the finite volume method (FVM), the finite element method (FEM), spectral methods, meshfree methods etc. The field is simply huge and I do not have time to learn all of them. The finite difference method is often regarded as the simplest method to learn and use.
Figure 11.14: A 2D (uniform) finite difference grid: the space [0, L] is discretized by N points.
To start simple, we use the forward difference for the time partial derivative $\theta_t$ evaluated at grid point (i, n):
$$\left.\frac{\partial \theta}{\partial t}\right|_i^n = \frac{\theta_i^{n+1} - \theta_i^n}{\Delta t} + O(\Delta t) \qquad (11.6.1)$$
and a central difference for the second-order spatial derivative $\theta_{xx}$ evaluated at grid point (i, n):
$$\left.\frac{\partial^2 \theta}{\partial x^2}\right|_i^n = \frac{\theta_{i+1}^n - 2\theta_i^n + \theta_{i-1}^n}{(\Delta x)^2} + O((\Delta x)^2) \qquad (11.6.2)$$
Substituting Eqs. (11.6.1) and (11.6.2) into the heat equation (after removing the higher order terms, of course), we get the following equation
$$\theta_i^{n+1} = \theta_i^n + \frac{\alpha\,\Delta t}{(\Delta x)^2}\left(\theta_{i+1}^n - 2\theta_i^n + \theta_{i-1}^n\right), \qquad i = 1, \ldots, N - 2 \qquad (11.6.4)$$
The pattern of this equation is called a computational molecule or stencil and is plotted in Fig. 11.14 (right), and this finite difference method is known as the Forward Time Centered Space or FTCS method. What is more, it is an explicit method. It is so called because, to determine $\theta_i^{n+1}$, we do not have to solve any system of equations: Eq. (11.6.4) provides an explicit formula to quickly compute $\theta_i^{n+1}$. There are explicit methods because there are also implicit ones, and the next section presents one implicit method.
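As a concrete illustration of Eq. (11.6.4), here is a Python sketch of FTCS for $\theta_t = \alpha\theta_{xx}$ with $\theta = 0$ at both ends. The grid, the initial condition $\sin(\pi x)$ (whose exact solution decays as $e^{-\pi^2\alpha t}$, convenient for checking) and the step sizes are illustrative choices; s is kept at 0.4, for reasons the stability analysis below makes clear:

```python
import math

alpha, L = 1.0, 1.0
N = 21                                # number of grid points
dx = L / (N - 1)
dt = 0.4 * dx * dx / alpha            # chosen so that s = 0.4
s = alpha * dt / dx**2

# initial condition theta(x, 0) = sin(pi x); fixed ends theta = 0
theta = [math.sin(math.pi * i * dx) for i in range(N)]

t = 0.0
while t < 0.1:
    new = theta[:]                    # boundary values theta[0] = theta[-1] kept
    for i in range(1, N - 1):         # the explicit update, Eq. (11.6.4)
        new[i] = theta[i] + s * (theta[i + 1] - 2 * theta[i] + theta[i - 1])
    theta, t = new, t + dt

# compare the midpoint against the exact solution exp(-pi^2 t) sin(pi/2)
exact_mid = math.exp(-math.pi**2 * alpha * t)
err = abs(theta[N // 2] - exact_mid)
```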
For the ODE $\dot{x} = -\sin x$, approximating $\dot{x}$ by the backward difference $(x_n - x_{n-1})/\Delta t$ gives
$$\boxed{\frac{x_n - x_{n-1}}{\Delta t} = -\sin x_n}$$
Obviously, to solve for $x_n$ with $x_{n-1}$ known, we have to solve the boxed equation, which is a nonlinear equation. Nothing to be scared of, as we have good methods to solve it, such as Newton's method (Section 4.5.4). In any case, it is more costly to solve a nonlinear equation than a linear one. So you might be thinking that we should not use implicit methods. But that's not the whole story, otherwise the backward Euler method would not have been developed.
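For illustration, one possible implementation of a backward Euler step for $\dot{x} = -\sin x$, with the nonlinear equation solved by Newton's method (the tolerance, iteration cap, step size and initial condition are all illustrative choices):

```python
import math

def backward_euler_step(x_prev, dt, tol=1e-12):
    """Solve g(x) = x - x_prev + dt*sin(x) = 0 for x with Newton's method."""
    x = x_prev                          # initial guess: the previous value
    for _ in range(50):
        g = x - x_prev + dt * math.sin(x)
        dg = 1.0 + dt * math.cos(x)     # g'(x)
        x_new = x - g / dg              # one Newton update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

dt, x = 0.1, 1.0
for _ in range(100):                    # integrate to t = 10
    x = backward_euler_step(x, dt)
# x(t) decays monotonically toward the equilibrium x = 0
```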
Getting back to the heat equation, we now write $\theta_t$ as
$$\left.\frac{\partial \theta}{\partial t}\right|_i^n = \frac{\theta_i^n - \theta_i^{n-1}}{\Delta t} + O(\Delta t) \qquad (11.6.5)$$
Substituting Eqs. (11.6.2) and (11.6.5) into the heat equation, we get the following equation
$$-s\,\theta_{i-1}^n + (1 + 2s)\,\theta_i^n - s\,\theta_{i+1}^n = \theta_i^{n-1}, \qquad i = 1, \ldots, N - 2, \qquad s := \frac{\alpha\,\Delta t}{(\Delta x)^2} \qquad (11.6.7)$$
Noting that each equation involves only three unknowns, at points i − 1, i and i + 1, when we assemble the equations from all the nodes we get a tridiagonal matrix A. For example, if we have six points (i.e., N = 6), we will have (the first and last rows come from the boundary conditions $\theta_0^n = \theta_5^n = 0$):
$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ -s & 1+2s & -s & 0 & 0 & 0 \\ 0 & -s & 1+2s & -s & 0 & 0 \\ 0 & 0 & -s & 1+2s & -s & 0 \\ 0 & 0 & 0 & -s & 1+2s & -s \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \theta_0^n \\ \theta_1^n \\ \theta_2^n \\ \theta_3^n \\ \theta_4^n \\ \theta_5^n \end{bmatrix} = \begin{bmatrix} 0 \\ \theta_1^{n-1} \\ \theta_2^{n-1} \\ \theta_3^{n-1} \\ \theta_4^{n-1} \\ 0 \end{bmatrix} \qquad (11.6.8)$$
To see the pattern of the matrix more clearly, we need a bigger matrix. For example, with 100 points we have the matrix shown in Fig. 11.15; the plot on the left shows the entire matrix and the right one only the first ten rows/columns. Eq. (11.6.8) is obviously of the form Ax = b and, without knowing it beforehand, we are back to linear algebra! We need techniques from that field to solve this system fast, but we do not delve into that topic here. We just use a linear algebra library to do that, so that we can focus on the PDE (and the physics we're interested in).
It is obvious that the BTCS finite difference method is an implicit method, as we have to solve a system of (linear) equations to determine the temperature at all the nodes at a given time. What then are the pros and cons of implicit methods compared with explicit methods? The next section gives an answer to that question.
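For illustration, here is a Python sketch of BTCS. The book simply calls a linear algebra library; to keep the sketch dependency-free, the tridiagonal system of Eq. (11.6.8) is instead solved with the classical Thomas algorithm (grid, initial condition and step size are illustrative; note that s = 4, far above the explicit limit, yet the method remains stable):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub/main/super diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

alpha, L, N = 1.0, 1.0, 21
dx = L / (N - 1)
dt = 0.01                                     # gives s = 4 > 1/2
s = alpha * dt / dx**2

# tridiagonal coefficients of Eq. (11.6.7); first/last rows enforce theta = 0
sub = [0.0] + [-s] * (N - 2) + [0.0]
main = [1.0] + [1 + 2 * s] * (N - 2) + [1.0]
sup = [0.0] + [-s] * (N - 2) + [0.0]

theta = [math.sin(math.pi * i * dx) for i in range(N)]  # decays as exp(-pi^2 t)
t = 0.0
for _ in range(10):                           # ten implicit steps, to t = 0.1
    rhs = theta[:]
    rhs[0] = rhs[-1] = 0.0
    theta = thomas(sub, main, sup, rhs)
    t += dt

err = abs(theta[N // 2] - math.exp(-math.pi**2 * alpha * t))
```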
Figure 11.15: A tridiagonal matrix resulting from the FDM for the heat equation, obtained using the function imshow in matplotlib. A tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (the first diagonal below the main one), and the superdiagonal (the first diagonal above the main one).
Figure 11.16: Demonstration of numerical stability in solving ODEs using finite difference methods.
von Neumann stability analysis is a procedure used to check the stability of finite difference
schemes as applied to linear partial differential equations. The analysis is based on the Fourier
decomposition of numerical error and was developed at Los Alamos National Laboratory after
having been briefly described in a 1947 article by British researchers Crank and Nicolson. Later,
the method was given a more rigorous treatment in an article by John von Neumann.
Let's denote by A the exact solution of the heat equation (i.e., $\theta_t = \alpha\theta_{xx}$), by D the exact solution of the finite difference equation corresponding to the heat equation (for example, if we consider the FTCS method, then D is the exact solution of the FTCS difference equation), and by N the numerical solution actually computed. Then
$$\text{discretization error} = A - D, \qquad \text{round-off error } \epsilon = N - D \implies N = \epsilon + D \qquad (11.6.10)$$
The stability of numerical schemes is closely associated with numerical error: a finite difference scheme is stable if the errors made at one time step of the calculation are not magnified as the computations continue. The plan now is to study how $\epsilon$ behaves. We are going to show that the error $\epsilon$ is also a solution of Eq. (11.6.9). The proof is simply algebraic. Indeed, as N is a solution of Eq. (11.6.9), we have
Instead of considering the whole series, we focus on just one term, $\epsilon(x, t) = e^{at}e^{ik_n x}$. With that and Eq. (11.6.11), we can obtain the following
$$\frac{e^{a\Delta t} - 1}{\alpha\,\Delta t} = \frac{e^{ik_n\Delta x} - 2 + e^{-ik_n\Delta x}}{(\Delta x)^2} \qquad (11.6.13)$$
and this allows us to determine the ratio of the error at two consecutive time steps, $\epsilon_i^{n+1}/\epsilon_i^n$:
Check Section 4.18.3 if this is not clear.
$$\frac{\epsilon_i^{n+1}}{\epsilon_i^n} = e^{a\Delta t} = 1 + \frac{\alpha\,\Delta t}{(\Delta x)^2}\left(e^{ik_n\Delta x} - 2 + e^{-ik_n\Delta x}\right) \quad \text{(Eq. (11.6.13))}$$
$$= 1 + \frac{\alpha\,\Delta t}{(\Delta x)^2}\left(2\cos k_n\Delta x - 2\right) = 1 - \frac{4\alpha\,\Delta t}{(\Delta x)^2}\sin^2\frac{k_n\Delta x}{2} \qquad (11.6.14)$$
The last two steps are purely algebraic. It is interesting that trigonometry identities play a role
in the context of numerical solutions of the heat equation, isn’t it?
We do not want the error to grow, so we're interested in when the inequality $\left|\epsilon_i^{n+1}/\epsilon_i^n\right| \le 1$ holds. With Eq. (11.6.14), this condition becomes
$$\left|1 - \frac{4\alpha\,\Delta t}{(\Delta x)^2}\sin^2\frac{k_n\Delta x}{2}\right| \le 1 \implies \frac{2\alpha\,\Delta t}{(\Delta x)^2}\sin^2\frac{k_n\Delta x}{2} \le 1 \implies \boxed{\frac{\alpha\,\Delta t}{(\Delta x)^2} \le \frac{1}{2}} \qquad (11.6.15)$$
The boxed equation gives the stability requirement for the FTCS scheme as applied to the one-dimensional heat equation. It says that, for a given $\Delta x$, the allowed value of $\Delta t$ must be small enough to satisfy the boxed equation.
$$\frac{\partial \theta}{\partial t} = 0.12\,\frac{\partial^2 \theta}{\partial x^2}, \qquad 0 < x < 1,\ t > 0$$
$$\theta(x, 0) = 1, \qquad 0 \le x \le 1$$
$$\theta(0, t) = 0, \quad \theta(1, t) = 0, \qquad t > 0$$
whereas a (part of the) numerical solution is shown in Table 11.8. An analytical solution allows us to compute the solution at any point in the domain; on the other hand, we only have the numerical solution at some points (the nodes). The analytical solution can also tell us how the parameters (e.g. $\alpha$ here) affect the solution, whereas the numerical solutions are obtained only for specific values of the parameters. One example of how small the time step must be: with $\alpha = 1$ and $\Delta x = 0.1$, we need $\Delta t \le 0.005$.
Now is the time for code verification. The results in Fig. 11.17 indicate that the implementa-
tion is correct and it also confirms the von Neumann stability analysis.
Table 11.8: Numerical solutions in tabular format; each row corresponds to a time step.

t    | x = 0.0  0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0
0.0  |     1.0  1.0    1.0    1.0    1.0    1.0    1.0    1.0    1.0    1.0    1.0
4.0  |     0.0  0.816  0.973  0.996  0.999  0.999  0.999  0.996  0.973  0.816  0.0
7.5  |     0.0  0.188  0.358  0.492  0.578  0.607  0.578  0.492  0.358  0.188  0.0
Figure 11.17: Analytical versus numerical solution of the heat equation. Ten terms are used in
Eq. (11.6.16). For the FTCS scheme, a time step slightly larger than the upper limit in Eq. (11.6.15)
was used. Thus, the solution shows instability. For later time steps, the numerical solution blew up.
$$\left.\frac{\partial^2 u}{\partial t^2}\right|_i^n = \frac{u_i^{n+1} - 2u_i^n + u_i^{n-1}}{(\Delta t)^2} + O((\Delta t)^2)$$
$$\left.\frac{\partial^2 u}{\partial x^2}\right|_i^n = \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{(\Delta x)^2} + O((\Delta x)^2) \qquad (11.6.17)$$
Substituting these into the wave equation, we obtain the update formula, with $r := c\,\Delta t/\Delta x$. A von Neumann analysis of that formula leads to
$$\frac{\epsilon_i^{n+1}}{\epsilon_i^n} = g, \qquad \boxed{g^2 - 2\beta g + 1 = 0}, \qquad \beta = 1 - 2r^2\sin^2\frac{k_n\Delta x}{2}$$
Note that g can be a complex number, and we need $|g| \le 1$ so that our method is stable; this requires $|\beta| \le 1$. In this case, we can write g as
$$g_{1,2} = \beta \pm i\sqrt{1 - \beta^2} \implies |g| = 1$$
$$\left|1 - 2r^2\sin^2\frac{k_n\Delta x}{2}\right| \le 1 \implies \boxed{r := \frac{c\,\Delta t}{\Delta x} \le 1} \qquad (11.6.19)$$
Figure 11.18: Waves propagating on a string with fixed ends. The data are: c = 300 m/s, L = 1 m, $\Delta x = 0.01$ m, $\Delta t = \Delta x/c$. The initial string shape, given at the top, is a Gaussian pluck $u(x, 0) = \exp\left(-k(x - x_0)^2\right)$ with $x_0 = 0.3$ m and $k = 1000\ 1/\text{m}^2$. The wave splits into two wavepackets (pulses) which travel in opposite directions (second and third figures). This is consistent with the d'Alembert solution in Eq. (8.10.6). The left pulse reaches the left end and is reflected; this reflection inverts the pulse, so its displacement is now negative (fourth figure). Meanwhile, the right pulse keeps going to the right, reaches the fixed end, and is likewise reflected and inverted.
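A Python sketch of the scheme behind Fig. 11.18. The update follows from Eq. (11.6.17); the first step uses a Taylor expansion because the initial velocity is zero, and the number of steps is an illustrative choice that stops before the pulses reach the ends. At r = 1 the scheme reproduces the d'Alembert solution on the grid, so the split pulses have exactly half the original height:

```python
import math

c, Lstr = 300.0, 1.0
dx = 0.01
dt = dx / c                       # r = c*dt/dx = 1, the stability limit (11.6.19)
r2 = (c * dt / dx) ** 2
N = int(Lstr / dx) + 1

x0, k = 0.3, 1000.0               # Gaussian pluck as in Fig. 11.18
u_prev = [math.exp(-k * (i * dx - x0) ** 2) for i in range(N)]
u_prev[0] = u_prev[-1] = 0.0      # fixed ends

# first step: zero initial velocity, so u_i^1 = u_i^0 + (r^2/2) * (central diff)
u = u_prev[:]
for i in range(1, N - 1):
    u[i] = u_prev[i] + 0.5 * r2 * (u_prev[i + 1] - 2 * u_prev[i] + u_prev[i - 1])

for _ in range(20):               # march the leapfrog update forward in time
    u_next = [0.0] * N
    for i in range(1, N - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next

peak = max(u)                     # the pluck has split into two half-height pulses
```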
$$\boldsymbol{x}_1 = \boldsymbol{x}_0 - s\,\nabla f$$
Example 11.1
$$f(x, y) = \left(\frac{3}{4}x\right)^2 + (y - 2)^2 + \frac{xy}{4}, \qquad \nabla f = \left(\frac{9}{8}x + \frac{1}{4}y,\ 2y - 4 + \frac{1}{4}x\right)$$
[Figure: gradient descent iterates for six step sizes, with the final gradient norms: gamma 0.01: 5.764; gamma 0.1: 1.305; gamma 0.2: 0.3424; gamma 0.3: 0.09247; gamma 0.5: 0.003267; gamma 0.75: 0.02844.]
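A Python sketch of plain gradient descent for Example 11.1; the gradient is the one quoted above, while the starting point and iteration count are illustrative choices (the figure's exact settings are not reproduced here):

```python
import math

def grad(x, y):
    """Gradient of Example 11.1: (9/8 x + y/4, 2y - 4 + x/4)."""
    return 9 / 8 * x + y / 4, 2 * y - 4 + x / 4

def descend(gamma, steps=100, x=5.0, y=5.0):
    """Plain gradient descent: x_{k+1} = x_k - gamma * grad f(x_k)."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - gamma * gx, y - gamma * gy
    gx, gy = grad(x, y)
    return math.hypot(gx, gy)          # final gradient norm

g_small = descend(0.01)                # step too small: still far from the minimum
g_good = descend(0.5)                  # converges quickly
```

The qualitative behavior matches the figure: a tiny step size leaves a large residual gradient, while a well-chosen one drives it to (near) zero.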
$$\sin 3^\circ = 3\sin 1^\circ - 4\sin^3 1^\circ \implies x = \frac{\sin 3^\circ + 4x^3}{3}$$
We are going to do the same thing, but for Ax = b: we split the matrix as A = S − T; the system then becomes (S − T)x = b, or Sx = Tx + b. Then, following al-Kashi, we solve this system iteratively: starting from $x^0$ we get $x^1$, from $x^1$ we obtain $x^2$, and so on:
$$Sx^{k+1} = Tx^k + b, \qquad k = 0, 1, 2, \ldots \qquad (11.8.1)$$
Thus, instead of solving Ax = b directly using e.g. the Gaussian elimination method, we're adopting an iterative method.
It is obvious that we need to select S in a way that
(b) the difference (or error) $x - x^k$ goes to zero quickly. To get an expression for this difference, subtract Eq. (11.8.1) from Sx = Tx + b:
$$Se^{k+1} = Te^k \implies e^{k+1} = S^{-1}Te^k$$
The matrix $B = S^{-1}T$ controls the convergence rate of the method.
Example 11.2
Consider the following system and its solution:
$$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \end{bmatrix} \quad \text{has the solution} \quad \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$$
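Taking S as the diagonal of A turns the iteration (11.8.1) into the Jacobi method; a minimal Python sketch for this 2 × 2 system (the starting guess is an illustrative choice):

```python
# Jacobi iteration for Example 11.2: S = diag(A), T = S - A, so
# S x^{k+1} = T x^k + b reads componentwise as below.
x, y = 0.0, 0.0                        # starting guess x^0 = (0, 0)
for _ in range(60):
    x, y = (4 + y) / 2, (-2 + x) / 2   # both updates use the OLD values (Jacobi)
# (x, y) converges to the exact solution (2, 0); the spectral radius of
# B = S^{-1} T is 1/2 here, so the error shrinks geometrically.
```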
A.1 Reading
When you’re solving problems, working through textbooks, getting into the nitty-gritty details
of each topic, it’s so easy to lose the forest for the trees and forget why you even became inspired
to study the topic that you're learning in the first place. If you read only the textbooks, you will find the subject dull. Textbooks on mathematics are written for people who already possess a strong desire to study mathematics: they are not written to create such a desire. Do not begin
by reading the subject. Instead, begin by reading around the subject. This is where really, really
good (and non-speculative) books on that topic come in handy: they inspire, they encourage, and
they help you understand the big picture. For mathematics and physics, the following are among
the best (at least to me):
In A Mathematician’s Lament Paul Lockhart describes how maths is incorrectly taught in schools
and he provides better ways to teach maths. He continues in Measurement by showing us how we
should learn maths by 're-discovering maths' for ourselves. Of course, what Paul suggests works only for self-learners. What if you are a high school student? There are two possibilities.
Appendix A. How to learn 862
First, if you are fortunate enough to have a great teacher, then just stick with her or him. Second, if you do not have such luck, you can ignore the teacher and self-study maths at your own pace. Do not forget that marks are not what matter for deep understanding. Having said that, marks are, sadly, vital for getting scholarships.
The Joy of x by Steven Strogatz belongs to a family of maths books that aim to popularize
mathematics. In this family you can also find equally interesting books such as Journey through
Genius by William Dunham, or 17 equations that changed the world by Ian Stewart etc. It is
beneficial at a young age to read these books to realize that mathematics is not a dry, boring
topic. On the contrary, it is interesting. Similarly, An Imaginary Tale: The Story of $\sqrt{-1}$ by Paul Nahin is a popular maths book which tells the fascinating story of $\sqrt{-1}$. In this book, I have referred to many other popular maths books (see the reference list).
The Feynman Lectures on Physics by the Nobel-winning physicist Richard Feynman is probably the best way to learn college-level mathematics by studying physics. Bill Gates once said 'Feynman is the best teacher I never had'. In these lectures Feynman beautifully introduced various physics topics and the mathematics required to describe them. He also described how physicists think about problems. Another reason to read these lectures is that it is good to read books at a level higher than your knowledge: the Feynman lectures were written for Caltech (California Institute of Technology) undergraduates.
The Evolution of Physics by the great physicist Einstein teaches us how to imagine. Through thought experiments the book explains the basic concepts of physics. It is definitely a must-read for all students who want to learn physics.
And if you want to become a professional mathematician, read Letters to a young mathemati-
cian by Ian Stewart [48]. Ian Stewart (born 1945) is a British mathematician who is best-known
for engaging the public with mathematics and science through his many bestselling books, news-
paper and magazine articles, and radio and television appearances.
And don’t forget to read the history of mathematics. Here are some books on this topic:
Men of Mathematics: The Lives and Achievements of the Great Mathematicians from
Zeno to Poincaré by E. T. Bell [4];
If you prefer watching the history of maths unfold, the BBC Four The story of Maths is excellent.
You can find it on YouTube.
How should we read a mathematics textbook? Of course, the first thing to notice is that we cannot read a maths book like a novel. The second is that we should not read it page by page, word by word, from beginning to end in one go. The third is that maths textbooks are usually many times longer than necessary because they have to include a
The lectures are freely available at https://www.feynmanlectures.caltech.edu.
lot of exercises (at the end of each section or chapter). Why so? Mostly to please the publishers
who aim for financial targets not educational ones! As discussed in Section 1.3, it is better to
spend time solving problems rather than exercises. It is certain that we first still have to do a few
exercises to understand a concept/method. But that’s it.
Here is one suggestion on how we should read a math book (based on many recommendations
that I have collected from various sources). It is clear that something that works for one person
might not work for others, but it can be a start:
1st read: skim through a section/chapter first. The idea is to see the forest, not the trees.
Knowing all the trees in the first go would be too much;
2nd read: read slowly (with paper and pencil) to get to know the trees; focus on the motivation, the definitions, the theorems;
4th read: pay attention to the proofs; study them carefully and reproduce a proof for
yourself.
If you have a bad teacher, simply ignore his or her class. There are excellent maths teachers online; learn from them instead. You can listen to the story of Steven Strogatz at https://www.youtube.com/watch?v=SUMLKweFAYk to see how a teacher can change your love of mathematics, and then your life;
If you have questions (any) on maths, you can post them to https://math.
stackexchange.com and get answers;
The best way to learn is to teach. If you do not have such an opportunity, you can write about what you know, similar to this note, or write a blog on maths. Writing is one of the best ways to consolidate your understanding of what you have learned (not only maths). You might wonder 'but writing is time-consuming'. That is not true if you write just one page per day and do it consistently, every day;
As Dick Guindon once said Writing is nature’s way of letting you know how sloppy your thinking is.
LATEX is the best tool (as for now) for writing mathematics. So it is not a bad idea to learn
it and use it (for Mathematics Stack Exchange you have to use LATEX anyway). This book
was typeset using LATEX; If you do not know where to start with LATEX, check this youtube
video out;
While learning maths, it is a good habit to keep in mind that mathematics is about ideas
not formula or numbers. So, first you should be able to express the idea in your own
speaking language. Then, translate that to the language of maths. For example, the idea
of convergence of a sequence expressed in both English and mathematics:
Just like learning any spoken language, to speak the language of maths you have to study its vocabulary. You should get familiar with symbols like $\epsilon$, $\delta$, $\forall$ etc.;
And as Euclid told Ptolemy 1st Soter, the first king of Egypt after the death of Alexander
the Great ‘there is no royal road to geometry’, you have to do mathematics. Just as to
enjoy swimming you have to jump into the water, by just watching others swimming you
will never understand the excitement;
Knowing the name of something doesn’t mean you understand it There is a way to
test whether you understand something or only know the name/definition. It’s called the
Feynman Technique, and it works like this: “Without using the new word which you have
just learned, try to rephrase what you have just learned in your own language.”;
As there is no single book that covers everything about any topic, it is better to have a couple of good books on each topic;
Read mathematics books very slowly; do not lose the forest for the trees. Study the definitions carefully, and why we need them. Then play with the definitions to see what properties they might possess. Only then study the theorems, and finally the proofs. If you just want to be a scientist or engineer, then focus less on the proofs;
Study the history of mathematics. Not only does it tell you interesting stories, it also reveals that great mathematicians were human too: they had to struggle, and they failed many times before succeeding in developing a sound mathematical idea;
If you fall behind in maths, physics, chemistry (I used to in 8th grade), just focus on
improving your maths. Being better at math, you will do fine with physics and chemistry.
Remember that math is the language God talks;
Feymann’s father once told him “See that bird? It’s a brown-throated thrush, but in Germany it’s called a
halzenfugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know
nothing about the bird.”
Be aware of focused vs diffuse mode of thinking. Check the book Learning How to Learn
for details. In short, diffuse mode is when your mind is relaxed and free. You’re thinking
about nothing in particular. You’re in diffuse mode when you’re walking in a park (without
a phone of course), having a bath etc. And usually it is when you’re in a diffuse mode that
you find solutions to problems that you have been struggling to solve. And one of the best ways to get into a diffuse mode is walking. It's not a coincidence that many of the finest
thinkers in history were enthusiastic walkers. An old example is Aristotle, the famous
Greek philosopher, empiricist, who conducted his lectures while walking the grounds of
his school in Athens;
How long should you fight before giving up to look at the solutions? We admit it is very
tempting to look at the solutions when we’re stuck. But don’t! The best is to play with
the exercises for a while (2 hours? ), if still no luck, then forget it, do something else.
Come back to it later, and do the same thing. After one or two days, if you are still stuck, then look at the solution, but only its first step; then solve the problem and do self-reflection. If you have
to look at the entire solution, then make sure you can repeat all the steps by yourself later.
Only then is the material yours. Don't fool yourself by just looking at the solutions and thinking that you understand the maths. No! That is the illusion of competence: a mental situation where you think you've mastered a set of material but you really haven't. We all watch
Messi scoring a goal from a free kick: he just puts the ball into the high left corner so that
the goal keeper cannot reach it. But can we repeat that?;
Do more self-reflection. What is the place that you learn most effectively? When is the
time you're most productive? After solving every maths question, ask questions like: why does the method work? Why that answer? Does the method still work if we modify the question? Why could I not see the solution?
Archimedes has gone down in history as the guy who ran naked through the streets of Syracuse shouting
"Eureka!" — or "I have it!" in Greek. The story behind that event was that Archimedes was charged with proving
that a new crown made for Hieron, the king of Syracuse, was not pure gold as the goldsmith had claimed. Archimedes
thought long and hard but could not find a method for proving that the crown was not solid gold until he took a
bath.
Of course how long before giving up is a personal decision. But I want to use Polya’s words about the pleasure
of finding something out for yourself: “A great discovery solves a great problem but there is a grain of discovery
in the solution of any problem. Your problem may be modest; but if it challenges your curiosity and brings into
play your inventive faculties, and if you solve it by your own means, you may experience the tension and enjoy the
triumph of discovery”.
To keep a sharp mind and body we do exercises. Similarly, your maths will get rusty if you do not use it. I heard that Zdeněk Bažant, a Professor of Civil Engineering and Materials Science at Northwestern University, keeps solving a partial differential equation every week! Note that he is not a mathematician; but he needs maths for his work;
If you plan to become an engineer or scientist and you were not born with drawing abilities,
then practice drawing. Many figures in this book were drawn manually and this was
intentional as it is a good way for me to practice drawing;
Finally I have collected some learning tips into a document which can be found here.
Feynman’s Epilogue. At the end of his famous physics course at Caltech, Feynman said the
following words, I quote
Well, I’ve been talking to you for two years and now I’m going to quit. In some ways
I would like to apologize, and other ways not. I hope—in fact, I know—that two or
three dozen of you have been able to follow everything with great excitement, and
have had a good time with it. But I also know that “the powers of instruction are of
very little efficacy except in those happy circumstances in which they are practically
superfluous.” So, for the two or three dozen who have understood everything, may I
say I have done nothing but shown you the things. For the others, if I have made you
hate the subject, I’m sorry. I never taught elementary physics before, and I apologize.
I just hope that I haven’t caused a serious trouble to you, and that you do not leave
this exciting business. I hope that someone else can teach it to you in a way that
doesn’t give you indigestion, and that you will find someday that, after all, it isn’t
as horrible as it looks.
Finally, may I add that the main purpose of my teaching has not been to prepare
you for some examination—it was not even to prepare you to serve industry or the
military. I wanted most to give you some appreciation of the wonderful world and
the physicist’s way of looking at it, which, I believe, is a major part of the true culture
of modern times. (There are probably professors of other subjects who would object,
but I believe that they are completely wrong.)
This is probably an ideal learning environment that cannot be repeated by other teachers.
What, then, is the solution? Self-study! With a computer connected to the world wide web,
some good books (the ones I used to write this note are good, in my opinion), and amazing free
teachers (e.g. 3Blue1Brown, Mathologer, blackpenredpen, Dr. Trefor Bazett), you can learn
mathematics (or any topic) in a fun and productive way.
To encourage young students to learn coding and also to demonstrate the important role of
coding in mathematics, engineering and sciences, in this book I have used many small programs
to do some tedious (or boring) calculations. In this appendix, I provide some snippets of these
programs so that young people can learn programming while learning maths/physics.
There are many programming languages, and I have selected Julia for two main reasons.
First, it is open source (so we can use it for free, and we can inspect its source code if we find that
needed). Second, it is easy to use. For young students, the fact that a programming language is
free is obviously important. The second reason, being easy to use, is more important, as we use
a programming language just as a tool; our main purpose is doing mathematics (or physics). Of
course you can use Python; it is also free, easy to use and popular. The reason I opted
for Julia was to force myself to learn this new language; I forced myself to go outside of my comfort
zone, for only then could I find something unexpected. There is actually another reason, although
irrelevant here: Julia code runs faster than Python code. Moreover, it is possible to use
Python and R from within Julia.
It is worth noting that our aim is to learn coding so as to solve mathematical problems.
We do not want to learn coding to write software for general use; that is a completely different story. If
that is your case, then do not spend time (if your time is limited) learning how to make graphical
user interfaces (GUIs), and do not learn coding with languages such as Visual Basic, Delphi and
so on.
In the text, whenever there is a certain amount of boring calculation (e.g. a table of partial sums of an
infinite series), I have used a small Julia program to do the job, and I have provided
links to the code given in this appendix. Conversely, in the code snippets I provide the link back to the
associated text in the book.
To reduce the thickness of the book, all other codes, which are not given in the text, are put
on GitHub at this address.
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety
of UNIX platforms, Windows and MacOS.
GitHub is a website and cloud-based service that helps developers store and manage their code, as well as
track and control changes to their code.
Appendix B. Codes
Listing B.1: Computing the square root of a positive real number S. See ??. Julia built-in functions are
in blue heavy bold font.
1 function square_root(S,x0,epsilon)
2 x = x0 # x is for x_{n+1} in our formula
3 while (true) # do the iterations, a loop without knowing the # of iterations
4 x = 0.5 * ( x + S/x )
5 if (abs(x*x-S) < epsilon) break end # if x is accurate enough, stop
6 end
7 return x
8 end
Listing B.2 is the code to compute the partial sums of a geometric series $\sum_{i=1}^{n} 1/2^i$. The code
is typical for calculating a sum of n terms: we initialize the sum to zero, and use a for loop to
add one term to the sum each time. Listing B.3 is a similar code, but for the Taylor series of the
sine function $\sin x = \sum_{i=1}^{\infty} (-1)^{i-1} x^{2i-1}/(2i-1)!$; see Section 4.14.6. The code introduces the
use of the factorial(n) function to compute $n!$. Note that we have to use big numbers, as $n!$ is
very large for large n.
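Since the body of Listing B.3 follows this same summation pattern, here is a minimal sketch of it in Python (the book's own listing is in Julia; the name sine_series is mine):

```python
from math import factorial

def sine_series(x, n):
    """Partial sum of sin x = sum_{i=1}^inf (-1)^(i-1) x^(2i-1) / (2i-1)!."""
    s = 0.0
    for i in range(1, n + 1):
        s += (-1) ** (i - 1) * x ** (2 * i - 1) / factorial(2 * i - 1)
    return s
```

Python integers have arbitrary precision, so the big-number concern mentioned above for $n!$ is handled automatically here.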
Listing B.2: Partial sum of the geometric series $\sum_{i=1}^{n} 1/2^i$. Also directly produces Table 2.11.
1 using PrettyTables # you have to install this package first
2 function geometric_series(n) # make a function named ‘geometric_series’ with 1 input
3 S = 0.
4 for k=1:n # using ’for’ for loops with known number of iterations
5 S += 1/2^k # S += ... is short for S = S + ...
6 end
7 return S
8 end
9 data = zeros(20,2) # this is an array of 20 rows and 2 columns
10 for i=1:20 # produce 20 rows in Table 2.10
11 S = geometric_series(i)
12 data[i,1] = i # row ‘i’, first col is ‘i’
13 data[i,2] = S # second col is S
14 end
15 pretty_table(data, ["n", "S"]) # print the table to terminal
Listing B.3: Calculating sin x using the sine series $\sin x = \sum_{i=1}^{\infty} (-1)^{i-1} x^{2i-1}/(2i-1)!$.
Listing B.4 is the program to check whether a natural number is a factorion. Having such a
function, we just need to sweep over, say, the first 100 000 numbers and check each of them.
We provide two solutions: one using Julia's built-in function digits to
get the digits of an integer. This solution is a lazy one. The second solution does not use that
function; only then are we forced to work out how to get the digits of a number ourselves. Say the
number is 3258: we can get the digits starting from the first one (and get 3, 2, 5, 8) or we can
start from the last digit (to get 8, 5, 2, 3). The second option is easier because 8 = 3258 % 10 (the
last digit is the remainder of the division of the given number by 10). Once we have
got the last digit, we do not need it anymore, so we just remove it: 325 = div(3258, 10); that is,
325 is the result of the integer division of 3258 by 10.
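The digit-peeling idea just described translates directly into code. Here is a minimal sketch in Python (the book's Listing B.4 is in Julia; the helper name is_factorion is mine):

```python
from math import factorial

def is_factorion(n):
    """Check whether n equals the sum of the factorials of its digits."""
    s, m = 0, n
    while m > 0:
        s += factorial(m % 10)  # last digit = remainder of division by 10
        m //= 10                # drop that digit via integer division
    return s == n

# sweep the first 100 000 numbers, as described in the text
factorions = [n for n in range(1, 100_000) if is_factorion(n)]
```

This sweep finds the four factorions 1, 2, 145 and 40585.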
Listing B.5 is the code for the calculation of $s_n = \prod_{k=0}^{n} \binom{n}{k}$, that is the product of all the
binomial coefficients. The idea is the same as for the calculation of a sum, but we need to initialize
the result to 1 (instead of 0). We use the Julia built-in function binomial to compute $\binom{n}{k}$.
Listing B.5: $s_n = \prod_{k=0}^{n} \binom{n}{k} = \prod_{k=0}^{n} n!/\big((n-k)!\,k!\big)$. See Pascal triangle and number e, Section 2.27.
1 function sn(n)
2 product=1.0
3 for k=0:n
4 product *= binomial(big(n),k)
5 end
6 return product
7 end
count the number of iterations required to get the solution. The function was then applied to
solve the equation $\cos x - x = 0$.
Listing B.6: Newton-Raphson method to solve $f(x) = 0$ using a central difference for the derivative.
1 using Printf # needed for the @printf macro
2 function newton_raphson(f,x0,epsilon)
3 x = x0
4 i = 0
5 while ( true )
6 i += 1
7 derx = (f(x0+1e-5)-f(x0-1e-5)) / (2e-5) # central difference approximation of f'(x0)
8 x = x0 - f(x0)/derx
9 @printf "%i %s %0.8f\n" i " iteration," x
10 if ( abs(x-x0) < epsilon ) break end
11 x0 = x
12 end
13 return x
14 end
15 f(x) = cos(x) - x # short functions can be defined in one line
16 newton_raphson(f,0.1,1e-6)
Listing B.7 implements three functions used to generate the Newton fractals shown in Fig. 1.3.
The first function is the standard Newton-Raphson method, but the input is a function of a single
complex variable. The second function get_root_index returns the position of a root r in
the list of all roots of the equation $f(z) = 0$. This function uses the built-in function isapprox
to check the equality of two numbers. The final function plot_newton_fractal loops over a
grid of $n \times n$ points within the domain $[x_{\min}, x_{\max}]^2$; for each point $(x, y)$, a complex number
$z_0 = x + iy$ is formed and passed to the function newton to find a root r. Then, it finds the
position of r in the list roots. And finally it updates the matrix m accordingly. We used the code
with the function $f(z) = z^4 - 1$, but you're encouraged to play with $f(z) = z^{12} - 1$.
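The core complex-variable Newton iteration of Listing B.7 can be sketched as follows (a Python sketch with names of my choosing; here the derivative of $f(z) = z^4 - 1$ is supplied analytically):

```python
def newton(f, df, z0, eps=1e-10, max_iter=100):
    """Newton-Raphson iteration for a function of one complex variable."""
    z = z0
    for _ in range(max_iter):
        z_new = z - f(z) / df(z)
        if abs(z_new - z) < eps:   # converged
            return z_new
        z = z_new
    return z

f  = lambda z: z**4 - 1
df = lambda z: 4 * z**3
root = newton(f, df, 0.8 + 0.3j)   # this starting point lies in the basin of z = 1
```

Which of the four roots the iteration lands on depends sensitively on the starting point $z_0$; colouring each grid point by its root index is exactly what produces the fractal.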
B.2 Recursion
In Section 2.9 we have met the Fibonacci numbers:
$$F_n = F_{n-1} + F_{n-2}, \quad n \ge 2, \qquad F_0 = F_1 = 1 \tag{B.2.1}$$
To compute F(n), we need to use the recursive relation in Eq. (B.2.1). Listing B.8 is the Julia
implementation of Eq. (B.2.1). What is special about this "fibonacci" function? Inside the
definition of that function we call it (with smaller values of n). The process in which a function
calls itself directly or indirectly is called recursion, and the corresponding function is called a
recursive function.
We should never check the equality of real/complex numbers by checking a == b; instead we should check
$|a - b| < \epsilon$, where $\epsilon$ is a small positive number. In other words, 0.99998 = 1.00001 = 1 according to a computer.
The built-in function is an optimal implementation of this check.
The case n = 0 or n = 1 is called the base case of a recursive function. This is the case that
we know the answer to, thus it can be solved without any more recursive calls. The base case is
what stops the recursion from continuing forever (i.e., infinite recursion). Every recursive function
must have at least one base case (many functions have more than one).
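Listing B.8 itself is in Julia; as a sketch, the same recursive definition with the base cases $F_0 = F_1 = 1$ of Eq. (B.2.1) reads in Python:

```python
def fibonacci(n):
    """F(n) via the recurrence F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1."""
    if n == 0 or n == 1:   # base cases: stop the recursion
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Note that this naive version recomputes the same subproblems many times over, so it is fine for illustrating recursion but far too slow for large n.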
Sometimes the problem does not appear to be recursive. Thus, to master recursion we must
first find out how to think recursively. For example, consider the problem of computing the sum
of the first n integers. Using recursion, we do this:
$$S(n) = 1 + 2 + \cdots + n = \underbrace{1 + 2 + \cdots + (n-1)}_{S(n-1)} + \, n$$
We also need the base case, which is obviously n = 1 (S(1) = 1). Now we can implement this
in Julia as in Listing B.9.
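Listing B.9 is in Julia; a minimal Python sketch of the same recursive sum is:

```python
def S(n):
    """Sum 1 + 2 + ... + n computed recursively as S(n) = S(n-1) + n."""
    if n == 1:          # base case
        return 1
    return S(n - 1) + n
```

For instance, S(100) recovers Gauss's famous 5050.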
Listing B.10: Simpson's quadrature for $\int_a^b f(x)\,dx$.
1 using PrettyTables
2 function simpson_quad(f,a,b,n)
3 A = 0.
4 deltan = (b-a)/n
5 deltax6 = deltan/6
6 for i=1:n
7 fa = f(a+(i-1)*deltan)
8 fb = f(a+i*deltan)
9 fm = f(a+i*deltan-deltan/2)
10 A += fa + 4*fm + fb
11 end
12 return A*deltax6
13 end
14 fx4(x) = x^4
15 I = simpson_quad(fx4,0,1,10)
interval, and for each $t_i$ we compute $x(t_i)$. Then we plot the points $(t_i, x(t_i))$; these points are
joined by lines and thus we get a smooth-looking curve. This is achieved using the Plots package.
To generate Fig. 11.2, many points $x_i$ in $[0, 6]$ are generated, and for each point $x_i$ we compute $y(x_i)$;
then we plot the points $(x_i, y(x_i))$.
B.6 Probability
Monte Carlo for pi. I show in Listing B.13 the code that implements the Monte Carlo method
for calculating $\pi$. This is the code used to generate Table 5.3 and Fig. 5.2 (that part of the code
is not shown for brevity). It also shows how to work with arrays of unknown size (line 4 for
the array points2, as we do not know in advance how many points will be inside the circle). In line
13, we add one row to this array. A final note: this function returns multiple values put in a tuple
(line 16).
In Listing B.14, I present another implementation, which is much shorter thanks to list
comprehension. In one line (line 3) all n points in $[0, 1]^2$ are generated. In line 4, we get all
the points inside the unit circle using the filter function and an anonymous predicate (x ->
norm(x) <= 1). The norm function, from the LinearAlgebra package, computes $\sqrt{x^2 + y^2}$.
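The logic of Listings B.13 and B.14, minus the table and figure, can be sketched as follows (a Python sketch; the names are mine, and a fixed seed makes the run reproducible):

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi: the fraction of random points in the unit square that
    land inside the quarter circle approximates pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1:   # point falls inside the quarter circle
            inside += 1
    return 4 * inside / n

estimate = monte_carlo_pi(100_000)
```

With 100 000 points the estimate typically agrees with $\pi$ to about two decimal places; the error shrinks only like $1/\sqrt{n}$.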
Computer experiment of tossing a coin. When we toss a coin we either get a head or a tail.
In our virtual coin-tossing experiment, we generate a random integer in {1, 2}, and
we assign one to head and two to tail. We repeat this n times and count the numbers of heads
and tails. Listing B.15 is the resulting code. The code introduces the rand function to generate
random numbers.
Using list comprehension we can have a shorter implementation shown in Listing B.16.
A list comprehension is a syntactic construct for creating a list based on existing lists. It follows the form of
the mathematical set-builder notation (set comprehension). For example, $S = \{2x : x \in \mathbb{N},\ x^2 > 3\}$.
Filter is a higher-order function that processes a data structure (usually a list) in some order to produce a new
data structure containing exactly those elements of the original data structure for which a given predicate returns
the boolean value true.
Listing B.16: Virtual experiment of tossing a coin in Julia: list comprehension based implementation.
1 function tossing_a_coin(n)
2 coin=[ rand(1:2) for _ in 1:n]
3 return (sum(coin .== 1), sum(coin .== 2))
4 end
Birthday problem. Now we present an implementation of the birthday problem. The procedure
is: we repeat the following steps N times, where N is a large number:
collect the birthdays of n persons; this can be done with [rand(1:365) for _ in 1:n];
count the number of occurrences of each birthday in the above array; for example, with 3 persons
we can have {1, 2, 2}, and after the counting we get the counts {1, 2} (there is a shared birthday), whereas
{4, 5, 6}, which has no duplicated elements, gives {1, 1, 1} and thus no shared birthday.
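These steps can be sketched as follows (a Python sketch; the function name and the choice of 20 000 trials are mine):

```python
import random
from collections import Counter

def shared_birthday_probability(n, trials=20_000, seed=0):
    """Estimate the probability that among n people at least two share a birthday."""
    rng = random.Random(seed)
    shared = 0
    for _ in range(trials):
        birthdays = [rng.randint(1, 365) for _ in range(n)]
        counts = Counter(birthdays)               # occurrences of each birthday
        if any(c > 1 for c in counts.values()):   # some birthday occurs twice or more
            shared += 1
    return shared / trials

p23 = shared_birthday_probability(23)
```

For n = 23 the estimate comes out close to the classic exact value of about 0.507.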
Distributions.jl is a Julia package for probability distributions and associated functions. Listing B.18 presents a brief summary of some common functions.
The code in Listing B.19 is used to illustrate graphically the central limit theorem. The code
generates n uniformly distributed variables (i.e., $X_1, X_2, \ldots, X_n$). Then it computes the mean of
the $X_i$'s, that is $Y = (X_1 + \cdots + X_n)/n$. And this is done a large number of times ($N = 2 \times 10^4$,
for example). Then, a histogram of the vector of these N means is plotted (lines 7-8). What we
get is Fig. 5.18a.
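The sampling part of this experiment (everything except the plotting) can be sketched as follows (a Python sketch; names are mine):

```python
import random
import statistics

def sample_means(n, N, seed=0):
    """Draw N means of n uniform(0,1) variables each; a histogram of the
    returned list looks approximately normal, per the central limit theorem."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.random() for _ in range(n)) for _ in range(N)]

means = sample_means(n=30, N=20_000)
# The means cluster around 1/2, with standard deviation
# close to sqrt(1/12) / sqrt(30), about 0.053.
```

A histogram of means is the picture Listing B.19 draws; the larger n is, the closer the histogram gets to a bell curve.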
Listing B.21: N body problem solved with Euler-Cromer’s method: part II.
1 for n=1:stepCount-1
2 for i = 1: N # loop over the bodies
3 ri = pos[:,i,n] # position vector of body ’i’ at time n
4 fi = zeros(2) # compute force acting on ’i’
5 for j = 1:N
6 if ( j != i )
7 rj = pos[:,j,n] # position vector of body ’j’ at time n
8 mj = mass[j] # mass of body ’j’
9 fij = force(ri,rj,mj) # call the force function
10 fi += fij # add force of ’j’ on ’i’
11 end
12 end
13 vel[:,i,n+1] = vel[:,i,n]+dt*fi # update velocity of body ’i’
14 pos[:,i,n+1] = pos[:,i,n]+dt*vel[:,i,n+1] # update position of body ’i’
15 end
16 end
Listing B.22: N body problem solved with Euler-Cromer’s method: part III.
1 colors = [:blue,:orange,:red,:yellow]
2 anim = @animate for n in 1:stepCount
3 plot(;size=(400,400), axisratio=:equal, legend=false)
4 xlims!(-1.1,1.1)
5 ylims!(-1.1,1.1)
6 scatter!(pos[1,:,n],pos[2,:,n],axisratio=:equal) # plot three masses
7 # plot the trajectory of three masses up to time n
8 plot!(pos[1,1,1:n],pos[2,1,1:n],axisratio=:equal,color=colors[1])
9 plot!(pos[1,2,1:n],pos[2,2,1:n],axisratio=:equal,color=colors[2])
10 plot!(pos[1,3,1:n],pos[2,3,1:n],axisratio=:equal,color=colors[3])
11 end
12 gif(anim, "three-body.gif", fps=30) # fps = frames per second
you.
Listing B.23 is the code used to do an SVD image compression. The result of the code was
given in Fig. 10.27. In the code I used the map function. In many programming languages, map is
the name of a higher-order function that applies a given function to each element of a collection,
e.g. a list or set, returning the results in a collection of the same type. Listing B.24 demonstrates
the use of map.
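As a one-line illustration of the same higher-order idea (shown here in Python, where map is also a built-in):

```python
# Apply a function to every element of a collection with map
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))
```

The result is the list of squares [1, 4, 9, 16], in the same order as the input.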
are much better than my implementation, provided as 'packages' or 'libraries'. When we learn
something we should reinvent the wheel, as it is usually the best way to understand it.
But for real work, use libraries. Go to https://julialang.org for a list of packages available
in Julia.
coords of the lower right vertex of the triangle. We need a function to draw a triangle given its
lower left corner and its side length; thus we wrote the function "tri" in Listing B.27.
Listing B.27: Draw a triangle given its lower left corner and side length.
1 void tri(float x, float y, float l) {
2 triangle(x, y, x + l/2, y - sin(PI/3) * l, x + l, y);
3 }
Now, we study the problem carefully. The process is: start with an equilateral triangle; subdivide
it into four smaller congruent equilateral triangles and remove the central triangle; repeat
the previous step with each of the remaining smaller triangles infinitely. Of course we do not divide the
triangles infinitely, but a finite number of times, denoted by n. Note also that subdividing the
biggest triangle into four smaller triangles and removing the central one is equivalent to drawing three
smaller triangles.
Now, if n = 1 we just draw the biggest triangle, which is straightforward. For n = 2 we
need to draw three triangles. This is illustrated in Fig. B.3. We're now ready to write the main
function called "divide"; the code is in Listing B.28. The base case is n = 1, and if n = 2 we
call this function again with l replaced by l/2 (smaller triangles) and n replaced by n - 1, which
is one, and thus three l/2 sub-triangles are drawn. Finally, put the divide function inside the
Processing built-in function draw as shown in Listing B.29.
Figure B.3
For more on Processing, you can check out this YouTube channel.
Listing B.29: Put the drawing functions inside the draw function.
1 void draw() {
2 background(255); // background color
3 divide(x1, y1, l, 3);
4 }
Appendix C. Data science with Julia
8 sns.set_style("ticks")
9
10 train = DataFrame(CSV.File("Pearson.csv"))
11 size(train) # => (1078,2)
12 names(train) # => 2-element Vector{String}: "Father", "Son"
13 first(train,5) # => print the first 5 rows
14 train[!,:Father] # => access column Father without copying
15 col = train[:,:Father] # => copy column Father to col
16 train[train.Father .> 70,:] # => get sub-table where father’s height > 70
17
18 fig , ax = plt.subplots(1, 1, figsize=(5,5))
19 ax.hist(train[!,:Father],bins=18,density=true)
20 plt.xlabel("Height")
21 plt.ylabel("Proportion of observations per unit bin")
[1] John Anderson. Computational Fluid Dynamics: The Basics with Applications. McGraw-Hill
Science/Engineering/Math, 1 edition, 1995. ISBN 9780070016859, 0-07-001685-2. [Cited on page 814]
[2] Herman H. Goldstine. A History of the Calculus of Variations from the 17th through
the 19th Century. Studies in the History of Mathematics and Physical Sciences 5. Springer-
Verlag New York, 1 edition, 1980. ISBN 9781461381082; 1461381088; 9781461381068;
1461381061. [Cited on page 666]
[3] W. W. Rouse Ball. A short account of the history of mathematics. Michigan histori-
cal reprint. Scholarly Publishing Office, University of Michigan Library, 2005. ISBN
1418185272,9781418185275. [Cited on page 862]
[4] Eric Temple Bell. Men of Mathematics: The Lives and Achievements of the Great Mathe-
maticians from Zeno to Poincaré. Touchstone, 1986. ISBN 0-671-62818-6, 978-1-4767-
8425-0. [Cited on page 862]
[5] Alex Bellos. Alex's Adventures in Numberland. Bloomsbury Publishing PLC. [Cited on
page 35]
[6] Jonathan Borwein and David Bailey. Mathematics by Experiment: Plausible Rea-
soning in the 21st Century. A K Peters / CRC Press, 2nd edition, 2008. ISBN
1568814429,9781568814421. [Cited on page 16]
[7] Glen Van Brummelen. Heavenly Mathematics: The Forgotten Art of Spherical Trigonome-
try. Princeton, 2012. ISBN 0691148929 978-0691148922. [Cited on page 249]
[8] D. N. Burghes and M.S. Borrie. Modelling with Differential Equations. Mathematics and
its Applications. Ellis Horwood Ltd , Publisher, 1981. ISBN 0853122865; 9780853122869.
[Cited on pages 604 and 607]
[9] Jennifer Coopersmith. The lazy universe. An introduction to the principle of least action.
Oxford University Press, 1 edition, 2017. ISBN 978-0-19-874304-0,0198743041. [Cited
on pages 288 and 666]
887
Bibliography 888
[10] Richard Courant, Herbert Robbins, and Ian Stewart. What is mathematics?: an elementary
approach to ideas and methods. Oxford University Press, 2nd ed edition, 1996. ISBN
0195105192,9780195105193. [Cited on page 253]
[11] Keith Devlin. The Unfinished game: Pascal, Fermat and the letters. Basic Books, 1 edition,
2008. ISBN 0465009107,9780465009107,9780786726325. [Cited on page 420]
[12] William Dunham. Euler: The master of us all, volume 22. American Mathematical Society,
2022. [Cited on page 393]
[13] C. H Edwards. The historical development of the calculus. Springer, 1979. ISBN
3540904360,9783540904366. [Cited on page 253]
[14] Stanley J. Farlow. Partial differential equations for scientists and engineers. Courier Dover
Publications, 1993. ISBN 048667620X,9780486676203. URL http://gen.lib.rus.ec/
book/index.php?md5=74c5f9a0384371ab46a1def8f73ec978. [Cited on page 604]
[15] Richard Phillips Feynman. The Feynman Lectures on Physics, volumes 1-3. Addison Wesley
Longman, 1970. ISBN 0201021153, 9780201021158. [Cited on pages 193 and 521]
[17] Martin J Gander and Gerhard Wanner. From Euler, Ritz, and Galerkin to modern computing.
SIAM Review, 54(4):627-666, 2012. [Cited on page 692]
[18] Nicholas J. Giordano and Hisao Nakanishi. Computational Physics. Addison-Wesley, 2nd
edition edition, 2005. ISBN 0131469908; 9780131469907. [Cited on page 814]
[19] Anders Hald. A History of Probability and Statistics and Their Applications before 1750
(Wiley Series in Probability and Statistics). Wiley-Interscience, 1 edition, 2003. ISBN
0471471291,9780471471295. [Cited on page 420]
[20] Richard Hamming. Numerical methods for scientists and engineers. Dover, 2nd ed edition,
1987. ISBN 9780486652412,0486652416. [Cited on page 814]
[21] David J. Hand. Statistics: a very short introduction. Very Short Introductions. Oxford
University Press, USA, 2008. ISBN 9780199233564,019923356X. [Cited on page 509]
[23] Brian Hopkins and Robin J Wilson. The truth about Königsberg. The College Mathematics
Journal, 35(3):198-207, 2004. [Cited on page 188]
[24] Eugene Isaacson and Herbert Bishop Keller. Analysis of numerical methods. Dover
Publications, 1994. ISBN 9780486680293,0486680290. [Cited on page 814]
[25] Victor J. Katz. A History of Mathematics. Pearson, 3rd edition edition, 2008. ISBN
0321387007,9780321387004. [Cited on page 862]
[27] M Kline. Mathematical Thought From Ancient to Modern Times I. Oxford University
Press, 1972. [Cited on page 269]
[28] Morris Kline. Calculus: An Intuitive and Physical Approach. John Wiley & Sons, 1967.
ISBN 9780471023968,0471023965. [Cited on page 253]
[29] Morris Kline. Mathematics for the Nonmathematician (Dover books explaining science).
Dover books explaining science. Dover Publications, illustrated. edition, 1985. ISBN
0486248232,9780486248233,048646329X,9780486463292. [Cited on page 239]
[30] Cornelius Lanczos. The Variational Principles of Mechanics. 1957. [Cited on pages 666
and 670]
[31] Serge Lang. Math: Encounters with high school students. Springer, 1985. ISBN
9780387961293,0387961291. [Cited on page 209]
[32] Hans Petter Langtangen and Svein Linge. Finite Difference Computing with PDEs: A
Modern Software Approach. Texts in Computational Science and Engineering 16. Springer
International Publishing, 1 edition, 2017. ISBN 978-3-319-55455-6, 978-3-319-55456-3.
[Cited on page 814]
[33] Eli Maor. To Infinity and Beyond: A Cultural History of the Infinite. Princeton University
Press, illustrated edition edition, 1991. ISBN 9780691025117,0691025118. [Cited on
page 182]
[34] Eli Maor. Trigonometric delights. Princeton University Press, 1998. ISBN
9780691057545,9780691095417,0691057540,0691095418. [Cited on page 244]
[35] Jerrold E. Marsden and Anthony Tromba. Vector calculus. W.H. Freeman, 5th ed edition,
2003. ISBN 9780716749929; 0716749920. [Cited on page 521]
[36] Paul J. Nahin. An imaginary tale: The story of square root of -1. Princeton University Press,
pup edition, 1998. ISBN 0691027951,9780691027951,9780691127989,0691127980.
[Cited on pages 146, 149, and 352]
[37] Paul J. Nahin. Dr. Euler’s Fabulous Formula: Cures Many Mathematical Ills. Princeton
University Press, 2006. ISBN 0691118221,9780691118222. [Cited on page 393]
[38] Paul J. Nahin. When Least Is Best: How Mathematicians Discovered Many Clever Ways to
Make Things as Small (or as Large) as Possible. Princeton University Press, 2007. ISBN
0691130523,9780691130521. [Cited on pages 665 and 675]
[39] Paul J. Nahin. Inside Interesting Integrals: A Collection of Sneaky Tricks, Sly
Substitutions, and Numerous Other Stupendously Clever, Awesomely Wicked, and ...
Undergraduate Lecture Notes in Physics. Springer, 2015 edition, 2014. ISBN
1493912763,9781493912766. URL http://gen.lib.rus.ec/book/index.php?md5=
dd3891c740af26fb79ab93e5eb7ec95f. [Cited on pages 339 and 341]
[40] Paul J. Nahin. Hot Molecules, Cold Electrons: From the Mathematics of Heat to the De-
velopment of the Trans-Atlantic. Princeton University Press, 2020. ISBN 9780691191720;
0691191727. [Cited on page 652]
[41] Yoni Nazarathy and Hayden Klok. Statistics with Julia: Fundamentals for Data Sci-
ence, Machine Learning and Artificial Intelligence. Springer Nature, 2021. ISBN
9783030709013,3030709019. [Cited on page 509]
[42] Roger B Nelsen. Proofs without words: Exercises in visual thinking. Number 1. MAA,
1993. [Cited on page 11]
[43] Ivan Morton Niven. Numbers: rational and irrational. New Mathematical Library. Mathe-
matical Assn of America, random house edition, 1961. ISBN 9780883856017,0883856018.
[Cited on page 91]
[44] G. Polya. How to solve it; a new aspect of mathematical method. Prince-
ton paperbacks, 246. Princeton University Press, 2d ed edition, 1971. ISBN
9780691023564,9780691080970,0691023565,0691080976. [Cited on page 13]
[45] David Poole. Linear Algebra: A Modern Introduction. Brooks Cole, 2005. ISBN
0534998453, 9780534998455. [Cited on pages 514, 700, 789, and 804]
[46] Sheldon M. Ross. A first course in probability. Prentice Hall, 5th ed edition, 1998. ISBN
0137463146,9780137463145. [Cited on page 420]
[47] H. M. Schey. Div, Grad, Curl, and All That: An Informal Text on Vector Cal-
culus, Fourth Edition. W. W. Norton & Company, 4th edition, 2005. ISBN
0393925161,9780393925166. URL http://gen.lib.rus.ec/book/index.php?md5=
261ab626a8014c7f36f081ef725cf968. [Cited on page 572]
[50] James Stewart. Calculus: Early Transcendentals. Stewart's Calculus Series. Brooks Cole,
6th edition, 2007. ISBN 0495011665, 9780495011668. URL http://gen.lib.rus.ec/
book/index.php?md5=ae7190f2e7ed196d93fd43485f2f7759. [Cited on page 521]
[51] Stephen M. Stigler. The history of statistics: the measurement of uncertainty before 1900.
Belknap Press, illustrated edition edition, 1986. ISBN 0674403401,9780674403406. [Cited
on page 420]
[52] John Stillwell. Mathematics and Its History. Undergraduate Texts in Mathematics.
Springer-Verlag New York, 3 edition, 2010. ISBN 144196052X,9781441960528. [Cited
on page 862]
[54] Gilbert Strang. Linear Algebra And Learning from Data, volume 1. Wesley-Cambridge
Press, 1 edition, 2019. ISBN 0692196382,9780692196380. [Cited on pages 700 and 700]
[55] Steven Strogatz. Infinite Powers: How Calculus Reveals the Secrets of the Universe.
Houghton Mifflin Harcourt, 2019. [Cited on pages 19, 252, and 253]
[56] John R. Taylor. Classical Mechanics. University Science Books, 2005. ISBN
189138922X,9781891389221. [Cited on pages 604 and 771]
[57] Lloyd N. Trefethen. Approximation Theory and Approximation Practice. 2013. [Cited on
page 813]
[58] Paul Zeitz. The Art and Craft of Problem Solving. John Wiley, 2nd ed edition, 2007. ISBN
9780471789017,0471789011. [Cited on pages 14 and 36]
Index
Clenshaw’s algorithm, 816 dependent variable, 605
closed bracket, 124 depressed cubic equation, 77
co-domain of a function, 267 derivative, 286, 291
coding, 867 determinant, 763
cofactor, 768 determinant of a matrix, 752
cofactor expansion, 768 difference equation, 453
column space, 743 difference equations, 454
complex analysis, 144 differential equations, 605
complex conjugate, 139 Differential operator, 294
complex number, 135 diffusion equation, 619
complex plane, 135 dimension matrix, 631
compound interests, 130 dimension of a PDE, 615
computer algebra system, 249 dimensional analysis, 627
computing, 17 dimensionless group, 628
condition number of a matrix, 807 directional derivative, 528
conditional probability, 444 Dirichlet integral, 343
conic sections, 254 discrete random variable, 461
conjugate radical, 55 divergence, 580
conservation of energy, 619 divergence of a vector, 583
continued fraction, 64 divergence theorem, 584
convex functions, 315 domain of a function, 267
convexity, 315 dot product, 704
coordinate map, 794 double integral, 540
coordinate vector, 790 double integral in polar coordinates, 542
coordinate vector , 747 driven damped oscillation, 641
coordinates , 747 driven oscillation, 641
coupled oscillation, 648 dummy index, 718
coupled oscillator, 648 dynamical equations, 562
covariance, 497
covariance matrix, 497 eigenvalue, 775
Cramer’s rule, 768 eigenvalue equation, 776
cross derivatives, 525 eigenvector, 775
cross product, 710 Einstein summation notation, 718
cubic equation, 75 elementary matrices, 738
cumulative distribution function, 474 ellipse, 258
curl of a vector field, 586 elliptic integral, 346
cycloid, 674 elliptic integral of the first kind, 346, 647
elliptic integral of the second kind, 346
damped oscillation, 641 empty set, 430
de Moivre, 467 Euclid, 146
de Moivre’s formula, 139 Euler, 393
de Morgan’s laws, 433 Euler’s identity, 145
definition, 11 Euler’s method, 840
Euler-Aspel-Cromer’ method, 842 graph theory, 188
Euler-Maclaurin summation formula, 404 gravitation, 568
expansion coefficients, 747 Green’s identities, 593
Exponential of a matrix, 614
extrema, 310 hanging chain, 668
extreme value theorem, 369 harmonic oscillation, 634
heat conduction, 620
factorial, 151 Heron’s formula, 272, 273
factorization, 78 Hessian matrix, 534
Feynman’s trick, 341 hexadecimal numbers, 187
Fibonacci, 64 histogram, 487
Fibonacci sequence, 60 horizontal translation, 265
finite difference equation, 851 Horner’s method, 175
fixed point iterations, 65 hyperbola , 259
floor function, 113
fluxes, 580
forced oscillation, 641
forward difference, 817
forward-backward-induction, 120
four color theorem, 191
Fourier coefficients, 407
Fourier series, 407
Fourier’s law, 620
frequency, 636
function, 263
function composition, 266, 267
function transformation, 266
function, graph, 263
functional equations, 271
functions of a complex variable, 144

Gauss rule, 835
Gauss’s theorem, 584
generalized binomial theorem, 382
generalized eigenvector, 614
generalized Pythagoras theorem, 230
generating functions, 501
geometric mean, 116
geometric series, 386
golden ratio, 57
gradient vector, 529
Gram-Schmidt algorithm, 761
graph, 188
graph of functions, 263

hyperbolic functions, 235

implicit differentiation, 308
improper integrals, 344
independent variable, 605
inequality, 114
infimum, 429
infinite series, 382
initial-boundary value problem, 620
inner product, 798
inner product space, 799
integral, 281, 283
integration by parts, 327
integration by substitution, 325
intermediate value theorem, 369
interpolation, 819
inverse function, 268
irrational number, 51
isomorphism, 794

Jacobian matrix, 546
Jensen inequality, 315
joint probability mass function, 491
Julia, 17, 867

Kepler’s laws, 560
kernel of a linear transformation, 793
Kronecker delta, 756

L’Hopital’s rule, 366
Lagrange basis polynomials, 821
Lagrange interpolation, 821
Lagrange multiplier, 537
Lagrange multiplier method, 538
Lagrangian mechanics, 684
Laplacian operator, 622
law of cosines, 230
law of heat conduction, 620
law of sines, 230
law of total probability, 443
Legendre polynomials, 800
length of plane curves, 345
limit, 111, 356
line integrals, 575
linear approximation, 527
linear combination, 719
linear equation, 72
linear function, 748
linear independence, 727
linear recurrence equation, 453
linear space, 785
linear transformation, 793
linear transformations, 748
logarithm, 127
logarithmic differentiation, 310
LU decomposition, 741

Machin’s formula, 219
marginal distribution, 491
Markov chain, 516
Markov’s inequality, 498
mass matrix, 649
math phobia, 18
mathematical modeling, 605
matrix-matrix multiplication, 753
maxima, 310
mean value theorem, 369
Mercator’s series, 385
Mersenne number, 100
method of separation of variables, 652
mid-point rule, 832
minima, 310
modular arithmetic, 175
modulus of complex number, 137
moment of inertia, 550
moment of inertia matrix, 772
Monte Carlo method, 426
multiplication rule of probability, 440

N-body problem, 843
natural frequency, 636
Neptune, 571
Newton-Raphson method, 528
nilpotent matrix, 614
norm, 803
normal frequencies, 649
normal modes, 649
normalizing a vector, 705
normed vector space, 803
nullity, 745
nullspace, 743
number theory, 34
numerical differentiation, 816

one-to-one, 793
onto, 793
order of a PDE, 615
ordinary differential equations, 562, 605
orthogonal matrix, 757
orthonormal basis, 756

parabolas, 259
parallel axis theorem, 553
parametric curves, 269
partial derivative, 525
partial differential equations, 605
partial fraction decomposition, 337
partial fractions, 339
Pascal triangle, 167
pattern, 4
PDE, 615
PDF, 487
periodic functions, 408
permutation, 151
piecewise continuous functions, 409
pigeonhole principle, 158, 159
polar coordinates, 373
polar form of complex numbers, 137
polynomial evaluation, 175
polynomial remainder theorem, 171
polynomials, 169
power, 92
prime number, 47
principal axes theorem, 783
probability density function, 487
probability mass function, 461
probability vector, 516
processing, 17
programming, 17, 867
projection, 709
proof, 11
proof by contradiction, 47
proof by induction, 38
pseudoinverse matrix, 513
Pythagoras, 71
Pythagoras theorem, 67
Pythagorean triple, 69

quadratic equation, 73
quadratic form, 536
quadratic forms, 781
quaternion, 717
quotient rule of differentiation, 300

radical, 54
radicand, 54
random variable, 461
range of a function, 267
range of a linear transformation, 793
rank of a matrix, 725
rank theorem, 746
rational numbers, 47
rectangular or right hyperbola, 259
recurrence equation, 453
reduced row echelon form, 724
resonance, 644
Rolle’s theorem, 369
root mean square (RMS), 123
row echelon form, 723
row space, 743
Runge’s phenomenon, 823

saddle point, 532
sample, 484
sample space, 433
sample variance, 484
scalar, 701
scalar quantities, 701
scientific notation, 96
second derivative, 307
second derivative test, 534
second moment of area, 550
sequence, 111
shear transformation, 749
Simpson rule, 834
Snell’s law of refraction, 312
square root, 52
square wave, 409
standard deviation, 484
state vector, 516
stiffness matrix, 649
Stokes theorem, 587
subset, 430
subspace, 741
summation index, 718
superset, 430
supremum, 429
symmetry, 14
system of linear equations, 719

tangent plane, 527
Taylor’s series, 393, 534
telescoping sum, 56
the basis theorem, 743
the Cauchy-Schwarz inequality, 801
the fundamental theorem of calculus, 323
the method of exhaustion, 275
the rank theorem, 725
the triangle inequality, 707
theorem, 11
time rate of change of position, 291
total differential, 527
transcendental equations, 91
transcendental numbers, 91
transformation, 265
transition matrix, 516
transverse wave, 656
trapezoidal rule, 832
trigonometric substitution, 337
trigonometry, 198
trigonometry equations, 228
trigonometry identities, 143, 209
trigonometry inequality, 222
triple integral, 542
truncation error, 817

vector field, 574
vector space, 785
vectorial quantities, 701
Venn diagram, 430
Verlet method, 846
vertical asymptotes, 232, 358
vertical translation, 265
Vieta’s formula, 175
Viète, 78
Viète’s formula, 106
von Neumann stability analysis, 854