Fundamental Theorems
OLIVER KNILL
This is still a live document and will be extended. Thanks to Jordan Stoyanov for some valuable
comments and corrections.
1. Arithmetic
Let N = {0, 1, 2, 3, . . . } be the set of natural numbers. A number p ∈ N, p > 1 is prime
if p has no factors different from 1 and p. With a prime factorization n = p1 · · · pk, we
understand the prime factors pj of n to be ordered as pj ≤ pj+1. The fundamental theorem
of arithmetic is
Theorem: Every n ∈ N, n > 1 has a unique prime factorization.
Euclid anticipated the result. Carl Friedrich Gauss gave in 1798 the first proof in his monograph
“Disquisitiones Arithmeticae”. Within abstract algebra, the result is the statement that the
ring of integers Z is a unique factorization domain. For a literature source, see [227]. For
more general number theory literature, see [204, 79].
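The ordered factor list can be illustrated with a short trial-division sketch (the function name `factor` is an ad hoc choice):

```python
def factor(n):
    # Trial division: repeatedly split off the smallest remaining factor,
    # which is automatically prime; the list comes out ordered.
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```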
2. Geometry
Given an inner product space (V, ·) with dot product v · w leading to length |v| = √(v · v),
three non-zero vectors v, w, v − w define a right angle triangle if v and w are perpendicular
meaning that v · w = 0. If a = |v|, b = |w|, c = |v − w| are the lengths of the three vectors, then
the Pythagoras theorem is
Theorem: a² + b² = c².
Anticipated by Babylonian mathematicians in examples, it appeared independently also in
Chinese mathematics [397] and was proven first by Pythagoras. It is used in many parts of
mathematics, like in the Parseval equality of Fourier theory. See [334, 285, 234].
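In coordinates, the theorem can be checked directly; a minimal sketch with the classical 3-4-5 right-angle triangle in R²:

```python
v = (3, 0)                                      # first leg
w = (0, 4)                                      # second leg
dot = v[0] * w[0] + v[1] * w[1]                 # v · w = 0: right angle
a2 = v[0] ** 2 + v[1] ** 2                      # a² = |v|²
b2 = w[0] ** 2 + w[1] ** 2                      # b² = |w|²
c2 = (v[0] - w[0]) ** 2 + (v[1] - w[1]) ** 2    # c² = |v − w|²
print(dot, a2, b2, c2)  # 0 9 16 25
```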
3. Calculus
Let f be a function of one variable which is continuously differentiable, meaning that
the limit g(x) = limh→0 [f (x + h) − f (x)]/h exists at every point x and defines a continuous
function g. For any such function f, we can form the integral ∫_a^b f(t) dt and the derivative
(d/dx) f(x) = f′(x).

Theorem: ∫_a^b f′(x) dx = f(b) − f(a),  (d/dx) ∫_0^x f(t) dt = f(x).
Newton and Leibniz discovered the result independently; Gregory wrote down the first proof in
his “Geometriae Pars Universalis” of 1668. The result generalizes to higher dimensions in the
form of the Green-Stokes-Gauss-Ostrogradski theorem. For history, see [233]. [134] tells
the “tongue-in-cheek” proof: as the derivative is a limit of quotients of differences, the
anti-derivative must be a limit of sums of products. See also [133].
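Numerically, the first identity can be checked with a midpoint Riemann sum; the choice f(x) = x³ on [0, 2] is arbitrary:

```python
f = lambda x: x ** 3           # an antiderivative of f'
fp = lambda x: 3 * x ** 2      # the derivative f'
a, b, n = 0.0, 2.0, 10000
h = (b - a) / n
# Midpoint Riemann sum of f' over [a, b]
riemann = h * sum(fp(a + (k + 0.5) * h) for k in range(n))
print(riemann, f(b) - f(a))    # both close to 8
```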
4. Algebra
A polynomial is a complex valued function of the form f (x) = a0 + a1 x + · · · + an xn , where
the entries ak are in the complex plane C. The space of all polynomials is denoted C[x]. The
largest non-negative integer n for which an ≠ 0 is called the degree of the polynomial. Degree
1 polynomials are linear, degree 2 polynomials are called quadratic etc. The fundamental
theorem of algebra is

Theorem: Every polynomial f ∈ C[x] of degree n ≥ 1 has a root in C.
This result was anticipated in the 17th century. The first writer to assert that any n-th degree
polynomial has a root was Peter Roth in 1600 [332]; it was proven first by Carl Friedrich Gauss and
finalized in 1920 by Alexander Ostrowski, who fixed a topological mistake in Gauss's proof. The
theorem assures that the field of complex numbers C is algebraically closed. For history and
many proofs see [146].
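As a purely numerical illustration (not used in any proof), the Durand-Kerner iteration approximates all n roots simultaneously; sketched here for the hypothetical example p(z) = z³ − 1:

```python
def durand_kerner(p, degree, steps=100):
    # Standard initial guesses: powers of a generic complex number.
    roots = [complex(0.4, 0.9) ** k for k in range(degree)]
    for _ in range(steps):
        updated = []
        for i, r in enumerate(roots):
            denom = 1.0 + 0.0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= r - s           # product over the other iterates
            updated.append(r - p(r) / denom)
        roots = updated
    return roots

cube_roots = durand_kerner(lambda z: z ** 3 - 1, 3)
```

The three returned values approximate the cube roots of unity.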
5. Probability
Given a sequence Xk of independent random variables on a probability space (Ω, A, P)
which all have the same cumulative distribution function FX(t) = P[X ≤ t]. The normalized
random variable is (X − E[X])/σ[X], where the mean is E[X] = ∫_Ω X(ω) dP(ω)
and σ[X] = E[(X − E[X])²]^(1/2) is the standard deviation. A sequence of random variables
Zn → Z converges in distribution to Z if FZn (t) → FZ (t) for all t as n → ∞. If Z is a
Gaussian random variable with zero mean E[Z] = 0 and standard deviation σ[Z] = 1, the
central limit theorem is:
Theorem: For normalized i.i.d. random variables Xk, (X1 + · · · + Xn)/√n → Z in distribution.
Proven in a special case by Abraham de Moivre for discrete random variables and then by
Constantin Carathéodory and Paul Lévy, the theorem explains the importance and ubiquity
of the Gaussian density function e^(−x²/2)/√(2π) defining the normal distribution. The
Gaussian distribution was first considered by Abraham de Moivre from 1738. See [395, 248].
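A Monte Carlo sketch of the theorem: sums of twelve independent uniform [0, 1] variables, centered at their mean 6, have variance 1; the sample count and seed are arbitrary choices:

```python
import random
import statistics

random.seed(7)
# Each sample is a centered sum of 12 uniforms; its variance is 12/12 = 1.
samples = [sum(random.random() for _ in range(12)) - 6.0
           for _ in range(20000)]
print(statistics.fmean(samples), statistics.pstdev(samples))  # near 0 and 1
```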
6. Dynamics
Assume X is a random variable on a probability space (Ω, A, P) for which |X| has finite
mean E[|X|]. This means X : Ω → R is measurable and ∫_Ω |X(x)| dP(x) is finite. Let T be an
ergodic, measure-preserving transformation from Ω to Ω. Measure preserving means that
P[T⁻¹(A)] = P[A] for all measurable sets A ∈ A. Ergodic means that T(A) = A
implies P[A] = 0 or P[A] = 1 for all A ∈ A. The ergodic theorem states that, for an ergodic
transformation T, one has:
Theorem: [X(x) + X(T x) + · · · + X(Tⁿ⁻¹x)]/n → E[X] for almost all x.
This theorem from 1931 is due to George Birkhoff and called Birkhoff ’s pointwise ergodic
theorem. It assures that “time averages” are equal to “space averages”. A draft of the
mean ergodic theorem of John von Neumann, which appeared in 1932, motivated
Birkhoff, but the mean ergodic version is weaker. See [439] for history. A special
case is the law of large numbers, in which case the random variables x → X(T k (x)) are
independent with equal distribution (IID). The theorem belongs to ergodic theory [181, 100,
372].
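A sketch of the theorem for the irrational rotation T(x) = x + α mod 1 on [0, 1) with X(x) = cos(2πx), whose space average is 0; the values of α, the starting point and n are arbitrary choices:

```python
import math

alpha = math.sqrt(2) - 1         # irrational rotation number: T is ergodic
x, total, n = 0.1, 0.0, 100000
for _ in range(n):
    total += math.cos(2 * math.pi * x)   # X(T^k x)
    x = (x + alpha) % 1.0
print(total / n)   # time average, close to the space average 0
```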
7. Set theory
A bijection is a map from X to Y which is injective: f (x) = f (y) ⇒ x = y and surjective:
for every y ∈ Y , there exists x ∈ X with f (x) = y. Two sets X, Y have the same cardinality,
if there exists a bijection from X to Y. Given a set X, the power set 2^X is the set of all
subsets of X, including the empty set and X itself. If X has n elements, the power set has
2^n elements. Cantor's theorem is

Theorem: For any set X, the sets X and 2^X have different cardinality.
The result is due to Cantor. Taking for X the natural numbers, every Y ∈ 2^X defines a
real number φ(Y) = Σ_{y∈Y} 2^(−y) ∈ [0, 1]. As 2^X and [0, 1] have the same cardinality (the doubly
counted pair cases like 0.39999999 · · · = 0.400000 . . . form a countable set), the set [0, 1]
is uncountable. There are different types of infinities leading to countable infinite sets
and uncountable infinite sets. For comparing sets, the Schröder-Bernstein theorem is
important. If there exist injective functions f : X → Y and g : Y → X, then there exists a
bijection X → Y . This result was used by Cantor already. For literature, see [182].
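The proof idea is the diagonal set D = {x ∈ X | x ∉ f(x)}: it differs from f(x) at the point x for every x, so no map f : X → 2^X is surjective. A finite sketch with an arbitrary map f:

```python
def diagonal_misses(X, f):
    # D differs from f(x) exactly at the element x, for every x in X.
    D = {x for x in X if x not in f(x)}
    return all(f(x) != D for x in X)

X = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {1, 2}}.get   # some map X -> 2^X
print(diagonal_misses(X, f))  # True: D is missed by f
```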
8. Statistics
A probability space (Ω, A, P) consists of a set Ω, a σ-algebra A and a probability measure
P. A σ-algebra is a collection of subsets of Ω which contains the empty set and which is closed
under the operations of taking complements, countable unions and countable intersections. The
function P on A takes values in the interval [0, 1], satisfies P[Ω] = 1 and P[⋃_{A∈S} A] = Σ_{A∈S} P[A]
for any finite or countable set S ⊂ A of pairwise disjoint sets. The elements in A are called
events. Given two events A, B where B satisfies P[B] > 0, one can define the conditional
probability P[A|B] = P[A ∩ B]/P[B]. Bayes theorem states:

Theorem: P[A|B] = P[B|A] P[A]/P[B].
The setup states the Kolmogorov axioms, formulated by Andrey Kolmogorov, who wrote in 1933 the
“Grundbegriffe der Wahrscheinlichkeitsrechnung” [259] based on measure theory built by Émile
Borel and Henri Lebesgue. For history, see [362], who report that “Kolmogorov sat down
to write the Grundbegriffe, in a rented cottage on the Klyaz’ma River in November 1932”.
Bayes theorem is more like a fantastically clever definition and not really a theorem. There
is nothing to prove as multiplying with P[B] gives P[A ∩ B] on both sides. It essentially
restates that A ∩ B = B ∩ A, the Abelian property of the product in the ring A. More
general is the statement that if A1, . . . , An is a disjoint set of events whose union is Ω, then
P[Ai|B] = P[B|Ai]P[Ai]/(Σ_j P[B|Aj]P[Aj]). Bayes theorem was first proven in 1763 by Thomas
Bayes. It is by some considered to the theory of probability what the Pythagoras theorem is to
geometry. If one measures the ratio applicability over the difficulty of proof, then this theorem
even beats Pythagoras, as no proof is required. Similarly as “a+(b+c)=(a+b)+c”, also Bayes
theorem is essentially a definition but less intuitive as “Monty Hall” illustrates [347]. See [248].
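A numeric sketch of Bayes theorem with hypothetical numbers for a diagnostic test: prior 1%, sensitivity 99%, false positive rate 5%:

```python
p_d = 0.01          # prior P[D]
p_pos_d = 0.99      # P[+|D]
p_pos_nd = 0.05     # P[+|not D]
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)   # total probability P[+]
posterior = p_pos_d * p_d / p_pos              # Bayes: P[D|+]
print(posterior)    # 1/6: most positive tests are false positives
```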
9. Graph theory
A finite simple graph G = (V, E) is a finite collection V of vertices connected by a finite
collection E of edges, which are un-ordered pairs (a, b) with a, b ∈ V . Simple means that no
self-loops nor multiple connections are present in the graph. The vertex degree d(x) of
x ∈ V is the number of edges containing x.
Theorem: Σ_{x∈V} d(x)/2 = |E|.
This formula is also called the Euler handshake formula because every edge in a graph
contributes exactly two handshakes. It can be seen as a Gauss-Bonnet formula for the
valuation G → v1 (G) counting the number of edges in G. A valuation φ is a function
defined on subgraphs with the property that φ(A ∪ B) = φ(A) + φ(B) − φ(A ∩ B). Examples
of valuations are the numbers vk(G) of complete sub-graphs of dimension k of G. Another
example is the Euler characteristic χ(G) = v0(G) − v1(G) + v2(G) − v3(G) + · · · + (−1)^d vd(G).
If we write dk(x) = vk(S(x)), where S(x) is the unit sphere of x, then Σ_{x∈V} dk(x)/(k + 1) =
vk(G) is the generalized handshake formula, the Gauss-Bonnet result for vk. The Euler
characteristic then satisfies Σ_{x∈V} K(x) = χ(G), where K(x) = Σ_{k=0}^∞ (−1)^k vk(S(x))/(k + 1).
This is the discrete Gauss-Bonnet result. The handshake result was found by Euler. For
more about graph theory, see [46, 298, 28, 174]; about Euler, see [145].
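The handshake formula, checked on a small arbitrary graph:

```python
V = [0, 1, 2, 3, 4]
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]     # unordered pairs
degree = {x: sum(x in e for e in E) for x in V}  # d(x): edges containing x
print(sum(degree.values()) // 2, len(E))         # both are 5
```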
10. Polyhedra
A finite simple graph G = (V, E) is given by a finite vertex set V and edge set E. A subset
W of V generates the sub-graph (W, {{a, b} ∈ E | a, b ∈ W}). The unit sphere of x ∈ V
is the sub-graph generated by S(x) = {y ∈ V | {x, y} ∈ E}. The empty graph 0 = (∅, ∅)
is called the (−1)-sphere. The 1-point graph 1 = ({1}, ∅) = K1 is the smallest contractible
graph. Inductively, a graph G is called contractible, if it is either 1 or if there exists x ∈ V
such that both G − x and S(x) are contractible. Inductively, a graph G is called a d-sphere,
if it is either 0 or if every S(x) is a (d − 1)-sphere and if there exists a vertex x such that
G − x is contractible. Let vk denote the number of complete sub-graphs Kk+1 of G. The vector
(v0 , v1 , . . . ) is the f -vector of G and χ(G) = v0 − v1 + v2 − . . . is the Euler characteristic of
G. The generalized Euler gem formula due to Schläfli is:

Theorem: For a d-sphere G, χ(G) = v0 − v1 + v2 − · · · = 1 + (−1)^d.
Convex polytopes were studied already in ancient Greece. The Euler characteristic relations
were discovered in dimension 2 by Descartes [4] and interpreted topologically by Euler who
proved the case d = 2. This is written as v − e + f = 2, where v = v0 , e = v1 , f = v2 . The two
dimensional case can be stated for planar graphs, where one has a clear notion of what the
two dimensional cells are and can use the topology of the ambient sphere in which the graph
is embedded. Historically there had been confusions [89, 342] about the definitions. It was
Ludwig Schläfli [359] who covered the higher dimensional case. The above set-up is a modern
reformulation of his set-up, due essentially to Alexander Evako. Multiple refutations [271] can
be blamed on ambiguous definitions. Polytopes are often defined through convexity [176, 438]
and there is not much consensus on a general definition [175], which was the reason to formulate
Schläfli's theorem in this entry using a maybe somewhat restrictive (as all cells are simplices)
but clear combinatorial definition of what a “sphere” is.
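A brute-force check for the octahedron graph, a 2-sphere in the combinatorial sense above, with f-vector (6, 12, 8) and χ = 6 − 12 + 8 = 2:

```python
from itertools import combinations

# Octahedron: 6 vertices, all pairs adjacent except the 3 antipodal pairs.
vertices = range(6)
antipodal = {frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})}

def adjacent(a, b):
    return a != b and frozenset({a, b}) not in antipodal

def cliques(k):
    # number of complete subgraphs K_k, found by exhaustive search
    return sum(all(adjacent(a, b) for a, b in combinations(c, 2))
               for c in combinations(vertices, k))

f_vector = [cliques(k) for k in range(1, 5)]
chi = sum((-1) ** k * v for k, v in enumerate(f_vector))
print(f_vector, chi)  # [6, 12, 8, 0] and 2
```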
11. Topology
The Zorn lemma assures that the Cartesian product of a non-empty family of non-empty
sets is non-empty. The Zorn lemma is equivalent to the axiom of choice in the ZFC axiom
system and to the Tychonov theorem in topology as below. Let X = Π_{i∈I} Xi denote the
product of topological spaces. The product topology is the weakest topology on X which
renders all projection functions πi : X → Xi continuous.
Theorem: If all Xi are compact, then Π_{i∈I} Xi is compact.
Zorn's lemma is due to Kazimierz Kuratowski in 1922 and Max August Zorn in 1935. Andrey
Nikolayevich Tychonov proved his theorem in 1930. One application of the Zorn lemma is the
Hahn-Banach theorem in functional analysis, the existence of spanning trees in infinite
graphs or the fact that commutative rings with units have maximal ideals. For literature, see
[219].
12. Algebraic geometry
Theorem: I(V(J)) = √J.
The theorem is due to Hilbert. A simple example is when J = hpi = hx2 − 2xy + y 2 i is the
ideal J generated by p in R[x, y]; then V (J) = {x = y} and I(V (J)) is the ideal generated by
x − y. For literature, see [189].
13. Cryptology
An integer p > 1 is prime if 1 and p are the only factors of p. The number k mod p is the
remainder when dividing k by p. Fermat's little theorem is

Theorem: a^(p−1) = 1 mod p for every prime p and every a not divisible by p.
The theorem was found by Pierre de Fermat in 1640. A first proof appeared in 1683 by
Leibniz. Euler in 1736 published the first proof. The result is used in the Diffie-Hellman
key exchange, where a large public prime p and a public base value a are taken. Ana chooses
a number x and publishes X = a^x mod p; Bob picks y and publishes Y = a^y mod p. Their secret
key is K = X^y = Y^x. An adversary Eve, who only knows a, p, X and Y, cannot get
K from this due to the difficulty of the discrete log problem. More generally, for possibly composite
numbers n, the theorem extends to the fact that a^φ(n) = 1 modulo n for a coprime to n, where Euler's totient
function φ(n) counts the number of positive integers less than n which are coprime to n. The
generalized Fermat theorem is the key for RSA crypto systems: in order for Ana and Bob
to communicate, Bob publishes the product n = pq of two large primes as well as some public
exponent a. Neither Ana nor any third party Eve knows the factorization. Ana communicates a
message x to Bob by sending X = x^a mod n using modular exponentiation. Bob, who knows
p, q, can find y such that ay = 1 mod φ(n). By the generalized Fermat theorem, X^y = x^(ay) = x mod n,
so he can compute x = X^y mod n. Not even Ana herself could recover x from X.
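A toy RSA round trip, with public exponent a and secret exponent y chosen so that ay = 1 mod φ(n); real keys use primes with hundreds of digits:

```python
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # computable only when p, q are known
a = 17                      # public exponent, coprime to phi
y = pow(a, -1, phi)         # secret exponent: a*y = 1 mod phi
x = 65                      # Ana's message
X = pow(x, a, n)            # Ana sends X = x^a mod n
print(pow(X, y, n))         # Bob recovers x: prints 65
```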
This Cauchy integral formula is used for other results and estimates. It implies
for example the Cauchy integral theorem assuring that ∮_C f(z) dz = 0 for any simple closed
curve C in G bounding a simply connected region D ⊂ G. Morera's theorem assures that
for any domain G, if ∮_C f(z) dz = 0 for all simple closed smooth curves C in G, then f is
holomorphic in G. Another generalization is residue calculus: For a simply connected region
The result is used in data fitting, for example when understanding the least squares solution
x = (A^T A)⁻¹ A^T b of a system of linear equations Ax = b. It assures that A^T A is
invertible if A has a trivial kernel. The result is a bit stronger than the rank-nullity theorem
dim(ran(A)) + dim(ker(A)) = n alone and implies that for finite m × n matrices the index
dim(kerA) − dim(kerA∗ ) is always n − m, which is the value for the 0 matrix. For literature, see
[393]. The result has an abstract generalization in the form of the group isomorphism theorem
for a group homomorphism f stating that G/ker(f ) is isomorphic to f (G). It can also be
described using the singular value decomposition A = U D V^T. The space ran(A) of dimension
r = rank(A) has as a basis the first r columns of U. The space ker(A) of dimension n − r has as
a basis the last n − r columns of V. The space ran(A^T) has as a basis the first r columns of V.
The space ker(A^T) has as a basis the last m − r columns of U.
This result is due to Picard and Lindelöf from 1894. Replacing the Lipschitz condition with
continuity still gives an existence theorem, which is due to Giuseppe Peano in 1886, but
uniqueness can fail, like for x′ = √x, x(0) = 0 with solutions x = 0 and x(t) = t²/4. The
example x′(t) = x²(t), x(0) = 1 with solution 1/(1 − t) shows that we can not have solutions
for all t. The proof is a simple application of the Banach fixed point theorem. For literature,
see [88].
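The Picard iteration behind the fixed point argument, sketched for x′ = x, x(0) = 1: iterating φ ↦ 1 + ∫_0^t φ(s) ds on polynomial coefficient lists produces the Taylor polynomials of e^t:

```python
import math

def picard_step(coeffs):
    # phi -> 1 + integral_0^t phi(s) ds on polynomial coefficients.
    integral = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integral[0] = 1.0
    return integral

phi = [1.0]              # phi_0(t) = 1
for _ in range(6):
    phi = picard_step(phi)
print(phi)               # coefficients 1/k!, the partial sums of e^t
```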
20. Logic
An axiom system A is a collection of formal statements assumed to be true. We assume it to
contain the basic Peano axioms of arithmetic. An axiom system is complete, if every true
statement can be proven within the system. The system is consistent if one can not prove
1 = 0 within the system. It is provably consistent if one can prove a theorem ”The axiom
system A is consistent.” within the system.
For representation theory, see [425]. Pioneers in representation theory were Ferdinand Georg
Frobenius, Hermann Weyl, and Élie Cartan. Examples of compact groups are finite groups, or
compact Lie groups (a smooth manifold which is also a group for which the multiplication
and inverse operations are smooth) like the torus group T^n, the orthogonal groups O(n) of
all orthogonal n × n matrices or the unitary groups U (n) of all unitary n × n matrices or the
group Sp(n) of all symplectic n × n matrices. Examples of groups that are not Lie groups are
the groups Zp of p-adic integers, which are examples of pro-finite groups.
The Fourier transform then produces a homomorphism from L1 (G) to C0 (Ĝ) or a unitary
transformation from L2 (G) to L2 (Ĝ). For literature, see [83, 416].
23. Computability
The class of general recursive functions is the smallest class of functions which allows
projection, iteration, composition and minimization. The class of Turing computable
functions are the functions which can be implemented by a Turing machine possessing
finitely many states. Turing introduced this in 1936 [327].

Theorem: The class of general recursive functions agrees with the class of Turing computable functions.
Category theory was introduced in 1945 by Samuel Eilenberg and Saunders Mac Lane. The
lemma above is due to Nobuo Yoneda from 1954. It allows one to see a category embedded in a
functor category which is a topos and serves as a sort of completion. One can identify a
set S for example with Hom(1, S). Another example is Cayley's theorem, stating that every
group G can be understood by looking at the group of permutations of the set G. For category
theory, see [295, 272]. For history, see [267].
This is the implicit function theorem. There are concrete and fast algorithms to compute the
continuation. An example is the Newton method which iterates T (x) = x − f (x, y)/fx (x, y)
to find the roots of x → f (x, y) for fixed y. The importance of the implicit function theorem
is both theoretical as well as applied. The result assures that one can make statements about
a complicated theory near some model, which is understood. There are related situations, like
if we want to continue a solution of F (x, y) = (f (x, y), g(x, y)) = (0, 0) giving equilibrium
points of the vector field F. Then the Newton step method T(x, y) = (x, y) − dF⁻¹(x, y) · F(x, y)
allows a continuation if dF(x, y) is invertible. This means that small deformations of
F do not lead to changes of the nature of the equilibrium points. When equilibrium points
change, the system exhibits bifurcations. This in particular applies to F (x, y) = ∇f (x, y),
where equilibrium points are critical points. The derivative dF of F is then the Hessian.
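The one-dimensional Newton step from the text, for the hypothetical example f(x, y) = x² − y with fixed y = 2, converging to the root x = √2:

```python
y = 2.0
f = lambda x: x * x - y      # f(x, y) with y fixed
fx = lambda x: 2 * x         # partial derivative f_x
x = 1.0                      # initial guess
for _ in range(8):
    x = x - f(x) / fx(x)     # Newton step T(x)
print(x)                     # converges quadratically to sqrt(2)
```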
26. Counting
A simplicial complex X is a finite set of non-empty sets that is closed under the operation
of taking finite non-empty subsets. The Euler characteristic χ of a simplicial complex X is
defined as χ(X) = Σ_{x∈X} (−1)^dim(x), where the dimension dim(x) of a set x is its cardinality
|x| minus 1.
For zero-dimensional simplicial complexes X (meaning that all sets in X have cardinality
1), we get the rule of product: if you have m ways to do one thing and n ways to do
another, then there are mn ways to do both. This fundamental counting principle is
used in probability theory for example. The Cartesian product X × Y of two complexes
is defined as the set-theoretical product of the two finite sets. It is not a simplicial complex
any more in general but has the same Euler characteristic than its Barycentric refinement
(X × Y)₁, which is a simplicial complex. The maximal dimension of A × B is dim(A) + dim(B).
If pX(t) = Σ_{k=0}^n vk(X) t^k is the generating function of vk(X), then pX×Y(t) = pX(t) pY(t),
implying the counting principle as pX(−1) = χ(X). The function pX(t) is called the Euler
polynomial of X. The importance of the Euler characteristic as a counting tool lies in the fact
that only χ(X) = pX(−1) is invariant under Barycentric subdivision, χ(X) = χ(X₁), where X₁
is the complex which consists of the vertex sets of all complete subgraphs of the graph in which
the sets of X are the vertices and where two are connected if one is contained in the other.
The concept of Euler characteristic carries over to continuum spaces like manifolds, where the
product property holds too. See for example [11].
T restricted to X = Q ∩ [0.32, 0.455] shows that completeness is necessary. The unique fixed point of
T in X is √2 − 1 = 0.414..., which is not in Q because √2 = p/q would imply 2q² = p², which
is not possible for integers, as the left hand side has an odd number of prime factors 2 while the
right hand side has an even number of prime factors 2. See [323].
There is a similar formula for the abscissa of absolute convergence of ζ which is defined
as σa = inf{a ∈ R | ζ(z) converges absolutely for all Re(z) > a}. The result is σa =
lim sup_{n→∞} log(S(n))/λn. For example, the Dirichlet eta function ζ(s) = Σ_{n=1}^∞ (−1)^(n−1)/n^s
has the abscissa of convergence σ0 = 0 and the absolute abscissa of convergence σa = 1. The
series ζ(s) = Σ_{n=1}^∞ e^(in^α)/n^s has σa = 1 and σ0 = 1 − α. If an is multiplicative, a_nm = a_n a_m for
relatively prime n, m, then Σ_{n=1}^∞ a_n/n^s = Π_p (1 + a_p/p^s + a_{p²}/p^(2s) + · · · ) generalizes the Euler
golden key formula Σ_n 1/n^s = Π_p (1 − 1/p^s)⁻¹. See [184, 186].
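A numeric check of the Euler golden key at s = 2, where Σ 1/n² = π²/6: the partial product over all primes below 10000 already matches to several digits (the cutoff is an arbitrary choice):

```python
import math

N = 10000
sieve = [True] * N
sieve[0] = sieve[1] = False
for i in range(2, int(N ** 0.5) + 1):     # sieve of Eratosthenes
    if sieve[i]:
        for j in range(i * i, N, i):
            sieve[j] = False

product = 1.0
for p in range(N):
    if sieve[p]:
        product *= 1.0 / (1.0 - p ** -2)  # Euler factor (1 - p^-s)^-1 at s = 2
print(product, math.pi ** 2 / 6)
```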
29. Trigonometry
Mathematicians had a long and painful struggle with the concept of limit. One of the first
to ponder the question was Zeno of Elea around 450 BC. Archimedes of Syracuse made some
progress around 250 BC. Since Augustin-Louis Cauchy, one defines the limit limx→a f(x) = b to
exist if and only if for all ε > 0, there exists a δ > 0 such that if 0 < |x − a| < δ, then |f(x) − b| < ε.
A place where limits appear is when computing derivatives g′(0) = limx→0 [g(x) − g(0)]/x. In
the case g(x) = sin(x), one has to understand the limit of the function f (x) = sin(x)/x which is
the sinc function. A prototype result is the fundamental theorem of trigonometry (called
as such in some calculus texts like [61]).

Theorem: limx→0 sin(x)/x = 1.
It appears strange to give weight to such a special result, but it explains the difficulty of the
limit concept and the l'Hôpital rule of 1694, which was formulated in a book of Bernoulli commissioned
by Hôpital: the limit can be obtained by differentiating both the numerator and the denominator and
taking the limit of the quotients. The result allows one to derive (using trigonometric identities)
that in general sin′(x) = cos(x) and cos′(x) = − sin(x). One single limit is the gateway. It is
important also culturally because it embraces thousands of years of struggle. It was Archimedes
who used the theorem when computing the circumference of the circle formula 2πr by
exhaustion with regular polygons from the inside and outside. Comparing the lengths of
12
OLIVER KNILL
the approximations essentially battled with that fundamental theorem of trigonometry. The
identity is therefore the epicenter of the development of trigonometry, differentiation and
integration.
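Numerically the limit is visible immediately, as sin(x)/x = 1 − x²/6 + · · · :

```python
import math

for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x)   # approaches 1 like 1 - x^2/6
```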
30. Logarithms
The natural logarithm is the inverse of the exponential function exp(x), so establishing a
group homomorphism from the additive group (R, +) to the multiplicative group (R⁺, ∗).
We have:

Theorem: log(uv) = log(u) + log(v) for all u, v > 0.
This follows from exp(x + y) = exp(x) exp(y) and log(exp(x)) = exp(log(x)) = x by plugging
in x = log(u), y = log(v). The logarithms were independently discovered by Jost Bürgi around
1600 and John Napier in 1614 [382]. The logarithm with base b > 0 is denoted by logb . It is the
inverse of x → bx = ex log(b) . The concept of logarithm has been extended in various ways: in any
group G, one can define the discrete logarithm logb (a) to base b as an integer k such that
b^k = a (if it exists). For complex numbers, one defines the complex logarithm log(z) as any solution w of
e^w = z. It is multi-valued, as log(|z|) + i arg(z) + 2πik all solve this with some integer k, where
arg(z) ∈ (−π, π). The identity log(uv) = log(u) + log(v) is now only true up to 2πik. Logarithms
can also be defined for matrices. Any matrix B solving exp(B) = A is called a logarithm of
A. For A close to the identity I, one can define log(A) = (A − I) − (A − I)²/2 + (A − I)³/3 − ...,
which is a Mercator series. For normal invertible matrices, one can define logarithms
using the functional calculus by diagonalization. On a Riemannian manifold M , one also
has an exponential map: it is a diffeomorphism from a small ball Br(0) in the tangent space
at x ∈ M to M. The map v → expx(v) is obtained by defining expx(0) = x and by taking for
v ≠ 0 a geodesic with initial direction v/|v| and running it for time |v|. The logarithm logx
is now defined on a geodesic ball of radius r and defines an element in the tangent space. In
the case of a Lie group M = G, where the points are matrices, each tangent space is its Lie
algebra.
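The scalar form of the Mercator series above, log(x) = (x−1) − (x−1)²/2 + (x−1)³/3 − ..., converges for |x − 1| < 1; a quick check against the built-in logarithm:

```python
import math

def mercator_log(x, terms=60):
    # Mercator series in u = x - 1, valid for |u| < 1.
    u = x - 1.0
    return sum((-1) ** (k + 1) * u ** k / k for k in range(1, terms + 1))

print(mercator_log(1.5), math.log(1.5))   # agree to machine precision
```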
The theorem is due to Hugo Hadwiger from 1937. The coefficients aj(G) of the polynomial
Vol(G + tB) = Σ_{j=0}^n aj t^j are a basis, where B is the unit ball B = {|x| ≤ 1}. See [239].
This is called the Lebesgue decomposition theorem. It uses the Radon-Nikodym the-
orem. The decomposition theorem implies the decomposition theorem of the spectrum of
a linear operator. See [371] (page 259). Lebesgue's theorem was published in 1904. A
generalization due to Johann Radon and Otto Nikodym was given in 1913.
The theorem is due to Hermann Minkowski in 1896. It led to a field called geometry of
numbers [77]. It has many applications in number theory and Diophantine analysis
[68, 204].
36. Fredholm
An integral kernel K(x, y) ∈ L²([a, b]²) defines an integral operator T f(x) =
∫_a^b K(x, y)f(y) dy with adjoint T∗f(x) = ∫_a^b K(y, x)f(y) dy. The L² assumption makes the
function K(x, y) what one calls a Hilbert-Schmidt kernel. Fredholm showed that the Fredholm
equation A∗f = (T∗ − λ)f = g has a solution f if and only if g is perpendicular to
the kernel of A = T − λ. This identity ker(A)^⊥ = im(A∗) is in finite dimensions part of the
fundamental theorem of linear algebra. The Fredholm alternative reformulates this in
a more catchy way as an alternative:

Theorem: Either T f = λf has a nonzero solution f, or the equation (T − λ)f = g has a unique solution f for every g.
In the second case, the solution depends continuously on g. The alternative can be put more
generally by stating that if A is a compact operator on a Hilbert space and λ is not an
eigenvalue of A, then the resolvent (A − λ)−1 is bounded. A bounded operator A on a Hilbert
space H is called compact if the image of the unit ball is relatively compact (has a compact
closure). The Fredholm alternative is part of Fredholm theory. It was developed by Ivar
Fredholm in 1903.
The Dirichlet prime number theorem was found in 1837. The Green-Tao theorem
was found in 2004 and appeared in 2008 [171]. It uses Szemerédi's theorem [154], which
shows that any set A of positive upper density lim supn→∞ |A ∩ {1 · · · n}|/n has arbitrarily long
arithmetic progressions. So, any subset A of the primes P for which the relative density
lim supn→∞ |A ∩ {1 · · · n}|/|P ∩ {1 · · · n}| is positive has arbitrarily long arithmetic progressions.
For non-linear sequences of numbers, the problems are wide open. The Landau problem of
the infinitude of primes of the form x² + 1 illustrates this. The Green-Tao theorem gives hope
to tackle the Erdös conjecture on arithmetic progressions, telling that a sequence {xn}
of integers satisfying Σ_n 1/xn = ∞ contains arbitrarily long arithmetic progressions.
defined as the nullity of the Hodge Laplacian dd∗ + d∗d restricted to k-forms Ω^k, where
dk : Ω^k → Ω^(k+1) is the exterior derivative.
These are the Morse inequalities due to Marston Morse from 1934. It implies in particular
the weak Morse inequalities bk ≤ ck . Modern proofs use Witten deformation [105] of the
exterior derivative d.
This formula of Alain Connes tells us that the spectral triple determines the geodesic distance
in (M, g) and so the metric g. It justifies looking at spectral triples as non-commutative
generalizations of Riemannian geometry. See [90].
42. Polytopes
A convex polytope P in dimension n is the convex hull of finitely many points in R^n. One
assumes all vertices to be extreme points, points which do not lie in an open line segment
of P. The boundary of P is formed by (n − 1)-dimensional boundary facets. The notion
of Platonic solid is recursive. A convex polytope is Platonic if all its facets are Platonic
(n − 1)-dimensional polytopes, as are its vertex figures. Let p = (p2, p3, p4, . . . ) encode the number
of Platonic solids, meaning that pd is the number of Platonic polytopes in dimension d.

Theorem: p = (∞, 5, 6, 3, 3, 3, . . . ).
In dimension 2, there are infinitely many. They are the regular polygons. The list of
Platonic solids, “octahedron”, “dodecahedron”, “icosahedron”, “tetrahedron” and “cube”,
has been known to the Greeks already. Ludwig Schläfli first classified the higher dimensional
case. There are six in dimension 4: they are the “5 cell”, the “8 cell” (tesseract), the “16
cell”, the “24 cell”, the “120 cell” and the “600 cell”. There are only three regular polytopes in
dimension 5 and higher, where only the analog of the tetrahedron, cube and octahedron exist.
For literature, see [176, 438, 342].
metric space is of second Baire category if the intersection of a countable set of open dense
sets is dense. The Baire category theorem tells:

Theorem: Every complete metric space is of second Baire category.
One calls the intersection A of a countable set of open dense sets A in X also a generic set or
residual set. The complement of a generic set is also called a meager set or negligible or
a set of first category. It is the union of countably many nowhere dense sets. Like measure
theory, Baire category theory allows for existence results. There can be surprises: a generic
continuous function is not differentiable for example. For descriptive set theory, see [237]. The
framework for classical descriptive set theory is often that of Polish spaces, which are separable
complete metric spaces. See [56].
The result is due to Joseph-Louis Lagrange. One can restate this as the fact that if f = 0
weakly, then f is actually zero. It implies that if ∫_a^b f(x)g′(x) dx = 0 for all g ∈ X, then f is
constant. This is nice, as f is not assumed to be differentiable. The result is used to prove that
extrema of a variational problem I(x) = ∫_a^b L(t, x, x′) dt are weak solutions of the Euler-Lagrange
equations Lx = d/dt Lx′. See [160, 308].
Already Fourier claimed this always to be true in his “Théorie Analytique de la Chaleur”. After
many fallacious proofs, Dirichlet gave the first proof of convergence [258]. The case is subtle
as there are continuous functions for which the convergence fails at some points. Lipót Fejér
was able to show that for a continuous function f, the coefficients fˆn nevertheless determine
the function using Cesàro convergence. See [236].
For φ(x) = exp(x) and a finite probability space Ω = {1, 2, . . . , n} with f(k) = xk = exp(yk)
and P[{k}] = 1/n, this gives the arithmetic mean-geometric mean inequality (x1 ·
x2 · · · xn)^(1/n) ≤ (x1 + x2 + · · · + xn)/n. The case φ(x) = e^x is useful in general as it leads to the
inequality e^E[f] ≤ E[e^f] if e^f ∈ L¹. For f ∈ L²(Ω, P) one gets (E[f])² ≤ E[f²], which reflects the
fact that E[f²] − (E[f])² = E[(f − E[f])²] = Var[f] ≥ 0, where Var[f] is the variance of f.
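A quick numeric check of the arithmetic mean-geometric mean consequence, for an arbitrary list of positive numbers:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
gm = math.prod(xs) ** (1.0 / len(xs))   # geometric mean = exp(E[log x])
am = sum(xs) / len(xs)                  # arithmetic mean = E[x]
print(gm, am)                           # Jensen with phi = exp gives gm <= am
```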
Theorem: A simple closed curve divides the plane into two regions.
The Jordan curve theorem is due to Camille Jordan. His proof [222] was objected to at first [241]
but rehabilitated in [179]. The theorem can be strengthened: a theorem of Schoenflies tells
that each of the two regions is homeomorphic to the disk {(x, y) ∈ R² | x² + y² < 1}. In the
smooth case, it is even possible to extend the map to a diffeomorphism in the plane. In higher
dimensions, one knows that an embedding of the (d − 1) dimensional sphere in a Rd divides
space into two regions. This is the Jordan-Brouwer separation theorem. It is no more true
in general that the two parts are homeomorphic to {x ∈ Rd | |x| < 1}: a counter example
is the Alexander horned sphere which is a topological 2-sphere but where the unbounded
19
FUNDAMENTAL THEOREMS
component is not simply connected and so not homeomorphic to the complement of a unit ball.
See [56].
Bézout’s theorem was stated in the “Principia” of Newton in 1687 but proven first in 1779 by Étienne Bézout. If the hypersurfaces are all irreducible and in “general position”, then there are exactly d solutions and each has multiplicity 1. This can be used also for affine surfaces. If y^2 − x^3 − 3x − 5 = 0 is an elliptic curve for example, then y^2 z − x^3 − 3xz^2 − 5z^3 = 0 is a projective hypersurface, its projective completion. Bézout’s theorem implies part of the fundamental theorem of algebra, as for n = 1, when we have only one homogeneous equation, we have d roots to a polynomial of degree d. The theorem implies for example that the intersection of two conic sections has in general 2 · 2 = 4 intersection points. The example x^2 − yz = 0, x^2 + z^2 − yz = 0 has only the solution x = z = 0, y = 1 but with multiplicity 2. As non-linear systems of equations appear frequently in computer algebra, this theorem gives a lower bound on the computational complexity for solving such problems.
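A small Python sketch (the pair of conics is a hypothetical choice for illustration): a circle and a parabola are both of degree 2, and eliminating y indeed leaves 2 · 2 = 4 intersection points over C:

```python
import cmath

# circle: x^2 + y^2 - 1 = 0, parabola: y - x^2 = 0 (both degree 2)
# eliminating y gives x^4 + x^2 - 1 = 0, a quadratic in u = x^2
disc = cmath.sqrt(1 + 4)
us = [(-1 + disc) / 2, (-1 - disc) / 2]
xs = [s * cmath.sqrt(u) for u in us for s in (1, -1)]
points = [(x, x * x) for x in xs]
# each of the 4 points solves both original equations
residuals = [abs(x ** 2 + y ** 2 - 1) + abs(y - x ** 2) for (x, y) in points]
```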
every x, there exists a unique y = x−1 such that x ∗ y = y ∗ x = 1. The order n of the group
is the number of elements in the group. An element x ∈ G generates a subgroup formed by
1, x, x2 = x ∗ x, . . . . This is the cyclic subgroup C(x) generated by x. Lagrange’s theorem
tells
Theorem: |C(x)| is a factor of |G|
The origins of group theory go back to Joseph Louis Lagrange, Paulo Ruffini and Évariste
Galois. The concept of abstract group appeared first in the work of Arthur Cayley. Given a
subgroup H of G, the left cosets of H are the equivalence classes of the equivalence relation x ∼ y if there exists z ∈ H with x = z ∗ y. The equivalence classes G/H partition G. The number [G : H] of elements in G/H is called the index of H in G. It follows that |G| = |H|[G : H] and more generally that if K is a subgroup of H and H is a subgroup of G, then [G : K] = [G : H][H : K]. A subgroup N of G is called a normal subgroup, written N ◁ G, if for all a ∈ N and all x in G the element x ∗ a ∗ x^{−1} is in N. This can be rewritten as N ∗ x = x ∗ N. If N is a normal subgroup, then G/N is again a group, the quotient group. For example, if f : G → G′ is a group homomorphism, then the kernel of f is a normal subgroup and |G| = |ker(f)||im(f)| because of the first group isomorphism theorem.
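Lagrange's theorem can be observed directly in a small group; the following Python sketch lists the orders |C(x)| of the cyclic subgroups in the symmetric group S_3, all of which divide |G| = 6:

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations given as tuples: (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def cyclic_order(p, e):
    # |C(p)|: smallest k >= 1 with p^k equal to the identity e
    k, q = 1, p
    while q != e:
        q = compose(p, q)
        k += 1
    return k

G = list(permutations(range(3)))   # S_3 as tuples, |G| = 6
e = (0, 1, 2)
orders = [cyclic_order(p, e) for p in G]
```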
52. Primes
A prime is an integer larger than 1 which is only divisible by 1 or itself. The Wilson theorem allows one to define a prime as a number n for which (n − 1)! + 1 is divisible by n. Euclid already knew that there are infinitely many primes (if there were finitely many p_1, . . . , p_n, the new number p_1 p_2 · · · p_n + 1 would have a prime factor different from the given set). It also follows from the divergence of the harmonic series ζ(1) = ∑_{n=1}^∞ 1/n = 1 + 1/2 + 1/3 + · · · and the Euler golden key or Euler product ζ(s) = ∑_{n=1}^∞ 1/n^s = ∏_{p prime} (1 − 1/p^s)^{−1} for the Riemann zeta function ζ(s) that there are infinitely many primes, as otherwise the product to the right would be finite.
Let π(x) be the prime-counting function which gives the number of primes smaller or equal to x. Given two functions f(x), g(x) from the integers to the integers, we say f ∼ g if lim_{x→∞} f(x)/g(x) = 1. The prime number theorem tells

Theorem: π(x) ∼ x/ log(x)
The result was investigated experimentally first by Anton Ferkel and Jurij Vega; Adrien-Marie Legendre first conjectured a law of this form in 1797. Carl Friedrich Gauss wrote in 1849 that he had experimented independently around 1792 with such a law. The theorem was proven in 1896 by Jacques Hadamard and Charles de la Vallée Poussin. Proofs without complex analysis were put forward by Atle Selberg and Paul Erdös in 1949. The prime number theorem also assures that there are infinitely many primes, but it makes the statement quantitative in that it gives an idea how fast the number of primes grows asymptotically. Under the assumption of the Riemann hypothesis, Lowell Schoenfeld proved |π(x) − li(x)| < √x log(x)/(8π), where li(x) = ∫_0^x dt/ log(t) is the logarithmic integral.
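A short Python sketch (the sieve bound is an arbitrary choice) illustrates the asymptotic law: the ratio π(x)/(x/log x) slowly approaches 1:

```python
import math

def prime_pi(x):
    """Count the primes <= x with a sieve of Eratosthenes."""
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

x = 100000
ratio = prime_pi(x) / (x / math.log(x))   # about 1.10 at x = 10^5
```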
which is compact by the Tychonov theorem. The translation maps Ti (x)n = xn+ei are homeo-
morphisms of Ω called shifts. A closed T invariant subset X ⊂ Ω defines a subshift (X, T ). An
automorphism T of Ω which commutes with the translations Ti is called a cellular automaton,
abbreviated CA. An example of a cellular automaton is a map T x_n = φ(x_{n+u_1}, . . . , x_{n+u_k}), where U = {u_1, . . . , u_k} ⊂ Z^d is a fixed finite set. It is called a local automaton because it is defined by a finite rule so that the status of the cell n at the next step depends only on the status of the “neighboring cells” {n + u | u ∈ U}. The following result is the Curtis-Hedlund-Lyndon theorem:
Theorem: Every cellular automaton is a local automaton.
Cellular automata were introduced by John von Neumann and mathematically in 1969 by Hedlund [193]. The result appears there. Hedlund saw cellular automata also as maps on subshifts. One can so look at cellular automata on subclasses of subshifts. For example, one can restrict the cellular automata map T to almost periodic configurations, which are subsets X of Ω on which (X, T_1, . . . , T_j) has only invariant measures µ for which the Koopman operators U_i f = f(T_i) on L^2(X, µ) have pure point spectrum. A particularly well studied case is d = 1 and A = {0, 1} with U = {−1, 0, 1}, where the automaton is called an elementary cellular automaton. The Wolfram numbering labels the 2^8 = 256 possible elementary automata with a number between 0 and 255. The game of life of Conway is a case for d = 2 with neighborhood U = {−1, 0, 1} × {−1, 0, 1}. For literature on cellular automata see [432] or as part of complex systems [433] or evolutionary dynamics [318]. For topological dynamics, see [109].
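As a concrete local automaton, here is a minimal Python sketch of an elementary cellular automaton (Wolfram rule 30, on a small cyclic configuration chosen for illustration):

```python
RULE = 30  # the Wolfram number encodes the local rule on 3-cell neighborhoods

def local_rule(left, center, right):
    # bit number (4*left + 2*center + right) of RULE
    return (RULE >> (4 * left + 2 * center + right)) & 1

def step(cells):
    # the induced cellular automaton map (periodic boundary conditions)
    n = len(cells)
    return [local_rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell
row1 = step(row)              # [0, 0, 1, 1, 1, 0, 0]
```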
For example, if E is the topos of sets, then the slice category is the category of pointed
sets: the objects are then sets together with a function selecting a point as a “base point”.
A morphism f : A → B defines a functor E/B → E/A which preserves exponentials and the
subobject classifier Ω. Topos theory was motivated by geometry (Grothendieck), physics
(Lawvere), topology (Tierney) and algebra (Kan). It can be seen as a generalization and even a replacement of set theory: Lawvere’s elementary theory of the category of sets ETCS is seen as a part of ZFC which is less likely to be inconsistent [276]. For a short introduction, see [214]; for textbooks, [295, 73]; for the history of topos theory in particular, see [294].
55. Transcendentals
A root of an equation f(x) = 0 with an integer polynomial f(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_0 with n ≥ 0 and a_j ∈ Z is called an algebraic number. The set A of algebraic numbers is a subfield of the field R of real numbers. The field A is the algebraic closure of the rational numbers Q. It is of number theoretic interest as it contains all algebraic number fields, the finite degree field extensions of Q. The complement R \ A is the set of transcendental numbers. Transcendental numbers are necessarily irrational because every rational number x = p/q is algebraic, solving qx − p = 0. Because the set of algebraic numbers is countable and the real numbers are not, most numbers are transcendental. The group of all automorphisms of A which fix Q is called the absolute Galois group of Q.
Theorem: π and e are transcendental
This result is due to Ferdinand von Lindemann. He proved that e^x is transcendental for every non-zero algebraic number x. This immediately implies that e is transcendental. Now, if π were algebraic, then πi would be algebraic and e^{iπ} = −1 would be transcendental. But −1 is rational. Lindemann’s result was extended in 1885 by Karl Weierstrass to the statement that if x_1, . . . , x_n are linearly independent algebraic numbers, then e^{x_1}, . . . , e^{x_n} are algebraically independent. The transcendence of π also proves that π is irrational; this is easier to prove directly. See [204].
56. Recurrence
A homeomorphism T : X → X of a compact topological space X defines a topological
dynamical system (X, T ). We write T j (x) = T (T (. . . T (x))) to indicate that the map T is
applied j times. For any d > 0, we get from this a set (T_1, T_2, . . . , T_d) of commuting homeomorphisms on X, where T_j(x) = T^j x. A point x ∈ X is called multiple recurrent for T if for every d > 0, there exists a sequence n_1 < n_2 < n_3 < · · · of integers n_k ∈ N for which T_j^{n_k} x → x for k → ∞ and all j = 1, . . . , d. Fürstenberg’s multiple recurrence theorem states:
Theorem: Every topological dynamical system is multiple recurrent.
It is known even that the set of multiple recurrent points is Baire generic. Hillel Fürstenberg proved this result in 1975. There is a parallel theorem for measure preserving systems: an automorphism T of a probability space (Ω, A, P) is called multiple recurrent if there exists A ∈ A and an integer n such that P[A ∩ T_1^n(A) ∩ · · · ∩ T_d^n(A)] > 0. This generalizes the Poincaré recurrence theorem, which is the case d = 1. Recurrence theorems are related to the Szemerédi theorem telling that a subset A of N of positive upper density contains arithmetic progressions of arbitrary finite length. See [154].
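The simplest case, Poincaré-type recurrence for an irrational circle rotation, can be watched numerically; a sketch (the rotation number and tolerance are illustrative choices):

```python
import math

def first_return_time(alpha, eps):
    """Smallest n >= 1 for which n*alpha mod 1 is eps-close to 0."""
    x, n = 0.0, 0
    while True:
        x = (x + alpha) % 1.0
        n += 1
        if min(x, 1.0 - x) < eps:
            return n

# the golden-ratio-like rotation sqrt(2) - 1 first returns within 0.01 at n = 70
n = first_return_time(math.sqrt(2) - 1, 0.01)
```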
57. Solvability
A basic task in mathematics is to solve polynomial equations p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 = 0 with complex coefficients a_k using explicit formulas involving roots. One calls this an explicit algebraic solution. The linear case ax + b = 0 with x = −b/a and the quadratic case ax^2 + bx + c = 0 with x = (−b ± √(b^2 − 4ac))/(2a) were known since antiquity. The cubic x^3 + ax^2 + bx + c = 0 was solved by Niccolò Tartaglia and Gerolamo Cardano: a first substitution x = X − a/3 produces the depressed cubic X^3 + pX + q (first solved by Scipione dal Ferro). The substitution X = u − p/(3u) then produces a quadratic equation for
u^3. Lodovico Ferrari finally solved the quartic by reducing it to the cubic. It was Paolo Ruffini, Niels Abel and Évariste Galois who realized that there are no algebraic solution formulas any more for polynomials of degree n ≥ 5.
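The reduction chain above can be sketched in a few lines of Python (restricted, for simplicity, to the case where Cardano's square root is real):

```python
import math

def cbrt(t):
    # real cube root
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def depressed_root(p, q):
    """One real root of X^3 + pX + q = 0 by Cardano's formula
    (assumes q^2/4 + p^3/27 >= 0)."""
    d = math.sqrt(q * q / 4.0 + p ** 3 / 27.0)
    return cbrt(-q / 2.0 + d) + cbrt(-q / 2.0 - d)

def cubic_root(a, b, c):
    """One real root of x^3 + a x^2 + b x + c = 0 via the shift x = X - a/3."""
    p = b - a * a / 3.0
    q = 2.0 * a ** 3 / 27.0 - a * b / 3.0 + c
    return depressed_root(p, q) - a / 3.0

r = cubic_root(-3.0, 6.0, -8.0)   # x^3 - 3x^2 + 6x - 8 has the root x = 2
```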
The quadratic case was settled over a longer period in independent developments in Babylonian, Egyptian, Chinese and Indian mathematics. The cubic and quartic discoveries were dramatic, culminating with Cardano’s book of 1545, marking the beginning of modern algebra. After centuries of failures at solving the quintic, Paolo Ruffini published the first proof in 1799, a proof which had a gap but which paved the way for Niels Henrik Abel and Évariste Galois. For further discoveries see [290, 280, 10].
The intermediate fields of E/F are so described by groups. It implies the Abel-Ruffini theorem about the non-solvability of the quintic by radicals. The fundamental theorem demonstrates that solvable extensions correspond to solvable groups. The symmetric groups of permutations of 5 or more elements are no longer solvable. See [389].
this is the Euler characteristic. Let ind_T(x) be the Brouwer degree of the map T induced on a small (d − 1)-sphere S around x. This is the trace of the linear map T_{d−1} induced from T on the cohomology group H^{d−1}(S), which is an integer. If T is differentiable and dT(x) is invertible, the Brouwer degree is ind_T(x) = sign(det(dT)). Let Fix_T(X) denote the set of fixed points of T. The Lefschetz-Hopf fixed point theorem is

Theorem: If Fix_T(X) is finite, then χ_T(X) = ∑_{x∈Fix_T(X)} ind_T(x).
A special case is the Brouwer fixed point theorem, where X is a compact convex subset of Euclidean space. In that case χ_T(X) = 1 and the theorem assures the existence of a fixed point. In particular, if T : D → D is a continuous map from the disc D = {x^2 + y^2 ≤ 1} to itself, then T has a fixed point. The Brouwer fixed point theorem was proved in 1910 by Jacques Hadamard and Luitzen Egbertus Jan Brouwer. The Schauder fixed point theorem from 1930 generalizes the result to convex compact subsets of Banach spaces. The Lefschetz-Hopf fixed point theorem was given in 1926. For literature, see [118, 48].
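A hedged numerical sketch: Brouwer's theorem only asserts existence, but when the self-map of the disc happens to be a contraction, the fixed point can even be found by iteration (Banach's fixed point theorem); the map below is an arbitrary illustrative choice:

```python
def T(v):
    # a continuous map of the closed unit disc into itself (a 1/2-contraction)
    x, y = v
    return (0.5 * y + 0.2, 0.5 * x + 0.1)

v = (0.9, -0.3)
for _ in range(60):
    v = T(v)
fx, fy = v   # converges to the fixed point (1/3, 4/15), inside the disc
```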
This means that (p|q) = −(q|p) if and only if both p and q have remainder 3 modulo 4. The odd primes of the form 4k + 3 are also prime in the Gaussian integers. To remember the law, one can think of them as “Fermions”, and quadratic reciprocity tells that these Fermions anti-commute. The odd primes of the form 4k + 1 factor by the two-square theorem
in the Gaussian plane as p = (a + ib)(a − ib); being products of two Gaussian primes, they are therefore Bosons. One can remember the rule because Bosons commute with other particles, so that if either p or q or both are “Bosonic”, then (p|q) = (q|p). The law of quadratic reciprocity was first conjectured by Euler and Legendre and published by Carl Friedrich Gauss in his Disquisitiones Arithmeticae of 1801. (Gauss found the first proof in 1796.) [187, 204].
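The “Fermion/Boson” rule is easy to test numerically; a sketch using Euler's criterion to compute Legendre symbols (the prime pairs are arbitrary samples):

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def reciprocity_sign(p, q):
    # quadratic reciprocity: (p|q)(q|p) = (-1)^(((p-1)/2)((q-1)/2))
    return (-1) ** (((p - 1) // 2) * ((q - 1) // 2))

pairs = [(3, 7), (3, 5), (5, 7), (7, 11), (11, 19), (13, 17)]
checks = [legendre(p, q) * legendre(q, p) == reciprocity_sign(p, q)
          for (p, q) in pairs]
```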
The Weierstrass theorem was proven in 1885 by Karl Weierstrass. A constructive proof suggested by Sergei Bernstein in 1912 uses the Bernstein polynomials f_n(x) = ∑_{k=0}^n f(k/n) B_{k,n}(x) with B_{k,n}(x) = B(n, k) x^k (1 − x)^{n−k}, where B(n, k) denote the binomial coefficients. The result has been generalized to compact Hausdorff spaces X and more general subalgebras of C(X).
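A small Python sketch of Bernstein's construction (the test function and grid are illustrative choices); the uniform error visibly decreases with n:

```python
import math

def bernstein(f, n, x):
    # f_n(x) = sum_k f(k/n) B(n,k) x^k (1-x)^(n-k)
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)          # continuous but not differentiable at 1/2
grid = [i / 100 for i in range(101)]

def sup_error(n):
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

e10, e80 = sup_error(10), sup_error(80)
```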
This looks a bit like the Poisson summation formula ∑_n f(n) = ∑_n fˆ(n), where fˆ is the Fourier transform of f. [The latter follows from ∑_n e^{2πinx} = ∑_n δ(x − n), where δ(x) is a Dirac delta function. The Poisson formula holds if f is uniformly continuous and if both f and fˆ satisfy the growth condition |f(x)| ≤ C/|1 + |x||^{1+ε}.] More generally, one can read off the Hausdorff dimension from decay rates of the Fourier coefficients. See [236, 387].
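A numerical sanity check of the Poisson summation formula for a Gaussian, which satisfies the growth condition (the width a = 2 is an arbitrary choice; the transform convention used is gˆ(ξ) = ∫ g(x)e^{−2πixξ} dx):

```python
import math

a = 2.0
f = lambda x: math.exp(-a * x * x)
# Fourier transform of this Gaussian in the convention above
fhat = lambda xi: math.sqrt(math.pi / a) * math.exp(-math.pi ** 2 * xi * xi / a)

lhs = sum(f(n) for n in range(-30, 31))
rhs = sum(fhat(n) for n in range(-30, 31))   # agrees to machine precision
```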
67. Shadowing
Let T be a diffeomorphism on a smooth Riemannian manifold M with geodesic metric d. A T-invariant set K is called hyperbolic if for each x ∈ K, the tangent space T_x M splits into a stable and unstable bundle E_x^+ ⊕ E_x^− such that for some 0 < λ < 1 and a constant C, one has dT E_x^± = E_{T x}^± and |dT^{±n} v| ≤ Cλ^n |v| for v ∈ E_x^± and n ≥ 0. An ε-orbit is a sequence
x_n of points in M such that x_{n+1} ∈ B_ε(T(x_n)), where B_ε is the geodesic ball of radius ε. Two sequences x_n, y_n ∈ M are called δ-close if d(y_n, x_n) ≤ δ for all n. We say that a set K has the shadowing property if there exists an open neighborhood U of K such that for all δ > 0 there exists ε > 0 such that every ε-pseudo orbit of T in U is δ-close to a true orbit of T.
Theorem: Every hyperbolic set has the shadowing property.
This is only interesting for infinite K: if K is a finite periodic hyperbolic orbit, then the shadowing orbit is the orbit itself. It is interesting however for a hyperbolic invariant set like a Smale horseshoe or in the Anosov case, when the entire manifold is hyperbolic. See [230].
The result was first proven by Frobenius in 1887. Burnside popularized it in 1897 [69].
Here, T_r(a) = {|x_1 − a_1| = r_1, . . . , |x_d − a_d| = r_d} is the boundary torus. For example, for f(x) = exp(x), where f^{(n)}(0) = 1, one has f(x) = ∑_{n=0}^∞ x^n/n!. Using the differential operator Df(x) = f′(x), one can see f(x + t) = ∑_{n=0}^∞ f^{(n)}(x)/n! t^n = e^{Dt} f(x) as a solution of
the transport equation f_t = Df. One can also represent f as a Cauchy formula for polydiscs 1/(2πi)^d ∫_{T_r(a)} f(z)/(z − a)^d dz, integrating along the boundary torus. Finite Taylor series hold in the case if f is m + 1 times differentiable. In that case one has a finite series S(x) = ∑_{n=0}^m f^{(n)}(a)/n! (x − a)^n such that the Lagrange remainder term is f(x) − S(x) = R(x) = f^{(m+1)}(ξ)(x − a)^{m+1}/((m + 1)!), where ξ is between x and a. This generalizes the mean value theorem in the case m = 0, where f is only differentiable. The remainder term can also be written as ∫_a^x f^{(m+1)}(s)(x − s)^m/m! ds. Taylor stated the formula in 1715 but did not justify it; it was actually a difference formula. In 1742 Maclaurin used the modern form. [262].
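The Lagrange remainder gives a computable a priori error bound; a quick Python sketch for f = sin at a = 0 (the point x and order m are arbitrary choices):

```python
import math

def taylor_sin(x, m):
    # Taylor polynomial of sin at a = 0 up to order m; sin^(n)(0) cycles 0,1,0,-1
    return sum([0.0, 1.0, 0.0, -1.0][n % 4] * x ** n / math.factorial(n)
               for n in range(m + 1))

x, m = 0.8, 7
err = abs(math.sin(x) - taylor_sin(x, m))
# |sin^(m+1)| <= 1, so the Lagrange bound is |x|^(m+1)/(m+1)!
bound = abs(x) ** (m + 1) / math.factorial(m + 1)
```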
If B = B_1, this gives n|B| ≤ |S|, which is an equality, as then the volume of the ball is |B| = π^{n/2}/Γ(n/2 + 1) and the surface area of the sphere is |S| = nπ^{n/2}/Γ(n/2 + 1), which Archimedes first got in the case n = 3, where |S| = 4π and |B| = 4π/3. The classical isoperimetric problem is n = 2, where we are in the plane R^2. The inequality tells then 4|B| ≤ |S|^2/π, which means 4π Area ≤ Length^2. The ball B_1 with area 1 maximizes the functional. For n = 3, with the usual Euclidean space R^3, the inequality tells |B|^2 ≤ (4π)^3/(27 · 4π/3), which is |B| ≤ 4π/3. The first proof in the case n = 2 was attempted by Jakob Steiner in 1838 using the Steiner symmetrization process, which is a refinement of the Archimedes-Cavalieri principle. In 1902 a proof by Hurwitz was given using Fourier series. The result has been extended to geometric measure theory [144]. One can also look at the discrete problem to maximize the area defined by a polygon: if {(x_i, y_i), i = 0, . . . , n − 1} are the points of the polygon, then the area is given by Green’s formula as A = (1/2) ∑_{i=0}^{n−1} (x_i y_{i+1} − x_{i+1} y_i) and the length is L = ∑_{i=0}^{n−1} √((x_i − x_{i+1})^2 + (y_i − y_{i+1})^2) with (x_n, y_n) identified with (x_0, y_0). The Lagrange equations for A under the constraint L = 1, together with a fix of (x_0, y_0) and (x_1 = 1/n, 0), produce two maxima which are both regular polygons. A generalization to n-dimensional Riemannian manifolds is given by the Lévy-Gromov isoperimetric inequality.
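The discrete quantities are easy to compute; a Python sketch of Green's area formula and the discrete isoperimetric inequality 4π·A ≤ L² for a sample polygon:

```python
import math

def polygon_area(pts):
    # Green's formula: A = (1/2) |sum_i (x_i y_{i+1} - x_{i+1} y_i)|
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def polygon_length(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
A, L = polygon_area(square), polygon_length(square)   # A = 1, L = 4
```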
1-form dz which is called the canonical divisor K. Let l(D) be the dimension of the linear
space of meromorphic functions f on G for which (f ) + D ≥ 0. (The notation ≥ 0 means that
all coefficients are non-negative. One calls such a divisor effective). The Riemann-Roch
theorem is
Theorem: l(D) − l(K − D) = χ(D)
The notion of a Riemann surface was introduced by Bernhard Riemann. Riemann-Roch was proven for Riemann surfaces by Bernhard Riemann in 1857 and Gustav Roch in 1865. It is possible to see this as an Euler-Poincaré type relation by identifying the left hand side as a signed cohomological Euler characteristic and the right hand side as a combinatorial Euler characteristic. There are various generalizations, to arithmetic geometry or to higher dimensions. See [172, 358].
73. Optimal transport
Given two probability spaces (X, P), (Y, Q) and a continuous cost function c : X × Y → [0, ∞], the optimal transport problem or Monge-Kantorovich minimization problem is to find the minimum of ∫_X c(x, T(x)) dP(x) among all coupling transformations T : X → Y which have the property that they transport the measure P to the measure Q. More generally, one looks at a measure π on X × Y such that the projection of π onto X is P and the projection of π onto Y is Q. The function to optimize is then I(π) = ∫_{X×Y} c(x, y) dπ(x, y). One of the fundamental results is that an optimal transport exists. The technical assumptions are that the two probability spaces X, Y are Polish (= separable complete metric spaces) and that the cost function c is continuous.
Theorem: For continuous cost functions c, there exists a minimum of I.
In the simple set-up of probability spaces, this just follows from the compactness (Alaoglu theorem for balls in the weak-star topology of a Banach space) of the set of probability measures: any sequence π_n of probability measures on X × Y has a convergent subsequence. Since I is continuous, picking a sequence π_n with I(π_n) decreasing leads to a minimum. The problem was formalized in 1781 by Gaspard Monge and worked on by Leonid Kantorovich. Hiroshi Tanaka in the 1970s produced connections with partial differential equations like the Boltzmann equation. There are also connections to weak KAM theory in the form of Aubry-Mather theory. The above existence result is true under substantially less regularity. The question of uniqueness or the existence of a Monge coupling given in the form of a transformation T is subtle [412].
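In the simplest discrete situation (uniform measures on n points each, quadratic cost; the sample points below are arbitrary), the Monge problem can be solved by brute force over all bijections:

```python
from itertools import permutations

def optimal_coupling(xs, ys, c):
    """Minimize sum_i c(x_i, y_sigma(i)) over all bijections sigma."""
    best_cost, best_sigma = None, None
    for sigma in permutations(range(len(ys))):
        cost = sum(c(xs[i], ys[j]) for i, j in enumerate(sigma))
        if best_cost is None or cost < best_cost:
            best_cost, best_sigma = cost, sigma
    return best_cost, best_sigma

xs = [0.0, 1.0, 2.0]
ys = [2.1, 0.2, 1.1]
cost, sigma = optimal_coupling(xs, ys, lambda x, y: (x - y) ** 2)
# quadratic cost is minimized by the monotone matching 0->0.2, 1->1.1, 2->2.1
```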
74. Structure from motion
Given m hyperplanes in R^d serving as retinas or photographic plates for affine cameras and n points in R^d, the affine structure from motion problem is to understand under which conditions it is possible to recover both the points and planes when knowing the orthogonal projections onto the planes. It is a model problem for the task to reconstruct both the scene as well as the camera positions if the scene has n points and m camera pictures were taken. Ullman’s theorem is a prototype result with m = 3 different cameras and n = 3 points which are not collinear. Other setups are perspective cameras or omni-directional cameras. The Ullman map F is a nonlinear map from R^{d·2} × SO_d^2 to (R^{3d−3})^2, which is a map between equal dimensional spaces if d = 2 and d = 3. The group SO_d is the rotation group in R^d describing the
possible ways in which the affine camera can be positioned. Affine cameras capture the same
picture when translated so that the planes can all go through the origin. In the case d = 2, we
get a map from R4 × SO22 to R6 and in the case d = 3, F maps R6 × SO32 into R12 .
The result is much more general and can be extended. If f is in C k and has compact support
for example, then Kf is in C k+1 . An example of the more general set up is the Schrödinger
operator L = −∆ + V (x) − E. The solution to Lu = 0, solves then an eigenvalue problem.
As one looks for solutions in L2 , the solution only exists if E is an eigenvalue of L. The
Euclidean space R^n can be replaced by a bounded domain Ω of R^n, where one can look at boundary conditions of Dirichlet or Neumann type. Or one can look at the situation on a general Riemannian manifold M with or without boundary. On a Hilbert space, one has then Fredholm theory. The equation u(x) = ∫ G(x, y)f(y) dy is called a Fredholm integral equation and det(1 − sG) = exp(−∑_n s^n tr(G^n)/n) the Fredholm determinant, leading to the zeta function 1/det(1 − sG). See [340, 279].
The result needs only to be verified for prime numbers, as N(a, b, c, d) = a^2 + b^2 + c^2 + d^2 is a norm for quaternions q = (a, b, c, d) which has the property N(pq) = N(p)N(q). This property can be seen also as a Cauchy-Binet formula, when writing quaternions as complex 2 × 2 matrices. The four-square theorem had been conjectured already by Diophantus, but was proven first by Lagrange in 1770. The case g(3) = 9 was done by Wieferich in 1912. It is conjectured that g(k) = 2^k + [(3/2)^k] − 2, where [x] is the integral part of a real number. See [111, 112, 204].
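A brute-force Python sketch finds four-square representations quickly for small n (the test values are arbitrary):

```python
def four_squares(n):
    """Some (a, b, c, d) with a <= b <= c <= d and a^2+b^2+c^2+d^2 = n."""
    r = int(n ** 0.5)
    for a in range(r + 1):
        for b in range(a, r + 1):
            for c in range(b, r + 1):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    break
                d = int(round(d2 ** 0.5))
                if d * d == d2 and d >= c:
                    return (a, b, c, d)
    return None   # never reached, by Lagrange's theorem

reps = {n: four_squares(n) for n in (7, 310, 2023)}
```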
77. Knots
A knot is a closed curve in R3 , an embedding of the circle in three dimensional Euclidean
space. One also draws knots in the 3-sphere S 3 . As the knot complement S 3 − K of a knot
K characterizes the knot up to mirror reflection, the theory of knots is part of 3-manifold
theory. The HOMFLYPT polynomial P of a knot or link K is defined recursively using
skein relations lP (L+ ) + l−1 P (L− ) + mP (L0 ) = 0. Let K#L denote the knot sum which
is a connected sum. Oriented knots form with this operation a commutative monoid with the unknot as unit. It features a unique prime factorization. The unknot has P(K) = 1, the unlink has P(K) = 0. The trefoil knot has P(K) = 2l^2 − l^4 + l^2 m^2.
The Alexander polynomial was discovered in 1928 and initiated classical knot theory. John Conway showed in the 1960s how to compute the Alexander polynomial using recursive skein relations (skein comes from the French escaigne = hank of yarn). The Alexander polynomial allows one to compute an invariant for knots by looking at the projection. The Jones polynomial found by Vaughan Jones came in 1984. This is generalized by the HOMFLYPT polynomial, named after Jim Hoste, Adrian Ocneanu, Kenneth Millett, Peter J. Freyd and W.B.R. Lickorish from 1985 and J. Przytycki and P. Traczyk from 1987. See [5]. Further invariants are the Vassiliev invariants of 1990 and the Kontsevich invariants of 1993.
This is a result which goes back to James Clerk Maxwell. Vlasov dynamics was introduced in 1938 by Anatoly Vlasov. An existence result was proven by Walter Braun and Klaus Hepp in 1977. The maps X_t will stay perfectly smooth if smooth initially. However, even if P^0 is smooth, the measure P^t in general rather quickly develops singularities, so that the partial differential equation has only weak solutions. The analysis of P directly would involve complicated
function spaces. The fundamental theorem of Vlasov dynamics therefore plays the role of the method of characteristics in this field. If M is a finite probability space, then the Vlasov Hamiltonian system is the Hamiltonian n-body problem on N. Another example is M = T*N, where m is an initial phase space measure. Now X_t is a one parameter family of diffeomorphisms X_t : M → T*N pushing forward m to a measure P^t on the cotangent bundle. If M is a circle, then X^0 defines a closed curve on T*N. In particular, if γ(t) is a curve in N and X^0(t) = (γ(t), 0), we have a continuum of particles initially at rest which evolve by interacting with a force ∇V. About interacting particle dynamics, see [380].
79. Hypercomplexity
A hypercomplex algebra is a finite dimensional algebra over R which is unital and distributive. Up to isomorphism, the two-dimensional hypercomplex algebras over the reals are the complex numbers x + iy with i^2 = −1, the split complex numbers x + jy with j^2 = 1 and the dual numbers (the exterior algebra) x + εy with ε^2 = 0. A division algebra over a field F is an algebra over F in which division is possible. Wedderburn’s little theorem tells that a finite division algebra must be a finite field. C is the only two dimensional division algebra over R. The following theorem of Frobenius classifies the class X of finite dimensional associative division algebras over R:

Theorem: X consists of the algebras R, C and the quaternions H.
80. Approximation
The Kolmogorov-Arnold superposition theorem shows that continuous functions C(R^n) of several variables can be written as a composition of continuous functions of two variables: more precisely, it is known since 1962 that there exist functions f_{k,l} and a function g in C(R) such that f(x_1, . . . , x_n) = ∑_{k=0}^{2n} g(f_{k,1}(x_1) + · · · + f_{k,n}(x_n)). As one can write finite sums using functions of two variables like h(x, y) = x + y or h(x + y, z) = x + y + z, two variables suffice. The above form was given by George Lorentz in 1962. Andrei Kolmogorov reduced the problem in 1956 to functions of three variables. Vladimir Arnold showed then (as a student at Moscow State University) in 1957 that one can do with two variables. The problem
came from a more specific problem in algebra, the problem of finding roots of a polynomial p(x) = x^n + a_1 x^{n−1} + · · · + a_n using radicals and arithmetic operations in the coefficients, which is not possible in general for n ≥ 5. Erland Samuel Bring showed in 1786 that a quintic can be reduced to x^5 + ax + 1. In 1836 William Rowan Hamilton showed that the sextic can be reduced to x^6 + ax^2 + bx + 1, the septic to x^7 + ax^3 + bx^2 + cx + 1 and the degree 8 case to a 4 parameter problem x^8 + ax^4 + bx^3 + cx^2 + dx + 1. Hilbert conjectured that one can not do better. These are Hilbert’s 13th problem, the sextic conjecture and the octic conjecture. In 1957, Arnold and Kolmogorov showed that no topological obstructions exist to reduce the number of variables. Important progress was made in 1975 by Richard Brauer. Some history is given in [142].
81. Determinants
The determinant of a n × n matrix A is defined as the sum ∑_π sign(π) A_{1π(1)} · · · A_{nπ(n)}, where the sum is over all n! permutations π of {1, . . . , n} and sign(π) is the signature of the permutation π. The determinant functional satisfies the product formula det(AB) = det(A)det(B). As the determinant is the constant coefficient of the characteristic polynomial p_A(x) = det(A − x1) = p_0(−x)^n + p_1(−x)^{n−1} + · · · + p_k(−x)^{n−k} + · · · + p_n of A, one can get the coefficients of the characteristic polynomial of the product F^T G of two n × m matrices F, G as follows:

Theorem: p_k = ∑_{|P|=k} det(F_P) det(G_P).
The right hand side is a sum over all minors of length k, including the empty one |P| = 0, where det(F_P) det(G_P) = 1. This implies det(1 + F^T G) = ∑_P det(F_P) det(G_P) and so det(1 + F^T F) = ∑_P det^2(F_P). The classical Cauchy-Binet theorem is the special case k = m, where det(F^T G) = ∑_P det(F_P) det(G_P) is a sum over all m × m patterns if n ≥ m. It has as an even more special case the Pythagorean consequence det(A^T A) = ∑_P det(A_P)^2. The determinant product formula is the even more special case when n = m. [215, 251, 201].
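A sketch verifying the classical Cauchy-Binet identity det(F^T G) = ∑_P det(F_P) det(G_P) for a pair of arbitrary integer 3 × 2 matrices:

```python
from itertools import combinations

def det(M):
    # determinant by Laplace expansion along the first row (small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def gram(F, G):
    # (F^T G)_{kl} = sum_i F_{ik} G_{il}
    m = len(F[0])
    return [[sum(F[i][k] * G[i][l] for i in range(len(F))) for l in range(m)]
            for k in range(m)]

F = [[1, 2], [0, 1], [3, -1]]
G = [[2, 0], [1, 1], [-1, 4]]
lhs = det(gram(F, G))
# sum over all 2x2 row patterns P of {0, 1, 2}
rhs = sum(det([F[i] for i in P]) * det([G[i] for i in P])
          for P in combinations(range(3), 2))
```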
82. Triangles
A triangle T on a two-dimensional surface S is defined by three points A, B, C joined by three geodesic paths. (It is assumed that the three geodesic paths have no self-intersections nor other intersections besides A, B, C, so that T is a topological disk with a piecewise geodesic boundary.) If α, β, γ are the inner angles of a triangle T located on a surface with curvature K, there is the Gauss-Bonnet formula ∫_S K(x) dA(x) = 2πχ(S), where dA denotes the area element on
the surface. This implies a relation between the integral of the curvature over the triangle and the angles:

Theorem: α + β + γ = ∫_T K dA + π
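A quick numerical check on the unit sphere (K = 1, so the integral of K over the triangle is just its area): the octant triangle has three right angles and area 4π/8, matching the formula:

```python
import math

alpha = beta = gamma = math.pi / 2   # octant triangle: three right angles
area = 4 * math.pi / 8               # one eighth of the unit sphere
excess = alpha + beta + gamma - math.pi   # equals the integral of K = 1 over T
```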
83. KAM
An area preserving map T(x, y) = (2x − y + cf(x), x) has an orbit (x_{n+1}, x_n) on T^2 = (R/Z)^2 which satisfies the recursion x_{n+1} − 2x_n + x_{n−1} = cf(x_n). The 1-periodic function f is assumed to be real-analytic, non-constant and satisfying ∫_0^1 f(x) dx = 0. In the case f(x) = sin(2πx), one has the Standard map. When looking for invariant curves (q(t + α), q(t)) with smooth q, we seek a solution of the nonlinear equation F(q) = q(t + α) − 2q(t) + q(t − α) − cf(q(t)) = 0. For c = 0, there is the solution q(t) = t. The linearization dF(q)(u) = Lu = u(t + α) − 2u(t) +
u(t − α) − cf′(q(t))u(t) is a bounded linear operator on L^2(T) but not invertible for c = 0, so that the implicit function theorem does not apply. The map Lu = u(t + α) − 2u(t) + u(t − α) becomes after a Fourier transform the diagonal matrix L̂û_n = [2 cos(nα) − 2]û_n, which has the inverse diagonal entries [2 cos(nα) − 2]^{−1}, leading to small divisors. A real number α is called Diophantine if there exists a constant C such that for all integers p, q with q ≠ 0, we have |α − p/q| ≥ C/q^2. KAM theory assures that the solution q(t) = t persists and remains smooth if c is small. With solution, the theorem means a smooth solution. For real analytic F, it can be real analytic. The following result is a special case of the twist map theorem.
The KAM theorem was predated by the Poincaré-Siegel theorem in complex dynamics which assured that if f is analytic near z = 0 and f′(0) = λ = exp(2πiα) with Diophantine α, then there exists u(z) = z + q(z) such that f(u(z)) = u(λz) holds in a small disk around 0: there is an analytic solution q to the Schröder equation λq(z) + g(z + q(z)) = q(λz). The question about the existence of invariant curves is important as it determines the stability. The twist map theorem result follows also from a strong implicit function theorem initiated by John Nash and Jürgen Moser. For larger c, or non-Diophantine α, the solution q still exists but is no longer continuous. This is Aubry-Mather theory. For c ≠ 0, the operator L̂ is an almost periodic Toeplitz matrix on l^2(Z), which is a special kind of discrete Schrödinger operator. The decay rate of the off-diagonals depends on the smoothness of f. Getting control of the inverse can be technical [51]. Even in the Standard map case f(x) = sin(2πx), the composition f(q(t)) is no longer a trigonometric polynomial, so that L̂ appearing here is not a Jacobi matrix
in a strip. The first breakthrough of the theorem in a framework of Hamiltonian differential
equations was done in 1954 by Andrey Kolmogorov. Jürgen Moser proved the discrete twist
map version and Vladimir Arnold in 1963 proved the theorem for Hamiltonian systems. The
above stated result generalizes to higher dimensions where one looks for invariant tori called
KAM tori; one needs some non-degeneracy conditions. See [74, 307, 308].
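The small divisor phenomenon is easy to observe numerically. The following sketch is an illustration added here (not part of the text): with α equal to 2π times the golden mean, a Diophantine rotation number, the diagonal entries 2 cos(nα) − 2 of L̂ become arbitrarily small, but scaled by n² they stay bounded away from 0.

```python
import math

def divisor(alpha, n):
    # diagonal entry of the Fourier-transformed operator L-hat at c = 0
    return 2 * math.cos(n * alpha) - 2

# alpha = 2*pi*g with g the golden mean, a Diophantine rotation number
g = (math.sqrt(5) - 1) / 2
alpha = 2 * math.pi * g

smallest = min(abs(divisor(alpha, n)) for n in range(1, 100000))
scaled = min(abs(divisor(alpha, n)) * n * n for n in range(1, 100000))
print(smallest)  # arbitrarily small divisors occur ...
print(scaled)    # ... but |2cos(n*alpha) - 2| * n^2 stays bounded below
```

For rational multiples of 2π the divisors vanish exactly along a subsequence, which is why the Diophantine condition enters KAM theory.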
85. Gauss-Bonnet-Chern
Let (M, g) be a Riemannian manifold of dimension d with volume element dµ. If
R^{ij}_{kl} is the Riemann curvature tensor with respect to the metric g, define the constant C = ((4π)^{d/2} (−2)^{d/2} (d/2)!)^{−1} and the curvature K(x) = C Σ_{σ,π} sign(σ)sign(π) R^{σ(1)σ(2)}_{π(1)π(2)} · · · R^{σ(d−1)σ(d)}_{π(d−1)π(d)}, where the sum is over all permutations π, σ of {1, . . . , d}. It can be interpreted as a Pfaffian.
In odd dimensions, the curvature is zero. Denote by χ(M ) the Euler characteristic of M .
Theorem: ∫_M K(x) dµ(x) = 2πχ(M ).
The case d = 2 was solved by Carl Friedrich Gauss and by Pierre Ossian Bonnet in 1848. Gauss
knew the theorem but never published it. In the case d = 2, the curvature K is the Gaussian
curvature which is the product of the principal curvatures κ1 , κ2 at a point. For a sphere
of radius R for example, the Gauss curvature is 1/R^2 and χ(M ) = 2. The volume form is then the usual area element, and ∫_M K dµ(x) = (1/R^2) 4πR^2 = 4π = 2πχ(M ). Allendoerfer-Weil in 1943
gave the first proof, based on previous work of Allendoerfer, Fenchel and Weil. Chern finally,
in 1944 proved the theorem independent of an embedding. See [105], which features a proof of
Vijay Kumar Patodi.
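As a quick numerical sanity check of the case d = 2 (a sketch added here, not from the text): for the round sphere of radius R, integrating K = 1/R² against the area element R² sin θ dθ dφ gives 4π = 2πχ, independently of R.

```python
import math

def total_curvature_sphere(R, n=2000):
    # integrate K = 1/R^2 over the sphere of radius R; the area element
    # in spherical coordinates is R^2 * sin(theta) dtheta dphi
    K = 1.0 / R ** 2
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta       # midpoint rule in theta
        total += K * R ** 2 * math.sin(theta) * dtheta * 2 * math.pi
    return total

for R in (1.0, 5.0):
    print(R, total_curvature_sphere(R), 4 * math.pi)
```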
36
OLIVER KNILL
86. Atiyah-Singer
Assume M is a compact orientable finite dimensional manifold of dimension n and assume D
is an elliptic differential operator D : E → F between two smooth vector bundles E, F
over M . Using multi-index notation D^k = ∂_{x_1}^{k_1} · · · ∂_{x_n}^{k_n}, a differential operator Σ_k a_k(x) D^k is called elliptic if for all x, its symbol, the polynomial σ(D)(y) = Σ_{|k|=n} a_k(x) y^k, is not zero for nonzero y.
for nonzero y. Elliptic regularity assures that both the kernel of D and the kernel of the
adjoint D∗ : F → E are both finite dimensional. The analytical index of D is defined as
χ(D) = dim(ker(D)) − dim(ker(D∗ )). We think of it as the Euler characteristic of D. The
topological index of D is defined as the integral of the n-form KD = (−1)n ch(σ(D))·td(T M ),
over M . This n-form is the cup product · of the Chern character ch(σ(D)) and the Todd
class of the complexified tangent bundle T M of M . We think about KD as a curvature.
Integration is done over the fundamental class [M ] of M which is the natural volume form
on M . The Chern character and the Todd classes are both mixed rational cohomology classes.
On a complex vector bundle E they are both given by concrete power series of Chern classes
c_k(E) like ch(E) = e^{a_1} + · · · + e^{a_n} and td(E) = a_1 (1 − e^{−a_1})^{−1} · · · a_n (1 − e^{−a_n})^{−1} with
ai = c1 (Li ) if E = L1 ⊕ · · · ⊕ Ln is a direct sum of line bundles.
Theorem: The analytic and topological indices agree: χ(D) = ∫_M K_D.
In the case when D = d+d∗ from the vector bundle of even forms E to the vector bundle of odd
forms F , then KD is the Gauss-Bonnet curvature and χ(D) = χ(M ). Israel Gelfand conjectured
around 1960 that the analytical index should have a topological description. The Atiyah-Singer
index theorem has been proven in 1963 by Michael Atiyah and Isadore Singer. The result
generalizes the Gauss-Bonnet-Chern and Riemann-Roch-Hirzebruch theorem. According to
[346], “the theorem is valuable, because it connects analysis and topology in a beautiful and
insightful way”. See [322].
Abelian field extensions of Q are also called class fields. It follows that any algebraic number
field K/Q with Abelian Galois group has a conductor, the smallest n such that K lies in
the field generated by n’th roots of unity. Extending this theorem to other base number fields is
Kronecker’s Jugendtraum or Hilbert’s twelfth problem. The theory of complex mul-
tiplication does the generalization for imaginary quadratic fields. The theorem was stated
by Leopold Kronecker in 1853 and proven by Heinrich Martin Weber in 1886. A generalization
to local fields was done by Jonathan Lubin and John Tate in 1965 and 1966. (A local field is
a locally compact topological field with respect to some non-discrete topology. The list of local fields consists of R, C, finite extensions of the p-adic numbers Qp, and formal Laurent series Fq((t)) over a finite field Fq.) The study of cyclotomic fields came from elementary geometric problems
like the construction of a regular n-gon with ruler and compass. Gauss constructed a regular
17-gon and showed that a regular n-gon can be constructed with ruler and compass if and only if n is the product of a power of 2 and distinct Fermat primes F_n = 2^{2^n} + 1 (the known Fermat primes are 3, 5, 17, 257, 65537, and a problem of Eisenstein of 1844 asks whether there are infinitely many). Further interest came in the context of Fermat's last theorem because for odd n, x^n + y^n = z^n can be written as x^n + y^n = (x + y)(x + ζy) · · · (x + ζ^{n−1}y), where ζ is an n'th root of unity.
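The Gauss-Wantzel criterion can be checked mechanically. A minimal sketch (the list of Fermat primes below contains all currently known ones):

```python
def is_constructible(n):
    # Gauss-Wantzel: the regular n-gon is ruler-and-compass constructible
    # iff n = 2^k times a product of distinct Fermat primes
    fermat_primes = [3, 5, 17, 257, 65537]   # the only known Fermat primes
    while n % 2 == 0:
        n //= 2
    for p in fermat_primes:
        if n % p == 0:
            n //= p
            if n % p == 0:       # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

print([n for n in range(3, 21) if is_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```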
and n → ∞. This is equivalent to the fact that the unitary operator U f = f (T ) on L2 (X) has
no point spectrum when restricted to the orthogonal complement of the constant functions.
A topological transformation (a continuous map on a locally compact topological space) with a weakly mixing invariant measure is not integrable, as for integrability one wants every invariant measure to lead to an operator U with pure point spectrum, conjugating the system to a group translation. Let G be the complete topological group of automorphisms of (X, A, m)
with the weak topology: Tj converges to T weakly, if m(Tj (A)∆T (A)) → 0 for all A ∈ A; this
topology is metrizable and completeness is defined with respect to an equivalent metric.
Anatol Katok and Anatolii Mikhailovich Stepin in 1967 [231] proved that purely singular con-
tinuous spectrum of U is generic. A new proof was given in [84], and a short proof using Rokhlin's lemma, the Halmos conjugacy lemma and Barry Simon's "wonderland theorem" establishes both genericity of weak mixing and genericity of singular spectrum. On the topological
side, a generic volume preserving homeomorphism of a manifold has purely singular continuous
spectrum, which strengthens the Oxtoby-Ulam theorem [321] about generic ergodicity [232, 181].
The Wonderland theorem of Simon [369] also allowed to prove that a generic invariant measure
of a shift is singular continuous [245] or that zero-dimensional singular continuous spectrum is
generic for open sets of flows on the torus, allowing also to show that open sets of Hamiltonian systems contain a generic subset with both quasi-periodic as well as weakly mixing invariant tori [246].
91. Universality
The space X of unimodal maps is the set of twice continuously differentiable even maps f : [−1, 1] → [−1, 1] satisfying f(0) = 1, f''(x) < 0 and λ = f(1) < 0. The Feigenbaum-Cvitanović functional equation (FCE) is g = T g with T(g)(x) = (1/λ) g(g(λx)). The map T is a renormalization map.
The first proof was given by Oscar Lanford III in 1982 (computer assisted). See [212, 213].
That proof also established that the fixed point is hyperbolic with a one-dimensional unstable
manifold and positive expanding eigenvalue. This explains some universal features of unimodal maps found experimentally in 1978 by Mitchell Feigenbaum and which are now called
Feigenbaum universality. The result has been ported to area preserving maps [124].
92. Compactness
Let X be a compact metric space (X, d). The Banach space C(X) of real-valued continuous
functions is equipped with the supremum norm. A closed subset F ⊂ C(X) is called uniformly
bounded if for every x the supremum of all values f (x) with f ∈ F is bounded. The set F is
called equicontinuous if for every x and every ε > 0 there exists δ > 0 such that if d(x, y) < δ, then |f(x) − f(y)| < ε for all f ∈ F. A set F is called precompact if its closure is compact.
The Arzelà-Ascoli theorem is:
Theorem: Equicontinuous uniformly bounded sets in C(X) are precompact.
The result also holds on compact Hausdorff spaces and not only metric spaces. In the complex setting, there is a variant called Montel's theorem, which is the fundamental normality test for holomorphic functions: a uniformly bounded family of holomorphic functions on a complex domain G is
normal meaning that its closure is compact with respect to the compact-open topology.
The compact-open topology on C(X, Y) is the topology defined by the sub-base of all sets {f : f(K) ⊂ U }, where K runs over all compact subsets of X and U runs over all open subsets of Y.
93. Geodesic
The geodesic distance d(x, y) between two points x, y on a Riemannian manifold (M, g) is
defined as the length of the shortest geodesic γ connecting x with y. This renders the manifold
a metric space (M, d). We assume it is locally compact, meaning that every point x ∈ M has
a compact neighborhood. A metric space is called complete if every Cauchy sequence in M converges. (A sequence xk is called a Cauchy sequence if for every ε > 0, there exists n such that for all i, j > n one has d(xi, xj) < ε.) The local existence theorem for differential equations assures that geodesics exist for small enough time. This can be restated by saying that the exponential map v ∈ Tx M → M, which assigns to v ≠ 0 in the tangent space Tx M the point γ(|v|) of the geodesic γ with γ(0) = x and initial velocity v/|v|, is defined on a neighborhood of 0 in Tx M. A Riemannian
manifold M is called geodesically complete if the exponential map can be extended to the
entire tangent space Tx M for every x ∈ M . This means that geodesics can be continued for all
times. The Hopf-Rinow theorem assures:
Theorem: M is geodesically complete if and only if (M, d) is a complete metric space.
The theorem is named after Heinz Hopf and his student Willi Rinow, who published it in 1931.
See [117].
94. Crystallography
A wallpaper group is a discrete subgroup of the Euclidean symmetry group E2 of the plane. Wallpaper groups classify two-dimensional patterns according to their symmetry. In the plane R2, the underlying group is the group E2 of Euclidean plane symmetries, which contains translations, rotations, reflections and glide reflections. This group is the group
of rigid motions. It is a three dimensional Lie group which according to Klein’s Erlangen
program characterizes Euclidean geometry. Every element in E2 can be given as a pair
(A, b), where A is an orthogonal matrix and b is a vector. A subgroup G of E2 is called discrete
if there is a positive minimal distance between two elements of the group. This implies the
crystallographic restriction theorem assuring that only rotations of order 2, 3, 4 or 6 can
appear. This means only rotations by 180, 120, 90 or 60 degrees can occur in a wallpaper group.
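The crystallographic restriction can be verified directly (a sketch added here): a rotation preserving a lattice is conjugate to an integer matrix, so its trace 2 cos(2π/n) must be an integer, which singles out the orders 1, 2, 3, 4, 6.

```python
import math

# a lattice-preserving rotation by 2*pi/n is conjugate to an integer
# matrix, so its trace 2*cos(2*pi/n) must be an integer
allowed = []
for n in range(1, 50):
    t = 2 * math.cos(2 * math.pi / n)
    if abs(t - round(t)) < 1e-9:
        allowed.append(n)
print(allowed)  # [1, 2, 3, 4, 6]
```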
The first proof was given by Evgraf Fedorov in 1891 and then by George Pólya in 1924. In three dimensions there are 230 space groups and 219 types if chiral copies are identified. In
space there are 65 space groups which preserve the orientation. See [320, 177, 221].
The interest in quadratic forms started in the 17th century, especially in numbers which
can be represented as sums x2 + y 2. Lagrange proved the four square theorem in 1770. In 1916, Ramanujan listed all diagonal quaternary forms which are universal. The 15 theorem was
proven in 1993 by John Conway and William Schneeberger (a student of Conway’s in a graduate
course given in 1993). There is an analogous theorem for integer-valued positive quadratic forms; these take only integer values, but the corresponding positive definite matrices Q may have non-integer entries. The binary quadratic form x2 + xy + y 2 for example is integer-valued but does not have an integer matrix, because the corresponding matrix Q has fractions 1/2. In 2005, Bhargava and Jonathan Hanke proved
the 290 theorem, assuring that an integral positive quadratic form is universal if it contains
{1, . . . , 290} in its range [92].
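A brute-force check of the smallest classical case, Lagrange's four square theorem, is a few lines (a sketch added here; by the 15 theorem, universality of x² + y² + z² + w² would already follow from representing 1, . . . , 15):

```python
def is_sum_of_four_squares(n):
    # check by brute force whether n is a sum of four integer squares
    squares = [i * i for i in range(int(n ** 0.5) + 1)]
    sums = {0}
    for _ in range(4):
        sums = {s + q for s in sums for q in squares if s + q <= n}
    return n in sums

print(all(is_sum_of_four_squares(n) for n in range(1, 200)))  # True
```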
Sturm proved the theorem in 1829. He found his theorem on sequences while studying solutions
of differential equations (Sturm-Liouville theory) and credits Fourier for inspiration. See [337].
The result was proven by Henry John Stephen Smith in 1861. The result holds more generally
in a principal ideal domain, which is an integral domain (a ring R in which ab = 0 implies
a = 0 or b = 0) in which every ideal (an additive subgroup I of the ring such that ab ∈ I if
a ∈ I and b ∈ R) is generated by a single element.
Theorem: Local optima of linear programs are global and lie on the boundary.
Since the solutions are located on the vertices of the polytope defined by the constraints, the simplex algorithm for solving linear programs works: start at a vertex of the polytope, then move along the edges in the direction of the gradient until the optimum is reached. If A = [2, 3] and
x = [x1 , x2 ] and b = 6 and c = [3, 5] we have n = 1, m = 2. The problem is to maximize
f (x1 , x2 ) = 3x1 + 5x2 on the triangular region 2x1 + 3x2 ≤ 6, x1 ≥ 0, x2 ≥ 0. Start at (0, 0),
the best improvement is to go to (0, 2) which is already the maximum. Linear programming is
used to solve practical problems in operations research. The simplex algorithm was formulated
by George Dantzig in 1947. It solves random problems nicely but there are expensive cases in
general and it is possible that cycles occur. One of the open problems of Stephen Smale asks
for a strongly polynomial time algorithm deciding whether a solution of a linear programming
problem exists [310].
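For the toy problem above, one can simply compare the objective on the three vertices of the feasible triangle; this small sketch confirms why the simplex path ends at (0, 2):

```python
# the example from the text: maximize f(x1, x2) = 3*x1 + 5*x2 subject to
# 2*x1 + 3*x2 <= 6 and x1, x2 >= 0; the feasible set is a triangle and
# the optimum sits at one of its vertices
vertices = [(0.0, 0.0), (3.0, 0.0), (0.0, 2.0)]

def objective(v):
    return 3 * v[0] + 5 * v[1]

best = max(vertices, key=objective)
print(best, objective(best))  # (0.0, 2.0) 10.0
```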
One can think of An as a sequence of larger and larger matrix valued random variables. The
circular law tells that the eigenvalues fill out the unit disk in the complex plane uniformly when
taking larger and larger matrices. It is a kind of central limit theorem. An older version due
to Eugene Wigner from 1955 is the semi-circular law, telling that in the self-adjoint case, the now real measures µn converge to a distribution with density √(4 − x^2)/(2π) on [−2, 2]. The
circular law was stated first by Jean Ginibre in 1965 and Vyacheslav Girko in 1984. It was proven
first by Z.D. Bai in 1997. Various authors have generalized it and removed more and more
moment conditions. The latest condition was removed by Terence Tao and Van Vu in 2010,
proving so the above “fundamental theorem of random matrix theory”. See [401].
103. Diffeomorphisms
Let M be a compact Riemannian surface and T : M → M a C 2 -diffeomorphism. A Borel
probability measure µ on M is T -invariant if µ(T (A)) = µ(A) for all A ∈ A. It is called
ergodic if T (A) = A implies µ(A) = 1 or µ(A) = 0. The Hausdorff dimension dim(µ) of
a measure µ is defined as the Hausdorff dimension of the smallest Borel set A of full measure
µ(A) = 1. The entropy hµ (T ) is the Kolmogorov-Sinai entropy of the measure-preserving
dynamical system (X, T, µ). For an ergodic surface diffeomorphism, the Lyapunov exponents
λ1 , λ2 of (X, T, µ) are the logarithms of the eigenvalues of A = limn→∞ [(dT n (x))∗ dT n (x)]1/(2n) ,
which is a limiting Oseledec matrix and constant µ almost everywhere due to ergodicity. Let
λ(T, µ) denote the harmonic mean of λ1 and −λ2. The entropy-dimension-Lyapunov theorem
tells that for every T -invariant ergodic probability measure µ of T , one has:
Theorem: hµ = dim(µ)λ/2.
This formula has become famous because it relates “entropy”, “fractals” and “chaos”, which are
all “rock star” notions also outside of mathematics. The theorem implies in the case of Lebesgue
measure preserving symplectic transformation, where dim(µ) = 2 and λ1 = −λ2 that “entropy
= Lyapunov exponent” which is a formula of Pesin given by hµ(T) = λ(T, µ). A similar result holds for circle diffeomorphisms or smooth interval maps, where hµ(T) = dim(µ)λ(T, µ). The
notion of Hausdorff dimension was introduced by Felix Hausdorff in 1918. Entropy was defined in 1958 by Andrey Kolmogorov and in general by Yakov Sinai in 1959; Lyapunov exponents were introduced with the work of Valery Oseledec in 1965. The above theorem is due to Lai-
Sang Young who proved it in 1982. François Ledrappier and Lai-Sang Young proved in 1985 that in arbitrary dimensions, hµ = Σ_j λ_j γ_j, where γ_j are dimensions of µ in the direction of the Oseledec spaces E_j. This is called the Ledrappier-Young formula. It implies the Margulis-Ruelle inequality hµ(T) ≤ Σ_j λ_j^+(T), where λ_j^+ = max(λ_j, 0) and the λ_j(T) are the Lyapunov exponents. In the case of a smooth T-invariant measure µ or more generally, for SRB measures, there is an equality hµ(T) = Σ_j λ_j^+(T) which is called the Pesin formula. See [230, 125].
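The Pesin formula can be illustrated with Arnold's cat map T(x, y) = (2x + y, x + y) mod 1, which preserves Lebesgue measure; this sketch (an illustration added here, not from the text) estimates the positive Lyapunov exponent, which by Pesin's formula equals the entropy log((3 + √5)/2) ≈ 0.962.

```python
import math

# Arnold's cat map T(x, y) = (2x + y, x + y) mod 1 preserves area; its
# derivative is the constant matrix A = [[2, 1], [1, 1]]
lam_exact = math.log((3 + math.sqrt(5)) / 2)   # log of top eigenvalue of A

# estimate the top Lyapunov exponent by transporting a tangent vector
v = (1.0, 0.3)
total = 0.0
steps = 60
for _ in range(steps):
    v = (2 * v[0] + v[1], v[0] + v[1])
    r = math.hypot(v[0], v[1])
    total += math.log(r)
    v = (v[0] / r, v[1] / r)
lam_est = total / steps

# Pesin: entropy of the cat map equals the positive Lyapunov exponent
print(lam_exact, lam_est)
```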
104. Linearization
If F : M → M is a globally Lipschitz continuous function on a finite dimensional vector space
M, then the differential equation x' = F(x) has a global solution x(t) = f^t(x(0)) (a local solution exists by the Picard-Lindelöf existence theorem and is global by the Grönwall inequality). An
equilibrium point of the system is a point x0 for which F (x0 ) = 0. This means that x0 is a
fixed point of a differentiable mapping f = f 1 , the time-1-map. We say that f is linearizable
near x0 if there exists a homeomorphism φ from a neighborhood U of x0 to a neighborhood V of
x0 such that φ ◦ f ◦ φ−1 = df . The Sternberg-Grobman-Hartman linearization theorem
is
Theorem: If f is hyperbolic, then f is linearizable near x0 .
The theorem was proven by D.M. Grobman in 1959, by Philip Hartman in 1960 and by Shlomo Sternberg in 1958. This implies the existence of stable and unstable manifolds passing
through x0 . One can show more and this is due to Sternberg who wrote a series of papers
starting 1957 [385]: if A = df (x0 ) satisfies no resonance condition meaning that no relation
λ_i = λ_{i_1} · · · λ_{i_j} exists between eigenvalues of A, then a linearization to order n is a C^n map
φ(x) = x + g(x), with g(0) = g 0 (0) = 0 such that φ ◦ f ◦ φ−1 (x) = Ax + o(|x|n ) near x0 . We
say then that f can be n-linearized near x0 . The generalized result tells that non-resonance
fixed points of C n maps are n-linearizable near a fixed point. See [273].
105. Fractals
An iterated function system is a finite set of contractions {f_i}_{i=1}^n on a complete metric space (X, d). The corresponding Hutchinson operator H(A) = ∪_i f_i(A) is then a contraction in the Hausdorff metric on sets and has a unique fixed point called the attractor S of the iterated function system. The definition of Hausdorff dimension is as follows: define h^s_δ(A) = inf_U Σ_i |U_i|^s, where U = {U_i} is a δ-cover of A, and h^s(A) = lim_{δ→0} h^s_δ(A). The Hausdorff dimension dim_H(S) finally is the value s where h^s(S) jumps from ∞ to 0. If the contractions are maps with contraction factors 0 < λ_j < 1, then the Hausdorff dimension of the attractor S can be estimated with the similarity dimension of the contraction vector (λ_1, . . . , λ_n): this number is defined as the solution s of the equation Σ_{i=1}^n λ_i^s = 1.
There is an equality if fi are all affine contractions like fi (x) = Ai λx + βi with the same
contraction factor and Ai are orthogonal and βi are vectors (a situation which generates a
large class of popular fractals). For equality one also has to assume that there is an open
non-empty set G such that the G_i = f_i(G) are disjoint. In the case where the λ_j = λ are all the same, then nλ^{dim} = 1, which implies dim(S) = − log(n)/ log(λ). For the Smith-Cantor set S, where f1(x) = x/3 + 2/3, f2(x) = x/3 and G = (0, 1), one gets with n = 2 and λ = 1/3
the dimension dim(S) = log(2)/ log(3). For the Menger carpet with n = 8 affine maps
fij (x, y) = (x/3 + i/3, y/3 + j/3) with 0 ≤ i ≤ 2, 0 ≤ j ≤ 2, (i, j) 6= (1, 1), the dimension is
log(8)/ log(3). The Menger sponge is the analogue object with n = 20 affine contractions
in R3 and has dimension log(20)/ log(3). For the Koch curve on the interval, where n = 4
affine contractions of contraction factor 1/3 exist, the dimension is log(4)/ log(3). These are
all fractals, sets with Hausdorff dimension different from an integer. The modern formulation
of iterated function systems is due to John E. Hutchinson from 1981. Michael Barnsley used the concept for fractal compression algorithms, which use the idea that storing the rules
for an iterated function system is much cheaper than the actual attractor. Iterated function
systems appear in complex dynamics in the case when the Julia set is completely disconnected,
they have appeared earlier also in work of Georges de Rham 1957. See [284, 140].
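The similarity dimension can be computed by bisection, since s ↦ Σ λᵢˢ is strictly decreasing for contraction factors 0 < λᵢ < 1; a short sketch reproducing the values quoted above:

```python
import math

def similarity_dimension(factors):
    # solve sum(l^s for l in factors) = 1 for s by bisection; the sum
    # is strictly decreasing in s since all 0 < l < 1
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if sum(l ** mid for l in factors) > 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(similarity_dimension([1/3, 1/3]))   # Cantor set, log(2)/log(3)
print(similarity_dimension([1/3] * 8))    # carpet, log(8)/log(3)
print(similarity_dimension([1/3] * 4))    # Koch curve, log(4)/log(3)
```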
The point was made by Richard Guy in [178] who states two “corollaries”: “superficial sim-
ilarities spawn spurious statements” and “early exceptions eclipse eventual essen-
tials”. The statement is backed up with countless examples (a list of 35 is given in [178]). Famous are Fermat's claim that all Fermat numbers 2^{2^n} + 1 are prime or the claim that the number π3(n) of primes of the form 4k + 3 in {1, . . . , n} is larger than π1(n) of primes of
the form 4k + 1, so that the 4k + 3 primes win the prime race. Littlewood showed however that π3(n) − π1(n) changes sign infinitely often. The prime number theorem extended to
arithmetic progressions shows π1 (n) ∼ n/(2 log(n)) and π3 (n) ∼ n/(2 log(n)) but the density
of numbers with π3 (n) > π1 (n) is larger than 1/2. This is the Chebyshev bias. Experiments
then suggested the density to be 1 but also this is false: the density of numbers for which
π3 (n) > π1 (n) is smaller than 1. The principle is important in a branch of combinatorics called
Ramsey theory. But it does not only apply in discrete mathematics. There are many examples where one cannot tell by looking. When looking at the boundary of the Mandelbrot set, for example, one would guess that it is a fractal with Hausdorff dimension between 1 and 2. In reality
the Hausdorff dimension is 2 by a result of Mitsuhiro Shishikura. Mandelbrot himself thought
first “by looking” that the Mandelbrot set M is disconnected. Douady and Hubbard proved
M to be connected.
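The Chebyshev bias is easy to observe; this sketch (added here) sieves the primes up to 20000 and confirms that π₃(n) ≥ π₁(n) for every n in that range, consistent with the known fact that the lead first changes only at n = 26861.

```python
def prime_race(limit):
    # sieve of Eratosthenes, then track pi_1(n) and pi_3(n), the counts
    # of primes p <= n with p % 4 == 1 and p % 4 == 3
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    pi1 = pi3 = lead = 0
    for n in range(2, limit + 1):
        if sieve[n]:
            if n % 4 == 1:
                pi1 += 1
            elif n % 4 == 3:
                pi3 += 1
        if pi3 >= pi1:
            lead += 1
    return pi1, pi3, lead

pi1, pi3, lead = prime_race(20000)
print(pi1, pi3, lead)  # the 4k+3 primes lead (or tie) at every n checked
```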
So, there exist Ramsey numbers R(r, s) such that for n ≥ R(r, s), every 2-coloring of the edges of the complete graph with n vertices contains a clique of size r in the first color or a clique of size s in the second color. A famous case is the identity R(3, 3) = 6. Take n = 6 people. It defines the complete graph G. If two of them are friends, color the edge blue, otherwise red. This friendship graph therefore is a 2-coloring of G. There are 2^15 = 32768 possible colorings. In each of them, there is a triangle of friends or a triangle of strangers: in a group of 6 people, there is either a clique of 3 mutual friends or a clique of 3 complete strangers. The theorem was
proven by Frank Ramsey in 1930. Paul Erdős asked for explicit estimates of R(s), which is
the least integer n such that any graph on n vertices contains either a clique of size s (a set
where all are connected to each other) or an independent set of size s (a set where none are
connected to each other). Graham for example asks whether the limit R(n)1/n exists. Ramsey
theory also deals with other sets: van der Waerden's theorem from 1927 for example tells that if the positive integers N are colored with r colors, then for every k, there exists an N called W(r, k) such that the finite set {1, . . . , N} contains an arithmetic progression of length k whose elements all have the same color. For example, W(2, 3) = 9. Also here, it is an open problem to find a formula for W(r, k) or even give good upper bounds [169, 168].
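Both halves of R(3, 3) = 6 can be verified by exhaustive search over all 2¹⁵ colorings of K₆, together with an explicit triangle-free 2-coloring of K₅ (a brute-force sketch added here):

```python
from itertools import combinations, product

edges = list(combinations(range(6), 2))            # the 15 edges of K6
idx = {e: i for i, e in enumerate(edges)}
triangles = [(idx[(a, b)], idx[(a, c)], idx[(b, c)])
             for a, b, c in combinations(range(6), 3)]

# every 2-coloring of the edges of K6 contains a monochromatic triangle
assert all(any(c[i] == c[j] == c[k] for i, j, k in triangles)
           for c in product((0, 1), repeat=15))

# K5 admits a coloring without one: pentagon edges in one color, the
# diagonals (a pentagram, also a 5-cycle) in the other
pentagon = {tuple(sorted((i, (i + 1) % 5))) for i in range(5)}
col = {e: 0 if e in pentagon else 1 for e in combinations(range(5), 2)}
assert not any(col[(a, b)] == col[(a, c)] == col[(b, c)]
               for a, b, c in combinations(range(5), 3))
print("R(3,3) = 6 confirmed by exhaustive search")
```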
The Hodge dual of f ∈ Λp is defined as the unique ∗g ∈ Λn−p satisfying hf, ∗gi = hf ∧ g, ωi
where ω is the volume form. One has d∗ f = (−1)d+dp+1 ∗ d ∗ f and L ∗ f = ∗Lf . This implies
that ∗ is a unitary map from ker(L|Λp ) to ker(L|Λd−p ) proving so the duality theorem. For
n = 4k, one has ∗2 = 1, allowing one to define the Hirzebruch signature σ := dim{u | Lu = 0, ∗u = u} − dim{u | Lu = 0, ∗u = −u}. The Poincaré duality theorem was first stated by Henri
Poincaré in 1895. It took until the 1930s to clarify the notions and make it precise. The
Hodge approach establishing an explicit isomorphism between harmonic p and n − p forms
appears for example in [105].
The result was proven by Vladimir Abramovich Rokhlin in his 1947 thesis and independently
by Shizuo Kakutani in 1943. The lemma can be used to build Kakutani skyscrapers, which
are nice partitions associated to a transformation. This lemma allows one to approximate an aperiodic transformation T by periodic transformations Tn: just change T on T^{n−1}(B) so that Tn^n(x) = x for all x. The theorem has been generalized by Donald Ornstein and Benjamin Weiss to higher dimensions like Z^d actions of measure preserving transformations, where the aperiodicity assumption is replaced by the assumption that the action is free: for any n ≠ 0, the set {x : T^n(x) = x} has zero measure. See [100, 152, 181].
where δ is the geodesic distance on the flat torus and where | · |∞ is the L∞ supremum norm.
Let us call (Td, T, µ) a toral dynamical system if T is a homeomorphism, a continuous transformation with continuous inverse. A cube exchange transformation on Td is a periodic, piecewise affine measure-preserving transformation T which permutes rigidly all the cubes Π_{i=1}^d [ki/n, (ki + 1)/n], where ki ∈ {0, . . . , n − 1}. Every point in Td is T-periodic. A cube exchange transformation is determined by a permutation of the set {1, . . . , n}^d. If this permutation is cyclic,
the exchange transformation is called cyclic. A theorem of Lax [275] states that every toral dynamical system can be approximated in the metric δ by cube exchange transformations. The approximations can even be chosen cyclic [13].
The result is due to Peter Lax [275]. The proof of this result uses Hall's marriage theorem in graph theory (for a 'book proof' of the latter theorem, see [9]). Periodic approximations of
symplectic maps work surprisingly well for relatively small n (see [339]). On the Pesin region
this can be explained in part by the shadowing property [230]. The approximation by cyclic transformations makes long time stability questions look different [180].
The reference [371] (which states this as Theorem 6.3.6) gives some history: generalized functions appeared first in the work of Oliver Heaviside in the form of "operational calculus". Paul Dirac used the formalism in quantum mechanics. In the 1930s, Kurt Otto Friedrichs, Salomon Bochner and Sergei Sobolev defined weak solutions of PDE's. Schwartz built the theory on the Cc∞ functions, smooth functions of compact support. Sobolev embedding theorems mean that the existence of sufficiently many weak derivatives implies the existence of actual derivatives. For p = 2, the spaces W^k are Hilbert spaces and the theory is a bit simpler due to the availability of Fourier theory, where tempered distributions flourished.
In that case, one can define for any real s > 0 the Hilbert space H s as the subset of all f ∈ S 0
for which (1 + |ξ|2 )s/2 fˆ(ξ) is in L2 . The Schwartz test functions S consists of all C ∞ functions
having bounded semi norms ||φ||k = max|α|+|β|≤k ||xβ Dα φ||∞ < ∞ where α, β ∈ Nn . Since S is
larger than the set of smooth functions of compact support, the dual space S 0 is smaller. They
are tempered distributions. Sobolev embedding theorems like the one above allow one to show that weak solutions of PDE's are smooth: for example, if the equation ∆f = V f with smooth V is solved by a distribution f, then f is smooth [62, 371].
The picture [261] is that once the AI has figured out the philosophy of the "Dude" in the Coen brothers movie "The Big Lebowski", repeated mischief does not bother it any more and it "goes bowling". Objections are brushed away with "Well, this is your, like, opinion, man". Two examples
of human super intelligent units who have succeeded to hack their own reward function are
Alexander Grothendieck or Grigori Perelman. The Lebowski theorem is due to Joscha Bach
[26], who stated this theorem of super intelligence in a tongue-in-cheek tweet. From a
mathematical point of view, the smartest way to “solve” an optimal transport problem is to
change the utility function. On a more serious level, the smartest way to “solve” the continuum
hypothesis is to change the axiom system. This is a cheat, but on a meta level, more creativity
is possible. A precursor is Stanisław Lem's notion of a mimicretin [277], a computer that plays stupid in order, once and for all, to be left in peace, or the machine in [6] which develops humor
and enjoys fooling humans with the answer to the ultimate question: ”42”. This document
entry is the analogue to the ultimate question: “What is the fundamental theorem of AI”?
The theorem states that the exterior derivative d is dual to the boundary operator δ. If G
is a connected 1-manifold with boundary, it is a curve with boundary δG = {A, B}. A 1-form F can be integrated over the curve G by choosing the volume form r'(t) dt induced on G by a curve parametrization r : [a, b] → G and integrating ∫_a^b F(r(t)) · r'(t) dt, which is the line integral. Stokes theorem is then the fundamental theorem of line integrals: take a 0-form f, which is a scalar function; the derivative df is the gradient F = ∇f and ∫_a^b ∇f(r(t)) · r'(t) dt = f(B) − f(A). If G is a two dimensional surface with boundary δG and F
is a 1-form, then the 2-form dF is the curl of F . If G is given as a surface parametrization
r(u, v), one can apply dF to the pair of tangent vectors ru, rv and integrate dF(ru, rv) over the surface G to get ∫_G dF. The Kelvin-Stokes theorem tells that this is the same as the line integral ∫_{δG} F. In the case of M = R3, where F = P dx + Qdy + Rdz can be identified with a vector field F = [P, Q, R] and dF with ∇ × F, and where the integration of a 2-form H over a parametrized manifold G is ∫∫_R H(r(u, v))(ru, rv) du dv = ∫∫_R H(r(u, v)) · (ru × rv) du dv, we get the classical Kelvin-Stokes theorem. If F is a 2-form, then dF is a 3-form which can be integrated over a 3-manifold G. As d : Λ2 → Λ3 can via Hodge duality naturally be paired with d : Λ0 → Λ1, this case leads to the classical divergence theorem ∫∫∫_G ∇ · F dV = ∫∫_{δG} F · dS.
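The fundamental theorem of line integrals is easy to check numerically. A sketch added here, with the hypothetical potential f(x, y) = x²y (a choice made for illustration) along the circular arc r(t) = (cos t, sin t), 0 ≤ t ≤ 1:

```python
import math

def f(x, y):
    # a hypothetical potential used for the check
    return x * x * y

def grad_f(x, y):
    return (2 * x * y, x * x)

def line_integral(n=20000):
    # midpoint-rule approximation of the integral of grad(f) . r'(t) dt
    # along r(t) = (cos t, sin t) for t in [0, 1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        gx, gy = grad_f(x, y)
        total += (gx * (-math.sin(t)) + gy * math.cos(t)) * h
    return total

A, B = (math.cos(0.0), math.sin(0.0)), (math.cos(1.0), math.sin(1.0))
print(line_integral(), f(*B) - f(*A))  # the two values agree
```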
The d notation for the exterior derivative was introduced in 1902 by Théophile de Donder. The ultimate formulation above is from Cartan 1945. We followed Katz [235], who noticed that only in 1959 did this version start to appear in textbooks.
115. Moments
The Hausdorff moment problem asks for necessary and sufficient conditions for a sequence µn to be realizable as a moment sequence ∫_0^1 x^n dµ(x) for a Borel probability measure on [0, 1]. One can study the problem also in higher dimensions: for a multi-index n = (n1, . . . , nd) denote by µn = ∫ x_1^{n_1} · · · x_d^{n_d} dµ(x) the n'th moment of a signed Borel measure µ on the unit cube
I d = [0, 1]d ⊂ Rd . We say µn is a moment configuration if there exists a measure µ which
has µn as moments. If ei denotes the standard basis in Zd, define the partial difference (∆_i a)_n = a_{n−e_i} − a_n and ∆^k = Π_i ∆_i^{k_i}. We write \binom{n}{k} = Π_{i=1}^d \binom{n_i}{k_i} and Σ_{k=0}^n = Σ_{k_1=0}^{n_1} · · · Σ_{k_d=0}^{n_d}. We say the moments µn are Hausdorff bounded if there exists a constant C such that Σ_{k=0}^n \binom{n}{k} |(∆^k µ)_n| ≤ C for all n ∈ N^d. The theorem of Hausdorff-Hildebrandt-Schoenberg is
Theorem: Hausdorff bounded moments µn are generated by a measure µ.
The above result is due to Theophil Henry Hildebrandt and Isaac Jacob Schoenberg from 1933 [198]. Moments also allow to compare measures: a measure µ is called uniformly absolutely
continuous with respect to ν if there exists f ∈ L∞ (ν) such that µ = f ν. A positive probability
measure µ is uniformly absolutely continuous with respect to a second probability measure ν
if and only if there exists a constant C such that (∆k µ)n ≤ C · (∆k ν)n for all k, n ∈ Nd .
In particular it gives a generalization of a result of Felix Hausdorff from 1921 [191] assuring
that µ is positive if and only if (∆k µ)n ≥ 0 for all k, n ∈ Nd . An other special case is that
µ is uniformly absolutely continuous with respect to the Lebesgue measure ν on I^d if and only if there is a constant C with \binom{n}{k} |(∆^k µ)_n| (n + 1)^d ≤ C for all k and n. Moments play an important role in statistics, when looking at moment generating functions Σ_n µn t^n of random variables X, where µn = E[X^n],
as well as in multivariate statistics, when looking at random vectors (X1 , . . . , Xd ), where
µn = E[X1n1 · · · Xdnd ] are multivariate moments. See [248]
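In dimension one these Hausdorff conditions can be checked numerically. The following sketch (an illustration added here, not part of the original text) takes the moments $\mu_n = 1/(n+1)$ of Lebesgue measure on $[0,1]$ and verifies both Hausdorff positivity $(\Delta^k \mu)_n \geq 0$ and the Hausdorff bound: the sum $\sum_k \binom{n}{k} |(\Delta^k \mu)_n|$ equals $1$ for every $n$.

```python
from math import comb
from functools import lru_cache

# moments of Lebesgue measure on [0, 1]: mu_n = int_0^1 x^n dx = 1/(n+1)
def mu(n):
    return 1.0 / (n + 1)

@lru_cache(maxsize=None)
def delta(k, n):
    """k-th difference (Delta^k mu)_n with (Delta a)_n = a_{n-1} - a_n."""
    if k == 0:
        return mu(n)
    return delta(k - 1, n - 1) - delta(k - 1, n)

for n in range(1, 12):
    assert all(delta(k, n) >= 0 for k in range(n + 1))   # Hausdorff positivity
    total = sum(comb(n, k) * abs(delta(k, n)) for k in range(n + 1))
    print(n, round(total, 10))   # the Hausdorff bound: total is 1 for each n
```

That the sum is constant term by term reflects $(\Delta^k \mu)_n = \int_0^1 x^{n-k}(1-x)^k \, dx$, so that $\binom{n}{k}(\Delta^k \mu)_n = 1/(n+1)$ for every $k$.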
116. Martingales
A sequence of random variables X1 , X2 , . . . on a probability space (Ω, A, P) is called a discrete
time stochastic process. We assume the Xk to be in L2 meaning that the expectation
E[Xk2 ] < ∞ for all k. Given a sub-σ algebra B of A, the conditional expectation E[X|B] is
the projection of L2 (Ω, A, P ) to L2 (Ω, B, P ). Extreme cases are E[X|A] = X and E[X|{∅, Ω}] =
E[X]. A finite set Y1 , . . . , Yn of random variables generates a sub- σ-algebra B of A, the
smallest σ-algebra for which all Yj are still measurable. Write E[X|Y1 , · · · , Yn ] = E[X|B],
where $\mathcal{B}$ is the $\sigma$-algebra generated by $Y_1, \dots, Y_n$. A discrete time stochastic process is called
a martingale if $E[X_{n+1} \mid X_1, \dots, X_n] = X_n$ for all $n$. If the equal sign is replaced with $\leq$,
the process is called a super-martingale; if with $\geq$, a sub-martingale. The random walk
$X_n = \sum_{k=1}^n Y_k$ defined by a sequence of independent $L^2$ random variables $Y_k$ with $E[Y_k] = 0$ is an
example of a martingale because independence implies $E[X_{n+1} \mid X_1, \dots, X_n] = X_n + E[Y_{n+1}]$, which
is $X_n$ by the centering assumption. If $X$ and $M$ are two discrete time stochastic
51
FUNDAMENTAL THEOREMS
The convergence theorem can be used to prove the optional stopping theorem, which
tells that the expected value of the stopped process is the initial expected value. In finance it
is known as the fundamental theorem of asset pricing. If $\tau$ is a stopping time adapted
to a martingale $X_k$, it defines the random variable $X_\tau$, and $E[X_\tau] = E[X_0]$. For a super-
martingale one has $\geq$ and for a sub-martingale $\leq$. The proof is obtained by defining the
stopped process $X_n^\tau = X_0 + \sum_{k=0}^{\min(\tau,n)-1} (X_{k+1} - X_k)$, which is a martingale transform and so
a martingale. The martingale convergence theorem gives a limiting random variable $X_\tau$, and
because $E[X_n^\tau] = E[X_0]$ for all $n$, $E[X_\tau] = E[X_0]$. This is rephrased as “you can not beat the
system” [427]. A trivial implication is that one can not, for example, design a strategy allowing
one to win in a fair game by choosing a “clever stopping time”, like betting on “red” in roulette
after “black” has occurred 6 times in a row, or by following the strategy to stop the game at the
first positive total win, which one can always reach by doubling the bet after every loss.
Martingales were introduced by Paul Lévy in 1934, the name “martingale” (referring to the
just mentioned doubling betting strategy) was added in a 1939 probability book of Jean Ville.
The theory was developed by Joseph Leo Doob in his book of 1953 [120]. See [427].
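The optional stopping statement above can be illustrated by a Monte Carlo sketch (with illustrative parameters chosen here, not taken from the text): for the simple symmetric random walk and the bounded stopping time $\tau = \min(\text{first hitting time of } \pm 3,\ 100)$ one expects $E[X_\tau] = E[X_0] = 0$.

```python
import random

random.seed(1)

def stopped_value():
    """Run the walk until it hits -3 or +3, or 100 steps elapse."""
    x = 0
    for _ in range(100):
        x += random.choice((-1, 1))
        if abs(x) == 3:
            break
    return x

samples = 100_000
mean = sum(stopped_value() for _ in range(samples)) / samples
print(round(mean, 3))   # close to E[X_0] = 0
```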
117. Theorema Egregium
Gauss himself already gave explicit formulas, but a formula of Brioschi gives the curvature $K$
explicitly as a ratio of determinants involving $E, F, G$ as well as their first and second derivatives.
In the case when the surface is given as a graph $z = f(x, y)$, one can give $K = D/(1 + |\nabla f|^2)^2$,
where $D = f_{xx} f_{yy} - f_{xy}^2$ is the discriminant and $1 + |\nabla f|^2 = \det(I)$ is the determinant of the
first fundamental form. If the surface is rotated in space so that $(u, v)$ is a critical point for $f$, then the discriminant
$D$ is equal to the curvature. One can see the independence of the embedding also from the
Puiseux formula $K = \lim_{r \to 0} 3(|S_0(r)| - |S(r)|)/(\pi r^3)$, where $|S_0(r)| = 2\pi r$ is the circumference of
the circle $S_0(r)$ in the flat case and $|S(r)|$ is the circumference of the geodesic circle of radius
$r$ on $S$. The Theorema Egregium also follows from Gauss-Bonnet, as the latter allows to write the
curvature in terms of the angle sum of an infinitesimal geodesic triangle compared with the angle sum $\pi$
of a flat triangle. As the angle sums are defined entirely intrinsically, the curvature is intrinsic.
The Theorema Egregium was found by Carl Friedrich Gauss in 1827 and published in 1828
in “Disquisitiones generales circa superficies curvas”. It is not an accident, that Gauss was
occupied with concrete geodesic triangulation problems too.
118. Entropy
Given a random variable $X$ on a probability space $(\Omega, \mathcal{A}, P)$ which is finite and discrete in the
sense that it takes only finitely many values, the entropy is defined as $S(X) = -\sum_x p_x \log(p_x)$,
where $p_x = P[X = x]$. To compare, for a random variable $X$ with cumulative distribution
function $F(x) = P[X \leq x]$ having a continuous derivative $F' = f$, the entropy is defined as
$S(X) = -\int f(x) \log(f(x)) \, dx$, allowing the value $-\infty$ if the integral does not converge. (We
always read $p \log(p) = 0$ if $p = 0$.) In the continuous case, one also calls this the differential
entropy. Two discrete random variables $X, Y$ are called independent if one can realize them
on a product probability space Ω = A × B so that X(a, b) = X(a) and Y (a, b) = Y (b) for
some functions X : A → R, Y : B → R. Independence implies that the random variables are
uncorrelated, E[XY ] = E[X]E[Y ] and that the entropy adds up S(XY ) = S(X) + S(Y ).
We can write $S(X) = E[\log(W)]$, where $W$ is the “Wahrscheinlichkeit” random variable
assigning to $\omega \in \Omega$ the value $W(\omega) = 1/p_x$ if $X(\omega) = x$. Let us say a functional on discrete
random variables is additive if it is of the form $H(X) = \sum_x f(p_x)$ for some continuous function
$f$ for which $f(t)/t$ is monotone. We say it is multiplicative if $H(XY) = H(X) + H(Y)$ for
independent random variables. The functional is normalized if $H(X) = \log(2)$ when $X$ is a
random variable taking two values $\{0, 1\}$ with probability $p_0 = p_1 = 1/2$. Shannon's theorem
is:
Theorem: Any normalized, additive and multiplicative H is entropy S.
The word “entropy” was introduced by Rudolf Clausius in 1850 [349]. Ludwig Boltzmann saw
the importance of $\frac{d}{dt} S \geq 0$ in the context of heat and wrote in 1872 $S = k_B \log(W)$, where
$W(x) = 1/p_x$ is the inverse “Wahrscheinlichkeit” that a state has the value $x$. His equation
is understood as the expectation $S = k_B E[\log(W)] = k_B \sum_x p_x \log(W(x))$, which is the Shannon
entropy, introduced in 1948 by Claude Shannon in the context of information theory. (Shannon
characterized functionals $H$ with the property that $H$ is continuous in $p$, that for random
variables $X_n$ with $p_x(X_n) = 1/n$ one has $H(X_n)/n \leq H(X_m)/m$ if $n \leq m$, and that if $X, Y$ are
two random variables such that the finite $\sigma$-algebra $\mathcal{A}$ defined by $X$ is a sub-$\sigma$-algebra of the $\sigma$-algebra $\mathcal{B}$ defined
by $Y$, then $H(Y) = H(X) + \sum_x p_x H(Y_x)$, where $Y_x(\omega) = Y(\omega)$ for $\omega \in \{X = x\}$.) One can
show that these Shannon conditions are equivalent to the combination of being additive and
multiplicative. In statistical thermodynamics, where $p_x$ is the probability of a micro-state,
$k_B S$ is also called the Gibbs entropy, where $k_B$ is the Boltzmann constant. For
general random variables $X$ on $(\Omega, \mathcal{A}, P)$ and a finite sub-$\sigma$-algebra $\mathcal{B}$, Gibbs looked in 1902 at
coarse-grained entropy, which is the entropy of the conditional expectation $Y = E[X|\mathcal{B}]$,
which is now a random variable $Y$ taking only finitely many values so that entropy is defined.
See [363].
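The additivity $S(XY) = S(X) + S(Y)$ for independent discrete random variables can be checked directly; a minimal sketch with made-up distributions:

```python
import math

def entropy(ps):
    """Shannon entropy -sum p log p, with the convention 0*log(0) = 0."""
    return -sum(p * math.log(p) for p in ps if p > 0)

p = [0.5, 0.3, 0.2]            # distribution of X
q = [0.25, 0.75]               # distribution of Y, independent of X
joint = [a * b for a in p for b in q]   # joint distribution of (X, Y)
print(math.isclose(entropy(joint), entropy(p) + entropy(q)))  # True
```

The identity rests on $\log(ab) = \log(a) + \log(b)$, the same mechanism behind Boltzmann's $S = k_B \log(W)$.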
$\{x_k\}_{k \in \mathbb{N}}$. A pair of points $a, b \in H$ defines a mountain pass if there exist $\epsilon > 0$ and $r > 0$
such that $f(x) \geq f(a) + \epsilon$ on $S_r(a) = \{x \in H \mid \|x - a\| = r\}$, $f$ is not constant on $S_r(a)$ and
$f(b) \leq f(a)$. A critical point is called a saddle if it is neither a maximum nor a minimum of
$f$.
Theorem: If a Palais-Smale f has a mountain pass, it features a saddle.
The idea is to look at all continuous paths γ from a to b parametrized by t ∈ [0, 1]. For each
path γ, the value cγ = f (γ(t)) has to be maximal for some time t ∈ [0, 1]. The infimum over
all these critical values cγ is a critical value of f . The mountain pass condition leads to a
“mountain ridge” and the critical point is a “mountain pass”, hence the name. The example
(2 exp(−x2 − y 2 ) − 1)(x2 + y 2 ) with a = (0, 0), b = (1, 0) shows that the non-constant condition
is necessary for a saddle point on $S_r(a)$ with $r = 1/2$. The reason for sticking with a Hilbert
space is that it is easier to realize the compactness condition due to weak-$*$ compactness of
the unit ball. But it is possible to weaken the conditions and work with a Banach manifold $X$
with continuous Gâteaux derivative $f' : X \to X^*$, where $X$ carries the strong and $X^*$ the weak-$*$ topology.
It is difficult to pinpoint historically the first use of the mountain pass principle as it must have
been known intuitively since antiquity. The crucial Palais-Smale compactness condition
which makes the theorem work in infinite dimensions appeared in 1964. [24] calls it condition
(C), a notion which already appeared in the original paper [324].
Theorem: If p, q are positive and odd integers, then S(2q, p) = eiπ/4 S(−p, 2q).
One has $S(1, p) = (1/\sqrt{p}) \sum_{x=0}^{p-1} \exp(i\pi x^2/p) = 1$ for all positive integers $p$ and
$S(2, p) = (e^{i\pi/4}/\sqrt{p}) \sum_{x=0}^{p-1} \exp(2i\pi x^2/p) = 1$ if $p = 4k + 1$ and $i$ if $p = 4k - 1$. The method of expo-
nential sums has been expanded especially in Vinogradov's papers [413] and used in number
theory, for example for quadratic reciprocity [311]. The topic is of interest also outside of number
theory, for example in dynamical systems theory, as Fürstenberg has demonstrated. An ergodic theorist
would look at the dynamical system T (x, y) = (x + 2y + 1, y + 1) on the 2-torus T2 = R2 /(πZ)2
and define $g_\alpha(x, y) = \exp(i\pi x \alpha)$. Since the orbit of this toral map is $T^n(1, 1) = (n^2, n)$, the
exponential sum can be written as a Birkhoff sum $\sum_{k=0}^{p-1} g_{q/p}(T^k(1, 1))$, which is a particular
orbit of a dynamical system. Results as those mentioned above show that the random walk
grows like $\sqrt{p}$, similarly as in a random setting. Now, since the dynamical system is minimal,
the growth rate should not depend on the initial point, and $\pi q/p$ should be replaceable by any
irrational $\alpha$ and no longer be linked to the length of the orbit. The problem is then to study
the growth rate of the stochastic process $S_t(x, y) = \sum_{k=0}^{t-1} g(T^k(x, y))$ (a sequence of random
variables) for any continuous $g$ with zero expectation, which by Fourier theory boils down to looking at
exponential sums. Of course $S_t(x, y)/t \to 0$ by Birkhoff's ergodic theorem, but as in the law
of the iterated logarithm one is interested in precise growth rates. This can be subtle. Already
in the simpler case of an integrable $T(x) = x + \alpha$ on the 1-torus, there is Denjoy-Koksma
theory which shows that the growth rate depends on Diophantine properties of $\pi\alpha$. Unlike for
irrational rotations, the Fürstenberg type skew systems $T$ leading to the theta functions are not
integrable: $T$ is not conjugated to a group translation (there is some randomness, even though it is weak,
as the Kolmogorov-Sinai entropy is zero). The dichotomy between structure and randomness, and
especially the similarities between dynamical and number theoretical set-ups, has been discussed
in [400].
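The square-root cancellation in quadratic exponential sums is easy to observe numerically. The sketch below (a generic illustration, not tied to the particular normalization of $S(q, p)$ used above) evaluates $\sum_{x=0}^{p-1} \exp(2\pi i x^2/p)$ for a few odd primes and confirms that the modulus squared equals $p$.

```python
import cmath
import math

def gauss_sum(p):
    """Quadratic exponential sum sum_{x < p} exp(2*pi*i*x^2/p)."""
    return sum(cmath.exp(2j * math.pi * x * x / p) for x in range(p))

for p in (5, 7, 11, 13):
    print(p, round(abs(gauss_sum(p)) ** 2))   # modulus squared equals p
```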
The theorem was proven by Marcel Berger and Wilhelm Klingenberg in 1960. That a pinching
condition would imply a manifold to be a sphere had been conjectured already by Heinz Hopf.
Hopf himself proved in 1926 that constant sectional curvature implies that M is even isometric
to a sphere. Harry Rauch, after visiting Hopf in Zürich in the 1940’s proved that a 3/4-
pinched simply connected manifold is a sphere. In 2007, Simon Brendle and Richard Schoen
proved that the theorem even holds with “$M$ is a $d$-sphere” strengthened to mean that $M$ is
diffeomorphic to the Euclidean $d$-sphere $\{|x|^2 = 1\} \subset \mathbb{R}^{d+1}$. This is the differentiable sphere
theorem. Since John Milnor had given in 1956 examples of spheres which are homeomorphic
but not diffeomorphic to the standard sphere (so called exotic spheres, spheres which carry
a smooth maximal atlas different from the standard one), the differentiable sphere theorem
is a substantial improvement on the topological sphere theorem. It needed completely new
techniques, especially the Ricci flow ġ = −2Ric(g) of Richard Hamilton which is a weakly
parabolic partial differential equation deforming the metric g and uses the Ricci curvature
Ric of g. See [40, 57].
also find an example of an infinite finitely presented simple group. The non-solvability of the
word problem implies the non-solvability of the homeomorphism problem for n-manifolds with
n ≥ 4. See [435].
It may appear silly to put a God number computation among the fundamental theorems, but the status
of the Rubik cube is enormous: it has been one of the most popular puzzles for decades
and is a prototype for many other similar puzzles, so the choice can be defended.¹ One can
ask to compute the God number of any finitely presented finite group. Interesting in general
is the complexity of evaluating that functional. The simplest nontrivial Rubik cuboid is
the $2 \times 2 \times 1$ one. It has 6 positions and 2 generators $a, b$. The finitely presented group
is $\{a, b \mid a^2 = b^2 = (ab)^3 = 1\}$, which is the dihedral group $D_3$. Its group elements are
$G = \{1, a = babab, ab = baba, aba = bab, abab = ba, ababa = b\}$. The group is isomorphic to the
symmetry group of the equilateral triangle, generated by the two reflections $a, b$ at two
altitude lines. The God number of that group is 3 because the Cayley graph $\Gamma$ is the cyclic
graph $C_6$: the puzzle solver has “no other choice than solving the puzzle”, as any non-trivial
move which does not undo the previous one makes progress. See [224] or [33] for general combinatorial group theory.
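The God number of a finite group with chosen generators is the diameter of its Cayley graph, which a breadth-first search computes directly. The sketch below (added here for illustration, using the permutation representation $D_3 \cong S_3$) recovers the order 6 and God number 3 of the $2 \times 2 \times 1$ cuboid group.

```python
from collections import deque

def cayley_order_and_diameter(generators, identity):
    """BFS on the Cayley graph of the group generated by `generators`
    (permutations as tuples); returns (group order, diameter)."""
    def compose(p, q):               # (p o q)(x) = p[q[x]]
        return tuple(p[i] for i in q)
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in generators:
            h = compose(s, g)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return len(dist), max(dist.values())

# the two reflections a, b with a^2 = b^2 = (ab)^3 = 1 generate D_3 = S_3
a = (1, 0, 2)    # transposition of 0 and 1
b = (0, 2, 1)    # transposition of 1 and 2
order, god = cayley_order_and_diameter([a, b], (0, 1, 2))
print(order, god)   # 6 3
```

The same routine works for any finite permutation group, though for the full $3 \times 3 \times 3$ cube the state space is far too large for a plain BFS.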
The theorem applied to a smooth map $f : M \to \mathbb{R}$ tells that for almost all $c$, the set $f^{-1}(c)$
is a smooth hypersurface of $M$ or else empty. The latter can happen if $f$ is constant. We
assumed $C^\infty$, but one can relax the smoothness assumption on $f$. If $n \geq m$, then $f$ needs only
to be continuously differentiable. If $n < m$, then $f$ needs to be in $C^{m-n+1}$. The case when $N$
is one-dimensional was covered by Anthony Morse (who is unrelated to Marston Morse)
in 1939 and by Arthur Sard in general in 1942. A bit confusing is that Marston Morse (not
Anthony) covered the cases $m = 1, 2, 3$ and Sard the cases $m = 4, 5, 6$ in unpublished papers
before, as mentioned in a footnote to [357]. Sard also noted already that examples of Hassler
Whitney show that the smoothness condition can not be relaxed. Sard formulated the results
for $M = \mathbb{R}^m$ and $N = \mathbb{R}^n$ (by the way with the same choice $f : M \to N$ as done here and not
as in many other places). The manifold case appears for example in [386].
¹I presented the God number problem in the 1980s as an undergraduate in a logic seminar of Ernst Specker,
and the choice of topic was objected to by Specker himself as a too “narrow problem”. But for me, the
Rubik cube and its group theoretical properties have “cult status”, and it was one of the triggers to study math.
The theorem seems to have first been realized by Henri Poincaré in 1901. Weierstrass had
earlier used the Weierstrass $\wp$ function in the case of elliptic curves over the complex plane. To
define the group multiplication, one uses the chord-tangent construction: first add a point $O$,
called the point at infinity, which serves as the zero in the group. Then define $-P$ as the point
obtained by reflecting at the $x$-axis. The group multiplication between two different points
$P, Q$ on the curve is defined to be $-R$ if $R$ is the third point of intersection of the line through $P, Q$
with the curve. If $P = Q$, then $R$ is defined to be the second intersection of the tangent with the curve.
If there is no second intersection, that is if $P = Q$ is an inflection point, then one defines $P + P = -P$.
Finally, define $P + O = O + P = P$ and $P + (-P) = O$. This recipe can be given explicitly
in coordinates, allowing to define the multiplication over any field of characteristic different from
2 or 3. The group structure on elliptic curves over finite fields provides a rich source of finite
Abelian groups which can be used for cryptographic purposes, the so-called elliptic curve
cryptography (ECC). Procedures like public-key exchange, Diffie-Hellman, or factorization attacks on
integers can be done using groups given by elliptic curves [415].
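A minimal sketch of the chord-tangent recipe over a prime field (the curve and base point below are chosen here purely for illustration): the point at infinity $O$ is represented by `None`, and $-P$ is the reflection at the $x$-axis.

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p (p > 3); None plays O."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p)          # chord slope
    s %= p
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p                    # reflect the third point
    return (x3, y3)

# y^2 = x^3 + 2x + 3 over F_97; (3, 6) lies on it since 36 = 27 + 6 + 3
a, p = 2, 97
P = (3, 6)
Q, n = P, 1
while Q is not None:        # the multiples of P form a finite cyclic group
    Q = ec_add(Q, P, a, p)
    n += 1
print(n)                    # the order of P in E(F_97)
```

The modular inverses use Python's three-argument `pow` (available since Python 3.8); real ECC implementations work with standardized curves and constant-time arithmetic instead of such a toy example.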
127. Billiards
Billiards are the geodesic flow on a smooth compact n-manifold $M$ with boundary. The dynamics
is extended through the boundary by applying the law of reflection. While the geodesic
flow $X^t$ is Hamiltonian on the unit tangent bundle $SM$, the billiard flow is only
piecewise smooth, and the return map to the boundary is in general not continuous, but it
is a map preserving a natural volume so that one can look at its ergodic theory. Already difficult
are flat 2-manifolds $M$ homeomorphic to a disc having convex boundary homeomorphic to a
circle. For smooth convex tables this leads to a return map $T$ on the annulus $X = \mathbb{T} \times [-1, 1]$
which is $C^{r-1}$ smooth if the boundary is $C^r$ [121]. It defines a monotone twist map in the
sense that it preserves the boundary, is area and orientation preserving, and satisfies the twist
condition that $y \to T(x, y)$ is strictly monotone. A Bunimovich stadium is the 2-manifold
with boundary obtained by taking the convex hull of two discs of equal radius in $\mathbb{R}^2$ with
different centers. The billiard map is called chaotic if it is ergodic and the Kolmogorov-Sinai
entropy is positive. By Pesin theory, this metric entropy is the Lyapunov exponent which
is the exponential growth rate of the Jacobian dT n (and constant almost everywhere due to
ergodicity). There are coordinates in the tangent bundle of the annulus X in which dT is the
composition of a horizontal shear with strength L(x, y), where L is the trajectory length before
the impact with a vertical shear with strength −2κ/ sin(θ) where κ(x) is the curvature of the
curve at the impact x and y = cos(θ), with impact angle θ ∈ [0, π] between the tangent and
the trajectory.
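For the unit circle, the billiard return map in the annulus coordinates $(s, y) = (\text{arclength}, \cos\theta)$ is the integrable twist map $(s, y) \mapsto (s + 2\theta \bmod 2\pi, y)$: the chord-tangent angle $\theta$ is conserved. A minimal sketch (added here for illustration):

```python
import math

def circle_billiard(s, y):
    """One bounce on the unit circle: advance the arclength by 2*theta."""
    theta = math.acos(y)                 # angle between tangent and chord
    return (s + 2 * theta) % (2 * math.pi), y

s, y = 0.3, 0.5
for _ in range(5):
    s, y = circle_billiard(s, y)
print(y)   # 0.5 — the impact angle is invariant
```

The conserved quantity $y$ is what makes the circle integrable; for a generic convex table no such invariant exists, which is where the ergodic-theory questions of this section begin.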
theorem of John Mather) or points, where the curvature is unbounded (by a theorem of Andrea
Hubacher). Leonid Bunimovich constructed in 1979 the first convex chaotic billiard. No smooth
convex billiard table with positive Kolmogorov-Sinai entropy is known. A candidate is the real
analytic x4 + y 4 = 1. Various generalizations have been considered like in [430]. A detailed
proof that the Bunimovich stadium is measure theoretically conjugated to a Bernoulli system
(the shift on a product space) is surprisingly difficult: one has to show positive Lyapunov
exponents on a set of positive measure. Applying Pesin theory with singularities (Katok-
Strelcyn theory) gives a Markov process. One needs then to establish ergodicity using a method
of Eberhard Hopf of 1936, which requires understanding stable and unstable manifolds [82]. See
[398, 414, 308, 164, 230, 82] for sources on billiards.
128. Uniformization
A Riemann surface is a one-dimensional complex manifold. This means it is a connected
two dimensional manifold so that the transition functions of the atlas are holomorphic mappings
of the complex plane. It is simply connected if its fundamental group is trivial, meaning that
its genus $b_1$ is zero. Two Riemann surfaces are conformally equivalent or simply equivalent
if they are equivalent as complex manifolds, that is, if a bijective morphism $f$ between them
exists. A map $f : S \to S'$ is holomorphic if for every choice of coordinates $\phi : S \to \mathbb{C}$ and
$\phi' : S' \to \mathbb{C}$, the map $\phi' \circ f \circ \phi^{-1}$ is holomorphic. The curvature is the Gaussian curvature
of the surface. The uniformization theorem is:
Theorem: A Riemann surface is equivalent to one with constant curvature.
This is a “geometrization statement” and means that the universal cover of every Riemann sur-
face is conformally equivalent to either a Riemann sphere (positive curvature), a complex
plane (zero curvature) or a unit disk (negative curvature). It implies that any region $G \subset \mathbb{C}$
whose complement contains 2 or more points has a universal cover which is the disk, which
especially implies the Riemann mapping theorem assuring that any region $U$ homeomorphic
to a disk is conformally equivalent to the unit disk; see [74]. For a detailed treatment
of compact Riemann surfaces, see [161]. It also follows that all Riemann surfaces (without
restriction of genus) can be obtained as quotients of these three spaces: for the sphere one
does not have to take any quotient, the genus 1 surfaces = elliptic curves can be obtained as
quotients of the complex plane and any genus g > 1 surface can be obtained as quotients of the
unit disk. Since every closed 2-dimensional orientable surface is characterized by its genus $g$,
the uniformization theorem implies that any such surface admits a metric of constant curva-
ture. Teichmüller theory parametrizes the possible metrics, and there are 3g − 3 dimensional
parameters for g ≥ 2, whereas for g = 0 there is one and for g = 1 a moduli space H/SL2 (Z).
In higher dimensions, close to the uniformization theorem comes the Killing-Hopf theorem
telling that every connected complete Riemannian manifold of constant sectional curvature
and dimension n is isometric to the quotient of a sphere Sn , Euclidean space Rn or Hyperbolic
n-space Hn . Constant curvature geometry is either Elliptic, Parabolic=Euclidean or Hyperbolic
geometry. Complex analysis has rich applications in complex dynamics [34, 300, 74] and relates
to much more geometry [297].
hidden Markov model. It applies both to differential equations $\dot{x}(t) = Ax(t) + Bu(t) + Gz(t)$
as well as to discrete dynamical systems $x(t+1) = Ax(t) + Bu(t) + Gz(t)$, where $u(t)$
is external input and $z(t)$ input noise given by independent, identically distributed, usually
Gaussian random variables. Kalman calls this a Wiener problem. One does not see the
state x(t) of the system but some output y(t) = Cx(t) + Du(t). The filter then “filters out”
or “learns” the best estimate x∗ (t) from the observed data y(t). The linear space X is defined
as the vector space spanned by the already observed vectors. The optimal solution is given by
a sophisticated dynamical data fitting.
This is the informal 1-sentence description which can be found already in Kalman’s article.
Kalman then gives explicit formulas which generate from the stochastic difference equation
a concrete deterministic linear system. For a modern exposition, see [286]. This is the
Kalman filter. It is named after Rudolf Kalman who wrote [226] in 1960. Kalman’s paper
is one of the most cited papers in applied mathematics. The ideas were used in the Apollo
and Space Shuttle program. Similar ideas have been introduced in statistics by the Danish
astronomer Thorvald Thiele and the radar theoretician Peter Swerling. There are also nonlinear
versions of the Kalman filter which are used in nonlinear state estimation, for example in navigation
systems and GPS. The nonlinear version uses a multivariate Taylor series expansion to linearize about
a working point. See [138, 286].
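A minimal scalar sketch of the predict-update cycle (the gains and noise levels below are illustrative assumptions, not taken from Kalman's paper): estimating a constant state from observations $y = x + \text{noise}$.

```python
import random

def kalman_step(x_est, P, y, a=1.0, c=1.0, q=1e-4, r=0.25):
    """One predict/update cycle of a scalar Kalman filter."""
    x_pred = a * x_est                        # predict the state
    P_pred = a * P * a + q                    # predict its variance
    K = P_pred * c / (c * P_pred * c + r)     # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)     # update with measurement y
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

random.seed(0)
truth = 1.5
x_est, P = 0.0, 1.0
for _ in range(200):
    y = truth + random.gauss(0, 0.5)          # noisy observation of the state
    x_est, P = kalman_step(x_est, P, y)
print(round(x_est, 2))   # close to the true value 1.5
```

The gain $K$ interpolates between trusting the model prediction ($K \to 0$) and trusting the new measurement ($K \to 1$), which is the "dynamical data fitting" described above.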
Oscar Zariski proved the theorem in 1943. To cite [309], “it was the final result in a foundational
analysis of birational maps between varieties. The ’main Theorem’ asserts in a strong sense
that the normalization (the integral closure) of a variety X is the maximal variety X 0 birational
over X, such that the fibres of the map X 0 → X are finite. A generalization of this fact
became Alexandre Grothendieck’s concept of the ’Stein factorization’ of a map. The result
has been generalized to schemes X, which is called unibranch at a point x if the local ring
at x is unibranch. A generalization is the Zariski connectedness theorem from 1957: if
f : X → Y is a birational projective morphism between Noetherian integral schemes, then the
inverse image of every normal point of Y is connected. Put more colloquially, the fibres of a
birational morphism from a projective variety X to a normal variety Y are connected. It implies
that a birational morphism f : X → Y of algebraic varieties X, Y is an open embedding into a
neighbourhood of a normal point $y$ if $f^{-1}(y)$ is a finite set. Especially, a birational morphism
between normal varieties which is bijective near points is an isomorphism [189, 309].
This is called the Poincaré-Birkhoff theorem or Poincaré's last theorem. It was stated by
Henri Poincaré in 1912 in the context of the three body problem. Poincaré already gave an
index argument showing that the existence of one fixed point implies a second. The existence of the first was
proven by George Birkhoff in 1913, and in 1925 he added the precise argument for the existence
of the second. The twist condition is necessary, as the rotation of the annulus $(r, \theta) \to (r, \theta + 1)$
has no fixed point. Also area preservation is necessary, as $(r, \theta) \to (r(2 - r), \theta + 2r - 1)$ shows.
[45, 63]
132. Geometrization
A closed manifold $M$ is a smooth compact manifold without boundary. A closed manifold
is simply connected if it is connected and the fundamental group is trivial, meaning that
every closed loop in $M$ can be pulled together to a point within $M$: if $r : \mathbb{T} \to M$ is a
parametrization of a closed path in $M$, then there exists a continuous map $R : \mathbb{T} \times [0, 1] \to M$
such that $R(0, t) = r(t)$ and $R(1, t) = r(0)$. We say that $M$ is a 3-sphere if $M$ is homeomorphic
to the 3-dimensional sphere $\{(x_1, x_2, x_3, x_4) \in \mathbb{R}^4 \mid x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1\}$.
Henri Poincaré conjectured this in 1904. It remained the Poincaré conjecture until its
proof by Grigori Perelman in 2006 [305]. In higher dimensions the statement was known as
the generalized Poincaré conjecture, the case n > 4 had been proven by Stephen Smale
in 1961 and the case n = 4 by Michael Freedman in 1982. A d-homotopy sphere is a closed
d-manifold that is homotopic to a d-sphere. (A manifold M is homotopic to a manifold N if
there exists a continuous map f : M → N and a continuous map g : N → M such that the
composition g ◦ f : M → M is homotopic to the identity map on M (meaning that there exists
a continuous map F : M × [0, 1] → M such that F (x, 0) = g(f (x)) and F (x, 1) = x) and the
map f ◦ g : N → N is homotopic to the identity on N .) The Poincaré conjecture itself, the case
d = 3, was proven using a theory built by Richard Hamilton who suggested to use the Ricci
flow to solve the conjecture and more generally the geometrization conjecture of William
Thurston: every closed 3-manifold can be decomposed into prime manifolds which are of 8
types, the so-called Thurston geometries $S^3, E^3, H^3, S^2 \times \mathbb{R}, H^2 \times \mathbb{R}, \widetilde{SL}(2, \mathbb{R})$, Nil, Solv. If
the statement M is a sphere is replaced by M is diffeomorphic to a sphere, one has the
smooth Poincaré conjecture. Perelman’s proof verifies this also in dimension d = 3. The
smooth Poincaré conjecture is false in dimension d ≥ 7 as d-spheres then admit non-standard
smooth structures, so called exotic spheres, constructed first by John Milnor. For $d = 5$ it is
true following a result of Dennis Barden from 1964. It is also true for $d = 6$. For $d = 4$, it is open,
and called “the last man standing among all great problems of classical geometric topology”
[282]. See [306] for details on Perelman’s proof.
The Euler polyhedron formula was first seen in examples by René Descartes [4] and seen
by Euler in 1750 to work for general planar graphs. Euler gave an induction proof in 1752,
but the first complete proof appears to have been given by Legendre in 1794.
The Steinitz theorem was proven by Ernst Steinitz in 1922, even though he obtained the result
already in 1916. In general, a planar graph always defines a finite generalized CW complex in
which the faces are the 2-cells, the edges are the 1-cells and the vertices are the 0-cells. The
embedding in the plane then defines a geometric realization of this combinatorial structure
as a topological 2-sphere. But the realization is not necessarily achievable in the form of a
convex polyhedron. Take a tree graph for example, a connected graph without closed loops.
It is planar but it is not even 2-connected. The number of vertices $v$ and
the number of edges $e$ satisfy $v - e = 1$. After embedding the tree in the plane, we have one
face so that $f = 1$. The Euler polyhedron formula $v - e + f = 2$ is verified but the graph is not
polyhedral. Even in the extreme case where $G$ is a one-point graph, the Euler formula holds:
in that case there is $v = 1$ vertex, $e = 0$ edges and $f = 1$ face (the complement of the point
in the plane) and still $v - e + f = 2$. The 3-connectedness assures that the realization can be
done using convex polyhedra. It is even possible to force the vertices of the polyhedron
to be on the integer lattice. See [176, 438]. In [176], it is stated that the Steinitz theorem is
“the most important and deepest known result for 3-polytopes”.
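The formula $v - e + f = 2$ discussed above is quickly checked for the five Platonic solids and for an embedded tree (with its one unbounded face); a small sketch:

```python
# (v, e, f) for the five Platonic solids
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (v, e, f) in polyhedra.items():
    assert v - e + f == 2, name

# a tree with n vertices embedded in the plane: e = n - 1, one outer face
n = 10
v, e, f = n, n - 1, 1
print(v - e + f)   # 2
```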
These are the Euler-Lagrange equations of an infinite dimensional extremization problem. The
variational problem was proposed by David Hilbert in 1915. Einstein published in the same
year the general theory of relativity. In the vacuum case $T = 0$, solutions $g$ define
Einstein manifolds $(M, g)$. An example of a solution to the vacuum Einstein equations
different from the flat space solution is the Schwarzschild solution, which was also found in
1915 and published in 1916. It is the metric given in spherical coordinates as $-(1 - r/\rho) c^2 dt^2 +
(1 - r/\rho)^{-1} d\rho^2 + \rho^2 d\phi^2 + \rho^2 \sin^2 \phi \, d\theta^2$, where $r$ is the Schwarzschild radius, $\rho$ the distance to the
singularity, and $\theta, \phi$ are the standard Euler angles (longitude and colatitude) of calculus. The
metric solves the Einstein equations for $\rho > r$. The flat metric $-c^2 dt^2 + d\rho^2 + \rho^2 d\theta^2 + \rho^2 \sin^2 \theta \, d\phi^2$
describes the vacuum and the Schwarzschild solution describes the gravitational field near a
massive body. Intuitively, the metric tensor $g$ is determined by $g(v, v)$, the Ricci tensor
by $R(v, v)$, which is 3 times the average sectional curvature over all planes containing $v$, and
the scalar curvature is 6 times the average over all sectional curvatures
at a point. See [104, 85].
The theorem was proven by Philip Hall in 1935. It implies for example that if a deck of
52 cards is partitioned into 13 equal sized piles, one can choose from each pile a card so
that the 13 cards have exactly one card of each rank. The theorem can be deduced from a
result in graph geometry: if $G = (V, E) = (X, \emptyset) + (Y, \emptyset)$ is a bipartite graph, then a matching
in $G$ is a collection of edges which pairwise have no common vertex. For a subset $W$ of $X$, let
$S(W)$ denote the set of all vertices adjacent to some element in $W$. The theorem assures that
there is an $X$-saturating matching (a matching covering $X$) if and only if $|W| \leq |S(W)|$ for
every $W \subset X$. The reason for the name “marriage” is the situation where $X$ is a set of men and
$Y$ a set of women and all men are eager to marry. If $A_i$ is the set of women who could
make a spouse for the $i$'th man, then marrying everybody off is an $X$-saturating matching. The
condition is that any set of $k$ men has a combined list of at least $k$ women who would make
suitable spouses. See [64].
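An $X$-saturating matching can be found with the standard augmenting-path method; the sketch below (a generic algorithm added here, not the proof in [64]) takes `adj[x]` as the list of vertices of $Y$ adjacent to $x$:

```python
def max_matching(adj, nx):
    """Kuhn's augmenting-path algorithm for bipartite matching."""
    match = {}                           # maps y -> its matched x
    def try_augment(x, seen):
        for y in adj[x]:
            if y in seen:
                continue
            seen.add(y)
            # y is free, or its current partner can be re-matched elsewhere
            if y not in match or try_augment(match[y], seen):
                match[y] = x
                return True
        return False
    return sum(try_augment(x, set()) for x in range(nx))

# 4 "men" with their lists of acceptable partners; Hall's condition
# holds here, so an X-saturating matching of size 4 exists
adj = [[0, 1], [1, 2], [2, 3], [0, 3]]
print(max_matching(adj, 4))   # 4
```

If Hall's condition fails, for example when two men accept only the same single woman, the returned matching size drops below $|X|$.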
136. Mandelbulb
The Mandelbrot set M = M2,2 is the set of vectors c ∈ R2 for which T (x) = x2 + c leads
to a bounded orbit starting at 0 = (0, 0), where x2 has the polar coordinates (r2 , 2θ) if x
has the polar coordinates (r, θ). This construction can be done in higher dimensions too: The
Mandelbulb set M3,8 is defined as the set of vectors c ∈ R3 for which T (x) = x8 + c leads to
a bounded orbit starting at 0 = (0, 0, 0), where x8 has the spherical coordinates (ρ8 , 8φ, 8θ)
if $x$ has the spherical coordinates $(\rho, \phi, \theta)$. Like the Mandelbrot set it is a compact set. The
topology of $M_{3,8}$ is unexplored. Also, like in the complex plane, one can look at the dynamics
of a polynomial $p = a_0 + a_1 x + \cdots + a_r x^r$ in $\mathbb{R}^n$. If $(\rho, \phi_1, \dots, \phi_{n-1})$ are spherical coordinates,
then $x \to x^m = (\rho^m, m\phi_1, \dots, m\phi_{n-1})$ is a higher dimensional “power” and allows to look at
the dynamics of $T_{n,p}(x) = p(x)$ and the corresponding Mandelbulb $M_{n,p}$. As with all celebrities,
there is a scandal:
Except of course the just stated theorem. But you decide whether it is true of not. The
Mandelbulb set has been discovered only recently. An attempt to trace some of its history
was done in [244]: already Rudy Rucker had experimented with a variant of M3,2 in 1988.
Jules Ruis wrote me that he had written a Basic program in 1997. The first who wrote down
the formulas used today is Daniel White, mentioned in a 2009 fractal forum. Jules Ruis 3D
printed the first models in 2010. See also [52] for some information on generating the graphics.
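The bounded-orbit test can be sketched in a few lines. The code below follows the power formula described above, with one common reading of the spherical angles; conventions differ between implementations (and the "power" does not extend complex multiplication), so treat this as one possible definition of x → x^m rather than the canonical one:

```python
import math

def triplex_pow(x, y, z, m):
    """One convention for the m-th 'power' of (x,y,z) in spherical
    coordinates: (rho, phi, theta) -> (rho**m, m*phi, m*theta)."""
    rho = math.sqrt(x * x + y * y + z * z)
    if rho == 0:
        return 0.0, 0.0, 0.0
    theta = math.acos(z / rho)   # polar angle
    phi = math.atan2(y, x)       # azimuth
    r = rho ** m
    return (r * math.sin(m * theta) * math.cos(m * phi),
            r * math.sin(m * theta) * math.sin(m * phi),
            r * math.cos(m * theta))

def in_mandelbulb(c, m=8, iters=50, bound=2.0):
    """Bounded-orbit test for T(x) = x^m + c starting at 0."""
    x = (0.0, 0.0, 0.0)
    for _ in range(iters):
        px = triplex_pow(*x, m)
        x = (px[0] + c[0], px[1] + c[1], px[2] + c[2])
        if x[0] ** 2 + x[1] ** 2 + x[2] ** 2 > bound ** 2:
            return False  # orbit escaped
    return True

print(in_mandelbulb((0.0, 0.0, 0.0)))  # True: the origin stays put
print(in_mandelbulb((1.5, 1.5, 1.5)))  # False: the orbit escapes
```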
The theorem was proven in 1932 in the separable case by Stefan Banach and in 1940 in general
by Leonidas Alaoglu. The result essentially follows from Tychonov’s theorem as X ∗ can be seen
as a closed subset of a product space. Banach-Alaoglu therefore relies on the axiom of choice. A
case which often appears in applications is when X = C(K) is the space of continuous functions
on a compact Hausdorff space K. In that case X ∗ is the space of signed measures on K. One
implication is that the set of probability measures on K is compact. Another example are the
Lp spaces (p ∈ [1, ∞)), for which the dual is Lq with 1/p + 1/q = 1 (meaning q = ∞ for p = 1),
showing that for p = 2, the Hilbert space L2 is self-dual. In the work of Bourbaki the
theorem was extended from Banach spaces to locally convex spaces (linear spaces equipped
with a family of semi-norms). Examples are Fréchet spaces (locally convex spaces which are
complete with respect to a translation-invariant metric). See [94].
See [119]. See [270] for counterexamples in d ≤ 4, where the author also writes: “A hypothesis
of algebraic topology given by the signs of the intersection points leads to the existence of an
isotopy”. The
failure of the Whitney trick in smaller dimensions is one reason why some questions in manifold
theory appear hardest in three or four dimensions. There is a variant of the Whitney trick which
works also in dimension 5, where K has dimension 2 and L has dimension 3.
Theorem: The torsion group of a rational elliptic curve is in the Mazur class.
140. Coloring
A graph G = (V, E) with vertex set V and edge set E is called planar if it can be embedded in
the Euclidean plane R2 without any of the edges intersecting. By a theorem of Kuratowski, this
is equivalent to a graph theoretical statement: G contains no homeomorphic image of either
the complete graph K5 or the bipartite utility graph K3,3 . A graph coloring with k
colors is a function f : V → {1, 2, . . . , k} with the property that if (x, y) ∈ E, then f (x) ≠ f (y).
In other words, adjacent vertices must have different colors. The 4-color theorem is:

Theorem: Every planar graph can be colored with 4 colors.
Some graphs need 4 colors, like a wheel graph with an odd number of spokes. There are
planar graphs which need fewer: the 1-point graph K1 needs only one color, trees need only
2 colors, and the graph K3 or any wheel graph with an even number of spokes only needs 3
colors. The theorem has an interesting history: since August Ferdinand Möbius in 1840 spread
a precursor problem given to him by Benjamin Gotthold Weiske, the problem was first known
also as the Möbius-Weiske puzzle [376]. The actual problem was first posed in 1852 by
Francis Guthrie [287], after thinking about it with his brother Frederick, who communicated
it to his teacher Augustus de Morgan, a former teacher of Francis who told William Hamilton
about it. Arthur Cayley in 1878 put it first in print, (but it was still not in the language of
graph theory). Alfred Kempe published a proof in 1879. But a gap was noticed by Percy John
Heawood 11 years later in 1890. There were other unsuccessful attempts like one by Peter
Tait in 1880. After considerable theoretical work by various mathematicians including Charles
Peirce, George Birkhoff, Oswald Veblen, Philip Franklin, Hassler Whitney, Hugo Hadwiger,
Leonard Brooks, William Tutte, Yoshio Shimamoto, Heinrich Heesch, Karl Dürre or Walter
Stromquist, a computer assisted proof of the 4-color theorem was obtained by Ken Appel
and Wolfgang Haken in 1976. In 1997, Neil Robertson, Daniel Sanders, Paul Seymour, and
Robin Thomas wrote a new computer program. Georges Gonthier produced in 2004 a fully
machine-checked proof of the four-color theorem [428]. There is a considerable literature like
[319, 43, 153, 356, 80, 428].
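For small graphs, such coloring claims can be checked by brute force. The sketch below (exhaustive search over all color assignments, feasible only for a handful of vertices) confirms that the wheel with an odd rim needs 4 colors while the even one needs only 3:

```python
from itertools import product

def colorable(edges, n, k):
    """Brute-force test whether the graph on vertices 0..n-1 is k-colorable."""
    for f in product(range(k), repeat=n):
        if all(f[a] != f[b] for a, b in edges):  # proper coloring found
            return True
    return False

def wheel(spokes):
    """Wheel graph: hub vertex 0 joined to every vertex of a cycle 1..spokes."""
    rim = [(i, i % spokes + 1) for i in range(1, spokes + 1)]
    return [(0, i) for i in range(1, spokes + 1)] + rim, spokes + 1

edges5, n5 = wheel(5)   # odd rim: chromatic number 4
edges6, n6 = wheel(6)   # even rim: chromatic number 3
print(colorable(edges5, n5, 3), colorable(edges5, n5, 4))  # False True
print(colorable(edges6, n6, 3))                            # True
```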
Theorem: On a 3-manifold, the Reeb vector field has a closed periodic orbit.
The theorem was proven by Clifford Taubes in 2007 using Seiberg-Witten theory. Mike Hutchings
with Taubes established 2 Reeb orbits under the condition that all Reeb orbits are
non-degenerate in the sense that the linearized flow does not have an eigenvalue 1. Hutchings
with Dan Cristofaro-Gardiner later removed the non-degeneracy condition [209, 103] and
also showed that if the product of the actions A(γ) = ∫γ α of the two orbits is larger than the
volume ∫M α ∧ dα of the contact form, then there are three. To the history: Alan Weinstein
has shown already that if Y is a convex compact hypersurface in R2n , then there is a periodic
orbit. Paul Rabinovitz extended it to star-shaped surfaces. Weinstein conjectured in 1978 that
every compact hypersurface of contact type in a symplectic manifold has a closed character-
istic. Contact geometry as an odd dimensional brother of symplectic geometry has become
its own field. Contact structures are the opposite of integrable hyperplane fields: the Frobe-
nius integrability condition α ∧ dα = 0 defines an integrable hyperplane field forming a
co-dimension 1 foliation of M . Contact geometry is therefore a “totally non-integrable hyper
plane field”. [157]. The higher dimensional case of the Weinstein conjecture is wide open [207].
Also the symplectic question whether every compact and regular energy surface H = c for a
Hamiltonian vector field in R2n has a periodic solution is open. One knows that there are
periodic solutions for almost all energy values in a small interval around c [200].
This had been the upper bound conjecture of Theodore Motzkin from 1957, which was
proven by Peter McMullen in 1970, who reformulated it as hk (G) ≤ C(n − d + k − 1, k) for all
k < d/2, where C(a, b) denotes the binomial coefficient.
n2 + 1. Landau really nailed it: there are only 4 conjectures, but all of them can be stated
in half a dozen words, are completely elementary, and for more than 100 years, nobody has
proven or disproved any of them.
The theorem was proven in 1985 by Michael Gromov. It has been dubbed the principle
of the symplectic camel by Maurice de Gosson, referring to the “eye of the needle” metaphor.
A reformulation of de Gosson's allegory [107], after encoding “camel” = “ball in the phase
space”, “hole” = “cylinder”, “pass” = “symplectically embed into”, “size of the hole” =
“radius of the cylinder” and “size of the camel” = “radius of the ball”, is: “There is no way that
a camel can pass through a hole if the size of the hole is smaller than the size of the camel”.
See [292, 208] for expositions. The non-squeezing theorem motivated also the introduction of
symplectic capacities, quantities which are monotone c(M ) ≤ c(N ) if there is a symplectic
embedding of M into N , which are conformal in the sense that if ω is scaled by λ, then c(M )
is scaled by |λ| and such that c(B(1)) = c(Z(1)) = π. For n = 1, the area is an example of a
symplectic capacity (actually the unique one). The existence of a symplectic capacity obviously
proves the non-squeezing theorem. Already Gromov introduced an example, the Gromov width,
which is the smallest one. More are constructed using the calculus of variations. See [200, 293].
example result due to Lichnerowicz is that if Ricci(Ω) ≥ λ > 0, then the first eigenvalue λ1 of
∆ satisfies λ1 ≥ 2λ. See [30, 424, 25].
The theorem was found in 1639 by Blaise Pascal (as a teenager) in the case of an ellipse. A
limiting case where we have two crossing lines is the Pappus hexagon theorem which goes
back to Pappus of Alexandria who lived around 320 AD. The Pappus hexagon theorem is one
of the first results in projective geometry.
is b − a. In dimension n = 2, the Lebesgue measure of a measurable set is the area of the set. In
particular, a ball of radius r has area πr2 . When constructing the measure one has to specify
a σ-algebra, which is in the Lebesgue case the Borel σ-algebra generated by the open sets in
Rn . One has for every n ≥ 1:

Theorem: There are subsets of Rn which are not Lebesgue measurable.
The result is due to Giuseppe Vitali from 1905. It justifies why one has to go through all the
trouble of building a σ-algebra carefully and why it is not possible to work with the σ-algebra
of all subsets of R (which is called the discrete σ-algebra). The proof of the Vitali theorem
shows connections with the foundations of mathematics: by the axiom of choice there exists
a set V which represents the equivalence classes of T/Q, where T is the circle. For this Vitali
set V, the translates Vr = V + r with r ∈ Q are pairwise disjoint and {r + V, r ∈ Q} is a
partition of T. By translation invariance of the Lebesgue measure, all sets Vr have the same
measure. As countably many disjoint sets of equal measure add up to a set of Lebesgue measure 1,
each would have to have measure zero; but then the union would also have measure zero,
contradicting σ-additivity. Now lift V to R and then build V × Rn−1 .
More spectacular are decompositions of the unit ball into 5 disjoint sets which are equivalent
under Euclidean transformations and which can be reassembled to get two disjoint unit balls.
This is the Banach-Tarski construction from 1924.
It is named after John Wilson, who was a student of Edward Waring. It seems that Joseph-Louis
Lagrange gave the first proof in 1771. It is not a practical way to determine primality [384]:
from a computational point of view, it is probably one of the world’s least efficient primality
tests since computing (n − 1)! takes so many steps. Also named after him are Wilson primes,
primes for which not only p but p2 divides (p − 1)! + 1. The smallest one is 5. It is not known
whether there are infinitely many.
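As a sanity check, and as an illustration of the inefficiency, Wilson's criterion can be coded directly; reducing the running product modulo n at every step keeps the numbers small, but one still needs n − 2 multiplications:

```python
import math

def wilson_is_prime(n):
    """n > 1 is prime iff (n-1)! is congruent to -1 mod n; correct but slow."""
    if n < 2:
        return False
    f = 1
    for k in range(2, n):
        f = (f * k) % n  # reduce modulo n at every step
    return (f + 1) % n == 0

print([p for p in range(2, 30) if wilson_is_prime(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

# 5 is a Wilson prime: 5^2 = 25 divides 4! + 1 = 25
print((math.factorial(4) + 1) % 25)  # 0
```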
The statement had been conjectured by Nikolai Luzin in 1915 and was known as the Luzin
conjecture. The theorem was proven by Lennard Carleson in 1966. An extension to Lp with
p ∈ (1, ∞] was proven by Richard Hunt in 1968. The proof of the Carleson theorem is difficult.
It is mentioned in harmonic analysis texts like [236] and surveys [238], which say about the
Carleson-Hunt theorem that it is one of the deepest and least understood parts of the theory.
The theorem was proven by Bernard Bolzano in 1817 for functions from [a, b] to R. The proof
follows from the definitions: as P = (0, ∞) is open, also f −1 (P ) is open. As N = (−∞, 0)
is open, also f −1 (N ) is open. If there is no root, then X = N ∪ P is a disjoint union of
two open sets and so disconnected contradicting the assumption of X being connected. A
consequence is the wobbly table theorem: a square table with parallel legs of equal
length, standing on a “floor” given by the graph z = g(x, y) of a continuous function g, can be
rotated and possibly translated in the z direction so that all 4 legs touch the floor. The proof
of this application is seen as a consequence of the intermediate value theorem applied to the
height function f (φ) of the fourth leg if the three other legs are on the floor. A consequence is
also Rolle’s theorem, assuring that a continuously differentiable function f : [a, b] → R with
f (a) = f (b) has a point x ∈ (a, b) with f ′ (x) = 0. Tilting Rolle gives the mean value theorem,
assuring that for a continuously differentiable function f : [a, b] → R there exists x ∈ (a, b)
with f ′ (x) = (f (b) − f (a))/(b − a).
The general theorem shows that it is the connectedness and not completeness which is the
important assumption.
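The intermediate value theorem has constructive content: repeatedly halving an interval with a sign change locates a root. A minimal bisection sketch:

```python
def bisect_root(f, a, b, tol=1e-12):
    """Root of a continuous f on [a, b] with f(a), f(b) of opposite sign:
    the constructive content of the intermediate value theorem."""
    fa = f(a)
    assert fa * f(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change persists in [a, m]
            b = m
        else:                # sign change is in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

r = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
print(abs(r - 2 ** 0.5) < 1e-9)  # True
```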
151. Perron-Frobenius
An n × n matrix A is non-negative if Aij ≥ 0 for all i, j and positive if Aij > 0 for all i, j.
The Perron-Frobenius theorem is:
Theorem: A positive matrix has a unique largest eigenvalue.
It has been proven by Oskar Perron in 1907 and by Georg Frobenius in 1912. When the
map x → Ax is viewed on projective space, it is in suitable coordinates a contraction and the
Banach fixed point theorem applies. The Brouwer fixed point theorem only gives existence, not
uniqueness, but it applies also for non-negative matrices. This has applications
in graph theory, Markov chains or Google page rank. The Google matrix is defined as G =
dA + (1 − d)E, where d is a damping factor and A is a Markov matrix defined by the network
and E is the matrix Eij = 1. Sergey Brin and Larry Page write “the damping factor d is the
probability at each page the random surfer will get bored and request another random page”.
The page rank equation is Gx = x. In other words, the Google Page rank vector (the one
billion dollar vector), is a Perron-Frobenius eigenvector. It assigns page rank values to the
individual nodes of the network. See [274]. For the linear algebra of non-negative matrices, see
[301].
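The power iteration behind page rank can be sketched directly. In this sketch the teleportation matrix E is normalized by 1/n so that G stays a Markov matrix (a common convention); the 3-page network is made up for illustration:

```python
def pagerank(A, d=0.85, iters=100):
    """Power iteration for the Google matrix G = d*A + (1-d)*E/n, with A a
    column-stochastic link matrix; Perron-Frobenius guarantees a unique
    dominant eigenvector, so the iteration converges to the page rank vector."""
    n = len(A)
    x = [1.0 / n] * n  # start with the uniform distribution
    for _ in range(iters):
        x = [d * sum(A[i][j] * x[j] for j in range(n)) + (1 - d) / n
             for i in range(n)]
    return x

# made-up 3-page web: pages 0 and 1 link to each other, page 2 links to 0
A = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
ranks = pagerank(A)
print(max(range(3), key=lambda i: ranks[i]))  # 0: page 0 ranks highest
```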
This result is due to Paul Cohen from 1963. Cantor had for a long time tried to prove that
the continuum hypothesis holds. Cohen’s theorem shows that any such effort had been in vain
and why Cantor was doomed not to succeed. The problem had been the first on Hilbert’s list
of problems of 1900 [364].
153. Homotopy-Homology
Given a path connected pointed topological space X with base b, the n’th homotopy group
πn (X) is the set of equivalence classes of base preserving maps from the pointed sphere S n to
X. It can be written as the set of homotopy classes of maps from the n-cube [0, 1]n to X
such that the boundary of [0, 1]n is mapped to b. It becomes a group by defining addition as
(f +g)(t1 , . . . , tn ) = f (2t1 , t2 , . . . tn ) for 0 ≤ t1 ≤ 1/2 and (f +g)(t1 , . . . , tn ) = g(2t1 −1, t2 , . . . , tn )
for 1/2 ≤ t1 ≤ 1. In the case n = 1, this is “joining the trip”: travel first along the first curve
with twice the speed, then take the second curve. The groups πn do not depend on the base
point. As X is assumed to be connected, π0 (X) is the trivial group. The group π1 (X) is
the fundamental group. It can be non-abelian. For n ≥ 2, the groups πn (X) are always
abelian: f + g = g + f . The n’th homology group Hn (X) of a topological space X with
integer coefficients is obtained from the chain complex of the free abelian groups generated by
continuous maps from n-dimensional simplices to X. The Hurewicz theorem is:

Theorem: If n ≥ 2 and X is (n − 1)-connected, then πn (X) is isomorphic to Hn (X).
Higher homotopy groups were discovered by Witold Hurewicz in 1935-1936. The Hurewicz
theorem is due to Hurewicz from 1950 [206]. In the case n = 1 the homomorphism can be easily
described: if γ : [0, 1] → X is a path, then since [0, 1] is a 1-simplex, the path is a singular
1-simplex in X. As the boundary of γ is empty, this singular 1-simplex is a cycle. This allows
one to see it as an element in H1 (X). If two paths are homotopic, then their corresponding singular
simplices are equivalent in H1 (X). There is an elegant proof using Hodge theory if X = M is a
compact manifold: the image C of a map πp (M ) can be interpreted as a Schwartz distribution
on M . Let L = (d + d∗ )2 be the Hodge Laplacian and let the heat flow e−tL act on C. For
t > 0, the image e−tL C is now smooth and defines a differential form in Λp (M ). As all the non-
zero eigenspaces get damped exponentially, the limit of the heat flow is a harmonic form,
an eigenvector to the eigenvalue 0. But Hodge theory identifies ker(L|Λp ) with H p (M ) and
so with Hp (M ) by Poincaré duality. The Hurewicz homomorphism is then even constructive.
“Just heat up the curve to get the corresponding cohomology element, the commutator group
elements get melted away by the heat.” A space X is called n-connected if πi (X) = 0 for all
i ≤ n. So, 0-connected means path connected and 1-connected is simply connected. For
n ≥ 2, one has πn (X) isomorphic to Hn (X) if X is (n − 1)-connected. In the case n = 1, this
can not hold in general, as π1 (X) may be non-commutative while H1 (X) is abelian; instead,
H1 (X) is isomorphic to the abelianization of G = π1 (X), which is the group obtained by
factoring out the commutator subgroup [G, G], a normal subgroup of G generated by all the
commutators g −1 h−1 gh of group elements g, h of G. See [190].
Theorem: A = I + B/2 − 1.
The result was found in 1899 by Georg Pick [329]. For a triangle with no interior points,
one has 0 + 3/2 − 1 = 1/2; for a rectangle parallel to the coordinate axes with I = n · m
interior points, B = 2n + 2m + 4 boundary points and area A = (n + 1)(m + 1), also
I + B/2 − 1 = A. The theorem has become a popular school project assignment in early
geometry courses as there are many ways to prove it. An example is to cut away a triangle and
use induction on the area, then verify that if two polygons are joined along a line segment, the
functional I + B/2 − 1 is additive. There are other explicit formulas for the area, like Green’s
formula A = (1/2) Σi=0,...,n−1 (xi yi+1 − xi+1 yi ), which does not assume the vertices Pi = (xi , yi )
to be lattice points.
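Both formulas are easy to check on lattice polygons. The sketch below combines the shoelace/Green area formula (with its factor 1/2) and the standard gcd count of boundary lattice points, then recovers the interior count I from Pick's theorem:

```python
from math import gcd

def shoelace_area(pts):
    """Green's formula A = (1/2) |sum_i (x_i*y_{i+1} - x_{i+1}*y_i)|."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def boundary_points(pts):
    """Lattice points on the boundary: gcd(|dx|, |dy|) per edge."""
    n = len(pts)
    return sum(gcd(abs(pts[(i + 1) % n][0] - pts[i][0]),
                   abs(pts[(i + 1) % n][1] - pts[i][1])) for i in range(n))

# 3 x 2 axis-parallel rectangle: A = 6, B = 10, so Pick gives I = A - B/2 + 1
pts = [(0, 0), (3, 0), (3, 2), (0, 2)]
A, B = shoelace_area(pts), boundary_points(pts)
print(A, B, A - B / 2 + 1)  # 6.0 10 2.0 -- indeed (1,1) and (2,1) are interior
```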
Mark Kac asked in 1966 “Can one hear the shape of a drum?” [225]. Carolyn Gordon, David
Webb and Scott Wolpert answered negatively [166]. In the convex case, the question is still
open.
of L assures that r(t) stays in the plane initially spanned by r(0) and r ′ (0) and that the area
of the parallelogram spanned by r(t) and r ′ (t) is constant. To see what the natural potential
in Rn is, one has to go beyond Newton and pass to Gauss, who wrote the gravitational law in
the form div(F ) = 4πρ, where ρ is the mass density. It expresses that mass is the source for the
force field F . To get the force field in a central symmetric mass distribution, one can use the
divergence theorem in Rn and relate the integral of 4πρ over a ball of radius r with the flux
of F through the sphere S(r) of radius r. The former is 4πM , where M is the total mass in
the ball, the latter is −|S(r)|F (r), where |S(r)| is the surface area of the sphere and the negative
sign is there because for an attractive force, F (r) points inward. So, in three dimensions, Gauss
recovers the Newton gravitational law F (r) = −4πGM/|S(r)| = −GM/|r|2 . There is a natural
central force Kepler problem in any dimensions: in Rn , we have F (r) = −Cn r/|r|n where Cn
is a constant. For n = 1, there is a constant force pulling the particle towards the center, for
n = 2, one has a 1/|r| force which corresponds to a logarithmic potential, for n = 3, it is the
Newtonian inverse square 1/r2 force, in n = 4, it is a 1/r3 force. For n = 0, one formally gets
the harmonic oscillator, which is Hooke’s law. Which potentials lead to periodic motion?
The answer is surprising and was given by Bertrand: only the harmonic oscillator potential and
the Newtonian potential in R3 work. Let us call a central force potential all periodic if every
bounded (position and velocity) solution r(t) of the differential equations is periodic. Already
for the Kepler problem, there are not only motions on ellipses but also scattering solutions
moving on parabolas or hyperbolas, or then suicide motions with r ′ (0) = 0, where the particle
dives into the singularity.
Theorem: Only the force laws for n = 0 (Hooke) and n = 3 (Newton) are all periodic.
This theorem of Joseph Bertrand from 1873 tells us that three dimensional space is special:
in any other dimension, calendars would only be almost periodic. We could live with that, but
there are more compelling reasons why n = 3 is dynamically better: in other dimensions, only
very special orbits stay bounded. A small perturbation leads to the planet colliding with the
sun or escaping to infinity, both not very pleasant for possible inhabitants. Gauss’s analysis also
shows that in any dimension n, the motion of a particle in a constant mass density is always
a harmonic oscillator motion, independent of the dimension: the divergence theorem
gives 4πρ|B(r)| = −|S(r)|F (r), where |B(r)| is the volume of the solid ball of radius r. This
gives the Hooke law force F (x) = −4πρx/n, where n is the dimension.
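The dichotomy can be probed numerically. The sketch below integrates the force family F(r) = −r/|r|^n (constants set to 1) with a leapfrog scheme; the eccentric initial condition is chosen only for illustration. For n = 3 the orbit should close after one Kepler period T = 2πa^{3/2}:

```python
import math

def central_orbit(n, r0, v0, dt, steps):
    """Leapfrog (kick-drift-kick) integration of r'' = -r/|r|^n,
    the force family F(r) = -C_n r/|r|^n with C_n = 1."""
    (x, y), (vx, vy) = r0, v0

    def acc(x, y):
        d = math.hypot(x, y)
        return -x / d ** n, -y / d ** n

    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y

# eccentric Kepler orbit (n = 3), GM = 1: energy E = v^2/2 - 1/r = -0.68,
# semi-major axis a = 1/(2*0.68); the orbit closes after T = 2*pi*a**1.5
a = 1 / (2 * 0.68)
T = 2 * math.pi * a ** 1.5
x, y = central_orbit(3, (1.0, 0.0), (0.0, 0.8), T / 40000, 40000)
print(math.hypot(x - 1.0, y) < 0.01)  # True: back near the start after one period
```

Repeating the experiment with n = 4 (a 1/r³ force) shows a precessing rosette that does not return to its starting point after the analogous radial period.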
of all the basis sets of C k Whitney topologies. A basis for the latter is the set of all functions for
which f (j) (x, y) ∈ Uj for all 0 ≤ j ≤ k, where U0 , . . . , Uk are open intervals. With the Whitney
C ∞ topology, X is a Baire space, so that residual sets (countable intersections of open dense
sets) are dense. The next theorem works for n = 2, r ≤ 6 and for n ≥ 3 if r ≤ 5 [299].
Theorem: For a residual set in X, Mf is an r-dimensional manifold.
The theorem is due to René Thom, who initiated catastrophe theory in a 1966 article and
wrote [404], building on previous work by Hassler Whitney. More work and proofs were done
by various mathematicians like John Mather or Bernard Malgrange. There is more to it: the
restriction Xf of the projection of the singularity set Mf onto the parameter space Rr can be
classified. Thom proved that for r = 4, there are exactly seven elementary catastrophes:
“fold”, “cusp”, “swallowtail”, “butterfly”, “hyperbolic umbilic”, “elliptic umbilic” and
“parabolic umbilic”. For r = 5, the number of catastrophe types is 11. The subject is part of
the singularity theory of differentiable maps, a theory started by Hassler Whitney in 1955.
The theory of bifurcations was developed by Henri Poincaré and Alexander Andronov. See
also [299, 335, 410]. It is also widely studied in the context of dynamical systems [303].
the Sherrington-Kirkpatrick model from 1975, where the lattice is replaced by a complete
graph and the Jij define a random matrix. Another possibility is to change the spin to Zn
or to the symmetric group (Potts) or to some other Lie group (lattice gauge fields) and then
use a character to get a numerical value. Or one replaces the zero dimensional sphere with a
higher dimensional sphere like S 2 and takes σi · σj (Heisenberg model). See [368].
Theorem: r(AB)r(BC)r(CA) = 1
The theorem is named after Giovanni Ceva, who wrote it down in 1678. Al-Mu’taman ibn Hud
from Zaragoza proved it already in the 11th century [202]. See [353].
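With r interpreted as the ratio in which a cevian's foot divides the corresponding side (an assumption about the notation in the theorem above), the identity can be verified numerically for the three cevians through an arbitrary interior point:

```python
import math

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def cevian_foot(V, P, S1, S2):
    """Intersection of the line through V and P with the line S1-S2."""
    d = (P[0] - V[0], P[1] - V[1])
    e = (S2[0] - S1[0], S2[1] - S1[1])
    b = (S1[0] - V[0], S1[1] - V[1])
    s = -cross(d, b) / cross(d, e)   # solve V + t*d = S1 + s*e for s
    return (S1[0] + s * e[0], S1[1] + s * e[1])

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
P = (0.2, 0.3)                 # any interior point: the cevians pass through P
D = cevian_foot(A, P, B, C)    # foot on side BC
E = cevian_foot(B, P, C, A)    # foot on side CA
F = cevian_foot(C, P, A, B)    # foot on side AB
product = (dist(A, F) / dist(F, B)) * (dist(B, D) / dist(D, C)) \
        * (dist(C, E) / dist(E, A))
print(abs(product - 1.0) < 1e-12)  # True: the three ratios multiply to 1
```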
This is the theorem of Fary-Milnor, proven by Fary in 1949 and Milnor in 1950. The theorem
follows also from the existence of quadrisecants, which are lines intersecting the knot in 4
points [110]. The existence of quadrisecants was proven by Erika Pannwitz in 1933 for smooth
knots and generalized in 1997 by Greg Kuperberg to tame knots, knots which are equivalent
to polygonal knots.
The theorem is due to F. Riesz. The name “rising sun lemma” appeared according to [39] first
in [21]. The picture is to draw the graph of the function f . If light comes from a distant source
parallel to the x-axis, then the intervals (an , bn ) delimit the hollows that remain in the shade
at the moment of sunrise. The lemma is used to prove that every monotone non-decreasing
function is almost everywhere differentiable.
Up to the order of the Jordan blocks, the Jordan normal form is unique. If each Jordan block
is a 1 × 1 matrix, then the matrix is called diagonalizable. The spectral theorem assures
that a normal matrix AA∗ = A∗ A is diagonalizable. Not every matrix is diagonalizable, as the
shear matrix A = [ 1 1 ; 0 1 ], a 2 × 2 Jordan block, shows. The theorem was first stated
by Camille Jordan in 1870. For history, see [55]. The Jordan-Chevalley generalization states
that over an arbitrary perfect field, a matrix is similar to B + N , where B is semi-simple and
N is nilpotent and BN = N B. (See [205] page 17). A matrix B is called semi-simple if every
B-invariant linear subspace V has a complementary B-invariant subspace. For algebraically
closed fields, semi-simple is equivalent to be conjugated to a diagonal matrix. To the condition
on the field: a field k is called perfect if every irreducible polynomial over k has distinct roots.
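The shear example can be checked directly: with N = A − I one has N ≠ 0 but N² = 0, so A = I + N is exactly a Jordan-Chevalley decomposition with semi-simple part I and nilpotent part N (they trivially commute):

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]   # shear: the 2x2 Jordan block to the eigenvalue 1
N = [[0, 1], [0, 0]]   # N = A - I, the nilpotent part
print(matmul(N, N))    # [[0, 0], [0, 0]]: N^2 = 0, yet N != 0
# A diagonalizable matrix whose only eigenvalue is 1 would equal I,
# so A = I + N cannot be diagonalizable.
```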
Theorem: The area of U plus the area of V is the area of the triangle.
The proof follows directly from Pythagoras, just by relating the areas of the half discs and
the triangle. The result is remarkable as it was historically the first attempt at the quadrature
of the circle. The lunes are bounded by circular arcs, while the triangle is bounded by line
segments. The theorem does the quadrature of the lunes. Hippocrates of Chios lived from
about 470 to 410 BC. For history see [29], page 37.
hull intersect. Indeed, the 4 points define a quadrilateral and the partition {{x1 , x3 }, {x2 , x4 }}
defines the two diagonals of the quadrilateral.
The theorem has been proven by Helge Tverberg in 1966. See [407, 210].
The theorem was proven in 1935 by Heinz Hopf [203] with a marvelous homotopy proof: define
f (s, t) as the argument of the line through r(s) and r(t), continuously extended to s = t as the
argument of the tangent line. The direct line from (0, 0) to (1, 1) in the parameter st-plane
gives a total angle change of 2πn, where n is an integer. Now deform the path from (0, 0) to
(1, 1) so that it first goes straight from (0, 0) to (0, 1), then straight from (0, 1) to (1, 1). Each
of the two legs produces an angle change of π, showing that n = 1. The theorem can be generalized to a
Gauss-Bonnet theorem for planar regions G. The total curvature of the boundary is 2π times
the Euler characteristic of G. For a discrete version, see [249].
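A discrete version of the Umlaufsatz is easy to verify: for a simple polygon traversed once, the sum of the signed exterior angles (the total turning of the "tangent") is ±2π. A small sketch:

```python
import math

def total_turning(pts):
    """Sum of signed exterior angles of a simple polygon, traversed once;
    the discrete Umlaufsatz predicts +2*pi (ccw) or -2*pi (cw)."""
    n = len(pts)
    total = 0.0
    for i in range(n):
        ax, ay = pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1]
        bx, by = pts[(i + 1) % n][0] - pts[i][0], pts[(i + 1) % n][1] - pts[i][1]
        # signed angle between consecutive edge vectors
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(abs(total_turning(square) - 2 * math.pi) < 1e-12)        # True (ccw)
print(abs(total_turning(square[::-1]) + 2 * math.pi) < 1e-12)  # True (cw)
```

The same holds for non-convex simple polygons, where some exterior angles are negative.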
Here dj = deg(pj ) and r is the number of conjugacy classes of G. For an Abelian group, there
are n conjugacy classes. The theorem had been conjectured in 1896 by Richard Dedekind.
Frobenius proved it [192, 91].
cover and M = {(1, 2), (3, 4), (5, 6), (7, 8)} is a minimal matching.
The origin of the theorem is attributed to Dénes Kőnig, who proved it in 1931 and wrote a
precursor paper in 1916, where he proved that a regular (constant vertex degree) bipartite graph
has a perfect matching (a matching which covers all vertices). For a proof, see for example [113]
(Chapter 2).
Epilogue: Value
Which mathematical theorems are the most important ones? This is a complicated variational
problem because it is a general and fundamental problem in economics to define “value”. The
difficulty with the concept is that “value” is often a matter of taste or fashion or social influence
and so an equilibrium of a complex social system. Value can change rapidly, sometimes
triggered by small things. The reason is that the notion of value like in game theory depends
on how it is valued by others. A fundamental principle of catastrophe theory is that maxima
of a functional can depend discontinuously on parameters. As value is often a social concept,
this can be especially brutal or lead to unexpected viral effects. First of all, value is often
linked to historical or moral considerations. We tend more and more to link artistic and
scientific value also to the person. In mathematics, the works of Oswald Teichmüller or Ludwig
Bieberbach, for example, are linked to their political views and so devalued despite their brilliance
[360]. This happens also outside of science, in art or industry. The value of a company now
also depends on what “investors think” or what analysts see for potential gain in the future.
Social media try to measure value using “likes” or “number of followers”. A majority vote is a
measure, but how well can it predict what will be valuable in the future? Majority votes
taken over longer times would give a more reliable value functional. If one could persuade
every mathematician to give a list of the two dozen most fundamental theorems, do that
every couple of years, and so reflect the “wisdom of an educated crowd”, one could probably get a
pretty good value functional. Ranking theorems and results in mathematics is a mathematical
optimization problem by itself. One could use techniques known in the “search industry”. One
idea is to look at the finite graph in which the theorems are the nodes and where two theorems
are related with each other if one can be deduced from the other (or alternatively connect them
if one influences the other strongly). One can then run a page rank algorithm [274] to see
which ones are important. Running this in each of the major mathematical fields could give
an algorithm to determine which theorems deserve to be called “fundamental”.
Opinions
Having taught a course called “Math from a historical perspective” at the Harvard extension
school motivated me to write up the present document. This course, Math E 320, is a rather
pedestrian but pretty comprehensive stroll through all of mathematics. At the end of the
course, students were asked as part of a project to write some short stories about theorems
or mathematical fields or mathematical persons. The present document benefits already from
these writings and also serves a bit as preparation for the course. It is interesting to see what
others consider important. Sometimes, seeing what others feel can change your own view. I
was definitely influenced by students, teachers, colleagues and literature, as well as, of course, by
the limitations of my own understanding. And my point of view has already changed while
writing the actual theorems down. Value is more like an equilibrium of many different factors.
In mathematics, values have changed rapidly over time. And mathematics can describe the
rate of change of value [335]. Major changes in appreciation for mathematical topics came
definitely at the time of Euclid, then at the time when calculus was developed by Newton and
Leibniz. Also the development of more abstract algebraic constructs or topological notions and
the start of set theory changed things considerably. In more modern times, the categorization
of mathematics and the development of rather general and abstract new objects (for example
with new approaches taken by Grothendieck) changed the landscape. In most of the new
developments, I remain the puzzled tourist wondering how large the world of mathematics
is. It has become so large that continents have emerged: we have applied mathematics,
mathematical physics, statistics, computer science and economics, which have drifted
away to become independent subjects and departments. Classical mathematicians like Euler would
now be called applied mathematicians, de Moivre would maybe be stamped a statistician,
Newton a mathematical physicist, Turing a computer scientist and von Neumann an
economist.
Search
A couple of months ago, when looking for “George Green”, the first hit in a search engine
would be a 22 year old soccer player. (This was not a search bubble thing [325] as it was
tested with cleared browser cache and via anonymous VPN from other locations, where the
search engine cannot determine the identity of the user). Now, I love soccer, played it myself
a lot as a kid and also like to watch it on screen, but is the English soccer player George
William Athelston Green really more “relevant” than the British mathematician George Green,
who made fundamental breakthrough discoveries which are used in mathematics and physics?
Shortly after I had tweeted about this strange ranking on December 27, 2017, the page rank
algorithm must have been adapted, because already on January 4th, 2018, the mathematician
George Green appeared first (again not a search bubble phenomenon, where the search engine
adapts to the users taste and adjusts the search to their preferences). It is not impossible that
my tweet has reached, meandering through social media, some search engine engineer who was
able to rectify the injustice done to the miller and mathematician George Green. The theory
of networks shows that “small world phenomena” [418, 32, 417] can explain why such influences
or synchronizations are not that impossible [394]. But coincidences can also be deceiving. Humans
just tend to observe coincidences even though there might be a perfectly mathematical explanation,
prototyped by the birthday paradox. See [291]. But one must also understand that search
needs to serve the majority. For a general public, a particular subject like mathematics is
not that important. When searching for “Hardy” for example, it is not Godfrey Hardy who
is mentioned first as a person belonging to that keyword but Tom Hardy, an English actor.
This obviously serves most of the searches better. As this might infuriate particular groups
(here mathematicians), search engines have started to adapt the searches to the user, giving the
search some context which is an important ingredient in artificial intelligence. The problem
is the search bubble phenomenon which runs hard against objectivity. Textbooks of the future
might adapt their language, difficulty and even their citations or the historical credit on who
reads it. Novels might adapt the language to the age of the user, the country where the user
lives, and the ending might depend on personal preferences or even the medical history of the
user (the medical history of course being accessible to the book seller via “big data” analysis
of user behavior and tracking, which is not SciFi; it is already happening). Many computer
games are already customizable as such. A person flagged as sensitive or a young child might
82
OLIVER KNILL
be served a happy ending rather than ending the novel in an ambivalent limbo or even a disaster.
The book [325] explains the difficulty. The issues have amplified even more since that book came out and
have even influenced elections.
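The birthday paradox mentioned above is easily quantified. A small Python sketch (the function name is mine, and the usual idealization of 365 equally likely birthdays is assumed):

```python
def birthday_collision(n, days=365):
    # probability that among n people at least two share a birthday:
    # 1 minus the probability that all n birthdays are distinct
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (days - k) / days
    return 1 - p_distinct

print(birthday_collision(23))  # already more likely than not (about 0.507)
print(birthday_collision(50))  # about 0.97
```

With only 23 people a shared birthday is already more likely than not, which is the kind of coincidence that feels like synchronicity but has a perfectly good mathematical explanation.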
Beauty
In order to determine what is a “fundamental theorem”, also aesthetic values matter. But the
question of “what is beautiful” is even trickier. Many have tried to define and investigate the
mechanisms of beauty: [185, 422, 423, 348, 373, 7, 304]. In the context of mathematical for-
mulas, the question has been investigated within the field of neuro-aesthetics. Psychologists,
in collaboration with mathematicians have measured the brain activity of 16 mathematicians
with the goal to determine what they consider beautiful [355]. The Euler identity eiπ + 1 = 0
was rated high with a value 0.8667 while a formula for 1/π due to Ramanujan was rated low
with an average rating of -9.7333. Obviously, what mattered was not only the complexity of
the formula but also how much insight the participants got when looking at the equation. The
authors of that paper cite Plato who wrote once ”nothing without understanding would ever
be more beauteous than with understanding”. Obviously, the formula of Ramanujan is much
deeper but it requires some background knowledge for being appreciated. But the authors
acknowledge in the discussion that correlating “beauty and understanding” can be tricky.
Rota [348] notes that the appreciation of mathematical beauty in some statement requires the
ability to understand it. And [304] notices that “even professional mathematicians specialized
in a certain field might find results or proofs in other fields obscure” but that this is not much
different from say music, where “knowledge about technical details such as the differences between
things like cadences, progressions or chords changes the way we appreciate music” and
that “the symmetry of a fugue or a sonata are simply invisible without a certain technical
knowledge”. As history has shown, there were also always “artistic connections” [155, 66] as
well as “religious influences” [281, 374]. The book [155] cites Einstein who defines “mathematics
as the poetry of logical ideas”. It also provides many examples and illustrations and quotations.
And there are various opinions like Rota who argues that beauty is a rather objective property
which depends on historic-social contexts. And then there is taste: what is more appealing, the
element of surprise like the Birthday paradox or Petersburg paradox in probability theory, the
Banach-Tarski paradox in measure theory which obviously does not trigger any enlightenment
nor understanding when one hears it for the first time: one can disassemble a sphere into 5
pieces, rotate and translate these pieces in space to build up two spheres. Or the surprising fact
that the infinite sum 1 + 2 + 3 + 4 + 5 + . . . is naturally equal to −1/12 as it is ζ(−1) (which
is defined by analytic continuation). The role of aesthetics in mathematics is especially important
in education, where mathematical models [148], mathematical visualization [31], artistic
enrichment [149], surfaces [266], or 3D printing [361, 256] can help to make mathematics more
approachable.
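The Euler identity which was rated highest in that study can at least be verified numerically. A minimal Python sketch (up to floating point rounding):

```python
import cmath
import math

# Euler identity e^{iπ} + 1 = 0
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-15

# the underlying Euler formula e^{iθ} = cos(θ) + i sin(θ)
for theta in [0.0, 0.5, 1.0, 2.5]:
    z = cmath.exp(1j * theta)
    assert abs(z - complex(math.cos(theta), math.sin(theta))) < 1e-12
print("Euler identity verified numerically")
```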
83
FUNDAMENTAL THEOREMS
the infamous “Parade” column of 1991 by Marilyn vos Savant. Especially after a cameo in the
movie “21”, the theorem has become part of mathematical kitsch. I myself love mathematical
kitsch. A topic that gained that status must have been nice and innovative to obtain that label.
Kitsch only becomes tiresome however if it is not presented in a new and original form. The book
[326] for example, in the context of complex dynamics, remains a masterpiece still today, even
though the pictures have become only too familiar. Rendering the Mandelbrot set today however
does not rock the boat any more. In that context, it appears strange that mathematicians
do not jump on the “Mandelbulb set” M, which is one of the most beautiful mathematical
objects there is; the reason is maybe just that it is a “YouTube star” and so not yet worthy
of any consideration. (More likely is that the object is just too difficult to study, as we lack
the mathematical analytic tools which would be needed, for example, just to answer the basic
question whether M is connected.) A second example is catastrophe theory [335, 410], a beautiful
part of singularity theory which started with Hassler Whitney and was developed by René
Thom [404]. It was hyped so much that it suffered a fall from which it has not yet fully
recovered. And this despite the fact that Thom himself had already pointed out the limits as well as the
controversies of the theory [53]. It had to pay a price for its fame and appears to be
forgotten. Chaos theory from the 1960s, which started to peak with Edward Lorenz, Mandelbrot
and strange attractors, became a cliché at the latest after that infamous scene featuring
the character Ian Malcolm in the 1993 movie Jurassic Park. It was already laughed at within
the same movie franchise, when in the third Jurassic Park installment of 2001, the kid Erik
Kirby scoffs at Malcolm’s “preachiness” and quotes his statement “everything is chaos” in a
condescending way. In art, architecture, music, fashion or design also, if something has become
too popular, it is despised by the “connaisseurs”. Hardly anybody would consider a “lava
lamp” (invented in 1963) an object of taste nowadays, even though the fluid dynamics and motion are
objectively rich and interesting. The piano piece “Für Elise” by Ludwig van Beethoven became
so popular that it can not even be played any more as background music in a supermarket.
There is something which prevents a “music critic” from admitting that the piece is great. Such
examples suggest that it might be better for an achievement (or theorem in mathematics) not
to enter pop-culture as this shows lack of “deepness” and is despised by the elite. The principle
of having fame torn down to disgrace is common also outside of mathematics. Famous actors,
entrepreneurs or politicians are not only admired but sometimes also intensely hated, or torn
to pieces and certainly can not live normal lives any more. The phenomenon of accumulated
critique got amplified with mob type phenomena in social media. There must be something
fulfilling to trash achievements. Film critics are often harsh and judge negatively because this
elevates their own status as they appear to have a “high standard”. Similarly, moral judgement
is often expressed just to elevate the status of the judge, even though experience has shown that often
judges are offenders themselves. Maybe it is also human Schadenfreude, or greed which makes
so many to voice critique. History shows however that social value systems do not matter much
in the long term. A good and rich theory will show its value if it is appreciated also in hundreds
of years, where fashion and social influence has less impact. The theorem of Pythagoras will
be important independent of fame and even if it has become a cliché, it is too important to be
labeled as such. It has not only earned the status of kitsch, it is also a prototype as well as a
useful tool.
Media
There is no question that the Pythagoras theorem, the Euler polyhedron formula χ =
v − e + f , the Euler identity e^{iπ} + 1 = 0, or the Basel problem formula 1 + 1/4 + 1/9 +
1/16 + · · · = π²/6 will always rank highly in any list of beautiful formulas. Most mathematicians
agree that they are elegant and beautiful. These results will also in the future keep top spots in
any ranking. On social networks, one can find lists of favorite formulas. On “Quora”, one can
find the arithmetic mean-geometric mean inequality √(ab) ≤ (a + b)/2 or the geometric sum-
mation formula 1 + a + a² + · · · = 1/(1 − a) (for |a| < 1) high up. One can also find strange contributions
in social media like the identity 1 = 0.99999 . . . (which is used by Piaget-inspired educators
to probe the mathematical maturity of kids. Similarly as in Piaget’s experiments, there is a time of
mathematical maturity when a student starts to understand that this is an identity. A very
young student thinks 1 is larger than 0.9999... even if told to point out a number in between.
Such threshold moments can be crucial for example for mathematical success later. We have a
strange fascination with “wunderkinds”, kids for whom some mathematical abilities have come
earlier (even though the existence of each wonder kid produces devastating collateral damage in
its neighborhood, as their success sucks out any motivation of immediate peers). The problem
is also that if somebody does not pass these Piaget thresholds early, teachers and parents con-
sider them lost; they get discouraged and become uninterested in math (the situation in other
arts or sports is similar). In reality, slow learners for whom the thresholds are passed later are
often deeper thinkers and can produce deeper or more extraordinary results.) At the moment,
searching for the “most beautiful formula in mathematics” gives the Euler identity and search
engines agree. But the concept of taste in a time of social media can be confusing. We live in
a time, when a 17 year old “social influencer” can in a few days gather more “followers” and
become more widely known than Sophie Kovalewskaya who made fundamental beautiful
and lasting contributions in mathematics and physics like the Cauchy-Kovalevskaya theorem
mentioned above. This theorem is definitely more lasting than a few “selfie shots” of a pretty
face, but measured by a “majority vote”, it would not only lose, but completely disappear.
One can find youtube videos of kids explaining the 4th dimension, which are watched millions
of times, many thousand times more than videos of mathematicians who have created deep
mathematical new insight about four dimensional space. But time rectifies. Kovalewskaya will
also be ranked highly in 50 years, while the pretty face has faded. Hardy put this even more
strongly by comparing a mathematician with a literary heavyweight: Archimedes will be re-
membered when Aeschylus is forgotten, because languages die and mathematical ideas do not [185].
There is no doubt that film and TV (and now the internet, like “YouTube” and blogs) have a
great short-term influence on value or exposure of a mathematical field. Examples of movies
with influence are It’s my turn (1980), or Antonia’s line (1995) featuring some algebraic
topology, Good will hunting (1997) in which some graph theory and Fourier theory appears,
21 (2008) which has a scene in which the Monty Hall problem has a cameo. The man
who knew infinity displays the work of Ramanujan and promotes some combinatorics like
the theory of partitions. There are lots of movies featuring cryptology like Sneakers (1992),
Breaking the code (1996), Enigma (2001) or The imitation game (2014). For TV, math-
ematics was promoted nicely in Numb3rs (2005-2010). For more, see [333] or my own online
math in movies collection.
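The identity 1 = 0.99999 . . . is a geometric series in disguise; a minimal Python sketch with exact rational arithmetic (using the fractions module):

```python
from fractions import Fraction

# 0.999...9 with N nines differs from 1 by exactly 10^(-N)
N = 50
partial = sum(Fraction(9, 10**k) for k in range(1, N + 1))
assert 1 - partial == Fraction(1, 10**N)

# geometric summation 1 + a + a^2 + ... = 1/(1-a) for |a| < 1:
# the partial sum misses the limit by exactly a^N/(1-a)
a = Fraction(1, 3)
partial = sum(a**k for k in range(N))
assert Fraction(1, 1) / (1 - a) - partial == a**N / (1 - a)
```

The tail goes to zero, which is exactly the threshold insight that there is no number strictly between 0.999 . . . and 1.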
Professional opinions
Interviews with professional mathematicians can also probe the waters. In [260], Natasha Kon-
dratieva had asked a number of mathematicians: “What three mathematical formulas are the
most beautiful to you”. The formulas of Euler or the Pythagoras theorem naturally were
ranked high. Interestingly, Michael Atiyah even included a formula ”Beauty = Simplicity +
Depth”. Also other results, like the Leibniz series π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − . . . , the
Maxwell equations dF = 0, d∗F = J, or the Schrödinger equation iℏu′ = (iℏ∇ + eA)²u/2 + V u,
the Einstein formula E = mc², or Euler’s golden key ∑_{n=1}^∞ 1/n^s = ∏_p (1 − 1/p^s)^{−1},
or the Gauss identity ∫_{−∞}^{∞} e^{−x²} dx = √π, or the volume of the unit ball in R^{2n} given as
π^n/n! appeared. Gregory Margulis mentioned an application of the Poisson summation
formula ∑_n f(n) = ∑_n f̂(n), which for a Gaussian gives the theta identity
∑_n e^{−πn²/4} = 2 ∑_n e^{−4πn²}, or the quadratic reciprocity
law (p|q)(q|p) = (−1)^{((p−1)/2)((q−1)/2)}, where (p|q) = 1 if q is a quadratic residue modulo p and −1
else. Robert Minlos gave the Gibbs formula, a Feynman-Kac formula or the Stirling
formula. Yakov Sinai mentioned the Gelfand-Naimark realization of an Abelian C∗-algebra
as an algebra of continuous functions or the second law of thermodynamics. Anatoly
Vershik gave the generating function ∏_{k=1}^∞ (1 − x^k)^{−1} = ∑_{n=0}^∞ p(n) x^n for the
partition function and the generalized Cauchy inequality between arithmetic and geometric mean. An
interesting statement of David Ruelle appears in that article; he quoted Grothendieck:
“my life’s ambition as a mathematician, or rather my joy and passion, have constantly been
to discover obvious things . . . ”. Combining Grothendieck’s and Atiyah’s quotes, fundamental
theorems should be “obvious, beautiful, simple and still deep”.
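The quadratic reciprocity law mentioned above is easy to check by machine. A minimal Python sketch using Euler's criterion for the Legendre symbol (written here in the standard convention that (a|p) = 1 when a is a quadratic residue modulo the odd prime p):

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for quadratic residues,
    # p-1 (i.e. -1) for non-residues, and 0 if p divides a
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

# quadratic reciprocity: (p|q)(q|p) = (-1)^(((p-1)/2)((q-1)/2))
for p, q in [(3, 5), (3, 7), (5, 7), (7, 11), (11, 13)]:
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    assert lhs == rhs
```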
A recent column “Roots of unity” in the Scientific American asks mathematicians for their fa-
vorite theorem: examples are Noether’s theorem, the uniformization theorem, the Ham
Sandwich theorem, the fundamental theorem of calculus, the circumference of the
circle, the classification of compact 2-surfaces, Fermat’s little theorem, the Gromov
non-squeezing theorem, a theorem about Betti numbers, the Pythagorean theorem, the
classification of Platonic solids, the Birkhoff ergodic theorem, the Burnside lemma,
the Gauss-Bonnet theorem, Conway’s rational tangle theorem, Varignon’s theorem, an
upper bound on Reidemeister moves in knot theory, the asymptotic number of relatively
prime pairs, the Mittag-Leffler theorem, a theorem about spectral sparsifiers, the
Yoneda lemma and the Brouwer fixed point theorem. These interviews illustrate also
that the choices are different if asked for “personal favorite theorem” or “objectively favorite
theorem”.
of Gauss, stating that the curvature of a surface is intrinsic and not dependent on an em-
bedding is more “fundamental” than the “Gauss-Bonnet” result, which is definitely deeper.
In number theory, one can argue that the quadratic reciprocity formula is deeper than
the little Theorem of Fermat or the Wilson theorem. (The latter gives an if-and-only-if
criterion for primality but still is far less important than the little theorem of Fermat, which
is used in many applications.) The last theorem of Fermat is an example of
an important theorem as it is deep and related to other fields and culture, but it is not yet
so much a “fundamental theorem”. Similarly, the Perelman theorem fixing the Poincaré
conjecture is important, but it is not (yet) a fundamental theorem. It is still a mountain peak
and not a sediment in a rock. Important theorems are not much used by other theorems as
they are located at the end of a development. Also the solution to the Kepler problem on
sphere packings or the proof of the 4-color theorem [80] or the proof of the Feigenbaum
conjectures [108, 213] are important results but not so much used by other results. Important
theorems build the roof of the building, while fundamental theorems form the foundation on
which a building can be constructed. But this can depend on time as what is the roof today,
might be in the foundation later on, once more floors have been added.
Open problems
The importance of a result is also related to the open problems attached to the theorem.
Open problems fuel new research and new concepts. Of course this is a moving target but any
“value functional” for “fundamental theorems” is time dependent and a bit also a matter
of fashion, entertainment (TV series like “Numb3rs” or Hollywood movies like “Good Will
Hunting” changed the value) and under the influence of big-shot mathematicians who
serve as “influencers”. Some of the problems have prizes attached, like the 23 problems
of Hilbert, the 15 problems of Simon [367], the 18 problems of Smale, or the 7
Millennium problems. There are beautiful open problems in any major field and building
a ranking would be as difficult as ranking theorems. (I personally believe that Landau’s list
of 4 problems is clearly at the top. They are shockingly short and elementary but brutally
hard, having resisted more than a century of attacks by the best minds). There are other
problems, where one believes that the mathematics has just not been developed yet to tackle
it, an example being the 3k+1 problem. There appears to be wide consensus that the Riemann
hypothesis is the most important open problem in mathematics. It states that the non-trivial
roots of the Riemann zeta function are all located on the axis Re(z) = 1/2. In number theory, the prime
twin problem or the Goldbach problem have a high exposure as they can be explained
to a general audience without mathematics background. For some reason, an equally simple
problem, the Landau problem asking whether there are infinitely many primes of the form
n² + 1 is less well known. In recent years, an alleged proof by Shinichi Mochizuki of
the ABC conjecture, using a new theory called Inter-Universal Teichmuller Theory (IUT)
which so far is not accepted by the main mathematical community despite strong efforts, has
put the ABC conjecture from 1985 into the spotlight [431]. It has been described
in [163] as the most important problem in Diophantine equations. It can be expressed using
the quality Q(a, b, c) of three integers a, b, c which is Q(a, b, c) = log(c)/ log(rad(abc)), where
the radical rad(n) of a number n is the product of the distinct prime factors of n. The ABC
conjecture is that for any real number q > 1 there exist only finitely many triples (a, b, c) of
positive relatively prime integers with a + b = c for which Q(a, b, c) > q. The triple with
the highest quality so far is (a, b, c) = (2, 3^10 · 109, 23^5); its quality is Q = 1.6299. And then
there are entire collections of conjectures, one being the Langlands program which relates
different parts of mathematics like number theory, algebra, representation theory or algebraic
geometry. I myself can not appreciate this program because I would first need to understand it.
My personal favorite problem is the entropy problem in smooth dynamical systems theory
[229]. The Kolmogorov-Sinai entropy of a smooth dynamical system can be described using
Lyapunov exponents. For many systems like smooth convex billiards, one measures positive
entropy but can not prove it. An example is the table x⁴ + y⁴ = 1 [220]. For ergodic theory, see
[100, 109, 152, 372].
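The quality functional of the ABC conjecture above is simple enough to compute directly; a minimal Python sketch reproducing the record triple (the function names are mine):

```python
from math import gcd, log

def rad(n):
    # radical: product of the distinct prime factors of n
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a, b, c):
    # Q(a,b,c) = log(c) / log(rad(abc))
    return log(c) / log(rad(a * b * c))

a, b = 2, 3**10 * 109
c = a + b
assert c == 23**5 and gcd(a, b) == 1
print(round(quality(a, b, c), 4))  # 1.6299
```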
Classification results
One can also see classification theorems like the above mentioned Gelfand-Naimark realization
as mountain peaks in the landscape of mathematics. Examples of classification results are
the classification of regular or semi-regular polytopes, the classification of discrete subgroups of
a Lie group, the classification of “Lie algebras”, the classification of “von Neumann algebras”,
the “classification of finite simple groups”, the classification of finitely generated Abelian groups, or the
classification of associative division algebras which by Frobenius is given either by the real
or complex or quaternion numbers. Not only in algebra, also in differential topology, one would
like to have classifications like the classification of d-dimensional manifolds. In topology, an
example result is that every Polish space is homeomorphic to some subspace of the Hilbert
cube. Related to physics is the question of which “functionals” are important. Uniqueness results
help to render a functional important and fundamental. Valuations of
fields are classified by Ostrowski’s theorem: every non-trivial valuation over the rational numbers
is equivalent either to the absolute value or to a p-adic norm. The Euler characteristic for example
can be characterized as the unique valuation on simplicial complexes which assumes the value
1 on simplices, or as the functional which is invariant under Barycentric refinements. A theorem of
Claude Shannon [363] identifies the Shannon entropy as the unique functional on probability
spaces which is compatible with the additive and multiplicative operations on probability spaces and
satisfying some normalization condition.
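The compatibility with the multiplicative operation is the additivity of entropy on product spaces; a minimal Python sketch:

```python
from itertools import product
from math import log2

def H(p):
    # Shannon entropy (in bits) of a probability vector
    return -sum(x * log2(x) for x in p if x > 0)

p = [0.5, 0.25, 0.25]
q = [0.9, 0.1]
joint = [a * b for a, b in product(p, q)]  # product probability space
assert abs(H(joint) - (H(p) + H(q))) < 1e-12
```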
Fundamental are also some inequalities [159] like the Cauchy-Schwarz inequality |a · b| ≤
|a||b| or the Chebyshev inequality P[|X − E[X]| ≥ a] ≤ Var[X]/a². In complex analysis, the
Hadamard three circle theorem is important as it gives bounds for the maximum of
|f | for a holomorphic function f defined on an annulus given by two concentric circles. Often
inequalities are more fundamental and powerful than equalities because they are more widely
used. Related to inequalities are embedding theorems like Sobolev embedding theorems.
For more inequalities, see [67]. Apropos embedding, there are the important Whitney or Nash
embedding theorems which are appealing.
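The Chebyshev inequality above is easy to test empirically; a minimal Python sketch with simulated exponentially distributed data (an arbitrary choice):

```python
import random

random.seed(0)
X = [random.expovariate(1.0) for _ in range(100_000)]
n = len(X)
mean = sum(X) / n
var = sum((x - mean) ** 2 for x in X) / n

# the empirical tail frequency never exceeds the Chebyshev bound Var[X]/a^2
for a in [1.0, 2.0, 3.0]:
    freq = sum(1 for x in X if abs(x - mean) >= a) / n
    assert freq <= var / a**2
```

The bound is crude (the actual tail of the exponential distribution is much smaller), which illustrates why such inequalities are so widely applicable: they hold with no assumption beyond finite variance.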
Big ideas
Classifying and valuing big ideas is even more difficult than ranking individual theorems. Ex-
amples of big ideas are the idea of axiomatisation which started with planar geometry and
number theory as described by Euclid and the concept of proof or later the concept of mod-
els. Archimedes’ idea of comparison, leading to ideas like the Cavalieri principle, integral
geometry or measure theory. René Descartes’ idea of coordinates which allowed one to work on
geometry using algebraic tools, the use of infinitesimals and limits leading to calculus, allowing
one to merge the concepts of rate of change and accumulation, the idea of extrema leading
to the calculus of variations or Lagrangian and Hamiltonian dynamics or descriptions of fun-
damental forces. Cantor’s set theory allowed for a universal simple language to cover all
of mathematics, the Klein Erlangen program of “classifying and characterizing geometries
through symmetry”. The abstract idea of a group or more general mathematical structures
like monoids. The concept of extending number systems, like completing the real numbers,
extending them to the quaternions and octonions, or producing p-adic numbers or
hyperreal numbers. The concept of complex numbers or more generally the idea of com-
pletion of a field. The idea of logarithms [382]. The idea of Galois to relate problems about
solving equations with field extensions and symmetries. The Grothendieck program
of “geometry without points” or “locales” as topologies without points in order to overcome
shortcomings of set theory. This led to new objects like schemes or topoi. Another basic
big idea is the concept of duality, which appears in many places like in projective geometry,
in polyhedra, Poincaré duality or Pontryagin duality or Langlands duality for reductive
algebraic groups. The idea of dimension to measure topological spaces numerically leading
to fractal geometry. The idea of almost periodicity is an important generalization of
periodicity. Crossing the boundary of integrability leads to the important paradigm of stabil-
ity and randomness [307] and the interplay of structure and randomness [400]. These themes
are related to harmonic analysis and integrability as integrability means that for every
invariant measure one has almost periodicity. It is also related to spectral properties in solid
state physics or via Koopman theory in ergodic theory or then to fundamental new number
systems like the p-adic numbers: the p-adic integers form a compact topological group on
which the translation is almost periodic. It also leads to problems in Diophantine approxi-
mation. The concept of algorithm and building the foundation of computation using precise
mathematical notions. The use of algebra to track problems in topology starting with Betti
and Poincaré. Another important principle is to reduce a problem to a fixed point problem.
The categorical approach is not only a unifying language but also allows for generalizations
of concepts which help to solve problems. Examples are generalizations of Lie groups in the form
of group schemes. Then there is the deformation idea which was used for example in the
Perelman proof of the Poincaré conjecture. Deformation often comes in the form of partial
differential equations and in particular heat type equations. Deformations can be abstract in
the form of homotopies or more concrete by analyzing concrete partial differential equations
like the mean curvature flow or Ricci flow. Another important line of ideas is to use prob-
ability theory to prove results, even in combinatorics. A probabilistic argument can often
give existence of objects which one can not even construct. Examples are graphs with n nodes
for which the Euler characteristic of the defining Whitney complex is exponentially large
in n. The idea of non-commutative geometry generalizing geometry through functional
analysis or the idea of discretization which leads to numerical methods or computational
geometry. The power of coordinates allows one to solve geometric problems more easily. The above
mentioned examples have all proven their use. Grothendieck’s ideas led to the solution of the
Weil conjectures, fixed point theorems were used in game theory, to prove uniqueness of
solutions of differential equations or justify perturbation theory like the KAM theorem about
the persistence of quasi-periodic motion leading to hard implicit function theorems. In the
end, what really counts is whether the big idea can solve problems or prove theorems. The
history of mathematics clearly shows that abstraction for the sake of abstraction or for the sake
of generalization rarely could convince the mathematical community. At least not initially. But
it can also happen that the breakthrough of a new theory or generalization only pays off much
later. A big idea might have to age like a good wine.
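As one concrete illustration, the p-adic numbers mentioned above rest on the p-adic absolute value, which satisfies the strong (ultrametric) triangle inequality; a minimal Python sketch (the function name is mine):

```python
def padic_abs(n, p):
    # p-adic absolute value |n|_p = p^(-v), where p^v exactly divides n
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return float(p) ** (-v)

assert padic_abs(12, 2) == 0.25          # 12 = 2^2 * 3
assert padic_abs(12, 3) == 1 / 3

# strong triangle inequality |a+b|_p <= max(|a|_p, |b|_p)
import random
random.seed(1)
for _ in range(1000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    assert padic_abs(a + b, 5) <= max(padic_abs(a, 5), padic_abs(b, 5))
```

Highly divisible integers become small in this metric, which is what makes the p-adic integers a compact topological group.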
Paradigms
There is once in a while an idea which completely changes the way we look at things. These
are paradigm shifts as described by the philosopher and historian Thomas Kuhn who relates
it also to scientific revolutions [268]. For mathematics, there are various places, where
such fundamental changes happened: the introduction of written numbers which happened
independently in various different places. An early example is the tally mark notation on
tally sticks (early sources are the Lebombo bone from 40 thousand years ago or the Ishango
bone from 20 thousand years ago) or the technology of talking knots, the khipu [409], which
is a topological writing which flourished in the Tawantinsuyu, the Inka empire. Another
example of a paradigm change is the development of proof, which required the insight that
some mathematical statements are assumed as axioms from which, using logical deduction,
new theorems are proven. That axiom systems can be deformed like from Euclidean to non-
Euclidean geometry was definitely a paradigm change too. On a larger scale, the insight that
even the axiom systems of mathematics can be deformed and extended in various ways came
only in the 20th century with Gödel. A third example of a paradigm change is the introduction
of the concept of functions which came surprisingly late. The modern concept of a function
which takes a quantity and assigns to it a new quantity came only late in the 19th century with the
development of set theory, which is a paradigm change too. There had been a long struggle
also with understanding limits, which puzzled already Greek mathematicians like Zeno but
which really only became solid with the clear definitions of Weierstrass and then with the concept
of topology where the concept of limit is absorbed within set theory, for example using filters.
Related to functions is the use of functions to understand combinatorial or number theoretical
problems, like through the use of generating functions, or Dirichlet series, allowing analytic
tools to solve discrete problems like the existence of primes on arithmetic progressions. The
opposite, the use of discrete structures like finite groups to understand the continuum, like Galois
theory, is another example of a paradigm change. It led to the insight that the quadrature
of the circle, or angle trisection can not be done with ruler and compass. There are various
other places, where paradigm changes happened. A nice example is the axiomatization of
Probability theory by Kolmogorov or the realization that statistics is a geometric theory:
random variables are vectors in a vector space. The correlation between two random variables
is the cosine of the angle between centered versions of these random variables. Paradigm
changes which are really fundamental can be surprisingly simple. An example is the Connes
formula [90] which is based on the simple idea that distance can be measured by extremizing
slope. This allows one to push geometry into non-commutative settings or discrete settings, where
a priori no metric is given. Another example is the extremely simple but powerful idea of the
Grothendieck extension of a monoid to a group. It has been used throughout the history
of mathematics to generate integers from natural numbers, rational numbers from integers,
complex numbers from real numbers or quaternions from complex numbers. The idea is also
used in dynamical systems theory to generate from a not necessarily invertible dynamical system
an invertible dynamical system by extending time from a monoid to a group. In the context
of Grothendieck, one should mention also that category theory, similarly to set theory at the
beginning of the last century, changed the way mathematics is done and extended.
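The geometric view of statistics described above is directly computable; a minimal Python sketch treating centered random variables as vectors (the function name is mine):

```python
from math import sqrt

def corr_as_cosine(X, Y):
    # center both random variables, then take the cosine of the angle
    n = len(X)
    xc = [x - sum(X) / n for x in X]
    yc = [y - sum(Y) / n for y in Y]
    dot = sum(a * b for a, b in zip(xc, yc))
    return dot / (sqrt(sum(a * a for a in xc)) * sqrt(sum(b * b for b in yc)))

X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 6.0, 8.0]           # Y = 2X, perfectly correlated
assert abs(corr_as_cosine(X, Y) - 1.0) < 1e-12
```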
Taxonomies
When looking at mathematics overall, taxonomies are important. They not only help to
navigate the landscape but are also interesting from a pedagogical as well as historical point
of view. I borrow here some material from my course Math E 320 which is so global that a
taxonomy is helpful. Organizing a field using markers is also important when teaching intelli-
gent machines, a field which can be seen as the pedagogy for AI. The big bulk of work in [254]
was to teach a bot mathematics, which means to fill in thousands of entries of knowledge. It
can appear a bit mind-numbing as it is a similar task to writing a dictionary. But writing
things down for a machine actually is even tougher than writing things down for a student. We
can not assume the machine to know anything it is not told. This document by the way could
relatively easily be adapted into a database of “important theorems” and actually one of my aims
is to feed it eventually to the Sofia bot. If the machine is asked about “important theorems
in mathematics”, it would be surprisingly well informed, even though it is just stupid encyclopedic
data entry. Historically, when knowledge was still sparse, one classified teaching material
using the seven liberal arts, the trivium: grammar, logic and rhetoric, as well as the
quadrivium: arithmetic, geometry, music, and astronomy. More specifically, one has built the
eight ancient roots of mathematics which are tied to activities: counting and sorting (arith-
metic), spacing and distancing (geometry), positioning and locating (topology), surveying and
angulating (trigonometry), balancing and weighing (statics), moving and hitting (dynamics),
guessing and judging (probability) and collecting and ordering (algorithms). This led then to
the 12 topics taught in that course: Arithmetic, Geometry, Number Theory, Algebra, Calculus,
Set theory, Probability, Topology, Analysis, Numerics, Dynamics and Algorithms. The AMS
classification is much more refined and distinguishes 64 fields. The Bourbaki point of view
is given in [114]: it partitions mathematics into algebraic and differential topology, differential
geometry, ordinary differential equations, Ergodic theory, partial differential equations, non-
commutative harmonic analysis, automorphic forms, analytic geometry, algebraic geometry,
number theory, homological algebra, Lie groups, abstract groups, commutative harmonic analysis,
logic, probability theory, categories and sheaves, commutative algebra and spectral theory.
What are hot spots in mathematics? Michael Atiyah [23] distinguished parameters like
local and global, low and high dimension, commutative and non-commutative, linear and
nonlinear, geometry and algebra, physics and mathematics.
Key examples
The concept of experiment came even earlier and has always been part of mathematics. Ex-
periments allow one to get good examples and set the stage for a theorem. 2 Obviously the theorem
cannot contradict any of the examples. But examples are more than just a tool to falsify state-
ments; a good example can be the seed for a new theory or for an entire subject. Here are
a few examples: in smooth dynamical systems the Smale horse shoe comes to mind, in
differential topology the exotic spheres of Milnor, in one-dimensional dynamics the lo-
gistic map, or Henon map, in perturbation theory of Hamiltonian systems the Standard
map featuring KAM tori or Mather sets, in homotopy theory the dunce hat or Bing house,
in combinatorial topology the Rudin sphere, the Nash-Kuiper non-smooth embedding
of a torus into Euclidean space, in topology there is the Alexander horned sphere or the
Antoine necklace. In complexity theory there is the busy beaver problem in Turing
computation, an illustration of how much one can achieve with small machines, in
group theory there is the Rubik cube which illustrates many fundamental notions for finite
groups, in fractal analysis the Cantor set, the Menger sponge, in Fourier theory the series
of f (x) = x mod 1, in Diophantine approximation the golden ratio, in the calculus of sums
the zeta function, in dimension theory the Banach Tarski paradox. In harmonic analysis
the Weierstrass function as an example of a nowhere differentiable function. The case of
Peano curves gives concrete examples of a continuous surjection from an interval onto a square
or cube. In complex dynamics not only the Mandelbrot set plays an important role, but
also individual, specific Julia sets can be interesting. Objects like the Mandelbulb have hardly
begun to be investigated; there seem to be no theorems known about this object.
In mathematical physics, the almost Mathieu operator [105] produced a rich theory related
to spectral theory, Diophantine approximation, fractal geometry and functional analysis.
Besides examples illustrating a typical case, it is also important to explore the boundary of a
theorem or theory by looking at counterexamples. Collections of counterexamples exist in
many fields like [158, 383, 338, 392, 429, 72, 243].
Physics
One can also make a list of great ideas in physics [126] and see the relations with the fun-
damental theorems in mathematics. A high applicability should then contribute to a value
functional in the list of theorems. Great ideas in physics are the concept of space and time,
meaning to describe physical events using differential equations. In cosmology, one of the
insights was to understand the structure of our solar system and arrive at the heliocentric
system; another is to look at space-time as a whole and realize the expansion of the universe
or the idea of a big bang. More general is the Platonic idea that physics is geometry.
Or calculus: Lagrange developed his calculus of variations to find laws of physics. Then
there is the idea of Lorentz invariance which leads to special relativity, there is the idea of
general relativity which allows one to describe gravity through geometry and a larger symmetry
seen through the equivalence principle. There is the idea of seeing elementary particles using
Lie groups. There is the Noether theorem which is the idea that any symmetry is tied
to a conservation law: translational symmetry leads to momentum conservation, rotation
2 Quote of Vladimir Arnold: "Mathematics is a part of physics where experiments are cheap."
symmetry to angular momentum conservation for example. Symmetries also play a role in
spontaneously broken symmetries or phase transitions. There is the idea of quantum
mechanics, which mathematically means replacing differential equations with partial differential
equations or replacing commutative algebras of observables with non-commutative alge-
bras. An important idea is the concept of perturbation theory and in particular the notion
of linearization. Many laws are simplifications of more complicated laws and described in
the simplest cases through linear laws like Ohm's law or Hooke's law. Quantization processes
allow to go from commutative to non-commutative structures. Perturbation theory allows
then to extrapolate from a simple law to a more complicated law. Some of this is an easy application
of the implicit function theorem; some is harder, like KAM theory. There is the idea of
using discrete mathematics to describe complicated processes. An example is the language
of Feynman graphs or the language of graph theory in general to describe physics as in loop
quantum gravity or then the language of cellular automata which can be seen as partial
difference equations where also the function space is quantized. The idea of quantization, a
formal transition from an ordinary differential equation like a Hamiltonian system to a partial
differential equation or to replace single particle systems with infinite particle systems (Fock).
There are other quantization approaches through deformation of algebras which is related
to non-commutative geometry. There is the idea of using smooth functions to describe
discrete particle processes. An example is the Vlasov dynamical system or Boltzmann’s
equation to describe a plasma, or thermodynamic notions to describe large sets of particles like
a gas or fluid. Dual to this is the use of discretization to describe a smooth system by discrete
processes. An example is numerical approximation, like using the Runge-Kutta scheme to
compute the trajectory of a differential equation. There is the realization that we have a whole
spectrum of dynamical systems, integrability and chaos and that some of the transitions are
universal. Another example is the tight binding approximation in which a continuum
Schrödinger equation is replaced with a bounded discrete Jacobi operator. There is the general
idea of finding the building blocks or elementary particles. Starting with Democritus in
ancient Greece, the idea got refined again and again. Once, atoms were detected and charges
found to be quantized (Millikan), the structure of the atom was explored (Rutherford), and
then the atom got split (Meitner, Hahn). The structure of the nuclei with protons and neutrons
was then refined again using quarks, leading to the standard model in particle physics. There
is furthermore the idea to use statistical methods for complex systems. An example is the
use of stochastic differential equations like diffusion processes to describe actually deterministic
particle systems. There is the insight that complicated systems can form patterns through in-
terplay between symmetry, conservation laws and synchronization. Large scale patterns can
be formed from systems with local laws. Finally, there is the idea of solving inverse problems
using mathematical tools like Fourier theory or basic geometry (Eratosthenes could compute
the radius of the earth by comparing the lengths of shadows at different places on the earth).
An example is tomography, where the structure of some object is explored using resonance.
Then there is the idea of scale invariance which allows to describe objects which have fractal
nature.
Computer science
As in physics, it is harder to pinpoint “big ideas” in computer science as they are in general
not theorems. The initial steps of mathematics were to build a language, where numbers
represent quantities [97]. Physical tools which assist in manipulating numbers can already
be seen as computing devices. Marks on a bone, pebbles in a clay bag, talking knots in a
Khipu, marks on a clay tablet were the first steps. Papyri, paper, magnetic, optical and electric
storage: the tools to build memory were refined over the millennia. The mathematical
language allowed us to get further than the finite. Using a finite number of symbols we can
represent and count infinite sets, have notions of cardinality, have various number systems
and more general algebraic structures. Numbers can even be seen as games [96, 257]. A
major idea is the concept of an algorithm. Adding or multiplying on an abacus already was
an algorithm. The concept was refined in geometry, where ruler and compass were used as
computing devices, like the construction of points in a triangle. To measure the effectiveness
of an algorithm, one can use notions of complexity. This has been made precise by computing
pioneers like Turing as one has to formulate first what a computation is. In the last century
one has seen that computations and proofs are very similar and that they have similar general
restrictions. There are some tasks which can not be computed with a Turing machine and
there are theorems which can not be proven in a specific axiom system. As mathematics is a
language, we have to deal with concepts of syntax, grammar, notation, context, parsing,
validation, verification. As mathematics is a human activity which is done in our brains,
it is related to psychology and computer architecture. Computer science aspects are also
important in pedagogy and education: how can an idea be communicated clearly? How
do we motivate? How do we convince peers that a result is true? Examples from history
show that this is often done by authority and that the validity of some proofs turned out
to be wrong or incomplete, even in the case of fundamental theorems or when treated by
great mathematicians. (Examples are the fundamental theorem of arithmetic, the fundamental
theorem of algebra or the flawed published proof of the 4 color theorem by Kempe). On the
other hand, there were also quite many results which only later got recognized. The work
of Galois for example only became influential much later. How come we trust a human brain more
than an electronic one? We have to make some fundamental assumptions, for example
that if we do the logical step "if A holds and B holds, then 'A and B' holds". This assumes for
example that our memory is faithful: after having put A and B in the memory and making
the conclusion, we have to assume that we did not forget A nor B! Why do we trust this
more than the memory of a machine? As we are also assisted more and more by electronic
devices, the question of the validity of computer assisted proofs comes up. The 4-color
theorem of Kenneth Appel and Wolfgang Haken based on previous work of many others
like Heinrich Heesch or the proof of the Feigenbaum conjecture of Mitchell Feigenbaum
first proven by Oscar Lanford III or the proof of the Kepler problem by Thomas Hales are
examples. A great general idea is related to the representation of data. This can be done using
matrices like in a relational database or using other structures like graphs leading to graph
databases. The ability to use computers allows mathematicians to do experiments. A branch
of mathematics called experimental mathematics [20, 216] relies heavily on experiments to
find new theorems or relations. Experiments are related to simulations. We are able, within
a computer to build and explore new worlds, like in computer games, we can enhance the
physical world using virtual reality or augmented reality or then capturing a world by
3D scanning and realize a world by printing the objects [256]. A major theme is artificial
intelligence [354, 217]. It is related to optimization problems like optimal transport, neural
nets as well as inverse problems like structure from motion problems. An intelligent
entity must be able to take information, build a model and then find an optimal strategy to
solve a given task. A self-driving car for example has to be able to translate pictures from a
camera and build a map, then determine where to drive. Such tasks are usually considered
part of applied mathematics but they are very much related with pure mathematics because
computers also start to learn how to read mathematics, how to verify proofs and to find new
theorems. Artificial intelligence agents [421], first developed in the 1960s, also learned
some mathematics. I myself learned about this when incorporating computer algebra systems into
a chatbot in [254]. AI has now become a big business as Alexa, Siri, Google Home, IBM
Watson or Cortana demonstrate. But these information systems must be taught, they must
be able to rank alternative answers, even inject some humor or opinions. Soon, they will be
able to learn themselves and answer questions like “what are the 10 most important theorems
in mathematics?”
Brevity
We live in an instagram, snapchat, twitter, microblog, vine, watch-mojo, pecha-kucha time.
Many of us multi task, read news on smart phones, watch faster paced movies, read shorter
novels and feel that Marcel Proust's million-word masterpiece "À la recherche du temps perdu"
is "temps perdu". Even classrooms and seminars have become more aphoristic. Micro blogging
tools are only the latest incarnation of “miniature stories”. They continue the tradition of older
formats, from "mural art" by Romans and modern graffiti to "aphorisms" [263, 264], poetry,
cartoons, Unix fortune cookies [17]. Shortness has appeal: aphorisms, poems, fairy tales,
quotes, words of wisdom, life hacker lists, and tabloid top 10 lists illustrate this. And then there
are books like “Math in 5 minutes”, “30 second math”, “math in minutes” [35, 162, 129], which
are great coffee table additions. Also short proofs are appealing like “Let epsilon be smaller
than zero” which is the shortest known math joke, or “There are three type of mathematicians,
the ones who can count, and the ones who can’t.” Also short open problems are attractive, like
the twin prime problem "there are infinitely many twin primes", the Landau problem
"there are infinitely many primes of the form n^2 + 1", or the Goldbach problem "every even n > 2 is
the sum of two primes". For the larger public in mathematics, shortness has appeal: according
to a poll of the Mathematical Intelligencer from 1988, the most favorite theorems are short
[423]. Results with longer proofs can make it to graduate courses or specialized textbooks, but
even then the results are often short enough that they can be tweeted without proof. Why
is shortness attractive? Paul Erdős referred to short elegant proofs as "proofs from the book"
[9]. Shortness reduces the possibility of error as complexity is always a stumbling block for
understanding. But is beauty equivalent to brevity? Buckminster Fuller once said: “If the
solution is not beautiful, I know it is wrong.” [7]. Much about the aesthetics in mathematics
is investigated in [304]. According to [348], the beauty of a piece of mathematics is frequently
associated with the shortness of statement or of proof: beautiful theories are also thought of
as short, self-contained chapters fitting within broader theories. There are examples of complex
and extensive theories which every mathematician agrees to be beautiful, but these examples
are not the ones which come to mind first. Also psychologists and educators know that simplicity
appeals to children. From [373]: "For now, I want simply to draw attention to the fact that
even for a young, mathematically naive child, aesthetic sensibilities and values (a penchant for
simplicity, for finding the building blocks of more complex ideas, and a preference for shortcuts
and 'liberating' tricks rather than cumbersome recipes) animates mathematical experience." It
is hard to exhaust them all, not even with tweets: there are more than googol^2 = 10^200 texts of
length 140. They cannot all ever be written down, as this exceeds our estimates of the
number of elementary particles. But there are even short story collections. Berry's paradox
tells in this context that the shortest text which cannot be tweeted in 140 characters can be tweeted: "The
shortest non-tweetable text". Since we insist on giving proofs, we have to cut corners. Books
containing lots of elegant examples are [14, 9].
Twitter math
The following 42 tweets were written in 2014, when twitter still had a 140 character limit. Some
of them were actually tweeted. The experiment was to see which theorems are short enough
so that one can tweet both the theorem as well as the proof in 140 characters. Of course, that
often requires a bit of cheating. See [9] for proofs from the books, where the proofs have full
details.
Euclid: The set of primes is infinite. Proof: let p be the largest prime, then p! + 1 has a larger
prime factor than p. Contradiction.
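The argument in this tweet can be sanity-checked numerically; the following sketch (the helper name is mine, not from the text) verifies for a few p that the smallest prime factor of p! + 1 indeed exceeds p:

```python
from math import factorial

def smallest_prime_factor(n):
    # trial division; returns n itself when n is prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# every prime factor of p! + 1 is larger than p,
# so no prime p can be the largest one
for p in [2, 3, 5, 7]:
    q = smallest_prime_factor(factorial(p) + 1)
    assert q > p
    print(p, "->", q)  # 2 -> 3, 3 -> 7, 5 -> 11, 7 -> 71
```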
Hippasus: √2 is irrational. Proof: If √2 = p/q, then 2q^2 = p^2. To the left is an odd number
of factors 2, to the right an even one. Contradiction.
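The parity count behind this tweet can be checked mechanically: the exponent of 2 in 2q^2 is always odd, while in p^2 it is always even. A small sketch (the helper name is of my choosing):

```python
def v2(n):
    # exponent of 2 in the prime factorization of n > 0
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# v2(2*q*q) = 1 + 2*v2(q) is always odd, v2(p*p) = 2*v2(p) is always even,
# so 2*q*q == p*p is impossible and sqrt(2) cannot equal p/q
for q in range(1, 200):
    assert v2(2 * q * q) % 2 == 1
for p in range(1, 200):
    assert v2(p * p) % 2 == 0
```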
Discrete Gauss-Bonnet: Σ_x K(x) = χ(G), with curvature K(x) = 1 − V_0(x)/2 + V_1(x)/3 −
V_2(x)/4 + ... and Euler characteristic χ(G) = v_0 − v_1 + v_2 − v_3 + .... Proof: Use the
handshake identity Σ_x V_k(x) = (k + 2) v_{k+1}.
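The discrete Gauss-Bonnet identity can be tested on a concrete graph. The sketch below (function names are mine, not from the text) enumerates the complete subgraphs of the octahedron graph, with the sign convention of the tweet, and compares the curvature sum with the Euler characteristic:

```python
from fractions import Fraction

def cliques(vertices, edges):
    # all complete subgraphs, keyed by dimension k (a k-simplex has k+1 vertices)
    es = {frozenset(e) for e in edges}
    result = {0: [frozenset([v]) for v in vertices]}
    k = 0
    while result[k]:
        nxt = {c | {v} for c in result[k] for v in vertices
               if v not in c and all(frozenset([v, w]) in es for w in c)}
        k += 1
        result[k] = list(nxt)
    return result

def euler_characteristic(cl):
    # chi = v0 - v1 + v2 - ...
    return sum((-1) ** k * len(cs) for k, cs in cl.items())

def curvature(cl, x):
    # K(x) = 1 - V0(x)/2 + V1(x)/3 - V2(x)/4 + ...
    K = Fraction(0)
    for k, cs in cl.items():
        s = k + 1  # number of vertices of a clique in this group
        K += Fraction((-1) ** (s - 1) * sum(1 for c in cs if x in c), s)
    return K

# octahedron graph: all pairs are edges except the three antipodal ones
V = range(6)
E = [(a, b) for a in V for b in V if a < b and (a, b) not in [(0, 1), (2, 3), (4, 5)]]
cl = cliques(V, E)
print(euler_characteristic(cl))          # 2
print(sum(curvature(cl, x) for x in V))  # 2
```

For the octahedron each vertex has curvature 1 − 4/2 + 4/3 = 1/3, and the six vertices sum to χ = 2.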
Lefschetz: Σ_x i_T(x) = str(T|H(G)). Proof: The LHS is str(exp(−0L) U_T) and the RHS is
str(exp(−tL) U_T) for t → ∞. The super trace does not depend on t.
Cauchy-Binet: det(1 + F^T G) = Σ_P det(F_P) det(G_P). Proof: Set A = F^T G. The
coefficients of det(x − A) are the sums Σ_{|P|=k} det(F_P) det(G_P).
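This generalized Cauchy-Binet identity can be checked numerically by summing det(F_P) det(G_P) over all equal-size row/column patterns P, with the empty pattern contributing 1. The code below is a sketch with names of my choosing:

```python
import numpy as np
from itertools import combinations

def minor_sum(F, G):
    # sum of det(F_P) * det(G_P) over all patterns P = (rows R, columns C)
    # with |R| = |C|; the empty pattern contributes 1
    n, m = F.shape
    total = 1.0
    for k in range(1, min(n, m) + 1):
        for R in combinations(range(n), k):
            for C in combinations(range(m), k):
                total += np.linalg.det(F[np.ix_(R, C)]) * np.linalg.det(G[np.ix_(R, C)])
    return total

rng = np.random.default_rng(0)
F = rng.integers(-2, 3, size=(3, 2)).astype(float)
G = rng.integers(-2, 3, size=(3, 2)).astype(float)
lhs = np.linalg.det(np.eye(2) + F.T @ G)
assert abs(lhs - minor_sum(F, G)) < 1e-8
```

The identity follows by expanding det(1 + A) into principal minors of A = F^T G and applying the classical Cauchy-Binet formula to each of them.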
Math areas
We add here the core handouts of Math E320 which aimed to give for each of the 12 math-
ematical subjects an overview on two pages. For that course, I had recommended books like
[139, 170, 42, 390, 391].
off into even finer pieces. Additionally, there are fields which relate to other areas of science, like economics,
biology or physics:
00 General | 45 Integral equations
01 History and biography | 46 Functional analysis
03 Mathematical logic and foundations | 47 Operator theory
05 Combinatorics | 49 Calculus of variations, optimization
06 Lattices, ordered algebraic structures | 51 Geometry
08 General algebraic systems | 52 Convex and discrete geometry
11 Number theory | 53 Differential geometry
12 Field theory and polynomials | 54 General topology
13 Commutative rings and algebras | 55 Algebraic topology
14 Algebraic geometry | 57 Manifolds and cell complexes
15 Linear/multi-linear algebra; matrix theory | 58 Global analysis, analysis on manifolds
16 Associative rings and algebras | 60 Probability theory and stochastic processes
17 Non-associative rings and algebras | 62 Statistics
18 Category theory, homological algebra | 65 Numerical analysis
19 K-theory | 68 Computer science
20 Group theory and generalizations | 70 Mechanics of particles and systems
What are fancy developments in mathematics today? Michael Atiyah [23] identified in the
year 2000 the following six hot spots: local and global, low and high dimension, commutative
and non-commutative, linear and nonlinear, geometry and algebra, physics and mathematics.
Also this choice is of course highly personal. One can easily add 12 other polarizing quantities which help to
distinguish or parametrize different parts of mathematical areas, especially the ambivalent pairs which produce
a captivating gradient:
regularity and randomness
discrete and continuous
integrable and non-integrable
existence and construction
invariants and perturbations
finite dimensional and infinite dimensional
experimental and deductive
topological and differential geometric
polynomial and exponential
practical and theoretical
applied and abstract
axiomatic and case based
The goal is to illustrate some of these structures from a historical point of view and show that “Mathematics is
the science of structure”.
Lecture 2: Arithmetic
The oldest mathematical discipline is arithmetic. It is the theory of the construction and manipulation of
numbers. The earliest steps were done by Babylonian, Egyptian, Chinese, Indian and Greek thinkers.
Building up the number system starts with the natural numbers 1, 2, 3, 4... which can be added and multiplied.
Addition is natural: join 3 sticks to 5 sticks to get 8 sticks. Multiplication ∗ is more subtle: 3 ∗ 4 means to take 3
copies of 4 and get 4 + 4 + 4 = 12 while 4 ∗ 3 means to take 4 copies of 3 to get 3 + 3 + 3 + 3 = 12. The first factor
counts the number of operations while the second factor counts the objects. To motivate 3 ∗ 4 = 4 ∗ 3, spatial
insight suggests arranging the 12 objects in a rectangle. This commutativity axiom will be carried over to
larger number systems. Realizing an additive and multiplicative structure on the natural numbers requires one to
define 0 and 1. It leads naturally to more general numbers. There are two major motivations to build new
numbers: we want to
The system is unfit for computations as simple calculations like VIII + VII = XV show. Clay tablets, some as
early as 2000 BC and others from 600-300 BC, are known. They feature Akkadian arithmetic using the base
60. The sexagesimal system with base 60 is convenient because of its many factors. It survived: we use 60 minutes
per hour. The Egyptians used the base 10. The most important source on Egyptian mathematics is the
Rhind Papyrus of 1650 BC. It was found in 1858 [234, 378]. Hieratic numerals were used to write on papyrus
from 2500 BC on. Egyptian numerals are hieroglyphics. Found in carvings on tombs and monuments they
are 5000 years old. The modern way to write numbers like 2018 is the Hindu-Arabic system, which diffused
to the West only during the late Middle Ages. It replaced the more primitive Roman system. [378] Greek
arithmetic used a number system with no place values: 9 Greek letters for 1, 2, . . . 9, nine for 10, 20, . . . , 90 and
nine for 100, 200, . . . , 900.
Integers. Indian Mathematics morphed the place-value system into a modern method of writing numbers.
Hindu astronomers used words to represent digits, but the numbers would be written in the opposite order.
Independently, the Mayans also developed the concept of 0 in a number system using base 20. Sometime
after 500, the Hindus changed to a digital notation which included the symbol 0. Negative numbers were
introduced around 100 BC in the Chinese text "Nine Chapters on the Mathematical Art". Also the Bakhshali
manuscript, written around 300 AD, carried out additions with negative numbers, where +
was used to indicate a negative sign. [330] In Europe, negative numbers were avoided until the 15th century.
Fractions: Babylonians could handle fractions. The Egyptians also used fractions, but wrote every frac-
tion as a sum of fractions with unit numerator and distinct denominators, like 4/5 = 1/2 + 1/4 + 1/20 or
5/6 = 1/2 + 1/3. Maybe because of such cumbersome computation techniques, Egyptian mathematics failed to
progress beyond a primitive stage. [378] The modern decimal fractions used nowadays for numerical calcula-
tions were adopted only in 1595 in Europe.
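The greedy (Fibonacci-Sylvester) algorithm always produces such a decomposition into distinct unit fractions, though it need not match the decompositions the scribes actually tabulated. A minimal sketch:

```python
from fractions import Fraction
from math import ceil

def egyptian(frac):
    # greedy decomposition of a fraction into distinct unit fractions:
    # repeatedly subtract the largest unit fraction 1/d that still fits
    parts = []
    while frac > 0:
        d = ceil(1 / frac)   # smallest denominator with 1/d <= frac
        parts.append(d)
        frac -= Fraction(1, d)
    return parts

print(egyptian(Fraction(4, 5)))  # [2, 4, 20], i.e. 4/5 = 1/2 + 1/4 + 1/20
print(egyptian(Fraction(5, 6)))  # [2, 3],     i.e. 5/6 = 1/2 + 1/3
```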
Real numbers: As noted by the Greeks already, the diagonal of the square is not a fraction. It first produced a
crisis until it became clear that "most" numbers are not rational. Georg Cantor first saw that the cardinality
of all real numbers is much larger than the cardinality of the integers: one can count all rational numbers
but not enumerate all real numbers. One consequence is that most real numbers are transcendental: they do
not occur as solutions of polynomial equations with integer coefficients. The number π is an example. The
concept of real numbers is related to the concept of limit. Sums like 1 + 1/4 + 1/9 + 1/16 + 1/25 + . . . are
not rational.
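The particular sum mentioned converges to π^2/6, Euler's solution of the Basel problem, which is irrational. A quick numerical sketch of the partial sums:

```python
from math import pi

# partial sums of 1 + 1/4 + 1/9 + 1/16 + ... approach pi^2/6
s = sum(1.0 / k ** 2 for k in range(1, 100001))
assert abs(s - pi ** 2 / 6) < 1e-4
print(s)  # close to pi^2/6 = 1.6449...
```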
Complex numbers: some polynomials have no real root. To solve x2 = −1 for example, we need new
numbers. One idea is to use pairs of numbers (a, b) where (a, 0) = a are the usual numbers and extend addition
and multiplication (a, b) + (c, d) = (a + c, b + d) and (a, b) · (c, d) = (ac − bd, ad + bc). With this multiplication,
the number (0, 1) has the property that (0, 1) · (0, 1) = (−1, 0) = −1. It is more convenient to write a + ib where
i = (0, 1) satisfies i2 = −1. One can now use the common rules of addition and multiplication.
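The pair construction above is easy to code directly (a minimal sketch; Python of course has complex numbers built in):

```python
def cadd(a, b):
    # (a,b) + (c,d) = (a+c, b+d)
    return (a[0] + b[0], a[1] + b[1])

def cmul(a, b):
    # (a,b) * (c,d) = (ac - bd, ad + bc)
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

i = (0, 1)
print(cmul(i, i))  # (-1, 0), i.e. i^2 = -1
```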
Surreal numbers: Similarly as real numbers fill in the gaps between the integers, the surreal numbers fill in the
gaps between Cantor's ordinal numbers. They are written as (a, b, c, ...|d, e, f, ...) meaning that the "simplest"
number is larger than a, b, c... and smaller than d, e, f, ... We have (|) = 0, (0|) = 1, (1|) = 2 and (0|1) = 1/2
or (|0) = −1. Surreals contain already transfinite numbers like (0, 1, 2, 3...|) or infinitesimal numbers like
(0|1/2, 1/3, 1/4, 1/5, ...). They were introduced in the 1970s by John Conway. The late appearance confirms
the pedagogical principle: what humans discovered late is also harder to teach.
Lecture 3: Geometry
Geometry is the science of shape, size and symmetry. While arithmetic deals with numerical structures,
geometry handles metric structures. Geometry is one of the oldest mathematical disciplines. Early geometry
has relations with arithmetic: the multiplication of two numbers n × m can be seen as the area of a rectangle,
a shape that is invariant under rotational symmetry. Identities like the Pythagorean triple 3^2 + 4^2 = 5^2 were interpreted and
drawn geometrically. The right angle is the most ”symmetric” angle apart from 0. Symmetry manifests
itself in quantities which are invariant. Invariants are one of the most central aspects of geometry. Felix Klein's
Erlangen program uses symmetry to classify geometries depending on how large the symmetries of the shapes
are. In this lecture, we look at a few results which can all be stated in terms of invariants. In the presentation
as well as the worksheet part of this lecture, we will work through smaller miracles like special points in
triangles as well as a couple of gems: Pythagoras, Thales, Hippocrates, Feuerbach, Pappus, Morley,
Butterfly, which illustrate the importance of symmetry.
Much of geometry is based on our ability to measure length, the distance between two points. Having a
distance d(A, B) between any two points A, B, we can look at the next more complicated object, which is a set
A, B, C of 3 points, a triangle. Given an arbitrary triangle ABC, are there relations between the 3 possible
distances a = d(B, C), b = d(A, C), c = d(A, B)? If we fix the scale by c = 1, then a + b ≥ 1, a + 1 ≥ b, b + 1 ≥ a.
For any pair (a, b) in this region, there is a triangle. After an identification, we get an abstract space which
represents all triangles uniquely up to similarity. Mathematicians call this an example of a moduli space.
A sphere S_r(x) is the set of points which have distance r from a given point x. In the plane, the sphere is called
a circle. A natural problem is to find the circumference L = 2π of a unit circle, the area A = π of a unit disc,
the area F = 4π of a unit sphere and the volume V = 4π/3 of a unit ball. Measuring the length of segments
on the circle leads to new concepts like angle or curvature. Because the circumference of the unit circle in the
plane is L = 2π, angle questions are tied to the number π, which Archimedes already approximated by fractions.
Also volumes were among the first quantities mathematicians wanted to measure and compute. A problem
on Moscow papyrus dating back to 1850 BC explains the general formula h(a2 + ab + b2 )/3 for a truncated
pyramid with base length a, roof length b and height h. Archimedes managed to compute the volume of the
sphere: place a cone inside a cylinder. The complement of the cone inside the cylinder has at each height h
the cross-section area π − πh^2. The half sphere cut at height h is a disc of radius √(1 − h^2) which has area π(1 − h^2) too.
Since the slices at each height have the same area, the volumes must be the same. The complement of the cone
inside the cylinder has volume π − π/3 = 2π/3, half the volume of the sphere.
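Cavalieri-style slicing as in Archimedes' argument can be mimicked with a Riemann sum: at each height the half-ball slice and the cylinder-minus-cone slice both have area π(1 − h^2). A numeric sketch:

```python
from math import pi

n = 100000
dh = 1.0 / n
# sum the slice areas pi*(1 - h^2) of the upper half ball over heights h in [0, 1)
half_ball = sum(pi * (1 - (k * dh) ** 2) * dh for k in range(n))
cyl_minus_cone = pi - pi / 3  # cylinder volume minus cone volume
assert abs(half_ball - 2 * pi / 3) < 1e-3
assert abs(cyl_minus_cone - 2 * pi / 3) < 1e-12
```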
The first geometric playground was planimetry, the geometry in the flat two dimensional space. Highlights
are Pythagoras theorem, Thales theorem, Hippocrates theorem, and Pappus theorem. Discoveries
in planimetry have been made later on: an example is the Feuerbach 9 point theorem from the 19th century.
Ancient Greek mathematics is closely tied to history. It starts with Thales, goes over Euclid's era around 300
BC and ends with the threefold destruction of Alexandria: 47 BC by the Romans, 392 by the Christians and
640 by the Muslims. Geometry was also the place where the axiomatic method was brought to mathematics:
theorems are proved from a few statements which are called axioms like the 5 axioms of Euclid:
Euclid wondered whether the fifth postulate can be derived from the first four and called theorems derived
from the first four "absolute geometry". Only much later, with Carl Friedrich Gauss, Janos Bolyai
and Nikolai Lobachevsky in the 19th century, was it realized that in hyperbolic space the fifth axiom does not hold. Indeed,
geometry can be generalized to non-flat, or even much more abstract situations. Basic examples are geometry
on a sphere leading to spherical geometry or geometry on the Poincare disc, a hyperbolic space. Both
of these geometries are non-Euclidean. Riemannian geometry, which is essential for general relativity
theory, generalizes both concepts to a great extent. An example is the geometry on an arbitrary surface. Cur-
vatures of such spaces can be computed by measuring lengths alone, that is, by how long light needs to travel from
one point to the next.
An important moment in mathematics was the merge of geometry with algebra: this giant step is often
attributed to René Descartes. Together with algebra, the subject leads to algebraic geometry which can
be tackled with computers. Various geometries are determined by the amount of symmetry which is allowed.
Here are four pictures of the 4 special points in a triangle, with which we will begin the lecture. We will
see why in each of these cases the 3 lines intersect in a common point. It is a manifestation of a symmetry
present on the space of all triangles: the distance between the intersection points remains constant 0 as we move in the
space of all triangular shapes. It's geometry!
If the sum of the proper divisors of a number n is equal to n, then n is called a perfect number. For example,
6 is perfect as its proper divisors 1, 2, 3 sum up to 6. All currently known perfect numbers are even. The
question whether odd perfect numbers exist is probably the oldest open problem in mathematics and not
settled. Perfect numbers were familiar to Pythagoras and his followers already. Calendar coincidences like that
we have 6 work days and the moon needs ”perfect” 28 days to circle the earth could have helped to promote
the "mystery" of perfect numbers. Euclid of Alexandria (300-275 BC) was the first to realize that if 2^p − 1
is prime then k = 2^(p−1)(2^p − 1) is a perfect number. [Proof: let σ(n) be the sum of all factors of n, including
n. Now σ((2^n − 1)2^(n−1)) = σ(2^n − 1)σ(2^(n−1)) = 2^n(2^n − 1) = 2 · 2^(n−1)(2^n − 1) shows σ(k) = 2k and verifies
that k is perfect.] Around 100 AD, Nicomachus of Gerasa (60-120) classified in his work "Introduction to
Arithmetic" numbers based on the concept of perfect numbers and listed four perfect numbers. Only much later it
became clear that Euclid got all the even perfect numbers: Euler showed that all even perfect numbers are of
the form (2^n − 1)2^(n−1), where 2^n − 1 is prime. The factor 2^n − 1 is called a Mersenne prime. [Proof: Assume
N = 2^k m is perfect, where m is odd and k > 0. Then 2^(k+1) m = 2N = σ(N) = (2^(k+1) − 1)σ(m). This gives
σ(m) = 2^(k+1) m/(2^(k+1) − 1) = m(1 + 1/(2^(k+1) − 1)) = m + m/(2^(k+1) − 1). Because σ(m) and m are integers, also
m/(2^(k+1) − 1) is an integer. It must also be a factor of m. The only way that σ(m) can be the sum of only
two of its factors is that m is prime and so 2^(k+1) − 1 = m.] The first 39 known Mersenne primes are of
the form 2n − 1 with n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253,
4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787,
1398269, 2976221, 3021377, 6972593, 13466917. There are 11 more known from which one does not know the
rank of the corresponding Mersenne prime: n = 20996011, 24036583, 25964951, 30402457, 32582657, 37156667,
42643801,43112609,57885161, 74207281,77232917. The last was found in December 2017 only. It is unknown
whether there are infinitely many.
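Euclid's construction can be checked by machine. Here is a minimal Python sketch; the naive divisor sum `sigma` and the helper `is_perfect` are illustrative names, not part of the text:

```python
# Sketch: verify Euclid's construction 2^(p-1) * (2^p - 1) on small primes p.
# sigma(n) sums all divisors of n including n itself; n is perfect iff sigma(n) == 2n.

def sigma(n):
    """Sum of all divisors of n, including n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_perfect(n):
    return sigma(n) == 2 * n

# For p with 2^p - 1 prime (a Mersenne prime), Euclid's number is perfect:
for p in [2, 3, 5, 7]:
    k = 2 ** (p - 1) * (2 ** p - 1)
    print(p, k, is_perfect(k))   # 6, 28, 496, 8128 are all perfect
```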
A polynomial equation for which all coefficients and variables are integers is called a Diophantine equation. The first Diophantine equation, studied already by the Babylonians, is x^2 + y^2 = z^2. A solution (x, y, z) of this equation in positive integers is called a Pythagorean triple. For example, (3, 4, 5) is a Pythagorean triple. Since 1600 BC it is known that all solutions to this equation are of the form (x, y, z) = (2st, s^2 - t^2, s^2 + t^2) or (x, y, z) = (s^2 - t^2, 2st, s^2 + t^2), where s > t are positive integers. [Proof: Either x or y has to be even, because if both are odd, then the sum x^2 + y^2 is even but not divisible by 4, while the right hand side is either odd or divisible by 4. Move the even one, say x^2, to the left and write x^2 = z^2 - y^2 = (z - y)(z + y); the right hand side contains a factor 4 and is of the form 4s^2 t^2. Therefore 2t^2 = z - y, 2s^2 = z + y. Solving for z, y gives z = s^2 + t^2, y = s^2 - t^2, x = 2st.]
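The parametrization above can be verified numerically; here is a small Python sketch (the helper `triple` is an illustrative name):

```python
# Sketch: generate Pythagorean triples (2st, s^2 - t^2, s^2 + t^2) from
# integers s > t > 0 and verify x^2 + y^2 = z^2, as in the parametrization above.

def triple(s, t):
    """Return the Pythagorean triple produced by integers s > t > 0."""
    return (2 * s * t, s * s - t * t, s * s + t * t)

for s in range(2, 5):
    for t in range(1, s):
        x, y, z = triple(s, t)
        assert x * x + y * y == z * z
        print((x, y, z))   # (4, 3, 5) appears for s = 2, t = 1
```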
Analyzing Diophantine equations can be difficult. Only in 1995 was it established that the Fermat equation x^n + y^n = z^n has no solutions with xyz ≠ 0 if n > 2. Here are some open problems for Diophantine equations. Are there nontrivial solutions to the following Diophantine equations?

x^6 + y^6 + z^6 + u^6 + v^6 = w^6, x, y, z, u, v, w > 0
x^5 + y^5 + z^5 = w^5, x, y, z, w > 0
x^k + y^k = n! z^k, k ≥ 2, n > 1
x^a + y^b = z^c, a, b, c > 2, gcd(a, b, c) = 1

The last equation is called Super Fermat. The Texan banker Andrew Beal once sponsored a prize of 1,000,000 dollars for a proof or counterexample to the statement: "If x^p + y^q = z^r with p, q, r > 2, then gcd(x, y, z) > 1."
Given a prime like 7 and a number n, we can add or subtract multiples of 7 from n to get a number in {0, 1, 2, 3, 4, 5, 6}. We write for example 19 = 12 mod 7 because 12 and 19 both leave the rest 5 when dividing by 7. Or 5 · 6 = 2 mod 7 because 30 leaves the rest 2 when dividing by 7. The most important theorem in elementary number theory is Fermat's little theorem, which tells that if a is an integer and p is prime, then a^p - a is divisible by p. For example, 2^7 - 2 = 126 is divisible by 7. [Proof: use induction. For a = 0 it is clear. The binomial expansion shows that (a+1)^p - a^p - 1 is divisible by p, because all the intermediate binomial coefficients are. This means (a+1)^p - (a+1) = (a^p - a) + mp for some m. By induction, a^p - a is divisible by p and so is (a+1)^p - (a+1).] Another beautiful theorem is Wilson's theorem, which allows to characterize primes: it tells that (n - 1)! + 1 is divisible by n if and only if n is a prime number. For example, for n = 5, we verify that 4! + 1 = 25 is divisible by 5. [Proof: assume n is prime. There are then exactly two numbers 1, -1 for which x^2 - 1 is divisible by n. The other numbers in 1, . . . , n - 1 can be paired as (a, b) with ab = 1 mod n. Rearranging the product shows (n - 1)! = -1 modulo n. Conversely, if n is not prime, then n = km with 1 < k ≤ m < n, and (n - 1)! is divisible by km = n (for n > 4; the case n = 4 is checked directly).]
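Both theorems invite a quick computational check. The following Python sketch uses brute force; the helper names are illustrative:

```python
# Brute-force checks of Fermat's little theorem and Wilson's theorem.
from math import factorial

def fermat_holds(a, p):
    """Fermat: for prime p, p divides a^p - a."""
    return (pow(a, p) - a) % p == 0

def wilson_prime(n):
    """Wilson's criterion: n > 1 is prime iff n divides (n-1)! + 1."""
    return (factorial(n - 1) + 1) % n == 0

print(fermat_holds(2, 7))                            # 2^7 - 2 = 126 = 18 * 7
print([n for n in range(2, 20) if wilson_prime(n)])  # exactly the primes below 20
```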
The solution to systems of linear equations like x = 3 (mod 5), x = 2 (mod 7) is given by the Chinese remainder theorem. To solve it, continue adding 5 to 3 until we reach a number which leaves the rest 2 when dividing by 7: on the list 3, 8, 13, 18, 23, 28, 33, 38, the number 23 is the solution. Since 5 and 7 have no common divisor, the system of linear equations always has a solution.
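The search procedure just described can be written in a few lines of Python (the helper `crt_search` is an illustrative name):

```python
# Sketch: solve x = 3 (mod 5), x = 2 (mod 7) by the search described above:
# keep adding 5 to 3 until the remainder modulo 7 is 2.

def crt_search(a, m, b, n):
    """Smallest x >= a with x = a (mod m) and x = b (mod n), by brute force."""
    x = a
    while x % n != b:
        x += m
    return x

print(crt_search(3, 5, 2, 7))  # the list 3, 8, 13, 18, 23 stops at 23
```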
For a given n, how do we solve x^2 - yn = 1 for the unknowns y, x? A solution produces a square root x of 1 modulo n. For prime n > 2, only x = 1 and x = -1 are solutions. For composite n = pq, more solutions appear: by the Chinese remainder theorem one can combine x = ±1 mod p with x = ±1 mod q to get four square roots of 1. Finding a nontrivial root x is equivalent to factoring n, because the greatest common divisor of x - 1 and n is then a nontrivial factor of n. Factoring is difficult if the numbers are large. This assures that encryption algorithms work and that bank accounts and communications stay safe. Number theory, once the least applied discipline of mathematics, has become one of the most applied ones.
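The factoring connection can be illustrated numerically: a nontrivial square root of 1 modulo n yields a factor via a greatest common divisor. A Python sketch for n = 91 = 7 · 13:

```python
# A nontrivial square root x of 1 modulo n = pq reveals a factor of n,
# since gcd(x - 1, n) is then a proper divisor. Illustrated for n = 91 = 7 * 13.
from math import gcd

n = 91
roots = [x for x in range(n) if (x * x) % n == 1]
print(roots)                      # [1, 27, 64, 90]: four roots for n = pq

for x in roots:
    if x not in (1, n - 1):
        print(x, gcd(x - 1, n))   # the gcd gives a nontrivial factor of 91
```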
Lecture 5: Algebra
Algebra studies algebraic structures like "groups" and "rings". The theory allows to solve polynomial equations, characterizes objects by their symmetries and is the heart and soul of many puzzles. Lagrange claims Diophantus to be the inventor of algebra; others argue that the subject started with solutions of quadratic equations by Mohammed ben Musa Al-Khwarizmi in the book Al-jabr w'al muqabala of 830 AD. Equations like x^2 + 10x = 39 are solved there by completing the square: add 25 on both sides to get x^2 + 10x + 25 = 64, so that (x + 5)^2 = 64, giving x + 5 = 8 and x = 3.
The variables used in school in elementary algebra were only introduced later. Ancient texts dealt with particular examples, and calculations were done with concrete numbers in the realm of arithmetic. François Viète (1540-1603) was the first to use letters like A, B, C, X for variables.
The search for formulas for polynomial equations of degree 3 and 4 lasted 700 years. In the 16th century, the cubic and quartic equations were solved. Niccolo Tartaglia and Gerolamo Cardano reduced the cubic to the quadratic: [first remove the quadratic part with X = x - a/3, so that X^3 + aX^2 + bX + c becomes the depressed cubic x^3 + px + q. Now substitute x = u - p/(3u) to get a quadratic equation (u^6 + qu^3 - p^3/27)/u^3 = 0 for u^3.] Lodovico Ferrari showed that the quartic equation can be reduced to the cubic. For the quintic however, no formulas could be found. It was Paolo Ruffini, Niels Abel and Évariste Galois who independently realized that there are no formulas in terms of radicals which allow to "solve" equations p(x) = 0 for polynomials p of degree larger than 4. This was an amazing achievement and the birth of "group theory".
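Cardano's substitution can be turned into a short numerical sketch. The Python function below assumes real coefficients and a nonnegative discriminant (so exactly one real root); the name `depressed_cubic_root` is illustrative:

```python
# Sketch: Cardano's substitution for the depressed cubic x^3 + p x + q = 0.
# Setting x = u - p/(3u) gives a quadratic in w = u^3, solved by the usual formula.

def depressed_cubic_root(p, q):
    """One real root of x^3 + p x + q = 0 (assumes (q/2)^2 + (p/3)^3 >= 0)."""
    disc = (q / 2) ** 2 + (p / 3) ** 3     # discriminant of the quadratic in w
    w = -q / 2 + disc ** 0.5               # one root w = u^3
    u = w ** (1 / 3) if w >= 0 else -((-w) ** (1 / 3))
    return u - p / (3 * u)

x = depressed_cubic_root(1, -2)            # x^3 + x - 2 = (x - 1)(x^2 + x + 2)
print(x)                                   # close to the real root 1
```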
Two important algebraic structures are groups and rings.
In a group G one has an operation ∗, an inverse a^(-1) and a unit element 1 such that a ∗ (b ∗ c) = (a ∗ b) ∗ c, a ∗ 1 = 1 ∗ a = a and a ∗ a^(-1) = a^(-1) ∗ a = 1. For example, the set Q^∗ of nonzero fractions p/q with the multiplication operation and inverse 1/a forms a group. The integers with addition, inverse a^(-1) = -a and unit element 0 form a group too. A ring R has two compositions + and ∗, where the plus operation makes R a group satisfying a + b = b + a, in which the unit element is called 0. The multiplication operation ∗ has all group properties on R^∗ except the existence of an inverse. The two operations + and ∗ are glued together by the distributive law a ∗ (b + c) = a ∗ b + a ∗ c. Examples of rings are the integers, the rational numbers and the real numbers. The latter two are actually fields, rings for which the nonzero elements form a group under multiplication. The ring of integers is not a field because an integer like 5 has no multiplicative inverse; the ring of rational numbers however is a field.
Why is the theory of groups and rings not part of arithmetic? First of all, a crucial ingredient of algebra is the appearance of variables and computations with them without using concrete numbers. Second, the algebraic structures are not restricted to "numbers". Groups and rings are general structures and extend for example to objects like the set of all possible symmetries of a geometric object. The set of all similarity operations on the plane for example forms a group. An important example of a ring is the polynomial ring of all polynomials. Given any ring R and a variable x, the set R[x] consists of all polynomials with coefficients in R. Addition and multiplication are done like in (x^2 + 3x + 1) + (x - 7) = x^2 + 4x - 6. The problem to factor a given polynomial with integer coefficients into polynomials of smaller degree (x^2 - x - 2 for example can be written as (x + 1)(x - 2)) has a number theoretical flavor. Because symmetries of a structure form a group, we also have intimate connections with geometry. But this is not the only connection with geometry. Geometry also enters through polynomial rings with several variables. Solutions to f(x, y) = 0 lead to geometric objects with shape and symmetry, which sometimes even have their own algebraic structure. They are called varieties, a central object in algebraic geometry, objects which in turn have been generalized further.
Arithmetic introduces addition and multiplication of numbers. Both lead to groups. The operations can be written additively or multiplicatively. Let's look at this a bit closer: for integers, fractions and reals with the addition +, the unit element 0 and the inverse -g, we have a group. Many groups are written multiplicatively, where the unit element is 1. In the case of fractions or reals, 0 is not part of the multiplicative group because it is not possible to divide by 0. The nonzero fractions and the nonzero reals form groups. In all these examples the groups satisfy the commutative law g ∗ h = h ∗ g.
Here is a group which is not commutative: let G be the set of all rotations in space which leave the unit cube invariant. There are 3·3 = 9 rotations around the three major coordinate axes, then 6 rotations around axes connecting midpoints of opposite edges, then 2·4 = 8 rotations around space diagonals. Together with the identity rotation e, these are 24 rotations. The group operation is the composition of these transformations.
Another example of a group is S4, the set of all permutations of the four numbers (1, 2, 3, 4). If g : (1, 2, 3, 4) → (2, 3, 4, 1) is a permutation and h : (1, 2, 3, 4) → (3, 1, 2, 4) is another permutation, then we can combine the two and define h ∗ g as the permutation which does first g and then h. We end up with the permutation (1, 2, 3, 4) → (1, 2, 4, 3). The rotational symmetry group of the cube happens to be the same as the group S4. To see this "isomorphism", label the 4 space diagonals of the cube by 1, 2, 3, 4. Given a rotation, we can look at the induced permutation of the diagonals, and every rotation corresponds to exactly one permutation. The symmetry group can be introduced for any geometric object, for shapes like the triangle, the cube, the octahedron, or tilings in the plane.
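The composition h ∗ g computed above can be checked in Python, with permutations represented as dictionaries (an illustrative sketch):

```python
# Sketch: composing the two permutations from the text, in one-line notation.
# g sends (1,2,3,4) to (2,3,4,1) and h sends (1,2,3,4) to (3,1,2,4);
# h * g means "first g, then h".

def compose(h, g):
    """(h * g)(i) = h(g(i)); permutations given as dicts on {1,2,3,4}."""
    return {i: h[g[i]] for i in g}

g = {1: 2, 2: 3, 3: 4, 4: 1}
h = {1: 3, 2: 1, 3: 2, 4: 4}
hg = compose(h, g)
print([hg[i] for i in (1, 2, 3, 4)])  # [1, 2, 4, 3] as claimed in the text
```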
Many puzzles are groups. A popular puzzle, the 15-puzzle, was invented in 1874 by Noyes Palmer Chapman in the state of New York. If the hole is given the number 0, then the task of the puzzle is to order a given random start permutation of the 16 pieces. To do so, the user is allowed to transpose 0 with a neighboring piece. Since every step changes the signature s of the permutation and changes the taxi-metric distance d of 0 to the end position by 1, the parity of s + d is invariant, and only half of all positions can be reached. It was Sam Loyd who suggested to start with an impossible position and, as an evil plot, to offer 1000 dollars for a solution. The 15-puzzle group has 16!/2 elements and its "god number" is between 152 and 208. The Rubik cube is another famous puzzle which is a group. Exactly 100 years after the invention of the 15-puzzle, the Rubik puzzle was introduced in 1974. It is still popular, and the world record is to have it solved in 5.55 seconds. All cubes from 2x2x2 to 7x7x7 in a row have been solved in a total time of 6 minutes. For the 3x3x3 cube, the god number is now known to be 20: one can always solve it in 20 or fewer moves.
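The parity invariant of the 15-puzzle can be tested in code. In this sketch the hole is labeled 16 rather than 0, so that the solved board is the identity permutation with even signature; the function names are illustrative:

```python
# Sketch of the solvability test: a position is reachable iff the permutation
# signature s plus the taxi distance d of the hole to its home corner is even.

def signature(perm):
    """Parity (0 or 1) of the number of inversions of a permutation list."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return inv % 2

def solvable(board):
    """board: 16 numbers 1..16 read row by row, 16 marking the hole."""
    k = board.index(16)
    d = abs(k // 4 - 3) + abs(k % 4 - 3)       # taxi distance of the hole
    return (signature(board) + d) % 2 == 0     # parity of s + d is invariant

solved = list(range(1, 17))
loyd = list(range(1, 17))
loyd[12], loyd[13] = loyd[13], loyd[12]        # Sam Loyd's 14-15 swap
print(solvable(solved), solvable(loyd))        # True False
```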
A small Rubik-type game is the "floppy", which is a third of the Rubik cube and which has only 192 elements. Another example is Meffert's great challenge. Probably the simplest example of a Rubik-type puzzle is the pyramorphix, a puzzle based on the tetrahedron. Its group has only 24 elements: it is the group of all possible permutations of 4 elements. It is the same group as the group of all reflection and rotation symmetries of the cube in three dimensions, and it is also relevant when understanding the solutions to the quartic equation discussed at the beginning. The circle is closed.
Lecture 6: Calculus
Calculus generalizes the process of taking differences and taking sums. Differences measure change, sums
explore how quantities accumulate. The procedure of taking differences has a limit called derivative. The
activity of taking sums leads to the integral. Sum and difference are dual to each other and related in an
intimate way. In this lecture, we look first at a simple set-up, where functions are evaluated on integers and
where we do not take any limits.
Tens of thousands of years ago, numbers were represented by units like 1, 1, 1, 1, 1, 1, . . . . The units were carved into sticks or bones like the Ishango bone. It took thousands of years until numbers were represented with symbols like 0, 1, 2, 3, 4, . . . . Using the modern concept of function, we can say f(0) = 0, f(1) = 1, f(2) = 2, f(3) = 3 and mean that the function f assigns to an input like 1001 an output like f(1001) = 1001. Now look at Df(n) = f(n + 1) - f(n), the difference. We see that Df(n) = 1 for all n. We can also formalize the summation process. If g(n) = 1 is the constant 1 function, then Sg(n) = g(0) + g(1) + · · · + g(n - 1) = 1 + 1 + · · · + 1 = n. We see that Df = g and Sg = f. If we start with f(n) = n and apply summation on that function, then Sf(n) = f(0) + f(1) + f(2) + · · · + f(n - 1), leading to the values 0, 1, 3, 6, 10, 15, 21, . . . . The new function g = Sf satisfies g(2) = 1, g(3) = 3, g(4) = 6, etc. The values are called the triangular numbers. From g we can get back f by taking differences: Dg(n) = g(n + 1) - g(n) = f(n). For example, Dg(5) = g(6) - g(5) = 15 - 10 = 5, which indeed is f(5). Finding a formula for the sum Sf(n) is not so easy. Can you do it? When Carl Friedrich Gauss was a 9 year old school kid, his teacher, a Mr. Büttner, gave him the task to sum up the first 100 numbers 1 + 2 + · · · + 100. Gauss found the answer immediately by pairing things up: to add up 1 + 2 + 3 + · · · + 100 he would write this as (1 + 100) + (2 + 99) + · · · + (50 + 51), leading to 50 terms of 101 and the value 5050; in general, g(n) = n(n - 1)/2, which gives g(101) = 5050. Taking differences again is easier: Dg(n) = n(n + 1)/2 - n(n - 1)/2 = n = f(n). If we add up the triangular numbers we compute h = Sg, which has the first values 0, 1, 4, 10, 20, 35, . . . . These are the tetrahedral numbers, because h(n) balls are needed to build a tetrahedron of side length n - 2. For example, h(6) = 20 golf balls are needed to build a tetrahedron of side length 4. The formula which holds for h is h(n) = n(n - 1)(n - 2)/6. Here is the fundamental theorem of calculus, which is the core of calculus:

SDf(n) = f(n) - f(0),   DSf(n) = f(n).
Proof.

SDf(n) = Σ_{k=0}^{n-1} [f(k + 1) - f(k)] = f(n) - f(0),

DSf(n) = Σ_{k=0}^{n} f(k) - Σ_{k=0}^{n-1} f(k) = f(n).
The process of adding up numbers will lead to the integral ∫_0^x f(t) dt. The process of taking differences will lead to the derivative (d/dx) f(x). The familiar notation is

∫_0^x (d/dt) f(t) dt = f(x) - f(0),   (d/dx) ∫_0^x f(t) dt = f(x).
If we define [n]^0 = 1, [n]^1 = n, [n]^2 = n(n - 1), [n]^3 = n(n - 1)(n - 2), then D[n]^1 = 1, D[n]^2 = 2[n]^1, D[n]^3 = 3[n]^2 and in general

D[n]^k = k [n]^(k-1),

the discrete analogue of the rule (d/dx) x^k = k x^(k-1).
The calculus you have just seen contains the essence of single variable calculus. This core idea will become more powerful and natural if we use it together with the concept of limit.
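The discrete fundamental theorem can be verified directly in Python by implementing D and S as operators on functions (an illustrative sketch):

```python
# Sketch: the difference operator D and sum operator S on functions of n,
# verifying SDf(n) = f(n) - f(0) and DSf(n) = f(n).

def D(f):
    return lambda n: f(n + 1) - f(n)

def S(f):
    return lambda n: sum(f(k) for k in range(n))

f = lambda n: n * n              # any test function works here
SDf, DSf = S(D(f)), D(S(f))
print([SDf(n) == f(n) - f(0) for n in range(6)])   # all True
print([DSf(n) == f(n) for n in range(6)])          # all True
```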
Problem: The Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, 21, . . . satisfies the rule f(x) = f(x - 1) + f(x - 2). For example, f(6) = 8. What is the function g = Df, if we assume f(0) = 0? We take the difference between successive numbers and get the sequence 0, 1, 1, 2, 3, 5, 8, . . . , which is the same sequence again. We see that Df(x) = f(x - 1).

If we take the same function f but now compute the function h(n) = Sf(n), we get the sequence 1, 2, 4, 7, 12, 20, 33, . . . . What sequence is that? Solution: Because Df(x) = f(x - 1), we have f(x) - f(0) = SDf(x) = Sf(x - 1), so that Sf(x) = f(x + 1) - f(1). Summing the Fibonacci sequence produces the Fibonacci sequence shifted to the left, with f(1) = 1 subtracted. It has been relatively easy to find the sum because we knew what the difference operation did. This example shows: we can study differences to understand sums.
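The summation identity for the Fibonacci sequence can be checked numerically (a Python sketch with illustrative helper names):

```python
# Sketch: checking Sf(x) = f(x+1) - f(1) for the Fibonacci function
# with f(0) = 0, f(1) = f(2) = 1, as derived in the text.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def Sfib(n):
    """Sf(n) = f(0) + f(1) + ... + f(n-1)."""
    return sum(fib(k) for k in range(n))

print([Sfib(n) for n in range(1, 8)])                   # 0, 1, 2, 4, 7, 12, 20
print(all(Sfib(n) == fib(n + 1) - fib(1) for n in range(20)))   # True
```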
Problem: The function f(n) = 2^n is called the exponential function. We have for example f(0) = 1, f(1) = 2, f(2) = 4, . . . . It leads to the sequence of numbers

n =    0 1 2 3 4  5  6  7   8   ...
f(n) = 1 2 4 8 16 32 64 128 256 ...

We can verify that f satisfies the equation Df(x) = f(x), because Df(x) = 2^(x+1) - 2^x = (2 - 1)2^x = 2^x. The function 2^x is a special case of the exponential function when the Planck constant is equal to 1. We will see that the relation Df = f will hold for any h > 0 and also in the limit h → 0, where it becomes the classical exponential function e^x, which plays an important role in science.
Calculus has many applications: computing areas, volumes, solving differential equations. It even has applications in arithmetic. Here is an example for illustration: a proof that π is irrational. The theorem is due to Johann Heinrich Lambert (1728-1777). We show here the proof by Ivan Niven, given in the book of Niven-Zuckerman-Montgomery; it originally appeared in 1947 (Ivan Niven, Bull. Amer. Math. Soc. 53 (1947), 509). The proof illustrates how calculus can help to get results in arithmetic.

Proof. Assume π = a/b with positive integers a and b. For any positive integer n define f(x) = x^n (a - bx)^n / n!, which satisfies

0 ≤ f(x) ≤ π^n a^n / n!   (∗)

for 0 ≤ x ≤ π. For all 0 ≤ j < n, the j-th derivative of f is zero at 0 and π, and for j ≥ n, the j-th derivative of f is an integer at 0 and π.
The function F(x) = f(x) - f^(2)(x) + f^(4)(x) - · · · + (-1)^n f^(2n)(x) has the property that F(0) and F(π) are integers and F + F'' = f. Therefore, (F'(x) sin(x) - F(x) cos(x))' = f(x) sin(x). By the fundamental theorem of calculus, ∫_0^π f(x) sin(x) dx is an integer. Inequality (∗) implies however that this integral is strictly between 0 and 1 for large enough n. For such an n we get a contradiction.
The set of all subsets of N has the same cardinality as the interval [0, 1]. The set of all finite subsets of N however can be counted. The set of all subsets of the real numbers has cardinality ℵ2, etc. Is there a cardinality between ℵ0 and ℵ1? In other words, is there a set which can not be counted and which is strictly smaller than the continuum, in the sense that one can not find a bijection between it and R? This was the first of the 23 problems posed by Hilbert in 1900. The answer is surprising: one has a choice. One can accept either the "yes" or the "no" as a new axiom. In both cases, mathematics is still fine. The nonexistence of a cardinality between ℵ0 and ℵ1 is called the continuum hypothesis and is usually abbreviated CH. It is independent of the other axioms making up mathematics. This was the work
of Kurt Gödel in 1940 and Paul Cohen in 1963. The story of exploring the consistency and completeness of axiom systems of all of mathematics is exciting. Euclid axiomatized geometry, but Hilbert's program was more ambitious: he aimed at an axiom system for all of mathematics. The challenge to prove Euclid's 5th postulate is paralleled by the quest to prove the CH. But the latter is much more fundamental because it deals with all of mathematics and not only with some geometric space. Here are the Zermelo-Fraenkel axioms (ZFC) including the axiom of choice (C), as established by Ernst Zermelo in 1908 and Adolf Fraenkel and Thoralf Skolem in 1922.
Extension If two sets have the same elements, they are the same.
Image Given a function and a set, then the image of the function is a set too.
Pairing For any two sets, there exists a set which contains both sets.
Property For any property, there exists a set for which each element has the property.
Union Given a set of sets, there exists a set which is the union of these sets.
Power Given a set, there exists the set of all subsets of this set.
Infinity There exists an infinite set.
Regularity Every nonempty set has an element which has no intersection with the set.
Choice Any set of nonempty sets leads to a set which contains an element from each.
There are other systems like ETCS, the elementary theory of the category of sets. In category theory, not the sets but the categories are the building blocks. Categories do not form a set in general. This elegantly avoids the Russell paradox too. The axiom of choice (C) has a nonconstructive nature which can lead to seemingly paradoxical results like the Banach-Tarski paradox: one can cut the unit ball into 5 pieces, then rotate and translate the pieces to assemble two balls of the same size as the original ball. Gödel and Cohen showed that the axiom of choice is logically independent of the other axioms ZF. Other axioms in ZF have been shown to be independent too, like the axiom of infinity. A finitist would refute this axiom and work without it. It is surprising what one can do with finite sets. The axiom of regularity excludes Russellian sets like the set X of all sets which do not contain themselves. The Russell paradox is: does X contain X? It is popularized as the barber riddle: a barber in a town shaves exactly those who do not shave themselves. Does the barber shave himself? Gödel's theorems of 1931 deal with mathematical theories which are strong enough to do basic arithmetic in them.
First incompleteness theorem: In any such theory there are true statements which can not be proved within the theory.

Second incompleteness theorem: In any such theory, the consistency of the theory can not be proven within the theory.

The proof uses an encoding of mathematical sentences which allows one to state the liar-type sentence "this sentence can not be proved". While the latter sounds like a recreational entertainment gag, it is the core of a theorem which makes striking statements about mathematics. These theorems are not limitations of mathematics; they illustrate its infiniteness. How awful it would be if one could build an axiom system and enumerate mechanically all possible truths from it.
The distribution with density e^(-x^2/2)/√(2π) is called the standard normal distribution. Analyzed first by Abraham de Moivre in 1733, it was studied by Carl Friedrich Gauss in 1807 and is therefore also called the Gaussian distribution.
Two random variables X, Y are called uncorrelated if E[XY] = E[X] · E[Y]. If for any functions f, g also f(X) and g(Y) are uncorrelated, then X, Y are called independent. Two random variables are said to have the same distribution if for any a < b, the events {a ≤ X ≤ b} and {a ≤ Y ≤ b} have the same probability. If X, Y are uncorrelated, then the relation Var[X] + Var[Y] = Var[X + Y] holds, which is just the Pythagoras theorem, because uncorrelated can be understood geometrically: X - E[X] and Y - E[Y] are orthogonal. A common problem is to study the sum of independent random variables X_n with identical distribution, abbreviated IID. The three most important theorems, the law of large numbers (LLN), the central limit theorem (CLT) and the law of the iterated logarithm (LIL), are formulated here in the case where all random variables have expectation 0 and standard deviation 1, with S_n = X_1 + · · · + X_n the n-th sum of these random variables.
The LLN shows that one can find out about the expectation by averaging experiments. The CLT explains why one sees the standard normal distribution so often. The LIL finally gives us a precise estimate how fast S_n grows. Things become interesting if the random variables are no longer independent. Generalizing LLN, CLT, LIL to such situations is part of ongoing research.
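The LLN and CLT can be illustrated with a simulation of rescaled coin flips X = ±1 (a Python sketch; the thresholds are loose sanity bounds, not sharp statements):

```python
# Simulation sketch: coin flips X = +/-1 have mean 0 and standard deviation 1.
import random

random.seed(1)
n, trials = 1000, 1000
sums = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]

# LLN: the averages S_n / n concentrate near the expectation 0
print(max(abs(s) / n for s in sums) < 0.2)

# CLT: S_n / sqrt(n) is approximately standard normal, so roughly 68%
# of the rescaled sums fall into [-1, 1]
inside = sum(1 for s in sums if abs(s) / n ** 0.5 <= 1) / trials
print(0.6 < inside < 0.76)
```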
Are numbers like π, e, √2 normal: do all digits appear with the same frequency?

What growth rates Λ_n can occur in S_n/Λ_n having limsup 1 and liminf -1?

For the second question, there are examples for Λ_n = 1, Λ_n = log(n) and of course Λ_n = √(n log log(n)) from the LIL if the random variables are independent. Examples of random variables which are not independent are X_n = cos(n√2).
Statistics is the science of modeling random events in a probabilistic setup. Given data points, we want to find a model which fits the data best. This allows to understand the past, predict the future or discover laws of nature. The most common task is to find the mean and the standard deviation of some data. The mean is also called the average and is given by m = (1/n) Σ_{k=1}^n x_k. The variance is σ^2 = (1/n) Σ_{k=1}^n (x_k - m)^2, with standard deviation σ.
A sequence of random variables X_n defines a so-called stochastic process. Continuous versions of such processes are families X_t of random variables indexed by continuous time t. An important example is Brownian motion, which is a model of a randomly moving particle.
Besides gambling and analyzing data, physics was also an important motivator to develop probability theory. An example is statistical mechanics, where the laws of nature are studied with probabilistic methods. A famous physical law is Ludwig Boltzmann's relation S = k log(W) for entropy, a formula which decorates Boltzmann's tombstone. The entropy of a probability measure P[{k}] = p_k on a finite set {1, ..., n} is defined as S = - Σ_{i=1}^n p_i log(p_i). Today, we would reformulate Boltzmann's law and say that it is the expectation S = E[log(W)] of the logarithm of the "Wahrscheinlichkeit" random variable W(i) = 1/p_i on Ω = {1, ..., n}. Entropy is important because nature tries to maximize it.
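The reformulation S = E[log W] agrees with the defining formula, as a quick Python check shows:

```python
# Sketch: compare S = -sum p_i log p_i with the expectation E[log W], W(i) = 1/p_i.
from math import log

p = [0.5, 0.25, 0.125, 0.125]
S_direct = -sum(pi * log(pi) for pi in p)
S_expect = sum(pi * log(1 / pi) for pi in p)    # E[log W]
print(abs(S_direct - S_expect) < 1e-12)         # True: the two formulas agree
```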
Lecture 9: Topology
Topology studies properties of geometric objects which do not change under continuous reversible deforma-
tions. In topology, a coffee cup with a single handle is the same as a doughnut. One can deform one into the
other without punching any holes in it or ripping it apart. Similarly, a plate and a croissant are the same. But
a croissant is not equivalent to a doughnut. On a doughnut, there are closed curves which can not be pulled
together to a point. For a topologist, the letters O and P are equivalent but different from the letter B.
The mathematical setup is beautiful: a topological space is a set X with a set O of subsets of X containing
both ∅ and X such that finite intersections and arbitrary unions in O are in O. Sets in O are called open sets
and O is called a topology. The complement of an open set is called closed. Examples of topologies are the
trivial topology O = {∅, X}, where no open sets besides the empty set and X exist or the discrete topology
O = {A | A ⊂ X}, where every subset is open. But these are in general not interesting. An important example
on the plane X is the collection O of sets U in the plane X for which every point is the center of a small disc
still contained in U . A special class of topological spaces are metric spaces, where a set X is equipped with a
distance function d(x, y) = d(y, x) ≥ 0 which satisfies the triangle inequality d(x, y) + d(y, z) ≥ d(x, z) and
for which d(x, y) = 0 if and only if x = y. A set U in a metric space is open if to every x in U , there is a ball
B_r(x) = {y | d(x, y) < r} of positive radius r contained in U. Metric spaces are topological spaces but not vice versa: the trivial topology, for example, does in general not come from a metric. For doing calculus on a topological space X, each
point has a neighborhood called chart which is topologically equivalent to a disc in Euclidean space. Finitely
many neighborhoods covering X form an atlas of X. If the charts are glued together with identification maps
on the intersection one obtains a manifold. Two dimensional examples are the sphere, the torus, the projective plane or the Klein bottle. Topological spaces X, Y are called homeomorphic, meaning "topologically equivalent", if there is an invertible map from X to Y such that this map induces an invertible map on the
corresponding topologies. How can one decide whether two spaces are equivalent in this sense? The surface of
the coffee cup for example is equivalent in this sense to the surface of a doughnut but it is not equivalent to the
surface of a sphere. Many properties of a geometric space can be understood by discretizing it, for example with a graph. A graph is a finite collection of vertices V together with a finite set of edges E, where each edge connects two points in V. For example, the set V of cities in the US where the edges are pairs of cities connected by a street
is a graph. The Königsberg bridge problem was a trigger puzzle for the study of graph theory. Polyhedra were another starting point of graph theory. Their study is loosely related to the analysis of surfaces, the reason being that one can see polyhedra as discrete versions of surfaces. In computer graphics for example, surfaces are rendered as finite graphs, using triangulations. The Euler characteristic of a convex polyhedron is a remarkable topological invariant: V - E + F = 2, where V is the number of vertices, E the number of edges and F the number of faces. This number is equal to 2 for connected polyhedra in which every closed loop can be pulled together to a point. This formula for the Euler characteristic is also called Euler's gem. It comes with a rich history. René Descartes stumbled upon it and wrote it down in a secret notebook. It was Leonhard Euler in 1752 who first proved the formula for convex polyhedra. A convex polyhedron is called a Platonic solid if all vertices are on the unit sphere, all edges have the same length and all faces are congruent polygons.
A theorem of Theaetetus states that there are only five Platonic solids. [Proof: Assume the faces are regular n-gons and m of them meet at each vertex. Besides the Euler relation V - E + F = 2, a polyhedron also satisfies the relations nF = 2E and mV = 2E, which come from counting vertices or edges in different ways. This gives 2E/m - E + 2E/n = 2, or 1/n + 1/m = 1/E + 1/2. From n ≥ 3 and m ≥ 3 we see that it is impossible that both m and n are larger than 3. There are now only two possibilities: either n = 3 or m = 3. In the case n = 3 we have m = 3, 4, 5; in the case m = 3 we have n = 3, 4, 5. The five possibilities (3, 3), (3, 4), (3, 5), (4, 3), (5, 3)
represent the five Platonic solids.] The pairs (n, m) are called the Schläfli symbol of the polyhedron:
Name          V   E   F   V-E+F  Schläfli
tetrahedron    4   6   4   2     {3, 3}
hexahedron     8  12   6   2     {4, 3}
octahedron     6  12   8   2     {3, 4}
dodecahedron  20  30  12   2     {5, 3}
icosahedron   12  30  20   2     {3, 5}
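The inequality 1/n + 1/m > 1/2 from the proof above can be used to enumerate the Schläfli pairs by machine (a Python sketch):

```python
# Sketch: recover the five Schlaefli pairs (n, m) from the constraint
# 1/n + 1/m = 1/E + 1/2 > 1/2 derived above (n-gon faces, m meeting per vertex).

pairs = [(n, m) for n in range(3, 10) for m in range(3, 10)
         if 1 / n + 1 / m > 1 / 2]
print(pairs)  # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]

# the corresponding edge count E follows from 1/n + 1/m - 1/2 = 1/E
for n, m in pairs:
    E = round(1 / (1 / n + 1 / m - 1 / 2))
    print((n, m), "E =", E)   # E = 6, 12, 30, 12, 30
```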
The Greeks proceeded geometrically: Euclid showed in the "Elements" that each vertex can have either 3, 4 or 5 equilateral triangles attached, 3 squares or 3 regular pentagons. (6 triangles, 4 squares or 4 pentagons would lead to a total angle which is too large, because each corner must have at least 3 different edges.) Simon Antoine-Jean L'Huilier refined Euler's formula in 1813 to situations with holes: V - E + F = 2 - 2g, where g is the number of holes. For a doughnut it is V - E + F = 0. Cauchy first proved that there are 4 non-convex regular Kepler-Poinsot polyhedra.
If two different face types are allowed but each vertex still looks the same, one obtains the 13 semi-regular polyhedra. They were first studied by Archimedes. Since his work is lost, Johannes Kepler is considered the first since antiquity to describe all of them in his "Harmonices Mundi". The Euler characteristic for surfaces is χ = 2 - 2g, where g is the number of holes. The computation can be done by triangulating the surface. The Euler characteristic characterizes smooth compact surfaces if they are orientable. A non-orientable surface,
the Klein bottle can be obtained by gluing ends of the Möbius strip. Classifying higher dimensional manifolds
is more difficult and finding good invariants is part of modern research. Higher analogues of polyhedra are
called polytopes (a term coined by Alicia Boole Stott). Regular polytopes are the analogues of the Platonic solids
in higher dimensions.
Ludwig Schläfli saw in 1852 that there are exactly six convex regular 4-polytopes or polychora, where "choros"
is Greek for "space". Schläfli's polyhedral formula V − E + F − C = 0 holds, where C
is the number of 3-dimensional chambers. In dimensions 5 and higher, there are only 3 types of poly-
topes: the higher dimensional analogues of the tetrahedron, octahedron and the cube. A general formula
Σ_{k=0}^{d−1} (−1)^k v_k = 1 − (−1)^d
gives the Euler characteristic of a convex polytope in d dimensions with v_k counting the
k-dimensional parts.
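The formula can be tested on the d-dimensional simplex, whose number of k-dimensional faces is the binomial coefficient C(d+1, k+1); a small sketch assuming that standard count:

```python
from math import comb

# v_k = C(d+1, k+1) counts the k-dimensional faces of the d-simplex.
def euler_characteristic_simplex(d):
    return sum((-1) ** k * comb(d + 1, k + 1) for k in range(d))

for d in range(1, 12):
    assert euler_characteristic_simplex(d) == 1 - (-1) ** d
```

For d = 3 this is 4 − 6 + 4 = 2, the tetrahedron row of the table above.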
Analysis is a science of measure and optimization. As a rather diverse collection of mathematical fields, it
contains real and complex analysis, functional analysis, harmonic analysis and calculus of variations.
Analysis has relations to calculus, geometry, topology, probability theory and dynamical systems. We focus
here mostly on ”the geometry of fractals” which can be seen as part of dimension theory. Examples are Julia
sets which belong to the subfield of ”complex analysis” of ”dynamical systems”. ”Calculus of variations” is
illustrated by the Kakeya needle set in ”geometric measure theory”, ”Fourier analysis” appears when looking
at functions which have fractal graphs, ”spectral theory” as part of functional analysis is represented by the
”Hofstadter butterfly”. We somehow describe the topic using ”pop icons”.
A fractal is a set with non-integer dimension. An example is the Cantor set, discovered in 1875 by Henry
Smith. Start with the unit interval. Cut out the middle third, then cut the middle third from both remaining parts,
then from each of the four remaining parts, etc. The limiting set is the Cantor set. The mathematical theory of fractals belongs
to measure theory and can also be thought of as a playground for real analysis or topology. The term fractal
was introduced by Benoit Mandelbrot in 1975. Dimension can be defined in different ways. The simplest
is the box counting definition which works for most household fractals: if we need n squares of length r to
cover a set, then d = − log(n)/ log(r) converges to the dimension of the set as r → 0. A curve
of length L for example needs L/r squares of length r, so that its dimension is 1. A region of area A needs A/r²
squares of length r to be covered and its dimension is 2. The Cantor set needs to be covered with n = 2^m squares
of length r = 1/3^m. Its dimension is − log(n)/ log(r) = −m log(2)/(m log(1/3)) = log(2)/ log(3). Examples of
fractals are the graph of the Weierstrass function (1872), the Koch snowflake (1904), the Sierpinski carpet (1915)
and the Menger sponge (1926).
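The box-counting computation for the Cantor set can be reproduced numerically. A sketch (my own, working in integer units of 3^(-level) so that no rounding occurs):

```python
import math

def cantor_boxes(level, box_level):
    # Left endpoints of the 2^level Cantor construction intervals, in
    # integer units of 3^(-level); they stay exact integers throughout.
    pts = [0]
    for k in range(level):
        step = 3 ** (level - k - 1)
        pts = [p for x in pts for p in (x, x + 2 * step)]
    r = 3 ** (level - box_level)      # box size 3^(-box_level) in units
    return len({p // r for p in pts}) # number of boxes hit

n = cantor_boxes(10, 8)               # 2^8 boxes of size 3^-8 are needed
d = -math.log(n) / math.log(3.0 ** -8)
assert abs(d - math.log(2) / math.log(3)) < 1e-12
```

With n = 2^8 boxes of size r = 3^-8 one gets d = 8 log 2 / (8 log 3) = log 2 / log 3, as computed in the text.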
Complex analysis extends calculus to the complex domain. It deals with functions f (z) defined in the complex plane.
Integration is done along paths. Complex analysis completes the understanding about functions. It also provides
more examples of fractals by iterating functions like the quadratic map f (z) = z 2 + c:
Functions had been iterated before, for example in the Newton method (1879). The Julia sets were introduced in
1918, the Mandelbrot set in 1978 and the Mandelbar set in 1989. Particularly famous are the Douady rabbit
and the dragon, the dendrite, the airplane. Calculus of variations is calculus in infinite dimensions.
Taking derivatives is called taking "variations". Historically, it started with the problem of finding the curve of
fastest fall, leading to the Brachistochrone curve ~r(t) = (t − sin(t), 1 − cos(t)). In calculus, we find maxima
and minima of functions. In calculus of variations, we extremize on much larger spaces. Here are examples of
problems:
Brachistochrone 1696
Minimal surface 1760
Geodesics 1830
Isoperimetric problem 1838
Kakeya Needle problem 1917
Fourier theory decomposes a function into basic components of various frequencies f (x) = a1 sin(x) +
a2 sin(2x) + a3 sin(3x) + · · · . The numbers ai are called the Fourier coefficients. Our ear does such a
decomposition, when we listen to music. By distinguishing different frequencies, our ear performs a Fourier
analysis.
The dimension of its graph is believed to be 2 + log(a)/ log(b), but no rigorous computation of the dimension
has been done yet. Spectral theory analyzes linear maps L. The spectrum consists of the real numbers E such that
L − E is not invertible. A Hollywood celebrity among all linear maps is the almost Mathieu operator
L(x)_n = x_{n+1} + x_{n−1} + (2 − 2 cos(cn)) x_n : if we draw the spectrum for each c, we see the Hofstadter
butterfly. For fixed c the map describes the behavior of an electron in an almost periodic crystal. An
other famous system is the quantum harmonic oscillator, L(f ) = f ''(x) + x² f (x), and the vibrating drum
L(f ) = f_xx + f_yy , where f is the amplitude of the drum and f = 0 on the boundary of the drum.
1 again. While this cipher remained unbroken for long, a more sophisticated frequency analysis which involves
first finding the length of the key makes the cipher breakable. With the emergence of computers, even more
sophisticated versions like the German Enigma had no chance.
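The single-shift Caesar case already shows how frequency analysis works. The following sketch is my own illustration (the letter frequencies are approximate published values for English, and the chi-squared scoring is one standard choice among several):

```python
from collections import Counter

# Approximate English letter frequencies in percent (12 most common).
ENGLISH = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
           's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'u': 2.8}

def shift(text, k):
    return ''.join(chr((ord(c) - 97 + k) % 26 + 97) for c in text)

def chi2(text):
    # How far the observed letter distribution is from English.
    counts, n = Counter(text), len(text)
    return sum((100 * counts[c] / n - f) ** 2 for c, f in ENGLISH.items())

def break_caesar(cipher):
    # Try all 26 shifts; keep the one whose decryption looks most English.
    return min(range(26), key=lambda k: chi2(shift(cipher, 26 - k)))

plain = ('itwasthebestoftimesitwastheworstoftimes'
         'itwastheageofwisdomitwastheageoffoolishness')
assert break_caesar(shift(plain, 3)) == 3
```

For a Vigenère cipher, the same statistics are applied column by column once the key length has been found.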
Diffie-Hellman key exchange allows Ana and Bob to agree on a secret key over a public channel. The
two palindromic friends agree on a prime number p and a base a. This information can be exchanged over an
open channel. Ana chooses now a secret number x and sends X = a^x modulo p to Bob over the channel. Bob
chooses a secret number y and sends Y = a^y modulo p to Ana. Ana can compute Y^x and Bob can compute
X^y, but both are equal to a^xy. This number is their common secret. The key point is that the eavesdropper Eve
cannot compute this number. The only information available to Eve are X and Y, as well as the base a and the prime p.
Eve knows that X = a^x but cannot determine x. The key difficulty in this code is the discrete log problem:
getting x from a^x modulo p is believed to be difficult for large p.
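The exchange can be sketched in a few lines (a toy illustration with a small prime; real deployments use primes of thousands of bits):

```python
import secrets

# Toy Diffie-Hellman key exchange. p and the base a are public.
p, a = 2147483647, 7               # p = 2^31 - 1 is prime

x = secrets.randbelow(p - 2) + 1   # Ana's secret exponent
y = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
X = pow(a, x, p)                   # sent openly: Ana -> Bob
Y = pow(a, y, p)                   # sent openly: Bob -> Ana

# Both sides arrive at the same secret a^(xy) mod p.
assert pow(Y, x, p) == pow(X, y, p)
```

Eve sees p, a, X and Y but would have to solve the discrete log problem to recover x or y.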
The Rivest-Shamir-Adleman public key system uses an RSA public key (n, a) with an integer n = pq and
a < (p − 1)(q − 1), where p, q are prime. Also here, n and a are public. Only the factorization of n is kept secret.
Ana publishes this pair. Bob, who wants to email Ana a message x, sends her y = x^a mod n. Ana, who has
computed b with ab = 1 mod (p − 1)(q − 1), can read the secret email y because y^b = x^ab = x mod n.
But Eve has no chance: the only thing Eve knows is y and (n, a). It is believed that without the
factorization of n, it is not possible to determine x. The message has been transmitted securely. The core
difficulty is that taking roots in the ring Z_n = {0, . . . , n − 1} is difficult without knowing the factorization
of n. With a factorization, we can quickly take arbitrary roots. If we can take square roots, then we can also
factor: assume we have a product n = pq and we know how to take square roots of 1. If x solves x² = 1 mod n
and x is different from ±1 modulo n, then x² − 1 = (x − 1)(x + 1) is zero modulo n. This means that p divides (x − 1) or
(x + 1). To find a factor, we can take the greatest common divisor of n and x − 1. Take n = 77 for example. We
are given the root 34 of 1 (34² = 1156 has remainder 1 when divided by 77). The greatest common divisor of
34 − 1 and 77 is 11, which is a factor of 77. Similarly, the greatest common divisor of 34 + 1 and 77 is 7, which also divides 77.
Finding roots modulo a composite number and factoring the number are equally difficult.
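Both the root-to-factor trick with n = 77 and a full toy RSA round trip can be checked directly (a sketch following the numbers in the text):

```python
from math import gcd

# A nontrivial square root x of 1 modulo n = pq yields the factors.
n, x = 77, 34
assert x * x % n == 1                 # 34^2 = 1156 = 15 * 77 + 1
p, q = gcd(x - 1, n), gcd(x + 1, n)   # gcd(33, 77), gcd(35, 77)
assert (p, q) == (11, 7) and p * q == n

# Toy RSA round trip with the same primes p = 7, q = 11:
phi = (7 - 1) * (11 - 1)              # = 60
a = 13                                # public exponent, coprime to phi
b = pow(a, -1, phi)                   # private exponent, a*b = 1 mod phi
msg = 2
assert pow(pow(msg, a, n), b, n) == msg
```

The modular inverse `pow(a, -1, phi)` needs Python 3.8 or later; with 77 the numbers are of course far too small to be secure.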
Cipher          Used for                  Difficulty             Attack
Caesar          transmitting messages     many permutations      statistics
Vigenère        transmitting messages     many permutations      statistics
Enigma          transmitting messages     no frequency analysis  plain text
Diffie-Hellman  agreeing on a secret key  discrete log mod p     unsafe primes
RSA             electronic commerce       factoring integers     factoring
The simplest error correcting code uses 3 copies of the same information, so that a single error can be corrected.
With 3 watches, for example, one watch can fail. But this basic error correcting code is not efficient: it corrects
single errors by tripling the size, so its efficiency is 33 percent.
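A minimal sketch of this repetition code (my own illustration):

```python
from collections import Counter

# Triple-repetition code: send each bit three times and decode by
# majority vote; any single flipped copy per triple is corrected.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    return [Counter(received[i:i + 3]).most_common(1)[0][0]
            for i in range(0, len(received), 3)]

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                      # one transmission error
assert decode(sent) == word       # the majority vote repairs it
```

Modern codes such as Hamming or Reed-Solomon codes achieve the same correction capability with far less redundancy.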
The goal of the theory is to predict the future of the system when the present state is known. A differential
equation is an equation of the form d/dt x(t) = f (x(t)), where the unknown quantity is a path x(t) in some
"phase space". If we know the velocity d/dt x(t) = ẋ(t) at all times and the initial configuration x(0), we can
compute the trajectory x(t). What happens at a future time? Does x(t) stay in a bounded region or escape
to infinity? Which areas of the phase space are visited and how often? Can we reach a certain part of the
space when starting at a given point, and if yes, when? An example of such a question is to predict whether an
asteroid located at a specific location will hit the earth or not. Another example is to predict the weather of
the next week.
It is called the logistic system and describes population growth. This system has the solution x(t) =
2e^t/(1 + e^{2t}), as you can see by computing the left and right hand sides.
A map is a rule which assigns to a quantity x(t) a new quantity x(t + 1) = T (x(t)). The state x(t) of the
system determines the situation x(t + 1) at time t + 1. An example is the Ulam map T (x) = 4x(1 − x) on
the interval [0, 1]. This is an example where we have no idea what happens after a few hundred iterates, even
if we knew the initial position with the accuracy of the Planck scale.
Dynamical systems theory has applications in all fields of mathematics. It can be used to find roots of equations,
as with the Newton iteration
T (x) = x − f (x)/f '(x) .
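For instance, iterating this map for f(x) = x² − 2 finds √2 (a standard illustration, not from the text):

```python
# Newton's method as a dynamical system: iterate T(x) = x - f(x)/f'(x);
# near a simple root the convergence is quadratic.
def newton(f, df, x, steps=6):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

root = newton(lambda t: t * t - 2, lambda t: 2 * t, 1.0)
assert abs(root * root - 2) < 1e-12     # root is sqrt(2) to machine precision
```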
A system of number theoretical nature is the Collatz map
T (x) = x/2 for even x, and T (x) = 3x + 1 otherwise.
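Iterating this map is a one-liner (a sketch; whether every starting value eventually reaches 1 is a famous open problem, but all tested values do):

```python
# Count Collatz steps until the orbit reaches 1.
def collatz_steps(x):
    steps = 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return steps

assert collatz_steps(27) == 111   # the famously long orbit of 27
```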
A system of geometric nature is the Pedal map which assigns to a triangle the pedal triangle.
About 100 years ago, Henri Poincaré was able to deal with chaos of low dimensional systems. While
statistical mechanics had formalized the evolution of large systems with probabilistic methods already, the
new insight was that simple systems like a three body problem or a billiard map can produce very com-
plicated motion. It was Poincaré who saw that even for such low dimensional and completely deterministic
systems, random motion can emerge. While physicists had dealt with chaos earlier by assuming it or artificially
feeding it into equations like the Boltzmann equation, the occurrence of stochastic motion in geodesic
flows or billiards or restricted three body problems was a surprise. These findings needed half a century to
sink in, and only with the emergence of computers in the 1960s did the awakening happen. Icons like Lorenz
helped to popularize the findings and we owe them the "butterfly effect" picture: the wing of a butterfly can
produce a tornado in Texas a few weeks later. The reason for this statement is that the complicated equations
to simulate the weather reduce under extreme simplifications and truncations to a simple differential equation
ẋ = σ(y − x), ẏ = rx − y − xz, ż = xy − bz, the Lorenz system. For σ = 10, r = 28, b = 8/3, Ed Lorenz
discovered in 1963 an interesting long time behavior and an aperiodic "attractor". Ruelle and Takens called it a
strange attractor. It is a great moment in mathematics to realize that attractors of simple systems can
become fractals on which the motion is chaotic. It suggests that such behavior is abundant. What is chaos?
If a dynamical system shows sensitive dependence on initial conditions, we talk about chaos. We will
experiment with the two maps T (x) = 4x(1 − x) and S(x) = 4x − 4x², which, starting with the same initial
conditions, will produce different numerical outcomes after a couple of dozen iterations.
The sensitive dependence on initial conditions is measured by how fast the derivative dT^n of the n-th iterate
grows. The exponential growth rate γ is called the Lyapunov exponent. A small error of size h will be
amplified to h e^{γn} after n iterates. In the case of the logistic map with c = 4, the Lyapunov exponent is log(2),
and an error of 10^{−16} is amplified to 2^n · 10^{−16}. For time n = 53 already, the error is of the order 1. This
explains the above experiment with the different maps. The maps T (x) and S(x) round differently at the level
10^{−16}. After 53 iterations, these initial fluctuations have grown to a macroscopic size.
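The experiment can be reproduced directly (a sketch; the exact iteration count at which the orbits separate depends on the rounding details):

```python
# T and S are algebraically identical but round differently in floating
# point; the Lyapunov exponent log 2 roughly doubles the tiny
# discrepancy at each step until it becomes macroscopic.
T = lambda x: 4 * x * (1 - x)
S = lambda x: 4 * x - 4 * x * x

x = y = 0.3
diffs = []
for n in range(100):
    x, y = T(x), S(y)
    diffs.append(abs(x - y))

assert diffs[0] < 1e-15           # microscopic after one step
assert max(diffs) > 0.1           # macroscopic within 100 steps
```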
Here is a famous open problem which has resisted many attempts to solve it: show that the map T (x, y) =
(c sin(2πx) + 2x − y, x) with T^n (x, y) = (f_n (x, y), g_n (x, y)) has sensitive dependence on initial conditions on a
set of positive area. More precisely, verify that for c > 2 and all n, (1/n) ∫_0^1 ∫_0^1 log |∂_x f_n (x, y)| dxdy ≥ log(c/2). The
left hand side converges to the average of the Lyapunov exponents, which is in this case also the entropy of the
map. For some systems, one can compute the entropy. The logistic map with c = 4 for example, which is also
called the Ulam map, has entropy log(2). The cat map
called the Ulam map, has entropy log(2). The cat map
√
has positive entropy log |( 5 + 3)/2|. This is the logarithm of the larger eigenvalue of the matrix implementing
T.
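The eigenvalue computation behind this entropy value is elementary (a sketch, using the matrix [[2, 1], [1, 1]] of the map above):

```python
import math

# Largest eigenvalue of [[2, 1], [1, 1]]: trace 3, determinant 1, so
# lambda = (3 + sqrt(5))/2; the entropy of the cat map is its logarithm.
trace, det = 2 + 1, 2 * 1 - 1 * 1
lam = (trace + math.sqrt(trace ** 2 - 4 * det)) / 2
entropy = math.log(lam)

assert abs(lam - (3 + math.sqrt(5)) / 2) < 1e-12
assert entropy > 0                 # positive entropy: the map is chaotic
```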
While questions about simple maps look artificial at first, the mechanisms prevail in other systems: in astronomy,
when studying planetary motion or electrons in the van Allen belt; in mechanics, when studying coupled
pendulums or nonlinear oscillators; in fluid dynamics, when studying vortex motion or turbulence; in geometry,
when studying the evolution of light on a surface; or when studying the change of weather or tsunamis in the ocean. Dynamical
systems theory started historically with the problem to understand the motion of planets. Newton realized
that this is governed by a differential equation, the n-body problem
x_j''(t) = Σ_{i≠j} c_ij (x_i − x_j) / |x_i − x_j|³ ,
where c_ij depends on the masses and the gravitational constant. If one body is the sun and no interaction of the
planets is assumed and using the common center of gravity as the origin, this reduces to the Kepler problem
x''(t) = −C x/|x|³, where planets move on ellipses, the radius vector sweeps equal areas in equal times, and the
period squared is proportional to the semi-major axis cubed. A great moment in astronomy was when Kepler
derived these laws empirically. Another great moment in mathematics is Newton's theoretical derivation of them
from the differential equations.
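Kepler's second law can be observed numerically; the sketch below (my own, with C = 1 and arbitrary initial data) uses a symplectic Euler step, which conserves the angular momentum x·vy − y·vx exactly up to rounding:

```python
# Integrate the Kepler problem x'' = -x/|x|^3 with symplectic Euler and
# record x*vy - y*vx, which is twice the areal velocity of the radius
# vector; its conservation is Kepler's second law.
def kepler_sweep_rates(steps=2000, dt=0.001):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.2    # an elliptic orbit (E < 0)
    rates = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= dt * x / r3                # kick ...
        vy -= dt * y / r3
        x += dt * vx                     # ... then drift
        y += dt * vy
        rates.append(x * vy - y * vx)
    return rates

rates = kepler_sweep_rates()
assert max(rates) - min(rates) < 1e-9    # equal areas in equal times
```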
is a Turing machine and x is a finite input. Let H ⊂ T M denote the set of Turing machines (T, x) which halt
with the tape x as input. Turing looked at the decision problem: is there a machine which decides whether a
given machine (T, x) is in H or not. An ingenious diagonal argument of Turing shows that the answer is "no".
[Proof: assume there is a machine HALT which returns from the input (T, x) the output HALT(T, x) = true,
if T halts with the input x and otherwise returns HALT(T, x) = false. Turing constructs a Turing machine
DIAGONAL, which does the following: 1) Read x. 2) Define Stop=HALT(x,x) 3) While Stop=True repeat
Stop:=True; 4) Stop.
Now, DIAGONAL is either in H or not. If DIAGONAL is in H, then the variable Stop is true which means
that the machine DIAGONAL runs for ever and DIAGONAL is not in H. But if DIAGONAL is not in H, then
the variable Stop is false which means that the loop 3) is never entered and the machine stops. The machine is
in H.]
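The construction can be rendered in Python, with the halting oracle `halts` as a hypothetical parameter (a sketch of the argument, not an actual decider):

```python
# Turing's diagonal argument: if a correct halts(program, input) existed,
# the program below would contradict it when run on its own source.
def diagonal(program, halts):
    if halts(program, program):
        while True:               # loop forever exactly when told "halts"
            pass
    return "stopped"

# Any claimed oracle is wrong about diagonal applied to itself: answering
# True makes diagonal loop forever, answering False makes it stop.
claims_never_halts = lambda prog, inp: False
assert diagonal(diagonal, claims_never_halts) == "stopped"
```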
Let's go back to the problem of distinguishing "easy" and "hard" problems: one calls P the class of decision
problems that are solvable in polynomial time and NP the class of decision problems for which a given solution
can be tested efficiently. These categories do not depend on the computing model used. The question
"P = NP?" is the most important open problem in theoretical computer science. It is one of the seven millennium
problems and it is widely believed that P ≠ NP. A problem to which every other NP problem
can be reduced is called NP-hard; the intersection of NP-hard and NP is the class of NP-complete problems.
Popular games like Minesweeper or Tetris are NP-complete. If P ≠ NP, then there is no efficient algorithm to
beat these games. An example of an NP-complete problem is the balanced number partitioning
problem: given n positive integers, divide them into two subsets A, B, so that the sum in A and the sum in B
are as close as possible. A first shot: choose the largest remaining number and assign it alternately to
the two sets.
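A common refinement of this first shot gives the largest remaining number to the currently lighter set; the sketch below (my own illustration) also shows that the heuristic can miss the optimum, in line with the NP-completeness of the problem:

```python
# Greedy heuristic for balanced number partitioning: assign the largest
# remaining number to the set with the smaller current sum.
def greedy_partition(numbers):
    A, B = [], []
    for n in sorted(numbers, reverse=True):
        (A if sum(A) <= sum(B) else B).append(n)
    return A, B

A, B = greedy_partition([8, 7, 6, 5, 4])
assert abs(sum(A) - sum(B)) == 4    # greedy misses the optimum: {8,7} vs {6,5,4}
```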
We all feel that it is harder to find a solution to a problem than to verify a given one. If P ≠ NP,
there are one-way functions, functions which are easy to compute but hard to invert. For some important problems,
we do not even know whether they are in P. An example is the integer factoring problem. An
efficient algorithm for it would have enormous consequences. Finally, let's look at some mathematical
problems in artificial intelligence (AI):
problem solving playing games like chess, performing algorithms, solving puzzles
pattern matching speech, music, image, face, handwriting, plagiarism detection, spam
reconstruction tomography, city reconstruction, body scanning
research computer assisted proofs, discovering theorems, verifying proofs
data mining knowledge acquisition, knowledge organization, learning
translation language translation, porting applications to programming languages
creativity writing poems, jokes, novels, music pieces, painting, sculpture
simulation physics engines, evolution of bots, game development, aircraft design
inverse problems earthquake location, oil deposits, tomography
prediction weather prediction, climate change, warming, epidemics, supplies
It is wonderful to visit other places and see connections. One can learn new things, relearn
old ones and marvel again at how large and diverse mathematics is, and still notice how
many similarities there are between seemingly remote areas. A goal of this project is also to
get back up to the level of a first year grad student (one forgets a lot of things over
the years) and maybe pass the quals (with some luck).
This summer 2018 project also illustrates the challenges when trying to tour the most important
mountain peaks in the mathematical landscape with limited time. Already the identification
of major peaks and attaching a “height” can be challenging. Which theorems are the most
important? Which are the most fundamental? Which theorems provide fertile seeds for new
theorems? I recently got asked by some students what I consider the most important theorem
in mathematics (my answer had been the “Atiyah-Singer theorem”).
Theorems are the entities which build up mathematics. Mathematical ideas show their merit
only through theorems. Theorems not only help to bring ideas to life, they in turn allow one to
solve problems and justify the language or theory. And not only the results alone: also the
history and the connections with the mathematicians who created the results are fascinating.
The first version of this document got started in May 2018 and was posted in July 2018. Com-
ments, suggestions or corrections are welcome. I hope to be able to extend, update and clarify
it and explore also still neglected continents in the future if time permits.
It should be pretty obvious that one can hardly do justice to all mathematical fields and that
much more would be needed to cover the essentials. A more serious project would be to identify
a dozen theorems in each of the major MSC 2010 classification fields. This would roughly lead
to a “thousand and one theorem” list. In some sense, this exists already: on Wikipedia, there
are currently about 1000 theorems discussed. The one-document project getting closest to this
project is maybe the beautiful book [315].
• July 28: Entry 36 had been a repeated prime number theorem entry. Its alternative is
now the Fredholm alternative. Also added are the Sturm theorem and Smith normal
form.
• July 29: The two entries about Lidskii theorem and Radon transform are added.
• July 30: An entry about linear programming.
• July 31: An entry about random matrices.
• August 2: An entry about entropy of diffeomorphisms
• August 4: 104-108 entries: linearization, law of small numbers, Ramsey, Fractals and
Poincaré duality.
• August 5: 109-111 entries: Rokhlin and Lax approximation, Sobolev embedding
• August 6: 112: Whitney embedding.
• August 8: 113-114: AI and Stokes entries
• August 12: 115 and 116: Moment entry and martingale theorem
• August 13: 117 and 118: theorema egregium and Shannon theorem
• August 14: 119 mountain pass
• August 15: 120, 121,122,123 exponential sums, sphere theorem, word problem and finite
simple groups
• August 16: 124, 125, 126, Rubik, Sard and Elliptic curves,
• August 17: 127, 128, 129 billiards, uniformization, Kalman filter
• August 18: 130,131 Zariski and Poincaré's last theorem
• August 19: 132, 133 Geometrization, Steinitz
• August 21: 134, 135 Hilbert-Einstein, Hall marriage
• August 22: 136-140
• August 24: 141-142
• August 25: 143-144
• August 27: 145-149
• August 28: 150-151
• August 31: 152
• September 1: 153-155
• September 2: 156
• September 8: 157,158
• September 14: 159-161
• September 25 2018: 162-164
• March 17 2019: 165-169
• March 20, 2019: section on paradigms
• March 21, 2019: 170
• March 27, 2019, 171
• June 20, 2019, 172
a ring. One does not present the identity a + b = b + a for example as a fundamental
theorem. Yes, the Bayes theorem has an unusually high appeal to scientists as it appears
like a magic bullet, but for a mathematician, the statement just does not have enough
beef: it is a definition, not a theorem. This is not to belittle the Bayes theorem: like the notion
of entropy or the notion of logarithm, it is a genius concept. But it is not an
actual theorem, as the cleverness of the statement of Bayes lies in the definition and
so in the clarification of conditional probability. For the central limit theorem, it is
pretty clear that it should be high up on any list of theorems, as the name suggests: it is
central. But also, it actually is stronger than some versions of the law of large numbers.
The strong law is also super seeded by Birkhoff’s ergodic theorem which is much more
general. One could argue to pick the law of iterated logarithm or some Martingale
theorem instead but there is something appealing in the central limit theorem which
goes over to other set-ups. One can formulate the central limit theorem also for random
variables taking values in a compact topological group like when doing statistics with
spherical data [316]. Another pitch for the central limit theorem is that it is a fixed
point of the renormalization map X → X + X (where the right hand side is the sum
of two independent copies of X) in the space of random variables. This map increases
entropy, and the fixed point is a random variable whose density function f has the
maximal entropy − ∫ f (x) log(f (x)) dx among all probability density functions. The
entropy principle justifies essentially all known probability density functions. Nature
just likes to maximize entropy and minimize energy or more generally - in the presence
of energy - to minimize the free energy.
• Topology. Topology is about geometric properties which do not change under contin-
uous deformation or more generally under homotopies. Quantities which are invariant
under homeomorphisms are interesting. Such quantities should add up under disjoint
unions of geometries and multiply under products. The Euler characteristic is the proto-
type. Taking products is fundamental for building up Euclidean spaces (also over other
fields, not only the real numbers) which locally patch up more complicated spaces. It is
the essence of vector spaces that after building a basis, one has a product of Euclidean
spaces. Field extensions can be seen therefore as product spaces. How does the counting
principle come in? As stated, it actually is quite strong and calling it a “fundamental
principle of topology” can be justified if the product of topological spaces is defined
properly: if 1 is the one-point space, one can see the statement G × 1 = G1 as the
Barycentric refinement of G, implying that the Euler characteristic is a Barycentric
invariant and so that it is a “counting tool” which can be pushed to the continuum, to
manifolds or varieties. And the compatibility with the product is the key to make it
work. Counting in the form of Euler characteristic goes throughout mathematics, com-
binatorics, differential geometry or algebraic geometry. Riemann-Roch or Atiyah-Singer
and even dynamical versions like the Lefschetz fixed point theorem (which generalizes
the Brouwer fixed point theorem) or the even more general Atiyah-Bott theorem can be
seen as extending the basic counting principle: the Lefschetz number χ(X, T )
is a dynamical Euler characteristic which in the static case T = Id reduces to the Euler
characteristic χ(X). In “school mathematics”, one calls the principle the “fundamental
principle of counting" or "rule of product". It is put in the following way: "If we have
k ways to do one thing and m ways to do another thing, then we have k · m ways to
do both". It is so simple that one can argue that it is overrepresented in teaching, but
it is indeed important. [44] makes the point that it should be considered a foundation
stone of combinatorics.
Why is the multiplicative property more fundamental than the additive counting
principle? It is again that the additive property is essentially placed in as a definition of
what a valuation is: it is the inclusion-exclusion formula χ(A ∪ B) + χ(A ∩ B) = χ(A) + χ(B).
This formula is also important in combinatorics, but it is already
part of the definition of what we call counting or "adding things up". The multiplicative
property on the other hand is not a definition; it actually is quite non-trivial. It characterizes
classical mathematics, as quantum mechanics and non-commutative flavors
of mathematics have shown that one can extend things. So, if the "rule of product"
(which is taught in elementary school) is beefed up to be more geometric and interpreted
through the Euler characteristic, it becomes fundamental.
• Combinatorics. The pigeon hole principle stresses the importance of order struc-
ture, partially ordered sets (posets) and cardinality or comparisons of cardinality. The
point for posets is made in [336] who writes The biggest lesson I learned from Richard
Stanley's work is, combinatorial objects want to be partially ordered! The use of injective
functions to express cardinality is a key part of Cantor's theory. Like some of the ideas
of Grothendieck, it is of "infantile simplicity" (Grothendieck's own words about schemes) but
powerful. It allowed for the stunning result that there are different infinities. One of
the reasons for the success of Cantor's set theory is the immediate applicability. For any
new theory, one has to ask: “does it tell me something I did not know?” In “set theory”
the larger cardinality of the reals (uncountable) than the cardinality of the algebraic
numbers (countable) gave immediately the existence of transcendental numbers.
This is very elegant. The pigeonhole principle similarly gives combinatorial results
which are nontrivial and elegant. Currently, searching for "the fundamental theorem
of combinatorics" gives the "rule of product". As explained above, we gave it a geometric
spin and placed it into topology. Now, combinatorics and topology have always
been very hard to distinguish. Euler, who somehow booted up topology by reducing
the Königsberg bridge problem to a problem in graph theory, did that already. Combinatorial
topology is essentially part of topology. Today, some very geometric topics
like algebraic geometry have been placed within pure commutative algebra (this is
how I myself was exposed to algebraic geometry). On the other hand, some hard
core combinatorial problems like the upper bound conjecture have been proven with
algebro-geometric methods like toric varieties, which are geometric. In any case, order
structures are important everywhere and the pigeonhole principle justifies the importance
of order structures.
• Computation. There is no official "fundamental theorem of computer science", but
the Turing completeness theorem comes up as a top candidate when searching on
search engines. Turing formalized, using Turing machines, in a precise way what computing
is, and even what a proof is. It nails down the mathematical activity of running an
oriented languages which give more insight. But we like, and make use of, the fact that higher
order code can be boiled down to assembly, closer to what the basic instructions are.
This is similar in mathematics: also in the future, a topologist working in 4-manifold
theory will hardly think about all the definitions in terms of sets, for similar reasons that
a modern computer algebra system does not break down all its objects into lists and
lists of lists (even though that is often what they are). Category theory has a chance to change
the landscape because it is close to computer science and to natural data structures. It
is more pictorial and flexible than set theory alone. It has definitely been very successful
in finding new structures and seeing connections between different fields like computer science
[331]. It has also led to more flexible axiom systems
Index
E8 lattice, 41
σ-algebra, 4
15 theorem, 41
17-gon, 38
290 theorem, 41
3-connected, 62
3-sphere, 61
3D scanning, 95
4 color theorem, 65
4-color theorem, 87
ABC conjecture, 88
abscissa of absolute convergence, 12
abscissa of convergence, 12
absolute Galois group, 23
absolutely continuous measure, 14
adjoint, 6
affine camera, 31
Alaoglu theorem, 30
Alexander polynomial, 32
Alexander sphere, 92
algebraic closure, 23
algebraic extension, 24
Algebraic number, 27
algebraic number field, 23
algebraic numbers, 23
Algebraic set, 5
algorithm, 95
Almost complex structure, 18
almost everywhere convergence, 70
Almost Mathieu operator, 92
alphabet, 22
Alternating sign conjecture, 79
Alternating sign matrix, 79
AMS classification, 92, 102
analytic function, 7
Analytical index, 37
ancient roots, 101
Angle trisector, 77
Angular momentum, 74
Anosov, 28
antipodal point, 38
aperiodic, 47
area element, 35
area of polygon, 29
area triangle, 35
argument, 13
arithmetic mean, 19
arithmetic progression, 15
artificial intelligence, 95
Arzela-Ascoli, 39
asset pricing, 52
associativity, 7, 21
Atiyah Bott, 87
Atiyah Singer, 87
Atiyah-Singer theorem, 127
attractor of iterated function system, 45
Aubry-Mather theory, 30, 36
Axiom of choice, 72
axiom of choice, 5
axiom system, 8, 9
Bézier curves, 27
Bézout’s bound, 20
Baire category theorem, 18
Baire space, 75
Bakshali manuscript, 104
Banach algebra, 10
Banach fixed point theorem, 12
Banach space, 8, 38, 64
Banach-Tarski construction, 70
Banach-Tarski paradox, 92
Barycenter, 38
Barycentric subdivision, 11
Bayes theorem, 4
Beals conjecture, 108
Bernoulli shift, 22
Bernstein polynomials, 27
Bertrand postulate, 68
Bertrand’s theorem, 74
Betti number, 17
bifurcation, 11, 75
bijective, 3
billiards, 58
Binomial coefficients, 27
Bipartite graph, 81
bipartite graph, 63
Birkhoff, 3
Birkhoff theorem, 3
Boltzmann constant, 53
Bolzano-Weierstrass theorem, 78
Bolzmann equation, 30
Borel measure, 9, 14, 51
Borsuk-Ulam theorem, 38
boundary, 49
bounded linear operator, 6
bounded martingale, 52
bounded stochastic process, 52
Brauer group, 33
Brioschi formula, 53
Brjuno set, 27
Brouwer degree, 25
Brouwer fixed point theorem, 14, 25
Burnside lemma, 28
valuation, 4, 13
valuation of a field, 88
value, 81
value function, 93
Van der Waerden’s theorem, 46
variance, 19, 116
variational problem, 18
Vector field, 11
vector field, 14
vector space, 9
Vertex Cover, 81
vertex degree, 4
vertices, 4
viral effects, 81
Vitali theorem, 70
Vlasov dynamics, 32
Vlasov system, 32
volume, 15, 29
volume of ball, 29
Wahrscheinlichkeit, 53
wall paper group, 40
Waring problem, 31
Weak convergence, 39
weak Morse inequalities, 17
Weak solution, 33
weak solutions, 18
weak* topology, 64
weakly mixing, 39
Bibliography
[1] A005130. The on-line encyclopedia of integer sequences. https://oeis.org.
[2] P. Abad and J. Abad. The hundred greatest theorems.
http://pirate.shu.edu/ kahlnath/Top100.html, 1999.
[3] R. Abraham, J.E. Marsden, and T. Ratiu. Manifolds, Tensor Analysis and Applications. Applied Mathe-
matical Sciences, 75. Springer Verlag, New York etc., second edition, 1988.
[4] A. Aczel. Descartes’s secret notebook, a true tale of Mathematics, Mysticism and the Quest to Understand
the Universe. Broadway Books, 2005.
[5] C.C. Adams. The Knot Book. Freeman and Company, 1994.
[6] D. Adams. The Hitchhiker’s guide to the galaxy. Pan Books, 1979.
[7] R. Aharoni. Mathematics, Poetry and Beauty. World Scientific, 2015.
[8] L. Ahlfors. Complex Analysis. McGraw-Hill Education, 1979.
[9] M. Aigner and G.M. Ziegler. Proofs from the book. Springer Verlag, Berlin, 2 edition, 2010. Chapter 29.
[10] A. Alexander. Duel at Dawn. Harvard University Press, 2010.
[11] P. Alexandroff. Combinatorial topology. Dover books on Mathematics. Dover Publications, Inc, 1960.
Three volumes bound as one.
[12] J.M. Almira and A. Romero. Yet another application of the Gauss-Bonnet Theorem for the sphere. Bull.
Belg. Math. Soc., 14:341–342, 2007.
[13] S. Alpern and V.S. Prasad. Combinatorial proofs of the Conley-Zehnder-Franks theorem on a fixed point
for torus homeomorphisms. Advances in Mathematics, 99:238–247, 1993.
[14] C. Alsina and R.B. Nelsen. Charming Proofs. A journey into Elegant Mathematics, volume 42 of Dolciani
Mathematical Expositions. MAA, 2010.
[15] J.W. Anderson. Hyperbolic Geometry. Springer, 2 edition, 2005.
[16] G.E. Andrews. The theory of partitions. Cambridge Mathematical Library. Cambridge University Press,
1976.
[17] K. Arnold and J. Peyton, editors. A C User’s Guide to ANSI C. Prentice-Hall, Inc., 1992.
[18] V.I. Arnold. Mathematical Methods of Classical Mechanics. Springer Verlag, New York, 2 edition, 1980.
[19] V.I. Arnold. Lectures on Partial Differential Equations. Springer Verlag, 2004.
[20] V.I. Arnold. Experimental Mathematics. AMS, 2015. Translated by Dmitry Fuch and Mark Saul.
[21] E. Asplund and L. Bungart. A first course in integration. Holt, Rinehart and Winston, 1966.
[22] M. Atiyah. K-Theory. W.A. Benjamin, Inc, 1967.
[23] M. Atiyah. Mathematics in the 20th century. American Mathematical Monthly, 108, 2001.
[24] J-P. Aubin and I. Ekeland. Applied nonlinear Analysis. John Wiley and Sons, 1984.
[25] T. Aubin. Nonlinear Analysis on Manifolds. Monge-Ampère equations. Springer, 1982.
[26] J. Bach. The Lebowski theorem. https://twitter.com/plinz/status/985249543582355458, Apr 14, 2018.
[27] J.C. Baez. The octonions. Bull. Amer. Math. Soc. (N.S.), 39(2):145–205, 2002.
[28] R. Balakrishnan and K. Ranganathan. A textbook of Graph Theory. Springer, 2012.
[29] W.W. Rouse Ball. A short account of the History of mathematics. McMillan and co, London and New
York, 1988. Reprinted by Dover Publications, 1960.
[30] W. Ballmann. Lectures on Kähler Manifolds. ESI Lectures in Mathematics and Physics.
[31] T. Banchoff. Beyond the Third Dimension, Geometry, Computer Graphics and Higher Dimensions. Sci-
entific American Library, 1990.
[32] A-L. Barabasi. Linked, The New Science of Networks. Perseus Books Group, 2002.
[33] G. Baumslag. Topics in combinatorial group theory. Birkhäuser Verlag, 1993.
[34] A. Beardon. Iteration of Rational Functions. Graduate Texts in Mathematics. Springer-Verlag, New York,
1990.
[35] E. Behrends. Fünf Minuten Mathematik. Vieweg + Teubner, 2006.
[36] E.T. Bell. The Development of Mathematics. McGraw Hill Book Company, 1945.
[37] E.T. Bell. Men of Mathematics. Penguin books, 1953 (originally 1937).
[38] B.Engquist and W. Schmid (Editors). Mathematics Unlimited - 2001 and Beyond. Springer, 2001.
[39] S.K. Berberian. Fundamentals of Real analysis. Springer, 1998.
[40] M. Berger. A Panoramic View of Riemannian Geometry. Springer Verlag, Berlin, 2003.
[41] M. Berger and B. Gostiaux. Differential geometry: manifolds, curves, and surfaces, volume 115 of Graduate
Texts in Mathematics. Springer-Verlag, New York, 1988.
[42] W. Berlinghoff and F. Gouvea. Math through the ages. Mathematical Association of America Textbooks,
2004.
[43] H-G. Bigalke. Heinrich Heesch, Kristallgeometrie, Parkettierungen, Vierfarbenforschung. Birkhäuser,
1988.
[44] N.L. Biggs. The roots of combinatorics. Historia Mathematica, 6:109–136, 1979.
[45] G. D. Birkhoff. An extension of Poincaré’s last theorem. Acta Math., 47:297–311, 1925.
[46] J. Bondy and U. Murty. Graph theory, volume 244 of Graduate Texts in Mathematics. Springer, New
York, 2008.
[47] W.W. Boone and G. Higman. An algebraic characterization of the solvability of the word problem. J.
Austral. Math. Soc., 18, 1974.
[48] K.C. Border. Fixed point theorems with applications to economics and game theory. Cambridge University
Press, 1985.
[49] N. Bourbaki. Éléments d’histoire des mathématiques. Springer, 1984.
[50] N. Bourbaki. Éléments de mathématique. Springer, 2006.
[51] J. Bourgain. Green’s function estimates for lattice Schrödinger operators and applications, volume 158 of
Annals of Mathematics Studies. Princeton Univ. Press, Princeton, NJ, 2005.
[52] P. Bourke. Visualising volumetric fractals. GSTF Journal on Computing, 5, 2017.
[53] A. Boutot. Catastrophe theory and its critics. Synthese, 96:167–200, 1993.
[54] C. Boyer. A History of Mathematics. John Wiley and Sons, Inc, 2nd edition, 1991.
[55] F. Brechenmacher. Histoire du théorème de Jordan de la décomposition matricielle (1870-1930). École des
Hautes Études en Sciences Sociales, 2005. PhD thesis, EHESS.
[56] G.E. Bredon. Topology and Geometry, volume 139 of Graduate Texts in Mathematics. Springer Verlag,
1993.
[57] S. Brendle. Ricci Flow and the Sphere theorem. Graduate Studies in Mathematics. AMS, 2010.
[58] D. Bressoud. Historical reflections on the fundamental theorem of calculus. MAA Talk of May 11, 2011.
[59] D. M. Bressoud. Factorization and Primality Testing. Springer Verlag, 1989.
[60] D.M. Bressoud. Proofs and Confirmations, The story of the Alternating Sign Matrix Conjecture. MAA and
Cambridge University Press, 1999.
[61] O. Bretscher. Calculus I. Lecture Notes Harvard, 2006.
[62] H. Brezis. Functional Anslysis, Sobolev Spaces and Partial Differential Equations. University text.
Springer, 2011.
[63] M. Brown and W.D. Neumann. Proof of the Poincaré-Birkhoff fixed point theorem. Michigan Mathematical
Journal, 24:21–31, 1977.
[64] R.A. Brualdi. Introductory Combinatorics. Pearson Prentice Hall, fourth edition, 2004.
[65] G. Van Brummelen. Heavenly Mathematics. Princeton University Press, 2013.
[66] C. Bruter. Mathematics and Modern Art, volume 18 of Springer Proceedings in mathematics. Springer,
2012.
[67] P. Bullen. Dictionary of Inequalities. CRC Press, 2 edition, 2015.
[68] E.B. Burger. Exploring the Number Jungle. AMS, 2000.
[69] W. Burnside. Theory of Groups of Finite Order. Cambridge at the University Press, 1897.
[70] W. Byers. How Mathematicians Think. Princeton University Press, 2007.
[71] C. Cavagnaro and W.T. Haight II. Classical and Theoretical mathematics. CRC Press, 2001.
[72] M. Capobianco and J.C. Molluzzo. Examples and Counterexamples in Graph Theory. North-Holland, New
York, 1978.
[73] O. Caramello. Theories, Sites, Toposes. Oxford University Press, 2018.
[74] L. Carleson and T.W. Gamelin. Complex Dynamics. Springer-Verlag, New York, 1993.
[75] A-L. Cauchy. Cours d’Analyse. L’Imprimerie Royale, 1821.
[76] C. Clapham and J. Nicholson. Oxford Concise Dictionary of Mathematics. Oxford Paperback Reference.
Oxford University Press, 1990.
[77] C.D. Olds, A. Lax, and G. Davidoff. The Geometry of Numbers, volume 41 of Anneli Lax New Mathematical
Library. AMS, 2000.
[78] P. E. Ceruzzi. A History of Modern Computing. MIT Press, second edition, 2003.
[79] K. Chandrasekharan. Introduction to Analytic Number Theory, volume 148 of Grundlehren der mathema-
tischen Wissenschaften. Springer, 1968.
[80] G. Chartrand and P. Zhang. Chromatic Graph Theory. CRC Press, 2009.
[81] W.Y.C. Chen and R. P. Stanley. The g-conjecture for spheres. http://www.billchen.org/unpublished/g-conjecture/g-conjecture-english.pdf, 2008.
[82] N. Chernov and R. Markarian. Chaotic billiards. AMS, 2006.
[83] C. Chevalley. Theory of Lie Groups. Princeton University Press, 1946.
[84] J.R. Choksi and M.G.Nadkarni. Baire category in spaces of measures, unitary operators and transforma-
tions. In Invariant Subspaces and Allied Topics, pages 147–163. Narosa Publ. Co., New Delhi, 1990.
[85] I. Ciufolini and J.A. Wheeler. Gravitation and inertia. Princeton Series in Physics, 1995.
[86] R. Cochrane. The secret Life of equations: the 50 greatest equations and how they work. Octopus Company,
2016.
[87] R. Cochrane. Math Hacks. Cassell Illustrated, 2018.
[88] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, New York,
1955.
[89] P.R. Comwell. Polyhedra. Cambridge University Press, 1997.
[90] A. Connes. Noncommutative geometry. Academic Press, 1994.
[91] K. Conrad. The origin of representation theory. 2010.
[92] J. Conway. Universal quadratic forms and the fifteen theorem. Contemporary Mathematics, 272:23–26,
1999.
[93] J.B. Conway. Functions of One Complex Variable. Springer Verlag, 2. edition, 1978.
[94] J.B. Conway. A course in functional analysis. Springer Verlag, 1990.
[95] J.B. Conway. Mathematical Connections: A Capstone Course. American Mathematical Society, 2010.
[96] J.H. Conway. On Numbers and Games. A K Peters, Ltd, 2001.
[97] J.H. Conway and R.K. Guy. The book of numbers. Copernicus, 1996.
[98] J.H. Conway and N.J.A. Sloane. What are all the best sphere packings in low dimensions? Discr. Comp.
Geom., 13:383–403, 1995.
[99] J.H. Conway and N.J.A. Sloane. Sphere packings, Lattices and Groups, volume 290 of A series of Comprehensive Studies in Mathematics. Springer Verlag, New York, 2nd edition, 1993.
[100] I.P. Cornfeld, S.V.Fomin, and Ya.G.Sinai. Ergodic Theory, volume 115 of Grundlehren der mathematischen
Wissenschaften in Einzeldarstellungen. Springer Verlag, 1982.
[101] R. Courant and H. Robbins. Was ist Mathematik. Springer, fifth edition, 1941.
[102] T. Crilly. 50 mathematical ideas you really need to know. Quercus, 2007.
[103] D. Cristofaro-Gardiner and M. Hutchings. From one Reeb orbit to two. https://arxiv.org/abs/1202.4839,
2014.
[104] K.S. Thorne C.W. Misner and J.A. Wheeler. Gravitation. Freeman, San Francisco, 1973.
[105] H.L. Cycon, R.G.Froese, W.Kirsch, and B.Simon. Schrödinger Operators—with Application to Quantum
Mechanics and Global Geometry. Springer-Verlag, 1987.
[106] D.Downing. Dictionary of Mathematical Terms. Barron’s Educational Series, 1995.
[107] M. de Gosson. The symplectic camel principle and semicalssical mechanics. Journal of Physics A: Math-
ematical and General, 35(32):6825, 2002.
[108] W. de Melo and S. van Strien. One dimensional dynamics, volume 25 of Series of modern surveys in
mathematics. Springer Verlag, 1993.
[109] M. Denker, C. Grillenberger, and K. Sigmund. Ergodic Theory on Compact Spaces. Lecture Notes in
Mathematics 527. Springer, 1976.
[110] E. Denne. Alternating quadrisecants of knots. 2004. Thesis at University of Illinois at Urbana-Champaign.
[111] L.E. Dickson. History of the theory of numbers.Vol. I:Divisibility and primality. Chelsea Publishing Co.,
New York, 1966.
[112] L.E. Dickson. History of the theory of numbers.Vol.II:Diophantine analysis. Chelsea Publishing Co., New
York, 1966.
[113] R. Diestel. Graph theory, volume 173 of Graduate Texts in Mathematics. Springer, 5th edition, 2016.
[114] J. Dieudonné. A Panorama of Pure Mathematics. Academic Press, 1982.
[115] J. Dieudonné. Grundzüge der modernen Analysis. Vieweg, dritte edition, 1985.
[116] C. Ding, D. Pei, and A. Salomaa. Chinese Remainder Theorem, Applications in Computing, Coding
Cryptography. World Scientific, 1996.
[117] M.P. do Carmo. Differential Forms and Applications. Springer Verlag, 1994.
[118] A. Dold. Lectures on algebraic topology. Springer, 1980.
[119] S. Donaldson and P. Kronheimer. The topology of four manifolds. Clarendon Press, 1990.
[120] J. Doob. Stochastic processes. Wiley series in probability and mathematical statistics. Wiley, New York,
1953.
[121] R. Douady. Application du théorème des tores invariants. Thèse de 3ème cycle, Université Paris VII, 1982.
[122] B.A. Dubrovin, A.T.Fomenko, and S.P.Novikov. Modern Geometry-Methods and Applications Part I,II,III.
Graduate Texts in Mathematics. Springer Verlag, New York, 1985.
[123] W. Dunham. Journey through Genius. Wiley Science Editions, 1990.
[124] J.-P. Eckmann, H. Koch, and P. Wittwer. A computer-assisted proof of universality for area-preserving
maps. Memoirs of the AMS, 47:1–122, 1984.
[125] J-P. Eckmann and D. Ruelle. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys., 57:617–656,
1985.
[126] B.G. Sidharth (Editor). A century of Ideas. Springer, 2008.
[127] K. Ito (Editor). Encyclopedic Dictionary of Mathematics (2 volumes). MIT Press, second edition, 1993.
[128] N.J. Higham (Editor). The Princeton Companion to Applied Mathematics. Princeton University Press,
2015.
[129] R. Brown (Editor). 30-Second Maths. Ivy Press, 2012.
[130] S.G. Krantz (editor). Comprehensive Dictionary of Mathematics. CRC Press, 2001.
[131] S.G. Krantz (Editor). Dictionary of Algebra, Arithmetic and Trigonometry. CRC Press, 2001.
[132] T. Gowers (Editor). The Princeton Companion to Mathematics. Princeton University Press, 2008.
[133] C.H. Edwards. The historical Development of the Calculus. Springer Verlag, 1979.
[134] R. L. Eisenman. Classroom Notes: Spoof of the Fundamental Theorem of Calculus. Amer. Math. Monthly,
68(4):371, 1961.
[135] R. Elwes. Math in 100 Key Breakthroughs. Quercus, 2013.
[136] M. Erickson. Beautiful Mathematics. MAA, 2011.
[137] T. Rokicki et al. God’s number is 20. http://www.cube20.org/, 2010.
[138] R.L. Eubank. A Kalman Filter Primer. Chapman and Hall, CRC, 2006.
[139] H. Eves. Great moments in mathematics (I and II). The Dolciani Mathematical Expositions. Mathematical
Association of America, Washington, D.C., 1981.
[140] K.J. Falconer. Fractal Geometry. Wiley, second edition, 2003.
[141] B. Farb and R.K. Dennis. Noncommutative Algebra. Springer, 1993.
[142] B. Farb and J. Wolfson. Resolvent degree, Hilbert’s 13th problem and geometry.
https://www.math.uchicago.edu/ farb/papers/RD.pdf, 2018.
[143] O. Faugeras. Three-dimensional computer vision: a geometric viewpoint. Cambridge MA: MIT Press,
Cambridge, MA, USA, second edition, 1996.
[144] H. Federer. Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153.
Springer-Verlag New York Inc., New York, 1969.
[145] E.A. Fellmann. Leonhard Euler. Birkhäuser, Basel, Boston, Berlin, 2007.
[146] B. Fine and G. Rosenberger. The Fundamental Theorem of Algebra. Undergraduate Texts in Mathematics.
Springer, 1997.
[147] K. Fink. A brief History of Mathematics. Open Court Publishing Co, 1900.
[148] G. Fischer. Mathematical Models. Springer Spektrum, 2 edition, 2017.
[149] A. Fomenko. Visual Geometry and Topology. Springer-Verlag, Berlin, 1994. From the Russian by Marianna
V. Tsaplina.
[150] J-P. Francoise, G.L. Naber, and T.S. Tsun. Encylopedia of Mathematical Physics. Elsevier, 2006.
[151] T. Franzen. Gödel’s Theorem. A.K. Peters, 2005.
[152] N.A. Friedman. Introduction to Ergodic Theory. Van Nostrand-Reinhold, Princeton, New York, 1970.
[153] R. Fritsch and G. Fritsch. The four-color theorem. Springer-Verlag, New York, 1998. History, topological
foundations, and idea of proof, Translated from the 1994 German original by Julie Peschke.
[154] H. Furstenberg. Recurrence in ergodic theory and combinatorial number theory. Princeton University Press,
Princeton, N.J., 1981. M. B. Porter Lectures.
[155] L. Gamwell. Mathematics + Art, a cultural history. Princeton University Press, 2016.
[156] T.A. Garrity. All the Mathematics you Missed. Cambridge University Press, 2002.
[157] H. Geiges. An Introduction to Contact Geometry, volume 109 of Cambridge studies in advanced mathe-
matics. Cambridge University Press, 2005.
[158] B.R. Gelbaum and J.M.H. Olmsted. Theorems and Counterexamples in Mathematics. Springer, 1990.
[159] G.H. Hardy, J.E. Littlewood, and G. Polya. Inequalities. Cambridge at the University Press, 1959.
[160] M. Giaquinta and S. Hildebrandt. Calculus of variations. I,II, volume 310 of Grundlehren der Mathema-
tischen Wissenschaften. Springer-Verlag, Berlin, 1996.
[161] E. Girondo and G. González-Diez. Introduction to compact Riemann surfaces and dessins d’enfants, vol-
ume 79. Cambridge University Press, 2012.
[162] P. Glendinning. Math In Minutes. Quercus, 2012.
[163] D. Goldfeld. Beyond the last theorem. The Sciences, 3/4:43–40, 1996.
[164] C. Golé. Symplectic Twist Maps, Global Variational Techniques. World Scientific, 2001.
[165] S.W. Golomb. Rubik’s cube and quarks: Twists on the eight corner cells of Rubik’s cube provide a model
for many aspects of quark behavior. American Scientist, 70:257–259, 1982.
[166] C. Gordon, D.L. Webb, and S. Wolpert. One cannot hear the shape of a drum. Bulletin (New Series) of
the American Mathematical Society, 27:134–137, 1992.
[167] T. Gowers. Mathematics, a very short introduction. Oxford University Press, 2002.
[168] R. Graham. Rudiments of Ramsey Theory, volume 45 of Regional conference series in Mathematics. AMS,
1980.
[169] R. Graham. Some of my favorite problems in Ramsey theory. Integers: electronic journal of combinatorial
number theory, 7, 2007.
[170] I. Grattan-Guinness. The Rainbow of Mathematics. W. W. Norton, Company, 2000.
[171] B. Green and T. Tao. The primes contain arbitrarily long arithmetic progressions. Annals of Mathematics,
167:481–547, 2008.
[172] P. Griffiths and J. Harris. Principles of Algebraic Geometry. Pure and Applied Mathematics. John Wiley
and Sons, 1978.
[173] H. Groemer. Existenzsätze für Lagerungen im Euklidischen Raum. Mathematische Zeitschrift, 81:260–278,
1963.
[174] J. Gross and J. Yellen, editors. Handbook of graph theory. Discrete Mathematics and its Applications
(Boca Raton). CRC Press, Boca Raton, FL, 2004.
[175] B. Grünbaum. Are your polyhedra the same as my polyhedra? In Discrete and computational geometry,
volume 25 of Algorithms Combin., pages 461–488. Springer, Berlin, 2003.
[176] B. Grünbaum. Convex Polytopes. Springer, 2003.
[177] B. Grünbaum and G.C. Shephard. Tilings and Patterns. Dover Publications, 2013.
[178] R. Guy. The strong law of small numbers. Amer. Math. Monthly, 95:697–712, 1988.
[179] T.C. Hales. Jordan’s proof of the Jordan curve theorem. Studies in Logic, Grammar and Rhetoric, 10, 2007.
[180] G.R. Hall. Some examples of permutations modelling area preserving monotone twist maps. Physica D,
28:393–400, 1987.
[181] P. Halmos. Lectures on ergodic theory. The mathematical society of Japan, 1956.
[182] P.R. Halmos. Naive set theory. Van Nostrand Reinhold Company, 1960.
[183] D. Hanson. On a theorem of Sylvester and Schur. Canad. Math. Bull., 16, 1973.
[184] G.H. Hardy. Divergent Series. AMS Chelsea Publishing, 1991.
[185] G.H. Hardy. A mathematician’s Apology. Cambridge University Press, 1994.
[186] G.H. Hardy and M. Riesz. The general theory of Dirichlet’s series. Hafner Publishing Company, 1972.
[187] G.H. Hardy and E.M. Wright. An Introduction to the Theory of Numbers. Oxford University Press, 1980.
[188] R. Hartley and A. Zissermann. Multiple View Geometry in computer Vision. Cambridge UK: Cambridge
University Press, 2003. Second edition.
[189] R. Hartshorne. Algebraic Geometry. Springer-Verlag, New York, 1977. Graduate Texts in Mathematics,
No. 52.
[190] A. Hatcher. Algebraic Topology. Cambridge University Press, 2002.
[191] F. Hausdorff. Summationsmethoden und Momentfolgen, I,II. Mathematische Zeitschrift, 9:74–109, 280–
299, 1921.
[192] T. Hawkins. The Mathematics of Frobenius in Context. Sources and Studies in the History of Mathematics
and Physical Sciences. Springer, 2013.
[193] G.A. Hedlund. Endomorphisms and automorphisms of the shift dynamical system. Math. Syst. Theor.,
3:320–375, 1969.
[194] S. Helgason. The Radon transform, volume 5 of Progress in mathematics. Birkhaeuser, 1980.
[195] J.M. Henshaw. An equation for every occasion = 52 formulas + why they matter. Johns Hopkins University
Press, 2014.
[196] M. Herman. Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations. IHES, 49:5–
233, 1979.
[197] R. Hersh. What is Mathematics, Really? Oxford University Press, 1997.
[198] T.H. Hildebrandt and I.J. Schoenberg. On linear functional operations and the moment problem for a
finite interval in one or several dimensions. Annals of Mathematics, 34:317–328, 1933.
[199] H. Hofer and E. Zehnder. Symplectic invariants and Hamiltonian dynamics. Birkhäuser advanced texts.
Birkhäuser Verlag, Basel, 1994.
[200] H. Hofer and E. Zehnder. Symplectic invariants and Hamiltonian Dynamics. Birkhäuser Advanced texts.
Birkhäuser, 1994.
[201] A.J. Hoffman and C.W. Wu. A simple proof of a generalized Cauchy-Binet theorem. American Mathematical
Monthly, 123:928–930, 2016.
[202] J.B. Hogendijk. Al-Mu’taman ibn Hud, 11th century king of Saragossa and brilliant mathematician. Historia Mathematica, 22:1–18.
[203] H. Hopf. Über die Drehung der Tangenten und Sehnen ebener Curven. Compositio Math., 2, pages 50–62,
1935.
[204] L.K. Hua. Introduction to Number theory. Springer Verlag, Berlin, 1982.
[205] J. Humphreys. Introduction to Lie Algebras and Representation Theory. Springer, third printing, revised
edition, 1972.
[206] W. Hurewicz. Homotopy and homology. In Proceedings of the International Congress of Mathematicians,
Cambridge, Mass., 1950, vol. 2, pages 344–349. Amer. Math. Soc., Providence, R. I., 1952.
[207] M. Hutchings. Taubes’s proof of the Weinstein conjecture in dimension three. Bull. Amer. Math. Soc.,
47:73–125, 2010.
[208] M. Hutchings. Fun with symplectic embeddings. Slides from Frankferst, Feb 6, 2016.
[209] M. Hutchings and C.H. Taubes. The Weinstein conjecture for stable Hamiltonian structures. Geom. Topol.,
13(2):901–941, 2009.
[210] I. Bárány, P.V.M. Blagojević, and G.M. Ziegler. Tverberg’s theorem at 50: Extensions and counterexamples. Notices of the AMS, pages 732–739, 2016.
[211] I. Niven, H.S. Zuckerman, and H.L. Montgomery. An introduction to the theory of numbers. 1991.
[212] O. Lanford III. A computer-assisted proof of the Feigenbaum conjectures. Bull. Amer. Math. Soc., 6:427–
434, 1982.
[213] O.E. Lanford III. A shorter proof of the existence of the Feigenbaum fixed point. Commun. Math. Phys,
96:521–538, 1984.
[214] L. Illusie. What is a topos? Notices of the AMS, 51, 2004.
[215] I.R.Shafarevich and A.O. Remizov. Linear Algebra and Geometry. Springer, 2009.
[216] J. Borwein, D. Bailey, and R. Girgensohn. Experimentation in Mathematics. A.K. Peters, 2004. Computational Paths to Discovery.
[217] P.C. Jackson. Introduction to artificial intelligence. Dover publications, 1985. Second, Enlarged Edition.
[218] T. Jackson. An illustrated History of Numbers. Shelter Harbor Press, 2012.
[219] T. Jech. The axiom of choice. Dover, 2008.
[220] M. Jeng and O. Knill. Billiards in the lp unit balls of the plane. Chaos, Fractals, Solitons, 7:543–545,
1996.
[221] J.H. Conway, H. Burgiel, and C. Goodman-Strauss. The Symmetries of Things. A.K. Peters, Ltd., 2008.
[222] M.C. Jordan. Cours d’Analyse, volume Tome Troisieme. Gauthier-Villards,Imprimeur-Libraire, 1887.
[223] J. Jost. Riemannian Geometry and Geometric Analysis. Springer Verlag, 2005.
[224] D. Joyner. Adventures in Group Theory, Rubik’s Cube, Merlin’s Machine and Other Mathematical Toys.
Johns Hopkins University Press, second edition, 2008.
[225] M. Kac. Can one hear the shape of a drum? Amer. Math. Monthly, 73:1–23, 1966.
[226] R.E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME–
Journal of Basic Engineering, 82(Series D):35–45, 1960.
[227] L.A. Kaluzhnin. The Fundamental Theorem of Arithmetic. Little Mathematics Library. Mir Publishers,
Moscow, 1979.
[228] M. Karoubi. K-Theory. Springer Verlag, 1978.
[229] A. Katok. Fifty years of entropy in dynamics: 1958-2007. Journal of Modern Dynamics, 1:545–596, 2007.
[230] A. Katok and B. Hasselblatt. Introduction to the modern theory of dynamical systems, volume 54 of
Encyclopedia of Mathematics and its applications. Cambridge University Press, 1995.
[231] A.B. Katok and A.M. Stepin. Approximations in ergodic theory. Russ. Math. Surveys, 22:77–102, 1967.
[232] A.B. Katok and A.M. Stepin. Metric properties of measure preserving homemorphisms. Russ. Math.
Surveys, 25:191–220, 1970.
[233] V. Katz. The history of Stokes theorem. Mathematics Magazine, 52, 1979.
[234] V. Katz. Mathematics of Egypt, Mesopotamia, China, India and Islam. Princeton University Press, 2007.
[235] V.J. Katz. The history of stokes theorem. Mathematics Magazine, 52:146–156, 1979.
[236] Y. Katznelson. An introduction to harmonic analysis. John Wiley and Sons, Inc, New York, 1968.
[237] A.S. Kechris. Classical Descriptive Set Theory, volume 156 of Graduate Texts in Mathematics. Springer-
Verlag, Berlin, 1994.
[238] V.P. Khavin and N.K. Nikol’skij. Commutative Harmonic Analysis I. Springer, 1991.
[239] D.A. Klain and G-C. Rota. Introduction to geometric probability. Lezioni Lincee. Accademia nazionale dei
lincei, 1997.
[240] F. Klein. Vorlesungen über die Entwicklung der Mathematik im 19. Jahrhundert. Springer, 1979 (originally
1926).
[241] J.R. Kline. What is the Jordan curve theorem? American Mathematical Monthly, 49:281–286, 1942.
[242] M. Kline. Mathematical Thought from Ancient to Modern Time. Oxford University Press, 1972.
[243] S. Klymchuk. Counterexamples in Calculus. MAA, 2010.
[244] O. Knill. From the sphere to the Mandelbulb. Talk of May 18, 2013, http://www.math.harvard.edu/ knill/slides/boston/sic.pdf.
[245] O. Knill. Singular continuous spectrum and quantitative rates of weakly mixing. Discrete and continuous
dynamical systems, 4:33–42, 1998.
[246] O. Knill. Weakly mixing invariant tori of hamiltonian systems. Commun. Math. Phys., 205:85–88, 1999.
[247] O. Knill. A multivariable Chinese remainder theorem and Diophantine approximation.
http://people.brandeis.edu/ kleinboc/EP0405/knill.html, 2005.
[248] O. Knill. Probability Theory and Stochastic Processes with Applications. Overseas Press, 2009.
[249] O. Knill. A discrete Gauss-Bonnet type theorem. Elemente der Mathematik, 67:1–17, 2012.
[250] O. Knill. A multivariable Chinese remainder theorem.
https://arxiv.org/abs/1206.5114, 2012.
[251] O. Knill. A Cauchy-Binet theorem for Pseudo determinants. Linear Algebra and its Applications, 459:522–
547, 2014.
[252] O. Knill. Some fundamental theorems in mathematics.
https://arxiv.org/abs/1807.08416, 2018.
[253] O. Knill. Top 10 fundamental theorems.
https://www.youtube.com/watch?v=Dvj0gfajdNI, 2018.
[254] O. Knill, J. Carlsson, A. Chi, and M. Lezama. An artificial intelligence experiment in college math
education. http://www.math.harvard.edu/˜knill/preprints/sofia.pdf, 2003.
[255] O. Knill and O. Ramirez-Herran. On Ullman’s theorem in computer vision. Retrieved February 2, 2009,
from http://arxiv.org/abs/0708.2442, 2007.
[256] O. Knill and E. Slavkovsky. Visualizing mathematics using 3d printers. In C. Fonda E. Canessa and
M. Zennaro, editors, Low-Cost 3D Printing for science, education and Sustainable Development. ICTP,
2013. ISBN-92-95003-48-9.
[257] D. E. Knuth. Surreal numbers. Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1974.
[258] T.W. Koerner. Fourier Analysis. Cambridge University Press, 1988.
[299] P.W. Michor. Elementary Catastrophe Theory. Tipografia Universitatii din Timisorara, 1985.
[300] J. Milnor. Dynamics in one complex variable. Introductory Lectures, SUNY, 1991.
[301] H. Minc. Nonnegative Matrices. John Wiley and Sons, 1988.
[302] M. Minsky. The Society of Mind. A Touchstone Book, published by Simon & Schuster Inc, New York,
London, Toronto, Sydney and Tokyo, 1988.
[303] C. Misbah. Complex Dynamics and Morphogenesis: An Introduction to Nonlinear Science. Springer, 2017.
[304] U. Montano. Explaining Beauty in Mathematics: An Aesthetic Theory of Mathematics, volume 370 of
Synthese Library. Springer, 2013.
[305] J.W. Morgan. The Poincaré conjecture. In Proceedings of the ICM, 2006, pages 713–736, 2007.
[306] J.W. Morgan and G. Tian. Ricci Flow and the Poincaré Conjecture, volume 3 of Clay Mathematics
Monographs. AMS, Clay Math Institute, 2007.
[307] J. Moser. Stable and random Motion in dynamical systems. Princeton University Press, Princeton, 1973.
[308] J. Moser. Selected chapters in the calculus of variations. Lectures in Mathematics ETH Zürich. Birkhäuser
Verlag, Basel, 2003. Lecture notes by Oliver Knill.
[309] D. Mumford. O. Zariski. National Academy of Sciences, 2013.
[310] K.G. Murty. Linear Programming. Wiley, 1983.
[311] M. Ram Murty and A. Pacelli. Quadratic reciprocity via theta functions. Proc. Int. Conf. Number Theory,
1:107–116, 2004.
[312] E. Nagel and J.R. Newman. Gödel’s proof. New York University Press, 2001.
[313] I.P. Natanson. Constructive Function theory. Frederick Ungar Publishing, 1965.
[314] E. Nelson. Internal set theory: A new approach to nonstandard analysis. Bull. Amer. Math. Soc, 83:1165–
1198, 1977.
[315] J. Neunhäuserer. Schöne Sätze der Mathematik. Springer Spektrum, 2017.
[316] N.I. Fisher, T. Lewis, and B.J. Embleton. Statistical analysis of spherical data. Cambridge University Press, 1987.
[317] C. Mouhot (notes by T. Feng). Analysis of partial differential equations. 2013. Lectures at Cambridge.
[318] M. Nowak. Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press, 2006.
[319] O. Ore. The Four-Color Problem. Academic Press, 1967.
[320] J. Owen. The grammar of Ornament. Quaritch, 1910.
[321] J.C. Oxtoby. Measure and Category. Springer Verlag, New York, 1971.
[322] R.S. Palais. Seminar on the Atiyah-Singer Index Theorem, volume 57 of Annals of Mathematics Studies.
Princeton University Press, 1965.
[323] R.S. Palais. A simple proof of the Banach contraction principle. Journal of Fixed Point Theory and Applications, 2:221–223, 2007.
[324] R.S. Palais and S. Smale. A generalized Morse theory. Bull. Amer. Math. Soc., 70:165–172, 1964.
[325] E. Pariser. The Filter Bubble. Penguin Books, 2011.
[326] H-O. Peitgen and P.H.Richter. The beauty of fractals. Springer-Verlag, New York, 1986.
[327] C. Petzold. The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computabil-
ity and the Turing Machine. Wiley Publishing, 2008.
[328] R. R. Phelps. Lectures on Choquet’s theorem. D.Van Nostrand Company, Princeton, New Jersey, 1966.
[329] G. Pick. Geometrisches zur Zahlenlehre. Sitzungsberichte des deutschen naturwissenschaftlich-medicinischen Vereines für Böhmen "Lotos" in Prag, Band XIX, 1899.
[330] C.A. Pickover. The Math Book. Sterling, New York, 2012.
[331] B.C. Pierce. Basic Category Theory for Computer Scientists. MIT Press, 1991.
[332] J. Pla i Carrera. The fundamental theorem of algebra before Carl Friedrich Gauss. Publicacions
Matemàtiques, 36(2B):879–911 (1993), 1992.
[333] B. Polster and M. Ross. Math Goes to the Movies. Johns Hopkins University Press, 2012.
[334] A.S. Posamentier. The Pythagorean Theorem: The Story of its Power and Beauty. Prometheus Books,
2010.
[335] T. Poston and I. Stewart. Catastrophe theory and its applications. Pitman, 1978.
[336] J. Propp. What I learned from Richard Stanley. https://arxiv.org/abs/1501.00719, 2015.
[337] Q.I. Rahman and G. Schmeisser. Analytic Theory of Polynomials, volume 26 of London Mathematical
Society Monographs, New Series. Oxford University Press, 2002.
[338] A.R. Rajwade and A.K. Bhandari. Surprises and Counterexamples in Real Function Theory. Hindustan
Book Agency, 2007.
[339] F. Rannou. Numerical study of discrete area-preserving mappings. Acta Arithm, 31:289–301, 1974.
[340] M. Reed and B. Simon. Methods of modern mathematical physics. Academic Press, Orlando, 1980.
[341] H. Ricardo. Goldbach's conjecture implies Bertrand's postulate. Amer. Math. Monthly, 112:492, 2005.
[342] D.S. Richeson. Euler’s Gem. Princeton University Press, Princeton, NJ, 2008. The polyhedron formula
and the birth of topology.
[343] H. Riesel. Prime numbers and computer methods for factorization, volume 57 of Progress in Mathematics.
Birkhäuser Boston Inc., 1985.
[344] D.P. Robbins. The story of 1, 2, 7, 42, 429, 7436, . . . . Mathematical Intelligencer, 13:12–19, 1991.
[345] A. Robert. Analyse non standard. Presses polytechniques romandes, 1985.
[346] J. Rognes. On the Atiyah-Singer index theorem. http://www.abelprize.no, 2004.
[347] J. Rosenhouse. The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser.
Oxford University Press, 2009.
[348] G-C. Rota. The phenomenology of mathematical beauty. Synthese, 111(2):171–182, 1997. Proof and
progress in mathematics (Boston, MA, 1996).
[349] C. Rovelli. The order of Time. Riverhead Books, 2018.
[350] D. Ruelle. Statistical Mechanics. Mathematics Physics Monograph Series. W.A. Benjamin, Inc, 1968.
[351] D. Ruelle. Thermodynamic Formalism. The Mathematical Structures of Classical Equilibrium Statistical
Mechanics, volume 5 of Encyclopedia of mathematics and its applications. Addison-Wesley Publishing
Company, London, 1978.
[352] D. Ruelle. The mathematician’s brain. Princeton University Press, Princeton, NJ, 2007.
[353] J. W. Russell. An Elementary Treatise on Pure Geometry: With Numerous Examples. Clarendon Press,
1893.
[354] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, second edition, 2003. First edition 1995.
[355] S. Zeki, J.P. Romaya, D.M.T. Benincasa and M.F. Atiyah. The experience of mathematical beauty and its
neural correlates. Front. Hum. Neurosci., 8:68, 2014.
[356] T.L. Saaty and P.C. Kainen. The four color problem, assaults and conquest. Dover Publications, 1986.
[357] A. Sard. The measure of the critical values of differentiable maps. Bull. Amer. Math. Soc., 48:883–890,
1942.
[358] H. Schenck. Computational Algebraic Geometry, volume 58 of London Mathematical Society Student Texts. Cambridge University Press,
2003.
[359] L. Schläfli. Theorie der Vielfachen Kontinuität. Cornell University Library Digital Collections, 1901.
[360] S.L. Segal. Mathematicians under the Nazis. Princeton university press, 2003.
[361] H. Segerman. Visualizing Mathematics with 3D Printing. John Hopkins University Press, 2016.
[362] G. Shafer and V. Vovk. The sources of Kolmogorov's Grundbegriffe. Statistical Science, 21:70–98, 2006.
[363] C.E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379–
423,623–656, 1948.
[364] S. Shelah. Proper Forcing. Lecture Notes in Mathematics. Springer, 1982.
[365] T. Siegfried. A beautiful math. Joseph Henry Press, 2006.
[366] J.H. Silverman. The Arithmetic of Elliptic Curves, volume 106 of Graduate Texts in Mathematics. Springer,
1986.
[367] B. Simon. Fifteen problems in mathematical physics. In Anniversary of Oberwolfach, Perspectives in
Mathematics. Birkhäuser Verlag, Basel, 1984.
[368] B. Simon. The Statistical Mechanics of Lattice Gases, volume I. Princeton University Press, 1993.
[369] B. Simon. Operators with singular continuous spectrum: I. General operators. Annals of Mathematics,
141:131–145, 1995.
[370] B. Simon. Trace Ideals and their Applications. AMS, 2. edition, 2010.
[371] B. Simon. A comprehensive course in Analysis. AMS, 2017.
[372] Ya.G. Sinai. Introduction to Ergodic Theory. Princeton University Press, Princeton, 1976.
[373] N. Sinclair. Mathematics and Beauty. Teachers College Press, 2006.
[414] V.V. Kozlov and D.V. Treshchev. Billiards, volume 89 of Translations of Mathematical Monographs. AMS,
1991.
[415] W. Fulton. Algebraic Curves. An Introduction to Algebraic Geometry. Addison-Wesley, 1969.
[416] F. Warner. Foundations of differentiable manifolds and Lie groups, volume 94 of Graduate texts in math-
ematics. Springer, New York, 1983.
[417] D. J. Watts. Small Worlds. Princeton University Press, 1999.
[418] D. J. Watts. Six Degrees. W. W. Norton and Company, 2003.
[419] J.N. Webb. Game Theory. Springer, 2007.
[420] E.W. Weinstein. CRC Concise Encyclopedia of Mathematics. CRC Press, 1999.
[421] J. Weizenbaum. ELIZA — a computer program for the study of natural language communication between
man and machine. Communications of the ACM, 9:36–45, 1966.
[422] D. Wells. Which is the most beautiful? Mathematical Intelligencer, 10:30–31, 1988.
[423] D. Wells. Are these the most beautiful? The Mathematical Intelligencer, 12:9, 1990.
[424] Q. Westrich. The Calabi-Yau theorem. University of Wisconsin-Madison, 2014.
[425] W. Fulton and J. Harris. Representation Theory. Graduate Texts in Mathematics. Springer, 2004.
[426] H. Whitney. Collected Works. Birkhäuser, 1992.
[427] D. Williams. Probability with Martingales. Cambridge Mathematical Textbooks, 1991.
[428] R. Wilson. Four Colors Suffice. Princeton Science Library. Princeton University Press, 2002.
[429] G.L. Wise and E.B.Hall. Counterexamples in Probability and Real Analysis. Oxford University Press, 1993.
[430] M. Wojtkowski. Principles for the design of billiards with nonvanishing Lyapunov exponents. Communications in Mathematical Physics, 105:391–414, 1986.
[431] N. Wolchover. Proven: 'most important unsolved problem' in numbers. NBC News, September 11, 2012.
[432] S. Wolfram. Theory and Applications of Cellular Automata. World Scientific, 1986.
[433] S. Wolfram. A new kind of Science. Wolfram Media, 2002.
[434] H. Wussing. 6000 Jahre Mathematik. Springer, 2009.
[435] W.W. Boone, F.B. Cannonito and R.C. Lyndon. Word Problems. North-Holland Publishing Company,
1973.
[436] D. Zagier. A one-sentence proof that every prime p ≡ 1 (mod 4) is a sum of two squares. Amer. Math.
Monthly, 97:144, 1990.
[437] D. Zeilberger. Proof of the alternating-sign matrix conjecture. Electron. J. Combin., 3(2), 1996.
[438] G.M. Ziegler. Lectures on Polytopes. Springer Verlag, 1995.
[439] J. D. Zund. George David Birkhoff and John von Neumann: A question of priority and the ergodic
theorems, 1931-1932. Historia Mathematica, 29:138–156, 2002.
[440] D. Zwillinger. CRC Standard Mathematical Tables and Formulas. CRC Press, 33rd edition, 2017.