MA4209 Notes

Combinatorics and Graph Theory

Donald L. Kreher

DRAFT September 26, 2019


Contents

I A Taste of Graph Theory 1

1 Basic graph theory 3


1.1 Incidence, adjacency and degree . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Isomorphisms and Automorphisms . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Walks, trails, paths and cycles . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Trees and forests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Bipartite graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7 Euler trails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Planar Graphs 21
2.1 Planar embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3 Euler’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 Regular polyhedra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5 Kuratowski’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.1 Subdivision, contraction and minors . . . . . . . . . . . . . . . . . . 27
2.5.2 Blocks and separable graphs . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.3 Proof of Kuratowski’s theorem . . . . . . . . . . . . . . . . . . . . . 28
2.5.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3 Algebraic Graph Theory 33


3.1 Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Regular graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 The matrix tree theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4 Connectivity 45
4.0.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

II A Taste of Design Theory 47

5 Steiner Triple Systems 49


5.1 Graph decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2 The Bose construction v ≡ 3 (mod 6) . . . . . . . . . . . . . . . . . . . . . 54
5.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.3 The Skolem construction v ≡ 1 (mod 6) . . . . . . . . . . . . . . . . . . . . 55

6 Magic Squares 59
6.1 De La Loubère’s construction . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 The orthogonal Latin square construction . . . . . . . . . . . . . . . . . . . 61
6.3 Strachey’s construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.4 The Product construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.4.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

7 Mutually Orthogonal Latin Squares 69


7.1 Finite fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.2 Finite projective planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
7.3 Pairs of orthogonal Latin squares . . . . . . . . . . . . . . . . . . . . . . . . 75
7.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

III Miscellaneous Topics 85

8 Alternating Paths and Matchings 87


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
8.1.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
8.2 Perfect matchings and 1-factorizations . . . . . . . . . . . . . . . . . . . . . 90
8.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
8.3 Tutte’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.4 The 4-color problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.4.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

Acknowledgments
I thank the following people for their help in note-taking and proofreading: Steve Sy, Kyle Rokos,
Dave Torey, Robert Edman, Betsy George, David Clark, Sibel Ozkan, Joshua Ruark, Melissa
Keranen.

Part I

A Taste of Graph Theory

Chapter 1

Basic graph theory

1.1 Incidence, adjacency and degree


An (undirected) graph G = (V, E) consists of a set V of vertices and a set E of pairs of vertices
called edges.
Example 1.1. An example of a graph with 8 vertices and 6 edges is given by
V = {1, 2, 3, 4, 5, 6, 7, 8}
E = {{1, 2}, {2, 3}, {1, 3}, {4, 5}, {5, 6}, {6, 7}}
Every graph has a picture. A picture of the graph in this example is given in Figure 1.1.

Figure 1.1: A picture of the graph of Example 1.1.

If x and y are vertices and {x, y} is an edge, then we say that x is adjacent to y. The graph on
n vertices in which all pairs of vertices are adjacent is called the complete graph and it is denoted
by Kn .

If x is a vertex and e is an edge that contains x, then we say that x is incident to e.

The degree of a vertex x is the number of edges incident to x. This is denoted by


Deg(x) = |{e ∈ E : x ∈ e}|

Example 1.2.

(Figure: a graph on the eight vertices a, b, c, d, e, f, g, h.)

        x     a  b  c  d  e  f  g  h
    Deg(x)    2  4  4  1  2  2  0  1        Σ_{x} Deg(x) = 16

We now state the fundamental theorem of graph theory.

Theorem 1.3. For any graph G = (V, E), the sum of the degrees of the vertices is twice the number
of edges. X
Deg(x) = 2|E|
x∈V

Proof. Every edge is incident to two vertices. Thus the sum counts every edge twice.
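The identity of Theorem 1.3 is easy to check by machine. The following short Python sketch (an illustrative addition; the representation and names are not taken from the notes) encodes the graph of Example 1.1 and verifies that the degree sum equals 2|E|.

# A minimal sketch, assuming the set/list representation below (not from the notes).
V = {1, 2, 3, 4, 5, 6, 7, 8}
E = [{1, 2}, {2, 3}, {1, 3}, {4, 5}, {5, 6}, {6, 7}]

# Deg(x) = number of edges incident to x.
def deg(x):
    return sum(1 for e in E if x in e)

total = sum(deg(x) for x in V)
print(total, 2 * len(E))   # both print 12, illustrating sum of degrees = 2|E|
assert total == 2 * len(E)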

Corollary 1.4. In any graph the number of vertices of odd degree is even.

Proof. Let G = (V, E) be a graph and let

A = {x ∈ V : Deg(x) is even}
B = {x ∈ V : Deg(x) is odd}.

Then A ∪̇ B = V and thus by Theorem 1.3 we have

    2|E| = Σ_{x∈V} Deg(x) = Σ_{x∈A} Deg(x) + Σ_{x∈B} Deg(x).

Because all of the summands in Σ_{x∈A} Deg(x) are even, we have

    0 ≡ Σ_{x∈B} Deg(x) ≡ |B| (mod 2),

since all of the summands in Σ_{x∈B} Deg(x) are odd.

The average degree of a vertex in the graph G = (V, E) is

    AvgDeg(G) = (1/|V|) Σ_{x∈V} Deg(x) = 2|E|/|V|,

the minimum degree is

    δ(G) = min{Deg(x) : x ∈ V},

and the maximum degree is

    ∆(G) = Max{Deg(x) : x ∈ V}.

Hence

    δ(G) ≤ AvgDeg(G) ≤ ∆(G).

The number of edges per vertex is

    ε(G) = |E|/|V| = (1/2) AvgDeg(G).
For any graph G, we also denote the set of its edges and the set of its vertices by E(G) and
V (G), respectively. We say that the graph H is a subgraph of the graph G if V (H) ⊆ V (G) and
E(H) ⊆ E(G). For any U ⊆ V (G), the subgraph induced by U is H = G[U ] which has V (H) = U
and E(H) = {{x, y} ⊆ U : {x, y} ∈ E(G)}. If G is a graph and x ∈ V (G), then G − x is the
subgraph induced by V (G) \ {x}. It is the subgraph obtained by removing x and all the edges
incident to x. If H is a subgraph of G and V (H) = V (G), then H is called a spanning subgraph.

Example 1.5. Let G be the pictured graph on the vertices a, b, c, d, e, f. The subgraph G[{a, d, e}]
is an induced subgraph of G, but the subgraph H on the vertices a, d, e that omits one of the edges
among them is a subgraph that is not induced.

An empty graph is a graph that contains no edges. A graph that has no vertices is pointless.
Theorem 1.6. Every non-empty graph G has a subgraph H satisfying

    δ(H) > ε(H) ≥ ε(G).

Proof. We construct from G the subgraph H by deleting vertices without lowering ε, the ratio of
the number of edges to the number of vertices. We can delete the vertex x so long as Deg(x) ≤ ε(G).
Deleting such an x decreases the number of vertices by 1 and the number of edges by at most ε(G). So

    ε(G − x) ≥ (|E| − ε(G)) / (|V| − 1)
             = (|E| − |E|/|V|) / (|V| − 1)
             = |E| (1 − 1/|V|) / (|V| − 1)
             = |E| ((|V| − 1)/|V|) / (|V| − 1)
             = |E| / |V|
             = ε(G).

More formally, we construct a sequence of induced subgraphs

    G = G0 ⊇ G1 ⊇ G2 ⊇ G3 ⊇ · · ·

If Gi has a vertex xi with Deg(xi) ≤ ε(Gi), then we set Gi+1 = Gi − xi, the subgraph obtained by
deleting xi and the edges incident to xi. If there is no such vertex, then we terminate the
sequence and set H = Gi. By the choice of the xi, we have ε(H) ≥ ε(G). Furthermore H has no vertex
x with Deg(x) ≤ ε(H). Therefore Deg(x) > ε(H) for all x ∈ V(H), i.e. δ(H) > ε(H).

1.2 Isomorphisms and Automorphisms


Two graphs G and H are isomorphic if there is a one-to-one correspondence (bijection) f : V (G) → V (H) such that
{x, y} ∈ E(G) if and only if {f (x), f (y)} ∈ E(H).
We say that such a function f is an isomorphism from G to H. An automorphism of a graph G
is an isomorphism from G to G. The set of all automorphisms of a graph G forms an (algebraic)
group under composition of functions, called the automorphism group of G and denoted
by Aut(G). The automorphism group of the graph in Figure 1.1 is the set of permutations
I (the identity)
(1, 2, 3)
(1, 3, 2)
(2, 3)
(1, 3)
(1, 2)
(4, 7)(5, 6)
(1, 2, 3)(4, 7)(5, 6)
(1, 3, 2)(4, 7)(5, 6)
(2, 3)(4, 7)(5, 6)
(1, 3)(4, 7)(5, 6)
(1, 2)(4, 7)(5, 6)
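The automorphism group above can be recomputed by brute force. The following Python sketch (an illustrative addition, not part of the notes) tests every permutation of the vertex set of the graph in Figure 1.1 and confirms that exactly 12 of them preserve the edge set.

# A brute-force sketch, assuming the representation below (not from the notes).
from itertools import permutations

V = [1, 2, 3, 4, 5, 6, 7, 8]
E = {frozenset(e) for e in [{1, 2}, {2, 3}, {1, 3}, {4, 5}, {5, 6}, {6, 7}]}

def is_automorphism(perm):
    f = dict(zip(V, perm))                      # the vertex relabeling x -> f(x)
    return {frozenset({f[x], f[y]}) for x, y in E} == E

autos = [p for p in permutations(V) if is_automorphism(p)]
print(len(autos))   # prints 12, matching the list above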
A picture of a graph that contains no vertex labels represents all possible labelings of that picture.
It is an isomorphism class or orbit under the action of the symmetric group on the set of vertex
labels. For example, the picture shown (a graph drawn on four unlabeled vertices) represents 12
labeled graphs, one for each way of assigning the labels a, b, c, d to its vertices that produces a
distinct labeled graph, and they are all isomorphic.

Table 1.1: The nonisomorphic graphs on 4 vertices, listed by the number of edges |E|. (Drawings omitted.)

1.2.1 Exercises
1. Let G be a graph on the vertex set V = {x1 , x2 , ..., xn }. Let di = Deg(xi ), for i = 1, 2, . . . , n
and order the vertices such that d1 ≤ d2 ≤ · · · ≤ dn . The sequence (d1 , d2 , d3 , . . . , dn ) is called
the degree sequence of the graph. If two graphs have different degree sequences, then they
are non-isomorphic, but the converse is not true. Find the smallest pair of graphs that are
non-isomorphic but have the same degree sequence.

2. Draw the nonisomorphic graphs on 5 vertices. (Solution is given in Table 1.2.)

3. Find the set of automorphisms of the cube.

1.3 Walks, trails, paths and cycles


Let G be a graph. A walk of length k in G is an alternating sequence

x0 e1 x1 e2 x2 e3 x3 · · · ek xk

of vertices and edges such that ei = xi−1 xi . If the walk starts at vertex a = x0 and ends at vertex
b = xk , we say that it is an a-b walk. We will simplify our notation {x, y} for an edge to just xy,
and use the simpler notation
x0 x1 x2 x3 · · · xk

to denote the walk. The edges are easily determined: ei = xi−1 xi , i = 1, 2, . . . , k. If x0 = xk , then
we say that the walk is a closed walk .

Figure 1.2: A graph on the vertices 1–9, with labeled edges, for illustrating walks, trails, paths and cycles.

In the graph displayed in Figure 1.2 we see that

1e7 4e3 5e13 3e2 2e2 3e2 2e9 5e4 6e4 5e10 8e6 9

is a 1-9 walk of length 11. A trail is a walk in which all of the edges are distinct. A 1-9 trail of
length 8 in the graph displayed in Figure 1.2 is

1e7 4e3 5e13 3e2 2e9 5e4 6e14 8e6 9.

A path is a walk in which all of the vertices (and hence the edges) are distinct. A 1-9 path of length
5 in the graph displayed in Figure 1.2

1e7 4e3 5e4 6e14 8e6 9.

So, a path P is a subgraph of G of the form

V (P ) = {x0 , x1 , . . . , xk }
E(P ) = {x0 x1 , x1 x2 . . . , xk−1 xk }

for some subset of distinct vertices x0 , x1 , . . . , xk . We write P = x0 x1 x2 · · · xk to denote this path.


In Figure 1.2, 145689 is a path from 1 to 9. It has length 5. We denote a path with n vertices
by Pn . (Pn has length n − 1.)
If P = x0 x1 x2 · · · xk is a path in the graph G and xk x0 is an edge of G, then

C = P + xk x0

i.e. E(C) = E(P ) ∪ {{xk , x0 }} is a cycle (or circuit ) in G. A cycle is a path from a vertex to
itself. We denote a cycle of length n by Cn .
Theorem 1.7. Every graph G, with δ(G) ≥ 2, contains a path of length δ(G) and a cycle of length
at least δ(G) + 1.

Proof. Let P = x0 x1 x2 · · · xk be a longest path in the graph G.

Because P is a longest path, all of the vertices adjacent to xk lie on this path (otherwise P could
be extended). Thus

    k ≥ Deg(xk ) ≥ δ(G).

Let i be the smallest index such that xi xk ∈ E(G). Then

C = xi xi+1 xi+2 · · · xk xi

is a cycle of length at least δ(G) + 1.

The distance Dist(x, y) between two vertices x, y of G is the length of a shortest x-y path.
If no such path exists, then Dist(x, y) = ∞. In Figure 1.2 we see that Dist(1, 9) = 4. The greatest
distance between any two vertices is called the diameter of G, which we denote by

    Diam(G) = Max{Dist(x, y) : x, y ∈ V (G)}.

The graph in Figure 1.2 has diameter 4. Two more examples are given in Figure 1.3. The minimum
length of a cycle in a graph G is called the girth of G and is denoted by g(G). Examples are given
in Figure 1.3.

Figure 1.3: Graph (a) is called the cube. It has |V | = 8, Diam(G) = 3, δ = ∆ = 3 and g(G) = 4.
Graph (b) is called the Petersen graph. It has |V | = 10, Diam(G) = 2, δ = ∆ = 3 and g(G) = 5.
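Dist and Diam are computed in practice by breadth-first search. The sketch below (an illustrative addition; the adjacency-list representation and function names are assumptions, not from the notes) computes all distances from a vertex and the diameter, and checks that the cube of Figure 1.3(a) has diameter 3.

# A minimal sketch, assuming a dict-of-neighbour-lists representation.
from collections import deque

def distances_from(adj, s):
    """Return {v: Dist(s, v)} for all v reachable from s."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def diameter(adj):
    """Max over all pairs of Dist(x, y); infinity if the graph is disconnected."""
    best = 0
    for s in adj:
        dist = distances_from(adj, s)
        if len(dist) < len(adj):
            return float('inf')
        best = max(best, max(dist.values()))
    return best

# The cube of Figure 1.3(a): vertices are 3-bit strings, adjacent when they
# differ in exactly one coordinate.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
print(diameter(cube))   # prints 3, as stated in Figure 1.3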

Theorem 1.8. Every graph G containing a cycle satisfies g(G) ≤ 2Diam(G) + 1.

Proof. Let
C = x0 x1 x2 x3 · · · xj · · · xg−1 x0

be a shortest cycle in G and suppose g ≥ 2Diam(G) + 2. Then the length of C is g = g(G) ≥
2Diam(G) + 2. Let j = Diam(G) + 1. The path

x0 x1 x2 · · · xj

has length Diam(G) + 1 and the path

xj xj+1 xj+2 · · · x0

has length at least Diam(G) + 1. The shortest x0 -xj path

P : x0 = y0 y1 · · · y` = xj

in G has length ` ≤ Diam(G). Thus

x0 x1 · · · xj y`−1 y`−2 · · · y2 y1 x0

is a closed walk of length

j + 1 + ` − 1 = j + ` = Diam(G) + 1 + ` ≤ 2Diam(G) + 1

Furthermore not all of the edges of P are on the cycle C. Therefore this walk contains a cycle.
This cycle has length less than that of the walk, i.e. less than 2Diam(G) + 1. This contradicts the
choice of C being the shortest cycle.

1.3.1 Exercises
1. Show that a closed walk of odd length contains a cycle of odd length.

1.4 Connectivity
A non-empty graph G is connected if any two vertices are joined by a path.

Lemma 1.9. The vertices x1 , x2 , . . . , xn of a connected graph G can be listed so that the induced
subgraph
Gi = G[x1 , x2 , x3 , · · · , xi ]
is connected for every i.

Proof. We inductively construct graphs Gi as follows. Let x1 be any vertex of G. Obviously


G1 = G[x1 ] is connected. Suppose that we have constructed the connected subgraphs G1 , G2 , . . . , Gi
and let x be any vertex in V (G) \ {x1 , x2 , . . . , xi }. Choose any path

P : x1 = y0 y1 · · · y` = x

from x1 to x. (There exists such a path, because G is connected.) Let j be smallest such that
yj ∉ {x1 , x2 , . . . , xi }, and set xi+1 = yj . Then xi+1 is adjacent to yj−1 and yj−1 ∈ V (Gi ). Therefore
Gi+1 = G[x1 , x2 , x3 , · · · , xi , xi+1 ] is connected.
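The ordering promised by Lemma 1.9 can be produced algorithmically: a breadth-first ordering works, since each newly listed vertex is adjacent to an earlier one. A short Python sketch (illustrative, not from the notes):

# A minimal sketch, assuming a dict-of-neighbour-lists representation.
from collections import deque

def connected_order(adj, start):
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return order

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(connected_order(adj, 1))   # e.g. [1, 2, 3, 4]; every prefix induces a connected subgraph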

Figure 1.4: A graph G with κ(G) = 2 and δ(G) = 3.

Let G = (V, E) be a graph. If A, B ⊆ V and X ⊆ V ∪ E is a set of vertices and edges such that
every path in G from a vertex in A to a vertex in B contains an edge or vertex in X, we say that X
separates A from B and we call X a separating set. A vertex that separates two other vertices is
called a cut vertex or an articulation point. An edge that separates its ends is called a bridge.
A graph G = (V, E) is said to be k-connected for k ∈ N if |V | > k and G − X is connected for
every X ⊆ V with |X| < k. That is, no two vertices are separated by fewer than k other vertices. Every
non-empty graph is 0-connected. The 1-connected graphs are the non-trivial connected graphs.
The largest integer k such that G is k-connected is the (vertex ) connectivity of G. We denote this
integer by κ(G).

Example 1.10. The connectivity of some graphs.

1. κ(G) = 0 if and only if G is disconnected or G = K1 .

2. κ(Kn ) = n − 1.

3. The connectivity of the Petersen graph is 3.

4. We can delete all of the vertices adjacent to a given vertex and disconnect the graph. Thus
κ(G) ≤ δ(G).

5. The connectivity of the graph in Figure 1.4 is 2 but the minimum degree is 3.

If |V | > 1 and G − F is connected for every set F ⊆ E of fewer than ` edges, then G is
called `-edge connected . The greatest integer ` such that G is `-edge connected is called the edge
connectivity of G and this integer is denoted by λ(G). Note that λ(G) = 0 if and only if G is
disconnected. For a non-trivial graph G, it is easy to see that

κ(G) ≤ λ(G) ≤ δ(G).

Thus large connectivity implies large minimum degree, but the converse is not true as Figure 1.5
illustrates.

Theorem 1.11. Mader (1972). Every non-trivial graph G with AvgDeg(G) ≥ 4k has a k-
connected subgraph.

Proof. If k ∈ {0, 1} this is trivial. Let k ≥ 2 and G = (V, E). Set n = |V | and m = |E|. We will
prove the stronger result:

Figure 1.5: A graph with δ(G) = 4, λ(G) = κ(G) = 1.

G has a k-connected subgraph whenever


1. n ≥ 2k − 1, and
2. m ≥ (2k − 3)(n − k + 1) + 1.

Remark: This is indeed stronger. Conditions 1 and 2 follow from our assertion of AvgDeg(G) ≥
4k as follows.
If 1 is not true, then we have n < 2k − 1 and

    m = (1/2) AvgDeg(G) · n ≥ 2kn > (1/2)(n + 1)n.

This is too many edges: a graph can have at most (n choose 2) = n(n − 1)/2 edges.

Condition 2 follows from

    m = (1/2) AvgDeg(G) · n ≥ 2kn,

because

    2kn = (2k − 3)n + 3n
        = (2k − 3)(n − (k − 1)) + (2k − 3)(k − 1) + 3n
        = (2k − 3)(n − k + 1) + 1 + (2k − 3)(k − 1) + 3n − 1
        ≥ (2k − 3)(n − k + 1) + 1.

The last inequality holds as (2k − 3)(k − 1) + 3n − 1 ≥ 0.


We now prove the stronger result by induction on n.
If n = 2k − 1, then k = (n + 1)/2 and hence

    m ≥ (2(n + 1)/2 − 3)(n − (n + 1)/2 + 1) + 1
      = (n − 2)((n + 1)/2) + 1
      = n(n − 1)/2.

Thus G = Kn and Kk+1 ⊆ Kn , because k + 1 ≤ 2k − 1 = n for k ≥ 2. The subgraph Kk+1 is
k-connected.
Suppose n > 2k − 1. If G has a vertex x with Deg(x) ≤ 2k − 3, then G − x has n′ = n − 1
vertices and m′ ≥ m − (2k − 3) edges. Thus

    m′ ≥ (2k − 3)(n − k + 1) + 1 − (2k − 3)
       = (2k − 3)(n − 1 − k + 1) + 1
       = (2k − 3)(n′ − k + 1) + 1.

So by induction G − x, and hence G, has a k-connected subgraph.


So now suppose G has no such vertex. Then

δ(G) ≥ 2k − 2

If G is k-connected we are done: G itself is the required subgraph. If G is not k-connected, then
there is a set X of k − 1-vertices such that G − X = A1 ∪ A2 , where V (A1 ) ∪ V (A2 ) = V (G − X)
and V (A1 ) ∩ V (A2 ) = ∅. Let G1 = G[X ∪ V (A1 )] and G2 = G[X ∪ V (A2 )]. So G = G1 ∪ G2 ,
|V (G1 )|, |V (G2 )| < n and |V (G1 ∩ G2 )| = |X| = k − 1. Furthermore, for every x ∈ V (G1 ) \ V (G2 )
we have Deg(x) ≥ δ(G) ≥ 2k − 2 and the vertices adjacent to such x are in V (G1 ). Therefore
|V (G1 )| ≥ 2k − 1. Similarly |V (G2 )| ≥ 2k − 1. Thus both G1 and G2 satisfy Condition 1 of the
induction hypothesis. If neither G1 nor G2 satisfies Condition 2, then

|E(G1 )| ≤ (2k − 3)(|V (G1 )| − k + 1)

and
|E(G2 )| ≤ (2k − 3)(|V (G2 )| − k + 1)
Hence

m ≤ |E(G1 )| + |E(G2 )|
≤ (2k − 3)(|V (G1 )| − k + 1) + (2k − 3)(|V (G2 )| − k + 1)
= (2k − 3)(|V (G1 )| + |V (G2 )| − 2k + 2)
= (2k − 3)(n + (k − 1) − 2k + 2)
= (2k − 3)(n − k + 1)

(Recall |V (G1 ∩ G2 )| = |X| = k − 1.) This contradicts the fact that G satisfies Condition 2.
Consequently at least one of G1 and G2 satisfies both conditions of the induction hypothesis, and so
at least one of G1 and G2 contains a k-connected subgraph. Because these are subgraphs
of G, we conclude that G contains a k-connected subgraph.

1.4.1 Exercises
1. (The Chekad and Ernie problem.) Given a connected graph G let T (G) be the number of
sequences
x1 , x2 , . . . , xn
of the vertices of G such that the induced subgraphs

Gi = G[x1 , x2 , . . . , xi ]

are connected.

(a) Compute T (G) for every connected graph on 5 or less vertices.
(b) Show that T (Pn ) = 2n−1 .
(c) Show that T (Cn ) = n2n−2 .
(d) Show that T (K1,n−1 ) = 2(n − 1)!. ( K1,n−1 is the graph having one vertex adjacent to
each of the other vertices, but no other edges. )
(e) Show that T (Kn ) = n!.
(f) If H is a connected subgraph of G show that T (H) ≤ T (G).
(g) Show that T (G) is always even.

These are easy except possibly for 1g. The general question of what are the possible values
of T (G) is called the spectrum problem of T (G) and I believe this to be an open question. It
probably will depend heavily on the solution for spanning trees.

2. Show that every 2-connected graph contains a cycle.

1.5 Trees and forests


A graph F is acyclic if it contains no cycles. An acyclic graph is also called a forest. A connected
acyclic graph is called a tree. See Figure 1.6. The vertices of degree 1 in a tree T are called its leaves.

Figure 1.6: A tree

Every non-trivial tree has at least 2 leaves. A characterization of trees is given in Theorem 1.12.

Theorem 1.12. The following are all equivalent for a graph T .

1. T is a tree.

2. Any two vertices of T are connected by a unique path.

3. T is minimally connected. That is, T is connected but T − e is disconnected for every edge
   e ∈ E(T ).

4. T is maximally acyclic. That is, T contains no cycles, but T + xy contains a cycle through xy
   for any two non-adjacent vertices x, y ∈ V (T ), x ≠ y.

Proof. Let T = (V, E) be a graph.
(1⇒2) Suppose T is a tree and let a, b ∈ V , a ≠ b. Suppose

    a = x0 x1 x2 x3 . . . xk−1 xk = b

and

    a = y0 y1 y2 y3 . . . y`−1 y` = b

are two different paths from a to b. Then

    a = x0 x1 x2 x3 . . . xk−1 y` y`−1 . . . y3 y2 y1 y0 = a

is a closed walk in which not all of the xj 's and yi 's are the same. It must therefore contain a cycle,
contradicting the fact that T is a tree.
(2⇒3) Suppose any two vertices of T are connected by a unique path and let e = xy be any edge
of T . If T − e is connected then there is a path P from x to y in T − e. But then there are two x
to y paths in T , namely P and the edge e = xy. Thus T − e is disconnected for every edge e and
so T is minimally connected.
(3⇒4) Suppose T is minimally connected and let x, y be any two non-adjacent vertices of T , x 6= y.
Then there is an x to y path
x = x0 x1 x2 x3 . . . xk−1 xk = y

in T and hence there is a cycle in T + xy and so T is maximally acyclic.


(4⇒1) Suppose T is maximally acyclic. Then T has no cycles and for any pair of non-adjacent
vertices x and y we have that T + xy contains a cycle through xy. Thus T has a path from x to y
and so T is connected. Consequently T is a tree.

Applying Theorem 1.12 it is easy to see that every connected graph G contains a spanning tree.
Simply delete edges on cycles of G until there are no more cycles. Similarly it is easy to see that
any minimally connected spanning subgraph of G is also a tree.
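The remark above can be turned directly into a (slow but simple) procedure: repeatedly delete an edge whose removal keeps the graph connected; such an edge necessarily lies on a cycle, and what remains is a minimally connected spanning subgraph, i.e. a spanning tree. A Python sketch (illustrative, not from the notes):

# A minimal sketch, assuming vertices are hashable and edges are 2-tuples.
def is_connected(vertices, edges):
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return len(seen) == len(vertices)

def spanning_tree(vertices, edges):
    tree = list(edges)
    changed = True
    while changed:
        changed = False
        for e in list(tree):
            rest = [f for f in tree if f != e]
            if is_connected(vertices, rest):   # e lies on a cycle of the current subgraph
                tree = rest
                changed = True
                break
    return tree

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 1)]
T = spanning_tree(V, E)
print(T, len(T) == len(V) - 1)   # a spanning tree with n - 1 = 3 edges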

Corollary 1.13. The vertices of a tree can always be listed x1 , x2 , . . . , xn so that every xi with
i ≥ 2 has a unique neighbor in {x1 , x2 , . . . , xi−1 }.

Proof. Use the listing provided by Lemma 1.9.

Corollary 1.14. A connected graph with n vertices is a tree if and only if it has n − 1 edges.

Proof. We leave this as Exercise 3.

Corollary 1.15. If T is a tree and G is any graph with δ(G) ≥ |V (T )| − 1, then G has a subgraph
isomorphic to T .

Proof. We leave this as Exercise 4.

1.5.1 Exercises
1. Show that any tree T has at least ∆(T ) leaves.
2. Show that every automorphism of a tree fixes a vertex or an edge.
3. Prove Corollary 1.14.
4. Prove Corollary 1.15.

1.6 Bipartite graphs


Let r ≥ 2 be a positive integer. A graph G = (V, E) is said to be r-partite if its vertices can be
partitioned into r blocks

    V = V1 ∪̇ V2 ∪̇ V3 ∪̇ · · · ∪̇ Vr ,     Vi ∩ Vj = ∅ for i ≠ j,

such that every edge has its ends in different blocks. That is, if xy ∈ E, then x ∈ Vi and y ∈ Vj
for some i ≠ j. If r = 2, then an r-partite graph is said to be bipartite. An r-partite graph in
which every pair of vertices belonging to different blocks of the partition are adjacent is said to be
complete. If the r blocks of a complete r-partite graph G have sizes n1 , n2 , . . . , nr , then we denote
G by Kn1,n2,...,nr . We abbreviate Ks,s,...,s (with r blocks of size s) by Ks^r . The bipartite graph
K1,n is called the star.

Figure 1.7: Examples of r-partite graphs: (a) A 4-partite graph. (b) The bipartite graph K3,3.
(c) The 4-partite graph K2,2,2,2 = K2^4. (d) The star K1,5.

Theorem 1.16. A graph is bipartite if and only if it contains no odd cycle.


Proof. Clearly a graph G = (V, E) is bipartite if and only if its connected components are bipartite.
So we may assume that G is connected.
Suppose G is bipartite with partition V = V0 ∪̇ V1 , in which every edge has one end in V0 and
the other in V1 , and let C = x0 x1 x2 · · · xk−1 x0 be any cycle. The length of C is k.
Without loss of generality x0 ∈ V0 , say. Then xi ∈ V0 if i is even and xi ∈ V1 if i is odd. It follows that k is
even and hence C has even length. So, G does not contain an odd cycle.
Conversely suppose G has no odd cycles. Let T be a spanning tree in G and fix any vertex r.
We define a partition V = V0 ∪̇ V1 as follows. For each x ∈ V let Px be the unique path in T from
r to x. Set

    V0 = {x ∈ V : the length of Px is even}

and

    V1 = {x ∈ V : the length of Px is odd}.

Figure 1.8: An Eulerian graph on the vertices a, b, c, d, e, f, g, h; the closed walk abcdbedfeghfca is an Euler trail.

We now show that every edge e = xy of G has one of its ends in V0 and the other in V1 .
If e is an edge of T , then either x is on the path Py or y is on the path Px . So assume x is on
Py , if not switch the roles of x and y. Then Py = Px y and so if length of Py is even then the length
of Px is odd and if Py is odd then Px is even. Hence one of x and y are in V0 and the other is in V1 .
If e is not an edge of T , then T + e has a unique cycle Ce , namely the unique path P from x
to y in T together with the edge e. This cycle has even length by assumption and so P has odd
length. Furthermore the edges of the path P are edges of T and so each has one end in V0 and the
other in V1 . Hence the vertices of P alternate between V0 and V1 . Therefore, because P has odd
length, one of x and y is in V0 and the other is in V1 .
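The proof of Theorem 1.16 is constructive, and the same idea gives a bipartiteness test: 2-colour the vertices by the parity of their distance from a root and look for an edge whose ends receive the same colour. A Python sketch for connected graphs (illustrative, not from the notes):

# A minimal sketch, assuming a dict-of-neighbour-lists representation.
from collections import deque

def bipartition(adj):
    """Return (V0, V1) if adj describes a connected bipartite graph, else None."""
    root = next(iter(adj))
    colour = {root: 0}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in colour:
                colour[y] = 1 - colour[x]
                queue.append(y)
            elif colour[y] == colour[x]:
                return None            # an odd cycle has been found
    V0 = {v for v, c in colour.items() if c == 0}
    return V0, set(adj) - V0

# K3,3 is bipartite, the triangle is not.
k33 = {i: [j for j in range(3, 6)] if i < 3 else [j for j in range(3)] for i in range(6)}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(bipartition(k33))       # a bipartition into {0, 1, 2} and {3, 4, 5}
print(bipartition(triangle))  # None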

1.6.1 Exercises
1. Show that G is bipartite if and only if every induced cycle has even length.

1.7 Euler trails


An Euler trail in a graph G is a closed walk that uses every edge of G exactly once. A graph G that
has an Euler trail is said to be Eulerian. An example of an Eulerian graph is given in Figure 1.8.

Theorem 1.17. (1736 Euler) A connected graph is Eulerian if and only if all of its degrees are
even.

Proof. Suppose G is Eulerian and let

W = x0 e0 x1 e1 · · · x`−1 e`−1 x` e` x0

be an Euler trail in G. Every occurrence of a vertex x in the trail accounts for two edges incident
to x, namely the edges immediately preceding and following it in the trail. Thus if x occurs k times
in the trail it must have degree 2k.

Conversely suppose all the vertices of the graph G have even degree and let

W = x0 e0 x1 e1 · · · x`−1 e`−1 x` e`

be a longest walk in G that uses each edge at most once. Because W cannot be extended, all of
the edges incident to x` must appear in W . Therefore because x` has even degree, we have x0 = x`
and W is closed. If there is an edge of G outside of W , then because G is connected, there must
be an edge e incident to some vertex xi of W , say e = uxi . Then

uexi ei xi+1 ei+1 · · · x`−1 e`−1 x0 e0 x1 e1 x2 e2 · · · ei−1 xi

is a longer walk than W . This is a contradiction, and therefore W must include all of the edges of
G and is thus an Euler trail.
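The proof of Theorem 1.17 can be turned into an algorithm. Hierholzer's algorithm, sketched below in Python (an illustrative addition, not from the notes), repeatedly walks along unused edges and splices the resulting closed walks together to output an Euler trail of a connected graph with all degrees even.

# A minimal sketch, assuming a dict-of-neighbour-lists representation.
def euler_trail(adj):
    """adj: vertex -> list of neighbours; assumes every degree even and the graph connected."""
    remaining = {x: list(ys) for x, ys in adj.items()}   # mutable copy of the edge lists
    stack = [next(iter(adj))]
    trail = []
    while stack:
        x = stack[-1]
        if remaining[x]:                  # walk along an unused edge
            y = remaining[x].pop()
            remaining[y].remove(x)
            stack.append(y)
        else:                             # dead end: back up, recording the trail
            trail.append(stack.pop())
    return trail[::-1]

# The 4-cycle 0-1-2-3-0 is Eulerian.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(euler_trail(c4))   # e.g. [0, 3, 2, 1, 0]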

Table 1.2: The 34 nonisomorphic graphs on 5 vertices, listed by the number of edges |E| = 0, 1, . . . , 10. (Drawings omitted.)
Chapter 2

Planar Graphs

2.1 Planar embedding


A graph G is said to be embeddable in the XY -plane if it can be drawn in the XY -plane so that
edges only intersect at their ends. A graph G that is embeddable in the XY -plane is said to be a
planar graph, and a plane graph is a graph that has been embedded in the XY -plane.

Figure 2.1: (a) K4 . (b) A planar embedding of K4 .

2.2 Topology
A Jordan curve is a continuous non-self-intersecting curve whose origin and terminus coincide. If J is
a Jordan curve, the rest of the plane is partitioned into two open sets: the interior of J, denoted
by int(J), and the exterior of J, denoted by ext(J). See Figure 2.2. Let Int(J) and Ext(J) be
the closures of int(J) and ext(J) respectively. Then Int(J) ∩ Ext(J) = J.
The Jordan curve theorem states that any line joining a point in int(J) to a point in ext(J)
must intersect J. We will use the Jordan curve theorem to prove that K5 is non-planar.

Theorem 2.1. K5 is non-planar.

Proof. Let G be a planar embedding of K5 with vertices x1 , x2 , x3 , x4 and x5 . G is complete so


C = x1 x2 x3 x1 is a cycle, i.e. C is a Jordan curve. The point x4 is either in int(C) or in ext(C).
Suppose x4 is in int(C). (x4 ∈ ext(C) is similar.)

Figure 2.2: A Jordan curve J, with interior int(J) and exterior ext(J).

Figure 2.3: A plane graph with 7 faces f1 , f2 , . . . , f7 .

The edges x1 x4 , x2 x4 , and x3 x4 divide int(C) into 3 regions: int(C1 ), int(C2 ), and int(C3 ),
where C1 = x1 x4 x2 x1 , C2 = x2 x4 x3 x2 , and C3 = x3 x4 x1 x3 . The vertex x5 must lie in one of
the four regions: ext(C), int(C1 ), int(C2 ), and int(C3 ). If x5 ∈ ext(C), then the edge x5 x4
must intersect C by the Jordan curve theorem. If x5 ∈ int(C1 ), then the edge x5 x3 must intersect
C1 by the Jordan curve theorem. If x5 ∈ int(C2 ), then the edge x5 x1 must intersect C2 by the
Jordan curve theorem. If x5 ∈ int(C3 ), then the edge x5 x2 must intersect C3 by the Jordan curve
theorem.

A shorter proof of Theorem 2.1 will follow from Euler's formula; see Corollary 2.5.

2.3 Euler’s formula


A plane graph G divides the plane into a number of connected regions called the faces (or regions)
of G. See the plane graph in Figure 2.3. Let F = F (G) be the set of faces of the plane graph G.
We will write G = (V, E, F ) to denote a plane graph with vertex set V = V (G), edge set E = E(G)
and face set F = F (G). A face f ∈ F is incident with its vertices and edges. The degree of a face
f , denoted Deg(f ), is the number of edges incident to f , counting cut edges twice.

Theorem 2.2. In a plane graph G = (V, E, F ) we have

    Σ_{f∈F} Deg(f ) = 2|E|.

Proof. Every edge is incident to 2 faces, and a cut edge, which borders only one face, is counted twice on that face.

Theorem 2.3. (Euler’s theorem.) Let G = (V, E, F ) be a connected plane graph, then

|V | − |E| + |F | = 2

Proof. (By induction on |F |.) If |F | = 1, then G contains no cycles and hence G is a tree. Thus
|E| = |V | − 1 by Corollary 1.14, and therefore

|V | − |E| + |F | = |V | − (|V | − 1) + 1 = 2

Now suppose |F | > 1. Then there is an edge e on some cycle, and so G − e is connected. Therefore, by induction,

|V (G − e)| − |E(G − e)| + |F (G − e)| = 2

But

|V (G − e)| = |V |
|E(G − e)| = |E| − 1
|F (G − e)| = |F | − 1

Hence,
2 = |V | − (|E| − 1) + (|F | − 1) = |V | − |E| + |F |

Corollary 2.4. Let G = (V, E, F ) be a (simple) planar graph with |V | ≥ 3, then

|E| ≤ 3|V | − 6

Proof. It suffices to do this for connected graphs. Let G = (V, E, F ) be a (simple) connected planar
graph with |V | ≥ 3. Then Deg(f ) ≥ 3 for every face f , and so

    2|E| = Σ_{f∈F} Deg(f ) ≥ 3|F |.

Hence,

    |F | ≤ (2/3)|E|.

Applying Euler's formula we have

    2 = |V | − |E| + |F |
      ≤ |V | − |E| + (2/3)|E|
      = |V | − (1/3)|E|.

So, 6 ≤ 3|V | − |E| and thus |E| ≤ 3|V | − 6.

Corollary 2.5. K5 is non-planar.

Proof. If K5 were planar, then 10 = |E| ≤ 3|V | − 6 = 3 · 5 − 6 = 9. A contradiction.

Corollary 2.6. If G = (V, E, F ) is a (simple) planar graph, then δ(G) ≤ 5.

Proof. This is trivially true for |V | = 1 and |V | = 2. So suppose |V | ≥ 3. Then

    δ(G)|V | ≤ Σ_{x∈V} Deg(x) = 2|E| ≤ 6|V | − 12

by Corollary 2.4. Therefore δ(G) ≤ 6 − 12/|V | < 6, and so δ(G) ≤ 5, because δ(G) is an integer.

Corollary 2.7. K3,3 is non-planar.

Proof. Let G = (V, E, F ) be a planar embedding. Now K3,3 is bipartite and so G contains no odd
cycles. Therefore Deg(f ) ≥ 4 for each face f ∈ F . So,

    4|F | ≤ Σ_{f∈F} Deg(f ) = 2|E| = 2 · 9 = 18.

Therefore |F | ≤ 4. But then, applying Euler's formula, we have

    2 = |V | − |E| + |F | ≤ 6 − 9 + 4 = 1,

a contradiction.
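Planarity can also be tested by software. The sketch below uses the networkx library (an assumption; the notes themselves do not use it) to confirm that K4 is planar while K5 and K3,3 are not.

# A minimal sketch, assuming networkx is available.
import networkx as nx

for G in (nx.complete_graph(4), nx.complete_graph(5), nx.complete_bipartite_graph(3, 3)):
    is_planar, _ = nx.check_planarity(G)
    print(G, is_planar)    # True for K4, False for K5 and K3,3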

2.4 Regular polyhedra


Consider the solid cube (Figure 2.4). Now picture a sphere with center at the center of the cube
and radius just large enough so that the vertices of the cube lie on its surface. Choose a point N
at the top of the sphere (centered above one of the faces of the cube) and let S be the point
diametrically opposite N. Let P be the plane tangent to the sphere at the point S. If v is a point
on the sphere, then its stereographic projection is the point v̂ where the line N + λ(v − N), λ ≥ 0,
intersects the plane P. See Figure 2.5. Thus the stereographic projection of a solid polyhedron is
a plane graph. A platonic solid is a polyhedron in which all of its faces have the same shape and
every vertex is on the same number of edges.

Figure 2.4: The solid cube.

Figure 2.5: Stereographic projection from N onto the plane P; the point v on the sphere maps to v̂.

A plane graph G = (V, E, F ) is a platonic graph if there exist constants k, ` > 0 such that
Deg(x) = k for all x ∈ V and Deg(f ) = ` for all f ∈ F . Let n = |V |, m = |E| and r = |F |. Notice
that k, ` ≥ 3. Then by Euler we have

    n − m + r = 2,

by the fundamental theorem of graph theory (Theorem 1.3) we have

    2m = nk,

and by Theorem 2.2 we have

    2m = r`.

Therefore

    2m/k − m + 2m/` = 2.

So,

    m(2/k − 1 + 2/`) = 2.

Therefore

    2/k − 1 + 2/` > 0,

because m > 0 and 2 > 0. Consequently,

    2` − k` + 2k > 0
    k` − 2` − 2k < 0
    (k − 2)(` − 2) − 4 < 0
    (k − 2)(` − 2) < 4.

There are thus 5 possibilities for the positive integers k and `, which we give in Table 2.1. The
graphs are drawn in Figure 2.6.

Table 2.1: The 5 platonic graphs.

    k   `   (k − 2)(` − 2)   n = |V |   m = |E|   r = |F |   Name
    3   3         1              4          6         4      Tetrahedron
    3   4         2              8         12         6      Cube
    3   5         3             20         30        12      Dodecahedron
    4   3         2              6         12         8      Octahedron
    5   3         3             12         30        20      Icosahedron

Figure 2.6: The 5 platonic graphs: the tetrahedron, cube, octahedron, dodecahedron and icosahedron.
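The case analysis leading to Table 2.1 is easy to reproduce by machine: enumerate the pairs (k, `) with k, ` ≥ 3 and (k − 2)(` − 2) < 4, and solve for m, n and r from the three displayed relations. A Python sketch (illustrative, not from the notes):

# A minimal sketch using exact rational arithmetic.
from fractions import Fraction

for k in range(3, 7):
    for l in range(3, 7):
        if (k - 2) * (l - 2) >= 4:
            continue
        m = 2 / (Fraction(2, k) - 1 + Fraction(2, l))   # from m(2/k - 1 + 2/l) = 2
        n, r = 2 * m / k, 2 * m / l                     # from 2m = nk and 2m = rl
        print(k, l, n, m, r)
# output: the five rows of Table 2.1, e.g. (3, 3): n = 4, m = 6, r = 4.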

2.5 Kuratowski’s theorem
The goal of this section will be to prove Kuratowski’s Theorem which characterizes planar graphs.

2.5.1 Subdivision, contraction and minors


The graphs K5 and K3,3 are special graphs for planarity. If we construct a graph from K5 by
replacing one or more edges with a path of length ≥ 2, we obtain a subdivision of K5 . We say that
the edges of K5 have been subdivided .
Given a graph G, a subdivision of G is any graph obtained from G by replacing one or more
edges by paths of length two or more.
It is clear that any subdivision of K5 or K3,3 is non-planar, because K5 and K3,3 are non-
planar. It is apparent that vertices of degree two do not affect the planarity of a graph. The inverse
operation to subdividing an edge is to contract an edge with an endpoint of degree two.
Graphs G1 and G2 are topologically equivalent or homeomorphic, if G1 can be transformed into
G2 by the operations of subdividing edges and/or contracting edges with an endpoint of degree
two.
We will denote by T K5 any graph that is topologically equivalent to K5 . Similarly, T K3,3
denotes any graph that is topologically equivalent to K3,3 . In general, T K denotes a graph topo-
logically equivalent to K, for any graph K. If G is a graph containing a subgraph T K5 or T K3,3 ,
then G must be non-planar. Kuratowski’s theorem states that this is a necessary and sufficient
condition for a graph to be non-planar.
If G is a planar graph, and we delete any vertex v from G, then G − v is still planar. Similarly,
if we delete any edge uv, then G − uv is still planar. Also, if we contract any edge uv of G, then
G · uv is still planar. Contracting an edge can create parallel edges or loops. Because parallel edges
and loops do not affect the planarity of a graph, loops can be deleted, and parallel edges can be
replaced by a single edge, if desired.
Let H be a graph obtained from G by any sequence of deleting vertices and/or edges, and/or
contracting edges. H is said to be a minor of G. (We also say G has minor H.)
Notice that if G contains a subgraph T K5 , K5 is a minor of G, even though K5 need not be
a subgraph of G. For we can delete all vertices and edges which do not belong to the subgraph
T K5 , and then contract edges to obtain K5 . Similarly, if G has a subgraph T K3,3 , then K3,3 is a
minor of G, but need not be a subgraph. Any graph having K5 or K3,3 as a minor is non-planar.
A special case of minors is when a graph K is subdivided to obtain G.

Lemma 2.8. Let G be any graph obtained by splitting a vertex of K5 . Then G contains a subgraph
T K3,3 .

Proof. Let v1 and v2 be the two vertices resulting from splitting a vertex of K5 . Each has at least
degree three. Consider v1 . It is joined to v2 . Together, v1 and v2 are joined to the remaining
four vertices of G, and each is joined to at least two of these vertices. Therefore we can choose a
partition of these four vertices into {x, y} and {w, z} such that v1 is adjacent to x and y and such that
v2 is adjacent to w and z. Then G contains a K3,3 with bipartition v1 , w, z and v2 , x, y, as illustrated
in Figure 2.7.

Figure 2.7: Splitting a vertex of K5 into v1 and v2 .

Theorem 2.9. If G has a minor K3,3 , then G contains a subgraph T K3,3 . If G has a minor K5 ,
then G contains a subgraph T K5 or T K3,3 .

Proof. Suppose that G has a minor K5 or K3,3 . If no edges were contracted to obtain this minor,
then it is also a subgraph of G. Otherwise let G0 , G1 , . . . , Gk be a sequence of graphs obtained
from G, where G0 is a subgraph of G, edge ei of Gi−1 is contracted to obtain Gi , and Gk is either
K5 or K3,3 .
If each ei has an endpoint of degree two, then we can reverse the contractions by subdividing
edges, resulting in a T K5 or T K3,3 in G, as required. Otherwise let ei be the edge with largest i, with
both endpoints of at least degree three. All edges contracted subsequent to Gi have an endpoint of
degree two, so that Gi has a subgraph T K5 or T K3,3 . Gi−1 can be obtained by splitting a vertex v
of Gi . If v is a vertex of T K5 , then by Lemma 2.8, Gi−1 contains T K3,3 . If v is a vertex of T K3,3 ,
then Gi−1 also contains T K3,3 . If v is a vertex of neither T K5 nor T K3,3 , then Gi−1 still contains
T K5 or T K3,3 . In each case we find that G0 must have a subgraph T K5 or T K3,3 .

2.5.2 Blocks and separable graphs


If a connected graph G has a cut-vertex v, then it is said to be separable, because deleting v separates
G into two or more components. A separable graph has κ = 1, but it may have subgraphs which are
2-connected, just as a disconnected graph has connected subgraphs. We can then find the maximal
non-separable subgraphs of G, just as we found the components of a disconnected graph. This is
illustrated in Figure 2.8.
The maximal non-separable subgraphs of G are called the blocks of G. The graph illustrated in
Figure 2.8 has eight blocks, held together by cut-vertices.

2.5.3 Proof of Kuratowski’s theorem


We now prove Kuratowski’s theorem. The proof presented is based on the W. Klotz’s (1989) proof1 .
It uses induction on m = |E(G)|.
1
W. Klotz, A constructive proof of Kuratowski’s theorem, Ars Combin., 28 (1989), 51–54.

Figure 2.8: A graph (a) and its blocks (b).

If G is a disconnected graph, then G is planar if and only if each connected component of G is


planar. Therefore we assume that G is connected. If G is a separable graph that is planar, let H
be a block of G containing a cut-vertex v. H is also planar, because G is. We can delete H − v
from G, and find a planar embedding of the result. We then choose a planar embedding of H with
v on the outer face, and embed H into a face of G having v on its boundary. This gives:

Lemma 2.10. A separable graph is planar if and only if all its blocks are planar.

So there is no loss in generality in starting with a 2-connected graph G.

Theorem 2.11. (Kuratowski’s theorem) A graph G is planar if and only if it contains no


subgraph T K3,3 or T K5 .

Proof. It is clear that if G is planar, then it contains no subgraph T K3,3 or T K5 . To prove the
converse, we show that if G is non-planar, then it must contain T K3,3 or T K5 . We assume that G is
a simple, 2-connected graph with m edges. To start the induction, notice that if m ≤ 6, the result
is true, as all graphs with m ≤ 6 are planar. Suppose that the theorem is true for all graphs with
at most m − 1 edges. Let G be non-planar, and let ab ∈ E(G) be any edge of G. Let G0 = G − ab.
If G0 is non-planar, then by the induction hypothesis, it contains a T K3,3 or T K5 , which is also a
subgraph of G. Therefore we assume that G0 is planar. Let κ(a, b) denote the number of internally
disjoint ab-paths in G0 . Because G is 2-connected, we know that κ(a, b) ≥ 1.

Case 1. κ(a, b) = 1.
G0 has a cut-vertex u contained in every ab-path. Add the edges au and bu to G0 , if they
are not already present, to get a graph H, with cut-vertex u. Let Ha and Hb be the blocks
of H containing a and b, respectively. If one of Ha or Hb is non-planar, say Ha , then by the
induction hypothesis, it contains a T K3,3 or T K5 . This subgraph must use the edge au, as
G0 is planar. Replace the edge au by a path consisting of the edge ab plus a bu-path in Hb .
The result is a T K3,3 or T K5 in G. If Ha and Hb are both planar, choose planar embeddings
of them with edges au and bu on the outer face. Glue them together at vertex u, remove

the edges au and bu that were added, and restore ab to obtain a planar embedding of G, a
contradiction.

Case 2. κ(a, b) = 2.
Let P1 and P2 be two internally disjoint ab-paths in G0 . Because κ(a, b) = 2, there are vertices
u ∈ P1 and v ∈ P2 such that all ab-paths contain at least one of {u, v}, and G0 − {u, v}
is disconnected. If Ka denotes the connected component of G0 − {u, v} containing a, let
G0a be the subgraph of G0 induced by Ka ∪ {u, v}. Let Kb denote the remaining connected
components of G0 − {u, v}, and let G0b be the subgraph of G0 induced by Kb ∪ {u, v}, except
that uv, if it is an edge of G0 , is not included (because it is already in G0a ). Now add a vertex
x to G0a , adjacent to u, v, and a to obtain a graph Ha . Similarly, add y to G0b adjacent to
u, v, and b to obtain a graph Hb . Suppose first that Ha and Hb are both planar. As vertex x
has degree three in Ha , there are three faces incident on x. Embed Ha in the plane so that
the face with edges ux and xv on the boundary is the outer face. Embed Hb so that edges
uy and yv are on the boundary of the outer face. Now glue Ha and Hb together at vertices
u and v, delete vertices x and y, and add the edge ab within the face created, to obtain a
planar embedding of G. Because G is non-planar, we conclude that at least one of Ha and
Hb must be non-planar. Suppose that Ha is non-planar. It must contain a subgraph T K5 or
T K3,3 . If the T K5 or T K3,3 does not contain x, then it is also contained in G, and we are
done. Otherwise the T K5 or T K3,3 contains x. Now Hb is 2-connected (because G is), so that
it contains internally disjoint paths Pbu and Pbv connecting b to u and v, respectively. These
paths, plus the edge ab, can be used to replace the edges ux, vx, and ax in Ha to obtain a
T K5 or T K3,3 in G.

Case 3. κ(a, b) ≥ 3.
Let P1 , P2 , and P3 be three internally disjoint ab-paths in G0 . Consider a planar embedding
of G0 . Each pair of paths P1 ∪ P2 , P1 ∪ P3 , and P2 ∪ P3 creates a cycle, which embeds as a
Jordan curve in the plane. Without loss of generality, assume that the path P2 is contained
in the interior of the cycle P1 ∪ P3 , as in Figure 2.9. The edge ab could be placed either in
the interior of P1 ∪ P2 or P2 ∪ P3 , or else in the exterior of P1 ∪ P3 . As G is non-planar, each
of these regions must contain a path from an interior vertex of Pi to an interior vertex of Pj .
Let P12 be a path from u1 on P1 to u2 on P2 . Let P13 be a path from v1 on P1 to u3 on P3 .
Let P23 be a path from v2 on P2 to v3 on P3 . If u1 6= v1 , contract the edges of P1 between
them. Do the same for u2 , v2 on P2 and u3 , v3 on P3 . Adding the edge ab to the resultant
graph then results in a T K5 minor. By Theorem 2.9, G contains either a T K5 or T K3,3 .

2.5.4 Exercises
1. Show that adding a new edge to a maximal planar graph of order at least 6 always produces
both K5 and K3,3 as topological minors.

2. Show that a 2-connected plane graph is bipartite if and only if every face is bounded by an
even cycle.

Figure 2.9: A K5 minor: three internally disjoint a-b paths P1 , P2 , P3 together with the paths P12 , P13 , P23 between them.

Chapter 3

Algebraic Graph Theory

In algebraic graph theory we study the algebras that can be associated with a graph G, and try to
obtain graphical information from these algebras.

3.1 Spectrum
The adjacency matrix of a graph G = (V, E) is the V by V matrix A = AG , where

    A[x, y] = 1 if xy ∈ E,
              0 if xy ∉ E.
The adjacency matrix is a real symmetric matrix. It follows that the eigen values of A are real and
that A has a set of n linearly independent eigen vectors.
Proposition 3.1. The eigen values of A are real.
Proof. Let λ be an eigen value of A. Then A~u = λ~u for some complex valued vector ~u. If
z = a + bi ∈ C, then let z̄ = a − bi, the complex conjugate of z; for a vector ~u, let ū denote its
entrywise complex conjugate. We may also assume ū^T ~u = ||~u||² = 1, for otherwise we can use
~u/||~u|| for our eigen vector with eigen value λ. Since A is real and symmetric,

    λ = λ(ū^T ~u) = ū^T (λ~u) = ū^T (A~u) = (ū^T A)~u = (ū^T A^T )~u = (Aū)^T ~u = (λ̄ū)^T ~u = λ̄(ū^T ~u) = λ̄,

where we used Aū = conjugate of A~u = conjugate of λ~u = λ̄ū. Consequently λ = λ̄ is a real number.

Recall that if λ is an eigen value of A, then A~u = λ~u for some nonzero vector ~u. Hence
(λI − A)~u = 0 and so the columns of λI − A are linearly dependent and therefore det(λI − A) = 0.
Consequently the eigen values of A are the roots of

χG (λ) = det(λIn − A) = λn + c1 λn−1 + c2 λn−2 + · · · + cn−1 λ + cn

the characteristic polynomial of A. (In is the n by n identity matrix.)


The spectrum of the graph G is the list of the eigen values of A, the adjacency matrix of G.
Let λ1 > λ2 > · · · > λs be the s distinct eigen values of A. The number mi of times λi is a root of
χG (λ) is called the algebraic multiplicity of λi . We exhibit the spectrum of G as follows:

    Spec(G) = ( λ1   λ2   · · ·   λs )
              ( m1   m2   · · ·   ms )

Thus the characteristic polynomial of G is

    χG (λ) = det(λIn − A) = (λ − λ1)^m1 (λ − λ2)^m2 · · · (λ − λs)^ms .

For example, the adjacency matrix of the 4-cycle G with vertices 1, 2, 3, 4 (in cyclic order) is

    A = [ 0 1 0 1 ]
        [ 1 0 1 0 ]
        [ 0 1 0 1 ]
        [ 1 0 1 0 ]

and thus

    Spec(G) = (  2   0   −2 )
              (  1   2    1 ).
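Spectra of small graphs are conveniently computed numerically. The following sketch uses numpy (an assumption; the notes do not use it) to recover Spec(G) for the 4-cycle above; eigvalsh is appropriate because the adjacency matrix is real symmetric.

# A minimal sketch, assuming numpy is available.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(np.linalg.eigvalsh(A))   # approximately [-2., 0., 0., 2.]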
To motivate our study of adjacency matrix eigen values consider the following theorem.
Theorem 3.2. Let G = (V, E) be a graph with adjacency matrix A and characteristic polynomial

χG (λ) = λn + c1 λn−1 + c2 λn−2 + · · · + cn−1 λ + cn .

Then
1. c1 = 0;

2. −c2 = |E| the number of edges in G;

3. −c3 is twice the number of triangles in G.


Thus the coefficients of the characteristic polynomial carry graphical information. This result
follows from the computation of the coefficients ck as the sum of principle minors. That is
X
ck = (−1)k {det AK : K ⊆ V, |K| = k}

where AK is the K by K sub-matrix of A whose rows and columns are indexed by the vertices in
K. The sub-matrix AK is called a principle minor . We now prove Theorem 3.2.

Proof. (of Theorem 3.2)

1. The principal minors with one row and column are the diagonal entries of A, and these are all
zero. Hence c1 = 0.

2. If K = {x, y} ⊆ V , x ≠ y, then either

    AK = [ 0 0 ]        or        AK = [ 0 1 ]
         [ 0 0 ]                       [ 1 0 ].

The first possibility occurs when xy ∉ E and has determinant 0, while the second occurs
when xy ∈ E and has determinant −1. Hence −c2 = |E|.

3. If K = {x, y, z} ⊆ V with x, y, z distinct, then AK is one of the eight symmetric matrices

    [0 0 0; 0 0 0; 0 0 0], [0 0 0; 0 0 1; 0 1 0], [0 1 0; 1 0 0; 0 0 0], [0 1 0; 1 0 1; 0 1 0],
    [0 0 1; 0 0 0; 1 0 0], [0 0 1; 0 0 1; 1 1 0], [0 1 1; 1 0 0; 1 0 0], [0 1 1; 1 0 1; 1 1 0]

(written row by row). Only the last one has non-zero determinant. It has value 2, and it occurs
exactly when {x, y, z} induces a triangle. Hence −c3 is twice the number of triangles in G.

Here is a useful theorem that relates the eigen values of a matrix A to the eigen values of a polynomial in A.
Theorem 3.3. Let A be any n by n matrix and let

    f (x) = f0 x^k + f1 x^(k−1) + f2 x^(k−2) + · · · + fk−1 x + fk

be any polynomial. Define the matrix f (A) by

    f (A) = f0 A^k + f1 A^(k−1) + f2 A^(k−2) + · · · + fk−1 A + fk I.

If λ1 , λ2 , . . . , λn are the eigen values of A, then f (λ1 ), f (λ2 ), . . . , f (λn ) are the eigen values of
f (A).

Proof. Let ~u be an eigen vector of A with eigen value λ. Then A~u = λ~u and so A^j ~u = λ^j ~u. Thus

    f (A)~u = (f0 A^k + f1 A^(k−1) + · · · + fk−1 A + fk I)~u
            = f0 A^k ~u + f1 A^(k−1) ~u + · · · + fk−1 A~u + fk ~u
            = f0 λ^k ~u + f1 λ^(k−1) ~u + · · · + fk−1 λ~u + fk ~u
            = (f0 λ^k + f1 λ^(k−1) + · · · + fk−1 λ + fk )~u
            = f (λ)~u.

For example, consider the n by n matrix Jn that has every entry 1. Then

    ~1 = [1, 1, . . . , 1]^T     (n entries)

is an eigen vector with eigen value n, and the n − 1 vectors u~i , i = 2, 3, . . . , n, given by

    u~i [j] =  1 if j = 1,
              −1 if j = i,
               0 otherwise,

are linearly independent eigen vectors with eigen value 0. Thus the eigen values of Jn are

    n, 0, 0, . . . , 0     (n − 1 zeros).

The adjacency matrix A of the complete graph Kn is A = Jn − In . Thus A = f (Jn ) where
f (x) = x − 1. Therefore the eigen values of A are

    n − 1, −1, −1, . . . , −1     (n − 1 copies of −1).

Hence

    Spec(Kn ) = ( n − 1    −1    )
                (   1     n − 1  )

If A is the adjacency matrix of a graph G, then the set of polynomials in A with complex
coefficients is an algebra under the usual matrix operations. This algebra is called the adjacency
algebra and we denote it by

Alg(G) = {f (A) : f (x) ∈ C[x]}.

This algebra has finite dimension as a vector space over the complex numbers C. Our next few
theorems relate the dimension of Alg(G) to Diam(G), the diameter of G. Recall that

    Diam(G) = Max{Dist(x, y) : x, y ∈ V (G)},

where Dist(x, y) is the length of the shortest x-y path in G.

Theorem 3.4. The number of walks of length ` in G from vertex x to vertex y is A^`[x, y].

Proof. (By induction on `, the length of the walk.) If ` ∈ {0, 1}, then the result is obvious. So
suppose ` > 1. Then by induction A^(`−1)[x, h] is the number of walks of length ` − 1 from x to h.
Hence A^(`−1)[x, h]A[h, y] is the number of walks of length ` from x to y whose penultimate vertex is
h. Summing over all vertices h, we see that

    A^`[x, y] = Σ_{h∈V(G)} A^(`−1)[x, h] A[h, y]

is the number of walks from x to y that have length `.
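Theorem 3.4 is easy to check experimentally: compare an entry of A^` against a brute-force count of walks. A Python/numpy sketch on a small graph chosen for illustration (not from the notes):

# A minimal sketch, assuming numpy is available; the 4-vertex graph is arbitrary.
import numpy as np
from itertools import product

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
n = A.shape[0]

def count_walks(x, y, l):
    # enumerate all vertex sequences x = v0, v1, ..., vl = y and keep the walks
    return sum(all(A[w[i], w[i + 1]] for i in range(l))
               for w in product(range(n), repeat=l + 1) if w[0] == x and w[-1] == y)

l = 3
Al = np.linalg.matrix_power(A, l)
print(all(Al[x, y] == count_walks(x, y, l) for x in range(n) for y in range(n)))  # True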

Theorem 3.5. The dimension of Alg(G) is at least Diam(G) + 1

Proof. Let d = Diam(G) and choose vertices x, y such that Dist(x, y) = d. Let

    x = x0 x1 x2 · · · xd = y

be a path from x to y of length d. Because this is a shortest x-y path, Dist(x0 , x` ) = ` for each `.
Then A^`[x0 , x` ] ≠ 0, but A^j[x0 , x` ] = 0 for all j < `. Hence I, A, A^2 , . . . , A^d are linearly
independent.

Proposition 3.6. A has n linearly independent eigen vectors.

Proof. Let λ be an eigen value of A (there must be at least one) and let u~1 be an associated eigen
vector of unit length, so Au~1 = λu~1 and ||u~1 || = 1. Furthermore u~1 is real valued because λ is
real. Using the Gram–Schmidt process extend u~1 to an orthonormal basis {u~1 , u~2 , u~3 , . . . , u~n }. Let
R = [u~2 , u~3 , . . . , u~n ] and U = [u~1 , R]. Then U^T U = I. Consider the matrix U^T AU:

    U^T AU = [u~1 , R]^T A [u~1 , R]
           = [u~1 , R]^T [λu~1 , AR]
           = [ u~1^T λu~1     u~1^T AR ]
             [ R^T λu~1       R^T AR   ]
           = [ λ(u~1^T u~1)   (Au~1)^T R ]
             [ λ(R^T u~1)     R^T AR     ]
           = [ λ   (λu~1)^T R ]
             [ 0    R^T AR    ]
           = [ λ   λ(u~1^T R) ]
             [ 0    R^T AR    ]
           = [ λ   0      ]
             [ 0   R^T AR ]
           = [ λ   0 ]
             [ 0   B ],

where B = R^T AR. The matrix B is a real symmetric matrix with one less row and column than
A. We can repeat the process, obtaining another eigen value on the diagonal. Continuing, we obtain
a matrix S such that S^T AS is a diagonal matrix of the eigen values of A, and the columns of S are
an orthonormal basis of eigen vectors.

The minimum polynomial µA (x) of a matrix A is the monic polynomial of smallest degree such
that µA (A) = 0.

Proposition 3.7. Let A be the adjacency matrix of a graph G. Then

µA (λ) = (λ − λ1 )(λ − λ2 )(λ − λ3 ) · · · (λ − λs )

where λ1 , λ2 , . . . , λs are the distinct eigen values of A.

Proof. Applying Proposition 3.6 we see that there is an n by n matrix S = [S~1 , S~2 , . . . , S~n ] whose
columns are linearly independent eigen vectors of A. Let D = Diag(λ′1 , λ′2 , . . . , λ′n ) be the diagonal
matrix in which AS~i = λ′i S~i , and set P (λ) = (λ − λ1 )(λ − λ2 ) · · · (λ − λs ). Then A = SDS^(−1),
and P (λ′i ) = 0 for all i because each λ′i is one of the distinct eigen values λ1 , . . . , λs . Hence

    P (A) = S P (D) S^(−1) = S Diag(P (λ′1 ), P (λ′2 ), . . . , P (λ′n )) S^(−1) = 0.

Furthermore, if f (x) is any polynomial such that f (A) = 0, then

    0 = f (A) = S f (D) S^(−1) = S Diag(f (λ′1 ), f (λ′2 ), . . . , f (λ′n )) S^(−1),

and consequently each of λ1 , λ2 , . . . , λs is a root of f (x), so Deg(f (x)) ≥ s = Deg(P (x)).
Therefore µA (λ) = P (λ).

Corollary 3.8. A connected graph with n vertices and diameter d has at least d + 1 and at most
n distinct eigen values.

Proof. The degree of the minimum polynomial cannot be less than the dimension of the adjacency
algebra.

3.2 Regular graphs
Theorem 3.9. Let G be a regular graph of degree k. Then
1. k is an eigen value of G.

2. If G is connected, then the multiplicity of k is one.

3. For any eigen value λ of G, we have |λ| ≤ k.


Proof. 1. Let ~1 = [1, 1, . . . , 1]^T be the all-ones vector with n entries.
Then A~1 = k~1, so k is an eigen value.

2. We compute the eigenspace with eigen value k. Let A~u = k~u for some non-zero vector ~u.
Choose vertex x such that |~u[x]| is largest. We may assume that ~u[x] > 0, for otherwise we
can take −~u. Let y1 , y2 , . . . , yk be the k vertices adjacent to x. Then

k~u[x] = (A~u)[x] = ~u[y1 ] + ~u[y2 ] + · · · + ~u[yk ] ≤ ~u[x] + ~u[x] + · · · + ~u[x] = k~u[x]

Thus ~u[x] = ~u[y] for all y adjacent to x. Hence, because G is connected, we can repeat this
argument until all entries of ~u have been shown to be equal. Thus ~u = α~1 for some α. Hence
the eigen space with eigen value k is {α~1 : α ∈ C}. Thus k has multiplicity one.

3. Let λ be any eigen value of G, and let ~u be such that A~u = λ~u. Let x be a vertex such that
|~u[x]| is largest. Let y1 , y2 , . . . , yk be the k vertices adjacent to x. Then

|λ||~u[x]| = |A~u[x]| = |~u[y1 ] + ~u[y2 ] + · · · + ~u[yk ]|


≤ |~u[y1 ]| + |~u[y2 ]| + · · · + |~u[yk ]|
≤ |~u[x]| + |~u[x]| + · · · + |~u[x]| = k|~u[x]|

Therefore |λ| ≤ k

Theorem 3.10. (Hoffman 1963) Let G be a graph. Then J ∈ Alg(G) if and only if G is a regular
connected graph.
Proof. Let A be the adjacency matrix of the graph G. Suppose J ∈ Alg(G). Then J = P (A) for
some polynomial P (x) ∈ C[x]. Thus AJ = JA and so

Deg(x) = (AJ)[x, y] = (JA)[x, y] = Deg(y)

for all vertices x and y. Therefore G is regular. If G were disconnected, there would be vertices
x and y with no walk between them. Hence A` [x, y] = 0 for all `, and it is therefore impossible that
J = P (A) for any polynomial P (x).
Conversely suppose G is a regular connected graph of degree k. Then by Theorem 3.9, we know
that k is an eigen value and hence the minimum polynomial of A is of the form

µA (x) = (x − k)P (x)

for some P (x) ∈ C[x]. Then
0 = µA (A) = (A − kI)P (A),
and hence
AP (A) = kP (A).
Thus each column of P (A) is an eigen vector with eigen value k. So by 3.9, each column of P (A) is
a multiple of ~1, and thus P (A) = tJ for some constant t, because A, and hence P (A), is symmetric.
Also, t is not zero, because Deg(P (x)) < Deg(µA (x)). Therefore

    J = (1/t) P (A) ∈ Alg(G).

Another matrix associated with a graph G = (V, E) is the incidence matrix. This is the V by E
matrix X = XG given by

    X[x, e] = 1 if x is incident to e,
              0 if not.

Observe for vertices x and y that the [x, y]-entry of XX^T is

    XX^T[x, y] = Σ_{e∈E} X[x, e] X^T[e, y]
               = Σ_{e∈E} X[x, e] X[y, e]
               = |{e ∈ E : e is incident to both x and y}|
               = 1        if x ≠ y and x is adjacent to y;
                 0        if x ≠ y and x is not adjacent to y;
                 Deg(x)   if x = y.

Thus if G is regular of degree k, then

    XX^T = A + kIn .                                                           (3.1)
The line graph L(G) of a graph G is constructed by taking as vertices the edges of G with two
being adjacent in L(G) if they shared a common vertex. See Figure 3.1.
Observe for any pair of edges e and f that the [e, f ]-entry of X T X is
X
X T X[e, f ] = X T [e, x]X[x, f ]
x∈V
X
= X[x, e]X[x, f ]
x∈V
= |{x ∈ V : x is incident to both e and f }|

 1 if e 6= f and e and f share a common vertex;
= 0 if e 6= f and e and f do not share a common vertex;
2 if e = f.

Thus if AL is the adjacency matrix of the line graph of G, then


X T X = AL + 2Im (3.2)

39
e4
e4

e5
e1 e5 e3 e1 e3

e2 e2
G L(G)

Figure 3.1: A graph G and its line graph L(G)


.

Theorem 3.11. (Sachs 1967) If G = (V, E) is a regular graph of degree k with n = |V | and
m = |E|, then
χL(G) (λ) = (λ + 2)m−n χG (λ + 2 − k)

Proof. Let    
λIn −X In X
U= and V = .
0 Im XT λIm
Then
λIn − XX T
   
0 λIn 0
UV = and V U = .
XT λIm λX T λIm − X T X
The determinant of U V equals the determinant of V U . Thus

λm det(λIn − XX T ) = λn det(λIm − X T X) (3.3)

χL(G) (λ) = det(λIm − AL )


= det(λIm − (X T X − 2Im )) by Equation 3.2
= det((λ + 2)Im − X T X)
= (λ + 2)m−n det((λ + 2)In − XX T ) by Equation 3.3
= (λ + 2)m−n det((λ + 2)In − (A + kIn )) by Equation 3.1
= (λ + 2)m−n det((λ + 2 − k)In − A)
= (λ + 2)m−n χG (λ + 2 − k)

3.3 The matrix tree theorem


Let G = (V, E) be a graph. Arbitrarily orient the edges of G by replacing the edge {x, y} with
either (x, y) or (y, x). We represent this graphically by adding an arrow to the edge of G. The

40
incidence matrix D is the V × E matrix D given by

 +1 if e = (x, z) for some z ∈ V
D[x, e] = −1 if e = (z, x) for some z ∈ V
0 otherwise

An example is provided in Figure 3.2.

v1 e1 v2 v1 e1 v2
e2 e2 e1 e2 e3 e4 e5 e6
e5 e5 v1 −1 1 −1 0 0 0
e3 e4 e3 e4 D = v2 +1 0 0 −1 −1 0
v3 0 −1 0 1 0 1
v4 0 0 1 0 +1 −1
v4 e6 v3 v4 e6 v3

(a) (b) (c)

Figure 3.2: (a) The Graph K4 . (b) An arbitrary orientation of K4 . (c) The corresponding incidence
matrix of K4 .

Theorem 3.12. Let D be an incidence matrix of the graph G = (V, E). Then Rank(D) = n − c,
where n = |V | and c is number of connected components of G.
Proof. Let Di = (Vi , Ei ) be the incidence matrix of the i-th component of G. Then
 
D1 0 · · · 0
 0 D2 · · · 0 
D= .
 
.. .. 
 .. . . 
0 0 ··· Dc

by way of a simple relabeling of the vertices. Thus


c
X
Rank(D) = Rank(Di )
i=1

and consequently we need only show that Rank(Di ) = |Vi | − 1. Hence it suffices to assume that G
is connected. Denote the row of D corresponding to vertex x by d~x , and recall that every column
of D has the form  
0
 +1 
 
 .. 
 . 
 
 −1 
0
with exactly one 1 and one −1, because an edge only has 2 ends. Therefore the row sum is
X
d~x = 0.
x∈V

41
Thus the rows of D are linearly dependent and hence Rank(D) ≤ n − 1. Now suppose that we
have any dependency
X
αx d~x = 0,
x∈V

where not all αx = 0. Choose a row d~x for which αx 6= 0. Because G is connected there is an edge
e = {x, y} such that d~x [e] = ±1 and d~y [e] = −d~x [e] . In particular d~x [e] and dy [e] are opposite in
sign. But we must have
αx d~x [e] + αy dy [e] = 0,

so αx = αy . But G is connected so there is a path form x to any other vertex z ∈ V . Consequently


all the αz s are equal to some common value say α. Thus
X X
0= αd~x = α d~x ,
x∈V x∈V

is the only dependency among the rows of D. Therefore we can conclude that
Therefore Rank(D) = |V | − 1 for a connected graph D and for a graph G with c components
Rank(D) = |V | − c.

Let C = x1 x2 x3 · · · xk x1 be any cycle of the graph G. Independent from any orientation of G


assign to C the orientation {(x1 , x2 ), (x2 , x3 ), (x3 , x4 ), . . . (xk−1 , xk ), (xk , x1 )}, and define

 +1 if e ∈ E(C) and the orientation in G and C agree on e,
~
C[e] = −1 if e ∈ E(C) and the orientation in G and C disagree on e,
0 Otherwise.

See Figure 3.3 for an example.


~ ∈ Nullspace(D). Hence Dim(Nullspace(D)) ≥
Observe that if G = (V, E) has a cycle C, then C
1 and thus Rank(D) ≤ min{|V |, |E|} − 1.

Lemma 3.13. (Poincare 1901) Any square submatrix of D has determinant either 0, +1, −1.

Proof. (By induction on the number of rows.) Let S be a square submatrix of D. If S has only
one row (and column), then S = [0], [−1] or [+1]. Hence det(S) is either 0, +1, −1. Now suppose
S has more than one row. If every column of S is either all 0s, or has both a s+1 and a −1. Then
the row sum of S is all 0’s and hence det(S) = 0. Otherwise there is a column with exactly one
nonzero entry, a ±1. Then
det(S) = ±1S 0 ,

where S 0 is the square submatrix obtained by deleting the row and column containing this nonzero
entry. By induction det(S 0 ) is either 0, +1, −1. Therefore det(S) is either 0, +1, −1.

Theorem 3.14. Let G = (V, E) be a graph with n = |V | vertices and incidence matrix D. Suppose
U ⊂ E, |U | = n − 1, and let DU be a (n − 1) × (n − 1) submatrix of D consisting of the intersection
of the n − 1 columns of D and any of the n − 1 rows of D. Then DU is non-singular if and only if
the (V, U ) is a spanning tree of (V, E)

42
1 2

5 {1, 2}

0

3 4
{1, 3} 
 0 

{1, 6} 
 0 

{2, 3}  0 
6  
8 {2, 5} 
 0 

{3, 4} 
 1 

9 {3, 6} 0
 
7
 
~ =
C {3, 7}

 −1


G {4, 5}

 0


{4, 8} −1
 
 
3 4 {5, 9}
 
 0 
 
{6, 7} 
 0 

{7, 8} 
 1 

{7, 9}  0 
{8, 9} 0
7 8
C

~
Figure 3.3: An example of C.

Proof. Suppose (V, U ) is a spanning tree, then DU consists of n − 1 rows of D, the incidence matrix
of (V, U ). G is connected so Rank(D) = |V | − 1 = n − 1, So the n − 1 rows of DU must be linearly
independent so det(DU ) 6= 0.
Conversely suppose det(DU ) 6= 0. Then DU has an inverse, in other words D had a n − 1 × n − 1
invertible submatrix so Rank(DU ) = n − 1. Thus (V, U ) is connected by Theorem 3.12. Also,
because DU is of full rank we know that Dim(Nullspace(DU )) = 0 so (V, U ) can not have a cycle,
which means that (V, U ) must be a tree.

Consider the function

κ(G) = the number of spanning trees of G.

Some basic properties of κ(G) are:

1. κ(G) = 0 if G is disconnected,

2. κ(Cn ) = n, and

3. κ(G) = 1 if and only if G is a tree.

Lemma 3.15. Let D be the incidence matrix of G = (V, E) and let Q = DDT . Then the adjoint
Adj(Q) = tJ for some t.

Proof. Let n = |V |. If G is disconnected, then Rank(Q) = Rank(D) < n−1. Hence every cofactor
is 0 and thus Adj(Q) = 0.

43
If G is connected then Rank(Q) = Rank(D) = n − 1 and QAdj(Q) = det(Q)I = 0 im-
plies each column of Adj(Q) is in the null-space of Q which is contained in Nullspace(DT ) =
SpanC ([1, 1, 1, . . . , 1]T ). Therefore each column of Adj(Q) is a multiple of [1, 1, . . . , 1]T but Q is
symmetric so Adj(Q) is symmetric. Therefore Adj(Q) = tJ.

Theorem 3.16. Adj(Q) = κ(G)J

Proof. It suffices to show that one of the cofactors of Q is the number of spanning trees. Let D0
be D minus a row, then det D0 D0T is a cofactor.
The Cauchy-Binet Theorem says that
X
det(D0 D0T ) = (DU DUT ),
|U |=n−1,U ⊂E

where DU is the (n − 1) × |U | submatrix of D with edges in U .


Applying Theorem 3.14 we see that det(DU ) 6= 0 if and only if (V, U ) is a spanning tree and in
this case det(DU ) = ±1. Hence, det(DU DUT ) = det(DU )det(DUT ) = det(DU )2 = 1 and so from the
Cauchy-Binet Theorem we have: det(D0 D0T ) = κ(G)

Example 3.17.

1 2 12 13 14 23 24  
1 1 1 1 0 0 3 −1 1
D = 2 −1 0 0 1 0 D0 DT =  −1 2 −1 
3 0 −1 0 −1 1 −1 −1 3
4 3 4 0 0 −1 0 −1

3.4 Notes
Two excellent text on algebraic graph theory are

1. N. Biggs, Algebraic Graph Theory, Cambridge University Press, (1993).

2. C. Godsil and G. Royale, Algebraic Graph Theory, Graduate Texts in Mathematics 207
Springer, (2001).

44
Chapter 4

Connectivity

Theorem 4.1. (Menger’s Theorem) Let G = (V, E) be graph and choose subsets A, B ⊆ V . The
minimum number of vertices separating A and B equals the maximum number of disjoint paths
from A to B.

Proof. (Theorem 4.1) Let k be the number of vertices separating A and B. Clearly `, the number
of disjoint paths from A to B, cannot be more than k, for after all deleting one vertex on each such
path will separate A from B. So, ` ≤ k. We show by induction on |V | + |E| that ` ≥ k. If k = 0,
then G is a disconnected graph with A and B in different components. Hence ` = 0. If k = 1, then
there is a vertex x such that A and B are in different components of G − x. Consequently every
path from A to B passes through x and so there is a path from A to B through x and hence ` = 1.
Now suppose G, A and B are given with k ≥ 2. Assume the assertion holds for graphs with
fewer vertices or edges.
Case 0: (The trivial case.) A ∩ B 6= ∅.
Let x ∈ A ∩ B. Then x is one in of the paths from A to B and so x is in any set separating A
from B. Thus G − x has a set of k − 1 vertices separating A \ {x} from B \ {x}. Thus by induction
G − x has k − 1 disjoint paths from A \ {x} to B \ {x}. These paths together with the trivial path
x account for k disjoint paths from A to B in G. We will therefore assume A ∩ B = ∅ for the
remaining cases.
Case 1: A and B separated by X, |X| = k and X 6= A, B.
Let CA be all of the components of G − X hitting A and let CB be all of the components
of G − X hitting B. Let GA = G[V (CA ) ∪ X] and GB = G[V (CB ) ∪ X]. Note CA 6= ∅, because
|A| ≥ k = |X|, but A 6= X. Similarly CB 6= ∅. Furthermore GA ∩GB = ∅. Thus GA and GB contain
fewer vertices and edges than G does. Therefore by induction the number of vertices separating A
from X in GA is the same as the number of disjoint paths from A to X in GA . Every A to B path
contains a path from A to X in GA and so we cannot separate A from X by fewer than k vertices.
Hence GA contains k disjoint paths from A to X. Similarly GB contains k disjoint paths from B
to X. As |X| = k we can put these paths together to from k disjoint paths from A to B.
Case 2: The minimum set of vertices separating A and B is either A or B.
Let P be any path from A to B, then because A ∩ B = ∅, P has an edge ab with a ∈ / B and
b∈/ A. Let Y be a smallest set of vertices separating A from B in G − ab. Then Ya = Y ∪ {a} and
Yb = Y ∪ {b} both separate A from B in G. Thus

|Ya | = |Yb | ≥ k.

45
If equality holds, then we’re done by Case 1, unless Ya = A and Yb = B. But then Y = A ∩ B, and
so |A ∩ B| = |Y | = k − 1 ≥ 1, and we are done by Case 0.
If equality does not hold, then
|Ya | = |Yb | > k.
and so |Y | ≥ k. So by induction G − ab has k disjoint paths from A to B in G − ab. These are the
required k disjoint paths in G.

4.0.1 Exercises
1. Let k ≥ 2. Show that in a k-connected graph any k vertices lie on a common cycle.

46
Part II

A Taste of Design Theory

47
Chapter 5

Steiner Triple Systems

5.1 Graph decomposition


Given a graph G a collection of subgraphs {H1 , H2 , . . . , H` } such that
1. E(G) = E(H1 ) ∪ E(H2 ) ∪ · · · ∪ E(H` ), and
2. E(Hi ) ∩ E(Hj ) = ∅, for all i 6= j.
is called a decomposition of G.
Example 5.1. A decomposition of a graph G into subgraphs H1 and H2 .

= +

G H1 H2

A spanning r-regular subgraph H of a graph G is called an r-factor of G. A decomposition of G


into r-factors is called an r-factorization.
Example 5.2. A 2-factorization of K9 .
7 3 3 3

9 8 9 3 8 6 7 5
4 2 2 2

6 5 8 5 7 5 9 4
1 1 1 1

3 2 7 2 9 4 8 6
H1 H2 H3 H4

49
Theorem 5.3. The complete graph Kn has a one-factorization if and only if n is even.

Proof. A one-factor is a matching and so if a graph has a one-factor, then the number of vertices
must be even. Suppose n = 2m and let V = Z2m−1 ∪ {∞} be a set of n vertices. Let F0 be the one
factor with edges
{{0, ∞}} ∪ {{x, 2m − 1 − x} : x = 1, 2, . . . , m − 1}.
Clearly F0 has m edges and these edges are pairwise disjoint. Hence F0 is a one-factor. We develop
F0 modulo 2m − 1 to obtain the one-factorization. That is for j = 0, 1, 2, . . . , 2m − 2 define Fj to
have the edges
{{j, ∞}} ∪ {{x + j, 2m − 1 − x + j} : x = 1, 2, . . . , m − 1}
The edge {x, ∞} ∈ E(Fx ), for each x = 0, 1, 2, . . . , 2m−2. Given an edge {x, y} define ∆(x, y) = ±d,
where d ≡ x − y (mod 2m − 1), 1 ≤ d ≤ m − 1. There are m − 1 possible values for ∆(x, y). Namely
±1, ±2, . . . , ±m − 1. Observe that ∆(x + j, 2m − 1 − x + j) = ±2x and {±2x : 1 ≤ x ≤ m − 1} =
{±x : 1 ≤ x ≤ m − 1}. Thus Fj contains exactly one edge for each possible value of ∆. Therefore
all edges have been accounted for because the set of edges with ∆ = ±d is

{{x + j, y + j} : j = 0, 1, 2, . . . , 2m − 2},

where {x, y} is any edge with ∆(x, y) = ±d.

A k-matching in a graph G is a set of k independent edges, that is, k edges that have no common
vertices.

Lemma 5.4. Let G be a regular graph of order n and degree 1 or 2. If k is a proper divisor of
|E(G)|, then G can be decomposed into k-matchings except when n = 2k and at least one component
of G has odd order.

Proof. If G is regular of degree 1, then G itself is an n2 -matching. If k is a proper divisor of |E(G)|,


then k | n/2. We simply partition the n2 -matching into k-matchings.
We move to the degree 2 case. Let

C1 , C2 , . . . , Cz

be the z cycles comprising the components of G. Let the respective orders be `1 , `2 , . . . , `z . We


know that
X z
`i = n
i=1

and that k is a proper divisor of n. Thus, we want to find d = n/k k-matchings that partition
E(G). We do this by finding a proper edge coloring of G with d colors so that each color class
contains k edges.
We dispose of the d = 2 case first. If all the cycles have even length, then each may be properly
colored by 2 colors and we are done. If some cycles has odd length, then it is impossible to properly
2-color the edges of the cycle, and this gives rise to the exception in the statement of the lemma.
We assume d ≥ 3 for the remainder of the proof. Start with C1 . Color an arbitrary edge color
1. Next color an edge on C1 adjacent to the first colored edge color 2. Continue around C1 coloring
successive edges 3, 4 and so on until either all edges are colored or an edge is colored d and not all
edges of C1 are yet colored. In the latter case, color the next edge 1 and continue as before.

50
Figure 5.1: The circulant graph Circ(8; {1, 2, 4, 6, 7}).

The preceding process either properly colors the edges of C1 or `1 ≡ 1(mod d) and the next to
last edge has been colored d. The process then says to color the next edge color 1, but this edge
is adjacent to the first colored edge and cannot be colored 1 in a proper edge coloring. So instead
of coloring the last colored edge with color d, give it the color 1 and then color the last edge with
color d. We have properly edge colored C1 .
Note that the edge coloring of C1 uses colors 1, 2, . . . , r i times and colors r + 1, r + 2, . . . , d
i − 1 times, where r ≡ `1 (mod d, 0 ≤ r < d. We then move to cycle C2 and repeat the process used
on C1 starting with color r + 1. We then continue in this way through all the cycles and clearly
obtain a proper edge coloring with d colors, where each color is used k times. This completes the
proof.

The circulant graph G = Circ(n; S) is the graph with vertex set {u0 , u1 , . . . , un−1 }, and an
edge joining vertices ui and uj if and only if j − i ∈ S, where S ⊆ Z \ {0} and s ∈ S if and only
if −s ∈ S. We denote an edge joining ui and uj by ui uj . The set S is called the connection set of
Circ(n; S). For example, Circ(8; {1, 2, 4, 6, 7}) is given in Figure 5.1. The edge joining ui and uj
is said to have length equal to the residue in the range 1, 2, . . . , bn/2c that is congruent to j − i or
i − j modulo n.

Theorem 5.5. If G = Circ(n; S) is connected, k is a proper divisor of n, and k divides |E(G)|,


then there is a decomposition of G into k-matchings.

Proof. Write G as the union of circulant subgraphs Circ(n; {±s}), s ∈ S and use Lemma 5.4 to
independently decompose each into k-matchings. Note that when s = n/2, if k divides n and
|E(G)|, then k also divides n/2 when n/2 ∈ S.

An F -factorization of a graph G is a decomposition of G into subgraphs {H1 , H2 , . . . , H` } in


which Hi ≈ F for each i = 1, 2, . . . , `.

Example 5.6. A K3 -factorization of K7

0 1 2 3 4 5 6

3 1 4 2 5 3 6 4 0 5 1 6 2 0

51
A K3 -factorization of Kn is also called a Steiner triple system of order n. It is generally more
convenient to describe a Steiner triple system as a collection of 3-element subsets. A Steiner triple
system of order v is a pair (V, T ) where

1. V is a v-element set of points,

2. T is a collection of 3-element subsets called triples, and

3. every pair of points is in exactly one triple.

We use STS(n) to denote a Steiner triple system of order n.


An STS(3) on {1, 2, 3} consists of only one triple, namely {1, 2, 3}. A more interesting example
is given Example 5.7.

Example 5.7. A Steiner triple system of order 7.

V = {0, 1, 2, 3, 4, 5, 6}
T = {{0, 1, 3}, {1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {0, 4, 5}, {1, 5, 6}, {0, 2, 6}}

Observe that the STS(7) in Example 5.7 is exactly the same as the K3 -factorization of K7 given
in Example 5.6.

Lemma 5.8. If a Steiner triple system of order v exists, then v ≡ 1, 3 (mod 6).

Proof. There are v2 pairs and each triple contains three of them. Thus the number of triples in


an STS(v) is  
1 v
.
3 2
Hence
v(v − 1)
is an integer. (5.1)
3·2
Also, any triple containing a fixed point x contains two other points. Therefore the number of
triples containing a fixed point x is (v − 1)/2 and so

(v − 1)
is an integer. (5.2)
2
Putting Equations 5.1 and 5.2 we see that v ≡ 1, 3 (mod 6).

Theorem 5.9. (The doubling construction) If a STS(v) exists, then so does an STS(2v + 1).

Proof. If an STS(v) exists, then v ≡ 1, 3 (mod 6) and so v is odd. Thus there is a one-factorization
{F1 , . . . , Fv } on {0, 1, . . . , v}. Let x1 , . . . , xv be v new points and let

V = {0, 1, . . . , v, x1 , . . . , xv }

be the 2v + 1 points of our STS(2v + 1). The triples are of two types.

Type 1. The 31 v2 triples in an STS(v) on x1 , . . . , xv .




52
v(v+1)
Type 2. The 2 triples defined by

{xj ∪ e : e ∈ Fj , j = 1, 2, . . . , v}.

This accounts for  


1 v v(v + 1) (2v + 1)(2v)
+ =
3 2 2 6
triples. The right number. Consider any pair {a, b}. If a, b ∈ {0, 1, . . . , v}, then ab is an edge
in some one-factor Fj , and {a, b, xj } is the triple in T that contains {a, b}, a type 2 triple. If
a, b ∈ {x1 , . . . , xv }, then a, b is contained in exactly one of the type 1 triples. If a ∈ {0, 1, . . . , v},
and b = xj ∈ {x1 , . . . , xv }, then there is a unique edge ac in Fj , that contains a. Hence a, b
is contained in the type 2 triple {a, b, c}. Therefore every pair is in some triple and there are
exactly the right number of triples. Therefore it follows that every pair is in exactly one triple and
consequently these triples form a STS(2v + 1).

Theorem 5.10. (vector space construction) A Steiner Triple System of order 2n − 1 exists for all
n ≥ 2.
Proof. Take as points V = Zn2 \ ~0 and as triples

T = {{~a, ~b, ~c} : ~a + ~b + ~c = ~0}

Observe that for any pair ~a, ~b, there is a unique ~c such that ~a + ~b + ~c = ~0. Namely, ~c = ~a + ~b.
Furthermore, ~c 6= ~a, for if so then ~a + ~b = ~a and thus ~b = 0 a contradiction. Similarly ~c 6= ~b.
Consequently (V, T ) is an STS(2n − 1).

Theorem 5.11. (multiply construction) If there is exists an STS(v) and an STS(w), then there
exists an STS(vw).
Proof. Let (V, T ) be an STS(v) and let (W, U ) be an STS(w). We construct an STS(vw) on V ×W
by taking the following three types of triples:
Type 1. {(a, x), (b, x), (c, x)} for each {a, b, c} ∈ T , and x ∈ W ,

Type 2. {(a, x), (a, y), (a, z)} for each {x, y, z} ∈ U , and a ∈ V ,

Type 3. The six triples {(a, x), (b, y), (c, z)} , {(a, y), (b, x), (c, z)} , {(a, x), (b, z), (c, y)} ,
{(a, z), (b, x), (c, y)} , {(a, y), (b, z), (c, x)} and {(a, z), (b, y), (c, x)} for each {a, b, c} ∈ T and
{x, y, z} ∈ U .
There is accounts for
v(v − 1)w w(w − 1)v 6v(v − 1)w(v − 1) vw(vw − 1)
+ + =
6 6 36 6
triples. The right number. Consider any pair of points (a, x), (b, y). If x = y, then (a, x), (b, y) is in
the Type 1 triple {{(a, x), (b, x), (c, x)}, where {a, b, c} is the unique triple in T that contains a, b.
If a = b, then (a, x), (b, y) is in the Type 2 triple {{(a, x), (a, y), (a, z)}, where {x, y, z} is the unique
triple in U that contains x, y. Otherwise, (a, x), (b, y) is in the Type 3 triple {(a, x), (b, y), (c, z)},
where {a, b, c} is the unique triple in T that contains a, b and {x, y, z} is the unique triple in U that
contains x, y.

53
Applying, these constructions to the STS(3) and STS(7) we constructed earlier we can obtain
for example STS(v) for v ∈ {3, 7, 9, 15, 19, 21, 27, 31} however we cannot so far construct STS(v)
for v ∈ {13, 25, 33, 37} among others. In the next two sections we will see that we can construct
STS(v) for all v ≡ 1, 3 (mod 6).

5.2 The Bose construction v ≡ 3 (mod 6)


A Latin square of order n is an n by n array L with entries from an n-element set such that each row
and column contain each symbol exactly once. A Latin square L is commutative if L[i, j] = L[j, i]
for all i, j. The multiplication table of an Abelian group is a commutative Latin square. A Latin
square is idempotent if L[i, i] = i for all i.

Example 5.12. Some Latin squares.

1 2 3 4
1 2 3
2 1 4 3
2 3 1
3 4 1 2
3 1 2
4 3 2 1

6 5 3 4 1 2
1 2 3 4 5
2 1 6 5 3 4
5 1 2 3 4
3 4 5 6 1 2
4 5 1 2 3
5 6 2 1 4 3
3 4 5 1 2
4 2 1 3 5 6
2 3 4 5 1
1 3 4 2 6 5

Theorem 5.13. An idempotent commutative Latin square of order 2n + 1 exists for every n ≥ 0.

Proof. The group (Z2n+1 , +) is a commutative group. Thus its addition table is a commutative
Latin square. The entries on the diagonal are {2x : x = 0, 1, 2, 3, . . . , 2n − 1} multiplying each entry
of the square by (n + 1) the inverse of 2 modulo 2n + 1. creates a square that is idempotent and
commutative.

Theorem 5.14. (Bose’s construction) A STS(v) exists whenever v ≡ 3 (mod 6).

Proof. Let v = 3 + 6n, set Q = {1, 2, . . . , 2n + 1} and let L be an idempotent commutative Latin
square on Q. We construct an STS(v) on Q × Z3 by taking the following two types of triples. See
Figure 5.2.

Type 1. {(x, 0), (x, 1), (x, 2)} for each x ∈ Q

Type 2. {(x, 0), (y, 0), (L[x, y], 1)}, {(x, 1), (y, 1), (L[x, y], 2)} and {(x, 2), (y, 2), (L[x, y], 0)}
for each x, y ∈ Q, x 6= y.

The Type 2 triples are well defined, because L[x, y] = L[y, x] and L[x, x] = x. There are
 
2n + 1 (3 + 6n)(2 + 6n) v(v − 1)
2n + 1 + 3 = =
2 6 6

54
Q × {0}

Q × {1}

Q × {2}

Type 1

Q × {0}

Q × {1}

Q × {2}

Type 2

Figure 5.2: The Bose construction

triples. The right number for an STS(v). Consider any pair of different points (p, i), (q, j) ∈
Q × {0, 1, 2}. If p = q, then they are in the Type 1 triple {(p, 0), (p, 1), (p, 2)}. If i = j, then the
pair of points are in the Type 2 triple {(p, i), (q, i), (L[p, q], i + 1)}. If p 6= q and i 6= j, then without
loss j = i+1 and (q, i), (p, j) are in the Type 2 triple {(p, i), (q, j), (r, k)}, where k = −i−j (mod 3)
and r is defined by L[p, r] = q.

5.2.1 Exercises
1. Explicitly Construct a Steiner Triple System of order 15, using the vector space construction,
the doubling construction and the Bose construction. For a bonus determine if these three
STS(15)s are nonisomorphic.
Two Steiner triple systems (V, T ) and (W, U ) of order v are isomorphic if there is a one to
one mapping f : V → W such that

{a, b, c} ∈ T if and only if {f (a), f (b), f (c)} ∈ U

5.3 The Skolem construction v ≡ 1 (mod 6)


A Latin square L of order 2n is said to be half-idempotent if cells L[i, i] and L[n + i, n + i] both
contain i for i = 1, 2, ..., n.

55
Example 5.15. Examples of commutative half-idempotent Latin Squares.

1 3 2 4
1 2 3 2 4 1
2 1 2 4 1 3
4 1 3 2

1 4 2 5 3 6
4 2 5 3 6 1
2 5 3 6 1 4
5 3 6 1 4 2
3 6 1 4 2 5
6 1 4 2 5 3

Theorem 5.16. There is a half-idempotent commutative Latin square of order 2n for all n.

Proof. The addition table of (Z2n , +) is a commutative Latin square with diagonal entries:

[0 + 0, 1 + 1, 2 + 2, . . . , (n − 1) + (n − 1), n + n, (n + 1) + (n + 1), . . . , (2n − 1) + (2n − 1)]


= [0, 2, 4, 6, . . . , 2n − 2, 0, 2, 4, . . . , 2n − 2]

So we can relabel the entries in this table to make a half-idempotent commutative Latin Square.

Theorem 5.17. (The Skolem Construction) There exists a Steiner triple system of every order
v ≡ 1 (mod 6).

Proof. Let v = 1 + 6n, n ≥ 1, let L be a half-idempotent Latin square on Q = {1, 2, . . . , 2n} and
let V = {∞} ∪ Q × Z3 . Take the following 3 types of triples.

Type 1: {(i, 0), (i, 1), (i, 2)}, i = 1, 2, ..., n.

1 2 ··· n n+1 · · · 2n

There are n Type 1 triples.

Type 2: {∞, (n + i, a), (i, a + 1)}, i = 1, 2, ..., n, a ∈ Z3 .

56
i n+i i n+i i n+i

∞ ∞ ∞

a=0 a=1 a=2

There are 3n Type 2 triples.

Type 3: {(i, a), (j, a), (L[i, j], a + 1)}, {i, j} ⊂ Q, a ∈ Z3

2n

There are 3 2 Type 3 triples.

The total number of triples chosen is


 
2n 2n(2n − 1)
n + 3n + 3 = 4n + 3 ·
2 2
= 4n + 3n(2n − 1)
= 6n2 + n
(6n + 1)(6n)
=
6
v(v − 1)
= .
6
v(v−1)
It it easy to see that every pair is in in exactly one of the chosen triples. Thus the 6 chosen
triples form a Steiner triple system of order v = 1 + 6n.

Example 5.18. A Steiner triple system of order 13. Here n = 2 and v = 1 + 6n = 13.

1 3 2 4
3 2 4 1
L=
2 4 1 3
4 1 3 2

Type 1 triples:
{(1, 0), (1, 1), (1, 2)}
{(2, 0), (2, 1), (2, 2)}

57
Type 2 triples:
{∞, (3, 0), (1, 1))}
{∞, (4, 0), (2, 1))}
{∞, (3, 1), (1, 2))}
{∞, (3, 1), (2, 2))}
{∞, (4, 2), (1, 0))}
{∞, (4, 2), (2, 0))}
Type 3 triples:
{(1, 0), (2, 0), (3, 1))}
{(1, 0), (3, 0), (2, 1))}
{(1, 0), (4, 0), (4, 1))}
{(2, 0), (3, 0), (4, 1))}
{(2, 0), (4, 0), (1, 1))}
{(3, 0), (4, 0), (3, 1))}
{(1, 1), (2, 1), (3, 2))}
{(1, 1), (3, 1), (2, 2))}
{(1, 1), (4, 1), (4, 2))}
{(2, 1), (3, 1), (4, 2))}
{(2, 1), (4, 1), (1, 2))}
{(3, 1), (4, 1), (3, 2))}
{(1, 2), (2, 2), (3, 0))}
{(1, 2), (3, 2), (2, 0))}
{(1, 2), (4, 2), (4, 0))}
{(2, 2), (3, 2), (4, 0))}
{(2, 2), (4, 2), (1, 0))}
{(3, 2), (4, 2), (3, 0))}

1. R. Stong, On 1-factorizability of Cayley graphs, J. Combin. Theory Ser. B 39 (1985), 298–


307.

58
Chapter 6

Magic Squares

Chapter 1 of H.J. Ryser’s book on combinatorial Mathematics begins with:

Combinatorial mathematics also referred to as combinatorial analysis or combina-


torics, is a mathematical discipline that began in ancient times. According to legend
the Chinese Emperor Yu (c. 2200 b.c.) observed the magic square on the shell of a
divine turtle. ...

After years of careful research I have discovered a picture of that famous turtle.

A magic square is an n by n array of integers with the property that the sum of the numbers
in each row, each column and the the main and back diagonals is the same. This sum is the magic
sum.
A magic square is n-th order if the integers 1, 2, 3, . . . , n2 are used

16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1

59
n=4 magic sum = 34

The magic sum of an n-th order magic square is easily determined because we must have the
same sum in each of the n rows. Thus the magic sum is

1 n3 + n
1 + 2 + · · · n2 =

.
n 2

The Statesman Benjamin Franklin writes in his autobiography:

When I disengaged myself, as above mentioned, from private business, I flatter’d


myself that, by the sufficient tho’ moderate fortune I had acquir’d, I had secured leisure
during the rest of my life for philosophical studies and amusements. I purchased all Dr.
Spence’s apparatus, who had come from England to lecture here, and I proceeded in
my electrical experiments with great alacrity; but the publick, now considering me as
a man of leisure, laid hold of me for their purposes, every part of our civil government,
and almost at the same time, imposing some duty upon me. The governor put me
into the commission of the peace; the corporation of the city chose me of the common
council, and soon after an alderman; and the citizens at large chose me a burgess to
represent them in Assembly. This latter station was the more agreeable to me, as I
was at length tired with sitting there to hear debates, in which, as clerk, I could take
no part, and which were often so unentertaining that I was induc’d to amuse myself
with making magic squares or circles, or any thing to avoid weariness; and I conceiv’d
my becoming a member would enlarge my power of doing good. I would not, however,
insinuate that my ambition was not flatter’d by all these promotions; it certainly was;
for, considering my low beginning, they were great things to me; and they were still
more pleasing, as being so many spontaneous testimonies of the public good opinion,
and by me entirely unsolicited.

Example 6.1 contains an Magic Square of order 8.

Example 6.1. The Franklin Square of order 8.

52 61 4 13 20 29 36 45
14 3 62 51 46 35 30 19
53 60 5 12 21 28 37 44
11 6 59 54 43 38 27 22
55 58 7 10 23 26 39 42
9 8 57 56 41 40 25 24
50 63 2 15 18 31 34 47
16 1 64 49 48 33 32 17

In this chapter we will determine the values of n for which a n-th order magic square exist. Let
M be the set of all positive integers n such that an n-th order magic square exists. So far we know
3, 4, 8 ∈ M and it is easy to see that 2 ∈
/ M.

60
6.1 De La Loubère’s construction
In 1693 De La Loubère constructed a magic square of order n whenever n is odd. His construction
is as follows.
First we place 1 in the middle cell of the first row. The numbers are placed consecutively
1, 2, 3, . . . , n2 in diagonal lines which slope upwards to the right except

1. when the top row is reached the next number is written in the bottom row as if it were the
next row after the top;

2. when the right column is reached, the next number is written in the first column as if it
followed the right-hand column; and

3. if a cell is reached that is already filled or if the upper right corner is reached then the next
cell to be used is the one directly below it.

Example
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
This construction shows that 2m + 1 ∈ M for all m.
The proof that In 1693 De La Loubère construction works is complicated. We offer in the next
section another construction for Magic Squares of odd order. This construction uses orthogonal
Latin squares.

6.2 The orthogonal Latin square construction


Two Latin squares A and B of order n on {0, 1, 2, . . . , n − 1} are said to be orthogonal Latin squares
if the n2 ordered pairs
(A[x, y], B[x, y])
x = 0, 1, 2, 3, . . . , n − 1, y = 0, 1, 2, . . . , n − 1 are all distinct.
An n by n array is said to be row magic if the sum of the entries in any row are the same. We
similarly define column magic.
If A and B are orthogonal Latin squares on {0, 1, 2, . . . , n − 1}, then

M = J + A + nB,

where J is the matrix of all 1s is row and column magic, because the sum of the entries in any row
or column of M is

{z· · · + 1} + (0 + 1 + 2 + · · · + (n − 1)) + n (0 + 1 + 2 + · · · + (n − 1))


|1 + 1 +
n
n(n − 1) n(n − 1)
=n+ +n
2 2
n3 + n
= .
2

61
Example 6.2. An example of almost constructing a magic square from 2 orthogonal Latin squares.
       
1 7 13 19 25 11111 01234 01234
12 18 24 5 6   1 1 1 1 1   1 2 3 4 0  23401
       
23 4 10 11 17 =  1 1 1 1 1  +  2 3 4 0 1  + 5  4 0 1 2 3 
       
 9 15 16 22 3   1 1 1 1 1   3 4 0 1 2  12340
20 21 2 8 14 11111 40123 34012

Unfortunately the front and back diagonals of

M = J + A + nB

do not necessarily have the magic sum as the above example shows.
Fortunately we can carefully choose the squares A and B so that the front and back diagonals
have the magic sum. For example consider the squares A and B on Zn , n odd, defined by
n−1
A[x, y] = (y − x) + (mod n)
2
B[x, y] = 2−1 (x + y) (mod n)

(Note, 2−1 exists, because n is odd.) First observe that if

(A[x1 , y1 ], B[x1 , y1 ]) = (A[x2 , y2 ], B[x2 , y2 ]),

then
n−1 n−1
y1 − x 1 + ≡ y2 − x 2 + (mod n)
2 2
and
2−1 (x1 + y1 ) ≡ 2−1 (x2 + y2 ) (mod n).
Thus,
y1 − x1 ≡ y2 − x2 (mod n)
and
x1 + y1 ≡ x2 + y2 (mod n).
Consequently x1 = x2 and y1 = y2 , and so A and B are orthogonal. Therefore M = J + A + nB is
row and column magic. The forward diagonal of A is
 
n−1 n−1 n−1
, ,...,
2 2 2

Thus forward diagonal sum of M is

n−1 n3 + n
n+n + n (0 + 1 + 2 + · · · + n − 1) = .
2 2
The magic sum. The back diagonal of B is
n−1 n−1 n−1
[ , ,..., ]
2 2 2

62
Thus back diagonal sum of M is

n−1 n3 + n
n + (0 + 1 + 2 + · · · + n − 1) + n = .
2 2
Again the magic sum. Thus we have shown that there is a magic square for all odd orders n.

Example 6.3. A magic square of order 5 constructed from two orthogonal Latin squares.
       
3 19 10 21 12 11111 23401 03142
17 8 24 15 1   1 1 1 1 1   1 2 3 4 0  31420
       
 6 22 13 4 20 =  1 1 1 1 1  +  0 1 2 3 4  + 5  1 4 2 0 3 
       
25 11 2 18 9   1 1 1 1 1   4 0 1 2 3  42031
14 5 16 7 23 11111 34012 20314

6.3 Strachey’s construction


Strachey in 1918, showed how to construct a magic square M of order 2(2m + 1) from a magic
square A of order 2m + 1. His construction is as follows.
Step 1. From A construct this non-magic square of order 2u:

A A + 2u2

S1 =

A + 3u2 A + u2

(6.1)

Step 2. Interchange the indicated cells:

m m−1

Swap
S2 =
Swap

(6.2)
The result is a magic square of order 2(2m + 1), So 2(2m + 1) = 4m + 2 ∈ M for all m.

63
Example 6.4. A magic square of order 10, magic sum=505. m = 2, u = 5.
Step 1.

17 24 1 8 15 67 74 51 58 65
23 5 7 14 16 73 55 57 64 66
4 6 13 20 22 54 56 63 70 72
10 12 19 21 3 60 62 69 71 53
11 18 25 2 9 61 68 75 52 59
92 99 76 83 90 42 49 26 33 40
98 80 82 89 91 48 30 32 39 41
79 81 88 95 97 29 31 38 45 47
85 87 94 96 78 35 37 44 46 28
86 93 100 77 84 36 43 50 27 34

Step 2.

92 99 1 8 15 67 74 51 58 40
98 80 7 14 16 73 55 57 64 41
4 81 88 20 22 54 56 63 70 47
85 87 19 21 3 60 62 69 71 28
86 93 25 2 9 61 68 75 52 34
17 24 76 83 90 42 49 26 33 65
23 5 82 89 91 48 30 32 39 66
79 6 13 95 97 29 31 38 45 72
10 12 94 96 78 35 37 44 46 53
11 18 100 77 84 36 43 50 27 59

64
Theorem 6.5. If there is a magic square of odd order u = 2m + 1, then there is a magic square of
order 2u.
Proof. Let A be a magic square of order u = 2m + 1 and apply Strachy’s construction obtaining
u3 + u
the 2u by 2u square S2 in Equation 6.2. The magic sum of A is . We will show that S2 has
2
3
(2u) + 2u
magic sum = 4u3 + u.
2
The change from the matrix S1 in Equation 6.1 to the matrix S2 in Equation 6.2 does not change
the entries in any column only there order. Thus the sum of the entries of columns 1 through u is
u3 + u
 3 
u +u
+ + u · 3u = 4u3 + u
2
2 2
and the sum of the entries of columns u through 2u is
 3   3 
u +u 2 u +u
+ u · 2u + + u · u = 4u3 + u
2
2 2
The sum of the entries of rows 1 through u is
 3   3 
u +u u +u
+ m · 3u2 + + (m + 2) · 2u2 + (m − 1) · u2 = 4u3 + u
2 2
The sum of the entries of rows u through 2u is
 3   3 
u +u 2 u +u
+ (m + 1) · 3u + + (m + 2) · u + (m − 1) · 2u = 4u3 + u
2 2
2 2
The sum of the entries on the forward diagonal is
 3   3 
u +u 2 u +u
+ (m + 1) · 3u + + (m + 2) · u + (m − 1) · 2u = 4u3 + u
2 2
2 2
The sum of the entries on the backward diagonal is
 3   3 
u +u u +u
+ m · 3u2 + + (m + 2) · u2 + (m − 1) · 2u2 = 4u3 + u
2 2

6.4 The Product construction


Let A = [aij ] be a magic square of order m and magic sum α. Let B = [bij ] be a magic square of
order n and magic sum β. Then the mn by mn square
(a11 − 1)n2 + B (a12 − 1)n2 + B (a1m − 1)n2 + B
 
···
 (a21 − 1)n2 + B (a22 − 1)n2 + B ··· (a2m − 1)n2 + B 
A⊗B =
 
.. .. .. 
 . . . 
(am1 − 1)n2 + B (am2 − 1)n2 + B · · · (amm − 1)n2 + B

is a magic square of order mn and magic sum αn3 − mn3 + mβ. See Exercise 6.4.1.1. Thus if
m, n ∈ M, then mn ∈ M.

65
Example 6.6. The product of a magic square of order 3 with a magic square of order 4

128 115 114 125 16 3 2 13 96 83 82 93


117 122 123 120 5 10 11 8 85 90 91 88
121 118 119 124 9 6 7 12 89 86 87 92
116 127 126 113 4 15 14 1 84 95 94 81
 
  16 3 2 13 48 35 34 45 80 67 66 77 112 99 98 109
8 1 6  5 10 11 8 
 3 5 7 ⊗ 37 42 43 40 69 74 75 72 101 106 107 104
 9 6 7 12  = 41

38 39 44 73 70 71 76 105 102 103 108
4 9 2
4 15 14 1 36 47 46 33 68 79 78 65 100 111 110 97
64 51 50 61 144 131 130 141 32 19 18 29
53 58 59 56 133 138 139 136 21 26 27 24
57 54 55 60 137 134 135 140 25 22 23 28
52 63 62 49 132 143 142 129 20 31 30 17

Theorem 6.7. There exists a magic square for every order except n = 2.
Proof. Let M be the set of orders n, for which there exists a magic square of order n. We have
examples that show 1, 3, 4, 8 ∈ M and constructions showing that:
1. 2n + 1 ∈ M , for all n,
2. 4n + 2 ∈ M , for all n,
3. if m, n ∈ M , then mn ∈ M .
Thus, by 1 and 2 we see that if n ≡ 1, 2, 3 (mod 4), then n ∈ M . If n 6= M , n 6= 2, then there
is a smallest n = 4k that n ∈
/ M . Then, because 4, 8 ∈ M , we see that k > 2 and so k ∈ M .
Consequently, by the product construction, 4k ∈ M . Therefore M = {n ∈ Z : n 6= 2}.

6.4.1 Exercises
1. Prove that the production construction given in Section 6.4 does indeed produce a magic
square.
2. According to Mathematical Recreations and Essays By W. W. Rouse Ball, 1937, printed in
Great Britain the following construction was first discovered in 1889 by W. Firth and has
been rediscovered by many other authors. It was first related to me by M.S. Keranen.
(a) Prove that the following construction forms a normal magic square of order n = 4m for
all m ≥ 1.
Step 1. Form a square S1 by writing the numbers from the set {1, 2, . . . , n2 } in order
from left to right across each row in turn, starting from the top left hand corner.

1 2 ... n
n+1 n+2 . . . 2n
·
·
·
(n − 1)n + 1 (n − 1)n + 2 . . . n2

66
Step 2. Now fix the shaded entries and interchange each unshaded entry with it’s dia-
metrically opposite number. The shaded entries are the four m by m corner boxes
and the center (2m) by (2m) box.

Example: A normal magic square of order n = 8, magic sum=260, m = 2.


Step 1.
1 2 3 4 5 6 7 8
9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32
33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48
49 50 51 52 53 54 55 56
57 58 59 60 61 62 63 64
Step 2.
1 2 62 61 60 59 7 8
9 10 54 53 52 51 15 16
48 47 19 20 21 22 42 41
40 39 27 28 29 30 34 33
32 31 35 36 37 38 26 25
24 23 43 44 45 46 18 17
49 50 14 13 12 11 55 56
57 58 6 5 4 3 63 64
(b) Use this construction instead of the product construction to complete the proof that a
magic square of order n exists except when n = 2

67
68
Chapter 7

Mutually Orthogonal Latin Squares

Two Latin squares A and B of order n are said to be orthogonal Latin squares if the n2 ordered
pairs
(A[x, y], B[x, y])
x = 1, 2, 3, . . . , n, y = 1, 2, . . . , n are all distinct. A set A1 , A2 , . . . , Ak mutual orthogonal Latin
squares of order n, is said to be k MOLS(n).

Theorem 7.1. The number of MOLS(n) is at most n − 1 for any n.

Proof. Let L1 , L2 , . . . , Lt be t MOLS(n) on the symbols {1, 2, . . . , n}. Relabel the squares so that
the first row of each is [1, 2, ..., n]. This is always possible. Now consider [2, 1]-cell of each square. It
can not be 1, because 1 is in the [1, 1]-cell of every square. Furthermore, they all must be different,
for if Li [2, 1] = Lj [2, 1] = k , then the pair (k, k) would appear twice as it appears already in row 1,
i.e. Li [1, k] = Lj [1, k] = k. Thus there are only n − 1 choices for the [2, 1]-cells of the squares.

7.1 Finite fields


A field is a set of elements F with two binary operations defined on F called multiplication and
addition, such that F is an commutative group under addition, F \ {0} is a commutative group
under multiplication, and a(b + c) = ab + ac, for all a, b, c ∈ F. The field is said to be finite, if |F|
is finite.

Theorem 7.2. Let F be a finite field.

1. |F| = pα , for some prime p and some integer α ≥ 0.

2. For every prime p and integer α ≥ 0, there is up to isomorphism a unique field of order
q = pα . We denote this unique field by Fq .

3. 
Fq ≈ Zp [X] (f (X)) = Zp [X] (mod f (X))
where f (X) is any irreducible polynomial of degree α in Zp [X].

4. Fq \ {0} is a cyclic group of order q − 1.

69
Proof. A proof can be found in any abstract algebra book.

Example 7.3. The polynomial f (X) = X 2 + 1 is irreducible over Z3 , because it is quadratic and
has no roots in Z3 . Thus
F9 ≈ Z3 [X] (X 2 + 1)


Let g = X + 1, then modulo (X 2 + 1) we see that:

i 0 1 2 3 4 5 6 7
g i 1 X + 1 2X 2X + 1 2 2X + 2 X X + 2
and g 8 = 1. Hence F9 \ 0 is a cyclic group of order 8.
Theorem 7.4. Let n = pα , where p is a prime and α is a positive integer. Then for n ≥ 3, there
exists a complete set of n − 1 MOLS(n).
Proof. Let Fn be a finite field of order n. For each f ∈ Fn define the Fn × Fn matrix Af by

Af [x, y] = f x + y,

for all x, y ∈ Fn . We claim that the n − 1 squares Af , f ∈ Fn , f 6= 0 are MOLS(n).

1. (Row Latin) If f x + y1 = f x + y2 , then y1 = y2 .

2. (Column Latin) If f x1 + y = f x2 + y2 , then f x1 = f x2 and so x1 = x2 .

3. (Orthogonal) Suppose

(Af [x, y], A` [x, y]) = (Af [x1 , y1 ], A` [x1 , y1 ])

for some f 6= `. Then

f x + y = f x1 + y1
`x + y = `x1 + y1

Subtracting we see that

(f − `)x = (f − `)x1 .

Thus x = x1 and then y = y1 .

Example 7.5. Four mutually orthogonal Latin squares of order 5. For n = 5, we have F5 = Z5 .
The four mutually orthogonal Latin squares are defined by the equations:

A1 [x, y] = x + y
A2 [x, y] = 2x + y
A3 [x, y] = 3x + y
A4 [x, y] = 4x + y,

70
where x, y ∈ F5 . The actual squares are:

0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4
1 2 3 4 0 2 3 4 0 1 3 4 0 1 2 4 0 1 2 3
2 3 4 0 1 4 0 1 2 3 1 2 3 4 0 3 4 0 1 2
3 4 0 1 2 1 2 3 4 0 4 0 1 2 3 2 3 4 0 1
4 0 1 2 3 3 4 0 1 2 2 3 4 0 1 1 2 3 4 0
| {z } | {z } | {z } | {z }
A1 A2 A3 A4

An orthogonal array of order n and size k is a k by n2 array A with entries from an n-element
set S such that in any pair of distinct rows i and j every ordered pair occurs. That is

{(A[i, h], A[j, h]) : h = 1, 2, 3, . . . , n2 } = S × S.

We denote an orthogonal array of order n and size k by OA(k, n).

Example 7.6. An OA(4, 3).


1 1 1 2 2 2 3 3 3
1 2 3 1 2 3 1 2 3
1 2 3 3 1 2 2 3 1
1 2 3 2 3 1 3 1 2

Theorem 7.7. An OA(k, n) is equivalent to k − 2 MOLS(n).

Proof. Let L1 , L2 , . . . , Lk−2 be k − 2 MOLS(n) on the symbols {1, 2, . . . , n}. Label the rows and
columns of the squares by 1, 2, 3, . . . , n. Define the k by n2 array A by:
A[1, ni + j + 1] = i + 1 for i = 0, 1, 2, . . . , n − 1 and j = 0, 1, 2, . . . , n − 1
A[2, ni + j + 1] = j + 1 for i = 0, 1, 2, . . . , n − 1 and j = 0, 1, 2, . . . , n − 1
A[h + 2, `] = Lh [A[1, `], A[2, `]] for ` = 1, 2, . . . , n2 and h = 1, 2, . . . , k − 2
It is easy to see that A is an OA(k, n).
Given an OA(k, n) A we can define Latin squares L1 , L2 , . . . , Lk−2 as follows

Lh [A[1, `], A[2, `]] = A[h, `], ` = 1, 2, . . . , n2 .

It is an easy exercise to check that these squares are Latin squares and are pairwise orthogonal.

Example 7.8. 2 MOLS(3) is equivalent to OA(4, 3)

1 1 1 2 2 2 3 3 3
1 2 3 1 2 3
1 2 3 1 2 3 1 2 3
3 1 2 2 3 1
1 2 3 3 1 2 2 3 1
2 3 1 3 1 2
| {z } | {z } 1 2 3 2 3 1 3 1 2
| {z }
L1 L2 A

A transversal design with k groups of size n, denoted by TD(k, n) is a triple (X, B, G), where

1. X is a set of kn points,

71
2. B is a collection of k-element subsets called blocks

3. G = {G1 , G2 , . . . , Gk } is a partition of the kn points into k subsets of size n called groups,

such that every pair of points is in exactly one group or one block.
Because the blocks of a transversal design each contain exactly one point from each group, we
say that the blocks are transverse.

Example 7.9. A TD(4, 3).

X = {a1 , a2 , a3 , b1 , b2 , b3 , c1 , c2 , c3 , d1 , d2 , d3 }
G = [{a1 , a2 , a3 }, {b1 , b2 , b3 }, {c1 , c2 , c3 }, {d1 , d2 , d3 }]
| {z } | {z } | {z } | {z }
G1 G2 G3 G4
{a1 , b1 , c1 , d1 }
 

 



{a1 , b2 , c2 , d2 }






{a1 , b3 , c3 , d3 }



{a , b , c , d }
 
 2 1 3 2 
 
B = {a2 , b2 , c1 , d3 }
{a2 , b3 , c2 , d1 }

 

 

{a , b , c , d }

 


 3 1 2 3  
{a3 , b2 , c3 , d1 }

 
 

 
{a3 , b3 , c1 , d2 }

Theorem 7.10. A TD(k, n) is equivalent to an OA(k, n).

Proof. Given an OA(k, n) on symbols {1, 2, . . . , n}, define the groups G1 , G2 , . . . , Gk of the corre-
sponding TD(k, n) as follows:
Gi = {(i, j) : j = 1, 2, . . . , n}.

If the `-th column of the OA(k, n) is

[x1 , x2 , x3 , . . . , xk ]T

choose
{(1, x1 ), (2, x2 ), (3, x3 ), . . . , (k, xk )}

as a block of the TD(k, n). It is easy to see that this does indeed construct a TD(k, n) from an
OA(k, n).
Conversely suppose we are given a TD(k, n) (X, B, G). Define an OA(k, n) A by choosing a
bijection fi : Gi → {1, 2, . . . , n} for each group Gi ∈ G. Let B = {B1 , B2 , . . . , Bn2 }. Then the array
is
A[i, j] = fi (Gi ∩ Bj )

it is an easy exercise to show that A is indeed an OA(k, n)

Corollary 7.11. The three objects OA(k, n), TD(k, n) and (k − 2) MOLS(n) are all equivalent.

72
7.2 Finite projective planes
A projective plane is a pair (P, L), where is a finite set of points and L is a collection of subsets of
P called lines, satisfying the following three axioms

1. Every 2 points determine a unique line. That is given points x, y ∈ P, x 6= y there is exactly
one line L ∈ L, with x, y ∈ L.

2. Every 2 lines determine a unique point. That is given lines L, L0 ∈ L, L 6= L0 then |L∩L0 | = 1.

3. There exist 4 points no 3 of which are collinear. That is there are at least four distinct points
no 3 of which are in the same line.

Theorem 7.12. All the lines in a projective plane (P, L) have the same number of points.

Proof. Consider any two lines L, L0 ∈ L. By axiom 2 they intersect in a unique point x.

L ∩ L0 = {x}.

Let x1 , x2 , ..., xn be the remaining points on L. Thus L has n + 1 points. By axiom 3 there is point
y that is not on L or L0 . By axiom 1 there is a unique line Li that contains y and xi , i = 1, 2, . . . , n.
By axiom 2 the line Li intersects L0 in a unique point x0i . The points x01 , x02 , . . . , x0n are all distinct
because by axiom 1 the lines Li , i = 1, 2, . . . , n, pairwise intersect only in the point y. This accounts
for n + 1 points on L0 . If z is on L0 , then there is a line L̂ that contains x and y. This line intersects
L in some point xi say. But then L̂ = Li because the two points y and xi determine a unique line.
Consequently z = x0i . Therefore L0 has exactly n + 1 points the same as L.

73
A projective plane (P, L) in which the number of points on a line is n + 1 is called a projective
plane of order n.

Theorem 7.13. In a projective plane of order n, every point is on n + 1 lines.

Proof. Let y be any point. Then by axiom 3 there is a line L such that y ∈ / L. According
to Theorem 7.12 L = {x1 , x2 , . . . , xn+1 } for some set of n + 1 points. By axiom 2, every line
containing y must intersect L, and by axiom 1 we see that for each i = 1, 2, . . . , n + 1, There is
a unique line Li , with y, xi ∈ Li for each i = 1, 2, . . . , n + 1. Thus there are exactly n + 1 lines
through the point y.

Theorem 7.14. A projective plane of order n has n2 + n + 1 points.

Proof. Let y be any point. If x is any other point then by axiom 1 there is a line that contains x
and y. Thus every point is on a line through y. Applying Theorem 7.13 we see that there are n + 1
lines through y, and by Theorem 7.12 there are n points other than y on each of these lines. There
are thus
1 + n(n + 1) = n2 + n + 1

points.

Example 7.15. The Steiner triple system { {0,1,3} {1,2,4} {2,3,5} {3,4,6} {0,4,5} {1,5,6} {0,2,6}
} of order 7 is a projective plane of order 2.

Theorem 7.16. A projective plane of order n, n−1 MOLS(n), a OA(n+1, n) and a TD(n+1, n)
are all equivalent.

Proof. We already know from Theorem 7.11 that n−1 MOLS(n), a OA(n+1, n) and a TD(n+1, n)
are equivalent. We will first show that from a projective plane of order n that you can construct
OA(n + 1, n).
Let (P, L) be a projective plane of order n. Choose any line L ∈ L and let P1 , P2 , . . . , Pn+1 be
the n + 1 points on L. Through each Pi there are n lines other than L, number them 1, 2, 3, . . . , n
arbitrarily. Let Q1 , Q2 , . . . , Qn2 be the other n2 points and let aij be the number assigned to the
line through Pi and Qj ,i = 1, 2, . . . , n + 1 and j = 1, 2, . . . , n2 . Then its easily checked that

A = [aij ],

i = 1, 2, . . . , n + 1, j = 1, 2, . . . , n2 is an OA(n + 1, n).
Now we will show how to construct a a projective plane of order n from a TD(n + 1, n).
Let (X, B, G) be a TD(n + 1, n) and let ∞ be a new point. Set P = X ∪ {∞} and L =
B ∪ {G ∪ {∞} : G ∈ G. Then (P, L) is the required projective plane.

Theorem 7.17. There exists a projective plane of order n whenever n is a prime power.

Proof. If n is a prime power then we know from Theorem 7.4 that there exist n − 1 MOLS(n).
Thus applying Theorem 7.16 we have the result.

74
7.3 Pairs of orthogonal Latin squares
The pair of orthogonal Latin squares time line.
• Euler conjectures that 2 MOLS(n) exists if and only if n ≡ 0.1.3 (mod 4). He could do these
but could not construct any pair of orthogonal Latin squares of order 2 (mod 4).
• G. Tarray (1900) In a 33 page journal paper showed that does not exist 2 MOLS(4).
• MacNeise (1922) conjectures that if
n = pr11 pr22 · · · prxx ,
where p1 , p2 , . . . , px are distinct primes, then the maximum number of MOLS(n) is
M (n) = min{pr11 − 1, pr22 − 1, . . . , prxx − 1}

• E.T. Parker (1957) constructs 3 MOLS(21). This disproves MacNeise’s conjecture that the
maximum number would be M (21) = min{3 − 1, 7 − 1} = 2.
• R.S. Bose and S.S. Shrikande (1958) construct 2 MOLS(22) disproving Euler’s conjecture.
This result made the New York times.
• E.T. Parker (1959) constructs 2 MOLS(10).
• Bose, Shrikande, and Parker (1960) proves that there exists 2 MOLS(n) for every n except
for n = 2 or n = 6 when they cannot exist.
• Stinson (1984) Gives a clever 3 page proof that there do not exist 2 MOLS(6).
In this chapter we construct 2 MOLS(n) for every n except n = 2 and n = 4. Let Nmols (n)
be the number of mutually orthogonal Latin squares of order n. Note that Nmols (1) = ∞. Our
goal is to show Nmols (n) ≥ 2 for all positive integers n except n = 2, and n = 6.
Theorem 7.18. (Finite Field construction) For each pr , where p is a prime and r > 0 is an
integer, there exist pr − 1 MOLS(pr ), i.e. Nmols (pr ) = pr − 1.
Proof. We all ready did this in Theorem 7.4.

Theorem 7.19. (Direct product construction) Let A be a Latin square on X and let B be a
Latin square on Y . Define the square A × B on X × Y by
(A × B)[(x1 , y1 ), (x2 , y2 )] = (A[x1 , x2 ], B[y1 , y2 ]).
Then A × B is a Latin square.
Proof. Exercise 1.

Example 7.20.
X1 X2 X3 O1 O2 O3
X3 X1 X2 O3 O1 O2
1 2 3
X O X2 X3 X1 O2 O3 O1
× 3 1 2 =
O X O1 O2 O3 X1 X2 X3
2 3 1
O3 O1 O2 X3 X1 X2
O2 O3 O1 X2 X3 X1

75
Theorem 7.21. . If A1 , A2 are MOLS(x) and B1 , B2 are MOLS(y), then A1 × B1 and A2 × B2
are MOLS(xy),
Proof. Exercise 2.

Corollary 7.22. If n = pr11 pr22 · · · prxx , where p1 , p2 , . . . , px are distinct primes, then

Nmols (n) ≥ min{pr11 − 1, pr22 − 1, . . . , prxx − 1}.

Proof. Let M (n) = min{pr11 , pr22 , . . . , prxx }−1 The Finite field construction provides pri i −1 MOLS(pri i )
for i = 1, 2, . . . , x. So there exists M (n) MOLS(pri i ), for each i. Now apply the direct product
construction to get M (n) MOLS(n).

Observe that if n ≡ 0, 1, or 3 (mod 4), then the smallest prime power q that divides n is at
least 3. Hence applying Corollary 7.22 we may conclude the following.
Corollary 7.23. If n ≡ 0, 1, or 3 (mod 4), then Nmols (n) ≥ 2.
The rest of this chapter deals with the difficult case, i.e. two MOLS of order n, when n ≡ 2
(mod 4).
Theorem 7.24. If there are k − 1 MOLS(n), then there are k − 2 idempotent MOLS(n).
Proof. Let L0 , L1 , . . . , Lk−2 be MOLS(n) on {1, 2, . . . , n}. Simultaneously, permute the rows and
columns of the all the squares so that the diagonal of L0 is [1, 1, 1..., 1]. The remain squares now
must have each of the symbols 1, 2, 3, . . . , n on there diagonals in some order. These squares can
be simultaneously relabeled so that L1 , L2 , . . . , Lk−2 are idempotent.

Using the alternative language of orthogonal arrays we rewrite Theorem 7.24 into Theorem 7.25.
A column is a constant column if all its entries are identical.
Theorem 7.25. If there is an OA(k + 1, n), then there is an OA(k, n) with n constant columns.
If S is any subset of the symbols we denote in the remainder of this chapter the k by |S| matrix
of constant columns by
C(k, S) = [[s, s, s, . . . , s]T : s ∈ S]
| {z }
k times

An orthogonal array OA(k, n) on S with n = |S| constant columns can be written as

[C(k, S), A]

where A are the remaining n2 − n non-constant columns.


Example 7.26. 3 idempotent MOLS(5) and an OA(5, 5) with 5 constant columns
We start with the 4 MOLS(5) given in Example 7.5:

0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4
1 2 3 4 0 2 3 4 0 1 3 4 0 1 2 4 0 1 2 3
2 3 4 0 1 4 0 1 2 3 1 2 3 4 0 3 4 0 1 2
3 4 0 1 2 1 2 3 4 0 4 0 1 2 3 2 3 4 0 1
4 0 1 2 3 3 4 0 1 2 2 3 4 0 1 1 2 3 4 0

76
Now simultaneously permute the rows of all the squares so that the first one has a constant diagonal
say [0, 0, 0, 0, 0]T .

0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4
4 0 1 2 3 3 4 0 1 2 2 3 4 0 1 1 2 3 4 0
3 4 0 1 2 1 2 3 4 0 4 0 1 2 3 2 3 4 0 1
2 3 4 0 1 4 0 1 2 3 1 2 3 4 0 3 4 0 1 2
1 2 3 4 0 2 3 4 0 1 3 4 0 1 2 4 0 1 2 3

Now in each of the remaining squares the diagonal contains each symbol once. Independently
permute the symbols in these squares so that their diagonals become [0, 1, 2, 3, 4]T . That is use the
permutations
     
0 1 2 3 4 0 1 2 3 4 0 1 2 3 4
, , and
0 4 3 2 1 0 2 4 1 3 0 3 1 4 2

This obtains 3 idempotent MOLS(5).

0 4 3 2 1 0 2 4 1 3 0 3 1 4 2
2 1 0 4 3 4 1 3 0 2 3 1 4 2 0
4 3 2 1 0 3 0 2 4 1 1 4 2 0 3
3 0 4 3 2 2 4 1 3 0 4 2 0 3 1
1 2 1 0 4 1 3 0 2 4 2 0 3 1 4

we can now use Theorem 7.7 to construct [C(k, S), A] an OA(5, 5) with 5 constant columns, S =
{0, 1, 2, 3, 4}.

0 1 2 3 4 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4
0 1 2 3 4 1 2 3 4 0 2 3 4 0 1 3 4 0 1 2 4 0 1 2 3
0 1 2 3 4 4 3 2 1 2 0 4 3 4 3 1 0 3 0 4 2 1 2 1 0
0 1 2 3 4 2 4 1 3 4 3 0 2 3 0 4 1 2 4 1 0 1 3 0 2
0 1 2 3 4 3 1 4 2 3 4 2 0 1 4 0 3 4 2 0 1 2 0 3 1
| {z } | {z }
C(k,S) A

A pairwise balanced design PBD of type 2-(v, K, λ) is a pair (X, B) where


1. X is a v-element set of points,

2. B is a collection of subsets of X called blocks,

3. |B| ∈ K for every block B ∈ B, and

4. every pair of points is in λ blocks.


The parameter λ is called the index. If the index is λ = 1 we say that the design is a Steiner
pairwise balanced design or a Linear space. If K = {k}, we write 2-(v, k, λ) instead of 2-(v, K, λ).

Example 7.27. • A Steiner triple system of order n, STS(n), is a 2-(n, 3, 1).

• A projective plane of order n is a 2-(n2 + n + 1, n + 1, 1).

77
• A transversal design, TD(k, n) is a 2-(nk, {k, n}, 1) with exactly k blocks of size n, and they
are disjoint.

Theorem 7.28. (The PBD construction for an k MOLS(n))


Let (X, B) be a PBD of index 1. Suppose for each B ∈ B, there exists k idempotent MOLS(|B|).
Then there are k idempotent MOLS(|X|).

Proof. We will construct an OA(k + 2, |X|), with |X| constant columns. Let (X, B) be a PBD of
index 1 and suppose for each Bi ∈ B = {B1 , B2 , . . . , Bb }, there exists k idempotent MOLS(|B|)
with |B| constant columns. Then for each Bi ∈ B there is an OA(k + 2, |Bi |) on Bi

[C(k + 2, Bi ), ABi ]

where C(k + 2, Bi ) are the constant columns and ABi are the remaining non-constant columns.
Then
[C(k + 2, X), AB1 , AB2 , AB3 , . . . ABb ]

is an OA(k + 2, |X|), with |X| constant columns. For consider any pair of rows u, v and any two
symbols a, b ∈ X. If a = b, then a, b appears on rows u, v in a unique column of C(k + 2, X). If
a 6= b, then there is exactly one block Bi ∈ B that contains a and b on rows u, v. Hence there is
exactly one column of ABi that contains a, b on rows u, v. An OA(k + 2, |X|), with |X| constant
columns is equivalent to k idempotent MOLS(|X|)

Example 7.29. Three MOLS of order 21. We have that 4 = 22 , is a prime power. So, by
Theorem 7.18, there are 3 MOLS(4) and thus by Theorem 7.16 there is a projective plane of order
4, i.e a 2-(21, 5, 1). Also 5, is a prime power. So, by Theorem 7.18, there are 4 MOLS(5) and
thus by Theorem 7.24, there are 3 idempotent MOLS(5). Now applying Theorem 7.28 we have 3
MOLS(21).

Corollary 7.30. If there is a PBD of type 2-(v, K, 1), then

Nmols (v) ≥ min{Nmols (k) − 1 : k ∈ K}

Proof. For each k ∈ K, apply Theorem 7.24 to obtain Nmols (k)-1 idempotent MOLS(k). Now
use the construction provided by Theorem 7.28.

In a PBD(X, B) a collection of blocks

B1 , B2 , . . . , Br

that partition the points is called a parallel class. A partial parallel class is called clear set of
blocks.

Theorem 7.31. (The clear set PBD construction for k MOLS(n))


Let (X, B) be a PBD with a clear set Π of blocks Π ⊆ B. Suppose for each P ∈ Π there are
k MOLS(|P |) and for each B ∈ B \ Π there are k idempotent MOLS(|B|). Then there are k
MOLS(|X|).

78
Proof. We will construct an OA(k + 2, |X|). Let (X, B) be a PBD with a clear set

Π = {P1 , P2 , . . . , Pr }

of blocks and let LPi be an OA(k + 2, |Pi |) on the symbols in Pi . Let

B \ Π = {B1 , B2 , . . . , Bs }

be the remaining blocks and for each j = 1, 2, . . . , s construct on Bj an OA(k + 2, |Bj |) with |Bj |
constant columns, [C(k + 2, Bj ), ABj ]. Then

[C(k + 2, S), LP1 , LP2 , . . . , LPr , AB1 , AB2 , . . . , ABs ],

where S = X \ (P1 ∪ P2 ∪ · · · ∪ Pr ), is an OA(k + 2, |X|). For consider any pair of rows u, v and any
two symbols a, b ∈ X. If a = b ∈ S, then a, b appears on rows u, v in a unique column of C(k + 2, S). If
a = b ∉ S, then a = b ∈ Pi for a unique i and hence a, b appears on rows u, v in a unique column
of LPi . If a ≠ b, then there is exactly one block B ∈ B that contains a and b. If
B = Pi ∈ Π, there is exactly one column of LPi that contains a, b on rows u, v. If B = Bj ∈ B \ Π,
there is exactly one column of ABj that contains a, b on rows u, v. An OA(k + 2, |X|) is equivalent
to k MOLS(|X|).

Example 7.32. Construction of 2 MOLS(22)

1. A PBD (X, B) of type 2-(22, {3, 4, 5}, 1) with clear set

π = {{1, 8, 12}, {5, 10, 15}, {20, 21, 22}}.

X = {1, 2, 3, 4, . . . , 22} and B consists of the following 30 blocks:

      blocks of size 3:   {1, 8, 12}   {5, 10, 15}   {20, 21, 22}

      blocks of size 4:   {16, 17, 18, 19}   {1, 7, 14, 18}    {2, 6, 14, 20}    {2, 9, 13, 16}
                          {2, 10, 11, 18}    {3, 6, 12, 19}    {3, 7, 15, 16}    {3, 9, 11, 22}
                          {4, 7, 13, 21}     {4, 8, 11, 17}    {4, 9, 14, 19}    {5, 6, 13, 17}

      blocks of size 5:   {1, 2, 3, 4, 5}        {6, 7, 8, 9, 10}       {11, 12, 13, 14, 15}
                          {1, 6, 11, 16, 21}     {1, 9, 15, 17, 20}     {1, 10, 13, 19, 22}
                          {2, 7, 12, 17, 22}     {2, 8, 15, 19, 21}     {3, 8, 13, 18, 20}
                          {3, 10, 14, 17, 21}    {4, 6, 15, 18, 22}     {4, 10, 12, 16, 20}
                          {5, 7, 11, 19, 20}     {5, 8, 14, 16, 22}     {5, 9, 12, 18, 21}

2. 3 is a prime so there are 2 = 3 − 1 MOLS(3).

3. 4 is a prime power, so there are (4 − 1) − 1 = 2 idempotent MOLS(4).

4. 5 is a prime so there are (5 − 1) − 1 = 3 idempotent MOLS(5).
The clear set PBD construction (Theorem 7.31) yields 2 MOLS(22).
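Designs like this are easy to check by machine. The sketch below (not part of the example) tests the index-1 property of a block list and checks that a given set of blocks is clear, i.e. pairwise disjoint; it can be applied to the 30 blocks listed above.

    from itertools import combinations

    def is_pbd_index_1(points, blocks):
        # every pair of points must lie in exactly one block
        count = {frozenset(p): 0 for p in combinations(points, 2)}
        for B in blocks:
            for p in combinations(B, 2):
                count[frozenset(p)] += 1
        return all(c == 1 for c in count.values())

    def is_clear(blocks):
        return all(set(A).isdisjoint(B) for A, B in combinations(blocks, 2))

    print(is_clear([{1, 8, 12}, {5, 10, 15}, {20, 21, 22}]))   # True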
Corollary 7.33. If there is a PBD of type 2-(v, K, 1) and a subset K0 ⊆ K such that no two blocks
with sizes in K0 intersect, then

Nmols (v) ≥ min {{Nmols (k) : k ∈ K0 } ∪ {Nmols (k) − 1 : k ∈ K \ K0 }}

Proof. Observe that Π = {B ∈ B : |B| ∈ K0 } is a clear set of blocks. For each k ∈ K \ K0 , apply
Theorem 7.24 to obtain Nmols (k) − 1 idempotent MOLS(k). Now use the construction provided
by Theorem 7.31.

Example 7.34. Consider a projective plane of order 4, i.e. a PBD of type 2-(21, 5, 1) and let x, y,
and z be any 3 non-collinear points. Referring to Figure 7.1 we see that there are 3 lines containing
pair of x, y, and z, there are 3 other lines through each of x, y, and z, and there are 9 lines not
containing any of x, y, or z. Deleting x, y, and z we obtain a PBD of type 2-(18, {3, 4, 5}, 1) in

Figure 7.1: The projective plane of order 4 and 3 non-collinear points

which the blocks of size 3 form a clear set. Hence applying Corollary 7.33 we have

Nmols (18) ≥ min{Nmols (3), Nmols (4) − 1, Nmols (5) − 1} = 2.

A pairwise balanced design is said to be resolvable if its blocks can be partitioned into parallel
classes.
Theorem 7.35. If Nmols (m) ≥ k − 1, then there is a resolvable PBD of type 2-(km, {k, m}, 1)
with m + 1 parallel classes in which the blocks of size m form one of the parallel classes.
Proof. Because Nmols (m) ≥ k − 1, there is a TD(k + 1, m), say (X, G, B). Let G = {G0 , G1 , G2 , . . . , Gk }
and G0 = {1, 2, 3, . . . , m}. For each i ∈ G0 , i.e. for each i = 1, 2, . . . , m set

Pi = {B \ {i} : B ∈ B and i ∈ B}

Let P0 = {G1 , G2 , . . . , Gk }. Then P0 , P1 , . . . , Pm form the parallel classes of the required PBD of
type 2-(km, {k, m}, 1) on the point set X \ G0 .
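The bookkeeping in this proof can be sketched as follows (the inputs groups = [G0 , G1 , . . . , Gk ] and blocks = B of a TD(k + 1, m) are assumed to be given; this is an illustration, not code from the text).

    def parallel_classes(groups, blocks):
        G0 = groups[0]                      # G0 = {1, 2, ..., m}
        classes = [list(groups[1:])]        # P0 = {G1, ..., Gk}
        for i in sorted(G0):                # one class P_i for each point i of G0
            classes.append([sorted(set(B) - {i}) for B in blocks if i in B])
        return classes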

Corollary 7.36. If Nmols (m) ≥ k − 1 and 1 ≤ x < m, then


1. There exists a PBD of type 2-(km + x, {x, m, k, k + 1}, 1), and

2. Nmols (km + x) ≥ min{Nmols (x), Nmols (m), Nmols (k) − 1, Nmols (k + 1) − 1}.
Proof. Use Theorem 7.35 to obtain a resolvable PBD of type 2-(km, {m, k}, 1) with parallel classes
P0 , P1 , . . . , Pm in which P0 = {B1 , B2 , . . . , Bk } is the parallel class formed by the blocks of size m.
For i = 1, 2, . . . , x, add a new point ∞i to each block in Pi . Let B0 = {∞1 , ∞2 , . . . , ∞x } be a new
block. It is easy to check that this yields a PBD of type 2-(km + x, {x, m, k, k + 1}, 1) in which the
blocks B0 , B1 , . . . , Bk form a clear set. Apply Theorem 7.31.

Example 7.37. Nmols (100) = Nmols (7 · 13 + 9). So with k = 7, m = 13, x = 9, we have

Nmols (100) ≥ min{Nmols (13), Nmols (9), Nmols (7) − 1, Nmols (8) − 1} = 5
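The arithmetic of this example can be spelled out as a one-line computation (a sketch; the dictionary holds the lower bounds Nmols(q) ≥ q − 1 for the prime powers involved).

    nmols_lb = {7: 6, 8: 7, 9: 8, 13: 12}   # Nmols(q) >= q - 1 for prime powers q
    k, m, x = 7, 13, 9                      # 100 = k*m + x with 1 <= x < m
    bound = min(nmols_lb[x], nmols_lb[m], nmols_lb[k] - 1, nmols_lb[k + 1] - 1)
    print(bound)                            # prints 5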

The value of k = 4 in Corollary 7.36 yields the following Theorem.


Theorem 7.38. If Nmols (m) ≥ 3, Nmols (x) ≥ 2 and 1 ≤ x < m, then Nmols (4m + x) ≥ 2.
Lemma 7.39. Nmols (4t) ≥ 3 for all t ≥ 1.
Proof. Corollary 7.22 shows that Nmols (4t) ≥ 3 except possibly when 3 | t but 9 ∤ t. Write 4t =
2^a · 3 · u, where a ≥ 2 and gcd(u, 6) = 1. Then Nmols (u) ≥ 4 and so Nmols (4t) ≥ min{Nmols (2^a · 3), 4}.
Hence we need only show that Nmols (2^a · 3) ≥ 3. Now 2^a · 3 = 2^b · 12 or 2^a · 3 = 2^b · 24, where either
b = 0 or b ≥ 2. So
Nmols (2^a · 3) ≥ min{3, Nmols (12)} ≥ 3
or
Nmols (2^a · 3) ≥ min{3, Nmols (24)} ≥ 3.
Exercises 5 and 4 show that the number of mutually orthogonal Latin squares of orders 12 and 24
is at least 3.

Theorem 7.40. Nmols (n) ≥ 2 for all positive integers n, n 6= 2 or 6.


Proof. Corollary 7.23 tells us that we need only consider n ≡ 2 (mod 4). Write

n = 16k + y
= 16(k − 1) + (16 + y).

where y = 2, 6, 10, or 14. We see that Nmols (16 + y) ≥ 2 for each such y as follows:
• Nmols (18) ≥ 2, by Example 7.34.

• Nmols (22) ≥ 2, by Exercise 3.

• Nmols (26) ≥ 2, by Exercise 7.

• Nmols (30) ≥ min{Nmols (3), Nmols (10)} = 2, by Theorem 7.21 and Exercise 3.
Thus by Theorem 7.38 and Lemma 7.39,

Nmols (16k + y) = Nmols (4 · 4(k − 1) + (16 + y)) ≥ 2

provided k − 1 ≥ 1 and 16 + y < 4(k − 1), i.e. provided k ≥ 2 and 4k − 4 > 30, i.e. provided k ≥ 9.
Thus all that remains is to show that Nmols (n) ≥ 2 for all n ≡ 2 (mod 4), and 6 < n < 144.

The ones with n ≡ 1 (mod 3) are taken care of in Exercise 3. For n = 14, 26, and 38, see
Exercises 6, 7 and 8. For n = 18, see Example 7.34. For the remaining values of n we write
n = x · y, where min{Nmols (x), Nmols (y)} ≥ 2 and use Theorem 7.21 or we write n = k · m + x
where min{Nmols (x), Nmols (m), Nmols (k) − 1, Nmols (k + 1) − 1} ≥ 2 and use Corollary 7.36.
The values of x, y, m, and k are given in the following table.

30 = 3 · 10          42 = 3 · 14      50 = 5 · 10          54 = 3 · 18     62 = 4 · 13 + 10    66 = 3 · 22
74 = 4 · 16 + 10     78 = 3 · 26      86 = 4 · 19 + 10     90 = 9 · 10     98 = 7 · 14         102 = 3 · 34
110 = 5 · 22         114 = 3 · 38     122 = 4 · 27 + 14    126 = 9 · 14    134 = 4 · 27 + 26   138 = 3 · 46

7.3.1 Exercises
1. Prove Theorem 7.19.

2. Prove Theorem 7.21.

3. Let A be the following 4 × 4m array


 
 0    0   · · ·   0        1    2   · · ·   m       2m  2m − 1 · · · m + 1    ∞1   ∞2  · · ·  ∞m
 1    2   · · ·   m        0    0   · · ·   0       ∞1   ∞2  · · ·  ∞m       2m  2m − 1 · · · m + 1
2m  2m − 1 · · · m + 1    ∞1   ∞2  · · ·  ∞m         0    0   · · ·   0        1    2   · · ·   m
∞1   ∞2  · · ·  ∞m       2m  2m − 1 · · · m + 1      1    2   · · ·   m        0    0   · · ·   0

The entries in A are from the set Z2m+1 ∪ {∞1 , . . . , ∞m }. Let Ai be the array obtained from
A by adding i to each entry, where i+∞j = ∞j for all j = 1, 2, . . . , m. Let B be an OA(4, m),
which exists if Nmols (m) ≥ 2. Let C be the matrix of constant columns:
 
          0 1 · · · 2m
          0 1 · · · 2m
   C =    0 1 · · · 2m
          0 1 · · · 2m

Show that the array


D = [A0 , A1 , A2 , . . . , A2m , B, C]
is an OA(4, 3m + 1). Conclude that the following Theorem is true.

Theorem 7.41. If Nmols (m) ≥ 2, then Nmols (3m + 1) ≥ 2.

Note that in particular Nmols (10) ≥ 2 and Nmols (22) ≥ 2.

4. Take a projective plane of order q, where q is a prime power, and delete all the points on a
fixed line to obtain a PBD of type 2-(q^2 , q, 1). Delete an additional point to obtain a PBD
of type 2-(q^2 − 1, {q, q − 1}, 1) in which the blocks of size q − 1 form a clear set. Deduce that

Nmols (q^2 − 1) ≥ Nmols (q − 1)

for all prime powers q. What does this say about Nmols (24)?

5. Let
             0 1 2 3 4 5
             1 2 3 4 5 0
             2 3 4 5 0 1
       A =   3 4 5 0 1 2
             4 5 0 1 2 3
             5 0 1 2 3 4

   and B = A + 6J, where J is the 6 by 6 matrix of 1s. Define

              A  B
       L1 =   B  A
and obtain L2 , . . . , L5 from L1 by permuting the columns of L1 so that
• L2 has first row [0, 6, 8, 2, 7, 1, 9, 11, 4, 10, 5, 3],
• L3 has first row [0, 3, 6, 1, 9, 11, 2, 8, 5, 4, 7, 10],
• L4 has first row [0, 8, 1, 11, 5, 9, 3, 10, 2, 7, 6, 4],
• L5 has first row [0, 4, 11, 10, 2, 7, 8, 6, 9, 1, 3, 5].
Show that L1 , L2 , L3 , L4 , L5 are mutually orthogonal and conclude that Nmols (12) ≥ 5.
6. Let

      ~w = [0, 0, 0, 0]
      ~x = [1, 3, 4, 6]
      ~y = [x1 , x2 , x3 , 5]
      ~z = [9, 4, 6, 2]

   and
               ~w  ~z  ~y  ~x
               ~x  ~w  ~z  ~y
      A0 =     ~y  ~x  ~w  ~z
               ~z  ~y  ~x  ~w .
A0 is a 4 by 16 array. For each i ∈ Z11 , let Ai be the array obtained from A0 by adding i to
each entry modulo 11, fixing x1 , x2 , x3 . Let B be an OA(4, 3) on {x1 , x2 , x3 } and let C be
the array of constant columns
 
          0 1 . . . 10
          0 1 . . . 10
   C =    0 1 . . . 10
          0 1 . . . 10
Show that [A0 , A1 , . . . , A10 , B, C] is an OA(4, 14) and conclude that Nmols (14) ≥ 2.
Remark: The 4 by 176 array [A0 , A1 , . . . , A10 ] is an Incomplete Orthogonal Array (IOA) of
type 3^1 1^11 . It has 1 hole of size 3 and 11 holes of size 1. We filled the hole of size 3 with an
OA(4, 3) and the holes of size 1 with OA(4, 1)s to complete the IOA to an OA.

7. Let

~w = [x1 , x2 , x3 , x4 , x5 , x6 , x7 , 18]
~x = [0, 0, 0, 0, 0, 0, 0, 0]
~y = [15, 10, 7, 8, 12, 9, 6, 2]
~z = [1, 2, 4, 6, 7, 8, 10, 5]

and
            ~w  ~z  ~y  ~x
            ~x  ~w  ~z  ~y
   A0 =     ~y  ~x  ~w  ~z
            ~z  ~y  ~x  ~w .
A0 is a 4 by 32 array. For each i ∈ Z19 , let Ai be the array obtained from A0 by adding i to
each entry modulo 19, fixing x1 , x2 , . . . , x7 . Let B be an OA(4, 7) on {x1 , x2 , . . . , x7 } and
let C be the array of constant columns
 
          0 1 . . . 18
          0 1 . . . 18
   C =    0 1 . . . 18
          0 1 . . . 18

Show that [A0 , A1 , . . . , A18 , B, C] is an OA(4, 26) and conclude that Nmols (26) ≥ 2.
Remark: The 4 by 352 array [A0 , A1 , . . . , A18 ] is an Incomplete Orthogonal Array (IOA) of
type 7^1 1^19 . It has 1 hole of size 7 and 19 holes of size 1. We filled the hole of size 7 with an
OA(4, 7) and the holes of size 1 with OA(4, 1)s to complete the IOA to an OA.

8. Develop the base-blocks {0, 7, 10, 11, 23} and {0, 5, 14, 20, 22} to obtain 82 blocks. Show that
these 82 blocks form a PBD of type 2-(41, 5, 1). Delete 3 non-collinear points to obtain a
2-(38, {3, 4, 5}, 1) design in which the blocks of size 3 are a clear set. Now use Corollary 7.33
to show that Nmols (38) ≥ 2.

Part III

Miscellaneous Topics

Chapter 8

Alternating Paths and Matchings

8.1 Introduction
Matchings arise in a variety of situations as assignment problems, in which pairs of items are to be
matched together, for example, if people are to be assigned jobs, if sports teams are to be matched in
a tournament, if tasks are to be assigned to processors in a computer, whenever objects or people
are to be matched on a one-to-one basis.
In a graph G, a matching M is a set of edges such that no two edges of M have a vertex in
common. Figure 8.1 illustrates two matchings M1 and M2 in a graph G.


Figure 8.1: Matchings

Let M have m edges. Then 2m vertices of G are matched by M . We also say that a vertex u
is saturated by M if it is matched, and unsaturated if it is not matched. In general, we want M
to have as many edges as possible. M is a maximum matching in G if no matching of G has more
edges.
For example, in Figure 8.1, |M1 | = 3 and |M2 | = 4. Since |G| = 8, M2 is a maximum matching.
A matching which saturates every vertex is called a perfect matching. Obviously a perfect matching
is always a maximum matching. M1 is not a maximum matching, but it is a maximal matching;
namely M1 cannot be extended by the addition of any edge uv of G. However, there is a way to
build a bigger matching out of M1 . Let P denote the path (u1 , u2 , . . . , u6 ) in Figure 8.1.
Let G have a matching M . An alternating path P with respect to M is any path whose edges
are alternately in M and not in M . If the endpoints of P are unsaturated, then P is an augmenting

path.
So P = (u1 , u2 , . . . , u6 ) is an augmenting path with respect to M1 . Consider the subgraph
formed by the exclusive or operation M = M1 ⊕ E(P ) (also called the symmetric difference,
(M1 − E(P )) ∪ (E(P ) − M1 )). M contains those edges of P which are not in M1 , namely, u1 u2 ,
u3 u4 , and u5 u6 . M is a bigger matching than M1 . Notice that M = M2 .

Lemma 8.1. Let G have a matching M . Let P be an augmenting path with respect to M . Then
M 0 = M ⊕ E(P ) is a matching with one more edge than M .

Proof. Let the endpoints of P be u and v. M 0 has one more edge than M , since u and v are
unsaturated in M , but saturated in M 0 . All other vertices that were saturated in M are still
saturated in M 0 . So M 0 is a matching with one more edge.

The key result in the theory of matchings is the following:

Theorem 8.2. (Berge’s theorem) A matching M in G is maximum if and only if G contains


no augmenting path with respect to M .

Proof. ⇒: If M were a maximum matching and P an augmenting path, then M ⊕ E(P ) would be
a larger matching. So there can be no augmenting path if M is maximum.
⇐: Suppose that G has no augmenting path with respect to M . If M is not maximum, then
pick a maximum matching M 0 . Clearly |M 0 | > |M |. Let H = M ⊕ M 0 . Consider the subgraph
of G that H defines. Each vertex v is incident on at most one M -edge and one M 0 -edge, so that
in H, Deg(v) ≤ 2. Every path in H alternates between M -edges and M 0 -edges. So H consists of
alternating paths and cycles, as illustrated in Figure 8.2.


Figure 8.2: Alternating paths and cycles

Each cycle must clearly have even length, with an equal number of edges of M and M 0 . Since
|M 0 |
> |M |, some path P must have more M 0 -edges than M -edges. It can only begin and end with
an M 0 -edge, so that P is augmenting with respect to M . But we began by assuming that G has
no augmenting path for M . Consequently, M was initially a maximum matching.

This theorem tells us how to find a maximum matching in a graph. We begin with some
matching M . If M is not maximum, there will be an unsaturated vertex u. We then follow
alternating paths from u. If some unsaturated vertex v is reached on an alternating path P , then
P is an augmenting uv-path. Set M ← M ⊕ E(P ), and repeat. If the method that we have chosen
to follow alternating paths is sure to find all such paths, then this technique is guaranteed to find
a maximum matching in G.
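For a graph stored as adjacency lists this search for augmenting paths can be coded directly; the sketch below (a bipartite version with a hypothetical input format, not code from the text) grows a matching by repeatedly following alternating paths from unsaturated vertices of X.

    def max_bipartite_matching(X, adj):
        match_x, match_y = {}, {}                  # the current matching M
        def augment(x, seen):
            for y in adj[x]:                       # follow alternating paths from x
                if y in seen:
                    continue
                seen.add(y)
                if y not in match_y or augment(match_y[y], seen):
                    match_x[x], match_y[y] = y, x  # flip the edges of the augmenting path
                    return True
            return False
        for x in X:
            augment(x, set())
        return match_x

    # prints a matching saturating X, e.g. {'a': 2, 'b': 1, 'c': 3}
    print(max_bipartite_matching(['a', 'b', 'c'], {'a': [1, 2], 'b': [1], 'c': [2, 3]}))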
In bipartite graphs it is slightly easier to follow alternating paths and therefore to find maximum
matchings, because of their special properties. Let G have bipartition (X, Y ). If S ⊆ X, then the
neighbor set of S is N (S), the set of Y -vertices adjacent to S. Sometimes N (S) is called the shadow


Figure 8.3: The neighbor set

set of S. If G has a perfect matching M , then every x ∈ S will be matched to some y ∈ Y so that
|N (S)| ≥ |S|, for every S ⊆ X. P. Hall proved that this necessary condition is also sufficient.
Theorem 8.3. (Hall’s theorem) Let G have bipartition (X, Y ). G has a matching saturating
every x ∈ X if and only if |N (S)| ≥ |S|, for all S ⊆ X.
Proof. We have already discussed the necessity of the conditions. For the converse suppose that
|N (S)| ≥ |S|, for all S ⊆ X. If M does not saturate all of X, pick an unsaturated u ∈ X, and
follow all the alternating paths beginning at u. (See Figure 8.4.)


Figure 8.4: Follow alternating paths

Let S ⊆ X be the set of X-vertices reachable from u on alternating paths, and let T be the set
of Y -vertices reachable. With the exception of u, each vertex x ∈ S is matched to some y ∈ T , for
S was constructed by extending alternating paths from y ∈ T to x ∈ S whenever xy is a matching
edge. Therefore |S| = |T | + 1.
Now there may be other vertices in X − S and Y − T . However, there can be no edges [S, Y − T ],
for such an edge would extend an alternating path to a vertex of Y − T , which is not reachable from
u on an alternating path. So every x ∈ S can only be joined to vertices of T ; that is, T = N (S).
It follows that |S| > |N (S)|, a contradiction. Therefore every vertex of X must be saturated by
M.

Corollary 8.4. Every k-regular bipartite graph has a perfect matching, if k > 0.
Proof. Let G have bipartition (X, Y ). Since G is k-regular, ε = k · |X| = k · |Y |, so that |X| = |Y |.
Pick any S ⊆ X. How many edges have one end in S? Exactly k · |S|. They all have their other
end in N (S). The number of edges with one endpoint in N (S) is k · |N (S)|. So k · |S| ≤ k · |N (S)|,
or |S| ≤ |N (S)|, for all S ⊆ X. Therefore G has a perfect matching.

8.1.1 Exercises
1. Find a formula for the number of perfect matchings of K2n and Kn,n .
2. (Hall’s theorem.) Let A1 , A2 , . . ., An be subsets of a set S. A system of distinct representatives
for the family {A1 , A2 , . . . , An } is a subset {a1 , a2 , . . . , an } of S such that a1 ∈ A1 , a2 ∈ A2 ,
. . ., an ∈ An , and ai ≠ aj , for i ≠ j. Example:
A1 = students taking computer science 421
A2 = students taking physics 374
A3 = students taking botany 464
A4 = students taking philosophy 221
The sets A1 , A2 , A3 , A4 may have many students in common. Find four distinct students
a1 , a2 , a3 , a4 , such that a1 ∈ A1 , a2 ∈ A2 , a3 ∈ A3 , and a4 ∈ A4 to represent each of the four
classes.
Show that {A1 , A2 , . . . , An } has a system of distinct representatives if and only if the union of
every combination of k of the subsets Ai contains at least k elements, for all k = 1, 2, . . . , n.
(Hint: Make a bipartite graph A1 , A2 , . . . , An versus all aj ∈ S, and use Hall’s theorem.)

8.2 Perfect matchings and 1-factorizations


Given any graph G and positive integer k, a k-factor of G is a spanning subgraph that is k-regular.
Thus a perfect matching is a 1-factor. A 2-factor is a union of cycles that covers V (G), as illustrated
in Figure 8.5.

Figure 8.5: 2-factors of the cube

The reason for this terminology is as follows. Associate indeterminates x1 , x2 , . . ., xn with the
n vertices of a graph. An edge connecting vertex i to j can be represented by the expression xi − xj .

Then the entire graph can be represented (up to sign) by the product P (G) = ∏_{ij∈E(G)} (xi − xj ).
For example, if G is the 4-cycle, this product becomes (x1 − x2 )(x2 − x3 )(x3 − x4 )(x4 − x1 ). Since
the number of terms in the product is ε(G), when it is multiplied out, there will be ε x’s in each
term. A 1-factor of P (G), for example, (x1 − x2 )(x3 − x4 ), is a factor that contains each xi exactly
once. This will always correspond to a perfect matching in G, and so on.
With some graphs it is possible to decompose the edge set into perfect matchings. For ex-
ample, if G is the cube, we can write E(G) = M1 ∪ M2 ∪ M3 , where M1 = {12, 34, 67, 85},
M2 = {23, 14, 56, 78}, and M3 = {15, 26, 37, 48}, as shown in Figure 8.6. Each edge of G is in
exactly one of M1 , M2 , or M3 .
In general, a k-factorization of a graph G is a decomposition of E(G) into H1 ∪ H2 ∪ . . . ∪ Hm ,
where each Hi is a k-factor, and each Hi and Hj have no edges in common. The above decomposition
of the cube is a 1-factorization. Therefore we say the cube is 1-factorable.


Figure 8.6: A 1-factorization of the cube

Lemma 8.5. Kn,n is 1-factorable.


Proof. Let (X, Y ) be the bipartition of Kn,n , where X = {x0 , x1 , . . . , xn−1 } and Y = {y0 , y1 , . . . , yn−1 }.
Define M0 = {xi yi | i = 0, 1, . . . , n − 1}, M1 = {xi yi+1 | i = 0, 1, . . . , n − 1}, etc., where the addition
is modulo n. In general Mk = {xi yi+k | i = 0, 1, . . . , n − 1}. Clearly Mj and Mk have no edges in
common, for any j and k, and together M0 , M1 , . . ., Mn−1 contain all of E(G). Thus we have a
1-factorization of Kn,n .
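The factors Mk in this proof are easy to generate; a small sketch (vertex labels x0 , . . . , xn−1 and y0 , . . . , yn−1 as in the proof):

    def one_factorization_knn(n):
        # M_k matches x_i with y_{i+k}, subscripts taken modulo n
        return [[('x%d' % i, 'y%d' % ((i + k) % n)) for i in range(n)] for k in range(n)]

    for M in one_factorization_knn(3):
        print(M)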

Lemma 8.6. K2n is 1-factorable.


Proof. Let V (K2n ) = {0, 1, 2, . . . , 2n − 2} ∪ {∞}. Draw K2n with the vertices 0, 1, . . . , 2n − 2 in
a circle, placing ∞ in the center of the circle. This is illustrated for n = 4 in Figure 8.7. Take
M0 = {(0, ∞), (1, 2n − 2), (2, 2n − 3), . . . , (n − 1, n)} = {(0, ∞)} ∪ {(i, −i) | i = 1, 2, . . . , n − 1},
where the addition is modulo 2n − 1. M0 is illustrated by the thicker lines in Figure 8.7.
We can then “rotate” M0 by adding one to each vertex, M1 = M0 + 1 = {(i + 1, j + 1) | (i, j) ∈
M0 }, where ∞ + 1 = ∞, and addition is modulo 2n − 1. It is easy to see from the diagram that
M0 and M1 have no edges in common. Continuing like this, we have
M0 , M1 , M2 , . . . , M2n−2 ,
where Mk = M0 + k. They form a 1-factorization of K2n .


Figure 8.7: 1-factorizing K2n , where n = 4
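The rotation scheme in this proof can be sketched directly; in the code below the string 'inf' stands for the vertex ∞, which is fixed by the rotation (a sketch, not code from the text).

    def one_factorization_k2n(n):
        m = 2 * n - 1                                   # arithmetic is modulo 2n - 1
        M0 = [(0, 'inf')] + [(i, (-i) % m) for i in range(1, n)]
        def rotate(edge, k):
            return tuple(v if v == 'inf' else (v + k) % m for v in edge)
        return [[rotate(e, k) for e in M0] for k in range(m)]

    for M in one_factorization_k2n(4):                  # the 7 factors of K_8
        print(M)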

We can use a similar technique to find a 2-factorization of K2n+1 .

Lemma 8.7. K2n+1 is 2-factorable.

Proof. Let V (K2n+1 ) = {0, 1, 2, . . . , 2n − 1} ∪ {∞}. As in the previous lemma, draw the graph
with the vertices in a circle, placing ∞ in the center. The first 2-factor is the cycle H0 =
(0, 1, −1, 2, −2, . . . , n − 1, n + 1, n, ∞), where the arithmetic is modulo 2n. This is illustrated in
Figure 8.8, with n = 3. We then rotate the cycle to get H1 , H2 , . . . , Hn−1 , giving a 2-factorization
of K2n+1 .

Figure 8.8: 2-factorizing K2n+1 , where n = 3
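The same kind of sketch produces the 2-factorization of Lemma 8.7; each Hk is returned as the vertex sequence of a Hamilton cycle, with 'inf' again standing for ∞ (an illustration, not code from the text).

    def two_factorization_k2n1(n):
        m = 2 * n                                       # arithmetic is modulo 2n
        H0 = [0] + [v for i in range(1, n) for v in (i, (-i) % m)] + [n, 'inf']
        def rotate(v, k):
            return v if v == 'inf' else (v + k) % m
        return [[rotate(v, k) for v in H0] for k in range(n)]

    for H in two_factorization_k2n1(3):                 # the 3 Hamilton cycles of K_7
        print(H)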

8.2.1 Exercises
1. Find all perfect matchings of the cube. Find all of its 1-factorizations.

2. Find all perfect matchings and 1-factorizations of K4 and K6 .

3. Prove that the Petersen graph has no 1-factorization.

4. Prove that for k > 0 every k-regular bipartite graph is 1-factorable.

5. Describe another 1-factorization of K2n , when n is even, using the fact that Kn,n is a subgraph
of K2n .

6. Let M1 , M2 , . . . , Mk and M1′ , M2′ , . . . , Mk′ be two 1-factorizations of a k-regular graph G. The
two factorizations are isomorphic if there is an automorphism θ of G such that for each i,
θ(Mi ) = Mj′ , for some j; that is, θ induces a mapping of M1 , M2 , . . . , Mk onto M1′ , M2′ , . . . , Mk′ .
How many non-isomorphic 1-factorizations are there of K4 and K6 ?

7. How many non-isomorphic 1-factorizations are there of the cube?

8.3 Tutte’s theorem


Tutte’s theorem gives a necessary and sufficient condition for any graph to have a perfect matching.
Let S ⊆ V (G). In general, G−S may have several connected components. Write odd(G−S) for
the number of components with an odd number of vertices. The following proof of Tutte’s theorem
is due to Lovász.
Theorem 8.8. (Tutte’s theorem) A graph G has a perfect matching if and only if odd(G − S) ≤
|S|, for every subset S ⊆ V (G).
Proof. ⇒: Suppose that G has a perfect matching M and pick any S ⊆ V (G). Let G1 , G2 , . . . , Gm
be the odd components of G − S. Each Gi contains at least one vertex matched by M to a vertex
of S. Therefore odd(G − S) = m ≤ |S|. See Figure 8.9.


Figure 8.9: Odd and even components of G − S

⇐: Suppose that odd(G − S) = m ≤ |S|, for every S ⊆ V (G). Taking S = ∅ gives odd(G) = 0,
so n = |G| is even. The proof is by reverse induction on ε(G), for any given n. If G is the complete
graph, it is clear that G has a perfect matching, so the result holds when ε = n(n − 1)/2. Let G be a graph
with the largest ε such that G has no perfect matching. If uv ∉ E(G), then because G + uv has
more edges than G, it must be that G + uv has a perfect matching. Let S be the set of all vertices
of G of degree n − 1, and let G0 be any connected component of G − S. If G0 is not a complete
graph, then it contains three vertices x, y, z such that x is adjacent to y, y is adjacent to z, but x is
not adjacent to z. Since y ∉ S, deg(y) < n − 1, so there is a vertex w that is not adjacent to y.
Let M1 be a perfect matching of G + xz and let M2 be a perfect matching of G + yw, as shown
in Figures 8.10 and 8.11. Then xz ∈ M1 and yw ∈ M2 . Let H = M1 ⊕ M2 . H consists of one or
more alternating cycles in G. Let Cxz be the cycle of H containing xz and let Cyw be the cycle
containing yw.


Figure 8.10: H = M1 ⊕ M2 , case 1

Case 1. Cxz ≠ Cyw .


Form a new matching M by taking M2 -edges of Cxz , M1 -edges of Cyw , and M1 edges else-
where. Then M is a perfect matching of G, a contradiction.

Case 2. Cxz = Cyw = C.


C can be traversed in two possible directions. Beginning with the vertices y, w, we either come
to x first or z first. Suppose it is z. Form a new matching M by taking M1 -edges between w
and z, M2 -edges between x and y, and the edge yz. Then take M1 edges elsewhere. Again
M is a perfect matching of G, a contradiction.


Figure 8.11: H = M1 ⊕ M2 , case 2

We conclude that every component G0 of G − S must be a complete graph. But then we can
easily construct a perfect matching of G as follows. Each even component of G − S is a complete
graph, so it has a perfect matching. Every odd component is also a complete graph, so it has a near
perfect matching, namely, one vertex is not matched. This vertex can be matched to a vertex of
S, since odd(G − S) ≤ |S|. The remaining vertices of S form a complete subgraph, since they have
degree n − 1, so they also have a perfect matching. It follows that every G satisfying the condition
of the theorem has a perfect matching.

Tutte’s theorem is a powerful criterion for the existence of a perfect matching. For example,
the following graph has no perfect matching, since G − v has three odd components.

Figure 8.12: A 3-regular graph with no perfect matching

We can use Tutte’s theorem to prove that every 3-regular graph G without cut-edges has a
perfect matching. Let S ⊆ V (G) be any subset of the vertices. Let G1 , G2 , . . . , Gk be the odd
components of G − S. Let mi be the number of edges connecting Gi to S. Then mi > 1, since G
has no cut-edge. Since Σv∈Gi Deg(v) = 2ε(Gi ) + mi = 3|Gi | = an odd number, we conclude that
mi is odd. Therefore mi ≥ 3, for each i. But Σv∈S Deg(v) = 3|S| ≥ Σi mi , since all of the mi
edges have one endpoint in S. It follows that 3|S| ≥ 3k, or |S| ≥ odd(G − S), for all S ⊆ V (G).
Therefore G has a perfect matching M . G also has a 2-factor, since G − M has degree two.
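On small graphs Tutte's condition can be verified by brute force. The following sketch (exponential in the number of vertices, so only for illustration; the graph is a hypothetical dictionary from vertices to neighbor sets) checks odd(G − S) ≤ |S| over all subsets S.

    from itertools import combinations

    def odd_components(graph, S):
        rest = set(graph) - set(S)
        seen, odd = set(), 0
        for v in rest:
            if v in seen:
                continue
            stack, size = [v], 0
            while stack:                      # explore one component of G - S
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                size += 1
                stack.extend(w for w in graph[u] if w in rest)
            odd += size % 2
        return odd

    def tutte_condition(graph):
        V = list(graph)
        return all(odd_components(graph, set(S)) <= len(S)
                   for r in range(len(V) + 1) for S in combinations(V, r))

    K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
    print(tutte_condition(K4))    # True: K_4 has a perfect matching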

8.3.1 Exercises
1. For each integer k > 1, find a k-regular graph with no perfect matching.

2. A near perfect matching in a graph G is a matching which saturates all vertices of G but one.
A near 1-factorization is a decomposition of E(G) into near perfect matchings. Prove that
K2n+1 has a near 1-factorization.

3. Find a condition similar to Tutte’s theorem for a graph to have a near perfect matching.

8.4 The 4-color problem
Given a geographic map drawn in the plane, how many colors are needed such that the map can
be colored so that any two regions sharing a common border have different colors? In 1852, it
was conjectured by Francis Guthrie that four colors suffice. This simple problem turned out to
be very difficult to solve. Several flawed “proofs” were presented. Much of the development of
graph theory originated in attempts to solve this conjecture. See Aigner [?] for a development of
graph theory based on the 4-color problem. In 1976, Appel and Haken announced a proof of the
conjecture. Their proof was based on the results of a computer program that had to be guaranteed
bug-free. A second computer proof by Allaire [?] appeared in 1977. Each of these approaches
relied on showing that any planar graph contains one of a number of configurations, and that for
each configuration, a proper coloring of a smaller (reduced) graph can be extended to a proper
coloring of the initial graph. The computer programs generated all irreducible configurations, and
colored them. In the Appel-Haken proof, there were approximately 1800 irreducible configurations.
The uncertainty was whether all irreducible configurations had indeed been correctly generated.
In 1995, Robertson, Sanders, Seymour, and Thomas [?] presented another proof, also based
on a computer program, but considerably simpler than the original, requiring only 633 irreducible
configurations.
In this section, we present the main ideas of Kempe’s 1879 “proof” of the 4-color theorem.
Given a geographic map drawn in the plane, one can construct a dual graph, by placing a
vertex in the interior of each region, and joining vertices by edges if they correspond to adjacent
regions. Coloring the regions of the map is then equivalent to coloring the vertices of the dual, so
that adjacent vertices are of different colors. Consequently, we shall be concerned with coloring the
vertices of a planar graph.

Theorem 8.9. (4-Color theorem) Every planar graph can be properly colored with four colors.

If G is any simple planar graph, then it is always possible to extend G to a simple triangulation,
by adding diagonal edges in non-triangular faces. Therefore, if we can prove that all simple planar
triangulations are 4-colorable, the result will be true for all planar graphs. Hence we assume that
we are given a planar triangulation Gn on n vertices. We attempt to prove the 4-color theorem
(Theorem 8.9) by induction on n.
The colors can be chosen as the numbers {1, 2, 3, 4}. Given a coloring of G, then the subgraph
induced by any two colors i and j is bipartite. We denote it by K ij .
Definition: Given any 4-coloring of a planar graph G, each connected component of K ij is called
a Kempe component. The component containing a vertex x is denoted K ij (x). A path in K ij
between vertices u and v is called a Kempe chain.

Notice that if we interchange the colors i and j in any Kempe component, we obtain another
coloring of G.
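The color interchange just described is straightforward to code; a sketch (the graph given as adjacency lists and the 4-coloring as a dictionary, both hypothetical):

    def kempe_swap(graph, color, x, i, j):
        # find the Kempe component K^{ij}(x): vertices of color i or j reachable from x
        comp, stack = set(), [x]
        while stack:
            u = stack.pop()
            if u in comp or color[u] not in (i, j):
                continue
            comp.add(u)
            stack.extend(graph[u])
        # interchange the colors i and j on the component; the coloring stays proper
        for u in comp:
            color[u] = j if color[u] == i else i
        return color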
Now let Gn be a simple triangulation on n vertices. If n = 4, then Gn = K4 . It is clear that
Theorem 8.9 is true in this case. Assume that n > 4. By Corollary 2.6, we know that Gn has a
vertex of degree three, four, or five. Let u be such a vertex. We reduce Gn to a simple planar
triangulation Gn−1 by deleting u and adding up to two diagonals in the resulting face. Thus we
may assume as an induction hypothesis, that Gn−1 has a 4-coloring. There are three cases.

Case 1. Deg(u) = 3.

Let the three vertices adjacent to u be (x, y, z). They all have different colors. Therefore
there is a fourth color available for u, giving a coloring of Gn .

Case 2. Deg(u) = 4.
Let the four vertices adjacent to u in Gn be (w, x, y, z), with a diagonal wy in Gn−1 . It is
clear that w, x, and y have different colors. If x and z have the same color, then a fourth
color is available for u. Otherwise, let w, x, y, z be colored 1, 2, 3, 4, respectively. There may
be a Kempe chain from x to z. If there is no Kempe chain, interchange colors in the Kempe
component K 24 (x), so that x and z now both have color 4. If there is a Kempe chain from x
to z, there can be no Kempe chain from w to y, for it would have to intersect the xz-Kempe
chain. Interchange colors in K 13 (w), so that w and y now both have color 3. In each case
there is a fourth color available for u.


Figure 8.13: Kempe chains

Case 3. Deg(u) = 5.
Let the five vertices adjacent to u in Gn be (v, w, x, y, z), with diagonals vx and vy in Gn−1 .
It is clear that v, x, and y have different colors. Since we have a 4-coloring of Gn−1 , the
pentagon (v, w, x, y, z) is colored in either 3 or 4 colors. If it is colored in three colors, there is
a fourth color available for u. If it is colored in four colors, then without loss of generality, we
can take these colors to be (1, 2, 3, 4, 2), respectively. If K 13 (v) contains no vx-Kempe chain,
then we can interchange colors in K 13 (v), so that v and x are now both colored 3. Color
1 is then available for u. If K 14 (v) contains no vy-Kempe chain, then we can interchange
colors in K 14 (v), so that v and y are now both colored 4. Color 1 is again available for u.
Otherwise there is a Kempe chain Pvx connecting v to x and a Kempe chain Pvy connecting
v to y. It follows that K 24 (w) contains no wy-Kempe chain, as it would have to intersect Pvx
in K 13 (v). Similarly, K 23 (z) contains no xz-Kempe chain, as it would have to intersect Pvy
in K 14 (v). If Pvx and Pvy intersect only in vertex v, then we can interchange colors in both
K 24 (w) and K 23 (z), thereby giving w color 4 and z color 3. This makes color 2 available for
u. The difficulty is that Pvx and Pvy can intersect in several vertices. Interchanging colors
in K 24 (w) can affect the other Kempe chains, as shown in Figure 8.14, where the pentagon
(v, w, x, y, z) is drawn as the outer face.

Although this attempted proof of Theorem 8.9 fails at this point, we can use these same ideas
to prove the following.

Theorem 8.10. (5-Color theorem) Any planar graph can be colored in five colors.


Figure 8.14: Intersecting Kempe chains

Proof. See Exercise 1 in Section 8.4.1.

Appel and Haken’s proof of the 4-color theorem is based on the important concept of reducibility.
Given a graph G, a reducible configuration H is a subgraph of G with the property that H can be
reduced to a smaller subgraph H 0 , such that a 4-coloring of H 0 can be extended to all of H and
G. If every planar graph contained a reducible configuration, then every planar graph could be
4-colored. Appel and Haken’s proof was essentially a computer program to construct all irreducible
configurations, and to show that they could be 4-colored. The difficulty with this approach is being
certain that the computer program is correctly constructing all irreducible configurations. The
reader is referred to Saaty and Kainen [?] or Woodall and Wilson [?] for more information
on reducibility.

8.4.1 Exercises
1. Prove Theorem 8.10, the 5-color theorem.

2. Let G be a planar triangulation with a separating 3-cycle (u, v, w). Let H and K be the two
connected subgraphs of G that intersect in exactly (u, v, w), such that G = H ∪ K. Show
how to construct a 4-coloring of G from 4-colorings of H and K.

3. Let G be a planar triangulation with a separating 4-cycle (u, v, w, x). Let H and K be the
two connected subgraphs of G that intersect in exactly (u, v, w, x), such that G = H ∪ K.
Show how to construct a 4-coloring of G from 4-colorings of the triangulations H + uw and
K + uw. Hint: u, v, and w can be assumed to have the same colors in H and K. If x
is colored differently in H and K, look for an xv-Kempe chain, try interchanging colors in
K ij (x), or try coloring H + vx and K + vx.

4. All lakes are blue. Usually all bodies of water are colored blue on a map. Construct a planar
graph with two non-adjacent vertices that must be blue, such that the graph cannot be colored
in four colors subject to this requirement.

Index

AG , 33 adjacency matrix, 33
Cn , 8 adjacent, 3
E(G), 5 algebraic multiplicity, 33
F (G), 22 alternating path, 87
G[U ], 5 articulation point, 11
Jn , 35 augmenting path, 88
L(G), 39 automorphism, 6
Pn , 8 automorphism group, 6
T (G), 13 average degree, 4
V (G), 5
XG , 39 Berge’s theorem, 88
∆(G), 5 bipartite, 16
Ext(J), 21 blocks, 28, 72, 77
Int(J), 21 Bose Construction, 54
STS, 52 bridge, 11
TD(k, n), 71 characteristic polynomial, 33
Alg(G), 36 Chekad and Ernie problem, 13
AvgDeg(G), 4 circuit, 8
χG (λ), 33 circulant graph, 51
Deg(x), 3 clear set, 78
δ(G), 4 closed walk, 8
Diam(G), 9 column magic, 61
Dist(x, y), 9 commutative Latin square, 54
Fq , 69 complete, 16
(G), 5 complete graph, 3
ext(J), 21 connectivity, 11
int(J), 21 contract, 27
κ(G), 11 Cube, 26
κG, 43 cube, 9
λ(G), 11 cut vertex, 11
µA (x), 37 cycle, 8
~1, 35
g(G), 9 decomposition, 49
k-factor, 90 degree, 3
k-factorization, 91 degree sequence, 7
diameter, 9
acyclic, 14 distance, 9
adjacency algebra, 36 Dodecahedron, 26

doubling construction, 52 lines, 73

edge connected, 11 magic square, 59


edge connectivity, 11 magic sum, 59
edges, 3 matching, 87
embeddable, 21 maximal matching, 87
empty graph, 5 maximum degree, 5
Euler trail, 17 maximum matching, 87
Eulerian, 17 minimum degree, 4
minor, 27
F-factorization, 51 MOLS, 69
factor, 49 multiply construction, 53
factorization, 49 mutual orthogonal Latin squares, 69
field, 69
finite field, 69 near 1-factorization, 95
five-color theorem, 97 near perfect matching, 95
forest, 14 neighbor set, 88
four-color theorem, 96 number of edges per vertex, 5
girth, 9 Octahedron, 26
graph, 3 order, 74
groups, 72 orthogonal array, 71
orthogonal Latin squares, 61, 69
half-idempotent Latin Square, 55
Hall’s theorem, 89 pairwise balanced design, 77
homeomorphic, 27 parallel class, 78
partite, 16
Icosahedron, 26
path, 8
idempotent Latin square, 54
perfect matching, 87
incidence matrix, 39, 41
Petersen graph, 9
incident, 3
planar, 21
Incomplete Orthogonal Array, 83, 84
plane, 21
induced, 5
platonic graph, 25
induced subgraph, 5
platonic solid, 24
isomorphic graphs, 6
pointless, 5
isomorphism, 6
points, 52, 73, 77
Jordan curve, 21 principle minor, 34
Jordan curve theorem, 21 projective plane, 73

k-connected, 11 r-factor, 49
k-matching, 50 r-factorization, 49
Kempe, 96 r-partite, 16
Kempe chain, 96 reducibility, 98
Kuratowski’s theorem, 29 reducible configuration, 98
row magic, 61
Latin square, 54
Linear space, 77 saturated, 87

separable, 28
separates, 11
separating set, 11
shadow set, 89
spanning subgraph, 5
spectrum, 33
star, 16
Steiner pairwise balanced design, 77
Steiner triple system, 52
stereographic projection, 24
subdivided, 27
subdivision, 27
subgraph, 5
symmetric difference, 88
system of distinct representatives, 90

Tetrahedron, 26
topologically equivalent, 27
trail, 8
transversal design, 71
transverse, 72
tree, 14
triples, 52
Tutte’s theorem, 93

unsaturated, 87

vector space construction, 53


vertex connectivity, 11
vertices, 3

walk, 7
