
Graph Theory Notes

Tony Mendes ([email protected])

Version Date: July 3, 2022

Abstract

These notes introduce graph theory at a level appropriate for an undergraduate mathematics course.

Contents

1 First definitions 2

2 Vertex colorings 8

3 Trees 15

4 Planarity 20

5 Eulerian and Hamiltonian paths 26

6 Connectivity 30

7 Matchings 37

8 The adjacency matrix 45

9 The Laplacian 56

1 First definitions

This section contains basic graph theory definitions.

Definition. A graph G = (V, E) is an ordered pair where V is a finite set of elements


called vertices and E is a subset of {{u, v} : u, v ∈ V}. The singular form of the plural
“vertices” is vertex and elements in E are edges.

Graphs can be drawn by writing down the vertices and then using lines that connect vertices to indicate edges. There are many ways to draw the same graph. Artistic license can be taken with how to arrange the vertices and edges so as to best illustrate properties of the graph.

Example 1. If V = {1, 2, 3, 4} and E = {{1, 2}, {1, 4}, {2, 4}}, then G = (V, E) is a
graph. This graph is shown twice below, drawn in two different ways:
[Figure: two different drawings of the graph G.]
Example 2. A graph G can be defined by letting V be the set of the 48 contiguous
US states and by letting E = {{A, B} : states A and B share a border}.
[Figure: the graph of the 48 contiguous US states, drawn with an edge between each pair of bordering states.]

Graphs are an abstraction of a relationship between elements in a set. Other


“real world” examples of graphs are:


vertex set                           edge set relation
social media accounts                friendship
atoms in a molecule                  chemical bonds
computer folders in a file system    containment
The creative reader can think of an endless number of examples of graphs!

Definition. The complete graph Kn is the graph that has vertices {1, . . . , n} and that
has all possible edges. Below are K1 , . . . , K6 :

[Figure: the complete graphs K1 , K2 , K3 , K4 , K5 , and K6 .]
Theorem 3. The complete graph Kn has \binom{n}{2} = n(n − 1)/2 edges.

Proof. The edges in Kn are sets of two elements of the form {i, j} with 1 ≤ i < j ≤ n.
There are n ways to select i from 1, . . . , n and, for each choice of i, there are n − 1
ways to select j from the remaining numbers. Therefore there are n(n − 1) ways to
select ordered pairs (i, j). Since this process counts (i, j) and (j, i) as different, we
divide by 2 to count each set of the form {i, j} with 1 ≤ i < j ≤ n once. This shows
the number of edges is indeed n(n − 1)/2.
Theorem 4. There are 2^{\binom{n}{2}} possible graphs with n vertices.

Proof. For each one of the \binom{n}{2} possible edges in a graph on n vertices, make one of two choices: either include the edge in the graph or do not include the edge.

Definition. The complement of the graph G, denoted Gc , is the graph with the same
vertex set as G but with edge set {{u, v} : {u, v} is not an edge in G}.

Example 5. These two graphs are complements of one another:


[Figure: a graph on five vertices and its complement.]
It follows that if G has n vertices, then G and Gc have a combined \binom{n}{2} edges since together they have every possible edge in a complete graph. It also can be seen that (Gc )c = G.

Definition. Edges are incident if they share a vertex. Vertices u, v are adjacent in a
graph G if {u, v} is an edge. The degree of a vertex v is the number of vertices adjacent
to v. The degree sequence of a graph G is a list of the degrees of the vertices in G in
weakly decreasing order.

Example 6. The degree sequences for the two different graphs shown below are both equal to (4, 3, 3, 2, 2, 2).

[Figure: two graphs on the vertices 1, . . . , 6, each with degree sequence (4, 3, 3, 2, 2, 2).]

Can you find a graph with degree sequence (3, 3, 3, 3, 1, 1)?


Theorem 7 (Euler’s handshaking lemma). If G has E edges and degree sequence
(d1 , . . . , dn ), then d1 + · · · + dn = 2E.

Proof. Let {u, v} be an edge in G and let du and dv be the degrees of vertices u and
v. The edge {u, v} is counted twice in d1 + · · · + dn ; once with the du term and once
with the dv term. Thus the sum d1 + · · · + dn counts each edge exactly twice.
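Euler's handshaking lemma is easy to verify computationally. Below is a minimal Python sketch (the helper name degree_sequence is our own) that computes the degree sequence of the graph from Example 1 and checks that the degrees sum to twice the number of edges.

```python
from collections import Counter

def degree_sequence(vertices, edges):
    """Return the degrees of the graph in weakly decreasing order."""
    degree = Counter()
    for e in edges:
        u, v = tuple(e)
        degree[u] += 1
        degree[v] += 1
    return sorted((degree[x] for x in vertices), reverse=True)

# The graph from Example 1: V = {1, 2, 3, 4} and E = {{1, 2}, {1, 4}, {2, 4}}.
V = {1, 2, 3, 4}
E = [{1, 2}, {1, 4}, {2, 4}]

seq = degree_sequence(V, E)
print(seq)                       # [2, 2, 2, 0]
assert sum(seq) == 2 * len(E)    # the degrees sum to twice the number of edges
```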

Definition. A path from vertex u to vertex v in G is a sequence of distinct vertices


u = x1 , x2 , . . . , xn = v in G such that xi and xi+1 are adjacent for i = 1, . . . , n − 1.
Example 8. The sequence 1, 2, 5, 8, 10 is a path from 1 to 10 in this graph:

[Figure: a graph on the vertices 1, . . . , 10.]

Definition. If e is an edge in G, we let G − e be the graph with e removed. If v is a


vertex in G, we let G − v be the graph with v and any edges containing v removed. A
subgraph of G is any graph formed by removing a set of vertices from G.
Example 9. The graph K5 − {1, 2} is equal to

[Figure: the graph K5 with the edge {1, 2} removed.]
and K5 − 5 = K4 . More generally, Km is a subgraph of Kn whenever m ≤ n.

Definition. A graph G is connected if there is a path from u to v for all vertices u and
v in G. A component of G is a maximal connected subgraph of G.

The graph in Example 8 is connected because there is a path between every possible pair of vertices. Any connected graph has only one component.

Example 10. The following graph is not connected because there is no path from 1 to 7. It has two components.

[Figure: a graph on the vertices 1, . . . , 9 with two components.]
Theorem 11. If G has n vertices and more than \binom{n-1}{2} edges, then G is connected.

Proof. Suppose G has a component G1 with k vertices, where 1 ≤ k ≤ n. The graph G1 has at most \binom{k}{2} edges and the part of G that does not contain G1 has at most \binom{n-k}{2} edges, so the number of edges in G is at most

\binom{k}{2} + \binom{n-k}{2} = \frac{k(k-1)}{2} + \frac{(n-k)(n-k-1)}{2} = k^2 - nk + \frac{n^2 - n}{2}.

This is a second degree polynomial (a parabola) in the variable k with a positive leading coefficient, and so its maximum values occur at the endpoints k = 1 or k = n. When k = 1 or k = n − 1, the above expression simplifies to \binom{n-1}{2}. Since G has more than this many edges, the only possibility is that k = n, meaning that G is connected.
Definition. Two graphs G1 = (V1 , E1 ) and G2 = (V2 , E2 ) are isomorphic if there is
a bijection f : V1 → V2 such that u, v are adjacent in G1 if and only if f(u), f(v) are
adjacent in G2 . This bijection f is an isomorphism.
Example 12. These two graphs are isomorphic:

[Figure: two drawings of the same 8-vertex graph, one labeled 1, . . . , 8 and the other labeled a, . . . , h.]

An isomorphism f could be f(1) = a, f(2) = b, . . . , f(8) = h.
Example 13. These two graphs are isomorphic:

[Figure: two drawings of the same 6-vertex graph with different labelings.]

An isomorphism f could be f(1) = 1, f(2) = 3, f(3) = 5, f(4) = 2, f(5) = 4, f(6) = 6.
Isomorphic graphs are the same with the exception that the labels on the graphs
are different. If G1 and G2 are isomorphic, then the graphs G1 and G2 have all of the
same properties. In particular, isomorphic graphs have the same degree sequence.
However, two graphs having the same degree sequence does not mean that they
are isomorphic. For example, the two graphs in Example 6 have the same degree
sequence but are not isomorphic because the first graph has adjacent degree 2 ver-
tices but the second graph does not. There are no known efficient algorithms that
can quickly determine if two graphs are isomorphic.
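Although no efficient algorithm is known, two small graphs can be compared by brute force: simply try every bijection between the vertex sets. The Python sketch below (the function name is our own, and the graphs used are two relabelings of the triangle-plus-isolated-vertex graph from Example 1) illustrates the idea; it is only practical for very small graphs.

```python
from itertools import permutations

def are_isomorphic(V1, E1, V2, E2):
    """Brute-force isomorphism test: try every bijection f from V1 to V2 and check
    that it maps edges to edges and non-edges to non-edges.  Only for small graphs."""
    V1, V2 = list(V1), list(V2)
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False
    edges1 = {frozenset(e) for e in E1}
    edges2 = {frozenset(e) for e in E2}
    for image in permutations(V2):
        f = dict(zip(V1, image))
        if all((frozenset({f[u], f[v]}) in edges2) == (frozenset({u, v}) in edges1)
               for i, u in enumerate(V1) for v in V1[i + 1:]):
            return True
    return False

# Two relabelings of the graph in Example 1 (a triangle together with an isolated vertex).
E_first = [{1, 2}, {1, 4}, {2, 4}]
E_second = [{2, 3}, {2, 4}, {3, 4}]
print(are_isomorphic({1, 2, 3, 4}, E_first, {1, 2, 3, 4}, E_second))   # True
```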

Definition. The path graph Pn is the graph that has vertices {1, . . . , n} and edges {{1, 2}, {2, 3}, . . . , {n − 1, n}}. Below is P10 :

1 2 3 4 5 6 7 8 9 10

Theorem 14. Let G be a graph with at least 2 vertices. If both G and Gc are connected,
then G has a subgraph isomorphic to P4 .

Proof. Suppose the theorem is not true and let G be a graph with the minimum
number of vertices such that G is connected, Gc is connected, and G does not have
a subgraph isomorphic to P4 .
Let u be a vertex in G. Then G − u does not have a subgraph isomorphic to P4
and, since G was the least counterexample to the theorem, either G − u or (G − u)c
is not connected. Without loss of generality suppose G − u is not connected.
Since Gc is connected, u cannot be adjacent to every other vertex in G. Thus
there is a v such that u and v are not adjacent in G. Take w such that v and w are
not adjacent in G − u, possible since G − u is not connected. Let v, x1 , . . . , xk , w be a
path of minimum length from v to w in G. It follows that u must be along this path,
say that u = xi . A depiction of this path is here:

[Figure: the path from v to w through u = xi , with component 1 and component 2 of G − u on either side.]

We claim that the subgraph of G containing xi−2 , xi−1 , u, xi+1 is isomorphic to P4 .


Indeed, since v and u are not adjacent it cannot be the case that v = xi−1 and so
this path is actually length 4. Furthermore, neither xi−2 nor xi−1 can be adjacent to
xi+1 because otherwise G − u would be connected. Lastly, xi−2 cannot be adjacent
to u because we chose our path to have minimum possible length.
We have shown that G has a subgraph isomorphic to P4 , our contradiction.

Definition. An unlabeled graph with n vertices is a set of the form

{H = ({1, . . . , n}, E) : H and G are isomorphic}

for some graph G.


Using fancy language, an unlabeled graph is an equivalence class under the
equivalence relation given by graph isomorphism. We will use the term graph to
mean both labeled and unlabeled graphs. The type of graph should be clear from
context. We can draw an unlabeled graph in the same manner as a labeled graph
but without the labels.

Example 15. An example of an unlabeled graph is

[Figure: a set of three labeled graphs on the vertices 1, 2, 3, 4, all isomorphic to one another.]
Example 16. These two unlabeled graphs are the same

[Figure: two drawings of an unlabeled graph.]

because the unlabeled vertices in both graphs can be labeled to create the same labeled graph:

[Figure: a common labeling of both drawings.]
Example 17. There are 11 possible unlabeled graphs with 4 vertices:

Unlabeled graphs are drawn when we are studying a property of graphs for
which the labeling does not matter. For example, the length of the longest path
in a graph does not depend on the labels, so when studying properties of longest
paths we may draw unlabeled versions of graphs.
2 Vertex colorings

Definition. An r-coloring of G = (V, E) is a function f : V → {1, . . . , r}.

Example 18. A 2-coloring of the Petersen graph is shown below, where the two colors 1 and 2 are drawn as two different shades.

[Figure: the Petersen graph with each vertex given one of two colors.]

Definition. The cycle graph Cn is the path graph Pn with the additional edge between vertices 1 and n. Below is C8 :

[Figure: the cycle graph C8 .]

Theorem 19 (Fermat's little theorem). If p is a prime number and r a positive integer, then r^p − r is divisible by p.

Proof. There are r^p − r possible r-colorings of the cycle graph Cp that use at least two colors, because there are r^p total colorings (making one of r choices of color for each of the p vertices) and r of these use a single color. Sort this collection of colored graphs into groups by rotational symmetry. For example, when p = 5 and r = 2, we have


Since p is prime, each collection of graphs grouped by rotational symmetry will have exactly p elements. Therefore r^p − r is divisible by p.
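The counting in this proof can be checked directly for small values of p and r. The Python sketch below (helper names are our own) groups the non-constant r-colorings of Cp by rotation and confirms that, when p is prime, every group has exactly p elements.

```python
from itertools import product

def rotation_classes(p, r):
    """Group the non-constant r-colorings of the cycle C_p by rotational symmetry."""
    classes = {}
    for coloring in product(range(r), repeat=p):
        if len(set(coloring)) == 1:
            continue                                     # skip the r constant colorings
        rotations = {coloring[i:] + coloring[:i] for i in range(p)}
        classes[min(rotations)] = rotations              # key each class by a canonical rotation
    return classes

p, r = 5, 2
classes = rotation_classes(p, r)
assert all(len(c) == p for c in classes.values())        # p prime: every class has p elements
assert len(classes) * p == r ** p - r                    # so p divides r^p - r
print(len(classes))                                      # (2^5 - 2)/5 = 6 classes
```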

Definition. A coloring f is proper if f(u) ̸= f(v) for all adjacent vertices u and v.

Example 20. A proper 3-coloring of the graph on the left is shown on the right, where the three colors 1, 2, and 3 are drawn as three different shades.

[Figure: a 6-vertex graph and a proper 3-coloring of it.]

This is a proper coloring because adjacent vertices never have the same color.

Theorem 21. There are r(r − 1) · · · (r − n + 1) proper r-colorings of Kn .

Proof. There are r choices on the color of vertex 1. No matter the choice of color
for vertex 1 there are r − 1 ways to color vertex 2. No matter the previous choices
of colors, there are r − 2 ways to color vertex 3, and so on, until we find there are
r − (n − 1) ways to color vertex n. Thus there are r(r − 1) · · · (r − n + 1) proper
r-colorings of Kn .

Definition. The chromatic number of a graph G, denoted χ(G), is the least r for
which there exists a proper r-coloring of G.

First observations about the chromatic number are that χ(Kn ) = n, χ(G) ≥
χ(G − v) for any vertex v, χ(G) ≥ χ(G − e) for any edge e, and if G is disconnected
with components G1 , . . . , Gk , then χ(G) = max{χ(G1 ), . . . , χ(Gk )}.

Example 22. We have χ(C2n ) = 2 and χ(C2n+1 ) = 3 for all n ≥ 1. In the case of
an even cycle we can color vertices in an alternating pattern, only using 2 colors. In
the case of an odd cycle we can again alternate colors but will need to use a third
color, see the case of C11 below as an example:

Theorem 23. If ∆ is the maximum degree of a vertex in G, then χ(G) ≤ ∆ + 1.



Proof. We proceed by induction on the number of vertices in G, with the assertion clearly true if G has only one vertex.
Let v be a vertex of degree ∆. By the induction hypothesis there is a proper coloring of G − v that uses at most ∆ + 1 colors. Use this coloring to create a proper coloring of G that uses at most ∆ + 1 colors by coloring vertex v a different color than the vertices adjacent to v; this is possible because v has at most ∆ neighbors.
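The proof of Theorem 23 is really a greedy algorithm: color the vertices one at a time, always using the least color not already appearing on a neighbor. Here is a minimal Python sketch of this idea (function and variable names are our own); the cycle C6 is used as a small test case.

```python
def greedy_coloring(vertices, edges, order=None):
    """Color the vertices in the given order, using the least color not on a neighbor.
    This uses at most Delta + 1 colors, where Delta is the maximum degree."""
    neighbors = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        neighbors[u].add(v)
        neighbors[v].add(u)
    coloring = {}
    for v in (order if order is not None else list(vertices)):
        used = {coloring[w] for w in neighbors[v] if w in coloring}
        c = 1
        while c in used:
            c += 1                      # the least available color
        coloring[v] = c
    return coloring

# The cycle C6 has Delta = 2, so at most 3 colors are used (here only 2 are needed).
V = range(1, 7)
E = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {6, 1}]
print(greedy_coloring(V, E))
```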

It was relatively easy to find the upper bound for χ(G) in Theorem 23. We im-
prove this bound for graphs other than Kn and C2n+1 in our next theorem. The proof
is significantly more involved than those we have encountered so far.
Theorem 24 (Brooks). Let ∆ be the maximum degree of a vertex in G. If G is a con-
nected graph that is not a complete graph nor an odd cycle, then χ(G) ≤ ∆.

Proof. The only connected graph with ∆ = 0 is K1 , the only connected graph with
∆ = 1 is K2 , and the only connected graphs with ∆ = 2 are of the form Cn or Cn − e
for an edge e. In all of these cases the theorem is quickly seen to be true, so we
assume from here on that ∆ ≥ 3.
We proceed by induction on the number of vertices in G. In the base step, when G has 4 vertices, is not equal to K4 , and has ∆ ≥ 3, it follows that G is one of these graphs, shown colored with ∆ colors:

Case 1: There is a vertex v such that G − v is not connected.


Let G1 , . . . , Gk be the components of G − v and let Gi + v be the subgraph of
G containing v and Gi . By induction, the graphs G1 + v, . . . , Gk + v can all be
properly colored using at most ∆ colors. Without loss of generality, select
proper colorings of these graphs such that v is always the same color. This
gives us a proper coloring of G that uses at most ∆ colors, proving that χ(G) ≤
∆ in this case.
Case 2: There are nonadjacent vertices v and w such that G − v − w is not connected.
Let G1 be a component of G − v − w and let G2 be the graph created by deleting the vertices in G1 from G − v − w.

[Figure: the vertices v and w, each adjacent to both G1 and G2 .]

We can assume there is at least one edge from v to G1 and at least one edge
from v to G2 because otherwise we would be back in Case 1. Similarly there
is at least one edge from w to G1 and from w to G2 .

The plan is to properly color the vertices in the subgraphs G1 + v + w and G2 +


v+w with at most ∆ colors by induction and then to combine these colorings,
producing a proper coloring of G. This plan will succeed when both subgraph
colorings color v and w the same color or when both subgraph colorings color
v and w different colors, in which case we can re-color if necessary to ensure
that v and w are colors 1 and 2 respectively in both colorings.
Let H1 be the graph G1 + v + w with the added edge {v, w} and let H2 be the
graph G2 + v + w with the added edge {v, w}. The maximum degree in H1 or
H2 is still less than or equal to ∆ because of our assumption that there are
edges from v to G1 and G2 and edges from w to G1 and G2 .
Suppose one of these graphs, say H1 , is a complete graph with vertex degree ∆. Properly color G1 + v + w so that v and w are the same color. Since v and w are vertices of degree 2 in H2 , by merging v and w into a single vertex in H2 we can properly color G2 + v + w such that v and w are the same color by induction. Combine these two colorings to find a desired coloring of G.
One example of a graph G in this situation when ∆ = 4 is shown here:

[Figure: the graph G, with v and w each adjacent to vertices in both parts.]

We can merge v and w in H2 into a single vertex and then properly color the
resulting graph by induction using at most ∆ colors. This coloring can now
be combined with the coloring of G1 + v + w that colors vertices v and w the
same color and the remaining ∆ − 1 vertices different colors. Doing this gives

If H1 is a complete graph with vertex degree less than ∆, then we can color v and w different colors in both H1 and H2 by induction. Combine these colorings to create a proper coloring of G with at most ∆ colors.
If neither H1 nor H2 are complete graphs, then by induction we can properly
color both H1 and H2 (this is possible even in the case of odd cycles since ∆ ≥
3). These colorings both have v and w different colors and so these colorings
can be combined to create a proper coloring of G with at most ∆ colors.
Case 3: Every pair of nonadjacent vertices v and w leaves G − v − w connected.
Let u be a vertex of degree ∆ in G. Since G is not a complete graph there are
nonadjacent vertices v and w that are both adjacent to u. Let v1 = v and
v2 = w. Since G − v − w is connected, the remaining n − 2 vertices can be

listed in some order v3 , . . . , vn such that vn = u and such that vi is adjacent


to at least one of the vertices vi+1 , . . . , vn .
Properly color the vertices v1 , . . . , vn greedily, meaning to color the vertices
in sequence, using the least possible available color at each step.
As an example when ∆ = 3, consider the Petersen graph with vertices u, v, and w chosen as shown on the left. The vertices v1 , . . . , v10 can be labeled as shown in the middle, and the greedy coloring is shown on the right, where the colors 1, 2, and 3 are drawn as three different shades.

[Figure: the Petersen graph with u, v, w marked; the ordering v1 , . . . , v10 ; and the resulting greedy coloring.]

This labeling scheme colors v and w the same color. Furthermore, since every
vertex except u has at most ∆ − 1 adjacent vertices preceding it in the greedy
coloring scheme, we will never need to use more than ∆ colors for the ver-
tices v3 , . . . , vn−1 . Lastly, since v and w are adjacent to u, the vertex vn = u
can be properly colored so that the entire graph uses at most ∆ colors. This
completes the proof.

Definition. A graph G is bipartite if χ(G) ≤ 2. If G is a bipartite graph that is properly


colored with a set X of vertices colored red and a set Y colored blue, then X and Y are
independent sets of vertices.

Example 25. Even cycle graphs are bipartite and odd cycle graphs are not.
Definition. The complete bipartite graph Km,n is the graph with vertices 1, . . . , n+m
and with edge set {{a, b} : 1 ≤ a ≤ m and m + 1 ≤ b ≤ n + m}. Below is K3,5 .

[Figure: the complete bipartite graph K3,5 , with the vertices 1, 2, 3 in one independent set and 4, . . . , 8 in the other.]

Definition. A cycle of length n in a graph G is a path v1 , . . . , vn in G such that v1 and


vn are adjacent.

A cycle of length n in a graph G does not mean that G has a Cn subgraph.

Example 26. The complete bipartite graph K3,5 has a cycle of length 6, namely the path 1, 4, 2, 5, 3, 6 (whose first and last vertices are adjacent), but it does not have a cycle of length 5.

Theorem 27. A graph G is bipartite if and only if G does not have a cycle of an odd
length.

Proof. If G is bipartite, then it cannot contain an odd cycle, because the two colors must alternate along a cycle and an odd cycle would therefore force χ(G) ≥ 3.
Assume G is connected and has no cycles of an odd length. Pick a vertex u and color it color 1. Take v to be another vertex in G. If the shortest path from u to v has an odd number of vertices, color v color 1; otherwise color v color 2.
This coloring scheme will fail only if two adjacent vertices are assigned the same color, as depicted in the figure below, where the two colors are drawn as two shades. This can only happen when G has an odd cycle.
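This proof also translates into an algorithm: perform a breadth-first search from any vertex, alternate the colors 1 and 2 by distance, and report an odd cycle whenever two adjacent vertices receive the same color. A short Python sketch of the idea, assuming a connected graph (the helper names are our own):

```python
from collections import deque

def two_color(vertices, edges):
    """Attempt to 2-color a connected graph by breadth-first search.
    Returns a proper 2-coloring, or None when an odd cycle makes this impossible."""
    neighbors = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        neighbors[u].add(v)
        neighbors[v].add(u)
    start = next(iter(vertices))
    color = {start: 1}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in neighbors[u]:
            if w not in color:
                color[w] = 3 - color[u]      # alternate between the colors 1 and 2
                queue.append(w)
            elif color[w] == color[u]:
                return None                  # two adjacent vertices forced to the same color
    return color

print(two_color(range(1, 5), [{1, 2}, {2, 3}, {3, 4}, {4, 1}]))   # C4 is bipartite
print(two_color(range(1, 4), [{1, 2}, {2, 3}, {3, 1}]))           # C3 is not: prints None
```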

Definition. The chromatic polynomial for a graph G, denoted PG (x), is the polyno-
mial such that PG (r) is equal to the number of proper r-colorings for r = 0, 1, 2, . . . .
Example 28. Theorem 21 gives that the chromatic polynomial for the complete
graph is PKn (x) = x(x − 1) · · · (x − n + 1).
Example 29. The chromatic polynomial for the path graph is PPn (x) = x(x − 1)^{n−1} because there are x choices for the color of vertex 1 and then x − 1 choices for the color of each of the remaining vertices in turn.
Theorem 30. If e is an edge in G, then PG (x) = PG−e (x) − PG/e (x) where G/e is the
graph G with the edge e contracted, meaning that if e = {v, w}, then the vertices v
and w are merged into a single vertex:

v e w becomes vw

Proof. In a proper coloring of G − e, the two endpoints of e can be given the same color in PG/e (x) ways and different colors in PG (x) ways. Thus PG−e (x) = PG/e (x) + PG (x).

Example 31. The chromatic polynomial for the cycle graph for n ≥ 2 is equal to

PCn (x) = (x − 1)^n + (−1)^n (x − 1).

This can be proved using Theorem 30 and induction: PC2 (x) = (x − 1)^2 + (x − 1) = x(x − 1) is correct, and

PCn (x) = PCn−e (x) − PCn/e (x)
        = PPn (x) − PCn−1 (x)
        = x(x − 1)^{n−1} − ((x − 1)^{n−1} + (−1)^{n−1} (x − 1))
        = (x − 1)^n + (−1)^n (x − 1),

as needed.
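Theorem 30 gives a recursive way to compute the number of proper r-colorings of any graph: repeatedly delete and contract edges until no edges remain. Below is a small Python sketch of this deletion-contraction recursion (the function name is our own); rather than producing the polynomial itself, it evaluates PG (r) at an integer r, and the value for C4 at r = 3 agrees with Example 31.

```python
def count_colorings(vertices, edges, r):
    """Evaluate the chromatic polynomial P_G(r) by deletion-contraction."""
    vertices = frozenset(vertices)
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    if not edges:
        return r ** len(vertices)              # with no edges every coloring is proper
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}                      # the edge set of G - e
    # The edge set of G/e: merge v into u, then drop the resulting loop and parallel edges.
    contracted = {frozenset({u if x == v else x for x in f}) for f in deleted}
    contracted = {f for f in contracted if len(f) == 2}
    return (count_colorings(vertices, deleted, r)
            - count_colorings(vertices - {v}, contracted, r))

# By Example 31, P_{C4}(3) = (3 - 1)^4 + (3 - 1) = 18.
print(count_colorings(range(1, 5), [{1, 2}, {2, 3}, {3, 4}, {4, 1}], 3))   # 18
```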
Theorem 32. If G has V vertices and E edges, then

PG (x) = x^V − E x^{V−1} + · · · .

Proof. We proceed by induction on E. If G has no edges, then G is KVc and then PG (x) = x^V , showing the assertion true.
Using Theorem 30, we have

PG (x) = PG−e (x) − PG/e (x)
       = (x^V − (E − 1) x^{V−1} + · · · ) − (x^{V−1} − · · · )
       = x^V − E x^{V−1} + · · · ,

since G − e has V vertices and E − 1 edges while G/e has V − 1 vertices.
3 Trees

Definition. A tree is a connected graph without a cycle. A leaf is a degree 1 vertex in


a tree.

Example 33. The three unlabeled trees with 5 vertices are:

These trees have 2, 3, and 4 leaves, respectively.

As a corollary of Theorem 27, trees are bipartite.

Theorem 34. A graph is a tree if and only if there is a unique path between any pair
of vertices.

Proof. A tree is connected, so there is at least one path between any pair of vertices u and v, and there cannot be two different paths from u to v because two such paths would together contain a cycle, as shown below. Conversely, if there is a unique path between any pair of vertices, then the graph is connected, and it contains no cycle because a cycle would provide two different paths between two of its vertices; thus the graph is a tree.

[Figure: two different paths from u to v, which together contain a cycle.]

Theorem 35. Every tree with at least two vertices has at least two leaves.

Proof. Let P be a longest path in a tree. Suppose that P is a path from vertex u to vertex v. If either u or v had degree at least 2, then the path P could be extended by at least one vertex (the added vertex cannot already lie on P, since that would create a cycle), contradicting the fact that P is a longest path. Thus u and v are both leaves.

Theorem 36. A graph T is a tree with n vertices if and only if its chromatic polynomial is PT (x) = x(x − 1)^{n−1} .

Proof. Assume that T is a tree. We use induction on the number of edges in T, with the assertion true if T has no edges (and thus T is a single vertex). Let e be an edge incident to a leaf. Theorem 30 gives

PT (x) = PT−e (x) − PT/e (x) = x · x(x − 1)^{n−2} − x(x − 1)^{n−2} = x(x − 1)^{n−1} ,


where we use induction twice: once when noticing that T − e consists of a lone
vertex that can be colored one of x colors and a tree with n − 1 vertices and once on
the tree T/e with n − 1 vertices.
Now assume that PT (x) = x(x − 1)^{n−1} . If T were not connected, then x^2 would divide PT (x), so T is connected. Expanding the polynomial gives

x^n − (n − 1) x^{n−1} + · · · ,

which by Theorem 32 says that T has n − 1 edges. Any connected graph with n vertices and n − 1 edges must be a tree, since this is the minimum number of edges required to create a connected graph.

Theorem 37 (Cayley). There are n^{n−2} labeled trees with n vertices.

Proof. Let tn be the number of labeled trees with n vertices. We will show that n^2 tn = n^n . Start by selecting a tree in one of tn ways. Select one of the n vertices in the tree as a “Start” vertex and select one of the n vertices as an “End” vertex. The start and end vertices can be the same. For example,
[Figure: a labeled tree on the vertices 1, . . . , 6, with vertex 6 chosen as Start and vertex 4 chosen as End.]
We will turn this tree with Start and End vertices into a list of n integers with each integer a member of {1, . . . , n}. Since there are n^n such lists, this will prove the theorem. To change the tree into the list,

1. Let P be the unique path from Start to End in the tree.


2. In the list positions found in P, write the path P from left to right.
3. In the remaining positions i, place the second vertex on the path from i to P.
For example, using the tree displayed above, the path P from Start to End con-
tains the vertices 6, 3, 4. So, after step 2, our list is ( , , 6, 3, , 4). Then, position 1
on the list is filled with 3 because the second vertex on the path from 1 to P is 3. Ap-
plying this logic to the other two empty positions arrives at the list (3, 4, 6, 3, 2, 4).
This process is bijective (meaning that each list corresponds to one and only
one tree with Start and End labels). Suppose f is the function from {1, . . . , n} to
{1, . . . , n} such that f(i) is the integer in position i on the list. The path P can be
reconstructed from f since an integer i is in the path P if and only if iterating f starting at i eventually returns to i. This is because once we are on the path P, iterating f will keep us on the path,
and if we are not on P, then applying f will move us one step closer to P at each
iteration. Once the path is identified, the remaining part of the tree can be easily
reconstructed.
For example, if our list is (2, 5, 1, 2, 3, 5), then 2 is on the path P because f(2) = 5,
f(5) = 3, f(3) = 1, f(1) = 2. Similarly, 5, 1 and 3 are also on the path P. Reading
positions 1, 2, 3 and 5 on the list gives that P is 2, 5, 1, 3. The other positions on the

list give how to connect the other vertices to the tree; 4 is adjacent to 2 and 6 is
adjacent to 5. The reconstructed tree is:
[Figure: the reconstructed tree; the path 2, 5, 1, 3 runs from Start to End, with 4 attached to 2 and 6 attached to 5.]
We have now shown that n^2 tn = n^n , as needed.
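For small n, Cayley's formula can also be confirmed by brute force: enumerate every graph on {1, . . . , n} with n − 1 edges and count the connected ones. The Python sketch below (helper names are our own) does exactly this and compares the count with n^{n−2}; it is only feasible for small n.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first search connectivity check."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(neighbors[u] - seen)
    return len(seen) == len(set(vertices))

def count_labeled_trees(n):
    """Count trees on {1, ..., n}: connected graphs with exactly n - 1 edges."""
    vertices = range(1, n + 1)
    possible_edges = list(combinations(vertices, 2))
    return sum(1 for edge_set in combinations(possible_edges, n - 1)
               if is_connected(vertices, edge_set))

for n in range(2, 7):
    print(n, count_labeled_trees(n), n ** (n - 2))   # the two counts agree
```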

Cayley’s formula is such a nice result that it deserves a second proof.

Proof. Start by selecting one of the n^{n−2} functions f : {2, . . . , n − 1} → {1, . . . , n}. Represent f as a graph on
vertices 1, . . . , n by drawing an edge from i to f(i) for all i. Draw the graph in the
plane such that vertices in cycles are colinear with the least element in each cycle
listed first and such that cycles are listed in decreasing order according to minimum
element. Draw any vertices not contained in a cycle below this line.
For example, the function f such that f(2) = 3, f(3) = 3, f(4) = 7, f(5) = 1,
f(6) = 3, f(7) = 8, f(8) = 4, f(9) = 12, f(10) = 7, and f(11) = 10 would be depicted
this way:

[Figure: the graph of f, with the cycles of f drawn along a horizontal line and the remaining vertices below it.]

Change this graph into a tree by connecting the cycles from left to right. Doing
this to the above gives
[Figure: the tree obtained by connecting the cycles from left to right.]

The graph was carefully drawn in the prescribed manner so as to make this process bijective. In particular, to take a tree and reconstruct the function f, locate the path P from n to 1 and remove every edge along this path that connects to a vertex that is smaller than all previous vertices in P (a left to right minimum). The pieces of the path between the cut edges are the cycles of f, and so the function f can be reconstructed.

Theorem 38. The expected number of leaves on a labeled tree with n vertices is ap-
proximately n/e.

Proof. There are (n − 1)(n − 1)^{n−3} = (n − 1)^{n−2} labeled trees with the vertex labeled 1 as a leaf, because we may take any one of the (n − 1)^{n−3} labeled trees on the vertices 2, . . . , n and attach 1 to any one of the n − 1 vertices. Since there are a total of n^{n−2} trees, the probability that vertex 1 (or any other vertex) is a leaf is

(n − 1)^{n−2} / n^{n−2} = (1 − 1/n)^n (1 − 1/n)^{−2} .

We recall from calculus that (1 − 1/n)^n has limit 1/e and that (1 − 1/n)^{−2} has limit 1. Therefore the probability that any one of the n vertices is a leaf is approximately 1/e, showing that the expected number of leaves on a labeled tree is approximately n/e.

Definition. A spanning tree for a connected graph G is a tree obtained by removing edges from G.

Since a tree must be connected, a spanning tree uses the minimum number of edges needed to connect all of the vertices of G.

Example 39. The edges in a spanning tree for the Petersen graph are colored gold:

Cayley’s theorem can be restated to say that there are nn−2 spanning trees for
the complete graph Kn . Our proof of Cayley’s theorem can be modified to give a
similar result for spanning trees in the complete bipartite graph Km,n .

Theorem 40. There are n^{m−1} m^{n−1} spanning trees for Km,n .

Proof. Begin by labeling the m vertices in Km,n with a1 , . . . , am and labeling the n
vertices in Km,n with b1 , . . . , bn . Select a spanning tree for Km,n . Select a “Start”
vertex among the “a” vertices and select an “End” vertex among the “b” vertices. If
tm,n denotes the number of spanning trees for Km,n , then there are mn tm,n ways to make these choices. For example, one possible choice when m = 3 and n = 5 is:

[Figure: a spanning tree of K3,5 , with a Start vertex among a1 , a2 , a3 and an End vertex among b1 , . . . , b5 .]

Create two lists, one with positions a1 , . . . , am and the other with positions b1 , . . . , bn .
Let P be the unique path from Start to End.

1. Write the “b” elements from P from left to right in the positions a1 , . . . , am that
are also in P. Similarly, write the “a” elements from P from left to right in the
positions b1 , . . . , bn that are also in P.

2. In the remaining positions ai , write the second vertex on the path from ai to P.
Similarly, in the remaining positions bi , write the second vertex on the path
from bi to P.

For example, using the tree shown above, the two lists have these positions that need to be filled in:

( _ , _ , _ ) in positions a1 , a2 , a3      and      ( _ , _ , _ , _ , _ ) in positions b1 , b2 , b3 , b4 , b5 .

The path P is a2 , b4 , a3 , b5 , so after completing the first step in the above instructions, we find

( _ , b4 , b5 )      and      ( _ , _ , _ , a2 , a3 ).

Following the second set of instructions gives

( b1 , b4 , b5 )      and      ( a2 , a2 , a1 , a2 , a3 ).
The first list has length m and its entries lie in {b1 , . . . , bn }, and so there are n^m such lists. Similarly, there are m^n possible choices for the second list. Therefore, there are n^m m^n possible pairs of lists.
Showing that this process is bijective is so similar to the ideas in the proof of Theorem 37 that it is left to the reader. This shows that mn tm,n = n^m m^n , as needed.
4 Planarity

Definition. A graph is planar if it can be drawn on a plane without any edges cross-
ing. If a graph G is drawn in this way, then the regions in the plane (including the
“outside” region) bounded by edges are faces.

Example 41. The following table gives examples of various planar graphs (the planar drawings are omitted here):

Graph    # vertices   # edges   # faces
K4            4           6         4
K2,3          5           6         3
Cn            n           n         2
Wn            n        2n − 2       n

Theorem 42 (Euler). If G is planar, connected, and has V vertices, E edges and F faces,
then V − E + F = 2.

Proof. We prove this by induction on the number of edges E with the assertion true
when G has only one vertex and no edges.
If G is a tree, then E = V − 1, and F = 1, so V − E + F = 2. If G is not a tree, then
an edge on a cycle separates two faces, so removing this edge reduces the number
of faces by 1, leaving V − E + F unchanged. We are now done by induction.

Theorem 43. If G is planar with V ≥ 3 vertices and E edges, then E ≤ 3V − 6. If G


happens to also be bipartite, then E ≤ 2V − 4.


Proof. Each face is surrounded by at least 3 edges, so 3F ≤ 2E where F is the number


of faces. Using this in V − E + F = 2, we have E = V + F − 2 ≤ V + 2E/3 − 2, which
gives E ≤ 3V − 6. If G is bipartite, then each face is surrounded by at least 4 edges,
giving 4F ≤ 2E, which gives the result E ≤ 2V − 4 when used in Euler’s formula.
As a corollary of Theorem 43, the graphs K5 and K3,3 are not planar. Indeed,
K5 has E = 10 but 3V − 6 = 9 and K3,3 has E = 9 but 2V − 4 = 8. Theorem
44 below shows that these two non-planar graphs must appear inside every non-
planar graph.
Theorem 44. The graph G is not planar if and only if either K5 or K3,3 can be found
from contracting edges, removing edges, and/or removing vertices in G.
For example, the Petersen graph is not planar because the gold edges shown
below can be contracted to find K5 :

The proof of Theorem 44 is relatively long and technical, and, although it is


within the capabilities of the reader, we choose to omit the proof.
Theorem 45. If G is planar, then there is a vertex with degree 5 or less.
Proof. If every vertex has degree 6 or more, then 2E ≥ 6V means that E ≥ 3V >
3V − 6, contradicting Theorem 43.
Theorem 46 (The five color theorem). If G is planar, then χ(G) ≤ 5.
Proof. We proceed by induction on the number of vertices in G with the assertion
true if G has a single vertex.
By Theorem 45, G has a vertex v of degree 5 or less when any multiple edges are
identified as a single edge and any loops removed. By induction, χ(G − v) ≤ 5, so
there is a proper coloring of G − v that uses at most 5 colors.
If the degree of v is less than 5, then it can be colored a color different from its neighbors. If the degree of v is 5 and all five vertices adjacent to v are different colors, there must be two of these five vertices u and w such that {u, w} is not an edge, because otherwise K5 would be a subgraph of the planar graph G.
Consider the graph G with the edges {v, u} and {v, w} contracted. By induction,
the resulting graph can be properly colored using at most 5 colors. Reinstate the
contracted edges and assign u and w the color of v. Now v is adjacent to vertices
of only 4 different colors, meaning that G can be properly colored using at most 5
colors.
Definition. A dual of a planar graph G, denoted G∗ , is a graph found by drawing G in
the plane without edge crossings and then taking the vertices of G∗ as the faces of G
with {f1 , f2 } equal to an edge in G∗ whenever faces f1 and f2 share an edge in G.

Example 47. Below we display a graph G on the left, a dual G∗ on the right, and a
depiction of how to find a dual in the middle.

There may be many different dual graphs for a single graph G. Furthermore,
a dual graph can have loops (an edge from a vertex to itself) or multiple edges
between vertices. Theorems 42, 43, 44 and 45 all still hold for graphs with loops or
multiple edges.

Example 48. The graphs on the right and left are duals of the graph in the center:

The graph on the right is the dual of a different planar embedding of the graph in the
center, the embedding found by redrawing the degree 1 vertex inside of the triangle.
Theorem 46 can be interpreted to say that the faces in a planar graph without
loops can be colored with at most five different colors such that faces that share
an edge are different colors. For example, below we show a coloring of the faces
of the planar graph in Example 47 next to the corresponding proper coloring of the
vertices in the dual graph. One of the colors we use is white, the color of the outside
face.

A closer inspection of this graph reveals that it can be properly colored using four
colors instead of five:

Indeed, Theorem 46 can be strengthened to the Four Color Theorem, stated here.

Theorem 49 (The four color theorem). If G is planar, then χ(G) ≤ 4.

The proof of the four color theorem is famously difficult and no simple proof is known. All current versions of the proof reduce the situation to a careful analysis of a large but finite number of specific graphs which are then checked by brute force with a computer. The proof of the four color theorem in 1976 was the first example of a proof that required a computer to complete and sparked a debate on the future role of humans in mathematical theorem proving.
Definition. A subset C ⊆ R3 is convex if the line segment from x to y is completely
contained in C for every x, y ∈ C. The convex hull of a subset S ⊆ R3 is the intersec-
tion of all convex sets C which contain S.
Example 50. A solid ball is convex but, like a banana, you are not convex.

Definition. A convex polyhedron is the convex hull of a finite number of points in R3 ,


called vertices, such that not all vertices are coplanar and no one vertex lies in the
convex hull of the other vertices.

Example 51. A soccer ball is a convex polyhedron whose vertices are the points where the hexagonal and pentagonal panels meet.

Each convex polyhedron with V vertices has an associated graph with V vertices, found by connecting two vertices in the graph whenever the corresponding vertices of the convex polyhedron are connected by an edge. In this way our definitions of vertex, edge, and face for convex polyhedra are the natural ones.

Theorem 52. The graph of a convex polyhedron is planar.

Proof. Enclose the convex polyhedron P in a large sphere and consider the shadows cast on the sphere by the vertices and edges of P by a light source in the center of P. This is the graph of P drawn without edge crossings on a sphere, which can be used to draw the graph in the plane by using a map projection.

Definition. A Platonic solid is a convex polyhedron such that the same number of
edges meet at each vertex and faces are congruent regular polygons.
This table records five examples of Platonic solids. The graphs are planar even
though we choose not to always exhibit a planar embedding.

Solid           V    E    F
Tetrahedron     4    6    4
Cube            8   12    6
Octahedron      6   12    8
Dodecahedron   20   30   12
Icosahedron    12   30   20

[The Drawing and Graph columns, showing each solid and its graph, are omitted here.]

Theorem 53. There are exactly five Platonic solids.

Proof. Suppose a Platonic solid has V vertices, E edges, and F faces such that each
face is a regular p-gon and each vertex joins q edges. Then we have pF = 2E by
counting the edges bordering each face and we have qV = 2E by counting degrees.
Since V − E + F = 2, we have 2E/q − E + 2E/p = 2. Dividing by 2E gives the identity
1/E = 1/p + 1/q − 1/2, which must be a positive number.

The values of p and q must be at least 3. If either p or q is greater than 5, the quantity 1/p + 1/q − 1/2 is less than or equal to 0. The only values of p and q between 3 and 5 that make 1/p + 1/q − 1/2 positive are recorded in this table:

p   q   1/p + 1/q − 1/2   Platonic solid
3   3        1/6           tetrahedron
4   3        1/12          cube
3   4        1/12          octahedron
3   5        1/30          icosahedron
5   3        1/30          dodecahedron

These are the five Platonic solids.
5 Eulerian and Hamiltonian paths

Definition. A walk is a sequence of vertices v1 , . . . , vn such that vi and vi+1 are adja-
cent for i = 1, . . . , n − 1. The difference between a walk and a path is that vertices
in a path must be distinct. A trail is a walk where every edge is distinct. A graph is
Eulerian if there is a trail that starts and ends at the same vertex and that uses every
edge in the graph.
Example 54. The following graph is Eulerian
[Figure: a graph on the vertices 1, . . . , 7.]

because of the trail 1, 5, 6, 7, 3, 6, 4, 3, 2, 5, 4, 2, 1.


Theorem 55. A connected graph is Eulerian if and only if every vertex degree is even.

Proof. If the graph is Eulerian, then each pass of the closed Eulerian trail through a vertex v uses two edges at v, so the degree of every vertex is even.
Now suppose every vertex in a connected graph G has an even degree. We show
that G is Eulerian by induction on the number of edges in G.
A cycle C must exist in G because otherwise G would be a tree and then have a
degree 1 vertex, which is not even. Remove the edges in C from G. By induction,
each connected component is Eulerian. Create an Eulerian trail for G by traveling
around C, taking detours using the Eulerian trails along each component along the
way. For example, if C is the cycle 1, 2, 3, 4, 5, 6, 7, 8, 1 in the graph below,
[Figure: a graph on the vertices 1, . . . , 14 containing the cycle 1, 2, 3, 4, 5, 6, 7, 8, 1.]
then an Eulerian trail is 1, 9, 10, 1, 2, 3, 4, 14, 13, 12, 11, 6, 12, 5, 13, 4, 5, 6, 7, 8, 1.
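The idea in this proof, following a cycle and splicing in detours, can be implemented directly; one standard formulation is Hierholzer's algorithm. Below is a minimal Python sketch (helper names are our own) that assumes the input graph is connected and that every vertex has even degree.

```python
def eulerian_trail(vertices, edges):
    """Hierholzer's algorithm: build a closed trail using every edge exactly once.
    Assumes the graph is connected and every vertex has even degree."""
    neighbors = {v: [] for v in vertices}
    for index, e in enumerate(edges):
        u, v = tuple(e)
        neighbors[u].append((v, index))
        neighbors[v].append((u, index))
    used = set()                                 # indices of edges already traversed
    stack = [next(iter(vertices))]
    trail = []
    while stack:
        u = stack[-1]
        while neighbors[u] and neighbors[u][-1][1] in used:
            neighbors[u].pop()                   # discard edges used from the other endpoint
        if neighbors[u]:
            w, index = neighbors[u].pop()
            used.add(index)
            stack.append(w)                      # extend the current trail
        else:
            trail.append(stack.pop())            # no unused edges left at u: backtrack
    return trail[::-1]

# A "bowtie": two triangles sharing vertex 3, so every degree is even.
E = [{1, 2}, {2, 3}, {3, 1}, {3, 4}, {4, 5}, {5, 3}]
print(eulerian_trail(range(1, 6), E))            # a closed trail using all 6 edges
```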

Definition. A graph is Hamiltonian if there is a cycle that contains every vertex.


Example 56. The graph of the dodecahedron is Hamiltonian:

Theorem 57 (Bondy-Chvátal). Let G be a graph with n ≥ 3 vertices and let u and v


be non-adjacent vertices such that the sum of the degrees of u and v is at least n. Then
G is Hamiltonian if and only if G + {u, v} is Hamiltonian.

Proof. If G is Hamiltonian, then G + {u, v} is also clearly Hamiltonian.


The reverse implication is proved by contradiction. Assume C = u, x2 , . . . , xn−1 , v, u is a Hamiltonian cycle in G + {u, v} and assume that G is not Hamiltonian. If some vertex xi in this cycle C is adjacent to u, then xi−1 cannot be adjacent to v, because otherwise u, xi , . . . , xn−1 , v, xi−1 , . . . , x2 , u would be a Hamiltonian cycle in G. Thus each of the deg(u) neighbors of u on C is preceded by a non-neighbor of v, so deg(v) ≤ (n − 1) − deg(u). Therefore the sum of the degrees of u and v is at most n − 1, a contradiction.

As a corollary of Theorem 57, if G has n vertices and all degrees are at least n/2,
then G is Hamiltonian. This follows because we can keep adding edges between
non-adjacent vertices until we find a complete graph. In general, deciding whether
or not a given graph is Hamiltonian is a difficult problem (it is in a class of problems
known as NP-complete), so theorems where certain conditions imply that graphs
are Hamiltonian are probably the best we can hope for. Another example of such a
theorem is given next.

Theorem 58. Let C be a Hamiltonian cycle in a planar graph, let inside(i) be the number of i-edged faces inside C and outside(i) be the number of i-edged faces outside C. Then

∑_i (i − 2) (inside(i) − outside(i)) = 0.

Proof. If C contains x chords inside C, then there are x + 1 = ∑_i inside(i) inside faces. Counting the edges around the interior faces gives ∑_i i · inside(i) = 2x + n, where the graph has n vertices. Combining these last two equations gives ∑_i (i − 2) inside(i) = n − 2. Applying the same logic to the outside of C gives ∑_i (i − 2) outside(i) = n − 2. The statement in the theorem follows.

Example 59. As an example of Theorem 58, consider the graph shown below. This graph is Hamiltonian with the cycle highlighted in gold. As Theorem 58 says, the last column in the table below sums to 0.

i   inside(i)   outside(i)   (i − 2)(inside(i) − outside(i))
3       1           0                     1
4       2           1                     2
5       0           1                    −3
Example 60. Theorem 58 can be used to show that certain planar graphs are not Hamiltonian. Consider the following graph with 21 faces with 5 edges, 3 faces with 8 edges, and 1 face with 9 edges.
If the graph had a Hamiltonian cycle, then the face with 9 edges would be an outside face and the last column in the table below would sum to 0 for some integers a, b:

i   inside(i)   outside(i)   (i − 2)(inside(i) − outside(i))
5       a         21 − a              3(2a − 21)
8       b          3 − b              6(2b − 3)
9       0            1                   −7

Setting the sum of the last column equal to 0 and simplifying gives 6a + 12b = 88. There are no integer solutions to this equation because the left side is divisible by 3 but 88 is not. Thus the graph is not Hamiltonian.

The famous Traveling Salesman Problem is closely related to the problem of


deciding whether or not a graph has a Hamiltonian cycle. Assign positive values to
the edges in a complete graph Kn . These weights represent the cost to use the edge.
The Traveling Salesman Problem asks to find a minimum weight Hamiltonian cycle.
For example, the weighted complete graph K5 shown here

[Figure: a weighted complete graph K5 with positive integer edge weights.]

has minimum weight Hamiltonian cycle given by 1, 2, 3, 4, 5, 1. It is notoriously dif-


ficult to find an exact solution to a large Traveling Salesman Problem, but there are
approximate heuristic solutions that can be found quickly which are within small
percentages of the exact solution.
6 Connectivity

Definition. A disconnecting set D of edges in a graph G is a set of edges such that G−


D is disconnected. The edge connectivity ε(G) is the smallest size of a disconnecting
set.

Example 61. We see that ε(Kn ) = n − 1, ε(Cn ) = 2, and ε(T) = 1 for any tree T with at least one edge.

Definition. A bridge is an edge e such that {e} is a disconnecting set.

Theorem 62. Let G be connected with edge e = {u, v}. Then u, v (the path that uses the edge e) is the only path from u to v if and only if e is a bridge.

Proof. If u, v is the only path from u to v, then G − e has no path from u to v and is therefore disconnected, so e is a bridge.
On the other hand, suppose e is a bridge and let G1 , G2 be the two components of G − e, with u in G1 . Take any w in G2 and let P be a path from u to w in G. The path P must use the edge e, meaning that v must be the second vertex on the path P, showing that v is in G2 . Thus there are no paths from u to v other than the path u, v that uses the edge e, since any other path would avoid e and would therefore connect u and v in G − e.

Definition. A u, v-disconnecting set is a set E of edges in G such that u and v are in different components of G − E.

Example 63. If G is the graph shown below,


[Figure: a graph with vertices u, v, and 1, . . . , 9.]

then the minimum size of a u, v disconnecting set is 3 because the {u, 1}, {u, 2},
{u, 3} edges can be removed to disconnect u and v. In this example there also hap-
pen to be 3 paths from u to v that are edge disjoint (meaning that each edge is used
at most once in any of the paths): u, 1, 4, 6, v and u, 2, 4, 7, v and u, 3, 5, 8, v.


Theorem 64 (Menger, edge version). The maximum number of edge disjoint paths
from u to v is equal to the minimum number of edges in a u, v disconnecting set.
Proof. We prove this by induction on the number of edges in G. The theorem is true
if G has no edges.

Case 1: Suppose a minimum size u, v disconnecting set contains the edge {u, v}.

[Figure: a graph in which a minimum size u, v disconnecting set contains the edge {u, v}.]

Removing this u, v edge removes one edge in a u, v disconnecting set and one
path from u to v. We are now done by induction.
Case 2: Suppose no u, v disconnecting set uses the {u, v} edge (if no such edge
exists then we must be in this case).

[Figure: a graph G with vertices u, 1, 2, 3, 5, 6, v and a minimum u, v disconnecting set E = {e1 , e2 , e3 }.]

The above graph G depicts one such situation, with E = {e1 , e2 , e3 }.


Let Gu be the graph G with all vertices in the component of G − E containing
u merged. For example, using the above G, we have

[Figure: the graph Gu , in which the vertices u, 1, 2 are merged into a single vertex u12, and the graph Gv , in which the vertices 3, 5, 6, v are merged into a single vertex 356v.]
The set E is still a disconnecting set for both Gu and Gv of minimum size, so by
induction both Gu and Gv have the correct number of edge disjoint paths from
u to v. Combine these paths in the natural way to find the correct number of
edge disjoint paths from u to v in G.
In the above examples, the edge disjoint paths from u12 to v in Gu are u12, 3, v
and u12, 6, v and u12, 5, v. The edge disjoint paths from u to 356v in Gv are
u, 356v and u, 2, 356v and u, 1, 356v. Combining these paths gives the edge
disjoint paths u, 3, v and u, 2, 6, v and u, 1, 5, v.

Definition. A directed graph, or digraph, is a graph where each edge is given a di-
rection. A simple graph is a graph that is not a directed graph, has no loops or multi-
ple edges, and does not have weighted edges. Usually the unqualified term “graph”
refers to a simple graph unless otherwise stated.

Example 65. A directed graph with multiple edges is shown below:

[Figure: a directed graph with vertices u, 1, . . . , 7, v, including two parallel edges between vertices 1 and 2.]

The proof of Theorem 64 still holds for directed graphs with multiple edges. So,
for instance, the graph in Example 65 has a minimum of two edges in a u, v discon-
necting set because if the two edges between vertices 1 and 2 are removed then
there is not a path from 1 to 2 (and therefore the resulting graph is disconnected).
Two edge disjoint paths from u to v are u, 5, 1, 2, 3, 7, v and u, 1, 2, 3, 6, v.

Definition. A network is a directed graph where nonnegative integer weights are


assigned to each edge. The in-degree of a vertex v in a network is the sum of the
weights of the edges that point to v and the out-degree is the sum of the weights of
the edges that leave v.

Example 66. A network is shown below:

[Figure: a network with vertices u and v and nonnegative integer weights on its directed edges.]

Such a graph can be used to model the number of cars that can drive from one city to another in an hour, travel times, or the capacities of water-filled pipes, or the weights on the edges can represent multiple edges between vertices.

Definition. A flow from u to v in a network N is a network N′ such that the weight on an edge in N′ is less than or equal to the weight on the corresponding edge in N and such that the in-degree and out-degree are the same for all vertices in N′ except for u and v. The flow value is the out-degree of u in N′ .

Using the analogy of the edge weights in a network representing the amount of
water that can move through a pipe, the flow of the network models water flowing
from u to v.

Example 67. A flow from u to v for the graph in Example 66 is shown below:
[Figure: a flow from u to v in the network of Example 66.]

The flow value is 6 because the out-degree of u (and the in-degree of v) is 6. We also
see that there is a u, v disconnecting set with edge weights that sum to 6 because
the edges surrounding u can be deleted to disconnect the graph.

Theorem 68 (Max-Flow Min-Cut). Let N be a network with vertices u, v. The maxi-


mum value for a flow from u to v is equal to the minimum weight u, v disconnecting
set.

Proof. The network can be considered a directed graph with multiple edges, where the edge weight in the network represents the number of edges between vertices. The value of a maximum flow from u to v is then the maximum number of edge disjoint paths from u to v. By Theorem 64, this is also the minimum size of a u, v disconnecting set, which is equal to the minimum weight of a u, v disconnecting set in the network.

The following Ford-Fulkerson algorithm is a greedy algorithm that takes as input a network N and vertices u and v and outputs the maximum flow value from u to v (a code sketch of the idea appears after these steps):

1. Start with N′ the network with all edge weights 0.


2. Find a path P from u to v with the maximum possible capacity, meaning that
this path has the ability to increase the flow value. This path can go back-
wards on directed edges, which has the effect of decreasing the edge weight.
3. Update N′ by adding the edge weights on P that increase the flow value.
4. Repeat steps 2 and 3 until no longer possible.
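Here is a minimal Python sketch of these steps (function and variable names are our own). It chooses each path P by breadth-first search in the residual network, the variant usually called the Edmonds–Karp algorithm, and it represents the network as a dictionary mapping directed edges to capacities.

```python
from collections import deque

def max_flow_value(capacity, u, v):
    """Repeatedly find an augmenting path from u to v by breadth-first search
    and push as much flow as possible along it; return the total flow value."""
    residual = {}                                     # residual[x][y] = remaining capacity x -> y
    for (x, y), c in capacity.items():
        residual.setdefault(x, {})
        residual.setdefault(y, {})
        residual[x][y] = residual[x].get(y, 0) + c
        residual[y].setdefault(x, 0)                  # reverse edges start with capacity 0
    flow = 0
    while True:
        parent = {u: None}
        queue = deque([u])
        while queue and v not in parent:              # breadth-first search for an augmenting path
            x = queue.popleft()
            for y, c in residual[x].items():
                if c > 0 and y not in parent:
                    parent[y] = x
                    queue.append(y)
        if v not in parent:
            return flow                               # no augmenting path remains
        path, y = [], v                               # recover the path and its bottleneck capacity
        while parent[y] is not None:
            path.append((parent[y], y))
            y = parent[y]
        bottleneck = min(residual[x][y] for x, y in path)
        for x, y in path:
            residual[x][y] -= bottleneck
            residual[y][x] += bottleneck              # pushing flow allows "backwards" use later
        flow += bottleneck

# A small example: routes u -> a -> v and u -> b -> v with capacities 3 and 2.
caps = {('u', 'a'): 3, ('a', 'v'): 3, ('u', 'b'): 2, ('b', 'v'): 2}
print(max_flow_value(caps, 'u', 'v'))                 # 5
```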
Example 69. As an example of the Ford-Fulkerson algorithm, let N be the network

[Figure: the network N, with vertices u and v and four intermediate vertices.]

After starting with N′ the network with all edge weights 0, we find a path P and in-
crease the edge weights along P to the maximum extent possible. This path P is
highlighted on the updated network

[Figure: the updated network N′ with the first path P highlighted.]
Doing this again for another choice of a path P gives

[Figure: the updated network N′ after a second augmenting path.]
The above path uses the {1, 4} edge going backwards. This means that we decrease
the edge weight along this path when changing the edge weights along the path P.
Continuing this process gives

[Figure: the updated network N′ after a third augmenting path.]
and with one more iteration we arrive at a graph where no path P can increase the
flow value, which finds a maximum flow value of 9:

[Figure: the final network N′ , which achieves the maximum flow value 9.]
Although different choices for the path P made at each step can result in a different
final network N′ , the maximum flow value can always be found with this algorithm.

Definition. A separating set for graph G that is not a complete graph is a set of ver-
tices V such that G − V is not connected. The vertex connectivity κ(G) is the minimum
size of a separating set and we set κ(Kn ) = n−1. A u, v-separating set is a separating
set V such that u and v are in different components of G − V.

Example 70. We have κ(Cn ) = 2 and κ(T) = 1 for any tree T.


It can be seen that κ(G) ≤ ε(G) for any graph G. Indeed, if E is a minimum size set of edges such that G − E is disconnected, then taking V to be a set of vertices containing one suitably chosen endpoint of each edge in E causes G − V to be disconnected.
Example 71. A minimal size 1, 8-separating set for the graph shown below is {2, 3, 4}.

[Figure: a graph on the vertices 1, . . . , 8.]

This is also the maximum number of paths from 1 to 8 that are vertex disjoint: 1, 2, 5, 8
and 1, 3, 7, 8 and 1, 4, 6, 8. With the exception of the start and end vertices, these
three paths do not share a vertex.

Theorem 72 (Menger, vertex version). The maximum number of vertex disjoint paths
from u to v is equal to the minimum number of vertices in a u, v-separating set.

Proof. Create a digraph D with vertices u, v and vertices x+, x− for every x ̸= u, v in
G. Add these directed edges to D:
1. (x−, x+) for all x ̸= u, v.
2. If x, y ̸= u, v and {x, y} is an edge in G, then (x+, y−).
3. If {u, x} is an edge in G, then (u, x−).
4. If {x, v} is an edge in G, then (x+, v).
For example, if
[Figure: a graph G on the vertices u, 1, 2, 3, 4, v.]
then we have

[Figure: the corresponding digraph D, in which each vertex x other than u and v has been split into x− and x+.]

Let E be a minimum sized disconnecting set of edges in D. We can assume that E only contains edges of the form (x−, x+). Indeed, if (x+, y−), (u, x−), or (x+, v) is an edge in E, then we can replace that edge with (x−, x+), since every path that uses the replaced edge must also use (x−, x+). Then E naturally corresponds to a separating set in G.

By Menger’s Theorem, the size of E is equal to the maximum number of edge


disjoint u, v paths in D. Such paths in D can only use each (x−, x+) edge once, and
thus naturally correspond to vertex disjoint paths in G.
In the above example, E can be E = {(1−, 1+), (4−, 4+)}. This corresponds to
the separating set {1, 4} in G. The maximum number of edge disjoint paths in D is found with the paths u, 1−, 1+, 3−, 3+, v and u, 2−, 2+, 4−, 4+, v, which correspond with the paths u, 1, 3, v and u, 2, 4, v in G.
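The vertex-splitting construction in this proof is mechanical and can be written out in a few lines. Below is a Python sketch (the function name and the convention of writing the split vertices as (x, '-') and (x, '+') are our own) that builds the arcs of D from a graph G; it assumes u and v are not adjacent.

```python
def split_digraph(vertices, edges, u, v):
    """Build the arcs of the auxiliary digraph D from the proof of Theorem 72.
    Every vertex x other than u and v is split into (x, '-') and (x, '+').
    Assumes u and v are not adjacent in G."""
    arcs = set()
    for x in set(vertices) - {u, v}:
        arcs.add(((x, '-'), (x, '+')))                   # rule 1
    for e in edges:
        x, y = tuple(e)
        for a, b in ((x, y), (y, x)):
            if a == u and b != v:
                arcs.add((u, (b, '-')))                  # rule 3
            elif b == v and a != u:
                arcs.add(((a, '+'), v))                  # rule 4
            elif a not in (u, v) and b not in (u, v):
                arcs.add(((a, '+'), (b, '-')))           # rule 2
    return arcs

# A 4-cycle u, 1, v, 2: there are two vertex disjoint u, v paths.
E = [{'u', 1}, {1, 'v'}, {'v', 2}, {2, 'u'}]
for arc in sorted(split_digraph({'u', 1, 'v', 2}, E, 'u', 'v'), key=str):
    print(arc)
```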

Theorem 73. If the maximum degree in a graph is 3, then κ(G) = ε(G).

Proof. If there are two edge disjoint paths from u to v for some vertices u and v, then
the two paths cannot share a vertex (other than u and v), since this would require a
degree 4 vertex when the two paths intersect. Thus each edge disjoint path is also
a vertex disjoint path, and by Menger’s theorem we have

κ(G) = min κu,v (G) = min εu,v (G) = ε(G)

where κu,v (G) denotes the minimum size of a u, v-separating set and εu,v (G) denotes
the minimum size of a u, v-disconnecting set.

Theorem 74. If κ(G) ≥ 3, then G has a cycle of an even length.

Proof. There are at least 3 vertex disjoint paths between any two distinct vertices u and v. Two of these three paths must both have an even length or both have an odd length, and combining those two paths gives a cycle of even length.
7 Matchings

Definition. A matching in a graph G is a set M of edges such that no two edges in


M are incident. The matching saturates a set X of vertices in G if every vertex in X is
incident to an edge in M.

Example 75. The edges of a matching in a bipartite graph are highlighted below:

[Figure: a bipartite graph with independent sets {1, 2, 3, 4} and {5, 6, 7, 8, 9, 10} and a highlighted matching.]

This matching saturates the set {1, 2, 3, 4}.


Theorem 76 (Hall). Let G be a bipartite graph with independent sets X and Y. For a
subset S of vertices, let N(S) be the set of vertices in G that are adjacent to a vertex
in S. Then there is a matching for G that saturates X if and only if |S| ≤ |N(S)| for all
S ⊆ X.

Example 77. Before its proof we illustrate the statement of Hall’s matching theo-
rem by continuing Example 75. The set X = {1, 2, 3, 4} and so there are 15 nontrivial
subsets S ⊆ X to check:


S N(S) |S| ≤ |N(S)|?


{1} {5, 6} Yes
{2} {6, 7, 8} Yes
{3} {7, 8} Yes
{4} {9, 10} Yes
{1, 2} {5, 6, 7, 8} Yes
{1, 3} {5, 6, 7, 8} Yes
{1, 4} {5, 6, 9, 10} Yes
{2, 3} {6, 7, 8} Yes
{2, 4} {6, 7, 8, 9, 10} Yes
{3, 4} {7, 8, 9, 10} Yes
{1, 2, 3} {5, 6, 7, 8} Yes
{1, 2, 4} {5, 6, 7, 8, 9, 10} Yes
{1, 3, 4} {5, 6, 7, 8, 9, 10} Yes
{2, 3, 4} {6, 7, 8, 9, 10} Yes
{1, 2, 3, 4} {5, 6, 7, 8, 9, 10} Yes

In all cases we have |S| ≤ |N(S)|, so Hall’s matching condition says that there is
a matching that saturates {1, 2, 3, 4}. On the other hand, there is not a matching
that saturates {5, 6, 7, 8, 9, 10} because if S = {9, 10}, then N(S) = {4} and so the
inequality |S| ≤ |N(S)| does not hold.
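Checking all of the subsets by hand quickly becomes tedious, but for small bipartite graphs Hall's condition can be verified by brute force. The Python sketch below (helper names are our own) does this for the graph of Example 75, whose edges can be read off from the table above.

```python
from itertools import combinations

def hall_condition_holds(X, edges):
    """Check |S| <= |N(S)| for every nonempty subset S of X."""
    neighbors = {x: set() for x in X}
    for e in edges:
        u, v = tuple(e)
        if u in neighbors:
            neighbors[u].add(v)
        if v in neighbors:
            neighbors[v].add(u)
    X = list(X)
    for k in range(1, len(X) + 1):
        for S in combinations(X, k):
            N_S = set().union(*(neighbors[x] for x in S))
            if len(S) > len(N_S):
                return False                              # the condition fails for this S
    return True

# The bipartite graph of Example 75, read off from the table in Example 77.
E = [{1, 5}, {1, 6}, {2, 6}, {2, 7}, {2, 8}, {3, 7}, {3, 8}, {4, 9}, {4, 10}]
print(hall_condition_holds({1, 2, 3, 4}, E))              # True
print(hall_condition_holds({5, 6, 7, 8, 9, 10}, E))       # False: S = {9, 10} fails
```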

Proof. Suppose that a matching that saturates X exists and let S ⊆ X. Each vertex
in S is matched with a unique vertex in N(S), and so |S| ≤ |N(S)|.
Now suppose that |S| ≤ |N(S)| for all subsets S of vertices in X. Let G′ be the
graph G with two extra vertices: a vertex x that is adjacent to every vertex in X and a
vertex y that is adjacent to every vertex in Y. For instance, the graph G′ for the graph
shown in Example 75 is
[Figure: the graph G′ for the graph of Example 75, with x adjacent to 1, 2, 3, 4 and y adjacent to 5, 6, 7, 8, 9, 10.]

Let A be a subset of X and B be a subset of Y such that the union A ∪ B is an x, y


separating set. This means that there is not an edge that connects a vertex in X − A
to a vertex in Y − B and so N(X − A) must be a subset of B. Therefore, using the
hypothesis that |S| ≤ |N(S)| in the case where S = X − A, we have

|X| − |A| = |X − A| ≤ |N(X − A)| ≤ |B|.

This implies |X| ≤ |A| + |B| = |A ∪ B|, meaning that the size of any x, y separating
set must be at least as large as the number of vertices in X. By the vertex version of

Menger’s theorem (Theorem 72), there are at least |X| vertex disjoint paths from x
to y. These vertex disjoint paths correspond to a matching that saturates X.

Example 78. A standard deck of playing cards is shuffled and sorted into 13 piles of
4 cards. Why is it possible to take one card from each pile to form the set {A, . . . , K}?
Create a bipartite graph with one set of vertices given by the cards A, . . . , K, the
second set of vertices given by the piles p1 , . . . , p13 , and with an edge from a card to
a pile if that card appears in the pile. For instance, one such graph is shown here

A 2 3 4 5 6 7 8 9 10 J Q K

p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13

In this example, an A appears in piles p1 , p2 , p3 and p13 , a 2 appears in piles p4 , p5 , p6


and p13 , and so on.
If S is a subset of {A, . . . , K}, then there are 4|S| cards that must appear in at
least |S| different piles since each pile contains 4 cards. Thus |S| ≤ |N(S)| and we
have verified Hall’s matching condition. The matching that saturates {A, . . . , K}
corresponds to the desired ability to take one card from each pile to form the set
{A, . . . , K}.

Definition. A matching M is perfect if every vertex is incident to an edge in M.


Example 79. A perfect matching for the Petersen graph is indicated below:

Theorem 80. A bipartite graph in which every vertex has degree k ≥ 1 has a perfect matching.

Proof. Let G have independent sets X and Y. There are k|X| edges leaving X and k|Y|
edges leaving Y, and so |X| = |Y|.
Suppose Hall’s condition fails, meaning that there is a subset of vertices S from
X such that |S| > |N(S)|. Each vertex in S is adjacent to k vertices in N(S), so the total
collection of vertices in N(S) is adjacent to at least k|S| vertices. Thus there must be
a vertex in N(S) which has degree more than S, a contradiction.

Definition. A covering for a graph G is a set of vertices X such that every edge in G is
incident to a vertex in X.

Example 81. Below we indicate a covering for the bipartite graph in Example 75:

1 2 3 4

5 6 7 8 9 10

Theorem 82 (Kőnig). The maximum number of edges in a matching in a bipartite


graph is equal to the minimum number of vertices in a covering.

Proof. Let Q be a vertex cover that uses the minimum number of vertices. If M is
any matching, then |M| ≤ |Q| because each edge in a matching is incident to at
least one vertex in Q. To complete the proof we will show that there is a matching
M with |M| = |Q|, showing that equality can be achieved.
Let G have independent sets X and Y. Let Gx be the subgraph of G consisting of the edges
in G that connect vertices in Q ∩ X with vertices in Y − Q. For example, the graph Gx
coming from Example 81 is shown below
1 2 3 4

5 6 7 8 9 10

Let S be a subset of vertices in Q ∩ X in Gx . It follows that N(S) is a subset of Y − Q


in Gx that satisfies |S| ≤ |N(S)| because otherwise we can replace S with N(S) in Q
to find a covering for G that uses fewer vertices than Q. Thus, by Hall’s matching
condition (our Theorem 76), there is a matching Mx that saturates Q ∩ X.
Using similar logic on the graph Gy formed using the edges in G that connect vertices in
Q ∩ Y with vertices in X − Q, we find a matching My that saturates Q ∩ Y. Our desired
matching is Mx ∪ My .

Theorem 83 (Tutte). For any subset S of vertices in a graph G, let oddG (S) denote the
number of components of G − S that have an odd number of vertices. Then G has a
perfect matching if and only if oddG (S) ≤ |S| for all subsets S of vertices.

Proof. Assume that G has a perfect matching. Each of the odd components in G − S
must have vertices matched to distinct vertices in S, and so oddG (S) ≤ |S| for all
subsets S of vertices.
Now assume that oddG (S) ≤ |S| for all subsets S of vertices. We will prove that
G has a perfect matching using induction on the number of vertices in G.
By taking S as the empty set, we see that oddG (∅) ≤ 0, meaning that G must
have an even number of vertices. By counting vertices it follows that the parity of
|S| and oddG (S) are the same for any subset S and thus |S| and oddG (S) cannot differ
by 1. This permits us to break the problem into the following two cases.

Case 1: Every subset S of vertices satisfies oddG (S) ≤ |S| − 2.


Let u and v be adjacent vertices in G, let G′ = G−u−v, and let T be any subset
of vertices in G′ . Then we have

oddG′ (T) = oddG (T ∪ {u, v}) ≤ |T ∪ {u, v}| − 2 = |T|



and so we can form a perfect matching in G by matching u and v and then


finding a perfect matching in G′ by induction.
Case 2: There is a subset S of vertices such that oddG (S) = |S|.
Take such an S with the maximum possible number of vertices. Each com-
ponent C1 , . . . , C|S| of G − S must have an odd number of vertices because
otherwise we can remove one vertex from an even sized component and add
it to S, thereby increasing the size of S.

C1 C2 C3 C|S|

...

...

Let G′ be the bipartite graph with independent sets {C1 , . . . , C|S| } and S and
with an edge between Ci and vj in G′ if there is an edge from component Ci to
vj in G.

C1 C2 C3 ... C|S|

...

Let T be a subset of {C1 , . . . , C|S| } and let N(T) be the vertices in S that are
adjacent to a vertex in T in G′ . We have |T| ≤ oddG (N(T)) because each com-
ponent in T is an odd sized component counted by oddG (N(T)). Using the set
N(T) in the hypothesis of the theorem, we have

|T| ≤ oddG (N(T)) ≤ |N(T)|

for all subsets T of {C1 , . . . , C|S| }. By Hall’s matching condition there is a match-
ing M′ for G′ that saturates {C1 , . . . , C|S| } and, since oddG (S) = |S|, this match-
ing also saturates S.
Let C be the graph found by taking one of the components Ci in G − S and removing
the vertex v in Ci that the matching M′ pairs with a vertex of S. To extend the matching M′
to a perfect matching for G we need to find a perfect matching for C. This can be done by
induction provided oddC (U) ≤ |U| for all subsets U of vertices in C.

Suppose to the contrary that oddC (U) > |U|. Since these quantities have the
same parity they cannot differ by 1 and so oddC (U) ≥ |U| + 2. Thus we have

oddG (S ∪ U ∪ {v}) = oddG (S) + oddC (U) − 1


≥ |S| + |U| + 1
= |S ∪ U ∪ {v}|,

meaning that S is not the set with the maximum number of vertices that sat-
isfies oddG (S) = |S| as it could be replaced by S ∪ U ∪ {v}. This completes the
proof.

Example 84. Let G be a graph such that every vertex has degree 3 and such that
ε(G) ≥ 2. We can use Theorem 83 to show that G has a perfect matching.
Let S be any set of vertices and let H be an odd component of G − S. We have

(the sum of degrees in H) = 3|H| − (the number of edges from H to S)

Since (the sum of degrees in H) is even and 3|H| is odd, there are an odd number
of edges from H to S. Using the fact that ε(G) ≥ 2, there must be at least 3 edges
between H and S.
Since each odd component connects to S with at least 3 edges and since every vertex
in S has degree 3, the odd components account for at least 3 · oddG (S) edge endpoints in S
while S has only 3|S| edge endpoints available, and so oddG (S) ≤ |S|, as needed.
Definition. Let mG (k) denote the number of matchings for G that have exactly k
edges. The matching polynomial for the graph G with n vertices is

    MG (x) = ∑_k (−1)^k mG (k) x^{n−2k} .

Example 85. All possible matchings of C6 are shown below:


    (figure: the 18 matchings of C6 , each drawn on the vertices 1, . . . , 6)

One of these matchings has 0 edges, 6 matchings have 1 edge, 9 matchings have 2
edges, and 2 matchings have 3 edges. Therefore the matching polynomial for C6 is

MC6 (x) = x6 − 6x4 + 9x2 − 2.



Theorem 86. Suppose G has n ≥ 3 vertices and e = {u, v} is an edge in G. Then


MG (x) = MG−e (x) − MG−u−v (x).
Proof. By counting whether or not e is used in a matching, we have
mG (k) = mG−e (k) + mG−u−v (k − 1)
and therefore

    MG (x) = ∑_k (−1)^k mG (k) x^{n−2k}
           = ∑_k (−1)^k mG−e (k) x^{n−2k} + ∑_k (−1)^k mG−u−v (k − 1) x^{n−2k}
           = MG−e (x) − ∑_k (−1)^{k−1} mG−u−v (k − 1) x^{(n−2)−2(k−1)}
           = MG−e (x) − MG−u−v (x).
Example 87. Using Theorem 86 on the first edge in a path graph, we have that the
matching polynomial for Pn+1 satisfies the recursion
MPn+1 (x) = xMPn (x) − MPn−1 (x)
for n ≥ 2 with the initial conditions that MP0 (x) = 1 and MP1 (x) = x. These polyno-
mials are related to the Chebyshev polynomials of the second kind, defined by the
recursion
Un+1 (x) = 2xUn (x) − Un−1 (x)
with U0 (x) = 1 and U1 (x) = 2x. Comparing these recursions shows MPn (x) =
Un (x/2).
Example 88. Using Theorem 86 on cycle graphs, we have MCn (x) = MPn (x)−MPn−2 (x)
for n ≥ 3. These polynomials are related to the Chebyshev polynomials of the first
kind, defined by
    Tn (x) = ( Un (x) − Un−2 (x) ) / 2 .
Comparing these expressions shows that MCn (x) = 2Tn (x/2).
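
The recursions in Examples 87 and 88 can be run directly. Here is a minimal sketch (assuming Python with the numpy library, whose poly1d objects provide the needed polynomial arithmetic) that reproduces MC6 (x) from Example 85:

    import numpy as np

    def matching_poly_path(n):
        """Matching polynomial of P_n via M_{P_{n+1}} = x M_{P_n} - M_{P_{n-1}}."""
        polys = [np.poly1d([1]), np.poly1d([1, 0])]   # M_{P_0} = 1, M_{P_1} = x
        while len(polys) <= n:
            polys.append(np.poly1d([1, 0]) * polys[-1] - polys[-2])
        return polys[n]

    def matching_poly_cycle(n):
        """Matching polynomial of C_n via M_{C_n} = M_{P_n} - M_{P_{n-2}} for n >= 3."""
        return matching_poly_path(n) - matching_poly_path(n - 2)

    print(matching_poly_cycle(6))   # x^6 - 6 x^4 + 9 x^2 - 2, matching Example 85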
Theorem 89. Let G have n ≥ 3 vertices. Then for any vertex u we have

    MG (x) = xMG−u (x) − ∑_{v adjacent to u} MG−u−v (x).

Proof. If u has degree 0, then MG (x) = xMG−u (x). We continue by induction on the
degree of u.
If e = {u, w} is an edge in G, then by Theorem 86 we have
    MG (x) = MG−e (x) − MG−u−w (x)
           = xMG−u (x) − ∑_{v ≠ w adjacent to u} MG−u−v (x) − MG−u−w (x)
           = xMG−u (x) − ∑_{v adjacent to u} MG−u−v (x).

Example 90. Using Theorem 89 on complete graphs, we have MK0 (x) = 1, MK1 (x) = x,
and MKn (x) = xMKn−1 (x) − (n − 1)MKn−2 (x) for n ≥ 2. These polynomials are related to
the probabilist's Hermite polynomials, defined by

    Hn (x) = xHn−1 (x) − (n − 1)Hn−2 (x)

with the same initial conditions as MKn (x). Comparing these expressions shows that
MKn (x) = Hn (x).
8
The adjacency matrix

In this chapter we assume knowledge of basic operations in matrix algebra that


are usually found in a first course on the topic: Matrix multiplication, properties of
transposes, linear independence, finding eigenvalues and eigenvectors, the char-
acteristic polynomial, and diagonalization.
The most interesting connections between graph theory and matrix algebra use
theorems that students who have only taken a single matrix algebra course may not
have seen yet. When such a theorem is needed, we will simply state the theorem
without proof. Proofs of these theorems can be found in most matrix algebra
texts.
Definition. Let G have vertices v1 , . . . , vn . The adjacency matrix A(G) is the n × n
matrix with i, j entry equal to 1 if there is an edge from vj to vi and 0 otherwise. If we
have a graph with directed or weighted edges, then this i, j entry is the weight of the
edge from vj to vi .
Example 91. A graph G and its adjacency matrix A(G) are shown below:

    (figure: the graph G drawn on the vertices 1, . . . , 5)

    A(G) =
      [ 0  1  1  0  0 ]
      [ 1  0  1  0  0 ]
      [ 1  1  0  1  1 ]
      [ 0  0  1  0  1 ]
      [ 0  0  1  1  0 ]

Theorem 92. The i, j entry of A(G)k is the number of walks of length k that start at vi
and end at vj .
Proof. We show this by induction on k with the assertion true when k = 1. If we let
Ai,j denote the i, j entry of the matrix A, then the definition of matrix multiplication
gives that the i, j entry of A(G)k+1 = A(G)A(G)k is

    ∑_{ℓ=1}^{n} A(G)_{i,ℓ} (A(G)^k)_{ℓ,j}
        = ∑_{ℓ=1}^{n} ( 1 if vi , vℓ are adjacent and 0 if not ) · (# walks of length k from vℓ to vj )
        = (# walks of length k + 1 from vi to vj ).


Example 93. Continuing the example in Example 91, we have


   
    A(G)^2 =
      [ 2  1  1  1  1 ]
      [ 1  2  1  1  1 ]
      [ 1  1  4  1  1 ]
      [ 1  1  1  2  1 ]
      [ 1  1  1  1  2 ]

    and    A(G)^3 =
      [ 2  3  5  2  2 ]
      [ 3  2  5  2  2 ]
      [ 5  5  4  5  5 ]
      [ 2  2  5  2  3 ]
      [ 2  2  5  3  2 ]

This says, for example, there are 4 walks of length 2 from vertex 3 back to vertex 3
in G and there are 2 walks of length 3 from vertex 1 to vertex 5.
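
These computations are easy to reproduce. A minimal sketch (assuming Python with the numpy library) that builds A(G) from Example 91 and counts walks by taking matrix powers:

    import numpy as np

    # Adjacency matrix of the graph in Example 91 (vertices 1, ..., 5).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 1],
                  [0, 0, 1, 0, 1],
                  [0, 0, 1, 1, 0]])

    A2 = np.linalg.matrix_power(A, 2)
    A3 = np.linalg.matrix_power(A, 3)
    print(A2[2, 2])   # 4 walks of length 2 from vertex 3 back to itself (index 2)
    print(A3[0, 4])   # 2 walks of length 3 from vertex 1 to vertex 5 (indices 0 and 4)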
Theorem 94. Let tr(A) denote the matrix trace (the sum of the diagonal entries in A).
Then
a. tr(A(G)2 )/2 is equal to the number of edges in G, and
b. tr(A(G)3 )/6 is equal to the number of triangles (cycles of length 3) in G.

Proof. The i, i diagonal entry in A(G)^2 gives the number of walks of length 2 from vi back to
itself, which is the number of edges incident to vi . Summing the diagonal elements in
A(G)^2 therefore counts every edge twice.
Similarly, every walk from vi to itself of length 3 counts a triangle. Each triangle
is counted six times in the trace of A(G)3 , twice for each of the three vertices in the
triangle.

If G is a simple graph, then the adjacency matrix A(G) is a real symmetric matrix
(meaning A(G)⊤ = A(G)). Real symmetric matrices are the easiest class of matrices
to understand. There are a number of theorems that give great information about
real symmetric matrices. One such theorem is the spectral theorem, stated below
without proof.
Theorem 95 (Spectral theorem). If A is a real valued symmetric matrix, then all eigen-
values of A are real and there is an orthonormal basis of eigenvectors. This implies A
can be diagonalized using an orthogonal matrix P, which says that
 
    A = P diag( λ1 , λ2 , . . . , λn ) P^{-1}

where λ1 , . . . , λn are the real eigenvalues of A and where P−1 = P⊤ .


Definition. A graph G has eigenvalue λ and eigenvector v if λ is an eigenvalue and
v is an eigenvector for the adjacency matrix A(G). This means that A(G)v = λv.
Example 96. Continuing the example in Example 91 and doing the calculations on
a computer algebra system, we find the eigenvalues for G are
    (1 + √17)/2,   1,   −1,   −1,   (1 − √17)/2.

If we take the matrices P and D to equal


   1+√17 
1 −1 0 −1 1 2 0 0 0 0
 1√ −1 0 1 1√   0 1 0 0 0 
   
P=
− 2
1− 17
0 0 0 − 1+2 17  
,D =  0 0 −1 0 0 ,

 1 1 −1 0 1   0 0 0 −1 0√ 
1 1 1 0 1 1− 17
0 0 0 0 2

Then we have A(G) = PDP−1 . Now that we have diagonalized the adjacency matrix
it is relatively easy to find powers of the matrix A. Indeed,
( √ )k 
1+ 17
 2 0 0 0 0 
 k 
 0 1 0 0 0 
k k −1   −1
A(G) = PD P = P  0 0 (−1)k 0 0 P
 
 0 0 0 (−1)k 0 
 ( √ )k 
1− 17
0 0 0 0 2

By explicitly doing the above matrix multiplication, we can find formulas for the
number of walks from one vertex to another. For example, using a computer alge-
bra system, we find that the row 1 column 3 entry of A(G)k is
    (1/√17) ( ((1 + √17)/2)^k − ((1 − √17)/2)^k ) ,

so this gives the number of walks of length k from vertex 1 to vertex 3 in G. As another
example, the trace of A(G)^k gives the number of walks of length k that start and end
at the same vertex. Since the trace of a matrix satisfies tr(P D^k P^{-1}) = tr D^k , the
number of walks of length k that start and end at the same vertex is

    ((1 + √17)/2)^k + 1^k + (−1)^k + (−1)^k + ((1 − √17)/2)^k .

Theorem 97. If G has eigenvalues λ1 , . . . , λn , then the number of walks of length k that
start and end at the same vertex is λ1^k + · · · + λn^k .

Proof. Since A(G) is symmetric, it can be diagonalized. Let P be a matrix such that
A(G) = PDP−1 where D is a diagonal matrix with the eigenvalues λ1 , . . . , λn along
the diagonal. Then A(G)k = (PDP−1 )k = PDk P−1 has trace equal to tr Dk . The matrix
Dk has trace λk1 + · · · + λkn , so we are done by Theorem 92.
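
A quick numerical check of Theorem 97, in the same spirit (assuming Python with the numpy library and the matrix A(G) from Example 91):

    import numpy as np

    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 1],
                  [0, 0, 1, 0, 1],
                  [0, 0, 1, 1, 0]])

    eigenvalues = np.linalg.eigvalsh(A)   # real eigenvalues of the symmetric matrix A
    for k in range(1, 5):
        trace = np.trace(np.linalg.matrix_power(A, k))   # closed walks of length k
        power_sum = np.sum(eigenvalues ** k)             # lambda_1^k + ... + lambda_n^k
        print(k, trace, round(power_sum))                # the two counts agree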

Example 98. If the weight of a walk is the product of the edge weights along the
walk, then Theorem 97 still holds for directed graphs and for graphs with weighted edges, provided
the adjacency matrix is still diagonalizable. For example, the network N shown be-
low

    (figure: the weighted directed network N on the vertices 1, . . . , 5; its edge weights are
    recorded in the adjacency matrix below)

has adjacency matrix


 
    A(N) =
      [ 0    1    1/4  0    1/2 ]
      [ 2/3  0    1/4  0    0   ]
      [ 0    0    1/4  1    0   ]
      [ 0    0    0    0    1/2 ]
      [ 1/3  0    1/4  0    0   ]

that is diagonalizable with eigenvalues λ1 , . . . , λ5 that are approximately

1, 0.361613, −0.899806, −0.105904 + 0.341818i, −0.105904 − 0.341818i

and so the result in Theorem 97 holds for this graph. In particular, the number of
walks of length 2 that start and end at the same vertex is equal to

    λ1^2 + · · · + λ5^2 = 83/48.

This is the sum of appropriately weighted walks of length 2 that start and end at the
same vertex, which is also equal to

    (2/3)·1 + (1/2)·(1/3) + 1·(2/3) + (1/4)·(1/4) + (1/3)·(1/2) = 83/48.
Definition. A network is strongly connected if for every pair of vertices u, v there
exists a walk from u to v in which no edge has weight 0. A probability vector is a
vector that has nonnegative components that sum to 1.
The Perron-Frobenius theorem gives interesting information about the eigen-
values and eigenvectors of square matrices with nonnegative entries, which is rele-
vant since the adjacency matrix for any network is such a matrix. We state the next
theorem without proof.
Theorem 99 (Perron-Frobenius). If A is the adjacency matrix for a strongly connected
network, then the following statements are true.
a. There is a positive real number λmax called the Perron value such that λmax is
an eigenvalue for A and such that every eigenvalue λ of A satisfies |λ| ≤ λmax .
b. There is a vector v with strictly positive entries called the Perron vector such
that v is both a probability vector and an eigenvector with eigenvalue λmax .

c. Any other eigenvector of A with eigenvalue λmax is a scalar multiple of the Per-
ron vector.
d. No other eigenvector of A besides scalar multiples of the Perron vector can have
all positive components.

Example 100. The graph in Example 91 has largest eigenvalue (1 + 17)/2 and so
this is the Perron value. The Perron vector is
 √   
7 − √17 0.18
 7 − 17   0.18 
1  √  
4 17 − 12 ≈ 0.28 .

16  √  
 7 − 17   0.18 


7 − 17 0.18

Example 101. The graph in Example 98 has largest eigenvalue 1 and so this is the
Perron value. All other eigenvalues have complex magnitude less than 1. The Perron
vector is

    (1/39) [ 15   11   4   3   6 ]⊤ ≈ [ 0.385   0.282   0.103   0.077   0.153 ]⊤ .
If A is the adjacency matrix for a strongly connected network with Perron value
λmax and Perron vector v, then v satisfies Av/λmax = v, meaning that v is a fixed
point under multiplication by A/λmax . This is part of the reason why the Perron vec-
tor arises in applications as it can be found by repeatedly multiplying by A/λmax .
Indeed, if y = limk→∞ Ak x/λkmax exists for some vector x, then this limit is a multi-
ple of the Perron vector. This is because
    Ay/λmax = lim_{k→∞} A^{k+1} x / λmax^{k+1} = y,

meaning that y is an eigenvector for A with eigenvalue λmax , and so y is a scalar


multiple of the Perron vector.
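
This repeated multiplication is easy to carry out numerically. The following minimal sketch (assuming Python with the numpy library) applies it to the matrix A(N) from Example 98; since that matrix has Perron value 1 and every other eigenvalue of smaller magnitude, the iteration converges to the Perron vector from Example 101. Normalizing by the vector sum at each step plays the role of dividing by λmax here:

    import numpy as np

    # Adjacency matrix of the network N in Example 98.
    A = np.array([[0,     1, 0.25, 0, 0.5],
                  [2 / 3, 0, 0.25, 0, 0  ],
                  [0,     0, 0.25, 1, 0  ],
                  [0,     0, 0,    0, 0.5],
                  [1 / 3, 0, 0.25, 0, 0  ]])

    x = np.ones(5) / 5               # start from any probability vector
    for _ in range(200):
        x = A @ x
        x = x / x.sum()              # keep the iterate a probability vector
    print(np.round(x, 3))            # approaches (1/39)(15, 11, 4, 3, 6), the Perron vector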
Example 102. Columbia river salmon can live for three years. A one year old salmon
produces 4 eggs on average, two year old salmon produce 20, and three year old
salmon produce 60. The weights on the remaining edges in the network below give
the percentage of salmon that make it to the next age:
    (figure: the life-cycle network with nodes egg, year 1, year 2, and year 3; eggs become
    year 1 salmon at rate 0.05, year 1 salmon survive to year 2 at rate 0.3, year 2 salmon
    survive to year 3 at rate 0.6, and year 1, 2, and 3 salmon produce 4, 20, and 60 eggs)

   
The adjacency matrix

    [ 0     4    20   60 ]
    [ 0.05  0    0    0  ]
    [ 0     0.3  0    0  ]
    [ 0     0    0.6  0  ]

has Perron vector ≈ [ 0.932   0.046   0.014   0.008 ]⊤ with Perron value ≈ 1.01187.
The Perron vector gives the long term age distribution of salmon. A salmon
picked at random will be an egg with probability 0.932, a year 1 salmon with
probability 0.046, a year 2 salmon with probability 0.014, and a year 3 salmon with
probability 0.008. If we do not wish to count an egg as a salmon, then rescaling the
probabilities gives 0.68 for year 1, 0.2 for year 2, and 0.12 for year 3.
The Perron value tells us the rate at which the salmon population is growing.
Each year there are approximately 1.187% more salmon than the previous year.
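
The numbers quoted in this example can be recomputed directly. A minimal sketch (assuming Python with the numpy library):

    import numpy as np

    # Rows and columns ordered egg, year 1, year 2, year 3.
    A = np.array([[0,    4,   20,  60],
                  [0.05, 0,   0,   0 ],
                  [0,    0.3, 0,   0 ],
                  [0,    0,   0.6, 0 ]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    i = np.argmax(eigenvalues.real)               # the Perron value is the largest eigenvalue
    perron_value = eigenvalues[i].real
    perron_vector = eigenvectors[:, i].real
    perron_vector = perron_vector / perron_vector.sum()   # rescale to a probability vector
    print(round(perron_value, 5))                 # approximately 1.01187
    print(np.round(perron_vector, 3))             # approximately [0.932, 0.046, 0.014, 0.008]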

Example 103. The Perron vector can be used to rank sports teams. Let G be the
graph with nodes the 30 NBA basketball teams. Draw an edge from team i to team
j if team j has a better record in the games that the two teams have played. Do
not draw an edge if they have not played or if they have split the games they have
played. For example, this graph for the 2019–2020 Covid-19 shortened season is
shown below:

DEN DAL
DET CLE
GSW CHI

HOU CHO

IND BRK

LAC BOS

LAL ATL

MEM WAS

MIA UTA

MIL TOR

MIN SAS

NOP SAC

NYK POR
OKC PHO
ORL PHI

Create a network from this graph such that if there are n out edges leaving
team i in this graph, then each of these edges is weighted 1/n. Consider a random
walk in this network, meaning that we start at a single team in the graph and then
repeatedly follow edges with the probabilities given by the edge weights.
Since each step in this walk moves from a losing team to a winning team, we can
expect to land on better teams more often in this random walk. The Perron vector
gives us the limiting probabilities that we would land on each team in the random
walk, so the Perron vector gives us our ranking of teams. The approximate ranking
for the 2019–2020 regular season is shown below:

0.075 LAC 0.040 UTA 0.021 SAC


0.071 MIL 0.039 OKC 0.017 CLE
0.069 HOU 0.030 SAS 0.016 WAS
0.065 LAL 0.028 PHI 0.016 ORL
0.062 MIA 0.028 NOP 0.015 MEM
0.058 TOR 0.027 DET 0.014 PHO
0.053 DAL 0.026 POR 0.011 NYK
0.051 DEN 0.023 CHI 0.005 CHO
0.047 IND 0.023 BRK 0.003 ATL
0.044 BOS 0.022 MIN 0.002 GSW

If the nodes in a graph are web pages and the edges between pages indicate links,
then a slightly modified version of the ranking method using the Perron vector in
the NBA basketball example was famously used by Google to rank web pages by
importance when sorting search results.
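
A sketch of the ranking computation is below (assuming Python with the numpy library). The three teams and their results here are made up purely for illustration and are not taken from the 2019–2020 season; the real computation applies the same steps to all 30 teams:

    import numpy as np

    # Hypothetical teams and results, for illustration only.
    teams = ["LAC", "MIL", "ATL"]
    beats = {"ATL": ["LAC"], "LAC": ["MIL"], "MIL": ["ATL", "LAC"]}   # beats[i] = teams that beat i

    index = {t: k for k, t in enumerate(teams)}
    A = np.zeros((len(teams), len(teams)))
    for loser, winners in beats.items():
        for winner in winners:
            # edge from the loser to the winner, weighted by 1/(out degree of the loser)
            A[index[winner], index[loser]] = 1 / len(winners)

    eigenvalues, eigenvectors = np.linalg.eig(A)
    v = eigenvectors[:, np.argmax(eigenvalues.real)].real   # eigenvector for the Perron value
    v = v / v.sum()                                         # rescale to a probability vector
    for score, team in sorted(zip(v, teams), reverse=True):
        print(round(score, 3), team)                        # the ranking, best teams first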

Theorem 104. A graph G with n vertices is bipartite if and only if there is a relabeling
of the vertices such that the adjacency matrix has the form
    [ 0    C ]
    [ C⊤   0 ]

for some k × (n − k) matrix C where 0 is the matrix of 0’s.

Proof. Assume that G is bipartite. By possibly relabeling the vertices we can assume
that the independent sets are vertices labeled 1, . . . , k and k + 1, . . . , n for some k.
Then, since there are no edges that connect vertices within the independent sets,
the adjacency matrix for G has the desired form.
Now assume that the adjacency matrix for G has the desired form. This
means that there are no edges within the sets of vertices labeled 1, . . . , k and
k + 1, . . . , n, so these two sets are independent sets, as needed.

Theorem 105. Let G be a connected graph with Perron value λmax and Perron vector
v. Then G is bipartite if and only if −λmax is an eigenvalue of G.

Proof. Suppose G is bipartite. Possibly relabel the vertices of G so that the adja-
cency matrix A for G has the form in the statement of Theorem 104 for some k ×

(n − k) matrix C. Suppose λ is an eigenvalue for A with eigenvector x = [ y   z ]⊤
where y is a k × 1 vector and z is an (n − k) × 1 vector.
Then the equation Ax = λx implies Cz = λy and C⊤ y = λz. Then we have

    A [ −y ]   [ 0    C ] [ −y ]   [ Cz    ]   [ λy  ]        [ −y ]
      [  z ] = [ C⊤   0 ] [  z ] = [ −C⊤ y ] = [ −λz ] = −λ   [  z ] ,

showing that −λ is an eigenvalue of G. In particular, −λmax is an eigenvalue of G.


Now suppose −λmax is an eigenvalue of G with eigenvector x = [ x1 · · · xn ]⊤
and let A be the adjacency matrix for G. Then if |x| denotes the vector [ |x1 | · · · |xn | ]⊤ ,
we have

    λmax |x| = | − λmax x| = |Ax| ≤ A|x|.
The inequality in the above equation means that each component of λmax |x| is less
than or equal to the corresponding component in A|x|. However, this inequality is
actually an equality: if there were a component for which the strict inequality
holds then, because the vector v has strictly positive components, we would have

    λmax v⊤ |x| < v⊤ A|x| = |x|⊤ A⊤ v = |x|⊤ (λmax v) = λmax v⊤ |x|,

where the equalities use that A is symmetric and Av = λmax v. This cannot happen and so
we have A|x| = λmax |x|.
Therefore |x| is a scalar multiple of the Perron vector and cannot have a com-
ponent equal to 0. By possibly relabeling the vertices of G we can assume without
loss of generality that x1 , . . . , xk are all positive and xk+1 , . . . , xn are all negative. Let
   
    x+ = [ x1 · · · xk ]⊤ ,    x− = [ xk+1 · · · xn ]⊤ ,    and    A = [ B    C ]
                                                                       [ C⊤   D ]
where B, C, and D are block matrices such that B is a k × k matrix, C is a k × (n − k)


matrix, and D is (n − k) × (n − k).
Using block matrix multiplication, the equation Ax = −λmax x tells us that

    Bx+ + Cx− = −λmax x+    and    C⊤ x+ + Dx− = −λmax x− .

The equation A|x| = λmax |x| with the observation that |x| = [ x+   −x− ]⊤ gives

    Bx+ − Cx− = λmax x+    and    C⊤ x+ − Dx− = −λmax x− .

Combining these two expressions shows Bx+ = 0 and −Dx− = 0. Since B and D
are matrices with nonnegative entries and since x+ and −x− have strictly positive
entries, the matrices B and D must be zero matrices. By Theorem 104, G is bipartite.

The proof of Theorem 105 says that if G is bipartite, then the positive and nega-
tive entries of the eigenvector that corresponds to eigenvalue −λmax partition the
graph into the independent sets.

The next well-known linear algebra theorem that we state without proof is used
with some frequency when finding bounds on the eigenvalues for real symmetric
matrices.
Theorem 106 (Courant-Fischer). If A is a real symmetric matrix with eigenvalues
λmax ≥ λ2 ≥ · · · ≥ λn−1 ≥ λmin , then

λmax = max x⊤ Ax and λmin = min x⊤ Ax

where the maximum and minimum are taken over unit vectors x (that is, x⊤ x = 1).
Furthermore, if λmax has eigenvector v1 and λmin has eigenvector vn , then the second
largest and second smallest eigenvalues satisfy

λ2 = max x⊤ Ax and λn−1 = min x⊤ Ax

where the maximum and minimum are taken over all unit vectors x that are also or-
thogonal to v1 and vn , respectively (that is, x⊤ v1 = 0 for the maximum and x⊤ vn =
0 for the minimum).
One of our first applications of Theorem 106 is a bound on the maximum eigen-
value of G, the content of our next theorem.
Theorem 107. Let λmax be the maximum eigenvalue for a graph G with n vertices
and let d be the maximum degree in G. Then

(the average vertex degree in G) ≤ λmax ≤ d.

Furthermore, under the added hypothesis that G is connected, the equality λmax = d
holds if and only if every vertex in G has degree d.

Proof. Let 1 be the vector of all 1's and A be the adjacency matrix for G. Using the
unit vector (1/√n)1 in Theorem 106, we have

    λmax ≥ ( (1/√n)1 )⊤ A ( (1/√n)1 ) = (the sum of the entries in A) / n ,

which is equal to the average vertex degree, showing the lower bound on λmax .
Let v = [ v1 · · · vn ]⊤ be the Perron vector for A. Suppose that i is an index
such that vi is a maximum component of v. Then we have

    λmax vi = (component i in Av) = ∑_{vertex j adjacent to vertex i} vj ≤ d vi ,

showing that λmax ≤ d as needed.


If λmax = d, then the above inequality is an equality, implying vi = vj for all
vertices j that are adjacent to i. Repeating the above argument with i replaced with
a vertex j adjacent to i shows that all vertices k adjacent to j also have vi = vk .
Assuming G is connected, continuing in this manner shows that every coordinate in
the Perron vector is the same and therefore v = (1/n)1. The equation A(1/n)1 = (d/n)1

now implies that each row of A sums to d, meaning that every vertex in G has degree
d.
On the other hand, if every vertex in a connected graph G has degree d, then
A(1/n)1 = (d/n)1, showing that the Perron value λmax = d.
The intuition behind Theorem 107 is that the number of walks of length k grows at
a rate asymptotic to c λmax^k for some positive constant c. A walk has m choices to
leave a vertex of degree m, so if every vertex has degree d, then the number of walks
of length k grows at a rate asymptotic to d^k , giving evidence that λmax = d. If not
every vertex has degree d, then the number of walks still grows at a rate of roughly
a^k where a is the average degree, giving evidence that a ≤ λmax .
Theorem 108 (Wilf). If λmax is the largest eigenvalue for G, then the chromatic num-
ber satisfies χ(G) ≤ λmax + 1.
Proof. Let H be a χ(G)-critical subgraph of G (see our exercise on critical subgraphs)
and let λmax (H) be the largest eigenvalue for H. By the exercises, the minimum
degree in H is at least χ(G) − 1. Theorem 107 now gives
χ(G) − 1 ≤ (the average degree in H) ≤ λmax (H) ≤ λmax (G)
where the last inequality is the content of our exercise on bounding the eigenvalues
of subgraphs.
Theorem 108 provides an upper bound on the chromatic number. Since up-
per bounds can be found by simply providing some random proper coloring of the
graph, lower bounds are generally more interesting. Lower bounds on the chro-
matic number that involve the eigenvalues of the adjacency matrix exist, as we state
in the next Theorem. The proof relies on more specialized techniques in linear al-
gebra and so it is omitted.
Theorem 109 (Hoffman). If λmax and λmin are the maximum and minimum eigen-
values of G, then 1 − λmax /λmin ≤ χ(G).
Example 110. The matching polynomial for the tree T shown below
9 4 2 3 5 8

1 6 7

is MT (x) = x9 −8x7 +18x5 −12x3 +2x. The characteristic polynomial for the adjacency
matrix A for T is equal to det(A(T) − xI) where I is the identity matrix. When this
calculation is carried out, we find the characteristic polynomial is −x9 +8x7 −18x5 +
12x^3 − 2x. The characteristic polynomial is equal to MT (−x).
Theorem 111 shows that the relationship between the matching polynomial for
the tree and the characteristic polynomial for the adjacency matrix in Example 110
was not an accident. The proof is not difficult for those who have seen the determi-
nant written as a sum over permutations in the symmetric group, but we choose to
omit the proof because introducing the background material needed for the proof
is beyond the scope of this course.

Theorem 111. Let MG (x) be the matching polynomial for G. Then G has no cycles if
and only if MG (−x) is the characteristic polynomial for the adjacency matrix for G.
This chapter has shown how results from matrix algebra can be applied to the
adjacency matrix to learn about the graph. The eigenvalues and eigenvectors for
the graph play an interesting role in the subject. We end this chapter by describing
one more result, relating the number of distinct eigenvalues to the diameter of the
graph.

Definition. The distance between vertices u and v is the length of the shortest path
from u to v. The diameter of G is the largest distance between two vertices in G.

Example 112. The diameter of C12 is 6. The eigenvalues of C12 are


    2, √3, √3, 1, 1, 0, 0, −1, −1, −√3, −√3, −2.

There are 12 eigenvalues but only 7 distinct eigenvalues.


More generally, the diameter of Cn is n/2 if n is even and (n − 1)/2 if n is odd and
it can be shown that the number of distinct eigenvalues of Cn is n/2 + 1 if n is even
and (n − 1)/2 + 1 if n is odd.

The proof of Theorem 114 relies on yet another result from matrix algebra that
gives evidence that real symmetric matrices are the best possible matrices to un-
derstand.

Theorem 113. If A is a real symmetric matrix with distinct eigenvalues λ1 , . . . , λk ,


then
(A − λ1 I) · · · (A − λk I) = 0
and no polynomial p(x) with a degree smaller than k has p(A) = 0. In other words,
the minimal polynomial for A is (x − λ1 ) · · · (x − λk ).

Theorem 114. The diameter of a connected graph G is less than the number of dis-
tinct eigenvalues of G.

Proof. Suppose the adjacency matrix A for G has k distinct eigenvalues. Theorem
113 implies that Ak is a linear combination of I, A1 , . . . , Ak−1 , meaning that there are
constants c0 , . . . , ck−1 such that

Ak = c0 I + c1 A1 + · · · + ck−1 Ak−1 .

It follows that Am is also a linear combination of I, A1 , . . . , Am−1 for all m ≥ k because

Ak+(m−k) = c0 Am−k + c1 Am−k+1 + · · · + ck−1 Am−1 .

Thinking about walks in G, if the diameter of G is d, then there are vertices u


and v such that the u, v entry of Ad is nonzero but that the u, v entry in each of
I, A1 , . . . , Ad−1 is 0. This means that Ad cannot be a linear combination of I, A1 , . . . , Ad−1
and therefore d < k.
9
The Laplacian

The adjacency matrix is a good matrix to use when understanding walks in a graph,
but for many other purposes the Laplacian matrix is a better tool.
Definition. Let G be a simple graph and let D be a directed graph created by arbitrar-
ily assigning a direction to each edge in G. An incidence matrix Q for G is the matrix
with rows indexed by edges, columns indexed by vertices, and with e, v entry equal to

    1    if e points to v in D,
    −1   if e leaves v in D,
    0    otherwise.
Example 115. If the graph G has its edges directed as shown below,
2 2
3 1 3 1

4 5 4 5
then listing the edges in the order {1, 2}, {2, 3}, {3, 1}, {3, 4}, {4, 5}, {5, 3} gives
the incidence matrix

    Q =
      [ −1   1   0   0   0 ]
      [  0  −1   1   0   0 ]
      [  1   0  −1   0   0 ]
      [  0   0  −1   1   0 ]
      [  0   0   0  −1   1 ]
      [  0   0   1   0  −1 ] .
Definition. The Laplacian for the graph G is the matrix L = Q⊤ Q for some incidence
matrix Q.
Example 116. Calculating Q⊤ Q using the matrix in Example 115, the Laplacian is
     
    [  2  −1  −1   0   0 ]   [ 2  0  0  0  0 ]   [ 0  1  1  0  0 ]
    [ −1   2  −1   0   0 ]   [ 0  2  0  0  0 ]   [ 1  0  1  0  0 ]
    [ −1  −1   4  −1  −1 ] = [ 0  0  4  0  0 ] − [ 1  1  0  1  1 ]
    [  0   0  −1   2  −1 ]   [ 0  0  0  2  0 ]   [ 0  0  1  0  1 ]
    [  0   0  −1  −1   2 ]   [ 0  0  0  0  2 ]   [ 0  0  1  1  0 ]


The eigenvalues for this Laplacian are 5, 3, 3, 1, 0.
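
The computation in this example can be checked numerically. A minimal sketch (assuming Python with the numpy library) that builds Q from the edges of Example 115, forms Q⊤Q, and recovers the eigenvalues 5, 3, 3, 1, 0:

    import numpy as np

    # Edges of the graph in Example 115; the orientation does not affect Q^T Q.
    edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)]
    n = 5

    Q = np.zeros((len(edges), n))
    for row, (tail, head) in enumerate(edges):
        Q[row, tail - 1] = -1        # the edge leaves its tail ...
        Q[row, head - 1] = 1         # ... and points to its head

    L = Q.T @ Q                      # the Laplacian
    print(L.astype(int))
    print(np.round(np.linalg.eigvalsh(L), 6))   # approximately [0, 1, 3, 3, 5]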


Theorem 117. First observations about the Laplacian L for a graph on n vertices are:
a. If D is the n × n diagonal matrix with the vertex degrees along the diagonal and
A(G) is the adjacency matrix for G, then L = D − A(G). This implies that the
Laplacian does not depend on how the edges were directed when finding Q.
b. The Laplacian L is a real valued symmetric matrix.
c. If x = [ x1 · · · xn ]⊤ , then x⊤ Lx = ∑_{{i,j} is an edge} (xi − xj )^2 .

d. The smallest eigenvalue µmin of L is equal to 0 with eigenvector (1/√n)1 where
1 is the vector of all 1's.

Proof. Statement a. comes from writing L = Q⊤ Q for some incidence matrix Q and
then using the definition of matrix multiplication. Statement b. is true because
L⊤ = (Q⊤ Q)⊤ = Q⊤ Q = L.
As for statement c., we have

    x⊤ Lx = x⊤ Q⊤ Qx = (Qx)⊤ (Qx).

The vector Qx is indexed by edges and has edge e = {i, j} entry equal to ±(xi − xj ).
Thus (Qx)⊤ (Qx) gives the squared length of this vector, which is the desired expression.
Finally, for statement d., the rows of L = D − A(G) sum to 0 and so (1/√n)L1 = 0,
showing that (1/√n)1 is an eigenvector with eigenvalue 0. Statement c. combined
with Theorem 106 gives that the minimum eigenvalue is the minimum of

    x⊤ Lx = ∑_{edges {i,j}} (xi − xj )^2

over unit vectors x, which is nonnegative, and so it is 0.

Let µ1 ≤ µ2 ≤ · · · ≤ µn be the eigenvalues of the Laplacian L. The second small-


est eigenvalue µ2 provides enough interesting information about the connectivity
of the graph that it warrants its own definition.
Definition. The algebraic connectivity of a graph G, denoted µ2 (G), is the second
smallest eigenvalue of the Laplacian matrix for G.
Since the vector 1 is an eigenvector with eigenvalue 0 for the Laplacian of a
graph with n vertices, Theorem 106 tells us that the algebraic connectivity µ2 satis-
fies

    µ2 = min x⊤ Lx = min ∑_{{i,j} is an edge} (xi − xj )^2

where the minimization is over unit vectors x = [ x1 · · · xn ]⊤ such that x⊤ 1 = 0
(or, equivalently, the components of the unit vector sum to 0). Therefore a common

approach to finding an upper bound on µ2 is to assign the real number xi to vertex


i in the graph such that x1 + · · · + xn = 0. Then
    µ2 ≤ (1/(x⊤ x)) ∑_{{i,j} is an edge} (xi − xj )^2

where the division by x⊤ x is present for the situation where x is not a unit vector.
Example 118. We show how to find an upper bound on the algebraic connectivity
of the graph in Example 115 by placing real numbers x1 , . . . , x5 that sum to 0 on
the vertices of the graph and labeling each edge with (xi − xj )^2 . One arbitrary choice is
    (figure: the graph with the values 2, 1, 0, −1, −2 placed on its vertices and each edge
    labeled with (xi − xj )^2)
and so µ2 ≤ (1^2 + 1^2 + 1^2 + 1^2 + 2^2 + 2^2)/(0^2 + 1^2 + 1^2 + 2^2 + 2^2) = 6/5. Another
choice is
    (figure: the same graph with the values 1, 1, 0, −1, −1 placed on the vertices 1, 2, 3, 4, 5
    and each edge labeled with (xi − xj )^2)
and so µ2 ≤ 4/4 = 1. This is an optimal labeling because [ 1  1  0  −1  −1 ]⊤ is
an eigenvector corresponding to the second smallest eigenvalue for the Laplacian
matrix for the graph in this example.
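
The optimal labeling can also be found numerically. A minimal sketch (assuming Python with the numpy library) that computes µ2 and a corresponding eigenvector from the Laplacian in Example 116:

    import numpy as np

    L = np.array([[ 2, -1, -1,  0,  0],
                  [-1,  2, -1,  0,  0],
                  [-1, -1,  4, -1, -1],
                  [ 0,  0, -1,  2, -1],
                  [ 0,  0, -1, -1,  2]])

    eigenvalues, eigenvectors = np.linalg.eigh(L)   # ascending eigenvalues of a symmetric matrix
    mu2 = eigenvalues[1]                            # the algebraic connectivity
    fiedler = eigenvectors[:, 1]                    # an eigenvector for mu_2
    print(round(mu2, 6))                                  # 1.0
    print(np.round(fiedler / np.abs(fiedler).max(), 3))   # proportional to [1, 1, 0, -1, -1]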
Theorem 119. If G is a graph with n vertices and S is a subgraph of G that has k ver-
tices, then
    µ2 ≤ ( n / (k(n − k)) ) E(S, G − S),
where E(S, G − S) denotes the number of edges between vertices in S and G − S.
Proof. Define x = [ x1 · · · xn ]⊤ such that xi = n − k if i is a vertex in S and xi = −k
if i is a vertex in G − S. Then we have x1 + · · · + xn = k(n − k) + (−k)(n − k) = 0 and

    (xi − xj )^2 = 0     if both i and j are in S,
    (xi − xj )^2 = 0     if both i and j are in G − S,
    (xi − xj )^2 = n^2   otherwise.

Therefore

    µ2 ≤ (1/(x⊤ x)) ∑_{{i,j} is an edge} (xi − xj )^2
       = n^2 E(S, G − S) / ( k(n − k)^2 + k^2 (n − k) )
       = ( n / (k(n − k)) ) E(S, G − S).

Theorem 119 says that a high algebraic connectivity means that there are many
edges between any set of vertices and the complementary set of vertices. Conversely,
a low algebraic connectivity means that it is relatively easy to disconnect the graph.
Indeed, as a corollary of Theorem 119, the algebraic connectivity of G is 0 if G is not
connected. More generally, it can be shown that the number of components of G is the
multiplicity of 0 as an eigenvalue of the Laplacian. The next two theorems reinforce this
intuition.
Theorem 120. If v is a vertex in a graph G with n vertices, then µ2 (G) ≤ µ2 (G−v)+1.

Proof. Let G′ be the graph created by possibly adding edges to G such that v is
connected to all other vertices. The Laplacian matrix satisfies

    L(G′) = [ L(G − v) + I      −1 ]
            [     −1⊤         n − 1 ]

where this is a block matrix, I is the identity matrix, 1 is the vector of all 1’s, and
where we are assuming without loss of generality that vertex v is written last.
Let w be an eigenvector for L(G − v) with eigenvalue µ2 (G − v), chosen to be orthogonal
to the all 1's vector. Then

    L(G′) [ w ] = (µ2 (G − v) + 1) [ w ] ,
          [ 0 ]                    [ 0 ]

showing that µ2 (G − v) + 1 is an eigenvalue for L(G′ ). This eigenvalue cannot be the


smallest eigenvalue for L(G′ ) because it is positive (since µ2 (G − v) ≥ 0), and so it
is at least the second smallest eigenvalue. Using our exercise on how removing an
edge can change the algebraic connectivity, we now have

µ2 (G) ≤ µ2 (G′ ) ≤ µ2 (G − v) + 1.

Theorem 121. If κ(G) is the vertex connectivity of G, then µ2 (G) ≤ κ(G).

Proof. Suppose V = {v1 , . . . , vκ(G) } is a minimum set of vertices such that G − V is


not connected. Repeatedly using Theorem 120 gives

µ2 (G) ≤ µ2 (G − v1 ) + 1 ≤ · · · ≤ µ2 (G − v1 − · · · − vκ(G) ) + κ(G) = κ(G).

Example 122. There are 9 spanning trees for the graph in Example 115:

In this example we see that when we multiply the nonzero eigenvalues for the Lapla-
cian and divide by the number of vertices, we also find 5 · 3 · 3 · 1/5 = 9.

Theorem 123 (Kirchhoff’s matrix tree theorem). For any matrix A, let A(i,j) denote the
matrix found by deleting the row i and column j in A. If τ (G) is the number of spanning
trees for G, then τ (G) = det(L(G)(1,1) ).

Proof. We proceed by induction on the number of edges and vertices in G. If G has


no edges, then L(G) is the zero matrix and so both τ (G) and det(L(G)(1,1) ) are equal
to 0. If G has no vertices, then the theorem is vacuously true.
By possibly reordering vertices we can assume without loss of generality that e
is an edge that connects vertex 1 and vertex 2 in G. The Laplacian is a block matrix
of the form

    L(G) = [ d1   −1   u⊤ ]
           [ −1   d2   v⊤ ]
           [ u     v   L1 ]

where d1 is the degree of vertex 1, d2 is the degree of vertex 2, u and v = [ v1 · · · vn−2 ]⊤
are (n − 2)-dimensional vectors containing 0's or −1's, and L1 is the (n − 2) ×
(n − 2) submatrix found in the bottom right corner of L(G). Then we have

    L(G − e) = [ d1 − 1     0     u⊤ ]                  L(G/e) = [ d1 + d2 − 2   w⊤ ]
               [   0     d2 − 1   v⊤ ]       and                 [      w        L1 ]
               [   u        v     L1 ]

for some vector w. From this we have


[ ] [ ]
d2 v ⊤ d −1 v⊤
L(G) (1,1)
= , L(G − e)(1,1)
= 2 , and L(G/e)(1,1) = L1 .
v L1 v L1

Taking the determinant using the cofactor expansion along the first row of the ma-
trix, we see by induction that


    det(L(G)^(1,1)) = d2 det L1 − ∑_{i=1}^{n−2} (−1)^i vi det(L1^(1,i))
                    = (d2 − 1) det L1 − ∑_{i=1}^{n−2} (−1)^i vi det(L1^(1,i)) + det L1
                    = det(L(G − e)^(1,1)) + det(L(G/e)^(1,1))
                    = τ (G − e) + τ (G/e)

where the last line follows by induction. By our exercise on spanning trees, we have
det(L(G)(1,1) ) = τ (G), as needed.

It is not difficult to adjust the above proof to show that the result in Theorem 123
still holds if the graph G is allowed to have multiple edges between vertices. Using
determinants can be awkward, so the result in Theorem 123 can be rephrased in
terms of the eigenvalues of the Laplacian matrix, as shown in Theorem 124.
Theorem 124. If 0, µ2 , . . . , µn are the eigenvalues for the Laplacian matrix of a graph
with n vertices, then τ (G) = µ2 · · · µn /n.
Proof. The characteristic polynomial for the Laplacian is equal to
det(L − xI) = (−x)(µ2 − x) · · · (µn − x),
and so the coefficient of x in this polynomial is −µ2 · · · µn .
Adding a multiple of a row (or column) to another row (or column) does not
change the determinant of a matrix. Since the columns of L sum to 0, change L − xI
by adding rows 2, . . . , n to the first row, to find
    det(L − xI) = det [ −x   (−x)1⊤        ] = (−x) det [ 1    1⊤           ]
                      [  v   L^(1,1) − xI  ]            [ v   L^(1,1) − xI  ]
where v is a vector of 0’s and (−1)’s. Thus the coefficient of x in this polynomial is
    − det [ 1    1⊤      ] .
          [ v   L^(1,1)  ]
Since the rows of L sum to 0, adding columns 2, . . . , n in the above matrix to the
first column gives that the above determinant is equal to
    − det [ n    1⊤      ] = −n det(L^(1,1))
          [ 0   L^(1,1)  ]
where the determinant was calculated using the cofactor expansion along the first
column. Theorem 123 gives −µ2 · · · µn = −n det(L(1,1) ) = −nτ (G), as needed.
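
Both formulas are easy to verify numerically. A minimal sketch (assuming Python with the numpy library, and reusing the Laplacian from Example 116, whose graph has 9 spanning trees):

    import numpy as np

    L = np.array([[ 2, -1, -1,  0,  0],
                  [-1,  2, -1,  0,  0],
                  [-1, -1,  4, -1, -1],
                  [ 0,  0, -1,  2, -1],
                  [ 0,  0, -1, -1,  2]])

    # Theorem 123: delete row 1 and column 1, then take the determinant.
    reduced = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
    print(round(np.linalg.det(reduced)))                    # 9 spanning trees

    # Theorem 124: multiply the nonzero eigenvalues and divide by the number of vertices.
    eigenvalues = np.linalg.eigvalsh(L)
    print(round(np.prod(eigenvalues[1:]) / L.shape[0]))     # also 9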
The Laplacian matrix can be used to help with graph visualization. Since the
Laplacian is a real valued symmetric matrix, it has an orthogonal basis of eigenvec-
tors. Two or three of these eigenvectors can be used as an axis system for repre-
senting the graph in R2 or R3 .
Suppose that µ2 , . . . , µn are the eigenvalues for the Laplacian matrix for a graph.
If x = [ x1 · · · xn ]⊤ is an eigenvector of length 1 with eigenvalue µ2 , then

    µ2 = x⊤ Lx = ∑_{{i,j} is an edge} (xi − xj )^2

is minimum over unit vectors that are orthogonal to 1. Therefore adjacent vertices
will have relatively close values of xi and xj so as to make (xi − xj )^2 small. Similarly, if
y = [ y1 · · · yn ]⊤ is an eigenvector of length 1 with eigenvalue µ3 , then y minimizes
∑_{{i,j} is an edge} (yi − yj )^2 over all unit vectors that are orthogonal to both 1 and
x, meaning that adjacent vertices will have relatively close values of yi and yj . Con-
tinuing in this manner suggests that a third vector to use in an axis system is an
eigenvector z corresponding to µ4 .

Example 125. A graph and its Laplacian are shown below:


 
    [  5   0  −1  −1   0  −1  −1   0  −1 ]
    [  0   5   0  −1  −1  −1   0  −1  −1 ]
    [ −1   0   4   0  −1   0  −1  −1   0 ]
    [ −1  −1   0   4   0  −1   0  −1   0 ]
    [  0  −1  −1   0   5   0  −1  −1  −1 ]
    [ −1  −1   0  −1   0   4   0   0  −1 ]
    [ −1   0  −1   0  −1   0   4   0  −1 ]
    [  0  −1  −1  −1  −1   0   0   4   0 ]
    [ −1  −1   0   0  −1  −1  −1   0   5 ]
To get a better visualization of the graph we find that the eigenvectors x and y cor-
responding to the eigenvalues µ2 ≈ 2.44 and µ3 = 3 are
    x ≈ [ 0.00  −0.26   0.47  −0.47   0.26  −0.47   0.47   0.00   0.00 ]⊤ ,
    y ≈ [ 0.33  −0.17  −0.17  −0.17  −0.17   0.33   0.33  −0.67   0.33 ]⊤ .

Taking the vertex coordinates as the ordered pairs of the form (xi , yi ) gives the visu-
alization shown below:

This representation more clearly shows the overall graph structure.
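
A sketch of how such coordinates can be computed (assuming Python with the numpy library; only the coordinates are produced, the drawing itself is left to any plotting tool, and the computed eigenvectors may differ from those displayed above by sign or, for a repeated eigenvalue, by a change of basis):

    import numpy as np

    L = np.array([[ 5,  0, -1, -1,  0, -1, -1,  0, -1],
                  [ 0,  5,  0, -1, -1, -1,  0, -1, -1],
                  [-1,  0,  4,  0, -1,  0, -1, -1,  0],
                  [-1, -1,  0,  4,  0, -1,  0, -1,  0],
                  [ 0, -1, -1,  0,  5,  0, -1, -1, -1],
                  [-1, -1,  0, -1,  0,  4,  0,  0, -1],
                  [-1,  0, -1,  0, -1,  0,  4,  0, -1],
                  [ 0, -1, -1, -1, -1,  0,  0,  4,  0],
                  [-1, -1,  0,  0, -1, -1, -1,  0,  5]])

    eigenvalues, eigenvectors = np.linalg.eigh(L)   # ascending eigenvalues
    x = eigenvectors[:, 1]                          # an eigenvector for mu_2
    y = eigenvectors[:, 2]                          # an eigenvector for mu_3
    for vertex, point in enumerate(zip(np.round(x, 2), np.round(y, 2)), start=1):
        print(vertex, point)                        # vertex i is drawn at (x_i, y_i)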

Example 126. The graph shown below

has eigenvalues for the Laplacian matrix that are approximately equal to 0, 0.44,
0.44, 0.44, 1, 1, . . . . The eigenvalues µ2 , µ3 and µ4 are all the same, indicating that
there is some symmetry among the eigenvectors x, y and z corresponding to
these three smallest nonzero eigenvalues. To get a better visualization of the graph
we place vertices in R3 at the coordinates of the form (xi , yi , zi ) to find

which reveals a cube-like structure to the graph. To emphasize, this representation
of the graph was created only from the coordinates of three of the eigenvectors of the
Laplacian matrix and without any knowledge of the geometry of the graph.

Definition. A Tutte layout of a graph G is found by fixing the position of some ver-
tices and then placing the remaining vertices at the average coordinate of adjacent
vertices.

Example 127. Consider the graph shown below:


4 3
5 2

6 1

7 10
8 9

Place vertices 1, 2, 3, 4 at the corners of a square, say at (0, 0), (1, 0), (1, 1) and (0, 1).
We can create a linear systems of equations to determine the placement of the
remaining vertices in a Tutte layout. If (xi , yi ) is the coordinate of vertex i for i =
5, . . . , 10, then we have

(x5 , y5 ) = ((1, 1) + (0, 1) + (x6 , y6 )) /3


(x6 , y6 ) = ((x5 , y5 ) + (x7 , y7 ) + (x8 , y8 )) /3
(x7 , y7 ) = ((x6 , y6 ) + (x8 , y8 ) + (x9 , y9 )) /3
(x8 , y8 ) = ((x6 , y6 ) + (x7 , y7 ) + (x10 , y10 )) /3
(x9 , y9 ) = ((0, 0) + (x7 , y7 ) + (x10 , y10 )) /3
(x10 , y10 ) = ((1, 0) + (x8 , y8 ) + (x9 , y9 )) /3

Rearranging terms and writing as a matrix multiplication gives

    [  3  −1   0   0   0   0 ] [ x5   y5  ]   [ 0  0  1  1 ]
    [ −1   3  −1  −1   0   0 ] [ x6   y6  ]   [ 0  0  0  0 ] [ 0  0 ]
    [  0  −1   3  −1  −1   0 ] [ x7   y7  ]   [ 0  0  0  0 ] [ 1  0 ]
    [  0  −1  −1   3   0  −1 ] [ x8   y8  ] = [ 0  0  0  0 ] [ 1  1 ]
    [  0   0  −1   0   3  −1 ] [ x9   y9  ]   [ 1  0  0  0 ] [ 0  1 ]
    [  0   0   0  −1  −1   3 ] [ x10  y10 ]   [ 0  1  0  0 ] .

The matrix on the left is the lower right 6 × 6 block of the Laplacian matrix. The
matrix on the right is the lower left 6 × 4 block of the adjacency matrix for G. The
matrix on the left happens to be invertible and so multiplying by the inverse gives
the unique solution

    [ x5   y5  ]   [ 1/2     5/6 ]
    [ x6   y6  ]   [ 1/2     1/2 ]
    [ x7   y7  ] = [ 7/15    1/3 ]
    [ x8   y8  ]   [ 8/15    1/3 ]
    [ x9   y9  ]   [ 11/30   1/6 ]
    [ x10  y10 ]   [ 19/30   1/6 ] .
Using these coordinates to plot the remaining vertices, we find
4 3

7 8

9 10

1 2

Fixing the positions of other choices of vertices gives an alternative embedding of


the graph. For example, fixing the positions of vertices 1, 2, 10 and 9 at the corners
of a square gives the Tutte layout shown below:
1 9

4 7
5 6
3 8

2 10
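
The linear solve in Example 127 takes only a few lines. A minimal sketch (assuming Python with the numpy library; the adjacency lists of the free vertices are read off of the system of equations above):

    import numpy as np

    # Neighbors of the free vertices 5, ..., 10 in the graph of Example 127.
    neighbors = {5: [3, 4, 6], 6: [5, 7, 8], 7: [6, 8, 9],
                 8: [6, 7, 10], 9: [1, 7, 10], 10: [2, 8, 9]}
    fixed = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (0, 1)}    # the four pinned vertices

    free = sorted(neighbors)
    M = np.zeros((len(free), len(free)))     # the lower right block of the Laplacian
    rhs = np.zeros((len(free), 2))
    for i, u in enumerate(free):
        M[i, i] = len(neighbors[u])          # degree of u
        for w in neighbors[u]:
            if w in fixed:
                rhs[i] += fixed[w]           # contributions of the pinned neighbors
            else:
                M[i, free.index(w)] -= 1

    coordinates = np.linalg.solve(M, rhs)    # each free vertex at the average of its neighbors
    for u, (x, y) in zip(free, coordinates):
        print(u, (round(x, 4), round(y, 4))) # e.g. vertex 5 at (1/2, 5/6)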

Theorem 128. If G is a connected graph with n vertices and the positions of k vertices
are fixed with 2 ≤ k ≤ n − 1, then the positions of the remaining vertices are uniquely
determined in a Tutte layout.

Proof. Without loss of generality, assume that vertices 1, . . . , k are fixed such that
vertex i is placed at coordinate (xi , yi ). The Laplacian matrix for G has the form
    L(G) = [ L1    −A⊤ ]
           [ −A     L2 ]

where L1 is a k × k block matrix, L2 is an (n − k) × (n − k) block matrix, and A is the
(n − k) × k submatrix of the adjacency matrix found in the bottom left corner.
Consider the graph G′ created by identifying the vertices 1, . . . , k to a single vertex
while maintaining the edges between vertices in 1, . . . , k and k + 1, . . . , n, possibly resulting in a
graph with multiple edges. For instance, if k = 4 in Example 127, then G′ is shown
below.

1234
5

7 10
8 9
The Laplacian for G′ has the form

    L(G′) = [ d    v⊤ ]
            [ v    L2 ]

where d is the degree of the vertex created from contracting the vertices 1, . . . , k and
v is the sum of the columns in −A. The matrix-tree theorem gives that det L2 is the
number of spanning trees for G′ , which is nonzero since G′ is connected. This implies
that L2 is invertible.
Since the coordinates of vertices k + 1, . . . , n are found at the average coordi-
nate of adjacent vertices, the system of equations that determine the coordinates
of vertices k + 1, . . . , n is
     
    [ −A   L2 ] [ x1   y1 ]                              [ xk+1   yk+1 ]       [ x1   y1 ]
                [  ..   .. ] = 0,    or equivalently   L2 [  ..     ..  ] = A  [  ..   .. ] .
                [ xn   yn ]                              [ xn     yn   ]       [ xk   yk ]

Multiplying through by L2^{-1} gives the unique solution.

Example 129. The following are two embeddings of the same planar graph cre-
ated by evenly spacing vertices in a face around a circle and then positioning the
remaining vertices using the Tutte layout.

In both depictions we find a straight line embedding of G that does not contain any
edge crossings.

We end this introduction to graph layouts by stating without proof a well-
known theorem that tells us that the Tutte layout has nice properties for a well
connected planar graph.

Theorem 130 (Tutte). If G is a planar graph with vertex connectivity κ(G) ≥ 3 and
v1 , . . . , vk are the vertices surrounding a face in G, then the Tutte layout found by plac-
ing v1 , . . . , vk at the vertices of a regular polygon provides a straight line embedding
of G that does not contain any edge crossings.
Unfortunately the Tutte layout can assign multiple vertices to the same coordi-
nates. For example, one Tutte layout for K5 − {4, 5} is
1

4,5

2 3

where the vertices 4 and 5 have the same position. Using a different face other than
the face containing 1, 2 and 3 does, however, provide a Tutte layout that does not
have two vertices with the same position:
1

5
3

2 4
