
Algebraic Methods in Combinatorics

Po-Shen Loh
June 2011
1 Linear Algebra review
1.1 Matrix multiplication, and why we care
Suppose we have a system of equations over some field F, e.g.

    3x_1 + x_2 - 8x_3 = 1
    9x_1 - x_2 - x_3 = 2

The set of ordered triples (x_1, x_2, x_3) that solve the system is precisely the set of 3-element vectors x ∈ F^3 that solve the matrix equation

    Ax = (1, 2)^T,  where  A = ( 3   1  -8 )
                               ( 9  -1  -1 ).
Suppose that A is a square matrix. The equation Ax = 0 always has a solution (all zeros). An interesting
question is to study when there are no more solutions.
Definition 1 A square matrix A is nonsingular if Ax = 0 has only one solution: the all-zeros vector.
Often, nonsingular matrices are also called invertible, for the following reason.
Theorem 1 The following are equivalent:
(i) The square matrix A is nonsingular.
(ii) There exists another matrix, denoted A^{-1}, such that A^{-1}A = I = AA^{-1}.
Solution:
(i) to (ii): by row-reduction, find that A always has RREF = identity,
so we can always solve Ax = (anything),
and can, for example, solve AX = I by doing it column by column.
Now we know there is some B such that AB = I.
Also, the row operations themselves were some left multiplication by matrices,
so we also have some C such that CA = I.
Then B = IB = CAB = CI = C, so they are the same.
(ii) to (i): just multiply by the inverse.
For non-square matrices, the most important fact is the following:
Theorem 2 If A has more columns than rows, then the equation Ax = 0 has more solutions than just the
all-zeros vector.
Solution: Row-reduce: we run out of rows before columns, so we don't have the sentinel 1 in every column. This enables us to choose arbitrary nonzero values for all non-sentinel columns, and then still read off a valid solution by back-substituting for the sentinel columns.
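This RREF argument can be sanity-checked in exact rational arithmetic. The sketch below uses hypothetical helper names (sympy's `Matrix.rref` offers the same functionality) to extract a nontrivial solution of Ax = 0 for a matrix with more columns than rows:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination over exact rationals; return (RREF, pivot columns)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue  # no sentinel 1 in this column: it stays free
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == len(m):
            break
    return m, pivots

def nontrivial_kernel_vector(rows):
    """More columns than rows: set one free variable to 1, read off the pivots."""
    m, pivots = rref(rows)
    free = next(c for c in range(len(rows[0])) if c not in pivots)
    x = [Fraction(0)] * len(rows[0])
    x[free] = Fraction(1)
    for r, c in enumerate(pivots):
        x[c] = -m[r][free]
    return x

A = [[3, 1, -8], [9, -1, -1]]          # 2 equations, 3 unknowns
x = nontrivial_kernel_vector(A)
assert any(v != 0 for v in x)          # nontrivial ...
assert all(sum(a * v for a, v in zip(row, x)) == 0 for row in A)  # ... and valid
```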
1.2 Vectors
A vector is typically represented as an arrow, and this notion is widely used in Physics, e.g., for force, velocity,
acceleration, etc. In Mathematics, vectors are treated more abstractly. In this lecture, we will mostly use
concrete vectors, which one can think of as columns of numbers (the coordinates). The fundamental operation
in Linear Algebra is the linear combination. It is called linear because vectors are not multiplied against each
other.
Definition 2 Given vectors v_1, ..., v_k, and real coefficients c_1, ..., c_k, the sum c_1 v_1 + ... + c_k v_k is called a linear combination.
It's intuitive that two vectors usually do not point in the same direction, and that three vectors usually do not all lie in the same plane. One can express both of the previous situations as follows:
Definition 3 Let v_1, ..., v_k be a collection of vectors. One says that they are linearly dependent if:
(i) One of the vectors can be expressed as a linear combination of the others.
(ii) There is a solution to c_1 v_1 + ... + c_k v_k = 0, using real numbers c_1, ..., c_k, where not all of the c_i's are zero.
Solution:
(i) to (ii): swap signs on the linear combination.
(ii) to (i): pick a nonzero coefficient, shift all other terms to the other side, and divide by the nonzero coefficient.
A fundamental theorem in Linear Algebra establishes that it is impossible to have too many linearly
independent vectors. This is at the core of many Combinatorial applications.
Theorem 3 The maximum possible number of linearly independent vectors in R^n is n.
Solution: Suppose we have more than n, say v_1, ..., v_{n+1}. Try to solve for the coefficients. This sets up a matrix equation with more columns than rows, which by the previous theorem we know to have a nontrivial solution.
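Numerically, the same phenomenon is easy to see with floating-point linear algebra. The snippet below (an illustration, not part of the proof) extracts an explicit dependence among n + 1 random vectors in R^n from the SVD:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
V = rng.standard_normal((n + 1, n))   # n + 1 vectors in R^n, stored as rows

# Their rank cannot exceed n, so they are never linearly independent.
assert np.linalg.matrix_rank(V) <= n

# An explicit dependence: the last right-singular vector of V^T lies in its kernel.
c = np.linalg.svd(V.T)[2][-1]
assert np.allclose(V.T @ c, 0)        # c gives sum_i c_i v_i = 0
assert np.linalg.norm(c) > 0.5        # and c is nontrivial (it is a unit vector)
```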
2 Combinatorics of sets
We begin with a technical lemma.
Lemma 1 Let A be a square matrix over R whose non-diagonal entries are all equal to some t ≥ 0, and whose diagonal entries are all strictly greater than t. Then A is nonsingular.
Proof. If t = 0, this is trivial. Now suppose t > 0. Let J be the all-ones square matrix, and let D = A - tJ. Note that D is nonzero only on the diagonal, and in fact strictly positive there. We would like to solve (tJ + D)x = 0, which is equivalent to Dx = -tJx. Let s be the sum of all elements in x, and let the diagonal entries of D be d_1, ..., d_n, in order. Then we have d_i x_i = -ts, i.e., x_i = -(t/d_i)s. But since t and d_i are both strictly positive, this forces every x_i to have opposite sign from s, which is impossible unless all x_i = 0. Therefore, A is nonsingular.
Solution: ALTERNATE: Let J be the all-ones square matrix, and let D = A - tJ. Note that D is nonzero only on the diagonal, and in fact strictly positive there, so it is a positive definite matrix. Also, J is well-known to be positive semidefinite (easy to verify by hand: x^T Jx = (x_1 + ... + x_n)^2 ≥ 0), so A is positive definite. In particular, this means that x^T Ax = 0 only if x = 0, implying that Ax = 0 only for x = 0. This is equivalent to A being nonsingular.
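A quick randomized check of Lemma 1 (illustration only: it samples matrices of the stated form and confirms the determinant stays away from zero):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    n = int(rng.integers(2, 7))
    t = float(rng.uniform(0.0, 5.0))
    A = np.full((n, n), t)                               # off-diagonal entries t
    np.fill_diagonal(A, t + rng.uniform(0.1, 5.0, n))    # diagonal strictly > t
    # Lemma 1 predicts nonsingularity, i.e. det(A) != 0.
    assert abs(np.linalg.det(A)) > 1e-9
```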
Now try the following problems. The last two come from 102 Combinatorial Problems, by T. Andreescu
and Z. Feng.
1. (A result of Bourbaki on finite geometries; also appeared in St. Petersburg Olympiad.) Let X be a finite set, and let T be a family of distinct proper subsets of X. Suppose that for every pair of distinct elements in X, there is a unique member of T which contains both elements. Prove that |T| ≥ |X|.
Solution: Let X = [n] and T = {A_1, ..., A_m}. We need to show that n ≤ m. Define the m × n incidence matrix A over R by putting a 1 in the i-th row and j-th column if j ∈ A_i. Consider the product A^T A, which is an n × n matrix. For i ≠ j, its entry at (i, j) is precisely 1.
Also, the diagonal entries are strictly larger than 1, because if some element j ∈ X belonged to only one set A_k ∈ T, then the condition would imply that every element i ∈ X is also in A_k, contradicting the requirement that A_k be proper.
Therefore, A^T A is nonsingular by Lemma 1. But if A had more columns than rows, then there would be some x ≠ 0 such that Ax = 0, hence A^T Ax = 0. Therefore, A doesn't have more columns than rows, i.e., n ≤ m.
2. (Fisher's inequality) Let C = {A_1, ..., A_r} be a collection of distinct subsets of {1, ..., n} such that every pairwise intersection A_i ∩ A_j (i ≠ j) has size t, where t is some fixed integer between 1 and n inclusive. Prove that |C| ≤ n.
Solution: Consider the n × r matrix A, where the i-th column of A is the characteristic vector of A_i. Then A^T A is an r × r matrix, all of whose off-diagonal entries are t. We claim that the diagonal entries are all > t. Indeed, if there were some |A_i| which were exactly t, then the structure of C must look like a flower, with one set A_j of size t, and all other sets fully containing A_j and disjointly partitioning the elements of [n] \ A_j among them. Any such construction has size at most 1 + (n - t) ≤ n, so we would already be done.
Therefore, A^T A is nonsingular by Lemma 1, and the previous argument again gives r ≤ n.
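For small n one can confirm Fisher's inequality by exhaustive search. A brute-force sketch (hypothetical helper name; subsets represented as bitmasks, with pruning by the intersection condition):

```python
def max_fisher_family(n, t):
    """Largest family of distinct subsets of [n] (as bitmasks) whose
    pairwise intersections all have size exactly t."""
    best = 0
    def extend(count, candidates):
        nonlocal best
        best = max(best, count)
        for idx, s in enumerate(candidates):
            extend(count + 1,
                   [u for u in candidates[idx + 1:]
                    if bin(s & u).count("1") == t])
    extend(0, list(range(1 << n)))
    return best

for n in range(2, 5):
    for t in range(1, n + 1):
        assert max_fisher_family(n, t) <= n   # Fisher's inequality
```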
3. Let A_1, ..., A_r be a collection of distinct subsets of {1, ..., n} such that all |A_i| are even, and also all |A_i ∩ A_j| are even for i ≠ j. How big can r be, in terms of n?
Solution: Arbitrarily cluster [n] into pairs, possibly with one element left over. Then take all possible subsets where we never separate the pairs; this gives r up to 2^{⌊n/2⌋}.
But this is also best possible. Suppose that S is the set of characteristic vectors (over F_2) of the sets in the extremal example. The condition translates into S being self-orthogonal. But the sum of any two vectors of S is again orthogonal to S and to itself, so extremality implies that S is in fact an entire linear subspace, which is self-orthogonal (i.e., S ⊆ S^⊥).
We have the general fact that for any linear subspace, dim S^⊥ = n - dim S. This is because if d = dim S, we can pick a basis v_1, ..., v_d of S, and write them as the rows of a matrix A. Then, the kernel of A is precisely S^⊥, but any kernel has dimension equal to n minus the dimension of the row space (d).
Therefore, S ⊆ S^⊥ implies that dim S ≤ dim S^⊥ = n - dim S, which forces dim S ≤ ⌊n/2⌋, so we are done.
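The pairing construction in the first paragraph is easy to verify programmatically (a sketch; the pairing (2i, 2i+1) is one arbitrary choice of clustering):

```python
from itertools import chain, combinations

def pairing_family(n):
    """Pair up [n] (one element left over if n is odd) and take every union
    of pairs, never separating a pair: 2**(n//2) subsets in total."""
    pairs = [(2 * i, 2 * i + 1) for i in range(n // 2)]
    return [frozenset(chain.from_iterable(combo))
            for k in range(len(pairs) + 1)
            for combo in combinations(pairs, k)]

n = 7
fam = pairing_family(n)
assert len(fam) == len(set(fam)) == 2 ** (n // 2)       # distinct, extremal size
assert all(len(s) % 2 == 0 for s in fam)                # all sizes even
assert all(len(a & b) % 2 == 0                          # all intersections even
           for a, b in combinations(fam, 2))
```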
4. What happens in the previous problem if we instead require that all |A_i| are odd? We still maintain that all |A_i ∩ A_j| are even for i ≠ j.
Solution: Answer: r ≤ n. Work over F_2. The characteristic vectors v_i of the A_i are orthonormal¹, so they are linearly independent: given any dependence relation of the form Σ c_i v_i = 0, we can dot product both sides with v_k and conclude that c_k = 0. Thus, there can only be n of them.
ALTERNATE: Let A be the n × r matrix whose columns are the characteristic vectors of the A_i. Then A^T A equals the r × r identity matrix, which is of course of full rank r. Thus r = rank(A^T A) ≤ rank(A) ≤ n.
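For small n, brute force confirms that the bound r ≤ n is tight, since the n singletons already achieve it (sketch with a hypothetical helper name; subsets as bitmasks):

```python
def max_odd_family(n):
    """Largest family of distinct subsets of [n] (as bitmasks) with odd sizes
    and even pairwise intersections, by exhaustive search with pruning."""
    odds = [s for s in range(1 << n) if bin(s).count("1") % 2 == 1]
    best = 0
    def extend(count, candidates):
        nonlocal best
        best = max(best, count)
        for idx, s in enumerate(candidates):
            extend(count + 1,
                   [u for u in candidates[idx + 1:]
                    if bin(s & u).count("1") % 2 == 0])
    extend(0, odds)
    return best

# The singletons {1}, ..., {n} attain r = n; the theorem says no family does better.
for n in range(1, 6):
    assert max_odd_family(n) == n
```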
5. Prove that if all codegrees² in a graph on n vertices are odd, then n is also odd.
¹ Strictly speaking, this is not true, because there is no positive definite inner product over F_2. However, if one carries out the typical proof that orthonormality implies linear independence, it still works with the mod-2 dot product.
² The codegree of a pair of vertices is the number of vertices that are adjacent to both of them.
Solution: First we show that all degrees are even. Let v be an arbitrary vertex. All vertices w ∈ N(v) have odd codegree with v, which means they all have odd degree in the graph induced by N(v). Since the number of odd-degree vertices in any graph must always be even, we immediately find that |N(v)| is even, as desired.
Now let A be the adjacency matrix. Over F_2, we have A^T A = J - I, since all codegrees are odd and all degrees are even. Consider right-multiplying by the all-ones vector 1: from A1 = 0 we get A^T A1 = 0, and I1 = 1, so we need J1 = 1, which implies that n is odd.
ALTERNATE ENDING: Now, let S = {1, v_1, ..., v_n} be the set of n + 1 vectors in F_2^n, where 1 is the all-ones vector and v_i is the characteristic vector of the neighborhood of the i-th vertex. There must be some nontrivial linear dependence b1 + Σ_i a_i v_i = 0. But note that if we take the inner product of this equation with v_k, we obtain Σ_{i≠k} a_i = 0, because 1 · v_k = 0 = v_k · v_k and v_i · v_k = 1 for i ≠ k. Hence all the a_i are equal. Yet if they are all zero, then b is also forced to be zero, contradicting the nontriviality of this linear combination. Therefore, all a_i are 1, and the equation Σ_{i≠k} a_i = 0 forces n - 1 to be even, and n to be odd.
6. (Introductory Problem 38) There are 2n people at a party. Each person has an even number of friends at
the party. (Here, friendship is a mutual relationship.) Prove that there are two people who have an even
number of common friends at the party.
Solution: Let A be the adjacency matrix. Suppose for contradiction that every pair of people has an odd number of common friends. Then over F_2, we have A^T A = J - I, where J is the all-ones matrix and I is the identity. Since all degrees are even, A1 = 0. Hence A^T A1 = 0. But J1 = 0 because J is a 2n × 2n matrix, and I1 = 1. Thus we have 0 = A^T A1 = (J - I)1 = 1, contradiction.
7. (Advanced Problem 49) A set T is called even if it has an even number of elements. Let n be a positive even integer, and let S_1, ..., S_n be even subsets of the set {1, ..., n}. Prove that there exist some i ≠ j such that S_i ∩ S_j is even.
Solution: Let A be the n × n matrix over F_2 whose columns are the characteristic vectors of the S_i, and suppose for contradiction that all the pairwise intersections are odd. Then A^T A = J - I, but A is singular because A^T 1 = 0 (each |S_i| is even). Square the equation. We have (J - I)(J - I) = J^2 - 2J + I since I, J commute. But n is even, and we are in F_2, so J^2 = nJ = 0 = 2J, and the product is just I; we get A^T AA^T A = I. This contradicts the singularity of A.
(Uses: A nonsingular implies A^T nonsingular. Indeed, we have (AB)^T = B^T A^T. So, in particular, if A had inverse B, then B^T is a left inverse of A^T. Whenever we go to solve A^T x = 0, we can left-multiply by B^T, and get x = B^T 0 = 0, so there are no nontrivial solutions.)
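The F_2 identity (J - I)^2 = I for even n, which drives this argument, can be checked numerically (illustration only):

```python
import numpy as np

for n in [2, 4, 6, 8]:                    # n even, as in the problem
    J = np.ones((n, n), dtype=int)
    I = np.eye(n, dtype=int)
    # (J - I)^2 = J^2 - 2J + I = nJ - 2J + I, which reduces to I mod 2
    assert np.array_equal((J - I) @ (J - I) % 2, I)
```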
ALTERNATE: Singularity implies det A^T A = (det A)^2 = 0. However, over F_2, det(J - I) is precisely the parity of D_n, the number of derangements of [n]. It remains to prove that for even n, D_n is odd. But this follows from the well-known recursion D_n = (n - 1)(D_{n-1} + D_{n-2}), which can be verified by looking at where the element n is permuted to: for odd n the factor n - 1 makes D_n even, and for even n it gives D_n ≡ D_{n-1} + D_{n-2} ≡ 0 + 1 ≡ 1 (mod 2) by induction.
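The recursion and the parity claim are quick to check (a sketch; D_0 = 1, D_1 = 0 are the standard base cases):

```python
def derangement_counts(n):
    """D_0, ..., D_n via the recursion D_k = (k - 1) * (D_{k-1} + D_{k-2})."""
    d = [1, 0]                      # D_0 = 1, D_1 = 0
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d

d = derangement_counts(8)
assert d[:6] == [1, 0, 1, 2, 9, 44]            # known small values
# the parity claim used above: D_n is odd exactly when n is even (n >= 2)
assert all(d[n] % 2 == (1 if n % 2 == 0 else 0) for n in range(2, 9))
```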
3 Bonus problems (not all linear algebra)
1. (Caratheodory.) A convex combination of points x_i is defined as a linear combination of the form Σ_i λ_i x_i, where the λ_i are non-negative coefficients which sum to 1.
Let X be a finite set of points in R^d, and let cvx(X) denote the set of points in the convex hull of X, i.e., all points expressible as convex combinations of the x_i ∈ X. Show that each point x ∈ cvx(X) can in fact be expressed as a convex combination of only d + 1 points of X.
Solution: Given a convex combination with d + 2 or more nonzero coefficients, find a new vector with which to perturb the nonzero coefficients. Specifically, seek γ_i with Σ_i γ_i x_i = 0 and Σ_i γ_i = 0, which is d + 1 equations, but with at least d + 2 variables γ_i. So there is a non-trivial solution, and we can use it to reduce another coefficient λ_i to zero.
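The perturbation step can be implemented verbatim in exact rational arithmetic. The sketch below (hypothetical helper names, stdlib only) zeroes out one coefficient of a convex combination of d + 2 points while keeping the represented point fixed:

```python
from fractions import Fraction

def kernel_vector(rows):
    """A nontrivial kernel vector of a matrix with more columns than rows,
    via exact Gauss-Jordan elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols, pivots, r = len(m[0]), [], 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    g = [Fraction(0)] * ncols
    g[free] = Fraction(1)
    for r, c in enumerate(pivots):
        g[c] = -m[r][free]
    return g

def caratheodory_reduce(points, coeffs):
    """One perturbation step from the proof: given a convex combination of
    k >= d + 2 points of R^d with positive coefficients, return coefficients
    for the SAME point in which at least one coefficient is exactly zero."""
    d, k = len(points[0]), len(points)
    # d + 1 homogeneous equations in k unknowns: sum g_i x_i = 0, sum g_i = 0
    g = kernel_vector([[points[i][j] for i in range(k)] for j in range(d)]
                      + [[1] * k])
    if not any(x > 0 for x in g):   # sum g_i = 0, so signs must mix; flip if needed
        g = [-x for x in g]
    t = min(Fraction(coeffs[i]) / g[i] for i in range(k) if g[i] > 0)
    return [Fraction(coeffs[i]) - t * g[i] for i in range(k)]

# 4 points in R^2 (d + 2 points), equal weights: one weight gets zeroed out.
points = [(0, 0), (2, 0), (0, 2), (1, 1)]
coeffs = [Fraction(1, 4)] * 4
new = caratheodory_reduce(points, coeffs)
assert sum(new) == 1 and all(c >= 0 for c in new) and 0 in new
old_pt = [sum(c * p[j] for c, p in zip(coeffs, points)) for j in range(2)]
new_pt = [sum(c * p[j] for c, p in zip(new, points)) for j in range(2)]
assert old_pt == new_pt                     # the represented point is unchanged
```

Iterating this step until only d + 1 nonzero coefficients remain is exactly the proof of the theorem.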
2. (Radon.) Let A be a set of at least d + 2 points in R^d. Show that A can be split into two disjoint sets A = A_1 ∪ A_2 such that cvx(A_1) and cvx(A_2) intersect.
Solution: For each point, create an R^{d+1}-vector v_i by adding a 1 as the last coordinate. We have a non-trivial dependence because we have at least d + 2 vectors in R^{d+1}, say Σ_i λ_i v_i = 0. Split A = A_1 ∪ A_2 by taking A_1 to be the set of indices i with λ_i ≥ 0, and A_2 to be the rest.
By the last coordinate, we have

    Σ_{i ∈ A_1} λ_i = Σ_{i ∈ A_2} (-λ_i).

Let Z be that sum. Then if we use λ_i/Z as the coefficients, we get a convex combination from A_1 via the first d coordinates, which equals the convex combination from A_2 we get by using (-λ_i)/Z as the coefficients.
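Radon's construction translates directly into code. The sketch below (hypothetical helper name; floating-point SVD supplies the kernel vector) recovers the crossing point of the two diagonals of a unit square:

```python
import numpy as np

def radon_partition(points):
    """Lift each point with a trailing 1, take a vector in the kernel of the
    lifted matrix, and split indices by the sign of its entries.
    Returns (A1, A2, a point common to both convex hulls)."""
    pts = np.asarray(points, dtype=float)
    k, d = pts.shape                                     # k >= d + 2 points in R^d
    lifted = np.hstack([pts, np.ones((k, 1))]).T         # (d+1) x k
    lam = np.linalg.svd(lifted)[2][-1]                   # lifted @ lam ~ 0
    A1 = [i for i in range(k) if lam[i] > 0]
    A2 = [i for i in range(k) if lam[i] <= 0]
    Z = lam[A1].sum()
    common = (lam[A1, None] * pts[A1]).sum(axis=0) / Z   # convex comb. from A1
    return A1, A2, common

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]   # d = 2, d + 2 points: a unit square
A1, A2, x = radon_partition(pts)
assert sorted(A1 + A2) == [0, 1, 2, 3]
assert np.allclose(x, [0.5, 0.5])        # the diagonals cross at the center
```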
3. (Helly.) Let C_1, C_2, ..., C_n be convex sets of points in R^d, with n ≥ d + 1. Suppose that every d + 1 of the sets have a non-empty intersection. Show that all n of the sets have a non-empty intersection.
Solution: Induction on n. Clearly true for n = d + 1, so now consider n ≥ d + 2, and assume the statement for n - 1. Then by induction, we can define points a_i to be in the intersection of all C_j, j ≠ i. Apply Radon's Lemma to these a_i, to get a split of indices A ∪ B.
Crucially, note that for each i ∈ A and j ∈ B, the point a_i is in C_j. So, each i ∈ A gives a_i ∈ ∩_{j ∈ B} C_j, and hence the convex hull of the points indexed by A is entirely contained in all C_j, j ∈ B.
Similarly, the convex hull of the points indexed by B is entirely contained in all C_j, j ∈ A. Yet Radon's Lemma gave intersecting convex hulls, so there is a point in both hulls, i.e., in all C_j, j ∈ A ∪ B = [n].
4. (From Peter Winkler.) The 60 MOPpers were divided into 8 teams for Team Contest 1. They were then divided into 7 teams for Team Contest 2. Prove that there must be a MOPper for whom the size of her team in Contest 2 was strictly larger than the size of her team in Contest 1.
Solution: In Contest 1, suppose the team breakdown was s_1 + ... + s_8 = 60. Then in the i-th team, with s_i people, say that each person did 1/s_i of the work. Similarly, in Contest 2, account equally for the work within each team, giving each person a score of 1/s'_i, where s'_i is the size of her team in Contest 2.
However, the total amount of work done by all people in Contest 1 was then exactly 8, and the total amount of work done by all people in Contest 2 was exactly 7. So somebody must have done strictly less work in Contest 2. That person saw 1/s'_i < 1/s_i, i.e., the size of that person's team in Contest 2 was strictly larger than her team size in Contest 1.
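The double-counting identity behind this argument, that the per-person scores in a contest sum to the number of teams, is easy to verify (sketch with hypothetical helper names and random team sizes):

```python
import random
from fractions import Fraction

def work_scores(team_sizes):
    """Each member of a team of size s is credited 1/s of that team's work,
    so the scores across all teams sum to the number of teams."""
    return [Fraction(1, s) for s in team_sizes for _ in range(s)]

def random_split(total, parts):
    """Split `total` into `parts` positive team sizes, chosen at random."""
    cuts = sorted(random.sample(range(1, total), parts - 1))
    return [b - a for a, b in zip([0] + cuts, cuts + [total])]

random.seed(1)
contest1 = work_scores(random_split(60, 8))
contest2 = work_scores(random_split(60, 7))
assert len(contest1) == len(contest2) == 60
assert sum(contest1) == 8 and sum(contest2) == 7
# Total work drops from 8 to 7, so under ANY assignment of the 60 people,
# someone's individual score strictly decreases, i.e. her team grew.
```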
5. (MOP 2007/4/K2.) Let S be a set of 10^6 points in 3-dimensional space. Show that at least 79 distinct distances are formed between pairs of points of S.
Solution: Zarankiewicz counting for the excluded K_{3,3} in the unit distance graph. This upper-bounds the number of edges in each constant-distance graph, and therefore lower-bounds the number of distinct distances.
6. (MOP 2007/10/K4.) Let S be a set of 2n points in space, such that no 4 lie in the same plane. Pick any n^2 + 1 segments determined by the points. Show that they form at least n (possibly overlapping) triangles.
Solution: In fact, every 2n-vertex graph with at least n^2 + 1 edges already contains at least n triangles. No geometry is needed.
7. (Sperner capacity of cyclic triangle, also Iran 2006.) Let A be a collection of vectors of length n from Z_3, with the property that for any two distinct vectors a, b ∈ A there is some coordinate i such that b_i = a_i + 1, where addition is defined modulo 3. Prove that |A| ≤ 2^n.
Solution: For each a ∈ A, define the Z_3-polynomial f_a(x) := Π_{i=1}^n (x_i - a_i - 1). Observe that this is multilinear. Clearly, for all a ≠ b ∈ A, f_a(b) = 0, and f_a(a) ≠ 0; therefore, the f_a are linearly independent, and bounded in cardinality by the dimension of the space of multilinear polynomials in n variables, which is 2^n.
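The bound can be confirmed exhaustively for very small n (a brute-force sketch; `max_family` is a hypothetical helper that computes the true maximum by exhaustive search):

```python
from itertools import product, combinations

def valid(family):
    """Check the condition: for every ordered pair of distinct vectors a, b in
    the family, some coordinate i has b[i] == a[i] + 1 (mod 3)."""
    return all(any(b[i] == (a[i] + 1) % 3 for i in range(len(a)))
               for a in family for b in family if a != b)

def max_family(n):
    """Exhaustive search over all families of vectors in Z_3^n."""
    vecs = list(product(range(3), repeat=n))
    for k in range(len(vecs), 0, -1):
        if any(valid(c) for c in combinations(vecs, k)):
            return k

assert valid([(0, 1), (1, 0), (2, 2)])    # an explicit valid family for n = 2
assert max_family(1) <= 2                  # the bound |A| <= 2^n, checked
assert max_family(2) <= 4                  # exhaustively for n = 1 and n = 2
```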