
NDMI011: Combinatorics and Graph Theory 1

Lecture #10
Sperner’s theorem. Ramsey numbers
Irena Penev

1 Sperner’s theorem
For a partially ordered set (X, ≤),

• a chain in (X, ≤) is any set C ⊆ X such that for all x1, x2 ∈ C, we have that either x1 ≤ x2 or x2 ≤ x1;¹

• a maximal chain in (X, ≤) is a chain C in (X, ≤) such that there is no chain C′ in (X, ≤) with the property that C ⊊ C′;

• an antichain in (X, ≤) is any set A ⊆ X such that for all distinct x1, x2 ∈ A, we have that x1 ≰ x2 and x2 ≰ x1.²

Note that a chain and an antichain in (X, ≤) can have at most one element in common.²

Here, we are interested in a special case of the above. As usual, for a set X, we denote by P(X) the power set (i.e. the set of all subsets) of X. Clearly, for any set X, ⊆_P(X) := {(A, B) | A, B ∈ P(X), A ⊆ B} is a partial order on P(X). To simplify notation, in what follows, we write (P(X), ⊆) instead of (P(X), ⊆_P(X)). We apply the above definitions to (P(X), ⊆), as follows.

¹ This definition works both for finite and for infinite X. Note also that ∅ is a chain in (X, ≤). However, if X is finite and C is a non-empty chain in (X, ≤), then C can be ordered as C = {x1, . . . , xt} so that x1 ≤ · · · ≤ xt.
² Indeed, if distinct elements x1, x2 belong to a chain of (X, ≤), then x1 ≤ x2 or x2 ≤ x1. On the other hand, if they belong to an antichain of (X, ≤), then x1 ≰ x2 and x2 ≰ x1. So, distinct elements x1 and x2 cannot simultaneously belong to a chain and an antichain of (X, ≤).

For a set X,
• a chain in (P(X), ⊆) is any set 𝒞 of subsets of X such that for all C1, C2 ∈ 𝒞, we have that either C1 ⊆ C2 or C2 ⊆ C1;³

• a maximal chain in (P(X), ⊆) is a chain 𝒞 in (P(X), ⊆) such that there is no chain 𝒞′ in (P(X), ⊆) with the property that 𝒞 ⊊ 𝒞′;

• an antichain in (P(X), ⊆) is any set 𝒜 of subsets of X such that for all distinct A1, A2 ∈ 𝒜, we have that A1 ⊈ A2 and A2 ⊈ A1.⁴

As before, note that a chain and an antichain in (P(X), ⊆) can have at most one element in common.

³ This definition works both for finite and for infinite X. Note also that ∅ is a chain in (P(X), ⊆). However, if X is finite and 𝒞 is a non-empty chain in (P(X), ⊆), then 𝒞 can be ordered as 𝒞 = {C1, . . . , Ct} so that C1 ⊆ · · · ⊆ Ct.
⁴ Equivalently: A1 \ A2 and A2 \ A1 are both non-empty.

Example 1.1. Let X = {1, 2, 3, 4}. The following are chains in (P(X), ⊆):⁵

• {{2, 4}, {1, 2, 4}};⁶

• {∅, {1}, {1, 2}, {1, 2, 3}, X};⁷

• {∅, {4}, {2, 4}, {1, 2, 4}, X}.⁸

Further, the following are all antichains in (P(X), ⊆):⁹

• {∅};

• {X};

• {{1, 2}, {2, 3}, {1, 3, 4}};

• {{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}}.

⁵ There are many other chains in (P(X), ⊆) as well.
⁶ Note that this chain is not maximal, since we can add (for example) the set {2} to it and obtain a larger chain.
⁷ This chain is maximal.
⁸ This chain is maximal.
⁹ There are many other antichains in (P(X), ⊆) as well.
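
These definitions are easy to test mechanically. The following small Python sketch (an illustration added here, not part of the notes; the function names are ours) checks whether a family of subsets of X is a chain or an antichain in (P(X), ⊆), and confirms the families listed in Example 1.1.

    from itertools import combinations

    def is_chain(family):
        """True if every two members are comparable under inclusion."""
        return all(A <= B or B <= A for A, B in combinations(family, 2))

    def is_antichain(family):
        """True if no member of the family contains another member."""
        return all(not (A <= B) and not (B <= A) for A, B in combinations(family, 2))

    X = {1, 2, 3, 4}
    chains = [
        [{2, 4}, {1, 2, 4}],
        [set(), {1}, {1, 2}, {1, 2, 3}, X],
        [set(), {4}, {2, 4}, {1, 2, 4}, X],
    ]
    antichains = [
        [set()],
        [X],
        [{1, 2}, {2, 3}, {1, 3, 4}],
        [{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}],
    ]
    assert all(is_chain(f) for f in chains)
    assert all(is_antichain(f) for f in antichains)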

Sperner’s theorem. Let n be a non-negative integer, and let X be an n-element set. Then any antichain in (P(X), ⊆) has at most (n choose ⌊n/2⌋) elements. Furthermore, this bound is tight, that is, there exists an antichain in (P(X), ⊆) that has precisely (n choose ⌊n/2⌋) elements.

Proof. First, we note that the set of all ⌊n/2⌋-element subsets of X is an antichain in (P(X), ⊆), and this antichain has precisely (n choose ⌊n/2⌋) elements. It remains to show that any antichain in (P(X), ⊆) has at most (n choose ⌊n/2⌋) elements.

Claim 1. There are precisely n! maximal chains in (P(X), ⊆).

Proof of Claim 1. Clearly, any maximal chain in (P(X), ⊆) is of the form {∅, {x1}, {x1, x2}, . . . , {x1, x2, . . . , xn}}, where x1, . . . , xn is some ordering of the elements of X. There are precisely n! such orderings, and so the number of maximal chains in (P(X), ⊆) is n!.

Claim 2. For every set A ⊆ X, the number of maximal chains of (P(X), ⊆) containing A is precisely |A|!(n − |A|)!.

Proof of Claim 2. Set k = |A|. As in the proof of Claim 1, we have that any maximal chain in (P(X), ⊆) is of the form {∅, {x1}, {x1, x2}, . . . , {x1, x2, . . . , xn}}, where x1, . . . , xn is some ordering of the elements of X; such a chain contains A if and only if A = {x1, . . . , xk} (and therefore X \ A = {xk+1, . . . , xn}). The number of ways of ordering A is k!, and the number of ways of ordering X \ A is (n − k)!. So, the total number of maximal chains of (P(X), ⊆) containing A is precisely k!(n − k)!.
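
As a quick sanity check (added here for illustration; the helper names are ours), the maximal chains of (P(X), ⊆) for a small set X can be generated directly from the orderings of X, and the counts in Claims 1 and 2 can be verified:

    from itertools import permutations
    from math import factorial

    def maximal_chains(X):
        """Yield the maximal chain {∅, {x1}, {x1,x2}, ..., X} for each ordering x1, ..., xn of X."""
        elems = list(X)
        for order in permutations(elems):
            yield tuple(frozenset(order[:i]) for i in range(len(elems) + 1))

    X = {1, 2, 3, 4}
    n = len(X)
    chains = list(maximal_chains(X))
    assert len(set(chains)) == factorial(n)          # Claim 1: there are n! maximal chains

    A = frozenset({1, 3})                            # a fixed subset of X
    count = sum(A in chain for chain in chains)
    assert count == factorial(len(A)) * factorial(n - len(A))   # Claim 2: |A|!(n - |A|)!
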
Now, fix an antichain 𝒜 in (P(X), ⊆). We form the matrix M whose rows are indexed by the elements of 𝒜, and whose columns are indexed by the maximal chains of (P(X), ⊆), and in which the (A, 𝒞)-th entry is 1 if A ∈ 𝒞 and is 0 otherwise.¹⁰ Our goal is to count the number of 1's in the matrix M in two ways.

First, by Claim 2, for any A ∈ 𝒜, the number of maximal chains of (P(X), ⊆) containing A is precisely |A|!(n − |A|)!; so, the number of 1's in the row of M indexed by A is precisely |A|!(n − |A|)!. Thus, the number of 1's in the matrix M is precisely

    ∑_{A∈𝒜} |A|!(n − |A|)!.

On the other hand, by Claim 1, the number of columns of M is precisely n!. Furthermore, no chain of (P(X), ⊆) contains more than one element of the antichain 𝒜, and so no column of M contains more than one 1. So, the total number of 1's in the matrix M is at most n!. We now have that

    ∑_{A∈𝒜} |A|!(n − |A|)! ≤ n!,

and consequently,

    ∑_{A∈𝒜} |A|!(n − |A|)!/n! ≤ 1.

On the other hand, for all A ⊆ X (and in particular, for all A ∈ 𝒜), we have that

    |A|!(n − |A|)!/n! = 1/(n choose |A|) ≥ 1/(n choose ⌊n/2⌋),    (∗)

where (∗) follows from the fact that (n choose k) ≤ (n choose ⌊n/2⌋) for all k ∈ {0, . . . , n}.¹¹ We now have that

    1 ≥ ∑_{A∈𝒜} |A|!(n − |A|)!/n! ≥ ∑_{A∈𝒜} 1/(n choose ⌊n/2⌋) = |𝒜|/(n choose ⌊n/2⌋),

which yields |𝒜| ≤ (n choose ⌊n/2⌋). This completes the argument.

¹⁰ Here, A ∈ 𝒜, 𝒞 is a maximal chain in (P(X), ⊆), and the (A, 𝒞)-th entry of M is the entry in the row indexed by A and column indexed by 𝒞.
¹¹ See subsection 2.2 of Lecture Notes 1.
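
For very small n, Sperner's theorem can also be verified by brute force. The sketch below (an added illustration, not part of the notes) checks every family of subsets of a 4-element set and confirms that the largest antichain has exactly (4 choose 2) = 6 elements, matching the last antichain of Example 1.1.

    from itertools import combinations
    from math import comb

    def is_antichain(family):
        """True if no member of the family contains another member."""
        return all(not (A <= B) and not (B <= A) for A, B in combinations(family, 2))

    n = 4
    subsets = [frozenset(s) for r in range(n + 1) for s in combinations(range(n), r)]

    largest = max(r for r in range(len(subsets) + 1)
                  if any(is_antichain(f) for f in combinations(subsets, r)))

    assert largest == comb(n, n // 2)    # Sperner's bound: C(4, 2) = 6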

2 The Pigeonhole principle


The Pigeonhole Principle. Let n1, . . . , nt (t ≥ 1) be non-negative integers, and let X be a set of size at least 1 + n1 + · · · + nt. If (X1, . . . , Xt) is any partition of X,¹² then there exists some i ∈ {1, . . . , t} such that |Xi| > ni.¹³

Proof. Suppose otherwise, and fix a partition (X1, . . . , Xt) such that |Xi| ≤ ni for all i ∈ {1, . . . , t}. But then

    1 + n1 + · · · + nt ≤ |X| = |X1| + · · · + |Xt| ≤ n1 + · · · + nt,

a contradiction.

As an immediate corollary, we obtain the following.

Corollary 2.1. Let n and t be positive integers. Let X be an n-element set, and let (X1, . . . , Xt) be any partition of X.¹⁴ Then there exists some i ∈ {1, . . . , t} such that |Xi| ≥ ⌈n/t⌉.

Proof. By the Pigeonhole Principle, we need only show that n ≥ 1 + t(⌈n/t⌉ − 1). If t | n,¹⁵ then ⌈n/t⌉ = n/t, and we have that

    1 + t(⌈n/t⌉ − 1) = 1 + t(n/t − 1) = n − t + 1 ≤ n,

which is what we needed. Suppose now that t ∤ n, so that ⌈n/t⌉ − 1 = ⌊n/t⌋. Then let m = ⌊n/t⌋ and ℓ = n − mt; since t ∤ n, we have that ℓ ≥ 1. But now

    1 + t(⌈n/t⌉ − 1) = 1 + t⌊n/t⌋ = 1 + tm ≤ ℓ + tm = n,

and we are done.
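
As a small illustration (added here; not part of the notes), the corollary can be checked exhaustively for small parameters by trying every way of distributing n labelled elements into t labelled, possibly empty parts:

    from itertools import product
    from math import ceil

    def check_corollary(n, t):
        """Check that every partition of an n-element set into t (possibly empty)
        parts has some part of size at least ceil(n/t)."""
        for assignment in product(range(t), repeat=n):    # element i goes to part assignment[i]
            sizes = [assignment.count(j) for j in range(t)]
            assert max(sizes) >= ceil(n / t)

    check_corollary(7, 3)    # any partition of a 7-element set into 3 parts has a part of size >= 3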

We remark that Corollary 2.1 is also often referred to as the Pigeonhole Principle.

¹² Here, we allow the sets X1, . . . , Xt to possibly be empty.
¹³ If one thinks of the elements of X as “pigeons” and the sets X1, . . . , Xt as “pigeonholes,” then the Pigeonhole Principle states that some pigeonhole Xi receives more than ni pigeons.
¹⁴ Here, we allow the sets X1, . . . , Xt to possibly be empty.
¹⁵ “t | n” means that n is divisible by t.
3 Ramsey numbers
A clique in a graph G is any set of pairwise adjacent vertices of G. The clique
number of G, denoted by ω(G), is the maximum size of a clique of G.
A stable set (or independent set) in a graph G is any set of pairwise
non-adjacent vertices of G. The stability number (or independence number)
of G, denoted by α(G), is the maximum size of a stable set in G.
Proposition 3.1. Let G be a graph on at least six vertices. Then either
ω(G) ≥ 3 or α(G) ≥ 3.
Proof. Let u be any vertex of G. Then |V (G) \ {u}| ≥ 5, and so (by the
Pigeonhole Principle) either u has at least three neighbors or it has at least
three non-neighbors.
Suppose first that u has at least three neighbors. If at least two of those
neighbors, say u1 and u2 , are adjacent, then {u, u1 , u2 } is a clique of G of
size three, and we deduce that ω(G) ≥ 3. On the other hand, if no two
neighbors of u are adjacent, then they together form a stable set of size at
least three, and we deduce that α(G) ≥ 3.
Suppose now that u has at least three non-neighbors. If at least two of
those non-neighbors, say u1 and u2 , are non-adjacent, then {u, u1 , u2 } is a
stable set of G of size three, and we deduce that α(G) ≥ 3. On the other
hand, if the non-neighbors of u are pairwise adjacent, then they together
form a clique of size at least three, and we deduce that ω(G) ≥ 3.
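
Proposition 3.1 can also be confirmed exhaustively: there are only 2^15 graphs on a fixed set of six vertices. The following sketch (an added illustration, not part of the notes) checks each of them.

    from itertools import combinations

    def has_clique_or_stable_set(n, edges, k):
        """True if some k vertices are pairwise adjacent or pairwise non-adjacent."""
        for S in combinations(range(n), k):
            pairs = set(combinations(S, 2))
            if pairs <= edges or not (pairs & edges):
                return True
        return False

    n, k = 6, 3
    all_pairs = list(combinations(range(n), 2))        # the 15 potential edges
    for mask in range(1 << len(all_pairs)):            # every graph on 6 labelled vertices
        edges = {p for i, p in enumerate(all_pairs) if mask >> i & 1}
        assert has_clique_or_stable_set(n, edges, k)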

For a graph G and a vertex u, NG(u) is the set of all neighbors of u in G, and NG[u] = {u} ∪ NG(u).

Theorem 3.2. Let k and ℓ be positive integers, and let G be a graph on at least (k+ℓ−2 choose k−1) vertices.¹⁶ Then either ω(G) ≥ k or α(G) ≥ ℓ.

Proof. We may assume inductively that for all positive integers k′, ℓ′ such that k′ + ℓ′ < k + ℓ, all graphs G′ on at least (k′+ℓ′−2 choose k′−1) vertices satisfy either ω(G′) ≥ k′ or α(G′) ≥ ℓ′.

If k = 1 or ℓ = 1, then the result is immediate.¹⁷ So, we may assume that k, ℓ ≥ 2. Now, set n = (k+ℓ−2 choose k−1), n1 = (k+ℓ−3 choose k−1), and n2 = (k+ℓ−3 choose k−2); then n = n1 + n2, and consequently, n − 1 = 1 + (n1 − 1) + (n2 − 1). Fix any vertex u ∈ V(G), and set N1 = V(G) \ NG[u] and N2 = NG(u).

[Figure: the vertex u together with the parts N1 = V(G) \ NG[u] and N2 = NG(u).]

Since (N1 , N2 ) is a partition of V (G) \ {u}, and since |V (G) \ {u}| ≥ n − 1 =
1 + (n1 − 1) + (n2 − 1), the Pigeonhole Principle guarantees that either
|N1 | ≥ n1 or |N2 | ≥ n2 .
Suppose first that |N1| ≥ n1, i.e. |N1| ≥ (k+(ℓ−1)−2 choose k−1). Then by the induction hypothesis, either ω(G[N1]) ≥ k or α(G[N1]) ≥ ℓ − 1. In the former case, we have that ω(G) ≥ ω(G[N1]) ≥ k, and we are done. So suppose that α(G[N1]) ≥ ℓ − 1. Then let S be a stable set of G[N1] of size ℓ − 1. Then {u} ∪ S is a stable set of size ℓ in G, so we deduce that α(G) ≥ ℓ, and again we are done.

Suppose now that |N2| ≥ n2, i.e. |N2| ≥ ((k−1)+ℓ−2 choose k−2). Then by the induction hypothesis, either ω(G[N2]) ≥ k − 1 or α(G[N2]) ≥ ℓ. In the latter case, we have that α(G) ≥ α(G[N2]) ≥ ℓ, and we are done. So suppose that ω(G[N2]) ≥ k − 1. Then let C be a clique of G[N2] of size k − 1. But then {u} ∪ C is a clique of size k in G, so we deduce that ω(G) ≥ k, and again we are done.

¹⁶ Note that (k+ℓ−2 choose k−1) = (k+ℓ−2 choose ℓ−1).
¹⁷ Indeed, it is clear that ω(G) ≥ 1 and α(G) ≥ 1. So, if k = 1, then ω(G) ≥ k; and if ℓ = 1, then α(G) ≥ ℓ.
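
The induction above amounts to a Pascal-type recursion for the bound: with B(k, 1) = B(1, ℓ) = 1 and B(k, ℓ) = B(k, ℓ−1) + B(k−1, ℓ), one recovers exactly B(k, ℓ) = (k+ℓ−2 choose k−1). A short sketch (an added illustration; the names are ours) checks this agreement:

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def ramsey_upper(k, ell):
        """Upper bound on R(k, ell) obtained by following the proof of Theorem 3.2."""
        if k == 1 or ell == 1:
            return 1
        return ramsey_upper(k, ell - 1) + ramsey_upper(k - 1, ell)

    for k in range(1, 10):
        for ell in range(1, 10):
            assert ramsey_upper(k, ell) == comb(k + ell - 2, k - 1)   # closed form of Theorem 3.2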

For positive integers k and ℓ, we denote by R(k, ℓ) the smallest number n such that every graph G on at least n vertices satisfies either ω(G) ≥ k or α(G) ≥ ℓ. The existence of R(k, ℓ) follows immediately from Theorem 3.2. Numbers R(k, ℓ) (with k, ℓ ≥ 1) are called Ramsey numbers.

It is easy to see that for all k, ℓ ≥ 1, we have that¹⁸

    R(1, ℓ) = 1,    R(k, 1) = 1,
    R(2, ℓ) = ℓ,    R(k, 2) = k.

Furthermore, we have R(3, 3) = 6. Indeed, by Proposition 3.1, R(3, 3) ≤ 6. On the other hand, ω(C5) = 2 and α(C5) = 2, and so R(3, 3) > 5. Thus, R(3, 3) = 6. The exact values of a few other Ramsey numbers are known,¹⁹ but no general formula for R(k, ℓ) is known. Note, however, that Theorem 3.2 gives an upper bound for Ramsey numbers, namely, R(k, ℓ) ≤ (k+ℓ−2 choose k−1) for all k, ℓ ≥ 1.

We complete this section by giving a lower bound for the Ramsey number R(k, k).

¹⁸ Check this!
¹⁹ For example, it is known that R(4, 4) = 18. On the other hand, the exact value of R(5, 5) is still unknown.

Theorem 3.3. For all integers k ≥ 3, we have that R(k, k) > 2^(k/2).

Proof. Since ω(C5) = 2 and α(C5) = 2, we see that R(3, 3) > 5 > 2^(3/2) and R(4, 4) > 5 > 2^(4/2). Thus, the claim holds for k = 3 and k = 4. From now on, we assume that k ≥ 5.

Let G be a graph on n := ⌊2^(k/2)⌋ vertices, with adjacency as follows: between any two distinct vertices, we (independently) put an edge with probability 1/2 (and a non-edge with probability 1/2).

For any set of k vertices of G, the probability that this set is a clique is (1/2)^(k choose 2); there are (n choose k) subsets of V(G) of size k, and the probability that at least one of them is a clique is at most (n choose k)·(1/2)^(k choose 2). So, the probability that ω(G) ≥ k is at most (n choose k)·(1/2)^(k choose 2). Similarly, the probability that α(G) ≥ k is at most (n choose k)·(1/2)^(k choose 2). Thus, the probability that G satisfies at least one of ω(G) ≥ k and α(G) ≥ k is at most

    2·(n choose k)·(1/2)^(k choose 2)
        ≤ 2·(en/k)^k·(1/2)^(k(k−1)/2)          (by Theorem 2.1 from Lecture Notes 1)
        ≤ 2·(e·2^(k/2)/k)^k / 2^(k(k−1)/2)     (because n = ⌊2^(k/2)⌋ ≤ 2^(k/2))
        = 2·(e·2^(k/2)/(k·2^((k−1)/2)))^k
        = 2·(e·√2/k)^k
        < 1                                     (because k ≥ 5)

Thus, the probability that G satisfies neither ω(G) ≥ k nor α(G) ≥ k is strictly positive. So, there must be at least one graph on n = ⌊2^(k/2)⌋ vertices whose clique number and stability number are both strictly less than k. This proves that R(k, k) > ⌊2^(k/2)⌋; since R(k, k) is an integer, we deduce that R(k, k) > 2^(k/2).
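
The chain of inequalities above can also be checked numerically for moderate k (an added illustration, not part of the notes): with n = ⌊2^(k/2)⌋, the bound 2·(n choose k)·(1/2)^(k choose 2) on the failure probability is indeed below 1 for every k ≥ 5.

    from math import comb, floor

    for k in range(5, 31):
        n = floor(2 ** (k / 2))                      # number of vertices of the random graph
        bad = 2 * comb(n, k) * 0.5 ** comb(k, 2)     # bound on P(omega(G) >= k) + P(alpha(G) >= k)
        assert bad < 1, (k, bad)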
