The Ramsey Number R(3, t) Has Order of Magnitude t^2/log t
Abstract
The Ramsey number R(s, t) for positive integers s and t is the minimum integer
n for which every red-blue coloring of the edges of a complete n-vertex graph
induces either a red complete graph of order s or a blue complete graph of order
t. This paper proves that R(3, t) is bounded below by a positive constant times (1 − o(1)) t^2/log t. Together with the known upper bound of (1 + o(1)) t^2/log t, it
follows that R(3, t) has asymptotic order of magnitude t^2/log t.
1 Introduction
Throughout this paper, logarithms are natural logarithms, c denotes a positive constant, s, t and n are positive integers, K_n and G_n^(3) denote respectively the complete graph and a triangle-free (K_3-free) graph on n vertices, and α(G) and χ(G) are respectively the independence number and the chromatic number of a graph G. Our graph theory terminology follows [8], [5].
The Ramsey number R(s, t) is the minimum n such that every red-blue coloring of the edges of K_n induces either a red K_s or a blue K_t. Equivalently, R(s, t) is the smallest n such that every n-vertex graph has either an s-vertex clique or a t-vertex independent set. We focus on

R(3, t) := min{ n : α(G_n^(3)) ≥ t for every G_n^(3) } .
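For a concrete feel for the definition, the smallest nontrivial value R(3, 3) = 6 can be verified by exhaustive search. The sketch below is illustrative code (not part of the paper): it checks whether every graph on n labelled vertices contains a triangle or an independent set of size 3.

```python
from itertools import combinations

def has_K3_or_independent_3(n, edges):
    # A vertex triple is a clique if all three of its pairs are edges,
    # and an independent set if none of them is.
    for trio in combinations(range(n), 3):
        inside = [tuple(sorted(p)) in edges for p in combinations(trio, 2)]
        if all(inside) or not any(inside):
            return True
    return False

def every_graph_has_K3_or_independent_3(n):
    # Enumerate all 2^C(n,2) graphs on n labelled vertices.
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        edges = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
        if not has_K3_or_independent_3(n, edges):
            return False
    return True
```

The search fails at n = 5 (the 5-cycle is triangle-free with independence number 2) and succeeds at n = 6, so R(3, 3) = 6.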
Since χ(G) ≥ n/α(G) for every graph G on n vertices, we have the following corollary.
Corollary 1.2 Every sufficiently large n has a G_n^(3) for which

χ(G_n^(3)) ≥ (1/9) √(n / log n) .
c (1 − o(1)) t^2/log t ≤ R(3, t)

with c = 1/162 = 1/(2 · 9^2), where o(1) goes to 0 as t goes to infinity. (We make no attempt
here to find the tightest possible constants.) Because it is also known [1], [2], [32] that

R(3, t) ≤ (1 + o(1)) t^2/log t ,   (1)
we now know that t^2/log t is the correct asymptotic order of magnitude of R(3, t). Also, (1) easily gives an upper bound on χ(G_n^(3)) which, together with Corollary 1.2, yields

(1/9)(1 − o(1)) √(n/log n) ≤ max_{G_n^(3)} χ(G_n^(3)) ≤ (1 + o(1)) 2√2 √(n/log n) .
The asymptotic behavior of R(3, t) has been a major open problem in Ramsey theory for
many years (see e.g. [15], Appendix B of [3]). In 1961, Erdős [10] obtained a lower bound
from a lovely probabilistic argument that has become a cornerstone of probabilistic methods
in Combinatorics (see e.g. [3] or [6]). Graver and Yackel [16] found an upper bound in 1968
which, in conjunction with Erdős’s bound, gave
c_1 t^2/(log t)^2 ≤ R(3, t) ≤ c_2 t^2 (log log t)/log t .
Ajtai, Komlós and Szemerédi [1], [2] removed the log log t factor in the upper bound, and
Shearer [32] (see also [33]) reduced the constant and simplified the proof to obtain (1).
Meanwhile, the lower bound c_1 (t/log t)^2 defied improvement, although Spencer [34], Bollobás [6], Erdős, Suen and Winkler [12] and Krivelevich [25] simplified its proof and/or increased c_1 through refined probabilistic arguments. We consider parts of their arguments
later. More to the point of the present paper, Spencer [37] showed very recently that c_1 can be arbitrarily large, i.e.

lim_{t→∞} R(3, t) / (t/log t)^2 = ∞ .
He also introduced a differential equation which first suggested c t^2/log t as a lower bound and
which inspired the present contribution. We consider this in Section 2.
Our approach uses the so-called "semirandom method" or "Rödl's nibble method", a version of which may have been used first in [2], the paper that removed the log log t factor in the
upper bound. More refined applications of the semirandom method were subsequently used to
settle intriguing conjectures on hypergraph packings, colorings and list colorings, see e.g. [30],
[13], [28], [14], [29], [18], [19]. The present author [22] used a similar method to make progress
on Vizing’s old problem [38] of upper bounds for the chromatic number of a triangle-free graph
of maximum degree D. We refer to [18] and [22] for more on the history of the method.
The next section describes our random block construction of a triangle-free graph along with
Spencer’s differential equation. Section 3 introduces basic parameters and proves Theorem 1.1
modulo the proof of our main lemma (Lemma 2.1) on the behavior of those parameters under
the block construction. Section 4 presents tools used in the proof of the main lemma, including
Azuma-Hoeffding type martingale inequalities that lead to proofs of the high concentrations
of our parameters near their expected values. The main lemma is then proved in Section 5.
One-by-One Construction
(OC 1) Set m := (n choose 2) and choose a random permutation π := (e_1, ..., e_m) from the uniform distribution.
(OC 3) Suppose we have G_i. For j > i we say that edge e_j "survives" (see [36]) if the graph E(G_i) ∪ {e_j} has no triangle. Define

E(G_{i+1}) := E(G_i) ∪ {e_{i+1}}   if e_{i+1} survives ,
E(G_{i+1}) := E(G_i)   otherwise.
It is obvious that G_m has no triangle. However, it seems hard to find any tight upper bound on α(G_m). The main obstacle is the fact that the event "e_{i+1} survives" depends heavily on the ordering e_1, ..., e_i. There is no apparent property that all surviving edges share, and this makes it difficult to find a small upper bound on the variance of the random variable |E(G_{i+1})| − |E(G_i)|.
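The one-by-one construction is easy to simulate. The sketch below is illustrative code (not from the paper); it confirms the two deterministic facts used here: the final graph G_m is triangle-free, and it is in fact maximal triangle-free, since every rejected edge had a common neighbour at the moment it was scanned and neighbourhoods only grow.

```python
import random
from itertools import combinations

def one_by_one_construction(n, seed=0):
    """Scan a uniformly random permutation of E(K_n); keep e_{i+1}
    iff it 'survives', i.e. creates no triangle with the kept edges."""
    rng = random.Random(seed)
    perm = list(combinations(range(n), 2))
    rng.shuffle(perm)
    adj = [set() for _ in range(n)]
    for v, w in perm:
        if not (adj[v] & adj[w]):   # no common neighbour: e survives
            adj[v].add(w)
            adj[w].add(v)
    return adj

def is_triangle_free(adj):
    return all(not (adj[v] & adj[w])
               for v in range(len(adj)) for w in adj[v] if v < w)
```

Running this for moderate n shows the qualitative picture described above: the kept graph is maximal triangle-free, but the dependence of each survival event on the whole prefix of the permutation is what blocks a direct variance bound.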
Erdős, Suen and Winkler [12] modified the preceding notion of surviving so that an edge e_{i+1} survives only if the graph {e_1, ..., e_{i+1}} has no triangle. Their new notion and a real-time version of the above construction enabled them to prove the existence of a G_n^(3) such that

α(G_n^(3)) ≤ (3/2 + o(1)) √(n log n) ,
Our block construction uses a combination of the above two notions of surviving, but is closer to the original. For our new construction, we write e_vw for the edge {v, w} ∈ E(K_n). With e, f, g ∈ E(K_n), ef and efg denote the sets {e, f} and {e, f, g} respectively.
(BC 2) Suppose we have a set E_i of edges and a triangle-free graph G_i with E(G_i) ⊆ E_i. We now say that edge e ∈ E(K_n) \ E_i survives (after stage i) if e cannot be extended to a triangle using edges from E_i. For small θ > 0 (our θ will be (log n)^{-2}), the random set X_{i+1} is defined by

Pr(e ∈ X_{i+1}) := θ/√n   if e survives ,   and   0   otherwise,

where all events "e ∈ X_{i+1}" are mutually independent.
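A single sampling round of (BC 2) can be sketched as follows (illustrative code; E_i is passed as a set of sorted vertex pairs): an edge is dead as soon as its endpoints have a common E_i-neighbour, and each surviving edge gets an independent θ/√n coin.

```python
import math
import random
from itertools import combinations

def sample_X_next(n, E_i, theta, seed=0):
    """One realisation of X_{i+1}: every edge of K_n outside E_i that
    cannot be extended to a triangle by two edges of E_i is included
    independently with probability theta / sqrt(n)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for v, w in E_i:
        adj[v].add(w)
        adj[w].add(v)
    p = theta / math.sqrt(n)
    X = set()
    for e in combinations(range(n), 2):
        v, w = e
        if e in E_i:
            continue
        if adj[v] & adj[w]:   # extendable to a triangle: not surviving
            continue
        if rng.random() < p:
            X.add(e)
    return X
```

Note that survival here depends only on the fixed set E_i, not on the other coins of the round; this independence is exactly what the one-by-one construction lacks.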
(BC 3) We set E_{i+1} := E_i ∪ X_{i+1} and define the set of forbidden pairs of edges in X_{i+1} = E_{i+1} \ E_i by

Λ(X_{i+1}) := {e_uv e_vw : e_uv e_vw ⊆ X_{i+1} , e_wu ∈ E_i} .   (2)

Thus ef ∈ Λ(X_{i+1}) means that the pair ef makes a triangle with an edge g in E_i. So we do not want both e and f in G_{i+1}, regardless of whether g is actually in G_i. The set of forbidden triples of edges is

∆(X_{i+1}) := {e_uv e_vw e_wu : e_uv e_vw e_wu ⊆ X_{i+1}} .   (3)

We now take a maximal disjoint collection F_{i+1} of elements of Λ(X_{i+1}) ∪ ∆(X_{i+1}) and define
1.2 Differential Equation
As mentioned earlier, Spencer [37] introduced a differential equation to analyze the behavior of a graph constructed by the one-by-one construction. His differential equation also nicely explains the block construction, which suggests that the two constructions are similar. We use "≈" to mean approximately equal and emphasize that what follows is only an aid to our intuition.
Suppose G_i has Ψ(iθ)n^{3/2}/2 edges, where Ψ is an unknown function. This might occur because

Pr(e ∈ G_i) ≈ ( Ψ(iθ)n^{3/2}/2 ) / (n choose 2) ≈ Ψ(iθ)/√n   for all e ∈ E(K_n) .   (4)
Recall that "e survives" (e ∉ E_i) if and only if there is no pair f, g ∈ E_i which together with e makes a triangle. Since n − 2 triangles in K_n contain e, if G_i ≈ E_i and all events are independent, then (4) would yield

Pr( "e survives after stage i" ) ≈ ( 1 − (Ψ(iθ)/√n)^2 )^{n−2} ≈ exp(−Ψ^2(iθ)) ,   (5)

where, as usual, Ψ^2(iθ) := (Ψ(iθ))^2. So we expect that the number of surviving edges after stage i is about (n choose 2) exp(−Ψ^2(iθ)).
Therefore, in expectation,
To see where this leads, set a_0 := 0, b_0 := 1 and

b_i := Ψ'(iθ) = exp(−Ψ^2(iθ)) ,   a_i := Σ_{j=0}^{i−1} b_j θ = Σ_{j=0}^{i−1} Ψ'(jθ) θ ≈ Ψ(iθ)   for i = 1, 2, ··· .   (7)
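The recurrence (7) is an Euler scheme for Spencer's differential equation Ψ'(x) = exp(−Ψ^2(x)) with Ψ(0) = 0. A minimal numerical sketch (illustrative only):

```python
import math

def psi_euler(theta, steps):
    """Iterate (7): a_0 = 0, b_i = exp(-a_i^2), a_{i+1} = a_i + b_i * theta.
    Returns (x, a, b) with x = steps * theta, so a approximates Psi(x)."""
    a = 0.0
    for _ in range(steps):
        a += math.exp(-a * a) * theta   # b_i = exp(-a_i^2), then a += b_i*theta
    x = steps * theta
    return x, a, math.exp(-a * a)
```

For example, with θ = 10^{-3} the iterate at x = 100 lies between √(log x) − 1 and √(log x) + 1, in line with the growth Ψ(x) ≈ √(log x) that underlies the analysis.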
We might expect from (5) that every big subset T of V contains b_i (|T| choose 2) ≈ b_i |T|^2/2 surviving edges. To be more precise, let

t := ⌈9 √(n log n)⌉ ,   Γ_i(T) := {e_vw ⊆ T : e_vw survives after stage i} .
If δ is not too small, then the expectation of |T_{i_o}| is almost 0, which implies α(G_{i_o}) ≤ t with probability almost 1. Because G_{i_o} is triangle-free, we might be done.
This suggests that Theorem 1.1 could follow easily from (8). Our main goal, in fact, is to derive a property (see Property 7 in Section 3) that is essentially the same as (8). To achieve it we define a subset Γ_i of the set of all surviving edges after stage i and adjoin fixed E_i and G_i to satisfy properties that allow us to continue to the next stage. The triple (E_i, Γ_i, G_i) is no longer random when we begin stage i + 1. A small modification in Γ_i that gives up a few surviving edges to gain more regularity is noted early in Section 5.
Γ_0 := E(K_n). In addition, ∆_0 := {efg ⊆ E(K_n) : efg makes a triangle}. The triple (E_i, Γ_i, G_i) for each i is required to satisfy
Our parameters depend only on (E_i, Γ_i, G_i). We denote the forbidden pairs and triples of edges in Γ_i (cf. (2) and (3)) by

Λ_i := {ef ⊆ Γ_i : ∃ g ∈ E_i such that efg ∈ ∆_0} ,
∆_i := {efg ⊆ Γ_i : efg ∈ ∆_0} ,

and the set of edges in Γ_i containing v, as well as its neighborhood and degree in Γ_i, by
For v ∈ V and e_vw ∈ Γ_i, we denote the set of incident edges e_uv of v which together with e_vw form forbidden pairs in Γ_i, and the set of such vertices u, and their (same) size by
Set
Finally, for e_vw ∈ Γ_i, denote the set of vertices each of which together with v, w forms a triangle in Γ_i by

N_{∆_i}(e_vw) := {u ∈ V : e_uv e_vw e_wu ⊆ Γ_i , e_uv e_vw e_wu ∈ ∆_0} ,

where θ := (log n)^{-2} and Ψ is the function defined in (6). Also, for A, B ⊆ V, let
We define eight properties based on the preceding concepts.
Property 1. For all v ∈ V, d_{E_i}(v) ≤ a_i √n + i n^{1/4} log n.
Property 2. For all v ∈ V, d_{Γ_i}(v) ≤ b_i n.
Property 3. For all v ≠ w in V, |N_{E_i}(v) ∩ N_{E_i}(w)| ≤ 3i log n.
Property 4. For all e_vw ∈ Γ_i, d_{Λ_i}(e_vw, v) ≤ b_i (a_i + 5θ) √n.
Property 5. For all e ∈ Γ_i, d_{∆_i}(e) ≤ b_i^2 n.
Property 6. For all disjoint subsets A, B of V with |A|, |B| ≥ θ^2 b_i^2 √n,
Property 8.

|T_i| ≤ n^i (n choose t) exp( − (1 − ε) Σ_{j=0}^{i−1} b_j μ_j (θ/√n) (t choose 2) ) ,
If (4) and (5) held (recall a_i ≈ Ψ(iθ)) and all events were independent, then all properties would seem quite natural except for terms such as i n^{1/4} log n and 5θ b_i √n, which are basically error term estimates. For example, we would expect

d_{Λ_i}(e_vw, v) = Σ_{u ∈ V \ e_vw} 1(e_wu ∈ E_i and e_uv survives)
  ≈ n Pr(e_wu ∈ E_i and e_uv survives)   in expectation
  ≈ n Pr(e_wu ∈ E_i) Pr(e_uv survives)   assuming independence
  ≈ n (a_i/√n) b_i = a_i b_i √n   assuming (4) and (5).
We note also that all properties are automatic for i = 0, and that Properties 1-6 for i are
needed to guarantee Property 7 for i + 1. Property 8 is a consequence of Property 7, as we
roughly saw in the previous section.
Lemma 2.1 (Main Lemma) Let δ := 1/17 − 10^{-5} and 0 ≤ k ≤ ⌊n^δ/θ⌋. Suppose (E_k, Γ_k, G_k)
satisfies (9) and Properties 1 through 8 for i = k. Then some triple (Ek+1 , Γk+1 , Gk+1 ) satisfies
(9) and Properties 1 through 8 for i = k + 1.
The rest of this section examines the parameters a_i, b_i and μ_i, and proves Theorem 1.1 using Lemma 2.1. The following lemma presents upper and/or lower bounds of various terms involving the parameters which, while not best possible, are ideally suited to their frequent use
in the rest of the paper.
Lemma 2.2 The following inequalities hold for iθ ≤ n^δ and sufficiently large n:

0 ≤ a_i − Ψ(iθ) ≤ θ ,   b_i > b_{i+1} ,   b_i(a_i + 5θ) < 1/2   and   b_i(a_i + 5θ)^2 < 1/2 ;   (10)

√(log(iθ)) − 1 ≤ a_i ≤ √(log(iθ)) + 1 + θ   for all iθ ≥ 1 ;   (11)
Since Ψ'(ξ) = exp(−Ψ^2(ξ)) is strictly decreasing and at most 1, clearly b_{i+1} < b_i and

a_i − θ ≤ θ Σ_{j=0}^{i−1} Ψ'((j+1)θ) ≤ Ψ(iθ) = Σ_{j=0}^{i−1} ∫_{jθ}^{(j+1)θ} Ψ'(ξ) dξ ≤ θ Σ_{j=0}^{i−1} Ψ'(jθ) = a_i .

This verifies the first part of (10). The other parts of (10) follow easily from the first part and the fact that both y exp(−y^2) and y^2 exp(−y^2) are less than 0.43.
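The 0.43 bound is elementary calculus: y e^{−y^2} attains its maximum e^{−1/2}/√2 ≈ 0.4289 at y = 1/√2, and y^2 e^{−y^2} attains e^{−1} ≈ 0.3679 at y = 1. A quick numerical confirmation (illustrative only):

```python
import math

def grid_max(f, lo=0.0, hi=10.0, steps=200000):
    # Both functions below decay to 0 long before y = 10, so a fine grid
    # maximum over [0, 10] is a reliable estimate of the global maximum.
    return max(f(lo + (hi - lo) * i / steps) for i in range(steps + 1))

m1 = grid_max(lambda y: y * math.exp(-y * y))        # max e^{-1/2}/sqrt(2)
m2 = grid_max(lambda y: y * y * math.exp(-y * y))    # max e^{-1}
```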
For (11) and (12) it is enough to point out that

√(log x) − 1 ≤ Ψ(x) ≤ √(log x) + 1   for x ≥ 1   (16)
since this implies
for some ξ with iθ ≤ ξ ≤ (i + 1)θ. Inequalities (13) and (14) follow routinely from this.
For (15) it is enough to consider

θ Σ_{j=0}^{i−1} a_j b_j ≤ (1 + 2θ) ∫_0^{iθ} Ψ(ξ)Ψ'(ξ) dξ + θ^2 Σ_{j=0}^{i−1} b_j ≤ (1/2 + 2θ) Ψ^2(iθ) ≤ (1/2 + θ^{1/5}) log(iθ) ,
where (10) is used in the first inequality, and (16) and θ = (log n)^{-2} in the last inequality.
Proof of Theorem 1.1 modulo Lemma 2.1. Let k_o := ⌊n^δ/θ⌋ + 1. Then by Lemma 2.1 we have a triangle-free graph G_{k_o} that satisfies Property 8. Thus it is enough to show that

n^{k_o} (n choose t) exp( − (1 − ε) Σ_{j=0}^{k_o−1} b_j μ_j (θ/√n) (t choose 2) ) < 1   (17)
(recall t = ⌈9 √(n log n)⌉ ). Inequalities (15) and (11) give
θ Σ_{j=0}^{k_o−1} b_j μ_j = θ Σ_{j=0}^{k_o−1} b_j − 18θ − (θ/(3√(log n))) Σ_{j=0}^{k_o−1} a_j b_j
  ≥ a_{k_o} − (1/6 + θ^{1/5}) (log(n^δ + θ))/√(log n)
  ≥ √(δ log n) − 1 − δ(1/6 + 2θ^{1/5}) √(log n)
  ≥ 0.23 √(log n) .
Hence

log [ n^{k_o} (n choose t) exp( − (1 − ε) Σ_{j=0}^{k_o−1} b_j μ_j (θ/√n) (t choose 2) ) ]
  ≤ k_o log n + t log n − 0.23 (1 − ε) √(log n / n) (t choose 2)
  ≤ n^{2δ} + ( 9 − (0.23 · 81/2)(1 − 2ε) ) √( n (log n)^3 )
  ≤ −0.3 √( n (log n)^3 ) < 0 ,
3 Tools
3.1 Azuma-Hoeffding type martingale inequalities
Most applications of the semirandom method involve Azuma-Hoeffding type martingale in-
equalities (from [17], [4]), which are very useful in showing that many events can happen
simultaneously. Indeed, Azuma-Hoeffding type martingale inequalities, followed by Lovász’s
local lemma, have become the most popular way to prove the existence of certain packings,
colorings and list colorings mentioned in Section 1. (See [9] for a general treatment of probability, [35] for Lovász's local lemma, [27], [7], [26], [21], [19], [22] for more on Azuma-Hoeffding type inequalities, and [31], [20], [23] for simple applications.)
We need a simple version of the Azuma-Hoeffding type martingale inequality. Let Φ = Φ(τ_1, ..., τ_m), where τ_1, ..., τ_m are independent identically distributed (i.i.d.) Bernoulli random variables with probability p:
The following lemma is due to Kahn [19] (Proposition 3.8). We present here a simpler version,
a proof of which can be found in the Appendix.
Corollary 3.2 Suppose the hypotheses of Lemma 3.1 hold and ρ > 0 satisfies (1 − p) exp(ρ c_j) ≤ 2 for all j. Then

Pr( |Φ − E[Φ]| ≥ λ ) ≤ 2 exp( −ρλ + ρ^2 p Σ_{j=1}^m c_j^2 ) .
Proof. Since (1 − p) exp(ρ c_j) ≤ 2 for all j, we are done. □
Corollary 3.3 Let Φ = Σ_{j=1}^m 1_{A_j}, where each A_j is an event that depends only on τ_j. Then

Pr( |Φ − E[Φ]| ≥ λ ) ≤ 2 exp( −ρλ + (ρ^2/2) E[Φ] exp(ρ) ) .
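To illustrate how such a bound is used (a toy demonstration, not from the paper): take m independent Bernoulli(p) indicators, so Φ is their sum, pick ρ = 1/2 (which satisfies a condition of the form (1 − p) exp(ρ c_j) ≤ 2 with c_j = 1), and compare the empirical two-sided tail with the bound of Corollary 3.3.

```python
import math
import random

def tail_vs_bound(m=500, p=0.04, lam=15.0, rho=0.5, trials=4000, seed=0):
    """Empirical Pr(|Phi - E[Phi]| >= lam) for Phi = sum of m Bernoulli(p)
    indicators, against 2*exp(-rho*lam + (rho^2/2)*E[Phi]*exp(rho))."""
    rng = random.Random(seed)
    mean = m * p
    hits = sum(
        abs(sum(rng.random() < p for _ in range(m)) - mean) >= lam
        for _ in range(trials)
    )
    bound = 2 * math.exp(-rho * lam + (rho ** 2 / 2) * mean * math.exp(rho))
    return hits / trials, bound
```

With these parameters the bound evaluates to about 0.068 while the observed tail frequency is far smaller; the bound is crude, but its exponential decay in λ is what makes union bounds over polynomially many events work.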
Erdős and Tetali introduced a useful lemma in [11]. We will use their proof idea rather than
the lemma itself in Section 5.9, so we simply state the lemma without proof.
Let A be a collection of events in a probability space. We are interested in the simultaneous
occurrence of many independent events in A. Let
3.3 Almost Disjoint Covering Lemma
Suppose A_1, ..., A_l are subsets of a set B. In the proof of the Main Lemma (see Sections 5.7 and 5.8), we need good upper bounds on l and Σ_j |A_j| when the sets A_j are not too small but their pairwise intersections A_j ∩ A_{j'} are small enough. Notice that when |A_j| ≥ |B|^{1/2} and A_j ∩ A_{j'} = ∅ for all j ≠ j', we easily have l ≤ |B|^{1/2} and Σ_j |A_j| ≤ |B|. The following lemma extends this observation to almost disjoint sets.
Lemma 3.5 Let B be a set of size at least 4, A_1, ..., A_l ⊆ B, and 1 ≤ β, γ ≤ |B|^{1/2}/2. If |A_j| ≥ 2βγ|B|^{1/2} for all j and |A_j ∩ A_{j'}| ≤ β^2 for all j ≠ j', then

l ≤ β^{-1} γ^{-1} |B|^{1/2}   and   Σ_{j=1}^l |A_j| ≤ (1 − 1/(2γ^2))^{-1} |B| .
Proof. Suppose to the contrary of the first part that l ≥ l_0 := ⌊β^{-1}γ^{-1}|B|^{1/2}⌋ + 1. Then

|B| ≥ | ∪_{j=1}^{l_0} A_j | ≥ Σ_{j=1}^{l_0} |A_j| − Σ_{1≤j<j'≤l_0} |A_j ∩ A_{j'}|
  ≥ 2βγ|B|^{1/2} l_0 − (l_0)^2 β^2 / 2
  ≥ 2|B| − |B|/2 > |B| .

This is a contradiction.
For the second part, set

A'_j := A_j \ ∪_{j' ≠ j} A_{j'}   for all j ∈ [l].

Since each A_j meets the other sets in at most (l − 1)β^2 ≤ β γ^{-1} |B|^{1/2} ≤ |A_j|/(2γ^2) elements in total, we have |A'_j| ≥ (1 − 1/(2γ^2)) |A_j|. Therefore,

Σ_{j=1}^l |A_j| ≤ (1 − 1/(2γ^2))^{-1} Σ_{j=1}^l |A'_j| = (1 − 1/(2γ^2))^{-1} | ∪_{j=1}^l A'_j | ≤ (1 − 1/(2γ^2))^{-1} |B| . □
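The lemma can be sanity-checked on a concrete family. The hypotheses used below, |A_j| ≥ 2βγ|B|^{1/2} and |A_j ∩ A_{j'}| ≤ β^2, are read off from the displayed inequalities in the proof (the lemma's "If" clause did not survive extraction cleanly), so this is an illustrative check under that assumption.

```python
import math

def check_lemma_3_5(n=400, beta=2, gamma=2):
    """Build a chain of intervals of size 2*beta*gamma*sqrt(n) in
    B = {0,...,n-1}, consecutive ones overlapping in exactly beta^2
    points, and verify both conclusions of Lemma 3.5 on this family."""
    size = 2 * beta * gamma * math.isqrt(n)   # |A_j|; n is a perfect square here
    step = size - beta * beta                 # consecutive overlap = beta^2
    fam, start = [], 0
    while start + size <= n:
        fam.append(set(range(start, start + size)))
        start += step
    # hypotheses (assumed form, cf. the proof)
    assert all(len(A) >= size for A in fam)
    assert all(len(fam[i] & fam[j]) <= beta * beta
               for i in range(len(fam)) for j in range(i + 1, len(fam)))
    # conclusions of Lemma 3.5
    assert len(fam) <= math.isqrt(n) / (beta * gamma)
    assert sum(len(A) for A in fam) <= n / (1 - 1 / (2 * gamma ** 2))
    return len(fam), sum(len(A) for A in fam)
```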
Modification. For each pair (e_vw, v) with e_vw ∈ Γ_k, let U(e_vw, v) be a set of ⌊b_k(a_k + 5θ)√n⌋ − d_{Λ_k}(e_vw, v) new vertices so that U(e_vw, v) ∩ V = ∅ and all U(e_vw, v) are mutually disjoint. Define
V* := V ∪ ∪_{(e_vw, v)} U(e_vw, v) ,

Γ*_k := Γ_k ∪ ∪_{(e_vw, v)} {e_uv : u ∈ U(e_vw, v)}

and

Λ*_k := Λ_k ∪ ∪_{(e_vw, v)} {e_uv e_vw : u ∈ U(e_vw, v)} .
We also define N*(e, v), d*(e, v), ··· analogously as in Section 3. Notice that

d_{Λ*_k}(e_vw, v) = ⌊b_k(a_k + 5θ)√n⌋   if e_vw ∈ Γ_k ,   and   d_{Λ*_k}(e_vw, v) = 1   if e_vw ∈ Γ*_k \ Γ_k .   (19)
√
Given θ := (log n)−2 and p := θ/ n , we now define a random subset Xk+1
∗ of Γ∗k (cf. (BC
2) of Section 2):
∗
Pr (e ∈ Xk+1 ) = p for all e ∈ Γ∗k
∗ ” mutually independent. Let
with all events “e ∈ Xk+1
∗
Xk+1 := Xk+1 ∩ Γk
Ek+1 := Ek ∪ Xk+1 .
∗ √
Pr (e ∈ Xk+1 ) = Pr (e ∈ Xk+1 ) = p = θ/ n , (20)
and all events “e ∈ Xk+1 ” are mutually independent. Using (BC 3) of Section 2 we may also
define a triangle-free graph G_{k+1}. Finally, set

Y_{k+1} := {e ∈ Γ_k : ∃ f ∈ X*_{k+1} such that ef ∈ Λ*_k} ,
Z_{k+1} := {e ∈ Γ_k : ∃ f, g ∈ X*_{k+1} such that efg ∈ ∆_k}

and

Γ_{k+1} := Γ_k \ (X_{k+1} ∪ Y_{k+1} ∪ Z_{k+1}) .   (21)

The (random) triple (E_{k+1}, Γ_{k+1}, G_{k+1}) obviously satisfies (9). It remains to verify
We prove this by showing that (E_{k+1}, Γ_{k+1}, G_{k+1}) satisfies each property with probability at least 1 − 1/n, that is,

Pr( (E_{k+1}, Γ_{k+1}, G_{k+1}) does not satisfy Property l ) ≤ 1/n   for l = 1, ..., 8 .   (22)
Three preliminary lemmas will be needed. Henceforth, we fix k ≤ ⌊n^δ/θ⌋, omit the subscript k (Γ = Γ_k, a = a_k, d_{Λ*}(v) = d_{Λ*_k}(v), ···) and write Γ', a', ··· for Γ_{k+1}, a_{k+1}, ···. Also, we just write X' for (X*)' and generally omit the asterisk if there is another superscript.
(10) gives
The upper and lower bounds of the lemma follow from (13).
Proof. By (19),

Σ_{e∈Γ*} |N_{Λ*}(e) ∩ A| = Σ_{e∈Γ*} Σ_{f∈A} 1(ef ∈ Λ*)
  = Σ_{f∈A} Σ_{e∈Γ*} 1(ef ∈ Λ*)
  = Σ_{f∈A} d_{Λ*}(f)
  = 2b(a + 5θ)√n |A| . □
We will prove (22) one property at a time. In most cases we first compute the expectations of the corresponding random variables, then derive good concentration results using the Azuma-Hoeffding type inequalities of Section 4. Unless specified otherwise, our i.i.d. Bernoulli random variables are {τ_e}_{e∈Γ*} with τ_e = 1 if and only if e ∈ X*:

Pr(τ_e = 1) = p = θ/√n .

Applications of Corollary 3.2 will simply note the c_e in the hypotheses of Lemma 3.1. If we do not mention c_e for some edges, then those edges are irrelevant.
Lemma 4.3 The following three conditions hold simultaneously with probability at least 1 − 3/n^2:

(i) For all v ∈ V, |N_{X'}(v)| ≤ bθ√n + n^{1/4} log n;
(ii) For all v ≠ w ∈ V, |N_E(v) ∩ N_{X'}(w)| ≤ log n;
(iii) For all v ≠ w ∈ V, |N_{X'}(v) ∩ N_{X'}(w)| ≤ log n.

Proof. We show that each of (i), (ii), and (iii) occurs with probability at least 1 − 1/n^2. For (i), let

Φ_v := |N_{X'}(v)| = Σ_{w∈N_Γ(v)} 1(e_vw ∈ X') ,
which gives
Property 1 and (11) give

E[Φ^{(1)}_{v,w}] ≤ p |N_E(v)| ≤ θ^{1/2} ≤ 1 ,

and Corollary 3.3 with ρ = 5 yields

Pr( Φ^{(1)}_{v,w} − E[Φ^{(1)}_{v,w}] ≥ log n − 1 ) ≤ 2 exp( −ρ(log n − 1) + ρ^2 ) ≤ exp(−4 log n) = 1/n^4 .

Since

Pr(τ_u = 1) ≤ p^2 = θ^2/n ,

we have

E[Φ^{(2)}_{v,w}] ≤ p^2 n = θ^2 < 1 .

Let ρ = 5. Then

Pr( Φ^{(2)}_{v,w} − E[Φ^{(2)}_{v,w}] ≥ log n − 1 ) ≤ 2 exp( −ρ(log n − 1) + ρ^2 ) ≤ exp(−4 log n) = 1/n^4 .
We now consider each property separately to prove (22). The definitions of Φ_v, Φ_{v,w}, ··· vary from property to property and therefore hold only within subsections. Also, to prove (22) for Properties 2, 4 and 5, we show that each vertex or pair of vertices violates the property with probability at most 1/n^2 or 1/n^3, respectively. This is enough because there are at most n vertices and n^2 pairs.
4.2 Property 1
Since

N_{E'}(v) = N_E(v) ∪ N_{X'}(v) ,

Property 1 and (i) of Lemma 4.3 give

d_{E'}(v) = d_E(v) + |N_{X'}(v)| ≤ a√n + k n^{1/4} log n + bθ√n + n^{1/4} log n = a'√n + (k + 1) n^{1/4} log n .
4.3 Property 2
Let

Φ_v := Σ_{e∈N_Γ(v)} 1(e ∉ Y') .

E[Φ_v] = Σ_{e∈N_Γ(v)} Pr(e ∉ Y') ≤ bn (b'/b − 5bθ^2) ≤ b'n − b^2θ^2 n .

We take

c_e = |N_{Λ*}(e) ∩ N_Γ(v)| ≤ 2b(a + 5θ)√n .

Together with (12) and Corollary 3.2 with ρ = n^{-3/4}, this yields

Pr( Φ_v − E[Φ_v] ≥ b^2θ^2 n ) ≤ 2 exp( −b^2θ^2 n^{1/4} + (θ/√n) n^{1/2} ) ≤ exp(−n^{1/4 − 2/17}) ≤ 1/n^2 .
4.4 Property 3
Since

|N_{E'}(v) ∩ N_{E'}(w)| ≤ |N_E(v) ∩ N_E(w)| + |N_{X'}(v) ∩ N_E(w)| + |N_E(v) ∩ N_{X'}(w)| + |N_{X'}(v) ∩ N_{X'}(w)| ,
4.5 Property 4
For e_vw ∈ Γ \ X' let

Φ^{(1)}_{v,w} := Σ_{e ∈ N_Λ(e_vw, v)} 1(e ∉ Y')

and

Φ^{(2)}_{v,w} := Σ_{u ∈ N_∆(e_vw)} 1(e_wu ∈ X') .

Note that Φ^{(1)}_{v,w} and Φ^{(2)}_{v,w} are considered under the condition e_vw ∉ X'. Clearly,
Thus (12) and Corollary 3.3 with ρ = n^{-1/4} imply that

Pr( Φ^{(2)}_{v,w} − b^2θ^2√n ≥ b^2θ^2(a + 5θ)√n ) ≤ Pr( Φ^{(2)}_{v,w} − E[Φ^{(2)}_{v,w}] ≥ b^2θ^2(a + 5θ)√n )
  ≤ 2 exp( −b^2θ^2(a + 5θ) n^{1/4} + b^2θ )
  ≤ exp(−n^{1/4 − 2/17}) ≤ 1/n^4 .   (24)
We now consider Φ^{(1)}_{v,w}. Since
Let

c_e := |N_Λ(e_vw, v) ∩ N_{Λ*}(e)|   for e ≠ e_vw .

Since

N_Λ(e_uv, v) ⊆ N_E(u)   for all e_uv ∈ Γ ,   (26)
It now follows from (25), (27), (24), and (14) that, with probability at least 1 − 2/n^4,

Φ^{(1)}_{v,w} + Φ^{(2)}_{v,w} ≤ b'(a + 5θ)√n − 2b^2θ^2(a + 5θ)√n + b^2θ^2√n ≤ b'(a' + 5θ)√n .
4.6 Property 5
Let

Φ_{vw} := Σ_{u ∈ N_∆(e_vw)} 1(e_uv e_wu ∩ Y' = ∅) .

E[Φ_{vw}] = Σ_{u ∈ N_∆(e_vw)} Pr(e_uv e_wu ∩ Y' = ∅) .

Let

c_e := b(a + 5θ)√n   if N_{Λ*}(e) ∩ N_∆(e_vw) ≠ ∅ and e_vw ∩ e ≠ ∅ ,
c_e := 2   if N_{Λ*}(e) ∩ (N_∆(e_vw) ∪ e_vw) ≠ ∅ and e_vw ∩ e = ∅ ,
c_e := 0   otherwise,

for e ∈ Γ*. (Actually, we may take c_e = 1 in the second case.) Clearly, c_e ≥ 2 for at most 2b(a + 5θ)√n · 2b^2 n edges (cf. Lemma 4.2), and c_e > 2 for at most 2n edges. Hence
Then (28), (29), (12), and Corollary 3.2 with ρ = n^{-5/8} imply that
4.7 Property 6
We prove only the first part. An analogous proof holds for the other part.
It is enough to prove the property for all A, B with |A| = |B| = b^2θ^2√n. (Of course, we should really write ⌊b^2θ^2√n⌋ here.) Let

L := {(e, v) ∈ Γ* × V* : v ∈ e} ,
L^{(1)}(A, B) := {(e, v) ∈ L : |N_{Λ*}(e, v) ∩ Γ(A, B)| < 4((k + 1) log n)^{1/2} |A ∪ B|^{1/2}} .   (30)
Because

Y' = ∪_{e ∈ X*, (e, v) ∈ L} N_{Λ*}(e, v) ∩ Γ ,   (31)

we further define
Then

|Γ'(A, B)| ≤ Φ_{A,B} .   (32)
In regard to the expectation of Φ_{A,B}, we claim that, for all e_vw ∈ Γ(A, B),

Pr( e_vw ∉ Y^{(1)}(A, B) ) ≤ Pr( e_vw ∉ Y' ) (1 − p)^{−2n^{1/4}} .   (33)

This holds if

|{u ∈ N_{Λ*}(e_vw, v) : (e_uv, v) ∉ L^{(1)}(A, B)}| ≤ n^{1/4}   (34)

and

|{u ∈ N_{Λ*}(e_vw, w) : (e_wu, w) ∉ L^{(1)}(A, B)}| ≤ n^{1/4} .   (35)
This inequality, Property 3, and Lemma 3.5 with β = (3(k + 1) log n)^{1/2}, γ = 1 then imply that there are at most (3(k + 1) log n)^{-1/2} |A ∪ B|^{1/2} such u ∈ V*. Since
Lemma 4.1, Property 6, (33), and our condition |A| = |B| = b^2θ^2√n imply

and

E[Φ_{A,B}] ≤ (b'/b − bθ^2) |Γ(A, B)| ≤ b' |A||B| − b^6 θ^6 n .   (36)
Let

c_e := min{ 4((k + 1) log n)^{1/2} |A ∪ B|^{1/2} , |N_{Λ*}(e) ∩ Γ(A, B)| } ≤ 6((k + 1) log n)^{1/2} bθ n^{1/4} .

It follows from (12) and Corollary 3.2 with ρ = n^{−1/4−ε_o}, where ε_o := 1/4 − 4/17 = 1/68, that

Pr( Φ_{A,B} − E[Φ_{A,B}] ≥ b^6 θ^6 n )
  ≤ 2 exp( − b^6 θ^6 n^{3/4−ε_o} + 12 (θ/√n) ((k + 1) log n)^{1/2} b^7 θ^5 (a + 5θ) n^{5/4−2ε_o} )
  ≤ exp( − b^2 θ^2 (log n)^4 n^{3/4−4/17−ε_o} / 2 )
  ≤ exp( − b^2 θ^2 (log n)^2 n^{1/2} ) .
4.8 Property 7
Recall for T ∈ T that |T| = t = ⌈9 √(n log n)⌉. Let T ∈ T. (Actually, our proof works for all T of size t.) We know by (21) that
Proof of (38). Set

Φ^{(0)}_T := |X' ∩ Γ(T)| = Σ_{e∈Γ(T)} 1(e ∈ X') .

Then

E[Φ^{(0)}_T] = |Γ(T)| Pr(e ∈ X') = (θ/√n) |Γ(T)| ,

Thus

Pr( ∃ T ∈ T such that Φ^{(0)}_T ≥ b'θ^2 |Γ(T)|/2 ) ≤ (n choose t) exp(−n^{10/17})
  ≤ exp( 9√n (log n)^{3/2} − n^{10/17} ) ≤ 1/n^2 .
L^{(1)}(T) := {(e, v) ∈ L : |N_{Λ*}(e, v) ∩ Γ(T)| < 4((k + 1) log n)^{1/2} |T|^{1/2}} ,
L^{(2)}(T) := L \ L^{(1)}(T) ,

and

Y^{(1)}(T) := ∪_{e ∈ X*, (e, v) ∈ L^{(1)}(T)} N_{Λ*}(e, v) ∩ Γ(T) ,
Y^{(2)}(T) := ∪_{e ∈ X*, (e, v) ∈ L^{(2)}(T)} N_{Λ*}(e, v) ∩ Γ(T) .
By (31),

|Γ(T)| − |Y' ∩ Γ(T)| ≥ Φ^{(1)}_T − Φ^{(2)}_T .

We claim that

Pr( ∃ T ∈ T such that Φ^{(1)}_T ≤ (b'/b − 15b'θ^2) |Γ(T)| ) ≤ 1/n^2   (39)
and that

Pr( ∃ T ∈ T such that Φ^{(2)}_T > (2b'bθ(1 + 6θ)/(9√(log n))) (t choose 2) ) ≤ 3/n^2 ,   (40)

which imply directly that, with probability at least 1 − 4/n^2,

|Γ(T)| − |Y' ∩ Γ(T)| ≥ (b'/b − 15b'θ^2) |Γ(T)| − (2b'bθ(1 + 6θ)/(9√(log n))) (t choose 2) .   (41)
Set

c_e := min{ |N_{Λ*}(e) ∩ Γ(T)| , 8((k + 1) log n)^{1/2} t^{1/2} }

to obtain (39).
Proof of (40). It is enough to show that (i), (ii), and (iii) of Lemma 4.3 imply (40).
We show first that

Y^{(2)}(T) ⊆ ∪_{w∈W_T} Γ(N_E(w, T), N_{X'}(w, T)) ,   (43)

since

4((k + 1) log n)^{1/2} t^{1/2} ≤ |N_{Λ*}(e_vw, v) ∩ Γ(T)| = |N_Λ(e_vw, v) ∩ T| ≤ |N_E(w, T)| .

Note that

|A(w, T)| ≥ |N_E(w, T)| ≥ 4((k + 1) log n)^{1/2} t^{1/2}   for all w ∈ W_T .   (44)
|Y^{(2)}(T)| ≤ Σ_{w∈W_T} |Γ(N_E(w, T), N_{X'}(w, T))|
  ≤ Σ' |N_E(w, T)| |N_{X'}(w, T)| + b Σ'' |N_E(w, T)| |N_{X'}(w, T)|
  ≤ b^2θ^2√n Σ' |A(w, T)| + (1 + θ) b^2θ√n Σ'' |N_E(w, T)| .
Moreover, Lemma 3.5 (with β = √(3(k + 1) log n) for both of the following inequalities, γ = 1 for the first inequality, and γ = log n for the second), Property 3, (ii) and (iii) of Lemma 4.3, and (44) imply

Σ' |A(w, T)| ≤ 2t   and   Σ'' |N_E(w, T)| ≤ (1 + θ) t .
Therefore, by (14),

|Y^{(2)}(T)| ≤ 2b^2θ^2√n t + (1 + θ)^2 b^2θ√n t ≤ (2b'bθ(1 + 6θ)/(9√(log n))) (t choose 2) . □
We return to (37) and divide |Z' ∩ Γ(T)| into two parts. We recall that N_{X'}(v, T) = N_{X'}(v) ∩ T and note that

Z' ∩ Γ(T) = ∪_{v∈V} Γ(N_{X'}(v)) ∩ Γ(T) = ∪_{v∈V} Γ(N_{X'}(v) ∩ T) = ∪_{v∈V} Γ(N_{X'}(v, T)) .

Let h := ⌈4((k + 1) log n)^{1/2} t^{1/2}⌉. Given N_{X'}(v, T) = {v_{j_1}, ..., v_{j_r}} with j_1 < ··· < j_r, let

N̂_{X'}(v, T) := N_{X'}(v, T)   if r ≤ h ,   and   N̂_{X'}(v, T) := {v_{j_1}, ..., v_{j_h}}   if r > h.
Then

Z' ∩ Γ(T) = ∪_{v∈V} Γ(N̂_{X'}(v, T)) ∪ ∪_{v ∈ V, |N_{X'}(v, T)| ≥ h} Γ(N_{X'}(v, T)) .

Let

Φ^{(3)}_T := Σ_{v∈V} |Γ(N̂_{X'}(v, T))| ,   Φ^{(4)}_T := Σ_{v ∈ V, |N_{X'}(v, T)| ≥ h} |Γ(N_{X'}(v, T))| .

Clearly

|Z' ∩ Γ(T)| ≤ Φ^{(3)}_T + Φ^{(4)}_T .   (45)
which implies

Pr( ∃ T ∈ T such that Φ^{(3)}_T ≥ 2b'θ^2 |Γ(T)| ) ≤ 1/n^2 .   (47)

= Σ_{v∈V} Σ_{e_wu ∈ Γ(T), v ∈ N_∆(e_wu)} Pr( e_uv e_vw ⊆ X' )
= p^2 Σ_{e_wu ∈ Γ(T)} Σ_{v ∈ N_∆(e_wu)} 1 .

Thus

Pr( Φ^{(3)}_T ≥ 2b'θ^2 |Γ(T)| ) ≤ Pr( Φ^{(3)}_T − E[Φ^{(3)}_T] ≥ b'θ^2 |Γ(T)|/2 ) .
Since

Φ^{(3)}_T = Φ^{(5)}_T + Φ^{(6)}_T ,

it is enough to show that

Pr( Φ^{(5)}_T − E[Φ^{(5)}_T] ≥ b'θ^2 |Γ(T)|/4 ) ≤ exp(−√n (log n)^2)   (48)

and

Pr( Φ^{(6)}_T − E[Φ^{(6)}_T] ≥ b'θ^2 |Γ(T)|/4 ) ≤ exp(−√n (log n)^2) .   (49)
Consider (48). Let

c_e := 2h   if e ∈ Γ(T) ,   and   c_e := 0   otherwise.

Then

Σ c_e^2 = 4h^2 |Γ(T)| .
Consider (49). All random variables |Γ(N̂_{X'}(v, T))| for v ∈ V \ T are mutually independent, so

E[exp(ρ Φ^{(6)}_T)] = Π_{v∈V\T} E[exp(ρ |Γ(N̂_{X'}(v, T))|)] .   (50)

Our aim is to find a good upper bound on E[exp(ρ |Γ(N̂_{X'}(v, T))|)] so that we may apply Markov's inequality with the function exp(ρx) (see (53)). Let
Claim.

φ''_v(ρ*) ≤ 1   if ρ* ≤ h^{-1} .   (51)
Let

ω(x) := x^2 (1 − p + x)^{|B|} = Σ_{l=0}^{|B|} (|B| choose l) x^{l+2} (1 − p)^{|B|−l} .

Then it is clear that the last term of (52) is exactly (p^2 exp(1)/4) ω^{(4)}(p exp(1/2)), where ω^{(4)} is the fourth derivative of ω. Therefore, with

we easily have

(p^2/4) ω^{(4)}(p exp(1/2)) ≤ 1 .
Then, by (50),

E[exp(ρ Φ^{(6)}_T)] ≤ exp( ρ Σ_{v∈V\T} E[ |Γ(N̂_{X'}(v, T))| ] + ρ^2 n/2 ) = exp( ρ E[Φ^{(6)}_T] + ρ^2 n/2 ) .
Proof of (54). It is enough to show that (i) and (iii) in Lemma 4.3 imply

Φ^{(4)}_T ≤ (b'bθ(1 + 6θ)/(9√(log n))) (t choose 2)   for all T ∈ T .
Let

Σ* := Σ over v ∈ V with h ≤ |N_{X'}(v, T)| < b^2θ^2√n ,   and   Σ** := Σ over v ∈ V with |N_{X'}(v, T)| ≥ b^2θ^2√n .
Therefore, combining (37), (38), (41), (45), (47), (54) and Property 7, we have that, with probability at least 1 − 1/n,

|Γ'(T)| ≥ (b'/b − 35b'θ^2/2) |Γ(T)| − (3b'bθ(1 + 6θ)/(9√(log n))) (t choose 2)
  ≥ b'( μ − 35bμθ^2/2 − bθ(1 + 6θ)/(3√(log n)) ) (t choose 2)
  ≥ b'( μ − 18bθ^2 − bθ/(3√(log n)) ) (t choose 2) = b'μ' (t choose 2) .
4.9 Property 8
We will show that

E[ |T'| ] ≤ n^k (n choose t) exp( − (1 − ε) Σ_{j=0}^{k} b_j μ_j (θ/√n) (t choose 2) ) ,   (55)
Proof of (55). Since

E[ |T'| ] = Σ_{T∈T} Pr( E(G') ∩ Γ(T) = ∅ ) ,

(recall that b and μ actually mean b_k and μ_k). But notice that for

F'(T) := {F ∈ F' : F ∩ Γ(T) ≠ ∅}

(again recall that F' (= F_{k+1}) is a maximal disjoint collection of forbidden pairs and triples in X': see (BC 3) in Section 2.1), we clearly have
Therefore

Pr( |X' ∩ Γ(T)| ≤ (3θ/(2√n)) |Γ(T)| ) = Pr( Φ_T ≤ (3θ/(2√n)) |Γ(T)| )
  ≤ exp( − (1/2)(1 − ε) bμθ (1/√n) (t choose 2) ) .   (58)
On the other hand, with l := ⌊2θ|Γ(T)|/√n⌋, we know that

Pr( |F'(T)| ≥ (2θ/√n) |Γ(T)| ) ≤ Pr( ∃ F_1, ..., F_l ∈ Λ ∪ ∆ such that F_j ⊆ X' , F_j ∩ Γ(T) ≠ ∅ and F_j ∩ F_{j'} = ∅ ∀ j ≠ j' ) .
Let

Σ^{(l)} := Σ over {F_1, ..., F_l} ⊆ Λ ∪ ∆ with F_j ∩ Γ(T) ≠ ∅ ∀ j ∈ [l] and F_j ∩ F_{j'} = ∅ ∀ j ≠ j' .
Then

Pr( |F'(T)| ≥ (2θ/√n) |Γ(T)| ) ≤ Σ^{(l)} Pr( F_j ⊆ X' ∀ j ∈ [l] )
  = Σ^{(l)} Π_{j=1}^l Pr( F_j ⊆ X' )
  ≤ (1/l!) ( Σ_{F ∈ Λ∪∆, F ∩ Γ(T) ≠ ∅} Pr( F ⊆ X' ) )^l .
Σ_{F ∈ Λ∪∆, F ∩ Γ(T) ≠ ∅} Pr( F ⊆ X' )
  = Σ_{F ∈ Λ, F ∩ Γ(T) ≠ ∅} Pr( F ⊆ X' ) + Σ_{F ∈ ∆, F ∩ Γ(T) ≠ ∅} Pr( F ⊆ X' )
  ≤ 2b(a + 5θ)√n |Γ(T)| p^2 + b^2 n |Γ(T)| p^3
  ≤ (θ^2/√n) |Γ(T)| + (θ^3/√n) |Γ(T)|
  ≤ (2θ^2/√n) |Γ(T)| .
Thus with η := (2θ^2/√n) |Γ(T)|, we have

Pr( |F'(T)| ≥ (2θ/√n) |Γ(T)| ) ≤ η^l / l! ≤ exp( − (1/2) θ|Γ(T)|/√n ) ≤ exp( − (1/2) bμθ (1/√n) (t choose 2) ) ,   (59)

where the second inequality uses η^l/l! ≤ (η exp(1)/l)^l. Hence (57), (58) and (59) yield (56).
Acknowledgments. I thank Joel Spencer and Noga Alon for helpful discussions. Especially,
I appreciate Spencer’s nice explanation of his differential equation which interested me in the
Ramsey number R(3, t).
Appendix
To prove Lemma 3.1, set

and

Σ_{γ_j} Pr( τ_j = γ_j, ···, τ_m = γ_m ) = Pr( τ_{j+1} = γ_{j+1}, ···, τ_m = γ_m )

yields

E[Φ | τ_1, ..., τ_j](κ) = Σ_{γ_{j+1},···,γ_m} Φ(κ_1, ···, κ_j, γ_{j+1}, ···, γ_m) Pr( τ_{j+1} = γ_{j+1}, ···, τ_m = γ_m )
  = Σ_{γ_j,···,γ_m} Φ(κ_1, ···, κ_j, γ_{j+1}, ···, γ_m) Pr( τ_j = γ_j, ···, τ_m = γ_m ) .

× Pr( τ_j = γ_j, ···, τ_m = γ_m ) ≤ Σ_{γ_j,···,γ_m} c_j Pr( τ_j = γ_j, ···, τ_m = γ_m ) = c_j .
Hence

E[ (Φ − E[Φ | τ_1, ..., τ_{j−1}, τ_{j+1}, ..., τ_m])^2 | τ_1, ..., τ_{j−1}, τ_{j+1}, ..., τ_m ] ≤ p(1 − p) c_j^2 .

Indeed, with

x := Φ(τ_1, ..., τ_{j−1}, 0, τ_{j+1}, ..., τ_m)   and   y := Φ(τ_1, ..., τ_{j−1}, 1, τ_{j+1}, ..., τ_m) ,

E[ (Φ − E[Φ | τ_1, ..., τ_{j−1}, τ_{j+1}, ..., τ_m])^2 | τ_1, ..., τ_{j−1}, τ_{j+1}, ..., τ_m ]
  = (1 − p)( x − ((1 − p)x + py) )^2 + p( y − ((1 − p)x + py) )^2
  = p(1 − p)(x − y)^2 ≤ p(1 − p) c_j^2 .
Proof of Lemma 3.1. (cf. Lemma 5.3 of [22]) It is enough to show that

Pr( Φ − E[Φ] ≥ λ ) ≤ exp( −ρλ + (ρ^2/2) Σ_{j=1}^m c_j^2 exp(ρ c_j) ) ,
where the second equality uses E[Ω_j] = 0.
Next we show that

E[ exp( ρ(Φ − E[Φ]) ) ] ≤ exp( (ρ^2/2) p(1 − p) Σ_{j=1}^m c_j^2 exp(ρ c_j) ) .   (61)

Since Φ − E[Φ] = Σ_{j=1}^m Ω_j, it is enough to show that

E[ exp( ρ Σ_{j=1}^l Ω_j ) ] ≤ exp( (ρ^2/2) p(1 − p) Σ_{j=1}^l c_j^2 exp(ρ c_j) )
References
[1] M. Ajtai and J. Komlós and E. Szemerédi, A note on Ramsey numbers, J. Combi. Th.
Series A, 29 (1980), 354–360.
[2] M. Ajtai and J. Komlós and E. Szemerédi, A dense infinite Sidon sequence, Europ. J.
Combinatorics 2 (1981), 1-11.
[3] N. Alon and J. Spencer, The Probabilistic Method, Wiley, New York, 1992.
[4] K. Azuma, Weighted sums of certain dependent random variables, Tôhoku Math. J. 19 (1967), 357–367.
[8] J. A. Bondy and U. S. R. Murty, Graph Theory with Applications, North-Holland, New York, 1976.
[10] P. Erdős, Graph theory and probability II, Canad. J. Math. 13 (1961), 346–352.
[11] P. Erdős and P. Tetali, Representations of integers as the sum of k terms, Random Struc-
tures and Algorithms 1, (1990), 245–261.
[12] P. Erdős, S. Suen and P. Winkler, On the Size of a Random Maximal Graph, Preprint (1993).
[13] P. Frankl and V. Rödl, Near perfect coverings in graphs and hypergraphs, Europ. J.
Combinatorics 6 (1985), 317–326.
[14] Z. Füredi, Matchings and Covers in Hypergraphs, Graphs and Combinatorics 4 (1988),
115–206.
[15] R. Graham and B. Rothschild and J. Spencer, Ramsey Theory, Wiley, New York, 1990.
[16] J. E. Graver and J. Yackel, Some graph theoretic results associated with Ramsey’s theo-
rem, J. of Combinatorial Theory 4 (1968), 125–175.
[17] W. Hoeffding, Probability inequalities for sums of bounded random variables, J. Amer. Stat. Assoc. 58 (1963), 13–30.
[18] J. Kahn, Recent results on some not-so-recent hypergraph matching and covering prob-
lems, Proc. 1st Int’l Conference on Extremal Problems for Finite Sets, Visegrád, June
(1991).
[19] J. Kahn, Asymptotically good list colorings, J. of Combinatorial Th. (A), to appear.
[21] J. Kahn and E. Szemerédi, The second eigenvalue of a random regular graph, Manuscript, 1988.
[22] J. H. Kim, On Brooks’ Theorem For Sparse Graphs, Combinatorics, Probability & Com-
puting, to appear.
[23] J. H. Kim and J. Spencer and P. Tetali, Certificates for Random Tournaments, In prepa-
ration.
[24] J. Komlós, J. Pintz and E. Szemerédi, On Heilbronn's triangle problem, J. London Math. Soc. 25 (1982), 13–24.
[25] M. Krivelevich, Bounding Ramsey numbers through large deviation inequalities, Preprint
(1994).
[27] V. Milman and G. Schechtman, Asymptotic Theory of Finite Dimensional Normed Spaces,
Springer, Berlin, 1980.
[29] N. Pippenger and J. Spencer, Asymptotic behavior of the chromatic index for hyper-
graphs, J. of Combinatorial Th. (A) 51 (1989), 24–42.
[30] V. Rödl, On a packing and covering problem, Europ. J. Combinatorics 5 (1985), 69–78.
[31] E. Shamir and J. Spencer, Sharp concentration of the chromatic number on random graphs G_{n,p}, Combinatorica 7 (1987), 121–129.
[32] J. Shearer, A note on the independence number of triangle-free graphs, Discrete Mathe-
matics 46 (1983), 83–87.
[33] J. Shearer, A note on the independence number of triangle-free graphs II, J. of Combina-
torial Th. (B) 53 (1991), 300–307.
[34] J. Spencer, Asymptotic lower bounds for Ramsey functions, Discrete Mathematics 20
(1977), 69–76.
[35] J. Spencer, Ten Lectures on the Probabilistic Method, Society for Industrial and Applied
Mathematics, 1987.
[37] J. Spencer, Maximal Triangle-free Graphs and the Ramsey number R(3, k), Preprint,
1994.
[38] V. Vizing, Some unsolved problems in graph theory (Russian), Usp. Mat. Nauk. 23 (1968),
117–134. English Translation in Russian Math. Surveys, 23, 125–141.