
11.1 Strong Connectivity

Lemma 11.6. $P(F) = 1 - \frac{x}{c} + o(1)$.
Proof. Applying Lemma 11.4 we see that
$$P(F) = P(\hat{F}) + o(1), \qquad (11.3)$$
where $\hat{F}$ is defined with respect to the branching process.


Now let $\hat{E}$ be the event that the branching process eventually becomes extinct. We write
$$P(\hat{F}) = P(\hat{F} \mid \neg\hat{E})\,P(\neg\hat{E}) + P(\hat{F} \cap \hat{E}). \qquad (11.4)$$
To estimate (11.4) we use Theorem 29.1. Let
$$G(z) = \sum_{k=0}^{\infty} \frac{c^k e^{-c}}{k!}\, z^k = e^{cz-c}$$
be the probability generating function of Po(c). Then Theorem 29.1 implies that $\rho = P(\hat{E})$ is the smallest non-negative solution to $G(\rho) = \rho$. Thus

ρ = ecρ−c .
ξ
Substituting ρ = c we see that

ξ ξ
P(Eˆ ) = where = eξ −c , (11.5)
c c
and so ξ = x.
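As a quick numerical illustration (our own, not part of the argument), the fixed point can be computed by iterating $G$ from $0$, which converges to the smallest non-negative solution; the value of $c$ below is an arbitrary choice.

```python
import math

def extinction_probability(c, iterations=200):
    """Smallest non-negative solution of rho = exp(c*(rho - 1)) for a Po(c) branching process."""
    rho = 0.0                       # iterating G from 0 converges upward to the smallest fixed point
    for _ in range(iterations):
        rho = math.exp(c * (rho - 1.0))
    return rho

c = 2.0                             # arbitrary c > 1, so extinction is not certain
rho = extinction_probability(c)
x = c * rho                         # xi = c*rho, so P(extinction) = x/c as in (11.5)
print(f"c = {c}: extinction prob = {rho:.4f}, survival prob 1 - x/c = {1 - x / c:.4f}")
```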
The lemma will follow from (11.4) and (11.5), together with $P(\hat{F} \mid \neg\hat{E}) = 1$ and
$$P(\hat{F} \cap \hat{E}) = o(1).$$
This in turn follows from
$$P(\hat{E} \mid \hat{F}) = o(1), \qquad (11.6)$$
which will be established using the following lemma.
Lemma 11.7. Each member of the branching process has probability at least $\varepsilon > 0$ of producing at least $(\log n)^2$ descendants at depth $\log n$. Here $\varepsilon > 0$ depends only on $c$.
Proof. If the current population size of the process is $s$ then the probability that it reaches size at least $\frac{c+1}{2}s$ in the next round is
$$\sum_{k \ge \frac{c+1}{2}s} \frac{(cs)^k e^{-cs}}{k!} \;\ge\; 1 - e^{-\alpha s}$$

for some constant α > 0 provided s ≥ 100, say.


Now there is a positive probability, $\varepsilon_1$ say, that a single member spawns at least 100 descendants, and so there is a probability of at least
$$\varepsilon_1 \left(1 - \sum_{s=100}^{\infty} e^{-\alpha s}\right)$$
that a single object spawns
$$\left(\frac{c+1}{2}\right)^{\log n} \ge (\log n)^2$$
descendants at depth $\log n$.
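The following Monte Carlo sketch (ours, purely illustrative; $n$, $c$ and the trial count are arbitrary choices) estimates the probability in Lemma 11.7 directly by simulating the Po(c) process for $\log n$ generations.

```python
import math
import random

def poisson(lam):
    # Knuth's multiplicative method; adequate for small lam
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def population_at_depth(c, depth):
    pop = 1
    for _ in range(depth):
        if pop == 0:
            return 0
        pop = sum(poisson(c) for _ in range(pop))
    return pop

n, c, trials = 10_000, 2.0, 200
depth, target = round(math.log(n)), math.log(n) ** 2
hits = sum(population_at_depth(c, depth) >= target for _ in range(trials))
print(f"estimated P(at least (log n)^2 descendants at depth log n) = {hits / trials:.2f}")
```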

Given a population size between $(\log n)^2$ and $(\log n)^3$ at level $i_0$, let $s_i$ denote the population size at level $i_0 + i\log n$. Then Lemma 11.7 and the Chernoff bounds imply that
$$P\left(s_{i+1} \le \frac{1}{2}\varepsilon s_i (\log n)^2\right) \le \exp\left(-\frac{1}{8}\varepsilon^2 s_i (\log n)^2\right).$$
It follows that
$$P(\hat{E} \mid \hat{F}) \le P\left(\exists\, i : s_i \le \left(\frac{1}{2}\varepsilon(\log n)^2\right)^i s_0 \;\Big|\; s_0 \ge (\log n)^2\right) \le \sum_{i=1}^{\infty} \exp\left\{-\frac{1}{8}\varepsilon^2 \left(\frac{1}{2}\varepsilon(\log n)^2\right)^i (\log n)^2\right\} = o(1).$$

This completes the proof of (11.6) and of Lemma 11.6.


We must now consider the probability that both D+ (v) and D− (v) are large.

Lemma 11.8.
$$P\bigl(|D^-(v)| \ge (\log n)^2 \,\big|\, |D^+(v)| \ge (\log n)^2\bigr) = 1 - \frac{x}{c} + o(1).$$

Proof. Expose $S_0^+, S_1^+, \ldots, S_k^+$ until either $S_k^+ = \emptyset$ or we see that $|T_k^+| \in [(\log n)^2, (\log n)^3]$. Now let $S$ denote the set of edges/vertices defined by $S_0^+, S_1^+, \ldots, S_k^+$.
Let $C$ be the event that there are no edges from $T_l^-$ to $S_k^+$, where $T_l^-$ is the set of vertices we reach through our BFS into $v$, up to the point where we first realise that $|D^-(v)| < (\log n)^2$ (because $S_i^- = \emptyset$ and $|T_i^-| \le (\log n)^2$) or we realise that $|D^-(v)| \ge (\log n)^2$. Then
$$P(\neg C) = O\left(\frac{(\log n)^4}{n}\right), \quad \text{so that } P(C) = 1 - o(1),$$
and, as in (11.2),
$$P\bigl(|S_i^-| = s_i,\; 0 \le i \le k \,\big|\, C\bigr) = \prod_{i=1}^{k} \binom{n' - t_{i-1}}{s_i} \left(\frac{s_{i-1}c}{n}\left(1 + O\left(\frac{(\log n)^7}{n}\right)\right)\right)^{s_i} \left(1 - \frac{s_{i-1}c}{n}\left(1 + O\left(\frac{(\log n)^7}{n}\right)\right)\right)^{n' - t_{i-1} - s_i}$$
where $n' = n - |T_k^+|$.
Given this we can prove a conditional version of Lemma 11.4 and continue as before.
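For concreteness, here is a sketch (our own, not from the text) of the level-by-level exposure used above, assuming, as earlier in the chapter, that $S_i^+$ is the $i$-th out-BFS level from $v$ and $T_i^+$ is the union of levels $0, \ldots, i$; edges are revealed only when their tail lies in the current level.

```python
import random

def expose_out_levels(n, p, v, rng=random):
    """Reveal S_0^+, S_1^+, ... of D_{n,p} from v, exposing edges one level at a time."""
    S = {v}                                    # S_0^+ = {v}
    T = {v}                                    # T_i^+ = union of levels revealed so far
    levels = [set(S)]
    while S:
        nxt = set()
        for u in S:                            # expose only out-edges leaving the current level
            for w in range(n):
                if w not in T and rng.random() < p:
                    nxt.add(w)
        S = nxt
        T |= nxt
        levels.append(set(nxt))
    return levels, T                           # T holds the vertices reached from v

levels, reached = expose_out_levels(200, 2.0 / 200, 0)
print([len(level) for level in levels], "reached:", len(reached))
```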

We have now shown that if $\alpha$ is as in Lemma 11.3 and if
$$S = \left\{ v : |D^+(v)|,\ |D^-(v)| > \alpha \log n \right\}$$
then the expectation
$$E(|S|) = (1 + o(1))\left(1 - \frac{x}{c}\right)^2 n.$$
We also claim that for any two vertices $v, w$,
$$P(v, w \in S) = (1 + o(1))\, P(v \in S)\, P(w \in S), \qquad (11.7)$$
and therefore the Chebyshev inequality implies that w.h.p.
$$|S| = (1 + o(1))\left(1 - \frac{x}{c}\right)^2 n.$$
But (11.7) follows in a similar manner to the proof of Lemma 11.8.
All that remains of the proof of Theorem 11.2 is to show that
$$S \text{ is a strong component w.h.p.} \qquad (11.8)$$
Recall that any $v \notin S$ is in a strong component of size $\le \alpha \log n$, and so the second part of the theorem will also be done.
We prove (11.8) by arguing that
$$P\bigl(\exists\, v, w \in S : w \notin D^+(v)\bigr) = o(1). \qquad (11.9)$$

In that case, we know that w.h.p. there is a path from each $v \in S$ to every other vertex $w \ne v$ in $S$.
To prove (11.9) we expose $S_0^+, S_1^+, \ldots, S_k^+$ until we find that $|T_k^+(v)| \ge n^{1/2}\log n$. At the same time we expose $S_0^-, S_1^-, \ldots, S_l^-$ until we find that $|T_l^-(w)| \ge n^{1/2}\log n$. If $w \notin D^+(v)$ then this experiment will have tried at least $\left(n^{1/2}\log n\right)^2 = n(\log n)^2$ times to find an edge from $D^+(v)$ to $D^-(w)$ and failed every time. The probability of this is at most
$$\left(1 - \frac{c}{n}\right)^{n(\log n)^2} = o(n^{-2}).$$
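To see the final estimate (a one-line check, not spelled out in the text), use $1 - x \le e^{-x}$:
$$\left(1 - \frac{c}{n}\right)^{n(\log n)^2} \le \exp\bigl(-c(\log n)^2\bigr) = n^{-c\log n} = o(n^{-2}).$$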
This completes the proof of Theorem 11.2.

Threshold for strong connectivity


Here we prove

Theorem 11.9. Let $\omega = \omega(n)$, let $c > 0$ be a constant, and let $p = \frac{\log n + \omega}{n}$. Then
$$\lim_{n\to\infty} P(D_{n,p} \text{ is strongly connected}) = \begin{cases} 0 & \text{if } \omega \to -\infty \\ e^{-2e^{-c}} & \text{if } \omega \to c \\ 1 & \text{if } \omega \to \infty \end{cases} \;=\; \lim_{n\to\infty} P\bigl(\nexists\, v \text{ s.t. } d^+(v) = 0 \text{ or } d^-(v) = 0\bigr).$$

Proof. We leave it as an exercise to prove that
$$\lim_{n\to\infty} P\bigl(\exists\, v \text{ s.t. } d^+(v) = 0 \text{ or } d^-(v) = 0\bigr) = \begin{cases} 1 & \text{if } \omega \to -\infty \\ 1 - e^{-2e^{-c}} & \text{if } \omega \to c \\ 0 & \text{if } \omega \to \infty. \end{cases}$$
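The limit can be checked empirically (an illustration only; $n$, $c$, the seed and the trial count below are arbitrary choices):

```python
import math
import numpy as np

def has_degree_zero_vertex(n, p, rng):
    """Sample D_{n,p} and report whether some vertex has in-degree 0 or out-degree 0."""
    adj = rng.random((n, n)) < p            # adj[u, v] is True iff the arc (u, v) is present
    np.fill_diagonal(adj, False)            # no loops
    return bool((adj.sum(axis=1) == 0).any() or (adj.sum(axis=0) == 0).any())

n, c, trials = 1000, 0.5, 300
p = (math.log(n) + c) / n
rng = np.random.default_rng(1)
freq = sum(has_degree_zero_vertex(n, p, rng) for _ in range(trials)) / trials
print(f"empirical: {freq:.3f}   limit 1 - exp(-2e^-c): {1 - math.exp(-2 * math.exp(-c)):.3f}")
```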

Given this, one only has to show that if $\omega \not\to -\infty$ then w.h.p. there does not exist a set $S$ such that (i) $2 \le |S| \le n/2$, (ii) $E(S : \bar{S}) = \emptyset$ or $E(\bar{S} : S) = \emptyset$, and (iii) $S$ induces a connected component in the graph obtained by ignoring orientation. But, here with $s = |S|$,
$$P(\exists\, S) \le 2\sum_{s=2}^{n/2} \binom{n}{s} s^{s-2} (2p)^{s-1} (1-p)^{s(n-s)}$$
$$\le \frac{2n}{\log n} \sum_{s=2}^{n/2} \left(\frac{ne}{s}\right)^{s} s^{s-2} \left(\frac{2\log n}{n}\right)^{s} n^{-s(1-s/n)}\, e^{\omega s/n}$$
$$\le \frac{2n}{\log n} \sum_{s=2}^{n/2} \left(2e\, n^{-(1-s/n)}\, e^{\omega/n} \log n\right)^{s} = o(1).$$
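To see that the last sum is indeed $o(1)$ (a brief check of the displayed bound, not in the original): since $p \le 1$ forces $\omega \le n$, we have $e^{\omega/n} \le e$, and the summands fall off so quickly that the $s = 2$ term dominates, giving
$$\frac{2n}{\log n}\sum_{s=2}^{n/2}\bigl(2e\, n^{-(1-s/n)} e^{\omega/n}\log n\bigr)^{s} = O\!\left(\frac{n}{\log n}\cdot\frac{(\log n)^{2}}{n^{2}}\right) = O\!\left(\frac{\log n}{n}\right) = o(1).$$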

11.2 Hamilton Cycles


Existence of a Hamilton Cycle
Here we prove the following remarkable inequality, due to McDiarmid [661].

Theorem 11.10.
$$P(D_{n,p} \text{ is Hamiltonian}) \ge P(G_{n,p} \text{ is Hamiltonian}).$$

Proof. We consider an ordered sequence of random digraphs $\Gamma_0, \Gamma_1, \Gamma_2, \ldots, \Gamma_N$, $N = \binom{n}{2}$, defined as follows. Let $e_1, e_2, \ldots, e_N$ be an enumeration of the edges of the complete graph $K_n$. Each $e_i = \{v_i, w_i\}$ gives rise to two directed edges $\overrightarrow{e_i} = (v_i, w_i)$ and $\overleftarrow{e_i} = (w_i, v_i)$. In $\Gamma_i$ we include $\overrightarrow{e_j}$ and $\overleftarrow{e_j}$ independently of each other, with probability $p$, for $j \le i$, while for $j > i$ we include both or neither with probability $p$. Thus $\Gamma_0$ is just $G_{n,p}$ with each edge $\{v, w\}$ replaced by a pair of directed edges $(v, w), (w, v)$, and $\Gamma_N = D_{n,p}$. Theorem 11.10 follows from
$$P(\Gamma_i \text{ is Hamiltonian}) \ge P(\Gamma_{i-1} \text{ is Hamiltonian}).$$
To prove this we condition on the existence or otherwise of directed edges associated with $e_1, \ldots, e_{i-1}, e_{i+1}, \ldots, e_N$. Let $C$ denote this conditioning. Either

(a) $C$ gives us a Hamilton cycle without arcs associated with $e_i$, or

(b) not (a), and there exists a Hamilton cycle if at least one of $\overrightarrow{e_i}, \overleftarrow{e_i}$ is present, or

(c) there is no Hamilton cycle even if both of $\overrightarrow{e_i}, \overleftarrow{e_i}$ are present.

Cases (a) and (c) give the same conditional probability of Hamiltonicity in $\Gamma_i$ and $\Gamma_{i-1}$. In $\Gamma_{i-1}$, under (b), a Hamilton cycle appears with probability $p$. In $\Gamma_i$ we consider two cases: (i) exactly one of $\overrightarrow{e_i}, \overleftarrow{e_i}$ yields Hamiltonicity, in which case the conditional probability is $p$, and (ii) either of $\overrightarrow{e_i}, \overleftarrow{e_i}$ yields Hamiltonicity, in which case the conditional probability is $1 - (1-p)^2 > p$. Note that we will never require that both $\overrightarrow{e_i}, \overleftarrow{e_i}$ occur.
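On very small graphs the inequality of Theorem 11.10 can be checked by brute force. The sketch below (our own; it samples $G_{n,p}$ and $D_{n,p}$ independently rather than via the coupling above, and the parameters are arbitrary) estimates both probabilities:

```python
import random
from itertools import permutations

def hamiltonian(n, has_arc):
    """Brute force: is there a directed Hamilton cycle? (fix vertex 0 as the start)"""
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(has_arc(cycle[i], cycle[(i + 1) % n]) for i in range(n)):
            return True
    return False

def trial(n, p):
    # G_{n,p}: each undirected edge present with prob. p (used as a pair of opposite arcs);
    # D_{n,p}: each of the two arcs present independently with prob. p.
    und = {(u, v): random.random() < p for u in range(n) for v in range(u + 1, n)}
    dig = {(u, v): random.random() < p for u in range(n) for v in range(n) if u != v}
    g = hamiltonian(n, lambda u, v: und[(min(u, v), max(u, v))])
    d = hamiltonian(n, lambda u, v: dig[(u, v)])
    return g, d

n, p, trials = 7, 0.5, 400
results = [trial(n, p) for _ in range(trials)]
print("P(G_{n,p} Hamiltonian) ~", sum(g for g, _ in results) / trials)
print("P(D_{n,p} Hamiltonian) ~", sum(d for _, d in results) / trials)
```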

Theorem 11.10 was subsequently improved by Frieze [386], who proved the equivalent of Theorem 6.5.

Theorem 11.11. Let $p = \frac{\log n + c_n}{n}$. Then
$$\lim_{n\to\infty} P(D_{n,p} \text{ has a Hamilton cycle}) = \begin{cases} 0 & \text{if } c_n \to -\infty \\ e^{-2e^{-c}} & \text{if } c_n \to c \\ 1 & \text{if } c_n \to \infty. \end{cases}$$

Number of Distinct Hamilton Cycles


Here we give an elegant result of Ferber, Kronenberg and Long [354].

Theorem 11.12. Let $p = \omega\left(\frac{\log^2 n}{n}\right)$. Then w.h.p. $D_{n,p}$ contains $e^{o(n)}\, n!\, p^n$ directed Hamilton cycles.

Proof. The upper bound follows from the first moment method. Let $X_H$ denote the number of Hamilton cycles in $D = D_{n,p}$. Now $E\, X_H = (n-1)!\, p^n$, and therefore the Markov inequality implies that w.h.p. we have $X_H \le e^{o(n)}\, n!\, p^n$.
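Spelled out (our rephrasing of the sentence above), for any fixed $\varepsilon > 0$,
$$P\bigl(X_H \ge e^{\varepsilon n}\, n!\, p^n\bigr) \le \frac{E\, X_H}{e^{\varepsilon n}\, n!\, p^n} = \frac{(n-1)!\, p^n}{e^{\varepsilon n}\, n!\, p^n} = \frac{1}{n\, e^{\varepsilon n}} = o(1),$$
so w.h.p. $X_H \le e^{\varepsilon n}\, n!\, p^n$ for each fixed $\varepsilon > 0$, which is the stated $e^{o(n)}$ bound.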
For the lower bound let $\alpha := \alpha(n)$ be a function tending slowly to infinity with $n$. Let $S \subseteq V(G)$ be a fixed set of size $s$, where $s \approx \frac{n}{\alpha \log n}$, and let $V' = V \setminus S$. Moreover, assume that $s$ is chosen so that $|V'|$ is divisible by the integer $\ell = 2\alpha \log n$. From now on the set $S$ will be fixed and we will use it for closing Hamilton cycles. Our strategy is as follows: we first expose all the edges within $V'$, and show that one can find the "correct" number of distinct families $\mathcal{P}$ consisting of $m := |V'|/\ell$ vertex-disjoint paths which span $V'$. Then we expose all the edges with at least one endpoint in $S$, and show that w.h.p. one can turn "most" of these families into Hamilton cycles, and that all of these cycles are distinct.
We take a random partition $V' = V_1 \cup \ldots \cup V_{\ell}$ such that all the $V_i$'s are of size $m$. Let us denote by $D_j$ the bipartite graph with parts $V_j$ and $V_{j+1}$. Observe that $D_j$ is distributed as $G_{m,m,p}$, and therefore, since $p = \omega\left(\frac{\log n}{m}\right)$, by Exercise 11.3.2, with probability $1 - n^{-\omega(1)}$ we conclude that $D_j$ contains $(1 - o(1))mp$ edge-disjoint perfect matchings (in particular, a $(1 - o(1))mp$-regular subgraph).
The Van der Waerden conjecture, proved by Egorychev [328] and by Falikman [347], implies the following: Let $G = (A \cup B, E)$ be an $r$-regular bipartite graph with part sizes $|A| = |B| = n$. Then the number of perfect matchings in $G$ is at least $\left(\frac{r}{n}\right)^n n!$.
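Applied with part size $m$ and degree $r = (1 - o(1))mp$ (a spelled-out substitution, not in the original), this gives, for each $D_j$, at least
$$\left(\frac{r}{m}\right)^{m} m! = \bigl((1 - o(1))p\bigr)^{m} m! = (1 - o(1))^{m}\, m!\, p^{m}$$
perfect matchings, which is the count used below.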
Applying this and the union bound, it follows that w.h.p. each $D_j$ contains at least $(1 - o(1))^m\, m!\, p^m$ perfect matchings. Taking the union of one perfect matching from each of the $D_j$'s we obtain a family $\mathcal{P}$ of $m$ vertex-disjoint paths which spans $V'$. Therefore, there are
$$\bigl((1 - o(1))^m\, m!\, p^m\bigr)^{\ell} = (1 - o(1))^{n-s} (m!)^{\ell}\, p^{n-s}$$
distinct families $\mathcal{P}$ obtained from this partition in this manner. Since this occurs w.h.p., we conclude (applying the Markov inequality to the number of partitions for which the bound fails) that this bound holds for a $(1 - o(1))$-fraction of such partitions. Since there are $\frac{(n-s)!}{(m!)^{\ell}}$ such partitions, one can find at least
$$(1 - o(1))\frac{(n-s)!}{(m!)^{\ell}} \cdot (1 - o(1))^{n-s} (m!)^{\ell}\, p^{n-s} = (1 - o(1))^{n-s} (n-s)!\, p^{n-s} = (1 - o(1))^n\, n!\, p^n$$
distinct families, each of which consists of exactly $m$ vertex-disjoint paths of size $\ell$ (for the last equality, we used the fact that $s = o(n/\log n)$).

We show next how to close a given family of paths into a Hamilton cycle. For each such family $\mathcal{P}$, let $A := A(\mathcal{P})$ denote the collection of all pairs $(s_P, t_P)$ where $s_P$ is the starting point and $t_P$ is the endpoint of a path $P \in \mathcal{P}$, and define an auxiliary directed graph $D(A)$ as follows. The vertex set of $D(A)$ is $V(A) = S \cup \{z_P = (s_P, t_P) : z_P \in A\}$. The edges of $D(A)$ are determined as follows: if $u, v \in S$ and $(u, v) \in E(D)$ then $(u, v)$ is an edge of $D(A)$. The in-neighbors (out-neighbors) in $S$ of a vertex $z_P$ are the in-neighbors of $s_P$ in $D$ (the out-neighbors of $t_P$). Lastly, $(z_P, z_Q)$ is an edge of $D(A)$ if $(t_P, s_Q)$ is an edge of $D$.
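A direct transcription of this construction into code (a sketch of our own; the representation of $D$ as a set of arcs and the function name are choices made here, not from the text):

```python
def auxiliary_digraph(arcs, S, paths):
    """Build the auxiliary digraph D(A): arcs is the arc set of D,
    S the reservoir, paths a family of vertex-disjoint paths covering V \\ S."""
    z = [(path[0], path[-1]) for path in paths]   # z_P = (s_P, t_P)
    nodes = set(S) | set(z)
    aux = set()
    for u in S:
        for v in S:
            if u != v and (u, v) in arcs:
                aux.add((u, v))                   # arcs inside S are kept as-is
    for (sP, tP) in z:
        for u in S:
            if (u, sP) in arcs:
                aux.add((u, (sP, tP)))            # in-neighbors of z_P = in-neighbors of s_P
            if (tP, u) in arcs:
                aux.add(((sP, tP), u))            # out-neighbors of z_P = out-neighbors of t_P
        for (sQ, tQ) in z:
            if (sQ, tQ) != (sP, tP) and (tP, sQ) in arcs:
                aux.add(((sP, tP), (sQ, tQ)))     # z_P -> z_Q iff (t_P, s_Q) is an arc of D
    return nodes, aux

# Tiny usage example with hypothetical data:
# nodes, aux = auxiliary_digraph({(1, 2), (2, 3), (4, 1)}, S={4}, paths=[[1], [2, 3]])
```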
Clearly $D(A)$ is distributed as $D_{s+m,p}$, and a Hamilton cycle in $D(A)$ corresponds to a Hamilton cycle in $D$ after adding the corresponding paths between each $s_P$ and $t_P$. Now distinct families $\mathcal{P} \ne \mathcal{P}'$ yield distinct Hamilton cycles (to see this, just delete the vertices of $S$ from the Hamilton cycle to recover the paths). Using Theorem 11.11 we see that for $p = \omega(\log n/(s+m)) = \omega(\log(s+m)/(s+m))$, the probability that $D(A)$ does not have a Hamilton cycle is $o(1)$. Therefore, using the Markov inequality, we see that for almost all of the families $\mathcal{P}$ the corresponding auxiliary graph $D(A)$ is indeed Hamiltonian, and we have at least $(1 - o(1))^n\, n!\, p^n$ distinct Hamilton cycles, as desired.
