
The Moran process on a random graph

Alan Frieze∗ and Wesley Pegden†


Department of Mathematical Sciences
Carnegie Mellon University
Pittsburgh PA 15213.

1 Introduction

Consider a fixed population of N individuals of types A and B, where the relative fitness of
individuals of type B is given by a real number s. The classical Moran process [13] models a
discrete-time process for the fixed-size population where at each step, one individual is chosen for
reproduction with probability proportional to its fitness (s for type B, 1 for type A), and then
replaces an individual chosen uniformly at random for death. In particular, for the numbers $N_A$ and $N_B$ of
individuals of each type, at each step of the process, the probability that $N_B$ increases by 1 and
$N_A$ decreases by 1 is
$$p^+ = \left(\frac{sN_B}{N_A + sN_B}\right)\left(\frac{N_A}{N_A + N_B}\right),$$
and the probability that $N_B$ decreases by 1 and $N_A$ increases by 1 is
$$p^- = \left(\frac{N_A}{N_A + sN_B}\right)\left(\frac{N_B}{N_A + N_B}\right).$$

(With probability $1 - p^+ - p^-$, $N_A$ and $N_B$ remain unchanged.)
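These dynamics are straightforward to simulate directly; the following minimal sketch (ours, not part of the paper) tracks only the pair (NA, NB):

```python
import random

def moran_step(NA, NB, s):
    """One step of the classical Moran process; returns the updated (NA, NB).
    Type B has relative fitness s, type A has fitness 1."""
    # The reproducer is of type B with probability proportional to total B fitness.
    reproducer_is_B = random.random() < s * NB / (NA + s * NB)
    # The offspring replaces a uniformly random member of the population.
    victim_is_B = random.random() < NB / (NA + NB)
    if reproducer_is_B and not victim_is_B:
        NA, NB = NA - 1, NB + 1
    elif not reproducer_is_B and victim_is_B:
        NA, NB = NA + 1, NB - 1
    return NA, NB

def b_fixes(N, s):
    """Run to absorption from a single B mutant; True if type B fixes."""
    NA, NB = N - 1, 1
    while 0 < NB < N:
        NA, NB = moran_step(NA, NB, s)
    return NB == N
```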

This process was generalized to graphs by Lieberman, Hauert and Nowak in [10], see also [14]. In
this setting, a fixed graph—whose vertices represent the fixed population—has vertex colors that
evolve over time, representing the two types of individuals. We will focus on the case where one
vertex begins colored as Type B—the mutant type. In the Birth-Death process, a random vertex
v is chosen for reproduction, with vertices chosen with probabilities proportional to the fitness of
their color type, and then a random neighbor u of v is chosen for death, with the result that u gets
recolored with the color of v. In the Death-Birth process, a vertex v is chosen uniformly at random
for death, and then a vertex u is chosen randomly from among its neighbors, with probabilities
proportional to the fitness of their respective types, with the result that v is recolored with the
color of u.
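To fix the two update rules, here is a sketch of one iteration of each. We store the graph as a dictionary mapping each vertex to its set of neighbors and the current Type-B set as mutants; the representation is our choice, not the paper's:

```python
import random

def birth_death_step(adj, mutants, s):
    """Birth-Death: choose v with probability proportional to fitness, then a
    uniform neighbor u of v dies and takes v's color."""
    verts = list(adj)
    weights = [s if v in mutants else 1.0 for v in verts]
    v = random.choices(verts, weights=weights)[0]
    u = random.choice(list(adj[v]))
    if v in mutants:
        mutants.add(u)
    else:
        mutants.discard(u)

def death_birth_step(adj, mutants, s):
    """Death-Birth: choose v uniformly for death, then a neighbor u chosen
    proportional to fitness reproduces; v takes u's color."""
    v = random.choice(list(adj))
    nbrs = list(adj[v])
    weights = [s if u in mutants else 1.0 for u in nbrs]
    u = random.choices(nbrs, weights=weights)[0]
    if u in mutants:
        mutants.add(v)
    else:
        mutants.discard(v)
```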

∗ Research supported in part by NSF grant DMS1952285. Email: [email protected]

† Research supported in part by NSF grant DMS1700365. Email: [email protected]

The isothermal theorem of [10] implies that if G is a regular, connected, undirected graph, the
fixation probability for type B—that is, the probability that all vertices will eventually be of Type
B—depends only on the number n of vertices in G and the relative fitness s, and not, for example,
on the particular regular graph or the particular choice of starting vertex (more generally, the same
holds for doubly stochastic—so called isothermal—weighted digraphs). But it is also observed in
[10] that beyond this special setting, the graph structure can have a dramatic effect on the fixation
probability. In the classical Moran model for a population of size N (equivalent to the graph process
when the graph is a complete graph on N vertices, with loops), the fixation probability for Type
B is
$$\frac{1 - s^{-1}}{1 - s^{-N}} \sim 1 - \frac{1}{s}$$
(where $a_N \sim b_N$ indicates that $a_N/b_N \to 1$ as $N \to \infty$). But the fixation probability for an
N-vertex star graph is, asymptotically in N, for s > 1,
$$1 - \frac{1}{s^2}$$
(see also [3, 2]).
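To see the contrast concretely, one can evaluate the two expressions numerically; a quick sanity check of ours, not from the paper:

```python
s = 2.0
for N in (10, 100, 1000):
    complete_graph = (1 - 1 / s) / (1 - s ** (-N))  # classical Moran model
    print(N, round(complete_graph, 6))
# The complete-graph value tends to 1 - 1/s = 0.5, while the star graph
# amplifies selection: its limiting fixation probability is 1 - 1/s**2 = 0.75.
```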

For the special case when s = 1 the fixation probability can be characterized in terms of coalescence
times for random walks [4], but for s > 1 it is unknown whether a polynomial time algorithm exists
to determine the fixation probability given a particular input graph [5]; in place of this, heuristic
approximations (e.g., see [6]) or numerical experiments (e.g., [12]) are used to estimate fixation
probabilities.

In place of analyzing fixed graphs, one can analyze the fixation probability for a random graph
from some distribution. For the Erdős-Rényi random graph Gn,p , each possible edge among a set
of n vertices is included independently, each with probability p. For the case where 0 < p < 1 is
a constant independent of n, Adlam and Nowak [1] leveraged the near-regularity of such graphs (in
particular, that they are "nearly isothermal") to show that the fixation probability on such graphs
is approximated by that of the classical Moran model. When p is not a constant but "small", in
the sense that p = p(n) → 0 as n → ∞, Gn,p can exhibit significant diversity in vertex degrees,
and numerical experiments conducted by Mohamadichamgavi and Miekisz [7] showed a strong
dependence of the fixation probability on the degree of the initial mutant vertex.

In this paper we give a rigorous analysis of fixation probabilities for random graphs with degree
heterogeneity. In particular, our first result concerns Gn,p when $p = p(n) = \frac{\log n + \omega(n)}{n}$, where ω(n)
denotes any function satisfying ω(n) → ∞. When ω(n) is a slow-growing function, this places our
analysis right at the threshold for connectivity of Gn,p; indeed, in this regime, Gn,p is connected
and most vertices have degree close to log n, but still there are ∼ log n vertices whose degree is just
1. With probability tending to 1 as n → ∞ ("with high probability"), the automorphism group of
the graph is trivial—that is, any two vertices can be distinguished by their relations in the network
structure (even if they have the same degree). Nevertheless, we prove that the degree of the initial
mutant vertex (or possibly one of its neighbors) is enough to asymptotically determine the fixation
probability on Gn,p. The following are defined w.r.t. G = Gn,p; d(v) denotes the degree of vertex v:

$$\varepsilon = \frac{1}{\log\log\log n},$$
$$S_0 = \left\{v : d(v) \le \frac{np}{10}\right\}, \tag{1}$$
$$S_1 = \left\{v : d(v) \notin I_\varepsilon = [(1 \pm \varepsilon)np]\right\}. \tag{2}$$
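For intuition, the sets (1), (2) are easy to inspect by simulation. A small illustrative sketch of ours follows, where the value of ε is fixed by hand since 1/log log log n is only meaningful asymptotically:

```python
import math, random

def sample_gnp(n, p):
    """Adjacency sets of one sample of the Erdos-Renyi random graph G(n, p)."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for w in range(v + 1, n):
            if random.random() < p:
                adj[v].add(w)
                adj[w].add(v)
    return adj

n = 2000
p = (math.log(n) + math.log(math.log(n))) / n  # a slow-growing choice of omega(n)
eps = 0.5  # stand-in for 1/logloglog n at simulation scales
adj = sample_gnp(n, p)
S0 = {v for v in adj if len(adj[v]) <= n * p / 10}
S1 = {v for v in adj if abs(len(adj[v]) - n * p) > eps * n * p}
print(len(S0), len(S1))  # only a few atypical vertices; most degrees are near np
```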

Theorem 1. Given a graph G and a vertex v0, we let ϕ = ϕ^{BD}_{G,v0,s} denote the fixation probability of
the Birth-Death process on G when the process is initialized with a mutant at v0 of relative fitness
s > 1. If G is a random graph sampled according to the distribution Gn,p, then w.h.p., G has the
property that

(a) If d(v0) = o(np) then ϕ = 1 − o(1).

(b) Let S0, S1 be as in (1), (2). Suppose that v0 ∉ S1 and N(v0) ∩ S0 = ∅. Then w.h.p. ϕ ∼ (s−1)/s.
(This includes the case where v0 is chosen uniformly from [n].)

(c) Suppose that v0 ∈ S1. Then ϕ depends asymptotically only on d(v0), s.

(d) Suppose that v0 ∉ S1 and N(v0) ∩ S0 = {v1} holds. Then ϕ depends asymptotically only on
d(v1), s.

On the other hand, if s ≤ 1 then G has the property that ϕ^{BD}_{G,v0,s} = o(1) regardless of v0.

Here, o(1) denotes a function of n whose limit is 0 as n → ∞.

Theorem 2. Given a graph G and a vertex v0, we let ϕ^{DB}_{G,v0,s} denote the fixation probability of
the Death-Birth process on G when the process is initialized with a mutant at v0 of relative fitness
s > 1. If G is a random graph sampled according to the distribution Gn,p, then w.h.p., G has the
property that

(a) Suppose that v0 ∉ S1 and N(v0) ∩ S0 = ∅. Then w.h.p. ϕ ∼ (s − 1)/s.

(b) Suppose that v0 ∈ S1. Then ϕ depends asymptotically only on d(v0), s.

(c) Suppose that v0 ∉ S1 and N(v0) ∩ S0 = {v1} holds. Then ϕ depends asymptotically only on
d(v1), s.

On the other hand, if s ≤ 1 then G has the property that ϕ^{DB}_{G,v0,s} = o(1) regardless of v0.

Remark 1. In our proofs we establish recurrence relations that enable us to asymptotically deter-
mine the fixation probability in the above cases where it is not explicitly given. Unfortunately, these
recurrences are non-linear and difficult to solve explicitly, although in principle we could obtain
numerical results from them.

2 Notation

For a set S ⊆ [n], we let S̄ = [n] \ S and e(S) = |{vw ∈ E(G) : v, w ∈ S}|. If S, T ⊆ [n] with S ∩ T = ∅, then
e(S : T) = |{vw : v ∈ S, w ∈ T}|. N(S) = {w ∉ S : ∃v ∈ S s.t. vw ∈ E(G)}, where we shorten
N({v}) to N(v). We let dS(v) = |N(v) ∩ S|, d(v) = d[n](v) and ∆ = max{d(v) : v ∈ [n]}.

We let
$$n_1 = n - \frac{n}{(np)^{1/2}} \qquad\text{and}\qquad \omega_0 = \frac{\varepsilon^2 np}{100\log np}.$$
We write A ∼ε B if A ∈ (1 ± O(ε))B as n → ∞, A ≲ε B if A ≤ (1 + O(ε))B, and A ≳ε B if A ≥ (1 − O(ε))B.

Chernoff Bounds We use the following inequalities for the Binomial random variable B(N, p):
$$\mathbb{P}(B(N,p) \le (1-\theta)Np) \le e^{-\theta^2 Np/2}, \qquad 0 \le \theta \le 1. \tag{3}$$
$$\mathbb{P}(B(N,p) \ge (1+\theta)Np) \le e^{-\theta^2 Np/3}, \qquad 0 \le \theta \le 1. \tag{4}$$
$$\mathbb{P}(B(N,p) \ge \lambda Np) \le \left(\frac{e}{\lambda}\right)^{\lambda Np}, \qquad \lambda > 0. \tag{5}$$
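These bounds are easy to check empirically; a quick simulation of (3), ours for illustration:

```python
import math, random

N, p, theta = 500, 0.04, 0.5  # so Np = 20
trials = 10000
hits = sum(
    sum(random.random() < p for _ in range(N)) <= (1 - theta) * N * p
    for _ in range(trials)
)
print("empirical lower tail:", hits / trials)                    # about 0.01 here
print("Chernoff bound (3):  ", math.exp(-theta**2 * N * p / 2))  # about 0.082
```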

3 Random Graph Properties

Assume that np = O(log n). We will deal with the simpler case of np/ log n → ∞ in Section 4.4.
The following hold w.h.p.
Lemma 3.

(a) ∆ ≤ 5np.

(b) |S1| ≤ n^{1−ε²/4}.

(c) If C is a cycle of length at most ω0 then C ∩ S1 = ∅.

(d) If v, w ∈ S0 then dist(v, w) ≥ ω0.

(e) If v ∈ S1 and d(w) < ω0 then dist(v, w) > ω0.

(f) If |S| ≤ 2n/(np)^{9/8} then e(S) < 10|S|.

(g) If |S| ≤ 2ω0 then e(S) ≤ |S|.

(h) W.h.p. there is no S ⊆ S̄1 such that

(i) |S| ∈ I^{(d)} = [10/ε³, n1], and

(ii) e(S : T) ∉ (1 ± 2ε)|S||T|p, where T = S̄ \ S1.

(i) If S induces a connected subgraph and ω0/2 ≤ |S| ≤ n/(np)^{9/8} then e(S : S̄) ≥ |S|np/2.

(j) If S ⊆ [n] and S induces a connected subgraph then S ∪ N(S) contains at most (7/ε²) max{1, |S|/ω0}
members of S1 ∪ N(S1).

(k) Suppose that |S| ≤ n/(np)^{9/8} and S induces a connected subgraph. Let Bk(S) be the set of
vertices v ∈ S̄ with dS(v) = k. Then for 2 ≤ k ≤ (np)^{1/3}, |Bk(S)| ≤ αk|S|np where αk = ε/k².

(l) If n/(np)² ≤ |S| ≤ n1, then there are at most θ|S| vertices v ∈ S such that dS̄(v) ∉ (1 ± ε)(n − |S|)p,
where θ = 1/(ε²(np)^{1/2}).

(m) There do not exist disjoint sets S, T ⊆ [n] with n/(np)^{9/8} ≤ |S| ≤ n/(np)^{1/3} and |T| = θ(n − |S|)
such that e(S : T) ≥ α|S||T|p, where α = (np)^{1/4}.

Proof. We defer the proof of this lemma to Section A in an appendix.

4 Birth-Death

X denotes the set of mutant vertices and w(X) := (s − 1)|X| + n. We have
$$p^+ = p^{BD}_+(X) = \mathbb{P}(|X| \to |X| + 1) = \frac{s}{w(X)}\sum_{v \in X}\frac{d_{\bar X}(v)}{d(v)}. \tag{6}$$
$$p^- = p^{BD}_-(X) = \mathbb{P}(|X| \to |X| - 1) = \frac{1}{w(X)}\sum_{v \in N(X)}\frac{d_X(v)}{d(v)}. \tag{7}$$
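Formulas (6), (7) translate directly into code; a sketch of ours, using the same adjacency-set representation as in Section 1:

```python
def bd_transition_probs(adj, X, s):
    """Evaluate (6) and (7): (p+, p-) for the Birth-Death process, where adj
    maps each vertex to its set of neighbors and X is the mutant set."""
    n = len(adj)
    w = (s - 1) * len(X) + n
    p_plus = (s / w) * sum(len(adj[v] - X) / len(adj[v]) for v in X)
    NX = set().union(*(adj[v] for v in X)) - X if X else set()
    p_minus = (1 / w) * sum(len(adj[v] & X) / len(adj[v]) for v in NX)
    return p_plus, p_minus
```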

4.1 The size of (X ∪ N (X)) ∩ S1

An iteration will be a step of the process in which X changes. We prove some lemmas that will be
useful in later sections. In the following Z is a model for |X|.

Lemma 4. Suppose that Z = Zt, t ≥ t0 is a random walk on {0, 1, . . . , n} and that P(Zt+1 = Zt + 1) ≥ γ
for t ≥ t0, where γ > 1/2, as long as Zt ≥ ρt, where ρ > 0 is sufficiently
small. Suppose that n is large and that 0, n are absorbing states. Suppose that for values a and b > 2a
we have Z0 = 2a ≥ 2ρt0. Then with probability 1 − O(e^{−Ω(a)}), Z reaches b before it reaches a.

Proof. Let σ = γ/2 + 1/4 and let Et be the event that Z makes at least (t − t0)σ − a/2 positive
moves at times t0 + 1, . . . , t. If Eτ occurs for t0 ≤ τ ≤ t then Zt ≥ a + 2(t − t0)σ − (t − t0) =
a + (γ − 1/2)(t − t0) > max{a, ρt} for ρ sufficiently small. (This is true by assumption for t = t0
and the LHS increases by γ − 1/2 > ρ as t increases by one.) Let t1 = t0 + (b − a)/(γ − 1/2). If
E = ⋂_{τ=t0}^{t1} Eτ occurs then Zt1 ≥ b. The Chernoff bounds imply that
$$\mathbb{P}(\neg E) \le \sum_{\tau = t_0 + a/2\sigma}^{t_1}\mathbb{P}(\neg E_\tau) \le \sum_{\tau = t_0 + a/2\sigma}^{t_1}\exp\left\{-\frac{1}{2}\left(\frac{\gamma}{2} - \frac{1}{4}\right)^2\gamma(\tau - t_0)\right\} \le e^{-\Omega(a)}.$$

Lemma 5. While |X| ≤ n/(np)^{9/8}, the probability that |(X ∪ N(X)) ∩ S1| increases in an iteration
is O(ε^{−2}/ω0).

Proof. The probability estimates are conditional on there being a change in X.

Let S1+ = S1 ∪ N(S1). We consider the addition of a member of S1 to X ∪ N(X). This would mean
the choice of v ∈ X and then the choice of a neighbor of v in S1+. Since v is at distance at most 2
from S1, Lemma 3(e) implies that d(v) ≥ ω0. Let C be the component of the graph GX induced
by X that contains v. Assume first that |C| ≤ ω0/2. Then v has at least ω0/2 neighbors in X̄ and
so we see from Lemma 3(j) that the conditional probability of adding to S1 ∩ X is O(ε^{−2}/ω0).

Now assume that ω0/2 < |C| ≤ n/(np)^{9/8}. Let C0 denote {v ∈ C : dX̄(v) > 0}. We estimate the
probability of adding a vertex in S1 to X ∪ N(X), conditional on choosing v ∈ C0. Now Lemma
3(f) implies that e(C) ≤ 10|C| and Lemma 3(i) implies that e(C : C̄) ≥ |C|np/2. So, very crudely,
|C0| ≥ |C|/10 by Lemma 3(a). We write

P(|(X ∪ N(X)) ∩ S1| increases | chosen vertex is in C0, chosen neighbor is in X̄)
$$= \frac{1}{|C_0|}\sum_{v \in C_0}\frac{d_{\bar X \cap S_1^+}(v)}{d_{\bar X}(v)} = \frac{1}{|C_0|}\left(\sum_{v \in C_0 \cap S_0}\frac{d_{\bar X \cap S_1^+}(v)}{d_{\bar X}(v)} + \sum_{v \in C_1}\frac{d_{\bar X \cap S_1^+}(v)}{d_{\bar X}(v)} + \sum_{v \in C_2}\frac{d_{\bar X \cap S_1^+}(v)}{d_{\bar X}(v)}\right) \tag{8}$$
where
$$C_1 = \{v \in C_0 \setminus S_0 : d_X(v) \ge d(v)/2\} \qquad\text{and}\qquad C_2 = C_0 \setminus (S_0 \cup C_1).$$

The first sum in (8) is at most |C0 ∩ S0| and it follows from Lemma 3(d) that |C0 ∩ S0| ≤ 2|C|/ω0
(the 2 from the fact that our lower bound on |C| is ω0/2). It follows from Lemma 3(f) and dX(v) ≥
log n/40 for v ∈ C1 that the second sum in (8) is at most 800|C|/np. As for C2, let A1, A2, . . . , Aℓ be the
components of the graph induced by C2. It follows from Lemma 3(j) that
$$\sum_{v \in C_2}\frac{d_{\bar X \cap S_1^+}(v)}{d_{\bar X}(v)} = \sum_{i=1}^{\ell}\sum_{v \in A_i}\frac{d_{\bar A_i \cap S_1^+}(v)}{d_{\bar X}(v)} \le \frac{20}{np}\sum_{i=1}^{\ell}\sum_{v \in A_i}d_{S_1^+}(v) \le \frac{20}{np}\sum_{i=1}^{\ell}\frac{7}{\varepsilon^2}\max\left\{1, \frac{|A_i|}{\omega_0}\right\} = O\left(\frac{|C_2|}{\varepsilon^2\omega_0 np}\right),$$
and we are done by (8) since |C2| ≤ |C0|.

We next consider N(X) ∩ S0. At any point in the process, we let X̂ denote the set of vertices that
have ever been in X up to this point.

Lemma 6. W.h.p., there are no vertices in S0 added to N(X) and no vertices in N(S0) added to
X in the first ω0^{3/4} iterations.

Proof. Suppose that a member of S0 is added to N(X) because we choose v ∈ X and then add
u ∈ N(v) to X, and N(u) ∩ S0 ≠ ∅. It follows from Lemma 3(d,e) that d(v) ≥ ω0 and the choice
of u is unique. So the conditional probability of this happening is O(ω0^{3/4}/ω0) = o(1).

If a vertex in N(S0) is added to X then either (i) v0 has a neighbor w ∈ S0 which has two neighbors
in X̂, or (ii) X̂ has two neighbors in S0, or (iii) v0 chooses w as its neighbor during the process.
Lemma 3(c) rules out (i) and Lemma 3(d) rules out (ii). Lemma 3(e) implies that v0 has degree at
least ω0 and so the probability of (iii) is O(ω0^{3/4}/ω0) = o(1), given Lemma 3(d).

Remark 2. We will see in the next section that w.h.p. the size of X follows a random walk
with a positive drift in the increasing direction. It follows from this that to deal with cases where
0 < |X| ≤ ω for some ω ≤ ω0^{1/2}, we can assume that there will have been at most O(ω log ω)
iterations to this point. More precisely, we can use the Chernoff bounds as we did in Lemma 4 to
argue that if |X| is not absorbed at 0 then |X| will reach ω in O(ω log ω) iterations. Thus, given
|X| = ω at some point in the process we have |X̂| = O(ω log ω).

4.2 p+ versus p−

In this section we bound the ratio p+ /p− for various values of |X|. We will see later when we
analyse the cases in Section 4.3 that if |X| ≤ 20/ε3 then we only need to consider cases where
|X ∩ S1 |, |N (X) ∩ S0 | ≤ 1. This will in turn follow from the results of Section 4.1.

Case BD1: |X| ≤ 20/ε³ and |X ∩ S1|, |N(X) ∩ S0| ≤ 1.

Let X1 = X ∩ S1 and let Y1 = N(X) ∩ S1, Y0 = N(X) ∩ S0. Either X1 = {x1} or X1 = ∅, and
either Y0 = ∅ or Y0 = {y0}. Let X^{(i)} denote a connected component of X.

Because |X^{(i)}| is small and X^{(i)} induces a connected subgraph, Lemma 3(g) implies that X^{(i)} ∪
N(X^{(i)}) = X^{(i)} ∪ N_{X̄}(X^{(i)}) induces a tree or a unicyclic subgraph. Let δT^{(i)} be the indicator for
X^{(i)} inducing a tree and let δT = Σi δT^{(i)}.

Note that the number of edges inside X is precisely |X| − δT ≥ 0, and the number of edges inside
X that are not incident to X1 is precisely |X| − δT − dX(X1) ≥ 0.

Thus we have from (6) that
$$p^+ \le \frac{s}{w(X)}\left(\frac{(1+\varepsilon)np(|X| - |X_1|) - 2(|X| - \delta_T - d_X(X_1))}{(1-\varepsilon)np} + \frac{d_{\bar X}(X_1)}{d(X_1)}\right),$$
$$p^+ \ge \frac{s}{w(X)}\left(\frac{(1-\varepsilon)np(|X| - |X_1|) - 2(|X| - \delta_T - d_X(X_1))}{(1+\varepsilon)np} + \frac{d_{\bar X}(X_1)}{d(X_1)}\right).$$

We can simplify this as follows. If |X| > |X1|, then |X| − δT − dX(X1) = o(np) = o(np(|X| − |X1|)).
On the other hand, if |X| = |X1| = 1, then δT = 1 and dX(X1) = 0. Thus in fact we have
$$p^+ \le \frac{s}{w(X)}\left((1 + 3\varepsilon)(|X| - |X_1|) + \frac{d_{\bar X}(X_1)}{d(X_1)}\right). \tag{9}$$
$$p^+ \ge \frac{s}{w(X)}\left((1 - 3\varepsilon)(|X| - |X_1|) + \frac{d_{\bar X}(X_1)}{d(X_1)}\right). \tag{10}$$

We see from (7) that
$$p^- \le \frac{1}{w(X)}\left(\frac{(1+\varepsilon)np(|X| - |X_1|) + d_{\bar X \setminus S_1}(X_1)}{(1-\varepsilon)np} + \frac{|Y_1 \setminus Y_0|}{np/10} + \frac{d_X(Y_0)}{d(Y_0)}\right),$$
$$p^- \ge \frac{1}{w(X)}\left(\frac{(1-\varepsilon)np(|X| - |X_1|) - |X||N(X \setminus X_1) \cap S_1| + d_{\bar X \setminus S_1}(X_1)}{(1+\varepsilon)np} + \frac{|Y_1 \setminus Y_0|}{5np} + \frac{d_X(Y_0)}{d(Y_0)}\right).$$

Here |X||N(X \ X1) ∩ S1| ≤ |X||N(X̂ \ X1) ∩ S1| = O(|X|ε^{−3} log 1/ε) is a crude upper bound on
the number of edges between X \ X1 and N(X \ X1) ∩ S1, see Remark 2 and Lemma 3(j). Because
np ≫ ε^{−4} log 1/ε we can absorb the |X||N(X \ X1) ∩ S1| term into an error term. (When X = X1 this
term goes away regardless.) A similar application of Remark 2 and Lemma 3(j) also implies that
|Y1 \ Y0|/np is o(ε|X|). This can clearly be absorbed into the error term when |X| > 1. When
|X| = 1 its contribution after dividing by w(X) will be o(p+) and it can be ignored. Thus we write
$$p^- \le \frac{1}{w(X)}\left(\frac{(1+\varepsilon)np(|X| - |X_1|) + d_{\bar X \setminus S_1}(X_1)}{(1-\varepsilon)np} + \frac{d_X(Y_0)}{d(Y_0)}\right). \tag{11}$$
$$p^- \ge \frac{1}{w(X)}\left(\frac{(1-\varepsilon)np(|X| - |X_1|) + d_{\bar X \setminus S_1}(X_1)}{(1+\varepsilon)np} + \frac{d_X(Y_0)}{d(Y_0)}\right). \tag{12}$$

We now use (9) – (12) to estimate p+ /p− in various cases.

Case BD1a: X1 = Y0 = ∅.
In this case equations (9), (10) and εnp ≫ 1 imply that
$$\frac{s(1 - 3\varepsilon)|X|}{w(X)} \le p^+ \le \frac{s(1 + 3\varepsilon)|X|}{w(X)}.$$
Equations (11), (12) imply that
$$\frac{(1 - 3\varepsilon)|X|}{w(X)} \le p^- \le \frac{(1 + 3\varepsilon)|X|}{w(X)}.$$
So we have
$$\frac{p^+}{p^-} \sim_\varepsilon s. \tag{13}$$
Case BD1b: X1 = {x1}, Y0 = ∅ and (|X| > 1 or d(x1) = Ω(np)).
If |X| > 1 then equations (11), (12) imply that
$$\frac{(1 - 3\varepsilon)(|X| - 1) + d(x_1)/np}{w(X)} \le p^- \le \frac{(1 + 3\varepsilon)(|X| - 1) + d(x_1)/np}{w(X)}. \tag{14}$$
We then have, with the aid of (9) and (10) and the fact that d(x1) = Ω(np) implies dX̄(x1) ∼ε d(x1),
that
$$\frac{p^+}{p^-} \sim_\varepsilon s\left(\frac{|X|}{|X| - 1 + d(x_1)/np}\right). \tag{15}$$
If |X| = 1 then p+ = s/w(X) and p− ∼ε d(x1)/(w(X)np), and so (15) holds in this case too.

Case BD1c: X1 = {x1}, Y0 = ∅, |X| = 1 and d(x1) = o(np).

We have p+ = s/w(X) and (11), (12) imply that p− = o(1/w(X)). So, in this case,
$$\frac{p^+}{p^-} \sim_\varepsilon \frac{np}{d(x_1)} \to \infty. \tag{16}$$

Case BD1d: X1 = ∅ and N(X) ∩ S0 = {y0}.

In this case equations (9), (10) imply that
$$\frac{s(1 - 3\varepsilon)|X|}{w(X)} \le p^+ \le \frac{s(1 + 3\varepsilon)|X|}{w(X)}. \tag{17}$$

We have dX(y0) = 1. To see this observe that X̂ defined in Remark 2 will be a connected set of
size o(ω0) and so Lemma 3(c) implies that dX(y0) = 1.

Equations (11), (12) imply that
$$\frac{1}{w(X)}\left((1 - 3\varepsilon)|X| + \frac{1}{d(y_0)}\right) \le p^- \le \frac{1}{w(X)}\left((1 + 3\varepsilon)|X| + \frac{1}{d(y_0)}\right). \tag{18}$$
So, in this case,
$$\frac{p^+}{p^-} \sim_\varepsilon \frac{s}{1 + \frac{1}{d(y_0)|X|}}. \tag{19}$$
Case BD1e: X1 = {x1} and N(X) ∩ S0 = {y0}.
If |X| = 1 then p+ = s/w(X) and p− ∼ε d(x1)/(w(X)np). If |X| > 1 then we use Lemma 6 to see that w.h.p.
dX̄(X1)/d(X1) = 1, giving
$$\frac{s}{w(X)}(1 - 3\varepsilon)|X| \le p^+ \le \frac{s}{w(X)}(1 + 3\varepsilon)|X|.$$

Equation (14) is replaced by
$$\frac{(1 - 3\varepsilon)(|X| - 1) + d(x_1)/np + 1/d(y_0)}{w(X)} \le p^- \le \frac{(1 + 3\varepsilon)(|X| - 1) + d(x_1)/np + 1/d(y_0)}{w(X)}. \tag{20}$$

But Lemma 3(d) implies that d(y0) ≥ ω0 and the term 1/d(y0) is absorbed into error terms. Note
that x1 = v0 ∉ S0 and so d(x1)/np ≥ 1/10. So (15) holds.

Case BD2: |X| ∈ I1 = [20/ε³, n1 = n − n/(np)^{1/2}], |X| ≥ εt where t denotes the iteration
number, and
$$|(X \cup N(X)) \cap S_1| \le \begin{cases} 1 & |X| \le \omega_0^{1/2}, \\ O(\varepsilon^{-2}|X|/\omega_0) & \omega_0^{1/2} \le |X| \le n/(np)^{9/8}. \end{cases} \tag{21}$$

It follows from Lemma 3(h),(j) that
$$e(X : \bar X) \ge \min\left\{e(X \setminus S_1 : \bar X),\, e(X : \bar X \setminus S_1)\right\} \ge e(X \setminus S_1 : \bar X \setminus S_1)$$
$$\ge (1 - 2\varepsilon)|X \setminus S_1|\,|\bar X \setminus S_1|\,p$$
$$\ge (1 - 2\varepsilon)\left(|X| - O\left(\frac{t}{\varepsilon^2\omega_0}\right)\right)\left(|\bar X| - |S_1|\right)p \tag{22}$$
$$\ge (1 - 3\varepsilon)|X|\,|\bar X|\,p, \tag{23}$$
and similarly
$$e(X : \bar X) \le (1 + 3\varepsilon)|X|\,|\bar X|\,p.$$

Note that to go from (22) to (23) we use |X̄| ≫ |S1| from Lemma 3(b) and the assumption that
|X| ≥ εt.

Now,
$$p^+ \ge \frac{s}{w(X)}\cdot\frac{e(X \setminus S_1 : \bar X)}{(1+\varepsilon)np} \ge \frac{s(1 - 5\varepsilon)|X|(n - |X|)}{nw(X)}. \tag{24}$$
$$p^+ \le \frac{s}{w(X)}\left(\frac{e(X \setminus S_1 : \bar X)}{(1-\varepsilon)np} + |X \cap S_1|\right) \le \frac{s(1 + 5\varepsilon)|X|(n - |X|)}{nw(X)}. \tag{25}$$

When |X| ≤ n/(np)^{9/8} we use (21). For larger X we have from Lemma 3(b) that |X|(n − |X|) ≥
ε^{−2}n|S1|.

On the other hand,
$$p^- \ge \frac{1}{w(X)}\cdot\frac{e(X : \bar X \setminus S_1)}{(1+\varepsilon)np} \ge \frac{(1 - 5\varepsilon)|X|(n - |X|)}{nw(X)}, \tag{26}$$
$$p^- \le \frac{1}{w(X)}\left(\frac{e(X : \bar X \setminus S_1)}{(1-\varepsilon)np} + |N(X) \cap S_1|\right) \le \frac{(1 + 5\varepsilon)|X|(n - |X|)}{nw(X)}. \tag{27}$$

When |X| ≤ n/(np)^{9/8} we use (21). For larger X we again use |X|(n − |X|) ≥ ε^{−2}n|S1|.

We see from (24) – (27) that
$$\frac{p^+}{p^-} \ge \frac{s(1 - 5\varepsilon)|X|(n - |X|)}{nw(X)}\cdot\frac{nw(X)}{(1 + 5\varepsilon)|X|(n - |X|)} \ge (1 - 10\varepsilon)s, \tag{28}$$
$$\frac{p^+}{p^-} \le \frac{s(1 + 5\varepsilon)|X|(n - |X|)}{nw(X)}\cdot\frac{nw(X)}{(1 - 5\varepsilon)|X|(n - |X|)} \le (1 + 10\varepsilon)s, \tag{29}$$
and so
$$\frac{p^+}{p^-} \sim_\varepsilon s. \tag{30}$$
Case BD3: |X| ≥ n1.
We have, very crudely, that w.h.p.
$$\frac{s|N(X)|}{5np\,w(X)} \le p^+ \le \frac{s|N(X)|}{w(X)} \qquad\text{and}\qquad \frac{|N(X)|}{5np\,w(X)} \le p^- \le \frac{|N(X)|}{w(X)}.$$
So,
$$\frac{p^+}{p^-} \ge \frac{1}{5np}. \tag{31}$$

4.3 Fixation probability – Proof of Theorem 1

In this section we use the results of Section 4.2 to determine the asymptotic fixation probability
for various starting situations.

4.3.1 Case analysis

First recall the following basic result on random walks, i.e. Gambler's Ruin: we consider a random
walk Z0, Z1, . . . on A = {0, 1, . . . , m}. Suppose that Z0 = z0 > 0 and that if Zt = x > 0 then
P(Zt+1 = x − 1) = β and P(Zt+1 = x + 1) = α = 1 − β. We assume that 0 and z1 > z0 are absorbing
states and that α > β. Let ϕ denote the probability that the walk is ultimately absorbed at 0.
Then, see Feller [8] XIV.2,
$$\phi = \frac{(\beta/\alpha)^{z_0} - (\beta/\alpha)^{z_1}}{1 - (\beta/\alpha)^{z_1}}. \tag{32}$$
Feller also proves that if D denotes the expected duration of the game then
$$D = \frac{m}{\alpha - \beta}\cdot\frac{1 - (\beta/\alpha)^{z_0}}{1 - (\beta/\alpha)^{m}} - \frac{z_0}{\alpha - \beta} = O(m). \tag{33}$$
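Formula (32) is a one-liner to evaluate; a sketch of ours:

```python
def ruin_probability(z0, z1, alpha):
    """Equation (32): probability of absorption at 0, starting from z0, with
    absorbing barriers 0 and z1 and up-step probability alpha > 1/2."""
    r = (1 - alpha) / alpha  # beta/alpha
    return (r**z0 - r**z1) / (1 - r**z1)

# With alpha = s/(s+1) and z0 = 1, the survival probability tends to
# 1 - 1/s = (s-1)/s as z1 grows -- the bias that drives Case BDF1 below.
s = 2.0
print(1 - ruin_probability(1, 100, s / (s + 1)))  # ~ 0.5 = (s-1)/s
```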

We next argue the following:

Lemma 7. W.h.p. either X becomes empty or |X| reaches size ω = 20/ε³ within O(ε^{−3}) iterations.

Proof. Suppose that we consider a process Z1, Z2, . . . such that Zt is the size of X after t iterations,
unless X becomes zero. In the latter case we use 0 as a reflecting barrier for Zt. Now consider
the first τ = 40s/(ε³(2σ − 1)) steps of the Z process, where σ = s/(2(s+1)) + 1/4 ∈ (1/2, s/(s+1)). Given
that the walk followed by |X| increases with bias ∼ s, the Chernoff bounds imply
that w.h.p. Z makes at least 2τσ/3 positive steps and this means that Z will at some stage reach
20/ε³. Going back to X, we see that w.h.p. either |X| reaches 0 or |X| reaches 20/ε³.

Lemma 8. If |X| reaches ω = 20/ε³ → ∞ then w.h.p. X reaches [n].

Proof. We first show that if |X| reaches ω then w.h.p. |X| reaches n1 = n − n/(np)^{1/2}. This
follows from Lemma 4 with a = 10/ε³ and b = n1, applied to the walk Zt = |X| at iteration t. In
particular, as long as |X| > max{a, εt}, the hypotheses of the lemma are satisfied with a positive
bias ∼ε s, by (30).

Now assume that |X| = n1. The analysis of Case BD3 shows that there is a probability of at least
η = (1/(5np))^{n_2} that X reaches [n] after a further n2 = n − n1 = n/(np)^{1/2} steps. Now consider
the following experiment: when |X| = n1, the walk moves right with probability at least 1/2 and
left with probability at most 1/2. If it moves right then there is a probability of at least η that
|X| reaches n before it returns to n1. If it moves left then (32) implies that there is a constant
0 < ζ < 1 such that the probability of |X| reaching 20/ε³ before returning to n1 is at most (1 − ζ)^{n_1},
for any constant ζ < s/(s + 1). (Since this event can be analyzed with Case BD2 exclusively.) Let
m = η^{−1} log n. Then we have m(1 − ζ)^{n_1} → 0 and mη → ∞. So w.h.p. X will reach [n] after
at most m returns and never visit 20/ε³. Indeed, the probability it does not reach [n] is at most
o(1) + (1 − η)^{m/2} = o(1). (The first o(1) bounds the probability that |X| moves right from n1 fewer
than m/2 times.)

The next lemma concerns the case where d(v0) = o(np).

Lemma 9. Suppose that d(v0) = o(np) and let ω = np/d(v0). Then w.h.p. v0 ∈ X for the first
ω^{1/2} iterations.

Proof. If X = {v0} then it follows from Case BD1c that p−/p+ = O(1/ω) and the probability X
becomes empty is O(1/ω). If v0 ∈ X and |X| > 1 then the probability that v0 is removed from
X in the next iteration is also O(1/ω). This is because all of v0's neighbors have degree Ω(np)
outside X (Lemma 3(d)) and all vertices of X other than v0 have many more neighbors in X̄ than
in X. (Note from Lemma 5 that no vertex in X other than v0 can be in S0.)

So, the probability that v0 gets removed this early is O(ω^{1/2}/ω) = o(1).

Case BDF1: v0 ∉ S1 and Y0 = ∅.

This includes the case where v0 is chosen uniformly at random. (This follows from Lemma 3(b).)
We have d(v0) ∼ε np and Lemma 5 implies that only Case BD1a is relevant for the first ω0^{3/4}
steps, and in this case the bias p+/p− in the change in the size of |X| is asymptotically equal to s.
Equation (32) implies that so long as the bias is asymptotically s, |X| will reach m = 2ω0^{1/2} before
reaching 0 with probability ∼ε (s − 1)/s. Equation (33) implies that w.h.p. this happens during
the first ω0^{3/4} iterations. Lemma 8 then implies that w.h.p. X will reach [n] from here, proving
Theorem 1(b).

Case BDF2: d(v0) = o(np): We are initially in Case BD1c and Lemma 9 implies that we stay in
this case for the first ω^{1/2} iterations, where ω = np/d(v0). Whenever |X| = 1, we see from (16) that
the probability X becomes empty in the next iteration is O(1/ω). Furthermore, (32) with z0 = 2
then implies that |X| reaches ω^{1/3} with probability ∼ε (s − 1)/s before returning to 1. Combining
the above facts, we see that |X| reaches ω^{1/3} within ω^{1/2} iterations and we can then apply Lemma
8 to complete the proof of Theorem 1(a).

Case BDF3: v0 ∈ S1, d(v0) = αnp, N(v0) ∩ S0 = ∅, where α ≠ 1 is a positive constant.

This is part of Case BD1b of Section 4.2. We consider the first ω0^{1/2} steps. It follows from Section
4.1 that X ∩ S1 ⊆ {v0} throughout the first ω0^{1/2} iterations. We note that dX̄(x1)/d(x1) = 1 − o(1).
At iteration j ≤ ω0 we will have p+/p− equal to either (1 − o(1))s/(1 + (α − 1)/j) (by (15))
or (1 − o(1))s, depending on whether or not v0 is still in X. And we note that if v0 ∈ X then,
conditioned on X losing a vertex in the next iteration, the vertex it loses is v0 with probability
asymptotically equal to
$$\frac{\frac{\alpha np}{n}\cdot\frac{1}{np}}{\frac{(|X|-1)np}{n}\cdot\frac{1}{np} + \frac{\alpha np}{n}\cdot\frac{1}{np}} = \frac{\alpha}{|X| - 1 + \alpha}. \tag{34}$$

Also, if v0 leaves X then Lemma 5 implies that it only returns to X with probability O(ε^{−2}ω0^{1/2}/ω0) =
o(1) in the next ω0^{1/2} iterations.

We can thus asymptotically approximately model |X| in the first ω0^{1/2} iterations as a random walk
W0 = (Z0 = 1, Z1, . . .) on {0, 1, . . . , n} where at the t-th step, if Zt−1 = j > 0, then either (i)
P(Zt = Zt−1 + 1) = αj = s/(s + 1 + (α − 1)/j) or (ii) P(Zt = Zt−1 + 1) = β = s/(s + 1).
The walk starts with probabilities as in (i) and at any stage may switch irrevocably to (ii) with
probability ∼ε ηj = α/(j − 1 + α), by (34). The fixation probability is then asymptotically equal
to the probability this walk reaches m = ω0^{1/3} before it reaches 0. Let
$$q_j = \frac{s^{-j} - s^{-\omega_0^{1/3}}}{1 - s^{-\omega_0^{1/3}}}$$
denote the probability of reaching 0 before m in the random walk W1 where there is always a rightward bias
of s.

Let pj = pj(BDF3) denote the probability that the walk reaches 0 before m = ω0^{1/3}. Then p0 = 1,
pm = 0 and
$$p_j = \alpha_j p_{j+1} + (1 - \alpha_j)(1 - \eta_j)p_{j-1} + (1 - \alpha_j)\eta_j q_j \tag{35}$$
for 0 < j < m, from which we can compute p1, asymptotically.

If |X| reaches ω0^{1/2} then Lemma 8 implies that it will reach n w.h.p. This establishes part (c) of
Theorem 1.
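As noted in Remark 1, numerical values can be extracted from such recurrences. Since (35) is linear in p1, . . . , pm−1 once p0 = 1 and pm = 0 are fixed, it can be solved directly; a sketch of ours, where the values of s, α and m are illustrative inputs:

```python
import numpy as np

def solve_bdf3(m, s, a):
    """Solve recurrence (35) with boundary p_0 = 1, p_m = 0; returns p_1,
    the asymptotic extinction probability in Case BDF3 with alpha = a."""
    alpha = lambda j: s / (s + 1 + (a - 1) / j)   # up-step probability, case (i)
    eta = lambda j: a / (j - 1 + a)               # switching probability, eq. (34)
    q = lambda j: (s**-j - s**-m) / (1 - s**-m)   # ruin probability under bias s
    A = np.eye(m - 1)
    b = np.zeros(m - 1)
    for j in range(1, m):
        i = j - 1
        if j + 1 < m:
            A[i, i + 1] = -alpha(j)                        # p_{j+1} coefficient
        if j - 1 > 0:
            A[i, i - 1] = -(1 - alpha(j)) * (1 - eta(j))   # p_{j-1} coefficient
        b[i] = (1 - alpha(j)) * eta(j) * q(j)
        if j == 1:
            b[i] += (1 - alpha(j)) * (1 - eta(j))          # boundary term p_0 = 1
    return np.linalg.solve(A, b)[0]

print(1 - solve_bdf3(m=200, s=2.0, a=4.0))  # asymptotic fixation probability
```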

Case BDF4: v0 ∉ S1 and N(v0) ∩ S0 = {y0}:

As such we begin in Case BD1d. We again consider the first ω0^{1/2} rounds and we see that w.h.p.
we remain in this case, unless v0 leaves X.

Thus, as in BDF3, we can asymptotically approximately model |X| in the first ω0^{1/2} iterations as a
suitable random walk, showing that the fixation probability is a function just of d(y0) in this case.
Equation (35) becomes
$$p_j = p_j(BDF4) = \beta_j p_{j+1} + (1 - \beta_j)(1 - \eta_j)p_{j-1} + (1 - \beta_j)\eta_j q_j \tag{36}$$
where βj = s/(s + 1 + 1/(jd(y0))), which comes from replacing (14) by (18).

Case BDF5: v0 ∈ S1 and N (v0 ) ∩ S0 = {y0 }:


This has the same characteristics as Case BDF3. They both rely on (15).

4.3.2 s < 1

Arguing as above, we see that except when |X| ≤ 20/ε³, w.h.p. the size of X follows a
random walk where the probability of moving left from a positive position is asymptotically at
least
$$\frac{|X| - 1}{(s+1)|X|} > \frac{1}{2} \qquad\text{for } |X| > \frac{2}{1-s}.$$
We argue as we did at the end of Case BDF2, with right moves
and left moves reversed, that w.h.p. X becomes empty.

4.3.3 s = 1

It follows from Maciejewski [11] that the fixation probability of vertex v is precisely
$$\pi(v) = \frac{d(v)^{-1}}{\sum_{w \in [n]} d(w)^{-1}}.$$
In a random graph with np = O(log n) this gives maxv π(v) = O(log n/n)
and when np ≫ log n this gives maxv π(v) = O(1/n).
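This formula is immediate to evaluate on any graph; a short sketch of ours, in the adjacency-set representation used earlier:

```python
def neutral_bd_fixation(adj):
    """Maciejewski's formula for s = 1: pi(v) = d(v)^{-1} / sum_w d(w)^{-1}."""
    Z = sum(1 / len(adj[w]) for w in adj)
    return {v: 1 / (len(adj[v]) * Z) for v in adj}
```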

4.4 np ≫ log n and s > 1

If np/log n → ∞ then all vertices have degree ∼ np, see Theorem 3.4 of Frieze and Karoński [9]. So
S1 = ∅ and all but (f), (g), (k), (l) of Lemma 3 hold trivially. But (f) is only used to bound e(X : X̄),
where there is the possibility of low degree vertices. This is unnecessary when np/log n → ∞ since
then w.h.p. e(S : S̄) ∼ |S|(n − |S|)p for all S. Property (g) is only used in (9), (10) to bound
e(X). But because |X| is small this will be small compared to |X|np and only contributes to the
error term. Properties (k), (l) are not used in analysing Birth-Death. In conclusion we see that
only Case BDF1 is relevant and Theorem 1 holds in this case.

5 Death-Birth

The analysis here is similar to the Birth-Death process and so we will be less detailed in our
description. We first replace (6), (7) by
$$p^+ = p^{DB}_+(X) = \mathbb{P}(|X| \to |X| + 1) = \frac{1}{n}\sum_{v \in N(X)}\frac{sd_X(v)}{sd_X(v) + d_{\bar X}(v)}. \tag{37}$$
$$p^- = p^{DB}_-(X) = \mathbb{P}(|X| \to |X| - 1) = \frac{1}{n}\sum_{v \in X}\frac{d_{\bar X}(v)}{sd_X(v) + d_{\bar X}(v)} \le \frac{|X|}{n}. \tag{38}$$
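As with (6), (7), these formulas translate directly into code; a sketch of ours:

```python
def db_transition_probs(adj, X, s):
    """Evaluate (37) and (38): (p+, p-) for the Death-Birth process."""
    n = len(adj)
    NX = set().union(*(adj[v] for v in X)) - X if X else set()
    p_plus = sum(s * len(adj[v] & X) / (s * len(adj[v] & X) + len(adj[v] - X))
                 for v in NX) / n
    p_minus = sum(len(adj[v] - X) / (s * len(adj[v] & X) + len(adj[v] - X))
                  for v in X) / n
    return p_plus, p_minus
```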

We use the notation of Section 4. We will once again assume first that np = O(log n) and remove
the restriction later in Section 5.4.

5.1 The size of (X ∪ N (X)) ∩ S1

Lemma 10. While |X| ≤ n/(np)^{9/8}, the probability that X ∩ (S1 \ S0) increases in an iteration is
O(ε^{−2}/ω0).

Proof. We consider the addition of a member of S1 to X. This would mean the choice of v ∈
N(X) ∩ (S1 \ S0) and then the choice of a neighbor w of v in X. Let C be the component of the
graph GX induced by X that contains w. Assume first that |X| ≤ np/20. Lemma 3(e) implies that
d(w) ≥ ω0 and so dX̄(v) ≥ ω0/2, and since d(v) ≥ np/10 we can bound the probability of adding a
member of S1 \ S0 by O(ε^{−2}/ω0), where the ε^{−2} term comes from Lemma 3(j), applied with S = C.

Now assume that np/20 < |C| ≤ n/(np)^{9/8}. Then,
$$\mathbb{P}(X \cap S_1 \text{ increases}) \le \frac{A}{B},$$
where
$$A = \sum_{v \in N(X) \cap S_1}\frac{sd_X(v)}{sd_X(v) + d_{\bar X}(v)} \qquad\text{and}\qquad B = \sum_{v \in N(X)}\frac{sd_X(v)}{sd_X(v) + d_{\bar X}(v)}.$$

Now applying Lemma 3(j) to each component of GX shows that
$$|N(X) \cap S_1| \le \frac{7s|X|}{\varepsilon^2\omega_0} \qquad\text{and so}\qquad A \le \frac{7s|X|}{\varepsilon^2\omega_0}.$$
On the other hand, Lemma 3(h) implies that
$$s^{-1}B \ge \sum_C \frac{|N(C \setminus S_1)| - |C \cap S_1|}{5np} \ge \sum_C \frac{|C \setminus S_1|(n - |C| - |S_1|)p - |C \cap S_1|}{5np} \ge \sum_C \frac{|C \setminus S_1|(n - o(n))p - |C \cap S_1|}{5np}$$
$$\ge \sum_C \frac{|C|(n - o(n))p - |C \cap S_1|(np + 1)}{5np} \ge \sum_C \frac{|C|\left((n - o(n))p - \frac{7(np+1)}{\varepsilon^2\omega_0}\right)}{5np} \ge \sum_C \frac{|C|}{6} = \frac{|X|}{6}.$$

We next consider the first ω0^{3/4} iterations.

Lemma 11. W.h.p. N(X) ∩ S0 does not increase during the first ω0^{3/4} iterations.

Proof. Consider the addition of a member of S0 to N(X). Suppose that a member of S0 is added
to N(X) because we choose v ∈ N(X) where N(v) ∩ S0 ≠ ∅, and we then choose w ∈ N(v) ∩ X.
Lemma 3(d) implies that d(v), d(w) ≥ np/10 and so we can bound this possibility in the first ω0^{3/4}
iterations by O(ω0^{3/4}/np) = o(1).

Lemma 12. W.h.p., if d(v0) ≤ ω0 then dX(X1) ≤ 1 during the first ω0^{3/4} iterations. Furthermore,
if such a neighbor leaves X then dX(X1) = 0 for the remaining iterations up to ω0^{3/4}.

Proof. After the first iteration either X = ∅ or X = {v0, v1} where v1 ∈ N(v0). As long as
|X| > 1, the chance of adding another neighbor of v0 to X is O((d(v0) − 1)/((|X| − 1)np + d(v0) − 1)) =
O(ω0/np). So, the probability that dX(v0) reaches 2 is O(ω0^{7/4}/np) = o(1). The same calculation
suffices for the second claim.
5.2 Bounds on p+ , p−

It follows from Lemma 11 that we only need to consider the case where (i) |X| > ω0^{1/2}, or (ii)
|X| ≤ ω0^{1/2} and X ∩ S1 ⊆ {v0} and |N(X) ∩ S0| ≤ 1.

Case DB1: |X| ≤ 20/ε³ and |X1|, |Y0| ≤ 1.

We remind the reader that X̂ is connected and so w.h.p. if v ∈ N(X) then dX(v) = 1, except
possibly in one instance where dX(v) = 2. We write
$$p^+ = \frac{s}{n}\left(\sum_{v \in N(X)\setminus Y_0}\frac{d_X(v)}{sd_X(v) + d_{\bar X}(v)} + \sum_{v \in N(X)\cap(S_1\setminus S_0)}\frac{d_X(v)}{sd_X(v) + d_{\bar X}(v)} + \frac{d_X(Y_0)}{sd_X(Y_0) + d_{\bar X}(Y_0)}\right) \tag{39}$$
$$\sim_\varepsilon \frac{s}{n}\left(\sum_{v \in N(X)\setminus Y_0}\frac{1}{d_{\bar X}(v)} + \frac{|Y_0|}{sd_X(Y_0) + d_{\bar X}(Y_0)}\right) = \frac{s}{n}\left(\sum_{w \in X}\sum_{v \in N(w)\setminus(X\cup Y_0)}\frac{1}{d_{\bar X}(v)} + \frac{|Y_0|}{sd_X(Y_0) + d_{\bar X}(Y_0)}\right). \tag{40}$$

Here we have used the fact that d(v) ≥ np/10 and np ≫ |X| to remove dX(v) from the first two
denominators. This is also used to remove the second summation in (39). So, separating w ∈ X1
from the rest of X, we see that when |X| > 1 (using Lemma 3(j)),
$$p^+ \sim_\varepsilon \frac{s}{n}\left(|X| - |X_1| + \sum_{\substack{w \in X_1 \\ v \in N(w)\setminus X}}\frac{1}{d_{\bar X}(v)} + \frac{|Y_0|}{sd_X(Y_0) + d_{\bar X}(Y_0)}\right). \tag{41}$$

When |X| = 1 we have p− = 1/n, and when |X| > 1,
$$p^- \sim_\varepsilon \frac{1}{n}\left(|X| - |X_1| + \frac{d_{\bar X}(X_1)}{sd_X(X_1) + d_{\bar X}(X_1)}\right). \tag{42}$$

Case DB1a: |X| = 1 and X = {x}:
$$p^+ \sim_\varepsilon \frac{s}{n}\left(\alpha + \frac{|Y_0|}{sd_X(Y_0) + d_{\bar X}(Y_0)}\right), \qquad\text{if } d(x) = \alpha np \text{ where } \alpha = \Omega(1). \tag{43}$$
$$p^+ \in \left[\frac{sd(x)}{5n^2p}, \frac{10sd(x)}{n^2p}\right] \qquad\text{if } d(x) = o(np). \tag{44}$$
$$p^- = \frac{1}{n}.$$

Explanation for (43), (44): Let A = Σ_{v∈N(x)} 1/d(v). This replaces the first sum in (40). If
d(x) = Ω(np) then Lemma 3(j) implies that A ∼ε α. If d(x) = o(np) then Lemma 3(d) implies
that np/10 ≤ d(v) ≤ 5np for v ∈ N(x).
Case DB1b: |X| > 1 and X1 = Y0 = ∅.
It follows from (41) that w.h.p.
$$p^+ \sim_\varepsilon \frac{s|X|}{n} \qquad\text{and}\qquad p^- \sim_\varepsilon \frac{|X|}{n}. \tag{45}$$

Case DB1c: |X| > 1, X1 = {x1}, d(x1) = αnp ≫ ε^{−2} and Y0 = ∅.
$$p^+ \sim_\varepsilon \frac{s(|X| - 1 + \alpha)}{n} \qquad\text{and}\qquad p^- \sim_\varepsilon \frac{|X|}{n}. \tag{46}$$
Here we have used Lemma 3(j) to replace the sum in (41) by α. (When we apply the lemma, the
set S will be the connected component of X that contains x1.)

Case DB1d: |X| > 1, X1 = {x1}, d(x1) = O(ε^{−2}) and Y0 = ∅.
$$p^+ \sim_\varepsilon \frac{s(|X| - 1)}{n} \qquad\text{and}\qquad p^- \sim_\varepsilon \frac{1}{n}\left(|X| - 1 + \frac{d(x_1) - \delta_1}{s\delta_1 + d(x_1) - \delta_1}\right) \sim \frac{|X|}{n}, \tag{47}$$
where δ1 = dX(x1). Note that Lemma 12 implies that w.h.p. x1 has at most one neighbor in X.
We have used Lemma 3(d) to remove the sum in (41).

Case DB1e: |X| > 1, X1 = ∅ and Y0 = {y0}.
$$p^+ \sim_\varepsilon \frac{s}{n}\left(|X| + \frac{1}{d(y_0) + s - 1}\right) \qquad\text{and}\qquad p^- \sim_\varepsilon \frac{|X|}{n}. \tag{48}$$
Here dX(y0) = 1 follows from Lemma 3(c), since X̂ of Remark 2 is connected.

Case DB1f: |X| > 1, X1 = {v0} and Y0 = {y0}.

In this case, if d(v0) = αnp then α = Ω(1) since v0 ∉ S0, and we have
$$p^+ \sim_\varepsilon \frac{s}{n}\left(|X| - 1 + \alpha + \frac{1}{d(y_0) + s - 1}\right) \qquad\text{and}\qquad p^- \sim_\varepsilon \frac{|X|}{n}. \tag{49}$$

Case DB2: 20/ε³ < |X| ≤ n/(np)^{9/8}.

Let B(X) = ⋃_{k≥2} Bk(X), where Bk(X) = {v ∉ X : dX(v) = k}, see Lemma 3(k). Then Lemma
3(f) implies that if k > 10 then |Bk| ≤ ak|X| where ak = 10/(k − 10). To see this, observe that if
not then we can add ak|X| vertices to X to make a set Y, such that |Y| = (ak + 1)|X| ≤ 2n/(np)^{9/8}
with e(Y) ≥ kak|X| = 10|Y|, which contradicts Lemma 3(f).

Then, if N̂(X) = N(X) \ (B(X) ∪ S1), then from Lemma 3(h),(j),(k) and Lemma 4 (used to replace
|X| by |X̂| in one place),
$$|\hat N(X)| \ge e(X \setminus S_1, \bar X \setminus S_1) - \sum_{k \ge 2}|B_k(X)|$$
$$\ge (1 - 2\varepsilon)|X \setminus S_1|(n - |X| - |S_1|)p - \sum_{k=2}^{(np)^{1/3}}\frac{\varepsilon|X|np}{k^2} - \sum_{k=(np)^{1/3}}^{5np}\frac{10|X|}{k - 10}$$
$$\ge (1 - 3\varepsilon)|X|np - |\hat X \cap S_1|np - \frac{\varepsilon\pi^2}{6}|X|np - (10 + o(1))|X|\log(5np)$$
$$\ge (1 - 4\varepsilon)|X|np.$$
So,
$$p^+ \ge \frac{1}{n}\sum_{v \in \hat N(X)}\frac{s}{(1+\varepsilon)np} \gtrsim_\varepsilon \frac{s|X|}{n} \qquad\text{and}\qquad p^- \le \frac{|X|}{n}.$$

Case DB3: n/(np)^{9/8} < |X| ≤ n1.

Let D(X) = {v ∈ X : dX̄(v) ∈ (1 ± ε)(n − |X|)p}. Then, using Lemma 3(a),(b),(h),(l),
$$\sum_{v \in D(X)\setminus S_1}\frac{d_{\bar X}(v)}{sd_X(v) + d_{\bar X}(v)} \le \sum_{v \in D(X)\setminus S_1}\frac{d_{\bar X}(v)}{s\left((1-\varepsilon)np - (1+\varepsilon)(n-|X|)p\right) + (1-\varepsilon)(n-|X|)p}$$
$$\le \frac{e(D(X)\setminus S_1, \bar X \setminus S_1) + 5np|S_1|}{\left(n + (s-1)|X| - ((2s+1)n - (s+1)|X|)\varepsilon\right)p} \le \frac{(1+3\varepsilon)|X|(n-|X|)p}{\left(n + (s-1)|X| - ((2s+1)n - (s+1)|X|)\varepsilon\right)p}. \tag{50}$$
$$\sum_{v \in X \cap S_1}\frac{d_{\bar X}(v)}{sd_X(v) + d_{\bar X}(v)} \le |X \cap S_1| \le |S_1| \le \frac{|X|}{np}. \tag{51}$$
$$\sum_{v \in X\setminus(D(X)\cup S_1)}\frac{d_{\bar X}(v)}{sd_X(v) + d_{\bar X}(v)} \le \theta|X|, \qquad\text{where } \theta = \frac{1}{\varepsilon^2(np)^{1/2}}. \tag{52}$$

The last inequality follows from Lemma 3(l).

So we see that if |X| = ξn then, after summing the above inequalities and simplifying, we see that
$$p^- \lesssim_\varepsilon \frac{\xi(1 - \xi)}{1 + (s-1)\xi}. \tag{53}$$
We now look for a lower bound on p+.
$$p^+ \ge \frac{1}{n}\sum_{v \in N(X)\cap(D(\bar X)\setminus S_1)}\frac{sd_X(v)}{(s-1)d_X(v) + (1+\varepsilon)np} \ge \frac{1}{n}\sum_{v \in N(X)\cap(D(\bar X)\setminus S_1)}\frac{sd_X(v)}{(s-1)(1+\varepsilon)|X|p + (1+\varepsilon)np}$$
$$\ge \frac{1}{n}\cdot\frac{s\,e(\bar X \setminus S_1, X \setminus S_1) - s\,e(\bar X \setminus D(\bar X), X)}{(s-1)(1+\varepsilon)|X|p + (1+\varepsilon)np} \ge \frac{1}{n}\cdot\frac{s(1 - 2\varepsilon)|X|(n - |X|)p - s\,e(\bar X \setminus D(\bar X), X)}{(s-1)(1+\varepsilon)|X|p + (1+\varepsilon)np}, \quad\text{from Lemma 3(h)}.$$

With α = (np)^{1/4} and θ = 1/(ε²(np)^{1/2}),
$$e(\bar X\setminus D(\bar X), X) \le \begin{cases} \alpha\theta|X|(n - |X|)p & n/(np)^{9/8} \le |\bar X| \le n/(np)^{1/3}, \text{ Lemma 3(l),(m) applied to } S = \bar X, \\ 5\theta(n - |X|)np & n/(np)^{1/3} \le |\bar X| \le n_1, \text{ Lemma 3(l),(a) applied to } S = \bar X. \end{cases}$$

It follows from this that in both cases e(X̄ \ D(X̄), X) ≤ ε|X|(n − |X|)p. So,
$$p^+ \ge \frac{s(1 - 3\varepsilon)\xi(1 - \xi)}{(s-1)(1+\varepsilon)\xi + 1 + \varepsilon} \tag{54}$$
$$\gtrsim_\varepsilon \frac{s\xi(1 - \xi)}{(s-1)\xi + 1}. \tag{55}$$
In which case
$$\frac{p^+}{p^-} \gtrsim_\varepsilon \frac{s\xi(1 - \xi)}{(s-1)\xi + 1}\cdot\frac{1 + (s-1)\xi}{\xi(1 - \xi)} = s.$$

5.3 Fixation probability

We first prove the equivalent of Lemma 8.

Lemma 13. If |X| reaches ω, where ω → ∞, then w.h.p. X reaches [n].

Proof. We first show that if |X| reaches ω then w.h.p. |X| reaches n1 = n − n/(np)^{1/2}. Let a = ω/2
and m = n1 − a. There is a positive bias of ∼ε s in Cases DB2, DB3 as long as |X| > a. It follows
from (32) that the probability |X| ever reaches a before reaching m is o(1).

Now consider the case of |X| ≥ n1. Comparing (7) and (37) we see that p^{DB}_+(X) ≥ p^{BD}_−(X).
Comparing (6) and (38) we see that p^{DB}_−(X) ≤ p^{BD}_+(X). By comparing this with Case BD3
of Section 4.2, we see that this implies that p^{DB}_+(X)/p^{DB}_−(X) ≥ 1/(5snp). Now consider the
experiment described in Lemma 8. Beginning with |X| = n1, we still have a probability of at
most (1 − ζ)^{n_1} of |X| reaching 0 before returning to n1. Now there is a probability of at least
η = (5snp)^{−n/(np)^{1/2}} of |X| reaching n before returning to n1.

5.3.1 Case analysis

We consider the following cases:

Case DBF1: v0 ∉ S1 and Y0 = ∅.
In this case Lemmas 10 and 11 imply that we remain in Case DB1a or DB1b while |X| ≤ ω0^{1/2}, and
there is a bias to the right: p+/p− ∼ε s. (Lemma 11 also implies that X ∩ S0 remains empty. In this
case, before adding to X ∩ S0 we must add to N(X) ∩ S0.) Remark 2 implies that w.h.p. we either
reach |X| = 0 or |X| = ω0^{1/2} within O(ω0^{1/2} log ω0) iterations. If |X| reaches ω0^{1/2} then Lemma 13
implies that w.h.p. X eventually reaches [n]. Consequently ϕ ∼ε (s − 1)/s. This proves part (a) of
Theorem 2.
Case DBF2: v0 ∈ S1, Y0 = ∅ and d(v0) = αnp where αnp ≫ ε^{−2}.
In this case we remain in Case DB1a or DB1c while |X| ≤ ω0^{1/2}, as long as v0 is not removed
from X. If d(v0) = αnp then the probability that this happens, conditional on a change in X, is
∼ε s/((s + 1)|X| − 1 + α). There are |X| chances of about s/n of choosing v ∈ X. Then for each
w ∈ X \ {v0} there is a chance of about 1/n that v is a neighbor of w and that v chooses w as u.
This leads to (35) with ηj = s/((s + 1)j − 1 + α) and gives pj(DBF2).

Case DBF3: v0 ∈ S1, Y0 = ∅ and d(v0) = O(ε^{−2}).

In this case the term ψ(δ1) = (d(x1) − δ1)/(sδ1 + d(x1) − δ1) in (47), where δ1 = dX(x1), may become significant. It
is only significant while v0 ∈ X and |X| ≤ ω0^{1/2}, in which case δ1 is either 0 or 1. If δ1 = 1 and
|X| = j ≥ 2 then, conditional on |X| decreasing, δ1 becomes 0 with asymptotic probability 1/j. If
δ1 = 0 and |X| = j ≥ 2 then δ1 becomes 1 with asymptotic probability 0. If |X| = 1 and X = {v0}
then δ1 becomes 1 if and only if X does not become empty after the next iteration. This leads to
the following recurrence: let pj,δ be the (asymptotic) probability of |X| becoming 0 starting from
|X| = j and δ1 = δ. Then we have p0,0 = p0,1 = 1 and pm,δ = 0 for m = ω0^{1/2} and δ = 0 or 1. The
recurrence is
$$p_{j,0} = \gamma_j p_{j+1,0} + (1 - \gamma_j)(1 - \eta_j)p_{j-1,0} + (1 - \gamma_j)\eta_j q_j,$$
$$p_{j,1} = \gamma_j p_{j+1,1} + (1 - \gamma_j)(1 - \eta_j)(1 - \theta_j)p_{j-1,1} + (1 - \gamma_j)(1 - \eta_j)\theta_j p_{j-1,0} + (1 - \gamma_j)\eta_j q_j.$$
Here γj = s(j − 1)/(s(j − 1) + j − 1 + ψ(δ)) is asymptotic to the probability that j = |X| increases,
ηj = s/(sj + j − 1) is asymptotic to the probability that v0 leaves X, and θj = ηj is asymptotic
to the probability that v0's neighbor in X leaves X. (We have α = 1 in the definition of ηj.)
ηj, qj are as in (35).

Case DBF4: v0 ∉ S1 and Y0 = {y0}:

If |X| = 1 then (43) implies that np+ ∼ε s(1 + 1/(d(y0) + s − 1)) and np− = 1, and if |X| > 1 then
we either (i) remain in Case DB1e while |X| ≤ ω0^{1/2}, or (ii) y0 moves to X and we are in Case DB1c
or DB1d, depending on d(y0). The probability that we switch from (i) to (ii) is asymptotically
equal to ξj = d(y0)/(np(j − 1) + d(y0)), where j = |X|. The recurrence for pj is
$$p_j = \lambda_j(1 - \xi_j)p_{j+1} + \lambda_j\xi_j\psi_j + (1 - \lambda_j)(1 - \eta_j)p_{j-1} + (1 - \lambda_j)\eta_j q_j$$
where λj = s(j + 1/(d(y0) + s − 1))/(s(j + 1/(d(y0) + s − 1)) + j). (We have α = 1 in the definition of
ηj in (35), since v0 ∉ S1.) We have ψj = pj(DBF2) if d(y0) ≫ ε^{−2} and ψj = pj,1 if d(y0) = O(ε^{−2}).

Case DBF5: v0 ∈ S1 and Y0 = {y0}:

In this case we begin in Case DB1a with d(v0) = αnp where α = Ω(1). Then we stay in Case DB1f
while |X| ≤ ω0^{1/2}, unless v0 leaves X, in which case we move to Case DB1e. The recurrence for pj
is
$$p_j = (1 - \mu_j)p_{j+1} + \mu_j(1 - \eta_j)p_{j-1} + \mu_j\eta_j q_j$$
where μj = s(α + 1/(d(y0) + s − 1))/(s(j − 1 + α + 1/(d(y0) + s − 1)) + j) is asymptotically equal
to the probability that v0 leaves X given that |X| decreases, and ηj is as in (35).

We see from the above cases that when |X| is small, the chance that X reaches ω0^{1/2} yields (b), (c)
of Theorem 2, because if |X| reaches ω0^{1/2} then there is a positive rightward bias and X will w.h.p.
eventually become [n].
The case s ≤ 1 The above analysis holds for s > 1. For s ≤ 1 we go back to the case where
|X| ≤ ω0. If s < 1 then we see from (45) – (48) that there are constants C1 > 0, 0 < C2 < 1 such
that if |X| ≥ C1 then p+/p− ≤ C2. It follows that w.h.p. |X| will return to C1 before it reaches
ω0^{1/2}, and then there is a probability bounded away from 0 that |X| will go directly to 0.

The case s = 1 It follows from Maciejewski [11] that the fixation probability of vertex v is
precisely π(v) = d(v)/Σ_{w∈[n]} d(w). In a random graph with np = Ω(log n) this gives maxv π(v) =
O(1/n).

5.4 np ≫ log n and s > 1

When np ≫ log n then S1 = ∅ and all but (f), (g), (k), (l) of Lemma 3 hold trivially. Now (f)
and (k) are used in bounding e(X : X̄) and are not therefore needed. (g) is not used in Death-Birth.
The proof of (l) does not need np = O(log n). There is always a bias close to s and the fixation
probability is asymptotic to (s − 1)/s.

References

[1] B. Adlam and M. A. Nowak, Universality of fixation probabilities in randomly structured
populations, Scientific Reports 4 (2014), 6692.

[2] M. Broom and J. Rychtář, An analysis of the fixation probability of a mutant on special
classes of non-directed graphs, Proceedings of the Royal Society A: Mathematical, Physical
and Engineering Sciences 464.2098 (2008), 2609-2627.

[3] F. A. C. C. Chalub, Asymptotic expression for the fixation probability of a mutant in star
graphs, arXiv preprint arXiv:1404.3944 (2014).

[4] B. Allen, G. Lippner, Y.-T. Chen et al., Evolutionary dynamics on any population structure,
Nature 544 (2017), 227-230. https://doi.org/10.1038/nature21723

[5] R. Ibsen-Jensen, K. Chatterjee and M. A. Nowak, Computational complexity of
ecological and evolutionary spatial dynamics, Proceedings of the National Academy of Sciences
112.51 (2015), 15636-15641.

[6] Y. P. Kuo, C. Nombela-Arrieta and O. Carja, A theory of evolutionary dynamics on any
complex population structure reveals stem cell niche architecture as a spatial suppressor of
selection, Nature Communications 15 (2024), 4666. https://doi.org/10.1038/s41467-024-48617-2

[7] J. Mohamadichamgavi and J. Miekisz, Effect of the degree of an initial mutant in
Moran processes in structured populations, Physical Review E 109.4 (2024), 044406.

[8] W. Feller, An Introduction to Probability Theory and its Applications, 3rd Edition, Wiley,
New York, 1968.

[9] A. M. Frieze and M. Karoński, Introduction to Random Graphs, Cambridge University Press,
2015.

[10] E. Lieberman, C. Hauert and M. A. Nowak, Evolutionary dynamics on graphs, Nature 433
(2005), 312-316. https://doi.org/10.1038/nature03204

[11] W. Maciejewski, Reproductive value in graph-structured populations, Journal of Theoretical
Biology 340 (2014), 285-293.

[12] J. Mohamadichamgavi and J. Miekisz, Effect of the degree of an initial mutant in Moran
processes in structured populations, Physical Review E 109 (2024).

[13] P. A. P. Moran, Random processes in genetics, Mathematical Proceedings of the
Cambridge Philosophical Society 54.1 (1958), 60-71. doi:10.1017/S0305004100033193

[14] M. A. Nowak, Evolutionary Dynamics, Harvard University Press, 2006.

A Proof of Lemma 3

(a) The degree d(v) of vertex v ∈ [n] is distributed as Bin(n − 1, p). The Chernoff bound (5) implies
that
$$\mathbb{P}(\Delta > 5np) \le n\,\mathbb{P}(\mathrm{Bin}(n, p) \ge 5np) \le n\left(\frac{e}{5}\right)^{5np} = o(1).$$

(b) We first observe that the Chernoff bounds (3), (4) imply that
$$\mathbb{P}(B(n, p) \notin I_\varepsilon) = \sum_{i \notin I_\varepsilon}\binom{n}{i}p^i(1-p)^{n-i} \le e^{-\varepsilon^2 np/(3+o(1))}. \tag{56}$$

The degree d(v) of vertex v ∈ [n] is distributed as Bin(n − 1, p). So,
$$\mathbb{E}(|S_1|) = n\,\mathbb{P}(d(1) \notin I_\varepsilon) \le ne^{-\varepsilon^2(n-1)p/(3+o(1))} = n^{1-\varepsilon^2/(3+o(1))}.$$

Now use the Markov inequality to obtain the upper bound.

(c)
$$\mathbb{P}(\exists v \in S_1 \cap C : \neg(c)) \le \sum_{k=3}^{\omega_0}\binom{n}{k}k!\,p^k\,k\,\mathbb{P}(B(n-3, p) \notin I_\varepsilon - 2) \le 2(np)^{\omega_0}e^{-\varepsilon^2 np/(3+o(1))} = o(1).$$

Explanation: we sum over possible choices for a k-cycle C of Kn. There are less than $\binom{n}{k}k!$
k-cycles in Kn. There are k choices for a vertex of C ∩ S1. Given a cycle C and v ∈ C we multiply
by the probability that the edges of C exist in Gn,p and that dC̄(v) + 2 ∉ Iε.

(d)
$$\mathbb{P}(\neg(d)) \le \binom{n}{2}\sum_{k=1}^{\omega_0-1}\binom{n}{k}k!\,p^{k+1}\left(\sum_{i=0}^{np/10}\binom{n}{i}p^i(1-p)^{n-i}\right)^2 \le n(np)^{\omega_0+1}\left(\sum_{i=0}^{np/10}\left(\frac{nep}{i(1-p)}\right)^i e^{-np}\right)^2$$
$$\le n(np)^{\omega_0+1}\left(2(10e)^{np/10}e^{-np}\right)^2 \le n^{1+1/10+2/3-2+o(1)} = o(1).$$

Explanation: we sum over pairs of vertices x, y and paths P of length k < ω0 joining x, y. Then
we multiply by the probability that these paths exist and then by the probability that x, y have
few neighbors outside P.

(e)
$$\mathbb{P}(\exists x, y : \neg(e)) \le n^2\sum_{\ell=1}^{\omega_0}(np)^{\ell}e^{-\varepsilon^2 np/4}\sum_{k=0}^{\omega_0}\binom{n-2}{k}p^k(1-p)^{n-k-2} \le 2n(np)^{2\omega_0+1}e^{-(\varepsilon^2 np/4 + np)} = o(1).$$

Explanation: we use a similar analysis as for property (d).

(f)
$$\mathbb{P}(\neg(f)) \le \sum_{s=20}^{2n/(np)^{9/8}}\binom{n}{s}\binom{\binom{s}{2}}{10s}p^{10s} \le \sum_{s=20}^{2n/(np)^{9/8}}\left(e\left(\frac{s}{n}\right)^9\left(\frac{enp}{20}\right)^{10}\right)^s = o(1).$$
Explanation: we choose a set of size s and bound the probability it has 10s edges by the expected
number of sets of 10s edges that it contains. The final claim uses the fact that np = O(log n).

(g)
$$\mathbb{P}(\exists S : \neg(g)) \le \sum_{s=4}^{2\omega_0}\binom{n}{s}\binom{\binom{s}{2}}{s+1}p^{s+1} \le \omega_0 ep\sum_{s=4}^{2\omega_0}\left(\frac{neps}{2}\right)^s = o(1).$$

We use a similar analysis as for property (f). The final claim also uses the fact that np = O(log n),
in which case (nepω0)^{2ω0} ≤ n^{o(1)}.

(h) At least one of S, T has size at most n/2 and assume it is S. Suppose first that S induces a
connected subgraph. Suppose that |S| ≤ n/(np)^{9/8}. We first note that
$$n - |T| \le \frac{n}{(np)^{9/8}} + n^{1-\varepsilon^2/4} \le \varepsilon^2|T|.$$
Then we have
$$e(S : T) \le (1+\varepsilon)|S|np = (1+\varepsilon)|S||T|p + (1+\varepsilon)|S|(n-|T|)p \le (1+\varepsilon)|S||T|p(1+\varepsilon^2) \le (1+2\varepsilon)|S||T|p.$$

On the other hand, Lemma 3(f) implies that
$$e(S : T) \ge (1-\varepsilon)|S|np - 20|S| \ge (1-2\varepsilon)|S|np \ge (1-2\varepsilon)|S||T|p.$$

So now assume that n/(np)^{9/8} ≤ |S| ≤ n/2. Let Î^{(d)} = [n/(np)^{9/8}, n/2]. Fix S1 and all edges
incident with S1. Then, where m stands for |S1| and nε = n^{1−ε²/4},
$$\mathbb{P}\left(\exists|S| \in \hat I^{(d)} : e(S : T) \le (1-\varepsilon)|S||T|p\right) \le \sum_{s \in \hat I^{(d)}}\binom{n-m}{s}s^{s-2}p^{s-1}e^{-\varepsilon^2 s(n-s-m)p/3} \tag{57}$$
$$\le \sum_{s \in \hat I^{(d)}}\binom{n-m}{s}s^{s-2}p^{s-1}e^{-\varepsilon^2 snp/7} \le \sum_{s \in \hat I^{(d)}}\frac{1}{s^2p}e^{-\varepsilon^2 nps/8} = o(1).$$

Explanation of (57): Given s there are $\binom{n-m}{s}$ choices for S and s^{s−2} choices for a spanning tree of
S. The factor p^{s−1} accounts for the probability that the tree exists in Gn,p and then the final factor
e^{−ε²s(n−s−m)p/3} comes from the Chernoff bounds, since e(S : S̄) is distributed as Bin(s(n − s), p).
These are computed conditional on each v ∈ S having a lower bound on its degree and applying
the FKG inequality.

When it comes to estimating $\mathbb{P}(\exists|S| \in \hat I^{(d)} : e(S : T) \ge (1+\varepsilon)|S||T|p)$ we apply a similar argument,
but this time when we apply the FKG inequality we use the fact that each vertex has an upper
bound on its degree.

We now deal with the connectivity assumption. Suppose now that S has a component C of size less
than 20/ε³. Then, using Lemma 3(g), we see that w.h.p. |N(C)| ≥ d(C) − 2|C| ≥ (1 − 2ε)|C|np
since S ∩ S1 = ∅. Clearly |N(C)| ≤ (1 + 2ε)|C|np, since C ∩ S1 = ∅. So, S will inherit the required
property from its components.

(i)
$$\mathbb{P}(\exists S : e(S : \bar S) \le |S|np/2) \le \sum_{s=\omega_0/2}^{n/(np)^{9/8}}\binom{n}{s}s^{s-2}p^{s-1}e^{-s(n-s)p/3} \le \sum_{s=\omega_0/2}^{n/(np)^{9/8}}\frac{1}{s^2p}\left(e^{1-np/4}np\right)^s = o(1).$$

(j) Suppose first that |S| ≤ ω0. Let n0 = 5ω0np be an upper bound on |S ∪ N(S)| and let s0 = 7/ε².
Then, using (a),
$$\mathbb{P}(\exists S : \neg(j)) \le o(1) + \sum_{s=1}^{n_0}\binom{n}{s}s^{s-2}p^{s-1}\binom{s}{s_0}\binom{5s_0np}{s_0}e^{-s_0\varepsilon^2np/(3+o(1))}$$
$$\le o(1) + \frac{n}{s^2p}\sum_{s=1}^{n_0}(enp)^s\left(\frac{5snp\,e^{2-\varepsilon^2np/(3+o(1))}}{s_0}\right)^{s_0} = o(1).$$

When |S| > ω0 we replace s0 by s1 = ⌈7s/ε²ω0⌉ to obtain
$$\mathbb{P}(\exists S : \neg(j)) \le \sum_{s=\omega_0}^{n_0}\binom{n}{s}s^{s-2}p^{s-1}\binom{s}{s_1}\binom{5s_1np}{s_1}e^{-s_1\varepsilon^2np/(3+o(1))} \le \frac{n}{s^2p}\sum_{s=\omega_0}^{n_0}(enp)^s\left(\frac{5snp\,e^{2-\varepsilon^2np/(3+o(1))}}{s_1}\right)^{s_1} = o(1).$$

(k) We use $\binom{s}{k}p^k$ to bound the probability that v ∈ Bk(S).
$$\mathbb{P}(\exists S : \neg(k)) \le \sum_{k=2}^{(np)^{1/3}}\sum_{s=k}^{n/(np)^{9/8}}\binom{n}{s}s^{s-2}p^{s-1}\binom{n-s}{\alpha_k snp}\left(\binom{s}{k}p^k\right)^{\alpha_k snp}$$
$$\le \sum_{k=2}^{(np)^{1/3}}\sum_{s=k}^{n/(np)^{9/8}}\frac{1}{s^2p}\left((enp)\cdot\left(\frac{e^{k+1}(sp)^{k-1}}{k^k\alpha_k}\right)^{\varepsilon np/k^2}\right)^s = o(1).$$

(l) We can assume that S induces a connected subgraph and then sum the contributions from each
component. We first consider the case where |S| ≤ n/2.
$$\mathbb{P}(\exists S : \neg(l)) \le \sum_{s=n/(np)^2}^{n/2}\binom{n}{s}s^{s-2}p^{s-1}\binom{s}{\theta s}\left(2e^{-\varepsilon^2(n-s)p/3}\right)^{\theta s} \le \sum_{s=n/(np)^2}^{n/2}\frac{1}{p}\left(nep\left(\frac{2e^{1-\varepsilon^2(n-s)p/3}}{\theta}\right)^{\theta}\right)^s = o(1).$$

When n/2 < |S| ≤ n1 we drop the connectivity constraint and replace $\binom{n}{s}$ by 4^s. The summand is
then equal to $\left(4e\left(2e^{-\varepsilon^2(n-s)p/3}/\theta\right)^{\theta}\right)^s$.

(m) Here α = (np)^{1/4}.
$$\mathbb{P}(\exists S : \neg(m)) \le \sum_{s=n/(np)^{9/8}}^{n/(np)^{1/3}}\binom{n}{s}\binom{n}{\theta(n-s)}\binom{\theta s(n-s)}{\alpha\theta s(n-s)p}p^{\alpha\theta s(n-s)p}$$
$$\le \sum_{s=n/(np)^{9/8}}^{n/(np)^{1/3}}\left(\frac{ne}{s}\cdot\left(\frac{e}{\alpha}\right)^{\alpha\theta(n-s)p/2}\right)^s\left(\frac{ne}{\theta(n-s)}\cdot\left(\frac{e}{\alpha}\right)^{\alpha sp/2}\right)^{\theta(n-s)} = o(1).$$

