Ben Roberts Equity Model
\[
  p_i(1) = \frac{x_i}{\sum_{j=1}^{n} x_j}. \tag{1}
\]
[2] The true equities are accurate estimates achieved through Monte Carlo simulation with a very large sample size. More details are given in Section 4.
[Figure: plot of equity ($, 0 to 100) against chip stack (0 to 3 × 10^4).]
Figure 1: ICM equities in a large player pool
It is widely accepted that the theoretically correct values of the {p_i(s)} are the appropriate absorption probabilities of a continuous random walk inside an (n − 1)-dimensional tetrahedron with absorbing boundaries. This random walk is equivalent to repeatedly moving an infinitesimal amount of chips from one random player to another, whilst removing busted players from the game.
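This random walk can also be approximated empirically by simulating random single-chip exchanges between surviving players. The sketch below is a minimal Python illustration (function names and structure are illustrative only, not part of the model); it estimates the winning probabilities for given stacks:

```python
import random

def simulate_finish_order(stacks, rng):
    """One realization of the random walk: repeatedly move one chip
    between a uniformly random ordered pair of surviving players,
    removing busted players. Returns players in order of elimination."""
    s = list(stacks)
    alive = list(range(len(s)))
    order = []
    while len(alive) > 1:
        i, j = rng.sample(alive, 2)  # ordered pair: i gives a chip to j
        s[i] -= 1
        s[j] += 1
        if s[i] == 0:
            alive.remove(i)
            order.append(i)
    order.append(alive[0])  # last survivor is the winner
    return order

def estimate_win_probs(stacks, trials=20000, seed=1):
    """Monte Carlo estimate of each player's winning probability."""
    rng = random.Random(seed)
    wins = [0] * len(stacks)
    for _ in range(trials):
        wins[simulate_finish_order(stacks, rng)[-1]] += 1
    return [w / trials for w in wins]
```

With stacks (1, 1, 2) the estimates approach (1/4, 1/4, 1/2), consistent with equation (1).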
3.1 Malmuth-Harville algorithm
The MH algorithm works by setting the correct winning probabilities as in (1). It then operates under the assumption that if the top m places are taken up by a subset S of players, then the probability that player i ∉ S finishes in (m + 1)th place is simply the probability that he would have won an exclusive tournament amongst the remaining players. Thus, the probability player i finishes 2nd is calculated to be
\[
  p_i(2) = \sum_{j \ne i} p_j(1) \, \frac{x_i}{\sum_{k \ne j} x_k}.
\]
The rest of the {p_i(s)} can be found in an iterative fashion. Note that this method can be quite computationally expensive, as the calculation of p_i(s) is the weighted average of (n − 1)!/(n − s)! terms.
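To make the recursion concrete, here is a minimal Python sketch of the MH calculation (function names are illustrative only; since it enumerates all orderings, it is only practical for small n):

```python
def mh_finish_probs(stacks):
    """Malmuth-Harville finish probabilities.

    Returns p, where p[i][s-1] is the probability that player i finishes
    in place s. Each place is filled by assuming the remaining players
    contest an exclusive tournament, as in equation (1).
    """
    n = len(stacks)
    p = [[0.0] * n for _ in range(n)]

    def recurse(remaining, place, prob):
        total = sum(stacks[j] for j in remaining)
        for j in remaining:
            pj = prob * stacks[j] / total  # j "wins" among those remaining
            p[j][place] += pj
            if len(remaining) > 1:
                recurse([k for k in remaining if k != j], place + 1, pj)

    recurse(list(range(n)), 0, 1.0)
    return p
```

For x = (1, 1, 2) this reproduces the MH values derived in Section 3.3: p_3(2) = 1/3 and p_3(3) = 1/6.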
3.2 My algorithm
My algorithm works in a similar fashion to a different algorithm: the Malmuth-Weitzman (MW) algorithm. The MW algorithm assumes that the probability that player i is next eliminated is inversely proportional to his chip stack, and if realized reallocates his chips evenly amongst the remaining players.
My algorithm makes a slightly different assumption about the probabilities of next elimination:
\[
  p_i(n) \propto \frac{1}{x_i^2} \sum_{j \ne i} \frac{1}{x_j}. \tag{2}
\]
If eliminated, player i's chips are then distributed amongst the remaining players inversely proportionally to their stacks:
\[
  P(j \text{ wins} \mid i \text{ next elim}) = \frac{x_j + \alpha_{ij} x_i}{X},
\]
where
\[
  \alpha_{ij} = \frac{x_j^{-1}}{\sum_{k \ne i} x_k^{-1}},
\]
and X is the total number of chips in play. Given that we redistribute a busted player's chips in this fashion, the elimination probabilities (2) are necessary to ensure that a player's chip stack is a martingale, which in turn ensures the correct winning probabilities (1).
An intuition of why a shorter-stacked player should gain more of player i's chips, given that i is next eliminated, can be gained by realizing that the information is more valuable to him. For example, suppose there are three players left with chip stacks in the ratio of 1 : 1 : 10. We denote the probability that player 3 wins given that player 1 is next eliminated by \tilde{p}_3(1). By symmetry,
\[
  \tilde{p}_3(1) = \frac{p_3(1)}{1 - p_3(3)}.
\]
The probability that player 3 is next eliminated, p_3(3), is clearly very small, and thus \tilde{p}_3(1) is very close to p_3(1). This is reflected by transferring only a small fraction of player 1's chips to player 3 upon his elimination, while the bulk are transferred to player 2.
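The full version of the algorithm can be sketched in a few lines of Python (an illustrative implementation; names are not from any source). It applies equation (2) for eliminations and the inverse-stack redistribution above, enumerating every elimination order:

```python
def elim_probs(stacks, alive):
    """Equation (2): P(i next eliminated) proportional to
    (1/x_i^2) * sum over j != i of 1/x_j."""
    w = {i: (1.0 / stacks[i] ** 2) * sum(1.0 / stacks[j] for j in alive if j != i)
         for i in alive}
    total = sum(w.values())
    return {i: v / total for i, v in w.items()}

def roberts_finish_probs(stacks):
    """Finish probabilities p[i][s-1], enumerating elimination orders."""
    n = len(stacks)
    p = [[0.0] * n for _ in range(n)]

    def recurse(st, alive, prob):
        m = len(alive)
        if m == 1:
            p[alive[0]][0] += prob  # last survivor wins
            return
        e = elim_probs(st, alive)
        for i in alive:
            p[i][m - 1] += prob * e[i]  # i finishes in place m
            rest = [j for j in alive if j != i]
            # redistribute i's chips inversely proportionally to stacks
            denom = sum(1.0 / st[j] for j in rest)
            new = dict(st)
            for j in rest:
                new[j] = st[j] + ((1.0 / st[j]) / denom) * st[i]
            recurse(new, rest, prob * e[i])

    recurse({i: float(x) for i, x in enumerate(stacks)}, list(range(n)), 1.0)
    return p
```

For x = (1, 1, 2) this yields p_3(3) = 1/7, p_3(2) = 5/14, and the correct winning probabilities (1/4, 1/4, 1/2), as derived in Section 3.3.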
3.3 Example
Tom Ferguson (Chris's father) once wrote an unpublished paper[3] describing how to analytically calculate the true values of the {p_i(s)} for n = 3. His method is quite convoluted and doesn't generalize to n ≥ 4. However, he did include the correct solution for the simplest non-trivial case, where x = (1, 1, 2), defined by the probability that the chip leader is next eliminated:
\[
  p_3(3) = 0.1421.
\]
Malmuth-Harville. We start with the winning probabilities:
\[
  p_i(1) = \left( \tfrac{1}{4}, \, \tfrac{1}{4}, \, \tfrac{1}{2} \right).
\]
If player 3 does not win, the probability he gets 2nd is assumed to be the probability he would have beaten one of the shorter stacks heads-up (HU): 2/3. As he doesn't win the tournament half the time, we have p_3(2) = 1/3, and so by symmetry:
\[
  p_i(2) = \left( \tfrac{1}{3}, \, \tfrac{1}{3}, \, \tfrac{1}{3} \right).
\]
This leaves us with the probabilities of next elimination:
\[
  p_i(3) = \left( \tfrac{5}{12}, \, \tfrac{5}{12}, \, \tfrac{1}{6} \right).
\]
My algorithm. We calculate the probabilities of next elimination as per equation (2): p_i(3) \propto \left( \tfrac{3}{2}, \tfrac{3}{2}, \tfrac{1}{2} \right), giving
\[
  p_i(3) = \left( \tfrac{3}{7}, \, \tfrac{3}{7}, \, \tfrac{1}{7} \right).
\]
If player 1 is eliminated first, his chips are allocated to the other two players in the ratio 2 : 1. That is, the new stacks would be (\tilde{x}_2, \tilde{x}_3) = \left( \tfrac{5}{3}, \tfrac{7}{3} \right). By symmetry, the probability that player 3 finishes 2nd is therefore
\[
  p_3(2) = P(\text{he does not finish 3rd}) \times P(\text{he loses the resulting HU battle}) = \frac{6}{7} \cdot \frac{5/3}{4} = \frac{5}{14}.
\]
This gives us
\[
  p_i(2) = \left( \tfrac{9}{28}, \, \tfrac{9}{28}, \, \tfrac{5}{14} \right),
\]
and leaves us with the correct winning probabilities
\[
  p_i(1) = \left( \tfrac{1}{4}, \, \tfrac{1}{4}, \, \tfrac{1}{2} \right).
\]
[3] Tom's paper can be found at www.math.ucla.edu/~tom/papers/unpublished/gamblersruin.pdf
From this simple example, one can already observe the improved accuracy of my algorithm, as the resulting value of p_3(3) = 1/7 ≈ 0.1429 is much closer to the true value (0.1421) than that of the MH algorithm, which finds p_3(3) = 1/6 ≈ 0.1667.
3.4 Adaptation to reduce computation time
In the form described in Section 3.2, my algorithm is of comparable computational complexity to the MH algorithm, as they both find the probabilities of every permutation of placings. However, my algorithm can be adapted in a way that is difficult to do with the MH algorithm.
This is achieved by iterating the following procedure for each player, although we describe it in reference to player 1.
1. Start with finding the probabilities of next elimination p_i(n) as per equation (2).

2. Find the value \bar{x}_1, defined as the expected stack of player 1 given that player 1 is not the next elimination. That is,
\[
  \bar{x}_1 = \frac{1}{1 - p_1(n)} \sum_{i \ne 1} p_i(n) \left( x_1 + \alpha_{i1} x_i \right).
\]
3. Similarly, find the values of \bar{x}^{(m)} for m = 1, \dots, n − 2, defined as the expected size of the mth largest other stack after the first elimination, given that player 1 is not the next elimination. For example,
\[
  \bar{x}^{(1)} = \frac{1}{1 - p_1(n)} \sum_{i \ne 1} p_i(n) \max_{j \ne 1, i} \left\{ x_j + \alpha_{ij} x_i \right\}.
\]
4. Calculate the new probabilities of next elimination using (2), assuming the new stacks are \left( \bar{x}_1, \bar{x}^{(1)}, \dots, \bar{x}^{(n-2)} \right), and let us denote these probabilities by \left( q_1, q^{(1)}, \dots, q^{(n-2)} \right). The probability that player 1 finishes in (n − 1)th place is then:
\[
  p_1(n-1) = q_1 \left( 1 - p_1(n) \right).
\]
5. Go back to Step 2 assuming the adjusted stacks and elimination prob-
abilities.
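The five steps above can be sketched as follows (an illustrative Python implementation; helper names are not from any source). For the n = 3 example of Section 3.3 it reproduces the exact values, since with two remaining players the relevant quantities are linear in the stacks:

```python
def elim_probs(stacks):
    """Equation (2): P(i next eliminated) proportional to
    (1/x_i^2) * sum over j != i of 1/x_j."""
    w = [(1.0 / x ** 2) * sum(1.0 / y for j, y in enumerate(stacks) if j != i)
         for i, x in enumerate(stacks)]
    total = sum(w)
    return [v / total for v in w]

def alpha(stacks, i, j):
    """Share of player i's chips passed to player j upon i's elimination."""
    return (1.0 / stacks[j]) / sum(1.0 / stacks[k]
                                   for k in range(len(stacks)) if k != i)

def finish_probs_player(stacks, player=0):
    """Adapted algorithm: finish distribution {place: prob} for one player."""
    # Put the player of interest first; the rest are "the other stacks".
    s = [float(stacks[player])] + [float(x) for k, x in enumerate(stacks)
                                   if k != player]
    out = {}
    survive = 1.0  # probability the player has survived so far
    for place in range(len(stacks), 1, -1):
        e = elim_probs(s)                      # Steps 1 and 4
        out[place] = survive * e[0]
        if place == 2:
            break
        m = len(s)
        denom = 1.0 - e[0]
        # Step 2: expected own stack given the player is not eliminated.
        x1 = sum(e[i] * (s[0] + alpha(s, i, 0) * s[i])
                 for i in range(1, m)) / denom
        # Step 3: expected order statistics of the other stacks.
        order = [0.0] * (m - 2)
        for i in range(1, m):
            others = sorted((s[j] + alpha(s, i, j) * s[i]
                             for j in range(1, m) if j != i), reverse=True)
            for r in range(m - 2):
                order[r] += e[i] * others[r]
        s = [x1] + [v / denom for v in order]  # Step 5: adjusted stacks
        survive *= denom
    out[1] = 1.0 - sum(out.values())
    return out
```

For x = (1, 1, 2) and the chip leader, this gives {3: 1/7, 2: 5/14, 1: 1/2}, matching Section 3.3.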
Adapting the algorithm in this fashion removes the need to calculate the probabilities of each permutation of player placings, reducing the complexity of the program to O(n³). Empirical testing has shown only very small differences between the results of this adaptation and the longer version.
4 A long comparison
4.1 Discretization
In order to compare the accuracy of my algorithm to the Malmuth-Harville algorithm, we need to identify certain cases in which we can be reasonably confident of the theoretically correct values that we are trying to approximate. One way to do this is to assume a discretization of the problem for cases in which the starting chip stacks have a large common factor.
In the discretized version of the problem, we assume random exchanges of a single chip between players instead of infinitesimal exchanges. For small enough cases it is easy to find the theoretical solution to the discretized problem by iterating a diffusion process over the state space describing the evolution of the probability distribution of the chip stacks over time. One nice thing we find is that the solution converges very quickly as we reduce the coarseness of the discretization.
For example, we consider the simplest non-trivial case studied by Tom Ferguson, in which the three players have chip stacks proportional to (1, 1, 2). Assuming random single-chip exchanges with starting stacks x = (1, 1, 2), we find p_3(3) = 1/7 = 0.14286. With starting stacks x = (2, 2, 4) we find p_3(3) = 0.14222, and for x = (3, 3, 6) we find p_3(3) = 0.14217. It is clear that the solution is quickly converging to the true value of 0.1421. Other examples have also shown remarkable speed of convergence. This is nice because we can take a reasonably coarse discretization of a problem and expect the resulting solution to be very close to that of the continuous version.
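The discretized solution can be computed exactly with very little code. The sketch below is an illustration (names are not from any source); it assumes that at each step a uniformly random ordered pair of surviving players exchanges one chip, and solves for the probability that a given player busts first by value iteration over the state space:

```python
def first_bust_prob(stacks, target):
    """Probability that `target` is the first player eliminated under
    random single-chip exchanges from the given starting stacks."""
    n, total = len(stacks), sum(stacks)

    def states(k, remaining):
        # all k-tuples of positive stacks summing to `remaining`
        if k == 1:
            yield (remaining,)
            return
        for x in range(1, remaining - (k - 1) + 1):
            for rest in states(k - 1, remaining - x):
                yield (x,) + rest

    v = {s: 0.0 for s in states(n, total)}
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    while True:  # Gauss-Seidel sweeps until convergence
        delta = 0.0
        for s in v:
            acc = 0.0
            for i, j in pairs:
                t = list(s)
                t[i] -= 1
                t[j] += 1
                if t[i] == 0:
                    # first elimination occurred: success iff it was `target`
                    acc += 1.0 if i == target else 0.0
                else:
                    acc += v[tuple(t)]
            new = acc / len(pairs)
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < 1e-13:
            return v[tuple(stacks)]
```

Under these assumptions, first_bust_prob((1, 1, 2), 2) returns 1/7 and first_bust_prob((2, 2, 4), 2) ≈ 0.1422, matching the figures quoted above.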
4.2 Comparing probabilities
I ran simulations for numerous examples to compare the accuracy of my algorithm to the MH algorithm, and without exception found mine to be significantly more accurate. Tables 2, 3, and 4 detail the results for three examples. In each table, the first row for each player gives the true probabilities, the second row those calculated from my algorithm, and the third row those calculated from the MH algorithm.[4]
[4] We estimate the true equities by assuming reasonable discretizations of the examples. For the first two examples, in which n = 3, we iterate a diffusion process as described in Section 4.1. For the first we assume starting stacks of x = (5, 5, 10), and for the second x = (2, 18, 20). For the third example, the state space is too large to perform a similar diffusion process. Instead we use crude Monte Carlo estimation with 1 million samples, ensuring that the standard deviation of the estimates of each p_i(s) is less than 0.05%.
Player        1st %    2nd %    3rd %
1  True        25      32.11    42.89
   Mine        25      32.14    42.86
   MH          25      33.33    41.67
2  True        25      32.11    42.89
   Mine        25      32.14    42.86
   MH          25      33.33    41.67
3  True        50      35.78    14.22
   Mine        50      35.71    14.29
   MH          50      33.33    16.67

Table 2: n = 3, x ∝ (1, 1, 2)
Player        1st %    2nd %    3rd %
1  True         5       6.60    88.40
   Mine         5       5.47    89.53
   MH           5       9.09    85.91
2  True        45      48.59     6.41
   Mine        45      49.24     5.76
   MH          45      47.37     7.63
3  True        50      44.81     5.19
   Mine        50      45.29     4.71
   MH          50      43.54     6.46

Table 3: n = 3, x ∝ (1, 9, 10)
Player        1st %    2nd %    3rd %    4th %    5th %
1  True        10      10.74    12.92    19.32    47.02
   Mine        10      10.59    13.32    20.61    45.58
   MH          10      11.88    14.99    21.28    41.85
2  True        15      15.94    18.51    25.21    25.33
   Mine        15      15.79    19.24    26.20    23.76
   MH          15      16.85    19.58    24.00    24.58
3  True        20      20.68    22.38    22.64    14.31
   Mine        20      20.69    22.87    22.08    14.36
   MH          20      20.99    21.94    21.40    15.67
4  True        25      24.78    23.46    18.41     8.34
   Mine        25      24.93    23.08    17.41     9.57
   MH          25      24.15    22.17    18.14    10.54
5  True        30      27.86    22.73    14.42     5.00
   Mine        30      28.00    21.48    13.69     6.83
   MH          30      26.13    21.33    15.17     7.37

Table 4: n = 5, x ∝ (2, 3, 4, 5, 6)
By inspection, we can see that my algorithm generates approximate probabilities generally closer to the true values for all three examples. To quantify this, I calculated the average absolute error ε for each algorithm and example. That is,
\[
  \varepsilon = \mathbb{E} \left[ \, \lvert \hat{p}_i(s) - p_i(s) \rvert \, \right],
\]
where \hat{p}_i(s) denotes the approximation, assuming a uniform distribution over the choices of (i, s). In the first example, we find that my algorithm produces near-perfect results, with average error ε = 0.03%. In comparison, the MH algorithm produces an average error of ε = 1.09%.
The difference is not as great in the second and third examples, but it is still significant. The average errors produced by my algorithm are 0.50% and 0.59% respectively, while the MH algorithm produces 1.11% and 1.13%. From looking at various examples like these, I found that my algorithm generally seems to reduce the average error by a factor of about 2 to 3.
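These error figures can be checked directly from the tables. For instance, using the nine entries of Table 2 (a short Python check; the numbers are copied from the table):

```python
# Table 2 rows (n = 3, x proportional to (1, 1, 2)): percentages for 1st, 2nd, 3rd.
true_p = [[25, 32.11, 42.89], [25, 32.11, 42.89], [50, 35.78, 14.22]]
mine   = [[25, 32.14, 42.86], [25, 32.14, 42.86], [50, 35.71, 14.29]]
mh     = [[25, 33.33, 41.67], [25, 33.33, 41.67], [50, 33.33, 16.67]]

def avg_abs_error(approx, exact):
    """Average |p_hat - p| over all (player, place) entries."""
    errs = [abs(a - t)
            for ra, rt in zip(approx, exact)
            for a, t in zip(ra, rt)]
    return sum(errs) / len(errs)

print(round(avg_abs_error(mine, true_p), 2))  # → 0.03
print(round(avg_abs_error(mh, true_p), 2))    # → 1.09
```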
4.3 Comparing equity
To illustrate how the improved accuracy manifests itself in equity calculation, we assign prize structures to the three previous examples. For the first two, in which n = 3, we assume prizes for 1st and 2nd of $200 and $100 respectively. For the third example we assume a $500, $300, $200 prize structure, as in the example described in Section 2.
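Each equity figure is simply the dot product of a player's finish probabilities with the prize vector. For example, for x ∝ (1, 1, 2) with the $200/$100 structure, using the probabilities my algorithm produced in Section 3.3 (a short Python check):

```python
prizes = [200, 100, 0]  # 1st, 2nd, 3rd

# Finish probabilities for x = (1, 1, 2) from Section 3.3 (my algorithm):
p = [[1/4, 9/28, 3/7],
     [1/4, 9/28, 3/7],
     [1/2, 5/14, 1/7]]

equities = [sum(pi * z for pi, z in zip(row, prizes)) for row in p]
print([round(e, 1) for e in equities])  # → [82.1, 82.1, 135.7]
```

These match the "Mine" column of Table 5.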
              Equity ($)
Player     MH      Mine    True
1         83.3     82.1    82.1
2         83.3     82.1    82.1
3        133.3    135.7   135.8

Table 5: x ∝ (1, 1, 2)
              Equity ($)
Player     MH      Mine    True
1         19.1     15.5    16.6
2        137.4    139.2   138.6
3        143.5    145.3   144.8

Table 6: x ∝ (1, 9, 10)
              Equity ($)
Player     MH      Mine    True
1        115.6    108.4   108.1
2        164.7    160.9   159.9
3        206.9    207.8   206.8
4        241.8    246.0   246.3
5        271.0    277.0   279.0

Table 7: x ∝ (2, 3, 4, 5, 6)
Again, it is evident that my algorithm is significantly more accurate than the MH algorithm. In a similar fashion to Section 4.2, I quantify this by calculating the average absolute error
\[
  \varepsilon = \frac{1}{n} \sum_{i=1}^{n} \lvert \hat{y}_i - y_i \rvert,
\]
where \hat{y}_i is the approximate equity and y_i is player i's true equity. We find that the average errors of my algorithm for the three examples are respectively 0.05, 0.75, and 0.95, while the MH algorithm produces average errors of 1.63, 1.66, and 4.99. By investigating other examples, I've found that my algorithm generally reduces the absolute error of the equity approximations by a factor of about 3 to 5.