Lecture 5

Lecture 5 of game theory

Uploaded by

Rupam Kumawat
Copyright © All Rights Reserved

Lemke-Howson Algorithm – Notation

Fix a strategic-form two-player game G = ({1, 2}, (S1 , S2 ) , (u1 , u2 )).


Assume that
• S1 = {1, . . . , m}
• S2 = {m + 1, . . . , m + n}
(I.e., player 1 has m pure strategies 1, . . . , m and player 2 has n pure
strategies m + 1, . . . , m + n. In particular, each pure strategy determines
the player who can play it.)
Assume that u1, u2 are positive, i.e., u1(k, ℓ) > 0 and u2(k, ℓ) > 0 for
all (k, ℓ) ∈ S1 × S2.
This assumption is w.l.o.g.: adding a (sufficiently large) positive constant
to all payoffs makes them positive without altering the set of (mixed) Nash
equilibria.
Mixed strategies of player 1: σ1 = (σ(1), . . . , σ(m)) ∈ [0, 1]^m
Mixed strategies of player 2: σ2 = (σ(m + 1), . . . , σ(m + n)) ∈ [0, 1]^n
I.e., we omit the lower index of σ whenever it is determined by the argument.
A strategy profile σ = (σ1, σ2) can be seen as a vector
σ = (σ1, σ2) = (σ(1), . . . , σ(m + n)) ∈ [0, 1]^(m+n).
124
Running Example

3 4
1 3, 1 2, 2
2 2, 3 3, 1

• Player 1 (row) plays σ1 = (σ(1), σ(2)) ∈ [0, 1]^2
• Player 2 (column) plays σ2 = (σ(3), σ(4)) ∈ [0, 1]^2
• A typical mixed strategy profile is (σ(1), σ(2), σ(3), σ(4))

For example: σ1 = (0.2, 0.8) and σ2 = (0.4, 0.6) give the profile
(0.2, 0.8, 0.4, 0.6).

125
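As a quick sanity check, the expected payoffs of such a profile can be computed in a few lines of Python. The matrices below encode the bimatrix game above; the snippet is an illustration, not part of the slides, and uses exact fractions to avoid rounding:

```python
from fractions import Fraction as F

# Bimatrix game of the running example: A = player 1's payoffs u1(k, l),
# B = player 2's payoffs u2(k, l); rows are strategies 1,2, columns 3,4.
A = [[3, 2], [2, 3]]
B = [[1, 2], [3, 1]]

s1 = (F(1, 5), F(4, 5))   # sigma1 = (0.2, 0.8)
s2 = (F(2, 5), F(3, 5))   # sigma2 = (0.4, 0.6)

u1 = sum(s1[i] * s2[j] * A[i][j] for i in range(2) for j in range(2))
u2 = sum(s1[i] * s2[j] * B[i][j] for i in range(2) for j in range(2))
# u1 = 64/25 = 2.56, u2 = 44/25 = 1.76
```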
Characterizing Nash Equilibria
Recall that by Lemma 42 the following holds:

(σ1, σ2) = (σ(1), . . . , σ(m + n)) ∈ Σ is a Nash equilibrium iff
• For all ℓ = m + 1, . . . , m + n we have that
  u2(σ1, ℓ) ≤ u2(σ1, σ2)
  and either σ(ℓ) = 0, or u2(σ1, ℓ) = u2(σ1, σ2)
• For all k = 1, . . . , m we have that
  u1(k, σ2) ≤ u1(σ1, σ2)
  and either σ(k) = 0, or u1(k, σ2) = u1(σ1, σ2)

This is equivalent to the following: (σ1, σ2) = (σ(1), . . . , σ(m + n)) ∈ Σ
is a Nash equilibrium iff
• For all ℓ = m + 1, . . . , m + n we have that either σ(ℓ) = 0, or ℓ is
  a best response to σ1.
• For all k = 1, . . . , m we have that either σ(k) = 0, or k is a best
  response to σ2.
126
Characterizing Nash Equilibria
Given a mixed strategy σ1 = (σ(1), . . . , σ(m)) of player 1 we define
L(σ1) ⊆ {1, 2, . . . , m + n} to consist of
• all k ∈ {1, . . . , m} satisfying σ(k) = 0
• all ℓ ∈ {m + 1, . . . , m + n} that are best responses to σ1
Given a mixed strategy σ2 = (σ(m + 1), . . . , σ(m + n)) of player 2 we
define L(σ2) ⊆ {1, 2, . . . , m + n} to consist of
• all k ∈ {1, . . . , m} that are best responses to σ2
• all ℓ ∈ {m + 1, . . . , m + n} satisfying σ(ℓ) = 0

Proposition 3
σ = (σ1 , σ2 ) is a Nash equilibrium iff L (σ1 ) ∪ L (σ2 ) = {1, . . . , m + n}.

We also label the vector 0_m := (0, . . . , 0) ∈ R^m with {1, . . . , m} and
0_n := (0, . . . , 0) ∈ R^n with {m + 1, . . . , m + n}.
We consider (0_m, 0_n) as a special mixed strategy profile.

How many labels could possibly be assigned to one strategy?


127
Running Example

3 4
1 3, 1 2, 2
2 2, 3 3, 1

A strategy σ1 = (2/3, 1/3) of player 1 is labeled by 3, 4 since both
pure strategies 3, 4 of player 2 are best responses to σ1 (they result in
the same payoff to player 2).
A strategy σ2 = (1/2, 1/2) of player 2 is labeled by 1, 2 since both
pure strategies 1, 2 of player 1 are best responses to σ2 (they result in
the same payoff to player 1).
A strategy σ1 = (0, 1) of player 1 is labeled by 1, 3 since the strategy 1
is played with zero probability in σ1 and 3 is the best response to σ1.
A strategy σ1 = (1/10, 9/10) of player 1 is labeled by 3 since no pure
strategy of player 1 is played with zero probability (and hence neither
1 nor 2 labels σ1) and 3 is the best response to σ1.

128
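The label computations above can be replayed in Python. The helper functions below are illustrative (they are not part of the slides) and compute L(σ1) and L(σ2) for the running example with exact fractions:

```python
from fractions import Fraction as F

# Running example payoffs; rows = player 1's strategies 1,2,
# columns = player 2's strategies 3,4.
U1 = [[3, 2], [2, 3]]
U2 = [[1, 2], [3, 1]]

def labels1(s1):
    """L(sigma1): k with sigma1(k) = 0, plus best responses of player 2."""
    pay = [s1[0] * U2[0][j] + s1[1] * U2[1][j] for j in range(2)]
    return ({k + 1 for k in range(2) if s1[k] == 0} |
            {j + 3 for j in range(2) if pay[j] == max(pay)})

def labels2(s2):
    """L(sigma2): best responses of player 1, plus l with sigma2(l) = 0."""
    pay = [s2[0] * U1[i][0] + s2[1] * U1[i][1] for i in range(2)]
    return ({i + 1 for i in range(2) if pay[i] == max(pay)} |
            {j + 3 for j in range(2) if s2[j] == 0})

print(labels1((F(2, 3), F(1, 3))))    # {3, 4}
print(labels2((F(1, 2), F(1, 2))))    # {1, 2}
print(labels1((0, 1)))                # {1, 3}
print(labels1((F(1, 10), F(9, 10))))  # {3}

# Proposition 3: these two fully labeled strategies together carry all labels.
assert labels1((F(2, 3), F(1, 3))) | labels2((F(1, 2), F(1, 2))) == {1, 2, 3, 4}
```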
Non-degenerate Games
Definition: G is non-degenerate if for every σ1 ∈ Σ1 we have that |supp(σ1 )| is
at least the number of pure best responses to σ1 , and for every σ2 ∈ Σ2 we
have that |supp(σ2 )| is at least the number of pure best responses to σ2 .
"Most" games are non-degenerate, or can be made non-degenerate by
a slight perturbation of payoffs
We assume that the game G is non-degenerate.
Non-degeneracy implies that |L(σ1)| ≤ m for every σ1 ∈ Σ1 and
|L(σ2)| ≤ n for every σ2 ∈ Σ2.
We say that a strategy σ1 of player 1 (or σ2 of player 2) is fully labeled
if |L (σ1 )| = m (or |L (σ2 )| = n, respectively).
Lemma 50
Non-degeneracy of G implies the following:
• If σi, σ′i ∈ Σi are fully labeled, then L(σi) ≠ L(σ′i). There are at
  most (m+n choose m) fully labeled strategies of player 1, and at most
  (m+n choose n) of player 2.
• For every fully labeled σi ∈ Σi and every label k ∈ L(σi) there is
  exactly one fully labeled σ′i ∈ Σi such that
  L(σi) ∩ L(σ′i) = L(σi) ∖ {k}.
129
Examples

An example of a degenerate game:

3 4
1 1, 1 1, 1
2 3, 3 4, 4
Note that there are two pure best responses to the strategy 1.

Are there fully labeled strategies in the following game?

3 4
1 3, 1 2, 2
2 2, 3 3, 1
Yes, the strategy (2/3, 1/3) of player 1 is labeled by 3, 4 and the
strategy (1/2, 1/2) of player 2 is labeled by 1, 2.
Exercise: Find all fully labeled strategies in the above example.

130
Lemke-Howson (Idea)
Define a graph H1 = (V1, E1) where

V1 = {σ1 ∈ Σ1 | |L(σ1)| = m} ∪ {0_m}

and {σ1, σ′1} ∈ E1 iff L(σ1) ∩ L(σ′1) = L(σ1) ∖ {k} for some label k.
Note that σ′1 is determined by σ1 and k; we say that σ′1 is obtained from σ1 by
dropping k.

Define a graph H2 = (V2, E2) where

V2 = {σ2 ∈ Σ2 | |L(σ2)| = n} ∪ {0_n}

and {σ2, σ′2} ∈ E2 iff L(σ2) ∩ L(σ′2) = L(σ2) ∖ {ℓ} for some label ℓ.
Note that σ′2 is determined by σ2 and ℓ; we say that σ′2 is obtained from σ2 by
dropping ℓ.

Given σi, σ′i ∈ Vi and k, ℓ ∈ {1, . . . , m + n}, we write σi ↔[k,ℓ] σ′i if
L(σi) ∩ L(σ′i) = L(σi) ∖ {k} and L(σi) ∩ L(σ′i) = L(σ′i) ∖ {ℓ}
131
Running Example

3 4
1 3, 1 2, 2
2 2, 3 3, 1

H1 (each vertex shown with its labels in brackets):
  (0, 0) [1, 2],  (1, 0) [2, 4],  (0, 1) [1, 3],  (2/3, 1/3) [3, 4]
H2 (each vertex shown with its labels in brackets):
  (0, 0) [3, 4],  (1, 0) [1, 4],  (0, 1) [2, 3],  (1/2, 1/2) [1, 2]

(Here, the labels of nodes are not parts of the graphs; edges connect
vertices whose label sets agree on all but one label.)
For example, (0, 0) ↔[2,3] (0, 1) and (0, 1) ↔[1,4] (2/3, 1/3) in H1.
132
Lemke-Howson (Idea)

The algorithm basically searches through H1 × H2 = (V1 × V2, E),
where {(σ1, σ2), (σ′1, σ′2)} ∈ E iff either {σ1, σ′1} ∈ E1, or {σ2, σ′2} ∈ E2.

Given i ∈ N, we write

(σ1, σ2) →[k,ℓ]_i (σ′1, σ′2)

and say that k was dropped from L(σi) and ℓ added to L(σi) if
σi ↔[k,ℓ] σ′i and σ−i = σ′−i.

Observe that by Lemma 50, whenever a label k is dropped from
L(σi), the resulting vertex of H1 × H2 is uniquely determined.
Also, |V| = |V1||V2| ≤ (m+n choose m)·(m+n choose n).

133
Running Example
3 4
1 3, 1 2, 2
2 2, 3 3, 1
The graph H1 × H2 has 16 nodes.

Let us follow a path in H1 × H2 starting in ((0, 0), (0, 0)):

((0, 0), (0, 0)) →[2,3]_1 ((0, 1), (0, 0))
                 →[3,1]_2 ((0, 1), (1, 0))
                 →[1,4]_1 ((2/3, 1/3), (1, 0))
                 →[4,2]_2 ((2/3, 1/3), (1/2, 1/2))

This is one of the paths followed by Lemke-Howson:
• First, choose which label to drop from L(σ1) (here we drop 2
  from L(0, 0)), which adds exactly one new label (here 3)
• Then always drop the duplicate label, i.e. the one labeling both
  nodes, until no duplicate label is present (then we have a Nash
  equilibrium)
134
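The walk above can be reproduced mechanically. The sketch below hard-codes the vertex sets of H1 and H2 for the running example (read off an earlier slide), recomputes labels with small helper functions, and follows the duplicate-label rule; it illustrates the idea on this one game, it is not the full algorithm:

```python
from fractions import Fraction as F

U1 = [[3, 2], [2, 3]]   # running example payoffs of player 1
U2 = [[1, 2], [3, 1]]   # and of player 2

def labels1(s):
    if s == (0, 0):                     # the special vertex 0_m
        return frozenset({1, 2})
    pay = [s[0] * U2[0][j] + s[1] * U2[1][j] for j in range(2)]
    return (frozenset(k + 1 for k in range(2) if s[k] == 0) |
            frozenset(j + 3 for j in range(2) if pay[j] == max(pay)))

def labels2(s):
    if s == (0, 0):                     # the special vertex 0_n
        return frozenset({3, 4})
    pay = [s[0] * U1[i][0] + s[1] * U1[i][1] for i in range(2)]
    return (frozenset(i + 1 for i in range(2) if pay[i] == max(pay)) |
            frozenset(j + 3 for j in range(2) if s[j] == 0))

# Vertex sets of H1 and H2, read off the earlier slide.
V1 = [(0, 0), (1, 0), (0, 1), (F(2, 3), F(1, 3))]
V2 = [(0, 0), (1, 0), (0, 1), (F(1, 2), F(1, 2))]

def drop(V, lab, v, k):
    # the unique neighbour keeping all labels of v except k (Lemma 50)
    (w,) = [u for u in V if u != v and lab(v) - {k} <= lab(u)]
    return w

v1, v2 = (0, 0), (0, 0)
v1 = drop(V1, labels1, v1, 2)           # initial choice: drop label 2
side = 2
while labels1(v1) & labels2(v2):
    (dup,) = labels1(v1) & labels2(v2)  # the duplicate label
    if side == 2:
        v2 = drop(V2, labels2, v2, dup)
    else:
        v1 = drop(V1, labels1, v1, dup)
    side = 3 - side

print(v1, v2)   # the equilibrium (2/3, 1/3), (1/2, 1/2)
```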
Lemke-Howson (Idea)
The Lemke-Howson algorithm works as follows:
• Start in (σ1, σ2) = (0_m, 0_n).
• Pick a label k ∈ {1, . . . , m} and drop it from L(σ1).
  This adds a label, which then is the only element of L(σ1) ∩ L(σ2).
• loop
  • If L(σ1) ∩ L(σ2) = ∅, then stop and return (σ1, σ2).
  • Let {ℓ} = L(σ1) ∩ L(σ2); drop ℓ from L(σ2).
    This adds exactly one label to L(σ2).
  • If L(σ1) ∩ L(σ2) = ∅, then stop and return (σ1, σ2).
  • Let {k} = L(σ1) ∩ L(σ2); drop k from L(σ1).
    This adds exactly one label to L(σ1).

Lemma 51
The algorithm proceeds through every vertex of H1 × H2 at most once.
Indeed, if (σ1, σ2) were visited twice (with distinct predecessors), then either
σ1 or σ2 would have (at least) two neighbors reachable by dropping the label
k ∈ L(σ1) ∩ L(σ2), a contradiction with non-degeneracy.
Hence the algorithm stops after at most (m+n choose m)·(m+n choose n)
iterations.
135
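For the running example, the iteration bound of Lemma 51 can be evaluated directly (note it is only an upper bound: H1 × H2 actually has 16 nodes here):

```python
import math

# Upper bound on the number of vertices of H1 x H2 (Lemma 51).
m, n = 2, 2   # the running example
bound = math.comb(m + n, m) * math.comb(m + n, n)
print(bound)  # 36
```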
Lemke-Howson Algorithm – Detailed Treatment

The previous description of the LH algorithm does not specify how to
compute the graphs H1 and H2 and how to implement the dropping of
labels.
In particular, it is not clear how to identify fully labeled strategies and
"transitions" between them.

The complete algorithm relies on a reformulation which allows us to
identify fully labeled strategies (i.e. vertices of H1 and H2) with vertices
of certain convex polytopes.
The edges of H1 and H2 will correspond to edges of the polytopes.

This also gives a fully algebraic procedure for dropping labels.

136
Convex Polytopes
• A convex combination of points o1, . . . , op ∈ R^k is a point
  λ1·o1 + · · · + λp·op where λj ≥ 0 for each j and Σ_{j=1}^{p} λj = 1.
• A convex polytope determined by a set of points o1, . . . , op is
  the set of all convex combinations of o1, . . . , op.
• A hyperplane h is a supporting hyperplane of a polytope P if it
  has a non-empty intersection with P and one of the closed
  half-spaces determined by h contains P.
• A face of a polytope P is an intersection of P with one of its
  supporting hyperplanes.
• A vertex is a 0-dimensional face, an edge is a 1-dimensional face.
• Two vertices are neighbors if they lie on the same edge (they are
  endpoints of the edge).
• A polyhedron is an intersection of finitely many closed
  half-spaces.
  It is the set of solutions of a system of finitely many linear inequalities.
• Fact: Each bounded polyhedron is a polytope, and each polytope is
  a bounded polyhedron.
137
Characterizing Nash Equilibria
Let us return to Lemma 42:
(σ1, σ2) = (σ(1), . . . , σ(m + n)) is a Nash equilibrium iff
• For all ℓ = m + 1, . . . , m + n: u2(σ1, ℓ) ≤ u2(σ1, σ2) and either
  σ(ℓ) = 0, or u2(σ1, ℓ) = u2(σ1, σ2)
• For all k = 1, . . . , m: u1(k, σ2) ≤ u1(σ1, σ2) and either σ(k) = 0,
  or u1(k, σ2) = u1(σ1, σ2)

Now using the fact that

u2(σ1, ℓ) = Σ_{k=1}^{m} σ(k)·u2(k, ℓ)

and

u1(k, σ2) = Σ_{ℓ=m+1}^{m+n} σ(ℓ)·u1(k, ℓ)

we obtain ...
138
Reformulation
(σ1, σ2) = (σ(1), . . . , σ(m + n)) is a Nash equilibrium iff
• For all ℓ = m + 1, . . . , m + n,

  Σ_{k=1}^{m} σ(k)·u2(k, ℓ) ≤ u2(σ1, σ2)    (3)

  and either σ(ℓ) = 0, or the ineq. (3) holds with equality.
• For all k = 1, . . . , m,

  Σ_{ℓ=m+1}^{m+n} σ(ℓ)·u1(k, ℓ) ≤ u1(σ1, σ2)    (4)

  and either σ(k) = 0, or the ineq. (4) holds with equality.

Dividing (3) by u2(σ1, σ2) and (4) by u1(σ1, σ2) we get ...

139
Reformulation

(σ1, σ2) = (σ(1), . . . , σ(m + n)) is a Nash equilibrium iff
• For all ℓ = m + 1, . . . , m + n,

  Σ_{k=1}^{m} (σ(k)/u2(σ1, σ2))·u2(k, ℓ) ≤ 1    (5)

  and either σ(ℓ) = 0, or the ineq. (5) holds with equality.
• For all k = 1, . . . , m,

  Σ_{ℓ=m+1}^{m+n} (σ(ℓ)/u1(σ1, σ2))·u1(k, ℓ) ≤ 1    (6)

  and either σ(k) = 0, or the ineq. (6) holds with equality.

Considering each σ(k)/u2(σ1, σ2) as an unknown value x(k), and
each σ(ℓ)/u1(σ1, σ2) as an unknown value y(ℓ), we obtain ...
140
Reformulation
... constraints in variables x(1), . . . , x(m) and y(m + 1), . . . , y(m + n):
• For all ℓ = m + 1, . . . , m + n,

  Σ_{k=1}^{m} x(k)·u2(k, ℓ) ≤ 1    (7)

  and either y(ℓ) = 0, or the ineq. (7) holds with equality.
• For all k = 1, . . . , m,

  Σ_{ℓ=m+1}^{m+n} y(ℓ)·u1(k, ℓ) ≤ 1    (8)

  and either x(k) = 0, or the ineq. (8) holds with equality.

For all non-negative vectors x ≥ 0_m and y ≥ 0_n with x ≠ 0_m and
y ≠ 0_n that satisfy the above constraints we have that (x̄, ȳ) is a Nash
equilibrium.
Here the strategy x̄ is defined by x̄(k) := x(k)/Σ_{i=1}^{m} x(i), and the
strategy ȳ is defined by ȳ(ℓ) := y(ℓ)/Σ_{j=m+1}^{m+n} y(j).
Given a Nash equilibrium (σ1, σ2) = (σ(1), . . . , σ(m + n)), assigning
x(k) := σ(k)/u2(σ1, σ2) for k ∈ S1, and y(ℓ) := σ(ℓ)/u1(σ1, σ2) for
ℓ ∈ S2, satisfies the above constraints. 141
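For the running example, this correspondence between equilibria and (x, y) vectors can be checked directly; the snippet below is an illustration with exact fractions:

```python
from fractions import Fraction as F

A = [[3, 2], [2, 3]]      # u1 of the running example
B = [[1, 2], [3, 1]]      # u2
s1 = (F(2, 3), F(1, 3))   # the equilibrium strategies of the running example
s2 = (F(1, 2), F(1, 2))

u1 = sum(s1[i] * s2[j] * A[i][j] for i in range(2) for j in range(2))  # 5/2
u2 = sum(s1[i] * s2[j] * B[i][j] for i in range(2) for j in range(2))  # 5/3

x = tuple(p / u2 for p in s1)   # x(k) = sigma(k)/u2(sigma) -> (2/5, 1/5)
y = tuple(p / u1 for p in s2)   # y(l) = sigma(l)/u1(sigma) -> (1/5, 1/5)

# Normalizing x and y recovers the original mixed strategies.
assert tuple(v / sum(x) for v in x) == s1
assert tuple(v / sum(y) for v in y) == s2
```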
Reformulation
Let us extend the notion of expected payoff a bit.

Given ℓ = m + 1, . . . , m + n and x = (x(1), . . . , x(m)) ∈ [0, ∞)^m we
define

u2(x, ℓ) = Σ_{k=1}^{m} x(k)·u2(k, ℓ)

Given k = 1, . . . , m and y = (y(m + 1), . . . , y(m + n)) ∈ [0, ∞)^n we
define

u1(k, y) = Σ_{ℓ=m+1}^{m+n} y(ℓ)·u1(k, ℓ)

So the previous system of constraints can be rewritten succinctly:
• For all ℓ = m + 1, . . . , m + n we have that u2(x, ℓ) ≤ 1 and either
  y(ℓ) = 0, or u2(x, ℓ) = 1.
• For all k = 1, . . . , m we have that u1(k, y) ≤ 1, and either
  x(k) = 0, or u1(k, y) = 1.
142
Geometric Formulation
Define

P := {x ∈ R^m | (∀k ∈ S1 : x(k) ≥ 0) ∧ (∀ℓ ∈ S2 : u2(x, ℓ) ≤ 1)}
Q := {y ∈ R^n | (∀k ∈ S1 : u1(k, y) ≤ 1) ∧ (∀ℓ ∈ S2 : y(ℓ) ≥ 0)}

P and Q are convex polytopes.
As payoffs are positive and linear in their arguments, P and Q are bounded
polyhedra, which means that they are convex hulls of "corners", i.e., they are
polytopes.
We label points of P and Q as follows:
• L(x) = {k ∈ S1 | x(k) = 0} ∪ {ℓ ∈ S2 | u2(x, ℓ) = 1}
• L(y) = {k ∈ S1 | u1(k, y) = 1} ∪ {ℓ ∈ S2 | y(ℓ) = 0}

Proposition 4
For each point (x, y) ∈ P × Q ∖ {(0_m, 0_n)} such that
L(x) ∪ L(y) = {1, . . . , m + n} we have that the corresponding strategy
profile (x̄, ȳ) is a Nash equilibrium. Each Nash equilibrium is obtained
this way.
143
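Proposition 4 can be checked on the running example: the point (x, y) = ((2/5, 1/5), (1/5, 1/5)) of P × Q is fully labeled, and normalizing it recovers the equilibrium found earlier. The helper functions below are illustrative, hard-coded to the 2×2 example:

```python
from fractions import Fraction as F

U1 = [[3, 2], [2, 3]]   # running example payoffs
U2 = [[1, 2], [3, 1]]

def labels_P(x):
    """L(x) for a point of P: k with x(k)=0, plus l with u2(x, l)=1."""
    return ({k + 1 for k in range(2) if x[k] == 0} |
            {j + 3 for j in range(2) if x[0]*U2[0][j] + x[1]*U2[1][j] == 1})

def labels_Q(y):
    """L(y) for a point of Q: k with u1(k, y)=1, plus l with y(l)=0."""
    return ({i + 1 for i in range(2) if y[0]*U1[i][0] + y[1]*U1[i][1] == 1} |
            {j + 3 for j in range(2) if y[j] == 0})

x = (F(2, 5), F(1, 5))
y = (F(1, 5), F(1, 5))
assert labels_P(x) | labels_Q(y) == {1, 2, 3, 4}   # fully labeled pair

xbar = tuple(v / sum(x) for v in x)   # (2/3, 1/3)
ybar = tuple(v / sum(y) for v in y)   # (1/2, 1/2)
```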
Geometric Formulation
Without proof: Non-degeneracy of G implies that
• For all x ∈ P we have |L(x)| ≤ m.
• x is a vertex of P iff |L(x)| = m
  (That is, vertices of P are exactly the points incident on exactly m faces)
• For two distinct vertices x, x′ of P we have L(x) ≠ L(x′).
• Every vertex of P is incident on exactly m edges; in particular,
  for each k ∈ L(x) there is a unique (neighboring) vertex x′ such
  that L(x) ∩ L(x′) = L(x) ∖ {k}.
Similar claims are true for Q (just substitute m with n and P with Q).

Define a graph H1 = (V1, E1) where V1 is the set of all vertices x of P
and {x, x′} ∈ E1 iff L(x) ∩ L(x′) = L(x) ∖ {k} for some label k.
Define a graph H2 = (V2, E2) where V2 is the set of all vertices y of Q
and {y, y′} ∈ E2 iff L(y) ∩ L(y′) = L(y) ∖ {ℓ} for some label ℓ.
The notions of dropping and adding labels remain the same as before.
144
Lemke-Howson (Algorithm)
The Lemke-Howson algorithm works as follows:
• Start in (x, y) := (0_m, 0_n) ∈ P × Q.
• Pick a label k ∈ {1, . . . , m} and drop it from L(x).
  This adds a label, which then is the only element of L(x) ∩ L(y).
• loop
  • If L(x) ∩ L(y) = ∅, then stop and return (x, y).
  • Let {ℓ} = L(x) ∩ L(y); drop ℓ from L(y).
    This adds exactly one label to L(y).
  • If L(x) ∩ L(y) = ∅, then stop and return (x, y).
  • Let {k} = L(x) ∩ L(y); drop k from L(x).
    This adds exactly one label to L(x).

Lemma 52
The algorithm proceeds through every vertex of H1 × H2 at most once.

Hence the algorithm stops after at most (m+n choose m)·(m+n choose n)
iterations.

145
The Algebraic Procedure

How do we effectively move between vertices of H1 × H2?
That is, how do we compute the result of dropping a label?

We employ the so-called tableau method with an appropriate
pivoting rule.

146
Slack Variables Formulation
Recall our succinct characterization of Nash equilibria:
• For all ℓ = m + 1, . . . , m + n we have that u2(x, ℓ) ≤ 1 and either
  y(ℓ) = 0, or u2(x, ℓ) = 1.
• For all k = 1, . . . , m we have that u1(k, y) ≤ 1, and either
  x(k) = 0, or u1(k, y) = 1.

We turn this into a system of equations in variables x(1), . . . , x(m),
y(m + 1), . . . , y(m + n) and slack variables r(1), . . . , r(m),
z(m + 1), . . . , z(m + n):

u2(x, ℓ) + z(ℓ) = 1                      ℓ ∈ S2
u1(k, y) + r(k) = 1                      k ∈ S1
x(k) ≥ 0, y(ℓ) ≥ 0                       k ∈ S1, ℓ ∈ S2
r(k) ≥ 0, z(ℓ) ≥ 0                       k ∈ S1, ℓ ∈ S2
x(k)·r(k) = 0, y(ℓ)·z(ℓ) = 0             k ∈ S1, ℓ ∈ S2

Solving this is called the linear complementarity problem (LCP).
147
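For the running example, one can verify that the point computed in the worked example later in this lecture indeed solves this LCP; the check below is an illustration, not part of the algorithm:

```python
from fractions import Fraction as F

U1 = [[3, 2], [2, 3]]   # running example payoffs
U2 = [[1, 2], [3, 1]]

x = (F(2, 5), F(1, 5))   # candidate solution of the LCP
y = (F(1, 5), F(1, 5))

# Slack variables from the two families of equations.
z = tuple(1 - (x[0]*U2[0][j] + x[1]*U2[1][j]) for j in range(2))
r = tuple(1 - (y[0]*U1[i][0] + y[1]*U1[i][1]) for i in range(2))

assert all(v >= 0 for v in x + y + r + z)       # non-negativity
assert all(x[k] * r[k] == 0 for k in range(2))  # complementarity
assert all(y[j] * z[j] == 0 for j in range(2))
```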
Tableaux
The LH algorithm represents the current vertex of H1 × H2 using
a tableau defined as follows.

Define two sets of variables:
𝓜 := {x(1), . . . , x(m), z(m + 1), . . . , z(m + n)}
𝓝 := {r(1), . . . , r(m), y(m + 1), . . . , y(m + n)}
A basis is a pair of sets of variables M ⊆ 𝓜 and N ⊆ 𝓝 where |M| = n
and |N| = m.
Intuition: Labels correspond to variables that are not in the basis.

A tableau T for a given basis (M, N):

P:  v = c_v − Σ_{v′ ∈ 𝓜∖M} a_{v′}·v′        v ∈ M
Q:  w = c_w − Σ_{w′ ∈ 𝓝∖N} a_{w′}·w′        w ∈ N

Here each c_v, c_w ≥ 0 and a_{v′}, a_{w′} ∈ R.

Note that the first part of the tableau corresponds to the polytope P,
the second one to the polytope Q.
148
Tableaux implementation of Lemke-Howson
A basic solution of a tableau T is obtained by assigning zero to the
non-basic variables and computing the values of the basic ones.
During a computation of the LH algorithm, the basic solutions will
correspond to vertices of the two polytopes P and Q.

Initial tableau: M = {z(m + 1), . . . , z(m + n)} and N = {r(1), . . . , r(m)}

P:  z(ℓ) = 1 − Σ_{k=1}^{m} x(k)·u2(k, ℓ)         ℓ ∈ S2
Q:  r(k) = 1 − Σ_{ℓ=m+1}^{m+n} y(ℓ)·u1(k, ℓ)     k ∈ S1

Note that assigning 0 to all non-basic variables we obtain x(k) = 0 for
k = 1, . . . , m and y(ℓ) = 0 for ℓ = m + 1, . . . , m + n.
So this particular tableau corresponds to (0_m, 0_n).
Note that the non-basic variables correspond precisely to the labels of (0_m, 0_n).
149
Lemke-Howson – Pivoting
Given a tableau T during a computation:

P:  v = c_v − Σ_{v′ ∈ 𝓜∖M} a_{v′}·v′        v ∈ M
Q:  w = c_w − Σ_{w′ ∈ 𝓝∖N} a_{w′}·w′        w ∈ N

Dropping a label corresponding to a variable v̄ ∈ 𝓜∖M (i.e. dropping
a label in P) is done by adding v̄ to the basis as follows:
• Find an equation v = c_v − Σ_{v′ ∈ 𝓜∖M} a_{v′}·v′ with minimum ratio c_v/a_v̄.
  Here c_v ≥ 0, and we treat rows with a_v̄ ≤ 0 as having ratio c_v/a_v̄ = ∞.
• M := (M ∖ {v}) ∪ {v̄}
• Reorganize the equation so that v̄ is on the left-hand side:

  v̄ = c_v/a_v̄ − Σ_{v′ ∈ 𝓜∖M, v′ ≠ v̄} (a_{v′}/a_v̄)·v′ − (1/a_v̄)·v

• Substitute the new expression for v̄ into all other equations.

Dropping labels in Q works similarly.
150
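The pivoting step can be sketched in Python with exact fractions. The tableau representation below (a dict mapping each basic variable to its row) and the variable names are choices made for this illustration, not notation from the slides:

```python
from fractions import Fraction as F

# A tableau row encodes  v = c - sum(a[v'] * v')  over the non-basic v'.
# Below: the P-part of the initial tableau of the running example,
# i.e. z(3) = 1 - 1 x(1) - 3 x(2) and z(4) = 1 - 2 x(1) - 1 x(2).
P = {'z3': (F(1), {'x1': F(1), 'x2': F(3)}),
     'z4': (F(1), {'x1': F(2), 'x2': F(1)})}

def pivot(tab, enter):
    """Bring `enter` into the basis: one label-dropping step."""
    # Minimum-ratio rule: among rows where `enter` has a positive
    # coefficient, pick the row minimizing c / a.
    leave = min((v for v in tab if tab[v][1].get(enter, 0) > 0),
                key=lambda v: tab[v][0] / tab[v][1][enter])
    c, coeffs = tab.pop(leave)
    a = coeffs.pop(enter)
    # Solve the leaving row for `enter`:
    #   enter = c/a - sum((a'/a) * v') - (1/a) * leave
    new_coeffs = {v: ai / a for v, ai in coeffs.items()}
    new_coeffs[leave] = 1 / a
    tab[enter] = (c / a, new_coeffs)
    # Substitute the new expression for `enter` into every other row.
    for v, (cv, av) in tab.items():
        if v != enter and enter in av:
            ae = av.pop(enter)
            cv -= ae * (c / a)
            for w, aw in new_coeffs.items():
                av[w] = av.get(w, 0) - ae * aw
            tab[v] = (cv, av)

pivot(P, 'x2')   # drop label 2: x(2) enters the basis, z(3) leaves
# Resulting rows:  x2 = 1/3 - (1/3) x1 - (1/3) z3
#                  z4 = 2/3 - (5/3) x1 + (1/3) z3
```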
Lemke-Howson – Tableaux

The previous slide gives a procedure for computing one step of
the LH algorithm.

The computation ends when:
• For each complementary pair (x(k), r(k)), one of the
  variables is in the basis and the other one is not
• For each complementary pair (y(ℓ), z(ℓ)), one of the
  variables is in the basis and the other one is not

151
Lemke-Howson – Example
Initial tableau (M = {z(3), z(4)}, N = {r(1), r(2)}):

z(3) = 1 − x(1)·1 − x(2)·3                  (9)
z(4) = 1 − x(1)·2 − x(2)·1                  (10)
r(1) = 1 − y(3)·3 − y(4)·2                  (11)
r(2) = 1 − y(3)·2 − y(4)·3                  (12)

Drop the label 2 from P: The minimum ratio 1/3 is in (9).

x(2) = 1/3 − (1/3)·x(1) − (1/3)·z(3)        (13)
z(4) = 2/3 − (5/3)·x(1) + (1/3)·z(3)        (14)
r(1) = 1 − y(3)·3 − y(4)·2                  (15)
r(2) = 1 − y(3)·2 − y(4)·3                  (16)

Here M = {x(2), z(4)}, N = {r(1), r(2)}.

Drop the label 3 from Q: The minimum ratio 1/3 is in (15).
152
Lemke-Howson – Example (Cont.)

x(2) = 1/3 − (1/3)·x(1) − (1/3)·z(3)        (17)
z(4) = 2/3 − (5/3)·x(1) + (1/3)·z(3)        (18)
y(3) = 1/3 − (2/3)·y(4) − (1/3)·r(1)        (19)
r(2) = 1/3 − (5/3)·y(4) + (2/3)·r(1)        (20)

Here M = {x(2), z(4)}, N = {y(3), r(2)}.
Drop the label 1: The minimum ratio (2/3)/(5/3) = 2/5 is in (18).

x(2) = 1/5 − (2/5)·z(3) + (1/5)·z(4)        (21)
x(1) = 2/5 + (1/5)·z(3) − (3/5)·z(4)        (22)
y(3) = 1/3 − (2/3)·y(4) − (1/3)·r(1)        (23)
r(2) = 1/3 − (5/3)·y(4) + (2/3)·r(1)        (24)

Here M = {x(2), x(1)}, N = {y(3), r(2)}.
Drop the label 4: The minimum ratio 1/5 is in (24).
153
Lemke-Howson – Example (Cont.)

x(2) = 1/5 − (2/5)·z(3) + (1/5)·z(4)        (25)
x(1) = 2/5 + (1/5)·z(3) − (3/5)·z(4)        (26)
y(3) = 1/5 − (3/5)·r(1) + (2/5)·r(2)        (27)
y(4) = 1/5 + (2/5)·r(1) − (3/5)·r(2)        (28)

Here M = {x(2), x(1)}, N = {y(3), y(4)} and thus
• x(1) ∈ M but r(1) ∉ N
• x(2) ∈ M but r(2) ∉ N
• y(3) ∈ N but z(3) ∉ M
• y(4) ∈ N but z(4) ∉ M
So the algorithm stops.
Assign z(3) = z(4) = r(1) = r(2) = 0 and obtain
x(1) = 2/5, x(2) = 1/5, y(3) = 1/5, y(4) = 1/5,
which after normalization yields the Nash equilibrium
((2/3, 1/3), (1/2, 1/2)).
154
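As a final sanity check, one can verify directly that the computed profile is a Nash equilibrium, i.e. that no pure deviation improves either player's payoff (an illustrative check with exact fractions):

```python
from fractions import Fraction as F

A = [[3, 2], [2, 3]]      # u1 of the running example
B = [[1, 2], [3, 1]]      # u2
s1 = (F(2, 3), F(1, 3))   # normalized x
s2 = (F(1, 2), F(1, 2))   # normalized y

u1 = sum(s1[i] * s2[j] * A[i][j] for i in range(2) for j in range(2))  # 5/2
u2 = sum(s1[i] * s2[j] * B[i][j] for i in range(2) for j in range(2))  # 5/3

# No pure deviation improves either player's payoff.
assert all(sum(s2[j] * A[i][j] for j in range(2)) <= u1 for i in range(2))
assert all(sum(s1[i] * B[i][j] for i in range(2)) <= u2 for j in range(2))
```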
