Game 02
Externalities - enter through payoffs, not strategies; we can do it with strategies, but using payoffs
    is more convenient because we can take the strategy space as fixed without imposing a strong
    assumption (makes the existence proof easier)
Pseudo Game - (Friedman) opponents' strategy choices limit a player's strategy space; it's not
    necessary to do this with the strategy space because it can be done through actions; think of
    strategies as "will try to..." and actions as "will/can do..."
Sequentiality - sequence of play is built into the strategies
Other Players' Strategies - for player i, the strategies played by the other players are denoted
    by s_~i
Nash Equilibrium (NE) - a set of strategies (s_1*, s_2*, ..., s_n*) with each s_i* ∈ S_i such that for
    every individual u_i(s_i*, s_~i*) ≥ u_i(s_i, s_~i*) ∀ s_i ∈ S_i
    (i.e., each player is playing his best reply to the opponents' best replies)
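Code Sketch - a minimal Python illustration (not from the lecture; the payoffs are a made-up
    prisoner's dilemma): brute-force check of the NE definition in a 2-player finite game by
    testing every strategy profile against every unilateral deviation

```python
# Brute-force pure-strategy Nash equilibria of a 2-player finite game.
# Hypothetical prisoner's dilemma payoffs, purely for illustration.
import itertools

u1 = [[3, 0],   # player 1's payoffs; rows = player 1's strategy
      [5, 1]]
u2 = [[3, 5],   # player 2's payoffs; columns = player 2's strategy
      [0, 1]]

def pure_nash(u1, u2):
    """Return all (s1, s2) where no player gains from a unilateral deviation."""
    eq = []
    for s1, s2 in itertools.product(range(len(u1)), range(len(u1[0]))):
        best1 = all(u1[s1][s2] >= u1[d][s2] for d in range(len(u1)))
        best2 = all(u2[s1][s2] >= u2[s1][d] for d in range(len(u1[0])))
        if best1 and best2:
            eq.append((s1, s2))
    return eq

print(pure_nash(u1, u2))  # [(1, 1)]: mutual defection is the unique pure NE
```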
Existence - we don't have enough to determine if a NE exists; we can come up with games
    (description of players, strategy space and payoffs) that don't have a NE
    No NE Game - n players (players); each player announces a number (strategy space); the
    player with the largest number wins x dollars and the others get nothing (payoffs); this is a
    game, but there is no NE because players can always find a larger number; the problem is
    that the strategy space is not bounded (or closed)
Assumptions for NE - a set of sufficient conditions to get a NE; a NE could exist without these, but
    a general existence proof is very difficult without them; since players are maximizing their
    expected payoffs, it makes sense that these assumptions (and the existence proof) mirror
    consumer optimization
(1) S_i Compact (Closed & Bounded) - if not, we could have the possibility of no best reply
(2) S_i Convex Set - combined with #3, ensures best replies are convex sets
(3) u_i Quasiconcave - combined with #2, ensures best replies are convex sets
(4) u_i Jointly Continuous - guarantees the best reply changes with respect to opponents'
    strategies in a well-behaved way
Nash Theorem - any game in strategic form satisfying assumptions (1)-(4) above has a Nash
    equilibrium (s_1*, ..., s_n*) with each s_i* ∈ S_i.
Finite Games - not covered because S_i is not a convex set; the mixed extension of a finite
    game will satisfy the assumptions and have a NE (see the sketch below)
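Code Sketch - a quick Python check (not from the lecture) of the mixed extension claim: matching
    pennies has no pure NE, but against σ_2 = (1/2, 1/2) player 1's expected payoff is the same for
    every mixed strategy, so σ_1 = (1/2, 1/2) is a best reply and ((1/2, 1/2), (1/2, 1/2)) is a NE

```python
# Matching pennies: player 1's payoff matrix (zero sum, player 2 gets -A).
A = [[1, -1],
     [-1, 1]]

def expected_u1(p, q):
    """Expected payoff when players put probability p, q on strategy 0."""
    return sum(A[i][j] * [p, 1 - p][i] * [q, 1 - q][j]
               for i in range(2) for j in range(2))

# against q = 1/2, every p earns 0, so p = 1/2 is (among) the best replies
print([expected_u1(p, 0.5) for p in (0.0, 0.25, 0.5, 1.0)])  # all 0.0
```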
Outline of Proof: this is very long and will require lots of set up; here's the short version:
Write optimization problem
Find best replies
Get properties of best replies
Use Berge Maximum Theorem
Use fixed point theorem
Proof:
Optimization Problem - this is faced by each player in the game:
    max_{s_i} u_i(s_i, s_~i) s.t. s_i ∈ S_i
Best Replies - ψ_i(s_~i) is the set of solutions to player i's optimization problem; by the
    Berge Maximum Theorem (details below), ψ_i(s_~i) is nonempty, compact valued, and UHC
Convex Valued - take any two best replies x, y ∈ ψ_i(s_~i) and any λ ∈ [0,1]; both achieve
    the maximized value, so quasiconcavity of u_i gives
    u_i(λx + (1−λ)y, s_~i) ≥ min{u_i(x, s_~i), u_i(y, s_~i)} = u_i(x, s_~i),
    and the value can't exceed the maximum
    ∴ u_i(λx + (1−λ)y, s_~i) = u_i(x, s_~i)
    Which means λx + (1−λ)y ∈ ψ_i(s_~i), so ψ_i(s_~i) is a convex set
Define S = ∏_i S_i = S_1 × S_2 × ... (Cartesian product)
More notation: ψ_i(s_~i) is the set containing all of player i's best replies to s_~i (strategies
    played by the other players); ψ(s) maps any strategy profile s (one strategy for each player)
    into the Cartesian product of each player's best replies to the other players' strategies; s
    can be written as (s_1, s_2, ..., s_n) = (s_i, s_~i) = (s_j, s_~j)
    ψ(s) is nonempty, compact, convex valued, and UHC (because it is a
    Cartesian product of nonempty, compact, convex valued, and UHC
    sets... this is something else Berge proved, but we won't)
    [Figure: best reply correspondences ψ_1(s_2) and ψ_2(s_1) plotted in S_1 × S_2]
Kakutani Fixed Point Theorem - consider a mapping ψ : S → 2^S where
    S is a compact and convex set and ψ is nonempty, compact, convex
    valued and UHC; then ∃ s* with s* ∈ ψ(s*)
    [Figure: the fixed point s* ∈ ψ(s*)]
All the assumptions are satisfied so ∃ s* ∈ ψ(s*)
∴ s_i* ∈ ψ_i(s_~i*) ... i.e., u_i(s_i*, s_~i*) ≥ u_i(s_i, s_~i*) ∀ s_i ∈ S_i
That means there exists a set of strategies s* where each player's strategy is a best
    reply to the other players' strategies (i.e., a Nash equilibrium)
Note: applying Kakutani only works because of the way we set up the mapping ψ; we can
    find other mappings that satisfy Kakutani but whose fixed point is not a Nash equilibrium
    (final exam question from ECO 7120)
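Code Sketch - a minimal Python illustration (not from the lecture; the linear demand and cost
    parameters are made up) of a fixed point of the best-reply mapping: iterate ψ for a Cournot
    duopoly and converge to q* ∈ ψ(q*), the Nash equilibrium

```python
# Cournot duopoly with demand P = a - b*(q1 + q2) and marginal cost c.
# Iterating the best-reply map converges to its fixed point (the NE).
a, b, c = 100.0, 1.0, 10.0

def best_reply(q_other):
    """Maximizer of (a - b*(q + q_other) - c)*q over q >= 0 (from the FOC)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1, q2 = 0.0, 0.0
for _ in range(60):
    q1, q2 = best_reply(q2), best_reply(q1)

print(round(q1, 6), round(q2, 6))        # both -> (a - c)/(3b) = 30.0
print(abs(best_reply(q2) - q1) < 1e-9)   # True: numerically a fixed point
```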
(details)
Berge Maximum Theorem - if F : S × T → R is a continuous, numerical function and
    Γ : S → 2^T is a compact valued, continuous correspondence such that ∀ s ∈ S, Γ(s) is
    nonempty, then the numerical function M defined by M(s) ≡ max{F(s, t) : t ∈ Γ(s)} is
    continuous in s and the correspondence ψ : S → 2^T defined by
    ψ(s) ≡ {t ∈ Γ(s) : F(s, t) = M(s)} is nonempty and compact valued at each s ∈ S and is
    upper hemi continuous in S
English - if the objective function (F) is continuous and the constraint set (Γ(s)) is
    nonempty, compact and continuous (i.e., both UHC and LHC), then the maximized value
    function (M(s)) is continuous and the maximizer (ψ(s)) exists, is compact valued, and is UHC
Translation -
    F : S × T → R ... F is a mapping from the domain S × T (all combinations of elements
    of the sets S and T) into the real numbers
    Γ : S → 2^T ... Γ is a mapping from the set S into the power set of T (the set of subsets
    of T); this makes Γ a correspondence, not a function
Consumer Analogy -
    S = parameters (price, income)
    T = choice variables (quantity)
    F(s, t) = objective function (utility)
    Γ(s) = constraint correspondence (budget set)
    M(s) = indirect utility
    ψ(s) = demand correspondence
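Code Sketch - a brute-force Python version (not from the lecture; the Cobb-Douglas utility and
    all parameter values are made up) of the consumer analogy: compute M(s) and ψ(s) by grid
    search and watch them move smoothly with the parameters

```python
# Berge objects for the consumer problem: F = utility, Gamma(s) = budget set,
# M(s) = indirect utility, psi(s) = demand. Grid search, no libraries needed.
def berge_objects(px, py, m, grid=200):
    """Maximize u(x, y) = x**0.5 * y**0.5 over the budget set."""
    best_u, best_xy = -1.0, None
    for i in range(grid + 1):
        x = (m / px) * i / grid       # candidate x; budget feasible by design
        y = (m - px * x) / py         # spend the remaining income on y
        u = x ** 0.5 * y ** 0.5
        if u > best_u:
            best_u, best_xy = u, (x, y)
    return best_u, best_xy            # (M(s), psi(s))

# small changes in s = (px, py, m) move M(s) and psi(s) only a little:
# continuity of M and (single-valued here) UHC of psi in action
for px in (1.0, 1.01, 1.02):
    M, (x, y) = berge_objects(px, py=2.0, m=100.0)
    print(round(M, 3), round(x, 2), round(y, 2))
```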
Used in Game Theory -
    s = s_~i ... parameters (other players' strategies)
    t = s_i ... choice variables (player's strategy)
    F(s, t) = u_i(s_i, s_~i) ... objective function (utility)
    Γ(s) = S_i ... constraint correspondence (player's strategy space; constant by construction)
Proof (of Berge Maximum Theorem):
(1) ψ(s) is nonempty
    F is continuous and Γ(s) is nonempty and compact, so the max is attained (Weierstrass)
(2) ψ(s) is compact valued
    Consider any sequence t_n ∈ ψ(s_0) with t_n → t, where t ∈ Γ(s_0) (i.e., a sequence of
    maximizers for given parameter s_0 that converges to some feasible value t)
    That means either (a) F(s_0, t) < M(s_0) (i.e., t is not in the set of maximizers) or
    (b) F(s_0, t) = M(s_0); can't have F(s_0, t) > M(s_0) because M(s_0) is the
    maximized value at s_0
    Consider case (a)
        From continuity of F(s, t), after some kth term in the sequence, F(s_0, t_j) < M(s_0)
        for all j > k (i.e., the sequence F(s_0, t_j) eventually has to go below M(s_0) to
        converge to F(s_0, t)), but that contradicts t_j ∈ ψ(s_0) (every t_j earns exactly
        M(s_0)), so case (b) must hold
    Case (b) implies t ∈ ψ(s_0), which means ψ(s_0) is closed
    ψ(s_0) ⊆ Γ(s_0), so it is bounded; a closed and bounded set is compact (definition)
(3) ψ(s) is UHC
    Need to show ∃ convergent subsequences; these subsequences converge to t, which is
    a maximizer (i.e., t ∈ ψ(s_0))
    (i) Consider any s_0, any sequence s_n → s_0, and any sequence t_n ∈ ψ(s_n)
        We know t_n ∈ Γ(s_n) because the optimizer has to be feasible
        Γ(s) is UHC (because it's continuous)
        By definition of UHC, t_n has a subsequence that converges to a value in Γ(s_0)
        Label the subsequence t_k and say it converges to t ∈ Γ(s_0)
    (ii) Consider any t̂ ∈ Γ(s_0) and consider the sequence s_k that corresponds to the
        convergent subsequence t_k from the previous step (∴ s_k → s_0)
        Γ(s) is LHC (because it's continuous)
        By definition of LHC, ∃ a sequence t̂_k ∈ Γ(s_k) with t̂_k → t̂
    (iii) Since t_k is a subsequence of t_n ∈ ψ(s_n), we have t_k ∈ ψ(s_k)
        By definition of the maximized value function M(s), M(s_k) = F(s_k, t_k)
        The previous step said t̂_k ∈ Γ(s_k) (i.e., t̂_k is a sequence of feasible values)
        ∴ F(s_k, t_k) ≥ F(s_k, t̂_k)
    (iv) Since F(s, t) is continuous, we can take the limit of both sides and the inequality still
        holds (recall: s_k → s_0, t_k → t, and t̂_k → t̂)
        F(s_0, t) ≥ F(s_0, t̂) ∀ t̂ ∈ Γ(s_0) (the ∀ comes from (ii), where we picked any t̂)
    (v) This means t maximizes F for s_0 (i.e., t ∈ ψ(s_0)), so ψ(s) is UHC
(4) M(s) is continuous (i.e., ∀ sequences s_k → s_0, M(s_k) → M(s_0))
    M(s_k) = F(s_k, t_k) for t_k ∈ ψ(s_k)
    We already showed ψ(s) is UHC, so a convergent subsequence t_k → t exists and, by
    continuity of F, F(s_k, t_k) → F(s_0, t)
    Since t ∈ ψ(s_0), we know F(s_0, t) ≥ F(s_0, t') ∀ t' ∈ Γ(s_0)
    That means M(s_0) = F(s_0, t), so M(s_k) → M(s_0)
(End of proof of Berge Maximum Theorem)
These notes are combined with notes from ECO 7120 (new notes in blue)
General Maximization Problem - max_x F(x, α) s.t. x ∈ G(α), where x is a vector of decision
    variables and α is a vector of parameters (e.g., prices and endowments); G(α) is the
    "constraint set" which identifies all possible values that x can take on
New Notation - max_t F(s, t) s.t. t ∈ Γ(s)
Maximized Value Function - V(α) ≡ max_x F(x, α); value of the function at its maximum (e.g.,
    indirect utility function)
New Notation - M(s) ≡ max_t F(s, t) s.t. t ∈ Γ(s)
Maximizer - x(α) such that F(x(α), α) ≥ F(y, α) ∀ y ∈ G(α); the value of the decision
    variables that maximizes the objective function (e.g., demand)
New Notation - ψ(s) ≡ {t ∈ Γ(s) : F(s, t) = M(s)}
Theorem (restated) - if F(x, α) is continuous and G(α) is nonempty, compact (closed
    and bounded) for each α, and continuous in α, then the maximized value function (V(α)) is
    continuous and the maximizer (x(α)) is nonempty, compact valued, and UHC
Note1: if G(α) is not bounded for a given set of parameters, then the problem may not have
    a solution (e.g., a zero price could result in infinite demand for a good, so there's no way to
    maximize utility)
Note2: if a parameter does not enter a function (as they don't in the utility maximization we're
    studying), the function is continuous in that parameter
Continuous - function F(x) is continuous if for any sequence x_n → x_0,
    (i) F(x_n) → y (the sequence in the range determined by applying the function to the
    sequence x_n converges to some value y)
    (ii) F(x_0) exists
    (iii) F(x_0) = y (most non-continuous functions violate this part of the definition)
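Code Sketch - a tiny Python check (not from the lecture; f and g are made-up examples) of the
    sequence definition: a step function fails part (iii) because the function values converge to
    a value different from the function at the limit point

```python
# Sequence test for continuity at x0 = 0.
f = lambda x: x ** 2                      # continuous at 0
g = lambda x: 1.0 if x >= 0 else 0.0      # step function, not continuous at 0

x_n = [-1.0 / 2 ** n for n in range(1, 30)]   # x_n -> 0 from the left
print(f(x_n[-1]), f(0.0))   # ~0.0 and 0.0: f(x_n) -> f(x0), part (iii) holds
print(g(x_n[-1]), g(0.0))   # 0.0 and 1.0: g(x_n) -> 0 != g(x0), (iii) fails
```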
Upper Hemi Continuity - this is the "sort of" continuous we talked about in micro; consider a
    sequence of points α_n that converges to α_0 (blue dots in the graphs); upper hemi continuity
    says that any sequence determined by x(α_n) (red dots) converges to a point in x(α_0)
    [Figure: four graphs of x(α) around α_0; two are UH continuous, two are not]
Formally - given the convergent sequence α_n → α_0, any sequence y_n ∈ x(α_n) with
    y_n → y has y ∈ x(α_0)
Another Way - if a sequence of points in the correspondence converges to (α_0, y), then
    (α_0, y) must be in the correspondence
Convergence - only look at convergent sequences; some sequences will jump back and
    forth and the limit doesn't exist; for these sequences, we can use a subsequence that will
    converge
ECO 7405 Def'n - consider any sequence s_n → s_0 and any sequence t_n ∈ ψ(s_n) (aside:
    since ψ(s) is not single valued (i.e., a correspondence) there could be many such
    sequences); ψ(s) is UHC at s_0 if there exists a convergent subsequence t̂_n → t and
    the limit of every convergent subsequence is in ψ(s_0)
Lower Hemi Continuity - works backwards from UHC; take any point y in the correspondence
    at α_0; for any sequence of points α_n that converges to α_0, there exists a sequence in the
    correspondence that converges to y
    Difference - LHC is a very subtle difference (for me anyway) from UHC; basically, UHC
    says we look at a sequence in the correspondence to see if it converges to a point in the
    correspondence; LHC says we look at a point in the correspondence and then see if we
    can find a sequence in the correspondence that converges to that point... clear as mud?
[Figure: four graphs of x(α); two are LH continuous (one also UHC, one not UHC) and two
    are not LH continuous (one UHC, one not UHC)]
Formally - take any y ∈ x(α_0); for any convergent sequence α_n → α_0, ∃ y_n ∈ x(α_n) such
    that y_n → y
ECO 7405 Def'n - assume t_0 ∈ ψ(s_0); consider any sequence s_n → s_0; ψ(s) is LHC if
    there exists a sequence t_n ∈ ψ(s_n) such that t_n → t_0
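Code Sketch - a Python illustration (not from the lecture; the objective F(s, t) = s·t on t ∈ [0,1]
    is a made-up example) of a correspondence that is UHC but not LHC: ψ(s) = {1} for s > 0,
    {0} for s < 0, and all of [0,1] at s = 0

```python
# psi(s) = argmax over t in [0,1] of F(s, t) = s*t, computed on a grid.
def psi(s, grid=101):
    ts = [i / (grid - 1) for i in range(grid)]
    best = max(s * t for t in ts)
    return [t for t in ts if s * t == best]

s_n = [1.0 / 2 ** n for n in range(1, 10)]   # s_n -> 0 from above
# UHC holds at s = 0: the maximizers along s_n are all 1, and 1 is in psi(0)
print([psi(s)[0] for s in s_n], 1.0 in psi(0.0))
# LHC fails at s = 0: t_hat = 0.5 is in psi(0), but every t_n in psi(s_n)
# equals 1, so no such sequence can converge to 0.5
print(0.5 in psi(0.0), psi(s_n[0]))
```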
Strict Convexity Assumption - if preferences are strictly convex, then the maximizer is unique,
    so x(α) is a single valued correspondence (i.e., a function) and UHC reduces to ordinary
    continuity
Finite Strategy Spaces - the Nash theorem doesn't apply to games with a finite number of
    players, each with a finite number of strategies (S_i is not a convex set)
Mixed Extension - if we allow players to use probability distributions over their strategies, the
Nash assumptions will be satisfied
Mixed Strategy - player i plays pure strategy j with probability σ_i^j, where σ_i^j ≥ 0 ∀ j and
    Σ_j σ_i^j = 1; the set of player i's mixed strategies σ_i (a simplex) is compact and convex
Expected Utility - for a 3 player game, u_1(σ_1, σ_2, σ_3) = Σ_i Σ_j Σ_k a_ijk σ_1^i σ_2^j σ_3^k,
    where a_ijk is player 1's payoff when the players play pure strategies i, j, and k
    u_1(σ_1, σ_2, σ_3) is continuous - multiplication and addition are continuous functions
    u_1(σ_1, σ_2, σ_3) is quasiconcave - fix σ_2 and σ_3 and u_1 is linear in σ_1 (linear means
    concave, and concave implies quasiconcave)
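Code Sketch - the expected utility formula above as one numpy call (not from the lecture; the
    payoff tensor a_ijk is random, purely for illustration, and numpy is assumed available)

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-5, 5, size=(2, 3, 2)).astype(float)  # a[i,j,k] = player 1's payoff

s1 = np.array([0.5, 0.5])        # player 1's mixed strategy (sums to 1)
s2 = np.array([0.2, 0.3, 0.5])   # player 2's
s3 = np.array([0.9, 0.1])        # player 3's

# u1 = sum over i,j,k of a[i,j,k] * s1[i] * s2[j] * s3[k]
print(np.einsum('ijk,i,j,k->', a, s1, s2, s3))
# fixing s2 and s3, u1 is linear in s1 with these coefficients (hence quasiconcave)
print(np.einsum('ijk,j,k->i', a, s2, s3))
```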
Nash's Contribution - the proof of Nash's theorem "isn't that hard", but he won the Nobel Prize
for more than the theorem; Nash corrected and consolidated various other notions of
equilibrium by formally specifying the structure of the game
Poorly Defined - prior to Nash, the concept of equilibrium wasn't well defined
Cournot Equilibrium - 2 firms making simultaneous decisions on output
Stackelberg - 2 firms making sequential decisions on output (leader-follower)
Fellner - in the 1940s wrote "Competition Among The Few"; tried to combine all notions of
    equilibrium between different oligopoly models: "conjectural variation models"
Example - max_{q_i} π_i(q_i, q_j)
    Cournot... ∂π_i/∂q_i = ∂π_j/∂q_j = 0
    Stackelberg... ∂π_i/∂q_i + (∂π_i/∂q_j)(dq_j/dq_i) = 0 (leader) and ∂π_j/∂q_j = 0 (follower)
    Conjectural Variation - dq_j/dq_i, sometimes written dq_j^c/dq_i; what player i believes
        player j will do in response to player i's change in quantity
    General Model... ∂π_i/∂q_i + (∂π_i/∂q_j)(dq_j/dq_i) = 0 and
        ∂π_j/∂q_j + (∂π_j/∂q_i)(dq_i/dq_j) = 0
    General Model - was thought to open a lot of possibilities because it could derive the
        other models (e.g., dq_j/dq_i = dq_i/dq_j = 0 is Cournot); see the sketch below
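Code Sketch - a sympy derivation (not from the lecture; linear demand P = a − q1 − q2 and a
    common marginal cost c are made-up functional forms, and sympy is assumed available)
    showing the conjectural variation term at work: dq2/dq1 = 0 gives Cournot, while the
    follower's actual reaction slope gives Stackelberg

```python
import sympy as sp

q1, q2, a, c = sp.symbols('q1 q2 a c', positive=True)
pi1 = (a - q1 - q2 - c) * q1
pi2 = (a - q1 - q2 - c) * q2

# Cournot: each player's FOC with conjectural variation dq_j/dq_i = 0
print(sp.solve([sp.diff(pi1, q1), sp.diff(pi2, q2)], [q1, q2]))
# {q1: a/3 - c/3, q2: a/3 - c/3}

# Stackelberg: leader 1 uses the follower's actual reaction function
q2_react = sp.solve(sp.diff(pi2, q2), q2)[0]   # q2 = (a - c - q1)/2
print(sp.diff(q2_react, q1))                   # conjectural variation = -1/2
q1_star = sp.solve(sp.diff(pi1.subs(q2, q2_react), q1), q1)[0]
print(q1_star, sp.simplify(q2_react.subs(q1, q1_star)))  # (a-c)/2 and (a-c)/4
```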
Bresnahan - said conjectures may not be consistent with actual replies so he proposed
consistent conjectural variation; a whole series of "standard" conjectures for various
types of models
Game Theorists - eventually said conjectural variations didn't make sense; there are
    only two options in a single-stage, two-player game: simultaneous moves or leader-
    follower; a player can't change what he's doing if he can't observe the rival's move,
    and you can't have both players move after the other, so the general model for
    conjectural variations isn't right
Nash - corrected conjectural variations by formally specifying the structure of the game; Cournot
    and Stackelberg equilibria are both Nash equilibria of different games (different
    structure)