
Lecture 01

November 12, 2023

1 Representation of Static Games


Definition 1.1. A strategic form game is a tuple ⟨N, (Ai , ui )i∈N ⟩ that is composed of (1) a set N of
players, (2) a set Ai of actions for each player i ∈ N , and (3) a payoff function ui : A → R for each
player i ∈ N .

• In most cases, abstract players are named by integers, i.e., typically, N = {1, . . . , n}

• A = ×i∈N Ai = {(a1, . . . , an) : ai ∈ Ai, i = 1, . . . , n} is the Cartesian product of the Ai's, which is called the outcome space; a ∈ A is called an outcome, which is also an action profile of all players
– We call a list of objects (xi)i∈I, with each xi taken from some set Xi for each i ∈ I, a profile. In particular, an action profile is a list

a = (ai )i∈I ∈ A = ×i∈I Ai

• A payoff function can be considered as a utility function that represents a preference over the
outcome space for each player
• A strategic form game is also called a normal form game
• Here, the game is static in the sense that we do not consider the sequence of moves
– May think that players move simultaneously
– Strategic independence: when one player moves, she does not know the actions taken by others
– May not be appropriate for analyzing dynamic games

Example 1.1. Players 1 and 2 work in a team for the production of a public good. Each player's action is an effort level ai ∈ [0, 1]. The output, Q, depends on efforts according to a Cobb-Douglas production function

Q = K a1^θ1 a2^θ2.

The cost of effort, measured in terms of the public good, is Ci = ai^2. The payoff function is then

ui (a1 , a2 ) = K a1^θ1 a2^θ2 − ai^2.
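For concreteness, here is a minimal numerical sketch of this payoff function in Python; the parameter values K = 1 and θ1 = θ2 = 1/2 are illustrative assumptions, not part of the example.

# Team-production payoffs from Example 1.1 with illustrative parameters.
K, theta1, theta2 = 1.0, 0.5, 0.5

def u(i, a1, a2):
    """Payoff of player i in {1, 2} given efforts a1, a2 in [0, 1]."""
    Q = K * (a1 ** theta1) * (a2 ** theta2)      # Cobb-Douglas output
    cost = (a1 if i == 1 else a2) ** 2           # effort cost a_i^2
    return Q - cost

print(u(1, 0.25, 0.25), u(2, 0.25, 0.25))        # both get 0.25 - 0.0625 = 0.1875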

• Other common examples of static games are the prisoners' dilemma, Cournot duopoly, Bertrand
competition, etc.

• Complete information: we assume that players have common knowledge of the static game, i.e.,
they know the set of players and every player’s action set and payoff function.

2 Dominance and Rationality
2.1 Expected Payoff and Randomized Actions
• Let X be a generic domain or a set.
• Then denote by ∆(X) = {µ : X → [0, 1] : µ(x) ≥ 0 ∀x ∈ X and Σ_{x∈X} µ(x) = 1} the set of probabilities or lotteries over the set X.
• For example, if A is an outcome space, ∆(A) is the lotteries over it.
• The expected payoff of a lottery µ ∈ ∆(A) is then

ui(µ) = Σ_{a∈A} µ(a) ui(a).

• Define the support of a probability µ by


suppµ = {x ∈ X : µ(x) > 0}.

• A randomized choice of actions by player i, also called a mixed action, is a probability αi ∈ ∆(Ai )
– An action ai ∈ Ai is also called a pure action.
– Pure actions can be considered as special cases of mixed actions, where the probability αi
assigns 1 to a unique action ai ∈ Ai
– That is, Ai ∼ {αi ∈ ∆(Ai) : ♯suppαi = 1} ⊂ ∆(Ai), or, with abuse of notation, Ai ⊂ ∆(Ai)
– Then the expected payoff of a mixed action αi, given an action profile a−i = (a1, . . . , ai−1, ai+1, . . . , an) of all players other than i, is (see the numerical sketch after this list)

ui(αi, a−i) = Σ_{ai∈Ai} αi(ai) ui(ai, a−i)

• Notation:
– A−i = ×j≠i Aj is the set of action profiles of all players other than i
– a−i ∈ A−i is an action profile of all players other than i
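As a small numerical sketch of the expected-payoff formula ui(αi, a−i) above (the action labels and payoff numbers are made up purely for illustration):

def expected_payoff(alpha_i, u_row):
    """alpha_i: dict mapping actions to probabilities;
    u_row: dict mapping actions to u_i(a_i, a_minus_i) for a fixed a_minus_i."""
    return sum(p * u_row[a] for a, p in alpha_i.items())

u_row = {"x": 4, "y": 1, "z": 0}            # u_i(., a_minus_i) for a fixed opponents' profile (made-up numbers)
alpha_i = {"x": 0.5, "y": 0.25, "z": 0.25}  # a mixed action of player i
print(expected_payoff(alpha_i, u_row))      # 0.5*4 + 0.25*1 + 0.25*0 = 2.25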

2.2 Beliefs and Rationality


• A belief of player i about other players’ actions, denoted by µi ∈ ∆(A−i ), is a probability on
A−i .
– The expected payoff of responding with an action ai to belief µi is

ui(ai, µi) = Σ_{a−i∈A−i} µi(a−i) ui(ai, a−i)

– The expected payoff of responding with a mixed action αi to belief µi is

ui(αi, µi) = Σ_{ai∈Ai} αi(ai) ui(ai, µi) = Σ_{ai∈Ai} Σ_{a−i∈A−i} αi(ai) µi(a−i) ui(ai, a−i)

• A player i is said to be rational if she chooses the mixed action αi to solve the problem

max_{αi∈∆(Ai)} ui(αi, µi)

to respond to her belief µi about other players’ actions.

Example 2.1. Suppose that player 1's payoffs are given in the following table:

Player 2
L R
U 4 0
Player 1 M 3 3
L 2 2
D 0 4

Let p = µ1 (L) and, consequently, µ1 (R) = 1 − p. Then,


u1 (U, µ1 ) = pu1 (U, L) + (1 − p)u1 (U, R) = 4p,
u1 (M, µ1 ) = pu1 (M, L) + (1 − p)u1 (M, R) = 3,
u1 (D, µ1 ) = pu1 (D, L) + (1 − p)u1 (D, R) = 4(1 − p).
If p = 1/2 or p = 3/4, how would a rational player 1 respond to her belief?
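A minimal sketch that evaluates these expected payoffs and reads off the pure best replies for the two beliefs in question; it simply automates the hand calculation above.

# Expected payoffs of player 1 against the belief mu_1 = (p on L, 1 - p on R) in Example 2.1.
payoffs = {"U": (4, 0), "M": (3, 3), "L": (2, 2), "D": (0, 4)}   # (u1 against L, u1 against R)

def pure_best_replies(p):
    eu = {a: p * uL + (1 - p) * uR for a, (uL, uR) in payoffs.items()}
    best = max(eu.values())
    return {a for a, v in eu.items() if abs(v - best) < 1e-9}

for p in (0.5, 0.75):
    print(p, pure_best_replies(p))
# p = 1/2 : M is the unique best reply (expected payoffs are 2, 3, 2, 2 for U, M, L, D)
# p = 3/4 : U and M are both best replies (each gives 3), so every mixture of U and M is also optimal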
Definition 2.1. A (mixed) action αi∗ is a best reply to a belief µi ∈ ∆(A−i ) if
∀αi ∈ ∆(Ai ), ui (αi∗ , µi ) ≥ ui (αi , µi ),
that is,
αi∗ ∈ arg max_{αi∈∆(Ai)} ui(αi, µi).

• In general, arg max_{x∈X} f(x) = {x∗ ∈ X : f(x∗) = max_{x∈X} f(x)} is the set containing each x ∈ X that maximizes f(x).
• Denote by
BRi(µi) = arg max_{αi∈∆(Ai)} ui(αi, µi)
the set of best replies to µi.¹ In the earlier example, what are the best replies for p = 1/2 and p = 3/4, respectively?
Lemma 2.1. Fix a belief µi ∈ ∆(A−i ), then the following are equivalent:
(1) a mixed action αi∗ is a best reply to µi , i.e. αi∗ ∈ BRi (µi );
(2) for each ai ∈ Ai in the support of αi∗ ,
ui(ai, µi) = ui(αi∗, µi) = max_{αi∈∆(Ai)} ui(αi, µi);

(3) every pure action in the support of αi∗ is a best reply to µi , i.e. suppαi∗ ⊆ BRi (µi ).
Proof. To complete the argument, we show that (1) =⇒ (2) =⇒ (3) =⇒ (1).

2.3 Dominance and Iterated Dominance


Definition 2.2. A mixed action αi weakly (strictly) dominates another mixed (or pure) action βi if it yields a weakly (strictly) higher expected payoff regardless of others' actions, i.e.,
∀a−i ∈ A−i, ui(αi, a−i) ≥ (>) ui(βi, a−i).
A mixed action αi is weakly (strictly) dominant if it weakly (strictly) dominates every other action, i.e.,
∀βi ∈ ∆(Ai)\{αi}, ∀a−i ∈ A−i, ui(αi, a−i) ≥ (>) ui(βi, a−i).
A mixed action βi is strictly (weakly) dominated if there exists a mixed action αi that strictly (weakly)
dominates βi . A mixed action αi is undominated if it is not strictly dominated by any action.
¹ Note that the best reply BRi : ∆(A−i) ⇒ ∆(Ai) is a correspondence, i.e., a multi-valued function.

Lemma 2.2. If a mixed action αi is weakly dominant, then every pure action ai in the support of αi
is weakly dominant, i.e.

∀ai ∈ suppαi , ∀βi ∈ ∆(Ai ), ∀a−i ∈ A−i , ui (ai , a−i ) ≥ ui (βi , a−i ).

Proof. Let αi be weakly dominant. Suppose, for contradiction, that there exist some āi ∈ suppαi, some βi ∈ ∆(Ai), and some ā−i such that ui(āi, ā−i) < ui(βi, ā−i). Then, since ui(αi, ā−i) ≥ ui(βi, ā−i) by dominance, there must exist another action âi ∈ suppαi such that ui(âi, ā−i) > ui(āi, ā−i) (why?). Consider the mixed action αi′ defined by

αi′(āi) = 0,   αi′(âi) = αi(âi) + αi(āi),   αi′(ai) = αi(ai) for all ai ≠ āi, âi,

that is, αi′ moves all the probability on āi to âi. It is easy to see that

ui (αi′ , ā−i ) − ui (αi , ā−i ) = αi (āi ) [ui (âi , ā−i ) − ui (āi , ā−i )] > 0,

contradicting the assumption that αi is weakly dominant.


Exercise 2.1. Show that, if a mixed action αi ∈ ∆(Ai ) strictly dominates another mixed action
βi ∈ ∆(Ai ), βi will never be chosen by a rational player regardless of her belief µi ∈ ∆(A−i ).

• (The dominance principle) In other words, a rational player only chooses undominated actions.
• Rationality is assumed to be common knowledge; therefore, a rational player knows that her
opponents do not choose dominated actions.
• Consider the following example:

Player 2
l c r
U 3,2 0,1 0,0
Player 1 M 0,2 3,1 0,0
D 1,1 1,2 3,0

• We can apply the dominance principle iteratively:


– Rationality of order 1: for player 2, r is clearly dominated (by l or c, or by a mixture of them)
– Rationality of order 2: Player 1 knows that 2 is rational and, hence, that 2 only chooses l or c.
Then ½U + ½M strictly dominates D. Therefore, player 1 being rational and knowing that 2 is rational excludes D.
– Rationality of order 3: Player 2 knows that 1 is rational and that 1 knows she is rational; hence, player 1 only chooses U or M. Then, after excluding D, l clearly dominates c. Therefore, l is the only action player 2 takes if she is rational, knows that player 1 is rational, and knows that player 1 knows that she is rational.
– Rationality of order 4: player 1 chooses U since she knows that player 2 chooses l.
– Rationality and common knowledge of rationality yield that the only action profile chosen
by the players is (U, l)
• The procedure (algorithm), namely, “iterated elimination of strictly dominated actions” (IESD),
can be summarized as follows (a computational sketch in Python follows the example below):

– Step a: Check whether any player has a strictly dominated action. If not, stop.
– Step b: If yes, delete the dominated action(s) and the associated payoffs.
– Step c: Repeat steps a and b until no dominated actions are identified.

• The actions that survive this procedure are called rationalizable actions
• In general, rationalizable actions are not unique.
– Consider the following example:

Player 2
a b c d
U 3,2 0,3 2,1 0,0
Player 1 M 0,2 3,1 2,1 0,0
D 1,1 1,2 1,3 3,0
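The following is a minimal computational sketch of IESD for two-player finite games. It tests for strict domination by mixed actions with a small linear program (numpy/scipy are one possible implementation choice, not part of the lecture); applied to the first 3×3 example above, it leaves only (U, l), matching the reasoning there.

# Iterated elimination of strictly dominated actions (IESD) for a two-player finite game.
import numpy as np
from scipy.optimize import linprog

def dominated_rows(M):
    """Indices of rows of M (own payoffs: rows = own actions, columns = surviving
    opponent actions) that are strictly dominated by some mixture of the other rows."""
    n, m = M.shape
    dominated = []
    for r in range(n):
        others = [k for k in range(n) if k != r]
        if not others:
            continue
        # maximize eps s.t. sum_k alpha_k * M[others[k], c] >= M[r, c] + eps for every
        # column c, sum_k alpha_k = 1, alpha >= 0; row r is dominated iff optimal eps > 0.
        c_obj = np.zeros(len(others) + 1)
        c_obj[-1] = -1.0                                   # linprog minimizes, so minimize -eps
        A_ub = np.hstack([-M[others].T, np.ones((m, 1))])
        b_ub = -M[r]
        A_eq = [[1.0] * len(others) + [0.0]]
        res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * len(others) + [(None, None)])
        if res.success and -res.fun > 1e-9:
            dominated.append(r)
    return dominated

def iesd(U1, U2):
    """U1, U2: payoff matrices of players 1 and 2 (rows = player 1's actions)."""
    rows, cols = list(range(U1.shape[0])), list(range(U1.shape[1]))
    while True:
        bad_r = dominated_rows(U1[np.ix_(rows, cols)])
        bad_c = dominated_rows(U2[np.ix_(rows, cols)].T)
        if not bad_r and not bad_c:
            return rows, cols
        rows = [a for i, a in enumerate(rows) if i not in bad_r]
        cols = [b for j, b in enumerate(cols) if j not in bad_c]

# The first 3x3 example above: rows U, M, D; columns l, c, r.
U1 = np.array([[3, 0, 0], [0, 3, 0], [1, 1, 3]])
U2 = np.array([[2, 1, 0], [2, 1, 0], [1, 2, 0]])
print(iesd(U1, U2))    # ([0], [0]): only U and l survive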

3 Nash Equilibrium
Definition 3.1. An action profile a∗ ∈ A is a Nash equilibrium if, for every player i and each action
ai ∈ Ai ,
ui (a∗i , a∗−i ) ≥ ui (ai , a∗−i ).

• A Nash equilibrium is, first of all, an action profile, such that given others’ actions in this profile,
every player’s choice maximizes her expected payoff.

• It can be seen as a “steady state”: if players are told to act as prescribed by a Nash equilibrium,
then no one has an incentive to deviate unilaterally (i.e., to disobey the action prescribed to her).
• It requires that (1) players are rational (for sure) and (2) each player holds correct beliefs about her
opponents' actions.

Proposition 3.1. An action profile a∗ is a Nash equilibrium if and only if, for each player i, a∗i ∈
BRi (a∗−i ).

• Recall the Cournot duopoly model from the micro class: solve the system of equations (a numerical sketch appears after this list)

q1 = BR1 (q2 ) and q2 = BR2 (q1 ).

• A Nash equilibrium may not always exist. If it exists, it may not be unique.
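As a concrete illustration of solving this best-reply system, here is a sketch for a Cournot duopoly with an assumed linear inverse demand P = a − b(q1 + q2) and constant marginal cost c; the functional form and the parameter values are assumptions for illustration only.

# Cournot duopoly with assumed inverse demand P = a - b*(q1 + q2) and marginal cost c.
# The best reply comes from the first-order condition of (P - c)*q_i with respect to q_i.
a, b, c = 10.0, 1.0, 1.0                     # illustrative parameter values

def BR(q_other):
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Solve q1 = BR(q2) and q2 = BR(q1) by iterating the best replies from an arbitrary start.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = BR(q2), BR(q1)
print(q1, q2, (a - c) / (3 * b))             # both quantities converge to (a - c)/(3b) = 3.0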

Theorem 3.1. Let G = ⟨N, (Ai , ui )i∈N ⟩ be a game where, for each player i ∈ N , Ai is convex and
compact, ui : A → R is continuous, and ui is quasi-concave in ai . Then there exists a Nash equilibrium in game G.
Proof. (Sketch.) Define the best reply correspondence BR : A ⇒ A by BR(a) = (BRi (a−i ))i∈N .
By Proposition 3.1, a Nash equilibrium is exactly a fixed point of this correspondence. By the theorem of the maximum, the best reply correspondence is nonempty-valued, compact-valued, and u.h.c.; since each ui is quasi-concave in ai , it is also convex-valued. Kakutani's fixed point theorem then yields a fixed point, and thus a NE exists.

• The theory of correspondences and fixed point theorems can be found in the appendix.

• A set A is convex if, for any a, a′ ∈ A and λ ∈ [0, 1], λa + (1 − λ)a′ ∈ A.


• A set A ⊆ Rn is compact if and only if it is closed and bounded.

• The above only considers pure actions; we can extend the concept to mixed actions.
– A mixed strategy equilibrium is basically a Nash equilibrium where players employ mixed
strategies.

Definition 3.2. A mixed strategy Nash equilibrium (MSNE ) of game G is a mixed strategy profile
α∗ = (α1∗ , . . . , αn∗ ) ∈ ∆(A1 ) × . . . × ∆(An ) such that, for each player i ∈ N and each of her mixed
action αi ∈ ∆(Ai ),
ui(αi∗, α−i∗) ≥ ui(αi, α−i∗).

• Instead of considering the original game G = ⟨N, (Ai , ui )i∈N ⟩, we can consider the so-called mixed
extension of G, Ḡ = ⟨N, (∆(Ai ), ūi )i∈N ⟩, where

ūi(α) = Σ_{a=(aj)j∈N ∈A} ui(a) Π_{j∈N} αj(aj)

for all i and α = (αi)i∈N ∈ ×i∈N ∆(Ai) (a small computational sketch follows below).


– (Characterization of MSNE) A MSNE of G is a NE of its mixed extension Ḡ
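A small sketch of how the mixed-extension payoff ūi(α) can be computed by brute force over pure profiles; the 2×2 payoffs used below are an assumed matching-pennies-style example.

from itertools import product

def u_bar(i, alphas, u):
    """Mixed-extension payoff: sum over pure profiles a of u(i, a) * prod_j alphas[j][a_j].
    alphas: one dict (action -> probability) per player; u(i, a): payoff of player i at profile a."""
    total = 0.0
    for profile in product(*[list(alpha) for alpha in alphas]):
        prob = 1.0
        for j, aj in enumerate(profile):
            prob *= alphas[j][aj]
        total += prob * u(i, profile)
    return total

# Assumed 2x2 example: player 1 gets +1 if the actions match, -1 otherwise; player 2 gets the negative.
payoff1 = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
u = lambda i, a: payoff1[a] if i == 1 else -payoff1[a]
print(u_bar(1, [{"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}], u))   # 0.0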

Example 3.1. (1) Matching pennies; (2) Battle of the sexes.

• A game is finite if each player’s action set is finite.

Theorem 3.2. (Nash, 1950.) Every finite game has a mixed strategy Nash equilibrium.

• Note that, in a finite game, Ai is not convex, but ∆(Ai ) is always convex.

• Let α ∈ ×i∈N ∆(Ai). Then ūi(α) = Σ_{a=(aj)j∈N ∈A} ui(a) Π_{j∈N} αj(aj), which is clearly continuous in α.

Lemma 3.1. A strictly dominated action is never used with positive probability in a mixed strategy
equilibrium.
Exercise 3.1. Prove Lemma 3.1.
Proposition 3.2. All actions played with positive probability in a mixed strategy Nash equilibrium are rationalizable.

• Therefore, to find mixed NE, it is sufficient to focus only on rationalizable actions, i.e., actions
that survive IESD.

Proposition 3.3 (Characterization of MSNE). A mixed action profile α∗ is a MSNE if and only if,
for each player i ∈ N ,

∀ai ∈ suppαi∗, ui(ai, α−i∗) = ui(αi∗, α−i∗),
∀ai ∉ suppαi∗, ui(ai, α−i∗) ≤ ui(αi∗, α−i∗).
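As an application of this characterization, consider the matching pennies game from Example 3.1 (the standard ±1 payoffs are assumed here). In a full-support MSNE each player must be indifferent between her two pure actions, which pins down the opponent's mixing probability; a minimal sketch:

# Matching pennies with assumed payoffs: player 1 gets +1 when the actions match, -1 otherwise,
# and player 2 gets the negative. With full support, Proposition 3.3 forces each player to be
# indifferent between H and T.

def indifference_prob(uH_vs_H, uH_vs_T, uT_vs_H, uT_vs_T):
    """Probability q that the opponent puts on H so that this player is indifferent:
    q*uH_vs_H + (1-q)*uH_vs_T = q*uT_vs_H + (1-q)*uT_vs_T."""
    return (uT_vs_T - uH_vs_T) / ((uH_vs_H - uH_vs_T) - (uT_vs_H - uT_vs_T))

q = indifference_prob(1, -1, -1, 1)    # player 1's payoffs vs player 2's H/T: gives player 2's prob on H
p = indifference_prob(-1, 1, 1, -1)    # player 2's payoffs vs player 1's H/T: gives player 1's prob on H
print(p, q)                            # 0.5 0.5, i.e. the MSNE is (1/2 H + 1/2 T, 1/2 H + 1/2 T)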

Exercise 3.2. Find all the NE, including mixed equilibria, in the following game:

Player 2
a b c d
U 3,2 0,3 2,1 0,0
Player 1 M 0,2 3,1 2,1 0,0
D 1,1 1,2 3,3 3,0

Appendix
A Correspondences and Theorem of the Maximum
Definition A.1. A correspondence g : A ⇒ B is a mapping that associates with each element a ∈ A a
subset g(a) ⊆ B.
Definition A.2. A correspondence g : A ⇒ B is upper hemicontinuous at a ∈ A if g(a) is nonempty
and if, for every sequence {an } ⊆ A such that limn→∞ an = a and for every sequence {bn } such that
bn ∈ g(an ),
lim_{n→∞} bn = b =⇒ b ∈ g(a).

A correspondence is upper hemicontinuous if it is upper hemicontinuous at every point of its domain.


It says that, for every sequence of points in the graph of the correspondence that converges to some
limit, that limit is also in the graph of the correspondence. This means that we don’t “lose points” in
our graph at the limit of a convergent sequence of points in the graph.
Theorem A.1. Suppose A is closed and B is compact. Then a closed-valued correspondence g : A ⇒
B, i.e., g(a) is closed for all a ∈ A, is upper-hemicontinuous if and only if it has a closed graph, i.e.,
the set
Gr(g) = {(a, b) ∈ A × B : b ∈ g(a)}
is closed.
Definition A.3. A correspondence g : A ⇒ B is lower hemicontinuous at a ∈ A if g(a) is nonempty
and if, for every sequence {an} ⊆ A converging to a and every b ∈ g(a), there exist a subsequence
{amk} ⊆ {an} and a sequence {bk} with bk ∈ g(amk) and lim_{k→∞} bk = b. A
correspondence is lower hemicontinuous if it is lower hemicontinuous at every point of its domain.
It says that every point b ∈ g(a) can be approached by points bk ∈ g(amk) taken along (a subsequence of) any sequence an converging to a; that is, no value of the correspondence at a suddenly "appears" at the limit without being approximable from nearby points of the graph.
Theorem A.2. If g : A ⇒ B has an open graph, then it is lower hemicontinuous.
Definition A.4. A correspondence is continuous if it is both upper and lower hemicontinuous.
In Figure 1, the correspondence is l.h.c. but not u.h.c. at x1 and u.h.c. but not l.h.c. at x2.

Figure 1: Lower and upper hemicontinuity

Exercise A.1. Show that if a correspondence is single valued, i.e., it is a function, and u.h.c. or l.h.c.,
then it is continuous.
Definition A.5. A function f : S → R defined on a convex subset S of a real vector space is
quasiconcave if, for all x, y ∈ S and λ ∈ [0, 1],

f(λx + (1 − λ)y) ≥ min{f(x), f(y)},

and strictly quasiconcave if, for all x, y ∈ S with x ≠ y and all λ ∈ (0, 1),

f(λx + (1 − λ)y) > min{f(x), f(y)}.

Proposition A.1. A function f : S → R is quasiconcave if and only if each of its upper contour
sets Uα(f) = {x : f(x) ≥ α}, α ∈ R, is convex.
Theorem A.3 (Berge’s maximum theorem). Let X be a subset of l-dimensional Euclidean space Rl and
let Y be a subset of m-dimensional Euclidean space Rm . Let u : X × Y → R be a continuous real
function and let S : X ⇒ Y be a continuous and compact-valued (i.e., S(x) is compact for all x ∈ X)
correspondence. Then, the correspondence K : X ⇒ Y defined by, for all x ∈ X,

K(x) = arg max_{y∈S(x)} u(x, y) = {y ∈ S(x) : u(x, y) = max_{z∈S(x)} u(x, z)}

is upper hemicontinuous and compact-valued.


If, in addition, the correspondence S is convex-valued and u is quasi-concave in its second argument,
then the correspondence K is also convex-valued.

B Fixed Point Theorems


Theorem B.1 (Brouwer’s fixed point theorem). Suppose that S ⊆ Rn is nonempty, compact, and
convex, and that f : S → S is continuous. Then f has a fixed point, i.e., there exists an s∗ ∈ S such
that f (s∗ ) = s∗ .
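A minimal numerical illustration (the map f(x) = cos x on S = [0, 1] is an assumed example): f is continuous and maps [0, 1] into itself, so a fixed point must exist, and here plain iteration happens to converge to it.

import math

# f(x) = cos(x) is continuous and maps the compact convex set [0, 1] into itself
# (cos([0, 1]) is roughly [0.54, 1]), so Brouwer's theorem guarantees a fixed point.
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x, math.cos(x))   # the two values agree up to numerical precision (about 0.739085)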

Theorem B.2 (Kakutani’s fixed point theorem). Let ψ : S ⇒ S be a correspondence, where S ⊆ Rn


is nonempty, compact, and convex. If ψ is nonempty-valued, convex-valued, and has a closed graph,
then ψ has a fixed point, i.e., there exists an s∗ ∈ S such that s∗ ∈ ψ(s∗ ).
