
Microeconomics 2, Part 1:

Game Theory

Fabio Michelucci

Winter 2022

1 / 166
Basic Info about the Course

I Lectures: Tue 10.30, Thu 12.15, and Fri 10.30 (rooms 3B, Aula Magna, and 2B respectively).

I Additional exercise sessions (with a tutor). Time to be determined.

I Office Hours: Tuesdays, 1 to 2.30 pm. I will be available at those times, but you need to book first by email by the evening before.

I Book: T. Fujiwara-Greve (2015), Non-cooperative Game Theory.

I Evaluation: Written Exam (mix of problem solving and questions).

I To contact me: [email protected]
Table of contents

1. An Introduction to Game Theory
2. Strategic form games
3. Dominance and Minmax theorem
4. Nash equilibrium
5. Mixed strategies
6. Extensive form and backwards induction
7. Bargaining and sequential competition
8. Subgame Perfect equilibrium
9. Repeated games
10. Incomplete information and Bayesian Nash equilibrium
11. Auctions
12. Perfect Bayesian Equilibrium (PBE)
Game Theory - What is it?

I Game theory is the study of interactions involving strategic decision making.
I This is in contrast to decision theory, in which there is no strategic interaction (individual decision making).
I The definition is broad to reflect the fact that the theory can be applied to a very large class of applications.
I In fact, one objective of the course is to endow you with a way of reasoning that you can use to interpret many problems around you.
I We will cover some of the key solution concepts and go through some classic applications to see how to apply the theory and understand its importance. This should give you a good base for understanding current theoretical research.
A Game - What is it?

I A Game is the formal mathematical description of the underlying strategic interaction we want to describe (we refer to those who play the game as players).
I Why the name? Board games (such as chess) are among the early applications (conflict is another).
I Indeed, they provide a natural/direct application for the theory.
I This is because in a game like chess it is clear who the players are, what the rules are, what options are available, and what the final outcome is.
I The theory applies less well when the consequences of players' decisions (actions) are affected in an unpredictable way by chance or by players' skills (as in many sports).
I However, the consequences of players' actions do not have to be deterministic. Uncertainty can be incorporated. For instance, this can be done by adding a player, Nature, who selects - with known probabilities - among a set of events.
I In game theory the outcome of a game is determined by the interaction of players' actions. This means that mathematically it cannot be treated as an individual maximization problem (as when you solved the consumer problem in intro micro courses). Each player's optimization problem must account for the interaction with other optimizing players.
I Notice that the above assumes that players choose actions individually. This is so in non-cooperative game theory (this course).
I Cooperative game theory, instead, allows players to form coalitions and analyzes which coalitions can be stable (and gives predictions based on that).
More Definitions
I We call games with a single round of decision making per player and simultaneous play across players normal/strategic form games.
I We call games with multiple rounds of decision making extensive form games.
I In extensive form games we need to specify what the optimal action of a player is contingent on any past play (history). We call such a contingent plan of action a player's strategy.
I We distinguish between games of complete information and incomplete information (complete roughly means that players know the game being played, the information available to the opponents, the opponents' available actions, and the payoffs from outcomes).
I We also distinguish between games of perfect information and imperfect information (perfect roughly means that players know all players' past decisions at each decision stage).
I Finally, we also distinguish between perfect and imperfect recall.
Games in strategic (or normal) form

I A game in strategic form is described as G = (N, S, u).
I There are three characteristic elements: players, strategies, payoffs.
I N is the set of players (i denotes a specific player belonging to the set, i ∈ N).
I S = S1 × · · · × Sn is the set of strategy profiles s = (si)i∈N, with Si denoting the strategy set of each player i ∈ N.
I u : S → R^N is the combined payoff function, where ui(s) ∈ R is the payoff to player i when strategy profile s is played.
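The triple G = (N, S, u) maps directly onto a small data structure; a minimal sketch in Python (the variable encoding is mine, not from the book), using the bimatrix game of Ex 1 below:

```python
from itertools import product

# The Prisoner's Dilemma of Ex 1, encoded as G = (N, S, u):
# r = row player, c = column player.
N = ["r", "c"]
S = {"r": ["U", "D"], "c": ["L", "R"]}

# u maps each strategy profile s = (s_r, s_c) to the payoff vector (u_r, u_c).
u = {("U", "L"): (5, 5), ("U", "R"): (0, 6),
     ("D", "L"): (6, 0), ("D", "R"): (1, 1)}

# The set of strategy profiles is the Cartesian product of the strategy sets.
profiles = list(product(S["r"], S["c"]))
assert set(profiles) == set(u)  # u is defined on every profile in S
```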
Some Examples

Ex 1: finite game with two players, i = r, c (bimatrix game). Note that r stands for the row player, c for the column player.

    L     R
U  5, 5  0, 6
D  6, 0  1, 1

N = {r, c}, Sr = {U, D}, Sc = {L, R}, S = {(U, L), (U, R), (D, L), (D, R)}, u:
ur(U, L) = uc(U, L) = 5, ur(D, L) = 6, uc(D, L) = 0, ur(U, R) = 0, uc(U, R) = 6, ur(D, R) = uc(D, R) = 1.

Ex 2: finite game with three players.

W:
    L         R
U  2, 1, −1  −1, 3, 0
D  0, 4, 2   1, −2, −1

E:
    L        R
U  1, 3, 2  3, 1, 3
D  2, 0, 4  2, −1, 2

Ex 3: A duopoly market where firms can choose any real number as their quantity choice (an infinite game).
Prisoner’s Dilemma

C D
C 5, 5 0, 6
D 6, 0 1, 1

I The game above is the same as in Ex 1 before: it is the well-known Prisoner's Dilemma.
I Why has the game become so popular? Because it illustrates a very worrying coordination failure.
I Notice that for both the row and the column player, defect (D) is optimal regardless of the strategy used by the other player.
I Hence, we should expect (D, D) as the outcome of this game. The dilemma comes from observing that by coordinating on (C, C) both players would be better off!
I This inefficiency is particularly striking as the game assumes both complete and perfect information.
Dominance - Definitions
I We begin formalizing how to determine predictions for a game's play, starting from the idea of applying dominance that we saw in the PD.

I We say that for player i, si is strictly dominated by si′ if
ui(si, s−i) < ui(si′, s−i) for any s−i ∈ S−i.
In words, strategy si′ does strictly better than si against any strategy of the other players (as D does in the PD).

I We say that for player i, si is weakly dominated by si′ if
ui(si, s−i) ≤ ui(si′, s−i) for any s−i ∈ S−i, and ui(si, s−i) < ui(si′, s−i) for some s−i ∈ S−i.
In words, strategy si′ does at least as well as si against every strategy of the other players, and against some strategy it does strictly better.
Prisoners’ Dilemma:
    C     D
C  5, 5  0, 6
D  6, 0  1, 1

I Compare C versus D. Action D strictly dominates C for both players.
I If a player has a strategy that strictly dominates all other strategies, we call it a dominant strategy.
I Dominance is a compelling notion for a rational player.
I In the PD, both players have D as a (strictly) dominant strategy. Hence, we could predict (D, D).
Game G:
    L     R
U  3, 0  0, −4
D  2, 4  −1, 8

I Opponents can also use dominance to “deduce” optimal behavior.
I In game G, r has U as a (strictly) dominant strategy, hence player c can anticipate that if r is rational he/she should play U. But then c should play L. So our prediction is (U, L).
I The reasoning above exploits the fact that all players are assumed to know the structure of the game and the rationality of all players.
I However, most games have no dominant strategies. So, what
next?
I Can we further exploit the idea that players should avoid
unattractive (dominated) strategies?

G′:
    L      C     R
U  3, 0   0, −5  0, −4
M  1, −1  3, 3   −2, 4
D  2, 4   4, 1   −1, 8

D strictly dominates M, and similarly R strictly dominates C.

If we simultaneously delete those, G′ reduces to G on the previous page.
Iterated elimination of strictly dominated strategies

Can we generalize this?

    L     C     R
U  3, 2  0, 4  0, 1
M  2, 0  4, 1  1, 2
D  1, 1  1, 2  2, 1

⇒ (C strictly dominates L)

    C     R
U  0, 4  0, 1
M  4, 1  1, 2
D  1, 2  2, 1

⇒ (M strictly dominates U)

    C     R
M  4, 1  1, 2
D  1, 2  2, 1

This procedure is called iterated elimination of strictly dominated strategies, or IESD.
It assumes common knowledge of the game and of players' rationality. Informally, this means that all players know the structure of the game and the rationality of all players, that all players know that all players know this, and so on (infinitely many times).
It is important to remark that:

I The order or speed of elimination is irrelevant.
I In a finite game, the set of surviving strategies is not empty (but in general it is not a singleton).
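The procedure is easy to mechanize for two-player finite games; a sketch (the encoding is mine, not from the book), applied to the 3×3 game above:

```python
def iesd(rows, cols, U):
    """Iterated elimination of strictly dominated pure strategies in a
    bimatrix game, where U[(r, c)] = (row payoff, column payoff)."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            # r is strictly dominated if some other row does strictly
            # better against every surviving column
            if any(all(U[(r2, c)][0] > U[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in list(cols):
            if any(all(U[(r, c2)][1] > U[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# The 3x3 game above: C strictly dominates L, then M strictly dominates U.
U = {("U","L"): (3,2), ("U","C"): (0,4), ("U","R"): (0,1),
     ("M","L"): (2,0), ("M","C"): (4,1), ("M","R"): (1,2),
     ("D","L"): (1,1), ("D","C"): (1,2), ("D","R"): (2,1)}
survivors = iesd(["U","M","D"], ["L","C","R"], U)
# survivors == (["M", "D"], ["C", "R"])
```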
Iterated elimination of weakly dominated strategies

I Iterated elimination of weakly dominated strategies (IEWD) is the analog of IESD, applying weak rather than strict dominance.
I The difference has important implications to be careful about. In fact, for IEWD:
I the order and speed of elimination matter;
I it may delete strategies that are part of a Nash eq.
I A game is dominance solvable if IEWD yields a unique strategy profile.
Verify that the order of elimination and the speed matter in the game
below.

    L     R
U  2, 2  0, 0
M  6, 4  4, 4
D  0, 0  2, 2
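As a worked check (with my own encoding of the game above), the two elimination orders below reach different predictions, (M, R) versus (M, L):

```python
def weakly_dominated(s, own_strats, opp_strats, pay):
    """True if pure strategy s is weakly dominated in the current game;
    pay(s, t) is the payoff to s's owner when the opponent plays t."""
    return any(all(pay(s2, t) >= pay(s, t) for t in opp_strats) and
               any(pay(s2, t) > pay(s, t) for t in opp_strats)
               for s2 in own_strats if s2 != s)

# Row and column payoffs, keyed by (row strategy, column strategy).
R = {("U","L"): 2, ("U","R"): 0, ("M","L"): 6, ("M","R"): 4,
     ("D","L"): 0, ("D","R"): 2}
C = {("U","L"): 2, ("U","R"): 0, ("M","L"): 4, ("M","R"): 4,
     ("D","L"): 0, ("D","R"): 2}
row = lambda s, t: R[(s, t)]  # row's payoff; opponent strategy t is a column
col = lambda s, t: C[(t, s)]  # column's payoff; opponent strategy t is a row

# Order 1: delete U (dominated by M); L then becomes weakly dominated by R,
# and the game solves to (M, R) with payoffs (4, 4).
assert weakly_dominated("U", ["U","M","D"], ["L","R"], row)
assert weakly_dominated("L", ["L","R"], ["M","D"], col)

# Order 2: delete D (also dominated by M); R then becomes weakly dominated
# by L, and the game solves to (M, L) with payoffs (6, 4).
assert weakly_dominated("D", ["U","M","D"], ["L","R"], row)
assert weakly_dominated("R", ["L","R"], ["U","M"], col)
```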
Some Applications for these concepts

I IEWD (or IESD) can be surprisingly powerful; to see this, consider the applications below.
I Guess the average: each player guesses what 2/3 of the average of all guesses will be. The player(s) with the closest guess to that value win a prize P (if there are several winners, P is evenly divided). Guesses are restricted to the real numbers between 0 and 100.
I Can you see what the possible solutions are if you apply IEWD?
I Experimental economists have tested this prediction, and it does not hold in the lab.
I The above shows how the requirement of common knowledge of rationality may be quite demanding.

I Second-price auction with private values.

I Cournot duopoly.
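For guess-the-average, a sketch of the elimination logic (my own discretization of the argument): guesses above (2/3)·100 cannot be optimal, and each further round of elimination shrinks the admissible interval by a factor of 2/3, so only 0 survives:

```python
# Each elimination round shrinks the upper bound on undominated guesses
# by a factor of 2/3: after round 1 the bound is (2/3)*100, and so on.
bound, rounds = 100.0, 0
while bound > 1e-6:
    bound *= 2 / 3
    rounds += 1

assert bound < 1e-6      # the surviving guesses collapse towards 0
assert 40 < rounds < 50  # convergence is geometric (about 46 rounds here)
```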
Zero sum games and Minmax Theorem

I We would like to have a way of reasoning also when dominance does not have much bite. Let's take a first step and, in doing so, restrict attention to zero sum games (such as board games or conflicts).
I A two player zero sum game is a normal form game such that for any (s1, s2) ∈ S, u1(s1, s2) = −u2(s1, s2).
I Games where one player wins and the other loses can be expressed as zero sum games (in fact we may call them constant sum games).
I In these games a rational player can reason that a rational opponent will want to minimize her payoff given her chosen action.
I We call the reservation payoff the payoff from a strategy given that the opponent is minimizing such payoff. It shows the minimum payoff a player can guarantee herself regardless of what the other player does.
I If we assume that a player chooses an action to maximize her reservation payoff, we have: max_{s1∈S1} min_{s2∈S2} u1(s1, s2).
I The above is based on the principle of maximizing the minimum gains (the maxmin criterion). We call the corresponding value the maxmin value of player 1.
I If we do the same for the opponent (player 2), we get max_{s2∈S2} min_{s1∈S1} u2(s1, s2). In zero sum games this is the same as min_{s2∈S2} max_{s1∈S1} u1(s1, s2).
I We call this the minmax value of player 1.
I Are the maxmin behaviors of the players consistent with each other? If not, our prediction for how the game is played might not be consistent.
I In the game above, the maxmin strategy profile is (z, Y), and indeed for that strategy profile players get a payoff corresponding to their maxmin value. Hence, in this case using the maxmin strategy leads to a stable outcome for rational players.
I If we restrict to pure strategies we may not get the required consistency even for zero sum games. Below is the Matching Pennies game (MP):

    l       r
L  1, −1  −1, 1
R  −1, 1  1, −1

I In fact, we get that the maxmin value for each player is −1, but there is no strategy profile for which the outcome (−1, −1) can be achieved.
I Minmax Theorem (von Neumann, 1928): it provides conditions for the existence of the consistency of minmax and maxmin in zero sum games. It is considered the starting point of game theory. (If one allows for mixed strategies, such conditions are met.)
I Clearly, one would like to guarantee the existence of a stable prediction beyond zero sum games. This comes next.
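The inconsistency in MP can be verified directly (the encoding is mine):

```python
# Pure-strategy maxmin in Matching Pennies; u1[(s1, s2)] is player 1's
# payoff, and player 2's payoff is -u1 (zero sum).
u1 = {("L","l"): 1, ("L","r"): -1, ("R","l"): -1, ("R","r"): 1}

maxmin1 = max(min(u1[(s1, s2)] for s2 in ("l","r")) for s1 in ("L","R"))
maxmin2 = max(min(-u1[(s1, s2)] for s1 in ("L","R")) for s2 in ("l","r"))

# Both maxmin values equal -1, but payoffs always sum to zero, so no
# profile delivers (-1, -1): pure maxmin play is not mutually consistent.
feasible = {(u1[s], -u1[s]) for s in u1}
assert maxmin1 == maxmin2 == -1
assert (-1, -1) not in feasible
```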
Nash equilibrium

I Suppose a rational player i knows that his opponents will play s−i.
I Then he should play a strategy si∗ that maximizes ui(·, s−i).
I We say that a strategy si∗ is a best reply to s−i if ui(si∗, s−i) ≥ ui(si, s−i) for any si.
I In a finite game, the set of best replies is never empty (but the best reply may not be unique). We denote the set of player i's best replies to s−i by BRi(s−i).
I A strategy profile s∗ is a Nash equilibrium if, for any player i, si∗ ∈ BRi(s−i∗).
I In words, everybody maximizes utility given what everyone else is doing.
Equilibria in Pure and Mixed strategies and Existence

I Nash equilibria can be in pure or mixed strategies.
I In the first case the best reply is deterministic, while in the second it is a probabilistic assignment (formal def. later).

Theorem (Nash, 1951): Every finite game G has (at least) one Nash equilibrium (possibly in mixed strategies).

It generalizes the existence of a stable prediction far beyond Von Neumann's result on zero sum games. The predictions of the two concepts coincide when we restrict to zero sum games.

You can find the proof in the book; we will not review it in class (at least for now).
Searching equilibria in pure strategies
For finite games, equilibria in pure strategies are found by inspection.

Ex 1: finite games with two players (bimatrix games).

BoS:
    O     F
O  2, 1  0, 0
F  0, 0  1, 2

PD:
    C     D
C  5, 5  0, 6
D  6, 0  1, 1

CG:
    G            T
G  −100, −100  10, −10
T  −10, 10     −1, −1

In BoS and CG there are 2 Nash eq. in pure strategies, while in PD the eq. is unique. CG is symmetric but its equilibria are asymmetric.

Ex 2: finite game with three players.

W:
    L         R
T  2, 1, −1  −1, 3, 0
B  0, 4, 2   1, −2, −1

E:
    L        R
T  1, 3, 2  3, 1, 3
B  2, 0, 4  2, −1, 2
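The inspection can be automated for small bimatrix games; a minimal brute-force sketch (the encoding is mine), applied to the three two-player games above:

```python
from itertools import product

def pure_nash(rows, cols, U):
    """All pure-strategy Nash equilibria of a bimatrix game,
    where U[(r, c)] = (row payoff, column payoff)."""
    eqs = []
    for r, c in product(rows, cols):
        # (r, c) is a NE if neither player has a profitable deviation
        best_r = all(U[(r, c)][0] >= U[(r2, c)][0] for r2 in rows)
        best_c = all(U[(r, c)][1] >= U[(r, c2)][1] for c2 in cols)
        if best_r and best_c:
            eqs.append((r, c))
    return eqs

BoS = {("O","O"): (2,1), ("O","F"): (0,0), ("F","O"): (0,0), ("F","F"): (1,2)}
PD  = {("C","C"): (5,5), ("C","D"): (0,6), ("D","C"): (6,0), ("D","D"): (1,1)}
CG  = {("G","G"): (-100,-100), ("G","T"): (10,-10),
       ("T","G"): (-10,10),    ("T","T"): (-1,-1)}

# BoS and CG each have two pure equilibria; PD has a unique one.
assert pure_nash(["O","F"], ["O","F"], BoS) == [("O","O"), ("F","F")]
assert pure_nash(["C","D"], ["C","D"], PD)  == [("D","D")]
assert pure_nash(["G","T"], ["G","T"], CG)  == [("G","T"), ("T","G")]
```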
NE and Dominance

Result 1: For any game G, if a strategy profile s∗ is a NE, then it is not eliminated by iterated elimination of strictly dominated strategies.

Suppose it is not the case, and focus on the first NE strategy that is eliminated, say si∗. The existence of a strategy that strictly dominates si∗ contradicts s∗ being a NE.

Result 2: For any game G with finite strategy sets, if s∗ is the only strategy profile that survives iterated elimination of strictly dominated strategies, then s∗ is the unique NE of the game.

Verify this as an exercise by yourself and then check the proof in the book.
Back to PD, and Public good application
I Let me switch from theory to applications to review and elaborate more on the important insights from the PD.
I Recall that in the PD there is a unique NE despite the presence of an outcome that is Pareto superior.
I Consider now the following stylized public good game.
I Two of you have 1000 EUR each. You have to choose the amount x to put in a magic common pot. I say magic because any amount x is converted into 1.5x. The final amount in the public pot is then shared evenly. The amount you do not put in the pot (1000 − x) is privately consumed. Write a formal description of the normal form game at home.
I Clearly, the socially optimal thing to do is for both of you to put all the money in the pot. This way each of you gets 1500. However, from an individual perspective it is optimal to contribute 0 whatever the other is contributing.
I We call this behavior free riding. This leads to the unique Nash eq. (in fact the eq. is in (strictly) dominant strategies) of each contributing 0: both of you end up with the initial 1000.
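The free-riding logic can be verified directly from the payoff function (a sketch; the function encodes the game described above):

```python
# Payoffs in the stylized public good game: endowment 1000, own
# contribution x_own; the pot (x_own + x_other) is scaled by 1.5 and split.
def payoff(x_own, x_other):
    return (1000 - x_own) + 1.5 * (x_own + x_other) / 2

# Contributing 0 strictly dominates any positive contribution:
# each own euro contributed returns only 0.75 euros.
for x_other in (0, 500, 1000):
    for x_own in (1, 500, 1000):
        assert payoff(0, x_other) > payoff(x_own, x_other)

# Yet (1000, 1000) Pareto-dominates the equilibrium outcome (0, 0).
assert payoff(1000, 1000) == 1500
assert payoff(0, 0) == 1000
```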
I Many important applications fit the ideas from this simple game. In fact, even staying home during a quarantine can be viewed as contributing to a public good.
I Theory suggests that without external enforcement we might expect people not to cooperate (going out onto the street).
I Let me then twist the game by assuming that I can credibly commit to destroy the amount you collectively put in the pot if it differs from 2000. Then both contributing 1000 is a Nash eq. For instance, think of the credible threat of making the quarantine so long that everybody would suffer. Notice that in eq. the external enforcer never carries out the threat.
I Another modification is that each of you commits to put in the public pot an amount given by the lower of the two contributions. In this case not only is (1000, 1000) an equilibrium profile, but choosing 1000 is a (weakly) dominant strategy.
I The point I want to make with this digression is that game theory not only helps us understand behavior in strategic situations but also helps us devise solutions.
Interpretations of NE

So far we have looked at static one shot games.

I Is it reasonable to assume players will be able to select the NE?
I Also, what if the NE is not unique? See BoS or CG.
I How can players have correct beliefs about opponents' play?

I NE is a self-enforcing agreement. This means that if players meet and agree on a NE, by construction none of them has an incentive to (unilaterally) deviate at the moment of playing the game. This is a first justification.

I While we have formulated the game as static, we may think of it as the result of an adaptation process in which players learn how to play the game and about opponents' strategies. Thus, NE can be seen as the limit of an iterated process of best responses to opponents' previous play.
Static duopoly (Cournot market structure)
General assumptions: homogeneous product, price-taking consumers. Cournot assumption: producers set output levels simultaneously.
Two firms. Assume (inverse) linear demand (with slope −1 for simplicity) p(q) = A − q, where q = q1 + q2, and linear costs ci qi, with A > c2 ≥ c1 ≥ 0.

Notice that unlike the previous games this one is not finite, as the strategy space is given by the positive real numbers.

I A Cournot equilibrium is s.t. each firm maximizes profits (given the opponent's strategy) and price clears demand. We call such values (p∗, q1∗, q2∗), with (q1∗, q2∗) being the NE strategy profile.
I ui(qi, qj) = (A − qi − qj)qi − ci qi.
I ∂ui(qi, qj)/∂qi = A − 2qi − qj − ci.
I From the above we can already see that this is a game with strategic substitutability.
I The best reply function is BRi(qj) = (A − ci − qj)/2 (and 0 if qj > A − ci).
Best replies and eq.
The intersection of the best replies determines the equilibrium quantities (fixed point).

The graph of the best replies is also useful:

- to discuss the idea of NE as the limit of iterated best replies;
- to practice how to apply iterated dominance.
I The equilibrium quantity is qi∗ = (A − 2ci + cj)/3, so p∗ = (A + c1 + c2)/3.
I Equilibrium profits are ui(qi∗, qj∗) = (A − 2ci + cj)²/9.
I Notice that higher costs imply lower production; that is, c2 ≥ c1 implies q2∗ ≤ q1∗.
I Take now ci = cj = c, so that qi∗ = (A − c)/3.
I Notice that if firms could collude, deciding output together, we would have qi∗ = (A − c)/4, which is the monopoly solution.
I Decreasing quantities so that the total equals the monopoly quantity is a Pareto improvement for the firms, but it is not a stable outcome (not a NE, as in the PD!).
I Intuitively, each firm when increasing quantity does not account for the negative externality (via the price) on the other firm.
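The adaptation-process interpretation of NE can be illustrated here: starting from arbitrary quantities, iterated best replies converge to the Cournot equilibrium. A sketch with illustrative parameters (A = 10, c = 1, my choice):

```python
# Iterated best replies in the symmetric Cournot duopoly: the process
# converges to q* = (A - c)/3 for each firm, illustrating NE as the
# limit of iterated best responses.
A, c = 10.0, 1.0

def br(q_other):
    return max((A - c - q_other) / 2, 0.0)

q1, q2 = 0.0, 0.0            # arbitrary starting point
for _ in range(200):
    q1, q2 = br(q2), br(q1)  # simultaneous best-reply updating

q_star = (A - c) / 3
assert abs(q1 - q_star) < 1e-9 and abs(q2 - q_star) < 1e-9
```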
Cournot oligopoly with different costs

Same assumptions, but now there are n sellers with linear costs ci qi.
The best reply function is

BRi(q−i) = (A − ci − Σ_{k≠i} qk)/2.

Rewrite this as A − 2BRi(q−i) − Σ_{k≠i} qk = ci, or

A − BRi(q−i) − Q = ci.

Summing across the n FOCs, we find nA − Q − nQ = Σi ci, so

Q∗ = n(A − c̄)/(n + 1),  p∗ = (A + n c̄)/(n + 1),  qi∗ = [(A − ci) + n(c̄ − ci)]/(n + 1).

Under constant unit costs, only c̄ matters for Q∗ and p∗ (not the distribution of the ci).
Bertrand market structure
Bertrand assumption: producers set prices simultaneously; consumers buy at the lowest price (if prices are identical, demand is split equally).
Two firms. Assume linear demand q = A − p and (identical) linear costs cqi, with A > c ≥ 0. Prices are chosen from the set of real numbers.
Notice that demand, and hence the payoff functions, are discontinuous.

I A Bertrand equilibrium is s.t. firms maximize profits (given the other's strategy) and price clears demand: (p1b, p2b, q1b, q2b), with (p1b = c, p2b = c) being the NE strategy profile.
I The unique equilibrium is pib = c for i = 1, 2.
I Assume pi < c for some i. Then firm i earns a negative profit if it sells, and deviating to pi = c guarantees at least zero.
I Assume p1 ≠ p2, for instance p1 < p2, so that c ≤ p1 < p2. If p1 > c, player 2 deviates to p2′ ∈ (c, p1). If p1 = c, player 1 deviates to p1′ ∈ (p1, p2).
I Assume p1 = p2. If p1 = p2 > c, firm i can profitably deviate to pi − ε.
I The only case left is p1 = p2 = c. It is easy to check that it is a NE.

What if c1 ≠ c2?
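The undercutting argument can be checked numerically on a discrete price grid; a sketch with illustrative parameters (A = 10, c = 2, both my choice):

```python
# Bertrand duopoly profits on a price grid of 0.01 ticks.
A, c, tick = 10.0, 2.0, 0.01

def profit(p_own, p_other):
    # the lower-priced firm serves the whole demand; ties split it
    if p_own > p_other:
        return 0.0
    share = 0.5 if p_own == p_other else 1.0
    return (p_own - c) * max(A - p_own, 0.0) * share

# From any common price above cost, undercutting by one tick is profitable.
for p in (3.0, 5.0, 8.0):
    assert profit(p - tick, p) > profit(p, p)

# At p1 = p2 = c no deviation gains: any other price earns at most 0.
grid = [round(k * tick, 2) for k in range(0, 1001)]  # prices 0.00 .. 10.00
assert all(profit(p, c) <= profit(c, c) for p in grid)
```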
Differentiated products

I Most industries produce similar but not identical products.
I That means that a consumer may not necessarily purchase from the cheaper firm.
I We introduce two models of differentiated products.
I In the first, the reason for such a preference is exogenous to the model and reflected in a different impact of the prices on the demanded quantity of each good. In the second, the preference is attributed by the model to a location preference.
Two differentiated products

General assumptions: costless production, price-taking consumers.

I The (inverse) demand structure is pi = α − βqi − γqj. Note β > γ > 0: the own-price effect dominates the cross-price effect.
I The demand structure is qi = a − bpi + cpj, where a = [α(β − γ)]/(β² − γ²), b = β/(β² − γ²), c = γ/(β² − γ²).
I The brands' measure of differentiation δ = γ²/β² goes from zero (high differentiation) towards 1 (almost homogeneous).
Cournot market structure

Cournot assumption: producers set output levels simultaneously.

I A Cournot equilibrium is s.t. firms maximize profits (given the other's strategy) and prices clear demand.
I The best reply function is BRi(qj) = (α − γqj)/(2β).
I In equilibrium: qi∗ = α/(2β + γ), pi∗ = (αβ)/(2β + γ), πi∗ = β[qi∗]².
I Profits increase with differentiation (the firms' monopoly power goes up).
Bertrand market structure

Bertrand assumption: producers set prices simultaneously.

A Bertrand equilibrium is s.t. firms maximize profits (given the other's strategy) and prices clear demand: (p1b, p2b, q1b, q2b).

I The best reply function is BRi(pj) = (a + cpj)/(2b).
I The best reply is increasing: prices are strategic complements, while quantities in the Cournot game were strategic substitutes.
I In equilibrium: pib = a/(2b − c) = [α(β − γ)]/(2β − γ) and qib = bpib.
I The equilibrium profit πib = b[pib]² = α²β(β − γ)/[(2β − γ)²(β + γ)] increases as γ ↓ 0.
Cournot vs. Bertrand

In equilibrium, the difference in price is pi∗ − pib = α/[4(β/γ)² − 1].

1) The market price is higher under Cournot: pi∗ > pib.
2) Higher differentiation implies a smaller difference: ∂(pi∗ − pib)/∂γ > 0.
3) There is no difference in prices under independence: limγ↓0 (pi∗ − pib) = 0.

Under Cournot, each firm is aware that its unilateral output expansion directly reduces the market price.
Under Bertrand, when considering a decrease in price the firm takes the price in the other market as given, so that the output expansion is partially offset by the rival. Hence more output is produced.
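These three claims can be checked numerically; a sketch with illustrative parameter values (α = 10, β = 2, γ = 1, my choice):

```python
alpha, beta, gamma = 10.0, 2.0, 1.0

p_cournot = alpha * beta / (2 * beta + gamma)             # Cournot price
p_bertrand = alpha * (beta - gamma) / (2 * beta - gamma)  # Bertrand price
diff = alpha / (4 * (beta / gamma) ** 2 - 1)              # claimed difference

assert p_cournot > p_bertrand                        # claim 1
assert abs((p_cournot - p_bertrand) - diff) < 1e-12  # the formula matches
# claim 3: as gamma goes to 0 the difference vanishes
assert alpha / (4 * (beta / 1e-6) ** 2 - 1) < 1e-9
```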
Location models: Hotelling
Let me first look at a model where locations are exogenously set. With such locations we can model product differentiation: the further apart, the more the products are differentiated.

I Products are homogeneous (except for location).
I Consumers are uniformly distributed on [0, 1]. We call x the realised position of a customer and t the transportation cost per unit of distance.
I Two brands (A and B) are located at the endpoints. Zero costs. Bertrand competition.
I We can identify a threshold customer x̂ who is indifferent between A and B when pA + t x̂ = pB + t(1 − x̂).
I Hence, demand for A is x̂ = (pB − pA)/(2t) + 1/2.
I Best replies are pi = (pj + t)/2, so equilibrium prices are pAh = pBh = t.
I The brands share the market equally, each making a profit π = t/2 that is increasing in t. (Profits increase with differentiation.)
Instead, let me now remove prices from the model (I set them exogenously to be the same) to focus on the endogenous choice of location, li. This is a different model from before.

Claim: The unique NE strategy profile is (1/2, 1/2), that is, both place in the middle.

Sketch of the arguments:
I To verify it is a NE, notice that if you stick to it you get half of the market. If you deviate you get less than half.
I Assume there is another NE with l1 ≠ l2. Then you gain market share by getting closer to the other firm. Hence, in any NE li = lj.
I Assume there is another NE with li = lj ≠ 1/2. Then you can move marginally closer to the middle and get more than half of the market share. Hence, the NE above is unique.
I Notice that the solution is not socially optimal. To minimise transportation costs the two firms should position at (1/4, 3/4).
I In terms of differentiation of products this means too little differentiation.
I For a more fun illustration: Ted Talk.
I However, we should allow firms to choose first location and then prices (putting the 2 models together).
I If we do so, whether we get too much or too little differentiation depends in general on the assumptions on transportation costs (linear as here, quadratic, etc.).
Mixed extension

Consider a penalty kick, or a tennis serve. We can model it as MP:

    l       r
L  1, −1  −1, 1
R  −1, 1  1, −1

Players have opposing interests: it helps them to be unpredictable. Thus, we introduce randomized moves.

Recall that in MP there is no NE in pure strategies (we showed this for maxmin, and maxmin and NE coincide for two-player zero sum games).
Defining mixed strategies
A mixed strategy is a randomized strategy for i. It has the same mathematical description as a conjecture of j about i's play.

Such a conjecture can be expressed as a probability distribution on Si.

Formally, we define a function σi s.t. σi(si) ∈ [0, 1] for all si and Σ_{si∈Si} σi(si) = 1.

The set of probability distributions on Si is denoted by ∆(Si); it is the set of mixed strategies for player i.

A pure strategy is a special case of a mixed strategy for which the probability distribution is degenerate (it attaches prob. 1 to the pure strategy).

We define supp(σi) ≡ {si ∈ Si | σi(si) > 0}.

We call a completely mixed strategy one that attaches strictly positive probability to each pure strategy.
To define the mixed extension of G, we first introduce the concept of expected utility, which is the extension of utility under uncertainty that we use to build the extended game.

I Eui(σ) = Σ_{s∈S} σ1(s1) · · · σn(sn) ui(s1, ..., sn).
I We define the extended version G′ of G by G′ = ({1, ..., n}, ∆(S1), ..., ∆(Sn), Eu1, ..., Eun).
I We are now ready to define best replies and then NE.
I BRi(σ−i) = {σi ∈ ∆(Si) | Eui(σi, σ−i) ≥ Eui(x, σ−i), ∀x ∈ ∆(Si)}.
I A NE σ∗ is s.t. σi∗ ∈ BRi(σ−i∗), ∀i.
Useful implications/comments

I Result: If players play a NE σ∗, all strategies in the support of σi∗ give the same expected utility to i.
Why? If not, i would put more weight on a strategy that gives a higher expected utility.

I Notice that the above provides a fast heuristic to build an equilibrium in mixed strategies: make the opponent indifferent.
I A strategy that is not strictly dominated by any pure strategy may still be dominated by a mixed strategy.
I The above helps to simplify the underlying game before constructing the set of equilibria.
Equilibria in mixed strategies
Back to the penalty kick (that is, MP).

    l       r
L  1, −1  −1, 1
R  −1, 1  1, −1

In a game with 2 strategies per player, a mixed strategy is identified by one probability per player. We denote by p (resp. q) the probability that the goalkeeper goes right (resp. the penalty kicker shoots right).

Then ug(p, q) is the expected utility of the goalkeeper given that the randomisation of the penalty kicker is identified by q:
ug(p, q) = pq − p(1 − q) − q(1 − p) + (1 − p)(1 − q) = p(4q − 2) + 1 − 2q.

Notice that if q > 1/2, the optimal p is 1 (if the penalty kicker goes more often to the right, it becomes optimal to go right); if q < 1/2, the optimal p is 0; if q = 1/2, any p is optimal.

I omit the best reply for the penalty kicker, as the analysis is similar.
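The best reply derivation above can be written out directly; a small sketch (the function names are mine):

```python
# The goalkeeper's expected utility and best reply (p = prob. the keeper
# goes right, q = prob. the kicker shoots right).
def u_g(p, q):
    return p*q - p*(1 - q) - q*(1 - p) + (1 - p)*(1 - q)

def br_keeper(q):
    # u_g(p, q) = p(4q - 2) + 1 - 2q: the sign of (4q - 2) pins down p
    if q > 0.5:
        return 1.0
    if q < 0.5:
        return 0.0
    return None  # q = 1/2: any p in [0, 1] is optimal

assert br_keeper(0.8) == 1.0 and br_keeper(0.2) == 0.0
# At q = 1/2 the keeper is indifferent: u_g does not depend on p.
assert u_g(0.0, 0.5) == u_g(1.0, 0.5) == u_g(0.3, 0.5)
```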
I The unique equilibrium is σg∗ = (1/2)R ⊕ (1/2)L and σk∗ = (1/2)r ⊕ (1/2)l.
I The equilibrium payoff is 0 (not surprising, as the eq. strategies are symmetric and it is a zero sum game).
I It is given by the intersection of the best reply correspondences above (fixed point).
I Recall that the mixed extension of a finite game has at least one equilibrium.
I Note that characterising the set of all NE in the game above was easy, as it was a 2-player, 2-pure-strategy, symmetric game.
I In general, things might be more cumbersome.
Rock, Paper, Scissors game

I There is no NE in pure strategies. Suppose not; then at least one player is not winning. But then that player has a profitable deviation (the one that yields the win).
I The above also excludes the possibility that in this game one player uses a pure strategy while the other mixes over 2 or 3 pure strategies.
I Then we need to consider a player mixing over 2 pure strategies and over 3.
I Let's start from 2, and focus on player 2 mixing over R (with prob. p) and S (with prob. 1 − p).
I Eu1(R, p) = p · 0 + (1 − p) · 1 = 1 − p,
Eu1(P, p) = p · 1 + (1 − p) · (−1) = −1 + 2p,
Eu1(S, p) = p · (−1) + (1 − p) · 0 = −p.
I You can see (from the above and from the graph) that S is dominated.
I Unless p = 2/3, a pure strategy is optimal for player 1. Since we argued we cannot have an eq. based on pure strategies, we cannot have p ≠ 2/3.
I Assume then p = 2/3, so that player 1 mixes between P and R. But then for player 2, R is dominated by P, which contradicts R being in the support of the strategy used by player 2.
I The above excludes an equilibrium based on player 2 mixing over R and S. We can use similar arguments to exclude any other partial mixing.
I It is easy to see that there is a unique equilibrium where both
players attach probability 1/3 to each pure strategy (check how to
derive it).
I To see it is a NE, notice that if the other player follows this
strategy, each pure strategy you have gives indeed the same
expected utility, which justifies mixing as a best reply.
I To see there are no other NE, assume any different mixing. If this
is the case, there always exists one pure strategy that gives a
higher expected utility for the opponent, yielding a contradiction.
I Notice that it took more time to derive the result, but the result
is not surprising given the previous penalty kick game (MP), of
which this is essentially an extension to 3 actions.

55 / 166
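The indifference condition behind the uniform equilibrium can be verified directly. A small sketch (win = 1, loss = −1, tie = 0, matching the expected-utility computations above):

```python
# Rock-Paper-Scissors: row player's payoffs (win=1, lose=-1, tie=0).
U1 = {("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
      ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
      ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0}

def eu1(pure, mix):
    """Expected payoff of a pure strategy against an opponent mix,
    given as a dict {action: probability}."""
    return sum(prob * U1[(pure, a)] for a, prob in mix.items())

uniform = {"R": 1/3, "P": 1/3, "S": 1/3}
# Against the uniform mix every pure strategy earns 0 -- the
# indifference condition supporting the unique mixed NE.
print([eu1(a, uniform) for a in "RPS"])  # all 0
```

Against any non-uniform mix, some pure strategy earns strictly more than 0, which is the contradiction used above to rule out other equilibria.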
Guide to find NE in mixed strategies

1. Guess a support for each player; it helps to first eliminate all
pure strategies strictly dominated by other pure or mixed
strategies.

2. Try to find an equilibrium over this smaller game where all


strategies have positive probability; set up the right systems of
linear equations, including the constraint that probabilities must
add to 1;

3. If any of the linear systems has no solution or if the solution


makes no sense (negative probabilities), you guessed a wrong
support;

4. If you pass test 3), check that no pure strategy outside of the
support yields a higher utility.

56 / 166
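As an illustration of steps 1-2 of the guide, the indifference conditions for a full-support equilibrium of a 2×2 game can be solved in closed form. The sketch below uses the first example game on the next slide (payoffs (2,1), (0,0) / (0,0), (1,2)); the variable names are my own:

```python
# Guess full support for both players and solve the indifference
# conditions exactly with rational arithmetic.
from fractions import Fraction as F

# Row payoffs u1 and column payoffs u2, indexed [row][col].
u1 = [[F(2), F(0)], [F(0), F(1)]]
u2 = [[F(1), F(0)], [F(0), F(2)]]

# Column plays L with prob q: row is indifferent between T and B, i.e.
# u1[0][0]*q + u1[0][1]*(1-q) = u1[1][0]*q + u1[1][1]*(1-q).
q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])

# Row plays T with prob p: column is indifferent between L and R.
p = (u2[1][1] - u2[1][0]) / (u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])

print(p, q)  # 2/3 1/3
```

Step 4 of the guide is automatic here: with a 2×2 game there are no pure strategies outside the support left to check.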
4 other examples for you

L R
T 2, 1 0, 0
B 0, 0 1, 2

L R L R
T 1, 1, 1 0, 0, 0 T 0, 0, 0 0, 0, 0
B 0, 0, 0 0, 0, 0 B 0, 0, 0 2, 2, 2
W E

57 / 166
W X Y Z
A 1, 1 0, 1 0, 1 0, 1
B 1, 1 1, 1 0, 1 0, 1
C 1, 1 1, 1 1, 1 0, 1
D 1, 1 1, 1 1, 1 1, 1

L C R
T 3, 1 0, 0 2, 4
M 0, 0 1, 3 0, 0
B 4, 2 0, 0 1,1

58 / 166
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

59 / 166
Sequential decision making

I We want to extend our analysis to games with sequential


decision making.

I As an example we can add sequentiality in BoS by assuming that


Ann chooses before Bob.

I Clearly, the strategy sets are affected. This is because at the start
of the game Bob needs to elaborate a contingent plan of action
depending on what Ann will decide.

I We have that Sa = {O , F }, and Sb = {(O , O ), (O , F ), (F , O ), (F , F )}.

I Let me first use our current representation; i.e. the normal form.

60 / 166
Note Ann is row player, A above should read O , and B should read F .
I The sequential version of BoS has 3 eq. in pure strategies. In two
of them Ann chooses O (to which Bob must respond with a contingent
plan of choosing O; off the eq. path both O and F work).
One has Ann choosing F and Bob choosing F at both information sets.
I Out of those eq - given the sequentiality of the game - Ann
choosing O, and Bob always matching her choice seems a more
reasonable eq.
I The "problem" is that NE ignores rationality out of eq. path/play.

I This calls for a refinement of NE that embodies the notion of


sequential rationality.
I Notice also that the normal form representation does not
illustrate the sequentiality that we have added to the game.
61 / 166
Extensive form

A game Γ in extensive form is described as follows:


– a set N of players (to which we can add Nature);
– the game tree (how the game develops);
– the node labels (which player owns a node);
– moves/actions (from which strategies are formed);
– payoffs (utility functions);

– (information sets);
– (probability distribution for Nature’s moves).

62 / 166
Nature

Uncertainty can be introduced in an extensive form game by adding


a fictitious player called Nature.

Note that Nature only determines the realization probabilities of the


branches of a game. Her choices are not part of an equilibrium.
63 / 166
Terminology

I Initial node. Terminal nodes. Decision nodes.

I Successors and predecessors.

I Except for the initial node, every node has exactly one
immediate predecessor. The initial node has no predecessors.
I Play (or path) is a sequence of arrows and decision nodes from
origin to a terminal node.
I Length is the maximal number of decision nodes in some play of
the game.
I Perfect information: each player knows the past decisions of the
opponents at each decision stage.
I Perfect recall: a player never forgets what he did or what he
knew.
(See The Absent-Minded Driver’s game)
64 / 166
I The game tree representation visualizes directly the information
structure via the information sets.
I (i) is the sequential version of BoS with perfect information, (ii)
with imperfect information (in (ii), Bob must take the same action at
x_b1 and x_b2).
I X_i: set of decision nodes for i (x ∈ X_i is a node); H_ik: an
information set of i; H_i = {H_i1, ..., H_iK}: i's information partition
(∪_{k=1}^K H_ik = X_i). Finally, A_i(H_i): set of feasible actions at
information set H_i.
I Perfect information: ∀i , ∀Hi , |Hi | = 1.
65 / 166
Imperfect recall: Absent Minded Driver
Let h be the path from the origin to node x.
Let X_i(h) be the sequence of information sets and actions for
player i on the path h.
Then, an extensive-form game has perfect recall if, for any i and H_i,
if the paths to decision nodes x, x′ ∈ H_i are h and h′ respectively,
then X_i(h) = X_i(h′).

We can see that X(h_2) = (H_1, S), X(h_1) = ∅.


66 / 166
Strategies and actions
A pure strategy is a complete plan of action for a player, specifying
an action for any of his information sets.

[Game tree figure: player 1 owns decision nodes 1a, 1b, 1c; player 2
owns nodes 2a, 2b.]
The strategy profile determines the actual play and the outcome.
67 / 166
Mixed and Behavioral strategies
I Recall that a mixed strategy assigns a probability distribution
over the set of pure strategies. You should think of this
assignment as done before the start of the game.
I A behavioral strategy, instead, assigns probability distribution
over the set of feasible actions for each information set Ai (Hi )
independently of the other information sets. You may think of
this assignment as done during the game.
I To see the difference go back to sequential version of BoS (with
perfect info). A mixed strategy would assign a probability
distribution over the 4 possible actions of Bob. A behavioural
strategy, instead, would assign one over the actions at each
information set (so over F and O given that Ann chose F , and
over F and O given that Ann chose F ).
I Result (Kuhn Th.): For any finite extensive form game, any
behavioral strategy has an outcome equivalent mixed strategy
and viceversa, if and only if the game has perfect recall.
I Under the conditions of the result above we can use behavioral
strategies - which are more intuitive - without any loss. 68 / 166
I In the game above player 1 has 8 pure strategies. Hence, a mixed
strategy is a distribution over such 8 strategies. Instead, a
behavioral strategy assigns an independent probability
distribution over the two binary actions at each of the 3
information sets.
I The behavioral strategy mixes over a and b at H_11 with prob. p_1,
1 − p_1; over a′ and b′ at H_12 with prob. p_2, 1 − p_2; and over a″ and
b″ at H_13 with prob. p_3, 1 − p_3. The corresponding mixed strategy
assigns probability p_1 p_2 p_3 to the pure strategy (a, a′, a″) (and so
on; check the other 7). You can verify that the probability each end
node is reached is the same under both formalizations. 69 / 166
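The conversion described above can be sketched as follows; the function name and the probability values are illustrative:

```python
# Outcome-equivalent mixed strategy induced by a behavioral strategy
# (the direction of Kuhn's theorem used above): with 3 information sets
# and binary actions, the pure strategy (a, a', a'') gets probability
# p1*p2*p3, and so on for the other 7 pure strategies.
from itertools import product

def to_mixed(p1, p2, p3):
    """Behavioral strategy (prob. of the first action at each of the 3
    information sets) -> distribution over the 8 pure strategies,
    encoded as bit-tuples (0 = first action, 1 = second action)."""
    probs = {}
    for bits in product([0, 1], repeat=3):
        pr = 1.0
        for bit, p in zip(bits, (p1, p2, p3)):
            pr *= p if bit == 0 else 1 - p
        probs[bits] = pr
    return probs

mixed = to_mixed(0.5, 0.25, 0.8)
print(sum(mixed.values()))  # sums to 1 (up to float rounding)
```

The independence across information sets is what makes each pure-strategy probability a simple product.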
Backward Induction and NE

I In a finite extensive form game of perfect information and


perfect recall each player has an optimal action at the last
information set. Backward induction is a procedure by which we
find such optimal actions, we fix them and then we do the same
for the last but one information set and so on iteratively.
I BI embodies the logic of sequential rationality: at any node
where a decision is taken, a player best replies assuming that all
players will best reply in all future nodes.
I In the game above, we solve for player 2 optimal action (to pick),
and then fixed that we see what is optimal for player 1 (to pick).
This is the prediction from backward induction.
I Result: For any finite extensive form game Γ of complete and
perfect information, the solution by backward induction is a NE.
I In the game above there are no other NE in pure strategies.
70 / 166
Entrant game and Chain store paradox

The entrant game:

E ---in---> M ---accommodate---> (1, 1)
|                    |
out                fight
|                    |
v                    v
(0, 2)          (−1, −1)

I The solution by backward induction is given by (enter,
accommodate), which is then a NE. However, (do not enter, fight) is
another NE. It is based on a "non credible" threat (it lacks
sequential rationality).

71 / 166
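The backward-induction computation for the entrant game is short enough to script; a sketch with the payoffs from the tree (listed as entrant, incumbent):

```python
# Backward induction in the entrant game.
payoffs = {("out", None): (0, 2),
           ("in", "accommodate"): (1, 1),
           ("in", "fight"): (-1, -1)}

# Last mover first: the incumbent's best reply after entry.
best_m = max(["accommodate", "fight"], key=lambda a: payoffs[("in", a)][1])

# The entrant anticipates that reply and compares her payoffs.
u_in = payoffs[("in", best_m)][0]
u_out = payoffs[("out", None)][0]
best_e = "in" if u_in > u_out else "out"

print(best_e, best_m)  # in accommodate
```

The other NE, (out, fight), never shows up here: BI evaluates "fight" at the post-entry node, where it is not a best reply, which is exactly the non-credibility argument.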
Extensive form and strategic form

The entrant game:

incumbent
fight accommodate
entrant in −1, −1 1, 1
out 0, 2 0, 2

A Nash equilibrium of the extensive form is a Nash equilibrium of the


associated strategic form.
In the entrant game, there are two NEs.

72 / 166
I Chain store paradox: The previous entrant game can be
extended to allow the monopolist to control N markets in each of
which there is one potential entrant for each market. The
example assumes that entrants arrive sequentially one after the
other with entrance decisions being observable (see book for full
description).

I The solution of backward induction has all entrants entering the


market.

I Is the prediction realistic?

I This points to the fact that the requirement of sequential


rationality might be quite "demanding" and "not entirely
realistic".

73 / 166
Stackelberg competition
I As an example of sequential decision making in a game that is
not finite, let's see the Stackelberg model.
General assumptions: homogeneous product, price-taking
consumers.
Stackelberg assumption: producers set output levels sequentially.

I Two firms. Firm 1 is leader and 2 is follower.


I Assume a linear demand p(q) = a − q and linear costs ci qi , with
a > 2c1 − c2 and a > 3c2 − 2c1 .
I ui (qi , qj ) = (a − qi − qj )qi − ci qi .
I We solve the game by backward induction.
I The best reply function (recall the Cournot analysis) for the follower
is BR_2(q_1) = (a − c_2 − q_1)/2.
I Then the leader solves

max_{q_1} u_1(q_1, BR_2(q_1)) = q_1 · (a − q_1 + c_2 − 2c_1)/2
74 / 166
The leader's equilibrium quantity is

q_1^s = (a − 2c_1 + c_2)/2 > (a − 2c_1 + c_2)/3 = q_1^∗

The follower's equilibrium quantity is

q_2^s = (a − 3c_2 + 2c_1)/4 < (a − 2c_2 + c_1)/3 = q_2^∗

I Remark: q_1^s and q_2^s are the actions on the equilibrium path. The
NE we get by BI is a pair of strategies that specify the optimal
action at each information set.
The equilibrium price is

p^s = (a + 2c_1 + c_2)/4 < (a + c_1 + c_2)/3 = p^∗

Total output Q^s > Q^∗: Stackelberg is "more competitive" than
Cournot.
Clearly, π_1^s = (q_1^s)²/2 > π_1^∗ by revealed profitability. Also, π_2^s < π_2^∗.
Under Stackelberg competition the leader is overproducing
(compared to Cournot competition): he is not on his best reply ex
post, because he commits ex ante to a different quantity (and gains
an advantage). 75 / 166
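The comparative statics above can be double-checked with the closed forms just derived; the parameter values (a = 10, c1 = c2 = 1) are illustrative and satisfy the assumptions a > 2c1 − c2 and a > 3c2 − 2c1:

```python
# Stackelberg vs Cournot with p(q) = a - q and linear costs c1, c2.
a, c1, c2 = 10.0, 1.0, 1.0

q1_s = (a - 2*c1 + c2) / 2          # Stackelberg leader
q2_s = (a - 3*c2 + 2*c1) / 4        # Stackelberg follower
q1_c = (a - 2*c1 + c2) / 3          # Cournot quantities
q2_c = (a - 2*c2 + c1) / 3

p_s = a - q1_s - q2_s               # market prices
p_c = a - q1_c - q2_c

print(q1_s, q2_s, p_s, p_c)  # 4.5 2.25 3.25 4.0
```

The leader overproduces, the follower underproduces, total output is larger and the price lower than under Cournot, as the inequalities above state.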
Sequential price game

Under Stackelberg competition the leader accommodates entry by


overproducing. This does not always occur.
Recall our simple model for two differentiated products, where the
demand structure is q1 = a − bp1 + cp2 and q2 = a + cp1 − bp2 and
production is costless.
Assume that producers set prices sequentially. Firm 1 is leader and 2
is follower.
Assume 2b + c > 2bc and apply BI. The best reply function for the
follower is BR2 (p1 ) = (a + cp1 )/(2b ).

76 / 166
The leader solves max_{p_1} (a − b p_1 + c p_2) p_1 with p_2 = BR_2(p_1), so

p_1^s = ((2b + c)/2) · (a/(2b² − c²)) and p_2^s = ((4b² + 2bc − c²)/(4b)) · (a/(2b² − c²))

We find p1s > p2s > p b . In the sequential price game, the leader is
overpricing (i.e., underproducing).
We have:
1) Higher profits than Bertrand for both firms: πis > πib .
2) Leader’s profit lower than follower’s: π2s > π1s .
3) Increase in leader’s profit smaller than follower’s:
π1s − π1b < π2s − π2b .
There is no first-mover’s advantage: leader expects the follower to
undercut him.

77 / 166
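A quick numerical check of the ranking p1^s > p2^s > p^b, using the closed forms above; the simultaneous Bertrand price is computed here from the same demand, p^b = a/(2b − c) (a derivation not restated on the slide), and the parameter values are illustrative:

```python
# Sequential price setting with demand q1 = a - b*p1 + c*p2, zero costs.
a, b, c = 1.0, 1.0, 0.5

p1_s = ((2*b + c) / 2) * (a / (2*b**2 - c**2))            # leader
p2_s = ((4*b**2 + 2*b*c - c**2) / (4*b)) * (a / (2*b**2 - c**2))
p_b = a / (2*b - c)                                       # simultaneous Bertrand

print(p1_s, p2_s, p_b)  # leader > follower > Bertrand
```

Consistency check: the follower's price equals his best reply BR_2(p1^s) = (a + c·p1^s)/(2b), and the follower indeed undercuts the leader, which is why there is no first-mover advantage here.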
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

78 / 166
The ultimatum game

I Assume a pie of size 1.


I Player 1 has the right to propose how to share it. He chooses
x ∈ [0, 1], the fraction for himself.
I Player 2 can only choose to accept her share or reject. If she
rejects, both get 0.
I We can solve this game by BI.
I The solution is for the proposer to keep everything, and for
player 2 to accept.
I Whoever has the power to make the first (and only) offer has all
the bargaining power.
I To see why, notice that player 2 must accept any share
different from zero if she best replies. Against zero, both accepting
and not accepting can be a BR (but not accepting does not lead to a
NE).
I Are there other NE?

79 / 166
Bargaining over two periods - by 2 players

I Bargaining can protract for long, and waiting is costly.


I Hence, we assume a discount factor δi in [0, 1].
I Consider a two-period model, with alternating offers (it
resembles a twice repeated ultimatum game.).
I 1 makes the first offer: give x2 to 2 and keep 1 − x2 for himself.
I If 2 accepts, the game is over. If 2 refuses, then it is his turn to
make an offer to 1.
I We can solve the game by backwards induction:
I Continuation values for the two players in second period are
resp. 0 and 1
I Discounted to the first period, they become 0 and δ2 ;
I Optimal offer by 1 in first period gives 2 a payoff of δ2 .
I We see that bargaining power of player 2 depends on her
patience.

80 / 166
Bargaining over an infinite horizon

I We can extend the model with alternating offers over an infinite


horizon (Rubinstein, 1982).

I 1 offers x2 to 2 (and keeps 1 − x2 if accepted) in odd periods,

I 2 offers x1 to 1 (and keeps 1 − x1 if accepted) in even periods.

81 / 166
Key steps of proof:
I Notice that given the infinite horizon the game looks the same in
any odd (even) period except the discounting of payoff.
I Assume that if game enters the third period the solution
(strategy profile) by BI that gives highest payoff to player 1 in
any sub-game is played.
I Call the above σ H and (uh , vh ) the associated payoffs from the
subgame that starts with the 3 period.
I Hence, player 1 at the end of the second period will accept if and
only if x_1 ≥ δu_h. Hence, the best proposal by 2 is x_1 = δu_h, which
gives him δ(1 − δu_h).
I Given the above, in period 1 player 2 will accept a proposal x_2 iff
x_2 ≥ δ(1 − δu_h). Under the optimal proposal by 1, 1 then gets
1 − δ(1 − δu_h).
I Since σ^H was the solution with the highest payoff for 1, we must
have u_h = 1 − δ(1 − δu_h), which gives u_h = 1/(1 + δ) (and then
v_h = δ/(1 + δ)).
I We can do the same focusing on strategy profile that gives
lowest payoff for 1. However, steps are the same and we get
same payoff for both players. Hence, the solution is unique.
82 / 166
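The fixed-point equation u_h = 1 − δ(1 − δu_h) from the proof sketch can be checked by simple iteration (the map is a contraction since its slope δ² < 1); δ = 0.9 is illustrative:

```python
# Iterate the map u -> 1 - delta*(1 - delta*u) and compare the limit
# with the closed form 1/(1 + delta) derived above.
delta = 0.9

u = 0.5                          # arbitrary starting guess
for _ in range(200):
    u = 1 - delta * (1 - delta * u)

print(u, 1 / (1 + delta))        # both ~0.526
```

As δ → 1 the split 1/(1+δ), δ/(1+δ) converges to an even split: patience erodes the proposer's advantage.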
Centipede game - BI and common knowledge of
rationality

I In the unique eq. by BI each player opts for "out" at each info set.
I Notice that opting for out is optimal based on the implicit
assumption that the other player will behave rationally in the
unfolding of the game.
I However, here if player 2 observes "in" by player 1, he observes
something that contradicts the assumption of common knowledge
of rationality.
I To avoid this conceptual issue one may assume that deviations
from the equilibrium path may occur due to mistakes. If the
probability of a mistake is low enough, the equilibrium outcome
is unaffected. 83 / 166
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

84 / 166
Sequential rationality
I BI embodies the logic of sequential rationality: at any node
where a decision is taken, a player best replies assuming that all
players will best reply in all possible future nodes.
I However, the logic of BI cannot be applied directly to games such
as the one below (the optimal action of E at her 2nd information set
depends on which node she is at).

I To extend the idea of sequential rationality to games with


imperfect information, we need to define subgames.
I In the game above, the branch of the tree following E entering
defines a subgame.
85 / 166
Subgame perfect equilibrium

I A subgame perfect equilibrium (SPE) is a strategy profile that


induces a Nash equilibrium in every subgame.
I In the previous subgame, originating from E entering, the NE is
given by the strategy profile (A, A). The strategy profile for the whole
game is instead ((Enter, A), A).
I A subgame of an extensive form game is a part of the game that
(i) starts with a singleton information set, (ii) includes all
decision nodes after that and all the corresponding actions; (iii)
does not cut any players’ information set.
I Formally, a subgame can be described as a collection of
information sets (satisfying the above conditions).
I Under perfect information, BI=SPE.
I SPE extends the idea (and so rules out non-credible threats or
promises) to games with imperfect information.

86 / 166
I Def. A strategy profile s∗ is a SPE if for each subgame H̃, its
restricted strategy profile s∗|H̃ is a NE of H̃.

I Thm. For every finite game in extensive form, there exists a SPE.

87 / 166
Capacity choice game

The game below is another illustration of the logic behind SPE.


I Firm 1 decides her production capacity before playing a Cournot
game with firm 2.
I Assume market demand p = 900 − Q , Q = q1 + q2 .
I Firm 2 has no capacity constraint, and no costs.
I Firm 1's production capacity can be small or large (not entering,
with zero capacity, is an option too).
I Small capacity costs 20.000 and can produce up to 100 units.
I Large capacity costs 80.000 and can produce unlimited units.
I We assume zero marginal production costs in both cases.

88 / 166
I π1 (q1 , q2 ) = (900 − q1 − q2 )q1 − C , where C depends on capacity.
I π2 (q1 , q2 ) = (900 − q1 − q2 )q2 .
I There are 2 proper subgames (after choice S or L by 1).
I The subgame after L is the same as Cournot game already
analyzed, and gives equilibrium quantities (300, 300).
I In the subgame after S, firm 1's best reply is constrained by the
capacity choice: q_1(q_2) = (900 − q_2)/2, capped at 100.
Given this constrained best reply by 1, we have that the NE in the
subgame is (100, 400).
I When deciding whether to enter, firm 1 compares her profits
under the NE of the 2 subgames. It turns out that here it is optimal
to induce the subgame associated with small capacity.
I Notice that the strategies of both players include the quantity
choices in both subgames. In addition for firm 1 the strategy
also includes the capacity choice.

89 / 166
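The profit comparison behind firm 1's capacity choice can be sketched as:

```python
# SPE of the capacity-choice game: compare firm 1's profit in the two
# subgame equilibria (L: Cournot (300, 300); S: constrained (100, 400)).
def p(Q):
    """Inverse demand p = 900 - Q."""
    return 900 - Q

pi1_large = p(300 + 300) * 300 - 80_000   # (900-600)*300 - 80000
pi1_small = p(100 + 400) * 100 - 20_000   # (900-500)*100 - 20000

print(pi1_large, pi1_small)  # 10000 20000: small capacity is optimal
```

The small capacity acts as a commitment device: by restricting her own output, firm 1 induces a subgame whose equilibrium is more profitable for her despite the rival's larger quantity.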
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

90 / 166
Repeated games

The one shot framework we have started the course with provided a
powerful tool to analyze strategic interactions among individuals.
Some insights we learned were useful but not very reassuring (PD
game, public good game, etc).
I For those applications where the same set of players interact
repeatedly in a stable environment, we can provide a more
positive message.
I Main message is: repetition facilitates cooperation. The
mechanism how this might occur may be familiar:
“If you are nice to me today, I will be nice to you tomorrow. And if
you are nasty, I will be too.”
I The idea embodied by the above quote requires that players
observe the past history of choices: strategies may be history
dependent.

91 / 166
I The repeated version of a one shot game then becomes an
extensive form game. Hence, we introduce them now.
I A repeated game Γ is formed by:
I a stage game G with n players;
I T ≥ 2 rounds of play;
I for each i , strategies and payoffs.

I Perfect monitoring: before each round, every player knows the


history of all the actions previously played (notice that in some
applications this may not be realistic).
I We assume weak perfect monitoring, which requires that only
pure actions of previous periods are observed.

92 / 166
I Define Ht = (S1 × S2 × ... × Sn )t−1 , the set of histories until round t.
ht will be the realization of a specific history.
I A pure strategy bi = (bi 1 , bi 2 , ..., biT ) is such that ∀t, bit : Ht 7→ Si .

I Player i gets a payoff flow {ui1 , ui2 , . . . , uiT } (one in each stage).

I The value of this payoff flow is the discounted sum:

U_i = ∑_{t=1}^{T} δ_i^{t−1} u_{it}

with δ_i in [0, 1] if T < +∞ and δ_i in [0, 1) if T = +∞.


I To gain the first insights, we start by looking at a finitely
repeated interaction.

93 / 166
A twice repeated game

Let the stage game be:

I The stage game has two pure NE: (A , H ), and (B , B ). We look at


the SPE of the repeated game.
I Our goal is to verify that we can induce players to play the
non-Nash profile (A, A) (whose sum of payoffs is higher) in the first
round if the game is repeated (there are other, more trivial SPE).
I We assume no discounting here.

I b11 (∅) ∈ {A , B }, b12 : {A , B } × {A , B , H } 7→ {A , B }, while


b21 (∅) ∈ {A , B , H }, b22 : {A , B } × {A , B , H } 7→ {A , B , H }.

94 / 166
Key observations:
I In the second/last round only the actions underlying a stage
game NE can be played.
I In this game the hard part will be to persuade 2 to play A in the
first round (it is not a best reply in the stage game for him).
I To do so, we need to build a carrot and stick system (that is a
system of rewards and punishments).
I The availability of a carrot and stick mechanism depends on the
specific game.
I Here we have it, as there are 2 NE in the stage game of which
one can be used as carrot for 2 - (A , H ) - and one as stick - (B , B ).

95 / 166

I b∗_11(∅) = A, b∗_12(A, A) = A, b∗_12(A, H) = B.
I b∗_21(∅) = A, b∗_22(A, A) = H, b∗_22(A, H) = B.
I b∗_i2(h) = B, ∀i and ∀h ≠ (A, A), (A, H).
I We need to check that no player has an incentive to deviate at
any stage of the game.
I Player 2 does not have an incentive to deviate at t = 1, as under
the eq. path he gets 3 + 4, and with best deviation 4 + 1.
I Check all other possible deviations.

96 / 166
Repeated PD
Let the stage game be a PD:

C D
C 5, 5 0, 6
D 6, 0 1, 1

I (D , D ) is the only NE for the stage game.


I For T = 2, each player has 2^5 = 32 strategies in the repeated
game.
I If we play the game twice, the only SPE is to always play D .
I Result: If the stage game has a unique NE, the SPE of the finitely
repeated game is unique.
I Intuitively, we cannot build the carrot and stick system
used earlier.
I If the PD is finitely repeated, cooperation cannot be supported by
a SPE.
97 / 166
Infinitely repeated game
Let the stage game be a PD:

C D
C 5, 5 0, 6
D 6, 0 1, 1

But assume now that the stage game is infinitely repeated.


I A technical issue is that the infinite sum of stage payoffs may
not be finite, while we want to keep them finite to measure the
impact of different strategies on payoffs. Hence, the need for a
discount factor, which we assume to be common among players.
I Players maximize

∑_{t=1}^{∞} δ^{t−1} u_{it}
I The introduction of δ makes also economic sense. It can be
interpreted as a time preference, but also as the probability that
the game continues after each stage.
98 / 166
I We define the grim-trigger strategy profile by b_i^GT(h) = C if h = ∅
or h = (C, C)^{t−1}; D otherwise. That is, both players "play C and,
after any deviation, both play D forever".
I The claim is that for high enough δ (players patient enough) the
grim-trigger strategy is a SPE of the infinitely repeated stage
game.
I One needs to show that the proposed strategy is optimal among
infinitely many ones.
I The one-shot deviation principle simplifies our task. It says that to
check whether a strategy is optimal it is enough to check that it is
unimprovable in one step: that is, a player cannot improve her
total discounted payoff by changing her action in one period given
that she sticks to the planned actions for the following periods.
I In other words, it is enough to compare the continuation payoffs.

99 / 166
I Consider a subgame after h = (C, C)^{t−1} (eq. path till t − 1).
I The continuation payoff from b^GT is 5 + δ5 + δ²5 + ... = 5/(1 − δ).
I The continuation payoff from deviating at t is
6 + δ1 + δ²1 + ... = 6 + δ/(1 − δ).
I For a one-step deviation not to be profitable, we need
5/(1 − δ) ≥ 6 + δ/(1 − δ), i.e. δ ≥ 1/5.

Result: A strategy that plays ŝ before any deviation is a SPE if and
only if

U_i(ŝ) ≥ max_{s_i} [ u_i(s_i, ŝ_j) + (δ/(1 − δ)) · u_i(s_i^∗, s_j^∗) ]

100 / 166
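The one-shot-deviation comparison above reduces to two closed forms; a sketch:

```python
# Grim trigger in the PD with payoffs C/C = 5, D-vs-C = 6, D/D = 1:
# cooperation is sustainable iff 5/(1-d) >= 6 + d/(1-d), i.e. d >= 1/5.
def cooperate_value(delta):
    """Continuation payoff on the cooperative path."""
    return 5 / (1 - delta)

def deviate_value(delta):
    """Best one-shot deviation followed by Nash punishment forever."""
    return 6 + delta * 1 / (1 - delta)

for d in (0.1, 0.5):
    print(d, cooperate_value(d) >= deviate_value(d))  # False, then True
```

At δ = 1/5 the two values coincide exactly; below it, the short-run gain from defecting outweighs the lost future cooperation.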
Tit for tat

I Another well known strategy that can be used to induce


cooperation is Tit for tat.
I Under this strategy, a player starts with cooperation and then
replicates the action played by the opponent in the previous
period.
I The above implies that if both players use it and one deviates
they then alternate the profiles cooperate/defect,
defect/cooperate.
I You can verify that for high enough δ you can sustain
cooperation as a Nash equilibrium.
I If we compare it with the grim trigger strategy, a negative aspect
is that it is not a SPE (after a defection you are better off bringing
back cooperation). A positive one is that it is more forgiving: if a
player defected by mistake, it leads back to cooperation.

101 / 166
Folk’s theorem

We can state a more general result regarding the range of possible


payoffs that can be reached in a given repeated game.
Preliminaries:
I The minmax value vi = minσ−i ∈Σ−i maxσi ∈Σi ui (σi , σ−i ) is the worst
punishment that can be imposed on i in the stage game.
I The set of feasible and individually strictly rational outcomes is

Ψ (G ) = {(u(σ ))|σ ∈ Σ, ui (σ ) > vi , ∀i }

(Folk’s theorem) Given a stage game G , for any u in Ψ (G ) there exists


a discount factor δ̄ in (0, 1) such that, for δ ≥ δ̄, the infinitely repeated
game Γ (δ) has a SPE with payoff u.

102 / 166
Static Cournot market

I Two firms. Assume linear demand p(q) = a − bq and zero costs.

I At equilibrium: q_i^∗ = a/(3b), p^∗ = a/3, π_i^∗ = a²/(9b).

I We saw that collusion to an evenly split "monopoly" outcome is
advantageous: q_i^m = a/(4b), p^m = a/2, π_i^m = a²/(8b).
I The insight we learn from previous analysis is that in an infinitely
repeated Cournot game, collusion can emerge as a SPE.

103 / 166
Collusion under Cournot competition

I The goal is to support the collusive outcome qim = a/(4b ).

I To do so we use a Nash reversion strategy that threatens to play
q_i^∗ forever.
I Using the one shot deviation principle, we need to make sure that

π_i(q^m) ≥ (1 − δ) max_{q_i} π_i(q_i, q_j^m) + δ π_i(q_i^∗, q_j^∗)

which gives

(1/8)(a²/b) ≥ (1 − δ)(9/64)(a²/b) + δ(1/9)(a²/b),

and yields δ ≥ 9/17.

104 / 166
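The collusion condition can be checked numerically (normalizing a = b = 1, so the three per-period profits are 1/8, 9/64 and 1/9):

```python
# Nash-reversion check for Cournot collusion: collusive profit 1/8,
# best one-shot deviation profit 9/64, Cournot punishment profit 1/9.
def sustainable(delta):
    """Is collusion sustainable at this discount factor?"""
    return 1/8 >= (1 - delta) * 9/64 + delta * 1/9

print(sustainable(0.4), sustainable(0.6))  # False True
```

The threshold 9/17 ≈ 0.53 sits between the two test values: Cournot collusion needs more patience than the PD example (1/5) because the one-shot deviation gain is smaller relative to the punishment.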
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

105 / 166
Incomplete information

Let me remind you that:

I There is imperfect information when players may be uninformed


about the moves made by other players.
I There is incomplete information when players may be
uninformed about some characteristics of the game or of the
players.
I Before formalizing how we deal with these games, let me start
with some examples.

106 / 166
I Lack of complete information leads to the presence/modeling of
uncertainty.

I I want to start with a somewhat degenerate example to illustrate


the implications of uncertainty.

I Comparing this example with the second one, should help


understanding the complications associated with the modeling
of incomplete information.

107 / 166
I Assume one seller and one buyer. For simplicity, let me assume
that the buyer is a computer whose behavior is fixed (so this is
actually a decision problem).
I The seller values the object on sale zero, vs = 0. The
buyer/computer is equally likely to have any value between 0
and 100 (a program selects vb from U [0, 100]).
I The seller makes a take it or leave it offer to the
buyer/computer, p. The computer is programmed to accept if
vb ≥ p. The seller is aware of this.
I If seller is also aware of the vb selected, then he offers p = vb .
The outcome is efficient, and the computer gets zero even when
her value is high, say 90.

108 / 166
I If the seller is uncertain about the value selected by the program, he
maximizes his expected payoff ((100 − p)/100) · p, which gives p = 50.
I The value uncertainty in this example produces two
consequences that you may see in various games:

1. Even though trade here is always Pareto efficient, the object is


unsold half of the times (inefficiency).

2. The fact that vb is private information originates an


informational rent. (for instance, if computer has high value -
say vb = 90 - now she enjoys a payoff of 40).

109 / 166
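The seller's pricing problem can also be solved by brute force over integer prices; a sketch:

```python
# The seller's expected payoff is Pr(sale) * price = (100 - p)/100 * p,
# since the buyer accepts whenever v_b >= p and v_b ~ U[0, 100].
def expected_payoff(p):
    return (100 - p) / 100 * p

best_p = max(range(101), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # 50 25.0
```

At p = 50 the object goes unsold half of the time even though trade is always efficient, and a buyer with v_b = 90 keeps an informational rent of 40, as described above.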
One-sided uncertainty in BoS

Bob believes that Ann likes him or dislikes him, with equal probability.

He is uncertain whether he is playing the game on the left or the one


on the right. Conversely, Ann knows which game she is playing.

F O F O
F 2, 1 0, 0 F 2, 0 0, 2
O 0, 0 1, 2 O 0, 1 1, 0
likes (1/2) dislikes (1/2)

Above we are implicitly making important assumptions:


I Even though Bob is uncertain about Ann actual preferences, he
is perfectly aware about all possible scenarios and their
underlying frequencies.
I Ann is aware of them too, and the two share common knowledge
about it.
110 / 166
At the players’ level, the game is

FF FO OF OO
F 2, 1/2 1, 3/2 1, 0 0, 1
O 0, 1/2 1/2, 0 1/2, 3/2 1, 1

I Notice that Ann’s payoff in the matrix above are computed from
an ex ante perspective.
I Ann is with probability half the type of player on the left game
and with half the one on the right game.
I We call the NE of a game of incomplete information a Bayesian
Nash Equilibrium. The BNE of the game above is (F , FO ).

111 / 166
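Verifying that (F, FO) is a BNE amounts to checking Bob's two pure strategies against Ann's type-contingent plan, and each Ann type's best reply to Bob. A sketch using the payoff tables above, assuming payoffs are listed as (Bob, Ann) with Bob as the row player (consistent with the ex-ante matrix):

```python
# One-sided-uncertainty BoS: Ann's type is "likes"/"dislikes" w.p. 1/2.
U = {"likes":    {("F", "F"): (2, 1), ("F", "O"): (0, 0),
                  ("O", "F"): (0, 0), ("O", "O"): (1, 2)},
     "dislikes": {("F", "F"): (2, 0), ("F", "O"): (0, 2),
                  ("O", "F"): (0, 1), ("O", "O"): (1, 0)}}

def bob_eu(b, ann):
    """Bob's ex-ante expected payoff; ann maps type -> action."""
    return sum(0.5 * U[t][(b, ann[t])][0] for t in U)

ann_star = {"likes": "F", "dislikes": "O"}   # the strategy FO

# Bob's best reply to FO is F, and each Ann type best-replies to F.
assert bob_eu("F", ann_star) >= bob_eu("O", ann_star)
for t, a in ann_star.items():
    other = "O" if a == "F" else "F"
    assert U[t][("F", a)][1] >= U[t][("F", other)][1]
print("BNE (F, FO) verified")
```

Bob's expected payoff from F against FO is ½·2 + ½·0 = 1, versus ½ from O, matching the ex-ante matrix on the slide.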
Formalization of Bayesian games (Harsanyi)

I Under incomplete information part of the structure of the game


is not common knowledge. For instance, players may be
uncertain about the opponents they are facing, their payoffs,
their strategy set.
I Various incomplete information situations can be reformulated to
the more tractable/interpretable incomplete information about
payoff. We focus on that.

I Once the structure of the game is not common knowledge, it is


not obvious how to model incomplete information in a tractable
way.
I Each player forms her own (first order) beliefs about such
structure. A player forms beliefs about opponents strategies
given the first order beliefs about the game (second order
beliefs), ... : a full hierarchy of belief is needed to describe the
incompleteness of information.

112 / 166
I Where do these beliefs come from? Are they subjective beliefs?
Can we guarantee their mutual consistency?
I We have already exploited the structure imposed by Harsanyi to
guarantee the consistency of such beliefs and the tractability of
the model in the BoS example.
I The Harsanyi approach assumes that players have common
knowledge of a probability distribution over the elements of the
game, and hence over what the various elements might be.
Players’ beliefs are then consistent, as they are conditional
probability distributions based on such a common prior. The
conditioning is based on Bayes’ rule (hence Bayesian games).
I We can model the common prior as something determined by a
fictitious player, Nature. This way we can reinterpret a game of
incomplete information as a game of imperfect but complete
information.

113 / 166
I We represent each player’s characteristics and informational
structure via what we call the player’s type. We denote by Ti the
set of types of player i , and by ti a specific realization.
I The realized vector of types t is drawn from the set T by Nature
using a common probability distribution p(.). Players then form
their posteriors based on their realized type, p(.|ti ). You can see
that it is enough that a player knows his own type if we use this
representation.
I Player i ’s payoff function in the Bayesian game becomes
ui : S × T 7→ R (it depends both on the strategy profile played
and on the realized profile of types).
I We denote a player i ’s strategy by bi , where bi : Ti 7→ Si .

I (b1 , b2 , ..., bn ) is a Bayesian Nash eq. strategy profile if, ∀i , bi
maximizes i ’s expected payoff given her beliefs about the
opponents’ types and their strategies b−i .

114 / 166
Illustrative example

I one buyer, one seller dealing about a car that can be low quality
or high quality (lemon problem, Akerlof).

I buyer values car 60 if high quality, 30 if low quality.

I seller values car 55 if high quality, 0 if low quality.

I Price is fixed to be P .

I Buyer and seller simultaneously decide whether to trade (T ) or
not (N ) at P .

115 / 166
I Prob. car is lemon is .3. This is the common prior. The extensive
form game below provides a game of imperfect info where
nature selects according to the common probability distribution.

I The seller observes the selection by Nature and hence can
compute her posterior about the car being a lemon.

116 / 166
I As usual, strategies are defined at the start of the game, so each i
selects bi (.) such that, for any alternative strategy gi ,
∑ti ∈Ti ∑t−i ∈T−i p(ti , t−i )ui (bi (ti ), b−i (t−i ), ti , t−i ) ≥
∑ti ∈Ti ∑t−i ∈T−i p(ti , t−i )ui (gi (ti ), b−i (t−i ), ti , t−i ).

I It turns out that ex ante optimality is equivalent to ex post
(type-dependent) optimality: for each ti ,
∑t−i ∈T−i p(t−i |ti )ui (bi (ti ), b−i (t−i ), ti , t−i ) ≥
∑t−i ∈T−i p(t−i |ti )ui (gi (ti ), b−i (t−i ), ti , t−i ).

117 / 166
Ex ante vs type dependent optimization

Ex ante:

I Checking the payoffs above, it is direct to verify that when P ≤ 30
we have a BNE given by the strategy profile (NT , T ): the seller
trades only the lemon, and the buyer trades.
I The eq. above entails adverse selection (only the lemon is traded,
at a sufficiently low price).

118 / 166
Ex post:
I The difference is that now we can think of the two types of the
seller as separate players: call S1 the one with the good car and
S2 the one with the lemon.
I If P ≤ 30 and the buyer is willing to trade, it is optimal for S1 not
to trade and for S2 to trade. Given this, it is indeed a best reply
for the buyer to trade.
I As before, if P > 30 the players’ strategies cannot be mutual best
replies.
I Same conclusion as before.
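The adverse-selection equilibrium can be checked numerically. A throwaway sketch (the price P = 25 and the gains-from-trade payoff convention are my own modeling choices for illustration):

```python
def buyer_payoff(P, buyer_action, seller_plan, p_lemon=0.3):
    """Ex ante buyer payoff; seller_plan maps quality -> 'T' or 'N'."""
    buyer_val = {'high': 60, 'low': 30}
    if buyer_action == 'N':
        return 0.0
    return sum(pr * (buyer_val[q] - P)
               for q, pr in (('high', 1 - p_lemon), ('low', p_lemon))
               if seller_plan[q] == 'T')

def seller_payoff(P, quality, seller_action, buyer_action):
    """Seller's gain from trade (she keeps the car's value otherwise)."""
    seller_val = {'high': 55, 'low': 0}
    if seller_action == 'T' and buyer_action == 'T':
        return P - seller_val[quality]
    return 0.0

P = 25                                # any P <= 30 gives the same picture
plan = {'high': 'N', 'low': 'T'}      # the seller strategy NT
# High-quality seller prefers N, lemon seller prefers T, given buyer T:
assert seller_payoff(P, 'high', 'N', 'T') >= seller_payoff(P, 'high', 'T', 'T')
assert seller_payoff(P, 'low', 'T', 'T') >= seller_payoff(P, 'low', 'N', 'T')
# Given NT, trading is a best reply for the buyer...
assert buyer_payoff(P, 'T', plan) >= buyer_payoff(P, 'N', plan)
# ...but not when P > 30:
assert buyer_payoff(35, 'T', plan) < buyer_payoff(35, 'N', plan)
```

The assertions pass: at P = 25 the buyer's ex ante gain from trading against NT is 0.3 · (30 − 25) > 0, while at P = 35 it is negative.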

119 / 166
Cournot duopoly with incomplete information

The (inverse) demand function is P = 10 − Q .


There are two firms producing Q = q1 + q2 .
Column is known to have marginal cost c2 = 2.
Row may have marginal cost c1 = 1 or 3 (with equal prob.)
Equilibria under complete information:

q1∗ q2∗ p∗ Π∗1 Π∗2


[c1 = 1, c2 = 2] 10/3 7/3 13/3 100/9 49/9
[c1 = 3, c2 = 2] 2 3 5 4 9

120 / 166
When Row has high cost (c1H = 3): maxq1H (10 − q1H − q2 − c1H )q1H
When Row has low cost (c1L = 1): maxq1L (10 − q1L − q2 − c1L )q1L
Column has cost c2 = 2: maxq2 (1/2)ΠH + (1/2)ΠL

System of three FOCs:
  10 − 2q1H − q2 − c1H = 0
  10 − 2q1L − q2 − c1L = 0
  (1/2)[10 − q1H − 2q2 − c2 ] + (1/2)[10 − q1L − 2q2 − c2 ] = 0

Eq.:
  q1H = 13/6, q1L = 19/6, q2∗ = 8/3, pH = 31/6, pL = 25/6,
  ΠH1 = (13/6)², ΠL1 = (19/6)², E Π2 = (8/3)²
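The system above is linear, so it can be solved in exact arithmetic; a quick sketch using only the numbers on this slide:

```python
from fractions import Fraction as Fr

# FOCs: 2*q1H + q2 = 7 ; 2*q1L + q2 = 9 ;
#       (1/2)*q1H + (1/2)*q1L + 2*q2 = 8.
# Substituting q1H = (7 - q2)/2 and q1L = (9 - q2)/2 into the last FOC:
#   (1/2)*(16 - 2*q2)/2 + 2*q2 = 8   =>   4 + (3/2)*q2 = 8.
q2 = (8 - 4) / Fr(3, 2)
q1H = (7 - q2) / 2
q1L = (9 - q2) / 2

pH, pL = 10 - q1H - q2, 10 - q1L - q2    # market prices by Row's type
profit_H = (pH - 3) * q1H                # Row's profit, high-cost type
profit_L = (pL - 1) * q1L                # Row's profit, low-cost type
E_profit_2 = Fr(1, 2) * (pH - 2) * q2 + Fr(1, 2) * (pL - 2) * q2
```

The solution reproduces the table: q1H = 13/6, q1L = 19/6, q2 = 8/3, and the profits (13/6)², (19/6)², (8/3)².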

121 / 166
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

122 / 166
Auction Theory: An Introduction

Let me start with some questions for you:

I How do you think auction theory is related to what you have
covered so far?

I Why is the study of auctions important?

I Which auctions do you know?

123 / 166
Game Theory and Auctions

I Any Auction Format determines a Game of Incomplete
Information
I Set of players, N (Bidders). We refer to a generic bidder from this
set as bidder i , i ∈ N .


I Set of Actions, Ai , ∀i (Allowed bids)
I Set of Signals/Types, Xi , ∀i (Private Information about your
value)
I Probability distribution over product set of signals, F
I Payoff Function, ui : X × A 7→ R (utility from equilibrium
allocation, and associated payment)
I Equilibrium Strategy, βi : Xi 7→ Ai (Bidding function)
I Common Knowledge of the Game
I We look for equilibria of the game of incomplete information.

124 / 166
Auction Theory: An Introduction

What type of things are auctioned?

I Art Auctions (Sotheby’s, etc)

I Dutch Flower Auctions (Dutch Auction)

I Timber Auctions

I Ebay

125 / 166
Auction Theory: An Introduction

More relevant auctions and applications:

I Licence Auctions (Mobile, TV, Internet, Radio, etc)


I Procurement Auctions
I Privatizations
I Central Bank Liquidity Auctions
I Stock Exchange (Double Auction)

Others (Perhaps not formally described as Auctions)


I Finance Applications: Takeovers, IPO (ex: Google)
I Industrial Organization: R&D Innovations
I Political Economy: Lobbies Contributions

126 / 166
Auction Theory: An Introduction

Why are Auction used?

I When bidders’ values are not known, auctions typically perform
well in terms of revenues generated and efficiency.

I Anonymous (Identity of bidders plays no role)

I Universal (the same format can be used for a wide variety of
situations/objects; rules are fairly simple.)

127 / 166
Auction Theory: An Introduction

Why is Auction theory used/useful?

I At least at the aggregate level, good predictions (confirmed by
empirical/experimental work).
I It has been successfully applied to the design of auctions
(auction design and experimental economics nicely work
together).

I Can be used to model other competitive environments.

128 / 166
Auction Theory: Auction Environments

What are the objectives of the auction?

I Maximizing Revenues
I Efficiency
I Some weighted combination of the two
I Others.

129 / 166
Auction Theory: Auction Environments

"Standard" (but restrictive) assumptions of most (but not all) of the
literature:

I Risk Neutrality
I No Budget Constraints
I Unidimensional and Independently drawn Signals
I Symmetry
I No Collusion
I Fixed Number of Bidders
I Others.

130 / 166
Auction Theory: Auction Environments

I Single vs Multi Unit Auctions


I Valuation Structure
I Private Values
I Interdependent Values
I Common Values
I Open Vs Sealed Bid Formats
I One stage versus Multi-stage Formats
I Repeated Auctions with same set of bidders (then Dynamic
game)

131 / 166
Auction Theory: Auction Environments

Open Formats

I English Auctions
I Dutch Auction

Sealed Formats
I FPA
I SPA
I APA

Modifications of standard formats (with entry fees, reserve prices,
buy-out prices, etc.)
Ultimately, there is still lots of scope for market design.

132 / 166
Auction Theory vs Mechanism design vs.
Information design
Alternative approaches:

I Auction Theory: You fix the rules (mechanism, auction
format: FPA, SPA, etc.) and determine equilibrium behavior.
Perhaps you compare the eq. outcomes of different mechanisms
of your choice.
I Mechanism Design: You fix the objective (Revenues, Efficiency)
and the information structure and look for the rules of the game
(mechanism) that maximizes your objective subject to IC
constraint (typically by restricting to direct mechanism). Then
you may ask yourself if there is a reasonable (generally indirect
mechanism) to implement the "optimal" allocation.
I Information Design:
You fix the objective (Revenues, Efficiency) and the rules of the
game and look for the information structure that maximizes
your objective.
133 / 166
Equilibrium concepts

I Bayesian Nash Equilibrium: ∀i , ∀xi ∈ Xi , ∀ai ∈ Ai ,
E [ui (α ∗ (X ), X )|Xi = xi ] ≥ E [ui (ai , α ∗−i (X−i ), X )|Xi = xi ]
I Ex Post Equilibrium: ∀i , ∀x, ∀ai ,
ui (α ∗ (x), x) ≥ ui (ai , α ∗−i (x−i ), x)
(Note: the equilibrium is independent of F )
I Dominant Strategy Equilibrium: ∀i , ∀x, ∀αi , ∀a−i ,
ui (αi∗ (xi ), a−i , x) ≥ ui (αi (xi ), a−i , x)

Note: Dominant strategy implies Ex post, which in turn implies
Bayesian Nash.

134 / 166
Independent Private Value Setting (IPV)

We will work with the IPV framework:

I vi = xi (private value, unidimensional signal)

I xi ∈ [0, 1], with xi drawn independently of xj from the same
distribution F , ∀j ≠ i (Independence, Symmetry)

We also assume that F admits a continuous density f ≡ F ′

Also: 1 object, risk neutrality, no collusion, no budget constraints,
fixed number of bidders.

135 / 166
Second Price Auction (SPA)

Rules: Sealed Bid Format in which the highest bid wins, and pays the
second highest bid.

Why start with the SPA? It is not the most natural format.

I Important for Theory


I Not much observed in practice, at least as a sealed (one-shot)
format, but eBay can be considered a dynamic version of it.

Note that the SPA is (strategically) equivalent to the English auction
(EA) if one of the two conditions below holds:
I Private value environment
I Only 2 bidders.
Why?

136 / 166
Second Price Auction (SPA)

Given the rules, the ex-post payoff of bidder i is:

I If bi > maxj≠i bj , πi = xi − maxj≠i bj
I If bi < maxj≠i bj , πi = 0
I If bi = maxj≠i bj , assume a random tie-breaking rule (for instance,
with equal probability)

I Claim: It is a weakly dominant strategy for every i to bid
βi (xi ) = xi

137 / 166
Second Price Auction (SPA)
We need to consider possible deviations from the proposed strategy.
Define P1 ≡ maxj≠i bj .
Let’s look at the ex post payoff and think about how it could vary when
choosing any bi ≠ xi ; distinguish 2 cases: xi > P1 , xi < P1 (under the
proposed eq., xi = P1 happens with zero probability).
I xi > P1 (i wins and pays P1 under the proposed strategy)
I bi > xi : still win, still pay P1 . ==> No improvement
I bi < xi :
I bi > P1 : as above, still win, still pay P1 . ==> No improvement
I bi < P1 : lose and forego (xi − P1 ). ==> Worse off
I xi < P1 (i loses and pays 0 under the proposed strategy)
I bi < xi : still lose. ==> No improvement
I bi > xi :
I bi < P1 : as above, you still lose. ==> No improvement
I bi > P1 : now you win, but suffer a loss of (P1 − xi ). ==> Worse off

Note: We just showed that, regardless of the realized P1 , by playing
βi (xi ) = xi a bidder is sure that she could not have obtained a (strictly)
higher ex post payoff by playing any other strategy bi .
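The ex post dominance argument can also be stress-tested by brute force. A throwaway sketch (the random values, opposing bids, and deviations are arbitrary simulation choices):

```python
import random

def spa_payoff(x, b, p1):
    """Ex post SPA payoff for a bidder with value x bidding b against a
    highest opposing bid p1 (ties occur with probability zero here)."""
    return x - p1 if b > p1 else 0.0

rng = random.Random(0)
for _ in range(10_000):
    x, p1, b = rng.random(), rng.random(), rng.random()
    # Truthful bidding is never strictly worse, whatever p1 turns out to be.
    assert spa_payoff(x, x, p1) >= spa_payoff(x, b, p1)
```

No random deviation ever beats truthful bidding ex post, exactly as the case analysis above shows.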
138 / 166
Second Price Auction (SPA)
Further Implications of equilibrium SPA:
I Note that in the previous proof we did not use the fact that xi
were drawn from some Fi and had a common support, not that
were drawn independently.
I Why is this important?
I Because it means that the equilibrium prediction is robust (not
sensitive) to those details
I It also means that dealing with asymmetric SPA, or allowing for
correlation is not a problem (unlike in FPA!)
I What is the assumptions that we made use of?
I Private values (with interdependent values, Eq. is no longer in
dominant strategies!)
I Note that eq above is efficient: Object goes with bidder with
highest value
I Is it Unique?
I NO, there exist asymmetric eq. (Ex: 1 bidder bidding above the
max valuations of all bidders, all others bidding zero)
139 / 166
SPA: Calculating Expected Revenues

Let (X1 , X2 , ..., Xn ) be the vector of independent draws from F .
Let (Y1(n) , Y2(n) , ..., Yn(n) ) be the same draws rearranged in
decreasing order, and let Fk(n) be the distribution of Yk(n) .
We have F1(n) (y) = F (y)n , so that f1(n) (y) = nF (y)n−1 f (y).
Thus E (Y1(n) ) = ∫01 ynF (y)n−1 f (y)dy; if n = 2 and F is uniform:
E (Y1(2) ) = ∫01 y · 2y dy = 2/3.
We have F2(n) (y) = F (y)n + nF (y)n−1 (1 − F (y)), so that
f2(n) (y) = n(n − 1)F (y)n−2 f (y)(1 − F (y)).
Thus E (Y2(n) ) = ∫01 yn(n − 1)F (y)n−2 f (y)(1 − F (y))dy; if n = 2 and
F is uniform: E (Y2(2) ) = ∫01 y · 2(1 − y)dy = 1/3.
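A Monte Carlo sanity check of these two expectations (a rough sketch; the sample size is arbitrary):

```python
import random

def order_stat_means(n, draws=200_000, seed=1):
    """Estimate E[Y1] (highest) and E[Y2] (second highest) of n i.i.d.
    Uniform[0,1] draws by simulation."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(draws):
        xs = sorted(rng.random() for _ in range(n))
        s1 += xs[-1]   # highest draw
        s2 += xs[-2]   # second-highest draw
    return s1 / draws, s2 / draws

m1, m2 = order_stat_means(2)   # closed forms for n = 2: 2/3 and 1/3
```

With 200,000 draws the estimates land within a few thousandths of 2/3 and 1/3.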

140 / 166
SPA: Calculating Expected Revenues

Notation from the book: Y1 ≡ Y1(n−1) , since we are often interested in
the highest value among the other n − 1 opponents.
G (y) ≡ F1(n−1) (y) = F (y)n−1 , with density g(y) = (n − 1)F (y)n−2 f (y).

One way to calculate the revenues is to first calculate the expected
payment of bidder i given x, m(x):

m(x) = G (x)E (Y1 |Y1 < x) = F (x)n−1 ∫0x [y(n − 1)F (y)n−2 f (y)/F (x)n−1 ] dy

U2 (uniform, n = 2): m(x) = x ∫0x (y/x) dy = x²/2

E (R ) = nEx (m(x)); U2: 2 ∫01 (x²/2) dx = 1/3

141 / 166
First Price Auction (FPA)

Rules: Sealed bid format in which the highest bid wins and pays
his/her own bid (Note: the FPA is strategically equivalent to the Dutch
auction. Why?).

Given the rules, the ex post payoff of bidder i is:

I If bi > maxj≠i bj , πi = xi − bi
I If bi < maxj≠i bj , πi = 0
I If bi = maxj≠i bj , assume a random tie-breaking rule (for instance,
with equal probability)

How would you bid? You can’t bid your value, you need to shade! How
much? If you knew P1 : P1 + ε. But you don’t ==> clearly the eq. is not
in dominant strategies ==> typically ex post regret. (Ex ante trade-off
between prob. of winning and ex post payoff.)

142 / 166
First Price Auction (FPA)

Conjecture: a symmetric, increasing, differentiable eq. exists and all
j ≠ 1 follow it. Let’s look at the optimal strategy of bidder 1.

I b ≤ β(1)
I β(0) = 0

Bidder 1 wins if b > maxi≠1 β(Xi ) = β(maxi≠1 Xi ) = β(Y1 ),
or Y1 < β −1 (b )

I E (π) = G (β −1 (b ))(x − b )
I FOC: [g(β −1 (b ))/β ′ (β −1 (b ))](x − b ) − G (β −1 (b )) = 0
I In a symmetric eq. b = β(x); thus G (x)β ′ (x) + g(x)β(x) = xg(x) (a
first-order ODE)

143 / 166
First Price Auction (FPA)

I d/dx (G (x)β(x)) = xg(x)
I Initial condition: β(0) = 0
I β(x) = ∫0x yg(y)dy / G (x) = E (Y1 |Y1 < x)
I β(x) = x − ∫0x G (y)dy / G (x); uniform: β(x) = x − x/n
I U2: β(x) = x/2; limn→∞ β(x) = x
I You can work it out for other distributions. Ex: F (x) = x², then
β(x) = (2/3)x; F (x) = 1 − exp(−λx), then
β(x) = 1/λ − x exp(−λx)/(1 − exp(−λx))
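These closed forms can be recovered by numerically integrating β(x) = x − ∫0x G(y)dy / G(x); a quick sketch (the midpoint-rule step count is arbitrary):

```python
def beta(x, G, steps=20_000):
    """FPA equilibrium bid beta(x) = x - (1/G(x)) * integral_0^x G(y) dy,
    with the integral computed by the midpoint rule."""
    if x == 0:
        return 0.0
    h = x / steps
    integral = sum(G((i + 0.5) * h) for i in range(steps)) * h
    return x - integral / G(x)

G_uniform = lambda y: y          # n = 2, F uniform: G = F^(n-1) = F
G_square  = lambda y: y ** 2     # n = 2, F(x) = x^2: G = F
```

For example, beta(0.8, G_uniform) is numerically close to 0.4 (= x/2) and beta(0.9, G_square) is close to 0.6 (= 2x/3), matching the formulas above.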

144 / 166
FPA: Verifying Equilibrium

I The proposed eq. is increasing; thus if bidder 1 bids b , he/she
wins iff all other bidders’ types are less than or equal to
z ≡ β −1 (b )
I π(x, z) = G (z)(x − β(z)) = G (z)x − G (z)E (Y1 |Y1 < z) =
G (z)x − ∫0z yg(y)dy = G (z)(x − z) + ∫0z G (y)dy
I π(x, x) − π(x, z) = G (z)(z − x) − ∫xz G (y)dy. NOTE: that is ≥ 0, ∀z

145 / 166
FPA: Comments

I Is the equilibrium in weakly dominant strategies?
I Is the equilibrium ex post?
I It is a Bayesian Nash eq., and as such it is sensitive to the number
of bidders, the distribution F , symmetry, etc. (not detail-free)

147 / 166
FPA: Overbidding

A lot of empirical evidence (lab, field, etc.) shows that people overbid
compared to the risk-neutral Bayesian Nash equilibrium (RNBNE).

Why?

Which types are "shading" more?


I Risk Aversion
I Regret
I Pleasure of winning
I Lack of Common Knowledge

148 / 166
FPA: Revenues and Comparison with SPA

I What is the expected payment of a generic bidder of type x in the
FPA?
I m(x) = G (x)β(x) = G (x)E (Y1 |Y1 < x)
I Does it look familiar?

E (R ) = nEx (m(x)), same as in the SPA!! U2: 2 ∫01 (x²/2) dx = 1/3

I SPA: β(x) = x, E (R ) = E (β(Y2(n) )) = E (Y2(n) )
I FPA: β(x) = E (Y1 |Y1 < x), E (R ) = E (β(Y1(n) )) = E (Y2(n) )
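Revenue equivalence in the uniform case is easy to see in simulation; a sketch (using the uniform-case FPA bid β(x) = (n − 1)x/n derived earlier):

```python
import random

def auction_revenues(n=2, draws=200_000, seed=2):
    """Monte Carlo expected revenue of the SPA and the FPA with n i.i.d.
    U[0,1] values; the FPA winner pays beta(x) = (n-1)/n * x."""
    rng = random.Random(seed)
    spa = fpa = 0.0
    for _ in range(draws):
        xs = sorted(rng.random() for _ in range(n))
        spa += xs[-2]                     # SPA price: second-highest value
        fpa += (n - 1) / n * xs[-1]       # FPA price: winner's own bid
    return spa / draws, fpa / draws

r_spa, r_fpa = auction_revenues()         # both close to 1/3 for n = 2
```

Both estimates converge to the same number (1/3 for two uniform bidders), despite the very different payment rules.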

149 / 166
Do you find the result surprising? (very different rules, but same
revenues)

Can you try to provide an intuition for it?

Notice:
I Allocation is the same
I Lowest type gets the same (0) in both auctions.

We have that the expected rents a bidder enjoys because of his
private info are the same.

How general is this?

150 / 166
Revenue Equivalence

Assume values are independently and identically distributed, bidders
are risk neutral, and the expected payment of type zero is the same.
In the symmetric, increasing equilibrium, all standard auctions then
yield the same expected revenues.

I π(x, z) = G (z)x − m(z)
I ∂π(x, z)/∂z = g(z)x − m′ (z) = 0
I Symmetric eq: z = x
I m′ (y) = g(y)y
I m(x) = m(0) + ∫0x yg(y)dy = ∫0x yg(y)dy = G (x)E (Y1 |Y1 < x)

151 / 166
Revenue Equivalence

Why is Revenue Equivalence so important?

I It provides an important benchmark: we can then think about
what happens when we relax each of the underlying assumptions
separately.

I After verifying that the assumptions of revenue equivalence hold,
you can exploit the fact that you know the revenues of some
other revenue-equivalent auction (ex: SPA) to back out the
bidding function of the auction you are interested in.

152 / 166
Revenue Equivalent Auctions

Let us assume the IPV setting and think about the equilibrium of the
APA (all-pay auction).

APA: the highest bid wins, but everybody pays his/her bid.

What is the expected payment in this set-up? ∫0x yg(y)dy; U2: x²/2

In the APA, the expected payment coincides with the actual payment
(because you pay regardless of the outcome)!

Thus β(x) = m(x) = ∫0x yg(y)dy; U2: β(x) = x²/2

Does this bidding function make sense? Yes: lower types shade much
more than higher types because they are less likely to win.
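One more revenue-equivalence check, this time for the APA with β(x) = x²/2 (the two-bidder uniform case above); a throwaway sketch:

```python
import random

def apa_revenue(draws=200_000, seed=3):
    """Expected APA revenue, two bidders with U[0,1] values, each paying
    their own bid beta(x) = x^2/2 regardless of who wins."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        total += rng.random() ** 2 / 2 + rng.random() ** 2 / 2
    return total / draws

rev = apa_revenue()     # close to 1/3, as in the SPA and the FPA
```

Analytically, the revenue is n · E[x²/2] = 2 · (1/3)/2 = 1/3, and the simulation confirms it.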

153 / 166
Let’s verify that the bidding function we derived is indeed an
equilibrium.

π(x, z) = G (z)x − β(z) = G (z)x − ∫0z yg(y)dy

π(x, x) − π(x, z) = G (z)(z − x) − ∫xz G (y)dy ≥ 0

Not surprisingly, the same check as for the FPA.

154 / 166
Key Assumptions of Revenue Equivalence

I Independence

I Risk Neutrality

I No Budget Constraints

I Symmetry

155 / 166
General Vs. Direct Mechanisms
Myerson 1981

I A mechanism is defined by (B , π, µ)
I π : B 7→ ∆ (allocation rule)
I µ : B 7→ Rn (payment rule)

The bidding strategy given the mechanism:

I βi : [ai , bi ] 7→ Bi

I A direct mechanism is defined by (Q , M )
I Q : X 7→ ∆, where Qi (x) is the prob. that i gets the object given
reports x
I M : X 7→ Rn , where Mi (x) is the payment from i given reports x
156 / 166
Revelation Principle
Myerson 1981
Take any mechanism (B , π, µ) and an equilibrium β of this
mechanism. Then there exists a direct mechanism (Q , M ) s.t.:

I Each bidder reports truthfully
I The outcome is the same as in the original mechanism (same
allocation and expected payments for all bidders)

I Proof: Q (x) = π(β(x)), M (x) = µ(β(x))

I Example: FPA, U2. The indirect mechanism has β(x) = x/2, which
gives m(x) = x²/2.
I Suppose I propose that you report your xi to me, and then I tell
you that you get the object if your report is the highest, and that
if you report xi you have to pay me xi ²/2. You have no incentive
to lie, and we get the same outcome as in the FPA.
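Truthful reporting in that direct mechanism can be checked on a grid; a sketch for the U[0,1] two-bidder case (the grid is an arbitrary discretization):

```python
def direct_utility(x, z):
    """Expected utility from reporting z with value x in the direct
    mechanism derived from the FPA (U[0,1], two bidders): win with
    prob. z (the chance the truthful opponent's value is below z) and
    pay the fee m(z) = z^2/2 regardless."""
    return z * x - z * z / 2

grid = [i / 100 for i in range(101)]
for x in grid:
    # z*x - z^2/2 is strictly concave in z with maximizer z = x.
    assert max(grid, key=lambda z: direct_utility(x, z)) == x
```

Every type's best report is its true value, which is the revelation principle at work in this example.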
157 / 166
Table of contents

1. An Introduction to Game 7. Bargaining and sequential


Theory competition
2. Strategic form games 8. Subgame Perfect equilibrium
3. Dominance and Minmax 9. Repeated games
theorem 10. Incomplete information and
4. Nash equilibrium Bayesian Nash equilibrium
5. Mixed strategies 11. Auctions
6. Extensive form and backwards 12. Perfect Bayesian Equilibrium
induction (PBE)

158 / 166
Extensive form games and Incomplete
information
I We have already looked at incomplete information in the context
of normal form games. The relevant concept there was Bayesian
Nash eq. Recall that we modeled incomplete information using
the framework of imperfect but complete information. The game
was Bayesian in the sense that we applied Bayes’ rule to update
from the common prior. This ensured that players’ beliefs were
correct/consistent.
I We also already looked at extensive form games to deal with the
fact that in some games players move sequentially. There we
introduced the idea of imposing sequential rationality, which led
to a refinement of NE via backward induction and SPE.
I Here we want to deal with incomplete information in extensive
form games where players might choose sequentially. In doing
so we want to maintain the requirement of sequential rationality
and the consistency of beliefs. This is what Perfect Bayesian
equilibrium (PBE) requires.
159 / 166
PBE

In each information set, the owner has a belief over where he is


(probability distribution over the nodes of the information set).
A belief system µ assigns beliefs to each information set.
In each information set, the owner chooses a behavioral strategy.
A behavioral strategy b assigns actions to each information set.
An assessment (b , µ) is a perfect Bayesian equilibrium when it
satisfies two requirements:
— weak consistency (beliefs follow from the common prior and from
strategies using Bayes’ rule);
— sequential rationality (strategies are optimal, given beliefs).

160 / 166
Weak consistency

[Figure: a game tree. Nature moves first, choosing among three nodes
with probabilities q1 , q2 , q3 ; player 1 moves next (with mixed actions
of probabilities p, 1 − p and r, 1 − r); player 2 then moves in an
information set spanning several nodes, holding belief [µ] on one of
them.]

What should µ be for q1 · p > 0? And what if q1 = 0 or p = 0?

The only requirement is to use Bayes’ rule whenever possible. If not,
any arbitrary belief is acceptable.
161 / 166
Beliefs matter

[Figure: a game tree. Player 1 chooses L or R; player 2 then chooses
W or E in an information set, with belief µ on the node following L .
Payoffs: (2,2) after (L ,W ), (0,1) after (L ,E ), (0,0) after (R ,W ), and
(2,1) after (R ,E ).]

The best reply for 2 is W if µ ≥ 1/2 and E if µ ≤ 1/2. There are two NE
that are supported by two different sets of beliefs.

162 / 166
Sequential rationality

[Figure: a game tree. Player 1 chooses L (ending the game with payoff
(0,0)), M , or R ; after M or R , player 2 chooses l , m, or r in an
information set with belief [µ]. Payoffs after M : (4,0), (−1,1), (0,4);
after R : (0,4), (−1,1), (4,0).]

(L , m) is a SPE, but m is never optimal to play. Hence, it is not a PBE
(we cannot find beliefs to sustain m).

163 / 166
Signaling game

The following characterizes what we call a signaling game.

Characteristics:
– two players: Sender and Receiver;
– the Receiver does not know the type of the Sender (the only source
of incomplete info).
Timing:
– Nature chooses the type of the Sender according to p : Ts 7→ [0, 1];
– the Sender plays first and the Receiver plays second (each plays
only once).
We can then define behavioral strategies πs : Ts 7→ ∆(Ss ),
πr : Ss 7→ ∆(Sr ), and ui : Ss × Sr × Ts 7→ R, for i = s, r.

164 / 166
Example: Back to Lemon problem

165 / 166
Classification of equilibria in pure strategies

Assume three types and three actions for the Sender:


– if t1 , t2 , t3 → L (all types play L ), we have a pooling equilibrium;
– if t1 → L , t2 → M , t3 → R (all types play differently), we have a
separating equilibrium;
– if t1 , t2 → L and t3 → R (some types play differently, but not all), we
have a partially separating equilibrium.

166 / 166
