Lecture 21 - Game Theory, Strategic Equilibrium, and Repeated Interactions
1. Asymmetric information
2. Conjectures about the behavior of others
3. Interdependence of your best choices with the conjectured choices of others
• The combination of information/uncertainty, conjectured behavior, and
interdependence leads to strategic behavior.
• Naturally gives rise to game theory: a tool for analyzing strategic inter-
actions among individuals (people, firms) in an economic setting.
• Like all previous models, we have rational actors maximizing their
well-being in a well-specified environment.
• What is new here is small N , that is, a small number of actors such that
the best choice for any one actor depends intimately on the choices of a
small number of other actors. This is quite different from a standard
market setting where everyone is a “price taker.” In this sense, the
situations described by game theory resemble a multi-player game with
strategies, payoffs and concealed information.
• An economic game has three elements:
1. Players
2. Strategies
3. Payoffs (utilities, states)
— 2 persons/firms
— N persons/firms
— Can also be against nature. In that case, nature moves stochastically.
• Strategies and payoffs can be represented in extensive form (a game tree)
or in normal form (a payoff matrix).
• Depending on the game, one or the other notation will typically be more
useful.
1.1 Example: Dormitory game
• Player A prefers to play music quietly, but dislikes hearing loud music
from the adjoining room.
• Player B is a head-banger. The more noise the better.
• This is a simultaneous move game.
• See Figure 21#1
Dormitory game, extensive form (Figure 21#1a): A chooses S (soft) or L
(loud); B, moving without observing A's choice, then chooses S or L.
Dormitory game, normal form (Figure 21#1b):

                 B
              S       L
     A   S   6,3     2,4
         L   5,4     3,5
• Solution concept?
1.3 Example: Rock, paper, scissors
• Recall the childhood game of rock, paper, scissors. This is a game where
you hold out one of three symbols corresponding to rock, paper and scissors.
The rules are: rock breaks scissors, scissors cuts paper, and paper covers
rock; identical symbols tie.
• What is the expected utility of the parents?
• So E[Uk ] depends on P.
1.5 Prisoner’s dilemma
• You are already familiar with the prisoner’s dilemma. Review it briefly
anyway because it underscores a central issue in non-cooperative games —
the issue of credible commitment.
• The setup: Two criminals accused of a crime. The district attorney pulls
each aside to say: if you help me convict (‘rat on’) the other prisoner, I’ll
let you go free and the other prisoner will get 10 years. However, if you
both rat on one another, you’ll each get 5 years. The prisoners understand
that if neither rats on the other, they will each serve only 3 years. So, the
payoff matrix:
B
Rat Not
A Rat −5, −5 0, −10
Not −10, 0 −3, −3
• As is clear, (Rat, Rat) is the only Nash equilibrium of this game, even
though both prisoners would be strictly better off if they could jointly
choose (Not Rat, Not Rat).
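This claim can be verified by brute force. The following is a minimal sketch (function and variable names are illustrative, not from the lecture): it checks every action profile for profitable unilateral deviations.

```python
from itertools import product

# payoffs[(a, b)] = (payoff to A, payoff to B), from the matrix above
payoffs = {
    ("Rat", "Rat"): (-5, -5),
    ("Rat", "Not"): (0, -10),
    ("Not", "Rat"): (-10, 0),
    ("Not", "Not"): (-3, -3),
}
actions = ["Rat", "Not"]

def is_nash(a, b):
    """(a, b) is Nash if neither player gains by deviating unilaterally."""
    u_a, u_b = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= u_a for a2 in actions)
    best_b = all(payoffs[(a, b2)][1] <= u_b for b2 in actions)
    return best_a and best_b

equilibria = [(a, b) for a, b in product(actions, actions) if is_nash(a, b)]
print(equilibria)  # only ('Rat', 'Rat') survives
```

Note that (Not, Not) fails the check precisely because each prisoner gains (from −3 to 0) by ratting unilaterally.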
• Hence, the prisoner’s dilemma underscores one of the interesting proper-
ties of game theoretic models: Outcomes that appear optimal are often
not stable when subject to the Nash criterion. In fact, they are often
dominated.
• This raises a set of questions:
• These issues are explored a bit in the tragedy of the commons game
1.6 Example: Tragedy of the commons
• Two yak herders share a common pasture. If Y yaks graze in total, each
yak yields V (Y ) = 200 − Y ² gallons of milk.
• Notice ∂V /∂Y < 0 and ∂²V /∂Y ² < 0, meaning that each yak does
increasing marginal damage (note, the outcome variable is in gallons per
yak, so obviously it will be optimal to bring some positive number of
yaks).
See Figure 21#2: game tree in which A chooses Graze 4 or Graze 5 and B
chooses G4, G5, or G6.
• Solution concept: What does each Yak herder do taking the actions of the
other herder as given?
• Problem for herder A: choose Y_A to maximize Y_A · (200 − (Y_A + Y_B)²).
The first-order condition is

    200 − 3Y_A² − 4Y_A Y_B − Y_B² = 0.

Imposing symmetry (Y_A = Y_B) gives 200 = 8Y_A², so

    Y_A = Y_B = 5.
• Each herder brings 5 yaks to the common and earns 5 · (200 − 102 ) = 500,
and total output is 1, 000 gallons of milk.
• What would be the social optimum?

    max_Y  Y · V = 200Y − Y ³,

    ∂/∂Y :  200 − 3Y ² = 0,

    Y = √(200/3) = (10/3)√6 ≈ 8.16,

and total output is 200Y − Y ³ ≈ 1,089 gallons.
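Both outcomes can be checked numerically. This is a minimal sketch, assuming the per-yak yield V(Y) = 200 − Y² implied by the derivation above, and restricting to integer herd sizes (names are illustrative):

```python
def profit_a(ya, yb):
    """Herder A's milk output: yaks brought times per-yak yield 200 - Y**2."""
    return ya * (200 - (ya + yb) ** 2)

def best_response(yb):
    """A's best integer herd size, taking B's choice as given."""
    return max(range(15), key=lambda ya: profit_a(ya, yb))

# Symmetric Nash equilibrium: a herd size that is a best response to itself
nash = [y for y in range(15) if best_response(y) == y]
print(nash)  # [5] -- each herder brings 5 yaks

# Social optimum: choose the total herd Y maximizing Y * V(Y)
y_star = max(range(15), key=lambda y: y * (200 - y ** 2))
print(y_star, y_star * (200 - y_star ** 2))  # 8 1088 (continuous optimum ~8.16)
```

The integer optimum of 8 total yaks approximates the continuous optimum of about 8.16, and its output (1,088 gallons) exceeds the 1,000 gallons produced in equilibrium.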
1. B announces the action he will take if A brings 4 yaks and the action
he will take if A brings 5 yaks.
2. A takes his action.
3. B takes his action (hence, this is now a sequential game).
B’s contingent strategies (action if A brings 4, action if A brings 5):

            4,4      4,5      4,6      5,4      5,5      5,6      6,4      6,5      6,6
  A   4   544,544  544,544  544,544  476,595  476,595  476,595  400,600  400,600  400,600
      5   595,476  500,500  395,474  595,476  500,500  395,474  595,476  500,500  395,474
A’s best response to each of B’s strategies:

            4,4      4,5      4,6      5,4      5,5      5,6      6,4      6,5      6,6
  A   4            544,544  544,544                    476,595                    400,600
      5   595,476                    595,476  500,500           595,476  500,500
1. B threatens 6, 6; A chooses 4: Payoffs are 400, 600
2. B threatens 5, 5; A chooses 5: Payoffs are 500, 500
3. B threatens 6, 5; A chooses 5: Payoffs are 500, 500
Profiles that are best responses for both players (Nash equilibria):

            4,4      4,5      4,6      5,4      5,5      5,6      6,4      6,5      6,6
  A   4                                                                           400,600
      5                                        500,500                   500,500
• Hence, this game has three Nash equilibria. Question: Are any of these
equilibria problematic?
• Yes, both 1 and 2 involve non-credible threats. Player B threatens to take
actions contingent on A’s choice that it would not be rational for him
to take. For example, if A brought 5 yaks, then B should also bring 5.
Player B should never bring 6 yaks if player A brings more than 4.
• So the problem is that B is making threats that should not be believed.
But in a simple Nash equilibrium, they are believed. The Nash equilibrium
concept does not rule out these implausible threats. Why not?
• Because these threats never have to be carried out in equilibrium. If A
brings 4 yaks, then B brings 6, which is rational. And if A brings 5, then
B brings 5, which is also rational. So player B is never called on to carry
out an irrational threat, which would indeed be inconsistent with Nash play.
• This example points to a problem with the Nash concept: implausible
beliefs about what would happen ‘off the equilibrium path’ can lead to
implausible results in equilibrium (e.g., player A brings 4 yaks in
response to B’s threat to bring 6).
• This motivates the ‘equilibrium refinement’ of Subgame Perfection.
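The claims above can be verified by enumerating B's nine contingent strategies. This is a sketch, again assuming the yield function V(Y) = 200 − Y² used above: it finds the Nash equilibria, then keeps only those in which B's threatened replies are optimal at every node.

```python
from itertools import product

def payoff(ya, yb):
    """Gallons for (A, B) when they graze ya and yb yaks; V(Y) = 200 - Y**2."""
    v = 200 - (ya + yb) ** 2
    return ya * v, yb * v

A_actions = [4, 5]
# B's contingent strategy: (reply if A brings 4, reply if A brings 5)
B_strategies = list(product([4, 5, 6], repeat=2))

def outcome(a, strat):
    return payoff(a, strat[0] if a == 4 else strat[1])

nash = []
for a, s in product(A_actions, B_strategies):
    u_a, u_b = outcome(a, s)
    if any(outcome(a2, s)[0] > u_a for a2 in A_actions):
        continue  # A has a profitable deviation
    if any(outcome(a, s2)[1] > u_b for s2 in B_strategies):
        continue  # B has a profitable deviation
    nash.append((a, s))
print(nash)  # three Nash equilibria

# A strategy is credible only if B's reply is a best response to each
# action A might take, i.e., optimal at every node of the tree.
def credible(strat):
    return all(strat[i] == max([4, 5, 6], key=lambda b: payoff(a, b)[1])
               for i, a in enumerate(A_actions))

spe = [(a, s) for a, s in nash if credible(s)]
print(spe)  # only B's (6, 5) strategy survives; A brings 5
```

Only the third equilibrium, in which B replies with 6 to 4 yaks and with 5 to 5 yaks, passes the credibility filter.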
1.7 Subgame perfection: The Scorsese example
• In the (poor) 1995 Scorsese film “Casino,” Joe Pesci plays a gangster who
is sent to collect $50,000 from a banker. The banker has the money in a
safe. Pesci threatens the banker with a baseball bat. The problem is that
only the banker knows the code to the safe. If Pesci hits the banker with
the bat, he probably won’t get the money. But he will go to jail. The
banker gives him a knowing look... So, the payoffs look as follows (note:
the Banker moves first):
                                  Pesci
                       Assault                    Not
  Banker  Give Money   −50,000, +50,000+Jail      −50,000, +50,000
          No Money     0+Pain, Jail               0, 0
See Figure 21#3: the Banker moves first (Give Money or No Money); Pesci
then chooses Assault or Not.
• The banker recognizes that it is not subgame perfect for Pesci to assault
him since Pesci only has jail time to gain.
• So, the subgame perfect equilibrium of this game is that the banker keeps
the money and Pesci leaves without committing the assault.
• But this is not what happens... Pesci convinces the banker he is irrational
and gets the money.
• It’s a great scene—and a nice application of game theory.
1.8 Related: Changing your payoffs
• Subgame perfection demonstrates that non-credible threats should not be
believed. This works to the disadvantage of the person making the threats.
There are at least two ways to change the game.
• The Scorsese example demonstrates one way: convince your opponent that
you are irrational. Irrational people may carry out threats that are self-
destructive.
• [North Korea is probably the master of this strategy.]
• A second strategy to make threats credible is, somewhat paradoxically, to
make your payoffs worse—that is, destroy your fallback option.
Centipede game: players 1 and 2 alternate moves; C = continue, D = defect.
Defecting at successive nodes yields payoffs (1, 1), (0, 3), (2, 2),
(1, 4), (3, 3), (2, 5), ..., (98, 98), (97, 100), (99, 99), (98, 101);
if both cooperate at every node, each player receives 100.
• This game is solved by backward induction. Start at the last node, work
your way back. As you can see, cooperation is clearly dominated in the
final play. But then it is dominated in the 2nd to last move. And so on...
• Iterated dominance leaves only one equilibrium, and it is an undesirable
one.
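The unraveling can be reproduced programmatically. This is a sketch of backward induction on the payoff pattern in the figure; the node count (99 moves per player) and the payoff formulas are my reading of the tree.

```python
def stop_payoff(node):
    """Payoffs (u1, u2) if the mover defects at the given node (1-indexed)."""
    i = (node + 1) // 2            # the mover's move count
    if node % 2 == 1:              # player 1 moves at odd nodes: (i, i)
        return (i, i)
    return (i - 1, i + 2)          # player 2 moves at even nodes: (i-1, i+2)

N = 198                            # 99 decision nodes for each player
value = (100, 100)                 # payoff if everyone cooperates to the end
first_defection = None
for node in range(N, 0, -1):       # start at the last node, work backward
    mover = 0 if node % 2 == 1 else 1
    if stop_payoff(node)[mover] > value[mover]:
        value = stop_payoff(node)  # the mover strictly prefers to defect
        first_defection = node
print(first_defection, value)      # 1 (1, 1): defect at the very first move
```

At every node the mover gains exactly one unit by defecting relative to the continuation value, so the dominance argument cascades all the way to node 1.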
• This particular prediction does not have the ring of truth. It puts a
hefty premium on the rationality of both players, and on their belief in
the rationality of one another. It is also not especially well supported
by empirical evidence.
• What would you do if the 1st player did not defect on move one? Game
theory does not make a clear prediction. This is ‘off the equilibrium path.’
• The problem of cooperation appears vexed.
B
Coop Defect
A Coop 5, 5 −1, 10
Defect 10, −1 1, 1
• Suppose this game is repeated for three periods, with discount factor δ.
If both players defect in every period, total discounted payoffs are

    (1, 1) + (δ, δ) + (δ², δ²).
• Obviously, it is advantageous to cooperate.
• But of course in period 3, each ought to defect, since there is no further
payoff to cooperation. And so by backward induction, each should defect
in 2, and then in 1. Hence, no honor among thieves.
— Are you more likely to tip at a local restaurant that you go to often
or at an out-of-town restaurant where you’ll likely never return?
— Many drivers are extremely rude in city traffic. But they would not do
this if those drivers were their neighbors (even annoying neighbors).
— “No one in the history of the world has ever washed a rented car.” —
Lawrence Summers, Economist, President of Harvard University.
• In an infinitely repeated game (or a game with no known end point), there
cannot be backward induction. No player knows when the last period will
occur. So, the unravelling that we’ve seen above may not occur.
• Consider the following strategy: each player announces she will cooperate
so long as the other player does so. But if one player ever defects, the
other will punish her with no further cooperation. This is called the
“grim trigger” strategy. Is it a Nash equilibrium?
• Payoff to cooperation is

    5 + 5δ + 5δ² + 5δ³ + ... = 5/(1 − δ).

Payoff to defection (10 today; thereafter mutual defection pays 1 per
period) is

    10 + δ + δ² + ... = 10 + δ/(1 − δ).
• Cooperation is therefore an equilibrium iff:

    5/(1 − δ) > 10 + δ/(1 − δ),

    (5 − δ)/(1 − δ) > 10,

    5 > 10 − 9δ,

    δ > 5/9 ≈ .556.
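The δ > 5/9 threshold can be confirmed by brute force, truncating the infinite sums at a large horizon (the horizon and the grid of discount factors are arbitrary choices of this sketch):

```python
def coop_value(delta, periods=1000):
    """Discounted payoff from mutual cooperation: 5 per period forever."""
    return sum(5 * delta ** t for t in range(periods))

def defect_value(delta, periods=1000):
    """Defect today for 10; then mutual defection pays 1 per period."""
    return 10 + sum(delta ** t for t in range(1, periods))

# Scan a grid of discount factors for the smallest one at which the
# grim trigger makes cooperation worthwhile.
threshold = min(d / 100 for d in range(1, 100)
                if coop_value(d / 100) > defect_value(d / 100))
print(threshold)  # 0.56, matching the analytic threshold 5/9 ~ 0.556
```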
• So, if the future is sufficiently important, cooperation is sustainable as an
equilibrium.
• Hence, there can be ‘cooperative’ outcomes in non-cooperative game the-
ory, but seemingly only under restrictive conditions.
• Are the predictions of this model too strong?