
Game Theory:

Extensive Form Games

Guillem Roig

Universidad del Rosario

1 / 83
Roadmap

3. Extensive Form Games of Complete Information.

3.1 Structure and Notation.


3.2 Nash Equilibrium.
3.3 Subgame Perfect Nash Equilibrium.
3.4 Illustrations.

3.4.1 A model of Bargaining


3.4.2 The Hold-up Game

3.5 Additional Topics.

3.5.1 Random Public Signals.


3.5.2 Simultaneous Moves.

2 / 83
Extensive Games

I In this representation, we will be able to model the dynamic structure of the


decision processes and predict the behavior of the players.

I This representation will allow us to know:

I In which order individuals play.


I What a player can do along the game.
I What a player knows about the game and about the other players'
choices.

I There is complete information in such a game if each player, when making any
decision, is perfectly informed of all the events that have previously occurred.

3 / 83
Extensive Games
The Chain Store

I A Chain store (CS) has a branch in a city, and faces one potential competitor (C).

I The game proceeds as follows:

I The potential competitor C decides whether to enter the market or not.


I Given C’s choice, CS decides whether to accommodate or fight back.
I The profits of the game are:
I (0, 0) (C’s profit, CS’s profit) if C enters and CS fights back.
I (2, 2) if C enters and CS accommodates.
I (1, 5) if C does not enter.

4 / 83
Extensive Games
The Chain Store
I The graphical illustration of the game.

[Game tree: C chooses Out or In. After Out, payoffs are (1, 5). After In, CS chooses Accommodate, giving (2, 2), or Fight, giving (0, 0).]

5 / 83
Extensive Games
Extensive Game with Perfect Information

Cournot Duopoly: Stackelberg competition


Example 2: Stackelberg Competition
I Cournot duopoly model where firms make decisions sequentially.

I Firm 1, the leader, first chooses how much to produce.
I Then, firm 2, the follower, decides how much to produce.

This game looks like:
[Game tree: firm 1 chooses q1; then firm 2 chooses q2; payoffs are p1(q1, q2) and p2(q1, q2).]

6 / 83
3.1 Structure and Notation

7 / 83
Structure and Notation
I An extensive form game with perfect information is a quintuple:

Γ = (I, K, P, A, (Ui )i∈I ).

I I is the finite set of players.


I K is a tree describing the structure and order of the decision making.
I A tree is a set of connected nodes, where for every pair of nodes, there
is a unique path connecting them.
I The set of nodes can be partitioned into the set X of non-terminal nodes
and the set Z of terminal nodes.
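These components can be made concrete with a small example. The Python sketch below (illustrative, not part of the original slides) encodes the chain-store game of an earlier slide; the nested-tuple representation is just one possible choice.

```python
# Minimal sketch of an extensive form game with perfect information.
# Non-terminal nodes are (player, {action: successor}); terminal nodes are
# payoff tuples ordered as (competitor C, chain store CS).
chain_store = ("C", {
    "Out": (1, 5),                # terminal node: C stays out
    "In": ("CS", {                # CS moves after entry
        "Accommodate": (2, 2),
        "Fight": (0, 0),
    }),
})

# Reading off the primitives of Γ = (I, K, P, A, (Ui)):
# I = {"C", "CS"}; the tree K is the nesting structure; P assigns "C" to the
# root and "CS" to the node after "In"; A gives each node's action set; the
# Ui are the payoff tuples at the terminal nodes.
```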

8 / 83
Structure and Notation
Γ = (I, K, P, A, (Ui )i∈I )

I P is a partition of X, where for every x ∈ X we associate a unique player in I, who


is the one taking a decision in x.

I We can represent P = {X1 , ..., Xn } as a function P : X → I, where for every


i ∈ I, Xi = {x ∈ X | P(x) = i}.

I A is the family of the set actions or choices.

I For every non-terminal node x ∈ X, Ax is the set of possible actions or


choices available to player P(x) at x.

I Ui (z) is player i’s payoff at terminal node z.

9 / 83
I Consider the extensive form game

I The set of non-terminal nodes is X = {x1 , x2 , x3 , x4 }, and the set of terminal nodes
is Z = {z1 , · · · , z10 }.
I The node where player 1 makes a decision is X1 = {x1 }, and the nodes where player 2
makes a decision are X2 = {x2 , x3 , x4 }.
I The sets of possible actions are Ax1 = {l, m, r}, Ax2 = {L, R}, Ax3 = {L, M, R} and
Ax4 = {a, b, c, d, e}.
10 / 83
History of a Game

I At the end of any stage k of the game, the history of the game is the sequence of
actions taken in the previous periods.

hk+1 = (a0 , a1 , ..., ak ).

I Define H as the set of possible histories.


I The set Z ⊂ H is the set of final histories.
I Hi ⊂ H is the subset of histories such that P(h) = i, i.e., the set of histories
where player i makes a choice.
I The initial history ∅ ∈ H.

11 / 83
History of a Game

I For histories that are non-terminal, i.e., h ∈ H\Z, we denote the set of actions
available to player P(h) by A(h) = {a | (h, a) ∈ H}.

I In the initial history ∅ ∈ H, player P(∅) chooses an element of A(∅).

I For each possible choice a1 from this player, player P(a1 ) subsequently
chooses an element of A(a1 ).
I For each possible choice a2 from this player, P(a1 , a2 ) subsequently chooses
an element of A(a1 , a2 ), and so on.

I We represent an extensive form game by

Γ = (I, H, P, (Ui )i∈I ).

12 / 83
I H = {∅, l, m, r, (l, L), (l, R), (m, L), (m, M), (m, R), (r, a), (r, b), (r, c), (r, d), (r, e)}.

I P(∅) = 1 and P(h) = 2 for each h ∈ H\ (Z ∪ {∅}).

13 / 83
Extensive Games
Strategy

I Player i’s strategy for the extensive form game (I, H, P, (Ui )i∈I ) is a mapping si
that assigns an action in A(h) at each h ∈ Hi ,

si : Hi −→ A(h).

I A strategy is not just a contingent plan of actions: it specifies an action at every
history, even at histories that are never reached.

14 / 83
Extensive Games
Strategy

[Game tree: player 1 first chooses between A and B; one choice ends the game with payoff d, the other leads to a node where player 2 chooses between C and D; again one choice ends the game with payoff c, the other leads to a node where player 1 chooses between E and F, with terminal payoffs a and b.]

I Player 1 has four strategies: AE, AF, BE, and BF.

I Question: how many strategies does Player 2 have?

15 / 83
Extensive Games
Strategy

I Every strategy profile s = (s1 , ..., sn ) defines an outcome O(s) = (a1 , ..., aK ) ∈ Z
by

I sP(∅) (∅) = a1
I sP(a1 ) (a1 ) = a2

I sP(a1 ,a2 ) (a1 , a2 ) = a3


I ...

I Thus, player i’s payoff is Ui (O(s)) given a strategy profile s.
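As an illustration (not from the original slides), the outcome O(s) can be computed by simply following each player's strategy along the play path; the sketch below reuses the nested-tuple encoding of the chain-store game.

```python
# Sketch: computing the outcome O(s) of a strategy profile.
chain_store = ("C", {
    "Out": (1, 5),
    "In": ("CS", {"Accommodate": (2, 2), "Fight": (0, 0)}),
})

def outcome(node, strategies, history=()):
    """Follow s_{P(h)}(h) at every reached history until a terminal node."""
    if not isinstance(node[1], dict):          # terminal node: payoff tuple
        return node, history
    player, actions = node
    a = strategies[player](history)            # s_{P(h)}(h)
    return outcome(actions[a], strategies, history + (a,))

# The profile (Out, Fight): CS's strategy still specifies an action after the
# never-reached history ("In",), as a strategy must.
s = {"C": lambda h: "Out", "CS": lambda h: "Fight"}
print(outcome(chain_store, s))                 # ((1, 5), ('Out',))
```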

16 / 83
3.2 Nash Equilibrium

17 / 83
Nash Equilibrium
I An extensive form game with perfect information (I, H, P, (Ui )i∈I ) determines a
normal form game (I, (Si )i∈I , (Ui )i∈I ).
[Game tree: C chooses Out or In. After Out, payoffs are (1, 5). After In, CS chooses Accommodate, giving (2, 2), or Fight, giving (0, 0).]

I The normal form representation of the game is

C/CS Accommodate Fight


In 2, 2 0, 0
Out 1, 5 1, 5

18 / 83
Nash Equilibrium
I To obtain a Nash equilibrium of an extensive form game

1. Represent it in normal form.


2. Apply the definition of a Nash equilibrium. A strategy profile s∗ is a Nash
equilibrium if for each i ∈ I and each s′i ∈ Si ,

Ui (O(s∗ )) ≥ Ui (O(s′i , s∗−i )).

I The chain store game in normal form

C/CS Accommodate Fight


In 2, 2 0, 0
Out 1, 5 1, 5

I has two Nash equilibria {(In, Accommodate), (Out, Fight)}.
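As an illustration (not part of the original slides), the two equilibria can also be found mechanically by testing every strategy pair of the table above for profitable unilateral deviations:

```python
# Sketch: enumerating the pure-strategy Nash equilibria of the chain-store
# normal form by checking unilateral deviations.
payoffs = {  # (C's strategy, CS's strategy) -> (U_C, U_CS)
    ("In", "Accommodate"): (2, 2), ("In", "Fight"): (0, 0),
    ("Out", "Accommodate"): (1, 5), ("Out", "Fight"): (1, 5),
}
C_strats, CS_strats = ["In", "Out"], ["Accommodate", "Fight"]

def is_nash(sc, scs):
    uc, ucs = payoffs[(sc, scs)]
    no_dev_C = all(payoffs[(d, scs)][0] <= uc for d in C_strats)
    no_dev_CS = all(payoffs[(sc, d)][1] <= ucs for d in CS_strats)
    return no_dev_C and no_dev_CS

print([p for p in payoffs if is_nash(*p)])
# [('In', 'Accommodate'), ('Out', 'Fight')]
```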

19 / 83
Nash Equilibrium
[Osborne and Rubinstein, p.91]
I Allocating two identical indivisible objects between two people.

I Player 1 proposes an allocation of the two objects and player 2 either accepts or
rejects the proposal.

I This game has a normal form representation

20 / 83
Nash Equilibrium
[Osborne and Rubinstein, p.91]

I The Proposal game has up to 9 different Nash equilibria {(a,yyy), (a,yny), (a,yyn),
(a,ynn), (a,nny), (a,nnn), (b,nyy), (b,nyn), (c,nny)}.

21 / 83
Nash Equilibrium
I Remember that, the chain store game

C/CS Accommodate Fight


In 2, 2 0, 0
Out 1, 5 1, 5

I has two Nash equilibria {(In, Accommodate), (Out, Fight)}.

I The strategy profile (Out, Fight) is a Nash equilibrium because given that CS
chooses to fight, it is optimal for C to stay out at the start of the game, and given
that firm C chooses to stay out, it is optimal for firm CS to fight.

I But, this equilibrium is not credible.

I The only reason for firm C to stay out of the market is that firm CS will
fight, but this action will never be taken if firm C enters the market.

22 / 83
Nash Equilibrium
I Remember that, the Proposal game

I has 9 different Nash equilibria


{(a, yyy), (a, yny), (a, yyn), (a, ynn), (a, nny), (a, nnn), (b, nyy), (b, nyn), (c, nny)}.

I But all of these equilibria except (a,yyy) and (b,nyy) involve an action for player 2
that is implausible after some history, as he rejects a proposal that gives him at
least one of the objects.

23 / 83
3.3 Subgame Perfect Nash Equilibrium

24 / 83
Subgame Perfect Nash Equilibrium

I This equilibrium concept takes into account the sequential structure of the
decision problem.

I By applying this equilibrium concept, the number of Nash equilibria will be


reduced. Hence, we have a refinement of Nash equilibrium.

25 / 83
Backward Induction

I Backward induction is the players' process of reasoning backwards in time, from


the end of the game, to determine a sequence of optimal actions.
I This logic generalises to arbitrary finite horizon extensive games with
perfect information.

I Backward induction is the following procedure:

I Let L < ∞ be the maximum length of all histories.


I Find all nonterminal histories of (L − 1)-length and assign an optimal
action there. Eliminate unreached L-length terminal histories.
I Find all nonterminal histories of (L − 2)-length and assign an optimal
action there. Eliminate unreached (L − 1)-length terminal histories.
I And so on...
I Since L < ∞, the procedure would stop in finite time.
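A minimal Python sketch of the procedure above, applied to the chain-store game (both the tree representation and the example are illustrative, not part of the original slides):

```python
# Sketch: backward induction on a finite tree with perfect information.
# At each decision node the acting player picks the action that maximizes
# her own coordinate of the continuation payoff.
PLAYERS = ["C", "CS"]                     # payoff coordinate of each player

def is_terminal(node):
    return not isinstance(node[1], dict)

def backward_induction(node, history=()):
    """Return (continuation payoff, optimal action at every reached subgame)."""
    if is_terminal(node):
        return node, {}
    player, actions = node
    i = PLAYERS.index(player)
    plan, best = {}, None
    for a, child in actions.items():
        payoff, sub_plan = backward_induction(child, history + (a,))
        plan.update(sub_plan)
        if best is None or payoff[i] > best[1][i]:
            best = (a, payoff)
    plan[history] = best[0]               # optimal action after this history
    return best[1], plan

chain_store = ("C", {
    "Out": (1, 5),
    "In": ("CS", {"Accommodate": (2, 2), "Fight": (0, 0)}),
})
print(backward_induction(chain_store))
# ((2, 2), {('In',): 'Accommodate', (): 'In'})
```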

26 / 83
Backward Induction
I Applying this procedure in the chain store game

I To accommodate is the optimal action for CS if firm C has entered the industry.
Given that CS accommodates, it is optimal for C to enter the industry.

I Then, this procedure gives an outcome (In, Accommodate) which is the


“reasonable" Nash equilibrium.

27 / 83
Backward Induction
I In the proposal game, the backward induction procedure gives:

I the optimal actions for player 2 following any proposal from player 1, and

I the outcomes (a,yyy) and (b,nyy), which are the two “reasonable" Nash equilibria.

28 / 83
Subgame Perfect Nash Equilibrium
The Concept of a Subgame

I At any history, the “remaining" game can be regarded as an extensive game on its
own, which is called a subgame.

I The subgame of the extensive form game with perfect information of


(I, H, P, (Ui )i∈I ) that follows history h ∈ H\Z is the extensive form game
(I, H|h , P|h , (Ui |h )i∈I ) satisfying the following conditions:
I h′ ∈ H|h ⇐⇒ (h, h′ ) ∈ H.
I P|h (h′ ) = P(h, h′ ) for any h′ ∈ H|h .
I Ui |h (h′ ) = Ui (h, h′ ) for any terminal history h′ ∈ Z|h ⊂ H|h .
I A trivial example of a subgame is the original game itself.

29 / 83
Subgame Perfect Nash Equilibrium
The Concept of a Subgame
[Game tree: player 1 first chooses between A and B; one choice ends the game with payoff d, the other leads to a node where player 2 chooses between C and D; again one choice ends the game with payoff c, the other leads to a node where player 1 chooses between E and F, with terminal payoffs a and b.]

I This game has three different subgames.

1. The original game.


2. The game starting from the node where player 2 has to make a decision.
3. The game where player 1 has to decide between action E or F.

30 / 83
Subgame Perfect Nash Equilibrium
Strategies

I For each strategy si ∈ Si , denote the continuation strategy after history h ∈ H\Z
by si |h .

I This is a strategy for the subgame (I, H|h , P|h , (Ui |h )i∈I ) that satisfies:

si |h (h′ ) = si (h, h′ ) for each h′ such that (h, h′ ) ∈ H\Z.

I Let Si |h be the set of all strategies for player i for (I, H|h , P|h , (Ui |h )i∈I ).

31 / 83
Subgame Perfect Nash Equilibrium

I A strategy profile is a subgame perfect equilibrium if after any non-terminal


history it constitutes a Nash equilibrium.

I In a finite extensive game with perfect information (I, H, P, (Ui )i∈I ):

I A strategy profile s∗ ∈ S is a subgame perfect equilibrium if for each i ∈ I,


each h ∈ H\Z, and each si ∈ Si |h ,

Ui |h (O(s∗ |h )) ≥ Ui |h (O(si , s∗−i |h )).

I Each subgame perfect equilibrium is a Nash equilibrium, but the converse is not
true.

I For any finite horizon extensive game with complete information, the set of
subgame perfect equilibria is exactly the set of strategy profiles that can be found
by using backward induction.

32 / 83
Subgame Perfect Nash Equilibrium

I The subgame perfect Nash equilibrium requires strategies to constitute a Nash
equilibrium in all subgames.

I This is a daunting task: there are a large number of incentive constraints that need to be
verified.

I For finite extensive form games of complete information we can take advantage
of a result that gives a much simpler condition that is easier to check, the
one-shot deviation principle.

33 / 83
Subgame Perfect Nash Equilibrium
One Shot Deviation Principle

I A strategy profile s satisfies the one-shot deviation principle if no player can increase his
payoffs in any subgame through a one-shot deviation:

I Deviating from strategy s in the first period of the subgame, and then
reverting back to the strategy in s for the rest of the game.

I The one-shot deviation principle for finite extensive form games of complete
information states that we just need to check the incentive constraints with
respect to one-shot deviations.
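On the chain-store game, the one-shot check can be sketched in a few lines of Python (illustrative, reusing the tree encoding from earlier): at each non-terminal history we compare the acting player's continuation payoff under the candidate profile with what she would get by deviating once and then reverting to the profile.

```python
# Sketch: one-shot deviation check on the chain-store game.
chain_store = ("C", {
    "Out": (1, 5),
    "In": ("CS", {"Accommodate": (2, 2), "Fight": (0, 0)}),
})
PLAYERS = ["C", "CS"]

def is_terminal(node):
    return not isinstance(node[1], dict)

def play(node, profile, history=()):
    """Payoffs reached when everyone follows `profile` from this node on."""
    if is_terminal(node):
        return node
    a = profile[history]
    return play(node[1][a], profile, history + (a,))

def passes_one_shot_check(node, profile, history=()):
    if is_terminal(node):
        return True
    player, actions = node
    i = PLAYERS.index(player)
    on_path = play(node, profile, history)[i]
    for a, child in actions.items():
        # deviate to `a` once, then revert to the profile
        if play(child, profile, history + (a,))[i] > on_path:
            return False
        if not passes_one_shot_check(child, profile, history + (a,)):
            return False
    return True

spe = {(): "In", ("In",): "Accommodate"}
nash_only = {(): "Out", ("In",): "Fight"}       # a Nash equilibrium, not an SPE
print(passes_one_shot_check(chain_store, spe))        # True
print(passes_one_shot_check(chain_store, nash_only))  # False: CS gains by
                                                      # accommodating after "In"
```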

34 / 83
Subgame Perfect Nash Equilibrium
One Shot Deviation Principle

Theorem
For a finite extensive form game with complete information (I, H, P, (Ui )i∈I ),
a strategy profile s∗ ∈ S is a subgame perfect equilibrium if and only if for
each h ∈ H\Z and each one-shot deviation si ∈ Si |h from s∗i |h for i = P(h), we
have
Ui |h (O(s∗ |h )) ≥ Ui |h (O(si , s∗−i |h )).

I The one-shot deviation principle does not hold in general for infinite horizon games.

I For infinite games, to apply the one-shot deviation principle, we will need to add
an extra requirement.
I The infinite game will need to satisfy continuity at infinity, that is, payoffs
in the very far future are not important.

35 / 83
Subgame Perfect Nash Equilibrium
Existence

Theorem
[Selten, 1965] Every finite game in extensive form has at least one subgame
perfect Nash equilibrium (not necessarily in pure strategies).

I The idea of the proof:

I Apply backward induction arguments in all possible subgames and then


apply the theorem of existence that we stated for normal form games.

36 / 83
Subgame Perfect Nash Equilibrium
Chain-store with K potential competitors

I Consider the chain-store game played sequentially against K potential competitors
in K different cities.

I In market k, competitor Ck chooses either “In" or “Out" given the histories in the
previous k − 1 markets.

I CS chooses whether to accommodate or fight.

I The payoff of CS is the sum of its payoffs in all K markets.

37 / 83
Subgame Perfect Nash Equilibrium
Chain-store with K = 2 potential competitors

[Game tree: C1 chooses Out or In; after In, CS chooses Accommodate or Fight. In each of the three resulting situations C2 chooses Out or In, and after In, CS again chooses Accommodate or Fight. Payoffs are triples (C1's payoff, C2's payoff, CS's total payoff over both markets): (1, 1, 10) if both stay out; (2, 1, 7) and (0, 1, 5) if only C1 enters and CS accommodates or fights; (1, 2, 7) and (1, 0, 5) if only C2 enters; (2, 2, 4), (2, 0, 2), (0, 2, 2) and (0, 0, 0) for the four combinations when both enter.]

38 / 83
Subgame Perfect Nash Equilibrium
Chain-store with K = 2 potential competitors
I The unique subgame perfect Nash equilibrium is:
[Same game tree as above, with the backward-induction choices highlighted: every competitor chooses In and CS chooses Accommodate at every node, leading to the outcome (2, 2, 4).]

I Every competitor always enters and the chain store always accommodates.

39 / 83
Subgame Perfect Nash Equilibrium
The Centipede Game

I Each of two players can choose U or D.

I After choosing D the game ends.

I The payoffs increase the more the players stay in the game.

[Game tree: five decision nodes, with players alternating 1, 2, 1, 2, 1. Choosing D at the successive nodes ends the game with payoffs 2,0; 1,2; 4,1; 3,4; 6,3; choosing U at every node ends the game with payoffs 8,6.]

40 / 83
Subgame Perfect Nash Equilibrium
The Centipede Game
I The unique subgame perfect equilibrium of the game is:

I Both players choose U in each round.

I This equilibrium strategy gives equilibrium payoff of (8, 6).

[Same game tree as above; the backward-induction choice at every node is U.]

41 / 83
Subgame Perfect Nash Equilibrium
The Centipede Game (perturbed payoffs)
I Perturbing the payoffs of the same game as follows:
[Game tree: now six decision nodes, with players alternating 1, 2, 1, 2, 1, 2. Choosing D at the successive nodes ends the game with payoffs 1,0; 0,2; 3,1; 2,4; 5,3; 4,6; choosing U at every node ends the game with payoffs 6,5.]
I The unique subgame perfect Nash equilibrium is that every player stops at each
round.
[Same game tree, with the actions relabelled C (continue) and S (stop); the backward-induction choice at every node is S.]

42 / 83
Subgame Perfect Nash Equilibrium

I The subgame perfect Nash equilibrium is one of the most unchallenged
refinements and the most commonly used in economics.

I In the Centipede game, is the prediction given by subgame perfect equilibrium


reasonable?

I This equilibrium concept assumes full rationality and that full rationality is also
common knowledge.

I It requires the infinite regression that “player 1 knows that player 2 knows that
player 1 knows .... knows that player 2 is rational".

43 / 83
Subgame Perfect Nash Equilibrium
I In general Subgame Perfect Nash Equilibrium (SPNE) requires two different
things:

1. SPNE gives a solution everywhere (in all subgames), even in subgames
that the solution itself says will not be reached.
2. SPNE imposes rational behavior everywhere, even in the subgames of the
game that SPNE says cannot be reached.

I In out-of-equilibrium subgames, the prescribed play has already been
contradicted, yet players evaluate their actions taking as given the
behavior of the other players, behavior that has been shown to be
incorrect, since we are on an out-of-equilibrium path.

44 / 83
3.4 Illustrations

45 / 83
3.4.1 A Model of Bargaining

46 / 83
A Model of Bargaining

I Groups of people often have to collectively choose an outcome in a situation in


which unanimity about the best outcome is lacking.
I The agreement over a price of a good between a seller and a buyer.
I Wage setting between a firm and a union.
I Peace negotiations.

I We look at the problem of bargaining, in which agents try to reach an agreement.

I We follow the positive approach of bargaining: we specify what players can do in


the bargaining process and look for the equilibrium.

47 / 83
A Model of Bargaining
Bilateral Bargaining. [Rubinstein, 1982]
I Two players: I = {1, 2}.
I Bargain on how to divide one unit of a good (a pie): X = [0, 1].
I Bargaining takes place over (discrete) time: t = 0, 1, 2, ..., K.

I The bargaining process follows the rules:

I At the start of the game, player 1 makes an offer x1 ∈ [0, 1], player 2 accepts
or rejects the offer. If the offer is accepted, then the game is over and the
players receive (1 − x1 , x1 ).
I If player 2 rejects the offer, the game continues to the next period, t = 1,
when player 2 makes an offer x2 ∈ [0, 1], then player 1 accepts or rejects the
offer. If the offer is accepted, the game is over and the players receive
(x2 , 1 − x2 ).
I If player 1 rejects the offer, the game continues to the next period, t = 2, and
it is again the turn for player 1 to make the offer.

48 / 83
Bilateral Bargaining. [Rubinstein, 1982]
Rules cont’

I The game continues until an offer is accepted by one of the players, or until
period t = K is reached.

I If no agreement is reached after K rounds of negotiation, each player obtains a


status quo payoff of zero.

I Assume that players prefer to reach an agreement as soon as possible,
and denote by δi player i's discount factor.

49 / 83
50 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I The outcome of this game depends on the rounds of negotiation.
I If K = 1 the negotiation game is illustrated by

I This game is trivial: because player 1 is the only one making an offer, the only
subgame perfect equilibrium is the one in which he proposes to give no pie to player 2.

51 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I With two potential rounds of negotiation K = 2, if player 2 rejects the proposal
made by player 1, player 2 has the opportunity to make a proposal.

52 / 83
Bilateral Bargaining. [Rubinstein, 1982]
We apply backward induction.

I Consider a situation where the game has reached the second round of
negotiation. Player 2 makes a proposal.
I Because there is no further round of negotiation, every strictly positive offer
is accepted by player 1.
I The equilibrium offer must be 0, which must be accepted by player 1.

I At the first round of negotiation.

I Player 1 has to make a proposal to player 2.


I Player 1 knows that if player 2 rejects his proposal, the game moves to
round two and player 2 will make a proposal in which player 1 does not
obtain anything.
I The only chance for player 1 to obtain any positive share of the pie is to make
a proposal in the first round to player 2 that is accepted.
I Player 1 must make a proposal x1 to player 2 such that player 2 is indifferent
between accepting and rejecting the offer.

53 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I Because players are impatient, the allocation (0, 1) that player 2 proposes in the
second round is worth only δ2 to player 2 in the first round.
I Therefore, a proposal x1 = δ2 is immediately accepted by player 2.

54 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I The bargaining game with two potential rounds of negotiation has a unique
subgame perfect equilibrium.

I Player 2: offers 0 in period 2, and accepts x1 if and only if x1 ≥ δ2 in period
1.
I Player 1: offers δ2 in period 1, and accepts any offer in period 2.

55 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I By considering three rounds of negotiation.

56 / 83
Bilateral Bargaining. [Rubinstein, 1982]
We apply backward induction.

I If the game arrives at the third round of negotiation. (Player 1 makes a proposal)

I Because there are no further periods of negotiation, every strictly positive
offer of player 1 is accepted by player 2.
I The equilibrium offer must be 0, and this is accepted by player 2.

I In the second round of negotiation. (Player 2 makes the proposal)

I Player 1 accepts an offer if and only if x2 ≥ δ1 .


I Player 2 offers x2 = δ1 and this is accepted by player 1.

I In the first round of negotiation. (Player 1 makes the proposal)

I Player 2 guarantees a payoff of δ2 (1 − δ1 ) if he rejects the proposal of player 1.


I Any offer by player 1 such that x1 ≥ δ2 (1 − δ1 ) is accepted by player 2.
I Player 1 makes the offer x1 = δ2 (1 − δ1 ), which is accepted by player 2.
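The two- and three-round results above follow the same backward recursion; a minimal Python sketch for a general finite K (the discount factors are illustrative values, not from the slides):

```python
# Sketch: backward induction in the K-round alternating-offers game.
# Returns the share the first proposer (player 1) keeps; the equilibrium
# offer to player 2 is 1 minus that share.
def proposer_keep(K, d1, d2):
    keep = 1.0                              # the last proposer keeps the whole pie
    for t in range(K - 2, -1, -1):          # work backwards through earlier rounds
        responder_delta = d2 if t % 2 == 0 else d1   # player 1 proposes at even t
        keep = 1 - responder_delta * keep   # the responder is offered exactly her
                                            # discounted continuation value
    return keep

d1, d2 = 0.9, 0.8
print(1 - proposer_keep(2, d1, d2))   # K = 2: offer x1 = d2            ~ 0.80
print(1 - proposer_keep(3, d1, d2))   # K = 3: offer x1 = d2 * (1 - d1) ~ 0.08
```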

57 / 83
Bilateral Bargaining. [Rubinstein, 1982]

58 / 83
Bilateral Bargaining. [Rubinstein, 1982]
I The model with K finite rounds of negotiation has two potential drawbacks

1. The solution depends on the length of the game and on the identity of the
player who gets to make the last offer.
2. With a last period, if the last offer is rejected, the players are not allowed to
continue to try to reach an agreement. However, in situations when there is
no outside option, it is natural to assume that players keep on bargaining as
long as they do not reach an agreement.

I To deal with these limitations, we consider the bargaining game with infinite
negotiation stages, that is, K = ∞.

59 / 83
Bilateral Bargaining. [Rubinstein, 1982]
Infinite Rounds of Negotiation.

I In an infinite game, we cannot apply backward induction.

I The one-shot deviation principle still holds as this game satisfies continuity at
infinity.

60 / 83
Infinite Rounds of Negotiation
Construction of an equilibrium
I When it is player i’s turn to make an offer:

I Let Mi be the supremum of player i’s SPNE payoffs.


I Let mi be the infimum of player i’s SPNE payoffs.

I Any offer made by player 2 such that x2 > δ1 M1 will be accepted by player 1.

I The minimum continuation payoff that player 2 can guarantee by himself
is m2 ≥ 1 − δ1 M1 .

I Any offer made by player 1 such that x1 < δ2 m2 will be rejected by player 2.

I When player 1 makes the offer, the maximum that he can get is 1 − δ2 m2 .
I When player 2 makes the offer, the maximum that player 1 can get is δ1 M1 .
I Hence, the maximum continuation payoff that player 1 can guarantee by himself is
M1 ≤ max {1 − δ2 m2 , δ1 x2 } = max {1 − δ2 m2 , δ1² M1 } = 1 − δ2 m2 .

61 / 83
Infinite Rounds of Negotiation
Construction of an equilibrium cont’

I Because m2 ≥ 1 − δ1 M1 , then M1 ≤ (1 − δ2 )/(1 − δ1 δ2 ).

I The same procedure allows us to obtain that the maximum continuation payoff
that player 2 can guarantee by himself is

M2 ≤ (1 − δ1 )/(1 − δ1 δ2 ).

I Any offer made by player 1 such that x1 > δ2 M2 would be accepted by player 2

I The minimum continuation payoff that player 1 can guarantee by himself is


m1 ≥ 1 − δ2 M2 .
I By introducing M2 , then
m1 ≥ 1 − δ2 (1 − δ1 )/(1 − δ1 δ2 ) = (1 − δ2 )/(1 − δ1 δ2 ).

62 / 83
Infinite Rounds of Negotiation
Construction of an equilibrium cont’

I Because by definition M1 ≥ m1 , then
M1 = m1 = (1 − δ2 )/(1 − δ1 δ2 ).

I The same procedure allows us to obtain that
M2 = m2 = (1 − δ1 )/(1 − δ1 δ2 ).

63 / 83
Infinite Rounds of Negotiation
Equilibrium offers

I Because any offer x1 > δ2 M2 = δ2 (1 − δ1 )/(1 − δ1 δ2 ) is accepted by player 2,

I and any offer x1 < δ2 m2 = δ2 (1 − δ1 )/(1 − δ1 δ2 ) is rejected by player 2.

I The unique equilibrium offer of player 1 is
x1 = δ2 (1 − δ1 )/(1 − δ1 δ2 ).
I The unique equilibrium offer of player 2 is
x2 = δ1 (1 − δ2 )/(1 − δ1 δ2 ).
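A quick numerical check (with illustrative values of δ1 and δ2, not from the slides) confirms that these offers leave each responder exactly indifferent between accepting today and proposing tomorrow:

```python
# Sketch: verifying the stationary equilibrium offers of the infinite game.
d1, d2 = 0.9, 0.8

x1 = d2 * (1 - d1) / (1 - d1 * d2)    # player 1's equilibrium offer to player 2
x2 = d1 * (1 - d2) / (1 - d1 * d2)    # player 2's equilibrium offer to player 1
M1 = (1 - d2) / (1 - d1 * d2)         # player 1's share when he proposes
M2 = (1 - d1) / (1 - d1 * d2)         # player 2's share when she proposes

print(abs((1 - x1) - M1) < 1e-12)     # proposer 1 keeps exactly M1
print(abs(x1 - d2 * M2) < 1e-12)      # player 2 indifferent: x1 = d2 * M2
print(abs(x2 - d1 * M1) < 1e-12)      # player 1 indifferent: x2 = d1 * M1
```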

64 / 83
Infinite Rounds of Negotiation
Properties of the equilibrium

I The equilibrium is unique and efficient:

I The equilibrium is unique: there is no other equilibrium of the infinite-horizon
bilateral bargaining game. (Prediction)
I Efficiency comes from the fact that in the beginning of the game player 1
makes the offer x1 = δ2 (1 − δ1 )/(1 − δ1 δ2 ) and this is immediately accepted
by player 2.

I There is no delay in reaching an agreement.


I However, this result may be unrealistic, as delay is almost always
present in all real world bargaining.

65 / 83
Infinite Rounds of Negotiation
Properties of the equilibrium cont’
I Any player i with a higher discount factor δi will get more at any point of the
game for each discount factor of the rival δ−i .

I An infinitely patient player (δi → 1) appropriates the whole pie.

I The payoff converges to 0 if the player is infinitely impatient (δi → 0).
I The result depends on whether the player is the first to propose: if he
is the first to propose, he would still get 1 − δ−i even when δi = 0.
I If both players have the same discount factor δ1 = δ2 = δ:
I The SPNE payoff profile is (1/(1 + δ), δ/(1 + δ)).
I This converges to (0.5, 0.5) as they become infinitely patient δ → 1.
I The strategic bargaining proposed by Rubinstein converges to the
Nash solution where each party gets half of the pie.
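A small numerical check of the symmetric case (values illustrative, not from the slides):

```python
# Sketch: with a common discount factor d, the SPNE payoff profile
# (1/(1+d), d/(1+d)) approaches the equal split (0.5, 0.5) as d -> 1.
for d in (0.5, 0.9, 0.99, 0.999):
    print(d, (1 / (1 + d), d / (1 + d)))
# d = 0.999 gives approximately (0.50025, 0.49975)
```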

66 / 83
Infinite Rounds of Negotiation
Properties of the equilibrium cont’

I Rubinstein's result is not robust to many important considerations:

I Incomplete information.
I Existence of outside options.
I If there are more than two players bargaining, n > 2, then the results depend on
the particular bargaining protocol.

67 / 83
Multilateral Bargaining
Baron and Ferejohn (1989)
I n players have to decide how to allocate $1 among them.

I Denote by X = {x ∈ Rn | Σi xi ≤ 1} the set of feasible allocations.

I The game is played for K periods:

I In any period t, a player is chosen at random to be the proposer.


I With n players, the probability of being elected as the proposer is 1/n.
I The proposer suggests how to divide $1, i.e., chooses a vector
x = (x1 , ..., xn ) from the set X.
I After the proposal, players vote publicly.
I If approved by a majority, the proposal is implemented and the game ends.
I If not approved, the game moves on to the next period, and a player is
again randomly selected to make a proposal.
I If no proposal is approved by the end of the game, nobody receives any
payoff.

68 / 83
Multilateral Bargaining
Two rounds of voting, K = 2
I With two potential rounds of voting, K = 2, we use the concept of backward
induction.

I If the game reaches the second round of voting:

I a proposer can request everything.

I In the first period of voting:

I The proposer can buy one vote by paying the discounted expected payoff
of going to the second round, δ/n.
I The proposer pays δ/n to (n − 1)/2 voters, who vote in favor of his proposal.
I This proposal obtains a majority of votes and is implemented.
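For example (an illustrative calculation, not part of the slides), the proposer's first-round share is 1 − [(n − 1)/2] · δ/n:

```python
# Sketch: the proposer's first-round share with K = 2 rounds of voting.
# She buys (n - 1) / 2 votes at price d / n each and keeps the remainder.
def proposer_share(n, d):
    votes_bought = (n - 1) // 2      # assumes n is odd
    return 1 - votes_bought * d / n

print(proposer_share(n=5, d=0.9))    # 1 - 2 * 0.18 = 0.64
```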

69 / 83
Multilateral Bargaining
Infinite rounds of voting, K = ∞
I Restricting attention to symmetric stationary SPNE, the result is similar to the case
with only two potential rounds of voting.

Theorem
For any δ ∈ (0, 1), there is a unique symmetric stationary subgame perfect
equilibrium, where the proposer distributes δ/n to (n − 1)/2 randomly selected
players, and any player i votes in favor of the proposal if and only if the
proposal assigns player i at least δ/n.

I In a symmetric stationary equilibrium, the distribution of proposals is the same
independently of histories, and every player expects to be treated symmetrically by
the proposer.
I Dropping stationarity, many allocations can be supported by a SPNE.
I Any allocation can be supported if there are many players and the players
are sufficiently patient. (More in Chapter 5 with infinite games).

70 / 83
3.4.2 The Hold-up Game

71 / 83
The Hold-up Game
I Two players play an ultimatum game.

I Player 1 makes a proposal to player 2.

I Player 2 either accepts or rejects the proposal.

I Before engaging in this ultimatum game, person 2 takes an action that affects the
size c of the pie to be divided.
I She may exert:
I No effort (not invest). Results in a small pie, of size cL .
I Effort (invest). Results in a large pie, of size cH .
I Person 2 dislikes exerting effort.
I Assume that her payoff is x − E if her share of the pie is x, where E is
the cost of exerting effort.
I Exerting effort is efficient because cH − cL > E.

72 / 83
The Hold-up Game
I The graphical representation of the game.

73 / 83
The Hold-up Game
Equilibrium
I Each subgame that follows player 2's choice of effort is an ultimatum game.

I Each has a unique subgame perfect Nash equilibrium: person 1 offers
x = 0 and person 2 accepts all offers.

I Player 2's choice of effort at the beginning of the game:

I If she chooses not to make an effort then her payoff, given the outcome in
the following subgame, is 0.
I If she chooses to undertake effort then her payoff is −E.
I She chooses not to undertake any effort.

I The game has a unique subgame perfect equilibrium, in which player 2 exerts no
effort and player 1 obtains all of the resulting small pie.

74 / 83
The Hold-up Game
Properties of the equilibrium

I The equilibrium does not depend on the values of cL , cH and E.

I Even if cH is much larger than cL and E is very small, player 2 exerts no
effort in equilibrium.

I Both players could be better off if player 2 were to exert effort and she were to
obtain some of the extra pie.

I No such superior outcome is sustainable in equilibrium because player 2, having


exerted effort, will be “held-up" and the entire pie will go to player 1.

75 / 83
3.5 Additional Topics

76 / 83
3.5.1 Random Public Signal
I Consider situations in which there is some exogenous uncertainty.

I We incorporate random public signals into an extensive game.

I An extensive game with perfect information and chance moves


(I, H, P, fc , (Ui )i∈I ) is an extensive game with perfect information and exogenous
randomization.
I P maps non-terminal histories to I ∪ {c}.
I Hc is the set of histories such that P(h) = c, chance determines the action
taken after the history h.
I For any h ∈ Hc , fc (h) ∈ Δ(A(h)) assigns an action a ∈ A(h) with probability
fc (a|h).

77 / 83
3.5.1 Random Public Signal
I The strategy for each player is defined as before and the outcome of a strategy
profile is a probability distribution over terminal histories.

78 / 83
3.5.1 Random Public Signal
Equilibrium: We apply backward induction

I Player 2 chooses C.

I If player 1 chooses A, he obtains 1. If he chooses B, he gets 3 with probability
1/2, and with probability 1/2 it will be the turn of player 2 to play.

I Player 1's expected payoff from choosing B is larger than from choosing A, as
3/2 > 1.

I The unique subgame perfect equilibrium is (B, C).

79 / 83
3.5.2 Simultaneous Moves
The Chain Store with simultaneous moves

I Players move simultaneously after certain histories.

I Each of them is fully informed of all past events when making his choice.

I An extensive game with perfect information where players may move


simultaneously is represented by (I, H, P, (Ui )i∈I ).

I A history h ∈ H is a sequence of action profiles.


I P maps non-terminal histories to a subset of I.
I The choices of the players in P(h) are made simultaneously after history h.

80 / 83
3.5.2 Simultaneous Moves
The Chain Store with simultaneous moves

I If player 1 chooses “In", players 1 and 2 play a simultaneous-move game in which
each player can choose either C or D.

81 / 83
3.5.2 Simultaneous Moves
The Chain Store with simultaneous moves

I Expressing the whole game in normal form and obtaining the best responses

1/2 C D
In C 3, 1 0, 0
In D 0, 0 1, 3
Out C 2, 2 2, 2
Out D 2, 2 2, 2

I There are three different Nash equilibria, NE = {(InC, C); (OutC, D); (OutD, D)}

82 / 83
3.5.2 Simultaneous Moves
The Chain Store with simultaneous moves
I The extensive form game allows us to obtain the subgame perfect Nash
equilibria.
I Solving the game backwards, the simultaneous-move subgame that follows entry has
two Nash equilibria, (C, C) and (D, D).

I Moving backwards, if the subgame equilibrium is (C, C), player 1 enters (3 > 2);
if it is (D, D), entering yields 1 < 2 and player 1 stays out, so the profile
(InD, D) is not subgame perfect.

I The two subgame perfect Nash equilibria of the game are:
SPNE = {(InC, C), (OutD, D)}.
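A short Python sketch (illustrative, not from the slides) reproduces this reasoning: it computes the Nash equilibria of the post-entry 2×2 game and then lets player 1 compare entering under each of them with the outside payoff of 2.

```python
# Sketch: solving the chain store with simultaneous moves backwards.
post_entry = {("C", "C"): (3, 1), ("C", "D"): (0, 0),
              ("D", "C"): (0, 0), ("D", "D"): (1, 3)}
acts = ["C", "D"]

def nash_2x2(payoffs):
    eq = []
    for a1 in acts:
        for a2 in acts:
            u1, u2 = payoffs[(a1, a2)]
            if all(payoffs[(b, a2)][0] <= u1 for b in acts) and \
               all(payoffs[(a1, b)][1] <= u2 for b in acts):
                eq.append((a1, a2))
    return eq

for a1, a2 in nash_2x2(post_entry):          # ('C', 'C') and ('D', 'D')
    decision = "In" if post_entry[(a1, a2)][0] > 2 else "Out"
    print(decision + a1, a2)                 # "InC C" and "OutD D"
```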

83 / 83
