5) Three oligopolists operate in a market with inverse demand function given by P(Q) = a − Q, where Q = q1 + q2 + q3 and qi is the quantity produced by the ith firm. Each firm has a constant marginal cost of production, c, and no fixed cost. The firms choose their quantities as follows:
a) Firm 1 chooses q1 ≥ 0.
b) Firms 2 and 3 observe q1 and simultaneously choose q2 and q3, respectively.
What is the sub-game perfect outcome of the game?
6) Find the equilibria of the following extensive form games
UNIT 25 REPEATED GAMES
Structure
25.0 Objectives
25.1 Introduction

25.2 Repeated Games


25.2.1 Two-Stage Repeated Games
25.2.2 Finitely Repeated Games
25.2.3 Infinitely Repeated Games
25.3 Bilateral Bargaining
25.4 Friedman's Theorem
25.5 Collusion between Cournot Duopolists
25.6 Let Us Sum Up
25.7 Key Words
25.8 Some Useful Books
25.9 Answer or Hints to Check Your Progress
25.10 Exercises

25.0 OBJECTIVES

After going through this unit, you will be able to:


assess the change in behaviour of an agent in repeated games;
understand the equilibrium in bilateral bargaining; and
present the collusive outcome in the Cournot duopoly model.

25.1 INTRODUCTION
First we will deal with games that are repeated twice. We will then explore finitely repeated games and, finally, infinitely repeated games. We will discuss the celebrated Friedman's theorem. As an application of the theory, you will see what happens when the Cournot duopoly model is analysed under infinite repetition. You will also apply game theory to bargaining problems, which arise in almost every social, economic and commercial issue, domestic as well as international.
25.2 REPEATED GAMES
In this unit we analyse whether threats and promises about the future can affect current behaviour when a game is played repeatedly, the payoffs being collected after each play. We will define the concept of sub-game perfect Nash equilibrium for repeated games.
25.2.1 Two-Stage Repeated Games
In a two-stage repeated game, players play the game once again after the payoffs of the first play have been collected. Consider the Prisoners' Dilemma given in normal form in the following table (remember that the Prisoners' Dilemma is a class of games; therefore, the payoffs of this game differ from those described in Unit 23).

The Prisoners' Dilemma stage game is:

                          Player 2
                        L2        M2
    Player 1    L1     1, 1      5, 0
                M1     0, 5      4, 4

Suppose the players play the game twice, and they observe the outcome of the
first game before playing it once again. We further assume that the payoff of the entire game, taking the two stages together, is simply the sum of the payoffs from the two stages (we assume there is no discounting; that is, the same amount of payoff in any period gives the same amount of utility. Generally, if we think in monetary terms, we prefer Rs. 100 today to Rs. 100 tomorrow. This is because a discount rate is embedded in our inter-temporal choice decisions).
It is easy to see that the stage game has a unique Nash equilibrium, (L1, L2), with payoff (1, 1). Working backwards, the players will play (L1, L2) in the second stage, and we add this second-stage equilibrium payoff to the payoffs of the first-stage game. Since adding the same amount to every cell does not change the players' incentives, the unique sub-game perfect outcome of the two-stage prisoners' dilemma is (L1, L2) in both stages.
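For concreteness, adding the second-stage equilibrium payoff (1, 1) to every cell of the stage game above gives the reduced first-stage game (a sketch based on the payoffs in the table above):

                          Player 2
                        L2        M2
    Player 1    L1     2, 2      6, 1
                M1     1, 6      5, 5

Its unique Nash equilibrium is again (L1, L2), confirming the backward-induction argument.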
25.2.2 Finitely Repeated Games
The argument stated above holds more generally. We generalise the concept of repeated games to games with a finite number of repetitions. Let G = {A1, A2, ..., An; u1, u2, ..., un} denote a static game of complete information in which players 1 through n simultaneously choose actions a1 through an from their action spaces A1 through An, respectively. This game is repeated T times, with the outcome of each play being collected and observed before the next play. The game G is called the stage game of the repeated game.
Definition: Given a stage game G, let G(T) denote the finitely repeated game
in which G is played T times, with the outcome of all the previous games
observed before the next play begins. The payoff for G(T) is simply the sum
of the payoffs from the T stage games.
Proposition: If a stage game G has a unique Nash equilibrium then, for any
finite T, the repeated game G(T) has a unique sub-game perfect Nash
equilibrium: the Nash equilibrium of G being played in every stage of the
game.
It is interesting to investigate if there is more than one Nash equilibrium in the
stage game itself. What will be the nature of the equilibrium when the game is
repeated? We consider a simple two-period game, which is an extension of the prisoners' dilemma such that there are two Nash equilibria in it. In addition to the strategies L1 and M1, we add another strategy R1 at the disposal of player 1. Similarly, we add the strategy R2 to the strategy space of player 2. The game is described below in normal form.
[Payoff matrix of the extended stage game, with strategies L1, M1, R1 for Player 1 and L2, M2, R2 for Player 2; (L1, L2) gives (1, 1), (M1, M2) gives (4, 4) and (R1, R2) gives (3, 3).]
As a result of adding the two strategies and the associated payoffs, there are now two pure strategy Nash equilibria, namely, (L1, L2) and (R1, R2). Suppose the above stage game is played twice, with the first stage outcome being observed before the second stage begins.
Since the stage game has more than one Nash equilibrium, it is now possible
for the players to anticipate that different first stage outcomes will be followed
by different stage game equilibria in the second stage. Suppose, for example, the players anticipate that (R1, R2) will be the second stage outcome if the first stage outcome is (M1, M2), but that (L1, L2) will be the second stage outcome if any one of the eight other first stage outcomes occurs. Thus, the game reduces to the following one-shot game, where (3, 3) has been added to the (M1, M2) cell and (1, 1) has been added to the rest of the cells.
[Payoff matrix of the reduced one-shot game.]
There are three pure strategy Nash equilibria of the above game: (L1, L2), (M1, M2) and (R1, R2). Let us denote the outcome of the repeated game as [(w, x); (y, z)], where (w, x) is the first stage outcome and (y, z) is the second stage outcome. Therefore, the Nash equilibria (L1, L2), (M1, M2) and (R1, R2) can be achieved in the simplified one-shot game if the outcomes of the repeated game are [(L1, L2); (L1, L2)], [(M1, M2); (R1, R2)] and [(R1, R2); (L1, L2)] respectively (if the first stage outcome is (L1, L2), the second stage outcome has to be (L1, L2) according to the players' anticipation, and so on for each of the one-shot game's Nash equilibria). The first and the last Nash equilibria of the repeated game are sub-game perfect and simply concatenate Nash equilibrium outcomes of the stage game. But the Nash equilibrium (M1, M2) of the one-shot game corresponds to the sub-game perfect outcome [(M1, M2); (R1, R2)] of the repeated game, which means that in the first stage the players choose (M1, M2), which is not a Nash equilibrium of the stage game. We can conclude that cooperation can be achieved in the first stage of a sub-game perfect outcome of a repeated game.
Proposition: We extend this idea to a stage game being played T times, where T is any finite number. If G = {A1, A2, ..., An; u1, u2, ..., un} is a static game of complete information with multiple Nash equilibria, then there may be sub-game perfect outcomes of the repeated game G(T) in which, for any t < T, the outcome in stage t is not a Nash equilibrium of the stage game G.
The main points to extract from the above example are that:
credible threats or promises about future behaviour can affect current behaviour; and
sub-game perfection (as we described in the previous unit) may not be a strong enough definition to embody credibility.
25.2.3 Infinitely Repeated Games
In the finite horizon case, the main point was that credible threats or promises about future behaviour can influence current behaviour, and that if there are multiple Nash equilibria of the stage game, then there may be sub-game perfect outcomes of the repeated game G(T) in which, for any t < T, the outcome of a stage game is not a Nash equilibrium of G. In the case of infinitely repeated games a stronger result is true: even if the stage game has a unique Nash equilibrium, there may be sub-game perfect outcomes of the infinitely repeated game in which no stage's outcome is a Nash equilibrium of the stage game G.
[Payoff matrix of the Prisoners' Dilemma stage game, as above.]

An infinitely repeated game is an extension of a finitely repeated game, the stage game being played infinitely often. Suppose the prisoners' dilemma game is to be repeated infinitely and, for each t, the outcomes of the t-1 preceding plays of the stage game are observed before the tth stage begins. Simple summation of the payoffs from this infinite sequence of stage games does not provide a useful measure of a player's payoff in the infinitely repeated game. This is because receiving a payoff of 4 in every stage is better than receiving a payoff of 1 in every stage, yet the infinite sums of the two payoff streams are the same, namely infinity. To tackle this problem, we introduce the concept of discounting. As we have argued earlier, Rs. 100 today is not the same as Rs. 100 tomorrow. If the rate of interest is r, one can earn (100 × r) one year later in addition to the principal of Rs. 100. Therefore, Rs. 100 today is worth Rs. 100(1 + r) tomorrow. To find the present value of a future income, or of a future stream of income, we must discount it. To get the present value of an income to be received t years later, we multiply it by 1/(1 + r)^t. The fraction 1/(1 + r) is called the discount factor; it is generally denoted by δ. We can apply this method of calculating the present value of an income stream to calculate the present value of the payoffs of an infinitely repeated game.
Definition: Given the discount factor δ, the present value of the infinite sequence of payoffs π1, π2, π3, ... is given by

\[ V = \pi_1 + \delta\pi_2 + \delta^2\pi_3 + \cdots = \sum_{t=1}^{\infty} \delta^{\,t-1}\pi_t . \]
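In particular, if the same payoff π is received in every stage, the present value reduces to a geometric series:

\[ \pi + \delta\pi + \delta^2\pi + \cdots = \frac{\pi}{1-\delta}, \qquad 0 < \delta < 1 . \]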

Let us consider the infinitely repeated prisoners' dilemma, where the payoff of each player is the present value of her payoffs from the stage games. The discount factor is δ for both the players. We want to show that cooperation, that is, (M1, M2), can occur in every stage of a sub-game perfect outcome of the infinitely repeated game, even though the only Nash equilibrium of the stage game is non-cooperation, (L1, L2).
Suppose the ith player begins the infinitely repeated game by cooperating and then cooperates in every subsequent period if and only if both players have cooperated in all the previous stages of the game. Formally, the strategy of the ith player is:
Play Mi in the first stage.
In the tth stage, if the outcome of all the preceding t-1 stages has been (M1, M2), then play Mi; otherwise play Li (i = 1, 2). (This type of strategy is called a trigger strategy.)
If both players follow this trigger strategy, then the outcome of the infinitely repeated game will be (M1, M2) in every stage of the game. We will now prove that, given some condition on the value of δ, the above trigger strategy is a Nash equilibrium of the infinitely repeated game, and that such an equilibrium is sub-game perfect.
To show that the trigger strategy described above is a Nash equilibrium of the game, we have to show that if player i adopts the trigger strategy, the best response of player j is to adopt the same strategy. If both players stick to the trigger strategy, the present value of the payoffs of the jth player is

\[ 4 + 4\delta + 4\delta^2 + \cdots = \frac{4}{1-\delta} . \]

If the jth player deviates from the trigger strategy, that is, she plays Lj in the first stage, this will lead to non-cooperation from player i from the second stage onwards (Li) and consequently from player j (Lj) as well. The discounted payoff of the jth player from deviating is then (the payoff of the first stage for the jth player is 5, as the ith player in the first period, following her trigger strategy, plays Mi, and for all the remaining periods the payoff of the jth player is 1)

\[ 5 + 1\cdot\delta + 1\cdot\delta^2 + \cdots = 5 + \frac{\delta}{1-\delta} . \]

Therefore, playing Mj, i.e., following the trigger strategy, is optimal for the jth player, given that the ith player sticks to her trigger strategy, if and only if

\[ \frac{4}{1-\delta} \;\geq\; 5 + \frac{\delta}{1-\delta} . \]

Therefore, the trigger strategy is a Nash equilibrium for both players if and only if δ ≥ 1/4.
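As a quick numerical check (a minimal sketch in Python; the stage game payoffs 4, 5 and 1 are those used above), one can compare the two present values for a range of discount factors:

    # A quick numerical check of the trigger-strategy condition derived above.
    # Stage-game payoffs: cooperation (M1, M2) gives 4 per stage, a one-period
    # deviation gives 5 once, and mutual non-cooperation (L1, L2) gives 1 per stage.

    def pv_cooperate(delta: float) -> float:
        """Present value of cooperating forever: 4 + 4*delta + 4*delta**2 + ..."""
        return 4 / (1 - delta)

    def pv_deviate(delta: float) -> float:
        """Present value of deviating in the first stage: 5 + delta + delta**2 + ..."""
        return 5 + delta / (1 - delta)

    for delta in (0.10, 0.20, 0.25, 0.30, 0.50, 0.90):
        optimal = pv_cooperate(delta) >= pv_deviate(delta)
        print(f"delta = {delta:.2f}: cooperate = {pv_cooperate(delta):6.2f}, "
              f"deviate = {pv_deviate(delta):6.2f}, trigger strategy optimal: {optimal}")
    # The comparison switches at delta = 1/4, as derived above.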
Now we are in a position to formally define an infinitely repeated game,
history, strategy and sub games of an infinitely repeated game.
Definition of Infinitely Repeated Game: Given a stage game G, let G(∞, δ) denote the infinitely repeated game in which G is repeated forever and all the players have the same discount factor, denoted by δ. For each t (t a positive integer), the outcomes of the t-1 previous plays are observed before the tth stage begins. Each player's payoff in G(∞, δ) is the present value of the player's payoffs from the infinite sequence of stage games.
History of an Infinitely Repeated Game: In the finitely repeated game G(T) or the infinitely repeated game G(∞, δ), the history of play through stage t is the record of the players' choices in stages 1 through t.
Strategy of an Infinitely Repeated Game: Notice that while defining the repeated game in this section we used the notation G = {A1, A2, ..., An; u1, u2, ..., un}, whereas in previous units we described a static game of complete information as G = {S1, S2, ..., Sn; u1, u2, ..., un}. Remember that Si in a static game of complete information denotes a strategy space, whereas Ai in a dynamic or repeated game of complete information denotes an action space. The players might have chosen (a11, a21, a31, ..., an1) in stage 1 (suppose there are n players), (a12, a22, a32, ..., an2) in stage 2, and so on. For each player i and for each stage s, the action ais belongs to the action space Ai.
Thus, in an infinitely repeated game there is a difference between strategies and actions. In a finitely repeated game G(T) or an infinitely repeated game G(∞, δ), a player's strategy specifies the action the player will take in every stage of the game, for each possible history of play through the game. So, a strategy is a complete plan of action: it is defined for each possible circumstance and for every stage of an infinitely repeated game.
Sub-game of an Infinitely Repeated Game: In the finitely repeated game G(T), a sub-game beginning at stage t+1 is the repeated game in which G is played T−t times, denoted by G(T−t). There are many sub-games that begin at stage t+1, one for each possible history of the game through stage t. For an infinitely repeated game G(∞, δ), each sub-game is identical to the original game G(∞, δ). As in the finitely repeated game, there are as many sub-games beginning at stage t+1 of G(∞, δ) as there are possible histories of play through stage t. In the case of the two-stage prisoners' dilemma (with two strategies available to each player), there are four sub-games, corresponding to the second stage games that follow the four possible first stage outcomes. Similarly, the extension of the two-stage prisoners' dilemma where each player has three strategies at her disposal has nine sub-games. A sub-game is not only a piece of the game which starts at a point where the history of the game thus far is common knowledge among the players, but also includes all the moves that follow this point in the original game.
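As a small worked count (a sketch; k below is simply shorthand for the number of stage game outcomes, a piece of notation not used elsewhere in this unit):

\[ \#\{\text{sub-games beginning at stage } t+1\} \;=\; k^{\,t}, \]

so with k = 4 and t = 1 we get the four sub-games of the two-stage prisoners' dilemma, and with k = 9 and t = 1 the nine sub-games of its three-strategy extension.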
Sub-game Perfect Nash Equilibrium of an Infinitely Repeated Game
(Selten 1965): A Nash equilibrium is sub-game perfect if the players'
strategies constitute a Nash equilibrium in every sub-game of the infinitely
repeated game. Sub-game perfect Nash equilibrium is a refinement of the
concept of Nash equilibrium, which means for a strategy profile to be sub-
game perfect, it must be a Nash equilibrium first and then it must satisfy an
additional test, that is, it must be a Nash equilibrium in every sub-game of the
game.
Our objective in describing all the definitions above was to show that the trigger strategy we have already defined in the infinitely repeated prisoners' dilemma is sub-game perfect. Therefore, we need to show that the trigger strategy constitutes a Nash equilibrium in every sub-game of that infinitely repeated game. Recall that every sub-game of an infinitely repeated game is identical to the game as a whole. In the trigger-strategy Nash equilibrium of the infinitely repeated prisoners' dilemma, the sub-games can be grouped into two classes:
i) sub-games in which all the outcomes of the earlier stages have been (M1, M2);
ii) sub-games in which the outcome of at least one earlier stage differs from (M1, M2).
If the players adopt the trigger strategy for the game as a whole, then (i) the players' strategies in a sub-game in the first class are again the trigger strategy, which we have already shown to be a Nash equilibrium of the game as a whole, and (ii) the players' strategies in a sub-game in the second class are simply to repeat the stage game equilibrium (L1, L2) forever, which is also a Nash equilibrium of the game as a whole. Therefore, the trigger-strategy Nash equilibrium of the infinitely repeated prisoners' dilemma is sub-game perfect.
Check Your Progress 1
1) Consider the Bertrand duopoly model with constant marginal cost. We have seen in the previous unit that the unique Nash equilibrium of this game is (p1* = p2* = c). Consider the following trigger strategy:
Player i at period t = 0 chooses pm (the monopoly price) and at period t > 0 chooses pi = pm if and only if the outcome of all earlier stages has been (pm, pm); otherwise she chooses c forever.
Show that this trigger strategy is a sub-game perfect Nash equilibrium provided that δ > 1/2.

2) Define the following concepts:


i) History of an infinitely repeated game.
ii) Stage game
iii) Discount factor
iv) Strategy in an infinitely repeated game.
......................................................................................
......................................................................................
......................................................................................
3) What is a strategy in a repeated game? What is a sub-game in a repeated
game? What is a sub-game perfect Nash equilibrium?
......................................................................................
......................................................................................
......................................................................................
......................................................................................

25.3 BILATERAL BARGAINING


The bargaining problem is the simplest, most abstract ingredient of any situation in which two (or more) agents are able to produce some benefit through cooperation, provided they agree in advance on a division between them. If they fail to agree, the potential benefit never materialises and both of them lose.
For example, there is a gain to both a trade union and an employer from
reaching an agreement on more flexible working hours so that production can
respond more readily to fluctuations in demand. The question is, how the
surplus, which will be generated from greater flexibility, is to be distributed
between labour and capital in the form of higher wages and profits. If we
notice carefully, bargaining problems are everywhere in the society.
Therefore, bargaining problems cannot merely be regarded as a technical affair, as they involve social issues of power and justice.
There are two very different approaches which game theorists have adopted in
their analysis of the bargaining problem. The first is the so-called axiomatic approach. In this approach, game theorists present a series of axioms which any rule for solving the problem should satisfy. Then, through formal analysis, they typically show that one criterion for dividing the gains satisfies these axioms. The second approach treats the bargaining game as non-cooperative. The bargaining process is modelled step by step as a dynamic non-cooperative game, with one person making an offer, then the other, and so on. Here, we will discuss the axiomatic approach in detail. The outcome of this approach is often referred to as the Nash bargaining solution (the Nash bargaining solution is quite different from Nash equilibrium).
Nash's Axioms: The axiomatic approach begins by assuming that we are
looking for a rule, which will identify a particular outcome. In this way Nash
assumes in the beginning that we are only interested in rules, which identify
unique outcomes. Nash then suggests that it would be natural for any such rule
to satisfy the following axioms:
i) Efficiency
ii) Individual rationality
iii) Scale covariance
iv) Independence of irrelevant alternatives.
v) Symmetry
We will explain all of them in detail after introducing some basic concepts,
which will be useful in understanding the axioms.
There are 2 persons bargaining over some amount of gain or payoff. We are
interested in a solution, which divides the gain in such a way that it is
acceptable to both of the players. Any such bargaining solution depends not only on the jointly feasible payoffs but also on the consequences if the bargain breaks down. In such a situation we define the following:
F: the jointly feasible set of payoffs. It is a set of feasible vectors with two elements in each vector. The elements in each vector suggest the way the gain is to be distributed between the two players.
Clearly, F ⊂ R², where R² is the two-dimensional Euclidean space.
[Diagram: the feasible set F, with the payoff to the 1st person on one axis and the payoff to the 2nd person on the other.]
We assume that:
1) F is closed and convex (the boundaries of F are inside F and the convex combination of any two points of F lies inside F).
2) In the worst case, when there is disagreement between them, the payoff allocation is given by v = (v1, v2).
3) F ∩ {(x1, x2) | x1 ≥ v1 and x2 ≥ v2} is non-empty and bounded. That is, there are some common elements between the feasible joint payoff set and the set containing allocations at least as good as the disagreement payoff allocation, but there are not infinitely many.
We denote the bargaining problem by (F, v). (F, v) is said to be essential iff (read: if and only if) ∃ y (read: there exists y), where y = (y1, y2) ∈ (read: belonging to) F, such that y1 ≥ v1 and y2 ≥ v2. [In a successful solution both players gain.]
We define the bargaining solution function φ as follows:
φ : (F, v) → R², such that the solution is in F, i.e., inside the feasible set; φ(F, v) = (φ1(F, v), φ2(F, v)).
The function φ(F, v) gives the solution to a bargaining problem (F, v).
Now, we are in a position to discuss Nash's axioms.
Axiom of Efficiency: Suppose we have two vectors x and y: x = (x1, x2) and y = (y1, y2). We write x ≥ y iff x1 ≥ y1 and x2 ≥ y2, and x > y iff x1 > y1 and x2 > y2. The solution φ(F, v) is an allocation in F such that, for any x in F, if x ≥ φ(F, v), then x = φ(F, v). This means the solution to a bargaining problem is always Pareto optimal.
Axiom of Individual Rationality: φ(F, v) ≥ v. This axiom means individuals are rational: they do not accept a division in which they get less than the disagreement payoff.
Axiom of Scale Covariance:
For any λ1, λ2 and γ1, γ2 such that λ1 > 0 and λ2 > 0, if
G = {(λ1·x1 + γ1, λ2·x2 + γ2) | (x1, x2) ∈ F} and w = (λ1·v1 + γ1, λ2·v2 + γ2),
then φ(G, w) = (λ1·φ1(F, v) + γ1, λ2·φ2(F, v) + γ2).
The above statement implies that a change of origin and scale should not matter to the solution.
Axiom of Independence of Irrelevant Alternatives: For any closed set G, where G ⊂ F and φ(F, v) ∈ G, we have φ(G, v) = φ(F, v).
We illustrate the above statement with a diagram.

Axiom of Symmetry: If v1 = v2 and {(x2, x1) | (x1, x2) ∈ F} = F (that is, F is symmetric), then φ1(F, v) = φ2(F, v).
The above statement implies that equal players must be treated equally.
Using the above axioms, Nash derived a theorem:
There is a unique solution function φ(F, v) that satisfies all the above-mentioned axioms, for every two-person bargaining problem (F, v).
Example:
Suppose there are two parties, one buyer and one seller, in a property market. The seller's reservation price for a property is Rs. 10 lakh, whereas the buyer is unwilling to buy the property above Rs. 12 lakh. What is the Nash bargaining solution of the game?
First of all, we try to figure out the feasible set and the disagreement payoff allocation of the above problem.
If the buyer and seller engage in trade, their total gain would be Rs. 2 lakh (12 − 10), and if they do not, the payoff of each of them would be zero. Therefore, the feasible set is F = {(x, y) | x + y ≤ 2, x ≥ 0, y ≥ 0}, measured in lakh of rupees, and the disagreement payoff is (0, 0). We can put the problem graphically as follows:
[Diagram: the feasible set of the buyer-seller problem, with the buyer's payoff on one axis and the seller's payoff on the other; the frontier runs between Rs. 2 lakh on each axis, and the disagreement point v lies at the origin.]

Here the feasible set is symmetric: if the payoff allocation (x1, x2) belongs to the feasible set, then the payoff allocation (x2, x1) also belongs to the feasible set, and the disagreement payoff is zero for both players. Therefore, both players are equal in every respect and they should be treated equally. By the axiom of symmetry, the Nash bargaining solution must be of the form (x, x), and to satisfy the axiom of efficiency, the equation 2x = 2 must hold (if 2x > 2, the solution is outside the feasible set, and if 2x < 2, the solution is not Pareto optimal, as there is always scope to improve someone's gain without reducing that of the other). Therefore, the Nash bargaining solution of the game is (1, 1): the surplus of Rs. 2 lakh is split equally.
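Equivalently (a standard characterisation of the Nash bargaining solution, not derived in this unit), the same answer is obtained by maximising the product of the players' gains over their disagreement payoffs:

\[ \max_{(x,y)\in F}\;(x-0)(y-0)\quad\text{subject to } x+y=2 \;\;\Longrightarrow\;\; x = y = 1 . \]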

25.4 FRIEDMAN'S THEOREM


Friedman's theorem (1971) is another milestone in determining the existence of sub-game perfect Nash equilibria of an infinitely repeated game G(∞, δ). But before stating the theorem, we need to know a couple of definitions.
Feasible Payoff: We call the payoffs (x1, x2, ..., xn) feasible in the stage game G if they are a convex combination (i.e., a weighted average, where the weights are non-negative and sum to one) of the pure-strategy payoffs of G. In the following diagram we present the feasible payoffs of the Prisoners' Dilemma game by the shaded region. The pure strategy payoffs are feasible, and they are (1, 1), (0, 5), (5, 0) and (4, 4). One can check that any payoff allocation inside the shaded region can be achieved as a weighted average of the pure strategy payoffs.
[Diagram: the feasible payoff region (shaded) of the Prisoners' Dilemma, with the payoff to player 1 on one axis and the payoff to player 2 on the other.]
Average Payoff: Till now we defined a player's payoff in an infinitely repeated game to be the present value of the infinite sequence of stage game payoffs. But it is often more convenient to express the present value in terms of the average payoff from the same infinite sequence of stage game payoffs. The average payoff of an infinitely repeated game is the payoff that would have to be received in every stage so as to yield the same present value. Let δ be the discount factor. Suppose the infinite sequence of payoffs π1, π2, π3, ... has present value V. If the payoff π were received in every stage, the present value would be π/(1 − δ). For π to be the average payoff from the infinite sequence π1, π2, π3, ... with discount factor δ, the two present values must be equal, which gives π = V·(1 − δ). That is, the average payoff is (1 − δ) times the present value.
Given the discount factor δ, the average payoff of the infinite sequence of payoffs π1, π2, π3, ... is

\[ (1-\delta)\sum_{t=1}^{\infty}\delta^{\,t-1}\pi_t . \]
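For instance, the deviation payoff stream 5, 1, 1, ... from the trigger-strategy analysis above has present value 5 + δ/(1 − δ), so its average payoff is

\[ (1-\delta)\Bigl(5 + \frac{\delta}{1-\delta}\Bigr) = 5(1-\delta) + \delta = 5 - 4\delta . \]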

Friedman's Theorem: Let G be a finite static game of complete information. Let (e1, e2, ..., en) denote the payoffs from a Nash equilibrium of G, and let (x1, x2, ..., xn) denote any other feasible payoffs from G. If xi > ei for every player i and if δ is sufficiently close to one, then there exists a sub-game perfect Nash equilibrium of the infinitely repeated game G(∞, δ) that achieves (x1, x2, ..., xn) as the average payoff.

[Diagram: the feasible payoff region of the Prisoners' Dilemma, with the payoffs that strictly exceed the stage game Nash equilibrium payoff (1, 1) shown as a dotted area; the payoff to player 1 is on one axis and the payoff to player 2 on the other.]
Friedman's theorem ensures that any point in the dotted area in the above diagram can be achieved as the average payoff in a sub-game perfect Nash equilibrium of the repeated game, provided that the discount factor is sufficiently close to one.

25.5 COLLUSION BETWEEN COURNOT DUOPOLISTS
Friedman was the first to show that cooperation could be achieved in an infinitely
repeated game by using trigger strategies that switch forever to the stage game
Nash equilibrium following any deviation. The original application was to
collusion in a Cournot oligopoly.
Recall from Unit 24 (Section 24.8): the aggregate quantity in the market is Q = q1 + q2, the market clearing price is P = a − Q (assuming Q < a), and each firm has a marginal cost c. If the firms choose their quantities simultaneously, then the unique Nash equilibrium of the game has both firms producing (a − c)/3, which we call the Cournot quantity and denote by qc. The equilibrium aggregate quantity, 2(a − c)/3, exceeds the monopoly quantity, qm = (a − c)/2. Clearly, both firms would be better off if each firm produced half of qm, the monopoly quantity, i.e., (a − c)/4.
We will consider an infinitely repeated game based on this Cournot stage game when both the firms have the discount factor δ. We will look for a sub-game perfect Nash equilibrium in which both the firms collude and their payoffs are more than the Cournot payoff.
Let us consider the following trigger strategy:
Produce half of the monopoly output (qm/2) in the first period. Continue to produce the same in the tth period if both the firms produced qm/2 in the (t−1)th period; otherwise produce the Cournot output qc.

The profit to one firm when both the firms produce qm/2 is denoted by

\[ \frac{\pi_m}{2} = \frac{q_m}{2}\,(a - q_m - c) = \frac{(a-c)^2}{8} \qquad [\text{check: } q_m = (a-c)/2], \]

whereas the profit accruing to each firm when both produce qc is denoted by

\[ \pi_c = q_c\,(a - 2q_c - c) = \frac{(a-c)^2}{9} . \]
Finally, if firm i is going to produce qm/2 this period, then the quantity that maximises firm j's profit in this period is obtained by solving the following simple maximisation problem:

\[ \max_{q_j}\;\Bigl(a - \frac{q_m}{2} - q_j - c\Bigr)\,q_j . \]

The solution of the problem can be obtained from the first order condition of profit maximisation, that is,

\[ q_j = \frac{a - \tfrac{q_m}{2} - c}{2} = \frac{a - \tfrac{a-c}{4} - c}{2} = \frac{3(a-c)}{8}, \]

with associated profit 9(a − c)²/64. We will denote this profit by πd (d stands for deviation).
Therefore, it is a Nash equilibrium for both the firms to play the trigger strategy given earlier provided that

present value of the payoffs from the trigger strategy ≥ present value of the payoffs from deviating in the first period,

or,

\[ \frac{1}{1-\delta}\cdot\frac{\pi_m}{2} \;\geq\; \pi_d + \frac{\delta}{1-\delta}\,\pi_c . \]

Substituting the values of πm/2, πd and πc into the above inequality, we find that if δ ≥ 9/17, then the inequality holds and the trigger strategy is sub-game perfect.
Thus, we see that collusion in infinitely repeated games can fetch extra
payoffs to the firms.
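A minimal numerical sketch (the demand intercept a = 10 and marginal cost c = 2 below are purely illustrative assumptions; any a > c would do) that checks the threshold δ = 9/17 derived above:

    # Numerical check of the collusion condition for the Cournot trigger strategy.
    a, c = 10.0, 2.0

    pi_half_monopoly = (a - c) ** 2 / 8    # per-firm profit when both produce q_m / 2
    pi_cournot = (a - c) ** 2 / 9          # per-firm profit at the Cournot equilibrium
    pi_deviate = 9 * (a - c) ** 2 / 64     # best one-period deviation profit

    def collusion_sustainable(delta: float) -> bool:
        """Collude-forever payoff vs. deviating once and reverting to Cournot forever."""
        lhs = pi_half_monopoly / (1 - delta)
        rhs = pi_deviate + delta * pi_cournot / (1 - delta)
        return lhs >= rhs

    for delta in (0.40, 0.50, 0.52, 0.54, 0.60):
        print(f"delta = {delta:.2f}: collusion sustainable = {collusion_sustainable(delta)}")
    # The switch occurs at delta = 9/17 (about 0.529), matching the derivation above.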
Check Your Progress 2
1) What are the assumptions of Nash bargaining solution? Illustrate each of
them.
2) A rich man died leaving behind a bequest of Rs. 100 and a total loan of Rs. 150. The loan is owed to two persons, amounting to Rs. 60 (D1) and Rs. 90 (D2), respectively. The persons should decide among themselves how to divide the bequest. If they fail to reach any solution, D1 will receive Rs. 10 and D2 will receive Rs. 40, so that the unpaid portions of the debts are equalised at Rs. 50. Propose a Nash bargaining solution.

3) What is meant by average payoff and feasible payoff? State Friedman's theorem. Comment on why this theorem is important in game theory.

25.6 LET US SUM UP


Repeated games are simply repetitions of the stage game over time. The stage game is repeated after the payoffs have been collected. Analysis of these games therefore involves additional considerations, such as the present value of the payoffs, and the strategies are also more complicated. By appropriately choosing their strategies, players can ensure that in a repeated game the outcome at each stage is not the Nash equilibrium of the stage game. Any feasible payoff which gives each player more than (or at least as much as) the Nash equilibrium payoff can be achieved as the outcome of the game, given some restrictions on the discount factor.
Bargaining is one of the most frequent problems in economics, society and business. A bargaining solution between two persons can be derived using game theory. A bargaining solution which satisfies the axioms stated by Nash is called the Nash bargaining solution.
In an infinitely repeated game, players can achieve more than the stage game Nash equilibrium permits by colluding among themselves.

25.7 KEY WORDS


Average Payoff: The average payoff of an infinitely repeated game is the
payoff that would have to be received in every stage so as to yield the same
present value of payoffs of an infinitely repeated game.
Discount Factor: If the rate of return is r (which is generally positive), then the ratio 1/(1 + r) is called the discount factor. The discount factor in the game theory literature is generally denoted by δ.
Finitely Repeated Games: When a game is repeated a finite number of times, it is called a finitely repeated game.
Infinitely Repeated Games: When a game is repeated an infinite number of times, it is called an infinitely repeated game.
Present Value: A future asset evaluated at the present time is called the present value of the asset. If an asset is worth A after t periods, then its present value now is given by the formula A/(1 + r)^t.
Stage Game: In a repeated game, the initial game, which is repeated in every time period, is called the stage game.

25.8 SOME USEFUL BOOKS


Fudenberg, Drew and Jean Tirole (1991), Game Theory, MIT Press.
Tirole Jean (1989), The Theory of Industrial Organisation, MIT Press,
Cambridge Massachusetts, London, England.
Andreu Mas-Colell, Michael D. Whinston and Jerry R. Green (2005),
Microeconomic Theory, Oxford University Press.

25.9 ANSWER OR HINTS TO CHECK YOUR


PROGRESS
Check Your Progress 1
1) Let us suppose that player 2 strictly follows the trigger strategy. If player 1 also follows the strategy, the present value of her future payoff stream is

\[ \frac{\pi_m}{2} + \delta\,\frac{\pi_m}{2} + \delta^2\frac{\pi_m}{2} + \cdots = \frac{\pi_m}{2(1-\delta)}, \]

where πm denotes the monopoly profit. But if she deviates from the strategy, by charging a price slightly below pm in the first period, she captures (almost) the whole monopoly profit πm in that period and earns zero thereafter, since the outcome reverts to (c, c); the present value of her future payoff stream is then (approximately)

\[ \pi_m + 0\cdot\delta + 0\cdot\delta^2 + \cdots = \pi_m . \]

Therefore, there is no point for her to deviate from the trigger strategy if

\[ \frac{\pi_m}{2(1-\delta)} \;\geq\; \pi_m , \]

or, δ ≥ 1/2; in particular, the condition holds whenever δ > 1/2.
Thus, the trigger strategy is a Nash equilibrium, but to show that it is a sub-game perfect Nash equilibrium, you need to show that the trigger strategy induces a Nash equilibrium in every sub-game of the infinitely repeated Bertrand duopoly.
There are two possible situations:
i) Case I: No one deviates at any point. Then the sub-game looks like the game itself: (pm, pm) has been chosen in all earlier stages and the strategy profile induces (pm, pm) in the subsequent games, which we have shown to be a Nash equilibrium, given the restriction on δ.
ii) Case II: If something else has happened in the game, then the strategy will induce (c, c) in all subsequent games, which we have shown to be a Nash equilibrium in the previous unit.
2) Define the following concepts:
i) History of an infinitely repeated game.
ii) Stage game
iii) Discount factor
iv) Strategy in an infinitely repeated game.
3) What is a strategy in a repeated game? What is a sub-game in a repeated
game? What is a sub-game perfect Nash equilibrium?
Check Your Progress 2
1) See Section 25.3.
i) Efficiency: The solution should be Pareto efficient, that is, the solution should be such that no one can be made better off without making someone else worse off.
ii) Individual rationality: Individuals will, of course, only accept a solution which gives them at least as much as the disagreement payoff.
iii) Scale covariance: A change of scale and origin does not matter to the solution.
iv) Independence of irrelevant alternatives: The solutions of two bargaining problems with two different feasible sets, one being a subset of the other, are the same, provided the solution of the larger problem lies in the smaller feasible set and everything else remains the same.
v) Symmetry: If the players are in the same position in case of disagreement and the feasible set is symmetric, then in the solution the players must receive the same payoff.
2) D1 gets Rs. 35, D2 gets Rs. 65.
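One way to check this hint (a sketch using the Nash-product characterisation noted after the example in Section 25.3, with disagreement point (10, 40)):

\[ \max_{x+y=100}\;(x-10)(y-40) \;\;\Longrightarrow\;\; x-10 = y-40 = 25, \text{ i.e., } x = 35,\; y = 65 . \]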
3) See Section 25.4.
25.10 EXERCISES
1) The simultaneous move game is played twice with the outcome of the
first stage being observed before playing the second stage. There is no
discounting factor. Can the payoff (4, 4) be achieved in the first stage in
a pure strategy sub-game perfect Nash equilibrium? If so, give strategies
that do so. If not, why not?

2) Suppose there are n firms in a Cournot oligopoly. The inverse demand function is given by P(Q) = a − Q, where Q = q1 + q2 + ... + qn. Consider the infinitely repeated game based on this stage game. What is the lowest value of δ such that the firms can use trigger strategies to sustain the monopoly output level in a sub-game perfect Nash equilibrium? How does the answer vary with n?
3) Explain with a suitable example that in an infinitely repeated game a
feasible payoff can be obtained as a sub-game perfect Nash equilibrium
of the game, for appropriate discounting factor.
4) Does the concept of a sub-game change when we talk about infinitely repeated games?
5) Suppose in the Cournot model the discount factor δ < 9/17. Now, clearly the firms cannot support a quantity as low as half of the monopoly quantity (see Section 25.5). But for any value of δ it is a sub-game perfect Nash equilibrium to repeat the Cournot quantity forever. Therefore, the most profitable quantity that a trigger strategy can support lies between half of the monopoly output and the Cournot output. Consider the following trigger strategy to compute that level of output:
Produce q* in the first period. In the tth period produce q* if both
firms have produced q* in each of the t-1 periods. Otherwise
produce the Cournot quantity.
UNIT 26 GAMES OF INCOMPLETE INFORMATION
Structure
26.0 Objectives
26.1 Introduction
26.2 Static Game of Incomplete Information
26.2.1 Bayesian Nash Equilibrium - Normal-Form Representation
26.2.2 Definition
26.2.3 Example
26.3 Dynamic Game of Incomplete Information
26.3.1 Perfect Bayesian Nash Equilibrium - Definition
26.3.2 Example
26.4 Signaling Game
26.4.1 Definition of Signaling Game
26.4.2 Example
26.5 Entry Deterrence
26.6 Let Us Sum Up
26.7 Key Words
26.8 Some Useful Books
26.9 Answer or Hints to Check Your Progress

26.0 OBJECTIVES

After going through this unit, you will be able to:


define and solve problems regarding Bayesian Nash equilibrium;
define and solve problems of Perfect Bayesian Nash equilibrium;
solve problems of Signaling game; and
find out under what conditions a firm (having monopoly power) should go for entry deterrence.

26.1 INTRODUCTION
Games of incomplete information constitute a large part of game theory, in which many new concepts and their applications make this kind of game attractive and useful. Compared to games of complete information, games of incomplete information are more realistic and more widely applicable. Naturally, the concept of equilibrium in these two kinds of games is different, owing to their different characteristics. In this unit, we illustrate different concepts of equilibrium under games of incomplete information and their applications. In this regard, we will begin by explaining the concept of Bayesian Nash equilibrium. The second kind of equilibrium that we will discuss is the concept of Perfect Bayesian Nash equilibrium. Perfect Bayesian Nash equilibrium is one of the refinements of Bayesian Nash equilibrium. We will solve some problems that deal with both Bayesian Nash equilibrium and Perfect Bayesian Nash equilibrium. Next, we discuss the signaling game, where one firm sends a signal to another about her type and, receiving that signal, the other firm reacts. As we will see, this kind of game requires a corresponding notion of equilibrium. Lastly, we will discuss the concept and application of entry deterrence.
