Game Theory - Ch02
                                      Steven
                         Split                       Steal

            Split        Sarah gets $50,000          Sarah gets nothing
                         Steven gets $50,000         Steven gets $100,000
  Sarah
            Steal        Sarah gets $100,000         Sarah gets nothing
                         Steven gets nothing         Steven gets nothing

Figure 2.1: The choices available to Sarah and Steven and the associated outcomes.
Let us focus on Sarah’s decision problem. She realizes that her decision alone
is not sufficient to determine the outcome; she has no control over what Steven
will choose to do. However, she can envision two scenarios: one where Steven
chooses Steal and the other where he chooses Split.
• If Steven decides to Steal, then it does not matter what Sarah does,
because she ends up with nothing, no matter what she chooses.
• If Steven picks Split, then Sarah will get either $50,000 (if she also picks
Split) or $100,000 (if she picks Steal).
Thus Sarah should choose Steal.
The above argument, however, is not valid because it is based on an implicit and unwarranted assumption about how Sarah ranks the outcomes; namely, it assumes that Sarah is selfish and greedy, which may or may not be true. Let us denote the outcomes as follows:

   o1 : Sarah gets $50,000 and Steven gets $50,000 (both choose Split)
   o2 : Sarah gets nothing and Steven gets $100,000 (Sarah chooses Split, Steven chooses Steal)
   o3 : Sarah gets $100,000 and Steven gets nothing (Sarah chooses Steal, Steven chooses Split)
   o4 : Sarah gets nothing and Steven gets nothing (both choose Steal)

Table 2.1: The possible outcomes.
If, indeed, Sarah is selfish and greedy – in the sense that, in evaluating the outcomes,
she focuses exclusively on what she herself gets and prefers more money to less – then her
ranking of the outcomes is as follows: o3 ≻ o1 ≻ o2 ∼ o4 (which reads ‘o3 is better than
o1 , o1 is better than o2 and o2 is just as good as o4 ’). But there are other possibilities. For
example, Sarah might be fair-minded and view the outcome where both get $50,000 as
better than all the other outcomes. For instance, her ranking could be o1 ≻ o3 ≻ o2 ≻ o4 ;
according to this ranking, besides valuing fairness, she also displays benevolence towards
Steven, in the sense that – when comparing the two outcomes where she gets nothing,
namely o2 and o4 – she prefers the one where at least Steven goes home with some
money. If, in fact, Sarah is fair-minded and benevolent, then the logic underlying the above
argument would yield the opposite conclusion, namely that she should choose Split.
Thus we cannot presume to know the answer to the question “What is the rational
choice for Sarah?” if we don’t know what her preferences are. It is a common mistake
(unfortunately one that even game theorists sometimes make) to reason under the assump-
tion that players are selfish and greedy. This is, typically, an unwarranted assumption.
Research in experimental psychology, philosophy and economics has amply demonstrated
that many people are strongly motivated by considerations of fairness. Indeed, fairness
seems to motivate not only humans but also primates, as shown in the following video:2
http://www.ted.com/talks/frans_de_waal_do_animals_have_morals .
The situation illustrated in Figure 2.1 is not a game as we have no information about
the preferences of the players; we use the expression game-frame to refer to it. In the case
2 Also available at https://www.youtube.com/watch?v=GcJxRqTs5nk
where there are only two players and each player has a small number of possible choices
(also called strategies), a game-frame can be represented – as we did in Figure 2.1 – by
means of a table, with as many rows as the number of possible strategies of Player 1 and
as many columns as the number of strategies of Player 2; each row is labeled with one
strategy of Player 1 and each column with one strategy of Player 2; inside each cell of the
table (which corresponds to a pair of strategies, one for Player 1 and one for Player 2) we
write the corresponding outcome.
Before presenting the definition of game-frame, we remind the reader of what the
Cartesian product of two or more sets is. Let S1 and S2 be two sets. Then the Cartesian
product of S1 and S2 , denoted by S1 × S2 , is the set of ordered pairs (x1 , x2 ) where x1 is an
element of S1 (x1 ∈ S1 ) and x2 is an element of S2 (x2 ∈ S2 ). For example, if S1 = {a, b, c}
and S2 = {D, E} then
S1 × S2 = {(a, D), (a, E), (b, D), (b, E), (c, D), (c, E)} .
The definition extends to the general case of n sets (n ≥ 2): an element of S1 × S2 × ... × Sn
is an ordered n-tuple (x1 , x2 , ..., xn ) where, for each i = 1, . . . , n, xi ∈ Si .
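As a concrete illustration (not part of the text's formalism), the Cartesian product can be computed directly in Python with itertools.product; the set names S1, S2, S3 below are the ones used in the example above:

```python
from itertools import product

S1 = ["a", "b", "c"]
S2 = ["D", "E"]

# The Cartesian product S1 x S2: all ordered pairs (x1, x2) with x1 in S1 and x2 in S2.
print(list(product(S1, S2)))
# [('a', 'D'), ('a', 'E'), ('b', 'D'), ('b', 'E'), ('c', 'D'), ('c', 'E')]

# The general case of n sets: product(*sets) yields ordered n-tuples.
S3 = ["f", "g"]
print(list(product(S1, S2, S3))[:3])
```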
The definition of a game-frame is as follows.
Definition 2.1.1 A game-frame in strategic form is a list of four items (a quadruple)
⟨I, (S1 , S2 , ..., Sn ) , O, f ⟩ where:
• I = {1, 2, . . . , n} is a set of players (n ≥ 2).
• (S1 , S2 , . . . , Sn ) is a list of sets, one for each player. For every Player i ∈ I, Si
is the set of strategies (or possible choices) of Player i. We denote by S the
Cartesian product of these sets: S = S1 × S2 × · · · × Sn ; thus an element of S is a
list s = (s1 , s2 , . . . , sn ) consisting of one strategy for each player. We call S the set
of strategy profiles.
• O is a set of outcomes.
• f : S → O is a function that associates with every strategy profile s an outcome
f (s) ∈ O.
Using the notation of Definition 2.1.1, the situation illustrated in Figure 2.1 is the following
game-frame in strategic form:
• I = {1, 2} (letting Sarah be Player 1 and Steven Player 2),
• (S1 , S2 ) = ({Split, Steal}, {Split, Steal}); thus S1 = S2 = {Split, Steal}, so that the
set of strategy profiles is
S = {(Split, Split), (Split, Steal), (Steal, Split), (Steal, Steal)},
• O is the set of outcomes listed in Table 2.1,
• f is the following function:
s: (Split, Split) (Split, Steal) (Steal, Split) (Steal, Steal)
f (s) : o1 o2 o3 o4
(that is, f (Split, Split) = o1 , f (Split, Steal) = o2 , etc.).
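The quadruple of Definition 2.1.1 can also be written down directly as data. The following minimal Python sketch (ours, not the book's notation) encodes the game-frame of Figure 2.1, with a dictionary playing the role of the function f:

```python
from itertools import product

# The game-frame <I, (S1, S2), O, f> of Figure 2.1.
I = [1, 2]                           # players: 1 = Sarah, 2 = Steven
S1 = S2 = ["Split", "Steal"]         # strategy sets
O = ["o1", "o2", "o3", "o4"]         # outcomes, named as in Table 2.1
S = list(product(S1, S2))            # the set of strategy profiles

# f : S -> O associates an outcome with every strategy profile.
f = {
    ("Split", "Split"): "o1",
    ("Split", "Steal"): "o2",
    ("Steal", "Split"): "o3",
    ("Steal", "Steal"): "o4",
}

for s in S:
    print(s, "->", f[s])
```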
From a game-frame one obtains a game by adding, for each player, her preferences
over (or ranking of) the possible outcomes. We use the notation shown in Table 2.2. For
example, if M denotes ‘Mexican food’ and J denotes ‘Japanese food’, then M ≻Alice J
means that Alice prefers Mexican food to Japanese food and M ∼Bob J means that Bob is
indifferent between the two.
Notation     Interpretation
o ≻i o′      Player i strictly prefers o to o′ (o is better than o′ for Player i)
o ∼i o′      Player i is indifferent between o and o′ (o is just as good as o′ for Player i)
o ≿i o′      Player i considers o to be at least as good as o′

Table 2.2: Notation for a player's ranking of the outcomes.
R The “at least as good” relation ≿ is sufficient to capture also strict preference ≻ and
indifference ∼. In fact, starting from ≿, one can define strict preference as follows:
o ≻ o′ if and only if o ≿ o′ and o′ ̸≿ o and one can define indifference as follows:
o ∼ o′ if and only if o ≿ o′ and o′ ≿ o.
We will assume throughout this book that the “at least as good” relation ≿i of Player i –
which embodies her preferences over (or ranking of) the outcomes – is complete (for every
two outcomes o1 and o2 , either o1 ≿i o2 or o2 ≿i o1 , or both) and transitive (if o1 ≿i o2
and o2 ≿i o3 then o1 ≿i o3 ).3
There are (at least) four ways of representing, or expressing, a complete and transitive
preference relation over (or ranking of) a set of outcomes. For example, suppose that
O = {o1 , o2 , o3 , o4 , o5 } and that we want to represent the following ranking (expressing
the preferences of a given individual): o3 is better than o5 , which is just as good as o1 , o1
is better than o4 , which, in turn, is better than o2 (thus, o3 is the best outcome and o2 is the
worst outcome). We can represent this ranking in one of the following ways:
• As a subset of O × O (the interpretation of (o, o′ ) ∈ O × O is that o is at least as good
as o′ ):
(o1 , o1 ), (o1 , o2 ), (o1 , o4 ), (o1 , o5 )
(o2 , o2 ),
(o3 , o1 ), (o3 , o2 ), (o3 , o3 ), (o3 , o4 ), (o3 , o5 ),
(o4 , o2 ), (o4 , o4 ),
(o5 , o1 ), (o5 , o2 ), (o5 , o4 ), (o5 , o5 )
• By listing the outcomes in a column, starting with the best at the top and proceeding
down to the worst, thus using the convention that if outcome o is listed above
outcome o′ then o is preferred to o′ , while if o and o′ are written next to each other
(on the same row), then they are considered to be just as good:
best o3
o1 , o5
o4
worst o2
• By assigning a number to each outcome, with the convention that if the number
assigned to o is greater than the number assigned to o′ then o is preferred to o′ , and
if two outcomes are assigned the same number then they are considered to be just as
good. For example, we could choose the following numbers:

outcome:   o1   o2   o3   o4   o5
utility:    6    1    8    2    6
Such an assignment of numbers is called a utility function. A useful way of thinking
of utility is as an “index of satisfaction”: the higher the index the better the outcome;
however, this suggestion is just to aid memory and should be taken with a grain
of salt because a utility function does not measure anything and, furthermore, as
explained below, the actual numbers used as utility indices are completely arbitrary.4
Definition 2.1.2 Given a complete and transitive ranking ≿ of a finite set of outcomes
O, a function U : O → R (where R denotes the set of real numbers)a is said to be an
ordinal utility function that represents the ranking ≿ if, for every two outcomes o and o′ ,
U(o) > U(o′ ) if and only if o ≻ o′ and U(o) = U(o′ ) if and only if o ∼ o′ . The number
U(o) is called the utility of outcome o.b
a The notation f : X → Y is used to denote a function that associates with every x ∈ X an element
y = f (x) with y ∈ Y .
b Thus, o ≿ o′ if and only if U(o) ≥ U(o′ ).
R Note that the statement “for Alice the utility of Mexican food is 10” is in itself a
meaningless statement; on the other hand, what would be a meaningful statement is
“for Alice the utility of Mexican food is 10 and the utility of Japanese food is 5” ,
because such a statement conveys the information that she prefers Mexican food to
Japanese food. However, the two numbers 10 and 5 have no other meaning besides
the fact that 10 is greater than 5: for example, we cannot infer from these numbers
that she considers Mexican food to be twice as good as Japanese food. The reason for
this is that we could have expressed the same fact, namely that she prefers Mexican
food to Japanese food, by assigning utility 100 to Mexican food and −25 to Japanese
food, or with any other two numbers (as long as the number assigned to Mexican
food is larger than the number assigned to Japanese food).
4 Note that assigning a utility of 1 to an outcome o does not mean that o is the “first choice”. Indeed, in
this example a utility of 1 is assigned to the worst outcome: o2 is the worst outcome because it has the lowest
utility (which happens to be 1, in this example).
It follows from the above remark that there is an infinite number of utility functions that
represent the same ranking. For instance, the following are equivalent ways of representing
the ranking o3 ≻ o1 ≻ o2 ∼ o4 ( f , g and h are three out of the many possible utility
functions):
outcome → o1 o2 o3 o4
utility function ↓
f: 5 2 10 2
g: 0.8 0.7 1 0.7
h: 27 1 100 1
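That f, g and h carry exactly the same ordinal information can be checked mechanically: they must order every pair of outcomes in the same way. A small Python check (not part of the text; the function name same_ranking is ours):

```python
from itertools import combinations

outcomes = ["o1", "o2", "o3", "o4"]
f = {"o1": 5,   "o2": 2,   "o3": 10,  "o4": 2}
g = {"o1": 0.8, "o2": 0.7, "o3": 1,   "o4": 0.7}
h = {"o1": 27,  "o2": 1,   "o3": 100, "o4": 1}

def same_ranking(U, V, outcomes):
    """True if U and V rank every pair of outcomes in the same way."""
    for o, o2 in combinations(outcomes, 2):
        # the direction of the comparison (>, < or =) must agree for U and V
        if (U[o] > U[o2]) != (V[o] > V[o2]) or (U[o] < U[o2]) != (V[o] < V[o2]):
            return False
    return True

print(same_ranking(f, g, outcomes))  # True
print(same_ranking(f, h, outcomes))  # True
```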
Utility functions are a particularly convenient way of representing preferences. In fact, by using utility functions one can give a more condensed representation of games, as explained in the last paragraph of the following definition.

Definition 2.1.3 An ordinal game in strategic form is a quintuple ⟨I, (S1, S2, ..., Sn), O, f, (≿1, ..., ≿n)⟩ where ⟨I, (S1, S2, ..., Sn), O, f⟩ is a game-frame in strategic form (Definition 2.1.1) and, for every Player i ∈ I, ≿i is a complete and transitive ranking of the set of outcomes O.
If we represent each player's ranking by means of an ordinal utility function Ui : O → R (Definition 2.1.2), then we can define Player i's payoff function πi : S → R by πi(s) = Ui(f(s)), for every strategy profile s ∈ S. In the two-player case the game can then be represented in reduced form by a table in which each cell contains a pair of numbers: the first number is the payoff of Player 1 and the second number is the payoff of Player 2.
For example, take the game-frame illustrated in Figure 2.1, let Sarah be Player 1
and Steven Player 2 and name the possible outcomes as shown in Table 2.1. Let us add
the information that both players are selfish and greedy (that is, Player 1’s ranking is
o3 ≻1 o1 ≻1 o2 ∼1 o4 and Player 2’s ranking is o2 ≻2 o1 ≻2 o3 ∼2 o4 ) and let us represent
their rankings with the following utility functions (note, again, that the choice of numbers
2, 3 and 4 for utilities is arbitrary: any other three numbers would do):
outcome → o1 o2 o3 o4
utility function ↓
U1 (Player 1): 3 2 4 2
U2 (Player 2): 3 4 2 2
Then we obtain the reduced game shown in Figure 2.2, where in each cell the first number
is the payoff of Player 1 and the second number is the payoff of Player 2.
                        Player 2 (Steven)
                        Split       Steal
Player 1     Split      3   3       2   4
(Sarah)      Steal      4   2       2   2
Figure 2.2: One possible game based on the game-frame of Figure 2.1.
On the other hand, if we add to the game-frame of Figure 2.1 the information that
Player 1 is fair-minded and benevolent (that is, her ranking is o1 ≻1 o3 ≻1 o2 ≻1 o4 ),
while Player 2 is selfish and greedy and represent these rankings with the following utility
functions:
outcome → o1 o2 o3 o4
utility function ↓
U1 (Player 1): 4 2 3 1
U2 (Player 2): 3 4 2 2
then we obtain the reduced game shown in Figure 2.3.
                        Player 2 (Steven)
                        Split       Steal
Player 1     Split      4   3       2   4
(Sarah)      Steal      3   2       1   2
Figure 2.3: Another possible game based on the game-frame of Figure 2.1.
In general, a player will act differently in different games, even if they are based on the
same game-frame, because her incentives and objectives (as captured by her ranking of the
outcomes) will be different. For example, one can argue that in the game of Figure 2.2 a
rational Player 1 would choose Steal, while in the game of Figure 2.3 the rational choice
for Player 1 is Split.
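The step from a game-frame to a reduced game is mechanical: compose the outcome function f with each player's utility function. A hedged Python sketch (the dictionary f is the one used in the earlier sketch; the utility numbers are those of Figures 2.2 and 2.3):

```python
f = {("Split", "Split"): "o1", ("Split", "Steal"): "o2",
     ("Steal", "Split"): "o3", ("Steal", "Steal"): "o4"}

# Utility functions over outcomes.
U1_selfish = {"o1": 3, "o2": 2, "o3": 4, "o4": 2}   # Player 1 in Figure 2.2
U1_fair    = {"o1": 4, "o2": 2, "o3": 3, "o4": 1}   # Player 1 in Figure 2.3
U2_selfish = {"o1": 3, "o2": 4, "o3": 2, "o4": 2}   # Player 2 in both games

def reduced_game(f, U1, U2):
    """Payoff pairs (pi1(s), pi2(s)) for every strategy profile s."""
    return {s: (U1[o], U2[o]) for s, o in f.items()}

print(reduced_game(f, U1_selfish, U2_selfish))  # the game of Figure 2.2
print(reduced_game(f, U1_fair, U2_selfish))     # the game of Figure 2.3
```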
Consider a game in which Player 1 has four strategies, A, B, C and D, Player 2 has three strategies, E, F and G, and Player 1's payoffs are as shown in Figure 2.4 (Player 2's payoffs are not shown, since they are irrelevant for the comparison we are about to make):

                         Player 2
                   E         F         G
            A    3 ...     2 ...     1 ...
Player 1    B    2 ...     1 ...     0 ...
            C    3 ...     2 ...     1 ...
            D    2 ...     0 ...     0 ...

Figure 2.4: A game in which only Player 1's payoffs are shown.

Let us compare strategies B and D from the point of view of Player 1:
• if Player 2 selects E, then B in conjunction with E gives Player 1 the same payoff as
D in conjunction with E (namely 2),
• if Player 2 selects F, then B in conjunction with F gives Player 1 a payoff of 1, while
D in conjunction with F gives her only a payoff of 0,
• if Player 2 selects G then B in conjunction with G gives Player 1 the same payoff as
D in conjunction with G (namely 0).
Thus B never gives Player 1 a smaller payoff than D and in at least one case it gives a larger payoff: we will say that B weakly dominates D. In order to give the definitions in full generality we need to introduce some notation.
Recall that S denotes the set of strategy profiles, that is, an element s of S is an ordered list
of strategies s = (s1 , ..., sn ), one for each player. We will often want to focus on one player,
say Player i, and view s as a pair consisting of the strategy of Player i and the remaining
strategies of all the other players. For example, suppose that there are three players and the
strategy sets are as follows: S1 = {a, b, c}, S2 = {d, e} and S3 = { f , g}. Then one possible
strategy profile is s = (b, d, g) (thus s1 = b, s2 = d and s3 = g). If we focus on, say, Player
2 then we will denote by s−2 the sub-profile consisting of the strategies of the players other
than 2: in this case s−2 = (b, g). This gives us an alternative way of denoting s, namely as
(s2 , s−2 ). Continuing our example where s = (b, d, g), letting s−2 = (b, g), we can denote
s also by (d, s−2 ) and we can write the result of replacing Player 2’s strategy d with her
strategy e in s by (e, s−2 ); thus (d, s−2 ) = (b, d, g) while (e, s−2 ) = (b, e, g). In general,
given a Player i, we denote by S−i the set of strategy profiles of the players other than i (that
is, S−i is the Cartesian product of the strategy sets of the other players; in the above example
we have that S−2 = S1 × S3 = {a, b, c} × { f , g} ={(a, f ), (a, g), (b, f ), (b, g), (c, f ), (c, g)}.
We denote an element of S−i by s−i .
Definition 2.2.1 Given an ordinal game in strategic form, let i be a Player and a and b
two of her strategies (a, b ∈ Si ). We say that, for Player i,
• a strictly dominates b (or b is strictly dominated by a) if, in every situation (that
is, no matter what the other players do), a gives Player i a payoff which is greater
than the payoff that b gives. Formally: for every s−i ∈ S−i , πi (a, s−i ) > πi (b, s−i ).a
• a weakly dominates b (or b is weakly dominated by a) if, in every situation, a
gives Player i a payoff which is greater than or equal to the payoff that b gives
and, furthermore, there is at least one situation where a gives a greater payoff
than b. Formally: for every s−i ∈ S−i , πi (a, s−i ) ≥ πi (b, s−i ) and there exists an
s−i ∈ S−i such that πi (a, s−i ) > πi (b, s−i ).b
• a is equivalent to b if, in every situation, a and b give Player i the same payoff.
Formally: for every s−i ∈ S−i , πi (a, s−i ) = πi (b, s−i ).c
a Or, stated in terms of rankings instead of payoffs, f (a, s−i )≻i f (b, s−i ) for every s−i ∈ S−i .
b Or, stated in terms of rankings, f (a, s−i )≿i f (b, s−i ), for every s−i ∈ S−i , and there exists an s−i ∈ S−i
such that f (a, s−i )≻i f (b, s−i ).
c Or, stated in terms of rankings, f (a, s−i ) ∼i f (b, s−i ), for every s−i ∈ S−i .
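These definitions translate directly into code. A sketch, assuming payoffs are stored as a dictionary from strategy profiles to payoff tuples (all function names are ours); the example at the end uses the game of Figure 2.2:

```python
from itertools import product

def others_profiles(strategy_sets, i):
    """All sub-profiles s_{-i} of the players other than i."""
    return list(product(*(S for j, S in enumerate(strategy_sets) if j != i)))

def full_profile(s_i, s_minus_i, i):
    """Rebuild a complete strategy profile from s_i and s_{-i}."""
    s = list(s_minus_i)
    s.insert(i, s_i)
    return tuple(s)

def strictly_dominates(payoffs, strategy_sets, i, a, b):
    return all(payoffs[full_profile(a, r, i)][i] > payoffs[full_profile(b, r, i)][i]
               for r in others_profiles(strategy_sets, i))

def weakly_dominates(payoffs, strategy_sets, i, a, b):
    rests = others_profiles(strategy_sets, i)
    ge = all(payoffs[full_profile(a, r, i)][i] >= payoffs[full_profile(b, r, i)][i] for r in rests)
    gt = any(payoffs[full_profile(a, r, i)][i] >  payoffs[full_profile(b, r, i)][i] for r in rests)
    return ge and gt

def equivalent(payoffs, strategy_sets, i, a, b):
    return all(payoffs[full_profile(a, r, i)][i] == payoffs[full_profile(b, r, i)][i]
               for r in others_profiles(strategy_sets, i))

# Example: the game of Figure 2.2 (players indexed 0 and 1).
strategy_sets = [["Split", "Steal"], ["Split", "Steal"]]
payoffs = {("Split", "Split"): (3, 3), ("Split", "Steal"): (2, 4),
           ("Steal", "Split"): (4, 2), ("Steal", "Steal"): (2, 2)}
print(weakly_dominates(payoffs, strategy_sets, 0, "Steal", "Split"))   # True
print(strictly_dominates(payoffs, strategy_sets, 0, "Steal", "Split")) # False
```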
For example, in the game of Figure 2.5 (which reproduces Figure 2.4), we have that
• A strictly dominates B.
• A and C are equivalent.
• A strictly dominates D.
• B is strictly dominated by C.
• B weakly (but not strictly) dominates D.
• C strictly dominates D.
                         Player 2
                   E         F         G
            A    3 ...     2 ...     1 ...
Player 1    B    2 ...     1 ...     0 ...
            C    3 ...     2 ...     1 ...
            D    2 ...     0 ...     0 ...

Figure 2.5: Copy of the game of Figure 2.4.
R Note that if strategy a strictly dominates strategy b then it also satisfies the conditions
for weak dominance, that is, ‘a strictly dominates b’ implies ‘a weakly dominates b’.
Throughout the book the expression ‘a weakly dominates b’ will be interpreted as ‘a
dominates b weakly but not strictly’.
The expression ‘a dominates b’ can be understood as ‘a is better than b’. The next term we
define is ‘dominant’ which can be understood as ‘best’. Thus one cannot meaningfully
say “a dominates” because one needs to name another strategy that is dominated by a; for
example, one would have to say “a dominates b”. On the other hand, one can meaningfully
say “a is dominant” because it is like saying “a is best”, which means “a is better than
every other strategy”.
Definition 2.2.2 Given an ordinal game in strategic form, let i be a Player and a one of
her strategies (a ∈ Si ). We say that, for Player i,
• a is a strictly dominant strategy if a strictly dominates every other strategy of
Player i.
• a is a weakly dominant strategy if, for every other strategy x of Player i, one of
the following is true: either (1) a weakly dominates x or (2) a is equivalent to x.
For example, in the game shown in Figure 2.5, A and C are both weakly dominant
strategies for Player 1. Note that if a player has two or more strategies that are weakly
dominant, then any two of those strategies must be equivalent. On the other hand, there
can be at most one strictly dominant strategy.
R The reader should convince herself/himself that the definition of weakly dominant
strategy given in Definition 2.2.2 is equivalent to the following: a ∈ Si is a weakly
dominant strategy for Player i if and only if, for every s−i ∈ S−i , πi (a, s−i ) ≥ πi (si , s−i )
for every si ∈ Si .5
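The condition in the remark is easy to check mechanically. A sketch using only Player 1's payoffs from the game of Figure 2.5 (Player 2's payoffs, shown as dots there, are irrelevant for this check); function names are ours:

```python
# Player 1's payoffs in Figure 2.5, indexed by (own strategy, Player 2's strategy).
pi1 = {("A", "E"): 3, ("A", "F"): 2, ("A", "G"): 1,
       ("B", "E"): 2, ("B", "F"): 1, ("B", "G"): 0,
       ("C", "E"): 3, ("C", "F"): 2, ("C", "G"): 1,
       ("D", "E"): 2, ("D", "F"): 0, ("D", "G"): 0}
S1 = ["A", "B", "C", "D"]
S2 = ["E", "F", "G"]

def is_weakly_dominant(a):
    """a is weakly dominant iff pi1(a, s2) >= pi1(s1, s2) for every s1 and every s2."""
    return all(pi1[(a, s2)] >= pi1[(s1, s2)] for s2 in S2 for s1 in S1)

def is_strictly_dominant(a):
    return all(pi1[(a, s2)] > pi1[(s1, s2)] for s2 in S2 for s1 in S1 if s1 != a)

print([a for a in S1 if is_weakly_dominant(a)])    # ['A', 'C'], as claimed in the text
print([a for a in S1 if is_strictly_dominant(a)])  # []
```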
Definition 2.2.3 Given an ordinal game in strategic form, a strategy profile s = (s1 , ..., sn ) ∈ S is called a strict dominant-strategy profile if, for every Player i, si is a strictly dominant strategy for Player i; it is called a weak dominant-strategy profile if, for every Player i, si is a weakly dominant strategy for Player i.

Let us apply these definitions to the two games based on the game-frame of Figure 2.1, reproduced below as Figures 2.6 and 2.7.

                        Player 2 (Steven)
                        Split       Steal
Player 1     Split      3   3       2   4
(Sarah)      Steal      4   2       2   2

Figure 2.6: Copy of Figure 2.2.

                        Player 2 (Steven)
                        Split       Steal
Player 1     Split      4   3       2   4
(Sarah)      Steal      3   2       1   2

Figure 2.7: Copy of Figure 2.3.

In the game of Figure 2.6 (which reproduces Figure 2.2), Steal is a weakly (but not strictly) dominant strategy for each player and thus (Steal, Steal) is a weak dominant-strategy profile.
In the game of Figure 2.7 (which reproduces Figure 2.3), Split is a strictly dominant
strategy for Player 1, while Steal is a weakly (but not strictly) dominant strategy for Player
2 and thus (Split,Steal) is a weak dominant-strategy profile.
5 Or, stated in terms of rankings, for every s−i ∈ S−i , f (a, s−i ) ≿i f (si , s−i ) for every si ∈ Si .
                              Player 2 (Ed)
                        Normal effort    Extra effort
Player 1   Normal effort      o1              o2
(Doug)     Extra effort       o3              o4
Suppose that both Doug and Ed are willing to sacrifice family time to get the prize, but
otherwise value family time; furthermore, they are envious of each other, in the sense
that they prefer nobody getting the prize to the other person’s getting the prize (even
at the personal cost of sacrificing family time). That is, their rankings are as follows:
o3 ≻Doug o1 ≻Doug o4 ≻Doug o2 and o2 ≻Ed o1 ≻Ed o4 ≻Ed o3 . Using utility functions
with values from the set {0, 1, 2, 3} we can represent the game in reduced form as shown
in Figure 2.9. In this game exerting extra effort is a strictly dominant strategy for every
player; thus (Extra effort, Extra effort) is a strict dominant-strategy profile.
Definition 2.2.4 Given an ordinal game in strategic form, let o and o′ be two outcomes.
We say that o is strictly Pareto superior to o′ if every player prefers o to o′ (that is, if
o ≻i o′ , for every Player i). We say that o is weakly Pareto superior to o′ if every player
considers o to be at least as good as o′ and at least one player prefers o to o′ (that is, if
o ≿i o′ , for every Player i and there is a Player j such that o ≻ j o′ ).
In reduced games, this definition can be extended to strategy profiles as follows. If s
and s′ are two strategy profiles, then s is strictly Pareto superior to s′ if πi (s) > πi (s′ )
for every Player i and s is weakly Pareto superior to s′ if πi (s) ≥ πi (s′ ) for every Player
i and, furthermore, there is a Player j such that π j (s) > π j (s′ ).
                              Player 2 (Ed)
                        Normal effort    Extra effort
Player 1   Normal effort     2   2           0   3
(Doug)     Extra effort      3   0           1   1

Figure 2.9: The game between Doug and Ed in reduced form.
For example, in the Prisoner’s Dilemma game of Figure 2.9, outcome o1 is strictly Pareto
superior to o4 or, in terms of strategy profiles, (Normal effort, Normal effort) is strictly
Pareto superior to (Extra effort, Extra effort).
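In reduced form the check is a pair of inequality tests on the payoff vectors. A minimal sketch (function names are ours), applied to the two strategy profiles just mentioned:

```python
def strictly_pareto_superior(pay_s, pay_t):
    """Every player gets a strictly higher payoff at s than at t."""
    return all(a > b for a, b in zip(pay_s, pay_t))

def weakly_pareto_superior(pay_s, pay_t):
    """No player is worse off at s and at least one player is strictly better off."""
    return (all(a >= b for a, b in zip(pay_s, pay_t))
            and any(a > b for a, b in zip(pay_s, pay_t)))

# Figure 2.9: (Normal effort, Normal effort) -> (2, 2) and (Extra effort, Extra effort) -> (1, 1).
print(strictly_pareto_superior((2, 2), (1, 1)))  # True
```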
When a player has a strictly dominant strategy, it would be irrational for that player to
choose any other strategy, since she would be guaranteed a lower payoff in every possible
situation (that is, no matter what the other players do). Thus in the Prisoner’s Dilemma
individual rationality leads to (Extra effort, Extra effort) despite the fact that both players
would be better off if they both chose Normal effort. It is obvious that if the players could
reach a binding agreement to exert normal effort then they would do so; however, the
underlying assumption in non-cooperative game theory is that such agreements are not
possible (e.g. because of lack of communication or because such agreements are illegal or
cannot be enforced in a court of law, etc.). Any non-binding agreement to choose Normal
effort would not be viable: if one player expects the other player to stick to the agreement,
then he will gain by cheating and choosing Extra effort (on the other hand, if a player does
not believe that the other player will honor the agreement then he will gain by deviating
from the agreement himself). The Prisoner's Dilemma game is often used to illustrate
a conflict between individual rationality and collective rationality: (Extra effort, Extra
effort) is the individually rational profile while (Normal effort, Normal effort) would be the
collectively rational one.
Two oil companies bid for the right to drill a field. The possible bids are $10 million,
$20 million, . . . , $50 million. In case of ties the winner is Player 2 (this was decided earlier
by tossing a coin). Let us take the point of view of Player 1. Suppose that Player 1 ordered
a geological survey and, based on the report, concludes that the oil field would generate a
profit of $30 million. Suppose also that Player 1 is indifferent between any two outcomes
where the oil field is given to Player 2, prefers to get the oil field itself if and only if
it has to pay less than $30 million for it and, furthermore, considers getting the oil field for $30
million to be just as good as not getting it. Then we can take as utility function for Player 1
the net gain to Player 1 from the oil field (defined as profits from oil extraction minus the
price paid for access to the oil field) if Player 1 wins, and zero otherwise.
                                    Player 2
                      $10M    $20M    $30M    $40M    $50M
              $10M      0       0       0       0       0
              $20M     20       0       0       0       0
Player 1      $30M     20      10       0       0       0
(value $30M)  $40M     20      10       0       0       0
              $50M     20      10       0     −10       0

Figure 2.10: A second-price auction where, in case of ties, the winner is Player 2.
In Figure 2.10 we have written inside each cell only the payoff of Player 1. For example,
why is Player 1’s payoff 20 when it bids $30M and Player 2 bids $10M? Since Player 1’s
bid is higher than Player 2’s bid, Player 1 is the winner and thus the drilling rights are
assigned to Player 1; hence Player 1 obtains something worth $30M and pays, not its own
bid of $30M, but the bid of Player 2, namely $10M; it follows that Player 1’s net gain is
$(30 − 10)M = $20M.
The reader should verify that, for Player 1, submitting a bid equal to the value it assigns to
the object (namely, a bid of $30M) is a weakly dominant strategy: it always gives Player 1
the largest of the payoffs that are possible, given the bid of the other player. This does not
imply that it is the only weakly dominant strategy; indeed, in this example bidding $40M
is also a weakly dominant strategy for Player 1 (in fact, it is equivalent to bidding $30M).
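These claims can be verified mechanically. The sketch below rebuilds Player 1's payoffs of Figure 2.10 from the rules (value $30M, ties won by Player 2) and checks which bids are weakly dominant for Player 1; variable names are ours:

```python
bids = [10, 20, 30, 40, 50]   # possible bids, in millions
v1 = 30                       # Player 1's value of the oil field

def pi1(b1, b2):
    """Player 1 wins only with a strictly higher bid (ties go to Player 2) and pays b2."""
    return v1 - b2 if b1 > b2 else 0

def weakly_dominant(a):
    # a is weakly dominant iff it gives the best possible payoff against every bid of Player 2
    return all(pi1(a, b2) >= pi1(b1, b2) for b2 in bids for b1 in bids)

print([a for a in bids if weakly_dominant(a)])   # [30, 40], as claimed in the text
```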
Now we can describe the second-price auction in more general terms. Let n ≥ 2 be
the number of bidders. We assume that all non-negative numbers are allowed as bids and
that the tie-breaking rule favors the player with the lowest index among those who submit
the highest bid: for example, if the highest bid is $250 and it is submitted by Players 5, 8
and 10, then the winner is Player 5. We shall denote the possible outcomes as pairs (i, p),
where i is the winner and p is the price that the winner has to pay. Finally we denote by
bi the bid of Player i. We start by describing the case where there are only two bidders
and then generalize to the case of an arbitrary number of bidders. We denote the set of
non-negative numbers by [0, ∞).
The case where n = 2: in this case we have that I = {1, 2}, S1 = S2 = [0, ∞), O =
{(i, p) : i ∈ {1, 2}, p ∈ [0, ∞)} and f : S → O is given by

      f (b1 , b2 ) = (1, b2 ) if b1 ≥ b2 ,   and   f (b1 , b2 ) = (2, b1 ) if b1 < b2 .
The case where n ≥ 2: in the general case the second-price auction is the following
game-frame:
• I = {1, . . . , n}.
• Si = [0, ∞) for every i = 1, . . . , n. We denote an element of Si by bi .
• O = {(i, p) : i ∈ I, p ∈ [0, ∞)} .
• f : S → O is defined as follows. Let H(b1 , . . . , bn ) ⊆ I be the set of bidders who submit the highest bid: H(b1 , . . . , bn ) = {i ∈ I : bi ≥ bj for all j ∈ I}, and let î(b1 , . . . , bn ) be the smallest number in the set H(b1 , . . . , bn ), that is, the winner of the auction. Finally, let bmax (b1 , . . . , bn ) denote the highest bid and bsecond (b1 , . . . , bn ) the second-highest bid (note that bsecond coincides with bmax if two or more bidders submit the highest bid). Then the winner gets the object and pays the second-highest bid:

      f (b1 , . . . , bn ) = ( î(b1 , . . . , bn ), bsecond (b1 , . . . , bn ) ).
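A sketch of this outcome function in Python (helper names are ours): the winner is the lowest-indexed highest bidder and the price is the second-highest bid, which equals the highest bid when there is a tie at the top.

```python
def second_price_outcome(bids):
    """Return (winner, price) for a bid profile, with players indexed from 1."""
    b_max = max(bids)
    # the lowest-indexed player among those who submit the highest bid
    winner = min(i for i, b in enumerate(bids, start=1) if b == b_max)
    # second-highest bid: the maximum after removing ONE copy of the highest bid
    rest = sorted(bids, reverse=True)[1:]
    b_second = rest[0] if rest else b_max
    return winner, b_second

print(second_price_outcome([10, 30, 30]))  # (2, 30): Players 2 and 3 tie, Player 2 wins and pays 30
print(second_price_outcome([40, 10, 25]))  # (1, 25)
```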
How much should a player bid in a second-price auction? Since what we have described is
a game-frame and not a game, we cannot answer the question unless we specify the player’s
preferences over the set of outcomes O. Let us say that Player i in a second-price auction
is selfish and greedy if she only cares about whether or not she wins and – conditional on
winning – prefers to pay less; furthermore, she prefers winning to not winning if and only
if she has to pay less than the true value of the object for her, which we denote by vi , and is
indifferent between not winning and winning if she has to pay exactly vi . Thus the ranking
of a selfish and greedy Player i is as follows (together with everything that follows from
transitivity):

      (i, p) ≻i (i, p′ ) if and only if p < p′ ;   (i, p) ≻i ( j, p′ ) if p < vi ( j ≠ i);   (i, vi ) ∼i ( j, p′ ) ( j ≠ i);
      ( j, p′ ) ≻i (i, p) if p > vi ( j ≠ i);   ( j, p′ ) ∼i (k, p″ ) for all j, k ≠ i.                  (2.1)

Such a ranking can be represented by the utility function Ui (i, p) = vi − p and Ui ( j, p) = 0 for every j ≠ i. Using this utility function we get the following payoff function for Player i:

      πi (b1 , . . . , bn ) = vi − bsecond (b1 , . . . , bn ) if i = î(b1 , . . . , bn ),   and   πi (b1 , . . . , bn ) = 0 otherwise.

We can now state the following theorem. The proof is given in Section 2.8.

Theorem 2.3.1 — Vickrey, 1961. In a second-price auction, if Player i is selfish and greedy (as specified in (2.1)) then it is a weakly dominant strategy for Player i to bid her true value, that is, to choose bi = vi .
Note that, for a player who is not selfish and greedy, Theorem 2.3.1 is not true. For
example, if a player has the same preferences as above for the case where she wins, but,
conditional on not winning, prefers the other player to pay as much as possible (she is
spiteful) or as little as possible (she is benevolent), then bidding her true value is no longer
a dominant strategy.
“By consensus, the Davis City Council agreed Wednesday to order a commu-
nitywide public opinion poll to gauge how much Davis residents would be
willing to pay for a park tax and a public safety tax.”
Opinion polls of this type are worthwhile only if there are reasons to believe that the people
who are interviewed will respond honestly. But will they? If I would like more parks
and believe that the final tax I will have to pay is independent of the amount I state in the
interview, I would have an incentive to overstate my willingness to pay, hoping to swing
the decision towards building a new park. On the other hand, if I fear that the final tax
will be affected by the amount I report, then I might have an incentive to understate my
willingness to pay.
The pivotal mechanism, or Clarke mechanism, is a game designed to give the partici-
pants an incentive to report their true willingness to pay.
A public project, say to build a park, is under consideration. The cost of the project
is $C. There are n individuals in the community. If the project is carried out, individual
i (i = 1, . . . , n) will have to pay $ci (with c1 +c2 +· · ·+cn = C); these amounts are specified
as part of the project. Note that we allow for the possibility that some individuals might
have to contribute a larger share of the total cost C than others (e.g. because they live
closer to the projected park and would therefore benefit more from it). Individual i has an
initial wealth of $mi > 0. If the project is carried out, individual i receives benefits from
it that she considers equivalent to receiving $vi . Note that for some individual i, vi could
be negative, that is, the individual could be harmed by the project (e.g. because she likes
peace and quiet and a park close to her home would bring extra traffic and noise). We
assume that individual i has the following utility-of-wealth function:

      Ui ($m) = m if the project is not carried out,   and   Ui ($m) = m + vi if the project is carried out.
The socially efficient decision is to carry out the project if and only if ∑_{i=1}^n vi > C (recall that ∑ is the summation sign: ∑_{i=1}^n vi is a short-hand for v1 + v2 + ... + vn ).
For example, suppose that n = 2, m1 = 50, m2 = 60, v1 = 19, v2 = −15, C = 6,
c1 = 6, c2 = 0. In this case ∑_{i=1}^n vi = 19 − 15 = 4 < C = 6, hence the project should not
be carried out. To see this consider the following table:

                                              Individual 1          Individual 2
Utility if the project is not carried out          50                    60
Utility if the project is carried out        50 + 19 − 6 = 63      60 − 15 − 0 = 45
If the project is carried out, Individual 1 has a utility gain of 13, while Individual 2 has
a utility loss of 15. Since the loss is greater than the gain, we have a Pareto inefficient
situation. Individual 2 could propose the following alternative to Individual 1: let us not
carry out the project and I will pay you $14. Then Individual 1’s wealth and utility would
be 50 + 14 = 64 and Individual 2’s wealth and utility would be 60 − 14 = 46 and thus
they would both be better off.
Thus Pareto efficiency requires that the project be carried out if and only if ∑_{i=1}^n vi > C.
This would be a simple decision for the government if it knew the vi ’s. But, typically, these
values are private information to the individuals. Can the government find a way to induce
the individuals to reveal their true valuations? It seems that in general the answer is No:
those who gain from the project would have an incentive to overstate their potential gains,
while those who suffer would have an incentive to overstate their potential losses.
Influenced by Vickrey’s work on second-price auctions, Clarke suggested the following
mechanism or game. Each individual i is asked to submit a number wi which will be
interpreted as the gross benefit (if positive) or harm (if negative) that individual i associates
with the project. Note that, in principle, individual i can lie and report a value wi which is
different from the true value vi . Then the decision will be:
      Carry out the project?   Yes if ∑_{j=1}^n wj > C;   No if ∑_{j=1}^n wj ≤ C.
However, this is not the end of the story. Each individual will be classified as either not
pivotal or pivotal.
Individual i is not pivotal if either ( ∑_{j=1}^n wj > C and ∑_{j≠i} wj > ∑_{j≠i} cj ) or ( ∑_{j=1}^n wj ≤ C and ∑_{j≠i} wj ≤ ∑_{j≠i} cj ),
and she is pivotal otherwise. In other words, individual i is pivotal if the decision about the
project that would be made in the restricted society resulting from removing individual i is
different from the decision that is made when individual i is included. If an individual is
not pivotal then she has to pay no taxes. If individual i is pivotal then she has to pay a tax
in the amount of $ | ∑_{j≠i} wj − ∑_{j≠i} cj |.
It may seem that, since it involves paying a tax, being pivotal is a bad thing and one
should try to avoid it. It is certainly possible for individual i to make sure that she is
not pivotal: all she has to do is to report wi = ci ; in fact, if ∑_{j≠i} wj > ∑_{j≠i} cj then adding ci to both sides yields ∑_{j=1}^n wj > C, and if ∑_{j≠i} wj ≤ ∑_{j≠i} cj then adding ci to both sides yields ∑_{j=1}^n wj ≤ C. It is not true, however, that it is best to avoid being pivotal. The following example shows that one can gain by being truthful even if it involves being pivotal and thus having to pay a tax. Let n = 4, C = 15, c1 = 5, c2 = 0, c3 = 5 and c4 = 5.
Theorem 2.4.1 — Clarke, 1971. In the pivotal mechanism (under the assumed pref-
erences) truthful revelation (that is, stating wi = vi ) is a weakly dominant strategy for
every Player i.
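The mechanism's computations can be sketched in a few lines of Python and checked on the two-individual example used earlier in this section (v1 = 19, v2 = −15, C = 6, c1 = 6, c2 = 0); the function names are ours:

```python
def pivotal_mechanism(w, c):
    """Given statements w[i] and cost shares c[i] (with sum(c) = C), return the decision
    and, for each individual, whether she is pivotal and the Clarke tax she pays."""
    C = sum(c)
    carry_out = sum(w) > C
    result = []
    for i in range(len(w)):
        others_w = sum(w) - w[i]
        others_c = C - c[i]
        # decision that the restricted society (without i) would make
        decision_without_i = others_w > others_c
        pivotal = decision_without_i != carry_out
        tax = abs(others_w - others_c) if pivotal else 0
        result.append((pivotal, tax))
    return carry_out, result

w = [19, -15]      # suppose both individuals report truthfully
c = [6, 0]
print(pivotal_mechanism(w, c))
# (False, [(False, 0), (True, 13)]): the project is not carried out,
# individual 1 is not pivotal, individual 2 is pivotal and pays a tax of 13.
```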
2.5.1 IDSDS
The Iterated Deletion of Strictly Dominated Strategies (IDSDS) is the following procedure
or algorithm. Given a finite ordinal strategic-form game G, let G1 be the game obtained by
removing from G, for every Player i, those strategies of Player i (if any) that are strictly
dominated in G by some other strategy; let G2 be the game obtained by removing from G1 ,
for every Player i, those strategies of Player i (if any) that are strictly dominated in G1 by
some other strategy, and so on. Let G∞ be the output of this procedure. Since the initial
game G is finite, G∞ will be obtained in a finite number of steps.
Figure 2.12 illustrates this procedure. If G∞ contains a single strategy profile (this is
not the case in the example of Figure 2.12), then we call that strategy profile the iterated
strict dominant-strategy profile (or solution). If G∞ contains two or more strategy profiles
then we refer to those strategy profiles merely as the output of the IDSDS procedure. For
example, in the game of Figure 2.12 the output of the IDSDS procedure is the set of
strategy profiles {(A, e), (A, f ), (B, e), (B, f )}.
What is the significance of the output of the IDSDS procedure? Consider game G of
Figure 2.12. Since, for Player 2, h is strictly dominated by g, if Player 2 is rational she
will not play h. Thus, if Player 1 believes that Player 2 is rational then he believes that
Player 2 will not play h, that is, he restricts attention to game G1 ; since, in G1 , D is strictly
dominated by C for Player 1, if Player 1 is rational he will not play D. It follows that if
Player 2 believes that Player 1 is rational and that Player 1 believes that Player 2 is rational,
then Player 2 restricts attention to game G2 where rationality requires that Player 2 not play
g, etc. It will be shown in a later chapter that if there is common knowledge of rationality,8
then only strategy profiles that survive the IDSDS procedure can be played; the converse
is also true: any strategy profile that survives the IDSDS procedure is compatible with
common knowledge of rationality.
                              Player 2
                    e        f        g        h
            A     6  3     4  4     4  1     3  0
Player 1    B     5  4     6  3     0  2     5  1
            C     5  0     3  2     6  1     4  0
            D     2  0     2  3     3  3     6  1

                          G = G0

Step 1: delete h (strictly dominated by g for Player 2), obtaining G1.
Step 2: in G1, delete D (strictly dominated by C for Player 1), obtaining G2.
Step 3: in G2, delete g (strictly dominated by f for Player 2), obtaining G3.
Step 4: in G3, delete C (strictly dominated by A for Player 1), obtaining

                              Player 2
                    e        f
Player 1    A     6  3     4  4
            B     5  4     6  3

                          G4 = G∞

Figure 2.12: An illustration of the IDSDS procedure.
R In finite games, the order in which strictly dominated strategies are deleted is irrelevant,
in the sense that any sequence of deletions of strictly dominated strategies leads to
the same output.
8 An event E is commonly known if everybody knows E and everybody knows that everybody knows E
and everybody knows that everybody knows that everybody knows E, and so on.
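The IDSDS algorithm is straightforward to implement for two-player games in reduced form. A Python sketch (representation and function names are ours), run on the game G of Figure 2.12:

```python
def idsds(rows, cols, payoffs):
    """Iteratively delete strictly dominated strategies in a two-player game.
    payoffs maps (row, col) to the pair (payoff of Player 1, payoff of Player 2)."""
    rows, cols = list(rows), list(cols)
    while True:
        # at each step, delete (for both players at once) every strictly dominated strategy
        surviving_rows = [r for r in rows
                          if not any(all(payoffs[(r2, c)][0] > payoffs[(r, c)][0] for c in cols)
                                     for r2 in rows if r2 != r)]
        surviving_cols = [c for c in cols
                          if not any(all(payoffs[(r, c2)][1] > payoffs[(r, c)][1] for r in rows)
                                     for c2 in cols if c2 != c)]
        if surviving_rows == rows and surviving_cols == cols:
            return rows, cols
        rows, cols = surviving_rows, surviving_cols

# Game G of Figure 2.12.
payoffs = {("A","e"): (6,3), ("A","f"): (4,4), ("A","g"): (4,1), ("A","h"): (3,0),
           ("B","e"): (5,4), ("B","f"): (6,3), ("B","g"): (0,2), ("B","h"): (5,1),
           ("C","e"): (5,0), ("C","f"): (3,2), ("C","g"): (6,1), ("C","h"): (4,0),
           ("D","e"): (2,0), ("D","f"): (2,3), ("D","g"): (3,3), ("D","h"): (6,1)}
print(idsds(["A","B","C","D"], ["e","f","g","h"], payoffs))
# (['A', 'B'], ['e', 'f']): the output G-infinity shown in Figure 2.12
```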
2.5.2 IDWDS
The Iterated Deletion of Weakly Dominated Strategies (IDWDS) is a weakening of IDSDS
in that it allows the deletion also of weakly dominated strategies. However, this procedure
has to be defined carefully, since in this case the order of deletion can matter. To see this,
consider the game shown in Figure 2.13.
                    Player 2
                 L           R
            A   4   0       0   0
Player 1    T   3   2       2   2
            M   1   1       0   0
            B   0   0       1   1

Figure 2.13: A game in which the order of deletion of weakly dominated strategies matters.
Since M is strictly dominated by T for Player 1, we can delete it and obtain the reduced
game shown in Figure 2.14
Player 2
L R
A 4 0 0 0
Player 1 T 3 2 2 2
B 0 0 1 1
Now L is weakly dominated by R for Player 2. Deleting L we are left with the reduced
game shown in Figure 2.15.
Player 2
R
A 0 0
Player 1 T 2 2
B 1 1
Now A and B are strictly dominated by T. Deleting them we are left with (T, R) , with
corresponding payoffs (2,2).
Alternatively, going back to the game of Figure 2.13, we could note that B is strictly
dominated by T; deleting B we are left with
Player 2
L R
A 4 0 0 0
Player 1 T 3 2 2 2
M 1 1 0 0
Now R is weakly dominated by L for Player 2. Deleting R we are left with the reduced
game shown in Figure 2.17.
Player 2
L
A 4 0
Player 1 T 3 2
M 1 1
Now T and M are strictly dominated by A and deleting them leads to (A, L) with corre-
sponding payoffs (4,0). Since one order of deletion leads to (T, R) with payoffs (2,2) and
the other to (A, L) with payoffs (4,0), the procedure is not well defined.
Definition 2.5.1 — IDWDS. In order to avoid the problem illustrated above, the IDWDS
procedure is defined as follows: at every step identify, for every player, all the strategies
that are weakly (or strictly) dominated and then delete all such strategies in that step. If
the output of the IDWDS procedure is a single strategy profile then we call that strategy
profile the iterated weak dominant-strategy profile (or solution) (otherwise we just use
the expression ‘output of the IDWDS procedure’).
For example, the IDWDS procedure when applied to the game of Figure 2.13 leads to
the set of strategy profiles shown in Figure 2.18, namely {(A, L), (A, R), (T, L), (T, R)}.9
Player 2
L R
A 4 0 0 0
Player 1
T 3 2 2 2
Figure 2.18: The output of the IDWDS procedure applied to the game of Figure 2.13.
Hence the game of Figure 2.13 does not have an iterated weak dominant-strategy profile
(or solution).
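A sketch of the IDWDS procedure of Definition 2.5.1 (at every step all weakly dominated strategies of every player are identified first and then deleted simultaneously), run on the game of Figure 2.13; the representation and function names are ours:

```python
def weakly_dominated(my_payoffs, strategies, others):
    """Strategies of one player that are weakly dominated, given the current strategy sets.
    my_payoffs[(own, other)] is that player's own payoff."""
    dominated = set()
    for b in strategies:
        for a in strategies:
            if a == b:
                continue
            ge = all(my_payoffs[(a, o)] >= my_payoffs[(b, o)] for o in others)
            gt = any(my_payoffs[(a, o)] >  my_payoffs[(b, o)] for o in others)
            if ge and gt:
                dominated.add(b)
    return dominated

def idwds(rows, cols, payoffs):
    rows, cols = list(rows), list(cols)
    while True:
        p1 = {(r, c): payoffs[(r, c)][0] for r in rows for c in cols}
        p2 = {(c, r): payoffs[(r, c)][1] for r in rows for c in cols}
        bad_rows = weakly_dominated(p1, rows, cols)
        bad_cols = weakly_dominated(p2, cols, rows)
        if not bad_rows and not bad_cols:
            return rows, cols
        rows = [r for r in rows if r not in bad_rows]
        cols = [c for c in cols if c not in bad_cols]

# The game of Figure 2.13.
payoffs = {("A","L"): (4,0), ("A","R"): (0,0), ("T","L"): (3,2), ("T","R"): (2,2),
           ("M","L"): (1,1), ("M","R"): (0,0), ("B","L"): (0,0), ("B","R"): (1,1)}
print(idwds(["A","T","M","B"], ["L","R"], payoffs))
# (['A', 'T'], ['L', 'R']): the output shown in Figure 2.18
```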
The interpretation of the output of the IDWDS procedure is not as simple as that of the
IDSDS procedure: certainly common knowledge of rationality is not sufficient. In order
to delete weakly dominated strategies one needs to appeal not only to rationality but also
to some notion of caution: a player should not completely rule out any of her opponents’
strategies. However, this notion of caution is in direct conflict with the process of deletion
of strategies. In this book we shall not address the issue of how to interpret or justify the
IDWDS procedure.
Definition 2.6.1 Given an ordinal game in strategic form with two players, a strategy
profile s∗ = (s∗1 , s∗2 ) ∈ S1 × S2 is a Nash equilibrium if the following two conditions are
satisfied:
1. for every s1 ∈ S1 , π1 (s∗1 , s∗2 ) ≥ π1 (s1 , s∗2 ) (or stated in terms of outcomes and
preferences, f (s∗1 , s∗2 ) ≿1 f (s1 , s∗2 )), and
2. for every s2 ∈ S2 , π2 (s∗1 , s∗2 ) ≥ π2 (s∗1 , s2 ) (or, f (s∗1 , s∗2 ) ≿2 f (s∗1 , s2 )).
9 Note that the output of the IDWDS procedure is a subset of the output of the IDSDS procedure (not
necessarily a proper subset; for example, in the game of Figure 2.13 the two procedures yield the same
output).
In the game of Figure 2.19 there are two Nash equilibria: (T,L) and (B,C).
(T,L) is a Nash equilibrium because (1) π1 (T, L) = 3 = π1 (M, L) and π1 (T, L) = 3 >
π1 (B, L) = 1 and (2) π2 (T, L) = 2 > π2 (T,C) = 0 and π2 (T, L) = 2 > π2 (T, R) = 1.
(B,C) is a Nash equilibrium because (1) π1 (B,C) = 2 > π1 (M,C) = 1 and π1 (B,C) = 2 >
π1 (T,C) = 0 and (2) π2 (B,C) = 3 > π2 (B, L) = 0 and π2 (B,C) = 3 > π2 (B, R) = 0.
No other strategy profile in the game of Figure 2.19 is a Nash equilibrium.
                    Player 2
                 L          C          R
            T   3   2      0   0      1   1
Player 1    M   3   0      1   5      4   4
            B   1   0      2   3      3   0

Figure 2.19: A game with two Nash equilibria, (T, L) and (B, C).
“Self-enforcing agreement” interpretation: imagine that the players are able to com-
municate before playing the game and reach a non-binding agreement expressed as a
strategy profile s∗ ; then no player will have an incentive to deviate from the agreement (if
she believes that the other player will follow the agreement) if and only if s∗ is a Nash
equilibrium.
“Rationality with correct beliefs” interpretation: suppose that Player 1 believes that
Player 2 will choose y and she herself chooses x and, symmetrically, Player 2 believes that
Player 1 will choose x and he himself chooses y, then, if both players have correct beliefs
and their choices are rational, (x, y) is a Nash equilibrium.
It should be clear that all of the above interpretations are just verbal translations of the
formal definition of Nash equilibrium in terms of the inequalities given in Definition 2.6.1.
The generalization of Definition 2.6.1 to games with more than two players is straight-
forward.
Definition 2.6.2 Given an ordinal game in strategic form with n players, a strategy
profile s∗ ∈ S is a Nash equilibrium if the following n inequalities are satisfied: for every
Player i = 1, . . . , n,

      πi (s∗i , s∗−i ) ≥ πi (si , s∗−i ), for every si ∈ Si .
What is the relationship between the notion of Nash equilibrium and the solution
concepts defined in Section 2.5? Fix an ordinal strategic-form game G and let S(G) be
the set of strategy profiles in G. Let NE(G) ⊆ S(G) be the (possibly empty) set of Nash
equilibria of G, let IDSDS(G) ⊆ S(G) be the output of the iterated deletion of strictly
dominated strategies (IDSDS) and let IDWDS(G) ⊆ S(G) be the output of the iterated
deletion of weakly dominated strategies (IDWDS). The following theorem is proved in
Section 2.8.
Theorem 2.6.1 For every ordinal strategic-form game G,
NE(G) ⊆ IDSDS(G).
On the other hand, it is possible that NE(G)̸= ∅ and yet NE(G) ∩ IDWDS(G)= ∅.
In the case where IDSDS(G) is a singleton, that is, IDSDS(G) = {s}, the strategy profile s
is a strict dominant-strategy solution of G; similarly, if IDWDS(G) = {s}, then s is a weak
dominant-strategy solution of G (see Definition 2.2.3). The next theorem, which is proved
in Section 2.8, says that (A) a strict dominant-strategy solution is a strict Nash equilibrium
(s∗ is a strict Nash equilibrium if, for every Player i, πi (s∗i , s∗−i ) > πi (si , s∗−i ) for all si ∈
Si \ {s∗i }) and (B) a weak dominant-strategy solution is a Nash equilibrium.
Theorem 2.6.2 Let G be an ordinal strategic-form game and s ∈ S(G) a strategy profile
in G.
(A) If IDSDS(G) = {s} then s is a strict Nash equilibrium.
(B) If IDWDS(G) = {s} then s is a Nash equilibrium.
Definition 2.6.3 Consider an ordinal game in strategic form, a Player i and a strategy
profile s−i ∈ S−i of the players other than i. A strategy si ∈ Si of Player i is a best reply
(or best response) to s−i if πi (si , s−i ) ≥ πi (s′i , s−i ), for every s′i ∈ Si .
For example, in the game of Figure 2.20, for Player 1 there are two best replies to L,
namely M and T , while the unique best reply to C is B and the unique best reply to R is M;
for Player 2 the best reply to T is L, the best reply to M is C and the best reply to B is C.
                    Player 2
                 L          C          R
            T   3   2      0   0      1   1
Player 1    M   3   0      1   5      4   4
            B   1   0      2   3      3   0

Figure 2.20: Copy of the game of Figure 2.19.
A quick way to find the Nash equilibria of a two-player game is as follows: in each column
of the table underline the largest payoff of Player 1 in that column (if there are several
instances, underline them all) and in each row underline the largest payoff of Player 2 in
that row; if a cell has both payoffs underlined then the corresponding strategy profile is
a Nash equilibrium. Underlining of the maximum payoff of Player 1 in a given column
identifies the best reply of Player 1 to the strategy of Player 2 that labels that column and
similarly for Player 2. This procedure is illustrated in Figure 2.21, where there is a unique
Nash equilibrium, namely (B, E).
                    Player 2
                 E          F          G          H
            A   4   0      3   2      2   3      4   1
Player 1    B   4   2      2   1      1   2      0   2
            C   3   6      5   5      3   1      5   0
            D   2   3      3   2      1   2      3   3

Figure 2.21: A game with a unique Nash equilibrium, namely (B, E).
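The underlining procedure amounts to computing each player's best replies and intersecting them. A Python sketch for two-player tables (names ours), run on the game of Figure 2.21:

```python
def nash_equilibria(rows, cols, payoffs):
    """All pure-strategy Nash equilibria of a two-player game in reduced form."""
    # the largest payoff of Player 1 in each column and of Player 2 in each row
    best1 = {c: max(payoffs[(r, c)][0] for r in rows) for c in cols}
    best2 = {r: max(payoffs[(r, c)][1] for c in cols) for r in rows}
    return [(r, c) for r in rows for c in cols
            if payoffs[(r, c)][0] == best1[c] and payoffs[(r, c)][1] == best2[r]]

# The game of Figure 2.21.
payoffs = {("A","E"): (4,0), ("A","F"): (3,2), ("A","G"): (2,3), ("A","H"): (4,1),
           ("B","E"): (4,2), ("B","F"): (2,1), ("B","G"): (1,2), ("B","H"): (0,2),
           ("C","E"): (3,6), ("C","F"): (5,5), ("C","G"): (3,1), ("C","H"): (5,0),
           ("D","E"): (2,3), ("D","F"): (3,2), ("D","G"): (1,2), ("D","H"): (3,3)}
print(nash_equilibria(["A","B","C","D"], ["E","F","G","H"], payoffs))  # [('B', 'E')]
```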
Exercise 2.3 in Section 2.9.1 explains how to represent a three-player game by means
of a set of tables. In a three-player game the procedure for finding the Nash equilibria is
the same, with the necessary adaptation for Player 3: in each cell underline the payoff of
Player 3 if and only if her payoff is the largest of all her payoffs in the same cell across
different tables. This is illustrated in Figure 2.22, where there is a unique Nash equilibrium,
namely (B, R,W ).
Unfortunately, when the game has too many players or too many strategies – and it is
thus impossible or impractical to represent it as a set of tables – there is no quick procedure
for finding the Nash equilibria: one must simply apply the definition of Nash equilibrium.
This is illustrated in the following example.
When Player 3 chooses W:
                    Player 2
                 L            R
Player 1    T   0  0  0     2  8  6
            B   5  3  2     3  4  2

When Player 3 chooses E:
                    Player 2
                 L            R
Player 1    T   0  0  0     1  2  5
            B   1  6  1     0  0  1

Figure 2.22: A three-player game with a unique Nash equilibrium, namely (B, R, W).
■ Example 2.1 There are 50 players. A benefactor asks them to simultaneously and
secretly write on a piece of paper a request, which must be a multiple of $10 up to a
maximum of $100 (thus the possible strategies of each player are $10, $20, . . . , $90, $100).
The benefactor will then proceed as follows: if not more than 10% of the players (that is, 5
or fewer players) ask for $100 then he will grant every player’s request, otherwise every
player will get nothing. Assume that every player is selfish and greedy (only cares about
how much money she gets and prefers more money to less). What are the Nash equilibria
of this game? There are several:
• every strategy profile where 7 or more players request $100 is a Nash equilibrium
(everybody gets nothing and no player can get a positive amount by unilaterally
changing her request, since there will still be more than 10% requesting $100; on the
other hand, convince yourself that a strategy profile where exactly 6 players request
$100 is not a Nash equilibrium),
• every strategy profile where exactly 5 players request $100 and the remaining players
request $90 is a Nash equilibrium.
Any other strategy profile is not a Nash equilibrium: (1) if fewer than 5 players request
$100, then a player who requested less than $100 can increase her payoff by switching
to a request of $100, (2) if exactly 5 players request $100 and among the remaining
players there is one who is not requesting $90, then that player can increase her payoff by
increasing her request to $90. ■
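The equilibrium claims of Example 2.1 can be checked by brute force over unilateral deviations. A sketch (the payoff function encodes the benefactor's rule; names ours):

```python
REQUESTS = list(range(10, 101, 10))   # $10, $20, ..., $100

def payoff(profile, i):
    """Player i's payoff: her request if at most 5 players ask for $100, otherwise 0."""
    return profile[i] if sum(1 for r in profile if r == 100) <= 5 else 0

def is_nash(profile):
    # no player can gain by a unilateral change of her request
    return all(payoff(profile, i) >= payoff(profile[:i] + (d,) + profile[i+1:], i)
               for i in range(len(profile)) for d in REQUESTS)

print(is_nash((100,)*5 + (90,)*45))   # True: 5 players ask for $100, the rest ask for $90
print(is_nash((100,)*6 + (90,)*44))   # False: with exactly 6 asking for $100, one of them gains by switching to $90
print(is_nash((100,)*7 + (90,)*43))   # True: with 7 or more asking for $100, nobody can gain by deviating
```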
We conclude this section by noting that, since so far we have restricted attention to ordinal
games, there is no guarantee that an arbitrary game will have at least one Nash equilibrium.
An example of a game that has no Nash equilibria is the Matching Pennies game. This is a
simultaneous two-player game where each player has a coin and decides whether to show
the Heads face or the Tails face. If both choose H or both choose T then Player 1 wins,
otherwise Player 2 wins. Each player strictly prefers the outcome where she herself wins
to the alternative outcome. The game is illustrated in Figure 2.23.
                    Player 2
                 H          T
            H   1   0      0   1
Player 1    T   0   1      1   0

Figure 2.23: The Matching Pennies game.
Consider now a two-player game in which each player chooses a real number greater than or equal to 1, so that S1 = S2 = [1, ∞). Denoting Player 1's choice by x and Player 2's choice by y, the payoffs are π1 (x, y) = x − 1 if x < y and π1 (x, y) = 0 if x ≥ y and, symmetrically, π2 (x, y) = y − 1 if y < x and π2 (x, y) = 0 if y ≥ x. There is only one Nash equilibrium, namely (1,1) with payoffs (0,0). First of all, we must show that (1,1) is indeed a Nash equilibrium.
must show that (1,1) is indeed a Nash equilibrium.
If Player 1 switched to some x > 1 then her payoff would remain 0: π1 (x, 1) = 0, for all
x ∈ [1, ∞) and the same is true for Player 2 if he unilaterally switched to some y > 1 :
π2 (1, y) = 0, for all y ∈ [1, ∞).
Now we show that no other pair (x, y) is a Nash equilibrium.
Consider first an arbitrary pair (x, y) with x = y > 1. Then π1 (x, y) = 0, but if Player 1
switched to an x̂ strictly between 1 and x (1 < x̂ < x) her payoff would be π1 (x̂, y) = x̂ − 1 >
0 (recall that, by hypothesis, x = y).
Now consider an arbitrary (x, y) with x < y. Then π1 (x, y) = x − 1, but if Player 1 switched
to an x̂ strictly between x and y (x < x̂ < y) her payoff would be π1 (x̂, y) = x̂ − 1 > x − 1.
The argument for ruling out pairs (x, y) with y < x is similar.
Note the interesting fact that, for Player 1, x = 1 is a weakly dominated strategy: indeed it
is weakly dominated by any other strategy: x = 1 guarantees a payoff of 0 for Player 1,
while any x̂ > 1 would yield a positive payoff to Player 1 in some cases (against any y > x̂)
and 0 in the remaining cases. The same is true for Player 2. Thus in this game there is a
unique Nash equilibrium where the strategy of each player is weakly dominated!
[Note: the rest of this section makes use of calculus. The reader who is not familiar with
calculus should skip this part.]
We conclude this section with an example based on the analysis of competition among
firms proposed by the French economist Antoine Augustin Cournot in a book published in 1838.
In fact, Cournot is the one who invented what we now call ‘Nash equilibrium’, although
his analysis was restricted to a small class of games. Consider n ≥ 2 firms that produce an
identical product. Let qi be the quantity produced by Firm i (i = 1, . . . , n). For Firm i the
cost of producing qi units of output is ci qi , where ci is a positive constant. For simplicity
we will restrict attention to the case of two firms (n = 2) and identical cost functions:
c1 = c2 = c. Let Q be total industry output, that is, Q = q1 + q2 . The price at which each
firm can sell each unit of output is given by the inverse demand function P = a − bQ where
a and b are positive constants (with a > c). Cournot assumed that each firm was only
interested in its own profit and preferred higher profit to lower profit (that is, each firm is
“selfish and greedy”).
The profit function of Firm 1 is given by
π1 (q1 , q2 ) = Pq1 − cq1 = [a − b(q1 + q2 )] q1 − cq1 = (a − c)q1 − b(q1 )2 − bq1 q2 .
Similarly, the profit function of Firm 2 is given by
π2 (q1 , q2 ) = (a − c)q2 − b(q2 )2 − bq1 q2
Cournot defined an equilibrium as a pair (q̄1 , q̄2 ) that satisfies the following two inequalities:

      π1 (q̄1 , q̄2 ) ≥ π1 (q1 , q̄2 ), for every q1 ≥ 0       (♣)
      π2 (q̄1 , q̄2 ) ≥ π2 (q̄1 , q2 ), for every q2 ≥ 0.      (♦)

Of course, this is the same as saying that (q̄1 , q̄2 ) is a Nash equilibrium of the game
where the players are the two firms, the strategy sets are S1 = S2 = [0, ∞) and the payoff
functions are the profit functions. How do we find a Nash equilibrium? First of all,
note that the profit functions are differentiable. Secondly note that (♣) says that, having
fixed the value of q2 at q̄2 , the function π1 (q1 , q̄2 ) – viewed as a function of q1 alone –
is maximized at the point q1 = q̄1 . A necessary condition for this (if q̄1 > 0) is that the
partial derivative of this function with respect to q1 be zero at the point q̄1 , that is, it must
be that ∂π1 /∂q1 (q̄1 , q̄2 ) = 0. This condition is also sufficient, since the second derivative of this
function is always negative (∂²π1 /∂q1 ² (q1 , q2 ) = −2b for every (q1 , q2 )). Similarly, by (♦), it
must be that ∂π2 /∂q2 (q̄1 , q̄2 ) = 0. Thus the Nash equilibrium is found by solving the system of
two equations

      ∂π1 /∂q1 (q̄1 , q̄2 ) = a − c − 2b q̄1 − b q̄2 = 0
      ∂π2 /∂q2 (q̄1 , q̄2 ) = a − c − 2b q̄2 − b q̄1 = 0.

The solution is q̄1 = q̄2 = (a − c)/(3b). The corresponding price is P = a − b [2(a − c)/(3b)] = (a + 2c)/3 and
the corresponding profits are π1 (q̄1 , q̄2 ) = π2 (q̄1 , q̄2 ) = (a − c)²/(9b).
For example, if a = 25, b = 2, c = 1 then the Nash equilibrium is given by (4,4) with
corresponding profits of 32 for each firm. The analysis can easily be extended to the case
of more than two firms.
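The numbers in this example can be checked with a short computation using the closed-form best-reply function obtained from the first-order condition (a sketch with the parameter values of the example; names ours):

```python
a, b, c = 25, 2, 1

def profit(q_own, q_other):
    return (a - c) * q_own - b * q_own**2 - b * q_own * q_other

def best_reply(q_other):
    """From the first-order condition a - c - 2*b*q_own - b*q_other = 0."""
    return max((a - c - b * q_other) / (2 * b), 0)

q_star = (a - c) / (3 * b)
print(q_star, best_reply(q_star))   # 4.0 4.0: each firm's output is a best reply to the other's
print(profit(q_star, q_star))       # 32.0, the equilibrium profit of each firm
print(a - b * 2 * q_star)           # 9.0 = (a + 2c)/3, the equilibrium price
```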
The reader who is interested in further exploring the topic of competition among firms
can consult any textbook on Industrial Organization.
Theorem [Vickrey, 1961] In a second-price auction, if Player i is selfish and greedy (as
specified in (2.1)) then it is a weakly dominant strategy for Player i to bid her true value,
that is, to choose bi = vi .
Proof. In order to make the notation simpler and the argument more transparent, we give
the proof for the case where n = 2. We shall prove that bidding v1 is a weakly dominant
strategy for Player 1 (the proof for Player 2 is similar). Assume that Player 1 is selfish and
greedy. Then we can take her payoff function to be as follows:

      π1 (b1 , b2 ) = v1 − b2 if b1 ≥ b2 ,   and   π1 (b1 , b2 ) = 0 if b1 < b2 .
We need to show that, whatever bid Player 2 submits, Player 1 cannot get a higher payoff
by submitting a bid different from v1 . Two cases are possible (recall that b2 denotes the
actual bid of Player 2, which is unknown to Player 1).
Case 1: b2 ≤ v1 . In this case, bidding v1 makes Player 1 the winner and his payoff
is v1 − b2 ≥ 0. Consider a different bid b1 . If b1 ≥ b2 then Player 1 is still the winner
and his payoff is still v1 − b2 ≥ 0; thus such a bid is as good as (hence not better
than) v1 . If b1 < b2 then the winner is Player 2 and Player 1 gets a payoff of 0. Thus
such a bid is also not better than v1 .
Case 2: b2 > v1 . In this case, bidding v1 makes Player 2 the winner and thus Player
1 gets a payoff of 0. Any other bid b1 < b2 gives the same outcome and payoff. On
the other hand, any bid b1 ≥ b2 makes Player 1 the winner, giving him a payoff of
v1 − b2 < 0, thus making Player 1 worse off than with a bid of v1 .
■
Theorem [Clarke, 1971] In the pivotal mechanism (under the assumed preferences) truthful
revelation (that is, stating wi = vi ) is a weakly dominant strategy for every Player i.
Proof. Consider an individual i and possible statements w j for j ̸= i. Several cases are
possible.
Case 1: ∑_{j≠i} wj > ∑_{j≠i} cj .

• Suppose first that vi + ∑_{j≠i} wj > C, so that if i states vi the decision is Yes, i is not pivotal, she pays no tax and her final utility is mi + vi − ci . Any statement wi such that wi + ∑_{j≠i} wj > C leads to the same decision, no tax and thus the same utility. A statement wi such that wi + ∑_{j≠i} wj ≤ C leads to the decision No; in this case i is pivotal, pays the tax ∑_{j≠i} wj − ∑_{j≠i} cj and her final utility is mi − ( ∑_{j≠i} wj − ∑_{j≠i} cj ). Individual i cannot gain by lying if and only if mi + vi − ci ≥ mi − ( ∑_{j≠i} wj − ∑_{j≠i} cj ), i.e. if and only if vi + ∑_{j≠i} wj ≥ C, which is true by our hypothesis.

• Suppose now that vi + ∑_{j≠i} wj ≤ C, so that if i states vi the decision is No; i is pivotal, pays the tax ∑_{j≠i} wj − ∑_{j≠i} cj and her final utility is mi − ( ∑_{j≠i} wj − ∑_{j≠i} cj ). Any statement wi such that wi + ∑_{j≠i} wj ≤ C yields the same utility. A statement wi such that wi + ∑_{j≠i} wj > C leads to the decision Yes; in this case i is not pivotal, pays no tax and her final utility is mi + vi − ci . Individual i cannot gain by lying if and only if mi − ( ∑_{j≠i} wj − ∑_{j≠i} cj ) ≥ mi + vi − ci , i.e. if and only if vi + ∑_{j≠i} wj ≤ C, which is true by our hypothesis.

Case 2: ∑_{j≠i} wj ≤ ∑_{j≠i} cj .

• Suppose first that vi + ∑_{j≠i} wj ≤ C, so that if i states vi the decision is No, i is not pivotal, she pays no tax and her final utility is mi . Any statement wi such that wi + ∑_{j≠i} wj ≤ C yields the same utility. A statement wi such that wi + ∑_{j≠i} wj > C leads to the decision Yes; in this case i is pivotal (recall that ∑_{j≠i} wj ≤ ∑_{j≠i} cj ), pays the tax ∑_{j≠i} cj − ∑_{j≠i} wj and her final utility is mi + vi − ci − ( ∑_{j≠i} cj − ∑_{j≠i} wj ). Individual i cannot gain by lying if and only if mi ≥ mi + vi − ci − ( ∑_{j≠i} cj − ∑_{j≠i} wj ), i.e. if and only if vi + ∑_{j≠i} wj ≤ C, which is true by our hypothesis.

• Suppose now that vi + ∑_{j≠i} wj > C, so that if i states vi the decision is Yes; i is pivotal, pays the tax ∑_{j≠i} cj − ∑_{j≠i} wj and her final utility is mi + vi − ci − ( ∑_{j≠i} cj − ∑_{j≠i} wj ). Any statement wi such that wi + ∑_{j≠i} wj > C yields the same utility. A statement wi such that wi + ∑_{j≠i} wj ≤ C leads to the decision No; in this case i is not pivotal, pays no tax and her final utility is mi . Individual i cannot gain by lying if and only if mi + vi − ci − ( ∑_{j≠i} cj − ∑_{j≠i} wj ) ≥ mi , i.e. if and only if vi + ∑_{j≠i} wj ≥ C, which is true by our hypothesis.
Since we have covered all the possible cases, the proof is complete. ■
Theorem. For every ordinal strategic-form game G, NE(G) ⊆ IDSDS(G). On the other
hand, it is possible that NE(G)̸= ∅ and yet NE(G) ∩ IDWDS(G)= ∅.
Proof. Fix an arbitrary game G. First we show that NE(G) ⊆ IDSDS(G). If NE(G) = ∅
there is nothing to prove. Assume, therefore, that NE(G) ̸= ∅ and let s∗ = (s∗1 , . . . , s∗n ) ∈
NE(G). We need to show that s∗ ∈ IDSDS(G). Suppose not. Then there is a step in
the IDSDS procedure at which the strategy s∗i of some Player i is deleted. Let k be the
first such step, that is, letting Gk be the game obtained after implementing step k of the
procedure, the strategy profile s∗ is in game Gk−1 , while the strategy s∗i of Player i is not
in Gk . Then there must be a strategy ŝi of Player i in Gk−1 that strictly dominates s∗i , in
particular, it must be that πi (s∗i , s∗−i ) < πi (ŝi , s∗−i ) contradicting the hypothesis that s∗ is a
Nash equilibrium.
To prove the second part of the theorem it is sufficient to construct a game G such
that NE(G)̸= ∅ and NE(G) ∩ IDWDS(G) = ∅. Let G be the game shown in Figure 2.24.
Then NE(G) = {(C, F)}. Since, for Player 1, C is weakly dominated by A (and also by B),
and, for Player 2, F is weakly dominated by D (and also by E), the output of the IDWDS
procedure applied to this game is the set of strategy profiles {(A, D), (A, E), (B, D) (B, E)},
which has an empty intersection with NE(G). ■
Player 2
D E F
A 1 0 0 1 0 0
Player
B 0 1 1 0 0 0
1
C 0 0 0 0 0 0
Figure 2.24: A game with only one Nash equilibrium, namely (C, F), which is eliminated
by the IDWDS procedure.
Proof. (A) Fix an ordinal game G and let S be the set of strategy profiles in G and, for
every player i, let Si be the set of strategies of player i in G. Let IDSDS(G) = {s∗ } (thus,
s∗i ∈ Si , for every player i). We need to show that, for every player i,

      πi (s∗i , s∗−i ) > πi (si , s∗−i ), for every si ∈ Si \ {s∗i }.          (2.2)
Fix an arbitrary player i. If Si = {s∗i } then there is nothing to prove. Assume, therefore, that
the cardinality of Si is at least 2 (that is, Player i has at least one other strategy besides s∗i ).
Let m ≥ 1 be the number of steps that lead from G to the output of the IDSDS procedure
(thus, step m is the last step; it is possible that m = 0, that is, that S = {s∗ }, in which case
(2.2) is trivially true). Let G0 = G and, for every k ∈ {1, . . . , m}, let Gk be the reduced game obtained after step k of the procedure and, for every player i, let S_i^k be the set of strategies of player i in game Gk. Since S_i^m = {s∗i }, there must be a step k ≤ m in the procedure such that S_i^{k−1} is a proper superset of {s∗i } and S_i^k = {s∗i }. Then it must be that, for player i, s∗i strictly dominates every other strategy in S_i^{k−1}; in particular,

      πi (s∗i , s∗−i ) > πi (si , s∗−i ), for every si ∈ S_i^{k−1} \ {s∗i }.          (2.3)

If S_i^{k−1} = Si then the proof is complete. Suppose, therefore, that S_i^{k−1} is a proper subset of Si. Then there is an earlier step j < k in the procedure where S_i^j is a proper superset of S_i^{k−1} and S_i^{j+1} = S_i^{k−1}. Then it must be that every strategy si ∈ S_i^j \ S_i^{k−1} is strictly dominated by some strategy in S_i^{k−1}. It follows from this and (2.3) that πi (s∗i , s∗−i ) > πi (si , s∗−i ), for every si ∈ S_i^j \ {s∗i }. Repeating this argument (by visiting, if necessary, earlier steps) we obtain (2.2).
(B) The proof of this part is essentially the same as the proof of Part (A): the only modifica-
tion is the replacement of strict inequalities in (2.2) and (2.3) with weak inequalities. ■
2.9 Exercises
2.9.1 Exercises for Section 2.1: Game frames and games
The answers to the following exercises are in Section 2.10 at the end of this chapter.
Exercise 2.1 Antonia and Bob cannot decide where to go to dinner. Antonia proposes
the following procedure: she will write on a piece of paper either the number 2 or the
number 4 or the number 6, while Bob will write on his piece of paper either the number
1 or 3 or 5. They will write their numbers secretly and independently. They then will
show each other what they wrote and choose a restaurant according to the following
rule: if the sum of the two numbers is 5 or less, they will go to a Mexican restaurant;
if the sum is 7, they will go to an Italian restaurant; and if the sum is 9 or more, they will
go to a Japanese restaurant.
(a) Let Antonia be Player 1 and Bob Player 2. Represent this situation as a game
frame, first by writing out each element of the quadruple of Definition 2.1.1 and
then by using a table (label the rows with Antonia’s strategies and the columns
with Bob’s strategies, so that we can think of Antonia as choosing the row and
Bob as choosing the column).
(b) Suppose that Antonia and Bob have the following preferences (where M stands
for ‘Mexican’, I for ‘Italian’ and J for ‘Japanese’):
for Antonia: M ≻Antonia I ≻Antonia J; for Bob: I ≻Bob M ≻Bob J.
Using utility functions with values 1, 2 and 3, represent the corresponding reduced
game as a table.
■
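For readers who want to double-check the outcome function, here is a small illustrative sketch of the restaurant rule of Exercise 2.1 (the function name is ours; the code only tabulates outcomes and does not address the preference questions).

```python
# Illustrative sketch of the restaurant rule of Exercise 2.1.
def restaurant(total):
    if total <= 5:
        return 'M'        # Mexican
    if total == 7:
        return 'I'        # Italian
    return 'J'            # Japanese (the sum is 9 or more)

for a in (2, 4, 6):                                      # Antonia's possible numbers
    print(a, [restaurant(a + b) for b in (1, 3, 5)])     # Bob's possible numbers
# 2 ['M', 'M', 'I']
# 4 ['M', 'I', 'J']
# 6 ['I', 'J', 'J']
```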
Exercise 2.2 Consider the following two-player game-frame where each player is given
a set of cards and each card has a number on it. The players are Antonia (Player 1) and
Bob (Player 2). Antonia’s cards have the following numbers (one number on each card):
2, 4 and 6, whereas Bob’s cards are marked 0, 1 and 2 (thus different numbers from
the previous exercise). Antonia chooses one of her own cards and Bob chooses one of
his own cards: this is done without knowing the other player’s choice. The outcome
depends on the sum of the points of the chosen cards. If the sum of the points on the two
chosen cards is greater than or equal to 5, Antonia gets $(10 minus that sum); otherwise
(that is, if the sum is less than 5) she gets nothing; furthermore, if the sum of points is
an odd number, Bob gets as many dollars as that sum; if the sum of points turns out to
be an even number and is less than or equal to 6, Bob gets $2; otherwise he gets nothing.
(The money comes from a third party.)
(a) Represent the game-frame described above by means of a table. As in the previous
exercise, assign the rows to Antonia and the columns to Bob.
(b) Using the game-frame of Part (a) obtain a reduced game by adding the information
that each player is selfish and greedy. This means that each player only cares
about how much money he/she gets and prefers more money to less.
■
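As an illustrative aid, the monetary rule of Exercise 2.2 can be tabulated with a few lines of code (the function name is illustrative and the sketch does not answer the exercise).

```python
# Illustrative sketch of the monetary rule of Exercise 2.2.
def money(a, b):
    s = a + b
    antonia = 10 - s if s >= 5 else 0      # Antonia gets $(10 - sum) when the sum is at least 5
    if s % 2 == 1:
        bob = s                            # odd sum: Bob gets the sum
    elif s <= 6:
        bob = 2                            # even sum of at most 6: Bob gets $2
    else:
        bob = 0                            # even sum greater than 6: Bob gets nothing
    return antonia, bob

for a in (2, 4, 6):                          # Antonia's cards
    print([money(a, b) for b in (0, 1, 2)])  # Bob's cards
# [(0, 2), (0, 3), (0, 2)]
# [(0, 2), (5, 5), (4, 2)]
# [(4, 2), (3, 7), (2, 0)]
```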
Exercise 2.3 Alice (Player 1), Bob (Player 2), and Charlie (Player 3) play the following
simultaneous game. They are sitting in different rooms facing a keyboard with only one
key and each has to decide whether or not to press the key. Alice wins if the number of
people who press the key is odd (that is, all three of them or only Alice or only Bob or
only Charlie), Bob wins if exactly two people (he may be one of them) press the key
and Charlie wins if nobody presses the key.
(a) Represent this situation as a game-frame. Note that we can represent a three-
player game with a set of tables: Player 1 chooses the row, Player 2 chooses the
column and Player 3 chooses the table (that is, we label the rows with Player 1’s
strategies, the columns with Player 2’s strategies and the tables with Player 3’s
strategies).
(b) Using the game-frame of Part (a) obtain a reduced game by adding the information
that each player prefers winning to not winning and is indifferent between any
two outcomes where he/she does not win. For each player use a utility function
with values from the set {0, 1}.
(c) Using the game-frame of Part (a) obtain a reduced game by adding the information
that (1) each player prefers winning to not winning, (2) Alice is indifferent
between any two outcomes where she does not win, (3) conditional on not
winning, Bob prefers if Charlie wins rather than Alice, (4) conditional on not
winning, Charlie prefers if Bob wins rather than Alice. For each player use a
utility function with values from the set {0, 1, 2}.
■
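The winning rule of Exercise 2.3 can likewise be sketched in code (illustrative only; the function name is ours).

```python
# Illustrative sketch of the winning rule of Exercise 2.3.
from itertools import product

def winner(alice, bob, charlie):
    """Each argument is True if that player presses the key."""
    pressed = sum([alice, bob, charlie])
    if pressed % 2 == 1:
        return 'Alice'          # an odd number of players pressed the key
    if pressed == 2:
        return 'Bob'            # exactly two players pressed the key
    return 'Charlie'            # nobody pressed the key

for profile in product((True, False), repeat=3):
    print(profile, winner(*profile))
```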
Exercise 2.4 There are two players. Each player is given an unmarked envelope and
asked to put in it either nothing or $300 of his own money or $600 of his own money. A
referee collects the envelopes, opens them, gathers all the money, then adds 50% of that
amount (using his own money) and divides the total into two equal parts which he then
distributes to the players.
(a) Represent this game frame with two alternative tables: the first table showing in
each cell the amount of money distributed to Player 1 and the amount of money
distributed to Player 2, the second table showing the change in wealth of each
player (money received minus contribution).
(b) Suppose that Player 1 has some animosity towards the referee and ranks the
outcomes in terms of how much money the referee loses (the more, the better),
while Player 2 is selfish and greedy and ranks the outcomes in terms of her own
net gain. Represent the corresponding game using a table.
(c) In the game of Part (b), is there a strict dominant-strategy profile?
■
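The referee's rule of Exercise 2.4 is easy to tabulate; the following sketch is illustrative only (the function name is ours) and does not answer parts (b) or (c).

```python
# Illustrative sketch of the referee's rule of Exercise 2.4.
def outcome(c1, c2):
    """c1 and c2 are the players' contributions (0, 300 or 600 dollars)."""
    total = 1.5 * (c1 + c2)                 # the referee adds 50% of the collected amount
    share = total / 2                       # and gives each player half of the total
    received = (share, share)
    net_change = (share - c1, share - c2)
    return received, net_change

# For instance, outcome(300, 600) returns ((675.0, 675.0), (375.0, 75.0)).
```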
Exercise 2.5 Consider again the game of Part (b) of Exercise 2.1 (Figure 2.28).
(a) Determine, for each player, whether the player has strictly dominated strategies.
(b) Determine, for each player, whether the player has weakly dominated strategies.
■
Exercise 2.6 There are three players. Each player is given an unmarked envelope and
asked to put in it either nothing or $3 of his own money or $6 of his own money. A
referee collects the envelopes, opens them, gathers all the money and then doubles the
amount (using his own money) and divides the total into three equal parts which he then
distributes to the players.
For example, if Players 1 and 2 put nothing and Player 3 puts $6, then the referee adds
another $6 so that the total becomes $12, divides this sum into three equal parts and
gives $4 to each player.
Each player is selfish and greedy, in the sense that he ranks the outcomes exclusively
in terms of his net change in wealth (what he gets from the referee minus what he
contributed).
(a) Represent this game by means of a set of tables. (Do not treat the referee as a
player.)
(b) For each player and each pair of strategies determine if one of the two dominates
the other and specify if it is weak or strict dominance.
(c) Is there a strict dominant-strategy profile?
■
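The following illustrative sketch (with a function name of our choosing) reproduces the referee's rule of Exercise 2.6 and the numerical example given above.

```python
# Illustrative sketch of the referee's rule of Exercise 2.6.
def net_gains(c1, c2, c3):
    """c1, c2, c3 are the contributions (0, 3 or 6 dollars)."""
    total = 2 * (c1 + c2 + c3)              # the referee doubles the collected amount
    share = total / 3                       # and gives each player one third of it
    return (share - c1, share - c2, share - c3)

# The example in the exercise: contributions (0, 0, 6) give a total of $12,
# so each player receives $4; net_gains(0, 0, 6) returns (4.0, 4.0, -2.0).
```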
Exercise 2.7 For the second-price auction partially illustrated in Figure 2.10 – repro-
duced below (recall that the numbers are the payoffs of Player 1 only) – complete the
representation by adding the payoffs of Player 2, assuming that Player 2 assigns a value
of $50M to the field and, like Player 1, ranks the outcomes in terms of the net gain
from the oil field (defined as profits minus the price paid, if Player 2 wins, and zero
otherwise).
Player 2
$10M $20M $30M $40M $50M
$10M 0 0 0 0 0
Player $20M 20 0 0 0 0
1 $30M 20 10 0 0 0
$40M 20 10 0 0 0
$50M 20 10 0 −10 0
■
Exercise 2.8 Consider the following “third-price” auction. There are n ≥ 3 bidders. A
single object is auctioned and Player i values the object $vi , with vi > 0. The bids are
simultaneous and secret.
The utility of Player i is: 0 if she does not win and (vi − p) if she wins and pays $p.
Every non-negative number is an admissible bid. Let bi denote the bid of Player i.
The winner is the highest bidder. In case of ties the bidder with the lowest index among
those who submitted the highest bid wins (e.g. if the highest bid is $120 and it is
submitted by players 6, 12 and 15, then the winner is Player 6). The losers don’t get
anything and don’t pay anything. The winner gets the object and pays the third highest
bid, which is defined as follows.
Let i be the winner and fix a Player j (with j ≠ i) whose bid is highest among the bids of
the players other than i, that is, such that b j ≥ bk for every k ≠ i [note: if the set of such
players contains more than one element, then we pick any one of them]. Then the third price is
defined as
max ({b1 , . . . , bn } \ {bi , b j }) ,
that is, the highest bid that remains after removing the bids of Players i and j.
For example, if n = 3 and the bids are b1 = 30, b2 = 40 and b3 = 40 then the
winner is Player 2 and she pays $30. If b1 = b2 = b3 = 50 then the winner is Player 1
and she pays $50. For simplicity, let us restrict attention to the case where n = 3 and
v1 > v2 > v3 > 0. Does Player 1 have a weakly dominant strategy in this auction? ■
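The winner and third-price rules can be sketched in code; the following illustration (with a function name of our choosing) reproduces the two examples given above and does not answer the dominance question.

```python
# Illustrative sketch of the winner and third-price rules of this auction.
def third_price_auction(bids):
    """bids[k] is the bid of Player k+1; returns (winner, price paid)."""
    i = bids.index(max(bids))             # lowest-index player among the highest bidders
    rest = bids[:i] + bids[i + 1:]        # bids of the players other than the winner
    j = rest.index(max(rest))             # one highest bidder among the remaining players
    remaining = rest[:j] + rest[j + 1:]   # remove that bid as well
    return i + 1, max(remaining)          # the price is the highest remaining bid

# Reproduces the two examples above:
# third_price_auction([30, 40, 40]) returns (2, 30)
# third_price_auction([50, 50, 50]) returns (1, 50)
```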
For every individual i = 1, . . . , 5, let vi be the perceived gross benefit (if positive; per-
ceived gross loss, if negative) from having the park built. The vi ’s are as follows:
Individual 1 2 3 4 5
Gross benefit v1 = $60 v2 = $15 v3 = $55 v4 = −$25 v5 = −$20
(Thus the net benefit (loss) to individual i is vi − ci ). Individual i has the following
utility of wealth function (where mi denotes the wealth of individual i):
Ui ($mi ) = mi           if the project is not carried out
Ui ($mi ) = mi + vi      if the project is carried out
Let mi be the initial endowment of money of individual i and assume that mi is large
enough that it exceeds ci plus any tax that the individual might have to pay.
(a) What is the Pareto-efficient decision: to build the park or not?
Assume that the pivotal mechanism is used, so that each individual i is asked to state
a number wi which is going to be interpreted as the gross benefit to individual i from
carrying out the project. There are no restrictions on the number wi : it can be positive,
negative or zero. Suppose that the individuals make the following announcements:
Individual 1 2 3 4 5
Stated benefit w1 = $70 w2 = $10 w3 = $65 w4 = −$30 w5 = −$5
(b) Based on the stated benefits, would the park be built?
(c) Using the pivotal mechanism, fill in the following table:
Individual    1    2    3    4    5
Pivotal?
Tax
(d) As you know, in the pivotal mechanism each individual has a dominant strategy.
If all the individuals played their dominant strategies, would the park be built?
(e) Assuming that all the individuals play their dominant strategies, determine who is
pivotal and what tax (if any) each individual has to pay.
(f) Show that if every other individual reports his/her true benefit, then it is best for
Individual 1 to also report his/her true benefit.
■
Exercise 2.10 Consider again the game of Part (b) of Exercise 2.1 (Figure 2.28).
(a) Apply the IDSDS procedure (Iterated Deletion of Strictly Dominated Strategies).
(b) Apply the IDWDS procedure (Iterated Deletion of Weakly Dominated Strategies).
Exercise 2.11 Apply the IDSDS procedure to the game shown in Figure 2.25. Is there
a strict iterated dominant-strategy profile? ■
                       Player 2
                   d         e         f
             a   8 , 6     0 , 9     3 , 8
Player 1     b   3 , 2     2 , 1     4 , 3
             c   2 , 8     1 , 5     3 , 1
Exercise 2.12 Consider the following game. There is a club with three members: Ann,
Bob and Carla. They have to choose which of the three is going to be president next
year. Currently, Ann is the president. Each member is both a candidate and a voter.
Voting is as follows: each member votes for one candidate (voting for oneself is
allowed); if two or more people vote for the same candidate then that person is chosen
as the next president; if there is complete disagreement, in the sense that there is exactly
one vote for each candidate, then the person for whom Ann voted is selected as the next
president.
(a) Represent this voting procedure as a game frame, indicating inside each cell of
each table which candidate is elected.
(b) Assume that the players’ preferences are as follows: Ann ≻Ann Carla ≻Ann Bob,
Carla ≻Bob Bob ≻Bob Ann, Bob ≻Carla Ann ≻Carla Carla. Using utility values
0, 1 and 2, convert the game frame into a game.
(c) Apply the IDWDS to the game of Part (b). Is there an iterated weak dominant-
strategy profile?
(d) Does the extra power given to Ann (in the form of tie-breaking in case of complete
disagreement) benefit Ann?
■
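The voting rule of Exercise 2.12 can be sketched as follows (illustrative only, with a function name of our choosing; it does not answer parts (b)-(d)).

```python
# Illustrative sketch of the voting rule of Exercise 2.12.
def president(ann_vote, bob_vote, carla_vote):
    """Each argument is the candidate ('Ann', 'Bob' or 'Carla') that player votes for."""
    votes = (ann_vote, bob_vote, carla_vote)
    for candidate in ('Ann', 'Bob', 'Carla'):
        if votes.count(candidate) >= 2:
            return candidate               # two or more votes elect that candidate
    return ann_vote                        # complete disagreement: Ann's vote decides

# For instance, president('Carla', 'Bob', 'Ann') returns 'Carla'.
```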
                       Player 2
                   D         E         F
             a   2 , 3     2 , 2     3 , 1
Player 1     b   2 , 0     3 , 1     1 , 0
             c   1 , 4     2 , 0     0 , 4
Exercise 2.15 Find the Nash equilibria of the games of Exercise 2.3 (b) (Figure 2.32)
and (c) (Figure 2.33). ■
Exercise 2.16 Find the Nash equilibria of the game of Exercise 2.4 (b) (Figure 2.35). ■
Exercise 2.17 Find the Nash equilibria of the game of Exercise 2.6 (Figure 2.37). ■
Exercise 2.18 Find the Nash equilibria of the game of Exercise 2.7 (Figure 2.38). ■
Exercise 2.19 Find a Nash equilibrium of the game of Exercise 2.8 for the case where
Exercise 2.20 Find the Nash equilibria of the game of Exercise 2.12 (b) (Figure 2.43).
■
Exercise 2.21 Find the Nash equilibria of the game of Exercise 2.13 (Figure 2.26). ■
2.9.7 Exercises for Section 2.7: Games with infinite strategy sets
The answers to the following exercises are in Section 2.10 at the end of this chapter.
Exercise 2.22 Consider a simultaneous n-player game where each Player i chooses an
effort level ai ∈ [0, 1]. The payoff to Player i is given by
(interpretation: efforts are complementary and each player’s cost per unit of effort is 2).
(a) Find all the Nash equilibria and prove that they are indeed Nash equilibria.
(b) Are any of the Nash equilibria Pareto efficient?
(c) Find a Nash equilibrium where each player gets a payoff of 1.
■
The harm inflicted on the fisheries due to water pollution is equal to $L > 0 of lost
profit [without pollution the fisheries’ profit is $A, while with pollution it is $(A − L)].
Suppose that the fisheries collectively sue the Mondevil Corporation. It is easily verified
in court that Mondevil’s plant pollutes the river. However, the values of Π and L cannot
be verified by the court, although they are commonly known to the litigants.
Suppose that the court requires the Mondevil attorney (Player 1) and the fisheries’
attorney (Player 2) to play the following litigation game. Player 1 is asked to announce
a number x ≥ 0, which the court interprets as a claim about the plant’s profits. Player 2
is asked to announce a number y ≥ 0, which the court interprets as the fisheries’ claim
about their profit loss. The announcements are made simultaneously and independently.
Then the court uses Posner’s nuisance rule to make its decision (R. Posner, Economic
Analysis of Law, 9th edition, 1997). According to the rule, if y > x, then Mondevil must
shut down its chemical plant. If x ≥ y , then the court allows Mondevil to operate the
plant, but the court also requires Mondevil to pay the fisheries the amount y. Note that
the court cannot force the attorneys to tell the truth: in fact, it would not be able to tell
whether or not the lawyers were reporting truthfully. Assume that the attorneys want to
maximize the payoff (profits) of their clients.
(a) Represent this situation as a strategic-form game by describing the strategy set of
each player and the payoff functions.
(b) Is it a dominant strategy for the Mondevil attorney to make a truthful announce-
ment (i.e. to choose x = Π)? [Prove your claim.]
(c) Is it a dominant strategy for the fisheries’ attorney to make a truthful announce-
ment (i.e. to choose y = L)? [Prove your claim.]
(d) For the case where Π > L (recall that Π and L denote the true amounts), find all
the Nash equilibria of the litigation game. [Prove that what you claim to be Nash
equilibria are indeed Nash equilibria and that there are no other Nash equilibria.]
(e) For the case where Π < L (recall that Π and L denote the true amounts), find all
the Nash equilibria of the litigation game. [Prove that what you claim to be Nash
equilibria are indeed Nash equilibria and that there are no other Nash equilibria.]
(f) Does the court rule give rise to a Pareto efficient outcome? [Assume that the
players end up playing a Nash equilibrium.]
■
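The payoff functions described in part (a) can be sketched as follows. This is an illustration only, under the payoff assumptions stated in the exercise (a shut-down plant earns nothing and the fisheries then earn $A); the function name and the example numbers are ours.

```python
# Illustrative sketch of the payoffs in the litigation game of part (a).
# PROFIT, A and L stand for the true values of the plant's profit, the fisheries'
# unpolluted profit and the loss from pollution (assumed known to the attorneys).
def payoffs(x, y, PROFIT, A, L):
    """x = Mondevil's announcement, y = the fisheries' announcement (both >= 0)."""
    if y > x:                              # the court orders the plant shut down
        return 0, A                        # Mondevil earns nothing; the fisheries are unharmed
    return PROFIT - y, A - L + y           # the plant operates and Mondevil pays y to the fisheries

# For instance, payoffs(x=100, y=40, PROFIT=100, A=500, L=60) returns (60, 480).
```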