Industrial Organization
Theory and Applications
Oz Shy
The MIT Press
Cambridge, Massachusetts
London, England
CONTENTS

List of Figures
Preface

1 Introduction
    1.4 References

I Theoretical Background
2 Basic Concepts in Noncooperative Game Theory
    2.6 Exercises
    2.7 References
3 Technology, Production Cost, and Demand
    3.4 Exercises

II Market Structures
4 Perfect Competition
    4.4 Exercises
    4.5 References
5 The Monopoly
    5.7 Exercises
    5.8 References
6 Markets for Homogeneous Products
    6.8 Exercises
    6.9 References
7 Markets for Differentiated Products
    7.6 Exercises
    7.7 References
8 Concentration, Mergers, and Entry Barriers
    8.2 Mergers
    8.8 Exercises
    8.9 References

III Technology and Market Structure
9 Research and Development
    9.4 Patents
    9.10 Exercises
    9.11 References
10 The Economics of Compatibility and Standards
    10.4 Exercises
    10.5 References

IV Marketing
11 Advertising
    11.7 Exercises
    11.8 References
12 Quality, Durability, and Warranties
    12.7 Warranties
    12.9 Exercises
    12.10 References
13 Pricing Tactics: Two-Part Tariff and Peak-Load Pricing
    13.3 Peak-Load Pricing
    13.5 Exercises
    13.6 References
14 Marketing Tactics: Bundling, Upgrading, and Dealerships
    14.3 Dealerships
    14.6 Exercises
    14.7 References

V The Role of Information
15 Management, Compensation, and Regulation
    15.6 Exercises
    15.7 References
16 Price Dispersion and Search Theory
    16.4 Exercises
    16.5 References

VI Selected Industries
17 Miscellaneous Industries
    17.5 Exercises
    17.6 References

Index
FIGURES

2.1 The pilot and the terrorist
8.4 Incumbent's profit levels and capacity choices for different levels of entry cost
12.5 The market for lemons: Bad cars drive out the good cars
13.7 Revenue functions for the vertical and horizontal differentiation cases
16.1 Consumers with variable search cost searching for the lowest price
PREFACE
If we knew what it was we were doing, it would not be called research, would it?
A. Einstein
stances, this course can be taught without using calculus (see the list of topics in the next section).
Before reading this book, the student should have some experience with maximization techniques for one- and two-variable optimization problems. Occasionally, the student will need a very basic
knowledge of what probability is and how to calculate the joint probability of two events. Nothing in
this book requires methods more advanced than the ones I have described. Students who did not have
any training in microeconomics using calculus may not be able to handle several of the market
structures. The reader questioning whether this book fits his or her level is advised to look at chapter 3,
which reviews the basic microeconomics needed for a comprehensive study of industrial organization.
To the Instructor
Since this book grew out of lecture notes written for upper-division undergraduate and graduate
courses, the instructor will (I hope) find this book convenient to use, since almost all derivations are
done in the book itself.
If you are constrained to instruct a course without using calculus, then you can teach the list of topics
given earlier. If you can use some calculus, then the amount of material that you can cover depends on
your preferences and the length of the course.
All the theoretical background the student needs for a comprehensive study of this book is provided in
the first part. In fact, not all the material covered in this part is needed to study this book, but it is
brought up here for the sake of completeness, or for those readers who have either an insufficient
background in economics or none at all. Therefore, the instructor is urged to decide on how much time
to devote to this preparation part only after having completed the entire plan for this course. This
theoretical preparation is composed of two chapters. Chapter 2 provides all the necessary game
theoretic tools needed for the study of this book and for understanding the literature on industrial
organization. Background in game theory is not needed for reading this chapter, and no previous
knowledge is assumed. The main sections of chapter 2 must be taught before the instructor proceeds
with the study of industrial organization. Chapter 3 provides most of the basic microeconomics
background needed for the study of industrial organization. The material covered in this chapter is
studied in most intermediate microeconomics and in some managerial economics courses and can
therefore be skipped.
Two-semester course
A two-semester course can be logically divided into a more technically market-structure-oriented
semester and an application-oriented semester. Thus, the first semester should start with game theory
(chapter 2), followed by the sequence of four chapters dealing with market structure: perfect
competition (chapter 4), monopoly (chapter 5), homogeneous products (chapter 6), and differentiated
products (chapter 7). If time is left, the first semester may include mergers and entry (chapter 8) and
research and development (chapter 9).
For the second semester, the instructor is free to select from a wide variety of mostly logically
independent topics. A possible starting point could be the theory of network economics and
standardization (chapter 10), continuing with selected topics from the remaining chapters:
advertising (chapter 11), durability and quality (chapter 12), pricing tactics (chapter 13), marketing
tactics (chapter 14), management and information (chapter 15), price dispersion and search theory
(chapter 16), and the special industries (chapter 17).
One-semester course
A common mistake (at least, my mistake) in planning a one-semester course would be to treat it as the
first semester of a two-semester course. When this happens, the student is left with the wrong
impression that industrial organization deals only with the technical formulation of market structures,
and never learns that industrial organization has a lot to say about product design, marketing
techniques, and channels (chapters 11, 12, 13, 14, 15, and 17). These chapters contain many less
technically oriented sections with direct applications. Some sections rely on knowledge of the Cournot,
Bertrand, and sometimes Hotelling market structures, and for this reason, in a one-semester course, I
advise the instructor to plan the logical path of the course carefully. Finally, the material on search
theory (chapter 16) can be covered with no difficulty.
Let me summarize: the two-semester course fits the structure and the depth of coverage of this
book. The instructor of a one-semester course using this book should study the list of topics covered in
the later chapters, and then, working backwards, should determine the minimal knowledge of
market structures that students need to acquire in order to understand the later chapters.
New Material
Almost by definition, a textbook is not intended for presenting newly developed material and ongoing
research. However, in the course of simplifying the material, I was forced to modify or to develop some new
concepts. For example, I felt that it was important to include a location model that does not use calculus, for
those courses that do not require it. However, as the reader will find, a Nash-Bertrand
equilibrium for the discrete location model simply does not exist. For this reason, I was forced to
develop the undercutproof equilibrium concept described in subsection 7.3.4 on page 158. Three other
topics are also new: (a) the concept of -foreclosure developed in subsection 14.1.4 on page 366, (b)
endogenous peak-load pricing theory (section 13.4 on page 352) that emphasizes the role of the firm in
determining which period would be the peak and which would be the off-peak, and (c) targeted and
comparison advertising theory (sections 11.3 on page 290 and 11.4 on page 294).
Chapter 1
Introduction
The purpose of an economic theory is to analyze, explain, predict, and evaluate.
Gathered from Joe Bain, Industrial Organization
It is often thought that these four observations are interrelated. Most of the earlier empirical studies in
industrial organization focused on running regressions of variables such as profit margins, firms' size,
advertising expenditure, and research and development (R&D) expenditure on concentration (see
Goldschmid, Mann, and Weston 1974 for a summary of these works). The purpose of this book is to
provide a theoretical linkage of the factors that affect concentration, and how concentration affects the
strategic behavior of firms. The reason why we think of concentration as a major issue of industrial
organization theory follows from the failure of the competitive market structure to explain why
industries are composed of a few large firms instead of many small firms. Thus, the theory of
competitive market structure, although easy to solve for if an equilibrium exists, in most cases cannot
explain the composition and behavior of firms in the industry.
Given the noncompetitive behavior of firms, markets are also influenced by buyers' reactions to firms'
attempts to maximize profits. In this respect, our analysis here will have to fully characterize how
consumers determine which brands to buy, how much to buy, and how to search and select the lowest
priced brand that fits their specific preferences. For this reason, the approach we take is mostly a
strategic one, meaning that both firms and consumers learn the market structure and choose an action
that maximizes profit (for the firms) and utility (for the consumers). In addition, given the complexity
of decisions made by strategic (noncompetitive) firms, the issue of the internal organization of firms
becomes an important factor affecting their behavior. Thus, we briefly address the issue of how
management structure under conditions of imperfect information affects the performance of the firm in
the market.
Finally, we extensively analyze the role of the regulator. First, from a theoretical point of view we ask
whether intervention can increase social welfare under various market structures and firms' activities.
Second, we describe and analyze the legal system affecting our industries.
1.1.2 Schools of thought and methodology
The standard approach to the study of industrial organization, as laid out by Joe Bain, decomposes a
market into structure, conduct, and performance of the market. Structure means how sellers interact
with other sellers, with buyers, and with potential entrants. Market structure also defines the product in
terms of the potential number of variants in which the product can be produced. Market conduct refers
to the behavior of the firms in a given market structure, that is, how firms determine their price policy,
sales, and promotion. Finally, performance refers to the
welfare aspect of the market interaction. That is, to determine performance we measure whether the
interaction in the market leads to a desired outcome, or whether a failure occurs that requires the
intervention of the regulator.
Many aspects of performance are discussed in this book. First, is the technology efficient in the sense of
whether it is operated on an optimal (cost-minimizing) scale? Second, does the industry produce a
socially optimal number of brands corresponding to consumers' preferences and the heterogeneity of the
consumers? Third, are the firms dynamically efficient: do they invest a proper amount of resources in
developing new technologies for current and future generations? All these efficiency requirements are
generally summarized by a particular social welfare function that can combine the trade-off among the
different efficiency criteria. For example, the welfare of consumers who have preferences for variety
increases with the number of brands produced in an industry. However, if each brand is produced by a
different factory where each factory is constructed with a high fixed-cost investment, then it is clear that
from a technical point of view, the number of brands produced in an industry should be restricted.
Hence, there will always be a tradeoff between technical efficiency and consumer welfare that will
require defining a welfare function to determine the optimal balance between consumer welfare and
efficient production patterns.
In 1939, Edward Mason published a very influential article emphasizing the importance of
understanding the market-specific causes of non-competitive behavior. In that article, Mason discussed
the methodology for studying the various markets:
It goes without saying that a realistic treatment of these questions necessitates the use of analytical tools which are
amenable to empirical application. The problem, as I see it, is to reduce the voluminous data concerning industrial
organization to some sort of order through a classification of market structures. Differences in market structure are
ultimately explicable in terms of technological factors. The economic problem, however, is to explain, through an
examination of the structure of markets and the organization of firms, differences in competitive practices including
price, production, and investment policies.
Thus, Mason argued that to be able to understand different degrees of competition in different markets,
the researcher would have to analyze the different markets using different assumed market structures.
The reader will appreciate this methodology after reading this book, where we try to fit an appropriate
market structure to the studied specific
market, where the variety of market structures are defined and developed in part II.
In his article, Mason emphasized the importance of understanding sources of market power (''market
control'' in his language) in order to understand how prices are determined in these markets ("price
policy" in his language):
A firm may have a price policy by reason of the existence of rivals of whose action it must take account, of the
desirability of considering the effect of present upon future price, of the possibility of competing in other ways than
by price, and for many other reasons.
Mason continues and hints at how the degree of industry concentration is correlated with
noncompetitive behavior:
The size of a firm influences its competitive policies in a number of ways....The scale of its purchases and sales
relative to the total volume of transactions...the absolute size of a firm, as measured by assets, employees, or
volume of sales,...[are] also relevant to price and production policies....Selling practices at the disposal of the large
firm may be beyond the reach of its smaller competitors....The size of a firm likewise influences its reaction to
given market situations.
Analysts of industrial organization after Mason continued mostly to use a descriptive language, but later
ones used price theory (sometimes referred to as the Chicago School). The Chicago price-theory
approach conceded that monopoly is possible but contended that its presence is much more often
alleged than confirmed. When alleged monopolies are genuine, they are usually transitory, with
freedom of entry working to eliminate their influence on price and quantities within a fairly short time
period (see Reder 1982). Thus, the so-called Chicago School was not very supportive of the persistent-market-power approach that constituted Bain's major theory of entry barriers.
The fast development of game theory in the 1970s gave a push to the strategic approach to industrial
organization and later to strategic international trade analysis. Unlike the competitive-markets approach,
the strategic approach models the firms on the assumption that they and other firms can affect the
market outcome consisting of prices, quantities, and the number of brands. In addition, game theory
provided the tools for analyzing dynamic scenarios such as how established firms react to a threat of
entry by potential competitors.
Our approach does not attempt to represent any particular school of thought. In fact, the main purpose
of this book is to demonstrate
that there is no general methodology for solving problems, hence each observation may have to be
worked out in a different model. Thus, each time we address a new observation, we generally construct
a special ad hoc model, where the term "ad hoc" should not be given a negative connotation. To the
contrary, the ad hoc modeling methodology frees the researcher from constraining the theory to
temporary "fashions" which are given a priority in the scientific literature and allows the scientist to
concentrate on the merit of the model itself, where merit means how well the theory or the model
explains the specific observation that the scientist seeks to explain. Nevertheless, the reader will
discover that the strategic game-theoretic approach is the dominant one in this book.
1.2 Law and Economics
The legal structure governing the monitoring of the industry is called antitrust law. The word "trust"
reflects the spirit of the laws aiming at any form of organization, trust, communication, and contract
among firms that would impede competition.
In this book we confine the discussion of the legal aspects of the industry mainly to U.S. law. I chose to
deal with U.S. law since it is perhaps the most advanced in terms of achieving competition and the
restraints of monopoly power. Although not the oldest, the U.S. antitrust system seems to be the most
experienced one in terms of famous court cases that put the legal system into effect. For example, the
Restrictive Trade Practices Act, which is the British equivalent of the 1890 Sherman Act regarding
cartel prohibition, was enacted a very long time after the Sherman Act, in 1956 to be precise. In other
words, the U.S. was and remains a leader in antitrust legislation.
It is interesting to note that in the United States real prices of products tend to be the lowest in the
world. However, the United States also has the most restrictive antitrust regulation structure in the
world. Hence, although it is commonly argued that market intervention in the form of regulation results
in higher consumer prices, here we observe that antitrust regulation is probably the cause for low
consumer prices in the United States. For this reason, the study of the U.S. antitrust systems is an
integral part of the study of industrial organization, especially for those students from countries with
less competitive markets.
Several chapters in this book conclude with appendixes discussing the legal matters related to the topics
analyzed in the theoretical part of the chapter. In these appendixes, references are always made to the
law itself and to its historical origin. Court cases are not discussed in this book, since they are analyzed
in a large number of law-and-economics textbooks, for example Asch 1983, Gellhorn 1986, and Posner
1977.
is illegal per se, and (b) business behavior that is judged by standards of the party's intent or the effect
the behavior is likely to have. For our purposes, we will refer to the rule of reason as category (b).
Bork (1978) regards the per se rule as containing a degree of arbitrariness. The per se rule implies that
the judgment is handed down on the basis of the inherent effect of the act committed by the accused
party. That is, to have a particular behavior declared illegal per se, the plaintiff needs only to prove that
it occurred. The per se rule is justified in cases where the gains associated with the imposition of the
rule far outweigh the losses, since significant administrative costs can be saved. That is, the
advantage of the per se rule is that the particular case need not be identified, since the act itself is
assumed to be illegal.
1.3 Industrial Organization and International Trade
In this book the reader will find a wide variety of international issues, for the simple reason that
international markets should not be very different from national markets. Thus, one might expect that
concentration would characterize international markets as well as national markets. As a result of this
(rather late) recognition that international trade can be characterized by oligopolistic market structures,
a tremendous amount of literature emerged during the 1980s (see Krugman 1989).
Once this newer trade theory picked up, a broad new array of issues had to be analyzed. The first was,
how can international trade in differentiated products be explained by a monopolistic competition
market structure? Then, what are the implications of oligopolistic international market structures for the
gains from the imposition of trade barriers? Whereas earlier writers got excited by learning that
countries have a lot to gain when imposing trade restrictions or allowing subsidization of industries
competing in internationally oligopolistic markets, later writers have managed to calm down this new
wave of protectionism by demonstrating that any trade policy recommended under a particular market
structure may not be recommended under a different market structure. Thus, since it is hard to estimate
what the ongoing market structure is and the form of competition of a particular market, it may be better
that governments refrain from intervention at all. These later papers have somewhat mitigated the
strong policy actions recommended by the early strategic trade literature.
1.4 References
Asch, P. 1983. Industrial Organization and Antitrust Policy. New York: John Wiley & Sons.
Bain, J. 1968. Industrial Organization. 2nd ed. New York: John Wiley & Sons.
Bork, R. 1978. The Antitrust Paradox. New York: Basic Books.
Gellhorn, E. 1986. Antitrust Law and Economics in a Nutshell. St. Paul, Minn.: West Publishing.
Goldschmid, H., H. Mann, and J. Weston. 1974. Industrial Concentration: The New Learning. Boston:
Little, Brown.
Krugman, P. 1989. "Industrial Organization and International Trade." In Handbook of Industrial
Organization, edited by R. Schmalensee and R. Willig. Amsterdam: North-Holland.
Mason, E. 1939. "Price and Production Policies of Large-Scale Enterprise." American Economic
Review 29, pt. 2: 61-74.
Posner, R. 1977. Economic Analysis of Law. Boston: Little, Brown.
Reder, M. 1982. "Chicago Economics: Performance and Change." Journal of Economic Literature 20:
1-38.
West, E. 1987. "Monopoly." In The New Palgrave Dictionary of Economics, edited by J. Eatwell, M.
Milgate, and P. Newman. New York: The Stockton Press.
PART I
THEORETICAL BACKGROUND: GAME THEORY AND MICROECONOMICS
Chapter 2
Basic Concepts in Noncooperative Game Theory
If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not
the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will
succumb in every battle.
All men can see these tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved.
Sun Tzu, The Art of War (490 B.C.)
Game theory (sometimes referred to as "Interactive Decision Theory") is a collection of tools for
predicting outcomes for a group of interacting agents, where an action of a single agent directly affects
the payoffs (welfare or profits) of other participating agents. The term game theory stems from the
resemblance of these tools to sports games (e.g., football, soccer, ping-pong, and tennis), as well as to
"social" games (e.g., chess, cards, checkers, and Diplomacy).
Game theory is especially useful when the number of interactive agents is small, in which case the
action of each agent may have a significant effect on the payoff of other players. For this reason, the
bag of tools and the reasoning supplied by game theory have been applied to a wide variety of fields,
including economics, political science, animal behavior, military studies, psychology, and many more.
The goal of a game-theoretic model is to predict the outcomes (a list of actions
adopted by each participant), given the assumed incentives of the participating agents. Thus, game
theory is extremely helpful in analyzing industries consisting of a small number of competing firms,
since any action of each firm, whether price choice, quantity produced, research and development, or
marketing techniques, has strong effects on the profit levels of the competing firms.
As the title of this chapter suggests, our analyses focus only on non-cooperative games. We generally
distinguish between two types of game representations: normal form games (analyzed in section 2.1),
and extensive form games (analyzed in section 2.2). Roughly speaking, we can say that in normal form
games all players choose all their actions simultaneously, whereas in extensive form games agents may
choose their actions in different time periods. In addition, we distinguish between two types of actions
that players can take: a pure action, where a player plays a single action from the player's set of
available actions, and a mixed action, where a player assigns a probability for playing each action (say
by flipping a coin). Our entire analysis in this book is confined to pure actions. However, for the sake of
completeness, mixed actions are analyzed in an appendix (section 2.4).
Finally, information plays a key role in game theory (as well as in real life). The most important thing
that we assume is that the players that we model are at least as intelligent as economists are. That is, the
players that we model have the same knowledge about the structure, the rules, and the payoffs of the
game as the economist that models the game does. Also important, our analysis in this chapter is
confined to games with perfect information. Roughly, this means that in perfect information games,
each player has all the information concerning the actions taken by other players earlier in the game that
affect the player's decision about which action to choose at a particular time. Games under imperfect
information are not used in this book; however, we introduce them in an appendix (section 2.5) for the
sake of completeness.
2.1 Normal Form Games
Our first encounter with games will be with normal form games. In normal form games all the players
are assumed to make their moves at the same time.
2.1.1 What is a game?
The following definition provides the three elements that constitute what we call a game. Each time we
model an economic environment in a game-theoretic framework, we should make sure that the
following three elements are specified.

Definition 2.1
A game is described by:
1. A set of N players, indexed by i = 1, 2,..., N.
2. Each player i, i = 1, 2,..., N, has an action set Ai, which is the set of all actions available to player i. Let ai ∈ Ai denote a particular action taken by player i. Thus, player i's action set is a list of all the actions available to player i; hence, Ai = {ai^1, ai^2,..., ai^ki}, where ki is the number of actions available to player i.
Let a = (a1, a2,..., aN), ai ∈ Ai for every i, be a list of the actions chosen by each player. We call this list of actions chosen by each player an outcome of the game.
3. Each player i has a payoff function, πi, which assigns a real number, πi(a), to every outcome of the game. Formally, each payoff function πi maps an N-dimensional vector, a = (a1,..., aN) (the action chosen by each player), into a real number, πi(a).
A few important remarks on the definition of a game follow:
1. It is very important to distinguish between an action set Ai, which is the set of all actions available to
a particular player i, and an outcome a, which is a list of the particular actions chosen by all the
players.
2. Part 2 of Definition 2.1 assumes that each player has a finite number of actions, that is, that player
i has ki actions in the action set Ai. However, infinite action sets are commonly used in industrial
organization. For example, often, we will assume that firms choose prices from the set of nonnegative
real numbers.
3. We use the notation {list of elements} to denote a set where a set (e.g., an action set) contains
elements in which the order of listing is of no consequence. In contrast, we use the notation (list) to
denote a vector where the order does matter. For example, an outcome is a list of actions where the first
action on the list is the action chosen by player 1, the second by player 2, and so on.
4. The literature uses the term action profile to describe the list of actions chosen by all players, which
is what we call an outcome. For our purposes there is no harm in using the term outcome (instead of the
term action profile) for describing this list of actions. However,
if games involve some uncertainty to some players, these two terms should be distinguished since under
uncertainty an action profile may lead to several outcomes (see for example mixed actions games
described in the appendix [Section 2.4]).
5. In the literature one often uses the term strategy instead of the term action (and therefore strategy set
instead of action set), since in a normal form game, there is no distinction between the two terms.
However, when we proceed to analyze extensive form games (section 2.2), the term strategy is given a
different meaning than the term action.
The best way to test whether Definition 2.1 is clear to the reader is to apply it to a simple example. A
simple way to describe the data that define a particular game is to display them in a matrix form.
Consider the following game described in Table 2.1. We now argue that Table 2.1
                           Country 2
                       WAR        PEACE
Country 1   WAR        1, 1        3, 0
            PEACE      0, 3        2, 2

Table 2.1: The Peace-War game
contains all the data needed for properly defining a game according to Definition 2.1. First, we have
two players, N = 2, called country 1 and 2. Second, the two players happen to have the same action sets:
A1 = A2 ={WAR, PEACE}. There are exactly four outcomes for this game: (WAR, WAR), (WAR,
PEACE), (PEACE, WAR), (PEACE, PEACE). Third, the entries of the matrix (i.e., the four squares)
contain the payoffs to player 1 (on the left-hand side) and to player 2 (on the right-hand side),
corresponding to the relevant outcome of the game. For example, the outcome a = (WAR, PEACE)
specifies that player 1 opens a war while player 2 plays peace. The payoff to player 1 from this outcome
is π1(a) = π1(WAR, PEACE) = 3. Similarly, the payoff to player 2 is π2(a) = π2(WAR, PEACE) = 0,
since country 2 does not defend itself.
The story behind this game is as follows. If both countries engage in a war, then each country gains a
utility of 1. If both countries play PEACE, then each country gains a utility of 2. If one country plays
WAR while the other plays PEACE, then the aggressive country reaches the highest possible utility,
since it "wins" a war against the nonviolent country with no effort. Under this outcome the utility of the
"pacifist country" should be the lowest (equal to zero in our example).
In the literature, the game described in Table 2.1 is commonly referred to as the Prisoners' Dilemma
game. Instead of having two countries fighting a war, consider two prisoners suspected of having
committed a crime, for which the police lack sufficient evidence to convict either suspect. The two
prisoners are put in two different isolated cells and are offered a lower punishment (or a higher payoff)
if they confess to having jointly committed this crime. If we replace WAR with CONFESS, and
PEACE with NOT CONFESS, we obtain the so-called Prisoners' Dilemma game.
In the present analysis we refrain from raising the question whether the game described in Table 2.1 is
observed in reality or not, or whether the game is a good description of the world. Instead, we ask a
different set of questions, namely: given that countries in the world behave like those described in Table
2.1, can we (the economists or political scientists) predict whether the world will end up with countries
declaring war or declaring peace? In order to perform this task, we need to define equilibrium concepts.
2.1.2 Equilibrium concepts
Once the game is properly defined, we can realize that games may have many outcomes. Therefore, by
simply postulating all the possible outcomes (four outcomes in the game described in Table 2.1), we
cannot make any prediction of how the game is going to end. For example, can you predict how a game
like the one described in Table 2.1 would end up? Will there be a war, or will peace prevail? Note that
formulating a game without having the ability to predict implies that the game is of little value to the
researcher. In order to make predictions, we need to develop methods and define algorithms for
narrowing down the set of all outcomes to a smaller set that we call equilibrium outcomes. We also
must specify properties that we find desirable for an equilibrium to fulfill. Ideally, we would like to find
a method that would select only one outcome. If this happens, we say that the equilibrium is unique.
However, as we show below, the equilibrium concepts developed here often fail to be unique.
Moreover, the opposite extreme may occur where a particular equilibrium may not exist at all. A game
that cannot be solved for equilibria is of less interest to us since no real-life prediction can be made.
Before we proceed to defining our first equilibrium concept, we need to define one additional piece of
notation. Recall that an outcome of the game a = (a1,... ,ai,..., aN) is a list of what the N players are doing
(playing). Now, pick a certain player, whom we will call player i, (e.g., i can be player 1 or 89 or N, or
any player). Remove from the outcome
a the action played by player i himself. Then, we are left with the list of what all players are playing
except player i, which we denote by a−i. Formally,

a−i ≡ (a1,..., ai−1, ai+1,..., aN).

Note that after this minor surgical operation is performed, we can still express an outcome as a union of
what action player i plays and all the other players' actions. That is, an outcome a can be expressed as
a = (ai, a−i).
Equilibrium in dominant actions
Our first equilibrium concept, called equilibrium in dominant actions, is a highly desirable
equilibrium, in the sense that if it exists, it describes the most intuitively plausible prediction of what
players would actually do.
The following definition applies for a single player in the sense that it classifies actions in a player's
action set according to a certain criterion.
Definition 2.2
A particular action ãi ∈ Ai is said to be a dominant action for player i if, no matter what all other
players are playing, playing ãi always maximizes player i's payoff. Formally, for every choice of actions
by all players except i, a−i,

πi(ãi, a−i) ≥ πi(a'i, a−i) for every a'i ∈ Ai.
For example,
Claim 2.1
In the game described in Table 2.1, the action a1 = WAR is a dominant action for player 1.
Proof. It has to be shown that no matter what player 2 does, player 1 is always better off by starting a
war. Thus, we have to scan over all the possible actions that can be played by player 2. If player 2 plays
a2 = WAR, then π1(WAR, WAR) = 1 > 0 = π1(PEACE, WAR). If player 2 plays a2 = PEACE, then
π1(WAR, PEACE) = 3 > 2 = π1(PEACE, PEACE).
Similarly, since the game is symmetric (meaning that renaming player 1 as player 2 and vice versa, does
not change players' payoffs), the reader can establish that a2 = WAR is a dominant action for player 2.
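The scan over the opponent's actions performed in this proof is mechanical enough to automate. The sketch below (again with illustrative names of my own) checks Definition 2.2 for a two-player game such as Table 2.1:

# Check whether `candidate` is a dominant action for `player` (1 or 2).
actions = {1: ["WAR", "PEACE"], 2: ["WAR", "PEACE"]}
payoff = {("WAR", "WAR"): (1, 1), ("WAR", "PEACE"): (3, 0),
          ("PEACE", "WAR"): (0, 3), ("PEACE", "PEACE"): (2, 2)}

def is_dominant(player, candidate):
    other = 2 if player == 1 else 1
    for a_other in actions[other]:       # scan every opponent action
        for a_own in actions[player]:    # compare against every own action
            cand = (candidate, a_other) if player == 1 else (a_other, candidate)
            alt = (a_own, a_other) if player == 1 else (a_other, a_own)
            if payoff[cand][player - 1] < payoff[alt][player - 1]:
                return False
    return True

print(is_dominant(1, "WAR"), is_dominant(1, "PEACE"))  # True False (Claim 2.1)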
We now turn to defining our first equilibrium concept. An equilibrium in dominant actions is simply an
outcome where each player plays a dominant action. Formally,
Definition 2.3
An outcome (ã1, ã2,..., ãN) (where ãi ∈ Ai for every i = 1, 2,..., N) is said to be an equilibrium in
dominant actions if ãi is a dominant action for each player i.
Clearly, since WAR is a dominant action for each player in the game described in Table 2.1, the
outcome (a1, a2 ) = (WAR, WAR) is an equilibrium in dominant actions.
Although an equilibrium in dominant actions constitutes a very reasonable prediction of how players
may interact in the real world, unfortunately, this equilibrium does not exist for most games of interest
to us. To demonstrate this point, let us analyze the following Battle of the Sexes game described in
Table 2.2. The intuition behind this (rather
                           Rachel
                      OPERA       FOOTBALL
Jacob   OPERA         2, 1         0, 0
        FOOTBALL      0, 0         1, 2

Table 2.2:
Battle of the Sexes
romantic) Battle of the Sexes game is that it is relatively important for Jacob and Rachel to be together.
That is, assuming that the payoffs to the players in Table 2.2 represent utilities to each player under
each outcome, each player gains the lowest possible utility when the player goes alone to one of these
entertainment events. Both of them gain a higher utility if they go together to one of these events.
However, comparing the two outcomes where the players are ''together,'' we can observe that Jacob
prefers the OPERA, whereas Rachel prefers FOOTBALL. Thus, the Battle of the Sexes is sometimes
referred to as a coordination game. The Battle of the Sexes game exhibited in Table 2.2 describes some
real-life situations. For example, in chapter 10 we analyze economies in which products operate on
different standards (such as different TV systems). The Battle of the Sexes game happens to be an ideal
theoretical framework to model two firms with two available actions: choose standard 1, or standard 2.
Failure to have both firms choosing the same standard may result in having consumers reject the
product, thereby leaving the two firms with zero profits.
After formulating the Battle of the Sexes game, we now seek to find some predictions for this game.
However, the reader will probably be disappointed to find out that:
Claim 2.2
There does not exist an equilibrium in dominant actions for the Battle of the Sexes game.
Proof. It is sufficient to show that one of the players does not have a dominant action. In this case, there
cannot be an equilibrium in dominant actions since one player will not have a dominant action to play.
Therefore, it is sufficient to look at Jacob: If Rachel chooses aR = OPERA, then Jacob would choose
aJ = OPERA because πJ(OPERA, OPERA) = 2 > 0 = πJ(FOOTBALL, OPERA).
However, when Rachel goes to a football game, aR = FOOTBALL, then Jacob would choose aJ = FOOTBALL because
πJ(FOOTBALL, FOOTBALL) = 1 > 0 = πJ(OPERA, FOOTBALL).
So, we have shown that one player does not have a dominant action, and this suffices to conclude that
Definition 2.3 cannot be applied; hence, there does not exist an equilibrium in dominant actions for the
Battle of the Sexes game.
Nash equilibrium (NE)
So far we have failed to develop an equilibrium concept that would select an outcome that would be a
"reasonable" prediction for this model. In 1951, John Nash provided an existence proof for an
equilibrium concept (earlier used by Cournot when studying duopoly) that has become the most
commonly used equilibrium concept in analyzing games.
Definition 2.4
An outcome (a*1, a*2,..., a*N) (where a*i ∈ Ai for every i = 1, 2,..., N) is said to be a Nash equilibrium
(NE) if no player would find it beneficial to deviate, provided that all other players do not deviate from
the actions played at the Nash outcome. Formally, for every player i, i = 1, 2,..., N,

πi(a*i, a*−i) ≥ πi(a'i, a*−i) for every a'i ∈ Ai.
The general methodology for searching which outcomes constitute a NE is to check whether players
benefit from a unilateral deviation from a certain outcome. That is, to rule out an outcome as a NE we
need only
demonstrate that one of the players can increase the payoff by deviating to a different action than the
one played in this specific outcome, assuming that all other players do not deviate. Once we find an
outcome in which no player can benefit from any deviation from the action played in that outcome, we
can assert that we found a NE outcome.
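For finite games this deviation test can be run exhaustively. The following Python sketch (names are illustrative; the payoff table is the Peace-War game encoded earlier) enumerates all outcomes and keeps those from which no player can profitably deviate:

# Nash equilibria of a two-player matrix game by the deviation test.
from itertools import product

actions = {1: ["WAR", "PEACE"], 2: ["WAR", "PEACE"]}
payoff = {("WAR", "WAR"): (1, 1), ("WAR", "PEACE"): (3, 0),
          ("PEACE", "WAR"): (0, 3), ("PEACE", "PEACE"): (2, 2)}

def nash_equilibria():
    ne = []
    for a1, a2 in product(actions[1], actions[2]):
        dev1 = any(payoff[(d, a2)][0] > payoff[(a1, a2)][0] for d in actions[1])
        dev2 = any(payoff[(a1, d)][1] > payoff[(a1, a2)][1] for d in actions[2])
        if not (dev1 or dev2):
            ne.append((a1, a2))
    return ne

print(nash_equilibria())  # [('WAR', 'WAR')]

Running the same test on the Battle of the Sexes payoffs returns both outcomes in which the players are together, illustrating Claim 2.3 below.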
We continue our discussion of the NE with the investigation of the relationship between Nash
equilibrium and equilibrium in dominant actions. To demonstrate the relationship between the two
equilibrium concepts, we first search for the NE outcomes for the game described in Table 2.1. Recall
that we have already found that (WAR, WAR) is an equilibrium in dominant actions, but can this fact
help us in searching for a NE for this game? Not surprisingly, yes, it can! Since an equilibrium in
dominant actions means that each player plays a dominant action, no player would find it beneficial to
deviate no matter how the others play. In particular, no player would deviate if the other players stick to
their dominant actions. Hence,
Proposition 2.1
An equilibrium in dominant actions outcome is also a NE. However, a NE outcome need not be an
equilibrium in dominant actions.
Altogether, we have it that (WAR, WAR) is a NE for the game described in Table 2.1. We leave it to
the reader to verify that no other outcome in this game is a NE; therefore, this equilibrium is
unique. The second part of Proposition 2.1 follows from the Battle of the Sexes game, where there exist
two NE, but there does not exist an equilibrium in dominant actions.
Multiple Nash equilibria
We now demonstrate that a Nash equilibrium need not be unique. For example, applying Definition 2.4
to the Battle of the Sexes game yields:
Claim 2.3
The Battle of the Sexes game described in Table 2.2 has two Nash equilibrium outcomes:
(OPERA, OPERA) and (FOOTBALL, FOOTBALL).
Proof. To prove that (OPERA, OPERA) is a NE, we have to show that no player would benefit from deviation,
given that the other does not deviate. In this game with two players, we have to show that, given that aR
= OPERA, player J would play aJ = OPERA; and that given that aJ = OPERA, player R would play aR = OPERA. These two
conditions follow from

πJ(OPERA, OPERA) = 2 > 0 = πJ(FOOTBALL, OPERA) and
πR(OPERA, OPERA) = 1 > 0 = πR(OPERA, FOOTBALL).     (2.1)

Using the same procedure, it can easily be shown that the outcome (FOOTBALL, FOOTBALL) is also a NE. Finally, we need
to show that the other two outcomes, (OPERA, FOOTBALL) and (FOOTBALL, OPERA), are not NE. However, this follows immediately
from (2.1).
Nonexistence of a Nash equilibrium
So far we have seen examples where there is one NE or more. That is, as in the Battle of the Sexes
game displayed in Table 2.2, it is possible to find games with multiple NE. If the equilibrium is
not unique, the model has low predictive power. In contrast, Table 2.3 demonstrates a game where a
Nash equilibrium does not exist. Therefore, consider the variant of the Battle of the Sexes game after
thirty years of marriage. The intuition behind the game described in Table 2.3 is that after
                           Rachel
                      OPERA       FOOTBALL
Jacob   OPERA         2, 0         0, 2
        FOOTBALL      0, 1         1, 0

Table 2.3:
Nonexistence of a NE (in pure actions)
thirty years of marriage, Rachel's desire for being entertained together with Jacob has faded; however,
Jacob's romantic attitude remained as before, and he would always gain a higher utility from being
together with Rachel rather than alone.
Proposition 2.2
The game described in Table 2.3 does not have a NE.
Proof. We must prove that each outcome is not a NE. That is, in each of the four outcomes, at least one
of the players would find it beneficial to deviate.
(1) For the (OPERA, OPERA) outcome, Rachel would deviate to FOOTBALL, since πR(OPERA, FOOTBALL) = 2 > 0 = πR(OPERA, OPERA).
(2) For the (OPERA, FOOTBALL) outcome, Jacob would deviate to FOOTBALL, since πJ(FOOTBALL, FOOTBALL) = 1 > 0 = πJ(OPERA, FOOTBALL).
(3) For the (FOOTBALL, OPERA) outcome, Jacob would deviate to OPERA, since πJ(OPERA, OPERA) = 2 > 0 = πJ(FOOTBALL, OPERA).
(4) For the (FOOTBALL, FOOTBALL) outcome, Rachel would deviate to OPERA, since πR(FOOTBALL, OPERA) = 1 > 0 = πR(FOOTBALL, FOOTBALL).
Definition 2.5
1. In a two-player game, the best-response function of player i is the function Ri(aj) that, for every given
action aj of player j, assigns an action ai = Ri(aj) that maximizes player i's payoff πi(ai, aj).
2. More generally, in an N-player game, the best-response function of player i is the function Ri(a−i)
that, for given actions a−i of players 1, 2,..., i − 1, i + 1,..., N, assigns an action ai = Ri(a−i) that
maximizes player i's payoff πi(ai, a−i).
Let us now construct the best-response functions for Jacob and Rachel described in the Battle of the
Sexes game given in Table 2.2. It is straightforward to conclude that

RJ(OPERA) = OPERA and RJ(FOOTBALL) = FOOTBALL;
RR(OPERA) = OPERA and RR(FOOTBALL) = FOOTBALL.     (2.2)

That is, if Rachel plays OPERA, Jacob's "best response" is to play OPERA; if Rachel plays FOOTBALL, Jacob's "best
response" is to play FOOTBALL, and so on.
Now, the importance of learning how to construct best-response functions becomes clear in the
following proposition:
Proposition 2.3
An outcome a* = (a*1,..., a*N) is a Nash equilibrium if and only if a*i = Ri(a*−i) for every player i = 1, 2,..., N.
Proof. By Definition 2.4, in a NE outcome each player does not benefit from deviating from the
action played in the NE outcome (given that all other players do not deviate). Hence, by Definition 2.5,
each player is on her best-response function. Conversely, if every player's action is a best response to the
other players' NE actions, then no unilateral deviation is profitable.
That is, in a NE outcome, each player chooses an action that is a best response to the actions chosen by
other players in a NE. Proposition 2.3 is extremely useful in solving for NE in a wide variety of games
and will be used extensively.
The procedure for finding a NE is now very simple: First, we calculate the best-response function of
each player. Second, we check which outcomes lie on the best-response functions of all players. Those
outcomes that we find to be on the best-response functions of all players constitute the NE outcomes.
For example, in the Battle of the Sexes game, (2.2) implies that the outcomes (OPERA, OPERA) and (FOOTBALL, FOOTBALL) each satisfy
both players' best-response functions and therefore constitute NE outcomes.
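This intersection procedure is also easy to verify by hand or by machine. The Python sketch below (illustrative names; the payoffs are those reconstructed in Table 2.2, listed as Jacob's then Rachel's) computes the best-response functions and intersects them, as Proposition 2.3 prescribes:

# Best-response functions (Definition 2.5) for the Battle of the Sexes.
ACTS = ["OPERA", "FOOTBALL"]
payoff = {("OPERA", "OPERA"): (2, 1), ("OPERA", "FOOTBALL"): (0, 0),
          ("FOOTBALL", "OPERA"): (0, 0), ("FOOTBALL", "FOOTBALL"): (1, 2)}

def R_jacob(a_rachel):
    return max(ACTS, key=lambda a: payoff[(a, a_rachel)][0])

def R_rachel(a_jacob):
    return max(ACTS, key=lambda a: payoff[(a_jacob, a)][1])

# NE = outcomes lying on both best-response functions (Proposition 2.3).
ne = [(aj, ar) for aj in ACTS for ar in ACTS
      if aj == R_jacob(ar) and ar == R_rachel(aj)]
print(ne)  # [('OPERA', 'OPERA'), ('FOOTBALL', 'FOOTBALL')]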
Definition 2.6
1. An outcome â is said to Pareto dominate an outcome a* if πi(â) ≥ πi(a*) for every player i, and
πj(â) > πj(a*) for at least one player j.
2. An outcome a* is called Pareto efficient (also called Pareto optimal) if there does not exist any
outcome that Pareto dominates the outcome a*.
3. Outcomes a and â are called Pareto noncomparable if for some player i, πi(a) > πi(â), and for some
other player j, πj(â) > πj(a).
For example, in the Peace-War game, the outcomes (WAR, PEACE) and (PEACE, WAR) are Pareto
noncomparable. In the Battle of the Sexes game of Table 2.2, the outcomes (OPERA, FOOTBALL) and
(FOOTBALL, OPERA) are Pareto dominated by each of the other two outcomes. The outcomes
(OPERA, OPERA) and (FOOTBALL, FOOTBALL) are Pareto efficient and are also Pareto
noncomparable.
2.2 Extensive Form Games
Our analysis so far has concentrated on normal form games where the players are restricted to choosing
an action at the same time. In this section we analyze games in which players can move at different
times and more than once. Such games are called extensive form games. Extensive form games enable
us to introduce timing into the model.
Before going to the formal treatment, let us consider the following example. A terrorist boards a flight
from Minneapolis to New York. Thirty minutes into the flight, after the plane reaches a cruising altitude of thirty
thousand feet, the terrorist approaches the pilot and whispers that she will explode a bomb if
the pilot does not fly to Cuba. Figure 2.1 describes the Pilot-Terrorist game. One player is the pilot and
the other is the
Figure 2.1:
The pilot and the terrorist
terrorist. The game is represented by a tree, with a starting decision node (point I), other decision nodes
(IIN and IIC), and terminal nodes (end points). Note that in some literature, the term vertex (vertices) is
used in place of the term node(s). The branches connecting decision nodes to one another and to
terminal nodes describe the actions available to the relevant player at a particular decision node.
In this Pilot-Terrorist game, after hearing the terrorist's threat, the pilot gets to be the player to choose
an action at the starting node. At the starting node, the pilot's action set is given by Ap = {NY, CUBA}.
Depending on what action is chosen by the pilot, the terrorist has her turn to move at node IIC or IIN. The
terrorist's action set is {B, NB} at the node IIC and {B, NB} at the node IIN, where B stands for BOMB
and NB for NOT BOMB. In this
simple game, the terrorist's action sets happen to be the same at both nodes, but this need not always be
the case.
We can now give a formal definition to extensive form games with perfect information. Extensive form
games with imperfect information are defined in Definition 2.17 on page 38.
Definition 2.7
An extensive form game is:
1. A game tree containing a starting node, other decision nodes, terminal nodes, and branches linking
each decision node to successor nodes.
2. A list of the N players.
3. For each decision node, the name of the player entitled to choose an action.
4. For each player i, a specification of i's action set at each node that player i is entitled to choose an
action.
5. A specification of the payoff to each player at each terminal node.
2.2.1 Defining strategies and outcomes in extensive form games
Our preliminary discussion of extensive form games emphasized that a player may be called to choose
an action more than once and that each time a player chooses an action, the player has to choose an
action from the action set available at that particular node. Therefore, we need to define the following
term.
Definition 2.8
A strategy for player i (denoted by s i) is a complete plan (list) of actions, one action for each decision
node that the player is entitled to choose an action.
Thus, it is important to note that a strategy is not what a player does at a single specific node but is a list
of what the player does at every node where the player is entitled to choose an action.
What are the strategies available to the terrorist in the Pilot-Terrorist game described in Figure 2.1?
Since the terrorist may end up at either node IIC or IIN, a strategy for the terrorist would be a
specification of the precise action she will be taking at each node. That is, although it is clear that the
terrorist will reach either node IIC or IIN but not both, a strategy for this player must specify what she
will do at each of the two nodes. Therefore, the terrorist has four possible strategies, given by (B, B), (B,
NB), (NB, B), and (NB, NB), where the first component refers to the terrorist's action at node IIC, and the
second component refers to her action at node IIN.
Since the pilot is restricted to making a move only at node I, and since his action set has two possible
actions, this game has eight outcomes given by (NY, (B, B)), (NY, (B, NB)), (NY, (NB, B)), (NY, (NB,
NB)), (C, (B, B)), (C, (B, NB)), (C, (NB, B)), (C, (NB, NB)).
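The bookkeeping behind these counts is just a Cartesian product, which the short Python sketch below makes explicit (the variable names are mine, purely illustrative):

# Strategies (Definition 2.8) and outcomes of the Pilot-Terrorist game.
from itertools import product

pilot_actions = ["NY", "CUBA"]
node_actions = ["B", "NB"]  # terrorist's actions at nodes II_C and II_N

# A terrorist strategy = (action at II_C, action at II_N): 4 strategies.
terrorist_strategies = list(product(node_actions, repeat=2))
print(terrorist_strategies)  # [('B', 'B'), ('B', 'NB'), ('NB', 'B'), ('NB', 'NB')]

# An outcome pairs a pilot action with a terrorist strategy: 8 outcomes.
outcomes = list(product(pilot_actions, terrorist_strategies))
print(len(outcomes))  # 8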
2.2.2 A normal form representation for extensive form games
Now that the game is well defined, we seek to find some predictions. The first step would be to search
for a Nash equilibrium. Recalling our definition of Nash equilibrium (Definition 2.4), in extensive form
games
we look for a Nash equilibrium in strategies, where each player cannot increase the payoff by
unilaterally deviating from the strategy played at the NE outcome.
It turns out that in many instances transforming an extensive form game into a normal form makes it
easier to find the Nash equilibria. Table 2.4 provides the normal form representation for the PilotTerrorist game described in Figure 2.1. Table 2.4 shows that there are three Nash
                                  Terrorist
                 (B, B)      (B, NB)      (NB, B)      (NB, NB)
Pilot   NY       -1, -1       2, 0        -1, -1        2, 0
        CUBA     -1, -1      -1, -1        1, 1         1, 1

Table 2.4:
Normal form representation of the Pilot-Terrorist game
equilibrium outcomes for this game: (NY, (NB, NB)), (NY, (B, NB)) and (CUBA, (NB, B)). Note that
here, as in the Battle of the Sexes game, multiple NE greatly reduce our ability to generate predictions
from this game. For this reason, we now turn to defining an equilibrium concept that would narrow
down the set of NE outcomes into a smaller set of outcomes. In the literature, an equilibrium concept
that selects a smaller number of NE outcomes is called a refinement of Nash equilibrium, which is the
subject of the following subsection.
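The transformation from the tree to Table 2.4 can also be carried out mechanically, as in the Python sketch below. The terminal payoffs (pilot, terrorist) follow the discussion of Figure 2.1 in the text and should be treated as an assumption of this sketch rather than as the book's code:

# Rebuilding Table 2.4 from the game tree and finding the NE by the
# deviation test.
from itertools import product

tree = {("NY", "B"): (-1, -1), ("NY", "NB"): (2, 0),
        ("CUBA", "B"): (-1, -1), ("CUBA", "NB"): (1, 1)}

def play(pilot, strategy):
    # strategy = (action at II_C, action at II_N); only one node is reached.
    reached = strategy[1] if pilot == "NY" else strategy[0]
    return tree[(pilot, reached)]

strategies = list(product(["B", "NB"], repeat=2))
ne = [(p, s) for p in ["NY", "CUBA"] for s in strategies
      if not any(play(q, s)[0] > play(p, s)[0] for q in ["NY", "CUBA"])
      and not any(play(p, t)[1] > play(p, s)[1] for t in strategies)]
print(ne)  # [('NY', ('B', 'NB')), ('NY', ('NB', 'NB')), ('CUBA', ('NB', 'B'))]

The output reproduces exactly the three NE outcomes listed above.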
2.2.3 Subgames and subgame perfect equilibrium
In this subsection we define an equilibrium concept that satisfies all the requirements of NE (see
Definition 2.4) and has some additional restrictions. This equilibrium concept may be helpful in
selecting a smaller set of outcomes from the set of NE outcomes, by eliminating some undesirable NE
outcomes.
Before we proceed to the formal part, let us go back to the Pilot-Terrorist game and look at the three NE
outcomes for this game. Comparing the three NE outcomes, do you consider any equilibrium outcomes
to be unreasonable? What would you suggest if the pilot were to hire you as his strategic adviser? Well,
you would probably tell the pilot to fly to New York. Why? By looking at the terrorist's payoffs at the
terminal nodes in Figure 2.1, we can see that if the pilot flies to NEW YORK, the terrorist will NOT
BOMB (a payoff of πt = 0 compared with πt = -1 if she does), and the pilot will gain a payoff of πp = 2
compared with a payoff of πp = 1 for flying to Cuba. In other words, after the pilot flies to either
destination (New York or Cuba), the terrorist's payoff is maximized by choosing the NOT BOMB
action. From
this we conclude that the limitation of the NE concept is that it cannot capture the pilot's ability to
predict that the terrorist will not have the incentive to explode the bomb once the plane arrives in New
York (or in Cuba). More precisely, under the NE outcomes (CUBA, (NB, B)) and (NY, (B, NB)), the
terrorist is making what game theorists call an incredible threat, since the terrorist's payoffs at the
terminal nodes indicate that once reaching either node IIC or IIN, the terrorist will not explode the
bomb.
We now want to formalize an equilibrium concept that would exclude the unreasonable Nash equilibria.
In particular, we look for an equilibrium concept that would exclude outcomes where the terrorist
commits herself to the BOMB action, since such an action is incredible. Moreover, we seek to define an
equilibrium concept where the player who moves first (the pilot in our case) would calculate and take
into account how subsequent players (the terrorist in the present case) would respond to the moves of
the players who move earlier in the game. Hence, having computed how subsequent players would
respond, the first player can optimize by narrowing down the set of actions yielding higher payoffs. In
the Pilot-Terrorist example, we wish to find an equilibrium concept that would generate a unique
outcome where the pilot flies to New York.
We first define a subgame of the game.
Definition 2.9
A subgame is a decision node from the original game along with the decision nodes and terminal nodes
directly following this node. A subgame is called a proper subgame if it differs from the original game.
Clearly, the Pilot-Terrorist game has three subgames: one is the game itself, whereas the other two are
proper subgames with nodes IIC and IIN as starting nodes. The two proper subgames are illustrated in
Figure 2.2.
Figure 2.2:
Two proper subgames
Definition 2.10
An outcome is said to be a subgame perfect equilibrium (SPE) if it induces a Nash equilibrium in every
subgame of the original game.
Definition 2.10 states that a SPE outcome is a list of strategies, one for each player, consisting of
players' actions that constitute a NE in every subgame. In particular, a SPE outcome must be a NE for
the original game, since the original game is a subgame of itself. Note that in each proper subgame of
the Pilot-Terrorist game, the action NB is a NE.
We now seek to apply Definition 2.10 in order to solve for a SPE of the Pilot-Terrorist game.
Claim 2.4
The outcome (NY, (NB, NB)) constitutes a unique SPE for the Pilot-Terrorist game.
Proof. Since a SPE is also a NE for the original game, it is sufficient to look at the three NE outcomes
of the original game, given by (NY, (B, NB)), (CUBA, (NB, B)), and (NY, (NB, NB)). Next, each proper
subgame has only one NE, namely, the terrorist chooses NB. Hence, given that a SPE outcome must be
a NE for every subgame, we conclude that the outcomes (NY, (B, NB)) and (CUBA, (NB, B)) are not SPE.
Finally, the outcome (NY, (NB, NB)) is a SPE since it is a NE for the original game, and the outcome
(action) NB is the unique NE for every proper subgame.
Thus, we have shown that the SPE refines the NE in the sense of excluding some outcomes
that we may consider unreasonable.
We conclude this discussion of the SPE by describing the methodologies commonly used for finding
SPE outcomes. The general methodology for finding the SPE outcomes is to use backward induction,
meaning that we start searching for NE in the subgames leading to the terminal nodes. Then, we look
for NE in the subgames leading to those last subgames, taking as given the NE
actions to be played in the last subgames before the terminal nodes. Then, continuing to solve
backwards, we reach the starting node and look for the action that maximizes player 1's payoff, given
the NE of all the proper subgames. Note that the backward induction methodology is particularly useful
when the game tree is long. Finally, another common methodology is to first find the NE outcomes for
the game, say by transforming the extensive form representation into a normal form representation (see
subsection 2.2.2). Then, once we have the set of all NE outcomes, we are left to select those outcomes
that are also NE for all subgames. This can be done by trial and error, or, as we do in the proof of Claim
2.4, by ruling out the NE outcomes of the original game that are not NE for some proper subgames.
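For a short tree like Figure 2.1, backward induction takes only two steps, as the Python sketch below shows. It reuses the terminal payoffs assumed in the earlier normal-form sketch (again an assumption drawn from the text's discussion, not the book's own code):

# Backward induction on the Pilot-Terrorist tree: solve the proper
# subgames first, then the pilot's move at the starting node.
tree = {("NY", "B"): (-1, -1), ("NY", "NB"): (2, 0),
        ("CUBA", "B"): (-1, -1), ("CUBA", "NB"): (1, 1)}

# Step 1: at each node II_N, II_C the terrorist picks her best action.
best_t = {d: max(["B", "NB"], key=lambda a: tree[(d, a)][1])
          for d in ["NY", "CUBA"]}

# Step 2: the pilot, anticipating best_t, picks his best destination.
best_p = max(["NY", "CUBA"], key=lambda d: tree[(d, best_t[d])][0])

print(best_p, best_t)  # NY {'NY': 'NB', 'CUBA': 'NB'}  -- the SPE of Claim 2.4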
Hence, a strategy of a player in a repeated game is a list of actions to be played in each period τ, where
each period-τ action of player i is based on the observed list of actions played by all players in all
periods t = 1, 2,..., τ − 1, summarized by the history Hτ. Therefore, an outcome of a repeated game
would be a list of the actions each player takes in every period, whereas the period-τ payoff to each
player is a function of the actions played by the players in period τ.
Consider our Peace-War game described in Table 2.1, and suppose that this game is repeated T times, in periods t = 1, 2, ..., T, where T may be finite or infinite. Denote by a_i^t ∈ {WAR, PEACE} the action played by country i in period t, by a^t = (a_1^t, a_2^t) the resulting period-t action profile, and by H^τ = (a^1, a^2, ..., a^{τ−1}) the history of play before period τ.
In the second period, there are two possible actions country 1 can take: WAR and PEACE. Now, in order to fully specify a strategy, country 1 has to specify which action will be taken for every possible history. Since each of the two countries can play one of two actions in period 1, there are 4 = 2 × 2 possible first-period histories; hence, the number of second-period contingent plans is 2^4 = 16. On top of this, there are two possible actions available to country 1 in period 1. Hence, the number of strategies available to country 1 in a two-action, two-period repeated game is 2 × 2^4 = 2^5 = 32.
Similarly, if the game is repeated three times (T = 3), the strategy set of country 1 contains 2 × 2^4 × 2^16 = 2^21 strategies, since in the third period there are 16 = 4 × 4 possible histories (resulting from the four possible lists of players' actions in each of the two preceding periods).
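These counts can be verified with a two-line computation; the sketch below sums the number of histories per period for the two-action, two-player case.

    # Number of strategies available to one country in the T-times repeated
    # Peace-War game: in period t there are 4**(t-1) possible histories (four
    # action profiles per preceding period), each mapped to one of 2 actions.
    def num_strategies(T):
        return 2 ** sum(4 ** (t - 1) for t in range(1, T + 1))

    print(num_strategies(1))   # 2
    print(num_strategies(2))   # 32      = 2 * 2**4 = 2**5
    print(num_strategies(3))   # 2097152 = 2**21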
We now state our main proposition for finitely repeated games:
Proposition 2.4
For any finite integer T, 1 ≤ T < ∞, the T-times repeated Peace-War game has a unique SPE, in which each country plays WAR in each period.
Thus, Proposition 2.4 states that no matter how many times the Peace-War game is repeated (it could be
one, or it could be a billion times), the unique SPE is WAR played by all players in every period.
Proof. Using backward induction, suppose that the countries have already played T − 1 periods and are now ready to play the final, period-T game. Since period T is the last period in which the game is played, the period-T game is identical to the single one-shot Peace-War game. Hence, the unique NE for the period-T game is WAR played by each country.
Now, consider the game played in period T − 1. Both players know that after this game is completed, they will have one last game to play, in which both will not cooperate and will play WAR. Hence, in period T − 1 each player plays the dominant action WAR. Working backwards through the preceding periods T − 2, T − 3, and so on until period 1, we establish that WAR is played by every player in each period.
2.3.2 Infinitely repeated game
Now, suppose that the game is repeated infinitely many times (i.e., T = ∞). The difference between the infinitely repeated game and the finitely repeated game (whether short or long) is that in an infinitely repeated game, backward induction (used in the proof of Proposition 2.4) cannot be used to arrive at equilibrium outcomes, since there is no final period from which to "start" the backward induction process.
Under a trigger strategy, player i plays PEACE in period τ as long as all players cooperated in period τ − 1. However, if any player did not cooperate and played WAR in period τ − 1, then player i "pulls the trigger" and plays the noncooperative action forever; that is, a_i^t = WAR for every t = τ, τ + 1, τ + 2, .... Formally,
Definition 2.12
Player i is said to be playing a trigger strategy if, for every period τ, τ = 1, 2, ...,

    a_i^τ = PEACE   if τ = 1, or if every country played PEACE in all periods t = 1, ..., τ − 1;
    a_i^τ = WAR     otherwise.
That is, country i cooperates by playing PEACE as long as no country (including itself) deviates from
the cooperative outcome. However, in the event that a country deviates even once, country i punishes
the deviator by engaging in a WAR forever.
Equilibrium in trigger strategies
We now seek to investigate under what conditions the outcome where both countries play their trigger
strategies constitutes a SPE.
Proposition 2.5
If the discount factor δ is sufficiently large, then the outcome where the players play their trigger strategies is a SPE. Formally, the trigger strategies constitute a SPE if δ > 1/2.
Proof. Let us look at a representative period, call it period τ, and suppose that country 2 has not deviated in periods 1, ..., τ − 1. Then, if country 1 deviates and plays a_1^τ = WAR, Table 2.1 shows that its period-τ payoff rises from 2 to 3. However, given that country 1 has deviated, country 2 deviates in all subsequent periods and plays a_2^t = WAR for every t ≥ τ + 1, since country 2 plays a trigger strategy. Hence, from period τ + 1 on, country 1 earns a payoff of 1 each period. Therefore, the period-(τ + 1) sum of discounted payoffs to country 1 over all periods t ≥ τ + 1 is 1 + δ + δ^2 + ··· = 1/(1 − δ). Note that we used the familiar formula for calculating the present value of an infinite stream of payoffs, x + δx + δ^2 x + ··· = x/(1 − δ). Hence, if country 1 deviates in period τ, its sum of discounted payoffs is the sum of period τ's payoff from playing WAR (while country 2 plays PEACE), equal to 3, plus the discounted infinite sum of payoffs when both countries play WAR (a payoff of 1 each period). Thus, if country 1 deviates from PEACE in period τ, then

(2.3)    π_1^DEVIATE = 3 + δ × [1/(1 − δ)] = 3 + δ/(1 − δ).
However, if country 1 does not deviate, then both countries play PEACE indefinitely, since country 2 plays a trigger strategy. Hence, both countries gain a payoff of 2 each period. Thus,

(2.4)    π_1^COOPERATE = 2 + 2δ + 2δ^2 + ··· = 2/(1 − δ).

Comparing (2.3) with (2.4) yields the conclusion that deviation is not beneficial for country 1 if δ > 1/2. Since no unilateral deviation is beneficial to any country in any subgame starting at an arbitrary period τ, we conclude that no unilateral deviation is beneficial to a country at any period t.
So far, we have shown that when both countries play the trigger strategy, no country has an incentive to unilaterally deviate from playing PEACE. In the language of game theorists, we have shown that deviation from the equilibrium path is not beneficial to any country. However, to prove that the trigger strategies constitute a SPE, we also need to show that if one country deviates and plays WAR, the other country would adhere to its trigger strategy and play WAR forever. In the language of game theorists, to prove SPE we need to prove that no player has an incentive to deviate from the prescribed strategy even if the game proceeds off the equilibrium path. To see this, note that if country 1 deviates from PEACE in period τ, then Definition 2.12 implies that country 1 itself will play WAR from period τ + 1 on, since any deviation (by country 1 or country 2) triggers country 1 to play WAR forever. Hence, country 2 punishes country 1 by playing WAR forever, since WAR then yields country 2 a payoff of 1 each period (compared with a payoff of 0 if country 2 continues playing PEACE). Altogether, the trigger strategies defined in Definition 2.12 constitute a SPE for the infinitely repeated Peace-War game.
Proposition 2.5 demonstrates the relationship between the players' time discount factor, given by δ, and their incentive to deviate from the cooperative action. That is, when players have a low discount factor (say, δ close to zero), the players do not care much about future payoffs. Hence, cooperation cannot be a SPE, since each player wishes to maximize only its first-period payoff. However, when δ is large (δ > 1/2 in our case), players do not heavily discount future payoffs, so cooperation becomes more beneficial: the punishment for deviation becomes significant, because the discounted flow of payoffs under cooperation (2 per period) is higher than the short-run gain from deviation (a payoff of 3 for one period and 1 thereafter). This discussion leads to the following corollary:
Corollary 2.1
In an infinitely repeated game cooperation is easier to sustain when players have a higher time
discount factor.
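The threshold in Proposition 2.5 is easy to verify numerically with the payoffs used in the text (2 per period under cooperation, 3 for a one-period deviation, and 1 per period under mutual WAR); a minimal sketch:

    # Discounted payoffs from cooperating versus deviating in the infinitely
    # repeated Peace-War game, evaluated at a few discount factors.
    def discounted_payoffs(delta):
        cooperate = 2 / (1 - delta)            # 2 + 2*delta + 2*delta**2 + ...
        deviate = 3 + delta * 1 / (1 - delta)  # 3 today, then 1 forever
        return cooperate, deviate

    for delta in (0.3, 0.5, 0.7):
        c, d = discounted_payoffs(delta)
        print(delta, round(c, 2), round(d, 2), "cooperate" if c > d else "deviate/indifferent")
    # delta = 0.3: 2.86 < 3.43 -> deviating pays
    # delta = 0.5: 4.0 = 4.0   -> exactly indifferent (the threshold)
    # delta = 0.7: 6.67 > 5.33 -> cooperation pays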
2.3.3 A discussion of repeated games and cooperation
In this section we have shown that a one-shot game with a unique non-cooperative Nash equilibrium
can have a cooperative SPE when it is repeated infinitely. However, note that in the repeated game, this
SPE is not unique. For example, it is easy to show that the noncooperative outcome where each country
plays WAR in every period constitutes a SPE also. Moreover, the Folk Theorem (Folk, because it was
well known to game theorists long before it was formalized) states that for a sufficiently high time
discount factor, a large number of outcomes in the repeated game can be supported as a SPE. Thus, the
fact that we merely show that cooperation is a SPE is insufficient to conclude that a game of this type
will always end up with cooperation. All that we managed to show is that cooperation is a possible SPE
in an infinitely repeated game.
Finally, let us look at an experiment Robert Axelrod conducted in which he invited people to write
computer programs that play the Prisoners' Dilemma game against other computer programs a large
number of times. The winner was the programmer who managed to score the largest sum over all the
games played against all other programs. The important result of this tournament was that the program
that used a strategy called Tit-for-Tat won the highest score. The Tit-for-Tat strategy is different from
the trigger strategy defined in Definition 2.12 because it contains a less severe punishment in case of
deviation. In the Tit-for-Tat strategy, a player would play in period t what the opponent played in period
t - 1. Thus, even if deviation occurred, once the opponent resumes cooperation, the players would
switch to cooperation in a subsequent period. Under the trigger strategy, once one of the players
deviates, the game enters a noncooperative phase forever.
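The difference between the two punishment schemes is easy to see in a small simulation. In the sketch below, player 1 mechanically plays PEACE except for a single WAR in period 3, while player 2 follows either Tit-for-Tat or the trigger (grim) strategy of Definition 2.12; this illustrates the punishment paths only and is not an equilibrium test.

    def tit_for_tat(history):
        # Copy the opponent's previous action; cooperate in the first period.
        return history[-1][0] if history else "PEACE"

    def grim_trigger(history):
        # Punish forever once the opponent has ever played WAR.
        return "WAR" if any(a1 == "WAR" for a1, _ in history) else "PEACE"

    def play(strategy2, periods=8):
        history = []
        for t in range(1, periods + 1):
            a1 = "WAR" if t == 3 else "PEACE"   # player 1's single deviation
            history.append((a1, strategy2(history)))
        return history

    print(play(tit_for_tat))   # one retaliatory WAR in period 4, then cooperation resumes
    print(play(grim_trigger))  # WAR in every period from period 4 onward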
2.4 Appendix: Games with Mixed Actions
The tools developed in this appendix are not used elsewhere in this book and are presented here only for the sake of completeness. Thus, this appendix is not necessary for studying this book successfully, and beginning readers are urged to skip it.
Games with mixed actions are those in which the players randomize over the actions available in their
action sets. Often, it is hard to
motivate games with mixed actions in economic modeling. This is not because we think that players
do not choose actions randomly in real life. On the contrary, the reader can probably recall many
instances in which he or she decided to randomize actions. The major reason why games with mixed
actions are hard to interpret is that it is not always clear why the players benefit from randomizing
among their pure actions.
The attractive feature of games with mixed actions is that a Nash equilibrium (in mixed actions) always
exists. Recall that Proposition 2.2 demonstrates that a Nash equilibrium in pure actions need not always
exist.
In what follows, our analysis concentrates on the Top-Bottom-Left-Right game given in Table 2.5.

                              Ms. β
                       L (left)      R (right)
  Ms. α   T (top)        0    0        0   -1
          B (bottom)     1    0       -1    3

Table 2.5: NE in mixed actions (in each cell, the left entry is α's payoff and the right entry is β's payoff)

The reason for focusing on the game in Table 2.5 is that it allows us to show that a Nash equilibrium in mixed actions exists despite the fact that a Nash equilibrium in pure actions does not (the reader is urged to verify that a Nash equilibrium in pure actions indeed does not exist).
We now wish to modify a game with pure strategies to a game where the players choose probabilities of
taking actions from their action sets. Recall that by Definition 2.1, we need to specify three elements:
(a) the list of players (already defined), (b) the action set available to each player, and (c) the payoff to
each player at each possible outcome (the payoff function for each player).
Definition 2.13
1. A mixed action of player α is a probability distribution over playing a^α = T and playing a^α = B. Formally, a mixed action of player α is a probability θ, 0 ≤ θ ≤ 1, such that player α plays T with probability θ and plays B with probability 1 − θ.
2. A mixed action of player β is a probability λ, 0 ≤ λ ≤ 1, such that player β plays L with probability λ and plays R with probability 1 − λ.
3. An action profile of a mixed-actions game is a list (θ, λ) (i.e., the list of the mixed actions chosen by the players).
4. An outcome of a game with mixed actions is the list of the realization of the actions played by each
player.
Definition 2.13 implies that the mixed-action set of each player is the interval [0, 1], where player α picks a θ ∈ [0, 1] and player β picks a λ ∈ [0, 1]. The reader has probably noticed that Definition 2.13
introduces a new term, action profile, which replaces the term outcome used in normal form games,
Definition 2.1. The reason for introducing this term is that in a game with mixed actions, the players
choose only probabilities for playing their strategies, so the outcome itself is random. In games with
pure actions, the term action profile and the term outcome mean the same thing since there is no
uncertainty. However, in games with mixed actions, the term action profile is used to describe the list of
probability distributions over actions chosen by each player, whereas the term outcome specifies the list
of actions played by each player after the uncertainty is resolved.
Our definition of the "mixed extension" of the game is incomplete unless we specify the payoff to each
player under all possible action profiles.
Definition 2.14
A payoff function of a player in the mixed-action game is the expected value of the player's payoffs in the game with pure actions. Formally, for any given action profile (θ, λ), the expected payoff to player i, i = α, β, is given by

    Eπ^i(θ, λ) = θλ u^i(T, L) + θ(1 − λ) u^i(T, R) + (1 − θ)λ u^i(B, L) + (1 − θ)(1 − λ) u^i(B, R),

where u^i(·, ·) denotes player i's payoff in the pure-actions game of Table 2.5.
According to Definition 2.1 our game is now well defined, since we specified the action sets and the
payoff functions defined over all possible action profiles of the mixed actions game.
Applying the NE concept, defined in Definition 2.4, to our mixed-actions game, we can state the
following definition:
Definition 2.15
An action profile (θ*, λ*), where 0 ≤ θ* ≤ 1 and 0 ≤ λ* ≤ 1, is said to be a Nash equilibrium in mixed actions if no player would find it beneficial to deviate from her or his mixed action, given that the other player does not deviate from her or his mixed action. Formally,

    Eπ^α(θ*, λ*) ≥ Eπ^α(θ, λ*) for every θ ∈ [0, 1],   and   Eπ^β(θ*, λ*) ≥ Eπ^β(θ*, λ) for every λ ∈ [0, 1].
We now turn to solving for the Nash equilibrium of the mixed-actions extension of the game described in Table 2.5. Substituting the payoffs associated with the "pure" outcomes of the game in Table 2.5 into the "mixed" payoff functions given in Definition 2.14 yields

    Eπ^α(θ, λ) = (1 − θ)[λ − (1 − λ)] = (1 − θ)(2λ − 1)

and

    Eπ^β(θ, λ) = θ(1 − λ)(−1) + (1 − θ)(1 − λ)(3) = (1 − λ)(3 − 4θ).

The implied best-response functions are

(2.8)    R^α(λ): θ = 1 if λ < 1/2,  θ ∈ [0, 1] if λ = 1/2,  θ = 0 if λ > 1/2;
         R^β(θ): λ = 0 if θ < 3/4,  λ ∈ [0, 1] if θ = 3/4,  λ = 1 if θ > 3/4.

That is, when player β plays R with a high probability (1 − λ > 1/2), player α's best response is to play T with probability 1 (θ = 1) in order to minimize the probability of getting a payoff of −1. However, when player β plays L with a high probability (λ > 1/2), player α's best response is to play B with probability 1 (θ = 0) in order to maximize the probability of getting a payoff of +1. A similar explanation applies to the best-response function of player β.
The best-response functions of the two players are drawn in Figure 2.3. Equations (2.8) and Figure 2.3 show that when player β plays λ = 1/2, player α is indifferent among all her actions. That is, when λ = 1/2, the payoff of player α is the same (zero) for every mixed action θ ∈ [0, 1]. In particular, player α is indifferent between playing a pure action (meaning that θ = 0 or θ = 1) and playing any other mixed action (0 < θ < 1). Similarly, player β is indifferent among all her mixed actions λ ∈ [0, 1] when player α plays θ = 3/4.
Although a NE in pure actions does not exist for the game described in Table 2.5, the following
proposition shows:
Proposition 2.6
There exists a unique NE in mixed actions for the game described in Table 2.5. In this equilibrium, θ = 3/4 and λ = 1/2.
Figure 2.3:
Best-response functions for the mixed-action extended game
The proposition follows directly from the right-hand side of Figure 2.3, which shows that the two best-response functions given in (2.8) have a unique intersection.
Finally, the best-response functions given in (2.8) have the property of being composed of horizontal and vertical line segments. Since the equilibrium occurs where the two curves intersect in their "middle" sections, it follows that under the mixed-action NE each player is indifferent among all the probabilities she could play, given that the other player does not deviate from the equilibrium mixed action. This result makes the intuitive interpretation of a mixed-action game rather difficult, because there is no particular reason why each player would stick to the mixed action played under the NE.
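As a numerical cross-check, using Table 2.5's payoff matrix as reproduced above, the following sketch confirms that at θ = 3/4 and λ = 1/2 each player is indifferent between her two pure actions, which is exactly the NE condition.

    # Expected payoffs in the mixed extension of Table 2.5.
    # theta = Pr[alpha plays T], lam = Pr[beta plays L].
    U_ALPHA = {("T", "L"): 0, ("T", "R"): 0,  ("B", "L"): 1, ("B", "R"): -1}
    U_BETA  = {("T", "L"): 0, ("T", "R"): -1, ("B", "L"): 0, ("B", "R"): 3}

    def expected(u, theta, lam):
        prob = {("T", "L"): theta * lam,       ("T", "R"): theta * (1 - lam),
                ("B", "L"): (1 - theta) * lam, ("B", "R"): (1 - theta) * (1 - lam)}
        return sum(u[o] * prob[o] for o in prob)

    # alpha is indifferent between T (theta = 1) and B (theta = 0) at lam = 1/2:
    print(expected(U_ALPHA, 1.0, 0.5), expected(U_ALPHA, 0.0, 0.5))   # 0.0 0.0
    # beta is indifferent between L (lam = 1) and R (lam = 0) at theta = 3/4:
    print(expected(U_BETA, 0.75, 1.0), expected(U_BETA, 0.75, 0.0))   # 0.0 0.0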
2.5 Appendix: Games with Imperfect Information
Games with imperfect information are presented here only for the sake of completeness, and beginning readers are urged to skip this appendix. Games with imperfect information describe situations
where some players do not always observe the action taken by another player earlier in the game,
thereby making the player unsure which node has been reached. For example, Figure 2.4 describes a
variant of the Pilot-Terrorist game given in Figure 2.1. In Figure 2.4 we suppose that the terrorist cannot
monitor the direction in which the pilot is flying, say because the terrorist cannot read a compass or
because the pilot disables some of the navigation equipment. The broken line connecting nodes II_C and II_N describes an information set for the terrorist. The information set tells us that in this game, the terrorist cannot distinguish whether node II_C or node II_N has been reached. Thus, when the terrorist has her
turn to make a move, she has to choose an action without knowing the precise node she is on. Formally,
Figure 2.4:
A game with imperfect information: Information sets
Definition 2.16
An information set for a player is a collection of nodes in which the player has to choose an action.
When a player reaches an information set, the player knows that the particular information set has
been reached, but if the information set contains more than one node, the player does not know which
particular node in this collection has been reached.
We now have the tools to define a game with imperfect information:
Definition 2.17
An extensive form game is called
1. A game with imperfect information if one of the information sets contains more than one node;
2. A game with perfect information if each information set contains a single node.
Thus, all the extensive form games analyzed in Section 2.2 are games with perfect information since
each information set coincides with a single node.
We now slightly extend our definition of a strategy (Definition 2.8) to incorporate games with imperfect
information:
Definition 2.18
In a game with imperfect information, a strategy for a player is a list of actions that a player chooses at
any information set where the player is entitled to take an action.
Thus, Definition 2.18 provides a more general definition of a strategy (compared with Definition 2.8)
since a strategy is a list of actions a player chooses in each information set rather than in each node
where the player is entitled to take an action. Under perfect information, of course, Definitions 2.8 and
2.18 coincide, since under perfect information each information set is a singleton.
Finally, we need to extend our definition of subgames (Definition 2.9) to incorporate games with
imperfect information.
Definition 2.19
A subgame starts at an information set containing a single node and includes all the subsequent decision and terminal nodes, provided that no subsequent node is contained in an information set that also contains nodes that cannot be reached from the subgame's starting node.
Figure 2.5 illustrates a game with imperfect information. In Figure 2.5, the nodes labeled A, D, and G
are starting nodes for a subgame. However, the nodes labeled B, C, E, and F are not starting nodes for a
subgame since some subsequent nodes are contained in information sets containing nodes that cannot
be reached from these nodes.
Figure 2.5:
Game with imperfect information: Subgames
For example, the modified Pilot-Terrorist game described in Figure 2.4 has only one subgame, which is
the original game itself, because all subsequent nodes are contained in information sets containing more
than one node.
We conclude our discussion of games with imperfect information by solving for the NE and SPE of the modified Pilot-Terrorist game described in Figure 2.4. First, all the possible outcomes for this game are given by (NY, B), (NY, NB), (Cuba, B), and (Cuba, NB). Thus, in the Pilot-Terrorist game under imperfect information, the number of outcomes has been reduced from eight to four, since the terrorist now makes a decision at one information set (compared with two nodes under perfect information). Second, since this game does not have any proper subgames, any NE is also a SPE. Hence, in this case, the set of NE outcomes coincides with the set of SPE outcomes. Thus, we can easily conclude that (NY, NB) constitutes both a NE and a SPE outcome.
2.6 Exercises
1. Using Definition 2.5,
(a) Write down the best-response functions for country 1 and country 2 for the Peace-War game
described in Table 2.1, and decide which outcomes constitute NE.
(b) Write down the best-response functions for Jacob and Rachel for the game described in Table
2.3, and decide which outcomes constitute a NE (if there are any).
(c) Write down the best-response functions for player 1 and player 2 for the game described in
Table 2.5, and decide which outcomes constitute a NE (if there are any).
2. Consider the normal form game described in Table 2.6. Find the conditions on the parameters a, b, c, d, e, f, g, and h that will ensure that

                              Ms. β
                       L (left)      R (right)
  Ms. α   T (top)        a    b        c    d
          B (bottom)     e    f        g    h

Table 2.6: Normal form game: Fill in the conditions on payoffs
one assumed to be honest. (b) If n_i = n_j, then the manager assumes that both travelers are honest and pays them the declared value of the antiques. Letting n_1 and n_2 denote the actions of the players, answer the following questions:
(a) Under Definition 2.6, which outcomes are Pareto optimal?
(b) Under Definition 2.4, which outcomes constitute a Nash equilibrium for this game?
4. Consider a normal form game between three major car producers, C, F, and G. Each producer can produce either large cars or small cars, but not both. That is, the action set of each producer i, i = C, F, G, is A_i = {LARGE, SMALL}. We denote by a_i the action chosen by player i, a_i ∈ A_i, and by π_i(a_C, a_F, a_G) the profit to firm i. Assume that the profit function of each player i is defined by
5. Figure 2.6 describes an extensive form version of the Battle of the Sexes game given initially in
Table 2.2. Work through the following problems.
(a) How many subgames are there in this game? Describe and plot all the subgames.
(b) Find all the Nash equilibria in each subgame. Prove your answer!
(c) Find all the subgame perfect equilibria for this game.
(d) Before Rachel makes her move, she hears Jacob shouting that he intends to go to the opera (i.e., play OPERA). Would such a statement change the subgame perfect equilibrium outcomes? Prove and explain!
6. (This problem refers to mixed actions games studied in the appendix, section 2.4.) Consider the
Battle of the Sexes game described in Table 2.2.
(a) Denote by θ^J the probability that Jacob goes to the OPERA and by θ^R the probability that Rachel goes to the OPERA. Formulate the expected payoff of each player.
(b) Draw the best-response function of each player [R_J(θ^R) and R_R(θ^J)].
(c) What is the NE in mixed actions for this game?
(d) Calculate the expected payoff to each player in this NE.
(e) How many times do the two best-response functions intersect? Explain the difference in the
number of intersections between this game and the best-response functions illustrated in Figure 2.3.
Figure 2.6:
Battle of the Sexes in extensive form
2.7 References
Aumann, R. 1987. "Game Theory." In The New Palgrave Dictionary of Economics, edited by J. Eatwell, M. Milgate, and P. Newman. New York: The Stockton Press.
Axelrod, R. 1984. The Evolution of Cooperation. New York: Basic Books.
Binmore, K. 1992. Fun and Games. Lexington, Mass.: D.C. Heath.
Friedman, J. 1986. Game Theory with Applications to Economics. New York: Oxford University Press.
Fudenberg, D., and J. Tirole. 1991. Game Theory. Cambridge, Mass.: MIT Press.
Gibbons, R. 1992. Game Theory for Applied Economists. Princeton, N.J.: Princeton University Press.
McMillan, J. 1992. Games, Strategies, and Managers. New York: Oxford University Press.
Moulin, H. 1982. Game Theory for the Social Sciences. New York: New York University Press.
Osborne, M., and A. Rubinstein. 1994. A Course in Game Theory. Cambridge, Mass.: MIT Press.
Rasmusen, E. 1989. Games and Information: An Introduction to Game Theory. Oxford: Blackwell.
Chapter 3
Technology, Production Cost, and Demand
Large increases in cost with questionable increase in performance can be tolerated only for race horses and fancy
[spouses].
Lord Kelvin 1824-1907 (President of the Royal Society)
This chapter reviews basic concepts of microeconomic theory. Section 3.1 (Technology and Cost)
introduces the single-product production function and the cost function. Section 3.2 analyzes the basic
properties of demand functions. The reader who is familiar with these concepts and properties can skip
this chapter and proceed with the study of industrial organization. The student reader should note that this chapter reflects the maximum degree of technicality needed to grasp the material in this book. Thus, a reader who finds the material in this chapter comprehensible should feel technically well prepared for this course.
3.1 Technology and Cost
The production function reflects the know-how of a certain entity that we refer to as the firm. This
know-how enables the firm to transform factors of production into what we call final goods. In general,
we refrain from addressing the philosophical question of where technological know-how comes from.
However, in chapter 9 (Research and Development) we do analyze some factors that affect the advance
of technological know-how.
For example, the marginal-product functions associated with the class of production functions

    Q = f(l, k) = a(l^α + k^α)^β,   where a, α, β > 0,

are given by

    MP_L(l, k) = ∂Q/∂l = aαβ l^{α−1} (l^α + k^α)^{β−1}   and   MP_K(l, k) = ∂Q/∂k = aαβ k^{α−1} (l^α + k^α)^{β−1}.

It is important to note that the marginal product of a factor is a function (not necessarily a constant) of the amounts of labor and capital used in the production process. In our example, when α < 1, MP_L grows without bound as l approaches zero, meaning that in this production process the marginal product of labor gets larger and larger as the amount of labor becomes scarce.
So far, we have not discussed the relationship between the two factors. We therefore make the
following definition.
Definition 3.1
1. Labor and capital are called supporting factors in a particular production process if an increase in the employment of one factor raises the marginal product of the other factor. Formally, if

    ∂MP_L(l, k)/∂k = ∂²f(l, k)/(∂k ∂l) > 0.

2. Labor and capital are called substitute factors in a particular production process if an increase in the employment of one factor decreases the marginal product of the other factor. Formally, if

    ∂MP_L(l, k)/∂k = ∂²f(l, k)/(∂k ∂l) < 0.

In our example, the reader can verify that labor and capital are supporting factors if β > 1 and substitute factors if β < 1.
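A finite-difference check of this claim, under the functional form assumed in this section's example, can be run in a few lines:

    # Definition 3.1 check for Q = a*(l**alpha + k**alpha)**beta: raising k
    # should raise MP_L when beta > 1 (supporting) and lower it when beta < 1.
    def f(l, k, a=1.0, alpha=0.5, beta=2.0):
        return a * (l ** alpha + k ** alpha) ** beta

    def mp_l(l, k, h=1e-6, **params):
        return (f(l + h, k, **params) - f(l, k, **params)) / h

    for beta in (2.0, 0.5):
        change = mp_l(1.0, 2.0, beta=beta) - mp_l(1.0, 1.0, beta=beta)
        print(beta, "supporting" if change > 0 else "substitute")
    # beta = 2.0 -> supporting; beta = 0.5 -> substitute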
We conclude the discussion of the production function by looking at the effect of input expansion on
the amount of production. Formally,
Definition 3.2
Let λ be any number greater than 1. Then, a production technology Q = f(l, k) is said to exhibit
1. Increasing returns to scale (IRS) if f(λl, λk) > λ f(l, k). That is, if expanding the employment of labor and capital by a factor of λ increases output by more than a factor of λ.
2. Decreasing returns to scale (DRS) if f(λl, λk) < λ f(l, k). That is, if expanding the employment of labor and capital by a factor of λ increases output by less than a factor of λ.
3. Constant returns to scale (CRS) if f(λl, λk) = λ f(l, k). That is, if expanding the employment of labor and capital by a factor of λ increases output by exactly a factor of λ.

In our example, the production technology Q = a(l^α + k^α)^β satisfies f(λl, λk) = λ^{αβ} f(l, k), so it exhibits IRS if λ^{αβ} > λ, that is, if and only if αβ > 1.
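Again under the same assumed functional form, the returns-to-scale classification can be checked numerically:

    # Definition 3.2 check: scaling both inputs by lam multiplies output by
    # lam**(alpha*beta), so the classification depends only on alpha*beta.
    def f(l, k, a=1.0, alpha=0.5, beta=1.0):
        return a * (l ** alpha + k ** alpha) ** beta

    lam = 2.0
    for alpha, beta in ((0.5, 3.0), (0.5, 2.0), (0.5, 1.0)):
        ratio = f(lam, lam, alpha=alpha, beta=beta) / f(1.0, 1.0, alpha=alpha, beta=beta)
        kind = "IRS" if ratio > lam + 1e-9 else ("DRS" if ratio < lam - 1e-9 else "CRS")
        print(alpha * beta, round(ratio, 3), kind)
    # alpha*beta = 1.5 -> IRS; = 1.0 -> CRS; = 0.5 -> DRS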
We define the marginal cost function as the change in total cost resulting from a 'small' increase in the output level. Formally, the marginal cost function at an output level Q is defined by

    MC(Q) ≡ dTC(Q)/dQ.

Figure 3.1: Total, average, and marginal cost functions
Hence, the average cost function declines whenever MC(Q) < AC(Q), rises whenever MC(Q) > AC(Q), and is minimized at the output level where the two curves intersect. To demonstrate how useful Proposition 3.1 can be, we now return to our example illustrated in Figure 3.1, where TC(Q) = F + cQ^2. Proposition 3.1 states that in order to find the output level that minimizes the cost per unit, all we need to do is solve for Q_min from the equation AC(Q_min) = MC(Q_min). In our example, AC(Q) = F/Q + cQ and MC(Q) = 2cQ. Hence, F/Q_min + cQ_min = 2cQ_min, and therefore Q_min = √(F/c).
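The computation is easy to confirm with illustrative numbers for F and c (both hypothetical here):

    # Minimum average cost for TC(Q) = F + c*Q**2, located where AC = MC.
    import math

    F, c = 100.0, 4.0
    Q_min = math.sqrt(F / c)          # from F/Q + c*Q = 2*c*Q
    AC = F / Q_min + c * Q_min
    MC = 2 * c * Q_min
    print(Q_min, AC, MC)              # 5.0 40.0 40.0 -- AC equals MC at the minimum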
Figure 3.2:
Duality between the production and cost functions
The duality between the production and cost functions is illustrated on the right side of Figure 3.2. Under IRS, the average cost declines with the output level,
reflecting the fact that under IRS the cost per unit declines with a larger scale of production, say,
because of the adoption of assembly line technology. Under CRS, the cost per unit is constant,
reflecting a technology where an increase in the output level does not alter the per unit production cost.
The left side of Figure 3.2 reflects a DRS technology, where an increase in the output level raises the
per unit production cost.
Finally, recall our two-input example where Q = a(l^α + k^α)^β. We showed that this production technology exhibits IRS if αβ > 1 and DRS if αβ < 1. Deriving the cost function of this production technology would take us beyond the level of this book. However, for the sake of illustration we state that the cost function associated with this technology is given by

    TC(Q) = φ Q^{1/(αβ)},

where φ is a nonnegative function of W and R. Now, in this case AC(Q) = TC(Q)/Q = φ Q^{1/(αβ) − 1}. Then, AC(Q) is declining with Q if 1/(αβ) − 1 < 0, or αβ > 1, which is the condition under which the technology exhibits IRS. In contrast, AC(Q) is rising with Q if 1/(αβ) − 1 > 0, or αβ < 1, which is the condition under which the technology exhibits DRS.
3.2 The Demand Function
We denote by Q(p) the (aggregate) demand function for a single product, where Q denotes the quantity
demanded and p denotes the unit price. Formally, a demand function shows the maximum amount
consumers are willing and able to purchase at a given market price. For example, we take the linear demand function given by Q(p) = (a − p)/b, where a and b are strictly positive constants to be estimated by the econometrician. Alternatively, we often use the inverse demand function p(Q), which expresses the maximum price consumers are willing and able to pay for a given quantity purchased. Inverting the linear demand function yields p(Q) = a − bQ, which is drawn in Figure 3.3.

Figure 3.3: Inverse linear demand

Note that part of the demand is not drawn in the figure. That is, for p > a the quantity demanded is Q = 0, so the demand coincides with the vertical axis, and for Q > a/b it coincides with the horizontal axis.
An example of a nonlinear demand function is the constant-elasticity demand function given by Q(p) = a p^{−ε} or, equivalently, p(Q) = (a/Q)^{1/ε}, which is drawn in Figure 3.4. This class of functions has some nice features, which we discuss below.
3.2.1 The elasticity function
The elasticity function is derived from the demand function and maps the quantity purchased to a
certain very useful number which we call
Figure 3.4:
Inverse constant-elasticity demand
the elasticity at a point on the demand curve. The elasticity measures how fast the quantity demanded adjusts to a small change in price. Formally, we define the demand price elasticity at an output level Q by

    e_p(Q) ≡ [dQ(p)/dp] × [p/Q].

Definition 3.3
At a given quantity level Q, the demand is called
1. elastic if e_p(Q) < −1;
2. inelastic if −1 < e_p(Q) < 0;
3. and has a unit elasticity if e_p(Q) = −1.
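With illustrative parameter values a = 12 and b = 2 (hypothetical numbers, chosen only for the example), the point elasticity of the linear demand can be computed and classified as follows:

    # Point elasticity for linear demand Q(p) = (a - p)/b:
    # dQ/dp = -1/b, so e_p = (-1/b) * p/Q = -p/(a - p).
    def elasticity(p, a=12.0, b=2.0):
        Q = (a - p) / b
        return (-1.0 / b) * p / Q

    for p in (9.0, 6.0, 3.0):
        e = elasticity(p)
        label = "elastic" if e < -1 else ("unit elasticity" if e == -1 else "inelastic")
        print(p, e, label)
    # high prices lie on the elastic part, p = a/2 has unit elasticity,
    # and low prices lie on the inelastic part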
We define the total-revenue function as the product of the price and quantity: TR(Q) ≡ p(Q) × Q. For the linear case, TR(Q) = aQ − bQ^2, and for the constant-elasticity demand, TR(Q) = a^{1/ε} Q^{1 − 1/ε}. Note that a more
suitable name for the revenue function would be to call it the total expenditure function since we
actually refer to consumer expenditure rather than producers' revenue. That is, consumers' expenditure
need not equal producers' revenue, for example, when taxes are levied on consumption. Thus, the total
revenue function measures how much consumers spend at every given market price, and not necessarily
the revenue collected by producers.
The marginal-revenue function (again, more appropriately termed the "marginal expenditure" function) shows the amount by which total revenue increases when the consumers slightly increase the amount they buy. Formally, we define the marginal-revenue function by

    MR(Q) ≡ dTR(Q)/dQ.
For the linear demand case we can state the following:
Proposition 3.2
If the demand function is linear, then the marginal-revenue function is also linear, has the same intercept as the demand, but has twice the (negative) slope. Formally, MR(Q) = a − 2bQ.

Proof. MR(Q) = dTR(Q)/dQ = d(aQ − bQ^2)/dQ = a − 2bQ.
The marginal-revenue function for the linear case is drawn in Figure 3.3. The marginal-revenue curve
hits zero at an output level of Q = a/(2b). Note that a monopoly, studied in chapter 5, will never produce an output level larger than Q = a/(2b), where the marginal revenue is negative, since in that case revenue could be raised by decreasing the output sold to consumers.
For the constant-elasticity demand we do not draw the corresponding marginal-revenue function. However, we consider one special case, where ε = 1. In this case, p = aQ^{−1}, and TR(Q) = a, which is a constant. Hence, MR(Q) = 0.
You have probably already noticed that the demand elasticity and the marginal-revenue functions are related. That is, Figure 3.3 shows that MR(Q) = 0 where |e_p(Q)| = 1, and MR(Q) > 0 where |e_p(Q)| > 1. The complete relationship is given in the following proposition.
Proposition 3.3
Marginal revenue is related to the price and to the demand price elasticity according to

    MR(Q) = p(Q) [1 + 1/e_p(Q)].

Proof. MR(Q) = d[p(Q) Q]/dQ = p(Q) + Q dp(Q)/dQ = p(Q) [1 + (Q/p)(dp/dQ)] = p(Q) [1 + 1/e_p(Q)].
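The proposition is easy to verify numerically on the linear demand (again with the hypothetical values a = 12 and b = 2):

    # Check MR(Q) = p(Q) * (1 + 1/e_p(Q)) against MR(Q) = a - 2*b*Q.
    a, b = 12.0, 2.0
    for Q in (1.0, 3.0, 5.0):
        p = a - b * Q
        e = -p / (b * Q)                    # elasticity of the linear demand
        mr_direct = a - 2 * b * Q
        mr_via_elasticity = p * (1 + 1 / e)
        print(Q, mr_direct, mr_via_elasticity)   # the two columns coincide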
Figure 3.5:
Consumers' surplus
3.2.3 Consumers' surplus

For a given market price p, the consumers' surplus is defined as the area beneath the demand curve and above the market price. Formally, denoting by CS(p) the consumers' surplus when the market price is p, we define

    CS(p) ≡ ∫_p^∞ Q(x) dx.
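For the linear demand, the integral has the familiar closed form CS(p) = (a − p)^2/(2b), the area of the triangle between the demand curve and the price line; a quick numerical cross-check (with the same hypothetical a and b):

    # Consumer surplus for Q(p) = (a - p)/b at market price p0.
    a, b, p0 = 12.0, 2.0, 6.0

    closed_form = (a - p0) ** 2 / (2 * b)

    # Midpoint-rule integration of Q(p) from p0 up to the choke price a.
    n = 10000
    width = (a - p0) / n
    numeric = sum((a - (p0 + (i + 0.5) * width)) / b * width for i in range(n))

    print(closed_form, round(numeric, 6))   # 9.0 9.0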
Note that CS(p) must always increase when the market price is reduced, reflecting the fact that
consumers' welfare increases when the market price falls.
In industrial organization theory, and in most partial equilibrium analyses in economics, it is common
to use the consumers' surplus as a measure for the consumers' gain from trade, that is, to measure the
gains from buying the quantity demanded at a given market price compared with not buying at all.
However, the reader should bear in mind that this measure is only an approximation and holds true only
if consumers have the so-called quasi-linear utility function analyzed in the appendix (section 3.3).
3.3 Appendix: Consumer Surplus: The Quasi-Linear Utility Case
The analysis performed in this appendix is presented here only for the sake of completeness; quasi-linear utility is used only once in this book, in section 13.1, where we analyze two-part tariffs. We therefore advise the beginning student to skip this appendix.
In this appendix, we demonstrate that when consumer preferences are characterized by a class of utility
functions called quasi-linear utility function, the measure of consumer surplus defined in subsection
3.2.3 equals exactly the total utility consumers gain from buying in the market.
Consider a consumer who has preferences over two items: money (m) and the consumption level (Q) of a certain product, which he can buy at a price of p per unit. Specifically, let the consumer's utility function be given by the quasi-linear form

(3.4)    U(Q, m) = Q^η + m,   where 0 < η < 1.

Now, suppose that the consumer is endowed with a fixed income of I to be spent on the product or to be kept by the consumer. Then, if the consumer buys Q units of this product, he spends pQ on the product and retains an amount of money equal to m = I − pQ. Substituting into (3.4), our consumer wishes to choose a product-consumption level Q to maximize

    U = Q^η + I − pQ.

The first-order condition is ηQ^{η−1} − p = 0, so the demand function is given by

    Q(p) = (η/p)^{1/(1−η)}.

Thus, the demand derived from a quasi-linear utility function is a constant-elasticity demand function, illustrated earlier in Figure 3.4 and drawn again in Figure 3.6.
Figure 3.6:
Inverse demand generated from a quasi-linear utility function
The shaded area in Figure 3.6 corresponds to what we call consumer surplus in subsection 3.2.3. The
purpose of this appendix is to demonstrate the following proposition.
Proposition 3.4
If a demand function is generated from a quasi-linear utility function, then the area marked by CS(p_0) in Figure 3.6 measures exactly the utility the consumer gains from consuming Q_0 units of the product at a market price p_0.
Proof. The area CS(p_0) in Figure 3.6 is calculated by

    CS(p_0) = ∫_{p_0}^∞ (η/p)^{1/(1−η)} dp = (1 − η) η^{η/(1−η)} p_0^{−η/(1−η)} = Q_0^η − p_0 Q_0,

which is exactly the utility gain U(Q_0, I − p_0 Q_0) − U(0, I) from buying Q_0 units at the price p_0.
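A numerical illustration of Proposition 3.4, under the quasi-linear form assumed above (with hypothetical values η = 1/2 and p_0 = 1):

    # The area under the demand curve above p0 equals the utility gain
    # U(Q0, I - p0*Q0) - U(0, I) when U(Q, m) = Q**eta + m.
    eta, p0 = 0.5, 1.0

    Q0 = (eta / p0) ** (1 / (1 - eta))   # demand from the FOC eta*Q**(eta-1) = p
    utility_gain = Q0 ** eta - p0 * Q0   # equals 0.25 here

    # CS(p0): integrate Q(p) = (eta/p)**(1/(1-eta)) from p0 to a large cutoff.
    n, top = 100000, 1000.0
    width = (top - p0) / n
    cs = sum((eta / (p0 + (i + 0.5) * width)) ** (1 / (1 - eta)) * width for i in range(n))

    print(utility_gain, round(cs, 3))    # 0.25 0.25 (up to truncation error)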
3.4 Exercises
1. Consider the Cobb-Douglas production function given by Q = l^α k^β, where α, β > 0.
(a) For which values of the parameters α and β does this production technology exhibit IRS, CRS, and DRS?
(b) Using Definition 3.1, infer whether labor and capital are supporting or substitute factors of production.
2. Consider the production function given by Q = l^α + k^α, where α > 0.
(a) For which values of α does this production technology exhibit IRS, CRS, and DRS?
(b) Using Definition 3.1, infer whether labor and capital are supporting or substitute factors of production.
3. Does the production function given by
4. Consider the cost function
, where A,