Notes For A Course in Game Theory
Maxwell B. Stinchcombe
Chapter 0
Organizational Stuff
Meeting Time: We’ll meet Tuesdays and Thursdays, 8:00-9:30, in BRB 1.118. My phone
is 475-8515, e-mail [email protected]. For office hours, I’ll hold a weekly problem
session, Wednesdays 1-3 p.m. in BRB 2.136, as well as meeting by appointment in my office,
2.118. The T.A. for this course is Hugo Mialon; his office is 3.150, and his office hours are
Mondays 2-5 p.m.
Texts: Primarily these lecture notes. Much of what is here is drawn from the following
sources: Robert Gibbons, Game Theory for Applied Economists; Drew Fudenberg and Jean
Tirole, Game Theory; John McMillan, Games, Strategies, and Managers; Eric Rasmusen,
Games and Information: An Introduction to Game Theory; Herbert Gintis, Game Theory
Evolving; Brian Skyrms, Evolution of the Social Contract; Klaus Ritzberger, Foundations
of Non-Cooperative Game Theory; and articles that will be made available as the semester
progresses (Aumann on correlated equilibria as an expression of Bayesian rationality, the
Milgrom and Roberts Econometrica paper on supermodular games, and the Milgrom-Shannon
and Milgrom-Segal Econometrica papers on monotone comparative statics).
Problems: The lecture notes contain several Problem Sets. Your combined grade on
the Problem Sets will count for 60% of your total grade, a midterm will be worth 10%, and the
final exam, given Monday, December 16, 2002, from 9 a.m. to 12 p.m., will be
worth 30%. If you hand in an incorrect answer to a problem, you can try the problem again,
preferably after talking with me or the T.A. If your second attempt is wrong, you can try
one more time.
It will be tempting to look for answers to copy. This is a mistake for two related reasons.
1. Pedagogical: What you want to learn in this course is how to solve game theory models
of your own. Just as it is rather difficult to learn to ride a bicycle by watching other
people ride, it is difficult to learn to solve game theory problems if you do not practice
solving them.
2. Strategic: The final exam will consist of game models you have not previously seen.
If you have not learned how to solve game models you have never seen before on your
own, you will be unhappy at the end of the exam.
On the other hand, I encourage you to work together to solve hard problems, and/or to
come talk to me or to Hugo. The point is to sit down, on your own, after any consultation
you feel you need, and write out the answer yourself as a way of making sure that you can
reproduce the logic.
Background: It is quite possible to take this course without having had a graduate
course in microeconomics, one taught at the level of Mas-Colell, Whinston, and Green’s
(MWG) Microeconomic Theory. However, many explanations will make reference to a number
of consequences of the basic economic assumption that people choose so as to maximize
their preferences. These consequences and this perspective are what one should learn in
microeconomics. Learning these and the game theory simultaneously will be a bit harder.
In general, I will assume a good working knowledge of calculus and a familiarity with simple
probability arguments. At some points in the semester, I will use some basic real analysis
and cover a number of dynamic models. The background material will be covered as we
need it.
Chapter 1
Choice Under Uncertainty
In this Chapter, we’re going to quickly develop a version of the theory of choice under
uncertainty that will be useful for game theory. There is a major difference between the
game theory and the theory of choice under uncertainty. In game theory, the uncertainty
is explicitly about what other people will do. What makes this difficult is the presumption
that other people do the best they can for themselves, but their preferences over what they
do depend in turn on what others do. Put another way, choice under uncertainty is game
theory where we need only think about one person.1
Readings: Now might be a good time to re-read Ch. 6 in MWG on choice under uncertainty.
1.1.1 Notation
Fix a non-empty set, Ω, a collection of subsets, called events, F ⊂ 2^Ω, and a function
P : F → [0, 1]. For E ∈ F, P(E) is the probability of the event2 E. The triple
(Ω, F, P) is a probability space if F is a field, which means that ∅ ∈ F, E ∈ F iff
E^c := Ω \ E ∈ F, and E1, E2 ∈ F implies that both E1 ∩ E2 and E1 ∪ E2 belong to F, and P
is finitely additive, which means that P(Ω) = 1 and if E1 ∩ E2 = ∅ and E1, E2 ∈ F, then
P(E1 ∪ E2) = P(E1) + P(E2). For a field F, ∆(F) is the set of finitely additive probabilities
on F.
1 Like parts of macroeconomics.
2 Bold face in the middle of text will usually mean that a term is being defined.
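To see these definitions in computable form, here is a minimal sketch (not from the original
notes; the set Ω and the point probabilities are invented for illustration). For a finite Ω,
taking F = 2^Ω and defining P by summing point masses produces a probability space in the
sense just defined, and the two defining properties can be checked directly.

```python
from itertools import chain, combinations

def power_set(omega):
    """All subsets of omega -- the largest possible field, F = 2^Omega."""
    s = list(omega)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Hypothetical two-point state space and point probabilities (illustrative only).
omega = {"rain", "shine"}
point_mass = {"rain": 0.3, "shine": 0.7}

F = power_set(omega)
P = {E: sum(point_mass[w] for w in E) for E in F}   # P(E) = sum of point masses in E

# Check the defining properties of a finitely additive probability on a field:
assert abs(P[frozenset(omega)] - 1.0) < 1e-12       # P(Omega) = 1
E1, E2 = frozenset({"rain"}), frozenset({"shine"})
assert E1 & E2 == frozenset()                        # E1, E2 disjoint
assert abs(P[E1 | E2] - (P[E1] + P[E2])) < 1e-12     # P(E1 ∪ E2) = P(E1) + P(E2)
```

For infinite Ω the same definitions apply, but one typically works with a smaller field than 2^Ω
and the properties can no longer be verified by enumeration.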
Preferences representable as the expected value of a utility function are the main class that
is studied. There is some work in game theory that uses preferences not representable that
way, but we’ll not touch on it.
(One version of) the basic expected utility model of choice under uncertainty has a signal
space, S, a probability space Ω, a space of actions A, and a utility function u : A × Ω →
R. This utility function is called a Bernoulli or a von Neumann-Morgenstern utility
function. It is not defined on the set of probabilities on A × Ω. We’ll integrate u to represent
the preference ordering.
For now, notice that u does not depend on the signals s ∈ S. Problem 1.4 discusses how
to include this dependence.
The pair (s, ω) ∈ S × Ω is drawn according to a prior distribution P ∈ ∆(S × Ω), the
person choosing under uncertainty sees the s that was drawn, and infers βs = P(·|s), known
as posterior beliefs, or just beliefs, and then chooses some action in the set a∗(βs) = a∗(s)
of solutions to the maximization problem

    max_{a∈A} Σ_ω u(a, ω)P(ω|s).
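To fix ideas, here is a small numerical sketch of this decision problem (the signals, states,
actions, prior, and utilities below are invented for illustration, not taken from the notes):
it forms the posterior beliefs βs = P(·|s) by Bayes’ rule and then finds the maximizing
actions a∗(s).

```python
# Finite, made-up decision problem: signals S, states Omega, actions A,
# a joint prior P on S x Omega, and a Bernoulli utility u(a, omega).
S = ["good_signal", "bad_signal"]
Omega = ["high", "low"]
A = ["invest", "wait"]

prior = {  # hypothetical joint prior P(s, omega); entries sum to 1
    ("good_signal", "high"): 0.40, ("good_signal", "low"): 0.10,
    ("bad_signal",  "high"): 0.10, ("bad_signal",  "low"): 0.40,
}
u = {      # hypothetical von Neumann-Morgenstern utility u(a, omega)
    ("invest", "high"): 10.0, ("invest", "low"): -5.0,
    ("wait",   "high"):  1.0, ("wait",   "low"):  1.0,
}

def beliefs(s):
    """Posterior beta_s = P(.|s), computed by Bayes' rule from the finite prior."""
    p_s = sum(prior[(s, w)] for w in Omega)          # marginal probability of the signal
    return {w: prior[(s, w)] / p_s for w in Omega}

def optimal_actions(s):
    """The set a*(s) of maximizers of sum_omega u(a, omega) P(omega|s)."""
    beta = beliefs(s)
    value = {a: sum(u[(a, w)] * beta[w] for w in Omega) for a in A}
    best = max(value.values())
    return {a for a in A if value[a] == best}

for s in S:
    print(s, beliefs(s), optimal_actions(s))
```

With these numbers, the posterior after the good signal puts probability 0.8 on the high state,
so the expected utility of investing is 10(0.8) − 5(0.2) = 7 > 1 and a∗(good_signal) = {invest};
after the bad signal the comparison reverses and a∗(bad_signal) = {wait}.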
1.1.3 Examples
We’ll begin by showing how a typical problem from graduate Microeconomics fits into this
model.