BE Risk 1 15
Mark Dean
1 Lecture 1
Up until now, we have thought of the objects between which our consumers are choosing as being
physical items - chairs, tables, apples, brandy etc. We pretty much know what will happen when
we buy such things. However, we can also think of cases where the outcomes of the choices we
make are uncertain - we don’t know exactly what will happen when we buy a particular object.
Think of the following examples:
• You are deciding whether or not to put your student loan on black at the roulette table
• You are deciding whether or not to buy a house that straddles the San Andreas fault line
In each case, while you may understand exactly what it is that you are buying (or choosing
between), the outcomes, in terms of the things that you care about, are uncertain. Here we are
going to think about how to model a consumer who is making such choices.
Economists tend to differentiate between two different ways in which we may not know
for certain what will happen in the future: risk and uncertainty (sometimes called ambiguity). The
difference between the two is that, in the former case, the probabilities of different outcomes are
known, while in the latter case they are not. Sometimes the difference is illustrated by thinking
about the difference between a horse race and a roulette wheel. The idea being that, for a roulette
wheel, we may not know which number is going to come up, but we know how likely each number
is to come up. In contrast, in a horse race, we may not even know that: reasonable people may
disagree about how likely it is for different horses to win. We will begin by discussing models of
choice under risk, then move on to choice under uncertainty.
In order to be concrete, let’s think about a specific example. You are in a fairground, and come
across a (very boring) game of chance. For an amount of money $x, you can flip a coin. If it comes
down as heads, you get $10. If it comes down tails, you get nothing (let’s assume that you get to
choose the coin, so you are pretty sure that there is a 50% chance of a head and a 50% chance of
tails). The question is, for what price would you choose to play the game? In other words, you
have a choice between the following two options:
1. Don’t play the game, and get $0 for sure.
2. Play the game, and get −x for sure, plus a 50% chance of getting $10.
How would you make a decision like this? The earliest thinkers on the subject suggested the
following strategy: Figure out the expected value (or average pay-out) of playing the game, and
see if it is bigger than 0. If it is, then play the game, if not, then don’t.
So what is the expected value of the game? With a 50% chance you will get $10 − x, while with
a 50% chance you will get −x. Thus, the average payoff is going to be

0.5(10 − x) + 0.5(−x) = 5 − x

Thus the value of the game is $5 − x. In other words, following this strategy, you should play
the game if the cost of playing, x, is less than $5.
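As a quick sanity check, the expected-value rule above can be sketched in a few lines of Python (the function name is our own):

```python
def expected_value(price):
    """Average payoff of paying `price` to flip a fair coin for $10."""
    # 50% chance of winning $10 (net of the price), 50% chance of losing the price.
    return 0.5 * (10 - price) + 0.5 * (-price)

# The game is worth playing whenever the expected value is positive,
# i.e. whenever the price is below $5.
print(expected_value(3))  # 2.0
print(expected_value(5))  # 0.0
print(expected_value(7))  # -2.0
```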
Does this sound sensible? People thought so until Daniel Bernoulli (the Dutch-Swiss maths
superstar) came up with the following example:
Example 1 (The St. Petersburg Paradox) Imagine that the fairground guy offers you a different
game. Now, you first of all flip the coin. If it comes down heads, then you get $2. If it comes
down tails, you flip again. If you get heads on that go, you get $4, otherwise you flip again. If it
comes down heads then you get $8, otherwise you flip again, and so on.
What is the expected value of this game? Well, there is a 1/2 probability that you will get heads
on the first trial, and so get $2. But there is a 1/2 chance that you will get tails and flip again. There
is then a 1/2 chance that you will get heads on that go, and so get $4. For that to happen, you would
have to get tails on the first go (probability 1/2) and heads on the second go (probability 1/2). Thus,
there is a 1/4 probability that you will get $4. Using the same logic, there is a 1/8 chance you will get
$8, and so on. The expected value of the game is therefore

(1/2)$2 + (1/4)$4 + (1/8)$8 + (1/16)$16 + · · ·
= $1 + $1 + $1 + $1 + · · ·
= ∞
The expected value of the game is ∞, and so that is how much you should be willing to pay. In
other words, however much the fairground guy is prepared to charge you, you should be willing to
pay it.
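A sketch in Python of why the sum diverges: each possible prize contributes exactly $1 to the expected value, so the partial sums grow without bound (the function name is our own):

```python
def st_petersburg_partial_sum(n):
    """Expected value from the first n possible stopping points of the game."""
    # The k-th term is (probability 1/2**k) * (prize 2**k dollars) = $1.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

print(st_petersburg_partial_sum(10))   # 10.0
print(st_petersburg_partial_sum(100))  # 100.0 -- one dollar per term, forever
```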
Assuming that you are not one of the people who are prepared to pay ∞ to play this game, what
has gone wrong? Bernoulli suggested one solution: Perhaps the difference in ‘happiness’ brought
about by getting extra money decreases as the amount of money you have increases. In other words,
getting $1 extra if you only have $1 means a lot more than getting $1 extra if you have $1 million.
This is what we would (these days) call the decreasing marginal utility of wealth.
Example 2 Say a pauper finds a magic lottery ticket that has a 50% chance of paying $1 million and
a 50% chance of paying nothing. A rich person offers to buy the ticket off him for $499,999 for sure.
According to our ‘expected value’ method, the pauper should refuse the rich person’s offer!
Bernoulli argued that this is ridiculous. For the pauper, the difference in quality of life between
getting nothing and $499,999 is massive, while the difference between $499,999 and $1 million is
relatively small. Thus, by turning down the rich person’s offer, they are gaining relatively little (a
50% chance of getting $1 million rather than $499,999) and losing an awful lot (a 50% chance of
getting 0 rather than $499,999). Moreover, Bernoulli argued, if this is the case, what we should be
maximizing is expected utility, rather than expected value. In other words, if u(x) is the utility
of getting an amount x, then the pauper should choose to accept the rich guy’s offer if

(1/2)u($1,000,000) + (1/2)u($0) < u($499,999)
The idea is that the utility gap between $0 and $499,999 is larger than the gap between $499,999
and $1,000,000. For example, it could be that

u($0) = 0
u($499,999) = 10
u($1,000,000) = 16

If this were the case, then Bernoulli suggests that the pauper should accept the offer, as the expected
utility of the lottery ticket is (1/2)(16) + (1/2)(0) = 8, while the expected utility of the rich man’s offer is 10. In fact he
proposed that the utility of getting an amount x could be approximated by the function ln(x) (note
that this exhibits decreasing marginal utility). If this is right, then the most that you should pay
for the St. Petersburg game is about $60.
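Both halves of Bernoulli’s argument can be checked numerically. The sketch below uses the illustrative utility values from the pauper example, and then shows that under ln(x) utility the expected utility of the St. Petersburg prizes is finite (the variable names are ours; the dollar amount you should actually pay for the game also depends on your initial wealth, which this sketch ignores):

```python
import math

# The pauper's choice, with the illustrative utility values from the text:
# u($0) = 0, u($499,999) = 10, u($1,000,000) = 16.
u = {0: 0, 499_999: 10, 1_000_000: 16}
ticket = 0.5 * u[1_000_000] + 0.5 * u[0]  # expected utility of keeping the ticket
offer = u[499_999]                        # utility of the sure $499,999
print(ticket, offer)  # 8.0 10 -- so the pauper should sell

# Under ln(x) utility, the St. Petersburg expected utility converges:
# the k-th term is (1/2**k) * ln(2**k) = k*ln(2)/2**k, and the series sums to 2*ln(2).
eu = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 200))
print(eu)  # about 1.386, i.e. 2*ln(2)
```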