Decision Theory Part 2
(These are lecture notes; no originality, other than the organization of the material, is claimed.)
P.G. Babu
Indira Gandhi Institute of Development Research
Film City Road, Goregaon (East), Mumbai 400 065
email: [email protected]
Let us now turn to a few paradoxes. They will help us understand the application of the theory,
the role of independence axiom, and the importance of thinking about notions such as risk aversion,
decreasing marginal utility and so on.
Zeckhauser's Paradox
This paradox is due to Richard Zeckhauser. Some bullets are loaded into a revolver with six
chambers. The cylinder is spun and the gun is pointed at your head. Your preferences are such
that you like more money to less (monotonically increasing Bernoulli utility function; that is, if
x > y, u(x) > u(y)) and prefer being alive to being dead. Would you be prepared to pay more to
get one bullet removed, when only one bullet is loaded or when four bullets are loaded?
It appears that almost all of us would conclude that we would like to pay more when only one
bullet is loaded, as we would be buying our life for sure in that case. Is that the right answer or
not? Let us put our learning of N-M utility theory to use here in order to resolve this puzzle.
Death is the worst possible outcome here, and let us call it x; living without paying anything is the best possible outcome we can think of, and let that be denoted x̄. These two are the worst and the best outcomes respectively. The probability distribution that puts all its probability weight, viz., 1, on death would then be denoted by x, and the probability distribution that puts all its probability weight, viz., 1, on life without any payment would be x̄. Denote the money that you are willing to pay to get one bullet removed when only one bullet is loaded by X; similarly, denote the money that you are willing to pay to get one bullet removed when four bullets are loaded by Y. The question being asked of you is whether X would be more than Y or not.
Let the other two outcomes be A, which stands for being alive after paying X, and B, which stands for being alive after paying Y. You are indifferent between one bullet and A, that is, no bullets for which you pay X. If one bullet is removed for a payment of amount X, then there is no chance of dying, and hence you will get:

0.u(x) + 1.u(A) = (1/6) u(x) + (5/6) u(x̄)

What should then be the N-M utility of A? From the way we have defined N-M utilities (recall the discussion in the previous section), it should be the unique probability weight attached to the best outcome, viz., life without any payment, x̄. In this case, then, U(A) = 5/6.
In the second situation, you are indifferent between B, that is, three bullets for which you have to pay Y, and four bullets in the revolver. This will give you

(3/6) u(x) + (3/6) u(B) = (4/6) u(x) + (2/6) u(x̄)
On further simplification, this will give us:

u(B) = (1/3) u(x) + (2/3) u(x̄).
Hence, from the way we define utilities, U(B) should then be 2/3.
Clearly, U(A) > U(B), which in turn implies that A ≻ B. That is, being alive after paying Y is inferior to being alive after paying X. Given that you like more to less, it should then be the case that X < Y. This is completely contrary to our intuition that X > Y.
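The algebra can be checked numerically. Below is a minimal sketch, assuming the standard N-M normalization u(death) = 0 and u(life without payment) = 1; these particular numbers are an illustrative assumption, and any increasing assignment gives the same ranking.

```python
# N-M normalization (an assumption for illustration): worst outcome gets
# utility 0, best outcome gets utility 1.
u_dead, u_alive = 0.0, 1.0

# One bullet loaded: paying X to remove it makes you indifferent between
# "alive after paying X" (call it A) and the lottery (1/6 death, 5/6 life).
U_A = (1/6) * u_dead + (5/6) * u_alive

# Four bullets loaded: paying Y to remove one makes you indifferent between
# (3/6 death, 3/6 B) and (4/6 death, 2/6 life); solve for u(B).
U_B = ((4/6) * u_dead + (2/6) * u_alive - (3/6) * u_dead) / (3/6)

print(U_A, U_B)   # 5/6 and 2/3
assert U_A > U_B  # A is preferred to B, hence X < Y
```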
This paradox helps us sharpen our understanding of the N-M theory and the way we apply it
to solve the problem. Let us turn now to the next paradox, which is perhaps the most famous one
of all such paradoxes.
Allais Paradox
This paradox is due to Maurice Allais, a French economist, who formulated it during the famous 1952 Paris conference on the foundations of probability. As it turned out, Leonard Savage, who had just then written "Foundations of Statistics", which lays the Bayesian foundations of probability theory, himself made the mistake. It is also the first instance of a behavioral violation of the fundamental N-M axioms, in particular the Independence Axiom.
Consider the following two lotteries: A1 where you win Rs. 1 million with probability 1 and
A2 where you can win 5 million Rs. with probability 0.1, one million Rs. with probability 0.89,
and zero with probability 0.01. Which one will you choose between A1 and A2? Write down your answer on a separate sheet.
Now consider two lotteries: A3 where you win 5 million with probability 0.1 and zero with probability 0.9, and A4 where you win 1 million with probability 0.11 and zero with probability 0.89. Which will you choose between A3 and A4? Write down your answer on a separate sheet of paper.
Now compare your two answers. If you have chosen A1 over A2 , then you ought to choose A4
over A3 in order not to violate the Independence Axiom. Alternatively, if you chose A2 over A1 ,
then you need to choose A3 over A4 . Any other choice would be incompatible with the independence
axiom. Let us see how.
It would help us to represent the lotteries differently as below.
INSERT FIGURES ABOUT HERE.
The lower sub-lottery (viz., the lower branch of the decision tree) in the above diagrams stands
for the irrelevant third lottery. Its presence or absence should not influence your decision in the
upper sub-lottery (viz., upper branch of the decision tree). If it influences, then you are violating
the independence axiom. Recall the axiom at this point: if you like p to q, then you should continue to like the compound lottery αp + (1 − α)r to αq + (1 − α)r. Hence, in our case, if you had liked the upper sub-lottery in A1, then you need to like the upper sub-lottery in A4; alternatively, if you had liked the upper sub-lottery in A2, you should like the one in A3.
You can actually trace this violation to the violation of what is known as replacement separability. Recall that the expected utility function has the following form:
U(p) = Eu(p) = p1 u(x1) + . . . + pn u(xn)

where u(xi) is the Bernoulli utility function of the sure outcome xi, i = 1, . . . , n.
Note carefully that the above expected utility function is additively separable. That is, (p1 , x1 )
pair contributes p1 u(x1 ) to the overall expected utility and this is independent of the contributions
of other (probability, outcome) pairs. Hence, if you prefer to replace (x1 , p1 ) by (y1 , p1 ) here in
this lottery, then you would like to replace (x1 , p1 ) by (y1 , p1 ) in any other lottery of the form
(x1 , p1 ; x2 , p2 ; . . . ; xm , pm ). It is this notion of additive separability that is getting violated in the
Allais Paradox. The Allais paradox once again highlights the importance of the independence
axiom for the N-M expected utility theory.
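Additive separability can also be verified directly for the Allais lotteries: for any choice of Bernoulli utilities, the expected-utility difference between A1 and A2 is identical to that between A4 and A3, so an expected-utility maximizer must rank the two pairs consistently. A small sketch (the utility numbers below are arbitrary assumptions; any values give the same result):

```python
# Expected utility of a lottery given as [(probability, outcome), ...]
def expected_utility(lottery, u):
    return sum(p * u[x] for p, x in lottery)

# Outcomes in millions of rupees; u is an arbitrary Bernoulli utility.
u = {0: 0.0, 1: 10.0, 5: 20.0}

A1 = [(1.0, 1)]
A2 = [(0.10, 5), (0.89, 1), (0.01, 0)]
A3 = [(0.10, 5), (0.90, 0)]
A4 = [(0.11, 1), (0.89, 0)]

d12 = expected_utility(A1, u) - expected_utility(A2, u)
d43 = expected_utility(A4, u) - expected_utility(A3, u)
assert abs(d12 - d43) < 1e-9  # identical for every choice of u
```

With these particular numbers both differences equal −0.9, so this u ranks A2 over A1 and A3 over A4; changing u can flip both signs together but never just one of them.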
St. Petersburg Paradox

This paradox got its name because the research paper resolving the puzzle was published in a mathematics journal printed in St. Petersburg. The other curious fact is that the puzzle was posed by Nicholas Bernoulli in the early 1700s and solved by his cousin Daniel Bernoulli in 1738. This puzzle is perhaps the first place where the notion of marginal utility shows up explicitly in the literature; note carefully that this is way ahead of Alfred Marshall's time (to whom the idea of marginal utility is normally attributed).
You are offered the following gamble. A fair coin is tossed repeatedly. If you get Heads on the first toss, you get 2^1 Rs. If you get the sequence Tails, Heads after two tosses, then you get 2^2 as prize. If you get the sequence Tails, Tails, Heads after three tosses, you get a prize of 2^3, and so on indefinitely. That is, if you get Heads for the first time on the nth toss, you get a prize of 2^n. The question now is: how much would you be willing to pay for a ticket to play this gamble?
That should perhaps depend on expected payoffs. If you calculate the expected payoff for this gamble, you will get:

E(x) = (1/2) 2^1 + (1/4) 2^2 + . . . + (1/2^n) 2^n + . . . = 1 + 1 + . . . = ∞.
No one would pay all their wealth to play this gamble. You can object to the expected payoff and say that you need to calculate the expected utility instead. Even then, if you are risk neutral (a term that we will define shortly), your Bernoulli utility function would be linear in wealth, and in that case Eu(x) = E(x), as u(x) = x in the linear case.
Daniel Bernoulli uses log2 x as the relevant utility function to resolve this puzzle. Without going into the details here, Daniel is using a concave function, and normally a concave function captures the Decision Maker's risk aversion. (Again, we will come to the definition shortly.) Also, observe that the log function is increasing but at a decreasing rate (the first derivative with respect to x is positive, and the second derivative is negative). This is nothing but our notion of diminishing marginal utility. Recall that for the logarithm to base 2, log2 2^k = k. Hence, using this function, the expected utility becomes:

Eu(x) = (1/2) log2 2^1 + (1/2^2) log2 2^2 + . . . = (1/2).1 + (1/4).2 + . . . = 2,

so the DM would pay at most the certainty equivalent 2^2 = 4 Rs. to play.
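Both infinite sums can be checked by truncation: the expected payoff adds one rupee per term and diverges, while the expected log2 utility converges to 2 (equivalently, a certainty equivalent of 2^2 = 4 Rs.). A quick numerical sketch:

```python
N = 60  # truncate the infinite sums after N tosses

# Expected payoff: each term (1/2^n) * 2^n contributes exactly 1,
# so the truncated sum equals N and grows without bound.
expected_payoff = sum((0.5 ** n) * (2 ** n) for n in range(1, N + 1))

# Expected log2 utility: sum of (1/2^n) * n converges to 2.
expected_log_utility = sum((0.5 ** n) * n for n in range(1, N + 1))

print(expected_payoff)       # 60.0: one rupee of expected value per term
print(expected_log_utility)  # approximately 2, so the CE is 2^2 = 4 Rs.
assert abs(expected_log_utility - 2) < 1e-9
```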
The bottom line of this paradox and its resolution, for us, is the use of a concave utility function. We will explore the significance of concavity for N-M utility theory later. Before that, however, let us have a quick look at possible judgment and choice biases.
Judgment and Choice Biases

Much of this section depends crucially on the work of the mathematical psychologists Daniel Kahneman and Amos Tversky, a body of research for which the former won the Nobel Memorial Prize in Economics (the latter had unfortunately died before the award). Recall the questions that you had solved in two different groups last Monday and Wednesday. For convenience, let us collect them all here in two subsections, named Group A and Group B.
4.1 Group A Questions
1. A 65-year-old relative of yours suffers from a serious disease. It makes her life miserable,
but does not pose an immediate risk to her life. She can go through an operation that, if
successful, will cure her. However, the operation is risky; 30% of the patients undergoing it
die. Would you recommend that she undergo it?
2. You are given Rs. 1,000 for sure. Which of the following two options would you prefer?
(a) To get an additional Rs. 500 for sure.
(b) To get another Rs. 1,000 with probability 50%, and otherwise nothing (and be left with
the first Rs. 1,000).
3. You go to a movie. It was supposed to be good, but it turns out to be boring. Would you
leave in the middle and do something else instead?
4. Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a
student, she was deeply concerned with issues of discrimination and social justice, and she
participated in antinuclear demonstrations. Rank order the following eight descriptions in
terms of the probability (likelihood) that they describe Linda:
(a) Linda is a teacher in an elementary school.
(b) Linda works in a bookstore and takes yoga classes.
(c) Linda is active in a feminist movement.
(d) Linda is a psychiatric social worker.
(e) Linda is a member of the League of Women Voters.
(f) Linda is a bank teller.
(g) Linda is an insurance salesperson.
(h) Linda is a bank teller who is active in a feminist movement.
5. In four pages of a novel (about 2,000 words) in English, do you expect to find more than
10 words that have the form .....n. (seven-letter words that have the letter n in the sixth
position)?
6. What is the probability that, in the next two years, there will be a cure for AIDS?
7. What is the probability that, during the next year, your car could be a total loss due to an accident?
8. Which of the following causes more deaths?
(a) Digestive diseases.
4.2 Group B Questions
12. A 65-year-old relative of yours suffers from a serious disease. It makes her life miserable,
but does not pose an immediate risk to her life. She can go through an operation that, if
successful, will cure her. However, the operation is risky; 70% of the patients undergoing it
survive. Would you recommend that she undergo it?
13. You are given Rs. 2,000 for sure. Which of the following two options would you prefer?
(a) To lose Rs. 500 for sure.
(b) To lose Rs. 1,000 with probability 50%, and otherwise to lose nothing.
14. Your friend had a ticket to a movie. She could not make it, and gave you the ticket instead
of just throwing it away. The movie was supposed to be good, but it turns out to be boring.
Would you leave in the middle and do something else instead?
15. Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a
student, she was deeply concerned with issues of discrimination and social justice, and she
participated in antinuclear demonstrations. Rank order the following eight descriptions in
terms of the probability (likelihood) that they describe Linda:
(a) Linda is a teacher in an elementary school.
(b) Linda works in a bookstore and takes yoga classes.
(c) Linda is active in a feminist movement.
(d) Linda is a psychiatric social worker.
(e) Linda is a member of the League of Women Voters.
4.3
Look at questions 1 and 12. These are similar to the questions that we had posed in the first lecture. The two questions are identical in content: if 30% of the patients die, then for sure 70% survive, and the reverse is also true. The problems differ only in the way the information is presented to you.
decide in which situations it is rational and in which it is not. Hence, for each type of this bias, you
need to draw your own rationality line, beyond which you have to change your behavior pattern.
Look at problems 3 and 14. The bottom line in both is: if a movie (which was supposed to be
good) turns out to be boring, would you leave it in the middle to do something else? However, there
is one implicit assumption which differs; that is, in question 3, the implicit assumption is that you
have bought the ticket, and in question 14, you did not buy it. This difference results in different
choices. When people spend their own money, they do not find it so easy to quit. Is it rational to
do so? Certainly not. If you draw the decision tree, once you enter the movie hall, you are in stage
two of the game, and it no longer matters what happened in the previous stage. What has been done in stage one (the ticket money, in our example) is sunk. Sunk costs cannot be retrieved; what is gone is gone. So you have to ignore them in making any further decisions.
Consider this scenario now. After quitting the movie hall in the middle, what do you do next? Would you go to watch another movie? What if that also turned out to be boring? Alternatively, after getting out, you might perhaps do nothing. If you extend this behavior to longer-horizon projects such as your education (imagine switching to some other subject after a year, and to yet another in the third year, and so on) or a joint project, then the world might view you as lacking in self-control or, worse still, as unreliable.
The best way to visualize this scenario would be through drawing decision trees.
INSERT FIGURES ABOUT HERE
If the subtrees in the two decision trees are identical, then it is a compelling argument that you should take the same decision in both problems. This follows from a principle called consequentialism: your decision in a subtree should depend only on that subtree. You can forget the rest of the decision tree, including the path that led you to your current decision node, and focus on the choices available to you at that node. If you decide to use decision trees, and you commit to the principle of consequentialism, you will be less prone to sunk-cost effects.
Let us move to problems 4 and 15. There were eight options given about Linda, and you were asked to rank them. See if you ranked the alternative "Linda is a bank teller" (option (f)) as less likely than "Linda is a bank teller who is active in a feminist movement" (option (h)). If you observe closely, option (h) is not possible without option (f) happening. Option (h) is in fact the intersection of option (f) and the statement "Linda is active in a feminist movement" (option
(c)). People are often found ranking option (h) as more likely than option (f), even though, when it is pointed out, they might agree that this ranking does not make much sense. One reason tendered is that they understood option (f) as meaning "Linda is a bank teller who is NOT active in a feminist movement", given option (h). But that is not a good excuse, as option (f) appears before (h), and hence it should be understood as "Linda is a bank teller, who may or may not be active in a feminist movement". Kahneman and Tversky call such examples the Conjunction Fallacy, because people often rank the conjunction of two events as more plausible than one of the events alone. In situations such as these, they argue, people use heuristics; in this case, what they call the representativeness heuristic. This heuristic asks: what is representative or typical of Linda? Given her description, it seems more likely that Linda is a teller who is active in a feminist movement. Even though this heuristic leads us to a wrong answer in this case, it is often useful.
Think of questions 5 and 16. Again, they are quite similar. Experiments report higher estimates for the second (seven-letter words ending with "ing") than for the first (seven-letter words having "n" in the sixth position). Logically, question 5 describes a larger set of words than question 16, as every seven-letter word that ends with "ing" is a seven-letter word that has "n" in the sixth position. In situations like this, we cannot rely on our language skills (remembering all the words) or a computer (to count the words); we need to use heuristics. It is difficult to think of words with "n" in the sixth position, but far easier to think of words ending with "ing". Kahneman and Tversky call this the Availability Heuristic: the examples that are more easily available to our memory get greater weight.
Problems 6 and 17 make a similar point. Problem 17 describes a more specific event. If you are considering both together, you should not assign a higher probability to the second than to the first.
For problems 7 and 18, compare your answer to problem 7 with that to option (e) in problem 18. Typically, the probability attached to option (e) is higher than that attached to question 7. The formal structure of the two problems would have been the same if we had kept only one of the options (a) to (d) in problem 18. The availability heuristic makes each of the scenarios (a) to (d) in question 18 more conspicuous than it was in question 7; hence the addition over all of them results in a larger estimate. This is an example where thinking more results in inferior answers.
Problems 8 and 19 form another example of the Availability Heuristic. Though we might want to change our minds given the current epidemic, in experiments people choose option (b) more often. For the same question, we might answer differently this year, here in Germany. This is another example of the availability heuristic: if newspapers and TV channels sensationalize a particular cause, for example digestive diseases, then that is what stays in our minds, and we tend to use it to answer questions like these, where we do not really have hard data ourselves.
Look at problems 9 and 20 now. Typically, one observes a statistically significant difference between the answers given to the two questions. People give higher estimates if they are first asked about the higher value. This is what is called the anchoring effect: the effect that irrelevant information has beyond what can be justified. The anchoring effect comes out of the anchoring heuristic. If you are told that someone has estimated the salary at 65,000, you use it as an anchor and revise it upwards if you think it is too low an estimate. But even after your correction, the estimate is likely to be lower than if you had started with 135,000 as your anchor.
Think of questions 10 and 21. People often say yes in the second scenario (question 21: you are going to a concert, where tickets cost Rs. 50; when you arrive, you realize that you have lost a 50 Rs. bill; would you still buy the ticket, assuming you still have enough money to do so?). Your reasoning could be: in the first case, I have already spent 50, lost the ticket, and if I buy another ticket the total expense is 100, and the concert is not worth 100. However, careful reasoning suggests the answer should be the same in both cases. Suppose you start with E + 50 Rs. In problem 10, if you bought a ticket, then you have E Rs. That is, you have exchanged the bundle (E + 50, 0) for (E, 1), where the first component is the money in your wallet and the second is the ticket to the concert. After losing the ticket, you find that instead of (E, 1) you have (E, 0). Now, do you prefer to hold this bundle, viz., (E, 0), or would you like (E − 50, 1)?
In problem 21, you started with the bundle (E + 50, 0), and before having bought the ticket you lost 50, so you now have (E, 0). Do you want to stick with it, or buy a ticket, that is, move to (E − 50, 1)? Hence the bottom-line question is the same in problem 10 as in problem 21, and your decisions have to be the same. Richard Thaler argues that people often keep mental accounts. Even though money is fungible, people often act as if a certain sum of money belongs to a certain class of expenditures.
If you are not convinced, think of another example that Thaler gives. You go to a store and see a shirt. You like it, but think it is too expensive, and hence decide not to buy it. You come home and see your spouse giving you the same shirt as your birthday gift. Now you are happy with it. If you both have a joint account, then you should not be happy. However, the agency differs here: who makes the decision? The second difference is the occasion: the gift comes out of the birthday account, which has a certain budget; hence it is mentally treated as a legitimate expenditure.
Let us turn now to our last set of questions: 11 and 22. More people choose option (a), viz., receiving 10 Rs. today, over option (b), viz., receiving 12 Rs. a week from today, in problem 11 than in problem 22. Also, several simultaneously choose (a) in problem 11 and (b) in problem 22. If you have done so, you are dynamically inconsistent. Several real-world behaviors, such as postponing the beginning of your savings or procrastinating over preparation for an exam, are good examples of dynamic inconsistency.
After this tour of various biases and heuristics, let us get back to N-M utility theory and where we had left it, viz., what concave Bernoulli utility functions stand for.
The Notions of Risk Averse, Risk Loving and Risk Neutral Decision Makers
Think of the following situation. You own a lottery ticket which allows you to participate in a
coin toss tomorrow. If the coin is going to land Heads you will get 10000 Rs. and if it lands
Tails you will get zero. This ticket is transferable. You can sell it to another person. What is the
smallest price that you would accept in exchange for your lottery ticket? If you are an expected
value maximizer, you will demand 5000, as it is the expected value of the lottery. Alternatively,
you might be willing to accept 4000, which is of course inconsistent with you being an expected
value maximizer. One reason why you may not be an expected value maximizer is that you are
worried about the riskiness of the gamble that you face. You will get 10000 with probability 0.5,
but you could also end up with zero with equal probability. However, if you accept 4000 today to
sell the lottery, you do not face any risk. Most humans do not like risk; they would be happy to pay in order to avoid it. For example, by being willing to accept 4000 as a price for the lottery, you are essentially paying to avoid the risk. When we buy insurance or keep money in government bonds or savings accounts, we are paying to reduce or eliminate risk.
Let us now define the term Certainty Equivalent value of a gamble. It is the minimum amount
that a DM would accept, if paid with certainty, rather than face a gamble. We denote it by CE.
In our above example, CE = 4000.
We call a DM risk averse if his or her certainty equivalent value for any gamble is less than the expected value of the gamble. That is, we say that the DM is risk averse if CE ≤ E(x) for all gambles and CE < E(x) for at least one gamble. Similarly, a DM is risk neutral if the certainty equivalent value for any gamble is equal to the expected value of the gamble; that is, if CE = E(x) for all gambles, the DM is risk neutral. By the same token, a risk loving DM is one for whom the certainty equivalent value of a gamble exceeds the expected value of the gamble, that is, one for whom CE ≥ E(x) for all gambles and CE > E(x) for at least one gamble.
If the amount of money involved in the gamble is small relative to the DM's initial wealth or income, then his behavior would be approximately risk neutral. If the amounts of money involved in the gamble are large relative to his income, he would tend to be risk averse.
Also, if the realized or most likely payoffs from the gamble are close to the expected value of the gamble, then we say that the risk involved is small, and the DM's behavior would be close to risk neutrality. For example, if the gamble pays you 5001 for Heads and 4999 for Tails, then the DM's behavior is close to risk neutral, as the payoffs are close to the expected value, viz., 5000. On the other hand, if the possible payoffs are far from the expected value, then the risk is large, and the DM's behavior tends to be risk averse. Think of a gamble that gives you 1 million when it is Heads and zero when it is Tails: here the payoffs are far from the expected value.
Consider a typical DM whose Bernoulli utility function is given by u(x) = x^(1/2). Suppose this DM faces a lottery which gives 100 Rs. when Heads and zero when Tails. Then, we know that the Neumann-Morgenstern expected utility function is U(.) = Eu(x) = (1/2)(100)^(1/2) + (1/2)(0)^(1/2) = 5. Now be alert and note that the utility of the expected value is not the same as the expected utility. Why? Let us first calculate the expected value. The expected value of this gamble is given by E(x) = (1/2)(100) + (1/2)(0) = 50. Now, the utility of the expected value is given by u(E(x)) = u(50) = 50^(1/2) = 7.07 approximately. Note carefully that for this particular utility function form that we have used, viz., u(x) = x^(1/2), the expected utility of the gamble is less than the utility of the expected value. That is,

Eu(x) < u(E(x)).
What sort of utility function is this? It appears to be a concave function. Now, give this DM the option of accepting Rs. 40 in hand with certainty. The utility of that option would be u(40) = (40)^(1/2) = 6.3 approximately. If you ask this person whether he would prefer to play the gamble or to accept 40 Rs. for certain, he would prefer the 40 in hand rather than the gamble (because the utility of the former exceeds the expected utility of the latter).
Now consider another DM with utility function u(x) = x, facing the same gamble as above. For this DM, the expected utility will be (1/2)(100) + (1/2)(0) = 50. The expected value from the lottery would also be 50; hence, the utility of the expected value would be u(50) = 50. Here, the expected utility is the same as the utility of the expected value. What sort of utility function is this? It is a linear utility function.
Think of a third DM whose utility function is given by u(x) = x^(4/5), facing the same gamble. For this person, the expected utility from the gamble would be:

(1/2)(100)^(4/5) + (1/2)(0)^(4/5) = 19.9

approximately. What would be the utility of the expected value? It would be u(E(x)) = u(50) = (50)^(4/5) = 22.9 approximately. Give this DM the option of accepting 40 in hand for certain. The utility of that option would be u(40) = (40)^(4/5) = 19.1 approximately. As this is lower than the expected utility, this DM will prefer to play the gamble rather than accept 40 in hand.
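The three decision makers can be compared in one short computation; the sketch below reproduces the numbers above for the gamble paying 100 Rs. with probability 0.5 and zero otherwise:

```python
def ev(gamble):
    """Expected value of [(probability, outcome), ...]."""
    return sum(p * x for p, x in gamble)

def eu(gamble, u):
    """Expected utility of [(probability, outcome), ...] under u."""
    return sum(p * u(x) for p, x in gamble)

# 100 Rs. with probability 0.5, zero otherwise; sure alternative: 40 Rs.
gamble = [(0.5, 100.0), (0.5, 0.0)]

for name, u in [("x^(1/2)", lambda x: x ** 0.5),
                ("x",       lambda x: x),
                ("x^(4/5)", lambda x: x ** 0.8)]:
    print(name, "Eu =", round(eu(gamble, u), 2),
          "u(E) =", round(u(ev(gamble)), 2),
          "u(40) =", round(u(40.0), 2))
```

For x^(1/2), Eu = 5 < u(40) ≈ 6.32, so the sure 40 is preferred; for the linear u, Eu = 50 > u(40) = 40, and for x^(4/5), Eu ≈ 19.9 > u(40) ≈ 19.1, so those two DMs prefer the gamble, exactly as in the text.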
We are now in a position to generalize from these examples. Think of a DM who is indifferent between accepting a fixed amount x0 and playing a gamble which gives x1 with probability 0.5 and x2 with probability 0.5. For this person, u(x0) = Eu(x) = 0.5 u(x1) + 0.5 u(x2). If we reduced x0 by a small amount, this DM would strictly prefer the gamble, as he likes more to less. In the same manner, if we increased x0 by a small amount, he would strictly prefer not to gamble. Putting all this together, we can conclude that x0 is the smallest amount that the DM would be willing to accept instead of facing the gamble. From the definition of CE, this x0 is nothing but the certainty equivalent of the gamble. Hence, we can conclude the following.
Theorem 1
The utility from the certainty equivalent value of the gamble, CE, equals the expected utility of
the gamble; that is, Eu(x) = u(CE).
Now, if the agent is risk averse, then his certainty equivalent value for a gamble is less than the gamble's expected value; that is, CE < E(x). Hence, as this DM likes more to less, this implies u(CE) < u(E(x)) for a risk averse DM. Given the above result, this gives us the following:
Theorem 2
If the DM is risk averse, then his expected utility from a lottery is less than the utility from the
expected value of the lottery. That is, Eu(x) < u(E(x)).
We can view this as an alternative definition of risk aversion. This is equivalent to the previous
definition.
If a DM is risk neutral, then his certainty equivalent value for a gamble equals the gamble's expected value, that is, CE = E(x). It then follows that u(CE) = u(E(x)) for a risk neutral DM. It also follows that if a DM is risk neutral, his expected utility from a gamble is equal to his utility from the expected value of the gamble; that is, Eu(x) = u(E(x)) for a risk neutral DM.
Consider now the DM with the utility function u(x) = x^(1/2). His certainty equivalent value of the gamble is given by u(CE) = Eu(x) = 5; given the utility function, it follows that CE^(1/2) = 5, which on squaring both sides gives us CE = 25. The minimum this DM would accept instead of the gamble is 25. Now you understand why he prefers 40 Rs. with certainty to playing the gamble, as 40 > 25: he is getting more than the minimum that he would accept rather than face the gamble.
Theorem 3
The certainty equivalent value of the lottery can be calculated by applying the inverse of the utility function to the expected utility; that is, if u^(-1)(.) is the inverse of the utility function, then CE = u^(-1)(Eu(x)).

Let us define the risk premium, π, as the amount that the DM is willing to pay in order to avoid the risk. It turns out to be the π such that u(E(x) − π) = Eu(x). Now, we also know that Eu(x) = u(CE). Combine these two to get the following relation between expected value, certainty equivalent and risk premium:

u(CE) = Eu(x) = u(E(x) − π)

which gives

CE = E(x) − π.

That is, the certainty equivalent is equal to the expected value minus the risk premium. Put the other way, the risk premium is the difference between the expected value and the certainty equivalent.
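For the square-root DM and the 100-or-0 gamble used earlier, the whole chain (expected value, expected utility, certainty equivalent via the inverse utility, and risk premium) can be sketched as:

```python
# u(x) = x^(1/2); gamble: 100 with probability 0.5, 0 with probability 0.5
u = lambda x: x ** 0.5
u_inv = lambda y: y ** 2          # inverse of the square-root utility

Ex = 0.5 * 100 + 0.5 * 0          # expected value = 50
Eu = 0.5 * u(100) + 0.5 * u(0)    # expected utility = 5

CE = u_inv(Eu)                    # certainty equivalent = 25
risk_premium = Ex - CE            # = 25

print(Ex, Eu, CE, risk_premium)
```

The DM would give up as much as 25 Rs. of expected value to shed the risk, which is why a sure 40 beats the gamble.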
Recall what qualifies as a concave function. For a two outcome case, a function u(x) is said to be concave if and only if αu(x1) + (1 − α)u(x2) ≤ u(αx1 + (1 − α)x2) for α ∈ [0, 1]. Now, how is this definition different from Eu(x) ≤ u(E(x))? Are they not the same? This realization makes you understand that a risk averse decision maker is one whose Bernoulli utility function is concave. In the same manner, a risk loving DM is one whose Bernoulli utility function is convex, and a risk neutral DM has a linear Bernoulli utility function. In the following diagrams, you can see geometrically the relations between certainty equivalent, risk premium and expected value when the DM is risk averse and risk loving. You can draw the diagram for the risk neutral DM as a small exercise.
INSERT FIGURES ABOUT HERE.
Given that risk aversion is captured by the concavity of the utility function, you could also think of the degree of concavity as a measure of risk aversion. For example, if the concave utility function is twice differentiable, then u′ > 0 and u″ < 0. So u″ could be used as a measure. However, note that u″ will not survive a positive linear transformation. So you can divide u″ by u′; the resultant is immune to positive linear transformations. As you intend to use it as a measure, and given that for a concave utility function u″ is negative, you can multiply by a negative sign to get what is known as the Arrow-Pratt coefficient of absolute risk aversion: −u″/u′.
Think of an example such as u(x) = log x. Then u′(x) = 1/x and u″(x) = −1/x². Now take a linear transformation of u(x), say v(x) = a.u(x) + b = a.log x + b, with a > 0. Calculate v′(x): it will be v′(x) = a/x, and v″(x) = −a/x². Obviously, v″(x) is not the same as u″(x), and the same is true for v′(x) and u′(x). That is, marginal utilities are not order preserving. However, if you calculate −u″(x)/u′(x), you will get 1/x, which is the same as −v″(x)/v′(x).
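This invariance is easy to verify numerically. A sketch for u(x) = log x and an arbitrary positive affine transform (the constants a and b below are assumptions chosen for illustration):

```python
def abs_risk_aversion(u_prime, u_double_prime, x):
    """Arrow-Pratt coefficient of absolute risk aversion: -u''(x)/u'(x)."""
    return -u_double_prime(x) / u_prime(x)

# u(x) = log x and its affine transform v(x) = a*log(x) + b, a > 0
a, b = 3.0, 7.0

u1 = lambda x: 1.0 / x           # u'(x)
u2 = lambda x: -1.0 / x ** 2     # u''(x)
v1 = lambda x: a / x             # v'(x)
v2 = lambda x: -a / x ** 2       # v''(x)

x = 2.5
r_u = abs_risk_aversion(u1, u2, x)
r_v = abs_risk_aversion(v1, v2, x)
assert abs(r_u - 1.0 / x) < 1e-12   # both equal 1/x
assert abs(r_u - r_v) < 1e-12       # invariant under the transformation
```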
Example 4. Consider u(x) = (49 + x)^(1/2). Let the lottery that you face give the outcome −40 with probability 4/7 and the outcome 51 with probability 3/7. Then u″(x) = −(1/4)(49 + x)^(−3/2).

The expected payoff from this lottery would be

E(x) = (4/7)(−40) + (3/7)(51) = −1.

The expected utility from this lottery would be

Eu(x) = (4/7)u(−40) + (3/7)u(51) = (4/7)[49 − 40]^(1/2) + (3/7)[49 + 51]^(1/2) = (4/7)(3) + (3/7)(10) = 6.
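Example 4's numbers can be verified with a short computation (probabilities 4/7 for −40 and 3/7 for 51, as in the example; the certainty equivalent and risk premium lines extend beyond what the example itself computes):

```python
# Example 4: u(x) = (49 + x)^(1/2); lottery: -40 w.p. 4/7, 51 w.p. 3/7
u = lambda x: (49.0 + x) ** 0.5

lottery = [(4/7, -40.0), (3/7, 51.0)]

Ex = sum(p * x for p, x in lottery)     # expected payoff = -1
Eu = sum(p * u(x) for p, x in lottery)  # expected utility = (4/7)*3 + (3/7)*10 = 6

# Extension: invert u to get the certainty equivalent, u(CE) = Eu, so
# CE = Eu^2 - 49, and the risk premium is E(x) - CE.
CE = Eu ** 2 - 49.0
risk_premium = Ex - CE

print(Ex, Eu, CE, risk_premium)
```

Here CE = 36 − 49 = −13 and the risk premium is −1 − (−13) = 12, consistent with the concavity of u.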