Choice Under Uncertainty-2
Uncertainty, Risk and Ambiguity
• Uncertainty is the opposite of certainty
• Consequences occur with some probability, rather than happening for sure
• Within uncertainty, there is risk and ambiguity
• Risk: known probabilities
• You win £5 if you toss a fair coin and get heads (p = 1/2)
• You win £5 if you throw a die and you get a 6 (p = 1/6)
• Ambiguity: unknown probabilities
• You win £5 if you draw a red ball out of an urn with an unknown number of red balls
• You get a bonus at work if the R&D expenditure leads to the discovery of a new drug
• You earn some money if your favourite sports team wins their league
• Attention! Some people use “uncertainty” to refer to “ambiguity”
• Consequences
• Finite set of consequences C = {x_1, x_2, …, x_S}
• S is the number of states of the world
• Probabilities
• p = {p_1, p_2, …, p_S}
• p_i ∈ [0,1] is the probability of obtaining consequence x_i
• Σ_{i=1}^{S} p_i = 1
• Your turn! ‘Write’ the situation you thought about before as a prospect
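A prospect as defined above can be sketched in code as a list of (probability, consequence) pairs whose probabilities sum to 1 (a minimal illustration, not from the slides):

```python
# A prospect as a list of (probability, consequence) pairs.
def make_prospect(pairs):
    """Validate that each p_i lies in [0, 1] and that the p_i sum to 1."""
    probs = [p for p, _ in pairs]
    assert all(0 <= p <= 1 for p in probs), "each p_i must be in [0, 1]"
    assert abs(sum(probs) - 1) < 1e-9, "probabilities must sum to 1"
    return pairs

# Risk: known probabilities, e.g. win £5 on heads of a fair coin,
# or win £5 on rolling a 6 with a fair die.
coin_bet = make_prospect([(0.5, 5), (0.5, 0)])
die_bet = make_prospect([(1/6, 5), (5/6, 0)])
```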
Choice under risk
Some definitions:
• Note that we use ≻, ≽, etc. to express preferences; these are different from >, ≥, etc., which compare numbers
• Expected Value
• Expected value of prospect p = (p1: x1; … ;pn: xn)
• EV(p) = Σ_{i=1}^{n} p_i · x_i
Choice under risk
• How do agents make choices in the presence of risk?
• Expected Value
• Expected value of prospect p = (p_1: x_1; … ; p_n: x_n)
• EV(p) = Σ_{i=1}^{n} p_i · x_i
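The expected-value rule can be sketched directly (an illustrative helper; the (probability, outcome) pair format is an assumption):

```python
def expected_value(prospect):
    """EV(p) = sum over branches of p_i * x_i."""
    return sum(p * x for p, x in prospect)

# EV of a 50/50 lottery over £5 and £15:
ev = expected_value([(0.5, 5), (0.5, 15)])  # 10.0
```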
Choice under risk
• To make money on average, you need EV(game) > £WTP
• EV(game) = Σ_i p_i · x_i = 1 + 1 + 1 + … = ∞
• According to Expected Value, we should be willing to pay an infinite amount, but people pay much less! Therefore EV does not do a good job of describing people’s decisions.
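The pattern 1 + 1 + 1 + … = ∞ matches the classic St. Petersburg game, in which a fair coin is flipped until the first head, and a first head on toss n pays £2^n with probability 1/2^n, so each branch contributes exactly £1 to the EV. A truncated sketch, assuming that game:

```python
def st_petersburg_ev(max_flips):
    """EV of the coin-doubling game truncated at max_flips tosses.

    Branch n (first head on toss n) has probability 1/2**n and pays 2**n,
    so every branch contributes exactly 1 to the expected value.
    """
    return sum((1 / 2**n) * 2**n for n in range(1, max_flips + 1))

# The truncated EV grows without bound as we allow more flips:
st_petersburg_ev(10)   # 10.0
st_petersburg_ev(100)  # 100.0
```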
Expected Utility: an example
• Two options (prospects) to choose from:
• Wear summer outfit (Os)
• Wear winter outfit (Ow)
• Two states of the world
• Hot weather (Sh) with ph
• Cold weather (Sc) with pc
• They are mutually exclusive and exhaustive, so ph + pc = 1
• Non-decreasing utility function u(.)
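With hypothetical utility numbers and weather probabilities (illustrative assumptions — the slides leave u(·) and the p’s unspecified), the expected-utility comparison between the two outfits looks like:

```python
# Hypothetical utilities of each (outfit, weather) pair -- illustrative only.
u = {
    ("summer", "hot"): 10, ("summer", "cold"): 2,
    ("winter", "hot"): 4,  ("winter", "cold"): 8,
}
p_hot, p_cold = 0.7, 0.3  # mutually exclusive and exhaustive: they sum to 1

def eu(outfit):
    """Expected utility of wearing an outfit across the two states."""
    return p_hot * u[(outfit, "hot")] + p_cold * u[(outfit, "cold")]

# Pick the outfit with the higher expected utility
# (here EU(summer) = 7.6 beats EU(winter) = 5.2):
best = max(["summer", "winter"], key=eu)
```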
Choice under Uncertainty
Attitudes to risk
Attitudes to risk
• Consider lottery l = (0.5: £5; 0.5: £15)
• Three attitudes to risk
• Risk hating / averse
• Risk loving / seeking
• Risk neutral
• They depend on how the utility of the expected value of the lottery compares to the expected utility of the lottery
• Here: u(EV) = u(£10) > EU(lottery)
• Expected value of the lottery for certain ≻ receiving (playing) the lottery
• The risk ‘isn’t worth it’ for her
• Risk averse agent has a concave utility function
• u’(x) > 0, u’’(x) < 0
• Certainty Equivalent (CE): amount which, if received with certainty, gives the same utility as the EU of the lottery
[Figure: concave utility curve u(x) over x = £5, £10, £15, marking u(£5), u(£10) = u(EV), u(£15), the EU of the lottery, and the Certainty Equivalent]
Attitudes to risk
Risk Seeking Agent: u(EV) < EU(lottery)
• Convex utility function
• u’(x) > 0, u’’(x) > 0
Risk Averse Agent: u(EV) > EU(lottery)
• Concave utility function
[Figure: side-by-side diagrams of a convex and a concave u(x), each marking u(£5), u(£15), u(EV) and the EU of the lottery]
Attitudes to risk
Your turn!
Pause the video and draw the equivalent of the
diagrams in the previous slide for a risk neutral agent.
Tip: Remember that the curvature of the utility function is linked to
the agent’s attitude to risk. We’ve covered concave and convex, so
the only one left is… a linear utility function.
Attitudes to risk
Risk Neutral Agent: u(EV) = EU(lottery)
• Linear utility function
• u’(x) > 0, u’’(x) = 0
Risk Averse Agent: u(EV) > EU(lottery)
• Concave utility function
[Figure: side-by-side diagrams of a linear and a concave u(x), each marking u(£5), u(£15), u(EV) and the EU of the lottery]
Certainty Equivalent example
• What we just saw, more formally
• Consider lottery l = (p_l: x_l; p_h: x_h)
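For a concrete sketch, assume u(x) = √x (an illustrative concave utility, not specified in the slides); the CE then solves u(CE) = EU(lottery), so CE = EU²:

```python
import math

def eu_lottery(p_l, x_l, p_h, x_h, u=math.sqrt):
    """Expected utility of a two-outcome lottery under utility u."""
    return p_l * u(x_l) + p_h * u(x_h)

# 50/50 lottery over £5 and £15 with u(x) = sqrt(x):
eu = eu_lottery(0.5, 5, 0.5, 15)
ce = eu ** 2             # invert u: CE = u^{-1}(EU), about £9.33 here
ev = 0.5 * 5 + 0.5 * 15  # 10.0
# ce < ev: the CE sits below the expected value, so this agent is risk averse.
```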
Attitudes to risk (recap so far)
How risk averse?
• Agents may be more or less risk averse
• How can we quantify how risk averse someone is?
• A natural first candidate is the second derivative u’’(x), since it measures the curvature of the utility function
• But u’’(x) on its own is no good because, e.g., u(x) = x^0.5 and u(x) = 2x^0.5 produce the same behaviour of the agent, so the measured risk aversion should also be the same.
How risk averse?
Absolute Risk Aversion
• To solve this, we can normalise the second derivative by dividing by the first derivative
• Both if u(x) = x^0.5 and if u(x) = 2x^0.5, u’’(x)/u’(x) = −0.5/x
• The degree of risk aversion is now invariant to positive linear transformations of the utility function.
• This (its negative) is known as the Arrow-Pratt measure of (absolute) risk aversion: r_A(x) = −u’’(x)/u’(x)
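A quick numerical check of this invariance, using finite-difference derivatives (a sketch, not from the slides):

```python
def arrow_pratt(u, x, h=1e-4):
    """r_A(x) = -u''(x)/u'(x) via central finite differences."""
    u1 = (u(x + h) - u(x - h)) / (2 * h)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return -u2 / u1

# u(x) = x^0.5 and its positive linear transformation 2x^0.5
# yield the same Arrow-Pratt measure, 0.5/x -- here 0.125 at x = 4:
a = arrow_pratt(lambda x: x**0.5, 4.0)
b = arrow_pratt(lambda x: 2 * x**0.5, 4.0)
```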
Attitudes to risk
Your turn!
Which of the three absolute risk aversion postures do
you think we are more likely to observe out in the real
world?
How risk averse?
Relative Risk Aversion
How risk averse? An example
• u(x) = x^(1−γ) / (1−γ) for γ > 1
• u’(x) = (1−γ)x^(1−γ−1) / (1−γ) = x^(−γ);  u’’(x) = −γx^(−γ−1)
• r_R(x) = −u’’(x)·x / u’(x) = γ
• r_R’(x) = 0 → Constant relative risk aversion
• Relative risk aversion: r_R(x) = −u’’(x)·x / u’(x)
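The same finite-difference idea verifies that this utility has constant relative risk aversion equal to γ at every wealth level (an illustrative sketch):

```python
def rel_risk_aversion(u, x, h=1e-4):
    """r_R(x) = -x * u''(x) / u'(x) via central finite differences."""
    u1 = (u(x + h) - u(x - h)) / (2 * h)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return -x * u2 / u1

gamma = 2.0
u = lambda x: x**(1 - gamma) / (1 - gamma)  # CRRA utility with gamma > 1

# r_R(x) comes out (approximately) equal to gamma at every wealth level:
checks = [rel_risk_aversion(u, x) for x in (1.0, 5.0, 20.0)]  # all close to 2.0
```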
Laboratory measures of attitudes to risk
• Economists and psychologists have developed a variety of methods
for eliciting risk preferences in the laboratory, online and in the field.
Methods we will consider
• Balloon Analogue Risk Task (BART)
• Questionnaires
• Gneezy and Potters method
• Eckel and Grossman method
• Multiple price list (MPL) method
Categorising methods
• Complexity
• Complex methods estimate parameters of a utility function using functional
form assumptions, e.g. eliciting coefficient of relative risk aversion
• Simple methods just aim to score individuals in terms of how willing they are
to take risks, without parameterising the utility function
Balloon Analogue Risk Task (BART)
Lejuez, C. W., Read, J. P., Kahler, C. W., Richards, J. B., Ramsey, S. E., Stuart, G. L., ... & Brown, R. A. (2002). Evaluation of a behavioral measure of risk taking: The Balloon Analogue Risk Task (BART).
Inflate a “balloon”. Each pump raises the reward earned, but if the balloon pops then you lose it all.
Balloon Analogue Risk Task (BART)
For
• Easy to understand: simple & realistic
• Shown to correlate with reported real world risky behaviour
• gambling, drug use, unprotected sex
Against
• Risk preferences may not transfer across domains (e.g., financial)
• Computer and multiple trials required
• Cannot be easily embedded into surveys or used in the field without access to
computers
Questionnaires
• General:
“Rate your willingness to take risks in general” on a 10-point scale, with 1 = “completely unwilling” and 10 = “completely willing”
• But this assumes a stable domain-general preference, which is not realistic
• Domain specific (DOSPERT)
• 40-item scale with 8 items each in the domains of recreational, health, social, and
ethical risks, and four items in the domains of gambling and investment.
• Each item is a 5-point scale on how likely the person is to engage in a particular
behaviour.
• E.g., “drinking heavily at a social function”; “gambling a week’s income at a casino”
• Interpret total or average score to reveal willingness to take risks within and across
domains
Questionnaires
For
• Simple
• Domain-specific (if preferred)
• Easily understood
Against
• Unincentivised – possibility of gratuitously expressed risk preferences
Gneezy and Potters method
Gneezy and Potters (1997), Quarterly Journal of Economics, 112(2), 631–645.
Gneezy and Potters method
For
• Single decision task: good for use in the field or embedding in surveys
• Relatively simple to understand
Against
• One-shot so inconsistency or misunderstanding is difficult to identify
• Cannot distinguish between risk neutral and risk seeking individuals,
nor identify degrees of risk seeking behaviour
Eckel and Grossman method
For
• Single decision task: good for use in the field or embedding in surveys
• Relatively simple to understand
Against
• One-shot so inconsistency or misunderstanding is difficult to identify
• Cannot identify degrees of risk seeking behaviour
• Payoffs held constant, but probability of success improves down the table
• Most participants expected to choose A in row 1, and B dominates A in row 10
• Switch point from A to B used as measure of risk preference
Multiple Price List (MPL) method
• We can use switch point to compare risk preferences across individuals
• Holt and Laury (2002) provided interpretation of the number of safe choices in
terms of CRRA utility function
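As a sketch of that CRRA interpretation: a CRRA agent should choose the safe option A while its expected utility exceeds B’s, and the row where they first switch to B pins down a range for γ. The payoffs below are Holt and Laury’s baseline task as I recall it (A pays $2.00 or $1.60, B pays $3.85 or $0.10, with the high-payoff probability rising by 0.1 per row), so treat them as an assumption:

```python
def crra(x, gamma):
    """CRRA utility for gamma != 1 (gamma = 0 is risk neutrality)."""
    return x**(1 - gamma) / (1 - gamma)

def switch_row(gamma):
    """First row (1-10) where a CRRA agent prefers risky option B."""
    for row in range(1, 11):
        p = row / 10  # probability of the high payoff in this row
        eu_a = p * crra(2.00, gamma) + (1 - p) * crra(1.60, gamma)
        eu_b = p * crra(3.85, gamma) + (1 - p) * crra(0.10, gamma)
        if eu_b > eu_a:
            return row
    return None

# A risk-neutral agent (gamma = 0) switches at row 5;
# more risk-averse agents (higher gamma) switch later.
neutral_row = switch_row(0.0)   # 5
averse_row = switch_row(0.5)    # later than 5
```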
Against
• Harder to understand for participants
• Multiple switching can make interpretation difficult
• Longer procedure with multiple choices per person
Which method should I use?
• This depends on
• Your research questions
• About risk per se? Or controlling for risk?
• Testing functional form assumptions? Or just checking for differences between groups?
• Domain of risk preference: financial or specific (e.g. health or recreation)?
• Your resources
• Incentivised or not?
• Computer programme or not?
• Time – can they do many choices or just one?
• Your study population
• Children; other languages; populations without standard currencies
Expected Utility Theory Axioms (1)
• If completeness, transitivity, continuity and independence hold, then
preferences can be represented by expected utility function.
• Completeness
• For all q, r we have either q ≽ r, r ≽ q, or both
• Transitivity
• For all q, r, s we have that if q ≽ r and r ≽ s, then q ≽ s.
• Continuity
• For all q, r, s: if q ≽ r and r ≽ s, there must be a probability p such
that (p : q ; 1-p : s) ∼ r
Choice under Uncertainty
Violations of Expected Utility Theory
Violations of EUT
• Violations of the Independence axiom
• Allais Paradox
• Common Ratio and Common Consequence effect
• Ellsberg Paradox
Violations of the Independence axiom
• EUT Independence Axiom
Allais Paradox
Allais (1953); Kahneman & Tversky (1979)
Ellsberg Paradox
Keynes (1921); Ellsberg (1961)
An Unusual Disease
Tversky & Kahneman (1981)
• Choice 1
• If programme 1A is adopted, 200 people will be saved
• If programme 1B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no one will be saved
• Choice 2
• If programme 2A is adopted, 400 people will die
• If programme 2B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die
Reflection Effect
Kahneman & Tversky (1979)
• People reverse their choices in the loss vs. the gain domain
• Risk aversion in the gain domain (P3, P7) is accompanied by risk seeking in the
loss domain (P3’, P7’).
• Risk seeking in gains (P4, P8) is accompanied by risk aversion in losses (P4’, P8’)
• Certainty enhances the attractiveness of gains and the aversiveness of losses (P3)
Violations of EUT
Your turn!
Think about yourself and your classmates.
Imagine half of you owned UoB mugs, and the other half did not.
If I ask mug owners about their selling price (willingness-to-accept) for
the mugs, and I asked non-mug owners about their buying price
(willingness-to-pay)… would they be…
Total WTP = Total WTA ?
Total WTP > Total WTA ?
Total WTP < Total WTA ?
Violations of EUT
Your turn!
What if I had randomly given half of you mugs and no
mugs to the other half?
Endowment Effect
Kahneman, Knetsch & Thaler (1990)
Endowment Effect
Kahneman, Knetsch & Thaler (1990)
• Endowment effect!
• Possibly driven by loss aversion for mugs: Losing a mug feels twice as bad
as how good it feels to gain a mug
Endowment Effect
Kahneman, Knetsch & Thaler (1990)
• Loss aversion for money: Giving up money you own feels disproportionately bad.
Preference Reversals
Lichtenstein & Slovic (1971, 1973); Lindman (1971)
• No matter how the question is asked, the choice should always be the same
• CHOICE: P-bet ≻ $-bet
• VALUATION: M(P-bet) < M($-bet)
Choice under Uncertainty
Prospect Theory
Prospect Theory
• The violations of EUT have motivated the
development of many alternative models
of decision making
• Out of the scope of this module, but if you’re
curious, you can read Starmer (2000), Sugden
(2004) and Fox, Erner and Walters (2015)
Original Prospect Theory
EDITING: Transform choice options in six operations
• Coding: is the option positive or negative?
• Combination: “coalescing” branches with the same consequence
• [100, 0.3; 200, 0.2; 100, 0.3; 200, 0.2] becomes [100, 0.6; 200, 0.4]
• Segregation: Riskless components are taken out and considered separately
• [200, 0.7; 250, 0.3] becomes [200,1]+[50,0.3]
• Cancellation: common components are ignored
• [200, 0.2; 100, 0.5; -50, 0.3] vs [200, 0.2; 150, 0.5; -100, 0.3] becomes
[100, 0.5; -50, 0.3] vs [150, 0.5; -100, 0.3]
• Simplification: rounding of outcomes or probabilities
• [99, 0.51] becomes [100,0.5]
• Detection of dominance: it is checked whether one option dominates the
other
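The combination (“coalescing”) step can be carried out mechanically; a minimal sketch using the same [outcome, probability] examples as the slide:

```python
from collections import defaultdict

def combine(prospect):
    """Coalesce branches of a prospect that share the same consequence."""
    merged = defaultdict(float)
    for outcome, prob in prospect:
        merged[outcome] += prob
    return sorted(merged.items())

# [100, 0.3; 200, 0.2; 100, 0.3; 200, 0.2] becomes [100, 0.6; 200, 0.4]:
combined = combine([(100, 0.3), (200, 0.2), (100, 0.3), (200, 0.2)])
```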
Original Prospect Theory
EVALUATION
• Second, we need to evaluate the prospect according to this function:
V(x, p; y, q) = π(p)·v(x) + π(q)·v(y)
Original Prospect Theory
EVALUATION
V(x, p; y, q) = v(y) + π(p)·[v(x) − v(y)]  if p + q = 1 and either x > y > 0 or x < y < 0
Original Prospect Theory
Reference dependence
• Humans are attuned to perceive changes from reference points, not absolute levels.
• We perceive a light is brighter or dimmer, but not how bright it is
• We perceive a weight is lighter or heavier, but not how heavy it is
• We perceive an outcome makes us wealthier or poorer, but not how wealthy we are
• The axes are changes, not absolute levels
[Figure: S-shaped value function, with value on the vertical axis and losses/gains on the horizontal axis]
Loss aversion
• “A salient characteristic of attitudes to changes in welfare is that losses loom larger than gains. The aggravation that one experiences in losing a sum of money appears to be greater than the pleasure associated with gaining the same amount.”
• K&T 1979, p. 279
• The value function is kinked at the origin, steeper for losses than gains
[Figure: value function over losses/gains, steeper on the loss side]
Original Prospect Theory
Diminishing sensitivity
• The impact of an extra unit gained or lost is decreasing
• This is akin to diminishing marginal utility
• Each additional gain is a little less valuable than the one before it
• Each additional loss is a little less painful than the one before it
• The value function is concave in gains, convex in losses
[Figure: value function over losses/gains, concave above zero and convex below]
Diminishing sensitivity
“Sometimes it is important to be able to hear that someone is breathing in the same room as you. Sometimes someone will shout in your ear. The sound intensity of the shout may be more than a billion (milliard, 10⁹) times larger than the loudness of the breathing. You have to be able to usefully perceive both. Your eyes have a similar problem. The difference in brightness between a sunny day and a moonless night is more than a factor of a billion. You have to be able to see things at night without being blinded in the day. If your vision perception were linear, you could not possibly do both. If you really registered a sound 30 times louder than another as sounding 30 times louder, you would lose the quiet sounds behind the loud ones, and you could not handle as wide a range of loudnesses as you will encounter in your environment.”
• Guy Moore, McGill Physicist
Original Prospect Theory
Functional form
• We tend to assume a power function for the value function.
• Specifically:
• v(x) = x^α for x ≥ 0
• v(x) = −λ(−x)^β for x < 0
• Notes:
• Curvature governed by α and β, so the curvature can be different for gains versus losses
[Figure: value function with x^α in the gain domain and −λ(−x)^β in the loss domain]
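A sketch of this value function in code. The defaults α = β = 0.88 and λ = 2.25 are Tversky and Kahneman’s 1992 estimates, used here purely as illustrative parameter values:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: power in gains, scaled power in losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

# Loss aversion: with lam > 1, a £100 loss looms larger than a £100 gain.
loss_looms_larger = abs(value(-100)) > value(100)  # True
```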
Original Prospect Theory
• Overweighting of small probabilities
• Subadditivity
• Subcertainty
• Subproportionality
• Outside of the scope of this module, but see Kahneman and Tversky (1979) if you are curious
[Figure: probability weighting function π(p) against p]
Criticisms of Original Prospect Theory
1. “Editing phase” seen as too ad hoc for a formal theory of decision making
• Not based on axioms or logic, instead simply descriptive
2. Can only handle prospects with two non-zero outcomes
• Due to the need to classify prospects as strictly positive, strictly negative or regular
3. Probability weighting function permits violations of stochastic dominance
• A more serious critique
• Direct violations of dominance are prevented by the assumption that dominated alternatives are detected and eliminated prior to the evaluation of prospects.
• However, the theory permits indirect violations of dominance
• e.g., triples of prospects such that A is preferred to B, B is preferred to C, and C dominates A.
Cumulative Prospect Theory
• Tversky and Kahneman (1992): “Advances in prospect theory”
• Solved some of the main criticisms of original prospect theory by
• Removing the editing phase
• Allows us to evaluate prospects with any number of non-zero outcomes
• Updating the probability weighting function
• Specific probability weighting function with specified functional form
• Transformation of the cumulative probabilities, rather than each p
separately
• The cumulative function guarantees that the sum of transformed decision weights is 1 (i.e. π(p) + π(1−p) = 1)
• This avoids any violations of stochastic dominance
Cumulative Prospect Theory
1. Rank the gains and losses from lowest to highest in absolute value:
• 0 ≤ g_1 < g_2 < g_3 < ⋯ < g_n and 0 ≥ l_1 > l_2 > l_3 > ⋯ > l_m
Cumulative Prospect Theory
• W⁺(·) for the gain outcomes g_i
• W⁺(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ)
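This weighting function in code (γ = 0.61 is Tversky and Kahneman’s 1992 estimate for gains, used here only as an illustrative default):

```python
def w_plus(p, gamma=0.61):
    """Inverse-S probability weighting: W+(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities are overweighted, large ones underweighted:
w_plus(0.05) > 0.05   # True
w_plus(0.95) < 0.95   # True
```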
Cumulative Prospect Theory
Each weight is the DIFFERENCE between the probability weight of events that are at least as large as the event being considered, and the probability weight of events that are strictly larger.
• For gains: at least as good minus strictly better
• For losses: at least as bad minus strictly worse
Gains
• Largest gain: π_n = W⁺(p_n)
• Second largest gain: π_{n−1} = W⁺(p_n + p_{n−1}) − W⁺(p_n)
• Third largest gain: π_{n−2} = W⁺(p_n + p_{n−1} + p_{n−2}) − W⁺(p_n + p_{n−1})
• …
Losses
• Largest loss: π_m = W⁻(p_m)
• Second largest loss: π_{m−1} = W⁻(p_m + p_{m−1}) − W⁻(p_m)
• Third largest loss: π_{m−2} = W⁻(p_m + p_{m−1} + p_{m−2}) − W⁻(p_m + p_{m−1})
• …
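These differences can be computed mechanically; a sketch for the gains side, using hypothetical probabilities and the same assumed W⁺ with γ = 0.61:

```python
def w_plus(p, gamma=0.61):
    """Assumed inverse-S weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def gain_weights(probs):
    """Decision weights for gains ordered smallest to largest.

    pi_i = W+(P(at least as good)) - W+(P(strictly better)).
    """
    weights = []
    tail = 0.0  # probability of strictly better outcomes seen so far
    for p in reversed(probs):  # start from the largest gain
        weights.append(w_plus(tail + p) - w_plus(tail))
        tail += p
    return list(reversed(weights))

# Gains 10 < 20 < 30 with probabilities 0.5, 0.3, 0.2 (hypothetical):
pi = gain_weights([0.5, 0.3, 0.2])
# For an all-gains prospect the weights telescope, so they sum to W+(1) = 1.
```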
Cumulative Prospect Theory
• Example: GAINS
[Figure: W(p) plotted against p for the gains example]
• Reference point
• What should the reference point be?
• Current income? Expected income? Certain option in choice between gambles?