
Choice under Uncertainty:

Expected Utility Theory


From Expected Value to Expected Utility Theory

Choice under uncertainty – Why care?


• Many economic situations of interest require decision makers to
make a choice in the presence of uncertainty
• i.e., before knowing the realisation of a random variable
• For example,
• Buying a license to sell drinks at the beach is a good idea in a hot summer but
a bad idea in a cold summer
• Investing in an oil firm may pay off if they discover a new oil field, otherwise it
won’t
• A new job may improve your career chances or not depending on whether
you get to work with manager A or manager B
• Can you think of any decisions like these you’ve had to make recently?
2

Uncertainty, Risk and Ambiguity
• Uncertainty is the opposite of certainty
• Consequences occur with some probability, rather than for sure
• Within uncertainty, there is risk and ambiguity
• Risk: known probabilities
• You win £5 if you toss a fair coin and get heads (p = 1/2)
• You win £5 if you throw a die and you get a 6 (p = 1/6)
• Ambiguity: unknown probabilities
• You win £5 if you draw a red ball out of an urn with an unknown number of red balls
• You get a bonus at work if the R&D expenditure leads to the discovery of a new drug
• You earn some money if your favourite sports team wins their league
• Attention! Some people use “uncertainty” to refer to “ambiguity”

Choice under risk


Some definitions:
• Prospect: list of consequences with associated probabilities
• p = (p1 : x1 ; … ; pS : xS) or q = (50% : £1 ; 50% : −£1)

• Consequences
• Finite set of consequences C = {x1, x2, …, xS}
• S is the number of states of the world
• Probabilities
• p = {p1, p2, …, pS}
• pi ∈ [0, 1] is the probability of obtaining consequence xi
• Σ_{i=1}^{S} pi = 1

• Your turn! ‘Write’ the situation you thought about before as a prospect
4

Choice under risk
Some definitions:

• Preferences over prospects


• p ≻ q: prospect p strictly preferred to prospect q
• p ≽ q: p weakly preferred to q
• p ∼ q: indifference between p and q
• Indifference does not mean that you don’t have a preference:
• p ∼ q means that both p ≽ q and q ≽ p

• Note that we use ≻, ≽, etc. to express preferences, and those are different to >, ≥, etc.

Choice under risk

• How do agents make choices in the presence of risk?

• Expected Value
• Expected value of prospect p = (p1 : x1 ; … ; pn : xn)
• EV(p) = Σ_{i=1}^{n} pi · xi

• Example – how would a decision maker rank the following prospects?
• PA = (0.3 : £40 ; 0.5 : £30 ; 0.2 : £20)
• PB = (1 : £30)

• EV(PA) = 0.3·40 + 0.5·30 + 0.2·20 = £31
• EV(PB) = 1·30 = £30
• PA ≻ PB
7

Choice under risk


• How do agents make choices in the presence of risk?

• Expected Value
• Expected value of prospect p = (p1: x1; … ;pn: xn)
• EV(p) = Σ_{i=1}^{n} pi · xi

• BUT, St Petersburg Paradox


• A game: I’ll flip a coin repeatedly until I get heads.
• You will earn £2^n, where n is the flip number at which I get heads.
• How much are you willing to pay (£WTP) to play this game?

Choice under risk
• So that you make money, you need EV(game) > £WTP
• EV(game) = Σ_i pi · xi

• P of first getting heads in round 1 = 1/2
• P of first getting heads in round 2 = 1/2 · 1/2 = 1/4
• P of first getting heads in round 3 = 1/2 · 1/2 · 1/2 = 1/8
• P of first getting heads in round n = (1/2)^n = 1/2^n

• EV(game) = (1/2)·2 + (1/4)·4 + (1/8)·8 + … + (1/2^n)·2^n + …
           = 1 + 1 + 1 + … = ∞
• According to Expected Value, we should be willing to pay an infinite amount, but people
pay much less! Therefore EV does not do a good job in describing people’s decisions.
9
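The divergence between the infinite expected value and what people will actually pay can be illustrated with a quick simulation. This is a minimal sketch with illustrative parameters (a 30-term truncation of the sum and 100,000 simulated plays), not part of the original slides.

```python
import random

# Minimal sketch contrasting the unbounded expected value of the St Petersburg
# game with typical realised payouts (illustrative parameters).
def play_once(rng=random):
    n = 1
    while rng.random() < 0.5:      # keep flipping until the first heads
        n += 1
    return 2 ** n                  # payout is £2^n when heads first appears on flip n

# Truncated EV: every term contributes exactly £1, so the sum grows without bound
print(sum((0.5 ** n) * (2 ** n) for n in range(1, 31)))    # 30 terms -> 30.0

# Yet most plays pay out very little, which is why stated WTP is far from infinite
payouts = [play_once() for _ in range(100_000)]
print(sorted(payouts)[len(payouts) // 2])                  # median payout: £2 or £4
```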

Choice under risk

• EV assumes that people base their decisions only on how much money they could
  receive, but
  • Receiving £100 does not feel the same if you earn £1,000 per month as it does
    if you earn £10,000 per month.
  • Choices may depend on factors other than economic profit – think friendship,
    regret and many others.
• Solution: translate £ into the utilities (subjective values) of £ (Bernoulli, 1738)

• Expected Utility (EU) of prospect p = Σ_{i=1}^{n} pi · u(xi)

[Figure: a utility function u(x) plotted against money x]

10

Expected Utility: an example
• Two options (prospects) to choose from:
• Wear summer outfit (Os)
• Wear winter outfit (Ow)
• Two states of the world
• Hot weather (Sh) with ph
• Cold weather (Sc) with pc
• They are mutually exclusive and exhaustive, so ph + pc = 1
• Non-decreasing utility function u(.)

• Which outfit to wear?

11

Expected Utility: an example (ctd.)

• States of the world: Hot weather (sh, ph = 0.4) and Cold weather (sc, pc = 0.6)
• Utilities of each outfit in each state:
  u(Os | sh) = 10    u(Os | sc) = 0
  u(Ow | sh) = 2     u(Ow | sc) = 6

• EU(Os) = 0.4 · 10 + 0.6 · 0 = 4
• EU(Ow) = 0.4 · 2 + 0.6 · 6 = 4.4

• EU(Os) < EU(Ow)
• Wear the winter outfit!
12
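The calculation on this slide can be reproduced in a few lines. A minimal sketch, using the utilities and probabilities given above:

```python
# Minimal sketch of the expected-utility comparison above, using the slide's values.
p = {"hot": 0.4, "cold": 0.6}
u = {"summer": {"hot": 10, "cold": 0},
     "winter": {"hot": 2,  "cold": 6}}

def expected_utility(outfit):
    return sum(p[state] * u[outfit][state] for state in p)

for outfit in u:
    print(outfit, expected_utility(outfit))   # summer: 4.0, winter: 4.4 -> wear the winter outfit
```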

Choice under Uncertainty
Attitudes to risk

13

Attitudes to risk

• Consider lottery l = (0.5 : £5 ; 0.5 : £15)
• Three attitudes to risk
  • Risk hating / averse
  • Risk loving / seeking
  • Risk neutral
• They depend on how the utility of the expected value of the lottery compares to
  the expected utility of the lottery
• Here: u(EV) = u(£10) > EU(lottery)
  • The expected value of the lottery for certain ≻ receiving (playing) the lottery
  • The risk ‘isn’t worth it’ for the agent
• Risk averse agent ⇔ concave utility function
  • u’(x) > 0, u’’(x) < 0
• Certainty Equivalent (CE): the amount which, if received with certainty, gives the
  same utility as the EU of the lottery (i.e., the outcome that makes the agent
  indifferent between playing the lottery and receiving the CE for sure)

[Figure: concave utility curve u(x) with £5, £10 (= EV) and £15 on the x-axis;
u(£10) lies above the EU of the lottery, and the certainty equivalent lies below £10]
15

Attitudes to risk

Risk Seeking Agent
• u(EV) < EU(lottery)
• Convex utility function: u’(x) > 0, u’’(x) > 0
• CE > EV: the certainty equivalent lies above £10
Risk Averse Agent
• u(EV) > EU(lottery)
• Concave utility function
• CE < EV: the certainty equivalent lies below £10

[Figure: two panels plotting u(x) against x for the lottery (0.5 : £5 ; 0.5 : £15);
left panel convex (risk seeking), right panel concave (risk averse), with u(EV),
the EU of the lottery and the CE marked in each]
17

Attitudes to risk

Your turn!
Pause the video and draw the equivalent of the
diagrams in the previous slide for a risk neutral agent.
Tip: Remember that the curvature of the utility function is linked to
the agent’s attitude to risk. We’ve covered concave and convex, so
the only one left is… a linear utility function.

18

Attitudes to risk

Risk Neutral Agent
• u(EV) = EU(lottery)
• Linear utility function: u’(x) > 0, u’’(x) = 0
• CE = EV
Risk Averse Agent
• u(EV) > EU(lottery)
• Concave utility function
• CE < EV

[Figure: two panels plotting u(x) against x for the lottery (0.5 : £5 ; 0.5 : £15);
left panel linear (risk neutral), where u(EV) equals the EU of the lottery and
CE = £10, right panel concave (risk averse), where the CE lies below £10]
20

Certainty Equivalent example

• What we just saw, more formally. Consider lottery l = (pl : xl ; ph : xh)
• CE is the “amount which, if received with certainty, gives the same utility as the
  EU of the lottery”
  ⇒ u(xCE) = pl·u(xl) + (1 − pl)·u(xh)

• Given
  • u(x) = x^0.5
  • xl = 4; xh = 9; pl = 0.5
  compute the CE.

• u(xCE) = 0.5·(4^0.5) + 0.5·(9^0.5) = 1 + 1.5 = 2.5
• xCE^0.5 = 2.5 ⇒ xCE = 2.5^2 = 6.25

[Figure: concave utility curve with xl, EV = pl·xl + ph·xh and xh on the x-axis;
the certainty equivalent is the x-value whose utility equals pl·u(xl) + ph·u(xh)]
21

Risk Premium example

• Consider lottery l = (pl : xl ; ph : xh)
• Risk Premium (RP): difference between the expected value and the certainty
  equivalent
  • The extra amount of x the agent is giving up in order to avoid risk, or
  • The extra amount that the agent would receive from playing the lottery instead
    of receiving the CE.
• RP = EV – CE

• Following the earlier example (a short code sketch follows below):
  • RP = (0.5·4 + 0.5·9) – 6.25 = 6.5 – 6.25 = 0.25

• If the agent is risk seeking, and so her utility function is convex, then RP < 0
  • She is willing to pay money to incur the risk
• If the agent is risk neutral, then RP = 0

[Figure: the risk premium is the horizontal distance between the EV (pl·xl + ph·xh)
and the CE on the x-axis]
22
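A minimal sketch reproducing the certainty equivalent and risk premium calculations from the last two slides (u(x) = x^0.5 and the lottery (0.5 : 4 ; 0.5 : 9)):

```python
# Minimal sketch of the certainty equivalent (CE) and risk premium (RP) example.
u = lambda x: x ** 0.5        # concave utility -> risk averse
u_inv = lambda v: v ** 2      # inverse of u

p_l, x_l, x_h = 0.5, 4, 9

ev = p_l * x_l + (1 - p_l) * x_h          # 6.5
eu = p_l * u(x_l) + (1 - p_l) * u(x_h)    # 2.5
ce = u_inv(eu)                            # 6.25
rp = ev - ce                              # 0.25 (> 0, as expected for a risk-averse agent)
print(ev, eu, ce, rp)
```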

Attitudes to risk (recap so far)

Attitude to risk | Risk seeking | Risk neutral | Risk averse
Curvature of utility function | Convex | Linear | Concave
Marginal utility | Increasing: u’(x)>0, u’’(x)>0 | Constant: u’(x)>0, u’’(x)=0 | Decreasing: u’(x)>0, u’’(x)<0
Certainty equivalent | Greater than EV | Equal to EV | Less than EV
Risk premium | Negative | Zero | Positive

23

Choice under Uncertainty


How risk averse?

24

How risk averse?

• Agents may be more or less risk averse
• How can we quantify how risk averse someone is?

• Two simple ways
  • All else equal, the lower the certainty equivalent, the more risk averse
  • All else equal, the higher the risk premium, the more risk averse
• BUT these are specific to a given lottery, so they are of limited use.

[Figure: a concave utility curve u(x) with two lotteries over different outcome
ranges marked on the x-axis]

25

How risk averse?


• The more concave the utility function, the more risk averse the agent is
• So, can we simply measure the second derivative u’’(x)?

• No – the second derivative on its own is not a good measure
  • If we calculate the second derivative of the utility function, we get a value for
    the curvature / risk aversion
    • E.g., if u(x) = x^0.5  ⇒  u’(x) = 0.5x^−0.5;  u’’(x) = −0.25x^−1.5
  • If we multiply the utility function by 2 (a positive linear transformation), we get
    a different value
    • E.g., if u(x) = 2x^0.5  ⇒  u’(x) = x^−0.5;  u’’(x) = −0.5x^−1.5

• This is no good because the behaviour of the agent would be the same, so the
  measured risk aversion should also be the same.

26

How risk averse?
Absolute Risk Aversion
• To solve this, we can normalise the second derivative by dividing by the first derivative
• Both if u(x) = x^0.5 and if u(x) = 2x^0.5, we get u’’(x)/u’(x) = −0.5/x

  • u(x) = x^0.5:   u’(x) = 0.5x^−0.5;  u’’(x) = −0.25x^−1.5
    u’’(x)/u’(x) = (−0.25x^−1.5) / (0.5x^−0.5) = −0.5·x^(−1.5+0.5) = −0.5x^−1

  • u(x) = 2x^0.5:  u’(x) = x^−0.5;  u’’(x) = −0.5x^−1.5
    u’’(x)/u’(x) = (−0.5x^−1.5) / (x^−0.5) = −0.5·x^(−1.5+0.5) = −0.5x^−1

• The degree of risk aversion is now invariant to positive linear transformations of the
  utility function (a short symbolic check follows below).
• Its negative is known as the Arrow-Pratt measure of (absolute) risk aversion:
  r_A(x) = −u’’(x) / u’(x)
27
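A quick way to check the invariance claim is to compute the Arrow-Pratt measure symbolically. A minimal sketch, assuming SymPy is available:

```python
# Minimal check that r_A(x) = -u''(x)/u'(x) is invariant to positive linear
# transformations of the utility function.
import sympy as sp

x = sp.symbols('x', positive=True)

def arrow_pratt(u):
    return sp.simplify(-sp.diff(u, x, 2) / sp.diff(u, x))

u1 = sp.sqrt(x)        # u(x) = x^0.5
u2 = 2 * sp.sqrt(x)    # positive linear transformation of the same preferences

print(arrow_pratt(u1))  # 1/(2*x)
print(arrow_pratt(u2))  # 1/(2*x) -> same measured risk aversion
```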

How risk averse?


• Now that we have rA(x), we can take the first derivative of it to see how risk
aversion changes with changes in x.
• This would tell us how the amount of money invested in risky assets (i.e., risk
taking) would change as the investor’s level of wealth changes

• Decreasing absolute risk aversion


• r_A(x) is a strictly decreasing function (r_A’(x) < 0)
• As x increases, the agent becomes less risk averse and takes more risks (i.e., holds more risky assets)
• Constant absolute risk aversion
• r_A(x) does not vary with x (r_A’(x) = 0)
• As x increases, the agent is equally risk averse and takes the same risks
• Increasing absolute risk aversion
• r_A(x) is a strictly increasing function (r_A’(x) > 0)
• As x increases, the agent becomes more risk averse and takes fewer risks

28

Attitudes to risk

Your turn!
Which of the three absolute risk aversion patterns do
you think we are most likely to observe out in the real
world?

29

How risk averse?


• So far we have considered changes in ABSOLUTE amounts of wealth
• The Arrow-Pratt measure of absolute risk aversion allows us to measure whether one
individual is more or less risk averse than another for lotteries where they gain or
lose an absolute amount of money, given that they begin with the same level of
wealth.
• We have been making the implicit assumption that their wealth is 0 and they made decisions
considering only the additional money x they could receive

• But what about the PROPORTION of wealth?


• In the real world, it is quite credible that the risk one is willing to take depends on
the wealth they already have.
• E.g., as an investor gets richer, they invest more money in risky assets (consistent
  with decreasing absolute risk aversion), BUT the amount invested is a smaller
  proportion of their wealth (increasing relative risk aversion?)

30

How risk averse?
Relative Risk Aversion

• If we care about the additional money received in relation to wealth,
  the appropriate measure is the

  Arrow-Pratt measure of relative risk aversion

  ρ = r_R(x) = − x · u’’(x) / u’(x)

31

How risk averse?


• As before, we can explore rR’(x) and identify three classes of risk aversion

• Decreasing relative risk aversion


• Proportion of wealth invested in a risky asset increases when wealth increases
• Constant relative risk aversion
• Proportion of wealth invested in a risky asset does not depend on changes in wealth
• Increasing relative risk aversion
• Proportion of wealth invested in a risky asset decreases when wealth increases

• Which one is more appealing intuitively?

• Harder to say, but for the sake of simplicity constant relative risk aversion (CRRA)
  is often assumed
32

How risk averse? An example

u(x) = x^(1−γ) / (1−γ)   for γ > 1

• u’(x) = (1−γ)·x^(1−γ−1) / (1−γ) = x^−γ;   u’’(x) = −γ·x^(−γ−1)

• Risk aversion coefficients (a short symbolic check follows below):

  • r_A(x) = −u’’(x)/u’(x) = γx^(−γ−1) / x^−γ = γx^−1 = γ/x
  • r_A’(x) = −γ/x^2 < 0  ⇒  Decreasing absolute risk aversion

  • r_R(x) = −x·u’’(x)/u’(x) = γ
  • r_R’(x) = 0  ⇒  Constant relative risk aversion
33
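The same symbolic check works for this CRRA example. A minimal sketch, again assuming SymPy is available:

```python
# Minimal check of the example above: u(x) = x^(1-gamma)/(1-gamma) gives
# r_A(x) = gamma/x and r_R(x) = gamma.
import sympy as sp

x, g = sp.symbols('x gamma', positive=True)
u = x ** (1 - g) / (1 - g)

r_A = sp.simplify(-sp.diff(u, x, 2) / sp.diff(u, x))   # absolute risk aversion
r_R = sp.simplify(x * r_A)                             # relative risk aversion

print(r_A)   # gamma/x -> decreasing in x (DARA)
print(r_R)   # gamma   -> constant (CRRA)
```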

Choice under Uncertainty


Capturing risk attitudes in the lab and the field

34

Laboratory measures of attitudes to risk
• Economists and psychologists have developed a variety of methods
for eliciting risk preferences in the laboratory, online and in the field.

35

Laboratory measures of attitudes to risk


• We will draw heavily on Charness et al., (2013)

36

Methods we will consider
• Balloon Analogue Risk Task (BART)
• Questionnaires
• Gneezy and Potters method
• Eckel and Grossman method
• Multiple price list (MPL) method

37

Categorising methods
• Complexity
• Complex methods estimate parameters of a utility function using functional
form assumptions, e.g. eliciting coefficient of relative risk aversion
• Simple methods just aim to score individuals in terms of how willing they are
to take risks, without parameterising the utility function

• Revealed versus self-reported preference


• Revealed preference measures infer participants’ risk preferences from their
  (incentivised) choices.
• Self-reported methods use participants’ stated attitudes about their risk
preferences.

38

Balloon Analogue Risk Task (BART)

Lejuez, C. W., Read, J. P., Kahler, C. W., Richards, J. B., Ramsey, S. E., Stuart, G. L., ... & Brown, R. A. (2002). Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART). Journal of Experimental Psychology: Applied, 8(2), 75.

Inflate a “balloon”. Each pump raises the reward earned, but if the balloon pops
then you lose it all.

39

Balloon Analogue Risk Task (BART)

• A function governs the probability of popping: it starts low and increases with
  each pump. T is not known to participants.
• The pop probability on the k-th pump is 1/(T + 1 − (k − 1)), so the balloon is
  certain to pop by the (T + 1)-th pump.
  • First pump: p(pop) = 1/(T + 1)
  • Second pump: p(pop) = 1/(T + 1 − 1) = 1/T
  • (T + 1)-th pump: p(pop) = 1/(T + 1 − T) = 1

• Participants learn through experience then decide when to stop (a simulation
  sketch follows below).
• An earlier stopping point implies greater risk aversion.

40
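A small simulation sketch of a BART-style pop schedule, using the probability scheme above. The reward per pump, the value of T and the stopping rules are illustrative assumptions, not parameters from Lejuez et al. (2002):

```python
import random

# Minimal simulation of a BART-style pop schedule: pop probability on pump k is
# 1/(T + 1 - (k - 1)). Reward per pump, T and the stopping rules are illustrative.
def play_balloon(T, planned_pumps, reward_per_pump=0.05, rng=random):
    """Earnings for a participant who intends to pump `planned_pumps` times."""
    for k in range(1, planned_pumps + 1):
        p_pop = 1 / (T + 1 - (k - 1))
        if rng.random() < p_pop:
            return 0.0                          # balloon popped: all reward lost
    return planned_pumps * reward_per_pump      # stopped in time: reward banked

# Average earnings for different stopping rules (here T = 127)
for pumps in (16, 32, 64, 96):
    avg = sum(play_balloon(127, pumps) for _ in range(10_000)) / 10_000
    print(pumps, round(avg, 3))                 # earlier stopping = safer but smaller
```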

Balloon Analogue Risk Task (BART)
For
• Easy to understand: simple & realistic
• Shown to correlate with reported real world risky behaviour
• gambling, drug use, unprotected sex

Against
• Risk preferences may not transfer across domains (e.g., financial)
• Computer and multiple trials required
• Cannot be easily embedded into surveys or used in the field without access to
computers

41

Questionnaires

Weber, E.U., Blais, A.R., Betz, N.E., 2002. A domain-specific risk-attitude scale: measuring risk perceptions and risk behaviors. Journal of Behavioral Decision Making 15 (4), 263–290.

• General:
“Rate your willingness to take risks in general” on a 10-point scale,
with 1 = “completely unwilling” and 10 = “completely willing”
• But this assumes a stable domain-general preference, which is not realistic
• Domain specific (DOSPERT)
• 40-item scale with 8 items each in the domains of recreational, health, social, and
ethical risks, and four items in the domains of gambling and investment.
• Each item is a 5-point scale on how likely the person is to engage in a particular
behaviour.
• E.g., “drinking heavily at a social function”; “gambling a week’s income at a casino”
• Interpret total or average score to reveal willingness to take risks within and across
domains

42

Questionnaires
For
• Simple
• Domain-specific (if preferred)
• Easily understood

Against
• Unincentivised – possibility of gratuitously expressed risk preferences

43

Gneezy and Potters method

Gneezy, U., Potters, J., 1997. An experiment on risk taking and evaluation periods. Quarterly Journal of Economics 112 (2), 631–645.

• Captures incentivised risk attitudes in a financial context

• The decision maker receives $X and chooses how much of it, $x, to invest in a
  risky option and how much to keep.
• With probability p, the invested money grows to become $kx (k > 1).
• With probability (1 − p), the invested money is lost.
• The money not invested ($X − x) is kept by the investor.
• The expected payoff is then
  • p · (X − x + kx) + (1 − p) · (X − x)
• Parameters are chosen such that investing gives a higher EV than not investing
  (i.e., pk > 1)
• A risk seeking or risk neutral individual will invest all $X
• A risk averse individual will invest less than $X: 0 ≤ x < X (see the sketch below).

44
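A minimal sketch of the investment decision just described. The endowment, probability, multiplier and utility function are illustrative assumptions; the point is only that a risk-averse (concave-utility) agent invests an interior amount:

```python
# Minimal sketch of the Gneezy-Potters task: keep X - x, invest x, which becomes
# k*x with probability p and is lost otherwise. Parameters are illustrative and
# satisfy p*k > 1, so investing has the higher expected value.
def expected_utility(x, X=10.0, p=0.5, k=2.5, alpha=0.5):
    u = lambda w: w ** alpha                    # concave utility: risk averse if alpha < 1
    return p * u(X - x + k * x) + (1 - p) * u(X - x)

best_x = max((i / 10 for i in range(0, 101)), key=expected_utility)
print(best_x)   # a risk-averse agent invests an interior amount, less than the full X
```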

Gneezy and Potters method
For
• Single decision task: good for use in the field or embedding in surveys
• Relatively simple to understand

Against
• One-shot so inconsistency or misunderstanding is difficult to identify
• Cannot distinguish between risk neutral and risk seeking individuals,
nor identify degrees of risk seeking behaviour

45

Eckel and Grossman method

Eckel, C.C., Grossman, P.J., 2002. Sex differences and statistical stereotyping in attitudes toward financial risk. Evolution and Human Behavior 23 (4), 281–295.

• The decision maker chooses just one of six gambles. [Table of the six gambles not shown]
• The gambles give different combinations of EV (column 4) and range of payoffs
  (column 5), so they present trade-offs between EV and risk.
• A risk coefficient r of the utility function u(x) = x^(1−r) (CRRA assumed) can be derived:
  • 3.46 < r → extremely risk averse (risk-averse subjects choose gambles 1–4)
  • r = 0 → risk neutral (risk-neutral subjects choose gamble 5)
  • r < 0 → risk seeking (risk-seeking subjects choose gamble 6)

46

Eckel and Grossman method
For
• Single decision task: good for use in the field or embedding in surveys
• Relatively simple to understand

Against
• One-shot so inconsistency or misunderstanding is difficult to identify
• Cannot identify degrees of risk seeking behaviour

(Note: same pros and cons as Gneezy and Potters method!)

47

Multiple Price List (MPL) method

e.g. Holt, C.A., Laury, S.K., 2002. Risk aversion and incentive effects. American Economic Review 92 (5), 1644–1655.

• Incentivised choices in a list of binary decisions between lotteries, where the
  spread and EV of the options change systematically between rows of the table.
  [Table of paired lotteries not shown]
• Payoffs are held constant, but the probability of success improves down the table
• Most participants are expected to choose A in row 1, and B dominates A in row 10
• The switch point from A to B is used as the measure of risk preference
48

Multiple Price List (MPL) method
• We can use switch point to compare risk preferences across individuals
• Holt and Laury (2002) provided an interpretation of the number of safe choices in
  terms of a CRRA utility function (see the sketch below)

49
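A minimal sketch of how a switch point maps to risk aversion under a CRRA utility function. The payoff values are those usually reported for the Holt and Laury (2002) list, but the whole sketch should be read as an illustration rather than as the original procedure:

```python
import math

# Minimal MPL sketch: for a CRRA agent with u(x) = x^(1-r)/(1-r), find the first
# row at which option B (risky) is preferred to option A (safe) as the probability
# of the high payoff rises down the list.
def crra(x, r):
    return math.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def switch_row(r, rows=10, A=(2.00, 1.60), B=(3.85, 0.10)):
    for row in range(1, rows + 1):
        p = row / rows                              # P(high payoff) in this row
        eu_A = p * crra(A[0], r) + (1 - p) * crra(A[1], r)
        eu_B = p * crra(B[0], r) + (1 - p) * crra(B[1], r)
        if eu_B > eu_A:
            return row                              # first row where B is chosen
    return None

for r in (-0.5, 0.0, 0.5, 1.2):
    print(r, switch_row(r))   # more risk-averse agents switch later (more safe choices)
```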

Multiple Price List (MPL) method


For
• Generates close intervals for risk preference coefficients
• Patterns of choices can diagnose inconsistent responding
• Range of risk averse, neutral and seeking preferences can be accounted for

Against
• Harder to understand for participants
• Multiple switching can make interpretation difficult
• Longer procedure with multiple choices per person
50

Which method should I use?
• This depends on
• Your research questions
• About risk per se? Or controlling for risk?
• Testing functional form assumptions? Or just checking for differences between groups?
• Domain of risk preference: financial or specific (e.g. health or recreation)?
• Your resources
• Incentivised or not?
• Computer programme or not?
• Time – can they do many choices or just one?
• Your study population
• Children; other languages; populations without standard currencies

51

Choice under Uncertainty


Expected Utility Theory axioms

52

Expected Utility Theory Axioms (1)
• If completeness, transitivity, continuity and independence hold, then preferences
  can be represented by an expected utility function.

• Completeness
• For all q, r we have either q ≽ r, r ≽ q, or both

• Transitivity
• For all q, r, s we have that if q ≽ r and r ≽ s, then q ≽ s.

• Continuity
• For all q, r, s: if q ≽ r and r ≽ s, there must be a probability p such
that (p : q ; 1-p : s) ∼ r
53

Expected Utility Theory Axioms (2)


• Independence
• For all q, r, s and all probabilities p, we have that
if q ≽ r, then (p : q ; 1-p : s) ≽ (p : r ; 1-p : s)

54

Choice under Uncertainty
Violations of Expected Utility Theory

55

Violations of EUT
• Violations of the Independence axiom
• Allais Paradox
• Common Ratio and Common Consequence effect
• Ellsberg Paradox

• Violation of the description invariance principle


• An unusual disease (previously known as ‘Asian disease’)
• Reflection effect
• Endowment effect
• Violation of the procedure invariance principle
• Preference reversals

56

Violations of the Independence axiom
• EUT Independence Axiom

57

Allais Paradox
Allais (1953); Kahneman & Tversky (1979)

• Common Ratio Effect

• Choice 1
  • Option A: £30 for certain
  • Option B: a lottery that offers an 80% chance of £40 and a 20% chance of £0
• Choice 2
  • Option A: a lottery that offers a 25% chance of £30 and a 75% chance of £0
  • Option B: a lottery that offers a 20% chance of £40 and an 80% chance of £0

• Most people choose Option A in Choice 1 but Option B in Choice 2
• That is not compatible with EUT!

• Choosing 1A implies that 1·u(£30) > 0.80·u(£40)  ⇒  u(£30)/u(£40) > 0.80
• Choosing 2B implies that 0.25·u(£30) < 0.20·u(£40)  ⇒  u(£30)/u(£40) < 0.20/0.25 = 0.80
  (taking u(£0) = 0)
• Choices should be the same in both situations because the only difference between
  them is that the probabilities of the non-zero outcomes were multiplied by the same
  ratio (1/4).
59
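A minimal sketch of the argument above: under expected utility with u(£0) = 0, the preference in Choice 1 must carry over to Choice 2, whatever increasing utility function we try. The utility functions below are arbitrary illustrative choices:

```python
import math

# Minimal sketch: under expected utility with u(£0) = 0, the preference between
# A and B is the same in both choices, because Choice 2 just scales the non-zero
# probabilities by 1/4.
candidates = {"linear": lambda x: x,
              "sqrt":   lambda x: math.sqrt(x),
              "log1p":  lambda x: math.log(1 + x)}

for name, u in candidates.items():
    prefers_A_choice1 = 1.00 * u(30) > 0.80 * u(40)
    prefers_A_choice2 = 0.25 * u(30) > 0.20 * u(40)    # same inequality scaled by 1/4
    print(name, prefers_A_choice1, prefers_A_choice2)  # the two columns always agree
```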

Allais Paradox
Allais (1953); Kahneman & Tversky (1979)

• Common Consequence Effect

• Choice 1
  • Option A: £1M for certain
  • Option B: a lottery that offers an 89% chance of £1M, a 1% chance of £0 and a
    10% chance of £5M
• Choice 2
  • Option A: a lottery that offers an 89% chance of £0 and an 11% chance of £1M
  • Option B: a lottery that offers a 90% chance of £0 and a 10% chance of £5M

• Most people choose Option A in Choice 1 but Option B in Choice 2
• That is not compatible with EUT!
61

Allais Paradox
Allais (1953); Kahneman & Tversky (1979)

• Common Consequence Effect rewritten

• Choice 1
  • Option A′: a lottery that offers an 89% chance of £1M and an 11% chance of £1M
    (identical to £1M for certain)
  • Option B: a lottery that offers an 89% chance of £1M, a 1% chance of £0 and a
    10% chance of £5M
• Choice 2
  • Option A: a lottery that offers an 89% chance of £0 and an 11% chance of £1M
  • Option B′: a lottery that offers an 89% chance of £0, a 1% chance of £0 and a
    10% chance of £5M (identical to the original Option B)

• Common consequences should be disregarded (by the independence axiom)
• If we do that, we realise that both choices are the same, and hence according to
  EUT the decisions should be the same too
• Evidence of non-linearity in the treatment of probabilities
63

Ellsberg Paradox
Keynes (1921); Ellsberg (1961)

• An opaque urn contains


• 30 balls that are red
• 60 balls some of which are black and some of which are yellow (proportion
unknown)
• Choice 1
  • Option A: bet £100 on red
  • Option B: bet £100 on black
• Choice 2
  • Option A: bet £100 on red and yellow
  • Option B: bet £100 on black and yellow

• Most people choose 1A but 2B
  • 1A suggests the agent treats red as more likely than black, while 2B suggests
    black as more likely than red – no single set of subjective probabilities can do both
• That is not compatible with EUT!
• It is evidence of ambiguity aversion
• And of source dependence
64

An Unusual Disease
Tversky & Kahneman (1981)

• Imagine that the UK is preparing for the outbreak of an unusual disease, which is
  expected to kill 600 people. Two alternative programmes to combat the disease have
  been proposed. Assume that the exact scientific estimates of the consequences of the
  programmes are as follows.

• Choice 1
  • If programme 1A is adopted, 200 people will be saved
  • If programme 1B is adopted, there is a 1/3 probability that 600 people will be
    saved and a 2/3 probability that no one will be saved
• Choice 2
  • If programme 2A is adopted, 400 people will die
  • If programme 2B is adopted, there is a 1/3 probability that nobody will die and a
    2/3 probability that 600 people will die

• Most people choose 1A and 2B
• But 1A and 2A are identical (200 saved = 400 dead), and so are 1B and 2B
• This is a violation of the principle of description invariance

66

Reflection Effect
Kahneman & Tversky (1979)

• People reverse their choices in the loss vs. the gain domain
  [Table of prospect pairs from Kahneman & Tversky (1979) not shown]
• Risk aversion in the gain domain (P3, P7) is accompanied by risk seeking in the
  loss domain (P3’, P7’)
• Risk seeking in gains (P4, P8) is accompanied by risk aversion in losses (P4’, P8’)
• Certainty is enhanced in gains and worsened in losses (P3)
67

Violations of EUT
Your turn!
Think about you and your classmates.
Imagine half of you owned UoB mugs, and the other half did not.
If I ask mug owners about their selling price (willingness-to-accept) for
the mugs, and I asked non-mug owners about their buying price
(willingness-to-pay)… would they be…
Total WTP = Total WTA ?
Total WTP > Total WTA ?
Total WTP < Total WTA ?
68

Violations of EUT

Your turn!
What if I had randomly given half of you mugs and no
mugs to the other half?

Total WTP = Total WTA ?


Total WTP > Total WTA ?
Total WTP < Total WTA ?
69

Endowment Effect
Kahneman, Knetch & Thaler (1990)

• Experiment 5 - Allocated 59 participants into 2 groups:


• Sellers are given a mug and indicate at which price they would sell it
• Buyers are not given a mug and indicate at which price they would buy it
• They used the BDM (Becker–DeGroot–Marschak) procedure
  • Randomly draw one price (from $0 to $9.50)
  • Implement participants’ decisions at that price

• Median selling price: $5.75


• Median buying price: $2.25

• Endowment effect: Endowing someone with a good


increases their valuation
• OR… income effect? 70

Endowment Effect
Kahneman, Knetch & Thaler (1990)

• Experiment 6 - 77 participants randomly allocated into 3 groups:


• Sellers are given a mug and indicate at which price they would sell it
• Buyers are given $3.5 and indicate at which price they would buy the mug
• Choosers, at each potential price, choose between money and mug
Sellers Choosers
Median value = ~$7 Median value = ~$3

• Endowment effect!
• Possibly driven by loss aversion for mugs: losing a mug feels roughly twice as bad
  as gaining a mug feels good
71

Endowment Effect
Kahneman, Knetch & Thaler (1990)

• Experiment 6 - 77 participants randomly allocated into 3 groups:


• Sellers are given a mug and indicate at which price they would sell it
• Buyers are given $3.5 and indicate at which price they would buy the mug
• Choosers, at each potential price, choose between money and mug
Buyers Choosers
Median value = ~$2 Median value = ~$3

• Loss aversion for money: Giving up money you own feels disproportionately bad.

• In both cases, same transaction, but participants’ reference point mattered!


72

Preference Reversals
Lichtenstein & Slovic (1971, 1973); Lindman (1971)

• Choose between
  • Option A: a lottery that offers a 2% chance of £0 and a 98% chance of £4
  • Option B: a lottery that offers a 65% chance of £0 and a 35% chance of £16

• How much would you be willing to pay to play each lottery?
  • Option A: £___
  • Option B: £___

• Most people choose Option A but value Option B higher

73

Preference Reversals
Lichtenstein & Slovic (1971, 1973); Lindman (1971)

• Option A: a lottery that offers a 2% chance of £0 and a 98% chance of £4
  • Option A = the “P-bet”: a high probability of a small payoff
• Option B: a lottery that offers a 65% chance of £0 and a 35% chance of £16
  • Option B = the “$-bet”: a high payoff with a small probability

• No matter how the question is asked, the choice should always be the same
• CHOICE: P-bet ≻ $-bet
• VALUATION: M(P-bet) < M($-bet)

• These results violate the procedural invariance principle and


• imply intransitive preferences
• Possibly because of scale compatibility
74

Choice under Uncertainty
Prospect Theory

75

Prospect Theory
• The violations of EUT have motivated the
development of many alternative models
of decision making
• Outside the scope of this module, but if you’re curious, you can read
  Starmer (2000), Sugden (2004) and Fox, Erner and Walters (2015)

• The most prominent of them is Prospect Theory


• Cumulative Prospect Theory (Tversky & Kahneman, 1992)
• Original Prospect Theory (Kahneman & Tversky, 1979)
• Two phases: (1) Editing, (2) Evaluation

76

Original Prospect Theory
EDITING: Transform choice options in six operations
• Coding: is the option positive or negative?
• Combination: “coalescing” branches with the same consequence
• [100, 0.3; 200, 0.2; 100, 0.3; 200, 0.2] becomes [100, 0.6; 200, 0.4]
• Segregation: Riskless components are taken out and considered separately
• [200, 0.7; 250, 0.3] becomes [200,1]+[50,0.3]
• Cancellation: common components are ignored
• [200, 0.2; 100, 0.5; -50, 0.3] vs [200, 0.2; 150, 0.5; -100, 0.3] becomes
[100, 0.5; -50, 0.3] vs [150, 0.5; -100, 0.3]
• Simplification: rounding of outcomes or probabilities
• [99, 0.51] becomes [100,0.5]
• Detection of dominance: it is checked whether one option dominates the
other
77
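As an illustration of the ‘combination’ operation above, here is a minimal sketch that coalesces branches with the same consequence; representing prospects as lists of (outcome, probability) pairs is an assumption made for the example:

```python
from collections import defaultdict

# Minimal sketch of the 'combination' editing operation: coalesce branches with
# the same consequence by adding their probabilities.
def combine(prospect):
    merged = defaultdict(float)
    for outcome, prob in prospect:
        merged[outcome] += prob
    return sorted(merged.items())

print(combine([(100, 0.3), (200, 0.2), (100, 0.3), (200, 0.2)]))
# [(100, 0.6), (200, 0.4)] -- the example from the slide
```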

Original Prospect Theory


EVALUATION: The now-edited prospects are evaluated.
Let the outcomes be 𝑥, 𝑦, 0 and their respective probabilities be 𝑝, 𝑞, 1 − 𝑝 − 𝑞
• First, we need to specify what type of prospect we are dealing with:
• A strictly positive prospect has 𝑥 > 𝑦 > 0 and 𝑝 + 𝑞 = 1
• E.g. (400,0.25; 100,0.75)
• A strictly negative prospect has 𝑥 < 𝑦 < 0 and 𝑝 + 𝑞 = 1
• E.g. (−400,0.25; −100,0.75)
• A regular prospect is one which is neither strictly positive nor strictly negative
• E.g. (4000, 0.8) – probabilities don’t sum to 1
• E.g. (−4000, 0.8) – probabilities don’t sum to 1
• E.g. (100, 0.6; −100, 0.4) – outcomes are not both > 0 or both < 0

78

Original Prospect Theory
EVALUATION
• Second, we need to evaluate the prospect according to this function:
  V(x, p; y, q) = π(p)·v(x) + π(q)·v(y)

• Components and properties
  • π(.) is a subjective probability-weighting function
    • π(0) = 0; π(1) = 1
  • v(.) is a value function
    • v(0) = 0
• Compare to EUT:
  • π(p) = p
  • v(.) = u(.)

79

Original Prospect Theory

EVALUATION
V(x, p; y, q) = π(p)·v(x) + π(q)·v(y)

• Evaluation of regular prospects
  • V(4000, 0.8) = π(0.8)·v(4000)
  • V(100, 0.6; −100, 0.4) = π(0.6)·v(100) + π(0.4)·v(−100)
  • ‘Sure prospect’: V(3000, 1) = π(1)·v(3000) = v(3000)

80

Original Prospect Theory
EVALUATION
V(x, p; y, q) = v(y) + π(p)·[v(x) − v(y)]   if p + q = 1 and x > y > 0 or x < y < 0

• Evaluation of strictly positive prospects
  • The important step is to separate out the certain part (the gain received in either case)
  • V(400, 0.25; 100, 0.75) = v(100) + π(0.25)·[v(400) − v(100)]

• Evaluation of strictly negative prospects
  • The important step is to separate out the certain part (the loss received in either case)
  • V(−400, 0.25; −100, 0.75) = v(−100) + π(0.25)·[v(−400) − v(−100)]

• Note: the bracketed term is v(x) − v(y), not v(x − y)!

81
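A minimal sketch of these two evaluation rules. Because original prospect theory does not fix the functional forms of π(.) and v(.), they are passed in as placeholders here; the particular π and v used in the illustration are arbitrary assumptions:

```python
# Minimal sketch of OPT evaluation. pi and v are placeholders: any pi with
# pi(0)=0, pi(1)=1 and any v with v(0)=0 will do for illustration.
def V_regular(x, p, y, q, v, pi):
    return pi(p) * v(x) + pi(q) * v(y)

def V_strict(x, p, y, q, v, pi):
    # strictly positive (x > y > 0) or strictly negative (x < y < 0), p + q = 1:
    # separate out the 'sure' part v(y); pi weights only the difference v(x) - v(y)
    return v(y) + pi(p) * (v(x) - v(y))

# Illustrative placeholders: identity weighting and a crude loss-averse value function
pi = lambda p: p
v = lambda z: z ** 0.5 if z >= 0 else -2 * (-z) ** 0.5

print(V_regular(4000, 0.8, 0, 0.0, v, pi))     # pi(0.8)*v(4000)
print(V_strict(400, 0.25, 100, 0.75, v, pi))   # v(100) + 0.25*(v(400) - v(100)) = 12.5
```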

Original Prospect Theory


VALUE FUNCTION
• In EUT: “the utility of an uncertain prospect is the sum of the utilities
of the outcomes, each weighted by its probability”
• But empirically
• “carriers of value are gains and losses, not final assets”
• Reference-dependence
• “losses loom larger than gains”
• Loss aversion
• What we do retain is EUT’s diminishing marginal utility
• Diminishing sensitivity

82

Original Prospect Theory
VALUE FUNCTION – Reference dependence
• Humans are attuned to perceive changes from reference points, not absolute levels.
  • We perceive that a light is brighter or dimmer, but not how bright it is
  • We perceive that a weight is lighter or heavier, but not how heavy it is
  • We perceive that an outcome makes us wealthier or poorer, but not how wealthy we are
• The axes of the value function are changes (losses and gains), not absolute levels

[Figure: the prospect theory value function, with losses to the left and gains to the
right of the reference point]

83

Original Prospect Theory
VALUE FUNCTION – Loss aversion
• “A salient characteristic of attitudes to changes in welfare is that losses loom larger
  than gains. The aggravation that one experiences in losing a sum of money appears
  to be greater than the pleasure associated with gaining the same amount.”
  • K&T 1979, p. 279
• The value function is kinked at the origin, steeper for losses than for gains

[Figure: value function over losses and gains, steeper on the loss side]
84

Original Prospect Theory
VALUE FUNCTION – Diminishing sensitivity
• The impact of an extra unit gained or lost is decreasing
• This is akin to diminishing marginal utility
  • Each additional gain is a little less valuable than the one before it
  • Each additional loss is a little less painful than the one before it
• The value function is concave in gains, convex in losses

[Figure: value function over losses and gains, concave for gains and convex for losses]

85

Original Prospect Theory
VALUE FUNCTION – Diminishing sensitivity
“Sometimes it is important to be able to hear that someone is breathing in the same
room as you. Sometimes someone will shout in your ear. The sound intensity of the
shout may be more than a billion (milliard, 10^9) times larger than the loudness of the
breathing. You have to be able to usefully perceive both. Your eyes have a similar
problem. The difference in brightness between a sunny day and a moonless night is
more than a factor of a billion. You have to be able to see things at night without being
blinded in the day. If your vision perception were linear, you could not possibly do
both. If you really registered a sound 30 times louder than another as sounding 30
times louder, you would lose the quiet sounds behind the loud ones, and you could
not handle as wide a range of loudnesses as you will encounter in your environment.”
• Guy Moore, McGill Physicist

86

Original Prospect Theory
VALUE FUNCTION – Functional form
• We tend to assume a power function for the value function. Specifically:
  • v(x) = x^α             for x ≥ 0
  • v(x) = −λ(−x)^β        for x < 0

• Notes (a short code sketch follows below):
  • Curvature is governed by α and β, so the curvature can be different for gains
    versus losses
  • λ is the loss aversion parameter, empirically ≈ 2

[Figure: value function with v(x) = x^α for gains and v(x) = −λ(−x)^β for losses]
87
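A minimal sketch of this value function. The parameter values (α = β = 0.88, λ = 2.25) are the commonly cited Tversky and Kahneman (1992) estimates rather than anything stated on this slide, so treat them as assumptions:

```python
# Minimal sketch of the prospect-theory value function. alpha = beta = 0.88 and
# lam = 2.25 are commonly cited estimates (an assumption here).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha              # concave over gains
    return -lam * (-x) ** beta         # convex over losses, scaled up by loss aversion

for amount in (-100, -10, 10, 100):
    print(amount, round(value(amount), 2))   # losses weigh roughly twice as much as gains
```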

Original Prospect Theory


PROBABILITY WEIGHTING FUNCTION
• In EUT: “the utility of an uncertain prospect is the sum of the utilities
of the outcomes, each weighted by its probability”
• But empirically
• “the value of each outcome is multiplied by a decision weight, not by an
additive probability”
• Transforms probabilities 𝑝 into decision weights 𝜋 𝑝

88

Original Prospect Theory
• Overweighting of small probabilities
• Underweighting of large probabilities
• Discontinuity at extreme probabilities
  • “Because people are limited in their ability to comprehend and evaluate extreme
    probabilities, highly unlikely events are either ignored or overweighted, and the
    difference between high probability and certainty is either neglected or exaggerated.
    Consequently, π is not well-behaved near the end-points”
  • K&T 1979, pp. 282-283

[Figure: decision weight π(p) plotted against stated probability p, both from 0 to 1,
lying above the 45° line for small p and below it for large p]

89

Original Prospect Theory

• There is no specific formula for π in original prospect theory. Instead:
• Properties of the weighting function
  • Subadditivity
  • Subcertainty
  • Subproportionality
• Outside the scope of this module, but see Kahneman and Tversky (1979) if you
  are curious

[Figure: decision weight π(p) against stated probability p]

90

Criticisms of Original Prospect Theory
1. “Editing phase” seen as too ad-hoc for a formal theory of decision
making
• Not based on axioms or logic, instead simply descriptive
2. Can only handle prospects with two non-zero outcomes
• Due to the need to classify as strictly positive, strictly negative or
regular
3. Probability weighting function permits violations of stochastic
dominance
• A more serious critique

91

Original Prospect Theory


Stochastic dominance
• If x > y > 0, p > p′, and p + q = p′ + q′ < 1, then (x, p; y, q) dominates (x, p′; y, q′).
• If preference obeys dominance, then
  π(p)·v(x) + π(q)·v(y) > π(p′)·v(x) + π(q′)·v(y)
  ⇒ [π(p) − π(p′)] / [π(q′) − π(q)] > v(y) / v(x)

• Hence, as y approaches x, π(p) − π(p′) must approach π(q′) − π(q).

• Since p − p′ = q′ − q, π must be essentially linear, or else dominance must be violated.

• Direct violations of dominance are prevented by the assumption that dominated alternatives
are detected and eliminated prior to the evaluation of prospects.
• However, the theory permits indirect violations of dominance
• e.g., triples of prospects so that A is preferred to B, B is preferred to C, and C dominates A.

92

Cumulative Prospect Theory
• Tversky and Kahneman (1992): “Advances in prospect theory”
• Solved some of the main criticisms of original prospect theory by
• Removing the editing phase
• Allows us to evaluate prospects with any number of non-zero outcomes
• Updating the probability weighting function
• Specific probability weighting function with specified functional form
• Transformation of the cumulative probabilities, rather than each p
separately
• The cumulative function guarantees that the sum of transformed decision weights is
  1 (i.e., π(p) + π(1 − p) = 1)
• This avoids any violations of stochastic dominance

93

Cumulative Prospect Theory

• Overall evaluation of a prospect f:
  V(f) = V(f⁺) + V(f⁻)
  where f⁺ is the positive part of f and f⁻ is the negative part

• More detail on these parts:
  V(f⁺) = Σ_i π_i⁺ · v(x_i)
  V(f⁻) = Σ_i π_i⁻ · v(x_i)

• Where π_i⁺ and π_i⁻ are decision weights.

• How can we obtain these weights?
94

Cumulative Prospect Theory
1. Rank the gains and losses from lowest to highest in absolute value:
   • 0 ≤ g1 < g2 < g3 < … < gn  and  0 ≥ l1 > l2 > l3 > … > lm
   • Visually: [ranking diagram not shown]

2. Then, transform the cumulative probabilities of each outcome using the
   probability weighting function relevant for the outcome’s sign (+ or −)
   • W⁺(.) for outcomes g
   • W⁻(.) for outcomes l
96

Cumulative Prospect Theory
• W⁺(.) for outcomes g:
  W⁺(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ)

• W⁻(.) for outcomes l:
  W⁻(p) = p^δ / (p^δ + (1 − p)^δ)^(1/δ)

• These differ only in the powers
• The powers govern the curvature
• Empirically, W⁺ is more curved than W⁻

97

Cumulative Prospect Theory
1. Rank the gains and losses from lowest to highest in absolute value (as above)
2. Transform the cumulative probabilities of each outcome using the probability
   weighting function relevant for the outcome’s sign, W⁺(.) or W⁻(.)
3. Then include the cumulative part: each decision weight is a difference of weighted
   cumulative probabilities (next slide)


98

Cumulative Prospect Theory
Each weight is the DIFFERENCE between the probability weight of events that are at
least as extreme as the event being considered, and the probability weight of events
that are strictly more extreme.
• For gains: at least as good minus strictly better
• For losses: at least as bad minus strictly worse

Gains
• Largest gain: π_n = W⁺(p_n)
• Second largest gain: π_{n−1} = W⁺(p_n + p_{n−1}) − W⁺(p_n)
• Third largest gain: π_{n−2} = W⁺(p_n + p_{n−1} + p_{n−2}) − W⁺(p_n + p_{n−1})
• …
Losses
• Largest loss: π_m = W⁻(p_m)
• Second largest loss: π_{m−1} = W⁻(p_m + p_{m−1}) − W⁻(p_m)
• Third largest loss: π_{m−2} = W⁻(p_m + p_{m−1} + p_{m−2}) − W⁻(p_m + p_{m−1})
• …
99

Cumulative Prospect Theory
• Example: GAINS (four gains, ranked so that outcome 4 is the largest)
  • Largest gain: π_4 = W⁺(p_4)
  • Second largest gain: π_3 = W⁺(p_4 + p_3) − W⁺(p_4)
  • Third largest gain: π_2 = W⁺(p_4 + p_3 + p_2) − W⁺(p_4 + p_3)
  • Fourth largest gain: π_1 = W⁺(p_4 + p_3 + p_2 + p_1) − W⁺(p_4 + p_3 + p_2)

[Figure: the weighting function W(p) against p, used to read off the cumulative weights]
100

Cumulative Prospect Theory
• Example: GAINS
  L = (40, 0.2; 30, 0.3; 20, 0.15; 0, 0.35), with γ = 0.75

i | x_i | p_i | Σ_{j≥i} p_j | Σ_{j>i} p_j | W⁺(Σ_{j≥i} p_j) | W⁺(Σ_{j>i} p_j) | π_i
4 | 40  | 0.2  | 0.2  | 0    | 0.25 | 0    | 0.25
3 | 30  | 0.3  | 0.5  | 0.2  | 0.47 | 0.25 | 0.22
2 | 20  | 0.15 | 0.65 | 0.5  | 0.58 | 0.47 | 0.11
1 | 0   | 0.35 | 1    | 0.65 | 1    | 0.58 | 0.42

VALUE: V(L) = 0.42·v(0) + 0.11·v(20) + 0.22·v(30) + 0.25·v(40)
(the code sketch below reproduces these weights)

[Figure: the weighting function W(p) against p]
101
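A short sketch reproducing the decision weights in the table above for the all-gain prospect L = (40, 0.2; 30, 0.3; 20, 0.15; 0, 0.35) with γ = 0.75:

```python
# Minimal sketch of CPT decision weights for an all-gain prospect, gamma = 0.75.
def w_plus(p, gamma=0.75):
    """Tversky-Kahneman probability weighting function for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

outcomes = [(40, 0.20), (30, 0.30), (20, 0.15), (0, 0.35)]   # ranked best to worst

weights, cum, prev_w = [], 0.0, 0.0
for x, p in outcomes:
    cum = min(cum + p, 1.0)                    # P(outcome at least this good)
    w = w_plus(cum)
    weights.append((x, round(w - prev_w, 2)))  # decision weight = difference in W+
    prev_w = w

print(weights)   # [(40, 0.25), (30, 0.22), (20, 0.11), (0, 0.42)]
```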

Criticisms of Cumulative Prospect Theory


• Lack of normative basis
• Unlike EUT, PT and CPT are not grounded in axioms that have a normative
status.
• There have been some attempts to retrofit axioms, but it is less clear whether these
  are normatively appealing
• Ask yourself: what is the goal?

• Reference point
• What should the reference point be?
• Current income? Expected income? Certain option in choice between gambles?

102
