Chapter 15 - Decision Making

1) Decision analysis provides a framework for rational decision making under uncertainty by considering all possible actions and outcomes. 2) Examples include decisions around new product launches, crop selection, and oil drilling prospects. 3) Key aspects of decision analysis include identifying alternatives, possible states of nature, and the payoff for each combination in a payoff table. 4) Methods like maximax, maximin, and expected value can then be used to determine the optimal decision according to different criteria.


Quantitative Methods in Decision Making

Chapter 15
Decision Analysis

What is decision analysis?

Decision analysis provides a framework and methodology for rational decision making under uncertainty, so as to select the most satisfactory action from all possible actions.
Some examples

1. A manufacturer is introducing a new product into the marketplace. He has to decide:
How much should be produced?
Should the product be test marketed in a small region before deciding upon full distribution?
How much advertising is needed to launch the product successfully?

Uncertainty:
What will be the reaction of potential customers?
[Break-even chart: Revenue = $900x; Cost = $50,000 + $400x (including a $50,000 fixed cost); the product earns a profit above the break-even point and a loss below it. Break-even point = 100 units.]
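As a quick check of the chart's break-even point (a worked step, not shown on the slide), setting revenue equal to cost gives:

```latex
900x = 50{,}000 + 400x \quad\Rightarrow\quad 500x = 50{,}000 \quad\Rightarrow\quad x = 100 \text{ units.}
```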


Some examples

2. An agricultural firm owns a tract of land. It must decide how to select the best mix of crops and livestock for the upcoming season.

Uncertainty:
What will be the weather conditions of the season?
Where are prices headed in the upcoming season?
Some examples

3. An oil company has to decide whether to drill for oil in a particular location.

Uncertainty:
How likely is oil there? How much?
Should geologists investigate the site further before drilling?

Frequently, one question to be addressed with decision analysis is whether to make the needed decision immediately or to first do some testing (at some expense) to reduce the level of uncertainty about the outcome of the decision.
15.1 A Prototype Example

The GOFERBROKE COMPANY owns a tract of land that may contain oil. A consulting geologist has reported to management that she believes there is 1 chance in 4 of oil. Because of this prospect, another oil company has offered to purchase the land for $90,000.

However, Goferbroke is considering holding the land in order to drill for oil itself. The cost of drilling is $100,000. If oil is found, the resulting expected revenue will be $800,000, so the company's expected profit (after deducting the cost of drilling) will be $700,000. A loss of $100,000 (the drilling cost) will be incurred if the land is dry (no oil).
15.1 A Prototype Example

Now the management must face the following questions:

How to approach the decision of whether to drill or sell based on these data?
How to approach the decision of whether to conduct a detailed seismic survey?
How to refine the evaluation of the consequences of the various possible outcomes?
15.2 Decision Making without Experimentation

Terminology about Decision Making


In general terms, the decision maker must choose an action
from a set of possible actions. The set contains all the feasible
alternatives under consideration for how to proceed with the
problem of concern.
This choice of an action must be made in the face of
uncertainty, because the outcome will be affected by random
factors that are outside the control of the decision maker.
These random factors determine what situation will be found at
the time that the action is executed.
Each of these possible situations is referred to as a possible
state of nature.
For each combination of an action and a state of
nature, the decision maker knows what the resulting
payoff would be.
The payoff is a quantitative measure of the value to
the decision maker of the consequences of the
outcome.
A payoff table commonly is used to provide the
payoff for each combination of an action and a state
of nature.
Payoff Table of the prototype example

                    Status of Land (payoff)
Alternative         Oil            Dry
Drill for Oil       $700,000       -$100,000
Sell the Land       $90,000        $90,000
Chance of Status    1 in 4         3 in 4
Summary

Decision making framework:
1. The decision maker needs to choose one of the alternative actions.
2. Nature then would choose one of the possible states of nature (prior probability).
3. Each combination of an action and state of nature would result in a payoff, which is given as one of the entries in a payoff table.
4. This payoff table should be used to find an optimal action for the decision maker according to an appropriate criterion.
The decision maker generally will have some
information that should be taken into account
about the relative likelihood of the possible
states of nature.
The probabilities for the respective states of
nature provided by the prior distribution are
called prior probabilities.
Classification of Decision Analysis

Category                            Characteristics
Deterministic Decision Analysis     (1) Goal (2) Actions (3) Certain nature (4) Payoff
Indeterminable Decision Analysis    (1) Goal (2) Actions (3) Uncertain nature (4) No knowledge about the probability of nature (5) Payoff
Risk Decision Analysis              (1) Goal (2) Actions (3) Nature (4) Payoff (5) Probability
Methods for Indeterminable Decision Analysis

Pessimistic method: the maximin payoff criterion
Optimistic method: the maximax payoff criterion
The equivalent probability criterion
The Savage rule criterion
The maximum likelihood criterion
The prototype example

                    State of Nature
Alternative         Oil        Dry
1. Drill for Oil    700        -100
2. Sell the Land    90         90
Chance of Status    ?          ?

(Payoffs in thousands of dollars.)
Maximax payoff criterion
For each possible action, find the maximum
payoff over all possible states of nature. Next,
find the maximum of these maximum payoffs.
Choose the action whose maximum payoff gives
this maximum.
                    State of Nature        Maximax
Alternative         Oil        Dry         payoff
1. Drill for Oil    700        -100        700
2. Sell the Land    90         90          90

Maximum of the maximum payoffs: 700
Action: Drill for Oil


Maximin payoff criterion

For each possible action, find the minimum payoff over all
possible states of nature. Next, find the maximum of these
minimum payoffs. Choose the action whose minimum
payoff gives this maximum. (Pessimistic method)

                    State of Nature        Maximin
Alternative         Oil        Dry         payoff
1. Drill for Oil    700        -100        -100
2. Sell the Land    90         90          90

Maximum of the minimum payoffs: 90
Action: Sell the Land
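To make the mechanics concrete, here is a minimal Python sketch (not part of the slides) that applies both the maximax and maximin criteria to the payoff table above (payoffs in $1,000s):

```python
# Payoff table for the Goferbroke example (payoffs in $1,000s).
payoffs = {
    "Drill for Oil": {"Oil": 700, "Dry": -100},
    "Sell the Land": {"Oil": 90, "Dry": 90},
}

# Maximax: choose the action with the best best-case payoff (optimistic).
maximax_action = max(payoffs, key=lambda a: max(payoffs[a].values()))
# Maximin: choose the action with the best worst-case payoff (pessimistic).
maximin_action = max(payoffs, key=lambda a: min(payoffs[a].values()))

print(maximax_action)  # Drill for Oil (maximum payoff 700)
print(maximin_action)  # Sell the Land (minimum payoff 90)
```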
Equivalent Probability Criterion

Assign equal probability to each possible state of nature and choose the action with the largest resulting expected payoff.

                    State of Nature        Expected payoff
Alternative         Oil        Dry         (equal probabilities)
1. Drill for Oil    700        -100        300
2. Sell the Land    90         90          90
Probability         0.5        0.5

Maximum expected payoff: 300
Action: Drill for Oil


Savage Rule Criterion

For each possible state of nature, find the maximum payoff over all possible actions. Next, for each action, compute its regret under each state: the difference between that maximum payoff and the action's own payoff. Find the maximum regret of each action over all states of nature, then select the action whose maximum regret is the minimum.

                    State of Nature (regret)            Maximum
Alternative         Oil            Dry                  regret
1. Drill for Oil    700 (0)        -100 (190)           190
2. Sell the Land    90 (610)       90 (0)               610

Minimum of the maximum regrets: 190
Action: Drill for Oil
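The regret calculation can be sketched the same way (a minimal Python illustration, not part of the slides):

```python
# Payoff table (payoffs in $1,000s).
payoffs = {
    "Drill for Oil": {"Oil": 700, "Dry": -100},
    "Sell the Land": {"Oil": 90, "Dry": 90},
}
states = ["Oil", "Dry"]

# Best payoff achievable under each state of nature.
best_by_state = {s: max(payoffs[a][s] for a in payoffs) for s in states}
# Regret = best payoff for that state minus the action's own payoff.
regret = {a: {s: best_by_state[s] - payoffs[a][s] for s in states} for a in payoffs}
# Choose the action whose maximum regret is smallest.
chosen = min(regret, key=lambda a: max(regret[a].values()))

print(regret)  # {'Drill for Oil': {'Oil': 0, 'Dry': 190}, 'Sell the Land': {'Oil': 610, 'Dry': 0}}
print(chosen)  # Drill for Oil
```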
Maximum likelihood criterion

Identify the most likely state of nature (the one


with the largest prior probability). For this state of
nature, find the action with the maximum payoff.
Choose this action.
                    State of Nature        Payoff under the most
Alternative         Oil        Dry         likely state (Dry)
1. Drill for Oil    700        -100        -100
2. Sell the Land    90         90          90
Prior probability   0.25       0.75

Maximum payoff under the most likely state: 90
Action: Sell the Land
Probability Decision Analysis

Bayes Rule: decision making with risk (with probabilities)
Bayes' decision rule
Using the best available estimates of the probabilities of the
respective states of nature (currently the prior probabilities),
calculate the expected value of the payoff for each of the
possible actions. Choose the action with the maximum
expected payoff.
For the prototype example, these expected payoffs are
calculated as follows:
E [Payoff (drill)] = 0.25*(700) + 0.75*(-100) = 100.
E [Payoff (sell)] = 0.25*(90) + 0.75*(90)= 90.
100>90

Action : Drill for Oil
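The same expected-payoff calculation in a minimal Python sketch (not part of the slides):

```python
# Bayes' decision rule: expected payoff under the prior probabilities.
prior = {"Oil": 0.25, "Dry": 0.75}
payoffs = {
    "Drill for Oil": {"Oil": 700, "Dry": -100},
    "Sell the Land": {"Oil": 90, "Dry": 90},
}

expected = {a: sum(prior[s] * table[s] for s in prior) for a, table in payoffs.items()}
best = max(expected, key=expected.get)

print(expected)  # {'Drill for Oil': 100.0, 'Sell the Land': 90.0}
print(best)      # Drill for Oil
```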


Advantage of Bayes' decision rule

It incorporates all the available information,


including all the payoffs and the best available
estimates of the probabilities of the respective states
of nature.
Under many circumstances, past experience and
current evidence enable one to develop reasonable
estimates of the probabilities.
Furthermore, experimentation frequently can be
conducted to improve these estimates, as described
in the next section. Therefore, we will be using only
Bayes' decision rule throughout the remainder of the
chapter.
Exercise

P782
15.2-2
15.2-3
15.2-2. Jean Clark is the manager of the Midtown
Saveway Grocery Store. She now needs to replenish her
supply of strawberries. Her regular supplier can provide
as many cases as she wants. However, because these
strawberries already are very ripe, she will need to sell
them tomorrow and then discard any that remain unsold.
Jean estimates that she will be able to sell 10, 11, 12, or
13 cases tomorrow. She can purchase the strawberries for
$3 per case and sell them for $8 per case. Jean now
needs to decide how many cases to purchase. Jean has
checked the store's records on daily sales of strawberries.
On this basis, she estimates that the prior probabilities
are 0.2, 0.4, 0.3, and 0.1 for being able to sell 10, 11, 12,
and 13 cases of strawberries tomorrow.
(a) Develop a decision analysis formulation of this
problem by identifying the alternative actions, the states
of nature, and the payoff table.
(b) How many cases of strawberries should Jean
purchase if she uses the maximin payoff criterion?
(c) How many cases should be purchased according to the
maximum likelihood criterion?
(d) How many cases should be purchased according to
Bayes decision rule?
15.2-3.* Warren Buffy is an enormously wealthy investor who
has built his fortune through his legendary investing acumen. He
currently has been offered three major investments and he
would like to choose one. The first one is a conservative
investment that would perform very well in an improving
economy and only suffer a small loss in a worsening economy.
The second is a speculative investment that would perform
extremely well in an improving economy but would do very
badly in a worsening economy. The third is a countercyclical
investment that would lose some money in an improving
economy but would perform well in a worsening economy.
Warren believes that there are three possible scenarios over the
lives of these potential investments: (1) an improving economy,
(2) a stable economy, and (3) a worsening economy. He is
pessimistic about where the economy is headed, and so has
assigned prior probabilities of 0.1, 0.5, and 0.4, respectively, to
these three scenarios. He also estimates that his profits under
these respective scenarios are those given by the following
table:
Which investment should Warren make under each of the
following criteria?
(a) Maximin payoff criterion.
(b) Maximum likelihood criterion.
(c) Bayes decision rule.
Reconsider Prob. 15.2-3. Warren Buffy decides that Bayes
decision rule is his most reliable decision criterion. He believes
that 0.1 is just about right as the prior probability of an
improving economy, but is quite uncertain about how to split the
remaining probabilities between a stable economy and a
worsening economy.
Therefore, he now wishes to do sensitivity analysis with respect
to these latter two prior probabilities.
(a) Reapply Bayes decision rule when the prior probability of a
stable economy is 0.3 and the prior probability of a worsening
economy is 0.6.
(b) Reapply Bayes decision rule when the prior probability of a
stable economy is 0.7 and the prior probability of a worsening
economy is 0.2.
(c) Graph the expected profit for each of the three investment
alternatives versus the prior probability of a stable economy
(with the prior probability of an improving economy fixed at
0.1). Use this graph to identify the crossover points where the
decision shifts from one investment to another.
Sensitivity Analysis with Bayes Decision Rule
Sensitivity analysis commonly is used with various applications of operations research to study the effect when some of the numbers included in the mathematical model are not correct.

                    State of Nature        Expected
Alternative         Oil        Dry         payoff
1. Drill for Oil    700        -100        100
2. Sell the Land    90         90          90
Prior probability   0.25       0.75

Sensitivity analysis
Goferbroke's management feels that the true
chances of having oil on the tract of land are likely
to lie somewhere between 15 and 35 percent. In
other words, the true prior probability of having oil
is likely to be in the range from 0.15 to 0.35, so the
corresponding prior probability of the land being
dry would range from 0.85 to 0.65.
                    State of Nature        Expected
Alternative         Oil        Dry         payoff
1. Drill for Oil    700        -100        20
2. Sell the Land    90         90          90
Prior probability   0.15       0.85

                    State of Nature        Expected
Alternative         Oil        Dry         payoff
1. Drill for Oil    700        -100        180
2. Sell the Land    90         90          90
Prior probability   0.35       0.65
Thus, the decision is very sensitive to the prior
probability of oil. This sensitivity analysis has
revealed that it is important to do more, if possible,
to pin down just what the true value of the
probability of oil is.

Let
p = prior probability of oil,
then the expected payoff from drilling for any p is
E[Payoff(drill)] = 700p - 100(1 - p) = 800p - 100.
How the expected payoff changes when the probability
changes
[Graph: expected payoff versus the prior probability p of oil. The "drill for oil" line rises from -100 at p = 0 to 700 at p = 1; the "sell the land" line is constant at 90. The region where the decision should be to sell the land lies to the left of the crossover point; the region where the decision should be to drill for oil lies to the right.]

Conclusion:
Should sell the land if p < 0.2375.
Should drill for oil if p > 0.2375.
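The crossover point can be recovered by setting the two expected payoffs equal (a worked step consistent with the graph):

```latex
E[\text{Payoff(drill)}] = E[\text{Payoff(sell)}]
\quad\Rightarrow\quad 800p - 100 = 90
\quad\Rightarrow\quad p = \frac{190}{800} = 0.2375
```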
Exercise

P782
15.2-1
15.2-5
15.2-8
15.3 Decision Making with Experimentation

Decision making with risk → decision making with experimentation


Posterior Probabilities

Additional testing (experimentation) can be done to


improve the preliminary estimates of the
probabilities of the respective states of nature
provided by the prior probabilities.

These improved estimates are called posterior


probabilities.
We have to decide

How to derive the posterior probabilities?

How to decide whether it is worthwhile to conduct the experimentation?
The prototype example

A seismic survey obtains seismic soundings that


indicate whether the geological structure is
favorable to the presence of oil. The cost is $30,000.
An available option before making a decision is to
conduct a detailed seismic survey of the land to
obtain a better estimate of the probability of oil. We
will divide the possible findings of the survey into
the following two categories:
USS: Unfavorable seismic soundings; oil is fairly
unlikely.
FSS: Favorable seismic soundings; oil is fairly
likely.
Based on past experience, if there is oil, then the probability
of unfavorable seismic soundings is

P(USS | State = Oil) = 0.4, so P(FSS |State = Oil) = 0.6.

Similarly, if there is no oil, then the probability of


unfavorable seismic soundings is estimated to be

P(USS | State = Dry) = 0.8, so P(FSS| State = Dry) = 0.2


Probability Tree Diagram

How to find the posterior probabilities of the respective states of nature given the seismic soundings?

[Probability tree diagram, reconstructed as calculations:]

Joint probabilities:
P(Oil and FSS) = 0.25(0.6) = 0.15        P(Oil and USS) = 0.25(0.4) = 0.10
P(Dry and FSS) = 0.75(0.2) = 0.15        P(Dry and USS) = 0.75(0.8) = 0.60

Unconditional probabilities of each finding:
P(FSS) = 0.15 + 0.15 = 0.3
P(USS) = 0.10 + 0.60 = 0.7

Posterior probabilities:
P(Oil | FSS) = 0.15/0.3 = 0.5            P(Dry | FSS) = 0.15/0.3 = 0.5
P(Oil | USS) = 0.10/0.7 = 0.14           P(Dry | USS) = 0.60/0.7 = 0.86
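A minimal Python sketch (not part of the slides) that reproduces these joint, unconditional, and posterior probabilities with Bayes' theorem:

```python
# Prior probabilities of the states and likelihoods P(finding | state).
prior = {"Oil": 0.25, "Dry": 0.75}
likelihood = {"Oil": {"FSS": 0.6, "USS": 0.4},
              "Dry": {"FSS": 0.2, "USS": 0.8}}

for finding in ("FSS", "USS"):
    # Unconditional probability of the finding.
    p_finding = sum(prior[s] * likelihood[s][finding] for s in prior)
    # Posterior probability of each state given the finding.
    posterior = {s: prior[s] * likelihood[s][finding] / p_finding for s in prior}
    print(finding, round(p_finding, 2), {s: round(p, 2) for s, p in posterior.items()})

# FSS 0.3 {'Oil': 0.5, 'Dry': 0.5}
# USS 0.7 {'Oil': 0.14, 'Dry': 0.86}
```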
The value of experimentation

Experimentation → Information → Value

How?
Expected Value of Perfect Information

The first method assumes (unrealistically) that the


experiment will remove all uncertainty about what
the true state of nature is, and then this method
makes a very quick calculation of what the resulting
improvement in the expected payoff would be
(ignoring the cost of the experiment). This quantity,
called the expected value of perfect information,
provides an upper bound on the potential value of
the experiment. Therefore, if this upper bound is
less than the cost of the experiment, the experiment
definitely should be forgone.
Suppose now that the experiment could definitely
identify what the true state of nature is, thereby
providing perfect information. Whichever state of
nature is identified, you naturally choose the action with
the maximum payoff for that state. We do not know in
advance which state of nature will be identified, so a
calculation of the expected payoff with perfect
information (ignoring the cost of the experiment)
requires weighting the maximum payoff for each state
of nature by the prior probability of that state of nature.
Calculate EPPI

                    State of Nature
Alternative         Oil        Dry
1. Drill for Oil    700        -100
2. Sell the Land    90         90
Prior probability   0.25       0.75

Expected Payoff with Perfect Information (EPPI) = 0.25*700 + 0.75*90 = 242.5.
Thus, if the Goferbroke Co. could learn before choosing its action whether the land contains oil, the expected payoff as of now (before acquiring this information) would be $242,500 (excluding the cost of the experiment generating the information).

To evaluate whether the experiment should be


conducted, we now use this quantity to calculate the
expected value of perfect information.
Calculate EVPI

The Expected Value of Perfect Information,


abbreviated EVPI, is calculated as
EVPI = expected payoff with perfect information - expected payoff without experimentation.
For the prototype example, the expected payoff without experimentation (under Bayes' decision rule) is 100. Therefore,
EVPI = 242.5 - 100 = 142.5.
Since 142.5 far exceeds 30, the cost of experimentation
(a seismic survey), it may be worthwhile to proceed
with the seismic survey.
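A minimal Python sketch (not part of the slides) of the EVPI calculation:

```python
prior = {"Oil": 0.25, "Dry": 0.75}
payoffs = {"Drill for Oil": {"Oil": 700, "Dry": -100},
           "Sell the Land": {"Oil": 90, "Dry": 90}}

# EPPI: best payoff under each state, weighted by the prior probabilities.
eppi = sum(prior[s] * max(payoffs[a][s] for a in payoffs) for s in prior)
# Expected payoff without experimentation (Bayes' decision rule).
ep_no_exp = max(sum(prior[s] * payoffs[a][s] for s in prior) for a in payoffs)

evpi = eppi - ep_no_exp
print(eppi, ep_no_exp, evpi)  # 242.5 100.0 142.5
```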
Exercise

P785
15.3-1a-d
15.3-3
Expected Value of Experimentation

Another method calculates the actual improvement


in the expected payoff (ignoring the cost of the
experiment) that would result from performing the
experiment. Then comparing this improvement with
the cost indicates whether the experiment should be
performed.
Calculate EPE

Calculating this quantity requires first computing


the Expected Payoff with Experimentation (EPE).
1. Find all the posterior probabilities, and the
corresponding expected payoff for each possible
finding from the experiment.
2. Then each of these expected payoffs needs to be
weighted by the probability of the corresponding
finding.
For the prototype example,
P(USS) = 0.7, P(FSS) = 0.3.

For the optimal policy with experimentation, the corresponding expected payoff for each finding is
E(Payoff | Finding = USS) = 90,
E(Payoff | Finding = FSS) = 300.

With these numbers,
Expected payoff with experimentation = 0.7*90 + 0.3*300 = 153.
Calculate EVE

The expected value of experimentation, abbreviated EVE, is calculated as
EVE = EPE - expected payoff without experimentation
    = 153 - 100
    = 53.

Since this value exceeds 30, the cost of conducting a


detailed seismic survey, this experimentation should be
done.
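A minimal Python sketch (not part of the slides) of the EPE and EVE calculations, reusing the posterior probabilities computed earlier:

```python
p_finding = {"FSS": 0.3, "USS": 0.7}
posterior = {"FSS": {"Oil": 0.5, "Dry": 0.5},
             "USS": {"Oil": 0.1 / 0.7, "Dry": 0.6 / 0.7}}
payoffs = {"Drill for Oil": {"Oil": 700, "Dry": -100},
           "Sell the Land": {"Oil": 90, "Dry": 90}}

# Best expected payoff for each possible finding (ignoring the survey cost).
best_given = {f: max(sum(post[s] * payoffs[a][s] for s in post) for a in payoffs)
              for f, post in posterior.items()}
# Weight each of these by the probability of the corresponding finding.
epe = sum(p_finding[f] * best_given[f] for f in p_finding)
eve = epe - 100  # 100 = expected payoff without experimentation

print({f: round(v) for f, v in best_given.items()})  # {'FSS': 300, 'USS': 90}
print(round(epe), round(eve))                        # 153 53
```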
Exercise

P790
15.3-1: Calculate the posterior probabilities
15.4 Decision Trees

Decision trees provide a useful way


of visually displaying the problem and
then organizing the computational
work already described in the
preceding two sections. These trees
are especially helpful when a sequence
of decisions must be made.
A decision tree describes graphically:
The decisions to be made
The events that may occur
The outcomes associated with combinations of
decisions and events.
Probabilities are assigned to the events, and values
are determined for each outcome.
Three kinds of nodes, two kinds of branches

1. Decision node: a point where a choice must be made.
2. Event node: a point where uncertainty is resolved (the decision maker learns about the occurrence of an event).
3. Terminal node: represents the final result of a combination of decisions and events.

Two kinds of branches: decision branches and event branches.


Forks and Branches

Sometimes, the nodes of the decision tree are referred


to as forks, and the arcs are called branches.
A decision fork, represented by a square, indicates that
a decision needs to be made at that point in the
process.
A chance fork, represented by a circle, indicates that a
random event occurs at that point.
Example of a Decision Tree

[Schematic decision tree: each decision branch is labeled with a decision name and its cash flow; each event branch is labeled with an event name, its probability, and its cash flow; each terminal node carries a terminal value.]
Prototype Example

The prototype example involves a sequence of two decisions:
1. Should a seismic survey be conducted before an action is chosen?
2. Which action (drill for oil or sell the land) should be chosen?

[Decision tree: the first decision node chooses between "do seismic survey" and "no seismic survey". The survey branch leads to an event node with unfavorable and favorable findings; each finding leads to a decision node for drill versus sell, and each drill branch ends in an event node with oil and dry outcomes. The no-survey branch leads directly to the drill-versus-sell decision, with the same oil/dry event node after drilling.]
Data prepared for calculation

[Decision tree with cash flows, probabilities, and net payoffs in $1,000s:]

Node a (decision): do seismic survey (cash flow -30) or no seismic survey.

Do seismic survey:
  Node b (event): unfavorable soundings (0.7) lead to node c; favorable soundings (0.3) lead to node d.
  Node c (decision): drill (cash flow -100) leads to event node f; sell yields a payoff of 60.
    Node f (event): oil (0.143), revenue 800, net payoff 670; dry (0.857), net payoff -130.
  Node d (decision): drill (cash flow -100) leads to event node g; sell yields a payoff of 60.
    Node g (event): oil (0.5), revenue 800, net payoff 670; dry (0.5), net payoff -130.

No seismic survey:
  Node e (decision): drill (cash flow -100) leads to event node h; sell yields a payoff of 90.
    Node h (event): oil (0.25), revenue 800, net payoff 700; dry (0.75), net payoff -100.
The Procedure of Calculation

1. Start at the right side of the decision tree and move left one
column at a time. For each column, perform either step 2 or
step 3 depending upon whether the forks in that column are
chance forks or decision forks.
2. For each chance fork, calculate its expected payoff. Record
this expected payoff for each decision fork in boldface next
to the fork.
3. For each decision fork, compare the expected payoffs of its
branches and choose the alternative whose branch has the
largest expected payoff. In each case, record the choice on
the decision tree by inserting a double dash as a barrier
through each rejected branch.
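A minimal Python sketch (not part of the slides) of this backward-induction procedure on the Goferbroke tree, using the net terminal payoffs from the data above (in $1,000s):

```python
def expected(branches):
    """Expected payoff at a chance fork: sum of probability * value."""
    return sum(p * v for p, v in branches)

def best(choices):
    """Best alternative at a decision fork: (choice, value) with the largest value."""
    return max(choices.items(), key=lambda kv: kv[1])

# Chance forks after drilling (terminal payoffs already net of survey/drilling costs).
drill_after_uss = expected([(1/7, 670), (6/7, -130)])    # node f: about -15.7
drill_after_fss = expected([(0.5, 670), (0.5, -130)])    # node g: 270
drill_no_survey = expected([(0.25, 700), (0.75, -100)])  # node h: 100

# Decision forks after each finding, then the survey chance fork.
after_uss = best({"drill": drill_after_uss, "sell": 60})   # node c -> ('sell', 60)
after_fss = best({"drill": drill_after_fss, "sell": 60})   # node d -> ('drill', 270)
survey = expected([(0.7, after_uss[1]), (0.3, after_fss[1])])  # node b: 123
no_survey = best({"drill": drill_no_survey, "sell": 90})   # node e -> ('drill', 100)

# Root decision fork.
print(best({"do seismic survey": survey, "no seismic survey": no_survey[1]}))
# ('do seismic survey', 123.0)
```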
The Final Decision Tree

[Backward-induction values (expected payoffs in $1,000s) on the tree above:]

  Event node f (drill after unfavorable soundings): 0.143(670) + 0.857(-130) = -15.7
  Decision node c (after unfavorable soundings): max(-15.7, 60) = 60, so sell.
  Event node g (drill after favorable soundings): 0.5(670) + 0.5(-130) = 270
  Decision node d (after favorable soundings): max(270, 60) = 270, so drill.
  Event node b (survey finding): 0.7(60) + 0.3(270) = 123
  Event node h (drill with no survey): 0.25(700) + 0.75(-100) = 100
  Decision node e (no survey): max(100, 90) = 100, so drill.
  Decision node a: max(123, 100) = 123, so do the seismic survey.
Decision Result

The chosen alternative is indicated by inserting a double dash as a barrier through each rejected branch.

The decision result is: do the seismic survey (expected payoff = 123); then sell the land if the soundings are unfavorable, and drill for oil if they are favorable.
Exercise

P789
15.4-2
15.4-3
15.4-4
15.4-5
15.4-10
15.5 Utility Theory

Thus far, when applying Bayes decision rule, we


have assumed that the expected payoff in monetary
terms is the appropriate measure of the
consequences of taking an action. However, in
many situations this assumption is inappropriate.
For example, a company may be unwilling to invest
a large sum of money in a new product even when
the expected profit is substantial if there is a risk of
losing its investment and thereby becoming
bankrupt.
People's Attitude to Risk

Example: a choice between a gamble that pays $100,000 or nothing, and receiving $40,000 with certainty.


Utility Function for Money

There is a way of transforming monetary values to an appropriate scale that reflects the decision maker's preferences. This scale is called the utility function for money.
[Figure: a typical utility function u(M) for money M, plotted for M between $10,000 and $100,000.]


[Figure: the same utility function with u(M) = 1, 2, 3, 4 at M = $10,000, $30,000, $60,000, $100,000.]

This utility function u(M) for money M indicates that an individual having this utility function would value obtaining $30,000 twice as much as $10,000, and would value obtaining $100,000 twice as much as $30,000.
Risk-averse

Having this decreasing slope of the function


as the amount of money increases is referred
to as having a decreasing marginal utility for
money. Such an individual is referred to as
being risk-averse.
Risk-seekers

However, not all individuals have a decreasing


marginal utility for money. Some people are risk
seekers instead of risk-averse, and they go through
life looking for the big score. The slope of their
utility function increases as the amount of money
increases, so they have an increasing marginal
utility for money.
Risk-neutral

The intermediate case is that of a risk-neutral


individual, who prizes money at its face value. Such
an individual's utility for money is simply
proportional to the amount of money involved.
The fact that different people have different utility
functions for money has an important implication
for decision making in the face of uncertainty.
Fundamental Property of the Utility Function for Money

Fundamental property: under the assumptions of utility theory, the decision maker's utility function for money has the property that the decision maker is indifferent between two alternative courses of action if the two alternatives have the same expected utility (rather than the same expected monetary payoff).
When the decision maker's utility function for money is used to measure the relative worth of the various possible monetary outcomes, Bayes' decision rule replaces monetary payoffs by the corresponding utilities. Therefore, the optimal action (or series of actions) is the one which maximizes the expected utility.
How to construct the utility function?

Step 1
As a starting point in constructing the utility
function, it is natural to let the utility of zero money
be zero, so u(0) = 0.
Construct the utility function

Step 2
Offer a choice between two alternatives: (1) obtain a payoff of 700 with probability p or a payoff of -130 with probability 1 - p, versus (2) definitely obtain a payoff of 0.
What value of p makes you indifferent between the two alternatives?
The decision maker's choice: p = 0.2.
If we let u(M) denote the utility of a monetary payoff of M, this choice of p implies that
p u(700) + (1 - p) u(-130) = u(0) = 0.
By choosing u(-130) = -150 (a convenient choice, since it will make u(M) approximately equal to M when M is in the vicinity of 0), this equation then yields u(700) = 600.
Construct the utility function

Step 3
To identify u(-100), a choice of p is made that makes the decision maker indifferent between a payoff of -130 with probability p (or a payoff of 0 with probability 1 - p) and definitely incurring a payoff of -100. The choice is p = 0.7, so
u(-100) = p u(-130) + (1 - p) u(0) = 0.7(-150) = -105.
Construct the utility function

Step 4
To obtain u(90), a value of p is selected that makes the decision maker indifferent between a payoff of 700 with probability p (or a payoff of 0 with probability 1 - p) and definitely obtaining a payoff of 90. The value chosen is p = 0.15, so
u(90) = p u(700) + (1 - p) u(0) = 0.15(600) = 90.
Construct the utility function

Step 5
A smooth curve was drawn through u(-130), u(-100), u(90), and u(700) to obtain the decision maker's utility function for money. A dashed 45-degree line in the figure shows the monetary value M of the amount of money M. This utility function is typical for a moderately risk-averse individual.

[Figure: the utility function u(M) plotted against M from about -200 to 700 (in $1,000s), together with the dashed 45-degree monetary-value line.]
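A minimal Python sketch (not part of the slides) of the equivalent-lottery arithmetic from Steps 1-4:

```python
# Starting points: u(0) = 0 and the convenient choice u(-130) = -150.
u = {0: 0, -130: -150}

# Step 2: indifference at p = 0.2 between (700 w.p. p, -130 w.p. 1-p) and 0 for sure:
#   0.2 * u(700) + 0.8 * u(-130) = u(0)  =>  u(700) = 600.
u[700] = (u[0] - 0.8 * u[-130]) / 0.2

# Step 3: indifference at p = 0.7 between (-130 w.p. p, 0 w.p. 1-p) and -100 for sure.
u[-100] = 0.7 * u[-130] + 0.3 * u[0]

# Step 4: indifference at p = 0.15 between (700 w.p. p, 0 w.p. 1-p) and 90 for sure.
u[90] = 0.15 * u[700] + 0.85 * u[0]

print(u)  # {0: 0, -130: -150, 700: 600.0, -100: -105.0, 90: 90.0}
```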
Prototype Example

Using a Decision Tree to Analyze the Goferbroke Co. Problem with Utilities
Monetary payoff     Utility
-130                -150
-100                -105
60                  60
90                  90
670                 580
700                 600

This information can be used with a decision tree, as summarized next.
Final Decision Tree Using the Utility Function

[The same tree evaluated with utilities (monetary payoffs replaced by u(M)):]

  Event node f (drill after unfavorable soundings): 0.143(580) + 0.857(-150) = -45.7
  Decision node c (after unfavorable soundings): max(-45.7, 60) = 60, so sell.
  Event node g (drill after favorable soundings): 0.5(580) + 0.5(-150) = 215
  Decision node d (after favorable soundings): max(215, 60) = 215, so drill.
  Event node b (survey finding): 0.7(60) + 0.3(215) = 106.5
  Event node h (drill with no survey): 0.25(600) + 0.75(-105) = 71.25
  Decision node e (no survey): max(71.25, 90) = 90, so sell.
  Decision node a: max(106.5, 90) = 106.5, so do the seismic survey.
Final Decision Tree Using Monetary Payoffs (for comparison)

[The same tree evaluated with monetary payoffs, as in Sec. 15.4:]

  Event node f: 0.143(670) + 0.857(-130) = -15.7
  Decision node c: max(-15.7, 60) = 60, so sell.
  Event node g: 0.5(670) + 0.5(-130) = 270
  Decision node d: max(270, 60) = 270, so drill.
  Event node b: 0.7(60) + 0.3(270) = 123
  Event node h: 0.25(700) + 0.75(-100) = 100
  Decision node e: max(100, 90) = 100, so drill.
  Decision node a: max(123, 100) = 123, so do the seismic survey.
Discussion

These expected utilities lead to the same decisions at forks a, c, and d as with monetary payoffs, but the decision at fork e now switches to sell instead of drill. However, the backward induction procedure still leaves fork e on a closed path. Therefore, the overall optimal policy remains the same (do the seismic survey; sell if the result is unfavorable; drill if the result is favorable).
For a somewhat more risk-averse owner, the
optimal solution would switch to the more
conservative approach of immediately selling the
land (no seismic survey).
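A quick numeric check (not from the slides) of why fork e switches: the expected monetary payoff of drilling with no survey still beats selling, but its expected utility does not:

```python
prior = {"Oil": 0.25, "Dry": 0.75}
money = {"Oil": 700, "Dry": -100}   # monetary payoffs for drilling with no survey
util = {"Oil": 600, "Dry": -105}    # corresponding utilities u(700) and u(-100)

ev_money = sum(prior[s] * money[s] for s in prior)  # 100 > 90, so drill wins in money
eu_drill = sum(prior[s] * util[s] for s in prior)   # 71.25 < 90 = u(90), so sell wins in utility
print(ev_money, eu_drill)  # 100.0 71.25
```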
Exercise

P792
15.5-2
15.5-3
