Topic 4 Decision Analysis

The document discusses various decision analysis techniques including: 1) Decision making without probabilities using optimistic, conservative, and minimax regret approaches. 2) Decision making with probabilities using the expected value approach which calculates the expected return of each decision based on payoffs and state probabilities. 3) An example of calculating expected values for a restaurant choosing between three models of different seating capacities.


Decision Analysis

Management Science
Decision Analysis
Contents
• Problem Formulation
• Decision Making without Probabilities
• Decision Making with Probabilities
• Risk Analysis and Sensitivity Analysis
• Decision Analysis with Sample Information
• Computing Branch Probabilities with Bayes' Theorem
• Utility Theory
Problem Formulation

• A decision problem is characterized by decision alternatives, states
of nature, and resulting payoffs.
• The decision alternatives are the different possible strategies the
decision maker can employ.
• The states of nature refer to future events, not under the control of
the decision maker, which may occur. States of nature should be
defined so that they are mutually exclusive and collectively
exhaustive.
Influence Diagrams

• An influence diagram is a graphical device showing the
relationships among the decisions, the chance events, and the
consequences.
• Squares or rectangles depict decision nodes.
• Circles or ovals depict chance nodes.
• Diamonds depict consequence nodes.
• Lines or arcs connecting the nodes show the direction of influence.
Payoff Tables

• The consequence resulting from a specific combination of a
decision alternative and a state of nature is a payoff.
• A table showing payoffs for all combinations of decision alternatives
and states of nature is a payoff table.
• Payoffs can be expressed in terms of profit, cost, time, distance or
any other appropriate measure.
Decision Trees

• A decision tree is a chronological representation of the decision
problem.
• Each decision tree has two types of nodes: round nodes
correspond to the states of nature, while square nodes correspond
to the decision alternatives.
• The branches leaving each round node represent the different
states of nature while the branches leaving each square node
represent the different decision alternatives.
• At the end of each limb of a tree are the payoffs attained from the
series of branches making up that limb.
Decision Making without Probabilities

• Three commonly used criteria for decision making when probability
information regarding the likelihood of the states of nature is
unavailable are:
– the optimistic approach
– the conservative approach
– the minimax regret approach.
Optimistic Approach

• The optimistic approach would be used by an optimistic decision
maker.
• The decision with the largest possible payoff is chosen.
• If the payoff table were in terms of costs, the decision with the
lowest cost would be chosen.
Conservative Approach

• The conservative approach would be used by a conservative
decision maker.
• For each decision the minimum payoff is listed, and then the
decision corresponding to the maximum of these minimum payoffs
is selected. (Hence, the minimum possible payoff is maximized.)
• If the payoffs were in terms of costs, the maximum cost would be
determined for each decision, and then the decision corresponding
to the minimum of these maximum costs would be selected. (Hence,
the maximum possible cost is minimized.)
Minimax Regret Approach

• The minimax regret approach requires the construction of a regret
table or an opportunity loss table.
• This is done by calculating for each state of nature the difference
between each payoff and the largest payoff for that state of
nature.
• Then, using this regret table, the maximum regret for each possible
decision is listed.
• The decision chosen is the one corresponding to the minimum of
the maximum regrets.
Example

Consider the following problem with three decision alternatives and
three states of nature, with the following payoff table representing
profits:

State of Nature
𝑠1 𝑠2 𝑠3
𝑑1 4 4 -2
Decisions 𝑑2 0 3 -1
𝑑3 1 5 -3
EXAMPLE:

Optimistic Approach

 An optimistic decision maker would use the optimistic (maximax)
approach. We choose the decision that has the largest single value
in the payoff table.

Decision   Maximum Payoff
𝑑1               4
𝑑2               3
𝑑3               5   ← maximax decision, maximax payoff
EXAMPLE:

Conservative Approach

 A conservative decision maker would use the conservative
(maximin) approach. List the minimum payoff for each decision,
then choose the decision with the maximum of these minimum
payoffs.

Decision   Minimum Payoff
𝑑1              -2
𝑑2              -1   ← maximin decision, maximin payoff
𝑑3              -3
EXAMPLE:

Minimax Regret Approach

 For the minimax regret approach, first compute a regret table by
subtracting each payoff in a column from the largest payoff in that
column. In this example, in the first column subtract 4, 0, and 1
from 4; etc. The resulting regret table is:

Regret Table
       𝑠1   𝑠2   𝑠3
𝑑1      0    1    1
𝑑2      4    2    0
𝑑3      3    0    2

Decision   Maximum Regret
𝑑1              1   ← minimax decision, minimax regret
𝑑2              4
𝑑3              3
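The three criteria can be checked with a short script (a minimal sketch, not part of the original slides; variable names are illustrative and the payoff dictionary mirrors the example table):

```python
# Payoff table from the example: each decision maps to its payoffs under s1, s2, s3.
payoffs = {"d1": [4, 4, -2], "d2": [0, 3, -1], "d3": [1, 5, -3]}

# Optimistic (maximax): choose the decision with the largest single payoff.
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

# Conservative (maximin): choose the decision whose worst payoff is best.
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Minimax regret: regret = (column maximum) - payoff; minimize the maximum regret.
col_max = [max(row[j] for row in payoffs.values()) for j in range(3)]
regret = {d: [col_max[j] - row[j] for j in range(3)] for d, row in payoffs.items()}
minimax_regret = min(regret, key=lambda d: max(regret[d]))

print(maximax, maximin, minimax_regret)  # d3 d2 d1
```

This reproduces the slide results: 𝑑3 under maximax, 𝑑2 under maximin, and 𝑑1 under minimax regret.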
Practice Exercise

Consider the following problem with three decision alternatives and
four states of nature, with the following payoff table representing
profits:

State of Nature
𝑠1 𝑠2 𝑠3 𝑠4
𝑑1 3 2 3 5
Decisions 𝑑2 5 7 2 -2
𝑑3 4 -1 4 3
Decision Making with Probabilities

• Expected Value Approach
 If probabilistic information regarding the states of nature is available, one may
use the expected value (EV) approach.
 Here the expected return for each decision is calculated by summing the
products of the payoff under each state of nature and the probability of the
respective state of nature occurring.
 The decision yielding the best expected return is chosen.
Expected Value of a Decision Alternative

• The expected value of a decision alternative is the sum of the weighted
payoffs for the decision alternative.
• The expected value (EV) of decision alternative 𝑑𝑖 is defined as:

EV(dᵢ) = Σⱼ₌₁ᴺ P(sⱼ) Vᵢⱼ

where: N = the number of states of nature
P(sⱼ) = the probability of state of nature sⱼ
Vᵢⱼ = the payoff corresponding to decision alternative dᵢ and
state of nature sⱼ
Example:
Burger Prince

• Burger Prince Restaurant is contemplating opening a new
restaurant on Main Street. It has three different models, each with a
different seating capacity. Burger Prince estimates that the
average number of customers per hour will be 80, 100, or 120.
Payoff Table
State of Nature
𝑠1 = 80 𝑠2 = 100 𝑠3 = 120
Model A $10,000 $15,000 $14,000
Model B $8,000 $18,000 $12,000
Model C $6,000 $16,000 $21,000
Example:
Burger Prince

Calculate the expected value for each decision. The decision tree
on the next slide can assist in this calculation. Here d1, d2, d3
represent the decision alternatives of models A, B, C, and s1, s2, s3
represent the states of nature of 80, 100, and 120.
Decision Tree

[Decision node 1 branches on 𝑑1, 𝑑2, 𝑑3 to chance nodes 2, 3, and 4.
Each chance node branches on 𝑠1 (.4), 𝑠2 (.2), and 𝑠3 (.4), ending in
the corresponding payoffs: 𝑑1 → 10,000 / 15,000 / 14,000;
𝑑2 → 8,000 / 18,000 / 12,000; 𝑑3 → 6,000 / 16,000 / 21,000.]
Example:
Burger Prince

Expected Value for Each Decision

Node 2: 𝐸𝑀𝑉(𝑑1) = 0.4(10,000) + 0.2(15,000) + 0.4(14,000) = 12,600
Node 3: 𝐸𝑀𝑉(𝑑2) = 0.4(8,000) + 0.2(18,000) + 0.4(12,000) = 11,600
Node 4: 𝐸𝑀𝑉(𝑑3) = 0.4(6,000) + 0.2(16,000) + 0.4(21,000) = 14,000

Choose Model C, which has the largest expected value ($14,000).
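As a quick check, the expected value calculation can be scripted (a minimal sketch, not from the original slides; names are illustrative and the figures come from the payoff table):

```python
# Prior probabilities for s1, s2, s3 and the Burger Prince payoff table.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# EV(d_i) = sum over states of P(s_j) * V_ij, for each decision alternative.
ev = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
best = max(ev, key=ev.get)

print(best)  # Model C
```

Rounding the results gives EV(A) = 12,600, EV(B) = 11,600, and EV(C) = 14,000, matching the tree.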


Expected Value of Perfect Information

• Frequently, information is available that can improve the
probability estimates for the states of nature.
• The expected value of perfect information (EVPI) is the increase in
the expected profit that would result if one knew with certainty
which state of nature would occur.
• The EVPI provides an upper bound on the expected value of any
sample or survey information.
Expected Value of Perfect Information

• EVPI Calculation
– Step 1:
Determine the optimal return corresponding to each state of nature.
– Step 2:
Compute the expected value of these optimal returns.
– Step 3:
Subtract the EV of the optimal decision from the amount determined in
step (2).
Example:
Burger Prince

• Expected Value of Perfect Information

Calculate the expected value of the optimum payoff for each state
of nature and subtract the EV of the optimal decision:

EVPI = .4(10,000) + .2(18,000) + .4(21,000) − 14,000 = $2,000
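The three EVPI steps map directly onto code (a minimal sketch, not from the original slides; names are illustrative):

```python
# Prior probabilities and the Burger Prince payoff table.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# Step 1: optimal return for each state of nature (best payoff in each column).
best_per_state = [max(row[j] for row in payoffs.values()) for j in range(3)]

# Step 2: expected value of these optimal returns.
ev_with_pi = sum(p * v for p, v in zip(probs, best_per_state))

# Step 3: subtract the EV of the optimal decision (Model C, $14,000).
evpi = ev_with_pi - 14_000
print(round(evpi))  # 2000
```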
Risk Analysis

• Risk analysis helps the decision maker recognize the difference
between:
– the expected value of a decision alternative, and
– the payoff that might actually occur

• The risk profile for a decision alternative shows the possible payoffs
for the decision alternative along with their associated probabilities.
Example:
Burger Prince

• Risk Profile for the Model C Decision Alternative

[Bar chart of probability versus profit ($): Model C yields $6,000
with probability .4, $16,000 with probability .2, and $21,000 with
probability .4.]
Sensitivity Analysis

• Sensitivity analysis can be used to determine how changes to the
following inputs affect the recommended decision alternative:
– probabilities for the states of nature
– values of the payoffs

• If a small change in the value of one of the inputs causes a change
in the recommended decision alternative, extra effort and care
should be taken in estimating the input value.
Bayes’ Theorem and Posterior
Probabilities

• Knowledge of sample or survey information can be used to revise
the probability estimates for the states of nature.
• Prior to obtaining this information, the probability estimates for the
states of nature are called prior probabilities.
• With knowledge of conditional probabilities for the outcomes or
indicators of the sample or survey information, these prior
probabilities can be revised by employing Bayes' Theorem.
• The outcomes of this analysis are called posterior probabilities or
branch probabilities for decision trees.
Computing Branch Probabilities

• Branch (Posterior) Probabilities Calculation


– Step 1:
For each state of nature, multiply the prior probability by its conditional
probability for the indicator -- this gives the joint probabilities for the states and
indicator.
– Step 2:
Sum these joint probabilities over all states -- this gives the marginal
probability for the indicator.
– Step 3:
For each state, divide its joint probability by the marginal probability for
the indicator -- this gives the posterior probability distribution.
Expected Value of Sample Information

• The expected value of sample information (EVSI) is the additional
expected profit possible through knowledge of the sample or survey
information.
Expected Value of Sample Information

• EVSI Calculation
– Step 1:
Determine the optimal decision and its expected return for the possible
outcomes of the sample using the posterior probabilities for the states of
nature.
– Step 2:
Compute the expected value of these optimal returns.
– Step 3:
Subtract the EV of the optimal decision obtained without using the
sample information from the amount determined in step (2).
Efficiency of Sample Information

• Efficiency of sample information is the ratio of EVSI to EVPI.
• As the EVPI provides an upper bound for the EVSI, efficiency is
always a number between 0 and 1.
Example:
Burger Prince

• Sample Information

Burger Prince must decide whether or not to purchase a marketing
survey from Stanton Marketing for $1,000. The results of the survey are
either "favorable" or "unfavorable". The conditional probabilities are:

P(favorable | 80 customers per hour) = .2

P(favorable | 100 customers per hour) = .5

P(favorable | 120 customers per hour) = .9

Should Burger Prince have the survey performed by Stanton Marketing?


Example:
Burger Prince

Influence Diagram

[Decision nodes (squares): Market Survey, Restaurant Size. Chance
nodes (circles): Market Survey Results, Avg. Number of Customers
Per Hour. Consequence node (diamond): Profit. Arcs show the
direction of influence.]
Example:
Burger Prince

• Posterior Probabilities

Favorable survey result:

State   Prior   Conditional   Joint   Posterior
 80      0.4        0.2        0.08     0.148
100      0.2        0.5        0.10     0.185
120      0.4        0.9        0.36     0.667
                    Total      0.54     1.000

P(favorable) = 0.54
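The three-step branch probability calculation can be reproduced directly (a minimal sketch, not from the original slides, using the favorable-survey numbers; names are illustrative):

```python
priors = {80: 0.4, 100: 0.2, 120: 0.4}          # P(state)
cond_favorable = {80: 0.2, 100: 0.5, 120: 0.9}  # P(favorable | state)

# Step 1: joint probabilities P(state and favorable).
joint = {s: priors[s] * cond_favorable[s] for s in priors}

# Step 2: marginal probability of the indicator, P(favorable).
p_favorable = sum(joint.values())

# Step 3: posterior probabilities P(state | favorable).
posterior = {s: joint[s] / p_favorable for s in joint}

print(round(p_favorable, 2))                           # 0.54
print({s: round(p, 3) for s, p in posterior.items()})  # {80: 0.148, 100: 0.185, 120: 0.667}
```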
Example:
Burger Prince

• Decision Tree (top half: favorable survey result 𝐼1, probability .54)

[From decision node 2, branches 𝑑1, 𝑑2, 𝑑3 lead to chance nodes 4, 5,
and 6. Each chance node branches on 𝑠1 (.148), 𝑠2 (.185), and
𝑠3 (.667), ending in the payoffs from the payoff table.]
Example:
Burger Prince

• Decision Tree (bottom half: unfavorable survey result 𝐼2, probability .46)

[From decision node 3, branches 𝑑1, 𝑑2, 𝑑3 lead to chance nodes 7, 8,
and 9. Each chance node branches on 𝑠1 (.696), 𝑠2 (.217), and
𝑠3 (.087), ending in the payoffs from the payoff table.]
Example:
Burger Prince

• Expected Values at the Chance Nodes

Node 4: 𝐸𝑀𝑉 = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
Node 5: 𝐸𝑀𝑉 = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
Node 6: 𝐸𝑀𝑉 = .148(6,000) + .185(16,000) + .667(21,000) = $17,855
Node 7: 𝐸𝑀𝑉 = .696(10,000) + .217(15,000) + .087(14,000) = $11,433
Node 8: 𝐸𝑀𝑉 = .696(8,000) + .217(18,000) + .087(12,000) = $10,518
Node 9: 𝐸𝑀𝑉 = .696(6,000) + .217(16,000) + .087(21,000) = $9,475

If the survey result is favorable (𝐼1, .54), the best alternative is 𝑑3
with EMV $17,855; if unfavorable (𝐼2, .46), the best is 𝑑1 with EMV
$11,433.
Example:
Burger Prince

• Expected Value of Sample Information

If the outcome of the survey is "favorable", choose Model C; if it is
"unfavorable", choose Model A.

EVSI = .54($17,855) + .46($11,433) − $14,000 = $900.88

Since this is less than the $1,000 cost of the survey, the survey
should not be purchased.
Example:
Burger Prince

Efficiency of Sample Information

The efficiency of the survey:

EVSI/EVPI = $900.88 / $2,000 = .4504
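The EVSI and efficiency figures can be verified numerically (a minimal sketch, not from the original slides; names are illustrative and the inputs come from the decision tree):

```python
# Best EMVs after each survey outcome (from the decision tree) and their probabilities.
p_fav, emv_fav = 0.54, 17_855       # favorable  -> Model C
p_unfav, emv_unfav = 0.46, 11_433   # unfavorable -> Model A
ev_no_info = 14_000                 # best EV without the survey (Model C)
evpi = 2_000                        # from the EVPI slide

evsi = p_fav * emv_fav + p_unfav * emv_unfav - ev_no_info
efficiency = evsi / evpi

print(round(evsi, 2))        # 900.88
print(round(efficiency, 4))  # 0.4504
```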
Meaning of Utility
• Utilities are used when the decision criteria must be based on more
than just expected monetary values.
• Utility is a measure of the total worth of a particular outcome,
reflecting the decision maker’s attitude towards a collection of
factors.
• Some of these factors may be profit, loss, and risk.
• This analysis is particularly appropriate in cases where payoffs can
assume extremely high or extremely low values.
Example:
Risk Avoider
Consider a three-state, three-decision problem with the
following payoff table in dollars:
State of Nature
𝑠1 𝑠2 𝑠3
𝑑1 +100,000 +40,000 +60,000
Decisions 𝑑2 +50,000 +20,000 +30,000
𝑑3 +20,000 +20,000 +10,000
The probabilities for the three states of nature are: P(s1) = .1, P(s2) = .3, and P(s3) = .6.
Example:
Risk Avoider
• Utility Table for Decision Maker

State of Nature
𝑠1 𝑠2 𝑠3
𝑑1 100 90 0
Decisions 𝑑2 94 80 40
𝑑3 80 80 60
Example:
Risk Avoider
• Utility Table for Decision Maker

State of Nature
𝑠1 𝑠2 𝑠3 Expected
Utility
𝑑1 100 90 0 37.0
Decisions
𝑑2 94 80 40 57.4
𝑑3 80 80 60 68.0
Probability 0.1 0.3 0.6

• The decision maker should choose decision 𝑑3, which has the highest expected utility (68.0).
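Expected utilities are computed exactly like expected monetary values, with utilities in place of payoffs (a minimal sketch, not from the original slides; names are illustrative):

```python
# State probabilities and the decision maker's utility table.
probs = [0.1, 0.3, 0.6]
utilities = {"d1": [100, 90, 0], "d2": [94, 80, 40], "d3": [80, 80, 60]}

# Expected utility of each decision: sum of P(s_j) * utility.
expected_utility = {d: sum(p * u for p, u in zip(probs, row))
                    for d, row in utilities.items()}
best = max(expected_utility, key=expected_utility.get)

print(best)  # d3
```

Rounding gives 37.0, 57.4, and 68.0, matching the expected utility table, so 𝑑3 is chosen.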

