@Risk simulation output for gmcashflowdecay.xls
Name: npv cash flows / Time    Description: Output    Cell: Sheet1!D22
Minimum -2.19E+08    Maximum 2.55E+08    Mean 4.31E+07
Std Deviation 9.92E+07    Variance 9.84E+15
Skewness -0.3451601    Kurtosis 2.396719    Errors Calculated 0    Mode 6.10E+07
Percentiles:
 5%  -1.35E+08
10%  -1.03E+08
15%  -7.06E+07
20%  -4.68E+07
25%  -3.02E+07
30%  -8.04E+06
35%   9.85E+06
40%   2.64E+07
45%   4.13E+07
50%   5.76E+07
55%   6.90E+07
60%   8.02E+07
65%   9.26E+07
70%   1.04E+08
75%   1.18E+08
80%   1.32E+08
85%   1.49E+08
90%   1.66E+08
95%   1.90E+08
FIGURE 13  95% confidence interval for the mean NPV (from the simulation above, with n = 1,000 iterations; mean and standard deviation in millions):
Lower = 43.1 − 2(99.2)/√1000 ≈ 3.69E+07
Upper = 43.1 + 2(99.2)/√1000 ≈ 4.94E+07
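This interval is simply the mean plus or minus two standard errors. A quick check of the Figure 13 arithmetic in Python, using the summary statistics from the output above:

import math

mean, sd, n = 4.31e7, 9.92e7, 1000       # mean, std deviation, iterations from the output above
half = 2 * sd / math.sqrt(n)             # two standard errors of the mean
print(mean - half, mean + half)          # about 3.68e7 and 4.94e7, as in Figure 13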
FIGURE 14  Histogram of simulated NPV (values in millions; density scale in 10^-9). The probability of a negative NPV is 31.88%; the probability of a nonnegative NPV is 68.12%.
In the car business, a new model virtually always has reduced sales every year. A new
drug, however, sees increased sales in the first few years, followed by reduced sales. To
model this form of the product life cycle, we must incorporate the following sources of
uncertainty. (Note that we assume that the total number of years for which the drug is sold is
known.)
■ Number of years for which unit sales increase
■ Average annual percentage increase in sales during the sales-increase portion of
the sales period
■ Average annual percentage decrease in sales during the sales-decrease portion of
the sales period
Example 3 shows how to model this type of product life cycle. See file Lillygrowth.xls
and Figure 15.
Lilly is producing a new drug that will be sold for 10 years. Year 1 unit sales are assumed
to follow a triangular random variable with worst case 100,000 units, most likely case
150,000, and best case 170,000. The year 0 fixed cost of developing the drug is $1.6 bil-
lion, to be depreciated on a 10-year straight-line basis. Sales are equally likely to increase
for 3, 4, 5, or 6 years, with the average percentage increase during those years following
a triangular random variable with worst case 5%, most likely case 8%, and best case 10%.
During the remainder of the 10-year sales life of the drug, unit sales will decrease at a
rate governed by a triangular random variable having best case 8%, most likely case 12%,
and worst case 18%. During each year, a unit of the drug sells for $15,000. Year 1 vari-
able cost of producing a unit of the drug is $10,000. The unit variable cost of producing
the drug increases at 4% a year.
a Estimate the mean NPV of the drug’s cash flows.
b What is the probability that the drug will add value to Lilly?
c What source of uncertainty is the most important driver of the drug’s NPV?
Solution After dragging our formulas to create years 6–10 and changing the depreciation in row
17 to be over a 10-year period, we simulate random variables in D3 (length of sales in-
crease), D7 (annual percentage rate of sales increase), and D8 (annual percentage rate of
sales decrease) with the following formulas
Cell D3: =RISKDUNIFORM({3,4,5,6})
The RISKDUNIFORM variable is a discrete random variable that assigns equal proba-
bility to each listed value.
Cell D7: =RISKTRIANG(0.05,0.08,0.1)
Cell D8: =RISKTRIANG(0.08,0.12,0.18)
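For readers who want to replicate this logic outside @Risk, here is a minimal Monte Carlo sketch in Python. It assumes a 10% discount rate and simplified pretax cash flows (no depreciation or taxes), since those details of Lillygrowth.xls are not shown here, so its numbers will only roughly track the figures reported for the full model.

import numpy as np

rng = np.random.default_rng(0)
N, YEARS, RATE = 10_000, 10, 0.10            # the 10% discount rate is an assumption
PRICE, VCOST1, FIXED = 15_000, 10_000, 1.6e9

npvs = np.empty(N)
for i in range(N):
    growth_years = rng.choice([3, 4, 5, 6])   # like =RISKDUNIFORM({3,4,5,6})
    up = rng.triangular(0.05, 0.08, 0.10)     # like =RISKTRIANG(0.05,0.08,0.1)
    down = rng.triangular(0.08, 0.12, 0.18)   # like =RISKTRIANG(0.08,0.12,0.18)
    units = rng.triangular(100_000, 150_000, 170_000)
    vcost, npv = VCOST1, -FIXED
    for t in range(1, YEARS + 1):
        npv += units * (PRICE - vcost) / (1 + RATE) ** t   # simplified pretax cash flow
        units *= (1 + up) if t <= growth_years else (1 - down)
        vcost *= 1.04                                      # unit cost grows 4% a year
    npvs[i] = npv

print(npvs.mean(), (npvs > 0).mean())         # estimated mean NPV and P(drug adds value)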
In short, the uncertainty about year 1 unit sales is very important for determining NPV,
but other random variables could probably be replaced by their mean without changing
the distribution of NPV by much.
For each @Risk random variable, the regression tornado graph (Figure 19) computes
the standardized regression coefficient for the @Risk random variable when we try to pre-
dict NPV from all @Risk random variables in the spreadsheet. A standardized regression
coefficient tells us (after adjusting for other variables in the equation) the number of stan-
dard deviations by which NPV changes when the given @Risk random variable changes
by one standard deviation. For example,
■ A one standard deviation change in year 1 unit sales will (ceteris paribus) change
NPV by .98 standard deviation.
■ A one standard deviation change in annual growth rate will increase NPV by .15
standard deviation (ceteris paribus).
Again it is clear that the uncertainty for year 1 sales is really all that matters here; other
random variables may as well be replaced by their means.
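The standardized regression coefficients behind a regression tornado graph are easy to reproduce from the iteration data: standardize every input and the output to zero mean and unit variance, then run one multiple regression. A sketch, where X (one column per @Risk input) and y (the NPVs) are assumed to hold the collected samples:

import numpy as np

def standardized_betas(X, y):
    # Coefficients: std-dev change in NPV per 1-std-dev change in each input
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(ys)), Xs])   # intercept plus standardized inputs
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return beta[1:]                               # drop the intercept

Applied to this model, the year 1 unit sales coefficient should come out near .98 and the growth-rate coefficient near .15, as in Figure 19.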
FIGURE 17  Histogram of simulated NPV (values in millions). The probability of a negative NPV is 53.73%; the probability of a nonnegative NPV is 46.27%.
PROBLEMS
Group A
1 Dord Motors is considering whether to introduce a new model: the Racer. The profitability of the Racer will depend on the following factors:
■ Fixed cost of developing Racer: Equally likely to be $3 billion or $5 billion.
■ Sales: Year 1 sales will be normally distributed with μ = 200,000 and σ = 50,000. Year 2 sales will be normally distributed with μ = year 1 sales and σ = 50,000. Year 3 sales will be normally distributed with μ = year 2 sales and σ = 50,000. For example, if year 1 sales = 180,000, then the mean for year 2 sales will be 180,000.
■ Price: Year 1 price = $13,000. Year 2 price = 1.05*{(year 1 price) + $30*(% by which year 1 sales exceed expected year 1 sales)}. (The 1.05 is the result of inflation!) Year 3 price = 1.05*{(year 2 price) + $30*(% by which year 2 sales exceed expected year 2 sales)}. For example, if year 1 sales = 180,000, then year 2 price = 1.05*{13,000 + 30(−10)} = $13,335.
■ Variable cost per car: During year 1, the variable cost per car is equally likely to be $5,000, $6,000, $7,000, or $8,000. Variable cost for year 2 = 1.05*(year 1 variable cost). Variable cost for year 3 = 1.05*(year 2 variable cost).
TABLE 3
Number of Competitors    Probability
0    .50
1    .30
2    .10
3    .10

TABLE 5
Time Abandoned    Value Received
End of year 1    $3,000
End of year 2    $2,600
End of year 3    $1,900
End of year 4    $900
Your goal is to estimate the NPV of the new car during its first three years. Assume that cash flows are discounted at 10%; that is, $1 received now is equivalent to $1.10 received a year from now.
a Simulate 400 iterations and estimate the mean and standard deviation of the NPV of the first three years of sales.
b I am 95% sure that the expected NPV of this project is between _____ and _____.
c Use the Target option to determine a 95% confidence interval for the actual NPV of the Racer during its first three years of production.
d Use a tornado graph to analyze which factors are most influential in determining the NPV of the Racer.
2 Truckco produces the Goatco truck. The company wants information about the discounted profits earned during the next three years. During a given year, the total number of trucks sold in the United States is 500,000 + 50,000*GNP − 40,000*INF, where
GNP = % increase in GNP during year
INF = % increase in Consumer Price Index during year
Value Line has made the predictions given in Table 2 for the increase in GNP and INF during the next three years. In the past, 95% of Value Line's GNP predictions have been accurate within 6% of the actual GNP increase, and 95% of Value Line's INF predictions have been accurate within 5% of the actual inflation increase.
At the beginning of each year, a number of competitors may enter the trucking business. At the beginning of a year, the probability that a certain number of competitors will enter the trucking business is given in Table 3. Before competitors join the industry at the beginning of year 1, there are two competitors. During a year that begins (after competitors have entered the business, but before any have left) with c competitors, Goatco will have a market share given by .5*(.9)^c. At the end of each year, there is a 20% chance that each competitor will leave the industry. The sales price of the truck and production cost per truck are given in Table 4.
a Simulate 500 times the next three years of Truckco's profit. Estimate the mean and variance of the discounted three-year profits (use a discount rate of 10%).
b Do the same if during each year there is a 50% chance that each competitor leaves the industry. (Hint: You can model the number of firms leaving the industry in a given period with the RISKBINOMIAL function. For example, if the number of competitors in the industry is in cell A8, then the number of firms leaving the industry during a period can be modeled with the statement =RISKBINOMIAL(A8,.20). Just remember that the RISKBINOMIAL function is not defined if its first argument equals 0.)
Group B
3 You have the opportunity to buy a project that yields at the end of years 1–5 the following (random) cash flows: End of year 1 cash flow is normal with mean 1,000 and standard deviation 200. For t > 1, end of year t cash flow is normal with mean = actual end of year (t − 1) cash flow and standard deviation = .2*(mean of year t cash flow).
a Assuming cash flows are discounted at 10%, determine the expected NPV (in time 0 dollars) of the cash flows of this project.
b Suppose we are given the following option: At the end of year 1, 2, 3, or 4, we may give up our right to future cash flows. In return for doing this, we receive the abandonment value given in Table 5. Assume that we make the abandonment decision as follows: We abandon if and only if the expected NPV of the cash flows from the remaining years is smaller than the abandonment value. For example, suppose end of year 1 cash flow is $900. At this point in time, our best guess is that cash flows from years 2–5 will also be $900. Thus, we would abandon the project at the end of year 1 if $3,000 exceeded the NPV of receiving $900 for four straight years. Otherwise, we would continue. What is the expected value of the abandonment option?
Tom Lingley, an independent contractor, has agreed to build a new room on an existing
house. He plans to begin work on Monday morning, June 1. The main question is when
he will complete his work, given that he works only on weekdays. The owner of the house
is particularly hopeful that the room will be ready by Saturday, June 27, that is, in 20 or
fewer working days. The work proceeds in stages, labeled A through J, as summarized in
Table 7. Three of these activities, E, F, and G, will be done by separate independent sub-
contractors. The expected durations of the activities (in days) are shown in the table. How-
ever, these are only best guesses. Lingley knows that the actual activity times can vary
because of unexpected delays, worker illnesses, and so on. He would like to use computer
simulation to see (1) how long the project is likely to take, (2) how likely it is that the proj-
ect will be completed by the deadline, and (3) which activities are likely to be critical.
Solution We first need to choose distributions for the uncertain activity times. Then, given any ran-
domly generated activity times, we will illustrate a method for calculating the length of
the project and identifying the activities on the critical path.
The Pert Distribution
As always, there are several reasonable candidate probability distributions we could use for the random activity times. Here we illustrate a distribution that has become popular in project scheduling, called the Pert distribution.† As shown in Figure 20, it is a "rounded" version of the triangular distribution that is specified by three parameters: a minimum value, a most likely value, and a maximum value. The distribution in the figure uses the values 7, 10, and 19 for these three values, which implies a mean of 11. We will use this distribution for activity C. Similarly, for the other activities, we choose parameters for the Pert distribution that lead to the means in Table 7. In reality, it would be done the other way around. The contractor would estimate the minimum, most likely, and maximum parameters for the various activities, and the means would follow from these.

TABLE 7
Activity Time Data (Description, Index, Predecessors, Expected Duration; cf. the activity data in Figure 22)
Developing the Simulation Model The key to the model is representing the project network
in activity-on-arc form, as in Figure 21, and then finding Ej for each j, where Ej is the ear-
liest time we can get to node j. When the nodes are numbered so that all arcs go from
lower-numbered nodes to higher-numbered nodes, we can calculate the Ej’s iteratively,
starting with E1 = 0, with the equation
Ej = max(Ei + tij)    (1)
Here, the maximum is taken over all arcs leading into node j, and tij is the activity time
on such an arc. Then En is the time to complete the project, where n is the index of the
finish node. This will make it very easy to calculate the project length.
†
It is named after the acronym PERT (Program Evaluation and Review Technique), which is synonymous with
project scheduling in an uncertain environment.
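Equation (1) becomes a short forward pass once the network is stored as an arc list. The sketch below also samples Pert activity times directly: the Pert distribution is a Beta rescaled to [min, max] with the standard PERT shape parameters, which reproduces the implied means in Figure 22. The arc list is inferred from the event times in Figure 22 and may differ in minor details from Figure 21.

import numpy as np

rng = np.random.default_rng(1)

def pert(lo, ml, hi):
    # One PERT(lo, ml, hi) draw: a Beta rescaled to [lo, hi]
    a = 1 + 4 * (ml - lo) / (hi - lo)
    b = 1 + 4 * (hi - ml) / (hi - lo)
    return lo + (hi - lo) * rng.beta(a, b)

# (tail node, head node, (min, most likely, max)); None marks a dummy arc
ARCS = [(1, 2, (1.5, 3.5, 8.5)),    # A prepare foundation
        (2, 3, (3.0, 4.0, 5.0)),    # B put up frame
        (1, 4, (7.0, 10.0, 19.0)),  # C order custom windows
        (3, 4, None),               # dummy: I needs B as well as C
        (3, 5, (2.0, 2.5, 6.0)),    # D erect outside walls
        (5, 6, (3.0, 3.5, 7.0)),    # E electrical wiring
        (5, 6, (2.0, 2.5, 6.0)),    # F plumbing
        (5, 6, (2.0, 4.0, 6.0)),    # G duct work
        (6, 7, (2.5, 3.0, 3.5)),    # H hang dry wall
        (4, 8, (0.5, 1.0, 1.5)),    # I install windows
        (7, 8, (1.5, 2.0, 2.5))]    # J paint and clean up

def project_length():
    E = {1: 0.0}                               # equation (1): Ej = max(Ei + tij)
    for i, j, p in ARCS:                       # arcs listed in topological order
        t = pert(*p) if p else 0.0
        E[j] = max(E.get(j, 0.0), E[i] + t)
    return E[8]                                # node 8 is the finish node

lengths = np.array([project_length() for _ in range(10_000)])
print(lengths.mean(), (lengths <= 20).mean())  # mean length; P(done in 20 working days)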
FIGURE 22
Project Scheduling Simulation Model
A B C D E F G H I J
1 Room construction project
2
3 Data on activity network Parameters of PERT distributions
4 Activity Code Numeric index Predecessors Min Most likely Max Implied mean Duration Duration+
5 Prepare foundation A 1 None 1.5 3.5 8.5 4 2.158 2.159
6 Put up frame B 2 A 3 4 5 4 4.513 4.513
7 Order custom windows C 3 None 7 10 19 11 9.572 9.572
8 Erect outside walls D 4 B 2 2.5 6 3 3.322 3.322
9 Do electrical wiring E 5 D 3 3.5 7 4 3.282 3.282
10 Do plumbing F 6 D 2 2.5 6 3 2.377 2.377
11 Put in duct work G 7 D 2 4 6 4 4.668 4.668
12 Hang dry wall H 8 E,F,G 2.5 3 3.5 3 3.197 3.197
13 Install windows I 9 B,C 0.5 1 1.5 1 1.384 1.384
14 Paint and clean up J 10 H 1.5 2 2.5 2 1.677 1.677
15
16 Index of activity to increase 1
17
18 Event times
19 Node Event time Event time+
20 1 0 0
21 2 2.158 2.159
22 3 6.671 6.672
23 4 9.572 9.572
24 5 9.993 9.994
25 6 14.661 14.662
26 7 17.858 17.859
27 8 19.536 19.537
28
29 Increase in project time? 1
30
We also need a method for identifying the critical activities for any given activity
times. By definition, an activity is critical if a small increase in its activity time causes
the project time to increase. Therefore, we will keep track of two sets of activity times
and associated project times. The first uses the simulated activity times. The second adds
a small amount, such as 0.001 day, to a “selected” activity’s time. By using the
RISKSIMTABLE function with a list as long as the number of activities, we can make
each activity the “selected” activity in this method. The spreadsheet model appears in Fig-
Projectsim.xls ure 22, and the details are as follows. (See the Projectsim.xls file.)
Inputs Enter the parameters of the Pert activity time distributions in the shaded cells and
the implied means next to them. As discussed above, we actually chose the minimum,
most likely, and maximum values while in @Risk’s Model window to achieve the means
in Table 7. Note that some of these distributions are symmetric about the most likely
value, whereas others are skewed.
Activity Times Generate random activity times in column I by entering the formula
=RISKPERT(E5,F5,G5)
in cell I5 and copying it down.
†
It can be shown mathematically that the expected project time is always greater than the project time obtained by plugging in the expected activity times, as we did in Chapter 7. In other words, an assumption of certainty always leads to an underestimation of the true expected project time.
FIGURE 24  Probabilities of Activities Being Critical
Similarly, the values in the Right X and Right P boxes imply that the chance of the proj-
ect lasting longer than 23 days is slightly greater than 5%. This is certainly not good news
for Lingley, and he might have to resort to the crashing we discussed in Chapter 8.
The summary measures for the B29 output cell appear in Figure 24. Each “simulation”
in this output represents one selected activity being increased slightly. The Mean column
indicates the fraction of iterations where the project time increases as a result of the se-
lected activity’s time increase. Hence, it represents the probability that this activity is crit-
ical. For example, the first activity (A) is always critical, the third activity (C) is never
critical, and the fifth activity (E) is critical about 45% of the time. More specifically, we
see that the critical path always includes activities A, B, D, H, J, and one of the three “par-
allel” activities E, F, and G.
PROBLEMS
Group A
1 The city of Bloomington is about to build a new water treatment plant. Once the plant is designed (D), we can select the site (S), the building contractor (C), and the operating personnel (P). Once the site is selected, we can erect the building (B). We can order the water treatment machine (W) and prepare the operations manual (M) only

TABLE 9
Predecessors    Mean Time    Standard Deviation

TABLE 10
Predecessors    Mean Time    Standard Deviation
We assume the length of time (call it X) until failure of a machine is a continuous random variable having a distribution function F(t) = P(X ≤ t) and a density function f(t). Thus, for small Δt, the probability that a machine will fail between time t and time t + Δt is approximately f(t)Δt. The failure rate of a machine at time t [call it r(t)] is defined to be (1/Δt) times the probability that the machine will fail between time t and time t + Δt, given that the machine has not failed by time t. Thus,

r(t) = (1/Δt)·Prob(X is between t and t + Δt | X > t) ≈ (1/Δt)·(Δt·f(t))/(1 − F(t)) = f(t)/(1 − F(t))
If r(t) is an increasing function of t, the machine is said to have an increasing failure rate
(IFR). If r(t) is a decreasing function of t, the machine is said to have a decreasing fail-
ure rate (DFR).
Consider an exponential distribution, which has f(t) = λe^(−λt) and F(t) = 1 − e^(−λt). Then we find that

r(t) = λe^(−λt)/e^(−λt) = λ
Thus, a machine whose lifetime follows an exponential random variable has constant fail-
ure rate. This is analogous to the no-memory property of the exponential distribution dis-
cussed in Chapter 20.
The random variable that is most frequently used to model the time till failure of a machine is the Weibull random variable. The Weibull random variable has the following density and distribution functions:

f(t) = (a·t^(a−1)/b^a)·e^(−(t/b)^a)
F(t) = 1 − e^(−(t/b)^a)

It can be shown that if a < 1, the Weibull random variable exhibits DFR, and if a > 1,
the Weibull random variable exhibits IFR. The @Risk function RISKWEIBULL(alpha,
beta) will generate an observation for a Weibull random variable having parameters a and
b. If you input the mean and variance of observed machine times to failure into cells D4
and D5, respectively, of workbook Weibest.xls, the workbook computes the unique values
of a and b that yield the observed mean and variance of times to failure. For example,
we see in Figure 25 that if the mean time to machine failure were 12 months and the stan-
dard deviation were 6 months, then a Weibull with a = 2.2 and b = 13.55 would yield
the desired mean and variance.
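What Weibest.xls does can be reproduced with a one-dimensional search, because the coefficient of variation of a Weibull depends only on the shape parameter a: solve for a first, then back out the scale b. A sketch (scipy assumed):

import math
from scipy.optimize import brentq

def weibull_from_mean_sd(mean, sd):
    # Shape a and scale b of a Weibull with the given mean and std deviation
    cv2 = (sd / mean) ** 2
    def f(a):                                   # CV^2 depends only on the shape a
        g1, g2 = math.gamma(1 + 1 / a), math.gamma(1 + 2 / a)
        return g2 / g1 ** 2 - 1 - cv2
    a = brentq(f, 0.1, 50)                      # CV^2 is decreasing in a
    b = mean / math.gamma(1 + 1 / a)
    return a, b

print(weibull_from_mean_sd(12, 6))              # roughly a ~ 2.1-2.2, b ~ 13.5, cf. Figure 25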
FIGURE 26  A parallel system of n components: at least one of the n must work.
Assume that the Hubble telescope contains four large mirrors. The time (in months) un-
til a mirror fails follows a Weibull random variable with a = 25 and b = 50.
a For certain types of pictures to be useful, all mirrors must be working. What is the
probability that the telescope can produce these types of pictures for at least 5 years?
b Certain types of pictures can be taken as long as at least one mirror is working. What
is the probability that these pictures can be taken for at least 7 years?
c Certain types of pictures can be taken as long as at least two mirrors are working.
What is the probability that these pictures can be taken for at least 6 years?
Solution See file Reliability.xls.
Step 1 We begin by generating the length of time until each mirror fails in C3:C6 by
copying from C3 to C4:C6 the formula
=RISKWEIBULL(25,50)
Step 2 Part (a) is a series system. We can take the desired pictures only until the first mirror fails. We compute the time of the first mirror failure in cell C8 with the formula
=MIN(C3:C6)
Step 3 Part (b) is a parallel system. We can take the desired pictures until the time the
last mirror fails. We compute the time the last mirror fails in cell C9 with the formula
=MAX(C3:C6)
Step 4 Part (c) is a 2 out of 4 system. We can take the desired pictures until the time of
the third mirror failure. The time of the third mirror failure is the second largest of the
failure times. We compute the time of the third mirror failing in cell C10 with the
formula
=LARGE(C3:C6,2)
This formula computes the second largest of the mirror failure times. Of course, this is
the time the third mirror fails. See Figure 27.
Step 5 We now select cells C8:C10 as output cells and run 1,000 iterations. After using
targets with the Detailed Statistics output, we obtain the results in Figure 28.
We find in part (a) that there is a 99.54% chance that at least one of the four mirrors will fail within 60 months, and only a .46% chance that all four mirrors will work for at least 60 months. In part (b), we find that there is a 98.29% chance that all four mirrors will fail within 7 years, and only a 1.71% chance that at least one mirror will still be working after 7 years. In part (c), we find that there is a 98% chance that the third mirror failure occurs within 72 months, and only a 2% chance that two or more mirrors will still be working at 72 months.
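The same three outputs can be simulated in a few lines of Python; note that numpy's Weibull generator takes only the shape, so each draw is multiplied by the scale b:

import numpy as np

rng = np.random.default_rng(2)
a, b, N = 25, 50, 10_000
fail = b * rng.weibull(a, size=(N, 4))           # failure times (months) of the 4 mirrors

series = fail.min(axis=1)                         # part (a): all mirrors needed
parallel = fail.max(axis=1)                       # part (b): any one mirror suffices
two_of_four = np.sort(fail, axis=1)[:, 2]         # part (c): second-largest failure time

for name, t, months in [("a", series, 60), ("b", parallel, 84), ("c", two_of_four, 72)]:
    print(name, (t >= months).mean())             # P(system lasts at least that long)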
If we know the distribution of the time till failure of a purchased product, @Risk makes
it a simple matter to estimate the distribution of warranty costs associated with a product.
The idea is illustrated in the following example.
The time until first failure of a refrigerator (in years) follows a Weibull random variable
with a = 6.7 and b = 8.57. If a refrigerator fails within 5 years, we must replace it with
a new refrigerator costing $500. If the replacement refrigerator fails within 5 years, we
must also replace that refrigerator with a new one costing $500. Thus, the warranty stays
in force until a refrigerator lasts at least 5 years. Estimate the average warranty cost in-
curred with the sale of a new refrigerator. (Do not worry about discounting costs.)
Solution See file Refrigerator.xls. We enter the length of time a refrigerator lasts in cell C6 with
the formula
=RISKWEIBULL(6.7,8.57)
We are not sure how many replacement refrigerators we might have to provide for the cus-
tomer. By selecting the Define Distributions icon when we are in cell C6, we can move
the sliders on the Weibull density function and determine the probability that we will have
to replace a given refrigerator. We find that there is only a 2.7% chance that a refrigera-
tor will have to be replaced. Then the chance that at least 5 refrigerators will have to be
replaced is (.027)^5 ≈ 1.4 × 10^−8. Thus, generating only 5 refrigerator lifetimes should give
us an accurate estimate of total cost. We therefore copy the RISKWEIBULL formula from
C6 to C7:C10. See Figure 29.
In cell D6, we compute the cost associated with a sold refrigerator with the
formula
=IF(C6<5,500,0)
In cells D7:D10, we compute the cost (if any) associated with any replacement refriger-
ators by copying from D7 to D8:D10 the formula
=IF(AND(D7>0,C7<5),500,0)
This formula picks up the cost of a replacement if and only if the previous refrigerator
failed and the current refrigerator lasts less than 5 years.
In cell D11, we compute total cost with the formula
=SUM(D6:D10)
After running 1,000 iterations and making cell D11 an output cell (see below), we find
the mean warranty cost per refrigerator to be $14.50. Note that maximum cost was
$1,000, so on at least one iteration, two refrigerators needed to be replaced.
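The warranty logic is equally short outside the spreadsheet. This sketch draws replacement lifetimes until one unit outlives the 5-year warranty, which is what the 5-row spreadsheet approximates:

import numpy as np

rng = np.random.default_rng(3)
a, b, WARRANTY, COST = 6.7, 8.57, 5.0, 500

def warranty_cost():
    total = 0
    while b * rng.weibull(a) < WARRANTY:   # current unit fails inside the warranty
        total += COST                      # so a $500 replacement is shipped
    return total

costs = np.array([warranty_cost() for _ in range(10_000)])
print(costs.mean())                        # roughly $13-15, cf. the $14.50 reported above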
@Risk summary for the output cell:
Name: Total cost    Workbook: Refrigerator.xls    Worksheet: Sheet1    Cell: D11    Minimum: 0    Mean: 14.5    Maximum: 1000
Suppose that market shares between 0% and 60% are possible. A 45% share is most likely.
There are five market-share levels for which we feel comfortable about comparing the rel-
ative likelihoods (see Table 11).
From the table, a market share of 45% is 8 times as likely as 10%; 20% and 55% are
equally likely, etc. This distribution cannot be triangular, because then 20% would be
(20/45) as likely as the peak of 45%. In fact, 20% is .75 as likely as 45%. See Figure 30
and file Riskgeneral.xls for our analysis.
To model market share, enter the formula
=RISKGENERAL(0,60,{10,20,45,50,55},{1,6,8,7,6})
TABLE 11
Market Share Relative Likelihood
10% 1
20% 6
45% 8
50% 7
55% 6
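RISKGENERAL defines a density that interpolates linearly through the listed (value, relative likelihood) points, is 0 at the minimum and maximum, and is rescaled to integrate to 1. A sketch of an equivalent sampler in Python, using a fine grid and inverse-CDF sampling:

import numpy as np

rng = np.random.default_rng(4)

def risk_general(lo, hi, xs, ws, size):
    # Sample a piecewise-linear density through (xs, ws), zero at lo and hi
    grid = np.linspace(lo, hi, 2001)
    dens = np.interp(grid, [lo, *xs, hi], [0, *ws, 0])     # relative likelihoods
    cdf = np.cumsum((dens[:-1] + dens[1:]) / 2 * np.diff(grid))
    cdf = np.concatenate([[0.0], cdf]) / cdf[-1]            # normalize to a CDF
    return np.interp(rng.uniform(size=size), cdf, grid)     # inverse-CDF sampling

share = risk_general(0, 60, [10, 20, 45, 50, 55], [1, 6, 8, 7, 6], 10_000)
print(share.mean())                                         # mean simulated market share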
FIGURE 31  Density function of the market-share random variable (horizontal axis: market share, 0 to 60).
FIGURE 32
Share    Likelihood
0        0
10       1
20       6
45       8
50       7
55       6
60       0
(The embedded chart plots likelihood against share.)
Suppose we select the Define Distributions icon. Then we choose the RISKGENERAL
random variable and select Apply. Now we can directly insert the RISKGENERAL (or
any other) random variable into a cell.
After entering the appropriate parameters for the RISKGENERAL random variable,
we will see the histogram shown in Figure 33. We are also given statistical information,
such as the mean and variance, for the random variable. If we select Apply, the formula
defining the desired RISKGENERAL random variable will be entered into the cell.
FIGURE 33
EXAMPLE 8 RISKCUMULATIVE
A large auto company’s net income for North American operations (NAO) for the next
year may be between 0 and $10 billion. The auto company estimates there is a 10% chance
that net income will be less than or equal to $1 billion, a 70% chance that net income will
be less than or equal to $5 billion, and a 90% chance that net income will be less than or
equal to $9 billion. Use @Risk to simulate NAO’s net income for the next year.
FIGURE 34
Cumulative distribution (Cumulative.xls):
Min    0
Max    10
x      P(X<=x)    Slope
1      0.1        0.1
5      0.7        0.15
9      0.9        0.05
>9                0.1
Cell D5 (sample value 4.2): =RiskCumul(B3,B4,A6:A8,B6:B8)

@Risk summary statistics for the output in D5 (Name: P(X<=x)):
Minimum 4.89E-03    Maximum 9.999967    Mean 4.199986
Std Deviation 2.773699    Variance 7.693407
Skewness 0.589373    Kurtosis 2.285831    Errors Calculated 0    Mode 3.43314
Percentiles:
 5%  0.497997
10%  0.999338   (10%ile is 1!)
15%  1.333212
20%  1.665637
25%  1.996866
30%  2.332803
35%  2.664376
40%  2.996635
45%  3.330816
50%  3.663554
55%  3.995894
60%  4.331350
65%  4.664128
70%  4.997442   (70%ile is 5!)
75%  5.995409
80%  6.993743
85%  7.991090
90%  8.989162   (90%ile is near 9)
95%  9.499336
Solution Our work is in the file Cumulative.xls. See Figure 34. The RISKCUMULATIVE function
takes as inputs (in order) the following quantities:
■ The smallest value assumed by the random variable
■ The largest value assumed by the random variable
■ Intermediate values assumed by the random variable
■ For each intermediate value, the cumulative probability that the random variable
is less than or equal to the intermediate value
In cell D5, we enter the following formula to simulate NAO’s annual net income:
=RISKCUMUL(B3,B4,A6:A8,B6:B8)
We could have also used the following formula in cell D4:
=RISKCUMUL(0,10,{1,5,9},{0.1,0.7,0.9})
@Risk will now ensure that
■ For net income x between 0 and $1 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.1 − 0)/(1 − 0) = .1.
■ For net income x between $1 billion and $5 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.7 − .1)/(5 − 1) = .15.
■ For net income x between $5 billion and $9 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.9 − .7)/(9 − 5) = .05.
■ For net income x greater than $9 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (1 − .9)/(10 − 9) = .10.
After running 1,600 iterations we found the output in Figure 34. Note that the 10th
percentile of the random variable is near 1, the 70th percentile is near 5, and the 90th per-
centile is near 9. Figure 35 displays a cumulative ascending graph of net income. Note
that (as described previously) the slope of the graph is relatively constant between 0 and
1, between 1 and 5, between 5 and 9, and between 9 and 10.
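Sampling a distribution specified by cumulative points is even simpler, because the CDF itself is given: pushing uniform draws through linear interpolation of the (probability, value) pairs reproduces exactly the constant slopes listed above. A sketch:

import numpy as np

rng = np.random.default_rng(5)

def risk_cumul(lo, hi, xs, ps, size):
    # Sample a CDF passing linearly through (lo, 0), (xs, ps), (hi, 1)
    return np.interp(rng.uniform(size=size), [0, *ps, 1], [lo, *xs, hi])

income = risk_cumul(0, 10, [1, 5, 9], [0.1, 0.7, 0.9], 10_000)
for q in (10, 70, 90):
    print(q, np.percentile(income, q))   # near 1, 5, and 9, as in Figure 34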
EXAMPLE 9 RISKTRIGEN
Eli Lilly believes there is a 10% chance that its new drug Niagara’s market share will be
25% or less, a 10% chance that market share will be 70% or more, and the most likely
market share is 40%. Use @Risk to model the market share for Niagara.
Solution Our work is in the file Risktrigen.xls. See Figure 36. In B7, we just entered the formula
=RISKTRIGEN(B3,B4,B5,10,90)
FIGURE 36
trigen function
10%ile         0.25
Most likely    0.40
90%ile         0.70
share          0.464537
The file Drugforecast.xls contains actual and forecast sales (in millions of d.o.t.) for the
years 1995–2002. See Figure 39. The forecast for 2003 is that 60 million d.o.t. will be
sold. How would you model actual sales of the drug for 2003?
Solution Step 1 In cells F5:F12, check for bias by computing actual sales/forecast sales for each
year. To do this, copy from F5 to F6:F12 the formula
=D5/E5
Step 2 In cell F2, compute the bias of the original forecasts by averaging each year’s ac-
tual/forecast sales.
=AVERAGE(F5:F12)
We find that actual sales tend to come in 8% under forecast.
Step 3 In G5:G12, correct past biased forecasts by multiplying them by .92. Simply copy
from G5 to G6:G12 the formula
=$F$2*E5
†
To see if the bias is significantly different from 1, compute
t = (average of (actual)/(forecast) − 1)/(s/√n)
where s is the standard deviation of the yearly (actual)/(forecast) ratios and n is the number of years. If |t| exceeds t(α/2, n−1), then there is significant bias. We usually choose α = .05.
FIGURE 40
Mean 2003     55.08187
Sigma 2003    6.2657
Step 4 In H5:H12, compute each year’s percentage error for the unbiased forecast. Copy
from H5 to H6:H12 the formula
=D5/G5
Step 5 In cell I2, compute the standard deviation of the percentage errors with the
formula
=STDEV(H5:H12)
We find that the standard deviation of past unbiased forecasts has been around 11% of
the unbiased forecast. We now model the 2003 sales of the drug (in millions of d.o.t.) with
the formula
=RISKNORMAL(60*.918, (60*.918)*.114), or =RISKNORMAL(55.08, 6.27)
See Figure 40.
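The whole recipe (de-bias the forecasts, measure the error spread, then sample) fits in a few lines of Python. The actual and forecast arrays below are hypothetical stand-ins for columns D and E of Drugforecast.xls, chosen to have a bias ratio near .92:

import numpy as np

rng = np.random.default_rng(6)
# Hypothetical stand-ins for columns D (actual) and E (forecast) of Drugforecast.xls
actual = np.array([52.0, 48.0, 55.0, 60.0, 58.0, 61.0, 63.0, 66.0])
forecast = np.array([56.0, 54.0, 58.0, 65.0, 64.0, 66.0, 68.0, 73.0])

bias = (actual / forecast).mean()            # step 2: multiplicative bias
ratios = actual / (bias * forecast)          # step 4: actual over unbiased forecast
sigma = ratios.std(ddof=1)                   # step 5: spread of those ratios

f2003 = 60.0                                 # 2003 forecast, millions of d.o.t.
sales_2003 = rng.normal(bias * f2003, bias * f2003 * sigma, size=10_000)
print(sales_2003.mean(), sales_2003.std())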
Suppose GM CEO Rick Waggoner has received the following forecast for quarterly net
income (in billions of dollars) for Europe, NAO, Latin America, and Asia. See Figure 41
and file Corrinc.xls.
For example, we believe Latin American income will be on average $.4 billion. Based
on past forecast records, the standard deviation of forecast errors is 25%, so the standard
deviation of net income is $.1 billion. We assume that actual income will follow a nor-
mal distribution. Historically, net income in different parts of the world has been corre-
lated. Suppose the correlations are as given in C11:F14 of Figure 41. Latin America and Europe are
most correlated, and Asia and NAO are least correlated. What is the probability that total
net income will exceed $4 billion?
Solution To correlate the net incomes of the different regions, we use the RISKCORRMAT func-
tion. The syntax is as follows:
Actual @Risk formula, RISKCORRMAT(correlation matrix, relevant column
of matrix)
where
Correlation matrix: cells where correlations between variables are located
Relevant column: column of correlation matrix that gives correlations for this cell
Actual @Risk formula: distribution of the random variable
FIGURE 41
Net Income Consolidation with correlation (goal is $4 billion!)
           Mean    Std Dev    Actual
1 LA       0.4     0.1        0.449011    0.521472
2 NAO      2.0     0.4        1.256578    1.264837
3 Europe   1.1     0.3        1.142030    0.994558
4 Asia     0.8     0.3        0.685143    0.707549
Total!!                       3.532761    3.488417

Correlations    LA     NAO    Europe    Asia
LA              1      0.6    0.7       0.5
NAO             0.6    1      0.6       0.4
Europe          0.7    0.6    1         0.5
Asia            0.5    0.4    0.5       1
FIGURE 42
@Risk summary statistics for the output Total!! / Actual (cell E8):
Minimum 1.858541    Maximum 6.711910    Mean 4.300031
Std Deviation 0.895158    Variance 0.801308
Skewness -5.82E-02    Kurtosis 2.894021    Errors Calculated 0    Mode 4.470891
Percentiles:
 5%  2.756473
10%  3.186955
15%  3.364678
20%  3.554199
25%  3.715597
30%  3.854618
35%  3.966330
40%  4.080534
45%  4.173182
50%  4.306374
55%  4.413318
60%  4.530555
65%  4.632649
70%  4.777600
75%  4.907873
80%  5.044960
85%  5.216321
90%  5.456462
95%  5.758535
Step 1 Generate actual Latin American income in cell E4 with the formula
=RISKNORMAL(C4,D4,RISKCORRMAT($C$11:$F$14,A4))
This ensures that the correlation of Latin American income with other incomes is created
according to the first column of C11:F14. Also, Latin American income will be normally
distributed, with a mean of $.4 billion and standard deviation of $.1 billion.
Step 2 Copying the formula in E4 to E5:E7 (respectively) generates the net income in
each region and tells @Risk to use the correlations in C11:F14.
Step 3 In cell E8, compute total income with the formula
=SUM(E4:E7)
Step 4 Cell E8 has been made the output cell. We find from Targets (value of 4) that
there is a 36% chance of not meeting the $4 billion target. Also, the standard deviation
of net income is $895 million. See Figure 42.
FIGURE 43
Target #1 (Value) = 4
Target #1 (Perc%) = 30.76%   (31% chance we fail to meet target)
FIGURE 44
Iteration data (outputs E8 and E4:E7) and the correlations computed from them:
Iteration#   Total!!    LA        NAO       Europe    Asia
1            4.804644   0.478546  2.196594  1.351783  0.777721
2            4.132098   0.441263  1.699526  1.184871  0.806438
3            6.129157   0.496915  2.453791  1.912550  1.265901
4            6.547440   0.578960  2.424948  1.968532  1.574999
5            3.057065   0.319965  1.517732  0.968105  0.251263
6            5.324339   0.488499  2.292126  1.084479  1.459235
...
899          4.735623   0.469691  2.199030  1.466369  0.600534
900          4.901974   0.507751  2.242637  1.004801  1.146786

Computed correlations:
          LA        NAO       Europe    Asia
LA        1
NAO       0.591262  1
Europe    0.702735  0.587704  1
Asia      0.498132  0.399115  0.496651  1
We can check that @Risk actually did correctly correlate net incomes. Make sure to check
Collect Distribution Samples when you run the simulation. Once you have run the simu-
lation, select the Data option from the Results menu. The results of each iteration will ap-
pear in the bottom half of the screen. You can copy and paste this data to a blank worksheet. See Figure 44. Now check the correlations between each region's net income with the Data Analysis Correlation tool: select Tools > Data Analysis > Correlation and fill in the dialog box as in Figure 45. Note that the correlations between the net incomes are virtu-
ally identical to what we entered in the spreadsheet.
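Conceptually, correlated sampling can be approximated with a Gaussian copula: draw correlated standard normals via a Cholesky factor of the correlation matrix, then scale to each region's mean and standard deviation. (@Risk itself induces rank-order correlations, in the style of Iman and Conover, so the sketch below is an approximation of RISKCORRMAT, not the same algorithm.)

import numpy as np

rng = np.random.default_rng(7)
means = np.array([0.4, 2.0, 1.1, 0.8])           # LA, NAO, Europe, Asia
sds = np.array([0.1, 0.4, 0.3, 0.3])
corr = np.array([[1.0, 0.6, 0.7, 0.5],
                 [0.6, 1.0, 0.6, 0.4],
                 [0.7, 0.6, 1.0, 0.5],
                 [0.5, 0.4, 0.5, 1.0]])

L = np.linalg.cholesky(corr)                      # corr = L @ L.T
z = rng.standard_normal((10_000, 4)) @ L.T        # correlated standard normals
total = (means + sds * z).sum(axis=1)
print((total < 4).mean(), total.std())            # chance of missing $4 billion; sd near .895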
When trying to model volume of sales for a new product in the auto and drug industries,
it is common to look for similar products sold in the past. We often have knowledge of
the following:
■ Accuracy of forecasts for year 1 sales volume
■ Data on how sales change after the first year
Consider Figure 46—data on actual and forecast year 1 sales for seven similar products.
See file Volume.xls. For example, for product 1, actual year 1 sales were 80,000; the fore-
cast for year 1 was 44,396. The percentage change in sales from year to year for the seven
products is given in Figure 47.
For example, product 1 sales went up 43% during the second year, 33% during the
third year, etc.
Suppose we forecast year 1 sales to be 90,000 units. How can we model the uncertain
volume in product sales?
Step 1 From cell D11 (formula =AVERAGE(D4:D10)) of Figure 46, we see that past forecasts for year 1 sales of similar products have underestimated actual sales by 36.3% on average.
FIGURE 47
A B C D E F G H I J
13 Scenario Year 2 Year 3 Year 4 Year 5 Year 6 Year 7 Year 8 Year 9 Year 10
14 1 1.43 1.33 0.93 0.75 0.57 0.40 0.37 0.38 0.24
15 2 1.39 1.13 0.96 0.59 0.49 0.45 0.46 0.40 0.24
16 3 1.30 1.38 0.98 0.84 0.80 0.65 0.57 0.48 0.35
17 4 1.47 1.49 1.36 1.15 1.20 1.15 0.93 0.99 0.71
18 5 1.23 1.06 0.73 0.45 0.39 0.31 0.28 0.23 0.15
19 6 1.26 1.22 1.08 0.79 0.77 0.70 0.60 0.60 0.49
20 7 1.30 1.02 0.84 0.62 0.45 0.32 0.27 0.24 0.22
Step 2 Therefore, we can create unbiased forecasts in column E by copying the formula
=$D$11*C4
from E4 to E5:E10.
Step 3 In column F, we compute the percentage error of our unbiased forecasts. In cell
F4, we compute the percentage error for product 1 with the formula
=B4/E4
Copying this formula from F4 to F5:F10 generates percentage errors for the other
products.
Step 4 In cell F11, we compute the standard deviation (26.7%) of these percentage er-
rors with the formula
=STDEV(F4:F10)
We are now ready to model 10 years of sales for the new product. To generate year 1 sales,
we model year 1 sales to be normally distributed, with a mean of 1.36*90,000 and a stan-
dard deviation of .267*(1.36*90,000). To model sales for years 2–10, we use @Risk to
randomly choose one of the seven volume-change patterns (or scenarios) from Figure 47.
Then we use the chosen scenario to generate sales growth for years 2–10.
Step 5 In cell G4, we choose a scenario with the formula
=RISKDUNIFORM(A14:A20)
This formula gives a 1/7 chance of choosing each scenario.
The simulated output begins as follows: Year 1 Forecast = 90,000; scenario 4 was chosen, giving unit sales for years 1–10 of
102588.9, 151164, 225922, 306360.9, 351610, 420801.1, 484511.5, 451618.5, 445300.1, 314821.9
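Putting the pieces together, the volume model is: de-bias the year 1 forecast, draw year 1 volume, then bootstrap one of the seven observed growth paths. A sketch using the Figure 47 scenarios and the Figure 46 statistics:

import numpy as np

rng = np.random.default_rng(8)
# Growth factors for years 2-10, one row per historical scenario (Figure 47)
SCENARIOS = np.array([
    [1.43, 1.33, 0.93, 0.75, 0.57, 0.40, 0.37, 0.38, 0.24],
    [1.39, 1.13, 0.96, 0.59, 0.49, 0.45, 0.46, 0.40, 0.24],
    [1.30, 1.38, 0.98, 0.84, 0.80, 0.65, 0.57, 0.48, 0.35],
    [1.47, 1.49, 1.36, 1.15, 1.20, 1.15, 0.93, 0.99, 0.71],
    [1.23, 1.06, 0.73, 0.45, 0.39, 0.31, 0.28, 0.23, 0.15],
    [1.26, 1.22, 1.08, 0.79, 0.77, 0.70, 0.60, 0.60, 0.49],
    [1.30, 1.02, 0.84, 0.62, 0.45, 0.32, 0.27, 0.24, 0.22]])

FORECAST, BIAS, SIGMA = 90_000, 1.36, 0.267       # from Figure 46 (cells D11 and F11)

def ten_year_volumes():
    year1 = rng.normal(BIAS * FORECAST, SIGMA * BIAS * FORECAST)
    path = SCENARIOS[rng.integers(len(SCENARIOS))]  # like =RISKDUNIFORM(A14:A20)
    return year1 * np.concatenate([[1.0], np.cumprod(path)])

print(ten_year_volumes().round(0))                  # one simulated 10-year sales path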
Step 2 For each curve and each data point, compute the percentage error
(Actual value of Y − Predicted value of Y)/(Predicted value of Y)
Step 3 For each curve, compute mean absolute percentage error (MAPE) by averaging
the absolute percentage errors.
Step 4 Choose the curve that yields the lowest MAPE as the best fit.
We are not sure of the cost of building capacity for a new drug, but we believe that costs
will run around 50% more (in real terms) than for the drug Zozac. Table 12 gives data on
the costs incurred when capacity was built for Zozac.
For example, when 110,000 units of capacity for Zozac were built, the cost was
$654,000 (in today’s dollars). How would you model the uncertain cost of building ca-
pacity for the new product?
Solution See the file Capacity.xls.
Step 1 To begin, we plot the best-fitting straight line, power curve, and exponential
curve. To do this, use Chart Wizard (X-Y option 1) and click on points till they turn gold.
Next, choose the desired curve and select R-SQ and the Equation option. We obtain the
graphs in Figures 49–51.
Step 2 In C3:E8 (see Figure 52), we compute the predictions for each curve. In C3:C8,
we compute the straight-line predictions by copying from C3 to C3:C8 the formula
=5.0623*A3+77.516
In D3:D8, we compute the power curve prediction by copying from D3 to D3:D8 the
formula
=13.483*A3^0.8229
In E3:E8, we compute the exponential curve predictions by copying from E3 to E3:E8 the
formula
=164.52*EXP(0.0114*A3)
Step 3 In F3:H8, we use
(Actual value of Y − Predicted value of Y)/(Predicted value of Y)
to compute the percentage error for each model.
TABLE 12
Capacity (thousands)    Cost ($ thousands)
20     156
50     350
80     490
110    654
140    760
160    890
FIGURE 50  Power curve fit to the capacity-cost data: y = 13.483x^0.8229, R² = 0.9983.
FIGURE 51  Exponential curve fit to the capacity-cost data: y = 164.52e^(0.0114x), R² = 0.9103.
FIGURE 52  Capacity Cost Modeling spreadsheet (predictions and errors).
(See Figure 53.) To do this, simply copy the formula
=($B3-C3)/C3
from F3 to F3:H8.
Step 4 In I3:K9, we compute the MAPE for each equation. We begin by computing the
absolute percentage error for each point and each curve by copying the formula
=ABS(F3)
from I3 to I3:K8.
Next we compute the MAPE for each equation by copying the formula
=AVERAGE(I3:I8)
from I9:K9.
Step 5 We find that the power curve (see J9) has the lowest MAPE. Therefore, we model
the cost of adding capacity with a power curve. By entering in G9 the formula
STDEV(G3:G8)
we find 2.6% to be the standard deviation of the percentage errors for the power curve.
We now model the cost of adding capacity for the new product with the formula
=1.5*RISKNORMAL(13.483*(Capacity)^0.8229, .026*13.483*(Capacity)^0.8229)
That is, our best guess for the cost of adding capacity has a mean equal to the power curve
forecast and a standard deviation equal to 2.6% of our forecast.
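The fit-and-compare procedure is just three regressions, since the power and exponential fits are linear after taking logs (which is how Excel's chart trendlines compute them). A sketch on the Table 12 data:

import numpy as np

x = np.array([20, 50, 80, 110, 140, 160])        # capacity (thousands), Table 12
y = np.array([156, 350, 490, 654, 760, 890])     # cost ($ thousands)

b1, b0 = np.polyfit(x, y, 1)                     # linear: y = b1*x + b0
p1, p0 = np.polyfit(np.log(x), np.log(y), 1)     # power: ln y = p1*ln x + p0
e1, e0 = np.polyfit(x, np.log(y), 1)             # exponential: ln y = e1*x + e0

fits = {"linear": b1 * x + b0,
        "power": np.exp(p0) * x ** p1,
        "exponential": np.exp(e0) * np.exp(e1 * x)}
for name, pred in fits.items():
    mape = np.abs((y - pred) / pred).mean()      # mean absolute percentage error
    print(name, round(mape, 4))                  # the power curve wins, cf. Figure 53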
We are bidding against a competitor for a construction project and want to model her bid.
In the past, her bid has been closely related to our (estimated) cost of completing the proj-
ect. See file Biddata.xls and Figure 54.
Figures 55–57 give the best fitting linear, power, and exponential curves.
As in Example 12, we compute predictions and MAPEs for each curve (see Figure 58).
The linear curve has the smallest MAPE. Computing the actual errors for the linear
curve’s predictions (in column F) and their standard deviation, we find a standard devia-
tion of .94. Therefore, we model our competitor’s bid as
=RISKNORMAL(1.489*(Our cost) − 1.7873, .94)
FIGURE 55  Linear curve fit (Comp1 bid vs. our cost): y = 1.489x − 1.7873, R² = 0.9969.
FIGURE 56  Power curve fit (Comp1 bid vs. our cost): y = 1.1671x^1.0586, R² = 0.997.
FIGURE 57  Exponential curve fit (Comp1 bid vs. our cost): y = 10.549e^(0.044x), R² = 0.9495.
For similar products, the year after the first competitor comes in has historically shown a
significant price drop. Figure 59 contains data on this situation.
For example, for the first product, a competitor entered in year 1. During year 2, a 22%
price drop was observed, after allowing for a normal inflationary increase of 5% during
the second year. Model the effect on price the year after the first competitor enters the
market. See file Pricedata.xls.
Solution Figures 60–62 give the best-fitting linear, power, and exponential curves. The extremely
low R2 values imply that the year of entry has little or no effect on the price drop the year
after the first competitor comes in. Therefore, we model price drop as a RISKNORMAL
function, using the mean and standard deviation found in D14 and D15. If a competitor
enters during year t, we would model the year t + 1 price with the formula
=1.05*(year t price)*RISKNORMAL(.803,.0366)
Note: .803 = 1 − .197.
FIGURE 61  Power curve fit (price drop next year vs. year of competitor entry): y = 19.749x^(−0.0169), R² = 0.0034.
FIGURE 62  Exponential curve fit (price drop next year vs. year of competitor entry): y = 18.967e^(0.0066x), R² = 0.0036.
Here, the assumption is that the price drop during a year is normally distributed. To check this, we could compute the skewness (with the SKEW function) and kurtosis (with the KURT function) of the data. If both the skewness and kurtosis are near 0, the price drop is probably normally distributed. An alternate approach to modeling the drop in price is to use the formula =RISKDUNIFORM(D4:D13). This ensures that the drop in price is equally likely to assume one of the observed values. This approach has the advantage of not automatically assuming normality. The disadvantage, however, is that using the RISKDUNIFORM function implies that only 10 values of price drop are possible.
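A sketch comparing the two approaches side by side; the ten drops below are hypothetical stand-ins for D4:D13, chosen so their mean is .197:

import numpy as np

rng = np.random.default_rng(9)
# Hypothetical stand-ins for the ten observed price drops in D4:D13
drops = np.array([0.22, 0.19, 0.16, 0.21, 0.24, 0.18, 0.20, 0.17, 0.23, 0.17])

price_t = 100.0
normal_model = 1.05 * price_t * rng.normal(1 - drops.mean(), drops.std(ddof=1), 10_000)
empirical = 1.05 * price_t * (1 - rng.choice(drops, 10_000))  # like =RISKDUNIFORM(D4:D13)

print(normal_model.mean(), normal_model.std())
print(empirical.mean(), empirical.std())

The means agree closely; the difference is that the empirical version can never produce a drop outside the observed range, while the normal version can.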