
FIGURE 13 @Risk statistical output for the output cell "npv cash flows" (gmcashflowdecay.xls, Sheet1!D22): mean 4.31E+07, standard deviation 9.92E+07, minimum -2.19E+08, maximum 2.55E+08; 5th percentile -1.35E+08, 95th percentile 1.90E+08. A 95% confidence interval for the mean NPV runs from 3.69E+07 (lower) to 4.94E+07 (upper), computed as mean ± 2(9.92E+07)/sqrt(1000).

Step 8 Copying from E16 to F16:I16 the formula


=E14*E12
computes the variable cost for each year.
Step 9 In cells E17:I17, compute the depreciation for each of years 1–5 by copying from
E17 to F17:I17 the formula
=$D$11/5
Step 10 By copying from E18 to F18:I18 the formula
=E15-E16-E17
we determine before-tax profit for years 1–5.

23.2 Modeling Cash Flows from a New Product 1225


FIGURE 14 Distribution for npv cash flows (cell D22): histogram of simulated NPVs (values in millions, roughly -250 to 300), mean = 4.31E+07; 31.88% of the probability lies below 0 and 68.12% above.

Step 11 By copying from E19 to F19:I19 the formula


=(1-tax_rate)*E18
we determine after-tax profit for years 1–5.
Step 12 By copying from E20 to F20:I20 the formula
=E19+E17
we add each year’s depreciation to its after-tax profit to compute the year’s cash flow.
Step 13 Assuming end-of-year cash flows, the formula
=NPV(0.15,D20:I20)
in cell D22 computes the NPV of all cash flows.
Step 14 After making cell D22 an output cell and running 1,000 iterations, we obtain the
statistical output shown in Figure 13 and the graphical output in Figure 14.
From Figure 13, the mean NPV of cash flows (or risk-adjusted NPV) is $43 million.
We are 95% certain that mean NPV is between $37 million and $49 million. Figure 14
shows that there is a 32% chance the project will have cash flows with a negative NPV
(thereby reducing the company’s value) and a 68% chance that cash flows will have a pos-
itive NPV.
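The logic of Steps 8–14 and the confidence interval reported in Figure 13 can be sketched in Python. The cash-flow distribution below is illustrative only (the chapter's actual input distributions are defined earlier in the model), but the mean ± 2·SD/√n interval is the same calculation @Risk's output reports.

```python
import random
import statistics

def simulate_npv(rate=0.15, n_years=5):
    """One iteration of a hypothetical new-product model: a year-0 fixed
    cost followed by uncertain end-of-year cash flows (illustrative
    triangular distributions, not the chapter's exact inputs)."""
    cash_flows = [-1.6e8]  # year-0 outlay, left undiscounted here
    for _ in range(n_years):
        # random.triangular(low, high, mode)
        cash_flows.append(random.triangular(0.0, 1.2e8, 6.0e7))
    # discount each end-of-year flow back to time 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(1)
n_iter = 1000
npvs = [simulate_npv() for _ in range(n_iter)]
mean = statistics.mean(npvs)
sd = statistics.stdev(npvs)
half_width = 2 * sd / n_iter ** 0.5   # the rule behind Figure 13's interval
ci = (mean - half_width, mean + half_width)
```

With the chapter's numbers (mean 43.1, SD 99.2, n = 1,000, in millions of dollars), the same rule gives 43.1 ± 2(99.2)/√1000, i.e., roughly $37 million to $49 million.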

The Lilly Model

In the car business, a new model's unit sales virtually always decline year after year. A new drug, however, sees sales increase for the first few years, followed by declining sales. To model this form of the product life cycle, we must incorporate the following sources of uncertainty. (Note that we assume that the total number of years for which the drug is sold is known.)
■ Number of years for which unit sales increase
■ Average annual percentage increase in sales during the sales-increase portion of
the sales period
■ Average annual percentage decrease in sales during the sales-decrease portion of
the sales period

1226 CHAPTER 23 Simulation with the Excel Add-in @Risk


FIGURE 15 The growth-then-decay model in Lillygrowth.xls. Input cells: length of growth 5, tax rate 0.4, cost growth 0.04, discount rate 0.15, growth rate 0.0553, decay rate 0.1178. For years 0–10 the spreadsheet tracks Cost (1.60E+09 at year 0), Unit Sales (growing from 1.12E+05 in year 1 to 1.47E+05 in year 6, then decaying to 8.88E+04), Price ($15,000 throughout), Unit cost (rising 4% a year from $10,000), Revenues, Variable Cost, Depreciation (1.60E+08 a year), Before-tax profit, After-tax profit, and Cash flow. Cell D22: npv cash flows = ($290,597,621.28).

Example 3 shows how to model this type of product life cycle. See the file Lillygrowth.xls and Figure 15.

EXAMPLE 3 Eli Lilly

Lilly is producing a new drug that will be sold for 10 years. Year 1 unit sales are assumed
to follow a triangular random variable with worst case 100,000 units, most likely case
150,000, and best case 170,000. The year 0 fixed cost of developing the drug is $1.6 bil-
lion, to be depreciated on a 10-year straight-line basis. Sales are equally likely to increase
for 3, 4, 5, or 6 years, with the average percentage increase during those years following
a triangular random variable with worst case 5%, most likely case 8%, and best case 10%.
During the remainder of the 10-year sales life of the drug, unit sales will decrease at a
rate governed by a triangular random variable having best case 8%, most likely case 12%,
and worst case 18%. During each year, a unit of the drug sells for $15,000. Year 1 vari-
able cost of producing a unit of the drug is $10,000. The unit variable cost of producing
the drug increases at 4% a year.
a Estimate the mean NPV of the drug’s cash flows.
b What is the probability that the drug will add value to Lilly?
c What source of uncertainty is the most important driver of the drug’s NPV?
Solution After dragging our formulas to create years 6–10 and changing the depreciation in row
17 to be over a 10-year period, we simulate random variables in D3 (length of sales in-
crease), D7 (annual percentage rate of sales increase), and D8 (annual percentage rate of
sales decrease) with the following formulas
Cell D3: =RISKDUNIFORM({3,4,5,6})
The RISKDUNIFORM variable is a discrete random variable that assigns equal proba-
bility to each listed value.
Cell D7: =RISKTRIANG(0.05,0.08,0.1)
Cell D8: =RISKTRIANG(0.08,0.12,0.18)



In cell E12, we generate year 1 unit sales with the formula
=RISKTRIANG(100000,150000,170000)
Copying from F12 to G12:N12 the formula
=IF(F10<=length_of_growth+1,E12*(1+growth_rate),E12*(1-decay_rate))
generates unit sales for years 2–10. Note that our formula increases annual sales by the
growth rate for length-of-growth years and decreases annual sales by decay rate during
later years. (D3 is named length_of_growth, D7 is named growth_rate, and D8 is named
decay_rate.)
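The unit-sales recursion can be sketched in Python, with `random.choice` and `random.triangular` standing in for RISKDUNIFORM and RISKTRIANG. The distribution parameters are those of Example 3; the assumption that the growth window covers years 2 through length_of_growth + 1 matches Figure 15, where a growth length of 5 produces increases through year 6.

```python
import random

def unit_sales_path(n_years=10):
    """Simulate a Lilly-style growth-then-decay unit-sales path."""
    length_of_growth = random.choice([3, 4, 5, 6])       # like RISKDUNIFORM({3,4,5,6})
    growth_rate = random.triangular(0.05, 0.10, 0.08)    # like RISKTRIANG(0.05,0.08,0.1)
    decay_rate = random.triangular(0.08, 0.18, 0.12)     # like RISKTRIANG(0.08,0.12,0.18)
    sales = [random.triangular(100_000, 170_000, 150_000)]  # year 1 units
    for year in range(2, n_years + 1):
        if year <= length_of_growth + 1:                 # growth portion
            sales.append(sales[-1] * (1 + growth_rate))
        else:                                            # decay portion
            sales.append(sales[-1] * (1 - decay_rate))
    return sales

random.seed(7)
path = unit_sales_path()
```

Because the growth length is at least 3 years, sales always rise in year 2 and always decay by year 10, mirroring the shape in Figure 15.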
We used Autoconvergence to determine the number of iterations for @Risk to run. Un-
der Simulation Settings, selecting Iterations Auto and a change of 1% ensures that @Risk
will keep running iterations until, during the last 100 iterations, the mean, standard devi-
ation, and selected other statistics change by 1% or less. In this example, @Risk ran 1,800
iterations, yielding the results in Figure 16. There was an estimated mean NPV of -$29 million and a 54% chance of negative NPV. Right-clicking on NPV from the Explorer interface yields the histogram in Figure 17. The histogram shows a 53% chance that the drug will decrease Lilly's NPV.
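The idea behind Autoconvergence can be sketched as a loop: simulate in batches and stop once the running summary statistics stabilize. The function below is a simplified stand-in for @Risk's rule (it checks only the mean and standard deviation, in batches of 100, with a 1% tolerance); the sampled distribution is illustrative.

```python
import random
import statistics

def run_until_converged(sample_once, batch=100, tol=0.01, max_iter=20_000):
    """Keep simulating in batches until the running mean and standard
    deviation each change by tol (here 1%) or less after a new batch."""
    values = [sample_once() for _ in range(2 * batch)]
    prev_mean, prev_sd = statistics.mean(values), statistics.stdev(values)
    while len(values) < max_iter:
        values.extend(sample_once() for _ in range(batch))
        mean, sd = statistics.mean(values), statistics.stdev(values)
        if (abs(mean - prev_mean) <= tol * abs(prev_mean)
                and abs(sd - prev_sd) <= tol * sd):
            break
        prev_mean, prev_sd = mean, sd
    return values

random.seed(3)
# stand-in for one iteration's NPV draw (illustrative distribution)
results = run_until_converged(lambda: random.triangular(-250, 250, 50))
```

Because the running mean of n iterations moves by roughly 1/n of each new batch's deviation, the stopping rule is eventually satisfied; a hard cap (max_iter) guards against a tolerance that is never met.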
For part (c), use a tornado graph to determine the key drivers of NPV. To obtain a tor-
nado graph, you must have selected the Collect All Outputs box from the Simulation Set-
tings Sampling dialog box. (Unless you want a tornado graph, it is probably best to
uncheck that box. Checking that box adds a column to your output for each @Risk func-
tion in the model, and this can clutter up the output.) Right click on NPV in the Explorer
interface and select Tornado Graph. We can obtain a correlation and/or regression tornado
graph as shown in Figures 18 and 19.
Each bar of the correlation tornado graph (Figure 18) gives the correlation of the
@Risk random variable with NPV. For example,

■ Year 1 unit sales has a .98 correlation with NPV.


■ Annual growth rate has a .14 correlation with NPV.

In short, the uncertainty about year 1 unit sales is very important for determining NPV,
but other random variables could probably be replaced by their mean without changing
the distribution of NPV by much.
For each @Risk random variable, the regression tornado graph (Figure 19) computes
the standardized regression coefficient for the @Risk random variable when we try to pre-
dict NPV from all @Risk random variables in the spreadsheet. A standardized regression
coefficient tells us (after adjusting for other variables in the equation) the number of stan-
dard deviations by which NPV changes when the given @Risk random variable changes
by one standard deviation. For example,

■ A one standard deviation change in year 1 unit sales will (ceteris paribus) change
NPV by .98 standard deviation.
■ A one standard deviation change in annual growth rate will increase NPV by .15
standard deviation (ceteris paribus).

Again it is clear that the uncertainty for year 1 sales is really all that matters here; other
random variables may as well be replaced by their means.
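What the tornado bars measure can be reproduced on a toy model: sample the inputs, compute the output, and correlate each input sample with the output sample. The model below is hypothetical (not Lilly's spreadsheet), chosen so that one input dominates; with independent inputs, the standardized regression coefficients are close to these correlations.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation — what each bar of a correlation tornado reports."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.stdev(xs) * statistics.stdev(ys) * (len(xs) - 1))

random.seed(5)
unit_sales, growth, npv = [], [], []
for _ in range(2000):
    u = random.triangular(100_000, 170_000, 150_000)  # dominant driver
    g = random.triangular(0.05, 0.10, 0.08)           # minor driver
    unit_sales.append(u)
    growth.append(g)
    # toy NPV: strongly driven by unit sales, weakly by growth
    npv.append(15_000 * u * (1 + 5 * g) - 1.6e9)

bars = {"unit_sales": corr(unit_sales, npv), "growth": corr(growth, npv)}
```

As in Figures 18 and 19, the dominant input's bar is near 1 and the minor input's bar is much shorter, which is exactly the "replace the others by their means" conclusion drawn above.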



FIGURE 16 @Risk statistical output for the output cell "npv cash flows" (cell D22): mean -2.86E+07, standard deviation 1.23E+08, minimum -3.54E+08, maximum 2.37E+08; 5th percentile -2.52E+08, 95th percentile 1.56E+08. Target #1 (Value 0, Perc% 53.73%): 53.73% of iterations produced an NPV below 0.

FIGURE 17 Distribution for npv cash flows (cell D22): histogram of simulated NPVs (values in millions, roughly -400 to 300), mean = -2.855E+07; 53.73% of the probability lies below 0 and 46.27% above.



FIGURE 18 Correlation tornado graph for npv cash flows (cell D22). Correlation coefficients: Unit Sales (E12) .981; growth rate (D7) .139; length of growth (D3) .086; decay rate (D8) -.018.

FIGURE 19 Regression tornado graph for npv cash flows (cell D22). Standardized b coefficients: Unit Sales (E12) .983; growth rate (D7) .148; length of growth (D3) .083; decay rate (D8) -.005.

PROBLEMS
Group A
1 Dord Motors is considering whether to introduce a new model: the Racer. The profitability of the Racer will depend on the following factors:
■ Fixed cost of developing Racer: Equally likely to be $3 billion or $5 billion.
■ Sales: Year 1 sales will be normally distributed with μ = 200,000 and σ = 50,000. Year 2 sales will be normally distributed with μ = year 1 sales and σ = 50,000. Year 3 sales will be normally distributed with μ = year 2 sales and σ = 50,000. For example, if year 1 sales = 180,000, then the mean for year 2 sales will be 180,000.
■ Price: Year 1 price = $13,000. Year 2 price = 1.05*{(year 1 price) + $30*(% by which year 1 sales exceed expected year 1 sales)}. The 1.05 is the result of inflation! Year 3 price = 1.05*{(year 2 price) + $30*(% by which year 2 sales exceed expected year 2 sales)}. For example, if year 1 sales = 180,000, then year 2 price = 1.05*{13,000 + 30*(-10)} = $13,335.
■ Variable cost per car: During year 1, the variable cost per car is equally likely to be $5,000, $6,000, $7,000, or $8,000. Variable cost for year 2 = 1.05*(year 1 variable cost). Variable cost for year 3 = 1.05*(year 2 variable cost).



TABLE 2
Year    1    2    3
GNP     3%   5%   4%
INF     4%   7%   3%

TABLE 3
Number of Competitors    Probability
0                        .50
1                        .30
2                        .10
3                        .10

TABLE 4
                Year 1     Year 2     Year 3
Sales price     $15,000    $16,000    $17,000
Variable cost   $12,000    $13,000    $14,000

TABLE 5
Time Abandoned    Value Received
End of year 1     $3,000
End of year 2     $2,600
End of year 3     $1,900
End of year 4     $900

Your goal is to estimate the NPV of the new car during its first three years. Assume that cash flows are discounted at 10%; that is, $1 received now is equivalent to $1.10 received a year from now.
a Simulate 400 iterations and estimate the mean and standard deviation of the NPV of the first three years of sales.
b I am 95% sure that the expected NPV of this project is between _____ and _____.
c Use the Target option to determine a 95% confidence interval for the actual NPV of the Racer during its first three years of production.
d Use a tornado graph to analyze which factors are most influential in determining the NPV of the Racer.

2 Truckco produces the Goatco truck. The company wants information about the discounted profits earned during the next three years. During a given year, the total number of trucks sold in the United States is 500,000 + 50,000*GNP - 40,000*INF, where
GNP = % increase in GNP during the year
INF = % increase in the Consumer Price Index during the year
Value Line has made the predictions given in Table 2 for the increase in GNP and INF during the next three years. In the past, 95% of Value Line's GNP predictions have been accurate within 6% of the actual GNP increase, and 95% of Value Line's INF predictions have been accurate within 5% of the actual inflation increase.
At the beginning of each year, a number of competitors may enter the trucking business. At the beginning of a year, the probability that a certain number of competitors will enter the trucking business is given in Table 3. Before competitors join the industry at the beginning of year 1, there are two competitors. During a year that begins (after competitors have entered the business, but before any have left) with c competitors, Goatco will have a market share given by .5*(.9)^c. At the end of each year, there is a 20% chance that each competitor will leave the industry.
The sales price of the truck and the production cost per truck are given in Table 4.
a Simulate 500 times the next three years of Truckco's profit. Estimate the mean and variance of the discounted three-year profits (use a discount rate of 10%).
b Do the same if during each year there is a 50% chance that each competitor leaves the industry. (Hint: You can model the number of firms leaving the industry in a given period with the RISKBINOMIAL function. For example, if the number of competitors in the industry is in cell A8, then the number of firms leaving the industry during a period can be modeled with the statement =RISKBINOMIAL(A8,.20). Just remember that the RISKBINOMIAL function is not defined if its first argument equals 0.)

Group B

3 You have the opportunity to buy a project that yields at the end of years 1–5 the following (random) cash flows: The end of year 1 cash flow is normal with mean 1,000 and standard deviation 200. For t > 1, the end of year t cash flow is normal with mean = actual end of year (t - 1) cash flow and standard deviation = .2*(mean of year t cash flow).
a Assuming cash flows are discounted at 10%, determine the expected NPV (in time 0 dollars) of the cash flows of this project.
b Suppose we are given the following option: At the end of year 1, 2, 3, or 4, we may give up our right to future cash flows. In return for doing this, we receive the abandonment value given in Table 5. Assume that we make the abandonment decision as follows: We abandon if and only if the expected NPV of the cash flows from the remaining years is smaller than the abandonment value. For example, suppose the end of year 1 cash flow is $900. At this point in time, our best guess is that cash flows from years 2–5 will also be $900. Thus, we would abandon the project at the end of year 1 if $3,000 exceeded the NPV of receiving $900 for four straight years. Otherwise, we would continue. What is the expected value of the abandonment option?



4 Mattel is developing a new Madonna doll. Managers have made the following assumptions.
It is equally likely that the doll will sell for two, four, six, eight, or ten years.
At the beginning of year 1, the potential market for the doll is 1 million. The potential market grows by an average of 5% per year. They are 95% sure that the growth in the potential market during any year will be between 3% and 7%.
They believe their share of the potential market during year 1 will be at worst 20%, most likely 40%, and at best 50%. All values between 20% and 50% are possible.
The variable cost of producing a doll during year 1 is equally likely to be $4 or $6.
The sales price of the doll during year 1 will be $10. Each year, the sales price and variable cost of producing the doll will increase by 5%.
The fixed cost of developing the doll (incurred in year 0) is equally likely to be $4, $8, or $12 million.
At time 0, there is one competitor in the market. During each year that begins with four or fewer competitors, there is a 20% chance that a new competitor will enter the market.
To determine year t unit sales (for t > 1), proceed as follows. Suppose that at the end of year t - 1, x competitors were present. Then assume that during year t, a fraction .9 - .1*x of loyal customers (last year's purchasers) will buy a doll during the next year and a fraction .2 - .04*x of people currently in the market who did not purchase a doll last year will purchase a doll from the company this year. We now generate a prediction for year t unit sales. Of course, this prediction will not be precise. We assume that it is sure to be accurate within 15%, however.
Cash flows are discounted at 10% per year.
a Estimate the expected NPV (in time 0 dollars) of this project.
b You are 95% sure the expected NPV of this project is between _____ and _____.
c You are 95% sure that the actual NPV of the project is between _____ and _____.
d What two factors does the tornado diagram indicate are key drivers of the project's profitability?

5 GM is thinking of marketing a new car, the Batmobile. It is equally likely that the car will take 1, 2, or 3 years to develop. This may be modeled by a RISKDUNIFORM random variable. A RISKDUNIFORM function is equally likely to assume any of the values listed in the cell. Development cost is assumed equally split over development time. The best case is development cost of $300 million, the most likely case is $800 million, and the worst case is $1.7 billion.
The product will begin sales during the year after development concludes. The number of years the car will be sold is assumed to be governed by the probability distribution in Table 6.

TABLE 6
Years    Probability
4        .1
5        .3
6        .4
7        .2

The size of the market during the first year of sales is unknown, but the worst case is a market size of 100,000, the most likely case is 145,000, and the best case is 165,000. Annual growth in market size is unknown, but is assumed to have a worst case of 1% per year, a most likely case of 6% a year, and a best case of 8% per year.
First-year market share is unknown, but the worst case is a 30% market share, the most likely case is 45%, and the best case is 50%. After the first year of sales, market share will fluctuate. On average, next year's share will equal this year's share. We are 95% sure that next year's market share will be within 40% of this year's market share.
During the first year of sales, price is unknown, with a worst-case price of $16,000, a most likely price of $17,500, and a best-case price of $18,000. Each year, price increases by 5%.
During the first year of sales, the best-case estimate for the cost of producing a car is $11,000, the most likely cost is $13,000, and the worst-case cost is $14,500. Each year, variable cost increases by 5%.
The discount rate for this project is 15%.
a You are 95% sure that mean NPV for this project is between _____ and _____.
b What is the probability that the project will add value to the company?
c What are the key drivers of the project's success?
d Construct a graph that illustrates the range of possible NPVs that might be generated by this project.

23.3 Project Scheduling Models


In Chapter 7, we used linear programming to determine the length of time needed to com-
plete a project. We also learned how to identify critical activities, where an activity is crit-
ical if increasing its activity time by a small amount increases the length of time needed
to complete the project by the same amount. Our discussion there required the assumption that all activity times are known with certainty. In reality, these times are usually uncertain. Of course, this implies that the length of time needed to complete the project is
also uncertain. It also implies that for each activity, there is a probability (not necessarily
equal to 0 or 1) that the activity is critical.
To illustrate, suppose that activities A and B can begin immediately. Activity C can then
begin as soon as activities A and B are both completed, and the project is completed as
soon as activity C is completed. Activity C is clearly on the critical path, but what about
A and B? Let’s say that the expected activity times of A and B are 10 and 12. If we use
these expected times and ignore any uncertainty about the actual times—that is, if we pro-
ceed as we did in Chapter 7—then activity B is definitely a critical activity. However, sup-
pose there is some positive probability that A can have duration 12 and B can have dura-
tion 11. Under this scenario, A is a critical activity. Therefore, we cannot say in advance
which of the activities, A or B, will be critical. However, by using simulation we can see
how likely it is that each of these activities is critical. We can also see how long the en-
tire project is likely to take. We illustrate with the following example.

EXAMPLE 4 Construction Project with Uncertain Activity Times

Tom Lingley, an independent contractor, has agreed to build a new room on an existing
house. He plans to begin work on Monday morning, June 1. The main question is when
he will complete his work, given that he works only on weekdays. The owner of the house
is particularly hopeful that the room will be ready by Saturday, June 27, that is, in 20 or
fewer working days. The work proceeds in stages, labeled A through J, as summarized in
Table 7. Three of these activities, E, F, and G, will be done by separate independent sub-
contractors. The expected durations of the activities (in days) are shown in the table. However, these are only best guesses. Lingley knows that the actual activity times can vary because of unexpected delays, worker illnesses, and so on. He would like to use computer
simulation to see (1) how long the project is likely to take, (2) how likely it is that the proj-
ect will be completed by the deadline, and (3) which activities are likely to be critical.
Solution We first need to choose distributions for the uncertain activity times. Then, given any ran-
domly generated activity times, we will illustrate a method for calculating the length of
the project and identifying the activities on the critical path.
The Pert Distribution As always, there are several reasonable candidate probability distributions we could use for the random activity times. Here we illustrate a distribution that

TABLE 7
Activity Time Data
Description             Index    Predecessors    Expected Duration
Prepare foundation      A        None            4
Put up frame            B        A               4
Order custom windows    C        None            11
Erect outside walls     D        B               3
Do electrical wiring    E        D               4
Do plumbing             F        D               3
Put in ductwork         G        D               4
Hang drywall            H        E, F, G         3
Install windows         I        B, C            1
Paint and clean up      J        H               2



FIGURE 20
Pert Distribution

has become popular in project scheduling, called the Pert distribution.† As shown in Fig-
ure 20, it is a “rounded” version of the triangular distribution that is specified by three pa-
rameters: a minimum value, a most likely value, and a maximum value. The distribution
in the figure uses the values 7, 10, and 19 for these three values, which implies a mean
of 11. We will use this distribution for activity C. Similarly, for the other activities, we
choose parameters for the Pert distribution that lead to the means in Table 7. In reality, it
would be done the other way around. The contractor would estimate the minimum, most
likely, and maximum parameters for the various activities, and the means would follow
from these.
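The Pert distribution is a scaled Beta distribution. A common parameterization (an assumption here; @Risk's internal details may differ) sets alpha = 1 + 4(m - a)/(b - a) and beta = 1 + 4(b - m)/(b - a), which yields the well-known mean (a + 4m + b)/6 — exactly 11 for the parameters 7, 10, 19 used above.

```python
import random

def pert(a, m, b):
    """Sample a PERT(a, m, b) value: a Beta(alpha, beta) variate scaled
    to [a, b], with shape parameters chosen so the mean is (a+4m+b)/6."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * random.betavariate(alpha, beta)

random.seed(11)
samples = [pert(7, 10, 19) for _ in range(20_000)]
mean = sum(samples) / len(samples)   # near (7 + 4*10 + 19)/6 = 11
```

Note how the skewed parameters (7, 10, 19) pull the mean above the most likely value of 10, just as Figure 20 shows.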
Developing the Simulation Model The key to the model is representing the project network
in activity-on-arc form, as in Figure 21, and then finding Ej for each j, where Ej is the ear-
liest time we can get to node j. When the nodes are numbered so that all arcs go from
lower-numbered nodes to higher-numbered nodes, we can calculate the Ej’s iteratively,
starting with E1 = 0, with the equation
Ej = max(Ei + tij)      (1)

Here, the maximum is taken over all arcs leading into node j, and tij is the activity time
on such an arc. Then En is the time to complete the project, where n is the index of the
finish node. This will make it very easy to calculate the project length.
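The forward pass of Equation (1) can be sketched directly from Table 7, here in activity-on-node form rather than the spreadsheet's activity-on-arc layout (an equivalent formulation; the function name is my own):

```python
# Earliest-time forward pass of Equation (1), written over Table 7's
# activity list. Dict insertion order is a topological order here, so a
# single left-to-right sweep suffices.
durations = {"A": 4, "B": 4, "C": 11, "D": 3, "E": 4,
             "F": 3, "G": 4, "H": 3, "I": 1, "J": 2}
predecessors = {"A": [], "B": ["A"], "C": [], "D": ["B"],
                "E": ["D"], "F": ["D"], "G": ["D"],
                "H": ["E", "F", "G"], "I": ["B", "C"], "J": ["H"]}

def project_length(durations, predecessors):
    """Earliest finish of each activity: start = max over predecessors'
    finishes (0 if none), finish = start + own duration."""
    finish = {}
    for act in durations:
        start = max((finish[p] for p in predecessors[act]), default=0.0)
        finish[act] = start + durations[act]
    return max(finish.values())   # time to complete the project

length = project_length(durations, predecessors)   # 20 with expected durations
```

With the expected durations from Table 7 this gives 20 days, the deterministic project length from Chapter 7; the simulation replaces each fixed duration with a Pert draw.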


† It is named after the acronym PERT (Program Evaluation and Review Technique), which is synonymous with project scheduling in an uncertain environment.



FIGURE 21
Project Network for
Room-Building Project

FIGURE 22
Project Scheduling Simulation Model
A B C D E F G H I J
1 Room construction project
2
3 Data on activity network Parameters of PERT distributions
4 Activity Code Numeric index Predecessors Min Most likely Max Implied mean Duration Duration+
5 Prepare foundation A 1 None 1.5 3.5 8.5 4 2.158 2.159
6 Put up frame B 2 A 3 4 5 4 4.513 4.513
7 Order custom windows C 3 None 7 10 19 11 9.572 9.572
8 Erect outside walls D 4 B 2 2.5 6 3 3.322 3.322
9 Do electrical wiring E 5 D 3 3.5 7 4 3.282 3.282
10 Do plumbing F 6 D 2 2.5 6 3 2.377 2.377
11 Put in duct work G 7 D 2 4 6 4 4.668 4.668
12 Hang dry wall H 8 E,F,G 2.5 3 3.5 3 3.197 3.197
13 Install windows I 9 B,C 0.5 1 1.5 1 1.384 1.384
14 Paint and clean up J 10 H 1.5 2 2.5 2 1.677 1.677
15
16 Index of activity to increase 1
17
18 Event times
19 Node Event time Event time+
20 1 0 0
21 2 2.158 2.159
22 3 6.671 6.672
23 4 9.572 9.572
24 5 9.993 9.994
25 6 14.661 14.662
26 7 17.858 17.859
27 8 19.536 19.537
28
29 Increase in project time? 1
30

We also need a method for identifying the critical activities for any given activity
times. By definition, an activity is critical if a small increase in its activity time causes
the project time to increase. Therefore, we will keep track of two sets of activity times
and associated project times. The first uses the simulated activity times. The second adds
a small amount, such as 0.001 day, to a “selected” activity’s time. By using the
RISKSIMTABLE function with a list as long as the number of activities, we can make
each activity the “selected” activity in this method. The spreadsheet model appears in Figure 22, and the details are as follows. (See the Projectsim.xls file.)
Inputs Enter the parameters of the Pert activity time distributions in the shaded cells and
the implied means next to them. As discussed above, we actually chose the minimum,
most likely, and maximum values while in @Risk’s Model window to achieve the means
in Table 7. Note that some of these distributions are symmetric about the most likely
value, whereas others are skewed.
Activity Times Generate random activity times in column I by entering the formula
=RISKPERT(E5,F5,G5)
in cell I5 and copying it down.



Augmented Activity Times We want to successively add a small amount to each activity’s
time to determine whether it is on the critical path. To do this, enter the formula
=RISKSIMTABLE({1,2,3,4,5,6,7,8,9,10})
in cell B16. (We use a list of length 10 because there are 10 activities.) Then enter the
formula
=I5+IF(Index=C5,0.001,0)
in cell J5 and copy it down. (Here, Index is the range name of cell B16.) For example, if
we are checking whether activity D (the 4th activity) is critical, the Index cell will be 4,
and we will run a simulation where activity D’s time is augmented by 0.001 and the other
activity times are unchanged.
Event Times We want to use Equation (1) to calculate the node event times in the range
B20:B27. There is no quick way to enter the required formulas. (We see no way of using
Copy and Paste.) We need to use the project network as a guide for each node. Begin by
entering 0 in cell B20. Then enter the appropriate formulas in the other cells. For exam-
ple, the formulas in cells B22, B23, and B27 are
=B21+I6
=MAX(B20+I7,B21+I6)
and
=RISKOUTPUT()+MAX(B23+I13,B26+I14)
To understand these, note that node 3 has only one arc leading into it, and this arc origi-
nates at node 2. No MAX is required for this node’s equation. In contrast, node 4 has two
arcs leading into it, from nodes 1 and 2, so a MAX is required. Similarly, node 8 requires
a MAX, because it has two arcs leading into it. Also, it is the finish node, so we desig-
nate its event time cell as an @Risk output cell—it contains the time to complete the
project.
Augmented Event Times Copy the formulas in the range B20:B27 to the range C20:C27
to calculate the event times when the selected activity’s time is augmented by 0.001.
Project Time Increases? To check whether the selected activity's increased activity time increases the project time, enter the formula
=RISKOUTPUT()+IF(C27>B27,1,0)
in cell B29. If this evaluates to 1, then the selected activity is critical for these particular activity times. Otherwise, it is not. Note that this cell is also designated as an @Risk output cell.
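The whole criticality check can be sketched end to end in Python: sample Pert durations (min/most likely/max parameters as in Figure 22), recompute the project length with and without a tiny bump to one activity, and average the indicator over many iterations — the role RISKSIMTABLE plays in the spreadsheet. The function names and the Beta-based Pert sampler are my own; the precedence list restates Table 7.

```python
import random

pert_params = {  # (min, most likely, max) from Figure 22
    "A": (1.5, 3.5, 8.5), "B": (3, 4, 5), "C": (7, 10, 19),
    "D": (2, 2.5, 6), "E": (3, 3.5, 7), "F": (2, 2.5, 6),
    "G": (2, 4, 6), "H": (2.5, 3, 3.5), "I": (0.5, 1, 1.5),
    "J": (1.5, 2, 2.5)}
predecessors = {"A": [], "B": ["A"], "C": [], "D": ["B"],
                "E": ["D"], "F": ["D"], "G": ["D"],
                "H": ["E", "F", "G"], "I": ["B", "C"], "J": ["H"]}

def pert(a, m, b):
    alpha, beta = 1 + 4 * (m - a) / (b - a), 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * random.betavariate(alpha, beta)

def length(times):
    """Forward pass of Equation (1) over the activity list."""
    finish = {}
    for act in times:
        start = max((finish[p] for p in predecessors[act]), default=0.0)
        finish[act] = start + times[act]
    return max(finish.values())

def prob_critical(target, n_iter=400, eps=0.001):
    """Fraction of iterations in which bumping `target` by eps
    lengthens the project, i.e., estimated P(target is critical)."""
    hits = 0
    for _ in range(n_iter):
        t = {a: pert(*p) for a, p in pert_params.items()}
        bumped = t.copy()
        bumped[target] += eps
        if length(bumped) > length(t):
            hits += 1
    return hits / n_iter

random.seed(2)
p_a = prob_critical("A")   # A sits on the main chain; nearly always critical
p_c = prob_critical("C")   # C only feeds activity I; rarely critical
```

This reproduces the pattern reported in Figure 24: A is (essentially) always critical and C (essentially) never is, while the parallel activities E, F, and G split the remaining probability.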
Using @Risk We set the number of iterations to 1,000 and the number of simulations to
10 (one for each activity that we want to check for being critical). After running @Risk,
we request the histogram of project times in Figure 23. In Chapter 7, when the activity
times were not considered random, the project time was 20 days. Now it varies from a
low of 15.89 days to a high of 25.50 days, with an average of 20.42 days.† Although the
5th and 95th percentiles appear in the figure, it might be more interesting (and depress-
ing) to Tom Lingley to see the probabilities of various project times being exceeded. For
example, we entered 20 in the Left X box next to the histogram. The Left P value implies
that there is about a 57% chance that the project will not be completed within 20 days.


† It can be shown mathematically that the expected project time is always greater than the project time calculated from the expected activity times, as we did in Chapter 7. In other words, an assumption of certainty always leads to an underestimation of the true expected project time.
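The footnote's claim is easy to verify numerically: with parallel activities the project time is a maximum, and the mean of a maximum exceeds the maximum of the means. A small sketch using the A/B example's expected times of 10 and 12 (the uniform spreads are an arbitrary choice for illustration):

```python
import random
import statistics

random.seed(4)
n = 20_000
# Two parallel activities with expected durations 10 and 12, each
# uniformly spread around its mean; the project waits for both.
proj = [max(random.uniform(6, 14), random.uniform(8, 16)) for _ in range(n)]
mean_proj = statistics.mean(proj)   # exceeds max(10, 12) = 12
deterministic = max(10, 12)         # what the certainty assumption reports
```

Here the simulated mean comes out near 12.56, roughly half a day longer than the deterministic 12, purely because sometimes the nominally shorter activity finishes last.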



FIGURE 23
Histogram of Project
Completion Time

FIGURE 24
Probabilities of
Activities Being Critical

Similarly, the values in the Right X and Right P boxes imply that the chance of the proj-
ect lasting longer than 23 days is slightly greater than 5%. This is certainly not good news
for Lingley, and he might have to resort to the crashing we discussed in Chapter 8.
The summary measures for the B29 output cell appear in Figure 24. Each “simulation”
in this output represents one selected activity being increased slightly. The Mean column
indicates the fraction of iterations where the project time increases as a result of the se-
lected activity’s time increase. Hence, it represents the probability that this activity is crit-
ical. For example, the first activity (A) is always critical, the third activity (C) is never
critical, and the fifth activity (E) is critical about 45% of the time. More specifically, we
see that the critical path always includes activities A, B, D, H, J, and one of the three “par-
allel” activities E, F, and G.
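The bump-one-activity test described above is easy to prototype outside @Risk. The sketch below uses a small hypothetical four-activity network (not the chapter's example, whose data appears in Chapter 7) with normally distributed activity times, and estimates both the expected completion time and the probability that one activity is critical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mini-network (NOT the chapter's example): A precedes B and C,
# which run in parallel; D starts after both B and C finish.
mean = {"A": 4.0, "B": 9.0, "C": 7.0, "D": 5.0}
sd = {"A": 0.6, "B": 2.5, "C": 2.0, "D": 1.0}

def project_time(t):
    # Completion time = A, then the longer of the parallel pair B/C, then D.
    return t["A"] + max(t["B"], t["C"]) + t["D"]

n = 10_000
times = np.empty(n)
b_critical = 0
for i in range(n):
    t = {k: rng.normal(mean[k], sd[k]) for k in mean}
    base = project_time(t)
    times[i] = base
    # Bump B slightly; if the project lengthens, B was on the critical path.
    if project_time({**t, "B": t["B"] + 0.01}) > base:
        b_critical += 1

print(round(times.mean(), 1))    # estimated expected completion time
print(round(b_critical / n, 2))  # estimated probability that B is critical
```

Because the bump is tiny, the project time increases only when the bumped activity lies on the critical path, which is exactly the criterion used in the text.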

PROBLEMS
Group A

1 The city of Bloomington is about to build a new water treatment plant. Once the plant is designed (D), we can select the site (S), the building contractor (C), and the operating personnel (P). Once the site is selected, we can erect the building (B). We can order the water treatment machine (W) and prepare the operations manual (M) only
23.3 Project Scheduling Models 1237


after the contractor is selected. We can begin training (T) the operators when both the operations manual and operating personnel selection are completed. When the treatment plant and the building are finished, we can install the treatment machine (I). Once the treatment machine is installed and operators are trained, we can obtain an operating license (L). The estimated mean and standard deviation of the time (in months) needed to complete each activity are given in Table 8. Use simulation to estimate the probability that the project will be completed in (a) under 50 days and (b) more than 55 days. Also estimate the probabilities that B, I, and T are critical activities.

TABLE 8
Activity    Mean    Standard Deviation
D           6       1.5
S           2       3.0
C           4       1.0
P           3       1.0
B           24      6.0
W           14      4.0
M           3       0.4
T           4       1.0
I           6       1.0
L           3       6.0

2 To complete an addition to the Business Building, the activities in Table 9 need to be completed (all times are in months). The project is completed once Room 111 has been destroyed and the main structure has been built.
a Estimate the probability that it will take at least 3 years to complete the addition.
b For each activity, estimate the probability that it will be a critical activity.

3 To build Indiana University’s new law building, the activities in Table 10 must be completed (all times are in months).
a Estimate the probability that the project will take less than 30 months to complete.
b Estimate the probability that the project will take more than 3 years to complete.
c For each of the activities A, B, C, and G, estimate the probability that it is a critical activity.

TABLE 9
Activity                    Predecessors    Mean Time    Standard Deviation
A: Hire workers             —               4            0.6
B: Dig big hole             A               9            2.5
C: Pour foundation          B               5            1.0
D: Destroy room             A               7            2.0
E: Build main structure     C               10           1.5

TABLE 10
Activity                    Predecessors    Mean Time    Standard Deviation
A: Obtain funding           —               6            0.6
B: Design building          A               8            1.3
C: Prepare site             A               2            0.2
D: Lay foundation           B, C            2            0.3
E: Erect walls and roof     D               3            1.0
F: Finish exterior          E               3            0.6
G: Finish interior          D               7            1.5
H: Landscape grounds        F, G            5            1.2

23.4 Reliability and Warranty Modeling


In today’s high-tech world, it is very important to be able to compute the probability that
a system made up of machines will work for a desired amount of time. The subject of es-
timating the distribution of machine failure times and the distribution of time to failure
of a system is known as reliability theory.



Distribution of Machine Life

We assume the length of time (call it X) until failure of a machine is a continuous random variable having a distribution function F(t) = P(X ≤ t) and a density function f(t). Thus, for small Δt, the probability that a machine will fail between time t and t + Δt is approximately f(t)Δt. The failure rate of a machine at time t [call it r(t)] is defined to be (1/Δt) times the probability that the machine will fail between time t and time t + Δt, given that the machine has not failed by time t. Thus,

r(t) = (1/Δt) · Prob(X is between t and t + Δt | X ≥ t) = (1/Δt) · Δt f(t)/(1 − F(t)) = f(t)/(1 − F(t))

If r(t) is an increasing function of t, the machine is said to have an increasing failure rate
(IFR). If r(t) is a decreasing function of t, the machine is said to have a decreasing fail-
ure rate (DFR).
Consider an exponential distribution, which has f(t) = λe^(−λt) and F(t) = 1 − e^(−λt). Then we find that

r(t) = λe^(−λt)/e^(−λt) = λ

Thus, a machine whose lifetime follows an exponential random variable has constant fail-
ure rate. This is analogous to the no-memory property of the exponential distribution dis-
cussed in Chapter 20.
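A quick numeric check of this constant failure rate (the rate λ = 0.5 below is an arbitrary choice for illustration):

```python
import math

lam = 0.5  # arbitrary rate parameter for the check

def f(t):  # exponential density
    return lam * math.exp(-lam * t)

def F(t):  # exponential distribution function
    return 1 - math.exp(-lam * t)

def r(t):  # failure rate r(t) = f(t) / (1 - F(t))
    return f(t) / (1 - F(t))

# r(t) equals lam at every t, confirming the constant failure rate.
print(round(r(0.5), 6), round(r(3.0), 6), round(r(10.0), 6))
```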
The random variable that is most frequently used to model the time till failure of a ma-
chine is the Weibull random variable. The Weibull random variable has the following
density and distribution functions:

f(t) = (αt^(α−1)/β^α)e^(−(t/β)^α)
F(t) = 1 − e^(−(t/β)^α)

It can be shown that if α < 1, the Weibull random variable exhibits DFR, and if α > 1, the Weibull random variable exhibits IFR. The @Risk function RISKWEIBULL(alpha, beta) will generate an observation for a Weibull random variable having parameters α and β. If you input the mean and variance of observed machine times to failure into cells D4 and D5, respectively, of workbook Weibest.xls, the workbook computes the unique values of α and β that yield the observed mean and variance of times to failure. For example, we see in Figure 25 that if the mean time to machine failure were 12 months and the standard deviation were 6 months, then a Weibull with α = 2.2 and β = 13.55 would yield the desired mean and variance.
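The moment matching the workbook performs can be sketched in Python (our own reconstruction, not the actual Weibest.xls formulas). The ratio E[X²]/E[X]² = Γ(1 + 2/α)/Γ(1 + 1/α)² depends only on α and decreases as α grows — this is the "second moment/(mean)^2 = 1.25" shown in Figure 25 — so a one-dimensional bisection recovers α, and β then follows from the mean:

```python
import math

def weibull_params(mean, var):
    """Moment-match Weibull shape alpha and scale beta to a given mean/variance."""
    target = (var + mean ** 2) / mean ** 2   # E[X^2] / E[X]^2; 1.25 in Figure 25

    def ratio(a):
        return math.gamma(1 + 2 / a) / math.gamma(1 + 1 / a) ** 2

    lo, hi = 0.2, 50.0                       # ratio(a) decreases as a grows
    for _ in range(100):                     # bisection on alpha
        mid = (lo + hi) / 2
        if ratio(mid) > target:
            lo = mid
        else:
            hi = mid
    alpha = (lo + hi) / 2
    beta = mean / math.gamma(1 + 1 / alpha)  # then match the mean
    return alpha, beta

alpha, beta = weibull_params(12, 36)
print(round(alpha, 2), round(beta, 2))
```

For a mean of 12 and variance of 36 this search returns values close to the α = 2.2, β = 13.55 shown in Figure 25.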

Common Types of Machine Combinations

Three common types of machine combinations are as follows:


■ A series system. A series system functions only as long as each machine func-
tions. See Figure 26(a).
■ A parallel system. A parallel system functions as long as at least one machine
functions. See Figure 26(b).



A B C D E F G
1 Estimating Weibull
2 Distribution Parameters
3
4 Mean time to failure 12
5 Variance of time to failure 36
6 Second moment of failure time 180
7 Second moment/(mean)^2 1.25 Beta 13.54976
FIGURE 25 8 Alpha 2.2

FIGURE 26
(a) Series system: machines 1, 2, . . . , n in sequence; all n must work.
(b) Parallel system: machines 1, 2, . . . , n side by side; at least one of the n must work.

■ A k out of n system. A k out of n system consists of n machines and is considered working as long as at least k machines are working.
Of course, by combining these types, a very complex system may be modeled. We now
show how to use @Risk to model the probability that a machine system will last a de-
sired amount of time.

EXAMPLE 5 Hubble Telescope

Assume that the Hubble telescope contains four large mirrors. The time (in months) until a mirror fails follows a Weibull random variable with α = 25 and β = 50.
a For certain types of pictures to be useful, all mirrors must be working. What is the
probability that the telescope can produce these types of pictures for at least 5 years?
b Certain types of pictures can be taken as long as at least one mirror is working. What
is the probability that these pictures can be taken for at least 7 years?
c Certain types of pictures can be taken as long as at least two mirrors are working.
What is the probability that these pictures can be taken for at least 6 years?
Solution See file Reliability.xls.
Reliability.xls Step 1 We begin by generating the length of time until each mirror fails in C3:C6 by
copying from C3 to C4:C6 the formula
RISKWEIBULL(25,50)



A B C
1
2 Hubble Telescope
3 Mirror 1 49.30487
4 Mirror 2 30.19602
5 Mirror 3 38.99237
6 Mirror 4 37.64995
7
8 Time all 4 work 30.19602
9 Time till last one fails 49.30487
FIGURE 27 10 Last time 2 are working 38.99237

F G H I
10 Name Time all 4 work Time till last one fails Last time 2 working
11 Description Output Output Output
12 Cell C8 C9 C10
13 Minimum 3.733206 31.46502 27.71785
14 Maximum 66.0223 101.8234 80.49436
15 Mean 35.63382 64.18716 54.32821
16 Std Deviation 10.08293 9.306231 8.747266
17 Variance 101.6655 86.60596 76.51466
18 Skewness -0.104045 0.121514 -2.73E-02
19 Kurtosis 2.737432 3.290648 2.910271
20 Errors Calculated 0 0 0
21 Mode 34.15707 62.93507 58.45681
22 5% Perc 18.52796 49.15086 39.91564
23 10% Perc 22.40516 52.22929 42.77759
24 15% Perc 24.85496 54.84017 44.97655
25 20% Perc 26.80984 56.5674 46.99073
26 25% Perc 28.67021 57.84864 48.3152
27 30% Perc 30.25738 59.51218 49.89228
28 35% Perc 31.90257 60.52841 50.94405
29 40% Perc 33.26531 61.73524 52.26505
30 45% Perc 34.4916 62.83329 53.25808
31 50% Perc 35.7727 63.89499 54.54087
32 55% Perc 37.04685 65.06183 55.4865
33 60% Perc 38.58305 66.16101 56.72353
34 65% Perc 39.88355 67.578 58.00496
35 70% Perc 41.17931 68.97778 58.9762
36 75% Perc 42.88946 70.39309 60.089
37 80% Perc 44.47398 71.75684 61.61526
38 85% Perc 46.13106 73.66335 63.45758
39 90% Perc 48.46651 76.06507 65.64239
40 95% Perc 51.94818 79.72974 68.19598
41 Filter Minimum
42 Filter Maximum
43 Filter Type
44 # Values Filtered 0 0 0
45 Scenario #1 >75% >75% >75%
46 Scenario #2 <25% <25% <25%
47 Scenario #3 >90% >90% >90%
48 Target #1 (Value) 60 84 72
FIGURE 28 49 Target #1 (Perc%) 99.54% 98.29% 98.00%



Step 2 Part (a) is a series system. We can take the desired pictures until the first mirror
fails. The first mirror fails at the smallest of the four mirror failure times. Thus, the length
of time for which the first type of picture can be taken is computed in cell C8 with the
formula

MIN(C3:C6)

Step 3 Part (b) is a parallel system. We can take the desired pictures until the time the
last mirror fails. We compute the time the last mirror fails in cell C9 with the formula

MAX(C3:C6)

Step 4 Part (c) is a 2 out of 4 system. We can take the desired pictures until the time of
the third mirror failure. The time of the third mirror failure is the second largest of the
failure times. We compute the time of the third mirror failing in cell C10 with the
formula

LARGE(C3:C6,2)

This formula computes the second largest of the mirror failure times. Of course, this is
the time the third mirror fails. See Figure 27.
Step 5 We now select cells C8:C10 as output cells and run 1,000 iterations. After using
targets with the Detailed Statistics output, we obtain the results in Figure 28.
We find in part (a) that there is a 99.54% chance that at least one mirror will fail within 60 months, and only a .46% chance that all four mirrors will still be working at 60 months. In part (b), we find that there is a 98.29% chance that all four mirrors will fail within 7 years, and only a 1.71% chance that at least one mirror will still be working at 7 years. In part (c), we find that there is a 98% chance that the third mirror failure occurs within 72 months, and only a 2% chance that two or more mirrors will still be working at 72 months.
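The three system calculations translate directly into a Monte Carlo sketch. One assumption here: we read RISKWEIBULL's two arguments as the usual shape α and scale β, and rescale NumPy's unit-scale Weibull generator by β; MIN, MAX, and LARGE(…,2) become the minimum, maximum, and second-largest order statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 25.0, 50.0   # parameters given in the example
n = 100_000

# NumPy's weibull(alpha) draws have scale 1; multiplying by beta mimics
# RISKWEIBULL(alpha, beta) under the shape/scale reading of its arguments.
life = beta * rng.weibull(alpha, size=(n, 4))

series = life.min(axis=1)                  # (a) all four mirrors must work
parallel = life.max(axis=1)                # (b) at least one mirror must work
two_of_four = np.sort(life, axis=1)[:, 2]  # (c) second-largest failure time

print((series >= 60).mean())      # P(series system lasts at least 5 years)
print((parallel >= 84).mean())    # P(parallel system lasts at least 7 years)
print((two_of_four >= 72).mean()) # P(2-of-4 system lasts at least 6 years)
```

The structure carries over regardless of the lifetime distribution used; the numerical answers, of course, depend on the parameterization assumption above.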

Estimating Warranty Expenses

If we know the distribution of the time till failure of a purchased product, @Risk makes
it a simple matter to estimate the distribution of warranty costs associated with a product.
The idea is illustrated in the following example.

EXAMPLE 6 Refrigerator Failure

The time until first failure of a refrigerator (in years) follows a Weibull random variable with α = 6.7 and β = 8.57. If a refrigerator fails within 5 years, we must replace it with
a new refrigerator costing $500. If the replacement refrigerator fails within 5 years, we
must also replace that refrigerator with a new one costing $500. Thus, the warranty stays
in force until a refrigerator lasts at least 5 years. Estimate the average warranty cost in-
curred with the sale of a new refrigerator. (Do not worry about discounting costs.)
Solution See file Refrigerator.xls. We enter the length of time a refrigerator lasts in cell C6 with
Refrigerator.xls the formula
RISKWEIBULL(6.7,8.57)



A B C D E F G
1
2 Refrigerator
3 Warranty
4
5 Number Lasts Cost
6 1 8.113087 0
7 2 6.91762 0 .027^5
8 3 7.233594 0 1.43489E-08
9 4 8.776642 0
10 5 7.120917 0
11 Total cost 0
FIGURE 29 12

We are not sure how many replacement refrigerators we might have to provide for the cus-
tomer. By selecting the Define Distributions icon when we are in cell C6, we can move
the sliders on the Weibull density function and determine the probability that we will have
to replace a given refrigerator. We find that there is only a 2.7% chance that a refrigerator will have to be replaced. Then the chance that at least 5 refrigerators will have to be replaced is (.027)^5 ≈ 1.4 × 10^−8. Thus, generating only 5 refrigerator lifetimes should give
us an accurate estimate of total cost. We therefore copy the RISKWEIBULL formula from
C6 to C7:C10. See Figure 29.
In cell D6, we compute the cost associated with a sold refrigerator with the
formula
IF(C6<5,500,0)
In cells D7:D10, we compute the cost (if any) associated with any replacement refriger-
ators by copying from D7 to D8:D10 the formula
IF(AND(D6>0,C7<5),500,0)
This formula picks up the cost of a replacement if and only if the previous refrigerator
failed and the current refrigerator lasts less than 5 years.
In cell D11, we compute total cost with the formula
SUM(D6:D10)
After running 1,000 iterations and making cell D11 an output cell (see below), we find
the mean warranty cost per refrigerator to be $14.50. Note that maximum cost was
$1,000, so on at least one iteration, two refrigerators needed to be replaced.

F G H I J K L M
11
12 Name Workbook Worksheet Cell Minimum Mean Maximum
13 Output 1 Total cost / Cosrefrigerator Sheet1 D11 0 14.5 1000
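The replacement-chain logic can be sketched the same way: the warranty cost is $500 times the number of leading failures among the five simulated lifetimes (again treating RISKWEIBULL's arguments as shape and scale):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 6.7, 8.57
n = 100_000

# Five potential refrigerators per sale; the chain stops at the first unit
# that survives 5 years.
life = beta * rng.weibull(alpha, size=(n, 5))
fails = life < 5.0
leading = np.cumprod(fails, axis=1)   # stays 1 while every unit so far failed
cost = 500 * leading.sum(axis=1)      # $500 for each refrigerator replaced

print(round(cost.mean(), 2))  # average warranty cost, around $14 per sale
print(int(cost.max()))        # largest replacement chain observed
```

The cumulative product trick counts consecutive failures from the first unit onward, which is exactly the AND condition the spreadsheet formula implements column by column.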



PROBLEMS
Group A
Assume that the lifetimes of all machines described follow a Weibull random variable.

1 Suppose an auto engine consists of 12 components in series. The mean lifetime of each component is 5 years, with a standard deviation of 2 years.
a What is the probability that the engine will work for at least 2 years?
b If the engine were a parallel system, what is the probability that the engine would work for at least 10 years?
c If at least 8 engine components need to work for the engine to work, what is the probability that the engine will work for at least 7 years?

2 An aircraft engine lasts an average of 5 years, with a standard deviation of 3 years, before it needs to be replaced. Consider a plane with 4 new engines. On the average, how long will it be until an engine needs to be replaced?

3 A one-mile length of street has 5 street lights, equally spaced. The mean lifetime of a street light is 3 years, with a standard deviation of 1 year. Assume that all 5 lights have just been replaced. The street is considered too dark if at least one part of the street has no light working within .5 mile. On the average, how long will it be until the street is considered too dark?

4 In the refrigerator example, suppose the warranty works as follows. If a refrigerator fails at any time within 5 years of purchase, we give the consumer a prorated refund on the $500 purchase price. For example, if the refrigerator fails after 4 years, we pay the customer $100. If the refrigerator fails after 3 years, we pay the customer $200. Estimate our expected warranty expense per refrigerator sold.

5 The time to failure of a TV picture tube averages 5 years, with a standard deviation of 3 years. It costs an average of $250 to repair or replace a TV picture tube. Determine fair prices for a 3-year, 4-year, or 5-year warranty.

23.5 The RISKGENERAL Function


What if a continuous random variable (such as market share) does not appear to follow a
normal or triangular distribution? We can model it with the RISKGENERAL function.

EXAMPLE 7 RISKGENERAL Distribution

Suppose that market shares between 0% and 60% are possible. A 45% share is most likely.
There are five market-share levels for which we feel comfortable about comparing the rel-
ative likelihoods (see Table 11).
From the table, a market share of 45% is 8 times as likely as 10%; 20% and 55% are
equally likely, etc. This distribution cannot be triangular, because then 20% would be
(20/45) as likely as the peak of 45%. In fact, 20% is .75 as likely as 45%. See Figure 30
Riskgeneral.xls and file Riskgeneral.xls for our analysis.
To model market share, enter the formula
RISKGENERAL(0,60,{10,20,45,50,55},{1,6,8,7,6})

TABLE 11
Market Share    Relative Likelihood
10%             1
20%             6
45%             8
50%             7
55%             6



B C D E F G
1 EXAMPLE OF
2 RISKGENERAL
3 DISTRIBUTION
4
5 Minimum 0
6 Maximum 60
7 Specified Points
8 10 1
9 20 6
10 45 8
11 50 7
12 55 6
FIGURE 30 13 35.75 =RISKGENERAL(0,60,{10,20,45,50,55},{1,6,8,7,6} )

FIGURE 31 Histogram of the RISKGENERAL distribution (probability on the vertical axis; values from 0 to 60 on the horizontal axis)

FIGURE 32
Share:      0   10   20   45   50   55   60
Likelihood: 0    1    6    8    7    6    0
(the points are connected by straight lines to form the likelihood curve over shares 0 to 60)

The syntax of RISKGENERAL is as follows.


■ Begin with the smallest and largest possible values.
■ Then enclose in {} the numbers for which you feel you can compare relative
likelihoods.
■ Finally, enclose in {} the relative likelihoods of the numbers you have previously
listed.
Running this in @Risk yields the output in Figure 31. Note that 20 is 6/8 as likely as 45; 10 is 1/8 as likely as 45; 50 is 7/8 as likely as 45; 55 is 6/8 as likely as 45, etc. In between the given points, the density function changes at a linear rate. Thus, 30 would have a likelihood of

6 + (30 − 20)(8 − 6)/(45 − 20) = 6.8
Basically what @Risk has done is to take the curve constructed by connecting (with
straight lines) the points (0, 0), (10,1), . . . , (55,6), (60,0). @Risk rescales the height of
this curve so that the area under it equals 1, and then randomly selects points based on
the height of the curve. Thus, a share around 45 is 8/6 as likely as a share around 20, etc.
Figure 32 illustrates this idea.
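The rescale-and-sample behavior just described can be imitated directly: treat the connected points as an unnormalized density, pick a segment with probability proportional to its trapezoid area, then invert the linear CDF within that segment. This is our own construction, not @Risk's internal algorithm:

```python
import numpy as np

def riskgeneral_sample(rng, xs, ys, n):
    """Sample from a density proportional to the piecewise-linear curve
    through the points (xs, ys)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    seg_area = 0.5 * (ys[:-1] + ys[1:]) * np.diff(xs)  # trapezoid areas
    seg = rng.choice(len(seg_area), size=n, p=seg_area / seg_area.sum())
    u = rng.random(n)
    x0, x1 = xs[seg], xs[seg + 1]
    y0, y1 = ys[seg], ys[seg + 1]
    slope = (y1 - y0) / (x1 - x0)
    # Within a segment, solve y0*t + (slope/2)*t^2 = u * (segment area) for t.
    a, b, c = 0.5 * slope, y0, -u * 0.5 * (y0 + y1) * (x1 - x0)
    with np.errstate(divide="ignore", invalid="ignore"):
        quad = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        lin = -c / b                      # flat segment (slope == 0)
    return x0 + np.where(np.abs(a) > 1e-12, quad, lin)

rng = np.random.default_rng(0)
draws = riskgeneral_sample(
    rng, [0, 10, 20, 45, 50, 55, 60], [0, 1, 6, 8, 7, 6, 0], 100_000
)
print(round(draws.mean(), 2))  # close to the density's exact mean of 35.75
```

For the Table 11 inputs, the exact mean of this piecewise-linear density works out to 35.75, which matches the cell value displayed in Figure 30.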

REMARK For the spreadsheet in Figure 30, the syntax


RISKGENERAL(0,60,D8:D12,E8:E12)
is also acceptable.

Suppose we select the Define Distributions icon. Then we choose the RISKGENERAL
random variable and select Apply. Now we can directly insert the RISKGENERAL (or
any other) random variable into a cell.
After entering the appropriate parameters for the RISKGENERAL random variable,
we will see the histogram shown in Figure 33. We are also given statistical information,
such as the mean and variance, for the random variable. If we select Apply, the formula
defining the desired RISKGENERAL random variable will be entered into the cell.

FIGURE 33



23.6 The RISKCUMULATIVE Random Variable
With the RISKGENERAL function, we estimated the relative likelihood of a random vari-
able taking on various values. With the RISKCUMULATIVE function, we estimate the
cumulative probability that the random variable is less than or equal to several given val-
ues. The RISKCUMULATIVE function can be used to approximate the cumulative dis-
tribution function for any continuous random variable.

EXAMPLE 8 RISKCUMULATIVE

A large auto company’s net income for North American operations (NAO) for the next
year may be between 0 and $10 billion. The auto company estimates there is a 10% chance
that net income will be less than or equal to $1 billion, a 70% chance that net income will
be less than or equal to $5 billion, and a 90% chance that net income will be less than or
equal to $9 billion. Use @Risk to simulate NAO’s net income for the next year.

A B C D E F G H
1 Cumulative distribution
2
3 Min 0
4 Max 10 4.2
5 x P(X<=x) Slope 4.2 RiskCumul(B3,B4,A6:A8,B6:B8)
6 1 0.1 0.1
7 5 0.7 0.15
8 9 0.9 0.05 Name P(X<=x)
9 >9 0.1 Description Output
10 Cell D5
11 Minimum = 4.89E-03
12 Maximum = 9.999967
13 Mean = 4.199986
14 Std Deviation = 2.773699
15 Variance = 7.693407
16 Skewness = 0.589373
17 Kurtosis = 2.285831
18 Errors Calculated = 0
19 Mode = 3.43314
20 5% Perc = 0.497997
21 10% Perc = 0.999338 10%ile is 1!
22 15% Perc = 1.333212
23 20% Perc = 1.665637
24 25% Perc = 1.996866
25 30% Perc = 2.332803
26 35% Perc = 2.664376
27 40% Perc = 2.996635
28 45% Perc = 3.330816
29 50% Perc = 3.663554
30 55% Perc = 3.995894
31 60% Perc = 4.33135
32 65% Perc = 4.664128
33 70% Perc = 4.997442 70%ile is 5!
34 75% Perc = 5.995409
35 80% Perc = 6.993743
36 85% Perc = 7.99109
37 90% Perc = 8.989162 90%ile is near 9
FIGURE 34 38 95% Perc = 9.499336



FIGURE 35

Solution Our work is in the file Cumulative.xls. See Figure 34. The RISKCUMULATIVE function
Cumulative.xls takes as inputs (in order) the following quantities:
■ The smallest value assumed by the random variable
■ The largest value assumed by the random variable
■ Intermediate values assumed by the random variable
■ For each intermediate value, the cumulative probability that the random variable
is less than or equal to the intermediate value
In cell D5, we enter the following formula to simulate NAO’s annual net income:
RISKCUMUL(B3,B4,A6:A8,B6:B8)
We could have also used the following formula in cell D4:
RISKCUMUL(0,10,{1,5,9},{0.1,0.7,0.9})
@Risk will now ensure that
■ For net income x between 0 and $1 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.1 − 0)/(1 − 0) = .1.
■ For net income x between $1 billion and $5 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.7 − .1)/(5 − 1) = .15.
■ For net income x between $5 billion and $9 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (.9 − .7)/(9 − 5) = .05.
■ For net income x greater than $9 billion, the cumulative probability that net income is less than or equal to x rises with a slope equal to (1 − .9)/(10 − 9) = .10.
After running 1,600 iterations we found the output in Figure 34. Note that the 10th
percentile of the random variable is near 1, the 70th percentile is near 5, and the 90th per-
centile is near 9. Figure 35 displays a cumulative ascending graph of net income. Note
that (as described previously) the slope of the graph is relatively constant between 0 and
1, between 1 and 5, between 5 and 9, and between 9 and 10.
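These slopes define a piecewise-linear CDF, which makes inverse-transform sampling immediate: draw u uniform on [0, 1) and interpolate x against the cumulative probabilities. A sketch:

```python
import numpy as np

def riskcumul_sample(rng, lo, hi, xs, ps, n):
    """Inverse-transform sampling from the piecewise-linear CDF through
    (lo, 0), (xs, ps), and (hi, 1)."""
    x = np.concatenate(([lo], xs, [hi]))
    p = np.concatenate(([0.0], ps, [1.0]))
    u = rng.random(n)
    return np.interp(u, p, x)  # interpolate x against cumulative probability

rng = np.random.default_rng(0)
income = riskcumul_sample(rng, 0.0, 10.0, [1, 5, 9], [0.1, 0.7, 0.9], 100_000)
print(round(np.quantile(income, 0.10), 2))  # ≈ 1, as specified
print(round(np.quantile(income, 0.70), 2))  # ≈ 5
print(round(np.quantile(income, 0.90), 2))  # ≈ 9
print(round(income.mean(), 2))              # ≈ 4.2, in line with Figure 34
```

The interpolation reproduces exactly the four constant slopes listed above, so the simulated percentiles land on the specified cumulative probabilities.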



23.7 The RISKTRIGEN Random Variable
When we use the RISKTRIANG function, we are assuming we know the absolute worst
and absolute best case that can occur. Many companies, such as Eli Lilly, prefer to use a
triangular random variable in which the worst case and best case are defined by a per-
centile of the random variable. For example, at Eli Lilly the 10th percentile of demand,
most likely demand, and 90th percentile of demand often define forecasts. The following
example shows how to use the RISKTRIGEN function to model uncertainty.

EXAMPLE 9 RISKTRIGEN

Eli Lilly believes there is a 10% chance that its new drug Niagara’s market share will be
25% or less, a 10% chance that market share will be 70% or more, and the most likely
market share is 40%. Use @Risk to model the market share for Niagara.
Solution Our work is in the file Risktrigen.xls. See Figure 36. In B7, we just entered the formula
Risktrigen.xls RISKTRIGEN(B3,B4,B5,10,90)

A B
1 trigen function
2
3 10%ile 0.25
4 Most likely 0.4
5 90 %ile 0.7
6
FIGURE 36 7 share 0.464537

FIGURE 37



C D
34 Name
35 Description Output
36 Cell [trigen.xls]S
37 Minimum = 9.73E-02
38 Maximum = 0.886495
39 Mean = 0.464533
40 Std Deviation = 0.166746
41 Variance = 2.78E-02
42 Skewness = 0.22598
43 Kurtosis = 2.398804
44 Errors Calculated = 0
45 Mode = 0.401881
46 5% Perc = 0.203626
47 10% Perc = 0.249634
48 15% Perc = 0.285171
49 20% Perc = 0.315337
50 25% Perc = 0.341713
51 30% Perc = 0.365485
52 35% Perc = 0.387192
53 40% Perc = 0.407964
54 45% Perc = 0.428942
55 50% Perc = 0.450952
56 55% Perc = 0.473918
57 60% Perc = 0.498488
58 65% Perc = 0.524349
59 70% Perc = 0.552427
60 75% Perc = 0.58265
61 80% Perc = 0.61619
62 85% Perc = 0.654338
63 90% Perc = 0.699825
FIGURE 38 64 95% Perc = 0.758373

The syntax of the RISKTRIGEN function is as follows:


RISKTRIGEN(lower value, most likely value, higher value, percentile for lower
value, percentile for higher value)
In Figure 37, we show the density function for the market share. Note that @Risk picks
the worst case for RISKTRIGEN (around 10%), so the chance of a market share below
25% is .10. @Risk picks the best case for RISKTRIGEN (around 89%), so the probabil-
ity of a share exceeding 70% is .10. When we ran 1,600 iterations, with cell B7 being the
output cell, we obtained the output in Figure 38.
Note that the 10th percentile is almost exactly 25%, and the 90th percentile is almost
exactly 70%.
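To imitate RISKTRIGEN outside @Risk, one can solve for the absolute minimum and maximum of the underlying triangular distribution from the 10th and 90th percentile inputs and then sample an ordinary triangular. The fixed-point iteration below is our own sketch, not @Risk's actual algorithm:

```python
import numpy as np

def trigen_bounds(p10_val, mode, p90_val, iters=200):
    """Fixed-point search for the triangular (min, max) whose 10th and 90th
    percentiles equal p10_val and p90_val."""
    a = p10_val - (mode - p10_val)   # rough starting guesses
    b = p90_val + (p90_val - mode)
    for _ in range(iters):
        # Left tail: F(x) = (x-a)^2 / ((b-a)(mode-a)); impose F(p10_val) = .10.
        a = p10_val - np.sqrt(0.10 * (b - a) * (mode - a))
        # Right tail: 1-F(x) = (b-x)^2 / ((b-a)(b-mode)); impose F(p90_val) = .90.
        b = p90_val + np.sqrt(0.10 * (b - a) * (b - mode))
    return a, b

a, b = trigen_bounds(0.25, 0.40, 0.70)
print(round(a, 3), round(b, 3))  # implied absolute worst and best shares

rng = np.random.default_rng(0)
share = rng.triangular(a, 0.40, b, size=100_000)
print(round(np.quantile(share, 0.10), 2))  # ≈ 0.25 by construction
print(round(np.quantile(share, 0.90), 2))  # ≈ 0.70
```

The implied bounds come out near .09 and .90, which bracket the simulated minimum (.097) and maximum (.886) reported in Figure 38.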

23.8 Creating a Distribution Based on a Point Forecast


We are constantly inundated by forecasts:
■ The government predicts the GDP will grow by 4% during the next year.
■ The Eli Lilly marketing department predicts that demand for a given drug will be
400,000,000 d.o.t. (days of therapy) during the next year.



■ A Wall Street guru predicts that the Dow will go up 20% during the next 12
months.
■ The bookmakers forecast that the Pacers will beat the Rockets by 6 in the open-
ing game of the 2005 NBA season.
Although the forecasts may be the best available, they are almost sure to be incorrect. For
example, the bookmakers’ prediction that the Pacers will win by 6 points is incorrect un-
less the Pacers win by exactly 6 points. In short, any single-valued (or point) forecast im-
plies a distribution for the quantity being forecasted. How can we find a random variable
that correctly models the uncertainty inherent in the point forecast? The key to putting a
distribution around a point forecast is to have some historical data about the accuracy of
past forecasts of the quantity of interest. For example, with regard to our forecast for the
Dow, we might have the forecast made in January of each of the past 10 years for the per-
centage change in the Dow and the actual change in the Dow for each of those years. We
begin by seeing if past forecasts exhibit any bias. For each past forecast, we determine
(actual value)/(forecast value). Then we average these ratios. If our forecasts are unbiased,
this average should be around 1. Any significant deviation from 1 would indicate a sig-
nificant bias.† For example, if the average of actual/forecast is 2, the actual results tend
to be around twice our forecast. To correct for this bias, we should automatically double
our forecast. If the average of actual/forecast is .5, the actual results tend to be around
half our forecast; to eliminate bias, we should automatically halve our forecast. Once we
have eliminated forecast bias, we look at the standard deviation of the percentage errors
of the unbiased forecast. We use the following @Risk random variable to model the quan-
tity being forecast.
RISKNORMAL(unbiased forecast, (percentage standard deviation of unbiased
forecasts)*(unbiased forecast))

EXAMPLE 10 Drug Forecast

Drugforecast.xls The file Drugforecast.xls contains actual and forecast sales (in millions of d.o.t.) for the
years 1995–2002. See Figure 39. The forecast for 2003 is that 60 million d.o.t. will be
sold. How would you model actual sales of the drug for 2003?
Solution Step 1 In cells F5:F12, check for bias by computing actual sales/forecast sales for each
year. To do this, copy from F5 to F6:F12 the formula
D5/E5
Step 2 In cell F2, compute the bias of the original forecasts by averaging each year’s ac-
tual/forecast sales.
AVERAGE(F5:F12)
We find that actual sales tend to come in 8% under forecast.
Step 3 In G5:G12, correct past biased forecasts by multiplying them by .92. Simply copy
from G5 to G6:G12 the formula
$F$2*E5

†To see if the bias is significantly different from 1, compute

|average of (actual)/(forecast) − 1| ÷ [(standard deviation of (actual)/(forecast))/√n]

If this exceeds t(α/2, n−1), then there is significant bias. We usually choose α = .05.



C D E F G H I
1 mean std dev
2 mean 0.918031 1 0.113753
3
Unbiased %age
4 Year Actual Sales Forecast A/F forecast error
5 1995 17 22 0.772727 20.19668 84%
6 1996 59 61 0.967213 55.9999 105%
7 1997 46 51 0.901961 46.81959 98%
8 1998 85 86 0.988372 78.95067 108%
9 1999 98 103 0.951456 94.5572 104%
10 2000 94 118 0.79661 108.3277 87%
11 2001 24 22 1.090909 20.19668 119%
FIGURE 39 12 2002 14 16 0.875 14.6885 95%

E F
14
15 Mean 2003 55.08187
FIGURE 40 16 Sigma 2003 6.2657

Step 4 In H5:H12, compute each year’s percentage error for the unbiased forecast. Copy
from H5 to H6:H12 the formula
D5/G5
Step 5 In cell I2, compute the standard deviation of the percentage errors with the
formula
STDEV(H5:H12)
We find that the standard deviation of past unbiased forecasts has been around 11% of
the unbiased forecast. We now model the 2003 sales of the drug (in millions of d.o.t.) with
the formula
RISKNORMAL(60*(.918), (60*.918)*.114) or RISKNORMAL(55.08,6.27)
See Figure 40.
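The whole bias-correction recipe can be reproduced from the Figure 39 data (note that Excel's STDEV is the sample standard deviation, hence ddof=1):

```python
import numpy as np

actual = np.array([17, 59, 46, 85, 98, 94, 24, 14], dtype=float)
forecast = np.array([22, 61, 51, 86, 103, 118, 22, 16], dtype=float)

bias = (actual / forecast).mean()            # ≈ 0.918: sales run 8% under forecast
unbiased = bias * forecast                   # bias-corrected past forecasts
sigma_pct = (actual / unbiased).std(ddof=1)  # ≈ 0.114 (sample std deviation)

mean_2003 = bias * 60                        # unbiased forecast for 2003
sigma_2003 = sigma_pct * mean_2003
print(round(mean_2003, 2), round(sigma_2003, 2))  # 55.08 6.27, as in Figure 40

# 2003 sales would then be modeled as RISKNORMAL(55.08, 6.27):
rng = np.random.default_rng(0)
sales_2003 = rng.normal(mean_2003, sigma_2003, size=100_000)
```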

23.9 Forecasting the Income of a Major Corporation


In many large corporations, different parts of a company make forecasts for quarterly net
income. An analyst in the CEO’s office pulls together the individual predictions to fore-
cast the entire company’s net income. In this section, we show an easy way to pool fore-
casts from different portions of a company and create a probabilistic forecast for the en-
tire company.
So far, we have usually assumed that @Risk functions in different cells are indepen-
dent. For example, the value of a RISKNORMAL(0,1) in cell A6 has no effect on the
value of a RISKNORMAL(0,1) in any other cell. In many situations, however, variables
of interest might be correlated. For example, a weak yen will lower the price of a
Japanese car in the United States and hurt GM market share. Since higher price incen-
tives increase market share, GM market share may also be negatively correlated with car



price. Also, net income of NAO (North American operations) is often correlated with net
income in Europe. The following example shows how to model correlations with @Risk.
Recall that the correlation between two random variables must lie between −1 and +1.
■ Correlation near +1 implies a strong positive linear relationship.
■ Correlation near −1 implies a strong negative linear relationship.
■ Correlation near +.5 implies a moderate positive linear relationship.
■ Correlation near −.5 implies a moderate negative linear relationship.
■ Correlation near 0 implies a weak linear relationship.

EXAMPLE 11 Forecasting GM Net Income

Suppose GM CEO Rick Waggoner has received the following forecast for quarterly net
income (in billions of dollars) for Europe, NAO, Latin America, and Asia. See Figure 41
Corrinc.xls and file Corrinc.xls.
For example, we believe Latin American income will be on average $.4 billion. Based
on past forecast records, the standard deviation of forecast errors is 25%, so the standard
deviation of net income is $.1 billion. We assume that actual income will follow a nor-
mal distribution. Historically, net income in different parts of the world has been corre-
lated. Suppose the correlations are as given in B10:F13. Latin America and Europe are
most correlated, and Asia and NAO are least correlated. What is the probability that total
net income will exceed $4 billion?
Solution To correlate the net incomes of the different regions, we use the RISKCORRMAT func-
tion. The syntax is as follows:
 Actual @Risk formula, RISKCORRMAT(correlation matrix, relevant column
of matrix)
where
Correlation matrix: cells where correlations between variables are located
Relevant column: column of correlation matrix that gives correlations for this cell
Actual @Risk formula: distribution of the random variable

A B C D E F G
1 Net Income Consolidation
2 with correlation Goal is 4 billion!
3 Mean Std. Dev Actual
4 1 LA 0.4 0.1 0.449011 0.521472
5 2 NAO 2 0.4 1.256578 1.264837
6 3 Europe 1.1 0.3 1.14203 0.994558
7 4 Asia 0.8 0.3 0.685143 0.707549
8 Total!! 3.532761 3.488417
9
10 Correlations LA NAO Europe Asia
11 LA 1 0.6 0.7 0.5
12 NAO 0.6 1 0.6 0.4
13 Europe 0.7 0.6 1 0.5
14 Asia 0.5 0.4 0.5 1
15
FIGURE 41 16

23.9 Forecasting the Income of a Major Corporation 1253


B C D E F
54 Scenario #3 = >90% 36% chance we fail
55 Target #1 (Value) 4 to meet target
56 Target #1 (Perc%) 35.72%
57

B C D
17 Name Total!! / Actual
18 Description Output
19 Cell E8
20 Minimum = 1.858541
21 Maximum = 6.71191
22 Mean = 4.300031
23 Std Deviation = 0.895158
24 Variance = 0.801308
25 Skewness = -5.82E-02
26 Kurtosis = 2.894021
27 Errors Calculated 0
28 Mode = 4.470891
29 5% Perc = 2.756473
30 10% Perc = 3.186955
31 15% Perc = 3.364678
32 20% Perc = 3.554199
33 25% Perc = 3.715597
34 30% Perc = 3.854618
35 35% Perc = 3.96633
36 40% Perc = 4.080534
37 45% Perc = 4.173182
38 50% Perc = 4.306374
39 55% Perc = 4.413318
40 60% Perc = 4.530555
41 65% Perc = 4.632649
42 70% Perc = 4.7776
43 75% Perc = 4.907873
44 80% Perc = 5.04496
45 85% Perc = 5.216321
46 90% Perc = 5.456462
FIGURE 42 47 95% Perc = 5.758535

Step 1 Generate actual Latin American income in cell E4 with the formula
RISKNORMAL(C4,D4,RISKCORRMAT($C$11:$F$14,A4))
This ensures that the correlation of Latin American income with other incomes is created
according to the first column of C11:F14. Also, Latin American income will be normally
distributed, with a mean of $.4 billion and standard deviation of $.1 billion.
Step 2 Copying the formula in E4 to E5:E7 generates the net income in each region and
tells @Risk to use the correlations in C11:F14.
Step 3 In cell E8, compute total income with the formula
SUM(E4:E7)
Step 4 Cell E8 has been made the output cell. Using a target value of 4, we find that
there is a 36% chance of not meeting the $4 billion target. Also, the standard deviation
of net income is $895 million. See Figure 42.
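The same correlated-income experiment can be sketched outside @Risk. The following is a minimal NumPy version (illustrative only; the means, standard deviations, and correlation matrix are the ones from the example, and the sample size stands in for @Risk iterations). It builds a covariance matrix from the correlations and draws correlated normal incomes:

```python
import numpy as np

# Means and standard deviations (in $ billions) for LA, NAO, Europe, Asia
means = np.array([0.4, 2.0, 1.1, 0.8])
sds = np.array([0.1, 0.4, 0.3, 0.3])

# Correlation matrix from cells C11:F14
corr = np.array([
    [1.0, 0.6, 0.7, 0.5],
    [0.6, 1.0, 0.6, 0.4],
    [0.7, 0.6, 1.0, 0.5],
    [0.5, 0.4, 0.5, 1.0],
])

# Convert correlations to covariances: cov_ij = corr_ij * sd_i * sd_j
cov = corr * np.outer(sds, sds)

rng = np.random.default_rng(0)
incomes = rng.multivariate_normal(means, cov, size=100_000)
total = incomes.sum(axis=1)

print(round(total.mean(), 2))        # ≈ 4.3
print(round(total.std(), 2))         # ≈ 0.89, matching the @Risk run
print(round((total < 4).mean(), 2))  # ≈ 0.37 chance of missing the target
```

The covariance construction plays the role that RISKCORRMAT plays in the spreadsheet: it ties each region's draw to the others without changing any region's own mean or standard deviation.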



B C D
15 Name Total!! / Actual
16 Description Output
17 Cell E8
18 Minimum = 2.174825
19 Maximum = 6.290998
20 Mean = 4.299921
21 Std Deviation = 0.605397
22 Variance = 0.366506

B C D E F
53 Target #1 (Value)= 4
54 Target #1 (Perc%)= 30.76% 31% chance we fail
55 to meet target
56
57
FIGURE 43 58

FIGURE 44
B C D E F G H I J K L
5
6 Name Total!! / Actual   LA / Actual   NAO / Actual   Europe / Actual   Asia / Actual
7 Description Output   Normal(C4,D4)   Normal(C5,D5)   Normal(C6,D6)   Normal(C7,D7)
8 Iteration#   E8   E4   E5   E6   E7         LA NAO Europe Asia
9 1 4.804644 0.478546 2.196594 1.351783 0.777721     LA 1
10 2 4.132098 0.441263 1.699526 1.184871 0.806438    NAO 0.591262 1
11 3 6.129157 0.496915 2.453791 1.91255 1.265901     Europe 0.702735 0.587704 1
12 4 6.54744 0.57896 2.424948 1.968532 1.574999      Asia 0.498132 0.399115 0.496651 1
13 5 3.057065 0.319965 1.517732 0.968105 0.251263
14 6 5.324339 0.488499 2.292126 1.084479 1.459235
907 899 4.735623 0.469691 2.19903 1.466369 0.600534
908 900 4.901974 0.507751 2.242637 1.004801 1.146786

What If Net Incomes Are Not Correlated?


Nocorrinc.xls In workbook Nocorrinc.xls, we ran the simulation of Example 11, assuming that the net
incomes in the different regions were independent (that is, had 0 correlation). The results
appear in Figure 43. Note that the absence of correlation has reduced both the standard
deviation (to about $600 million) and our chance of missing the $4 billion income target.
This is because when the regional incomes are independent, a high income in one region
is likely to be cancelled out by a low income in another region. When the regional incomes
are positively correlated, this diversification or hedging effect is reduced.
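This effect can also be checked analytically: with zero correlation, the variance of total income is just the sum of the regional variances, while positive correlations add the corr(i, j)·sd(i)·sd(j) cross terms. A quick sketch with the example's numbers:

```python
import math

sds = [0.1, 0.4, 0.3, 0.3]  # LA, NAO, Europe, Asia ($ billions)

# Independent case: Var(total) = sum of the individual variances
sd_indep = math.sqrt(sum(s ** 2 for s in sds))
print(round(sd_indep, 3))  # 0.592, close to the simulated $0.605 billion

# Correlated case: include the corr_ij * sd_i * sd_j cross terms
corr = [[1.0, 0.6, 0.7, 0.5],
        [0.6, 1.0, 0.6, 0.4],
        [0.7, 0.6, 1.0, 0.5],
        [0.5, 0.4, 0.5, 1.0]]
var_corr = sum(corr[i][j] * sds[i] * sds[j]
               for i in range(4) for j in range(4))
print(round(math.sqrt(var_corr), 3))  # 0.894, close to the simulated $0.895 billion
```

Both analytic values agree closely with the two simulation runs, which confirms that the difference between Figures 42 and 43 is driven entirely by the correlations.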

Checking the Correlations

We can check that @Risk correctly correlated the net incomes. Make sure to check
Collect Distribution Samples when you run the simulation. Once you have run the simu-
lation, select the Data option from the Results menu. The results of each iteration will ap-
pear in the bottom half of the screen. You can copy and paste this data to a blank work-
sheet. See Figure 44. Now compute the correlations between each region's net income
with the Correlation tool (Tools > Data Analysis > Correlation) and fill in the



FIGURE 45

dialog box as in Figure 45. Note that the correlations between the net incomes are virtu-
ally identical to what we entered in the spreadsheet.

23.10 Using Data to Obtain Inputs for New Product Simulations


Many companies use subjective estimates to obtain inputs for new product simulations.
For example, market size may be subjectively modeled as a triangular random variable,
with the marketing department coming to a consensus on best-case, worst-case, and most
likely scenarios. In many situations, however, past data may be used to obtain estimates
of key variables. We now discuss how past data on similar products or projects can be
used to model share, price, volume, and cost uncertainty. The utility of any model will
depend on the type of data available.

The Scenario Approach to Modeling Volume Uncertainty

When trying to model volume of sales for a new product in the auto and drug industries,
it is common to look for similar products sold in the past. We often have knowledge of
the following:
■ Accuracy of forecasts for year 1 sales volume
■ Data on how sales change after the first year
Consider Figure 46—data on actual and forecast year 1 sales for seven similar products.
Volume.xls See file Volume.xls. For example, for product 1, actual year 1 sales were 80,000; the fore-
cast for year 1 was 44,396. The percentage change in sales from year to year for the seven
products is given in Figure 47.
For example, product 1 sales went up 43% during the second year, 33% during the
third year, etc.
Suppose we forecast year 1 sales to be 90,000 units. How can we model the uncertain
volume in product sales?
Step 1 From cell D11 (formula AVERAGE(D4:D10)) of Figure 46, we see that actual
year 1 sales of similar products have averaged 36.3% higher than forecast; that is, past
forecasts have underestimated year 1 sales.



B C D E F
3 Actual   Forecast   Actual/Forecast   Unbiased forecast   %age error
4 80000 44396 1.8019641 60516.733 1.3219484
5 100000 99209 1.0079731 135233.01 0.7394644
6 120000 94808 1.265716 129233.95 0.9285486
7 150000 96813 1.5493787 131966.99 1.1366479
8 180000 172862 1.0412931 235630.31 0.7639085
9 200000 108770 1.8387423 148265.72 1.3489295
10 55000 53052 1.0367187 72315.832 0.7605527
FIGURE 46 11 mean 1.3631123 stdev 0.2677479

FIGURE 47
A B C D E F G H I J
13 Scenario Year 2 Year 3 Year 4 Year 5 Year 6 Year 7 Year 8 Year 9 Year 10
14 1 1.43 1.33 0.93 0.75 0.57 0.40 0.37 0.38 0.24
15 2 1.39 1.13 0.96 0.59 0.49 0.45 0.46 0.40 0.24
16 3 1.30 1.38 0.98 0.84 0.80 0.65 0.57 0.48 0.35
17 4 1.47 1.49 1.36 1.15 1.20 1.15 0.93 0.99 0.71
18 5 1.23 1.06 0.73 0.45 0.39 0.31 0.28 0.23 0.15
19 6 1.26 1.22 1.08 0.79 0.77 0.70 0.60 0.60 0.49
20 7 1.30 1.02 0.84 0.62 0.45 0.32 0.27 0.24 0.22

Step 2 Therefore, we can create unbiased forecasts in column E by copying the formula
$D$11*C4
from E4 to E5:E10.
Step 3 In column F, we compute the percentage error of our unbiased forecasts. In cell
F4, we compute the percentage error for product 1 with the formula
B4/E4
Copying this formula from F4 to F5:F10 generates percentage errors for the other
products.
Step 4 In cell F11, we compute the standard deviation (26.7%) of these percentage er-
rors with the formula
STDEV(F4:F10)
We are now ready to model 10 years of sales for the new product. We model year 1 sales
as normally distributed, with a mean of 1.36*90,000 and a standard deviation of
.267*(1.36*90,000). To model sales for years 2–10, we use @Risk to randomly choose
one of the seven volume-change patterns (or scenarios) from Figure 47. Then we use the
chosen scenario to generate sales growth for years 2–10.
Step 5 In cell G4, we choose a scenario with the formula
RISKDUNIFORM(A14:A20)
This formula gives a 1/7 chance of choosing each scenario.



FIGURE 48

G H I J K L M N O P Q
1 Year 1 Forecast 90000
2 Year
3 Scenario 1 2 3 4 5 6 7 8 9 10
4 4 102588.9 151164 225922 306360.9 351610 420801.1 484511.5 451618.5 445300.1 314821.9

Step 6 In H4, we generate year 1 sales with the formula
RISKNORMAL(I1*D11,(I1*D11)*F11)
This implies that
Mean year 1 sales = (biased forecast) * (factor to correct for bias)
Standard deviation of year 1 sales = (unbiased forecast for year 1 sales) * (standard
deviation of errors as a percentage of the unbiased forecast)
Step 7 In cell I4, we generate year 2 sales with the formula
H4*VLOOKUP($G$4,$A$14:$J$20,I3)
This formula takes year 1 generated sales and multiplies it by the year 2 growth factor for
the chosen scenario. Copying this formula to I4:Q4 generates sales for years 2–10. See
Figure 48.
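Steps 5–7 amount to drawing a noisy year 1 figure and then applying the growth factors of one randomly chosen historical scenario. A rough Python sketch of the same logic (the bias factor and percentage standard deviation are the rounded values from cells D11 and F11; the function name is chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

forecast = 90_000  # biased year 1 forecast
bias = 1.3631      # mean actual/forecast ratio (cell D11)
pct_sd = 0.2677    # st dev of the percentage errors (cell F11)

# Year-over-year growth factors for years 2-10, one row per scenario (Figure 47)
scenarios = np.array([
    [1.43, 1.33, 0.93, 0.75, 0.57, 0.40, 0.37, 0.38, 0.24],
    [1.39, 1.13, 0.96, 0.59, 0.49, 0.45, 0.46, 0.40, 0.24],
    [1.30, 1.38, 0.98, 0.84, 0.80, 0.65, 0.57, 0.48, 0.35],
    [1.47, 1.49, 1.36, 1.15, 1.20, 1.15, 0.93, 0.99, 0.71],
    [1.23, 1.06, 0.73, 0.45, 0.39, 0.31, 0.28, 0.23, 0.15],
    [1.26, 1.22, 1.08, 0.79, 0.77, 0.70, 0.60, 0.60, 0.49],
    [1.30, 1.02, 0.84, 0.62, 0.45, 0.32, 0.27, 0.24, 0.22],
])

def simulate_sales_path():
    # Year 1: normal, mean = unbiased forecast, sd = pct_sd * unbiased forecast
    unbiased = bias * forecast
    year1 = rng.normal(unbiased, pct_sd * unbiased)
    # Years 2-10: cumulative product of one randomly chosen scenario's factors
    growth = scenarios[rng.integers(len(scenarios))]
    return year1 * np.concatenate(([1.0], np.cumprod(growth)))

path = simulate_sales_path()
print(path.shape)  # (10,) -- one sales figure per year
```

Each call plays the role of one @Risk iteration: the `rng.integers` draw mimics RISKDUNIFORM's 1/7 scenario choice, and the cumulative product mimics the chained VLOOKUP multiplications.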

Modeling Statistical Relationships with One Independent Variable

Suppose we want to model the dependence of a variable Y on a single independent vari-
able X. We proceed as follows.
Step 1 Try to find the straight line, power curve, and exponential curve that best fit the
data. The easiest way to do this is to plot the points with Excel and use the Trend Curve
feature.
■ The straight line is of the form Y = a + bX.
■ The power function is of the form Y = aX^b.
■ The exponential function is of the form Y = ae^(bX).

Step 2 For each curve and each data point, compute the percentage error
(Actual value of Y - Predicted value of Y)/(Predicted value of Y)
Step 3 For each curve, compute mean absolute percentage error (MAPE) by averaging
the absolute percentage errors.
Step 4 Choose the curve that yields the lowest MAPE as the best fit.



Step 5 Does at least one of the three curves appear to have some predictive value? Check
the plot for this, or look at the p-value from the regression; it should be at most .15. If so,
model the uncertainty associated with the relationship between X and Y as follows:
■ If the straight line is the best fit, then model Y as
RISKNORMAL(prediction, standard deviation of actual (not percentage) errors)
■ If the power curve or the exponential curve is the best fit, then model Y as
RISKNORMAL(prediction, prediction*(standard deviation of percentage errors))
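The fitting steps above can be sketched in Python with least squares (Excel's trend curves fit the power and exponential forms in log space, which the sketch mirrors). Applied to the Zozac capacity-cost data of Table 12, the power curve comes out best; `fit_curves` is an illustrative helper, not an @Risk function:

```python
import numpy as np

def fit_curves(x, y):
    """Fit straight-line, power, and exponential curves and return the
    predictions and mean absolute percentage error (MAPE) of each."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    b1, a1 = np.polyfit(x, y, 1)                     # y = a1 + b1*x
    b2, ln_a2 = np.polyfit(np.log(x), np.log(y), 1)  # ln y = ln a2 + b2*ln x
    b3, ln_a3 = np.polyfit(x, np.log(y), 1)          # ln y = ln a3 + b3*x

    preds = {
        "linear": a1 + b1 * x,
        "power": np.exp(ln_a2) * x ** b2,
        "exponential": np.exp(ln_a3) * np.exp(b3 * x),
    }
    # Percentage error convention matches the text: (actual - predicted)/predicted
    mape = {name: np.mean(np.abs((y - p) / p)) for name, p in preds.items()}
    return preds, mape

# Zozac capacity-cost data from Table 12
capacity = [20, 50, 80, 110, 140, 160]
cost = [156, 350, 490, 654, 760, 890]
preds, mape = fit_curves(capacity, cost)
print(min(mape, key=mape.get))  # power
```

The winning curve's MAPE (about 2.1% for the power curve here) then feeds Step 5's RISKNORMAL model.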

EXAMPLE 12 Modeling the Cost of Building Capacity

We are not sure of the cost of building capacity for a new drug, but we believe that costs
will run around 50% more (in real terms) than for the drug Zozac. Table 12 gives data on
the costs incurred when capacity was built for Zozac.
For example, when 110,000 units of capacity for Zozac were built, the cost was
$654,000 (in today’s dollars). How would you model the uncertain cost of building ca-
pacity for the new product?
Capacity.xls Solution See the file Capacity.xls.
Step 1 To begin, we plot the best-fitting straight line, power curve, and exponential
curve. To do this, use Chart Wizard (X-Y option 1) and click on points till they turn gold.
Next, choose the desired curve and select R-SQ and the Equation option. We obtain the
graphs in Figures 49–51.
Step 2 In C3:E8 (see Figure 52), we compute the predictions for each curve. In C3:C8,
we compute the straight-line predictions by copying from C3 to C3:C8 the formula
5.0623*A3+77.516
In D3:D8, we compute the power curve prediction by copying from D3 to D3:D8 the
formula
13.483*A3^0.8229
In E3:E8, we compute the exponential curve predictions by copying from E3 to E3:E8 the
formula
164.52*EXP(0.0114*A3)
Step 3 In F3:H8, we use
(Actual value of Y - Predicted value of Y)/(Predicted value of Y)

TABLE 12
Capacity (thousands)   Cost ($ thousands)

20 156
50 350
80 490
110 654
140 760
160 890



FIGURE 49 Linear trend curve for the capacity-cost data: y = 5.0623x + 77.516, R² = 0.9945

FIGURE 50 Power trend curve for the capacity-cost data: y = 13.483x^0.8229, R² = 0.9983

FIGURE 51 Exponential trend curve for the capacity-cost data: y = 164.52e^(0.0114x), R² = 0.9103

A B C D E
1 Capacity Cost Modeling
2 Capacity(000's)   Cost(000's)   Linear Prediction   Power Prediction   Exponential Prediction
3 20 156 178.762 158.6369 206.6511577
4 50 350 330.631 337.1855 290.9152953
5 80 490 482.5 496.4086 409.5390027
6 110 654 634.369 645.132 576.5327482
7 140 760 786.238 786.7474 811.6199132
FIGURE 52 8 160 890 887.484 878.1261 1019.463863



F G H I J K
1
2 %age Error Linear   %age Error Power   %age Error Exponential   APE Linear   APE Power   APE Exponential
3 -0.127331 -0.016622 -0.2451046 0.127331 0.016622 0.24510464
4 0.058582 0.038004 0.20309934 0.058582 0.038004 0.20309934
5 0.015544 -0.01291 0.19646724 0.015544 0.01291 0.19646724
6 0.030946 0.013746 0.13436748 0.030946 0.013746 0.13436748
7 -0.033372 -0.033997 -0.0636011 0.033372 0.033997 0.06360109
8 0.002835 0.013522 -0.1269921 0.002835 0.013522 0.12699211
9 St dev 0.026132   MAPE 0.044768 0.021467 0.16160532
FIGURE 53

to compute the percentage error for each model. (See Figure 53.) To do this, simply copy
the formula
($B3-C3)/C3
from F3 to F3:H8.
Step 4 In I3:K9, we compute the MAPE for each equation. We begin by computing the
absolute percentage error for each point and each curve by copying the formula
ABS(F3)
from I3 to I3:K8.
Next we compute the MAPE for each equation by copying the formula
AVERAGE(I3:I8)
from I9 to J9:K9.
Step 5 We find that the power curve (see J9) has the lowest MAPE. Therefore, we model
the cost of adding capacity with a power curve. By entering in G9 the formula
STDEV(G3:G8)
we find 2.6% to be the standard deviation of the percentage errors for the power curve.
We now model the cost of adding capacity for the new product with the formula
1.5*RISKNORMAL(13.483*(Capacity)^.8229,.026*13.483*(Capacity)^.8229)
That is, our best guess for the cost of adding capacity has a mean equal to the power curve
forecast and a standard deviation equal to 2.6% of our forecast.
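The resulting model is easy to sample directly. A minimal sketch (the function name and the 100,000-unit capacity figure are illustrative; costs are in $ thousands):

```python
import numpy as np

rng = np.random.default_rng(2)

def new_capacity_cost(capacity_thousands, n=100_000):
    """Cost of building capacity for the new drug: 1.5 x the Zozac
    power-curve fit, with a 2.6% multiplicative normal error."""
    mean = 13.483 * capacity_thousands ** 0.8229
    return 1.5 * rng.normal(mean, 0.026 * mean, size=n)

cost = new_capacity_cost(100)  # e.g., 100,000 units of capacity
print(round(cost.mean(), 1))   # ≈ 895 ($ thousands)
print(round(cost.std(), 1))    # ≈ 23
```

Note that both the mean and the standard deviation scale with capacity, so the absolute uncertainty grows with the size of the plant while the percentage uncertainty stays at 2.6%.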

EXAMPLE 13 Bidding on a Construction Project

We are bidding against a competitor for a construction project and want to model her bid.
In the past, her bid has been closely related to our (estimated) cost of completing the
project. See file Biddata.xls and Figure 54.
Figures 55–57 give the best-fitting linear, power, and exponential curves.
As in Example 12, we compute predictions and MAPEs for each curve (see Figure 58).
The linear curve has the smallest MAPE. Computing the actual errors for the linear
curve’s predictions (in column F) and their standard deviation, we find a standard devia-
tion of .94. Therefore, we model our competitor’s bid as
RISKNORMAL(1.489*(Our cost) - 1.7873, .94)
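One use of this model is estimating the chance that a given bid of ours undercuts the competitor's. A hedged sketch (the cost estimate of 30 and the candidate bids are hypothetical numbers, and `p_win` is a name chosen here; all figures are in $ thousands):

```python
import numpy as np

rng = np.random.default_rng(3)

def p_win(our_bid, our_cost, n=100_000):
    """Probability that our bid is below the competitor's, whose bid is
    modeled as normal(1.489*cost - 1.7873, 0.94)."""
    comp_bid = rng.normal(1.489 * our_cost - 1.7873, 0.94, size=n)
    return (our_bid < comp_bid).mean()

# With an estimated cost of 30, the competitor's expected bid is about 42.88
for bid in (41, 43, 45):
    print(bid, round(p_win(bid, our_cost=30), 3))
```

Sweeping candidate bids this way traces out the classic bidding trade-off: lower bids win more often but earn less when they do.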



A B C D E F
1 (All numbers in 000's)
2 Our cost   Comp1 bid   Linear prediction   Power prediction   Exponential prediction   Actual - Linear error
3 10 13 13.1027 13.35697 16.3795084 -0.1027
4 14 20 19.0587 19.07213 19.5315493 0.9413
5 16 22 22.0367 21.96795 21.3282198 -0.0367
6 18 25 25.0147 24.88511 23.2901627 -0.0147
7 30 44 42.8827 42.73548 39.4893521 1.1173
8 25 34 35.4377 35.23444 31.6909474 -1.4377
9 38 56 54.7947 54.88668 56.1502464 1.2053
10 44 63 63.7287 64.10133 73.114819 -0.7287
11 24 33 33.9487 33.74424 30.3267775 -0.9487
FIGURE 54 12 stdev 0.94189151

FIGURE 55 Linear trend curve for the bid data: y = 1.489x - 1.7873, R² = 0.9969

FIGURE 56 Power trend curve for the bid data: y = 1.1671x^1.0586, R² = 0.997

FIGURE 57 Exponential trend curve for the bid data: y = 10.549e^(0.044x), R² = 0.9495



G H I J K L
2 Linear %age error   Power %age error   Exponential %age error   Linear abs %age error   Power abs %age error   Exponential abs %age error
3 -0.00784 -0.02673 -0.20633 0.007838 0.026726 0.206325
4 0.04939 0.048651 0.023984 0.04939 0.048651 0.023984
5 -0.00167 0.001459 0.031497 0.001665 0.001459 0.031497
6 -0.00059 0.004617 0.073415 0.000588 0.004617 0.073415
7 0.026055 0.029589 0.114224 0.026055 0.029589 0.114224
8 -0.04057 -0.03504 0.072862 0.04057 0.035035 0.072862
9 0.021997 0.020284 -0.00268 0.021997 0.020284 0.002676
10 -0.01143 -0.01718 -0.13834 0.011434 0.017181 0.138342
11 -0.02795 -0.02206 0.088147 0.027945 0.022055 0.088147
12 MAPE 0.020831 0.022844 0.083497
FIGURE 58

EXAMPLE 14 The Effects of New Competition on Price

For similar products, the year after the first competitor comes in has historically shown a
significant price drop. Figure 59 contains data on this situation.
For example, for the first product, a competitor entered in year 1. During year 2, a 22%
price drop was observed, after allowing for a normal inflationary increase of 5% during
the second year. Model the effect on price the year after the first competitor enters the
market. See file Pricedata.xls.
Solution Figures 60–62 give the best-fitting linear, power, and exponential curves. The extremely
low R2 values imply that the year of entry has little or no effect on the price drop the year
after the first competitor comes in. Therefore, we model price drop as a RISKNORMAL
function, using the mean and standard deviation found in D14 and D15. If a competitor
enters during year t, we would model the year t + 1 price with the formula
1.05*(year t price)*RISKNORMAL(.803,.0362)
Note: .803 = 1 − .197, where 19.7% is the mean price drop (cell D14), and .0362 is the
standard deviation of the drop (cell D15) expressed as a fraction.

B C D
3 Year competitor enters   Share drop next year   Price drop next year
4 1 35 22
5 1 33 21
6 2 20 17
7 3 15 15
8 3 13 19
9 4 14 24
10 5 10 15
11 6 9 22
12 5 11 25
13 4 13 17
14 Mean 19.7
FIGURE 59 15 Std Dev 3.622461



FIGURE 60 Linear trend curve for the price-drop data: y = 0.197x + 19.03, R² = 0.0087

FIGURE 61 Power trend curve for the price-drop data: y = 19.749x^(-0.0169), R² = 0.0034

FIGURE 62 Exponential trend curve for the price-drop data: y = 18.967e^(0.0066x), R² = 0.0036

Here, the assumption is that the price drop during a year is normally distributed. To
check this, we could compute the skewness (with the SKEW function) and kurtosis (with
the KURT function) of the data. If both the skewness and the (excess) kurtosis are near 0,
the price drop is probably normally distributed. An alternative approach to modeling the drop in
price is to use the formula RISKDUNIFORM(D4:D13). This ensures that the drop in
price is equally likely to assume one of the observed values. This approach has the ad-
vantage of not automatically assuming normality. The disadvantage, however, is that
using the RISKDUNIFORM function implies that only 10 values of price drop are
possible.
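The two modeling choices are easy to compare side by side. A short sketch (the year-t price of 100 is hypothetical; the drops are the ten observations from Figure 59, and resampling them mimics RISKDUNIFORM):

```python
import numpy as np

drops = np.array([22, 21, 17, 15, 19, 24, 15, 22, 25, 17])  # % drops, Figure 59

print(drops.mean())                  # 19.7
print(round(drops.std(ddof=1), 2))   # ≈ 3.62

# Quick symmetry check (cf. Excel's SKEW): a value near 0 supports normality
z = (drops - drops.mean()) / drops.std(ddof=1)
print(round((z ** 3).mean(), 2))

rng = np.random.default_rng(4)
price_t = 100.0  # hypothetical year-t price

# Normal model of next year's price (mean drop 19.7%, sd 3.62%), with 5% inflation
normal_model = 1.05 * price_t * rng.normal(0.803, 0.0362, size=10_000)

# Empirical model: resample the observed drops (only these 10 values can occur)
empirical_model = 1.05 * price_t * (1 - rng.choice(drops, size=10_000) / 100)
```

The normal model smooths between the observed drops and can produce values outside their range; the empirical model stays exactly on the ten historical values, which is the trade-off the text describes.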

