
Inevitable Disappointment in Projects Selected on the Basis of Forecasts

M. Chen and J. Dyer, University of Texas at Austin

Copyright © 2009 Society of Petroleum Engineers. This paper (SPE 107710) was accepted for presentation at the Hydrocarbon Economics and Evaluation Symposium, Dallas, 1–3 April 2007, and revised for publication. Original manuscript received for review 10 February 2007. Revised manuscript received for review 19 April 2008. Paper peer approved 13 May 2008.

Summary

Investors often select projects whose estimated performance measures meet or exceed a hurdle value. At the time of decision making, the true performance of a project is unknown, but uncertain forecasts are available. Decision makers (DMs) often ignore the prediction errors when they use these forecasts to choose projects. To the disappointment of the DMs, many selected projects result in smaller actual yields than those that were forecasted. Some have attributed the cause of this to the optimistic bias of the predictions. This paper shows that this disappointment can occur even if the prediction is unbiased. In this case, a bias can be introduced by the selection process that will allow more unattractive (overestimated) projects to be accepted than attractive (underestimated) ones. Although a similar phenomenon has been noted in statistics and finance research, it is not well understood in the context of project selection by DMs.

We present a solution method based on Bayesian updating and demonstrate its effectiveness in eliminating the disappointment in project selection with realistic data from oil exploration and production projects.

Introduction

Capital-investment decisions often involve selecting good projects from all available alternatives on the basis of some criterion that measures or is closely related to project performance, such as net present value (NPV), internal rate of return (IRR), or the reserves of an oil well. In many applications, the DMs set a minimum value, and a project will be undertaken if its performance measure meets or exceeds it; otherwise, the project will be rejected. For example, the NPV of an investment must be greater than zero; the IRR should be at least equal to the cost of capital plus a risk premium; an oil well must have a reasonable reserves size to be profitable to develop.

However, at the time of decision making, the performances of the alternatives are usually unknown. For example, the NPV or IRR of a project will not be known with certainty before the project is undertaken. To resolve the problem, it is common to base the decisions on predicted values that can be evaluated before the project is undertaken.

A problem arises because predictions are subject to errors and will seldom be equal to the actual values that are realized. If all alternatives were to be carried out, the actual performances could be measured upon the completion of the projects, some of which would turn out to be better than predicted while others would turn out to be worse. In some cases, the DMs may be aware of the uncertainty associated with the prediction of performance because a probability distribution of the performance measure may be generated by a Monte Carlo simulation or a decision tree. Nonetheless, DMs often ignore this uncertainty and simply use the mean of the distribution for project selection (i.e., they choose optimal projects by comparing the expected values of the predictions with the hurdle threshold). A possible justification is that, if the estimates are not systematically biased, then it seems that the positive and negative errors would cancel each other. Is this practice entirely satisfactory?

Empirical surveys (Pruitt and Gitman 1987; Statman and Tyebjee 1985) show that many projects selected on the basis of predicted performance actually have smaller yields than expected. Here is an example: Fig. 1 plots actual sizes vs. estimated sizes (in log scale) of oil reserves in the Norwegian sector of the North Sea reported by the Norwegian Petroleum Directorate in 1997. Approximately 70% are overestimates, and several reserves with very large forecasts turned out to be very poor, making the estimate much higher than the size of discovery.

Fig. 1—Size of discovery vs. estimated size for reserves in the Norwegian sector of the North Sea.

The presence of prediction errors and the problems they cause have long been recognized. Some consider that systematic biases inherent in the project-estimation procedures cause the errors. For example, prediction may involve technical imperfections such as systematic underestimation of costs and overoptimism in predicting cash flows; sometimes, economic and political incentives exaggerate the predictions, such as the tendency to inflate estimates to compete for limited resources (Pinches 1982).

Alternative theories suggest that the post-investment disappointments occur even if the predictions are unbiased. Brown (1974, 1978) considers the problem of accepting capital-investment projects on the basis of the estimates of their costs and revenues. He points out that the acceptance process may favor those projects whose values are overestimated and, thus, will introduce a selection bias in the value of accepted projects, even when the estimation is unbiased. Smidt (1979) considers a capital-budgeting problem of selecting and post-auditing projects on the basis of NPV. The author argues that the acceptance cutoff value should be greater than zero if the decision is based on the forecast. Horner (1980), in an unpublished essay, illustrates the theory of inevitable disappointment with a simple example related to the selection of an oil and gas project. Harrison and March (1984) consider the problem of selecting the top-ranked project, ordered by the forecasts, from a group of alternatives. They show that the true value of the selected project will be lower than that predicted, which is labeled the post-decision disappointment.

A related problem is the winner's curse, which usually occurs in common-value auctions (Kagel and Levin 2002). In fact, the oil and gas industry was among the first to recognize this problem, and the term "winner's curse" first appeared in a paper by Capen et al. (1971). In that case, the value of the bid item is not exactly known, but its distribution is common knowledge to all bidders. Suppose bidders independently submit bids that are unbiased estimates of the true value and the bidder with the highest bid will win the item at that price. Because the winning price is the highest of a set of random guesses, it is biased high, and the winner will suffer the winner's curse because, on average, she will pay more than the true value.

However, this phenomenon is not well understood in the context of project selection by DMs. Smith and Winkler (2006) did a great job in bringing this subject to the attention of decision analysts. They address a similar problem, called the optimizer's curse, where they consider the decision problem of choosing the top-ranked project according to the estimated returns. They show that the expected value of the chosen alternative could be less than its unbiased estimate. Schuyler and Nieman (2007) address the same issue in the context of portfolio planning with examples of factors that can affect the magnitude of the problem.



In this paper, we study a similar problem of accepting exploration and production projects by comparing their forecasts, which are assumed to be unbiased estimates of the expected value of the project performance, with a hurdle value. We examine the actual performance of selected projects and the selection criterion with a normal model and a log-normal model and show that the DMs will be disappointed inevitably if these forecasts are used in this manner. Finally, we discuss a method of calibrating the forecasts with prior knowledge to avoid such disappointment.

This is a generic problem that arises when selecting from alternatives by comparing the expected value of their predicted performance with a hurdle value. The alternatives can be projects in a portfolio, as discussed in this article, but they also can be different alternatives within a given project, which can be treated as different subprojects.

A Simple Normal Model

Suppose the true value of the project, denoted by X, is a random variable whose value will not be known until the project is completed. For example, in an oil exploration and production project, X can be the true size of discovery or the market value of the asset in 1 year, depending on the problem. DMs usually have some prior knowledge about the distribution of the value of X. This knowledge can be gained, for example, through their experience with similar projects in the past or by observing outcomes of similar projects in the industry. Smith and Winkler (2006) state that they ". . . believe that many decision-makers would be comfortable assessing a prior mean for a given alternative before observing the results of an analysis."

Among all investment alternatives, DMs often believe that a large number of projects will turn out to be mediocre, only a few will be very successful, and others can be very poor. This is reasonable in a competitive environment because, if a project is perceived to be extremely profitable (or poor), then investments in such a project will increase (or decrease), which in turn will increase (or decrease) the demand for resources and market supplies. Thus, in equilibrium, the project will have a normal return (Miller 1987). Here, we assume X follows a normal distribution $N(m, \sigma_X^2)$, where the mean and variance of X ($m$ and $\sigma_X^2$, respectively) are assumed to be known.

Though X is unknown at the time of decision making, often a predicted value, denoted by Y, can be evaluated or obtained from some other source. For instance, Y could be the expected value of a simulated distribution of estimates of X from a Monte Carlo analysis. Because of the nature of prediction, Y is unlikely to equal X exactly. The prediction error, denoted by e, is defined as follows:

$$e = Y - X. \qquad (1)$$

Here, e is assumed to be normal noise $N(0, \sigma_e^2)$. Thus, $E(X - Y) = 0$ and Y is an unbiased estimator of X. We assume e is independent of X [i.e., $\mathrm{Cov}(X, e) = 0$]. The prediction variance $\sigma_e^2$ is a measure of the imprecision of the prediction. Eq. 1, plus the preceding distributional assumptions on X and e, implies that the random vector (X, Y) follows a bivariate normal distribution: $(X, Y) \sim BN(m, \sigma_X^2;\, m, \sigma_X^2 + \sigma_e^2;\, \rho)$.

The variance of Y is $\sigma_Y^2 = \sigma_X^2 + \sigma_e^2$, and the correlation coefficient between X and Y is $\rho = \sigma_X / \sigma_Y$. Note that $0 \le \rho^2 \le 1$. The quantity $\rho^2$ also measures prediction precision—the closer $\rho^2$ is to 1, the more precise the prediction procedure is.

We are interested in the selection criterion by which a project will be accepted if its predicted value Y is equal to or greater than a hurdle value c.

The Performance of Accepted Projects

The true returns are unknown, and, thus, one has to rely on forecasts at the time of decision making. Suppose the selection criterion is that a project will be accepted if its predicted value y, which is treated as though it is an unbiased deterministic estimate, meets or exceeds a hurdle value c.

A natural question to ask is, "What can be expected about the true value of the selected projects?" Because the forecast y is unbiased, the DMs might think that the occurrences of overpredictions and underpredictions would be equally likely, and, therefore, the expected true value would be equal to y. Were it not for the hurdle rate, this would be true because we would have E(X) = E(Y). However, as will be shown, the DMs will be disappointed with this expectation because $E(X \mid Y \ge c) < E(Y \mid Y \ge c)$.

Suppose a project has a predicted value y that is greater than the hurdle value c and, thus, it will be accepted according to the selection criterion. Then, the expected true value of this project is

$$E(X \mid Y,\, Y \ge c) = (1 - \rho^2)\, m + \rho^2\, Y, \qquad (2)$$

where $Y \ge c$ implies the project is selected and $\rho = \sigma_X / \sigma_Y$ is the correlation between the true value and the predicted value. See Appendix A for the derivation of Eq. 2, which relies on Bayes's theorem. This procedure is known as a Bayesian updating process. The conditional expectation in Eq. 2 is also called the posterior mean. According to Eq. 2, the posterior mean is a weighted average of the prior mean m and the forecast y. Thus, it reflects the information from the forecaster as well as from the DM. When Y > m, meaning the project is predicted to be above average, the posterior mean is smaller than the predicted value Y because $\rho^2 < 1$ [i.e., $E(X \mid Y,\, Y \ge \max(c, m)) < Y$]. Thus, the DM will be disappointed inevitably if she expects that the average true value would be equal to the predicted value. And, the larger the predicted value Y is, the more disappointment the DM will receive. This phenomenon is also known as "regression to the mean." Note that Eq. 2 also provides a solution using the Bayesian updating process, which is to use the posterior mean as the calibrated prediction when selecting projects. This will be discussed in more detail later.

To increase insight into the preceding result, let us look at an example. Suppose the average IRR of a certain type of project is known to be 5% with a standard deviation (SD) of 10%, and a forecaster is known to have a prediction-error SD of 10%. Suppose the hurdle rate is 8%, and a project is predicted to have an IRR of 10%. The joint distribution of true and predicted IRR is bivariate normal, as shown in Fig. 2. In Fig. 2, the red line is the regression line of the predicted IRR on the true IRR; it is a 45° line, meaning that the predicted IRR is equally likely to be above or below the true value. The blue line is the regression of the true IRR on the predicted IRR, and it has a smaller slope than the red line. A project with a predicted value of 10% could be an outstanding project whose true IRR is greater than 10% (e.g., IRR = 15%) and the forecast is undervalued, or it could be a mediocre or poor project whose true IRR is smaller than 10% (e.g., IRR = 5%) and the forecast is overvalued. However, as can be seen from Fig. 2, the probability of this project being an outstanding one is much less than the probability of its being a bad one. Hence, the expected true IRR of a project whose forecast is 10% will be lower than 10%. In fact, in this case, it can be shown that the expected true IRR is only 7.5%. Therefore, if the DM takes the prediction at face value, the project will be accepted; however, it really should not be.

Fig. 2—Normal case: A project with the predicted value Y = 10%.
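The calculation behind this example is easy to check directly. The following short Python sketch is not part of the original paper; it simply applies Eq. 2 to the numbers quoted above and verifies the 7.5% posterior mean with a Monte Carlo simulation of the assumed bivariate normal model. The variable names and the simulation setup are illustrative choices, not the authors' code.

```python
import numpy as np

# Prior on the true IRR X ~ N(m, sx^2) and prediction error e ~ N(0, se^2),
# as in the IRR example of the text: m = 5%, sx = 10%, se = 10%.
m, sx, se = 0.05, 0.10, 0.10
sy = np.sqrt(sx**2 + se**2)          # SD of the forecast Y = X + e
rho2 = (sx / sy)**2                  # squared correlation between X and Y

def posterior_mean(y, m=m, rho2=rho2):
    """Eq. 2: E(X | Y = y) = (1 - rho^2) m + rho^2 y."""
    return (1.0 - rho2) * m + rho2 * y

print(posterior_mean(0.10))          # 0.075, the 7.5% quoted in the text

# Monte Carlo check: among projects whose forecast is close to 10%,
# the average true IRR is near the posterior mean, not near 10%.
rng = np.random.default_rng(0)
x = rng.normal(m, sx, 2_000_000)                  # true IRRs
y = x + rng.normal(0.0, se, x.size)               # unbiased forecasts
near = np.abs(y - 0.10) < 0.005
print(x[near].mean())                             # approximately 0.075
```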



The Performance of the Selection Criterion

Now, we proceed to look at the performance of the selection criterion. Unlike the previous case, where a forecast value is observed, here, we want to assess the performance of applying the hurdle value c before we know the forecast of any project. What can be said about the expected performance of projects selected by using the hurdle value c? A simple answer is that the average predicted value of selected projects will be greater than c because $E(Y \mid Y \ge c) \ge c$ regardless of the distribution of Y. If the distribution of Y is assumed to be known, as in this case, we can calculate the expectation of the predicted value, denoted by $\hat{m}_c$:

$$\hat{m}_c = E(Y \mid Y \ge c) = m + \sigma_Y\, l\!\left(\frac{c - m}{\sigma_Y}\right), \qquad (3)$$

where $l(t)$ is the hazard function of the standard normal distribution; it is defined as $l(t) = f(t)/[1 - F(t)]$, where $f(t)$ and $F(t)$ are the density and cumulative distribution function of the standard normal distribution, respectively. The function $l(t)$ is monotonically increasing in t and $l(t) \ge t$ for all t. Note that $\hat{m}_c \ge \max(c, m)$ for all c; in particular, $\hat{m}_c \ge m$.

However, the DM is more interested in the expected true value, denoted by $m_c$, of projects selected with the hurdle value c. Here, $m_c$ is used as a measure of the performance of the selection criterion. One question is whether we should expect $m_c$ to be close to $\hat{m}_c$. To answer this question, we need to derive $m_c$:

$$m_c = E(X \mid Y \ge c) = (1 - \rho^2)\, m + \rho^2\, \hat{m}_c. \qquad (4)$$

From Eq. 4, it is easy to see that $m_c$ is smaller than $\hat{m}_c$. Thus, the DM will be inevitably disappointed if she anticipates achieving $\hat{m}_c$ with the selection criterion, because she actually receives only $m_c$ on average.

Another question is whether $m_c$ will always be greater than the hurdle value c. Because c is usually the minimum requirement for a project to be funded, the selection criterion should secure the minimum performance c for the selected projects. However, this is not guaranteed. When the hurdle value reaches or exceeds some critical value $m^*$, the expected true value of selected projects will be lower than c (i.e., $m_c \le c$ if $c \ge m^*$). The details are provided in Appendix A.

Here is an example of a project-selection problem similar to the one considered previously. Suppose the average IRR of a project is 5% with an SD of 10% and the prediction has an error SD of 10%. In Fig. 3, the expected performance is plotted against various values of the hurdle rate c. The solid line is the 45° line where the expected performance equals the hurdle rate. The dotted line represents the expected predicted value $\hat{m}_c$, which is always above the solid line, meaning that $\hat{m}_c \ge c$ for all c. The blue line represents the expected true performance $m_c$. The critical point, denoted by $m^*$ in the graph, where the line of $m_c$ crosses the solid line, is approximately 0.13. This means that, if the hurdle rate c is greater than 13%, then the selection criterion will lead to a selected project whose expected true IRR is lower than c.

Fig. 3—Normal case: Expected performance of the selection criterion for various values of hurdle rate c.
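For readers who want to reproduce the pattern in Fig. 3, the sketch below (again illustrative, not taken from the paper) evaluates Eqs. 3 and 4 with the standard normal hazard function and solves numerically for the critical hurdle value $m^*$; with the stated inputs it returns roughly 0.13, consistent with the text. The use of scipy and of a bracketing root finder here is an implementation choice, not something prescribed by the authors.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Same illustrative numbers as the text: prior IRR ~ N(5%, 10%^2),
# forecast-error SD 10%, so sigma_Y = sqrt(0.1^2 + 0.1^2) and rho^2 = 0.5.
m, sx, se = 0.05, 0.10, 0.10
sy = np.sqrt(sx**2 + se**2)
rho2 = (sx / sy)**2

def hazard(t):
    """Standard normal hazard function l(t) = f(t) / [1 - F(t)]."""
    return norm.pdf(t) / norm.sf(t)

def m_hat(c):
    """Eq. 3: expected forecast of accepted projects, E(Y | Y >= c)."""
    return m + sy * hazard((c - m) / sy)

def m_true(c):
    """Eq. 4: expected true value of accepted projects, E(X | Y >= c)."""
    return (1.0 - rho2) * m + rho2 * m_hat(c)

print(m_hat(0.08), m_true(0.08))     # the forecast mean exceeds the true mean

# Critical hurdle m*: the value of c at which E(X | Y >= c) = c.
m_star = brentq(lambda c: m_true(c) - c, m, 1.0)
print(m_star)                        # roughly 0.13 for these inputs
```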



Extension to the Log-Normal Case

Often, it is inappropriate to model the outcome of the project as a normal random variable. For example, in oil and gas exploration-and-production investment projects, the recoverable volume or the size of the reserves are important decision variables, which can take on only positive values and usually do not follow a symmetrical distribution. This can be seen from Fig. 1, where the reserves size is plotted in the log scale. However, if the logarithm of those variables can be modeled as normal, the previous results can be extended to the log-normal distribution in a straightforward manner. Disappointment similar to that in the normal case also occurs in the log-normal setting.

Suppose the true value of the project can be expressed as $X = e^{X'}$ and the prediction can be written as $Y = e^{Y'}$, where $(X', Y')$ is bivariate normal, defined in the same way as (X, Y) in the preceding section. This implies that the distributions of the true value $e^{X'}$ and the predicted value $e^{Y'}$ are both log-normal, and their expectations are $E(e^{X'}) = e^{m + \sigma_{X'}^2/2}$ and $E(e^{Y'}) = e^{m + \sigma_{Y'}^2/2}$, respectively.

Here, we consider the selection criterion whereby a project will be accepted if its predicted value $e^{Y'}$ is equal to or greater than a hurdle value $e^c$. First, we examine the performance of the selected project whose predicted value is $e^{Y'}$. If a project has a predicted value $e^{Y'}$ that is above the hurdle value $e^c$, what can be expected for its true value? To answer this question, we need to know the distribution of the true value conditioned on the predicted value. Because $X'$ conditioned on $Y'$ is $N\!\left[(1 - \rho^2)m + \rho^2 Y',\, (1 - \rho^2)\sigma_{X'}^2\right]$, $e^{X'}$ conditioned on $e^{Y'}$ is log-normal. Thus, for the selected project whose predicted value is $e^{Y'}$, its expected true value is

$$E\!\left(e^{X'} \mid e^{Y'},\, Y' \ge c\right) = \exp\!\left[(1 - \rho^2)\!\left(m + \sigma_{X'}^2/2\right) + \rho^2 Y'\right], \qquad (5)$$

which is a value between the prior mean $E(e^{X'})$ and the predicted value and, thus, is smaller than $e^{Y'}$. Once again, Eq. 5 is obtained through a Bayesian updating process, as shown in Appendix A. This result is also explained by the regression effect, as in the normal case. Therefore, when $e^{Y'} > e^c$, the DM will be disappointed in the performance of the accepted project if she expects that the average true value will equal the predicted value. As in the normal case, the posterior mean in Eq. 5 suggests a solution to this problem (i.e., the DM should calibrate the estimate with the prior knowledge before applying the selection criterion).

Next, we examine the performance of the selection criterion. We are interested in knowing the expected performance of all projects that are accepted if their forecasts $e^{Y'}$ are above the hurdle value $e^c$. First, we calculate the expected forecast for the accepted projects, denoted by $\hat{v}_c$:

$$\hat{v}_c = E\!\left(e^{Y'} \mid e^{Y'} \ge e^c\right) = \frac{1 - F\!\left(\left[c - (m + \sigma_{Y'}^2)\right]/\sigma_{Y'}\right)}{1 - F\!\left[(c - m)/\sigma_{Y'}\right]}\, \exp\!\left(m + \sigma_{Y'}^2/2\right). \qquad (6)$$

Then, we calculate the expected true value of the accepted projects, denoted by $v_c$:

$$v_c = E\!\left(e^{X'} \mid e^{Y'} \ge e^c\right) = \frac{1 - F\!\left(\left[c - (m + \sigma_{X'}^2)\right]/\sigma_{Y'}\right)}{1 - F\!\left[(c - m)/\sigma_{Y'}\right]}\, \exp\!\left(m + \sigma_{X'}^2/2\right). \qquad (7)$$

To compare $\hat{v}_c$ and $v_c$, we look at their ratio:

$$\frac{\hat{v}_c}{v_c} = \frac{1 - F\!\left(\left[c - (m + \sigma_{Y'}^2)\right]/\sigma_{Y'}\right)}{1 - F\!\left(\left[c - (m + \sigma_{X'}^2)\right]/\sigma_{Y'}\right)}\, \exp\!\left[\left(\sigma_{Y'}^2 - \sigma_{X'}^2\right)/2\right] \ge 1, \quad \text{because } \sigma_{X'}^2 \le \sigma_{Y'}^2.$$

This result implies that the DM will be disappointed if she expects that, for all projects selected by the hurdle value $e^c$, on average, the true value will achieve the predicted value.

Similar to the normal case, $v_c$ is not necessarily greater than $e^c$. An example is plotted (in log scale) in Fig. 4. It can be seen that $\log(v_c)$ is greater than c for some small values of c, but it will fall below c eventually as c increases.

Fig. 4—Log-normal case: Expected performance of the selection criterion for various values of c.

Calibration of Forecasts Through Bayesian Updating

To eliminate the disappointment, the calibration method suggested by the analysis in the previous sections is useful. That is, the DM can use the posterior mean, which is Eq. 2 in the normal case and Eq. 5 in the log-normal case, as the calibrated forecast, and select projects by comparing this calibrated prediction with the hurdle value. For example, in the normal case, when a project is predicted to have an outcome y, the DM should use $(1 - \rho^2)m + \rho^2 y$ in the selection process. Similarly, $\exp\!\left[(1 - \rho^2)(m + \sigma_{X'}^2/2) + \rho^2 Y'\right]$ should be used in the log-normal situation. This estimator combines information from the forecaster as well as from the DM. It is easy to see that the disappointment in the performance of accepted projects, as well as in the selection criterion, will disappear if the calibrated forecast is used.

Here is an example based on the Norwegian North Sea data to illustrate this method. Reserves plotted in red in Fig. 1 are excluded because they introduced a serious bias. We assume the remaining reserves are alternatives from which the DM is going to select. Suppose the DM has the prior knowledge that the size of the reserves follows a log-normal distribution with m = 4.5 and $\sigma_{X'}$ = 1.5, which means that the average size of discovery is approximately 277 million BOE. Suppose the error SD of the forecaster is 1.3. Now the DM is able to calibrate the forecast by applying Eq. 5.

To see the effect of the calibration, we compare the average discovery size of the reserves selected with and without the calibration. Fig. 5 displays the average performance for selection-hurdle values between 400 million and 700 million BOE. If the estimates are taken at face value without calibration (the dotted line), then the average size of the selected reserves will go below the hurdle value after 590 million BOE, a truly disappointing result. However, if the DM uses the calibrated forecasts (the dashed line), then the average size of selected reserves will always be higher than the hurdle value.

Fig. 5—Comparison of the selection criterion with and without calibration for the North Sea data.
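The calibration step is straightforward to automate. Below is a minimal Python sketch; it is not the authors' code, and because the Norwegian Petroleum Directorate data are not reproduced in this text, the alternatives are simulated from the stated prior (m = 4.5, log-scale SD 1.5) and forecast-error SD (1.3). The printed numbers therefore illustrate the gap between expected and realized values and the effect of calibration, rather than reproduce Fig. 5 exactly.

```python
import numpy as np

# Prior and forecast-error parameters quoted in the text for the North Sea
# example (all on the log scale): true log-size X' ~ N(4.5, 1.5^2), error SD 1.3.
m, sx, se = 4.5, 1.5, 1.3
sy = np.sqrt(sx**2 + se**2)
rho2 = (sx / sy)**2

def calibrated(log_forecast):
    """Eq. 5: posterior mean of the true size given a log-scale forecast."""
    return np.exp((1.0 - rho2) * (m + sx**2 / 2.0) + rho2 * log_forecast)

# The actual reserve data are not reproduced here, so alternatives are
# simulated from the assumed model purely to illustrate the mechanics.
rng = np.random.default_rng(1)
x_log = rng.normal(m, sx, 1_000_000)              # true log-sizes
y_log = x_log + rng.normal(0.0, se, x_log.size)   # unbiased log-scale forecasts
true_size, forecast = np.exp(x_log), np.exp(y_log)

for hurdle in (400.0, 500.0, 600.0, 700.0):       # million BOE
    raw = forecast >= hurdle                      # take forecasts at face value
    cal = calibrated(y_log) >= hurdle             # select on calibrated forecast
    print(hurdle,
          round(forecast[raw].mean()),    # what the DM expects (cf. Eq. 6)
          round(true_size[raw].mean()),   # what she gets on average (cf. Eq. 7)
          round(true_size[cal].mean()))   # calibrated selection stays >= hurdle
```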



Summary and Discussions

The disappointment problem occurs because of the bias introduced by the selection process. After the selection criterion is applied, the accepted projects are no longer a random sample of all projects. Therefore, one cannot take the unbiased predicted value at its face value and hope that the error will be canceled out in some way. Indeed, it will not be, because the cancellation occurs only in random samples.

A rational DM should recognize the bias and, therefore, lower her expectation to eliminate the disappointment. For normally distributed predictions, the DM should use Eq. 2 to calibrate the forecast and then select projects by comparing the calibrated forecast with the hurdle value c. In the log-normal case, the forecast should be adjusted by Eq. 5 before making the selection decision. Then, the disappointment in the average performance of the accepted projects will disappear.

The winner's curse is similar to our problem in that decisions are based on estimates and the selection process results in a nonrandom sample, because the winning price is not a random guess of the true value. However, the two problems are not the same because the selection procedures are different. In the winner's curse, decisions are made on the basis of ranks of estimates, and only the one with the largest order statistic is chosen. In the problem considered here, decisions are made by comparing estimates with a hurdle value.

Nomenclature
 c = hurdle value
 $m^*$ = critical value
 X = true value of a project
 Y = predicted value of a project
 e = prediction error
 l(t) = hazard function of the standard normal distribution
 m = mean of a distribution
 $v_c$ = expected true value for the accepted projects
 $\hat{v}_c$ = expected forecast for the accepted projects
 $\rho$ = correlation coefficient
 f(t) = density function of the standard normal distribution
 F(t) = cumulative distribution function of the standard normal distribution

Acknowledgments

This work was partially supported by the sponsors of the Center for Petroleum Asset Risk Management at The University of Texas at Austin, which acknowledges the contributions of Statoil, BP, Chevron, Landmark, and Devon. We also thank Pete Rose for discussions and for generously providing data and other materials.

References

Brown, K.C. 1974. A note on the apparent bias of net revenue estimates for capital investment projects. The Journal of Finance 29 (4): 1215–1216. doi:10.2307/2978396.
Brown, K.C. 1978. The rate of return of selected investment projects. The Journal of Finance 33 (4): 1250–1253. doi:10.2307/2326956.
Capen, E.C., Clapp, R.V., and Campbell, W.M. 1971. Competitive Bidding in High-Risk Situations. J. Pet Tech 23 (6): 641–653. SPE-2993-PA. doi:10.2118/2993-PA.
Harrison, R.J. and March, J.G. 1984. Decision making and postdecision surprises. Administrative Science Quarterly 29 (1): 26–42. doi:10.2307/2393078.
Horner, D. 1980. On the theory of inevitable disappointment. Theoretical Note No. 21 (unpublished).
Kagel, J.H. and Levin, D. 2002. Common Value Auctions and the Winner's Curse. Princeton, New Jersey: Princeton University Press.
Miller, E.M. 1978. Uncertainty induced bias in capital budgeting. Financial Management 7 (3): 12–18. doi:10.2307/3665005.
Miller, E.M. 1987. The competitive market assumption and capital budgeting criteria. Financial Management 16 (4): 22–28. doi:10.2307/3666105.
Miller, E.M. 2000. Capital budgeting errors seldom cancel. Financial Practice & Education 10 (2): 128–135.
Pinches, G.E. 1982. Myopia, capital budgeting and decision making. Financial Management 11 (3): 6–19. doi:10.2307/3664993.
Pruitt, S.W. and Gitman, L.J. 1987. Capital budgeting forecast biases: Evidence from the Fortune 500. Financial Management 16 (1): 46–51. doi:10.2307/3665549.
Schuyler, J. and Nieman, T. 2007. Optimizer's Curse: Removing the Effect of This Bias in Portfolio Planning. SPE Proj Fac & Const 3 (1): 1–9. SPE-107852-PA. doi:10.2118/107852-PA.
Smidt, S. 1979. A Bayesian analysis of project selection and of post audit evaluations. The Journal of Finance 34 (3): 675–688. doi:10.2307/2327434.
Smith, J.E. and Winkler, R.L. 2006. The optimizer's curse: Skepticism and post-decision surprise in decision analysis. Management Science 52 (3): 311–322. doi:10.1287/mnsc.1050.0451.
Statman, M. and Tyebjee, T.T. 1985. Optimistic capital budgeting forecasts: An experiment. Financial Management 14 (3): 27–33. doi:10.2307/3665056.

Appendix A

Derivation of Eq. 2. From the linear-regression results, the true value X given the predicted value Y is a normal random variable with mean equal to $a + bY$, where $b = \rho\,\sigma_X/\sigma_Y = \rho^2$ and $a = m_X - b\,m_Y = (1 - \rho^2)m$. Note that $\rho = \sigma_X/\sigma_Y$ is the correlation coefficient between X and Y. Thus,

$$E(X \mid Y) = a + bY = (1 - \rho^2)m + \rho^2 Y. \qquad (A\text{-}1)$$

Derivation of Eq. 3.

$$\hat{m}_c = E(Y \mid Y \ge c) = \frac{\int_c^{+\infty} y\, f_Y(y)\, dy}{P(Y \ge c)}
= \frac{1}{P(Y \ge c)}\,\frac{1}{\sqrt{2\pi}\,\sigma_Y} \int_c^{+\infty} y \exp\!\left[-\frac{(y - m)^2}{2\sigma_Y^2}\right] dy$$
$$= \frac{1}{P(Y \ge c)} \left\{ m\,[1 - F_Y(c)] + \frac{\sigma_Y}{\sqrt{2\pi}} \exp\!\left[-\frac{(c - m)^2}{2\sigma_Y^2}\right] \right\}
= m + \sigma_Y^2\, \frac{f_Y(c)}{1 - F_Y(c)}$$
$$= m + \sigma_Y\, \frac{f\!\left(\frac{c - m}{\sigma_Y}\right)}{1 - F\!\left(\frac{c - m}{\sigma_Y}\right)}
= m + \sigma_Y\, l\!\left(\frac{c - m}{\sigma_Y}\right). \qquad (A\text{-}2)$$



Derivation of Eq. 4.

$$m_c = E(X \mid Y \ge c) = \frac{\int_c^{+\infty} E(X \mid Y = y)\, f_Y(y)\, dy}{P(Y \ge c)}
= \frac{\int_c^{+\infty} \left[(1 - \rho^2)m + \rho^2 y\right] f_Y(y)\, dy}{P(Y \ge c)}
= (1 - \rho^2)m + \rho^2\, \hat{m}_c. \qquad (A\text{-}3)$$

Derivation of Eq. 5. On the basis of the linear-regression results, $X' \mid Y' = y'$ is a normal random variable with mean $(1 - \rho^2)m + \rho^2 y'$ and variance $(1 - \rho^2)\sigma_{X'}^2$. Therefore, $e^{X'}$ conditioned on $e^{Y'}$ is log-normal, with

$$E\!\left(e^{X'} \mid e^{Y'},\, Y' \ge c\right) = \exp\!\left[E(X' \mid Y') + \mathrm{var}(X' \mid Y')/2\right]
= \exp\!\left[(1 - \rho^2)\!\left(m + \sigma_{X'}^2/2\right) + \rho^2 Y'\right]. \qquad (A\text{-}4)$$

Derivation of Eq. 6.

$$\hat{v}_c = E\!\left(e^{Y'} \mid e^{Y'} \ge e^c\right) = \frac{\int_c^{+\infty} e^{y}\, f_{Y'}(y)\, dy}{P(Y' \ge c)}
= \frac{1}{P(Y' \ge c)}\,\frac{1}{\sqrt{2\pi}\,\sigma_{Y'}} \int_c^{+\infty} \exp\!\left[y - \frac{(y - m)^2}{2\sigma_{Y'}^2}\right] dy.$$

Completing the square, $y - (y - m)^2/(2\sigma_{Y'}^2) = m + \sigma_{Y'}^2/2 - \left[y - (m + \sigma_{Y'}^2)\right]^2/(2\sigma_{Y'}^2)$, so

$$\hat{v}_c = \frac{\exp\!\left(m + \sigma_{Y'}^2/2\right)}{P(Y' \ge c)}\,\frac{1}{\sqrt{2\pi}\,\sigma_{Y'}} \int_c^{+\infty} \exp\!\left[-\frac{\left(y - (m + \sigma_{Y'}^2)\right)^2}{2\sigma_{Y'}^2}\right] dy
= \frac{1 - F\!\left(\left[c - (m + \sigma_{Y'}^2)\right]/\sigma_{Y'}\right)}{1 - F\!\left[(c - m)/\sigma_{Y'}\right]}\, \exp\!\left(m + \sigma_{Y'}^2/2\right). \qquad (A\text{-}5)$$

Derivation of Eq. 7. Using Eq. 5 with $b = \rho^2$,

$$v_c = E\!\left(e^{X'} \mid e^{Y'} \ge e^c\right) = \frac{\int_c^{+\infty} E\!\left(e^{X'} \mid Y' = y\right) f_{Y'}(y)\, dy}{P(Y' \ge c)}
= \frac{\exp\!\left[(1 - b)\!\left(m + \sigma_{X'}^2/2\right)\right]}{P(Y' \ge c)}\,\frac{1}{\sqrt{2\pi}\,\sigma_{Y'}} \int_c^{+\infty} \exp\!\left[b y - \frac{(y - m)^2}{2\sigma_{Y'}^2}\right] dy.$$

Completing the square as before, and noting that $b\,\sigma_{Y'}^2 = \sigma_{X'}^2$,

$$b y - \frac{(y - m)^2}{2\sigma_{Y'}^2} = b\!\left(m + \sigma_{X'}^2/2\right) - \frac{\left[y - (m + \sigma_{X'}^2)\right]^2}{2\sigma_{Y'}^2},$$

so

$$v_c = \frac{\exp\!\left[(1 - b)\!\left(m + \sigma_{X'}^2/2\right) + b\!\left(m + \sigma_{X'}^2/2\right)\right]}{P(Y' \ge c)}\left[1 - F\!\left(\frac{c - (m + \sigma_{X'}^2)}{\sigma_{Y'}}\right)\right]
= \frac{1 - F\!\left(\left[c - (m + \sigma_{X'}^2)\right]/\sigma_{Y'}\right)}{1 - F\!\left[(c - m)/\sigma_{Y'}\right]}\, \exp\!\left(m + \sigma_{X'}^2/2\right). \qquad (A\text{-}6)$$

Calculation of the Critical Point $m^*$. We want to find a critical point $m^*$ such that $E(X \mid Y \ge m^*) = m^*$:

$$E(X \mid Y \ge m^*) = (1 - \rho^2)m + \rho^2\left[m + \sigma_Y\, l\!\left(\frac{m^* - m}{\sigma_Y}\right)\right]
= m + \frac{\sigma_X^2}{\sigma_Y}\, l\!\left(\frac{m^* - m}{\sigma_Y}\right) = m^*
\;\;\Longrightarrow\;\; l(q) = \frac{\sigma_Y^2}{\sigma_X^2}\, q, \qquad (A\text{-}7)$$

where $q = (m^* - m)/\sigma_Y$. Thus, q depends only on $\sigma_Y^2/\sigma_X^2$.

Next, we show that there is a unique solution for q. It is not difficult to obtain

$$\lim_{q \to \infty} \frac{l(q)}{q} = 1. \qquad (A\text{-}8)$$

Therefore, for a sufficiently large q, $l(q) < (\sigma_Y^2/\sigma_X^2)\, q$ because $\sigma_Y^2/\sigma_X^2 > 1$. But $l(0) > 0$. Therefore, there is a unique solution for q, which can be found easily with some numerical algorithm.

Min Chen is a post-doctoral associate in the School of Public Health at Yale University. He holds a PhD degree from the Department of Information, Risk, and Operations Management, McCombs School of Business, at The University of Texas at Austin. His research interests include Bayesian analysis, statistical risk management, and biostatistics. James S. Dyer holds the Fondren Centennial Chair of Business in the McCombs School of Business at The University of Texas at Austin. He holds a BA degree in physics and a PhD degree in business administration, both from The University of Texas at Austin. Dyer's research is focused on the evaluation of risky investment decisions. He served as a member of the Steering Committee for the SPE Forum on Profit Prediction held in 2007.

