
Ecological Modelling 105 (1998) 337–345

Effect of uncertainty in input and parameter values on model prediction error

D. Wallach a,*, M. Genard b

a Unité d'Agronomie, Institut National de la Recherche Agronomique (INRA), BP 27, 31326 Castanet-Tolosan Cedex, France
b Unité de Recherche en Ecophysiologie et Horticulture, Institut National de la Recherche Agronomique (INRA), Domaine Saint-Paul, Site Agroparc, 84914 Avignon Cedex 9, France

Accepted 26 September 1997

Abstract

Uncertainty in input or parameter values affects the quality of model predictions, and uncertainty analysis attempts to quantify these effects. This is important, first as part of the overall investigation of model predictive quality, and second in order to know whether additional or more precise measurements are worthwhile. Here, two particular aspects of uncertainty analysis are studied. The first is the relationship of uncertainty analysis to the mean squared error of prediction (MSEP) of a model. It is shown that uncertainty affects the model bias contribution to MSEP, but only through nonlinearities in the model. The direct effect of variability is on the model variance contribution to MSEP. It is shown that uncertainty in the input variables always increases model variance. Similarly, model variance is always larger when one averages over a range of parameter values than when one uses the mean parameter values. In practice, however, one is usually interested in the model with specific parameter values, and in this case one cannot draw general conclusions in the absence of detailed assumptions about the correctness of the model. In particular, certain parameter values could give a smaller model variance than the mean parameter values. The second aspect studied is the effect on MSEP of having both literature-based parameters and parameters adjusted to data in the same model. It is shown that the presence of adjusted parameters generally decreases the effect of uncertainty in the literature parameters. To illustrate the theory derived here, we apply it to a model of sugar accumulation in fruit. © 1998 Elsevier Science B.V.
Keywords: Model evaluation; Uncertainty analysis; Sensitivity analysis; Prediction error; Parameter adjustment

1. Introduction

Mathematical modeling is increasingly used as a tool for the study of agricultural and ecological systems. Typically, there is uncertainty in the values of the input variables and the parameters used in these models. It is important to quantify the effects of such uncertainty on the quality of model predictions, for two reasons. First, in order to better understand model behavior, it is useful to separate the different contributions to the errors in model predictions, and the uncertainty in inputs and parameters is one of these contributions. Second, this uncertainty could be reduced by making additional or more accurate measurements, and it is important to know how much this additional effort might improve model predictions.

* Corresponding author. Tel.: +33 561285033; fax: +33 561735537; e-mail: [email protected]

0304-3800/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved. PII S0304-3800(97)00180-4
Sensitivity analysis is one approach to analyzing
the effects of varying input or parameter values on
model output. It consists of varying the input or
parameter values over some range and observing
the effect on some output. This is the approach
proposed in de Wit and Goudriaan (1978), and
applied, for example, in Friend (1995). One can
examine the inputs or parameters one at a time, or
explore interactions (Salam et al., 1994). Sensitivity
analysis can be used to identify the parameters to
which the system is most sensitive, with a view
toward changing the true values of those parameters in order to modify system behavior (Silberbush
and Barber, 1983; Thornley and Johnson, 1990;
Teo et al., 1995). Sensitivity analysis is also used as
an exploratory tool to aid in understanding model
behavior, by indicating which parameters have the
largest effect on the model outputs. However,
sensitivity analysis does not indicate whether additional measurements are worthwhile or not. First
of all, results of sensitivity analysis do not depend
on the true uncertainty in the inputs and parameters. Also, sensitivity analysis is not explicitly
related to the quality of model predictions.
Uncertainty analysis is similar to sensitivity analysis, but explicitly takes into account the uncertainty in the input and parameter values. The idea is that, assuming the distributions of the inputs and parameters are known, one can sample from those distributions and generate the resulting distributions of the output variables (e.g. Rossing et al., 1994a,b; Aggarwal, 1995). It is still not clear, however, exactly how this result is related to model predictive ability.
The purpose of this paper is to examine two
aspects of uncertainty analysis that have not pre-

viously been investigated. The first is the relation


between uncertainty analysis and the mean
squared error of prediction (MSEP) of a model.
MSEP is a natural, simple measure of the predictive quality of a model. It is of interest then to
show explicitly how uncertainties in input and
parameter values affect this measure. The second
goal here is to show the effect of adjusting certain
parameters on the effect of uncertainty in the
remaining parameters. Often, one wishes to improve the agreement between the model and data
by adjusting parameters (Jansen and Heuberger,
1995), but the total number of model parameters
is too large to envision adjusting them all. One
possible approach is to settle for adjusting only a
selection of parameters. We will show how this
practice changes the significance of uncertainty in
the unadjusted parameters.
In the following section, we define the different
sources of variability that will be considered. We
then present the definition of MSEP, and show
how variability contributes to MSEP. The effect
of the parameter adjustment is investigated using
a Taylor series approximation for the contribution of various sources of variability to MSEP. In
Section 3, a model of fruit growth is analyzed as
an example. The goal here is to illustrate how the
theory of Section 2 can be applied, and how to
interpret the results. The final section contains a
summary and conclusions.

2. Theory

2.1. Definition of variabilities


We begin with the model input variables, defined as the variables in the model that are imposed rather than calculated. These include initial conditions, climate variables, and site and management characteristics. We will refer to the full collection of input variables as U, so that U is a vector. We consider the situation where we do not have access to U itself but rather to estimated values Û,

Û = U + ε_U,


where ε_U is the random error, assumed to have zero expectation. For example, long-term average meteorological values used in forecasts could be the values of Û used in the model, in place of the true but unknown values U. Another example of Û values could be meteorological variables measured at some distance from the site being studied; U is then the meteorological variables at the site itself. The extent of the variability in Û is indicated by the variance–covariance matrix Σ_U = var(ε_U).
The parameters in the model may be of two fundamentally different types, depending on whether they are obtained by some process that does not involve the model, or by adjusting the model to data. We refer to the first type as literature parameters since often, though not necessarily, the values of these parameters are taken from the literature. The vector of true values of these parameters is denoted P, and the vector of estimators of these parameters is denoted P̂ = P + ε_P. We assume that E(ε_P) = 0. This assumption is in fact not very restrictive. It essentially requires that we have some precise definition of what each parameter should represent, and an unbiased estimator of each parameter. The variance–covariance matrix of P̂ is denoted Σ_P.

The second type of parameters will be referred to as adjusted parameters. The vector of true values of these parameters is denoted Q(P̂), and the vector of estimators is denoted Q̂(P̂), with Q̂(P̂) = Q(P̂) + ε_Q(P̂). The variance–covariance matrix of Q̂(P̂) is denoted Σ_Q(P̂). In general, parameter adjustment will be done by non-linear regression, and Σ_Q(P̂) will often be furnished by the fitting algorithm.

It is important to distinguish between Û, P̂ and Q̂(P̂). For different randomly chosen situations, the errors in the input variables, ε_U, are independent. On the other hand, once the literature parameter estimates are chosen, the same estimates are used for all predictions of the model. That is, for given literature parameter values, the errors ε_P are the same for all predictions. The specificity of the estimated adjusted parameters is that they depend on P̂, as indicated by the notation Q̂(P̂).


2.2. Definition of MSEP


We now introduce the mean squared error of prediction (MSEP), which is a natural measure of model quality when a major goal of the modelling effort is the prediction of some particular quantity such as total harvest weight or the value of a fruit crop. The MSEP criterion is discussed in detail by Wallach and Goffinet (1987, 1989) and by Colson et al. (1995). A general definition of MSEP is

MSEP = E{[Y* − f(Û*, P̂, Q̂(P̂))]²}.

Here Y* is the value of the output of interest for an individual chosen at random from the population of interest, and f(Û*, P̂, Q̂(P̂)) is the corresponding model prediction. Thus the quantity in braces is simply the squared difference between the true output and the predicted output, for a randomly chosen individual with some specific value of ε_U and for the model with some specific values of ε_P and ε_Q(P̂). The expectation is over all the random variables, that is, over the individuals in the population (that is, over Y*) as well as over Û*, P̂ and Q̂(P̂).

MSEP is averaged over the distribution of parameter values. The model that one uses, on the other hand, has some particular fixed choice of parameter values. We will thus also be interested in examining the effect of uncertainties in the input values, for fixed parameter values. This leads us to consider

MSEP(P̂, Q̂(P̂)) = E{[Y* − f(Û*, P̂, Q̂(P̂))]² | P̂, Q̂(P̂)}.

The notation | P̂, Q̂(P̂) means that the values of the random variables P̂ and Q̂(P̂) are fixed, so the expectation in the above expression is no longer over these variables. If there are no parameters adjusted to data, then the above expression reduces to

MSEP(P̂) = E{[Y* − f(Û*, P̂)]² | P̂}.
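In practice, MSEP for a fixed parameter value can be estimated by Monte Carlo simulation. The Python sketch below uses a deliberately simple toy system (the linear truth y = 2u + e and the model f(u, p) = p·u are illustrative assumptions, not the paper's fruit model) to estimate MSEP(P̂) in the presence of input error ε_U:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: the truth is y = 2u + e, while the model is
# f(u, p) = p*u. Both are assumptions made for illustration only.
def f(u, p):
    return p * u

def estimate_msep(p_hat, sigma_u=0.1, n=100_000):
    """Monte Carlo estimate of MSEP(p_hat) = E{[Y* - f(U_hat*, p_hat)]^2 | p_hat}."""
    u_star = rng.uniform(0.0, 1.0, size=n)             # true inputs U*
    y_star = 2.0 * u_star + rng.normal(0.0, 0.05, n)   # true outputs Y*
    u_hat = u_star + rng.normal(0.0, sigma_u, n)       # noisy inputs U_hat* = U* + eps_U
    return np.mean((y_star - f(u_hat, p_hat)) ** 2)

msep_true_p = estimate_msep(p_hat=2.0)   # correct parameter value
msep_wrong_p = estimate_msep(p_hat=2.2)  # biased parameter value
```

Even with the correct parameter value, MSEP stays above the output-noise floor because of the input error; a biased parameter value increases it further.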


2.3. Effect of uncertainty on MSEP, no adjustable parameters

It is easy to show that MSEP can be decomposed into three terms as

MSEP = L + D + G,   (1)

where

L = E{[Y* − E(Y* | U*)]²}   (population variance),
D = E{[E(Y* | U*) − E(f(Û*, P̂) | U*)]²}   (model bias),
G = E{[E(f(Û*, P̂) | U*) − f(Û*, P̂)]²}   (model variance).
(See Bunke and Droge, 1984 and Wallach and Goffinet, 1987 for similar decompositions.) The population variance term measures how variable Y* is for fixed values of the inputs U*. This term is small if the input variables in the model explain most of the variability in Y. Thus, the value of this term can be used to judge the choice of input variables (see Wallach and Goffinet, 1987, 1989), but it will not concern us here since it is independent of variability in the input variables or parameters. The model bias term measures the average squared difference between the average Y* for a given U*, and the corresponding model prediction averaged over Û* and P̂. It is a measure of how well the model equations represent Y as a function of U. If U and P enter linearly into the model, then E(f(Û*, P̂) | U*) = f(U*, P), and so the bias term, like the population variance term, is independent of the variability in Û* and in P̂. For nonlinear models, however, the model bias term will not be independent of variability. The model variance term G represents the direct effect of uncertainty in the input variables or parameters. For each value of U*, one calculates the variance of the model predictions, and then one averages over the input values for the population of interest.
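The decomposition can be checked numerically. In the sketch below the parameters are held fixed, so only input error contributes to G; the quadratic model, the linear truth and the uniform input distribution are all illustrative assumptions. L, D and G are estimated by nested Monte Carlo sampling and compared with a direct estimate of MSEP:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear model: the prediction is the squared noisy input.
def f(u_hat):
    return u_hat ** 2

sigma_e, sigma_u = 0.1, 0.2    # output-noise and input-error std devs
n_outer, n_inner = 2000, 2000  # population draws / input-error draws per U*

u = rng.uniform(0.5, 1.5, n_outer)        # true inputs U*
y = u + rng.normal(0, sigma_e, n_outer)   # true outputs Y* = U* + e

# Conditional mean of the prediction given U*: E[f(U* + eps_U) | U*].
eps = rng.normal(0, sigma_u, (n_outer, n_inner))
f_draws = f(u[:, None] + eps)
f_cond_mean = f_draws.mean(axis=1)

L = sigma_e ** 2                                    # population variance (exact here)
D = np.mean((u - f_cond_mean) ** 2)                 # model bias
G = np.mean((f_cond_mean[:, None] - f_draws) ** 2)  # model variance

msep = np.mean((y[:, None] - f_draws) ** 2)         # direct MSEP estimate
```

Up to Monte Carlo error, the three terms sum to the directly estimated MSEP.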
It is of interest to compare the effect of uncertainty on MSEP with the calculations of uncertainty analysis. First of all, there is an effect of uncertainty on the model bias term, which has no equivalent in uncertainty analysis. However, this effect is only due to the nonlinearity of the model, and may often be secondary. We will ignore this effect of variability in the rest of the discussion. The direct effect of uncertainty on MSEP is through the model variance term. This term is similar to what is calculated in uncertainty analysis, but there are important differences. First, in its most general form, uncertainty analysis calculates the full distribution of the model output variables, and not only the model variance. In our case, there is a clear justification for simplifying and considering only the model variance: MSEP depends only on this aspect of the distribution. Second, the term here involves an expectation over the population of interest. This choice of input variables is dictated by the fact that we are looking at the contribution to MSEP. Uncertainty analysis, on the other hand, does not prescribe a particular set of input conditions to be studied. Since the model variance may be very different for different values of the input variables U*, the specification of these variables is important.
To better understand what G represents, suppose for the moment that there is no random error in the parameters, and consider just the effect of variability in the estimated input values. If there were no error in the input variables, we would have Σ_U = 0 and G would be zero. Otherwise, G is necessarily positive, since it is an average of a variance. That is, if there is a distribution of estimated input values around the true values, the result is always an increase in MSEP. The same is true if there is a distribution of parameter values around the true values: MSEP is larger when averaged over the estimated values than for the true values.

There is, however, a major difference between the input variables and the parameters. If there is variability in the input variables, then we cannot avoid the effect of that variability, because for each individual in the population we will be using a value drawn at random from the distribution of the input variables. This, as we have shown, necessarily leads to an increase in MSEP, compared with the case of no variability. With respect to the parameter values, on the other hand, the model that is used involves some particular choice of parameter values. It is of theoretical interest to note that averaging over possible parameter values increases MSEP, compared with using P. However, for practical purposes, we are interested not in MSEP averaged over possible parameter values, but in MSEP(P̂), the value of MSEP for our specific choice of parameter values. Based on the above arguments, we cannot affirm that MSEP(P̂) is necessarily larger than MSEP(P). This lack of a clear result is a consequence of the fact that we have made no assumptions about the correctness of the model. If, for example, the model equations are incorrect, then it is possible that the model predictions could be improved by using incorrect parameter values rather than the correct values.

2.4. Effect of parameters adjusted to data

We use a truncated Taylor series expansion to show how the presence of adjusted parameters affects G, the contribution of model variance to MSEP. The expansion in the absence of adjusted parameters gives

f(Û*, P̂) ≈ f(U*, P) + [∂f(u, p)/∂u |_{U*, P}]′ (Û* − U*) + [∂f(u, p)/∂p |_{U*, P}]′ (P̂ − P),

where partial derivatives with respect to vectors are column vectors and a prime denotes transposition. Substituting into the expression for G now gives

G ≈ G_U + G_P,

with

G_U = E{[∂f(u, p)/∂u |_{U*, P}]′ Σ_U [∂f(u, p)/∂u |_{U*, P}]}

and

G_P = E{[∂f(u, p)/∂p |_{U*, P}]′ Σ_P [∂f(u, p)/∂p |_{U*, P}]}.

If there are adjusted parameters, on the other hand, the Taylor series expansion gives

f(Û*, P̂, Q̂(P̂)) ≈ f[U*, P, Q(P)]
  + [∂f(u, p, q)/∂u |_{U*, P, Q(P)}]′ (Û* − U*)
  + [∂f(u, p, q)/∂p |_{U*, P, Q(P)} + (∂Q(p)/∂p)′ ∂f(u, p, q)/∂q |_{U*, P, Q(P)}]′ (P̂ − P)
  + [∂f(u, p, q)/∂q |_{U*, P, Q(P)}]′ [Q̂(P) − Q(P)].

Substituting into the expression for G now leads to

G ≈ G_U + G_PA + G_Q,

with

G_PA = E{[∂f/∂p + (∂Q(p)/∂p)′ ∂f/∂q]′ Σ_P [∂f/∂p + (∂Q(p)/∂p)′ ∂f/∂q]},

where the derivatives are evaluated at U*, P, Q(P), and

G_Q = E{[∂f(u, p, q)/∂q |_{U*, P, Q(P)}]′ Σ_Q(P) [∂f(u, p, q)/∂q |_{U*, P, Q(P)}]}.
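The first-order (delta-method) expressions for G_U and G_P can be computed numerically. In this Python sketch the model f, the input distribution and the covariance matrices Σ_U and Σ_P are all illustrative assumptions; the gradients are taken by central differences, as one would do for a model without analytic derivatives:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar model with one input and two literature parameters.
def f(u, p):
    return p[0] * u + p[1] * u ** 2

P = np.array([1.0, 0.5])            # literature parameter values
Sigma_U = np.array([[0.02]])        # input-error variance
Sigma_P = np.diag([0.01, 0.004])    # literature-parameter covariance

def grad(fun, x, h=1e-6):
    """Central-difference gradient (as a 1-D array) of a scalar function."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

u_pop = rng.uniform(0.0, 2.0, 500)  # sample of true inputs U*

# G_U = E{ g_u' Sigma_U g_u } and G_P = E{ g_p' Sigma_P g_p },
# averaged over the population of inputs.
GU = np.mean([grad(lambda uu: f(uu[0], P), [u]) @ Sigma_U @
              grad(lambda uu: f(uu[0], P), [u]) for u in u_pop])
GP = np.mean([grad(lambda pp: f(u, pp), P) @ Sigma_P @
              grad(lambda pp: f(u, pp), P) for u in u_pop])
G_approx = GU + GP
```

The same numerical-gradient approach is what is used for the fruit model in Section 3.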

There are two important differences between the two expressions for G. First, in the case of adjusted parameters there is a term G_Q which represents the effect of variability in the adjusted parameters. This term is non-negative, so any uncertainty in the adjusted parameters increases MSEP. The second difference is that in the case of adjusted parameters there is the term G_PA, while the corresponding term in the absence of adjusted parameters is G_P. The term G_PA reflects two ways in which the variability in P̂ affects G. First, there is a direct effect: variations in the values of the literature parameters change the model predictions, and this affects G. Second, changing the values of the literature parameters changes the values of the adjusted parameters, and this also changes the model predictions and therefore G. Often, the two contributions to G_PA have opposite signs, and thus tend to cancel one another. The result is that G_PA will often be smaller than G_P. The basic reason is that the adjustable parameters tend to compensate for changes in the values of the literature parameters. If the values of the literature parameters change in such a way as to increase predicted values, then the adjustable parameters will tend to change in such a way as to decrease these values, in order to keep the predictions close to the data used for adjustment. Thus, the effect of variability in the literature parameters on MSEP will usually be smaller in a model with adjustable parameters than in the same model without parameter adjustment. The example in the next section illustrates this.
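This compensation mechanism can be demonstrated on a toy example (the linear model, data set and error magnitudes below are all illustrative assumptions, not the paper's fruit model). When the adjusted parameter q is re-estimated for each perturbed value of the literature parameter p, the spread of the predictions shrinks relative to keeping q fixed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: p is a "literature" parameter, q an adjusted parameter,
# and the model is f(u) = p*u + q.
u_data = rng.uniform(0.0, 2.0, 50)
y_data = 1.0 * u_data + 0.5 + rng.normal(0.0, 0.05, 50)  # truth: p = 1, q = 0.5

def fit_q(p):
    """Least-squares adjustment of q for a given p (closed form for this model)."""
    return np.mean(y_data - p * u_data)

u_new = 1.2                                 # prediction point
p_draws = 1.0 + rng.normal(0.0, 0.1, 5000)  # uncertainty in the literature parameter

pred_fixed_q = p_draws * u_new + fit_q(1.0)                             # q held fixed
pred_refit_q = p_draws * u_new + np.array([fit_q(p) for p in p_draws])  # q re-adjusted

# Re-adjusting q partially cancels the error in p, shrinking the spread
# of the predictions (the analogue of G_PA < G_P).
var_fixed = np.var(pred_fixed_q)
var_refit = np.var(pred_refit_q)
```

An overestimate of p is compensated by a smaller fitted q, so the predictions stay close to the calibration data.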

3. A model of sugar accumulation in fruit as an example

3.1. The model


We will apply the theory presented above to the model for sugar accumulation in peaches of Genard and Souty (1996). We consider a simplified version of this model which includes only the sucrose component, sucrose being the major sugar at maturity.

The model first calculates the amount of carbon in sucrose in a fruit. The rate of change equation is

dC_su(t)/dt = k_1 dC_ph(t)/dt − exp[−k_2 DD(t)] C_su(t),

where t is time after the start of the calculations, C_su(t) is the amount of carbon in sucrose in a fruit, C_ph(t) is carbon in the phloem, k_1 and k_2 are parameters, and DD(t) is the number of degree days after full bloom, calculated using a lower temperature threshold of T_0 degrees. The first term on the right represents the input of carbon from the phloem, and the second the losses of carbon due to transformations into other sugars. Overall, carbon from the phloem goes either to increase the total fruit carbon content or to respiration losses. Thus

dC_ph(t)/dt = c_t dDW(t)/dt + g dDW(t)/dt + m DW(t)[T(t) − T_0],

where DW(t) is the dry weight at time t, c_t is a parameter that represents the ratio of total fruit carbon weight to fruit dry weight, g is the growth respiration coefficient, m is the maintenance respiration coefficient, and T(t) is the temperature at time t. Dry weight is assumed to follow a logistic curve as a function of the number of days after full bloom, DAB:

DW(t) = a_1 + a_2 / (1 + exp{−a_3 [DAB(t) − DAB(0) − a_4] / a_2}),

where a_1, a_2, a_3 and a_4 are parameters, and DAB(0) is the number of days after full bloom at the time the model calculations begin. The weight of sucrose per fruit, S(t), is related to the weight of carbon in sucrose per fruit by

S(t) = C_su(t)/0.421.
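These equations can be integrated numerically. The sketch below uses a forward-Euler scheme with a daily time step and the parameter values of Table 1; the constant 25°C temperature and the start at 40 days after full bloom are assumptions made for illustration (the paper itself uses the measured 1993 Avignon temperature series):

```python
import numpy as np

# Parameter values from Table 1.
a1, a2, a3, a4 = 2.48, 26.1, 2.15, 35.3
m, T0, k1, ct, g, k2 = 5e-5, 7.0, 0.54, 0.445, 0.084, 0.00308
DAB0, n_days, T = 40.0, 60, 25.0   # assumed start, duration and temperature

def dry_weight(dab):
    """Logistic dry-weight curve DW (g) as a function of days after bloom."""
    return a1 + a2 / (1.0 + np.exp(-a3 * (dab - DAB0 - a4) / a2))

Csu, DD = 0.0, 0.0          # carbon in sucrose (g); accumulated degree days
for day in range(n_days):
    dab = DAB0 + day
    dDW = dry_weight(dab + 1) - dry_weight(dab)             # daily dry-weight change
    dCph = (ct + g) * dDW + m * dry_weight(dab) * (T - T0)  # daily phloem carbon
    Csu += k1 * dCph - np.exp(-k2 * DD) * Csu               # sucrose-carbon balance
    DD += T - T0                                            # degree days above T0

sucrose = Csu / 0.421       # g sucrose per fruit, S = Csu / 0.421
```

With a measured temperature series and the true starting date in place of the assumptions above, the same loop reproduces the model's prediction of final sucrose weight per fruit.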

3.2. Variability of parameters


We consider MSEP for the prediction of the final weight of sucrose per fruit, for one particular set of conditions studied by Genard and Souty (1996). The treatment considered is peaches on trees with a leaf-to-fruit ratio of 30, growing at Avignon, France, in the summer of 1993. We have chosen to examine the effect on MSEP of variations in just two of the literature parameters, namely c_t and g, and in the adjusted parameter k_2. The estimated values and variances of the model parameters are presented in Table 1. The variances of all the other parameters, and of the input variables (the 1993 Avignon temperatures), have been set to zero.

The estimate ĉ_t of c_t is the average of 42 measurements carried out at the same time as the experiments of Genard and Souty (1996). Let s²_c represent the sample variance calculated from these 42 measurements. Then the estimated variance of ĉ_t is s²_c/42.


The estimate ĝ of g is the result for peaches from DeJong and Goudrian (1989). The variance required here is the variance between different determinations of g, but this cannot be estimated from a single measurement. We therefore use the variance calculated from 21 literature values of g for different species (DeJong and Goudrian, 1989; Penning de Vries et al., 1989), which gives s²_g = 0.00784. This is equivalent to assuming that the variability between different measurements for peaches would be the same as that between measurements for different species (which no doubt leads to an overestimation of the variability), and that we have access to just one of these measurements (which is the case).

Since the values of the two parameters come from independent experiments, it is reasonable to assume that the errors in the two parameter values are independent. Thus the estimated variance–covariance matrix Σ̂_P is diagonal, with only two non-zero elements, equal to the estimated variances of ĉ_t and ĝ.

The only parameter used here that Genard and Souty (1996) estimated by fitting the model to data was k_2, and so the vector Q here contains just that one element. The estimated variance of this parameter, Σ̂_Q(P̂), was given by the program that furnished the maximum likelihood estimate of k_2.
Table 1
Estimated values and variances of parameters

Parameter^a    Type^b   Estimated value   Estimated variance
a_1 (g)        l        2.48              0
a_2 (g)        l        26.1              0
a_3 (g/day)    l        2.15              0
a_4 (day)      l        35.3              0
m (1/DD)       l        5×10⁻⁵            0
T_0 (°C)       l        7                 0
k_1            l        0.54              0
c_t            l        0.445             3.4×10⁻⁶
g              l        0.084             7.8×10⁻³
k_2 (1/DD)     a        0.00308           2.5×10⁻⁹

^a Units in parentheses; DD, degree days above the threshold temperature T_0.
^b l, literature parameter; a, parameter estimated by adjusting the model to data.


Table 2
Contributions to G

Parameter   Contribution to G (g²)
c_t         0.0001^a
g           0.0049^a
k_2         0.0039^b

^a The sum of these two terms is G_PA.
^b This is the value of G_Q.

3.3. Effect of variability on model variance


We used the Taylor series approximation of the previous section to obtain an estimate of G and its components; the partial derivatives were estimated numerically. The alternative approach is a Monte Carlo calculation, which involves repeated sampling from the distributions of the parameters. Iman and Helton (1988) compare the Monte Carlo calculation with the Taylor series approximation in uncertainty analysis, and conclude that the Monte Carlo technique has the best overall performance. Haness et al. (1991) also compare these methods, and conclude that the differences are minor in the particular case that they examine. In general, the Taylor series approximation should be adequate if the variances are relatively small.

Consider first the effect of uncertainty in the literature parameters on G. From Table 2, the estimated contributions of uncertainty in the literature parameters c_t and g are, respectively, 0.0001 g² and 0.0049 g², for a total of Ĝ_PA = 0.0050 g². The two contributions are additive because of the assumptions inherent in the Taylor series approximation that we have used. We could reduce the uncertainty in these parameters to zero by carrying out a very large number of additional measurements of each. With perfect estimates of the two parameters, G_PA would be zero, and so the estimated average reduction in MSEP would be 0.0050 g². In the present case this reduction is small (the root mean square error is √Ĝ_PA = 0.07 g, compared with the model prediction of 8 g sucrose per fruit), and so additional measurements do not seem warranted.


We have discussed the effect of reducing the variances to zero. We can also easily consider the effect of making any fixed number of additional measurements. Suppose, for example, that one increased the number of measurements of g from 1 to 4. The contribution to Ĝ_PA is proportional to the variance of ĝ, and this variance is inversely proportional to the number of measurements. Thus, multiplying the number of measurements by 4 would reduce the contribution to Ĝ_PA by a factor of 4, from the original value of 0.0049 g² to 0.0012 g².
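This is just the 1/n scaling of the variance of a mean, and can be written out directly (the figures are those quoted above for g):

```python
# The contribution of a literature parameter to G_PA scales with its
# estimated variance, and the variance of a mean of n independent
# measurements is s^2/n.
contrib_1 = 0.0049      # contribution to G_PA of g with one measurement (g^2)

def contribution(n_measurements):
    """Contribution to G_PA if g were the mean of n independent measurements."""
    return contrib_1 / n_measurements

print(contribution(4))  # -> 0.001225, i.e. the 0.0012 g^2 quoted in the text
```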
The same reasoning can be applied to the adjusted parameter. The estimated root mean square error due to the uncertainty in the adjusted parameter k_2 is √Ĝ_Q = 0.06 g. Here also, in the specific case of the example, extra measurements do not seem justified.

If there are no adjusted parameters, then the contribution that uncertainty in the literature parameters makes to the model variance is given by G_P. In the present case, if k_2 is fixed at its estimated value of 0.00308, then the value of Ĝ_P due to uncertainty in the two literature parameters c_t and g, calculated using the Taylor series approximation, is Ĝ_P = 0.16 g². This is 32 times as large as Ĝ_PA. Thus adjusting k_2, rather than using a fixed value, considerably reduces the effect of uncertainty in the literature parameters on MSEP, as expected.

4. Summary and conclusions


We have examined the effects of uncertainties in inputs, in parameter values from the literature, and in parameter values obtained by adjustment of the model to data, on the MSEP of a model. No assumptions are made concerning the correctness of the model equations. The uncertainty affects two different components of MSEP: the model bias component and the model variance component. The model bias is affected only because of nonlinearities in the model, while uncertainty always contributes to the model variance term.

We have shown that uncertainty in model inputs always increases the model variance contribution to MSEP. Averaging over the distribution of parameter values also invariably increases the model variance contribution to MSEP, compared with using the true parameter values. However, for any specific parameter value estimates, we cannot know whether MSEP(P̂, Q̂(P̂)), the corresponding MSEP, is larger or smaller than for the model which uses the expectation of P̂.

More precise measurements of the input variables are worthwhile if G_U, the contribution of input variability to model variance, is large compared with the prediction error that one is willing to accept. The usefulness of additional measurements of the literature parameters or adjusted parameters depends on MSEP. If the contributions of literature and adjusted parameter uncertainty to MSEP are small compared with the acceptable prediction error, then additional measurements are not worthwhile. If these values are large, then one should examine MSEP(P̂, Q̂(P̂)). If this is acceptably small, once again additional measurements are not worthwhile. It is only if both the contributions of parameter uncertainty and MSEP(P̂, Q̂(P̂)) are large that additional measurements could be useful. It must be re-emphasized, however, that there is no guarantee that in any particular case additional measurements will improve the MSEP.

We have also studied the effect of adjustable parameters on the effect of uncertainty in the literature parameters. It is sometimes argued that a truly mechanistic model should not have any adjustable parameters. However, it is often the case that such models contain a fairly large number of parameters, and while the meaning of each parameter may be well defined, there will normally be some uncertainty in the values. The cumulative result of these uncertainties may be quite large (Metselaar and Jansen, 1995). This may lead one to introduce some adjustable parameters (Jansen and Heuberger, 1995). We have explicitly shown here the advantage of this approach. The estimated contribution of literature parameter uncertainty to MSEP depends on whether or not the model has adjustable parameters. In general, the presence of adjustable parameters is expected to decrease the effect of uncertainty in the literature parameters on MSEP.

References

Aggarwal, P.K., 1995. Uncertainties in crop, soil and weather inputs used in growth models: implications for simulated outputs and their applications. Agric. Syst. 48, 361–384.
Bunke, O., Droge, B., 1984. Estimators of the mean squared error of prediction in linear regression. Technometrics 26, 145–155.
Colson, J., Wallach, D., Denis, J.B., Jones, J.W., Bouniols, A., 1995. The mean squared error of yield prediction by SOYGRO. Agron. J. 87, 397–402.
DeJong, T.M., Goudrian, J., 1989. Modeling peach fruit growth and carbohydrate requirements: reevaluation of the double sigmoid growth pattern. J. Am. Soc. Hort. Sci. 114, 800–804.
Friend, A.D., 1995. PGEN: an integrated model of leaf photosynthesis, transpiration and conductance. Ecol. Model. 77, 233–255.
Genard, M.M., Souty, M., 1996. Modeling the peach sugar contents in relation to fruit growth. J. Am. Soc. Hort. Sci. 121, 1122–1131.
Haness, S.J., Roberts, L.A., Warwick, J.J., Cale, W.G., 1991. Testing the utility of first order uncertainty analysis. Ecol. Model. 58, 1–23.
Iman, R.L., Helton, J.C., 1988. An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Anal. 8, 71–90.
Janssen, P.H.M., Heuberger, P.S.C., 1995. Calibration of process-oriented models. Ecol. Model. 83, 55–66.
Metselaar, K., Jansen, M.J.W., 1995. Evaluating parameter uncertainty in crop growth models. IMACS/IFAC 1st Int. Symp. on Mathematical Modelling and Simulation in Agriculture and Bio-industries, Brussels, May 1995.
Penning de Vries, F.W.T., Jansen, D.M., ten Berge, H.F.M., Bakema, A., 1989. Simulation of Ecophysiological Processes of Growth in Several Annual Crops. Pudoc, Wageningen, 271 pp.
Rossing, W.A.H., Daamen, R.A., Jansen, M.J.W., 1994a. Uncertainty analysis applied to supervised control of aphids and brown rust in winter wheat. 1. Quantification of uncertainty in cost-benefit calculations. Agric. Syst. 44, 419–448.
Rossing, W.A.H., Daamen, R.A., Jansen, M.J.W., 1994b. Uncertainty analysis applied to supervised control of aphids and brown rust in winter wheat. 2. Relative importance of different components of uncertainty. Agric. Syst. 44, 449–460.
Salam, M.N., Street, P.R., Jones, J.G.W., 1994. Potential production of Boro rice in the Haor region of Bangladesh. 1. The simulation model, validation and sensitivity analysis. Agric. Syst. 46, 257–278.
Silberbush, M., Barber, S.A., 1983. Sensitivity analysis of parameters used in simulating K uptake with a mechanistic mathematical model. Agron. J. 75, 851–854.
Teo, Y.H., Beyrouty, C.A., Gbur, E.E., 1995. Evaluation of a model to predict nutrient uptake by field-grown rice. Agron. J. 87, 7–12.
Thornley, J.H.M., Johnson, I.R., 1990. Plant and Crop Modelling. Clarendon Press, Oxford, 669 pp.
Wallach, D., Goffinet, B., 1987. Mean squared error of prediction in models for studying ecological and agronomic systems. Biometrics 43, 561–573.
Wallach, D., Goffinet, B., 1989. Mean squared error of prediction as a criterion for evaluating and comparing system models. Ecol. Model. 44, 299–306.
de Wit, C.T., Goudriaan, J., 1978. Simulation of Ecological Processes. Pudoc, Wageningen, 175 pp.
