DK5985 ch11
Statistical Inferences
where A* is matrix A evaluated at k*. It should be noted that for linear least
squares matrix A is independent of the parameters while this is clearly not the case
for nonlinear least squares problems. The required estimate of the variance \sigma_\varepsilon^2 is
obtained from
\hat{\sigma}_\varepsilon^2 = \frac{S(\mathbf{k}^*)}{(d.f.)} = \frac{S(\mathbf{k}^*)}{Nm - p} \qquad (11.2)
where (d.f.) = Nm - p are the degrees of freedom, namely the total number of measurements minus the number of unknown parameters.
The above expressions for COV(\mathbf{k}^*) and \hat{\sigma}_\varepsilon^2 are valid if the statistically correct choice of the weighting matrix Q_i (i=1,...,N) is used in the formulation of the problem. Namely, if the errors in the response variables (\varepsilon_i, i=1,...,N) are normally distributed with zero mean and covariance matrix

COV(\varepsilon_i) = \sigma_\varepsilon^2 \mathbf{M}_i

we should use [\mathbf{M}_i]^{-1} as the weighting matrix Q_i, where the matrices \mathbf{M}_i, i=1,...,N are known whereas the scaling factor, \sigma_\varepsilon^2, could be unknown. Based on the structure of \mathbf{M}_i we arrive at the various cases of least squares estimation (Simple LS, Weighted LS or Generalized LS) as described in detail in Chapter 2.
Although the computation of COV(\mathbf{k}^*) is a simple extra step after convergence of the Gauss-Newton method, we are not obliged to use the Gauss-Newton method for the search of the best parameter values. Once \mathbf{k}^* has been obtained using any search method, one can proceed and compute the sensitivity coefficients by setting up matrix \mathbf{A} and thus quantify the uncertainty in the estimated parameter values by estimating COV(\mathbf{k}^*).
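The procedure just described can be sketched in a few lines of code. The snippet below is a minimal illustration, not the book's own software: it assumes a scalar-response model y = f(x, k), simple least squares (Q_i = I), and approximates the sensitivity coefficients by forward differences; the function name `covariance_of_estimates` and the step size `h` are our own choices.

```python
import numpy as np

def covariance_of_estimates(f, k_star, x_data, y_data, h=1e-6):
    """Estimate COV(k*) for a scalar-response model y = f(x, k).

    Matrix A is built from forward-difference sensitivity coefficients
    (simple least squares, Q_i = I); sigma_eps^2 comes from Eq. 11.2
    with m = 1 response variable.
    """
    N, p = len(x_data), len(k_star)
    f0 = np.array([f(x, k_star) for x in x_data])
    # Sensitivity matrix G: G[i, j] = df(x_i, k)/dk_j evaluated at k*
    G = np.empty((N, p))
    for j in range(p):
        k_pert = np.array(k_star, dtype=float)
        k_pert[j] += h
        fj = np.array([f(x, k_pert) for x in x_data])
        G[:, j] = (fj - f0) / h
    A = G.T @ G                        # A* = sum_i G_i^T G_i
    S = np.sum((y_data - f0) ** 2)     # S(k*)
    sigma2 = S / (N - p)               # Eq. 11.2 (d.f. = Nm - p, m = 1)
    return sigma2 * np.linalg.inv(A)   # COV(k*)
```

Note that k* can come from any optimizer; only the converged parameter values are needed to set up G and A.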
Approximate inference regions for nonlinear models are defined by analogy
to the linear models. In particular, the (1-\alpha)100\% joint confidence region for the parameter vector \mathbf{k} is described by the ellipsoid

[\mathbf{k} - \mathbf{k}^*]^T \mathbf{A}^* [\mathbf{k} - \mathbf{k}^*] \le p\,\hat{\sigma}_\varepsilon^2\, F^{\alpha}_{p,\,Nm-p}

or, equivalently,

[\mathbf{k} - \mathbf{k}^*]^T [COV(\mathbf{k}^*)]^{-1} [\mathbf{k} - \mathbf{k}^*] \le p\, F^{\alpha}_{p,\,Nm-p}

where F^{\alpha}_{p,\,Nm-p} is obtained from the tables of the F-distribution. The standard error of parameter k_j, \hat{\sigma}_{k_j}, is the square root of the j-th diagonal element of COV(\mathbf{k}^*), i.e.,

\hat{\sigma}_{k_j} = \hat{\sigma}_\varepsilon \sqrt{\left\{[\mathbf{A}^*]^{-1}\right\}_{jj}}
The computation of the above surface in the parameter space is not trivial. For the two-parameter case (p=2), the joint confidence region on the k_1-k_2 plane can be determined by using any contouring method. The contour line is approximated from many function evaluations of S(\mathbf{k}) over a dense grid of (k_1, k_2) values.
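A brute-force sketch of this grid-based contouring follows. It is illustrative only: the contour level S(\mathbf{k}^*)[1 + p/(Nm-p)\,F^{\alpha}_{p,Nm-p}] is one common choice for the joint confidence region, SciPy supplies the F quantile, and the function name and interface are hypothetical.

```python
import numpy as np
from scipy.stats import f as f_dist

def confidence_region_mask(S, k_star, k1_grid, k2_grid, Nm, alpha=0.05):
    """Boolean mask of the (1-alpha)100% joint confidence region for p = 2.

    A grid point (k1, k2) belongs to the region when S(k) lies below the
    contour level S(k*) * (1 + p/(Nm - p) * F(alpha; p, Nm - p)).
    """
    p = 2
    F_crit = f_dist.ppf(1.0 - alpha, p, Nm - p)
    level = S(k_star) * (1.0 + p / (Nm - p) * F_crit)
    K1, K2 = np.meshgrid(k1_grid, k2_grid)
    # Dense evaluation of the objective function over the grid
    S_vals = np.array([[S(np.array([a, b])) for a, b in zip(r1, r2)]
                       for r1, r2 in zip(K1, K2)])
    return S_vals <= level, K1, K2
```

The mask can then be passed to any contour-plotting routine to trace the boundary of the region.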
f(\mathbf{x}_0, \mathbf{k}^*) - t_{\alpha/2}\,\hat{\sigma}_{\hat{y}_0} \le \mu_{y_0} \le f(\mathbf{x}_0, \mathbf{k}^*) + t_{\alpha/2}\,\hat{\sigma}_{\hat{y}_0} \qquad (11.8)
\hat{y}_0(\mathbf{k}) = f(\mathbf{x}_0, \mathbf{k}^*) + \left(\frac{\partial f^T}{\partial \mathbf{k}}\right)^T [\mathbf{k} - \mathbf{k}^*] \qquad (11.9)
with the partial derivative (\partial f^T/\partial \mathbf{k})^T evaluated at \mathbf{x}_0 and \mathbf{k}^*. Taking variances from both sides we have
COV(\hat{y}_0) = \left(\frac{\partial f^T}{\partial \mathbf{k}}\right)^T COV(\mathbf{k}^*) \left(\frac{\partial f^T}{\partial \mathbf{k}}\right) \qquad (11.10a)

or

COV(\hat{y}_0) = \hat{\sigma}_\varepsilon^2 \left(\frac{\partial f^T}{\partial \mathbf{k}}\right)^T [\mathbf{A}^*]^{-1} \left(\frac{\partial f^T}{\partial \mathbf{k}}\right) \qquad (11.10b)
The standard prediction error of \hat{y}_{j0}, \hat{\sigma}_{\hat{y}_{j0}}, is the square root of the j-th diagonal element of COV(\hat{y}_0), namely,

\hat{\sigma}_{\hat{y}_{j0}} = \hat{\sigma}_\varepsilon \sqrt{\left\{\left(\frac{\partial f^T}{\partial \mathbf{k}}\right)^T [\mathbf{A}^*]^{-1} \left(\frac{\partial f^T}{\partial \mathbf{k}}\right)\right\}_{jj}} \qquad (11.11)
Equation 11.8 represents the confidence interval for the mean expected response rather than a future observation (future measurement) of the response variable, \hat{y}_0. In this case, besides the uncertainty in the estimated parameters, we must include the uncertainty due to the measurement error (\varepsilon_0). The (1-\alpha)100\% confidence interval of \hat{y}_{j0} is
\hat{y}_{j0}(\mathbf{x}_0, \mathbf{k}^*) - t_{\alpha/2}\,\hat{\sigma}_{y_{j0}} \le y_{j0} \le \hat{y}_{j0}(\mathbf{x}_0, \mathbf{k}^*) + t_{\alpha/2}\,\hat{\sigma}_{y_{j0}} \qquad (11.12)

where

\hat{\sigma}_{y_{j0}} = \hat{\sigma}_\varepsilon \sqrt{1 + \left\{\left(\frac{\partial f^T}{\partial \mathbf{k}}\right)^T [\mathbf{A}^*]^{-1} \left(\frac{\partial f^T}{\partial \mathbf{k}}\right)\right\}_{jj}} \qquad (11.13)
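The two intervals above differ only in whether the measurement-error variance is added to the parameter-induced variance. As a sketch (not the book's code), the snippet below evaluates both for a scalar-response model, using forward-difference sensitivities and SciPy's t quantile; the function name and argument list are our own.

```python
import numpy as np
from scipy.stats import t as t_dist

def prediction_intervals(f, k_star, cov_k, sigma2, x0, dof, alpha=0.05, h=1e-6):
    """Confidence interval for the mean response (Eq. 11.8) and for a
    future observation (Eq. 11.12) of a scalar model y = f(x, k)."""
    p = len(k_star)
    f0 = f(x0, k_star)
    # Sensitivity vector df/dk at (x0, k*) by forward differences
    g = np.empty(p)
    for j in range(p):
        kp = np.array(k_star, dtype=float)
        kp[j] += h
        g[j] = (f(x0, kp) - f0) / h
    var_mean = g @ cov_k @ g          # parameter uncertainty only (Eq. 11.10a)
    var_future = sigma2 + var_mean    # adds measurement error (cf. Eq. 11.13)
    t_crit = t_dist.ppf(1.0 - alpha / 2.0, dof)
    ci_mean = (f0 - t_crit * np.sqrt(var_mean),
               f0 + t_crit * np.sqrt(var_mean))
    ci_future = (f0 - t_crit * np.sqrt(var_future),
                 f0 + t_crit * np.sqrt(var_future))
    return ci_mean, ci_future
```

As expected, the interval for a future observation is always wider than that for the mean response.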
Next let us turn our attention to models described by a set of ordinary differential equations. We are interested in establishing confidence intervals for each of the response variables y_j, j=1,...,m at any time t=t_0. The linear approximation of the output vector at time t_0,

\hat{\mathbf{y}}(t_0, \mathbf{k}) = \mathbf{C}\mathbf{x}(t_0, \mathbf{k}^*) + \mathbf{C}\,\mathbf{G}(t_0)\,[\mathbf{k} - \mathbf{k}^*]

yields the covariance matrix of the predicted response,

COV(\hat{\mathbf{y}}(t_0)) = \hat{\sigma}_\varepsilon^2\, \mathbf{C}\,\mathbf{G}(t_0)\,[\mathbf{A}^*]^{-1}\,\mathbf{G}^T(t_0)\,\mathbf{C}^T \qquad (11.16)
with the sensitivity coefficients matrix \mathbf{G}(t_0) evaluated at \mathbf{k}^*. The estimated standard prediction error of \hat{y}_j(t_0) is obtained as the square root of the j-th diagonal element of COV(\hat{\mathbf{y}}(t_0)), i.e.,
\hat{\sigma}_{y_j}(t_0) = \hat{\sigma}_\varepsilon \sqrt{\left\{\mathbf{C}\,\mathbf{G}(t_0)\,[\mathbf{A}^*]^{-1}\,\mathbf{G}^T(t_0)\,\mathbf{C}^T\right\}_{jj}} \qquad (11.17)
The (1-\alpha)100\% confidence interval of y_j(t_0) is

y_j(t_0, \mathbf{k}^*) - t_{\alpha/2}\,\hat{\sigma}_{y_j}(t_0) \le y_j(t_0) \le y_j(t_0, \mathbf{k}^*) + t_{\alpha/2}\,\hat{\sigma}_{y_j}(t_0) \qquad (11.19)

with \nu = Nm - p degrees of freedom for the t-distribution.
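For ODE models the sensitivity matrix G(t_0) is itself the solution of the sensitivity equations; a simple alternative, sketched below for illustration only, is to approximate G(t_0) by re-integrating the state equations with perturbed parameters. The snippet assumes an identity output matrix (C = I), uses SciPy's `solve_ivp`, and takes COV(k*) as input (so \hat{\sigma}_\varepsilon^2 is already absorbed in it); all names are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ode_prediction_std(rhs, x0_state, t0, k_star, cov_k, h=1e-6):
    """Standard prediction error of each state at t0 for an ODE model
    dx/dt = rhs(t, x, k), via finite-difference sensitivities G(t0).

    Returns sqrt(diag(G COV(k*) G^T)), i.e. the analogue of Eq. 11.17
    with output matrix C = I.
    """
    p = len(k_star)
    sol0 = solve_ivp(rhs, (0.0, t0), x0_state, args=(k_star,), rtol=1e-8).y[:, -1]
    G = np.empty((len(sol0), p))
    for j in range(p):
        kp = np.array(k_star, dtype=float)
        kp[j] += h
        solj = solve_ivp(rhs, (0.0, t0), x0_state, args=(kp,), rtol=1e-8).y[:, -1]
        G[:, j] = (solj - sol0) / h   # column j of the sensitivity matrix
    return np.sqrt(np.diag(G @ cov_k @ G.T))
```

For production use the sensitivity equations should be integrated together with the state equations, as done by the Gauss-Newton implementation discussed earlier in the book.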
In this case we assume that we know precisely the value of the standard experimental error in the measurements (\sigma_\varepsilon). Using Equation 11.2 we obtain an estimate of the experimental error variance under the assumption that the model is adequate. Therefore, to test whether the model is adequate we simply need to test the hypothesis
H_0: \sigma^2_{model} = \sigma^2_\varepsilon

If \chi^2_{data} > \chi^2_{\nu=(Nm-p),\,1-\alpha} \;\Rightarrow\; Reject H_0

where

\chi^2_{data} = \frac{S(\mathbf{k}^*)}{\sigma^2_\varepsilon}

and \chi^2_{\nu=(Nm-p),\,1-\alpha} is obtained from the tables of the \chi^2-distribution with degrees of freedom \nu = (Nm - p).
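The test reduces to a one-line comparison once the quantile is available. A minimal sketch using SciPy's chi-square quantile function (the helper name is our own):

```python
from scipy.stats import chi2

def adequacy_test_chi2(S_kstar, sigma_eps2, Nm, p, alpha=0.05):
    """Chi-square adequacy test when sigma_eps is known exactly.

    chi2_data = S(k*) / sigma_eps^2 is compared with the (1 - alpha)
    quantile of the chi-square distribution with Nm - p degrees of
    freedom; returns True when H0 is rejected (model inadequate).
    """
    chi2_data = S_kstar / sigma_eps2
    chi2_crit = chi2.ppf(1.0 - alpha, Nm - p)
    return chi2_data > chi2_crit
```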
When an independent estimate of the experimental error variance is available from n repeated measurements of the response variable, it is computed as

\hat{\sigma}^2_\varepsilon = \frac{1}{n-1} \sum_{j=1}^{n} (y_j - \bar{y})^2 \qquad (11.22)

where \bar{y} is the sample mean,

\bar{y} = \frac{1}{n} \sum_{j=1}^{n} y_j \qquad (11.23)
Again, we test the hypothesis at any desirable level of significance, for example \alpha = 0.05,

H_0: \sigma^2_{model} = \sigma^2_\varepsilon

If F_{data} > F^{1-\alpha}_{\nu_1=(Nm-p),\,\nu_2=n-1} \;\Rightarrow\; Reject H_0
where

F_{data} = \frac{\hat{\sigma}^2_{model}}{\hat{\sigma}^2_\varepsilon} \qquad (11.24)

and F^{1-\alpha}_{\nu_1=(Nm-p),\,\nu_2=n-1} is obtained from the tables of the F-distribution.

Copyright © 2001 by Taylor & Francis Group, LLC
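The F-test can be sketched in the same spirit as the chi-square test above; again the helper name is hypothetical and SciPy supplies the quantile:

```python
from scipy.stats import f as f_dist

def adequacy_test_F(S_kstar, Nm, p, sigma_eps2_hat, n, alpha=0.05):
    """F-test of model adequacy against an independent variance estimate
    obtained from n repeated measurements (cf. Eqs. 11.22-11.24).

    Returns True when H0 is rejected (model inadequate).
    """
    sigma_model2 = S_kstar / (Nm - p)        # Eq. 11.2
    F_data = sigma_model2 / sigma_eps2_hat   # Eq. 11.24
    F_crit = f_dist.ppf(1.0 - alpha, Nm - p, n - 1)
    return F_data > F_crit
```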
where \mathbf{M}_i are known matrices. Actually, quite often we can further assume that the matrices \mathbf{M}_i, i=1,...,N are the same and equal to matrix \mathbf{M}.
An independent estimate of COV(\varepsilon), \hat{\Sigma}_\varepsilon, that is required for the adequacy tests can be obtained by performing N_R repeated experiments as
NR