Chapter 13
Introduction to Nonlinear Regression
The linear regression model is
Yi = f(Xi, β) + εi
   = X′i β + εi
   = β0 + β1 Xi1 + · · · + βp−1 Xi,p−1 + εi
whereas the nonlinear regression model, with parameters γ, is
Yi = f(Xi, γ) + εi
In both the linear and nonlinear cases, the error terms εi are often (but not always)
independent normal random variables with constant variance. The expected value in
the linear case is
E{Y} = f(Xi, β) = X′i β
and in the nonlinear case,
E{Y} = f(Xi, γ)
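As a quick illustration, here is a minimal Python sketch (ours, not part of the notes) contrasting the two forms: the linear mean function is a matrix product in the parameters β, while a nonlinear mean function, such as an exponential one, cannot be written that way.

```python
import numpy as np

# Linear mean function: E{Y} = X'beta, a matrix product in the parameters.
def linear_mean(X, beta):
    return X @ beta

# A nonlinear mean function, e.g. exponential growth: E{Y} = g0 * exp(g1 * x).
# No rearrangement writes this as a linear combination of (g0, g1).
def nonlinear_mean(x, g0, g1):
    return g0 * np.exp(g1 * x)

x = np.arange(5, dtype=float)
X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
print(linear_mean(X, np.array([1.0, 2.0])))    # linear in beta
print(nonlinear_mean(x, 1.0, 0.5))             # nonlinear in gamma
```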
Exercise 13.1 (Linear and Nonlinear Regression Models) Identify whether the
following regression models are linear, intrinsically linear (nonlinear, but easily
transformed into linear form), or nonlinear.
1. Yi = β0 + β1 Xi1 + εi
This regression model is linear / intrinsically linear / nonlinear
2. Yi = β0 + β1 √Xi1 + εi
This regression model is linear / intrinsically linear / nonlinear
because
Yi = β0 + β1 √Xi1 + εi
   = β0 + β1 X′i1 + εi
where X′i1 = √Xi1.
3. ln Yi = β0 + β1 Xi1 + εi
This regression model is linear / intrinsically linear / nonlinear
because
ln Yi = β0 + β1 Xi1 + εi
Y′i = β0 + β1 Xi1 + εi
where Y′i = ln Yi.
4. Yi = γ0 + γ1 Xi1 + γ2 Xi2 + εi
This regression model is linear / intrinsically linear / nonlinear
6. Yi = γ0 [exp(γ1 Xi)] + εi
This regression model is linear / intrinsically linear / nonlinear
because
f(Xi, γ) = γ0 [exp(γ1 Xi)]
ln f(Xi, γ) = ln [γ0 [exp(γ1 Xi)]]
Y′i = ln γ0 + ln[exp(γ1 Xi)] = ln γ0 + γ1 Xi
Y′i = γ′0 + γ1 Xi
where Y′i = ln f(Xi, γ) and γ′0 = ln γ0.
7. f(Xi, γ) = γ0 + (γ1 / (γ2 γ3)) Xi
This regression model is linear / intrinsically linear / nonlinear
even though
f(Xi, γ) = γ0 + (γ1 / (γ2 γ3)) Xi
Yi = γ0 + γ′1 Xi
where γ′1 = γ1 / (γ2 γ3); that is, three parameters have been condensed into one.
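To make the "intrinsically linear" idea concrete, here is a hedged Python sketch (our illustration, on simulated data): the exponential model Yi = γ0 exp(γ1 Xi) with multiplicative error becomes an ordinary linear regression after taking logarithms.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 30)
# Exponential model with multiplicative error: Y = g0 * exp(g1 * x) * error.
y = 2.0 * np.exp(0.3 * x) * rng.lognormal(0.0, 0.05, x.size)

# Taking logs gives an ordinary linear regression: ln Y = ln g0 + g1 * x.
g1_hat, ln_g0_hat = np.polyfit(x, np.log(y), 1)   # returns [slope, intercept]
print(np.exp(ln_g0_hat), g1_hat)                  # close to (2.0, 0.3)
```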
13.2 Least Squares Estimation
The least squares method minimizes the sum of squared deviations of the observed
Yi from their expected values for the linear regression model,
Y = Xβ + ε,
with respect to the linear regression parameters β0, β1, . . . , βp−1, and so gives the
following (analytically derived) estimators,
b = (X′X)−1 X′Y
2 Another estimation method is the maximum likelihood method, which involves maximizing the
likelihood function of the linear regression model with respect to the parameters. If the error terms,
εi, are independent, identically distributed normal random variables, then the least squares and
maximum likelihood methods give the same estimators.
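A minimal numpy sketch of the analytic estimator b = (X′X)−1 X′Y above (simulated data; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])  # intercept + predictor
Y = X @ np.array([3.0, 1.5]) + rng.normal(0.0, 1.0, n)

# Analytic least squares estimator: b = (X'X)^{-1} X'Y.
# (np.linalg.lstsq is numerically safer; solve() mirrors the formula above.)
b = np.linalg.solve(X.T @ X, X.T @ Y)
print(b)   # close to (3.0, 1.5)
```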
In a similar way, the least squares estimation of the nonlinear regression model,
Yi = f(Xi, γ) + εi,
minimizes the sum of squared deviations with respect to the parameters γ; unlike the
linear case, however, the resulting normal equations typically cannot be solved
analytically, so the estimates are obtained by numerical search. Among the nonlinear
models considered for the reading ability versus illumination data are:
1. Exponential Model, Yi = γ2(1 − exp(−γ1 Xi)) + γ0 exp(−γ1 Xi) + εi
[Figure: "simple bounded" exponential curve, reading ability versus illumination]
2. Logistic Model, Yi = γ0 / (1 + γ1 exp(−γ2 Xi)) + εi
[Figure: logistic curve, reading ability versus illumination]
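If SciPy is available (an assumption; the notes themselves use a TI calculator program), the logistic model can be fit by numerical least squares with scipy.optimize.curve_fit. The starting values below are rough guesses of ours, and the data are the reading ability values tabulated later in these notes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Reading ability versus illumination (the ten distinct levels used below).
x = np.arange(1.0, 11.0)
y = np.array([70, 70, 75, 88, 91, 94, 100, 92, 90, 85], dtype=float)

# Logistic mean function: E{Y} = g0 / (1 + g1 * exp(-g2 * x)).
def logistic(x, g0, g1, g2):
    return g0 / (1.0 + g1 * np.exp(-g2 * x))

g, cov = curve_fit(logistic, x, y, p0=[93.0, 0.5, 0.5])  # p0: our rough guesses
print(g)                       # estimates (g0, g1, g2)
print(np.sqrt(np.diag(cov)))   # large-sample standard errors, e.g. s{g0}
```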
It is often not easy to add predictor variables to, or delete them from, nonlinear
regression models; in other words, model building is difficult. However, it is possible
to perform diagnostic tests to check for correlated errors or nonconstant error
variance, and to check for lack of fit.
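As a sketch of such diagnostics (our illustration, and deliberately crude given only ten observations), one can examine the residuals from the fitted logistic model for serial correlation and unequal spread:

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(1.0, 11.0)
y = np.array([70, 70, 75, 88, 91, 94, 100, 92, 90, 85], dtype=float)

def logistic(x, g0, g1, g2):
    return g0 / (1.0 + g1 * np.exp(-g2 * x))

g, _ = curve_fit(logistic, x, y, p0=[93.0, 0.5, 0.5])
resid = y - logistic(x, *g)

# Lag-1 residual correlation (correlated errors?) and a spread comparison
# between low and high illumination (nonconstant error variance?).
print(np.corrcoef(resid[:-1], resid[1:])[0, 1])
print(resid[:5].std(ddof=1), resid[5:].std(ddof=1))
```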
illumination, X      1    2    3    4    5    6    7    8    9    9   10
ability to read, Y  70   70   75   88   91   94  100   92   90   92   85
According to the previous analysis, the logistic regression
Yi = γ0 / (1 + γ1 exp(−γ2 Xi)) + εi
is the best of the nonlinear models that were considered. Consequently, conduct a
lack of fit test5 for this nonlinear regression.
att12-13-4-read-nonlin-lof
1. Logistic Model, Yi = γ0 / (1 + γ1 exp(−γ2 Xi)) + εi
(a) Statement.
The statement of the test is (check none, one or more):
i. H0 : E{Y} = γ0 / (1 + γ1 exp(−γ2 Xi)) versus H1 : E{Y} > γ0 / (1 + γ1 exp(−γ2 Xi)).
ii. H0 : E{Y} = γ0 / (1 + γ1 exp(−γ2 Xi)) versus H1 : E{Y} < γ0 / (1 + γ1 exp(−γ2 Xi)).
iii. H0 : E{Y} = γ0 / (1 + γ1 exp(−γ2 Xi)) versus H1 : E{Y} ≠ γ0 / (1 + γ1 exp(−γ2 Xi)).
5 Notice how an additional data point, (9, 92), has been added to the data and so there are now
two points at X = 9. This replication is necessary to conduct a lack of fit test.
(b) Test.
The test statistic is (with n = 11 observations, c = 10 distinct levels of X, and
p = 3 parameters in the logistic function)
F∗ = [SSE(R) − SSE(F)] / (dfR − dfF) ÷ [SSE(F) / dfF]
   = [(SSE − SSPE) / ((n − p) − (n − c))] ÷ [SSPE / (n − c)]
   = [SSLF / (c − p)] ÷ [SSPE / (n − c)]
   = [(243.2 − 2) / 7] ÷ [2 / 1]
   =
(circle one) 9.075 / 17.23 / 58.57.
The critical value at α = 0.01, with 7 and 1 degrees of freedom, is
(circle one) 4.83 / 5.20 / 5928
(Use PRGM INVF ENTER 7 ENTER 1 ENTER 0.99 ENTER)
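The same computation in Python, assuming SciPy (the scipy.stats.f.ppf call plays the role of the TI INVF program):

```python
from scipy.stats import f

sse, sspe = 243.2, 2.0    # SSE for the logistic fit; pure-error sum of squares
n, c, p = 11, 10, 3       # observations, distinct X levels, parameters

f_star = ((sse - sspe) / (c - p)) / (sspe / (n - c))
print(f_star)                       # about 17.23
print(f.ppf(0.99, c - p, n - c))    # F(0.99; 7, 1), about 5928
```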
(c) Conclusion.
Since the test statistic, 17.23, is smaller than the critical value, 5928, we
(circle one) accept / reject the null hypothesis that the regression function
is E{Y} = γ0 / (1 + γ1 exp(−γ2 Xi)).
where D is the matrix of partial derivatives of f(Xi, γ) with respect to the parameters
γ, evaluated at the current estimate g(0) (the Gauss-Newton linearization of the
nonlinear regression function).
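As an illustration of this linearization, here is a bare Gauss-Newton iteration of our own, with a finite-difference D and no step-halving, so convergence is not guaranteed in general:

```python
import numpy as np

def logistic(x, g):
    return g[0] / (1.0 + g[1] * np.exp(-g[2] * x))

def jacobian(x, g, h=1e-6):
    # D: n x p matrix of partial derivatives of f with respect to gamma,
    # evaluated at the current estimate g (forward differences).
    D = np.empty((x.size, g.size))
    for k in range(g.size):
        gp = g.copy()
        gp[k] += h
        D[:, k] = (logistic(x, gp) - logistic(x, g)) / h
    return D

x = np.arange(1.0, 11.0)
y = np.array([70, 70, 75, 88, 91, 94, 100, 92, 90, 85], dtype=float)

g = np.array([93.0, 0.5, 0.5])   # starting estimate g(0), our rough guess
for _ in range(20):              # Gauss-Newton updates: g += (D'D)^{-1} D'e
    D = jacobian(x, g)
    e = y - logistic(x, g)
    g = g + np.linalg.solve(D.T @ D, D.T @ e)
print(g)
```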
illumination, X      1    2    3    4    5    6    7    8    9   10
ability to read, Y  70   70   75   88   91   94  100   92   90   85
According to the previous analysis, the logistic regression
Yi = γ0 / (1 + γ1 exp(−γ2 Xi)) + εi
is the best of the nonlinear models that were considered. Consequently, determine
various intervals and conduct tests for this nonlinear regression. Assume large-sample
results hold in this case.
att12-13-4-read-nonlin-CI,test
3. Test of Single γk
Test if γ0 ≠ 93 at α = 0.05.
(a) Statement.
The statement of the test, in this case, is (circle one)
i. H0 : γ0 = 93 versus Ha : γ0 < 93
ii. H0 : γ0 = 93 versus Ha : γ0 > 93
iii. H0 : γ0 = 93 versus Ha : γ0 ≠ 93
(b) Test.
The standardized test statistic is
t∗ = (g0 − 93) / s{g0} = (93.2731 − 93) / 4.0022 ≈ 0.068
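In Python, assuming SciPy and using n − p = 10 − 3 = 7 degrees of freedom for the large-sample t approximation (our reading of the setup):

```python
from scipy.stats import t

g0, se_g0 = 93.2731, 4.0022
t_star = (g0 - 93.0) / se_g0
print(t_star)                 # about 0.068

n, p = 10, 3                  # observations and parameters in the logistic fit
print(t.ppf(0.975, n - p))    # two-sided critical value, about 2.365
```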