

shortest length. This is the interval given in (91) for which, for the example, (90) is
$$\Pr\{t > 4.30\} = 0.025 \quad \text{for } t \sim t_2.$$
Hence the symmetric interval on $b_1$ is, from (91),
$$2.08 \pm 2.40\,t_{2,\,0.025}\sqrt{0.139} = 2.08 \pm 0.89\,t_{2,\,0.025} = 2.08 \pm 0.89(4.30) = (-1.75,\ 5.91).$$
The length of this interval is $1.75 + 5.91 = 7.66$, shorter than the length of the non-symmetric interval in (98), namely $4.23 + 5.08 = 9.31$.

6. THE GENERAL LINEAR HYPOTHESIS

a. Testing linear hypotheses


The literature of linear models abounds with discussions of different kinds of hypotheses that can be of interest in widely differing fields of application. Four hypotheses of particular interest are: (i) $H\colon b = 0$, the hypothesis that all elements of $b$ are zero; (ii) $H\colon b = b_0$, the hypothesis that $b_i = b_{i0}$ for $i = 0, 1, 2, \ldots, k$, i.e., that each $b_i$ is equal to some specified value $b_{i0}$; (iii) $H\colon \lambda'b = m$, that some linear combination of the elements of $b$ equals a specified constant; and (iv) $H\colon b_q = 0$, that some of the $b_i$'s, $q$ of them where $q < k$, are zero. Although the calculations of the $F$-statistic for these hypotheses and variants of them appear, on the surface, to differ markedly from one kind of hypothesis to another, we will show that all linear hypotheses can be handled by one universal procedure. Specific hypotheses such as those listed above are then just special cases of the general procedure.
The general hypothesis we consider is
$$H\colon K'b = m,$$
where $b$, of course, is the $(k+1)$-order vector of parameters of the model; $K'$ is any matrix of $s$ rows and $k+1$ columns; and $m$ is a vector, of order $s$, of specified constants. There is only one limitation on $K'$: that it have full row rank, i.e., $r(K') = s$. This simply means that the linear functions of $b$ which form the hypothesis must be linearly independent; that is, the hypothesis must be made up of linearly independent functions of $b$ and must contain no functions which are linear combinations of others therein. This is quite reasonable because it means, for example, that if the hypothesis relates to $b_1 - b_2$ and $b_2 - b_3$ then there is no point in having it also relate, explicitly, to $b_1 - b_3$. Clearly, this condition on $K'$ is not at all restrictive in
limiting the application of the hypothesis $H\colon K'b = m$ to real problems. Furthermore, although it might seem necessary to also require that $m$ be such that the equations $K'b = m$ are consistent, this is automatically achieved by demanding that $K'$ have full row rank, for the equations $K'b = m$ are then consistent for any vector $m$.
We now develop the $F$-statistic to test the hypothesis $H\colon K'b = m$. We already have the following:
$$y \sim N(Xb,\ \sigma^2 I), \qquad \hat{b} = (X'X)^{-1}X'y \qquad\text{and}\qquad \hat{b} \sim N[b,\ (X'X)^{-1}\sigma^2].$$
Therefore
$$K'\hat{b} - m \sim N[K'b - m,\ K'(X'X)^{-1}K\sigma^2].$$
Hence, by an application of Theorem 2 of Chapter 2, the following quadratic in $K'\hat{b} - m$, using $[K'(X'X)^{-1}K]^{-1}$ as the matrix of the quadratic, has a non-central $\chi^2$-distribution: if
$$Q = (K'\hat{b} - m)'[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m)$$
then
$$Q/\sigma^2 \sim \chi^{2\prime}\{s,\ (K'b - m)'[K'(X'X)^{-1}K]^{-1}(K'b - m)/2\sigma^2\}. \qquad (99)$$
The independence of $Q$ and SSE is now shown, using Theorem 4 of Chapter 2. To do this we first express $Q$ and SSE as quadratic forms in the same normally distributed random vector, noting initially that the inverse of $K'(X'X)^{-1}K$ used in (99) exists because $K'$ has full row rank and $(X'X)^{-1}$ is positive definite. Then, on replacing $\hat{b}$ by $(X'X)^{-1}X'y$, the expression for $Q$ becomes
$$Q = [K'(X'X)^{-1}X'y - m]'[K'(X'X)^{-1}K]^{-1}[K'(X'X)^{-1}X'y - m].$$
But because $K'$ has full row rank, $(K'K)^{-1}$ exists (see the corollary of Lemma 5, Sec. 2.2). Therefore
$$K'(X'X)^{-1}X'y - m = K'(X'X)^{-1}X'[y - XK(K'K)^{-1}m],$$
and so
$$Q = [y - XK(K'K)^{-1}m]'X(X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}K'(X'X)^{-1}X'[y - XK(K'K)^{-1}m].$$
Now consider the error sum of squares
$$\text{SSE} = y'[I - X(X'X)^{-1}X']y.$$
Because the products $X'[I - X(X'X)^{-1}X']$ and $[I - X(X'X)^{-1}X']X$ are both null, SSE can be rewritten as
$$\text{SSE} = [y - XK(K'K)^{-1}m]'[I - X(X'X)^{-1}X'][y - XK(K'K)^{-1}m].$$
Both $Q$ and SSE have now been expressed as quadratics in the vector $y - XK(K'K)^{-1}m$. And although we already know that $Q/\sigma^2$ and $\text{SSE}/\sigma^2$
have $\chi^{2\prime}$-distributions, this is further seen from their being quadratics in $y - XK(K'K)^{-1}m$, which is a normally distributed vector, with the matrix of each quadratic idempotent. But, more importantly, the product of the two matrices is null:
$$[I - X(X'X)^{-1}X']\,X(X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}K'(X'X)^{-1}X' = 0.$$
Therefore, by Theorem 4 of Chapter 2, $Q$ and SSE are distributed independently. Hence
$$F(H) = \frac{Q/s}{\text{SSE}/[N - r(X)]} = \frac{Q}{s\hat{\sigma}^2} \sim F'\{s,\ N - r(X),\ (K'b - m)'[K'(X'X)^{-1}K]^{-1}(K'b - m)/2\sigma^2\} \qquad (100)$$


and under the null hypothesis $H\colon K'b = m$,
$$F(H) \sim F_{s,\,N - r(X)}.$$
Hence $F(H)$ provides a test of the hypothesis $H\colon K'b = m$. Thus the $F$-statistic for testing the hypothesis $H\colon K'b = m$ is
$$F(H) = \frac{Q/s}{\hat{\sigma}^2} = \frac{(K'\hat{b} - m)'[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m)}{s\hat{\sigma}^2} \qquad (101)$$
with $s$ and $N - r$ degrees of freedom, $s$ being the number of rows of $K'$, it being of full row rank.

The generality of this result merits emphasis: it applies for any linear hypothesis $K'b = m$, the only limitation being that $K'$ have full row rank. Other than this, $F(H)$ can be used to test any linear hypothesis whatever. No matter what the hypothesis is, it has only to be written in the form $K'b = m$, and $F(H)$ of (101) provides the test. Having once solved the normal equations for the model $y = Xb + e$ and so obtained $(X'X)^{-1}$, $\hat{b} = (X'X)^{-1}X'y$ and $\hat{\sigma}^2$, the testing of $H\colon K'b = m$ can be achieved by immediate application of $F(H)$. The appeal of this result is illustrated below in subsection c for the four hypotheses listed at the beginning of this section. Note that $\hat{\sigma}^2$ is common to every application of $F(H)$; thus, in considering different hypotheses, the only term in $F(H)$ that alters is $Q/s$.
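Because the whole procedure rests on the single statistic (101), it is easy to set down in a few lines of code. The following sketch (Python with NumPy; the function name and calling convention are ours, not anything fixed by the text) computes $Q$ and $F(H)$ for an arbitrary hypothesis $K'b = m$:

```python
import numpy as np

def general_linear_test(X, y, K, m):
    """Q of (99) and F(H) of (101) for the hypothesis K'b = m."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y                      # least squares estimator
    N, r = X.shape                                 # r = k + 1, the rank of X
    SSE = y @ y - b_hat @ (X.T @ y)                # residual sum of squares
    sigma2_hat = SSE / (N - r)                     # residual mean square
    s = K.shape[1]                                 # number of rows of K'
    d = K.T @ b_hat - m                            # K'b-hat - m
    Q = d @ np.linalg.solve(K.T @ XtX_inv @ K, d)  # numerator sum of squares
    return Q, (Q / s) / sigma2_hat                 # F(H) on (s, N - r) d.f.
```

Each of the four hypotheses listed at the start of the section is then tested by a single call with the appropriate $K$ and $m$.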
b. Estimation under the null hypothesis

When considering the hypothesis $H\colon K'b = m$ it is natural to ask, "What is the estimator of $b$ under the null hypothesis?" This might be especially pertinent following non-rejection of the hypothesis by the preceding $F$-test. The desired estimator, $\tilde{b}$ say, is readily obtainable using constrained least squares. Thus $\tilde{b}$ is derived so as to minimize $(y - Xb)'(y - Xb)$ subject to the constraint $K'b = m$.

With $2\theta'$ as a vector of Lagrange multipliers we minimize
$$(y - Xb)'(y - Xb) + 2\theta'(K'b - m)$$
with respect to the elements of $b$ and $\theta$. Differentiation with respect to these elements leads to the equations
$$X'X\tilde{b} + K\theta = X'y \qquad\text{and}\qquad K'\tilde{b} = m. \qquad (102)$$
These equations are solved as follows: from the first,
$$\tilde{b} = (X'X)^{-1}(X'y - K\theta) = \hat{b} - (X'X)^{-1}K\theta,$$
and in the second
$$K'\tilde{b} = K'\hat{b} - K'(X'X)^{-1}K\theta = m.$$
Hence
$$\theta = [K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m)$$
and so
$$\tilde{b} = \hat{b} - (X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m). \qquad (103)$$
This expression and (101) apply directly to $\hat{\beta}$ when the hypothesis is $L'\beta = m$ (see Exercise 8).
Having thus estimated $b$ under the hypothesis, we now show that the corresponding residual sum of squares is $\text{SSE} + Q$, where $Q$ is the numerator sum of squares of $F(H)$, the $F$-statistic used in testing the hypothesis in (101). The residual is
$$(y - X\tilde{b})'(y - X\tilde{b}) = [y - X\hat{b} + X(\hat{b} - \tilde{b})]'[y - X\hat{b} + X(\hat{b} - \tilde{b})] = (y - X\hat{b})'(y - X\hat{b}) + (\hat{b} - \tilde{b})'X'X(\hat{b} - \tilde{b}), \qquad (104)$$
the other terms vanishing because $X'(y - X\hat{b}) = 0$. Now from (103)
$$\hat{b} - \tilde{b} = (X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m),$$
and so, on substituting in (104),
$$\begin{aligned}
(y - X\tilde{b})'(y - X\tilde{b}) &= \text{SSE} + (K'\hat{b} - m)'[K'(X'X)^{-1}K]^{-1}K'(X'X)^{-1}X'X(X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m) \\
&= \text{SSE} + (K'\hat{b} - m)'[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m) \\
&= \text{SSE} + Q \qquad (105)
\end{aligned}$$
from (99).
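A brief numerical sketch of (103) and of the identity (105) may be helpful; the function and variable names are invented for illustration, and the data used in the check are arbitrary:

```python
import numpy as np

def constrained_ls(X, y, K, m):
    """b-tilde of (103): least squares subject to K'b = m."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y
    A = K.T @ XtX_inv @ K                              # K'(X'X)^{-1}K
    return b_hat - XtX_inv @ K @ np.linalg.solve(A, K.T @ b_hat - m)

# numerical check of (105): residual(reduced) equals SSE + Q
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3)); y = rng.normal(size=12)
K = np.array([[1.0], [-1.0], [0.0]]); m = np.array([4.0])
b_hat = np.linalg.inv(X.T @ X) @ X.T @ y
b_tilde = constrained_ls(X, y, K, m)
SSE = np.sum((y - X @ b_hat) ** 2)
d = K.T @ b_hat - m
Q = float(d @ np.linalg.solve(K.T @ np.linalg.inv(X.T @ X) @ K, d))
assert np.isclose(np.sum((y - X @ b_tilde) ** 2), SSE + Q)
```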
c. Four common hypotheses
The preceding expressions for $F(H)$ and $\tilde{b}$, namely (101) and (103), are here illustrated for four commonly occurring hypotheses.

(i) $H\colon b = 0$. Testing this hypothesis has already been considered earlier, in the analysis of variance tables. However, it illustrates the reduction of $F(H)$ to the $F$-statistic of the analysis of variance tables. To apply $F(H)$ the equations $b = 0$ have to be written as $K'b = m$: hence $K' = I$, $s = k + 1$ and $m = 0$. Thus $[K'(X'X)^{-1}K]^{-1}$ becomes $X'X$ and so
$$F(H) = \frac{\hat{b}'X'X\hat{b}}{(k+1)\hat{\sigma}^2} = \frac{\text{SSR}}{k+1}\cdot\frac{N - r}{\text{SSE}},$$
as before. Under the null hypothesis $F(H)$ is $F_{r,\,N-r}$, where $r = k + 1$.
The corresponding value of $\tilde{b}$ is, of course,
$$\tilde{b} = \hat{b} - (X'X)^{-1}[(X'X)^{-1}]^{-1}\hat{b} = 0.$$
(ii) $H\colon b = b_0$, i.e., $b_i = b_{i0}$ for all $i$. Rewriting $b = b_0$ as $K'b = m$ gives
$$K' = I, \qquad s = k + 1, \qquad m = b_0 \qquad\text{and}\qquad [K'(X'X)^{-1}K]^{-1} = X'X,$$
and so
$$F(H) = \frac{(\hat{b} - b_0)'X'X(\hat{b} - b_0)}{(k+1)\hat{\sigma}^2}. \qquad (106)$$
The numerator can be expressed alternatively as
$$(\hat{b} - b_0)'X'X(\hat{b} - b_0) = (y - Xb_0)'X(X'X)^{-1}X'X(X'X)^{-1}X'(y - Xb_0) = (y - Xb_0)'X(X'X)^{-1}X'(y - Xb_0),$$
although the form shown in (106) is probably the most suitable for computing purposes. Under the null hypothesis $F(H)$ is distributed as $F_{r,\,N-r}$, where $r = k + 1$.
In this case the estimator of $b$ under the hypothesis is
$$\tilde{b} = \hat{b} - (X'X)^{-1}[(X'X)^{-1}]^{-1}(\hat{b} - b_0) = b_0.$$
(iii) $H\colon \lambda'b = m$. Here
$$K' = \lambda', \qquad s = 1, \qquad m = m$$
and
$$F(H) = \frac{(\lambda'\hat{b} - m)'[\lambda'(X'X)^{-1}\lambda]^{-1}(\lambda'\hat{b} - m)}{\hat{\sigma}^2},$$
and because $\lambda'$ is a vector this can be rewritten as
$$F(H) = \frac{(\lambda'\hat{b} - m)^2}{\lambda'(X'X)^{-1}\lambda\,\hat{\sigma}^2}.$$
Under the null hypothesis $F(H)$ has the $F_{1,\,N-r}$-distribution. Hence
$$\frac{\lambda'\hat{b} - m}{\hat{\sigma}\sqrt{\lambda'(X'X)^{-1}\lambda}} \sim t_{N-r}.$$
This is as one would expect, since $\lambda'\hat{b}$ is normally distributed with variance $\lambda'(X'X)^{-1}\lambda\,\sigma^2$.
For this hypothesis the value of $\tilde{b}$ is
$$\tilde{b} = \hat{b} - (X'X)^{-1}\lambda[\lambda'(X'X)^{-1}\lambda]^{-1}(\lambda'\hat{b} - m) = \hat{b} - \frac{\lambda'\hat{b} - m}{\lambda'(X'X)^{-1}\lambda}\,(X'X)^{-1}\lambda.$$
At this point it is appropriate to comment on the lack of emphasis being given to the $t$-test in hypothesis testing. This is because the equivalence of $t$-statistics with $F$-statistics having 1 degree of freedom in the numerator makes it unnecessary to consider $t$-tests separately. Whenever a $t$-test might be proposed, the hypothesis to be tested can be put in the form $H\colon \lambda'b = m$ and the $F$-statistic $F(H)$ derived as here. If the $t$-statistic is insisted upon, it is then obtained as $\sqrt{F(H)}$. No further discussion of the $t$-test is therefore necessary.
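The equivalence is easily confirmed numerically; in the following sketch (names again ours) the returned $t$ satisfies $t^2 = F(H)$ for the hypothesis $\lambda'b = m$:

```python
import numpy as np

def t_statistic(X, y, lam, m):
    """t-statistic for H: lambda'b = m; its square is F(H) with s = 1."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y
    N, r = X.shape
    sigma2_hat = (y @ y - b_hat @ (X.T @ y)) / (N - r)   # residual mean square
    return (lam @ b_hat - m) / np.sqrt(sigma2_hat * (lam @ XtX_inv @ lam))
```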

(iv) $H\colon b_q = 0$, i.e., $b_i = 0$ for $i = 0, 1, 2, \ldots, q - 1$, for $q < k$. In this case
$$K' = [I_q \quad 0] \qquad\text{and}\qquad m = 0, \qquad\text{so that } s = q.$$
We write
$$b_q' = [b_0 \quad b_1 \quad \cdots \quad b_{q-1}]$$
and partition $b$, $\hat{b}$ and $(X'X)^{-1}$ accordingly:
$$b = \begin{bmatrix} b_q \\ b_p \end{bmatrix}, \qquad \hat{b} = \begin{bmatrix} \hat{b}_q \\ \hat{b}_p \end{bmatrix} \qquad\text{and}\qquad (X'X)^{-1} = \begin{bmatrix} T_{qq} & T_{qp} \\ T_{pq} & T_{pp} \end{bmatrix},$$
where $p + q = $ the order of $b$ $ = k + 1$. Then in $F(H)$ of (101)
$$K'\hat{b} = \hat{b}_q \qquad\text{and}\qquad [K'(X'X)^{-1}K]^{-1} = T_{qq}^{-1},$$
giving
$$F(H) = \frac{\hat{b}_q' T_{qq}^{-1} \hat{b}_q}{q\hat{\sigma}^2}. \qquad (107)$$
In the numerator we recognize the result [e.g., Searle (1966), Sec. 9.11] of "invert part of the inverse"; i.e., take the inverse of $X'X$ and invert that part of it which corresponds to the $b_q$ of the hypothesis $H\colon b_q = 0$. Although demonstrated here for a $b_q$ that consists of the first $q$ $b$'s in $b$, it clearly applies for any subset of $q$ $b$'s. In particular, for just one $b$, it leads to the usual $F$-test on 1 degree of freedom, equivalent to a $t$-test (see Exercise 15).
The estimator of $b$ under this hypothesis is
$$\tilde{b} = \hat{b} - (X'X)^{-1}\begin{bmatrix} I_q \\ 0 \end{bmatrix} T_{qq}^{-1}(\hat{b}_q - 0) = \hat{b} - \begin{bmatrix} T_{qq} \\ T_{pq} \end{bmatrix} T_{qq}^{-1}\hat{b}_q = \begin{bmatrix} \hat{b}_q - \hat{b}_q \\ \hat{b}_p - T_{pq}T_{qq}^{-1}\hat{b}_q \end{bmatrix} = \begin{bmatrix} 0 \\ \hat{b}_p - T_{pq}T_{qq}^{-1}\hat{b}_q \end{bmatrix};$$
i.e., the estimator of the $b$'s not in the hypothesis is $\hat{b}_p - T_{pq}T_{qq}^{-1}\hat{b}_q$.
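A sketch of the "invert part of the inverse" rule (107), where idx is a hypothetical list of the positions in $b$ of the $q$ coefficients being set to zero:

```python
import numpy as np

def subset_F(X, y, idx):
    """F(H) of (107) for H: b_i = 0, i in idx, via 'invert part of the inverse'."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y
    N, r = X.shape
    sigma2_hat = (y @ y - b_hat @ (X.T @ y)) / (N - r)
    T_qq = XtX_inv[np.ix_(idx, idx)]           # the block of (X'X)^{-1} for b_q
    b_q = b_hat[idx]
    Q = b_q @ np.linalg.solve(T_qq, b_q)       # b_q' T_qq^{-1} b_q
    return (Q / len(idx)) / sigma2_hat
```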


The expressions obtained for $F(H)$ and $\tilde{b}$ for these four hypotheses concerning $b$ are in terms of $\hat{b}$. They also apply to similar hypotheses in terms of $\hat{\beta}$ (see Exercise 7), as do analogous results for any hypothesis $L'\beta = m$ (see Exercise 8).
d. Reduced models
We now consider, in turn, the effect on the model $y = Xb + e$ of the hypotheses $K'b = m$, $K'b = 0$ and $b_q = 0$.

(i) $K'b = m$. In estimating $b$ subject to $K'b = m$ it could be said that we are dealing with a model $y = Xb + e$ on which has been imposed the limitation $K'b = m$. We refer to the model that we start with, $y = Xb + e$ without the limitation, as the full model; and the model with the limitation imposed, $y = Xb + e$ with $K'b = m$, is called the reduced model. For example, if the full model is
$$y_i = b_0 + b_1 x_{i1} + b_2 x_{i2} + b_3 x_{i3} + e_i$$
and the hypothesis is $H\colon b_1 = b_2$, the reduced model is
$$y_i = b_0 + b_1(x_{i1} + x_{i2}) + b_3 x_{i3} + e_i.$$
The meaning of $Q$ and of $\text{SSE} + Q$ is now investigated in terms of sums of squares associated with the full and reduced models. To aid description we introduce the terms reduction(full) and residual(full) for the reduction and residual sums of squares after fitting the full model:
$$\text{reduction(full)} = \text{SSR} \qquad\text{and}\qquad \text{residual(full)} = \text{SSE}.$$
Similarly
$$\text{SSE} + Q = \text{residual(reduced)}, \qquad (108)$$
as established in (105). Hence
$$Q = \text{SSE} + Q - \text{SSE} = \text{residual(reduced)} - \text{residual(full)} \qquad (109)$$
and also
$$Q = y'y - \text{SSE} - [y'y - (\text{SSE} + Q)] = \text{SSR} - [y'y - (\text{SSE} + Q)] = \text{reduction(full)} - [y'y - (\text{SSE} + Q)]. \qquad (110)$$

Comparison of (110) with (109) tempts one to conclude that $y'y - (\text{SSE} + Q)$ is reduction(reduced), the reduction in sum of squares due to fitting the reduced model. The temptation to do this is heightened by the fact that $\text{SSE} + Q$ is residual(reduced), as in (108). However, we shall show that only in special cases is $y'y - (\text{SSE} + Q)$ the reduction in sum of squares due to fitting the reduced model. It is not always so. The circumstances of these special cases are quite wide, as well as useful, but they are not universal.

First we show that $y'y - (\text{SSE} + Q)$ is not in general a sum of squares, since it can be negative: for, in
$$y'y - \text{SSE} - Q = \text{SSR} - Q = \hat{b}'X'y - (K'\hat{b} - m)'[K'(X'X)^{-1}K]^{-1}(K'\hat{b} - m), \qquad (111)$$
the second term is a positive semi-definite form. It is therefore never negative, and if one or more of the elements of $m$ are sufficiently large, that term will exceed $\hat{b}'X'y$ and (111) will be negative. Hence $y'y - (\text{SSE} + Q)$ is not in general a sum of squares.
The reason that $y'y - (\text{SSE} + Q)$ is not necessarily a reduction in sum of squares due to fitting the reduced model is that $y'y$ is not always the total sum of squares for the reduced model. For example, if the full model is
$$y_i = b_0 + b_1 x_{i1} + b_2 x_{i2} + e_i$$
and the hypothesis is $b_1 = b_2 + 4$, then the reduced model would be
$$y_i = b_0 + (b_2 + 4)x_{i1} + b_2 x_{i2} + e_i;$$
i.e.,
$$y_i - 4x_{i1} = b_0 + b_2(x_{i1} + x_{i2}) + e_i. \qquad (112)$$
The total sum of squares for this reduced model is $(y - 4x_1)'(y - 4x_1)$ and not $y'y$, and so $y'y - (\text{SSE} + Q)$ is not the reduction in sum of squares.
Furthermore, (112) is not the only reduced model, because the hypothesis $b_1 = b_2 + 4$ could just as well be used to amend the model to be
$$y_i = b_0 + b_1 x_{i1} + (b_1 - 4)x_{i2} + e_i;$$
i.e.,
$$y_i + 4x_{i2} = b_0 + b_1(x_{i1} + x_{i2}) + e_i. \qquad (113)$$

The total sum of squares will now be $(y + 4x_2)'(y + 4x_2)$. So in this case there are two reduced models, (112) and (113), and they are not identical. Hence neither are their total sums of squares, neither of which equals $y'y$. Therefore $y'y - (\text{SSE} + Q)$ is not the reduction in sum of squares due to fitting the reduced model. Indeed, by the existence of (112) and (113), there is no unique reduced model. And yet, despite this, $\text{SSE} + Q$ is the residual sum of squares for all possible reduced models; their total sums of squares and reductions in sums of squares differ from model to model, but their residual sums of squares are all the same.
The situation just described is true in general for the hypothesis $K'b = m$. Suppose $L'$ is such that
$$R = \begin{bmatrix} K' \\ L' \end{bmatrix}$$
has full rank and $R^{-1} = [P \quad S]$ is its inverse. Then the model $y = Xb + e$ can be written as
$$y = XR^{-1}Rb + e = X[P \quad S]\begin{bmatrix} K'b \\ L'b \end{bmatrix} + e = XPm + XSL'b + e;$$
i.e.,
$$y - XPm = XSL'b + e. \qquad (114)$$
This is a model in the elements of $L'b$, which represent $r - s$ linearly independent functions of the elements of $b$. But since $L'$ is arbitrary, chosen only to make $R$ non-singular, the model (114) is not unique. Despite this, it can be shown that the residual sum of squares after fitting any one of the models implicit in (114) is $\text{SSE} + Q$, and the corresponding value of the estimator of $b$ is the $\tilde{b}$ given in (103) (see Exercise 10).
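The assertion can be checked numerically by choosing any completion $L'$ that makes $R$ non-singular, fitting the model (114), and comparing its residual sum of squares with $\text{SSE} + Q$; a minimal sketch (all names ours):

```python
import numpy as np

def reduced_model_residual(X, y, K, m, L):
    """Residual sum of squares after fitting the reduced model (114)."""
    R = np.vstack([K.T, L.T])                 # R = [K' ; L'], assumed nonsingular
    PS = np.linalg.inv(R)                     # R^{-1} = [P  S]
    s = K.shape[1]
    P, S = PS[:, :s], PS[:, s:]
    y_star = y - X @ P @ m                    # left-hand side  y - XPm
    Z = X @ S                                 # carrier matrix XS for L'b
    coef, *_ = np.linalg.lstsq(Z, y_star, rcond=None)
    return np.sum((y_star - Z @ coef) ** 2)   # equals SSE + Q for every valid L'
```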
(ii) $K'b = 0$. One case in which $y'y - (\text{SSE} + Q)$ is a reduction in sum of squares due to fitting the reduced model is when $m = 0$. For then (114) becomes
$$y = XSL'b + e,$$
and so the total sum of squares for the reduced model is $y'y$, the same as that of the full model. Hence in this case
$$y'y - (\text{SSE} + Q) = \text{reduction(reduced)}. \qquad (115)$$
That it is a sum of squares, i.e., is positive semi-definite, is seen from (111), wherein putting $m = 0$ gives
$$y'y - (\text{SSE} + Q) = \hat{b}'X'y - \hat{b}'K[K'(X'X)^{-1}K]^{-1}K'\hat{b} = y'\{X(X'X)^{-1}X' - X(X'X)^{-1}K[K'(X'X)^{-1}K]^{-1}K'(X'X)^{-1}X'\}y. \qquad (116)$$
Since the matrix enclosed in braces is idempotent, it is positive semi-definite; therefore so is $y'y - (\text{SSE} + Q)$, i.e., it is a sum of squares. From (115),
$$Q = y'y - \text{SSE} - \text{reduction(reduced)}.$$
But
$$y'y - \text{SSE} = \text{SSR} = \text{reduction(full)},$$
and so
$$Q = \text{reduction(full)} - \text{reduction(reduced)}.$$
Therefore, since the sole difference between the full and reduced models is just the hypothesis, it is logical to describe $Q$ as the reduction in sum of squares due to the hypothesis.
With this description we insert the partitioning of SSR as the sum of $Q$ and $\text{SSR} - Q$ into the analysis of variance of Table 3.2, to yield Table 3.6.

TABLE 3.6.  ANALYSIS OF VARIANCE FOR TESTING THE HYPOTHESIS $K'b = 0$

  Source of Variation        Degrees of Freedom    Sum of Squares
  Regression (full model)           $r$                SSR
  Hypothesis                        $s$                $Q$
  Reduced model                     $r - s$            $\text{SSR} - Q$
  Residual error                    $N - r$            SSE
  Total                             $N$                SST

In doing so we utilize (99), which, when $m = 0$, becomes
$$Q/\sigma^2 \sim \chi^{2\prime}\{s,\ b'K[K'(X'X)^{-1}K]^{-1}K'b/2\sigma^2\}.$$
Then, because
$$(y'y - \text{SSE})/\sigma^2 \sim \chi^{2\prime}\{r,\ b'X'Xb/2\sigma^2\},$$
an application of Theorem 5 of Chapter 2 shows that
$$(\text{SSR} - Q)/\sigma^2 \sim \chi^{2\prime}\{r - s,\ b'[X'X - K[K'(X'X)^{-1}K]^{-1}K']b/2\sigma^2\}$$
and is independent of $\text{SSE}/\sigma^2$. This, of course, can also be derived directly from (116). Furthermore, the non-centrality parameter in the distribution of $\text{SSR} - Q$ can, in terms of (114), be shown to equal $b'L(S'X'XS)L'b/2\sigma^2$ (see Exercise 11). Hence, under the null hypothesis, this non-centrality parameter is zero when $L'b = 0$. Thus $\text{SSR} - Q$ forms the basis of an $F$-test for the sub-hypothesis $L'b = 0$ under the null hypothesis $K'b = 0$.
We now have the following $F$-tests:
$$\frac{\text{SSR}/r}{\text{SSE}/(N - r)} \quad\text{tests the full model,}$$
$$\frac{Q/s}{\text{SSE}/(N - r)} \quad\text{tests the hypothesis } K'b = 0,$$
and, under the null hypothesis,
$$\frac{(\text{SSR} - Q)/(r - s)}{\text{SSE}/(N - r)} \quad\text{tests the sub-hypothesis } L'b = 0.$$
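The partition of Table 3.6 and the three $F$-statistics just listed can be produced together; a short sketch under the same conventions as the earlier ones (for $m = 0$, with the dictionary keys our own labels):

```python
import numpy as np

def anova_table_3_6(X, y, K):
    """Sums of squares of Table 3.6 and the three F-statistics for H: K'b = 0."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y
    N, r = X.shape
    s = K.shape[1]
    SSR = b_hat @ (X.T @ y)                        # reduction(full)
    SSE = y @ y - SSR                              # residual(full)
    d = K.T @ b_hat
    Q = d @ np.linalg.solve(K.T @ XtX_inv @ K, d)  # reduction due to the hypothesis
    MSE = SSE / (N - r)
    return {"F full model": (SSR / r) / MSE,
            "F hypothesis": (Q / s) / MSE,
            "F sub-hypothesis": ((SSR - Q) / (r - s)) / MSE}
```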
(iii) $b_q = 0$. The most useful case of the reduced model when $m = 0$ is when $K' = [I_q \quad 0]$ for some $q < k$. The null hypothesis $K'b = m$ is then $b_q = 0$, where $b_q' = [b_0 \quad b_1 \quad \cdots \quad b_{q-1}]$, say, a subset of $q$ of the $b$'s. This situation was discussed earlier, where we found, in (107),
$$F(H) = \frac{Q}{q\hat{\sigma}^2}, \qquad\text{with}\qquad Q = \hat{b}_q' T_{qq}^{-1} \hat{b}_q,$$
involving the "invert part of the inverse" rule. Hence a special case of Table 3.6 is the analysis of variance table for testing the hypothesis $H\colon b_q = 0$, shown in Table 3.7.

TABLE 3.7.  ANALYSIS OF VARIANCE FOR TESTING THE HYPOTHESIS $b_q = 0$

  Source of Variation        Degrees of Freedom    Sum of Squares
  Full model ($b$)                  $r$             $\text{SSR} = \hat{b}'X'y$
  Hypothesis: $b_q = 0$             $q$             $Q = \hat{b}_q' T_{qq}^{-1} \hat{b}_q$
  Reduced model ($b_p$)             $r - q$         $\text{SSR} - Q$
  Residual error                    $N - r$         $\text{SSE} = \text{SST} - \text{SSR}$
  Total                             $N$             $\text{SST} = y'y$

Shown in Table 3.7 is the most direct way of computing its parts: $\text{SSR} = \hat{b}'X'y$, $Q = \hat{b}_q' T_{qq}^{-1} \hat{b}_q$, $\text{SSR} - Q$ by differencing, $\text{SST} = y'y$, and SSE by differencing. Although $\text{SSR} - Q$ is obtained most readily by differencing, it can also be expressed as $\tilde{b}_p' X_p' X_p \tilde{b}_p$ (see Exercise 12). The estimator $\tilde{b}_p$ is derived from (103) as
$$\tilde{b}_p = \hat{b}_p - T_{pq} T_{qq}^{-1} \hat{b}_q, \qquad (117)$$
using $K'(X'X)^{-1}K = T_{qq}$ as in (107).
Example. For the following data

   y    x1   x2   x3
   8     2    1    4
  10    -1    2    1
   9     1   -3    4
   6     2    1    2
  12     1    4    6

$$X'X = \begin{bmatrix} 11 & 3 & 21 \\ 3 & 31 & 20 \\ 21 & 20 & 73 \end{bmatrix}, \qquad (X'X)^{-1} = \begin{bmatrix} .2145 & .0231 & -.0680 \\ .0231 & .0417 & -.0181 \\ -.0680 & -.0181 & .0382 \end{bmatrix},$$
$$y'y = 425 \qquad\text{and}\qquad X'y = \begin{bmatrix} 39 \\ 55 \\ 162 \end{bmatrix}.$$
We consider no-intercept models only. Then
$$\hat{b}' = [-1.39 \quad 0.27 \quad 2.54]$$
and the analysis of variance is

  Source             Degrees of Freedom    Sum of Squares
  Full model                 3              SSR = 372.9
  Residual error             2              SSE =  52.1
  Total                      5              SST = 425.0
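The entries of this analysis of variance are easily reproduced; a short sketch using the data above (NumPy, with the rounded values noted in the comments):

```python
import numpy as np

# columns x1, x2, x3 of the example (no intercept), and y
X = np.array([[ 2,  1, 4],
              [-1,  2, 1],
              [ 1, -3, 4],
              [ 2,  1, 2],
              [ 1,  4, 6]], dtype=float)
y = np.array([8, 10, 9, 6, 12], dtype=float)

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y          # approximately [-1.39, 0.27, 2.54]
SSR = b_hat @ (X.T @ y)            # approximately 372.9
SSE = y @ y - SSR                  # approximately 52.1
```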

For testing the hypothesis $H\colon b_1 = b_2 + 4$ the reduction $Q$ is, from (99),
$$Q = (\hat{b}_1 - \hat{b}_2 - 4)\left\{[1 \quad {-1} \quad 0]\,(X'X)^{-1}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}\right\}^{-1}(\hat{b}_1 - \hat{b}_2 - 4) = \frac{(-1.39 - 0.27 - 4.0)^2}{.2145 + .0417 - 2(.0231)} = \frac{(-5.65)^2}{.21} \approx 152.2.$$
Hence the $F$-statistic for testing the hypothesis is $152.2/(52.1/2) = 5.8$, on 1 and 2 degrees of freedom.
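Continuing in the same vein, $Q$ and $F(H)$ for this hypothesis can be reproduced as follows (the block repeats the data so that it stands alone):

```python
import numpy as np

# data and full-model quantities exactly as in the previous sketch
X = np.array([[2, 1, 4], [-1, 2, 1], [1, -3, 4], [2, 1, 2], [1, 4, 6]], dtype=float)
y = np.array([8, 10, 9, 6, 12], dtype=float)
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
SSE = y @ y - b_hat @ (X.T @ y)                        # about 52.1

K = np.array([[1.0], [-1.0], [0.0]])                   # K'b = b1 - b2
m = np.array([4.0])                                    # hypothesised value
d = K.T @ b_hat - m                                    # about -5.65
Q = float(d @ np.linalg.solve(K.T @ XtX_inv @ K, d))   # about 152.2
F = Q / (SSE / (5 - 3))                                # about 5.8 on (1, 2) d.f.
```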
Were a reduced model to be derived by replacing $b_1$ by $b_2 + 4$, it would be
$$y - 4x_1 = b_2(x_1 + x_2) + b_3 x_3 + e, \qquad (118)$$
for which the data are

  y - 4x1   x1 + x2   x3
      0        3       4
     14        1       1
      5       -2       4
     -2        3       2
      8        5       6

The total sum of squares is now $0^2 + 14^2 + 5^2 + (-2)^2 + 8^2 = 289$; and the residual sum of squares, using SSE from the analysis of variance and $Q$ from the $F$-statistic, is
$$\text{SSE} + Q = 52.1 + 152.2 = 204.3.$$
Therefore the analysis of variance for the reduced model is

  Source                        Degrees of Freedom    Sum of Squares
  Regression (reduced model)            2                  84.7
  Residual error                        3                 204.3
  Total                                 5                 289.0

The value of 84.7 for the reduction in sum of squares for the reduced model can be verified by deriving the normal equations for the model (118) directly. From the data they are
$$\begin{bmatrix} 48 & 41 \\ 41 & 73 \end{bmatrix}\begin{bmatrix} b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} 38 \\ 78 \end{bmatrix}$$
and hence
$$\begin{bmatrix} b_2 \\ b_3 \end{bmatrix} = \frac{1}{1823}\begin{bmatrix} 73 & -41 \\ -41 & 48 \end{bmatrix}\begin{bmatrix} 38 \\ 78 \end{bmatrix} = \begin{bmatrix} -0.23 \\ 1.20 \end{bmatrix}.$$
Then the reduction in the sum of squares is
$$[-0.23 \quad 1.20]\begin{bmatrix} 38 \\ 78 \end{bmatrix} = 93.6 - 8.9 = 84.7,$$
as in the analysis of variance.
These calculations are, of course, shown here purely to illustrate the sums of squares in the analysis of variance. They are not needed specifically, because for the reduced model the residual is always $\text{SSE} + Q$; and the estimator of $b$ can be found from (103) as
$$\tilde{b} = \begin{bmatrix} -1.39 \\ 0.27 \\ 2.54 \end{bmatrix} - (X'X)^{-1}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}\frac{1}{.21}(-5.66) = \begin{bmatrix} -1.39 \\ 0.27 \\ 2.54 \end{bmatrix} + \begin{bmatrix} .2145 - .0231 \\ .0231 - .0417 \\ -.0680 + .0181 \end{bmatrix}(26.95) = \begin{bmatrix} 3.77 \\ -0.23 \\ 1.20 \end{bmatrix},$$
wherein $\tilde{b}_1 - \tilde{b}_2 = 4$, of course, and $\tilde{b}_2$ and $\tilde{b}_3$ are as before.
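The same constrained estimate can be reproduced directly from (103); a short sketch (data repeated so that the block stands alone):

```python
import numpy as np

# constrained estimator (103) for the example, under b1 - b2 = 4
X = np.array([[2, 1, 4], [-1, 2, 1], [1, -3, 4], [2, 1, 2], [1, 4, 6]], dtype=float)
y = np.array([8, 10, 9, 6, 12], dtype=float)
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y

K = np.array([[1.0], [-1.0], [0.0]])
m = np.array([4.0])
A = K.T @ XtX_inv @ K
b_tilde = b_hat - XtX_inv @ K @ np.linalg.solve(A, K.T @ b_hat - m)
# b_tilde is about [3.77, -0.23, 1.20]; its first two elements differ by 4
```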
For testing the hypothesis $b_1 = 0$, $Q = (-1.39)^2/.2145 = 8.9$ and the analysis of variance of Table 3.6 is

  Source             Degrees of Freedom    Sum of Squares
  Full model                 3                 372.9
  Hypothesis                 1                   8.9
  Reduced model              2                 364.0
  Residual error             2                  52.1
  Total                      5                 425.0

with
$$\tilde{b} = \begin{bmatrix} -1.39 \\ 0.27 \\ 2.54 \end{bmatrix} - \begin{bmatrix} .2145 \\ .0231 \\ -.0680 \end{bmatrix}\frac{(-1.39)}{.2145} = \begin{bmatrix} 0 \\ 0.42 \\ 2.10 \end{bmatrix}.$$
Again these results can be verified from the normal equations of the reduced model, in this case
$$\begin{bmatrix} 31 & 20 \\ 20 & 73 \end{bmatrix}\begin{bmatrix} b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} 55 \\ 162 \end{bmatrix}.$$
They give
$$\begin{bmatrix} b_2 \\ b_3 \end{bmatrix} = \frac{1}{1863}\begin{bmatrix} 73 & -20 \\ -20 & 31 \end{bmatrix}\begin{bmatrix} 55 \\ 162 \end{bmatrix} = \begin{bmatrix} 0.42 \\ 2.10 \end{bmatrix},$$
as above; and the reduction in sum of squares is
$$[0.42 \quad 2.10]\begin{bmatrix} 31 & 20 \\ 20 & 73 \end{bmatrix}\begin{bmatrix} 0.42 \\ 2.10 \end{bmatrix} = 364.0.$$
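Finally, the test of $b_1 = 0$ and the accompanying estimate illustrate the "invert part of the inverse" rule numerically; a brief sketch (data repeated so that the block stands alone):

```python
import numpy as np

# testing b1 = 0 in the example by "inverting part of the inverse"
X = np.array([[2, 1, 4], [-1, 2, 1], [1, -3, 4], [2, 1, 2], [1, 4, 6]], dtype=float)
y = np.array([8, 10, 9, 6, 12], dtype=float)
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y

Q = b_hat[0] ** 2 / XtX_inv[0, 0]      # about 8.9
SSR = b_hat @ (X.T @ y)                # about 372.9, so SSR - Q is about 364.0
b_tilde = b_hat - XtX_inv[:, 0] * (b_hat[0] / XtX_inv[0, 0])
# b_tilde is about [0, 0.42, 2.10], agreeing with the reduced-model normal equations
```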
