Two-Way Classification (With One Observation Per Cell)
Let us consider an experiment in which two factors A and B, at p and q levels respectively, affect the outcome. When the data obtained from such an experiment are analysed by the ANOVA technique, the arrangement of the observations into classes on the basis of the two factors (criteria) A and B is called a two-way classification.
If there is a single observation for each combination of a level of factor A with a level of factor B, it is called a two-way classification with a single observation per cell. Examples:
1. The variety of paddy and the type of fertilizer used both affect the yield of a plot.
2. The yield of milk may be affected by breed and stock of the cows.
The Fixed Effect Model For Two-Way Classification (With Single Observation Per Cell):
The fixed effect linear model of two-way classification is given by

y_{ij} = \mu + \alpha_i + \beta_j + \varepsilon_{ij}; \quad i = 1(1)p, \; j = 1(1)q

where
y_{ij} is the observation corresponding to the i-th level of factor A and the j-th level of factor B,
\mu is the general mean effect,
\alpha_i is the effect due to the i-th level of factor A,
\beta_j is the effect due to the j-th level of factor B, and
\varepsilon_{ij} is the random error, assumed to be independently and normally distributed with mean 0 and variance \sigma^2.
The observations may be arranged as follows:
A \ B            B_1          B_2          ...   B_j          ...   B_q          Marginal total   Marginal mean
A_1              y_{11}       y_{12}       ...   y_{1j}       ...   y_{1q}       y_{1.}           \bar{y}_{1.}
A_2              y_{21}       y_{22}       ...   y_{2j}       ...   y_{2q}       y_{2.}           \bar{y}_{2.}
...
A_i              y_{i1}       y_{i2}       ...   y_{ij}       ...   y_{iq}       y_{i.}           \bar{y}_{i.}
...
A_p              y_{p1}       y_{p2}       ...   y_{pj}       ...   y_{pq}       y_{p.}           \bar{y}_{p.}
Marginal total   y_{.1}       y_{.2}       ...   y_{.j}       ...   y_{.q}       y_{..}
Marginal mean    \bar{y}_{.1} \bar{y}_{.2} ...   \bar{y}_{.j} ...   \bar{y}_{.q}                  \bar{y}_{..}
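For concreteness, the following Python sketch (purely illustrative; the values of p, q, \mu, the \alpha_i, the \beta_j and \sigma are all assumptions chosen for the example) generates such a p x q table of observations from the fixed effect model above.

import numpy as np

# Illustrative sketch only: simulate a p x q table of observations from
# y_ij = mu + alpha_i + beta_j + eps_ij, one observation per cell.
rng = np.random.default_rng(1)

p, q = 4, 5                                      # assumed numbers of levels of A and B
mu = 10.0                                        # assumed general mean
alpha = np.array([1.5, -0.5, 0.0, -1.0])         # assumed effects of A (sum to 0)
beta = np.array([2.0, 1.0, 0.0, -1.0, -2.0])     # assumed effects of B (sum to 0)
sigma = 1.0                                      # assumed error standard deviation

eps = rng.normal(0.0, sigma, size=(p, q))        # random errors
y = mu + alpha[:, None] + beta[None, :] + eps    # p x q data table

print(y.round(2))
print("marginal totals over B (y_i.):", y.sum(axis=1).round(2))
print("marginal totals over A (y_.j):", y.sum(axis=0).round(2))
print("grand total (y_..):", round(float(y.sum()), 2))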
The least square estimates of \mu, \alpha_i and \beta_j can be obtained by minimizing the error sum of squares

E = \sum_{i=1}^{p} \sum_{j=1}^{q} e_{ij}^2 = \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \mu - \alpha_i - \beta_j)^2
\frac{\partial E}{\partial \mu} = -2 \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \mu - \alpha_i - \beta_j) = 0

\Rightarrow \sum_{i=1}^{p} \sum_{j=1}^{q} y_{ij} = pq\mu + q \sum_{i=1}^{p} \alpha_i + p \sum_{j=1}^{q} \beta_j

\Rightarrow y_{..} = pq\mu + q \sum_{i=1}^{p} \alpha_i + p \sum_{j=1}^{q} \beta_j   ... (1)
Similarly,

\frac{\partial E}{\partial \alpha_i} = -2 \sum_{j=1}^{q} (y_{ij} - \mu - \alpha_i - \beta_j) = 0

\Rightarrow \sum_{j=1}^{q} y_{ij} = q\mu + q\alpha_i + \sum_{j=1}^{q} \beta_j

\Rightarrow y_{i.} = q\mu + q\alpha_i + \sum_{j=1}^{q} \beta_j   ... (2)

And,

\frac{\partial E}{\partial \beta_j} = -2 \sum_{i=1}^{p} (y_{ij} - \mu - \alpha_i - \beta_j) = 0

\Rightarrow \sum_{i=1}^{p} y_{ij} = p\mu + \sum_{i=1}^{p} \alpha_i + p\beta_j

\Rightarrow y_{.j} = p\mu + \sum_{i=1}^{p} \alpha_i + p\beta_j   ... (3)
If we sum equation (2) over i, or equation (3) over j, we get back equation (1). So the equations are not independent, and hence we cannot obtain unique solutions. We therefore impose some restrictions to get unique solutions. The restrictions are

\sum_{i=1}^{p} \alpha_i = 0 \quad \text{and} \quad \sum_{j=1}^{q} \beta_j = 0
Now, applying these restrictions to equations (1), (2) and (3) respectively, we get

y_{..} = pq\hat{\mu}
\Rightarrow \hat{\mu} = \frac{y_{..}}{pq} = \bar{y}_{..}   ... (4)

y_{i.} = q\hat{\mu} + q\hat{\alpha}_i = q(\hat{\mu} + \hat{\alpha}_i)
\Rightarrow \hat{\mu} + \hat{\alpha}_i = \frac{y_{i.}}{q} = \bar{y}_{i.}
\Rightarrow \hat{\alpha}_i = \bar{y}_{i.} - \hat{\mu} = \bar{y}_{i.} - \bar{y}_{..}   ... (5)

y_{.j} = p\hat{\mu} + p\hat{\beta}_j = p(\hat{\mu} + \hat{\beta}_j)
\Rightarrow \hat{\mu} + \hat{\beta}_j = \frac{y_{.j}}{p} = \bar{y}_{.j}
\Rightarrow \hat{\beta}_j = \bar{y}_{.j} - \hat{\mu} = \bar{y}_{.j} - \bar{y}_{..}   ... (6)

So, the estimated values of the parameters are

\hat{\mu} = \bar{y}_{..}, \quad \hat{\alpha}_i = \bar{y}_{i.} - \bar{y}_{..}, \quad \hat{\beta}_j = \bar{y}_{.j} - \bar{y}_{..}
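The following Python sketch is an illustration only: the data matrix y is an assumed p x q table of observations, and the code computes the least squares estimates (4)-(6).

import numpy as np

# Illustrative sketch: least squares estimates for the two-way model
# with one observation per cell; `y` is an assumed p x q data table.
y = np.array([
    [12.0, 14.0, 11.0, 13.0],
    [15.0, 17.0, 14.0, 16.0],
    [10.0, 12.0,  9.0, 11.0],
])
p, q = y.shape

mu_hat = y.mean()                       # \hat{mu}      = \bar{y}_{..}
alpha_hat = y.mean(axis=1) - mu_hat     # \hat{alpha}_i = \bar{y}_{i.} - \bar{y}_{..}
beta_hat = y.mean(axis=0) - mu_hat      # \hat{beta}_j  = \bar{y}_{.j} - \bar{y}_{..}

# The restrictions sum(alpha_i) = 0 and sum(beta_j) = 0 hold automatically.
print(mu_hat, alpha_hat.round(3), beta_hat.round(3))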
Additive Model:
Let us consider the model

y_{ij} = \mu + \alpha_i + \beta_j + \varepsilon_{ij}; \quad i = 1(1)p, \; j = 1(1)q

In this model there are no interaction effects between A and B, so this model is called an additive model.
Total SS = \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \bar{y}_{..})^2

= \sum_{i=1}^{p} \sum_{j=1}^{q} \left[ (\bar{y}_{i.} - \bar{y}_{..}) + (\bar{y}_{.j} - \bar{y}_{..}) + (y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}) \right]^2

= \sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{i.} - \bar{y}_{..})^2 + \sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..})^2 + \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})^2
  + 2 \sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{i.} - \bar{y}_{..})(\bar{y}_{.j} - \bar{y}_{..}) + 2 \sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{i.} - \bar{y}_{..})(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})
  + 2 \sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..})(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})
Now,

\sum_{i=1}^{p} \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..})(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})
= \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..}) \sum_{i=1}^{p} \left[ (y_{ij} - \bar{y}_{i.}) - (\bar{y}_{.j} - \bar{y}_{..}) \right]
= \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..}) \left[ p(\bar{y}_{.j} - \bar{y}_{..}) - p(\bar{y}_{.j} - \bar{y}_{..}) \right]
= 0

since \sum_{i=1}^{p} y_{ij} = p\bar{y}_{.j} and \sum_{i=1}^{p} \bar{y}_{i.} = p\bar{y}_{..}, i.e., the algebraic sum of the deviations of the observations from their mean is zero. Similarly, the other product terms also vanish. Hence,
Total SS = q \sum_{i=1}^{p} (\bar{y}_{i.} - \bar{y}_{..})^2 + p \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..})^2 + \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})^2

Total SS = (SS due to factor A) + (SS due to factor B) + (SS due to error)
i.e., TSS = SS(A) + SS(B) + ESS
Now, the d.f. of TSS = d.f. of SS(A) + d.f. of SS(B) + d.f. of ESS
= (p - 1) + (q - 1) + (p - 1)(q - 1)
= p - 1 + q - 1 + pq - p - q + 1
= pq - 1
The ANOVA table for two-way classification with a single observation per cell is given below:

Source of variation   d.f.              Sum of squares                                                                               MSS                          F
Factor A              p - 1             S_1 = q \sum_{i=1}^{p} (\bar{y}_{i.} - \bar{y}_{..})^2                                       s_1 = S_1/(p - 1)            F_1 = s_1/s_3 ~ F_{(p-1), (p-1)(q-1)}
Factor B              q - 1             S_2 = p \sum_{j=1}^{q} (\bar{y}_{.j} - \bar{y}_{..})^2                                       s_2 = S_2/(q - 1)            F_2 = s_2/s_3 ~ F_{(q-1), (p-1)(q-1)}
Error                 (p - 1)(q - 1)    S_3 = \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..})^2   s_3 = S_3/[(p - 1)(q - 1)]
Total                 pq - 1            \sum_{i=1}^{p} \sum_{j=1}^{q} (y_{ij} - \bar{y}_{..})^2
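The following Python sketch is illustrative only: the data matrix y is an assumed example, and scipy is used just to obtain the F p-values. It computes the entries of this ANOVA table and checks the identity TSS = SS(A) + SS(B) + ESS.

import numpy as np
from scipy import stats

# Assumed example data: p levels of A (rows) x q levels of B (columns),
# one observation per cell.
y = np.array([
    [12.0, 14.0, 11.0, 13.0],
    [15.0, 17.0, 14.0, 16.0],
    [10.0, 12.0,  9.0, 11.0],
])
p, q = y.shape
grand = y.mean()                 # \bar{y}_{..}
row_means = y.mean(axis=1)       # \bar{y}_{i.}
col_means = y.mean(axis=0)       # \bar{y}_{.j}

S1 = q * np.sum((row_means - grand) ** 2)                                  # SS(A)
S2 = p * np.sum((col_means - grand) ** 2)                                  # SS(B)
S3 = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)    # ESS
TSS = np.sum((y - grand) ** 2)
assert np.isclose(TSS, S1 + S2 + S3)          # checks TSS = SS(A) + SS(B) + ESS

err_df = (p - 1) * (q - 1)
s1, s2, s3 = S1 / (p - 1), S2 / (q - 1), S3 / err_df
F1, F2 = s1 / s3, s2 / s3
print("F1 =", F1, "p-value =", stats.f.sf(F1, p - 1, err_df))
print("F2 =", F2, "p-value =", stats.f.sf(F2, q - 1, err_df))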
Null Hypothesis:
We set up the null hypothesis that factor A as well as factor B is homogeneous. In other words, the null hypotheses for factor A and factor B are respectively:
1. H_0: \alpha_1 = \alpha_2 = \cdots = \alpha_p = 0, i.e., \alpha_i = 0 for all i
2. H_0: \beta_1 = \beta_2 = \cdots = \beta_q = 0, i.e., \beta_j = 0 for all j
If the null hypothesis is rejected, we should find out which levels of the factor differ significantly. For this we use the critical difference (C.D.).
To compare the i-th and i'-th levels of factor A we test

H_0: \alpha_i = \alpha_{i'}, \quad i \ne i', \quad i, i' = 1(1)p
or H_0: \alpha_i - \alpha_{i'} = 0

The test statistic is

t = \frac{\bar{y}_{i.} - \bar{y}_{i'.}}{S.E.(\bar{y}_{i.} - \bar{y}_{i'.})}

Under the null hypothesis, E(\bar{y}_{i.} - \bar{y}_{i'.}) = 0.

Now,

V(\bar{y}_{i.} - \bar{y}_{i'.}) = V(\bar{y}_{i.}) + V(\bar{y}_{i'.})   [since they are independent]
= \frac{\sigma^2}{q} + \frac{\sigma^2}{q}
= \frac{2\sigma^2}{q}

which is estimated by \frac{2\,MSE}{q}, since MSE is an unbiased estimate of \sigma^2. Therefore,

t = \frac{\bar{y}_{i.} - \bar{y}_{i'.}}{\sqrt{\dfrac{2\,MSE}{q}}}

and the critical difference is

C.D. = t_{\alpha/2,\ \text{error d.f.}} \sqrt{\frac{2\,MSE}{q}}

If |\bar{y}_{i.} - \bar{y}_{i'.}| \le C.D., the null hypothesis is accepted; otherwise it is rejected.
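A minimal sketch of this critical difference calculation, assuming the same kind of p x q data matrix as in the earlier examples and a 5% significance level (both assumptions); scipy is used only for the t critical value.

import numpy as np
from scipy import stats

# Illustrative sketch: critical difference for pairwise comparison of the
# levels of factor A; the data matrix and the 5% level are assumptions.
y = np.array([
    [12.0, 14.0, 11.0, 13.0],
    [15.0, 17.0, 14.0, 16.0],
    [10.0, 12.0,  9.0, 11.0],
])
p, q = y.shape
grand = y.mean()
row_means = y.mean(axis=1)
col_means = y.mean(axis=0)

err_df = (p - 1) * (q - 1)
MSE = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2) / err_df

level = 0.05                                                   # assumed significance level
CD = stats.t.ppf(1 - level / 2, err_df) * np.sqrt(2 * MSE / q)

# Compare every pair of levels of factor A: reject H0 if |diff| > C.D.
for i in range(p):
    for k in range(i + 1, p):
        diff = abs(row_means[i] - row_means[k])
        verdict = "significant" if diff > CD else "not significant"
        print(f"A{i+1} vs A{k+1}: |diff| = {diff:.3f}, C.D. = {CD:.3f} -> {verdict}")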
In general, for

H_0: \sum_{i=1}^{p} c_i \alpha_i = 0, \quad \text{where } c_i \text{ is the coefficient of } \alpha_i,

the test statistic is

t = \frac{\sum_i c_i \hat{\alpha}_i}{S.E.\left(\sum_i c_i \hat{\alpha}_i\right)} = \frac{\sum_i c_i \hat{\alpha}_i}{\sqrt{\dfrac{MSE \sum_{i=1}^{p} c_i^2}{q}}}
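A sketch of such a contrast test, assuming the same example data as above and a hypothetical contrast vector c (chosen so that its coefficients sum to zero).

import numpy as np
from scipy import stats

# Illustrative sketch: t test for a contrast H0: sum(c_i * alpha_i) = 0.
# The data matrix and the contrast coefficients c are assumptions.
y = np.array([
    [12.0, 14.0, 11.0, 13.0],
    [15.0, 17.0, 14.0, 16.0],
    [10.0, 12.0,  9.0, 11.0],
])
p, q = y.shape
grand = y.mean()
row_means = y.mean(axis=1)
col_means = y.mean(axis=0)
err_df = (p - 1) * (q - 1)
MSE = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2) / err_df

c = np.array([1.0, -2.0, 1.0])            # assumed contrast coefficients (sum to 0)
alpha_hat = row_means - grand             # \hat{alpha}_i

t_stat = np.sum(c * alpha_hat) / np.sqrt(MSE * np.sum(c ** 2) / q)
p_value = 2 * stats.t.sf(abs(t_stat), err_df)
print("t =", t_stat, "p-value =", p_value)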