
Two-Way Classification (With One Observation Per Cell):

Let us consider an experiment in which two factors $A$ and $B$, at $p$ and $q$ levels respectively, affect the outcome of the experiment. For the analysis of data taken from such an experiment by the ANOVA technique, the arrangement of the observations into classes on the basis of the two factors (criteria) $A$ and $B$ is called a two-way classification.

If, for each combination of one level of factor $A$ and one level of factor $B$, there is a single observation, then it is called a two-way classification with a single observation per cell. Examples:
1. The variety of paddy and the type of fertilizer used both affect the yield of a plot.
2. The yield of milk may be affected by breed and stock of the cows.

The Fixed Effect Model For Two-Way Classification (With Single Observation Per Cell):
The fixed-effect linear model of two-way classification is given by
$$y_{ij} = \mu + \alpha_i + \beta_j + e_{ij}; \quad i = 1(1)p,\ j = 1(1)q$$

where
$y_{ij}$ is the observation corresponding to the $i$th level of factor $A$ and the $j$th level of factor $B$,
$\mu$ is the general mean effect,
$\alpha_i$ is the fixed effect due to the $i$th level of factor $A$,
$\beta_j$ is the fixed effect due to the $j$th level of factor $B$, and
$e_{ij}$ is the random error component.

Assumptions of Two-Way Classification:

The main assumptions of the two-way classification are as follows:
1. All observations are independent.
2. All the effects are additive in nature.
3. $\mu$, $\alpha_i$ and $\beta_j$ are unknown parameters.
4. $\sum_{i=1}^{p} \alpha_i = 0$ and $\sum_{j=1}^{q} \beta_j = 0$.
5. There is no interaction effect between factor $A$ and factor $B$.
6. $e_{ij} \sim \text{NID}(0, \sigma^2)$.
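To make the model concrete, here is a minimal simulation sketch of data generated under these assumptions; all parameter values ($p$, $q$, $\mu$, $\alpha$, $\beta$, $\sigma$) are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

p, q = 3, 4                              # hypothetical numbers of levels of A and B
mu = 10.0                                # general mean effect
alpha = np.array([1.0, -0.5, -0.5])      # fixed effects of A; sum(alpha) = 0
beta = np.array([2.0, 0.0, -1.0, -1.0])  # fixed effects of B; sum(beta) = 0
sigma = 1.0                              # error standard deviation

# y_ij = mu + alpha_i + beta_j + e_ij, with e_ij ~ NID(0, sigma^2)
e = rng.normal(0.0, sigma, size=(p, q))
y = mu + alpha[:, None] + beta[None, :] + e
print(y)
```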



Layout of Two-Way Classified Data:

| $A$ \ $B$ | $B_1$ | $B_2$ | $\cdots$ | $B_j$ | $\cdots$ | $B_q$ | Marginal Total | Mean |
|---|---|---|---|---|---|---|---|---|
| $A_1$ | $y_{11}$ | $y_{12}$ | $\cdots$ | $y_{1j}$ | $\cdots$ | $y_{1q}$ | $y_{1.}$ | $\bar{y}_{1.}$ |
| $A_2$ | $y_{21}$ | $y_{22}$ | $\cdots$ | $y_{2j}$ | $\cdots$ | $y_{2q}$ | $y_{2.}$ | $\bar{y}_{2.}$ |
| $\vdots$ |  |  |  |  |  |  |  |  |
| $A_i$ | $y_{i1}$ | $y_{i2}$ | $\cdots$ | $y_{ij}$ | $\cdots$ | $y_{iq}$ | $y_{i.}$ | $\bar{y}_{i.}$ |
| $\vdots$ |  |  |  |  |  |  |  |  |
| $A_p$ | $y_{p1}$ | $y_{p2}$ | $\cdots$ | $y_{pj}$ | $\cdots$ | $y_{pq}$ | $y_{p.}$ | $\bar{y}_{p.}$ |
| Marginal Total | $y_{.1}$ | $y_{.2}$ | $\cdots$ | $y_{.j}$ | $\cdots$ | $y_{.q}$ | $y_{..}$ |  |
| Mean | $\bar{y}_{.1}$ | $\bar{y}_{.2}$ | $\cdots$ | $\bar{y}_{.j}$ | $\cdots$ | $\bar{y}_{.q}$ |  | $\bar{y}_{..}$ |
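Assuming the observations are stored in a $p \times q$ array laid out as above, the marginal totals and means follow directly; a sketch in NumPy with hypothetical data:

```python
import numpy as np

# hypothetical 3x4 data matrix (rows = levels of A, columns = levels of B)
y = np.array([[12.3, 14.1, 9.8, 10.0],
              [ 9.5, 11.2, 8.1,  8.9],
              [10.4, 12.0, 8.7,  9.3]])

row_total = y.sum(axis=1)   # y_i.  (marginal totals for A)
col_total = y.sum(axis=0)   # y_.j  (marginal totals for B)
grand_total = y.sum()       # y_..
row_mean = y.mean(axis=1)   # ybar_i.
col_mean = y.mean(axis=0)   # ybar_.j
grand_mean = y.mean()       # ybar_..
```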

Estimation of the Unknown Parameters of the Model:

The model is
$$y_{ij} = \mu + \alpha_i + \beta_j + e_{ij}; \quad i = 1(1)p,\ j = 1(1)q$$
$$\Rightarrow e_{ij} = y_{ij} - \mu - \alpha_i - \beta_j$$

The least-squares estimates of $\mu$, $\alpha_i$ and $\beta_j$ can be obtained by minimizing the error sum of squares
$$E = \sum_{i=1}^{p}\sum_{j=1}^{q} e_{ij}^2 = \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \mu - \alpha_i - \beta_j\right)^2$$

The normal equations for estimating $\mu$, $\alpha_i$ and $\beta_j$ are, respectively:
$$\frac{\partial E}{\partial \mu} = -2\sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \mu - \alpha_i - \beta_j\right) = 0$$
$$\Rightarrow \sum_{i=1}^{p}\sum_{j=1}^{q} y_{ij} = pq\mu + q\sum_{i=1}^{p}\alpha_i + p\sum_{j=1}^{q}\beta_j$$
$$\Rightarrow y_{..} = pq\mu + q\sum_{i=1}^{p}\alpha_i + p\sum_{j=1}^{q}\beta_j \quad \cdots (1)$$



$$\frac{\partial E}{\partial \alpha_i} = -2\sum_{j=1}^{q}\left(y_{ij} - \mu - \alpha_i - \beta_j\right) = 0$$
$$\Rightarrow \sum_{j=1}^{q} y_{ij} = q\mu + q\alpha_i + \sum_{j=1}^{q}\beta_j$$
$$\Rightarrow y_{i.} = q\mu + q\alpha_i + \sum_{j=1}^{q}\beta_j \quad \cdots (2)$$

And,
$$\frac{\partial E}{\partial \beta_j} = -2\sum_{i=1}^{p}\left(y_{ij} - \mu - \alpha_i - \beta_j\right) = 0$$
$$\Rightarrow \sum_{i=1}^{p} y_{ij} = p\mu + \sum_{i=1}^{p}\alpha_i + p\beta_j$$
$$\Rightarrow y_{.j} = p\mu + \sum_{i=1}^{p}\alpha_i + p\beta_j \quad \cdots (3)$$

If we take the sum over $i$ in equation (2) or over $j$ in equation (3), we get equation (1). So we see that the equations are not independent, and hence we cannot get unique solutions of the equations. We therefore have to impose some restrictions to get unique solutions. The restrictions are
$$\sum_{i=1}^{p}\alpha_i = 0 \quad \text{and} \quad \sum_{j=1}^{q}\beta_j = 0$$

Now, putting the restrictions into equations (1), (2) and (3) respectively, we get

$$y_{..} = pq\hat{\mu} \Rightarrow \hat{\mu} = \frac{y_{..}}{pq} = \bar{y}_{..} \quad \cdots (4)$$

$$y_{i.} = q\mu + q\alpha_i \Rightarrow y_{i.} = q(\mu + \alpha_i) \Rightarrow \mu + \alpha_i = \frac{y_{i.}}{q} \Rightarrow \hat{\alpha}_i = \bar{y}_{i.} - \hat{\mu}$$
$$\Rightarrow \hat{\alpha}_i = \bar{y}_{i.} - \bar{y}_{..} \quad \cdots (5)$$

$$y_{.j} = p\mu + p\beta_j \Rightarrow y_{.j} = p(\mu + \beta_j) \Rightarrow \mu + \beta_j = \frac{y_{.j}}{p} \Rightarrow \hat{\beta}_j = \bar{y}_{.j} - \hat{\mu}$$
$$\Rightarrow \hat{\beta}_j = \bar{y}_{.j} - \bar{y}_{..} \quad \cdots (6)$$

So, the estimated values of the parameters are


$$\hat{\mu} = \bar{y}_{..}, \quad \hat{\alpha}_i = \bar{y}_{i.} - \bar{y}_{..}, \quad \hat{\beta}_j = \bar{y}_{.j} - \bar{y}_{..}$$
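As a numerical check of these estimates, a short sketch (reusing the hypothetical data matrix from the layout section):

```python
import numpy as np

y = np.array([[12.3, 14.1, 9.8, 10.0],
              [ 9.5, 11.2, 8.1,  8.9],
              [10.4, 12.0, 8.7,  9.3]])   # hypothetical p = 3, q = 4 data

mu_hat = y.mean()                     # mu-hat    = ybar_..
alpha_hat = y.mean(axis=1) - mu_hat   # alpha-hat = ybar_i. - ybar_..
beta_hat = y.mean(axis=0) - mu_hat    # beta-hat  = ybar_.j - ybar_..

# the restrictions carry over to the estimates
assert np.isclose(alpha_hat.sum(), 0.0) and np.isclose(beta_hat.sum(), 0.0)
```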

Additive Model:
Let us consider the model
$$y_{ij} = \mu + \alpha_i + \beta_j + e_{ij}; \quad i = 1(1)p,\ j = 1(1)q$$
In this model there is no interaction effect between the $\alpha$'s and the $\beta$'s, so this model is called an additive model.

Non-Additive Model:
If an interaction effect is present in the model, then the model is known as a non-additive model. The model then becomes:
$$y_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + e_{ij}; \quad i = 1(1)p,\ j = 1(1)q$$

Partitioning the Total Sum of Squares:

$$\text{Total SS} = \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{..}\right)^2$$
$$= \sum_{i=1}^{p}\sum_{j=1}^{q}\left[\left(\bar{y}_{i.} - \bar{y}_{..}\right) + \left(\bar{y}_{.j} - \bar{y}_{..}\right) + \left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)\right]^2$$
$$= \sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{i.} - \bar{y}_{..}\right)^2 + \sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)^2 + \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)^2$$
$$+ 2\sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{i.} - \bar{y}_{..}\right)\left(\bar{y}_{.j} - \bar{y}_{..}\right) + 2\sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{i.} - \bar{y}_{..}\right)\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)$$
$$+ 2\sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)$$

Now,
$$\sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right) = \sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)\sum_{i=1}^{p}\left[\left(y_{ij} - \bar{y}_{i.}\right) - \left(\bar{y}_{.j} - \bar{y}_{..}\right)\right]$$
$$= \sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)\left[\left(p\bar{y}_{.j} - p\bar{y}_{..}\right) - p\left(\bar{y}_{.j} - \bar{y}_{..}\right)\right] = 0,$$

since the algebraic sum of the deviations of the observations from their mean is zero. Similarly, the other product terms also vanish.



$$\therefore \text{Total SS} = \sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{i.} - \bar{y}_{..}\right)^2 + \sum_{i=1}^{p}\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)^2 + \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)^2$$
$$= q\sum_{i=1}^{p}\left(\bar{y}_{i.} - \bar{y}_{..}\right)^2 + p\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)^2 + \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)^2$$

Total SS = (SS due to factor $A$) + (SS due to factor $B$) + (SS due to error)

i.e., $\text{TSS} = SS(A) + SS(B) + ESS$

Now, the d.f. of TSS = d.f. of $SS(A)$ + d.f. of $SS(B)$ + d.f. of ESS
$$= (p-1) + (q-1) + (p-1)(q-1) = p - 1 + q - 1 + pq - p - q + 1 = pq - 1$$
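A quick numerical verification of this partition, continuing the same hypothetical data:

```python
import numpy as np

y = np.array([[12.3, 14.1, 9.8, 10.0],
              [ 9.5, 11.2, 8.1,  8.9],
              [10.4, 12.0, 8.7,  9.3]])
p, q = y.shape
gm = y.mean()                            # ybar_..
row_m = y.mean(axis=1, keepdims=True)    # ybar_i.
col_m = y.mean(axis=0, keepdims=True)    # ybar_.j

TSS = ((y - gm) ** 2).sum()
SSA = q * ((row_m - gm) ** 2).sum()
SSB = p * ((col_m - gm) ** 2).sum()
ESS = ((y - row_m - col_m + gm) ** 2).sum()

assert np.isclose(TSS, SSA + SSB + ESS)  # TSS = SS(A) + SS(B) + ESS
assert (p * q - 1) == (p - 1) + (q - 1) + (p - 1) * (q - 1)
```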

The ANOVA table for two-way classification with a single observation per cell is given below:

| Source of variation | d.f. | Sum of squares | MSS | F |
|---|---|---|---|---|
| Factor $A$ | $p-1$ | $S_1 = q\sum_{i=1}^{p}\left(\bar{y}_{i.} - \bar{y}_{..}\right)^2$ | $s_1 = \dfrac{S_1}{p-1}$ | $F_1 = \dfrac{s_1}{s_3} \sim F_{(p-1),(p-1)(q-1)}$ |
| Factor $B$ | $q-1$ | $S_2 = p\sum_{j=1}^{q}\left(\bar{y}_{.j} - \bar{y}_{..}\right)^2$ | $s_2 = \dfrac{S_2}{q-1}$ | $F_2 = \dfrac{s_2}{s_3} \sim F_{(q-1),(p-1)(q-1)}$ |
| Error | $(p-1)(q-1)$ | $S_3 = \sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}\right)^2$ | $s_3 = \dfrac{S_3}{(p-1)(q-1)}$ |  |
| Total | $pq-1$ | $\sum_{i=1}^{p}\sum_{j=1}^{q}\left(y_{ij} - \bar{y}_{..}\right)^2$ |  |  |
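Continuing the sketch, the mean squares and F statistics can be formed and compared with tabulated critical values (here taken from scipy, assuming it is available):

```python
from scipy import stats

# continuing from the sum-of-squares sketch above
s1 = SSA / (p - 1)                 # MSS for factor A
s2 = SSB / (q - 1)                 # MSS for factor B
s3 = ESS / ((p - 1) * (q - 1))     # MSS for error
F1, F2 = s1 / s3, s2 / s3

level = 0.05
crit_A = stats.f.ppf(1 - level, p - 1, (p - 1) * (q - 1))
crit_B = stats.f.ppf(1 - level, q - 1, (p - 1) * (q - 1))
print(F1 > crit_A, F2 > crit_B)    # True means the corresponding H0 is rejected
```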

Null Hypothesis:
We set up the null hypothesis that factor $A$ as well as factor $B$ is homogeneous. In other words, the null hypotheses for factor $A$ and factor $B$ are, respectively:
1. $H_0: \alpha_1 = \alpha_2 = \cdots = \alpha_p = 0$, i.e., $H_0: \alpha_i = 0$ for all $i$
2. $H_0: \beta_1 = \beta_2 = \cdots = \beta_q = 0$, i.e., $H_0: \beta_j = 0$ for all $j$

Now compare $F_1$ with $F_{\alpha,\,(p-1),\,(p-1)(q-1)}$ and $F_2$ with $F_{\alpha,\,(q-1),\,(p-1)(q-1)}$.


Decision Rule:
If $F_1 > F_{\alpha,\,(p-1),\,(p-1)(q-1)}$, then the null hypothesis $H_0: \alpha_i = 0$ is rejected; otherwise it is accepted. If $F_2 > F_{\alpha,\,(q-1),\,(p-1)(q-1)}$, then the null hypothesis $H_0: \beta_j = 0$ is rejected; otherwise it is accepted.

For a rejected null hypothesis we should find out which levels of the factor differ significantly. For this we use the critical difference (C.D.).

$$H_0: \alpha_i = \alpha_{i'}; \quad i \neq i' = 1(1)p$$
$$\text{or } H_0: \alpha_i - \alpha_{i'} = 0$$

This is a contrast. The contrast is $c = \alpha_i - \alpha_{i'}$.

The estimated value of the contrast:
$$\hat{\alpha}_i = \bar{y}_{i.} - \bar{y}_{..}, \qquad \hat{\alpha}_{i'} = \bar{y}_{i'.} - \bar{y}_{..}$$
$$\therefore \hat{\alpha}_i - \hat{\alpha}_{i'} = \bar{y}_{i.} - \bar{y}_{i'.}$$

The test statistic is
$$t = \frac{(\hat{\alpha}_i - \hat{\alpha}_{i'}) - E(\hat{\alpha}_i - \hat{\alpha}_{i'})}{S.E.(\hat{\alpha}_i - \hat{\alpha}_{i'})} = \frac{(\bar{y}_{i.} - \bar{y}_{i'.}) - E(\bar{y}_{i.} - \bar{y}_{i'.})}{S.E.(\bar{y}_{i.} - \bar{y}_{i'.})} = \frac{\bar{y}_{i.} - \bar{y}_{i'.}}{S.E.(\bar{y}_{i.} - \bar{y}_{i'.})}$$
$$\left[\because \text{under the null hypothesis, } E(\bar{y}_{i.} - \bar{y}_{i'.}) = 0\right]$$

Now,
$$V(\bar{y}_{i.} - \bar{y}_{i'.}) = V(\bar{y}_{i.}) + V(\bar{y}_{i'.}) \quad [\because \text{they are independent}]$$
$$= \frac{\sigma^2}{q} + \frac{\sigma^2}{q} = \frac{2\sigma^2}{q},$$
which is estimated by $\dfrac{2\,\text{MSE}}{q}$ $\;[\because \text{MSE is an unbiased estimate of } \sigma^2]$.

$$\therefore t = \frac{\bar{y}_{i.} - \bar{y}_{i'.}}{\sqrt{\dfrac{2\,\text{MSE}}{q}}}$$

If $\left|t_{cal}\right| < t_{\alpha/2,\ \text{error d.f.}}$, then we accept the null hypothesis; otherwise we reject it.

$$\therefore C.D. = t_{\alpha/2,\ \text{error d.f.}} \times \sqrt{\frac{2\,\text{MSE}}{q}}$$

If $\left|\bar{y}_{i.} - \bar{y}_{i'.}\right| < C.D.$, then the null hypothesis is accepted; otherwise it is rejected.

In general, for
$$H_0: \sum_{i=1}^{p} c_i \alpha_i = 0 \quad [c_i \text{ is the coefficient of } \alpha_i]$$

the test statistic is
$$t = \frac{\sum c_i \hat{\alpha}_i}{S.E.\left(\sum c_i \hat{\alpha}_i\right)} = \frac{\sum c_i \hat{\alpha}_i}{\sqrt{\dfrac{\left(\sum_{i=1}^{p} c_i^2\right)\text{MSE}}{q}}}$$

If $\left|t_{cal}\right| < t_{\alpha/2,\ \text{error d.f.}}$, then the null hypothesis is accepted; otherwise it is rejected.
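Finally, a minimal sketch of this general contrast test, with a hypothetical contrast vector $c$ (here comparing the second level of $A$ with the average of the other two):

```python
import numpy as np
from scipy import stats

# continuing from the sketches above
c = np.array([1.0, -2.0, 1.0])             # hypothetical contrast coefficients
alpha_hat = y.mean(axis=1) - y.mean()      # alpha_i-hat

t_cal = (c @ alpha_hat) / np.sqrt((c ** 2).sum() * MSE / q)
t_crit = stats.t.ppf(1 - 0.05 / 2, (p - 1) * (q - 1))
print(abs(t_cal) < t_crit)                 # True means H0 is accepted
```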
