Chapter 12: Sampling on Successive Occasions
When the same population is sampled repeatedly, the opportunities for a flexible sampling scheme are greatly enhanced. For example, on the h-th occasion we may have a part of the sample that is matched with (common to) the sample on the (h-1)-th occasion, parts matching with both the (h-1)-th and (h-2)-th occasions, and so on.
Such partial matching is termed sampling on successive occasions with partial replacement of units, rotation sampling, or sampling for a time series.
Notations:
Let P be the fixed population with N units.
y_t : value of a certain dynamic character which changes with time t and can be measured for each unit on every occasion; y_{ij} denotes its value for the j-th unit on the i-th occasion.
\bar{Y}_i = \frac{1}{N} \sum_{j=1}^{N} y_{ij} : population mean for the i-th occasion.
S_i^2 = \frac{1}{N-1} \sum_{j=1}^{N} (y_{ij} - \bar{Y}_i)^2 : population variance for the i-th occasion.
\rho_{ii^*} = \frac{1}{(N-1) S_i S_{i^*}} \sum_{j=1}^{N} (y_{ij} - \bar{Y}_i)(y_{i^*j} - \bar{Y}_{i^*}) : population correlation coefficient between the observations on occasions i and i^* (i \ne i^* = 1, 2, \ldots, h).
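As a quick numerical illustration of these definitions, the following Python sketch computes \bar{Y}_i, S_i^2 and \rho_{12} for a small made-up population observed on two occasions (the array values are hypothetical):

    import numpy as np

    # hypothetical population values of the character on two occasions (N = 6 units)
    y1 = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 13.0])   # occasion 1
    y2 = np.array([13.0, 16.0, 10.0, 15.0, 12.0, 15.0])  # occasion 2
    N = len(y1)

    Y1_bar, Y2_bar = y1.mean(), y2.mean()                 # population means
    S1_sq = ((y1 - Y1_bar) ** 2).sum() / (N - 1)          # population variances (divisor N - 1)
    S2_sq = ((y2 - Y2_bar) ** 2).sum() / (N - 1)
    # population correlation between the two occasions
    rho_12 = ((y1 - Y1_bar) * (y2 - Y2_bar)).sum() / ((N - 1) * np.sqrt(S1_sq * S2_sq))

    print(Y1_bar, Y2_bar, S1_sq, S2_sq, rho_12)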
Two types of estimators are commonly used for the mean on the current occasion:
1. Type 1 estimators: They are obtained by estimating the current mean separately from the matched and the unmatched parts of the sample, e.g., a regression-type estimator \bar{y}_2^{***} from the matched part, and then combining the two estimators.
2. Type 2 estimators: They are obtained by considering the best linear combination of sample
means.
Type 1 estimators:
Two estimators are available for estimating \bar{Y}_2. Here \bar{y}_1 denotes the mean of all n first-occasion units, \bar{y}_1^* and \bar{y}_2^* the means of the m matched units on the first and second occasions, and \bar{y}_2^{**} the mean of the u freshly drawn units on the second occasion.
The first is \bar{y}_2^{**}, based on the unmatched part, with

Var(\bar{y}_2^{**}) = \frac{S_2^2}{u} = \frac{1}{W_u} \cdot \frac{S_2^2}{n} (say), where W_u = \frac{u}{n}.

The second is the regression estimator based on the matched part,

\bar{y}_2^{***} = \bar{y}_2^* + b(\bar{y}_1 - \bar{y}_1^*),

where

b = \frac{\sum_{j=1}^{m} (y_{1j} - \bar{y}_1^*)(y_{2j} - \bar{y}_2^*)}{\sum_{j=1}^{m} (y_{1j} - \bar{y}_1^*)^2}

is the sample regression coefficient computed from the matched units.
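A minimal sketch of the two Type 1 estimators, assuming the matched units are observed on both occasions and the fresh units only on the second occasion (the sample values and sizes below are hypothetical):

    import numpy as np

    # hypothetical data: n = 5 first-occasion units, of which the first m = 3 are matched
    y1_full  = np.array([10.0, 12.0, 11.0, 14.0, 9.0])    # first occasion, all n units
    y1_match = y1_full[:3]                                 # first occasion, matched units
    y2_match = np.array([11.0, 13.0, 12.0])                # second occasion, matched units
    y2_fresh = np.array([15.0, 10.0])                      # second occasion, u fresh units

    y2_dstar = y2_fresh.mean()                             # ybar_2** from the unmatched part

    # sample regression coefficient b from the matched units
    b = (((y1_match - y1_match.mean()) * (y2_match - y2_match.mean())).sum()
         / ((y1_match - y1_match.mean()) ** 2).sum())
    # regression estimator ybar_2*** = ybar_2* + b (ybar_1 - ybar_1*)
    y2_tstar = y2_match.mean() + b * (y1_full.mean() - y1_match.mean())

    print(y2_dstar, y2_tstar)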
If there are two uncorrelated unbiased estimators of a parameter, then the best linear unbiased estimator of the parameter can be obtained by combining them in a linear combination with suitably chosen weights. We now discuss how to choose the weights in such a linear combination of estimators.
Let \hat{\theta}_1 and \hat{\theta}_2 be two uncorrelated and unbiased estimators of \theta, i.e., E(\hat{\theta}_1) = E(\hat{\theta}_2) = \theta and Cov(\hat{\theta}_1, \hat{\theta}_2) = 0, with Var(\hat{\theta}_1) = \sigma_1^2 and Var(\hat{\theta}_2) = \sigma_2^2.
Consider \hat{\theta} = \phi \hat{\theta}_1 + (1 - \phi) \hat{\theta}_2, where 0 \le \phi \le 1 is the weight. Now choose \phi such that Var(\hat{\theta}) is minimum.

Var(\hat{\theta}) = \phi^2 \sigma_1^2 + (1 - \phi)^2 \sigma_2^2

\frac{\partial Var(\hat{\theta})}{\partial \phi} = 0
\Rightarrow 2 \phi \sigma_1^2 - 2 (1 - \phi) \sigma_2^2 = 0
\Rightarrow \phi = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \phi^*, say.

Further,

\frac{\partial^2 Var(\hat{\theta})}{\partial \phi^2} = 2 (\sigma_1^2 + \sigma_2^2) > 0,

so \phi^* gives the minimum. At \phi = \phi^*,

Var(\hat{\theta}) = \frac{\sigma_2^4 \sigma_1^2 + \sigma_1^4 \sigma_2^2}{(\sigma_1^2 + \sigma_2^2)^2} = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}.
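For example, with \sigma_1^2 = 4 and \sigma_2^2 = 1 the optimum weight is \phi^* = 1/(4 + 1) = 0.2 and the minimum variance is 1/(1/4 + 1/1) = 0.8, which is smaller than either variance alone. The same computation as a short Python sketch (the variances are hypothetical):

    # optimum weight and minimum variance for combining two uncorrelated unbiased estimators
    def combine(var1, var2):
        phi_star = var2 / (var1 + var2)              # weight attached to the first estimator
        var_min = 1.0 / (1.0 / var1 + 1.0 / var2)    # = var1 * var2 / (var1 + var2)
        return phi_star, var_min

    print(combine(4.0, 1.0))   # -> (0.2, 0.8)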
Now we apply this result to our case by combining \bar{y}_2^{***} and \bar{y}_2^{**} with these optimum weights. In particular:

For u = 0 (complete matching), Var(\hat{Y}_2) = \frac{S_2^2}{n}.
For u = n (no matching), Var(\hat{Y}_2) = \frac{S_2^2}{n}.
Type 2 estimators:
We now consider the minimum variance linear unbiased estimator of \bar{Y}_2 under the same sampling scheme.
A best linear (linear in terms of the observed means) unbiased estimator of \bar{Y}_2 is of the form

\hat{Y}_2^* = a \bar{y}_1^* + b \bar{y}_1 + c \bar{y}_2^* + d \bar{y}_2^{**},

where the constants a, b, c, d and the matching fraction \lambda = \frac{m}{n} (with \mu = \frac{u}{n} = 1 - \lambda) are to be suitably chosen so as to minimize the variance.
For E(\hat{Y}_2^*) = \bar{Y}_2, it requires
a + b = 0,
c + d = 1.
Since a minimum variance unbiased estimator is uncorrelated with every unbiased estimator of zero, we must have

Cov(\hat{Y}_2^*, \bar{y}_1 - \bar{y}_1^*) = 0      ...(1)
Cov(\hat{Y}_2^*, \bar{y}_2^* - \bar{y}_2^{**}) = 0      ...(2)

where, writing S_1^2 = S_2^2 = S^2 and ignoring the finite population correction,

Var(\bar{y}_2^*) = \frac{S^2}{m},  Var(\bar{y}_2^{**}) = \frac{S^2}{u}.
Now solving (1) and (2) and neglecting terms of order \frac{1}{N}, we have

\hat{Y}_2^* = \frac{\lambda \rho (\bar{y}_1 - \bar{y}_1^*) + \lambda \bar{y}_2^* + \mu (1 - \mu \rho^2) \bar{y}_2^{**}}{1 - \mu^2 \rho^2}.

For these values of a and c,

Var(\hat{Y}_2^*) = \frac{1 - \mu \rho^2}{1 - \mu^2 \rho^2} \cdot \frac{S^2}{n}.
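Under this formula the relative variance Var(\hat{Y}_2^*)/(S^2/n) = (1 - \mu\rho^2)/(1 - \mu^2\rho^2) depends only on the unmatched fraction \mu and on \rho, so a suitable \mu can be found by a direct grid search. A small Python sketch (the value \rho = 0.8 is only an example; the closed-form optimum quoted in the comments follows from the same formula):

    import numpy as np

    rho = 0.8                                   # hypothetical correlation between occasions
    mu = np.linspace(0.0, 1.0, 10001)           # unmatched fraction u/n
    rel_var = (1 - mu * rho**2) / (1 - mu**2 * rho**2)   # Var(Y2*_hat) / (S^2/n)

    mu_opt = mu[np.argmin(rel_var)]
    print(mu_opt, rel_var.min())
    # analytic optimum for comparison: mu = 1/(1 + sqrt(1 - rho^2)),
    # giving minimum relative variance (1 + sqrt(1 - rho^2))/2
    print(1 / (1 + np.sqrt(1 - rho**2)), (1 + np.sqrt(1 - rho**2)) / 2)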
Alternatively, minimize Var(\hat{Y}_2^*) with respect to a and c and find the optimum values of a and c; the same estimator and the same variance are obtained.
Till now, we used SRSWOR for the two occasions. We now consider unequal probability sampling schemes on the two occasions for estimating Y_2. We use the same notations as defined for varying probability sampling.
Let x_i be the value of an auxiliary (size) variable for unit i. Then p_i = \frac{x_i}{X_{tot}} is the size measure of unit i, where X_{tot} is the population total of the auxiliary variable.
Let s_1^* be the sample of size n drawn from P by PPSWR with probabilities p_j on the first occasion. On the second occasion the sample has two parts: s_{2m}^*, an SRSWR(m) subsample from s_1^*, and s_{2u}^*, an independent sample of size u selected from P by PPSWR.
The estimator is

\hat{Y}_{2,des} = \psi t_{2m} + (1 - \psi) t_{2u},  0 \le \psi \le 1,

where t_{2m} and t_{2u} are estimators of Y_2 based on the matched and the unmatched parts, respectively. Similarly, an estimator of Y_2 is

\hat{Y}_{2,ca} = \psi t_{2m} + (1 - \psi) t_{2u},  0 \le \psi \le 1,

where

t_{2m} = \frac{n}{m} \sum_{j \in s_{2m}^*} \frac{y_{2j} - y_{1j}}{\pi_j} + \sum_{j \in s_1^*} \frac{y_{1j}}{\pi_j},
t_{2u} = \sum_{j \in s_{2u}^*} \frac{y_{2j}}{\pi_j^*},

with \pi_j = n p_j and \pi_j^* = u p_j.
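A minimal sketch of computing t_{2m}, t_{2u} and the combined estimator from sample data, assuming the forms written above (all numerical values, size measures p_j, sample sizes and the weight \psi are hypothetical):

    import numpy as np

    n, m, u = 4, 2, 2

    # first-occasion PPSWR sample s1*: y-values and size measures p_j of the drawn units
    y1_s1 = np.array([20.0, 35.0, 15.0, 25.0])
    p_s1  = np.array([0.10, 0.20, 0.05, 0.15])
    pi_s1 = n * p_s1                          # pi_j = n p_j

    # matched SRSWR(m) subsample s2m*: draws 0 and 2 of s1*, observed again on occasion 2
    idx = np.array([0, 2])
    y2_s2m = np.array([22.0, 18.0])

    # independent fresh PPSWR(u) sample s2u* on the second occasion
    y2_s2u = np.array([30.0, 12.0])
    p_s2u  = np.array([0.12, 0.04])
    pi_s2u = u * p_s2u                        # pi_j* = u p_j

    t_2m = (n / m) * ((y2_s2m - y1_s1[idx]) / pi_s1[idx]).sum() + (y1_s1 / pi_s1).sum()
    t_2u = (y2_s2u / pi_s2u).sum()

    psi = 0.5                                 # weight, 0 <= psi <= 1
    Y2_hat = psi * t_2m + (1 - psi) * t_2u
    print(t_2m, t_2u, Y2_hat)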
Other schemes are also available along similar lines.
Sampling on more than two occasions
When there are more than two occasions, one has great flexibility in choosing both the sampling procedure and the estimator of the character.
Thus, on occasion i, one may have parts of the sample that are matched with occasion (i - 1), parts that are matched with occasion (i - 2), and so on.
One may consider a single multiple regression of the current occasion on all the previous matchings. However, it has been found that the loss of efficiency incurred by using the information from only the latest two or three occasions is fairly small in many situations.
Suppose that on occasion i, a matched subsample of size m_i is retained from the sample of the previous occasion and s_{iu}^* is a sample drawn by SRSWOR of size u_i (= n - m_i) from the units not already sampled. The matched part provides a regression-type estimator t_{im} of \bar{Y}_i, which uses \hat{Y}_{(i-1)}, the best estimator of the mean on the previous occasion, as the auxiliary information, and

Var(t_{im}) = \frac{S^2 (1 - \rho^2)}{m_i} + \rho^2 Var(\hat{Y}_{(i-1)}) = \frac{1}{W_{im}} \cdot \frac{S^2}{n}  (say),

assuming that \rho_{(i-1),i} = \rho for i = 2, 3, \ldots, and that terms of order \frac{1}{N} are negligible.
This follows from the variance of the regression estimator in double sampling,

Var(\hat{y}_{regd}) = S_u^2 \left( \frac{1}{n} - \frac{1}{N} \right) + \rho^2 S_y^2 \left( \frac{1}{n^*} - \frac{1}{N} \right) \approx \frac{S_y^2 (1 - \rho^2)}{n} + \frac{\rho^2 S_y^2}{n^*}

(with S_u^2 = S_y^2 (1 - \rho^2)), which is obtained after ignoring the terms of order \frac{1}{N}; here m_i is used in place of n and \frac{\rho^2 S_y^2}{n^*} = \rho^2 V(\bar{x}^*) is replaced by \rho^2 Var(\hat{Y}_{(i-1)}), since \hat{Y}_{(i-1)} plays the role of \bar{x}^* and S_i^2 is constant over occasions. Using weights inversely proportional to the variances, t_{im} is combined with \bar{y}_{iu}, the mean of the u_i freshly drawn units, as

\hat{y}_i = (1 - \phi_i) t_{im} + \phi_i \bar{y}_{iu},

where

\phi_i = \frac{W_{iu}}{W_{iu} + W_{im}}.
Then

Var(\hat{y}_i) = \frac{1}{W_{iu} + W_{im}} \cdot \frac{S^2}{n} = \frac{g_i S^2}{n}  (say),  i = 1, 2, \ldots  (g_1 = 1).
Substituting W_{iu} = \frac{u_i}{n} (since Var(\bar{y}_{iu}) = \frac{S^2}{u_i} = \frac{1}{W_{iu}} \cdot \frac{S^2}{n}) and W_{im} = \left[ \frac{(1 - \rho^2) n}{m_i} + \rho^2 g_{i-1} \right]^{-1} in Var(\hat{y}_i) = \frac{g_i S^2}{n}, we have

\frac{n}{g_i} = u_i + \frac{1}{\frac{1 - \rho^2}{m_i} + \frac{\rho^2 g_{i-1}}{n}}.
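Before optimizing over m_i, this relation can be iterated numerically for any chosen matching sizes; a short Python sketch (the values of \rho, n and the m_i are hypothetical):

    # iterate 1/g_i = W_iu + W_im with W_iu = u_i/n and W_im = 1/((1 - rho^2) n/m_i + rho^2 g_{i-1})
    def g_sequence(rho, n, m_list):
        g = [1.0]                              # g_1 = 1 (first occasion: plain sample mean)
        for m_i in m_list:                     # matching sizes for occasions 2, 3, ...
            u_i = n - m_i
            W_im = 1.0 / ((1 - rho**2) * n / m_i + rho**2 * g[-1])
            W_iu = u_i / n
            g.append(1.0 / (W_iu + W_im))      # Var(yhat_i) = g_i S^2 / n
        return g

    print(g_sequence(rho=0.8, n=100, m_list=[50, 50, 50]))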
Now maximize \frac{n}{g_i} with respect to m_i so as to minimize Var(\hat{y}_i). Differentiating \frac{n}{g_i} with respect to m_i (with u_i = n - m_i) and setting the derivative equal to zero, we get

\frac{1 - \rho^2}{m_i^2} = \left( \frac{1 - \rho^2}{m_i} + \frac{\rho^2 g_{i-1}}{n} \right)^2

\Rightarrow \hat{m}_i = \frac{n \sqrt{1 - \rho^2}}{g_{i-1} (1 + \sqrt{1 - \rho^2})}.
Substituting \hat{m}_i back,

\frac{1}{g_i} = 1 + \frac{(1 - \sqrt{1 - \rho^2})^2}{\rho^2 g_{i-1}},

or q_i = 1 + a q_{i-1}, where

q_i = \frac{1}{g_i},  q_1 = 1,  a = \frac{1 - \sqrt{1 - \rho^2}}{1 + \sqrt{1 - \rho^2}};  0 < a < 1.

As i \to \infty, q_i \to \frac{1}{1 - a}, so

g_\infty = 1 - a = \frac{2 \sqrt{1 - \rho^2}}{1 + \sqrt{1 - \rho^2}}.
\lim_{i \to \infty} Var(\hat{Y}_i) = Var(\hat{Y}_\infty) = \frac{2 S^2}{n} \cdot \frac{\sqrt{1 - \rho^2}}{1 + \sqrt{1 - \rho^2}}.
The limiting value of the optimum matching fraction as i \to \infty is

\lim_{i \to \infty} \frac{\hat{m}_i}{n} = \frac{\hat{m}}{n} = \frac{\sqrt{1 - \rho^2}}{g_\infty (1 + \sqrt{1 - \rho^2})} = \frac{1}{2}.
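These limits can be checked numerically by iterating q_i = 1 + a q_{i-1} together with the optimum \hat{m}_i; a small Python sketch (\rho = 0.8 and n = 100 are only example values):

    import math

    rho, n = 0.8, 100
    a = (1 - math.sqrt(1 - rho**2)) / (1 + math.sqrt(1 - rho**2))

    q, g = 1.0, 1.0                            # q_1 = 1/g_1 = 1
    for i in range(2, 11):
        # the optimum matching size uses g_{i-1}, which is the current value of g
        m_hat = n * math.sqrt(1 - rho**2) / (g * (1 + math.sqrt(1 - rho**2)))
        q = 1 + a * q                          # q_i = 1 + a q_{i-1}
        g = 1.0 / q                            # Var(yhat_i) = g_i S^2 / n
        print(i, round(m_hat / n, 4), round(g, 4))

    # limiting values: g -> 2 sqrt(1 - rho^2)/(1 + sqrt(1 - rho^2)) and m_hat/n -> 1/2
    print(2 * math.sqrt(1 - rho**2) / (1 + math.sqrt(1 - rho**2)))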
Thus, for the estimation of the current population mean by this procedure, one would not have to match more than 50% of the sample drawn on the last occasion.
Unless \rho is very high, say more than 0.8, the reduction in variance (1 - g_h) is only modest.
An unbiased estimator of the current mean is again of a linear form involving constants a_i, b_i, c_i, d_i and e_i. For unbiasedness, these constants must satisfy
c_i = -(a_i + b_i),
d_i = 1 - e_i.
Using these restrictions, find the constants and obtain the estimator.