
NOTES STAT INFERENCE-I STAT 401 B.S. STATISTICS

Statistical Inference:
Statistical Inference is "the art of drawing conclusions or inferences about a population
from the limited information contained in a sample taken from the population".
Two important areas of Statistical Inference are:
1. Estimation of parameters.
2. Testing of Hypotheses.
Estimation or Statistical Estimation:
The Statistical Estimation is a procedure of making judgment about the true but
unknown value of a population parameter by using the sample observations:
x₁, x₂, ..., xₙ taken from the population.
For example, we may estimate the mean and the variance of a population by computing
the mean and the variance of a sample drawn from the population.
Estimation is further divided into:
i. Point estimation.
ii. Interval estimation.
What is the difference between Estimate, Estimator and Estimation?
An estimator is a sample statistic that is used to estimate the true but unknown value of
a population parameter.
An estimate is a numerical value obtained by substituting the sample observations in
the sample statistic used as an estimator.
The procedure of getting an estimate of a population parameter is called estimation.
For example, if x₁, x₂, ..., xₙ is a random sample of size n taken from a population to estimate the population mean µ, then the sample mean
x̄ = Σxᵢ / n
is an Estimator. The numerical quantity calculated by this formula is called an Estimate, and the whole procedure is called Estimation.
Point estimation:
Point estimation is a process of getting a single value from the sample as an estimate
of the true but unknown value of a population parameter.
Suppose we want to estimate the average height (µ) of 1st year students of Govt. Emerson College, Multan on the basis of sample observations. Suppose we find the sample average height x̄ to be 64 inches. Then the formula
x̄ = Σxᵢ / n
is called a point estimator, the numerical value 64 inches so obtained is called a point estimate of the true but unknown value of the population parameter µ, and the whole procedure is called point estimation.
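As an illustration, the point estimate above can be computed in a few lines of Python (a sketch; the sample heights used here are hypothetical, not taken from the notes):

```python
# Hypothetical heights (inches) of a sample of first-year students
sample = [63, 65, 64, 66, 62, 64, 65, 63]

# Point estimator: the sample mean x̄ = Σxᵢ / n
x_bar = sum(sample) / len(sample)
print(x_bar)  # a single number -> a point estimate of µ
```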

Professor Muhammad Naeem Alvi, HOD Statistics Department, Govt. Graduate College of Science, Multan
Q: Describe the criteria for a good point estimator? OR
What are the desirable qualities of good point estimators?
Solution:
A point estimator is said to be a good estimator if it possesses the following four criteria:
1. Unbiasedness
2. Consistency,
3. Efficiency,
4. Sufficiency.
i. Unbiasedness:
An estimator θ̂ is said to be an unbiased estimator of the parameter θ if:
E(θ̂) = θ.
But if E(θ̂) ≠ θ, then θ̂ is said to be a biased estimator of the parameter.
(i) If E(θ̂) > θ, then θ̂ is said to be a positively biased estimator of the parameter.
(ii) If E(θ̂) < θ, then θ̂ is said to be a negatively biased estimator of the parameter.
Hence, bias is defined as:
Bias = E(θ̂) − θ
ii. Consistency:
If the difference between the sample statistic and the population parameter becomes
smaller and smaller as the sample size becomes larger, the sample statistic is said to be
a consistent estimator.
iii. Efficiency:
An unbiased estimator is defined to be efficient if the variance of its sampling
distribution is smaller than that of the variance of the sampling distribution of any other
unbiased estimator of the same parameter.
Suppose there are two unbiased estimators T₁ and T₂ of the same population parameter θ. Then T₁ is said to be a more efficient estimator than T₂ if:
Var(T₁) < Var(T₂).
The relative efficiency is defined as:
E = Var(T₂) / Var(T₁)
(i) If E > 1, then T₁ is said to be more efficient than T₂.
(ii) If E < 1, then T₂ is said to be more efficient than T₁.
iv. Sufficiency:


An estimator is said to be sufficient, if the statistic used as estimator uses all the
information that is contained in the sample.
Any statistic that is not computed from all the values in the sample is not a sufficient estimator.
For example, the sample mean x is a sufficient estimator of the population mean µ. But
the sample median is not a sufficient estimator.
Question: Define BLUE and how would we find the efficiency of biased estimators?
Ans: An estimator, ˆ that is linear, unbiased and has minimum variance among all
linear unbiased estimators of parameter  , is called a best linear unbiased estimator or
BLUE.
Q: Explain what is meant by the mean square error of an estimator.
Prove that:
MSE(T) = Var(T) + (Bias)²
Solution:
The MSE of an estimator is defined as "the expected value of the squared difference between the estimator and the true value (i.e. the parameter)".
Mathematically, if T is an estimator of the parameter θ, then the MSE is defined as:
MSE(T) = E[T − θ]²
Adding and subtracting E(T) inside the bracket:
MSE(T) = E[T − E(T) + E(T) − θ]²
MSE(T) = E[(T − E(T)) + (E(T) − θ)]²
MSE(T) = E[(T − E(T))² + (E(T) − θ)² + 2(T − E(T))(E(T) − θ)]
MSE(T) = E[T − E(T)]² + [E(T) − θ]² + 2[E(T) − E(T)][E(T) − θ] . . . (1)
Since E(T) − E(T) = 0, the last term vanishes.
As we know that:
Var(T) = E[T − E(T)]²
Moreover, we also know that:
Bias(T) = B(T) = E(T) − θ
Putting these values in equation (1), we get:
MSE(T) = Var(T) + [B(T)]²
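The decomposition MSE = Var + Bias² can be verified numerically. The sketch below (parameter values are hypothetical) uses the biased sample variance S² = Σ(xᵢ − x̄)²/n as the estimator T of θ = σ²; the empirical MSE equals the empirical variance plus the squared empirical bias exactly, up to floating-point error:

```python
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0
n, reps = 5, 20000

# Estimator T = biased sample variance S² = Σ(x−x̄)²/n, estimating θ = σ² = 1
vals = []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xb = sum(x) / n
    vals.append(sum((xi - xb) ** 2 for xi in x) / n)

theta = sigma ** 2
mse = sum((v - theta) ** 2 for v in vals) / reps   # E[(T − θ)²]
var = statistics.pvariance(vals)                   # Var(T)
bias = sum(vals) / reps - theta                    # E(T) − θ, close to −σ²/n
print(mse, var + bias ** 2)  # the two agree (up to float error)
```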
Theorem: Let X₁, X₂, X₃, ..., Xₙ be a random sample of size n taken from a population having mean µ and variance σ². Then show that
S² = Σ(Xᵢ − X̄)² / n
is a biased estimator of σ², i.e. E(S²) ≠ σ². (All sums run over i = 1, ..., n.)
Proof: The sample variance is
S² = (1/n) Σ(Xᵢ − X̄)²
S² = (1/n) Σ[Xᵢ − µ + µ − X̄]² = (1/n) Σ[(Xᵢ − µ) − (X̄ − µ)]²
S² = (1/n) Σ[(Xᵢ − µ)² + (X̄ − µ)² − 2(Xᵢ − µ)(X̄ − µ)]
S² = (1/n) [Σ(Xᵢ − µ)² + n(X̄ − µ)² − 2(X̄ − µ) Σ(Xᵢ − µ)] .........(i)
Consider only the factor:
−2(X̄ − µ) Σ(Xᵢ − µ) = −2(X̄ − µ)(ΣXᵢ − nµ) = −2(X̄ − µ)(nX̄ − nµ) = −2n(X̄ − µ)²
Putting this in equation (i):
S² = (1/n)[Σ(Xᵢ − µ)² + n(X̄ − µ)² − 2n(X̄ − µ)²]
S² = (1/n)[Σ(Xᵢ − µ)² − n(X̄ − µ)²]
Taking expectation on both sides, we get:
E(S²) = (1/n)[Σ E(Xᵢ − µ)² − n·E(X̄ − µ)²] ................(2)
Case I:
As we know that
E(Xᵢ − µ)² = Var(X) = σ², and
E(X̄ − µ)² = Var(X̄) = σ²/n [when sampling is done with replacement]
Putting these values in equation (2), we get:
E(S²) = (1/n)[nσ² − n·(σ²/n)] = (1/n)[nσ² − σ²] = ((n − 1)/n)·σ²
Therefore it is proved that E(S²) ≠ σ².
Case II:
As we know that
E(Xᵢ − µ)² = Var(X) = σ², and
E(X̄ − µ)² = Var(X̄) = (σ²/n)·(N − n)/(N − 1) [when sampling is done without replacement]
Putting these values in equation (2):
E(S²) = (1/n)[nσ² − n·(σ²/n)·(N − n)/(N − 1)] = (σ²/n)·[n − (N − n)/(N − 1)]
= (σ²/n)·(nN − n − N + n)/(N − 1) = (σ²/n)·N(n − 1)/(N − 1) = (N/(N − 1))·((n − 1)/n)·σ²
Therefore it is proved that E(S²) ≠ σ².
Hence, from both cases, it is proved that:
E(S²) = ((n − 1)/n)·σ² [when sampling is done with replacement]
E(S²) = (N/(N − 1))·((n − 1)/n)·σ² [when sampling is done without replacement]
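The with-replacement result E(S²) = ((n − 1)/n)·σ² can be checked by simulation; the sketch below (parameter values are hypothetical) uses n = 4 and σ² = 4, so the average S² should be near 3, not 4:

```python
import random

random.seed(7)
mu, sigma2 = 0.0, 4.0
n, reps = 4, 30000

s2_vals = []
for _ in range(reps):
    x = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xb = sum(x) / n
    s2_vals.append(sum((xi - xb) ** 2 for xi in x) / n)  # divisor n (biased)

mean_s2 = sum(s2_vals) / reps
print(mean_s2)                 # ≈ ((n−1)/n)·σ² = (3/4)·4 = 3
print((n - 1) / n * sigma2)    # theoretical value: 3.0
```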

Theorem: Let X₁, X₂, X₃, ..., Xₙ be a random sample of size n taken from a population having mean µ and variance σ². Then prove that
s² = Σ(Xᵢ − X̄)² / (n − 1)
is an unbiased estimator of σ², i.e. E(s²) = σ². (All sums run over i = 1, ..., n.)
Solution:
The sample variance is
s² = (1/(n − 1)) Σ(Xᵢ − X̄)²
s² = (1/(n − 1)) Σ[Xᵢ − µ + µ − X̄]² = (1/(n − 1)) Σ[(Xᵢ − µ) − (X̄ − µ)]²
s² = (1/(n − 1)) Σ[(Xᵢ − µ)² + (X̄ − µ)² − 2(Xᵢ − µ)(X̄ − µ)]
s² = (1/(n − 1)) [Σ(Xᵢ − µ)² + n(X̄ − µ)² − 2(X̄ − µ) Σ(Xᵢ − µ)] .........(1)
Consider only the factor:
−2(X̄ − µ) Σ(Xᵢ − µ) = −2(X̄ − µ)(ΣXᵢ − nµ) = −2(X̄ − µ)(nX̄ − nµ) = −2n(X̄ − µ)²
Putting this in equation (1):
s² = (1/(n − 1))[Σ(Xᵢ − µ)² + n(X̄ − µ)² − 2n(X̄ − µ)²]
s² = (1/(n − 1))[Σ(Xᵢ − µ)² − n(X̄ − µ)²]
Taking expectation on both sides, we get:
E(s²) = (1/(n − 1))[Σ E(Xᵢ − µ)² − n·E(X̄ − µ)²] .............(2)
Case I:
E(Xᵢ − µ)² = Var(X) = σ², and
E(X̄ − µ)² = Var(X̄) = σ²/n [when sampling is done with replacement]
Putting these values in equation (2), we get:
E(s²) = (1/(n − 1))[nσ² − n·(σ²/n)] = (1/(n − 1))[nσ² − σ²] = σ²(n − 1)/(n − 1) = σ²
Therefore it is proved that s² = Σ(Xᵢ − X̄)²/(n − 1) is an unbiased estimator of σ², i.e. E(s²) = σ².
Case II:
As we know that
E(Xᵢ − µ)² = Var(X) = σ², and
E(X̄ − µ)² = Var(X̄) = (σ²/n)·(N − n)/(N − 1) [when sampling is done without replacement]
Putting these values in equation (2):
E(s²) = (1/(n − 1))[nσ² − n·(σ²/n)·(N − n)/(N − 1)] = (σ²/(n − 1))·(nN − n − N + n)/(N − 1)
= (σ²/(n − 1))·N(n − 1)/(N − 1) = (N/(N − 1))·σ²
Therefore:
E(s²) = σ² [when sampling is done with replacement], and
E(s²) = (N/(N − 1))·σ² [when sampling is done without replacement]
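Python's standard library implements exactly this n − 1 divisor: `statistics.variance` computes s², while `statistics.pvariance` divides by n. A quick check (parameter values are hypothetical):

```python
import random
import statistics

random.seed(3)
mu, sigma = 5.0, 3.0   # so σ² = 9
n, reps = 4, 30000

# statistics.variance divides by n−1, matching the unbiased estimator s²
s2_vals = [statistics.variance([random.gauss(mu, sigma) for _ in range(n)])
           for _ in range(reps)]

mean_s2 = sum(s2_vals) / reps
print(mean_s2)  # ≈ σ² = 9, with no (n−1)/n shrinkage
```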

Q: If X₁, X₂, X₃ is a random sample from a normal population with mean µ and variance σ², what is the relative efficiency of the estimator
T₁ = (X₁ + 2X₂ + X₃)/4
with respect to T₂ = X̄?
Solution:
To find the relative efficiency, we first check the property of unbiasedness.
As T₁ = (X₁ + 2X₂ + X₃)/4 is a weighted mean, taking expectation on both sides:
E(T₁) = E[(X₁ + 2X₂ + X₃)/4]
E(T₁) = [E(X₁) + 2E(X₂) + E(X₃)]/4
As we know that E(X) = µ:
E(T₁) = (1/4)[µ + 2µ + µ] = µ
Therefore T₁ is an unbiased estimator of µ.
Now, T₂ = X̄ = (X₁ + X₂ + X₃)/3 is the arithmetic mean. Taking expectation on both sides:
E(T₂) = E[(X₁ + X₂ + X₃)/3]
E(T₂) = [E(X₁) + E(X₂) + E(X₃)]/3
E(T₂) = (µ + µ + µ)/3 = µ
Therefore T₂ is an unbiased estimator of µ.
Now, as T₁ and T₂ are both unbiased estimators of the population mean µ, we check their relative efficiency in terms of their variances.
As T₁ = (X₁ + 2X₂ + X₃)/4, taking variance on both sides:
Var(T₁) = Var[(X₁ + 2X₂ + X₃)/4]
Var(T₁) = (1/16)[Var(X₁) + 4Var(X₂) + Var(X₃)]
As we know that Var(X) = σ²:
Var(T₁) = (1/16)[σ² + 4σ² + σ²] = (6/16)σ² = (3/8)σ²
As T₂ = (X₁ + X₂ + X₃)/3, taking variance on both sides:
Var(T₂) = Var[(X₁ + X₂ + X₃)/3]
Var(T₂) = (1/9)[Var(X₁) + Var(X₂) + Var(X₃)]
Var(T₂) = (1/9)[σ² + σ² + σ²] = (3/9)σ² = (1/3)σ²
The relative efficiency of T₁ with respect to T₂ is defined as:
E = Var(T₂)/Var(T₁) = ((1/3)σ²)/((3/8)σ²) = 8/9 < 1
Now, as E < 1, T₂ is said to be more efficient than T₁.
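The two variances derived above, (3/8)σ² and (1/3)σ², can be verified by simulation. A sketch assuming σ² = 1:

```python
import random
import statistics

random.seed(11)
reps = 40000

t1_vals, t2_vals = [], []
for _ in range(reps):
    x1, x2, x3 = (random.gauss(0.0, 1.0) for _ in range(3))
    t1_vals.append((x1 + 2 * x2 + x3) / 4)   # weighted mean T1
    t2_vals.append((x1 + x2 + x3) / 3)       # arithmetic mean T2

v1 = statistics.pvariance(t1_vals)
v2 = statistics.pvariance(t2_vals)
print(v1)  # ≈ 3/8 = 0.375
print(v2)  # ≈ 1/3 ≈ 0.333, so T2 is the more efficient estimator
```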
Q: Based on a random sample of three observations, consider three possible estimators of µ:
X̄₁ = (1/3)X₁ + (1/3)X₂ + (1/3)X₃,  X̄₂ = (5/8)X₁ + (1/8)X₂ + (1/4)X₃,  X̄₃ = 0.2X₁ + 0.3X₂ + 0.4X₃
Find:
i. Which are unbiased.
ii. The efficiency of each unbiased estimator relative to the other.
Solution:
We first check the property of unbiasedness.
X̄₁ = (1/3)X₁ + (1/3)X₂ + (1/3)X₃. Taking expectation on both sides:
E(X̄₁) = (1/3)E(X₁) + (1/3)E(X₂) + (1/3)E(X₃)
As we know that E(X) = µ:
E(X̄₁) = (1/3)µ + (1/3)µ + (1/3)µ = µ
Therefore X̄₁ is an unbiased estimator of µ.
X̄₂ = (5/8)X₁ + (1/8)X₂ + (1/4)X₃. Taking expectation on both sides:
E(X̄₂) = (5/8)E(X₁) + (1/8)E(X₂) + (1/4)E(X₃) = (5/8)µ + (1/8)µ + (1/4)µ = µ
Therefore X̄₂ is an unbiased estimator of µ.
X̄₃ = 0.2X₁ + 0.3X₂ + 0.4X₃. Taking expectation on both sides:
E(X̄₃) = 0.2E(X₁) + 0.3E(X₂) + 0.4E(X₃) = 0.2µ + 0.3µ + 0.4µ = 0.9µ ≠ µ
Therefore X̄₃ is a biased estimator of µ (its weights sum to 0.9, not 1).
Hence only X̄₁ and X̄₂ are unbiased estimators of the population mean µ, so we check their relative efficiency in terms of their variances:
Var(X̄₁) = (1/9)Var(X₁) + (1/9)Var(X₂) + (1/9)Var(X₃) = (1/9)σ² + (1/9)σ² + (1/9)σ² = (1/3)σ²
Var(X̄₂) = (25/64)Var(X₁) + (1/64)Var(X₂) + (1/16)Var(X₃) = (25/64)σ² + (1/64)σ² + (4/64)σ² = (30/64)σ² = (15/32)σ²
The relative efficiency of X̄₁ with respect to X̄₂ is defined as:
E = Var(X̄₂)/Var(X̄₁) = ((15/32)σ²)/((1/3)σ²) = 45/32 > 1
Now, as E > 1, X̄₁ is said to be relatively more efficient than X̄₂.
Q: A random sample X₁, X₂, ..., X₈ is to be taken from a population with mean µ and variance σ². The following statistics are proposed as estimators of µ:
T₁ = (X₁ + X₂ + X₃)/3,  T₂ = (2X₂ − X₄ + 4X₅ + X₈)/6,  T₃ = (X₈ − X₁)/8,  T₄ = X₅
i. Which of the above are unbiased?
ii. Which of the above is most efficient?
Solution:
We first check the property of unbiasedness.
As T₁ = (X₁ + X₂ + X₃)/3, taking expectation on both sides:
E(T₁) = [E(X₁) + E(X₂) + E(X₃)]/3
As we know that E(X) = µ:
E(T₁) = (1/3)(µ + µ + µ) = µ
Therefore T₁ is an unbiased estimator of µ.
T₂ = (2X₂ − X₄ + 4X₅ + X₈)/6. Taking expectation on both sides:
E(T₂) = [2E(X₂) − E(X₄) + 4E(X₅) + E(X₈)]/6 = (2µ − µ + 4µ + µ)/6 = µ
Therefore T₂ is an unbiased estimator of µ.
T₃ = (X₈ − X₁)/8. Taking expectation on both sides:
E(T₃) = [E(X₈) − E(X₁)]/8 = (µ − µ)/8 = 0 ≠ µ
Therefore T₃ is a biased estimator of µ.
T₄ = X₅. Taking expectation on both sides:
E(T₄) = E(X₅) = µ
Therefore T₄ is an unbiased estimator of µ.
Hence T₁, T₂ and T₄ are unbiased estimators of the population mean µ, so we check their relative efficiency in terms of their variances.
Var(T₁) = (1/9)[Var(X₁) + Var(X₂) + Var(X₃)] = (1/9)(σ² + σ² + σ²) = (1/3)σ²
Var(T₂) = (1/36)[4Var(X₂) + Var(X₄) + 16Var(X₅) + Var(X₈)] = (1/36)(4σ² + σ² + 16σ² + σ²) = (22/36)σ² = (11/18)σ²
Var(T₄) = Var(X₅) = σ²
The relative efficiency of T₁ with respect to T₂ is:
E = Var(T₂)/Var(T₁) = ((11/18)σ²)/((1/3)σ²) = 11/6 > 1, so T₁ is more efficient than T₂.
The relative efficiency of T₁ with respect to T₄ is:
E = Var(T₄)/Var(T₁) = σ²/((1/3)σ²) = 3 > 1, so T₁ is more efficient than T₄.
The relative efficiency of T₂ with respect to T₄ is:
E = Var(T₄)/Var(T₂) = σ²/((11/18)σ²) = 18/11 > 1, so T₂ is more efficient than T₄.
Conclusion: Overall, T₁ is the most efficient of the unbiased estimators.
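Both conclusions — that T₃ is biased and that T₁ has the smallest variance among the unbiased estimators — show up directly in simulation. A sketch with hypothetical values µ = 2, σ² = 1 (a nonzero mean is needed to make the bias of T₃ visible):

```python
import random
import statistics

random.seed(5)
mu = 2.0
reps = 30000

t1, t2, t3, t4 = [], [], [], []
for _ in range(reps):
    x = [random.gauss(mu, 1.0) for _ in range(8)]
    t1.append((x[0] + x[1] + x[2]) / 3)                 # T1
    t2.append((2 * x[1] - x[3] + 4 * x[4] + x[7]) / 6)  # T2
    t3.append((x[7] - x[0]) / 8)                        # T3
    t4.append(x[4])                                     # T4

m3 = sum(t3) / reps             # centres on 0, not µ: T3 is biased
v1 = statistics.pvariance(t1)   # ≈ 1/3 σ²  (smallest)
v2 = statistics.pvariance(t2)   # ≈ 11/18 σ²
v4 = statistics.pvariance(t4)   # ≈ σ²
print(m3, v1, v2, v4)
```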

Q: Let X₁, X₂, X₃ and X₄ be a random sample of size n = 4 from N(µ, σ²). A statistician wishes to estimate the mean µ by using either of the following two estimators:
T₁ = (X₁ + X₂ + X₃ + X₄)/4 [the sample mean, X̄]
T₂ = (X₁ + 2X₂ + 3X₃ + X₄)/7 [a weighted mean, X̄w]
Solution:
Taking expectation of T₁:
E(T₁) = E[(X₁ + X₂ + X₃ + X₄)/4]
E(T₁) = [E(X₁) + E(X₂) + E(X₃) + E(X₄)]/4
As we know that E(X) = µ:
E(T₁) = (1/4)(µ + µ + µ + µ) = µ
Therefore T₁ is an unbiased estimator of µ.
T₂ = (X₁ + 2X₂ + 3X₃ + X₄)/7. Taking expectation on both sides:
E(T₂) = [E(X₁) + 2E(X₂) + 3E(X₃) + E(X₄)]/7 = (µ + 2µ + 3µ + µ)/7 = µ
Therefore T₂ is an unbiased estimator of µ.
Now, as T₁ and T₂ are both unbiased estimators of the population mean µ, we check their relative efficiency in terms of their variances.
Var(T₁) = (1/16)[Var(X₁) + Var(X₂) + Var(X₃) + Var(X₄)] = (1/16)(σ² + σ² + σ² + σ²) = (4/16)σ² = (1/4)σ²
Var(T₂) = (1/49)[Var(X₁) + 4Var(X₂) + 9Var(X₃) + Var(X₄)] = (1/49)(σ² + 4σ² + 9σ² + σ²) = (15/49)σ²
The relative efficiency of T₁ with respect to T₂ is:
E = Var(T₂)/Var(T₁) = ((15/49)σ²)/((1/4)σ²) = 60/49 > 1
Now, as E > 1, T₁ is said to be relatively more efficient than T₂.
Pooled Estimators from Two or More Samples
Sometimes, we have to estimate parameters by pooling (combining) the values from two or more random samples taken from the same population.
i. Pooled estimators of the population mean µ and the population variance σ²:
Let us consider two random samples of sizes n₁ and n₂ from a population with unknown mean µ and unknown variance σ². If X̄₁, X̄₂ are the two sample means and S₁², S₂² are the two sample variances respectively, then:
Combined sample mean: X̄c = (n₁X̄₁ + n₂X̄₂)/(n₁ + n₂)
Pooled (combined) sample variance: Sc² = (n₁S₁² + n₂S₂²)/(n₁ + n₂ − 2)
Similarly, for three random samples:
Sc² = (n₁S₁² + n₂S₂² + n₃S₃²)/(n₁ + n₂ + n₃ − 3)
ii. Pooled estimator of the population proportion p:
If we take random samples of sizes n₁, n₂ with sample proportions p̂₁, p̂₂ respectively from a binomial population with unknown proportion of success p, then:
Pooled (combined) sample proportion: p̂c = (n₁p̂₁ + n₂p̂₂)/(n₁ + n₂)
Similarly, for three random samples:
p̂c = (n₁p̂₁ + n₂p̂₂ + n₃p̂₃)/(n₁ + n₂ + n₃)
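The pooled formulas above can be applied directly; the sketch below uses hypothetical sample summaries (the sizes, means, variances, and proportions are made up for illustration):

```python
# Pooled estimates from two samples drawn from the same population
n1, n2 = 10, 15
xbar1, xbar2 = 20.0, 22.0   # hypothetical sample means
s2_1, s2_2 = 4.0, 6.0       # hypothetical sample variances (divisor n)
p1, p2 = 0.40, 0.60         # hypothetical sample proportions

xbar_c = (n1 * xbar1 + n2 * xbar2) / (n1 + n2)   # combined mean
s2_c = (n1 * s2_1 + n2 * s2_2) / (n1 + n2 - 2)   # pooled variance
p_c = (n1 * p1 + n2 * p2) / (n1 + n2)            # pooled proportion

print(xbar_c, s2_c, p_c)
```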
