Point and Interval Estimation

This chapter discusses the estimation of parameters associated with the probability distribution of a random variable X. It defines unbiased and consistent estimators, gives a theorem for proving consistency, and includes examples demonstrating the unbiasedness of the sample mean and the bias of the sample variance. It also derives the properties and variances of estimators based on normal distributions and constructs confidence intervals for the mean and variance.


ESTIMATION OF PARAMETERS:

In this chapter, we discuss the estimation of parameters associated with the probability distribution of a random variable X.
Definition: Let X be a random variable with some probability distribution depending on an unknown parameter θ. Let X₁, X₂, X₃, …, Xₙ be a sample of size n taken from the distribution of X. If g(X₁, X₂, X₃, …, Xₙ) is a function of the sample used to estimate θ, we refer to g as an estimator of θ; the value that g assumes is referred to as an estimate of θ. We write θ̂ = g(X₁, X₂, X₃, …, Xₙ).
Definition: Let θ̂ be an estimator of the unknown parameter θ associated with the distribution of a random variable X. Then θ̂ is an unbiased estimator of θ if E(θ̂) = θ for all θ.

Note: Any good estimate should be close to the value it is estimating; unbiasedness means that the average value of the estimate equals the true parameter value.

Definition: Let θ̂ be an estimate of the parameter θ. We say that θ̂ is a consistent estimate of θ if

lim_{n→∞} P{|θ̂ − θ| > ε} = 0 for every ε > 0, or equivalently, lim_{n→∞} P{|θ̂ − θ| ≤ ε} = 1.

Note: The above definition says that the estimate becomes 'better' as the sample size increases. We shall establish the unbiasedness and consistency of an estimate using the following theorem.

Theorem: Let θ̂ be an estimate of the parameter θ based on a sample of size n. If E(θ̂) = θ and

lim_{n→∞} V(θ̂) = 0, then θ̂ is a consistent estimate of θ.
Proof: We prove this using Chebyshev's inequality. For any ε > 0,

P{|θ̂ − θ| > ε} ≤ (1/ε²) E(θ̂ − θ)² = (1/ε²) E{(θ̂ − E(θ̂)) + (E(θ̂) − θ)}²   (add and subtract E(θ̂))

= (1/ε²) { E[θ̂ − E(θ̂)]² + 2 E[θ̂ − E(θ̂)](E(θ̂) − θ) + (E(θ̂) − θ)² }

= (1/ε²) { V(θ̂) + 0 + (E(θ̂) − θ)² }   (since E[θ̂ − E(θ̂)] = 0)

= (1/ε²) { V(θ̂) + (E(θ̂) − θ)² } → 0 as n → ∞, using the given conditions.
∴ 𝜃̂ is a consistent estimate of θ.
Examples:

1. Show that sample mean is an unbiased and consistent estimate of population mean.

Solution: Let X₁, X₂, …, Xₙ be a sample taken from the distribution of X having mean value μ and variance σ².

∴ X̄ = (Σᵢ₌₁ⁿ Xᵢ)/n is the sample mean.
To prove: E(X̄) = μ and lim_{n→∞} V(X̄) = 0.

Consider E(X̄) = E( (Σᵢ₌₁ⁿ Xᵢ)/n ) = (1/n) Σᵢ₌₁ⁿ E(Xᵢ) = nμ/n = μ.

That is, the sample mean is an unbiased estimate of population mean.


V(X̄) = V( (Σᵢ₌₁ⁿ Xᵢ)/n ) = (1/n²) V(Σᵢ₌₁ⁿ Xᵢ) = (1/n²) Σᵢ₌₁ⁿ V(Xᵢ) = nV(X)/n² = σ²/n.

∴ lim_{n→∞} V(X̄) = lim_{n→∞} σ²/n = 0

That is, the sample mean is a consistent estimate of the population mean.
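As a quick check of Example 1, here is a minimal simulation sketch (not part of the original notes), assuming NumPy is available and using arbitrary illustrative values for μ, σ and n:

```python
import numpy as np

# Monte Carlo check: the sample mean is unbiased and V(X-bar) = sigma^2 / n.
# mu, sigma and the sample sizes below are illustrative choices, not from the notes.
rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0
for n in (10, 100, 1000):
    xbars = rng.normal(mu, sigma, size=(20000, n)).mean(axis=1)
    # The average of the simulated X-bars stays near mu (unbiasedness),
    # and their variance tracks sigma^2 / n, shrinking as n grows (consistency).
    print(n, xbars.mean(), xbars.var(), sigma**2 / n)
```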

2. Show that sample variance 𝑆 2 is not an unbiased estimate of population variance.

Solution: Let 𝜎 2 be the variance for the distribution of X. That is, population variance. Let
𝑆 2 be the sample variance.

To prove: 𝐸(𝑆 2 ) ≠ 𝜎 2

By definition,

S² = Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n = Σᵢ₌₁ⁿ (Xᵢ² + X̄² − 2X̄Xᵢ)/n = [Σᵢ₌₁ⁿ Xᵢ² + nX̄² − 2nX̄²]/n

= Σᵢ₌₁ⁿ Xᵢ²/n − X̄²   { since Σᵢ₌₁ⁿ X̄² = nX̄² and nX̄ = Σᵢ₌₁ⁿ Xᵢ }

Consider E(S²) = E{ Σᵢ₌₁ⁿ Xᵢ²/n − X̄² } = E{ Σᵢ₌₁ⁿ Xᵢ²/n } − E{X̄²}

= (1/n) E(Σᵢ₌₁ⁿ Xᵢ²) − E(X̄²)   [ since E(X²) = V(X) + μ² and E(X̄²) = V(X̄) + μ² ]

= (1/n)[n(σ² + μ²)] − {σ²/n + μ²}   (the variance of the sample mean is V(X̄) = σ²/n)

= (σ² + μ²) − {σ²/n + μ²} = σ² − σ²/n = σ²(n − 1)/n

∴ 𝐸(𝑆 2 ) ≠ 𝜎 2
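A small simulation sketch of this bias (an illustration added here, assuming NumPy; the parameter values are arbitrary):

```python
import numpy as np

# Monte Carlo check that S^2 = sum((X_i - X-bar)^2) / n has mean sigma^2 * (n - 1) / n.
rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 8
samples = rng.normal(mu, sigma, size=(50000, n))
s2 = samples.var(axis=1)            # ddof=0: divide by n, the S^2 of these notes
print(s2.mean())                    # close to sigma^2 * (n - 1) / n = 3.5, not 4
print(sigma**2 * (n - 1) / n)
```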
3. Show that X̄, the mean of a random sample of size n from the distribution with density

f(x, θ) = (1/θ) e^(−x/θ) for 0 < x < ∞, 0 < θ < ∞, and f(x, θ) = 0 elsewhere,

is an unbiased estimate of θ and has variance θ²/n.

Solution: First we shall find the mean and variance of X.


E(X) = ∫_{−∞}^{∞} x f(x, θ) dx = ∫_{0}^{∞} x (1/θ) e^(−x/θ) dx

= (1/θ) [ −θx e^(−x/θ) − θ² e^(−x/θ) ]_{0}^{∞} = θ²/θ = θ

E(X²) = ∫_{−∞}^{∞} x² f(x, θ) dx = ∫_{0}^{∞} x² (1/θ) e^(−x/θ) dx

= (1/θ) [ −θx² e^(−x/θ) − 2θ²x e^(−x/θ) − 2θ³ e^(−x/θ) ]_{0}^{∞} = 2θ³/θ = 2θ²

∴ V(X) = E(X²) − [E(X)]² = 2θ² − θ² = θ²

The random variable X has mean θ and variance θ². Hence E(X̄) = θ and V(X̄) = θ²/n, as required.
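A simulation sketch of Example 3 (added for illustration, assuming NumPy; θ and n are arbitrary choices):

```python
import numpy as np

# f(x, theta) = (1/theta) e^{-x/theta} is the exponential density with mean theta,
# so X-bar should have mean theta and variance theta^2 / n.
rng = np.random.default_rng(2)
theta, n = 3.0, 50
xbars = rng.exponential(scale=theta, size=(50000, n)).mean(axis=1)
print(xbars.mean())                 # close to theta = 3
print(xbars.var(), theta**2 / n)    # both close to 0.18
```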

4. Let X₁, X₂, …, Xₙ be a sample taken from a normal distribution with μ = 0 and variance σ² = θ, 0 < θ < ∞. Show that Y = Σᵢ₌₁ⁿ Xᵢ²/n is an unbiased and consistent estimate of θ.

Solution: X has the N(0, θ) distribution, and the sample mean X̄ has the N(0, θ/n) distribution.

Therefore E(X̄) = 0 and V(X̄) = θ/n.

Let Y = Σᵢ₌₁ⁿ Xᵢ²/n.

To prove: E(Y) = θ and V(Y) → 0 as n → ∞.


Consider E(Y) = E( Σᵢ₌₁ⁿ Xᵢ²/n ) = (1/n) E(Σᵢ₌₁ⁿ Xᵢ²) = nE(X²)/n = E(X²)

E(Y) = E(X²) = V(X) + [E(X)]² = θ + 0 = θ, which shows that Y is an unbiased estimate of θ.


We know that if X has N(μ, σ²), then Z = (X − μ)/σ has N(0, 1), and Z² = ((X − μ)/σ)² = ((X − 0)/√θ)² = X²/θ has the χ²(1) distribution.

∴ E(X²/θ) = 1 and V(X²/θ) = 2.

This implies E(X²) = θ and V(X²) = 2θ².

Consider V(Y) = V( Σᵢ₌₁ⁿ Xᵢ²/n ) = (1/n²) V(Σᵢ₌₁ⁿ Xᵢ²) = nV(X²)/n² = V(X²)/n = 2θ²/n

∴ lim_{n→∞} V(Y) = lim_{n→∞} 2θ²/n = 0, so by the theorem Y is a consistent estimate of θ.
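A simulation sketch of Example 4 (added for illustration, assuming NumPy; θ and n are arbitrary choices):

```python
import numpy as np

# Y = sum(X_i^2) / n with X_i ~ N(0, theta): E(Y) = theta, V(Y) = 2 * theta^2 / n.
rng = np.random.default_rng(3)
theta, n = 2.0, 100
x = rng.normal(0.0, np.sqrt(theta), size=(50000, n))   # std = sqrt(theta)
y = (x**2).mean(axis=1)
print(y.mean())                     # close to theta = 2
print(y.var(), 2 * theta**2 / n)    # both close to 0.08
```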

5. Let Y₁, Y₂ be two independent unbiased statistics for θ. The variance of Y₁ is twice the variance of Y₂. Find constants k₁ and k₂ such that Y = k₁Y₁ + k₂Y₂ is an unbiased statistic for θ with the smallest possible variance for such a linear combination.
Solution: Given that 𝐸(𝑌1 ) = 𝐸(𝑌2 ) = 𝜃, 𝑉(𝑌1 ) = 2𝑉( 𝑌2 ) = 2𝜎 2 .
To find 𝑘1 𝑎𝑛𝑑 𝑘2 such that 𝐸(𝑌) = 𝐸(𝑘1 𝑌1 + 𝑘2 𝑌2 ) = 𝜃.
That is, 𝑘1 𝐸(𝑌1 ) + 𝑘2 𝐸(𝑌2 ) = 𝜃 ⇒ 𝑘1 𝜃 + 𝑘2 𝜃 = 𝜃 ⇒ 𝑘1 + 𝑘2 = 1 ⇒ 𝑘2 = 1 − 𝑘1 .
Consider V(Y) = V(k₁Y₁ + k₂Y₂) = k₁² V(Y₁) + k₂² V(Y₂) = σ²(2k₁² + k₂²)

V(Y) = σ²(2k₁² + k₂²) = σ²(2k₁² + (1 − k₁)²)

V(Y) has a minimum if dV(Y)/dk₁ = 0

⇒ dV(Y)/dk₁ = d[σ²(2k₁² + (1 − k₁)²)]/dk₁ = 0 ⇒ 4k₁ − 2(1 − k₁) = 0 ⇒ k₁ = 1/3 and k₂ = 2/3
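A numerical check of this minimization (added for illustration, assuming NumPy):

```python
import numpy as np

# V(Y)/sigma^2 = 2*k1^2 + (1 - k1)^2 over 0 <= k1 <= 1 is smallest at k1 = 1/3.
k1 = np.linspace(0.0, 1.0, 100001)
v = 2 * k1**2 + (1 - k1)**2
print(k1[np.argmin(v)])   # approximately 1/3, so k2 = 2/3
print(v.min())            # minimum value 2/3, versus 1 at k1 = 0 and 2 at k1 = 1
```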

6. Let X₁, X₂, …, X₂₅ and Y₁, Y₂, …, Y₂₅ be two independent random samples from the distributions N(3, 16) and N(4, 9) respectively. Evaluate P(X̄/Ȳ > 1).

Solution: Let X ~ N(3, 16) and Y ~ N(4, 9). Then X̄ ~ N(3, 16/25) and Ȳ ~ N(4, 9/25).

Now X̄/Ȳ > 1 ⇒ X̄ > Ȳ ⇒ X̄ − Ȳ > 0. Since X̄ − Ȳ ~ N[3 × 1 + 4 × (−1), 1² × 16/25 + (−1)² × 9/25] ~ N(−1, 1), we have Z = X̄ − Ȳ + 1 ~ N(0, 1).

Consider P(X̄/Ȳ > 1) = P(X̄ − Ȳ > 0) = P(X̄ − Ȳ + 1 > 1) = P(Z > 1)

= 1 − Φ(1) = 1 − 0.841 = 0.159
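The same probability can be checked with SciPy (an added illustration, not part of the original notes):

```python
from scipy.stats import norm

# X-bar - Y-bar ~ N(-1, 1), so P(X-bar / Y-bar > 1) = P(Z > 1) with Z standard normal.
p = 1 - norm.cdf(1.0)     # equivalently norm.sf(1.0)
print(round(p, 3))        # 0.159
```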
Interval estimation:

Let ‘X’ be a random variable with some probability distribution, depending on an unknown
parameter θ. An estimate of θ given by two magnitudes within which θ can lie is called an interval
estimate of the parameter θ. The process of obtaining an interval estimate for θ is called interval
estimation.

Let θ be an unknown parameter to be determined by a random sample 𝑋1 , 𝑋2 , 𝑋3 , … 𝑋𝑛 of size 𝑛.


The confidence interval for the parameter θ is a random interval containing the parameter with high probability, say 1 − α; 1 − α is called the confidence coefficient.

Suppose that P{H(X₁, X₂, …, Xₙ) < θ < G(X₁, X₂, …, Xₙ)} = 1 − α. Then (H(X₁, X₂, …, Xₙ), G(X₁, X₂, …, Xₙ)) is a [(1 − α) × 100]% confidence interval.

Note: Let 𝑋1 , 𝑋2 . . 𝑋𝑛 be a random sample of size 𝑛 from a normal distribution 𝑁(𝜇, 𝜎 2 ).

1. X̄ = (Σᵢ₌₁ⁿ Xᵢ)/n ~ N(μ, σ²/n), used for μ when σ² is known.

2. T = (X̄ − μ)/(S/√(n − 1)) ~ T(n − 1), used for μ when σ² is unknown.

3. Y = Σᵢ₌₁ⁿ (Xᵢ − μ)²/σ² ~ χ²(n), used for σ² when μ is known.

4. Y = nS²/σ² ~ χ²(n − 1), used for σ² when μ is unknown.

Confidence Interval for mean:

1) σ² is known: Consider X̄ = (Σᵢ₌₁ⁿ Xᵢ)/n ~ N(μ, σ²/n)

∴ Z = (X̄ − μ)/(σ/√n) ~ N(0, 1)

To find a such that P(−a < Z < a) = 1 − α

⇒ P(−a < (X̄ − μ)/(σ/√n) < a) = 1 − α

⇒ P(X̄ − aσ/√n < μ < X̄ + aσ/√n) = 1 − α

⇒ μ ∈ (X̄ − aσ/√n, X̄ + aσ/√n)

Examples:
1. Let the observed value of X̄, the mean of a sample of size 20 from a normal distribution with mean μ and σ² = 80, be 81.2. Obtain a 95% confidence interval for the mean μ.

Solution: Let X ~ N(μ, 80). Then X̄ ~ N(μ, 80/20) = N(μ, 4).

∴ Z = (X̄ − μ)/(σ/√n) = (X̄ − μ)/2 ~ N(0, 1)

P(−a < Z < a) = 0.95 ⇒ 2Φ(a) − 1 = 0.95 ⇒ Φ(a) = 1.95/2 = 0.975 ⇒ a = 1.96

⇒ μ ∈ (X̄ − aσ/√n, X̄ + aσ/√n) = (81.2 − 1.96 × 2, 81.2 + 1.96 × 2)

⇒ μ ∈ (77.28, 85.12)
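A sketch of the same computation with SciPy (added for illustration; norm.ppf plays the role of the normal table):

```python
import numpy as np
from scipy.stats import norm

# 95% confidence interval for mu with sigma^2 known:
# mu in (xbar - a*sigma/sqrt(n), xbar + a*sigma/sqrt(n)) with a = z_{0.975} = 1.96.
xbar, sigma2, n, conf = 81.2, 80.0, 20, 0.95
a = norm.ppf((1 + conf) / 2)
half = a * np.sqrt(sigma2 / n)
print(xbar - half, xbar + half)     # approximately (77.28, 85.12)
```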

2. Let X̄ be the mean of a sample of size n from a normal distribution with mean μ and σ² = 9. Find n such that P(X̄ − 1 < μ < X̄ + 1) = 0.9, approximately.

Solution: Let X ~ N(μ, 9). Then X̄ ~ N(μ, 9/n).

∴ Z = (X̄ − μ)/(3/√n) ~ N(0, 1)

∴ P(−a < Z < a) = 0.9 ⇒ 2Φ(a) − 1 = 0.9 ⇒ Φ(a) = 1.9/2 = 0.95 ⇒ a = 1.65

Given that P(X̄ − 1 < μ < X̄ + 1) = 0.9 ⇒ P(1 − X̄ > −μ > −X̄ − 1) = 0.9

⇒ P(1 > X̄ − μ > −1) = 0.9

⇒ P(−1 < X̄ − μ < 1) = 0.9

⇒ P(−1/(3/√n) < (X̄ − μ)/(3/√n) < 1/(3/√n)) = 0.9

⇒ P(−√n/3 < Z < √n/3) = 0.9

⇒ 2Φ(√n/3) − 1 = 0.9

⇒ Φ(√n/3) = 1.9/2 = 0.95

⇒ √n/3 = 1.65 ⇒ √n = 4.95 ⇒ n = 24.5025 ≅ 25
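The sample-size calculation can be sketched the same way (added for illustration; SciPy gives z more precisely than the rounded table value 1.65):

```python
import numpy as np
from scipy.stats import norm

# Need Phi(sqrt(n)/3) = 0.95, i.e. sqrt(n) = 3 * z_{0.95}, so n = (3 * z_{0.95})^2.
z = norm.ppf(0.95)                  # about 1.645 (the notes round to 1.65)
n = (3 * z) ** 2
print(n, int(np.ceil(n)))           # about 24.3 (24.5 with 1.65); round up to n = 25
```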
2) 𝜎 2 is unknown:

Consider T = (X̄ − μ)/(S/√(n − 1)) ~ T(n − 1), the t-distribution with (n − 1) degrees of freedom.

To find a such that P(−a < T < a) = 1 − α

⇒ P(−a < (X̄ − μ)/(S/√(n − 1)) < a) = 1 − α

⇒ P(X̄ − aS/√(n − 1) < μ < X̄ + aS/√(n − 1)) = 1 − α

⇒ μ ∈ (X̄ − aS/√(n − 1), X̄ + aS/√(n − 1))

Examples:

1. A random sample of size 17 from N(μ, σ²) yields X̄ = 4.7 and S² = 5.76. Determine a 90% confidence interval for μ.

Solution: Given that n = 17, X̄ = 4.7 and S² = 5.76.

Let T = (X̄ − μ)/(S/√(n − 1)) = 4(X̄ − μ)/√5.76 ~ T(17 − 1) ~ T(16)

To find a such that P(−a < T < a) = 0.90 ⇒ a = 1.75, using the t-table with 16 degrees of freedom.

∴ μ ∈ (4.7 − 1.75 × √5.76/√16, 4.7 + 1.75 × √5.76/√16) ⇒ μ ∈ (3.65, 5.75)
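A sketch of this t-interval with SciPy (added for illustration; t.ppf replaces the t-table):

```python
import numpy as np
from scipy.stats import t

# 90% interval for mu with sigma^2 unknown:
# mu in (xbar - a*S/sqrt(n-1), xbar + a*S/sqrt(n-1)), a from T(n - 1).
xbar, s2, n, conf = 4.7, 5.76, 17, 0.90
a = t.ppf((1 + conf) / 2, df=n - 1)          # about 1.746 (the notes use 1.75)
half = a * np.sqrt(s2) / np.sqrt(n - 1)
print(xbar - half, xbar + half)              # approximately (3.65, 5.75)
```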

Confidence interval for variance:

1) 𝜇 is known:
Let Y = Σᵢ₌₁ⁿ (Xᵢ − μ)²/σ² ~ χ²(n), where μ is the known mean.

To find a and b such that P(a < Y < b) = 1 − α

⇒ P(Y < a) = α/2, P(Y > b) = α/2

∴ P(a < Σᵢ₌₁ⁿ (Xᵢ − μ)²/σ² < b) = 1 − α

⇒ P(1/b < σ²/Σᵢ₌₁ⁿ (Xᵢ − μ)² < 1/a) = 1 − α

⇒ P(Σᵢ₌₁ⁿ (Xᵢ − μ)²/b < σ² < Σᵢ₌₁ⁿ (Xᵢ − μ)²/a) = 1 − α

⇒ σ² ∈ (Σᵢ₌₁ⁿ (Xᵢ − μ)²/b, Σᵢ₌₁ⁿ (Xᵢ − μ)²/a)

Examples: 1. If 8.6, 7.9, 8.3, 6.4, 8.4, 9.8, 7.2, 7.8, 7.5 are the observed values of a
random sample of size 9 from a distribution 𝑁(8, 𝜎 2 ), construct 90% confidence interval
for 𝜎 2 .
Solution: Here μ = 8 and n = 9, so

Y = Σᵢ₌₁ⁿ (Xᵢ − μ)²/σ² = (1/σ²){(0.6)² + (0.1)² + (0.3)² + (1.6)² + (0.4)² + (1.8)² + (0.8)² + (0.2)² + (0.5)²} = 7.35/σ² ~ χ²(9)

To find a and b such that P(a < Y < b) = 1 − α = 0.90 ⇒ α = 0.10

⇒ P(Y < a) = α/2 = 0.10/2 = 0.05 ⇒ a = 3.33, and P(Y > b) = α/2 = 0.10/2 = 0.05

⇒ P(Y < b) = 1 − 0.05 = 0.95 ⇒ b = 16.9, using the chi-square table for 9 degrees of freedom.

∴ σ² ∈ (Σᵢ₌₁ⁿ (Xᵢ − μ)²/b, Σᵢ₌₁ⁿ (Xᵢ − μ)²/a) = (7.35/16.9, 7.35/3.33) = (0.43, 2.21)
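A sketch of this interval with SciPy (added for illustration; chi2.ppf replaces the chi-square table):

```python
from scipy.stats import chi2

# 90% interval for sigma^2 with mu = 8 known: sigma^2 in (Q/b, Q/a), Q = sum((x_i - mu)^2).
data = [8.6, 7.9, 8.3, 6.4, 8.4, 9.8, 7.2, 7.8, 7.5]
mu, n, alpha = 8.0, len(data), 0.10
q = sum((x - mu) ** 2 for x in data)         # 7.35
a = chi2.ppf(alpha / 2, df=n)                # about 3.33
b = chi2.ppf(1 - alpha / 2, df=n)            # about 16.9
print(q / b, q / a)                          # approximately (0.43, 2.21)
```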

2) 𝜇 is unknown:
Let Y = nS²/σ² ~ χ²(n − 1)

To find a and b such that P(a < Y < b) = 1 − α

⇒ P(Y < a) = α/2, P(Y > b) = α/2

∴ P(a < nS²/σ² < b) = 1 − α

⇒ P(1/b < σ²/(nS²) < 1/a) = 1 − α

⇒ P(nS²/b < σ² < nS²/a) = 1 − α

⇒ σ² ∈ (nS²/b, nS²/a)

Examples:

1. A random sample of size 15 from a normal distribution N(μ, σ²) yields X̄ = 3.2, S² = 4.24. Determine a 90% confidence interval for σ².

Solution: Given that 1 − α = 0.9 ⇒ α = 0.1 and α/2 = 0.05. Let Y = nS²/σ² ~ χ²(15 − 1) = χ²(14)

∴ P(Y < a) = α/2 = 0.10/2 = 0.05 ⇒ a = 6.57, and P(Y > b) = α/2 = 0.10/2 = 0.05

⇒ P(Y < b) = 1 − 0.05 = 0.95 ⇒ b = 23.7, using the chi-square table for 14 degrees of freedom.

∴ σ² ∈ (15 × 4.24/23.7, 15 × 4.24/6.57) = (2.68, 9.68)
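And the corresponding sketch for the μ-unknown case (added for illustration, using SciPy):

```python
from scipy.stats import chi2

# 90% interval for sigma^2 with mu unknown: sigma^2 in (n*S^2/b, n*S^2/a), a, b from chi2(n - 1).
n, s2, alpha = 15, 4.24, 0.10
a = chi2.ppf(alpha / 2, df=n - 1)            # about 6.57
b = chi2.ppf(1 - alpha / 2, df=n - 1)        # about 23.7
print(n * s2 / b, n * s2 / a)                # approximately (2.68, 9.68)
```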
Extra Problems:

1. A random sample of size 9 from a normal distribution N(μ, σ²) yields S² = 7.63. Determine a 95% confidence interval for σ².
ANS: σ² ∈ (3.924, 31.5)

2. A random sample of size 15 from a normal distribution N(μ, σ²) yields X̄ = 3.2, S² = 4.24. Determine a 95% confidence interval for μ.
ANS: μ ∈ (2.02, 4.38)

3. A random sample of size 25 from a normal distribution N(μ, 4) yields X̄ = 78.3, S² = 4.24. Determine a 99% confidence interval for μ.
ANS: μ ∈ (77.268, 79.332)
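A quick check of Extra Problem 1 along the same lines (added for illustration; the small difference from 3.924 comes from table rounding):

```python
from scipy.stats import chi2

# 95% interval for sigma^2 with mu unknown: n = 9, S^2 = 7.63, a, b from chi2(8).
n, s2, alpha = 9, 7.63, 0.05
a = chi2.ppf(alpha / 2, df=n - 1)            # about 2.18
b = chi2.ppf(1 - alpha / 2, df=n - 1)        # about 17.5
print(n * s2 / b, n * s2 / a)                # approximately (3.92, 31.5)
```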
