MGF, Sampling Theory & Central Limit Theorem

Moment Generating Functions (mgf)

Definition:
Let X be a random variable. The moment generating function (mgf) of X, denoted by M_X(t), is defined as

M_X(t) = Σ_{j} e^{t·x_j} P(X = x_j),   if X is discrete;
M_X(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx,    if X is continuous.
Note:
1. M_X(t) = E(e^{tX}).
2. The nth derivative of M_X(t) at t = 0 is E(X^n), i.e., M_X^(n)(0) = E(X^n).
3. E(X^n) is the coefficient of t^n/n! in the power-series expansion of M_X(t) = E(e^{tX}).
4. V(X) = M_X″(0) − [M_X′(0)]².

Properties:
1. M_{aX}(t) = M_X(at).
2. M_{aX+b}(t) = e^{tb} M_X(at).
3. If X and Y are independent random variables, then M_{X+Y}(t) = M_X(t)·M_Y(t).
   This can be generalized to n independent random variables.
4. If two or more independent random variables having a certain type of distribution are added, the resulting random variable has a distribution of the same type. This is called the reproductive property.
Example: Suppose that X and Y are independent random variables with distributions N(μ₁, σ₁²) and N(μ₂, σ₂²) respectively, and let Z = X + Y. Then

M_Z(t) = E(e^{Zt}) = E(e^{(X+Y)t}) = E(e^{Xt})·E(e^{Yt}) = M_X(t)·M_Y(t)
       = e^{tμ₁ + σ₁²t²/2} · e^{tμ₂ + σ₂²t²/2} = e^{t(μ₁+μ₂) + (σ₁²+σ₂²)t²/2}.

This is the mgf of a normally distributed random variable with expected value μ₁ + μ₂ and variance σ₁² + σ₂², so Z ~ N(μ₁ + μ₂, σ₁² + σ₂²).
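The identity above can be checked symbolically. A minimal sketch with sympy (an illustration, not part of the original notes):

```python
import sympy as sp

t, mu1, mu2 = sp.symbols('t mu1 mu2', real=True)
s1, s2 = sp.symbols('sigma1 sigma2', positive=True)

M_X = sp.exp(t*mu1 + s1**2 * t**2 / 2)                    # mgf of N(mu1, sigma1^2)
M_Y = sp.exp(t*mu2 + s2**2 * t**2 / 2)                    # mgf of N(mu2, sigma2^2)
M_Z = sp.exp(t*(mu1 + mu2) + (s1**2 + s2**2) * t**2 / 2)  # mgf of the claimed sum

print(sp.simplify(M_X * M_Y - M_Z))                       # prints 0
```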

Problems:
1. If X is the outcome obtained when a die is tossed, find the moment generating function. Also find its mean and variance.

Solution: P(X = x) = 1/6; x = 1, 2, 3, 4, 5, 6.

We have M_X(t) = E(e^{tX}) = Σ_{x=1}^{6} e^{tx} P(x) = (1/6)[e^t + e^{2t} + ⋯ + e^{6t}].

E(X) = M_X′(0) = (1/6)[1 + 2 + ⋯ + 6] = 21/6 = 7/2.

M_X″(t) = (1/6)[e^t + 4e^{2t} + ⋯ + 36e^{6t}], so E(X²) = M_X″(0) = 91/6.

V(X) = M_X″(0) − [M_X′(0)]² = 91/6 − 49/4 = 35/12 ≈ 2.92.
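These moments are easy to verify symbolically. A quick sympy sketch (an illustration, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t')
M = sp.Rational(1, 6) * sum(sp.exp(k*t) for k in range(1, 7))  # die mgf

EX  = sp.diff(M, t).subs(t, 0)             # E(X)   = 7/2
EX2 = sp.diff(M, t, 2).subs(t, 0)          # E(X^2) = 91/6
print(EX, EX2, sp.simplify(EX2 - EX**2))   # variance = 35/12
```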
2. If X is a random variable taking values 0, 1, 2, … and P(X = x) = a·b^x, where a and b are positive constants such that a + b = 1, then
(i) find the mgf;
(ii) if E(X) = m₁ and E(X²) = m₂, show that m₂ = m₁(2m₁ + 1).

Solution: (i) M_X(t) = E(e^{tX}) = Σ_{x=0}^{∞} e^{tx} p(x) = Σ_{x=0}^{∞} a(be^t)^x = a/(1 − be^t), valid for be^t < 1.

(ii) E(X) = M_X′(0) = ab/(1 − b)² = ab/a² = b/a = m₁.

E(X²) = m₂ = M_X″(0) = (b/a)·(1 + b)/a = m₁·(a + b + b)/a = m₁(1 + 2b/a) = m₁(2m₁ + 1).
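A numeric spot-check of the identity, using the arbitrary choice a = 0.6, b = 0.4 (a sketch, not part of the original notes):

```python
a, b = 0.6, 0.4
xs = range(200)                        # terms beyond x = 200 are negligible here
m1 = sum(x * a * b**x for x in xs)     # E(X)   = b/a = 2/3
m2 = sum(x**2 * a * b**x for x in xs)  # E(X^2)
print(m2, m1 * (2*m1 + 1))             # both ~ 1.5556
```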

3. If X has pdf f(x) = λe^{−λ(x−a)} for x ≥ a, find its mgf, and also find the mean and variance.
4. Suppose that X has pdf f(x) = e^{−|x|}/2, −∞ < x < ∞. Find the mean and variance using the mgf.

Mgf of Binomial distribution:

If p(x) = nCx p^x q^{n−x}; x = 0, 1, 2, …, n, then

M_X(t) = Σ_{x=0}^{n} e^{tx} nCx p^x q^{n−x} = Σ_{x=0}^{n} nCx (pe^t)^x q^{n−x}.

By the binomial theorem,

M_X(t) = (pe^t + q)^n.
Mgf of Poisson distribution:

If p(x) = α^x e^{−α}/x!; x = 0, 1, 2, …, then

M_X(t) = Σ_{x=0}^{∞} e^{tx} α^x e^{−α}/x! = e^{−α} Σ_{x=0}^{∞} (αe^t)^x/x! = e^{−α} e^{αe^t} = e^{α(e^t − 1)}.
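A numeric check of this closed form, with arbitrary values α = 0.8, t = 0.5 (a sketch, not part of the original notes):

```python
from math import exp, factorial

alpha, t = 0.8, 0.5   # arbitrary parameter choices
series = sum(exp(t*x) * alpha**x * exp(-alpha) / factorial(x) for x in range(60))
closed = exp(alpha * (exp(t) - 1))
print(series, closed)  # both ~ 1.6803
```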
Mgf of Uniform distribution on (−a, a):

If f(x) = 1/(2a) on (−a, a), then

M_X(t) = ∫_{−a}^{a} (1/(2a)) e^{tx} dx = (e^{at} − e^{−at})/(2at) = sinh(at)/(at).

The value of E(X^{2n}):

We know that E(X^{2n}) is the coefficient of t^{2n}/(2n)! in the expansion of M_X(t). Since

sinh(at)/(at) = Σ_{n=0}^{∞} (at)^{2n}/(2n + 1)!,

the coefficient of t^{2n} is a^{2n}/(2n + 1)!, and hence

E(X^{2n}) = (2n)!·a^{2n}/(2n + 1)! = a^{2n}/(2n + 1).

Mgf of Exponential distribution:

If f(x) = αe^{−αx}, x > 0, then

M_X(t) = ∫_{0}^{∞} αe^{−αx} e^{tx} dx = α ∫_{0}^{∞} e^{−(α−t)x} dx = α/(α − t), for t < α.
Mgf of Gamma distribution:

If f(x) = α^r x^{r−1} e^{−αx}/Γ(r); α > 0, r > 0, 0 < x < ∞, then

M_X(t) = ∫_{0}^{∞} e^{tx} α^r x^{r−1} e^{−αx}/Γ(r) dx.

Substituting x(α − t) = v (valid for t < α),

M_X(t) = (α^r/Γ(r)) ∫_{0}^{∞} e^{−v} (v/(α − t))^{r−1} dv/(α − t)
       = (α^r/(α − t)^r) · (1/Γ(r)) ∫_{0}^{∞} e^{−v} v^{r−1} dv
       = α^r/(α − t)^r.
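A numeric spot-check of this closed form by direct integration, with arbitrary values α = 2, r = 3.5, t = 0.7 (a sketch, not part of the original notes):

```python
from math import gamma, exp, inf
from scipy.integrate import quad

alpha, r, t = 2.0, 3.5, 0.7            # arbitrary values with t < alpha

f = lambda x: alpha**r * x**(r - 1) * exp(-alpha*x) / gamma(r)  # gamma pdf
numeric, _ = quad(lambda x: exp(t*x) * f(x), 0, inf)
closed = (alpha / (alpha - t))**r
print(numeric, closed)                 # both ~ 4.516
```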

Mgf of Chi-square distribution:

By substituting r = n/2 and α = 1/2 in the mgf of the gamma distribution, we get the mgf of the chi-square distribution:

M_X(t) = (1 − 2t)^{−n/2}.


Mgf of Normal distribution:

If f(x) = (1/(σ√(2π))) e^{−(1/2)((x−μ)/σ)²}, −∞ < x < ∞, then

M_X(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tx} e^{−(1/2)((x−μ)/σ)²} dx.

Substitute z = (x − μ)/σ:

M_X(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{t(μ+σz)} e^{−z²/2} σ dz
       = (e^{tμ}/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)(z² − 2ztσ + (tσ)² − (tσ)²)} dz
       = e^{tμ} e^{σ²t²/2} (1/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)(z − tσ)²} dz.

Substituting y = z − tσ and using the gamma function to evaluate ∫_{−∞}^{∞} e^{−y²/2} dy = √(2π),

M_X(t) = e^{tμ + σ²t²/2}.
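A numeric check of the result by direct integration, with arbitrary values μ = 1.5, σ = 2, t = 0.3 (a sketch, not part of the original notes):

```python
from math import exp, pi, sqrt, inf
from scipy.integrate import quad

mu, sigma, t = 1.5, 2.0, 0.3           # arbitrary parameter choices

f = lambda x: exp(-0.5*((x - mu)/sigma)**2) / (sigma*sqrt(2*pi))  # normal pdf
numeric, _ = quad(lambda x: exp(t*x) * f(x), -inf, inf)
closed = exp(t*mu + sigma**2 * t**2 / 2)
print(numeric, closed)                 # both ~ 1.8776
```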

Mgf of Standard normal distribution:

By substituting μ = 0 and σ² = 1 (so σ = 1) in the mgf of the normal distribution, we get the mgf of the standard normal distribution:

M_X(t) = e^{t²/2}.

Sampling Theory
Introduction: Sampling theory is the process of obtaining information about an entire population by examining only a part of it. A population may be finite or infinite. A finite subset of statistical individuals in a population is called a sample, and the number of individuals in a sample is called the sample size.
Random Sample: Let X be a random variable, and let X₁, X₂, …, X_n be n independent random variables such that each X_i has the same distribution as X. Then (X₁, X₂, …, X_n) is called a random sample of size n. The statistic X̄ = (1/n) Σ X_i is called the sample mean, and s² = (1/n) Σ (X_i − X̄)² is the sample variance. A statistic is a characteristic of a sample, whereas a parameter is a characteristic of the population. Example: the population mean μ is a parameter and the sample mean X̄ is a statistic.
Sampling Distribution: Let X₁, X₂, …, X_n denote a random sample of size n from a distribution which is N(μ, σ²). Then the sample mean satisfies X̄ ~ N(μ, σ²/n), and the distribution of s² (the variance of the random sample) is given by ns²/σ² ~ χ²(n − 1), where X̄ and s² are independent.

Theorem: Let X be a random variable with E(X) = μ and V(X) = σ². Let X̄ be the sample mean of a random sample of size n from X. Then E(X̄) = μ and V(X̄) = σ²/n.
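A short simulation illustrating the theorem, using an exponential population with μ = 2 and σ² = 4 (a sketch, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 200_000
samples = rng.exponential(scale=2.0, size=(reps, n))  # mu = 2, sigma^2 = 4
xbar = samples.mean(axis=1)

print(xbar.mean())   # ~ 2.00 = mu
print(xbar.var())    # ~ 0.16 = sigma^2 / n = 4/25
```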
Problems:
1. Let X̄ be the mean of a random sample of size 25 from the distribution which is N(75, 100). Find Pr{71 < X̄ < 79}.

Solution: μ = 75, σ² = 100, and n = 25.

X̄ ~ N(μ, σ²/n) implies X̄ ~ N(75, 100/25), i.e., X̄ ~ N(75, 4).

Pr{71 < X̄ < 79} = Pr{(71 − 75)/2 ≤ z ≤ (79 − 75)/2}
= Pr{−2 ≤ z ≤ 2}
= Φ(2) − Φ(−2)
= 2Φ(2) − 1
= 2(0.9772) − 1 (using the normal table)
= 0.9544.
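The same probability with scipy (a sketch, not part of the original notes):

```python
from scipy.stats import norm

xbar = norm(loc=75, scale=2)        # Xbar ~ N(75, 4), so sd = 2
print(xbar.cdf(79) - xbar.cdf(71))  # ~ 0.9545
```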

2. Let s² be the variance of a random sample of size 6 from the distribution N(μ, 12). Evaluate Pr{2.3 < s² < 22.2}.

Solution: X ~ N(μ, 12), n = 6, σ² = 12, so ns²/σ² = 6s²/12 = s²/2 ~ χ²(5).

Pr{2.3 < s² < 22.2} = Pr{2.3/2 < s²/2 < 22.2/2}
= Pr{1.15 < Y < 11.1} (where Y = s²/2)
= F_{χ²(5)}(11.1) − F_{χ²(5)}(1.15) (chi-square with 5 degrees of freedom)
= 0.95 − 0.05 = 0.90.
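Checking against the chi-square cdf with scipy (a sketch, not part of the original notes):

```python
from scipy.stats import chi2

Y = chi2(5)                       # s^2/2 ~ chi-square with 5 df
print(Y.cdf(11.1) - Y.cdf(1.15))  # ~ 0.90
```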
3. Let X̄ and s² be the mean and variance of a random sample of size 25 from the distribution N(3, 100). Evaluate Pr{0 < X̄ < 6, 55.2 < s² < 145.6}.

Solution: n = 25 and X ~ N(3, 100) imply X̄ ~ N(3, 100/25) = N(3, 4), and also Y = ns²/σ² = 25s²/100 = s²/4 ~ χ²(24).

Since X̄ and s² are independent,

Pr{0 < X̄ < 6, 55.2 < s² < 145.6} = Pr{0 < X̄ < 6} · Pr{55.2 < s² < 145.6}
= Pr{(0 − 3)/2 < z < (6 − 3)/2} · Pr{55.2/4 < Y < 145.6/4}
= Pr{−1.5 < z < 1.5} · Pr{13.8 < s²/4 < 36.4}
= {Φ(1.5) − Φ(−1.5)} · {F_{χ²(24)}(36.4) − F_{χ²(24)}(13.8)}
= 0.8664 × 0.90 ≈ 0.7798.

4. Let X̄ be the mean of a random sample of size 5 from a normal distribution with mean 0 and variance 125. Find C so that Pr{X̄ < C} = 0.9.

Solution: n = 5, μ = 0, σ² = 125, and X ~ N(0, 125) imply X̄ ~ N(0, 125/5) = N(0, 25), so the standard deviation of X̄ is 5.

Pr{X̄ < C} = 0.9 implies Pr{(X̄ − 0)/5 < C/5} = 0.9
implies Pr{z < C/5} = 0.9
implies Φ(C/5) = 0.9, so C/5 = Φ⁻¹(0.9) ≈ 1.282
implies C ≈ 6.41.
Exercise:
1. Let X̄ be the mean of a random sample of size n from a normal distribution with mean μ and variance 100. Find n so that Pr{μ − 5 < X̄ < μ + 5} = 0.954. (Ans: n = 16)
2. Let Xi and Yi, i = 1, 2, …, 25 be two independent random samples from two normal
distributions N (0,16) and N (1,9) respectively. Let 𝑋̅ and 𝑌̅ denote the corresponding
sample means. Compute 𝑃(𝑋̅ > 𝑌̅).
Central Limit Theorem
Introduction: Let X₁, X₂, …, X_n denote a random sample of size n from a distribution that has mean μ and variance σ². Then the random variable Y_n = (X̄ − μ)/(σ/√n) has a limiting distribution N(0, 1).
Problems:
1. Compute an approximate probability that the mean of a random sample of size 15 from a distribution having pdf f(x) = 3x² for 0 < x < 1 (and 0 otherwise) lies between 3/5 and 4/5.

Solution: By the central limit theorem, X̄ is approximately N(μ, σ²/n), and Y_n = (X̄ − E(X̄))/√V(X̄).

μ = E(X) = ∫₀¹ 3x³ dx = 3/4 and E(X²) = ∫₀¹ 3x⁴ dx = 3/5, so V(X) = 3/5 − (3/4)² = 3/80.

Therefore X̄ is approximately N(3/4, 3/(80 × 15)) = N(3/4, 1/400), with standard deviation 1/20.

Pr{3/5 < X̄ < 4/5} = Pr{(3/5 − 3/4)/(1/20) < Y_n < (4/5 − 3/4)/(1/20)}
= Pr{(−3/20)/(1/20) < Y_n < (1/20)/(1/20)}
= Pr{−3 < Y_n < 1} = Φ(1) − Φ(−3) = 0.840.
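A simulation check of this answer, drawing X by inverse transform (the cdf is x³, so X = U^{1/3}; a sketch, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
draws = rng.random((200_000, 15)) ** (1/3)     # inverse-cdf sampling of f(x) = 3x^2
means = draws.mean(axis=1)
print(((0.6 < means) & (means < 0.8)).mean())  # ~ 0.84
```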


2. Suppose that X_j, j = 1, 2, …, 50, are independent random variables, each having a Poisson distribution with α = 0.03. Let S = X₁ + X₂ + ⋯ + X₅₀. Using the central limit theorem, evaluate Pr{S ≥ 3}.

Solution: X ~ P(0.03), so μ = α = 0.03 and σ² = α = 0.03, with S = Σ_{i=1}^{50} X_i.

E(S) = E(Σ X_i) = 50 E(X_i) = 50(0.03) = 1.5 and V(S) = V(Σ X_i) = 50 V(X_i) = 50(0.03) = 1.5.

Y_n = (S − E(S))/√V(S).

Therefore Pr{S ≥ 3} = 1 − Pr{S < 3}
= 1 − Pr{(S − E(S))/√V(S) < (3 − E(S))/√V(S)}
= 1 − Pr{Y_n < (3 − 1.5)/√1.5} = 1 − Pr{Y_n < √1.5}
= 1 − Φ(1.22) ≈ 1 − 0.8888 = 0.1112.
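Because S is a sum of independent Poissons, it is exactly Poisson(1.5), so the CLT answer can be compared with the exact one (a sketch, not part of the original notes). The normal approximation is rough here, since E(S) = 1.5 is small and S is integer-valued:

```python
from scipy.stats import poisson, norm

exact = 1 - poisson(1.5).cdf(2)          # Pr{S >= 3} exactly, ~ 0.1912
approx = 1 - norm(1.5, 1.5**0.5).cdf(3)  # CLT approximation, ~ 0.1103
print(exact, approx)
```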


3. Let X̄ be the mean of a random sample of size 100 from the χ²(50) distribution. Compute an approximate value of Pr{49 < X̄ < 51}.

Solution: X ~ χ²(50) gives E(X) = 50 and V(X) = 2(50) = 100, so by the central limit theorem X̄ is approximately N(50, 100/100) = N(50, 1); that is, E(X̄) = 50 and V(X̄) = 1.

Therefore Y_n = (X̄ − E(X̄))/√V(X̄) = (X̄ − 50)/1.

Pr{49 < X̄ < 51} = Pr{(49 − 50)/1 < Y_n < (51 − 50)/1}
= Pr{−1 < Y_n < 1} = 2Φ(1) − 1 = 0.6826.


4. A computer, in adding numbers, rounds each number off to the nearest integer. Suppose that all rounding errors are independent and uniformly distributed over (−0.5, 0.5). If 1500 numbers are added, what is the probability that the magnitude of the total error exceeds 15? How many numbers may be added together in order that the magnitude of the total error is less than 10 with probability 0.9?

Solution: Let X be the rounding error. Then X ~ U(−0.5, 0.5), so E(X) = 0 and V(X) = 1/12.

Let Y = Σ_{i=1}^{1500} X_i be the total error. Then E(Y) = 0 and V(Y) = 1500 × (1/12) = 125.

By the central limit theorem, Z = (Y − E(Y))/√V(Y) is approximately N(0, 1).

Probability that the magnitude of the total error exceeds 15:
P(|Y| > 15) = 1 − P(−15 < Y < 15) = 1 − P(−15/√125 < Z < 15/√125) = 1 − P(−1.34 < Z < 1.34) ≈ 0.18.

For the second part, if n numbers are added then V(Y) = n/12, and we require
P(|Y| < 10) = 0.9, i.e., P(−10/√(n/12) < Z < 10/√(n/12)) = 0.9.
Hence 10/√(n/12) = 1.645, which gives n/12 = (10/1.645)², i.e.,
n = 443.
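Solving the second part directly (a sketch, not part of the original notes):

```python
from scipy.stats import norm

z = norm.ppf(0.95)    # 1.6449, since P(-z < Z < z) = 0.9
n = 12 * (10 / z)**2  # from 10 / sqrt(n/12) = z
print(n)              # ~ 443.5, so n = 443
```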
Exercise:
1. A random sample of size 64 is taken from an infinite population having μ = 112 and σ² = 144. Find Pr{X̄ > 114.5}. (Ans: 0.0475)
2. Let X̄ denote the mean of a random sample of size 128 from a gamma distribution with α = 2 and β = 4. Find approximately Pr{7 < X̄ < 9}. (Ans: 0.9544)
3. If X is a random variable having the binomial distribution with n = 72 and p = 1/3, use the central limit theorem to evaluate P(22 < X < 28).
