Session2 QTII 24
Pritha Guha
Quantitative Techniques - II [email protected]
Making Inferences From A Sample
Making Inferences From Samples
Note that
• It is impossible to collect every possible sample and calculate the
sample statistic for each of them.
• In fact, we will only have one sample with us!
• The sampling distribution tells us how much a statistic would vary
from sample to sample and helps us predict how close a statistic
is to the parameter it estimates.
Sampling: X₁, X₂, …, Xₙ is an IID sample with mean μ and variance σ².
Central Limit Theorem (CLT)
• If X₁, X₂, …, Xₙ is an IID sample with mean μ and finite variance σ², then for large n
the sample mean X̄ is approximately Normal with mean μ and variance σ²/n.
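The CLT can be seen by simulation. The sketch below (my own illustration, not from the slides) draws many samples of size n = 50 from an Exponential(1) population, whose mean and standard deviation are both 1, and checks that the sample means behave like a Normal(1, 1/50) variable.

```python
import random
import statistics

# Simulate the sampling distribution of the sample mean for an
# Exponential(1) population (mean = 1, SD = 1).
random.seed(0)
n = 50          # sample size
reps = 20000    # number of samples drawn

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

# CLT prediction: X-bar is approx Normal(mu = 1, sd = 1/sqrt(50) = 0.1414)
print(statistics.fmean(sample_means))   # close to 1
print(statistics.stdev(sample_means))   # close to 0.1414
```

Even though the Exponential population is strongly skewed, the distribution of the sample means is already close to normal at n = 50.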
Sampling Distribution Of Sample Proportion
The IT section of a university suggested that students should use long passphrases as
their email account passwords to protect their accounts from getting hacked.
If the population proportion of students at the university who use long passphrases
as advised by the IT section is 0.1, what is the probability that the sample proportion
based on a random sample of size 200 lies in [0.045, 0.155]? (Use a suitable
normal approximation without any correction.)
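A sketch of the normal-approximation calculation for this problem: the sample proportion p̂ is approximately Normal with mean p = 0.1 and standard error √(p(1−p)/n), and the standard normal CDF is built from `math.erf`.

```python
import math

# Normal approximation for the sampling distribution of p-hat.
p, n = 0.1, 200
se = math.sqrt(p * (1 - p) / n)  # SD of the sample proportion, approx 0.0212

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

prob = phi((0.155 - p) / se) - phi((0.045 - p) / se)
print(round(prob, 4))  # roughly 0.99
```

The interval is symmetric about p = 0.1 (0.1 ± 0.055, i.e. about ±2.59 standard errors), so the probability is close to 0.99.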
Making Inferences
Statistical inference
• Point Estimator
• Interval Estimator
Point Estimation: Basic Idea
• We resort to sampling when we cannot get hold of the entire population, and hence
do not know its characteristics (parameters): mean, SD, etc.
• The parameters are estimated through sample statistics: the sample mean (X̄), sample
proportion (p̂), and sample standard deviation (s) / variance (s²), etc.
• You can think of this technique as extrapolation of sample properties to the population.
• This technique is known as (point) estimation.
• The sample statistic is called the (point) estimator of the parameter.
Some Notations and Concepts
• For the sample statistic to be a good proxy (estimate) for the population parameter, it is
required to have at least some desirable properties.
• Some properties we will discuss are:
• Unbiasedness
• Efficiency
• Consistency
Unbiasedness
What is Unbiasedness
• An estimator θ̂ of θ is called unbiased if E(θ̂) = θ, i.e., on average it neither
overestimates nor underestimates θ.
What is Bias
• Bias(θ̂) = E(θ̂) − θ
• The bias of an estimator θ̂ quantifies its accuracy by measuring how far, on average,
θ̂ differs from θ.
Problem
The sample mean is an unbiased estimator for the population mean for SRS,
i.e. E(X̄) = μ.
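This follows directly from linearity of expectation; a short derivation:

```latex
\[
E(\bar{X})
= E\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
= \frac{1}{n}\sum_{i=1}^{n} E(X_i)
= \frac{1}{n}\cdot n\mu
= \mu .
\]
```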
Problem (HW)
The sample variance s² (with divisor n − 1) is an unbiased estimator for the
population variance for SRS.
Unbiased Estimators May Not Be Unique!
Problem
Suppose X₁, X₂, …, Xₙ is a random sample (IID) from a distribution
with mean E(Xᵢ) = θ and Var(Xᵢ) = σ².
Consider the following two estimators θ̂₁ and θ̂₂ for θ:
θ̂₁ = X₁,  θ̂₂ = (X₁ + X₂ + ⋯ + Xₙ)/n
Are θ̂₁ and θ̂₂ both unbiased estimators of θ?
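A quick simulation sketch (the population and parameter values are my own choices) shows what the answer looks like in practice: both estimators average out to θ, but the sample mean is far less variable.

```python
import random
import statistics

# Compare theta_hat_1 = X1 and theta_hat_2 = sample mean by simulation,
# for a Normal(theta = 5, sigma = 2) population with n = 25.
random.seed(1)
theta, sigma, n, reps = 5.0, 2.0, 25, 20000

est1, est2 = [], []
for _ in range(reps):
    xs = [random.gauss(theta, sigma) for _ in range(n)]
    est1.append(xs[0])                 # theta_hat_1 = X1
    est2.append(statistics.fmean(xs))  # theta_hat_2 = sample mean

# Both averages are close to theta = 5 (both estimators are unbiased),
# but theta_hat_2 has variance sigma^2/n instead of sigma^2.
print(statistics.fmean(est1), statistics.variance(est1))  # ~5, ~4
print(statistics.fmean(est2), statistics.variance(est2))  # ~5, ~0.16
```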
Bias and Standard Error
▪ Suppose the population parameter, θ, has two estimators, namely, θ̂₁ and θ̂₂.
We say the estimator with the smaller MSE is more efficient.
▪ The MSE of an estimator θ̂ is defined as:
MSE(θ̂) = Var(θ̂) + [Bias(θ̂)]².
▪ If θ̂ is an unbiased estimator of θ, then E(θ̂) = θ and
MSE(θ̂) = Var(θ̂).
▪ Now let us answer the problem: both θ̂₁ = X₁ and θ̂₂ = X̄ are unbiased, so their MSEs
equal their variances. Var(θ̂₁) = σ² while Var(θ̂₂) = σ²/n, so θ̂₂ is more efficient
for any n > 1.
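The decomposition MSE = Var + Bias² can be verified numerically. The sketch below (my own illustration) uses a deliberately biased estimator θ̂ = X̄ + c, whose theoretical bias is c and whose variance is σ²/n.

```python
import random
import statistics

# Check MSE = Var + Bias^2 by simulation for the biased
# estimator theta_hat = X_bar + c.
random.seed(2)
theta, sigma, n, c, reps = 0.0, 1.0, 10, 0.5, 40000

ests = [statistics.fmean(random.gauss(theta, sigma) for _ in range(n)) + c
        for _ in range(reps)]

mse  = statistics.fmean((e - theta) ** 2 for e in ests)
var  = statistics.variance(ests)
bias = statistics.fmean(ests) - theta

print(mse, var + bias ** 2)  # the two numbers agree closely
# Theory: Var = sigma^2/n = 0.1, Bias = c = 0.5, so MSE is about 0.35
```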
Unbiased Estimators May Not Be Unique: Choosing
Using Efficiency
Note that
• The method of moments estimator (MME) is obtained by equating sample moments to
the corresponding population moments and solving for the parameter.
• Such estimates are consistent, i.e., they converge to the true parameter as the sample
size gets larger.
• The MME may not be unique.
Problem
▪ Let X₁, X₂, …, Xₙ be an IID sample from Uniform[0, b]. Find the MME of b,
where b > 0.
▪ Let X₁, X₂, …, Xₙ be an IID sample from Uniform[−b, b]. Find the MME of b,
where b > 0.
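A sketch of both estimators (the derivations here are mine, not worked out on the slides). For Uniform[0, b], E(X) = b/2 gives b̂ = 2X̄. For Uniform[−b, b], E(X) = 0 carries no information about b, so we match the second moment instead: E(X²) = b²/3 gives b̂ = √(3·mean(Xᵢ²)). This also illustrates why the MME may not be unique: different moments can yield different estimators.

```python
import math
import random
import statistics

# MMEs for the two uniform models:
# Uniform[0, b]:  E(X) = b/2      =>  b_hat = 2 * X_bar
# Uniform[-b, b]: E(X) = 0, so match the second moment:
#                 E(X^2) = b^2/3  =>  b_hat = sqrt(3 * mean(X_i^2))
random.seed(3)
b, n = 4.0, 100000

xs = [random.uniform(0, b) for _ in range(n)]
b_hat_1 = 2 * statistics.fmean(xs)

ys = [random.uniform(-b, b) for _ in range(n)]
b_hat_2 = math.sqrt(3 * statistics.fmean(y * y for y in ys))

print(b_hat_1, b_hat_2)  # both close to b = 4
```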