Point Estimators
Descriptive Statistics
Himadri Mukherjee
Department of Mathematics
BITS PILANI K K Birla Goa Campus, Goa
Example. $\max\{X_i\}$, $\min\{X_i\}$, $\sum_{i=1}^{n} X_i$, $\frac{1}{n}\sum_{i=1}^{n} X_i$.
Sample mean
Definition
Let X1 , X2 , X3 , . . . Xn be random samples from the distribution of
X. The statistic
$$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$$
is called the sample mean of the population.
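As a quick illustration (an addition, not part of the slides), the sketch below computes the sample mean of a simulated sample in Python; the seed, the sample size, and the Normal(10, 2) population are arbitrary choices made here.

import numpy as np

rng = np.random.default_rng(0)                      # arbitrary seed, for reproducibility
sample = rng.normal(loc=10.0, scale=2.0, size=50)   # simulated X_1, ..., X_n (assumed population)

x_bar = sample.sum() / sample.size                  # the definition above: sum of the X_i over n
print(x_bar)                                        # agrees with sample.mean()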
Sample variance
Definition
Let X1 , X2 , X3 , . . . Xn be random samples from the distribution of
X. The statistic,
$$S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1}$$
is called the sample variance, and $S = \sqrt{S^2}$ is called the sample standard deviation.
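A similar sketch (again an illustrative addition, not from the slides) for the sample variance; note the n − 1 denominator, which numpy reproduces with ddof=1.

import numpy as np

rng = np.random.default_rng(1)                      # arbitrary seed
sample = rng.normal(loc=10.0, scale=2.0, size=50)   # simulated sample, as before

x_bar = sample.mean()
s2 = ((sample - x_bar) ** 2).sum() / (sample.size - 1)   # divide by n - 1, not n
s = np.sqrt(s2)                                          # sample standard deviation

print(s2, sample.var(ddof=1))                       # the two variance values coincide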
Sample variance continued
Theorem
If a random sample is taken from a population having mean µ and
standard deviation σ, then $\bar{X}$ is a random variable with mean µ
and variance $\sigma^2/n$.
Theorem
If a random sample is taken from a population having mean µ and
variance $\sigma^2$, then $E(S^2) = \sigma^2$.
Proof
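A sketch of the standard argument (added here; the slide leaves the details to the lecture), using $E(\bar{X}) = \mu$ and $\operatorname{Var}(\bar{X}) = \sigma^2/n$ from the previous theorem:

\begin{align*}
\sum_{i=1}^{n} (X_i - \bar{X})^2 &= \sum_{i=1}^{n} X_i^2 - n\bar{X}^2,\\
E\!\left[\sum_{i=1}^{n} (X_i - \bar{X})^2\right] &= \sum_{i=1}^{n} E(X_i^2) - n\,E(\bar{X}^2)\\
 &= n(\sigma^2 + \mu^2) - n\!\left(\frac{\sigma^2}{n} + \mu^2\right) = (n-1)\sigma^2,
\end{align*}

so that $E(S^2) = \frac{(n-1)\sigma^2}{n-1} = \sigma^2$.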
The central limit theorem
Theorem
If $\bar{X}$ is the mean of a sample of size n taken from a population of
mean µ and variance $\sigma^2$, then for n large, $\bar{X}$ is approximately normal
with mean µ and variance $\sigma^2/n$.
Corollary
$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$
follows a standard normal distribution.
Explanation
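A small simulation sketch (an addition, not from the slides) illustrating the corollary: means of samples from an Exponential(1) population, which has µ = σ = 1, are standardized and behave approximately like N(0, 1). The sample size, repetition count, and seed are arbitrary.

import numpy as np

rng = np.random.default_rng(2)                      # arbitrary seed
n, reps = 40, 10_000                                # arbitrary sample size and repetitions
mu, sigma = 1.0, 1.0                                # Exponential(1) has mean 1 and sd 1

means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (means - mu) / (sigma / np.sqrt(n))             # the corollary's standardization

print(z.mean(), z.std())                            # roughly 0 and 1
print(np.mean(np.abs(z) < 1.96))                    # roughly 0.95, as for N(0, 1)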
Example 1
Theorem
If n samples $X_i$ are drawn from a population with mean µ and
standard deviation σ, then for large enough n, the sum $\sum_{i=1}^{n} X_i$
follows a normal distribution with mean nµ and standard deviation $\sqrt{n}\,\sigma$.
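A companion sketch (also an addition) checking this statement for sums: with a Uniform(0, 1) population, µ = 0.5 and σ = √(1/12), so the simulated sums should have mean nµ and standard deviation √n σ.

import numpy as np

rng = np.random.default_rng(3)                      # arbitrary seed
n, reps = 30, 10_000
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)                # Uniform(0, 1) population

sums = rng.uniform(0.0, 1.0, size=(reps, n)).sum(axis=1)
print(sums.mean(), n * mu)                          # both close to n*mu
print(sums.std(), np.sqrt(n) * sigma)               # both close to sqrt(n)*sigma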
Normal approximation to Binomial and Poisson
Theorem
• For a large enough n, a random variable X following the binomial
distribution B(n, p) can be approximated by $N(np, \sqrt{np(1-p)})$.
• For a large enough n, a Poisson random variable X, with
Poisson parameter nλ, can be approximated by $N(n\lambda, \sqrt{n\lambda})$.
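A hedged numerical check (not from the slides) of both approximations using scipy; a continuity correction of 0.5 is applied, which the slides do not mention but which sharpens the approximation. The parameters and the cutoffs 35 and 110 are arbitrary.

import numpy as np
from scipy import stats

n, p = 100, 0.3                                     # arbitrary binomial parameters
exact = stats.binom.cdf(35, n, p)                   # P(X <= 35) for B(n, p)
approx = stats.norm.cdf(35.5, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
print(exact, approx)                                # close to each other

lam = 100                                           # arbitrary (large) Poisson parameter
exact_p = stats.poisson.cdf(110, lam)
approx_p = stats.norm.cdf(110.5, loc=lam, scale=np.sqrt(lam))
print(exact_p, approx_p)                            # also close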
Continued
Estimation
Definition
An unbiased estimator of a parameter θ is a statistic θ̂ such that E(θ̂) = θ.
152,115,109, 94, 88, 137, 152, 77, 160, 165, 125, 40, 128, 123,
136, 101, 62, 153, 83, 69
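The values above appear to be sample data; the sketch below (an addition made here) computes the sample mean and sample variance from them, which by the preceding results are unbiased point estimates of µ and σ².

import numpy as np

data = np.array([152, 115, 109, 94, 88, 137, 152, 77, 160, 165,
                 125, 40, 128, 123, 136, 101, 62, 153, 83, 69])

x_bar = data.mean()                                 # point estimate of the population mean
s2 = data.var(ddof=1)                               # point estimate of the population variance
print(x_bar, s2)                                    # x_bar = 113.45 for these 20 values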
The maximum likelihood estimator
Definition
(Discovered by Fisher.) Let X1, X2, X3, . . . , Xn have a joint
distribution f(x1, x2, x3, . . . , xn; θ1, θ2, θ3, . . . , θm), where the
parameters θ1, θ2, θ3, . . . , θm are unknown and x1, x2, x3, . . . , xn are the
observations. Viewed as a function of θ1, θ2, θ3, . . . , θm, this expresses the
likelihood of these observations and is called the likelihood function.
The values of the θi that maximize the likelihood function are called
the maximum likelihood estimators, or simply MLEs.
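A minimal sketch of the idea (an addition, not the slides' Example 8): for an exponential sample the log-likelihood is ℓ(λ) = n log λ − λ ∑ xᵢ, and maximizing it over a grid recovers the closed-form MLE λ̂ = 1/x̄. The true rate 2.5, the sample size, and the grid are arbitrary choices.

import numpy as np

rng = np.random.default_rng(4)                      # arbitrary seed
x = rng.exponential(scale=1 / 2.5, size=200)        # assumed true rate lambda = 2.5

def log_likelihood(lam):
    # log of prod_i lam * exp(-lam * x_i) = n log(lam) - lam * sum(x_i)
    return x.size * np.log(lam) - lam * x.sum()

grid = np.linspace(0.1, 10.0, 100_000)              # crude grid search over lambda
lam_numeric = grid[np.argmax(log_likelihood(grid))]
lam_closed_form = 1.0 / x.mean()                    # the analytic MLE for the exponential rate

print(lam_numeric, lam_closed_form)                 # agree up to grid resolution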
Example 8.
Theorem
Let θ̂1, θ̂2, θ̂3, . . . , θ̂n be the MLEs for the parameters θ1, θ2, θ3, . . . , θn,
and let h(θ1, θ2, θ3, . . . , θn) be any function. The MLE of the function
h(θ1, θ2, θ3, . . . , θn) is h(θ̂1, θ̂2, θ̂3, . . . , θ̂n).
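Continuing the exponential sketch above (still an illustrative addition): by this invariance property, plugging λ̂ into any function gives the MLE of that function, for example the MLE of the mean 1/λ and of the tail probability P(X > 1).

import numpy as np

rng = np.random.default_rng(4)                      # same simulated sample as above
x = rng.exponential(scale=1 / 2.5, size=200)
lam_hat = 1.0 / x.mean()                            # MLE of the exponential rate lambda

mean_mle = 1.0 / lam_hat                            # MLE of the mean 1/lambda (equals x.mean())
tail_mle = np.exp(-lam_hat * 1.0)                   # MLE of P(X > 1) = exp(-lambda)
print(mean_mle, tail_mle)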
Example 12.
Example 18.