Appendix C Mathematical Statistics 2015

The document outlines the topics covered in a course on fundamental mathematical statistics. It is divided into three parts. Part one covers populations, parameters, and random sampling, finite sample properties of estimators, and asymptotic properties of estimators. Part two discusses general approaches to parameter estimation and interval estimation and confidence intervals. Part three covers hypothesis testing and remarks on notation. The first section defines key concepts such as populations, parameters, and random sampling and gives examples to illustrate estimation and hypothesis testing.


Outline: Fundamentals of Mathematical Statistics

Read Wooldridge, Appendix C.

Part One
I.   Populations, Parameters, and Random Sampling
II.  Finite Sample Properties of Estimators
III. Asymptotic or Large Sample Properties of Estimators

Part Two
IV.  General Approaches to Parameter Estimation
V.   Interval Estimation and Confidence Intervals

Part Three
VI.  Hypothesis Testing
VII. Remarks on Notation

I. Random S. II. Finite S. III. Asymptotic S. IV. Parameter E. V. Interval E. & Confidence I. VI. Hypothesis T VII. Remarks
Fundamentals of Mathematical Statistics . Intensive Course in Mathematics and Statistics . Chairat Aemkulwat

I. Populations, Parameters, and Random Sampling

• Statistical inference involves learning something about the population from a sample.
• By "learning", we can mean several things.
  – Most important are estimation and hypothesis testing.
• Population refers to any well-defined group of subjects.
• Sample: data obtained on a subset of the population.
• Parameters are constants that determine the directions and strengths of relationships among variables.

Example:
• Suppose our interest is to find the average percentage increase in wage given an additional year of education.
  – Population: the wage and education of 33 million working people.

Example results:
  o The return to education is 7.5% – an example of a point estimate.
  o The return to education is between 5.6% and 9.4% – an example of an interval estimate.
  o Does education affect wage? – an example of hypothesis testing.
Sampling

• Let Y be a random variable representing a population with a probability density function f(y;θ).
• The probability density function (pdf) of Y is assumed to be known except for the value of θ.
  – Different values of θ imply different population distributions.

Random Sampling: Definition
• If Y1, …, Yn are independent random variables with a common probability density function f(y;θ), then {Y1, …, Yn} is a random sample from the population represented by f(y;θ).
• We also say the Yi are i.i.d. (independent, identically distributed) random variables from f(y;θ).

Example: a random sample from the normal distribution.
• If Y1, …, Yn are independent random variables with a normal distribution with mean μ and variance σ², then {Y1, …, Yn} is a random sample from the Normal(μ, σ²) population.

Sampling

Example: working population
• We may obtain a sample of 100 families.
  – Note that the data we observe will differ for each different sample. A sample provides a set of numbers, say, {y1, …, yn}.

Example: a random sample from the Bernoulli distribution.
• If Y1, …, Yn are independent random variables, each distributed as Bernoulli(θ) so that
  P(Yi = 1) = θ
  P(Yi = 0) = 1 − θ,
  then {Y1, …, Yn} constitutes a random sample from the Bernoulli(θ) distribution.
• Note that
  Yi = 1 if passenger i shows up
  Yi = 0 otherwise.
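The Bernoulli sampling setup above can be sketched in a few lines of Python (a minimal illustration, not from the slides; the show-up probability θ = 0.75, the sample size, and the seed are made-up values):

```python
import random

random.seed(0)

theta = 0.75   # hypothetical show-up probability (assumed for illustration)
n = 1000       # sample size

# Draw an i.i.d. Bernoulli(theta) random sample {y_1, ..., y_n}:
# y_i = 1 if "passenger i shows up", 0 otherwise.
sample = [1 if random.random() < theta else 0 for _ in range(n)]

# The sample proportion is the natural estimate of theta.
theta_hat = sum(sample) / n
print(round(theta_hat, 3))
```

Rerunning with a different seed gives a different sample {y1, …, yn}, and hence a different estimate, which is exactly the point of the slide: the observed data differ across samples.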
II. Finite Sample Properties of Estimators
(A. Unbiasedness, B. Variance, C. Efficiency)

• A "finite sample" property holds for a sample of any size, no matter how large or small.
  – Also called small sample properties.
• Asymptotic properties, by contrast, have to do with the behavior of estimators as the sample size grows without bound.

Estimators and Estimates

• Suppose {Y1, …, Yn} is a random sample from a population distribution that depends on an unknown parameter θ.
  – An estimator of θ is a rule that assigns a value of θ to each possible outcome of the sample.
  – The rule is specified before any sampling is carried out.
• An estimator W of a parameter θ can be expressed as

  W = h(Y1, …, Yn)

  for some known function h.
• When a particular set of values, say {y1, …, yn}, is plugged into the function h, we obtain an estimate of θ.
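The rule W = h(Y1, …, Yn) can be made concrete with a short Python sketch (the data values are hypothetical; h here is the sample average, one possible choice of rule):

```python
def h(sample):
    """Estimator rule: the sample average (one estimator of the population mean)."""
    return sum(sample) / len(sample)

# A particular realized sample {y_1, ..., y_n} (made-up numbers):
realized = [5.2, 6.1, 5.8, 6.5]

# Plugging the realized values into the rule h yields the estimate.
estimate = h(realized)
print(estimate)
```

The function h is fixed before any sampling is carried out; only the realized values, and hence the estimate, change from sample to sample.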

Estimators and Estimates: the sampling distribution

• The distribution of an estimator W is called its sampling distribution.
  – It describes the likelihood of various outcomes of W across different random samples.
  – The sampling distribution of W is determined by the probability distribution of the Yi and the function h.

Example:
• Let {Y1, …, Yn} be a random sample from the population with mean μ. The natural estimator of μ is the average of the random sample:

  Ȳ = (1/n) Σ Yi

  Note that Ȳ is called the sample average.
• Unlike in Appendix A, where the sample average of a set of numbers was a descriptive statistic, here Ȳ is an estimator, i.e., a random variable.
• For actual data outcomes y1, …, yn, the estimate is the average of the numbers in the sample:

  ȳ = (1/n) Σ yi
Example C.1: City Unemployment Rates

• Estimator: the sample average Ȳ of the city unemployment rates.
• Estimate: ȳ = 6.0
  – Our estimate of the average city unemployment rate in the U.S. is 6.0%.

Notes
1) Each sample results in a different estimate.
2) The rule for obtaining the estimate is the same.

Unbiasedness

Unbiased Estimator: Definition
An estimator W of θ is unbiased if

  E(W) = θ

for all possible values of θ.
• Intuitively, if the estimator is unbiased, then its probability distribution has an expected value equal to the parameter it is supposed to be estimating.
• Unbiasedness does not mean that the estimate from a particular sample is equal to θ, or even very close to θ.
• If we could indefinitely draw random samples on Y from the population and average the estimates over all random samples, we would obtain θ.

Unbiasedness

Bias of an Estimator: Definition
If W is an estimator of θ, its bias is defined as

  Bias(W) = E(W) − θ.

An estimator has a positive bias if E(W) − θ > 0.
• The unbiasedness of an estimator and the size of its bias depend on
  – the distribution of Y, and
  – the function h.
• We cannot control the distribution of Y, but we can choose the rule h.

Show: the sample average is an unbiased estimator of the population mean μ.

  E(Ȳ) = E[(1/n) Σ Yi] = (1/n) E[Σ Yi] = (1/n) Σ E(Yi) = (1/n) Σ μ = (1/n)(nμ) = μ
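A small Monte Carlo check of this result (a sketch; the Normal(2, 1) population and the simulation sizes are assumptions for illustration): averaging the estimates over many random samples should come out close to μ.

```python
import random

random.seed(1)
mu, sigma = 2.0, 1.0      # assumed Normal(mu, sigma^2) population
n, reps = 10, 20000       # sample size and number of repeated samples

estimates = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)     # one estimate ybar per sample

# Averaging the estimates over many samples approximates E(Ybar).
mean_of_estimates = sum(estimates) / reps
print(round(mean_of_estimates, 3))
```

No single estimate equals μ, but their average across repeated samples settles near μ, which is what unbiasedness asserts.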
Unbiasedness

Weaknesses:
(1) Some very good estimators are not unbiased.
(2) Unbiased estimators can be quite poor estimators.

Example:
Let W = Y1 (from a random sample of size n, we discard all of the observations except the first). Then E(Y1) = μ, so W is unbiased, yet it throws away almost all of the sample.

The Sampling Variance of Estimators

• Unbiasedness ensures that the probability distribution of an estimator has a mean value equal to the parameter it is supposed to be estimating.
• The variance of an estimator shows how spread out the distribution of the estimator is; it is a measure of dispersion, often called the sampling variance.
• Example: the variance of the sample average from a population.
• Summary:
  If {Y1, …, Yn} is a random sample from a population with mean μ and variance σ², then Ȳ
  • has the same mean μ as the population;
  • has sampling variance equal to the population variance over the sample size:

    Var(Ȳ) = σ²/n.
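The formula Var(Ȳ) = σ²/n can be checked by simulation (a sketch; the Normal population and all parameter values are assumptions for illustration):

```python
import random, statistics

random.seed(2)
mu, sigma = 0.0, 3.0      # assumed population mean and standard deviation
n, reps = 25, 20000

ybars = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    ybars.append(sum(sample) / n)

# Dispersion of ybar across repeated samples vs. the theoretical value.
simulated = statistics.pvariance(ybars)
theoretical = sigma**2 / n     # sigma^2 / n = 9 / 25 = 0.36
print(round(simulated, 3), theoretical)
```

Doubling n would cut the theoretical sampling variance in half, which is why larger samples give more precise averages.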

The Sampling Variance of Estimators

• Suppose W1 and W2 are both unbiased estimators of θ, but W1 is more tightly centered about θ. (See graph!) This implies that the probability that W1 is farther than any given distance from θ is less than the probability that W2 is farther than the same distance from θ.

• Define the estimator

  S² = [1/(n − 1)] Σ (Yi − Ȳ)²,

  which is usually called the sample variance.
• One can show that the sample variance is an unbiased estimator of σ²:

  E(S²) = σ².
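A simulation sketch of E(S²) = σ² (made-up parameters; note that Python's statistics.variance uses the n − 1 divisor, matching the definition of S² above):

```python
import random, statistics

random.seed(3)
mu, sigma = 5.0, 2.0      # assumed population; sigma^2 = 4
n, reps = 10, 20000

# statistics.variance divides by n - 1, i.e., it computes S^2.
s2_values = [statistics.variance([random.gauss(mu, sigma) for _ in range(n)])
             for _ in range(reps)]

mean_s2 = sum(s2_values) / reps   # approximates E(S^2)
print(round(mean_s2, 2))
```

Dividing by n instead of n − 1 (statistics.pvariance) would bias the estimator downward by the factor (n − 1)/n.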
The Sampling Variance of Estimators

Example:
For a random sample with mean μ and variance σ², compare the estimator Ȳ with the estimator Y1 (the first observation drawn).

  Estimator                Ȳ                  Y1
  Mean (both unbiased)     E(Ȳ) = μ           E(Y1) = μ
  Variance                 Var(Ȳ) = σ²/n      Var(Y1) = σ²

If the sample size is n = 10, Var(Y1) is ten times larger than Var(Ȳ).
→ Which estimator is better?

Example:
From the simulation in Table C.1: 20 random samples of size 10 (n = 10) generated from the normal distribution with μ = 2 and σ² = 1.
• y1 ranges from −0.64 to 4.27; mean = 1.89.
• ȳ ranges from 1.16 to 2.58; mean = 1.96.
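The Table C.1 experiment can be replicated in a few lines (a sketch; the seed, and hence the exact numbers, will differ from the textbook's table, but the qualitative pattern should match):

```python
import random

random.seed(4)
mu, sigma, n, samples = 2.0, 1.0, 10, 20   # Table C.1 setup

first_obs, averages = [], []
for _ in range(samples):
    ys = [random.gauss(mu, sigma) for _ in range(n)]
    first_obs.append(ys[0])        # the estimate from Y1
    averages.append(sum(ys) / n)   # the estimate from Ybar

# Range of the estimates across the 20 samples:
spread_y1 = max(first_obs) - min(first_obs)
spread_ybar = max(averages) - min(averages)
print(round(spread_y1, 2), round(spread_ybar, 2))
```

As in the table, the y1 estimates should be spread out much more widely around μ = 2 than the ȳ estimates.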

Relative Efficiency

Relative Efficiency: If W1 and W2 are two unbiased estimators of θ, then W1 is efficient relative to W2 when Var(W1) ≤ Var(W2) for all θ.

Efficiency

Example:
• For estimating the population mean μ, Var(Ȳ) < Var(Y1) for any value of σ² (and any n > 1).
• The estimator Ȳ is efficient relative to Y1 for estimating μ.
• In a certain class of estimators, we can show that the sample average Ȳ has the smallest variance.

Example:
Show that Ȳ has the smallest variance among all unbiased estimators that are also linear functions of Y1, Y2, …, Yn.
– The assumptions are that the Yi have common mean and variance, and that they are pairwise uncorrelated.
Efficiency

• If we do not restrict our attention to unbiased estimators, then comparing variances is meaningless.

Example: In estimating the population mean μ, consider the trivial estimator equal to zero:
– mean equal to zero: E(0) = 0
– variance equal to zero: Var(0) = 0
– bias equal to −μ: Bias(0) = E(0) − μ = −μ

• So this trivial estimator is a very poor estimator when μ is large, because its bias is large.

• A measure for comparing estimators that are not necessarily unbiased:
  – the mean squared error (MSE).
• If W is an estimator of θ, then

  MSE(W) = E[(W − θ)²]
         = E[(W − E(W) + E(W) − θ)²]
         = Var(W) + [Bias(W)]².

• The MSE measures how far, on average, the estimator is away from θ. It depends on both the variance and the bias.
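A tiny numeric illustration of the decomposition MSE(W) = Var(W) + [Bias(W)]², comparing the sample average with the trivial always-zero estimator (μ, σ², and n are made-up values):

```python
mu, sigma2, n = 2.0, 1.0, 10   # assumed population mean, variance, sample size

# Sample average: unbiased (Bias = 0), Var = sigma^2 / n.
mse_ybar = sigma2 / n + 0.0**2     # = 0.1

# Trivial estimator W = 0: Var = 0, Bias = -mu.
mse_zero = 0.0 + (-mu)**2          # = 4.0

print(mse_ybar, mse_zero)
```

Here the unbiased sample average wins by a wide margin, but if μ were very close to zero the trivial estimator's MSE would be small too, which is why MSE, not variance alone, is the right yardstick for biased estimators.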

Problem C.1

C.1 Let Y1, Y2, Y3, and Y4 be independent, identically distributed random variables from a population with mean μ and variance σ². Let

  Ȳ = (1/4)(Y1 + Y2 + Y3 + Y4)

denote the average of these four random variables.

(i) What are the expected value and variance of Ȳ in terms of μ and σ²? [ans.]

(ii) Now, consider a different estimator of μ:

  W = (1/8)Y1 + (1/8)Y2 + (1/4)Y3 + (1/2)Y4

This is an example of a weighted average of the Yi. Show that W is also an unbiased estimator of μ. Find the variance of W. [ans.]

(iii) Based on your answers to parts (i) and (ii), which estimator of μ do you prefer, Ȳ or W? [ans.]
Problem C.1 (i)

(i) This is just a special case of what we covered in the text, with n = 4:
• E(Ȳ) = μ and Var(Ȳ) = σ²/4, since

  E(Ȳ) = E[(1/n) Σ Yi] = (1/n) Σ E(Yi) = (1/n)(nμ) = μ.

Problem C.1 (ii)

(ii)
• W is unbiased:
  E(W) = E(Y1)/8 + E(Y2)/8 + E(Y3)/4 + E(Y4)/2
       = μ[(1/8) + (1/8) + (1/4) + (1/2)]
       = μ(1 + 1 + 2 + 4)/8 = μ.
• Find the variance of W; note that the Yi are independent:
  Var(W) = Var(Y1)/64 + Var(Y2)/64 + Var(Y3)/16 + Var(Y4)/4
         = σ²[(1/64) + (1/64) + (4/64) + (16/64)]
         = σ²(22/64)
         = σ²(11/32).
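These answers can be spot-checked by simulation (a sketch under an assumed Normal population; any distribution with the stated mean and variance would do):

```python
import random, statistics

random.seed(5)
mu, sigma, reps = 3.0, 2.0, 40000   # assumed mu and sigma; sigma^2 = 4

w_vals, ybar_vals = [], []
for _ in range(reps):
    y = [random.gauss(mu, sigma) for _ in range(4)]
    w_vals.append(y[0]/8 + y[1]/8 + y[2]/4 + y[3]/2)   # weighted average W
    ybar_vals.append(sum(y) / 4)                       # Ybar

print(round(sum(w_vals) / reps, 2))                # near mu = 3 (W unbiased)
print(round(statistics.pvariance(w_vals), 2))      # near (11/32)*sigma^2 = 1.375
print(round(statistics.pvariance(ybar_vals), 2))   # near sigma^2/4 = 1.0
```

Both estimators center on μ, but W's sampling variance exceeds Ȳ's, consistent with part (iii).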

Problem C.1 (iii)

(iii)
• Var(W) = (11/32)σ² and Var(Ȳ) = (8/32)σ² = σ²/4.
• Because 11/32 > 8/32 = 1/4, Var(W) > Var(Ȳ) for any σ² > 0, so Ȳ is preferred to W: each is unbiased, and Ȳ has the smaller variance.

III. Asymptotic or Large Sample Properties of Estimators
(A. Consistency, B. Asy. Normality)

• For estimating a population mean μ:
  – One notable feature of Y1 is that it has the same variance for any sample size; Y1 does not improve as n grows.
  – Ȳ improves in the sense that its variance gets smaller as n gets larger.
• We can rule out silly estimators by studying the asymptotic or large sample properties of estimators (n → ∞).
• How large is a "large" sample?
  – This depends on the underlying population distribution.
  – Note that large sample approximations have been known to work well for sample sizes as small as 20 observations (n = 20).
Consistency

• Consistency concerns how far the estimator is likely to be from the parameter it is supposed to be estimating as the sample size increases indefinitely.

• Definition: Consistency
  Let Wn be an estimator of θ based on a sample Y1, …, Yn of size n. Then Wn is a consistent estimator of θ if for every ε > 0,

  P(|Wn − θ| > ε) → 0 as n → ∞.

• Note that we index the estimator by the sample size, n, in stating this definition.

Remarks:
1. The distribution of Wn becomes more and more concentrated about θ as the sample size increases (n → ∞).
2. For larger sample sizes, Wn is less and less likely to be very far from θ.
3. When Wn is consistent, we say that θ is the probability limit of Wn, written as plim(Wn) = θ.
4. The conclusion that Ȳn is a consistent estimator of μ is known as the law of large numbers (LLN).

Consistency

• Unbiased estimators are not necessarily consistent, but those whose variances shrink to zero as the sample size increases are consistent.

• Formally, if Wn is an unbiased estimator of θ and Var(Wn) → 0 as n → ∞, then plim(Wn) = θ.

• Example: the average of a random sample drawn from a population with mean μ and variance σ²:
  – the sample average is unbiased;
  – Var(Ȳn) = σ²/n, thus Var(Ȳn) → 0 as n → ∞;
  – so Ȳn is a consistent estimator of μ.

Law of Large Numbers

Definition: LLN
Let Y1, …, Yn be i.i.d. random variables with mean μ. Then

  plim(Ȳn) = μ.

Intuitively, the LLN says that if we are interested in finding the population average μ, we can get arbitrarily close to μ by choosing a sufficiently large sample.
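Consistency of Ȳn can be visualized by estimating P(|Ȳn − μ| > ε) for growing n (a sketch; the Normal population, ε, and the simulation sizes are all made-up values):

```python
import random

random.seed(6)
mu, sigma, eps, reps = 1.0, 1.0, 0.2, 2000

def tail_prob(n):
    """Estimate P(|Ybar_n - mu| > eps) by simulating many samples of size n."""
    hits = 0
    for _ in range(reps):
        ybar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if abs(ybar - mu) > eps:
            hits += 1
    return hits / reps

# The tail probability should shrink toward 0 as n grows.
probs = [tail_prob(n) for n in (10, 50, 200)]
print(probs)
```

The shrinking sequence of probabilities is the definition of consistency in action: for a fixed ε, the chance of landing far from μ vanishes as n → ∞.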
Consistency

Property PLIM.1
Let θ be a parameter and define a new parameter γ = g(θ) for some continuous function g(·). Suppose plim(Wn) = θ. Define an estimator of γ as Gn = g(Wn). Then

  plim(Gn) = γ.

Alternatively,

  plim[g(Wn)] = g[plim(Wn)]

for any continuous function g(·).
• What is a continuous function?
  – Roughly, a continuous function is a "function that can be graphed without lifting your pencil from the paper".

Examples:
• g(θ) = a + bθ
• g(θ) = θ²
• g(θ) = 1/θ
• g(θ) = exp(θ)

Property PLIM.2
If plim(Tn) = α and plim(Un) = β, then
1) plim(Tn + Un) = α + β
2) plim(TnUn) = αβ
3) plim(Tn/Un) = α/β, provided β ≠ 0.

Consistency

Example: two consistent estimators of μ.
(1) Ȳn = (1/n) Σ Yi, with E(Ȳn) = μ.
(2) Yn* = [1/(n − 1)] Σ Yi, with E(Yn*) = [n/(n − 1)]μ, so Yn* is biased.

• As n → ∞, both Ȳn and Yn* are consistent estimators of μ:

(1) plim(Ȳn) = μ, since Ȳn is unbiased and Var(Ȳn) = σ²/n → 0 as n → ∞.
(2) plim(Yn*) = plim[n/(n − 1)] · plim(Ȳn) = 1 · μ = μ.

• Yn* is also a consistent estimator, since Yn* approaches the value of the parameter μ as the sample size gets larger and larger.

Example:
Estimating the standard deviation σ from a population with mean μ and variance σ².

Given the sample variance

  Sn² = [1/(n − 1)] Σ (Yi − Ȳn)²:

– the sample variance is an unbiased estimator of σ²;
– Sn² is also a consistent estimator of σ².

• Sample standard deviation Sn:
  – Sn is not an unbiased estimator of σ. Why? (The square root is a nonlinear function, and in general E[g(X)] ≠ g[E(X)].)
  – Sn is a consistent estimator of σ, by PLIM.1:

    plim(Sn) = √(plim Sn²) = √σ² = σ.
Consistency

Example:
• Yi: annual earnings with a high school education (population mean μY).
• Zi: annual earnings with a college education (population mean μZ).
• Let {Y1, …, Yn} and {Z1, …, Zn} be random samples of size n from a population of workers, and suppose we want to estimate the percentage difference in annual earnings between the two groups, which is

  γ = 100 · (μZ − μY)/μY.

• Due to the facts that

  plim(Z̄n) = μZ and plim(Ȳn) = μY,

  it follows from PLIM.1 and PLIM.2 that

  Gn = 100 · (Z̄n − Ȳn)/Ȳn

  is a consistent estimator of γ: plim(Gn) = γ. Gn is just the percentage difference between Z̄n and Ȳn in the sample.

Asymptotic Normality

• Consistency is a property of point estimators, as is unbiasedness.
• Consistency and distribution:
  – Consistency does not tell us about the shape of the distribution of an estimator for a given sample size.
  – Most econometric estimators have distributions that are well approximated by a normal distribution for large samples (n → ∞).
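A sketch of the earnings example (all population values are made up): with a large n, the sample statistic Gn = 100·(Z̄n − Ȳn)/Ȳn lands close to the true γ = 100·(μZ − μY)/μY.

```python
import random

random.seed(7)
muY, muZ, sd = 30000.0, 40000.0, 8000.0   # assumed earnings populations
gamma = 100 * (muZ - muY) / muY           # true percentage difference

def G(n):
    """Gn = 100*(Zbar - Ybar)/Ybar computed from samples of size n."""
    ybar = sum(random.gauss(muY, sd) for _ in range(n)) / n
    zbar = sum(random.gauss(muZ, sd) for _ in range(n)) / n
    return 100 * (zbar - ybar) / ybar

g = G(100_000)
print(round(g, 1))
```

Consistency of Gn follows because it is a continuous function (sum, product, ratio) of the two consistent sample averages, exactly the combination PLIM.1 and PLIM.2 cover.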

Asymptotic Normality

Definition: Asymptotic Normality
• Let {Zn : n = 1, 2, …} be a sequence of random variables such that for all numbers z,

  P(Zn ≤ z) → Φ(z) as n → ∞,

  where Φ(z) is the standard normal cumulative distribution function (cdf). Then Zn is said to have an asymptotic standard normal distribution; "a" stands for "asymptotically" or "approximately".
• Intuitively, this property means that the cdf for Zn gets closer and closer to the cdf of the standard normal distribution as the sample size n gets large.

Central Limit Theorem (CLT)

• Definition: CLT
  Let {Y1, …, Yn} be a random sample with mean μ and variance σ². Then

  Zn = (Ȳn − μ)/(σ/√n)

  has an asymptotic standard normal distribution. The variable Zn is the standardized version of Ȳn: we have subtracted off E(Ȳn) = μ and divided by sd(Ȳn) = σ/√n.
• Intuitively, the central limit theorem (CLT) says that the average from a random sample from any population, when standardized, has an asymptotic standard normal distribution.
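A CLT sketch using a decidedly non-normal population (Bernoulli, with an assumed θ): the standardized sample mean Zn should behave like a standard normal, so roughly 95% of its draws fall within ±1.96.

```python
import random, math

random.seed(8)
theta = 0.3                                     # assumed Bernoulli parameter
mu, sigma = theta, math.sqrt(theta * (1 - theta))
n, reps = 100, 5000

inside = 0
for _ in range(reps):
    ybar = sum(1 if random.random() < theta else 0 for _ in range(n)) / n
    zn = (ybar - mu) / (sigma / math.sqrt(n))   # standardized sample mean
    if abs(zn) <= 1.96:
        inside += 1

print(round(inside / reps, 3))
```

The underlying Yi take only the values 0 and 1, yet the standardized average behaves almost like a Normal(0, 1) draw, which is the substance of the CLT.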
Central Limit Theorem

• If σ is replaced by Sn, does the standardized average have an approximate standard normal distribution for large n?

  (1) Zn = (Ȳn − μ)/(σ/√n)

  The exact distribution of (Ȳn − μ)/(Sn/√n) is not the same as (1), but the difference is often small enough to be ignored for large n.

Problem C.3

C.3 Let Ȳ denote the sample average from a random sample with mean μ and variance σ². Consider two alternative estimators of μ:

  W1 = [(n − 1)/n]Ȳ and
  W2 = Ȳ/2.

(i) Show that W1 and W2 are both biased estimators of μ and find the biases. What happens to the biases as n → ∞? Comment on any important differences in bias for the two estimators as the sample size gets large. [ans.]

Problem C.3 continued …
(ii) Find the probability limits of W1 and W2. {Hint: Use properties PLIM.1 and PLIM.2; for W1, note that plim[(n − 1)/n] = 1.} Which estimator is consistent? [ans.]
(iii) Find Var(W1) and Var(W2). [ans.]
(iv) Argue that W1 is a better estimator than Ȳ if µ is "close" to zero. (Consider both bias and variance.) [ans.]

Problem C.3 (i)
• E(W1) = [(n − 1)/n]E(Ȳ) = [(n − 1)/n]µ, so
Bias(W1) = [(n − 1)/n]µ − µ = −µ/n.
As n → ∞, Bias(W1) → 0.
• Similarly, E(W2) = E(Ȳ)/2 = µ/2, so
Bias(W2) = µ/2 − µ = −µ/2.
As n → ∞, Bias(W2) stays at −µ/2.
• The bias in W1 tends to zero as n → ∞, while the bias in W2 is −µ/2 for all n. This is an important difference.

Problem C.3 (ii)
• plim(W1) = plim[(n − 1)/n]·plim(Ȳ) = 1·µ = µ.
• plim(W2) = plim(Ȳ)/2 = µ/2.
• Because plim(W1) = µ and plim(W2) = µ/2, W1 is consistent whereas W2 is inconsistent.

Problem C.3 (iii)
• Var(W1) = [(n − 1)/n]²·Var(Ȳ) = [(n − 1)²/n³]σ².
• Var(W2) = Var(Ȳ)/4 = σ²/(4n).


Problem C.3 (iv)
• Because Ȳ is unbiased, its mean squared error is simply its variance:
MSE(Ȳ) = Var(Ȳ) + [Bias(Ȳ)]² = σ²/n.
• On the other hand,
MSE(W1) = Var(W1) + [Bias(W1)]² = [(n − 1)²/n³]σ² + µ²/n².
• Let µ = 0. Then MSE(W1) = Var(W1) = [(n − 1)²/n²](σ²/n). Thus MSE(W1) < MSE(Ȳ), or Var(W1) < Var(Ȳ), because [(n − 1)²/n²](σ²/n) < σ²/n when (n − 1)/n < 1.
• Therefore, MSE(W1) is smaller than Var(Ȳ) for µ close to zero.
• For large n, the difference between the two estimators is trivial.

IV. General Approaches to Parameter Estimation
A. Moments  B. Max Likelihood  C. Least Squares
• We have learned finite-sample and asymptotic properties of estimators – unbiasedness, consistency, and efficiency.
Question: Are there general approaches that produce estimators with good properties?
• Given a parameter θ appearing in a population distribution, there are usually many ways to obtain an unbiased and consistent estimator of θ.
• There are three methods:
– Method of Moments
– Method of Maximum Likelihood
– Method of Least Squares
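The MSE comparison in Problem C.3(iv) can be checked numerically; the sketch below uses the formulas derived there, with σ² = 1, n = 20, and the comparison value µ = 5 chosen purely for illustration:

```python
# Hedged sketch: compare MSE(W1) with MSE(Ybar) = sigma^2/n using the
# closed-form expressions from Problem C.3 (sigma^2 = 1 is illustrative).
sigma2 = 1.0

def mse_ybar(n):
    return sigma2 / n

def mse_w1(n, mu):
    # Var(W1) + Bias(W1)^2 = [(n-1)^2/n^3]*sigma^2 + mu^2/n^2
    return (n - 1) ** 2 / n ** 3 * sigma2 + mu ** 2 / n ** 2

n = 20
# For mu = 0, W1 dominates Ybar ...
better_at_zero = mse_w1(n, 0.0) < mse_ybar(n)
# ... but for mu far from zero the squared-bias term can make W1 worse.
worse_far = mse_w1(n, 5.0) > mse_ybar(n)
```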

Method of Moments
• The basis of the method of moments proceeds as follows.
– The parameter θ is shown to be related to some function of an expected value in the distribution of Y, usually E(Y) and E(Y²).

Example: Population mean
• Suppose θ is a function of µ; i.e., θ = g(µ).
• Given that the sample average Ȳ is an unbiased and consistent estimator of µ, it is natural to replace µ with Ȳ. Thus, g(Ȳ) is the estimator of θ.
– In addition, g(Ȳ) is a consistent estimator of θ. If g(µ) is a linear function of µ, then g(Ȳ) is an unbiased estimator of θ.
• Why is this a method of moments?
– Here we replace the population moment µ with the sample average Ȳ.

Example: Population covariance
• The population covariance between two random variables X and Y is
σXY = E[(X − µX)(Y − µY)]
• The method of moments suggests the following estimator:
SXY = (1/n) Σᵢ (Xi − X̄)(Yi − Ȳ)
1) This is a consistent estimator of σXY.
2) However, it is a biased estimator.

Example: Sample covariance
• The sample covariance is
SXY = [1/(n − 1)] Σᵢ (Xi − X̄)(Yi − Ȳ)
1) It can be shown that this is an unbiased estimator of σXY.
2) It is a consistent estimator of σXY.
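A minimal sketch of both covariance estimators, plus the sample correlation coefficient RXY built from them (the data values here are illustrative, not from the text):

```python
import math

# Hedged sketch: biased (divide by n) vs. unbiased (divide by n-1)
# covariance estimators on a small illustrative data set.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 4.0, 3.0, 5.0]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

cross = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
s_xy_mom = cross / n        # method-of-moments estimator (biased)
s_xy = cross / (n - 1)      # sample covariance (unbiased)

# Sample correlation coefficient RXY = SXY / (SX * SY)
s_x = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
s_y = math.sqrt(sum((y - ybar) ** 2 for y in ys) / (n - 1))
r_xy = s_xy / (s_x * s_y)
```

For these numbers the two covariance estimates are 1.6 and 2.0: the divide-by-n version is systematically smaller, which is exactly its bias.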


Method of Moments
Example: Population correlation
• The population correlation is
ρXY = σXY/(σX·σY)
• The method of moments suggests estimating ρXY as
RXY = SXY/(SX·SY)
This is called the sample correlation coefficient.
Notes:
1) RXY is a consistent estimator of ρXY. (Why?) Because SXY, SX, and SY are consistent.
2) RXY is not an unbiased estimator of ρXY. (Why?)
First, SX and SY are not unbiased estimators.
Second, RXY is a ratio of estimators, so it would not be unbiased.

Maximum Likelihood
• Let {Y1, …, Yn} be a random sample from the population distribution f(y;θ).
• The joint distribution of {Y1, …, Yn} can be written as the product of the densities: f(Y1;θ)f(Y2;θ) ⋯ f(Yn;θ).
– In the discrete case, this is
P(Y1 = y1, Y2 = y2, …, Yn = yn) = P(Y1 = y1)P(Y2 = y2) ⋯ P(Yn = yn)
• The likelihood function, which is a random variable, can be defined as
L(θ; Y1, …, Yn) = f(Y1;θ)f(Y2;θ) ⋯ f(Yn;θ)
• It is easier to work with the log-likelihood function:
1) It is obtained by taking the natural log of the likelihood function.
2) The log of the product is the sum of the logs.
• Then, the maximum likelihood estimator of θ, call it W, is the value of θ that maximizes the likelihood function.
– Intuitively, out of the possible values for θ, we choose the one that makes the likelihood of the observed values largest.
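A minimal sketch of maximum likelihood, assuming a Bernoulli(θ) population (the data and the grid of candidate values are illustrative): the grid value that maximizes the log-likelihood coincides with the sample mean, the well-known MLE for a Bernoulli proportion.

```python
import math

# Hedged sketch: for a Bernoulli(theta) sample, the log-likelihood is
# sum[y*log(theta) + (1-y)*log(1-theta)]; its maximizer is ybar.
ys = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # illustrative data, 7 successes

def log_lik(theta):
    return sum(y * math.log(theta) + (1 - y) * math.log(1 - theta)
               for y in ys)

# Grid search over candidate values of theta in (0, 1)
grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=log_lik)
```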

Maximum Likelihood
• Properties:
1) MLE is usually consistent and sometimes unbiased.
2) MLE is generally the most asymptotically efficient estimator (when the population model f(y;θ) is correctly specified).
– MLE has the smallest variance among all unbiased estimators of θ.
– MLE is the minimum variance unbiased estimator.

Least Squares
• Least squares estimators are a third kind of estimator.
• The sample mean, Ȳ, is a least squares estimator of the population mean µ.
– It can be shown that the value of m that makes the sum of squared deviations Σᵢ (Yi − m)² as small as possible is m = Ȳ.
Properties:
1) LSE is consistent and unbiased.
2) LSE is generally the most efficient estimator in finite and large samples.
– LSE has the smallest variance among all linear unbiased estimators of µ.
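A quick sketch (with illustrative data) confirming that m = ȳ minimizes the sum of squared deviations:

```python
# Hedged sketch: the sum of squared deviations sum((y - m)^2) over a
# grid of candidate m values is minimized at m = ybar.
ys = [2.0, 4.0, 6.0, 8.0]
ybar = sum(ys) / len(ys)   # 5.0

def ssd(m):
    return sum((y - m) ** 2 for y in ys)

grid = [i / 100 for i in range(0, 1001)]  # m in [0, 10], step .01
m_star = min(grid, key=ssd)
```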


Methods of Moments, Least Squares, Maximum Likelihood
• The principles of least squares, method of moments, and maximum likelihood often result in the same estimator.

Summary:
Method               Unbiased   Consistent   Efficiency
Moments              unbiased   consistent   efficient
Least Squares        unbiased   consistent   efficient
Maximum Likelihood   usually    consistent   asymptotically efficient

V. Interval Estimation and Confidence Intervals
A. Nature  B. CI N(0,1)  C. Rule of Thumb  D. Asymptotic CI
• A point estimate is the researcher's best guess at the population value, but it provides no information about how close the estimate is likely to be to the population parameter.
Example:
• On the basis of a random sample of workers, a researcher reports that job training grants increase hourly wage by 6.4%.
– We cannot know how close the estimate is for a particular sample because we do not know the population value.

The Nature of Interval Estimation
• Interval estimation comes in when we make statements involving probabilities.
– One way of assessing the uncertainty in an estimator is its sampling standard deviation.
• Interval estimation uses information on the point estimate and the standard deviation to construct a confidence interval.
– It shows where the population value is likely to lie in relation to the estimate.

Concept of interval estimation:
• Assume {Y1, …, Yn} is a random sample from the Normal(µ, σ²) population. Suppose that the variance σ² is known (or σ² = 1).
– The sample average Ȳ has a normal distribution with mean µ and variance σ²/n; i.e., Ȳ ~ Normal(µ, σ²/n).
• The standardized version of Ȳ has a standard normal distribution:
P(−1.96 < (Ȳ − µ)/(σ/√n) < 1.96) = .95
• Rewrite this as
P(Ȳ − 1.96σ/√n < µ < Ȳ + 1.96σ/√n) = .95
• The random interval is
(Ȳ − 1.96σ/√n, Ȳ + 1.96σ/√n)
1) The probability that the random interval contains the population mean µ is .95, or 95%.
2) This information allows us to construct an interval estimate of µ by plugging the sample outcome of the average, ȳ, and σ = 1 into the random interval.
• The result is called a 95% confidence interval. A shorthand notation is
ȳ ± 1.96σ/√n
– The random interval uses the estimator Ȳ; the confidence interval uses the realized value ȳ.


The Nature of Interval Estimation
Example: Suppose n = 16, ȳ = 7.3, and σ = 1.
• The 95% confidence interval for µ is
7.3 ± 1.96/√16 = 7.3 ± .49
• We can write this in interval form as [6.81, 7.89].
• The meaning of a confidence interval is more subtle. We mean that the random interval contains µ with probability .95.
– There is a 95% chance that the random interval contains µ.
• The random interval is an example of an interval estimator, since its endpoints change with different samples.

Correct interpretation: a random interval contains µ with probability 0.95.
Incorrect interpretation: the probability that µ is in the interval is 0.95 – since µ is unknown, it either is or is not in the interval.

The Nature of Interval Estimation
• Table C.2 contains calculations for 20 random samples.
• Assume a Normal(2,1) distribution with sample size n = 10.
• Interval estimates of µ are ȳ ± .62.
Results:
1) The interval changes with each random sample.
2) 19 of the 20 intervals contain the population value of µ.
3) Only for replication number 19 is µ = 2 not in the confidence interval.
4) 95% of the samples result in a confidence interval that contains µ.

CIs for the Mean from a Normally Distributed Population
• Suppose the variance σ² is known. The 95% confidence interval is
ȳ ± 1.96σ/√n
• In practice, we rarely know the population variance σ².
• To allow for unknown σ, we can use an estimate:
Ȳ ± 1.96(S/√n)
• However, this random interval no longer contains µ with probability .95, because the constant σ has been replaced with the random variable S.


CIs for the Mean from a Normally Distributed Population
• We use the t distribution, rather than the standard normal distribution:
(Ȳ − µ)/(S/√n) ~ tn−1
where S is the sample standard deviation of the random sample {Y1, …, Yn}.
• To construct a 95% confidence interval using the t distribution, let c be the 97.5th percentile of the tn−1 distribution, so that P(−c < t < c) = .95.
• Once the critical value c is chosen, the random interval
[Ȳ − c·S/√n, Ȳ + c·S/√n]
contains µ with probability .95.

Example:
• Let n = 20, so df = n − 1 = 19 and c = 2.093 (see Table G.2 in Appendix G).
• The 95% confidence interval is
ȳ ± 2.093(s/√20)
where ȳ and s are the values obtained from the sample.
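A sketch of the t-based interval for n = 20, using the critical value c = 2.093 quoted above (the data values are purely illustrative):

```python
import math, statistics

# Hedged sketch: 95% t-based CI for the mean with unknown sigma.
ys = [float(v) for v in range(1, 21)]  # hypothetical sample, n = 20
n = len(ys)
ybar = statistics.mean(ys)             # 10.5
s = statistics.stdev(ys)               # sample std. dev. (divides by n-1)
c = 2.093                              # 97.5th percentile of t(19)
half = c * s / math.sqrt(n)
ci = (ybar - half, ybar + half)
```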
CIs for the Mean from a Normally Distributed Population
• More generally, let cα/2 be the 100(1 − α/2)th percentile of the tn−1 distribution. A 100(1 − α)% confidence interval is
ȳ ± cα/2·(s/√n)
where cα/2 is known once we choose α and the degrees of freedom n − 1.
• Recall that sd(Ȳ) = σ/√n.
• s/√n is the point estimate of sd(Ȳ), or the standard error of ȳ:
se(ȳ) = s/√n
• A 100(1 − α)% confidence interval can thus be written as
ȳ ± cα/2·se(ȳ)
• The notion of the standard error of an estimate plays an important role in econometrics.

Example C.2 Effect of Job Training Grants on Worker Productivity
• A sample of firms receiving job training grants in 1988.
– Scrap rate: the number of items per 100 produced that are not usable and need to be scrapped.
– The change in scrap rates has a normal distribution.
• n = 20, ȳ = −1.15,
se(ȳ) = s/√n = .54 (note that s = 2.41).
• A 95% confidence interval for the mean change in scrap rates µ is
ȳ ± 2.093·se(ȳ) = [−2.28, −0.02]
• With 95% confidence, the average change in scrap rates in the population is not zero!
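The Example C.2 interval can be reproduced from the reported summary numbers alone:

```python
# Sketch of Example C.2: ybar = -1.15, se(ybar) = .54, c = 2.093 (t(19)).
ybar, se, c = -1.15, 0.54, 2.093
ci = (ybar - c * se, ybar + c * se)   # approximately [-2.28, -0.02]
zero_excluded = not (ci[0] <= 0.0 <= ci[1])
```

Since zero lies outside the interval, the data are inconsistent (at the 95% level) with no average change in scrap rates.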
Example C.2 Effect of Job Training Grants on Worker Productivity
• The analysis above has some potentially serious flaws.
• It assumes that any systematic reduction in scrap rates is due to the job training grants.
– Many things (variables) can happen over the course of the year to change worker productivity!

A Simple Rule of Thumb for a 95% Confidence Interval
• A rule of thumb for an approximate 95% confidence interval is ȳ ± 2·se(ȳ).
1) It is slightly too big for large sample sizes.
2) It is slightly too small for small sample sizes.
• Note that the t distribution approaches the standard normal distribution as the degrees of freedom get large.
• In particular, for α = .05, cα/2 → 1.96 as n → ∞ (see graph).

Asymptotic Confidence Intervals for Nonnormal Populations
• For some applications, the population is nonnormal.
– In some cases, the nonnormal population has no standard distribution.
• This does not matter as long as the sample size is sufficiently large for the central limit theorem to give a good approximation of the distribution of the sample average Ȳ.
• A large sample size has a nice feature: it results in small confidence intervals, because the standard error se(ȳ) shrinks to zero as the sample size grows.
• For large n, an approximate 95% confidence interval is
ȳ ± 1.96·se(ȳ)
where 1.96 is the 97.5th percentile of the standard normal distribution.
• Note that the standard normal distribution is used in place of the t distribution because we deal with asymptotics: as n increases without bound, the t distribution approaches the standard normal distribution.

Example C.3 Race Discrimination in Hiring
• Matched pairs analysis – each person in a pair is interviewed for the same job.
θB = probability that the black person is offered a job
θW = probability that the white person is offered a job
• We are interested in the difference θB − θW.
Bi = 1 if the black person gets a job offer from employer i
Wi = 1 if the white person gets a job offer from employer i
• Unbiased estimators of θB and θW are B̄ and W̄, the fractions of interviews for which blacks and whites were offered jobs.
• Define a new variable Yi = Bi − Wi. Yi can take three values:
Yi = −1 if the black person did not get the job but the white person did;
Yi = 0 if both did or both did not get the job;
Yi = 1 if the white person did not get the job but the black person did.
• Then E(Bi) − E(Wi) = θB − θW.

Example C.3 Race Discrimination in Hiring
• Sample size n = 241: b̄ = .224 and w̄ = .357, so ȳ = .224 − .357 = −.133.
– 22.4% of blacks were offered jobs, while 35.7% of whites were offered jobs.
– This is prima facie evidence of discrimination!
• Sample standard deviation: s = .482, so se(ȳ) = .482/(241)½ ≈ .031.
• Find an approximate 95% confidence interval for µ = θB − θW.
• A 95% CI for µ is −.133 ± 1.96(.031) = −.133 ± .061, i.e.,
[−.194, −.072]
• A 99% CI for µ is −.133 ± 2.58(.031) = −.133 ± .080, i.e.,
[−.213, −.053]
• We are very confident that the population difference is not zero!

Problem C.7
C.7 The new management at a bakery claims that workers are now more productive than they were under old management, which is why wages have "generally increased." Let Wib be Worker i's wage under the old management and let Wia be Worker i's wage after the change. The difference is Di = Wia − Wib. Assume that the Di are a random sample from a Normal(µ, σ²) distribution.
(i) Using the following data on 15 workers, construct an exact 95% confidence interval for µ. [ans.]

obs    Wb        Wa        D = Wa − Wb
1      8.3       9.25       0.95
2      9.4       9         -0.4
3      9         9.25       0.25
4      10.5      10        -0.5
5      11.4      12         0.6
6      8.75      9.5        0.75
7      10        10.25      0.25
8      9.5       9.5        0
9      10.8      11.5       0.7
10     12.55     13.1       0.55
11     12        11.5      -0.5
12     8.65      9          0.35
13     7.75      7.75       0
14     11.25     11.5       0.25
15     12.65     13         0.35
mean   10.16667  10.40667   0.24
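The Example C.3 intervals follow directly from the three reported numbers:

```python
import math

# Sketch of Example C.3: ybar = -.133, s = .482, n = 241.
ybar, s, n = -0.133, 0.482, 241
se = s / math.sqrt(n)                       # about .031
ci95 = (ybar - 1.96 * se, ybar + 1.96 * se)
ci99 = (ybar - 2.58 * se, ybar + 2.58 * se)
```

Both intervals lie entirely below zero, matching the conclusion in the text.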

Problem C.7 (i)
(i)
• The average increase in wage is d̄ = .24, or 24 cents.
• The sample standard deviation is about s = .451.
• With n = 15, se(d̄) = s/√n ≈ .1164.

Descriptive statistics for the sample (WB = wage before, WA = wage after, D = WA − WB):

              WB         WA         D
Mean          10.16667   10.40667   0.24
Median        10         10         0.25
Maximum       12.65      13.1       0.95
Minimum       7.75       7.75      -0.5
Std. Dev.     1.569084   1.595291   0.450872
Skewness      0.175376   0.290842  -0.34947
Kurtosis      1.810807   2.022774   2.161199
Jarque-Bera   0.960754   0.80833    0.745067
Probability   0.61855    0.667534   0.688986
Sum           152.5      156.1      3.6
Sum Sq. Dev.  34.46833   35.62933   2.846
Observations  15         15         15
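The exact 95% interval for Problem C.7(i) can be sketched from the wage-change column, using 2.145, the standard 97.5th percentile of the t(14) distribution:

```python
import math, statistics

# Hedged sketch for Problem C.7(i): exact 95% CI for the mean wage change.
d = [0.95, -0.4, 0.25, -0.5, 0.6, 0.75, 0.25, 0.0, 0.7, 0.55,
     -0.5, 0.35, 0.0, 0.25, 0.35]
n = len(d)
dbar = statistics.mean(d)            # 0.24
s = statistics.stdev(d)              # about 0.4509
se = s / math.sqrt(n)                # about 0.1164
c = 2.145                            # 97.5th percentile of t(14)
ci = (dbar - c * se, dbar + c * se)
```

Note that the interval straddles zero, so the exact 95% CI just barely fails to exclude a zero mean change.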


VI. Hypothesis Testing
A. Fundamentals  B. HT N(0,1)  C. Asymptotic  D. P-Value  E. CI & HT
• We have reviewed how to evaluate point estimators and how to construct confidence intervals.
– Sometimes the question we are interested in has a definite yes-or-no answer:
1) Does a job training program effectively increase average worker productivity?
2) Are blacks discriminated against in hiring?
• Devising methods for answering such questions, using a sample of data, is known as hypothesis testing.

Fundamentals of Hypothesis Testing
• Suppose the election results are as follows:
– Candidate A received 42% and
– Candidate B received 58% of the popular vote.
• Candidate A argued that the election was rigged.
• A consulting agency drew a sample of 100 voters; it was found that 53% voted for Candidate A.
Question: How strong is the sample evidence against the officially reported percentage of 42%?
• One way to proceed is to set up a hypothesis test. Let θ be the true proportion of the population voting for Candidate A.
• The null hypothesis is
H0: θ = .42

Fundamentals of Hypothesis Testing
• The null hypothesis plays a role similar to that of a defendant.
– A defendant is presumed innocent until proven guilty.
– The null hypothesis is presumed true until the data strongly suggest otherwise.
• The alternative hypothesis is that the true proportion voting for Candidate A is above .42:
H1: θ > .42
• In order to conclude that H1 is true and H0 is false, we must have evidence beyond reasonable doubt.
– Observing 43 votes out of a sample of 100 is not enough to overturn the original result; such an outcome is within the expected sampling variation.
– How about observing 53 votes out of a sample of 100?

• There are two kinds of mistakes:
1) We reject the null hypothesis when it is true – a Type I error.
Example: we reject H0 when the true proportion voting for Candidate A is in fact .42.
2) We "accept," or fail to reject, the null hypothesis when it is false – a Type II error.
Example: we "accept" H0, but θ > .42.


Fundamentals of Hypothesis Testing
• We can compute the probability of making either a Type I or a Type II error.
• Hypothesis testing requires choosing a significance level, denoted by α:
α = P(Reject H0 | H0)
Read: the probability of rejecting the null hypothesis, given that H0 is true.
• The significance level is the probability of committing a Type I error.
• Classical hypothesis testing requires that we specify a significance level for a test.
• Common values for α are .10, .05, and .01.
– They quantify our tolerance for error.
• α = .05: the researcher is willing to make a mistake (falsely reject H0) 5% of the time.

• Type II error:
– We want to minimize the probability of a Type II error;
– alternatively, we want to maximize the power of the test.
• The power of the test is one minus the probability of a Type II error. Mathematically,
π(θ) = P(Reject H0) = 1 − P(Type II error)
where θ is the actual value of the parameter.
• We would like the power to equal unity whenever the null hypothesis is false.
Testing Hypotheses about the Mean in a Normal Population
• In order to test a hypothesis, we need to choose a test statistic and a critical value.
• The test statistic T is some function of the random sample.
• When we compute the statistic for a particular outcome, we obtain an outcome of the test statistic, denoted by t.
• Provided that the null hypothesis is true, the critical value c is determined by the distribution of T and the chosen significance level α.
• All rejection rules depend on the outcome of the test statistic t and the critical value c.
• To test a hypothesis about the mean µ of a Normal(µ, σ²) population, the null hypothesis is
H0: µ = µ0
where µ0 is a value we specify. In the majority of applications, µ0 = 0.
• The rejection rule depends on the nature of the alternative hypothesis.


Three Alternatives of Interest
• One-sided alternatives:
  H1: μ > μ0,
  H1: μ < μ0.
• Two-sided alternative:
  H1: μ ≠ μ0.
  Here we are interested in any departure from the null hypothesis.
• For example, for the one-sided alternative H1: μ > μ0, the null hypothesis is effectively
  H0: μ ≤ μ0.
  Here we reject the null hypothesis when the value of the sample average, ȳ, is sufficiently greater than μ0. How?

Testing Hypotheses about the Mean in a Normal Population (the t statistic)
• We use the standardized version,
  T = √n (Ȳ − μ0)/S.
• Note that s is used in place of σ, and se(ȳ) = s/√n.
• This is called the t statistic. The t statistic measures the distance from ȳ to μ0 relative to the standard error of ȳ.
• Under the null hypothesis, the random variable
  T = √n (Ȳ − μ0)/S
  has a t_{n-1} distribution.
• Choose the significance level α = .05. The critical value c is chosen so that
  P(T > c | H0) = .05,
  where c is the 100(1 − α)th percentile of the t_{n-1} distribution.
• This is an example of a one-tailed test.

Example C.4: Effect of Enterprise Zones on Business Investments
• Y denotes the percentage change in investment from the year before to the year after a city became an enterprise zone.
• Assume that Y has a Normal(μ, σ²) distribution.
  H0: μ = 0 (Null hypothesis: enterprise zones have no effect)
  H1: μ > 0 (Alternative hypothesis: they have a positive effect)
• Suppose that we wish to test H0 at the 5% level, with a sample of 36 cities:
  α = .05; c = 1.69 (see Table G.2)
  ȳ = 8.2; s = 23.9; t = 2.06
• We conclude that, at the 5% significance level, enterprise zones have a positive effect on average investment.
• At the 1% significance level, do enterprise zones have a positive effect?
• The rejection rule is t > c
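The numbers in Example C.4 can be reproduced from the summary statistics on the slide. A minimal sketch: the 5% critical value 1.69 is taken from the slide's reference to Table G.2, and the 1% critical value of roughly 2.44 for 35 degrees of freedom is transcribed from a standard t table (an assumption, since the slide only poses the 1% question).

```python
import math

# Summary statistics from Example C.4 (enterprise zones)
n, ybar, s = 36, 8.2, 23.9
c_5pct = 1.69    # 95th percentile of t_35 (Table G.2, per the slide)
c_1pct = 2.44    # ~99th percentile of t_35 (standard t table; assumed here)

se = s / math.sqrt(n)    # se(ybar) = s / sqrt(n)
t = ybar / se            # t statistic for H0: mu = 0
print(f"se = {se:.3f}, t = {t:.2f}")

# One-sided rejection rule for H1: mu > 0: reject when t > c
print("reject H0 at 5% level:", t > c_5pct)
print("reject H0 at 1% level:", t > c_1pct)
```

With t ≈ 2.06 the test rejects at the 5% level but not at the 1% level, which answers the question the slide leaves open.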


Testing Hypotheses about the Mean in a Normal Population (continued)
• For the null hypothesis and the alternative hypothesis
  H0: μ ≥ μ0,
  H1: μ < μ0,
  the rejection rule is
  t < −c.
• That is, we reject H0 for values of ȳ below μ0 that are sufficiently far from μ0, i.e., for t statistics sufficiently far below zero.

Example C.5: Race Discrimination in Hiring
• θ = B − W is the difference in the probability that blacks and whites receive job offers.
• θ is the population mean of the variable Y = B − W, where B and W are binary variables.
• Testing
  H0: θ = 0,
  H1: θ < 0.
• Given n = 241, ȳ = −.133 and se(ȳ) = .48/√241 = .031.
• The t statistic for testing H0: θ = 0 is
  t = −.133/.031 = −4.29.
• The critical value is −2.58 (one-sided test; α = .005).
• Since t < −2.58, there is very strong evidence against H0 in favor of H1.

Testing Hypotheses about the Mean in a Normal Population (two-sided alternative)
• For the null hypothesis and the alternative hypothesis
  H0: μ = μ0,
  H1: μ ≠ μ0,
  the rejection rule is
  |t| > c.
  This gives a two-tailed test.
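Example C.5 can be checked directly from the slide's summary statistics. A sketch: the standard error is rounded to .031, as on the slide, before forming the t statistic.

```python
import math

# Summary statistics from Example C.5 (race discrimination in hiring)
n, ybar, s = 241, -0.133, 0.48
se = round(s / math.sqrt(n), 3)   # .48/sqrt(241) ≈ .031 (rounded as on the slide)
t = ybar / se                     # -.133/.031 ≈ -4.29
print(f"se = {se}, t = {t:.2f}")

c = 2.58                          # one-sided critical value at alpha = .005
print("reject H0: theta = 0 in favor of H1: theta < 0:", t < -c)
```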


Testing Hypotheses about the Mean in a Normal Population (two-tailed critical value)
• We have to be careful in obtaining the critical value, c.
• The critical value c (see graph):
  – It is the 100(1 − α/2)th percentile of the t_{n-1} distribution.
  – If α = .05, c is the 97.5th percentile of the t_{n-1} distribution.
• Example: Let n = 22. Then c = 2.08, the 97.5th percentile of the t21 distribution (see Table G.2).
• Rejection rule: the absolute value of the t statistic must exceed 2.08.

• Proper language for hypothesis testing:
  "We fail to reject H0 in favor of H1 at the 5% significance level."
• Incorrect wording:
  "We accept H0 at the 5% significance level."

Asymptotic Tests for Nonnormal Populations
• If the sample is large enough, we can invoke the central limit theorem.
• Asymptotic theory is based on n increasing without bound.
• Under the null hypothesis,
  T = √n (Ȳ − μ0)/S
  is asymptotically distributed as Normal(0,1).
• As n gets large, the t_{n-1} distribution converges to the standard normal distribution.
• Because asymptotic theory is based on n increasing without bound, standard normal and t critical values are pretty much the same for large n.
• Suggestions:
  – For moderate values of n, say between 30 and 60, it is traditional to use the t distribution.
  – For n ≥ 120, the choice between the two distributions is irrelevant.
• Note that our chosen significance levels are then only approximate.
  – When the sample size is large, the actual significance level will be very close to the nominal level (e.g., 5%).


Example C.5: Race Discrimination in Hiring (two-sided test)
• θ = B − W is the difference in the probability that blacks and whites receive job offers.
• θ is the population mean of the variable Y = B − W, where B and W are binary variables.
• Testing
  H0: θ = 0,
  H1: θ ≠ 0.
• Given n = 241, ȳ = −.133 and se(ȳ) = .48/√241 = .031.
• The t statistic for testing H0: θ = 0 is
  t = −.133/.031 = −4.29.
• The critical value is 2.58 (two-sided test; α = .01).
• Since |t| > 2.58, there is very strong evidence against H0 in favor of H1.

Computing and Using p-Values
• The traditional requirement of choosing the significance level ahead of time means that different researchers could wind up with different conclusions,
  – although they use the same set of data and the same procedures.
• The p-value of the test:
  – It is the largest significance level at which we fail to reject the null hypothesis.
  – Equivalently, it is the smallest significance level at which we reject the null hypothesis.

Computing and Using p-Values (one-sided test)
• One-sided test: Let H0: μ = 0 in a Normal(μ, σ²) population. The test statistic is
  T = √n Ȳ/S.
• The observed value of T for our sample is t = 1.52.
• The p-value is the area to the right of 1.52, which is
  p-value = P(T > 1.52 | H0) = 1 − Φ(1.52) = .065,
  where Φ(·) is the standard normal cumulative distribution function (cdf).
• Interpretation: t = 1.52 and p-value = .065.
  – The largest significance level at which we carry out the test and fail to reject H0 is .065.
  – It is the probability that we observe a value of T as large as 1.52 when the null hypothesis is true.
  – If we carry out the test at a significance level above .065, we reject the null hypothesis.
  – The smallest significance level at which we reject the null hypothesis is .065.
  – We would observe a value of T as large as 1.52 due to chance about 6.5% of the time.
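The p-value arithmetic above can be checked with the standard normal cdf from the Python standard library (a sketch; the slide rounds the result).

```python
from statistics import NormalDist

Phi = NormalDist().cdf        # standard normal cdf
t = 1.52
p_value = 1 - Phi(t)          # area to the right of 1.52
print(f"p-value = {p_value:.4f}")   # ≈ .064, about 6.5%
```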


Computing and Using p-Values (large n)
• Interpretation: t = 2.85 and, since n is large,
  p-value = 1 − Φ(2.85) = .0022.
  – If the null hypothesis is true, we observe a value of T as large as 2.85 with probability .002.
  – If we carry out the test at a significance level above .002, we reject the null hypothesis.
  – The smallest significance level at which we reject the null hypothesis is .002.

Example C.6: Effect of Training Grants on Worker Productivity (one-tailed test)
• μ is the average change in scrap rates; n = 20. Note that the change in scrap rates is assumed to have a normal distribution.
• Hypotheses:
  H0: μ = 0 (training grants have no effect),
  H1: μ < 0.
• If we carry out the test at a significance level above .023, we reject the null hypothesis.
• The smallest significance level at which we reject the null hypothesis is .023.

Example C.6: Training Grants and Worker Productivity (two-tailed test)
• Null hypothesis and two-sided alternative:
  H0: μ = 0 against H1: μ ≠ 0.
• For t tests about population means, the p-value is
  P(|T_{n-1}| > |t|) = 2 P(T_{n-1} > |t|),
  where t is the value of the test statistic and T_{n-1} is a t random variable.
• The p-value is computed by finding the area to the right of |t| and multiplying the area by two.
• Here, the area to the right of |t| = 2.13 is .023, so
  p-value = .023 + .023 = .046.
• If we carry out the test at a significance level above .046, we reject the null hypothesis.
• The smallest significance level at which we reject the null hypothesis is .046.

Example C.7: Race Discrimination in Hiring (p-value)
• Given n = 241, ȳ = −.133, and se(ȳ) = .48/√241 = .031, the t statistic for testing H0: θ = 0 against the two-sided alternative H1: θ ≠ 0 is
  t = −.133/.031 = −4.29.
• If Z is a standard normal random variable, P(Z < −4.29) ≈ 0, so the asymptotic p-value is essentially zero.
• There is very strong evidence against H0 in favor of H1. Note that the critical value is 2.58 (α = .01).
• For nonnormal distributions, the exact p-value can be difficult to obtain, but we can find asymptotic p-values by using the same calculations.
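The one-tail area of .023 to the right of 2.13 under the t19 distribution (n = 20) can be checked without statistical tables by Monte Carlo, using the representation T = Z/√(V/df) with V a chi-square(df) variable. This is an illustrative sketch, not part of the slides; the simulation settings are arbitrary.

```python
import random

def t_tail_prob(df, cutoff, reps=200_000, seed=7):
    """Monte Carlo estimate of P(T_df > cutoff), using T = Z / sqrt(V/df)
    where V is a chi-square(df) variable (a sum of df squared normals)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        z = rng.gauss(0.0, 1.0)
        v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        if z / (v / df) ** 0.5 > cutoff:
            hits += 1
    return hits / reps

one_tail = t_tail_prob(df=19, cutoff=2.13)   # ≈ .023
print(f"one-tail area ≈ {one_tail:.3f}, two-sided p-value ≈ {2 * one_tail:.3f}")
```

Doubling the one-tail area reproduces the two-sided p-value of about .046.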


Computing and Using p-Values: Summary
• Rejection rules in terms of the t value:
  1) For H1: μ > μ0, the rejection rule is t > c and the p-value is P(T > t).
  2) For H1: μ < μ0, the rejection rule is t < −c and the p-value is P(T < t).
  3) For H1: μ ≠ μ0, the rejection rule is |t| > c and the p-value is P(|T| > |t|).
• Rejection rules in terms of the p-value (choose a significance level α):
  1) We reject H0 at the 100α% level if p-value < α.
  2) We fail to reject H0 at the 100α% level if p-value ≥ α.

The Relationship between Confidence Intervals and Hypothesis Testing
• Confidence intervals and hypothesis testing are linked.
• Assume α = .05. The confidence interval can be used to test two-sided alternatives. Suppose
  H0: μ = μ0,
  H1: μ ≠ μ0.
• Rejection rule:
  – If μ0 does not lie in the 95% confidence interval, we reject the null hypothesis at the 5% level.
  – If the hypothesized value of μ, μ0, lies in the confidence interval, we fail to reject the null hypothesis at the 5% level.
• After a confidence interval is constructed, many values of μ0 can be tested.
  – Since a confidence interval contains more than one value, there are many null hypotheses that cannot be rejected.
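The duality just described is mechanical: testing H0: μ = μ0 at the 5% level against a two-sided alternative amounts to checking whether μ0 falls outside the 95% confidence interval. A sketch using the interval [−2.28, −0.02] from Example C.8:

```python
def reject_at_5pct(mu0, ci_95):
    """Two-sided 5%-level test of H0: mu = mu0 via the 95% confidence
    interval: reject exactly when mu0 lies outside the interval."""
    lo, hi = ci_95
    return not (lo <= mu0 <= hi)

ci = (-2.28, -0.02)                 # 95% CI from Example C.8
print(reject_at_5pct(0.0, ci))      # zero is excluded, so reject H0: mu = 0
print(reject_at_5pct(-2.0, ci))     # -2 is inside, so fail to reject
print(reject_at_5pct(-1.0, ci))     # many such nulls cannot be rejected
```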
Example C.8: Training Grants and Worker Productivity (confidence interval)
• A 95% confidence interval for the mean change in scrap rates μ is
  [−2.28, −0.02].
• Since zero is excluded from this interval, we reject H0: μ = 0 against H1: μ ≠ 0 at the 5% level.
• If H0: μ = −2, we fail to reject the null hypothesis.
• Don't say:
  We "accept" the null hypothesis H0: μ = −1.0 at the 5% significance level.
• This is because, with the same set of data, there are usually many hypotheses that cannot be rejected.
  – For example, it is logically incorrect to say that H0: μ = −1 and H0: μ = −2 are both "accepted."
  – It is possible that neither is rejected.
  – Thus, we say "fail to reject."

Practical Significance and Statistical Significance
• We have covered three kinds of evidence about population parameters:
  1) point estimates,
  2) confidence intervals, and
  3) hypothesis tests.
• In empirical analysis, we should also put emphasis on the magnitudes of the point estimates.
• Statistical significance depends on the size of the test statistic, not on the size of ȳ.
  – It depends on the ratio of ȳ to its standard error: t = ȳ/se(ȳ).
  – The test statistic could be large because se(ȳ) is small or because ȳ is large.


Practical Significance and Statistical Significance (continued)
• Note that the magnitude and sign of the test statistic determine statistical significance.
• Practical significance depends on the magnitude of ȳ.
  – An estimate can be statistically significant without being large in magnitude, especially when we work with large sample sizes.

Example C.9: Effect of Freeway Width on Commute Time
• Let Y denote the change in commute time, measured in minutes, for commuters before and after a freeway was widened.
• Assume Y ~ Normal(μ, σ²).
• Hypotheses:
  H0: μ = 0,
  H1: μ < 0.
• Given n = 900, ȳ = −3.6 and sample sd = 32.7, so se(ȳ) = 32.7/√900 = 1.09.
• The t statistic for testing H0: μ = 0 is
  t = −3.6/1.09 = −3.30,
  with p-value = Φ(−3.30) ≈ .0005.
• Statistical significance: we conclude that the freeway widening had a statistically significant effect on average commute time.
• Practical significance: the estimated reduction in average commute time is only 3.6 minutes.
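Example C.9 can be reproduced from its summary statistics; a sketch using the standard library's normal cdf for the asymptotic one-sided p-value.

```python
import math
from statistics import NormalDist

# Summary statistics from Example C.9 (freeway widening and commute time)
n, ybar, sd = 900, -3.6, 32.7
se = sd / math.sqrt(n)          # 32.7/30 = 1.09
t = ybar / se                   # ≈ -3.30
p_value = NormalDist().cdf(t)   # one-sided: P(Z < t) ≈ .0005
print(f"se = {se:.2f}, t = {t:.2f}, p-value = {p_value:.4f}")
```

A tiny p-value alongside a modest 3.6-minute effect is exactly the statistical-versus-practical distinction the slide draws.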
VII. Remarks on Notation
• We have been careful to use standard conventions to denote
  – a random variable W,
  – an estimator W, and
  – an estimate w.
• Distinguishing between an estimator and an estimate (an outcome of the random variable W) is important for understanding various concepts in
  – estimation and
  – hypothesis testing.
• In the main text, we use a simpler convention that is widely used in econometrics.
• If θ is a population parameter, the notation θ̂ ("theta hat") will be used to denote both an estimator and an estimate of θ.
• Example:
  – If the population parameter is μ, then μ̂ denotes an estimator or estimate of μ.
  – If the parameter is σ², then σ̂² denotes an estimator or estimate of σ².

Problem C.6
C.6 You are hired by the governor to study whether a tax on liquor has decreased average liquor consumption in your state. You are able to obtain, for a sample of individuals selected at random, the difference in liquor consumption (in ounces) for the years before and after the tax. For person i, who is sampled randomly from the population, Yi denotes the change in liquor consumption. Treat these as a random sample from a Normal(μ, σ²) distribution.

(i) The null hypothesis is that there was no change in average liquor consumption. State this formally in terms of μ.
(ii) The alternative is that there was a decline in liquor consumption; state the alternative in terms of μ.
(iii) Now, suppose your sample size is n = 900 and you obtain the estimates ȳ = −32.8 and s = 466.4. Calculate the t statistic for testing H0 against H1; obtain the p-value for the test. (Because of the large sample size, just use the standard normal distribution tabulated in Table G.1.) Do you reject H0 at the 5% level? At the 1% level?
(iv) Would you say that the estimated fall in consumption is large in magnitude? Comment on the practical versus statistical significance of this estimate.
(v) What has been implicitly assumed in your analysis about other determinants of liquor consumption over the two-year period in order to infer causality from the tax change to liquor consumption?


Problem C.6 (i), (ii)
• Yi is the change in liquor consumption; the Yi are a random sample from a Normal(μ, σ²) distribution.
(i) H0: μ = 0.
(ii) H1: μ < 0.

Problem C.6 (iii)
• The standard error of ȳ is
  se(ȳ) = s/√n = 466.4/30 = 15.55.
• Therefore, the t statistic for testing H0: μ = 0 is
  t = ȳ/se(ȳ) = −32.8/15.55 = −2.11.
• We obtain the p-value as P(Z ≤ −2.11), where Z ~ Normal(0,1).
• These probabilities are in Table G.1:
  p-value = .0174.
• (α = .05) Because the p-value is below .05, we reject H0 against the one-sided alternative at the 5% level.
• (α = .01) We do not reject at the 1% level because p-value = .0174 > .01.
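The arithmetic in part (iii) can be verified directly, using the standard normal approximation as the problem instructs for n = 900 (a sketch):

```python
import math
from statistics import NormalDist

# Estimates from Problem C.6 (iii)
n, ybar, s = 900, -32.8, 466.4
se = s / math.sqrt(n)           # 466.4/30 ≈ 15.55
t = ybar / se                   # ≈ -2.11
p_value = NormalDist().cdf(t)   # P(Z <= t) ≈ .0174
print(f"se = {se:.2f}, t = {t:.2f}, p-value = {p_value:.4f}")

print("reject at 5%:", p_value < 0.05)
print("reject at 1%:", p_value < 0.01)
```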


Problem C.6 (iv), (v)
(iv)
• The estimated reduction, about 33 ounces, does not seem large for an entire year's consumption.
  – If the alcohol is beer, 33 ounces is less than three 12-ounce cans of beer.
  – Even if this is hard liquor, the reduction seems small.
  – (On the other hand, when aggregated across the entire population, alcohol distributors might not think the effect is so small.)
(v)
• The implicit assumption is that other factors that affect liquor consumption, such as income or changes in price due to transportation costs, are constant over the two years.

Problem C.7
C.7 The new management at a bakery claims that workers are now more productive than they were under old management, which is why wages have "generally increased." Let Wib be Worker i's wage under the old management and let Wia be Worker i's wage after the change. The difference is Di = Wia − Wib. Assume that the Di are a random sample from a Normal(μ, σ²) distribution.

(i) Using the following data on 15 workers, construct an exact 95% confidence interval for μ.
(ii) Formally state the null hypothesis that there has been no change in average wages. In particular, what is E(Di) under H0? If you are hired to examine the validity of the new management's claim, what is the relevant alternative hypothesis in terms of E(Di)?
(iii) Test the null hypothesis from part (ii) against the stated alternative at the 5% and 1% levels.
(iv) Obtain the p-value for the test in part (iii).

obs    Wb       Wa       D=Wa-Wb
1      8.3      9.25      0.95
2      9.4      9        -0.4
3      9        9.25      0.25
4      10.5     10       -0.5
5      11.4     12        0.6
6      8.75     9.5       0.75
7      10       10.25     0.25
8      9.5      9.5       0
9      10.8     11.5      0.7
10     12.55    13.1      0.55
11     12       11.5     -0.5
12     8.65     9         0.35
13     7.75     7.75      0
14     11.25    11.5      0.25
15     12.65    13        0.35
mean   10.16667 10.40667  0.24


Problem C.7 (i), (ii)
(i)
• The average increase in wage is d̄ = .24, or 24 cents.
• The sample standard deviation is about s = .451, n = 15, so se(d̄) = s/√n = .1164.
• From Table G.2, the 97.5th percentile of the t14 distribution is c = 2.145.
• So the exact 95% CI is
  [d̄ − c·s/√n, d̄ + c·s/√n] = .24 ± 2.145(.1164),
  or about −.010 to .490.
(ii)
• If μ = E(Di), then H0: μ = 0.
• The alternative is that management's claim is true: H1: μ > 0.

Descriptive statistics for WB, WA, and D (EViews output; Date: 05/07/07, Time: 07:57; Sample: 1 15):

              WB        WA        D
Mean          10.16667  10.40667  0.24
Median        10        10        0.25
Maximum       12.65     13.1      0.95
Minimum       7.75      7.75     -0.5
Std. Dev.     1.569084  1.595291  0.450872
Skewness      0.175376  0.290842 -0.34947
Kurtosis      1.810807  2.022774  2.161199
Jarque-Bera   0.960754  0.80833   0.745067
Probability   0.61855   0.667534  0.688986
Sum           152.5     156.1     3.6
Sum Sq. Dev.  34.46833  35.62933  2.846
Observations  15        15        15
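The confidence interval in part (i) can be recomputed from the raw wage differences; the critical value 2.145 is the 97.5th percentile of t14, taken from Table G.2 as on the slide (a sketch):

```python
import math
from statistics import mean, stdev

# Wage differences D = Wa - Wb for the 15 workers in Problem C.7
d = [0.95, -0.4, 0.25, -0.5, 0.6, 0.75, 0.25, 0, 0.7,
     0.55, -0.5, 0.35, 0, 0.25, 0.35]

n = len(d)
dbar = mean(d)            # 0.24
s = stdev(d)              # sample std. dev. ≈ 0.4509
se = s / math.sqrt(n)     # ≈ 0.1164
c = 2.145                 # 97.5th percentile of t_14 (Table G.2)

lo, hi = dbar - c * se, dbar + c * se
print(f"dbar = {dbar:.2f}, s = {s:.4f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# CI ≈ [-0.010, 0.490], matching the slide
```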

Problem C.7 (iv)
(iv)
• We obtain the p-value as
  P(T > 2.062),
  where T has the t14 distribution.
• The p-value obtained from EViews is .029;
  – this is half of the p-value for the two-sided alternative.
  – (Econometrics packages, including EViews, report the p-value for the two-sided alternative.)


Problem C.7 (iv): EViews output
Hypothesis Testing for D
Date: 05/07/07  Time: 08:03
Sample: 1 15
Included observations: 15
Test of Hypothesis: Mean = 0.000000

Sample Mean = 0.240000
Sample Std. Dev. = 0.450872

Method        Value      Probability
t-statistic   2.061595   0.0583

(View / Test of Descriptive Stats / Simple Hypothesis Tests)

Good Luck!
FT19, PT15: See you around!
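The EViews t statistic can be replicated from the sample moments reported in the output. A sketch: EViews's two-sided probability of .0583 would require the t14 cdf, which the Python standard library does not provide, so only the statistic itself is checked here.

```python
import math

# Sample moments from the EViews output for D
n = 15
mean_d = 0.240000
sd_d = 0.450872

t = mean_d / (sd_d / math.sqrt(n))
print(f"t-statistic = {t:.4f}")   # ≈ 2.0616, matching the EViews value
```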
