(Ch 9.1-9.3, 9.5-9.9)| Statistical Hypotheses
Statistical hypothesis: a claim about the value of a
parameter or population characteristic.
Examples:
* H: μ = 75 cents, where μ is the true population
average of daily per-student candy+soda expenses in
US high schools
* H: p < .10, where p is the population proportion of
defective helmets for a given manufacturer
* If μ1 and μ2 denote the true average breaking
strengths of two different types of twine, one
hypothesis might be the assertion that μ1 - μ2 = 0, and
another is the statement μ1 - μ2 > 5| Components of a Hypothesis Test
1. Formulate the hypothesis to be tested.
2. Determine the appropriate test statistic and
calculate it using the sample data.
3. Comparison of test statistic to critical region to
draw initial conclusions.
4. Calculation of p-value.
5. Conclusion, written in terms of the original
problem.| Components of a Hypothesis Test
1. Formulate the hypothesis to be tested.| 1. Null vs Alternative Hypotheses
In any hypothesis-testing problem, there are always two
competing hypotheses under consideration:
1. The status quo (null) hypothesis
2. The research (alternative) hypothesis
The objective of hypothesis testing is to decide, based on
sample information, whether the alternative hypothesis is
actually supported by the data.
We usually do new research to challenge the existing
(accepted) beliefs.| 1. Null vs Alternative Hypotheses
Is there strong evidence for the alternative?
The burden of proof is placed on those who believe in the
alternative claim.
This initially favored claim (H0) will not be rejected in favor
of the alternative claim (Ha or H1) unless the sample
evidence provides significant support for the alternative
assertion.
If the sample does not strongly contradict H0, we will
continue to believe in the plausibility of the null hypothesis.
The two possible conclusions: 1) Reject H0.
2) Fail to reject H0.| 1. Null vs Alternative Hypotheses
Why be so committed to the null hypothesis?
* Sometimes we do not want to accept a particular
assertion unless (or until) data can show strong support
« Reluctance (cost, time) to change
Example: Suppose a company is considering putting a new
type of coating on bearings that it produces.
The true average wear life with the current coating is
known to be 1000 hours. With μ denoting the true average
life for the new coating, the company would not want to
make any (costly) changes unless evidence strongly
suggested that μ exceeds 1000.| 1. Null vs Alternative Hypotheses
An appropriate problem formulation would involve testing
H0: μ = 1000 against Ha: μ > 1000.
The conclusion that a change is justified is identified with
Ha, and it would take conclusive evidence to justify
rejecting H0 and switching to the new coating.
Scientific research often involves trying to decide whether a
current theory should be replaced, or "elaborated upon."| 1. Null vs Alternative Hypotheses
The alternative to the null hypothesis H0: θ = θ0 will look like
one of the following three assertions:
1. Ha: θ ≠ θ0
2. Ha: θ > θ0 (in which case the null hypothesis is θ ≤ θ0)
3. Ha: θ < θ0 (in which case the null hypothesis is θ ≥ θ0)
The equality sign is always with the null hypothesis.
The alternative hypothesis is the claim for which we are
seeking statistical proof.| Components of a Hypothesis Test
2. Determine the appropriate test statistic and
calculate it using the sample data.| 2. Test Statistics
A test statistic is a rule, based on sample data, for
deciding whether to reject Ho.
The test statistic is a function of the sample data that will be
used to make a decision about whether the null hypothesis
should be rejected or not.| 2. Test Statistics
Example: Company A produces circuit boards, but 10% of
them are defective. Company B claims that they produce
fewer defective circuit boards.
H0: p = .10 versus Ha: p < .10
Our data is a random sample of n = 200 boards from
company B.
What test procedure (or rule) could we devise to decide if
the null hypothesis should be rejected?| 2. Test Statistics
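One natural test statistic here is the number of defective boards in the sample. As a sketch (not from the slides), the exact type I error probability of a candidate cutoff rule can be computed from the Binomial distribution under H0; the cutoff of 12 is an illustrative choice, not one fixed by the slides:

```python
from math import comb

# Under H0: p = .10 with n = 200, the number of defectives X ~ Bin(200, .10)
n, p0 = 200, 0.10

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Bin(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# Candidate rule: reject H0 if X <= 12; its type I error probability is
alpha = binom_cdf(12, n, p0)
print(round(alpha, 3))
```

Trying different cutoffs shows the trade-off the next slides discuss: a smaller cutoff shrinks the type I error probability but makes it harder to detect a genuinely better manufacturer.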
Which test statistic is "best"?
There are an infinite number of possible tests that could be
devised, so we have to limit this in some way or total
statistical madness will ensue!
Choice of a particular test procedure must be based on the
probability the test will produce incorrect results.| 2. Errors in Hypothesis Testing
Definition
* A type I error is rejecting the null hypothesis H0 when
H0 is true.
* A type II error is failing to reject H0 when H0 is false.
This is very similar in spirit to our diagnostic test examples:
* False negative test = type I error
* False positive test = type II error| 2. Errors in Hypothesis Testing
How do we apply this to the circuit board problem?| 2. Type I Errors
Usually: Specify the largest value of α that can be
tolerated, and then find a rejection region with that α.
The resulting value of α is often referred to as the
significance level of the test.
Traditional levels of significance are .10, .05, and .01,
though the level in any particular problem will depend on
the seriousness of a type | error—
The more serious the type | error, the smaller the
significance level should be.| 2. Errors in Hypothesis Testing
We can also obtain a smaller value of α (the probability that
the null will be incorrectly rejected) by decreasing the size of
the rejection region.
However, this results in a larger value of β for all parameter
values consistent with Ha.
No rejection region will simultaneously make both α
and all β's small. A region must be chosen to strike a
compromise between α and β.| 2. Testing means of a normal population with known σ
Null hypothesis: H0: μ = μ0
Test statistic value: z = (x̄ - μ0) / (σ/√n)
Alternative Hypothesis      Rejection Region for Level α Test
Ha: μ > μ0                  z ≥ z_α (upper-tailed test)
Ha: μ < μ0                  z ≤ -z_α (lower-tailed test)
Ha: μ ≠ μ0                  either z ≥ z_{α/2} or z ≤ -z_{α/2} (two-tailed test)
| Components of a Hypothesis Test
3. Comparison of test statistic to critical region to
draw initial conclusions.
5. Conclusion, written in terms of the original
problem.| Critical region
[Figure: the z curve (probability distribution of the test statistic Z when H0 is true), with
shaded area = α = P(type I error). Rejection regions for z tests: (a) upper-tailed test,
z ≥ z_α; (b) lower-tailed test, z ≤ -z_α; (c) two-tailed test, z ≥ z_{α/2} or z ≤ -z_{α/2}.]| Example
An inventor has developed a new, energy-efficient lawn mower
engine. He claims that the engine will run continuously for more
than 5 hours (300 minutes) on a single gallon of regular
gasoline. (The leading brand lawnmower engine runs for 300
minutes on 1 gallon of gasoline.)
From his stock of engines, the inventor selects a simple random
sample of 50 engines for testing. The engines run for an average
of 305 minutes. The true standard deviation o is known and is
equal to 30 minutes, and the run times of the engines are
normally distributed.
Test the hypothesis that the mean run time is more than 300
minutes. Use a 0.05 level of significance.
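A minimal sketch of this upper-tailed z test, using only the Python standard library (the numbers come from the example above):

```python
from statistics import NormalDist

# Upper-tailed z test: H0: mu = 300 vs Ha: mu > 300, sigma known
xbar, mu0, sigma, n, alpha = 305, 300, 30, 50, 0.05

z = (xbar - mu0) / (sigma / n ** 0.5)      # test statistic
z_crit = NormalDist().inv_cdf(1 - alpha)   # upper-tail critical value z_alpha
reject = z >= z_crit

print(round(z, 2), round(z_crit, 2), reject)   # prints: 1.18 1.64 False
```

Since z = 1.18 falls short of the critical value 1.64, the sample does not give statistically significant support for the inventor's claim at the .05 level.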
21| 2. Testing means of a large sample
When the sample size is large, the z tests for case I are
easily modified to yield valid test procedures without
requiring either a normal population distribution or
known o.
Earlier, we used the key result to justify large-sample
confidence intervals:
A large n (> 30) implies that the standardized variable
Z = (X̄ - μ0) / (S/√n)
has approximately a standard normal distribution.
| 2. Testing means of a small sample from a normal population
The One-Sample t Test
Null hypothesis: H0: μ = μ0
Test statistic value: t = (x̄ - μ0) / (s/√n)
Alternative Hypothesis      Rejection Region for a Level α Test
Ha: μ > μ0                  t ≥ t_{α,n-1} (upper-tailed)
Ha: μ < μ0                  t ≤ -t_{α,n-1} (lower-tailed)
Ha: μ ≠ μ0                  either t ≥ t_{α/2,n-1} or t ≤ -t_{α/2,n-1} (two-tailed)
| CI vs. Hypotheses
Rejection regions have a lot in common with confidence intervals.
[Figure: a two-sided confidence interval x̄ ± E centered at the sample mean (x̄ = 9930),
compared with a hypothesis-testing rejection region centered at the null value (μ0 = 10000).]| CI vs. Hypotheses
Example: The Brinell scale is a measure of how hard a
material is. An engineer hypothesizes that the mean Brinell
score of all subcritically annealed ductile iron pieces is not
equal to 170.
The engineer measured the Brinell score of 25 pieces of this
type of iron and calculated the sample mean to be 174.52 and
the sample standard deviation to be 10.31.
Perform a hypothesis test that the true average Brinell score is
not equal to 170, and construct the corresponding confidence
interval. Set α = 0.01.
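A sketch of the requested t test and matching 99% confidence interval, assuming SciPy is available for t-distribution quantiles:

```python
from scipy import stats

# Two-tailed one-sample t test: H0: mu = 170 vs Ha: mu != 170
xbar, mu0, s, n, alpha = 174.52, 170, 10.31, 25, 0.01

se = s / n ** 0.5
t = (xbar - mu0) / se                          # test statistic
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-tailed critical value
p_value = 2 * stats.t.sf(abs(t), df=n - 1)     # two-tailed p-value
ci = (xbar - t_crit * se, xbar + t_crit * se)  # 99% CI for mu

print(round(t, 2), round(p_value, 3), tuple(round(c, 2) for c in ci))
```

Failing to reject at α = .01 corresponds exactly to the 99% interval containing 170, which illustrates the CI/test duality from the previous slide.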
25| Components of a Hypothesis Test
4. Calculation of p-value.
5. Conclusion, written in terms of the original
problem.
26| 4. p-Values
The p-value measures the “extremeness” of the sample.
Definition: The p-value is the probability we would get
the sample we have or something more extreme if the
null hypothesis were true.
So, the smaller the P-value, the more evidence there is in
the sample data against the null hypothesis and for the
alternative hypothesis.
So what constitutes “sufficiently small” and “extreme
enough” to make a decision about the null hypothesis?
27| 4. p-Values
The p-value measures the “extremeness” of the sample.
Definition: The p-value is the probability we would get
the sample we have or something more extreme if the
null hypothesis were true.
*This probability is calculated assuming that the null
hypothesis is true.
* Beware: The p-value is not the probability that H0
is true, nor is it an error probability!
* The p-value is between 0 and 1.
28| 4. p-Values
Select a significance level α (as before, the desired type I
error probability); the choice of α defines the rejection region.
Then the decision rule is:
reject H0 if p-value ≤ α
do not reject H0 if p-value > α
Thus if the p-value exceeds the chosen significance level,
the null hypothesis cannot be rejected at that level.
Note: the p-value can be thought of as the smallest
significance level at which H0 can be rejected.
29| P-Values for z Tests
The calculation of the P-value depends on whether the test
is upper-, lower-, or two-tailed.
P-value:
  1 - Φ(z)         for an upper-tailed z test
  Φ(z)             for a lower-tailed z test
  2[1 - Φ(|z|)]    for a two-tailed z test
Each of these is the probability of getting a value at least as
extreme as what was obtained (assuming H0 is true).
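The three cases can be wrapped in one small helper; this is a stdlib sketch, and the `tail` labels are our own naming:

```python
from statistics import NormalDist

def z_p_value(z, tail):
    """P-value for a z test; tail is 'upper', 'lower', or 'two'."""
    Phi = NormalDist().cdf           # standard normal CDF
    if tail == "upper":
        return 1 - Phi(z)            # 1 - Phi(z)
    if tail == "lower":
        return Phi(z)                # Phi(z)
    return 2 * (1 - Phi(abs(z)))     # 2[1 - Phi(|z|)]
```

For instance, `z_p_value(1.18, "upper")` reproduces the upper-tailed p-value asked for in the lawnmower example a few slides below.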
| P-Values for z Tests
[Figures: the z curve with the p-value shaded.
1. Upper-tailed test (Ha contains the inequality >): p-value = area in the upper tail
beyond the calculated z, i.e. 1 - Φ(z).
2. Lower-tailed test (Ha contains the inequality <): p-value = area in the lower tail
below the calculated z, i.e. Φ(z).
3. Two-tailed test (Ha contains the inequality ≠): p-value = sum of the areas in the
two tails, i.e. 2[1 - Φ(|z|)].]| Example
Back to the lawnmower engine example: There, we had
H0: μ = 300 vs Ha: μ > 300
and
z = 1.18
What is the p-value for this result?
33| Example
Back to the lawnmower engine example: There, we had
H0: μ = 300 vs Ha: μ > 300
and
z = 1.18
Assuming our average doesn’t change much, what sample
size would we need to see a statistically significant result?
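Assuming the sample mean stays near 305, we can solve (x̄ - μ0)/(σ/√n) ≥ z_α for the smallest n; a stdlib sketch:

```python
from math import ceil
from statistics import NormalDist

xbar, mu0, sigma, alpha = 305, 300, 30, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)   # z_alpha, about 1.645

# Smallest n with (xbar - mu0) / (sigma / sqrt(n)) >= z_crit
n_needed = ceil((z_crit * sigma / (xbar - mu0)) ** 2)
print(n_needed)   # prints: 98
```

So the same 5-minute improvement becomes statistically significant at the .05 level once roughly twice as many engines are tested.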
34| Example
Back to the Brinell scale example: There, we had
H0: μ = 170 vs Ha: μ ≠ 170
and
t = 2.19
What is the p-value for this result?
35| Example
Back to the Brinell scale example: There, we had
H0: μ = 170 vs Ha: μ ≠ 170
and
t = 2.19
What if we had used alpha = 0.05 instead?
36| Distribution of p-values
The figure below shows a histogram of the 10,000 p-values from a simulation
experiment under the null value μ = 20 (with n = 4 and σ = 2).
When H0 is true, the probability distribution of the p-value is a uniform
distribution on the interval from 0 to 1.
[Figure (a): histogram of p-values, roughly flat over the interval from 0 to 1.]
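A short stdlib sketch reproduces this simulation (the seed and the upper-tailed alternative are our choices; exact counts will vary slightly with the seed):

```python
import random
from statistics import NormalDist

random.seed(1)                     # arbitrary seed for reproducibility
mu0, sigma, n = 20, 2, 4           # null value and sample design from the slide
Phi = NormalDist().cdf

p_values = []
for _ in range(10_000):
    xbar = sum(random.gauss(mu0, sigma) for _ in range(n)) / n
    z = (xbar - mu0) / (sigma / n ** 0.5)
    p_values.append(1 - Phi(z))    # upper-tailed p-value; H0 is true here

share_rejected = sum(p <= 0.05 for p in p_values) / len(p_values)
print(round(share_rejected, 3))    # close to the nominal .05
```

Because the p-values are uniform under H0, the share landing at or below .05 hovers around 5%, exactly the advertised type I error rate.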
37| Distribution of p-values
About 4.5% of these P-values are in the first class interval
from 0 to .05.
Thus when using a significance level of .05, the null
hypothesis is rejected in roughly 4.5% of these 10,000
tests.
If we continued to generate samples and carry out the test
for each sample at significance level .05, in the long run 5%
of the P-values would be in the first class interval.
38| Distribution of p-values
A histogram of the p-values when we simulate under an alternative
hypothesis. There is a much greater tendency for the p-value to be
small (closer to 0) when μ = 21 than when μ = 20.
[Figure (b), μ = 21: histogram of p-values concentrated near 0.]| Distribution of p-values
Again H0 is rejected at significance level .05 whenever
the p-value is at most .05 (in the first bin).
Unfortunately, this is the case for only about 19% of the
P-values. So only about 19% of the 10,000 tests correctly
reject the null hypothesis; for the other 81%, a type II error
is committed.
The difficulty is that the sample size is quite small and 21 is
not very different from the value asserted by the null
hypothesis.
40| Distribution of p-values
The figure below illustrates what happens to the p-value when
H0 is false because μ = 22.
[Figure (c), μ = 22: histogram of p-values even more concentrated near 0.]| Distribution of p-values
The histogram is even more concentrated toward values
close to 0 than was the case when μ = 21.
In general, as μ moves further to the right of the null value
20, the distribution of the p-value will become more and
more concentrated on values close to 0.
Even here a bit fewer than 50% of the p-values are smaller
than .05. So it is still slightly more likely than not that the
null hypothesis is incorrectly not rejected. Only for values of
μ much larger than 20 (e.g., at least 24 or 25) is it highly
likely that the p-value will be smaller than .05 and thus give
the correct conclusion.
42| Proportions: Large-Sample Tests
The estimator p̂ = X/n is unbiased (E(p̂) = p), has
approximately a normal distribution, and its standard
deviation is σ_p̂ = √(p(1 - p)/n).
When H0 is true, E(p̂) = p0 and σ_p̂ = √(p0(1 - p0)/n), so σ_p̂
does not involve any unknown parameters. It then follows
that when n is large and H0 is true, the test statistic
Z = (p̂ - p0) / √(p0(1 - p0)/n)
has approximately a standard normal distribution.
43| Proportions: Large-Sample Tests
Alternative Hypothesis      Rejection Region
Ha: p > p0                  z ≥ z_α (upper-tailed)
Ha: p < p0                  z ≤ -z_α (lower-tailed)
Ha: p ≠ p0                  either z ≥ z_{α/2} or z ≤ -z_{α/2} (two-tailed)
These test procedures are valid provided that np0 ≥ 10 and
n(1 - p0) ≥ 10.
44| Example
Natural cork in wine bottles is subject to deterioration, and
as a result wine in such bottles may experience
contamination.
The article “Effects of Bottle Closure Type on Consumer
Perceptions of Wine Quality” (Amer. J. of Enology and
Viticulture, 2007: 182-191) reported that, in a tasting of
commercial chardonnays, 16 of 91 bottles were considered
spoiled to some extent by cork-associated characteristics.
Does this data provide strong evidence for concluding that
more than 15% of all such bottles are contaminated in this
way? Use a significance level equal to 0.10.
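A sketch of this one-proportion z test with the article's counts (stdlib only):

```python
from math import sqrt
from statistics import NormalDist

# Upper-tailed large-sample test: H0: p = .15 vs Ha: p > .15
x, n, p0, alpha = 16, 91, 0.15, 0.10
assert n * p0 >= 10 and n * (1 - p0) >= 10    # large-sample conditions hold

p_hat = x / n                                 # sample proportion
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)    # test statistic
p_value = 1 - NormalDist().cdf(z)             # upper-tailed p-value

print(round(z, 2), round(p_value, 3))
```

With z ≈ 0.69 the p-value comfortably exceeds .10, so the data do not provide strong evidence that more than 15% of such bottles are contaminated.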
TWO SAMPLE TESTING| Normal Population, Known Variances
In general:
Null hypothesis: H0: μ1 - μ2 = Δ0
Test statistic value: z = (x̄ - ȳ - Δ0) / √(σ1²/m + σ2²/n)
47| Test Procedures for Normal Populations with Known Variances
Null hypothesis: H0: μ1 - μ2 = Δ0
Alternative Hypothesis      Rejection Region for Level α Test
Ha: μ1 - μ2 > Δ0            z ≥ z_α (upper-tailed)
Ha: μ1 - μ2 < Δ0            z ≤ -z_α (lower-tailed)
Ha: μ1 - μ2 ≠ Δ0            either z ≥ z_{α/2} or z ≤ -z_{α/2} (two-tailed)
48| Example 1
Analysis of a random sample consisting of 20 specimens of
cold-rolled steel to determine yield strengths resulted in a
sample average strength of x = 29.8 ksi.
A second random sample of 25 two-sided galvanized steel
specimens gave a sample average strength of
y = 34.7 ksi.
Assuming that the two yield-strength distributions are
normal with σ1 = 4.0 and σ2 = 5.0, does the data indicate
that the corresponding true average yield strengths μ1 and
μ2 are different?
Let's carry out a test at significance level α = 0.01.
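A sketch of this two-sample z test (stdlib only):

```python
from math import sqrt
from statistics import NormalDist

# Two-tailed test: H0: mu1 - mu2 = 0 vs Ha: mu1 - mu2 != 0, sigmas known
xbar, m, sigma1 = 29.8, 20, 4.0   # cold-rolled steel
ybar, n, sigma2 = 34.7, 25, 5.0   # galvanized steel

z = (xbar - ybar) / sqrt(sigma1**2 / m + sigma2**2 / n)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value

print(round(z, 2), round(p_value, 4))
```

The statistic lands far out in the lower tail (|z| well beyond z_{.005} = 2.58), so at α = 0.01 the data do indicate a difference between the two true average yield strengths.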
| Large-Sample Tests
The assumptions of normal population distributions and
known values of σ1 and σ2 are fortunately unnecessary
when both sample sizes are sufficiently large. WHY?
Furthermore, using s1² and s2² in place of σ1² and σ2² gives a
variable whose distribution is approximately standard
normal:
Z = (X̄ - Ȳ - (μ1 - μ2)) / √(s1²/m + s2²/n)
These tests are usually appropriate if both m > 30 and
n > 30.| Example
Data on daily calorie intake both for a sample of teens who
said they did not typically eat fast food and another sample
of teens who said they did usually eat fast food.
Eat Fast Food    Sample Size    Sample Mean    Sample SD
No               663            2258           1519
Yes              413            2637           1138
Does this data provide strong evidence for concluding that
true average calorie intake for teens who typically eat fast
food exceeds the true average intake for those who don't
typically eat fast food by more than 200 calories per day?
Let's investigate by carrying out a test of hypotheses at a
significance level of approximately .05.| The Two-Sample t Test
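A sketch of this large-sample test (stdlib only; the fast-food group plays the role of sample 1):

```python
from math import sqrt
from statistics import NormalDist

# Upper-tailed test: H0: mu_ff - mu_no = 200 vs Ha: mu_ff - mu_no > 200
n_no, xbar_no, s_no = 663, 2258, 1519     # teens who don't eat fast food
n_ff, xbar_ff, s_ff = 413, 2637, 1138     # teens who do eat fast food
delta0 = 200

z = (xbar_ff - xbar_no - delta0) / sqrt(s_ff**2 / n_ff + s_no**2 / n_no)
p_value = 1 - NormalDist().cdf(z)         # upper-tailed p-value

print(round(z, 2), round(p_value, 3))
```

The p-value comes in below .05, so at this level the data support the claim that the fast-food group's true average intake exceeds the other group's by more than 200 calories per day.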
When the population distributions are both normal, the
standardized variable
T = (X̄ - Ȳ - (μ1 - μ2)) / √(S1²/m + S2²/n)
has approximately a t distribution with df ν estimated
from the data by
ν = (s1²/m + s2²/n)² / [ (s1²/m)²/(m - 1) + (s2²/n)²/(n - 1) ]
52| The Two-Sample t Test
The two-sample t test for testing H0: μ1 - μ2 = Δ0 is as
follows:
Test statistic value: t = (x̄ - ȳ - Δ0) / √(s1²/m + s2²/n)
53| The Two-Sample t Test
Alternative Hypothesis      Rejection Region for Approximate Level α Test
Ha: μ1 - μ2 > Δ0            t ≥ t_{α,ν} (upper-tailed)
Ha: μ1 - μ2 < Δ0            t ≤ -t_{α,ν} (lower-tailed)
Ha: μ1 - μ2 ≠ Δ0            either t ≥ t_{α/2,ν} or t ≤ -t_{α/2,ν} (two-tailed)
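The ν formula is fiddly by hand; a small helper (the function name is ours) computes it directly:

```python
def welch_df(s1, m, s2, n):
    """Welch-Satterthwaite estimate of df for the two-sample t test."""
    a, b = s1**2 / m, s2**2 / n            # the two variance contributions
    return (a + b) ** 2 / (a**2 / (m - 1) + b**2 / (n - 1))
```

As a sanity check, with equal sample standard deviations and equal sizes the formula reduces to 2(n - 1), e.g. `welch_df(2, 10, 2, 10)` gives approximately 18; in general ν falls between min(m, n) - 1 and m + n - 2.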
54| A Test for Proportion Differences
Theoretically, we know that:
Z = (p̂1 - p̂2) / √(p(1 - p)(1/m + 1/n))
has approximately a standard normal distribution when H0
is true.
However, this Z cannot serve as a test statistic because the
value of p is unknown: H0 asserts only that there is a
common value of p, but does not say what that value is.
| A Large-Sample Test Procedure
Under the null hypothesis we assume that p1 = p2 = p, so
instead of separate samples of size m and n from two
different populations (two different binomial distributions),
we really have a single sample of size m + n from one
population with proportion p.
The total number of individuals in this combined sample
having the characteristic of interest is X + Y.
The estimator of p is then
p̂ = (X + Y)/(m + n) = (m/(m + n))·p̂1 + (n/(m + n))·p̂2    (9.5)
| A Large-Sample Test Procedure
Using p̂ and q̂ = 1 - p̂ in place of p and q in our old
equation gives a test statistic having approximately a
standard normal distribution when H0 is true.
Null hypothesis: H0: p1 - p2 = 0
Test statistic value (large samples):
z = (p̂1 - p̂2) / √(p̂q̂(1/m + 1/n))
| A Large-Sample Test Procedure
Alternative Hypothesis      Rejection Region for Approximate Level α Test
Ha: p1 - p2 > 0             z ≥ z_α
Ha: p1 - p2 < 0             z ≤ -z_α
Ha: p1 - p2 ≠ 0             either z ≥ z_{α/2} or z ≤ -z_{α/2}
A p-value is calculated in the same way as for previous z
tests.
The test can safely be used as long as m·p̂1, m·q̂1, n·p̂2, and n·q̂2
are all at least 10.
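A sketch of the pooled z statistic (stdlib only; the example counts at the bottom are hypothetical, not from the slides):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(x, m, y, n):
    """z statistic for H0: p1 - p2 = 0, using the pooled estimate p-hat."""
    p1, p2 = x / m, y / n
    p_pool = (x + y) / (m + n)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / m + 1 / n))
    return (p1 - p2) / se

# Hypothetical counts: 60/100 successes vs 40/100 successes, two-tailed test
z = two_prop_z(60, 100, 40, 100)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(z, 2), round(p_value, 4))
```

Note that equal sample proportions give z = 0 exactly, since the numerator p̂1 - p̂2 vanishes regardless of the pooled estimate.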
The F Test for Equality of Variances| The F Distribution
The F probability distribution has two parameters, denoted
by ν1 and ν2. The parameter ν1 is called the numerator
degrees of freedom, and ν2 is the denominator degrees of
freedom.
A random variable that has an F distribution cannot
assume a negative value. The density function is
complicated and will not be used explicitly, so it's not
shown.
There is an important connection between an F variable
and chi-squared variables.
60| The F Distribution
If X1 and X2 are independent chi-squared rv's with ν1 and
ν2 df, respectively, then the rv
F = (X1/ν1) / (X2/ν2)
can be shown to have an F distribution.
Recall that a chi-squared distribution is obtained by
summing squared standard normal variables (such as
squared deviations, for example). So a scaled ratio of two
sample variances is a ratio of two scaled chi-squared variables.
61| The F Distribution
The figure below illustrates a typical F density function.
[Figure: an F density curve with ν1 and ν2 df; the shaded upper-tail area equals α.]
62| The F Distribution
We use F_{α,ν1,ν2} for the value on the horizontal axis that
captures α of the area under the F density curve with ν1
and ν2 df in the upper tail.
The density curve is not symmetric, so it would seem that
both upper- and lower-tail critical values must be tabulated.
This is not necessary, though, because of the fact that
F_{1-α,ν1,ν2} = 1/F_{α,ν2,ν1}.
63| The F Distribution
For example, F_{.05,6,10} = 3.22 and F_{.95,10,6} = 1/F_{.05,6,10} = 1/3.22 = 0.31.
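The reciprocal relation can be checked numerically, assuming SciPy is available:

```python
from scipy import stats

alpha, v1, v2 = 0.05, 6, 10
f_upper = stats.f.ppf(1 - alpha, v1, v2)   # F_{.05,6,10}, about 3.22
f_lower = stats.f.ppf(alpha, v2, v1)       # F_{.95,10,6}, about 0.31

# Check: F_{1-alpha,v2,v1} = 1 / F_{alpha,v1,v2}
print(round(f_upper, 2), round(f_lower, 2))
```

This is why F tables only print upper-tail critical values: lower-tail ones come from taking the reciprocal and swapping the two df parameters.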
64| The F Test for Equality of Variances
A test procedure for hypotheses concerning the ratio σ1²/σ2²
is based on the following result.
Theorem
Let X1, ..., Xm be a random sample from a normal
distribution with variance σ1², let Y1, ..., Yn be another
random sample (independent of the Xi's) from a normal
distribution with variance σ2², and let S1² and S2² denote the
two sample variances. Then the rv
F = (S1²/σ1²) / (S2²/σ2²)
has an F distribution with ν1 = m - 1 and ν2 = n - 1.
65| The F Test for Equality of Variances
This theorem results from combining the fact that the
variables (m - 1)S1²/σ1² and (n - 1)S2²/σ2² each have a
chi-squared distribution with m - 1 and n - 1 df,
respectively.
Because F involves a ratio rather than a difference, the test
statistic is the ratio of sample variances.
The claim that σ1² = σ2² is then rejected if the ratio differs by
too much from 1.
66| The F Test for Equality of Variances
Null hypothesis: H0: σ1² = σ2²
Test statistic value: f = s1²/s2²
Alternative Hypothesis      Rejection Region for a Level α Test
Ha: σ1² > σ2²               f ≥ F_{α,m-1,n-1}
Ha: σ1² < σ2²               f ≤ F_{1-α,m-1,n-1}
Ha: σ1² ≠ σ2²               either f ≥ F_{α/2,m-1,n-1} or f ≤ F_{1-α/2,m-1,n-1}
67| Example
On the basis of data reported in the article “Serum Ferritin in
an Elderly Population” (J. of Gerontology, 1979:
521-524), the authors concluded that the ferritin distribution
in the elderly had a smaller variance than in the younger
adults. (Serum ferritin is used in diagnosing iron deficiency.)
For a sample of 28 elderly men, the sample standard
deviation of serum ferritin (mg/L) was s1 = 52.6; for 26 young
men, the sample standard deviation was s2 = 84.2.
Does this data support the conclusion as applied to men?
Use α = .01.
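A sketch of this lower-tailed F test, assuming SciPy is available for F-distribution quantiles:

```python
from scipy import stats

# Lower-tailed test: H0: sigma1^2 = sigma2^2 vs Ha: sigma1^2 < sigma2^2
m, s1 = 28, 52.6   # elderly men
n, s2 = 26, 84.2   # young men
alpha = 0.01

f = s1**2 / s2**2                          # test statistic, about .39
f_crit = stats.f.ppf(alpha, m - 1, n - 1)  # lower-tail value F_{1-alpha,m-1,n-1}
p_value = stats.f.cdf(f, m - 1, n - 1)     # lower-tail p-value

print(round(f, 2), round(f_crit, 2), f <= f_crit)
```

Comparing f with the lower-tail critical value (or the p-value with α) gives the decision for the elderly-variance-is-smaller claim.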