Two-Sample Summary Table

This summary table collects procedures for comparing two populations: z- and t-procedures for the difference of two means (variances known or unknown), an F-procedure for the ratio of two variances, and a z-procedure for the difference of two proportions. For each procedure, and for one-sided and two-sided alternatives, it lists the confidence interval, the test statistic, the p-value, the size-α accept/reject rule, the Type II error rate, and the required sample size.


Testing two population means using the two-sample z-procedure

Conditions: variances known AND (both sample sizes $n_1, n_2 \ge 30$, OR $n_1, n_2 < 30$ with normally distributed data)
Case 1: One-sided (upper bound)   Case 2: Two-sided   Case 3: One-sided (lower bound)

(1-α) level Confidence Intervals


 2   σ 12 σ 22   
 − ∞, x − x + z σ 1 + σ 2  σ 12 σ 22  x − x − z σ1 + σ 2 ,∞
2 2 2
x − x − z + , x − x + z +
 1 2 α
n1 n2   1 2 α 2
n n
1 2 α 2
n n2   1 2 α
n1 n2 
  1 2 1  

Hypothesis Testing: test statistic  $z_0 = \dfrac{\bar{x}_1-\bar{x}_2}{\sqrt{\dfrac{\sigma_1^2}{n_1}+\dfrac{\sigma_2^2}{n_2}}}$

Case 1:  $H_0: \mu_1 \ge \mu_2$ vs. $H_1: \mu_1 < \mu_2$;   p-value $= \Phi(z_0)$
Case 2:  $H_0: \mu_1 = \mu_2$ vs. $H_1: \mu_1 \ne \mu_2$;   p-value $= 2\,\Phi(-|z_0|)$
Case 3:  $H_0: \mu_1 \le \mu_2$ vs. $H_1: \mu_1 > \mu_2$;   p-value $= 1-\Phi(z_0)$

Size α hypothesis tests


Case 1:  accept $H_0$ if $z_0 \ge -z_{\alpha}$;   reject $H_0$ if $z_0 < -z_{\alpha}$
Case 2:  accept $H_0$ if $|z_0| \le z_{\alpha/2}$;   reject $H_0$ if $|z_0| > z_{\alpha/2}$
Case 3:  accept $H_0$ if $z_0 \le z_{\alpha}$;   reject $H_0$ if $z_0 > z_{\alpha}$
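A minimal Python sketch of the test itself (same made-up summaries as above); $\Phi$ is norm.cdf, so the three p-value rules map directly onto the table.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical summary statistics (not from the document)
xbar1, xbar2 = 12.5, 11.8
sigma1, sigma2 = 1.2, 1.5
n1, n2 = 40, 35
alpha = 0.05

z0 = (xbar1 - xbar2) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)

p_case1 = norm.cdf(z0)               # H1: mu1 < mu2
p_case2 = 2 * norm.cdf(-abs(z0))     # H1: mu1 != mu2
p_case3 = 1 - norm.cdf(z0)           # H1: mu1 > mu2

reject_two_sided = abs(z0) > norm.ppf(1 - alpha / 2)   # size-alpha rule, Case 2
print(z0, p_case1, p_case2, p_case3, reject_two_sided)
```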

Type II error ($\beta$), with $\Delta = \mu_1-\mu_2$ under $H_1$, $\Delta_0 = \mu_1-\mu_2$ under $H_0$ (usually $\Delta_0 = 0$); let $SE = \sqrt{\dfrac{\sigma_1^2}{n_1}+\dfrac{\sigma_2^2}{n_2}}$

Case 1:  $\beta = 1 - \Phi\!\left(-z_{\alpha} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 2:  $\beta = \Phi\!\left(z_{\alpha/2} - \dfrac{\Delta-\Delta_0}{SE}\right) - \Phi\!\left(-z_{\alpha/2} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 3:  $\beta = \Phi\!\left(z_{\alpha} - \dfrac{\Delta-\Delta_0}{SE}\right)$
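A sketch of these $\beta$ formulas, assuming hypothetical known variances, sample sizes, and a hypothetical true difference $\Delta$:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical design values (not from the document)
sigma1, sigma2 = 1.2, 1.5
n1, n2 = 40, 35
alpha = 0.05
delta, delta0 = 0.8, 0.0      # true difference under H1, hypothesized difference

se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)
shift = (delta - delta0) / se
z_a, z_a2 = norm.ppf(1 - alpha), norm.ppf(1 - alpha / 2)

beta_case1 = 1 - norm.cdf(-z_a - shift)                         # H1: mu1 < mu2
beta_case2 = norm.cdf(z_a2 - shift) - norm.cdf(-z_a2 - shift)   # H1: mu1 != mu2
beta_case3 = norm.cdf(z_a - shift)                              # H1: mu1 > mu2
print(beta_case1, beta_case2, beta_case3)
```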
Sample size $n$ (with $n_1 = n_2 = n$); given $\alpha,\ \beta,\ \Delta_0,\ \Delta$

Case 1:  $n = \dfrac{(z_{\alpha}+z_{\beta})^2\,(\sigma_1^2+\sigma_2^2)}{(\Delta-\Delta_0)^2}$
Case 2:  $n = \dfrac{(z_{\alpha/2}+z_{\beta})^2\,(\sigma_1^2+\sigma_2^2)}{(\Delta-\Delta_0)^2}$
Case 3:  same as Case 1

Sample size $n$ (with $n_1 = n_2 = n$); given $\alpha$ and $E$ = half-width of CI

Case 1:  $n = \left(\dfrac{z_{\alpha}}{E}\right)^2(\sigma_1^2+\sigma_2^2)$
Case 2:  $n = \left(\dfrac{z_{\alpha/2}}{E}\right)^2(\sigma_1^2+\sigma_2^2)$
Case 3:  same as Case 1
Testing two population means using the two independent-sample t-procedure (variances not assumed equal)
Conditions: variances unknown AND (both sample sizes $n_1, n_2 \ge 30$, OR $n_1, n_2 < 30$ with normally distributed data)

If the variances are NOT assumed equal, compute the degrees of freedom  $\nu = \dfrac{\left(\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}\right)^2}{\dfrac{\left(s_1^2/n_1\right)^2}{n_1-1}+\dfrac{\left(s_2^2/n_2\right)^2}{n_2-1}}$  (commonly rounded down to the nearest integer)
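A small Python sketch of this degrees-of-freedom calculation (sample summaries are hypothetical):

```python
from math import floor

# Hypothetical sample summaries (not from the document)
s1, s2 = 2.1, 3.4      # sample standard deviations
n1, n2 = 18, 22        # sample sizes

v1, v2 = s1**2 / n1, s2**2 / n2
nu = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
nu = floor(nu)         # commonly rounded down to be conservative
print(nu)
```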
Case 1: One-sided (upper bound) Case 2: Two-sided Case 3: One-sided (lower bound)

(1-α) level Confidence Intervals


Case 1 (upper bound):  $\left(-\infty,\ \ \bar{x}_1-\bar{x}_2 + t_{\alpha,\nu}\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}}\,\right)$
Case 2 (two-sided):    $\left(\bar{x}_1-\bar{x}_2 - t_{\alpha/2,\nu}\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}},\ \ \bar{x}_1-\bar{x}_2 + t_{\alpha/2,\nu}\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}}\,\right)$
Case 3 (lower bound):  $\left(\bar{x}_1-\bar{x}_2 - t_{\alpha,\nu}\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}},\ \ \infty\right)$

Hypothesis Testing: test statistic  $t_0 = \dfrac{\bar{x}_1-\bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}}}$;   $X \sim t_{\nu}$

Case 1:  $H_0: \mu_1 \ge \mu_2$ vs. $H_1: \mu_1 < \mu_2$;   p-value $= P(X < t_0)$
Case 2:  $H_0: \mu_1 = \mu_2$ vs. $H_1: \mu_1 \ne \mu_2$;   p-value $= 2\,P(X > |t_0|)$
Case 3:  $H_0: \mu_1 \le \mu_2$ vs. $H_1: \mu_1 > \mu_2$;   p-value $= P(X > t_0)$

Size α hypothesis tests


Case 1:  accept $H_0$ if $t_0 \ge -t_{\alpha,\nu}$;   reject $H_0$ if $t_0 < -t_{\alpha,\nu}$
Case 2:  accept $H_0$ if $|t_0| \le t_{\alpha/2,\nu}$;   reject $H_0$ if $|t_0| > t_{\alpha/2,\nu}$
Case 3:  accept $H_0$ if $t_0 \le t_{\alpha,\nu}$;   reject $H_0$ if $t_0 > t_{\alpha,\nu}$
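A sketch of the unequal-variance (Welch) t-test from summary statistics (hypothetical values); if the raw observations are available, scipy.stats.ttest_ind(x1, x2, equal_var=False) reproduces the two-sided result.

```python
from math import sqrt
from scipy.stats import t

# Hypothetical summary statistics (not from the document)
xbar1, xbar2 = 12.5, 11.8
s1, s2 = 2.1, 3.4
n1, n2 = 18, 22
alpha = 0.05

v1, v2 = s1**2 / n1, s2**2 / n2
nu = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))   # Welch df
t0 = (xbar1 - xbar2) / sqrt(v1 + v2)

p_case1 = t.cdf(t0, nu)            # H1: mu1 < mu2
p_case2 = 2 * t.sf(abs(t0), nu)    # H1: mu1 != mu2
p_case3 = t.sf(t0, nu)             # H1: mu1 > mu2

reject_two_sided = abs(t0) > t.ppf(1 - alpha / 2, nu)
print(t0, nu, p_case1, p_case2, p_case3, reject_two_sided)
```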

Type II error ($\beta$), with $\Delta = \mu_1-\mu_2$ under $H_1$, $\Delta_0 = \mu_1-\mu_2$ under $H_0$ (usually $\Delta_0 = 0$); let $SE = \sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}}$

Case 1:  $\beta = 1 - P\!\left(X < -t_{\alpha,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 2:  $\beta = P\!\left(X < t_{\alpha/2,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right) - P\!\left(X < -t_{\alpha/2,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 3:  $\beta = P\!\left(X < t_{\alpha,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$
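A sketch of these $\beta$ approximations (hypothetical values). The table uses the central $t_{\nu}$ distribution with a shifted cutoff; exact power calculations use the noncentral t, OC curves, or software, as the sample-size row below indicates.

```python
from math import sqrt
from scipy.stats import t

# Hypothetical design values (not from the document)
s1, s2 = 2.1, 3.4
n1, n2 = 18, 22
alpha = 0.05
delta, delta0 = 1.5, 0.0

v1, v2 = s1**2 / n1, s2**2 / n2
nu = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
shift = (delta - delta0) / sqrt(v1 + v2)
t_a, t_a2 = t.ppf(1 - alpha, nu), t.ppf(1 - alpha / 2, nu)

beta_case1 = 1 - t.cdf(-t_a - shift, nu)                          # H1: mu1 < mu2
beta_case2 = t.cdf(t_a2 - shift, nu) - t.cdf(-t_a2 - shift, nu)   # H1: mu1 != mu2
beta_case3 = t.cdf(t_a - shift, nu)                               # H1: mu1 > mu2
print(beta_case1, beta_case2, beta_case3)
```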

Sample size $n$; given $\alpha,\ \beta,\ \Delta_0,\ \Delta$; OR given $\alpha$ and $E$ = half-width of CI

This requires the use of OC curves or statistical software, e.g. Minitab.
Testing two population means using the two independent-sample t-procedure (variances assumed equal)
Conditions: variances unknown AND (both sample sizes $n_1, n_2 \ge 30$, OR $n_1, n_2 < 30$ with normally distributed data)

If the variances are assumed equal, compute the pooled variance  $s_p^2 = \dfrac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}$;   degrees of freedom  $\nu = n_1+n_2-2$
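A small Python sketch of the pooled variance and degrees of freedom (hypothetical sample summaries):

```python
from math import sqrt

# Hypothetical sample summaries (not from the document)
s1, s2 = 2.1, 2.4
n1, n2 = 15, 12

sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)   # pooled variance
sp = sqrt(sp2)
nu = n1 + n2 - 2                                              # degrees of freedom
print(sp, nu)
```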
Case 1: One-sided (upper bound) Case 2: Two-sided Case 3: One-sided (lower bound)

(1-α) level Confidence Intervals


     
Case 1 (upper bound):  $\left(-\infty,\ \ \bar{x}_1-\bar{x}_2 + t_{\alpha,\nu}\, s_p\sqrt{\tfrac{1}{n_1}+\tfrac{1}{n_2}}\,\right)$
Case 2 (two-sided):    $\left(\bar{x}_1-\bar{x}_2 - t_{\alpha/2,\nu}\, s_p\sqrt{\tfrac{1}{n_1}+\tfrac{1}{n_2}},\ \ \bar{x}_1-\bar{x}_2 + t_{\alpha/2,\nu}\, s_p\sqrt{\tfrac{1}{n_1}+\tfrac{1}{n_2}}\,\right)$
Case 3 (lower bound):  $\left(\bar{x}_1-\bar{x}_2 - t_{\alpha,\nu}\, s_p\sqrt{\tfrac{1}{n_1}+\tfrac{1}{n_2}},\ \ \infty\right)$

Hypothesis Testing: test statistic  $t_0 = \dfrac{\bar{x}_1-\bar{x}_2}{s_p\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}}$;   $X \sim t_{\nu}$

Case 1:  $H_0: \mu_1 \ge \mu_2$ vs. $H_1: \mu_1 < \mu_2$;   p-value $= P(X < t_0)$
Case 2:  $H_0: \mu_1 = \mu_2$ vs. $H_1: \mu_1 \ne \mu_2$;   p-value $= 2\,P(X > |t_0|)$
Case 3:  $H_0: \mu_1 \le \mu_2$ vs. $H_1: \mu_1 > \mu_2$;   p-value $= P(X > t_0)$

Size α hypothesis tests


Case 1:  accept $H_0$ if $t_0 \ge -t_{\alpha,\nu}$;   reject $H_0$ if $t_0 < -t_{\alpha,\nu}$
Case 2:  accept $H_0$ if $|t_0| \le t_{\alpha/2,\nu}$;   reject $H_0$ if $|t_0| > t_{\alpha/2,\nu}$
Case 3:  accept $H_0$ if $t_0 \le t_{\alpha,\nu}$;   reject $H_0$ if $t_0 > t_{\alpha,\nu}$
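A sketch of the pooled t-test from summary statistics (hypothetical values); with raw data, scipy.stats.ttest_ind(x1, x2) (equal_var=True by default) gives the same two-sided p-value.

```python
from math import sqrt
from scipy.stats import t

# Hypothetical summary statistics (not from the document)
xbar1, xbar2 = 12.5, 11.8
s1, s2 = 2.1, 2.4
n1, n2 = 15, 12
alpha = 0.05

sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
nu = n1 + n2 - 2
t0 = (xbar1 - xbar2) / (sp * sqrt(1 / n1 + 1 / n2))

p_case1 = t.cdf(t0, nu)            # H1: mu1 < mu2
p_case2 = 2 * t.sf(abs(t0), nu)    # H1: mu1 != mu2
p_case3 = t.sf(t0, nu)             # H1: mu1 > mu2

reject_two_sided = abs(t0) > t.ppf(1 - alpha / 2, nu)
print(t0, p_case1, p_case2, p_case3, reject_two_sided)
```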

Type II error ($\beta$), with $\Delta = \mu_1-\mu_2$ under $H_1$, $\Delta_0 = \mu_1-\mu_2$ under $H_0$ (usually $\Delta_0 = 0$); let $SE = s_p\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}$

Case 1:  $\beta = 1 - P\!\left(X < -t_{\alpha,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 2:  $\beta = P\!\left(X < t_{\alpha/2,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right) - P\!\left(X < -t_{\alpha/2,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$
Case 3:  $\beta = P\!\left(X < t_{\alpha,\nu} - \dfrac{\Delta-\Delta_0}{SE}\right)$

Sample size $n$; given $\alpha,\ \beta,\ \Delta_0,\ \Delta$; OR given $\alpha$ and $E$ = half-width of CI

This requires the use of OC curves or statistical software, e.g. Minitab.
Testing two population variances using the two-sample F-procedure
Data must be normally distributed
Case 1: One-sided (upper bound) Case 2: Two-sided Case 3: One-sided (lower bound)

(1-α) level Confidence Intervals


Confidence intervals for the variance ratio $\sigma_1^2/\sigma_2^2$:
Case 1 (upper bound):  $\left(0,\ \ \dfrac{s_1^2}{s_2^2}\, f_{\alpha,\, n_2-1,\, n_1-1}\right)$
Case 2 (two-sided):    $\left(\dfrac{s_1^2}{s_2^2}\, f_{1-\alpha/2,\, n_2-1,\, n_1-1},\ \ \dfrac{s_1^2}{s_2^2}\, f_{\alpha/2,\, n_2-1,\, n_1-1}\right)$
Case 3 (lower bound):  $\left(\dfrac{s_1^2}{s_2^2}\, f_{1-\alpha,\, n_2-1,\, n_1-1},\ \ \infty\right)$

Hypothesis Testing: test statistic  $F_0 = \dfrac{s_1^2}{s_2^2}$;   $X \sim F_{n_1-1,\, n_2-1}$

Case 1:  $H_0: \sigma_1^2 \ge \sigma_2^2$ vs. $H_1: \sigma_1^2 < \sigma_2^2$;   p-value $= P(X < F_0)$
Case 2:  $H_0: \sigma_1^2 = \sigma_2^2$ vs. $H_1: \sigma_1^2 \ne \sigma_2^2$;   p-value $= 2\,P(X > F_0)$ if $F_0 > 1$, or $2\,P(X < F_0)$ if $F_0 < 1$
Case 3:  $H_0: \sigma_1^2 \le \sigma_2^2$ vs. $H_1: \sigma_1^2 > \sigma_2^2$;   p-value $= P(X > F_0)$

Size α hypothesis tests


Case 1:  accept $H_0$ if $F_0 \ge f_{1-\alpha,\, n_1-1,\, n_2-1}$;   reject $H_0$ if $F_0 < f_{1-\alpha,\, n_1-1,\, n_2-1}$
Case 2:  accept $H_0$ if $f_{1-\alpha/2,\, n_1-1,\, n_2-1} \le F_0 \le f_{\alpha/2,\, n_1-1,\, n_2-1}$;   reject $H_0$ if $F_0 > f_{\alpha/2,\, n_1-1,\, n_2-1}$ or $F_0 < f_{1-\alpha/2,\, n_1-1,\, n_2-1}$
Case 3:  accept $H_0$ if $F_0 \le f_{\alpha,\, n_1-1,\, n_2-1}$;   reject $H_0$ if $F_0 > f_{\alpha,\, n_1-1,\, n_2-1}$

Type II error and sample size n are usually not of interest
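A sketch of the F-procedure (hypothetical sample summaries). Here $f_{\alpha,u,v}$ denotes the upper-$\alpha$ percentage point, i.e. f.ppf(1 - alpha, u, v) in scipy.

```python
from scipy.stats import f

# Hypothetical sample summaries (not from the document)
s1, s2 = 2.8, 2.1
n1, n2 = 16, 21
alpha = 0.05

F0 = s1**2 / s2**2
dfn, dfd = n1 - 1, n2 - 1          # X ~ F(n1-1, n2-1)

p_case1 = f.cdf(F0, dfn, dfd)                                          # H1: var1 < var2
p_case2 = 2 * (f.sf(F0, dfn, dfd) if F0 > 1 else f.cdf(F0, dfn, dfd))  # H1: var1 != var2
p_case3 = f.sf(F0, dfn, dfd)                                           # H1: var1 > var2

# Two-sided CI for sigma1^2 / sigma2^2 (Case 2 of the table)
ratio = s1**2 / s2**2
ci_low = ratio * f.ppf(alpha / 2, n2 - 1, n1 - 1)        # ratio * f_{1-alpha/2, n2-1, n1-1}
ci_high = ratio * f.ppf(1 - alpha / 2, n2 - 1, n1 - 1)   # ratio * f_{alpha/2, n2-1, n1-1}
print(F0, p_case1, p_case2, p_case3, (ci_low, ci_high))
```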


Testing two population proportions using the two-sample z-procedure
Data are from binomial distributions: $X_1 \sim \mathrm{Bin}(n_1, p_1)$, $X_2 \sim \mathrm{Bin}(n_2, p_2)$
Case 1: One-sided (upper bound)   Case 2: Two-sided   Case 3: One-sided (lower bound)

First of all, compute these estimates:  $\hat{p}_1 = \dfrac{x_1}{n_1}$, $\hat{q}_1 = 1-\hat{p}_1$;   $\hat{p}_2 = \dfrac{x_2}{n_2}$, $\hat{q}_2 = 1-\hat{p}_2$;   pooled $\hat{p} = \dfrac{x_1+x_2}{n_1+n_2}$

(1-α) level Confidence Intervals


Case 1 (upper bound):  $\left(-1,\ \ \hat{p}_1-\hat{p}_2 + z_{\alpha}\sqrt{\dfrac{\hat{p}_1\hat{q}_1}{n_1}+\dfrac{\hat{p}_2\hat{q}_2}{n_2}}\,\right)$
Case 2 (two-sided):    $\left(\hat{p}_1-\hat{p}_2 - z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1\hat{q}_1}{n_1}+\dfrac{\hat{p}_2\hat{q}_2}{n_2}},\ \ \hat{p}_1-\hat{p}_2 + z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1\hat{q}_1}{n_1}+\dfrac{\hat{p}_2\hat{q}_2}{n_2}}\,\right)$
Case 3 (lower bound):  $\left(\hat{p}_1-\hat{p}_2 - z_{\alpha}\sqrt{\dfrac{\hat{p}_1\hat{q}_1}{n_1}+\dfrac{\hat{p}_2\hat{q}_2}{n_2}},\ \ 1\right)$

Hypothesis Testing: test statistic  $z_0 = \dfrac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}}$

Case 1:  $H_0: p_1 \ge p_2$ vs. $H_1: p_1 < p_2$;   p-value $= \Phi(z_0)$
Case 2:  $H_0: p_1 = p_2$ vs. $H_1: p_1 \ne p_2$;   p-value $= 2\,\Phi(-|z_0|)$
Case 3:  $H_0: p_1 \le p_2$ vs. $H_1: p_1 > p_2$;   p-value $= 1-\Phi(z_0)$

Size α hypothesis tests


Case 1:  accept $H_0$ if $z_0 \ge -z_{\alpha}$;   reject $H_0$ if $z_0 < -z_{\alpha}$
Case 2:  accept $H_0$ if $|z_0| \le z_{\alpha/2}$;   reject $H_0$ if $|z_0| > z_{\alpha/2}$
Case 3:  accept $H_0$ if $z_0 \le z_{\alpha}$;   reject $H_0$ if $z_0 > z_{\alpha}$
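A sketch of the two-proportion z-test and the two-sided interval (hypothetical counts). Note the test statistic uses the pooled $\hat{p}$, while the confidence interval uses the unpooled standard error, as in the rows above.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical counts (not from the document)
x1, n1 = 48, 200     # successes / trials, sample 1
x2, n2 = 30, 180     # successes / trials, sample 2
alpha = 0.05

p1_hat, p2_hat = x1 / n1, x2 / n2
p_hat = (x1 + x2) / (n1 + n2)        # pooled estimate under H0

z0 = (p1_hat - p2_hat) / sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))

p_case1 = norm.cdf(z0)               # H1: p1 < p2
p_case2 = 2 * norm.cdf(-abs(z0))     # H1: p1 != p2
p_case3 = 1 - norm.cdf(z0)           # H1: p1 > p2

se_ci = sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
z_half = norm.ppf(1 - alpha / 2)
ci_two_sided = (p1_hat - p2_hat - z_half * se_ci, p1_hat - p2_hat + z_half * se_ci)
print(z0, p_case1, p_case2, p_case3, ci_two_sided)
```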

Type II error ($\beta$); let $s_0 = \sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}$ and $\sigma_D = \sqrt{\dfrac{p_1 q_1}{n_1}+\dfrac{p_2 q_2}{n_2}}$

Case 1 ($H_0: p_1 \ge p_2$ vs. $H_1: p_1 < p_2$):  $\beta = 1 - \Phi\!\left(\dfrac{-z_{\alpha}\, s_0 - (p_1-p_2)}{\sigma_D}\right)$
Case 2 ($H_0: p_1 = p_2$ vs. $H_1: p_1 \ne p_2$):  $\beta = \Phi\!\left(\dfrac{z_{\alpha/2}\, s_0 - (p_1-p_2)}{\sigma_D}\right) - \Phi\!\left(\dfrac{-z_{\alpha/2}\, s_0 - (p_1-p_2)}{\sigma_D}\right)$
Case 3 ($H_0: p_1 \le p_2$ vs. $H_1: p_1 > p_2$):  $\beta = \Phi\!\left(\dfrac{z_{\alpha}\, s_0 - (p_1-p_2)}{\sigma_D}\right)$
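A sketch of these $\beta$ formulas for planning purposes, with hypothetical true proportions; following common practice, the weighted average $\bar{p} = (n_1 p_1 + n_2 p_2)/(n_1+n_2)$ stands in for the data-based pooled estimate $\hat{p}$.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical design values (not from the document)
p1, p2 = 0.30, 0.20      # assumed true proportions under H1
n1, n2 = 200, 200
alpha = 0.05

p_bar = (n1 * p1 + n2 * p2) / (n1 + n2)             # stands in for p-hat at the design stage
s0 = sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
sigma_d = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z_a, z_a2 = norm.ppf(1 - alpha), norm.ppf(1 - alpha / 2)

beta_case1 = 1 - norm.cdf((-z_a * s0 - (p1 - p2)) / sigma_d)    # H1: p1 < p2
beta_case2 = (norm.cdf((z_a2 * s0 - (p1 - p2)) / sigma_d)
              - norm.cdf((-z_a2 * s0 - (p1 - p2)) / sigma_d))   # H1: p1 != p2
beta_case3 = norm.cdf((z_a * s0 - (p1 - p2)) / sigma_d)         # H1: p1 > p2
print(beta_case1, beta_case2, beta_case3)
```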

Sample size $n$ (with $n_1 = n_2 = n$); given $\alpha,\ \beta,\ d = p_1-p_2$ from $H_1$

Case 1:  $n = \dfrac{\left(z_{\alpha}\sqrt{\dfrac{(\hat{p}_1+\hat{p}_2)(\hat{q}_1+\hat{q}_2)}{2}} + z_{\beta}\sqrt{\hat{p}_1\hat{q}_1+\hat{p}_2\hat{q}_2}\right)^2}{d^2}$
Case 2:  $n = \dfrac{\left(z_{\alpha/2}\sqrt{\dfrac{(\hat{p}_1+\hat{p}_2)(\hat{q}_1+\hat{q}_2)}{2}} + z_{\beta}\sqrt{\hat{p}_1\hat{q}_1+\hat{p}_2\hat{q}_2}\right)^2}{d^2}$
Case 3:  same as Case 1

Sample size $n$ (with $n_1 = n_2 = n$); given $\alpha$ and $E$ = half-width of CI

Case 1:  $n = \dfrac{z_{\alpha}^2\,(\hat{p}_1\hat{q}_1+\hat{p}_2\hat{q}_2)}{E^2}$
Case 2:  $n = \dfrac{z_{\alpha/2}^2\,(\hat{p}_1\hat{q}_1+\hat{p}_2\hat{q}_2)}{E^2}$
Case 3:  same as Case 1
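A sketch of both sample-size formulas (per group, rounded up; the table's $\hat{p}_1, \hat{p}_2$ are replaced here by hypothetical planning values $p_1, p_2$):

```python
from math import ceil, sqrt
from scipy.stats import norm

# Hypothetical planning values (not from the document)
p1, p2 = 0.30, 0.20
q1, q2 = 1 - p1, 1 - p2
alpha, beta = 0.05, 0.10
d = p1 - p2
E = 0.05                      # desired CI half-width

z_a, z_a2, z_b = norm.ppf(1 - alpha), norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)

# n per group for power 1-beta at difference d (two-sided case; use z_a for one-sided)
num = z_a2 * sqrt((p1 + p2) * (q1 + q2) / 2) + z_b * sqrt(p1 * q1 + p2 * q2)
n_power = ceil(num**2 / d**2)

# n per group for a CI half-width of at most E (two-sided case)
n_width = ceil(z_a2**2 * (p1 * q1 + p2 * q2) / E**2)
print(n_power, n_width)
```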
