Psych Stats 4 Parametric Tests
Parametric Tests

Prepared by: Prof. Gerald M. Llanes, RPm, LPT


Parametric Tests
• Parametric tests are tests that require a normal distribution and levels of measurement expressed as interval or ratio data.
I. Z- test for Two Sample Means

• This is used to compare the means of two independent groups of samples drawn from normally distributed populations.
• This is used if there are more than 30 samples for every
group
• Each population from which the samples were drawn should be normally distributed and have a known standard deviation.
I. Z- test for Two Sample Means

• Formula:
• Z = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
I. Z- test for Two Sample Means

• Example 1.
• An admission test was administered to incoming freshmen in the College of Nursing and the College of Veterinary Medicine, with 100 randomly selected students in each group. The mean scores of the given samples were x̄₁ = 90 and x̄₂ = 85, and the variances of the test scores were 40 and 35, respectively. Is there a significant difference between the two groups? Use the 0.01 level of significance.
I. Z- test for Two Sample Means

• 1. Problem:
• Is there a significant difference between the two groups?
• 2. Hypotheses:
• Ho: There is no significant difference between the two
groups
• Ha: There is a significant difference between the two
groups.
I. Z- test for Two Sample Means

• 3. Level of Significance:
• α= 0.01
• Z tabular value= ± 2.575

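• As an optional check, the two-tailed critical value for α = 0.01 can be obtained from the standard normal distribution rather than a printed table. A minimal Python sketch, assuming SciPy is available:

```python
from scipy.stats import norm

alpha = 0.01
# Two-tailed test: alpha/2 probability is placed in each tail of the standard normal curve
z_crit = norm.ppf(1 - alpha / 2)
print(round(z_crit, 3))  # ~2.576, matching the tabular value of +/-2.575 used in the slides
```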
I. Z- test for Two Sample Means

• 4. Test Statistic:
• Z-Test for Two-Sample Mean
• 5. Decision Rule:
• If Z computed value ≤ - 2.575, reject Ho
• If Z computed value ≥ + 2.575, reject Ho
• If -2.575 < Z computed value < +2.575, accept Ho
I. Z- test for Two Sample Means
• 6. Computation:
• Z = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
• Z = (90 − 85) / √(40/100 + 35/100) = 5 / √0.75 = 5 / 0.866025403 = 5.77
• Z = 5.77
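• The computation above can be reproduced in a few lines of Python. A minimal sketch using only the summary statistics given in the example (the helper name two_sample_z is ours, not a library function):

```python
import math

def two_sample_z(mean1, mean2, var1, var2, n1, n2):
    """Two-sample Z statistic from summary statistics (means, variances, sample sizes)."""
    return (mean1 - mean2) / math.sqrt(var1 / n1 + var2 / n2)

# Example 1: Nursing vs. Veterinary Medicine admission scores
z = two_sample_z(90, 85, 40, 35, 100, 100)
print(round(z, 2))  # 5.77
```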
I. Z- test for Two Sample Means

• 7. Interpretation:
• Since the Z computed value of 5.77 is greater than the Z tabular value of +2.575 at the 0.01 level of significance, the research hypothesis is accepted, which means that there is a significant difference between the two groups.
I. Z- test for Two Sample Means
• Example 2
• Is there a significant difference between the average height of males born in two different countries? A random sample yielded the following results:
• Use the Z-test at 0.05 level of significance to test the null hypothesis that
the corresponding population means are equal against the research
hypothesis that they are not equal.
n₁ = 100   x̄₁ = 63.8   SD₁ = 2.58
n₂ = 120   x̄₂ = 63.1   SD₂ = 2.62


I. Z- test for Two Sample Means
• 1. Problem:
• Is there a significant difference between the average height of males born in the two different countries?
• 2. Hypotheses:
• Ho: There is no significant difference between the average height of males born in the two different countries.
• Ha: There is a significant difference between the average height of males born in the two different countries.
I. Z- test for Two Sample Means
• 3 . Level of Significance:
• α= 0.05
• Z tabular value= ±1.96

I. Z- test for Two Sample Means
• 4. Test Statistic:
• Z-Test for Two-Sample Mean
• 5. Decision Rule:
• If Z computed value ≤ - 1.96, reject Ho
• If Z computed value ≥ +1.96, reject Ho
• If -1.96 < Z computed value < +1.96, accept Ho
I. Z- test for Two Sample Means
• 6. Computation:
• Z = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
• Z = (63.8 − 63.1) / √(2.58²/100 + 2.62²/120) = 0.7 / √(0.066564 + 0.057203333) = 0.7 / √0.123767333 = 0.7 / 0.351805817 = 1.99
• Z = 1.99
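• Example 2 supplies standard deviations rather than variances, so they are squared before entering the same formula. A short sketch of the check:

```python
import math

# Example 2: average heights from two countries (standard deviations are squared to get variances)
z = (63.8 - 63.1) / math.sqrt(2.58**2 / 100 + 2.62**2 / 120)
print(round(z, 2))  # 1.99
```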
I. Z- test for Two Sample Means
• 7. Interpretation:
• Since the Z computed value of 1.99 lies above the Z tabular value of +1.96, the null hypothesis is rejected. This signifies that there is a significant difference between the average height of males born in the two different countries.
II. t-test
• It is used to compare two means, the means of two independent
samples or two independent groups and the means of
correlated samples before and after the treatment.
• The t-test for independent samples is used when the samples
are drawn from different populations, while the t-test for
correlated samples is used when the samples come from the
same population or the same set of samples is subjected to two
different experimental conditions.
• Ideally, the t-test is used when there are fewer than 30 samples.
t- test for Two Independent Sample/ Groups
• The formula is:

• t = (x̄₁ − x̄₂) / √[ ((SS₁ + SS₂) / (n₁ + n₂ − 2)) (1/n₁ + 1/n₂) ]
t- test for Two Independent Sample/ Groups
• Where:
• t = t computed value
• x̄₁ = mean for group 1
• x̄₂ = mean for group 2
• SS₁ = sum of squares of group 1
• SS₂ = sum of squares of group 2
• n₁ = number of observations in group 1
• n₂ = number of observations in group 2
t- test for Two Independent Sample/ Groups
• Example 1.
• The following are the scores of 10 male and 10 female AB
students in Spelling. Test the null hypothesis that there is no
significant difference between the performance of male and
female AB students in the said test. Use t-test at 0.05 level of
significance
t- test for Two Independent Sample/ Groups
• 1. Problem:
• Is there a significant difference between the performance of
male and female students in spelling?
• 2. Hypotheses:
• Ho: There is no significant difference between the performance
of male and female students in spelling.
• Ha: There is a significant difference between the performance
of male and female students in spelling.
t- test for Two Independent Sample/ Groups
• 3. Level of Significance:
• α= 0.05
• 𝑑𝑓 = 𝑛1 + 𝑛2 − 2
• 𝑑𝑓 = 10 + 10 − 2
• 𝑑𝑓 = 20 − 2
• 𝑑𝑓 = 18
• t tabular value = ±2.101
t- test for Two Independent Sample/ Groups
• 4. Test Statistic:
• t-test for two independent samples
• 5. Decision Rule:
• If t computed value ≤ - 2.101, reject Ho
• If t computed value ≥ +2.101, reject Ho
• If -2.101 < t computed value < +2.101, accept Ho
t- test for Two Independent Sample/ Groups
Male (X₁)   Female (X₂)   X₁²   X₂²
14 12 196 144
18 9 324 81
17 11 289 121
16 5 256 25
4 10 16 100
14 3 196 9
12 7 144 49
10 2 100 4
9 6 81 36
17 13 289 169
ƩX₁ = 131   ƩX₂ = 78   ƩX₁² = 1891   ƩX₂² = 738
t- test for Two Independent Sample/ Groups
• 6. Computation:
• t = (x̄₁ − x̄₂) / √[ ((SS₁ + SS₂) / (n₁ + n₂ − 2)) (1/n₁ + 1/n₂) ]
• x̄₁ = ƩX₁/n₁ = 131/10 = 13.1
• x̄₂ = ƩX₂/n₂ = 78/10 = 7.8
t- test for Two Independent Sample/ Groups
• 6. Computation:
• SS₁ = ƩX₁² − (ƩX₁)²/n₁ = 1891 − (131)²/10 = 1891 − 17,161/10 = 1891 − 1716.1 = 174.9
• SS₁ = 174.9
• SS₂ = ƩX₂² − (ƩX₂)²/n₂ = 738 − (78)²/10 = 738 − 6084/10 = 738 − 608.4 = 129.6
• SS₂ = 129.6
t- test for Two Independent Sample/ Groups
• 6. Computation:
• t = (13.1 − 7.8) / √[ ((174.9 + 129.6) / (10 + 10 − 2)) (1/10 + 1/10) ]
• t = 5.3 / √[ (304.5/18)(0.2) ] = 5.3 / √[ (16.91666667)(0.2) ] = 5.3 / √3.383333334 = 5.3 / 1.839383955 = 2.88
• t = 2.88
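• The same result can be checked from the raw scores with SciPy's pooled-variance independent-samples t-test. A sketch, assuming SciPy is installed:

```python
from scipy import stats

male = [14, 18, 17, 16, 4, 14, 12, 10, 9, 17]
female = [12, 9, 11, 5, 10, 3, 7, 2, 6, 13]

# Pooled-variance (equal_var=True) matches the SS1 + SS2 formula used above
t_stat, p_value = stats.ttest_ind(male, female, equal_var=True)
print(round(t_stat, 2))  # 2.88 (two-tailed p is about 0.01, so Ho is rejected at 0.05)
```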
t- test for Two Independent Sample/ Groups
• 7. Interpretation:
• Since the t computed value of 2.88 is greater than t
tabular value of 2.101 at 0.05 level of significance
with 18 df, the null hypothesis is rejected in favor of
the research hypothesis. This means that there is a
significant difference between the performance of
male and female AB students in spelling.
t- test for Two Independent Sample/ Groups
• Example 2.
• Two groups of experimental rats were injected with
tranquilizers at 1.0 mg and 1.5 mg dose respectively.
The time given in seconds that took them to fall asleep
is hereby given.
1.0 mg dose (X₁)   1.5 mg dose (X₂)   X₁²   X₂²
9.8   12.0   96.04   144.00
13.2   7.4   174.24   54.76
11.2   9.8   125.44   96.04
9.5   11.5   90.25   132.25
13.0   13.0   169.00   169.00
12.1   12.5   146.41   156.25
9.8   9.8   96.04   96.04
12.3   10.5   151.29   110.25
7.9   13.5   62.41   182.25
10.2   -   104.04   -
9.7   -   94.09   -
ƩX₁ = 118.7   ƩX₂ = 100   ƩX₁² = 1309.25   ƩX₂² = 1140.84
t- test for Two Independent Sample/ Groups
• 1. Problem:
• Is there a significant difference brought about by the dosages
on the length of time it took the rats to fall asleep?
• 2. Hypotheses:
• Ho: There is no significant difference brought about by the
dosages on the length of time it took the rats to fall asleep.
• Ha: There is a significant difference brought about by the
dosages on the length of time it took the rats to fall asleep.
t- test for Two Independent Sample/ Groups
• 3. Level of Significance:
• α= 0.01
• 𝑑𝑓 = 𝑛1 + 𝑛2 − 2
• 𝑑𝑓 = 11 + 9 − 2
• 𝑑𝑓 = 20 − 2
• 𝑑𝑓 = 18
• t tabular value = ±2.878
t- test for Two Independent Sample/ Groups
• 4. Test Statistic:
• t-test for two independent samples
• 5. Decision Rule:
• If t computed value ≤ - 2.878, reject Ho
• If t computed value ≥ +2.878, reject Ho
• If -2.878< t computed value < +2.878, accept Ho
t- test for Two Independent Sample/ Groups
• 6. Computation:
• t = (x̄₁ − x̄₂) / √[ ((SS₁ + SS₂) / (n₁ + n₂ − 2)) (1/n₁ + 1/n₂) ]
• x̄₁ = ƩX₁/n₁ = 118.7/11 = 10.79
• x̄₂ = ƩX₂/n₂ = 100/9 = 11.11
t- test for Two Independent Sample/ Groups
• 6. Computation:
• SS₁ = ƩX₁² − (ƩX₁)²/n₁ = 1309.25 − (118.7)²/11 = 1309.25 − 14,089.69/11 = 1309.25 − 1280.880909 = 28.37
• SS₁ = 28.37
• SS₂ = ƩX₂² − (ƩX₂)²/n₂ = 1140.84 − (100)²/9 = 1140.84 − 10,000/9 = 1140.84 − 1111.111111 = 29.73
• SS₂ = 29.73
t- test for Two Independent Sample/ Groups
• 6. Computation:
• t = (10.79 − 11.11) / √[ ((28.37 + 29.73) / (11 + 9 − 2)) (1/11 + 1/9) ]
• t = −0.32 / √[ (58.1/18)(0.09090909 + 0.11111111) ] = −0.32 / √[ (3.227777778)(0.202020202) ] = −0.32 / √0.652076315 = −0.32 / 0.807512424 = −0.40
• t = −0.40
t- test for Two Independent Sample/ Groups
• 7. Interpretation:
• Since the t computed value of −0.40 falls between −2.878 and +2.878, Ho is accepted. This means that there is no significant difference brought about by the dosages on the length of time it took the rats to fall asleep.
t-test for Correlated Samples
• It is used when comparing the means before and after the
treatment. It is also used to compare the means of pre-test
and post-test.
t-test for Correlated Samples
• Formula:
• t = D̄ / √[ (ƩD² − (ƩD)²/n) / (n(n − 1)) ]
t-test for Correlated Samples
• Where:
• D̄ = mean difference between the pre-test and post-test
• ƩD² = sum of the squared differences between the pre-test and post-test
• ƩD = sum of the differences
• n= sample size
t-test for Correlated Samples
• Example 1.
• An experimental study was conducted on the effect of
programmed materials in English on the performance of 20
selected college students. Before the program was
implemented, the pre-test was administered and after 5
months the same instrument was used to get the posttest
result. The following is the result of the experiment. Test the
null hypothesis at 0.05 level of significance.
Pre-test (X₁)   Post-test (X₂)   D   D²
20 25 -5 25
30 35 -5 25
10 25 -15 225
15 25 -10 100
20 20 0 0
10 20 -10 100
18 22 -4 16
14 20 -6 36
15 20 -5 25
20 15 5 25
18 30 -12 144
15 10 5 25
15 16 -1 1
20 25 -5 25
18 10 8 64
40 45 -5 25
10 15 -5 25
10 10 0 0
12 18 -6 36
20 25 -5 25
ƩD = −81   ƩD² = 947
t-test for Correlated Samples
• 1. Problem:
• Is there a significant difference between the scores of 20
selected college students before and after the
implementation of the program?
• 2. Hypotheses:
• Ho: There is no significant difference between the
performance of 20 selected college students before and after
the implementation of the program.
• Ha: The post-test is higher than the pre-test result
t-test for Correlated Samples
• 3. Level of Significance:
• α= 0.05
• 𝐷𝑓 = 𝑛 − 1
• 𝐷𝑓 = 20 − 1
• 𝐷𝑓 = 19
• t tabular value = −1.729 (one-tailed, left tail, since Ha states Mean 1 < Mean 2)
t-test for Correlated Samples
• 4. Test Statistic:
• t-test for Correlated samples

• 5. Decision Rule:
• If t computed value ≤ - 1.729, reject Ho
• If t computed value > -1.729, Accept Ho
t-test for Correlated Samples
• 6. Computation:
• t = D̄ / √[ (ƩD² − (ƩD)²/n) / (n(n − 1)) ]
• D̄ = ƩD/n = −81/20 = −4.05
• t = −4.05 / √[ (947 − (−81)²/20) / (20(20 − 1)) ] = −4.05 / √[ (947 − 6561/20) / 380 ] = −4.05 / √[ (947 − 328.05) / 380 ] = −4.05 / √(618.95/380) = −4.05 / √1.62881579 = −4.05 / 1.27625068 = −3.17
• t = −3.17
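• The paired (correlated) t-test can be checked against the raw pre-test and post-test scores with SciPy. A minimal sketch; the one-sided alternative argument assumes SciPy 1.6 or later:

```python
from scipy import stats

pre  = [20, 30, 10, 15, 20, 10, 18, 14, 15, 20, 18, 15, 15, 20, 18, 40, 10, 10, 12, 20]
post = [25, 35, 25, 25, 20, 20, 22, 20, 20, 15, 30, 10, 16, 25, 10, 45, 15, 10, 18, 25]

# 'less' tests Ha: mean(pre - post) < 0, i.e. the post-test is higher than the pre-test
t_stat, p_value = stats.ttest_rel(pre, post, alternative="less")
print(round(t_stat, 2))  # -3.17 (one-tailed p < 0.05, so Ho is rejected)
```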
t-test for Correlated Samples
• 7. Interpretation:
• Since the t computed value of −3.17 is less than −1.729, Ho is rejected. It implies that the post-test result is higher than the pre-test result. This means that the programmed materials in English are effective.
t-test for Correlated Samples
• Example 2.
• A certain mental health program was given to 15 insurance claims agents to improve their income generation. The data were recorded before and after the implementation of the program. Use the 0.10 level of significance.
Before the Implementation (X₁)   After the Implementation (X₂)   D   D²
5,000 6,000 -1,000 1,000,000
7,500 7,000 500 250,000
8,000 10,000 -2,000 4,000,000
7,000 8,000 -1,000 1,000,000
7,000 7,000 0 0
8,000 9,000 -1,000 1,000,000
8,500 9,000 -500 250,000
10,000 10,000 0 0
6,000 8,000 -2,000 4,000,000
7,000 8,000 -1,000 1,000,000
5,000 10,000 -5,000 25,000,000
6,000 7,000 -1,000 1,000,000
5,500 6,000 -500 250,000
8,000 9,000 -1,000 1,000,000
10,000 11,000 -1,000 1,000,000
ƩD = −16,500   ƩD² = 40,750,000
t-test for Correlated Samples
• 1. Problem:
• Is there a significant difference between the income of insurance
agents before and after the implementation of the mental health
program?
• 2. Hypotheses:
• Ho: There is no significant difference between the income of insurance
agents before and after the implementation of the mental health
program.
• Ha: There is a significant difference between the income of insurance
agents before and after the implementation of the mental health
program.
t-test for Correlated Samples
• 3. Level of Significance:
• α= 0.10
• 𝐷𝑓 = 𝑛 − 1
• 𝐷𝑓 = 15 − 1
• 𝐷𝑓 = 14
• t tabular value= ±1.761
t-test for Correlated Samples
• 4. Test Statistic:
• t-test for Correlated samples
• 5. Decision Rule:
• If t computed value ≤ - 1.761, reject Ho
• If t computed value ≥ +1.761, reject Ho
• If -1.761< t computed value < +1.761, accept Ho
t-test for Correlated Samples
• 6. Computation:
• t = D̄ / √[ (ƩD² − (ƩD)²/n) / (n(n − 1)) ]
• D̄ = ƩD/n = −16,500/15 = −1,100
• t = −1,100 / √[ (40,750,000 − (−16,500)²/15) / (15(15 − 1)) ] = −1,100 / √[ (40,750,000 − 272,250,000/15) / 210 ] = −1,100 / √[ (40,750,000 − 18,150,000) / 210 ] = −1,100 / √(22,600,000/210) = −1,100 / √107,619.0476 = −1,100 / 328.0534219 = −3.35
• t = −3.35
t-test for Correlated Samples
• 7. Interpretation:
• Since the t computed value of −3.35 is less than −1.761, Ho is rejected. This means that there is a significant difference between the income of insurance agents before and after the implementation of the mental health program.
F- test or ANOVA (Analysis of Variance)
F- test or ANOVA (Analysis of Variance)
• Used to evaluate mean differences between two or more treatments.
• This is an inferential procedure that uses sample data as the basis for drawing general conclusions about populations. It seems similar to the t-test, but the t-test is limited to situations in which there are only two treatments to compare.
• It provides the researchers with much greater flexibility in
designing experiments and interpreting results.
F- test or ANOVA (Analysis of Variance)
• The goal of ANOVA is to determine whether the mean
differences observed among the samples provide enough
evidence to conclude that there are differences in the mean
among the populations.
F-test (One-way ANOVA)
• One-way ANOVA or Single Factor ANOVA is used when the
researcher seeks to make comparisons among three, four, five
or more groups.
• Data can be analyzed this way if they are normally distributed and expressed as interval or ratio data.
F-test (One-way ANOVA)
• Formula:
• F = MSB / MSW

• Where:
• MSB= Mean Squares Between
• MSW= Mean Squares Within
F-test (One-way ANOVA)
• Example 1
• The store is selling 4 brands of Shampoo. The owner is
interested if there is a significant difference in the average
sales for one week. Perform the Analysis of Variance and test
the hypothesis at 0.05 level of significance that the average
sales of the 4 brands of shampoos are equal.
• The following data are recorded:
X₁   X₂   X₃   X₄   X₁²   X₂²   X₃²   X₄²
7 9 2 4 49 81 4 16
3 8 3 5 9 64 9 25
5 8 4 7 25 64 16 49
6 7 5 8 36 49 25 64
9 6 6 3 81 36 36 9
4 9 4 4 16 81 16 16
3 10 2 5 9 100 4 25
ƩX₁ = 37   ƩX₂ = 57   ƩX₃ = 26   ƩX₄ = 36   ƩX₁² = 225   ƩX₂² = 475   ƩX₃² = 110   ƩX₄² = 204
x̄₁ = 5.29   x̄₂ = 8.14   x̄₃ = 3.71   x̄₄ = 5.14
F-test (One-way ANOVA)
• 1. Problem:
• Is there a significant difference in the average sales of the 4
shampoo brands?
• 2. Hypotheses:
• Ho: There is no significant difference in the average sales of
the 4 shampoo brands.
• Ha: There is a significant difference in the average sales of the
4 shampoo brands
F-test (One-way ANOVA)
• 3. Level of Significance:
• α= 0.05
• df_b = k − 1 = 4 − 1 = 3
• df_w = (n − 1) − (k − 1) = (28 − 1) − 3 = 27 − 3 = 24
• F tabular value = 3.01
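• The F tabular value can likewise be looked up from the F distribution with 3 and 24 degrees of freedom. A small sketch, assuming SciPy:

```python
from scipy.stats import f

# Upper 5% point of the F distribution with df_b = 3 and df_w = 24
f_crit = f.ppf(0.95, dfn=3, dfd=24)
print(round(f_crit, 2))  # ~3.01
```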
F-test (One-way ANOVA)
• 4. Test Statistic:
• F-test (One-Way ANOVA)
• 5. Decision Rule:
• If F computed value ≥ +3.01, reject Ho
• If F computed value < +3.01, accept Ho
F-test (One-way ANOVA)
• 6. Computation:
• F = MSB / MSW
• Compute for CF (Correction Factor)
• CF = (ƩX₁ + ƩX₂ + ƩX₃ + ƩX₄)² / (n₁ + n₂ + n₃ + n₄) = (37 + 57 + 26 + 36)² / (7 + 7 + 7 + 7) = (156)²/28 = 24,336/28 = 869.14
• CF = 869.14
F-test (One-way ANOVA)
• 6. Computation:
• Compute for TSS (Total Sum of Squares)
• TSS = ƩX₁² + ƩX₂² + ƩX₃² + ƩX₄² − CF = 225 + 475 + 110 + 204 − 869.14 = 1014 − 869.14 = 144.86
• TSS = 144.86
F-test (One-way ANOVA)
• 6. Computation:
• Compute for BSS (Between Sum of Squares)
• BSS = (ƩX₁)²/n₁ + (ƩX₂)²/n₂ + (ƩX₃)²/n₃ + (ƩX₄)²/n₄ − CF = (37)²/7 + (57)²/7 + (26)²/7 + (36)²/7 − 869.14
• BSS = 1369/7 + 3249/7 + 676/7 + 1296/7 − 869.14
• BSS = 195.5714286 + 464.1428571 + 96.57142857 + 185.1428571 − 869.14
• BSS = 941.4285714 − 869.14 = 72.29
• BSS = 72.29
F-test (One-way ANOVA)
• 6. Computation:
• Compute for WSS (Within Sum of Squares)
• WSS = TSS − BSS = 144.86 − 72.29 = 72.57
• WSS = 72.57
F-test (One-way ANOVA)
• Compute for Mean Squares
• MSB (Mean Squares Between)
• MSB = BSS/df_b = 72.29/3 = 24.10
• MSB = 24.10
• MSW (Mean Squares Within)
• MSW = WSS/df_w = 72.57/24 = 3.02
• MSW = 3.02
F-test (One-way ANOVA)
• F Computed Value
• F = MSB/MSW = 24.10/3.02 = 7.98
• F = 7.98
Analysis of Variance Table
Source of Variation   Degrees of Freedom   Sum of Squares   Mean Squares   F-Computed Value   F-Tabular Value
Between Groups   (k − 1) = 3   72.29   24.10   7.98   3.01
Within Groups   (n − 1) − (k − 1) = 24   72.57   3.02
Total   (n − 1) = 27   144.86
F-test (One-way ANOVA)
• 7. Interpretation:
• Since the F Computed value of 7.98 is greater than the F-
tabular value of 3.01 at 0.05 level of significance with 3 and 24
degrees of freedom, the null hypothesis is rejected in favor of
the research hypothesis which means that there is a
significant difference in the average sales of the 4 brands of
shampoo.
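• Before moving to the post hoc test, the ANOVA above can be verified from the raw weekly sales with SciPy's single-factor ANOVA. A sketch, assuming SciPy is available:

```python
from scipy import stats

brand_a = [7, 3, 5, 6, 9, 4, 3]
brand_b = [9, 8, 8, 7, 6, 9, 10]
brand_c = [2, 3, 4, 5, 6, 4, 2]
brand_d = [4, 5, 7, 8, 3, 4, 5]

# One-way (single-factor) ANOVA across the four shampoo brands
f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c, brand_d)
print(round(f_stat, 2))  # ~7.97 (7.98 above, with intermediate rounding); p < 0.05, so Ho is rejected
```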
SCHEFFÉ'S TEST
• Used to determine where the difference lies after a significant F-test
• Formula:
• F′ = (x̄₁ − x̄₂)² / [ Sw²(n₁ + n₂) / (n₁n₂) ]
SCHEFFÉ'S TEST
• Where:
• F′ = Scheffé's test statistic
• x̄₁ = mean of group 1
• x̄₂ = mean of group 2
• n₁ = number of samples in group 1
• n₂ = number of samples in group 2
• Sw² = within mean squares (MSW)
SCHEFFÉ'S TEST
• A vs. B
• F′ = (5.29 − 8.14)² / [ 3.02(7 + 7) / (7)(7) ] = (−2.85)² / (42.28/49) = 8.1225 / 0.862857142 = 9.41
• F′ = 9.41
SCHEFFÉ'S TEST
• A vs. C
• F′ = (5.29 − 3.71)² / [ 3.02(7 + 7) / (7)(7) ] = (1.58)² / 0.862857142 = 2.4964 / 0.862857142 = 2.89
• F′ = 2.89
SCHEFFÉ'S TEST
• A vs. D
• F′ = (5.29 − 5.14)² / [ 3.02(7 + 7) / (7)(7) ] = (0.15)² / 0.862857142 = 0.0225 / 0.862857142 = 0.03
• F′ = 0.03
SCHEFFÉ'S TEST
• B vs. C
• F′ = (8.14 − 3.71)² / [ 3.02(7 + 7) / (7)(7) ] = (4.43)² / 0.862857142 = 19.6249 / 0.862857142 = 22.74
• F′ = 22.74
SCHEFFÉ'S TEST
• B vs. D
• F′ = (8.14 − 5.14)² / [ 3.02(7 + 7) / (7)(7) ] = (3)² / 0.862857142 = 9 / 0.862857142 = 10.43
• F′ = 10.43
SCHEFFÉ'S TEST
• C vs. D
• F′ = (3.71 − 5.14)² / [ 3.02(7 + 7) / (7)(7) ] = (−1.43)² / 0.862857142 = 2.0449 / 0.862857142 = 2.37
• F′ = 2.37
Comparison of the Average Sales of the Four Brands of Shampoo

Between Brands   F′   (F.05)(k − 1) = (3.01)(3) = 9.03   Interpretation
A vs. B   9.41   9.03   Significant
A vs. C   2.89   9.03   Not Significant
A vs. D   0.03   9.03   Not Significant
B vs. C   22.74   9.03   Significant
B vs. D   10.43   9.03   Significant
C vs. D   2.37   9.03   Not Significant
SCHEFFÉ'S TEST
• Conclusion:
• The table above shows that there are significant differences between brands A and B, brands B and C, and brands B and D. However, brands A and C, brands A and D, and brands C and D show no significant differences in their average sales. This implies that brand B is more saleable than brands A, C, and D.
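• A hedged sketch of the pairwise Scheffé computation (the helper scheffe_f is our own, not a library function), reproducing the F′ values in the comparison table:

```python
from itertools import combinations

means = {"A": 5.29, "B": 8.14, "C": 3.71, "D": 5.14}
n = 7        # observations per brand
msw = 3.02   # within mean squares from the one-way ANOVA

def scheffe_f(m1, m2, n1, n2, msw):
    """Scheffe F' statistic for one pairwise comparison."""
    return (m1 - m2) ** 2 / (msw * (n1 + n2) / (n1 * n2))

critical = 3.01 * (4 - 1)  # (F at 0.05) x (k - 1) = 9.03
for a, b in combinations(means, 2):
    f_prime = scheffe_f(means[a], means[b], n, n, msw)
    verdict = "Significant" if f_prime > critical else "Not Significant"
    print(f"{a} vs. {b}: F' = {f_prime:.2f} -> {verdict}")
```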
F-test (Two-Way ANOVA with Interaction Effect)
F-test (Two-Way ANOVA with Interaction Effect)
• Example 1.
• Forty-five language students were randomly assigned to one
of the three instructors and to one of the three methods of
teaching. Achievement was measured on a test administered
at the end of the term. Use two-way ANOVA with interaction
effect at 0.05 level of significance to test the following
hypotheses:
TEACHER FACTOR (Column)

Method of Teaching Factor 1 (row):
A   B   C
40   50   40
41   50   41
40   48   40
39   48   38
38   45   38
Total: ƩA1 = 198   ƩB1 = 241   ƩC1 = 197   ƩR1 = 636

Method of Teaching Factor 2 (row):
A   B   C
40   45   50
41   42   46
39   42   43
38   41   43
38   40   42
Total: ƩA2 = 196   ƩB2 = 210   ƩC2 = 224   ƩR2 = 630

Method of Teaching Factor 3 (row):
A   B   C
40   40   40
43   45   41
41   44   41
39   44   39
38   43   38
Total: ƩA3 = 201   ƩB3 = 216   ƩC3 = 199   ƩR3 = 616

Column totals: ƩCL1 = 595   ƩCL2 = 667   ƩCL3 = 620   GT = 1882
F-test (Two-Way ANOVA with Interaction Effect)
• 1. Problem:
• A. Is there a significant difference in the performance of the
three groups of students under three different instructors?
• B. Is there a significant difference in the performance of the
three groups of students under three different methods of
teaching?
• C. Is there an interaction effect between teachers and method
of teaching factors?
F-test (Two-Way ANOVA with Interaction Effect)
• 2. Hypotheses:
• Ho: There is no significant difference in the performance of the
three groups of students under three different instructors.
• Ha: There is a significant difference in the performance of the three
groups of students under three different instructors.
• Ho: There is no significant difference in the performance of the
three groups of students under three different methods of
teaching.
• Ha: There is a significant difference in the performance of the three
groups of students under three different methods of teaching.
F-test (Two-Way ANOVA with Interaction Effect)
• 2. Hypotheses:

• Ho: Interaction effects are not present


• Ha: Interaction effects are present
F-test (Two-Way ANOVA with Interaction Effect)
• 3. Level of Significance:
• α= 0.05
• df_t = N − 1 = 45 − 1 = 44
• df_w = k(n − 1) = 9(5 − 1) = 9(4) = 36
• df_c = c − 1 = 3 − 1 = 2
• df_r = r − 1 = 3 − 1 = 2
• df_c×r = (c − 1)(r − 1) = (3 − 1)(3 − 1) = (2)(2) = 4
F-test (Two-Way ANOVA with Interaction Effect)
• 3. Level of Significance:
• F-Value Tabular
• Columns (df = 2, 36): 3.26
• Rows (df = 2, 36): 3.26
• Interaction (df = 4, 36): 2.63
F-test (Two-Way ANOVA with Interaction Effect)

• 4. Test Statistic:

• F-test Two-Way-ANOVA with Interaction effect


F-test (Two-Way ANOVA with Interaction Effect)

• 5. Decision Rule:
• Columns: if F computed value ≥ 3.26, reject Ho; if F computed value < 3.26, accept Ho
• Rows: if F computed value ≥ 3.26, reject Ho; if F computed value < 3.26, accept Ho
• Interaction: if F computed value ≥ 2.63, reject Ho; if F computed value < 2.63, accept Ho
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:

• Compute for CF (Correction Factor)
• CF = (GT)²/N = (1882)²/45 = 3541924/45 = 78709.42
• CF = 78709.42
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:

• Compute for SST (Total Sum of Squares)
• SST = ƩX² − CF
• SST = 40² + 41² + 40² + 39² + 38² + 40² + 41² + 39² + 38² + 38² + 40² + 43² + 41² + 39² + 38² + 50² + 50² + 48² + 48² + 45² + 45² + 42² + 42² + 41² + 40² + 40² + 45² + 44² + 44² + 43² + 40² + 41² + 40² + 38² + 38² + 50² + 46² + 43² + 43² + 42² + 40² + 41² + 41² + 39² + 38² − 78709.42
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• SST = 1600 + 1681 + 1600 + 1521 + 1444 + 1600 + 1681 + 1521 + 1444 + 1444 + 1600 + 1849 + 1681 + 1521 + 1444 + 2500 + 2500 + 2304 + 2304 + 2025 + 2025 + 1764 + 1764 + 1681 + 1600 + 1600 + 2025 + 1936 + 1936 + 1849 + 1600 + 1681 + 1600 + 1444 + 1444 + 2500 + 2116 + 1849 + 1849 + 1764 + 1600 + 1681 + 1681 + 1521 + 1444 − 78709.42
• SST = 79218 − 78709.42
• SST = 508.58
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for SSW (Sum of Squares Within)
• SSW = ƩX² − [ (ƩA1)²/n + (ƩA2)²/n + (ƩA3)²/n + (ƩB1)²/n + (ƩB2)²/n + (ƩB3)²/n + (ƩC1)²/n + (ƩC2)²/n + (ƩC3)²/n ]
• SSW = 79218 − [ (198)²/5 + (196)²/5 + (201)²/5 + (241)²/5 + (210)²/5 + (216)²/5 + (197)²/5 + (224)²/5 + (199)²/5 ]
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• SSW = 79218 − [ 39204/5 + 38416/5 + 40401/5 + 58081/5 + 44100/5 + 46656/5 + 38809/5 + 50176/5 + 39601/5 ]
• SSW = 79218 − [ 7840.8 + 7683.2 + 8080.2 + 11616.2 + 8820 + 9331.2 + 7761.8 + 10035.2 + 7920.2 ]
• SSW = 79218 − 79088.8
• SSW = 129.2
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:

• Compute for SSC (Sum of Squares Between Columns)
• SSC = [ (ƩCL1)² + (ƩCL2)² + (ƩCL3)² ] / (N/c) − CF
• SSC = [ (595)² + (667)² + (620)² ] / (45/3) − 78709.42
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• SSC = (354025 + 444889 + 384400)/15 − 78709.42
• SSC = 1183314/15 − 78709.42
• SSC = 78887.6 − 78709.42
• SSC = 178.18
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for SSR (Sum of Squares of Rows)
• SSR = [ (ƩR1)² + (ƩR2)² + (ƩR3)² ] / (N/r) − CF
• SSR = [ (636)² + (630)² + (616)² ] / (45/3) − 78709.42
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• SSR = (404496 + 396900 + 379456)/15 − 78709.42
• SSR = 1180852/15 − 78709.42
• SSR = 78723.46667 − 78709.42
• SSR = 14.05
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for SSCR (Sum of Squares of Interaction)
• 𝑆𝑆𝐶𝑅 = 𝑆𝑆𝑇 − 𝑆𝑆𝑊 − 𝑆𝑆𝐶 − 𝑆𝑆𝑅
• 𝑆𝑆𝐶𝑅 = 508.58 − 129.2 − 178.18 − 14.05
• 𝑆𝑆𝐶𝑅 = 508.58 − 321.43
• 𝑺𝑺𝑪𝑹 = 𝟏𝟖𝟕. 𝟏𝟓
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for MSB (Mean of Squares Between Columns)
• MSB = SSC/df_c = 178.18/2 = 89.09
• MSB = 89.09
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for MSR (Mean of Squares of Rows)
• MSR = SSR/df_r = 14.05/2 = 7.03
• MSR = 7.03
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for MSI (Mean of Squares of Interaction)
• MSI = SSCR/df_c×r = 187.15/4 = 46.79
• MSI = 46.79
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Compute for MSW (Mean of Squares Within)
• MSW = SSW/df_w = 129.20/36 = 3.59
• MSW = 3.59
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• F-Value Computed:
• Columns
• F_C computed = MSB/MSW = 89.09/3.59 = 24.82
• F_C computed = 24.82
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Rows
• F_R computed = MSR/MSW = 7.03/3.59 = 1.96
• F_R computed = 1.96
F-test (Two-Way ANOVA with Interaction Effect)
• 6. Computation:
• Interaction
• F_I computed = MSI/MSW = 46.79/3.59 = 13.03
• F_I computed = 13.03
TWO-WAY ANOVA TABLE

Source of Variation   SS   df   MS   F-Value Computed   F-Value Tabular   Interpretation
Between Columns   178.18   2   89.09   24.82   3.26   S
Rows   14.05   2   7.03   1.96   3.26   NS
Interaction   187.15   4   46.79   13.03   2.63   S
Within   129.20   36   3.59
Total   508.58   44
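• The whole two-way ANOVA with interaction can also be reproduced from the raw scores using statsmodels. A sketch, assuming pandas and statsmodels are installed (the column names score, teacher, and method are ours):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

cells = {
    ("1", "A"): [40, 41, 40, 39, 38], ("1", "B"): [50, 50, 48, 48, 45], ("1", "C"): [40, 41, 40, 38, 38],
    ("2", "A"): [40, 41, 39, 38, 38], ("2", "B"): [45, 42, 42, 41, 40], ("2", "C"): [50, 46, 43, 43, 42],
    ("3", "A"): [40, 43, 41, 39, 38], ("3", "B"): [40, 45, 44, 44, 43], ("3", "C"): [40, 41, 41, 39, 38],
}
rows = [{"method": m, "teacher": t, "score": s}
        for (m, t), scores in cells.items() for s in scores]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: columns = teacher factor, rows = method of teaching factor
model = ols("score ~ C(teacher) * C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # SS, df and F values should match the table above
```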
F-test (Two-Way ANOVA with Interaction Effect)
• 7. Interpretation:

• With the computed F-value (column) of 24.82 compared to


the F-tabular value of 3.26, the null hypothesis is rejected
which means that there is a significant difference in the
performance of the group of students under three different
instructors.
F-test (Two-Way ANOVA with Interaction Effect)
• 7. Interpretation:
• With regard to the F-value (rows) of 1.96, it is less than the F-tabular value of 3.26. Hence, the null hypothesis of no significant difference in the performance of the students under three different methods of teaching is accepted.
F-test (Two-Way ANOVA with Interaction Effect)
• 7. Interpretation:
• However, the F-value (interaction) of 13.03 is greater than the
F-tabular value of 2.63. Thus, the research hypothesis is
accepted which means that the interaction effect is present.
The Pearson Product Moment Coefficient of
Correlation, r
The Pearson Product Moment Coefficient of
Correlation, r
• An index of relationship between two variables
• Independent variable can be represented by X
• Dependent variable can be represented by Y
• The value of r ranges from −1 through zero to +1.
• If the value of r is +1 or −1, there is a perfect correlation between x and y.
• A nonzero r suggests that x is related to y; however, if r equals zero, then x and y are independent of each other.
The Pearson Product Moment Coefficient of
Correlation, r
• 1. Positive Correlation
• If the trend of the line graph is going upward, the value of r is
positive. This indicates that as the value of x increases the
value of y also increases and vice versa.
The Pearson Product Moment Coefficient of
Correlation, r
• 2. Negative Correlation
• If the trend of the graph is going downward, the value of r is
negative. It indicates that as the value of x increases the
corresponding value of y decreases and vice versa.
The Pearson Product Moment Coefficient of
Correlation, r
• 3. No Correlation
• If the trend of the line graph cannot be established either
upward or downward, then r=0, indicating that there is no
correlation between the x and y variables.
The Pearson Product Moment Coefficient of
Correlation, r
• The formula for Pearson r:

• r = (nƩxy − ƩxƩy) / √[ (nƩx² − (Ʃx)²)(nƩy² − (Ʃy)²) ]
The Pearson Product Moment Coefficient of
Correlation, r
• Where:
• r= Pearson r
• n= sample size
• Ʃxy = the sum of the products of x and y
• ƩxƩy = the product of the sum of x and the sum of y
• Ʃx² = the sum of the squares of x
• Ʃy² = the sum of the squares of y
The Pearson Product Moment Coefficient of
Correlation, r
• Example no. 1
• Below are the midterm grades and the final
examinations of 10 students in Psychology 101. Use
0.05 level of significance.
Pearson Product Moment Coefficient of Correlation
x   y   x²   y²   xy
75 80 5625 6400 6000
70 75 4900 5625 5250
65 65 4225 4225 4225
90 95 8100 9025 8550
85 90 7225 8100 7650
85 85 7225 7225 7225
80 90 6400 8100 7200
70 75 4900 5625 5250
65 70 4225 4900 4550
90 90 8100 8100 8100
Ʃx = 775   Ʃy = 815   Ʃx² = 60925   Ʃy² = 67325   Ʃxy = 64,000
x̄ = 77.5   ȳ = 81.5
r- value Interpretation

± 0.01 to ±0.20 Slight Correlation


± 0.21 to ± 0.40 Low Correlation

± 0.41 to ± 0.60 Moderate Correlation

± 0.61 to ± 0.80 High Correlation


± 0.81 to ± 0.99 Very High Correlation

+1.00 Perfect Positive Correlation

- 1.00 Perfect Negative Correlation


The Pearson Product Moment Coefficient of
Correlation, r
• 1. Problem:
• Is there a significant relationship between the midterm grades
and the final examinations of 10 students in Psychology 101?
• 2. Hypotheses:
• Ho: There is no significant relationship between the midterm
grades and the final examinations of 10 students in Psychology
101.
• Ha: There is a significant relationship between the midterm
grades and the final examinations of 10 students in Psychology
101.
The Pearson Product Moment Coefficient of
Correlation, r
• 3. Level of Significance:
• α= 0.05
• df= n-2
• df= 10-2
• df= 8
• r 0.05= 0.632
The Pearson Product Moment Coefficient of
Correlation, r
• 4. Test Statistic:
• Pearson Product Moment Coefficient of Correlation
• 5. Decision Rule:
• If r-computed value ≥ 0.632, reject Ho
• Otherwise, accept Ho
The Pearson Product Moment Coefficient of
Correlation, r
• 6. Computation:
• r = (nƩxy − ƩxƩy) / √[ (nƩx² − (Ʃx)²)(nƩy² − (Ʃy)²) ]
• r = [ 10(64,000) − (775)(815) ] / √{ [ 10(60925) − (775)² ][ 10(67325) − (815)² ] }
• r = (640000 − 631625) / √[ (609250 − 600625)(673250 − 664225) ] = 8375 / √[ (8625)(9025) ] = 8375 / √77840625 = 8375 / 8822.73342 = 0.95
• r = 0.95
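• The correlation and its significance test can be verified with SciPy. A minimal sketch, assuming SciPy is installed:

```python
from scipy import stats

midterm = [75, 70, 65, 90, 85, 85, 80, 70, 65, 90]
final   = [80, 75, 65, 95, 90, 85, 90, 75, 70, 90]

# Pearson product-moment correlation and its two-tailed p-value
r, p_value = stats.pearsonr(midterm, final)
print(round(r, 2))  # 0.95 (p < 0.05, so Ho is rejected)
```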
The Pearson Product Moment Coefficient of
Correlation, r
• 7. Interpretation:
• Since the r-computed value of 0.95 is greater than the tabular value of 0.632, the null hypothesis is rejected. This means that there is a significant relationship between the midterm grades and the final examination grades of the students. It implies that the higher the midterm grade, the higher the final examination grade, and vice versa.