
ANOVA

INSTRUCTOR: FATIMA ZAFAR


Outline
Definition
One-way ANOVA, two-way ANOVA
Assumptions of ANOVA
Example
Interpretation
Post-hoc tests
Reporting
ANOVA
ANOVA, which stands for Analysis of Variance, is a statistical test used to analyze the difference
between the means of more than two groups.
It helps us draw meaningful conclusions from our data. How? By allowing us to determine
whether there are any significant differences between these groups.
Null and Alternate
hypothesis for ANOVA
The null hypothesis (H0) of ANOVA is that there is no difference among group means. The
alternative hypothesis (Ha) is that at least one group differs significantly from the overall mean of
the dependent variable.
For example
H0: All income groups have equal mean anxiety.
Ha: At least one income group has a mean anxiety level that differs from the others.
How does an ANOVA test
work?
ANOVA determines whether the groups created by the levels of the independent variable are
statistically different by calculating whether the means of the treatment levels are different from
the overall mean of the dependent variable.
If any of the group means is significantly different from the overall mean, then the null
hypothesis is rejected.
ANOVA uses the F test for statistical significance. This allows for comparison of multiple means
at once, because the error is calculated for the whole set of comparisons rather than for each
individual two-way comparison (which would happen with a t test).
The F test compares the variance between the group means with the variance within the groups. If the
variance within groups is smaller than the variance between groups, the F test yields a
higher F value, and therefore a higher likelihood that the difference observed is real and not due
to chance.
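As a minimal sketch of this logic in Python, the example below uses hypothetical sleep data for three social media use groups; scipy.stats.f_oneway returns the F statistic and its p-value.

```python
# A minimal sketch of a one-way ANOVA F test (hypothetical data).
from scipy import stats

# Hypothetical hours of sleep for low, medium, and high social media use
low    = [7.9, 8.1, 7.5, 8.4, 7.8]
medium = [7.2, 6.9, 7.4, 7.0, 7.3]
high   = [6.1, 6.5, 5.9, 6.3, 6.0]

f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A large F (between-group variance much bigger than within-group variance)
# with p < 0.05 leads to rejecting the null hypothesis of equal means.
```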
Assumptions of ANOVA

The assumptions of the ANOVA test are the same as the general assumptions for any parametric
test:
◦ The dependent variable must be measured at a continuous (interval or ratio) level. The independent variables in ANOVA must be categorical (nominal or ordinal) variables.
◦ The populations from which the samples are drawn should be normally distributed.
◦ Independence of cases: the sample cases should be independent of each other.
◦ Homogeneity of variance: the variance among the groups should be approximately equal.
It is important to note that ANOVA is not robust to violations of the assumption of
independence. That is, even if you violate the assumptions of homogeneity or normality, you can
conduct the test and broadly trust the findings; however, the results of the ANOVA are
invalid if the independence assumption is violated.
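A minimal sketch of checking the normality and homogeneity assumptions in Python, using hypothetical data with scipy's Shapiro-Wilk and Levene tests:

```python
# A minimal sketch of checking ANOVA assumptions (hypothetical data).
from scipy import stats

groups = {
    "low":    [7.9, 8.1, 7.5, 8.4, 7.8],
    "medium": [7.2, 6.9, 7.4, 7.0, 7.3],
    "high":   [6.1, 6.5, 5.9, 6.3, 6.0],
}

# Normality: Shapiro-Wilk test per group (p > 0.05 suggests normality is plausible)
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance: Levene's test across groups (p > 0.05 suggests equal variances)
stat, p = stats.levene(*groups.values())
print(f"Levene's test: W = {stat:.3f}, p = {p:.3f}")
```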
Example
Your independent variable is social media use, and you assign groups to low, medium, and high
levels of social media use to find out if there is a difference in hours of sleep per night.
For example, suppose you want to compare the degree of jealousy across different attachment styles,
namely secure, avoidant, and anxious-ambivalent.
The research hypothesis would be that the degree of jealousy differs among these three
populations.
For example, you might want to check whether there is a difference in life satisfaction among
groups of different ethnic backgrounds (e.g. Punjabi, Sindhi, Balochi, Pathan, Kashmiri, Balti).
Types of ANOVA
One-way ANOVA
Two-way ANOVA
N-way ANOVA
One-Way vs. Two-Way
ANOVA

A one-way ANOVA evaluates the impact of a sole factor on a sole response variable. The one-
way ANOVA is used to determine whether there are any statistically significant differences
between the means of three or more independent groups.
A two-way ANOVA is an extension of the one-way ANOVA. With a one-way ANOVA, you have one
independent variable affecting a dependent variable. With a two-way ANOVA, there are two
independent variables.
For example, a two-way ANOVA allows a company to compare worker productivity based on two
independent variables, such as salary and skill set. It is utilized to observe the interaction
between the two factors and test the effect of two factors simultaneously.
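A minimal sketch of the worker-productivity example as a two-way ANOVA in Python using statsmodels; the productivity values and salary/skill groupings are hypothetical and only illustrate the formula syntax:

```python
# A minimal sketch of a two-way ANOVA with an interaction term (hypothetical data).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "salary":       ["low", "low", "low", "low", "high", "high", "high", "high"] * 3,
    "skill":        ["basic", "basic", "advanced", "advanced"] * 6,
    "productivity": [55, 58, 63, 66, 61, 60, 74, 78, 54, 57, 65, 64,
                     62, 59, 75, 77, 56, 55, 62, 67, 60, 63, 73, 76],
})

# '*' includes both main effects (salary, skill) and their interaction
model = ols("productivity ~ C(salary) * C(skill)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p-values per effect
```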
One-Way ANOVA
Number of Factors: It analyzes one factor or independent variable.
Main Purpose: Its primary purpose is to compare the means of two or more groups to determine if
there are statistically significant differences among them.
Example: For instance, you might use it to compare the test scores of students in different classes.
Types of Variation and Sources: One-Way ANOVA considers two types of variation: within-group
(variation within each group) and between-group (variation between the different groups).
Hypotheses Tested: It tests the null hypothesis that all group means are equal.
Interpretation of Results: One-Way ANOVA helps you determine if there are significant differences
in at least one group mean compared to the others.
Applicability: It is useful when comparing multiple independent groups with one categorical
independent variable.
Two-Way ANOVA
Number of Factors: It simultaneously analyzes two factors or independent variables.
Main Purpose: Its main goal is to assess the impact of two independent factors on a dependent variable
and whether their interaction significantly influences the dependent variable.
Example: It assesses how drug type and dosage levels independently and interactively affect outcomes.
Types of Variation and Sources: Two-Way ANOVA partitions variation into the contribution of each
factor (main effects), the interaction between the factors, and the remaining within-group variation.
Hypotheses Tested: It tests multiple null hypotheses, including the main effects of each factor and the
interaction effect between the factors.
Interpretation of Results: Two-Way ANOVA provides insights into how each factor and their interaction
affect the dependent variable.
Applicability: It is appropriate when investigating the effects of two independent factors on a dependent
variable.
N-Way ANOVA
When researchers have more than two factors to consider, they turn to N-Way ANOVA, where
“n” represents the number of independent variables in the analysis. This could mean examining
how IQ scores are influenced by a combination of factors like country, gender, age group, and
ethnicity all at once. N-Way ANOVA allows for a comprehensive analysis of how these multiple
factors interact with each other and their combined effect on the dependent variable, providing
a deeper understanding of the dynamics at play.
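A minimal sketch of an N-way (here, three-way) ANOVA formula in Python using statsmodels; the DataFrame below is randomly generated purely to illustrate the syntax for multiple factors and their interactions:

```python
# A minimal sketch of a three-way ANOVA (randomly generated illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "country":   rng.choice(["A", "B", "C"], n),
    "gender":    rng.choice(["female", "male"], n),
    "age_group": rng.choice(["young", "middle", "older"], n),
    "iq":        rng.normal(100, 15, n),
})

# '*' expands to all main effects plus every interaction among the factors
model = ols("iq ~ C(country) * C(gender) * C(age_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```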
Analysis of Variance
To understand the logic of analysis of variance, we consider variances.
Variance measures how far a set of data is spread out. Variance is the average of the squared
distances from each point to the mean.
In particular, there are two different ways of estimating population variances.
◦ within-groups estimate of the population variance
◦ between-groups estimate of the population variance
Within-group Variance
The within-groups estimate of variance reflects how the scores of individuals within the same group
vary around their group mean.
It focuses on variation within each of the groups being studied.
Between-group Variance
The between-groups estimate of the population variance reflects how the means of the different groups
vary from one another.
It focuses on the means of all the groups under study.
F ratio
F ratio is a ratio of the between-groups population variance estimate to the
within-groups population variance estimate.
F = between-groups estimate / within-groups estimate
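A minimal sketch of computing this ratio by hand in Python, using the same hypothetical sleep data as above, to make the between-groups / within-groups logic concrete:

```python
# A minimal sketch of the F ratio computed from its two variance estimates (hypothetical data).
import numpy as np

groups = [
    np.array([7.9, 8.1, 7.5, 8.4, 7.8]),   # low social media use
    np.array([7.2, 6.9, 7.4, 7.0, 7.3]),   # medium
    np.array([6.1, 6.5, 5.9, 6.3, 6.0]),   # high
]

k = len(groups)                       # number of groups
N = sum(len(g) for g in groups)       # total number of observations
grand_mean = np.concatenate(groups).mean()

# Between-groups estimate: variation of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-groups estimate: variation of scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (N - k)

F = ms_between / ms_within
print(f"F = {F:.2f}")   # matches scipy.stats.f_oneway on the same data
```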
Effect Size
Effect size of ANOVA is called eta squared.
Effect size is a measure of the strength of the relationship between variables.
Effect sizes are important because whilst the one-way ANOVA tells you whether differences
between group means are "real" (i.e., different in the population), it does not tell you the
"size" of the difference.

η² (the Greek letter eta, squared)


η² = SS_between / SS_total (the between-groups sum of squares divided by the total sum of squares)
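A minimal sketch of computing eta squared in Python from the sums of squares, again with the hypothetical sleep data:

```python
# A minimal sketch of eta squared (hypothetical data).
import numpy as np

groups = [np.array([7.9, 8.1, 7.5, 8.4, 7.8]),
          np.array([7.2, 6.9, 7.4, 7.0, 7.3]),
          np.array([6.1, 6.5, 5.9, 6.3, 6.0])]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((np.concatenate(groups) - grand_mean) ** 2).sum()

eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.3f}")
# Rough benchmarks often cited: ~0.01 small, ~0.06 medium, ~0.14 large effect.
```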
INTERPRETATION
CHECK F value and p value
If no true variance exists between the groups, the ANOVA's F ratio should be close to 1.
Larger F-value: A larger F-value indicates a greater difference among the group means. It
suggests that the variations between the groups are significant.
Smaller F-value: Conversely, a smaller F-value suggests that the group means are similar, and
there may not be significant differences among them.
P-Value < Significance Level (e.g., 0.05): If the p-value is less than your chosen significance level
(often set at 0.05), it indicates that there are statistically significant differences among the
groups. In other words, you have evidence to reject the null hypothesis, which assumes no
significant differences.
Limitations of One Way
ANOVA Test
The one-way ANOVA is an omnibus test statistic. This means the test will determine
whether there is a statistically significant difference among the group means as a whole. However, it cannot
tell you which specific groups differ significantly from each other. Thus, to find the specific
groups with different means, a post-hoc test needs to be conducted.
Post-hoc
ANOVA will tell you if there are differences among the levels of the independent variable, but
not which differences are significant. To find how the treatment levels differ from one another,
perform a Tukey HSD (Tukey's Honestly Significant Difference) post-hoc test.
The Tukey test runs pairwise comparisons among each of the groups, and uses a conservative
error estimate to find the groups which are statistically different from one another.
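A minimal sketch of a Tukey HSD post-hoc test in Python using statsmodels, with hypothetical data; pairwise_tukeyhsd compares every pair of group means:

```python
# A minimal sketch of Tukey HSD pairwise comparisons (hypothetical data).
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([7.9, 8.1, 7.5, 8.4, 7.8,    # low social media use
                   7.2, 6.9, 7.4, 7.0, 7.3,    # medium
                   6.1, 6.5, 5.9, 6.3, 6.0])   # high
labels = ["low"] * 5 + ["medium"] * 5 + ["high"] * 5

result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result)  # mean difference, adjusted p-value, and reject/retain per pair
```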
Post-hoc Analysis
The appropriate post-hoc analysis depends on your data:
◦ one set of procedures applies if your data are normally distributed,
◦ another applies if your data are not normally distributed.

SPSS offers 18 different types of post-hoc analyses.


TUKEY
The Tukey post-hoc test should be used when you would like to make pairwise comparisons
between group means and the sample sizes for each group are equal.
If the sample sizes are not equal, you can use a modified version of the test known as the Tukey-
Kramer test.
The term “pairwise” means we only want to compare two group means at a time.

Of the common post-hoc tests, Tukey's test has the highest statistical power for detecting pairwise differences.


LSD
The original solution to this problem, developed by Fisher, was to explore all possible pairwise
comparisons of the means comprising a factor using the equivalent of multiple t tests. This
procedure is named the Least Significant Difference (LSD) test.
BONFERRONI
The Bonferroni post-hoc test should be used when you have a set of planned comparisons you
would like to make beforehand.
When we have a specific set of planned comparisons we'd like to make ahead of time like this,
the Bonferroni post-hoc test produces the narrowest confidence intervals, which means it has
the greatest ability to detect true differences between the groups of interest.
Note that the Bonferroni post-hoc test can be used whether or not the group sample sizes
are equal.
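A minimal sketch of planned pairwise comparisons with a Bonferroni correction in Python, using hypothetical data; the raw t-test p-values are adjusted afterwards with statsmodels' multipletests:

```python
# A minimal sketch of planned comparisons with a Bonferroni correction (hypothetical data).
from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {
    "low":    [7.9, 8.1, 7.5, 8.4, 7.8],
    "medium": [7.2, 6.9, 7.4, 7.0, 7.3],
    "high":   [6.1, 6.5, 5.9, 6.3, 6.0],
}

# Planned comparisons decided before looking at the data
planned = [("low", "medium"), ("low", "high")]
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in planned]

reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, r in zip(planned, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4f}, significant = {r}")
```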
SCHEFFE
The Scheffe post-hoc test should be used when you would like to make all possible contrasts
between group means. This test allows you to compare more than just two means at once,
unlike the Tukey post-hoc test.
Complex comparisons involve contrasts of more than two means at a time
Note that the Scheffe post-hoc test can be used whether or not the group sample sizes are
equal.
It is less powerful than the Tukey test for simple pairwise comparisons.
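A minimal sketch of the Scheffe criterion for a single pairwise contrast, computed by hand in Python with the same hypothetical data; the contrast is significant only if its F value exceeds (k-1) times the critical F:

```python
# A minimal sketch of a Scheffe test for one pairwise contrast (hypothetical data).
import numpy as np
from scipy import stats

groups = [np.array([7.9, 8.1, 7.5, 8.4, 7.8]),
          np.array([7.2, 6.9, 7.4, 7.0, 7.3]),
          np.array([6.1, 6.5, 5.9, 6.3, 6.0])]
k = len(groups)
N = sum(len(g) for g in groups)

# Within-groups mean square (the error term from the one-way ANOVA)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# Contrast: group 0 vs group 2
g1, g2 = groups[0], groups[2]
f_contrast = (g1.mean() - g2.mean()) ** 2 / (ms_within * (1 / len(g1) + 1 / len(g2)))

# Scheffe criterion: (k - 1) * critical F at alpha = 0.05
scheffe_crit = (k - 1) * stats.f.ppf(0.95, k - 1, N - k)
print(f"F contrast = {f_contrast:.2f}, Scheffe criterion = {scheffe_crit:.2f}")
print("significant" if f_contrast > scheffe_crit else "not significant")
```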
