Statistical Notes

Effect size

Effect size is a quantitative measure that reflects the magnitude of an effect or the strength of a
relationship in inferential analysis. Unlike hypothesis tests, which primarily focus on whether an
effect exists (typically through p-values), effect size conveys how large that effect is, which is
crucial for understanding the practical significance of results.

There are different types of effect sizes, depending on the type of analysis being conducted:

1. Cohen's d: Used in comparisons between two means, it represents the difference
between two group means divided by the pooled standard deviation. A small effect size
is around 0.2, medium is around 0.5, and large is around 0.8 (see the sketch after this list).
2. Pearson's r: Used for correlation analysis, it measures the strength and direction of a
linear relationship between two variables. Values range from -1 to 1, with 0 indicating
no correlation.
3. Eta-squared (η²): Used in ANOVA contexts, it indicates the proportion of variance
attributed to a factor. Values range from 0 to 1, with higher values indicating a larger
effect.
4. Omega-squared (ω²): Also used in ANOVA; it gives a less biased estimate of the
population effect size than eta-squared.
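
The sketch below computes all four effect sizes on hypothetical data in Python. The group names, sample values, and seed are illustrative assumptions, not taken from any real study; the formulas follow the standard definitions given above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, 40)  # hypothetical scores, group A
group_b = rng.normal(92, 15, 40)   # hypothetical scores, group B

# 1. Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# 2. Pearson's r: strength and direction of a linear relationship.
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.8, size=50)
r, _ = stats.pearsonr(x, y)

# 3. Eta-squared and 4. omega-squared from a one-way ANOVA layout.
groups = [group_a, group_b, rng.normal(96, 15, 40)]
grand = np.concatenate(groups)
ss_total = ((grand - grand.mean()) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
ms_within = (ss_total - ss_between) / (len(grand) - len(groups))
eta_sq = ss_between / ss_total
omega_sq = (ss_between - (len(groups) - 1) * ms_within) / (ss_total + ms_within)

print(f"Cohen's d: {cohens_d:.2f}, Pearson's r: {r:.2f}")
print(f"eta-squared: {eta_sq:.3f}, omega-squared: {omega_sq:.3f}")

On the same data, eta-squared always comes out at least as large as omega-squared; that gap is exactly the upward bias that omega-squared corrects for.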

Effect sizes help researchers and practitioners understand how substantial an impact is, making
it easier to assess the relevance and applicability of findings in real-world scenarios. They also
allow for better comparisons across studies, since they standardize the measurement of effects,
facilitating meta-analyses and further research.

Significance Level

A 5% significance level in hypothesis testing, often denoted as α = 0.05, indicates that you are
willing to accept a 5% chance of incorrectly rejecting the null hypothesis when it is actually true.
This is commonly referred to as a Type I error.

In practical terms, when you conduct a hypothesis test:

•	If the p-value obtained from your analysis is less than or equal to 0.05, you reject the
null hypothesis and call the results statistically significant. This means that, if the null
hypothesis were true, results at least as extreme as those observed would occur less
than 5% of the time.
•	If the p-value is greater than 0.05, you do not reject the null hypothesis, suggesting that
there is not enough evidence to conclude that an effect or relationship exists.

The choice of a 5% significance level is conventional, but researchers may choose different
thresholds (such as 0.01 or 0.10) depending on the context, the field of study, or the specific
consequences of making an error.
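
A minimal sketch of this decision rule, using a two-sample t-test on hypothetical data (the sample sizes, means, and seed shown here are illustrative assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_a = rng.normal(5.0, 1.0, 30)  # hypothetical measurements, condition A
sample_b = rng.normal(5.6, 1.0, 30)  # hypothetical measurements, condition B

alpha = 0.05  # the conventional 5% significance level
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)

if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject the null hypothesis")

Changing alpha to 0.01 or 0.10 changes only the threshold in the comparison, not the p-value itself, which is why the significance level should be fixed before looking at the data.
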
Effect Relations
In a quasi-experimental study that includes a control group and an
intervention group, the term "effect relation" generally refers to the
relationship between the treatment or intervention applied to the
experimental group and the outcomes measured in both groups. It
essentially describes how the intervention is expected to impact the results
compared to the outcomes in the control group, which does not receive the
intervention.

When discussing "effect relations between groups," it typically pertains to
examining and comparing the effects observed in the intervention group
versus the control group. Researchers are interested in understanding:

1. Differences in Outcomes: This involves assessing how the
intervention influences the dependent variable when compared to the
control group. For example, if you are studying an educational
intervention, you would look at the test scores of students in the
intervention group versus those in the control group to evaluate the
effect of the program.
2. Magnitude of Effect: Researchers may also be concerned with
quantifying the size of the effect, which can be represented using
effect sizes like Cohen's d or other metrics that indicate the strength of
the differences observed between the groups.
3. Statistical Significance: In this context, researchers will usually
conduct statistical tests to determine whether the differences observed
between groups are statistically significant, helping to infer whether
the intervention had a meaningful impact.

Overall, while "effect relation" is not a commonly used term in statistical
literature, the concept of examining the effects between groups in a quasi-
experimental design is crucial for drawing conclusions about the impact of an
intervention. It's important for understanding causality, even in non-
randomized studies where the degree of control over the variables may not
be as stringent as in randomized controlled trials.
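
The sketch below walks through all three points above for a hypothetical educational intervention; the group scores, sample sizes, and seed are invented for illustration, and the t-test is just one reasonable choice of significance test for this design.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(70, 10, 50)       # hypothetical control-group test scores
intervention = rng.normal(76, 10, 50)  # hypothetical intervention-group test scores

# 1. Difference in outcomes: compare group means on the dependent variable.
mean_diff = intervention.mean() - control.mean()

# 2. Magnitude of effect: Cohen's d from the pooled standard deviation.
n1, n2 = len(intervention), len(control)
pooled_sd = np.sqrt(((n1 - 1) * intervention.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = mean_diff / pooled_sd

# 3. Statistical significance: independent-samples t-test.
t_stat, p_value = stats.ttest_ind(intervention, control)

print(f"Mean difference: {mean_diff:.1f} points")
print(f"Cohen's d: {cohens_d:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because group assignment in a quasi-experiment is not randomized, a significant difference here still warrants caution about confounding before any causal claim is made.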
