Plan 299B - Hypothesis Testing
Mandar
Plan 299 B: Research Methods
Hypothesis Testing
I. Introduction
The goal is to see if there’s enough evidence to reject the null hypothesis in favor
of the alternative. The step-by-step process of conducting hypothesis testing is detailed
in the following subsections.
3. Collect Data
To test the hypothesis, we need to collect data that can be measured and
analyzed. This usually involves setting up an experiment or observation to gather
evidence on the effect of interest.
4. Analyze Results
The analysis step involves calculating results from the collected data and
interpreting what they mean. We often use statistical tests to help us determine if
the results show a significant effect.
5. Draw a Conclusion
Based on the analysis, we conclude whether the hypothesis was
supported by the data. We either reject the null hypothesis (indicating evidence
for an effect) or fail to reject it (indicating insufficient evidence for an effect).
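The steps described above (collect data, analyze, draw a conclusion) can be sketched end to end. This is a minimal illustration using a one-sample t-test from scipy; the data values and hypothesized mean are made up for the example.

```python
# Minimal end-to-end sketch of the testing steps above: collect data,
# analyze with a statistical test, then draw a conclusion.
# All numbers here are illustrative, not from a real study.
from scipy import stats

# Collect data: hypothetical measurements of the effect of interest
data = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4, 12.2]

# Analyze: test whether the sample mean differs from a hypothesized value
hypothesized_mean = 12.0
t_stat, p_value = stats.ttest_1samp(data, popmean=hypothesized_mean)

# Draw a conclusion at significance level alpha
alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis: evidence of an effect.")
else:
    print("Fail to reject the null hypothesis: insufficient evidence.")
```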
Various statistical tests are used depending on the type of data, sample size, and
research objectives. Commonly used methods include the z-test, t-test, chi-square
test, and ANOVA (analysis of variance). The z-test is used for large samples (n > 30),
especially when the population variance is known; it checks whether the sample mean
differs significantly from the population mean. Meanwhile, the t-test is typically used for
smaller samples (n < 30) when the population variance is unknown, and is often used to
compare the means of two groups.
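As a rough sketch of the two tests just described, the following computes a z-statistic by hand (population variance assumed known) and runs a one-sample t-test with scipy. All numbers are hypothetical.

```python
# Sketch: z-test (large sample, known population sigma) versus
# one-sample t-test (small sample, unknown variance).
# The data values are illustrative only.
import math
from scipy import stats

# z-test: sample of n = 40, known population sigma = 15, mu0 = 100
n, xbar, mu0, sigma = 40, 104.0, 100.0, 15.0
z = (xbar - mu0) / (sigma / math.sqrt(n))
p_z = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed p-value

# t-test: small sample, population variance unknown
small_sample = [98, 102, 105, 110, 99, 101, 97, 108]
t_stat, p_t = stats.ttest_1samp(small_sample, popmean=mu0)

print(f"z = {z:.3f}, p = {p_z:.4f}")
print(f"t = {t_stat:.3f}, p = {p_t:.4f}")
```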
The chi-square test is used for categorical data to see if there’s a relationship
between two variables. It tests if observed frequencies differ from expected frequencies
in one or more categories. Finally, ANOVA is used to compare the means of three or
more groups; it determines whether at least one group mean differs significantly from
the others, though it does not specify which group differs.
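Both tests are available in scipy. The sketch below runs a chi-square test of independence on a small contingency table and a one-way ANOVA on three groups; the frequencies and measurements are invented for illustration.

```python
# Sketch: chi-square test of independence and one-way ANOVA,
# using scipy.stats on made-up data.
from scipy import stats

# Chi-square: observed frequencies for two categorical variables
# (e.g. a 2x2 table of preference by group); values are illustrative.
observed = [[30, 20], [25, 25]]
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

# One-way ANOVA: compare the means of three groups
group_a = [5.1, 4.9, 5.3, 5.0]
group_b = [5.6, 5.8, 5.5, 5.7]
group_c = [5.0, 5.2, 4.8, 5.1]
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

print(f"chi2 = {chi2:.3f} (df = {dof}), p = {p_chi:.4f}")
print(f"F = {f_stat:.3f}, p = {p_anova:.4f}")
```

Note that a significant ANOVA result only indicates some difference exists; a follow-up (post hoc) comparison is needed to say which group differs.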
Each of these hypothesis tests is chosen based on the type of data, sample size,
and specific question being investigated. Together, they provide researchers with a
variety of tools to test different types of hypotheses accurately and reliably.
IV. Understanding Errors in Hypothesis Testing
Type I Error (False Positive or False Alarm) occurs when the null hypothesis is
rejected despite being true. It represents a “false alarm” and is controlled by the
significance level (α). On the other hand, Type II Error (False Negative or Missed Alarm)
happens when we fail to reject the null hypothesis even though it is false. It indicates a
missed opportunity to identify an actual effect, and the probability of this error is denoted
by β.
The decision to reject or fail to reject the null hypothesis is made on the basis
of probabilities. If there is a large difference between the value of the parameter obtained
from the sample and the hypothesized parameter, the null hypothesis is probably not
true. But how large a difference is necessary to reject the null hypothesis?
The critical value is a cutoff point that helps us decide if our test results are
strong enough to say there’s an effect. It’s based on the level of significance (α) that we
choose. For example, if we set our level of significance to 5% (or 0.05), we find a critical
value that leaves only a 5% chance of making a Type I Error if we reject the null
hypothesis.
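The critical value for a chosen α can be looked up from the distribution of the test statistic. A minimal sketch, assuming a standard normal test statistic:

```python
# Sketch: deriving critical values from a chosen significance level
# alpha, assuming a standard normal test statistic.
from scipy import stats

alpha = 0.05
z_crit_two_tailed = stats.norm.ppf(1 - alpha / 2)  # about 1.96
z_crit_one_tailed = stats.norm.ppf(1 - alpha)      # about 1.645

print(f"two-tailed critical value: +/-{z_crit_two_tailed:.3f}")
print(f"one-tailed critical value: {z_crit_one_tailed:.3f}")
```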
There are three types of tests used to determine how we assess the significance
of our findings in relation to the null hypothesis.
1. Left-tailed test - used when the alternative hypothesis states that the
parameter of interest is less than the value specified in the null
hypothesis.
2. Right-tailed test - applied when the alternative hypothesis states that the
parameter is greater than the value specified in the null hypothesis.
3. Two-tailed test - used when the alternative hypothesis states that the
parameter is not equal to the value specified in the null hypothesis.
For example, in a two-tailed test with a 5% significance level, the critical value
might be ±1.96. If the test statistic falls outside this range (i.e., its absolute value
exceeds 1.96), we reject the null hypothesis. The critical value depends on the
significance level and the type of test (one-tailed or two-tailed).
A z-score is a standardized measure that shows how far a particular data point is
from the mean of the data set, measured in terms of standard deviations. In hypothesis
testing, the z-score helps us understand whether the difference between our sample and
population is likely due to random chance or if it’s statistically significant.
The z-score is calculated using the following formula:
z = (x̄ − μ) ÷ (σ ÷ √n)
where:
● x̄ is the sample mean,
● μ is the population mean,
● σ is the population standard deviation (if known), and
● n is the sample size.
The result tells us how many standard deviations a data point is from the
population mean. A z-score of 0 indicates the sample mean is exactly the same as the
population mean. Positive z-scores indicate values above the population mean and
negative z-scores indicate values below the population mean.
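The formula translates directly into code. This is a one-line transcription, with an arbitrary example input:

```python
import math

# Direct transcription of the z-score formula: z = (x-bar - mu) / (sigma / sqrt(n))
def z_score(sample_mean, pop_mean, pop_sd, n):
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Example: sample mean 105, population mean 100, sigma 10, n = 25
print(z_score(105, 100, 10, 25))  # (105 - 100) / (10 / 5) = 2.5
```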
Assuming that traffic congestion is measured using the average traffic volume,
we collect data for average traffic volume before and after bike lanes are installed:
Population data:
● Average traffic volume: 500 cars per day
● Standard deviation: 50 cars
Sample data:
● Sample size: 100 barangays
● Average traffic volume: 470 cars per day
z = (x̄ − μ) ÷ (σ ÷ √n) = (470 − 500) ÷ (50 ÷ √100) = −30 ÷ 5 = −6
The resulting z-score of −6 lies far outside the two-tailed critical value of ±1.96 at
the 5% significance level, so we would reject the null hypothesis and conclude that
traffic volume decreased significantly after the bike lanes were installed.
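The arithmetic of the bike-lane example can be checked in a few lines:

```python
import math

# Worked example from the text: average daily traffic volume before
# (population) and after (sample) bike-lane installation.
pop_mean, pop_sd = 500, 50   # cars per day
sample_mean, n = 470, 100    # sample of 100 barangays

z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
print(z)  # -6.0
```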
VIII. Conclusion
It is important to note that hypothesis testing also has its limits. It does not prove
that a hypothesis is right or wrong; it only shows whether there is enough evidence
to reject the null hypothesis. Furthermore, results depend on the quality and size of
the sample: poor data can lead to unreliable conclusions.