Applications of Stat Lectures
Composed by: Ms. Dawra Mehmood (Clinical Psychologist, Psychotherapist & Educationist)

Types of Hypotheses
1. Simple Hypothesis: Predicts the relationship between one independent variable and
one dependent variable. "If students eat more fruits, they will perform better in
exams." Here, eating fruits is the independent variable, and exam performance is the
dependent variable.
2. Complex Hypothesis: Involves two or more independent variables and two or more
dependent variables. "Eating more fruits and vegetables leads to weight loss, glowing
skin, and reduces the risk of many diseases." This includes multiple factors affecting
multiple outcomes.
3. Null Hypothesis (H0): States that there is no relationship or effect between the
variables. "There is no difference in test scores between students who study with
music and those who do not." This serves as a baseline for testing.
4. Alternative Hypothesis (Ha): Challenges the null hypothesis, suggesting that there is
a significant relationship or effect. "Students who study with music will have different
test scores compared to those who do not." It posits that studying with music does
affect scores.
These types of hypotheses are fundamental in research design and help researchers formulate
questions and conduct studies effectively.
• Hypothesis should be clear and precise. If the hypothesis is not clear and precise the
inferences drawn on its basis cannot be taken as reliable.
• Hypothesis should be capable of being tested. Research programmes have often bogged down in a swamp of untestable hypotheses, so the researcher may need to do some prior study in order to make a hypothesis testable. A hypothesis "is testable if other deductions can be made from it which, in turn, can be confirmed or disproved by observation."
• Hypothesis should state relationship between variables, if it happens to be a relational
hypothesis.
• Hypothesis should be limited in scope and must be specific. A researcher must remember that narrower hypotheses are generally more testable, and such hypotheses should therefore be developed.
• Hypothesis should be stated as far as possible in most simple terms so that the same is
easily understandable by all concerned. But one must remember that simplicity of
hypothesis has nothing to do with its significance.
• Hypothesis should be consistent with most known facts i.e., it must be consistent with
a substantial body of established facts. In other words, it should be one which judges
accept as being the most likely.
• Hypothesis should be amenable to testing within a reasonable time. Even an excellent hypothesis should not be used if it cannot be tested in a reasonable time, for one cannot spend a lifetime collecting data to test it.
• Hypothesis must explain the facts that gave rise to the need for explanation. This
means that by using the hypothesis plus other known and accepted generalizations,
one should be able to deduce the original problem condition. Thus hypothesis must
actually explain what it claims to explain; it should have empirical reference.
• The alpha value is calculated using the formula: α = 1 − confidence level.
• This calculation gives you the probability of making a Type I error (rejecting the null hypothesis when it is actually true). For example, a 95% confidence level gives α = 1 − 0.95 = 0.05, a 5% risk of a Type I error.
• If you are conducting a one-tailed test, α will be allocated entirely to one tail
of the distribution.
• For a two-tailed test, α will be split between both tails, meaning each tail will
have α/2.
• For example, with α = 0.05 in a two-tailed test, each tail would have α/2 = 0.025.
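To make the split concrete, here is a minimal Python sketch (assuming scipy is installed; the code itself is not from the lecture) that computes the critical z-values for one-tailed and two-tailed tests at α = 0.05:

```python
# Illustrative sketch (not from the lecture): critical z-values for a
# one-tailed vs. two-tailed test at alpha = 0.05, using scipy.
from scipy.stats import norm

alpha = 0.05

# One-tailed (right-tailed): all of alpha sits in the upper tail.
z_one_tailed = norm.ppf(1 - alpha)        # ≈ 1.645

# Two-tailed: alpha is split, alpha/2 = 0.025 in each tail.
z_two_tailed = norm.ppf(1 - alpha / 2)    # ≈ 1.960

print(f"one-tailed critical z:  {z_one_tailed:.3f}")
print(f"two-tailed critical z: ±{z_two_tailed:.3f}")
```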
One-Tailed Test: A one-tailed test looks for an effect in only one direction—either an
increase or a decrease. This means that the critical region is located entirely in one tail of the
distribution.
Types:
• Right-Tailed Test: The critical region is in the right tail. You would use this
when you want to test if a parameter is greater than a certain value (e.g., H1: µ
> µ0).
• Left-Tailed Test: The critical region is in the left tail. This is used when testing
if a parameter is less than a certain value (e.g., H1: µ < µ0).
Example: If you are testing whether a new drug increases recovery rates, your null hypothesis
might state that the drug has no effect, while your alternative hypothesis states that it does
increase recovery rates. The critical region would be on the right side of the distribution.
Example: A company claims a rate of 20%, so the hypotheses would be H0: µ = 20% versus Ha: µ > 20% for a right-tailed test.
Two-Tailed Test: A two-tailed test examines both directions for an effect, meaning it tests for
any change, whether an increase or decrease. The critical region is split between both tails of
the distribution. Example: If you are testing whether a new teaching method has any effect on
student scores (either improving or worsening them), your null hypothesis might state that
there is no effect, while your alternative hypothesis states that there is some effect (H1: µ ≠
µ0). The critical regions would be in both tails of the distribution.
Type I Error: A Type I error occurs when you reject a true null hypothesis. This is also
known as a "false positive." Example: Imagine you test a new drug and conclude it works
when it actually doesn't. This is a Type I error because you incorrectly rejected the idea that
the drug has no effect.
Type II Error: A Type II error happens when you fail to reject a false null hypothesis. This is
also known as a "false negative." Example: If you test a new drug and decide it doesn't work
when it actually does, this is a Type II error because you missed detecting the true effect of
the drug.
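The two error types can be made concrete with a small simulation. The sketch below (an added illustration, assuming numpy and scipy are available; it is not from the lecture) repeatedly tests a true null hypothesis and shows that the Type I error rate lands near the chosen α:

```python
# Hypothetical simulation: when H0 is true, a test at alpha = 0.05
# should commit a Type I error about 5% of the time.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha, n_trials = 0.05, 10_000

# Draw samples from a population whose true mean IS 0, so H0 (mu = 0) is true.
rejections = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = ttest_1samp(sample, popmean=0.0)
    if p < alpha:          # rejecting a true H0 = Type I error
        rejections += 1

print(f"Type I error rate ≈ {rejections / n_trials:.3f}")  # close to 0.05
```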
5. Collect Sample Data: Obtain a random sample from the population and calculate the sample mean (x̄).
6. Calculate the Test Statistic: Use the appropriate formula for the chosen test (z-test, t-test, correlation, regression, ANOVA, etc.).
7. Decision Rule / Determine the P-Value or Critical Value: Use statistical software or z-tables to find the p-value associated with your calculated z-score. In hypothesis testing, the decision rule is a guideline that helps researchers determine whether or not to reject the null hypothesis (H0). Its purpose is to provide a clear criterion, based on statistical calculations, that dictates how to interpret the results of a hypothesis test: it tells you what to do with the test statistic you calculate from your sample data.
Components:
• Significance Level: This is the threshold for deciding when to reject H0. Common
values are 0.05 (5%) or 0.01 (1%). It reflects the probability of making a Type I error,
which is rejecting H0 when it is actually true.
• Critical Value: These are the thresholds derived from the statistical distribution (such as the normal or t-distribution) that correspond to the significance level. They define the cut-off points beyond which you would reject H0.
P- value:
• The probability of obtaining results at least as extreme as the observed one, assuming the null hypothesis is true.
• The p-value tells us the chance that our results happened just by random luck, rather
than because of a real effect.
• A low p-value (less than the predetermined significance level, typically 0.05) suggests strong evidence against the null hypothesis, leading researchers to reject it. This indicates that the observed effect is statistically significant and unlikely to have occurred by random chance.
• A high p-value (greater than 0.05) indicates weak evidence against the null
hypothesis, suggesting that any observed effect could easily be due to chance.
• If the p-value is greater than this level, you fail to reject the null hypothesis.
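As a sketch of how a p-value is obtained in practice (the z-value of 1.88 is an assumed number, not from the lecture), scipy's normal distribution can convert a test statistic into one- and two-tailed p-values:

```python
# Sketch with an assumed test statistic: turning a z-score into a p-value.
from scipy.stats import norm

z = 1.88  # hypothetical calculated z-score (an assumed value)

p_right = norm.sf(z)            # one-tailed (right): P(Z >= z)
p_two = 2 * norm.sf(abs(z))     # two-tailed: extremes in both directions

print(f"one-tailed p = {p_right:.4f}")  # ≈ 0.0301
print(f"two-tailed p = {p_two:.4f}")    # ≈ 0.0602
```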
Example
• Null Hypothesis (H0): The new drug has no effect on patients compared to the placebo.
• Suppose the test yields a p-value of 0.03. This means there is a 3% probability of observing these results if the null hypothesis were true.
• Since 0.03 is less than 0.05, researchers would reject the null hypothesis, concluding
that the drug likely has a significant effect.
8. Make a Decision: If the p-value is less than α, reject the null hypothesis; otherwise, fail to reject it.
Effect size:
• Tells you how meaningful the relationship between variables or the difference
between groups is. It indicates the practical significance of a research outcome.
• A large effect size means that a research finding has practical significance, while a
small effect size indicates limited practical applications.
• Statistical significance shows that an effect exists in a study.
• Statistical significance alone can be misleading because it’s influenced by the sample
size. Increasing the sample size always makes it more likely to find a statistically
significant effect, no matter how small the effect truly is in the real world.
• Practical significance shows that the effect is large enough to be meaningful in the
real world represented by effect sizes.
• That’s why it’s necessary to report effect sizes in research papers to indicate the
practical significance of a finding.
There are dozens of measures for effect sizes. The most common effect sizes are
Cohen’s d and Pearson’s (r). Cohen’s d measures the size of the difference between two
groups while Pearson’s r measures the strength of the relationship between two variables.
Cohen’s d: Is designed for comparing two groups. It takes the difference between two means and expresses it in standard deviation units: it tells you how many standard deviations apart the two group means are.
Cohen’s d formula: d = (x̄1 − x̄2) / s, where:
• x̄1 = mean of Group 1
• x̄2 = mean of Group 2
• s = standard deviation
For s you can use, for example:
• a pooled standard deviation that is based on data from both groups,
• the standard deviation from a control group, if your design includes a control and an experimental group, or
• the standard deviation from the pretest data, if your repeated measures design includes a pretest and posttest.
Effect size   Cohen's d          Pearson's r
Small         0.2                .1 to .3 (or −.1 to −.3)
Medium        0.5                .3 to .5 (or −.3 to −.5)
Large         0.8 or greater     .5 or greater (or −.5 or less)
• To calculate Cohen’s d for the weight loss study, you take the means of both groups and the standard deviation of the control group.
It’s helpful to calculate effect sizes even before you begin your study as well as after you
complete data collection.
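A minimal sketch of the calculation, using invented weight-loss numbers and a pooled standard deviation (both are assumptions for illustration, not data from the lecture):

```python
# Sketch of Cohen's d with made-up weight-loss data (kg lost per person).
import numpy as np

control      = np.array([2.1, 1.8, 2.5, 1.2, 2.0, 1.6])
intervention = np.array([3.4, 2.9, 3.8, 2.2, 3.1, 2.7])

m1, m2 = intervention.mean(), control.mean()
s1, s2 = intervention.std(ddof=1), control.std(ddof=1)
n1, n2 = len(intervention), len(control)

# Pooled standard deviation based on data from both groups.
s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

d = (m1 - m2) / s_pooled
print(f"Cohen's d = {d:.2f}")  # |d| >= 0.8 would count as a large effect
```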
Statistical power:
• Is a crucial concept in hypothesis testing that refers to the probability that a statistical
test will correctly identify a true effect when one exists. In simpler terms, it measures
how likely a test is to detect a difference or effect if it is actually there.
• The likelihood that a test will reject the null hypothesis (which states there is no
effect) when the alternative hypothesis (which states there is an effect) is true. This
probability is often expressed as 1−β where β represents the probability of making a
Type II error (failing to detect an effect when one exists).
Examples
Drug Trial Scenario: Imagine a clinical trial testing a new medication. If the study has a
statistical power of 80%, it means there is an 80% chance that the trial will successfully find
evidence of the drug's effectiveness if it truly works. Conversely, there is a 20% chance that
the trial will fail to detect this effectiveness, even though it exists.
Educational Intervention Scenario: A study with high statistical power (say, 80%) is likely to detect any real improvement in scores due to the intervention. If the power were only 50%, researchers might miss detecting significant improvements, leading to potentially misleading conclusions about the effectiveness of the intervention.
Sample Size: Larger sample sizes generally increase statistical power because they reduce
variability and provide more accurate estimates of effects.
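A short sketch of a power calculation, assuming the statsmodels library is available; the effect size, α, and power values below are illustrative choices, not from the lecture:

```python
# Sketch: required sample size per group for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # medium effect (Cohen's d), an assumed value
    alpha=0.05,        # significance level
    power=0.80,        # desired probability of detecting a true effect
)
print(f"required n per group ≈ {n_per_group:.0f}")  # roughly 64
```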
A parametric test is a type of statistical test that makes specific assumptions about the data,
particularly that the data follows a normal distribution and that the variances across groups
are equal. These tests are used to make inferences about population parameters based on
sample data.
Parametric tests are statistical methods that assume certain characteristics about the data, such
as normal distribution and equal variances across groups. These assumptions are crucial for
the validity of the test results.
Assumptions: Parametric tests assume that the data is normally distributed and that the
variances of the groups being compared are equal. They also assume that observations are
independent of each other.
Data Type: These tests are typically used with data measured on an interval or ratio scale, such as height or temperature. Common parametric tests include the t-test, ANOVA (Analysis of Variance), and Pearson's correlation.
Purpose: Parametric tests are used to compare means, test hypotheses, and measure
relationships between variables. They are considered more powerful than non-parametric tests
when their assumptions are met.
Advantages: Because they rely on specific distributional assumptions, parametric tests can
provide more precise estimates and inferences when these assumptions hold true.
Normality: The data in each group should be normally distributed, i.e., it should follow a bell-shaped curve.
Equal Variance (Homogeneity of Variance): The variances of the groups being compared
should be approximately equal. This ensures that the variability in the data is consistent across
groups.
Independence: Observations in each group should be independent of each other. This means
that one observation should not influence another.
No Outliers: There should be no extreme outliers in the data that could skew the results of the
test.
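These assumptions can be checked before running a parametric test. A brief sketch with simulated data (the data values are illustrative assumptions), using the Shapiro-Wilk and Levene tests from scipy:

```python
# Sketch: quick checks of normality and equal variances on simulated data.
import numpy as np
from scipy.stats import shapiro, levene

rng = np.random.default_rng(1)
group_a = rng.normal(50, 10, size=40)
group_b = rng.normal(55, 10, size=40)

stat_a, p_a = shapiro(group_a)          # Shapiro-Wilk normality test
stat_b, p_b = shapiro(group_b)
stat_v, p_v = levene(group_a, group_b)  # Levene's test for equal variances

# p > 0.05 means the assumption is NOT rejected by the data.
print(f"normality A p = {p_a:.3f}, normality B p = {p_b:.3f}")
print(f"equal variances p = {p_v:.3f}")
```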
Z-Test: Used to compare the mean of a sample to a known population mean when the
population standard deviation is known. Example: A psychologist wants to determine if the
average anxiety level of a sample of college students is different from the national average. If
the population standard deviation is known, a Z-test can be used.
• It is crucial for determining whether to reject the null hypothesis based on the observed data.
• Z = (M − μ) / σM, where:
• M = sample mean
• μ = population mean
• σM = standard error of the mean, calculated as σ/√n (where σ is the population standard deviation and n is the sample size).
• Worked example: with M = 16.1, μ = 15.8, and σM = 0.4, Z = (16.1 − 15.8) / 0.4 = 0.3 / 0.4 = 0.75.
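The same z-test reproduced as a short Python sketch; the numbers match the worked example above, and the two-tailed p-value step is an added illustration:

```python
# Sketch: the lecture's z-test numbers (M = 16.1, mu = 15.8, sigma_M = 0.4).
from scipy.stats import norm

M, mu, sigma_M = 16.1, 15.8, 0.4

z = (M - mu) / sigma_M
p_two_tailed = 2 * norm.sf(abs(z))  # added step, not in the worked example

print(f"z = {z:.2f}")                        # 0.75
print(f"two-tailed p = {p_two_tailed:.3f}")  # ≈ 0.453, fail to reject H0
```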
T-Test: Used to compare the means of two groups when the population standard deviation is
unknown. Example: A researcher wants to see if there is a difference in stress levels between
students who exercise regularly and those who do not. A t-test can be used to compare these
two groups.
ANOVA (Analysis of Variance): Used to compare the means of three or more groups.
Example: A study investigates the effect of different teaching methods on student
performance. ANOVA can be used to compare the average performance across different
teaching methods.
Pearson Correlation: Measures the strength and direction of a linear relationship between
two variables. Example: A psychologist examines the relationship between hours spent
studying and exam scores. Pearson correlation can be used to determine if there is a linear
relationship between these variables.
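A compact sketch of both tests with invented scores (all data values below are assumptions for illustration), using scipy:

```python
# Sketch: one-way ANOVA across three teaching methods, and a Pearson
# correlation between study hours and exam scores (invented data).
from scipy.stats import f_oneway, pearsonr

method_a = [78, 85, 82, 88, 75]
method_b = [80, 83, 79, 86, 81]
method_c = [90, 92, 88, 95, 91]

f_stat, p_anova = f_oneway(method_a, method_b, method_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 58, 66, 71, 75, 79]
r, p_corr = pearsonr(hours, scores)
print(f"Pearson: r = {r:.2f}, p = {p_corr:.4f}")
```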
T-test:
T-tests are statistical tests used to determine if there is a significant difference between the
means of two groups. Here are the main types of t-tests:
One-sample t-test is a statistical procedure used to determine whether the mean of a single
sample is significantly different from a known or hypothesized value. This test is useful when
you want to compare the sample mean to a specific value that is not derived from the data
itself but is chosen for scientific reasons, such as a standard or reference value.
Assumptions
1. Normality: The data should be approximately normally distributed.
2. Random Sampling: The sample should be randomly selected from the population.
Hypotheses
• Null Hypothesis (H0): The population mean is equal to the specified value.
• Alternative Hypothesis (Ha): The population mean is not equal to the specified value.
Calculate the Test Statistic:
t = (x̄ − μ0) / (s / √n)
where:
• x̄ is the sample mean,
• μ0 is the hypothesized value,
• s is the sample standard deviation, and
• n is the sample size.
Determine the Critical Value or Use a p-value: Compare the calculated t-value to a critical value from a t-distribution table, or use a p-value to assess significance.
Make a Decision: If the p-value is less than the chosen significance level (e.g., 0.05), reject the null hypothesis, indicating a significant difference between the sample mean and the hypothesized value.
Example
Suppose a company claims that their energy bars contain 20 grams of protein. A sample of 30
bars shows an average protein content of 19.5 grams with a standard deviation of 1.2 grams.
Using a one-sample t-test, you can determine if there is a significant difference between the
sample mean and the claimed value of 20 grams.
If the calculated t-value exceeds the critical value or if the p-value is less than 0.05, you would
reject the null hypothesis, suggesting a significant difference between the sample mean and
the hypothesized value.
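The energy-bar example can be worked in code from the summary statistics alone. A minimal sketch, assuming scipy is available:

```python
# Sketch: one-sample t-test from the summary statistics given above
# (n = 30, x-bar = 19.5, s = 1.2, mu0 = 20).
import math
from scipy.stats import t as t_dist

n, x_bar, s, mu0 = 30, 19.5, 1.2, 20.0

t_stat = (x_bar - mu0) / (s / math.sqrt(n))
p_two_tailed = 2 * t_dist.sf(abs(t_stat), df=n - 1)

print(f"t = {t_stat:.2f}")        # ≈ -2.28
print(f"p = {p_two_tailed:.4f}")  # ≈ 0.030 < 0.05, reject H0
```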
Independent samples t-test is used to compare the means of two unrelated groups.
Assumptions
1. Normality: The data in each group should be approximately normally distributed.
2. Random Sampling: The samples should be randomly selected from the population.
3. Equal Variances: The variances of the two groups should be equal (homogeneity of variances).
Calculate the test statistic, t = (x̄1 − x̄2) / (sp√(1/n1 + 1/n2)), where sp is the pooled standard deviation, then make a decision: if the p-value is less than the chosen significance level (e.g., 0.05), reject the null hypothesis, indicating a significant difference between the two group means.
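A brief sketch of the exercise-and-stress comparison from earlier, with invented scores (the data values are assumptions), using scipy's ttest_ind:

```python
# Sketch: independent-samples t-test on invented stress scores.
from scipy.stats import ttest_ind

exercisers     = [22, 25, 19, 24, 21, 23, 20]
non_exercisers = [28, 31, 27, 30, 26, 29, 32]

t_stat, p = ttest_ind(exercisers, non_exercisers)  # assumes equal variances
print(f"t = {t_stat:.2f}, p = {p:.4f}")
# Use ttest_ind(..., equal_var=False) for Welch's test if variances differ.
```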
Paired samples t-test, also known as a dependent samples t-test, is a statistical procedure
used to determine whether there is a significant difference between two related samples. This
test is particularly useful when the same subjects are measured twice, such as before and after
an intervention, or when paired observations are taken from the same group.
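A minimal sketch with made-up before/after scores (the data are assumptions), assuming scipy is available:

```python
# Sketch: paired (dependent) samples t-test on the same subjects
# measured twice, with invented scores.
from scipy.stats import ttest_rel

before = [65, 70, 58, 72, 66, 61, 69]
after  = [71, 74, 63, 78, 70, 66, 75]

t_stat, p = ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p:.4f}")  # p < 0.05 → significant change
```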
Assumptions
1. Paired Observations: Each subject is measured twice, or the observations are naturally paired.
2. Normality: The differences between the paired values should be approximately normally distributed.
3. Random Sampling: The sample should be randomly selected from the population.
Make a Decision:
• If the p-value is less than the chosen significance level (e.g., 0.05), reject the null hypothesis, indicating a significant difference between the paired samples.
Example
Suppose employees complete a performance assessment before and after a training program. Calculate the differences for each employee and compute the mean and standard deviation of these differences. Use these values to calculate the t-test statistic and determine if there is a significant improvement in performance due to the training program. The paired samples t-test is widely used in various fields, including psychology, education, and healthcare, to assess changes over time or differences between related groups.

Non-Parametric Testing in Simple Words with Examples
Non-parametric testing is a method used in statistics when you don't know the distribution of
your data or if it doesn't follow a normal distribution. Unlike parametric tests, which require
specific conditions like normality and equal variances, non-parametric tests are more flexible
and don't assume any particular distribution.
When to Use Non-Parametric Tests
1. Unknown or Non-Normal Distribution: Used when the data's distribution is unknown or does not follow a normal distribution.
2. Small Sample Size: Sometimes used when sample sizes are small.
Common Non-Parametric Tests
The Mann-Whitney U test is used to compare differences in a dependent variable between two independent groups, when the dependent variable is either ordinal or continuous but not normally distributed.
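A short sketch with invented ordinal-style ratings (the data are assumptions for illustration), using scipy's mannwhitneyu:

```python
# Sketch: Mann-Whitney U test comparing two independent groups
# without assuming normality (invented ratings).
from scipy.stats import mannwhitneyu

group_1 = [3, 5, 4, 2, 5, 4, 3]
group_2 = [1, 2, 2, 3, 1, 2, 3]

u_stat, p = mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.4f}")
```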
Continuous Variables
1. Definition: Continuous variables are measured numerically and can take any value
within a given range, including fractions or decimals.
2. Characteristics: They have an infinite number of possible values between any two
points.
3. Examples: height, weight, temperature, reaction time.
Ordinal Variables
Definition: Ordinal variables have categories that are ordered or ranked but do not necessarily
have equal intervals between them.
Characteristics: The order matters, but the differences between consecutive categories may not
be consistent.
Examples:
Educational level (elementary school graduate, high school graduate, some college, college
graduate)
Mann-Whitney U Test
1. Purpose: To compare differences between two independent groups when the dependent variable is ordinal or continuous but not normally distributed.
2. Assumptions:
Before we introduce you to these four assumptions, do not be surprised if, when analysing
your own data using SPSS Statistics, one or more of these assumptions is violated (i.e., is not
met). This is not uncommon when working with real-world data rather than textbook
examples, which often only show you how to carry out a Mann-Whitney U test when
everything goes well! However, don’t worry. Even when your data fails certain assumptions,
there is often a solution to overcome this. First, let’s take a look at these four assumptions:
o Assumption #1: Your dependent variable should be measured at the ordinal or continuous level (see the examples of ordinal and continuous variables above).
o Assumption #2: Your independent variable should consist of two categorical, independent groups (e.g., males and females).
o Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in each group or between the groups themselves. For example, there must be different participants in each group, with no participant being in more than one group. This is more of a study design issue than something you can test for, but it is an important assumption of the Mann-Whitney U test. If your study fails this assumption, you will need to use another statistical test instead of the Mann-Whitney U test (e.g., a Wilcoxon signed-rank test).
o Assumption #4: A Mann-Whitney U test can be used when your two variables are not normally distributed. However, in order to know how to interpret the results from a Mann-Whitney U test, you have to determine whether your two distributions (i.e., the distribution of scores for both groups of the independent variable; for example, 'males' and 'females' for the independent variable, 'gender') have the same shape.
The Wilcoxon Signed Rank Test is a non-parametric statistical test used to compare two
related samples or repeated measurements on the same subjects. It is particularly useful when
the data does not meet the assumptions of normality required for parametric tests like the
paired t-test.
1. Purpose: To determine whether the median difference between pairs of related observations differs significantly from zero.
2. Assumptions: The data consist of paired observations from the same subjects; the pairs are randomly sampled; and the differences are measured on at least an ordinal scale.
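A minimal sketch with made-up pre/post ratings (the data are assumptions), using scipy's wilcoxon:

```python
# Sketch: Wilcoxon signed-rank test as the non-parametric counterpart
# of the paired t-test (invented ratings).
from scipy.stats import wilcoxon

pre  = [4, 3, 5, 2, 4, 3, 5, 4]
post = [5, 4, 5, 4, 5, 4, 5, 5]

w_stat, p = wilcoxon(pre, post)  # zero differences are dropped by default
print(f"W = {w_stat}, p = {p:.4f}")
```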
Kruskal-Wallis H Test
Purpose: To determine whether there are statistically significant differences between three or more independent groups on an ordinal or continuous dependent variable. It is particularly useful when data does not meet normality assumptions required for ANOVA.
Assumptions: One dependent variable measured at the ordinal or continuous level; one independent variable consisting of three or more independent groups; and independence of observations.
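A brief sketch with invented group ratings (the data are assumptions), using scipy's kruskal:

```python
# Sketch: Kruskal-Wallis H test across three independent groups
# when ANOVA's normality assumption is doubtful (invented data).
from scipy.stats import kruskal

group_a = [12, 15, 11, 18, 14]
group_b = [20, 22, 19, 24, 21]
group_c = [16, 13, 17, 15, 18]

h_stat, p = kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p:.4f}")
```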
Advantages and Disadvantages of Non-Parametric Tests
• Fewer Assumptions: They require minimal assumptions about the data distribution, making them suitable when parametric assumptions like normality are not met.
• Suitable for Small Samples: Some sources suggest no restriction on minimum sample size.
• Lower Statistical Power: They generally have less statistical power compared to parametric tests, especially with smaller sample sizes. This means they are less likely to detect a true difference or relationship in the data.
• Less Information: Non-parametric tests tend to use less information than parametric tests because they often rely on ranks or signs rather than the actual data values.
• Less Efficient: They are less efficient than their parametric counterparts when the assumptions of the parametric methods are met; larger sample sizes may be needed to achieve the same results.
• Limited Use: Non-parametric tests are more limited in their application and may not be able to handle complex designs or interactions among variables.
Advantages of Parametric Tests
1. Statistical Power: Parametric tests generally have greater statistical power than non-parametric tests when the assumptions are met. This means they are more likely to detect a true effect when one exists. For example, a t-test can effectively identify differences in means between two groups if the data is normally distributed.
2. Precision: They provide more precise estimates of population parameters (like means and standard deviations) compared to non-parametric tests. For instance, ANOVA can give a detailed analysis of variance among multiple groups, providing insights that non-parametric alternatives may not.
3. Ease of Interpretation: Results from parametric tests are often easier to interpret and communicate, as they rely on familiar statistical concepts such as means, standard deviations, and p-values. For example, reporting the results of a linear regression analysis is straightforward and widely understood.
4. Variety of Tests Available: There are many types of parametric tests available for
different types of data (e.g., t-tests for comparing two means, ANOVA for comparing
three or more), allowing for a broad range of statistical analyses.
Disadvantages of Parametric Tests
1. Distributional Assumptions: They require the data to follow a specific distribution (usually normal), which real-world data often does not.
2. Outlier Sensitivity: These tests can be significantly affected by outliers, which can distort results. For instance, in a dataset with extreme values, the mean may not accurately represent the central tendency, leading to incorrect interpretations.
3. Sample Size Requirements: Parametric tests typically require larger sample sizes to yield valid results. Small sample sizes may not provide enough power to detect significant effects or may violate normality assumptions.
4. Limited Applicability: They are generally not suitable for ordinal or nominal data types, which limits their use in certain research contexts. Non-parametric tests would be more appropriate in these cases.
5. Complexity in Assumption Testing: Researchers must ensure that data meet specific conditions (like homogeneity of variance), which can complicate analysis and interpretation.
In summary, while parametric tests offer powerful analytical tools when their assumptions are
met, researchers must carefully consider their applicability based on the nature of their data
and research questions.
Regression analysis has a rich and evolving history, with contributions from several key
figures over the centuries. The term "regression" itself was coined by Sir Francis Galton in
the late 19th century, but the mathematical foundations of regression were laid much earlier.
Early Beginnings
• Isaac Newton (1671): Newton used an averaging method in his work on Newton's rings, which laid some groundwork for later statistical methods.
• Francis Galton (1886): Galton introduced the term "regression" in his paper "Regression towards Mediocrity in Hereditary Stature." He used it to describe how the heights of offspring tend to revert towards the mean height of the population, even if their parents were exceptionally tall or short.
• Karl Pearson and Udny Yule: They expanded Galton's work into a broader statistical context, developing multiple regression and correlation analysis.
Modern Developments
Simple Linear Regression
• Definition: This type of regression involves one independent variable to predict the value of a dependent variable. The relationship between the variables is linear.
• Example: Predicting a student's exam score from hours studied.
Simple linear regression is a statistical method used to model the relationship between two
continuous variables. It involves one independent variable (predictor) and one dependent
variable (response). The goal is to create a linear equation that best predicts the value of the
dependent variable based on the independent variable.
Key Components
• Dependent Variable (y): The outcome being predicted.
• Independent Variable (x): The predictor.
• The fitted line has the form y = β0 + β1x + ε, where β0 is the intercept, β1 is the slope, and ε is the error term.
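A minimal sketch of fitting a simple linear regression with scipy's linregress; the hours and scores data are invented for illustration:

```python
# Sketch: fitting y = b0 + b1*x on invented hours/scores data.
from scipy.stats import linregress

hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 58, 66, 71, 75, 79]

fit = linregress(hours, scores)
print(f"intercept b0 = {fit.intercept:.2f}, slope b1 = {fit.slope:.2f}")
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")

# Predict the score for a student who studies 5.5 hours.
print(f"predicted score: {fit.intercept + fit.slope * 5.5:.1f}")
```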
Multiple Linear Regression
• Definition: This involves two or more independent variables to predict the value of a dependent variable.
• Example: Analyzing how salary is affected by education level, experience, and location.
Multiple linear regression (MLR) is a statistical technique used to model the relationship
between a dependent variable and two or more independent variables. It is an extension of
simple linear regression, allowing for the analysis of multiple factors that influence a single
outcome.
Key Components
• Dependent Variable (y): The outcome being predicted.
• Independent Variables (x1, x2, ..., xn): The variables used to predict the dependent variable.
• Regression Equation: y = β0 + β1x1 + β2x2 + … + βnxn + ε, where β0 is the intercept, β1 through βn are the coefficients, and ε is the error term.
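A short sketch estimating the coefficients by ordinary least squares with numpy; the salary data and variable choices are invented for illustration:

```python
# Sketch: estimating y = b0 + b1*x1 + b2*x2 + e by least squares
# on invented salary data.
import numpy as np

education  = np.array([12, 16, 14, 18, 16, 20])  # years of schooling (x1)
experience = np.array([10, 4, 8, 3, 6, 2])       # years of experience (x2)
salary     = np.array([45, 58, 52, 64, 60, 70])  # in $1000s (y)

# Design matrix with a leading column of ones for the intercept b0.
X = np.column_stack([np.ones_like(education), education, experience])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

b0, b1, b2 = coef
print(f"salary ≈ {b0:.1f} + {b1:.2f}*education + {b2:.2f}*experience")
```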
References
Advanced Statistics I & II. (n.d.). Chapter 10: Assumptions of Parametric Tests. Retrieved from https://bookdown.org/danbarch/psy_207_advanced_stats_I/parametricassumptions.html
Byjus. (n.d.). Difference between Parametric and Nonparametric Tests. Retrieved from
[URL]
GraphPad. (n.d.). How do I evaluate if my data meet necessary assumptions before applying parametric tests? Retrieved from https://www.graphpad.com/support/faq/how-do-i-evaluate-if-my-data-meet-necessary-assumptions-before-applying-parametric-tests/
Health Knowledge. (n.d.). Parametric and Non-parametric Tests. Retrieved from [URL]
Investopedia. (n.d.). Multiple Linear Regression (MLR) Definition, Formula, and Example.
IOVS. (n.d.). Parametric Statistical Inference for Comparing Means and Variances.
Retrieved from [URL]