Compare Means: 1-One Sample T Test
The One Sample t Test can only compare a single sample mean to a specified constant. It cannot compare sample means between two or more groups.
Hypotheses
The null hypothesis (H0) and (two-tailed) alternative hypothesis (H1) of the one sample T test can be expressed as:
H0: µ = µ0 ("the population mean is equal to the proposed constant")
H1: µ ≠ µ0 ("the population mean is not equal to the proposed constant")
where µ is the population mean and µ0 is the constant proposed for it.
The test statistic for a One Sample t Test is denoted t, which is calculated using the following formula:
t = (x̄ − µ0) / (s / √n)
where x̄ is the sample mean, s is the sample standard deviation, and n is the sample size. The calculated t value is compared to a t distribution with df = n − 1 degrees of freedom.
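As a rough cross-check outside of SPSS, the same calculation can be sketched in Python with SciPy (assuming numpy and scipy are installed); the height values below are invented for illustration and are not the tutorial's dataset.

# One-sample t test: manual formula vs. scipy.stats.ttest_1samp
import numpy as np
from scipy import stats

heights = np.array([68.0, 70.5, 66.0, 72.0, 69.5, 67.0, 71.0, 68.5])  # made-up data
mu0 = 66.5  # hypothesized population mean (the Test Value)

x_bar = heights.mean()
s = heights.std(ddof=1)            # sample standard deviation
n = len(heights)
t_manual = (x_bar - mu0) / (s / np.sqrt(n))

t_scipy, p_value = stats.ttest_1samp(heights, popmean=mu0)
print(t_manual, t_scipy, p_value)  # the two t values agree; df = n - 1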
To run a One Sample t Test click Analyze > Compare Means > One-Sample T Test.
A Test Variable(s): The variable whose mean will be compared to the hypothesized test value.
B Test Value: The hypothesized population mean against which your test variable(s) will be compared.
Example
In the Test Value field, enter 66.5, which is the CDC's estimate of the average height (in inches) of adults over 20.
A Test Value: The number we entered as the test value in the One-Sample T Test window.
B t Statistic: In this example, t = 5.810. Note that t is calculated by dividing the mean difference (E) by the standard error of the mean (from the One-Sample Statistics box).
E Mean Difference: The difference between the "observed" sample mean (from the One
Sample Statistics box) and the "expected" mean (the specified test value (A)). The sign of the
mean difference corresponds to the sign of the t value (B). The positive t value in this example
indicates that the mean height of the sample is greater than the hypothesized value (66.5).
F Confidence Interval for the Difference: The confidence interval for the difference between the sample mean and the hypothesized test value.
Since p < 0.001, we reject the null hypothesis that the population mean height is equal to the hypothesized value and conclude that the mean height of the sample is significantly different from the average height of the overall adult population.
2- Paired Samples T Test
Note: The Paired Samples t Test can only compare the means for two (and only two) related (paired) units on a continuous outcome that is normally distributed.
Hypotheses
The null hypothesis (H0) and alternative hypothesis (H1) of the Paired Samples t Test can be expressed as:
H0: µ1 = µ2 ("the paired population means are equal")
H1: µ1 ≠ µ2 ("the paired population means are not equal")
OR
H0: µ1 - µ2 = 0 ("the difference between the paired population means is equal to 0")
H1: µ1 - µ2 ≠ 0 ("the difference between the paired population means is not 0")
Test Statistic
The test statistic for the Paired Samples t Test, denoted t, follows the same formula as the one sample t test, applied to the within-pair difference scores:
t = x̄diff / (sdiff / √n)
where x̄diff is the sample mean of the paired differences, sdiff is the sample standard deviation of the differences, and n is the number of pairs, with df = n − 1 degrees of freedom.
A Pair: The “Pair” column represents the number of Paired Samples t Tests to run. You may
choose to run multiple Paired Samples t Tests simultaneously by selecting multiple sets of
matched variables. Each new pair will appear on a new line.
Example
The sample dataset has placement test scores (out of 100 points) for four subject areas:
English, Reading, Math, and Writing. Suppose we are particularly interested in the
English and Math sections, and want to determine whether English or Math had
higher test scores on average. We could use a paired t test to test if there was a
significant difference in the average of the two tests.
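A minimal sketch of this kind of comparison in Python with SciPy follows; the English and Math scores are invented for illustration and are not the tutorial's dataset.

# Paired t test: scipy.stats.ttest_rel vs. a one-sample t test on the differences
import numpy as np
from scipy import stats

english = np.array([82, 75, 90, 68, 77, 85, 73, 80])  # made-up scores
math    = np.array([78, 70, 88, 72, 74, 80, 69, 75])

t_paired, p_paired = stats.ttest_rel(english, math)

diffs = english - math
t_check = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
print(t_paired, t_check, p_paired)  # the two t values agree; df = n - 1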
3- Independent Samples T Test
The Independent Samples t Test compares the means of two independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different. The Independent Samples t Test is a parametric test.
This test is also known as:
Independent t Test
Independent Measures t Test
Independent Two-sample t Test
Student t Test
Two-Sample t Test
Uncorrelated Scores t Test
Unpaired t Test
Unrelated t Test
Note: When one or more of the assumptions for the Independent Samples t Test are
not met, you may want to run the nonparametric Mann-Whitney U Test instead.
Hypotheses
The null hypothesis (H0) and alternative hypothesis (H1) of the independent samples T
test can be expressed in two different but equivalent ways:
H0: µ1 = µ2 ("the two population means are equal")
H1: µ1 ≠ µ2 ("the two population means are not equal")
OR
H0: µ1 - µ2 = 0 ("the difference between the two population means is equal to 0")
H1: µ1 - µ2 ≠ 0 ("the difference between the two population means is not 0")
Recall that the independent samples T test requires the assumption of homogeneity of
variance -- i.e., both groups have the same variance. SPSS conveniently includes a
test for the homogeneity of variance, called Levene's Test, whenever you run
an independent samples T test.
The hypotheses of Levene's test are:
H0: σ1² − σ2² = 0 ("the population variances of group 1 and group 2 are equal")
H1: σ1² − σ2² ≠ 0 ("the population variances of group 1 and group 2 are not equal")
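For reference, Levene's test can also be run in Python with SciPy; this is a sketch with invented data. Passing center='mean' gives the classic mean-centered Levene statistic (SciPy's default, center='median', is the Brown-Forsythe variant).

# Levene's test for homogeneity of variance on two made-up groups
import numpy as np
from scipy import stats

group1 = np.array([6.5, 7.1, 6.9, 7.4, 6.8, 7.0])
group2 = np.array([8.2, 9.5, 7.0, 10.1, 8.8, 9.9])

stat, p_value = stats.levene(group1, group2, center='mean')
print(stat, p_value)  # a small p-value suggests unequal population variances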
Test Statistic
The test statistic for an Independent Samples t Test is denoted t. There are actually two forms
of the test statistic for this test, depending on whether or not equal variances are assumed.
SPSS produces both forms of the test, so both forms of the test are described here. Note that
the null and alternative hypotheses are identical for both forms of the test
statistic.
When the two independent samples are assumed to be drawn from populations with identical population variances (i.e., σ1² = σ2²), the test statistic t is computed as:
t = (x̄1 − x̄2) / (sp · √(1/n1 + 1/n2))
where sp = √[((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)] is the pooled standard deviation, x̄1 and x̄2 are the sample means, s1² and s2² are the sample variances, and n1 and n2 are the sample sizes, with degrees of freedom df = n1 + n2 − 2.
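A short Python sketch of this pooled-variance form, checked against SciPy (data invented for illustration):

# Pooled-variance (equal variances assumed) t statistic
import numpy as np
from scipy import stats

group1 = np.array([6.5, 7.1, 6.9, 7.4, 6.8, 7.0])
group2 = np.array([8.2, 9.5, 7.0, 10.1, 8.8, 9.9])
n1, n2 = len(group1), len(group2)

sp = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
t_manual = (group1.mean() - group2.mean()) / (sp * np.sqrt(1/n1 + 1/n2))

t_scipy, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(t_manual, t_scipy, p_value)  # df = n1 + n2 - 2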
When the two independent samples are assumed to be drawn from populations with unequal variances (i.e., σ1² ≠ σ2²), the test statistic t is computed as:
t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2)
The calculated degrees of freedom are given by the Welch-Satterthwaite approximation,
df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]
and are usually not a whole number.
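The same unequal-variances form can be sketched in Python and checked against SciPy's equal_var=False option (invented data):

# Welch's (equal variances not assumed) t statistic and Satterthwaite df
import numpy as np
from scipy import stats

group1 = np.array([6.5, 7.1, 6.9, 7.4, 6.8, 7.0])
group2 = np.array([8.2, 9.5, 7.0, 10.1, 8.8, 9.9])
v1 = group1.var(ddof=1) / len(group1)   # s1^2 / n1
v2 = group2.var(ddof=1) / len(group2)   # s2^2 / n2

t_manual = (group1.mean() - group2.mean()) / np.sqrt(v1 + v2)
df = (v1 + v2)**2 / (v1**2 / (len(group1) - 1) + v2**2 / (len(group2) - 1))

t_scipy, p_value = stats.ttest_ind(group1, group2, equal_var=False)
print(t_manual, t_scipy, df, p_value)  # df is generally not an integer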
A Test Variable(s): This is the continuous variable whose means will be compared between the two groups.
B Grouping Variable: The independent variable. The categories (or groups) of the
independent variable will define which samples will be compared in the t test.
C Define Groups: Click Define Groups to define the category indicators (groups) to use
in the t test.
Clicking the Define Groups button (C) opens the Define Groups window:
2 Cut point: If your grouping variable is numeric and continuous, you can designate a cut point for dichotomizing the variable. This will separate the cases into two categories based on the cut point. Specifically, for a given cut point x, the new categories will be: one group containing the cases whose values are greater than or equal to x, and a second group containing the cases whose values are less than x.
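A tiny Python sketch of this dichotomization; the variable, values, and cut point are invented, and the ≥/< split mirrors the description above.

# Dichotomize a numeric grouping variable at a cut point
import numpy as np

hours_exercise = np.array([0, 1, 2, 3, 4, 5, 6])  # made-up values
cut = 3                                           # made-up cut point

# Cases at or above the cut point form one group, cases below it the other
group = np.where(hours_exercise >= cut, 1, 2)
print(list(zip(hours_exercise.tolist(), group.tolist())))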
PROBLEM STATEMENT
In our sample dataset, students reported their typical time to run a mile. Suppose we want to
know if the average time to run a mile is different for athletes versus non-athletes.
A Levene's Test for Equality of Variances: This section has the test results for Levene's Test.
The p-value of Levene's test is printed as ".000" (but should be read as p < 0.001 -- i.e., p very small), so we reject the null hypothesis of Levene's test and conclude that the variance in mile time of athletes is significantly different from that of non-athletes.
This tells us that we should look at the "Equal variances not assumed" row for
the t-test (and corresponding confidence interval) results. (If this test result had not
been significant -- that is, if we had observed p > α -- then we would have used the "Equal
variances assumed" output.)
B t-test for Equality of Means: This section provides the results of the actual Independent Samples t Test.
Since p < .001 is less than our chosen significance level α = 0.05, we can reject the null hypothesis and conclude that the mean mile time for athletes and non-athletes is significantly different.
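The decision rule described above (run Levene's test first, then read the appropriate row) can be sketched in Python as follows; the mile times are invented for illustration and are not the tutorial's dataset.

# Choose the t test form based on Levene's test, mirroring the rule above
import numpy as np
from scipy import stats

athletes     = np.array([6.5, 7.1, 6.9, 7.4, 6.8, 7.0, 6.6, 7.2])   # made-up mile times
non_athletes = np.array([8.2, 9.5, 7.0, 10.1, 8.8, 9.9, 11.2, 8.4])

alpha = 0.05
_, p_levene = stats.levene(athletes, non_athletes, center='mean')

# Significant Levene's test -> use the "equal variances not assumed" (Welch) form
equal_var = p_levene >= alpha
t_stat, p_value = stats.ttest_ind(athletes, non_athletes, equal_var=equal_var)
print(p_levene, equal_var, t_stat, p_value)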
4- One-Way ANOVA
The One-Way ANOVA ("analysis of variance") compares the means of two or more
independent groups in order to determine whether there is statistical evidence that the
associated population means are significantly different. One-Way ANOVA is a parametric test.
This test is also known as:
One-Factor ANOVA
One-Way Analysis of Variance
Between Subjects ANOVA
Hypotheses
The null and alternative hypotheses of one-way ANOVA can be expressed as:
H0: µ1 = µ2 = ... = µk ("all k population means are equal")
H1: at least one µi is different ("at least one of the k population means is not equal to the others")
where µi is the population mean of the ith group (i = 1, 2, ..., k).
To run a One-Way ANOVA, click Analyze > Compare Means > One-Way ANOVA.
A Dependent List: The continuous variable(s) whose means will be compared between the samples (groups). You may run multiple means comparisons simultaneously by selecting more than one dependent variable.
B Factor: The independent variable. The categories (or groups) of the independent variable
will define which samples will be compared. The independent variable must have at least two
categories (groups), but usually has three or more groups when used in a One-Way ANOVA.
C Contrasts: (Optional) Specify planned comparisons (contrasts) to be tested in addition to the overall ANOVA. Many online and print resources detail the distinctions among these options and will help users select appropriate contrasts. For detailed information on Contrasts, see the IBM SPSS help by clicking the ? button at the bottom of the dialog box.
D Post Hoc: (Optional) Request post hoc (also known as multiple comparisons) tests.
Specific post hoc tests can be selected by checking the associated boxes.
1 Equal Variances Assumed: Multiple comparisons options that assume homogeneity of
variance (each group has equal variance). For detailed information about the specific
comparison methods, click the Help button in this window.
2 Test: A two-sided hypothesis test is used by default; a directional, one-sided hypothesis test can be specified if you choose to use a Dunnett post hoc test. Click the
box next to Dunnett and then specify whether the Control Category is the Last or First
group, numerically, of your grouping variable. In the Test area, click either < Control or >
Control. The one-tailed options require that you specify whether you predict that the mean
for the specified control group will be less than (> Control) or greater than (< Control)
another group.
3 Equal Variances Not Assumed: Multiple comparisons options that do not assume
equal variances. For detailed information about the specific comparison methods, click
the Help button in this window.
4 Significance level: The desired cutoff for statistical significance. By default, significance
is set to 0.05.
When the initial F test indicates that significant differences exist between group means, post
hoc tests are useful for determining which specific means are significantly different when you
do not have specific hypotheses that you wish to test. Post hoc tests compare each pair of
means (like t-tests), but unlike t-tests, they correct the significance estimate to account for the
multiple comparisons.
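As an illustration of such corrected pairwise comparisons, here is a sketch of a Tukey HSD post hoc test (one of the equal-variances options) using the statsmodels package, assuming it is installed; the data are invented.

# Tukey HSD post hoc comparisons after a one-way ANOVA
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([25, 30, 28, 36, 29,    # made-up group "a"
                   45, 55, 29, 56, 40,    # made-up group "b"
                   30, 29, 33, 37, 27])   # made-up group "c"
groups = np.array(["a"] * 5 + ["b"] * 5 + ["c"] * 5)

# Compares every pair of group means, correcting for multiple comparisons
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result)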
E Options: Clicking Options will produce a window where you can specify
which Statistics to include in the output (Descriptive, Fixed and random effects,
Homogeneity of variance test, Brown-Forsythe, Welch), whether to include a Means plot,
and how the analysis will address Missing Values (i.e., Exclude cases analysis by
analysis or Exclude cases listwise). Click Continue when you are finished making
specifications.
Example
OUTPUT
The output displays a table entitled ANOVA.
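For comparison with the ANOVA table SPSS produces, a one-way ANOVA of the null hypothesis stated earlier can be sketched in Python with SciPy (three invented groups):

# One-way ANOVA: H0 is that all group population means are equal
import numpy as np
from scipy import stats

group_a = np.array([25, 30, 28, 36, 29])   # made-up data
group_b = np.array([45, 55, 29, 56, 40])
group_c = np.array([30, 29, 33, 37, 27])

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)  # a small p-value indicates at least one mean differs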