T-STATISTICS NOTES

Z-score

Technically, z-scores are a conversion of individual scores into a standard form. The
conversion makes it easier to compare scores from different distributions; it relies on
knowing the population's standard deviation and mean.
T-scores
are also a conversion of individual scores into a standard form. However, t-scores are
used when researchers don't know the population standard deviation and must
estimate it from the sample.
• You must know the standard deviation of the population, and the sample
size should be above 30, in order to use the z-score. Otherwise, use the
t-score.
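To make the distinction concrete, here is a minimal Python sketch (the numbers and the numpy tooling are illustrative assumptions, not part of these notes):

import numpy as np

# z-score: population mean and SD are known
mu, sigma = 100, 15
score = 118
z = (score - mu) / sigma              # z = (X - mu) / sigma
print(f"z = {z:.2f}")

# t: population SD unknown, so estimate it from the sample
sample = np.array([112, 105, 118, 98, 121, 109])
s = sample.std(ddof=1)                # sample standard deviation (n - 1)
se = s / np.sqrt(len(sample))         # estimated standard error of the mean
t = (sample.mean() - mu) / se
print(f"t = {t:.2f}, df = {len(sample) - 1}")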
Degrees of freedom
are the number of values that are free to vary in a data set. What does
"free to vary" mean? Here's an example using the mean (average):
Q. Pick a set of numbers that have a mean (average) of 10.
A. Some sets of numbers you might pick: 9, 10, 11 or 8, 10, 12 or 5, 10, 15.
Once you have chosen the first two numbers in the set, the third is fixed. In other
words, you cannot freely choose the third item in the set. The only numbers that are
free to vary are the first two. You can pick 9 and 10, or 5 and 15, but after that
decision you must take the one particular number that gives the mean you are looking
for. So the degrees of freedom for a set of three numbers is TWO.
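A minimal Python sketch of the same idea (values are illustrative):

# Pick any two numbers freely; the third is forced by the target mean.
target_mean, n = 10, 3
free_choices = [9, 10]                       # free to vary
fixed = n * target_mean - sum(free_choices)  # determined, not free
print(free_choices + [fixed])                # [9, 10, 11] -> df = n - 1 = 2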
Assumptions
• Values in the sample are independent observations.
• The population sampled must be normal.

THE T-TEST
The t-test is used to determine whether two means are significantly different from
one another.
Measuring effect size for the t statistic
There are a number of different effect size statistics, the most commonly used being
eta squared and Cohen's d. Eta squared can range from 0 to 1 and represents the
proportion of variance in the dependent variable that is explained by the independent
(group) variable. SPSS does not provide eta squared values for t-tests.
Measuring the Percentage of Variance Explained, r²
An alternative method for measuring effect size is to determine how much of the
variability in the scores is explained by the treatment effect. The concept behind this
measure is that the treatment causes the scores to increase (or decrease), which means
that the treatment is causing the scores to vary. If we can measure how much of the
variability is explained by the treatment, we will obtain a measure of the size of the
treatment effect.
This value is called the percentage of variance accounted for by the treatment and is
identified as r². Rather than computing r² directly by comparing two different
calculations for SS, the value can be found from a single equation based on the
outcome of the t test.
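That single equation is the standard one for the t statistic, r² = t² / (t² + df). A quick illustrative check in Python (the t and df values are made up):

# Percentage of variance accounted for, from the outcome of a t test
t, df = 3.00, 15
r_squared = t**2 / (t**2 + df)
print(f"r^2 = {r_squared:.3f}")   # 0.375 -> 37.5% of the variability explained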

In particular, estimates of Cohen's d are not influenced at all by sample size, and
measures of r² are only slightly affected by changes in the size of the sample. The
sample variance, on the other hand, influences both hypothesis tests and measures of
effect size.

TYPES OF T-TESTS
There are a number of different types of t-tests available in SPSS. The three that will
be discussed here are
1. The One-sample t test is used to compare a sample mean to a specific value
(e.g., a population parameter; a neutral point on a Likert-type scale, chance
performance, etc.). Examples: 1. A study investigating whether stock brokers
differ from the general population on some rating scale where the mean for the
general population is known. 2. An observational study to investigate whether
scores differ from chance.
Calculation of t: t = (sample mean - comparison value) / standard error (a Python
sketch of all three tests follows this list).
2. The independent-samples t-test is used when you want to compare the mean
scores of two different groups of people or conditions.
3. The paired-samples t-test is used when you want to compare the mean scores for
the same group of people on two different occasions, or when you have matched
pairs. In both cases, you are comparing the values on some continuous
variable for two groups or two occasions.
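As referenced above, here is a sketch of all three tests using scipy.stats (the data are randomly generated stand-ins, not real scores):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)               # toy data for illustration only
sample = rng.normal(52, 10, 30)              # one group vs. a known value
group_a = rng.normal(50, 10, 30)             # two different groups
group_b = rng.normal(55, 10, 30)
pre = rng.normal(60, 8, 25)                  # same people, two occasions
post = rng.normal(57, 8, 25)

print(stats.ttest_1samp(sample, popmean=50)) # 1. one-sample t test
print(stats.ttest_ind(group_a, group_b))     # 2. independent-samples t-test
print(stats.ttest_rel(pre, post))            # 3. paired-samples t-test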

INDEPENDENT-SAMPLES T-TEST
Non-parametric alternative: Mann-Whitney U Test (when data are only of ordinal
level of measurement or do not meet the other assumptions).
What you need: Two variables: • one categorical, independent variable (e.g.
males/females) • one continuous, dependent variable (e.g. self-esteem scores).
Assumption #1: Your dependent variable should be measured on
a continuous scale
Assumption #2: Your independent variable should consist of two
categorical, independent groups
Assumption #3: You should have independence of observations.
Assumption #4: There should be no significant outliers.
Assumption #5: Your dependent variable should be approximately normally
distributed for each group of the independent variable. You can test for
normality using the Shapiro-Wilk test of normality.
Assumption #6: There needs to be homogeneity of variances

After running the analysis, here is what you need to check:


Step 1: Checking the information about the groups
Step 2: Checking assumptions
The first section of the Independent Samples Test output box gives you the results of
Levene's test for equality of variances. This tests whether the variance (variation) of
scores for the two groups (males and females) is the same. The outcome of this test
determines which of the t-values that SPSS provides is the correct one for you to use.
• If your Sig. value for Levene's test is larger than .05 (e.g. .07, .10), you should use
the first line in the table, which refers to Equal variances assumed.
• If the significance level of Levene's test is p = .05 or less (e.g. .01, .001), this means
that the variances for the two groups (males/females) are not the same. Therefore
your data violate the assumption of equal variance. Don't panic: SPSS is very kind
and provides you with an alternative t-value which compensates for the fact that
your variances are not the same. You should use the information in the second line
of the t-test table, which refers to Equal variances not assumed.
Step 3: Assessing differences between the groups
To find out whether there is a significant difference between your two groups, refer
to the column labelled Sig. (2-tailed), which appears under the section labelled t-test
for Equality of Means. Two values are given, one for equal variance, the other for
unequal variance. Choose whichever your Levene's test result says you should use
(see Step 2 above).
• If the value in the Sig. (2-tailed) column is equal to or less than .05 (e.g. .03, .01,
.001), there is a significant difference in the mean scores on your dependent
variable for each of the two groups.
• If the value is above .05 (e.g. .06, .10), there is no significant difference between
the two groups.
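Steps 2 and 3 can be mirrored outside SPSS; here is a sketch in Python with scipy, assuming invented male/female scores (center='mean' matches how SPSS computes Levene's test):

import numpy as np
from scipy import stats

males = np.array([34, 28, 41, 36, 30, 38, 27, 33])
females = np.array([31, 44, 25, 39, 29, 47, 35, 26])

# Step 2: Levene's test for equality of variances
lev_stat, lev_p = stats.levene(males, females, center='mean')
equal_var = lev_p > .05                      # which t-test line to use

# Step 3: the matching independent-samples t-test
t_stat, p = stats.ttest_ind(males, females, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}, equal_var = {equal_var}")
print(f"t = {t_stat:.2f}, Sig. (2-tailed) = {p:.3f}")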

PAIRED-SAMPLES T-TEST
Non-parametric alternative: Wilcoxon Signed Rank Test (see Chapter 16).
• If the dependent variable is dichotomous, you should instead use McNemar's test.
Assumption: the independent variable should consist of two categorical "related
groups" or "matched pairs".
What you need: One set of participants (or matched pairs). Each person (or pair) must
provide both sets of scores. Two variables: • one categorical independent variable (in
this case it is Time; with two different levels Time 1, Time 2) • one continuous,
dependent variable (e.g. Fear of Statistics Test scores) measured on two different
occasions or under different conditions
• Pre-test/post-test experimental designs are an example.
Output:
Step 1: Determining overall significance In the table labelled Paired Samples Test you
need to look in the final column, labelled Sig. (2-tailed)—this is your probability (p)
value.
Step 2: Comparing mean values Having established that there is a significant
difference, the next step is to find out which set of scores is higher (Time 1 or Time
2). To do this, look in the first printout box, labelled Paired Samples Statistics. This
box gives you the Mean scores for each of the two sets of scores.
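The same two steps, sketched in Python with scipy (the Time 1/Time 2 scores are invented):

import numpy as np
from scipy import stats

time1 = np.array([40, 42, 38, 45, 41, 39, 44, 43])   # e.g. Fear of Statistics, Time 1
time2 = np.array([36, 39, 35, 41, 38, 36, 40, 39])   # same people, Time 2

# Step 1: overall significance (the Sig. 2-tailed value)
t_stat, p = stats.ttest_rel(time1, time2)

# Step 2: compare the means to see which occasion scored higher
print(f"t = {t_stat:.2f}, p = {p:.4f}")
print(f"Mean Time 1 = {time1.mean():.2f}, Mean Time 2 = {time2.mean():.2f}")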

NON-PARAMETRIC ALTERNATIVES
When reporting descriptive statistics to accompany the results of a nonparametric test
of difference, such as the Mann–Whitney or Wilcoxon test, you should normally give
the median and range (not the mean and standard deviation) as the measures of central
tendency and dispersion. The median and range are more appropriate descriptives for
nonparametric tests because these are distribution-free tests and do not assume normal
distribution.
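For example, in Python (illustrative scores):

import numpy as np

scores = np.array([3, 7, 2, 9, 5, 12, 4])
print(f"median = {np.median(scores)}, range = {scores.max() - scores.min()}")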

The Mann-Whitney U Test is used when:

1. Comparing two independent groups: For example, testing if the math scores of
students in two different schools are different.
2. Data is not normally distributed: It's an alternative to the independent samples t-test
when the normality assumption is violated or when the data is ordinal.

• Does not assume normal distribution of data.
• Compares the ranks of values between two groups, not the raw data.
• Suitable for ordinal, interval, or ratio data.

Assumptions of the Mann-Whitney U Test

1. Independent Groups:
o The two groups being compared must be independent, meaning the data for
one group should not influence the data for the other.

2. Ordinal or Continuous Data:


o The data should be ordinal, interval, or ratio in scale. It works for ranked data
but is not suitable for nominal variables.

3. Random Sampling:
o The samples should be randomly selected from their respective populations.
4. Similarity in Shape of Distributions (for Median Comparison):
o While the test compares distributions, if you interpret differences in medians,
the shapes of the distributions should be similar. If distributions differ
significantly in shape, results may be harder to interpret.
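A minimal scipy sketch, assuming made-up math scores from two schools:

import numpy as np
from scipy import stats

school_a = np.array([55, 62, 48, 70, 66, 53, 59])
school_b = np.array([61, 75, 68, 72, 64, 80, 69])

# The test compares ranks between the two groups, not the raw scores
u_stat, p = stats.mannwhitneyu(school_a, school_b, alternative='two-sided')
print(f"U = {u_stat}, p = {p:.3f}")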

The Wilcoxon Signed-Rank Test is a non-parametric statistical test used to compare two
related samples or repeated measurements on a single sample. It evaluates whether their
population medians differ.

Assumptions

The Wilcoxon Signed-Rank Test is used when:

1. Comparing paired or related data: For example, pre-test and post-test scores from
the same group of individuals.
2. The data is not normally distributed: It serves as a non-parametric alternative to the
paired samples t-test.
3. Data is ordinal or interval/ratio: It works with ranked or continuous data but not
categorical data.
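A minimal scipy sketch, assuming invented pre-test/post-test scores (see the note on one-tailed p values in the OUTPUT section below):

import numpy as np
from scipy import stats

pre = np.array([20, 23, 19, 25, 22, 18, 24, 21])
post = np.array([18, 21, 17, 22, 20, 16, 23, 19])

# Two-sided by default; use alternative='greater' or 'less' for a one-tailed test
w_stat, p = stats.wilcoxon(pre, post)
print(f"W = {w_stat}, p = {p:.3f}")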

OUTPUT: SPSS gives the z value, not the T or W value shown in most textbooks. As
Howell (2002) explains, z should always be used for large samples, but it involves
extra calculations; SPSS does that automatically. The negative sign can be ignored
(as for the t-test).

The output also shows the p value. The hypothesis for this example was one-tailed,
so divide the p value by 2 to give p = .4715.
The Shapiro-Wilk test is used to assess whether a dataset is normally distributed.
Here's how to run it in SPSS:

1. Load Your Data: Open your dataset in SPSS.


2. Access the Test:
o Go to Analyze > Descriptive Statistics > Explore.
3. Set Up Variables:
o Move the variable(s) you want to test for normality into the Dependent List
box.
4. Request Normality Test:
o Click on the Plots button.
o Check the box for Normality plots with tests.
o Click Continue and then OK.
5. Interpret the Results:
o In the output, locate the Tests of Normality table.
o Check the row for Shapiro-Wilk:
• p > .05: Data is likely normally distributed.
• p ≤ .05: Data is not normally distributed.
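Outside SPSS, the same test is available in scipy; a minimal sketch with generated stand-in data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.normal(50, 10, 40)       # stand-in for your variable

w_stat, p = stats.shapiro(scores)
print(f"W = {w_stat:.3f}, p = {p:.3f}")
print("likely normal" if p > .05 else "not normal")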

When the ASSUMPTION OF HOMOGENEITY OF VARIANCES IS VIOLATED,
several alternative statistical tests can be used to ensure valid results. Here are the
main alternatives:

1. Welch’s ANOVA:

Welch’s ANOVA adjusts the degrees of freedom to account for unequal variances,
making it more reliable when variances differ across groups.

How to perform in SPSS:

o Go to Analyze → Compare Means → One-Way ANOVA.


o In the dialog box, click on Options and check Welch (listed with the other
statistics options).
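Outside SPSS, one option is the third-party pingouin package; a sketch under the assumption of a long-format table with hypothetical 'score' and 'group' columns:

import pandas as pd
import pingouin as pg   # third-party package: pip install pingouin

df = pd.DataFrame({
    'group': ['a'] * 5 + ['b'] * 5 + ['c'] * 5,
    'score': [4, 5, 6, 5, 4, 7, 9, 8, 10, 9, 3, 2, 4, 3, 2],
})
# Welch's ANOVA adjusts the degrees of freedom for unequal variances
print(pg.welch_anova(data=df, dv='score', between='group'))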

2. Brown-Forsythe Test: This test is a robust alternative to the One-Way ANOVA that is
less sensitive to unequal variances and is especially useful when you have more than two
groups.

The Brown-Forsythe test uses the median instead of the mean, making it less
influenced by outliers and unequal variances.

How to perform in SPSS:

o Go to Analyze → Compare Means → One-Way ANOVA.


o In the Options menu, select Brown-Forsythe as the test statistic.
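For the median-based comparison described above, scipy's levene with center='median' is the Brown-Forsythe variant for equality of variances; a sketch with invented groups (note that the SPSS One-Way ANOVA option is the related robust test of equality of means, which this call does not reproduce):

import numpy as np
from scipy import stats

g1 = np.array([12, 15, 11, 14, 13])
g2 = np.array([22, 29, 18, 35, 27])
g3 = np.array([16, 17, 15, 18, 16])

# center='median' gives the Brown-Forsythe (median-centred) test
stat, p = stats.levene(g1, g2, g3, center='median')
print(f"statistic = {stat:.2f}, p = {p:.3f}")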

3. Non-Parametric Tests:
If the violations are severe and cannot be corrected using adjustments like Welch’s or Brown-
Forsythe, you might consider using non-parametric tests, which do not require the assumption
of equal variances:

• Kruskal-Wallis Test (alternative to One-Way ANOVA):
o Use when comparing more than two independent groups.
o How to perform in SPSS:
▪ Go to Analyze → Nonparametric Tests → Legacy Dialogs → Kruskal-Wallis H.
• Mann-Whitney U Test (alternative to Independent t-test):
o Use when comparing two independent groups.
o How to perform in SPSS:
▪ Go to Analyze → Nonparametric Tests → Legacy Dialogs → 2 Independent Samples.
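Both non-parametric tests are also available in scipy; a combined sketch with invented groups:

import numpy as np
from scipy import stats

g1 = np.array([12, 15, 11, 14, 13])
g2 = np.array([22, 29, 18, 35, 27])
g3 = np.array([16, 17, 15, 18, 16])

# Kruskal-Wallis: more than two independent groups
print(stats.kruskal(g1, g2, g3))

# Mann-Whitney U: exactly two independent groups
print(stats.mannwhitneyu(g1, g2, alternative='two-sided'))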
