Z Test Formula
The Z test is a statistical method that compares the means of two populations. It assumes a normal
distribution under the null hypothesis and is performed on a large sample or on full population data.
For a small sample, the t-test is performed instead. The score produced by a Z test is called the
"Z score". The Z score can be computed when the population standard deviation of a large data set is
known. The Z test uses an observed value, generally within the limits of the given data, to calculate
the Z score. This value is known as the "standardized random variable".
The formula for the Z score is:

Z = (x − x̄) / σ

Where,
x = standardized random variable
x̄ = mean of the data
σ = population standard deviation.
The formula for the population standard deviation is:

σ = √( Σ (xi − x̄)² / n )

Where,
σ = population standard deviation
xi = the individual values in the data
x̄ = mean of the data
n = total number of items.
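The two formulas above can be sketched in a few lines of Python. The helper names and the sample data here are illustrative, not from the text:

```python
import math

def population_std(data):
    """Population standard deviation: square root of the mean squared deviation."""
    mean = sum(data) / len(data)
    return math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

def z_score(x, mean, sigma):
    """Z score: how many standard deviations x lies from the mean."""
    return (x - mean) / sigma

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical data; mean is 5
sigma = population_std(data)       # 2.0 for this data
print(z_score(9, 5, sigma))        # 2.0
```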
Z Test Problems
Solved Examples
Question 1: In a government organization, the mean basic salary of the employees is INR 5000. What
will be the Z score for an employee whose basic salary is INR 3000, if the standard deviation of the
population is 850?
Solution:
Standardized random variable x = 3000
Mean x̄ = 5000
Population standard deviation σ = 850
The formula for the Z score is:
Z score = (x − x̄) / σ
Z score = (3000 − 5000) / 850
= −2.353
Question 2: The marks obtained in a mathematics exam by students of a class vary from 33 to 100. If
the average mark is 62 and the standard deviation is 20, find the Z score for students who scored 80.
Solution:
Standardized random variable x = 80
Mean of the data x̄ = 62
Standard deviation σ = 20
Z score = (x − x̄) / σ
Z score = (80 − 62) / 20
= 0.9
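Both solved examples can be checked with the same one-line computation in Python:

```python
def z_score(x, mean, sigma):
    """Z score: (observed value - mean) / population standard deviation."""
    return (x - mean) / sigma

# Question 1: salary of 3000, mean 5000, standard deviation 850
print(round(z_score(3000, 5000, 850), 3))  # -2.353
# Question 2: mark of 80, mean 62, standard deviation 20
print(z_score(80, 62, 20))                 # 0.9
```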
The T-Test
The t-test assesses whether the means of two groups
are statistically different from each other. This analysis is appropriate
whenever you want to compare the means of two groups, and especially
appropriate as the analysis for the posttest-only two-group randomized
experimental design.
Figure 1. Idealized distributions for treated and comparison group posttest values.
Figure 1 shows the distributions for the treated (blue) and control (green)
groups in a study. Actually, the figure shows the idealized distribution --
the actual distribution would usually be depicted with a histogram or bar
graph. The figure indicates where the control and treatment group
means are located. The question the t-test addresses is whether the
means are statistically different.
What does it mean to say that the averages for two groups are
statistically different? Consider the three situations shown in Figure 2.
The first thing to notice about the three situations is that the difference
between the means is the same in all three. But, you should also
notice that the three situations don't look the same -- they tell very
different stories. The top example shows a case with moderate variability
of scores within each group. The second situation shows the high-variability
case, and the third shows the case with low variability. Clearly, we
would conclude that the two groups appear most different or distinct in
the bottom or low-variability case. Why? Because there is relatively little
overlap between the two bell-shaped curves. In the high variability case,
the group difference appears least striking because the two bell-shaped
distributions overlap so much.
Figure 2. Three scenarios for differences between means.
The t-value is the ratio of the difference between the two group means to
the variability of the groups. The top part of the formula is easy to
compute -- just find the difference between the means. The bottom part is
called the standard error of the difference. To compute it, we take the
variance for each group and divide it by the number of people in that
group. We add these two values and then take their square root:

SE(difference) = √( var_T / n_T + var_C / n_C )

Figure 4. Formula for the standard error of the difference between the means.
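A minimal sketch of this ratio in Python, with hypothetical group statistics (the means, variances, and group sizes below are made up for illustration):

```python
import math

def standard_error_diff(var1, n1, var2, n2):
    """Standard error of the difference: sqrt(var1/n1 + var2/n2)."""
    return math.sqrt(var1 / n1 + var2 / n2)

def t_value(mean1, mean2, var1, n1, var2, n2):
    """t = (difference between means) / (standard error of the difference)."""
    return (mean1 - mean2) / standard_error_diff(var1, n1, var2, n2)

# hypothetical treated group: mean 55, variance 25, n 25
# hypothetical control group: mean 50, variance 25, n 25
print(round(t_value(55, 50, 25, 25, 25, 25), 3))  # 3.536
```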
The t-value will be positive if the first mean is larger than the second and
negative if it is smaller. Once you compute the t-value you have to look it
up in a table of significance to test whether the ratio is large enough to
say that the difference between the groups is not likely to have been a
chance finding. To test the significance, you need to set a risk level
(called the alpha level). In most social research, the "rule of thumb" is to
set the alpha level at .05. This means that five times out of a hundred
you would find a statistically significant difference between the means
even if there was none (i.e., by "chance"). You also need to determine
the degrees of freedom (df) for the test. In the t-test, the degrees of
freedom is the sum of the persons in both groups minus 2. Given the
alpha level, the df, and the t-value, you can look the t-value up in a
standard table of significance (available as an appendix in the back of
most statistics texts) to determine whether the t-value is large enough to
be significant. If it is, you can conclude that the difference between the
means for the two groups is statistically significant (even given the variability).
Fortunately, statistical computer programs routinely print the significance
test results and save you the trouble of looking them up in a table.
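The full procedure described above -- compute the t-value from the group means and variances, count the degrees of freedom as the two group sizes minus 2, and compare against a critical value at the chosen alpha level -- can be sketched as follows. The posttest scores are hypothetical, and the critical value quoted in the comment is the standard two-tailed t for alpha = .05 with 10 degrees of freedom:

```python
import statistics as stats

def two_sample_t(group1, group2):
    """Independent-samples t-test pieces: t-value and degrees of freedom.
    Uses each group's sample variance, as in the standard-error formula."""
    n1, n2 = len(group1), len(group2)
    se = (stats.variance(group1) / n1 + stats.variance(group2) / n2) ** 0.5
    t = (stats.mean(group1) - stats.mean(group2)) / se
    df = n1 + n2 - 2   # degrees of freedom: persons in both groups minus 2
    return t, df

# hypothetical posttest scores for a treated and a comparison group
treated = [68, 72, 75, 70, 74, 71]
control = [65, 66, 70, 64, 69, 63]
t, df = two_sample_t(treated, control)
# with alpha = .05 and df = 10, the two-tailed critical t is about 2.228;
# a |t| larger than that would be declared statistically significant
```

In practice a statistics package reports an exact p-value instead of requiring a table lookup, as the text notes.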