Unit 5: Non-Parametric Tests and Parametric Tests
Non-Parametric Tests:
Non-parametric tests are those statistical tests which do not involve any population parameter and do not make any assumption about the statistical population. They are also called distribution-free tests because they do not require the population to follow any particular probability distribution. The main points of difference between parametric and non-parametric tests are as follows:
1. In the case of parametric tests, information about the population is completely known, while in the case of non-parametric tests no information about the population is available.
2. Parametric tests are concerned with parameter estimation as well as hypothesis testing but non-
parametric tests are concerned with hypothesis testing only.
3. Parametric tests are applicable only to variables, but non-parametric tests are applicable to both variables and attributes.
4. No parametric test exists for nominal scale data whereas non-parametric tests do exist for nominal
and ordinal scale data.
5. Parametric tests, where they exist, are relatively more powerful than non-parametric tests.
6. Parametric tests are not as readily comprehensible as non-parametric tests.
Note: There are numerous non-parametric tests, but due to paucity of time we will discuss only two of them: the Spearman Rank Correlation Test and the Chi-square Test of Goodness of Fit. However, before discussing these tests, let us first have a look at the various applications of the Chi-square Test.
The Chi-Square Test (χ2) has a large number of applications in statistics. Of these applications, one is concerned with a parametric test while the rest are concerned with non-parametric tests. These applications are:
4. It is used to test the significance of the homogeneity of independent estimates of the population
variance (non-parametric test).
5. It is used to test the significance of the homogeneity of independent estimates of the population
correlation coefficient (non-parametric test).
The Chi-Square goodness of fit test is a non-parametric test that is used to find out whether the observed values of a given phenomenon differ significantly from the expected values. The test was developed by Prof. Karl Pearson in the year 1900, and hence it is also popularly called Pearson's Test of Goodness of Fit. In the Chi-Square goodness of fit test, the term goodness of fit refers to the comparison of the observed sample distribution with the expected probability distribution. The test determines how well a theoretical distribution (such as the normal, binomial, or Poisson) fits the empirical distribution. In the Chi-Square goodness of fit test, the sample data are divided into intervals (classes), and the number of observations falling in each interval is compared with the expected number of observations in that interval.
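To make the comparison of observed and expected class frequencies concrete, here is a minimal Python sketch; the count data are hypothetical, and the use of the numpy and scipy libraries is an assumption of the sketch, not something prescribed by the text.

import numpy as np
from scipy import stats

# Hypothetical count data, e.g. number of defects recorded on 20 items
data = np.array([0, 1, 1, 2, 0, 3, 1, 0, 2, 1, 0, 1, 2, 0, 1, 3, 0, 1, 2, 1])
n = data.size

# Divide the sample into classes: the counts 0, 1, 2 and "3 or more"
observed = np.array([(data == 0).sum(), (data == 1).sum(),
                     (data == 2).sum(), (data >= 3).sum()])

# Expected frequencies in each class under a Poisson distribution
# fitted by the sample mean
lam = data.mean()
p = np.array([stats.poisson.pmf(0, lam), stats.poisson.pmf(1, lam),
              stats.poisson.pmf(2, lam), 1 - stats.poisson.cdf(2, lam)])
expected = n * p

print("Observed frequencies:", observed)
print("Expected frequencies:", expected.round(2))

The observed and expected frequencies obtained in this way are exactly what enter the test statistic described in the steps below.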
The hypotheses to be tested are:
H0: Oi = Ei, i.e., there is no significant difference between the observed and the expected values
Against H1: Oi ≠ Ei, i.e., there is a significant difference between the observed and the expected values [Two-Tailed Test]
3. In this step, under the assumption that H0 is true, we calculate the following test statistic:
χ2 = Σ [(Oi − Ei)^2 / Ei]
which follows the chi-square distribution with (k − 1) degrees of freedom, where Oi is the observed frequency, Ei is the expected frequency, and k is the number of classes.
4. This is the final step, where the calculated value of the test statistic is compared with the tabulated value of χ2 for (k − 1) degrees of freedom at the chosen level of significance. If the calculated value is greater than the tabulated value, we reject the null hypothesis and conclude that there is a significant difference between the observed and the expected frequencies. If the calculated value is less than the tabulated value, we may accept the null hypothesis and conclude that there is no significant difference between the observed and the expected frequencies.
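As a rough illustration of steps 3 and 4, the following Python sketch uses hypothetical frequencies from 120 rolls of a die and obtains the tabulated value from scipy; the data and the library choice are assumptions of the sketch, not part of the original text.

import numpy as np
from scipy import stats

# Hypothetical observed frequencies of the six faces in 120 rolls of a die
observed = np.array([25, 17, 15, 23, 24, 16])
# Expected frequencies under H0 (a fair die): each face expected 120/6 = 20 times
expected = np.full(6, observed.sum() / 6)

# Calculated value of the test statistic: chi2 = sum((Oi - Ei)^2 / Ei)
chi2_calc = ((observed - expected) ** 2 / expected).sum()

# Tabulated value of chi2 at the 5% level with k - 1 = 5 degrees of freedom
chi2_tab = stats.chi2.ppf(0.95, df=len(observed) - 1)

print(f"calculated chi2 = {chi2_calc:.3f}, tabulated chi2 = {chi2_tab:.3f}")
if chi2_calc > chi2_tab:
    print("Reject H0: observed and expected frequencies differ significantly.")
else:
    print("Accept H0: no significant difference between observed and expected frequencies.")

The same statistic and an associated p-value can also be obtained directly from scipy.stats.chisquare(observed, expected).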
Assumptions underlying the Chi-square Test of Goodness of Fit / Conditions for the validity of the Chi-square Test of Goodness of Fit:
1. The sample observations should be independent and drawn at random from the population.
2. The data should be given in original units, i.e., as frequencies or counts, not as percentages or proportions.
3. The total frequency should be reasonably large (usually at least 50).
4. The expected frequency of each class should not be less than 5; if it is, adjacent classes should be pooled before applying the test.
5. The constraints on the cell frequencies, if any, should be linear (for example, ΣOi = ΣEi).
Spearman Rank Correlation Test:
This test is used for testing the significance of the correlation between two series of ranks. The test is applicable in the case of small as well as large samples. In the Spearman Rank Correlation Test, the sample size is considered large if n > 10 and small if n ≤ 10. Let us discuss the procedure of hypothesis testing under the Spearman Rank Correlation Test in these two cases.
The steps under Case 1 (small sample, n ≤ 10) for hypothesis testing are outlined as follows.
Step 1: Formulation of null hypothesis and alternative hypothesis in the following manner
H0: R = 0, i.e., the correlation coefficient between the two series of ranks is not significant
Against H1: R ≠ 0, i.e., the correlation coefficient between the two series of ranks is significant
Step 3: Under the assumption that H0 is true, we calculate the test statistic
rs = 1 − (6 Σ di^2) / (n(n^2 − 1))
where di is the difference between the two ranks assigned to the i-th pair of observations and n is the number of pairs.
Step 4: Finally, we compare the calculated value of rs with the tabulated value of rs for the given n and level of significance. If the calculated value is higher than the tabulated value, we reject H0 and conclude that the correlation coefficient between the two series of ranks is significant; otherwise we may accept H0 and conclude that it is not significant.
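A minimal sketch of Case 1 in Python follows; the two series of ranks are hypothetical and serve only to illustrate the computation of rs, and the use of numpy is an assumption of the sketch.

import numpy as np

# Hypothetical ranks given by two judges to the same 8 contestants (n <= 10, so Case 1)
rank_judge1 = np.array([1, 2, 3, 4, 5, 6, 7, 8])
rank_judge2 = np.array([2, 1, 4, 3, 6, 5, 8, 7])

n = len(rank_judge1)
d = rank_judge1 - rank_judge2          # differences between paired ranks
rs = 1 - (6 * np.sum(d ** 2)) / (n * (n ** 2 - 1))

print(f"n = {n}, sum of d^2 = {np.sum(d ** 2)}, rs = {rs:.3f}")
# The calculated rs is then compared with the tabulated critical value of rs
# for n = 8 and the chosen level of significance in order to accept or reject H0.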
The steps under Case 2 (large sample, n > 10) for hypothesis testing are outlined as follows.
Step 1: Formulation of null hypothesis and alternative hypothesis in the following manner
H0: R = 0, i.e., the correlation coefficient between the two series of ranks is not significant
Against H1: R ≠ 0, i.e., the correlation coefficient between the two series of ranks is significant
Step 3: Under the assumption that H0 is true, we calculate the test statistic
z = rs √(n − 1)
where rs is the Spearman rank correlation coefficient computed as in Case 1 and n is the number of pairs of observations; under H0, z approximately follows the standard normal distribution.
Step 4: Finally, we compare the calculated value of the test statistic with its tabulated value and give the decision. If n ≤ 10, the calculated value of rs is compared with the tabulated value of rs, while if n > 10 the calculated value of z is compared with the tabulated value of z. In either case, if the calculated value of the test statistic is found to be higher than the tabulated value, we reject H0 and conclude that the correlation coefficient between the two series of ranks is significant; otherwise we may accept H0 and conclude that the correlation coefficient between the two series of ranks is not significant.
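For Case 2, a similar sketch with hypothetical ranks follows; scipy is used only to obtain the tabulated value of z, and the data are illustrative assumptions.

import numpy as np
from scipy import stats

# Hypothetical ranks of 15 students in two subjects (n > 10, so Case 2)
rank_x = np.array([1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14])
rank_y = np.array([2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 15])

n = len(rank_x)
d = rank_x - rank_y
rs = 1 - (6 * np.sum(d ** 2)) / (n * (n ** 2 - 1))

# Large-sample test statistic z = rs * sqrt(n - 1), approximately N(0, 1) under H0
z_calc = rs * np.sqrt(n - 1)
z_tab = stats.norm.ppf(0.975)          # tabulated value of z at the 5% level (two-tailed)

print(f"rs = {rs:.3f}, calculated z = {z_calc:.3f}, tabulated z = {z_tab:.3f}")
if abs(z_calc) > z_tab:
    print("Reject H0: the rank correlation between the two series is significant.")
else:
    print("Accept H0: the rank correlation between the two series is not significant.")

For convenience, scipy.stats.spearmanr(rank_x, rank_y) returns rs together with a p-value that can be used in place of the table look-up for large samples.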