Hypothesis Testing Parametric and Non Parametric Tests
Introduction
Hypothesis testing is one of the most important concepts in statistics, and it is heavily used
by statisticians, machine learning engineers, and data scientists.
In hypothesis testing, statistical tests are used to decide whether the null hypothesis is rejected or not
rejected. These statistical tests assume a null hypothesis of no relationship or no difference between
groups.
In this article, we will discuss the statistical tests used for hypothesis testing, covering both parametric
and non-parametric tests.
Table of Contents
T-test
Z-test
F-test
ANOVA
Chi-square
Mann-Whitney U-test
Kruskal-Wallis H-test
Parametric Tests
The basic principle behind parametric tests is that we have a fixed set of parameters that determine a
probabilistic model (one that may also be used in machine learning).
Parametric tests are tests for which we have prior knowledge of the population distribution (i.e.,
normal), or, if we do not, we can approximate it by a normal distribution with the help of the
Central Limit Theorem. For a normal distribution, those parameters are:
Mean
Standard Deviation
Ultimately, whether a test is classified as parametric depends entirely on the assumptions made about
the population. Many parametric tests are available; among other things, they are used:
To find the confidence interval for the population mean when the standard deviation is known.
To find the confidence interval for the population mean when the standard deviation is unknown.
To find the confidence interval for the population variance.
To find the confidence interval for the difference of two means when the standard deviations are unknown.
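As an illustration of the second case, a confidence interval for a population mean with an unknown standard deviation can be computed from the t-distribution. This is a minimal sketch using SciPy; the sample data are made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical sample data (made up for illustration)
sample = np.array([12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean, using the sample std s

# 95% confidence interval based on the t-distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(low, high)
```

Because the population standard deviation is unknown, the interval uses the sample standard deviation and the t-distribution rather than the normal distribution.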
Non-parametric Tests
In non-parametric tests, we do not make any assumptions about the parameters of the population we are
studying. In fact, these tests do not depend on the population at all.
Hence, there is no fixed set of parameters, and no distribution (normal or otherwise) is assumed.
This is why non-parametric tests are also referred to as distribution-free tests.
Non-parametric tests have been gaining popularity and influence in recent years, for several reasons:
The main reason is that the data does not have to meet the strict distributional requirements of
parametric tests.
The second reason is that we do not need to make assumptions about the population on which we are
doing the analysis.
Most non-parametric tests are very easy to apply and to understand, i.e., their complexity is very low.
T-Test
It essentially tests the significance of the difference between mean values when the sample size is small
(i.e., less than 30) and the population standard deviation is not available.
A t-test can be a:
One-sample t-test: to compare a sample mean with the population mean. The test statistic is
t = (x̄ − μ) / (s / √n)
where x̄ is the sample mean, μ is the population mean, s is the sample standard deviation, and n is the sample size.
If the value of the test statistic is greater than the table value -> reject the null hypothesis.
If the value of the test statistic is less than the table value -> do not reject the null hypothesis.
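The one-sample t-test above can be sketched with SciPy's `ttest_1samp`; the sample data and the hypothesised mean of 5.0 are assumptions made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical small sample (n < 30), population standard deviation unknown
sample = np.array([4.8, 5.2, 5.1, 4.9, 5.0, 5.3, 4.7, 5.1, 5.0, 4.9])

# H0: the population mean equals 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(t_stat, p_value)

# Decision at the 5% significance level
if p_value < 0.05:
    print("Reject the null hypothesis")
else:
    print("Do not reject the null hypothesis")
```

Comparing the p-value with the significance level is equivalent to comparing the test statistic with the table value described above.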
Z-Test
It is used to determine whether the means are different when the population variance is known and the
sample size is large (i.e., greater than 30).
One-sample z-test: to compare a sample mean with the population mean. The test statistic is
z = (x̄ − μ) / (σ / √n)
where x̄ is the sample mean, μ is the population mean, σ is the known population standard deviation, and n is the sample size.
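The z statistic can be computed directly from its formula, with the p-value taken from the standard normal distribution. This is a minimal sketch in which the "known" σ and the generated sample are assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical large sample (n > 30) with a known population standard deviation
rng = np.random.default_rng(0)
sigma = 2.0                                   # known population sigma (assumed)
sample = rng.normal(loc=10.0, scale=sigma, size=50)

mu0 = 10.0                                    # hypothesised population mean
z = (sample.mean() - mu0) / (sigma / np.sqrt(len(sample)))

# Two-sided p-value from the standard normal distribution
p_value = 2 * stats.norm.sf(abs(z))
print(z, p_value)
```

Because σ is known and n is large, the normal distribution is used instead of the t-distribution.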
F-Test
It is a test for the null hypothesis that two normal populations have the same variance.
The test statistic is the ratio of the two sample variances:
F = s₁² / s₂²
where s₁² and s₂² are the variances of the two samples.
By changing the variances in the ratio, the F-test becomes a very flexible test. It can then be used, for
example, to test the equality of several means (as in ANOVA) or the overall significance of a regression model.
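A sketch of the basic two-sample variance version, computing F = s₁²/s₂² by hand and taking the p-value from the F distribution (the two samples are made-up data assumed to come from normal populations):

```python
import numpy as np
from scipy import stats

# Two hypothetical samples, assumed drawn from normal populations
a = np.array([21.5, 24.5, 18.5, 17.2, 14.5, 23.2, 22.1, 20.5, 19.4, 18.1])
b = np.array([28.3, 18.9, 20.2, 22.5, 19.4, 21.0, 23.0, 20.8])

# F statistic: ratio of the sample variances (larger over smaller, so F >= 1)
s1, s2 = a.var(ddof=1), b.var(ddof=1)
f = max(s1, s2) / min(s1, s2)
df1 = (len(a) if s1 >= s2 else len(b)) - 1   # df of the numerator sample
df2 = (len(b) if s1 >= s2 else len(a)) - 1   # df of the denominator sample

# Two-sided p-value from the F distribution (capped at 1)
p_value = min(1.0, 2 * stats.f.sf(f, df1, df2))
print(f, p_value)
```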
ANOVA
It is used to test the significance of the differences in mean values among more than two sample
groups.
It uses an F-test to statistically test the equality of the means and the relative variance between them.
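A one-way ANOVA across three groups can be sketched with SciPy's `f_oneway`; the three groups below are hypothetical data invented for the example:

```python
from scipy import stats

# Three hypothetical sample groups (made-up scores)
group1 = [85, 86, 88, 75, 78, 94, 98, 79, 71, 80]
group2 = [91, 92, 93, 85, 87, 84, 82, 88, 95, 96]
group3 = [79, 78, 88, 94, 92, 85, 83, 85, 82, 81]

# One-way ANOVA: H0 is that all group means are equal
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f_stat, p_value)
```

A small p-value would indicate that at least one group mean differs from the others; ANOVA alone does not say which one.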
Chi-Square Test
It helps in assessing the goodness of fit between a set of observed frequencies and those expected
theoretically.
It makes a comparison between the expected frequencies and the observed frequencies.
If there is no difference between the expected and observed frequencies, then the value of chi-square is
equal to zero.
It is therefore also known as the "goodness of fit" test, which determines whether a particular distribution
fits the observed data or not.
Chi-square can also be used as a parametric test for the population variance based on the sample variance.
If we take a sample variance, divide it by the known population variance, and multiply the quotient by
(n − 1), where n is the number of items in the sample, we get the value of chi-square:
χ² = (n − 1) s² / σ²
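Both uses can be sketched in a few lines: a goodness-of-fit test with SciPy's `chisquare`, and the variance statistic χ² = (n − 1)s²/σ² computed directly from its formula. The die-roll counts, the sample, and the "known" population variance are all assumptions made up for the example:

```python
import numpy as np
from scipy import stats

# --- Goodness of fit: hypothetical die-roll counts vs. a fair die ---
observed = [18, 22, 16, 14, 12, 18]   # 100 rolls (made-up counts)
expected = [100 / 6] * 6              # equal frequencies under a fair die
chi2_gof, p_gof = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2_gof, p_gof)

# --- Test for the population variance: chi2 = (n - 1) * s^2 / sigma^2 ---
sample = np.array([10.2, 9.8, 10.5, 10.1, 9.7, 10.3, 9.9, 10.4])
sigma2 = 0.08                         # known population variance (assumed)
n = len(sample)
chi2_var = (n - 1) * sample.var(ddof=1) / sigma2

# Two-sided p-value from the chi-square distribution with n - 1 df
p_var = 2 * min(stats.chi2.sf(chi2_var, n - 1),
                stats.chi2.cdf(chi2_var, n - 1))
print(chi2_var, p_var)
```

If the observed frequencies matched the expected ones exactly, `chi2_gof` would be zero, as stated above.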