Chi-Square History
"Chi-squared test" is often shorthand for Pearson's chi-squared test. A chi-squared test, also referred to as a chi-square test or χ² test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough. Some examples of chi-squared tests where the chi-squared distribution is only approximately valid:
Pearson's chi-squared test, also known as the chi-squared goodness-of-fit test or chi-squared test for independence. When mentioned without any modifiers or other precluding context, this is usually the test meant (for an exact test used in place of χ², see Fisher's exact test).
Yates's correction for continuity, also known as Yates's chi-squared test.
Cochran–Mantel–Haenszel chi-squared test.
McNemar's test, used in certain 2 × 2 tables with pairing.
Linear-by-linear association chi-squared test.
The portmanteau test in time-series analysis, testing for the presence of autocorrelation.
Likelihood-ratio tests in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).
One case where the distribution of the test statistic is an exact chi-squared distribution is the test that the variance of a normally-distributed population has a given value based on a sample variance. Such a test is uncommon in practice because values of variances to test against are seldom known exactly.
Chi-squared test for variance in a normal population

If a sample of size n is taken from a population having a normal distribution, then there is a well-known result (see distribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of product items whose variation is to be tested. The test statistic T in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then T has a chi-squared distribution with n − 1 degrees of freedom. For example, if the sample size is 21, the acceptance region for T for a significance level of 5% is the interval 9.59 to 34.17 (the 2.5% and 97.5% points of the chi-squared distribution with 20 degrees of freedom).

Pearson's chi-squared test

Pearson's chi-squared test (χ²) is the best-known of several chi-squared tests: statistical procedures whose results are evaluated by reference to the chi-squared distribution. Its properties were first investigated by Karl Pearson in 1900.[1] In contexts where it is important to make a distinction between the test statistic and its distribution, names similar to Pearson χ-squared test or statistic are used. It tests a null hypothesis stating that the frequency distribution of certain events observed in a sample is consistent with a particular theoretical distribution. The events considered must be mutually exclusive and have total probability 1. A common case for this is where the events each cover an outcome of a categorical variable. A simple example is the hypothesis that an ordinary six-sided die is "fair", i.e., all six outcomes are equally likely to occur.
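Returning to the variance-test example in the previous section: the endpoints of the acceptance region quoted there are simply the 2.5% and 97.5% quantiles of the chi-squared distribution with 20 degrees of freedom, and can be checked numerically. A minimal sketch, assuming scipy is available:

```python
# Reproduce the 5% acceptance region for the variance test with n = 21.
from scipy.stats import chi2

n = 21                # sample size
df = n - 1            # degrees of freedom of the test statistic T
alpha = 0.05          # significance level

# Two-sided test: accept H0 when T falls in the central 95% of chi2(20).
lower = chi2.ppf(alpha / 2, df)        # ~9.59
upper = chi2.ppf(1 - alpha / 2, df)    # ~34.17
print(f"Accept H0 if {lower:.2f} <= T <= {upper:.2f}")
```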
Definition

Pearson's chi-squared test is used to assess two types of comparison: tests of goodness of fit and tests of independence.
A test of goodness of fit establishes whether or not an observed frequency distribution differs from a theoretical distribution. A test of independence assesses whether paired observations on two variables, expressed in a contingency table, are independent of each other; for example, whether people from different regions differ in the frequency with which they report that they support a political candidate.
The first step in the chi-squared test is to calculate the chi-squared statistic. In order to avoid ambiguity, the value of the test statistic is denoted by X² rather than χ² (that is, an uppercase chi instead of a lowercase one, or an uppercase roman X); this also serves as a reminder that the distribution of the test statistic is not exactly that of a chi-squared random variable. However, some authors do use the χ² notation for the test statistic. An exact test which does not rely on using the approximate χ² distribution is Fisher's exact test: this is substantially more accurate in evaluating the significance level of the test, especially with small numbers of observations.
The chi-squared test statistic is calculated by finding the difference between each observed and theoretical frequency for each possible outcome, squaring them, dividing each by the theoretical frequency, and taking the sum of the results. A second important part of determining the test statistic is to define the degrees of freedom of the test: this is essentially the number of observed frequencies adjusted for the effect of using some of those observations to define the theoretical frequencies.

Test for fit of a distribution

Discrete uniform distribution

In this case N observations are divided among n cells. A simple application is to test the hypothesis that, in the general population, values would occur in each cell with equal frequency. The "theoretical frequency" for any cell (under the null hypothesis of a discrete uniform distribution) is thus calculated as

E_i = \frac{N}{n},
and the reduction in the degrees of freedom is p = 1, notionally because the observed frequencies O_i are constrained to sum to N.

Other distributions

When testing whether observations are random variables whose distribution belongs to a given family of distributions, the "theoretical frequencies" are calculated using a distribution from that family fitted in some standard way. The reduction in the degrees of freedom is calculated as p = s + 1, where s is the number of parameters used in fitting the distribution. For instance, when checking a three-parameter Weibull distribution, p = 4, and when checking a normal distribution (where the parameters are mean and standard deviation), p = 3. In other words, there will be n − p degrees of freedom, where n is the number of categories. Note that the degrees of freedom are not based on the number of observations, as with a Student's t or F-distribution. For example, if testing for a fair, six-sided die, there would be five degrees of freedom because there are six categories (one for each face). The number of times the die is rolled has no effect on the number of degrees of freedom.

Calculating the test-statistic

The value of the test-statistic is

X^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}
where X² = Pearson's cumulative test statistic, which asymptotically approaches a χ² distribution; O_i = an observed frequency; E_i = an expected (theoretical) frequency, asserted by the null hypothesis; and n = the number of cells in the table.
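As a worked illustration of the formula just given, consider the fair-die hypothesis from the introduction. A minimal sketch, assuming scipy is available; the roll counts are made-up illustration data:

```python
# Pearson's statistic for 60 die rolls, computed directly from the formula
# and again with scipy for comparison.
from scipy.stats import chisquare

observed = [5, 8, 9, 8, 10, 20]      # hypothetical counts for faces 1..6
n_cells = len(observed)
N = sum(observed)
expected = [N / n_cells] * n_cells   # uniform null hypothesis: E_i = N/n

# X^2 = sum over cells of (O_i - E_i)^2 / E_i
x_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat, p_value = chisquare(observed)  # same statistic, df = n - 1 = 5
print(x_sq, stat, p_value)
```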
[Figure: chi-squared distribution, showing X² on the x-axis and p-value on the y-axis.]

The chi-squared statistic can then be used to calculate a p-value by comparing the value of the statistic to a chi-squared distribution. The number of degrees of freedom is equal to the number of cells n, minus the reduction in degrees of freedom, p. The result about the number of degrees of freedom is valid when the original data was multinomial and hence the estimated parameters are efficient for minimizing the chi-squared statistic. More generally, however, when maximum likelihood estimation does not coincide with minimum chi-squared estimation, the distribution will lie somewhere between a chi-squared distribution with n − 1 − p and n − 1 degrees of freedom (see, for instance, Chernoff and Lehmann, 1954).

Bayesian method

For more details on this topic, see Categorical distribution#Bayesian statistics.
In Bayesian statistics, one would instead use a Dirichlet distribution as conjugate prior. If one took a uniform prior, then the maximum likelihood estimate for the population probability is the observed probability, and one may compute a credible region around this or another estimate.
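As an illustration of this Bayesian alternative, here is a minimal sketch, assuming numpy is available; the counts are made-up illustration data, and the uniform prior corresponds to a Dirichlet with all parameters equal to 1:

```python
# Dirichlet posterior over category probabilities under a uniform prior,
# with a simple simulated 95% credible interval for each category.
import numpy as np

observed = np.array([5, 8, 9, 8, 10, 20])   # hypothetical category counts
prior = np.ones_like(observed)              # uniform (flat) Dirichlet prior
posterior = observed + prior                # Dirichlet posterior parameters

draws = np.random.default_rng(0).dirichlet(posterior, size=10_000)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
for i, (a, b) in enumerate(zip(lo, hi), start=1):
    print(f"category {i}: 95% credible interval [{a:.3f}, {b:.3f}]")
```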
Test of independence

In this case, an "observation" consists of the values of two outcomes and the null hypothesis is that the occurrence of these outcomes is statistically independent. Each observation is allocated to one cell of a two-dimensional array of cells (called a contingency table) according to the values of the two outcomes. If there are r rows and c columns in the table, the "theoretical frequency" for a cell, given the hypothesis of independence, is

E_{i,j} = \frac{(\text{row } i \text{ total}) \times (\text{column } j \text{ total})}{N},

where N is the total sample size (the sum of all cells in the table). The value of the test-statistic is

X^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}
Fitting the model of "independence" reduces the number of degrees of freedom by p = r + c − 1. The number of degrees of freedom is equal to the number of cells rc, minus the reduction in degrees of freedom, p, which reduces to (r − 1)(c − 1). For the test of independence, also known as the test of homogeneity, a chi-squared probability of less than or equal to 0.05 (or the chi-squared statistic being at or larger than the 0.05 critical point) is commonly interpreted by applied workers as justification for rejecting the null hypothesis that the row variable is independent of the column variable.[2] The alternative hypothesis corresponds to the variables having an association or relationship where the structure of this relationship is not specified.
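A minimal sketch of the test of independence, assuming scipy is available; the 2 × 3 table is made-up illustration data (two regions by three levels of reported support, echoing the earlier example):

```python
# Test of independence on a hypothetical 2 x 3 contingency table.
from scipy.stats import chi2_contingency

table = [[30, 20, 10],   # hypothetical counts for region A
         [20, 25, 15]]   # hypothetical counts for region B

stat, p_value, df, expected = chi2_contingency(table, correction=False)
print(stat, p_value, df)  # df = (r - 1)(c - 1) = (2 - 1)(3 - 1) = 2
print(expected)           # E_ij = (row i total)(column j total) / N
```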
Assumptions

The chi-squared test, when used with the standard approximation that a chi-squared distribution is applicable, has the following assumptions:[citation needed]

Simple random sample. The sample data is a random sample from a fixed distribution or population where each member of the population has an equal probability of selection. Variants of the test have been developed for complex samples, such as where the data is weighted.
Sample size (whole table). A sample with a sufficiently large size is assumed. If a chi-squared test is conducted on a sample with a smaller size, the test will yield an inaccurate inference; the researcher, by using the chi-squared test on small samples, might end up committing a Type II error.
Expected cell count. Adequate expected cell counts. Some require 5 or more, and others require 10 or more. A common rule is 5 or more in all cells of a 2-by-2 table, and 5 or more in 80% of cells in larger tables, but no cells with zero expected count. When this assumption is not met, Yates's correction is applied.
Independence. The observations are always assumed to be independent of each other. This means chi-squared cannot be used to test correlated data (like matched pairs or panel data); in those cases, McNemar's test may be more appropriate.

Examples

Goodness of fit

For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 44 men in the sample and 56 women, then

X^2 = \frac{(44 - 50)^2}{50} + \frac{(56 - 50)^2}{50} = 1.44
If the null hypothesis is true (i.e., men and women are chosen with equal probability in the sample), the test statistic will be drawn from a chi-squared distribution with one degree of freedom. Though one might expect two degrees of freedom (one each for the men and women), we must take into account that the total number of men and women is constrained (100), and thus there is only one degree of freedom (2 − 1). Alternatively, if the male count is known the female count is determined, and vice versa. Consultation of the chi-squared distribution for 1 degree of freedom shows that the probability of observing this difference (or a more extreme difference than this) if men and women are equally numerous in the population is approximately 0.23. This probability is higher than conventional criteria for statistical significance (0.001 to 0.05), so normally we would not reject the null hypothesis that the number of men in the population is the same as the number of women (i.e., we would consider our sample within the range of what we would expect for a 50/50 male/female ratio).
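The figures above can be reproduced directly. A minimal sketch, assuming scipy is available:

```python
# Goodness-of-fit test for 44 men and 56 women against a 50/50 null.
from scipy.stats import chi2, chisquare

stat, p_value = chisquare([44, 56], [50, 50])  # stat = 1.44, df = 1
print(stat, p_value)                           # p-value ~ 0.23

# Equivalently, the area under chi2(1) to the right of 1.44.
print(chi2.sf(1.44, df=1))
```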
Problems

The approximation to the chi-squared distribution breaks down if expected frequencies are too low. It will normally be acceptable so long as no more than 20% of the events have expected frequencies below 5. Where there is only 1 degree of freedom, the approximation is not reliable if expected frequencies are below 10. In this case, a better approximation can be obtained by reducing the absolute value of each difference between observed and expected frequencies by 0.5 before squaring; this is called Yates's correction for continuity. In cases where the expected value, E, is found to be small (indicating either a small underlying population probability, or a small number of observations), the normal approximation of the multinomial distribution can fail, and in such cases it is found to be more appropriate to use the G-test, a likelihood-ratio-based test statistic. Where the total sample size is small, it is necessary to use an appropriate exact test, typically either the binomial test or (for contingency tables) Fisher's exact test; but note that this test assumes fixed and known marginal totals.

Distribution

The null distribution of the Pearson statistic with j rows and k columns is approximated by the chi-squared distribution with (k − 1)(j − 1) degrees of freedom.[3] This approximation arises as the true distribution, under the null hypothesis, if the expected value is given by a multinomial distribution. For large sample sizes, the central limit theorem says this distribution tends toward a certain multivariate normal distribution.

Two cells

In the special case where there are only two cells in the table, the expected values follow a binomial distribution,

\operatorname{Bin}(n, p),
where p is the probability under the null hypothesis, and n is the number of observations in the sample.
In the above example the hypothesised probability of a male observation is 0.5, with 100 samples. Thus we expect to observe 50 males. If n is sufficiently large, the above binomial distribution may be approximated by a Gaussian (normal) distribution, and thus the Pearson test statistic approximates a chi-squared distribution:

\operatorname{Bin}(n, p) \approx \mathrm{N}(np,\, np(1 - p))
Let O_1 be the number of observations from the sample that are in the first cell. The Pearson test statistic can be expressed as

X^2 = \frac{(O_1 - np)^2}{np} + \frac{(n - O_1 - n(1 - p))^2}{n(1 - p)}
By the normal approximation to a binomial, this is the square of one standard normal variate and hence is distributed as chi-squared with 1 degree of freedom. Note that the denominator is one standard deviation of the Gaussian approximation, so it can be written

X^2 = \left( \frac{O_1 - np}{\sqrt{np(1 - p)}} \right)^2
Consistent with the meaning of the chi-squared distribution, we are measuring how probable the observed number of standard deviations away from the mean is under the Gaussian approximation (which is a good approximation for large n). The chi-squared distribution is then integrated on the right of the statistic value to obtain the p-value, which is equal to the probability of getting a statistic equal to or larger than the observed one, assuming the null hypothesis.

Two-by-two contingency tables

When the test is applied to a contingency table containing two rows and two columns, the test is equivalent to a Z-test of proportions.

Many cells

Similar arguments as above lead to the desired result.[citation needed]
Each cell (except the final one, whose value is completely determined by the others) is treated as an independent binomial variable, and their contributions are summed and each contributes one degree of freedom.
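The two-by-two equivalence noted above can be checked numerically: Pearson's statistic on a 2 × 2 table (without continuity correction) equals the square of the pooled two-proportion Z statistic. A minimal sketch, assuming numpy and scipy are available; the counts are made-up illustration data:

```python
# Chi-squared on a 2 x 2 table versus the squared two-proportion Z statistic.
import numpy as np
from scipy.stats import chi2_contingency

x1, n1 = 40, 100   # hypothetical successes / trials in group 1
x2, n2 = 25, 100   # hypothetical successes / trials in group 2

# Pooled two-proportion Z statistic.
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

# Pearson chi-squared on the corresponding table, without Yates's correction.
table = [[x1, n1 - x1], [x2, n2 - x2]]
stat, p_value, df, _ = chi2_contingency(table, correction=False)

print(z ** 2, stat)  # the two values agree
```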