
Properties of the Bernoulli distribution
1. p is the parameter of the distribution.
2. The mean of the distribution is p.
3. The variance is pq, where q = 1 - p.
4. The variance is less than the mean.
5. If X1, X2, ..., Xn are n independently and identically distributed Bernoulli variates with parameter p, then (X1 + X2 + ... + Xn) follows the Binomial distribution with n and p as parameters.

Situations where the Binomial distribution can be applied
(1) The random experiment has two outcomes, which can be called 'success' and 'failure'.
(2) The probability of success in a single trial remains constant from trial to trial of the experiment.
(3) The experiment is repeated a finite number of times.
(4) The trials are independent.

Properties of the Binomial Distribution
1. The Binomial distribution is a discrete probability distribution.
2. The shape and location of the Binomial distribution change as p changes for a given n.
3. The Binomial distribution has one or two modal values.
4. The mean of the Binomial distribution increases as n increases, with p remaining constant.
5. If n is large and neither p nor q is too close to zero, the Binomial distribution may be approximated by the normal distribution.
6. The Binomial distribution has mean = np and S.D. = √(npq).
7. If two independent random variables follow the Binomial distribution, their sum also follows the Binomial distribution.

Fitting a Binomial Distribution (a sketch of this procedure in code follows the steps)
1. Determine the values of p, q and n and substitute them in the function nCx p^x q^(n-x); this gives the probability function of the Binomial distribution.
2. Put x = 0, 1, 2, ..., n in the function nCx p^x q^(n-x); this gives n + 1 terms.
3. Multiply each such term by N (the total frequency) to obtain the expected frequencies.
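A minimal sketch of these three steps in Python; the observed frequencies are made-up illustrative data, and p is estimated from the data as the sample mean divided by n.

```python
import math

# Hypothetical observed frequency table: x -> frequency, with n = 4 trials.
observed = {0: 5, 1: 18, 2: 28, 3: 12, 4: 7}
n = 4
N = sum(observed.values())                     # total frequency

# Step 1: estimate p from the data (mean of x divided by n).
mean_x = sum(x * f for x, f in observed.items()) / N
p = mean_x / n
q = 1 - p

# Steps 2 and 3: evaluate nCx p^x q^(n-x) for x = 0..n and multiply by N.
expected = {x: N * math.comb(n, x) * p**x * q**(n - x) for x in range(n + 1)}
print(p, expected)
```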
Uses of the Poisson Distribution
Practical situations where the Poisson distribution can be used:
(1) To count the number of telephone calls arriving at a telephone switchboard in unit time (say, per minute).
(2) To count the number of customers arriving at a supermarket (say, per hour).
(3) To count the number of defects per unit of a manufactured product (in Statistical Quality Control).
(4) To count the number of radioactive disintegrations of a radioactive element per unit of time (in Physics).
(5) To count the number of bacteria per unit.
(6) To count the number of defective materials, say pins, blades etc., in a packing of manufactured goods.

Characteristics of the Poisson Distribution
1. The Poisson distribution is a discrete probability distribution.
2. If x follows a Poisson distribution, then x takes the values 0, 1, 2, ... to infinity.
3. It has a single parameter, m. When m is known, all the terms can be found out.
4. The mean and variance of the Poisson distribution are both equal to m.
5. The Poisson distribution is a positively skewed distribution.

Normal distribution as a limiting case of the Binomial distribution
The Binomial distribution is an important theoretical distribution for discrete variables. The Binomial distribution tends to the Normal distribution under the following conditions:
1. The number of trials (n) is very large.
2. p and q (i.e., the probability of success in a single trial and the probability of its failure) are almost equal.
Then the Binomial distribution can be approximated by the normal distribution. The mean and standard deviation are called the parameters of the Normal distribution.

Properties (characteristics) of the Normal Distribution
1. The normal curve is a continuous curve.
2. The normal curve is bell shaped.
3. The normal curve is symmetric about the mean.
4. Mean, median and mode are equal for a normal distribution.
5. The height of the normal curve is at its maximum at the mean.
6. There is only one maximum point, which occurs at the mean.
7. The ordinate at the mean divides the whole area into two equal parts (i.e., 0.5 on either side).
8. The curve is asymptotic to the base on either side, i.e. the curve approaches nearer and nearer to the base but never touches it.
9. The coefficient of skewness is 0.
10. The normal curve is unimodal, i.e. it has only one mode.
11. The points of inflexion occur at μ ± σ; at the points of inflexion the curve changes from concavity to convexity.
12. Q1 and Q3 are equidistant from the median.
13. The mean deviation of the Normal distribution is (4/5)σ and the Q.D. is (2/3)σ.

Standard normal variate (Unit normal variate)
If X is a random variable following the Normal distribution with mean μ and standard deviation σ, then the variable Z = (X - μ)/σ is known as the standard normal variate. Z follows the normal distribution with mean 0 and S.D. = 1. The distribution of Z is known as the Standard Normal Distribution.
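A small worked example of this standardisation, with made-up numbers; the lower-tail probability P(Z ≤ z) is evaluated with the error function.

```python
import math

mu, sigma = 40.0, 10.0        # hypothetical population mean and S.D.
x = 55.0
z = (x - mu) / sigma          # standard normal variate, here 1.5

# P(Z <= z) for the standard normal, via the error function.
prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(z, prob)
```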
Uniform distribution (discrete)
A random variable X is said to follow the Uniform distribution of the discrete type if its p.d.f is f(x) = 1/n when x = x1, x2, ..., xn, and f(x) = 0 elsewhere.
The following are some examples of random variables which follow the uniform distribution of the discrete type.
1) Let an unbiased coin be tossed and let X = 1 if a head turns up and X = 0 if a tail turns up. Then f(x) = 1/2 for x = 0, 1, and 0 elsewhere.
2) Let an unbiased die be thrown and let X denote the number shown. The p.d.f of X is f(x) = 1/6 for x = 1, 2, ..., 6.

Uniform distribution (continuous)
A random variable X is said to follow the uniform distribution of the continuous type over an interval (a, b) if its p.d.f is f(x) = 1/(b - a) for a ≤ x ≤ b, and 0 elsewhere.

Fitting a Poisson Distribution
If we want to fit a Poisson distribution to a given frequency distribution, we compute the mean of the given distribution and take it as m. Once m is known, the Poisson distribution is obtained by putting that value of m in P(x) = (e^-m m^x)/x!. The expected (or theoretical) frequencies can then be obtained by putting x = 0, 1, 2, 3, ... in N (e^-m m^x)/x!, as sketched below.
6.There is only one maximum point, which occurs at the mean. f(x) =1/b-a= 0, a ≤ x ≤ b
Sampling Distributions
A sample statistic is a random variable. As every random variable has a probability distribution, a sample statistic also has a probability distribution. The probability distribution of a sample statistic is called the sampling distribution of that statistic. For example, the sample mean is a statistic, and the distribution of the sample mean is a sampling distribution. Sampling distributions play a very important role in the study of statistical inference.

Standard error: The standard deviation of the sampling distribution of a statistic is called the standard error of that statistic. For example, the sample mean (x̄) has a sampling distribution; the S.D. of that distribution is called the standard error of x̄.

Uses of Standard Error
The standard error plays a very important role in large sample theory and forms the basis of the testing of hypotheses.
(1) It is used for testing a given hypothesis.
(2) The S.E. gives an idea about the reliability of a sample; the reciprocal of the S.E. is a measure of the reliability of the sample.
(3) The S.E. can be used to determine the confidence limits of population measures such as the mean, proportion and standard deviation.

Commonly used sampling distributions
1) Normal distribution
2) Chi-square (χ²) distribution
3) t-distribution
4) F-distribution

Normal Distribution (As a Sampling Distribution)

When the sample is large or when the population S.D. is known, the following sample statistics have the standard normal distribution. Suppose we denote those statistics by Z; Z follows the normal distribution N(0, 1). The probability function of Z is f(z) = (1/√(2π)) e^(-z²/2).

Properties of the Z distribution
1. The Z distribution is a normal distribution, so it has all the properties of a normal distribution.
2. Its mean = 0 and S.D. = 1.

Uses of the sampling distribution of Z
1. To test a given population mean.
2. To test the significance of the difference between two population means.
3. To test a given population proportion.
4. To test the difference between two population proportions.
5. To test a given population S.D.
6. To test the difference between two population standard deviations.

χ² distribution (Chi-square Distribution)
1. If Z follows a standard normal distribution, then Z² follows a χ² distribution with one degree of freedom.
2. Let s and σ be the standard deviations of the sample and the population respectively, and let n be the sample size. Then ns²/σ² follows a χ² distribution with n - 1 degrees of freedom.
3. If z1, z2, ..., zn are n standard normal variates, then z1² + z2² + ... + zn² follows a χ² distribution with n degrees of freedom.

Properties of the χ² distribution
1. The χ² distribution is a sampling distribution. It is a continuous probability distribution.
2. The parameter of the χ² distribution is n, the degrees of freedom.
3. As the degrees of freedom increase, the χ² distribution approaches the Normal distribution.
4. The mean of the χ² distribution is n, its variance is 2n and its mode is n - 2, where n is the degrees of freedom. For large values of n, the χ² distribution is nearly symmetric.
5. The sum of two independent χ² variates is also a χ² variate.

Uses of the χ² distribution
χ² is a test statistic in tests of hypotheses. The following are the uses of χ²:
1. To test a given population variance when the sample is small.
2. To test the goodness of fit between observed and expected frequencies.
3. To test the independence of two attributes.
4. To test the homogeneity of data.

Student's t-distribution
Let x̄ and s be the mean and S.D. of a sample drawn from a normal population, and let the sample size n be small. Then t = (x̄ - μ)/(s/√(n - 1)) follows a t-distribution with n - 1 degrees of freedom.

Properties of the t distribution
1. The t distribution is a sampling distribution.
2. For large samples, the t distribution approaches the normal distribution.
3. All odd moments of the distribution are 0.
4. Mean = 0 and variance = n/(n - 2) for n > 2, where n is the degrees of freedom.
5. The t curve is at its maximum at t = 0.
6. The t curve has long tails towards both the left and the right.

Uses of the t distribution (a numeric sketch follows the list)
1. To test a given population mean when the sample is small.
2. To test whether two samples have the same mean when the samples are small.
3. To test whether there is a difference in the observations of two dependent samples.
4. To test the significance of a population correlation coefficient.
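A minimal numeric sketch of the statistic defined above, with made-up sample data and a hypothetical population mean. Note that s here is computed with divisor n, matching the s/√(n - 1) form used in these notes; the result equals the usual t computed with the n - 1 divisor.

```python
import math
import statistics

# Illustrative small sample and hypothesised population mean.
sample = [12.1, 11.6, 12.8, 11.9, 12.4, 12.2, 11.7, 12.5]
mu0 = 12.0
n = len(sample)

xbar = statistics.fmean(sample)
s = statistics.pstdev(sample)               # S.D. with divisor n, as in the notes
t = (xbar - mu0) / (s / math.sqrt(n - 1))   # follows t with n - 1 d.f. under H0
print(t)
```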
Properties of the F distribution
1. The F distribution is a sampling distribution.
2. If F follows the F distribution with (n1, n2) degrees of freedom, then 1/F follows the F distribution with (n2, n1) degrees of freedom.
3. The mean of the F distribution is n2/(n2 - 2), where (n1, n2) are the degrees of freedom and n2 > 2.
4. The F curve is J-shaped when n1 ≤ 2 and bell shaped when n1 > 2.

Uses of the F distribution
The F statistic is used for tests of hypotheses. A test conducted on the basis of the F statistic is called an F-test. An F-test can be used to
1) test the equality of the variances of two populations when the samples are small;
2) test the equality of the means of three or more populations (analysis of variance).
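A small sketch of use 1), the small-sample variance-ratio computation, with made-up data; the comparison of F against the table value is left out.

```python
import statistics

# Two illustrative small samples.
sample1 = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3]
sample2 = [4.9, 5.2, 5.1, 5.0, 4.8, 5.1, 5.0]

v1 = statistics.variance(sample1)   # unbiased sample variance, divisor n - 1
v2 = statistics.variance(sample2)

# Conventionally the larger variance goes in the numerator; under H0 the
# ratio follows an F distribution with (n1 - 1, n2 - 1) degrees of freedom.
F = max(v1, v2) / min(v1, v2)
print(F)
```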
Theory of Estimation
The primary objective of a sample study is to draw conclusions about the population by examining only a part of the population, known as a sample. Such conclusions are called statistical inferences. Statistical inference is therefore a process by which we draw conclusions about some measure of the population based on a measure of a sample drawn from that population. The two main branches of statistical inference are estimation and the testing of hypotheses.

ESTIMATION: Statistical estimation is concerned with the methods by which population characteristics are estimated from samples. The true value of a population parameter is an unknown constant that can be correctly ascertained only by an exhaustive study of the population. However, it is ordinarily too expensive, or infeasible, to enumerate the complete population to obtain the required information. Therefore we estimate the parameters of the population through samples. This is statistical estimation. For example, a manufacturer may be interested in estimating the future demand for his product in the market; statistical estimation procedures provide the means of obtaining such estimates with the desired degree of precision. With respect to estimating a parameter, the following two types of estimates are possible: 1. Point estimates. 2. Interval estimates.

Point estimation: Any statistic suggested as an estimate of an unknown parameter is called a point estimate of that parameter. Under point estimation, we determine a single value which may be taken as an estimate of the population parameter. For example, if we suggest the mean of a random sample taken from a population as an estimate of the population mean, it is point estimation. The best estimate is one which falls nearest to the true value of the parameter being estimated.

Estimator and Estimates: A sample statistic that is used to estimate a population parameter is called an estimator. For example, the sample mean is an estimator of the population mean. An estimate is a specific observed value of a statistic. To find an estimate, we select a sample and compute the value of the estimator from that sample; this computed value of the estimator is the estimate. For example, if the sample mean is the estimator, the particular value of the sample mean obtained from the sample is the estimate, and the population mean is the parameter.

Criteria for a good estimator (Desirable properties)
The following are some of the characteristics which should be satisfied by a good estimator.
1) An estimator should be unbiased.
2) An estimator should be consistent.
3) An estimator should be efficient.
4) An estimator should be sufficient.
1. Unbiasedness: A statistic t is said to be an unbiased estimator of a population parameter θ if E(t) = θ. That is, if t follows a sampling distribution and the mean of that distribution is the value of the parameter θ, then t is an unbiased estimator of θ. For example, the sample mean is an unbiased estimator of the population mean, i.e. E(x̄) = μ, where x̄ is the sample mean and μ is the population mean.
2. Consistency: A statistic t based on a sample of size n is said to be a consistent estimator of the parameter θ if it converges in probability to θ, i.e. if P[t → θ] → 1 as n → ∞. In other words, when n is large, the probability that t is close to θ is near 1. Equivalently, if E(tn) = θ (or E(tn) → θ) and V(tn) → 0 as n → ∞, then tn is a consistent estimator of the parameter θ. Example: the sample mean is a consistent estimator of the population mean, since for large values of n the sample mean tends to the population mean.
3. Efficiency: If t1 and t2 are two consistent estimators of the parameter θ and the variance of t1 is less than the variance of t2 for all n (V(t1) < V(t2)), then t1 is said to be more efficient than t2. That is, an estimator with less variability is said to be more efficient and consequently more reliable. Example: the sample mean is more efficient than the sample median as an estimator of the population mean.
4. Sufficiency: A statistic t is said to be a sufficient estimator of the parameter θ if it contains all the information in the sample regarding the parameter. In other words, a sufficient statistic utilizes all the information that a given sample can furnish about the parameter. A sufficient estimator is most efficient when an efficient estimator exists; it is always a consistent estimator; it may or may not be unbiased. A necessary and sufficient condition for an estimator of a parameter θ to be sufficient was given by Neyman.
Interval estimation
In point estimation, a single value of a statistic is used as an estimate of the population parameter. But even the best possible point estimate may deviate enough from the true parameter value to make the estimate unsatisfactory. For example, in point estimation, to estimate the population mean we draw a sample and take the sample mean as an estimate of the population mean; but the population mean may be either less or more than this estimate. If we can find the lowest and the highest values within which the population mean is expected to lie, these set up two limits, and the two limits give an interval. The interval within which the population mean is expected to lie is called an interval estimate. It cannot be expected that the population mean will always lie within this interval, so we find the probability that the population mean lies in this interval. This probability is denoted by 1 - α. If 1 - α = 0.95, then we are 95% confident that the population mean will lie in this interval, so that the probability of the population mean lying outside the interval is only 0.05, i.e. α = 0.05 (α is the level of significance).
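A minimal sketch of such an interval for the population mean from a hypothetical large sample, using x̄ ± z·S.E. with z = 1.96 for 1 - α = 0.95; all data are illustrative.

```python
import math
import statistics

# Illustrative large-ish sample.
sample = [102, 98, 110, 95, 104, 99, 107, 101, 96, 103,
          100, 105, 97, 108, 99, 102, 101, 98, 106, 100]
n = len(sample)

xbar = statistics.fmean(sample)
se = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
z = 1.96                                       # critical value for 1 - alpha = 0.95

print((xbar - z * se, xbar + z * se))          # interval estimate for mu
```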
Methods used for Point estimation
Commonly used methods for obtaining point estimates are:
1. Method of maximum likelihood
2. Method of moments
1. Method of maximum likelihood
This is the most commonly used method of estimating population parameters. Maximum likelihood estimation is based on the assumption that different populations generate different samples, and that any given sample is more likely to have come from some populations than from others. Let x1, x2, ..., xn be a sample drawn independently from a population with parameter θ. Then f(x1, x2, ..., xn; θ) is called the likelihood function and is denoted by L. The likelihood function L is the probability of selecting the sample x1, x2, ..., xn; that is, L is the probability for the sample (x1, x2, ..., xn) to belong to the population. Hence L = P(x1, x2, ..., xn; θ) = P(x1; θ) P(x2; θ) ... P(xn; θ), since x1, x2, ..., xn are independent.
The principle of maximum likelihood states that the statistic which maximizes the likelihood function L is the maximum likelihood estimator (MLE) of the parameter. This means that the statistic computed as an estimator of the population parameter must maximize the likelihood of drawing the given sample. Hence the maximum likelihood estimator of the parameter θ is that value of θ which maximizes the likelihood function L; among all the possible estimators of θ, the one which maximizes L is the MLE.
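A minimal sketch of this principle for a hypothetical Poisson sample: the log-likelihood is evaluated over a grid of candidate values of m, and the maximising value coincides with the sample mean, which is the analytical MLE of the Poisson parameter.

```python
import math

# Illustrative Poisson-type count data.
sample = [2, 3, 1, 4, 2, 2, 5, 3, 2, 1]

def log_likelihood(m):
    # log L = sum over observations of log( e^-m * m^x / x! )
    return sum(-m + x * math.log(m) - math.log(math.factorial(x))
               for x in sample)

candidates = [i / 100 for i in range(1, 801)]   # grid over m in (0, 8]
mle = max(candidates, key=log_likelihood)
print(mle, sum(sample) / len(sample))           # the MLE equals the sample mean
```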
2. Method of moments
Under this method, we find moments from the sample and equate them with the corresponding moments of the population. This gives equations containing the parameters; solving these equations gives the estimates of the population parameters. Let θ1, θ2, ..., θk be the k parameters of the population. Let μ1', μ2', ..., μk' be the first k raw moments of the population and let m1', m2', ..., mk' be the first k raw moments of the sample. Then take μ1' = m1', μ2' = m2', and so on. Solving these equations, we get the estimates of the parameters.
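A small sketch of the method for the continuous uniform distribution on (a, b), whose first two moments give mean (a + b)/2 and variance (b - a)²/12; equating these with the sample moments and solving yields the estimates. The data are made up.

```python
import statistics

# Illustrative sample assumed to come from a uniform(a, b) population.
sample = [2.3, 4.1, 3.7, 1.9, 4.8, 2.6, 3.2, 4.4, 2.1, 3.9]

m1 = statistics.fmean(sample)        # first sample moment (mean)
m2 = statistics.pvariance(sample)    # second central sample moment (variance)

# From variance = (b - a)^2 / 12, half the width is sqrt(3 * variance).
half_width = (3 * m2) ** 0.5
a_hat, b_hat = m1 - half_width, m1 + half_width
print(a_hat, b_hat)
```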
Statistical Hypothesis: In any test of hypothesis we begin with some assumption about the population from which the sample is drawn. This assumption may be about the form of the population or about the parameters of the population, and it should be logically drawn. Such an assumption is called a hypothesis. A statistical hypothesis may therefore be defined as a tentative conclusion, logically drawn, concerning the parameter or the form of the distribution of the population. For example, the assumption "the sample is drawn from a normal population with mean = 40 and S.D. = 10" is a hypothesis.

Simple and Composite Hypotheses: A hypothesis may be simple or composite. If a hypothesis specifies the population completely, that is, both the functional form and the parameters, it is called a simple hypothesis. If a hypothesis is not simple, it is a composite hypothesis. For example, the hypothesis "the population is normal with mean = 25 and S.D. = 10" is a simple hypothesis, while the hypothesis "the population follows a normal distribution with mean = 25" is a composite hypothesis.

Parametric and non-parametric hypotheses: A hypothesis which specifies only the parameters of the probability density function is called a parametric hypothesis. If a hypothesis specifies only the form of the density function in the population, it is called a non-parametric hypothesis. For example, the hypothesis "the mean of the population is 25" is parametric, while the hypothesis "the population is normal" is non-parametric.

Null and alternative hypotheses: A null hypothesis can be defined as a statistical hypothesis which is stated for the purpose of possible acceptance; it is the original hypothesis. Any hypothesis other than the null hypothesis is called an alternative hypothesis, so when the null hypothesis is rejected we accept the other hypothesis, known as the alternative hypothesis. The null hypothesis is denoted by H0 and the alternative hypothesis by H1.

Level of Significance: The confidence with which a null hypothesis is accepted or rejected depends on what is called the significance level. The probability with which we may reject a null hypothesis when it is true is called the level of significance. So when the level of significance is 0.05, it means that in the long run the statistician is rejecting a true null hypothesis 5 times out of every 100 times. The level of significance is therefore the risk a statistician runs in his decisions. It is denoted by α; thus α = probability of rejecting H0 when it is true. The level of significance is usually fixed before conducting the test of hypothesis.

Difference between Standard deviation and Standard error
1. Standard deviation is a measure of the dispersion of statistical data (or of a probability distribution); standard error is a measure of the dispersion of a sampling distribution.
2. Standard deviation measures the variability or consistency of a statistical series; standard error indicates the precision or reliability of an estimated value.
3. Standard deviation is calculated in relation to the mean of a series; standard error can be calculated in relation to the mean, median, S.D., correlation coefficient, etc.
4. Standard deviation can be used to find the S.E.; standard error can be used for estimation and the testing of hypotheses.

Type I and Type II errors
In any test of hypothesis the decision is to accept or to reject a null hypothesis, and the decision is based on the information supplied by the sample data. The four possibilities of the decision are:
1. Accepting a null hypothesis when it is true.
2. Rejecting a null hypothesis when it is false.
3. Rejecting a null hypothesis when it is true.
4. Accepting a null hypothesis when it is false.
Obviously (1) and (2) are correct decisions, while (3) and (4) are errors, known respectively as Type I and Type II errors. In the absence of proper and adequate sample information, such errors may occur.
Type I error: rejecting H0 when H0 is true.
Type II error: accepting H0 when H0 is false.
For example, suppose a test conducted on the basis of a sample drawn from the population leads to the conclusion that the population mean ≠ 125 even though the population mean is 125; this is a Type I error. Similarly, if the test conducted on the basis of the sample drawn leads to the conclusion that the population mean is 125 even though it is not 125, it is a Type II error.

Critical Region
In a test procedure we calculate a test statistic on which we base our decision. The range of variation of this statistic is divided into two regions, the acceptance region and the rejection region. If the computed value of the test statistic falls in the rejection region, we reject the null hypothesis. The rejection region is also known as the critical region. The critical region corresponds to a predetermined level of significance (say α) and the acceptance region corresponds to the region 1 - α.

Size of the Critical Region: The probability that a selected sample belongs to the critical region is called the size of the critical region. Therefore the size of the critical region = P[rejecting H0 when H0 is true]. The size of the critical region is also known as the level of significance.

Large Sample and Small Sample tests
It is generally agreed among statisticians that a sample is to be considered large if its size exceeds 30. The statistical tests used for large samples differ from those used for small samples, since the assumptions made in the case of large samples do not hold for small samples. For large samples, the sampling distribution of the test statistic is the normal distribution, and therefore the statistical test applied is the Z-test. For small samples, the sampling distribution of the test statistic may be (1) the normal distribution, (2) the t-distribution, (3) the χ² distribution or (4) the F-distribution; therefore, in the case of small samples, the statistical test applied may be a Z-test, t-test, χ²-test, F-test, etc.

Z-test: The Z-test is applied when the test statistic follows the normal distribution. The uses of the Z-test are (a numeric sketch of use 1 follows the list):
1. To test a given population mean when the sample is large or when the population S.D. is known.
2. To test the equality of two sample means when the samples are large or when the population S.D. is known.
3. To test a population proportion.
4. To test the equality of two sample proportions.
5. To test a population S.D. when the sample is large.
6. To test the equality of two sample standard deviations when the samples are large or when the population standard deviations are known.
7. To test the equality of correlation coefficients.
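A minimal sketch of a two-sided Z-test for a given population mean, with illustrative numbers and the population S.D. assumed known.

```python
import math

# Hypothetical setup: hypothesised mean, known population S.D.,
# observed sample mean and (large) sample size.
mu0, sigma = 125.0, 15.0
xbar, n = 129.2, 64

z = (xbar - mu0) / (sigma / math.sqrt(n))   # N(0, 1) under H0

# Reject H0 at the 5% level if |z| exceeds the critical value 1.96.
print(z, abs(z) > 1.96)
```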
t-test
The t-test is applied when the test statistic follows a t-distribution. The applications (uses) of the t-test are:
1. To test a given population mean when the sample is small and the population S.D. is not known.
2. To test the equality of two sample means when the samples are small and the population S.D. is unknown.
3. To test the difference in values of two dependent samples.
4. To test the significance of correlation coefficients.
Assumptions in the t-test:
1. The parent population from which the sample is drawn is normal.
2. The sample observations are independent.
3. The population S.D. σ is unknown.
4. When the equality of two population means is tested, the samples are assumed to be independent, and the population variances are assumed to be equal and unknown.

Parametric and Non-parametric tests
In certain test procedures, assumptions about the population distribution or its parameters are made. For example, in the t-test we assume that the samples are drawn from a population following the normal distribution. When such assumptions are made, the test is known as a parametric test. There are situations where it is not possible to make any assumption about the distribution of the population from which the samples are drawn. In such situations we follow test procedures known as non-parametric tests. The χ²-test is an example of a non-parametric test.

χ² Test (Chi-Square Test)
The statistical test in which the test statistic follows a χ² distribution is called the χ²-test. The χ²-test is therefore a statistical test which tests the significance of the difference between observed frequencies and the corresponding theoretical frequencies of a distribution, without any assumption about the distribution of the population. The χ²-test is one of the simplest and most widely used non-parametric tests in statistical work. This test was developed by Prof. Karl Pearson in 1900.

Characteristics of the χ²-test
1. It is a non-parametric test; assumptions about the form of the distribution or its parameters are not required.
2. It is a distribution-free test, which can be used with any type of population distribution.
3. The χ² test statistic is easy to calculate.
4. It analyses the difference between a set of observed frequencies and a set of corresponding expected frequencies.

Uses (Applications) of the χ²-test
The χ²-test is one of the most useful statistical tests. It is applicable to a very large number of problems in practice. Its uses are explained below.
1) Useful for the test of goodness of fit: the χ²-test can be used to ascertain how well theoretical distributions fit the data; we can test whether there is goodness of fit between the observed frequencies and the expected frequencies.
2) Useful for the test of independence of attributes: with the help of the χ²-test we can find out whether two attributes are associated or not.
3) Useful for testing homogeneity: tests of independence are concerned with whether one attribute is independent of another, while tests of homogeneity are concerned with whether different samples come from the same population.
4) Useful for testing a given population variance: the χ²-test can be used to test whether a given population variance is acceptable on the basis of samples drawn from that population.

Test of goodness of fit
If we have a set of frequencies of a distribution obtained by an experiment, and we are interested in knowing whether these frequencies are consistent with those which may be obtained based on some theory (or hypothesis), then we can use the χ² test of goodness of fit for this purpose. For example, if a frequency distribution such as the Binomial, Poisson or Normal is applicable, the expected frequencies would be derived using that distribution.
Steps for the χ² test of goodness of fit (a worked numeric sketch follows the steps):
1. H0: there is goodness of fit between the observed and expected frequencies. H1: there is no goodness of fit between the observed and expected frequencies.
2. Compute the test statistic χ² = Σ (O - E)²/E, where O stands for the observed frequencies and E for the expected frequencies. Observed frequencies are available in a given problem, but expected frequencies have to be computed.
3. Degrees of freedom = n - r - 1, where n is the number of classes and r is the number of independent constraints to be satisfied by the frequencies. For a frequency distribution, r is the number of parameters computed from the data.
4. Obtain the table value of χ² for the degrees of freedom and the desired level of significance.
5. If the calculated value of χ² is less than the table value, we conclude that there is goodness of fit.
