Notes Unit-4 BRM


TYPES OF STATISTICAL TECHNIQUES FOR DATA ANALYSIS

Data analysis can be defined as the process of reviewing and evaluating data gathered
from different sources. Data cleaning is an important part of this process, as it
eliminates redundant information and helps the researcher reach accurate conclusions.
Data analysis is thus the systematic process of cleaning, inspecting and transforming
data with the help of various tools and techniques. Its objective is to identify useful
information that will support the decision-making process. Methods for data analysis
include data mining, data visualization and business intelligence. Analysing the data
helps in summarizing results through examination and interpretation of the useful
information, in determining the quality of the data, and in developing answers to the
questions that matter to the researcher.
In order to solve the research problem and reach specific, high-quality results, various
statistical techniques can be applied. These techniques help the researcher obtain
accurate results by drawing relationships between different variables. Statistical
techniques can be divided into two broad groups: a) parametric tests and b)
non-parametric tests.

Parametric test
Parametric statistics assumes that the sample data depend on certain fixed parameters
and takes into consideration the properties of the population. It assumes that the
sample is drawn from a population that is normally distributed, so that every
observation in the population has an equal chance of being selected. A parametric test
is therefore based on assumptions that need to hold good. Common parametric tests are
Analysis of Variance (ANOVA), the Z test, the t-test, the chi-square test, Pearson's
coefficient of correlation, and regression analysis.

T- Test
The t-test identifies the level of significance of the difference between a sample
mean and a hypothesized value, or between the means of two samples. It is based on the
t-distribution. The t-test is conducted when the sample size is small and the variance
of the population is not known; it is typically used when the sample size (n) is not
larger than 30. There are two types of t-test:
Dependent means t-test - used when the same variables or groups are measured twice,
for example before and after a treatment.
Independent means t-test - used when two different groups, which have faced different
conditions, are compared.
The formula for the independent-samples t-test is:

t = (x̄1 − x̄2) / √(S1²/n1 + S2²/n2)
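As an illustrative sketch (the before/after scores are invented, not from the notes), the dependent-means t-test can be computed with only the Python standard library by working on the per-subject differences:

```python
import math
import statistics

# Illustrative paired data: the same group measured before and after a treatment
before = [72, 68, 75, 71, 69, 74, 70, 73]
after  = [75, 70, 78, 72, 71, 77, 72, 76]

# The dependent-means t-test works on the per-subject differences
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
d_bar = statistics.mean(diffs)   # mean difference
s_d = statistics.stdev(diffs)    # sample standard deviation of the differences

# t = d_bar / (s_d / sqrt(n)), with n - 1 degrees of freedom
t = d_bar / (s_d / math.sqrt(n))
print(f"t = {t:.3f} with {n - 1} degrees of freedom")
```

The computed t is then compared against the t-distribution's critical value for n − 1 degrees of freedom at the chosen significance level.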

Z Test
The Z test is used when the population is normally distributed and the variance of the
population is known; the sample size may be large or small. It is used for comparing a
sample mean with the population mean, or for identifying the level of significance of
the difference between the means of two independent samples. The Z test is based on a
single critical value, which makes the test convenient to apply.
The formula for the one-sample Z test is:

z = (x̄ − µ) / (σ/√n)

where
x̄ – sample mean
µ – population mean
σ – population standard deviation
n – sample size
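A minimal sketch of the one-sample Z test built directly from this formula; the numbers are illustrative:

```python
import math

# Illustrative numbers: sample mean 105 against a claimed population mean
# of 100, with known population standard deviation 15 and sample size 36
x_bar, mu, sigma, n = 105, 100, 15, 36

# z = (sample mean - population mean) / (sigma / sqrt(n))
z = (x_bar - mu) / (sigma / math.sqrt(n))
print(f"z = {z:.2f}")  # 2.00, which exceeds 1.96, the 5% two-tailed critical value
```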

Analysis of Variance (ANOVA)


Analysis of Variance is used when the means of groups defined by categorical data are
to be compared. It is mainly of two types: a) one-way ANOVA and b) two-way ANOVA.
One-way ANOVA is used when the means of three or more groups are compared on a single
factor; the variable measured in each group is the same. Two-way ANOVA is used to
discover whether there is any relationship between two independent variables and a
dependent variable. Analysis of Variance rests on several assumptions: there is a
dependent variable measured on a continuous scale; there are independent variables
that are categorical, each with at least two categories; the population is normally
distributed; and no unusual elements (outliers) are present.
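The one-way ANOVA F statistic can be sketched from first principles as the ratio of between-group to within-group variation; the three groups below are invented for illustration:

```python
import statistics

# Illustrative scores for three independent groups
groups = [
    [4, 5, 6, 5],     # group A
    [7, 8, 6, 7],     # group B
    [10, 9, 11, 10],  # group C
]

k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total number of observations
grand_mean = statistics.mean(x for g in groups for x in g)

# Between-group sum of squares: spread of the group means around the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)

# F = (SS_between / (k - 1)) / (SS_within / (n - k))
f = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f:.2f} with ({k - 1}, {n - k}) degrees of freedom")
```

A large F means the group means differ more than the within-group noise would suggest; the computed value is compared against the f-distribution's critical value.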

Chi Square test


This test is also known as Pearson's chi-square test. It is used to find whether there
is a relationship between two or more categorical variables. The variables should be
measured at the categorical level and should consist of two or more independent
groups.

Coefficient of Correlation
Pearson's coefficient of correlation is used to measure the association between two
variables. It is denoted by 'r', and its value ranges from −1 to +1. The coefficient
of correlation identifies whether there is a positive association, a negative
association or no association between the two variables: a value of 0 indicates no
association, a value less than 0 indicates a negative association, and a value greater
than 0 indicates a positive association.
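A minimal sketch of Pearson's r computed from its definition; the paired data (e.g. hours studied vs. exam score) are illustrative:

```python
import math

# Illustrative paired observations
x = [2, 4, 6, 8, 10]    # e.g. hours studied
y = [65, 70, 74, 82, 88]  # e.g. exam score

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# r = sum((x - mean_x)(y - mean_y)) / sqrt(sum((x - mean_x)^2) * sum((y - mean_y)^2))
num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
ss_x = sum((xi - mean_x) ** 2 for xi in x)
ss_y = sum((yi - mean_y) ** 2 for yi in y)
r = num / math.sqrt(ss_x * ss_y)
print(f"r = {r:.3f}")  # close to +1: a strong positive association
```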

Regression Analysis
Regression analysis is used to predict the value of one variable based on the value of
another variable. The variable whose value is predicted is the dependent variable, and
the variable used to predict it is called the independent variable. The assumptions of
regression analysis are that the variables are measured at the continuous level and
that there is a linear relationship between the two variables.
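A least-squares sketch of simple linear regression, predicting the dependent variable y from the independent variable x; the data are illustrative:

```python
# Least-squares fit of y = a + b*x; the data are invented for illustration
x = [1, 2, 3, 4, 5]
y = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
     / sum((xi - mean_x) ** 2 for xi in x))
# Intercept a = mean_y - b * mean_x
a = mean_y - b * mean_x
print(f"y = {a:.2f} + {b:.2f}*x")

# Predict the dependent variable for a new value of the independent variable
print(f"predicted y at x = 6: {a + b * 6:.2f}")
```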

Non-parametric Tests
Non-parametric statistics does not make any assumption about the parameters of the
population. The data may be ordinal and need not be normally distributed, which is why
the non-parametric test is also known as a distribution-free test. These tests are
comparatively simpler than parametric tests. Common non-parametric tests include the
Fisher-Irwin test, the Wilcoxon matched-pairs (signed-rank) test, the Wilcoxon
rank-sum test, the Kruskal-Wallis test, and Spearman's rank correlation test.

TESTING HYPOTHESIS

Hypothesis testing starts with setting up the premises, followed by selecting a
significance level. Next, we choose the test statistic, e.g. the t-test or the f-test.
While the t-test is used to compare the means of two samples, the f-test is used to
test the equality of the variances of two populations.

A hypothesis is a simple proposition that can be proved or disproved through various
scientific techniques and that establishes a relationship between an independent and a
dependent variable. It is capable of being tested and verified by an unbiased
examination to ascertain its validity. Testing a hypothesis attempts to make clear
whether or not the supposition is valid.

For a researcher, it is imperative to choose the right test for his/her hypothesis, as
the entire decision to accept or reject the null hypothesis is based on it. The
comparison below explains the difference between the t-test and the f-test.

Comparison Chart

BASIS FOR COMPARISON   T-TEST                                   F-TEST

Meaning                A univariate hypothesis test applied     A statistical test that determines
                       when the standard deviation is not       the equality of the variances of
                       known and the sample size is small.      two normal populations.

Test statistic         The t-statistic follows Student's        The f-statistic follows Snedecor's
                       t-distribution under the null            f-distribution under the null
                       hypothesis.                              hypothesis.

Application            Comparing the means of two               Comparing two population
                       populations.                             variances.

Definition of T-test

A t-test is a form of statistical hypothesis test, based on Student's t-statistic and
the t-distribution, used to find the p-value (probability) on which the decision to
accept or reject the null hypothesis is based.

The t-test analyses whether the means of two data sets are significantly different
from each other, i.e. whether a population mean is equal to or different from a
standard (hypothesized) mean. It can also be used to ascertain whether a regression
line has a slope different from zero. The test relies on a number of assumptions:

 The population is infinite and normal.

 Population variance is unknown and estimated from the sample.

 The hypothesized population mean is known.

 Sample observations are random and independent.

 The sample size is small.

 The alternative hypothesis may be one-sided or two-sided.

The means and standard deviations of the two samples are used to make the comparison
between them, such that:

t = (x̄1 − x̄2) / √(S1²/n1 + S2²/n2)

where,

x̄1 = Mean of the first dataset
x̄2 = Mean of the second dataset
S1 = Standard deviation of the first dataset
S2 = Standard deviation of the second dataset
n1 = Size of first dataset
n2 = Size of second dataset
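Using the symbols defined above, the comparison can be computed directly from the two samples' summary statistics (the numbers are illustrative, not from the text):

```python
import math

# Illustrative summary statistics for two independent samples,
# using the symbols defined above
x_bar1, s1, n1 = 24.5, 3.2, 15   # first dataset
x_bar2, s2, n2 = 21.8, 2.9, 12   # second dataset

# t = (x_bar1 - x_bar2) / sqrt(S1^2/n1 + S2^2/n2)
t = (x_bar1 - x_bar2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
print(f"t = {t:.3f}")
```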
Definition of F-test

The F-test is a type of hypothesis test based on Snedecor's f-distribution under the
null hypothesis. The test is performed when it is not known whether the two
populations have the same variance.

The F-test can also be used to check whether the data conform to a regression model
obtained through least-squares analysis. In multiple linear regression analysis, it
examines the overall validity of the model, i.e. whether any of the independent
variables has a linear relationship with the dependent variable. A number of
predictions can be made through the comparison of the two datasets. The f-test value
is expressed as the ratio of the variances of the two samples:

F = σ1² / σ2²

where σ1² and σ2² are the two variances.

The assumptions on which f-test relies are:

 The population is normally distributed.

 Samples have been drawn randomly.

 Observations are independent.

 The alternative hypothesis may be one-sided or two-sided.
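A minimal sketch of the f-test as a ratio of two sample variances; the samples are illustrative:

```python
import statistics

# Illustrative samples drawn from two populations whose variances we compare
sample1 = [12, 15, 11, 14, 13, 16, 12, 15]
sample2 = [14, 13, 14, 15, 13, 14, 15, 14]

var1 = statistics.variance(sample1)   # sample variance (n - 1 denominator)
var2 = statistics.variance(sample2)

# F is the ratio of the two variances; conventionally the larger goes on top
f = max(var1, var2) / min(var1, var2)
print(f"s1^2 = {var1:.3f}, s2^2 = {var2:.3f}, F = {f:.3f}")
```

An F close to 1 is consistent with equal variances; a large F, compared against the f-distribution's critical value, argues against the null hypothesis of equality.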

Key Differences Between T-test and F-test

The difference between the t-test and the f-test can be drawn clearly on the following
grounds:

 A univariate hypothesis test that is applied when the standard deviation is not
known and the sample size is small is the t-test. On the other hand, a statistical
test which determines the equality of the variances of two normal datasets is
known as the f-test.

 The t-test is based on the t-statistic, which follows Student's t-distribution
under the null hypothesis. Conversely, the f-test is based on the f-statistic,
which follows Snedecor's f-distribution under the null hypothesis.

 The t-test is used to compare the means of two populations. In contrast, the
f-test is used to compare two population variances.

Conclusion

The t-test and the f-test are two of the many statistical tests used in hypothesis
testing, helping to decide whether to accept or reject the null hypothesis. The
hypothesis test does not take the decision itself; rather, it assists the researcher
in decision making.

Definition of Z-test

The Z-test is a statistical test in which the normal distribution is applied; it is
basically used for dealing with problems relating to large samples, when n ≥ 30
(n = sample size).
For example, suppose a person wants to test whether tea and coffee are equally popular
in a particular town. He can take a sample of, say, 500 people from the town, of whom
suppose 280 are tea drinkers. To test the hypothesis, he can use the Z-test.

Z-Test's for Different Purposes


There are different types of Z-test, each for a different purpose. Some of the popular
types are outlined below:

1. The z-test for a single proportion is used to test a hypothesis about a specific
value of the population proportion.

Statistically speaking, we test the null hypothesis H0: p = p0 against the alternative
hypothesis H1: p ≠ p0, where p is the population proportion and p0 is the specific
value of the population proportion we would like to test.
The example of tea drinkers explained above requires this test. In that example,
p0 = 0.5; the proportion in question is the proportion of tea drinkers.
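The tea-drinkers example can be worked through directly. This is a sketch of the single-proportion z-test with the figures from the text (n = 500, 280 tea drinkers, p0 = 0.5):

```python
import math

# Tea-drinkers example: 280 tea drinkers out of n = 500,
# testing H0: p = 0.5 against H1: p != 0.5
n, successes, p0 = 500, 280, 0.5
p_hat = successes / n   # observed proportion, 0.56

# z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(f"z = {z:.3f}")  # |z| > 1.96, so H0 is rejected at the 5% level
```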

2. The z-test for the difference of proportions is used to test the hypothesis that
two populations have the same proportion.

For example, suppose one is interested in testing whether there is any significant
difference in the habit of tea drinking between the male and female citizens of a
town. In such a situation, the Z-test for the difference of proportions can be
applied. One would obtain two independent samples from the town, one of males and the
other of females, and determine the proportion of tea drinkers in each sample in order
to perform this test.
3. The z-test for a single mean is used to test a hypothesis about a specific value of
the population mean.

Statistically speaking, we test the null hypothesis H0: μ = μ0 against the alternative
hypothesis H1: μ ≠ μ0, where μ is the population mean and μ0 is the specific value of
the population mean that we would like to test.
Unlike the t-test for a single mean, this test is used if n ≥ 30 and the population
standard deviation is known.

4. The z-test for a single variance is used to test a hypothesis about a specific
value of the population variance.

Statistically speaking, we test the null hypothesis H0: σ² = σ0² against H1: σ² ≠ σ0²,
where σ² is the population variance and σ0² is the specific value of the population
variance that we would like to test.
In other words, this test enables us to test whether the given sample has been drawn
from a population with the specific variance σ0². Unlike the chi-square test for a
single variance, this test is used if n ≥ 30.

5. The Z-test for testing equality of variances is used to test the hypothesis that
two population variances are equal, when each sample size is 30 or larger.
CHI-SQUARE TEST
Adapted by Anne F. Maben from "Statistics for the Social Sciences" by Vicki Sharp

The chi-square (χ²) test is used to determine whether there is a significant difference between the expected
frequencies and the observed frequencies in one or more categories. Do the numbers of individuals or objects that
fall in each category differ significantly from the numbers you would expect? Is the difference between the
expected and the observed due to sampling error, or is it a real difference?

Chi-Square Test Requirements


1. Quantitative data.
2. One or more categories.
3. Independent observations.
4. Adequate sample size (at least 10).
5. Simple random sample.
6. Data in frequency form.
7. All observations must be used.

Expected Frequencies
When you find the value for chi square, you determine whether the observed frequencies differ significantly
from the expected frequencies. You find the expected frequencies for chi square in three ways:

1. You hypothesize that all the frequencies are equal in each category. For example, you might expect that
half of the entering freshman class of 200 at Tech College will be identified as women and half as men. You
figure the expected frequency by dividing the number in the sample by the number of categories. In this
example, where there are 200 entering freshmen and two categories, male and female, you divide your sample of
200 by 2, the number of categories, to get 100 (expected frequency) in each category.

2. You determine the expected frequencies on the basis of some prior knowledge. Let's use the Tech College
example again, but this time pretend we have prior knowledge of the frequencies of men and women in each
category from last year's entering class, when 60% of the freshmen were men and 40% were women. This
year you might expect that 60% of the total would be men and 40% would be women. You find the expected
frequencies by multiplying the sample size by each of the hypothesized population proportions. If the
freshmen total were 200, you would expect 120 to be men (60% x 200) and 80 to be women (40% x 200).

Now let's take a situation, find the expected frequencies, and use the chi-square test to solve the problem.

Situation

Thai, the manager of a car dealership, did not want to stock cars that were bought less frequently because of
their unpopular color. The five colors that she ordered were red, yellow, green, blue, and white. According to Thai,
the expected frequencies, or numbers of customers choosing each color, should follow last year's percentages.
She felt 20% would choose yellow, 30% would choose red, 10% would choose green, 10% would choose blue,
and 30% would choose white. She then took a random sample of 150 customers and asked them their color
preferences. The results of this poll are shown in Table 1 under the column labeled "observed frequencies."

Table 1 - Color Preference for 150 Customers for Thai's Superior Car Dealership

Category (Color)   Observed Frequencies   Expected Frequencies

Yellow             35                     30
Red                50                     45
Green              30                     15
Blue               10                     15
White              25                     45

The expected frequencies in Table 1 are figured from last year's percentages. Based on those percentages,
we would expect 20% to choose yellow. Figure the expected frequency for yellow by taking 20% of the 150
customers, giving an expected frequency of 30 people for this category. For the color red we would expect 30%
of 150, or 45 people, to fall in this category. Using this method, Thai figured the expected frequencies as
30, 45, 15, 15, and 45. Obviously, there are discrepancies between the colors preferred by the customers in
Thai's poll and the colors preferred by the customers who bought their cars last year. Most striking are the
differences for green and white. If Thai were to follow the results of her poll, she would stock twice as many
green cars as she would by following the customer color preference for green based on last year's sales.
In the case of white cars, she would stock half as many this year. What to do? Thai needs to know whether or
not the discrepancies between last year's choices (expected frequencies) and this year's preferences from her
poll (observed frequencies) demonstrate a real change in customer color preferences. It could be that the
differences are simply a result of the random sample she chanced to select. If so, then the population of
customers really has not changed from last year as far as color preferences go. The null hypothesis states that
there is no significant difference between the expected and observed frequencies. The alternative hypothesis
states that they are different. The level of significance (the point at which you can say with 95% confidence
that the difference is NOT due to chance alone) is set at .05, the standard for most science experiments. The
chi-square formula used on these data is

χ² = Σ (O − E)² / E

where O is the observed frequency in each category,
      E is the expected frequency in the corresponding category,
      Σ is the sum over all categories,
      df is the "degrees of freedom" (n − 1), and
      χ² is chi-square.

PROCEDURE

We are now ready to use our formula for χ² and find out if there is a significant difference between the
observed and expected frequencies for the customers choosing cars. We will set up a worksheet; then you will
follow the directions to form the columns and solve the formula.

1. Directions for Setting Up Worksheet for Chi-Square

Category     O     E    (O − E)   (O − E)²   (O − E)²/E

yellow      35    30       5        25          0.83
red         50    45       5        25          0.56
green       30    15      15       225         15.00
blue        10    15      -5        25          1.67
white       25    45     -20       400          8.89

                                        χ² = 26.95

2. After calculating the Chi Square value, find the "Degrees of Freedom." (DO NOT SQUARE THE NUMBER
YOU GET, NOR FIND THE SQUARE ROOT - THE NUMBER YOU GET FROM COMPLETING THE
CALCULATIONS AS ABOVE IS CHI SQUARE.)

Degrees of freedom (df) refers to the number of values that are free to vary after restriction has been
placed on the data. For instance, if you have four numbers with the restriction that their sum has to be 50,
then three of these numbers can be anything, they are free to vary, but the fourth number definitely is
restricted. For example, the first three numbers could be 15, 20, and 5, adding up to 40; then the fourth
number has to be 10 in order that they sum to 50. The degrees of freedom for these values are then three.
The degrees of freedom here are defined as N − 1, the number in the group minus one restriction (4 − 1).

3. Find the table value for Chi Square. Begin by finding the df found in step 2 along the left hand side of the
table. Run your fingers across the proper row until you reach the predetermined level of significance (.05) at
the column heading on the top of the table. The table value for Chi Square in the correct box of 4 df and
P=.05 level of significance is 9.49.

4. If the calculated chi-square value for the set of data you are analyzing (26.95) is equal to or greater than the
table value (9.49), reject the null hypothesis. There IS a significant difference between the data sets that
cannot be due to chance alone. If the number you calculate is LESS than the number you find on the table,
then you can probably say that any differences are due to chance alone.

In this situation, the rejection of the null hypothesis means that the differences between the expected
frequencies (based upon last year's car sales) and the observed frequencies (based upon this year's poll
taken by Thai) are not due to chance. That is, they are not due to chance variation in the sample Thai took;
there is a real difference between them. Therefore, in deciding what color autos to stock, it would be to Thai's
advantage to pay careful attention to the results of her poll!
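The whole procedure can be checked in a few lines of Python using Table 1's data. Note the exact sum is 26.94; the worksheet's 26.95 reflects rounding each category's contribution before summing:

```python
# Thai's color-preference data from Table 1
observed = {"yellow": 35, "red": 50, "green": 30, "blue": 10, "white": 25}
expected = {"yellow": 30, "red": 45, "green": 15, "blue": 15, "white": 45}

# chi-square = sum of (O - E)^2 / E over all categories
chi_square = sum((observed[c] - expected[c]) ** 2 / expected[c] for c in observed)
df = len(observed) - 1
print(f"chi-square = {chi_square:.2f} with {df} df")  # 26.94, vs. table value 9.49

# 26.94 >= 9.49, so the null hypothesis is rejected at the .05 level
print("reject H0" if chi_square >= 9.49 else "fail to reject H0")
```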

The steps in using the chi-square test may be summarized as follows:

Chi-Square Test Summary
1. Write the observed frequencies in column O.
2. Figure the expected frequencies and write them in column E.
3. Use the formula to find the chi-square value.
4. Find the df (N − 1).
5. Find the table value (consult the chi-square table).
6. If your chi-square value is equal to or greater than the table value, reject the null
hypothesis: differences in your data are not due to chance alone.

For example, the reason observed frequencies in a fruit fly genetic breeding lab did not match expected
frequencies could be due to such influences as:
• Mate selection (certain flies may prefer certain mates)
• Too small of a sample size was used
• Incorrect identification of male or female flies
• The wrong genetic cross was sent from the lab
• The flies were mixed in the bottle (carrying unexpected alleles)
