CHAPTERS 22 & 23: Bivariate Statistical Analysis
TEXTBOOK: BUSINESS RESEARCH METHODS, 8TH
EDITION, ZIKMUND, BABIN, CARR AND GRIFFIN
LEARNING OUTCOMES
CHAPTER 22
1. Recognize when a particular bivariate test is appropriate.
2. Calculate and interpret a χ² test for a contingency table.
3. Calculate and interpret an independent samples t-test
comparing two means.
4. Understand analysis of variance (ANOVA).
5. Interpret an ANOVA table.
LEARNING OUTCOMES
CHAPTER 23
1. Apply and interpret simple bivariate correlations.
2. Interpret a correlation matrix.
3. Understand simple regression.
4. Interpret regression output, including the t-test of
hypotheses tied to a specific parameter coefficient.
TEST OF DIFFERENCE (χ² TEST)
Category    1   2   3   4   5
Expected   15  13  20  18  15
Observed   12  15  24  17  11
• A p-value less than .05 indicates that we reject the null hypothesis that there is no
difference between the expected and observed values, and thus we conclude that there
is a difference between them.
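The slide's frequencies can be plugged straight into a short Python sketch (assuming scipy is available). Note that scipy's built-in chisquare() requires the observed and expected totals to match, and the totals above differ slightly (79 vs. 81), so the statistic is computed directly from the formula:

```python
import numpy as np
from scipy import stats

# Frequencies from the table above
expected = np.array([15, 13, 20, 18, 15], dtype=float)
observed = np.array([12, 15, 24, 17, 11], dtype=float)

# Chi-square statistic: sum over categories of (O - E)^2 / E
chi2_stat = np.sum((observed - expected) ** 2 / expected)

# Degrees of freedom = number of categories - 1
df = len(observed) - 1

# p-value: probability of a statistic at least this large under the null
p_value = stats.chi2.sf(chi2_stat, df)

print(f"chi-square = {chi2_stat:.3f}, df = {df}, p = {p_value:.3f}")
# p < .05 -> reject the null of no difference between observed and
# expected frequencies; for these numbers p is well above .05.
```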
T-TEST FOR COMPARING MEANS
1. When comparing the means of two independent groups, we use the independent
samples t-test.
2. When comparing the mean of a variable with a population mean, we use the one-sample
t-test.
3. When comparing means of repeated measures, we use the paired samples t-test.
4. In all cases we are asking how likely the observed t-statistic is under
the t-distribution.
5. If the corresponding p-value for a t-statistic is low, we reject the null
hypothesis and conclude that there is a difference between the means (see the sketch below).
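A minimal sketch of all three variants in Python with scipy; the group scores below are made-up illustrations, not data from the textbook:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical satisfaction scores for two independent groups
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.6, scale=1.0, size=30)

# 1. Independent samples t-test: comparing two group means
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"independent: t = {t_stat:.3f}, p = {p_value:.3f}")

# 2. One-sample t-test: comparing one mean with a population mean of 5
t1, p1 = stats.ttest_1samp(group_a, popmean=5.0)
print(f"one-sample:  t = {t1:.3f}, p = {p1:.3f}")

# 3. Paired samples t-test: repeated measures on the same cases
before, after = group_a, group_a + rng.normal(0.3, 0.5, size=30)
t2, p2 = stats.ttest_rel(before, after)
print(f"paired:      t = {t2:.3f}, p = {p2:.3f}")
```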
ANOVA
1. When comparing the means of two or more independent groups, we use ANOVA.
2. In ANOVA, we test what portion of the total variance is due to within-group variance
versus between-group variance.
3. A deviation is the difference between an observed value and a mean. Each group has its own
mean, so we can sum the squared deviations of each observation from its own group mean.
This is the within-group, or error, sum of squares (SSE).
4. We can also sum the squared deviations of every observation from the overall (grand)
mean. This is the total sum of squares (SST).
5. Finally, we can take the deviation of each group mean from the grand mean, square it,
weight it by the group size, and sum across groups. This is the between-groups sum of
squares (SSC).
6. The total sum of squares is the between-groups sum of squares plus the error sum of squares:
SST = SSC + SSE
7. Dividing each sum of squares by its degrees of freedom gives a mean square; the ratio of the
between-groups mean square (from SSC) to the error mean square (from SSE) is the F-statistic.
A larger F-statistic means a lower p-value, and thus a greater chance of rejecting the null
hypothesis (see the sketch below).
8. The two most common types of ANOVA are one-way ANOVA (used for independent samples)
and repeated-measures ANOVA, used when we have repeated measures of a variable and want
to test the differences among them.
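The sum-of-squares decomposition and the F-test can be verified with a short Python sketch; the three groups of scores are hypothetical, and the manual SSC/SSE computation should match scipy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three independent groups
groups = [
    np.array([4.1, 5.0, 4.8, 5.2, 4.5]),
    np.array([5.5, 6.0, 5.8, 6.2, 5.9]),
    np.array([4.9, 5.1, 5.3, 5.0, 5.2]),
]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# Sum-of-squares decomposition: SST = SSC + SSE
sst = np.sum((all_obs - grand_mean) ** 2)                         # total
ssc = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between groups
sse = sum(np.sum((g - g.mean()) ** 2) for g in groups)            # within groups (error)

# F = (SSC / between-groups df) / (SSE / error df)
k, n = len(groups), len(all_obs)
f_manual = (ssc / (k - 1)) / (sse / (n - k))

# The same test via scipy's one-way ANOVA
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f} (manual {f_manual:.3f}), p = {p_value:.4f}")
```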
CHAPTER 23
MEASURES OF ASSOCIATION
CORRELATION
1. Given the mean of the data, we can calculate the variance (and its square root, the
standard deviation), which helps us visualize how far the data depart from the mean
value.
2. We can also look at the association between two variables by examining how deviations
in one variable move together with deviations in the other.
3. We call this covariance: a measure of how two variables vary together. Because its
magnitude depends on the units of measurement, the numerical value of covariance tells
us little beyond its sign: positive, negative, or zero.
4. Correlation standardizes this association, so that along with the direction of the
association we can also gauge its strength.
5. The correlation coefficient (r) ranges from −1 to +1; the closer r is to −1 or +1,
the stronger the correlation.
6. Both covariance and correlation are measures of association, not causation.
7. Squaring the correlation coefficient gives R², the proportion of the variance in one
variable that is accounted for by the other.
8. A correlation matrix is the standard form of reporting correlations (see the sketch below).
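A short Python sketch with hypothetical variables (ads, sales, and price are invented names) showing r, R², and a correlation matrix:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical data: sales partly driven by ad spend; price unrelated
ads = rng.normal(100, 20, size=50)
sales = 2.0 * ads + rng.normal(0, 30, size=50)
price = rng.normal(10, 2, size=50)

df = pd.DataFrame({"ads": ads, "sales": sales, "price": price})

# Pearson correlation between two variables, and its square
r = df["ads"].corr(df["sales"])
print(f"r = {r:.3f}, R^2 = {r**2:.3f}")

# Correlation matrix: the standard form of reporting correlations
print(df.corr().round(3))
```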
CHAPTER 23
SIMPLE REGRESSION
1. Proceeding from the mean, we were able to calculate the variance; from the variance,
the correlation; and from the correlation, R².
2. Now we can use this knowledge to conduct a regression. Regression fits a
straight line to a scatter plot. That is why it is called linear regression: it
follows the equation of a straight line, Ŷ = a + bX, where a is the intercept and
b the slope. Regression is our attempt to predict the value of the dependent
variable when we know the value of the independent variable.
CHAPTER 23
SIMPLE REGRESSION
1. Simple regression is the relationship between one independent and one
dependent variable. You will notice that in such cases the standardized beta value is
the same as the correlation r, so its square equals R².
2. However, when you add more than one independent variable, the beta values
will no longer correspond to the bivariate correlations, as they should not.
3. Let's run some examples of simple regression in SPSS (a Python equivalent is sketched below).
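For readers without SPSS, here is a rough Python equivalent using scipy's linregress on invented data; the output mirrors the key pieces of SPSS regression output (intercept, slope, R², and the t-test p-value for the slope coefficient):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: one independent (x) and one dependent (y) variable
x = rng.normal(50, 10, size=40)
y = 3.0 + 0.8 * x + rng.normal(0, 5, size=40)

# Fit the straight line y-hat = a + b*x
result = stats.linregress(x, y)
print(f"intercept a = {result.intercept:.3f}")
print(f"slope b     = {result.slope:.3f}")
print(f"r = {result.rvalue:.3f}, R^2 = {result.rvalue**2:.3f}")
print(f"p-value for slope (t-test of b = 0): {result.pvalue:.4f}")
```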