Week 10 Factor Analysis

The document describes various statistical analysis techniques used in marketing research, including frequency distributions, cross-tabulation, hypothesis testing, analysis of variance (ANOVA), analysis of covariance, multivariate ANOVA, and factor analysis. Factor analysis is used to identify underlying dimensions or factors that explain correlations among a set of variables, allowing researchers to reduce a large set of correlated variables into a smaller set of uncorrelated factors for further analysis.

Data Preparation Process

Prepare Preliminary Plan of Data Analysis

Check Questionnaire

Edit

Code

Transcribe

Clean Data

Statistically Adjust the Data

Select Data Analysis Strategy


Frequency Distribution

▪ In a frequency distribution, one variable is considered at a time.
▪ A frequency distribution for a variable produces a table of frequency counts, percentages, and cumulative percentages for all the values associated with that variable.
Frequency of Familiarity
with the Internet
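As an illustration, a frequency distribution table can be computed with pandas. The familiarity ratings below are hypothetical, since the original table is not reproduced here:

```python
import pandas as pd

# Hypothetical 7-point familiarity ratings for 15 respondents
familiarity = pd.Series([2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7])

counts = familiarity.value_counts().sort_index()
table = pd.DataFrame({
    "Frequency": counts,
    "Percentage": 100 * counts / counts.sum(),
})
table["Cumulative %"] = table["Percentage"].cumsum()
print(table)
```

The cumulative percentage of the last row always reaches 100, which is a quick sanity check on the table.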
Cross-Tabulation

▪ While a frequency distribution describes one variable at a time, a cross-tabulation describes two or more variables simultaneously.
▪ Cross-tabulation results in tables that reflect the joint distribution of two or more variables with a limited number of categories or distinct values.
Gender and Internet Usage

Internet Usage    Male    Female    Row Total
Light (1)            5        10           15
Heavy (2)           10         5           15
Column Total        15        15           30
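This table can be reproduced from raw case data with pandas' `crosstab`. The individual records below are hypothetical but aggregate to the same counts:

```python
import pandas as pd

# Hypothetical individual records that aggregate to the table above
gender = ["Male"] * 5 + ["Female"] * 10 + ["Male"] * 10 + ["Female"] * 5
usage = ["Light"] * 15 + ["Heavy"] * 15
df = pd.DataFrame({"Gender": gender, "Internet Usage": usage})

table = pd.crosstab(df["Internet Usage"], df["Gender"], margins=True)
print(table)
```

Passing `margins=True` adds the row and column totals ("All") automatically.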
Purchase of Fashion Clothing by Sex
& Marital Status

Purchase of         Male                      Female
Fashion Clothing    Married   Not Married     Married   Not Married

High                  35%        40%            25%        60%
Low                   65%        60%            75%        40%
Column totals        100%       100%           100%       100%
Number of cases       400        120            300        180


Steps Involved in Hypothesis
Testing
Step 1 Formulate H0 and H1

Step 2 Select Appropriate Test

Step 3 Choose Level of Significance

Step 4 Collect Data and Calculate Test Statistic

Step 5 Determine Probability Associated with Test Statistic (TSCAL), or Determine Critical Value of Test Statistic (TSCR)

Step 6 Compare Probability with Level of Significance, or Determine if TSCAL Falls into (Non)Rejection Region

Step 7 Reject or Do not Reject H0

Step 8 Draw Marketing Research Conclusion
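A minimal sketch of these steps for a two-independent-samples t test, using SciPy and hypothetical sales data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Step 1: H0: mu_A = mu_B vs. H1: mu_A != mu_B (hypothetical sales data)
group_a = rng.normal(50, 5, 40)
group_b = rng.normal(55, 5, 40)

alpha = 0.05                                        # Step 3: level of significance
t_cal, p_value = stats.ttest_ind(group_a, group_b)  # Steps 2 and 4: test and statistic
reject_h0 = p_value < alpha                         # Steps 5-7: compare and decide
print(f"t = {t_cal:.2f}, p = {p_value:.4f}, reject H0: {reject_h0}")
```

Step 8, the marketing conclusion, is then stated in substantive terms (e.g., the two segments differ in mean sales) rather than in terms of the statistic itself.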


A Classification of Hypothesis Testing
Procedures for Examining Differences
Hypothesis Tests

Parametric Tests (Metric Tests)
  One Sample: t test, z test
  Two or More Samples
    Independent Samples: Two-Group t test, z test
    Paired Samples: Paired t test

Non-parametric Tests (Nonmetric Tests)
  One Sample: Chi-Square, K-S, Runs, Binomial
  Two or More Samples
    Independent Samples: Chi-Square, Mann-Whitney, Median, K-S
    Paired Samples: Sign, Wilcoxon, McNemar, Chi-Square
A Summary of Hypothesis Tests
Related to Differences
Relationship Amongst Test, Analysis of Variance,
Analysis of Covariance, & Regression
Metric Dependent Variable

One Independent Variable
  Binary: t test
  Categorical (Factorial): Analysis of Variance
    One Factor: One-Way Analysis of Variance
    More Than One Factor: N-Way Analysis of Variance

One or More Independent Variables
  Categorical and Interval: Analysis of Covariance
  Interval: Regression
One-Way Analysis of Variance

Marketing researchers are often interested in examining the differences in the mean values of the dependent variable for several categories of a single independent variable or factor.

For example:
▪ Do the various segments differ in terms of their volume of
product consumption?
▪ Do the brand evaluations of groups exposed to different
commercials vary?
▪ What is the effect of consumers' familiarity with the store
(measured as high, medium, and low) on preference for the
store?
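For instance, the in-store promotion question can be sketched with SciPy's one-way ANOVA; the sales figures below are hypothetical:

```python
from scipy.stats import f_oneway

# Hypothetical weekly sales (normalized units) under three promotion levels
high = [10, 9, 10, 8, 9]
medium = [8, 8, 7, 9, 6]
low = [5, 6, 6, 4, 5]

f_stat, p_value = f_oneway(high, medium, low)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value leads to rejecting the null hypothesis that mean sales are equal across the three promotion levels.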
One-Way ANOVA:
Effect of In-Store Promotion on Store Sales
N-Way Analysis of Variance

In marketing research, one is often concerned with the effect of more than one factor simultaneously. For example:
▪ How do advertising levels (high, medium, and low) interact
with price levels (high, medium, and low) to influence a
brand's sales?
▪ Do educational levels (less than high school, high school
graduate, some college, and college graduate) and age (less
than 35, 35-55, more than 55) affect consumption of a
brand?
▪ What is the effect of consumers' familiarity with a
department store (high, medium, and low) and store image
(positive, neutral, and negative) on preference for the
store?
Two-Way Analysis of Variance
Analysis of Covariance

When examining the differences in the mean values of the dependent variable related to the effect of the controlled independent variables, it is often necessary to take into account the influence of uncontrolled independent variables.
For example:
▪ In determining how different groups exposed to different commercials
evaluate a brand, it may be necessary to control for prior knowledge.
▪ In determining how different price levels will affect a household's cereal
consumption, it may be essential to take household size into account.
▪ Suppose that we wanted to determine the effect of in-store promotion
and couponing on sales while controlling for the effect of clientele.
Analysis of Covariance
Multivariate Analysis of Variance

▪ Multivariate analysis of variance (MANOVA) is similar to analysis of variance (ANOVA), except that instead of one metric dependent variable, we have two or more.
▪ In MANOVA, the null hypothesis is that the
vectors of means on multiple dependent
variables are equal across groups.
▪ Multivariate analysis of variance is appropriate
when there are two or more dependent
variables that are correlated.
FACTOR ANALYSIS
School of Business & Economics
Chapter Outline

▪ Basic Concept
▪ Factor Analysis Model
▪ Statistics Associated with Factor Analysis
▪ Conducting Factor Analysis
▪ Applications of Common Factor Analysis
Factor Analysis

▪ Factor analysis is a general name denoting a class of procedures primarily used for data reduction and summarization.
▪ Factor analysis is an interdependence
technique in that an entire set of
interdependent relationships is examined
without making the distinction between
dependent and independent variables.
Factor Analysis

▪ Factor analysis is used in the following circumstances:
• To identify underlying dimensions, or factors, that
explain the correlations among a set of variables.
Factors Underlying Selected
Psychographics and Lifestyles

[Plot of seven lifestyle variables in the space of two factors: "evening at home" and "home is best place" lie at one end of factor 1 and "go to a party" at the other; "football" and "baseball" lie at one end of factor 2 and "plays" and "movies" at the other.]

Factor 1 can be interpreted as homebody versus socialite, and factor 2 can be interpreted as sports versus movies/plays.
Factor Analysis

▪ Factor analysis is used in the following circumstances:
• To identify underlying dimensions, or factors, that
explain the correlations among a set of variables.
• To identify a new, smaller, set of uncorrelated
variables to replace the original set of correlated
variables in subsequent multivariate analysis
(regression or discriminant analysis).
Factors Underlying Selected
Psychographics and Lifestyles
The psychographic factors identified may be used as independent variables in explaining
the differences between loyal and nonloyal consumers. Instead of the seven correlated
psychographic variables, we can use the two uncorrelated factors, i.e., homebody versus
socialite, and sports versus movies/plays, in subsequent analysis.
Factor Analysis

▪ Factor analysis is used in the following circumstances:
• To identify underlying dimensions, or factors, that
explain the correlations among a set of variables.
• To identify a new, smaller, set of uncorrelated
variables to replace the original set of correlated
variables in subsequent multivariate analysis
(regression or discriminant analysis).
• To identify a smaller set of salient variables from
a larger set for use in subsequent multivariate
analysis.
Factors Underlying Selected
Psychographics and Lifestyles

A few of the original lifestyle statements that correlate highly with the identified factors may be used as independent variables to explain the differences between the loyal and nonloyal users. We can select "home is best place" and "football" as independent variables, and drop the other five variables to avoid problems due to multicollinearity.
Applications in Marketing Research

It can be used in market segmentation for identifying the underlying variables on which to group the customers. New car buyers might be grouped based on the relative emphasis they place on economy, convenience, performance, comfort, and luxury. This might result in five segments: economy seekers, convenience seekers, performance seekers, comfort seekers, and luxury seekers.

In product research, factor analysis can be employed to determine the brand attributes that influence consumer choice. Toothpaste brands might be evaluated in terms of protection against cavities, whiteness of teeth, taste, fresh breath, and price.

In advertising studies, factor analysis can be used to understand the media consumption habits of the target market. The users of frozen foods may be heavy viewers of cable TV, see a lot of movies, and listen to country music.

In pricing studies, it can be used to identify the characteristics of price-sensitive consumers. These consumers might be methodical, economy minded, and home centered.
Factor Analysis Model

Mathematically, each variable is expressed as a linear combination of underlying factors. The covariation among the variables is described in terms of a small number of common factors plus a unique factor for each variable.

If the variables are standardized, the factor analysis model may be represented as:

Xi = Ai1F1 + Ai2F2 + Ai3F3 + . . . + AimFm + ViUi

where
Xi = ith standardized variable
Aij = standardized multiple regression coefficient of variable i on common factor j
Fj = common factor j
Vi = standardized regression coefficient of variable i on unique factor i
Ui = the unique factor for variable i
m = number of common factors
Factor Analysis Model

The unique factors are uncorrelated with each other and with the common factors.

The common factors themselves can be expressed as linear combinations of the observed variables:

Fi = Wi1X1 + Wi2X2 + Wi3X3 + . . . + WikXk

where
Fi = estimate of ith factor
Wij = weight or factor score coefficient
k = number of variables
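A small simulation illustrates the model. The loading matrix A below is hypothetical; each standardized variable is built as a linear combination of two common factors plus a weighted unique factor, so each variable's variance comes out near 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 2000, 2, 6          # respondents, common factors, variables

# Hypothetical standardized loading matrix A (p x m)
A = np.array([[0.90, 0.00], [0.00, 0.80], [0.85, 0.10],
              [0.10, 0.80], [-0.80, 0.00], [0.00, 0.85]])
V = np.sqrt(1.0 - (A ** 2).sum(axis=1))   # unique-factor weights so Var(X_i) = 1

F = rng.standard_normal((n, m))           # common factors F1, F2
U = rng.standard_normal((n, p))           # unique factors U1..U6
X = F @ A.T + U * V                       # X_i = A_i1 F_1 + A_i2 F_2 + V_i U_i

print(np.round(X.var(axis=0), 2))         # each variance is close to 1
```

Because the common and unique factors are independent standard normals, each Xi has unit variance split into a common part (the communality) and a unique part.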
Statistics Associated with
Factor Analysis
▪ Bartlett's test of sphericity. Bartlett's test of
sphericity is a test statistic used to examine the
hypothesis that the variables are uncorrelated in
the population. In other words, the population
correlation matrix is an identity matrix; each
variable correlates perfectly with itself (r = 1) but
has no correlation with the other variables (r = 0).
▪ Correlation matrix. A correlation matrix is a
lower triangle matrix showing the simple
correlations, r, between all possible pairs of
variables included in the analysis. The diagonal
elements, which are all 1, are usually omitted.
Statistics Associated with
Factor Analysis

▪ Communality. Communality is the amount of variance a variable shares with all the other variables being considered. This is also the proportion of variance explained by the common factors.
▪ Eigenvalue. The eigenvalue represents the total variance
explained by each factor.
▪ Factor loadings. Factor loadings are simple correlations
between the variables and the factors.
▪ Factor loading plot. A factor loading plot is a plot of the
original variables using the factor loadings as coordinates.
▪ Factor matrix. A factor matrix contains the factor loadings
of all the variables on all the factors extracted.
Statistics Associated with
Factor Analysis
▪ Factor scores. Factor scores are composite scores estimated for
each respondent on the derived factors.
▪ Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. The
Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy is an
index used to examine the appropriateness of factor analysis.
High values (between 0.5 and 1.0) indicate factor analysis is
appropriate. Values below 0.5 imply that factor analysis may not be
appropriate.
▪ Percentage of variance. The percentage of the total variance
attributed to each factor.
▪ Residuals are the differences between the observed correlations, as
given in the input correlation matrix, and the reproduced
correlations, as estimated from the factor matrix.
▪ Scree plot. A scree plot is a plot of the Eigenvalues against the
number of factors in order of extraction.
Conducting Factor Analysis
Respondent
Number     V1   V2   V3   V4   V5   V6
 1          7    3    6    4    2    4
 2          1    3    2    4    5    4
 3          6    2    7    4    1    3
 4          4    5    4    6    2    5
 5          1    2    2    3    6    2
 6          6    3    6    4    2    4
 7          5    3    6    3    4    3
 8          6    4    7    4    1    4
 9          3    4    2    3    6    3
10          2    6    2    6    7    6
11          6    4    7    3    2    3
12          2    3    1    4    5    4
13          7    2    6    4    1    3
14          4    6    4    5    3    6
15          1    3    2    2    6    4
16          6    4    6    3    3    4
17          5    3    6    3    3    4
18          7    3    7    4    1    4
19          2    4    3    3    6    3
20          3    5    3    6    4    6
21          1    3    2    3    5    3
22          5    4    5    4    2    4
23          2    2    1    5    4    4
24          4    6    4    6    4    7
25          6    5    4    2    1    4
26          3    5    4    6    4    7
27          4    4    7    2    2    5
28          3    7    2    6    4    3
29          4    6    3    7    2    7
30          2    3    2    4    7    2
Conducting Factor Analysis
Problem Formulation

Construction of the Correlation Matrix

Method of Factor Analysis

Determination of Number of Factors

Rotation of Factors

Interpretation of Factors

Calculation of Factor Scores or Selection of Surrogate Variables

Determination of Model Fit


Conducting Factor Analysis:
Formulate the Problem

The variables to be included in the factor analysis should be specified based on past research, theory, and judgment of the researcher.
SPSS Windows

To select this procedure using SPSS for Windows, click:

Analyze>Data Reduction>Factor …
SPSS Windows:
Principal Components
1. Select ANALYZE from the SPSS menu bar.
2. Click DATA REDUCTION and then FACTOR.
3. Move “Prevents Cavities [v1],” “Shiny Teeth [v2],” “Strengthen Gums [v3],”
“Freshens Breath [v4],” “Tooth Decay Unimportant [v5],” and “Attractive
Teeth [v6]” into the VARIABLES box
4. Click on DESCRIPTIVES. In the pop-up window, in the STATISTICS box check
INITIAL SOLUTION. In the CORRELATION MATRIX box, check KMO AND
BARTLETT’S TEST OF SPHERICITY and also check REPRODUCED. Click
CONTINUE.
5. Click on EXTRACTION. In the pop-up window, for METHOD select PRINCIPAL
COMPONENTS (default). In the ANALYZE box, check CORRELATION MATRIX.
In the EXTRACT box, check EIGENVALUES OVER 1 (default). In the DISPLAY
box, check UNROTATED FACTOR SOLUTION. Click CONTINUE.
6. Click on ROTATION. In the METHOD box, check VARIMAX. In the DISPLAY
box, check ROTATED SOLUTION. Click CONTINUE.
7. Click on SCORES. In the pop-up window, check DISPLAY FACTOR SCORE
COEFFICIENT MATRIX. Click CONTINUE.
8. Click OK.
Conducting Factor Analysis:
Construct the Correlation Matrix

▪ The analytical process is based on a matrix of correlations between the variables.
▪ Bartlett's test of sphericity can be used to test
the null hypothesis that the variables are
uncorrelated in the population.
▪ Another useful statistic is the Kaiser-Meyer-Olkin
(KMO) measure of sampling adequacy. Small
values of the KMO statistic indicate that the
correlations between pairs of variables cannot be
explained by other variables and that factor
analysis may not be appropriate.
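Both statistics are easy to compute directly. The sketch below implements Bartlett's sphericity statistic and the KMO index in NumPy using their commonly stated formulas; the data are simulated with a two-factor structure, so both diagnostics should favor factoring:

```python
import numpy as np

def bartlett_sphericity(data):
    """Chi-square statistic for H0: the population correlation matrix is identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    return chi2, p * (p - 1) // 2        # statistic and degrees of freedom

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)      # partial correlations (anti-image)
    np.fill_diagonal(partial, 0.0)
    np.fill_diagonal(R, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Simulated responses with a clear two-factor structure
rng = np.random.default_rng(2)
scores = rng.standard_normal((200, 2))
X = scores @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((200, 6))

chi2, df = bartlett_sphericity(X)
print(f"chi-square = {chi2:.1f}, df = {df}, KMO = {kmo(X):.2f}")
```

A large chi-square (relative to df = p(p-1)/2) rejects the identity-matrix hypothesis, and a KMO above 0.5 suggests factor analysis is appropriate.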
Correlation Matrix

Variables     V1       V2       V3       V4       V5       V6

V1         1.000
V2        -0.053    1.000
V3         0.873   -0.155    1.000
V4        -0.086    0.572   -0.248    1.000
V5        -0.858    0.020   -0.778   -0.007    1.000
V6         0.004    0.640   -0.018    0.640   -0.136    1.000


Conducting Factor Analysis:
Determine the Method of Factor Analysis

▪ In principal components analysis, the total variance in the data is considered. The diagonal of the correlation matrix consists of unities, and full variance is brought into the factor matrix.
▪ Principal components analysis is recommended when the
primary concern is to determine the minimum number of
factors that will account for maximum variance in the data
for use in subsequent multivariate analysis. The factors are
called principal components.
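Principal components can be sketched directly as an eigendecomposition of the correlation matrix. The data below are simulated, with six variables chosen to mirror the example:

```python
import numpy as np

# Simulated standardized ratings with two underlying components
rng = np.random.default_rng(3)
f = rng.standard_normal((300, 2))
X = f @ rng.standard_normal((2, 6)) + 0.4 * rng.standard_normal((300, 6))

R = np.corrcoef(X, rowvar=False)       # unities on the diagonal (full variance)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]      # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)  # component loadings
print(np.round(eigvals, 3))            # eigenvalues sum to 6, the variable count
```

Since the diagonal of R consists of unities, the eigenvalues sum to the number of variables, and each eigenvalue divided by that total gives the percentage of variance explained by the corresponding component.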
Conducting Factor Analysis:
Determine the Method of Factor Analysis

▪ In common factor analysis, the factors are estimated based only on the common variance. Communalities are inserted in the diagonal of the correlation matrix.
▪ This method is appropriate when the primary concern is to
identify the underlying dimensions and the common
variance is of interest. This method is also known as principal
axis factoring.
Results of
Principal Components Analysis
Communalities
Variables Initial Extraction
V1 1.000 0.926
V2 1.000 0.723
V3 1.000 0.894
V4 1.000 0.739
V5 1.000 0.878
V6 1.000 0.790

Initial Eigenvalues

Factor   Eigenvalue   % of variance   Cumulat. %
1 2.731 45.520 45.520
2 2.218 36.969 82.488
3 0.442 7.360 89.848
4 0.341 5.688 95.536
5 0.183 3.044 98.580
6 0.085 1.420 100.000
Results of
Principal Components Analysis
Extraction Sums of Squared Loadings
Factor   Eigenvalue   % of variance   Cumulat. %
1 2.731 45.520 45.520
2 2.218 36.969 82.488
Factor Matrix
Variables Factor 1 Factor 2
V1 0.928 0.253
V2 -0.301 0.795
V3 0.936 0.131
V4 -0.342 0.789
V5 -0.869 -0.351
V6 -0.177 0.871

Rotation Sums of Squared Loadings


Factor Eigenvalue % of variance Cumulat. %
1 2.688 44.802 44.802
2 2.261 37.687 82.488
Results of
Principal Components Analysis

Rotated Factor Matrix


Variables Factor 1 Factor 2
V1 0.962 -0.027
V2 -0.057 0.848
V3 0.934 -0.146
V4 -0.098 0.854
V5 -0.933 -0.084
V6 0.083 0.885

Factor Score Coefficient Matrix


Variables Factor 1 Factor 2
V1 0.358 0.011
V2 -0.001 0.375
V3 0.345 -0.043
V4 -0.017 0.377
V5 -0.350 -0.059
V6 0.052 0.395
Results of
Principal Components Analysis
The lower-left triangle contains the reproduced
correlation matrix; the diagonal, the communalities;
the upper-right triangle, the residuals between the
observed correlations and the reproduced
correlations.
Reproduced Correlation Matrix and Residuals
Variables V1 V2 V3 V4 V5 V6
V1 0.926 0.024 -0.029 0.031 0.038 -0.053
V2 -0.078 0.723 0.022 -0.158 0.038 -0.105
V3 0.902 -0.177 0.894 -0.031 0.081 0.033
V4 -0.117 0.730 -0.217 0.739 -0.027 -0.107
V5 -0.895 -0.018 -0.859 0.020 0.878 0.016
V6 0.057 0.746 -0.051 0.748 -0.152 0.790
Conducting Factor Analysis:
Determine the Number of Factors

▪ A Priori Determination. Sometimes, because of prior knowledge, the researcher knows how many factors to expect and thus can specify the number of factors to be extracted beforehand.
▪ Determination Based on Eigenvalues. In this approach,
only factors with Eigenvalues greater than 1.0 are
retained. An Eigenvalue represents the amount of
variance associated with the factor. Hence, only factors
with a variance greater than 1.0 are included. Factors
with variance less than 1.0 are no better than a single
variable, since, due to standardization, each variable has
a variance of 1.0. If the number of variables is less than
20, this approach will result in a conservative number of
factors.
Conducting Factor Analysis:
Determine the Number of Factors

▪ Determination Based on Scree Plot. A scree plot is a plot of the Eigenvalues against the number of factors in order of extraction. Experimental evidence indicates that the point at which the scree begins denotes the true number of factors. Generally, the number of factors determined by a scree plot will be one or a few more than that determined by the Eigenvalue criterion.
▪ Determination Based on Percentage of Variance. In
this approach the number of factors extracted is
determined so that the cumulative percentage of
variance extracted by the factors reaches a satisfactory
level. It is recommended that the factors extracted
should account for at least 60% of the variance.
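Applying the eigenvalue-over-1 and percentage-of-variance rules to the eigenvalues reported in the principal components output above:

```python
import numpy as np

# Eigenvalues from the principal components output reported above
eigenvalues = np.array([2.731, 2.218, 0.442, 0.341, 0.183, 0.085])

n_kaiser = int((eigenvalues > 1.0).sum())           # eigenvalue-over-1 rule
cum_var = 100 * np.cumsum(eigenvalues) / eigenvalues.sum()
n_60pct = int(np.searchsorted(cum_var, 60.0)) + 1   # factors needed to pass 60%

print(f"Kaiser rule: {n_kaiser} factors; cumulative %: {np.round(cum_var, 1)}")
```

Both rules agree here: two factors, together accounting for about 82.5% of the total variance.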
Scree Plot

[Scree plot: Eigenvalue (0.0 to 3.0) plotted against component number (1 to 6); the curve drops sharply after the second component and then levels off.]
Conducting Factor Analysis:
Determine the Number of Factors

▪ Determination Based on Split-Half Reliability. The sample is split in half and factor analysis is performed on each half. Only factors with high correspondence of factor loadings across the two subsamples are retained.
▪ Determination Based on Significance Tests. It is
possible to determine the statistical significance of the
separate Eigenvalues and retain only those factors that
are statistically significant. A drawback is that with
large samples (size greater than 200), many factors are
likely to be statistically significant, although from a
practical viewpoint many of these account for only a
small proportion of the total variance.
Conducting Factor Analysis:
Rotate Factors

▪ Although the initial or unrotated factor matrix indicates the relationship between the factors and individual variables, it seldom results in factors that can be interpreted, because the factors are correlated with many variables. Therefore, through rotation, the factor matrix is transformed into a simpler one that is easier to interpret.
▪ In rotating the factors, we would like each factor to have
nonzero, or significant, loadings or coefficients for only
some of the variables. Likewise, we would like each
variable to have nonzero or significant loadings with only a
few factors, if possible with only one.
▪ The rotation is called orthogonal rotation if the axes are
maintained at right angles.
Conducting Factor Analysis:
Rotate Factors

▪ The most commonly used method for rotation is the varimax procedure. This is an orthogonal method of rotation that minimizes the number of variables with high loadings on a factor, thereby enhancing the interpretability of the factors. Orthogonal rotation results in factors that are uncorrelated.
▪ The rotation is called oblique rotation when the axes
are not maintained at right angles, and the factors are
correlated. Sometimes, allowing for correlations
among factors can simplify the factor pattern matrix.
Oblique rotation should be used when factors in the
population are likely to be strongly correlated.
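A compact NumPy sketch of the varimax procedure (the standard SVD-based iteration), applied to the unrotated factor matrix from the output above. Because the rotation is orthogonal, each variable's communality is left unchanged:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """SVD-based varimax iteration (a standard sketch of the procedure)."""
    p, m = loadings.shape
    R = np.eye(m)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p))
        R = u @ vt                       # best orthogonal rotation this step
        if s.sum() < crit_old * (1 + tol):
            break
        crit_old = s.sum()
    return loadings @ R

# Unrotated factor matrix from the principal components output above
A = np.array([[0.928, 0.253], [-0.301, 0.795], [0.936, 0.131],
              [-0.342, 0.789], [-0.869, -0.351], [-0.177, 0.871]])
rotated = varimax(A)
print(np.round(rotated, 3))
```

Up to column signs and ordering, the result matches the rotated factor matrix reported in the output, with each variable now loading highly on only one factor.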
Factor Matrix Before and After
Rotation

(a) High Loadings Before Rotation      (b) High Loadings After Rotation

Variables   Factor 1   Factor 2        Variables   Factor 1   Factor 2
1              X                       1              X
2              X          X            2                         X
3              X                       3              X
4              X          X            4                         X
5              X          X            5              X
6                         X            6                         X
Conducting Factor Analysis:
Interpret Factors

▪ A factor can then be interpreted in terms of the variables that load high on it.
▪ Another useful aid in interpretation is to
plot the variables, using the factor
loadings as coordinates. Variables at the
end of an axis are those that have high
loadings on only that factor, and hence
describe the factor.
Factor Loading Plot

[Component plot in rotated space: the six variables plotted using their loadings on components 1 and 2 as coordinates. V1 and V3 lie near the positive end of component 1 and V5 near its negative end; V2, V4, and V6 lie near the positive end of component 2.]

Rotated Component Matrix

              Component
Variable      1         2
V1          0.962    -0.027
V2         -0.057     0.848
V3          0.934    -0.146
V4         -0.098     0.854
V5         -0.933    -0.084
V6          0.083     0.885
Conducting Factor Analysis:
Calculate Factor Scores

The factor scores for the ith factor may be estimated as follows:

Fi = Wi1 X1 + Wi2 X2 + Wi3 X3 + . . . + Wik Xk
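Using the factor score coefficient matrix W from the rotated solution above and hypothetical raw ratings, the scores reduce to a weighted sum of the standardized variables:

```python
import numpy as np

# Factor score coefficient matrix W from the rotated solution above
W = np.array([[ 0.358,  0.011], [-0.001,  0.375], [ 0.345, -0.043],
              [-0.017,  0.377], [-0.350, -0.059], [ 0.052,  0.395]])

# Hypothetical raw ratings for 30 respondents on the six variables
rng = np.random.default_rng(4)
X = rng.normal(4.0, 1.5, (30, 6))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # model assumes standardized variables

scores = Z @ W                             # F_i = W_i1 X_1 + ... + W_ik X_k
print(scores.shape, np.round(scores.mean(axis=0), 6))
```

Because the variables are standardized before weighting, each column of factor scores has mean zero across respondents.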


Conducting Factor Analysis:
Select Surrogate Variables

▪ By examining the factor matrix, one could select for each factor the variable with the highest loading on that factor. That variable could then be used as a surrogate variable for the associated factor.
▪ However, the choice is not as easy if two or
more variables have similarly high loadings. In
such a case, the choice between these
variables should be based on theoretical and
measurement considerations.
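With the rotated loadings from the output above, surrogate selection reduces to taking, for each factor, the variable with the largest absolute loading:

```python
import numpy as np

# Rotated loadings from the principal components output above
names = ["V1", "V2", "V3", "V4", "V5", "V6"]
L = np.array([[0.962, -0.027], [-0.057, 0.848], [0.934, -0.146],
              [-0.098, 0.854], [-0.933, -0.084], [0.083, 0.885]])

# For each factor, pick the variable with the largest absolute loading
surrogates = [names[i] for i in np.abs(L).argmax(axis=0)]
print(surrogates)   # ['V1', 'V6']
```

Here V6 only narrowly beats V2 and V4 on factor 2, which is exactly the situation where theoretical and measurement considerations should guide the final choice.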
Conducting Factor Analysis:
Determine the Model Fit

▪ The correlations between the variables can be deduced or reproduced from the estimated correlations between the variables and the factors.
▪ The differences between the observed
correlations (as given in the input correlation
matrix) and the reproduced correlations (as
estimated from the factor matrix) can be
examined to determine model fit. These
differences are called residuals.
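This check can be sketched with the rotated loadings and the input correlation matrix from the example (the V1-V2 entry is taken as -0.053, consistent with the reproduced value of -0.078 and residual of 0.024 reported above):

```python
import numpy as np

# Rotated loadings and observed correlations from the example above
A = np.array([[0.962, -0.027], [-0.057, 0.848], [0.934, -0.146],
              [-0.098, 0.854], [-0.933, -0.084], [0.083, 0.885]])
observed = np.array([
    [ 1.000, -0.053,  0.873, -0.086, -0.858,  0.004],
    [-0.053,  1.000, -0.155,  0.572,  0.020,  0.640],
    [ 0.873, -0.155,  1.000, -0.248, -0.778, -0.018],
    [-0.086,  0.572, -0.248,  1.000, -0.007,  0.640],
    [-0.858,  0.020, -0.778, -0.007,  1.000, -0.136],
    [ 0.004,  0.640, -0.018,  0.640, -0.136,  1.000]])

reproduced = A @ A.T                  # diagonal entries are the communalities
residuals = observed - reproduced     # small off-diagonal values => good fit
print(np.round(residuals, 3))
```

The off-diagonal residuals are all small here (the largest is about 0.16 in magnitude), indicating that the two-factor model reproduces the observed correlations well.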
Results of
Common Factor Analysis

Bartlett test of sphericity:
• Approx. Chi-Square = 111.314
• df = 15
• Significance = 0.00000

Kaiser-Meyer-Olkin measure of sampling adequacy = 0.660

Communalities
Variables   Initial   Extraction
V1          0.859     0.928
V2          0.480     0.562
V3          0.814     0.836
V4          0.543     0.600
V5          0.763     0.789
V6          0.587     0.723

Initial Eigenvalues
Factor Eigenvalue % of variance Cumulat. %
1 2.731 45.520 45.520
2 2.218 36.969 82.488
3 0.442 7.360 89.848
4 0.341 5.688 95.536
5 0.183 3.044 98.580
6 0.085 1.420 100.000
Results of
Common Factor Analysis
Extraction Sums of Squared Loadings
Factor Eigenvalue % of variance Cumulat. %
1 2.570 42.837 42.837
2 1.868 31.126 73.964

Factor Matrix
Variables Factor 1 Factor 2
V1 0.949 0.168
V2 -0.206 0.720
V3 0.914 0.038
V4 -0.246 0.734
V5 -0.850 -0.259
V6 -0.101 0.844

Rotation Sums of Squared Loadings


Factor Eigenvalue % of variance Cumulat. %
1 2.541 42.343 42.343
2 1.897 31.621 73.964
Results of
Common Factor Analysis

Rotated Factor Matrix


Variables Factor 1 Factor 2
V1 0.963 -0.030
V2 -0.054 0.747
V3 0.902 -0.150
V4 -0.090 0.769
V5 -0.885 -0.079
V6 0.075 0.847

Factor Score Coefficient Matrix


Variables Factor 1 Factor 2
V1 0.628 0.101
V2 -0.024 0.253
V3 0.217 -0.169
V4 -0.023 0.271
V5 -0.166 -0.059
V6 0.083 0.500
Results of
Common Factor Analysis

The lower-left triangle contains the reproduced correlation matrix; the diagonal, the communalities; the upper-right triangle, the residuals between the observed correlations and the reproduced correlations.

Reproduced Correlation Matrix and Residuals

Variables V1 V2 V3 V4 V5 V6
V1 0.928 0.022 -0.000 0.024 -0.008 -0.042
V2 -0.075 0.562 0.006 -0.008 0.031 0.012
V3 0.873 -0.161 0.836 -0.005 0.008 0.042
V4 -0.110 0.580 -0.197 0.600 -0.025 -0.004
V5 -0.850 -0.012 -0.786 0.019 0.789 0.003
V6 0.046 0.629 -0.060 0.645 -0.133 0.723
