SPSS Tutorial: Cluster Analysis
SPSS Tutorial
AEB 37 / AE 802
Marketing Research Methods
Week 7
Cluster analysis
Lecture / Tutorial outline
• Cluster analysis
• Example of cluster analysis
• Work on the assignment
Cluster Analysis
• A class of techniques used to classify cases into groups that are relatively homogeneous within themselves and heterogeneous from one another, on the basis of a defined set of variables. These groups are called clusters.
Steps to conduct a
Cluster Analysis
1. Select a distance measure
2. Select a clustering algorithm
3. Determine the number of clusters
4. Validate the analysis
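Outside SPSS, the four steps can be sketched with SciPy; the toy data, the Euclidean metric, Ward's algorithm, and the four-cluster cut are all illustrative choices, not part of the tutorial dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))          # toy data: 20 cases, 3 variables

# 1. Select a distance measure
d = pdist(X, metric="euclidean")

# 2. Select a clustering algorithm
Z = linkage(d, method="ward")

# 3. Determine the number of clusters (here fixed at 4 for illustration)
labels = fcluster(Z, t=4, criterion="maxclust")

# 4. Validate the analysis, e.g. by inspecting cluster sizes
sizes = np.bincount(labels)[1:]
```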
[Scatter plot of cases; vertical axis: REGR factor score 1 for analysis]
Clustering procedures
• Hierarchical procedures
– Agglomerative (start from n clusters to get to 1 cluster)
– Divisive (start from 1 cluster to get to n clusters)
• Non-hierarchical procedures
– K-means clustering
Agglomerative clustering
• Linkage methods
– Single linkage (minimum distance)
– Complete linkage (maximum distance)
– Average linkage
• Ward’s method
1. Compute the sum of squared distances within clusters
2. Aggregate the two clusters with the minimum increase in the overall sum of squares
• Centroid method
– The distance between two clusters is defined as the distance between the centroids (cluster averages)
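For reference, the same criteria appear as `method` options of SciPy's `linkage` function (a sketch on made-up data; in SPSS the equivalent choice is made through the clustering dialogs):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 2))          # toy data: 15 cases, 2 variables

# How each method defines the distance between two clusters:
#   single   - minimum distance between members
#   complete - maximum distance between members
#   average  - average distance between members
#   ward     - minimum increase in the within-cluster sum of squares
#   centroid - distance between the cluster centroids
linkages = {m: linkage(X, method=m)
            for m in ("single", "complete", "average", "ward", "centroid")}

# Each linkage matrix records the n-1 merges for n cases
shapes = {m: Z.shape for m, Z in linkages.items()}
```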
K-means clustering
1. The number k of clusters is fixed
2. An initial set of k “seeds” (aggregation centres) is provided
• First k elements
• Other seeds
3. Given a certain threshold, all units are assigned to the nearest cluster seed
4. New seeds are computed
5. Go back to step 3 until no reclassification is necessary
Units can be reassigned in successive steps (optimising partitioning)
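The algorithm itself is short; a minimal NumPy sketch of the five steps, using first-k seeding and no distance threshold, on invented toy data:

```python
import numpy as np

def k_means(X, k, max_iter=100):
    seeds = X[:k].copy()                              # 2. seeds: first k elements
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # 3. assign every unit to the nearest cluster seed
        dists = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 4. new seeds = mean of the units assigned to each cluster
        new_seeds = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 5. repeat until no seed moves (no reclassification)
        if np.allclose(new_seeds, seeds):
            break
        seeds = new_seeds
    return labels, seeds

# Two artificial groups in 2-D, purely for illustration
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels, centres = k_means(X, k=2)
```

Because units are re-tested against the updated seeds on every pass, a unit can move between clusters in successive iterations, which is the optimising-partitioning behaviour noted above.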
Hierarchical vs non-hierarchical methods

Hierarchical clustering
• No decision about the number of clusters
• Problems when data contain a high level of error
• Can be very slow
• Initial decisions are more influential (one-step only)

Non-hierarchical clustering
• Faster, more reliable
• Need to specify the number of clusters (arbitrary)
• Need to set the initial seeds (arbitrary)
Suggested approach
1. First perform a hierarchical method to define the number of clusters
2. Then use the k-means procedure to actually form the clusters
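Outside SPSS, this two-step recipe can be sketched with SciPy and scikit-learn; the data and the jump-based choice of k are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))          # stand-in for component scores

# Step 1: hierarchical run (Ward), used only to choose the number of clusters
Z = linkage(X, method="ward")
heights = Z[:, 2]                     # distance coefficient at each merge step
elbow_step = int(np.argmax(np.diff(heights))) + 1   # last step before the biggest jump
k = len(X) - elbow_step               # clusters = cases minus elbow step

# Step 2: k-means actually forms the clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
```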
[Plot: agglomeration distance against number of clusters (11 down to 1)]
Validating the analysis
• Impact of initial seeds / order of cases
• Impact of the selected method
• Consider the relevance of the chosen set of variables
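A simple stability check along these lines is to re-run k-means from several different seeds and compare the resulting partitions (a sketch using scikit-learn's adjusted Rand index; the three artificial groups are an assumption, chosen so the solution should be stable):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
# Three well-separated artificial groups of cases
X = np.vstack([rng.normal(c, 0.2, (30, 2)) for c in (0, 4, 8)])

# Same model, different random seeds
runs = [KMeans(n_clusters=3, n_init=10, random_state=s).fit_predict(X)
        for s in range(5)]

# Agreement of each run with the first one (1.0 = identical partition)
scores = [adjusted_rand_score(runs[0], r) for r in runs[1:]]
```

Low or unstable scores would suggest the solution depends heavily on the initial seeds, which is exactly the arbitrariness flagged in the comparison above.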
SPSS Example
[Scatter plot of respondents on Component1 (horizontal) and Component2 (vertical): MATTHEW, JULIA, LUCY, JENNIFER, NICOLE, JOHN, PAMELA, THOMAS, ARTHUR, FRED]
Agglomeration Schedule
Number of clusters: 10 cases - 6 (elbow step) = 4
[Same scatter plot on Component1 and Component2, with cluster membership (1-4) marked for each respondent]
The supermarkets.sav
dataset
Run Principal Components Analysis and save scores
• Select the variables to perform the analysis
• Set the rule to extract the principal components
• Give the instruction to save the principal component scores as new variables
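For reference, the equivalent step outside SPSS can be sketched with scikit-learn; the data and the eigenvalue-greater-than-one extraction rule are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
data = rng.normal(size=(100, 6))      # stand-in for the selected variables

# Standardise so components are extracted from the correlation structure
Z = StandardScaler().fit_transform(data)

# Extraction rule (one common choice): keep components with eigenvalue > 1
pca = PCA().fit(Z)
n_comp = int((pca.explained_variance_ > 1).sum())

# "Save as new variables": the component scores
scores = PCA(n_components=n_comp).fit_transform(Z)
```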
Cluster analysis: basic steps
• Apply Ward’s method to the principal component scores
• Check the agglomeration schedule
• Decide the number of clusters
• Apply the k-means method
Analyse / Classify
[SPSS screenshot: dialog callouts — “Click here first”, “Select method here”]
Output: Agglomeration schedule

Number of clusters: identify the step where the “distance coefficients” make the biggest jump
[Plot: distance coefficient (0-800) against agglomeration step (118-148)]

Number of clusters
Number of cases: 150
Step of ‘elbow’: 144
__________________________________
Number of clusters: 150 - 144 = 6
K-means
[SPSS screenshot: K-Means Cluster dialog callouts — “Click here first”, “Specify number of clusters”, “Tick here”]
Final output
Cluster membership
Component meaning (tutorial week 5)
1. “Old Rich Big Spender”
2. Family shopper
3. Vegetarian TV lover
4. Organic radio listener
5. Vegetarian TV and web hater

Component Matrix (Extraction Method: Principal Component Analysis; 5 components extracted)

Variable                       |    1  |    2  |    3  |    4  |    5
Monthly amount spent           |  .810 | -.294 | -.043 |  .183 |  .173
Meat expenditure               |  .480 | -.152 |  .347 |  .334 | -.060
Fish expenditure               |  .525 | -.206 | -.475 | -.044 |  .140
Vegetables expenditure         |  .192 | -.345 | -.127 |  .383 |  .199
% spent in own-brand product   |  .646 | -.281 | -.134 | -.239 | -.207
Own a car                      |  .536 |  .619 | -.102 | -.172 |  .060
% spent in organic food        |  .492 | -.186 |  .190 |  .460 |  .342
Vegetarian                     |  .018 | -.092 |  .647 | -.287 |  .507
Household Size                 |  .649 |  .612 |  .135 | -.061 | -.003
Number of kids                 |  .369 |  .663 |  .247 |  .184 |  .017
Weekly TV watching (hours)     |  .124 | -.095 |  .462 |  .232 | -.529
Weekly Radio listening (hours) |  .030 |  .406 | -.349 |  .559 | -.081
Surf the web                   |  .443 | -.271 |  .182 | -.056 | -.465
Yearly household income        |  .908 | -.047 | -.075 | -.197 | -.033
Age of respondent              |  .891 | -.056 | -.067 | -.228 |  .001
Final cluster centres (REGR factor scores for analysis 1)

Factor score   | Cluster 1 |    2    |    3     |    4    |    5     |    6
Factor score 1 |  -1.34392 |  .21758 |   .13646 |  .77126 |   .40776 |   .72711
Factor score 2 |    .38724 | -.57755 | -1.12759 |  .84536 |   .57109 |  -.58943
Factor score 3 |   -.22215 | -.09743 |  1.41343 |  .17812 |  1.05295 | -1.39335
Factor score 4 |    .15052 | -.28837 |  -.30786 | 1.09055 | -1.34106 |   .04972
Factor score 5 |    .04886 | -.93375 |  1.23631 | -.11108 |   .31902 |   .87815
Cluster interpretation through mean component values
• Cluster 1 is very far from profile 1 (-1.34) and more similar to profile 2 (0.38)
• Cluster 2 is very far from profile 5 (-0.93) and not particularly similar to any profile
• Cluster 3 is extremely similar to profiles 3 and 5 and very far from profile 2
• Cluster 4 is similar to profiles 2 and 4
• Cluster 5 is very similar to profile 3 and very far from profile 4
• Cluster 6 is very similar to profile 5 and very far from profile 3
Which cluster to target?
• Objective: target the organic consumer
• Which cluster looks the most “organic”?
• Compute descriptive statistics on the original variables for that cluster
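With cluster membership saved as a new variable, the profiling step is plain descriptive statistics; a pandas sketch (the column names and data are invented for illustration, not taken from supermarkets.sav):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "organic_share": rng.uniform(0, 40, 150),   # hypothetical: % spent on organic food
    "monthly_spend": rng.normal(300, 60, 150),  # hypothetical: monthly amount spent
    "cluster": rng.integers(1, 7, 150),         # saved cluster membership (1-6)
})

# The "most organic" cluster: highest mean organic share
target = df.groupby("cluster")["organic_share"].mean().idxmax()

# Descriptive statistics on the original variables for that cluster
stats = df[df["cluster"] == target].describe()
```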
Representation of factors 1 and 4 (and cluster membership)
[Scatter plot: REGR factor score 1 (horizontal) against REGR factor score 4 (vertical), with cases coded by cluster number (1-6)]