Chap7 Basic Cluster Analysis
Inter-cluster distances are maximized
Intra-cluster distances are minimized
Cluster 4 (Oil-UP): Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP
Summarization
– Reduce the size of large data sets
[Figure: clustering precipitation in Australia]
Results of a query
– Groupings are a result of an external specification
– Clustering is a grouping of objects based on the data
Supervised classification
– Have class label information
Association Analysis
– Local vs. global connections
Partitional Clustering
– A division of data objects into non-overlapping subsets
(clusters) such that each data object is in exactly one subset
Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
[Figure: hierarchical clusterings of points p1–p4 and their corresponding dendrograms]
Well-separated clusters
Center-based clusters
Contiguous clusters
Density-based clusters
Shared-property or conceptual clusters
Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.
3 well-separated clusters
4 center-based clusters
8 contiguous clusters
Density-based
– A cluster is a dense region of points, which is separated by
low-density regions, from other regions of high density.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.
6 density-based clusters
2 Overlapping Circles
Partitioning criteria
– Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
Separation of clusters
– Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
Similarity measure
– Distance-based (e.g., Euclidean, road network, vector) vs.
connectivity-based (e.g., density or contiguity)
Clustering space
– Full space (often when low dimensional) vs. subspaces (often in
high-dimensional clustering)
Ack: Jiawei Han, Micheline Kamber, and Jian Pei, University of Illinois at Urbana-Champaign & Simon Fraser University
Requirements and Challenges
Scalability
– Clustering all the data instead of only on samples
Ability to deal with different types of attributes
– Numerical, binary, categorical, ordinal, linked, and mixture of
these
Constraint-based clustering
User may give inputs on constraints
Hierarchical clustering
Density-based clustering
[Figure: K-means cluster assignments over successive iterations]
[Figure: original points and two different K-means clusterings of the same data]
[Figure: K-means iterations for a different choice of initial centroids]
[Figure: K-means clustering of the 10-cluster data set at Iteration 4]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Figure: K-means iterations 1–4 on the 10-cluster data set]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Figure: K-means clustering of the 10-cluster data set at Iteration 4 for this initialization]
Starting with some pairs of clusters having three initial centroids, while others have only one.
[Figure: K-means iterations 1–4 with this initialization]
Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to Initial Centroids Problem
Multiple runs
– Helps, but probability is not on your side
Sample and use hierarchical clustering to
determine initial centroids
Select more than k initial centroids and then
select among these initial centroids
– Select most widely separated
Postprocessing
Generate a larger number of clusters and then
perform a hierarchical clustering
Bisecting K-means
– Not as susceptible to initialization issues
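A minimal sketch of the "multiple runs" idea using scikit-learn's KMeans (the data set and parameter values are illustrative): n_init random initializations are tried and the run with the lowest SSE (inertia_) is kept.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Illustrative data: 5 well-separated blobs in 2-D.
X, _ = make_blobs(n_samples=500, centers=5, cluster_std=0.6, random_state=0)

# Multiple runs: n_init random initializations are tried and scikit-learn
# keeps the solution with the lowest SSE (inertia_).
best = KMeans(n_clusters=5, init="random", n_init=20, random_state=0).fit(X)
print("best SSE over 20 random initializations:", best.inertia_)

# A single run (n_init=1) may get stuck with a poor set of initial centroids.
single = KMeans(n_clusters=5, init="random", n_init=1, random_state=1).fit(X)
print("SSE of a single run:", single.inertia_)
```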
K-means++
1. For each unselected point x, compute its smallest distance d(x) to the already initialized cluster means.
2. Select the next cluster mean at random from the unselected points, with probability proportional to d(x)². Thus, the farthest points are most likely to be selected as cluster means.
3. Repeat steps 1 and 2 until all k cluster means are initialized.
[Example: k-means++ selection on one-dimensional points (6.5, 9, 10, 15, 16, 18.5); X marks the selected cluster means]
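A minimal NumPy sketch of the initialization steps above, using a few 1-D points like those in the example (values are only for illustration):

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """Sketch of k-means++ initialization: first mean uniform at random,
    each later mean chosen with probability proportional to d(x)^2,
    the squared distance to its nearest already-chosen mean."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    while len(centers) < k:
        # Step 1: each point's smallest squared distance to the chosen means.
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        # Step 2: far-away points are the most likely to be picked next.
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

X = np.array([[6.5], [9.0], [10.0], [15.0], [16.0], [18.5]])  # 1-D toy points
print(kmeans_pp_init(X, k=3))
```

Note that scikit-learn's KMeans uses this k-means++ scheme by default (init="k-means++").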
Empty Clusters
Several strategies
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be
repeated several times.
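A sketch of the first strategy above: replace an empty cluster's centroid with the point that currently contributes most to the SSE (function and data are illustrative).

```python
import numpy as np

def fix_empty_clusters(X, centroids, labels):
    """Replace an empty cluster's centroid with the point that currently
    contributes most to the SSE (the first strategy listed above)."""
    sq_err = ((X - centroids[labels]) ** 2).sum(axis=1)   # each point's SSE contribution
    for c in range(len(centroids)):
        if not np.any(labels == c):                       # cluster c ended up empty
            worst = np.argmax(sq_err)                     # biggest SSE contributor
            centroids[c] = X[worst]
            labels[worst] = c
            sq_err[worst] = 0.0                           # avoid reusing the same point
    return centroids, labels

# Tiny demo: cluster 2 is empty until the fix reassigns the worst-fit point.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centroids = np.array([[0.1, 0.05], [5.05, 4.95], [9.0, 9.0]])
labels = np.array([0, 0, 1, 1])
print(fix_empty_clusters(X, centroids, labels))
```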
Pre-processing
– Normalize the data
– Eliminate outliers
Post-processing
– Eliminate small clusters that may represent outliers
– Split ‘loose’ clusters, i.e., clusters with relatively high
SSE
– Merge clusters that are ‘close’ and that have relatively
low SSE
– Can use these steps during the clustering process
ISODATA
CLUTO: http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview
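A small sketch of the pre-/post-processing ideas above (the data, the number of clusters, and the size threshold are illustrative): normalize the attributes before clustering, then drop very small clusters as likely outliers.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)) * [1.0, 50.0]       # attributes on very different scales

# Pre-processing: normalize so one attribute does not dominate the distances.
Xn = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Xn)

# Post-processing: drop very small clusters as likely outliers (threshold illustrative).
sizes = np.bincount(labels, minlength=10)
keep = np.isin(labels, np.where(sizes >= 5)[0])
print("points kept:", keep.sum(), "of", len(X))
```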
[Figure: nested clusters of six points and the corresponding dendrogram]
– Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains an individual
point (or there are k clusters)
[Figure: starting situation — each point p1, p2, ..., p12 is its own cluster, with the proximity matrix of individual points]
[Figure: intermediate situation — clusters C1–C5 and their proximity matrix]
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: proximity matrix before and after merging C2 and C5; the entries for the new cluster C2 ∪ C5 (marked "?") must be recomputed]
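A sketch of the merge-and-update loop described above. The "?" entries are filled in here with the single-link (MIN) rule, one of the options listed next; the twelve random points are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def agglomerative_single_link(X, k):
    """Naive agglomerative clustering sketch: repeatedly merge the two
    closest clusters and update the proximity matrix (single-link update)."""
    clusters = [[i] for i in range(len(X))]          # start: every point is a cluster
    prox = squareform(pdist(X))                      # cluster-to-cluster proximities
    np.fill_diagonal(prox, np.inf)
    while len(clusters) > k:
        i, j = np.unravel_index(np.argmin(prox), prox.shape)
        i, j = min(i, j), max(i, j)
        clusters[i] = clusters[i] + clusters[j]      # merge Cj into Ci
        del clusters[j]
        # Update the proximity matrix: single link keeps the minimum distance.
        prox[i, :] = np.minimum(prox[i, :], prox[j, :])
        prox[:, i] = prox[i, :]
        prox = np.delete(np.delete(prox, j, axis=0), j, axis=1)
        prox[i, i] = np.inf
    return clusters

X = np.random.default_rng(1).random((12, 2))         # p1 ... p12
print(agglomerative_single_link(X, k=3))
```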
How to define inter-cluster similarity?
MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
– Ward's Method uses squared error
[Figure: two clusters of points and their proximity matrix]
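A sketch comparing these definitions with scipy's hierarchical clustering (the six random points are illustrative); "single", "complete", "average", and "ward" correspond to MIN, MAX, Group Average, and Ward's Method.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

X = np.random.default_rng(0).random((6, 2))          # six illustrative 2-D points
d = pdist(X)                                          # condensed Euclidean distance matrix

# MIN = 'single', MAX = 'complete', Group Average = 'average',
# Ward's method ('ward') minimizes the increase in squared error at each merge.
for method in ["single", "complete", "average", "ward"]:
    Z = linkage(d, method=method)                     # the merge tree (dendrogram data)
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two clusters
    print(f"{method:8s} -> cluster labels {labels}")
```

scipy.cluster.hierarchy.dendrogram(Z) can be used to draw trees like the ones shown in the figures below.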
[Figure: MIN (single link) — original points, the two clusters, the distance matrix, and the resulting dendrogram for the six points]
[Figure: MAX (complete link) — nested clusters and dendrogram for the six points]
Group Average: the proximity of two clusters is the average of the pairwise proximities between their points:
proximity(Cluster_i, Cluster_j) = Σ proximity(p_i, p_j) / ( |Cluster_i| × |Cluster_j| ),
where the sum runs over all pairs with p_i in Cluster_i and p_j in Cluster_j.
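A minimal sketch of this formula, assuming Euclidean distance as the point-to-point proximity (the two clusters are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

def group_average_proximity(cluster_i, cluster_j):
    """Sum of all pairwise proximities between the two clusters,
    divided by |Cluster_i| * |Cluster_j|."""
    return cdist(cluster_i, cluster_j).sum() / (len(cluster_i) * len(cluster_j))

Ci = np.array([[0.0, 0.0], [0.1, 0.2]])               # illustrative points
Cj = np.array([[1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])
print(group_average_proximity(Ci, Cj))
```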
[Figure: Group Average — nested clusters and dendrogram for the six points]
Strengths
– Less susceptible to noise and outliers
Limitations
– Biased towards globular clusters
[Figure: comparison of MIN, MAX, Group Average, and Ward's Method clusterings of the same six points]
[Figure: DBSCAN example points with MinPts = 7]
When DBSCAN works well:
• Resistant to Noise
• Can handle clusters of different shapes and sizes
When DBSCAN does not work well:
• Varying densities
• High-dimensional data
[Figure: original points and DBSCAN results for (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92)]
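A minimal DBSCAN sketch with scikit-learn; the data set and the Eps/MinPts values are illustrative, not the ones from the figures above.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Illustrative data with non-globular shape and some noise.
X, _ = make_moons(n_samples=400, noise=0.08, random_state=0)

# eps and min_samples are DBSCAN's Eps and MinPts; the values here are illustrative.
labels = DBSCAN(eps=0.12, min_samples=7).fit_predict(X)

print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("noise points (label -1):", np.sum(labels == -1))
```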
DBSCAN: Determining EPS and MinPts
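The usual heuristic behind this topic (the slide's own text did not survive extraction): points inside a cluster have their k-th nearest neighbor at roughly the same small distance, while noise points have it much farther away, so plot every point's sorted k-dist and pick Eps near the knee, with k = MinPts. A sketch under that assumption, with illustrative data and k:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=0)

k = 4  # use k = MinPts
# Distance of every point to its k-th nearest neighbor (excluding itself).
dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
kdist = np.sort(dist[:, k])

# Cluster points have small, similar k-dist values; noise points have much
# larger ones, so Eps is usually chosen near the "knee" of this curve.
plt.plot(kdist)
plt.xlabel("points sorted by k-dist")
plt.ylabel(f"distance to {k}th nearest neighbor")
plt.show()
```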
[Figure: similarity matrix sorted by cluster labels for well-separated clusters, alongside the clustered points]
[Figure: similarity matrix and corresponding points — DBSCAN]
[Figure: similarity matrix and corresponding points — K-means]
[Figure: similarity matrix and corresponding points — Complete Link]
[Figure: similarity matrix for DBSCAN clusters on a larger data set (about 3000 points, clusters labeled 1–7)]
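A sketch of the similarity-matrix idea: sort the rows and columns of a point-to-point similarity matrix by cluster label, so that good clusters appear as bright diagonal blocks. The data, the similarity definition, and the clustering method are illustrative.

```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Turn distances into similarities (an illustrative choice): values near 1
# inside a diagonal block mean points in the same cluster really are close.
D = squareform(pdist(X))
S = 1.0 - D / D.max()

order = np.argsort(labels)                 # sort rows/columns by cluster label
plt.imshow(S[order][:, order], cmap="viridis")
plt.colorbar(label="Similarity")
plt.xlabel("Points"); plt.ylabel("Points")
plt.show()
```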
[Figure: a two-dimensional data set and its SSE curve as a function of the number of clusters K (K = 2 to 30)]
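A sketch of such an SSE-versus-K curve (the data set and the range of K are illustrative); knees in the curve suggest natural numbers of clusters.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=10, cluster_std=0.5, random_state=0)

ks = range(2, 31)
sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

# Knees in the SSE curve suggest natural numbers of clusters.
plt.plot(list(ks), sse, marker="o")
plt.xlabel("K"); plt.ylabel("SSE")
plt.show()
```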
Example
– Compare SSE of 0.005 against three clusters in random data
– Histogram shows SSE of three clusters in 500 sets of random data
points of size 100 distributed over the range 0.2 – 0.8 for x and y
values
[Figure: the clustered points and a histogram of the SSE values (roughly 0.016 to 0.034) for the 500 random data sets]
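A sketch of the comparison described above: cluster many random data sets of the stated size and range with K = 3 and histogram the resulting SSE values. The absolute SSE scale depends on how SSE is normalized, so the numbers will not necessarily match the figure.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# SSE of three clusters in 500 sets of 100 random points in [0.2, 0.8] x [0.2, 0.8].
sse_random = []
for _ in range(500):
    X = rng.uniform(0.2, 0.8, size=(100, 2))
    sse_random.append(KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).inertia_)

plt.hist(sse_random, bins=20)
plt.xlabel("SSE"); plt.ylabel("Count")
plt.show()

# An observed SSE far below this distribution is very unlikely to come from
# random data, which supports the validity of the found clusters.
```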
BSS = Σ_i |C_i| (m − m_i)²
– Where |C_i| is the size of cluster i, m_i is its mean, and m is the overall mean
Internal Measures: Cohesion and Separation
Example: SSE
– BSS + WSS = constant
[Figure: a one-dimensional example with cluster means m1 and m2 and overall mean m, showing the distances used for cohesion (within-cluster) and separation (between-cluster)]
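A sketch verifying BSS + WSS = constant on a small one-dimensional example (the data values are illustrative, chosen to resemble the figure):

```python
import numpy as np

X = np.array([1.0, 2.0, 4.0, 5.0])         # illustrative 1-D data
m = X.mean()                               # overall mean

def wss_bss(clusters):
    wss = sum(((c - c.mean()) ** 2).sum() for c in clusters)       # cohesion
    bss = sum(len(c) * (m - c.mean()) ** 2 for c in clusters)      # separation
    return wss, bss

# K = 1: a single all-inclusive cluster -> BSS is 0, WSS carries everything.
print("K=1:", wss_bss([X]))
# K = 2: split into {1, 2} and {4, 5} -> WSS drops, BSS grows.
print("K=2:", wss_bss([X[:2], X[2:]]))
# In both cases WSS + BSS equals the total SSE about the overall mean.
print("total:", ((X - m) ** 2).sum())
```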
For an individual point i, the silhouette coefficient is s = (b − a) / max(a, b), where a is the average distance from i to the other points in its own cluster and b is the minimum average distance from i to the points of another cluster.
– Typically between 0 and 1.
– The closer to 1 the better.
[Figure: the distances used to calculate a (within the cluster) and b (to the nearest other cluster)]
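A sketch computing silhouette values with scikit-learn (the data set and K are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Per-point silhouette values s = (b - a) / max(a, b) and their average.
s = silhouette_samples(X, labels)
print("average silhouette:", silhouette_score(X, labels))
print("worst point:", s.min(), "best point:", s.max())
```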
[Figure: scatter plot of points in the unit square]
Avg. H = 0.56 with a standard deviation of 0.03 for p = 20 and 100 trials
[Figure: scatter plot of points in the unit square]
Avg. H = 0.95 with a standard deviation of 0.006 for p = 20 and 100 trials
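The H values quoted above read like the Hopkins statistic for clustering tendency; here is a minimal sketch under that assumption. The sign convention is chosen so that H near 0.5 suggests random data and H near 1 suggests clustered data, matching the quoted averages; the data set is illustrative.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

def hopkins(X, p=20, rng=None):
    """Hopkins statistic sketch: H near 0.5 suggests random data,
    H near 1 suggests clustered data."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    # u: distances from p uniformly generated points to their nearest data point.
    U = rng.uniform(X.min(axis=0), X.max(axis=0), size=(p, d))
    u = nn.kneighbors(U, n_neighbors=1)[0].ravel()
    # w: distances from p sampled data points to their nearest other data point.
    sample = X[rng.choice(n, size=p, replace=False)]
    w = nn.kneighbors(sample, n_neighbors=2)[0][:, 1]
    return u.sum() / (u.sum() + w.sum())

X, _ = make_blobs(n_samples=500, centers=5, cluster_std=0.4, random_state=0)
trials = [hopkins(X, p=20, rng=np.random.default_rng(i)) for i in range(100)]
print("avg H:", np.mean(trials), "std:", np.std(trials))
```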