
Data Mining

Cluster Analysis
What is Cluster Analysis?
• Finding groups of objects such that the objects in a group
will be similar (or related) to one another and different
from (or unrelated to) the objects in other groups

Intra-cluster distances are minimized; inter-cluster distances are maximized.

11/16/2020 Introduction to Data Mining, 2nd Edition 2


Tan, Steinbach, Karpatne, Kumar
Applications of Cluster Analysis
Clustering for Understanding
– Understanding the Earth's climate
– Information retrieval: group related documents for browsing
– Group genes and proteins that have similar functionality
– Psychology and medicine: identify different types of depression or disease
– Group stocks with similar price fluctuations
Clustering for Utility
– Summarization
– Compression: reduce the size of large data sets
– Efficiently finding nearest neighbors

11/16/2020 Introduction to Data Mining, 2nd Edition 3


Tan, Steinbach, Karpatne, Kumar
Notion of a Cluster can be Ambiguous

How many clusters? The same set of points can be interpreted as two clusters, four clusters, or six clusters.

11/16/2020 Introduction to Data Mining, 2nd Edition 4


Tan, Steinbach, Karpatne, Kumar
Type of data in clustering analysis

• Interval-scaled variables
• Binary variables
• Nominal, ordinal, and ratio variables
• Variables of mixed types

11/16/2020 Introduction to Data Mining, 2nd Edition 5
Tan, Steinbach, Karpatne, Kumar
Interval-valued variables
• Standardize data
  – Calculate the mean absolute deviation:
      s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)
    where
      m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)
  – Calculate the standardized measurement (z-score):
      z_{if} = \frac{x_{if} - m_f}{s_f}
• Using the mean absolute deviation is more robust than using the standard deviation
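For illustration, a minimal Python sketch of this standardization (NumPy assumed available); the function name and the sample data are ours, not the textbook's:

```python
import numpy as np

def standardize(column):
    """Standardize one interval-scaled variable using the mean
    absolute deviation s_f instead of the standard deviation."""
    x = np.asarray(column, dtype=float)
    m_f = x.mean()                    # m_f: mean of the variable
    s_f = np.abs(x - m_f).mean()      # s_f: mean absolute deviation
    return (x - m_f) / s_f            # z-score with the robust scale

# Example: five measurements of one variable
print(standardize([150, 160, 170, 180, 190]))
```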

11/16/2020 Introduction to Data Mining, 2nd Edition 6
Tan, Steinbach, Karpatne, Kumar
Similarity and Dissimilarity Between
Objects

• Distances are normally used to measure the similarity or dissimilarity between two data objects
• A popular family is the Minkowski distance:
    d(i,j) = \left(|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q\right)^{1/q}
  where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are two p-dimensional data objects, and q is a positive integer
• If q = 1, d is the Manhattan distance:
    d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|
11/16/2020 Introduction to Data Mining, 2nd Edition 7 7
Tan, Steinbach, Karpatne, Kumar
Similarity and Dissimilarity Between Objects
(Cont.)

• If q = 2, d is the Euclidean distance:
    d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}
  – Properties
    • d(i,j) ≥ 0
    • d(i,i) = 0
    • d(i,j) = d(j,i)
    • d(i,j) ≤ d(i,k) + d(k,j)
• Also, one can use weighted distance, parametric Pearson product moment correlation, or other dissimilarity measures
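A small Python sketch of the Minkowski family for illustration (q = 1 gives the Manhattan distance, q = 2 the Euclidean distance); the function name and the sample vectors are illustrative assumptions:

```python
import numpy as np

def minkowski(i, j, q=2):
    """Minkowski distance between two p-dimensional objects i and j."""
    i, j = np.asarray(i, dtype=float), np.asarray(j, dtype=float)
    return (np.abs(i - j) ** q).sum() ** (1.0 / q)

a, b = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(minkowski(a, b, q=1))   # Manhattan: |1-4| + |2-6| + |3-3| = 7.0
print(minkowski(a, b, q=2))   # Euclidean: sqrt(9 + 16 + 0) = 5.0
```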
11/16/2020 Introduction to Data Mining, 2nd Edition 8 8
Tan, Steinbach, Karpatne, Kumar
Binary Variables
A contingency table for binary data (rows: object i, columns: object j):

        j = 1    j = 0    sum
i = 1   a        b        a + b
i = 0   c        d        c + d
sum     a + c    b + d    p

Distance measure for symmetric binary variables:
    d(i,j) = \frac{b + c}{a + b + c + d}
Distance measure for asymmetric binary variables:
    d(i,j) = \frac{b + c}{a + b + c}
Jaccard coefficient (similarity measure for asymmetric binary variables):
    sim_{Jaccard}(i,j) = \frac{a}{a + b + c}
11/16/2020 Introduction to Data Mining, 2nd Edition 9
Tan, Steinbach, Karpatne, Kumar
Dissimilarity between Binary Variables

• Example
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
– gender is a symmetric attribute
– the remaining attributes are asymmetric binary
– let the values Y and P be set to 1, and the value N be set to 0
d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
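The three dissimilarities above can be checked with a short Python sketch that counts the a, b, c cells over the asymmetric attributes; the 0/1 encoding (Y and P mapped to 1, N to 0) follows the slide, while the function name is ours:

```python
def asym_binary_dissim(x, y):
    """Dissimilarity for asymmetric binary vectors: (b + c) / (a + b + c)."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4, with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asym_binary_dissim(jack, mary), 2))  # 0.33
print(round(asym_binary_dissim(jack, jim), 2))   # 0.67
print(round(asym_binary_dissim(jim, mary), 2))   # 0.75
```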
11/16/2020 Introduction to Data Mining, 2nd Edition 10
Tan, Steinbach, Karpatne, Kumar
Nominal Variables

• A generalization of the binary variable in that it can take more than two states, e.g., red, yellow, blue, green
• Method 1: Simple matching
  – m: # of matches, p: total # of variables
      d(i,j) = \frac{p - m}{p}
• Method 2: use a large number of binary variables
  – creating a new binary variable for each of the M nominal states

11/16/2020 Introduction to Data Mining, 2nd Edition 11
Tan, Steinbach, Karpatne, Kumar
Ordinal Variables

• An ordinal variable can be discrete or continuous


• Order is important, e.g., rank
• Can be treated like interval-scaled
– replace x_{if} by its rank r_{if} ∈ {1, …, M_f}
– map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
      z_{if} = \frac{r_{if} - 1}{M_f - 1}
– compute the dissimilarity using methods for interval-scaled variables
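A minimal sketch of this rank-to-[0, 1] mapping in Python; the ordered scale used as input is only an illustrative assumption:

```python
def ordinal_to_interval(values, ordered_levels):
    """Map ordinal values to [0, 1] via z = (r - 1) / (M - 1)."""
    M = len(ordered_levels)
    rank = {level: r for r, level in enumerate(ordered_levels, start=1)}
    return [(rank[v] - 1) / (M - 1) for v in values]

levels = ["fair", "good", "excellent"]          # M_f = 3 ordered states
print(ordinal_to_interval(["fair", "good", "excellent", "good"], levels))
# [0.0, 0.5, 1.0, 0.5] -- now usable with interval-scaled distance measures
```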
11/16/2020 Introduction to Data Mining, 2nd Edition 12
Tan, Steinbach, Karpatne, Kumar
Types of Clusterings
• A clustering is a set of clusters
• Important distinction between hierarchical
and partitional sets of clusters
– Partitional Clustering
• A division of data objects into non-overlapping subsets (clusters) such that
each data object is in exactly one subset

– Hierarchical clustering
• A set of nested clusters organized as a hierarchical tree

11/16/2020 Introduction to Data Mining, 2nd Edition 13


Tan, Steinbach, Karpatne, Kumar
Partitional Clustering

Original Points A Partitional Clustering

11/16/2020 Introduction to Data Mining, 2nd Edition 14


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering

[Figure: points p1–p4 shown as a traditional hierarchical clustering with its traditional dendrogram, and as a non-traditional hierarchical clustering with its non-traditional dendrogram]

11/16/2020 Introduction to Data Mining, 2nd Edition 15


Tan, Steinbach, Karpatne, Kumar
Other Distinctions Between Sets of Clusters

• Exclusive versus non-exclusive


– In non-exclusive clusterings, points may belong to multiple
clusters.
• Can belong to multiple classes or could be ‘border’ points
• Fuzzy clustering (one type of non-
exclusive)
– In fuzzy clustering, a point belongs to every cluster with some
weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data

11/16/2020 Introduction to Data Mining, 2nd Edition 16


Tan, Steinbach, Karpatne, Kumar
Types of Clusters
• Well-separated clusters

• Prototype-based clusters

• Contiguity-based clusters

• Density-based clusters

• Described by an Objective Function

11/16/2020 Introduction to Data Mining, 2nd Edition 17


Tan, Steinbach, Karpatne, Kumar
Types of Clusters: Well-Separated

• Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.

3 well-separated clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 18


Tan, Steinbach, Karpatne, Kumar
Types of Clusters: Prototype-Based

• Prototype-based
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the prototype or “center” of a cluster,
than to the center of any other cluster
– The center of a cluster is often a centroid, the average of all
the points in the cluster, or a medoid, the most “representative”
point of a cluster

4 center-based clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 19


Tan, Steinbach, Karpatne, Kumar
Types of Clusters: Contiguity-Based

• Contiguous Cluster (Nearest neighbor or


Transitive)
– A cluster is a set of points such that a point in a cluster is
closer (or more similar) to one or more other points in the
cluster than to any point not in the cluster.

8 contiguous clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 20


Tan, Steinbach, Karpatne, Kumar
Types of Clusters: Density-Based

• Density-based
– A cluster is a dense region of points, which is separated by
low-density regions, from other regions of high density.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.

6 density-based clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 21


Tan, Steinbach, Karpatne, Kumar
Types of Clusters: Objective Function

• Clusters Defined by an Objective Function


– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters and
evaluate the `goodness' of each potential set of clusters by using
the given objective function. (NP Hard)
– Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the
data to a parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.

11/16/2020 Introduction to Data Mining, 2nd Edition 22


Tan, Steinbach, Karpatne, Kumar
Characteristics of the Input Data Are Important

• Type of proximity or density measure


– Central to clustering
– Depends on data and application

• Data characteristics that affect proximity and/or density are


– Dimensionality
• Sparseness
– Attribute type
– Special relationships in the data
• For example, autocorrelation
– Distribution of the data

• Noise and Outliers


– Often interfere with the operation of the clustering algorithm
• Clusters of differing sizes, densities, and shapes

11/16/2020 Introduction to Data Mining, 2nd Edition 23


Tan, Steinbach, Karpatne, Kumar
Clustering Algorithms
• K-means and its variants

• Hierarchical clustering

• Density-based clustering

11/16/2020 Introduction to Data Mining, 2nd Edition 24


Tan, Steinbach, Karpatne, Kumar
K-means Clustering

• Partitional clustering approach


• Number of clusters, K, must be specified
• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest
centroid
• The basic algorithm is very simple

11/16/2020 Introduction to Data Mining, 2nd Edition 25


Tan, Steinbach, Karpatne, Kumar
Example of K-means Clustering
[Figure: K-means clustering of the example points, final result at iteration 6]
Example of K-means Clustering
[Figure: iterations 1 through 6 of K-means on the example points]

11/16/2020 Introduction to Data Mining, 2nd Edition 27


Tan, Steinbach, Karpatne, Kumar
Table of notation

Assigning points to the centroids

11/16/2020 Introduction to Data Mining, 2nd Edition 28


Tan, Steinbach, Karpatne, Kumar
K-means Clustering – Details
• Simple iterative algorithm.
– Choose initial centroids; repeat {assign each point to a nearest centroid; re-compute
cluster centroids} until centroids stop changing.
• Initial centroids are often chosen randomly.
– Clusters produced can vary from one run to another
• The centroid is (typically) the mean of the points in the
cluster, but other definitions are possible
• K-means will converge for common proximity measures
with appropriately defined centroid
• Most of the convergence happens in the first few
iterations.
– Often the stopping condition is changed to ‘Until relatively few
points change clusters’
• Complexity is O( n * K * I * d )
– n = number of points, K = number of clusters,
I = number of iterations, d = number of attributes
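For illustration, a compact NumPy sketch of the basic algorithm described above (random initial centroids, Euclidean distance); it is a sketch, not the textbook's reference implementation:

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Basic K-means: assign each point to the nearest centroid,
    recompute centroids, repeat until the centroids stop changing."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # squared Euclidean distance from every point to every centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                  else centroids[k] for k in range(K)])
        if np.allclose(new_centroids, centroids):   # centroids stopped changing
            break
        centroids = new_centroids
    return centroids, labels

# Two well-separated blobs; K-means should recover them
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centroids, labels = kmeans(X, K=2)
print(centroids)
```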

11/16/2020 Introduction to Data Mining, 2nd Edition 29


Tan, Steinbach, Karpatne, Kumar
K-means Objective Function
• A common objective function (used with a Euclidean distance measure) is the Sum of Squared Error (SSE)
  – For each point, the error is the distance to the nearest cluster center
  – To get SSE, we square these errors and sum them:
      SSE = \sum_{i=1}^{K} \sum_{x \in C_i} dist^2(m_i, x)
  – x is a data point in cluster C_i and m_i is the centroid (mean) of cluster C_i
  – SSE improves in each iteration of K-means until it reaches a local or global minimum
  – The centroid (mean) of cluster C_i is m_i = \frac{1}{|C_i|} \sum_{x \in C_i} x
  – For document data, cosine similarity is used; the objective is then the total cohesion,
      Total Cohesion = \sum_{i=1}^{K} \sum_{x \in C_i} cosine(x, c_i)

11/16/2020 Introduction to Data Mining, 2nd Edition 30


Tan, Steinbach, Karpatne, Kumar
Two different K-means Clusterings
[Figure: the same original points clustered two ways by K-means: an optimal clustering and a sub-optimal clustering]

11/16/2020 Introduction to Data Mining, 2nd Edition 31


Tan, Steinbach, Karpatne, Kumar
Importance of Choosing Initial Centroids …
[Figure: K-means result at iteration 5 for one choice of initial centroids]
Importance of Choosing Initial Centroids …

[Figure: iterations 1 through 5 of K-means for this choice of initial centroids]

11/16/2020 Introduction to Data Mining, 2nd Edition 33


Tan, Steinbach, Karpatne, Kumar
Importance of Choosing Initial Centroids

• Depending on the choice of initial centroids, B and C may get merged or remain separate

11/16/2020 Introduction to Data Mining, 2nd Edition 34


Tan, Steinbach, Karpatne, Kumar
Problems with Selecting Initial Points

• If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small.
  – Chance is relatively small when K is large
  – If clusters are the same size, n, then
      P = \frac{K!\, n^K}{(Kn)^K} = \frac{K!}{K^K}
  – For example, if K = 10, then probability = 10!/10^10 = 0.00036
  – Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don't
  – Consider an example of five pairs of clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 35


Tan, Steinbach, Karpatne, Kumar
10 Clusters Example

[Figure: K-means result at iteration 4 on the 10-cluster data]
Starting with two initial centroids in one cluster of each pair of clusters
11/16/2020 Introduction to Data Mining, 2nd Edition 36
Tan, Steinbach, Karpatne, Kumar
10 Clusters Example
[Figure: iterations 1 through 4 of K-means on the 10-cluster data]

Starting with two initial centroids in one cluster of each pair of clusters
11/16/2020 Introduction to Data Mining, 2nd Edition 37
Tan, Steinbach, Karpatne, Kumar
10 Clusters Example

[Figure: K-means result at iteration 4 on the 10-cluster data]

Starting with some pairs of clusters having three initial centroids, while other
have only one.

11/16/2020 Introduction to Data Mining, 2nd Edition 38


Tan, Steinbach, Karpatne, Kumar
10 Clusters Example
[Figure: iterations 1 through 4 of K-means on the 10-cluster data]

Starting with some pairs of clusters having three initial centroids, while other have only
one.
11/16/2020 Introduction to Data Mining, 2nd Edition 39
Tan, Steinbach, Karpatne, Kumar
Solutions to Initial Centroids Problem
• Multiple runs
– Helps, but probability is not on your side
• Use some strategy to select the k initial
centroids and then select among these
initial centroids
– Select most widely separated
• K-means++ is a robust way of doing this selection
– Use hierarchical clustering to determine initial
centroids
• Bisecting K-means
– Not as susceptible to initialization issues
11/16/2020 Introduction to Data Mining, 2nd Edition 40
Tan, Steinbach, Karpatne, Kumar
Example 2:
First divide the objects into 2 clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 50


Tan, Steinbach, Karpatne, Kumar
c1=1,4,5,6,7,8,9,10,11,12
c2=2,3

11/16/2020 Introduction to Data Mining, 2nd Edition 51


Tan, Steinbach, Karpatne, Kumar
K-means++

• This approach can be slower than random initialization,


but very consistently produces better results in terms of
SSE
– The k-means++ algorithm guarantees an approximation ratio
O(log k) in expectation, where k is the number of centers
• To select a set of initial centroids, C, perform the following
1. Select an initial point at random to be the first centroid
2. For k – 1 steps
3. For each of the N points, xi, 1 ≤ i ≤ N, find the minimum squared
   distance to the currently selected centroids, C1, …, Cj, 1 ≤ j < k,
   i.e., d²(xi) = min_j dist²(xi, Cj)
4. Randomly select a new centroid by choosing a point with probability
   proportional to d²(xi)
5. End For
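A sketch of this selection loop in Python with NumPy; the function name and the toy data are assumptions, and squared Euclidean distance is used as in the description above:

```python
import numpy as np

def kmeans_pp_init(X, k, seed=None):
    """k-means++ initialization: each new centroid is sampled with
    probability proportional to its squared distance to the nearest
    centroid chosen so far."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]           # step 1: first centroid at random
    for _ in range(k - 1):                          # steps 2-5: remaining k - 1 centroids
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()                       # proportional to min squared distance
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)

X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + [6, 6]])
print(kmeans_pp_init(X, k=2))
```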

11/16/2020 Introduction to Data Mining, 2nd Edition 53


Tan, Steinbach, Karpatne, Kumar
K-Means Additional Issues
• Pre-processing
– Normalize the data
– Eliminate outliers

• Reducing the SSE with Post-processing


– Two strategies that decrease the total SSE by increasing the number
of clusters
1) Split a cluster, e.g., a cluster with relatively high SSE
2) Introduce a new cluster centroid: often the point farthest from any current centroid is chosen
– Two strategies that decrease the total SSE by decreasing the number
of clusters
1) Merge clusters that are ‘close’ and that have relatively low SSE
2) Disperse a cluster: remove its centroid and reassign its points to other clusters
– These steps can be used multiple times during the clustering process

11/16/2020 Introduction to Data Mining, 2nd Edition 54


Tan, Steinbach, Karpatne, Kumar
Empty Clusters
• K-means can yield empty clusters

Example (one-dimensional data points 6.5, 9, 10, 15, 16, 18.5):
– Initial centroids 6.8, 13, 18 give clusters {6.5, 9}, {10, 15}, {16, 18.5}
– The recomputed centroids are 7.75, 12.5, 17.25
– Reassigning the points to these centroids gives {6.5, 9, 10} and {15, 16, 18.5}; the middle cluster is now empty

11/16/2020 Introduction to Data Mining, 2nd Edition 55


Tan, Steinbach, Karpatne, Kumar
Handling Empty Clusters
• Basic K-means algorithm can yield empty
clusters

• Several strategies
– Choose the point that contributes most to
SSE, and make it a new centroid
– Choose a point from the cluster with the
highest SSE, and make it a new centroid
– If there are several empty clusters, the above
can be repeated several times.

11/16/2020 Introduction to Data Mining, 2nd Edition 56


Tan, Steinbach, Karpatne, Kumar
Updating Centers Incrementally
• In the basic K-means algorithm, centroids are
updated after all points are assigned to a centroid

• An alternative is to update the centroids after each


assignment (incremental approach)
– Each assignment updates zero or two centroids
– More expensive
– Introduces an order dependency
– Never get an empty cluster

11/16/2020 Introduction to Data Mining, 2nd Edition 57


Tan, Steinbach, Karpatne, Kumar
Limitations of K-means
• K-means has problems when clusters are
of differing
– Sizes
– Densities
– Non-globular shapes

• K-means has problems when the data


contains outliers.

11/16/2020 Introduction to Data Mining, 2nd Edition 58


Tan, Steinbach, Karpatne, Kumar
Limitations of K-means: Differing Sizes

Original Points K-means (3 Clusters)

11/16/2020 Introduction to Data Mining, 2nd Edition 59


Tan, Steinbach, Karpatne, Kumar
Limitations of K-means: Differing Density

Original Points K-means (3 Clusters)

11/16/2020 Introduction to Data Mining, 2nd Edition 60


Tan, Steinbach, Karpatne, Kumar
Limitations of K-means: Non-globular Shapes

Original Points K-means (2 Clusters)

11/16/2020 Introduction to Data Mining, 2nd Edition 61


Tan, Steinbach, Karpatne, Kumar
Overcoming K-means Limitations

Original Points K-means Clusters

One solution is to use many clusters.


Find parts of clusters, but need to put together.
11/16/2020 Introduction to Data Mining, 2nd Edition 62
Tan, Steinbach, Karpatne, Kumar
Overcoming K-means Limitations

Original Points K-means Clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 63


Tan, Steinbach, Karpatne, Kumar
Overcoming K-means Limitations

Original Points K-means Clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 64


Tan, Steinbach, Karpatne, Kumar
What Is the Problem of the K-Means Method?

• The k-means algorithm is sensitive to outliers !


– Since an object with an extremely large value may substantially
distort the distribution of the data
• K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid, the most centrally located object in the cluster, can be used
11/16/2020 Introduction to Data Mining, 2nd Edition 65
Tan, Steinbach, Karpatne, Kumar
PAM: A Typical K-Medoids Algorithm
K = 2, Total Cost = 20

1. Arbitrarily choose k objects as the initial medoids
2. Assign each remaining object to the nearest medoid
3. Randomly select a non-medoid object, O_random (here, Total Cost = 26)
4. Compute the total cost of swapping a medoid O with O_random
5. If the quality is improved, swap O and O_random
6. Do loop until no change

[Figure: one PAM iteration on a 2-D example with K = 2]

11/16/2020 Introduction to Data Mining, 2nd Edition 66
Tan, Steinbach, Karpatne, Kumar


K-Medoid Method

11/16/2020 Introduction to Data Mining, 2nd Edition 67


Tan, Steinbach, Karpatne, Kumar
S.No  x1  x2  C1(3,4)  C2(7,4)  Assignment
O1    2   6   3        7        c1
O2    3   4   0        4        c1
O3    3   8   4        8        c1
O4    4   7   4        6        c1
O5    6   2   5        3        c2
O6    6   4   3        1        c2
O7    7   3   5        1        c2
O8    7   4   4        0        c2
O9    8   5   6        2        c2
O10   7   6   6        2        c2

Distances are calculated using the Manhattan distance |x2 − x1| + |y2 − y1|,
e.g., dist(O1, C2) = |7 − 2| + |4 − 6| = 7
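The assignment column of this table can be reproduced with a short Python sketch of the Manhattan-distance assignment step; the swap/cost-comparison loop of full PAM is omitted, and the total within-cluster cost is computed only for these two medoids:

```python
def manhattan(p, q):
    """Manhattan distance |x2 - x1| + |y2 - y1|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

points = {"O1": (2, 6), "O2": (3, 4), "O3": (3, 8), "O4": (4, 7), "O5": (6, 2),
          "O6": (6, 4), "O7": (7, 3), "O8": (7, 4), "O9": (8, 5), "O10": (7, 6)}
medoids = {"c1": (3, 4), "c2": (7, 4)}

total_cost = 0
for name, p in points.items():
    dists = {c: manhattan(p, m) for c, m in medoids.items()}
    nearest = min(dists, key=dists.get)   # assign to the nearest medoid
    total_cost += dists[nearest]
    print(name, dists, "->", nearest)
print("total cost for these medoids:", total_cost)
```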

11/16/2020 Introduction to Data Mining, 2nd Edition 71


Tan, Steinbach, Karpatne, Kumar
S.No  x1  x2  C1(3,4)  C2(7,3)  Assignment
O1    2   6   3        8        c1
O2    3   4   0        5        c1
O3    3   8   4        9        c1
O4    4   7   4        7        c1
O5    6   2   5        2        c2
O6    6   4   3        2        c2
O7    7   3   5        0        c2
O8    7   4   4        1        c2
O9    8   5   6        3        c2
O10   7   6   6        3        c2

11/16/2020 Introduction to Data Mining, 2nd Edition 76


Tan, Steinbach, Karpatne, Kumar
Efficiency improvement on PAM
CLARA (Kaufmann & Rousseeuw, 1990): PAM on samples
CLARANS (Ng & Han, 1994): Randomized re-sampling

11/16/2020 Introduction to Data Mining, 2nd Edition 81


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering
• Produces a set of nested clusters
organized as a hierarchical tree
• Can be visualized as a dendrogram
– A tree like diagram that records the
sequences of merges or splits
[Figure: six nested clusters and the corresponding dendrogram]

11/16/2020 Introduction to Data Mining, 2nd Edition 82


Tan, Steinbach, Karpatne, Kumar
Strengths of Hierarchical Clustering
• Do not have to assume any particular
number of clusters
– Any desired number of clusters can be
obtained by ‘cutting’ the dendrogram at the
proper level

• They may correspond to meaningful


taxonomies
– Example in biological sciences (e.g., animal
kingdom, phylogeny reconstruction, …)

11/16/2020 Introduction to Data Mining, 2nd Edition 83


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering
• Two main types of hierarchical clustering
– Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters until only one cluster
(or k clusters) left

– Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster until each cluster contains an individual
point (or there are k clusters)

• Traditional hierarchical algorithms use a similarity or


distance matrix
– Merge or split one cluster at a time

11/16/2020 Introduction to Data Mining, 2nd Edition 84


Tan, Steinbach, Karpatne, Kumar
Agglomerative Clustering Algorithm
• Most popular hierarchical clustering technique
– Key Idea: Successively merge closest clusters
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
• Key operation is the computation of the proximity of
two clusters
– Different approaches to defining the distance between
clusters distinguish the different algorithms
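A minimal Python sketch of this agglomerative loop on a precomputed distance matrix, using single-link (MIN) proximity between clusters; the small 4-point matrix is made up for illustration:

```python
def agglomerative(dist, link=min):
    """Agglomerative clustering on an n x n distance matrix.
    Repeatedly merge the two closest clusters and record the merges."""
    clusters = {i: [i] for i in range(len(dist))}       # step 2: each point is a cluster
    merges = []
    while len(clusters) > 1:                            # step 3: repeat ...
        keys = list(clusters)
        # step 4: find the two closest clusters under the chosen linkage
        (a, b), d = min(
            (((p, q), link(dist[i][j] for i in clusters[p] for j in clusters[q]))
             for p in keys for q in keys if p < q),
            key=lambda t: t[1])
        merges.append((clusters[a], clusters[b], d))
        clusters[a] = clusters[a] + clusters[b]         # step 5: merge the two clusters
        del clusters[b]
    return merges                                       # step 6: ... until one cluster remains

D = [[0.0, 2.0, 6.0, 10.0],
     [2.0, 0.0, 5.0, 9.0],
     [6.0, 5.0, 0.0, 4.0],
     [10.0, 9.0, 4.0, 0.0]]
for m in agglomerative(D):
    print(m)   # merges {0},{1} at 2.0, then {2},{3} at 4.0, then the two pairs at 5.0
```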

11/16/2020 Introduction to Data Mining, 2nd Edition 85


Tan, Steinbach, Karpatne, Kumar
Starting Situation
• Start with clusters of individual points and
a proximity matrix

[Figure: individual points p1, p2, p3, … and their proximity matrix, one row and one column per point]

11/16/2020 Introduction to Data Mining, 2nd Edition 86


Tan, Steinbach, Karpatne, Kumar
Intermediate Situation
• After some merging steps, we have some clusters
[Figure: clusters C1–C5 and the current proximity matrix]

11/16/2020 Introduction to Data Mining, 2nd Edition 87


Tan, Steinbach, Karpatne, Kumar
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and
update the proximity matrix.

[Figure: clusters C1–C5 with C2 and C5 about to be merged, and the current proximity matrix]

11/16/2020 Introduction to Data Mining, 2nd Edition 88


Tan, Steinbach, Karpatne, Kumar
After Merging
• The question is “How do we update the proximity matrix?”
[Figure: proximity matrix after the merge; the row and column for the new cluster C2 ∪ C5 are marked '?' because they must be recomputed]

11/16/2020 Introduction to Data Mining, 2nd Edition 89


Tan, Steinbach, Karpatne, Kumar
How to Define Inter-Cluster Distance

[Figure: two clusters with the question 'Similarity?' and the proximity matrix]

• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
  – Ward's Method uses squared error

11/16/2020 Introduction to Data Mining, 2nd Edition 90


Tan, Steinbach, Karpatne, Kumar
How to Define Inter-Cluster Similarity

MIN defines cluster proximity as the proximity between the closest two points that are in different clusters. In graph terms, it is the shortest edge between two nodes in different subsets of nodes.

11/16/2020 Introduction to Data Mining, 2nd Edition 91


Tan, Steinbach, Karpatne, Kumar
How to Define Inter-Cluster Similarity

MAX defines cluster proximity as the proximity between the farthest two points that are in different clusters. In graph terms, it is the longest edge between two nodes in different subsets of nodes.


11/16/2020 Introduction to Data Mining, 2nd Edition 92
Tan, Steinbach, Karpatne, Kumar
How to Define Inter-Cluster Similarity

The group average technique defines cluster proximity to be the average pairwise proximity of all pairs of points from different clusters.

11/16/2020 Introduction to Data Mining, 2nd Edition 93


Tan, Steinbach, Karpatne, Kumar
How to Define Inter-Cluster Similarity

[Figure: the distance between the cluster centroids (marked ×) used as the cluster proximity]

11/16/2020 Introduction to Data Mining, 2nd Edition 94


Tan, Steinbach, Karpatne, Kumar
MIN or Single Link
• Proximity of two clusters is based on the
two closest points in the different clusters
– Determined by one pair of points, i.e., by one
link in the proximity graph
• Example:
Distance Matrix:

11/16/2020 Introduction to Data Mining, 2nd Edition 95


Tan, Steinbach, Karpatne, Kumar
The smallest entry in the distance matrix is dist(p3, p6) = 0.11, so p3 and p6 are merged first (dist(p2, p5) = 0.14 is the next smallest).

        p1     p2     p3,p6  p4     p5
p1      0      0.24   0.22   0.37   0.34
p2      0.24   0      0.15   0.20   0.14
p3,p6   0.22   0.15   0      0.15   0.28
p4      0.37   0.20   0.15   0      0.29
p5      0.34   0.14   0.28   0.29   0

Min((3,6),(1)) = min(dist(3,1), dist(6,1)) = min(0.22, 0.23) = 0.22
Min((3,6),(2)) = min(dist(3,2), dist(6,2)) = min(0.15, 0.25) = 0.15
Min((3,6),(4)) = min(dist(3,4), dist(6,4)) = min(0.15, 0.22) = 0.15
Min((3,6),(5)) = min(dist(3,5), dist(6,5)) = min(0.28, 0.39) = 0.28
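The same single-link merge sequence can be reproduced with SciPy's hierarchical clustering, assuming SciPy is available; the full 6-point distance matrix below is reconstructed from the worked steps on these slides:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

labels = ["p1", "p2", "p3", "p4", "p5", "p6"]
D = np.array([
    [0.00, 0.24, 0.22, 0.37, 0.34, 0.23],
    [0.24, 0.00, 0.15, 0.20, 0.14, 0.25],
    [0.22, 0.15, 0.00, 0.15, 0.28, 0.11],
    [0.37, 0.20, 0.15, 0.00, 0.29, 0.22],
    [0.34, 0.14, 0.28, 0.29, 0.00, 0.39],
    [0.23, 0.25, 0.11, 0.22, 0.39, 0.00],
])

# Single-link (MIN) agglomerative clustering; each row of Z records one merge:
# [cluster_a, cluster_b, merge_distance, size_of_new_cluster]
Z = linkage(squareform(D), method="single")
print(Z)   # merges at 0.11 (p3,p6), 0.14 (p2,p5), then 0.15, 0.15, 0.22
```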

11/16/2020 Introduction to Data Mining, 2nd Edition 96


Tan, Steinbach, Karpatne, Kumar
p1 P2,p5 P3,p6 p4
P1 0 0.24 0.22 0.37
P2,p5 0.24 0 0.15 0.20
P3,p6 0.22 0.15 0 0.15
p4 0.37 0.20 0.15 0

Min((2,5),1)=0.24
Min((3,6),(2,5)) = min(dist(3,2), dist(6,2), dist(3,5), dist(6,5))
 = min(0.15, 0.25, 0.28, 0.39)
 = 0.15
Min((2,5),4)=0.20

11/16/2020 Introduction to Data Mining, 2nd Edition 97


Tan, Steinbach, Karpatne, Kumar
p1 P2,p5,p3,p6 p4
p1 0 0.22 0.37
P2,p5,p3,p6 0.22 0 0.15
p4 0.37 0.15 0

Min((2,5),(3,6))=0.15

11/16/2020 Introduction to Data Mining, 2nd Edition 98


Tan, Steinbach, Karpatne, Kumar
p1 P2,p5,p3,p6,p4
p1 0 0.22
P2,p5,p3,p6,p4 0.22 0

11/16/2020 Introduction to Data Mining, 2nd Edition 99


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: MIN

[Figure: nested clusters and dendrogram produced by MIN (single link) on the six points]

11/16/2020 Introduction to Data Mining, 2nd Edition 100


Tan, Steinbach, Karpatne, Kumar
Strength of MIN

Original Points Six Clusters

• Can handle non-elliptical shapes

11/16/2020 Introduction to Data Mining, 2nd Edition 101


Tan, Steinbach, Karpatne, Kumar
Limitations of MIN

Two Clusters

Original Points

• Sensitive to noise and outliers


Three Clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 102


Tan, Steinbach, Karpatne, Kumar
MAX or Complete Linkage

• Proximity of two clusters is based on the


two most distant points in the different
clusters
– Determined by all pairs of points in the two
clusters Distance Matrix:

11/16/2020 Introduction to Data Mining, 2nd Edition 103


Tan, Steinbach, Karpatne, Kumar
p1 p2 P3,p6 p4 p5
P1 0 0.24 0.23 0.37 0.34
P2 0.24 0 0.25 0.20 0.14
P3,p6 0.23 0.25 0 0.22 0.39
P4 0.37 0.20 0.22 0 0.29
p5 0.34 0.14 0.39 0.29 0

Max((3,6),1) = max(dist(3,1), dist(6,1)) = max(0.22, 0.23) = 0.23
Max((3,6),2) = max(dist(3,2), dist(6,2)) = max(0.15, 0.25) = 0.25
Max((3,6),4) = max(dist(3,4), dist(6,4)) = max(0.15, 0.22) = 0.22
Max((3,6),5) = max(dist(3,5), dist(6,5)) = max(0.28, 0.39) = 0.39

11/16/2020 Introduction to Data Mining, 2nd Edition 104


Tan, Steinbach, Karpatne, Kumar
p1 P2,p5 P3,p6 p4
P1 0 0.34 0.23 0.37
P2,p5 0.34 0 0.39 0.29
P3,p6 0.23 0.39 0 0.22
p4 0.37 0.29 0.22 0
Max((2,5),1)=max(0.24,0.34)=0.34
Max((3,6),(2,5)=max(0.15,0.25,0.28,0.39)=0.39
Max((2,5),4)=max(0.20,0.29)=0.29

11/16/2020 Introduction to Data Mining, 2nd Edition 105


Tan, Steinbach, Karpatne, Kumar
p1 P3,p6,p4 P2,p5
P1 0 0.37 0.34
P3,p6,p4 0.37 0 0.39
P2,p5 0.34 0.39 0

Max((3,6,4),1)=max(0.22,0.23,0.37)=0.37
Max((3,6,4),
(2,5))=max(0.15,0.25,0.20,0.28,0.39,0.29)=0.39

11/16/2020 Introduction to Data Mining, 2nd Edition 106


Tan, Steinbach, Karpatne, Kumar
P1,p2,p5 P3,p6,p4
P1,p2,p5 0 0.37
P3,p6,p4 0.37 0

Max((2,5,1),
(3,6,4))=max(0.15,0.28,0.37,0.25,0.39,0.23,0.20,
0.29,0.37)=0.37

11/16/2020 Introduction to Data Mining, 2nd Edition 107


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: MAX

[Figure: nested clusters and dendrogram produced by MAX (complete link) on the six points]

11/16/2020 Introduction to Data Mining, 2nd Edition 108


Tan, Steinbach, Karpatne, Kumar
Strength of MAX

Original Points Two Clusters

• Less susceptible to noise and outliers

11/16/2020 Introduction to Data Mining, 2nd Edition 109


Tan, Steinbach, Karpatne, Kumar
Limitations of MAX

Original Points Two Clusters

• Tends to break large clusters


• Biased towards globular clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 110


Tan, Steinbach, Karpatne, Kumar
Group Average
• Proximity of two clusters is the average of the pairwise proximities between points in the two clusters:
    proximity(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} proximity(p_i, p_j)}{|Cluster_i| \times |Cluster_j|}

• Need to use average connectivity for scalability since total


proximity favors large clusters
Distance Matrix:

11/16/2020 Introduction to Data Mining, 2nd Edition 111


Tan, Steinbach, Karpatne, Kumar
        p1     p2     p3,p6  p4     p5
p1      0      0.24   0.225  0.37   0.34
p2      0.24   0      0.2    0.20   0.14
p3,p6   0.225  0.2    0      0.185  0.335
p4      0.37   0.20   0.185  0      0.29
p5      0.34   0.14   0.335  0.29   0

Dist((3,6),1)=(0.22+0.23)/(2*1)=0.225
Dist((3,6),2)=(0.15+0.25)/(2*1)=0.2
Dist((3,6),4)=(0.15+0.22)/(2*1)=0.185
Dist((3,6),5)=(0.28+0.39)/(2*1)=0.335
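These group-average entries can be reproduced directly from the pairwise distances; a short sketch, reusing the 6-point matrix reconstructed from the slides' worked steps:

```python
import numpy as np

# Pairwise distances among p1..p6 (0-based indices), from the worked steps above
D = np.array([
    [0.00, 0.24, 0.22, 0.37, 0.34, 0.23],
    [0.24, 0.00, 0.15, 0.20, 0.14, 0.25],
    [0.22, 0.15, 0.00, 0.15, 0.28, 0.11],
    [0.37, 0.20, 0.15, 0.00, 0.29, 0.22],
    [0.34, 0.14, 0.28, 0.29, 0.00, 0.39],
    [0.23, 0.25, 0.11, 0.22, 0.39, 0.00],
])

def group_average(ci, cj):
    """Average of all pairwise distances between two clusters
    (clusters given as lists of 0-based point indices)."""
    return sum(D[i][j] for i in ci for j in cj) / (len(ci) * len(cj))

print(group_average([2, 5], [0]))      # dist({p3,p6}, {p1})    = 0.225
print(group_average([2, 5], [1, 4]))   # dist({p3,p6}, {p2,p5}) = 0.2675
print(group_average([1, 4], [3]))      # dist({p2,p5}, {p4})    = 0.245
```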

11/16/2020 Introduction to Data Mining, 2nd Edition 112


Tan, Steinbach, Karpatne, Kumar
p1 p2p5 p3p6 p4
p1 0 0.29 0.225 0.37
p2p5 0.29 0 0.267 0.245
p3p6 0.225 0.267 0 0.185
p4 0.37 0.245 0.185 0

Dist((2,5),1)=0.29
Dist((2,5)(3,6))=0.267
Dist((2,5),4)=0.245

11/16/2020 Introduction to Data Mining, 2nd Edition 113


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: Group Average

[Figure: nested clusters and dendrogram produced by group average on the six points]

11/16/2020 Introduction to Data Mining, 2nd Edition 114


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: Group Average
• Compromise between Single and
Complete Link

• Strengths
– Less susceptible to noise and outliers

• Limitations
– Biased towards globular clusters

11/16/2020 Introduction to Data Mining, 2nd Edition 115


Tan, Steinbach, Karpatne, Kumar
Cluster Similarity: Ward’s Method
• Similarity of two clusters is based on the
increase in squared error when two
clusters are merged
– Similar to group average if distance between
points is distance squared

• Less susceptible to noise and outliers

• Biased towards globular clusters

• Hierarchical analogue of K-means
  – Can be used to initialize K-means

11/16/2020 Introduction to Data Mining, 2nd Edition 116
Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: Comparison

[Figure: comparison of the clusterings produced by MIN, MAX, Group Average, and Ward's Method on the six-point example]

11/16/2020 Introduction to Data Mining, 2nd Edition 117


Tan, Steinbach, Karpatne, Kumar
Hierarchical Clustering: Time and Space requirements

• O(N²) space, since it uses the proximity matrix
  – N is the number of points

• O(N³) time in many cases
  – There are N steps, and at each step the N² proximity matrix must be updated and searched
  – Complexity can be reduced to O(N² log N) time with some cleverness
11/16/2020 Introduction to Data Mining, 2nd Edition 118
Tan, Steinbach, Karpatne, Kumar
Key issues in Hierarchical Clustering

• Once a decision is made to combine two


clusters, it cannot be undone
• No global objective function is directly
minimized
• Different schemes have problems with one or
more of the following:
– Sensitivity to noise and outliers
– Difficulty handling clusters of different sizes and
non-globular shapes
– Breaking large clusters
11/16/2020 Introduction to Data Mining, 2nd Edition 119
Tan, Steinbach, Karpatne, Kumar
Strengths and Weaknesses

• Creating a taxonomy requires a hierarchy, which these algorithms naturally produce.
• There is some evidence that these algorithms can produce better-quality clusters.
• They are expensive in terms of computational and storage requirements.
• Because all merges are final, these algorithms can have trouble with noisy, high-dimensional data.

11/16/2020 Introduction to Data Mining, 2nd Edition 120


Tan, Steinbach, Karpatne, Kumar
Cluster Validity
• For supervised classification we have a variety of
measures to evaluate how good our model is
– Accuracy, precision, recall

• For cluster analysis, the analogous question is how to


evaluate the “goodness” of the resulting clusters?

• But “clusters are in the eye of the beholder”!


– In practice the clusters we find are defined by the clustering
algorithm

• Then why do we want to evaluate them?


– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters
11/16/2020 Introduction to Data Mining, 2nd Edition 121
Tan, Steinbach, Karpatne, Kumar
Clusters found in Random Data
[Figure: random points with no real cluster structure, and the clusters nevertheless found in them by DBSCAN, K-means, and complete link]

11/16/2020 Introduction to Data Mining, 2nd Edition 122


Tan, Steinbach, Karpatne, Kumar
Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e.,


distinguishing whether non-random structure actually exists in the
data.
2. Comparing the results of a cluster analysis to externally known
results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data
without reference to external information.
- Use only the data
4. Comparing the results of two different sets of clusters (generated for
the same data) to determine which is better.
5. Determining the ‘correct’ number of clusters.

For 2 and 3, we can further distinguish whether we want to evaluate


the entire clustering or just individual clusters.

11/16/2020 Introduction to Data Mining, 2nd Edition 123


Tan, Steinbach, Karpatne, Kumar
Measures of Cluster Validity
• Numerical measures that are applied to judge various aspects
of cluster validity, are classified into the following two types.
– External Index: Used to measure the extent to which cluster labels
match externally supplied class labels.
• Entropy
– Internal Index: Used to measure the goodness of a clustering
structure without respect to external information.
• Sum of Squared Error (SSE)

• You can use external or internal indices to compare clusters or


clusterings

11/16/2020 Introduction to Data Mining, 2nd Edition 124


Tan, Steinbach, Karpatne, Kumar
Measuring Cluster Validity Via Correlation
• Two matrices
– Proximity Matrix
– Ideal Similarity Matrix
• One row and one column for each data point
• An entry is 1 if the associated pair of points belong to the same cluster
• An entry is 0 if the associated pair of points belongs to different clusters
• Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between
n(n-1) / 2 entries needs to be calculated.
• High magnitude of correlation indicates that points that
belong to the same cluster are close to each other.
– Correlation may be positive or negative depending on whether
the similarity matrix is a similarity or dissimilarity matrix
• Not a good measure for some density or contiguity based
clusters.
11/16/2020 Introduction to Data Mining, 2nd Edition 125
Tan, Steinbach, Karpatne, Kumar
Internal Measures: SSE
• Clusters in more complicated figures aren’t well separated
• Internal Index: Used to measure the goodness of a clustering
structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters
• Can also be used to estimate the number of clusters

[Figure: an example data set and the corresponding plot of SSE versus the number of clusters K]

11/16/2020 Introduction to Data Mining, 2nd Edition 126


Tan, Steinbach, Karpatne, Kumar
Framework for Cluster Validity
• Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value, 10, is that
good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more “atypical” a clustering result is, the more likely it represents
valid structure in the data
– Can compare the values of an index that result from random data or
clusterings to those of a clustering result.
• If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand.
• For comparing the results of two different sets of cluster
analyses, a framework is less necessary.
– However, there is the question of whether the difference between two
index values is significant

11/16/2020 Introduction to Data Mining, 2nd Edition 127


Tan, Steinbach, Karpatne, Kumar
Internal Measures: Cohesion and Separation

• Cluster Cohesion: measures how closely related the objects in a cluster are
  – Example: SSE
• Cluster Separation: measures how distinct or well-separated a cluster is from other clusters
• Example: Squared Error
  – Cohesion is measured by the within-cluster sum of squares (SSE):
      SSE = \sum_i \sum_{x \in C_i} (x - m_i)^2
  – Separation is measured by the between-cluster sum of squares:
      SSB = \sum_i |C_i| (m - m_i)^2
    where |C_i| is the size of cluster i and m is the overall mean

11/16/2020 Introduction to Data Mining, 2nd Edition 128
Tan, Steinbach, Karpatne, Kumar
Internal Measures: Cohesion and Separation

Example: SSE and SSB for the points 1, 2, 4, 5 on a line (overall mean m = 3; for K = 2, the cluster means are m1 = 1.5 and m2 = 4.5). Note that SSB + SSE = constant.

K = 1 cluster:  SSE = (1 − 3)² + (2 − 3)² + (4 − 3)² + (5 − 3)² = 10
                SSB = 4 × (3 − 3)² = 0
                Total = 10 + 0 = 10

K = 2 clusters: SSE = (1 − 1.5)² + (2 − 1.5)² + (4 − 4.5)² + (5 − 4.5)² = 1
                SSB = 2 × (3 − 1.5)² + 2 × (4.5 − 3)² = 9
                Total = 1 + 9 = 10

11/16/2020 Introduction to Data Mining, 2nd Edition 129


Tan, Steinbach, Karpatne, Kumar
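The arithmetic in the example above can be verified with a few lines of Python; the cluster memberships ({1, 2} and {4, 5}) are those implied by the slide:

```python
import numpy as np

points = np.array([1.0, 2.0, 4.0, 5.0])
m = points.mean()                                   # overall mean = 3

def sse_ssb(clusters):
    """Within-cluster SSE and between-cluster SSB for 1-D clusters."""
    sse = sum(((c - c.mean()) ** 2).sum() for c in clusters)
    ssb = sum(len(c) * (m - c.mean()) ** 2 for c in clusters)
    return sse, ssb

print(sse_ssb([points]))                            # K = 1: (10.0, 0.0)
print(sse_ssb([points[:2], points[2:]]))            # K = 2: (1.0, 9.0)
# In both cases SSE + SSB = 10, the total sum of squares.
```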
Internal Measures: Cohesion and Separation

• A proximity graph based approach can also be used for


cohesion and separation.
– Cluster cohesion is the sum of the weight of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the cluster
and nodes outside the cluster.

cohesion separation

11/16/2020 Introduction to Data Mining, 2nd Edition 130


Tan, Steinbach, Karpatne, Kumar
Internal Measures: Silhouette Coefficient

• Silhouette coefficient combines ideas of both cohesion and


separation, but for individual points, as well as clusters and clusterings
• For an individual point, i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for a point is then given by

s = (b − a) / max(a, b)

[Figure: point i with the distances used to calculate a (to points in its own cluster) and b (to points in another cluster)]

– Value can vary between −1 and 1
– Typically ranges between 0 and 1; the closer to 1, the better

• Can calculate the average silhouette coefficient for a cluster or a clustering
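For illustration, a direct Python implementation of the per-point definition above on a tiny made-up data set (scikit-learn's silhouette_score offers the averaged version):

```python
import numpy as np

def silhouette(X, labels, i):
    """Silhouette coefficient s = (b - a) / max(a, b) for point i."""
    same = (labels == labels[i])
    same[i] = False                                  # exclude the point itself
    dists = np.linalg.norm(X - X[i], axis=1)
    a = dists[same].mean()                           # average distance within own cluster
    b = min(dists[labels == k].mean()                # min over other clusters of average distance
            for k in set(labels) - {labels[i]})
    return (b - a) / max(a, b)

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
print(silhouette(X, labels, 0))   # close to 1: point 0 is well placed
```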

11/16/2020 Introduction to Data Mining, 2nd Edition 131


Tan, Steinbach, Karpatne, Kumar
External Measures of Cluster Validity: Entropy and Purity

11/16/2020 Introduction to Data Mining, 2nd Edition 132


Tan, Steinbach, Karpatne, Kumar
Final Comment on Cluster Validity

“The validation of clustering structures is


the most difficult and frustrating part of
cluster analysis.
Without a strong effort in this direction,
cluster analysis will remain a black art
accessible only to those true believers who
have experience and great courage.”
Algorithms for Clustering Data, Jain and Dubes
• H. Xiong and Z. Li. Clustering Validation Measures. In C. C. Aggarwal and C. K. Reddy,
editors, Data Clustering: Algorithms and Applications, pages 571–605. Chapman &
Hall/CRC, 2013.

11/16/2020 Introduction to Data Mining, 2nd Edition 133


Tan, Steinbach, Karpatne, Kumar
