
Data Mining
Cluster Analysis: Basic Concepts and Algorithms

Lecture Notes for Chapter 7
Introduction to Data Mining, 2nd Edition
by Tan, Steinbach, Karpatne, Kumar
What is Cluster Analysis?

 Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
– Intra-cluster distances are minimized; inter-cluster distances are maximized


Applications of Cluster Analysis

 Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations

Discovered clusters (stocks with similar price movements) and their industry groups:
1. Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN → Technology1-DOWN
2. Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN → Technology2-DOWN
3. Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN → Financial-DOWN
4. Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP → Oil-UP

 Summarization
– Reduce the size of large data sets (e.g., clustering precipitation in Australia)


What is not Cluster Analysis?
 Simple segmentation
– Dividing students into different registration groups
alphabetically, by last name

 Results of a query
– Groupings are a result of an external specification
– Clustering is a grouping of objects based on the data

 Supervised classification
– Have class label information

 Association Analysis
– Local vs. global connections



Notion of a Cluster Can Be Ambiguous

[Figure: the same 20 points and three different ways of dividing them, into two, four, or six clusters. How many clusters are there?]


Types of Clusterings

 A clustering is a set of clusters

 Important distinction between hierarchical and partitional sets of clusters
– Nested or unnested

 Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

 Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree


Partitional Clustering

[Figure: original points and a partitional clustering of them]


Hierarchical Clustering

[Figure: traditional hierarchical clustering of points p1–p4 with its dendrogram, and a non-traditional hierarchical clustering with its dendrogram]


Other Distinctions Between Sets of Clusters

 Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or ‘border’ points
– A person at a university can be both a student and an employee
 Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some
weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
 Partial versus complete
– In some cases, we only want to cluster some of the data
– Noise, outliers
 Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
Types of Clusters

 Well-separated clusters

 Center-based clusters

 Contiguous clusters

 Density-based clusters

 Property or Conceptual

 Described by an Objective Function



Types of Clusters: Well-Separated

 Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.

[Figure: 3 well-separated clusters]


Types of Clusters: Center-Based

 Center-based (prototype-based)
– A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of its cluster than to the center of any other cluster
– The center of a cluster is often a centroid (continuous attributes), the average of all the points in the cluster, or a medoid (categorical attributes), the most “representative” point of a cluster

[Figure: 4 center-based clusters]


Types of Clusters: Graph-Based (Contiguity-Based)

 Contiguous Cluster (Nearest neighbor or Transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.
 Two objects are connected only if they are within a specified distance of each other
 Trouble when noise is present

[Figure: 8 contiguous clusters]


Types of Clusters: Density-Based

 Density-based
– A cluster is a dense region of points that is separated from other regions of high density by low-density regions.
– Used when the clusters are irregular or intertwined, and when noise and outliers are present.

[Figure: 6 density-based clusters]


Types of Clusters: Conceptual Clusters

 Shared Property or Conceptual Clusters
– Finds clusters that share some common property or represent a particular concept.
 A clustering algorithm would need a very specific concept of a cluster to detect these clusters
 The groups should also have understandable descriptions that characterize their membership.

[Figure: 2 overlapping circles]


Types of Clusters: Objective Function

 Clusters Defined by an Objective Function
– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters and evaluate the ‘goodness’ of each potential set of clusters by using the given objective function. (NP-hard)
– Can have global or local objectives.
 Hierarchical clustering algorithms typically have local objectives
 Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the data to a parameterized model.
 Parameters for the model are determined from the data.
 Mixture models assume that the data is a ‘mixture’ of a number of statistical distributions.


Map Clustering Problem to a Different Problem

 Map the clustering problem to a different domain and solve a related problem in that domain
– The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points

– Clustering is equivalent to breaking the graph into connected components, one for each cluster (as in the sketch below)

– Want to minimize the edge weight between clusters and maximize the edge weight within clusters
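A minimal sketch of this mapping (illustrative, not from the slides): threshold the proximity matrix into a graph and read clusters off its connected components. The threshold `eps` and the helper name `graph_clusters` are assumptions made for the example.

```python
# Sketch: clustering via graph connected components. Points within the assumed
# threshold `eps` are joined by an edge; each connected component is a cluster.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def graph_clusters(X, eps):
    D = cdist(X, X)                        # proximity matrix (pairwise distances)
    adjacency = csr_matrix(D <= eps)       # weighted graph thresholded to edges
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.2]])
print(graph_clusters(X, eps=1.0))          # -> (2, array([0, 0, 1, 1]))
```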



Characteristics of the Input Data Are Important

 Type of proximity or density measure
– Central to clustering
– Depends on data and application

 Data characteristics that affect proximity and/or density are
– Dimensionality
 Sparseness
– Attribute type
– Special relationships in the data
 For example, autocorrelation
– Distribution of the data

 Noise and Outliers
– Often interfere with the operation of the clustering algorithm


Quality: What Is Good Clustering?

 A good clustering method will produce high-quality clusters with
– high intra-class similarity: cohesive within clusters
– low inter-class similarity: distinctive between clusters
 The quality of a clustering method depends on
– the similarity measure used by the method
– its implementation, and
– its ability to discover some or all of the hidden patterns

Ack: Jiawei Han, Micheline Kamber, and Jian Pei,
University of Illinois at Urbana-Champaign & Simon Fraser University
Measure the Quality of Clustering

 Dissimilarity/Similarity metric
– Similarity is expressed in terms of a distance function, typically a metric: d(i, j)
– The definitions of distance functions are usually rather different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables
– Weights should be associated with different variables based on applications and data semantics
 Quality of clustering:
– There is usually a separate “quality” function that measures the “goodness” of a cluster.
– It is hard to define “similar enough” or “good enough”
 The answer is typically highly subjective
Considerations for Cluster Analysis

 Partitioning criteria
– Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
 Separation of clusters
– Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
 Similarity measure
– Distance-based (e.g., Euclidean, road network, vector) vs. connectivity-based (e.g., density or contiguity)
 Clustering space
– Full space (often when low dimensional) vs. subspaces (often in high-dimensional clustering)

Ack: Jiawei Han, Micheline Kamber, and Jian Pei,
University of Illinois at Urbana-Champaign & Simon Fraser University
Requirements and Challenges
Ack: Jiawei Han, Micheline Kamber, and Jian Pei,
University of Illinois at Urbana-Champaign & Simon Fraser University

 Scalability
– Clustering all the data instead of only on samples
 Ability to deal with different types of attributes
– Numerical, binary, categorical, ordinal, linked, and mixtures of these
 Constraint-based clustering
– User may give inputs on constraints
– Use domain knowledge to determine input parameters
 Interpretability and usability
 Others
– Discovery of clusters with arbitrary shape
– Ability to deal with noisy data
– Incremental clustering and insensitivity to input order
– High dimensionality
Major Clustering Approaches (I)

 Partitioning approach:
– Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
– Typical methods: k-means, k-medoids, CLARANS
 Hierarchical approach:
– Create a hierarchical decomposition of the set of data (or objects) using some criterion
– Typical methods: DIANA, AGNES, BIRCH, CHAMELEON
 Density-based approach:
– Based on connectivity and density functions
– Typical methods: DBSCAN, OPTICS, DenClue
 Grid-based approach:
– Based on a multiple-level granularity structure
– Typical methods: STING, WaveCluster, CLIQUE
Clustering Algorithms

 K-means and its variants

 Hierarchical clustering

 Density-based clustering



K-means Clustering

 Partitional clustering approach
 Number of clusters, K, must be specified
 Each cluster is associated with a centroid (center point)
 Each point is assigned to the cluster with the closest
centroid
 The basic algorithm is very simple
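As a concrete companion to the bullets above, here is a minimal NumPy sketch of the basic loop (the function name, seed handling, and convergence test are illustrative choices, not from the slides):

```python
# Basic K-means: pick K random points as centroids, then alternate between
# assigning points to the closest centroid and recomputing the centroids.
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iter):
        # Assignment step: closest centroid by Euclidean distance.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid becomes the mean of its points
        # (an empty cluster keeps its old centroid in this sketch).
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centroids[k] for k in range(K)])
        if np.allclose(new, centroids):   # stop when centroids no longer move
            break
        centroids = new
    return labels, centroids
```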



Example of K-means Clustering

[Figure: animation frame, K-means on sample 2-D data after iteration 6]
Example of K-means Clustering

[Figure: cluster assignments and centroid positions at iterations 1–6 of K-means on sample 2-D data]


K-means Clustering – Details
 Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
 The centroid is (typically) the mean of the points in the
cluster.
 ‘Closeness’ is measured by Euclidean distance, cosine
similarity, correlation, etc.
 K-means will converge for common similarity measures
mentioned above.
 Most of the convergence happens in the first few
iterations.
– Often the stopping condition is changed to ‘Until relatively few
points change clusters’
 Complexity is O( n * K * I * d )
– n = number of points, K = number of clusters,
I = number of iterations, d = number of attributes



Evaluating K-means Clusters

 Most common measure is Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster
– To get SSE, we square these errors and sum them:

  SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)
– x is a data point in cluster C_i and m_i is the representative point for cluster C_i
 One can show that m_i corresponds to the center (mean) of the cluster
– Given two sets of clusters, we prefer the one with the smallest error
– One easy way to reduce SSE is to increase K, the number of
clusters
 A good clustering with smaller K can have a lower SSE than a poor
clustering with higher K
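For a given result, SSE is a one-liner to compute; a small sketch (reusing the labels and centroids produced by the K-means sketch earlier):

```python
# SSE: sum of squared distances from every point to its cluster centroid.
import numpy as np

def sse(X, labels, centroids):
    return sum(((X[labels == k] - centroids[k]) ** 2).sum()
               for k in range(len(centroids)))
```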



Two different K-means Clusterings

[Figure: the same set of original points grouped two ways, an optimal clustering and a sub-optimal clustering]


Limitations of K-means

 K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes

 K-means has problems when the data contains outliers.



Limitations of K-means: Differing Sizes

[Figure: original points vs. K-means with 3 clusters, for clusters of differing sizes]


Limitations of K-means: Differing
Density

[Figure: original points vs. K-means with 3 clusters, for clusters of differing density]


Limitations of K-means: Non-globular
Shapes

[Figure: original points vs. K-means with 2 clusters, for non-globular shapes]


Overcoming K-means Limitations

[Figure: original points vs. many small K-means clusters, for the differing-sizes case]

One solution is to use many clusters: find parts of clusters, which then need to be put back together.
Overcoming K-means Limitations

[Figure: original points vs. many small K-means clusters, for the differing-density case]


Overcoming K-means Limitations

[Figure: original points vs. many small K-means clusters, for the non-globular case]


Importance of Choosing Initial Centroids

[Figure: one choice of initial centroids, K-means shown after iteration 6]
Importance of Choosing Initial Centroids

[Figure: cluster assignments and centroid positions at iterations 1–6 for this choice of initial centroids]


Importance of Choosing Initial Centroids …

[Figure: a different choice of initial centroids, K-means shown after iteration 5]
Importance of Choosing Initial Centroids …

[Figure: cluster assignments and centroid positions at iterations 1–5 for this choice of initial centroids]


Problems with Selecting Initial Points

 If there are K ‘real’ clusters, then the chance of selecting one centroid from each cluster is small.
– Chance is relatively small when K is large
– If clusters are the same size, n, then

  P = \frac{K!\, n^K}{(Kn)^K} = \frac{K!}{K^K}

– For example, if K = 10, then probability = 10!/10^10 = 0.00036
– Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t
– Consider an example of five pairs of clusters
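The quoted probability is easy to verify (a quick check, assuming K equal-sized clusters and K centroids drawn at random):

```python
# P(one initial centroid lands in each of the K clusters) = K! / K**K
from math import factorial

K = 10
print(factorial(K) / K**K)   # 0.00036288, i.e., roughly 0.0004
```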



10 Clusters Example

[Figure: snapshot at iteration 4, K-means on data with five pairs of clusters]

Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example

[Figure: iterations 1–4 of K-means on the five pairs of clusters]

Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example

[Figure: snapshot at iteration 4 for the alternative initialization]

Starting with some pairs of clusters having three initial centroids, while others have only one.


10 Clusters Example

[Figure: iterations 1–4 for the alternative initialization]

Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to Initial Centroids Problem

 Multiple runs
– Helps, but probability is not on your side
 Sample and use hierarchical clustering to
determine initial centroids
 Select more than k initial centroids and then
select among these initial centroids
– Select most widely separated
 Postprocessing
 Generate a larger number of clusters and then
perform a hierarchical clustering
 Bisecting K-means
– Not as susceptible to initialization issues
K-means++

 This approach can be slower than random initialization, but very consistently produces better results in terms of SSE
– The k-means++ algorithm guarantees an approximation ratio O(log k) in expectation, where k is the number of centers
 To select a set of initial centroids, C, perform the following:
1. Select an initial point at random to be the first centroid
2. For k – 1 steps:
3. For each of the N points, x_i, 1 ≤ i ≤ N, find the minimum squared distance to the currently selected centroids, C_1, …, C_j, 1 ≤ j < k, i.e., min_j dist²(x_i, C_j)
4. Randomly select a new centroid by choosing a point with probability proportional to min_j dist²(x_i, C_j)
5. End For



K-means++
 K-means results can be sensitive to initialization

[Figure: the same data clustered two ways, the desired clustering vs. a bad clustering from a poor initialization]

 K-means++ (Arthur and Vassilvitskii, 2007) is an improvement over K-means
 The only difference is the way we initialize the cluster centers (the rest is just K-means)
 Basic idea: initialize cluster centers such that they are reasonably far from each other
 Note: in K-means++, the cluster centers are chosen to be data points themselves


K-means++

 K-means++ works as follows

 Choose the first cluster mean uniformly at random to be one of the data points

 The subsequent cluster means are chosen as follows
1. For each unselected point x, compute its smallest distance D(x) from the already initialized means
2. Select the next cluster mean uniformly at random from the unselected points, with probability proportional to D(x)²; thus the farthest points are most likely to be selected as cluster means
3. Repeat 1 and 2 until all the cluster means are initialized

 Now run standard K-means with these initial cluster means

 The K-means++ initialization scheme thus helps ensure that the initial cluster means are located in different clusters
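A minimal sketch of this initialization (NumPy; the helper name and seed handling are illustrative):

```python
# K-means++ initialization: first mean uniformly at random, each later mean a
# data point sampled with probability proportional to its squared distance
# D(x)^2 from the means chosen so far.
import numpy as np

def kmeans_pp_init(X, K, seed=0):
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()                     # proportional to D(x)^2
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)

# These centroids then seed the standard K-means loop.
```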
Empty Clusters

 K-means can yield empty clusters

[Figure, 1-D example: points 6.5, 9, 10, 15, 16, 18.5 with centroids (marked X) at 6.8, 13, 18; after reassignment the centroids move to 7.75, 12.5, 17.25, and the middle centroid is left with no points: an empty cluster]


Handling Empty Clusters

 Basic K-means algorithm can yield empty clusters

 Several strategies
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be
repeated several times.



Updating Centers Incrementally

 In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid

 An alternative is to update the centroids after each assignment (incremental approach)
– Each assignment updates zero or two centroids
– More expensive
– Introduces an order dependency
– Never get an empty cluster
– Can use “weights” to change the impact



Pre-processing and Post-processing

 Pre-processing
– Normalize the data
– Eliminate outliers
 Post-processing
– Eliminate small clusters that may represent outliers
– Split ‘loose’ clusters, i.e., clusters with relatively high
SSE
– Merge clusters that are ‘close’ and that have relatively
low SSE
– Can use these steps during the clustering process
 ISODATA



Bisecting K-means

 Bisecting K-means algorithm
– Variant of K-means that can produce a partitional or a hierarchical clustering; see the sketch below

CLUTO: http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview
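The algorithm box itself did not survive extraction here, so the following is a hedged sketch of the usual bisecting scheme, reusing the `kmeans` helper from the earlier sketch: repeatedly split the cluster with the highest SSE using 2-means until K clusters remain.

```python
# Bisecting K-means sketch (assumes the `kmeans` function defined earlier).
import numpy as np

def bisecting_kmeans(X, K):
    clusters = [np.arange(len(X))]      # start with one all-inclusive cluster
    while len(clusters) < K:
        # Choose the cluster with the largest SSE to bisect.
        sses = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        worst = clusters.pop(int(np.argmax(sses)))
        labels, _ = kmeans(X[worst], K=2)          # split it with plain 2-means
        clusters += [worst[labels == 0], worst[labels == 1]]
    return clusters                     # list of index arrays, one per cluster
```

Recording the sequence of bisections is what yields the hierarchical clustering mentioned above.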



Bisecting K-means Example

[Figure: bisecting K-means splitting a data set step by step]


Hierarchical Clustering

 Produces a set of nested clusters organized as a hierarchical tree
 Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits

[Figure: six nested clusters and the corresponding dendrogram with merge heights]


Strengths of Hierarchical Clustering

 Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level

 They may correspond to meaningful taxonomies
– Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)



Hierarchical Clustering

 Two main types of hierarchical clustering
– Agglomerative:
 Start with the points as individual clusters
 At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive:
 Start with one, all-inclusive cluster
 At each step, split a cluster until each cluster contains an individual point (or there are k clusters)

 Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time



Agglomerative Clustering Algorithm
 Most popular hierarchical clustering technique
 Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains

 Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms; a minimal sketch using scipy follows
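This merge loop is what scipy's hierarchy module implements; a small sketch (the data and parameter values are illustrative, not from the slides):

```python
# Agglomerative clustering with scipy: `linkage` runs the merge loop above,
# `fcluster` cuts the resulting tree into a flat clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).random((20, 2))
Z = linkage(X, method='single')   # 'single' = MIN, 'complete' = MAX,
                                  # 'average' = group average, 'ward' = Ward's
labels = fcluster(Z, t=3, criterion='maxclust')   # cut the tree into 3 clusters
# scipy.cluster.hierarchy.dendrogram(Z) draws the merge tree (needs matplotlib).
```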



Starting Situation

 Start with clusters of individual points and a proximity matrix

[Figure: points p1–p12 and the initial proximity matrix with one row and column per point]


Intermediate Situation

 After some merging steps, we have some clusters

[Figure: clusters C1–C5 and the current proximity matrix with one row and column per cluster]


Intermediate Situation

 We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1–C5 before merging C2 and C5, with the current proximity matrix]


After Merging

 The question is “How do we update the proximity matrix?”

[Figure: after merging C2 and C5, the rows and columns for the merged cluster C2 ∪ C5 in the proximity matrix are marked “?”]


How to Define Inter-Cluster Distance

[Figure: two clusters and the proximity matrix over points p1, p2, …]

 MIN
 MAX
 Group Average
 Distance Between Centroids
 Other methods driven by an objective function
– Ward’s Method uses squared error




MIN or Single Link

 Proximity of two clusters is based on the two closest points in the different clusters
– Determined by one pair of points, i.e., by one link in the proximity graph

[Example distance matrix: figure omitted in extraction]



Hierarchical Clustering: MIN

[Figure: nested clusters of points 1–6 under single link, with the corresponding dendrogram]


Strength of MIN

[Figure: original points vs. the six clusters found by single link]

• Can handle non-elliptical shapes


Limitations of MIN

[Figure: original points vs. two-cluster and three-cluster single-link results]

• Sensitive to noise and outliers


MAX or Complete Linkage

 Proximity of two clusters is based on the two most distant points in the different clusters
– Determined by all pairs of points in the two clusters

[Example distance matrix: figure omitted in extraction]



Hierarchical Clustering: MAX

[Figure: nested clusters of points 1–6 under complete link, with the corresponding dendrogram]


Strength of MAX

[Figure: original points vs. the two clusters found by complete link]

• Less susceptible to noise and outliers


Limitations of MAX

[Figure: original points vs. the two clusters found by complete link]

• Tends to break large clusters
• Biased towards globular clusters


Group Average

 Proximity of two clusters is the average of pairwise proximity between points in the two clusters:

  \mathrm{proximity}(C_i, C_j) = \frac{\sum_{p_i \in C_i} \sum_{p_j \in C_j} \mathrm{proximity}(p_i, p_j)}{|C_i| \cdot |C_j|}

 Need to use average connectivity for scalability, since total proximity favors large clusters

[Example distance matrix: figure omitted in extraction]



Hierarchical Clustering: Group Average

[Figure: nested clusters of points 1–6 under group average, with the corresponding dendrogram]


Hierarchical Clustering: Group Average

 Compromise between Single and Complete Link

 Strengths
– Less susceptible to noise and outliers

 Limitations
– Biased towards globular clusters



Cluster Similarity: Ward’s Method

 Similarity of two clusters is based on the increase in squared error when two clusters are merged
– Similar to group average if distance between points is distance squared

 Less susceptible to noise and outliers

 Biased towards globular clusters

 Hierarchical analogue of K-means
– Can be used to initialize K-means


Hierarchical Clustering: Comparison

[Figure: the same six points clustered by MIN, MAX, Group Average, and Ward’s Method, shown side by side]


MST: Divisive Hierarchical Clustering

 Build MST (Minimum Spanning Tree)
– Start with a tree that consists of any point
– In successive steps, look for the closest pair of points (p, q)
such that one point (p) is in the current tree but the other (q) is
not
– Add q to the tree and put an edge between p and q
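A sketch of the same construction using scipy (illustrative: the slide describes the incremental Prim-style build, while scipy computes the tree directly):

```python
# Build the MST of the complete distance graph; a divisive hierarchy is then
# obtained by repeatedly deleting the largest remaining MST edge, so removing
# the k-1 longest edges leaves k clusters (connected components).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

X = np.random.default_rng(0).random((10, 2))
mst = minimum_spanning_tree(cdist(X, X))       # sparse matrix of MST edges
edges = np.argwhere(mst.toarray() > 0)         # the (p, q) pairs in the tree
```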



MST: Divisive Hierarchical Clustering

 Use MST for constructing hierarchy of clusters


Hierarchical Clustering: Time and Space Requirements

 O(N²) space since it uses the proximity matrix.
– N is the number of points.

 O(N³) time in many cases
– There are N steps, and at each step the proximity matrix, of size N², must be updated and searched
– Complexity can be reduced to O(N² log(N)) time with some cleverness


Hierarchical Clustering: Problems and Limitations

 Once a decision is made to combine two clusters, it cannot be undone

 No global objective function is directly minimized

 Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling clusters of different sizes and non-globular shapes
– Breaking large clusters


DBSCAN

 DBSCAN is a density-based algorithm.
– Density = number of points within a specified radius (Eps)

– A point is a core point if it has at least a specified number of points (MinPts) within Eps
 These are points that are in the interior of a cluster
 The count includes the point itself

– A border point is not a core point, but is in the neighborhood of a core point

– A noise point is any point that is not a core point or a border point
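A short sketch using scikit-learn's DBSCAN (the Eps and MinPts values here are illustrative, not from the slides):

```python
# DBSCAN with scikit-learn: eps plays the role of Eps, min_samples of MinPts.
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).random((100, 2))
db = DBSCAN(eps=0.1, min_samples=4).fit(X)
labels = db.labels_                   # cluster ids; noise points get label -1
core = db.core_sample_indices_        # indices of the core points
```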



DBSCAN: Core, Border, and Noise Points

[Figure: core, border, and noise points for MinPts = 7]


DBSCAN Algorithm

 Eliminate noise points
 Perform clustering on the remaining points



DBSCAN: Core, Border and Noise Points

[Figure: original points, and each point labeled as core, border, or noise for Eps = 10, MinPts = 4]
When DBSCAN Works Well

[Figure: original points and the clusters DBSCAN finds]

• Resistant to Noise
• Can handle clusters of different shapes and sizes


When DBSCAN Does NOT Work Well

[Figure: original points, and DBSCAN results at (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92)]

• Varying densities
• High-dimensional data
DBSCAN: Determining EPS and MinPts

 Idea is that for points in a cluster, their kth nearest neighbors are at roughly the same distance
 Noise points have the kth nearest neighbor at farther
distance
 So, plot sorted distance of every point to its kth
nearest neighbor
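A sketch of that k-distance plot (the function name and k value are illustrative):

```python
# Sorted distance of every point to its k-th nearest neighbor; the "knee" of
# this curve is a candidate Eps, with k as the corresponding MinPts.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_distance(X, k=4):
    # k+1 neighbors because each point's nearest neighbor is itself.
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return np.sort(d[:, k])

# e.g. with matplotlib: plt.plot(k_distance(X)) and read Eps off the knee.
```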



Cluster Validity
 For supervised classification we have a variety of
measures to evaluate how good our model is
– Accuracy, precision, recall

 For cluster analysis, the analogous question is how to evaluate the “goodness” of the resulting clusters

 But “clusters are in the eye of the beholder”!

 Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters



Clusters found in Random Data

[Figure: 100 random points, and the “clusters” imposed on them by DBSCAN, K-means, and complete link]


Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Evaluating how well the results of a cluster analysis fit the data
without reference to external information.
- Use only the data
3. Determining the ‘correct’ number of clusters.
4. Comparing the results of two different sets of cluster analyses to
determine which is better.
5. Comparing the results of a cluster analysis to externally known
results, e.g., to externally given class labels.



Measures of Cluster Validity
 Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types.
– External Index: Used to measure the extent to which cluster labels
match externally supplied class labels.
 Entropy
– Internal Index: Used to measure the goodness of a clustering
structure without respect to external information.
 Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or
clusters.
 Often an external or internal index is used for this function, e.g., SSE or entropy
 Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the
numerical measure that implements the criterion.



Measuring Cluster Validity Via Correlation
 Two matrices
– Proximity Matrix
– Ideal Similarity Matrix
 One row and one column for each data point
 An entry is 1 if the associated pair of points belong to the same cluster
 An entry is 0 if the associated pair of points belongs to different clusters
 Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between
n(n-1) / 2 entries needs to be calculated.
 High correlation indicates that points that belong to the
same cluster are close to each other.
 Not a good measure for some density or contiguity based
clusters.
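A sketch of this measure (the helper name is illustrative); note that when the proximity matrix holds distances, as here, a good clustering gives a strongly negative correlation, as on the next slide:

```python
# Correlate the proximity (distance) matrix with the ideal similarity matrix
# built from cluster labels, using only the n(n-1)/2 upper-triangle entries.
import numpy as np
from scipy.spatial.distance import cdist

def validity_correlation(X, labels):
    D = cdist(X, X)                                   # proximity matrix
    ideal = (labels[:, None] == labels[None, :]).astype(float)
    iu = np.triu_indices(len(X), k=1)                 # symmetric: upper triangle
    return np.corrcoef(D[iu], ideal[iu])[0, 1]
```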



Measuring Cluster Validity Via Correlation

 Correlation of ideal similarity and proximity matrices for the K-means clusterings of the following two data sets:

[Figure: a data set with three well-separated clusters, Corr = -0.9235, and a data set of random points, Corr = -0.5810]


Using Similarity Matrix for Cluster Validation

 Order the similarity matrix with respect to cluster labels and inspect visually.

[Figure: a data set with three well-separated clusters and its reordered 100×100 similarity matrix, showing three bright diagonal blocks]


Using Similarity Matrix for Cluster Validation

 Clusters in random data are not so crisp

[Figure: reordered similarity matrix for the DBSCAN clusters found in random data]


Using Similarity Matrix for Cluster Validation

 Clusters in random data are not so crisp

[Figure: reordered similarity matrix for the K-means clusters found in random data]


Using Similarity Matrix for Cluster Validation

 Clusters in random data are not so crisp

[Figure: reordered similarity matrix for the complete-link clusters found in random data]


Using Similarity Matrix for Cluster Validation

[Figure: reordered similarity matrix for DBSCAN clusters 1–7 of a more complicated data set]


Internal Measures: SSE
 Clusters in more complicated figures aren’t well separated
 Internal Index: Used to measure the goodness of a clustering
structure without respect to external information
– SSE
 SSE is good for comparing two clusterings or two clusters
(average SSE).
 Can also be used to estimate the number of clusters
[Figure: a data set with ten clusters, and the SSE of K-means solutions plotted against K from 2 to 30]


Internal Measures: SSE

 SSE curve for a more complicated data set

[Figure: a data set with clusters 1–6 and the SSE of clusters found using K-means]


Framework for Cluster Validity
 Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value, 10, is that
good, fair, or poor?
 Statistics provide a framework for cluster validity
– The more “atypical” a clustering result is, the more likely it represents
valid structure in the data
– Can compare the values of an index that result from random data or
clusterings to those of a clustering result.
 If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand.
 For comparing the results of two different sets of cluster
analyses, a framework is less necessary.
– However, there is the question of whether the difference between two
index values is significant



Statistical Framework for SSE

 Example
– Compare SSE of 0.005 against three clusters in random data
– Histogram shows SSE of three clusters in 500 sets of random data
points of size 100 distributed over the range 0.2 – 0.8 for x and y
values

[Figure: 100 random points, and a histogram of the SSE of three clusters over the 500 random data sets, with values falling roughly between 0.016 and 0.034]


Statistical Framework for Correlation

 Correlation of ideal similarity and proximity matrices for the K-means clusterings of the following two data sets:

[Figure: the same two data sets as before, with Corr = -0.9235 and Corr = -0.5810]


Internal Measures: Cohesion and Separation

 Cluster Cohesion: Measures how closely related the objects in a cluster are
– Example: SSE
 Cluster Separation: Measures how distinct or well-separated a cluster is from other clusters
 Example: Squared Error
– Cohesion is measured by the within-cluster sum of squares (SSE):

  WSS = \sum_{i} \sum_{x \in C_i} (x - m_i)^2

– Separation is measured by the between-cluster sum of squares:

  BSS = \sum_{i} |C_i| \, (m - m_i)^2

– where |C_i| is the size of cluster i and m is the overall mean
Internal Measures: Cohesion and Separation

 Example: SSE for the 1-D points 1, 2, 4, 5 (overall mean m = 3)
– BSS + WSS = constant

K=1 cluster:
  WSS = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 10
  BSS = 4 \times (3-3)^2 = 0
  Total = 10 + 0 = 10

K=2 clusters ({1, 2} with m_1 = 1.5 and {4, 5} with m_2 = 4.5):
  WSS = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1
  BSS = 2 \times (3-1.5)^2 + 2 \times (4.5-3)^2 = 9
  Total = 1 + 9 = 10



Internal Measures: Cohesion and Separation

 A proximity-graph-based approach can also be used for cohesion and separation.
– Cluster cohesion is the sum of the weight of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster.

[Figure: a proximity graph illustrating cohesion (edges within a cluster) and separation (edges between clusters)]


Internal Measures: Silhouette Coefficient

 Silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings
 For an individual point, i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for the point is then given by

  s = (b – a) / max(a, b)

– Typically between 0 and 1.
– The closer to 1 the better.

 Can calculate the average silhouette coefficient for a cluster or a clustering
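scikit-learn provides both the per-point and the averaged versions; a short sketch (the data and labels are illustrative):

```python
# Per-point silhouette values and their average for a clustering.
import numpy as np
from sklearn.metrics import silhouette_samples, silhouette_score

X = np.random.default_rng(0).random((100, 2))
labels = (X[:, 0] > 0.5).astype(int)            # any clustering's labels
s_points = silhouette_samples(X, labels)        # s = (b - a) / max(a, b) per point
s_avg = silhouette_score(X, labels)             # average over all points
```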



Hopkins Statistics

[Figure: points scattered over the unit square with little clustering tendency]

Avg. H = 0.56 with a standard deviation of 0.03 for p = 20 and 100 trials
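The slides quote H values without the formula; a common definition (an assumption here, not taken from the slides) is to sample p data points and p uniform random points, compare nearest-neighbor distances, and set H = Σu / (Σu + Σw), so that H near 0.5 suggests random data and H near 1 suggests strong clustering tendency.

```python
# Hopkins statistic sketch: w = NN distances of sampled data points to the rest
# of the data, u = NN distances of uniform random points to the data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(X, p=20, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    sample = X[rng.choice(len(X), size=p, replace=False)]
    w = nn.kneighbors(sample)[0][:, 1]        # column 1 skips the point itself
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(p, X.shape[1]))
    u = nn.kneighbors(uniform, n_neighbors=1)[0][:, 0]
    return u.sum() / (u.sum() + w.sum())
```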



Hopkins Statistics

[Figure: points with pronounced clustering structure]

Avg. H = 0.95 with a standard deviation of 0.006 for p = 20 and 100 trials


External Measures of Cluster Validity: Entropy and Purity

[Table of entropy and purity values: figure omitted in extraction]
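As a stand-in for the lost table, here is a hedged sketch of the standard definitions: each cluster's purity is the fraction of its points in its majority class, its entropy is computed over its class mix, and both are averaged weighted by cluster size.

```python
# Purity and entropy of a clustering against known integer class labels.
import numpy as np

def purity_and_entropy(labels, classes):
    purity, entropy, N = 0.0, 0.0, len(labels)
    for k in np.unique(labels):
        counts = np.bincount(classes[labels == k])    # class mix of cluster k
        p = counts / counts.sum()
        purity += counts.sum() / N * p.max()
        nz = p[p > 0]
        entropy += counts.sum() / N * -(nz * np.log2(nz)).sum()
    return purity, entropy
```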



Final Comment on Cluster Validity

“The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage.”

Algorithms for Clustering Data, Jain and Dubes
