
Data Mining

Cluster Analysis: Basic Concepts


and Algorithms

Lecture Notes for Chapter 8

Introduction to Data Mining


by
Tan, Steinbach, Kumar

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 1


What is Cluster Analysis?

 Finding groups of objects such that the objects in a group
will be similar (or related) to one another and different
from (or unrelated to) the objects in other groups

(Figure: intra-cluster distances are minimized, inter-cluster distances are maximized.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 2


Applications of Cluster Analysis

 Understanding
– Group related documents
for browsing, group genes
and proteins that have
similar functionality, or
group stocks with similar
price fluctuations

 Summarization
– Reduce the size of large
data sets

Clustering precipitation
in Australia

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 3


What is not Cluster Analysis?

 Supervised classification
– Have class label information

 Simple segmentation
– Dividing students into different registration groups
alphabetically, by last name

 Results of a query
– Groupings are a result of an external specification

 Graph partitioning
– Some mutual relevance and synergy, but areas are not
identical
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 4
Notion of a Cluster can be
Ambiguous

(Figure: How many clusters? The same set of points can reasonably be seen as two, four, or six clusters.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 5


Types of Clusterings

 A clustering is a set of clusters


 Important distinction between hierarchical and
partitional sets of clusters
 Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters)
such that each data object is in exactly one subset

 Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 6


Partitional Clustering

Original Points A Partitional Clustering

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 7


Hierarchical Clustering

(Figure: a traditional hierarchical clustering and a non-traditional hierarchical clustering of points p1-p4.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 8


Other Distinctions Between Sets of
Clusters

 Exclusive versus non-exclusive


– In non-exclusive clusterings, points may belong to multiple
clusters.
– Can represent multiple classes or ‘border’ points
 Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some
weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
 Partial versus complete
– In some cases, we only want to cluster some of the data
 Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 9


Types of Clusters

 Well-separated clusters

 Center-based clusters

 Contiguous clusters

 Density-based clusters

 Property or Conceptual

 Described by an Objective Function


© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 10
Types of Clusters: Well-Separated

 Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.

3 well-separated clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 11


Types of Clusters: Center-Based

 Center-based (prototype-based)
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the “center” of its cluster than to the
center of any other cluster
– The center of a cluster is often a centroid, the average of all
the points in the cluster, or a medoid, the most “representative”
point of a cluster

4 center-based clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 12


Types of Clusters: Contiguity-Based

 Contiguous Cluster (Graph-based)


– A cluster is a set of points such that a point in a cluster is
closer (or more similar) to one or more other points in the
cluster than to any point not in the cluster. A cluster can be
defined as a connected component, i.e., a group of objects that
are connected to one another but have no connection to
objects outside the group.

8 contiguous clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 13


Types of Clusters: Density-Based

 Density-based
– A cluster is a dense region of points that is separated from
other regions of high density by regions of low density.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.

6 density-based clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 14


Types of Clusters: Conceptual Clusters

 Shared Property or Conceptual Clusters


– Finds clusters that share some common property or represent
a particular concept.

2 Overlapping Circles

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 15


Types of Clusters: Objective Function

 Clusters Defined by an Objective Function


– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters
and evaluate the `goodness' of each potential set of clusters by
using the given objective function.
– Can have global or local objectives.
 Hierarchical clustering algorithms typically have local objectives
 Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the
data to a parameterized model.
 Parameters for the model are determined from the data.
 Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 16


Types of Clusters: Objective Function …

 Map the clustering problem to a different domain


and solve a related problem in that domain
– Proximity matrix defines a weighted graph, where the
nodes are the points being clustered, and the
weighted edges represent the proximities between
points

– Clustering is equivalent to breaking the graph into


connected components, one for each cluster.

– Want to minimize the edge weight between clusters


and maximize the edge weight within clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 17
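To make this concrete, here is a small Python sketch (not part of the original slides) that treats a similarity matrix as a weighted graph, keeps only edges above a chosen threshold, and returns the connected components via union-find. The threshold and the example matrix are illustrative assumptions.

import numpy as np

def connected_component_clusters(proximity, threshold):
    """Treat the proximity matrix as a weighted graph: keep edges whose
    proximity is at least `threshold` and return the connected components."""
    n = proximity.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if proximity[i, j] >= threshold:     # similar enough -> same cluster
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    return [find(i) for i in range(n)]           # one component label per point

# Illustrative similarity matrix for four points (values assumed, not from the slides)
S = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])
print(connected_component_clusters(S, threshold=0.5))   # two components: {0,1} and {2,3}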


Clustering Algorithms

 K-means and its variants

 Hierarchical clustering

 Density-based clustering

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 18


K-Means

 Initial set of clusters randomly chosen.


 Iteratively, items are moved among sets of
clusters until the desired set is reached.
 High degree of similarity among elements in a
cluster is obtained.
 Given a cluster Ki={ti1,ti2,…,tim}, the cluster
mean is mi = (1/m)(ti1 + … + tim)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 19


K-Means Example
 Given: {2,4,10,12,3,20,30,11,25}, k=2
 Randomly assign means: m1=3,m2=4
 K1={2,3}, K2={4,10,12,20,30,11,25},
m1=2.5,m2=16
 K1={2,3,4},K2={10,12,20,30,11,25}, m1=3,m2=18
 K1={2,3,4,10},K2={12,20,30,11,25},
m1=4.75,m2=19.6
 K1={2,3,4,10,11,12},K2={20,30,25}, m1=7,m2=25
 Stop as the clusters with these means are the
same.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 20
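The example above can be checked with a few lines of Python. This is an illustrative sketch of the 1-D assign/recompute loop, not code from the lecture notes; the stopping rule follows the example (stop when the clusters repeat).

def kmeans_1d(points, means, max_iter=100):
    """Repeat assign-to-nearest-mean / recompute-means until the clusters stop changing."""
    clusters = None
    for _ in range(max_iter):
        new_clusters = [[] for _ in means]
        for p in points:
            nearest = min(range(len(means)), key=lambda i: abs(p - means[i]))
            new_clusters[nearest].append(p)
        if new_clusters == clusters:                      # same clusters as last pass: stop
            break
        clusters = new_clusters
        means = [sum(c) / len(c) for c in clusters]       # assumes no cluster becomes empty
    return clusters, means

data = [2, 4, 10, 12, 3, 20, 30, 11, 25]
print(kmeans_1d(data, means=[3, 4]))
# -> ([[2, 4, 10, 12, 3, 11], [20, 30, 25]], [7.0, 25.0]), matching the final step above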


K-Means Algorithm

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 21


K-means Clustering
 Partitional clustering approach
 Each cluster is associated with a centroid (center point)
 Each point is assigned to the cluster with the closest centroid
 Number of clusters, K, must be specified
 The basic algorithm is very simple

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 22


K-means Clustering – Details
 Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
 The centroid is (typically) the mean of the points in the cluster.
 ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
 K-means will converge for common similarity measures mentioned above.
 Most of the convergence happens in the first few iterations.
– Often the stopping condition is changed to ‘Until relatively few points change clusters’
 Complexity is O( n * K * I * d )
– n = number of points, K = number of clusters,
I = number of iterations, d = number of attributes

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 23
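A compact NumPy sketch of the basic algorithm described above: random initial centroids, Euclidean "closeness", centroids recomputed as means, and stopping when no point changes clusters. Function and parameter names are illustrative choices, not prescribed by the slides.

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Basic K-means on an (n, d) array X with k clusters."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()   # random initial centroids
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # assignment step: each point goes to the closest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):            # no point changed clusters: converged
            break
        labels = new_labels
        # update step: each centroid becomes the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):                       # guard against an empty cluster
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

Each iteration costs O(n * K * d) for the distance computation, consistent with the O(n * K * I * d) total above.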


Two different K-means Clusterings
(Figure: the same original points clustered two different ways by K-means, an optimal clustering and a sub-optimal clustering.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 24


Importance of Choosing Initial
Centroids

(Figure: animation of K-means iterations 1-6 for one choice of initial centroids.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 25


Importance of Choosing Initial
Centroids
(Figure: the clusters after each of iterations 1 through 6, shown as separate panels.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 26


Evaluating K-means Clusters

 Most common measure is Sum of Squared Error (SSE)


– For each point, the error is the distance to the nearest cluster centroid
– To get SSE, we square these errors and sum them.
SSE = \sum_{i=1}^{K} \sum_{x \in C_i} dist^2(m_i, x)

– x is a data point in cluster Ci and mi is the representative point for


cluster Ci
 It can be shown that m_i corresponds to the center (mean) of the cluster
– Given two clusters, we can choose the one with the smallest
error
– One easy way to reduce SSE is to increase K, the number of
clusters
 A good clustering with smaller K can have a lower SSE than a poor
clustering with higher K
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 27
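In code, the SSE above is just a sum over clusters of squared distances from each point to its centroid. A small illustrative NumPy helper (it assumes labels and centroids such as those returned by the earlier K-means sketch):

import numpy as np

def sse(X, labels, centroids):
    """Sum of squared Euclidean distances from each point to its cluster centroid."""
    X = np.asarray(X, dtype=float)
    total = 0.0
    for i, c in enumerate(centroids):
        members = X[labels == i]                 # points assigned to cluster i
        total += ((members - c) ** 2).sum()
    return total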
Importance of Choosing Initial
Centroids …

(Figure: animation of K-means iterations 1-5 for a different, poorer choice of initial centroids.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 28


Importance of Choosing Initial
Centroids …

(Figure: the clusters after each of iterations 1 through 5 for the poorer initialization, shown as separate panels.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 29


K-Medoids (PAM)

 Handles outliers well.
 Ordering of input does not impact results.
 Does not scale well.
 Each cluster represented by one item, called the
medoid.
 Initial set of k medoids randomly chosen.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 30


PAM

A B C D E
A 0 1 2 2 3
B 1 0 2 4 3
C 2 2 0 1 5
D 2 4 1 0 3
E 3 3 5 3 0

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 31


PAM Cost Calculation

 At each step in algorithm, medoids are changed if


the overall cost is improved.
 C_jih – cost change for an item t_j associated with
swapping medoid t_i with non-medoid t_h.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 32


PAM Algorithm

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 33
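As an illustration of the swap-based search described above, here is a minimal sketch (assumptions: random initial medoids, and a full pass over all medoid/non-medoid swaps per round). It starts from k random medoids and keeps any swap of a medoid t_i with a non-medoid t_h that lowers the total cost, taken here as the sum of each item's distance to its closest medoid; it uses the 5x5 distance matrix from the PAM slide.

import itertools
import numpy as np

def total_cost(D, medoids):
    """Sum over all items of the distance to the closest medoid (D is a full distance matrix)."""
    return D[:, medoids].min(axis=1).sum()

def pam(D, k, seed=0):
    D = np.asarray(D, dtype=float)
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(len(D), size=k, replace=False))    # initial medoids chosen at random
    improved = True
    while improved:                                              # stop when no swap lowers the cost
        improved = False
        for i, h in itertools.product(range(k), range(len(D))):
            if h in medoids:
                continue
            candidate = medoids.copy()
            candidate[i] = h                                     # try swapping medoid t_i with item t_h
            if total_cost(D, candidate) < total_cost(D, medoids):
                medoids, improved = candidate, True
    labels = D[:, medoids].argmin(axis=1)                        # assign each item to its closest medoid
    return medoids, labels

# The distance matrix for items A-E from the PAM slide above
D = [[0, 1, 2, 2, 3],
     [1, 0, 2, 4, 3],
     [2, 2, 0, 1, 5],
     [2, 4, 1, 0, 3],
     [3, 3, 5, 3, 0]]
print(pam(D, k=2))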


Hierarchical Clustering

 Produces a set of nested clusters organized as a


hierarchical tree
 Can be visualized as a dendrogram
– A tree like diagram that records the sequences of
merges or splits

(Figure: six nested clusters and the corresponding dendrogram recording the order of merges.)
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 34


Strengths of Hierarchical
Clustering

 Do not have to assume any particular number of


clusters
– Any desired number of clusters can be obtained by
‘cutting’ the dendrogram at the proper level

 They may correspond to meaningful taxonomies


– Example in biological sciences (e.g., animal kingdom,
…)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 35


Hierarchical Clustering

 Two main types of hierarchical clustering


– Agglomerative:
 Start with the points as individual clusters
 At each step, merge the closest pair of clusters until only one cluster
(or k clusters) left

– Divisive:
 Start with one, all-inclusive cluster
 At each step, split a cluster until each cluster contains only one point (or
there are k clusters)

 Traditional hierarchical algorithms use a similarity or


distance matrix
– Merge or split one cluster at a time

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 36


Agglomerative Clustering
Algorithm
 More popular hierarchical clustering technique
 Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains

 Key operation is the computation of the proximity of


two clusters
– Different approaches to defining the distance between
clusters distinguish the different algorithms

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 37
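The six steps above translate almost line for line into code. This illustrative sketch uses single link (MIN) as the proximity between clusters; any of the schemes on the following slides could be substituted.

import numpy as np

def agglomerative(D, num_clusters=1):
    """Merge the two closest clusters until num_clusters remain.
    D is an (n, n) distance matrix; single link (MIN) defines cluster-to-cluster distance."""
    D = np.asarray(D, dtype=float)
    clusters = [[i] for i in range(len(D))]               # step 2: each point starts as its own cluster
    merges = []
    while len(clusters) > num_clusters:                   # steps 3-6
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])   # MIN proximity
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))       # record the merge (useful for a dendrogram)
        clusters[a] = clusters[a] + clusters[b]            # step 4: merge the two closest clusters
        del clusters[b]                                    # step 5: proximities recomputed on the fly
    return clusters, merges

In practice the proximity matrix is updated incrementally rather than recomputed, and library routines such as scipy.cluster.hierarchy.linkage do exactly that.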


Starting Situation

 Start with clusters of individual points and a


proximity matrix

(Figure: points p1-p12, each initially its own cluster, and the empty proximity matrix over p1, p2, p3, p4, p5, ...)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 38


Intermediate Situation

 After some merging steps, we have some clusters


(Figure: five clusters C1-C5 over the points, with the proximity matrix now defined between clusters.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 39


Intermediate Situation

 We want to merge the two closest clusters (C2 and C5) and
update the proximity matrix.

(Figure: clusters C1-C5 and their proximity matrix just before C2 and C5 are merged.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 40


After Merging

 The question is “How do we update the proximity matrix?”


(Figure: after C2 and C5 are merged into "C2 U C5", the entries of its row and column in the proximity matrix are unknown ("?") and must be recomputed.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 41


How to Define Inter-Cluster Similarity
(Figure: two clusters of points and their proximity matrix; how should the similarity between the clusters be defined?)

 MIN
 MAX
 Group Average
 Distance Between Centroids
 Other methods driven by an objective function
– Ward’s Method uses squared error

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 42




Cluster Similarity: MIN or Single
Link

 Similarity of two clusters is based on the two


most similar (closest) points in the different
clusters
– Determined by one pair of points, i.e., by one link in
the proximity graph.

I1 I2 I3 I4 I5
I1 1.00 0.90 0.10 0.65 0.20
I2 0.90 1.00 0.70 0.60 0.50
I3 0.10 0.70 1.00 0.40 0.30
I4 0.65 0.60 0.40 1.00 0.80
I5 0.20 0.50 0.30 0.80 1.00

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 47
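As a hedged illustration (not from the slides), the similarity matrix above can be converted to distances, e.g. as 1 − similarity, and handed to SciPy's single-link routine:

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

S = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
              [0.90, 1.00, 0.70, 0.60, 0.50],
              [0.10, 0.70, 1.00, 0.40, 0.30],
              [0.65, 0.60, 0.40, 1.00, 0.80],
              [0.20, 0.50, 0.30, 0.80, 1.00]])

D = 1.0 - S                                                  # turn similarities into distances
Z = linkage(squareform(D, checks=False), method='single')    # MIN / single link
print(Z)   # each row lists the two clusters merged and the distance at which they merge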


Single-link worked example. Distance matrix after p3 and p6 have been merged:

        p1     p2     p3,p6  p4     p5
p1      0      0.24   0.22   0.37   0.34
p2      0.24   0      0.15   0.2    0.14
p3,p6   0.22   0.15   0      0.15   0.28
p4      0.37   0.2    0.15   0      0.29
p5      0.34   0.14   0.28   0.29   0

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 49


        p1     p2     p3,p6  p4     p5
p1      0      0.24   0.22   0.37   0.34
p2      0.24   0      0.15   0.2    0.14
p3,p6   0.22   0.15   0      0.15   0.28
p4      0.37   0.2    0.15   0      0.29
p5      0.34   0.14   0.28   0.29   0

Merge p2 and p5 (smallest remaining distance, 0.14). Each single-link distance to the new cluster is the minimum of the corresponding old entries:

        p1     p2,p5  p3,p6  p4
p1      0      0.24   0.22   0.37
p2,p5   0.24   0      0.15   0.2
p3,p6   0.22   0.15   0      0.15
p4      0.37   0.2    0.15   0

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 50


        p1     p2,p5  p3,p6  p4
p1      0      0.24   0.22   0.37
p2,p5   0.24   0      0.15   0.2
p3,p6   0.22   0.15   0      0.15
p4      0.37   0.2    0.15   0

Merge {p2,p5} and {p3,p6} (distance 0.15):

              p1     p2,p3,p5,p6  p4
p1            0      0.22         0.37
p2,p3,p5,p6   0.22   0            0.15
p4            0.37   0.15         0

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 51


              p1     p2,p3,p5,p6  p4
p1            0      0.22         0.37
p2,p3,p5,p6   0.22   0            0.15
p4            0.37   0.15         0

Merge p4 into {p2,p3,p5,p6} (distance 0.15):

                 p1     p2,p3,p4,p5,p6
p1               0      0.22
p2,p3,p4,p5,p6   0.22   0

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 52


Hierarchical Clustering: MIN

(Figure: Nested Clusters and Dendrogram produced by MIN / single link on the six example points.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 53


Strength of MIN

Original Points Two Clusters

• Can handle non-elliptical shapes

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 54


Limitations of MIN

Original Points Two Clusters

• Sensitive to noise and outliers

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 55


Cluster Similarity: MAX or Complete
Linkage

 Similarity of two clusters is based on the two least


similar (most distant) points in the different
clusters
– Determined by all pairs of points in the two clusters

I1 I2 I3 I4 I5
I1 1.00 0.90 0.10 0.65 0.20
I2 0.90 1.00 0.70 0.60 0.50
I3 0.10 0.70 1.00 0.40 0.30
I4 0.65 0.60 0.40 1.00 0.80
I5 0.20 0.50 0.30 0.80 1.00

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 56


Hierarchical Clustering: MAX

(Figure: Nested Clusters and Dendrogram produced by MAX / complete link on the same six points.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 58


Strength of MAX

Original Points Two Clusters

• Less susceptible to noise and outliers

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 59


Limitations of MAX

Original Points Two Clusters

•Tends to break large clusters


•Biased towards globular clusters
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 60
Cluster Similarity: Group Average

 Proximity of two clusters is the average of pairwise proximity


between points in the two clusters.
proximity(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i} \sum_{p_j \in Cluster_j} proximity(p_i, p_j)}{|Cluster_i| \cdot |Cluster_j|}

 Need to use average connectivity for scalability since total


proximity favors large clusters
I1 I2 I3 I4 I5
I1 1.00 0.90 0.10 0.65 0.20
I2 0.90 1.00 0.70 0.60 0.50
I3 0.10 0.70 1.00 0.40 0.30
I4 0.65 0.60 0.40 1.00 0.80
I5 0.20 0.50 0.30 0.80 1.00
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 61
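The group-average formula translates directly into code. An illustrative sketch that averages all pairwise proximities between two clusters, applied to the I1-I5 matrix above:

import numpy as np

def group_average_proximity(P, cluster_i, cluster_j):
    """Average of all pairwise proximities between two clusters, given a full proximity matrix P."""
    block = P[np.ix_(cluster_i, cluster_j)]              # every proximity(p_i, p_j) pair
    return block.sum() / (len(cluster_i) * len(cluster_j))

# Using the I1-I5 similarity matrix above (0-based: I1=0, ..., I5=4)
P = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
              [0.90, 1.00, 0.70, 0.60, 0.50],
              [0.10, 0.70, 1.00, 0.40, 0.30],
              [0.65, 0.60, 0.40, 1.00, 0.80],
              [0.20, 0.50, 0.30, 0.80, 1.00]])
print(group_average_proximity(P, [0, 1], [3, 4]))        # (0.65 + 0.20 + 0.60 + 0.50) / 4 = 0.4875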
Hierarchical Clustering: Group
Average

Nested Clusters Dendrogram

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 62


Hierarchical Clustering: Group
Average

 Compromise between Single and Complete


Link

 Strengths
– Less susceptible to noise and outliers

 Limitations
– Biased towards globular clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 63


Cluster Similarity: Ward’s Method

 Similarity of two clusters is based on the increase


in squared error when two clusters are merged
– Similar to group average if distance between points is
distance squared

 Less susceptible to noise and outliers

 Biased towards globular clusters

 Hierarchical analogue of K-means


– Can be used to initialize K-means

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 64


Hierarchical Clustering: Comparison

(Figure: the clusterings produced by MIN, MAX, Group Average, and Ward's Method on the same six points, shown side by side.)
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 65
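For a side-by-side comparison like the one in the figure, SciPy exposes all four schemes through the same interface. This sketch uses randomly generated points; the data, the number of clusters, and the cut level are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                                # illustrative 2-D points

for method in ['single', 'complete', 'average', 'ward']:    # MIN, MAX, Group Average, Ward's Method
    Z = linkage(X, method=method)                           # distances computed from the points
    labels = fcluster(Z, t=3, criterion='maxclust')         # cut the dendrogram into 3 clusters
    print(method, labels)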


MST: Divisive Hierarchical
Clustering

 Build MST (Minimum Spanning Tree)


– Start with a tree that consists of any point
– In successive steps, look for the closest pair of points (p, q)
such that one point (p) is in the current tree but the other (q) is
not
– Add q to the tree and put an edge between p and q

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 66
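A minimal sketch of the MST construction just described (Prim-style: repeatedly attach the closest point outside the current tree); the divisive step of removing the longest remaining edges is noted in a comment. Illustrative only.

import numpy as np

def build_mst(D):
    """Prim-style MST from a distance matrix: repeatedly attach the closest outside point."""
    n = len(D)
    in_tree = {0}                                           # start the tree with an arbitrary point
    edges = []
    while len(in_tree) < n:
        best = None
        for p in in_tree:
            for q in range(n):
                if q not in in_tree and (best is None or D[p][q] < best[0]):
                    best = (D[p][q], p, q)                  # closest (in-tree, outside) pair so far
        d, p, q = best
        edges.append((p, q, d))
        in_tree.add(q)
    return edges

# Divisive clustering: repeatedly remove the remaining MST edge with the largest distance;
# the connected components that result at each step are the clusters.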


MST: Divisive Hierarchical
Clustering

 Use MST for constructing hierarchy of clusters

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 67


DBSCAN

 DBSCAN is a density-based algorithm.


– Density = number of points within a specified radius (Eps)

– A point is a core point if it has more than a specified number


of points (MinPts) within Eps
 These are points that are in the interior of a cluster

– A border point has fewer than MinPts within Eps, but is in


the neighborhood of a core point

– A noise point is any point that is not a core point or a border


point.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 68


DBSCAN: Core, Border, and Noise Points

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 69


DBSCAN Algorithm

 Eliminate noise points


 Perform clustering on the remaining points

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 70
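Putting the definitions together, here is an illustrative DBSCAN sketch: compute the Eps-neighborhoods, mark core points, then grow each cluster outward from an unassigned core point; points never reached remain noise. Details such as counting a point in its own neighborhood are assumptions, not taken from the slides.

import numpy as np

def dbscan(X, eps, min_pts):
    """Return a cluster label per point (0, 1, 2, ...) or -1 for noise."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.where(dists[i] <= eps)[0] for i in range(n)]   # Eps-neighborhoods (point counts itself)
    core = [len(nb) >= min_pts for nb in neighbors]                # core points have at least MinPts neighbors
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cluster                       # start a new cluster from an unassigned core point
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster               # core and border points join the cluster
                if core[j]:
                    frontier.extend(neighbors[j])  # only core points expand the cluster further
        cluster += 1
    return labels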


DBSCAN: Core, Border and Noise Points

(Figure: original points and their point types (core, border, and noise) with Eps = 10, MinPts = 4.)


© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 71
When DBSCAN Works Well

Original Points Clusters

• Resistant to Noise
• Can handle clusters of different shapes and sizes

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 72
