Chapter 4

Cluster analysis is a data mining technique used to group similar objects while maximizing inter-cluster distances and minimizing intra-cluster distances. It has various applications, including document grouping, gene functionality analysis, and stock price clustering. Different clustering methods include partitional and hierarchical clustering, with algorithms like K-means being commonly used, though they have limitations regarding cluster size, density, and shape.


Data Mining

Cluster Analysis: Basic Concepts and Algorithms

Lecture Notes for Chapter 4

Introduction to Data Mining, 2nd Edition


by
Tan, Steinbach, Karpatne, Kumar
What is Cluster Analysis?

 Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
– Intra-cluster distances are minimized
– Inter-cluster distances are maximized



Applications of Cluster Analysis

 Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations

Discovered Clusters and their Industry Groups (stock-price example):
1. Technology1-DOWN: Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN
2. Technology2-DOWN: Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN
3. Financial-DOWN: Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN
4. Oil-UP: Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP

 Summarization
– Reduce the size of large data sets

(Figure: clustering precipitation in Australia)


What is not Cluster Analysis?
 Simple segmentation
– Dividing students into different registration groups
alphabetically, by last name

 Results of a query
– Groupings are a result of an external specification
– Clustering is a grouping of objects based on the data

 Supervised classification
– Have class label information

 Association Analysis
– Local vs. global connections



Notion of a Cluster can be Ambiguous

(Figure: the same set of points can plausibly be grouped into two clusters, four clusters, or six clusters)



Types of Clusterings

 A clustering is a set of clusters


 Important distinction between hierarchical and
partitional sets of clusters
 Partitional Clustering
– A division of data objects into non-overlapping subsets
(clusters) such that each data object is in exactly one subset

 Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree



Partitional Clustering

(Figures: original points and a partitional clustering of them)



Hierarchical Clustering

(Figures: a traditional hierarchical clustering of points p1–p4 with its dendrogram, and a non-traditional hierarchical clustering of the same points with its dendrogram)



Types of Clusters

 Well-separated clusters

 Center-based clusters

 Contiguous clusters

 Density-based clusters

 Property or Conceptual

 Described by an Objective Function



Types of Clusters: Well-Separated

 Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.

(Figure: 3 well-separated clusters)



Types of Clusters: Center-Based

 Center-based
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the “center” of a cluster, than to the
center of any other cluster
– The center of a cluster is often a centroid, the average of all
the points in the cluster, or a medoid, the most “representative”
point of a cluster

(Figure: 4 center-based clusters)



Types of Clusters: Contiguity-Based

 Contiguous Cluster (Nearest Neighbor or Transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

(Figure: 8 contiguous clusters)



Types of Clusters: Density-Based

 Density-based
– A cluster is a dense region of points that is separated from other regions of high density by low-density regions.
– Used when the clusters are irregular or intertwined, and when noise and outliers are present.

(Figure: 6 density-based clusters)



Types of Clusters: Conceptual Clusters

 Shared Property or Conceptual Clusters


– Finds clusters that share some common property or represent
a particular concept.
(Figure: 2 overlapping circles)



Types of Clusters: Objective Function

 Clusters Defined by an Objective Function


– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters and evaluate the ‘goodness’ of each potential set of clusters using the given objective function (NP-hard).
– Can have global or local objectives.
 Hierarchical clustering algorithms typically have local objectives
 Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the
data to a parameterized model.
 Parameters for the model are determined from the data.
 Mixture models assume that the data is a ‘mixture’ of a number of statistical distributions.



Characteristics of the Input Data Are Important

 Type of proximity or density measure


– Central to clustering
– Depends on data and application

 Data characteristics that affect proximity and/or density are


– Dimensionality
 Sparseness
– Attribute type
– Special relationships in the data
 For example, autocorrelation
– Distribution of the data

 Noise and Outliers


– Often interfere with the operation of the clustering algorithm



Clustering Algorithms

 K-means

 Hierarchical clustering



K-means Clustering

 Partitional clustering approach


 Number of clusters, K, must be specified
 Each cluster is associated with a centroid (center point)
 Each point is assigned to the cluster with the closest
centroid
 The basic algorithm is very simple

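The basic algorithm can be sketched in a few lines. The following is a minimal illustration in Python/NumPy, not the book's code; the random initialization, the fixed iteration cap, and the empty-cluster handling are simplifying assumptions:

import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    """Basic K-means: assign points to the nearest centroid, recompute, repeat."""
    rng = np.random.default_rng(seed)
    # Select K data points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to the cluster with the closest centroid (Euclidean).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of the points assigned to it.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
        # Stop when the centroids no longer change.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids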


Example of K-means Clustering

(Figures: K-means on a 2-D data set; iterations 1 through 6 are plotted, showing points being reassigned as the centroids move)


K-means Clustering – Details
 Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
 The centroid is (typically) the mean of the points in the
cluster.
 ‘Closeness’ is measured by Euclidean distance, cosine
similarity, correlation, etc.
 K-means will converge for common similarity measures
mentioned above.
 Most of the convergence happens in the first few
iterations.
– Often the stopping condition is changed to ‘Until relatively few
points change clusters’
 Complexity is O(n × K × I × d)
– n = number of points, K = number of clusters,
I = number of iterations, d = number of attributes



Evaluating K-means Clusters

 Most common measure is Sum of Squared Error (SSE)


– For each point, the error is the distance to the nearest cluster center
– To get SSE, we square these errors and sum them:

SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)

– x is a data point in cluster C_i and m_i is the representative point for cluster C_i
 One can show that m_i corresponds to the center (mean) of the cluster
– Given two sets of clusters, we prefer the one with the smallest
error
– One easy way to reduce SSE is to increase K, the number of
clusters
 A good clustering with smaller K can have a lower SSE than a poor
clustering with higher K

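Given the labels and centroids returned by the illustrative kmeans() sketch above, the SSE formula can be computed directly (again a sketch, not the book's code):

def sse(points, labels, centroids):
    # Error of each point = distance to its assigned centroid; square and sum.
    diffs = points - centroids[labels]
    return float(np.sum(diffs ** 2))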


Two different K-means Clusterings

(Figures: the same original points clustered two ways by K-means, one an optimal clustering and the other a sub-optimal clustering)


Limitations of K-means

 K-means has problems when clusters have differing
– Sizes
– Densities
– Non-globular shapes

 K-means has problems when the data contains outliers.



Limitations of K-means: Differing Sizes

(Figures: original points vs. K-means with 3 clusters)


Limitations of K-means: Differing Density

(Figures: original points vs. K-means with 3 clusters)


Limitations of K-means: Non-globular Shapes

(Figures: original points vs. K-means with 2 clusters)


Overcoming K-means Limitations

(Figures: original points vs. K-means clusters)

One solution is to use many clusters: find parts of clusters, then put them back together.
Overcoming K-means Limitations

(Figures: original points vs. K-means clusters)


Overcoming K-means Limitations

(Figures: original points vs. K-means clusters)


Importance of Choosing Initial Centroids

(Figures: starting from one choice of initial centroids, iterations 1 through 6 of K-means are plotted)


Importance of Choosing Initial Centroids …

(Figures: starting from a different choice of initial centroids, iterations 1 through 5 of K-means are plotted; this initialization converges to a sub-optimal final clustering)


Solutions to Initial Centroids Problem

 Multiple runs
– Helps, but probability is not on your side (see the sketch after this list)
 Sample and use hierarchical clustering to determine initial centroids
 Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated
 Postprocessing
 Generate a larger number of clusters and then perform a hierarchical clustering
 Bisecting K-means
– Not as susceptible to initialization issues
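A minimal sketch of the multiple-runs idea, reusing the illustrative kmeans() and sse() functions defined earlier (assumed helpers, not the book's code): run K-means from several random initializations and keep the result with the lowest SSE.

def kmeans_multiple_runs(points, k, n_runs=10):
    best = None
    for seed in range(n_runs):
        labels, centroids = kmeans(points, k, seed=seed)
        error = sse(points, labels, centroids)
        # Keep the clustering with the smallest SSE seen so far.
        if best is None or error < best[0]:
            best = (error, labels, centroids)
    return best  # (lowest SSE, its labels, its centroids)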
Hierarchical Clustering

 Produces a set of nested clusters organized as a hierarchical tree
 Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits

(Figure: a nested clustering of six points and the corresponding dendrogram; the dendrogram's height axis records the distance at which each merge occurred)

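For a concrete illustration, SciPy's scipy.cluster.hierarchy module can build and plot a dendrogram; the six 2-D points below are made-up example data, not the figure's coordinates:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

points = np.array([[0.40, 0.53], [0.22, 0.38], [0.35, 0.32],
                   [0.26, 0.19], [0.08, 0.41], [0.45, 0.30]])
Z = linkage(points, method='single')  # records the sequence of merges
dendrogram(Z)                         # heights = distances at which merges occur
plt.show()
# 'Cutting' the tree at a chosen level yields any desired number of clusters:
labels = fcluster(Z, t=2, criterion='maxclust')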


Strengths of Hierarchical Clustering

 Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level

 They may correspond to meaningful taxonomies
– Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)


Hierarchical Clustering

 Two main types of hierarchical clustering


– Agglomerative:
 Start with the points as individual clusters
 At each step, merge the closest pair of clusters until only one cluster
(or k clusters) left

– Divisive:
 Start with one, all-inclusive cluster
 At each step, split a cluster until each cluster contains an individual
point (or there are k clusters)

 Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time


Agglomerative Clustering Algorithm

 Most popular hierarchical clustering technique
 Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains

 Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms
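A direct, unoptimized sketch of the basic algorithm above, using MIN (single link) as the proximity between clusters. This is illustrative only: it recomputes distances on every pass, whereas practical implementations keep and incrementally update an explicit proximity matrix.

import numpy as np

def agglomerative_min(points):
    # Steps 1-2: each data point starts as its own cluster.
    clusters = [[i] for i in range(len(points))]
    merges = []
    # Steps 3-6: repeatedly merge the two closest clusters until one remains.
    while len(clusters) > 1:
        best = (np.inf, 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # MIN proximity: distance of the closest pair across the clusters.
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((list(clusters[a]), list(clusters[b]), d))  # Step 4
        clusters[a] += clusters[b]   # Step 5: the merged cluster replaces the pair
        del clusters[b]
    return merges                    # the sequence of merges, i.e., the dendrogram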


Starting Situation

 Start with clusters of individual points and a proximity matrix

(Figure: individual data points and the initial proximity matrix, indexed by p1, p2, p3, p4, p5, …)


Intermediate Situation

 After some merging steps, we have some clusters

(Figure: clusters C1–C5 over the points, and the proximity matrix indexed by C1…C5)


Intermediate Situation

 We want to merge the two closest clusters (C2 and C5) and update the proximity matrix

(Figure: clusters C1–C5, with C2 and C5 about to be merged, and the proximity matrix indexed by C1…C5)


After Merging

 The question is “How do we update the proximity matrix?”

(Figure: the merged cluster C2 ∪ C5 alongside C1, C3, and C4; the proximity-matrix entries involving C2 ∪ C5 are unknown, marked “?”)


How to Define Inter-Cluster Distance

 MIN
 MAX
 Group Average
 Distance Between Centroids
 Other methods driven by an objective function
– Ward’s Method uses squared error

(Figure: two clusters of points p1…p5 and their proximity matrix; which entries define the similarity of the two clusters?)
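To make the four definitions concrete, the sketch below computes each proximity between two small, made-up clusters (illustrative data, not the slide's points):

import numpy as np

A = np.array([[0.0, 0.0], [0.0, 1.0]])   # cluster 1
B = np.array([[3.0, 0.0], [4.0, 1.0]])   # cluster 2
pair_dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all |A|x|B| pairs

d_min = pair_dists.min()   # MIN (single link): closest pair across clusters
d_max = pair_dists.max()   # MAX (complete link): most distant pair
d_avg = pair_dists.mean()  # Group Average: mean of all pairwise distances
d_cen = np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))  # distance between centroids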


How to Define Inter-Cluster Similarity

(Figures: four slides illustrate, on the same pair of clusters, each choice of inter-cluster proximity in turn: MIN, MAX, Group Average, and Distance Between Centroids)


MIN or Single Link

 Proximity of two clusters is based on the two closest points in the different clusters
– Determined by one pair of points, i.e., by one link in the proximity graph

(Figure: example point set and its distance matrix)



Hierarchical Clustering: MIN

(Figures: nested clusters and the dendrogram produced by single-link clustering on six points)


Strength of MIN

(Figures: original points vs. the six clusters found by MIN)

• Can handle non-elliptical shapes



Limitations of MIN

(Figures: original points vs. two-cluster and three-cluster MIN results)

• Sensitive to noise and outliers


MAX or Complete Linkage

 Proximity of two clusters is based on the two most distant points in the different clusters
– Determined by all pairs of points in the two clusters

(Figure: example point set and its distance matrix)



Hierarchical Clustering: MAX

(Figures: nested clusters and the dendrogram produced by complete-link clustering on the same six points)


Strength of MAX

(Figures: original points vs. the two clusters found by MAX)

• Less susceptible to noise and outliers


Limitations of MAX

(Figures: original points vs. the two clusters found by MAX)

• Tends to break large clusters
• Biased towards globular clusters


Group Average

 Proximity of two clusters is the average of pairwise proximity between points in the two clusters:

\mathrm{proximity}(\mathrm{Cluster}_i, \mathrm{Cluster}_j) = \frac{\sum_{p_i \in \mathrm{Cluster}_i,\ p_j \in \mathrm{Cluster}_j} \mathrm{proximity}(p_i, p_j)}{|\mathrm{Cluster}_i| \cdot |\mathrm{Cluster}_j|}

 Need to use average connectivity for scalability, since total proximity favors large clusters

(Figure: example point set and its distance matrix)
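As a quick numeric check of the formula, reusing A, B, pair_dists, and d_avg from the earlier inter-cluster distance sketch (assumed example data):

total = pair_dists.sum()               # Σ proximity(p_i, p_j) over all cross-cluster pairs
group_avg = total / (len(A) * len(B))  # divide by |Cluster_i| · |Cluster_j|
assert np.isclose(group_avg, d_avg)    # equals the mean of all pairwise distances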


Hierarchical Clustering: Group Average

(Figures: nested clusters and the dendrogram produced by group-average clustering on the same six points)


Hierarchical Clustering: Group Average

 Compromise between Single and Complete Link

 Strengths
– Less susceptible to noise and outliers

 Limitations
– Biased towards globular clusters


Hierarchical Clustering: Problems and Limitations

 Once a decision is made to combine two clusters, it cannot be undone

 No global objective function is directly minimized

 Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling clusters of different sizes and non-globular shapes
– Breaking large clusters
