
Hierarchical Clustering

Mehta Ishani
130040701003
What is Clustering in Data Mining?

 • Clustering is the process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters.

 • Cluster:
   • a collection of data objects that are "similar" to one another and thus can be treated collectively as one group
   • but, as a collection, they are sufficiently different from other groups

Distance or Similarity Measures

 • Measuring Distance
   • In order to group similar items, we need a way to measure the distance between objects (e.g., records)
   • Note: distance = inverse of similarity
   • Often based on the representation of objects as "feature vectors"

An Employee DB:

  ID  Gender  Age  Salary
   1  F        27   19,000
   2  M        51   64,000
   3  M        52  100,000
   4  F        33   55,000
   5  M        45   45,000

Term Frequencies for Documents:

        T1  T2  T3  T4  T5  T6
  Doc1   0   4   0   0   0   2
  Doc2   3   1   4   3   1   2
  Doc3   3   0   0   0   3   0
  Doc4   0   1   0   3   0   0
  Doc5   2   2   2   3   1   4

Which objects are more similar?


Distance or Similarity Measures

 • Common Distance Measures, for X = (x1, x2, ..., xn) and Y = (y1, y2, ..., yn):

 • Manhattan distance:
     dist(X, Y) = |x1 - y1| + |x2 - y2| + ... + |xn - yn|

 • Euclidean distance:
     dist(X, Y) = sqrt( (x1 - y1)^2 + ... + (xn - yn)^2 )

   These distances can be normalized to make values fall between 0 and 1.

 • Cosine similarity:
     dist(X, Y) = 1 - sim(X, Y), where
     sim(X, Y) = Σi (xi · yi) / ( sqrt(Σi xi^2) · sqrt(Σi yi^2) )

   (These three measures are sketched in code below.)

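A minimal Python sketch of the three measures above, using only the standard library (the function names are mine, not from the slides):

```python
import math

def manhattan(x, y):
    # Sum of absolute coordinate differences.
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    # Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine_similarity(x, y):
    # Dot product divided by the product of the vector norms.
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

def cosine_distance(x, y):
    return 1 - cosine_similarity(x, y)

# Term-frequency vectors for Doc1 and Doc2 from the table above.
doc1 = [0, 4, 0, 0, 0, 2]
doc2 = [3, 1, 4, 3, 1, 2]
print(manhattan(doc1, doc2), euclidean(doc1, doc2), cosine_distance(doc1, doc2))
```
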
Distance or Similarity Measures

 • Weighting Attributes
   • in some cases we want some attributes to count more than others
   • associate a weight with each attribute when calculating distance, e.g.,
       dist(X, Y) = sqrt( w1·(x1 - y1)^2 + ... + wn·(xn - yn)^2 )
     (a weighted-distance sketch follows below)

 • Nominal (categorical) Attributes
   • can use simple matching: distance = 0 if the values match, 1 otherwise
   • or convert each nominal attribute to a set of binary attributes, then use the usual distance measure
   • if all attributes are nominal, we can normalize by dividing the number of matches by the total number of attributes

 • Normalization
   • we want values to fall between 0 and 1:
       x'i = (xi - min xi) / (max xi - min xi)
   • other variations are possible
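
A possible sketch of the weighted distance and the simple-matching rule for nominal attributes (the names and example weights are illustrative assumptions, not from the slides):

```python
import math

def weighted_euclidean(x, y, weights):
    # Each squared coordinate difference is scaled by its attribute weight.
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))

def nominal_mismatch(a, b):
    # Simple matching for categorical values: 0 if equal, 1 otherwise.
    return 0 if a == b else 1

# Example: the first attribute counts half as much, the second twice as much.
print(weighted_euclidean([1.0, 5.0], [2.0, 3.0], weights=[0.5, 2.0]))
print(nominal_mismatch("F", "M"))
```
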
Distance or Similarity Measures

 • Example
   • max distance for salary: 100,000 - 19,000 = 81,000
   • max distance for age: 52 - 27 = 25
   • normalize with x'i = (xi - min xi) / (max xi - min xi)

   Original:                        Normalized:
   ID  Gender  Age  Salary          ID  Gender  Age   Salary
    1  F        27   19,000          1  1       0.00  0.00
    2  M        51   64,000          2  0       0.96  0.56
    3  M        52  100,000          3  0       1.00  1.00
    4  F        33   55,000          4  1       0.24  0.44
    5  M        45   45,000          5  0       0.72  0.32

 • dist(ID2, ID3) = SQRT( 0 + (0.04)^2 + (0.44)^2 ) = 0.44
 • dist(ID2, ID4) = SQRT( 1 + (0.72)^2 + (0.12)^2 ) = 1.24

   (reproduced in the code sketch below)

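The normalized table and the two distances above can be reproduced with a short sketch (the helper names are mine; gender is treated as a single binary attribute, as on the slide):

```python
import math

records = {  # ID: (gender, age, salary)
    1: ("F", 27, 19000),
    2: ("M", 51, 64000),
    3: ("M", 52, 100000),
    4: ("F", 33, 55000),
    5: ("M", 45, 45000),
}

ages = [r[1] for r in records.values()]
salaries = [r[2] for r in records.values()]

def minmax(value, values):
    # x' = (x - min) / (max - min)
    return (value - min(values)) / (max(values) - min(values))

def as_vector(record):
    gender, age, salary = record
    # Gender becomes a binary attribute; age and salary are min-max normalized.
    return [1 if gender == "F" else 0, minmax(age, ages), minmax(salary, salaries)]

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

print(euclidean(as_vector(records[2]), as_vector(records[3])))  # ~0.45 (the slide's 0.44 uses the rounded table values)
print(euclidean(as_vector(records[2]), as_vector(records[4])))  # ~1.24
```
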
Domain Specific Distance Functions

 • For some data sets, we may need to use specialized functions
   • we may want a single attribute or a selected group of attributes to be used in the computation of distance - the same problem as "feature selection"
   • we may want to use special properties of one or more attributes in the data

   Example: Zip Codes (sketched in code below)
     distzip(A, B) = 0,   if the zip codes are identical
     distzip(A, B) = 0.1, if the first 3 digits are identical
     distzip(A, B) = 0.5, if the first digit is identical
     distzip(A, B) = 1,   if the first digits are different

   Example: Customer Solicitation
     distsolicit(A, B) = 0,   if both A and B responded
     distsolicit(A, B) = 0.1, if both A and B were chosen but did not respond
     distsolicit(A, B) = 0.5, if both A and B were chosen, but only one responded
     distsolicit(A, B) = 1,   if one was chosen, but the other was not

 • natural distance functions may exist in the data


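The zip-code rule above translates directly into a small comparison function; a sketch that assumes zip codes are given as strings (function name is mine):

```python
def zip_distance(zip_a, zip_b):
    # Domain-specific distance on zip codes, following the rule above.
    if zip_a == zip_b:
        return 0.0
    if zip_a[:3] == zip_b[:3]:
        return 0.1
    if zip_a[0] == zip_b[0]:
        return 0.5
    return 1.0

print(zip_distance("94305", "94304"))  # 0.1 (same first three digits)
print(zip_distance("94305", "10001"))  # 1.0 (different first digit)
```
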
Distance (Similarity) Matrix

 • Similarity (Distance) Matrix
   • based on the distance or similarity measure, we can construct a symmetric matrix of distance (or similarity) values
   • the (i, j) entry in the matrix is the distance (similarity) between items i and j:
       dij = similarity (or distance) of Di to Dj

          I1    I2   ...   In
     I1    -    d12  ...   d1n
     I2   d21    -   ...   d2n
     ...
     In   dn1   dn2  ...    -

   • Note that dij = dji (i.e., the matrix is symmetric), so we only need the lower triangle of the matrix.
   • The diagonal is all 1's (similarity) or all 0's (distance).
     (a construction sketch follows below)


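A sketch of how such a symmetric distance matrix could be built with plain nested lists (the helper names are mine):

```python
def distance_matrix(items, dist):
    # d[i][j] = dist(items[i], items[j]); d[i][j] == d[j][i] and d[i][i] == 0.
    n = len(items)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = dist(items[i], items[j])
    return d

# Example with the Manhattan distance on three of the document vectors above.
docs = [[0, 4, 0, 0, 0, 2], [3, 1, 4, 3, 1, 2], [3, 0, 0, 0, 3, 0]]
manhattan = lambda x, y: sum(abs(a - b) for a, b in zip(x, y))
for row in distance_matrix(docs, manhattan):
    print(row)
```
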
Example: Term Similarities in Documents

        T1  T2  T3  T4  T5  T6  T7  T8
  Doc1   0   4   0   0   0   2   1   3
  Doc2   3   1   4   3   1   2   0   1
  Doc3   3   0   0   0   3   0   3   0
  Doc4   0   1   0   3   0   0   2   0
  Doc5   2   2   2   3   1   4   0   2

 • Similarity between two terms is the dot product over the N documents:
     sim(Ti, Tj) = Σ_{k=1..N} (wik · wjk)
   where wik is the frequency of term Ti in document k.

  Term-Term Similarity Matrix:

        T1  T2  T3  T4  T5  T6  T7
  T2     7
  T3    16   8
  T4    15  12  18
  T5    14   3   6   6
  T6    14  18  16  18   6
  T7     9   6   0   6   9   2
  T8     7  17   8   9   3  16   3

   (computed in the sketch below)
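
The term-term values above are pairwise dot products over the document rows; a minimal sketch that reproduces them:

```python
# Rows are documents, columns are terms T1..T8 (the table on this slide).
doc_term = [
    [0, 4, 0, 0, 0, 2, 1, 3],
    [3, 1, 4, 3, 1, 2, 0, 1],
    [3, 0, 0, 0, 3, 0, 3, 0],
    [0, 1, 0, 3, 0, 0, 2, 0],
    [2, 2, 2, 3, 1, 4, 0, 2],
]

def term_similarity(i, j):
    # sim(Ti, Tj) = sum over documents of w_ik * w_jk (0-based term indices).
    return sum(doc[i] * doc[j] for doc in doc_term)

print(term_similarity(0, 2))  # T1 vs T3 -> 16, matching the matrix above
```
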
Similarity (Distance) Thresholds

 • A similarity (distance) threshold may be used to mark pairs that are "sufficiently" similar

        T1  T2  T3  T4  T5  T6  T7
  T2     7
  T3    16   8
  T4    15  12  18
  T5    14   3   6   6
  T6    14  18  16  18   6
  T7     9   6   0   6   9   2
  T8     7  17   8   9   3  16   3

  Using a threshold value of 10 in the previous example:

        T1  T2  T3  T4  T5  T6  T7
  T2     0
  T3     1   0
  T4     1   1   1
  T5     1   0   0   0
  T6     1   1   1   1   0
  T7     0   0   0   0   0   0
  T8     0   1   0   0   0   1   0

   (see the thresholding sketch below)

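Applying the threshold is a single pass over the matrix; a sketch using the lower-triangle values above (names are mine):

```python
def apply_threshold(sim_matrix, threshold=10):
    # Mark pairs whose similarity reaches the threshold with 1, the rest with 0.
    return [[1 if value >= threshold else 0 for value in row] for row in sim_matrix]

lower_triangle = [
    [7],
    [16, 8],
    [15, 12, 18],
    [14, 3, 6, 6],
    [14, 18, 16, 18, 6],
    [9, 6, 0, 6, 9, 2],
    [7, 17, 8, 9, 3, 16, 3],
]
for row in apply_threshold(lower_triangle):
    print(row)   # reproduces the 0/1 matrix shown on the slide
```
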
Graph Representation

 • The similarity matrix can be visualized as an undirected graph
   • each item is represented by a node, and edges represent the fact that two items are similar (a one in the similarity threshold matrix)

        T1  T2  T3  T4  T5  T6  T7
  T2     0
  T3     1   0
  T4     1   1   1
  T5     1   0   0   0
  T6     1   1   1   1   0
  T7     0   0   0   0   0   0
  T8     0   1   0   0   0   1   0

  [Figure: the corresponding graph over nodes T1-T8, with an edge for each 1 in the matrix, e.g., T1-T3, T1-T4, T1-T5, T1-T6, T3-T4, T2-T4, ...]

 • If no threshold is used, then the matrix can be represented as a weighted graph
   (an adjacency-list sketch follows below)

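
One way to turn the thresholded matrix into the undirected graph described above, using plain adjacency sets rather than a graph library (a sketch; the variable names are mine):

```python
terms = ["T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8"]

# Lower-triangle 0/1 matrix from the previous slide:
# binary[k] holds the comparisons of terms[k+1] with terms[0..k].
binary = [
    [0],
    [1, 0],
    [1, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0],
]

adjacency = {t: set() for t in terms}
for i, row in enumerate(binary, start=1):
    for j, flag in enumerate(row):
        if flag:  # a 1 means the two terms are similar -> add an edge
            adjacency[terms[i]].add(terms[j])
            adjacency[terms[j]].add(terms[i])

print(sorted(adjacency["T1"]))  # ['T3', 'T4', 'T5', 'T6']
```
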
Clustering Methodologies

 • Two general methodologies
   • Partitioning Based Algorithms
   • Hierarchical Algorithms

 • Partitioning Based
   • divide a set of N items into K clusters (top-down)

 • Hierarchical
   • agglomerative: pairs of items or clusters are successively linked to produce larger clusters
   • divisive: start with the whole set as a cluster and successively divide sets into smaller partitions

Hierarchical Clustering

 • Use the distance matrix as the clustering criterion.

  [Figure: agglomerative clustering (AGNES) reads left to right, Step 0 to Step 4: a and b merge into ab, d and e merge into de, c joins de to form cde, and ab and cde merge into abcde. Divisive clustering (DIANA) reads right to left, Step 4 to Step 0, splitting abcde back into the individual objects.]

AGNES (Agglomerative Nesting)

 • Introduced in Kaufmann and Rousseeuw (1990)
 • Uses the dissimilarity matrix
 • Merges the nodes that have the least dissimilarity
 • Continues in a non-descending fashion
 • Eventually all nodes belong to the same cluster

  [Figure: three scatter plots on a 0-10 grid showing the data points being grouped into progressively larger clusters as AGNES proceeds.]

Algorithmic steps for Agglomerative Hierarchical Clustering

Let X = {x1, x2, x3, ..., xn} be the set of data points.

(1) Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.

(2) Find the least-distance pair of clusters in the current clustering, say pair (r), (s), according to d[(r),(s)] = min d[(i),(j)], where the minimum is over all pairs of clusters in the current clustering.

(3) Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to L(m) = d[(r),(s)].

(4) Update the distance matrix, D, by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The distance between the new cluster, denoted (r,s), and an old cluster (k) is defined as: d[(k), (r,s)] = min(d[(k),(r)], d[(k),(s)]).

(5) If all the data points are in one cluster then stop, else repeat from step (2).

(A single-linkage sketch of these steps follows below.)
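
A compact, unoptimized sketch of these steps using single linkage on a precomputed distance matrix (the toy data and function name are my own illustration, not from the slides):

```python
def agnes_single_link(dist, n_clusters=1):
    # dist: symmetric matrix of pairwise distances between the data points.
    clusters = [[i] for i in range(len(dist))]   # step 1: every point is a cluster
    merges = []                                  # history of (level, merged cluster)
    while len(clusters) > n_clusters:
        # Step 2: find the least-distant pair of clusters (single linkage).
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        level, a, b = best
        # Steps 3-4: merge the pair and record the level L(m) = d[(r),(s)].
        merged = clusters[a] + clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
        merges.append((level, merged))
    return clusters, merges

toy = [[0, 1, 4, 5],
       [1, 0, 3, 6],
       [4, 3, 0, 2],
       [5, 6, 2, 0]]
print(agnes_single_link(toy))
```
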
A Dendrogram Shows How the Clusters are Merged Hierarchically

  [Figure: example dendrogram; the height of each join corresponds to the level L(m) at which the two clusters were merged.]

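
In practice the merge history is usually plotted with a library. Assuming SciPy and Matplotlib are available, a sketch along these lines draws a single-linkage dendrogram for the normalized employee records from the earlier example:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Normalized employee records from the earlier example: (gender, age, salary).
X = np.array([
    [1, 0.00, 0.00],
    [0, 0.96, 0.56],
    [0, 1.00, 1.00],
    [1, 0.24, 0.44],
    [0, 0.72, 0.32],
])

Z = linkage(X, method="single", metric="euclidean")   # merge history
dendrogram(Z, labels=["ID1", "ID2", "ID3", "ID4", "ID5"])
plt.show()
```
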
DIANA (Divisive Analysis)

 • Introduced in Kaufmann and Rousseeuw (1990)
 • Implemented in statistical analysis packages, e.g., Splus
 • Inverse order of AGNES
 • Eventually each node forms a cluster on its own

  [Figure: three scatter plots on a 0-10 grid showing one cluster being successively split into smaller clusters as DIANA proceeds.]

Algorithmic steps for Divisive Hierarchical Clustering

1. Start with one cluster that contains all samples.
2. Calculate the diameter of each cluster. The diameter is the maximal distance between samples in the cluster. Choose the cluster C having the maximal diameter of all clusters to split.
3. Find the most dissimilar sample x in cluster C. Let x depart from the original cluster C to form a new independent cluster N (now cluster C does not include sample x). Assign all remaining members of cluster C to MC.
4. Repeat step 5 until the members of clusters C and N do not change.
5. Calculate the similarity from each member of MC to clusters C and N, and let the member with the highest similarity in MC move to its more similar cluster, C or N. Update the members of C and N.
6. Repeat steps 2, 3, 4, 5 until the number of clusters becomes the number of samples or reaches the number specified by the user.

(A simplified one-split sketch follows below.)
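
A simplified sketch of a single split in the spirit of steps 2-5, moving points whose average distance to the splinter group N is smaller than to the rest of C (a toy illustration, not a full DIANA implementation; the names are mine):

```python
def diameter(cluster, dist):
    # Maximal pairwise distance within a cluster (0 for singletons).
    return max((dist[i][j] for i in cluster for j in cluster), default=0)

def split_once(clusters, dist):
    # Step 2: choose the cluster C with the largest diameter.
    c = max(clusters, key=lambda cl: diameter(cl, dist))
    if len(c) < 2:
        return clusters
    # Step 3: the most dissimilar sample starts the new cluster N.
    seed = max(c, key=lambda i: sum(dist[i][j] for j in c))
    new, old = [seed], [i for i in c if i != seed]
    # Steps 4-5: move points that are on average closer to N than to the rest of C.
    moved = True
    while moved:
        moved = False
        for i in list(old):
            d_old = sum(dist[i][j] for j in old if j != i) / max(len(old) - 1, 1)
            d_new = sum(dist[i][j] for j in new) / len(new)
            if d_new < d_old:
                old.remove(i)
                new.append(i)
                moved = True
    return [cl for cl in clusters if cl != c] + [old, new]

toy = [[0, 1, 4, 5],
       [1, 0, 3, 6],
       [4, 3, 0, 2],
       [5, 6, 2, 0]]
print(split_once([[0, 1, 2, 3]], toy))   # splits into [0, 1] and [3, 2]
```
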
Pros and Cons

Advantages
1) No a priori information about the number of clusters is required.
2) Easy to implement, and gives the best results in some cases.

Disadvantages
1) The algorithm can never undo what was done previously.
2) Time complexity of at least O(n^2 log n) is required, where n is the number of data points.
3) Depending on the distance measure chosen for merging, different algorithms can suffer from one or more of the following:
   i) sensitivity to noise and outliers
   ii) breaking large clusters
   iii) difficulty handling clusters of different sizes and convex shapes
4) No objective function is directly minimized.
5) It is sometimes difficult to identify the correct number of clusters from the dendrogram.

