03 Hierarchical Clustering

Hierarchical clustering is an algorithm that groups similar objects into clusters. It works by initially putting each object in its own cluster, then iteratively merging the closest pairs of clusters until all objects are in a single cluster. The distance between clusters is defined differently depending on the linkage method used, such as single, complete, or average linkage. Hierarchical clustering produces a dendrogram that shows the cluster mergers at each step. It can cluster objects based on either a distance matrix or raw data.

Hierarchical Clustering

Introduction: Hierarchical Clustering

• Hierarchical clustering is an algorithm that groups similar objects into groups called clusters.
• The endpoint is a set of clusters, where each cluster is distinct from every other cluster, and the objects within each cluster are broadly similar to each other.
• Hierarchical clustering can be performed with either a distance matrix or raw data.
• When raw data is provided, the software will automatically compute a distance matrix in the background (a minimal example follows below).
• It falls in the category of connectivity-based models.
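As a minimal sketch of that raw-data-to-distance-matrix step (assuming Python with NumPy/SciPy and made-up toy data; the slides do not name any particular software):

```python
# Hedged sketch: computing a distance matrix from raw data.
# NumPy/SciPy and the toy data below are assumptions for illustration only.
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy raw data: 4 objects described by 2 features each.
X = np.array([[1.0, 2.0],
              [1.5, 1.8],
              [8.0, 8.0],
              [9.0, 9.5]])

# pdist gives the condensed (upper-triangular) Euclidean distances;
# squareform expands them into the full N x N symmetric distance matrix.
D = squareform(pdist(X, metric="euclidean"))
print(np.round(D, 2))
```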

2
How it works
Given a set of N items to be clustered, and an N*N distance (or similarity) matrix, the
basic process of hierarchical clustering (defined by S.C. Johnson in 1967) is this:
1. Start by assigning each item to a cluster, so that if you have N items, you now
have N clusters, each containing just one item. Let the distances (similarities)
between the clusters be the same as the distances (similarities) between the
items they contain.
2. Find the closest (most similar) pair of clusters and merge them into a single
cluster, so that you now have one less cluster.
3. Compute distances (similarities) between the new cluster and each of the old
clusters.
4. Repeat steps 2 and 3 until all items are clustered into a single cluster of size N.
(A minimal Python sketch of these steps follows below.)
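As an illustration only, and not something given in the slides, the four steps can be written as a naive Python/NumPy loop; the `linkage_dist` argument is a placeholder for the choice discussed on the next slide:

```python
# Naive sketch of Johnson's agglomerative procedure (illustrative only).
# D is an N x N NumPy distance matrix; linkage_dist decides how step 3
# measures the distance between two clusters (min = single linkage).
import numpy as np

def agglomerative(D, linkage_dist=min):
    clusters = {i: [i] for i in range(len(D))}       # step 1: one item per cluster
    merges = []
    while len(clusters) > 1:
        keys = list(clusters)
        best = None
        for a in range(len(keys)):                   # step 2: closest pair of clusters
            for b in range(a + 1, len(keys)):
                r, s = keys[a], keys[b]
                d = linkage_dist(D[i, j] for i in clusters[r] for j in clusters[s])
                if best is None or d < best[0]:
                    best = (d, r, s)
        d, r, s = best
        clusters[r] = clusters[r] + clusters.pop(s)  # steps 3-4: merge and repeat
        merges.append((d, r, s))
    return merges                                    # one (distance, cluster, cluster) per merge
```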

3
Contd.
• Step 3 can be done in different ways, and this is what distinguishes single-linkage from complete-linkage and average-linkage clustering (a short code sketch of the three rules follows this list).
• In single-linkage clustering (also called the connectedness or minimum method), we consider
the distance between one cluster and another cluster to be equal to the shortest distance from
any member of one cluster to any member of the other cluster. If the data consist of
similarities, we consider the similarity between one cluster and another cluster to be equal to
the greatest similarity from any member of one cluster to any member of the other cluster.
• In complete-linkage clustering (also called the diameter or maximum method), we consider
the distance between one cluster and another cluster to be equal to the greatest distance
from any member of one cluster to any member of the other cluster.
• In average-linkage clustering, we consider the distance between one cluster and another
cluster to be equal to the average distance from any member of one cluster to any member of
the other cluster.
• A variation on average-linkage clustering is the UCLUS method of R. D'Andrade (1978), which uses the median distance; the median is much more robust to outliers than the average distance.
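A hedged sketch of these rules (not from the slides): each can be written as a tiny function over the member-to-member distances and passed as the `linkage_dist` argument of the loop sketched earlier. The median-based UCLUS variant is included for completeness.

```python
import statistics

# Each function receives an iterable of the pairwise distances between
# every member of one cluster and every member of the other cluster.
def single_link(dists):
    return min(dists)                 # shortest member-to-member distance

def complete_link(dists):
    return max(dists)                 # greatest member-to-member distance

def average_link(dists):
    return statistics.mean(dists)     # average member-to-member distance

def uclus_link(dists):
    return statistics.median(dists)   # D'Andrade's median-based variant
```

For example, `agglomerative(D, linkage_dist=complete_link)` would run the same loop with complete linkage instead of single linkage.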

4
Single-Linkage Clustering: The Algorithm
1. Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.

2. Find the least dissimilar pair of clusters in the current clustering, say pair (r), (s), according to

   d[(r),(s)] = min d[(i),(j)]

   where the minimum is taken over all pairs of clusters in the current clustering.

3. Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to

   L(m) = d[(r),(s)]

4. Update the proximity matrix, D, by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted (r,s), and an old cluster (k) is defined as

   d[(k),(r,s)] = min { d[(k),(r)], d[(k),(s)] }

5. If all objects are in one cluster, stop. Else, go to step 2. (A NumPy sketch of these steps follows below.)
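A compact NumPy sketch of these five steps (an illustration under assumptions the slides do not make, e.g. that cluster names are plain strings):

```python
import numpy as np

def single_linkage(D, labels):
    """Single-linkage clustering by repeatedly updating the proximity matrix."""
    D = np.asarray(D, dtype=float).copy()
    labels = list(labels)
    levels = []                                          # (m, merged cluster, L(m))
    m = 0
    while len(labels) > 1:
        np.fill_diagonal(D, np.inf)                      # ignore self-distances
        r, s = np.unravel_index(np.argmin(D), D.shape)   # step 2: closest pair (r), (s)
        m += 1                                           # step 3: next clustering
        merged = labels[r] + "/" + labels[s]
        levels.append((m, merged, D[r, s]))              # L(m) = d[(r),(s)]
        new_row = np.minimum(D[r], D[s])                 # step 4: min rule for (r,s)
        keep = [i for i in range(len(labels)) if i not in (r, s)]
        D = np.vstack([D[np.ix_(keep, keep)], new_row[keep]])          # drop rows/cols r, s
        D = np.column_stack([D, np.append(new_row[keep], 0.0)])        # add row/col for (r,s)
        labels = [labels[i] for i in keep] + [merged]
    return levels                                        # step 5: stop when one cluster remains
```

Running it on the distance matrix of the next slide reproduces the merge levels 138, 219, 255, 268 and 295 that the worked example arrives at below.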


5
Example

Input distance matrix (L = 0 for all the clusters):

        BA    FI    MI    NA    RM    TO
BA       0   662   877   255   412   996
FI     662     0   295   468   268   400
MI     877   295     0   754   564   138
NA     255   468   754     0   219   869
RM     412   268   564   219     0   669
TO     996   400   138   869   669     0

6
• The nearest pair of cities is MI and TO, at distance 138. These are merged into a single cluster called "MI/TO". The level of the new cluster is L(MI/TO) = 138 and the new sequence number is m = 1.
• Then we compute the distance from this new compound object to all other objects, using the single-linkage rule. For example, the distance from "MI/TO" to RM is min(d(MI,RM), d(TO,RM)) = min(564, 669) = 564, i.e. the distance from MI to RM, and so on.

BA FI MI/TO NA RM
BA 0 662 877 255 412
FI 662 0 295 468 268
MI/TO 877 295 0 754 564
NA 255 468 754 0 219
RM 412 268 564 219 0

7
Contd.
• min d(i,j) = d(NA,RM) = 219 => merge NA and RM into a new cluster called NA/RM
  L(NA/RM) = 219
  m = 2

         BA    FI  MI/TO  NA/RM
BA        0   662    877    255
FI      662     0    295    268
MI/TO   877   295      0    564
NA/RM   255   268    564      0

8
Contd.
• min d(i,j) = d(BA,NA/RM) = 255 => merge BA and NA/RM into a new cluster called BA/NA/RM
  L(BA/NA/RM) = 255
  m = 3

            BA/NA/RM    FI  MI/TO
BA/NA/RM           0   268    564
FI               268     0    295
MI/TO            564   295      0

9
Contd.
• min d(i,j) = d(BA/NA/RM,FI) = 268 => merge BA/NA/RM and FI into a new cluster called BA/FI/NA/RM
  L(BA/FI/NA/RM) = 268
  m = 4

               BA/FI/NA/RM  MI/TO
BA/FI/NA/RM              0    295
MI/TO                  295      0

10
• Finally, we merge the last two clusters at level 295.
• The process is summarized by the following hierarchical tree:

(Dendrogram: MI and TO merge at level 138, NA and RM at 219, BA joins NA/RM at 255, FI joins at 268, and the final merge takes place at level 295.)
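The worked example can also be reproduced with SciPy's hierarchical-clustering routines; SciPy and matplotlib are assumptions here, since the slides never name a library:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

cities = ["BA", "FI", "MI", "NA", "RM", "TO"]
D = np.array([
    [  0, 662, 877, 255, 412, 996],
    [662,   0, 295, 468, 268, 400],
    [877, 295,   0, 754, 564, 138],
    [255, 468, 754,   0, 219, 869],
    [412, 268, 564, 219,   0, 669],
    [996, 400, 138, 869, 669,   0],
])

# linkage() expects the condensed form of the distance matrix.
Z = linkage(squareform(D), method="single")
print(Z[:, 2])                  # merge levels: [138. 219. 255. 268. 295.]
dendrogram(Z, labels=cities)    # draws the tree summarized above
plt.show()
```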

11
Difference between k-means and hierarchical
• Hierarchical clustering can't handle big data well, but K-Means clustering can. This is because the time complexity of K-Means is linear, i.e. O(n), while that of hierarchical clustering is at least quadratic, i.e. O(n²).
• In K-Means clustering, since we start with a random choice of initial centroids, the results produced by running the algorithm multiple times might differ, whereas the results of hierarchical clustering are reproducible.
• K-Means is found to work well when the shape of the clusters is hyper-spherical (like a circle in 2D or a sphere in 3D).
• K-Means clustering requires prior knowledge of K, i.e. the number of clusters you want to divide your data into. In hierarchical clustering, by contrast, you can stop at whatever number of clusters you find appropriate by interpreting the dendrogram (see the sketch below).
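As a hedged illustration of that last point (again assuming SciPy), a fitted linkage matrix such as the `Z` computed for the city example above can be cut at whatever number of clusters looks appropriate:

```python
from scipy.cluster.hierarchy import fcluster

# Z and cities come from the city example above (illustration only).
# Cutting the tree into 2 clusters separates {MI, TO} from the other cities.
assignments = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(cities, assignments)))   # e.g. {'BA': 1, ..., 'MI': 2, 'TO': 2}
```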

12
Applications of Clustering
• Clustering has a large number of applications spread across various domains. Some of the most popular applications of clustering are:
• Recommendation engines
• Market segmentation
• Social network analysis
• Search result grouping
• Medical imaging
• Image segmentation
• Anomaly detection

13
Practice question
(A SciPy snippet for checking your work follows the distance matrix below.)
       BOS    NY    DC   MIA   CHI   SEA    SF    LA   DEN
BOS      0   206   429  1504   963  2976  3095  2979  1949
NY     206     0   233  1308   802  2815  2934  2786  1771
DC     429   233     0  1075   671  2684  2799  2631  1616
MIA   1504  1308  1075     0  1329  3273  3053  2687  2037
CHI    963   802   671  1329     0  2013  2142  2054   996
SEA   2976  2815  2684  3273  2013     0   808  1131  1307
SF    3095  2934  2799  3053  2142   808     0   379  1235
LA    2979  2786  2631  2687  2054  1131   379     0  1059
DEN   1949  1771  1616  2037   996  1307  1235  1059     0
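If you want to check your work (assuming the exercise is to repeat the single-linkage procedure on this matrix, as in the worked example; SciPy and matplotlib are assumptions), a short snippet can print the merge levels and draw the dendrogram:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

cities = ["BOS", "NY", "DC", "MIA", "CHI", "SEA", "SF", "LA", "DEN"]
D = np.array([
    [   0,  206,  429, 1504,  963, 2976, 3095, 2979, 1949],
    [ 206,    0,  233, 1308,  802, 2815, 2934, 2786, 1771],
    [ 429,  233,    0, 1075,  671, 2684, 2799, 2631, 1616],
    [1504, 1308, 1075,    0, 1329, 3273, 3053, 2687, 2037],
    [ 963,  802,  671, 1329,    0, 2013, 2142, 2054,  996],
    [2976, 2815, 2684, 3273, 2013,    0,  808, 1131, 1307],
    [3095, 2934, 2799, 3053, 2142,  808,    0,  379, 1235],
    [2979, 2786, 2631, 2687, 2054, 1131,  379,    0, 1059],
    [1949, 1771, 1616, 2037,  996, 1307, 1235, 1059,    0],
])

Z = linkage(squareform(D), method="single")
print(Z[:, 2])                  # the level of each merge, in order
dendrogram(Z, labels=cities)
plt.show()
```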

14
THANK YOU

15
