Unit 6
Cluster Analysis
Cluster analysis groups data objects based on information found only in the data that describes the
objects and their relationships. The goal is that the objects within a group be similar (or related) to
one another and different from (or unrelated to) the objects in other groups. The greater the
similarity (or homogeneity) within a group and the greater the difference between groups, the
better or more distinct the clustering.
Consider the figure, which shows 20 points and three different ways of dividing them into clusters. The
shapes of the markers indicate cluster membership.
Figures (b) and (d) divide the data into two and six parts, respectively. However, the apparent
division of each of the two larger clusters into three sub clusters may simply be an artifact of the
human visual system.
Also, it may not be unreasonable to say that the points form four clusters, as shown in Figure (c).
This figure illustrates that the definition of a cluster is imprecise and that the best definition depends
on the nature of data and the desired results. Cluster analysis is related to other techniques that are
used to divide data objects into groups.
For instance, clustering can be regarded as a form of classification in that it creates a labelling of
objects with class (cluster) labels. However, it derives these labels only from the data. In contrast,
classification is supervised classification; i.e., new, unlabelled objects are assigned a class label using
a model developed from objects with known class labels. For this reason, cluster analysis is
sometimes referred to as unsupervised classification. When the term classification is used without
any qualification within data mining, it typically refers to supervised classification.
Also, while the terms segmentation and partitioning are sometimes used as synonyms for clustering,
these terms are frequently used for approaches outside the traditional bounds of cluster analysis.
For example, the term partitioning is often used in connection with techniques that divide graphs
into subgraphs and that are not strongly connected to clustering.
Segmentation often refers to the division of data into groups using simple techniques.
Example: An image can be split into segments based only on pixel intensity and color, or people can
be divided into groups based on their income.
Types of Clusterings:
Definition: An entire collection of clusters is commonly referred to as a clustering.
The most important distinctions are:
• Hierarchical (nested) versus partitional (unnested)
• Exclusive versus overlapping versus fuzzy
• Complete versus partial
Hierarchical Clustering: If we permit clusters to have sub clusters, then we obtain a hierarchical
clustering, which is a set of nested clusters that are organized as a tree. Each node (cluster) in the
tree (except for the leaf nodes) is the union of its children (sub clusters), and the root of the tree is
the cluster containing all the objects. Often, but not always, the leaves of the tree are singleton
clusters of individual data objects.
If we allow clusters to be nested, then one interpretation of Figure (a) is that it has two sub clusters
(Figure (b)), each of which, in turn, has three sub clusters (Figure (d)). The clusters shown in Figures (a–d),
when taken in that order, also form a hierarchical (nested) clustering with, respectively, 1, 2, 4, and 6
clusters on each level.
Note: Finally, note that a hierarchical clustering can be viewed as a sequence of partitional
clusterings and a partitional clustering can be obtained by taking any member of that sequence; i.e.,
by cutting the hierarchical tree at a particular level.
Exclusive versus Overlapping versus Fuzzy:
Exclusive Clustering: The clusterings shown in the figure above are all exclusive, as they assign each
object to a single cluster. There are many situations in which a point could reasonably be placed in
more than one cluster and these situations are better addressed by non-exclusive clustering.
A non-exclusive clustering is also often used when, for example, an object is “between” two or more
clusters and could reasonably be assigned to any of these clusters. Imagine a point halfway between
two of the clusters of Figure. Rather than make a somewhat arbitrary assignment of the object to a
single cluster, it is placed in all of the “equally good” clusters.
Fuzzy clustering:
In a fuzzy clustering, every object belongs to every cluster with a membership weight that is
between 0 (absolutely doesn’t belong) and 1 (absolutely belongs). In other words, clusters are
treated as fuzzy sets. (Mathematically, a fuzzy set is one in which an object belongs to every set with
a weight that is between 0 and 1. In fuzzy clustering, we often impose the additional constraint that
the sum of the weights for each object must equal 1.)
Similarly, probabilistic clustering techniques compute the probability with which each point belongs
to each cluster, and these probabilities must also sum to 1. Because the membership weights or
probabilities for any object sum to 1, a fuzzy or probabilistic clustering does not address true
multiclass situations, such as the case of a student employee, where an object belongs to multiple
classes. Instead, these approaches are most appropriate for avoiding the arbitrariness of assigning
an object to only one cluster when it is close to several. In practice, a fuzzy or probabilistic clustering
is often converted to an exclusive clustering by assigning each object to the cluster in which its
membership weight or probability is highest.
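To make this conversion concrete, here is a minimal sketch (not from the text; the membership weights are invented for illustration) of turning a fuzzy or probabilistic clustering into an exclusive one by assigning each object to the cluster with its highest weight:

import numpy as np

# membership[i, j] = weight with which object i belongs to cluster j;
# each row sums to 1, as required for a fuzzy/probabilistic clustering
membership = np.array([
    [0.90, 0.05, 0.05],   # clearly belongs to cluster 0
    [0.40, 0.35, 0.25],   # "between" clusters; argmax still picks cluster 0
    [0.10, 0.10, 0.80],   # clearly belongs to cluster 2
])

hard_labels = membership.argmax(axis=1)   # exclusive (hard) assignment
print(hard_labels)                        # -> [0 0 2]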
Partial Clustering: Some objects in a data set may not belong to well-defined groups. Many times
objects in the data set represent noise, outliers, or “uninteresting background.”
For example: some newspaper stories share a common theme, such as global warming, while other
stories are more generic or one-of-a-kind. Thus, to find the important topics in last month’s stories,
we often want to search only for clusters of documents that are tightly related by a common theme.
In other cases, a complete clustering of the objects is desired. For example, an application that uses
clustering to organize documents for browsing needs to guarantee that all documents can be
browsed.
Types of Clusters:
Well-Separated Cluster:
A cluster is a set of objects in which each object is closer (or more similar) to every other object in
the cluster than to any object not in the cluster. Sometimes a threshold is used to specify that all the
objects in a cluster must be sufficiently close (or similar) to one another. Figure(a) gives an example
of well separated clusters that consists of two groups of points in a two-dimensional space. The
distance between any two points in different groups is larger than the distance between any two
points within a group. Well-separated clusters do not need to be globular, but can have any shape.
Prototype-Based Cluster: A cluster is a set of objects in which each object is closer (more similar)
to the prototype that defines the cluster than to the prototype of any other cluster. For data with
continuous attributes, the prototype of a cluster is often a centroid, i.e., the average (mean) of all
the points in the cluster. When a centroid is not meaningful, such as when the data has categorical
attributes, the prototype is often a medoid, i.e., the most representative point of a cluster. For many
types of data, the prototype can be regarded as the most central point, and in such instances, we
commonly refer to prototype based clusters as center-based clusters.
Graph-Based Cluster: If the data is represented as a graph, where the nodes are objects and the
links represent connections among objects, then a cluster can be defined as a connected
component; i.e., a group of objects that are connected to one another, but that have no connection
to objects outside the group.
Example: An example of a graph-based cluster is a contiguity-based cluster, where two objects are
connected only if they are within a specified distance of each other. This implies that each object in a
contiguity-based cluster is closer to some other object in the cluster than to any point in a different
cluster.
Figure (c) shows an example of such clusters for two-dimensional points. This definition of a cluster
is useful when clusters are irregular or intertwined. However, this approach can have trouble when
noise is present since, as illustrated by the two spherical clusters of Figure(c), a small bridge of points
can merge two distinct clusters.
Density-Based Cluster: A cluster is a dense region of objects that is surrounded by a region of low
density. Figure(d) shows some density-based clusters for data created by adding noise to the data of
Figure (c). The two circular clusters are not merged, as in Figure(c), because the bridge between
them fades into the noise. Likewise, the curve that is present in Figure(c) also fades into the noise
and does not form a cluster in Figure(d).
A density based definition of a cluster is often employed when the clusters are irregular or
intertwined, and when noise and outliers are present. By contrast, a contiguity based definition of a
cluster would not work well for the data of Figure(d) because the noise would tend to form bridges
between clusters.
Shared-Property (Conceptual) Clusters:
More generally, we can define a cluster as a set of objects that share some property. This definition
encompasses all the previous definitions of a cluster; e.g., objects in a center based cluster share the
property that they are all closest to the same centroid or medoid. However, the shared-property
approach also includes new types of clusters.
Consider the clusters shown in Figure(e). A triangular area (cluster) is adjacent to a rectangular one,
and there are two intertwined circles (clusters). In both cases, a clustering algorithm would need a
very specific concept of a cluster to successfully detect these clusters. The process of finding such
clusters is called conceptual clustering.
K-means:
Prototype-based clustering techniques create a one-level partitioning of the data objects. K-means is the most prominent of these techniques; it defines a prototype in terms of a centroid, which is usually the mean of a group of points, and is typically applied to objects in a continuous n-dimensional space.
[Figure: K-means clustering flowchart]
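The flowchart corresponds to the standard K-means loop: select K initial centroids, assign every point to its closest centroid, recompute each centroid as the mean of its assigned points, and repeat until the centroids stop changing. The following NumPy sketch implements that loop (random initialization and the stopping test are illustrative choices, not the only options):

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: index of the nearest centroid for every point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each centroid becomes the mean of its assigned points
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[1, 1], [2, 3], [3, 5], [4, 5], [6, 6], [7, 5]], dtype=float)
labels, centroids = kmeans(X, k=2)
print(labels, centroids, sep="\n")

In practice, library implementations such as sklearn.cluster.KMeans add refinements (multiple runs, smarter initialization) on top of this basic loop.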
Hierarchical Clustering
Consider the points A(1, 1), B(2, 3), C(3, 5), D(4, 5), E(6, 6), and F(7, 5) and try to cluster them.
To perform clustering, we first create a distance matrix consisting of the distance between each pair of points in the dataset.
[Table: distance matrix for the points A–F]
Using the Single Linkage approach
Step 1: First, we consider each data point as a single cluster: {A}, {B}, {C}, {D}, {E}, {F}. After this, we start combining the clusters.
Step 2: The closest pair of points is C and D, so they are merged into cluster CD; the next closest pair, E and F, is merged into cluster EF.
Step 3: Several merges are now tied at the same single-link distance:
1. Cluster A is closest to B.
2. Cluster B is closest to A as well as to CD.
3. Cluster EF is closest to CD.
Using the above information, let us combine B and CD and name the cluster BCD.
Step 4: After this, the cluster BCD is at the same distance from A and from EF. Hence, we first merge BCD and EF to form BCDEF, and finally merge in A to obtain the single cluster ABCDEF.
Because of the ties, a different merge order is also valid. If we instead combine (A, B), (C, D), and (E, F) at the tied stage, we get clusters AB, CD, and EF. Now the minimum distance of CD is the same from AB and from EF. Let us merge AB and CD to form ABCD first; next, we combine ABCD and EF to obtain the cluster ABCDEF. Both orders are legitimate single-link clusterings that differ only in how the ties are broken.
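The same example can be checked with SciPy (assuming Euclidean distance, which the coordinates above suggest; SciPy's tie-breaking may not match the hand-worked order, which is exactly the point of the two alternatives above):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

points = np.array([[1, 1], [2, 3], [3, 5], [4, 5], [6, 6], [7, 5]], dtype=float)

# the distance matrix referred to above, rounded to two decimals
print(np.round(squareform(pdist(points)), 2))

# single-link (MIN) agglomerative clustering; each row of Z records the
# two clusters merged, the merge distance, and the new cluster size
Z = linkage(points, method="single")
print(Z)
# from scipy.cluster.hierarchy import dendrogram  # draw the tree with matplotlib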
Handling Empty Clusters:
One of the problems with the basic K-means algorithm is that empty clusters can be obtained if no points are allocated to a cluster during the assignment step. If this happens, a replacement centroid needs to be chosen, since otherwise the squared error will be larger than necessary.
Approach 1: One approach is to choose the point that is farthest away from any current centroid. This eliminates the point that currently contributes most to the total squared error.
Approach 2: Another approach is to choose the replacement centroid at random from the cluster that has the highest SSE. This will typically split the cluster and reduce the overall SSE of the clustering. If there are several empty clusters, then this process can be repeated several times.
Outliers:
When the squared error criterion is used, outliers can unduly influence the clusters that are found.
In particular, when outliers are present, the resulting cluster centroids (prototypes) are typically not
as representative as they otherwise would be and thus, the SSE will be higher. Because of this, it is
often useful to discover outliers and eliminate them beforehand. It is important, however, to
appreciate that there are certain clustering applications for which outliers should not be eliminated.
When clustering is used for data compression, every point must be clustered, and in some cases,
such as financial analysis, apparent outliers, e.g., unusually profitable customers, can be the most
interesting points.
An obvious issue is how to identify outliers. There are a number of techniques for identifying
outliers. If we use approaches that remove outliers before clustering, we avoid clustering points that
will not cluster well.
Alternatively, outliers can also be identified in a postprocessing step. For instance, we can keep track
of the SSE contributed by each point, and eliminate those points with unusually high contributions,
especially over multiple runs. Also, we often want to eliminate small clusters because they
frequently represent groups of outliers.
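The postprocessing idea above can be sketched as follows (the data, the use of scikit-learn's KMeans, and the mean-plus-three-standard-deviations threshold are illustrative assumptions, not a prescribed method):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),
               rng.normal(8, 1, (50, 2)),
               [[20.0, 20.0]]])          # one obvious outlier

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# per-point SSE contribution: squared distance to the closest centroid
sq_dist = np.min(km.transform(X), axis=1) ** 2

threshold = sq_dist.mean() + 3 * sq_dist.std()
outliers = np.where(sq_dist > threshold)[0]
print(outliers)          # the index of the injected outlier should appear here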
Various techniques are used to “fix up” the resulting clusters in order to produce a clustering that
has lower SSE. The strategy is to focus on individual clusters since the total SSE is simply the sum of
the SSE contributed by each cluster. (We will use the terms total SSE and cluster SSE, respectively, to
avoid any potential confusion.) We can change the total SSE by performing various operations on the
clusters, such as splitting or merging clusters.
One commonly used approach is to employ alternate cluster splitting and merging phases. During a
splitting phase, clusters are divided, while during a merging phase, clusters are combined. In this
way, it is often possible to escape local SSE minima and still produce a clustering solution with the
desired number of clusters.
The following are some techniques used in the splitting and merging phases.
Two strategies that decrease the total SSE by increasing the number of clusters are the
following:
Split a cluster: The cluster with the largest SSE is usually chosen, but we could also split the cluster
with the largest standard deviation for one particular attribute.
Introduce a new cluster centroid: Often the point that is farthest from any cluster center is
chosen. We can easily determine this if we keep track of the SSE contributed by each point. Another
approach is to choose randomly from all points or from the points with the highest SSE with respect
to their closest centroids.
Two strategies that decrease the number of clusters, while trying to minimize the increase
in total SSE, are the following:
Disperse a cluster: This is accomplished by removing the centroid that corresponds to the cluster
and reassigning the points to other clusters. Ideally, the cluster that is dispersed should be the one
that increases the total SSE the least.
Merge two clusters: The clusters with the closest centroids are typically chosen, although another,
perhaps better, approach is to merge the two clusters that result in the smallest increase in total
SSE. These two merging strategies are the same ones that are used in the hierarchical clustering
techniques known as the centroid method and Ward’s method, respectively.
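As an illustration only (the helper functions and the simple averaging merge are assumptions; a size-weighted merge, or re-running K-means after the operation, would be the more careful choice), two of these operations can be sketched as:

import numpy as np

def cluster_sse(X, labels, centroids):
    # SSE contributed by each cluster
    return np.array([((X[labels == j] - c) ** 2).sum()
                     for j, c in enumerate(centroids)])

def split_largest_sse(X, labels, centroids):
    # split the cluster with the largest SSE by placing a new centroid
    # at that cluster's farthest point
    sse = cluster_sse(X, labels, centroids)
    j = sse.argmax()
    members = X[labels == j]
    far = members[((members - centroids[j]) ** 2).sum(axis=1).argmax()]
    return np.vstack([centroids, far])          # K + 1 centroids

def merge_closest(centroids):
    # merge the two clusters whose centroids are closest
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    a, b = np.unravel_index(d.argmin(), d.shape)
    merged = (centroids[a] + centroids[b]) / 2.0
    keep = [i for i in range(len(centroids)) if i not in (a, b)]
    return np.vstack([centroids[keep], merged])  # K - 1 centroids

After splitting or merging, K-means would normally be re-run so that the points are reassigned and the centroids re-optimized, which is what the alternating splitting and merging phases described above do.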
Using an incremental update strategy guarantees that empty clusters are not produced because all
clusters start with a single point, and if a cluster ever has only one point, then that point will always
be reassigned to the same cluster.
In addition, if incremental updating is used, the relative weight of the point being added can be
adjusted; e.g., the weight of points is often decreased as the clustering proceeds. While this can
result in better accuracy and faster convergence, it can be difficult to make a good choice for the
relative weight, especially in a wide variety of situations. These update issues are similar to those
involved in updating weights for artificial neural networks. Yet another benefit of incremental
updates has to do with using objectives other than “minimize SSE.”
Suppose that we are given an arbitrary objective function to measure the goodness of a set of
clusters. When we process an individual point, we can compute the value of the objective function
for each possible cluster assignment, and then choose the one that optimizes the objective.
On the negative side, updating centroids incrementally introduces an order dependency. In other
words, the clusters produced usually depend on the order in which the points are processed.
Although this can be addressed by randomizing the order in which the points are processed, the
basic K-means approach of updating the centroids after all points have been assigned to clusters has
no order dependency. Also, incremental updates are slightly more expensive. However, K-means
converges rather quickly, and therefore, the number of points switching clusters quickly becomes
relatively small.
Advantages:
• Easy to implement.
• It is quite efficient, even though multiple runs are often performed.
• With a large number of variables, K-means may be computationally faster than hierarchical clustering (if K is small).
• An instance can change cluster (move to another cluster) when the centroids are recomputed.
Disadvantages:
• Difficult to predict the number of clusters (the value of K).
• It cannot handle non-globular clusters or clusters of different sizes and densities, although it can typically find pure sub clusters if a large enough number of clusters is specified.
Bisecting K-means
Idea: The bisecting K-means algorithm is a straightforward extension of the basic K-means algorithm
that is based on a simple idea: to obtain K clusters, split the set of all points into two clusters, select
one of these clusters to split, and so on, until K clusters have been produced.
There are a number of different ways to choose which cluster to split. We can choose the largest
cluster at each step, choose the one with the largest SSE, or use a criterion based on both size and
SSE. Different choices result in different clusters. Because we are using the K-means algorithm
“locally,” i.e., to bisect individual clusters, the final set of clusters does not represent a clustering
that is a local minimum with respect to the total SSE. Thus, we often refine the resulting clusters by
using their cluster centroids as the initial centroids for the standard K-means algorithm.
Algorithm:
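A hedged Python sketch of the bisecting K-means procedure described above (choosing the cluster to split by largest SSE is one of the options mentioned; scikit-learn's KMeans is used for the local bisection step; the trial-bisection count and the synthetic data are illustrative assumptions):

import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k, n_trial_bisections=5, random_state=0):
    clusters = [np.arange(len(X))]                 # start with one cluster of all points
    while len(clusters) < k:
        # pick the cluster with the largest SSE to split
        sse = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        idx = clusters.pop(int(np.argmax(sse)))
        # bisect it with 2-means, keeping the best of several trial bisections
        best = None
        for t in range(n_trial_bisections):
            km = KMeans(n_clusters=2, n_init=1,
                        random_state=random_state + t).fit(X[idx])
            if best is None or km.inertia_ < best.inertia_:
                best = km
        clusters += [idx[best.labels_ == 0], idx[best.labels_ == 1]]
    labels = np.empty(len(X), dtype=int)
    for j, idx in enumerate(clusters):
        labels[idx] = j
    return labels

X = np.vstack([np.random.default_rng(0).normal(m, 0.5, (30, 2)) for m in (0, 4, 8)])
print(np.bincount(bisecting_kmeans(X, k=3)))

As the text notes, the resulting cluster centroids can then be used as the initial centroids for a final run of standard K-means to refine the clustering.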
K-means as an Optimization Problem:
One way to find a clustering that exactly minimizes the SSE would be to enumerate all possible ways of dividing the points into clusters and keep the best one. This strategy is computationally infeasible and, as a result, a more practical approach is needed, even if such an approach finds solutions that are not guaranteed to be optimal. One technique, which is known as gradient descent, is based on picking an initial solution and then repeating the following two steps: compute the change to the solution that best optimizes the objective function, and then update the solution. To keep the notation simple, we assume that the data is one-dimensional, i.e., dist(x, y) = (x − y)^2. This does not change anything essential, but greatly simplifies the notation.
The centroid for the K-means algorithm can be mathematically derived when the proximity function is Euclidean distance and the objective is to minimize the SSE. Specifically, we investigate how we can best update a cluster centroid so that the cluster SSE is minimized. The SSE is

SSE = Σ_{i=1..K} Σ_{x ∈ Ci} (ci − x)^2

Here, Ci is the ith cluster, x is a point in Ci, and ci is the mean of the ith cluster. We can solve for the kth centroid ck, which minimizes this SSE, by differentiating the SSE, setting the derivative equal to 0, and solving, as indicated below:

∂SSE/∂ck = Σ_{x ∈ Ck} 2(ck − x) = 0   ⇒   mk · ck = Σ_{x ∈ Ck} x   ⇒   ck = (1/mk) Σ_{x ∈ Ck} x

where mk is the number of points in the kth cluster. Thus, as previously indicated, the best centroid for minimizing the SSE of a cluster is the mean of the points in the cluster.
Similarly, when the proximity function is Manhattan (L1) distance and the objective is to minimize the sum of the absolute errors,

SAE = Σ_{i=1..K} Σ_{x ∈ Ci} |ci − x|,

we can solve for the kth centroid ck, which minimizes the SAE, by differentiating the SAE, setting the derivative equal to 0, and solving.
If we solve for ck, we find that ck = median{x ∈ Ck}, the median of the points in the cluster. The median of a group of points is straightforward to compute and less susceptible to distortion by outliers.
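A tiny one-dimensional illustration (the numbers are made up) of why the median is less susceptible to distortion by an outlier than the mean:

import numpy as np

cluster = np.array([1.0, 1.1, 0.9, 1.2, 0.8, 25.0])   # 25.0 is an outlier
print(cluster.mean())      # 5.0  -> pulled far away from the bulk of the points
print(np.median(cluster))  # 1.05 -> stays near the bulk of the points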
Agglomerative versus Divisive Hierarchical Clustering:
There are two basic strategies for producing a hierarchical clustering.
Agglomerative: Start with the points as individual clusters and, at each step, merge the closest pair
of clusters. This requires defining a notion of cluster proximity.
Divisive: Start with one, all-inclusive cluster and, at each step, split a cluster until only singleton
clusters of individual points remain. In this case, we need to decide which cluster to split at each step
and how to do the splitting.
Agglomerative hierarchical clustering techniques are by far the most common. A hierarchical
clustering is often displayed graphically using a tree-like diagram called a dendrogram, which
displays both the cluster-sub cluster relationships and the order in which the clusters were merged
(agglomerative view) or split (divisive view).
For sets of two-dimensional points, a hierarchical clustering can also be graphically represented
using a nested cluster diagram. Figure shows an example of these two types of figures for a set of
four two-dimensional points.
Basic Agglomerative Hierarchical Clustering Algorithm:
Many agglomerative hierarchical clustering techniques are variations on a single approach: starting with individual points as clusters, successively merge the two closest clusters until only one cluster remains. In outline:
1: Compute the proximity matrix, if necessary.
2: repeat
3: Merge the closest two clusters.
4: Update the proximity matrix to reflect the proximity between the new cluster and the original clusters.
5: until only one cluster remains.
The analysis of the basic agglomerative hierarchical clustering algorithm is also straightforward with respect to computational complexity. O(m^2) time is required to compute the proximity matrix for m data points.
After that step, there are m − 1 iterations involving steps 3 and 4, because there are m clusters at the start and two clusters are merged during each iteration. If performed as a linear search of the proximity matrix, then for the ith iteration, step 3 requires O((m − i + 1)^2) time, which is proportional to the current number of clusters squared. Step 4 requires O(m − i + 1) time to update the proximity matrix after the merger of two clusters. (A cluster merger affects O(m − i + 1) proximities for the techniques that we consider.) Without modification, this would yield a time complexity of O(m^3). If the distances from each cluster to all other clusters are stored as a sorted list (or heap), it is possible to reduce the cost of finding the two closest clusters to O(m − i + 1). However, because of the additional complexity of keeping data in a sorted list or heap, the overall time required by this approach is O(m^2 log m).
Example (Single Link): Figure shows the result of applying the single link technique to our
example data set of six points. Figure (a) shows the nested clusters as a sequence of nested ellipses,
where the numbers associated with the ellipses indicate the order of the clustering. Figure(b) shows
the same information, but as a dendrogram. The height at which two clusters are merged in the
dendrogram reflects the distance of the two clusters. For instance, from Table, we see that the
distance between points 3 and 6 is 0.11, and that is the height at which they are joined into one
cluster in the dendrogram. As another example, the distance between clusters{3,6}and {2,5} is given
by dist({3,6},{2,5}) = min(dist(3,2), dist(6,2), dist(3,5), dist(6,5)) = min(0.15, 0.25, 0.28, 0.39) = 0.15.
Example (Complete Link): Figure shows the results of applying MAX to the sample data set of six
points. As with single link, points 3 and 6 are merged first. However, {3,6} is merged with {4}, instead
of {2,5} or {1}, because the complete link (maximum) distance from {3,6} to {4} is smaller than the complete link distance from {3,6} to either {2,5} or {1}.
Group Average:
For the group average version of hierarchical clustering, the proximity of two clusters is defined as
the average pairwise proximity among all pairs of points in the different clusters. This is an
intermediate approach between the single and complete link approaches. Thus, for group average,
the cluster proximity proximity(Ci, Cj) of clusters Ci and Cj, which are of size mi and mj, respectively,
is expressed by the following equation:

proximity(Ci, Cj) = ( Σ_{x ∈ Ci, y ∈ Cj} proximity(x, y) ) / (mi × mj)
Example (Group Average). Figure shows the results of applying the group average approach to
the sample data set of six points. To illustrate how group average works, we calculate the distance
between some clusters.
Because dist({3,6,4},{2,5}) is smaller than dist({3,6,4},{1}) and dist({2,5},{1}), clusters {3,6,4} and {2,5}
are merged at the fourth stage.
Ward’s Method :
For Ward’s method, the proximity between two clusters is defined as the increase in the squared
error that results when two clusters are merged. Thus, this method uses the same objective function
as K-means clustering. While it might seem that this feature makes Ward’s method somewhat
distinct from other hierarchical techniques, it can be shown mathematically that Ward’s method is
very similar to the group average method when the proximity between two points is taken to be the
square of the distance between them.
Example (Ward’s Method): Figure shows the results of applying Ward’s method to the sample
data set of six points. The clustering that is produced is different from those produced by single link,
complete link, and group average.
Centroid method:
Centroid methods calculate the proximity between two clusters by calculating the distance between
the centroids of clusters. These techniques may seem similar to K-means, but as we have remarked,
Ward’s method is the correct hierarchical analog. Centroid methods also have a characteristic—
often considered bad—that is not possessed by the other hierarchical clustering techniques that we
have discussed: the possibility of inversions. Specifically, two clusters that are merged can be more
similar (less distant) than the pair of clusters that were merged in a previous step. For the other methods, the distance between merged clusters monotonically increases (or is, at worst, non-decreasing) as we proceed from singleton clusters to one all-inclusive cluster.
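The linkage definitions discussed in this section (MIN, MAX, group average, Ward, centroid) are all available in SciPy, and the possibility of inversions for the centroid method can be checked by looking at whether the merge heights in the linkage matrix are non-decreasing (the synthetic data below is an assumption; an inversion may or may not occur on any particular data set):

import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

for method in ["single", "complete", "average", "ward", "centroid"]:
    Z = linkage(X, method=method)
    merge_heights = Z[:, 2]                       # distance at which each merge happened
    monotonic = bool(np.all(np.diff(merge_heights) >= 0))
    print(f"{method:9s} monotonic merge heights: {monotonic}")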
Advantages:
• Hierarchical clustering outputs a hierarchy, i.e., a structure that is more informative than the unstructured set of flat clusters returned by K-means. Therefore, it is easier to decide on the number of clusters by looking at the dendrogram.
• Easy to implement.
Disadvantages:
• Agglomerative hierarchical clustering algorithms are expensive in terms of their computational and storage requirements.
• The fact that all merges are final can also cause trouble for noisy, high-dimensional data, such as document data.
• It is not possible to undo a previous step: once the instances have been assigned to a cluster, they can no longer be moved around.
DBSCAN
Density-based clustering locates regions of high density that are separated from one another by
regions of low density. DBSCAN is a simple and effective density-based clustering algorithm that illustrates a number of concepts that are important for any density-based clustering approach. In this section, we focus solely on DBSCAN after first considering the key notion of density.
In the center-based approach, the density of a point is estimated by counting the number of points within a specified radius (Eps) of that point, including the point itself. This method is simple to implement, but the density of any point will depend on the specified radius. For instance, if the radius is large enough, then all points will have a density of m, the number of points in the data set. Likewise, if the radius is too small, then all points will have a density of 1. A common approach for deciding on an appropriate radius for low-dimensional data is to examine the distances from each point to its kth nearest neighbour, which is how DBSCAN's Eps parameter is usually chosen.
Figure graphically illustrates the concepts of core, border, and noise points using a collection of two-
dimensional points. The following text provides a more precise description.
Core points: These points are in the interior of a density-based cluster. A point is a core point if there are at least MinPts points within a distance of Eps of it, where MinPts and Eps are user-specified parameters. In the figure, point A is a core point for the indicated radius (Eps) if MinPts ≤ 7.
Border points: A border point is not a core point, but falls within the neighbourhood of a core
point. In Figure, point B is a border point. A border point can fall within the neighbourhoods of
several core points.
Noise points: A noise point is any point that is neither a core point nor a border point. In Figure,
point C is a noise point.
DBSCAN can find many clusters that could not be found using K-means.
Weaknesses:
• DBSCAN has trouble when the clusters have widely varying densities.
• It also has trouble with high-dimensional data because density is more difficult to define for such data.
• DBSCAN can be expensive when the computation of nearest neighbours requires computing all pairwise proximities, as is usually the case for high-dimensional data.
DBSCAN
The DBSCAN algorithm can efficiently cluster densely grouped points into one cluster. It identifies regions of high local density among large datasets and can very effectively handle outliers. An advantage of DBSCAN over the K-means algorithm is that the number of clusters does not need to be specified beforehand.
Epsilon is defined as the radius of each data point around which the density is
considered.
minPoints is the number of points required within the radius so that the data
point becomes a core point.
In the above figure, we can see that point A has no other points inside its epsilon (ε) radius, hence it is a Noise Point. Point B has at least minPoints (= 4) points within its epsilon radius, thus it is a Core Point. Point C has only 1 point (fewer than minPoints) within its epsilon radius but falls within the neighbourhood of a core point, hence it is a Border Point.
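A short, hedged example of running DBSCAN with scikit-learn (the data, eps, and min_samples values are illustrative; eps and min_samples play the roles of Eps and MinPts/minPoints above, and a label of -1 marks noise points):

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 0.3, (40, 2)),      # dense cluster 1
    rng.normal(4, 0.3, (40, 2)),      # dense cluster 2
    rng.uniform(-2, 6, (10, 2)),      # sparse background / potential noise
])

db = DBSCAN(eps=0.5, min_samples=5).fit(X)
print(np.unique(db.labels_, return_counts=True))  # -1 = noise, 0..k-1 = clusters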