Clustering Part 2
44
Centroid, Radius and Diameter of a Cluster
(for numerical data sets)
■ Centroid: the “middle” of a cluster (the mean of its member objects)
■ Radius: square root of the average squared distance from the member objects to the centroid
■ Diameter: square root of the average squared pairwise distance between the member objects
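A compact restatement of these three quantities for a cluster of N numerical objects t1, …, tN (the standard definitions, e.g. as used by BIRCH; added here for reference, not recovered slide text):

```latex
\begin{align*}
  \text{Centroid:} \quad & C = \frac{1}{N}\sum_{i=1}^{N} t_i \\
  \text{Radius:}   \quad & R = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert t_i - C \rVert^{2}} \\
  \text{Diameter:} \quad & D = \sqrt{\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j=1}^{N} \lVert t_i - t_j \rVert^{2}}
\end{align*}
```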
45
Extensions to Hierarchical Clustering
■ Major weaknesses of agglomerative clustering methods
■ Can never undo what was done previously
■ Do not scale well: time complexity of at least O(n²),
where n is the total number of objects
■ Integration of hierarchical & distance-based clustering
■ BIRCH (1996): uses a CF-tree and incrementally adjusts
the quality of sub-clusters (see the CF sketch after this list)
■ CHAMELEON (1999): hierarchical clustering using
dynamic modeling
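As a rough illustration of the clustering-feature idea behind BIRCH, a sketch under the usual definition CF = (N, LS, SS) (a didactic outline, not the original implementation):

```python
import numpy as np

class ClusteringFeature:
    """Sketch of a BIRCH clustering feature CF = (N, LS, SS):
    N = number of points, LS = linear sum, SS = sum of squared norms."""

    def __init__(self, dim):
        self.n = 0
        self.ls = np.zeros(dim)   # linear sum of the absorbed points
        self.ss = 0.0             # sum of squared norms of the absorbed points

    def add(self, x):
        """Absorb one point incrementally; the point itself need not be stored."""
        x = np.asarray(x, dtype=float)
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):
        """CFs are additive, so two sub-clusters merge in O(dim) time."""
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        """Square root of the mean squared distance of the points to the centroid."""
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - float(c @ c), 0.0))
```

Because a CF-tree node stores only these summaries, BIRCH can build and refine sub-clusters incrementally while scanning the data.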
46
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
■ Cluster Analysis: Basic Concepts
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Evaluation of Clustering
■ Summary
47
Density-Based Clustering Methods
■ Handle noise
■ One scan
■ Need density parameters as termination condition
■ Representative methods: DBSCAN, OPTICS, and DENCLUE; CLIQUE is more grid-based
48
Density-Based Clustering: Basic Concepts
■ Two parameters:
■ Eps: Maximum radius of the neighbourhood
■ MinPts: Minimum number of points in an
Eps-neighbourhood of that point
■ NEps(p): {q belongs to D | dist(p, q) ≤ Eps}
■ Directly density-reachable: A point p is directly
density-reachable from a point q w.r.t. Eps, MinPts if
■ p belongs to NEps(q)
■ core point condition: |NEps(q)| ≥ MinPts
(Figure: p lies in the Eps-neighbourhood of core point q, with MinPts = 5 and Eps = 1 cm)
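A minimal sketch of the Eps-neighbourhood and these two conditions (function names are illustrative; the data set D is assumed to be an (n, d) NumPy array):

```python
import numpy as np

def eps_neighborhood(D, p_idx, eps):
    """N_Eps(p): indices of all points of D within distance Eps of point p."""
    dists = np.linalg.norm(D - D[p_idx], axis=1)
    return np.where(dists <= eps)[0]           # includes p itself

def is_core_point(D, q_idx, eps, min_pts):
    """Core point condition: |N_Eps(q)| >= MinPts."""
    return len(eps_neighborhood(D, q_idx, eps)) >= min_pts

def directly_density_reachable(D, p_idx, q_idx, eps, min_pts):
    """p is directly density-reachable from q iff p is in N_Eps(q) and q is a core point."""
    neighborhood = eps_neighborhood(D, q_idx, eps)
    return p_idx in neighborhood and len(neighborhood) >= min_pts
```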
49
Density-Reachable and Density-Connected
■ Density-reachable:
■ A point p is density-reachable from a point q w.r.t. Eps, MinPts
if there is a chain of points p1, …, pn with p1 = q and pn = p
such that pi+1 is directly density-reachable from pi
■ Density-connected:
■ A point p is density-connected to a point q w.r.t. Eps, MinPts
if there is a point o such that both p and q are
density-reachable from o w.r.t. Eps and MinPts
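The same two notions, restated compactly:

```latex
\begin{align*}
\text{density-reachable: } & \exists\, p_1,\dots,p_n:\ p_1 = q,\ p_n = p,\
  p_{i+1} \text{ directly density-reachable from } p_i \text{ for all } i \\
\text{density-connected: } & \exists\, o:\ p \text{ and } q \text{ are both
  density-reachable from } o \ \text{(w.r.t. Eps, MinPts)}
\end{align*}
```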
50
DBSCAN: Density-Based Spatial Clustering of
Applications with Noise
■ Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
■ Discovers clusters of arbitrary shape in spatial databases
with noise
(Figure: core, border, and outlier (noise) points for Eps = 1 cm and MinPts = 5)
51
DBSCAN: The Algorithm
■ Arbitrarily select a point p
■ Retrieve all points density-reachable from p w.r.t. Eps and
MinPts
■ If p is a core point, a cluster is formed
■ If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the database
■ Continue the process until all of the points have been
processed
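A didactic sketch of the procedure just described (a plain O(n²) version with no spatial index; D is assumed to be an (n, d) NumPy array, and label −1 marks noise):

```python
import numpy as np

NOISE, UNVISITED = -1, -2

def dbscan(D, eps, min_pts):
    """Sketch of DBSCAN: returns one label per point (cluster id >= 0, -1 = noise)."""
    labels = np.full(len(D), UNVISITED)
    cluster_id = -1

    def region_query(i):
        """All points within Eps of point i (the Eps-neighbourhood)."""
        return np.where(np.linalg.norm(D - D[i], axis=1) <= eps)[0]

    for p in range(len(D)):
        if labels[p] != UNVISITED:
            continue
        neighbors = region_query(p)
        if len(neighbors) < min_pts:          # p is not a core point
            labels[p] = NOISE
            continue
        cluster_id += 1                        # p is a core point: start a new cluster
        labels[p] = cluster_id
        seeds = list(neighbors)
        while seeds:                           # expand the cluster via density-reachability
            q = seeds.pop()
            if labels[q] == NOISE:
                labels[q] = cluster_id         # noise reachable from a core point becomes a border point
            if labels[q] != UNVISITED:
                continue
            labels[q] = cluster_id
            q_neighbors = region_query(q)
            if len(q_neighbors) >= min_pts:    # q is also a core point: keep growing
                seeds.extend(q_neighbors)
    return labels
```

Practical implementations (e.g. scikit-learn's DBSCAN) accelerate the neighbourhood queries with spatial index structures such as k-d trees or ball trees.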
52
DBSCAN: Sensitive to Parameters
53
OPTICS: A Cluster-Ordering Method (1999)
■ OPTICS (Ordering Points To Identify the Clustering Structure) computes an
ordering of the objects that captures the density-based clustering structure
for a broad range of parameter settings
■ The ordering can be represented graphically or with visualization techniques
54
(Figure: reachability plot — reachability-distance, with some values undefined,
plotted in the cluster order of the objects)
55
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
■ Cluster Analysis: Basic Concepts
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Evaluation of Clustering
■ Summary
56
Grid-Based Clustering Method
57
STING: A Statistical Information Grid Approach
58
The STING Clustering Method
■ Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
■ Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
■ Parameters of higher-level cells can be easily calculated from the
parameters of the lower-level cells (see the sketch below)
■ count, mean, standard deviation (s), min, max
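As an illustration of that bottom-up computation (a sketch of the idea, not STING's actual code), a parent cell's statistics can be combined from its child cells without revisiting the raw points:

```python
import math

def combine_cells(children):
    """Combine (count, mean, std, min, max) statistics of child cells into the
    statistics of the parent cell, without touching the underlying data points."""
    n = sum(c["count"] for c in children)
    mean = sum(c["count"] * c["mean"] for c in children) / n
    # parent E[x^2] from each child's mean and standard deviation
    second_moment = sum(c["count"] * (c["std"] ** 2 + c["mean"] ** 2)
                        for c in children) / n
    std = math.sqrt(max(second_moment - mean ** 2, 0.0))
    return {"count": n, "mean": mean, "std": std,
            "min": min(c["min"] for c in children),
            "max": max(c["max"] for c in children)}
```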
60
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
■ Cluster Analysis: Basic Concepts
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Evaluation of Clustering
■ Summary
61
Assessing Clustering Tendency
■ Assess if non-random structure exists in the data by measuring the
probability that the data is generated by a uniform data distribution
■ Test spatial randomness with a statistical test: the Hopkins Statistic
■ Given a dataset D regarded as a sample of a random variable o,
determine how far away o is from being uniformly distributed in the
data space
■ Sample n points, p1, …, pn, uniformly at random from the data space
containing D (not from D itself). For each pi, find its nearest neighbor
in D: xi = min{dist(pi, v)} where v in D
■ Sample n points, q1, …, qn, uniformly from D. For each qi, find its
nearest neighbor in D – {qi}: yi = min{dist(qi, v)} where v in D and
v ≠ qi
■ Calculate the Hopkins Statistic: H = Σi yi / (Σi xi + Σi yi)
■ If D is uniformly distributed, Σ xi and Σ yi are close and H ≈ 0.5;
if D is highly clustered, Σ yi is much smaller than Σ xi and H is close to 0
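A sketch of this computation (assuming D is an (n, d) NumPy array with n ≥ n_samples; the sampling window is taken to be the bounding box of D):

```python
import numpy as np

def hopkins_statistic(D, n_samples=50, seed=None):
    """Sketch of the Hopkins Statistic as defined above: H near 0.5 suggests
    uniformly distributed data, H close to 0 suggests clustered data."""
    rng = np.random.default_rng(seed)
    n, d = D.shape
    lo, hi = D.min(axis=0), D.max(axis=0)

    def nn_dist(point, exclude_idx=None):
        dists = np.linalg.norm(D - point, axis=1)
        if exclude_idx is not None:
            dists[exclude_idx] = np.inf        # ignore the point itself
        return dists.min()

    # x_i: nearest-neighbor distances of points drawn uniformly from the data space
    uniform_points = rng.uniform(lo, hi, size=(n_samples, d))
    x = [nn_dist(p) for p in uniform_points]

    # y_i: nearest-neighbor distances of sampled data points to the rest of D
    sample_idx = rng.choice(n, size=n_samples, replace=False)
    y = [nn_dist(D[i], exclude_idx=i) for i in sample_idx]

    return sum(y) / (sum(x) + sum(y))
```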
63
Determine the Number of Clusters
■ Empirical method
■ # of clusters ≈ √(n/2) for a dataset of n points
■ Elbow method
■ Use the turning point in the curve of the sum of within-cluster variance
w.r.t. the # of clusters
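A minimal sketch of the elbow computation, assuming scikit-learn is available (KMeans.inertia_ is the within-cluster sum of squared distances):

```python
import numpy as np
from sklearn.cluster import KMeans

def within_cluster_sse(D, k_values=range(1, 11)):
    """Within-cluster sum of squared errors for each candidate k; plotting the
    result against k and picking the 'elbow' (turning point) gives a heuristic
    estimate of the number of clusters."""
    sse = []
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(D)
        sse.append(km.inertia_)   # sum of squared distances to the closest centroid
    return np.array(sse)
```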
64
Determine the Number of Clusters (2)
■ Cross validation method
■ Divide a given data set into m parts
■ Use m – 1 parts to obtain a clustering model
■ Use the remaining part to test the quality of the clustering
■ E.g., For each point in the test set, find the closest centroid, and
use the sum of squared distance between all points in the test set
and the closest centroids to measure how well the model fits the
test set
■ For each candidate k, repeat the process m times; compare the overall quality
measure across the different k’s and choose the # of clusters that fits the data best
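A sketch of this procedure under the same scikit-learn assumption (KFold and KMeans; the quality measure is the slide's sum of squared distances from held-out points to their closest training centroids, so lower is better):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import KFold

def cv_quality(D, k, m=10):
    """m-fold cross-validation quality of k-means with k clusters (lower is better):
    fit centroids on m-1 folds, score the held-out fold by its squared distance
    to the closest training centroid, and average over the m folds."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=m, shuffle=True, random_state=0).split(D):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(D[train_idx])
        dists = np.linalg.norm(D[test_idx][:, None, :] - km.cluster_centers_[None, :, :], axis=2)
        scores.append(np.sum(dists.min(axis=1) ** 2))
    return float(np.mean(scores))

# Evaluate cv_quality(D, k) over a range of k and pick the k with the best (lowest) score.
```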
65
Cross validation method
66
Measuring Clustering Quality
67
Silhouette coefficient
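The detail of this slide is not reproduced here; for reference, the standard definition of the silhouette coefficient of an object o is:

```latex
% a(o): average distance from o to the other objects in its own cluster
% b(o): minimum, over all other clusters C', of the average distance from o to the objects of C'
\[
  s(o) \;=\; \frac{b(o) - a(o)}{\max\{a(o),\, b(o)\}} \;\in\; [-1, 1]
\]
```

An s(o) close to 1 means o is well clustered; a negative s(o) means o is closer to another cluster than to its own. The average of s(o) over all objects measures the overall clustering quality.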
68
69
Measuring Clustering Quality: Extrinsic Methods
71
Summary
■ Cluster analysis groups objects based on their similarity and has
wide applications
■ Measure of similarity can be computed for various types of data
■ Clustering algorithms can be categorized into partitioning methods,
hierarchical methods, density-based methods, grid-based methods,
and model-based methods
■ K-means and K-medoids algorithms are popular partitioning-based
clustering algorithms
■ BIRCH and CHAMELEON are interesting hierarchical clustering
algorithms, and there are also probabilistic hierarchical clustering
algorithms
■ DBSCAN, OPTICS, and DENCLUE are interesting density-based
algorithms
■ STING and CLIQUE are grid-based methods, where CLIQUE is also
a subspace clustering algorithm
■ Quality of clustering results can be evaluated in various ways
72