
Data Mining:

Concepts and Techniques


— Slides for Textbook —
— Chapter 8 —

©Jiawei Han and Micheline Kamber


Intelligent Database Systems Research Lab
School of Computing Science
Simon Fraser University, Canada
http://www.cs.sfu.ca
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
What is Cluster Analysis?
■ Cluster: a collection of data objects
■ Similar to one another within the same cluster

■ Dissimilar to the objects in other clusters

■ Cluster analysis
■ Grouping a set of data objects into clusters

■ Clustering is unsupervised classification: no


predefined classes
■ Typical applications
■ As a stand-alone tool to get insight into data
distribution
■ As a preprocessing step for other algorithms
General Applications of Clustering

■ Pattern Recognition
■ Spatial Data Analysis

■ create thematic maps in GIS by clustering feature


spaces
■ detect spatial clusters and explain them in spatial data
mining
■ Image Processing

■ Economic Science (especially market research)

■ WWW

■ Document classification

■ Cluster Weblog data to discover groups of similar access patterns
Examples of Clustering Applications
■ Marketing: Help marketers discover distinct groups in their
customer bases, and then use this knowledge to develop
targeted marketing programs
■ Land use: Identification of areas of similar land use in an
earth observation database
■ Insurance: Identifying groups of motor insurance policy
holders with a high average claim cost
■ City-planning: Identifying groups of houses according to
their house type, value, and geographical location
■ Earthquake studies: Observed earthquake epicenters should be clustered along continent faults
What Is Good Clustering?

■ A good clustering method will produce high quality


clusters with
■ high intra-class similarity
■ low inter-class similarity
■ The quality of a clustering result depends on both the
similarity measure used by the method and its
implementation.
■ The quality of a clustering method is also measured by its
ability to discover some or all of the hidden patterns.
Requirements of Clustering in Data
Mining
■ Scalability
■ Ability to deal with different types of attributes
■ Discovery of clusters with arbitrary shape
■ Minimal requirements for domain knowledge to
determine input parameters
■ Able to deal with noise and outliers
■ Insensitive to order of input records
■ High dimensionality
■ Incorporation of user-specified constraints
■ Interpretability and usability
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Data Structures

■ Data matrix
■ (two modes)

■ Dissimilarity matrix
■ (one mode)
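Since the slide's matrix figures did not survive extraction, here is a sketch of the two structures as they are usually drawn (n objects, p variables; the dissimilarity matrix stores d(i, j) and is kept as a lower triangle):

```latex
% Data matrix (two modes: n objects x p variables)
X = \begin{pmatrix}
x_{11} & \cdots & x_{1p} \\
\vdots & \ddots & \vdots \\
x_{n1} & \cdots & x_{np}
\end{pmatrix}
\qquad
% Dissimilarity matrix (one mode: objects only), lower triangular
D = \begin{pmatrix}
0 \\
d(2,1) & 0 \\
d(3,1) & d(3,2) & 0 \\
\vdots & \vdots & \vdots & \ddots \\
d(n,1) & d(n,2) & \cdots & \cdots & 0
\end{pmatrix}
```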

Measure the Quality of Clustering

■ Dissimilarity/Similarity metric: Similarity is expressed in


terms of a distance function, which is typically metric:
d(i, j)
■ There is a separate “quality” function that measures the

“goodness” of a cluster.
■ The definitions of distance functions are usually very
different for interval-scaled, boolean, categorical, ordinal
and ratio variables.
■ Weights should be associated with different variables

based on applications and data semantics.


■ It is hard to define “similar enough” or “good enough”
■ the answer is typically highly subjective.
Type of data in clustering analysis
■ Interval-scaled variables:
■ Binary variables:
■ Nominal, ordinal, and ratio variables:
■ Variables of mixed types:

Interval-valued variables
■ Standardize data
■ Calculate the mean absolute deviation:
s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + … + |x_nf − m_f|), where m_f = (1/n)(x_1f + x_2f + … + x_nf)
■ Calculate the standardized measurement (z-score):
z_if = (x_if − m_f) / s_f

■ Using the mean absolute deviation is more robust than using the standard deviation
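A minimal Python sketch of this standardization (the function name and sample data are ours, for illustration only):

```python
import numpy as np

def standardize(x):
    """z-scores of one variable, using the mean absolute deviation s_f
    (more robust to outliers than the standard deviation, as noted above)."""
    m = x.mean()
    s = np.abs(x - m).mean()      # mean absolute deviation
    return (x - m) / s

print(standardize(np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])))
```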
Similarity and Dissimilarity Between
Objects

■ Distances are normally used to measure the similarity or


dissimilarity between two data objects
■ Some popular ones include the Minkowski distance:
d(i, j) = (|x_i1 − x_j1|^q + |x_i2 − x_j2|^q + … + |x_ip − x_jp|^q)^(1/q)
where i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) are two p-dimensional data objects, and q is a positive integer
■ If q = 1, d is the Manhattan distance:
d(i, j) = |x_i1 − x_j1| + |x_i2 − x_j2| + … + |x_ip − x_jp|

Similarity and Dissimilarity Between
Objects (Cont.)

■ If q = 2, d is the Euclidean distance:
d(i, j) = (|x_i1 − x_j1|² + |x_i2 − x_j2|² + … + |x_ip − x_jp|²)^(1/2)
■ Properties
■ d(i,j) ≥ 0
■ d(i,i) = 0

■ d(i,j) = d(j,i)

■ d(i,j) ≤ d(i,k) + d(k,j)

■ Also one can use weighted distance, parametric Pearson product moment correlation, or other dissimilarity measures.
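A small Python sketch of the Minkowski family; q = 1 and q = 2 reproduce the Manhattan and Euclidean cases above (the example vectors are made up):

```python
import numpy as np

def minkowski(x, y, q):
    """Minkowski distance: q = 1 is Manhattan, q = 2 is Euclidean."""
    return float((np.abs(x - y) ** q).sum() ** (1.0 / q))

x, y = np.array([1.0, 2.0, 3.0]), np.array([4.0, 0.0, 3.0])
print(minkowski(x, y, 1), minkowski(x, y, 2))   # 5.0  ~3.61
```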
Binary Variables
■ A contingency table for binary data (rows: object i, columns: object j):

          1      0     sum
   1      a      b     a+b
   0      c      d     c+d
  sum    a+c    b+d     p

■ Simple matching coefficient (invariant, if the binary variable is symmetric): d(i, j) = (b + c) / (a + b + c + d)
■ Jaccard coefficient (noninvariant if the binary variable is asymmetric): d(i, j) = (b + c) / (a + b + c)
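A sketch computing both coefficients from two 0/1 vectors (the helper name binary_dissim is ours):

```python
import numpy as np

def binary_dissim(x, y, symmetric=True):
    """Simple matching (symmetric) or Jaccard (asymmetric) dissimilarity of
    two 0/1 vectors, via the contingency counts a, b, c, d above."""
    x, y = np.asarray(x), np.asarray(y)
    a = int(((x == 1) & (y == 1)).sum())
    b = int(((x == 1) & (y == 0)).sum())
    c = int(((x == 0) & (y == 1)).sum())
    d = int(((x == 0) & (y == 0)).sum())
    return (b + c) / (a + b + c + d) if symmetric else (b + c) / (a + b + c)

print(binary_dissim([1, 0, 1, 1], [1, 1, 0, 1], symmetric=False))  # 0.5
```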
Dissimilarity between Binary
Variables
■ Example

■ gender is a symmetric attribute


■ the remaining attributes are asymmetric binary
■ let the values Y and P be set to 1, and the value N be set to 0

Nominal Variables

■ A generalization of the binary variable in that it can take


more than 2 states, e.g., red, yellow, blue, green
■ Method 1: Simple matching
■ m: # of matches, p: total # of variables; d(i, j) = (p − m) / p

■ Method 2: use a large number of binary variables
■ creating a new binary variable for each of the M nominal states
Ordinal Variables
■ An ordinal variable can be discrete or continuous
■ order is important, e.g., rank
■ Can be treated like interval-scaled
■ replace x_if by its rank r_if ∈ {1, …, M_f}
■ map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by z_if = (r_if − 1) / (M_f − 1)
■ compute the dissimilarity using methods for


interval-scaled variables
Ratio-Scaled Variables

■ Ratio-scaled variable: a positive measurement on a


nonlinear scale, approximately at exponential scale,
such as Ae^(Bt) or Ae^(−Bt)
■ Methods:
■ treat them like interval-scaled variables — not a good
choice! (why?)
■ apply logarithmic transformation
yif = log(xif)
■ treat them as continuous ordinal data and treat their rank as interval-scaled
Variables of Mixed Types
■ A database may contain all six types of variables
■ symmetric binary, asymmetric binary, nominal, ordinal,

interval and ratio.


■ One may use a weighted formula to combine their effects:
d(i, j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f), summing over the p variables f
■ f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, d_ij^(f) = 1 otherwise
■ f is interval-based: use the normalized distance
■ f is ordinal or ratio-scaled: compute ranks r_if, set z_if = (r_if − 1) / (M_f − 1), and treat z_if as interval-scaled
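A rough sketch of the weighted formula, under the assumptions spelled out in the docstring (all names are ours):

```python
def mixed_dissim(xi, xj, types):
    """Weighted mixed-type dissimilarity d(i,j) = sum_f d_f / count of usable f,
    assuming interval/ordinal values were already mapped into [0, 1] and that
    a missing value is None (its delta weight is then 0)."""
    num = den = 0.0
    for f, kind in enumerate(types):
        a, b = xi[f], xj[f]
        if a is None or b is None:        # delta_ij^(f) = 0
            continue
        if kind in ("binary", "nominal"):
            d = 0.0 if a == b else 1.0
        else:                             # "interval" or "ordinal", already in [0, 1]
            d = abs(a - b)
        num += d
        den += 1.0
    return num / den if den else 0.0

# e.g. mixed_dissim([1, "red", 0.4], [0, "red", 0.9], ["binary", "nominal", "interval"])
```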
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Major Clustering Approaches

■ Partitioning algorithms: Construct various partitions and


then evaluate them by some criterion
■ Hierarchy algorithms: Create a hierarchical decomposition
of the set of data (or objects) using some criterion
■ Density-based: based on connectivity and density functions
■ Grid-based: based on a multiple-level granularity structure
■ Model-based: A model is hypothesized for each of the clusters and the idea is to find the best fit of the data to the given model
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Partitioning Algorithms: Basic Concept

■ Partitioning method: Construct a partition of a database D


of n objects into a set of k clusters
■ Given a k, find a partition of k clusters that optimizes the
chosen partitioning criterion
■ Global optimal: exhaustively enumerate all partitions
■ Heuristic methods: k-means and k-medoids algorithms
■ k-means (MacQueen’67): Each cluster is represented by
the center of the cluster
■ k-medoids or PAM (Partition Around Medoids) (Kaufman & Rousseeuw'87): Each cluster is represented by one of the objects in the cluster
The K-Means Clustering Method

■ Given k, the k-means algorithm is implemented in 4


steps:
■ Partition objects into k nonempty subsets

■ Compute seed points as the centroids of the

clusters of the current partition. The centroid is


the center (mean point) of the cluster.
■ Assign each object to the cluster with the nearest

seed point.
■ Go back to Step 2; stop when there are no new assignments.
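A minimal NumPy sketch of these four steps (initializing with k randomly chosen objects as seed points, a common variant):

```python
import numpy as np

def k_means(X, k, n_iter=100, seed=0):
    """Minimal k-means sketch; seeds are k randomly chosen objects."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each object to the cluster with the nearest seed point
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute seed points as the centroids (means) of the clusters
        new = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):   # stop when nothing moves any more
            break
        centroids = new
    return labels, centroids
```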
The K-Means Clustering Method
■ Example

Comments on the K-Means Method
■ Strength
■ Relatively efficient: O(tkn), where n is # objects, k is #

clusters, and t is # iterations. Normally, k, t << n.


■ Often terminates at a local optimum. The global optimum

may be found using techniques such as: deterministic


annealing and genetic algorithms
■ Weakness

■ Applicable only when mean is defined, then what about

categorical data?
■ Need to specify k, the number of clusters, in advance

■ Unable to handle noisy data and outliers


■ Not suitable to discover clusters with non-convex shapes
Variations of the K-Means Method
■ A few variants of the k-means which differ in
■ Selection of the initial k means

■ Dissimilarity calculations

■ Strategies to calculate cluster means

■ Handling categorical data: k-modes (Huang’98)

■ Replacing means of clusters with modes

■ Using new dissimilarity measures to deal with

categorical objects
■ Using a frequency-based method to update modes of

clusters
■ A mixture of categorical and numerical data: the k-prototype method
The K-Medoids Clustering Method

■ Find representative objects, called medoids, in clusters


■ PAM (Partitioning Around Medoids, 1987)
■ starts from an initial set of medoids and iteratively

replaces one of the medoids by one of the


non-medoids if it improves the total distance of the
resulting clustering
■ PAM works effectively for small data sets, but does not

scale well for large data sets


■ CLARA (Kaufmann & Rousseeuw, 1990)

■ CLARANS (Ng & Han, 1994): Randomized sampling


■ Focusing + spatial data structure (Ester et al., 1995)
PAM (Partitioning Around Medoids)
(1987)

■ PAM (Kaufman and Rousseeuw, 1987), built in Splus


■ Use real object to represent the cluster
■ Select k representative objects arbitrarily
■ For each pair of non-selected object h and selected object i, calculate the total swapping cost TC_ih
■ For each pair of i and h,
■ If TC_ih < 0, i is replaced by h
■Then assign each non-selected object to the most
similar representative object
■ Repeat steps 2-3 until there is no change
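A brute-force sketch of one PAM improvement pass (function names are ours); repeating pam_step until it returns False reproduces the "repeat until no change" loop above:

```python
import numpy as np

def total_cost(X, medoids):
    """Sum over all objects of the distance to the nearest medoid."""
    d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
    return d.min(axis=1).sum()

def pam_step(X, medoids):
    """Try every (medoid i, non-medoid h) swap; keep the first with TC_ih < 0."""
    base = total_cost(X, medoids)
    for pos, i in enumerate(medoids):
        for h in range(len(X)):
            if h in medoids:
                continue
            trial = medoids.copy()
            trial[pos] = h
            if total_cost(X, trial) - base < 0:   # TC_ih < 0: the swap helps
                return trial, True
    return medoids, False                          # local optimum reached
```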
PAM Clustering: Total swapping cost TC_ih = Σ_j C_jih
(figure: the four cases that determine each contribution C_jih, depending on whether object j is reassigned to h, to another medoid t, or keeps its current medoid)
CLARA (Clustering Large Applications) (1990)
■ CLARA (Kaufmann and Rousseeuw in 1990)
■ Built in statistical analysis packages, such as S+
■ It draws multiple samples of the data set, applies PAM on
each sample, and gives the best clustering as the output
■ Strength: deals with larger data sets than PAM
■ Weakness:
■ Efficiency depends on the sample size
■ A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased
CLARANS (“Randomized” CLARA) (1994)

■ CLARANS (A Clustering Algorithm based on Randomized


Search) (Ng and Han’94)
■ CLARANS draws sample of neighbors dynamically

■ The clustering process can be presented as searching a

graph where every node is a potential solution, that is, a


set of k medoids
■ If the local optimum is found, CLARANS starts with new

randomly selected node in search for a new local optimum


■ It is more efficient and scalable than both PAM and CLARA

■ Focusing techniques and spatial access structures may further improve its performance (Ester et al.'95)
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Hierarchical Clustering
■ Use distance matrix as clustering criteria. This method
does not require the number of clusters k as an input,
but needs a termination condition
(figure: with objects a, b, c, d, e, agglomerative clustering (AGNES) runs from Step 0 to Step 4, merging a+b → ab and d+e → de, then c+de → cde, and finally ab+cde → abcde; divisive clustering (DIANA) runs the same steps in reverse, from Step 4 down to Step 0)
AGNES (Agglomerative Nesting)

■ Introduced in Kaufmann and Rousseeuw (1990)


■ Implemented in statistical analysis packages, e.g., Splus
■ Use the Single-Link method and the dissimilarity matrix.
■ Merge nodes that have the least dissimilarity
■ Go on in a non-descending fashion
■ Eventually all nodes belong to the same cluster
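For illustration, SciPy's hierarchical clustering with the single-link criterion behaves like AGNES on a toy data set:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 4.9]])
Z = linkage(X, method="single")                    # merge least-dissimilar pairs first
labels = fcluster(Z, t=2, criterion="maxclust")    # cut the tree into 2 clusters
print(labels)                                      # e.g. [1 1 1 2 2 2]
```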

A Dendrogram Shows How the
Clusters are Merged Hierarchically

Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.

A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.

DIANA (Divisive Analysis)

■ Introduced in Kaufmann and Rousseeuw (1990)


■ Implemented in statistical analysis packages, e.g., Splus
■ Inverse order of AGNES
■ Eventually each node forms a cluster on its own

More on Hierarchical Clustering Methods

■ Major weakness of agglomerative clustering methods


■ do not scale well: time complexity of at least O(n²), where n is the number of total objects


■ can never undo what was done previously

■ Integration of hierarchical with distance-based clustering

■ BIRCH (1996): uses CF-tree and incrementally adjusts

the quality of sub-clusters


■ CURE (1998): selects well-scattered points from the

cluster and then shrinks them towards the center of the


cluster by a specified fraction
■ CHAMELEON (1999): hierarchical clustering using dynamic modeling
BIRCH (1996)
■ Birch: Balanced Iterative Reducing and Clustering using
Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD’96)
■ Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
■ Phase 1: scan DB to build an initial in-memory CF tree (a
multi-level compression of the data that tries to preserve
the inherent clustering structure of the data)
■ Phase 2: use an arbitrary clustering algorithm to cluster
the leaf nodes of the CF-tree
■ Scales linearly: finds a good clustering with a single scan
and improves the quality with a few additional scans
■ Weakness: handles only numeric data, and sensitive to the order of the data records.
Clustering Feature Vector

Clustering Feature: CF = (N, LS, SS)


N: Number of data points
LS: linear sum of the N points, Σ_{i=1..N} X_i
SS: square sum of the N points, Σ_{i=1..N} X_i²
Example: the five points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16,30), (54,190))
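The example CF checks out in a few lines of NumPy, along with the additivity property that makes the CF tree maintainable incrementally:

```python
import numpy as np

pts = np.array([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)], dtype=float)

def cf(points):
    """Clustering Feature (N, LS, SS) of a set of points."""
    return len(points), points.sum(axis=0), (points ** 2).sum(axis=0)

n, ls, ss = cf(pts)
print(n, ls, ss)                        # 5 [16. 30.] [ 54. 190.]
# CF additivity: the CF of two merged sub-clusters is the entry-wise sum,
# which is what lets BIRCH maintain the tree with a single incremental scan.
n1, ls1, ss1 = cf(pts[:2])
n2, ls2, ss2 = cf(pts[2:])
assert n1 + n2 == n and np.allclose(ls1 + ls2, ls) and np.allclose(ss1 + ss2, ss)
```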
CF Tree
(figure: a CF tree with branching factor B = 7 and leaf capacity L = 6; the root and non-leaf nodes hold entries CF_1, CF_2, … with pointers to children, and leaf nodes hold CF entries chained together by prev/next pointers)
CURE (Clustering Using
REpresentatives )

■ CURE: proposed by Guha, Rastogi & Shim, 1998


■ Stops the creation of a cluster hierarchy if a level
consists of k clusters
■ Uses multiple representative points to evaluate the distance between clusters; adjusts well to arbitrarily shaped clusters and avoids the single-link effect
Drawbacks of Distance-Based
Method

■ Drawbacks of square-error based clustering method


■ Consider only one point as representative of a cluster

■ Good only for clusters of convex shape, similar size and density, and only if k can be reasonably estimated
Cure: The Algorithm

■ Draw random sample s.


■ Partition the sample into p partitions, each of size s/p
■ Partially cluster partitions into s/pq clusters
■ Eliminate outliers
■ By random sampling
■ If a cluster grows too slowly, eliminate it.
■ Cluster partial clusters.
■ Label data on disk
Data Partitioning and Clustering
■ s = 50, p = 2, s/p = 25, s/pq = 5
(figure: the sample of 50 points is split into two partitions of 25 points each, and each partition is partially clustered into 5 clusters)
Cure: Shrinking Representative Points
(figure: representative points of a cluster before and after shrinking)
■ Shrink the multiple representative points towards the gravity center by a fraction α
■ Multiple representatives capture the shape of the cluster
Clustering Categorical Data: ROCK

■ ROCK: Robust Clustering using linKs,


by S. Guha, R. Rastogi, K. Shim (ICDE’99).
■ Use links to measure similarity/proximity

■ Not distance based

■ Computational complexity:

■ Basic ideas:
■ Similarity function and neighbors:
Let T1 = {1,2,3}, T2 = {3,4,5}; then Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2| = 1/5 = 0.2
Rock: Algorithm
■ Links: the number of common neighbours of the two points.
Example: among the point sets {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}, the points {1,2,3} and {1,2,4} have 3 links
■ Algorithm
■ Draw random sample

■ Cluster with links

■ Label data on disk
CHAMELEON

■ CHAMELEON: hierarchical clustering using dynamic


modeling, by G. Karypis, E.H. Han and V. Kumar’99
■ Measures the similarity based on a dynamic model

■ Two clusters are merged only if the interconnectivity


and closeness (proximity) between two clusters are
high relative to the internal interconnectivity of the
clusters and closeness of items within the clusters
■ A two phase algorithm

■ 1. Use a graph partitioning algorithm: cluster objects


into a large number of relatively small sub-clusters
■ 2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters
Overall Framework of CHAMELEON
(figure: Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters)
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Density-Based Clustering Methods
■ Clustering based on density (local cluster criterion),
such as density-connected points
■ Major features:
■ Discover clusters of arbitrary shape

■ Handle noise

■ One scan
■ Need density parameters as termination condition

■ Several interesting studies:

■ DBSCAN: Ester, et al. (KDD’96)

■ OPTICS: Ankerst, et al (SIGMOD’99).

■ DENCLUE: Hinneburg & D. Keim (KDD’98)


■ CLIQUE: Agrawal, et al. (SIGMOD’98)
Density-Based Clustering: Background
■ Two parameters:
■ Eps : Maximum radius of the neighbourhood
■ MinPts : Minimum number of points in an
Eps-neighbourhood of that point
■ N_Eps(p): {q belongs to D | dist(p, q) ≤ Eps}
■ Directly density-reachable: A point p is directly density-reachable from a point q wrt. Eps, MinPts if
■ 1) p belongs to N_Eps(q)
■ 2) core point condition: |N_Eps(q)| ≥ MinPts
(figure: example with MinPts = 5 and Eps = 1 cm)
Density-Based Clustering: Background (II)

■ Density-reachable:
■ A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p_1, …, p_n, with p_1 = q and p_n = p, such that p_{i+1} is directly density-reachable from p_i
■ Density-connected:
■ A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts
(figures: a chain q = p_1 → … → p illustrating density-reachability; a point o from which both p and q are density-reachable)
DBSCAN: Density Based Spatial
Clustering of Applications with Noise

■ Relies on a density-based notion of cluster: A cluster is


defined as a maximal set of density-connected points
■ Discovers clusters of arbitrary shape in spatial databases
with noise

(figure: core, border, and outlier points, with Eps = 1 cm and MinPts = 5)
DBSCAN: The Algorithm

■ Arbitrarily select a point p


■ Retrieve all points density-reachable from p wrt Eps
and MinPts .
■ If p is a core point, a cluster is formed.
■ If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the
database.
■ Continue the process until all of the points have been processed.
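A compact sketch of this loop with brute-force neighbourhood queries (a spatial index is what makes the real algorithm efficient); the names are ours, and label −1 marks noise:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch: -1 labels noise, 0..k-1 label clusters."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -2)                 # -2 = not yet visited
    cid = 0
    for i in range(n):
        if labels[i] != -2:
            continue
        if len(nbrs[i]) < min_pts:
            labels[i] = -1                  # provisionally noise (may become border)
            continue
        labels[i] = cid                     # i is a core point: grow a cluster
        seeds = list(nbrs[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid             # noise reached from a core point = border
            if labels[j] != -2:
                continue
            labels[j] = cid
            if len(nbrs[j]) >= min_pts:     # j is core too: keep expanding
                seeds.extend(nbrs[j])
        cid += 1
    return labels
```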
OPTICS: A Cluster-Ordering Method (1999)

■ OPTICS: Ordering Points To Identify the Clustering


Structure
■ Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)

■ Produces a special order of the database wrt its

density-based clustering structure


■ This cluster-ordering contains info equiv to the

density-based clusterings corresponding to a broad


range of parameter settings
■ Good for both automatic and interactive cluster

analysis, including finding intrinsic clustering structure


■ Can be represented graphically or using visualization techniques
OPTICS: Some Extension from
DBSCAN
■ Index-based:
■ k = number of dimensions

■ N = 20

■ p = 75%
■ M = N(1-p) = 5
■ Complexity: O(kN²)
■ Core Distance
■ Reachability Distance: max(core-distance(o), d(o, p))
(figure: with MinPts = 5 and ε = 3 cm, r(p1, o) = 2.8 cm and r(p2, o) = 4 cm)
Reachability-distance plot
(figure: reachability-distance, undefined for the first object, plotted against the cluster order of the objects)
DENCLUE: using density functions

■ DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)


■ Major features
■ Solid mathematical foundation
■ Good for data sets with large amounts of noise
■ Allows a compact mathematical description of arbitrarily
shaped clusters in high-dimensional data sets
■ Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)
■ But needs a large number of parameters
Denclue: Technical Essence
■ Uses grid cells but only keeps information about grid
cells that do actually contain data points and manages
these cells in a tree-based access structure.
■ Influence function: describes the impact of a data point
within its neighborhood.
■ Overall density of the data space can be calculated as
the sum of the influence function of all data points.
■ Clusters can be determined mathematically by
identifying density attractors.
■ Density attractors are local maxima of the overall density function.
Gradient: The steepness of a slope

■ Example

Density Attractor

Center-Defined and Arbitrary

Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Grid-Based Clustering Method
■ Using multi-resolution grid data structure
■ Several interesting methods
■ STING (a STatistical INformation Grid approach)
by Wang, Yang and Muntz (1997)
■ WaveCluster by Sheikholeslami, Chatterjee, and
Zhang (VLDB’98)
■ A multi-resolution clustering approach using
wavelet method
■ CLIQUE: Agrawal, et al. (SIGMOD’98)
STING: A Statistical Information Grid
Approach
■ Wang, Yang and Muntz (VLDB’97)
■ The spatial area is divided into rectangular cells
■ There are several levels of cells corresponding to different
levels of resolution

STING: A Statistical Information
Grid Approach (2)
■ Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
■ Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
■ Parameters of higher level cells can be easily calculated from
parameters of lower level cell
■ count, mean, s, min, max

■ type of distribution—normal, uniform, etc.

■ Use a top-down approach to answer spatial data queries


■ Start from a pre-selected layer—typically with a small
number of cells
■ For each cell in the current level compute the confidence
interval
STING: A Statistical Information
Grid Approach (3)
■ Remove the irrelevant cells from further consideration
■ When finished examining the current layer, proceed to the next lower level
■ Repeat this process until the bottom layer is reached
■ Advantages:
■ Query-independent, easy to parallelize, incremental

update
■ O(K), where K is the number of grid cells at the

lowest level
■ Disadvantages:
■ All the cluster boundaries are either horizontal or

vertical, and no diagonal boundary is detected


WaveCluster (1998)
■ Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
■ A multi-resolution clustering approach which applies
wavelet transform to the feature space
■ A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-bands.
■ Both grid-based and density-based
■ Input parameters:
■ # of grid cells for each dimension
■ the wavelet, and the # of applications of the wavelet transform
What is Wavelet (1)?

WaveCluster (1998)
■ How to apply wavelet transform to find clusters
■ Summarize the data by imposing a multidimensional grid structure onto the data space
■ These multidimensional spatial data objects are represented in an n-dimensional feature space


■ Apply wavelet transform on feature space to find the

dense regions in the feature space


■ Apply wavelet transform multiple times, which results in clusters at different scales from fine to coarse
What Is Wavelet (2)?

Quantization

Transformation

WaveCluster (1998)
■ Why is wavelet transformation useful for clustering
■ Unsupervised clustering

■ It uses hat-shaped filters to emphasize regions where points cluster and simultaneously suppress weaker information on their boundaries
■ Effective removal of outliers

■ Multi-resolution

■ Cost efficiency

■ Major features:

■ Complexity O(N)

■ Detect arbitrary shaped clusters at different scales

■ Not sensitive to noise, not sensitive to input order
■ Only applicable to low-dimensional data
CLIQUE (Clustering In QUEst)
■ Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98).
■ Automatically identifying subspaces of a high dimensional
data space that allow better clustering than original space
■ CLIQUE can be considered as both density-based and
grid-based
■ It partitions each dimension into the same number of equal-length intervals
■ It partitions an m-dimensional data space into
non-overlapping rectangular units
■ A unit is dense if the fraction of total data points
contained in the unit exceeds the input model parameter
■ A cluster is a maximal set of connected dense units within a subspace
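A sketch of the first CLIQUE pass, finding dense units in each one-dimensional subspace; n_intervals and tau stand in for the slide's grid and density parameters, and higher-dimensional dense units would then be generated Apriori-style:

```python
import numpy as np
from collections import Counter

def dense_units_1d(X, n_intervals, tau):
    """Dense units per 1-D subspace: a grid cell is dense when the fraction
    of all points it contains exceeds the threshold tau."""
    n, p = X.shape
    dense = {}
    for dim in range(p):
        lo, hi = X[:, dim].min(), X[:, dim].max()
        width = (hi - lo) / n_intervals or 1.0       # guard against a flat dimension
        cells = Counter(min(int((v - lo) / width), n_intervals - 1)
                        for v in X[:, dim])
        dense[dim] = sorted(c for c, cnt in cells.items() if cnt / n > tau)
    return dense
```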
CLIQUE: The Major Steps
■ Partition the data space and find the number of points that
lie inside each cell of the partition.
■ Identify the subspaces that contain clusters using the
Apriori principle
■ Identify clusters:
■ Determine dense units in all subspaces of interests
■ Determine connected dense units in all subspaces of
interests.
■ Generate minimal description for the clusters
■ Determine maximal regions that cover a cluster of
connected dense units for each cluster
■ Determination of minimal cover for each cluster
(figure: CLIQUE example — dense units found in the (age, salary) and (age, vacation) planes with density threshold τ = 3; intersecting the dense regions identifies a candidate cluster in the (age, salary, vacation) subspace)
Strength and Weakness of CLIQUE

■ Strength
■ It automatically finds subspaces of the highest

dimensionality such that high density clusters exist in


those subspaces
■ It is insensitive to the order of records in input and

does not presume some canonical data distribution


■ It scales linearly with the size of input and has good

scalability as the number of dimensions in the data


increases
■ Weakness

■ The accuracy of the clustering result may be degraded at the expense of the simplicity of the method
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Model-Based Clustering Methods
■ Attempt to optimize the fit between the data and some
mathematical model
■ Statistical and AI approach
■ Conceptual clustering

■ A form of clustering in machine learning


■ Produces a classification scheme for a set of unlabeled objects
■ Finds characteristic description for each concept (class)
■ COBWEB (Fisher’87)
■ A popular and simple method of incremental conceptual learning
■ Creates a hierarchical clustering in the form of a classification
tree
■ Each node refers to a concept and contains a probabilistic description of that concept
COBWEB Clustering Method

(figure: a classification tree)
More on Statistical-Based Clustering

■ Limitations of COBWEB
■ The assumption that the attributes are independent
of each other is often too strong because correlation
may exist
■ Not suitable for clustering large database data –
skewed tree and expensive probability distributions
■ CLASSIT
■ an extension of COBWEB for incremental clustering
of continuous data
■ suffers similar problems as COBWEB

■ AutoClass (Cheeseman and Stutz, 1996)


■ Uses Bayesian statistical analysis to estimate the
number of clusters
■ Popular in industry
Other Model-Based Clustering
Methods

■ Neural network approaches


■ Represent each cluster as an exemplar, acting as a

“prototype” of the cluster


■ New objects are distributed to the cluster whose exemplar is the most similar according to some distance measure
■ Competitive learning

■ Involves a hierarchical architecture of several units

(neurons)
■ Neurons compete in a “winner-takes-all” fashion for the object currently being presented
Model-Based Clustering Methods

Self-organizing feature maps (SOMs)

■ Clustering is also performed by having several


units competing for the current object
■ The unit whose weight vector is closest to the
current object wins
■ The winner and its neighbors learn by having
their weights adjusted
■ SOMs are believed to resemble processing that
can occur in the brain
■ Useful for visualizing high-dimensional data in 2-D or 3-D space
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
What Is Outlier Discovery?
■ What are outliers?
■ A set of objects that are considerably dissimilar from the remainder of the data
■ Example: Sports: Michael Jordan, Wayne Gretzky, ...

■ Problem
■ Find top n outlier points

■ Applications:
■ Credit card fraud detection

■ Telecom fraud detection

■ Customer segmentation

■ Medical analysis
Outlier Discovery:
Statistical Approaches

■ Assume a model of the underlying distribution that generates the data set (e.g., normal distribution)
■ Use discordancy tests depending on

■ data distribution

■ distribution parameter (e.g., mean, variance)

■ number of expected outliers

■ Drawbacks

■ most tests are for a single attribute
■ in many cases, the data distribution may not be known
Outlier Discovery:
Distance-Based Approach

■ Introduced to counter the main limitations imposed by


statistical methods
■ We need multi-dimensional analysis without knowing

data distribution.
■ Distance-based outlier: A DB(p, D)-outlier is an object O
in a dataset T such that at least a fraction p of the
objects in T lies at a distance greater than D from O
■ Algorithms for mining distance-based outliers
■ Index-based algorithm

■ Nested-loop algorithm

■ Cell-based algorithm
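A nested-loop sketch of the DB(p, D) test (its quadratic cost is exactly what the index- and cell-based algorithms avoid):

```python
import numpy as np

def db_outliers(X, p, D):
    """Flag object O as a DB(p, D)-outlier when at least a fraction p of the
    other objects lie at distance greater than D from O."""
    n = len(X)
    outliers = []
    for i in range(n):
        far = sum(np.linalg.norm(X[i] - X[j]) > D for j in range(n) if j != i)
        if far / (n - 1) >= p:
            outliers.append(i)
    return outliers
```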
Outlier Discovery:
Deviation-Based Approach
■ Identifies outliers by examining the main characteristics
of objects in a group
■ Objects that “deviate” from this description are
considered outliers
■ sequential exception technique
■ simulates the way in which humans can distinguish
unusual objects from among a series of supposedly
like objects
■ OLAP data cube technique
■ uses data cubes to identify regions of anomalies in large multidimensional data
Chapter 8. Cluster Analysis

■ What is Cluster Analysis?


■ Types of Data in Cluster Analysis
■ A Categorization of Major Clustering Methods
■ Partitioning Methods
■ Hierarchical Methods
■ Density-Based Methods
■ Grid-Based Methods
■ Model-Based Clustering Methods
■ Outlier Analysis
■ Summary
Problems and Challenges

■ Considerable progress has been made in scalable


clustering methods
■ Partitioning: k-means, k-medoids, CLARANS
■ Hierarchical: BIRCH, CURE
■ Density-based: DBSCAN, CLIQUE, OPTICS
■ Grid-based: STING, WaveCluster
■ Model-based: Autoclass, Denclue, Cobweb
■ Current clustering techniques do not address all the
requirements adequately
■ Constraint-based clustering analysis: Constraints exist in data space (bridges and highways) or in user queries
Constraint-Based Clustering Analysis

■ Clustering analysis: fewer parameters but more user-desired constraints, e.g., an ATM allocation problem

Summary

■ Cluster analysis groups objects based on their similarity


and has wide applications
■ Measure of similarity can be computed for various types
of data
■ Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
■ Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
■ There are still lots of research issues on cluster analysis, such as constraint-based clustering
References (1)
■ R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of
high dimensional data for data mining applications. SIGMOD'98
■ M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
■ M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points to identify
the clustering structure, SIGMOD’99.
■ P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
■ M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering
clusters in large spatial databases. KDD'96.
■ M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing
techniques for efficient class identification. SSD'95.
■ D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning,
2:139-172, 1987.
■ D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based
on dynamic systems. In Proc. VLDB’98.
■ S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases. SIGMOD'98.
■ A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
References (2)

■ L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster


Analysis. John Wiley & Sons, 1990.
■ E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets.
VLDB’98.
■ G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.
■ P. Michaud. Clustering techniques. Future Generation Computer systems, 13, 1997.
■ R. Ng and J. Han. Efficient and effective clustering method for spatial data mining.
VLDB'94.
■ E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large
data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
■ G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution
clustering approach for very large spatial databases. VLDB’98.
■ W. Wang, Yang, R. Muntz, STING: A Statistical Information grid Approach to Spatial
Data Mining, VLDB’97.
■ T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: an efficient data clustering method for very large databases. SIGMOD'96.
http://www.cs.sfu.ca/~han

Thank you !!!