Chapter 7. Cluster Analysis

Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
8. Model-Based Methods
9. Clustering High-Dimensional Data
10. Constraint-Based Clustering
11. Outlier Analysis
12. Summary
What is Cluster Analysis?
Clustering: the process of grouping a set of physical or
abstract objects into classes of similar objects. A cluster is a
collection of data objects that are:
Similar to one another within the same cluster
Dissimilar to the objects in other clusters
Cluster analysis
Finding similarities between data according to the
characteristics found in the data and grouping similar
data objects into clusters
Unsupervised learning: no predefined classes
Also used for outlier detection: outliers can be more interesting
than the common cases
Clustering: Rich Applications and
Multidisciplinary Efforts
Pattern Recognition
Spatial Data Analysis
Image Processing
Economic Science (especially market research)
WWW
Document classification
Cluster Weblog data to discover groups of
similar access patterns
Requirements of Clustering in Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to
determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
Ability to handle high dimensionality
Incorporation of user-specified constraints
Interpretability and usability
Data Structures
Data matrix (two modes): n objects described by p variables
$$\begin{bmatrix}
x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\
\vdots &        & \vdots &        & \vdots \\
x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\
\vdots &        & \vdots &        & \vdots \\
x_{n1} & \cdots & x_{nf} & \cdots & x_{np}
\end{bmatrix}$$
Dissimilarity matrix (one mode): pairwise distances between the n objects
$$\begin{bmatrix}
0      &        &        &   \\
d(2,1) & 0      &        &   \\
d(3,1) & d(3,2) & 0      &   \\
\vdots & \vdots & \vdots &   \\
d(n,1) & d(n,2) & \cdots & 0
\end{bmatrix}$$
Type of data in clustering analysis
Interval-scaled variables
Binary variables
Nominal, ordinal, and ratio variables
Variables of mixed types
Interval-valued variables
Standardize the data:
Calculate the mean absolute deviation
$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$
where
$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$
Calculate the standardized measurement (z-score)
$$z_{if} = \frac{x_{if} - m_f}{s_f}$$
Using the mean absolute deviation is more robust than using the
standard deviation
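A minimal sketch (not from the slides) of this standardization in Python; the small sample matrix is made up for illustration.

```python
import numpy as np

def standardize(X):
    X = np.asarray(X, dtype=float)      # n objects x p variables
    m = X.mean(axis=0)                  # per-variable mean m_f
    s = np.abs(X - m).mean(axis=0)      # mean absolute deviation s_f
    return (X - m) / s                  # z_if = (x_if - m_f) / s_f

X = [[22, 1, 42, 10],
     [20, 0, 36,  8],
     [25, 1, 40,  9]]
print(standardize(X))
```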
Similarity and Dissimilarity Between
Objects
Distances are normally used to measure the similarity or
dissimilarity between two data objects
Some popular ones include the Minkowski distance:
$$d(i,j) = \sqrt[q]{|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q}$$
where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are
two p-dimensional data objects, and q is a positive integer
If q = 1, d is the Manhattan distance:
$$d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$$
Similarity and Dissimilarity Between
Objects (Cont.)
If q = 2, d is the Euclidean distance:
$$d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$$
Also, one can use a weighted distance, the parametric Pearson
product-moment correlation, or other dissimilarity measures
Given two objects represented by the tuples (22, 1, 42, 10)
and (20, 0, 36, 8):
(a) Compute the Euclidean distance between the two
objects.
(b) Compute the Manhattan distance between the two
objects.
(c) Compute the Minkowski distance between the two
objects, using q = 3.
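A minimal sketch (not part of the original slides) that answers (a)-(c) with a single Minkowski helper; the printed values are approximate.

```python
def minkowski(x, y, q):
    # d(i,j) = (sum_f |x_f - y_f|^q)^(1/q)
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1 / q)

x, y = (22, 1, 42, 10), (20, 0, 36, 8)
print(minkowski(x, y, 2))   # (a) Euclidean distance  ~ 6.71
print(minkowski(x, y, 1))   # (b) Manhattan distance  = 11.0
print(minkowski(x, y, 3))   # (c) Minkowski, q = 3    ~ 6.15
```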
Binary Variables
A contingency table for binary data (object i vs. object j):

                 object j
                   1      0     sum
object i    1      a      b     a+b
            0      c      d     c+d
          sum     a+c    b+d     p

Distance measure for symmetric binary variables:
$$d(i,j) = \frac{b + c}{a + b + c + d}$$
Distance measure for asymmetric binary variables:
$$d(i,j) = \frac{b + c}{a + b + c}$$
Jaccard coefficient (a similarity measure for asymmetric binary variables):
$$\mathrm{sim}_{\mathrm{Jaccard}}(i,j) = \frac{a}{a + b + c}$$
Dissimilarity between Binary Variables
Example




gender is a symmetric attribute
the remaining attributes are asymmetric binary
let the values Y and P be set to 1, and the value N be set to 0
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
$$d(\mathrm{jack},\mathrm{mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$$
$$d(\mathrm{jack},\mathrm{jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$$
$$d(\mathrm{jim},\mathrm{mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$$
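A minimal sketch (not from the slides) that reproduces these values from the patient table, using the asymmetric binary dissimilarity d = (b + c) / (a + b + c) with Y and P mapped to 1, N to 0, and gender omitted.

```python
patients = {
    "jack": [1, 0, 1, 0, 0, 0],   # Fever, Cough, Test-1 .. Test-4
    "mary": [1, 0, 1, 0, 1, 0],
    "jim":  [1, 1, 0, 0, 0, 0],
}

def asym_binary_d(x, y):
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

print(round(asym_binary_d(patients["jack"], patients["mary"]), 2))  # 0.33
print(round(asym_binary_d(patients["jack"], patients["jim"]), 2))   # 0.67
print(round(asym_binary_d(patients["jim"], patients["mary"]), 2))   # 0.75
```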
Nominal Variables
A generalization of the binary variable in that it
can take more than 2 states, e.g., red, yellow,
blue, green
Method 1: Simple matching
m: # of matches, p: total # of variables
$$d(i,j) = \frac{p - m}{p}$$
Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank
Can be treated like interval-scaled
replace $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
map the range of each variable onto [0, 1] by replacing the
i-th object in the f-th variable by
$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$
compute the dissimilarity using methods for interval-scaled
variables
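A small illustrative sketch of this rank mapping; the variable and its ordering below are made up, not from the slides.

```python
states = ["fair", "good", "excellent"]            # assumed ordering, M_f = 3
rank = {s: i + 1 for i, s in enumerate(states)}   # r_if in {1, ..., M_f}
M_f = len(states)

def to_interval_scale(value):
    return (rank[value] - 1) / (M_f - 1)          # z_if = (r_if - 1) / (M_f - 1)

print([to_interval_scale(v) for v in ["good", "fair", "excellent"]])  # [0.5, 0.0, 1.0]
```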
Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a nonlinear scale,
approximately exponential, such as $Ae^{Bt}$ or $Ae^{-Bt}$
Methods:
treat them like interval-scaled variables (not a good choice: the
scale can be distorted)
apply a logarithmic transformation: $y_{if} = \log(x_{if})$
treat them as continuous ordinal data and treat their ranks as
interval-scaled
Variables of Mixed Types
A database may contain all six types of variables:
symmetric binary, asymmetric binary, nominal,
ordinal, interval, and ratio
One may use a weighted formula to combine their effects:
$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} \, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$
f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, and $d_{ij}^{(f)} = 1$ otherwise
f is interval-based: use the normalized distance
f is ordinal or ratio-scaled: compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$,
and treat $z_{if}$ as interval-scaled
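A rough sketch (not from the slides) of this weighted combination for one made-up object pair with a nominal, an asymmetric binary, and an interval-scaled variable; following the usual convention, the weight is 0 when an asymmetric binary variable has a 0-0 match.

```python
def mixed_dissim(x, y, types, ranges):
    num = den = 0.0
    for f, t in enumerate(types):
        if t == "asymmetric" and x[f] == y[f] == 0:
            continue                                   # delta = 0 for a 0-0 match
        if t in ("nominal", "asymmetric"):
            d = 0.0 if x[f] == y[f] else 1.0
        else:                                          # interval-scaled: normalized distance
            d = abs(x[f] - y[f]) / ranges[f]
        num += d
        den += 1.0
    return num / den

x = ("M", 1, 35.0)     # illustrative (nominal, asymmetric binary, interval) object
y = ("F", 0, 50.0)
print(mixed_dissim(x, y, ["nominal", "asymmetric", "interval"], {2: 60.0}))   # 0.75
```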
Major Clustering Approaches (I)
Partitioning approach:
Construct various partitions and then evaluate them by some criterion,
e.g., minimizing the sum of square errors
Typical methods: k-means, k-medoids
Hierarchical approach:
Create a hierarchical decomposition of the set of data (or objects) using
some criterion: bottom-up (the agglomerative approach) or
top-down (the divisive approach)
Density-based approach:
Based on connectivity and density functions
Keep growing a given cluster as long as the density exceeds some threshold
Can discover clusters of arbitrary shape, which most partitioning
methods find difficult
Major Clustering Approaches (II)
Grid-based approach:
based on a multiple-level granularity structure
STING (STatistical INformation Grid)
Model-based:
A model is hypothesized for each of the clusters, and the aim is to find
the best fit of the data to the given model
e.g., probability distributions such as Gaussian distributions
Frequent pattern-based:
Based on the analysis of frequent patterns
Typical methods: pCluster
User-guided or constraint-based:
Clustering by considering user-specified or application-specific constraints
Typical methods: COD (obstacles), constrained clustering
The K-Means Clustering Method
Example
[Figure: five scatter plots (both axes 0-10) illustrating k-means with K = 2:
arbitrarily choose K objects as the initial cluster centers; assign each object
to the most similar center; update the cluster means; reassign objects to the
updated means; update the means again, and repeat the reassign/update loop
until the assignments no longer change.]
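A minimal sketch (not the slides' code) of the same choose/assign/update loop in Python; the toy points are made up.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # arbitrary initial centers
    for _ in range(n_iter):
        # assign each object to the most similar (nearest) center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update the cluster means
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                 # no change: stop
            break
        centers = new_centers
    return labels, centers

X = [[1, 1], [1.5, 2], [3, 4], [5, 7], [3.5, 5], [4.5, 5], [3.5, 4.5]]
labels, centers = kmeans(X, k=2)
print(labels)
print(centers)
```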
A Typical K-Medoids Algorithm (PAM)
[Figure: scatter plots (both axes 0-10) illustrating PAM with K = 2:
arbitrarily choose k objects as the initial medoids; assign each remaining
object to the nearest medoid; randomly select a non-medoid object O_random;
compute the total cost of swapping a medoid O with O_random (the example shows
total costs of 20 and 26); swap O and O_random if the quality is improved;
repeat the loop until there is no change.]
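A rough sketch (not the slides' algorithm) of the PAM swap idea; for brevity it tries every medoid/non-medoid swap instead of sampling O_random at random, and the toy points are made up.

```python
import numpy as np

def total_cost(X, medoids):
    # sum of distances from each point to its nearest medoid
    d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
    return d.min(axis=1).sum()

def k_medoids(X, k, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(len(X), size=k, replace=False))
    improved = True
    while improved:                                  # do ... until no change
        improved = False
        for m in range(k):
            for o in range(len(X)):
                if o in medoids:
                    continue
                candidate = medoids.copy()
                candidate[m] = o                     # swap medoid m with non-medoid o
                if total_cost(X, candidate) < total_cost(X, medoids):
                    medoids, improved = candidate, True
    return medoids

X = [[1, 1], [1.5, 2], [3, 4], [5, 7], [3.5, 5], [4.5, 5], [9, 9]]
print(k_medoids(X, k=2))
```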
Hierarchical Clustering
Uses a distance matrix as the clustering criterion. This method
does not require the number of clusters k as an input,
but needs a termination condition.
[Figure: objects a, b, c, d, e over steps 0-4. The agglomerative approach
(AGNES) merges bottom-up: {a,b} and {d,e} first, then {c,d,e}, and finally
{a,b,c,d,e}. The divisive approach (DIANA) splits top-down through the same
stages in reverse order.]
Density-Reachable and Density-Connected
Density-reachable:
A point p is density-reachable from a point q w.r.t. Eps, MinPts if there
is a chain of points $p_1, \ldots, p_n$ with $p_1 = q$ and $p_n = p$ such that
$p_{i+1}$ is directly density-reachable from $p_i$
Density-connected:
A point p is density-connected to a point q w.r.t. Eps, MinPts if there
is a point o such that both p and q are density-reachable from o
w.r.t. Eps and MinPts
[Figure: p is density-reachable from q through an intermediate point $p_1$;
p and q are density-connected via a common point o.]
STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different
levels of resolution
Model-Based Clustering
What is model-based clustering?
Attempt to optimize the fit between the given data and
some mathematical model
Based on the assumption that data are generated by a
mixture of underlying probability distributions
Typical methods
Statistical approach
EM (Expectation maximization), AutoClass
Machine learning approach
COBWEB, CLASSIT
Neural network approach
SOM (Self-Organizing Feature Map)
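A rough sketch (not from the slides) of the EM idea for a two-component, one-dimensional Gaussian mixture; the synthetic data and the initialization are illustrative.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])                 # crude initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        pdf = weights * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return weights, mu, var

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 100)])
print(em_gmm_1d(data))
```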
Clustering High-Dimensional Data
Clustering high-dimensional data
Many applications: text documents, DNA micro-array data
Major challenges:
Many irrelevant dimensions may mask clusters
Distance measure becomes meaningless due to equi-distance
Clusters may exist only in some subspaces
Methods
Feature transformation: only effective if most dimensions are relevant
PCA & SVD useful only when features are highly correlated/redundant
Feature selection: wrapper or filter approaches
useful to find a subspace where the data have nice clusters
Subspace-clustering: find clusters in all the possible subspaces
CLIQUE, ProClus, and frequent pattern-based clustering
Why Constraint-Based Cluster Analysis?
Need user feedback: Users know their applications the best
Fewer parameters but more user-desired constraints, e.g., an
ATM allocation problem: obstacles and desired clusters
What Is Outlier Discovery?
What are outliers?
A set of objects that are considerably dissimilar from the
remainder of the data
Example: in sports, Michael Jordan, Wayne Gretzky, ...
Problem: Define and find outliers in large data sets
Applications:
Credit card fraud detection
Telecom fraud detection
Customer segmentation
Medical analysis
Outlier Discovery:
Statistical Approaches
Assume a model of the underlying distribution that generates the
data set (e.g., a normal distribution)
Use discordancy tests depending on
data distribution
distribution parameter (e.g., mean, variance)
number of expected outliers
Drawbacks
most tests are for a single attribute
In many cases, data distribution may not be known
Density-Based Local
Outlier Detection
Distance-based outlier detection is based on the global distance
distribution
It has difficulty identifying outliers if the data are not uniformly
distributed
Example: $C_1$ contains 400 loosely distributed points, $C_2$ has 100
tightly condensed points, plus two outlier points $o_1$ and $o_2$
A distance-based method cannot identify $o_2$ as an outlier
This motivates the concept of a local outlier
Local outlier factor (LOF): assumes "outlier" is not a crisp property;
each point is assigned a LOF score
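A rough sketch of the local-density idea behind LOF (a simplified score, not the paper's exact definition); the synthetic clusters mimic the C1/C2/o2 example above.

```python
import numpy as np

def local_outlier_scores(X, k=5):
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]                 # k nearest neighbors of each point
    kdist = np.take_along_axis(D, knn, axis=1).mean(axis=1)
    density = 1.0 / (kdist + 1e-12)                    # local density of each point
    neighbor_density = density[knn].mean(axis=1)       # average density of its neighbors
    return neighbor_density / density                  # values >> 1 suggest local outliers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 2.0, (40, 2)),            # loose cluster, like C1
               rng.normal(8, 0.3, (20, 2)),            # tight cluster, like C2
               [[8.0, 4.0]]])                          # a point near C2 but outside it, like o2
scores = local_outlier_scores(X, k=5)
print(scores[-1])                                      # large score flags the local outlier
```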
Outlier Discovery: Deviation-Based Approach
Identifies outliers by examining the main characteristics
of objects in a group
Objects that deviate from this description are
considered outliers
Sequential exception technique
simulates the way in which humans can distinguish
unusual objects from among a series of supposedly
like objects
OLAP data cube technique
uses data cubes to identify regions of anomalies in
large multidimensional data
Summary
Cluster analysis groups objects based on their similarity
and has wide applications
Measure of similarity can be computed for various types
of data
Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
There are still lots of research issues on cluster analysis
Problems and Challenges
Considerable progress has been made in scalable
clustering methods
Partitioning: k-means, k-medoids, CLARANS
Hierarchical: BIRCH, ROCK, CHAMELEON
Density-based: DBSCAN, OPTICS, DenClue
Grid-based: STING, WaveCluster, CLIQUE
Model-based: EM, Cobweb, SOM
Frequent pattern-based: pCluster
Constraint-based: COD, constrained-clustering
Current clustering techniques do not address all the
requirements adequately, still an active area of research
References (1)
R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high
dimensional data for data mining applications. SIGMOD'98
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering Points to Identify the
Clustering Structure. SIGMOD'99.
P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996
Beil F., Ester M., Xu X.: "Frequent Term-Based Text Clustering", KDD'02
M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based Local Outliers.
SIGMOD 2000.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in
large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing
techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-
172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on
dynamic systems. VLDB98.
References (2)
V. Ganti, J. Gehrke, R. Ramakrishan. CACTUS Clustering Categorical Data Using Summaries. KDD'99.
S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases.
SIGMOD'98.
S. Guha, R. Rastogi, and K. Shim. ROCK: A robust clustering algorithm for categorical attributes. In
ICDE'99, pp. 512-521, Sydney, Australia, March 1999.
A. Hinneburg and D. A. Keim. An Efficient Approach to Clustering in Large Multimedia Databases with
Noise. KDD'98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering Algorithm Using
Dynamic Modeling. COMPUTER, 32(8): 68-75, 1999.
L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John
Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB98.
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John
Wiley and Sons, 1988.
P. Michaud. Clustering techniques. Future Generation Computer systems, 13, 1997.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
References (3)
L. Parsons, E. Haque and H. Liu, Subspace Clustering for High Dimensional Data: A Review ,
SIGKDD Explorations, 6(1), June 2004
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets.
Proc. 1996 Int. Conf. on Pattern Recognition,.
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering
approach for very large spatial databases. VLDB98.
A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering in Large
Databases, ICDT'01.
A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles , ICDE'01
H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data
sets. SIGMOD'02.
W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to Spatial Data Mining.
VLDB'97.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : an efficient data clustering method for very
large databases. SIGMOD'96.
www.cs.uiuc.edu/~hanj
Thank you !!!