Clustering Analysis

K-means clustering is a partitional clustering algorithm that aims to partition n observations into k clusters where each observation belongs to the cluster with the nearest mean. It works by assigning each point to a cluster whose centroid minimizes total intra-cluster variance, computed as the sum of squared distances between points and centroids. The algorithm proceeds iteratively by alternating between assignments of points to centroids and recomputation of centroids until convergence is reached.

Partitional (K-means), Hierarchical, Density-Based (DBSCAN)

 In general, a grouping of objects such that the objects in a group (cluster) are similar (or related) to one another and different from (or unrelated to) the objects in other groups
 Intra-cluster distances are minimized, while inter-cluster distances are maximized
 Understanding
 Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations

Example: discovered clusters of stocks with similar price movements and their industry group
 Cluster 1 (Technology1-DOWN): Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN
 Cluster 2 (Technology2-DOWN): Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN
 Cluster 3 (Financial-DOWN): Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN
 Cluster 4 (Oil-UP): Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP

 Summarization
 Reduce the size of large data sets
(Figure: clustering precipitation in Australia)
 John Snow, London 1854 (cholera outbreak map, an early clustering example)

 The notion of a cluster can be ambiguous
(Figure: the same points shown as two clusters, four clusters, or six clusters)


 A clustering is a set of clusters
 Important distinction between hierarchical and partitional sets of clusters
 Partitional Clustering
 A division of data objects into subsets (clusters) such that each data object is in exactly one subset
 Hierarchical clustering
 A set of nested clusters organized as a hierarchical tree
(Figures: a set of original points; a partitional clustering of them; a traditional hierarchical clustering of points p1–p4 with its dendrogram; and a non-traditional hierarchical clustering with its dendrogram)
 Exclusive (or non-overlapping) versus non-exclusive (or overlapping)
 In non-exclusive clusterings, points may belong to multiple clusters.
▪ Points that belong to multiple classes, or ‘border’ points

 Fuzzy (or soft) versus non-fuzzy (or hard)
 In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
▪ Weights usually must sum to 1 (often interpreted as probabilities)

 Partial versus complete
 In some cases, we only want to cluster some of the data
 Well-Separated Clusters:
 A cluster is a set of points such that any point in a cluster is closer (or
more similar) to every other point in the cluster than to any point
not in the cluster.

3 well-separated clusters
 Center-based
 A cluster is a set of objects such that an object in a cluster is closer
(more similar) to the “center” of a cluster, than to the center of any
other cluster
 The center of a cluster is often a centroid, the minimizer of
distances from all the points in the cluster, or a medoid, the most
“representative” point of a cluster

4 center-based clusters
 Contiguous Cluster (Nearest neighbor or
Transitive)
 A cluster is a set of points such that a point in a cluster is closer (or
more similar) to one or more other points in the cluster than to any
point not in the cluster.

8 contiguous clusters
 Density-based
 A cluster is a dense region of points, which is separated by low-
density regions, from other regions of high density.
 Used when the clusters are irregular or intertwined, and when noise
and outliers are present.

6 density-based clusters
 Shared Property or Conceptual Clusters
 Finds clusters that share some common property or represent a particular concept.

(Figure: 2 overlapping circles)
 Clustering as an optimization problem
 Finds clusters that minimize or maximize an objective
function.
 Enumerate all possible ways of dividing the points into
clusters and evaluate the `goodness' of each potential set of
clusters by using the given objective function. (NP Hard)
 Can have global or local objectives.
▪ Hierarchical clustering algorithms typically have local objectives
▪ Partitional algorithms typically have global objectives
 A variation of the global objective function approach is to fit
the data to a parameterized model.
▪ The parameters for the model are determined from the data, and
they determine the clustering
▪ E.g., Mixture models assume that the data is a ‘mixture' of a number
of statistical distributions.
 K-means and its variants

 Hierarchical clustering

 DBSCAN
 Partitional clustering approach
 Each cluster is associated with a centroid
(center point)
 Each point is assigned to the cluster with
the closest centroid
 Number of clusters, K, must be specified
 The objective is to minimize the sum of
distances of the points to their respective
centroid
 Problem: Given a set X of n points in a d-dimensional space and an integer K, group the points into K clusters C = {C1, C2, …, CK} such that

$$\sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}(x, c_i)$$

is minimized, where ci is the centroid of the points in cluster Ci
• The most common definition uses Euclidean distance, minimizing the Sum of Squared Error (SSE) function
 K-means is sometimes defined directly in terms of this objective

 Problem: Given a set X of n points in a d-dimensional space and an integer K, group the points into K clusters C = {C1, C2, …, CK} such that

$$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(x, c_i)$$

is minimized, where ci is the mean of the points in cluster Ci
This is the Sum of Squared Error (SSE)
• The problem is NP-hard if the dimensionality of the data is at least 2 (d ≥ 2)
 Finding the best solution in polynomial time is
infeasible

• For d=1 the problem is solvable in polynomial


time (how?)

• A simple iterative algorithm works quite well


in practice
 Also known as Lloyd’s algorithm.
 K-means is sometimes synonymous with this
algorithm
 Initial centroids are often chosen randomly.
 Clusters produced vary from one run to another.
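To make the iteration concrete, here is a minimal sketch of Lloyd's algorithm in R, assuming Euclidean distance and a numeric data matrix. The helper name lloyd_kmeans is ours, not a library function; the kmeans() call in the R examples at the end of this section is the practical choice.

# Minimal sketch of Lloyd's algorithm (assumes Euclidean distance; X is a numeric matrix).
lloyd_kmeans <- function(X, K, iters = 100) {
  X <- as.matrix(X)
  centroids <- X[sample(nrow(X), K), , drop = FALSE]   # random initial centroids
  for (it in seq_len(iters)) {
    # assignment step: each point goes to the cluster with the nearest centroid
    d2 <- sapply(seq_len(K), function(k) colSums((t(X) - centroids[k, ])^2))
    cl <- max.col(-d2)                                 # index of the smallest squared distance
    # update step: recompute each centroid as the mean of its assigned points
    new_centroids <- t(sapply(seq_len(K), function(k) colMeans(X[cl == k, , drop = FALSE])))
    if (all(abs(new_centroids - centroids) < 1e-9)) break   # converged
    centroids <- new_centroids                         # (empty clusters are not handled here)
  }
  list(cluster = cl, centers = centroids)
}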
(Figures: two runs of K-means on the same 2-D data set with K = 3. One random initialization converges to the optimal clustering after six iterations; another converges to a sub-optimal clustering after five iterations. The quality of the result depends on the choice of initial centroids.)
 Do multiple runs and select the clustering with the smallest error

 Select the initial centroids by methods other than uniform random sampling, e.g., pick points that are far apart from each other. The K-means++ algorithm does this probabilistically, choosing each new center with probability proportional to its squared distance from the centers already chosen (see the sketch below).
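A minimal sketch of K-means++-style seeding, assuming Euclidean distance; kmeanspp_init is a hypothetical helper name, not a library function.

# K-means++-style seeding: each new center is sampled with probability
# proportional to its squared distance from the centers chosen so far.
kmeanspp_init <- function(X, K) {
  X <- as.matrix(X)
  n <- nrow(X)
  centers <- X[sample(n, 1), , drop = FALSE]      # first center: uniform at random
  while (nrow(centers) < K) {
    # squared distance of each point to its nearest chosen center
    d2 <- apply(X, 1, function(x) min(colSums((t(centers) - x)^2)))
    centers <- rbind(centers, X[sample(n, 1, prob = d2), , drop = FALSE])
  }
  centers
}

The resulting matrix can then be passed to kmeans() through its centers argument, e.g. kmeans(iris[, 1:4], centers = kmeanspp_init(iris[, 1:4], 3)).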
 The centroid depends on the distance function
 The minimizer for the distance function
 ‘Closeness’ is measured by Euclidean distance (SSE),
cosine similarity, correlation, etc.
 Centroid:
 The mean of the points in the cluster for SSE, and cosine
similarity
 The median for Manhattan distance.

 Finding the centroid is not always easy


 It can be an NP-hard problem for some distance functions
▪ E.g., the median for multiple dimensions
 K-means will converge for common similarity
measures mentioned above.
 Most of the convergence happens in the first few
iterations.
 Often the stopping condition is changed to ‘Until
relatively few points change clusters’
 Complexity is O( n * K * I * d )
 n = number of points, K = number of clusters,
I = number of iterations, d = dimensionality
 In general a fast and efficient algorithm
 K-means has problems when clusters are of
different
 Sizes
 Densities
 Non-globular shapes

 K-means has problems when the data


contains outliers.
(Figures: K-means applied to data sets whose natural clusters have different sizes, different densities, and non-globular shapes; in each case the natural clusters are broken up.)

One solution is to use many clusters: K-means then finds pieces of the natural clusters, but the pieces need to be put back together.

(Figures: the same data sets clustered with a larger number of K-means clusters.)
 K-medoids: Similar problem definition as in K-means, but the centroid of the cluster is defined to be one of the points in the cluster (the medoid).

 K-centers: Similar problem definition as in K-means, but the goal now is to minimize the maximum diameter of the clusters (the diameter of a cluster is the maximum distance between any two points in the cluster).
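As a usage sketch of the K-medoids formulation, the pam() function from the cluster package can be applied to the iris data used in the R examples at the end of this section (this assumes the cluster package is installed).

# K-medoids via PAM; the medoids are actual data points.
library(cluster)
iris2 <- iris[, -5]                               # drop the Species column
pam.result <- pam(iris2, k = 3)
table(iris$Species, pam.result$clustering)        # compare with the class labels
plot(iris2[c("Sepal.Length", "Sepal.Width")], col = pam.result$clustering)
points(pam.result$medoids[, c("Sepal.Length", "Sepal.Width")], pch = 8, cex = 2)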
 Two main types of hierarchical clustering
 Agglomerative:
▪ Start with the points as individual clusters
▪ At each step, merge the closest pair of clusters until only one cluster (or k
clusters) left

 Divisive:
▪ Start with one, all-inclusive cluster
▪ At each step, split a cluster until each cluster contains a point (or there are
k clusters)

 Traditional hierarchical algorithms use a similarity or


distance matrix
 Merge or split one cluster at a time
 Produces a set of nested clusters organized
as a hierarchical tree
 Can be visualized as a dendrogram
 A tree-like diagram that records the sequences of merges or splits
(Figure: a nested clustering of six points and the corresponding dendrogram, with leaves ordered 1, 3, 2, 5, 4, 6)
 Do not have to assume any particular number
of clusters
 Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level

 They may correspond to meaningful


taxonomies
 Example in biological sciences (e.g., animal
kingdom, phylogeny reconstruction, …)
 More popular hierarchical clustering technique
 Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains

 Key operation is the computation of the proximity of


two clusters
 Different approaches to defining the distance between
clusters distinguish the different algorithms
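A naive sketch of this basic agglomerative loop with a single-link (MIN) proximity, written directly against a distance matrix. single_link is a hypothetical helper; hclust() in the R examples later is the practical choice.

# Naive single-link agglomerative clustering (assumes a symmetric distance matrix D).
single_link <- function(D, k) {
  n <- nrow(D)
  labels <- seq_len(n)                              # every point starts in its own cluster
  while (length(unique(labels)) > k) {
    cl <- unique(labels)
    best <- c(Inf, NA, NA)
    for (i in seq_along(cl)) for (j in seq_len(i - 1)) {
      # single link: minimum pairwise distance between the two clusters
      d <- min(D[labels == cl[i], labels == cl[j], drop = FALSE])
      if (d < best[1]) best <- c(d, cl[i], cl[j])
    }
    labels[labels == best[3]] <- best[2]            # merge the two closest clusters
  }
  match(labels, unique(labels))                     # relabel as 1..k
}

# Usage: single_link(as.matrix(dist(iris[1:30, 1:4])), k = 3)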
 Start with clusters of individual points and a proximity matrix
(Figure: points p1, p2, p3, p4, p5, … and their proximity matrix)
 After some merging steps, we have some clusters C1, …, C5
(Figure: the current clusters and their proximity matrix)
 We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
 The question is “How do we update the proximity matrix?”
(Figure: after merging C2 and C5, the row and column for the new cluster C2 ∪ C5 in the proximity matrix are marked “?” and must be recomputed)
 How do we define inter-cluster similarity?
 MIN
 MAX
 Group Average
 Distance Between Centroids
 Other methods driven by an objective function
– Ward’s Method uses squared error
(Figure: two clusters of points p1–p5 and their proximity matrix; each definition above corresponds to a different way of measuring the similarity between the two clusters)
 Another way to view the processing of a hierarchical algorithm is that we create links between elements in order of increasing distance
 MIN (Single Link) will merge two clusters as soon as a single pair of their elements is linked
 MAX (Complete Linkage) will merge two clusters only when all pairs of their elements have been linked
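A short usage sketch of these two criteria with base R's hclust(); the subset of iris used here is arbitrary.

# Single link (MIN) vs. complete link (MAX) on the same distance matrix.
d <- dist(iris[1:30, 1:4])
plot(hclust(d, method = "single"))    # merges as soon as one cross-cluster pair is close
plot(hclust(d, method = "complete"))  # merges only when the farthest cross-cluster pair is close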
 MIN (Single Link) example: distance matrix for six points

      1     2     3     4     5     6
 1   0    .24   .22   .37   .34   .23
 2  .24    0    .15   .20   .14   .25
 3  .22   .15    0    .15   .28   .11
 4  .37   .20   .15    0    .29   .22
 5  .34   .14   .28   .29    0    .39
 6  .23   .25   .11   .22   .39    0

(Figure: nested single-link clusters and the corresponding dendrogram, with leaves ordered 3, 6, 2, 5, 4, 1)
(Figures: original points and the two clusters found by single link)
• Strength: can handle non-elliptical shapes
• Limitation: sensitive to noise and outliers
 MAX (Complete Linkage) example: same distance matrix as above
(Figure: nested complete-link clusters and the corresponding dendrogram, with leaves ordered 3, 6, 4, 1, 2, 5)
(Figures: original points and the two clusters found by complete link)
• Strength: less susceptible to noise and outliers
• Limitations: tends to break large clusters; biased towards globular clusters
 Proximity of two clusters is the average of pairwise proximity between points in the two clusters:

$$\mathrm{proximity}(\mathrm{Cluster}_i, \mathrm{Cluster}_j) = \frac{\sum_{p_i \in \mathrm{Cluster}_i,\; p_j \in \mathrm{Cluster}_j} \mathrm{proximity}(p_i, p_j)}{|\mathrm{Cluster}_i| \cdot |\mathrm{Cluster}_j|}$$

 Need to use average connectivity for scalability, since total proximity favors large clusters
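A quick numerical check of the group-average formula on two small, made-up clusters, using Euclidean distance as the proximity.

# Average pairwise distance between two hypothetical clusters A and B.
A <- matrix(c(0, 0, 1, 0), ncol = 2, byrow = TRUE)          # cluster A: two points
B <- matrix(c(4, 0, 5, 0, 6, 0), ncol = 2, byrow = TRUE)    # cluster B: three points
d <- as.matrix(dist(rbind(A, B)))[1:2, 3:5]                 # the A-to-B block of the distance matrix
sum(d) / (nrow(A) * nrow(B))                                # group-average proximity = 4.5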

 Group Average example: same distance matrix as above
(Figure: nested group-average clusters and the corresponding dendrogram, with leaves ordered 3, 6, 4, 1, 2, 5)
 Compromise between Single and Complete
Link

 Strengths
 Less susceptible to noise and outliers

 Limitations
 Biased towards globular clusters
 Similarity of two clusters is based on the
increase in squared error (SSE) when two
clusters are merged
 Similar to group average if distance between points is
distance squared
 Less susceptible to noise and outliers
 Biased towards globular clusters
 Hierarchical analogue of K-means
 Can be used to initialize K-means
(Figure: the same data set clustered with MIN, MAX, Group Average, and Ward’s Method, showing how the resulting nested clusters differ)
 O(N^2) space, since it uses the proximity matrix
 N is the number of points

 O(N^3) time in many cases
 There are N steps, and at each step the N^2-sized proximity matrix must be updated and searched
 Complexity can be reduced to O(N^2 log N) time for some approaches
 Computational complexity in time and space
 Once a decision is made to combine two
clusters, it cannot be undone
 No objective function is directly minimized
 Different schemes have problems with one or
more of the following:
 Sensitivity to noise and outliers
 Difficulty handling different sized clusters and convex
shapes
 Breaking large clusters
 DBSCAN is a Density-Based Clustering algorithm
 Reminder: In density based clustering we partition points
into dense regions separated by not-so-dense regions.
 Important Questions:
 How do we measure density?
 What is a dense region?

 DBSCAN:
 Density at point p: number of points within a circle of radius Eps
 Dense Region: A circle of radius Eps that contains at least
MinPts points
 Characterization of points
 A point is a core point if it has more than a
specified number of points (MinPts) within Eps
▪ These points belong in a dense region and are at the
interior of a cluster

 A border point has fewer than MinPts within


Eps, but is in the neighborhood of a core point.

 A noise point is any point that is not a core point


or a border point.
(Figure: original points labeled by point type: core, border, and noise; Eps = 10, MinPts = 4)
 Density edge
 We place an edge between two core points p and q if they are within distance Eps of each other
(Figure: two core points p and q joined by a density edge)

 Density-connected
 A point p is density-connected to a point q if there is a path of density edges from p to q
(Figure: points p and q density-connected through an intermediate point o)
 Label points as core, border and noise
 Eliminate noise points
 For every core point p that has not been
assigned to a cluster
 Create a new cluster with the point p and all the
points that are density-connected to p.
 Assign border points to the cluster of the
closest core point.
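A minimal sketch of these steps in R, assuming a small numeric data matrix; dbscan_sketch is a hypothetical helper, while the fpc::dbscan() call in the later R examples is the practical implementation.

# DBSCAN sketch: label core points, grow clusters through density-connected core points,
# attach border points, and leave the rest as noise (label 0).
dbscan_sketch <- function(X, eps, minpts) {
  D <- as.matrix(dist(X))                         # pairwise Euclidean distances
  neighbors <- lapply(seq_len(nrow(D)), function(i) which(D[i, ] <= eps))
  core <- sapply(neighbors, length) >= minpts     # core points: at least MinPts neighbors (incl. self)
  labels <- rep(0L, nrow(D))                      # 0 = noise / not yet assigned
  cl <- 0L
  for (p in which(core)) {
    if (labels[p] != 0L) next                     # already in a cluster
    cl <- cl + 1L
    frontier <- p
    while (length(frontier) > 0) {
      q <- frontier[1]; frontier <- frontier[-1]
      if (labels[q] == 0L) {
        labels[q] <- cl
        # only core points expand the cluster; border points just receive the label
        if (core[q]) frontier <- c(frontier, neighbors[[q]][labels[neighbors[[q]]] == 0L])
      }
    }
  }
  labels                                          # points still labeled 0 are noise
}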
 Idea is that for points in a cluster, their kth nearest neighbors are
at roughly the same distance
 Noise points have the kth nearest neighbor at farther distance
 So, plot sorted distance of every point to its kth nearest neighbor
 Find the distance d where there is a “knee” in the curve
 Eps = d, MinPts = k
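A sketch of that plot for the iris data, computed directly from the distance matrix (k = MinPts = 4).

# Sorted distance of every point to its 4th nearest neighbor.
iris2 <- iris[, -5]
D <- as.matrix(dist(iris2))
kdist <- apply(D, 1, function(d) sort(d)[5])   # 5th smallest: each row includes the point itself (distance 0)
plot(sort(kdist), type = "l",
     xlab = "points sorted by distance", ylab = "4th nearest neighbor distance")
# look for the "knee" in this curve and use that distance as Eps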

(Figure: sorted 4th-nearest-neighbor distances for the sample data; the knee suggests Eps ≈ 7–10 with MinPts = 4)

(MinPts = 4, Eps = 9.75; figure: original points and the clusters found)
• Resistant to noise
• Can handle clusters of different shapes and sizes

(MinPts = 4, Eps = 9.92; figure: original points)
• DBSCAN struggles with varying densities
• DBSCAN struggles with high-dimensional data
 PAM, CLARANS: Solutions for the k-medoids problem
 BIRCH: Constructs a hierarchical tree that acts as a summary of the data, and then clusters the leaves.
 MST: Clustering using the Minimum Spanning Tree.
 ROCK: clustering categorical data by neighbor and
link analysis
 LIMBO, COOLCAT: Clustering categorical data using
information theoretic tools.
 CURE: Hierarchical algorithm uses different
representation of the cluster
 CHAMELEON: Hierarchical algorithm uses closeness
and interconnectivity for merging
 In order to understand our data, we will assume that there is a
generative process (a model) that creates/describes the data, and
we will try to find the model that best fits the data.
 Models of different complexity can be defined, but we will assume
that our model is a distribution from which data points are sampled
 Example: the data is the height of all people in US

 In most cases, a single distribution is not good enough to describe


all data points: different parts of the data follow a different
distribution
 Example: the data is the height of all people in US and China
 We need a mixture model
 Different distributions correspond to different clusters in the data.
 Example: the data is the height of all people
in US
 Experience has shown that this data follows a
Gaussian (Normal) distribution
 Reminder: Normal distribution:

$$P(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

 μ = mean, σ = standard deviation

 What is a model?
 A Gaussian distribution is fully defined by the mean μ and the standard deviation σ
 We define our model as the pair of parameters θ = (μ, σ)

 This is a general principle: a model is defined as a vector of parameters θ
 We want to find the normal distribution that best fits our data
 Find the best values for μ and σ
 But what does best fit mean?

 Suppose that we have a vector X = (x_1, …, x_n) of values
 And we want to fit a Gaussian N(μ, σ) model to the data
 Probability of observing a point x_i:

$$P(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

 Probability of observing all points (assume independence):

$$P(X) = \prod_{i=1}^{n} P(x_i) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

 We want to find the parameters θ = (μ, σ) that maximize the probability P(X | θ)
 The probability P(X | θ) as a function of θ is called the Likelihood function:

$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

 It is usually easier to work with the Log-Likelihood function:

$$LL(\theta) = -\sum_{i=1}^{n} \frac{(x_i-\mu)^2}{2\sigma^2} - \frac{n}{2}\log 2\pi - n \log \sigma$$


 Maximum Likelihood Estimation
 Find parameters μ, σ that maximize LL(θ):

$$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i \ \ (\text{Sample Mean}) \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \mu)^2 \ \ (\text{Sample Variance})$$
 Note: these are also the most likely
parameters given the data

$$P(\theta \mid X) = \frac{P(X \mid \theta)\, P(\theta)}{P(X)}$$

 If we have no prior information about θ, or about X, then maximizing P(X | θ) is the same as maximizing P(θ | X)
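A quick numerical check of the MLE formulas above on simulated heights (hypothetical data; note the 1/n in the variance, unlike R's var(), which divides by n - 1).

set.seed(1)
x <- rnorm(1000, mean = 170, sd = 10)   # hypothetical height data
mu_hat <- mean(x)                       # sample mean = MLE of the mean
sigma2_hat <- mean((x - mu_hat)^2)      # MLE of the variance uses 1/n
c(mu_hat, sqrt(sigma2_hat))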
 Suppose that you have the heights of people
from the US and China and the distribution
looks like the figure below (dramatization)
 In this case the data is the result of the
mixture of two Gaussians
 One for US people, and one for Chinese people
 Identifying for each value which Gaussian is most
likely to have generated it will give us a clustering.
 A value x_i is generated according to the following process:
 First select the nationality
▪ With probability π_U select US, with probability π_C select China (π_U + π_C = 1)
 We can also think of this as a hidden variable Z

 Given the nationality, generate the point from the corresponding Gaussian
▪ x_i ~ N(μ_U, σ_U) if US
▪ x_i ~ N(μ_C, σ_C) if China

 Our model has the following parameters:
Θ = (π_U, π_C, μ_U, μ_C, σ_U, σ_C)
(π_U, π_C are the mixture probabilities; μ_U, μ_C, σ_U, σ_C are the distribution parameters)

 For a value x_i, we have:

$$P(x_i \mid \Theta) = \pi_U\, P(x_i \mid \Theta_U) + \pi_C\, P(x_i \mid \Theta_C)$$

 For all values X = (x_1, …, x_n):

$$P(X \mid \Theta) = \prod_{i=1}^{n} P(x_i \mid \Theta)$$

 We want to estimate the parameters that maximize the Likelihood of the data
 Once we have the parameters Θ = (π_U, π_C, μ_U, μ_C, σ_U, σ_C) we can estimate the membership probabilities P(U | x_i) and P(C | x_i) for each point x_i
 This is the probability that point x_i belongs to the US or the Chinese population (cluster):

$$P(U \mid x_i) = \frac{P(x_i \mid U)\, P(U)}{P(x_i \mid U)\, P(U) + P(x_i \mid C)\, P(C)} = \frac{\pi_U\, P(x_i \mid \Theta_U)}{\pi_U\, P(x_i \mid \Theta_U) + \pi_C\, P(x_i \mid \Theta_C)}$$

(and symmetrically for P(C | x_i))
 Initialize the values of the parameters in Θ to some random values
 Repeat until convergence
 E-Step: Given the parameters Θ, estimate the membership probabilities P(U | x_i) and P(C | x_i)
 M-Step: Compute the parameter values that (in expectation) maximize the data likelihood

$$\pi_U = \frac{1}{n}\sum_{i=1}^{n} P(U \mid x_i) \qquad \pi_C = \frac{1}{n}\sum_{i=1}^{n} P(C \mid x_i) \qquad \text{(fraction of the population in U, C)}$$

$$\mu_U = \frac{1}{n\,\pi_U}\sum_{i=1}^{n} P(U \mid x_i)\, x_i \qquad \mu_C = \frac{1}{n\,\pi_C}\sum_{i=1}^{n} P(C \mid x_i)\, x_i$$

$$\sigma_U^2 = \frac{1}{n\,\pi_U}\sum_{i=1}^{n} P(U \mid x_i)\,(x_i - \mu_U)^2 \qquad \sigma_C^2 = \frac{1}{n\,\pi_C}\sum_{i=1}^{n} P(C \mid x_i)\,(x_i - \mu_C)^2$$

(These are the MLE estimates we would obtain if the membership probabilities were fixed.)
 E-Step: Assignment of points to clusters
 K-means: hard assignment; EM: soft assignment
 M-Step: Computation of centroids
 K-means assumes a common fixed variance (spherical clusters)
 EM can use a different variance for different clusters or different dimensions (ellipsoidal clusters)
 If the variance is fixed, then both minimize the same error function
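A minimal sketch of this EM loop for a one-dimensional mixture of two Gaussians; em_two_gaussians is a hypothetical helper and the initial values are arbitrary.

# EM for a 1-D mixture of two Gaussians.
em_two_gaussians <- function(x, iters = 50) {
  pi1 <- 0.5; mu1 <- min(x); mu2 <- max(x); sd1 <- sd(x); sd2 <- sd(x)
  for (it in seq_len(iters)) {
    # E-step: membership probabilities (responsibilities) for component 1
    p1 <- pi1 * dnorm(x, mu1, sd1)
    p2 <- (1 - pi1) * dnorm(x, mu2, sd2)
    r1 <- p1 / (p1 + p2)
    # M-step: weighted MLE updates
    pi1 <- mean(r1)
    mu1 <- sum(r1 * x) / sum(r1)
    mu2 <- sum((1 - r1) * x) / sum(1 - r1)
    sd1 <- sqrt(sum(r1 * (x - mu1)^2) / sum(r1))
    sd2 <- sqrt(sum((1 - r1) * (x - mu2)^2) / sum(1 - r1))
  }
  list(pi1 = pi1, mu = c(mu1, mu2), sd = c(sd1, sd2), resp = r1)
}

Points can then be assigned to the component with the larger membership probability, which mirrors the soft-versus-hard assignment contrast above.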
# K-means
iris2 = iris
iris2$Species = NULL
(kmeans.result = kmeans(iris2,3))
table(iris$Species, kmeans.result$cluster)
plot(iris2[c("Sepal.Length", "Sepal.Width")], col = kmeans.result$cluster)
# plot cluster centers
points(kmeans.result$centers[, c("Sepal.Length", "Sepal.Width")], col = 1:3, pch = 8, cex = 2)
# Hierarchical Clustering
idx = sample(1:dim(iris)[1], 40)
irisSample = iris[idx,]
irisSample$Species = NULL
hc = hclust(dist(irisSample), method="ave")
plot(hc, hang = -1, labels=iris$Species[idx])
# cut tree into 3 clusters
rect.hclust(hc, k=3)
groups = cutree(hc, k=3)
# DBSCAN
library(fpc)
iris2 = iris[-5] # remove class tags
ds = dbscan(iris2, eps=0.42, MinPts=5)
# compare clusters with original class labels
table(ds$cluster, iris$Species)
plot(ds, iris2)
plot(ds, iris2[c(1,4)])
plotcluster(iris2, ds$cluster)
# create a new dataset for labeling
set.seed(435)
idx = sample(1:nrow(iris), 10)
newData = iris[idx,-5]
newData = newData + matrix(runif(10*4, min=0, max=0.2), nrow=10, ncol=4)
# label new data
myPred = predict(ds, iris2, newData)
# plot result
plot(iris2[c(1,4)], col=1+ds$cluster)
points(newData[c(1,4)], pch="*", col=1+myPred, cex=3)
# check cluster labels
table(myPred, iris$Species[idx])
