K-MEANS CLUSTERING

INTRODUCTION: What is clustering?

⚫ Clustering is the classification of objects into different groups, or more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait, often according to some defined distance measure.
Types of clustering:
1. Hierarchical algorithms: these find successive clusters using previously established clusters.
   a. Agglomerative ("bottom-up"): agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters.
   b. Divisive ("top-down"): divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.
2. Partitional clustering: partitional algorithms determine all clusters at once. They include:
   ⚫ K-means and derivatives
   ⚫ Fuzzy c-means clustering
   ⚫ QT clustering algorithm
Common distance measures:

⚫ The distance measure determines how the similarity of two elements is calculated, and it influences the shape of the clusters. Common choices include (a small code sketch follows this list):

1. The Euclidean distance (also called 2-norm distance) is given by:
   $d(x, y) = \sqrt{\sum_{i=1}^{p} (x_i - y_i)^2}$
2. The Manhattan distance (also called taxicab norm or 1-norm) is given by:
   $d(x, y) = \sum_{i=1}^{p} |x_i - y_i|$
3. The maximum norm (Chebyshev distance) is given by:
   $d(x, y) = \max_{i} |x_i - y_i|$
4. The Mahalanobis distance corrects the data for different scales and correlations in the variables.
5. Inner product space: the angle between two vectors can be used as a distance measure when clustering high-dimensional data.
6. Hamming distance (sometimes edit distance) measures the minimum number of substitutions required to change one member into another.
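
As an illustration, the sketch below computes several of these distance measures with plain NumPy; the vectors x and y and the small sample used to estimate a covariance matrix for the Mahalanobis distance are made-up example values.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # example vectors (made-up values)
y = np.array([4.0, 0.0, 3.0])

euclidean = np.sqrt(np.sum((x - y) ** 2))      # 2-norm distance
manhattan = np.sum(np.abs(x - y))              # 1-norm (taxicab) distance
maximum   = np.max(np.abs(x - y))              # maximum (Chebyshev) norm

# The Mahalanobis distance needs the inverse covariance of the data;
# here we estimate it from a small made-up sample.
sample = np.array([[1.0, 2.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 3.0, 2.0],
                   [4.0, 1.0, 5.0]])
cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
diff = x - y
mahalanobis = np.sqrt(diff @ cov_inv @ diff)

# Cosine "distance" derived from the angle between the two vectors.
cosine = 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Hamming distance between two equal-length strings.
a, b = "karolin", "kathrin"
hamming = sum(c1 != c2 for c1, c2 in zip(a, b))

print(euclidean, manhattan, maximum, mahalanobis, cosine, hamming)
```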
K-MEANS CLUSTERING
⚫ The k-means algorithm is an algorithm to cluster
n objects based on attributes into k partitions,
where k < n.
⚫ It is similar to the expectation-maximization
algorithm for mixtures of Gaussians in that they
both attempt to find the centers of natural clusters
in the data.
⚫ It assumes that the object attributes form a vector
space.
⚫ It is an algorithm for partitioning (or clustering) N data points into K disjoint subsets Sj, containing Nj data points, so as to minimize the sum-of-squares criterion
   $J = \sum_{j=1}^{K} \sum_{n \in S_j} \| x_n - \mu_j \|^2$
   where xn is a vector representing the nth data point and μj is the geometric centroid of the data points in Sj.
⚫ Simply speaking, k-means clustering is an algorithm to classify or group the objects, based on attributes/features, into K groups.
⚫ K is a positive integer.
⚫ The grouping is done by minimizing the sum of squared distances between the data points and the corresponding cluster centroids.
How does the K-Means clustering algorithm work?

⚫ Step 1: Begin with a decision on the value of k = number of clusters.
⚫ Step 2: Put any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:
   1. Take the first k training samples as single-element clusters.
   2. Assign each of the remaining (N - k) training samples to the cluster with the nearest centroid. After each assignment, recompute the centroid of the gaining cluster.
⚫ Step 3: Take each sample in sequence and compute its distance from the centroid of each of the clusters. If a sample is not currently in the cluster with the closest centroid, switch this sample to that cluster and update the centroids of the cluster gaining the new sample and the cluster losing the sample.
⚫ Step 4: Repeat step 3 until convergence is achieved, that is, until a pass through the training samples causes no new assignments. (A minimal code sketch of this procedure follows.)
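
Below is a minimal NumPy sketch of the batch variant of this procedure (recompute all assignments, then all centroids, until the centroids stop moving). The function name and the random initialization strategy are illustrative choices, not part of the original slides.

```python
import numpy as np

def k_means(points, k, max_iter=100, seed=0):
    """Cluster `points` (an n x d array) into k groups; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    # Step 1-2: pick k distinct points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign every point to the nearest centroid (Euclidean distance).
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Recompute each centroid as the mean of the points assigned to it.
        new_centroids = centroids.copy()
        for j in range(k):
            if np.any(labels == j):
                new_centroids[j] = points[labels == j].mean(axis=0)
        if np.allclose(new_centroids, centroids):
            break  # Step 4: the centroids stopped moving, so the assignments are stable.
        centroids = new_centroids
    return labels, centroids

# Example with the four "medicine" points used later in these slides.
data = np.array([[1.0, 1.0], [2.0, 1.0], [4.0, 3.0], [5.0, 4.0]])
print(k_means(data, k=2))
```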
A Simple example showing the
implementation of k-means algorithm
(using K=2)
Step 1:
Initialization: we randomly choose the following two centroids (k = 2) for the two clusters.
In this case the two centroids are m1 = (1.0, 1.0) and m2 = (5.0, 7.0).
Step 2:
⚫ Thus, we obtain two clusters containing {1, 2, 3} and {4, 5, 6, 7}.
⚫ Their new centroids are the means of the member points (values shown in the accompanying table).

Step 3:
⚫ Now, using these centroids, we compute the Euclidean distance of each object, as shown in the table.
⚫ Therefore, the new clusters are {1, 2} and {3, 4, 5, 6, 7}.
⚫ The next centroids are m1 = (1.25, 1.5) and m2 = (3.9, 5.1).

Step 4:
⚫ The clusters obtained are {1, 2} and {3, 4, 5, 6, 7}.
⚫ Therefore, there is no change in the clusters.
⚫ Thus, the algorithm halts here, and the final result consists of the two clusters {1, 2} and {3, 4, 5, 6, 7}.
(Plots of the cluster assignments at each step, and of a second run with K = 3, are omitted here.)
Real-Life Numerical Example of K-Means Clustering

We have 4 medicines as our training data points, and each medicine has 2 attributes. Each attribute represents a coordinate of the object. We have to determine which medicines belong to cluster 1 and which medicines belong to the other cluster.

Object        Attribute 1 (X): weight index    Attribute 2 (Y): pH
Medicine A    1                                1
Medicine B    2                                1
Medicine C    4                                3
Medicine D    5                                4
Step 1:
⚫ Initial value of centroids: suppose we use medicine A and medicine B as the first centroids.
⚫ Let c1 and c2 denote the coordinates of the centroids; then c1 = (1, 1) and c2 = (2, 1).
⚫ Objects-centroids distances: we calculate the distance between each cluster centroid and each object. Using the Euclidean distance, the distance matrix at iteration 0 is

   D0:               A      B      C      D
   c1 = (1, 1):      0      1      3.61   5
   c2 = (2, 1):      1      0      2.83   4.24

⚫ Each column in the distance matrix corresponds to one object.
⚫ The first row of the distance matrix holds the distance of each object to the first centroid, and the second row the distance of each object to the second centroid.
⚫ For example, the distance from medicine C = (4, 3) to the first centroid c1 = (1, 1) is $\sqrt{(4-1)^2 + (3-1)^2} = \sqrt{13} \approx 3.61$, and its distance to the second centroid c2 = (2, 1) is $\sqrt{(4-2)^2 + (3-1)^2} = \sqrt{8} \approx 2.83$, etc.
Step 2:
⚫ Objects clustering: we assign each object based on the minimum distance.
⚫ Medicine A is assigned to group 1; medicine B, medicine C and medicine D are assigned to group 2.
⚫ The element of the group matrix below is 1 if and only if the object is assigned to that group:

   G0:        A   B   C   D
   group 1:   1   0   0   0
   group 2:   0   1   1   1

⚫ Iteration-1, determine centroids: group 1 has only one member (A), so its centroid remains c1 = (1, 1). Group 2 has three members, so its centroid becomes the average c2 = ((2 + 4 + 5)/3, (1 + 3 + 4)/3) = (11/3, 8/3).
⚫ Iteration-1, objects-centroids distances: the next step is to compute the distance of all objects to the new centroids.
⚫ Similarly to step 2, the distance matrix at iteration 1 is

   D1:                   A      B      C      D
   c1 = (1, 1):          0      1      3.61   5
   c2 = (11/3, 8/3):     3.14   2.36   0.47   1.89

⚫ Iteration-1, objects clustering: based on the new distance matrix, we move medicine B to group 1 while all the other objects remain. The group matrix becomes

   G1:        A   B   C   D
   group 1:   1   1   0   0
   group 2:   0   0   1   1

⚫ Iteration-2, determine centroids: now we repeat the centroid-update step to calculate the new centroid coordinates based on the clustering of the previous iteration. Group 1 and group 2 both have two members, so the new centroids are c1 = ((1 + 2)/2, (1 + 1)/2) = (1.5, 1) and c2 = ((4 + 5)/2, (3 + 4)/2) = (4.5, 3.5).
⚫ Iteration-2, objects-centroids distances: repeating the distance computation, the distance matrix at iteration 2 is

   D2:                   A      B      C      D
   c1 = (1.5, 1):        0.5    0.5    3.20   4.61
   c2 = (4.5, 3.5):      4.30   3.54   0.71   0.71

⚫ Iteration-2, objects clustering: again, we assign each object based on the minimum distance.
⚫ We obtain the same grouping as before, G2 = G1. Comparing the grouping of the last iteration with this iteration shows that no object moves to a different group anymore.
⚫ Thus, the computation of the k-means clustering has reached stability and no more iterations are needed.
We get the final grouping as the result:

Object        Feature 1 (X): weight index    Feature 2 (Y): pH    Group (result)
Medicine A    1                              1                    1
Medicine B    2                              1                    1
Medicine C    4                              3                    2
Medicine D    5                              4                    2
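
As a cross-check, the short NumPy sketch below reproduces the iterations of this worked example (the variable names are illustrative): it prints the distance matrix and group assignments at each pass and converges to the same grouping {A, B} and {C, D}.

```python
import numpy as np

# The four medicines: rows are A, B, C, D; columns are (weight index, pH).
points = np.array([[1.0, 1.0], [2.0, 1.0], [4.0, 3.0], [5.0, 4.0]])
centroids = points[:2].copy()          # start with A and B as c1 and c2

for iteration in range(10):
    # Distance matrix: row i holds the distances of all objects to centroid i.
    D = np.linalg.norm(points[None, :, :] - centroids[:, None, :], axis=2)
    groups = D.argmin(axis=0)          # assign each object to the nearest centroid
    print(f"iteration {iteration}:\n{np.round(D, 2)}  groups: {groups + 1}")
    new_centroids = np.array([points[groups == j].mean(axis=0) for j in range(2)])
    if np.allclose(new_centroids, centroids):
        break                          # no centroid moved: the grouping is stable
    centroids = new_centroids

print("final centroids:", centroids)   # expected: (1.5, 1.0) and (4.5, 3.5)
```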
K-Means Clustering Visual Basic Code

Sub kMeanCluster (Data() As Variant, numCluster As Integer)


' main function to cluster data into k number of Clusters
' input:
'  + Data matrix (0 to 2, 1 to TotalData);
'    Row 0 = cluster, 1 = X, 2 = Y; data in columns
'  + numCluster: number of clusters the user wants the data to be clustered into
'  + private variables: Centroid, TotalData
' output:
'  o) update centroid
'  o) assign cluster number to the Data (= row 0 of Data)

Dim i As Integer
Dim j As Integer
Dim X As Single
Dim Y As Single
Dim min As Single
Dim cluster As Integer
Dim d As Single
Dim sumXY()

Dim isStillMoving As Boolean


isStillMoving = True
If totalData <= numCluster Then
    'only the last data point is handled here because the routine is designed to be interactive
    Data(0, totalData) = totalData ' cluster No = total data
    Centroid(1, totalData) = Data(1, totalData) ' X
    Centroid(2, totalData) = Data(2, totalData) ' Y
Else
'calculate minimum distance to assign the new data
min = 10 ^ 10 'big number
X = Data(1, totalData)
Y = Data(2, totalData)
For i = 1 To numCluster
    d = dist(X, Y, Centroid(1, i), Centroid(2, i))
    If d < min Then
        min = d
        cluster = i
    End If
Next i
Data(0, totalData) = cluster ' assign the new data point to its nearest cluster

Do While isStillMoving
' this loop will surely converge
'calculate new centroids
' 1 =X, 2=Y, 3=count number of data
ReDim sumXY(1 To 3, 1 To numCluster)
For i = 1 To totalData
sumXY(1, Data(0, i)) = Data(1, i) + sumXY(1, Data(0, i))
sumXY(2, Data(0, i)) = Data(2, i) + sumXY(2, Data(0, i))
sumXY(3, Data(0, i)) = 1 + sumXY(3, Data(0, i))
Next i
For i = 1 To numCluster
Centroid(1, i) = sumXY(1, i) / sumXY(3, i)
Centroid(2, i) = sumXY(2, i) / sumXY(3, i)
Next i
'assign all data to the new centroids
isStillMoving = False

For i = 1 To totalData
min = 10 ^ 10 'big number
X = Data(1, i)
Y = Data(2, i)
For j = 1 To numCluster
d = dist(X, Y, Centroid(1, j), Centroid(2, j))
If d < min Then
min = d
cluster = j
End If
Next j
If Data(0, i) <> cluster Then
Data(0, i) = cluster
isStillMoving = True
End If
Next i
Loop
End If
End Sub
Weaknesses of K-Means Clustering
1. When the number of data points is small, the initial grouping determines the clusters significantly.
2. The number of clusters, K, must be determined beforehand. A further disadvantage is that the algorithm does not yield the same result on each run, since the resulting clusters depend on the initial random assignments.
3. We never know the "real" clusters: using the same data, presenting it in a different order may produce different clusters when the number of data points is small.
4. It is sensitive to the initial conditions; different initial conditions may produce different clusterings, and the algorithm may be trapped in a local optimum. (The sketch below illustrates this sensitivity.)
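
A minimal sketch of this initialization sensitivity, assuming scikit-learn is available (the data set and seeds are made-up): with a single random initialization per run (init="random", n_init=1), different seeds may converge to different local optima, visible as different within-cluster sums of squares (inertia).

```python
import numpy as np
from sklearn.cluster import KMeans

# A small made-up data set with three loose groups of points.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=center, scale=0.4, size=(20, 2))
                  for center in [(0, 0), (0, 5), (5, 0)]])

for seed in range(5):
    km = KMeans(n_clusters=3, init="random", n_init=1, random_state=seed).fit(data)
    # inertia_ is the sum of squared distances to the nearest centroid;
    # runs that end in different local optima show different inertia values.
    print(f"seed={seed}  inertia={km.inertia_:.2f}")
```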
Applications of K-Means Clustering
⚫ It is relatively efficient and fast: its time complexity is O(tkn), where n is the number of objects or points, k is the number of clusters, and t is the number of iterations.
⚫ k-means clustering can be applied in machine learning and data mining.
⚫ It is used on acoustic data in speech understanding to convert waveforms into one of k categories (known as vector quantization), and for image segmentation.
⚫ It is also used for choosing color palettes on old-fashioned graphical display devices and for image quantization.
CONCLUSION
⚫ The k-means algorithm is useful for undirected knowledge discovery and is relatively simple. K-means has found widespread use in many fields, ranging from unsupervised learning in neural networks to pattern recognition, classification analysis, artificial intelligence, image processing, machine vision, and many others.
Introduction to Support Vector Machines

Note to other teachers and users of these slides. Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew’s tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.

Thanks:
Andrew Moore, CMU
and
Martin Law, Michigan State University
History of SVM
 SVM is related to statistical learning theory [3]
 SVM was first introduced in 1992 [1]

 SVM became popular because of its success in handwritten digit recognition
  1.1% test error rate for SVM. This is the same as the error rates of a carefully constructed neural network, LeNet 4.
  See Section 5.11 in [2] or the discussion in [3] for details
 SVM is now regarded as an important example of “kernel methods”, one of the key areas in machine learning
  Note: the meaning of “kernel” here is different from the “kernel” function used for Parzen windows
[1] B.E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on
Computational Learning Theory 5 144-152, Pittsburgh, 1992.
[2] L. Bottou et al. Comparison of classifier methods: a case study in handwritten digit recognition. Proceedings of the 12th
IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82.
[3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.

Linear Classifiers (Estimation)

x → f(x, w, b) → y_est,  where f(x, w, b) = sign(w · x − b)
w: weight vector,  x: data vector
(In the figures, one marker denotes the +1 class and another the −1 class.)

How would you classify this data?
Linear Classifiers

f(x, w, b) = sign(w · x − b)

Any of these candidate separating lines would classify the data correctly... but which is best?
Classifier Margin

f(x, w, b) = sign(w · x − b)

Define the margin of a linear classifier as the width by which the boundary could be increased before hitting a datapoint.
Maximum Margin

f(x, w, b) = sign(w · x − b)

The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM).
Maximum Margin

f(x, w, b) = sign(w · x + b)

The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM).

Support vectors are those datapoints that the margin pushes up against.
Why Maximum Margin?

f(x, w, b) = sign(w · x − b)

(Figure: the maximum margin linear classifier, with the support vectors, i.e. the datapoints that the margin pushes up against, highlighted.)
How do we calculate the distance from a point to a line?

The line is w · x + b = 0, where
  x – a data vector
  w – the normal vector of the line
  b – a scalar offset

 See http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
 In our case the line is w1*x1 + w2*x2 + b = 0, with w = (w1, w2) and x = (x1, x2)
Estimate the Margin

 What is the distance expression for a point x to the line w · x + b = 0?

   $d(\mathbf{x}) = \frac{|\mathbf{x} \cdot \mathbf{w} + b|}{\|\mathbf{w}\|_2} = \frac{|\mathbf{x} \cdot \mathbf{w} + b|}{\sqrt{\sum_{i=1}^{d} w_i^2}}$

(A small code sketch of this computation follows.)
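
A small sketch of this distance computation in NumPy; the weight vector, bias and points are made-up values.

```python
import numpy as np

def distance_to_hyperplane(x, w, b):
    """Distance from point x to the hyperplane w . x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])          # normal vector of the line (made-up)
b = -12.0                         # offset (made-up)
print(distance_to_hyperplane(np.array([0.0, 0.0]), w, b))   # 12 / 5 = 2.4
print(distance_to_hyperplane(np.array([4.0, 0.0]), w, b))   # |12 - 12| / 5 = 0.0
```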
Large-Margin Decision Boundary
 The decision boundary should be as far away from the data of both classes as possible
 We should maximize the margin, m
 The distance between the origin and the line w^T x = −b is |b| / ‖w‖

(Figure: points of Class 1 and Class 2 separated by the decision boundary, with margin m.)
Finding the Decision Boundary
 Let {x1, ..., xn} be our data set and let yi ∈ {1, −1} be the class label of xi
 The decision boundary should classify all points correctly
 To see this: when yi = −1, we want w · xi + b ≤ −1; when yi = 1, we want w · xi + b ≥ 1. In both cases yi(w · xi + b) ≥ 1, and for support vectors yi(w · xi + b) = 1.
 The decision boundary can be found by solving the following constrained optimization problem:

   $\min_{\mathbf{w}, b}\ \tfrac{1}{2}\|\mathbf{w}\|^2 \quad \text{subject to } y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 \ \text{for all } i$
Next step… Optional
 Converting SVM to a form we can solve
 Dual form
 Allowing a few errors
 Soft margin
 Allowing nonlinear boundary
 Kernel functions

The Dual Problem (we ignore the derivation)
 The new objective function is in terms of the αi only
 It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w
 The original problem is known as the primal problem
 The objective function of the dual problem needs to be maximized!
 The dual problem is therefore:

   $\max_{\alpha}\ W(\alpha) = \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j\, \mathbf{x}_i^T \mathbf{x}_j$
   subject to $\alpha_i \ge 0$ (the property of the αi when we introduce the Lagrange multipliers) and $\sum_{i=1}^{n} \alpha_i y_i = 0$ (the result of differentiating the original Lagrangian w.r.t. b)
The Dual Problem
 This is a quadratic programming (QP) problem
 A global maximum of the αi can always be found
 w can be recovered by  $\mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i$
Characteristics of the Solution
 Many of the αi are zero (see the next page for an example)
  w is a linear combination of a small number of data points
  This “sparse” representation can be viewed as data compression, as in the construction of a k-NN classifier
 The xi with non-zero αi are called support vectors (SV)
 The decision boundary is determined only by the SVs
 Let tj (j = 1, ..., s) be the indices of the s support vectors. We can write  $\mathbf{w} = \sum_{j=1}^{s} \alpha_{t_j} y_{t_j} \mathbf{x}_{t_j}$
 For testing with a new data point z:
  Compute  $\mathbf{w} \cdot \mathbf{z} + b = \sum_{j=1}^{s} \alpha_{t_j} y_{t_j} (\mathbf{x}_{t_j} \cdot \mathbf{z}) + b$  and classify z as class 1 if the sum is positive, and class 2 otherwise
  Note: w need not be formed explicitly (a short code sketch of these properties follows)
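
A brief sketch of these properties, assuming scikit-learn is available: after fitting a linear SVM on a small made-up data set, only a few points end up as support vectors, and w can be recovered from the dual coefficients (sklearn's dual_coef_ stores the products αi·yi).

```python
import numpy as np
from sklearn.svm import SVC

# Small made-up 2-D data set with two separable classes.
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)    # large C approximates a hard margin

print("support vectors:\n", clf.support_vectors_)      # only a few of the points
# dual_coef_ holds alpha_i * y_i for the support vectors; recover w explicitly:
w = clf.dual_coef_ @ clf.support_vectors_
print("w from dual coefficients:", w)
print("w stored by sklearn:     ", clf.coef_)           # should match
print("b:", clf.intercept_)
```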
A Geometrical Interpretation

(Figure: most points have αi = 0, e.g. α2 = α3 = α4 = α5 = α7 = α9 = α10 = 0; only the support vectors on the margin have non-zero multipliers, here α1 = 0.8, α6 = 1.4 and α8 = 0.6.)
Allowing errors in our solutions
 We allow an “error” ξi in classification; it is based on the output of the discriminant function w^T x + b
 ξi approximates the number of misclassified samples

(Figure: the two classes are no longer perfectly separable; misclassified points incur slack ξi.)
Soft Margin Hyperplane
 If we minimize Σi ξi, the ξi can be computed from the relaxed constraints
   $\mathbf{w}^T \mathbf{x}_i + b \ge 1 - \xi_i \ (y_i = 1), \qquad \mathbf{w}^T \mathbf{x}_i + b \le -1 + \xi_i \ (y_i = -1), \qquad \xi_i \ge 0$
 The ξi are “slack variables” in the optimization
 Note that ξi = 0 if there is no error for xi
 Σi ξi is an upper bound on the number of errors
 We want to minimize  $\tfrac{1}{2}\|\mathbf{w}\|^2 + C \sum_i \xi_i$
  C: tradeoff parameter between error and margin
 The optimization problem becomes

   $\min_{\mathbf{w}, b, \xi}\ \tfrac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to } y_i(\mathbf{w}^T \mathbf{x}_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0$

(A tiny code sketch of the slack values follows.)
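
A tiny sketch of how the slack values behave, using made-up w, b and points: ξi = max(0, 1 − yi(w·xi + b)) is zero for points outside the margin and positive for margin violations or misclassifications.

```python
import numpy as np

w, b = np.array([1.0, 1.0]), -3.0                     # made-up separating hyperplane
X = np.array([[3.0, 3.0], [2.0, 1.8], [1.0, 1.0]])    # made-up points
y = np.array([1, 1, 1])

# Slack (hinge loss) per point: 0 outside the margin, > 0 inside or misclassified.
xi = np.maximum(0.0, 1.0 - y * (X @ w + b))
print(xi)                            # e.g. [0.0, 0.2, 2.0]

C = 1.0
print(0.5 * w @ w + C * xi.sum())    # value of the soft-margin objective
```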
Extension to Non-linear Decision Boundaries
 So far, we have only considered large-margin classifiers with a linear decision boundary
 How do we generalize this to become non-linear?
 Key idea: transform xi to a higher-dimensional space to “make life easier”
  Input space: the space where the points xi are located
  Feature space: the space of the φ(xi) after the transformation
Transforming the Data (c.f. DHS Ch. 5)

(Figure: points in the input space are mapped by φ(·) into the feature space, where they become linearly separable.)

 Note: in practice the feature space is of higher dimension than the input space
 Computation in the feature space can be costly because it is high dimensional
  The feature space is typically infinite-dimensional!
 The kernel trick comes to the rescue
The Kernel Trick
 Recall the dual SVM optimization problem: the data points only appear through the inner products xi^T xj
 As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly
 Many common geometric operations (angles, distances) can be expressed by inner products
 Define the kernel function K by  $K(\mathbf{x}_i, \mathbf{x}_j) = \phi(\mathbf{x}_i)^T \phi(\mathbf{x}_j)$
An Example for φ(.) and K(., .)
 Suppose φ(.) is given (for a 2-dimensional input x = (x1, x2)) as
   $\phi(\mathbf{x}) = (1,\ \sqrt{2}x_1,\ \sqrt{2}x_2,\ x_1^2,\ x_2^2,\ \sqrt{2}x_1 x_2)$
 An inner product in the feature space is
   $\phi(\mathbf{x})^T \phi(\mathbf{y}) = (1 + x_1 y_1 + x_2 y_2)^2$
 So, if we define the kernel function as  $K(\mathbf{x}, \mathbf{y}) = (1 + \mathbf{x}^T\mathbf{y})^2$, there is no need to carry out φ(.) explicitly
 This use of a kernel function to avoid carrying out φ(.) explicitly is known as the kernel trick (a quick numerical check follows)
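
A quick numerical check of this identity, using made-up vectors:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D input."""
    x1, x2 = x
    return np.array([1.0, np.sqrt(2)*x1, np.sqrt(2)*x2, x1**2, x2**2, np.sqrt(2)*x1*x2])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(phi(x) @ phi(y))          # inner product in the 6-D feature space
print((1.0 + x @ y) ** 2)       # kernel evaluated directly in the input space: same value
```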
More on Kernel Functions
 Not all similarity measures can be used as kernel functions, however
 The kernel function needs to satisfy the Mercer condition, i.e., the function must be “positive semi-definite”
 This implies that the n-by-n kernel matrix, in which the (i, j)-th entry is K(xi, xj), is always positive semi-definite
 This also means that the optimization problem can be solved in polynomial time!
Examples of Kernel Functions

 Polynomial kernel with degree d:  $K(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^T\mathbf{y} + 1)^d$
 Radial basis function (RBF) kernel with width σ:  $K(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\|\mathbf{x} - \mathbf{y}\|^2 / (2\sigma^2)\right)$
  Closely related to radial basis function neural networks
  The feature space is infinite-dimensional
 Sigmoid kernel with parameters κ and θ:  $K(\mathbf{x}, \mathbf{y}) = \tanh(\kappa\, \mathbf{x}^T\mathbf{y} + \theta)$
  It does not satisfy the Mercer condition for all κ and θ

(A short code sketch of these kernels follows.)
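
These kernels are easy to write down directly; a short sketch (the parameter values are arbitrary examples):

```python
import numpy as np

def polynomial_kernel(x, y, d=3):
    return (x @ y + 1.0) ** d

def rbf_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=0.5, theta=-1.0):
    # Not positive semi-definite for every choice of kappa and theta.
    return np.tanh(kappa * (x @ y) + theta)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(x, y), rbf_kernel(x, y), sigmoid_kernel(x, y))
```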
Non-linear SVMs: Feature Spaces
 General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

   Φ: x → φ(x)
Example
 Suppose we have 5 one-dimensional data points:
  x1 = 1, x2 = 2, x3 = 4, x4 = 5, x5 = 6, with 1, 2, 6 as class 1 and 4, 5 as class 2, i.e. y1 = 1, y2 = 1, y3 = −1, y4 = −1, y5 = 1
 We use the polynomial kernel of degree 2:  K(x, y) = (xy + 1)^2
 C is set to 100
 We first find the αi (i = 1, ..., 5) by maximizing the dual problem
   $\max_{\alpha}\ \sum_{i=1}^{5}\alpha_i - \tfrac{1}{2}\sum_{i,j=1}^{5} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad \text{subject to } 0 \le \alpha_i \le 100,\ \ \sum_{i=1}^{5}\alpha_i y_i = 0$
Example
 By using a QP solver, we get
  α1 = 0, α2 = 2.5, α3 = 0, α4 = 7.333, α5 = 4.833
  Note that the constraints are indeed satisfied
 The support vectors are {x2 = 2, x4 = 5, x5 = 6}
 The discriminant function is
   $f(z) = 2.5\,(2z+1)^2 - 7.333\,(5z+1)^2 + 4.833\,(6z+1)^2 + b = 0.6667 z^2 - 5.333 z + b$
 b is recovered by solving f(2) = 1, or by f(5) = −1, or by f(6) = 1, as x2 and x5 lie on the line w^T φ(x) + b = 1 and x4 lies on the line w^T φ(x) + b = −1
  All three give b = 9, so  $f(z) = 0.6667 z^2 - 5.333 z + 9$

(A code sketch reproducing this example follows.)
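
A sketch that reproduces this example with scikit-learn (the exact α values can differ slightly depending on the solver's tolerance); SVC's polynomial kernel (gamma·x·y + coef0)^degree matches (xy + 1)² when gamma = 1 and coef0 = 1.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0], [2.0], [4.0], [5.0], [6.0]])
y = np.array([1, 1, -1, -1, 1])

# gamma=1, coef0=1, degree=2 makes sklearn's polynomial kernel equal to (xy + 1)^2.
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=100).fit(X, y)

print("support vectors:", clf.support_vectors_.ravel())  # the set {2, 5, 6}
print("alpha_i * y_i:  ", clf.dual_coef_.ravel())         # roughly {2.5, -7.333, 4.833}
print("b:              ", clf.intercept_)                 # roughly 9
print("predictions:    ", clf.predict(X))                 # [ 1  1 -1 -1  1]
```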
Example

(Figure: the value of the discriminant function along the input axis; the regions around the points 1, 2 and 6 are labelled class 1, and the region around 4 and 5 is labelled class 2.)
Degree of Polynomial Features

(Figures: decision boundaries obtained with polynomial kernels of degree 1 through 6.)
Choosing the Kernel Function
 Probably the trickiest part of using an SVM.
Software
 A list of SVM implementations can be found at http://www.kernel-machines.org/software.html
 Some implementations (such as LIBSVM) can handle multi-class classification
 SVMLight is among the earliest implementations of SVM
 Several MATLAB toolboxes for SVM are also available
Summary: Steps for Classification
 Prepare the pattern matrix
 Select the kernel function to use
 Select the parameters of the kernel function and the value of C
  You can use the values suggested by the SVM software, or you can set apart a validation set to determine the values of the parameters
 Execute the training algorithm and obtain the αi
 Unseen data can be classified using the αi and the support vectors

(A brief end-to-end sketch of these steps follows.)
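
A minimal end-to-end sketch of these steps with scikit-learn, using a made-up data set and an RBF kernel; the grid of C and gamma values is only an example.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Step 1: prepare the pattern matrix (made-up two-class data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Steps 2-3: select the kernel and choose its parameters and C on validation folds.
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=3)

# Step 4: run the training algorithm (this solves for the alpha_i internally).
grid.fit(X_train, y_train)

# Step 5: classify unseen data with the learned support vectors.
print("best parameters:", grid.best_params_)
print("test accuracy:  ", grid.score(X_test, y_test))
```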
Conclusion
 SVM is a useful alternative to neural networks
 Two key concepts of SVM: maximizing the margin and the kernel trick
 Many SVM implementations are available on the web for you to try on your data set!
Resources
 http://www.kernel-machines.org/
 http://www.support-vector.net/
 http://www.support-vector.net/icml-tutorial.pdf
 http://www.kernel-machines.org/papers/tutorial-nips.ps.gz
 http://www.clopinet.com/isabelle/Projects/SVM/applist.html
Appendix: Distance from a point to a line
 Equation for the line: let u be a parameter; then any point P on the line can be described as
  P = P1 + u (P2 − P1)
 Let P be the foot of the perpendicular from P3 to the line; then u can be determined from the condition that (P2 − P1) is orthogonal to (P3 − P):
  (P3 − P) · (P2 − P1) = 0, with P = P1 + u (P2 − P1)
 Solving for u gives
   $u = \frac{(x_3 - x_1)(x_2 - x_1) + (y_3 - y_1)(y_2 - y_1)}{\|P_2 - P_1\|^2}$
 where P1 = (x1, y1), P2 = (x2, y2), P3 = (x3, y3)
Distance and margin
 x = x1 + u (x2 − x1),  y = y1 + u (y2 − y1)
 The distance between the point P3 and the line is therefore the distance between P = (x, y) above and P3
 Thus,
   $d = |P_3 - P| = \frac{|(x_2 - x_1)(y_1 - y_3) - (x_1 - x_3)(y_2 - y_1)|}{\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}}$

(A small code sketch of this formula follows.)
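
A small sketch implementing this appendix formula; the three points are made-up values.

```python
import numpy as np

def point_line_distance(p1, p2, p3):
    """Distance from p3 to the line through p1 and p2 (2-D points)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    numerator = abs((x2 - x1) * (y1 - y3) - (x1 - x3) * (y2 - y1))
    return numerator / np.hypot(x2 - x1, y2 - y1)

# The line through (0, 0) and (4, 0) is the x-axis; the point (1, 2) is at distance 2.
print(point_line_distance((0.0, 0.0), (4.0, 0.0), (1.0, 2.0)))   # 2.0
```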
