Simple K-Means

Simple K-means clustering is a type of unsupervised learning, which is used when you have unlabeled
data (i.e., data without defined categories or groups). The goal of this algorithm is to find groups in the
data, with the number of groups represented by the variable K. The algorithm works iteratively to assign
each data point to one of K groups based on the features that are provided. Data points are clustered
based on feature similarity. The results of the K-means clustering algorithm are:

- The centroids of the K clusters, which can be used to label new data
- Labels for the training data (each data point is assigned to a single cluster)

Rather than defining groups before looking at the data, clustering allows you to find and analyze the
groups that have formed organically. The "Choosing K" section below describes how the number of
groups can be determined.

Each cluster centroid is a collection of feature values that defines the resulting group. Examining the
centroid's feature weights lets you qualitatively interpret what kind of group each cluster represents.

Algorithm:

- The Simple K-means clustering algorithm uses iterative refinement to produce a final result.
- The algorithm inputs are the number of clusters K and the data set.
- The data set is a collection of features for each data point.
- The algorithm starts with initial estimates for the K centroids, which can either be randomly
generated or randomly selected from the data set.
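
For illustration, here is a minimal NumPy sketch of the second initialization option, randomly selecting
K of the data points as the starting centroids. The names data and k and the fixed seed are hypothetical,
not part of the algorithm's definition:

    import numpy as np

    rng = np.random.default_rng(seed=0)  # fixed seed so the example is reproducible
    data = rng.random((100, 2))          # hypothetical data set: 100 points, 2 features
    k = 3                                # hypothetical number of clusters

    # Initialize the centroids by randomly selecting k distinct points from the data set
    centroids = data[rng.choice(len(data), size=k, replace=False)]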

The algorithm then iterates between two steps:

1. Data assignment step:

Each centroid defines one of the clusters. In this step, each data point is assigned to its nearest centroid,
based on the squared Euclidean distance. More formally, if C is the collection of centroids c_i, then
each data point x is assigned to a cluster based on

    argmin_{c_i ∈ C} dist(c_i, x)^2

where dist( · ) is the standard (L2) Euclidean distance. Let the set of data point assignments for the
ith cluster centroid be S_i.
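
As a rough illustration, the assignment step could be written in NumPy as follows, assuming data is an
(n, d) array of points and centroids is a (K, d) array (both names are from the sketch above):

    import numpy as np

    def assign_clusters(data, centroids):
        # Squared L2 distance from every point to every centroid: shape (n, K)
        sq_dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        # Each point is assigned to the centroid with the smallest squared distance
        return sq_dists.argmin(axis=1)
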
2. Centroid update step:

In this step, the centroids are recomputed by taking the mean of all data points assigned to each
centroid's cluster:

    c_i = (1 / |S_i|) Σ_{x ∈ S_i} x
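
A matching sketch of the update step, again assuming the hypothetical data array and the labels
produced by the assignment step above:

    import numpy as np

    def update_centroids(data, labels, k):
        # New centroid i is the mean of all points currently assigned to cluster i
        # (handling of empty clusters is omitted here for brevity)
        return np.array([data[labels == i].mean(axis=0) for i in range(k)])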

The algorithm iterates between steps one and two until a stopping criterion is met (e.g., no data points
change clusters, the sum of the distances is minimized, or some maximum number of iterations is
reached).

This algorithm is guaranteed to converge to a result. The result may be a local optimum (i.e., not
necessarily the best possible outcome), so running the algorithm several times with randomized
starting centroids and keeping the best result may give a better outcome.
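
Putting the pieces together, here is a self-contained sketch of the full loop with a convergence check
and multiple randomized restarts. All names are illustrative, and this is only one of several reasonable
ways to structure it:

    import numpy as np

    def kmeans(data, k, max_iters=100, seed=None):
        rng = np.random.default_rng(seed)
        # Initial centroids: k points randomly selected from the data set
        centroids = data[rng.choice(len(data), size=k, replace=False)]
        for _ in range(max_iters):
            # Data assignment step: each point goes to its nearest centroid
            sq_dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = sq_dists.argmin(axis=1)
            # Centroid update step: mean of each cluster's assigned points;
            # an empty cluster keeps its previous centroid
            new_centroids = np.array([
                data[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
                for i in range(k)
            ])
            # Stopping criterion: no centroid moved, so no assignment can change
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        # Recompute the final assignments and the sum of squared distances
        sq_dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = sq_dists.argmin(axis=1)
        inertia = sq_dists.min(axis=1).sum()
        return centroids, labels, inertia

    # Since a single run may converge to a local optimum, run several times with
    # different random starts and keep the result with the lowest inertia
    def best_of_runs(data, k, n_runs=10):
        return min((kmeans(data, k, seed=s) for s in range(n_runs)),
                   key=lambda result: result[2])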

Choosing K

The algorithm described above finds the clusters and data set labels for a particular pre-chosen K. To
find the number of clusters in the data, the user needs to run the K-means clustering algorithm for a
range of K values and compare the results. In general, there is no method for determining the exact
value of K, but an accurate estimate can be obtained using the following techniques.

1. One of the metrics commonly used to compare results across different values of K is the
mean distance between data points and their cluster centroid.
- Since increasing the number of clusters will always reduce the distance to data points,
increasing K will always decrease this metric, to the extreme of reaching zero when K is
the same as the number of data points.
- Thus, this metric cannot be used as the sole target. Instead, mean distance to the
centroid as a function of K is plotted and the "elbow point," where the rate of decrease
sharply shifts, can be used to roughly determine K (see the sketch after this list).

2. A number of other techniques exist for validating K, including cross-validation,
information criteria, the information theoretic jump method, the silhouette method,
and the G-means algorithm. In addition, monitoring the distribution of data points
across groups provides insight into how the algorithm is splitting the data for each K.
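
Here is a hedged sketch of the elbow approach from point 1 above, reusing the illustrative kmeans
function from the earlier sketch on a hypothetical 2-D data set:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.default_rng(0).random((200, 2))  # hypothetical data set

    ks = range(1, 11)
    mean_dists = []
    for k in ks:
        centroids, labels, _ = kmeans(data, k, seed=0)
        # Mean Euclidean distance from each point to its assigned centroid
        mean_dists.append(np.linalg.norm(data - centroids[labels], axis=1).mean())

    plt.plot(list(ks), mean_dists, marker="o")
    plt.xlabel("K (number of clusters)")
    plt.ylabel("Mean distance to centroid")
    plt.show()  # look for the "elbow" where the curve's decrease flattens out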
