Machine Learning With Python - Machine Learning Algorithms - K-Means Clustering Algorithm
Clustering System
K-means is an unsupervised, partitional clustering algorithm that groups data into k clusters by computing centroids, using the Euclidean or Manhattan method for distance calculation. Each object is assigned to the cluster whose centroid lies at the minimum distance.
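As a quick illustration of the two distance measures mentioned above, here is a small sketch computing both the Euclidean and the Manhattan distance between two sample points (the points themselves are made up for illustration):

import numpy as np

# Two sample points (made up for illustration)
p = np.array([1.0, 1.0])
q = np.array([4.0, 5.0])

# Euclidean distance: sqrt((x1 - x2)^2 + (y1 - y2)^2)
euclidean = np.linalg.norm(p - q)      # 5.0

# Manhattan distance: |x1 - x2| + |y1 - y2|
manhattan = np.sum(np.abs(p - q))      # 7.0

print(euclidean, manhattan)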
1. First, initialize the number of clusters, K (the Elbow method is generally used to select the number of clusters).
2. Randomly select k data points as centroids. A centroid is the imaginary or real location representing the center of a cluster.
3. Assign each data item to its closest centroid, then update each centroid's coordinates by averaging the coordinates of the items assigned to that group so far.
4. Repeat the process for a number of iterations, until successive iterations cluster the data items into the same groups.
Let's kick off K-means clustering from scratch with a simple example. Suppose we have the data points (1,1), (1.5,2), (3,4), (5,7), (3.5,5), (4.5,5) and (3.5,4.5), and let k = 2, i.e. the dataset should be grouped into two clusters. Here we are using the Euclidean distance method.
Step 2: Since k = 2, we randomly select two centroids, c1 = (1,1) and c2 = (5,7).
Step 3: Now, we calculate the distance from each point to each centroid using the Euclidean distance formula (i.e. Pythagoras' theorem): d = sqrt((x1 - x2)^2 + (y1 - y2)^2).
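As a sanity check, the short sketch below reproduces one row of the table that follows, computing D1 and D2 for the point (1.5, 2) against the two centroids:

import numpy as np

point = np.array([1.5, 2.0])
c1 = np.array([1.0, 1.0])   # first centroid
c2 = np.array([5.0, 7.0])   # second centroid

d1 = np.linalg.norm(point - c1)   # ≈ 1.12
d2 = np.linalg.norm(point - c2)   # ≈ 6.10
print(round(d1, 2), round(d2, 2))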
ITERATION 01
Point        C1       D1      C2       D2      Remarks
(1, 1)       (1, 1)   0       (5, 7)   7.21    D1 < D2 : (1, 1) belongs to c1
(1.5, 2)     (1, 1)   1.12    (5, 7)   6.10    D1 < D2 : (1.5, 2) belongs to c1
(3, 4)       (1, 1)   3.61    (5, 7)   3.61    D1 = D2 : (3, 4) is assigned to c1 (tie)
(5, 7)       (1, 1)   7.21    (5, 7)   0       D1 > D2 : (5, 7) belongs to c2
(3.5, 5)     (1, 1)   4.72    (5, 7)   2.50    D1 > D2 : (3.5, 5) belongs to c2
(4.5, 5)     (1, 1)   5.32    (5, 7)   2.06    D1 > D2 : (4.5, 5) belongs to c2
(3.5, 4.5)   (1, 1)   4.30    (5, 7)   2.91    D1 > D2 : (3.5, 4.5) belongs to c2
At the end of iteration 1, cluster c1 contains (1,1), (1.5,2) and (3,4), whereas cluster c2 contains (5,7), (3.5,5), (4.5,5) and (3.5,4.5). Here, the new centroid is the arithmetic mean of all the data items in a cluster, which gives c1 ≈ (1.83, 2.33) and c2 ≈ (4.13, 5.38).
Repeating the distance calculations against these new centroids (iteration 2), the point (3,4) is now closer to c2, so cluster c1 contains (1,1) and (1.5,2), whereas cluster c2 contains (3,4), (5,7), (3.5,5), (4.5,5) and (3.5,4.5). Again, each new centroid is the arithmetic mean of all the data items in its cluster.
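The centroid update itself is just a per-coordinate average. A small sketch of that step, using the iteration 1 clusters from above:

import numpy as np

cluster_1 = np.array([[1, 1], [1.5, 2], [3, 4]])
cluster_2 = np.array([[5, 7], [3.5, 5], [4.5, 5], [3.5, 4.5]])

new_c1 = cluster_1.mean(axis=0)   # ≈ [1.83, 2.33]
new_c2 = cluster_2.mean(axis=0)   # ≈ [4.13, 5.38]
print(new_c1, new_c2)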
For a from-scratch implementation we can wrap these steps in a K_Means class:

class K_Means:

We find the Euclidean distance from each point to all the centroids. For efficiency it is better to use the NumPy function np.linalg.norm(point1 - point2, axis=0). Inside the fitting loop, each iteration first creates an empty list for every cluster index:

    self.classes = {}
    for j in range(self.k):
        self.classes[j] = []

Once every point has been assigned to its nearest centroid, the previous centroids are saved and each centroid is recomputed as the average of the points assigned to it; a flag is then used to check whether the centroids have stopped moving:

    previous = dict(self.centroids)
    for cluster_index in self.classes:
        self.centroids[cluster_index] = np.average(self.classes[cluster_index], axis=0)
    isOptimal = True
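The fragments above omit the surrounding method definitions. Below is a minimal, self-contained sketch of how such a K_Means class could be put together; the constructor parameters (k, tolerance, max_iterations), the initialization choice and the predict method are assumptions made for illustration, not necessarily the original code:

import numpy as np

class K_Means:
    def __init__(self, k=2, tolerance=0.001, max_iterations=500):
        self.k = k                          # number of clusters (assumed default)
        self.tolerance = tolerance          # stop when centroids move less than this
        self.max_iterations = max_iterations

    def fit(self, data):
        # Step 2: use the first k points as the initial centroids (one simple choice)
        self.centroids = {i: data[i] for i in range(self.k)}

        for _ in range(self.max_iterations):
            # one empty list per cluster index
            self.classes = {j: [] for j in range(self.k)}

            # Step 3a: assign every point to its nearest centroid
            for point in data:
                distances = [np.linalg.norm(point - self.centroids[c]) for c in self.centroids]
                self.classes[np.argmin(distances)].append(point)

            # Step 3b: recompute each centroid as the mean of its assigned points
            previous = dict(self.centroids)
            for cluster_index in self.classes:
                if self.classes[cluster_index]:
                    self.centroids[cluster_index] = np.average(self.classes[cluster_index], axis=0)

            # Step 4: stop once no centroid has moved more than the tolerance
            isOptimal = True
            for c in self.centroids:
                if np.linalg.norm(self.centroids[c] - previous[c]) > self.tolerance:
                    isOptimal = False
            if isOptimal:
                break

    def predict(self, point):
        # return the index of the nearest centroid
        distances = [np.linalg.norm(point - self.centroids[c]) for c in self.centroids]
        return int(np.argmin(distances))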
At the end of the first iteration, the centroid values are recalculated, usually by taking the arithmetic mean of all points in the cluster. In every iteration, new centroid values are calculated, until successive iterations produce the same centroid values.
Here we have created 3 groups of two-dimensional data, each with a different center, and we have defined the value of k as 3. Now, let's fit the model we created (a sketch of this step is shown below).
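The original data-generation code is not shown here, so the sketch below uses scikit-learn's make_blobs purely as a stand-in way of creating three two-dimensional groups with different centers, and then fits the K_Means class sketched above:

import numpy as np
from sklearn.datasets import make_blobs

# Three 2-D blobs with different centers (stand-in for the original data)
X, _ = make_blobs(n_samples=300, centers=3, n_features=2, random_state=42)

model = K_Means(k=3)   # K_Means class as sketched above
model.fit(X)

print(model.centroids)                  # the three learned centroids
print(model.predict(np.array([0, 0])))  # cluster index for a new point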
CHOOSING VALUE OF K
While working with k-means clustering from scratch, one thing we must keep in mind is the number of clusters, k. We should make sure that we choose the optimum number of clusters for the given dataset. But here a question arises: how do we choose the optimum value of k? We use the Elbow method, which is generally used for analyzing the optimum value of k.
The Elbow method is based on the principle that "the sum of squared distances from every data point to its corresponding cluster centroid should be as small as possible".
1. Run the algorithm for a range of values of K (for example, K = 1 to 10).
2. For each value of K, calculate the sum of squared distances of every data point from its corresponding cluster centroid; this quantity is called WCSS (Within-Cluster Sum of Squares).
3. Plot WCSS against the corresponding values of K.
4. To select the value of k, choose the value where there is a bend (knee) in the plot, i.e. where WCSS stops decreasing rapidly (see the sketch below).
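A minimal sketch of the Elbow method, here using scikit-learn's KMeans (whose inertia_ attribute is exactly the WCSS) rather than the from-scratch class, purely to keep the example short:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, n_features=2, random_state=42)

wcss = []
k_values = range(1, 11)
for k in k_values:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)   # within-cluster sum of squares for this k

plt.plot(k_values, wcss, marker='o')
plt.xlabel('Number of clusters K')
plt.ylabel('WCSS')
plt.title('Elbow method')
plt.show()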
To compare the cluster assignments with known labels, a confusion matrix can be plotted. The snippet below assumes y_true holds the true labels and y_pred the predicted cluster labels; the example arrays are placeholders just to make it self-contained:

import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Placeholder labels purely for illustration; in practice y_true are the known
# labels and y_pred are the cluster assignments from the model
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
cm = confusion_matrix(y_true, y_pred).astype(float)  # cast so the '.2f' format works

plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
fmt = '.2f'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, format(cm[i, j], fmt),
             horizontalalignment="center",
             color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True/Actual label')
plt.xlabel('Predicted label')
plt.show()
PROS OF K-MEANS
1. Relatively simple to learn and understand, as the algorithm depends solely on a distance calculation such as the Euclidean method.
2. K-means works by minimizing the sum of squared distances, hence it is guaranteed to converge (to a local optimum).
3. The computational cost is O(K*n*d) per iteration, hence K-means is fast and efficient.
CONS OF K-MEANS
APPLICATIONS OF K-MEANS
· Market segmentation
· Document Clustering
· Image segmentation
· Image compression
· Customer segmentation
· Analyzing the trend on dynamic data