An Initial Seed Selection Algorithm
Short communication
Abstract
K-means is one of the most widely used clustering algorithms in various disciplines, especially for large
datasets. However, the method is known to be highly sensitive to the initial seed selection of cluster centers.
K-means++ has been proposed to overcome this problem and has been shown to have better accuracy
and computational efficiency than k-means. In many clustering problems though – such as when
classifying georeferenced data for mapping applications – standardization of clustering methodology,
specifically, the ability to arrive at the same cluster assignment for every run of the method i.e. replicability
of the methodology, may be of greater significance than any perceived measure of accuracy, especially
when the solution is known to be non-unique, as in the case of k-means clustering. Here we propose a
simple initial seed selection algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the “deepest valleys” or greatest gaps in the dataset. Thus, it incorporates a measure to
maximize distance between consecutive cluster centers which augments the conventional k-means
optimization for minimum distance between cluster center and cluster members. Unlike existing
initialization methods, no additional parameters or degrees of freedom are introduced to the clustering
algorithm. This improves the replicability of cluster assignments by as much as 100% over k-means and
k-means++, virtually reducing the variance over different runs to zero, without introducing any additional
parameters to the clustering process. Further, the proposed method is more computationally efficient than
k-means++ and in some cases, more accurate.
Highlights
► The method does not introduce any new parameters to the clustering algorithm.
► The method is especially suited for clustering of georeferenced data for mapping.
Keywords
Classification;
Grouping of data;
Natural breaks;
Jenks natural breaks;
Natural breaks ArcGIS;
Jenks ArcGIS
1. Introduction
Clustering or classification of data into groups that represent some measure of homogeneity across a given variable range, or across the values of multiple variables, is a much analyzed and studied problem in pattern recognition. K-means clustering is one of the most widely used methods for implementing a solution to this problem and for assigning data into clusters. The method in its initial formulation was first proposed by MacQueen in 1967 (MacQueen 1967), though the approximation developed by Lloyd in 1982 (Lloyd 1982) has proven to be the most popular in application. The method assumes a priori knowledge of the number of clusters k and requires
seeding with initial values of centers of these clusters in order to be implemented. These initial
seed values have been shown to be an important determinant of the eventual assignment of data
to clusters. In other words, k-means clustering is highly sensitive to the initial seed selection for the cluster centers.
K-means++ has been proposed to overcome this problem and has been shown to produce clustering of better accuracy and computational efficiency (Ostrovsky, Rabani et al. 2006; Arthur and Vassilvitskii 2007). The algorithm assesses the performance of the initial seed selection based on the sum of squared differences between the members of a cluster and the cluster center, normalized to data size. While this is a worthwhile means of assessing method performance, it may be noted that in many clustering applications, the replicability of the resultant cluster assignment can be much more desirable than the homogeneity of the clusters.
We encountered one such application of the clustering problem while trying to cluster
georeferenced data into classes for mapping and visualization through a Geographical
Information Systems (GIS) software suite. The commercially available GIS software ArcGIS, for instance, utilizes a proprietary modification of the Jenks natural breaks algorithm (Jenks 1967) to classify values of a variable for visualization in maps (ArcGIS 2009). The classification this method obtains reproduces itself with remarkable consistency on each run: the clustering bounds do not vary from run to run, even with variable values specified to eleven significant figures.
Jenks’ algorithm differs only slightly from k-means clustering. K-means using Lloyd’s algorithm minimizes the cost function C, defined in Equation 2;

C = (1/n) Σ_{i=1}^{n} dist(di, cj)^2                Equation 2
where n is the data size or number of data points, k is the number of clusters and dist(di, cj) computes the Euclidean distance between point di and its closest center cj. The algorithm runs as follows:
a) Select k initial cluster centers c1,…,ck (the initial seed).
b) Calculate the minimum cost function C, assigning data points d1,…,dn to their respective closest cluster centers.
c) Recalculate each cluster center cj as the mean of the data points assigned to it.
d) Repeat steps b) and c) until the cluster assignments no longer change.
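For reference, a minimal NumPy sketch of steps a) through d) and of the cost C in Equation 2, written for clustering along a single attribute (the function and variable names are illustrative, not the implementation used in the study):

```python
import numpy as np

def lloyd_kmeans(d, centers, max_iter=100):
    """Lloyd's k-means on a 1-D array d, started from the given initial centers."""
    d = np.asarray(d, dtype=float)
    c = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        # Step b: assign every point to its closest center.
        labels = np.argmin(np.abs(d[:, None] - c[None, :]), axis=1)
        # Step c: recompute every center as the mean of its members.
        new_c = np.array([d[labels == j].mean() if np.any(labels == j) else c[j]
                          for j in range(len(c))])
        if np.allclose(new_c, c):   # step d: stop when the centers no longer move
            break
        c = new_c
    labels = np.argmin(np.abs(d[:, None] - c[None, :]), axis=1)
    # Equation 2: squared distances to the closest centers, normalized by the data size n.
    cost = np.mean((d - c[labels]) ** 2)
    return c, labels, cost
```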
Jenks’ algorithm differs in that, instead of C, it minimizes the cost function J, defined in Equation 3;
Equation 3
As can be seen in Equation 3, Jenks’ algorithm not only searches for the minimum distance between data points and the centers of the clusters they belong to, but also for the maximum difference between cluster centers.
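For illustration, one common way to quantify this Jenks-style trade-off is the goodness of variance fit (GVF), which rewards classes that are internally compact relative to the overall spread of the data; the sketch below computes that measure and is not necessarily the exact form of Equation 3 (the function name is illustrative):

```python
import numpy as np

def goodness_of_variance_fit(d, labels):
    """Jenks-style GVF: 1 - (within-class squared deviations / total squared deviations)."""
    d = np.asarray(d, dtype=float)
    labels = np.asarray(labels)
    sdam = np.sum((d - d.mean()) ** 2)                       # deviations from the array mean
    sdcm = sum(np.sum((d[labels == j] - d[labels == j].mean()) ** 2)
               for j in np.unique(labels))                   # deviations from class means
    return 1.0 - sdcm / sdam
```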
If we are trying to develop a methodology for geo-processing, say a utility that studies the scaling characteristics of a city and models the distribution of sizes of housing within different size clusters, it can be essential to have a clustering mechanism that produces almost exactly the same results each time. Drawing inspiration from Jenks’ algorithm, we propose an initial seed selection algorithm for k-means clustering that produces the same clusters on each run. We
compare our results to those obtained by k-means as well as the widely used k-means++ initial seed selection algorithm.
2. Methodology
The k-means++ algorithm selects the initial seed centers as follows:
a) Choose the first center at random from among the data points.
b) Calculate the squared distance of each point from the nearest of all selected centers and sum these squared distances over all points.
c) Choose the next center at random. Calculate the sum of squared distances. Re-select this center and calculate the sum of squared distances again. Repeat a given ‘number of trials’ and select the center with the minimum sum of squared distances as the next center.
Steps b) and c) are repeated until all k centers have been selected.
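A minimal sketch of this seeding procedure, assuming (as in Arthur and Vassilvitskii’s formulation) that candidate centers are drawn with probability proportional to their squared distance from the already selected centers; the function name and the n_trials parameter are illustrative:

```python
import numpy as np

def kmeanspp_seed(d, k, n_trials=5, seed=None):
    """k-means++-style seeding for 1-D data, keeping the best of n_trials candidates per center."""
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    centers = [d[rng.integers(len(d))]]                      # step a: first center at random
    for _ in range(1, k):
        # Step b: squared distance of every point to its nearest selected center.
        dist2 = np.min((d[:, None] - np.asarray(centers)[None, :]) ** 2, axis=1)
        best_center, best_cost = None, np.inf
        for _ in range(n_trials):                            # step c: evaluate several candidates
            candidate = d[rng.choice(len(d), p=dist2 / dist2.sum())]
            cost = np.sum(np.minimum(dist2, (d - candidate) ** 2))
            if cost < best_cost:
                best_center, best_cost = candidate, cost
        centers.append(best_center)
    return np.asarray(centers)
```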
The methodology is novel in that unlike other initial seed selection algorithms, it does not
introduce any new parameters (such as number of trials for k-means++) in the clustering
algorithm thereby avoiding additional degrees of freedom. By clustering along the deepest
valleys or highest gaps in the data series, the method introduces a measure of distance between
cluster centers augmenting the k-means optimization for minimum distance between cluster
center and cluster members. Additionally, unlike initialization algorithms such as k-means++, there is no randomness involved in the algorithm and the initial clusters obtained are always the same.
We propose the following method for calculating initial seed centers of k-means clustering along
one attribute.
a) Sort the data points in terms of increasing magnitude d1,…,dn such that d1 has the smallest value and dn the largest.
b) Calculate the Euclidean distances Di between consecutive points di and di+1 as shown in Equation 4;

Di = dist(di, di+1) = |di+1 - di|,   i = 1,…,n-1                Equation 4

c) Sort D in descending order without changing the index i of each Di. Identify the k-1 indices i1,…,i(k-1) corresponding to the k-1 largest distances Di.
d) Sort i1,…,i(k-1) in ascending order. The set (i1,…,i(k-1),ik) now forms the set of indices of data values di which serve as the upper bounds of clusters 1,…,k; where ik = n.
e) The corresponding set of indices of data values di which serve as the lower bounds of clusters 1,…,k is (1, i1+1,…,i(k-1)+1).
f) The values of cluster centers c are now simply calculated as the mean of the di values falling within the upper and lower bounds calculated above. This set of cluster centers (c1,…,ck) serves as the initial seed for k-means clustering.
The initial cluster boundaries are thus drawn where the gap between consecutive data values is the highest, that is, where the data has the deepest ‘valleys’.
The method can be easily implemented for small to medium size datasets using standard sorting and averaging functions, as in the sketch below.
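A minimal sketch of steps a) through f), assuming one-dimensional (single-attribute) input; the function name is illustrative:

```python
import numpy as np

def deepest_valley_seed(d, k):
    """Initial seed centers for 1-D k-means, drawn along the k-1 largest gaps in the sorted data."""
    d = np.sort(np.asarray(d, dtype=float))                  # step a: sort in increasing order
    if k < 2:
        return np.array([d.mean()])
    gaps = np.diff(d)                                        # step b: distances between consecutive points
    cuts = np.sort(np.argsort(gaps)[-(k - 1):])              # steps c, d: indices of the k-1 largest gaps
    upper = np.append(cuts, len(d) - 1)                      # upper bounds of the k clusters (0-based)
    lower = np.insert(cuts + 1, 0, 0)                        # step e: lower bounds of the k clusters
    return np.array([d[lo:hi + 1].mean()                     # step f: seed = mean of each segment
                     for lo, hi in zip(lower, upper)])
```

The resulting seed array can then be handed to any standard k-means routine; scikit-learn's KMeans, for example, accepts a user-supplied init array (reshaped to (k, 1) for a single attribute) with n_init=1, so the subsequent Lloyd iterations always start from the same point.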
To test the replicability of the resulting cluster assignments, the same data was clustered using the proposed methodology ten times. The variance observed in cluster centers over these ten runs was calculated and averaged over the number of cluster centers. For comparison, a similar analysis was performed employing k-means and the widely used k-means++ initial seeding methodology, and the variance averaged over the number of cluster centers was calculated.
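A sketch of this evaluation, assuming cluster_fn(d, k) stands for one full clustering run that returns the k final cluster centers:

```python
import numpy as np

def mean_center_variance(cluster_fn, d, k, runs=10):
    """Variance of each final center across repeated runs, averaged over the k centers.

    Centers are sorted within each run so that they can be matched across runs."""
    centers = np.array([np.sort(cluster_fn(d, k)) for _ in range(runs)])   # shape: (runs, k)
    return centers.var(axis=0).mean()
```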
The analysis was run for five different datasets. The first dataset is the popular Iris dataset from the UC Irvine Machine Learning Repository (UCIMLR) (Fisher 1936). Attribute one of the data was used for clustering. The data, having 150 points, was classified into 5 clusters. The second dataset is US census block-wise population data for the Metropolitan Statistical Area (MSA) of St. George, Utah. The population, land area and water body area data were downloaded from the US Census Bureau website (US Census Bureau 2010). The area of each census block was calculated by summing its water and land areas. The population density for each census block was then estimated by dividing the population of the block by its area. The data, having 1450 points, was clustered along population density into 10 clusters. The third dataset was the Abalone dataset from UCIMLR (Nash, Sellers et al. 1994). Attribute 5 was used for clustering. The data has 4177 instances and was clustered into 25 classes. The fourth dataset was the cloud cover data from Philippe Collard (Collard 1989). Data in column 3 was used for cluster analysis. The data, having 1024 points, was clustered into 50 clusters. The fifth dataset was randomly generated normally distributed data with mean 10 and a standard deviation of 1.
3. Results
While the objective of developing this method is to produce more replicable results, the sums of squared differences between cluster members and cluster centers obtained with the proposed method and with k-means++ were also compared; they are juxtaposed in Table 1. As can be seen in Table 1, k-means++ in general produces more accurate clustering, though for two of the five datasets our method produced better results.
Table 1: Sum of Squared Differences between Cluster Members and their Closest Centers
As shown in Table 2, our proposed method is also significantly faster than k-means++, clustering as much as 89% faster in some cases. The advantage in clustering speed is obtained during the initial seed selection, where k-means++ takes significantly longer than the proposed method.
The primary objective of the proposed method, however, is improving method replicability. The results are presented in Table 3. As can be seen, in all three cases the variance was virtually reduced to zero using our method, at least a 90% reduction relative to k-means and k-means++.
Table 3: Variance of Centers over Ten (10) Runs, Averaged over the Number of Clusters
The initial seed selection method we propose reduces the variance of cluster centers over repeated runs to zero, accurate up to eleven significant figures, for clustering along one attribute or dimension.
The further advantage of the proposed initialization method is that unlike k-means++ it does not
introduce any new variables within the analysis, such as the number of trials. Almost perfect
replicability and avoidance of additional degrees of freedom make the method especially suited for inclusion as part of a protocol, standard methodology or algorithm. Further, the method
also produces results faster than k-means++ and hence is more computationally efficient at least
in two-dimensional space.
The method has applications in all areas of data analysis where a Jenks-style ‘natural’ classification with a high level of replicability may be needed. It has the following distinct advantages:
No additional degrees of freedom or modifiable parameters are introduced that may need calibration or tuning.
The clustering may be more ‘natural’ in the manner of Jenks’ algorithm, considering that the initial cluster boundaries are drawn along the largest gaps in the data.
The above advantages can render the initialization method highly useful in all areas where large datasets have to be handled or a ‘natural’ classification of data is sought. This includes areas like bioanalysis, for instance, where density-based clustering is commonly deployed; the method can be made part of a more detailed analysis regime with confidence that the replicability of the results will not be negatively affected by the clustering algorithm. In the areas of market segmentation and computer vision, the method can be used to standardize clustering results. This makes the method especially suited to utility development for GIS applications.
References
ArcGIS (2009). "What is the source for ArcMap's Jenks Optimization classification?" Ask a Cartographer. Retrieved April 15, 2011, from http://mappingcenter.esri.com/index.cfm?fa=ask.answers&q=541.
Arthur, D. and S. Vassilvitskii (2007). k-means++: the advantages of careful seeding. Proceedings of the
eighteenth annual ACM-SIAM symposium on Discrete algorithms. New Orleans, Louisiana,
Society for Industrial and Applied Mathematics: 1027-1035.
Collard, P. (1989). Philippe Collard's cloud cover data.
Fisher, R. A. (1936). Iris data set. R. A. Fisher, UC Irvine Machine Learning Repository.
Jenks, G. F. (1967). "The data model concept in statistical mapping." International Yearbook of
Cartography 7: 186-190.
Lloyd, S. (1982). "Least squares quantization in PCM." IEEE Transactions on Information Theory 28(2): 129-137.
MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations.
Proceedings of the fifth Berkeley symposium on mathematical statistics and probability. L. M. Le
Cam and J. Neyman, University of California Press. I: 281-297.
Nash, W. J., T. L. Sellers, et al. (1994). Abalone data set. S. F. Division, UC Irvine Machine Learning
Repository.
Ostrovsky, R., Y. Rabani, et al. (2006). The effectiveness of Lloyd-type methods for the k-means problem. Symposium on Foundations of Computer Science.
Peña, J. M., J. A. Lozano, et al. (1999). "An empirical comparison of four initialization methods for the K-
Means algorithm." Pattern Recognition Letters 20(10): 1027-1040.
US Census Bureau. (2010). "US census 2010." from http://www2.census.gov/census_2010/04-Summary_File_1/.