Article
Optimized and Efficient Color Prediction Algorithms Using
Mask R-CNN
Rajesh Kannan Megalingam *, Balla Tanmayi, Gadde Sakhita Sree, Gunnam Monika Reddy,
Inti Rohith Sri Krishna and Sreejith S. Pai
Abstract: Color-cognizant capability has a significant impact on service robots for object detection
based on color, traffic signal interpretation for autonomous vehicles, etc. Conventional clustering
algorithms such as K-means and mean shift can be used for predicting the dominant color of an
image by mapping the pixels from RGB to HSV and clustering them based on HSV values, thereby
picking the cluster with the most pixels as the dominant color of the image, but these approaches
are not solely dedicated to that outcome. This research's goal is to introduce novel techniques
for predicting the dominant color of objects in images, as well as pixel extraction concepts, which
allow these algorithms to be better optimized for time and efficiency. This investigation appraises
the propriety of integrating object detection and color prediction algorithms. We introduce a dominant
color prediction color map model and two new algorithms: average windowing and pixel skip. To
predict objects in an image prior to color prediction, we combined the Mask R-CNN framework with
our proposed techniques. Verification of our approach is done by creating a benchmark dataset of
200 images and comparing the color predicted by the algorithms with the actual color. The accuracy and runtime
of existing techniques are compared with those of the proposed algorithms to prove the superiority
of our algorithms. The viability of the proposed algorithms was demonstrated by scores of 95.4%
accuracy and color prediction time of 9.2 s for the PXS algorithm and corresponding values of 93.6%
and 6.5 s for the AVW algorithm.
Keywords: average windowing (AVW); pixel skip (PXS); dominant color prediction color map (DCPCM); HSV (hue saturation value); Mask R-CNN; object detection; color prediction
1. Introduction

Proliferating deployment of robotics and autonomous applications in diverse aspects
of everyday life in recent times has aroused the keen interest of researchers in the domain
of detection and identification of objects. Such attributes outfit robots with a capability
to mimic human behavior. Deep learning techniques have become popular these days
for implementing object detection. Some of the applications of deep learning techniques
apart from object detection include facial recognition [1,2], gesture recognition [3,4], health
care [5,6], image enhancement [7], etc. Object detection assumes a dominant role in the field
of automation, harmonized with increased precision and simplification of work. Object
detection gadgets are conspicuous in video surveillance, security cameras, and self-driving
systems [8]. Object detection models introduced in the past decade include region-based
convolutional neural networks (R-CNN), Fast R-CNN, you only look once (YOLO), and
Mask R-CNN [9]. The deployed models are enhanced versions of primitive CNN models.
CNN models are also widely used for object recognition and classification [7]. YOLO
and R-CNN are esteemed for their accurate detection of objects [10,11]. The pretrained
model employed in Mask R-CNN is used for detecting objects in the input image and as
an input for our proposed algorithms that are discussed in the later sections. Dominant
color extraction from an image is a frequently used technique in applications such as image
segmentation, medical image analysis, image retrieval, and image mixing [12]. Although
many algorithms are available for detection of the dominant color of an object, they are
based on clustering techniques that are deficient in time complexity or in accuracy in
color prediction. Hence, we have introduced two algorithms based on new techniques for
predicting the dominant color of an object in the image. The two algorithms, AVW and
PXS, use averaging and skipping techniques, respectively, to select pixels from an input
image. These algorithms offer substantial savings in time, as most of the pixels in the
image are eliminated while predicting color (as discussed in later sections),
without compromising on accuracy. The DCPCM model predicts the color of
all input pixels given by the proposed algorithms.
Initially, YOLO was selected for object detection, as it is one of the fastest object
detection models [10]. However, YOLO does not provide a mask along with the bounding box for
the detected object, which results in reduced accuracy, as some background pixels inside
the bounding box that are not part of the object may contribute to the color prediction and
may disrupt the final color predicted. Excluding the pixels that belong to the background
of the object, i.e., those outside the masked portion used by the AVW and PXS algorithms,
facilitates increased accuracy in predicting the (dominant) color of an object. Concurrently,
the runtime for predicting the object's color is significantly diminished as the number of
pixels considered for prediction is reduced. Unlike other object detection algorithms,
Mask R-CNN, outfitted with a segmentation mask for each object detected, is preferred for
integration with our proposed algorithms. Mask R-CNN has also been found to
be more efficient in detecting the type of vehicles, for video surveillance and self-driving
systems [13], where color prediction of objects can be an added advantage. Nevertheless,
the AVW and PXS algorithms are flexible enough to be unified with any application that
requires color prediction. The canny edge detection approach is used to identify the
nonuniformity percentage for every object detected in an image. This technique detects
a wide variety of edges in images by noise suppression [14], yet it is applicable only to
grayscale images, returning a binary matrix of the same dimensions as the image, stating
whether the respective pixel contributes to an edge or not. Average pixel width (APW) is the average of the entire
binary matrix. APW is more likely to give the percentage of nonuniformity in an image [15],
which is used to model the algorithms propounded in Section 5.
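To make the APW computation concrete, the following is a minimal sketch, assuming OpenCV's Canny detector; the threshold values are illustrative defaults, not parameters reported in this work.

```python
import cv2
import numpy as np

def average_pixel_width(image_bgr, low_thresh=100, high_thresh=200):
    """Estimate the nonuniformity percentage (APW) of an image.

    Canny operates on grayscale input and returns a binary edge map;
    APW is the mean of that map expressed as a percentage, i.e., the
    share of pixels that lie on an edge.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # Canny needs grayscale
    edges = cv2.Canny(gray, low_thresh, high_thresh)     # values are 0 or 255
    return 100.0 * np.count_nonzero(edges) / edges.size
```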
HSV is a color model of cylindrical shape that transforms the red green blue (RGB)
color model into easily comprehensible dimensions [16]. Hue determines the angle of color
on the HSV and RGB circles. Saturation represents the amount of color being used, and
value represents the level of brightness of the respective color. Owing to its balanced
color perception capability, the HSV color space is adopted in the DCPCM model [17]. As
elaborated in Section 3, the entire HSV color space, marshalled in the proposed model, is
mapped into 15 basic color classes to provide better understanding and readability of the DCPCM's output.
The existing techniques of color prediction are based on clustering. Clustering refers to the
process of grouping the available data points (pixels in the context of color prediction) into
different clusters based on their similarity with certain parameters such as color, features,
etc. Clustering is commonly applied to predict the dominant color of an image [18,19];
K-means, mean shift, and fuzzy C-means are a few popular clustering models [20]. These
clustering algorithms excel in various machine learning applications [21–26], but none of
them is dedicated to color prediction; hence, this calls for a simple dedicated algorithm
for prediction of the dominant color in an image.
An edge detector scheme, adaptable for real-time applications, is presented in [27];
the results presented in [27] show that the proposed method of edge detection outperforms
other existing methods in terms of accuracy. A model that discerns humans from the
background in a video and identifies the actions performed by him or her was proposed
in [28]. A scheme to discriminate between red and black grapes using image detection and
color prediction schemes was introduced in [29]. A hierarchical interactive refinement
network, which facilitates effective preservation of edge structures while detecting salient
objects, was illustrated in [30]. An efficient method that exploited mean-shift clustering
was elucidated in [31] for the discovery of common visual patterns (CVPs) between two
images. An enhanced version of Mask R-CNN for the detection of cucumbers in greenhouses
was demonstrated in [32]. Recently, reports were published on an application of HSV
combined with K-means clustering in the detection of food spoilage [33]. A comparative
analysis of the execution times between the software implementation and register-transfer
level (RTL) design of the Sobel edge detection algorithm was presented in [34]. A scheme
for identification of an object, based on recognition of its color and contour, through digital
image processing techniques was presented in [35]. A technique for the segregation of
objects in an image based on their color alleviates the complexities of boundary detection [36].
A hybrid method that extracts the spatial information and color for identification of the
moving objects in a video was presented in [37].
We summarize below the limitations of the clustering algorithms in color prediction.
1. The clustering algorithms are not exclusively designed for color prediction. They also
excel in various machine learning applications;
2. The K-value is not easy to predict in the K-means algorithm, which does not work well
with global clusters. In addition, the time complexity of K-means increases for bigger
datasets;
3. The mini batch K-means algorithm has lower accuracy than K-means;
4. Time complexity is a major disadvantage of the mean shift algorithm;
5. GMM takes more time to converge and is hence slower than K-means;
6. Fuzzy C-means requires a greater number of iterations for better results.
The purpose of this study is to address the limitations of clustering algorithms in
color prediction. We propose a model that is exclusively used for color prediction and
not for any other purpose. In addition, we propose two new algorithms that can extract
dominant colors accurately in less time. In view of this, a dominant color prediction color
map (DCPCM) model and two new algorithms, average windowing (AVW) and pixel skip
(PXS), are presented in this work. To predict the color of the object in an image, we need to
detect the object first. For this purpose, we integrate the Mask R-CNN algorithm with our
proposed techniques to predict the objects in the image prior to color prediction. Rather
than considering all the pixels of the object, the AVW algorithm takes the average of a
section/window of pixels from the image at a time and repeats the averaging for the
entire object in the image. Because of this, the overall time for color prediction decreases.
At the same time, it maintains good accuracy because the window size is chosen according
to the uniformity of the color in the object. On the other hand, the PXS algorithm skips the
pixels selectively based on the uniformity of the color in the object. This results in less time
for color prediction without compromising much accuracy.
The main contributions of this research work are listed below:
1. A color prediction model called DCPCM that is exclusively designed by uniquely
categorizing HSV domain values into 15 different color classes, to be used only for
color prediction and not for any other purpose. The main aim of the given model
is to reduce the color spectrum into 15 commonly used colors, thereby reducing the
complexity and the runtime.
2. Two new algorithms called AVW and PXS to selectively extract pixels from an image,
using precomputed formulae, thereby reducing the runtime and maintaining the
accuracy;
3. Integration of the Mask R-CNN algorithm with the proposed techniques for identifying
the objects in the image prior to pixel extraction and dominant color prediction;
4. Creation of a benchmark data set with 200 images of single-object, multi-object,
uniformly colored, and multicolored scenes of various sizes, to test the proposed algorithms
and compare them with the existing clustering techniques stated above.
The rest of the paper is organized as follows. Section 2 summarizes the entire working
of the system using an architecture diagram. The technical aspects of the work start from
Section 3, which presents the DCPCM model; Section 4 then describes the AVW and PXS
algorithms in a detailed manner. Section 5 presents the experimental results, followed by
Section 6, which deals with proving the supremacy of AVW and PXS over the existing
techniques for color prediction. Section 7 provides discussion on the results and concludes
the work.

2. System Architecture

The system architecture diagram shown in Figure 1 presents the workflow of the
proposed AVW and PXS algorithms that predict the object class using the Mask R-CNN
framework, along with its color property. Consider an input image that contains two
bowls, a knife, and a spoon, as shown in Figure 1. When the proposed algorithm AVW/PXS
is executed, this image is assigned and stored in the form of a matrix. Subsequently, the
matrix triggers the object detection function that predicts the objects along with other
parameters like bounding boxes and mask color. If objects are detected in the image, then
the proposed algorithms store the location of the objects, i.e., their bounding box coordinates,
along with their masked portions. After prediction, the masked pixels are filtered out and
sent as an input to the algorithm block, i.e., AVW or PXS. Simultaneously, preprocessing
happens for the given object to calculate a few parameters required for the algorithms
using canny edge detection.
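In code, the workflow of Figure 1 can be sketched as below. The helpers `detect` (a Mask R-CNN wrapper returning class labels, bounding boxes, and Boolean masks) and `predict_color` (AVW or PXS combined with DCPCM) are assumed interfaces for illustration, not the authors' implementation.

```python
import cv2

def predict_objects_and_colors(path, detect, predict_color):
    """Figure 1 pipeline: detect objects, filter masked pixels,
    and predict each object's dominant color."""
    image = cv2.imread(path)              # input image stored as a matrix
    detections = detect(image)            # classes, boxes, Boolean masks
    if not detections:
        print("No object detected")
        return []
    results = []
    for det in detections:
        if det["class"] == "person":      # DCPCM skips persons (see below)
            continue
        pixels = image[det["mask"]]       # keep only the masked pixels
        results.append((det["class"], det["box"], predict_color(pixels)))
    return results
```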
Figure 1. System architecture of dominant color prediction using AVW and PXS algorithms. The
output image in the given figure detected 5 objects along with the dominant color. The detection
results in the output image (left to right) are as follows: gray dining table (0.974); cyan bowl (0.961);
orange bowl (0.989); pink spoon (0.975); and red knife (0.989). The output format is as follows:
dominant color of the predicted object, predicted object class (confidence score of the predicted
object). Each detected object is surrounded by a bounding box.
The DCPCM model is integrated with both AVW and PXS algorithms, which map pixels
to their respective color class among the fifteen predetermined colors. After the execution
of AVW/PXS, the output is displayed in a separate window that contains the objects
surrounded by their bounding boxes, the object's class, and its dominant color. This process
repeats for every object detected by the Mask R-CNN framework. If no objects are found in
the image, then the algorithm simply displays "No object detected" and the same input
image is shown as the output in the Jupyter notebook. The DCPCM model skips the
color prediction step if the object detected is a person, to avoid controversies, as the model
could predict the costume color as the color of the person. This architecture is helpful in the
classification of similar objects based on color, which has wide applications in the service
robot sector.
$$\text{Color} = \begin{cases} \text{Orange} & \text{if } H(^{\circ}) \in [15, 45] \\ \text{Yellow-Green} & \text{if } H(^{\circ}) \in [46, 110] \\ \text{Green-Cyan} & \text{if } H(^{\circ}) \in [111, 175] \\ \text{Cyan-Blue} & \text{if } H(^{\circ}) \in [176, 240] \\ \text{Violet} & \text{if } H(^{\circ}) \in [241, 310] \\ \text{Pink} & \text{otherwise} \end{cases} \quad (3)$$

$$\text{Color} = \begin{cases} \text{Orange} & \text{if } H(^{\circ}) \in [15, 40] \\ \text{Yellow} & \text{if } H(^{\circ}) \in [41, 60] \\ \text{Yellow-Green} & \text{if } H(^{\circ}) \in [61, 110] \\ \text{Green-Cyan} & \text{if } H(^{\circ}) \in [111, 150] \\ \text{Cyan} & \text{if } H(^{\circ}) \in [151, 190] \\ \text{Cyan-Blue} & \text{if } H(^{\circ}) \in [191, 225] \\ \text{Violet} & \text{if } H(^{\circ}) \in [226, 280] \\ \text{Magenta} & \text{if } H(^{\circ}) \in [281, 330] \\ \text{Pink} & \text{otherwise} \end{cases} \quad (4)$$
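A literal transcription of the hue mapping in Equation (4) could read as follows; this sketch covers only the hue dimension for one (S, V) combination, and the function names are ours, not the paper's.

```python
import colorsys

def hue_to_class(h_deg):
    """Map a hue angle in degrees to a DCPCM color class per Equation (4)."""
    bands = [(15, 40, "Orange"), (41, 60, "Yellow"), (61, 110, "Yellow-Green"),
             (111, 150, "Green-Cyan"), (151, 190, "Cyan"), (191, 225, "Cyan-Blue"),
             (226, 280, "Violet"), (281, 330, "Magenta")]
    for low, high, name in bands:
        if low <= h_deg <= high:
            return name
    return "Pink"                          # the 'otherwise' branch

def rgb_to_class(r, g, b):
    """R2H step: RGB (0-255) -> HSV, then classify by hue."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_to_class(h * 360.0)
```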
Predictions for the combination S = “medium” and V = “high”.
$$[A] = \sum_{r=y_1}^{y_2} \sum_{c=x_1}^{x_2} K[r][c] \quad \text{if } I_m[r][c] = [M_c]. \quad (7)$$
$$K[r][c][i] = \begin{cases} 1, & C_{SET}[i] = C\_P(h,s,v) \\ 0, & C_{SET}[i] \neq C\_P(h,s,v) \end{cases} \quad \forall\, i \in [0,15), \text{ where } h,s,v = R2H(I_o[r][c]). \quad (8)$$
Step-3: Iterate through [A] and find the color with the maximum count, which gives us the
Dc (dominant color) of the image. Equation (9) represents the mathematical form
of Step-3.

$$D_c = C_{SET}[i], \text{ where } A[i] > A[j],\ \forall\,(j \in [0,15)) \cap (j \neq i). \quad (9)$$
The time complexity of the all-pixel approach is O(m × n), where m and n are the width
and height of the given object, while the space complexity is O(m × n × 15).
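The three steps above can be condensed into the following sketch. It uses a counter over color classes instead of the paper's m × n × 15 array [K], so the vote counting is equivalent while the space usage differs; `classify` stands in for the DCPCM pixel classifier and `mask` for the Boolean object mask from Mask R-CNN.

```python
from collections import Counter

def dominant_color_all_pixels(image_rgb, mask, classify):
    """All-pixel approach (cf. Equations (7)-(9)): every masked pixel
    votes for its color class; the class with the most votes is D_c."""
    votes = Counter()
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:                      # pixel belongs to the object
                votes[classify(image_rgb[r][c])] += 1
    return votes.most_common(1)[0][0] if votes else None
```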
Step-6: Iterate through [A], and find the color with the maximum count, equivalent to
the Dc (dominant color) of the image. The extraction of the dominant color from the
Color-set is represented in Equation (14).

$$D_c = C_{SET}[i], \text{ where } A[i] > A[j],\ \forall\,(j \in [0,15)) \cap (j \neq i). \quad (14)$$
$$k = \frac{APW}{100} \times T. \quad (15)$$
Let us assume that the k pixels are uniformly distributed in the object (average case). If
we consider that one window has one pixel contributing to an edge (assuming that this one
value does not disrupt the average value), then f = T/k.

According to Equation (15), $f = T \big/ \left( \frac{APW}{100} \times T \right)$, which on simplification yields Equation (16).

$$f = \frac{100}{APW}. \quad (16)$$
Case-2: Object is 50% uniform: This is the case where every alternate pixel is an edge pixel.
Hence, the average of two pixels should be considered for color prediction, in
accordance with the assumption that every window should have one edge pixel.
Proof: f = 100/APW.
Image is 50% uniform ⇒ APW = 50.
Hence, f = 2.
Case-3: Nonuniformity (APW) is above 50%: When APW is above 50, the value of f falls
below 2. Since the window size is a positive integer, "f" becomes 1
for APW greater than 50. Hence, the algorithm changes into the all-pixel
approach.
Proof: Since f = 1, every window contains only one pixel whose average results
in the same value. Hence, all pixels are considered for prediction and the
threshold of APW for implementing the AVW algorithm is 50. For all other
values, the algorithm turns into the all-pixel approach.
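Under this reading of the case analysis, the AVW window size can be computed as in the sketch below; the integer flooring and the guard for APW = 0 are our assumptions.

```python
def avw_window_size(apw, f_cap=100):
    """Window size f per Equation (16): f = 100/APW, floored to a
    positive integer; f = 1 reduces AVW to the all-pixel approach."""
    if apw <= 0:
        return f_cap            # fully uniform object: cap f (illustrative)
    return max(int(100 // apw), 1)
```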
same step. The mathematical representation of [A] and [K] for PXS is shown in
Equations (18) and (19).
$$[A] = \sum_{r=y_1}^{y_2} \sum_{c=x_1+n\cdot S}^{x_2} [K], \quad \text{if } I_m[r][c] = [M_c];\ n \in \left[0, \frac{x_2 - x_1}{S}\right]. \quad (18)$$

$$K[r][c][i] = \begin{cases} S, & (C_{SET}[i] = C\_P(h,s,v)) \cap (K[r][c] = K[r][c-S+1]) \\ 1, & (C_{SET}[i] = C\_P(h,s,v)) \cap (K[r][c] \neq K[r][c-S+1]) \\ 0, & C_{SET}[i] \neq C\_P(h,s,v) \end{cases} \quad \forall\, i \in [0,15). \quad (19)$$
Step-6: Iterate through [A] and find the color with the maximum count, which gives us the Dc
(dominant color) of the image. The extraction of the dominant color from the Color_set
is shown in Equation (20).

$$D_c = C_{SET}[i], \text{ where } A[i] > A[j],\ \forall\,(j \in [0,15)) \cap (j \neq i). \quad (20)$$
$$(S - 2) = \mathrm{Max}\left(0, \frac{100}{APW}\right). \quad (22)$$
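Reading Equation (22) directly, the PXS skip size can be derived as below; rounding down to an integer is our assumption. Per Equation (19), each sampled pixel then votes with weight S when its class matches the pixel S − 1 columns earlier, and with weight 1 otherwise.

```python
def pxs_skip_size(apw):
    """Skip size S per Equation (22): (S - 2) = max(0, 100/APW),
    i.e., S = 2 + max(0, 100/APW)."""
    if apw <= 0:
        return 2                       # no detected edges: minimal skip
    return 2 + int(max(0.0, 100.0 / apw))
```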
5. Experimental Results
A benchmark data set was created with 200 images, each of a different size, to test the
proposed algorithms. This data set is composed of objects with different categories like
single-color objects, multicolored objects, and objects with various sizes with respect to
the image size. The proposed algorithms AVW and PXS are tested with the benchmark
data set, with three iterations each, where the accuracy of prediction and the average of the
runtime are noted.
The testing process was automated such that when we start the iteration, it fetches
an image from the benchmark data set and triggers the object detection framework. The
output from the Mask R-CNN block presented in Figure 1 is directed to the AVW or PXS
algorithms which display a picture with a bounding box, color and class of all detected
objects in the image. Simultaneously, the result data, i.e., color of the predicted objects
and the time taken for predictions, are stored in an Excel file. This is repeated until all the
images in the data set are processed. The Excel file created is used to compute statistics
such as average time taken, accuracy, and standard deviation of time. The accuracies of
the AVW and PXS algorithms, using the benchmark data set, are calculated manually. The
percentage of correct predictions of the objects, in each image of the benchmark data set,
is factored in to calculate the overall accuracy. The entire testing process stated above is
implemented in a Jupyter Notebook on macOS with a 1.4 GHz Quad-Core Intel Core i5 CPU
and Intel Iris Plus Graphics 645 1536 MB GPU.
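The automated loop described above can be sketched as follows, with `detect_objects` and `predict_color` as assumed interfaces and `results.xlsx` as a placeholder file name; none of these come from the paper.

```python
import time
import pandas as pd

def run_benchmark(image_paths, detect_objects, predict_color):
    """Fetch each benchmark image, run detection plus color prediction,
    and log the predictions and runtimes to an Excel file."""
    rows = []
    for path in image_paths:
        start = time.time()
        detections = detect_objects(path)            # Mask R-CNN block
        colors = [predict_color(d) for d in detections]
        rows.append({"image": path,
                     "colors": "; ".join(colors),
                     "seconds": time.time() - start})
    frame = pd.DataFrame(rows)
    frame.to_excel("results.xlsx", index=False)      # later used for statistics
    return frame
```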
Object color predictions, using PXS and AVW algorithms, resulted in two output
images: the extracted image and the predicted image. Figures 2a and 3a show the processed
images using AVW and PXS, respectively, where bounding boxes are drawn around each
object, and the color and object name are displayed together on top of the bounding box.
For the image containing two cups in Figure 2, the color predicted by both the algorithms
is identical, but the time consumed to make the predictions is not.
Figure 4. Different stages involved in color prediction, i.e., extraction of the pixels contributing to
color prediction and the final output, using AVW and PXS algorithms.
Figure 5. Color prediction results (final output with the color of the detected objects) using AVW
algorithm.
Figure 6. Color prediction results (final output with the color of the detected objects) using PXS
algorithm.
Figures 7 and 8 present the standard deviation of runtime in each iteration for the
AVW and PXS algorithms, respectively, when tested with the benchmark data set. The
maximum deviation in time for AVW is nearly 4.82 s, whereas the maximum deviation for
PXS is nearly 5.82 s, as shown in Table 3.
Figure 8. Standard deviation of time taken in each iteration of PXS.
Table 3. Normalized standard deviation times for AVW and PXS schemes.

Algorithm    Minimum    Maximum    Mean
AVW          0.0084     4.82       0.54
PXS          0.0069     5.82       0.75
The computed accuracies for PXS and AVW for the benchmark data set are 95.4%
and 93.6%, respectively. In order to evaluate the performance of the algorithms with
respect to time, the average reduction in time is compared to the all-pixel approach. The
evaluated reductions in time for AVW and PXS are approximately 62% and 44%.
6. Comparisons of Color Prediction Schemes

As stated earlier, clustering techniques can be used for predicting the color of an
object in the image [12,38]. Clustering is performed by grouping all the available data
points into different clusters. The decision of which cluster is allocated to a data point
depends on the distance between the cluster's centroid and the data point. Clustering
methods can be applied in the context of predicting the dominant color of the image. The
RGB channels are separately grouped into various clusters; the centroids of the clusters
yield the dominant colors in the image. To compare the prediction accuracy and the
runtime of PXS and AVW for the benchmark data set, conventional clustering algorithms
are tested with the same benchmark data set, over three iterations. The device used for
testing the clustering algorithms is the same as the one used for testing AVW and PXS,
i.e., macOS with a 1.4 GHz Quad-Core Intel Core i5 CPU and Intel Iris Plus Graphics 645
1536 MB GPU. The mean of the three iterations is considered for comparison of time with
AVW and PXS. Each of these comparisons is shown in different graphs and discussed in
this section.
6.1. All-Pixel Approach

Color prediction was initially tested with the benchmark data set using the all-pixel
approach, where all the pixels in the object portion are considered for the prediction. The
accuracy for this approach is 94.5%. The time comparisons of the all-pixel approach with
the AVW and PXS algorithms are shown in Figure 9. The average time consumed by the
all-pixel approach for predictions of the output is 17.3 s, versus 6.5 s by AVW and 9.2 s
by PXS, as shown in Table 4. As the all-pixel approach accounts for all the pixels
for color prediction, the time drained is considerably higher compared to either of the AVW
or PXS schemes.
Figure 9. Comparison of color prediction time of all-pixel, AVW and PXS schemes.

Table 4. Comparison of normalized times for all-pixel, AVW and PXS schemes.

Algorithm    Minimum    Maximum    Mean     Standard Deviation
All-Pixel    0.30       174.23     17.32    27.10
AVW          0.12       59.23      6.52     10.24
PXS          0.17       82.27      9.16     13.73
6.2. K-Means Clustering

In the domain of machine learning, K-means is one of the well-known unsupervised
clustering algorithms. The algorithm chooses, at random, "k" data points, called means.
It then classifies the other data points into their nearest mean, by checking the Euclidean
distance, and updates the mean value to the average of the cluster after each iteration of
K-means. This process is repeated until the centroids of the clusters remain equal for two
successive iterations. Many researchers have proposed modified versions of the K-means
approach to improve its efficiency [39–41]. Hence, in order to evaluate the
performance of the proffered algorithms, the K-means algorithm was tested for dominant
color prediction with the benchmark data set. A plot of the time comparison between
K-means and the AVW and PXS algorithms is displayed in Figure 10.

It can be inferred from Figure 10 that AVW handily outpaces K-means clustering in
predicting the dominant color of objects. K-means draws less time than PXS. The accuracy
of K-means for the benchmark data set is 84.1%, which is less compared to AVW and PXS.
Hence, both the AVW and PXS algorithms outperform K-means clustering on accurate
predictions of the dominant color of objects. The average time taken by K-means for
predicting the output for the benchmark data set is 8.7 s, versus 6.5 s for AVW. However,
PXS takes a higher time, i.e., 0.5 s higher, compared to K-means, as displayed in Table 5.
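For reference, the clustering baseline can be reproduced along the following lines with scikit-learn's KMeans; the number of clusters and the random seed are illustrative choices, not the settings used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color_kmeans(pixels_rgb, k=5, seed=0):
    """Cluster the masked pixels' RGB values and return the centroid
    of the largest cluster as the dominant color."""
    model = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels_rgb)
    counts = np.bincount(model.labels_)              # pixels per cluster
    return model.cluster_centers_[counts.argmax()]   # largest cluster's centroid
```

Here `pixels_rgb` is an N × 3 array of the object's masked pixels; the returned centroid can then be mapped to a DCPCM class for a like-for-like comparison.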
6.3. Mini Batch K-Means

Figure 11. Time comparison plot of mini batch K-means, AVW and PXS.

AVW performs color prediction in less time than mini batch K-means. The time taken
for mini batch K-means is similar to that of K-means, whereas the accuracy of mini batch
K-means is 83.8%, much less than the accuracy of both AVW and PXS algorithms. The
average time taken by the mini batch K-means for predicting the output for the benchmark
data set is 8.4 s compared to 6.5 s for AVW; however, PXS consumes a prediction time of
9.2 s, as presented in Table 6. The effect of higher average time on the overall performance
for PXS can be perceived as a trade-off for its higher accuracy.

Table 6. Comparison of normalized times for mini batch K-means, AVW and PXS schemes.

Algorithm             Minimum    Maximum    Mean     Standard Deviation
AVW                   0.12       59.23      6.52     10.24
PXS                   0.17       82.27      9.16     13.73
Mini Batch K-means    0.24       86.43      8.39     12.55
6.4. Mean Shift

Figure 12. Time comparison plot of mean shift, AVW and PXS.

The accuracy of mean shift for the benchmark data set is 85.2%, which is less than that
of AVW and PXS. Hence, both AVW and PXS outperform the mean shift algorithm in terms
of time and accuracy for prediction of the dominant color of an image. The average time
taken by the mean shift for predicting the output for the benchmark data set is 33 s, five
times greater than that for AVW and easily three times higher than the time entailed by
PXS, as indicated in Table 7.

Table 7. Comparison of normalized times for mean shift, AVW and PXS schemes.

Algorithm     Minimum    Maximum    Mean     Standard Deviation
AVW           0.12       59.23      6.52     10.24
PXS           0.17       82.27      9.16     13.73
Mean shift    0.30       577.93     32.93    76.66
6.5. Gaussian Mixture Model

Figure 13. Time comparison plot of Gaussian mixture model, AVW and PXS.

From the above plot, it can be deduced that the Gaussian mixture model imposes the
highest burden on time compared to the AVW and PXS schemes. The Gaussian mixture
model, when tested with the benchmark data set, delivered an accuracy of 88% in color
prediction. The average time drained by the Gaussian mixture model for predicting the
output for the benchmark data set is 13.4 s, as demonstrated in Table 8, notably higher than
the time consumed by the AVW and PXS algorithms.

Table 8. Comparison of normalized times for Gaussian mixture model, AVW and PXS schemes.

Algorithm                 Minimum    Maximum    Mean     Standard Deviation
AVW                       0.12       59.23      6.52     10.24
PXS                       0.17       82.27      9.168    13.73
Gaussian mixture model    0.29       132.14     13.40    21.52
6.6. Fuzzy C-Means

Fuzzy C-means is an unsupervised clustering algorithm that analyzes various types
of data and groups them according to their similarity [44]. It assigns a membership value
to each data point per cluster center, based on the Euclidean distance between the data
point and the center of the cluster. Rather than forcing a data point into one cluster, it
assigns a data point to one or more clusters based on its membership value. Indicating
partial membership and fuzzy partitioning, the membership value can range between 0
and 1 for different cluster centers. New cluster centers and membership values of each
data point are computed in each iteration until convergence is encountered. The time
comparisons for fuzzy C-means, integrated with the DCPCM model, and the proposed
algorithms are shown in Figure 14.
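For reference, the membership update of standard fuzzy C-means takes the following textbook form (not reproduced from this paper), where $u_{ij}$ is the membership of data point $x_j$ in cluster $i$, $c_i$ is a cluster center, $C$ is the number of clusters, and $m > 1$ is the fuzzifier:

$$u_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\lVert x_j - c_i \rVert}{\lVert x_j - c_k \rVert} \right)^{\frac{2}{m-1}} \right]^{-1}$$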
Figure 14. Time comparison plot of fuzzy C-means, AVW and PXS.

According to the above plot, fuzzy C-means requires more time for color prediction
than AVW and PXS. The accuracy of color prediction with fuzzy C-means for the benchmark
data set is 85.9%. AVW and PXS outperformed fuzzy C-means with respect to time
and accuracy. The average time for prediction of the output by fuzzy C-means with the
benchmark data set is 22.4 s, compared to 6.5 s for AVW and 9.2 s for PXS, as shown
in Table 9.

Table 9. Comparison of normalized times for fuzzy C-means, AVW and PXS schemes.

Algorithm        Minimum    Maximum    Mean     Standard Deviation
AVW              0.12       59.23      6.52     10.24
PXS              0.17       82.27      9.16     13.73
Fuzzy C-means    1.36       273.48     22.41    38.85
Table 10 represents the algorithm-specific prediction accuracy of an object's color and
the time reduction compared to the all-pixel approach for the respective algorithm. Discernment
of the results in Table 10 reveals the following:
• AVW and PXS algorithms have higher accuracies compared to all other appraised
algorithms;
• The PXS algorithm is more accurate than all the clustering algorithms considered;
• The AVW algorithm has the highest reduction in time along with decent prediction
accuracy;
• Negative values of "reduction in time" for the mean shift and fuzzy C-means algorithms
imply that they require much longer than the all-pixel approach for the color prediction
task.
Table 10. Comparison of time reduction and accuracies of all the discussed algorithms.

Algorithm                 Color Prediction Accuracy (%)    Reduction in Time Compared to All-Pixel (%)
AVW                       93.6                             62
PXS                       95.4                             44.3
K-means                   84.1                             45.1
Mini batch K-means        83.8                             47.5
Mean shift                85.2                             −70.7
Fuzzy C-means             85.9                             −53.4
Gaussian mixture model    88                               22.4
7. Conclusions
Sustained proliferation in the deployment of autonomous robotic devices has stimu-
lated enhanced urgency in their capability of detection and discernment of objects in an
image. This paper has elucidated the design and working of two innovative color predic-
tion algorithms, PXS and AVW, for the extraction of pixels in a faster and more efficient
manner. Accuracy and reliability of the proposed algorithms are appraised by comparison
of the proffered algorithms with conventional approaches—K-means, Gaussian mixture
model, fuzzy C-means, mini batch K-means, and mean-shift clustering algorithms—using
a benchmark data set. The propounded algorithms performed with greater accuracy in an
exceptionally short time span, when AVW and PXS were juxtaposed with popular extant
color prediction algorithms. AVW and PXS algorithms are distinct from each other. PXS
algorithm exhibited enhanced accuracy (95.4%) with the downside of a longer prediction time.
The AVW algorithm perceptibly needed less time for prediction of an object's dominant
color (up to a 62% decrease versus the all-pixel approach), at the cost of up to 2% lower
accuracy. A notable inference of this study calls for application-specific trade-offs
between latency and prediction accuracy.
For color prediction, either AVW or PXS can be deployed; if time complexity is a
major concern, the AVW algorithm can be considered, which consumes less time while
returning decent accuracy. If one prefers higher accuracy, PXS can
be chosen for the color prediction, with a slightly higher prediction time compared to the
existing techniques. Integration of the proposed algorithms with service robots can have
a significant impact in detection of objects based on color in a real-time scenario. Time
consumed to predict the color by these proposed algorithms is notably less than that of
clustering algorithms. These algorithms are also useful for detecting traffic signals for
autonomous vehicles and for detection of damaged foods. Clustering algorithms are
ill-suited for real-time scenarios, due to their twin drawbacks of lower accuracy and longer
prediction times. The proposed DCPCM model can be deficient in dealing with equally
shaded multicolored objects as well as certain real-time applications. These apparent
deficiencies call for additional research and development of the DCPCM concept to sustain
higher precision and streamlined performance of AVW and PXS algorithms.
Author Contributions: R.K.M. was responsible for conceptualization; i.e., ideas, formulation or
evolution of overarching research goals and aims. He was also in charge of supervision, i.e., oversight
and leadership responsibility for the research activity planning and execution, including mentorship
external to the core team, not only the development or design of methodology and creation of models,
i.e., the methodology, but also provision of study materials, reagents, materials, patients, laboratory
samples, animals, instrumentation, computing resources, or other analysis tools, i.e., the resources, in
addition to handling the preparation, creation and/or presentation of the published work, specifically,
writing the initial draft (including substantive translation), i.e., writing the original draft. He had
management and coordination responsibility for research activity planning and execution, i.e., project
administration and preparation, creation, and/or presentation of the published work by those from
the original research group, specifically critical review, commentary, or revision, including pre- and
postpublication stages, i.e., writing—review and editing. B.T. conducted management activities to
annotate (produce metadata), scrub data, and maintain research data (including software code, where
necessary for interpreting the data itself) for initial use and later reuse, i.e., data curation, applied
statistical, mathematical, computational, and other formal techniques to analyze and synthesize
study data, i.e., formal analysis, and the development and design of the methodology and creation
of models, i.e., methodology. G.S.S. performed management activities to annotate (produce
metadata), scrub data, and maintain research data (including software code, where necessary for
interpreting the data itself) for initial use and later reuse, i.e., data curation. G.S.S. also applied
statistical, mathematical, computational, and other formal techniques to analyze and synthesize
study data, i.e., formal analysis, and contributed to the development and design of the methodology
and creation of models, i.e., methodology. G.M.R. carried out the programming, software development, designing computer
programs, implementation of the computer code and supporting algorithms, and testing existing
code components, i.e., software, and performed verification, whether as a part of the activity or
separate, of the overall replication/reproducibility of results/experiments and other research outputs
i.e., validation. I.R.S.K. did the programming, software development, designing computer programs,
implementation of computer code, supporting algorithms, and testing existing code components,
i.e., software, and performed verification, whether as a part of the activity or separate, of the overall
replication/reproducibility of results/experiments and other research outputs i.e., validation. S.S.P.
executed the programming, software development, design of computer programs, implementation
of the computer code and supporting algorithms, and testing of existing code components, i.e.,
software, and carried out the verification, whether as a part of the activity or separate, of the overall
replication/reproducibility of results/experiments and other research outputs, i.e., validation. All
authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets generated during and/or analyzed during the current
study are available from the corresponding author on reasonable request.
Acknowledgments: Members of this research team are grateful to the Department of Electronics and
Communication Engineering and the Humanitarian Technology Labs (HuT Labs) at the Amritapuri
campus of Amrita Vishwa Vidyapeetham, Kollam for providing all the necessary lab facilities and a
highly encouraging work environment, which were key factors in completion of this research project.
Conflicts of Interest: The authors have no competing interests to declare that are relevant to the
content of this article. The authors have no relevant financial or nonfinancial interests to disclose.
References
1. Kim, J.H.; Kim, B.G.; Roy, P.P.; Jeong, D.M. Efficient Facial Expression Recognition Algorithm Based on Hierarchical Deep Neural
Network Structure. IEEE Access 2019, 7, 41273–41285. [CrossRef]
2. Jeong, D.; Kim, B.-G.; Dong, S.-Y. Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition. Sensors
2020, 20, 1936. [CrossRef] [PubMed]
3. Kim, J.-H.; Hong, G.-S.; Kim, B.-G.; Dogra, D.P. deepGesture: Deep learning-based gesture recognition scheme using motion
sensors. Displays 2018, 55, 38–45. [CrossRef]
4. Manju, K.; Aditya, G.; Ruben, G.C.; Verdú, E. Gesture Recognition of RGB and RGB-D Static Images using Convolutional Neural
Networks. Int. J. Interact. Multimed. Artif. Intell. 2019, 5, 22–27. [CrossRef]
5. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, I.A.; Jadoon, W. A Deep Learning Ensemble
Approach for Diabetic Retinopathy Detection. IEEE Access 2019, 7, 150530–150539. [CrossRef]
6. Shamshirband, S.; Mahdis, F.; Dehzangi, A.; Chronopoulos, A.T.; Alinejad-Rokny, H. A review on deep learning approaches in
healthcare systems: Taxonomies, challenges, and open issues. J. Biomed. Inform. 2021, 113, 103627. [CrossRef]
7. Pillai, M.S.; Chaudhary, G.; Khari, M.; Crespo, R.G. Real-time image enhancement for an automatic automobile accident detection
through CCTV using deep learning. Soft Comput. 2021, 25, 11929–11940. [CrossRef]
8. Pathak, A.R.; Pandey, M.; Rautaray, S. Application of Deep Learning for Object Detection. Procedia Comput. Sci. 2018, 132,
1706–1717. ISSN-1877-0509. [CrossRef]
9. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep Learning for Generic Object Detection: A Survey.
Int. J. Comput. Vis. 2020, 128, 261–318. [CrossRef]
10. Srisuk, S.; Suwannapong, C.; Kitisriworapan, S.; Kaewsong, A.; Ongkittikul, S. Performance Evaluation of Real-Time Object
Detection Algorithms. In Proceedings of the 2019 7th International Electrical Engineering Congress (IEECON), Hua Hin, Thailand,
6–8 March 2019; pp. 1–4. [CrossRef]
11. Kim, J.-A.; Sung, J.-Y.; Park, S.-H. Comparison of Faster-RCNN, YOLO, and SSD for Real-Time Vehicle Type Recognition. In
Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Asia (ICCE-Asia), Seoul, Republic of Korea,
1–3 November 2020; pp. 1–4. [CrossRef]
12. Liu, Z.-Y.; Ding, F.; Xu, Y.; Han, X. Background dominant colors extraction method based on color image quick fuzzy c-means
clustering algorithm. Def. Technol. 2020, 17, 1782–1790. [CrossRef]
13. Elavarasi, S.A.; Jayanthi, J.; Basker, N. Trajectory Object Detection using Deep Learning Algorithms. Int. J. Recent Technol. Eng.
2019, 8, C6564098319. [CrossRef]
14. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [CrossRef]
15. Kaggle. Ideas for Image Features and Image Quality. Available online: https://www.kaggle.com/code/shivamb/ideas-for-image-features-and-image-quality (accessed on 1 January 2022).
16. Vadivel, A.; Sural, S.; Majumdar, A.K. Human color perception in the HSV space and its application in histogram generation for
image retrieval. In Color Imaging X: Processing, Hardcopy, and Applications; SPIE: Bellingham, WA, USA, 2005; Volume 5667.
17. Ray, S.A. Color gamut transform pairs. ACM Sig-Graph Comput. Graph. 1978, 12, 12–19.
18. Atram, P.; Chawan, P. Finding Dominant Color in the Artistic Painting using Data Mining Technique. Int. Res. J. Eng. Technol.
2020, 6, 235–237.
19. Data Engineering and Communication Technology; Raju, K.S., Senkerik, R., Lanka, S.P., Rajagopal, V., Eds.; Advances in Intelligent
Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1079. [CrossRef]
20. Guyeux, C.; Chrétien, S.; BouTayeh, G.; Demerjian, J.; Bahi, J. Introducing and Comparing Recent Clustering Methods for Massive
Data Management on the Internet of Things. J. Sens. Actuator Netw. 2019, 8, 56. [CrossRef]
21. Sai Satyanarayana Reddy, S.; Kumar, A. Edge Detection and Enhancement of Color Images Based on Bilateral Filtering Method
Using K-Means Clustering Algorithm. In ICT Systems and Sustainability; Advances in Intelligent Systems and Computing; Tuba,
M., Akashe, S., Joshi, A., Eds.; Springer: Singapore, 2020; Volume 1077. [CrossRef]
22. Peng, K.; Leung, V.C.M.; Huang, Q. Clustering Approach Based on Mini Batch K-means for Intrusion Detection System Over Big
Data. IEEE Access 2018, 6, 11897–11906. [CrossRef]
23. Liu, N.; Zheng, X. Color recognition of clothes based on k-means and mean shift. In Proceedings of the 2012 IEEE International
Conference on Intelligent Control, Automatic Detection and High-End Equipment, Beijing, China, 27–29 July 2012; pp. 49–53.
[CrossRef]
24. Mohit, N.A.; Sharma, M.; Kumari, C. A novel approach to text clustering using shift k-medoid. Int. J. Soc. Comput. Cyber-Phys.
Syst. 2019, 2, 106–118. [CrossRef]
25. Balasubramaniam, P.; Ananthi, V.P. Segmentation of nutrient deficiency in incomplete crop images using intuitionistic fuzzy
C-means clustering algorithm. Nonlinear Dyn. 2016, 83, 849–866. [CrossRef]
26. Yin, S.; Zhang, Y.; Karim, S. Large Scale Remote Sensing Image Segmentation Based on Fuzzy Region Competition and Gaussian
Mixture Model. IEEE Access 2018, 6, 26069–26080. [CrossRef]
27. Liu, Y.; Xie, Z.; Liu, H. An Adaptive and Robust Edge Detection Method Based on Edge Proportion Statistics. IEEE Trans. Image
Process. 2020, 29, 5206–5215. [CrossRef]
28. Latha, N.S.A.; Megalingam, R.K. Exemplar-based Learning for Recognition & Annotation of Human Actions. In Proceedings of
the 2020 9th International Conference System Modeling and Advancement in Research Trends (SMART), Moradabad, India, 4–5
December 2020; pp. 91–93. [CrossRef]
29. Wang, Y.; Luo, J.; Wang, Q.; Zhai, R.; Peng, H.; Wu, L.; Zong, Y. Automatic Color Detection of Grape Based on Vision Computing
Method. In Recent Developments in Intelligent Systems and Interactive Applications IISA 2016; Advances in Intelligent Systems and
Computing; Xhafa, F., Patnaik, S., Yu, Z., Eds.; Springer: Cham, Switzerland, 2017; Volume 541. [CrossRef]
30. Zhou, S.; Wang, J.; Wang, L.; Zhang, J.; Wang, F.; Huang, D.; Zheng, N. Hierarchical and Interactive Refinement Network for
Edge-Preserving Salient Object Detection. IEEE Trans. Image Process. 2021, 30, 1–14. [CrossRef] [PubMed]
31. Wang, L.; Tang, D.; Guo, Y.; Do, M.N. Common Visual Pattern Discovery via Nonlinear Mean Shift Clustering. IEEE Trans. Image
Process. 2015, 24, 5442–5454. [CrossRef] [PubMed]
32. Liu, X.; Zhao, D.; Jia, W.; Ji, W.; Ruan, C.; Sun, Y. Cucumber Fruits Detection in Greenhouses Based on Instance Segmentation.
IEEE Access 2019, 7, 139635–139642. [CrossRef]
33. Megalingam, R.K.; Sree, G.S.; Reddy, G.M.; Krishna, I.R.S.; Suriya, L.U. Food Spoilage Detection Using Convolutional Neural
Networks and K Means Clustering. In Proceedings of the 2019 3rd International Conference on Recent Developments in Control,
Automation & Power Engineering (RDCAPE), Noida, India, 10–11 October 2019; pp. 488–493. [CrossRef]
34. Megalingam, R.K.; Karath, M.; Prajitha, P.; Pocklassery, G. Computational Analysis between Software and Hardware Implementa-
tion of Sobel Edge Detection Algorithm. In Proceedings of the 2019 International Conference on Communication and Signal
Processing (ICCSP), Chennai, India, 4–6 April 2019; pp. 529–533. [CrossRef]
35. Megalingam, R.K.; Manoharan, S.; Reddy, R.; Sriteja, G.; Kashyap, A. Color and Contour Based Identification of Stem of Coconut
Bunch. IOP Conf. Ser. Mater. Sci. Eng. 2017, 225, 012205. [CrossRef]
36. Alexander, A.; Dharmana, M.M. Object detection algorithm for segregating similar colored objects and database formation. In
Proceedings of the 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Kollam, India, 20–21
April 2017; pp. 1–5. [CrossRef]
37. Krishna Kumar, P.; Parameswaran, L. A hybrid method for object identification and event detection in video. In Proceedings
of the 2013 4th National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG),
Jodhpur, India, 20–21 April 2013; pp. 1–4. [CrossRef]
38. Molada-Tebar, A.; Marqués-Mateu, Á.; Lerma, J.L.; Westland, S. Dominant Color Extraction with K-Means for Camera Characteri-
zation in Cultural Heritage Documentation. Remote Sens. 2020, 12, 520. [CrossRef]
39. Khandare, A.; Alvi, A.S. Efficient Clustering Algorithm with Improved Clusters Quality. IOSR J. Comput. Eng. 2016, 48, 15–19.
[CrossRef]
40. Wu, S.; Chen, H.; Zhao, Z.; Long, H.; Song, C. An Improved Remote Sensing Image Classification Based on K-Means Using HSV
Color Feature. In Proceedings of the 2014 Tenth International Conference on Computational Intelligence and Security, Kunming,
China, 15–16 November 2014; pp. 201–204. [CrossRef]
41. Haraty, R.A.; Dimishkieh, M.; Masud, M. An Enhanced k-Means Clustering Algorithm for Pattern Discovery in Healthcare Data.
Int. J. Distrib. Sens. Netw. 2015, 11, 615740. [CrossRef]
42. Bejar, J. K-Means vs. Mini Batch K-Means: A Comparison; LSI-13-8-R. 2013. Available online: http://hdl.handle.net/2117/23414
(accessed on 1 January 2020).
43. Cheng, Y. Mean shift, mode seeking, and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 790–799. [CrossRef]
44. Hung, M.-C.; Yang, D.-L. An efficient Fuzzy C-Means clustering algorithm. In Proceedings of the 2001 IEEE International
Conference on Data Mining, San Jose, CA, USA, 29 November–2 December 2001; pp. 225–232. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.