1 Department of Electrical Engineering, Mizoram University, Mizoram, India.
2 Department of Computer Science and Engineering, MIT, Manipur University, Manipur, India.
3 Department of Computer Science, South East Manipur College, Chandel, Manipur, India.
4 Department of Electronics and Communication Engineering, MIT, Manipur University, Manipur, India.
Abstract

Rice holds its position as the most widely cultivated crop worldwide, with its demand steadily rising alongside population growth. The precise identification of grains, especially wheat and rice, holds significant importance as their ultimate utilization depends on the quality of the grains prior to processing. Traditionally, grain identification has been predominantly manual, relying on experienced grain inspectors and consuming considerable time. However, this manual classification process is susceptible to variations influenced by individual perception, given the subjective nature of human image interpretation. Consequently, there is an urgent need for an automated recognition system capable of accurately identifying grains under diverse environmental conditions, necessitating the application of digital image processing techniques. In this study, we focus on grading five distinct varieties of rice based on their quality, employing a range of convolutional neural networks (CNNs), namely EfficientNetB0, GoogLeNet, MobileNetV2, ResNet50, ResNet101, and ShuffleNet. The performance of the CNNs in identifying and grading rice grain is also compared with that of other parametric and non-parametric classifiers, namely Linear Discriminant Analysis (LDA), K-Nearest Neighbor (K-NN), Naive Bayes (NB), and Back Propagation Neural Network (BPNN), using Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRLM) based texture features. The image dataset comprises five grades of rice, each containing 100 images, resulting in a collection of 500 samples for analysis. It is observed that the convolutional neural networks can grade the five different qualities of rice with a highest accuracy of 64.4%, obtained with GoogLeNet. Results show that rice grading using texture features performs better, with highest accuracies of 99.2% (using GLCM) and 93.4% (using GLRLM).

Article History
Received: 20 June 2024
Accepted: 03 August 2024

Keywords
Artificial Intelligence; Computer Vision; Image Classification; Image Recognition; Machine Learning; Neural Networks; Object Detection.

CONTACT Ksh Robert Singh [email protected] Department of Electrical Engineering, Mizoram University, Mizoram, India.
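As a rough illustration of the texture-feature route summarised above, the Python sketch below extracts GLCM statistics with scikit-image and grades them with a K-NN classifier. The folder layout, GLCM distances and angles, feature set, and value of k are assumptions chosen for illustration only; they do not reproduce the authors' exact configuration or reported accuracies.

```python
# Minimal sketch of GLCM texture features + K-NN rice grading.
# Assumes a hypothetical layout data/<grade_name>/*.jpg with one sub-folder
# per rice grade; requires scikit-image >= 0.19 and scikit-learn.
from pathlib import Path
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(path):
    """Compute a compact vector of GLCM statistics for one rice image."""
    img = imread(path)
    gray = img if img.ndim == 2 else rgb2gray(img)      # tolerate grayscale inputs
    gray = img_as_ubyte(gray)
    glcm = graycomatrix(gray,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each property over distances and angles.
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

X, y = [], []
for grade_dir in Path("data").iterdir():                 # one folder per grade (assumed)
    if not grade_dir.is_dir():
        continue
    for img_path in grade_dir.glob("*.jpg"):
        X.append(glcm_features(img_path))
        y.append(grade_dir.name)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, stratify=y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("K-NN accuracy on held-out images:", clf.score(X_test, y_test))
```

The same feature vectors could be fed to any of the other classifiers mentioned in the abstract (LDA, NB, BPNN) by swapping the final estimator.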
Rice grain and corn seed classification using different image pre-processing techniques and machine learning algorithms such as K-Nearest Neighbor (K-NN), Artificial Neural Network (ANN), and Support Vector Machine (SVM) has been discussed.1-14 These works also address different aspects of rice grain assessment, such as defect detection, variety classification, and quality grading. The accuracies reported in the above literature range from 39% to 100%. It is also observed that information related to the dataset (training/testing split) is not properly highlighted in most of these works. Classification of food grains using image processing techniques and a Probabilistic Neural Network (PNN) has been conducted with accuracies of 96% and 100%.15-16

It is also observed from the literature that manual identification and classification of grains is tedious and time consuming, and that the result is subjective in nature. Therefore, a robust classification process based on digital image processing and computer vision is becoming important to overcome the problems encountered in manual inspection. It is learnt that classification using single grain kernels is not as easy as classification of bulk grain images, because the former requires arranging the grain kernels so that they do not touch or overlap each other. It is also learnt that most of the classification tasks carried out in the last few years were based on extracting certain attributes or features from the