An analysis to investigate plant disease identification based on machine learning techniques. Article in Expert Systems, March 2024. DOI: 10.1111/exsy.13576



Received: 19 November 2023  Revised: 14 February 2024  Accepted: 28 February 2024
DOI: 10.1111/exsy.13576

ORIGINAL ARTICLE

An analysis to investigate plant disease identification based on machine learning techniques

Sangeeta Duhan1 | Preeti Gulia1 | Nasib Singh Gill1 | Mohammad Yahya2 | Sangeeta Yadav1 | Mohamed M. Hassan3 | Hassan Alsberi3 | Piyush Kumar Shukla4

1 Department of Computer Science & Applications, Maharshi Dayanand University, Rohtak, India
2 Computer Science, Oakland University, Rochester, Michigan, USA
3 Department of Biology, College of Science, Taif University, Taif, Saudi Arabia
4 Department of Computer Science & Engineering, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya (Technological University of Madhya Pradesh), Bhopal, India

Correspondence
Piyush Kumar Shukla, Department of Computer Science & Engineering, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya (Technological University of Madhya Pradesh), Bhopal, Madhya Pradesh 462033, India. Email: [email protected]
Preeti Gulia, Department of Computer Science & Applications, Maharshi Dayanand University, Rohtak, India. Email: [email protected]

Funding information
Deanship of Scientific Research, Taif University, Saudi Arabia

Abstract
In agriculture, crops are severely affected by illnesses, which reduce their production every year. The detection of plant diseases during their initial stages is critical and thus needs to be addressed. Researchers have been making significant progress in the development of automatic plant disease recognition techniques through the utilization of machine learning (ML), image processing, and deep learning (DL). This study analyses the recent advancements made by researchers in the field of ML techniques for identifying plant diseases. This study also examines various methods used by researchers to produce ML solutions, such as image preprocessing, segmentation, and feature extraction. This study highlights the challenges encountered while creating plant disease identification systems, such as small datasets, image capture conditions, and the generalizability of the models, and discusses possible solutions to cater to these problems. Still, the development of a solution that automatically detects various plant diseases for various plant species remains a big challenge. To address these challenges, there is a need to create a system that is trained on an extensive dataset that contains images of various types of diseases a plant can suffer from, and plant images should be taken at various stages of the disease's development. This study further presents an analysis of various methods used at different stages of plant disease identification.

KEYWORDS
computer vision, feature extraction, image processing, machine learning, plant disease

1 | INTRODUCTION

Agriculture is a mainstay in many countries. Many developing countries' Gross Domestic Product (GDP) depends on the agriculture sector. With
the advancements in technology, it has become a mandate to use new-age technologies like artificial intelligence (AI), computer vision, and image
processing in the field of agriculture to reduce the losses caused by the late identification of plant diseases and increase average yield production. The technology further assists in the correct treatment of infected plants by identifying plant diseases at an early stage of infection
(Yang et al., 2022). Due to the rapid population growth, there is a need for good cultivation practices so that crop damage is minimal and crop production can be increased without increasing the area used for farming. Thus, replacing traditional methods of identifying disease, such as laboratory methods and visual clue-based manual inspection, with methods that can automatically identify different plant diseases on various plants will result in fewer crop losses and improve both the quality and quantity of yield (Wani et al., 2022).

Expert Systems. 2024;e13576. wileyonlinelibrary.com/journal/exsy © 2024 John Wiley & Sons Ltd. 1 of 30
https://doi.org/10.1111/exsy.13576

Over the past few decades, numerous ML methods have been developed for the automatic detection and classification of plant diseases. Many studies have already been conducted to identify the most common diseases in crops like rice, tomato, wheat, and other widely cultivated crops. Deep Learning (DL) is beginning to play a crucial role in the agricultural sector (Wani et al., 2022). Deep Convolutional Neural Networks (DCNN) are one of the finest ML approaches for recognizing and classifying plant diseases (Yang et al., 2022). Currently, researchers' focus is on DL techniques like Convolutional Neural Networks (CNN), Visual Geometry Group (VGG) (Simonyan & Zisserman, 2015), ResNet (He et al., 2017), You
Only Look Once (YOLO) (Redmon et al., 2016), etc. because they provide high classification accuracy. Still, to train these models, a large dataset is
needed, and the memory requirement increases as the size of the neural network increases. To deploy DL models on mobile devices or Internet
of Things (IoT) devices, there is a need to develop lightweight models, as these models are less compute-intensive due to their small size
(Fanariotis et al., 2023). Despite improvements in these vision-based techniques, it is still difficult to identify plant diseases in practical situations.
Such applications have difficulties when it comes to disease detection in the real field due to the complicated backdrop and diverse environmental
circumstances. Another approach to disease prevention is crop monitoring using aerial or drone-based surveillance. However, this procedure also
needs reliable vision-based methods that need to be applied to a variety of crops (Bhagwat & Dandawate, 2021). When detecting ailments, it is
necessary to take plant disorders into account. Abiotic factors like variations in temperature, moisture, light, and nutrition can cause disorders in
plants (Wani et al., 2022).
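The memory argument above can be made concrete by counting weights: a standard k × k convolution needs k²·C_in·C_out parameters, while the depthwise-separable alternative used by lightweight architectures such as MobileNetV2 needs only k²·C_in + C_in·C_out. The sketch below is illustrative; the layer sizes are assumptions, not taken from any cited model:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Illustrative 3 x 3 layer mapping 128 -> 256 channels
standard = conv_params(3, 128, 256)             # 294,912 weights
separable = separable_conv_params(3, 128, 256)  # 33,920 weights
print(standard, separable)
```

For this layer the separable variant is roughly 8.7 times smaller, which is why such blocks are preferred on mobile and IoT hardware.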
A disease, on the other hand, is brought on by biotic agents like bacteria, viruses, and fungi (Fanariotis et al., 2023; Thakur et al., 2022; Yang
et al., 2022). Even though both illnesses and disorders affect the way a plant looks, they should be treated differently since they have distinct
underlying causes. Studies that concentrate on disorder identification are thus necessary because this domain is neglected by
the scholarly community. As a result, there are no publicly accessible databases for the identification of disorders (Hariri & Avşar, 2022).
The existing study contributes to the literature as follows:

• This study explores the latest advancements in ML-based autonomous plant disease identification, specifically focusing on image segmentation
and feature extraction. The analysis encompasses a range of recently developed techniques, including hybrid approaches and novel
methodologies.
• Various techniques used by researchers for image preprocessing, segmentation, feature extraction, and classification in recent works are
described in this study.
• This study discusses the various challenges that researchers faced in developing an automatic plant disease detection system, as well as their
potential solutions.
• A comparative analysis of the different plant types that the researchers took into consideration is included in this study, along with a discussion
of current methods adopted by researchers for data augmentation, segmentation, and feature extraction for the identification of plant
diseases.

In this study, a comprehensive analysis is presented, focusing on related studies in the field of plant disease identification. Key topics dis-
cussed in the study include factors causing plant diseases and their associated symptoms, an in-depth review of existing hybrid, novel ML
approaches proposed for plant disease identification, exploration of different stages of development involved in disease identification, description
of influencing factors and potential solutions, analysis of crops considered in recent studies, types of datasets utilized, and image segmentation
and feature extraction approaches by researchers.
This study delves deeply into this area, providing insights into the factors causing plant diseases and their corresponding symptoms. In contrast, the studies by Bhagwat and Dandawate (2021) and Thakur et al. (2022) do not cover this topic comprehensively, leaving significant gaps in understanding the causal relationships between diseases and their symptoms. In this work, an in-depth review of existing hybrid and novel ML approaches suggested for plant disease identification is presented, along with a thorough evaluation of various ML techniques and their performance assessments, utilized datasets, and image processing algorithms. While the studies by Bhagwat and Dandawate (2021) and Wani et al. (2022) partially cover this analysis, this work extensively addresses this aspect, offering a comprehensive analysis of existing ML approaches.
Furthermore, insight is provided into the different stages of development involved in plant disease identification, highlighting widely utilized techniques and procedures at each stage, a subject only partially covered in Bhagwat and Dandawate (2021) and Thakur et al. (2022). In contrast, Wani et al. (2022) does not explore this aspect, making this analysis the sole source of in-depth information regarding the stages of development in disease identification.
The factors influencing plant disease identification and potential remedies are thoroughly discussed in this study, providing significant insights
into addressing this critical aspect. While Bhagwat and Dandawate (2021) discusses this topic in detail, Thakur et al. (2022) and Wani et al. (2022) do not.
The analysis is performed to determine the crops studied in recent studies, the type of dataset used, the type of image segmentation employed, and the feature extraction techniques used for plant disease identification. This detailed analysis is not covered in any of the listed studies (Bhagwat & Dandawate, 2021; Thakur et al., 2022; Wani et al., 2022), making this study an important resource for scholars and practitioners in the field.

In summary, the existing study contributes significantly to the field of plant disease detection by offering detailed insights into various aspects
that have not been adequately explored in previous studies. The comprehensiveness of this analysis addresses major gaps in the existing literature and provides essential knowledge for advancing research in this area.
The rest of the study is organized as follows: Section 2 contains information on different kinds of plant diseases and symptoms that can be
used to identify them. In Section 3, a discussion on recent techniques developed by researchers for creating solutions for automatic plant disease
classification and detection systems is presented. Section 4 describes the steps involved in plant disease identification. Section 5 describes the
factors that affect plant disease recognition and Section 6 includes the discussion and analysis carried out based on the literature survey; Section 7
includes the future scope; and Section 8 includes the conclusion drawn.

2 | LITERATURE REVIEW

This section provides an overview of plant diseases and their causes. Then recent work on ML-based plant disease identification is provided.
Plant disease is caused by both abiotic and biotic factors. Diseases affect plant leaves, flowers, stems, roots, and other parts (mlblevins, 2011), but leaves have become the de facto means of identifying the cause of the disease because early symptoms are visible on the plant's leaves first.
Visible cues are crucial in identifying conditions. The disease category, colour, pattern, appearance, and affected area of the plant are frequently used characteristics to diagnose ailments (Wani et al., 2022). Season, environmental conditions, and crop type all affect the type of plant
disease that develops and how it spreads in the field (Bhagwat & Dandawate, 2021). Manual inspection is used by agricultural professionals or
farmers to detect and identify different types of diseases with which the plants are infected, but these processes take time and money. Crop loss
may be prevented with the use of vision-based AI methods for the recognition of plant diseases (Thakur et al., 2022; Wani et al., 2022). Figure 1
shows the causes and types of plant diseases (mlblevins, 2011). Figure 2 presents a brief overview of popular plant diseases along with a picture
of their symptoms and causes, which are obtained from the PlantVillage dataset (n.d.) and (Admin, 2020; Wani et al., 2022; Yogeswararao
et al., 2022).
Hariri and Avşar (2022) focus on leaf disorder diagnosis rather than disease detection, making their approach unique. Strawberry tipburn is
considered for detection. In this work, a sequential CNN model was created and its parameters were determined using PSO (Kennedy &
Eberhart, 1995). Vivekanand proposed a Deep Convolutional Neural Network (DCNN)-based plant leaf disease detection (PLDD) system to
improve disease detection accuracy in case of feature variability. The suggested technique diagnoses diseases using raw real-time images without

FIGURE 1 Factors causing plant diseases (mlblevins, 2011).



FIGURE 2 Causes and symptoms of commonly occurring plant diseases (PlantVillage Dataset, n.d.; Admin, 2020; Wani et al., 2022; Yogeswararao et al., 2022).

preprocessing (Vivekanand, 2022). Pan et al. (2022) used various DL models with different loss functions, including softmax, ArcFace, and Cos-
Face, to diagnose northern corn leaf blight disease in maize crops. The GoogleNet architecture (Szegedy et al., 2014) with softmax loss function
has the highest accuracy.
Because it improves learned feature discrimination, this study uses the A-softmax loss function instead of the original softmax loss function.
Explicitly enforcing an angular margin between classes helps the model learn more distinct characteristics. The A-Softmax loss function is
defined as:

$$
L_{A\text{-}Softmax} = -\frac{1}{N}\sum_{i}\log\left(\frac{e^{\lVert x_i\rVert\,\Psi(\theta_{y_i,i})}}{e^{\lVert x_i\rVert\,\Psi(\theta_{y_i,i})}+\sum_{j\neq y_i}e^{\lVert x_i\rVert\cos(\theta_{j,i})}}\right) \tag{1}
$$

where $\Psi(\theta_{y_i,i}) = (-1)^k\cos(m\theta_{y_i,i}) - 2k$, $\theta_{y_i,i}\in\left[\frac{k\pi}{m},\frac{(k+1)\pi}{m}\right]$, $k\in[0,m-1]$, and $m\geq 1$. Here $m$ is an integer controlling the size of the angular margin, $N$ is the total number of training samples, $x_i$ represents the input feature, and $y_i$ is the corresponding class label.
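To make Equation (1) concrete, a minimal NumPy sketch of the A-Softmax loss is given below. This is not the cited authors' implementation; the feature dimensions, weight matrix, and margin value are illustrative assumptions, and the last-layer weight columns are normalized so that the logits equal $\lVert x_i\rVert\cos(\theta_{j,i})$:

```python
import numpy as np

def a_softmax_loss(features, weights, labels, m=4):
    """Sketch of the A-Softmax loss in Equation (1).

    features: (N, D) input feature vectors x_i
    weights:  (D, C) last-layer weight matrix
    labels:   (N,) integer class labels y_i
    m:        integer angular-margin multiplier, m >= 1
    """
    # Normalize weight columns so logits equal ||x_i|| * cos(theta_{j,i})
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    x_norm = np.linalg.norm(features, axis=1)             # ||x_i||
    cos_theta = (features @ w) / x_norm[:, None]          # cos(theta_{j,i})
    cos_theta = np.clip(cos_theta, -1.0, 1.0)

    n = features.shape[0]
    theta_y = np.arccos(cos_theta[np.arange(n), labels])  # theta_{y_i,i}
    k = np.floor(theta_y * m / np.pi)                     # piecewise index k
    psi = ((-1.0) ** k) * np.cos(m * theta_y) - 2.0 * k   # Psi(theta_{y_i,i})

    target = np.exp(x_norm * psi)
    others = np.exp(x_norm[:, None] * cos_theta)
    others[np.arange(n), labels] = 0.0                    # sum over j != y_i
    return -np.mean(np.log(target / (target + others.sum(axis=1))))
```

With m = 1 the expression reduces to the ordinary softmax cross-entropy, and larger m shrinks the target term, penalizing samples that are not well inside their class's angular region.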
Sanath Rao et al. used AlexNet (Krizhevsky et al., 2017), a CNN architecture, for grape and mango disease feature extraction and classification. The model was deployed via the “JIT CROPFOIX” application on smartphones (SanathRao et al., 2021). Sanghavi et al. used field moisture,
humidity, and temperature data to develop an IoT-based process for early grape disease prediction. The proposed structure alerts the farmer, pro-
viding them time to scout and apply fungicide as a preventative precaution (Sanghavi et al., 2021). Deng et al. used an unmanned aerial vehicle
(UAV) to collect high-resolution RGB-based images to find wheat stripe rust transmission sites. CNN's semantic segmentation architecture
(deeplabv3+) (Chen et al., 2018) mapped infected areas. The study's main goal was to determine if RGB images can accurately identify disease
transmission hotspots under varied field circumstances. Also, it examines if a multi-branch binary classification network instead of a multi-
classification network can improve rust class recognition (Deng et al., 2022). Tiwari et al. proposed a DCNN architecture trained on a large

collection of plant leaf images from different nations. This deep neural network handles complicated picture circumstances, including inter-class
and intra-class variations, well. The suggested method processes a single plant leaf image in 0.016 s with great precision, demonstrating its real-
time relevance (Tiwari et al., 2021). You et al.'s approach has two phases: object detection, which locates known disease classes, and Deep Metric
Learning (DML)-based post-filtering, which uses K-Nearest Neighbour (KNN) for known and unknown diseases and Softmax for known diseases
(You et al., 2022). Mostafa et al. classified guava diseases using five fine-tuned neural networks: ResNet101, ResNet50 (He et al., 2017), AlexNet
(Krizhevsky et al., 2017), SqueezeNet (Iandola et al., 2016), and GoogLeNet (Szegedy et al., 2014). ResNet50 outperforms other models. The
Kappa-Cohen Index is also used to evaluate models' performance (Mostafa et al., 2021). Sasikaladevi suggested a DCNN model based on
hypergraph modelling (Sasikaladevi, 2022). Elaraby et al. developed a Plants-AlexNet CNN model to detect wheat, grape, cotton, cucumber, and
maize diseases. PSO (Kennedy & Eberhart, 1995) selects and optimizes features extracted by AlexNet (Krizhevsky et al., 2017) from input images.
The proposed model has acceptable classification accuracy even when trained on a tiny dataset (Elaraby et al., 2022). Paymode and Malode used
VGG16 (Simonyan & Zisserman, 2015) model for identifying multi-crops leaf diseases (Paymode & Malode, 2022). Badiger et al. suggested
K-Means clustering for segmentation (Badiger et al., 2022). Panchal et al. carried out a study on InceptionV3 (Szegedy et al., 2016), VGG19,
VGG16 (Simonyan & Zisserman, 2015), and ResNet50 (He et al., 2017) to show that fine-tuning pre-trained DL models increase their efficiency
(Panchal et al., 2022). Rajeena P. P. et al. utilized the pre-trained EfficientNetB0 (Tan & Le, 2020) model for the classification of corn leaf diseases due to its small model size, high classification accuracy, and low computational cost (Rajeena et al., 2023). Yulita et al. use the DenseNet architecture
(Huang et al., 2017); the proposed model uses five dense block layers, with convolution operations applied after each layer. After the last dense layer, a flattened layer and a fully connected layer are used (Yulita et al., 2023). Nayak et al. used the Grab-Cut algorithm, which employs a Gaussian Mixture Model (GMM), to find the region of interest. In their study, 11 CNN models were evaluated on rice leaf images captured in the real field.
It was observed that ResNet50 performs best for cloud architecture and MobileNetV2 (Sandler et al., 2018) is best for smart phones (Nayak
et al., 2023).
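Several of the works above segment leaves by clustering pixel colours (e.g., the K-means segmentation used by Badiger et al.). As an illustration only, not any author's implementation, a minimal plain K-means sketch on made-up "healthy" and "lesion" pixel values:

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20):
    """Cluster pixel colour vectors of shape (n, 3) into k segments (plain K-means)."""
    # Simple deterministic init: centres taken from evenly spaced pixels
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy data: dark-green "healthy" pixels vs. brown "lesion" pixels (made-up RGB values)
healthy = np.tile([40.0, 120.0, 40.0], (5, 1))
lesion = np.tile([120.0, 80.0, 30.0], (3, 1))
labels, _ = kmeans_segment(np.vstack([healthy, lesion]))
```

Real pipelines typically run this in a perceptually uniform space such as LAB (as Badiger et al. do) and on many thousands of pixels per image.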
Table 1 lists the related research done on plant disease identification, as well as the various image processing, segmentation, and feature
extraction approaches that they have used (given in the image processing column) and the effectiveness of the various suggested methods. In this
table, datasets used by researchers are also given. Researchers who use datasets made available by researchers on demand or on Kaggle or Github
sites are considered to be using publicly available datasets in the dataset column of Table 1.

3 | METHODOLOGY

In this methodology section, we examine recent developments in the identification of plant diseases, paying particular attention to studies that
use hybrid strategies for disease classification and identification. The significance of these studies lies in their innovative approaches, which dem-
onstrate the integration of ML techniques with conventional methods to augment precision and efficacy. We focus on researchers' novel methods
for developing automated plant disease identification systems. The focus is on explaining the developments in DL solutions, specifically in the
areas of image segmentation and feature extraction, which are vital for enhancing the accuracy and dependability of disease identification sys-
tems. This methodology section attempts to give a thorough overview of the most recent advancements in the field of plant disease
identification.

3.1 | Technique based on hybrid approach

Hybrid models are approaches that integrate different algorithms or methodologies to capitalize on their particular strengths while overcoming
their own limitations. These models combine many approaches, such as statistical models, machine learning algorithms, or domain-specific knowl-
edge, to enhance overall performance, resilience, and interpretability. Furthermore, hybrid approaches may involve ensemble methods, where
multiple models are trained independently and their predictions are combined to produce a final output. Ensemble methods, such as bagging, boosting, or stacking, leverage the diversity of individual models to improve overall performance and generalization (Ganaie et al., 2022).
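As a minimal illustration of the voting idea behind such ensembles (the three classifiers and their predictions below are hypothetical, not from the surveyed works):

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model class predictions of shape (n_models, n_samples) by majority vote."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (column), then pick the winner
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Three hypothetical disease classifiers voting on four leaf images
# (classes: 0 = healthy, 1 = blight, 2 = rust)
model_preds = [
    [0, 1, 2, 1],
    [0, 1, 1, 1],
    [2, 1, 2, 0],
]
print(majority_vote(model_preds))  # -> [0 1 2 1]
```

Stacking replaces this fixed vote with a learned combiner trained on the base models' outputs, which is why it can outperform simple voting when the models' error patterns differ.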
Guo Feng et al. (2022) proposed LFC-Net, integrating a location network inspired by Feature Pyramid Design (Lin et al., 2017) for extracting
salient regions and a feedback network for size adjustment. The feedback network employs ResNet50 (He et al., 2017) for feature extraction,
while the final classification network categorizes diseases present in the image. Sutaji and Yildiz developed LEMOXINET, combining MobileNetV2
(Sandler et al., 2018) and Xception (Chollet, 2017) models for mobile-friendly plant disease detection (Sutaji & Yıldız, 2022). Sunil et al. used U2-
Net to remove backgrounds and EfficientNetV2 (Tan & Quoc, 2021) for disease identification, addressing feature extraction challenges (Sunil
et al., 2022). Abbas et al. used Conditional Generative Adversarial Network (C-GAN) (Mirza & Osindero, 2014) for data augmentation and pre-
trained DenseNet121 (Huang et al., 2017) for disease classification (Abbas et al., 2021). Syed-Ab-Rahman et al. utilized ResNet101 (He
et al., 2017) for feature extraction and Region Proposal Networks (RPN) (Ren et al., 2016) for identifying regions of interest (ROIs) in leaf images
(Syed-Ab-Rahman et al., 2022). Pandian et al. proposed 14-DCNN for plant disease detection, outperforming other models (Pandian, Kumar,
TABLE 1  An overview of recent research studies in the classification and detection of plant diseases.

• Hariri and Avşar (2022). Technique: PSO-CNN. Species: strawberry. Dataset: real field images. Image processing: resize to 224 × 224 pixels, flipping, shifting, rotation. Accuracy: 98.95%; Recall: 98.63%.
• Vivekanand (2022). Technique: DCNN-MBGD. Species: tomato. Dataset: PlantVillage. Accuracy: 96.06%; Precision: 97.00%; Recall: 96.00%; F1 score: 97.00%.
• Pan et al. (2022). Technique: pre-trained GoogleNet. Species: corn. Dataset: real field images. Image processing: cropping, flipping, rotation, reflection, and scaling. Accuracy: 99.94%.
• Sanath et al. (2021). Technique: AlexNet. Species: grape and mango. Dataset: real field images and PlantVillage. Image processing: image clipping and resizing. Accuracy: 99.03% (grape), 89.86% (mango); Precision: 98.90% (grape), 89.50% (mango); Recall: 98.50% (grape), 90.50% (mango); F1 score: 98% (grape), 90% (mango).
• Sanghavi et al. (2021). Technique: IoT platform in which a NodeMCU device sends data collected from temperature and rain sensor nodes to a central server. Species: grape. Accuracy: 94.41% (DM), 96.04% (PM); Precision: 95.54% (DM), 91.26% (PM); Recall: 96.77% (DM), 98.02% (PM).
• Deng et al. (2022). Technique: CNN semantic segmentation architecture (deeplabv3+) and a multi-branch binary classification network. Species: wheat. Dataset: high-resolution, georeferenced aerial images (real field). Image processing: random brightness, saturation, and contrast adjustment; random vertical and horizontal flips; random rotation. Accuracy: 72.60% (rust), 98.38% (healthy); Precision: 91.23% (rust), 99.22% (healthy); Recall: 80.9% (rust), 98.8% (healthy).
• Tiwari et al. (2021). Technique: DenseNet-201. Species: multiple crops. Dataset: PlantVillage, citrus leaf images, rice leaf images, and iBean leaf images. Image processing: resize to 224 × 224 pixels. Accuracy: 99.19%; Precision: 94.80%; Recall: 93.97%; F1 score: 93.79%.
• You et al. (2022). Technique: object detection with Feature Pyramidal Network (FPN)-based Faster R-CNN, and ResNet50 with margin triplet and cross-entropy losses for DML. Species: strawberry. Dataset: real field images. Precision: 93.7%.
• Mostafa et al. (2021). Technique: ResNet101, ResNet-50, AlexNet, SqueezeNet, GoogLeNet. Species: guava. Dataset: real field images. Image processing: resize (bicubic interpolation); data augmentation and enhancement (affine transformation, colour histogram equalization); segmentation (unsharp masking). Accuracy: 99.54% (ResNet-50); Recall: 99.88% (ResNet-50); F1 score: 99.62% (ResNet-50).
• Sasikaladevi (2022). Technique: three-layer hypergraph convolutional neural network. Species: multiple crops. Dataset: PlantVillage. Image processing: feature extraction with ResNet-50. Accuracy: 99.50%; Precision: 99.00%; Recall: 98.00%.
• Elaraby et al. (2022). Technique: AlexNet + PSO. Species: multiple crops. Dataset: publicly available datasets. Image processing: random shifting and rotation, flipping, zooming. Accuracy: 98.83%; Precision: 98.67%; Recall: 98.78%; F1 score: 98.47%.
• Paymode and Malode (2022). Technique: VGG16 for classification. Species: grape and tomato. Dataset: real field images and PlantVillage. Image processing: translation, randomized transformation, turning, flipping, rotation, and sharpness. Accuracy: 98.40% (grape), 95.71% (tomato).
• Badiger et al. (2022). Technique: K-means clustering + SVM. Species: tomato. Dataset: PlantVillage. Image processing: RGB to LAB colour space conversion; k-means segmentation. Accuracy: 96.00%.
• Panchal et al. (2022). Technique: InceptionV3, VGG19, VGG16, and ResNet50. Species: multiple crops. Dataset: PlantVillage. Image processing: resizing, normalization, flipping, zooming, rotation with varying angles, and shearing. Accuracy: 93.5% (VGG).
• Rajeena et al. (2023). Technique: EfficientNetB0. Species: corn. Dataset: PlantVillage and PlantDoc. Image processing: preprocessing (smoothening, grey-scale conversion, morphological operations); feature extraction (GLCM); segmentation (Otsu threshold). Accuracy: 98.85%; Precision: 88%; Recall: 87%; F1 score: 93%.
• Yulita et al. (2023). Technique: DenseNet. Species: tomato. Dataset: publicly available dataset. Image processing: rescale, random rotation, shifting, zooming. Accuracy: 95.40%; Precision: 95.65%; Recall: 95.40%; F1 score: 95.44%.
• Nayak et al. (2023). Technique: MobileNetV2. Species: rice. Dataset: real field images. Image processing: resizing, random noise injection, random rotation, flipping; segmentation (Grab-Cut and min-cut algorithms). Accuracy: 97.56%; F1 score: 98.31%.

et al., 2022). Mathew et al. utilized Grey-Level Co-Occurrence Matrix (GLCM) for feature extraction, a K-means algorithm for segmentation, and a
voting classifier for disease prediction from plant leaf images (Mathew et al., 2022). Rajpoot et al. utilized a combination of Brightness-Preserving
Bi-Histogram Equalization (BBHE) (Wang & Ye, 2005) and CNN for early-stage mango disease detection (Rajpoot et al., 2022). Similarly, Gajjar
et al. developed a hybrid model employing a Single-Shot Detector (SSD), which detects the leaves in an image and outputs a bounding box for each detected leaf, together with a CNN for disease classification, suitable for real-time testing (Gajjar et al., 2021). Bedi and Gole proposed a model
integrating Convolutional AutoEncoder (CAE) (Masci et al., 2011) and CNN, where CAE reduces computational complexity by compressing
images, while CNN performs disease classification, making it suitable for IoT devices (Bedi & Gole, 2021). Mohapatra et al. proposed the Cat
Swarm Updated Black Widow Model (CSUBW) to increase the classification accuracy of CNN, specifically tested on mango plant leaves
(Mohapatra et al., 2022). Altalak et al. developed a hybrid model by combining the Convolutional Block Attention Module (CBAM) (Woo
et al., 2018), Support Vector Machine (SVM) (Cortes & Vapnik, 1995), and ResNet50 (He et al., 2017), evaluated on tomato plant
images to detect nine diseases (Altalak et al., 2022). Shah et al. suggested a ResTS (Residual Teacher/Student) model, which contains a ResTeacher
as a decoder for the recreation of the images and ResStudent to classify denoised images and a decoder to provide better visualization and classi-
fication than previous Teacher/Student architecture (Shah et al., 2022). Zhao et al. proposed a RI-Net by combining RI-Blocks to reduce the com-
putational complexity of the CNN model, resulting in a lightweight model suitable for real-time deployment. The CBAM module (Woo
et al., 2018) is added after each RI-Block to identify small intra-class differences and large inter-class differences (Zhao, Sun, et al., 2022). Archana
et al. proposed a novel support vector machine-based probabilistic neural network (NSVMBPNN), which gives better classification accuracy than models like Probabilistic Neural Network (PNN), Naïve Bayes, and SVM (Archana et al., 2022).
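Since several of the hybrid pipelines above (e.g., Mathew et al.) rely on GLCM texture features, a minimal sketch of building a grey-level co-occurrence matrix for one horizontal offset, and deriving the classic contrast feature, may help; the 4 × 4 image and 4 grey levels are illustrative, not taken from any cited work:

```python
import numpy as np

def glcm(image, levels):
    """Grey-level co-occurrence counts for the horizontal offset (0, 1)."""
    m = np.zeros((levels, levels), dtype=int)
    # Count each pixel paired with its right-hand neighbour
    for a, b in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[a, b] += 1
    return m

def glcm_contrast(m):
    """Contrast feature: sum of P(i, j) * (i - j)^2 over the normalized GLCM."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float((p * (i - j) ** 2).sum())

img = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
])
counts = glcm(img, levels=4)
```

In practice images are first quantized to a small number of grey levels, several offsets and angles are accumulated, and features such as contrast, energy, and homogeneity are fed to a classifier such as the voting ensemble used by Mathew et al.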
Hybrid models combine multiple techniques or methodologies to maximize the strengths of each approach while addressing the limitations of
individual models. Hybrid models are especially helpful in solving complicated problems that require a multidimensional strategy in order to accu-
rately predict or classify data. Hybrid models can improve overall performance, robustness, and adaptability by combining multiple techniques,
resulting in reliable outcomes even in challenging or diverse datasets. Table 2 shows the recent hybrid approaches developed by researchers for
identification and classification of plant diseases.

3.2 | Novel techniques for plant disease detection and classification

This section provides an overview of novel techniques proposed by researchers for the identification of plant diseases from leaf images.
Wang et al. attempted to improve model attention to regions of interest in plant disease images; an Improved Attention Sub-Module (IASM) mechanism is introduced, and for weight reduction, the GhostNet (Han et al., 2020) and Weighted Boxes Fusion (WBF) structures are proposed.
To increase the learning capability of each feature layer, fast normalization fusion and Bidirectional Feature Pyramid Network (BiFPN) are com-
bined for weighted feature fusion (Wang et al., 2022). Albattah et al. propose a drone-based DL technique employing an improved
EfficientNetV2-B4 model (Tan & Quoc, 2021). The model performs better with a shallow network structure (Albattah et al., 2022). Pandian et al.
proposed three DCNN models for plant leaf disease identification: Conv3, Conv4, and Conv5. Data augmentation techniques such as DCGAN
(Radford et al., 2016), NST (Gatys et al., 2015), Principal Component Analysis (PCA) (Pearson, 1901), and hyperparameter tuning approaches are
used to demonstrate the impact on proposed model performance. The performance of Conv5-DCNN is better than Conv3, Conv4, and other
transfer learning and ML methods (Pandian et al., 2022). Narmadha et al. proposed DenseNet169-MLP, which uses DenseNet169 (Huang et al., 2017) to extract features and a Multilayer Perceptron (MLP) for classification (Narmadha et al., 2022). Nandhini et al. combine Recurrent Neural Network
(RNN) and CNN to propose a Gated-Recurrent Convolutional Neural Network (G-RecCovNN), with a convolutional layer to learn the spatial connections and a recurrent layer to learn the temporal dependencies between pictures collected at various times (Nandhini et al., 2022). Keceli
et al. used multi-task learning and transfer learning to propose a novel method called multi-input multi-output CNN. The model receives two
inputs (learned features), one from the multi-input network and another from a pre-trained AlexNet (Krizhevsky et al., 2017), and gives two outputs,
which are the classification of plant type and diseases (Keceli et al., 2022).
Xu et al. introduced a new data augmentation approach called Style Consistent Image Translation (SCIT), which enables adaptation of varia-
tions from one class to another while preserving the unrelated characteristics of the original class (Xu et al., 2022). Yogeswararao et al. proposed
three novel densely connected convolutional neural networks (DCCNN), in which an 8-block DCCNN (which incorporates 8 modified dense block
layers) and skip connections are integrated into the modified dense block, which helps to find a better local minimum (Yogeswararao et al., 2022).
To address the SSD model's poor recognition rate and accuracy, Wang et al. suggest three strategies for identifying plant leaf disease.
The first, squeeze-and-excitation SSD (Se_SSD), combines the SSD feature extraction network with a channel attention mechanism. The second,
deep block SSD (DB_SSD), improves the VGG (Simonyan & Zisserman, 2015) feature extraction network, and the third, deep block attention SSD
(DBA_SSD), combines the improved Visual Geometry Group (VGG) network with a channel attention mechanism.
DBA_SSD gives the best mAP of 92.20% on the PlantVillage dataset (Wang et al., 2021). Sun et al. proposed a lightweight model named
Mobile End AppleNet-based Single Shot Detector (MEAN-SSD). The Mobile End AppleNet (MEAN) block is introduced to reduce the complexity
and size of the proposed model (Sun et al., 2021). Hassan and Maji combined Inception and residual connection to propose a novel CNN
TABLE 2 A brief overview of hybrid approaches developed for plant disease identification.

| Reference | Techniques | Plant species | Dataset | Image processing techniques | Accuracy | Precision | Recall | F1 score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yang et al. (2022) | LFC-Net | Strawberry | Real field and Internet images | Image resize to 224 × 224, 299 × 299, and 331 × 331 pixels | 92.48% | 90.68% | 86.32% | 88.45% |
| Sutaji and Yıldız (2022) | LEMOXINET (MobileNetV2 + Xception) model | Multiple crops | Turk-Plant, iBean, Citrus, PlantVillage, and Rice datasets | Image resize to 224 × 224 pixels, normalization | 99.10% | 99.10% | 99.02% | 99.03% |
| Sunil et al. (2022) | U2-Net + EfficientNetV2 | Cardamom | Real field images | Feature extraction: U2-Net | 98.26% (EfficientNetV2-L) | 98% (EfficientNetV2-L) | 98% (EfficientNetV2-L) | 98% (EfficientNetV2-L) |
| Abbas et al. (2021) | Pre-trained DenseNet121 as the classification model | Tomato | Internet and PlantVillage dataset | Data augmentation: C-GAN | 99.51% (5 classes); 98.65% (7 classes); 97.11% (10 classes) | 99%; 98%; 97% | 99%; 99%; 97% | 99%; 98%; 97% |
| Syed-Ab-Rahman et al. (2022) | ResNet101 + RPN + CNN | Citrus | Publicly available dataset | Preprocessing: binary image conversion, histogram equalization, rotation; feature extraction: ResNet101; segmentation: binary mask to define the bounding box | 86.2% (black spot); 97.2% (canker); 94.64% (Huanglongbing) | 93.8%; 99%; 91% | 87%; 95.8%; 94.6% | 90%; 97.9%; 93% |
| Pandian et al. (2022) | 14-layered DCNN | Multiple crops | Publicly available datasets | Data augmentation: basic image manipulation (BIM), Neural Style Transfer (NST), and Deep Convolution Generative Adversarial Network (DCGAN) | 99.96% | 99.79% | 99.79% | 99.79% |
| Mathew et al. (2022) | Voting classification (DT + SVM + KNN) | Potato | PlantVillage dataset | Preprocessing: RGB to greyscale conversion; feature extraction: GLCM algorithm; segmentation: K-means algorithm | 92.00% | 92.05% | 92.00% | - |
| Rajpoot et al. (2022) | Hybrid BBHE-CNN | Mango | Images collected from the Internet | Histogram equalization, image resize to 256 × 256 | 95.65% | 94.43% | 96.55% | - |
| Gajjar et al. (2021) | SSD and MobileNetV1 | Multiple crops | PlantVillage, self-acquired, and Internet images | Data enhancement: increasing brightness, randomly flipping images left to right and up to down | 96.88% | 93.78% | 94.91% | 95.34% |
| Bedi and Gole (2021) | CAE-CNN model | Peach | PlantVillage dataset | CAE to reduce the dimensionality of the image | 98.38% | 98.00% | 98.72% | 98.36% |
| Mohapatra et al. (2022) | CSUBW + CNN | Mango | Real field images | Preprocessing: contrast enhancement and histogram equalization; segmentation: geometric mean-based neutrosophic with a fuzzy c-means method; feature extraction: upgraded local binary pattern, colour and pixel features | 91.20% | - | - | - |
| Altalak et al. (2022) | ResNet-50-CBAM (SVM) | Tomato | PlantVillage dataset | RGB to BGR image conversion; data augmentation: rotation, shifting, flipping, and zooming | 97.23% | - | - | - |
| Shah et al. (2022) | ResTS (ResTeacher + Decoder + ResStudent) | Multiple crops | PlantVillage dataset | Binary thresholding algorithm (threshold = 0.6) to segment ROIs | - | - | - | 99.1% |
| Zhao et al. (2022) | RIC-Net, combining the RI-block (merging the residual structure with the Inception structure) and CBAM | Multiple crops | PlantVillage dataset | Augmentation: random rotation by 20 and 30 degrees; random horizontal and vertical offset between 0 and 0.2 times the image height; random scaling and flipping | 99.55% | - | - | - |
| Archana et al. (2022) | NSVMBPNN | Rice | Real field images | Preprocessing: image resizing, Wiener filtering, RGB channel separation; colour features: novel intensity-based colour feature extraction (NIBCFE); texture features: GLCM and Bit Pattern Features (BPF); shape features: infected region diameter and area; segmentation: modified k-means | 95.20% (bacterial leaf blight); 97.60% (brown spot); 99.20% (healthy leaf); 98.40% (rice blast) | 88.24%; 100%; 100%; 100% | 100%; 91.43%; 93.33%; 93.33% | - |
architecture. Depth-wise separable convolution is used in place of standard convolution to reduce the number of parameters. The proposed
model has only 0.42 million trainable parameters (Hassan & Maji, 2022).
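The parameter savings from replacing a standard convolution with a depth-wise separable one can be checked with a few lines of arithmetic (a generic sketch; the layer shapes below are illustrative, not those of Hassan and Maji's network):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depth-wise step: one k x k kernel per input channel,
    # followed by a 1 x 1 pointwise convolution mixing channels
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                  # 73,728 parameters
sep = depthwise_separable_params(3, 64, 128)   # 8,768 parameters
print(std, sep, round(std / sep, 1))           # roughly an 8x reduction here
```

The ratio grows with the number of output channels, which is why such layers dominate lightweight architectures like MobileNet.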
Talasila et al. used MobileNetV2 (Sandler et al., 2018) as a feature extractor with DeepLabv3+ layers (Chen et al., 2018) for identifying the
infected region. The DCNN model is proposed to have 1.18 million trainable parameters and a model size of 4.18 MB, making it ideal to be
deployed on IoT-based devices or mobile phones (Talasila et al., 2023). Hari and Singh proposed a CNN model that employs feature reuse. The
model proposed can easily be deployed on drone surveillance systems or resource-constrained devices as the model size is very small (only
0.38 MB) and the trainable parameters are 0.1 million, which is very little as compared to other CNN models (Hari & Singh, 2023).
Ulutaş and Aslantaş proposed two novel CNN models, namely CNN1 and CNN2, for the classification. PSO (Kennedy & Eberhart, 1995) is
used for optimization of the hyperparameters, grid search is used for weight optimization, and VGG16 (Simonyan & Zisserman, 2015) models are
used for fine-tuning. Four ensemble models are created, and a voting ensemble using hard and maximum voting combines the predictions of
these CNN models (Ulutaş & Aslantaş, 2023).
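The hard-voting scheme described above can be sketched with scikit-learn; this is a minimal illustration on synthetic feature vectors, not the authors' CNN ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for image feature vectors with class labels
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hard voting: each base model casts one vote, the majority class wins
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

In deep-learning ensembles the base estimators would be CNNs and the votes their predicted class labels, but the combination rule is the same.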
Resti et al. propose a novel method using the Multinomial Naïve Bayes (MNB) method and a fuzzy discretization approach. In the first
step, a fuzzy discretization approach is used to transform continuous data into discrete values, which are then used as input for the MNB classi-
fier. This approach is designed to address the limitations of traditional discretization methods, which can be sensitive to the choice of thresh-
olds and can lead to information loss (Resti et al., 2023). Verma et al. propose a meta-learning framework for CNN architecture
recommendation in plant disease identification. Initially, a meta-feature extractor generates meta-feature vectors for each training dataset.
Then, a meta-learner, utilizing Decision Tree (DT), Random Forest (RF), and Support Vector Regression (SVR), selects the best regressor trained
on meta-datasets to forecast accuracies associated with pre-trained models. Finally, the top n models are recommended based on projected
accuracies (Verma et al., 2023). In Table 3 a brief overview of novel techniques proposed by researchers for plant disease classification and
detection is provided.
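The meta-learning recommendation idea can be illustrated with a toy sketch: a regressor trained on dataset meta-features forecasts each pre-trained model's accuracy, and the top-n models are recommended. All names and numbers below are synthetic placeholders, not Verma et al.'s setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical meta-dataset: each row holds a training dataset's meta-features
# (e.g., class count, image count, imbalance ratio); each target is the
# accuracy a given pre-trained model achieved on that dataset.
meta_features = rng.random((40, 3))
model_names = ["VGG16", "ResNet50", "MobileNet", "DenseNet121"]
accuracies = {m: rng.random(40) for m in model_names}

# One regressor per candidate model forecasts its accuracy on a new dataset
regressors = {m: RandomForestRegressor(random_state=0).fit(meta_features, acc)
              for m, acc in accuracies.items()}

def recommend(new_meta, n=2):
    # Rank candidate models by predicted accuracy and return the top n
    preds = {m: r.predict(new_meta.reshape(1, -1))[0]
             for m, r in regressors.items()}
    return sorted(preds, key=preds.get, reverse=True)[:n]

print(recommend(rng.random(3)))
```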
3.3 | Recent approaches using image segmentation and feature extraction

This section presents a brief summary of recent studies in the field of plant disease detection and classification, where the main
focus is on the various image segmentation and feature extraction techniques used by researchers to improve the performance of
ML models.
Khan et al. created an end-to-end segmentation strategy; for this, a CNN model tuned for semantic leaf segmentation (SLS) is
employed (Khan et al., 2022).
Zhao et al. proposed DoubleGAN to produce high-resolution photos of diseased leaves, with its operation divided into two stages: in the
first stage, a Wasserstein Generative Adversarial Network (WGAN) (Arjovsky et al., 2017) generates 64  64 pixel images from diseased leaf
input images, and in the second stage, to get the corresponding 256  256, a Super-Resolution Generative Adversarial Network (SRGAN)
(Ledig et al., 2017) is used. Images produced by DoubleGAN are clearer than images produced by the previous DCGAN (Zhao, Chen,
et al., 2022).
Hasan et al. proposed a segmentation framework using complex analysis processes and the GrabCut method (Rother et al., 2004) to address
several problems, such as mild symptoms that remain undetected (Hasan, Yusuf, et al., 2022). S. Hasan et al. created a feature vector by fusing L*a*b*
space-based colour histogram features after removing the lightness channel (L* channel), and used RF as the base classifier to identify apple
plant diseases (Hasan, Jahan, & Islam, 2022). Sodjinou et al. use U-Net (Ronneberger et al., 2015) and the K-means algorithm for semantic
segmentation; a subtractive clustering algorithm is used to define cluster centroids, but image quality decreases during the implementation
of semantic segmentation (Sodjinou et al., 2022).
Uryasheva et al. developed a sensing system to support digital phenotyping, for automatic apple leaf segmentation in field conditions. On-
field multispectral images are obtained using three cameras to collect long-wavelength Infrared band, visible spectrum, and vegetation index infor-
mation. On the test dataset, an average Intersection over Union (IoU) of 0.72 was obtained (Uryasheva et al., 2022).
Huang et al. focused on the traditional problem of identifying a disease from images collected from diverse backgrounds. For this, the FC-
SNDPN (Fully Convolutional—Switchable Normalization Dual Path Networks) approach, which enhances the capability of feature extraction, is
suggested. In data pre-processing, fully convolutional network (FCN) (Long et al., 2015) based on VGG-16 (Simonyan & Zisserman, 2015) is uti-
lized to remove the background from a complex background picture (Huang et al., 2022).
Chen et al. suggested a novel SegCNN approach which uses enhanced ANN (EANN) for plant disease image segmentation. The Levenberg–
Marquardt (LM) and genetic algorithm (GA) are used to find suitable connection weights and thresholds, and CNN is used for classification (Chen
et al., 2021).
To improve the classification accuracy, Karthickmanoj et al. used pixel replacement-based segmentation to find areas of interest. For fea-
ture extraction, Oriented FAST and Rotated BRIEF (ORB) and GLCM techniques were used, with GLCM extracting texture features like energy,
homogeneity, and contrast. The leaves are then classified as healthy or diseased using an SVM classifier (Karthickmanoj et al., 2021).
TABLE 3 Recent novel approaches proposed by researchers for plant disease detection and classification.

| Reference | Techniques | Plant species | Dataset | Image processing techniques | Accuracy | Precision | Recall | F1 score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Wang et al. (2022) | IASM, (GhostNet and WBF), BiFPN and YOLOv5 model | Peanut | PlantVillage dataset, PlantDoc dataset, and real field images | Preprocessing: image vector normalization, padding resizing, flipping, blurring operations, edge detection enhancement, colour enhancement, image denoising and logarithmic transformation; LabelImg tool used for image calibration | 93.73% | 93.73% | 92.94% | 92.97% |
| Albattah et al. (2022) | Improved EfficientNetV2-B4 model with Swish function | Multiple crops | PlantVillage and real field images using drone | - | 99.99% | 99.63% | 99.93% | 99.78% |
| Pandian et al. (2022) | Three DCNN models: Conv3-DCNN, Conv4-DCNN and Conv5-DCNN | Multiple crops | Publicly available dataset | Data augmentation: DCGAN, NST and PCA; position augmentation: scaling, flipping, cropping, affine transformation, translation, rotation and padding; colour augmentation: saturation, contrast and brightness | 98.41% (Conv5-DCNN) | 94% (Conv5-DCNN) | 100% (Conv5-DCNN) | 97% (Conv5-DCNN) |
| Narmadha et al. (2022) | DenseNet169-MLP model | Rice | Publicly available dataset | Preprocessing: RGB to grayscale conversion, median filter; segmentation: fuzzy c-means | 97.68% | 96.82% | 96.40% | 96.43% |
| Nandhini et al. (2022) | G-RecConNN | Banana | Real field images | Preprocessing: resize to 21 × 21 pixels, RGB to HSV conversion | 95.1% | - | 93.60% | - |
| Keceli et al. (2022) | Multi-input multi-task neural network model | Multiple crops | PlantVillage dataset and FISB dataset | Feature extraction: AlexNet | 99% (plant type); 89% (disease status) | 98%; 91% | 97%; 86% | 98%; 87% |
| Xu et al. (2022) | SCIT: data augmentation approach based on CycleGAN | Tomato | Real-field images | Feature extraction: pre-trained VGG19 | 96.48% (DenseNet121 using SCIT) | 99.61% | 97.87% | - |
| Yogeswararao et al. (2022) | 8-block DCCNN | Cucumber | Real-field images and PlantVillage dataset | Data augmentation: image flipping, noise injection, gamma correction, rotation, PCA colour augmentation, scaling | 98.23% | - | - | - |
| Wang et al. (2021) | DBA_SSD | Multiple crops | PlantVillage dataset | Data augmentation: flipping, histogram equalization, channel shuffle, hue saturation value | 92.20% | - | - | - |
| Sun et al. (2021) | MEAN-SSD | Apple | Real-field dataset | Data augmentation: mirroring, brightening, contrast adjustment, horizontal and vertical rotation, sharpening | 83.12% | - | - | - |
| Hassan and Maji (2022) | Novel CNN architecture | Multiple crops | PlantVillage dataset and publicly available Cassava and Rice datasets | - | 99.39% (PlantVillage); 99.66% (Rice); 76.59% (Cassava) | 99.66%; 99.17%; 62.03% | 99.67%; 99.19%; 72.63% | 99.67%; 99.18%; 66.91% |
| Talasila et al. (2023) | DCNN | Black gram | Real field dataset | Data augmentation: mirror symmetry, random shifting, noise injection, rotation, illumination correction; segmentation: DeepLabv3+ and MobileNetV2 | 99.54% | 98.78% | 98.82% | 98.80% |
| Hari and Singh (2023) | CNN | Banana, guava and mango | Publicly available dataset | Rotation, height shift, width shift | 98.82% (banana); 99.71% (guava); 97.41% (mango) | - | - | - |
| Ulutaş and Aslantaş (2023) | Ensemble model (MobileNetV3Small, EfficientNetV2L, CNN2, CNN1, InceptionV3) + PSO + grid search + voting | Tomato | PlantVillage dataset | Resizing | 99.12% (model 4) | 99% | 99% | 99% |
| Resti et al. (2023) | Fuzzy discretization + Multinomial Naïve Bayes | Corn | Real field dataset | Resizing and cropping | 98.63% | 94.14% | 85.94% | 89.95% |
| Verma et al. (2023) | Meta-learning framework (MobileNet + DT/RF/SVR + Rank-Biased Overlap (RBO)) | Multiple crops | Publicly available dataset | - | - | - | - | - |
Mzoughi and Yahiaoui identify disease independently of the plant species: instead of global leaf images, local disease symp-
toms are used for disease prediction. To find the infected area, three DL-based semantic segmentation techniques are utilized: fully con-
volutional networks (FCN) (Long et al., 2015), Pyramid Scene Parsing Networks (PSPnet) (Zhao et al., 2017), and U-Net (Ronneberger et al., 2015).
After evaluation, PSPNet101 is considered for segmentation as it can learn multi-scale features efficiently. For classification EfficientNet is used
and only regions that contain diseased pixels are considered (Mzoughi & Yahiaoui, 2023).
Yang et al. present the TSTC, a novel Triple-branch Swin Transformer Classification network, for the simultaneous and separate diagnosis of
disease and severity. The TSTC network has three sections. First, the multitask feature extraction module uses Swin Transformer (Liu et al., 2021)
to extract features from the input image. Second, the feature fusion module uses Compact Bilinear Pooling (CBP) to create fused features that
improve model discrimination. The deep supervision model, which supervises hidden layers to provide extra discriminative features, is the third
module (Yang, Wang, et al., 2023). In Table 4, a brief summary of recent image segmentation and feature extraction approaches proposed for
plant disease classification and detection is given.

4 | STEPS USED FOR DETECTION AND CLASSIFICATION OF PLANT DISEASE

This section explains the steps involved in an ML solution for the automatic identification of plant diseases. Figure 3 shows the general
architecture of a plant disease detection and classification system.

4.1 | Image acquisition

Image acquisition is carried out to collect and acquire photographs of the relevant object, which are then supplied into the ML model for learning
and classification. The first and most important stage in digital image processing is converting an analog image into binary information for com-
puter processing (Wani et al., 2022). There are several ways to get images needed to train our model:

1. A high-resolution digital camera can capture images of the actual fields.


2. Due to the widespread use of mobile devices, smartphone cameras offer a fantastic, affordable option for taking pictures.
3. IoT-based tools may be used to monitor and identify plant diseases.

Existing datasets of plant leaves in different health conditions can be used to train ML models for plant disease identification. These datasets
include healthy and diseased states from various sources. For instance, the PlantVillage dataset (n.d.) is a freely accessible option, while other
datasets may require payment. Achieving high accuracy throughout the framework is imperative to ensure the effectiveness of an
automated disease identification system. This accuracy depends on the quality and amount of relevant pictures used to train ML models. To ensure effec-
tive training for real-life situations, it is necessary to capture images under diverse circumstances, including different lighting conditions and at var-
ious times of the day. Most research in this field uses RGB imagery. However, hyperspectral imaging can be used for more detailed analysis and
classification based on unique spectral features. Hyperspectral imaging is an advanced method that captures images in multiple closely spaced
spectral bands across the electromagnetic spectrum (Lu & Fei, 2014). Hyperspectral imaging gives detailed spectral data for each pixel, unlike con-
ventional imaging, which only captures three colour bands. This method is used in remote sensing (Wu et al., 2023) and agriculture (Wang
et al., 2023; Zhang et al., 2023). Plant disease detection can also use multispectral imaging (Yang, Liu, et al., 2023). This method captures images
in a number of spectral bands, usually more than three but less than hyperspectral imaging. Multispectral imaging analyzes and characterizes
objects and materials using their spectral responses. Integrating hyperspectral and multispectral imaging techniques could lead to future advances
in several fields (Wang et al., 2023). This integration enables more precise and thorough analysis, detection, and classification of objects and
materials.

4.2 | Image preprocessing

Images are first preprocessed to remove noise and distortion from camera angle and reflections. Preprocessing improves image quality for
faster analysis. Figure 4 shows multiple preprocessing methods. Preprocessing allows us to remove undesirable distortions and improve key
features that are critical to the intended application. These preprocessing approaches use mathematical or statistical methods to improve
the image's visual look and geometric features while ensuring that it follows the needed standard format. Preprocessing is frequently
required to reduce computing expenses. Additionally, image resizing is performed to adjust the images' dimensions to accommodate differ-
ent resolutions.
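The resizing and denoising steps above can be sketched in NumPy; this is an illustrative implementation (nearest-neighbour resize, mean-filter denoising), whereas real pipelines typically rely on OpenCV or PIL:

```python
import numpy as np

def to_grayscale(rgb):
    # Luminance-weighted channel mix, a common RGB-to-grey conversion
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour resize: index the source grid at scaled positions
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def mean_filter(img, k=3):
    # Simple denoising: average each pixel with its k x k neighbourhood
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.random.default_rng(0).integers(0, 256, size=(64, 48, 3))
grey = to_grayscale(img)
small = resize_nearest(grey, 32, 32)
denoised = mean_filter(small)
print(denoised.shape)  # (32, 32)
```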
TABLE 4 An overview of recent image segmentation and feature extraction techniques proposed for automatic plant disease detection and classification.

| Reference | Techniques | Species | Dataset | Image processing techniques | Accuracy | Precision | Recall | F1 score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Khan et al. (2022) | SLS using an optimized CNN | Tomato | PlantVillage and TomatoDB | Contrast adjustment, vertical flipping, horizontal flipping, changing brightness levels | 97.60% | 95.00% | 96.00% | 97.00% |
| Zhao, Chen, et al. (2022) | Classifier: VGG16, ResNet50 and DenseNet121 | Multiple crops | PlantVillage dataset | Image augmentation: two-stage DoubleGAN (1. WGAN to generate 64 × 64 images; 2. SRGAN to obtain 256 × 256 high-resolution images from the 64 × 64 images) | 99.70% (DenseNet121) | - | - | - |
| Hasan, Yusuf, et al. (2022) | Complex analysis processes and GrabCut | Coffee and apple | Publicly available datasets (Coffee dataset, RoCoLe dataset, and apple dataset) | Segmentation: GrabCut | 90.00% | 90.00% | - | - |
| Hasan, Jahan, & Islam (2022) | Random Forest | Apple | PlantVillage dataset | RGB to grey, RGB to LAB; feature extraction: Discrete Wavelet Transformation (DWT); segmentation: L*a*b* space-based colour histogram | 98.63% | 98.65% | 98.64% | 98.64% |
| Sodjinou et al. (2022) | U-Net + K-means algorithms | Crops and weeds | Publicly available dataset | Preprocessing: image normalization between 0 and 1, thresholding to remove background; semantic segmentation: U-Net | 99.19% | - | - | - |
| Uryasheva et al. (2022) | Eff-Unet (EfficientNet + U-Net) | Apple | Real field images | Filtering, histogram equalization, scaling, rotation, flipping, shifting, random brightness change; feature extraction: EfficientNet; segmentation: U-Net | - | - | - | - |
| Huang et al. (2022) | FC-SNDPN for better feature extraction | Tomato | Real field images | Image cropping, clipping, rotation, flipping, noise addition | 97.59% | - | - | 98.75% |
| Chen et al. (2021) | EANN; GA and LM algorithms to find suitable connection weights and thresholds; CNN for classification | Cucumber | Real field images | Greyscale transformation, image resize to 128 × 128 pixels, hybrid GIWA_GF filtering (gradient inverse weighted approach (GIWA) + Gaussian filtering); segmentation: ANN | 93.75% | - | 100.00% | - |
| Karthickmanoj et al. (2021) | Classifier: SVM | Pomegranate | Publicly available dataset | Preprocessing: contrast enhancement and RGB to HSV conversion; feature extraction: GLCM and ORB; segmentation: thresholding-based method (pixel replacement-based segmentation) | 92.32% | 90% | 95% | - |
| Mzoughi and Yahiaoui (2023) | Classification: EfficientNet | Multiple crops | PlantVillage dataset and Internet | Random vertical and horizontal flipping, random rotation; segmentation: PSPNet101 | - | - | - | - |
| Yang, Wang, et al. (2023) | TSTC (Swin Transformer + CBP) | Multiple crops | Publicly available dataset (AI Challenger 2018 dataset) | Resizing, flipping and cropping | 99% (disease); 88.73% (severity) | - | - | - |
FIGURE 3 Different steps in the process of detecting and classifying plant disease.

4.2.1 | Data augmentation

When there is limited data available for the desired photographs, data augmentation is commonly employed to expand the data quantity. Position
augmentation uses affine transformations such as flipping, padding, cropping, rotation, scaling, and translation (Ren et al., 2016). Colour augmen-
tation, on the other hand, is the process of changing pixel values to change the brightness, contrast, and saturation attributes of an image,
resulting in a diverse range of images. These augmentation strategies are critical because our machine learning model needs a large dataset of rel-
evant photos to properly train for plant leaf disease identification. Figure 4 illustrates a variety of image augmentation techniques used by
researchers. These solutions are quite useful for resolving difficulties such as class imbalances in the dataset. Several approaches have been used
to generate images of minority groups, including SCIT (Xu et al., 2022), MFC-GAN (Ali-Gombe & Elyan, 2019), DCGAN (Radford et al., 2016), and
ImbCGAN (Douzas & Bacao, 2018). In some cases, the images used for training and testing purposes may differ in style or belong to different
domains. To address this issue, image-style alteration strategies are employed. The main strategies adopted include S + U Learning, StyleMix
(Hong et al., 2021), and StyleAug (Jackson et al., 2019). In reviewed studies, Zhao, Chen, et al. (2022) suggest using DoubleGAN to generate high-
resolution pictures of damaged leaves, while Abbas et al. (2021) use C-GAN to enlarge the dataset. Li et al. (2023) have introduced a
novel supervised image augmentation technique called "Negative Contrast" (NC), which generates healthy leaf images by removing diseased
areas from the real image and then changing the image's label from diseased to healthy. Unlike similar methods such as Cut, Paste,
and Learn, NC doesn't merge diseased areas with different backgrounds; instead, it generates pseudo-healthy samples.
Despite advancements, artificially generated images still fall short of real ones. Complex GAN structures like DoubleGAN (Zhao, Chen,
et al., 2022) need substantial computational power, which may limit their use in resource-constrained applications. Maintaining diversity and coverage
of all disease types and variations is difficult while extending the dataset. Deep learning models, especially those containing GANs, are often criti-
cized for their lack of interpretability, which can affect trust and adoption, especially in critical domains like agriculture.
For further reading on data augmentation methods, refer to Kumar et al. (2023) for insights into basic and advanced image data augmentation
and Xu et al. (2023) for augmentation-based methodologies in computer vision applications. Some augmentation approaches are shown in
Figure 5.
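The basic position and colour augmentations described above can be sketched in NumPy; this is an illustrative toy, whereas most studies use library pipelines such as Keras's ImageDataGenerator or Albumentations:

```python
import numpy as np

def augment(img, rng):
    """Return simple position- and colour-augmented variants of one image."""
    return {
        "hflip": np.flip(img, axis=1),   # position: horizontal flip
        "vflip": np.flip(img, axis=0),   # position: vertical flip
        "rot90": np.rot90(img),          # position: 90-degree rotation
        # colour: random brightness shift, clipped to the valid pixel range
        "bright": np.clip(img + rng.integers(-40, 40), 0, 255),
    }

rng = np.random.default_rng(0)
leaf = rng.integers(0, 256, size=(64, 64, 3))
for name, aug in augment(leaf, rng).items():
    print(name, aug.shape)
```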

4.3 | Image segmentation

Image segmentation algorithms play a crucial role in extracting and grouping pixels based on their characteristics in an image. This allows for the
labeling of pixels, creating distinct groups that share common traits. These labels enable the division of an image into essential and less significant
parts (Narmadha et al., 2022).
Selecting a suitable segmentation method is critical in plant disease detection systems. Both locality- and threshold-based segmentation
methods are considered the best in this case. Figure 4 shows popular image segmentation approaches. One widely utilized approach is semantic
FIGURE 4 Different techniques used during various stages of automatic plant disease identification.

segmentation, which assigns semantic labels to individual pixels in an image, thereby categorizing them into distinct classes. This approach high-
lights object boundaries and interior regions. Another approach is instance segmentation, which combines object detection and semantic segmen-
tation to accurately identify and segment individual object instances within an image. Panoptic segmentation combines instance and semantic
segmentation. This integrated method assigns pixels to semantic categories and provides object-level details. Panoptic segmentation, which cap-
tures object-level information and semantic classification, enhances image interpretation. In selecting a particular image segmentation technique
FIGURE 5 Different data augmentation techniques.

FIGURE 6 Segmentation approaches and their related techniques.

for plant disease identification, several factors are considered, including the nature of the dataset, the complexity of the disease symptoms, com-
putational efficiency, and the desired level of detail in the segmentation. For instance, semantic segmentation techniques like U-Net
(Ronneberger et al., 2015) and DeepLabV3 (Chen et al., 2018) are commonly chosen when the goal is to classify each pixel in the image into
predefined classes, which is suitable for scenarios where precise delineation of diseased areas is required. Instance segmentation methods such as
Mask-RCNN (He, 2017) and Path Aggregation Network (PANet) (Liu, 2018) are preferred when it's necessary to differentiate between multiple
instances of the same class within an image, allowing for individual identification and classification of diseased regions. On the other hand, panop-
tic segmentation approaches like Panoptic FPN (Kirillov, 2019) and DetectoRS (Qiao et al., 2021) are utilized when both semantic and instance
segmentation are required to provide a comprehensive understanding of the scene, particularly in cases where various types of diseased regions
need to be identified alongside other elements in the image.
Minaee et al. (2021) conducted an extensive review of image segmentation techniques, categorizing them based on semantic, instance, and
panoptic segmentation. Figure 6 provides a visual representation of these categories, offering a comprehensive overview of the different segmen-
tation techniques.
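Clustering-based segmentation, such as the K-means approaches cited above, can be sketched by clustering pixel colours and keeping the cluster furthest from the background colour. This is a toy illustration on a synthetic image, not any cited pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy image: bright background with a darker "lesion" square plus noise
img = np.full((40, 40, 3), 200.0)
img[10:20, 10:20] = 60.0
img += rng.normal(0, 5, img.shape)

# Cluster pixel colours into background vs foreground
pixels = img.reshape(-1, 3)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# Label the darker cluster as the segmented (diseased) region
dark = np.argmin(km.cluster_centers_.mean(axis=1))
mask = (km.labels_ == dark).reshape(40, 40)
print(mask.sum())  # roughly the 100 lesion pixels
```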

4.4 | Feature extraction

Feature extraction, also known as feature engineering in machine learning, transforms raw data into meaningful features for analysis and model-
ling. The three global feature descriptors extracted from the input images are shape, colour, and texture. In this stage of plant disease detection,
numerous distinguishing qualities are extracted from the picture dataset to classify plants as healthy or sick (Wani et al., 2022). The colour feature
defines the object's visual attributes. It uses various wavelength values to represent the objects. Different statistics from the colour histogram and
colour co-occurrence matrix can be used to create colour models in image processing. These numbers may include measures like mean, kurtosis,
skewness, and standard deviation, which tell us about how the colours in a picture are spread out and what their properties are. Texture features
are a representation of an object's surface characteristics, such as homogeneity, entropy, energy, contrast, and correlation. Local binary pattern
(LBP), Gabor filter, and Grey-Level Co-Occurrence Matrix (GLCM) algorithms can be used to access the image texture patterns (Yuan et al., 2022).
A feature vector can be created for extracting significant features from the images using a variety of CNN models. In Reference Sunil et al. (2022),
the complex background of the images was removed by extracting the multiscale features with U2-Net. In Reference Syed-Ab-Rahman et al.
(2022), ResNet101 is used for feature extraction, and Region Proposal Networks (RPN) are used to extract the potential ROIs. In Reference
Archana et al. (2022), the NIBCFE approach is proposed to extract colour features; BPF and GLCM techniques are utilized to extract texture fea-
tures of the image; and for finding the diameter and area of the infected region, shape features are extracted. Figure 4 shows the various feature
extraction techniques adopted by researchers in their model's development.
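GLCM texture statistics like contrast, energy, and homogeneity can be computed directly from a grey-level co-occurrence matrix. Below is a minimal NumPy version for a single horizontal pixel offset; libraries such as scikit-image (`graycomatrix`/`graycoprops`) provide full implementations with multiple offsets and angles:

```python
import numpy as np

def glcm(img, levels=8):
    """Normalized co-occurrence matrix for horizontally adjacent pixels."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def texture_features(p):
    # Standard Haralick-style statistics over the co-occurrence matrix p
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),
        "energy": np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }

img = np.random.default_rng(0).integers(0, 8, size=(32, 32))
feats = texture_features(glcm(img))
print(feats)
```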

4.5 | Image classification

Classification is a process that involves the identification and categorization of incoming data into distinct and predetermined classes. At this
stage, leaves are classified as healthy or sick. Further, the diseases from which plant is suffering from are also categorized (Wani et al., 2022).
Machine learning techniques like Support Vector Machine (SVM), K-Nearest Neighbours (K-NN), fuzzy C-means, Genetic Algorithms (GA), Artifi-
cial Neural Networks (ANN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), and CNN are the most widely used classification
approaches (Wani et al., 2022). The authors of Reference Hariri and Avşar (2022) proposed a PSO-CNN model, using PSO to determine the optimal
CNN parameters and a CNN to classify plant diseases. An improved EfficientNetV2-B4 classifier for diseases of multiple crops from the
PlantVillage dataset is proposed in Reference Albattah et al. (2022), a DCNN is proposed in Reference Vivekanand (2022), a combined MobileNetV2
and Xception approach is proposed in Reference Sutaji and Yıldız (2022), and an optimized lightweight YOLOv5 model in Reference Wang et al. (2022).
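As a minimal, self-contained illustration of the classical classifiers listed above, the sketch below implements k-Nearest Neighbours from scratch over pre-extracted feature vectors; the two-dimensional features and labels are purely hypothetical, and practical studies would use a library implementation such as scikit-learn's `KNeighborsClassifier` or one of the CNNs discussed.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
        nearest = y_train[np.argsort(dist)[:k]]      # labels of the k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Hypothetical 2-D feature vectors (e.g., [lesion area, mean hue]); 0 = healthy, 1 = diseased.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.12, 0.88], [0.88, 0.12]])))  # → [0 1]
```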
The findings demonstrate the efficacy of CNNs in classifying plant diseases from images, emphasizing their importance in the field. However, the
applicability of neural networks differs depending on the unique traits of various crops and diseases. For image-based disease identification, CNNs like
VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2017), and DenseNet (Huang et al., 2017) capture spatial relationships well. For diseases with
distinct progression patterns, RNNs like Long Short-Term Memory (LSTM) networks are valuable for time-series data analysis (Guo et al., 2024).
Attention mechanisms, GANs, and hybrid models improve disease diagnosis by handling imbalanced datasets and capturing unique disease character-
istics. The neural network architecture should take into account disease complexity, crop characteristics, data availability, and desired performance
metrics to provide robust and effective plant disease diagnosis systems for varied agricultural situations. Nonetheless, current deep learning algo-
rithms encounter technical challenges, notably their opacity, which often leads researchers to perceive them as “black boxes.” Therefore, it is essen-
tial to understand and interpret the deep learning algorithms better (Yuan et al., 2022). In Reference Alzubaidi et al. (2021), a detailed analysis of
various DL models is presented, with various classification methods depicted in Figure 4. Table 5 compares deep learning models that are often
used for plant disease identification, listing important architectural details such as GFlops, the number of parameters, and model size. The data for
model size, number of parameters, and GFlops are sourced directly from the PyTorch platform (n.d.), ensuring accuracy and reliability. Additionally,
the models' execution code is available in PyTorch (n.d.) and Keras (Team, n.d.), facilitating experimentation and reproducibility in diverse settings.

5 | PLANT DISEASE IDENTIFICATION: CHALLENGES AND SOLUTIONS

Some of the difficulties encountered while developing a solution for automatic plant leaf disease recognition are presented below and in the fol-
lowing studies (Barbedo, 2018; Orchi et al., 2022):

5.1 | Inadequate datasets

Photos used to identify diseases are frequently taken under ideal and controlled circumstances. In actuality, there may be several illnesses on a
single leaf, and those present may be at different stages of development. Typically, a single picture with a similar backdrop shows the presence of
a single ailment. Datasets frequently disregard background and ambient elements. As a result, the accuracy obtained by machine learning models
is higher than that attained in a practical implementation. To recognize multiple diseases in an image, a dataset including images from the earliest to
the final stages of development is needed, along with many photos that show multiple diseases in a single image. Since such images are not
always available, deep learning models like GANs can be used to synthesize them (Xu et al., 2022).

TABLE 5 Brief overview of deep learning models utilized for plant disease identification.

| Model | Year | Parameters | Model size | GFlops | Merits | Demerits |
| --- | --- | --- | --- | --- | --- | --- |
| AlexNet (Krizhevsky et al., 2017) | 2012 | 61.1 M | 233.1 MB | 0.71 | Utilizes dropout and the ReLU function | High computation cost and overfitting |
| DenseNet121 (Huang et al., 2017) | 2017 | 8.0 M | 30.8 MB | 4.29 | Blocks of layers; layers connected to each other | Increased number of feature maps |
| EfficientNetV2_s (Tan & Quoc, 2021) | 2021 | 21.5 M | 82.7 MB | 8.37 | Compound scaling approach optimizes architecture | Fine-tuning for specific tasks may require careful hyperparameter tuning |
| GoogLeNet (Inception) (Szegedy et al., 2014) | 2015 | 6.6 M | 49.7 MB | 1.50 | Block and concatenation concept; increased depth | Leads to valuable information loss |
| MobileNetV2 (Sandler et al., 2018) | 2017 | 3.5 M | 13.6 MB | 0.3 | Small, low-latency, low-power, and computationally cheap | Accuracy depends on the size of the model, which is flexible |
| ResNet50 (He et al., 2017) | 2016 | 25.6 M | 97.8 MB | 4.09 | Residual links accelerate deep network convergence | Some layers contribute little or no information; large number of weights |
| Swin TransformerV2 (Liu et al., 2021) | 2021 | 87.9 M | 336.4 MB | 20.32 | Captures long-range dependencies in large-scale images | Implementation and training may require additional effort and resources |
| VGG16 (Simonyan & Zisserman, 2015) | 2014 | 138.4 M | 527.8 MB | 15.47 | Increased depth; small filter size | High computation cost |
| Xception (Chollet, 2017) | 2016 | 22.8 M | 88 MB | - | Better than the Inception model | Expensive to train and prone to overfitting when trained on small datasets |

5.2 | Unbalanced data

To focus entirely on training algorithms and avoid being sidetracked by extraneous factors, the most popular datasets for agricultural disease diagno-
sis are cleaned or their imbalanced nature is ignored. In reality, class distribution is skewed and imbalanced, from moderately to extremely unbal-
anced (Zhao, Chen, et al., 2022). As a result, a dataset containing an equal number of different disease images is required, with a sufficient number of
images to allow the machine learning model to easily learn disease characteristics. To augment datasets with one or more classes, unconditional
image generation methods like GANs can generate new images that are similar to those in the original dataset. Label-conditional image generation
methods like CGAN (Mirza & Osindero, 2014) and Auxiliary Classifier GAN (ACGAN) (Odena et al., 2017) can rebalance datasets by using mutual
information and learning minority class variations from majority-class data. Label-preserving and label-changing image-conditional image generation
methods can improve dataset balance and model robustness by simultaneously maintaining and changing label-dependent and label-independent
properties. Model-based image augmentation may help plant disease classification issues caused by imbalanced datasets (Xu et al., 2023).
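Before resorting to GAN-based image synthesis, a skewed class distribution can often be mitigated with plain random oversampling of minority classes. The sketch below, which assumes integer class labels and operates on indices only, illustrates the idea.

```python
import numpy as np

def oversample_indices(labels, rng=None):
    """Return indices that repeat minority-class samples until every class
    matches the size of the largest class (random oversampling)."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    chosen = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        extra = rng.choice(idx, size=target - len(idx), replace=True)  # resample shortfall
        chosen.append(np.concatenate([idx, extra]))
    return np.concatenate(chosen)

labels = np.array([0] * 90 + [1] * 10)          # 9:1 imbalance
balanced = labels[oversample_indices(labels)]
print(np.bincount(balanced))                     # → [90 90]
```

Duplicated samples carry no new information, which is precisely why conditional generative approaches such as CGAN and ACGAN are attractive for severe imbalance.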

5.3 | Image acquisition

One of the key elements that directly affect an image's attributes is its resolution. Small lesions and spores are detectable with a better resolution.
However, these qualities are impacted by the camera used to capture the image. Crops develop in very diverse natural conditions. Thus, a variety
of elements, including wind, lighting, and other meteorological conditions, have an effect on photographs. So, to increase the robustness of the
machine learning model, there is a need to capture images of the same leaf under different lighting conditions, at different times of the day, from
different angles, and under different environmental conditions.
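When reacquiring images under many lighting conditions is impractical, photometric augmentation offers a rough software proxy. The hypothetical sketch below jitters the brightness and contrast of an image array to mimic illumination changes:

```python
import numpy as np

def jitter_lighting(img, brightness=0.2, contrast=0.2, rng=None):
    """Randomly scale contrast and shift brightness of a uint8 image
    to mimic different illumination conditions."""
    rng = rng or np.random.default_rng()
    alpha = 1.0 + rng.uniform(-contrast, contrast)       # contrast factor
    beta = rng.uniform(-brightness, brightness) * 255.0  # brightness offset
    out = img.astype(float) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

leaf = np.full((4, 4, 3), 128, dtype=np.uint8)           # stand-in for a leaf photo
aug = jitter_lighting(leaf, rng=np.random.default_rng(42))
print(aug.shape, aug.dtype)                              # (4, 4, 3) uint8
```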

5.4 | Image preprocessing

Images are resized to different dimensions to fit the specific processing and storage needs. More information is lost during the preparation and
storing of leaf pictures as the compression ratio increases. The visibility of large lesions may not be much affected, while tiny symptoms
may be significantly distorted. For this reason, DL-based optimized compression (Taubman, 2000; Sharma et al., 2022) is utilized to
reduce image size while maintaining quality, but it increases model complexity. Although processing high-resolution images is computationally
expensive, low-resolution images present their own challenges. Low-resolution images lack clarity and detail, which could lead to disease charac-
teristics being lost. As a result, disease identification systems' accuracy and reliability may suffer, resulting in misclassification or missing detec-
tions. Commonly used methods for tackling low-resolution images include super-resolution techniques like Single-Image Super-Resolution (SISR)

(Yang et al., 2019) and GANs, which aim to enhance image resolution by generating corresponding high-resolution images. Bicubic interpolation
can estimate intermediate pixel values in low-resolution images, improving their size and apparent resolution. Feature Pyramid Networks (FPNs)
collect information at different scales using multi-scale feature maps, and multi-scale feature fusion approaches further improve the model.
Augmentations such as rotation, scaling, and flipping imitate variations in image resolution and viewpoint, while transfer learning adapts
pre-trained models to low-resolution images. By implementing these approaches, researchers can
improve the accuracy and reliability of automatic plant disease identification systems when faced with low-resolution images.
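Interpolation-based upscaling can be sketched compactly; the function below implements bilinear interpolation for a 2-D grayscale image (bicubic follows the same pattern with a 4 x 4 neighbourhood and cubic weights):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image by bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)          # source coordinates for each output pixel
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    img = img.astype(float)
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 10.0], [20.0, 30.0]])
big = bilinear_resize(small, 4, 4)
print(big[0, 0], big[-1, -1])                  # corner values are preserved → 0.0 30.0
```

Interpolation only smooths; it cannot recover lost lesion detail, which is why learned super-resolution (SISR, GAN-based) is preferred when fine symptoms matter.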

5.5 | Image segmentation

Finding and segmenting Regions of Interest (ROIs) can be challenging for a number of reasons: a leaf may overlap with another leaf, with
another plant part, or both. It might also be slanted or covered with dew or dust. In photographs with complicated backdrops, it might be difficult
and complicated to segment the ROIs where symptoms appear. One solution to this problem is to identify disorders independently of leaf species
instead of simultaneously identifying plant species and diseases, and another solution is to use semantic segmentation to extract infected regions
from a complex background (Mzoughi & Yahiaoui, 2023).
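A minimal example of threshold-based ROI extraction is Otsu's method, which selects the grey level that maximizes between-class variance. The sketch below assumes an 8-bit single-channel image with lesions darker than the background; it stands in for library calls such as OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag.

```python
import numpy as np

def otsu_threshold(gray):
    """Grey level maximizing between-class variance of an 8-bit image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0 up to each level
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.where(np.isfinite(sigma_b), sigma_b, 0.0)
    return int(np.argmax(sigma_b))

# Synthetic "leaf": dark lesion pixels (40) on a bright background (200).
img = np.full((20, 20), 200, dtype=np.uint8)
img[5:10, 5:10] = 40
t = otsu_threshold(img)
mask = img <= t                             # lesion ROI
print(t, int(mask.sum()))                   # threshold isolates the 25 lesion pixels
```

Global thresholding fails on the complicated backdrops mentioned above, which is what motivates semantic segmentation approaches.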

5.6 | Feature extraction and selection

Plant species may be distinguished by their leaves, but some plant species share identical leaf forms. Furthermore, symptoms may not always be
present in areas that are straightforward to access, such as on plant leaves; in actuality, they may sometimes be hidden by other leaves or obstruc-
tions, or diseases may manifest as symptoms on stems, fruits, or even flowers. Unfortunately, researchers need to give the latter issue more of
their attention. Classification techniques are sensitive to changes in picture quality, orientation, size, contrast, and noise. Additionally, image
processing and feature extraction have a direct impact on classification performance. To efficiently select relevant features from the extracted
features, we can use various feature selection techniques like wrapper methods, filter methods, or embedded methods. Also, hybrid methods can
be utilized for optimal feature selection. In Reference Pradhan (2023), for feature extraction, GLCM techniques are used, and to select optimal
features, the chimp optimization algorithm (ChOA) is chosen.
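As a toy illustration of the filter family of selection methods, the sketch below ranks features by a simple class-separation score and keeps the top k. The feature matrix is synthetic and purely illustrative; real pipelines would use, for example, scikit-learn's `SelectKBest`, a wrapper method, or the ChOA-based selection cited above.

```python
import numpy as np

def filter_select(X, y, k=2):
    """Rank features by |mean_1 - mean_0| / (std_0 + std_1) between two
    classes and keep the indices of the top k (a filter method)."""
    X0, X1 = X[y == 0], X[y == 1]
    score = np.abs(X1.mean(0) - X0.mean(0)) / (X0.std(0) + X1.std(0) + 1e-12)
    return np.argsort(score)[::-1][:k]

rng = np.random.default_rng(0)
n = 100
y = np.repeat([0, 1], n // 2)
informative = y + rng.normal(0, 0.1, n)         # shifts with the class label
noise1, noise2 = rng.normal(0, 1, (2, n))       # carry no class information
X = np.column_stack([noise1, informative, noise2])
print(filter_select(X, y, k=1))                  # → [1] (the informative column)
```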

5.7 | Disease classification

Some problems encountered in disease classification are:

5.7.1 | Variations in disease symptoms

A disease's symptoms can vary in form, colour, and size depending on its stage, making diagnosis difficult. Multiple diseases can occur simulta-
neously, making it hard to distinguish between groups of symptoms and individual symptoms. Various symptoms of diseases caused by viruses,
bacteria, and nutrient deficiencies can coexist in a single image. To train a model to distinguish between different symptoms, we must use a
dataset containing images from the various scenarios mentioned above.

5.7.2 | The similarity of symptoms across many illness types

Visually similar symptoms can result from infections, phytotoxicity, ailments, and nutritional imbalance. Finding the cause of a symptom can be difficult,
especially if only the visible spectrum is used. Most studies have focused on diseases with different symptoms because of this. Even so, symptom
identification is difficult. To detect invisible features, hyperspectral imaging (HSI) and multispectral imaging techniques can be utilized instead of RGB
images. Plant spectral reflectance variations can be detected subtly with HSI. Unlike visual assessment, which only uses visible wavelengths, HSI can
capture spectral and spatial data beyond human vision, improving disease detection (Bakkouri & Afdel, 2020). Due to similar symptoms across plant
diseases, machine learning approaches use multiple methods to improve disease discrimination and classification. To extract discriminative features
from plant images and allow models to detect minor disease variations, feature engineering is necessary. The discriminatory power of classifiers is
increased by methods like deep feature extraction and transfer learning, which use pre-trained models to identify pertinent patterns (Saad &
Salman, 2023). Additionally, numerous classifiers are combined in ensemble methods like Random Forests and Gradient Boosting Machines to
increase robustness and lessen the burden of misclassification caused by overlapping symptoms (Mahajan et al., 2023).
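In its simplest form, ensembling is a majority vote over independent classifiers' label predictions. The sketch below combines three hypothetical classifiers' outputs; full pipelines would instead train models such as scikit-learn's `RandomForestClassifier` or combine heterogeneous models with `VotingClassifier`.

```python
import numpy as np

def majority_vote(*prediction_sets):
    """Combine label predictions from several classifiers by majority vote."""
    stacked = np.stack(prediction_sets)                 # shape: (n_models, n_samples)
    vote = lambda col: np.bincount(col).argmax()
    return np.apply_along_axis(vote, 0, stacked)

# Hypothetical per-sample labels from three classifiers (0 = healthy, 1 = rust, 2 = blight).
clf_a = np.array([0, 1, 2, 1])
clf_b = np.array([0, 1, 1, 1])
clf_c = np.array([1, 1, 2, 0])
print(majority_vote(clf_a, clf_b, clf_c))               # → [0 1 2 1]
```

A vote is robust only when the members err independently, which is why ensembles of diverse feature extractors help with overlapping symptoms.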

6 | DISCUSSION

After an in-depth analysis of recent research on ML models for plant disease detection and classification, we found that CNN models are the dom-
inant approach. This study examined the different types of crops considered by the researchers and their frequency. We also examined different
techniques used by researchers at different stages of plant disease identification. Our analysis considered 65 studies that are
reviewed in Sections 2 and 3. As shown in Figure 7, most studies have considered the leaves of multiple crops, but the images in
these studies are taken from various publicly available datasets that contain leaf images captured under uniform conditions, and these datasets
contain only a few, similar classes of plant disease. Most studies considering a single plant are conducted on tomato plants, as shown in Figure 7,
because of their propensity to contract diseases. These studies primarily target leaf diseases caused by fungi, bacteria, and viruses. However, only
a few studies explore diseases stemming from abiotic factors (Hariri & Avşar, 2022).
It is also shown in Figure 8 that most studies considered open datasets or images obtained from the internet from various sources, like the Kaggle
website and the UCI Machine Learning Repository; the datasets used in most studies contain a limited number of plant species and their specific diseases.

FIGURE 7 Different crops utilized.

FIGURE 8 Different datasets considered.



To address the problem of a small dataset, researchers frequently use data augmentation techniques like shifting, scaling, rotation, flipping,
contrast enhancement, histogram equalization, and masking. However, due to the limited number of images available for specific plant diseases,
researchers use GANs to create synthetic images.
Despite the potential of GANs, a statistical analysis depicted in Figure 9 indicates that the majority of reviewed studies predominantly utilize
traditional image processing methods for data augmentation, with only a few studies incorporating GAN-based approaches. This tendency could
be caused by a number of factors, including the difficulty of using GANs in practice, computational resource requirements, or the preference for
more well-known augmentation methods in the literature.
Numerous studies, as depicted in Figure 10, favour the use of different deep learning models for automatic feature extraction, but other
image processing techniques are also used, such as DWT, Upgraded Local Binary Pattern, GLCM, and GLCM combined with other methods like BPF and
ORF. Figure 11 displays how frequently researchers used various segmentation techniques like U-Net, k-means clustering, and thresholding.

7 | FUTURE SCOPE

Researchers have proposed a number of hybrid models to increase the accuracy of plant disease classification; however, these models are not suitable
for use in IoT applications due to their large size and high floating-point operation (FLOP) counts. Shallow CNNs and lightweight machine
learning models address this issue. However, current ML models primarily focus on providing classification results for single disease leaf images,
which limits their applicability in real-world scenarios.
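Model size and FLOP budgets can be estimated analytically before training. For a standard 2-D convolution, parameters = (k·k·C_in + 1)·C_out and multiply-accumulate operations (MACs) ≈ k·k·C_in·C_out·H_out·W_out; the hypothetical helper below (assuming stride 1 and 'same' padding) makes the arithmetic concrete.

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Parameter and multiply-accumulate (MAC) counts for one conv layer,
    assuming stride 1 and 'same' padding so the output stays h_out x w_out."""
    params = (k * k * c_in + 1) * c_out          # weights plus one bias per filter
    macs = k * k * c_in * c_out * h_out * w_out  # MACs per output element x elements
    return params, macs

# First layer of a hypothetical 224 x 224 RGB leaf classifier: 3 -> 64 channels, 3 x 3 kernels.
params, macs = conv2d_cost(3, 64, 3, 224, 224)
print(params, macs)                              # 1792 parameters, ~0.087 GMACs
```

Summing such per-layer estimates over a network is how the GFlops figures in Table 5 are derived, and it explains why depthwise-separable designs such as MobileNetV2 suit IoT deployment.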

FIGURE 9 Different techniques utilized for data augmentation.

FIGURE 10 Approaches used for feature extraction.



FIGURE 11 Different image segmentation techniques utilized.

It is crucial to identify multiple diseases on a single leaf or across multiple leaves in a single frame. Additionally, accurate disease localization
in images is essential for managing crops. For model robustness and generalizability, unbalanced datasets in plant disease identification must be
addressed. Model-based image augmentation strategies may help balance datasets and improve plant disease identification in future research. For
better pattern recognition of disease, the multi-layer feature fusion technique can be utilized (Bakkouri & Afdel, 2020; Bakkouri & Afdel, 2022).
Even though there are efficient and accurate models in the literature, few studies have developed real-time web services and mobile apps for
disease diagnosis. So, in order for farmers to use advanced techniques for early disease detection, these models must be tested and deployed in
real time (Wani et al., 2022). Fog computing provides a decentralized and efficient approach to real-time disease detection on IoT devices, allowing
agriculture stakeholders to process and analyse data more quickly. As demonstrated in Reference Tsai and Hsu (2024), fog computing effectively
converts three-dimensional spatial image data into a two-dimensional plane, thereby addressing computational challenges. Additionally, Gao et al.
(2024) demonstrates edge computing's role in processing and integrating heterogeneous sensor data prior to model deployment. Deploying deep
learning models on IoT devices or drones has the potential to enable comprehensive agricultural monitoring by extracting vital information for
assessing overall crop health and forecasting necessary actions. Furthermore, there is a pressing need for developing a decision support system
that provides a preventive measure to address the specific disease identified in the plant leaf image. This system could provide farmers with
detailed treatment plans by utilizing expert systems to advise on chemical usage, thereby reducing disease spread and minimizing environmental
impact (Jabbedari Khiabani & Khanmohammadi, 2022).

8 | CONCLUSION

Plant disease significantly affects agriculture products' quantity and quality. A review of the existing research study indicates that various image
processing, ML, and DL techniques are used to address the issue of plant disease identification. To increase the model's performance, researchers
have explored techniques like transfer learning, autoencoders, attention mechanisms, GANs, and semantic segmentation in recent studies in the
field. However, small datasets and datasets that contain images taken in controlled conditions significantly affect the model's performance. Model
size and FLOPs count are bottlenecks in developing real-time application solutions. It is concluded that recent advances in deep learning, machine
learning, and image processing demonstrate promising performance. Nevertheless, there is still a need to create real-time, high-performance
plant disease identification systems.
This study surveys the most recent advances in the subject, discusses various challenges that remain to be addressed along with their
potential solutions, and reviews various ML and DL techniques used at different steps of a plant disease identification system. The
limitation of this study is that it does not consider studies that focus on developing decision support systems and expert systems for the diagnosis
and monitoring of plant diseases. This paper also does not explore various challenges that occur while deploying and after deployment of the
model on an IoT device for capturing images or for real-time monitoring of plants. In the future, we can study how the channel attention mecha-
nisms, transfer learning, and various optimization techniques can help improve the performance of the plant disease recognition models. Also, we
can study research that focuses on using hyperspectral and multispectral imaging techniques for plant disease identification and studies that focus
on identifying disease considering not only leaves but other parts of the plant, such as roots and stems.

FUNDING INFORMATION


The authors would like to acknowledge the Deanship of Graduate Studies and Scientific Research, Taif University for funding this work.

CONFLICT OF INTEREST STATEMENT


The authors declare no conflict of interest.

DATA AVAILABILITY STATEMENT


The data that support the findings of this study are available from the corresponding author upon reasonable request.

ORCID
Sangeeta Duhan https://orcid.org/0000-0003-2312-1211
Preeti Gulia https://orcid.org/0000-0001-8535-4016
Nasib Singh Gill https://orcid.org/0000-0002-8594-4320
Mohammad Yahya https://orcid.org/0000-0001-9686-3385
Sangeeta Yadav https://orcid.org/0000-0003-2625-8096
Mohamed M. Hassan https://orcid.org/0000-0003-1612-107X
Piyush Kumar Shukla https://orcid.org/0000-0002-3715-3882

REFERENCES
Abbas, A., Jain, S., Gour, M., & Vankudothu, S. (2021). Tomato plant disease detection using transfer learning with C-GAN synthetic images. Computers and
Electronics in Agriculture, 187, 106279. https://doi.org/10.1016/j.compag.2021.106279
Admin, T. (2020). Bacterial diseases in plants: Causes, signs and treatment. Justagric https://www.britannica.com/science/plant-disease/Symptoms-and-
signs (accessed Oct. 08, 2022)
Albattah, W., Javed, A., Nawaz, M., Masood, M., & Albahli, S. (2022). Artificial intelligence-based drone system for multiclass plant disease detection using
an improved efficient convolutional neural network. Frontiers in Plant Science, 13, 1–10. https://doi.org/10.3389/fpls.2022.808380
Ali-Gombe, A., & Elyan, E. (2019). MFC-GAN: Class-imbalanced dataset classification using multiple fake class generative adversarial network. Neuro-
computing, 361, 212–221. https://doi.org/10.1016/j.neucom.2019.06.043
Altalak, M., Uddin, M. A., Alajmi, A., & Rizg, A. (2022). A hybrid approach for the detection and classification of tomato leaf diseases. Applied Sciences,
12(16), 8182. https://doi.org/10.3390/app12168182
Alzubaidi, L., Zhang, J., Humaidi, A. J., Al‐Dujaili, A., Duan, Y., Al‐Shamma, O., Santamaría, J., Fadhel, M. A., Al‐Amidie, M., & Farhan, L. (2021). Review of
deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data, 8(1), 53. https://doi.org/10.1186/s40537-021-
00444-8
Archana, K. S., Srinivasan, S., Bharathi, S. P., Balamurugan, R., Prabakar, T. N., & Britto, A. S. F. (2022). A novel method to improve computational and classi-
fication performance of rice plant disease identification. Journal of Supercomputing, 78(6), 8925–8945. https://doi.org/10.1007/s11227-021-04245-x
Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein generative adversarial networks. In Proceedings of the 34th international conference on machine
learning (ICML), PMLR (pp. 214–223). Accessed: May 07, 2023. [Online]. Available: https://proceedings.mlr.press/v70/arjovsky17a.html
Badiger, M., Kumara, V., Shetty, S. C. N., & Poojary, S. (2022). Leaf and skin disease detection using image processing. Global Transitions Proceedings, 3(1),
272–278. https://doi.org/10.1016/j.gltp.2022.03.010
Bakkouri, I., & Afdel, K. (2020). Computer-aided diagnosis (CAD) system based on multi-layer feature fusion network for skin lesion recognition in
dermoscopy images. Multimedia Tools and Applications, 79(29–30), 20483–20518. https://doi.org/10.1007/s11042-019-07988-1
Bakkouri, I., & Afdel, K. (2022). MLCA2F: Multi-level context attentional feature fusion for COVID-19 lesion segmentation from CT scans. Signal, Image and
Video Processing, 17, 1181–1188. https://doi.org/10.1007/s11760-022-02325-w
Barbedo, J. G. A. (2018). Factors influencing the use of deep learning for plant disease recognition. Biosystems Engineering, 172, 84–91. https://doi.org/10.
1016/j.biosystemseng.2018.05.013
Bedi, P., & Gole, P. (2021). Plant disease detection using hybrid model based on convolutional autoencoder and convolutional neural network. Artificial
Intelligence in Agriculture, 5, 90–101. https://doi.org/10.1016/j.aiia.2021.05.002
Bhagwat, R., & Dandawate, Y. (2021). A review on advances in automated plant disease detection. International Journal of Engineering and Technology Inno-
vation, 11(4), 251–264. https://doi.org/10.46604/IJETI.2021.8244
Chen, J., Chen, J., Zhang, D., Nanehkaran, Y. A., & Sun, Y. (2021). A cognitive vision method for the detection of plant disease images. Machine Vision and
Applications, 32(1). https://doi.org/10.1007/s00138-020-01150-w
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., & Adam, H. (2018). Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmenta-
tion. arXiv. https://doi.org/10.48550/arXiv.1802.02611
Chollet, F. (2017). Xception: Deep learning with Depthwise separable convolutions. In 2017 IEEE conference on computer vision and pattern recognition
(CVPR), Honolulu, HI, USA (Vol. 2017, pp. 1800–1807). IEEE. https://doi.org/10.1109/CVPR.2017.195
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297. https://doi.org/10.1007/BF00994018
Deng, J., Zhou, H., Lv, X., Yang, L., Shang, J., Sun, Q., Zheng, X., Zhou, C., Zhao, B., Wu, J., & Ma, Z. (2022). Applying convolutional neural networks for
detecting wheat stripe rust transmission centers under complex field conditions using RGB-based high spatial resolution images from UAVs. Computers
and Electronics in Agriculture, 200, 107211. https://doi.org/10.1016/j.compag.2022.107211

Douzas, G., & Bacao, F. (2018). Effective data generation for imbalanced learning using conditional generative adversarial networks. Expert Systems with
Applications, 91, 464–471. https://doi.org/10.1016/j.eswa.2017.09.030
Elaraby, A., Hamdy, W., & Alruwaili, M. (2022). Optimization of deep learning model for plant disease detection using particle swarm optimizer. Computers,
Materials & Continua, 71(2), 4019–4031. https://doi.org/10.32604/cmc.2022.022161
Fanariotis, A., Orphanoudakis, T., Kotrotsios, K., Fotopoulos, V., Keramidas, G., & Karkazis, P. (2023). Power efficient machine learning models deployment
on edge IoT devices. Sensors, 23(3), 1595.
Gajjar, R., Gajjar, N., Thakor, V. J., Patel, N. P., & Ruparelia, S. (2021). Real-time detection and identification of plant leaf diseases using convolutional neural
networks on an embedded platform. Visual Computer, 38, 2923–2938. https://doi.org/10.1007/s00371-021-02164-9
Ganaie, M. A., Hu, M., Malik, A. K., Tanveer, M., & Suganthan, P. N. (2022). Ensemble deep learning: A review. Engineering Applications of Artificial Intelli-
gence, 115, 105151. https://doi.org/10.1016/j.engappai.2022.105151
Gao, R., Dong, Z., Wang, Y., Cui, Z., Ye, M., Dong, B., Lu, Y., Wang, X., Song, Y., & Yan, S. (2024). Intelligent cotton Pest and disease detection: Edge comput-
ing solutions with transformer technology and knowledge graphs. Agriculture, 14(2), 247. https://doi.org/10.3390/agriculture14020247
Gatys, L. A., Ecker, A. S., & Bethge, M. (2015). A Neural Algorithm of Artistic Style. arXiv. https://doi.org/10.48550/arXiv.1508.06576
Guo, Z., Chen, X., Li, M., Chi, Y., & Shi, D. (2024). Construction and validation of Peanut leaf spot disease prediction model based on Long time series data
and deep learning. Agronomy, 14(2), 294. https://doi.org/10.3390/agronomy14020294
Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., & Xu, C. (2020). Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF conference on
computer vision and pattern recognition (CVPR) (pp. 1580–1589). IEEE/CVF (Computer Vision Foundation).
Hari, P., & Singh, M. P. (2023). A lightweight convolutional neural network for disease detection of fruit leaves. Neural Computing and Applications, 35,
14855–14866. https://doi.org/10.1007/s00521-023-08496-y
Hariri, M., & Avşar, E. (2022). Tipburn disorder detection in strawberry leaves using convolutional neural networks and particle swarm optimization. Multi-
media Tools and Applications, 81(8), 11795–11822. https://doi.org/10.1007/s11042-022-12759-6
Hasan, R. I., Yusuf, S. M., Mohd Rahim, M. S., & Alzubaidi, L. (2022). Automated masks generation for coffee and apple leaf infected with single or multiple
diseases-based color analysis approaches. Informatics in Medicine Unlocked, 28, 100837. https://doi.org/10.1016/j.imu.2021.100837
Hasan, S., Jahan, S., & Islam, M. I. (2022). Disease detection of apple leaf with combination of color segmentation and modified DWT. Journal of King Saud
University, Computer and Information Sciences, 34, 7212–7224. https://doi.org/10.1016/j.jksuci.2022.07.004
Hassan, S. M., & Maji, A. K. (2022). Plant disease identification using a novel convolutional neural network. IEEE Access, 10, 5390–5401. https://doi.org/10.
1109/ACCESS.2022.3141371
He, K. (2017). Mask R-CNN, arXiv.org. arxiv.org/abs/1703.06870
He, K., Zhang, X., Ren, S., & Sun, J. (2017). Deep residual learning for image recognition. In 2016 IEEE conference on computer vision and pattern recognition
(CVPR), Las Vegas, NV, USA (Vol. 2016, pp. 770–778). IEEE. https://doi.org/10.1109/CVPR.2016.90
Hong, M., Choi, J., & Kim, G. (2021). StyleMix: Separating content and style for enhanced data augmentation. In 2021 IEEE/CVF conference on computer
vision and pattern recognition (CVPR) (pp. 14857–14865). IEEE. https://doi.org/10.1109/CVPR46437.2021.01462
Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In 2017 IEEE conference on computer vision
and pattern recognition (CVPR), Honolulu, HI, USA (Vol. 2017, pp. 2261–2269). IEEE. https://doi.org/10.1109/CVPR.2017.243
Huang, X., Chen, A., Zhou, G., Zhang, X., Wang, J., Peng, N., Yan, N., & Jiang, C. (2022). Tomato leaf disease detection system based on FC-SNDPN. Multi-
media Tools and Applications, 82, 2121–2144. https://doi.org/10.1007/s11042-021-11790-3
Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., & Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50× fewer parameters
and <0.5MB model size. arXiv. https://doi.org/10.48550/arXiv.1602.07360
Jabbedari Khiabani, S., Batani, A., & Khanmohammadi, E. (2022). A hybrid decision support system for heart failure diagnosis using neural networks and sta-
tistical process control. Healthcare Analytics, 2, 100110. https://doi.org/10.1016/j.health.2022.100110
Jackson, P. T., Atapour-Abarghouei, A., Bonner, S., Breckon, T., & Obara, B. (2019). Style Augmentation: Data Augmentation via Style Randomization. arXiv.
https://doi.org/10.48550/arXiv.1809.05375
Karthickmanoj, R., Padmapriya, J., & Sasilatha, T. (2021). A novel pixel replacement-based segmentation and double feature extraction techniques for effi-
cient classification of plant leaf diseases. Materials Today Proceedings, 47, 2048–2052. https://doi.org/10.1016/j.matpr.2021.04.416
Keceli, A. S., Kaya, A., Catal, C., & Tekinerdogan, B. (2022). Deep learning-based multi-task prediction system for plant disease and species detection. Eco-
logical Informatics, 69, 101679. https://doi.org/10.1016/j.ecoinf.2022.101679
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN'95—international conference on neural networks, Perth, WA, Australia
(Vol. 4, pp. 1942–1948). IEEE. https://doi.org/10.1109/ICNN.1995.488968
Khan, K., Khan, R. U., Albattah, W., & Qamar, A. M. (2022). End-to-end semantic leaf segmentation framework for plants disease classification. Complexity,
2022, 1–11. https://doi.org/10.1155/2022/1168700
Kirillov, A., Girshick, R., He, K., & Dollár, P. (2019). Panoptic feature pyramid networks. arXiv. arxiv.org/abs/1901.02446
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6),
84–90. https://doi.org/10.1145/3065386
Kumar, T., Mileo, A., Brennan, R., & Bendechache, M. (2023). Image data augmentation approaches: A comprehensive survey and future directions.
arXiv. Accessed: Jul. 15, 2023. [Online]. Available: http://arxiv.org/abs/2301.02830
Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., & Shi, W. (2017). Photo‐realistic single
image super‐resolution using a generative adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
4681–4690.
Li, J., Yin, Z., Li, D., & Zhao, Y. (2023). Negative contrast: A simple and efficient image augmentation method in crop disease classification. Computer Science
and Mathematics, 13(7), 1–14. https://doi.org/10.20944/preprints202306.0616.v1
Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature Pyramid Networks for Object Detection. arXiv. https://doi.org/10.
48550/arXiv.1612.03144
Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path aggregation network for instance segmentation. arXiv. arxiv.org/abs/1803.01534
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. Proceed-
ings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10012–10022.
DUHAN ET AL. 27 of 30

Long, J., Shelhamer, E., & Darrell, T. (2015). Fully Convolutional Networks for Semantic Segmentation. arXiv. https://doi.org/10.48550/arXiv.1411.4038
Lu, G., & Fei, B. (2014). Medical hyperspectral imaging: A review. Journal of Biomedical Optics, 19(1), 10901. https://doi.org/10.1117/1.JBO.19.1.010901
Mahajan, P., Uddin, S., Hajati, F., & Moni, M. A. (2023). Ensemble learning for disease prediction: A review. Healthcare, 11(12), 1808. https://doi.org/10.
3390/healthcare11121808
Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011). Stacked convolutional auto-encoders for hierarchical feature extraction. In T. Honkela, W. Duch,
M. Girolami, & S. Kaski (Eds.), Lecture Notes in Computer Science Artificial neural networks and machine learning—ICANN 2011 (Vol. 6791, pp. 52–59).
Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-21735-7_7
Mathew, A., Antony, A., Mahadeshwar, Y., Khan, T., & Kulkarni, A. (2022). Plant disease detection using GLCM feature extractor and voting classification
approach. Materials Today Proceedings, 58, 407–415. https://doi.org/10.1016/j.matpr.2022.02.350
Minaee, S., Boykov, Y. Y., Porikli, F., Plaza, A. J., Kehtarnavaz, N., & Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 44. https://doi.org/10.1109/TPAMI.2021.3059968
Mirza, M., & Osindero, S. (2014). Conditional Generative Adversarial Nets. arXiv. Accessed: May 07, 2023. [Online]. Available: http://arxiv.org/abs/1411.
1784
mlblevins. (2011). List of Common Plant Diseases. Gardenerdy. https://gardenerdy.com/list-of-common-plant-diseases/ (accessed May 07, 2023)
Models and Pre-Trained Weights — Torchvision 0.17 Documentation. https://pytorch.org/vision/stable/models.html. Accessed 11 Feb. 2024
Mohapatra, M., Parida, A. K., Mallick, P. K., Zymbler, M., & Kumar, S. (2022). Botanical leaf disease detection and classification using convolutional neural
network: A hybrid metaheuristic enabled approach. Computer, 11(5), 82. https://doi.org/10.3390/computers11050082
Mostafa, A. M., Kumar, S. A., Meraj, T., Rauf, H. T., Alnuaim, A. A., & Alkhayyal, M. A. (2021). Guava disease detection using deep convolutional neural net-
works: A case study of guava plants. Applied Sciences, 12(1), 239. https://doi.org/10.3390/app12010239
Mzoughi, O., & Yahiaoui, I. (2023). Deep learning-based segmentation for disease identification. Ecological Informatics, 75, 102000. https://doi.org/10.
1016/j.ecoinf.2023.102000
Nandhini, M., Kala, K. U., Thangadarshini, M., & Madhusudhana Verma, S. (2022). Deep learning model of sequential image classifier for crop disease detec-
tion in plantain tree cultivation. Computers and Electronics in Agriculture, 197, 106915. https://doi.org/10.1016/j.compag.2022.106915
Narmadha, R. P., Sengottaiyan, N., & Kavitha, R. J. (2022). Deep transfer learning based Rice Plant disease detection model. Intelligent Automation & Soft
Computing, 31(2), 1257–1271. https://doi.org/10.32604/iasc.2022.020679
Nayak, A., Chakraborty, S., & Swain, D. K. (2023). Application of smartphone-image processing and transfer learning for rice disease and nutrient deficiency
detection. Smart Agricultural Technology, 4, 100195. https://doi.org/10.1016/j.atech.2023.100195
Odena, A., Olah, C., & Shlens, J. (2017). Conditional image synthesis with auxiliary classifier gans. In International conference on machine learning. ACM.
Orchi, H., Sadik, M., & Khaldoun, M. (2022). On using artificial intelligence and the internet of things for crop disease detection: A contemporary sur-
vey. 29.
Pan, S. Q., Qiao, J. F., Wang, R., Yu, H. L., Wang, C., Taylor, K., & Pan, H. Y. (2022). Intelligent diagnosis of northern corn leaf blight with deep learning
model. Journal of Integrative Agriculture, 21(4), 1094–1105. https://doi.org/10.1016/S2095-3119(21)63707-3
Panchal, A. V., Patel, S. C., Bagyalakshmi, K., Kumar, P., Khan, I. R., & Soni, M. (2022). Image-based plant diseases detection using deep learning. Materials
Today Proceedings, 80, 3500–3506. https://doi.org/10.1016/j.matpr.2021.07.281
Pandian, J. A., Kumar, V. D., Geman, O., Hnatiuc, M., Arif, M., & Kanchanadevi, K. (2022). Plant disease detection using deep convolutional neural network.
Applied Sciences, 12(14), 6982. https://doi.org/10.3390/app12146982
Pandian, J. A., Kanchanadevi, K., Kumar, V. D., Jasińska, E., Goňo, R., Leonowicz, Z., & Jasiński, M. (2022). A five convolutional layer deep convolutional neu-
ral network for plant leaf disease detection. Electronics, 11(8), 1266. https://doi.org/10.3390/electronics11081266
Paymode, A. S., & Malode, V. B. (2022). Transfer learning for multi-crop leaf disease image classification using convolutional neural network VGG. Artificial
Intelligence in Agriculture, 6, 23–33. https://doi.org/10.1016/j.aiia.2021.12.002
Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal
of Science, 2(11), 559–572. https://doi.org/10.1080/14786440109462720
PlantVillage Dataset. https://www.kaggle.com/datasets/emmarex/plantdisease (accessed Jan. 12, 2023)
Pradhan, M. (2023). Cardiac image-based heart disease diagnosis using bio-inspired optimized technique for feature selection to enhance classification
accuracy. In Machine Learning and AI Techniques in Interactive Medical Image Analysis (pp. 151–166). IGI Global. https://doi.org/10.4018/978-1-6684-4671-
3.ch009
Qiao, S., Chen, L. C., & Yuille, A. (2021). DetectoRS: Detecting objects with recursive feature pyramid and switchable atrous convolution. In 2021 IEEE/CVF
conference on computer vision and pattern recognition (CVPR) (pp. 10208–10219). IEEE. https://doi.org/10.1109/CVPR46437.2021.01008
Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv.
https://doi.org/10.48550/arXiv.1511.06434
Rajeena, P. P., Aswathy, S. U., Moustafa, M. A., & Ali, M. A. S. (2023). Detecting plant disease in corn leaf using EfficientNet architecture—An analytical
approach. Electronics, 12(8), 1938. https://doi.org/10.3390/electronics12081938
Rajpoot, V., Dubey, R., Mannepalli, P. K., Kalyani, P., Maheshwari, S., Dixit, A., & Saxena, A. (2022). Mango plant disease detection system using hybrid
BBHE and CNN approach. Traitement du Signal, 39(3), 1071–1078. https://doi.org/10.18280/ts.390334
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real‐time object detection. In 2016 IEEE conference on computer vision
and pattern recognition (CVPR) (pp. 779–788). IEEE. https://doi.org/10.1109/CVPR.2016.91
Ren, S., He, K., Girshick, R., & Sun, J. (2016). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv. https://doi.org/
10.48550/arXiv.1506.01497
Resti, Y., Irsan, C., Neardiaty, A., Annabila, C., & Yani, I. (2023). Fuzzy discretization on the multinomial Naïve Bayes method for modeling multiclass classifi-
cation of Corn Plant diseases and pests. Mathematics, 11(8), 1761. https://doi.org/10.3390/math11081761
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv. https://doi.org/10.48550/arXiv.
1505.04597
Rother, C., Kolmogorov, V., & Blake, A. (2004). ‘GrabCut’: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 23(3),
309–314. https://doi.org/10.1145/1015706.1015720
Saad, M. H., & Salman, A. E. (2023). A plant disease classification using one-shot learning technique with field images. Multimedia Tools and Applications,
1–26. https://doi.org/10.1007/s11042-023-17830-4
Sanath Rao, U., Swathi, R., Sanjana, V., Arpitha, L., Chandrasekhar, K., & Naik, P. K. (2021). Deep learning precision farming: Grapes and mango leaf disease
detection by transfer learning. Global Transitions Proceedings, 2(2), 535–544. https://doi.org/10.1016/j.gltp.2021.08.002
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In 2018 IEEE/CVF confer-
ence on computer vision and pattern recognition, Salt Lake City, UT, USA (Vol. 2018, pp. 4510–4520). IEEE. https://doi.org/10.1109/CVPR.2018.00474
Sanghavi, K., Sanghavi, M., & Rajurkar, A. M. (2021). Early stage detection of Downey and powdery mildew grape disease using atmospheric parameters
through sensor nodes. Artificial Intelligence in Agriculture, 5, 223–232. https://doi.org/10.1016/j.aiia.2021.10.001
Sasikaladevi, N. (2022). Robust and fast plant pathology prognostics (P3) tool based on deep convolutional neural network. Multimedia Tools and Applica-
tions, 81(5), 7271–7283. https://doi.org/10.1007/s11042-022-11902-7
Shah, D., Trivedi, V., Sheth, V., Shah, A., & Chauhan, U. (2022). ResTS: Residual deep interpretable architecture for plant disease detection. Information
Processing in Agriculture, 9(2), 212–223. https://doi.org/10.1016/j.inpa.2021.06.001
Sharma, A., Rajesh, B., & Javed, M. (2022). Detection of plant leaf disease directly in the JPEG compressed domain using transfer learning technique. In
Advanced machine intelligence and signal processing (pp. 407–418). Springer Nature Singapore.
Simonyan, K., & Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv. https://doi.org/10.48550/arXiv.1409.
1556
Sodjinou, S. G., Mohammadi, V., SandaMahama, A. T., & Gouton, P. (2022). A deep semantic segmentation-based algorithm to segment crops and weeds in
agronomic color images. Information Processing in Agriculture, 9(3), 355–364. https://doi.org/10.1016/j.inpa.2021.08.003
Sun, H., Xu, H., Liu, B., He, D., He, J., Zhang, H., & Geng, N. (2021). MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-
weight convolutional neural networks. Computers and Electronics in Agriculture, 189, 106379. https://doi.org/10.1016/j.compag.2021.106379
Sunil, C. K., Jaidhar, C. D., & Patil, N. (2022). Cardamom plant disease detection approach using EfficientNetV2. IEEE Access, 10, 789–804. https://doi.org/
10.1109/ACCESS.2021.3138920
Sutaji, D., & Yıldız, O. (2022). LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease. Ecological Informatics, 70, 101698.
https://doi.org/10.1016/j.ecoinf.2022.101698
Syed-Ab-Rahman, S. F., Hesamian, M. H., & Prasad, M. (2022). Citrus disease detection and classification using end-to-end anchor-based deep learning
model. Applied Intelligence, 52(1), 927–938. https://doi.org/10.1007/s10489-021-02452-w
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In 2016 IEEE conference on
computer vision and pattern recognition (CVPR), Las Vegas, NV, USA (Vol. 2016, pp. 2818–2826). IEEE. https://doi.org/10.1109/CVPR.2016.308
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2014). Going deeper with convolutions. In Pro-
ceedings of the IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1–9). IEEE.
Talasila, S., Rawal, K., & Sethi, G. (2023). Black gram disease classification using a novel deep convolutional neural network. Multimedia Tools and Applica-
tions, 82, 44309–44333. https://doi.org/10.1007/s11042-023-15220-4
Tan, M., & Le, Q. V. (2020). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv. https://doi.org/10.48550/arXiv.1905.11946
Tan, M., & Le, Q. V. (2021). EfficientNetV2: Smaller Models and Faster Training. arXiv:2104.00298, arXiv. http://arxiv.org/abs/2104.00298
Taubman, D. (2000). High performance scalable image compression with EBCOT. IEEE Transactions on Image Processing, 9(7), 1158–1170.
Keras Team. Keras Documentation: Keras Applications. https://keras.io/api/applications/. Accessed 11 Feb. 2024
Thakur, P. S., Khanna, P., Sheorey, T., & Ojha, A. (2022). Trends in vision-based machine learning techniques for plant disease identification: A systematic
review. Expert Systems with Applications, 208, 118117. https://doi.org/10.1016/j.eswa.2022.118117
Tiwari, V., Joshi, R. C., & Dutta, M. K. (2021). Dense convolutional neural networks based multiclass plant disease detection and classification using leaf
images. Ecological Informatics, 63, 101289. https://doi.org/10.1016/j.ecoinf.2021.101289
Tsai, Y.-H., & Hsu, T.-C. (2024). An effective deep neural network in edge computing enabled internet of things for plant diseases monitoring. Proceedings
of the IEEE/CVF Winter Conference on Applications of Computer Vision, 695–699.
Ulutaş, H., & Aslantaş, V. (2023). Design of Efficient Methods for the detection of tomato leaf disease utilizing proposed ensemble CNN model. Electronics,
12(4), 827. https://doi.org/10.3390/electronics12040827
Uryasheva, A., Kalashnikova, A., Shadrin, D., Evteeva, K., Moskovtsev, E., & Rodichenko, N. (2022). Computer vision-based platform for apple leaves seg-
mentation in field conditions to support digital phenotyping. Computers and Electronics in Agriculture, 201, 107269. https://doi.org/10.1016/j.compag.
2022.107269
Verma, S., Kumar, P., & Singh, J. P. (2023). A meta-learning framework for recommending CNN models for plant disease identification tasks. Computers and
Electronics in Agriculture, 207, 107708. https://doi.org/10.1016/j.compag.2023.107708
Vivekanand, B. A. (2022). Deep learning based tomato PLDD. International Journal of Engineering Trends and Technology, 70(7), 414–421. https://doi.org/
10.14445/22315381/IJETT-V70I7P243
Wang, C., & Ye, Z. (2005). Brightness preserving histogram equalization with maximum entropy: A variational perspective. IEEE Transactions on Consumer
Electronics, 51(4), 1326–1334. https://doi.org/10.1109/TCE.2005.1561863
Wang, H., Shang, S., Wang, D., He, X., Feng, K., & Zhu, H. (2022). Plant disease detection and classification method based on the optimized lightweight
YOLOv5 model. Agriculture, 12(7), 931. https://doi.org/10.3390/agriculture12070931
Wang, J., Yu, L., Yang, J., & Dong, H. (2021). Dba_ssd: A novel end-to-end object detection algorithm applied to plant disease detection. Information
(Switzerland), 12(11), 1–19. https://doi.org/10.3390/info12110474
Wang, X., Wang, X., Song, R., Zhao, X., & Zhao, K. (2023). MCT-net: Multi-hierarchical cross transformer for hyperspectral and multispectral image fusion.
Knowledge-Based Systems, 264, 110362. https://doi.org/10.1016/j.knosys.2023.110362
Wani, J. A., Sharma, S., Muzamil, M., Ahmed, S., Sharma, S., & Singh, S. (2022). Machine learning and deep learning based computational techniques in auto-
matic agricultural diseases detection: Methodologies, applications, and challenges. Archives of Computational Methods in Engineering, 29(1), 641–677.
https://doi.org/10.1007/s11831-021-09588-5
Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. arXiv. https://doi.org/10.48550/arXiv.1807.06521
Wu, G., Ning, X., Hou, L., He, F., Zhang, H., & Shankar, A. (2023). Three-dimensional Softmax mechanism guided bidirectional GRU networks for hyper-
spectral remote sensing image classification. Signal Processing, 212, 109151. https://doi.org/10.1016/j.sigpro.2023.109151
Xu, M., Yoon, S., Fuentes, A., Yang, J., & Park, D. S. (2022). Style-consistent image translation: A novel data augmentation paradigm to improve plant dis-
ease recognition. Frontiers in Plant Science, 12, 773142. https://doi.org/10.3389/fpls.2021.773142
Xu, M., Yoon, S., Fuentes, A., & Park, D. S. (2023). A comprehensive survey of image augmentation techniques for deep learning. Pattern Recognition, 137,
109347. https://doi.org/10.1016/j.patcog.2023.109347
Yang, B., Wang, Z., Guo, J., Guo, L., Liang, Q., Zeng, Q., Zhao, R., Wang, J., & Li, C. (2023). Identifying plant disease and severity from leaves: A deep multi-
task learning framework using triple-branch Swin transformer and deep supervision. Computers and Electronics in Agriculture, 209, 107809. https://doi.
org/10.1016/j.compag.2023.107809
Yang, G. F., Yang, Y., He, Z. K., Zhang, X. Y., & He, Y. (2022). A rapid, low-cost deep learning system to classify strawberry disease based on cloud service.
Journal of Integrative Agriculture, 21(2), 460–473. https://doi.org/10.1016/S2095-3119(21)63604-3
Yang, W., Zhang, X., Tian, Y., Wang, W., Xue, J.-H., & Liao, Q. (2019). Deep learning for single image super-resolution: A brief review. IEEE Transactions on
Multimedia, 21(12), 3106–3121.
Yang, Y., Liu, Z., Huang, M., Zhu, Q., & Zhao, X. (2023). Automatic detection of multi-type defects on potatoes using multispectral imaging combined with a
deep learning model. Journal of Food Engineering, 336, 111213. https://doi.org/10.1016/j.jfoodeng.2022.111213
Yogeswararao, G., Naresh, V., Malmathanraj, R., & Palanisamy, P. (2022). An efficient densely connected convolutional neural network for identification of
plant diseases. Multimedia Tools and Applications, 81, 32791–32816. https://doi.org/10.1007/s11042-022-13053-1
You, J., Jiang, K., & Lee, J. (2022). Deep metric learning-based strawberry disease detection with unknowns. Frontiers in Plant Science, 13, 1–12. https://doi.
org/10.3389/fpls.2022.891785
Yuan, Y., Chen, L., Wu, H., & Li, L. (2022). Advanced agricultural disease image recognition technologies: A review. Information Processing in Agriculture, 9(1),
48–59. https://doi.org/10.1016/j.inpa.2021.01.003
Yulita, I. N., Amri, N. A., & Hidayat, A. (2023). Mobile application for tomato plant leaf disease detection using a dense convolutional network architecture.
Computation, 11(2), 20. https://doi.org/10.3390/computation11020020
Zhang, P., Wang, H., Ji, H., Li, Y., Zhang, X., & Wang, Y. (2023). Hyperspectral imaging-based early damage degree representation of apple: A method of cor-
relation coefficient. Postharvest Biology and Technology, 199, 112309. https://doi.org/10.1016/j.postharvbio.2023.112309
Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Pyramid Scene Parsing Network. arXiv. https://doi.org/10.48550/arXiv.1612.01105
Zhao, Y., Chen, Z., Gao, X., Song, W., Xiong, Q., Hu, J., & Zhang, Z. (2022). Plant disease detection using generated leaves based on DoubleGAN. IEEE/ACM
Transactions on Computational Biology and Bioinformatics, 19(3), 1817–1826. https://doi.org/10.1109/TCBB.2021.3056683
Zhao, Y., Sun, C., Xu, X., & Chen, J. (2022). RIC-net: A plant disease classification model based on the fusion of inception and residual structure and embed-
ded attention mechanism. Computers and Electronics in Agriculture, 193, 106644. https://doi.org/10.1016/j.compag.2021.106644

AUTHOR BIOGRAPHIES

Sangeeta Duhan is currently pursuing a Ph.D. in Computer Science at the Department of Computer Science & Applications, M.D. University,
Rohtak, India. She completed her M.Sc. in Computer Science from Rajiv Gandhi Government College for Women, affiliated with Chaudhary
Bansi Lal University, Bhiwani, Haryana, India. Her main research areas include Deep Learning, Computer Vision, and IoT.

Preeti Gulia received a Ph.D. degree in computer science in 2013. She is currently working as an Associate Professor at the Department of
Computer Science & Applications, M.D. University, Rohtak, India, and has been serving the Department since 2009. She has published more than
95 research papers indexed in SCI, SCIE, and Scopus and has presented papers at national and international conferences. She has guided four
scholars and is currently guiding five more. Her areas of research include Data Mining, Big Data, Machine Learning, Deep Learning, IoT, and
Software Engineering. She is an active professional member of IAENG, CSI, and ACM. She also serves as an Editorial Board Member and an active
reviewer for international and national journals.

Nasib Singh Gill completed post-doctoral research in Computer Science at Brunel University, West London, during 2001–2002 and received his
Ph.D. in Computer Science in 1996. He is a recipient of the Commonwealth Fellowship Award of the British Government for the year 2001. He
has also earned an MBA degree. He is currently Head of the Department of Computer Science & Applications, M.D. University, Rohtak, India.
He also serves as Director of the Directorate of Distance Education as well as Director of the Digital Learning Centre, M.D. University, Rohtak,
Haryana. He is an active professional member of IETE, IAENG, and CSI. He has published more than 304 research papers indexed in SCI, SCIE,
and Scopus and has authored 5 popular books. He has guided 12 Ph.D. scholars so far and is guiding about 5 more. His research interests
primarily include IoT, Machine & Deep Learning, Information and Network Security, Data Mining & Data Warehousing, NLP, and Measurement
of Component-Based Systems.

Mohammad Yahya is a Senior Software Engineer and Deep Learning Architect currently working with Industry. He holds a Ph.D. in Computer
Science from Oakland University, with his dissertation focusing on programming languages for clone detection systems. He also obtained an
MS in Computer Science with a minor in Machine Learning from Harbin Institute of Technology.

Sangeeta Yadav is currently working at Ch. Ranbir Singh State Institute of Engineering and Technology (SIET), Jhajjar, Haryana, India. Earlier,
she obtained her Ph.D. in computer science at the Department of Computer Science & Applications, M.D. University, Rohtak, India. Previously,
she also worked as an Assistant Professor at the Central University of Haryana, Mahendergarh, India. She completed her M.Tech.
and B.Tech. from the Department of Computer Science & Engineering, DCRUST, Murthal, Sonepat, India. Her areas of research include
Machine Learning, Artificial Intelligence, and Deep Learning. She has authored many research papers in internationally indexed journals.

Mohamed M. Hassan is currently a Professor at the Department of Genetics, Menoufiya University, Egypt, and works as an associate
professor at the Biology Department, Faculty of Science, Taif University. He completed his degree in Molecular Biology in 1999 at Tanta
University, Egypt. He earned his Master's degree in 2004 at the University of Menoufiya, Egypt, and obtained his Ph.D. at the University of
Menoufiya, Egypt. His areas of interest include antibiotic resistance genes, genomics, and bioinformatics. He is a member of many scientific
committees and a reviewer for many scientific journals.

Dr. Hassan Alsberi is currently a Professor at the Department of Genetics, Helwan University, Egypt, and works as an associate professor
at the Biology Department, Faculty of Science, Taif University. He completed his degree in histology and cytology in 1999 at Helwan
University, Egypt. He earned his Master's degree in 2009 at the University of Helwan, Egypt, and obtained his Ph.D. at the University of
Menoufiya, Egypt. His areas of interest include histology, cytology, and bioinformatics. He is a member of many scientific committees and a
reviewer for many scientific journals.

Dr. Piyush Kumar Shukla (World's Top 2% Scientists Rankings 2023 by Stanford University and Elsevier; PDF, Ph.D., SMIEEE, LMISTE) is an
Associate Professor in the Computer Science and Engineering Department, University Institute of Technology, Rajiv Gandhi Proudyogiki
Vishwavidyalaya (Technological University of Madhya Pradesh). He completed a postdoctoral fellowship (PDF) under the "Information Security
Education and Awareness Project Phase II," funded by the Ministry of Electronics and IT (MeitY). He is an editor and reviewer for various
prestigious SCI, SCIE, and WOS-indexed journals. He has over 200 publications, including many books, in highly indexed journals and at
prestigious conferences.

How to cite this article: Duhan, S., Gulia, P., Gill, N. S., Yahya, M., Yadav, S., Hassan, M. M., Alsberi, H., & Shukla, P. K. (2024). An analysis
to investigate plant disease identification based on machine learning techniques. Expert Systems, e13576. https://doi.org/10.1111/exsy.
13576