Deep Learning for Plant Disease Detection
Vol. 2, 2024
ISSN: 2704-1077 eISSN 2704-1069, DOI: 10.59543/ijmscs.v2i.8343
ABSTRACT: Agriculture, an essential bedrock of human survival, continually grapples with the menace of plant
diseases, culminating in substantial yield reductions. While conventional detection techniques remain widespread, they
often entail laborious efforts and are susceptible to inaccuracies, underscoring the pressing need for more efficient, scalable,
and immediate solutions. Our research explores the transformative capabilities of Deep Learning (DL) models, primarily
focusing on Convolutional Neural Networks (CNNs) and MobileNet architectures in the early and precise identification of
plant ailments. We augmented our exploration by incorporating eXplainable Artificial Intelligence (XAI) through
GradCAM, which elucidated the decision-making process of these models, providing a visual interpretation of disease
indicators in plant images. Through rigorous testing, our CNN model yielded an accuracy of 89%, with precision, recall, and F1-scores of 90%, 89%, and 89%, respectively. The MobileNet design performed more strongly, achieving 96% for accuracy as well as for its macro-averaged precision, recall, and F1-score. Such results underscore the transformative role of
DL in redefining plant disease detection methodologies, presenting a formidable counterpart to conventional techniques
and ushering in an era of heightened agricultural security.
Keywords: Agriculture, Plant diseases, Automated disease detection, Deep Learning (DL), Convolutional
Neural Networks (CNN), MobileNet.
1. INTRODUCTION
Agriculture, a cornerstone of human civilization, is critical in sustaining life across the globe,
providing nourishment to billions [1]. Its origins are as ancient as society, weaving a complex tapestry that
binds human survival to the land. The immense significance of agriculture is highlighted in regions like India,
where farming is not only an economic activity but a way of life for the vast majority of the population [2].
From small subsistence farms to large commercial agricultural establishments, the cultivation of crops is central
to human existence. Yet, this vital industry faces challenges as ancient as the practice: diseases caused by
bacteria, fungi, viruses, and other microorganisms [3, 4]. These invisible enemies constantly threaten the very
essence of agriculture, undermining food security and sustainability. Globally, plants are fundamental to food
provision. However, they are susceptible to diseases due to various environmental factors, leading to notable
production deficits. Though prevalent, traditional manual detection methods are labor-intensive and prone to
errors, making them less reliable for early disease identification and containment [5]. Addressing these diseases
promptly can significantly bolster yields, potentially enhancing productivity by over 60% [6]. In this context,
Convolutional Neural Networks (CNNs) have emerged as a formidable tool, especially adept at deciphering
intricate patterns in large datasets, such as images, offering a promising alternative for disease detection [7].
These diseases can wreak havoc on crops, leading to catastrophic effects on both local and global scales. The
Irish famine of the 1840s illustrates the historical consequences of plant disease: potato blight led to widespread
loss of life and mass emigration, reshaping the demographic landscape [8]. Such tragedies are stark
reminders of the potential devastation that unchecked plant diseases can unleash. Even today, the threat persists
with staggering financial implications: plant diseases inflict more than US$220 billion in losses worldwide [9]. Diseases like
cassava mosaic and cassava brown streak in sub-Saharan Africa have effects that ripple across economies,
affecting livelihoods, trade, and entire agricultural ecosystems [10]. Traditional methods of detection and
control, whether relying on human expertise or chemical interventions, face significant challenges [11, 12].
The process of identifying specific symptoms in various plant parts requires specialized knowledge, labor, and
time. Often, this can be slow and ineffective, especially in remote or resource-poor regions. The extensive use
of chemical control methods has further led to environmental pollution and the development of pathogen
resistance [13, 14]. This complexity is only magnified by the varied species and manifestations of diseases
[15], making a one-size-fits-all approach inadequate and impractical. The urgency for early detection cannot
be overstated, especially considering the necessity for timely intervention to mitigate the significant threats to
food availability, quality, and accessibility [16, 17]. As the global population continues to grow, so does the
demand for food. Traditional methods often fall short in scalability and efficiency, and the development of
novel, technology-driven approaches has become paramount. New paradigms are needed to bridge the gap
between detection and action to ensure that the world’s food supply remains resilient and robust. Deep Learning
(DL), a branch of artificial intelligence, has emerged as a promising solution [18,19]. Leveraging advanced
techniques like convolutional neural networks (CNNs), DL models can analyze high-resolution images to
detect even the most subtle signs of disease [20–23]. This technology has the potential to revolutionize disease
detection, transforming a process that once required extensive human intervention into one that can be
automated and scaled. Whether providing immediate support to regions lacking in agronomic infrastructure or
integrating autonomous vehicles in large-scale agriculture, the possibilities are vast and groundbreaking [24,
25]. However, challenges remain. Existing models often specialize in particular diseases or species, hindering
their broad application [26]. The call for robust, adaptable models has led to innovations like transfer learning,
which aims to make disease detection tools more efficient and universally applicable [27]. As the field
continues to evolve, ongoing research and collaboration among scientists, agronomists, and technologists are
essential. The convergence of traditional agricultural wisdom with cutting-edge technology opens the door to
an exciting future where the promise of sustainable, resilient, and abundant agriculture may finally be realized.
Artificial intelligence, with a specific emphasis on Deep Learning (DL), has ushered in revolutionary advances
in plant disease detection in recent times [28, 29]. Due to their ability to manage large and intricate images, DL
models are aptly suited for analyzing high-resolution visuals [30]. The advent of Graphical Processing Units
(GPUs) and innovative embedded processors has catalyzed the proliferation of DL applications, paving the
way for the practical implementation of sophisticated techniques like convolutional neural networks (CNNs)
[31]. Notably, these CNN models exhibit prowess in identifying nuanced symptoms, which conventional image
processing techniques often overlook [32–34]. The subsequent sections of this study are organized as follows:
Section 2 offers a deep dive into contemporary research related to our study. Section 3 outlines the foundational
knowledge of the classifiers utilized. Our proposed methodology is detailed in Section 4. Section 5 is earmarked
for a discourse on our research findings. We conclude in Section 6, where we encapsulate the essence of our
research and propose potential directions for subsequent investigations.
2. RELATED WORK
Chen et al.’s 2020 [35] paper delves into the profound effects of plant diseases (PDs) on the food chain.
The authors advocate for the use of deep learning (DL) in the automated detection and diagnosis of PDs,
emphasizing the transfer learning (TL) capabilities of pre-trained Convolutional Neural Networks (CNNs). The
method displayed notable results by leveraging VGGNet, initially trained on ImageNet, in conjunction with the
Inception module. It attained a validation accuracy of 91.83% on a public dataset and an average accuracy of
92.00% when predicting rice plant image classifications, even in scenarios with intricate backgrounds. Sunil et
al.’s 2022 [36] publication delves into the challenges of catering to an expanding population and the repercussions
of plant diseases (PDs) on crop yields. They put forth an economical approach for early PD detection by analyzing
plant leaf images using a combination of deep learning models, notably AlexNet, ResNet50, and VGG16. When
tested across various plant leaf image datasets, this method demonstrated outstanding accuracy, achieving 100%
for binary datasets and a close 99.53% for multi-class datasets. These findings validate the effectiveness of their
proposed method in accurately identifying PDs. In their 2020 study, Gayathri et al. [37] applied deep learning to
image analysis and the detection of tea leaf disease. Jiang et al. (2019) [38] introduced a deep learning approach for
real-time detection of the five primary types of apple leaf diseases using advanced convolutional neural networks
(CNNs). They employed the GoogLeNet Inception architecture combined with Rainbow concatenation.
Furthermore, they curated a new dataset for apple leaf diseases (ALDD) through data augmentation and image
annotation techniques. Their INAR-SSD model, trained on 26,377 images of diseased apple leaves, achieved a
detection accuracy of 78.80% mAP on the ALDD dataset, with an impressive detection speed of 23.13 FPS. This
suggests that the INAR-SSD model stands out as an efficient tool for early apple leaf disease detection, offering
improved accuracy and speed compared to previous methods. In their work, [39] unveiled a groundbreaking
method for identifying plant diseases through leaf image categorization using deep convolutional networks.
Utilizing the Caffe Deep Learning framework, their model demonstrated remarkable accuracy, with precision
levels varying between 91% and 98% for the identification of 13 distinct plant diseases. The paper elaborates
extensively on the employed methodology, shedding light on the intricate training steps essential for effectively
deploying the disease recognition system. In their research, [40] undertook a comparative analysis of transfer
learning situations using CNN architectures such as VGG-16 and VGG-19. They juxtaposed these with their
proposed CNN structures tailored for olive plant disease identification. The study utilized a dataset of 3,400 olive
leaf images, and by integrating a data augmentation technique, they expanded this dataset. Notably, the model’s
accuracy (ACC) experienced an uplift, surging from around 88% to close to 95% post-data augmentation.
3. BACKGROUND
3.1. Artificial Neural Networks
Artificial Neural Networks (ANNs) are computational models inspired by the human brain’s capacity
for analysis and information processing [42]. Like the human brain, an ANN is characterized by a network of
interconnected nodes or "neurons" forming a directed graph. These networks excel at recognizing intricate
patterns and models that might be too nuanced for either humans or traditional computational techniques. When
trained effectively, an ANN functions as a domain-specific expert, capable of predicting outcomes for new
data and addressing hypothetical scenarios, making it a tool apt for "what-if" analyses [43]. There are diverse
categories of neural networks, such as Recurrent Neural Networks (RNN), Multilayer Perceptrons (MLP), and
Convolutional Neural Networks (CNN), to name a few. While MLPs, the conventional fully connected networks, were initially
employed for image classification, they soon proved to be computationally demanding and parameter-heavy
with the increasing resolution of images. CNNs were introduced to counteract these limitations. Unlike
traditional networks, CNNs possess neurons structured in three dimensions - width, height, and depth, making
them tailored for image data [44]. Their inherent design, optimized for understanding the 3D spatial hierarchy
of images, has solidified CNNs as the go-to choice for image-related tasks. A standard CNN structure
predominantly comprises three key layers: the Convolutional layer, the Pooling layer, and the Fully Connected
layer.
3.2. Transfer Learning
Transfer learning (TL) is an ML strategy where knowledge from one task is leveraged to improve
performance on a related, subsequent task. This technique adapts a model pre-trained on a particular problem
to tackle a different but associated challenge. As highlighted by Torrey and Shavlik (2010) [45], this pre-trained
model can be rooted in deep learning or any other machine learning framework. The core principle of TL is
the portability of knowledge. Insights and feature patterns extracted from one context can provide a head start
when approaching a new problem, often reducing the computational cost and time to train. This efficiency
makes TL especially popular in fields like computer vision (CV) and natural language processing (NLP), where
large, adaptable pre-trained models are prevalent. Beyond these, TL also finds utility in diverse applications
such as recommendation engines and auditory signal processing.
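To make the recipe concrete, the following minimal TensorFlow/Keras sketch freezes an ImageNet-pretrained backbone and trains only a new head; the choice of VGG16, the head sizes, and the ten-class output are illustrative assumptions, not a configuration from this paper.

    import tensorflow as tf

    # Transfer learning in its simplest form: reuse a network pre-trained on
    # ImageNet as a frozen feature extractor and train only a new, small head.
    base = tf.keras.applications.VGG16(
        weights="imagenet",          # knowledge transferred from ImageNet
        include_top=False,           # drop the original 1000-class classifier
        pooling="avg",               # global average pooling of the feature maps
        input_shape=(224, 224, 3))
    base.trainable = False           # keep the transferred weights fixed

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(128, activation="relu"),    # new task-specific head
        tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 target classes
    ])

Because only the head is trained, such models typically converge in far fewer epochs and with far less data than training from scratch.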
3.3. MobileNet
MobileNet, a CNN architecture, emerged from Google in 2017, targeting efficient image processing
on mobile and embedded platforms [46]. By leveraging depthwise separable convolutions, the computational
expense gets significantly reduced. MobileNet has two primary versions: MobileNet V1, with 28 convolutional
layers, and V2, boasting 53 layers. Both have been pre-trained on expansive datasets like ImageNet. In transfer
learning (TL), these models can be tailored to new tasks by updating the final classification layer and training
on specific datasets for new objectives. MobileNet excels in numerous computer vision (CV) tasks, including
object detection, image segmentation, and facial recognition, demonstrating computational efficiency and
fewer parameter requirements than many deep neural networks (DNN) structures.
4. METHOD
In the methodology framework for our investigation, the initial step centers around dataset acquisition.
Here, pertinent data is meticulously gathered, laying the groundwork for our analytical pursuits. Subsequent to
this, data preprocessing techniques come into play, enhancing data quality and preparing it for intricate
analyses. As the prepared data stands poised for exploration, we segue into the modeling phase. At this juncture,
we deploy two distinct neural network designs: the well-established Convolutional Neural Network (CNN) and
the streamlined MobileNet architecture. Integrating eXplainable Artificial Intelligence (XAI) into our
approach, we also incorporate GradCAM, offering a layer of interpretability. This facilitates a visual
representation of how these models discern patterns and make decisions, further enriching our understanding.
The juxtaposition of these two architectures ensures a multifaceted perspective, shedding light on their
individual strengths, nuances, and performance metrics, especially in relation to our dataset. The outcomes
from these models undergo rigorous scrutiny, wherein their results are dissected, insights extracted, and their
overall efficacy in addressing the research objectives is meticulously assessed.
4.1. Dataset
This study employs a rich dataset consisting of nearly 87,000 RGB leaf images of crops. These images
span 38 distinct classes, encompassing both healthy and afflicted specimens. For effective model development
and assessment, the data is apportioned into training and validation subsets, with an 80/20 split, preserving the
inherent directory hierarchy. An exclusive directory with 33 images is also constituted solely for prediction
tasks. Notably, this dataset can be accessed on Kaggle, presenting an open resource for enthusiasts delving into
plant disease identification and categorization.
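A sketch of how such a split might be loaded with TensorFlow/Keras, assuming a hypothetical local copy of the dataset with one sub-folder per class; the directory name and seed are placeholders, not the authors' actual setup.

    import tensorflow as tf

    # Hypothetical local copy of the Kaggle dataset, one sub-folder per class.
    DATA_DIR = "plant_dataset"

    # An 80/20 train/validation split that preserves the directory hierarchy.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        DATA_DIR, validation_split=0.2, subset="training", seed=42,
        image_size=(224, 224), batch_size=64)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        DATA_DIR, validation_split=0.2, subset="validation", seed=42,
        image_size=(224, 224), batch_size=64)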
4.2. Pre-processing
In this study, the preprocessing stages are instrumental in setting the data up for phases like model
training and assessment. The data selection phase involves choosing a specific subset of labels, emphasizing
20 unique classes from each directory for deeper scrutiny. Such a choice streamlines the study’s focus while
ensuring a balanced representation. Following this, images are resized to a consistent dimension of 224 x 224
pixels, a step that’s indispensable for ensuring they fit the input constraints of many DL models. Finally, each
image undergoes normalization, dividing pixel values by 255. By doing so, pixel values are scaled between 0
and 1, a measure that standardizes the dataset, priming it for efficient deep-learning model processing.
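Continuing the loading sketch above, the normalization step can be expressed as follows; the Rescaling layer is one common way to perform the division by 255 described here.

    import tensorflow as tf

    # Scale pixel values from [0, 255] to [0, 1], as described above; resizing
    # to 224 x 224 was already handled by image_size at load time.
    normalize = tf.keras.layers.Rescaling(1.0 / 255)
    train_ds = train_ds.map(lambda images, labels: (normalize(images), labels))
    val_ds = val_ds.map(lambda images, labels: (normalize(images), labels))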
4.3. Modeling
For the initial classification approach in this investigation, a CNN is constructed utilizing the
TensorFlow platform. This CNN design commences with a 2D convolutional layer, which is subsequently
complemented by a max pooling layer to condense spatial dimensions. This sequence is reiterated, integrating
a dropout layer to counteract overfitting. Once flattened, the data transitions through dense layers. The terminal
layer adopts the softmax activation mechanism to facilitate multi-class categorization. The inherent adaptability
of the CNN design permits tailored modifications suited to the distinct classification objective and the dataset
at hand.
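A minimal sketch of such a CNN in TensorFlow/Keras, matching the layer sequence described above; filter counts, the dense width, and the dropout rate are illustrative assumptions, as the paper does not specify them.

    import tensorflow as tf

    # Conv/pool blocks, dropout, flatten, dense layers, and a softmax output,
    # as described in the text; numeric choices here are assumed.
    cnn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),            # condense spatial dimensions
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),             # counteract overfitting
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(20, activation="softmax"),  # 20-class output
    ])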
This research employs the MobileNetV2 framework, which builds upon the foundational MobileNet
architecture [15]. What distinguishes MobileNetV2 is its introduction of linear bottlenecks interspersed
between layers and the inclusion of shortcut connections spanning these bottlenecks. Like its predecessors,
MobileNetV2 benefits from pretraining on the ImageNet dataset, granting it robust feature extraction
capabilities. Adapting it for our specific classification task entails removing the upper layers originally geared
towards ImageNet classification. The resultant output from the base MobileNetV2 is then channeled through a
sequence of four Dense layers, all utilizing the ’relu’ activation function, with each layer having progressively
fewer nodes. To counteract potential overfitting, dropout layers are integrated after each Dense layer. The
concluding Dense layer, furnished with 20 nodes and a ’softmax’ activation function, is optimized for multi-
class categorization. To materialize this structure, we employed the Keras API from TensorFlow, configuring
it to accept input images of dimensions (224, 224, 3) and produce a probability distribution spanning the 20
classes.
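A sketch of this architecture with the Keras API; the widths of the four Dense layers, the dropout rate, the use of global average pooling as the bridge from the base, and the frozen base are assumptions where the text is silent.

    import tensorflow as tf

    # Pre-trained base with the ImageNet classification layers removed; global
    # average pooling (assumed) bridges the base's output to the dense head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False,
        weights="imagenet", pooling="avg")
    base.trainable = False

    # Four ReLU Dense layers with progressively fewer nodes (widths assumed),
    # each followed by dropout, then the 20-way softmax output.
    model = tf.keras.Sequential([base])
    for units in (512, 256, 128, 64):
        model.add(tf.keras.layers.Dense(units, activation="relu"))
        model.add(tf.keras.layers.Dropout(0.3))
    model.add(tf.keras.layers.Dense(20, activation="softmax"))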
In this study, we employ the sophisticated Grad-CAM technique, also known as Gradient-Weighted
Class Activation Mapping [47]. This method offers in-depth insights by providing explanations corresponding
to each input. Specifically, it yields an intuitive visualization that indicates the significance of individual pixels
when assessed by trained deep-learning algorithms. Grad-CAM stands out as a notable tool in the realm of
Explainable Artificial Intelligence (XAI), gaining widespread recognition and application in various computer
vision challenges. Given that our research primarily revolves around image-based data, we chose the
dependable Grad-CAM approach as our representative XAI technique. While there
are various other XAI methodologies available, for the purposes of this study, we solely focus on Grad-CAM
without delving into comparisons or discussions regarding the differences in their explanatory outcomes. It's
worth noting that the foundational concepts of Grad-CAM draw inspiration from the class activation map
(CAM) techniques [48]. An intrinsic characteristic of Grad-CAM is its ability to utilize gradient information
accumulated during the training phase. This allows for the identification of the relative importance of neurons
within the model's decision-making framework. In essence, neurons that exhibit larger absolute gradient values
are deemed more pivotal in influencing the model's conclusions.
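A compact sketch of the standard Grad-CAM computation [47] in TensorFlow, assuming a Keras model whose last convolutional layer can be retrieved by name; gradients of the class score are average-pooled into per-channel weights, which then weight the feature maps.

    import numpy as np
    import tensorflow as tf

    def grad_cam(model, image, conv_layer_name, class_index=None):
        """Grad-CAM heatmap for one image of shape (H, W, 3); values in [0, 1]."""
        grad_model = tf.keras.Model(
            model.inputs, [model.get_layer(conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            if class_index is None:
                class_index = int(tf.argmax(preds[0]))  # explain top prediction
            class_score = preds[:, class_index]
        grads = tape.gradient(class_score, conv_out)         # d(score)/d(feature map)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # per-channel importance
        cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of maps
        cam = tf.nn.relu(cam)                                # keep positive evidence
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalised heatmap

In practice, the returned heatmap is upsampled to the input resolution and overlaid on the original image, as in Figure 2.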
The deep learning models were calibrated using the parameters outlined below (Table 1):
• Classes: The dataset was segmented into 20 distinct categories, guiding the models in classifying
diverse data entries.
• Epochs: Each model was trained over a span of 20 epochs, meaning they iteratively learned from the
dataset 20 times.
• Loss Function: Sparse Categorical Cross Entropy was designated the loss function, a preferred
choice for multi-class classification tasks.
• Optimization Algorithm: ADAM was employed as the optimization strategy, renowned for its
effectiveness in managing stochastic objectives utilizing first-order gradient data.
• Batch Size: A batch configuration of 64 was set, processing 64 dataset samples during each iteration
of model parameter updates.
• Validation Data: A fifth of the dataset, precisely 20%, was earmarked for validation, enabling real-
time performance assessment throughout the training phase.
• Image Dimensions: All input images were resized to dimensions of 224 x 224 pixels to maintain
consistency.
By adhering to these specific configurations, the models were poised to achieve optimal performance and
learning efficiency; a training sketch reflecting these settings follows.
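The sketch below reuses the model and datasets from the earlier examples; it is an assumed reconstruction of the training call, not the authors' released code.

    # Train with the stated settings: ADAM, sparse categorical cross-entropy,
    # 20 epochs; batch size 64 and the 20% validation split were fixed when
    # the datasets were loaded in the earlier sketch.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(train_ds, validation_data=val_ds, epochs=20)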
5. RESULTS AND DISCUSSION
The evaluation results for the MobileNet classifier reveal the classification task's highest and lowest PRE, REC, and F-S values.
The class with the highest PRE, REC, and F-S is Grape_Esca_(Black_Measles), achieving perfect scores of
100% for all three metrics. This indicates the accurate identification of positive instances for this class.
Similarly, Raspberry_healthy also demonstrates excellent performance with PRE, REC, and F-S values of
100%. On the other hand, Tomato_Early_blight exhibits the lowest PRE, REC, and F-S values of 88%, 74%,
and 80%, respectively. These lower scores suggest some false positives and negatives in the classification
results for this class. The classification model demonstrates high performance with an ACC of 96%, and the
macro and weighted averages of PRE, REC, and F-S are 96%, indicating the model’s proficiency in accurately
predicting most classes.
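Per-class PRE, REC, and F-S values of this kind are commonly obtained with scikit-learn's classification_report; a minimal sketch, assuming the trained model and validation dataset from the earlier examples:

    import numpy as np
    from sklearn.metrics import classification_report

    # One pass over the validation set keeps predictions and labels aligned
    # even if the dataset reshuffles between iterations.
    y_true, y_pred = [], []
    for images, labels in val_ds:
        y_true.append(labels.numpy())
        y_pred.append(np.argmax(model.predict(images, verbose=0), axis=1))

    # Per-class precision (PRE), recall (REC), and F1-score (F-S), plus the
    # macro and weighted averages reported above.
    print(classification_report(np.concatenate(y_true), np.concatenate(y_pred)))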
GradCAM is designed to offer clarity in class discrimination by elucidating the areas of focus or
concern for each layer of the network during its processing and decision-making phases. This granular insight
is pivotal in understanding not only what the model sees but also the importance it attaches to various segments
of the input. Figure 2 vividly illustrates this concept by showcasing the heatmaps generated using the
GradCAM method. In these heatmaps, varying shades of color, ranging from red to blue, represent different
levels of importance or weights as determined by the model. More specifically, the darker shades, whether red
or blue, pinpoint the regions in the image that the network deems as carrying significant information. Such
regions are the ones that influence the model's decision most strongly. In the context of our study, which
revolves around disease detection in plants, these highlighted areas essentially suggest the potential regions of
the plant that manifest symptoms of a disease. Consequently, by juxtaposing the original images with their
respective GradCAM heatmaps, Figure 2 provides a visual guide, enabling researchers, agronomists, and
readers to appreciate the regions the models classify as indicative of disease presence. This offers a transparent
lens through which one can understand and trust the model's predictions, paving the way for more informed
decisions in real-world agricultural applications.
Figure 2. Comparative Analysis of Original Plant Images and Corresponding GradCAM Heatmaps
Highlighting Disease Indicators.
5.1. Comparison
Several distinctions emerge in comparing the performance of the CNN and MobileNet classifiers.
The CNN classifier demonstrates strong proficiency in certain classes, like
Grape_Esca_(Black_Measles), which achieves the highest Precision (PRE) value of 99% and an F-Score (F-
S) of 98%. This performance indicates a balanced blend of precision and recall for this class. However, there
are evident challenges with Tomato_Early_blight, which shows a significantly lower PRE of 62% and F-S of
71%. On the other hand, MobileNet boasts impressive results, especially for the Grape_Esca_(Black_Measles)
and Raspberry_healthy classes, both achieving perfect scores of 100% across PRE, Recall (REC), and F-S
metrics. Nonetheless, even MobileNet stumbles with Tomato_Early_blight, albeit with slightly better scores
than CNN, as indicated by its 88% PRE, 74% REC, and 80% F-S. While both models exhibit high levels of
accuracy, MobileNet appears to have a more consistent performance, as evidenced by its average accuracy of
96% across classes.
REFERENCES
[1] Sunil S Harakannanavar, Jayashri M Rudagi, Veena I Puranikmath, Ayesha Siddiqua, and R Pramodhini. Plant leaf disease detection
using computer vision and machine learning algorithms. Global Transitions Proceedings, 3(1):305–310, 2022.
[2] Pranesh Kulkarni, Atharva Karwande, Tejas Kolhe, Soham Kamble, Akshay Joshi, and Medha Wyawahare. Plant disease detection using
image processing and machine learning. arXiv preprint arXiv:2106.10698, 2021.
[3] Tanvir Mahtab Uddin, Arka Jyoti Chakraborty, Ameer Khusro, BM Redwan Matin Zidan, Saikat Mitra, Talha Bin Emran, Kuldeep
Dhama, Md Kamal Hossain Ripon, Márió Gajdács, Muhammad Umar Khayam Sahibzada, et al. Antibiotic resistance in microbes:
History, mechanisms, therapeutic strategies and future prospects. Journal of infection and public health, 14(12):1750–1766, 2021.
[4] Punitha Kartikeyan and Gyanesh Shrivastava. Review on emerging trends in detection of plant diseases using image processing with
machine learning. International Journal of Computer Application, 975:8887, 2021.
[5] Muhammad Shoaib, Babar Shah, Shaker Ei-Sappagh, Akhtar Ali, Asad Ullah, Fayadh Alenezi, Tsanko Gechev, Tariq Hussain, and
Farman Ali. An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science,
14:1158933, 2023.
[6] Channamallikarjuna Mattihalli, Edemialem Gedefaye, Fasil Endalamaw, and Adugna Necho. Plant leaf diseases detection and auto-
medicine. Internet of Things, 1:67–73, 2018.
[7] Konstantinos P Ferentinos. Deep learning models for plant disease detection and diagnosis.
Computers and electronics in agriculture, 145:311–318, 2018.
[8] S Arivazhagan and S Vineth Ligi. Mango leaf diseases identification using convolutional neural network. International Journal of Pure
and Applied Mathematics, 120(6):11067–11079, 2018.
[9] Food and Agriculture Organization. Plant health and food security. Retrieved from http://www.fao.org/3/a-i7829e.pdf, 2019.
[10] G Madhulatha and O Ramadevi. Recognition of plant diseases using convolutional neural network. In 2020 fourth international
conference on I-SMAC (IoT in social, mobile, analytics and cloud)(I-SMAC), pages 738–743. IEEE, 2020.
[11] Zahid Iqbal, Muhammad Attique Khan, Muhammad Sharif, Jamal Hussain Shah, Muhammad Habib ur Rehman, and Kashif Javed.
An automated detection and classification of citrus plant diseases using image processing techniques: A review. Computers and
electronics in agriculture, 153:12–32, 2018.
[12] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
[13] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
[14] Trung-Tin Tran, Jae-Won Choi, Thien-Tu Huynh Le, and Jong-Wook Kim. A comparative study of deep cnn in forecasting and
classifying the macronutrient deficiencies on development of tomato plant. Applied Sciences, 9(8):1601, 2019.
[15] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig
Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009
IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
[17] M Suresha, KN Shreekanth, and BV Thirumalesh. Recognition of diseases in paddy leaves using knn classifier. In 2017 2nd
International Conference for Convergence in Technology (I2CT), pages 663–666. IEEE, 2017.
[18] Bijaya Kumar Hatuwal, Aman Shakya, and Basanta Joshi. Plant leaf disease recognition using random forest, knn, svm and cnn.
Polibits, 62:13–19, 2020.
[19] Sinan Uğuz and Nese Uysal. Classification of olive leaf diseases using deep convolutional neural networks. Neural computing and
applications, 33(9):4133–4149, 2021.
[20] U Shruthi, V Nagaveni, and BK Raghavendra. A review on machine learning classification techniques for plant disease detection. In
2019 5th International conference on advanced computing & communication systems (ICACCS), pages 281–284. IEEE, 2019.
[21] Thipwimon Chompookham and Olarik Surinta. Ensemble methods with deep convolutional neural networks for plant leaf recognition.
ICIC Express Letters, 15(6):553–565, 2021.
[22] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint
arXiv:1409.1556, 2014.
[23] Youcef Djenouri, Asma Belhadi, Anis Yazidi, Gautam Srivastava, and Jerry Chun-Wei Lin. Artificial intelligence of medical things
for disease detection using ensemble deep learning and attention mechanism. Expert Systems, page e13093, 2022.
[24] Melike Sardogan, Adem Tuncer, and Yunus Ozen. Plant leaf disease detection and classification based on cnn with lvq algorithm. In
2018 3rd international conference on computer science and engineering (UBMK), pages 382–385. IEEE, 2018.
[25] Jayme GA Barbedo. Factors influencing the use of deep learning for plant disease recognition.
Biosystems engineering, 172:84–91, 2018.
[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages 770–778, 2016.
[27] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
[28] Muhammad Shoaib, Babar Shah, Shaker Ei-Sappagh, Akhtar Ali, Asad Ullah, Fayadh Alenezi, Tsanko Gechev, Tariq Hussain, and
Farman Ali. An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science,
14:1158933, 2023.
[29] Pranesh Kulkarni, Atharva Karwande, Tejas Kolhe, Soham Kamble, Akshay Joshi, and Medha Wyawahare. Plant disease detection
using image processing and machine learning. arXiv preprint arXiv:2106.10698, 2021.
[30] Amin Ullah, Khan Muhammad, Ijaz Ul Haq, and Sung Wook Baik. Action recognition using optimized deep autoencoder and cnn for
surveillance data streams of non-stationary environments. Future Generation Computer Systems, 96:386–397, 2019.
[31] Konstantinos P Ferentinos. Deep learning models for plant disease detection and diagnosis.
Computers and electronics in agriculture, 145:311–318, 2018.
[32] Rehan Ullah Khan, Khalil Khan, Waleed Albattah, and Ali Mustafa Qamar. Image-based detection of plant diseases: from classical
machine learning to deep learning journey. Wireless Communications and Mobile Computing, 2021:1–13, 2021.
[33] Jun Liu and Xuewei Wang. Plant diseases and pests detection based on deep learning: a review. Plant Methods, 17:1–18, 2021.
[34] R Karthik, M Hariharan, Sundar Anand, Priyanka Mathikshara, Annie Johnson, and R Menaka. Attention embedded residual cnn for
disease detection in tomato leaves. Applied Soft Computing, 86:105933, 2020.
[35] J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran. Using deep transfer learning for image-based plant disease identification.
Computers and Electronics in Agriculture, 173:105393, 2020.
[36] C. K. Sunil, C. D. Jaidhar, and N. Patil. Binary class and multi-class plant disease detection using ensemble deep learning-based
approach. International Journal of Sustainable Agricultural Management and Informatics, 8(4):385–407, 2022.
[37] S. Gayathri, D. J. W. Wise, P. B. Shamini, and N. Muthukumaran. Image analysis and detection of tea leaf disease using deep learning.
In 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), pages 398–403. IEEE, July 2020.
[38] P. Jiang, Y. Chen, B. Liu, D. He, and C. Liang. Real-time detection of apple leaf diseases using deep learning approach based on
improved convolutional neural networks. IEEE Access, 7:59069–59080, 2019.
[39] Srdjan Sladojevic, Marko Arsenovic, Andras Anderla, Dubravko Culibrk, and Darko Stefanovic. Deep neural networks based
recognition of plant diseases by leaf image classification. Computational intelligence and neuroscience, 2016, 2016.
[40] Sinan Uğuz and Nese Uysal. Classification of olive leaf diseases using deep convolutional neural networks. Neural computing and
applications, 33(9):4133–4149, 2021.
[41] Srdjan Sladojevic, Marko Arsenovic, Andras Anderla, Dubravko Culibrk, and Darko Stefanovic. Deep neural networks based
recognition of plant diseases by leaf image classification. Computational intelligence and neuroscience, 2016, 2016.
[42] A. Krogh. What are artificial neural networks? Nature Biotechnology, 26(2):195–197, 2008.
[43] M. R. Banham and A. K. Katsaggelos. Digital image restoration. IEEE Signal Processing Magazine, 14(2):24–41, 1997.
[44] S. Albawi, T. A. Mohammed, and S. Al-Zawi. Understanding of a convolutional neural network. In 2017 International Conference
on Engineering and Technology (ICET), pages 1–6. IEEE, August 2017.
[45] L. Torrey and J. Shavlik. Transfer learning, pages 242–264. IGI global, 2010.
[46] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig
Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
[47] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM:
Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.
[48] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.