Article
A Smartphone-Based Detection System for Tomato Leaf Disease
Using EfficientNetV2B2 and Its Explainability with Artificial
Intelligence (AI)
Anjan Debnath 1,†, Md. Mahedi Hasan 1, M. Raihan 1,*, Nadim Samrat 1,†, Mashael M. Alsulami 2,
Mehedi Masud 3 and Anupam Kumar Bairagi 4,*
1 Department of Computer Science and Engineering, North Western University, Khulna 9100, Bangladesh;
[email protected] (A.D.); [email protected] (M.M.H.);
[email protected] (N.S.)
2 Department of Information Technology, College of Computers and Information Technology, Taif University,
Taif 21944, Saudi Arabia; [email protected]
3 Department of Computer Science, College of Computers and Information Technology, Taif University,
Taif 21944, Saudi Arabia; [email protected]
4 Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
* Correspondence: [email protected] (M.R.); [email protected] (A.K.B.)
† These authors contributed equally to this work.
Abstract: The occurrence of tomato diseases has substantially reduced agricultural output and caused financial losses. The timely detection of diseases is crucial to effectively manage and mitigate the impact of outbreaks. Early disease detection can improve output, reduce chemical use, and boost a nation's economy. A complete system for plant disease detection using EfficientNetV2B2 and deep learning (DL) is presented in this paper. This research aims to develop a precise and effective automated system for identifying several diseases that impact tomato plants. This will be achieved by analyzing tomato leaf photos. A dataset of high-resolution photographs of healthy and diseased tomato leaves was created to achieve this goal. The EfficientNetV2B2 model is the foundation of the deep learning system and excels at picture categorization. Transfer learning (TF) trains the model on a tomato leaf disease dataset using EfficientNetV2B2's pre-existing weights and an additional 256-node dense layer. Tomato leaf diseases can be identified using the EfficientNetV2B2 model and a dense layer of 256 nodes. An ideal loss function and optimization algorithm train and tune the model. Next, the model is deployed in smartphone and web applications. The user can accurately diagnose tomato leaf diseases with these applications. Utilizing an automated system facilitates the rapid identification of diseases, assisting in making informed decisions on disease management and promoting sustainable tomato cultivation practices. The 5-fold cross-validation method achieved 99.02% average weighted training accuracy, 99.22% average weighted validation accuracy, and 98.96% average weighted test accuracy. The split method achieved 99.93% training accuracy and 100% validation accuracy. Using the DL approach, tomato leaf disease identification achieves nearly 100% accuracy on a test dataset.

Keywords: tomato leaf; deep learning; transfer learning; EfficientNetV2B2; explainable AI; LIME; Grad-CAM; ablation study; smartphone; web application
1. Introduction
Crops, the foundation of human nourishment, are essential to feeding the world's population [1]. There are many different types of crops, but none stands out as much as the tomato [2]. Tomatoes account for around 15% of all vegetables consumed globally, with an astounding yearly per capita consumption rate of 20 kg [3]. The output of fresh tomatoes exceeds 170 million tons annually, making it the most plentiful vegetable crop worldwide [4]. China, Egypt, Turkey, the United States, and India are the top tomato producers, demonstrating its widespread use and economic importance [5].
Tomato cultivation is widely practiced, although it is not without its difficulties [6].
The insidious nature of tomato leaf diseases makes them the biggest threat to the worldwide tomato
business [7]. The Food and Agriculture Organization of the United Nations reports that
these diseases substantially negatively impact tomato production globally, with an annual
loss rate of up to 10% [8]. The tendency of these diseases to start in the leaves before
ferociously spreading across the entire plant makes them even more worrisome [9].
Historically, diagnosing and treating these disorders has been a time-consuming and
expensive process that frequently relied on manual examinations by qualified profession-
als [10]. But with the advent of the digital era, agriculture has undergone a fundamental
change [11]. Automated AI image-based solutions have become an essential weapon in the
battle against illnesses affecting tomato leaves [12]. The advent of cutting-edge software
and technologies has ushered in an era when pictures are acknowledged and used as
a reliable method of disease diagnosis [13]. This innovation uses image processing, an
intelligent method that boosts recognition precision, lowers expenses, and improves picture
recognition efficiency [14].
Computer vision technology is one of the most effective and ecologically responsible
ways to identify plant diseases [15]. This technique provides a low-cost, non-destructive
method of spotting agricultural problems with no negative effects on the environment [16].
Particularly obvious symptoms of underlying plant diseases are the scars and abnormalities
that appear on leaves [17]. Healthy leaves have a consistent color and texture, but sick
leaves show differences and frequently have recognizable patterns of illness spots [18]. To
improve illness diagnosis, researchers have explored a variety of imaging methods and
tailored illumination conditions in labs [19].
Although they can be useful in some cases, conventional diagnostic techniques are
burdened by their high cost and proneness to human mistakes [20]. On the other hand,
the quick development of computer technology has given rise to creative solutions. In
the identification of agricultural diseases, computer vision, machine learning (ML), and
DL have found their place. These tools make it possible to distinguish RGB photos of crop
diseases based on their color, texture, or shape characteristics [21]. Even in complicated
natural contexts where several illnesses may display identical symptoms, these advanced
techniques significantly improve the accuracy of disease diagnosis [22]. The study has
significant ramifications for sustainable agriculture, global food security, and technology’s
crucial place in contemporary farming methods.
ML and DL models have an influence beyond tomatoes [23]. These tremendous instru-
ments have the potential to revolutionize agriculture in general [24]. We can equip farmers
with the tools they need to protect their crops, improve food security, and strengthen
the world’s agricultural economy by modifying and training these models to detect par-
ticular crops’ distinctive traits and illnesses [25]. This revolutionary use of technology
promises improved precision and a more sustainable and successful future for agriculture
globally [26]. As we progress further into the digital era, the interdependence of agricul-
ture and technology will become increasingly crucial to ensuring the success of our crops
and the abundance of our food [27]. Nowadays, artificial intelligence (AI)-based expert systems
(smartphone applications, web applications) are particularly useful for this kind of detection.
Implementing the detection system in a smartphone application makes it far more accessible
and easier for everyone to use, so we developed the concept and deployed the detection process in
smartphone and web applications. Anyone can take a picture of a tomato leaf, and the smartphone
or web application returns a promising result. The major contributions of this work are as follows:
• We optimize a very effective DL model, EfficientNetV2B2, for tomato leaf disease detection.
• The proposed model is evaluated using different metrics such as the loss curve, ROC
curve, confusion matrix, precision, recall, F1-score, and accuracy on the datasets [28,29].
The model is also justified by comparing it with state-of-the-art deep learning models
and customized models [30–36].
• A smart application system has been built to detect and classify tomato leaf diseases,
adapting to both smartphone and web-based interfaces. The application provides the
results in both English and Bangla.
• Explainable AI frameworks such as LIME and Grad-CAM are also used to analyze
the model.
The subsequent sections of this work are structured in the following manner. Section 2
covers the Literature Review, whereas Section 3 presents the Methodology of this investi-
gation. Section 4 provides a comprehensive account of the Experimental Outcomes and
Discussions, while Section 5 is a summary of our findings and conclusions.
2. Literature Review
Agarwal et al. [30] implemented a convolutional neural network (CNN) using the
dataset from [29]. This dataset is vast and includes many types of crops; however,
in that particular experiment, only tomato leaves were utilized. Ten classes were
employed, and a dataset including 10,000 photographs was utilized
for training purposes. In order to ensure the accuracy and reliability of the results, a
validation approach was employed, wherein 700 instances were allocated for each class,
while 50 instances were assigned for each kind for testing purposes. The dimensions of the
image were 256 × 256. The model was executed for a total of 1000 epochs. The researchers
attained a mean test accuracy of 91.20%. Their study developed a convolutional neural
network (CNN) model for detecting diseases in tomato crops. The architecture has three
convolution and max-pooling layers, each
characterized by a distinct number of filters. One notable feature of the proposed model is
its minimal storage requirement of approximately 1.5 MB, in contrast to the significantly
larger storage demand of around 100 MB for pre-trained models. Subsequent investigations
will endeavor to refine the model by using a more extensive dataset comprising a greater
quantity of photographs encompassing diverse cropping techniques.
Similarly, Ahmad et al. [37] used laboratory-based tomato leaf images collected from a
repository. They used only four classes of tomato leaves, splitting the dataset into training
(70%), validation (20%), and testing (10%). They also used different deep learning models.
Among them, using feature extraction, Inception V3 achieved the best accuracy of 93.40%,
and using parameter tuning, Inception V3 achieved the best accuracy of 99.60%. They
found that feature extraction produces less accurate outcomes than parameter adjustment.
The future logical progression of their work will be to improve these models’ performance
on actual field data.
Zhao et al. [31] used the Plant Village dataset [29] and selected only tomato leaves of
10 classes. The image size used was 224 × 224. The SE-ResNet50 model achieved the best average
accuracy of 96.81% in the experiment. A multi-scale feature-extraction model was developed
for the purpose of identifying tomato leaf diseases. Subsequent research endeavors will
encompass the timely automated detection of tomato and other agricultural ailments
through the utilization of these trained models. The researchers will also employ the
proposed approach to automate the identification of tomato leaf diseases in an authentic
agricultural environment, employing a greenhouse inspection robot that was created
independently by the team.
Zhou et al. [38] used tomato leaf disease datasets comprising 13,185 images with nine
classes. The image size used was 196 × 196 pixels. The dataset was split into training (60%),
validation (20%), and testing (20%). Deep CNN, ResNet50, DenseNet121, and RRDN were
used and achieved the best accuracy on the RRDN model at 95%. In this study, residual
dense networks were recommended for tomato leaf disease detection. They changed the
model architecture to create a classification model with higher accuracy than cutting-edge
methods. They hope to use these findings to improve agricultural intelligence.
Trivedi et al. [39] used tomato leaf disease datasets, where nine classes were categorized
as infected and one class was healthy. Images were normalized to a resolution of
256 × 256 pixels and then converted to greyscale. A convolutional neural network
was tried with different epochs and different learning rates. Finally, they achieved the best
accuracy at 98.58%, and the detection rate of that model was 98.49%. The study examined a
deep neural network model that accurately detected and classified tomato leaf diseases.
Because crop leaves can also lack nutrients, the model was expanded to incorporate other abiotic
disorders. The researchers wanted to maximize data collection and learn about various plant
diseases. New technologies will improve precision in the future.
Wu et al. [40] collected a
dataset from Plant Village [29] and used only tomato leaves for this experiment. They tried
five different classes. For GoogLeNet, AlexNet, and ResNet, they used an image size of
224 × 224 pixels, and for VGG, they used an image size of 299 × 299 pixels of RGB color
space. A total of 1500 images were used for this experiment. This experiment used AlexNet,
GoogLeNet, ResNet, and VGG16, and among them, GoogLeNet achieved the best accuracy
of 94.33%. They also tried DCGAN, BEGAN, and DCGAN + BEGAN, and among them,
DCGAN achieved the best 94.33% accuracy, but the accuracy on the test was 67%. This
experiment tried to find different accuracies using different learning rates. In this study,
the authors showed that DCGAN can produce data that closely resemble genuine photos,
hence increasing the size of the dataset for training big neural networks, enhancing the
generalizability of recognition models, and increasing the variety of data. To recognize
tomato leaf disease, they intend to develop a better data-augmentation approach in the
future. This will increase the recognition’s robustness and accuracy.
Chen et al. [19] collected a dataset of tomato leaves from the Hunan Vegetable Institute.
Images were taken in natural light, and the image size was 4460 × 3740. They collected
a total of 8616 images of five kinds of diseases. They tried it with the B-ARNet model
architecture, and using a 224 × 224 image size, the model achieved an accuracy of 88.43%.
Then, they compared it with ARNet, ResNet50, and AlexNet. Among all of them, their B-
ARNet achieved the best accuracy at 88.43%. This article suggests a strategy for identifying
tomato leaf disease based on the ABCK-BWTR and B-ARNet models. Although the B-ARNet
recognition model can improve the recognition of tomato diseases, particularly similar
diseases under complicated backgrounds, there are few studies on the identification of
multiple diseases on the same leaf. To increase the
model’s capacity for generalization, the image data of tomato leaf disease should be
progressively expanded in the future.
All of these studies show how different models perform on infected tomato leaves. In
certain studies, diseased tomato leaves were predicted accurately even when learning
rates and epochs were altered. A summary of the literature review is presented in Table 1.
3. Methodology
Figure 1 shows that we first collected the dataset and split it into training, testing,
and validation sets. Then, we deployed the model EfficientNetV2B2 with an additional
dense layer of 256 nodes and finally built an expert system using the model. We created both a
web application and a smartphone application that take images of tomato leaves as input and
produce results. In the background, we collect these images and retrain our model with them,
because they become part of our training dataset. This process continues as a loop to
keep our detection system updated.
Figure 1. The workflow for tomato leaf disease detection with the suggested user application.
3.1. Dataset
Every stage in the experiment requires datasets. Ten thousand images of tomato leaves
were obtained from Kaggle [28] for this dataset. We also stored the dataset at
https://zenodo.org/record/8311631 (accessed on 2 September 2023) for further use; the
smartphone application file (.apk) that we created is also stored there, in the smartphone
application folder. The diseases and their corresponding number of samples are shown in
Table 2. This is a ten-class dataset with one healthy-leaf class. Each class contains 1000
samples, and hence, this is a balanced dataset. Figure 2 displays a few samples from this
dataset that were picked randomly. The images were processed at a 256 × 256 resolution.
The dataset was split into 8000 images for training (800 images in every class), 1000 images
for the validation set (100 images from each class), and 1000 images for the test set
(100 images from every class).
Class Names Training Images Validation Images Test Images Total Images
Mosaic Virus 800 100 100 1000
Target Spot 800 100 100 1000
Bacterial Spot 800 100 100 1000
Yellow Leaf Curl Virus 800 100 100 1000
Late Blight 800 100 100 1000
Leaf Mold 800 100 100 1000
Early Blight 800 100 100 1000
Spider Mites Two-Spotted Spider Mite 800 100 100 1000
Septoria Leaf Spot 800 100 100 1000
Healthy 800 100 100 1000
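As a concrete illustration of this split, the following minimal sketch shows how the 256 × 256 leaf images could be loaded from class-named folders with TensorFlow; the directory layout and batch size are our assumptions for illustration, not the authors' exact code.

```python
# Minimal sketch (assumed directory layout): loading the tomato-leaf images
# from class-named folders at the 256x256 resolution stated in the paper.
import tensorflow as tf

IMG_SIZE = (256, 256)   # resolution used in the paper
BATCH_SIZE = 32         # assumed batch size

# Hypothetical layout: data/train, data/val, data/test, each with one
# sub-folder per class (10 classes in total).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="categorical")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="categorical")

print(train_ds.class_names)  # should list the ten class names from Table 2
```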
• Stem Block: This is the first node in the network, and it is in charge of analyzing the
input picture and extracting key information.
• Block1, Block2, Block3, . . . : These are the next blocks in the network, usually sorted
in ascending order, with Block1 being nearer the input and higher-numbered blocks
being further in the network.
• Head Block: This is the network's last building block, which handles the output layer
and the final predictions.
Figure 4. Model Architecture for EfficientNetV2B2 with Additional Dense Layer 256.
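To make this transfer-learning setup concrete, the sketch below builds the EfficientNetV2B2 backbone with ImageNet weights and the additional 256-node dense layer shown in Figure 4; freezing the backbone, the dropout rate, optimizer, and learning rate are our assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: EfficientNetV2B2 backbone + additional Dense(256) layer
# and a 10-class softmax head (training settings are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10

# Pre-trained backbone with ImageNet weights; the original top classifier is removed.
backbone = tf.keras.applications.EfficientNetV2B2(
    include_top=False, weights="imagenet",
    input_shape=(256, 256, 3), pooling="avg")
backbone.trainable = False  # assumption: freeze the backbone for transfer learning

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),        # the additional 256-node dense layer
    layers.Dropout(0.2),                         # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Assumed optimizer and loss; the paper only states that an ideal loss
# function and algorithm were used.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```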
Fold Numbers Training Accuracy Validation Accuracy Test Accuracy Required Time (Minutes)
1 99.03% 99.20% 99.20% 36.0
2 99.01% 99.40% 98.90% 40.18
3 99.14% 99.40% 99.50% 42.56
4 98.91% 99.40% 98.10% 37.10
5 99.02% 98.70% 99.10% 42.53
Average Weighted Accuracy 99.02% 99.22% 98.96% 39.67
Figure 5a shows the loss curve of the best fold, which is fold 3. Figure 5b shows the
accuracy curve for training and validation of the best fold. Among the five folds, this fold
achieved the best accuracy. Fold 3 achieved a training accuracy of 99.14%, a validation
accuracy of 99.40%, and a test accuracy of 99.50%.
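A minimal sketch of how such a five-fold protocol can be organized is given below; it assumes the image paths and labels are already collected in lists, and that build_model() and make_dataset() are hypothetical helpers (for example, the model constructor sketched earlier and a dataset builder).

```python
# Minimal 5-fold cross-validation sketch over image paths (helpers assumed).
import numpy as np
from sklearn.model_selection import StratifiedKFold

def run_cross_validation(paths, labels, build_model, make_dataset, epochs=10):
    """Train and evaluate one model per fold; return the per-fold test accuracy."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    fold_accuracies = []
    for fold, (train_idx, test_idx) in enumerate(skf.split(paths, labels), start=1):
        train_ds = make_dataset([paths[i] for i in train_idx], [labels[i] for i in train_idx])
        test_ds = make_dataset([paths[i] for i in test_idx], [labels[i] for i in test_idx])
        model = build_model()                     # fresh weights for every fold
        model.fit(train_ds, epochs=epochs, verbose=0)
        _, acc = model.evaluate(test_ds, verbose=0)
        fold_accuracies.append(acc)
        print(f"Fold {fold}: test accuracy = {acc:.4f}")
    print(f"Average accuracy = {np.mean(fold_accuracies):.4f}")
    return fold_accuracies
```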
Class Names Training Images Validation Images Test Images Total Images
Mosaic Virus 303 37 33 373
Target Spot 1120 157 127 1404
Bacterial Spot 1720 215 192 2127
Yellow Leaf Curl Virus 4758 310 289 5357
Late Blight 1580 157 172 1909
Leaf Mold 761 105 86 952
Early Blight 800 110 90 1000
Spider Mites Two-Spotted Spider Mite 1375 150 151 1676
Septoria Leaf Spot 1433 178 160 1771
Healthy 1287 160 144 1591
Now, we compare our model's performance with that of other authors' models on the
dataset [29]. Table 7 compares our approach with other methods on the dataset [29], and we
obtained the best accuracy. Table 7 shows that the proposed model achieved 2.31% better
accuracy than AlexNet [33]. Similarly, it achieved 8.6% better accuracy than CNN [30],
2.99% better accuracy than SE-ResNet50 [31], 0.10% better accuracy than ResNet34 [32],
1.90% better accuracy than SECNN [34], 7.20% better accuracy than CNN [35], and 4.09%
better accuracy than VGG16 [36].
Figure 11. Graph of (a) loss curve of plant village dataset and (b) accuracy graph of the plant
village dataset.
Table 7. Comparing our method to other methods on the plant village dataset.
Figure 13. Accuracy comparison among the epochs with no additional layer.
Figure 14. Graph of (a) loss curve with no additional layer, (b) accuracy graph with no addi-
tional layer.
Figure 16. Screenshot of the web application (a) uploading the image, (b) showing the results
and references.
Figure 17. Screenshot of the smartphone application (a) uploading the image, (b) showing the results,
(c) showing the rest of the results and references.
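The inference step behind these application screens can be sketched as follows; the saved-model file name, the class-name ordering, and the preprocessing are assumptions made purely for illustration.

```python
# Minimal single-image inference sketch (assumed model path and class order).
import numpy as np
import tensorflow as tf

CLASS_NAMES = [  # hypothetical ordering; must match the training class order
    "Bacterial Spot", "Early Blight", "Healthy", "Late Blight", "Leaf Mold",
    "Mosaic Virus", "Septoria Leaf Spot",
    "Spider Mites Two-Spotted Spider Mite", "Target Spot",
    "Yellow Leaf Curl Virus",
]

model = tf.keras.models.load_model("tomato_effnetv2b2.h5")  # hypothetical file name

def predict_leaf(image_path: str):
    """Load one leaf photo, resize to 256x256, and return (class name, confidence)."""
    img = tf.keras.utils.load_img(image_path, target_size=(256, 256))
    arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...]  # shape (1, 256, 256, 3)
    probs = model.predict(arr, verbose=0)[0]
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx])

print(predict_leaf("leaf.jpg"))  # e.g. ("Early Blight", 0.98)
```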
4.5. Discussion
This experiment was performed by using the EfficientNetV2B2 model. This experiment
used the five-fold cross-validation method and the split method. The cross-validation
method is very popular and also performed very well on the dataset, but
when the split method (80% training, 10% validation, 10% test) was used, it achieved better
accuracy than the five-fold cross-validation method. Using more epochs for the five-fold
cross-validation method might have improved its accuracy; however, as we used the free
version of Google Colab, GPU use was time-limited. The experiment with the split method
achieved 100% validation accuracy and 100% test accuracy. The overfitting
problem did not occur because the training set and test set are totally different. The training
and validation accuracy comparison for the experiment is shown in Figure 7, and the
confusion matrix is shown in Figure 8. For this experiment, we could not find any papers
on our main dataset [28], so we implemented the proposed model on the very popular
Plant Village dataset [29] and used only tomato leaves. We also achieved better accuracy,
and a comparison is shown in Table 7. Finally, using the model, we developed smartphone
and web applications to make the prediction easy.
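For the smartphone side, one common route is to convert the trained Keras model to TensorFlow Lite so it can run on-device; the sketch below shows this conversion under the assumption that a trained model file is available (the paper does not state which deployment mechanism was actually used).

```python
# Minimal sketch (our assumption): converting the trained Keras model to
# TensorFlow Lite for use inside an Android application.
import tensorflow as tf

model = tf.keras.models.load_model("tomato_effnetv2b2.h5")  # hypothetical path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
tflite_model = converter.convert()

with open("tomato_effnetv2b2.tflite", "wb") as f:
    f.write(tflite_model)
```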
Figure 19. LIME experiment on each class of tomato leaves to understand the main features with the
most influence on the model’s prediction.
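A minimal sketch of how LIME can be applied to one leaf image is given below; the lime package API is standard, while the prediction wrapper, the in-scope model and image objects, and the parameter values are assumptions for illustration.

```python
# Minimal LIME sketch (assumed parameters): explains one model prediction.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """LIME passes a batch of perturbed images; return class probabilities."""
    return model.predict(np.array(images), verbose=0)  # `model` assumed in scope

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"),        # one 256x256x3 leaf image, assumed in scope
    classifier_fn,
    top_labels=1, num_samples=1000)

top_class = explanation.top_labels[0]
lime_img, mask = explanation.get_image_and_mask(
    top_class, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(lime_img / 255.0, mask)  # superpixels driving the prediction
```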
Grad-CAM highlights the regions of a picture that are crucial to the model's final
categorization determination. The technique makes use of the gradients between the target
class score and the feature maps produced by the last convolutional layer of the model.
These gradients effectively serve as a spotlight, illuminating the areas of the picture that
the model considers most important in making its categorization determination.
Understanding the reasoning behind a certain prediction, improving the model's
interpretability, and fostering confidence in complicated neural networks are all aided by
this depiction. We used the top activation layer as our target layer for the Grad-CAM
visualization; this analysis's outcome is shown in Figure 20. This helps us to verify and
understand the assumptions that drive the predictions of our model. It can make the
model's decision-making process clear and understandable to both technical and
non-technical users by providing graphic explanations.
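The gradient computation described above can be sketched as follows; the layer name and model object are assumptions (in practice, the last convolutional or top activation layer of the backbone would be selected by name).

```python
# Minimal Grad-CAM sketch: gradients of the target class score with respect to
# the feature maps of a chosen (assumed) convolutional/activation layer.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name, class_index=None):
    """Return a heatmap (H, W) in [0, 1] highlighting influential regions."""
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    img_batch = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)

    with tf.GradientTape() as tape:
        feature_maps, predictions = grad_model(img_batch)
        if class_index is None:
            class_index = tf.argmax(predictions[0])
        class_score = predictions[:, class_index]

    grads = tape.gradient(class_score, feature_maps)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))          # global-average-pooled grads
    cam = tf.reduce_sum(weights * feature_maps[0], axis=-1)  # weighted feature-map sum
    cam = tf.nn.relu(cam)                                    # keep positive influence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)                  # normalize to [0, 1]
    return cam.numpy()
```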
Figure 20. Grad-CAM visual explanations of (a) bacterial spot, (b) early blight, (c) late blight, (d) leaf
mold, (e) septoria leaf spots, (f) spider mites and two-spotted spider mites, (g) target spot, (h) yellow
leaf curl virus, (i) mosaic virus, (j) tomato healthy.
5. Conclusions
This experiment has presented a technique for classifying and identifying tomato leaf
disease using the EfficientNetV2B2 model. It contrasts the suggested method with a number
of handcrafted shallow-structure approaches based on deep learning and machine learning.
Using the dataset [28], the suggested approach produces outcomes that are encouraging in
terms of accuracy. A weighted training accuracy of 99.02 percent, a weighted validation
accuracy of 99.22 percent, and a weighted test accuracy of 98.96 percent were all attained
using the five-fold cross-validation method. With a split-method training accuracy of
99.93%, a validation accuracy of 100%, and a test accuracy of 100%, we demonstrated
encouraging results in the experimental section. We developed a smartphone application
and a web application that take an image of a tomato leaf as input and provide the correct
result and solution for that disease. They support both the Bangla and English languages,
which will benefit farmers. There are speaking buttons that read the solutions and
results for the users. This experiment has some limitations. The experiment was performed
on only nine classes of diseases, and one class comprised healthy tomato leaves. The
number of diseases could increase in the future. Also, this experiment only used two
languages; this could be increased further. As we explained earlier, the user's test images are
stored on a server and used as additional training data, but this is currently a manual process for
us. Our future goal is to address these limitations and also apply this approach to other crops
and analyze their performance. The study demonstrates how deep learning algorithms may
promote the sustainable growth of tomato crops by revolutionizing disease-management
methods. Our web and smartphone tools may be used to make this identification process
simpler, more accurate, and more promising. With further research and integration into
agricultural systems, this technology can help farmers make educated decisions, allocate
resources efficiently, and reduce the impact of tomato leaf disorders on crop quality and yield.
Author Contributions: Conceptualization: M.M.H. and A.D.; Data management: A.D. and N.S.;
Formal evaluation: A.D., M.M.H., M.R. and N.S.; Investigation: A.D. and N.S.; Methodology: A.D.
and N.S.; Project management, A.D.; Resources: A.D., M.R. and M.M.H.; Software and Coding: A.D.
and M.R.; Supervision: M.M.H., M.R. and A.K.B.; Validation, M.R. and A.K.B.; Visualization: A.D.,
M.R., M.M.H. and A.K.B.; Writing original draft: A.D. and N.S.; Writing review and final editing:
M.R., M.M.H., M.M.A., M.M. and A.K.B. All authors have read and agreed to the published version
of the manuscript.
Funding: The researchers would like to acknowledge the deanship of Scientific Research, Taif
University, for funding this project.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The main dataset used for this experiment was collected from Kaggle
which is available at https://www.kaggle.com/datasets/kaustubhb999/tomatoleaf (accessed on 30
June 2023). This research also used the plant village dataset for comparison and collected only tomato
leaves. The dataset was collected from https://data.mendeley.com/datasets/tywbtsjrjv/1 (accessed
on 30 June 2023). The main dataset and smartphone application (.apk file) of this study are available
at https://zenodo.org/record/8311631 (accessed on 2 September 2023).
Acknowledgments: The researchers would like to acknowledge the Deanship of Scientific Research,
Taif University, for funding this project.
Conflicts of Interest: The authors declare that they have no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
References
1. Thangaraj, R.; Anandamurugan, S.; Pandiyan, P.; Kaliappan, V.K. Artificial intelligence in tomato leaf disease detection: A com-
prehensive review and discussion. J. Plant Dis. Prot. 2022, 129, 469–488. [CrossRef]
2. Vasavi, P.; Punitha, A.; Rao, T.V.N. Crop leaf disease detection and classification using machine learning and deep learning
algorithms by visual symptoms: A review. Int. J. Electr. Comput. Eng. 2022, 12, 2079. [CrossRef]
3. Li, L.; Zhang, S.; Wang, B. Plant disease detection and classification by deep learning—A review. IEEE Access 2021, 9, 56683–56698.
[CrossRef]
4. Basavaiah, J.; Arlene Anthony, A. Tomato leaf disease classification using multiple feature extraction techniques. Wirel. Pers.
Commun. 2020, 115, 633–651. [CrossRef]
5. Sarkar, C.; Gupta, D.; Gupta, U.; Hazarika, B.B. Leaf disease detection using machine learning and deep learning: Review and
challenges. Appl. Soft Comput. 2023, 145, 110534. [CrossRef]
6. Ramanjot; Mittal, U.; Wadhawan, A.; Singla, J.; Jhanjhi, N.; Ghoniem, R.M.; Ray, S.K.; Abdelmaboud, A. Plant Disease Detection
and Classification: A Systematic Literature Review. Sensors 2023, 23, 4769. [CrossRef] [PubMed]
7. Kumar, R.; Chug, A.; Singh, A.P.; Singh, D. A Systematic analysis of machine learning and deep learning based approaches for
plant leaf disease classification: A review. J. Sens. 2022, 2022, 3287561. [CrossRef]
8. Yamamoto, K.; Togami, T.; Yamaguchi, N. Super-resolution of plant disease images for the acceleration of image-based phenotyp-
ing and vigor diagnosis in agriculture. Sensors 2017, 17, 2557. [CrossRef]
9. Brahimi, M.; Boukhalfa, K.; Moussaoui, A. Deep learning for tomato diseases: Classification and symptoms visualization. Appl.
Artif. Intell. 2017, 31, 299–315. [CrossRef]
10. Kumar, Y.; Singh, R.; Moudgil, M.R.; Kamini. A Systematic Review of Different Categories of Plant Disease Detection Using Deep
Learning-Based Approaches. Arch. Comput. Methods Eng. 2023, 30, 4757–4779. [CrossRef]
11. Tian, X.; Meng, X.; Wu, Q.; Chen, Y.; Pan, J. Identification of tomato leaf diseases based on a deep neuro-fuzzy network. J. Inst.
Eng. (India) Ser. A 2022, 103, 695–706. [CrossRef]
12. Bhujel, A.; Kim, N.E.; Arulmozhi, E.; Basak, J.K.; Kim, H.T. A lightweight Attention-based convolutional neural networks for
tomato leaf disease classification. Agriculture 2022, 12, 228. [CrossRef]
13. Mkonyi, L.; Rubanga, D.; Richard, M.; Zekeya, N.; Sawahiko, S.; Maiseli, B.; Machuve, D. Early identification of Tuta absoluta in
tomato plants using deep learning. Sci. Afr. 2020, 10, e00590. [CrossRef]
14. John, S.; Rose, A.L. Machine learning techniques in plant disease detection and classification-a state of the art. INMATEH-Agric.
Eng. 2021, 65, 362–372.
15. Tugrul, B.; Elfatimi, E.; Eryigit, R. Convolutional neural networks in detection of plant leaf diseases: A review. Agriculture 2022,
12, 1192. [CrossRef]
16. Chopra, G.; Whig, P. Analysis of Tomato Leaf Disease Identification Techniques. J. Comput. Sci. Eng. (JCSE) 2021, 2, 98–103.
[CrossRef]
17. Hidayah, A.N.; Radzi, S.A.; Razak, N.A.; Saad, W.H.M.; Wong, Y.; Naja, A.A. Disease Detection of Solanaceous Crops Using Deep
Learning for Robot Vision. J. Robot. Control (JRC) 2022, 3, 790–799. [CrossRef]
18. Coulibaly, S.; Kamsu-Foguem, B.; Kamissoko, D.; Traore, D. Deep neural networks with transfer learning in millet crop images.
Comput. Ind. 2019, 108, 115–120. [CrossRef]
19. Chen, X.; Zhou, G.; Chen, A.; Yi, J.; Zhang, W.; Hu, Y. Identification of tomato leaf diseases based on combination of ABCK-BWTR
and B-ARNet. Comput. Electron. Agric. 2020, 178, 105730. [CrossRef]
20. Zhang, L.; Zhou, G.; Lu, C.; Chen, A.; Wang, Y.; Li, L.; Cai, W. MMDGAN: A fusion data augmentation method for tomato-leaf
disease identification. Appl. Soft Comput. 2022, 123, 108969. [CrossRef]
21. Zaki, S.Z.M.; Zulkifley, M.A.; Stofa, M.M.; Kamari, N.A.M.; Mohamed, N.A. Classification of tomato leaf diseases using MobileNet
v2. IAES Int. J. Artif. Intell. 2020, 9, 290. [CrossRef]
22. Shoaib, M.; Shah, B.; Ei-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An advanced deep learning
models-based plant disease detection: A review of recent research. Front. Plant Sci. 2023, 14, 1158933. [CrossRef] [PubMed]
23. Tan, M.; Le, Q. Efficientnetv2: Smaller models and faster training. In Proceedings of the International Conference on Machine
Learning, Virtual Event, 18–24 July 2021; pp. 10096–10106.
24. Parez, S.; Dilshad, N.; Alghamdi, N.S.; Alanazi, T.M.; Lee, J.W. Visual Intelligence in Precision Agriculture: Exploring Plant
Disease Detection via Efficient Vision Transformers. Sensors 2023, 23, 6949. [CrossRef] [PubMed]
25. Ghosh, P.; Mondal, A.K.; Chatterjee, S.; Masud, M.; Meshref, H.; Bairagi, A.K. Recognition of Sunflower Diseases Using Hybrid
Deep Learning and Its Explainability with AI. Mathematics 2023, 11, 2241. [CrossRef]
26. Khan, H.; Haq, I.U.; Munsif, M.; Mustaqeem.; Khan, S.U.; Lee, M.Y. Automated wheat diseases classification framework using
advanced machine learning technique. Agriculture 2022, 12, 1226. [CrossRef]
27. Javeed, D.; Gao, T.; Saeed, M.S.; Kumar, P. An Intrusion Detection System for Edge-Envisioned Smart Agriculture in Extreme
Environment. IEEE Internet Things J. 2023. [CrossRef]
28. Kaustubh, B. Tomato Leaf Disease Detection. 2020. Available online: https://www.kaggle.com/datasets/kaustubhb999/tomatol
eaf (accessed on 30 June 2023).
29. Sun, J.; Tan, W.; Mao, H.; Wu, X.; Chen, Y.; Wang, L. Recognition of multiple plant leaf diseases based on improved convolutional
neural network. Trans. Chin. Soc. Agric. Eng. 2017, 33, 209–215.
30. Agarwal, M.; Singh, A.; Arjaria, S.; Sinha, A.; Gupta, S. ToLeD: Tomato leaf disease detection using convolution neural network.
Procedia Comput. Sci. 2020, 167, 293–301. [CrossRef]
31. Zhao, S.; Peng, Y.; Liu, J.; Wu, S. Tomato leaf disease diagnosis based on improved convolution neural network by attention
module. Agriculture 2021, 11, 651. [CrossRef]
32. Tan, L.; Lu, J.; Jiang, H. Tomato leaf diseases classification based on leaf images: A comparison between classical machine learning
and deep learning methods. AgriEngineering 2021, 3, 542–558. [CrossRef]
33. Rangarajan, A.K.; Purushothaman, R.; Ramesh, A. Tomato crop disease classification using pre-trained deep learning algorithm.
Procedia Comput. Sci. 2018, 133, 1040–1047. [CrossRef]
34. Naik, B.N.; Malmathanraj, R.; Palanisamy, P. Detection and classification of chilli leaf disease using a squeeze-and-excitation-based
CNN model. Ecol. Inform. 2022, 69, 101663. [CrossRef]
35. Kurmi, Y.; Saxena, P.; Kirar, B.S.; Gangwar, S.; Chaurasia, V.; Goel, A. Deep CNN model for crops’ diseases detection using leaf
images. Multidimens. Syst. Signal Process. 2022, 33, 981–1000. [CrossRef]
36. Paymode, A.S.; Malode, V.B. Transfer learning for multi-crop leaf disease image classification using convolutional neural network
VGG. Artif. Intell. Agric. 2022, 6, 23–33. [CrossRef]
37. Ahmad, I.; Hamid, M.; Yousaf, S.; Shah, S.T.; Ahmad, M.O. Optimizing pretrained convolutional neural networks for tomato leaf
disease detection. Complexity 2020, 2020, 8812019. [CrossRef]
38. Zhou, C.; Zhou, S.; Xing, J.; Song, J. Tomato leaf disease identification by restructured deep residual dense network. IEEE Access
2021, 9, 28822–28831. [CrossRef]
39. Trivedi, N.K.; Gautam, V.; Anand, A.; Aljahdali, H.M.; Villar, S.G.; Anand, D.; Goyal, N.; Kadry, S. Early detection and classification
of tomato leaf disease using high-performance deep neural network. Sensors 2021, 21, 7987. [CrossRef] [PubMed]
40. Wu, Q.; Chen, Y.; Meng, J. DCGAN-based data augmentation for tomato leaf disease identification. IEEE Access 2020, 8, 98716–98728.
[CrossRef]
41. Lin, C.; Li, L.; Luo, W.; Wang, K.C.; Guo, J. Transfer learning based traffic sign recognition using inception-v3 model. Period.
Polytech. Transp. Eng. 2019, 47, 242–250. [CrossRef]
42. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE
Trans. Neural Networks Learn. Syst. 2021, 33, 22331591. [CrossRef]
43. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International
Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
44. Garreau, D.; Luxburg, U. Explaining the explainer: A first theoretical analysis of LIME. In Proceedings of the International
Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 1287–1296.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.