
Detection of Various Plant Leaf Diseases Using Deep Learning Techniques

2023 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI) | 979-8-3503-1590-5/23/$31.00 ©2023 IEEE | DOI: 10.1109/ACCAI58221.2023.10200031

K. Harshavardhan, PJV Abhinav Krishna, and Angelina Geetha
Dept of Computer Science and Engineering, Hindustan Institute of Technology and Science, Chennai, India.
E-mail: [email protected], [email protected], [email protected]

Abstract- To sustain healthy crop yields and avoid extensive plant disease damage, it is essential to diagnose plant leaves. Traditional diagnostic techniques, however, can be laborious and complicated. Deep learning methods like ResNet 34 have recently come to light as a potential strategy for automating plant leaf diagnosis. The deep convolutional neural network ResNet 34 is capable of correctly classifying photos of plant leaves based on patterns and characteristics. A ResNet 34 model may be taught to recognize different leaf illnesses and other anomalies with excellent accuracy by training it on a huge dataset of tagged plant leaf photos. The ResNet 34 model's output may be seen on a website, enabling users to upload pictures of plant leaves and get a diagnosis in a matter of seconds. Also, the website might include details on the particular ailment or issue found, along with suggested remedies and prevention actions. The way of identifying and treating plant illnesses may be completely altered by this method of diagnosing plant leaves, which would ultimately result in healthier crops and a more sustainable future. Farmers, agronomists, and other stakeholders will find it simpler to recognize and effectively treat plant diseases thanks to the usage of ResNet 34 and a website for output display, which offers an accessible and user-friendly interface for plant diagnostics.

Keywords: Deep learning, Resnet34, Plant leaf disease, Transfer learning, CNN, Flask

I. INTRODUCTION

For the identification of plant diseases, image processing and deep learning models can be used. In this project, we've outlined the method for using leaf images to identify plant diseases. A subset of signal processing called "image processing" is able to extract from an image the picture's attributes or other relevant data. One aspect of machine learning is deep learning. As no feature engineering is necessary with deep learning, unlike with conventional machine learning techniques, there is no need to worry about domain expertise. An automated approach for detecting plant diseases using computer vision called a "plant disease detector" uses machine learning algorithms to accurately identify diseased and healthy plants as well as the type of disease. Convolutional neural networks (CNN) and other deep learning networks for pictures can be used to achieve this. Horizontal edges, vertical edges, RGB values, and other characteristics are extracted from pictures using CNN. For extracting visual features, CNN is the finest deep-learning neural network. Many images of both healthy and unhealthy plants may be used to train the CNN-based network to identify plant diseases, and a trained model can then be used in the future to forecast plant diseases from images of plant leaves. Identifying and treating illnesses and other issues in plants requires the examination of plant leaves. Correct and prompt diagnosis can reduce the need for pesticides and fertilizers, avert extensive damage, and maintain healthy crop harvests. Deep learning methods like ResNet 34, which can precisely identify photos based on patterns and characteristics, represent one of the most promising methods for automating plant leaf diagnosis.

The convolutional neural network (CNN) architecture ResNet 34 has attained cutting-edge performance on several image categorization tasks. It is especially useful for diagnosing plant leaves due to its deep structure, which enables it to capture and learn complicated information in photos. A ResNet 34 model may be taught to recognize different leaf illnesses and other anomalies with excellent accuracy by training it on a huge dataset of tagged plant leaf photos. The ResNet 34 model's output can then be seen on a website, offering a simple and open interface for plant diagnosis. Users can upload pictures of plant leaves and instantly obtain a diagnosis, enabling quick identification and remediation of any problems. The website can also offer details on the particular sickness or issue found, along with suggested remedies and prevention actions.
Overall, the efficiency and precision of plant diagnosis can be greatly enhanced by using ResNet 34 for plant leaf diagnosis and a website for output display. This strategy has the potential to transform how we identify and treat plant illnesses by utilizing the power of deep learning and giving users an accessible interface, ultimately resulting in better crops and a more sustainable future.

The paper is organized as follows: Part II provides a detailed explanation of the related work. Part III provides specifics on the approach. Part IV presents the findings. Part V presents the discussion. Part VI, the final part, concludes the paper.

II. LITERATURE REVIEW

Muhammad Hammad Saleem et al. [1] compared different models that are trained using different architectures such as CNN, AlexNet, GoogleNet, VGG, SVM, and ResNet and compared their accuracies as well as the number of parameters that they can be trained on. The authors also give the number of papers published on the above architectures. It seems there are very few to no papers published on the problem statement using the ResNet-34 architecture.

He et al. [2] introduced the ResNet architecture, which is a deep neural network with residual connections that allows for the training of very deep networks. The ResNet architecture was designed to address the problem of vanishing gradients, which occurs when gradients become too small during backpropagation, making it difficult to train deep neural networks. ResNet learns a residual function that comes close to the intended mapping by skipping over some layers using residual connections. This approach makes it easier to train very deep networks and has been shown to improve network accuracy. ResNet has become a popular architecture for image recognition tasks, achieving state-of-the-art results on various datasets, including the ILSVRC 2015 classification task and the COCO object detection task.

Krizhevsky et al. [3] proposed the AlexNet architecture, which was the first deep CNN to attain cutting-edge performance on the ILSVRC 2012 image categorisation task. AlexNet introduced several novel ideas that are now standard in deep learning, including the usage of rectified linear unit (ReLU) activation functions, the use of dropout regularization, and the use of data augmentation. AlexNet was also the first architecture to use GPUs for training deep neural networks, which significantly reduced training time. Several people have utilised AlexNet for the purpose of transfer learning in many computer vision applications, such as object identification and localization.

Simonyan and Zisserman [4] proposed the VGG architecture, which is a deep CNN with a simple and uniform structure. VGG reached cutting-edge performance on the ILSVRC 2014 image categorization task and has become a popular architecture for transfer learning. A stack of convolutional layers with tiny 3x3 filters and max pooling layers, followed by a number of fully connected layers, makes up the VGG architecture. The VGG architecture is easy to understand and modify, and it has been used as a baseline architecture for many computer vision tasks. The VGG architecture has also been used for feature extraction in deep learning-based image retrieval systems.

Szegedy et al. [5] introduced the Inception architecture, which is a family of CNN architectures that use a combination of 1x1, 3x3, and 5x5 convolutional filters in parallel to improve computational efficiency and accuracy. The Inception architecture was designed to address the problem of computational efficiency in deep neural networks, which can become a bottleneck in training and inference. By using multiple filter sizes in parallel, the Inception architecture can capture features at different spatial scales, lowering the number of parameters while increasing network accuracy. Segmentation, object recognition, and picture classification tasks all fall under the umbrella of computer vision challenges that have employed the Inception framework.

Zhou et al. [6] suggested a technique for localizing objects in images by training a CNN to predict a heatmap indicating the object's location. The method, called the Class Activation Mapping (CAM) technique, uses the weights of the final convolutional layer of a CNN to generate a heatmap that highlights the areas of the picture that are most important for classification. The CAM method can be used to visualize the decision-making process of a CNN and to identify the regions of an image that are most important for a particular classification task. The CAM technique has been applied to many computer vision tasks, such as object identification and localization.

Ronneberger et al. [7] introduced the U-Net framework, which is a CNN framework designed for biomedical image segmentation challenges. The U-Net framework consists of a contracting path, which captures context information using convolutional and max pooling layers, and an expanding path, which uses transposed convolutional layers and skip connections to recover the input picture's spatial resolution. In a variety of biomedical picture segmentation tasks, such as cell, tissue, and brain tumour segmentation, it has been demonstrated that the U-Net framework performs at the cutting edge.

Huang et al. [8] introduced the DenseNet framework, which is a CNN framework with dense connections that allows for the reuse of features learned in previous layers. The DenseNet architecture was designed to address the issue of vanishing gradients in very deep networks by letting gradients flow directly from the output to the input of each layer. The ILSVRC 2012 and 2015 image classification tasks, as well as other image classification challenges, have demonstrated that the DenseNet design achieves state-of-the-art performance. It has also been utilised for other tasks, such as object identification, segmentation, and localization.

Simonyan and Zisserman [9] proposed the Two-Stream CNN architecture, which is a CNN architecture that takes both spatial and temporal information into account for action recognition tasks. The Two-Stream CNN architecture consists of two parallel CNNs, one for processing spatial information and one for processing temporal information, which are then combined using a fusion layer. For a number of action detection tasks, including the UCF101 and HMDB51 datasets, the Two-Stream CNN architecture has been demonstrated to perform at the cutting edge.

Deng et al. [10] introduced the ImageNet dataset, which is a big collection of annotated photos that has become the standard benchmark for image recognition tasks. There are more than a million pictures in the ImageNet collection from 1000 different categories, and it has been used to assess the effectiveness of different CNN architectures, including AlexNet, VGG, and ResNet.

Wang et al. [11] presented a technique for identifying plant diseases using a CNN architecture with transfer learning. The method uses a pre-trained CNN model, such as VGG or ResNet, as a feature extractor and trains a linear SVM classifier on top of the retrieved characteristics. The method was shown to achieve high accuracy on various plant disease datasets, including the PlantVillage dataset and the Tomato-Leaf dataset.

Zhao et al. [12] presented a technique for rice disease identification using a CNN architecture with transfer learning. The method uses a pre-trained CNN model, such as Inception or ResNet, as a feature extractor and trains a linear SVM classifier on top of the retrieved characteristics. The method was shown to achieve high accuracy on various rice disease datasets, including the Rice Disease dataset and the Rice Plant Diseases dataset.

Ferentinos et al. [13] conducted a comprehensive overview of plant leaf disease diagnosis using deep learning methods, including CNN-based methods, transfer learning methods, and unsupervised learning methods. The review highlights the potential of deep learning-based methods for diagnosis of plant leaf disease and identifies the challenges that need to be addressed, including the lack of large-scale annotated datasets and the need for real-time detection methods.

Zhang et al. [14] presented a technique for grape leaf disease identification using a CNN architecture with transfer learning. The method uses a pre-trained CNN model, such as VGG or Inception, as a feature extractor and trains a linear SVM classifier on top of the retrieved characteristics. The method was shown to achieve high accuracy on various grape leaf disease datasets, including the Grape Leaf Diseases dataset and the Grape Diseases dataset.

III. METHODOLOGY

The detection of plant leaf disease can be achieved using the Residual Network 34 (ResNet-34) algorithm, which is a popular image classification algorithm. The following steps can be followed for the detection of plant leaf disease and deploying it into a web application using the Flask framework:

Step 1: Collect and preprocess dataset: Collect a large dataset of plant leaf images with and without diseases. The photos are preprocessed by being resized to the same size, using techniques for data augmentation, and creating training and validation sets from the dataset.
Step 2: Train ResNet34 model: Use transfer learning to train a ResNet34 model on the preprocessed dataset (a minimal code sketch of this step is given after the steps). Transfer learning involves using a pre-trained model and fine-tuning it on your specific dataset. The pre-trained ResNet34 model can be obtained from a deep-learning library such as PyTorch or Keras.
Step 3: Evaluate the model: Metrics like accuracy, precision, and recall may be used to gauge how well the model performed on the validation set.
Step 4: Save the model: The trained model should be saved in a deployable format, such as a .pt or .h5 file.
Step 5: Build Flask app: Build a Flask app that can receive images and classify them using the trained model. The Flask app should include a form where the user can upload an image and a function to process the uploaded image.
Step 6: Deploy the app: Deploy the Flask app on a web server such as Heroku or AWS. Test the app by uploading images and checking the classification results.
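As a rough illustration of Steps 1, 2, and 4, the following minimal PyTorch sketch fine-tunes a pre-trained ResNet-34 on a folder of leaf images and saves the weights. The data/train directory, image size, batch size, learning rate, and epoch count are illustrative assumptions rather than the exact settings used in the paper, and a recent torchvision version is assumed for the weights API.

```python
# Minimal sketch of transfer learning with ResNet-34 (not the authors' exact code).
# Assumes data/train/<class_name>/<image>.jpg (ImageFolder layout) -- a hypothetical path.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # resize every leaf photo to one size
    transforms.RandomHorizontalFlip(),      # simple data augmentation
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)   # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))       # new head for the leaf classes
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                      # a handful of epochs, for illustration only
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "resnet34_leaf.pt")   # Step 4: save in a deployable .pt format
```

Accuracy, precision, and recall on a held-out validation split (Step 3) can then be computed from the saved model's predictions.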
Fig 1: Proposed Workflow Diagram

The workflow of the solution is illustrated in the diagram above. The user will input a leaf image through our designed web interface. The application will then resize and preprocess the image, and it will be given as an input to the instantiated model with pre-trained model weights, which will then predict the disease of the uploaded image. If the leaf is healthy, a screen will report that the plant is healthy. Otherwise, if any disease is detected, the resulting screen will show the name of the disease along with the remedial measures to minimize the loss and further spread. So far, the model has been trained using transfer learning, in which the knowledge of a previously trained model is reused and the last layers are fine-tuned on the dataset, so that the model adapts to the given dataset and can predict the disease. Here, the pre-trained ResNet-34 model is used and fine-tuned on the New Plant Diseases dataset according to the problem. The model is trained on Kaggle. The resulting model has an accuracy of 98.7% on test data with a loss of 0.04452, whereas the authors of [2] in the literature survey attained an accuracy of 96.4%. The further work includes deploying our trained model in the designed web application (a minimal Flask sketch is given below). This can be done using the Flask framework, which is commonly used for deploying machine learning and deep learning models in applications. Then test cases need to be designed and the application needs to be tested.
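The sketch below illustrates the upload-and-predict flow described above: a small Flask app that accepts an uploaded leaf image, runs the saved ResNet-34 model, and returns either a "healthy" message or the disease name with a remedy. The resnet34_leaf.pt filename, the "file" form field, and the class_names/remedies entries are illustrative assumptions, not the paper's actual code.

```python
# Sketch of the Flask upload-and-predict endpoint; names and paths are hypothetical.
import io

import torch
import torch.nn as nn
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)

class_names = ["Potato___Early_blight", "Potato___healthy", "Tomato___Leaf_spot"]  # illustrative subset
remedies = {"Potato___Early_blight": "Remove affected leaves and apply a suitable fungicide."}

# Rebuild the architecture and load the fine-tuned weights saved during training.
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(class_names))
model.load_state_dict(torch.load("resnet34_leaf.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@app.route("/predict", methods=["POST"])
def predict():
    # The upload form is assumed to send the leaf image under the field name "file".
    image = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)                 # add a batch dimension
    with torch.no_grad():
        label = class_names[model(batch).argmax(1).item()]
    if label.endswith("healthy"):
        return jsonify({"result": "The plant is healthy."})
    return jsonify({"disease": label, "remedy": remedies.get(label, "No remedy on record.")})

if __name__ == "__main__":
    app.run(debug=True)
```

In a full deployment the class list would come from the training dataset's folder names, the remedy text from whatever disease-information source the site uses, and the app would be hosted on a server such as Heroku or AWS as noted in Step 6.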
Algorithm for ResNet34:
Step 1: Let x be the input image of size (height, width, channels), and let F(x) be the output feature maps of the ResNet34 architecture. Then, we can represent the ResNet34 architecture as a function F: R^(height x width x channels) -> R^C, where C is the number of output classes.
Step 2: The ResNet34 architecture can be broken down into a sequence of operations that are applied to the input image x. Let us denote these operations as H_i(x), where H_i represents the i-th layer of the ResNet34 architecture. The output feature maps F(x) of the ResNet34 architecture can be computed as follows:
F(x) = H_n(H_{n-1}(...H_1(x))),
where n is the total number of layers in the ResNet34 architecture.
Step 3: Each operation H_i(x) can be represented mathematically as:
H_i(x) = h_i(h_{i-1}(...h_1(x))) + x,
where h_i represents the i-th residual block, which is a function that takes the input tensor and applies a set of convolutional filters, batch normalization, and ReLU activation. The skip connection x adds the input tensor to the output of the last convolutional layer in the residual block (a PyTorch sketch of such a block is given after Step 4).
Step 4: The function h_i can be represented mathematically as:
h_i(x) = ReLU(BatchNorm(Conv(x, W_i))),
where Conv stands for the convolutional operation, BatchNorm for batch normalisation, ReLU for the ReLU activation function, and W_i for the learnable parameters of the convolutional filters and batch normalisation operation.
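A minimal PyTorch sketch of one such residual block follows, assuming 3x3 convolutions with equal input and output channel counts so that the skip connection can be added directly; the real ResNet-34 blocks additionally handle stride and channel changes with a projection on the shortcut.

```python
# One basic residual block: two Conv -> BatchNorm (-> ReLU) stages plus the skip connection,
# mirroring h_i(x) = ReLU(BatchNorm(Conv(x, W_i))) and H_i(x) = h(...) + x above.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))   # first Conv -> BatchNorm -> ReLU
        out = self.bn2(self.conv2(out))            # second Conv -> BatchNorm
        return self.relu(out + x)                  # the skip connection adds the input tensor

block = BasicBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)     # torch.Size([1, 64, 56, 56])
```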

IV. RESULTS

Figures 3 and 4 show the experimental results of the diagnosis of plant leaf disease using ResNet 34 for the input image dataset. The figures below represent the output of the experiments, with early blight and leaf spot predicted respectively, and Figure 2 shows the testing interface of the model.

The web page has been deployed using the Flask framework; the image given as input is preprocessed and fed to the model for prediction.

Fig 2: Testing Interface
Fig 3: Potato Leaf
Fig 4: Tomato Leaf

V. DISCUSSION

The results of these studies imply that ResNet-34 has the potential to be a useful method for finding plant leaf disease. In comparison to conventional sampling techniques, ResNet can save time and resources by automating the identification and classification of disease. Moreover, ResNet can deliver data that is more precise and consistent than manual sampling techniques, which are subject to bias and human error. The use of the ResNet-34 algorithm for plant leaf disease detection shows promise and is an area of active research; ResNet and other computer vision techniques are anticipated to play an ever bigger role in understanding and predicting plant diseases as technology and algorithms continue to advance.

Fig 5: Analysis of accuracy of various algorithms.

The accuracy of an image classification model depends on a number of factors, including the input photos, the model parameters, and the accuracy threshold [3]. A ratio can be used as a threshold to determine whether a certain detection is a true positive or a false positive using the Intersection over Union (IoU) method [4]. The IoU ratio quantifies the degree to which the bounding box of a predicted item overlaps the bounding box containing the ground reference data. Precision is the proportion of true positives among all predicted positive outcomes:
Precision = (Number of True Positives) / (Number of True Positives + Number of False Positives).
Recall is the proportion of actual (relevant) objects that are identified as true positives:
Recall = (Number of True Positives) / (Number of True Positives + Number of False Negatives).
Both values range from 0 to 1, with 1 denoting the maximum accuracy. The F1 score combines the two as their harmonic mean:
F1 score = 2 × (Precision × Recall) / (Precision + Recall).
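As a small worked example of these formulas (using the standard F1 definition), the sketch below computes precision, recall, and F1 from hypothetical true-positive, false-positive, and false-negative counts; the numbers are illustrative only.

```python
# Precision, recall, and F1 from illustrative confusion counts for one disease class.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)        # true positives over all predicted positives
    recall = tp / (tp + fn)           # true positives over all actual positives
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=95, fp=5, fn=10)      # hypothetical counts
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")  # precision=0.950 recall=0.905 f1=0.927
```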
VI. CONCLUSION

In conclusion, ResNet-34 is a promising method that has demonstrated significant promise in recent research for the identification of plant leaf disease. ResNet-34 has been demonstrated to perform better than other deep learning models in reliably recognizing and categorizing a variety of plant leaf diseases. This is due to its capacity to build hierarchical representations of input data and successfully manage the issue of vanishing gradients. Even with sparse data, researchers have been able to train ResNet-34 to reliably identify and categorise several plant leaf diseases by utilizing large-scale datasets and transfer learning approaches. Yet, there is still opportunity for advancement in this area, notably in the creation of faster and more precise techniques for identifying and treating plant leaf ailments in real-world situations.

REFERENCES

[1] Saleem, M. H., & Potgieter, J. (2019). Plant leaf disease classification by deep learning, 8(11), 468.
[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
[3] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
[4] Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[5] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).
[6] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929).
[7] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241).
[8] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).
[9] Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems (pp. 568-576).
[10] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (pp. 248-255).
[11] Wang, X., Perez-Gonzalez, A., & Itti, L. (2018). A deep learning approach for plant disease detection and diagnosis. Computers and Electronics in Agriculture, 152, 356-365.
[12] Zhao, X., Wang, X., & Zhang, Y. (2018). An automated rice disease diagnosis system using deep learning algorithms. Computers and Electronics in Agriculture, 151, 266-279.
[13] Ferentinos, K. P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture, 145, 311-318.
[14] Zhang, L., Lu, W., & Niu, Z. (2020). Grape leaf disease identification based on deep learning with transfer learning. Journal of Applied Remote Sensing, 14(2), 026527.
[15] Alshehri, H., Ibrahim, A. A., Alyahya, S., & Alshammari, S. (2020). A deep learning approach for plant disease recognition. Journal of Ambient Intelligence and Humanized Computing, 11(7), 2813-2822.

