
PLANT DISEASE DETECTION: A DEEP LEARNING APPROACH USING VGG, INCEPTION, ALEXNET, AND RESNET

Abstract:
Agriculture plays a crucial role in the economic growth of a country as it is one of the main
means of subsistence. Recently, technological methods have been designed for the
identification of plants and detection of their diseases in order to meet the new challenges
facing farmers and their learning needs.
Crop diseases are a significant threat to food security, but their rapid identification remains
difficult in many parts of the world due to the absence of the necessary infrastructure.
Our proposed project includes several implementation phases, namely dataset import,
feature extraction, classifier training, and classification.
In this project, convolutional neural network models are developed to perform plant disease
detection through deep learning methodologies. Training of the models was performed using
an open database from the PlantVillage project that consists of 38 distinct classes of
[plant, disease] combinations, including healthy plants. Several model architectures were
trained and compared on their ability to identify the correct [plant, disease] combination
(or healthy plant).

The deep learning models used are VGG, Inception, AlexNet, and ResNet.

The significantly high success rate makes the model a very useful advisory or early warning
tool, and an approach that could be further expanded to support an integrated plant disease
identification system to operate in real cultivation conditions.
Introduction

Modern technologies have given human society the ability to produce enough food to meet
the demand of more than 7 billion people. However, food security remains threatened by a
number of factors including climate change (Tai et al., 2014), the decline in pollinators
(Report of the Plenary of the Intergovernmental Science-Policy Platform on Biodiversity and
Ecosystem Services on the work of its fourth session, 2016), plant diseases (Strange and
Scott, 2005), and others. Plant diseases are not only a threat to food security at the global
scale, but can also have disastrous consequences for smallholder farmers whose livelihoods
depend on healthy crops. In the developing world, more than 80 percent of the agricultural
production is generated by smallholder farmers (UNEP, 2013), and reports of yield loss of
more than 50% due to pests and diseases are common (Harvey et al., 2014). Furthermore, the
largest fraction of hungry people (50%) live in smallholder farming households (Sanchez and
Swaminathan, 2005), making smallholder farmers a group that is particularly vulnerable to
pathogen-derived disruptions in food supply.

Various efforts have been developed to prevent crop loss due to diseases. Historical
approaches of widespread application of pesticides have in the past decade increasingly been
supplemented by integrated pest management (IPM) approaches (Ehler, 2006). Independent
of the approach, identifying a disease correctly when it first appears is a crucial step for
efficient disease management. Historically, disease identification has been supported by
agricultural extension organizations or other institutions, such as local plant clinics. In more
recent times, such efforts have additionally been supported by providing information for
disease diagnosis online, leveraging the increasing Internet penetration worldwide. Even
more recently, tools based on mobile phones have proliferated, taking advantage of the
historically unparalleled rapid uptake of mobile phone technology in all parts of the world
(ITU, 2015).

Smartphones in particular offer novel approaches to help identify diseases because of
their computing power, high-resolution displays, and extensive built-in sets of accessories,
such as advanced HD cameras. It is widely estimated that there will be between 5 and 6
billion smartphones on the globe by 2020. At the end of 2015, already 69% of the world's
population had access to mobile broadband coverage, and mobile broadband penetration
reached 47% in 2015, a 12-fold increase since 2007 (ITU, 2015). The combined factors of
widespread smartphone penetration, HD cameras, and high performance processors in mobile
devices lead to a situation where disease diagnosis based on automated image recognition, if
technically feasible, can be made available at an unprecedented scale. Here, we demonstrate
the technical feasibility using a deep learning approach utilizing 54,306 images of 14 crop
species with 26 diseases (or healthy) made openly available through the project PlantVillage
(Hughes and Salathé, 2015). An example of each crop-disease pair can be seen in Figure 1.

Figure 1. Example of leaf images from the PlantVillage dataset, representing every
crop-disease pair used.
(1) Apple Scab, Venturia inaequalis
(2) Apple Black Rot, Botryosphaeria obtusa
(3) Apple Cedar Rust, Gymnosporangium juniperi-virginianae
(4) Apple healthy
(5) Blueberry healthy
(6) Cherry healthy
(7) Cherry Powdery Mildew, Podoshaera clandestine
(8) Corn Gray Leaf Spot, Cercospora zeae-maydis
(9) Corn Common Rust, Puccinia sorghi
(10) Corn healthy
(11) Corn Northern Leaf Blight, Exserohilum turcicum
(12) Grape Black Rot, Guignardia bidwellii
(13) Grape Black Measles (Esca), Phaeomoniella aleophilum, Phaeomoniella chlamydospora
(14) Grape healthy
(15) Grape Leaf Blight, Pseudocercospora vitis
(16) Orange Huanglongbing (Citrus Greening), Candidatus Liberibacter spp.
(17) Peach Bacterial Spot, Xanthomonas campestris
(18) Peach healthy
(19) Bell Pepper Bacterial Spot, Xanthomonas campestris
(20) Bell Pepper healthy
(21) Potato Early Blight, Alternaria solani
(22) Potato healthy
(23) Potato Late Blight, Phytophthora infestans
(24) Raspberry healthy
(25) Soybean healthy
(26) Squash Powdery Mildew, Erysiphe cichoracearum
(27) Strawberry healthy
(28) Strawberry Leaf Scorch, Diplocarpon earlianum
(29) Tomato Bacterial Spot, Xanthomonas campestris pv. vesicatoria
(30) Tomato Early Blight, Alternaria solani
(31) Tomato Late Blight, Phytophthora infestans
(32) Tomato Leaf Mold, Passalora fulva
(33) Tomato Septoria Leaf Spot, Septoria lycopersici
(34) Tomato Two Spotted Spider Mite, Tetranychus urticae
(35) Tomato Target Spot, Corynespora cassiicola
(36) Tomato Mosaic Virus
(37) Tomato Yellow Leaf Curl Virus
(38) Tomato healthy.

Computer vision, and object recognition in particular, has made tremendous advances in the
past few years. The PASCAL VOC Challenge (Everingham et al., 2010), and more recently
the Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015) based
on the ImageNet dataset (Deng et al., 2009) have been widely used as benchmarks for
numerous visualization-related problems in computer vision, including object classification.
In 2012, a large, deep convolutional neural network achieved a top-5 error of 16.4% for the
classification of images into 1000 possible categories (Krizhevsky et al., 2012). In the
following 3 years, various advances in deep convolutional neural networks lowered the error
rate to 3.57% (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; Zeiler and Fergus,
2014; He et al., 2015; Szegedy et al., 2015). While training large neural networks can be very
time-consuming, the trained models can classify images very quickly, which makes them also
suitable for consumer applications on smartphones.

Deep neural networks have recently been successfully applied in many diverse domains as
examples of end-to-end learning. Neural networks provide a mapping from an input—such
as an image of a diseased plant—to an output—such as a crop-disease pair. The nodes in a
neural network are mathematical functions that take numerical inputs from the incoming
edges, and provide a numerical output along an outgoing edge. Deep neural networks simply
map the input layer to the output layer over a series of stacked layers of nodes. The
challenge is to create a deep network in such a way that both the structure of the network as
well as the functions (nodes) and edge weights correctly map the input to the output. Deep
neural networks are trained by tuning the network parameters in such a way that the mapping
improves during the training process. This process is computationally challenging and has in
recent times been improved dramatically by a number of both conceptual and engineering
breakthroughs (LeCun et al., 2015; Schmidhuber, 2015).
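
To make this mapping concrete, the following minimal sketch (in PyTorch; the layer sizes are purely illustrative, with 38 output classes matching our dataset) builds a small stacked network that maps a 256 x 256 RGB leaf image to one score per class:

import torch
import torch.nn as nn

# A deliberately small network: each layer is a function of the previous
# layer's output, and training tunes the edge weights of this mapping.
class TinyLeafNet(nn.Module):
    def __init__(self, num_classes=38):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel RGB in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 128 -> 64
        )
        self.classifier = nn.Linear(32 * 64 * 64, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)                        # one score per class

model = TinyLeafNet()
scores = model(torch.randn(1, 3, 256, 256))              # dummy image batch
print(scores.shape)                                      # torch.Size([1, 38])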

In order to develop accurate image classifiers for the purposes of plant disease diagnosis, we
needed a large, verified dataset of images of diseased and healthy plants. Until very recently,
such a dataset did not exist, and even smaller datasets were not freely available. To address
this problem, the PlantVillage project has begun collecting tens of thousands of images of
healthy and diseased crop plants (Hughes and Salathé, 2015), and has made them openly and
freely available. Here, we report on the classification of 26 diseases in 14 crop species using
54,306 images with a convolutional neural network approach. We measure the performance
of our models based on their ability to predict the correct crop-disease pair, given 38
possible classes. The best performing model achieves a mean F1 score of 0.9934 (overall
accuracy of 99.35%), hence demonstrating the technical feasibility of our approach. Our
results are a first step toward a smartphone-assisted plant disease diagnosis system.
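
For reference, the overall accuracy and mean F1 score used here can be computed from predicted and true class labels as in the following sketch (assuming scikit-learn; the label vectors are hypothetical):

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical true and predicted class indices for a handful of test images
y_true = [0, 5, 5, 12, 37, 2]
y_pred = [0, 5, 2, 12, 37, 2]

print("accuracy:", accuracy_score(y_true, y_pred))
# Macro-averaged F1 weights all classes equally, which matters when the
# 38 crop-disease classes are imbalanced.
print("mean F1 :", f1_score(y_true, y_pred, average="macro"))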

Related Work – Literature Survey

Plant Disease Classification

Before the problem of crop disease detection can be solved, the problem of identifying
different species of plants needs to be addressed. Fortunately, much work has already been
completed in this problem domain. In the research article "A Model of Plant Identification
System Using GLCM, Lacunarity, and Shen Features," the researchers explored many
preprocessing steps that can be used to extract important features for binary-class
classification. Images are transformed using the Polar Fourier Transform to achieve
translational and rotational invariance. Color features, such as the mean, standard deviation,
skewness, and kurtosis, are computed on the pixel values of the leaves. Lastly, the paper
incorporated features derived from the gray-level co-occurrence matrix (GLCM). The GLCM
functions characterize the texture of an image by calculating how often pairs of pixels with
specific values and in a specified spatial relationship occur in an image, creating a GLCM,
and then extracting statistical measures from this matrix. Although crop disease detection is
moving away from hand-engineered features, such as SIFT, HoG, and SURF, there are still a
variety of techniques, such as background segmentation, that help deep learning networks
extract important features. The method used in our experiments is explained in the methods
section.

In the research paper "Plant Leaf and Disease Detection by Using HSV Features and SVM,"
the researchers proposed using a neural network to classify whether a leaf was infected or
not. If a leaf was infected, the image was further processed by a second stage, in which a
genetic algorithm was used to optimize the SVM loss and identify the type of disease. This
method is interesting in that it breaks down the process of disease identification into two
steps. It is instructive to compare this with more recent papers, where healthy leaves are
treated as just another class label, so classification is done in a single step. In addition, the
paper introduces a method for optimizing the loss function using a genetic algorithm, which
can be compared to natural selection, where only the strong hyperparameters survive.
Further study is needed on how genetic algorithms compare with gradient-based optimizers
such as Adam and RMSProp. Lastly, similar to "A Model of Plant Identification System
Using GLCM, Lacunarity, and Shen Features," there was a focus on identifying important
image features to help in the classification process; both papers used the gray-level
co-occurrence matrix (GLCM) to extract information about the distribution of pixel values.
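
As an illustration of the GLCM features discussed above, the following sketch (assuming a recent scikit-image; the input image is a hypothetical placeholder) computes a co-occurrence matrix and extracts standard texture statistics from it:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit grayscale leaf image standing in for real data
leaf = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Co-occurrence of pixel pairs one pixel apart, at 0 and 90 degrees
glcm = graycomatrix(leaf, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Classic Haralick-style statistics extracted from the matrix
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)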

Plant Disease Classification with Convolutions


Other work moved toward using convolutional networks to obtain better performance.
Convolutional networks are a category of neural networks that have proven very effective
in areas such as image recognition and classification. ConvNets have been successful in
identifying faces, objects, and traffic signs, apart from powering vision in robots and
self-driving cars. ConvNets derive their name from the "convolution" operator. The primary
purpose of convolution in a ConvNet is to extract features from the input image.
Convolution preserves the spatial relationship between pixels by learning image features
using small filters. There are several important parts of a convolutional network.

These include the following properties:

1. Depth: Depth corresponds to the number of filters we use for the convolution operation.

2. Stride: Stride is the number of pixels by which we slide our filter matrix over the input
matrix.

3. Zero-padding: Sometimes, it is convenient to pad the input matrix with zeros around the
border, so that we can apply the filter to bordering elements of our input image matrix.

4. Non-Linearity: ReLU is an element-wise operation (applied per pixel) that replaces all
negative pixel values in the feature map with zero. The purpose of ReLU is to introduce
non-linearity into our ConvNet, since most of the real-world data we would want our
ConvNet to learn is non-linear (convolution is a linear operation of element-wise matrix
multiplication and addition, so we account for non-linearity by introducing a non-linear
function like ReLU).

5. Spatial Pooling: Spatial pooling (also called subsampling or downsampling) reduces the
dimensionality of each feature map but retains the most important information. Spatial
pooling can be of different types: max, average, sum, etc. The five properties above are
illustrated in the sketch below. Such networks have grown in the number of layers, leading
to architectures such as ResNet and AlexNet that are trained on datasets such as CIFAR-10
or ImageNet and then fine-tuned for other problems, such as plant classification.
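
Below is a short PyTorch sketch (with AlexNet-style hyperparameters chosen purely for illustration) demonstrating depth, stride, zero-padding, ReLU, and max pooling on a single input:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 256, 256)               # one RGB input image

# Depth = 96 filters; stride = 4; zero-padding = 2 pixels on each border
conv = nn.Conv2d(in_channels=3, out_channels=96,
                 kernel_size=11, stride=4, padding=2)
relu = nn.ReLU()                              # non-linearity, element-wise
pool = nn.MaxPool2d(kernel_size=3, stride=2)  # spatial (max) pooling

fmap = conv(x)      # torch.Size([1, 96, 63, 63]) -- 96 activation maps
fmap = relu(fmap)   # negative activations clamped to zero
fmap = pool(fmap)   # torch.Size([1, 96, 31, 31]) -- downsampled
print(fmap.shape)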

Methods

Dataset Description

We analyze 54,306 images of plant leaves, which have a spread of 38 class labels assigned to
them. Each class label is a crop-disease pair, and we make an attempt to predict the crop-
disease pair given just the image of the plant leaf. Figure 1 shows one example each from
every crop-disease pair from the PlantVillage dataset. In all the approaches described in this
paper, we resize the images to 256 × 256 pixels, and we perform both the model optimization
and predictions on these downscaled images.

Across all our experiments, we use three different versions of the whole PlantVillage
dataset. We start with the PlantVillage dataset as it is, in color; then we experiment with a
gray-scaled version of the PlantVillage dataset; and finally we run all the experiments on a
version of the PlantVillage dataset where the leaves were segmented, removing all the extra
background information that might have the potential to introduce inherent bias into the
dataset due to the regularized process of data collection in the PlantVillage dataset.
Segmentation was automated by means of a script tuned to perform well on our particular
dataset. We chose a technique based on a set of masks generated by analysis of the color,
lightness, and saturation components of different parts of the images in several color spaces
(Lab and HSB). One of the steps of that processing also allowed us to easily fix color casts,
which happened to be very strong in some of the subsets of the dataset, thus removing
another potential bias.
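
The exact segmentation script is specific to our dataset; the sketch below (using OpenCV, with an illustrative file name, and a simple green-hue threshold standing in for the full Lab/HSB mask analysis) shows how the three dataset versions (color, grayscale, and segmented) can be produced:

import cv2

img = cv2.imread("leaf.jpg")                  # hypothetical input path
img = cv2.resize(img, (256, 256))             # downscale as described above

# Version 1: color (as loaded). Version 2: grayscale.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Version 3: crude leaf segmentation via a hue/saturation mask in HSV space.
# Only a stand-in for the paper's mask-based Lab/HSB analysis.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))   # rough "plant" range
segmented = cv2.bitwise_and(img, img, mask=mask)        # black background

cv2.imwrite("leaf_gray.png", gray)
cv2.imwrite("leaf_segmented.png", segmented)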
PROPOSED SYSTEM

AlexNet
The pre-trained AlexNet model (as shown in Fig. 1) used in this study primarily consists of 5
convolutional layers (conv layers) and 3 fully connected layers. In the first convolution layer,
a filter of dimension 11 x 11 x 3, representing height, width, and depth respectively, is
applied over the input image of dimension 227 x 227 x 3. When the filter is applied to a
particular pixel, the dot product of the filter matrix with the respective pixel values in the
receptive field of the image is computed. A total of 96 such filters are applied in the first
layer; hence 96 activation maps are generated from the Rectified Linear Unit (ReLU) layer of
the first convolution layer. In similar fashion, convolution layer 2 with 256 filters of
dimension 5 x 5 x 48, layer 3 with 384 filters of dimension 3 x 3 x 256, layer 4 with 384
filters of dimension 3 x 3 x 192, and layer 5 with 256 filters of dimension 3 x 3 x 192 are
used to perform the convolution operation, and activation maps are generated with different
neurons activated in each map.

In the AlexNet model, several convolution layers are followed by ReLU, max-pooling, and
normalization layers. ReLU is a non-linear, non-saturating activation function applied to the
output of all the convolution layers as well as fully connected layers 6 and 7. In max-pooling
layers, the dimension of the output from the previous convolution layer is reduced by
keeping only the maximum value in each receptive field. Fully connected layers 6 and 7
have 4096 neurons each, with all neurons connected to those of the neighboring layers. A
dropout layer has been included to randomly drop connections in the network during
training, which has been shown to improve the performance of the network during the test
phase [11]. The last fully connected layer has been modified so that its output size equals the
total number of target classes.
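
A minimal sketch of this last-layer modification, assuming a recent torchvision with ImageNet-pretrained weights and the 38 [plant, disease] classes of our dataset:

import torch.nn as nn
from torchvision import models

# torchvision's AlexNet classifier is:
#   Dropout -> Linear(9216, 4096) -> ReLU -> Dropout -> Linear(4096, 4096)
#   -> ReLU -> Linear(4096, 1000)
model = models.alexnet(weights="IMAGENET1K_V1")

# Replace the last fully connected layer so its output size equals the
# number of target classes.
model.classifier[6] = nn.Linear(4096, 38)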
InceptionV3
InceptionV3 consists of inception blocks. In each inception block, convolution filters of
various dimensions and pooling operations are applied to the input in parallel, and their
outputs are concatenated along the channel dimension. Three types of inception blocks are
used in this architecture. To reduce the number of parameters, 3D filters of size
1 x 1 x (number of input channels) are often applied before any other operation on the input.
Although the architecture is quite deep, the model has only about 25 million parameters; the
1 x 1 filters also reduce the computational cost. The larger filters capture highly abstract
features, and the smaller ones capture local features.
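
The sketch below (a simplified, GoogLeNet-style block, not the exact InceptionV3 blocks) illustrates the parallel branches, the 1 x 1 channel-reduction filters, and the channel-wise concatenation:

import torch
import torch.nn as nn

class MiniInceptionBlock(nn.Module):
    """Parallel branches whose outputs are concatenated along channels."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 32, kernel_size=1)   # 1x1 branch
        self.b3 = nn.Sequential(                        # 1x1 reduce, then 3x3
            nn.Conv2d(in_ch, 16, kernel_size=1),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(                        # 1x1 reduce, then 5x5
            nn.Conv2d(in_ch, 16, kernel_size=1),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
        )
        self.bp = nn.Sequential(                        # pooling branch
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, 32, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

y = MiniInceptionBlock(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 128, 32, 32]) -- four 32-channel branches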
VGG16 net

The pre-trained VGG16 net is based on the stacked architecture of AlexNet, with more
convolution layers added to the model (as shown in Fig. 2). It consists of 13 convolution
layers, each followed by a ReLU layer. Some of the convolution layers are followed by
max-pooling to reduce the dimensions, similar to AlexNet. In the convolution layers, smaller
filters of dimension 3 x 3 are used, compared to AlexNet, where filters of larger dimensions
are used. Smaller filters reduce the number of parameters, and a ReLU layer added after
each convolution layer increases the non-linearity. This improves the discrimination of each
class compared to an architecture with larger filters.
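
A quick back-of-the-envelope check of this parameter saving (ignoring biases and assuming equal input/output channel counts):

# Two stacked 3x3 convolutions cover the same 5x5 receptive field as a
# single 5x5 convolution, but with fewer weights:
C = 64
stacked_3x3 = 2 * (3 * 3 * C * C)   # 73,728 weights
single_5x5 = 5 * 5 * C * C          # 102,400 weights
print(stacked_3x3, "<", single_5x5)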

Results and Discussion

Out of the testing images, the final model misclassified the [plant, disease] combination in
just 82 images (0.47%). Among these 82 misclassified images, there were some "faulty"
ones that contained no plant leaves at all, like, for example, those shown in Fig. 4(a) and (b).
In that specific case, the "faulty" images were registered in class c_49 of the database
(Tomato with Early blight), while the model classified them in class c_16 (healthy Corn), as
shown in the classification tables in Fig. 4. These tables present the ranked outputs of the
final CNN model for the images on their left. These images were counted as
misclassifications in the performance estimation of the model (a "correct" classification
would be class c_49, even though they do not actually belong to any class, as they do not
contain any plant leaves). They were both classified as class c_16 probably due to the fact
that the images of that class (two representative ones are shown in Fig. 4(c) and (d))
contained mainly similar soil texture, and the corn leaves were very slim, occupying a rather
small portion of the image. Thus, the actual accuracy of the final model, if such examples
were excluded, would be even higher than 99.53%.

Other problematic situations regarding the field-condition images of the database included:
(i) images with extensive partial shading on the leaves, (ii) images with multiple objects in
addition to the pictured leaf or leaves, such as fingers, entire hands, shoes, or parts of shirts,
and (iii) images with the leaf occupying a very small and non-central part of the frame.
Judging from the high performance of the final model, these problems were overcome by the
learning process in most cases. An indicative case was that of class c_5 (Banana with Black
sigatoka), where 4 out of a total of 48 testing images were mistakenly classified (i.e.,
misclassification reached 8.33%, in contrast to the overall misclassification rate of 0.47%).
Fig. 5 shows the classification results of the model on 8 representative images of the c_5
class, including the 4 misclassified ones (inside red rectangles). The first three images,
inside green rectangles, were classified correctly with a certainty of practically 100%, while
the image inside the yellow rectangle was classified correctly with a certainty of about 80%
(the second choice, with a certainty level of 19.4%, was class c_6, which corresponds to a
banana plant with a different disease). The 4 misclassified images (in the red rectangles)
suffered mainly from extensive partial shading effects, which probably confused the CNN
model, even though in two of them the correct classification was the second choice of the
model, with the first choice being the similar class c_6 (Banana with banana speckle
disease); thus the model identified the plant species correctly but did not accurately detect
the existing plant disease.

Proposed Models

1. Modified AlexNet
2. Modified Inception
3. Modified VGG
4. Modified ResNet
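
The sketch below shows how these modified models can be constructed, assuming a recent torchvision and ImageNet-pretrained backbones; ResNet-50 stands in for the ResNet variant, which the text does not pin down. It generalizes the AlexNet head replacement shown earlier:

import torch.nn as nn
from torchvision import models

def build_modified_model(name: str, num_classes: int = 38) -> nn.Module:
    """Load an ImageNet-pretrained backbone and resize its classification
    head to our number of [plant, disease] classes."""
    if name == "alexnet":
        m = models.alexnet(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "inception":
        m = models.inception_v3(weights="IMAGENET1K_V1")  # expects 299x299 inputs
        m.fc = nn.Linear(2048, num_classes)
        m.AuxLogits.fc = nn.Linear(768, num_classes)      # auxiliary head, used during training
    elif name == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "resnet":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    else:
        raise ValueError(f"unknown model name: {name}")
    return m

# Example: build all four proposed variants
nets = {n: build_modified_model(n) for n in ("alexnet", "inception", "vgg16", "resnet")}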
APPLICATIONS AND FUTURE SCOPE:
In this work, we have proposed a deep convolutional neural network based classifier for
real-time rice disease and pest recognition. We have carried out a comprehensive study on
rice disease and pest recognition, incorporating ten classes covering rice diseases, a pest,
their simultaneous occurrence, and healthy plants. We collected a large number of images of
rice plants infected with various diseases and pests from the rice fields of BRRI in real-life
scenarios. We applied knowledge of agriculture in solving this rice disease classification
problem. We used various types of convolutional neural network architectures and
implemented various training methods on each of them. We were able to successfully
distinguish between inter- and intra-class variation of diseases and pests in rice plants in
complex environments, with a best accuracy of 99.69% on the test set. Validation accuracy
and test accuracy of most of the convolutional neural network architectures were found to be
very high, because our training, validation, and test sets were collected from the same site.
Future work will be based on collecting more images to train our model, particularly from
various regions, as symptoms of the same disease may differ across regions and across
varieties of rice.

We have covered only one pest in this work, because we could not find rice plants infected
by other pests in the fields of BRRI. We collected some images from their pest nursery, but
images of rice plants in the nursery usually differ considerably from images of rice plants in
the field, both in the symptoms of the infected area and in the background. We are trying to
collect field images of pest-infected rice plants from other scientists of BRRI and from
agricultural universities of Bangladesh. A complete disease recognition system can be
developed by incorporating location, weather, and soil data along with the image of the
diseased part of the plant. This will require devices to be set up in the field. The farmers will
also have to be trained so that they do not damage the devices and can use the system
effectively.

Our convolutional neural network architectures are large because of their large number of
parameters. Architectures such as MobileNet have a relatively small number of parameters.
An attempt can be made to achieve high accuracy in rice disease and pest classification with
such small convolutional neural network architectures, which are suitable for mobile
applications. An accurate disease and pest recognition system integrated into an Android app
could help farmers to a great extent.
Most of the diseases covered in our work can be cured after they have occurred. Neck Blast
is a bit different: it can completely destroy the yield of a large field in a day. It affects the
grain of the rice plant, so infected rice plants do not produce any rice. For these reasons,
prevention is much more important than cure in the case of Neck Blast. A recurrent neural
network (RNN) based system can be developed to predict a possible outbreak of Neck Blast
using weather data, because the occurrence of Neck Blast depends greatly on weather
conditions. We have collected weather data for various districts of Bangladesh for the years
2001-2017 from the Bangladesh Meteorological Department (BMD) with the help of BRRI.
The data contain the highest temperature, lowest temperature, average temperature, total
rainfall, average humidity, wind speed, and hours of sunshine for each day. We found some
patterns in the data that may link weather conditions to possible outbreaks of Neck Blast, but
without field-specific data it is quite difficult to develop an accurate forecasting model for
Neck Blast. The exact date of occurrence of Neck Blast may vary from field to field.
Moreover, some varieties of rice, such as BRRI-28 and BRRI-29, are more susceptible to
Neck Blast than other varieties. To develop an accurate forecasting model for Neck Blast
using an RNN, we need to set up devices in the fields of those areas where Neck Blast
occurs frequently, particularly in the Khulna, Satkhira, and Jessore regions of Bangladesh,
and record weather data and the date of occurrence of Neck Blast. Neck Blast generally
occurs from the second week of March to 1-2 April. The data collection process will take a
lot of time, because we need to observe the occurrence of Neck Blast for at least two such
seasons. Besides, if the weather conditions are not suitable for the occurrence of Neck Blast
during our observation period, the disease may not occur at all.
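
As a sketch of what such a system could look like (hypothetical: the window length, feature set, and architecture are illustrative, not a tested design), an LSTM-based classifier over daily weather sequences might be structured as follows:

import torch
import torch.nn as nn

# Classify 30-day windows of 7 daily weather features (max/min/avg
# temperature, rainfall, humidity, wind speed, sunshine hours) as
# "outbreak" vs. "no outbreak" of Neck Blast.
class OutbreakLSTM(nn.Module):
    def __init__(self, n_features=7, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # outbreak vs. no outbreak

    def forward(self, x):                   # x: (batch, days, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])             # logits from the last hidden state

model = OutbreakLSTM()
logits = model(torch.randn(8, 30, 7))       # 8 fields x 30 days x 7 features
print(logits.shape)                         # torch.Size([8, 2])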

Conclusions
In this work, specialized deep learning models were developed, based on specific
convolutional neural network architectures, for the identification of plant diseases through
simple images of the leaves of healthy or diseased plants. The training of the models was
performed using an openly available database taken in both laboratory conditions and real
conditions in cultivation fields. The data comprise 14 plant species in 38 distinct classes of
[plant, disease] combinations, including some healthy plants. The most successful model
architecture achieved an overall accuracy of 99.35% (mean F1 score of 0.9934) on
previously unseen test images. Based on that high level of performance, it becomes evident
that convolutional neural networks are highly suitable for the automated detection and
diagnosis of plant diseases through the analysis of simple leaf images. In addition, the high
importance of the existence of real-conditions images (captured in cultivation fields) in the
training data, which was indicated by the presented results, suggests that, in the development
of such models, focus should be given to maximizing the ratio of real-conditions images in
the training data. Furthermore, the low computational power required by the trained model
to classify a given image (about 2 ms on a single GPU) makes feasible its integration into
mobile applications for use on mobile devices. Such devices could be either smartphones, to
be used by growers or agronomists, or drones and other autonomous agricultural vehicles, to
be used for real-time monitoring and dynamic disease detection in large-scale open-field
cultivations. In the former case in particular, apart from the fact that a farmer at a remote
location could have an early warning about a possible threat to his or her cultivation, and an
agronomist could have a valuable advisory tool at his or her disposal, a future possibility
could be the development of an automated pesticide prescription system that would require
a confirmation by the automated disease diagnosis system to allow the purchase of the
appropriate pesticides by farmers. That would drastically limit the uncontrolled acquisition
of pesticides that leads to their overuse and misuse, with consequent catastrophic effects on
the environment.

References

1. Tai AP, Martin MV, Heald CL (2014) Threat to future global food security from climate
change and ozone air pollution. Nature Climate Change 4(9):817–821.
2. IPBES (2016) Report of the Plenary of the Intergovernmental Science-Policy Platform on
Biodiversity and Ecosystem Services on the work of its fourth session.
3. Strange RN, Scott PR (2005) Plant disease: a threat to global food security.
Phytopathology 43.
4. UNEP (2013) Smallholders, food security, and the environment.
5. Harvey CA et al. (2014) Extreme vulnerability of smallholder farmers to agricultural risks
and climate change in Madagascar. Philosophical Transactions of the Royal Society of
London B: Biological Sciences 369(1639).
6. Sanchez PA, Swaminathan MS (2005) Cutting world hunger in half.
7. Ehler LE (2006) Integrated pest management (IPM): definition, historical development
and implementation, and the other IPM. Pest Management Science 62(9):787-789.
8. ITU (2015) ICT Facts and Figures – The World in 2015.
9. Hughes DP, Salathé M (2015) An open access repository of images on plant health to
enable the development of mobile disease diagnostics. CoRR abs/1511.08060.
