Performance Analysis of Transfer Learning Model and Prediction of Corn Leaf Diseases
https://doi.org/10.22214/ijraset.2022.48406
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue XII Dec 2022- Available at www.ijraset.com
Abstract: Agriculture plays one of the most significant roles in the growth and development of our nation's economy. The identification of diseases is the key to preventing losses in the yield and quality of agricultural produce. Disease detection on plants is critical for sustainable agriculture. It is challenging to monitor plants manually, especially for people who are new to farming, and it requires an excessive amount of time. A correct prediction and detection of disease therefore reduces the use of fertilizer in the field, which helps keep the soil free of impurities. In this paper we explain how we trained our model with a normal dataset and an augmented dataset and achieved an accuracy greater than 95%.
Keywords: Prediction, Transfer Learning, Corn Leaf Diseases, VGG-16, VGG-19.
I. INTRODUCTION
To contribute to the development of nations, knowledge of the agriculture sector is crucial. Agriculture is a unique source of wealth for farmers. For a strong country, the development of farming is a necessity and a requirement in the global market. The world's population is growing at an exponential rate, necessitating massive food production within the next 50 years. Information about different kinds of crops and the diseases occurring at each stage, and its analysis at an early stage, play a key and dynamic role in the agriculture sector. A farmer's main problem is the occurrence of assorted diseases on their crops. The classification and analysis of illnesses is a crucial concern for agriculture's optimum food yield. Food safety is a huge issue due to a lack of infrastructure and technology, so crop disease classification and identification will be important considerations in the coming days. Detection and recognition of crop illnesses is a vital study topic, because it makes it possible to monitor huge fields of crops and detect disease symptoms as soon as they occur on plant leaves. As a result, finding a fast, efficient, inexpensive and effective approach to determine instances of crop disease is quite important.
Transfer Learning: The study of transfer learning is motivated by the fact that people can intelligently apply previously learned knowledge to solve new problems faster or with better solutions. Transfer learning is a machine learning method in which we reuse a pre-trained model as the starting point for a model on a new task. To put it simply, a model trained on one task is repurposed for a second, related task, as an optimization that permits rapid progress when modeling the second task. By applying transfer learning to a new task, one can achieve significantly higher performance than by training with only a small amount of data.
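A minimal sketch of this idea, assuming a TensorFlow/Keras setup (the frozen base, the 256-unit hidden layer and the four-class head are illustrative choices matching the four corn-leaf classes, not code from the paper):

```python
import tensorflow as tf

# Reuse a model pretrained on ImageNet as a fixed feature extractor
# (weights="imagenet" triggers a one-time download of the weights).
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the previously learned weights

# Add a small task-specific head for the new, related task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 corn-leaf classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Only the small head is trained from scratch; the base keeps the features it learned on the large source dataset.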
Maize belongs to the Poaceae monocot family and is the third most significant cereal crop in the world. Though much maize is not eaten directly, it is used to make several products such as corn starch, syrup and ethanol. Maize plant leaves suffer from a range of infections; the three most prevalent maize leaf diseases are northern corn leaf blight, common rust and grey leaf spot.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 2054
3) Weihui Zeng, Haidong Li, Gensheng Hu and Dong Liang identified maize leaf diseases using the SKPSNet-50 convolutional neural network model, where they obtained an accuracy of 92%.
4) Zhang, based on improved deep learning models of the GoogLeNet and Cifar10 networks applied to the identification of diseases and insect pests of corn leaves, reached a highest accuracy rate of 98.9%.
5) Weihui Zeng, Haidong Li, Gensheng Hu and Dong Liang proposed the lightweight dense-scale network (LDSNet) for corn leaf disease identification in 2022, where they obtained an accuracy of 95.4%.
6) J. Arun Pandian, G. Geetharamani and B. Annette, "Data Augmentation on Plant Leaf Disease Image Dataset Using Image Manipulation and Deep Learning Techniques".
III. METHODOLOGY
A. Dataset
We have four types of corn leaf images, as follows.
1) Northern Leaf Blight Disease
The fungus Exserohilium turcicum is responsible for northern corn leaf blight disease. The most striking symptom of this disease is the large grey cigar-shaped lesions that appear on the leaf's surface. Moderate to cool temperatures and a comparatively high humidity level act as a catalyst for this disease.
2) Common Rust
Common rust is another maize disease favoured by high humidity levels and cold temperatures. In this disease, a number of small tan spots develop on both surfaces of the leaf and, as a result, the photosynthesis of the leaf reduces drastically. The same is shown in the figure.
4) Healthy Leaf
Healthy maize leaves have been shown below. Four different samples have been picked for demonstration.
1) VGG-16
a) The first and second convolutional layers comprise 64 feature kernel filters with a filter size of 3×3. As the input image (an RGB image with depth 3) passes into the first and second convolutional layers, the dimensions change to 224x224x64. The resulting output is then passed to a max pooling layer with a stride of 2.
b) The third and fourth convolutional layers have 128 feature kernel filters with a filter size of 3×3. These two layers are followed by a max pooling layer with stride 2, and the resulting output is reduced to 56x56x128.
c) The fifth, sixth and seventh layers are convolutional layers with kernel size 3×3. All three use 256 feature maps. These layers are followed by a max pooling layer with stride 2.
d) The eighth to thirteenth layers form two sets of convolutional layers with kernel size 3×3. All of these convolutional layers have 512 kernel filters. These layers are followed by a max pooling layer with a stride of 2.
e) The fourteenth and fifteenth layers are fully connected hidden layers of 4096 units, followed by a softmax output layer (the sixteenth layer) of 1000 units.
2) VGG-19
The VGG-19 model architecture has 16 convolutional layers, 2 fully connected layers and 1 softmax classifier. Karen Simonyan and Andrew Zisserman introduced the VGG-19 architecture in 2014 in their paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". They created a 19-layer network composed of convolutional and fully connected layers, using only 3×3 convolutional layers stacked on top of each other for simplicity.
a) The first and second convolutional layers comprise 64 feature kernel filters with a filter size of 3×3. As the input image (an RGB image with depth 3) passes into the first and second convolutional layers, the dimensions change to 224x224x64. The resulting output is then passed to a max pooling layer with a stride of 2.
b) The third and fourth convolutional layers have 128 feature kernel filters with a filter size of 3×3. These two layers are followed by a max pooling layer with stride 2, and the resulting output is reduced to 56x56x128.
c) The fifth, sixth, seventh and eighth layers are convolutional layers with kernel size 3×3. All four use 256 feature maps. These layers are followed by a max pooling layer with stride 2.
d) The ninth to sixteenth layers form two sets of convolutional layers with kernel size 3×3. All of these convolutional layers have 512 kernel filters. These layers are followed by a max pooling layer with a stride of 2.
e) The seventeenth and eighteenth layers are fully connected hidden layers of 4096 units, followed by a softmax output layer (the nineteenth layer) of 1000 units.
IV. ANALYSIS
As discussed earlier, we first train on our data without augmentation using VGG-16 and VGG-19. Later we improve the accuracy using image augmentation techniques. Finally, we leverage the pretrained VGG-16 and VGG-19 models, which are already trained on a huge dataset with a diverse range of categories, to extract features and classify images. All the evaluation metrics are compared at a later stage.
Above is the code to call the VGG-16 pretrained model. We need to pass weights='imagenet' to fetch the VGG-16 model trained on the ImageNet dataset. It is important to set include_top=False to avoid downloading the fully connected layers of the pretrained model.
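A sketch of the call described in the text, assuming a TensorFlow/Keras setup (the input size follows the 224×224 RGB dimensions given earlier):

```python
import tensorflow as tf

# Fetch VGG-16 with ImageNet weights; include_top=False drops the
# fully connected classifier layers of the pretrained model.
vgg16_base = tf.keras.applications.VGG16(
    weights="imagenet",    # weights learned on the ImageNet dataset
    include_top=False,     # skip the 4096-unit FC layers and 1000-way softmax
    input_shape=(224, 224, 3),
)
vgg16_base.summary()  # 13 convolutional layers + 5 max pooling layers remain
```

With the top removed, the model outputs a 7x7x512 feature map that a new classifier head can be trained on.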
Below are the model metrics for VGG-16: model accuracy and loss after fine-tuning the pretrained model without data augmentation.
Above is the code to call the VGG-19 pretrained model. We need to pass weights='imagenet' to fetch the VGG-19 model trained on the ImageNet dataset. It is important to set include_top=False to avoid downloading the fully connected layers of the pretrained model.
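The VGG-19 call is analogous; a Keras sketch under the same assumptions as before:

```python
import tensorflow as tf

# Fetch VGG-19 with ImageNet weights and without its classifier head.
vgg19_base = tf.keras.applications.VGG19(
    weights="imagenet",    # weights learned on the ImageNet dataset
    include_top=False,     # drop the fully connected layers
    input_shape=(224, 224, 3),
)
vgg19_base.summary()  # 16 convolutional layers + 5 max pooling layers remain
```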
Below are the model metrics for VGG-19: model accuracy and loss after fine-tuning the pretrained model without data augmentation.
Data augmentation is a process of artificially increasing the amount of data by generating new data points from existing data. This
includes adding minor alterations to data or using machine learning models to generate new data points in the latent space of
original data to amplify the dataset.
a) Rotating the image randomly by 50 degrees using the rotation_range parameter.
b) Translating the image randomly horizontally or vertically by a 0.2 factor of the image's width or height using the
width_shift_range and the height_shift_range parameters.
c) Applying shear-based transformations randomly using the shear_range parameter set to 0.2, and zooming by a 0.2 factor using the zoom_range parameter.
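The transformations listed above can be sketched with Keras's ImageDataGenerator, with the parameter values taken from the list (the dummy input image is only for illustration):

```python
import numpy as np
import tensorflow as tf

# Augmentation settings matching the list above.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=50,        # random rotation up to 50 degrees
    width_shift_range=0.2,    # horizontal shift by up to 20% of width
    height_shift_range=0.2,   # vertical shift by up to 20% of height
    shear_range=0.2,          # random shear transformations
    zoom_range=0.2,           # random zoom in/out by up to 20%
)

# Generate one augmented batch from a dummy 224x224 RGB image.
images = np.random.rand(1, 224, 224, 3).astype("float32")
augmented = next(datagen.flow(images, batch_size=1))
print(augmented.shape)  # (1, 224, 224, 3)
```

Each pass over the training data then sees a slightly different version of every image, which is how the dataset is effectively enlarged.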
V. COMPARATIVE RESULTS
Model                                Training accuracy   Validation accuracy
VGG-16 without data augmentation     97.10%              98.00%
VGG-19 without data augmentation     96.47%              97.55%
VGG-16 with data augmentation        98.20%              99.25%
VGG-19 with data augmentation        97.60%              98.75%
The above table shows the training and validation accuracy for the two models, with and without data augmentation. The first model, VGG-16 without data augmentation, gives a validation accuracy of 98%, but with data augmentation its validation accuracy is 99.25%. The second model, VGG-19 without data augmentation, gives a validation accuracy of 97.55%, but with data augmentation it gives a validation accuracy of 98.75%.
VI. CONCLUSION
The main goal of this research work was to train our model with and without data augmentation. We can conclude that by using data augmentation we enlarge our data and introduce new data points derived from the existing data, allowing the model to achieve better accuracy and predictions.
REFERENCES
[1] Sumita Mishra, Rishabh Sachan, Diksha Rajpal, Deep Convolutional Neural Network based Detection System for Real-time Corn Plant Disease Recognition,
https://doi.org/10.1016/j.procs.2020.03.236.
[2] Weihui Zeng, Haidong Li, Gensheng Hu, Dong Liang, Lightweight dense-scale network (LDSNet) for corn leaf disease identification,
https://doi.org/10.1016/j.compag.2022.106943.
[3] Daniel Fernando Santos-Bustos, Binh Minh Nguyen, Helbert Eduardo Espitia, Towards automated eye cancer classification via VGG and ResNet networks
using transfer learning, https://doi.org/10.1016/j.jestch.2022.101214.
[4] Honghui Xu, Wei Li, Zhipeng Cai, Analysis on methods to effectively improve transfer learning performance, https://doi.org/10.1016/j.tcs.2022.09.023.
[5] Qiang Zhang, Zhe Tian, Jide Niu, Jie Zhu, Yakai Lu, A study on transfer learning in enhancing performance of building energy system fault diagnosis with
extremely limited labeled data, https://doi.org/10.1016/j.buildenv.2022.109641.