
TURMERIC PLANT LEAF DISEASES DETECTION AND

CLASSIFICATION USING DEEP LEARNING

Minor project report submitted


in partial fulfillment of the requirement for award of the degree of

Bachelor of Technology
in
Computer Science & Engineering

By

DARSI MENIKA AKSHAYA (20UECS0239) (VTU 18155)


DURGA SAI SRI RAMIREDDY (20UECS0269) (VTU 17550)
TUMMALURU BHANU PRAKASH REDDY (20UECS0959) (VTU 17031)

Under the guidance of


Mr. D. Jaganathan, M.Tech.,
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


SCHOOL OF COMPUTING

VEL TECH RANGARAJAN DR. SAGUNTHALA R&D INSTITUTE OF


SCIENCE & TECHNOLOGY
(Deemed to be University Estd u/s 3 of UGC Act, 1956)
Accredited by NAAC with A++ Grade
CHENNAI 600 062, TAMILNADU, INDIA

May, 2023
CERTIFICATE
It is certified that the work contained in the project report titled "TURMERIC PLANT LEAF DIS-
EASES DETECTION AND CLASSIFICATION USING DEEP LEARNING" by DARSI MENIKA
AKSHAYA (20UECS0239), DURGA SAI SRI RAMIREDDY (20UECS0269) and TUMMALURU
BHANU PRAKASH REDDY (20UECS0959) has been carried out under my supervision and that
this work has not been submitted elsewhere for a degree.

Signature of Supervisor
Mr. D. Jaganathan
Assistant Professor
Computer Science & Engineering
School of Computing
Vel Tech Rangarajan Dr. Sagunthala R&D
Institute of Science & Technology
May, 2023

Signature of Head of the Department

Dr. M. S. Murali Dhar, M.Tech., Ph.D
Associate Professor & Head of Department
Computer Science & Engineering
School of Computing
Vel Tech Rangarajan Dr. Sagunthala R&D
Institute of Science & Technology
May, 2023

Signature of the Dean

Dr. V. Srinivasa Rao
Professor & Dean
Computer Science & Engineering
School of Computing
Vel Tech Rangarajan Dr. Sagunthala R&D
Institute of Science & Technology
May, 2023

DECLARATION

We declare that this written submission represents our ideas in our own words and where others
ideas or words have been included, we have adequately cited and referenced the original sources. We
also declare that we have adhered to all principles of academic honesty and integrity and have not
misrepresented or fabricated or falsified any idea/data/fact/source in our submission. We understand
that any violation of the above will be cause for disciplinary action by the Institute and can also
evoke penal action from the sources which have thus not been properly cited or from whom proper
permission has not been taken when needed.

(DARSI MENIKA AKSHAYA)


Date: / /

(DURGA SAI SRI RAMIREDDY)


Date: / /

(TUMMALURU BHANU PRAKASH REDDY)


Date: / /

APPROVAL SHEET

This project report entitled "TURMERIC PLANT LEAF DISEASES DETECTION AND CLASSI-
FICATION USING DEEP LEARNING" by DARSI MENIKA AKSHAYA (20UECS0239), DURGA
SAI SRI RAMIREDDY (20UECS0269) and TUMMALURU BHANU PRAKASH REDDY (20UECS0959)
is approved for the degree of B.Tech. in Computer Science & Engineering.

Examiners Supervisor

Mr. D. Jaganathan, M.Tech.,

Date: / /
Place:

ACKNOWLEDGEMENT

We express our deepest gratitude to our respected Founder Chancellor and President Col. Prof.
Dr. R. RANGARAJAN, B.E. (EEE), B.E. (MECH), M.S. (AUTO), D.Sc., and to our Foundress President
Dr. R. SAGUNTHALA RANGARAJAN, M.B.B.S., Chairperson, Managing Trustee and Vice President.

We are very much grateful to our beloved Vice Chancellor Prof. S. SALIVAHANAN, for provid-
ing us with an environment to complete our project successfully.

We record indebtedness to our Professor & Dean, Department of Computer Science & Engi-
neering, School of Computing, Dr. V. SRINIVASA RAO, M.Tech., Ph.D., for immense care and
encouragement towards us throughout the course of this project.

We are thankful to our Head, Department of Computer Science & Engineering, Dr. M. S. MU-
RALI DHAR, M.E., Ph.D., for providing immense support in all our endeavors.

We also take this opportunity to express a deep sense of gratitude to our Internal Supervisor Mr.
D. JAGANATHAN, M.Tech., for his cordial support, valuable information and guidance; he helped
us complete this project through its various stages.

A special thanks to our Project Coordinators Mr. V. ASHOK KUMAR, M.Tech., Ms. C.
SHYAMALA KUMARI, M.E., for their valuable guidance and support throughout the course of the
project.

We thank our department faculty, supporting staff and friends for their help and guidance to com-
plete this project.

DARSI MENIKA AKSHAYA (20UECS0239)


DURGA SAI SRI RAMIREDDY (20UECS0269)
TUMMALURU BHANU PRAKASH REDDY (20UECS0959)

ABSTRACT

Turmeric is a widely cultivated crop with various medicinal and culinary uses. How-
ever, leaf diseases pose a significant threat to turmeric plants, leading to reduced
yield and quality. Various conventional methods are available for disease detection
and classification, but they often lack accuracy and efficiency. In recent years, deep
learning techniques have emerged as a promising approach for plant disease detec-
tion and classification. In this study, we propose a deep learning-based approach for
automatic detection and classification of turmeric leaf diseases using a convolutional
neural network (CNN) model. We evaluated the proposed approach on a publicly
available dataset of turmeric leaf images containing three different types of diseases.
The results demonstrate the superior performance of the proposed approach in terms of
accuracy and efficiency. The proposed approach can assist farmers and agronomists
in making informed decisions for disease management and improving the yield of
turmeric crops. It can also serve as a basis for the development of an automated
system for disease detection and classification in real-time. Overall, the proposed
approach offers a promising solution for accurate and efficient detection and clas-
sification of turmeric leaf diseases using deep learning, compared to the existing
systems.

Keywords:
VGG (Visual Geometry Group), Keras, Google Colab, CNN (Convolutional Neural Network)

LIST OF FIGURES

4.1 CNN Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 13


4.2 Data Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.5 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

5.1 Input Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


5.2 Output Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3 Unit testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.4 Integration testing . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.5 System testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.6 Test Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

6.1 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

9.1 Poster Presentation . . . . . . . . . . . . . . . . . . . . . . . . . 34

LIST OF ACRONYMS AND
ABBREVIATIONS

CNN Convolutional Neural Network


GAN Generative Adversarial Network
GOCOLAB Google Colab
IPYNB IPython Notebook
MSVM Multicategory Support Vector Machine
SSD Single Shot Detector
SVM Support Vector Machine
R-CNN Region-based Convolutional Neural Network
R-FCN Region-based Fully Convolutional Network
RGB Red, Green and Blue
VGG Visual Geometry Group
YOLO You Only Look Once

TABLE OF CONTENTS

Page.No

ABSTRACT v

LIST OF FIGURES vi

LIST OF ACRONYMS AND ABBREVIATIONS vii

1 INTRODUCTION 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Aim of the Project . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Project Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Scope of the Project . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 LITERATURE REVIEW 3

3 PROJECT DESCRIPTION 9
3.1 Existing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Proposed System . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.3 Feasibility Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3.1 Economic Feasibility . . . . . . . . . . . . . . . . . . . . . 10
3.3.2 Technical Feasibility . . . . . . . . . . . . . . . . . . . . . 10
3.3.3 Social Feasibility . . . . . . . . . . . . . . . . . . . . . . . 11
3.4 System Specification . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.4.1 Hardware Specification . . . . . . . . . . . . . . . . . . . . 11
3.4.2 Software Specification . . . . . . . . . . . . . . . . . . . . 11
3.4.3 Standards and Policies . . . . . . . . . . . . . . . . . . . . 11

4 METHODOLOGY 13
4.1 CNN Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2 Design Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2.1 Data Flow Diagram . . . . . . . . . . . . . . . . . . . . . . 14
4.2.2 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . 15
4.2.3 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . 16
4.2.4 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Algorithm & Pseudo Code . . . . . . . . . . . . . . . . . . . . . . 18
4.3.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.2 Pseudo Code . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 Module Description . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.1 Image Acquisition . . . . . . . . . . . . . . . . . . . . . . 19
4.4.2 Image Pre-processing . . . . . . . . . . . . . . . . . . . . . 19
4.4.3 Image Segmentation . . . . . . . . . . . . . . . . . . . . . 20
4.5 Steps to execute/run/implement the project . . . . . . . . . . . . . . 20
4.5.1 Upload the IPYNB code in the GOCOLAB . . . . . . . . . 20
4.5.2 Create and upload the dataset . . . . . . . . . . . . . . . . 20
4.5.3 Run the code in their respective fields . . . . . . . . . . . . 21

5 IMPLEMENTATION AND TESTING 22


5.1 Input and Output . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.1.1 Input Design . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.1.2 Output Design . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3 Types of Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.3.1 Unit testing . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.3.2 Integration testing . . . . . . . . . . . . . . . . . . . . . . 25
5.3.3 System testing . . . . . . . . . . . . . . . . . . . . . . . . 26
5.3.4 Test Result . . . . . . . . . . . . . . . . . . . . . . . . . . 27

6 RESULTS AND DISCUSSIONS 28


6.1 Efficiency of the Proposed System . . . . . . . . . . . . . . . . . . 28
6.2 VGG16 vs VGG19 Accuracy and Loss Comparison . . . . . . . . . 28
6.3 Sample Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

7 CONCLUSION AND FUTURE ENHANCEMENTS 31


7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.2 Future Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 31

8 PLAGIARISM REPORT 32
9 SOURCE CODE & POSTER PRESENTATION 33
9.1 Source Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9.2 Poster Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . 34

References 35
Chapter 1

INTRODUCTION

1.1 Introduction

In India, agriculture is a major source of income. The agricultural sector employs
the majority of the country's people, either directly or indirectly. As a result,
producing high-quality agricultural yields is essential for the country's continued
economic progress. Farmers choose the proper crops by monitoring and maintaining
the necessary temperature, light, and humidity in order to generate harvests of
higher quality and productivity. Furthermore, due to population expansion, weather
fluctuations, and political instability, the agriculture business has begun to explore
new ways to enhance food production. Farmers are still grappling with issues such as
early detection of plant diseases; because observing this sort of disease on a plant's
leaf with the naked eye is not always possible, an automated expert system that can
help detect the disease in a timely manner would be very valuable. Advances in
technology, particularly the use of image processing in conjunction with machine
learning techniques, will aid farmers in detecting plant disease in its early stages.

Turmeric is the dried rhizome of Curcuma longa, a perennial herbaceous plant
in the Zingiberaceae family. Turmeric contains up to 3 percent curcumin, its most
physiologically active phytochemical component, and it is for this high curcumin
concentration, together with its well-known medicinal properties, that it is extracted
and studied. Indian turmeric is regarded as the finest in the world. The creation of
an automated system for turmeric disease classification is motivated by the goal of
integrating Information and Communications Technology (ICT) with the agriculture
sector. Leaf spot, leaf blotch, and rhizome rot are the three diseases that damage
turmeric leaves. Each of these diseases has its own cause and symptoms, which are
listed below. Turmeric leaf spot is the most common disease that affects the plant
and has become a key stumbling block to successful turmeric cultivation; its symptom
is brown dots of varied sizes emerging on the upper surface of young leaves, and it
is caused by the fungus Colletotrichum capsici. Small, oval, rectangular or irregular
brown spots that emerge on either side of the leaves and quickly turn dirty yellow or
dark brown are the symptom of leaf blotch disease, whose culprit is the fungus
Taphrina maculans. Rhizome rot is caused by the fungus Pythium graminicolum.
Turmeric plant byproducts are utilised in food preparation and health care on a
regular basis. The most frequent turmeric plant diseases are leaf spot and leaf
blotch. VGG19 is a CNN (Convolutional Neural Network) architecture used for
diseased-leaf image classification and detection.

1.2 Aim of the Project

The aim of the project is to detect and classify the diseases of the turmeric plant
through image processing using a CNN architecture. The main objective is to identify
diseases in the turmeric plant accurately.

1.3 Project Domain

The timely identification and early prevention of crop diseases are essential for
improving production. In this paper, CNN models are implemented to identify and
diagnose diseases in plants from their leaves, since CNNs have achieved impressive
results in the field of machine vision. Standard CNN models require a large number
of parameters and a higher computation cost. In this paper, standard convolution is
replaced with depthwise separable convolution, which reduces the parameter count and
the computation cost. The implemented models were trained on an open dataset
consisting of 14 different plant species with 38 different categorical disease classes
and healthy plant leaves. To evaluate the performance of these models, various
parameters such as batch size, dropout and different numbers of epochs were
incorporated.

1.4 Scope of the Project

The scope of the project is to obtain accurate values for the disease using the
dataset employed and to find what type of disease the leaf has.
Chapter 2

LITERATURE REVIEW

Aravindhan Venkataramanan et al. [1] address the problem of detecting and clas-


sifying plant diseases using deep neural networks (DNNs). This begins with an in-
troduction to the importance of agriculture and the impact of plant diseases on crop
yield and food security. Then it highlights the traditional methods of disease detec-
tion and classification, which are time-consuming, expensive, and require special-
ized expertise. Next, the paper introduces the concept of using DNNs for plant disease
detection and classification, which can significantly improve the efficiency and ac-
curacy of the process. It provides a comprehensive review of the existing literature
on plant disease detection using DNNs, which includes the use of convolutional neu-
ral networks (CNNs), recurrent neural networks (RNNs), and hybrid networks. It
also highlights the advantages and limitations of each approach and provides a com-
parative analysis of their performance. This then presents the proposed method for
plant disease detection and classification using a CNN-based architecture. The ar-
chitecture consists of multiple convolutional layers followed by max-pooling layers
and fully connected layers. This also describes the preprocessing steps involved in
preparing the input images for the CNN. The proposed method evaluates the perfor-
mance of the proposed method using two publicly available datasets, which include
images of various plant diseases. The results demonstrate that the proposed method
outperforms the existing methods in terms of accuracy, precision, and recall. This is
concluded by highlighting the potential applications of the proposed method in agri-
culture and the need for further research to improve the efficiency and effectiveness
of plant disease detection and classification.

F. Jakjoud et al. [2] address the use of deep learning algorithms for the detection
of plant diseases. The proposed method begins with an introduction to the impor-
tance of plant disease detection in agriculture and the challenges associated with tra-
ditional methods. It highlights the potential of deep learning algorithms to improve
the efficiency and accuracy of disease detection. It also provides a comprehensive
review of the existing literature on plant disease detection using deep learning algo-
rithms, which includes the use of convolutional neural networks (CNNs) and transfer
learning. The proposed method discusses the advantages and limitations of each ap-
proach and provides a comparative analysis of their performance. Then this presents
a proposed method for plant disease detection using a CNN-based architecture. The
architecture consists of multiple convolutional layers followed by max-pooling lay-
ers and fully connected layers. The proposed method evaluates the performance of
the proposed method using a publicly available dataset, which includes images of
various plant diseases.

The method proposed by G. Rama Mohan Reddy et al. [3] begins with an introduction


to the importance of plant disease detection and its impact on agriculture. This
method highlights the traditional methods of disease detection, which are manual,
time-consuming, and require specialized knowledge. This method then introduces
the concept of using deep learning techniques, particularly CNNs, for automated
disease detection. The proposed method involves collecting plant leaf images us-
ing a Raspberry Pi camera and pre-processing the images to enhance their quality.
Then, a CNN model is trained using the pre-processed images to classify the images
into healthy or diseased categories. The trained model is deployed on the Raspberry
Pi to enable real-time disease detection. They evaluate the performance of the pro-
posed method using several publicly available datasets, including the Plant Village
dataset. The results demonstrate that the proposed method achieves high accuracy
and outperforms the existing methods in terms of disease detection. This concludes
by highlighting the potential applications of the proposed method in precision agri-
culture and the need for further research to improve the efficiency and effectiveness
of disease detection using deep learning techniques and Raspberry Pi.

Hardikkumar S. Jayswal et al. [4] address the problem of detecting and classifying plant
leaf diseases using conventional machine learning and deep learning techniques.
The proposed method begins with an introduction to the importance of plant leaf
disease detection and classification in agriculture and the impact of these diseases on
crop yield and food security. It highlights the traditional methods of disease detec-
tion, which are time consuming, expensive, and require specialized expertise. The
proposed system then introduces the concept of using machine learning and deep
learning for plant leaf disease detection and classification, which can significantly
improve the efficiency and accuracy of the process. It provides a comprehensive
review of the existing literature on plant leaf disease detection and classification us-
ing machine learning and deep learning techniques, which includes the use of various
classifiers such as k-nearest neighbours (KNN), support vector machines (SVM), de-
cision trees, random forests, and convolutional neural networks (CNNs). It highlights
the advantages and limitations of each approach and provides a comparative analy-
sis of their performance. The proposed system then presents the proposed method
for plant leaf disease detection and classification using both conventional machine
learning and deep learning techniques. The conventional machine learning approach
involves feature extraction and classification using KNN, SVM, and decision trees,
while the deep learning approach involves training a CNN-based architecture. The
system also describes the preprocessing steps involved in preparing the input images
for both approaches. The proposed system evaluates the performance of the pro-
posed method using a publicly available dataset, which includes images of various
plant leaf diseases. The results demonstrate that the deep learning approach outper-
forms the conventional machine learning approaches in terms of accuracy, precision,
and recall.

Marko Arsenovic et al. [5] focus on the use of deep neural networks for the
recognition and classification of plant diseases using leaf images. The proposed
system begins with an introduction to the importance of plant disease recognition
and classification in agriculture and the impact of these diseases on crop yield and
food security. It highlights the traditional methods of disease detection, which are
time consuming, expensive, and require specialized expertise. The system then in-
troduces the concept of using deep neural networks for plant disease recognition and
classification, which can significantly improve the efficiency and accuracy of the pro-
cess. It provides a comprehensive review of the existing literature on plant disease
recognition and classification using deep neural networks, which includes the use of
various architectures such as Convolutional Neural Networks (CNNs), Deep Belief
Networks (DBNs), and Recurrent Neural Networks (RNNs). The proposed system
highlights the advantages and limitations of each approach and provides a compar-
ative analysis of their performance. It then presents the proposed method for plant
disease recognition and classification using a CNN-based architecture. This method
also describes the pre-processing steps involved in preparing the input images for the
CNN. The proposed system evaluates the performance of the proposed method us-

5
ing a publicly available dataset, which includes images of various plant diseases. The
system concludes by highlighting the potential applications of the proposed method
in agriculture and the need for further research to improve the efficiency and effec-
tiveness of plant disease recognition and classification using deep neural networks.

Kavita Krishnat Patil [6] proposes a method for the automated detection
of leaf diseases using convolutional neural networks (CNNs) and image processing
techniques. The proposed method begins with an introduction to the importance of
plant disease detection and its impact on agriculture. It highlights the traditional
methods of disease detection, which are manual, time-consuming, and prone to er-
rors. The proposed method then introduces the concept of using deep learning tech-
niques, particularly CNNs, for automated disease detection. This provides a com-
prehensive review of the existing literature on plant disease detection using CNNs
and image processing techniques. The method highlights the advantages of using
CNNs, including their ability to automatically learn and extract features from im-
ages, and their high accuracy in disease detection. The proposed method involves
preprocessing the images to enhance their contrast and remove noise. Then, a CNN
model is trained using the preprocessed images to classify the images into healthy
or diseased categories. The performance of the proposed method is evaluated using
two datasets, including the PlantVillage dataset. The results demonstrate that the
proposed method achieves high accuracy and outperforms the existing methods in
terms of disease detection. The proposed method also conducts a sensitivity analysis
to evaluate the robustness of the proposed method to changes in the input features.
It is concluded by highlighting the potential applications of the proposed method in
precision agriculture and the need for further research to improve the efficiency and
effectiveness of disease detection using deep learning techniques.

The method proposed by Murk Chohan et al. [7] focuses on the use of deep neural net-
works for the detection of plant diseases using leaf images. This begins with an
introduction to the importance of plant disease detection in agriculture and the im-
pact of these diseases on crop yield and food security. The method highlights the
traditional methods of disease detection, which are time-consuming, expensive, and
require specialized expertise. This method then introduces the concept of using deep
neural networks for plant disease detection, which can significantly improve the effi-
ciency and accuracy of the process. It provides a comprehensive review of the exist-
ing literature on plant disease detection using deep neural networks, which includes
the use of various architectures such as Convolutional Neural Networks (CNNs),
Deep Belief Networks (DBNs), and Autoencoders. It highlights the advantages and
limitations of each approach and provides a comparative analysis of their perfor-
mance. It then presents the proposed method for plant disease detection using a
CNN-based architecture. The architecture consists of multiple convolutional layers
followed by max-pooling layers and fully connected layers. This method evaluates
the performance of the proposed method using a publicly available dataset, which
includes images of various plant diseases. The proposed method concludes by high-
lighting the potential applications of the proposed method in agriculture and the need
for further research to improve the efficiency and effectiveness of plant disease de-
tection using deep neural networks.

The method proposed by S. Sreeja [8] focuses on the use of image processing tech-
niques for the automated detection of turmeric plant diseases. This method begins
with an introduction to the importance of turmeric as a medicinal plant and the im-
pact of leaf diseases on its growth and yield. It highlights the traditional methods of
disease diagnosis, which are time-consuming, labour-intensive, and require special-
ized expertise. It then introduces the concept of using image processing techniques
for disease detection, which can significantly improve the efficiency and accuracy of
the process. It provides a comprehensive review of the existing literature on image
processing techniques for disease detection, which includes the use of various meth-
ods such as segmentation, feature extraction, and classification. The method then
presents the proposed method for the detection of turmeric plant diseases using a
combination of image processing techniques, including colour normalization, image
enhancement, segmentation, and feature extraction. The proposed method evaluates
the performance of the proposed method using a dataset of turmeric leaf images,
which includes healthy leaves and leaves infected with various diseases. It also per-
forms a sensitivity analysis to evaluate the robustness of the proposed method to
changes in the input features. This method concludes by highlighting the potential
applications of the proposed method in the agriculture industry and the need for fur-
ther research to improve the efficiency and effectiveness of disease detection using
image processing techniques.

The system proposed by Srdjan Sladojevic et al. [9] focuses on the use of deep neural net-
works for the recognition of plant diseases by leaf image classification. It begins
with an introduction to the importance of plant disease detection and the impact of
these diseases on crop yield and food security. It highlights the traditional methods
of disease diagnosis, which are time-consuming and require specialized expertise.
It then introduces the concept of using deep neural networks for disease detection
and classification, which can significantly improve the efficiency and accuracy of the
process. It provides a comprehensive review of the existing literature on deep neural
networks for disease detection and classification, which includes the use of various
methods such as convolutional neural networks (CNNs), autoencoders, and recur-
rent neural networks (RNNs). It highlights the advantages and limitations of each
approach and provides a comparative analysis of their performance. This method
then presents the proposed method for the recognition of plant diseases by leaf im-
age classification using a CNN-based approach. The proposed system evaluates the
performance of the proposed method using a dataset of leaf images, which includes
healthy leaves and leaves infected with various diseases. It also performs a sensi-
tivity analysis to evaluate the robustness of the proposed method to changes in the
input features. This method concludes by highlighting the potential applications of
deep neural networks in agriculture and the need for further research to improve the
efficiency and effectiveness of disease detection and classification using these tech-
niques.

Chapter 3

PROJECT DESCRIPTION

3.1 Existing System

Turmeric is plagued by numerous diseases throughout its growth cycle. Failing to
detect these diseases at an early stage can result in a loss of production and even
crop failure. The foremost requirement is to accurately identify diseases of the
turmeric plant; rather than using multiple steps such as image pre-processing, feature
extraction and classification as in the standard methodology, a single-phase detection
model is adopted to simplify the recognition of turmeric plant leaf diseases. To
enhance the detection accuracy for turmeric diseases, a deep learning-based technique
known as the improved YOLOv3-Tiny model has been proposed. To improve detection
accuracy over YOLOv3-Tiny, this method adds residual network structures to specific
layers of the convolutional neural network.
The results show that detection accuracy is improved in the proposed model compared
to the YOLOv3-Tiny model, and it allows anyone to perform quick and correct turmeric
leaf disease detection. In this work, major turmeric diseases such as leaf spot, leaf
blotch, and rhizome rot were measured. Training and testing pictures were captured
during both day and night and compared with various YOLO methods and Faster R-CNN
with the VGG16 model. Moreover, the experimental results show that the CycleGAN
augmentation method applied to the turmeric leaf dataset helps considerably in
improving detection accuracy for smaller datasets, and that the proposed model has
the advantage of high detection accuracy and fast recognition speed compared with
existing traditional models.

3.2 Proposed System

Compared to alternative models such as YOLOv5 and all of the other models discussed
above, the model we propose gives better and more accurate results. The proposed
model works on the VGG19 and VGG16 architectures. Google Colab is used as the
environment, and the dataset is supplied as images (around thirty pictures); the
system classifies the disease as leaf spot or leaf blotch. Fifteen of the pictures
contain leaf spot and the remaining pictures contain leaf blotch.

3.3 Feasibility Study

3.3.1 Economic Feasibility

At present, the standard technique of visual inspection by humans makes it difficult
to characterize plant diseases reliably. Advances in computer vision models provide
fast, normalized, and accurate answers to these issues. Classifiers can be shipped as
attachments during deployment; all that is needed is an internet connection and a
camera-equipped mobile phone. The well-known commercial applications "iNaturalist"
and "PlantSnap" show how this is attainable: both apps excel at sharing skills with
customers as well as at building intuitive online social communities. In recent years,
deep learning has led to strong performance in various fields such as image
recognition, speech recognition, and natural language processing. The use of the
Convolutional Neural Network for the problem of plant disease detection has produced
excellent results, and CNN is recognized as the best technique for visual recognition.
We consider neural designs such as Faster Region-Based Convolutional Neural Networks
(Faster R-CNN), Region-based Fully Convolutional Networks (R-FCN), and the Single
Shot Multibox Detector (SSD); each of these designs should be able to be combined
with any feature extractor depending on the application. Pre-processing of data is
extremely important for models to perform correctly. Several infections (viral or
fungal) can be hard to differentiate, as they often share overlapping symptoms.

3.3.2 Technical Feasibility

To prevent losses, smallholder farmers depend on a timely and correct crop disease
diagnosis. In this study, a pre-trained CNN was fine-tuned and the model was deployed
online; the final result was a plant disease detection app. This service is free,
simple to use and needs just a smartphone and an internet connection. Thus, the
user's needs as outlined in this paper are fulfilled. A thorough investigation
exposes the capabilities and limitations of the model. Overall, once validated in a
controlled setting, an accuracy of 97.2% is obtained. This achieved accuracy depends
on a number of factors, including the stage of the disease, the disease type,
background information and object composition; because of this, a set of user
guidelines would be needed for commercial use, to make sure the stated accuracy is
delivered.

3.3.3 Social Feasibility

Augmentation and transfer learning proved beneficial to the model in this case,
helping the CNN to generalize more reliably. While this improved the model's ability
to extract features, it was not enough once the model was presented with 'in field'
imaging: in that case, the classifier reached an accuracy of just 44%, underlining
the importance of diversifying the training dataset to incorporate other background
data, additional plant anatomy and varying stages of disease. Overall, this study is
conclusive in demonstrating how CNNs may be applied to empower small-holder farmers
in their fight against disease. In the future, work should be targeted at diversifying
training datasets and also at testing similar web applications in real-world
conditions. Without such developments, the struggle against plant disease will
continue.

3.4 System Specification

3.4.1 Hardware Specification

• System:64 bit OS
• x64 processor

3.4.2 Software Specification

• 4 GB RAM
• Better GPU(For performance)

3.4.3 Standards and Policies

Google Colab

Anyone who has used Jupyter notebooks previously can quickly learn to use Google
Colab. To be precise, Colab is a free Jupyter notebook environment that runs entirely
in the cloud. Most importantly, it does not need any setup, and the notebooks that you
create can be simultaneously edited by your team members, just the way you edit
documents in Google Docs. Colab supports many standard machine learning libraries
which can be easily loaded in your notebook (a short usage sketch is given after the
list below).
As a programmer, the following can be performed using Google Colab:
• Write and execute code in Python
• Document the code that supports mathematical equations
• Create/Upload/Share notebooks
• Import/Save notebooks from/to Google Drive
• Import/Publish notebooks from GitHub
• Import external datasets e.g. from Kaggle
• Integrate PyTorch, TensorFlow, Keras, OpenCV
• Free Cloud service with free GPU
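
For example, a minimal sketch of the lines typically run at the top of such a notebook
is given below; the Drive mount point is Colab's standard path, and the GPU check
assumes TensorFlow is the backend in use.

from google.colab import drive
import tensorflow as tf

# Mount Google Drive so a dataset stored there can be read by the notebook.
drive.mount('/content/drive')

# Confirm that the free GPU runtime is available.
print(tf.config.list_physical_devices('GPU'))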
VGG (Visual Geometry Group)
AlexNet came out in 2012 and improved on conventional convolutional neural networks,
so VGG is perceived as a successor of AlexNet. It was, however, created by a different
group, the Visual Geometry Group at Oxford, hence the name VGG. It carries over some
concepts from its predecessors, improves on them, and uses deeper convolutional layers
to boost accuracy. Below we explore what VGG19 is, compare it with some of the other
versions of the VGG design, and also look at some helpful and practical applications
of the VGG architecture.

Chapter 4

METHODOLOGY

4.1 CNN Architecture

Figure 4.1: CNN Architecture

Figure 4.1 displays the architecture of a Convolutional Neural Network (CNN), a type
of neural network widely used in image processing and computer vision applications.
The architecture of a CNN typically consists of several layers, including
convolutional layers, pooling layers, and fully connected layers.
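
As an illustration, a minimal Keras sketch of such a stacked architecture is given
below; the layer sizes and the three-class softmax head are assumptions chosen for
illustration, not the exact configuration used in this project.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Two convolution/pooling stages, then fully connected layers ending in a
# 3-class softmax (leaf spot, leaf blotch, rhizome rot).
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(256, 256, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])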

4.2 Design Phase

4.2.1 Data Flow Diagram

Figure 4.2: Data Flow Diagram

Figure 4.2 displays the data flow diagram. The data is supplied in the form of images,
which serve as the input. The model is trained on this image dataset, with the VGG19
architecture guiding the learning. The trained model then receives the test data,
determines what type of disease the test image shows, and returns the result along
with the accuracy values.

4.2.2 Use Case Diagram

Figure 4.3: Use Case Diagram

Figure 4.3 displays the use case diagram. When a new input image is given, the module
first extracts the leaf features. The image then passes through the CNN model, which
compares the features with the already trained dataset. It then goes through dense
CNN layers, where the leaf features are extracted separately. The module then predicts
whether the plant leaf is affected by any disease, reporting the output as one of the
38 predetermined, trained classes. The output is given in textual format.

4.2.3 Sequence Diagram

Figure 4.4: Sequence Diagram

Figure 4.4 displays the sequence diagram. It presents the exact sequence in which the
data is handled, giving a clear idea of how the data passes from one step to the next
and which steps are linked together.

4.2.4 Activity Diagram

Figure 4.5: Activity Diagram

Figure 4.5 displays the activity diagram, which represents how the activity flows
through the model. When the user interacts with the VGG19 model, the user uploads the
image of the disease-affected plant; the image is processed through multiple layers,
and the disease is then shown along with an accuracy percentage.

4.3 Algorithm & Pseudo Code

4.3.1 Algorithm

• Step 1: Open Google Colab and create a new notebook.
• Step 2: Connect Google Drive to Google Colab to import the dataset.
• Step 3: Install the required libraries needed for the program.
• Step 4: Import the required libraries such as Keras, pandas, matplotlib and ImageDataGenerator.
• Step 5: In the sample data, create a train data folder.
• Step 6: Create two folders in the train data and name them LEAF SPOT and LEAF BLOTCH.
• Step 7: Upload the leaf blotch dataset and the leaf spot dataset into the respective folders
in the train data.
• Step 8: Upload a test image into the train data.
• Step 9: Name the test image TEST image.
• Step 10: Run the respective code in the respective fields.
• Step 11: Plot the training and validation accuracy values and loss values (a training-and-plotting
sketch is given after this list).
• Step 12: Based on the accuracy, test the input image with the model that gives the higher
accuracy value.
• Step 13: The model will predict the output as LEAF SPOT or LEAF BLOTCH.
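
A minimal sketch of Steps 10 and 11 (training the model and plotting the curves) is
given below. The names train and val are assumed to be the data generators built in
the pseudo code of the next section, model is assumed to be the compiled network, and
the epoch count is only illustrative.

import matplotlib.pyplot as plt

# Train on the generators created earlier (assumed names).
history = model.fit(train, validation_data=val, epochs=20)

# Step 11: plot training/validation accuracy and loss across the epochs
# (on older Keras versions the keys may be 'acc'/'val_acc').
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()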

4.3.2 Pseudo Code

Step 1: Import the necessary libraries: numpy, pandas, matplotlib, os, keras, and the
relevant modules from keras.preprocessing.image and keras.applications.
Step 2: Set up image data generators for training and validation using the
ImageDataGenerator class, with the zoom range, shear range, horizontal flip, and
preprocessing function specified.
    train_datagen = ImageDataGenerator(zoom_range=0.5, shear_range=0.3,
        horizontal_flip=True, preprocessing_function=preprocess_input)
Step 3: Create a training data generator using the flow_from_directory function, with
the target directory, target size, and batch size specified.
    train = train_datagen.flow_from_directory(directory="/content/sample_data/train_data",
        target_size=(256, 256), batch_size=25)
Step 4: Create a validation data generator using the same flow_from_directory function,
with the same target directory and target size (the validation generator val_datagen
carries only the preprocessing function, without augmentation).
    val = val_datagen.flow_from_directory(directory="/content/sample_data/train_data",
        target_size=(256, 256), batch_size=25)
Step 5: Obtain the first batch of images and their labels using the next() method of
the training data generator.
    t_img, label = train.next()
Step 6: Define a function called plotimage that takes in an array of images and their
corresponding labels and plots them using matplotlib.
    def plotimage(img_arr, label):
Step 7: Call the plotimage function with the first 10 images and labels from the
training set.
    plotimage(t_img[:10], label[:10])
Step 8: The program outputs a plot of the first 10 images and their labels. The
finished model can later be evaluated with
    test_loss, test_acc = model.evaluate(test_generator)

4.4 Module Description

4.4.1 Image Acquisition

This is the process of acquiring photographs, either by visiting the location with a
camera or by using other available resources such as image databases or online
repositories. The pictures are captured in three colour channels: Red, Green, and
Blue (RGB). A colour transformation structure is generated and a device-independent
colour space transformation is applied.
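
A minimal sketch of this step, assuming OpenCV is used for the colour conversion (the
file name is only a placeholder), could look like:

import cv2

# An acquired leaf photograph (placeholder path); OpenCV loads it as BGR.
img_bgr = cv2.imread('leaf_sample.jpg')

# Convert to RGB and to a device-independent colour space (CIE L*a*b*).
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)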

4.4.2 Image Pre-processing

In this approach, leaf image clipping is used to remove the superfluous portions of
the picture, and the recovered plant leaf image is then passed to an automatic data
processing system. Resizing the image, noise reduction, augmentation and smoothing of
the image are all necessary phases of pre-processing. Image classification is the
process of assigning images to different classes based on their features. A feature
may be the edges in a picture, the pixel intensity, the change in pixel values, and
many more. Images may belong to the same subject yet vary across features such as the
colour of the image, the position of the leaf, the background colour, and many others;
the major challenge when working with images is the uncertainty of these features. To
the human eye the images all look similar, but once converted to data, a particular
pattern across these pictures is not easily found.
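
The phases named above can be sketched with OpenCV as follows; the parameter values
and file name are assumptions for illustration only.

import cv2

img = cv2.imread('leaf_sample.jpg')          # placeholder file name
img = cv2.resize(img, (256, 256))            # resize to the network input size
img = cv2.fastNlMeansDenoisingColored(img)   # noise reduction
img = cv2.GaussianBlur(img, (3, 3), 0)       # smoothing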

4.4.3 Image Segmentation

Image segmentation is a technique for dividing a picture into relevant parts based on
shared traits. Segmentation can be performed using a variety of approaches, such as
boundary and spot detection algorithms, region detection algorithms, Otsu's method,
thresholding, edge-based strategies, and clustering approaches such as k-means.
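
For instance, a minimal Otsu-thresholding sketch with OpenCV (one of the approaches
named above; the file name is a placeholder) is:

import cv2

img = cv2.imread('leaf_sample.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method chooses the threshold automatically; the resulting mask
# separates leaf/lesion regions from the background for further analysis.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)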

4.5 Steps to execute/run/implement the project

4.5.1 Upload the IPYNB code in the GOCOLAB

Once Google Colab is open, upload the IPYNB code file. Once the upload is complete,
the code that drives the model can be seen in the notebook.

4.5.2 Create and upload the dataset

Create a new folder inside the sample data and name it train data. Create two new
folders inside train data and name them LEAF SPOT and LEAF BLOTCH, upload images of
leaf spot and leaf blotch into the respective folders, then upload a new test image
into train data and name it test image.

4.5.3 Run the code in their respective fields

After uploading the training dataset, run the code in the respective fields. Certain
fields in the code produce graphical output, and in other cases the output is the
accuracy of the images. In one code cell, the path of the test image must be entered
so that the model classifies the image and reports what type of disease has affected
the leaf.

Chapter 5

IMPLEMENTATION AND TESTING

5.1 Input and Output

5.1.1 Input Design

Figure 5.1: Input Design

Figure 5.1 displays the input images that are used to train the model. The given
images show three types of diseases: leaf spot, leaf blotch and rhizome rot. Leaf
spot disease is caused by the fungus Colletotrichum capsici and is characterized by
small, circular, brown spots on the leaves; the spots may coalesce and form larger
patches, causing defoliation and reduced yield. Turmeric leaf blotch is caused by the
fungus Taphrina maculans and is a serious foliar disease of turmeric. Rhizome rot is
caused by the fungus Pythium aphanidermatum and is characterized by wilting,
yellowing, and decay of the leaves; the disease can affect the rhizomes as well,
leading to stunted growth and reduced yield.

5.1.2 Output Design

Figure 5.2: Output Design

Figure 5.2 displays the output. The output shows what type of disease has been
detected; it uses the trained dataset and produces a result based on the given input.

5.2 Testing

Testing is a process of evaluating a software system or application to ensure that


it meets its intended requirements, functions correctly, and performs as expected in
different scenarios and conditions. The goal of testing is to identify any defects,
errors, or issues that may affect the software’s quality, reliability, and performance
and to ensure that it meets the user’s expectations. Testing can be performed at
various stages of the software development life cycle, such as unit testing, integration
testing, system testing, and acceptance testing. Each testing stage focuses on specific
aspects of the software, such as functionality, performance, security, and usability,
and uses different techniques and methods to evaluate the software’s quality.

5.3 Types of Testing

5.3.1 Unit testing

Input

Figure 5.3: Unit testing

Figure 5.3 displays the unit testing result of our approach. Unit testing is a
software testing method in which individual units or components of software are
tested in isolation from the rest of the system to ensure that each unit is working as
expected. A unit can be a function, method, or class. The goal of unit testing is to
identify and fix defects early in the development cycle before the code is integrated
into a larger system. Unit tests are typically automated and are run frequently during
development to catch defects as soon as possible.

Test result

5.3.2 Integration testing

Input

Figure 5.4: Integration testing

Figure 5.4 displays the integration testing. Integration testing is a software test-


ing method that tests the interactions between different modules or components of a
system to ensure that they work together as expected. The goal of integration testing
is to verify that the individual components of a system can work together and com-
municate with each other correctly. Integration testing can be done at different levels
of the system, such as module, subsystem, or system level.

Test result

5.3.3 System testing

Input

Figure 5.5: System testing

Figure 5.5 displays the system testing. System testing is a software testing


method that tests the entire system as a whole, including all its components and
their interactions. The goal of system testing is to ensure that the system meets all
the functional and non-functional requirements.

5.3.4 Test Result

Figure 5.6: Test Image

Figure 5.6 displays the test image. The test image shows how the image is tested and
how the disease is identified and classified. A test image
is a digital image used in image processing and computer vision applications for
testing and evaluating algorithms, software, and hardware systems. Test images are
designed to contain specific features or characteristics that allow the performance of
the system to be measured and compared to a standard or benchmark.

Chapter 6

RESULTS AND DISCUSSIONS

6.1 Efficiency of the Proposed System

The category for brand new data instances is predicted using our finalised
classification model in Keras through the prediction function, which takes the path
of an image. Note that this function is only available on Sequential models, not on
models developed with the functional API. The output line is of the form "The disease
is ...". The trained model analyses the input against the pretrained data and makes a
prediction; an accuracy of 99% is achieved with the given input. With noisy data the
model may not be as accurate, but the accuracy stays around 96%.
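
A minimal sketch of this prediction step is given below; it assumes the trained Keras
model (named model) and the training generator (named train) from the source code
chapter, and the test image path is only a placeholder.

import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg19 import preprocess_input

# Load and preprocess the test image exactly as during training.
img = load_img('/content/sample_data/train_data/test_image.jpg', target_size=(256, 256))
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))

# Predict the class and map it back to the folder names used for training.
classes = list(train.class_indices.keys())
pred = classes[np.argmax(model.predict(x))]
print('The disease is', pred)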

6.2 VGG16 vs VGG19 Accuracy and Loss Comparison

Existing system: (VGG16)


In the existing system, the category for new data instances is predicted using the
finalized classification model in Keras through the prediction(path) function. Note
that this function is only offered on Sequential models, not on models developed with
the functional API. The output line is of the form "The disease is ...". The trained
VGG16 model analyses the input against the pretrained data and makes a prediction. An
accuracy of 94.59% is achieved with the input. With noisy data the model may not be
as accurate, but the accuracy stays between 93% and 94%.

Proposed system: (VGG19)


The trained VGG19 model analyses the input against the pretrained data and makes a
prediction. An accuracy of 97.22% is achieved with the input. With noisy data the
model may not be as accurate, but the accuracy stays between 96% and 97%. The base
model had an accuracy of 0.9614 with 0.81 loss, whereas the constructed VGG16 model
reached about 94% accuracy with low loss. Here the VGG19 model has the upper hand
with about 97% accuracy and steadier results, as we observe from the graphs.
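
A minimal sketch of how such a VGG19-based classifier can be assembled through
transfer learning is shown below; the frozen base, the head layers and the
three-class output are assumptions made for illustration rather than the exact
configuration used here.

from keras.applications.vgg19 import VGG19
from keras.models import Model
from keras.layers import Flatten, Dense

# ImageNet-pretrained VGG19 convolutional base, kept frozen.
base = VGG19(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
for layer in base.layers:
    layer.trainable = False

# Small classification head for the turmeric disease classes.
x = Flatten()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(3, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])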

6.3 Sample Code

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import keras
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.applications.vgg19 import VGG19, preprocess_input, decode_predictions

train_datagen = ImageDataGenerator(zoom_range=0.5, shear_range=0.3, horizontal_flip=True,
                                   preprocessing_function=preprocess_input)
val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

Output

Figure 6.1: Output

Figure 6.1 displays the output of the implemented code: a graph reporting the
accuracy of both VGG models.

Chapter 7

CONCLUSION AND FUTURE


ENHANCEMENTS

7.1 Conclusion

The annual loss of agricultural productivity is largely a result of severe plant
diseases. As a result, diagnosing diseases in plants at an early stage is vital for
averting such severe losses in the future. The monocotyledonous and dicotyledonous
plant families, in addition to the applicable method comprising the image processing
phases, were covered in this study. The most widely used classification techniques
for the identification and detection of diseases on plant leaves were evaluated, and
the most recent work was examined and illustrated. It can be said that, among the
techniques utilised in the preceding work, the deep learning concepts, and especially
the CNN approach, have achieved the highest accuracy.

7.2 Future Enhancements

In this study, a dataset was created for two critical diseases of the turmeric plant
using images acquired from field conditions, and some existing dataset for the crop
was also found. The images were transformed from the RGB colour space into different
colour spaces. With the created dataset, a pre-trained deep learning model, namely
VGG16, was used for training and validation. In addition, features from the different
layers of VGG16 were given to the MSVM for assessing the classification efficiency.
This study has proved the superiority of RGB field images, for which the
classification accuracy was highest for the two diseases. Surprisingly, it also
provided a competing accuracy in the case of images trained with VGG16. VGG19
performed better than VGG16, SVM and AlexNet; hence, VGG19 is proposed for this
project.
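
A minimal sketch of feeding VGG16 features to an SVM classifier, as described above,
is given below; scikit-learn's SVC stands in for the MSVM, the pooled last
convolutional block is the assumed feature layer, and the arrays are placeholders for
the real images and labels.

import numpy as np
from keras.applications.vgg16 import VGG16
from sklearn.svm import SVC

# VGG16 without its classifier head acts as a fixed feature extractor;
# global average pooling turns each image into a 512-dimensional vector.
extractor = VGG16(weights='imagenet', include_top=False, pooling='avg',
                  input_shape=(256, 256, 3))

# Placeholder arrays standing in for preprocessed images and disease labels.
X = np.random.rand(6, 256, 256, 3)
y = np.array([0, 0, 1, 1, 2, 2])

features = extractor.predict(X)
svm = SVC(kernel='linear')   # stand-in for the MSVM
svm.fit(features, y)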

Chapter 8

PLAGIARISM REPORT

Chapter 9

SOURCE CODE & POSTER


PRESENTATION

9.1 Source Code

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import keras
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.applications.vgg19 import VGG19, preprocess_input, decode_predictions

# Data generators: augmentation plus VGG19 preprocessing for training,
# preprocessing only for validation.
train_datagen = ImageDataGenerator(zoom_range=0.5, shear_range=0.3, horizontal_flip=True,
                                   preprocessing_function=preprocess_input)
val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train = train_datagen.flow_from_directory(directory="/content/sample_data/train_data",
                                          target_size=(256, 256), batch_size=25)
val = val_datagen.flow_from_directory(directory="/content/sample_data/train_data",
                                      target_size=(256, 256), batch_size=25)
# Found 36 images belonging to 3 classes.
# Found 36 images belonging to 3 classes.

# Take one batch from the training generator and display the first few images.
t_img, label = train.next()

def plotimage(img_arr, label):
    for im, l in zip(img_arr, label):
        plt.figure(figsize=(5, 5))
        plt.imshow(im)
        plt.show()

plotimage(t_img[:10], label[:10])
# Clipping input data to the valid range for imshow with RGB data
# ([0..1] for floats or [0..255] for integers).

9.2 Poster Presentation

Figure 9.1: Poster Presentation

References

[1] Aravindhan Venkataramanan, Deepak Kumar P Honakeri and Pooja Agarwal,


“Plant Disease Detection and Classification Using Deep Neural Networks”, In-
ternational Journal on Computer Science and Engineering (IJCSE), August 2019.
[2] F. JAKJOUD, A. HATIM and A. BOUAADDI, “Deep Learning application for
plant diseases detection”, ITEE’19, El Jadida, Morocco, November 2019.
[3] G. Rama Mohan Reddy, Nettam Sai Sumanth and N. Sai Preetham Kumar, ”Plant
Leaf Disease Detection Using CNN And Raspberry Pi”, IJASRET, Volume 5,
Issue 2, February 2020.
[4] Hardikkumar S. Jayswal and Jitendra P. Chaudhari, "Plant Leaf Disease Detec-
tion and Classification using Conventional Machine Learning and Deep Learn-
ing”, International Journal on Emerging Technologies 11(3): 1094-1102, January
2020.
[5] Marko Arsenovic, “Deep Neural Networks Based Recognition of Plant Diseases
by Leaf Image Classification”, Volume 2019, Article ID 3289801, June 2019.
[6] Kavita Krishnat Patil, "Leaf Disease Detection Using Image Processing by
CNN", IJIERT, Volume 8, Issue 8, August 2021.
[7] Murk Chohan, Adil Khan, Rozina Chohan, Saif Hassan Katpar and Muhammad
Saleem Mahar, "Plant Disease Detection using Deep Learning", International
Journal of Recent Technology and Engineering (IJRTE), Volume 9, Issue 1, May
2020.
[8] S. Sreeja, “ Automated detection of turmeric plant diseases using image process-
ing techniques”, IJCSE, 2020.
[9] Srdjan Sladojevic, Marko Arsenovic, Andras Anderla, Dubravko Culibrk, and
Darko Stefanovic, “Deep Neural Networks Based Recognition of Plant Diseases
by Leaf Image Classification”, Computational Intelligence and Neuroscience,
June 2020.
