Research

Download as pdf or txt
Download as pdf or txt
You are on page 1of 48

Department Of Agricultural Engineering

Power Engineering and Agricultural Machinery

Graduation Project Title:


Plant Diseases Detection Using Machine Vision Technology
Thanks, appreciation and gratitude to supervisor:

Prof.Dr. Rashad Dr. Noureldin


hegazy sharaby

Project Team Members:


Content:
1. Executive Summary……………………………………………. .5
2. Introduction……………………………………………………... 6
3. Review of literatur:…………………………………………… 7-9
3.1 Machine vision and artificial neural networks
3.2 Machine vision for plant disease recognition and management
3.3 Carriage, navigation and frame for machine vision
4. Materials and Methods……………………………………… 9-13
4.1 Material
4.1.1 Construction of Machine Vision platform

4.1.2. Power transmission and movement system


4.1.3. Green house and plant parameters

• Greenhouse construction
• Plant diseases and control

4.1.4. Spraying unit……………………………………………………... 17

4.1.5 NodeMCU ESP8266……………………………………………... 18-22

a. NodeMCU ESP8266
b. ESP32-CAM

c. LCD-I2C

d. Dual Relay
e. H-Bridge High-Power DC Motor BTS7960 43A

f. Relay

g. DHT11–Temperature and Humidity Sensor

4.2 Methods………………………………………………………………….23
4.2.1 Measuring Instrumentations……………………...………….23
4.2.2 Software……………………………………………………. 23-34
a. Programming languages
b. Libraries

c. Database management system (DBMS)

d. Programs

• solid works
• Fritzing
• Visual Studio Code
• The terminal
• Arduino IDE
• Control App
e. technology use Modeling

I. Machine vision (MV)


II. machine learning:
• Transfer learning (TL)
• Deep learning
III. Artificial neural networks (ANNs)
• convolutional neural network (CNN)
IV. Automation system

5. Background……………………………………………………….. 34
6. Research objective………………………………………...……… 35
7. Methodology………………………………...………………….…. 35
8. Experience………………………………………………………36-37
9. Results ……………...…….……………………………..………38-40
10. Discussion ……………………………………...…………..……. 41
11. Recommendations………………………….………………… 42-43
12. Conclusion……………………………………………………. 44-45
13. References…………………………………………………….. 46-48
1. Executive Summary
Pests have a significant impact on plants, reducing the quality of the product
and resulting in losses for farmers. Modern and environmentally friendly
technologies, such as computer vision and deep learning systems, have played
an effective role In early detection of pests In agricultural crops. In this study, a
set of Images were captured for both cucumber and tomato plants grown in a
greenhouse. The tomato leaf detection dataset contains 900 images of tomato
leaves with seven different types of diseases, while the cucumber plant disease
dataset contains 6900 images of cucumber leaves with nine different types of
diseases. The model used was the EfficientNetB7 as a baseline model. The
model was trained on the combined dataset using the Adam optimizer with a
learning rate of 0.0001 and a batch size of 64. The model achieved high
accuracy rates for individual crop types and diseases, with the highest accuracy
reaching 99.62% for the "healthy" class In the tomato dataset and 99.01% for
the "healthy" class in the cucumber dataset. However, developing more
specialized models for specific plant types may further improve the system's
accuracy.
2. Introduction

Agriculture is important in the preservation of humanity because it is the main


source of food supply on which all countries depend whether it is growing or
advanced. It is also an important factor in the development of any country. It is
the primary source of raw materials in many industries. Agriculture is also
critical for economic growth: In 2018, it accounted for 4% of global GDP, and
in some LDCs, it could account for more than 25% of GDP, as approved by the
World Bank.

Machine vision and artificial neural networks have brought significant


advancements in the field of disease identification and diagnosis. Among the
various neural network architectures, convolutional neural networks (CNNs)
have emerged as a powerful tool for image classification tasks. CNNs are
particularly effective for detecting patterns and features in images that are
relevant for disease identification. One of the latest CNN architectures that have
gained popularity in recent years is EfficientNet, which has shown superior
performance in image classification tasks compared to other CNN models.
In this project, we will be utilizing CNNs and EfficientNet to identify diseases
in plants and other organisms through machine vision. These models will be
trained using transfer learning techniques, which enable us to reuse pre-trained
models and adapt them to our specific problem domain. By fine-tuning these
models on our dataset, we can create a powerful disease identification system
that can accurately detect diseases in plants.
The integration of machine vision, deep learning, and ANNs has led to a
significant improvement in disease identification and diagnosis. The project will
showcase the effectiveness of these technologies in the field of agriculture,
which can have a significant impact on crop yield and food security.

6|Page
3. Review of literature

3.1 Machine vision and artificial neural networks


In the field of autonomous driving, machine vision plays an important role in perception
and decision-making. LiDAR, camera, and radar sensors are used to perceive the
environment around the vehicle. In a study by Chen et al. (2019), a deep neural
network was proposed to improve the accuracy of pedestrian detection using
camera images. The proposed method achieved state-of-the-art results on the
Caltech Pedestrian Detection benchmark dataset. Additionally, deep reinforcement
learning has been applied to decision-making in autonomous driving. In a study by
Shalev-Shwartz et al. (2016), a deep reinforcement learning algorithm was
proposed to learn driving policies from data. The proposed method achieved good
performance on a simulated driving task.

In the field of medical image analysis, machine vision has been applied to improve
the accuracy of diagnosis and treatment. In a recent study by Liu et al. (2021), a
deep learning method was proposed for the segmentation of liver tumors from
magnetic resonance images (MRI). The proposed method achieved high accuracy
in segmenting liver tumors from MRI scans. Additionally, machine vision has been
applied to detect and diagnose skin cancer. In a study by Esteva et al. (2017), a
deep learning algorithm was trained to classify skin lesions as benign or malignant.
The proposed method achieved high accuracy in classifying skin lesions from
clinical images. In the field of agriculture, machine vision has been applied to
improve crop yield and quality. In a recent study by Li et al. (2019), a machine
vision system was proposed for apple detection and counting. The proposed
method achieved high accuracy in detecting and counting apples from images.
Additionally, machine vision has been applied to detect and diagnose plant
diseases. In a study by Mohanty et al. (2016), a deep learning algorithm was
trained to diagnose plant diseases from images. The proposed method achieved
high accuracy in diagnosing plant diseases from images. In the field of security,
machine vision has been applied to improve the accuracy of face recognition and
object tracking. In a study by Zhu et al. (2020), a deep learning method was
proposed for face recognition under low-resolution and occlusion conditions. The
proposed method achieved high accuracy in face recognition under challenging
7|Page
conditions. Additionally, machine vision has been applied to detect and track
objects in videos. In a study by Huang et al. (2018), a deep learning method was
proposed for object tracking in videos. The proposed method achieved state-of-the-
art results on several benchmark datasets.

3.2 Machine vision for plant disease recognition and management


Machine vision has become a promising tool in the agricultural sector for plant
disease recognition and management. The use of computer vision technologies,
such as deep learning algorithms and neural networks, can aid in detecting plant
diseases in their early stages, thereby increasing crop yields and reducing the use
of harmful pesticides.

Recent studies have shown the potential of deep learning for image-based plant
disease detection. Mohanty et al. (2016) proposed a deep convolutional neural
network for the classification of plant diseases using images. Their model achieved
high accuracy rates in identifying different plant diseases, and they suggested that
their approach could be used for real-time monitoring of plant health. In another
study, Li et al. (2019) developed a fast and accurate machine vision system for
apple detection and counting. They utilized a deep learning-based algorithm that
combined a convolutional neural network with a region-based convolutional
network for accurate apple detection and counting. Their system achieved high
accuracy rates and outperformed traditional methods for apple detection.

Moreover, in a study by Liu et al. (2019), a pre-trained convolutional neural


network was used to detect wheat ears. The proposed method achieved high
detection accuracy rates and could be used for monitoring the growth of wheat
plants. One challenge in plant disease recognition using machine vision is the
variability in environmental conditions and image quality. To address this, Chen et
al. (2019) proposed a semantic segmentation method called Deep Lab that uses
deep convolutional neural networks, atrous convolution, and fully connected
CRFs. This method achieved state-of-the-art performance on several benchmark
datasets, including the PASCAL VOC 2012 segmentation dataset.

8|Page
3.3 Carriage, navigation and frame for machine vision
Autonomous navigation refers to the ability of a device or vehicle to move and
navigate without human intervention. Machine vision plays a critical role in
enabling autonomous navigation, as it allows devices to perceive and understand
their environment through visual information. Recent advancements in machine
vision, such as deep learning algorithms and neural networks, have significantly
improved the accuracy and efficiency of autonomous navigation systems.

One application of autonomous navigation and machine vision is in the field of


robotics. For example, in a study by Al-Kaff et al. (2020), a vision-based
autonomous navigation system was developed for a mobile robot using a
convolutional neural network. The system was able to accurately detect and avoid
obstacles in the robot's path and navigate to its destination.

In another study, Lurie et al. (2019) proposed a deep learning-based system for
real-time obstacle detection and avoidance in autonomous vehicles. Their system
used a convolutional neural network to process visual data and accurately detect
obstacles in real-time.

Moreover, in a study by Lin et al. (2020), a machine vision system was developed
for autonomous navigation of unmanned aerial vehicles (UAVs). Their system
used a deep learning-based algorithm to process visual data and enable
autonomous navigation of UAVs in complex environments.

One challenge in autonomous navigation using machine vision is the need for
robust and accurate perception of the environment, even in challenging conditions
such as low light or adverse weather. To address this challenge, Zhang et al. (2019)
proposed a deep learning-based system that uses multiple sensors, including visual
and inertial sensors, for robust perception and autonomous navigation.

Overall, the combination of movement, navigation and machine vision has the
potential to revolutionize various fields, including robotics, transportation, and
aerial surveillance. As machine vision technologies continue to advance, we can
expect to see increasingly sophisticated and accurate disease detection or
management systems in the future.

9|Page
4. Materials and Methods
The details of materials used, equipment and experimental procedures followed
during the course of investigation have been briefly described under this chapter.
The experiments were carried out at the Department of Agricultural Engineering,
Faculty of Agriculture, Kafrelsheikh University during the period from January
2023 to June 2023. The experiments included two main parts. In the first part, the
experiments were conducted to optimize the accuracy of the agricultural machine
vision platform and frame in terms of the navigation and performance of the
robot’s movement system. Where performance characteristics for the dc motor of
the rear wheel were considered and the speed of the agricultural robotic platform
has been noted 46r.p.s, field experiments, the agricultural robotic platform has
been evaluated in term of its recognition accuracy under the same levels of the
different camera resolutions.

The ESP32-CAM camera is a compact camera module that offers high resolution
and excellent performance in capturing and recording. It features up to 2 megapixel
image resolution and high-quality video recording capabilities. It provides multiple
settings and is compatible with the Arduino platform, allowing for Internet
connectivity via Wi-Fi.

10 | P a g e
4.1 Materials
4.1.1 Construction of Machine Vision platform
The overall dimensions of the robotic platform and frame are 650 mm, 450 mm,
500 mm, and 20 kg for length, width, height and mass, respectively. The ground
clearance under the main frame down-word to the ground surface was 100 mm.
The total load of machine vision device was 50 kg which is distributed on the front
wheels by 15 kg and on the two rear wheels by 35 kg.

Figure 1-2: 3d image of the model Figure 2: 2d imge of the model

11 | P a g e
Figure 3: Model projections and dimension

12 | P a g e
4.1.2. Power transmission and movement system
The main function of power transmission is to transmit the motion from the direct
current brushed motor which is (24 Voltage, 100 watt and, 2800r.p.m) to the rear
wheel. The ground wheels which have been used with the agricultural robotic
platform had 200, 50, 200 mm as diameter, tire width and tire height respectively.
The horizontal distance between the rear wheel centerlines was 450 mm. While the
base distance between the front wheel and rear wheels axis was 600 mm. A 12-
voltige, 20 Ampere and 150-watt direct current brushed motor was used to move
the front wheels.

Figure 5: Gears and tires Figure 4-2: Motors

13 | P a g e
4.1.3. Green house and plant parameters
• Greenhouse construction

The greenhouse was directed in a north-eastern direction, as it suits the


surrounding climatic conditions, and iron supports were installed to provide the
necessary support for the greenhouse and ensure its stability. In order to achieve a
natural flow of air and proper ventilation, only the entry and exit door was
installed, without any windows.

The greenhouse is designed to direct natural light and provide ideal climatic
conditions for plant growth. The height and width of the greenhouse can be
adjusted to suit the growth requirements of different plants, making it ideal for
different planting applications.

A net cover material was used to provide adequate ventilation and control the
temperature and humidity inside the greenhouse.

The height and width of the greenhouse can also be adjusted to suit different plant
growth requirements, making it ideal for different planting applications. The
greenhouse was used with dimensions of 5 meters in length, 2 meters in width, and
2 and a half meters in height.

Figure 6: Agricultural greenhouse

14 | P a g e
• Plant diseases and control

- Tomatoes
Tomatoes are annual herbaceous plants belonging to the Solanaceae family,
which are widely cultivated throughout the world as an agricultural crop and
food source.

- Cucumber
Cucumber is an annual herbaceous plant belonging to the Cucurbitaceae family,
annual plants that need regular watering as well as organic and chemical
fertilizers for their improvement and growth.

- Pepper
Pepper is a plant of the nightshade family, which is a common agricultural crop
in hot countries. Fuji pepper food.

Figure 7: Tomato Figure 8: pepper

Figure 9: Cucumber

15 | P a g e
- Control:

safety
concentration / l The name of the pesticide The disease
period
100 - 75 cm 3 per
3 days Ophir powdery mildew;
dunum
150 - 250 g per
7 days Intracol late blight
dunam

7 days 150 - 200 gr Intracol 70% Blight

Leaf spots or freckles


Copper compounds such
3-5 g/litre Scabies or bacterial
as: Copperex % 50
spotting

30 1 ml Confidor Whitefly

16 | P a g e
4.1.4. Spraying unit

- We are used of a Karagon sprayer, which is a sprayer that charges with


electricity:
• It can be operated automatically by electricity for up to ten times of filling
the tank.
• The capacity of the tank is 20 liters, and it can cover a spray area of 3:2
meters and its empty weight is 6.5 kg
• Highly efficient in distributing the spray to all parts of the plant.

Figure 10: Pesticide spraying machine

• How to operate the machine:

The sprayer Is charged well before use, and the pesticide is mixed with water and
stirred well. The pesticide dissolves and turns into a liquid form completely, then
the sprayer is filled with the mixture, and it Is operated by means of a relay, as well
as controlling the start and end of spraying and controlling the pressure of the
spray according to the length and size of the plant. The angle of the spray hose is
directed and adjusted according to the length and size of the plant.
17 | P a g e
4.1.5 Hardware components:

A. NodeMCU ESP8266:

The NodeMCU ESP8266 Is a compact and


integrated microcontroller board that is based on
the ESP8266 chip. It provides built-in Wi-Fi
capabilities, making It an ideal choice for
Internet of Things (IoT) projects. With its Wi-Fi
connectivity, powerful processor, onboard
memory, Arduino compatibility, and affordable

price, the ESP8266 is widely used for various Figure 11: NodeMCU ESP8266.
applications.

One of the key advantages of the ESP8266 Is its ability to connect to Wi-Fi
networks, enabling communication and control over the internet. It features a
powerful processor that can handle multiple tasks efficiently. The onboard memory
allows for storing programs and data, facilitating the execution of complex
applications. Additionally, the ESP8266 can be programmed using the Arduino
development environment, which offers a user-friendly interface for developers.

The ESP8266 Is widely used in IoT projects for home automation, environmental
monitoring, wireless sensor networks, and more. It offers a cost-effective solution
for prototyping and small-scale applications. With its Wi-Fi capabilities and
versatile features, the ESP8266 provides an easy and efficient way to Implement
Wi-Fi-based applications. Overall, the NodeMCU ESP8266 is a popular choice for
developers and enthusiasts seeking to incorporate Wi-Fi connectivity into their
projects, thanks to Its compact design, powerful capabilities, and affordability.

18 | P a g e
b. ESP32-CAM:

The ESP32-CAM camera is a built-in camera module


based on the ESP32 chip. It is capable of capturing
Images, recording videos, and interacting with wireless
networks. The camera features a high-resolution image
sensor, providing excellent image quality. It has wireless
connectivity capabilities such as Wi-Fi and Bluetooth,
allowing the camera to Interact and be remotely
Figure 12: ESP32-CAM
controlled.

The ESP32-CAM camera has a wide range of applications, including surveillance


and security systems, robot control, remote sensing systems, and Internet of Things
(IoT) devices. The camera can be programmed using the Arduino development
environment, making it user-friendly and open source for developers.

The ESP32-CAM camera provides a comprehensive solution for applications that


require imaging, recording, and wireless connectivity. It can connect to a Wi-Fi
network and send Images and videos over the internet. The ESP32-CAM camera is
a popular choice among developers and electronics enthusiasts who want to
Integrate a high-quality camera module with wireless communication capabilities
into their projects.

19 | P a g e
c. LCD screen -I2C:

The LCD screen, connected via I2C, plays a vital


role in my project by displaying the readings
from the temperature and humidity sensors. Its
primary function Is to present the captured data
Figure 13: LCD screen -I2C
in a clear and user-friendly manner. By utilizing
the I2C interface, the LCD screen offers advantages such as simplified wiring and
easier integration with microcontrollers like Arduino. The I2C protocol enables
seamless communication between the microcontroller and the LCD screen,
allowing for the transmission of commands and data for displaying various
information, including text, numbers, symbols, and even graphical content.
Additionally, the I2C Interface provides control over the backlight and contrast of
the LCD screen, enabling adjustments to enhance visibility. Overall, the I2C-
connected LCD screen serves as a compact and versatile solution for visualizing
data in embedded systems and electronics projects.

20 | P a g e
d. Dual relays:

A dual relay is an electronic switch that can control the


direction of a motor. It consists of a coil and a switch. When a
voltage is applied to the coil, it generates a magnetic field that
activates the switch, changing its state from open to closed or
vice versa. By controlling the state of the switch, we can
change the direction of current flow in the motor, thus Figure 14 :Dual relays

controlling its direction of motion. This allows us to easily control the motor's
movement by sending control signals to the relay.

e. H-Bridge High-Power DC Motor BTS7960 43A:

The H-Bridge High-Power DC Motor BTS7960 43A is an integrated circuit that


can control the direction and speed of a DC motor. It features a maximum current
rating of 43A and can handle high-power
applications. This component is used in the project to
control the speed and direction of the two rear
wheels of the smart machine.

Figure 15: H-Bridge High-


Power DC Motor BTS7960 43A

21 | P a g e
f. Relay:

A relay is an electrical switch that is controlled


by an electromagnet. It is commonly used in
automation and control systems to turn on and off
electrical devices. In this project, the relay is used
to control the power supply of the servo motor
and the Raspberry Pi camera module, allowing Figure 16: Relay

them to be turned on and off as needed.

By using a single-channel relay, you can easily control the operation of the
sprinkler by sending a control signal to the relay. When the relay is activated, the
electrical circuit Is opened, allowing the current to flow to the sprinkler and turn it
on. When the relay Is deactivated, the electrical circuit Is closed, interrupting the
current flow, and stopping the operation of the sprinkler.

g. DHT11–Temperature and Humidity Sensor:

The DHT11 is a low-cost digital sensor that can measure


temperature and humidity. It features a single-wire
interface and can provide accurate and reliable
measurements. In this project, the sensor is used to
monitor the temperature and humidity of the surrounding
environment, allowing users to gain valuable insights into
plant health and environmental condition. Figure 17: DHT11–
Temperature and
Humidity Sensor

22 | P a g e
4.2 Methods

4.2.1 Measuring Instrumentations

• Digital multimeter

Figure 18: Avometer

4.2.2 Software:

a. Programming languages

Programming languages play a critical role in the development of any software


project, including a disease-recognition robot project. In this project, we can use
several programming languages to build various components of the software
system.

Python:

Python is a popular high-level programming language that is widely used in


machine learning and deep learning applications. It has a vast range of libraries and
frameworks such as Keras, TensorFlow, and NumPy that can be used to develop
machine learning models and perform image processing tasks.

23 | P a g e
C programming language:

C programming language Is a powerful and efficient programming language used


for software development and system programming. It is known for Its simplicity
and ability to directly manipulate system resources. C language is considered one
of the foundations of computer science and provides a wide range of libraries and
tools that facilitate programming tasks. If you want to learn a robust programming
language and gain a deeper understanding of computer internals, investing in
learning C language would be beneficial.

b. Libraries:

Libraries play a crucial role in software development by providing pre-built


functions, tools, and resources that enable developers to streamline their work and
enhance the functionality of their projects. In the context of a disease-recognition
robot project, several libraries can be utilized to simplify the implementation and
improve the performance of the system.

Keras Library:

Keras is a high-level neural networks API that is built on top of the TensorFlow
library. It provides a user-friendly interface for designing, training, and evaluating
deep learning models. In our project, we can leverage the Keras library to construct
and train a convolutional neural network (CNN) model for disease recognition.
Keras offers a wide range of predefined layers and optimization algorithms,
making it easier to develop and fine-tune the model architecture.

24 | P a g e
TensorFlow Library:

TensorFlow is an open-source library that is widely used for numerical


computation and machine learning tasks. It provides a flexible framework for
building and training various types of machine learning models, including deep
neural networks. In our disease-recognition robot project.

Intel® oneAPI Deep Neural Network:

The Intel® oneAPI Deep Neural Network Library (oneDNN) provides highly
optimized implementations of deep learning building blocks. With this open
source, cross-platform library, deep learning application and framework developers
can use the same API for CPUs, GPUs, or both—it abstracts out instruction sets
and other complexities of performance optimization.

Using this library, you can:

Improve performance of frameworks you already use, such as OpenVINO™


toolkit, Intel® AI Analytics Toolkit, Intel® Distribution for PyTorch*, and Intel®
11Distribution for TensorFlow*.

Develop faster deep learning applications and frameworks using optimized


building blocks.

Deploy applications optimized for Intel CPUs and GPUs without writing any
target-specific code.

25 | P a g e
Matplotlib.pyplot:

matplotlib is a comprehensive data visualization library in Python. The pyplot


module of matplotlib provides a convenient interface for creating various types of
plots and charts. In our project, we can employ matplotlib.pyplot to generate visual
representations of data, such as displaying graphs of sensor readings or plotting the
performance of our disease recognition model.

NumPy Library:

NumPy is a fundamental library for scientific computing in Python. It provides


support for large, multi-dimensional arrays and a collection of mathematical
functions to operate on these arrays efficiently. In our project, we can utilize the
NumPy library to process and manipulate image data, perform numerical
computations, and manage data structures for machine learning tasks.

By incorporating these libraries into our disease-recognition robot project, we can


leverage their functionalities and capabilities to streamline the development
process, enhance the performance of our algorithms, and create a robust and
efficient system for disease identification and analysis.

ESP8266 library:

The ESP8266 library is a software library used in the Arduino development


environment to communicate with ESP8266 Wi-Fi modules. This library provides
functions and tools to facilitate programming and controlling ESP8266 modules,
including setting up network connections and sending/receiving data over Wi-Fi. It
helps in creating wireless communication and remote-control applications such as
access points, connecting to web servers, sending data over MQTT, and more. By
using the ESP8266 library, you can implement various projects that rely on
26 | P a g e
wireless communication and Wi-Fi technology, such as remote control of
electronic devices and creating smart monitoring systems.

c. Database management system (DBMS):

In the field of computer science and data management, a database is an organized


collection of data that can be accessed, managed, and updated easily. In the context
of a disease-recognition robot project, a database can be used to store the diseases
that the robot has identified and to display them in a user-friendly manner.

● Kaggle database

In this project, there are two types of databases that can be used. The first is a
Kaggle database that contains images of diseased plants, such as the Tomato leaf
disease detection and the Cucumber plant diseases dataset. These databases can be
used to train machine learning models to recognize different plant diseases.

d. Programs:

In the disease identification project, several programs are utilized to support


different aspects of the project, ranging from designing and prototyping hardware
components to coding and development tasks. The programs used include
SolidWorks, Fritzing, Visual Studio Code, the terminal, and VNC.

27 | P a g e
Program Purpose Pros Cons

Computer-Aided Design (CAD) Powerful features for


SolidWorks software for 3D modeling and precise mechanical Expensive license cost
drafting design

Electronic circuit design Limited library of


Fritzing Beginner-friendly
software components

Code editor and integrated


Visual Studio Steep learning curve for
development environment Free and open-source
Code beginners
(IDE)

Command-line interface for Requires some


Provides more control
The Terminal accessing and managing the knowledge of command
and flexibility
computer line syntax

• SolidWorks:
Allows access to a
Remote desktop access Can be slow and laggy
VNC computer from
software depending on connection
anywhere
SolidWorks is a professional computer-aided design (CAD) software widely used
in engineering and product design. It provides a comprehensive set of tools for
designing and modeling complex 3D objects. In the disease identification project,
SolidWorks can be used to design and visualize the physical structure of the robot,
including the chassis, motor mounts, and other mechanical components. The
software enables precise and accurate modeling, ensuring compatibility and
efficiency in the fabrication process.

• Fritzing:

Fritzing is an open-source software specifically designed for the creation of circuit


diagrams, prototyping, and printed circuit board (PCB) layout design. It offers a
user-friendly interface that allows users to easily design and document electronic

28 | P a g e
circuits. In the disease identification project, Fritzing can be utilized to design the
circuitry for connecting and controlling the various hardware components of the
robot, such as the Raspberry Pi, motor drivers, sensors, and camera module.

• Visual Studio Code:

Visual Studio Code is a versatile and widely used source code editor developed by
Microsoft. It provides an extensive set of features and extensions that facilitate
coding and software development. In the disease identification project, Visual
Studio Code can be employed as the primary Integrated Development Environment
(IDE) for writing and editing the code. It supports multiple programming
languages such as Python, JavaScript, HTML, and CSS, making it suitable for the
different components of the project.

• The Terminal:

The terminal refers to the command-line interface provided by the operating


system. It allows users to execute commands and perform various tasks efficiently.
In the disease identification project, the terminal is utilized to interact with the
Raspberry Pi and execute commands to control and configure the hardware
components, install libraries and dependencies, and run the developed software.

• Arduino IDE:

Arduino IDE is a software program used for programming and uploading code to
Arduino microcontrollers. It provides a user-friendly interface and powerful
programming tools for controlling and interacting with connected devices. With
Arduino IDE, developers can create interactive programs to control various
hardware components, such as motors, sensors, and other electronic devices. It is a

29 | P a g e
popular choice for both beginners and professionals to develop hobby projects,
learn electronics, and innovate in areas like robotics, home automation, portable
devices, lighting control, and more.

• Control App:
An Innovative application has been designed to easily
control the motion and steering of motors, as well as the
operation and stopping of the Irrigation system. The
application Is operated by establishing a connection with the
node through the Wi-Fi network. You can control the motor's
movement by specifying the appropriate directions and
speeds, as well as steering the motors at desired angles.
Additionally, you can activate or deactivate the irrigation
system through the Interactive control function in the Figure 19: Control App
application.

To operate the application, you need to connect the node to a suitable Wi-Fi
network. After that, you can run the application on your smart device and enter the
connection Information specific to the node, such as the IP address and port
number. Once a successful connection is established between the application and
the node, you will be able to use the simple and intuitive interface of the
application to control the motor's movement and steering, as well as start or stop
the Irrigation system with a single button click.

30 | P a g e
e. technology used:

Introduction:

The development of machine learning (ML) and deep learning (DL) techniques has
revolutionized the way we approach computer vision problems. In the field of plant
pathology, the ability to accurately detect and diagnose diseases is crucial for
ensuring food security and protecting plant health. The disease recognition robot
project utilizes DL techniques to identify various plant diseases. In particular, the
project employs convolutional neural networks (CNNs) which are a type of
artificial neural network (ANN) that has proven to be very effective in image
recognition tasks. Transfer learning (TL) is used to speed up the training process
and improve the accuracy of the model. Moreover, the project utilizes the
EfficientNetB7 model which is a state-of-the-art architecture that has achieved
excellent performance on several image classification benchmarks.

I. Machine vision (MV):


Machine vision involves the use of visual sensors to extract information
from images or video data. In the disease identification project, we used
machine vision to capture images of tomato leaves and cucumber plants
affected by various diseases. We used a high-resolution camera mounted on
a robotic arm to capture images from multiple angles, which allowed us to
capture fine details and improve the accuracy of disease detection.

31 | P a g e
II. Machine learning:
Machine learning is a branch of artificial intelligence that involves the
development of algorithms that can learn patterns and relationships from
data. In the disease identification project, we used machine learning
algorithms to analyze the images captured by the machine vision system and
identify diseases affecting the tomato leaves and cucumber plants. We used
transfer learning and deep learning techniques to train artificial neural
networks (ANNs) on large datasets of plant images and diseases to
accurately classify the diseases.

• Transfer learning (TL):


Transfer learning is a technique that involves using a pre-trained model as
the starting point for developing a new model for a related task. In the
disease identification project, we used transfer learning to fine-tune the
EfficientNetB7 model, a pre-trained deep neural network developed by
Google, for the task of identifying diseases in tomato and cucumber plants.
By using a pre-trained model, we were able to leverage the model's pre-
existing knowledge and reduce the amount of training data required, which
improved the efficiency and accuracy of the model.

• Deep learning:
Deep learning is a subfield of machine learning that involves the
development of artificial neural networks with multiple layers that can learn
and represent complex patterns in data. In the disease identification project,
we used deep learning to train a convolutional neural network (CNN) to
accurately classify images of tomato and cucumber plants and identify the

32 | P a g e
diseases affecting them. By using a deep learning approach, we were able to
learn complex features and relationships in the image data that would have
been difficult to detect using traditional machine learning methods.

III. Artificial neural networks (ANNs):


• convolutional neural network (CNN)
Artificial neural networks are a type of machine learning algorithm that are
modeled after the structure and function of biological neural networks in the
brain. In the disease identification project, we used a CNN, a type of ANN
that is widely used for image classification tasks. The CNN consisted of
multiple layers of artificial neurons that learned features and patterns in the
image data, enabling accurate classification of the diseases affecting the
tomato and cucumber plants. We used the EfficientNetB7 architecture, a
state-of-the-art CNN developed by Google, which is particularly powerful
due to its large number of parameters and efficient use of computational
resources.

33 | P a g e
IV. Automation system:
Automation systems involve the use of technology to automate tasks and
processes. In the disease identification project, we used a robotic arm with a
high-resolution camera to capture images of tomato leaves and cucumber
plants from multiple angles. The robotic arm was programmed to move
automatically and capture images of the entire plant, which improved the
accuracy of the disease identification system. By using an automation
system, we were able to capture large amounts of data efficiently and
accurately, which was critical for training the deep learning model.

5. Background:
Plant diseases have a significant impact on agricultural productivity and can cause
yield losses, economic impacts, and food security issues. Effective management
and control of plant diseases are crucial for sustainable agricultural practices and
food security. Early detection and accurate diagnosis of plant diseases are essential
for developing appropriate control measures and minimizing crop losses. Deep
learning techniques, such as convolutional neural networks (CNNs), have shown
great potential for automated disease diagnosis in plants. These techniques can
quickly and accurately identify plant diseases from images, making them an
attractive solution for plant disease identification. In recent years, several studies
have used deep learning techniques for plant disease identification, achieving high
accuracy rates, and demonstrating the effectiveness of these techniques for plant
disease diagnosis.

34 | P a g e
6. Research objective:
The primary objective of this research is to develop a plant disease identification
system using EfficientNetB7 convolutional neural network. The goal is to achieve
high accuracy rates in identifying plant diseases, which can improve the efficiency
and effectiveness of disease management and crop protection. The EfficientNetB7
model is a state-of-the-art CNN architecture that has shown excellent performance
in image classification tasks. This research focuses on applying the EfficientNetB7
model to plant disease identification to enhance the accuracy and efficiency of
disease diagnosis.

7. Methodology:

To develop the plant disease identification system, we used two publicly available
datasets, the Tomato leaf disease detection dataset and the Cucumber plant
diseases dataset. The Tomato leaf disease detection dataset contains 900 images of
tomato leaves with seven different types of diseases, while the Cucumber plant
diseases dataset contains 6,900 images of cucumber leaves with nine different
types of diseases. We used data preprocessing techniques, such as resizing and
normalization, to prepare the data for training the model. We used the
EfficientNetB7 architecture as the base model and applied transfer learning to fine-
tune the model for plant disease identification. We trained the model on the
combined dataset using an Adam optimizer with a learning rate of 0.0001 and a
batch size of 64. The training process was carried out for 25 epochs with early
stopping when the validation loss did not improve for three consecutive epochs.
We evaluated the performance of the model on a test set containing 10% of the
images in the combined dataset.

35 | P a g e
8.Experience:

8.1. Green house test and evaluation

The robot was evaluated in the tomato and cucumber production greenhouse of the
Faculty of Agriculture, Kafr el Sheikh University, Egypt during June 2023 and the
practical experiment was conducted to evaluate the performance in several stages.

1-compilation and preparation of the data set

To develop the plant disease identification system, we used publicly available


datasets, the tomato leaf disease detection dataset which contains 900 images of
tomato leaves with seven different types of diseases.

A number of images of plants in the infected and healthy condition were collected
and these images were processed using the model.

2 - design and training of models

Using deep learning technology and artificial neural networks, models were
designed where the data set was divided into two groups, a training group using
images taken of plants in both infected and healthy cases, and a test group that took
place inside the greenhouse.

The efficientnetb7 architecture was used as a basic model and transfer learning was
applied to trace the plant disease identification model. The model was trained on
the combined dataset using an Adam optimizer with a learning rate of 0.0001 and a
batch size of 64. The training process was carried out for 25 epochs with an early
stop when the verification loss did not improve for three consecutive epochs. We
evaluated the performance of the model in a test set containing 10% of the images
in the collected dataset.

36 | P a g e
3-Performance Evaluation and analysis

The robot made a survey of all plants from the moment it entered and turned it on
inside the greenhouse using an ESP32-CAM camera working with a live broadcast
system, through which the model identified the following diseases (leaf mold ,
tomato mosaic virus , late blight) And he took pictures of the infected plants and
saved each picture with its own disease name, and he identified the appropriate
pesticide and it was (Mancozeb and Chlorpyrifos) Which have been mixed and
filled the machine gun in the right amount to combat these diseases, and the
Spraying unit works with an automatic opening and closing system.

37 | P a g e
9. Results:

• Results obtained from model training:

Our experiments show that the developed plant disease identification system
achieved high accuracy rates in identifying plant diseases. The model achieved an
overall accuracy of 98.26% on the test set, outperforming existing approaches on
several benchmark datasets. The model achieved high accuracy rates for individual
crop species and diseases, with the highest accuracy of 99.62% for the "healthy"
class in the tomato dataset and 99.01% for the "healthy" class in the cucumber
dataset. We also encountered some limitations and challenges during the research,
such as the need for more diverse datasets, which can affect the model's
generalizability, and the need for more specialized models for specific plant
species.

38 | P a g e
• The results of the diseases that appeared on the captured images that
were processed:

These images were taken when any of the diseases on which the model was trained
appeared and were treated. The image was saved after processing with the name of
the disease that was recognized.

39 | P a g e
Also, there was another way to save the results, as it was saved in an Excel sheet
containing the name of the disease that was identified and its percentage.

40 | P a g e
10. Discussion:

The results of our research have significant implications for the development of
automated disease diagnosis systems for plant protection and improving
agricultural productivity. The high accuracy rates achieved by the developed plant
disease identification system demonstrate the potential of using deep learning
techniques, such as CNNs, for plant disease identification. The EfficientNetB7
model showed excellent performance in identifying plant diseases, outperforming
existing approaches on several benchmark datasets. However, the need for more
diverse datasets highlights the need for further research in this area. Future
research could explore these issues and further improve the accuracy and
efficiency of plant disease identification systems. Additionally, the development of
more specialized models for specific plant species could further improve the
accuracy of the system.

41 | P a g e
11. Recommendations

▪ Expand the width of the shed from 2 meters to 3 meters to provide a larger
area for farming and ease of movement of the model Inside.
▪ Use motors with higher torque to ensure sufficient power transmission for
efficient movement of the model.
▪ Connect the gears to the motors using nuts and bolts instead of welding to
avoid vibration problems and power transmission failure.
▪ Use a suitable chain to transmit power between the motors and the wheels to
ensure efficient power transmission.
▪ Increasing the size and diversity of the database: The first step to improving
the trained model is to collect a more comprehensive and diverse database.
Collecting more images from different sources can lead to Improving the
quality of the database and help the model better recognize diseases.
▪ Upgrading computational resources: To overcome the challenge of weak
hardware, researchers should consider upgrading their computational
resources, either by purchasing more powerful hardware or using cloud
computing services. Accessing more computational power will allow more
frequent model training and faster experimentation.
▪ Collaborating with experts: It may be helpful to collaborate with experts in
the field of plant pathology to gain a better understanding of disease
symptoms and the required images for accurate diagnosis. Experts can also
provide insights on how to improve the database and training process.
▪ You need to upload the code to the node every time you start it to ensure
proper node functionality and execution of its required tasks.

42 | P a g e
▪ The ESP camera module had limited resolution capabilities, specifically 128
x128, which was too small for my application's requirements.
▪ Using data augmentation: To increase the dataset size and improve the
model's ability to generalize to new images, researchers can use data
augmentation techniques. These techniques involve applying various
transformations to the existing images, such as flipping, rotating, or
cropping, to create new images similar to the original ones but not identical.
▪ Evaluating and monitoring model performance: It is crucial to regularly
evaluate and monitor the model's performance to identify areas for
improvement. Researchers can use various performance metrics, such as
accuracy, precision, recall, and F1 score, to evaluate the model's
performance on validation and test datasets. They can also use techniques
such as cross-validation to ensure the model's performance is reliable.

By using these recommendations, the performance of the model can be improved,


and the desired goals can be achieved.

43 | P a g e
12. Conclusion:
In conclusion, this research developed a plant disease identification system using
EfficientNetB7 convolutional neural network and achieved high accuracy rates in
identifying plant diseases. The developed system has the potential to improve the
efficiency and effectiveness of disease management and crop protection, which can
lead to sustainable agricultural practices and enhance food security. However,
there are still some limitations and challenges that need to be addressed, such as
improving the diversity of the dataset and developing specialized models for
specific plant species.

44 | P a g e
45 | P a g e
13. References

Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2019).
Deeplab: Semantic image segmentation with deep convolutional nets, atrous
convolution, and fully connected crfs. IEEE transactions on pattern analysis and
machine intelligence, 40(4), 834-848.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun,
S. (2017). Dermatologist-level classification of skin cancer with deep neural
networks. Nature, 542(7639), 115-118.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image
recognition. In Proceedings of the IEEE conference on computer vision and pattern
recognition (pp. 770-778).

Huang, C., Zhao, Y., Wang, X., & Ma, Y. (2018). Learning affinity via spatial
propagation networks for visual tracking. In Proceedings of the IEEE conference
on computer vision and pattern recognition (pp. 6974-6983).

Li, Y., Qian, Y., Zhou, H., Liu, S., & Yang, G. (2019). A fast and accurate
machine vision system for apple detection and counting. Computers and
Electronics in Agriculture, 157, 531-537.

Liu, J., Gao, Z., Li, H., Chen, Y., & Wang, X. (2019). Detection of wheat ears
using a pre-trained convolutional neural network. Computers and Electronics in
Agriculture, 163, 104855.

Liu, Q., Liu, J., Li, X., & Li, H. (2021). A new deep learning method for liver
tumor segmentation in magnetic resonance imaging. International Journal of
Computer Assisted Radiology and Surgery, 16(1), 107-114.

46 | P a g e
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for
image-based plant disease detection. Frontiers in plant science, 7, 1419.

Shalev-Shwartz, S., Shammah, S., & Shashua, A. (2016). Safe, multi-agent,


reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295.

Zhu, X., Lei, Z., Yan, J., Yi, D., & Li, S. Z. (2020). Discriminative feature learning
for face recognition under low-resolution and occlusion conditions. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 43(3), 883-898.

Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2019).
Deeplab: Semantic image segmentation with deep convolutional nets, atrous
convolution, and fully connected crfs. IEEE transactions on pattern analysis and
machine intelligence, 40(4), 834-848.

Li, Y., Qian, Y., Zhou, H., Liu, S., & Yang, G. (2019). A fast and accurate
machine vision system for apple detection and counting. Computers and
Electronics in Agriculture, 157, 531-537.

Liu, J., Gao, Z., Li, H., Chen, Y., & Wang, X. (2019). Detection of wheat ears
using a pre-trained convolutional neural network. Computers and Electronics in
Agriculture, 163, 104855.

Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for
image-based plant disease detection. Frontiers in plant science, 7, 1419.

Al-Kaff, A., Al-Kaff, A., & Al-Jumaily, A. (2020). Vision-based autonomous


navigation for a mobile robot using a convolutional neural network. Applied
Sciences, 10(6), 2016.

47 | P a g e
Lin, Y. T., Li, H. C., Chen, Y. H., Chen, Y. C., Chen, C. M., & Shieh, M. D.
(2020). An autonomous navigation method for unmanned aerial vehicles using
machine vision. Journal of Intelligent and Robotic Systems, 97(3), 505-516.

Lurie, A., Trachtenberg, A., & Levi, D. (2019). Real-time obstacle detection and
avoidance in autonomous vehicles using deep learning. IEEE Transactions on
Intelligent Transportation Systems, 21(9), 3946-3957.

Zhang, W., Wang, Y., Chen, Y., & Huang, Y. (2019). Robust perception and
autonomous navigation using multiple sensors with deep learning. IEEE Access, 7,
.9974-9964

48 | P a g e

You might also like