• Greenhouse construction
• Plant diseases and control
a. NodeMCU ESP8266
b. ESP32-CAM
c. LCD-I2C
d. Dual Relay
e. H-Bridge High-Power DC Motor BTS7960 43A
f. Relay
4.2 Methods
    4.2.1 Measuring Instrumentations
    4.2.2 Software
        a. Programming languages
        b. Libraries
        d. Programs
            • SolidWorks
            • Fritzing
            • Visual Studio Code
            • The Terminal
            • Arduino IDE
            • Control App
        e. Technology used
5. Background
6. Research objective
7. Methodology
8. Experience
9. Results
10. Discussion
11. Recommendations
12. Conclusion
13. References
1. Executive Summary
Pests have a significant impact on plants, reducing product quality and resulting in losses for farmers. Modern, environmentally friendly technologies such as computer vision and deep learning have played an effective role in the early detection of pests in agricultural crops. In this study, a set of images was captured for both cucumber and tomato plants grown in a greenhouse. The tomato leaf detection dataset contains 900 images of tomato leaves with seven different types of diseases, while the cucumber plant disease dataset contains 6,900 images of cucumber leaves with nine different types of diseases. EfficientNetB7 was used as the baseline model and was trained on the combined dataset using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. The model achieved high accuracy rates for individual crop types and diseases, with the highest accuracy reaching 99.62% for the "healthy" class in the tomato dataset and 99.01% for the "healthy" class in the cucumber dataset. However, developing more specialized models for specific plant types may further improve the system's accuracy.
2. Introduction
3. Review of literature
In the field of medical image analysis, machine vision has been applied to improve
the accuracy of diagnosis and treatment. In a recent study by Liu et al. (2021), a
deep learning method was proposed for the segmentation of liver tumors from
magnetic resonance images (MRI). The proposed method achieved high accuracy
in segmenting liver tumors from MRI scans. Additionally, machine vision has been
applied to detect and diagnose skin cancer. In a study by Esteva et al. (2017), a
deep learning algorithm was trained to classify skin lesions as benign or malignant.
The proposed method achieved high accuracy in classifying skin lesions from
clinical images. In the field of agriculture, machine vision has been applied to
improve crop yield and quality. In a recent study by Li et al. (2019), a machine
vision system was proposed for apple detection and counting. The proposed
method achieved high accuracy in detecting and counting apples from images.
Additionally, machine vision has been applied to detect and diagnose plant
diseases. In a study by Mohanty et al. (2016), a deep learning algorithm was
trained to diagnose plant diseases from images. The proposed method achieved
high accuracy in diagnosing plant diseases from images. In the field of security,
machine vision has been applied to improve the accuracy of face recognition and
object tracking. In a study by Zhu et al. (2020), a deep learning method was
proposed for face recognition under low-resolution and occlusion conditions. The
proposed method achieved high accuracy in face recognition under challenging
conditions. Additionally, machine vision has been applied to detect and track
objects in videos. In a study by Huang et al. (2018), a deep learning method was
proposed for object tracking in videos. The proposed method achieved state-of-the-art results on several benchmark datasets.
Recent studies have shown the potential of deep learning for image-based plant
disease detection. Mohanty et al. (2016) proposed a deep convolutional neural
network for the classification of plant diseases using images. Their model achieved
high accuracy rates in identifying different plant diseases, and they suggested that
their approach could be used for real-time monitoring of plant health. In another
study, Li et al. (2019) developed a fast and accurate machine vision system for
apple detection and counting. They utilized a deep learning-based algorithm that
combined a convolutional neural network with a region-based convolutional
network for accurate apple detection and counting. Their system achieved high
accuracy rates and outperformed traditional methods for apple detection.
3.3 Carriage, navigation and frame for machine vision
Autonomous navigation refers to the ability of a device or vehicle to move and
navigate without human intervention. Machine vision plays a critical role in
enabling autonomous navigation, as it allows devices to perceive and understand
their environment through visual information. Recent advancements in machine
vision, such as deep learning algorithms and neural networks, have significantly
improved the accuracy and efficiency of autonomous navigation systems.
In another study, Lurie et al. (2019) proposed a deep learning-based system for
real-time obstacle detection and avoidance in autonomous vehicles. Their system
used a convolutional neural network to process visual data and accurately detect
obstacles in real-time.
Moreover, in a study by Lin et al. (2020), a machine vision system was developed
for autonomous navigation of unmanned aerial vehicles (UAVs). Their system
used a deep learning-based algorithm to process visual data and enable
autonomous navigation of UAVs in complex environments.
One challenge in autonomous navigation using machine vision is the need for
robust and accurate perception of the environment, even in challenging conditions
such as low light or adverse weather. To address this challenge, Zhang et al. (2019)
proposed a deep learning-based system that uses multiple sensors, including visual
and inertial sensors, for robust perception and autonomous navigation.
Overall, the combination of movement, navigation and machine vision has the
potential to revolutionize various fields, including robotics, transportation, and
aerial surveillance. As machine vision technologies continue to advance, we can
expect to see increasingly sophisticated and accurate disease detection or
management systems in the future.
4. Materials and Methods
The details of materials used, equipment and experimental procedures followed
during the course of investigation have been briefly described under this chapter.
The experiments were carried out at the Department of Agricultural Engineering,
Faculty of Agriculture, Kafrelsheikh University during the period from January
2023 to June 2023. The experiments included two main parts. In the first part, experiments were conducted to optimize the accuracy of the agricultural machine vision platform and frame in terms of navigation and the performance of the robot's movement system; the performance characteristics of the rear-wheel DC motor were considered, and the speed of the agricultural robotic platform was recorded as 46 r.p.s. In the second part, field experiments were conducted to evaluate the agricultural robotic platform in terms of its recognition accuracy under the same levels of different camera resolutions.
The ESP32-CAM camera is a compact camera module that offers high resolution
and excellent performance in capturing and recording. It features up to 2 megapixel
image resolution and high-quality video recording capabilities. It provides multiple
settings and is compatible with the Arduino platform, allowing for Internet
connectivity via Wi-Fi.
4.1 Materials
4.1.1 Construction of Machine Vision platform
The overall dimensions of the robotic platform and frame are 650 mm, 450 mm, and 500 mm for length, width, and height, respectively, with a mass of 20 kg. The ground clearance between the main frame and the ground surface was 100 mm. The total load of the machine vision device was 50 kg, distributed as 15 kg on the front wheels and 35 kg on the two rear wheels.
Figure 3: Model projections and dimensions
4.1.2. Power transmission and movement system
The main function of the power transmission is to transmit motion from the brushed direct-current motor (24 V, 100 W, 2800 rpm) to the rear wheels. The ground wheels used with the agricultural robotic platform had a diameter of 200 mm, a tire width of 50 mm, and a tire height of 200 mm. The horizontal distance between the rear wheel centerlines was 450 mm, while the wheelbase between the front and rear wheel axes was 600 mm. A 12 V, 20 A, 150 W brushed direct-current motor was used to move the front wheels.
4.1.3. Greenhouse and plant parameters
• Greenhouse construction
The greenhouse is designed to direct natural light and provide ideal climatic conditions for plant growth. The height and width of the greenhouse can be adjusted to suit the growth requirements of different plants, making it suitable for a range of planting applications. A net cover material was used to provide adequate ventilation and to control the temperature and humidity inside the greenhouse. The greenhouse used in this work had dimensions of 5 m in length, 2 m in width, and 2.5 m in height.
• Plant diseases and control
- Tomatoes
Tomatoes are annual herbaceous plants belonging to the Solanaceae family,
which are widely cultivated throughout the world as an agricultural crop and
food source.
- Cucumber
Cucumber is an annual herbaceous plant belonging to the Cucurbitaceae family; it needs regular watering as well as organic and chemical fertilizers for healthy growth.
- Pepper
Pepper is a plant of the nightshade family and a common agricultural crop in hot countries.
Figure 9: Cucumber
- Control:
Disease | Pesticide | Concentration | Safety period
Powdery mildew | Ophir | 75-100 cm³ per dunum | 3 days
Late blight | Intracol | 150-250 g per dunum | 7 days
Whitefly | Confidor | 1 ml | 30 days
4.1.4. Spraying unit
The sprayer is charged well before use, and the pesticide is mixed with water and stirred well until it dissolves completely into liquid form. The sprayer is then filled with the mixture and operated by means of a relay, which also controls the start and end of spraying and the spray pressure according to the length and size of the plant. The angle of the spray hose is directed and adjusted according to the length and size of the plant.
4.1.5 Hardware components:
a. NodeMCU ESP8266:
Owing to its low price, the ESP8266 is widely used for various applications.
Figure 11: NodeMCU ESP8266
One of the key advantages of the ESP8266 is its ability to connect to Wi-Fi networks, enabling communication and control over the internet. It features a powerful processor that can handle multiple tasks efficiently, and its onboard memory allows programs and data to be stored, facilitating the execution of complex applications. Additionally, the ESP8266 can be programmed using the Arduino development environment, which offers a user-friendly interface for developers. The ESP8266 is widely used in IoT projects for home automation, environmental monitoring, wireless sensor networks, and more, offering a cost-effective solution for prototyping and small-scale applications. With its Wi-Fi capabilities and versatile features, it provides an easy and efficient way to implement Wi-Fi-based applications. Overall, the NodeMCU ESP8266 is a popular choice for developers and enthusiasts seeking to incorporate Wi-Fi connectivity into their projects, thanks to its compact design, powerful capabilities, and affordability.
b. ESP32-CAM:
c. LCD screen (I2C):
d. Dual relays:
The dual relay module is connected to the motor for controlling its direction of motion. This allows us to easily control the motor's movement by sending control signals to the relay.
f. Relay:
By using a single-channel relay, you can easily control the operation of the sprinkler by sending a control signal to the relay. When the relay is activated, the electrical circuit is closed, allowing current to flow to the sprinkler and turn it on. When the relay is deactivated, the electrical circuit is opened, interrupting the current flow and stopping the operation of the sprinkler.
4.2 Methods
4.2.1 Measuring Instrumentations:
• Digital multimeter
4.2.2 Software:
a. Programming languages
Python:
C programming language:
b. Libraries:
Keras Library:
Keras is a high-level neural networks API that is built on top of the TensorFlow
library. It provides a user-friendly interface for designing, training, and evaluating
deep learning models. In our project, we can leverage the Keras library to construct
and train a convolutional neural network (CNN) model for disease recognition.
Keras offers a wide range of predefined layers and optimization algorithms,
making it easier to develop and fine-tune the model architecture.
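As a minimal illustrative sketch (not the project's exact code), the snippet below shows how an EfficientNetB7-based transfer-learning classifier can be assembled with Keras. The input size and the number of output classes are assumptions.
```python
# Sketch: transfer-learning classifier built on EfficientNetB7 with Keras.
# The input size (224x224) and NUM_CLASSES are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB7

NUM_CLASSES = 16  # assumed total number of tomato and cucumber classes

base = EfficientNetB7(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```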
TensorFlow Library:
The Intel® oneAPI Deep Neural Network Library (oneDNN) provides highly
optimized implementations of deep learning building blocks. With this open
source, cross-platform library, deep learning application and framework developers
can use the same API for CPUs, GPUs, or both—it abstracts out instruction sets
and other complexities of performance optimization.
Deploy applications optimized for Intel CPUs and GPUs without writing any
target-specific code.
Matplotlib.pyplot:
NumPy Library:
ESP8266 library:
● Kaggle database
In this project, two types of datasets can be used. The first is a Kaggle dataset containing images of diseased plants, such as the Tomato leaf disease detection dataset and the Cucumber plant diseases dataset. These datasets can be used to train machine learning models to recognize different plant diseases.
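The sketch below illustrates, with placeholder paths, how the downloaded Kaggle image folders could be loaded and prepared with Keras utilities; the image size and directory layout are assumptions.
```python
# Sketch: loading the extracted Kaggle image folders for training and testing.
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 64         # batch size reported in this study

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/combined/train",          # placeholder path to the extracted images
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/combined/test",           # placeholder path (about 10% of the images)
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)

# Optional normalization step; the Keras EfficientNet models already include
# their own input scaling, so this can be skipped when using them directly.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
test_ds = test_ds.map(lambda x, y: (normalize(x), y))
```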
d. Programs:
Program | Purpose | Pros | Cons
VNC | Remote desktop access software | Allows access to a computer from anywhere | Can be slow and laggy depending on connection

• SolidWorks:
SolidWorks is a professional computer-aided design (CAD) software widely used
in engineering and product design. It provides a comprehensive set of tools for
designing and modeling complex 3D objects. In the disease identification project,
SolidWorks can be used to design and visualize the physical structure of the robot,
including the chassis, motor mounts, and other mechanical components. The
software enables precise and accurate modeling, ensuring compatibility and
efficiency in the fabrication process.
• Fritzing:
Fritzing is an open-source tool for designing and documenting electronic circuits. In the disease identification project, Fritzing can be utilized to design the circuitry for connecting and controlling the various hardware components of the robot, such as the Raspberry Pi, motor drivers, sensors, and camera module.
• Visual Studio Code:
Visual Studio Code is a versatile and widely used source code editor developed by
Microsoft. It provides an extensive set of features and extensions that facilitate
coding and software development. In the disease identification project, Visual
Studio Code can be employed as the primary Integrated Development Environment
(IDE) for writing and editing the code. It supports multiple programming
languages such as Python, JavaScript, HTML, and CSS, making it suitable for the
different components of the project.
• The Terminal:
• Arduino IDE:
Arduino IDE is a software program used for programming and uploading code to
Arduino microcontrollers. It provides a user-friendly interface and powerful
programming tools for controlling and interacting with connected devices. With
Arduino IDE, developers can create interactive programs to control various
hardware components, such as motors, sensors, and other electronic devices. It is a
popular choice for both beginners and professionals to develop hobby projects,
learn electronics, and innovate in areas like robotics, home automation, portable
devices, lighting control, and more.
• Control App:
An innovative application has been designed to easily control the motion and steering of the motors, as well as the starting and stopping of the irrigation system. The application is operated by establishing a connection with the node through the Wi-Fi network. You can control the motors' movement by specifying the appropriate directions and speeds, as well as steering the motors to the desired angles. Additionally, you can activate or deactivate the irrigation system through the interactive control function in the application.
Figure 19: Control App
To operate the application, you need to connect the node to a suitable Wi-Fi network. After that, you can run the application on your smart device and enter the connection information specific to the node, such as the IP address and port number. Once a successful connection is established between the application and the node, you will be able to use the simple and intuitive interface of the application to control the motors' movement and steering, as well as start or stop the irrigation system with a single button click.
e. Technology used:
I. Introduction:
The development of machine learning (ML) and deep learning (DL) techniques has
revolutionized the way we approach computer vision problems. In the field of plant
pathology, the ability to accurately detect and diagnose diseases is crucial for
ensuring food security and protecting plant health. The disease recognition robot
project utilizes DL techniques to identify various plant diseases. In particular, the
project employs convolutional neural networks (CNNs) which are a type of
artificial neural network (ANN) that has proven to be very effective in image
recognition tasks. Transfer learning (TL) is used to speed up the training process
and improve the accuracy of the model. Moreover, the project utilizes the
EfficientNetB7 model which is a state-of-the-art architecture that has achieved
excellent performance on several image classification benchmarks.
II. Machine learning:
Machine learning is a branch of artificial intelligence that involves the
development of algorithms that can learn patterns and relationships from
data. In the disease identification project, we used machine learning
algorithms to analyze the images captured by the machine vision system and
identify diseases affecting the tomato leaves and cucumber plants. We used
transfer learning and deep learning techniques to train artificial neural
networks (ANNs) on large datasets of plant images and diseases to
accurately classify the diseases.
III. Deep learning:
Deep learning is a subfield of machine learning that involves the
development of artificial neural networks with multiple layers that can learn
and represent complex patterns in data. In the disease identification project,
we used deep learning to train a convolutional neural network (CNN) to
accurately classify images of tomato and cucumber plants and identify the
diseases affecting them. By using a deep learning approach, we were able to
learn complex features and relationships in the image data that would have
been difficult to detect using traditional machine learning methods.
IV. Automation system:
Automation systems involve the use of technology to automate tasks and
processes. In the disease identification project, we used a robotic arm with a
high-resolution camera to capture images of tomato leaves and cucumber
plants from multiple angles. The robotic arm was programmed to move
automatically and capture images of the entire plant, which improved the
accuracy of the disease identification system. By using an automation
system, we were able to capture large amounts of data efficiently and
accurately, which was critical for training the deep learning model.
5. Background:
Plant diseases have a significant impact on agricultural productivity and can cause
yield losses, economic impacts, and food security issues. Effective management
and control of plant diseases are crucial for sustainable agricultural practices and
food security. Early detection and accurate diagnosis of plant diseases are essential
for developing appropriate control measures and minimizing crop losses. Deep
learning techniques, such as convolutional neural networks (CNNs), have shown
great potential for automated disease diagnosis in plants. These techniques can
quickly and accurately identify plant diseases from images, making them an
attractive solution for plant disease identification. In recent years, several studies
have used deep learning techniques for plant disease identification, achieving high
accuracy rates, and demonstrating the effectiveness of these techniques for plant
disease diagnosis.
6. Research objective:
The primary objective of this research is to develop a plant disease identification
system using the EfficientNetB7 convolutional neural network. The goal is to achieve
high accuracy rates in identifying plant diseases, which can improve the efficiency
and effectiveness of disease management and crop protection. The EfficientNetB7
model is a state-of-the-art CNN architecture that has shown excellent performance
in image classification tasks. This research focuses on applying the EfficientNetB7
model to plant disease identification to enhance the accuracy and efficiency of
disease diagnosis.
7. Methodology:
To develop the plant disease identification system, we used two publicly available
datasets, the Tomato leaf disease detection dataset and the Cucumber plant
diseases dataset. The Tomato leaf disease detection dataset contains 900 images of
tomato leaves with seven different types of diseases, while the Cucumber plant
diseases dataset contains 6,900 images of cucumber leaves with nine different
types of diseases. We used data preprocessing techniques, such as resizing and
normalization, to prepare the data for training the model. We used the
EfficientNetB7 architecture as the base model and applied transfer learning to fine-
tune the model for plant disease identification. We trained the model on the
combined dataset using an Adam optimizer with a learning rate of 0.0001 and a
batch size of 64. The training process was carried out for 25 epochs with early
stopping when the validation loss did not improve for three consecutive epochs.
We evaluated the performance of the model on a test set containing 10% of the
images in the combined dataset.
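The minimal sketch below mirrors the training setup described above (Adam with a learning rate of 0.0001, batch size 64, up to 25 epochs, early stopping with a patience of three epochs, and evaluation on the held-out test set). It assumes the `model`, `train_ds`, and `test_ds` objects from the earlier sketches, and the validation-split fraction is an assumption rather than a reported value.
```python
import tensorflow as tf

# Carve a validation subset out of the training data so early stopping can
# monitor validation loss (the 10% fraction here is an illustrative choice).
val_batches = int(0.1 * train_ds.cardinality().numpy())
val_ds = train_ds.take(val_batches)
train_fit_ds = train_ds.skip(val_batches)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,                  # stop if validation loss stalls for 3 epochs
    restore_best_weights=True,
)

history = model.fit(
    train_fit_ds,
    validation_data=val_ds,
    epochs=25,
    callbacks=[early_stop],
)

test_loss, test_acc = model.evaluate(test_ds)
print(f"Test accuracy: {test_acc:.4f}")
```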
8. Experience:
The robot was evaluated in the tomato and cucumber production greenhouse of the Faculty of Agriculture, Kafrelsheikh University, Egypt, during June 2023, and a practical experiment was conducted to evaluate its performance in several stages. A number of images of plants in infected and healthy condition were collected, and these images were processed using the model. Using deep learning technology and artificial neural networks, models were designed in which the dataset was divided into two groups: a training group using images taken of plants in both infected and healthy cases, and a test group evaluated inside the greenhouse.
The EfficientNetB7 architecture was used as the base model, and transfer learning was applied to fine-tune the plant disease identification model. The model was trained on the combined dataset using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. The training process was carried out for 25 epochs with early stopping when the validation loss did not improve for three consecutive epochs. We evaluated the performance of the model on a test set containing 10% of the images in the combined dataset.
3- Performance Evaluation and Analysis
From the moment it was switched on inside the greenhouse, the robot surveyed all the plants using an ESP32-CAM camera with a live streaming system, through which the model identified the following diseases: leaf mold, tomato mosaic virus, and late blight. It took pictures of the infected plants and saved each picture under its disease name, and it identified the appropriate pesticides (Mancozeb and Chlorpyrifos), which were mixed and loaded into the sprayer in the right amount to combat these diseases. The spraying unit works with an automatic opening and closing system.
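The sketch below illustrates, purely as an example and not the project's exact code, how such a survey loop could look: frames are read from the ESP32-CAM live stream, classified by the trained model, and saved under the recognized disease name. The stream URL, model file, class list, and preprocessing are all assumptions.
```python
import cv2
import numpy as np
import tensorflow as tf

STREAM_URL = "http://192.168.1.50:81/stream"   # placeholder ESP32-CAM stream address
MODEL_PATH = "plant_disease_model.h5"          # placeholder trained-model file
CLASS_NAMES = ["healthy", "late_blight", "leaf_mold", "tomato_mosaic_virus"]  # illustrative subset

model = tf.keras.models.load_model(MODEL_PATH)
cap = cv2.VideoCapture(STREAM_URL)

frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    # Resize to the assumed model input size and add a batch dimension.
    img = cv2.resize(frame, (224, 224)).astype("float32")
    probs = model.predict(np.expand_dims(img, axis=0), verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    if label != "healthy":
        # Save the infected-plant frame under the recognized disease name.
        cv2.imwrite(f"{label}_{frame_idx}.jpg", frame)
cap.release()
```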
9. Results:
Our experiments show that the developed plant disease identification system
achieved high accuracy rates in identifying plant diseases. The model achieved an
overall accuracy of 98.26% on the test set, outperforming existing approaches on
several benchmark datasets. The model achieved high accuracy rates for individual
crop species and diseases, with the highest accuracy of 99.62% for the "healthy"
class in the tomato dataset and 99.01% for the "healthy" class in the cucumber
dataset. We also encountered some limitations and challenges during the research,
such as the need for more diverse datasets, which can affect the model's
generalizability, and the need for more specialized models for specific plant
species.
• The results of the diseases that appeared in the captured and processed images:
These images were taken whenever one of the diseases on which the model was trained appeared and was treated. Each image was saved after processing under the name of the disease that was recognized.
The results were also saved in a second way: an Excel sheet containing the name of each identified disease and its confidence percentage.
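As a hedged illustration of this logging step (not the project's actual script), the snippet below writes each recognized disease and its confidence percentage to an Excel sheet using pandas; the column names, file name, and example values are placeholders.
```python
import pandas as pd

results = []  # filled during the greenhouse survey, e.g. inside the detection loop

def log_detection(disease_name: str, confidence: float) -> None:
    """Append one detection (disease name and confidence percentage) to the log."""
    results.append({"Disease": disease_name,
                    "Confidence (%)": round(confidence * 100, 2)})

# Placeholder entries showing the expected format (not measured values):
log_detection("late_blight", 0.97)
log_detection("leaf_mold", 0.94)

# Requires an Excel writer backend such as openpyxl to be installed.
pd.DataFrame(results).to_excel("detection_results.xlsx", index=False)
```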
10. Discussion:
The results of our research have significant implications for the development of
automated disease diagnosis systems for plant protection and improving
agricultural productivity. The high accuracy rates achieved by the developed plant
disease identification system demonstrate the potential of using deep learning
techniques, such as CNNs, for plant disease identification. The EfficientNetB7
model showed excellent performance in identifying plant diseases, outperforming
existing approaches on several benchmark datasets. However, the need for more
diverse datasets highlights the need for further research in this area. Future
research could explore these issues and further improve the accuracy and
efficiency of plant disease identification systems. Additionally, the development of
more specialized models for specific plant species could further improve the
accuracy of the system.
11. Recommendations
▪ Expand the width of the shed from 2 meters to 3 meters to provide a larger area for farming and ease of movement of the model inside.
▪ Use motors with higher torque to ensure sufficient power transmission for
efficient movement of the model.
▪ Connect the gears to the motors using nuts and bolts instead of welding to
avoid vibration problems and power transmission failure.
▪ Use a suitable chain to transmit power between the motors and the wheels to
ensure efficient power transmission.
▪ Increasing the size and diversity of the database: The first step to improving the trained model is to collect a more comprehensive and diverse database. Collecting more images from different sources can improve the quality of the database and help the model better recognize diseases.
▪ Upgrading computational resources: To overcome the challenge of weak
hardware, researchers should consider upgrading their computational
resources, either by purchasing more powerful hardware or using cloud
computing services. Accessing more computational power will allow more
frequent model training and faster experimentation.
▪ Collaborating with experts: It may be helpful to collaborate with experts in
the field of plant pathology to gain a better understanding of disease
symptoms and the required images for accurate diagnosis. Experts can also
provide insights on how to improve the database and training process.
▪ The code must be uploaded to the node every time it is started to ensure proper node functionality and execution of its required tasks.
▪ The ESP camera module had limited resolution capabilities, specifically 128 x 128 pixels, which was too small for the application's requirements.
▪ Using data augmentation: To increase the dataset size and improve the model's ability to generalize to new images, researchers can use data augmentation techniques. These techniques involve applying various transformations to the existing images, such as flipping, rotating, or cropping, to create new images similar to the original ones but not identical. A minimal sketch of this recommendation, together with the evaluation-metrics recommendation below, follows this list.
▪ Evaluating and monitoring model performance: It is crucial to regularly
evaluate and monitor the model's performance to identify areas for
improvement. Researchers can use various performance metrics, such as
accuracy, precision, recall, and F1 score, to evaluate the model's
performance on validation and test datasets. They can also use techniques
such as cross-validation to ensure the model's performance is reliable.
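The following minimal sketch covers the two recommendations above, assuming the `model`, `train_ds`, and `test_ds` objects from the earlier sketches: simple Keras augmentation layers applied to the training data, followed by per-class precision, recall, and F1 reporting with scikit-learn. All names are illustrative.
```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

# Data augmentation: random flips, rotations, and zooms applied on the fly.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
augmented_train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

# Evaluation: gather predictions on the test set and report per-class
# precision, recall, and F1 score.
y_true, y_pred = [], []
for images, labels in test_ds:
    probs = model.predict(images, verbose=0)
    y_pred.extend(np.argmax(probs, axis=1))
    y_true.extend(np.argmax(labels.numpy(), axis=1))

print(classification_report(y_true, y_pred))
```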
12. Conclusion:
In conclusion, this research developed a plant disease identification system using the EfficientNetB7 convolutional neural network and achieved high accuracy rates in
identifying plant diseases. The developed system has the potential to improve the
efficiency and effectiveness of disease management and crop protection, which can
lead to sustainable agricultural practices and enhance food security. However,
there are still some limitations and challenges that need to be addressed, such as
improving the diversity of the dataset and developing specialized models for
specific plant species.
13. References
Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834-848.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun,
S. (2017). Dermatologist-level classification of skin cancer with deep neural
networks. Nature, 542(7639), 115-118.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image
recognition. In Proceedings of the IEEE conference on computer vision and pattern
recognition (pp. 770-778).
Huang, C., Zhao, Y., Wang, X., & Ma, Y. (2018). Learning affinity via spatial
propagation networks for visual tracking. In Proceedings of the IEEE conference
on computer vision and pattern recognition (pp. 6974-6983).
Li, Y., Qian, Y., Zhou, H., Liu, S., & Yang, G. (2019). A fast and accurate
machine vision system for apple detection and counting. Computers and
Electronics in Agriculture, 157, 531-537.
Liu, J., Gao, Z., Li, H., Chen, Y., & Wang, X. (2019). Detection of wheat ears
using a pre-trained convolutional neural network. Computers and Electronics in
Agriculture, 163, 104855.
Liu, Q., Liu, J., Li, X., & Li, H. (2021). A new deep learning method for liver
tumor segmentation in magnetic resonance imaging. International Journal of
Computer Assisted Radiology and Surgery, 16(1), 107-114.
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7, 1419.
Zhu, X., Lei, Z., Yan, J., Yi, D., & Li, S. Z. (2020). Discriminative feature learning
for face recognition under low-resolution and occlusion conditions. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 43(3), 883-898.
Lin, Y. T., Li, H. C., Chen, Y. H., Chen, Y. C., Chen, C. M., & Shieh, M. D.
(2020). An autonomous navigation method for unmanned aerial vehicles using
machine vision. Journal of Intelligent and Robotic Systems, 97(3), 505-516.
Lurie, A., Trachtenberg, A., & Levi, D. (2019). Real-time obstacle detection and
avoidance in autonomous vehicles using deep learning. IEEE Transactions on
Intelligent Transportation Systems, 21(9), 3946-3957.
Zhang, W., Wang, Y., Chen, Y., & Huang, Y. (2019). Robust perception and autonomous navigation using multiple sensors with deep learning. IEEE Access, 7, 9964-9974.