Project Report
on
Weed & Crop Detection using Machine Learning
Bachelor of Technology
in
Computer Science and Engineering (Artificial Intelligence)
by
Rishabh Sonkar (2000971520045)
Vivek (2000971520063)
Vini Srivastava (2000971520062)
CERTIFICATE
This is to certify that the project report entitled “Weed and Crop Detection” submitted by Rishabh Sonkar (2000971520045), Vivek (2000971520063), and Vini Srivastava (2000971520062) to the Galgotias College of Engineering & Technology, Greater Noida, Uttar Pradesh, affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in Computer Science & Engineering, is a bona fide record of the project work carried out by them under my supervision during the year 2021-2022.
GALGOTIAS COLLEGE OF ENGINEERING & TECHNOLOGY
GREATER NOIDA, UTTAR PRADESH, INDIA - 201306.
ACKNOWLEDGEMENT
We have put considerable effort into this project. However, it would not have been possible without the kind support and help of many individuals and organizations. We would like to extend our sincere thanks to all of them.
We are highly indebted to Mr. Mukesh Kumar Singh for his guidance and constant supervision, for providing the necessary information regarding the project, and for his support in completing it.
We also express our gratitude towards our parents for their kind cooperation and encouragement, which helped us complete this project. Our thanks and appreciation also go to our friends who helped in developing the project and to all the people who willingly helped us with their abilities.
ABSTRACT
Weeds are one of the main factors affecting agricultural production. The waste and the pollution of the farmland ecological environment caused by blanket spraying of chemical herbicides are becoming increasingly apparent. With the continuous improvement of agricultural production, accurately distinguishing crops from weeds and spraying precisely only on the weeds have become important. However, precise spraying depends on accurately detecting and locating weeds and crops. In recent years, several researchers have applied various computer vision techniques to this problem.
This review covers both traditional image processing methods and deep-learning-based methods for solving the weed detection problem. It gives an overview of the weed detection methods of recent years, analyses the advantages and drawbacks of existing techniques, and presents a few related plant-leaf and weed datasets as well as weeding machinery. Finally, the problems and difficulties of current weed detection techniques are discussed, and the direction of future research is outlined.
CONTENTS
Title Page
CERTIFICATE i
ACKNOWLEDGEMENT ii
ABSTRACT iii
CONTENTS iv
LIST OF TABLES v
LIST OF FIGURES vi
CHAPTER 1: INTRODUCTION
1.1 Introduction 3
1.2 Objective 5
CHAPTER 2: LITERATURE REVIEW
2.3.1 Introduction 13
CHAPTER 3: DESIGNING
3.1 Modules 17
3.2 Data Flow Diagram 18
4.3.1 Python 26
4.3.2 Jupyter 31
4.3.3 PyTorch 32
4.3.4 OpenCV 32
4.5.3 Accuracy 33
4.5.4 Flexibility 34
4.5.5 Quick/Speed 34
4.5.6 Need 34
4.5.7 Significance 34
CHAPTER 5: RESULT
CHAPTER 6: CONCLUSION
CHAPTER 7: REFERENCES
List of Tables
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
1.1 Introduction
Agriculture is the backbone of the Indian economy and a steady source of regular income for the majority of Indians. It is fundamentally yield-driven, and hence profit or loss depends on the harvest obtained.
Controlling the weeds that grow among field crops is one of the major challenges in agriculture. Such plants are usually pulled out by hand wherever possible, or pesticides are sprayed uniformly across the whole field to keep them in check. In standard or conventional weed control systems, herbicides are sprayed uniformly over the entire field; this approach is inefficient and wasteful.
A large share of the applied spray ends up watering the crop plants, and only about 1% of it is actually used to manage weeds, resulting in waste, environmental pollution, and financial loss for farmers.
To avoid these outcomes, a smart weed control system should be used. Such a system must be capable of detecting and identifying weeds in the field or weed-infested area, and the herbicide sprayers should then be directed to spray only on the identified spots.
Furthermore, such a system reduces costly manual labour and restricts the use of herbicides that harm the normal growth of crop plants. Texture, shape, and colour properties, as well as region-based features, are used in machine-vision-based methods to distinguish between unwanted plants and other objects. An imaging sensor is a primary component of any weed detection and classification system. Single-plant classification has been demonstrated successfully with either spectral or colour imaging.
The spatial resolution of spectral systems is often insufficient for accurate identification of individual plants or leaves. Weed control is a fundamental operation and can substantially affect crop yield. Although herbicides are expected to play a significant role in weed control and management, their use is criticised because they are applied excessively and have potentially harmful effects. Patch spraying has been shown in several studies to reduce pesticide consumption. Manual scouting for patch spraying requires extensive resources and is not a feasible alternative.
Several researchers have also investigated patch spraying using remote sensing and machine vision. Machine vision systems are best employed at the plant scale, whereas remote sensing can be applied at the field scale. Both approaches require image acquisition and processing. Image sizes can reach gigabytes, and the processing time depends on image quality, crop and weed size, the algorithm used, and the hardware configuration.
Grouping pixels is the first step in detecting weeds in an image. The motivation for segmenting images into plant and background pixels is to determine the amount of plant material within a given region or zone. The region is targeted for herbicide spray application if the proportion of plant material exceeds a particular threshold.
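As a minimal sketch of this pixel-grouping step (assuming RGB field images and OpenCV/NumPy; the file name, ExG threshold, and coverage limit below are hypothetical values, not parameters taken from this project), the Excess Green index can be thresholded to estimate plant coverage per region:

import cv2
import numpy as np

def plant_coverage(image_path: str, exg_threshold: float = 0.05) -> float:
    """Return the fraction of pixels classified as plant material."""
    bgr = cv2.imread(image_path)                 # field image in BGR channel order
    if bgr is None:
        raise FileNotFoundError(image_path)
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    exg = 2.0 * g - r - b                        # Excess Green index per pixel
    plant_mask = exg > exg_threshold             # True where vegetation is likely
    return float(plant_mask.mean())

if __name__ == "__main__":
    coverage = plant_coverage("field_patch.jpg")  # hypothetical file name
    # Spray decision: treat the region only if plant coverage exceeds a limit.
    if coverage > 0.10:                           # hypothetical coverage threshold
        print(f"Coverage {coverage:.2%}: mark region for spraying")
    else:
        print(f"Coverage {coverage:.2%}: skip region")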
A system that uses this spatial distribution information continuously and applies only the required amount of herbicide to the weed-infested region would be considerably more effective and would limit damage to the crop. Consequently, a high-spatial-resolution, real-time weed-infestation detection system is the key to a site-specific weed management framework.
A few years ago, during the development of software and hardware for AI image processing systems, most of each firm's developers were focused on improving the user interface. The situation changed fundamentally with the advent of the Windows operating system, when most developers shifted their attention to image processing problems.
Nevertheless, this has not yet led to a fundamental breakthrough in solving everyday tasks such as recognising faces, vehicle number plates, and street signs, or analysing remote-sensing and medical images. A variety of technical and scientific teams attack each of these long-standing problems by trial and error.
The task of automating the construction of software tools for solving such intellectual problems is being pursued actively, largely because of the high cost of existing technical solutions. In the realm of machine learning, the necessary toolkit should assist the analysis and recognition of images of previously unknown content and allow such applications to be developed by ordinary programmers, just as the Windows toolkit enables the creation of user interfaces for a wide range of problems.
Object recognition is a term that refers to a set of closely related computer vision tasks that involve identifying objects in photographs. Image classification involves predicting the class of a single object in an image. Object localization refers to determining the extent of one or more objects in an image and surrounding each with a bounding box. Object detection combines these two tasks, localising and classifying one or more objects in an image. When a customer or practitioner uses the phrase "object recognition", they usually mean "object detection". It can be confusing for newcomers to distinguish these related computer vision tasks.
Thus, with this approach, three tasks can be distinguished in computer vision projects:
Image Classification: Predict the type or class of an object in an image.
Input: An image containing a single object, for example a photograph.
Output: A class label (e.g., one or more integers mapped to class names).
Object Localization: Locate the presence of objects in an image and indicate their position with a bounding box.
Input: An image containing one or more objects, for example a photograph.
Output: One or more bounding boxes (e.g., each defined by a point, a width, and a height).
Object Detection: Locate the presence of objects in an image with a bounding box and determine the category or class of each detected object.
Input: An image containing one or more objects, such as a photograph.
Output: One or more bounding boxes (for example, defined by a point, a width, and a height), each with a class label.
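To make these three input/output contracts concrete, the following minimal sketch models them as plain Python data structures; the class names and fields are illustrative only and are not taken from any specific library.

from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox:
    x: float        # top-left corner
    y: float
    width: float
    height: float

@dataclass
class Detection:
    box: BoundingBox   # object localization: where the object is
    label: str         # object detection adds the class of each box
    score: float       # detector confidence

# Image classification returns a single label; object detection returns
# zero or more Detection records for the same image.
def describe(detections: List[Detection]) -> None:
    for d in detections:
        print(f"{d.label} ({d.score:.2f}) at "
              f"[{d.box.x}, {d.box.y}, {d.box.width}, {d.box.height}]")

describe([Detection(BoundingBox(12, 30, 64, 48), "weed", 0.91)])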
Object segmentation, also known as "object instance segmentation" or "semantic segmentation", is a further extension of this family of computer vision tasks, in which instances of detected objects are indicated by highlighting the objects' specific pixels rather than a coarse bounding box. From this we may conclude that object recognition refers to a suite of related computer vision tasks.
For example, while the distinction between object localization and image classification is fairly clear, the distinction between object localization and object detection can be confusing, especially because all three tasks are often referred to collectively as object recognition.
People can detect and recognise objects in images with ease. The human visual system is fast and accurate and can handle complex tasks such as identifying multiple objects and detecting obstacles with little conscious effort. With the availability of large datasets, faster GPUs, and better algorithms, we can now train computers to detect and classify multiple objects within an image with high precision.
In this report we introduce terms such as object localization, the loss functions used for object detection and localization, and finally examine an object detection algorithm known as "You Only Look Once" (YOLO).
Image classification involves assigning a class label to an image, whereas object localization involves drawing a bounding box around one or more objects in an image. Object detection is usually more difficult, as it combines both tasks: drawing a bounding box around each object of interest in the image and labelling it with a class. Together, this family of problems is referred to as object recognition.
Object recognition thus refers to a collection of related techniques for identifying objects in digital photographs. Region-based Convolutional Neural Networks (R-CNNs) are a family of convolutional neural networks built around region proposals for object localization and recognition, designed primarily for model accuracy. You Only Look Once, or YOLO (for example YOLOv3), is a second family of object recognition methods designed to be fast and suitable for real-time use.
1.2 Objective
Becoming more familiar with weed detection and control makes it easier for farmers to decide whether particular weeds should be tolerated or removed.
Weed detection and prevention can already begin before planting: weed seeds can be detected and removed, limiting the growth of unwanted plants.
A further objective is to limit the use of fertilizers and to reduce the acidity of the soil.
According to the Weed Science Society of America, a weed is defined as a plant whose growth is undesirable in a field, leading to ecological imbalance and economic loss. Occasionally, these weeds can also cause health problems in humans and animals. Some examples of weed plants are Poison Ivy and Tree of Heaven. Taxonomically speaking, there is no precise meaning of the word 'weed'; the term is largely subjective, because a plant can be a weed in one setting but not in another.
Sometimes useful weeds are intentionally grown in gardens. Often, weeds grow invasively or aggressively outside the species' normal natural habitat. According to the natural enemies hypothesis, certain plants become highly dominant when introduced into new environments owing to the absence of fauna that feed on them or of fauna that compete with them.
Some of the drawbacks of having weeds in cultivation are:
• They compete with the crop plants for food, water, sunlight, and soil nutrients.
• They cause skin irritation in humans and may cause distress in animals' gastrointestinal systems, as some of the weeds contain thorns, burrs, and even poisons.
• They can act as hosts for several pathogens that could degrade crop production, given the positive association between plants and their natural enemies.
• Weeds can damage other engineering works such as water sprinklers, channels, drains, and foundations.
• They degrade the aesthetic appearance of lawns, greens, and ornamental flowerbeds with their unappealing appearance.
It is essential to remove the weeds from agricultural fields to prevent all of the disadvantages mentioned above. The most prevalent techniques to eliminate the weeds are the use of herbicides and lethal wilting.
However, if the field area is large, it is considerably harder to monitor the crop plants with limited human labour. In addition, an expert is needed to identify the species, and there are not many professionals who can do this work. Thus, automated weed detection is one of the most suitable and feasible solutions for the efficient reduction or exclusion of chemicals such as fertilizers in crop production.
Researchers are focusing on combining modern approaches and ideas with established methods for automatically analysing and evaluating segmented weed photographs. Research studies discuss and compare weed control techniques, with a significant emphasis on explaining and presenting new and current research in weed identification and control automation.
1.4 Problem Statement
A weed is a plant that is undesired in the field. For as long as land has had to be exploited for food production, farmers have had to combat weed populations. Weed control contributes a substantial amount to the overall cost of production in traditional or conventional agriculture.
Automated weed detection is one of the most practical and feasible solutions for the efficient reduction or exclusion of chemicals such as fertilizers in crop production. Researchers are focusing on combining modern approaches and ideas with established methods for automatically analysing and evaluating segmented weed photographs and images. Research studies discuss and contrast the various weed control strategies, paying specific attention to describing modern and current work on weed detection and control automation.
Deep learning emulates the operation of the human brain, using multiple network layers to extract higher-level features from the raw input images, and is therefore used for many visual recognition tasks. In the field of agriculture, all the objects of interest (crop plants and weeds) are generally green in colour ("green on green"), so object recognition (in our case, species identification) is comparatively more complicated, since most object recognition algorithms use colour, shape, texture, and size to recognise objects.
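As a sketch of the kind of colour and shape cues such algorithms rely on (a hue histogram plus Hu shape moments computed with OpenCV), the snippet below is illustrative only; the file name and the HSV range used for the green mask are assumptions, and green-on-green scenes limit how discriminative such features can be.

import cv2
import numpy as np

def colour_shape_features(image_path: str) -> np.ndarray:
    """Concatenate a hue histogram with Hu moments of a rough plant mask."""
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Colour cue: normalised 32-bin histogram of the hue channel.
    hue_hist = cv2.calcHist([hsv], [0], None, [32], [0, 180]).flatten()
    hue_hist /= hue_hist.sum() + 1e-9

    # Shape cue: Hu moments of a rough green mask (hypothetical HSV range).
    mask = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()

    return np.concatenate([hue_hist, hu])

features = colour_shape_features("plant_sample.jpg")   # hypothetical file name
print(features.shape)                                  # 39-dimensional feature vector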
CHAPTER 2
LITERATURE REVIEW
In this section, we review relevant works conducted recently that use machine learning and image analysis for weed detection. Recent studies in the literature present a variety of classification approaches used to produce weed maps from UAV images.
Moreover, as the recent state of the art shows, machine learning algorithms offer more accurate and efficient options than traditional parametric algorithms when dealing with complex data.
Among these machine learning algorithms, the Random Forest (RF) classifier is becoming a very popular choice for remote sensing applications because of its generalisation performance and operational speed. RF has been found attractive for high-resolution UAV image classification and agricultural mapping. The Support Vector Machine (SVM) is another well-known machine learning classifier that has been widely used for weed and crop classification. Table 1 presents an overview of recent work on machine-learning-based approaches for weed detection.
Table 1: State-of-the-art techniques
Alam [5] developed a crop/weed detection and classification system in 2020 for an unspecified crop. The dataset consisted of images gathered from a private farm. To improve accuracy, they used object-based image analysis techniques together with a Random Forest (RF) classifier and reported an overall accuracy of 95%. Brinkhoff [6] created a land cover map over a 6,200 km2 section of the Riverina region of New South Wales, Australia, to locate and classify perennial crops. To boost precision, they employed object-based image analysis techniques in combination with a supervised SVM. When weighted by object area, the accuracy was 90.9 percent, while the overall accuracy was 84.8 percent for a twelve-class classification. Etienne [7] performed weed detection by UAV in 2019, targeting maize. The dataset contained images gathered from a private farm. To enhance precision, they used object-based image analysis together with NDVI and a YOLOv3 detector and reported an overall accuracy of 98%. Zhang [8] addressed weed species recognition in 2019, targeting eight weed species and a crop. The dataset consisted of 1,600 images gathered from a crop field in South China. They used object-based image analysis together with supervised SVM classification and reported an overall accuracy of 92.35%. Tu [9] measured canopy structure in 2019, targeting avocado trees. The dataset included images collected from avocado fields at Bundaberg, Australia. They used object-based image analysis with RF and reported an overall accuracy of 96%. Bakhshipour and Jafari [10] performed weed detection using shape features in 2018, targeting sugar beet. The dataset contained images collected by Shiraz University, Iran. They used object-based image analysis with supervised SVM classification and reported an overall accuracy of 95%. Abouzahir [11] addressed weed species detection in 2018, targeting soybean. The dataset consisted of images gathered from the São José farm, Brazil. They used object-based image analysis with supervised SVM classification and reported an overall accuracy of 95.07%. Gao [12] addressed weed recognition in 2018, targeting maize. The dataset consisted of images gathered from a crop field in Belgium. They used object-based image analysis with RF and K-Nearest Neighbours (KNN) classifiers and reported overall accuracies of 81% and 76.95%, respectively.
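As a sketch of how RF and SVM classifiers like those in the studies above are typically trained on extracted image features with scikit-learn, the snippet below uses a random placeholder feature matrix rather than any of the cited datasets, so the printed accuracies are meaningless except as a usage example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

# Placeholder feature matrix: one row of image features per sample,
# label 0 = crop, 1 = weed (random stand-ins for real extracted features).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 39))
y = rng.integers(0, 2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

print("RF accuracy :", accuracy_score(y_te, rf.predict(X_te)))
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))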
2.2 Related Work:
In the recent past, many deep learning models have been introduced for object recognition tasks. However, in the agriculture domain the object recognition task is challenging, because weed plants and crop plants may have similar colour, texture, shape, and size. Classification is a relatively simple task compared with recognition at lower heights, that is, at the leaf level. There are numerous public datasets at the leaf level for species identification [13], disease prediction in one species [14], or in several species [15]. However, for real-time applications we need to focus on datasets at the plant level. There have been many advances in these plant-level classification tasks. Most agricultural datasets focus on diseased crop identification. Only a few datasets, such as DeepWeeds [16], centre on weed plants that grow among the crop plants. However, DeepWeeds covers eight distinct species that are native to northern Australia and provides no information about the localization of the plants, thus limiting it to classification tasks.
The lighting of the image also plays a vital role in agricultural tasks, along with image quality. Most of the datasets mentioned above have a single lighting condition. Carrot Weed [17] is a dataset that provides images under various lighting conditions; however, as the name suggests, the crop images are restricted to carrot plants. Specific datasets such as Plant Phenotyping [18], the Plant Seedling dataset [19], and others [20] give information about the vegetation regions, but the limitation of these datasets is that the background is soil or stones rather than other plants. Even at the plant level, without proper annotations indicating the location of specific plant species, recognising a species among several plants is hard.
Recent advances in the field of object detection have led to collaboration between the agriculture and deep learning fields to achieve precision agriculture [21] [22]. The use of CNNs for recognising weeds among particular crops such as turfgrass [23], ryegrass [24], and soybean [25] has been shown to be a feasible strategy for weed management. Along with supervised models, unsupervised models with minimal labelling have also been used for weed detection [26]. In this work, we created a synthetic dataset of 80 weed species with more than one class per image to extend our study of Mask R-CNN's performance on weed recognition.
For our aerial image study, we focused on recognising MAM using UAV images. Owing to the ever-increasing population, the demand for food is expected to increase despite limited farmland. To grow more food with fewer resources, farmers are now adopting so-called precision agriculture. Precision agriculture involves the use of modern technology, including but not limited to drones and crop dusters, for crop management. Even though drones are not currently cleared for every agricultural need (for example, dispensing hazardous substances) because of Federal Aviation Administration (FAA) regulations, crop dusters can still be used for crop management as they fly at very low altitude (about 10 feet above the ground). However, crop dusters are more expensive than drones, and drones can be used for crop management in accordance with FAA regulations.
Recognising agricultural patterns, such as weeds, can be very challenging at high altitude. Using the multispectral images taken by drones, we can identify the plant species. There are only a few datasets [27] created for pattern recognition in agriculture. Consequently, we propose a three-level hierarchy (forests, trees, and leaves) for confirming the presence of MAM in each field, with the forest level being high-altitude imagery, the tree level being low-altitude imagery, and the leaf level being ground-level imagery. In this report, we examined the performance of YOLO at the low-altitude and ground levels. As there are no datasets dedicated to MAM recognition, we created synthetic data using NST and standard augmentation techniques.
2.3.1 Introduction:
Neural Style Transfer (NST) is a category of image stylisation within the realm of Non-Photorealistic Rendering (NPR). NPR is a subset of Computer Graphics (CG) focused on enabling a wide range of expressive styles for digital art. Unlike traditional CG, NPR does not focus on photorealism. Because it draws inspiration from other artistic media such as animation, drawing, and painting, NPR is often used in films and video games. The first two example-based style transfer algorithms relied on patch-based texture synthesis algorithms called Image Analogies [28] and Image Quilting [29]. Texture synthesis is mostly used to fill in holes in images, as in inpainting, or to enlarge small images. Texture synthesis algorithmically constructs a large image from a small digital sample. There are many techniques to achieve this goal; some of the available techniques are patch-based texture synthesis, pixel-based texture synthesis, tiling, and stochastic texture synthesis.
Early style transfer algorithms relied on patch-based texture synthesis. Patch-based texture synthesis is faster and more effective than pixel-based texture synthesis because it creates a new texture by replicating and stitching together patches at various offsets. Image Analogies, Image Quilting, and graph-cut textures are among the best patch-based texture synthesis algorithms.
• Image Quilting: A new image is synthesised by stitching together small patches of existing images. It can be used for only a single style at a time.
2.3.2 Classification vs. Localization vs. Detection:
One of the enduring questions in the field of Computer Vision is: what are the differences between classification, localization, and detection? Image classification is a relatively simple task compared with localization and detection. Image classification involves assigning a particular label to an image, which gives us information about that image's class. Object localization involves creating a bounding box around the objects within an image, but it says nothing about the image's class.
Detection, on the other hand, involves creating a bounding box around each region of interest (RoI) and assigning a class to the various objects in the image. Thus, object detection is a combination of image classification and localization. Often, the whole procedure of object detection is referred to as object recognition. The figure below illustrates the difference between classification, localization, detection, and instance segmentation.
In this project, the input will be an image containing one or more recognisable objects, and the output will be an image containing bounding boxes around those recognisable objects and a label indicating the class of the object in each bounding box. The object recognition task can be further improved by adding image segmentation.
CHAPTER 3
DESIGNING
3.1 MODULES:
3.2 Data Flow Diagram
Data flow diagrams are basic building blocks that show the interaction between various system components; they offer a high-level perspective as well as the boundaries of a specific system and a thorough overview of the system's parts.
Data flow diagrams begin at the source and terminate at the destination, decomposing from higher to lower levels. The most essential things to know about data flow diagrams are that they show data flow in one direction only, they do not represent loop structures, and they do not show timing.
The data flow analysis is discussed in this part, which contains information on the data used, the classification of data flow diagrams depending on their goals, and the various levels used in the project.
The general guidelines for creating the data flow diagrams in this project are as follows:
Data flow: It describes the path, or the flow of information, from one element to the next.
Process: The process defines how the output is created from the specified input. It describes the operations that are performed on the data up to the point where it is transformed, stored, or distributed.
Data store: The location where the data is saved after it has been extracted from the data source.
Source: The data's origin or destination point; the point at which an external entity serves as a catalyst for the data to flow to its intended destination.
Both the incoming input and the outgoing data or information should be modified by the process. The data store should not be isolated; it should be linked to at least one other process. The external elements of the cycle should be engaged through a single information flow. The flow of information in the diagram should run from left to right and from start to end. The data stores and their destinations should be named in capital letters on the data flow diagram, and the data flows and processes should be labelled with only the first letter capitalised. These guidelines ought to be followed when building the data flow diagrams.
Two entities are required to generate a DFD level-0 diagram for the suggested technique: one is the source and the other is the destination, together with a process. In this project, the source is the camera providing images as input, and weed detection is the destination.
YOLO is a method that provides real-time object detection using neural networks. The popularity of this algorithm is due to its accuracy and speed. It has been applied in a variety of settings to identify animals, people, parking meters, and traffic lights. This section describes the YOLO algorithm for object detection and how it works, and also highlights a few of its practical uses. Object detection answers two questions:
1. What exactly is it? This question asks you to identify the object in a particular picture.
2. Where is it? This question aims to pinpoint the precise location of the object within the picture.
Various methods are used for object detection, including RetinaNet, Fast R-CNN, and the Single-Shot MultiBox Detector (SSD). These methods have addressed the problems of data scarcity and modelling in object detection, but they cannot detect objects in a single algorithm run. The YOLO algorithm has grown in popularity because of its superior performance compared with the aforementioned object detection techniques.
Fig 4.1 (Flowchart of YOLO Algorithm).
YOLO is an acronym for "You Only Look Once". This algorithm detects and locates different objects in a picture in real time. Object detection in YOLO is carried out as a regression problem, which provides the class probabilities of the detected objects.
Convolutional neural networks (CNNs) are used by the YOLO method to recognise objects instantly. As the name implies, the approach needs just one forward propagation through a neural network to detect objects.
This means that prediction over the full image is performed in a single algorithm run; the CNN simultaneously predicts multiple class probabilities and bounding boxes.
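The sketch below shows one common way to run a pre-trained YOLOv3 network in a single forward pass with OpenCV's DNN module; the configuration, weights, and image file names are placeholders, the confidence threshold is arbitrary, and the weights are assumed to have been obtained separately.

import cv2
import numpy as np

# Placeholder paths: a YOLOv3 config/weights pair obtained separately.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("field.jpg")
h, w = image.shape[:2]

# Single forward propagation over the whole image.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

# Each row of an output is [cx, cy, bw, bh, objectness, class scores...].
for out in outputs:
    for row in out:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
            print(f"class {class_id} conf {confidence:.2f} "
                  f"box centre ({cx:.0f}, {cy:.0f}) size ({bw:.0f}x{bh:.0f})")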
There are numerous variations of the YOLO algorithm. Tiny YOLO and YOLOv4 are a
couple of the more popular ones.
4.1.4 HOW THE YOLO ALGORITHM WORKS
The YOLO algorithm works using the following three techniques:
Residual blocks
Bounding box regression
Intersection Over Union (IOU)
Residual blocks
First, the image is divided into an S x S grid of cells. The following image shows how an input image is divided into grid cells.
Fig 4.2
In the image above, there are numerous grid cells of equal size. Every grid cell detects the objects that fall within it: a grid cell is responsible for detecting an object if the object's centre appears within that cell.
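A minimal sketch of this grid-assignment rule follows: given an object's centre in normalised image coordinates, it computes which cell of an S x S grid is responsible for that object (S = 7 is just an example value).

from typing import Tuple

def responsible_cell(cx: float, cy: float, s: int = 7) -> Tuple[int, int]:
    """Return (row, col) of the S x S grid cell containing the box centre.

    cx and cy are the object's centre in normalised coordinates (0..1).
    """
    col = min(int(cx * s), s - 1)   # clamp so cx == 1.0 stays inside the grid
    row = min(int(cy * s), s - 1)
    return row, col

# Example: an object centred slightly right of and below the image centre.
print(responsible_cell(0.55, 0.62))   # -> (4, 3) for a 7 x 7 grid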
Bounding box regression
Every bounding box in the image is described by the following attributes:
Width (bw)
Height (bh)
Class (for example, person, car, traffic light, etc.), represented by the letter c
Bounding box centre (bx, by)
The following image shows an example of a bounding box. The bounding box has been
represented by a yellow outline.
Fig 4.3
Intersection Over Union (IOU)
IOU describes how much two boxes overlap; YOLO uses it to keep only output boxes that surround an object well. Each grid cell predicts bounding boxes and their confidence scores. The IOU is equal to 1 if the predicted bounding box is identical to the actual (ground-truth) box. This mechanism eliminates bounding boxes that do not match the actual box.
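A minimal sketch of the IOU computation for axis-aligned boxes given as (x, y, width, height) follows; it returns 1.0 for identical boxes and 0.0 for disjoint ones.

def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x, y, width, height)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh

    # Overlapping rectangle (zero area if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (10, 10, 50, 50)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))      # disjoint boxes  -> 0.0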
Fig 4.4
4.3 Technologies Used
4.3.1 Python
Python is a widely used, general-purpose, high-level programming language. Its design philosophy emphasises code readability, and its syntax allows programmers to solve problems in fewer lines of code than would be possible in languages such as Java or C++.
The language provides constructs intended to enable clear programs on both a small and a large scale. Python supports several programming paradigms, including object-oriented, imperative, functional, and procedural styles. It has a large standard library, as well as features such as dynamic typing and automatic memory management.
Why Python?
Python scripts are easier to understand and use than many other programming languages, so they can be written and executed much faster. One of the principal reasons for the popularity of Python is the simplicity of its syntax, which can easily be read and understood even by novice developers. Because Python is an interpreted language, changes to the code base can also be tested quickly.
For the Python programming language, there is a wide variety of documentation, guides, and video tutorials available that learners and developers of all skill levels and ages can use to improve their understanding.
This essentially means that if someone has a problem with the Python language, they can quickly get help from practitioners at all levels, from beginner to expert. Getting timely support plays a basic part in the progress of a project, which otherwise could suffer setbacks.
Since 2006, Google has employed the Python programming language for a variety of applications and platforms, and a significant amount of time and money has gone into its design and development; they have even created a dedicated portal for Python. The list of supporting tools and documentation for the Python language continues to grow in the developer community.
Other libraries, such as NLTK for natural language processing or scikit-learn for machine learning applications, each serve a defined purpose. For the Python language, there are many frameworks and libraries available.
Another advantage of the Python language is its versatility: it may be used in a wide range of settings, such as mobile applications, desktop applications, web development, hardware programming, and so on. The adaptability of Python makes it more appealing to use because of its large number of applications.
6) Big Data, Machine Learning and Cloud Computing
Cloud computing, machine learning, and big data are among the most discussed topics in software engineering right now, and they help many organisations transform and improve their processes and workflows.
After the R language, Python is the second most popular tool for data science and analytics. The Python language, figuratively speaking, covers many of the data-related roles in an organisation. Because of its many uses, including the ease of analysing and extracting useful information from data, Python is used for a large portion of research and development work.
In addition, many Python libraries are used in machine learning projects every day, for example TensorFlow for neural networks and OpenCV for computer vision.
7) First-Choice Language
The key reason for Python's popularity in the development industry is its versatility, which makes it the ideal choice for many software engineers and students. Students and engineers generally want to learn a popular language, and Python is unquestionably in the highest demand on the market right now.
Many software engineers and data science students use the Python language for their development projects. Learning Python is one of the most important components of data science certification courses. Thus, the Python language can offer many promising career opportunities for students. Because of the variety of applications of Python, one can pursue various career options and is not restricted to a single one.
Python does not impose any restrictions on developers in terms of the kind of applications they can build. Other programming languages may not offer the same amount of flexibility and variety from learning just one language.
For these reasons, the industry keeps hiring additional Python developers and programmers, allowing the language to continue to grow and gain reputation.
10) Automation
The Python programming language can help a great deal with project automation, since it has many tools and modules that make tasks much easier. It is remarkable that one can reach a high level of automation using only basic Python code.
Python is also a leading choice for the automation of software testing; it is surprising how little time and how few lines of code are required to write programs for automation tools.
4.3.2 TensorFlow
TensorFlow is an open-source artificial intelligence library that builds models using data flow graphs. It allows developers to create large-scale neural networks with many layers. Classification, perception, understanding, discovery, prediction, and generation are some of the most common uses of TensorFlow.
TensorFlow, a Python library for efficient numerical computation, was built and released by Google. The core library may be used directly to create deep learning models, or it can be used in conjunction with wrapper libraries built on top of TensorFlow.
The Object Detection API: it is still a core machine learning challenge to create accurate models capable of localising and recognising multiple objects in a single image. The recently open-sourced TensorFlow Object Detection API has delivered state-of-the-art results.
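As a sketch of how a detector exported with the TensorFlow Object Detection API is typically loaded and run, assuming a SavedModel directory and an image file whose paths are placeholders (the output keys follow the API's usual naming convention):

import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # placeholder path

image = tf.io.decode_jpeg(tf.io.read_file("field.jpg"))        # placeholder image
input_tensor = tf.expand_dims(tf.cast(image, tf.uint8), 0)     # add batch dimension

detections = detect_fn(input_tensor)                           # one forward pass

boxes = detections["detection_boxes"][0].numpy()     # normalised [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(np.int32)

for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:                                   # arbitrary confidence threshold
        print(f"class {cls} score {score:.2f} box {box}")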
4.3.3 PyTorch
The Torch library-based machine learning framework PyTorch was created by Meta AI
and is now a member of the Linux Foundation. It is used for applications like computer
vision and natural language processing. It is software that is available for free and open
source under a modified BSD licence. Although PyTorch also provides a C++ interface,
the Python interface is more refined and the main focus of development.
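As a small PyTorch example in this setting, the sketch below runs a pre-trained torchvision Faster R-CNN detector on one image; it uses generic COCO classes rather than a weed-specific model, assumes a recent torchvision version, and the image path is a placeholder.

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Pre-trained, general-purpose detector (COCO classes, not weed-specific).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = read_image("field.jpg")                      # placeholder path, uint8 CxHxW
batch = [convert_image_dtype(image, torch.float)]    # list of float tensors in [0, 1]

with torch.no_grad():
    output = model(batch)[0]                         # dict with boxes, labels, scores

for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
    if score > 0.5:                                  # arbitrary confidence threshold
        print(f"label {int(label)} score {float(score):.2f} box {box.tolist()}")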
4.3.4 OpenCV
OpenCV (Open Source Computer Vision Library) is a programming library for computer vision and machine learning. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine learning in commercial applications. Because OpenCV is a BSD-licensed project, it is simple for organisations to use and modify the code.
The library contains more than 2,500 optimised and well-tested algorithms, including a wide selection of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognise faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, remove red eyes from images taken with flash, follow eye movements, and recognise scenery. OpenCV has a user community of about 47 thousand people, with over 14 million downloads. The library is used widely by companies, research groups, and governmental bodies.
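As a small OpenCV usage sketch in the spirit of this project's annotated outputs, the snippet below draws a labelled bounding box on an image and saves it; the coordinates, label, and file names are placeholders.

import cv2

image = cv2.imread("field.jpg")                       # placeholder input image
x, y, w, h = 120, 80, 200, 150                        # placeholder detection box

# Draw the bounding box and a class label above it.
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.putText(image, "weed 0.91", (x, y - 8),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

cv2.imwrite("field_annotated.jpg", image)             # placeholder output path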
o Processor: Intel® Core™ i5-4300M at 2.60 GHz (1 socket, 2 cores, 2 threads per core), with 8 GB of DRAM
o Software: all the programming tools necessary for the Weed Detection project are freely available to individual developers
As a result, it is apparent that the Weed Detection project is feasible in terms of resources.
4.5.3 Accuracy
Spatial accuracy at the sub-centimetre level.
4.5.4 Flexibility
Adaptable to any plant type.
4.5.5 Quick/Speed
Real-time operation is preferred, with prompt control response upon detection.
4.5.6 Need
To prevent excess use of fertilizers and herbicides for crop production
4.5.7 Significance
Reduces the use of chemicals on crops, hence decreasing toxicity in crops.
CHAPTER 5
RESULT
Fig. 5.2 Sample Output 2
CHAPTER 6
CONCLUSION
Machine learning has a wide variety of applications, leaving researchers free to choose an area of interest. Many research findings have been published, but undoubtedly many research areas are still untouched. Besides, with the fast PCs and signal processors available since the 2000s, digital image processing has become the most common form of image processing, used mainly because it is the most versatile and least costly method.
This report has outlined the current weed control strategies and the advances involved in automatic weed detection. The imaging systems and data processing methods for extracting weed patches in fields have received a lot of attention. The limitations of recently developed optical detection systems were described, as well as a number of essential concepts for future systems. Without accurate intra-row crop and weed detection systems, automatic mechanical weed control techniques are limited to treating between-row weeds. In chemical weed control, the lack of an intra-row plant detector results in an excess of herbicides being sprayed in fields. If a reliable system for detecting weed plants could be built, practical economic and ecological savings could be made.
CHAPTER 7
REFERENCES
[1].https://www.apriorit.com/dev-blog/599-ai-for-image-processing
[2].https://www.researchgate.net/publication/337464355_OBJECT_DETECTION_AND
_IDENOTIFICATION_A_Project_Report
[3]. https://pjreddie.com/darknet/yoloo/
[4].https://towardsscience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-
algorithms36d53571365e?gi=951190c93f15
[5]. Alam, M.; Alam, M.S.; Roman, M.; Tufail, M.; Khan, M.U.; Khan, M.T. Constant
MachineLearning Based Crop/Weed Detection and Classification for Variable-Rate
Spraying in Agriculture. In Proceedings of the 2020 7th International Conference on
Electrical and Electronics Engineering (ICEEE), Antalya, Turkey, 14-16 April 2020.
[6]. Brinkhoff, J.; Vardanega, J.; Robson, A.J. Land Cover Classification of Nine Perennial
Crops Using Sentinel-1 and-2 Data. Remote Sens. 2020.
[7]. Etienne, A.; Saraswat, D. Machine learning approaches to automate weed detection
by UAV based sensors. In Autonomous Air and Ground Sensing Systems for
Agricultural Optimization and Phenotyping IV; International Society for Optics and
Photonics: Bellingham, WA, USA, 2019.
[8]. Zhang, S.; Guo, J.; Wang, Z. Combing K-means Clustering and Local Weighted
Maximum Discriminant Projections for Weed Species Recognition. Front. Comput.
Sci. 2019.
[9]. Tu, Y.H.; Johansen, K.; Phinn, S.; Robson, A. Measuring canopy structure and
condition using multi-spectral UAS imagery in a horticultural environment. Remote
Sens. 2019.
[10]. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural
networks in weed detection using shape features. Comput. Electron. Agric. 2018.
[11]. Abouzahir, S.; Sadik, M.; Sabir, E. Enhanced Approach for Weeds Species Detection
Using Machine Vision. In Proceedings of the 2018 International Conference on
Electronics, Control, Optimization and Computer Science (ICECOCS), Kenitra,
Morocco, 5–6 December 2018.
[12]. Gao, J.; Nuyttens, D.; Lootens, P.; He, Y.; Pieters, J.G. Recognising weeds in a maize
crop using a random forest machine-learning algorithm and near-infrared snapshot
mosaic hyperspectral imagery. Biosyst. Eng. 2018.
[13] Girshick, R., Donahue, J., Darrell, T., & Malik, J. , “Rich feature hierarchies for
accurate object detection and semantic segmentation.,” In Proceedings of the IEEE
conference on computer vision and pattern recognition 2014.
[14] Felzenszwalb, P. F., & Huttenlocher, D. P., “Efficient graph-based image
segmentation,” International journal of computer vision 2004.
[15] Mallah, C., Cope, J., & Orwell, J. , “Plant Leaf Classification using Probabilistic
Integration of Shape, Texture and Margin Features.,” Pattern Recognit. Appl., 3842.,
2013.
[16] Huang, Mei-Ling; Chang, Ya-Han, “Dataset of Tomato Leaves,” Mendeley Data,
V1, doi: 10.17632/ngdgg79rzb.1, 2020.
[17] CHOUHAN, Siddharth Singh; Kaul, Ajay; SINGH, UDAY PRATAP; & Science,
Madhav Institute of Technology, “ A Database of Leaf Images: Practice towards Plant
Conservation with Plant Pathology,,” Mendeley Data, V4, doi:
10.17632/hb74ynkjcn.4, 2020.
[18] Mohanty, S. P., Hughes, D. P., & Salathé, M, “Using deep learning for image-
based plant disease detection.,” Frontiers in plant science, 7, 1419, 2016.
[19] Olsen, A., Konovalov, D. A., Philippa, B., Ridd, P., Wood, J. C., Johns, J., ... &
White, R. D. , “DeepWeeds: A multiclass weed species image dataset for deep
learning.,” Scientific reports, 9(1), 1-12., 2019.
[20] Lameski, Petre & Zdravevski, Eftim & Trajkovik, Vladimir & Kulakov, Andrea. ,
“Weed Detection Dataset with RGB Images Taken Under Variable Light Conditions.,”
112-119. 10.1007/978-3-319-67597-8_11. , 2017.
[21] Minervini, M., Fischbach, A., Scharr, H., & Tsaftaris, S. A. , “Finely-grained
annotated datasets for image-based plant phenotyping.,” Pattern recognition letters, 81,
80-89., 2016.
[22] Giselsson, T., Dyrmann, M., J\orgensen, R., Jensen, P., & Midtiby, H., “ A Public
Image Database for Benchmark of Plant Seedling Classification Algorithms.,” arXiv
preprint., 2017.
[23] Sudars, K., Jasko, J., Namatevs, I., Ozola, L., & Badaukis, N. , “Dataset of
annotated food crops and weed images for robotic computer vision control.,” Data in
brief, 31, 105833. https://doi.org/10.1016/j.dib.2020.105833, 2020.
[24] Wang, A., Zhang, W., & Wei, X. , “A review on weed detection using ground-
based machine vision and image processing techniques.,” Computers and electronics
in agriculture, Vols. 158, 226-240, 2019.
[25] Bakhshipour, A., & Jafari, A. , “ Evaluation of support vector machine and
artificial neural networks in weed detection using shape features,” Computers and
Electronics in Agriculture, Vols. 145, 153-160, 2018.
[26] Yu, J., Sharpe, S. M., Schumann, A. W., & Boyd, N. S. , “Deep learning for image
based weed detection in turfgrass.,” European journal of agronomy, Vols. 104, 78-84,
2019.
[27] Yu, J., Schumann, A. W., Cao, Z., Sharpe, S. M., & Boyd, N. S. , “ Weed detection
in perennial ryegrass with deep learning convolutional neural network.,” frontiers in
Plant Science, no. 10, 1422, 2019.
[28] dos Santos Ferreira, A., Freitas, D. M., da Silva, G. G., Pistori, H., & Folhes, M.
T. , “Weed detection in soybean crops using ConvNets,” Computers and Electronics
in Agriculture, Vols. 143, 314-324, 2017.
[29] Louargant, M., Jones, G., Faroux, R., Paoli, J. N., Maillot, T., Gée, C., & Villette,
S. , “Unsupervised classification algorithm for early weed detection in row-crops by
combining spatial and spectral information,” Remote Sensing, no. 10(5), 761, 2018.
[30] Bah, M. D., Hafiane, A., & Canals, R. , “Deep learning with unsupervised data
labeling for weed detection in line crops in UAV images.,” Remote sensing, 10(11),
1690., 2018.
[31] Hertzmann, A., Jacobs, C. E., Oliver, N., Curless, B., & Salesin, D. H. , “Image
analogies,” In Proceedings of the 28th annual conference on Computer graphics and
interactive techniques, 2001.
[32] Efros, Alexei A., and William T. Freeman., “Image quilting for texture synthesis
and transfer.,” Proceedings of the 28th annual conference on Computer graphics and
interactive techniques., 2001.