An Overview of Autonomous Crop Row Navigation Strategies For Unmanned Ground Vehicles
ABSTRACT
Unmanned ground vehicles (UGVs) are becoming popular for use in agricultural environments. These unmanned systems are implemented to address human labor shortages throughout the agricultural industry and to improve food safety throughout the production cycle of produce crops. Common uses of UGVs in agriculture include: detection of animal fecal matter, surveys of crop growth, detection of crop damage from storms or floods, and detection of unwanted pests or molds. Navigation of crop rows is typically accomplished using vision-based cameras and global positioning system (GPS) units. Machine vision strategies are implemented to detect crop row contours and edges to ensure proper navigation of rows without damaging crops. A number of other control and navigation strategies exist for autonomous movement of UGVs. This paper provides a survey and overview of autonomous navigation strategies for UGVs with applications to agricultural environments.
1. Introduction

The development of UGVs for various uses, from agricultural operations to military operations, has been occurring for the last few decades (Sistler, 1987). Improving the efficiency of agricultural production is a concern as the world population continues to increase. The unique ability of UGVs to travel through fields while supporting sizable payloads makes them ideal for an agricultural environment. The development of UGVs for particular agricultural applications is ongoing in academia and the engineering industry to combat issues of labor shortages and foodborne illness which affect the agricultural industry today (Hamrita et al., 2000). Many of the currently available commercial UGVs do not offer autonomous navigation for agricultural environments, and instead depend on remote control (Husky Unmanned Ground Vehicle Robot, n.d.; Jackal Small Unmanned Ground Vehicle, n.d.). Various researchers have developed autonomous UGVs for specific agricultural tasks in crop rows; however, these systems are designed only for that particular application (Lefcourt et al., 2016). Commercially available or open-source hardware and software are not available for crop navigation.

Currently, GPS and geographic information systems (GIS) are the most commonly used means of guiding vehicles through an agricultural environment without the input of a human operator. In general, these vehicles rely on pre-planned routes, rather than having the ability to operate in new or changing environments. In order to ensure high location accuracy, precision GPS receivers are used. Real-time kinematic (RTK) and Differential GPS (DGPS), which use reference stations located in the target environment, are used to enhance the accuracy of the GPS signal down to a few centimeters. The autonomous weeding robot developed by Bakker et al. (2010a) navigates using this type of technology, relying on two GPS antennas connected to an RTK-DGPS receiver, improving location accuracy to 1–2 cm (Bakker et al., 2010a).

An alternative to GPS navigation is to use sensors or cameras to interpret the local environment as the UGV travels. Radar and ultrasonic sensors can be used to detect large obstacles or landmarks, and can be used in well controlled indoor environments or roads with predefined surroundings for navigation. However, in uncontrolled outdoor environments, natural variability can test the limits of these types of sensors. Light Detection and Ranging (LIDAR) systems can also be used to produce either a 2D or 3D rendering of the surroundings. LIDAR systems utilize a laser or combination of lasers, and are not limited by visibility or ambient light levels.
While 3D LIDAR sensing systems can be expensive, cheaper 2D sensors can be manipulated to produce a 3D rendering for use in detecting static and dynamic objects in the surroundings of a UGV, as discussed by Rejas et al. (2015). A 2D laser sensor mounted atop a continuously rotating platform was used to emit a focused laser beam and receive and interpret reflection levels of objects in the path of the laser. Based on this information, the distance between the robot and an obstacle or object of interest was determined. The rate of rotation of the laser was adjusted by the system based on the density of the objects detected in the path of the laser and the vehicle speed. Using a Hokuyo 30LX LIDAR system with a detection angle of 270° and a detection range of 0.1–30 m, the system was used to create a 3D image of a room containing the scanner. The software for this system was developed in ROS for optimal processing speed. Using a rotation speed of 1.5 RPS, the scanner can be mounted atop a vehicle travelling up to 5.5 m/s to avoid obstacles in its path. After various experimental trials, it was determined that the rotation speed of the scanner should be adjusted based on the speed of the vehicle for optimal results. The system was able to detect obstacles between 0.1 and 10 m surrounding the UGV, and could be used to produce 3D map renderings of the vehicle's environment as it travels.

Camera vision can also be used to visually interpret the environment for navigation and obstacle avoidance. Cameras can be used to produce 2D images or 3D renderings of the local environment. A standard camera can be used to capture 2D still or video images of the crop environment, which can be processed for guidance. Navigation based on image processing utilizes images as an input signal to detect edges of crops and rows by recognizing differences in color or shape, or to avoid obstacles while travelling through a field. Imaging relies on a light source to induce reflectance or a fluorescence response of the area of interest. In agricultural fields, sunlight or infrared light sources can be used to induce reflectance for data acquisition, while various light sources, including ultraviolet light, can induce a fluorescent response. Image processing can be used to navigate an unknown environment and respond to changes in real-time (Mousazadeh, 2013).

In contrast, stereo vision cameras are capable of creating a 3D image of the environment by imitating human vision, which combines data from two separate images of the same scene to interpret the depth of various objects. Navigation based on stereo vision through agricultural crop rows requires a detectable height difference of the crop above the ground (English et al., 2014). In earlier growing stages, crops may be too short to provide enough information for navigation based on the height of the crop. Additional image processing is needed to distinguish between the crop and ground compared to mono vision cameras. Since the processing of the stereo images required to determine depth information is significant, this method may be less efficient than the use of mono vision cameras to collect 2D images of the environment (Rejas et al., 2015).

English et al. (2014) developed a guidance method for an autonomous weed spraying platform utilizing two different cameras for image capture: an IDS uEye CP and a low-cost Microsoft LifeCam Cinema webcam. A row-tracking control system developed in C++ using OpenCV was implemented on the platform. The image processing algorithm performed a series of processing and calculation steps on the image collected by the camera. First, pre-processing was performed to remove lens-distortion effects, then the location of the horizon was detected to determine roll and pitch. Next, the image was straightened using the roll and pitch estimations, and the image was warped to create an overhead view for interpretation. Using the warped image, estimates of the vehicle location and heading were determined, along with the deviation of the robot heading angle from the desired track. Using a Proportional-Integral controller, closed-loop experiments were run to observe the vehicle's ability to autonomously navigate wheat and sorghum stubble rows. For comparison, RTK-GPS data was collected to record vehicle position throughout the experiments. The RMSE between the vehicle position and the true lane location for these trials was 28 mm and 120 mm, respectively. Despite noisy images in some locations, the vehicle was able to successfully navigate both crop rows without modification to the image processing algorithm.

Similarly, Xue and Xu (2010) developed a vision-based row guidance method for an autonomous weeding and spraying robot. This vehicle was equipped with a Sony CCD camera mounted to the front of the platform. The row detection method relied on the detection of two crops in front of the robot. Calculating the center of the detected plants, and drawing a line between these points, the position and heading of the vehicle were determined by comparing to a predefined line at the center of the image. The image processing algorithm used edge detection and predetermined information about the crop size and distance between each crop to detect plants in the image. The collected image was converted to a binary image, and after detecting the contours of the plants in the image, each plant's centroid was determined. These two points were used to create the guidance line for navigation. Using a fuzzy logic controller with two inputs (vehicle position and heading) and one output (motor output commands), the vehicle was run through a small vegetable crop lane. The true position of the vehicle throughout the trial was recorded using a painted line drawn by the robot as it travelled. With the vehicle travelling at 0.2 m/s, it was able to maintain a position within ±35 mm of the desired path.

Takagaki et al. (2013) created an image processing method that could be used for autonomous navigation of a ground vehicle through agricultural environments based on ridge and furrow detection between rows. Two different image processing algorithms were used for images with shade and images without shade, as determined by analysis of gray level histograms. In order to navigate through rows with shade, color differences were used, while texture detection was used to navigate when shade was not present. For shade images, shadows were present on one side of the ridge based on the angle of the sun. The edge of a ridge was determined by observing the pixel value difference between the light side of the ridge and the shaded side of the ridge. After the edges of the ridges to the left and right of the vehicle were determined, a Hough transform was applied to determine the equations of these lines, and a center line between the two was calculated for vehicle navigation. For images without shade, the variance within a square region was observed to find the area in the image where the soil is smooth, indicative of a furrow. Using the minimum variance values, the furrow can be distinguished from the ridges in a binary transformation. Finally, a Hough transform is applied to obtain the equation of the furrow line for guidance along the row.

Testing the algorithms in four different fields, the camera was used to acquire images with a variety of lighting conditions. The image sorting algorithm correctly distinguished the shaded images from those without shade 100% of the time. Of the 30 images taken with shade, the image processing algorithm successfully determined the row center 100% of the time. Of the 23 images without shade, the image processing algorithm successfully determined the lane center 87% of the time. The image processing algorithm, Takagaki et al. predict, could be implemented into a control system to guide an UGV through the rows (Takagaki et al., 2013).

Navigation in an agricultural field may require not only navigating through a crop row, but also recognizing the start and end of a row, and turning between rows. While image or sensor data can interpret when an UGV has reached the end of a row, additional control is needed to instruct the vehicle to make turns and locate the next row. Based on the size of the rows in a particular field as well as the size of the UGV, software can be programmed to enable the robot to make a predetermined turning maneuver when the end of a row is reached.

The purpose of this paper is to provide an overview of autonomous navigation strategies for UGVs with applications to agricultural environments. The paper is organized as follows. Navigation techniques for UGVs are described in Section 2. Two of the most popular lane detection algorithms are summarized in Section 3, and include crop row detection using contour tracing and Hough line transformation. Section 4 describes lane detection based on popular control strategies: PID and fuzzy logic. The paper is then concluded in Section 5.

2. Navigation techniques

In order to control the motion of an UGV, manual or autonomous control can be used. A manual control scheme can be utilized to direct a
this brightness. When all three RGB values are equal, the corresponding color falls between white (maximum brightness) and black (minimum brightness), storing only the intensity information; thus the pixel will be in grayscale.

For example, in this case, the green mask is of interest since the region of interest (lettuce crop) is green, while other areas of the field are brown. Isolating the green mask of the image, which is the second layer of the RGB file, will form a grayscale image corresponding to the intensity of green in the image. As can be seen in Fig. 3, the top image displays the original crop rows with the RGB information for a point within the crop row at (300, 210). The grayscale values are represented as a fraction between 0 and 1, calculated by computing the quotient of 171, the green value, over 256, which is approximately 0.668. The greater the green index is at a particular point, the greater the value of the quotient, and thus the brighter the grayscale pixel will be. Pixels with a greater intensity, closer to 255 (or 100%), are more green, and appear more white in grayscale. Pixels with a lower intensity, closer to 0, are less green, and appear darker in grayscale.

Since the green objects in the image will appear as brighter pixels, a brightness threshold can be set to distinguish between the areas in the grayscale image, converting the image to binary. In this case, a global threshold, which uses a single gray level for the entire image, can be set (Demant et al., 2013). A global threshold will define all gray values above the threshold value as white, and all those below the threshold as black. For example, if the threshold value is set to 50, the point (300, 210) will be converted to a white pixel, with RGB value (255, 255, 255). However, if the threshold is set to 172, just above the original green value of 171 from the original image, this same pixel is converted to black, with RGB value (0, 0, 0), since it falls below the set threshold (Fig. 4). In this case, a value of 50 is a more appropriate threshold for this image.

Fig. 4. Binary images based on threshold values of 50 (top) and 172 (bottom).
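The masking and thresholding steps above can be sketched in a few lines of MATLAB. This is a minimal illustration rather than the implementation used for the example figures; the file name and variable names are assumptions.

    % Isolate the green mask and apply a global threshold (assumed values)
    rgb = imread('lettuce_rows.jpg');  % hypothetical example image
    green = rgb(:,:,2);                % second layer of the RGB file
    gray = double(green) / 256;        % green intensity as a fraction of full scale
    bw = gray > (50 / 256);            % global threshold of 50 converts to binary
    imshow(bw);                        % white pixels mark the green (crop) regions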
Now that the image is binary, the contours of the lanes between the crop rows can be traced. The contour of the lane is the connected line which encloses all of the pixels that make up the object. In order for the program to detect a contour, the concept of connectedness must be defined. Two basic types of connectedness are used for 2-D images: four-connectedness and eight-connectedness (Fig. 5). Starting with a single point, the pixels surrounding it can be considered connected only if they fall next to this point in the horizontal or vertical directions when using four-connectedness. In contrast, diagonal pixels are also considered to be connected to the point when using eight-connectedness.

Based on the selected connectedness definition, a contour detecting algorithm can be developed using the following steps for lane detection. First, a search is initiated for a transition between the rows and the lanes between the rows. Once this transition is located, the next neighbor, based on the connectedness definition, is determined. Since the contour represents the border or perimeter of the object, only pixels along the object border are considered neighbors. Moving in a clockwise or counterclockwise direction, the entire image is searched until the contour is traced. The contour has been completely traced when the algorithm reaches the point at which it started, signifying a closed region.
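A minimal MATLAB sketch of this tracing procedure is given below, using the Image Processing Toolbox boundary tracer rather than a hand-coded neighbor search; the starting point and initial search direction are assumptions for illustration.

    % Trace the contour of the first object found in the binary image bw
    [r, c] = find(bw, 1, 'first');                % first transition into an object
    contour = bwtraceboundary(bw, [r c], 'W', 8); % trace using eight-connectedness
    % contour is an N-by-2 list of [row, column] points; the trace is complete
    % when it returns to the starting pixel, signifying a closed region
    imshow(bw); hold on;
    plot(contour(:,2), contour(:,1), 'r', 'LineWidth', 2);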
In general, the object of interest in a binary image is considered to be that which is represented by white pixels (pixels with a value of 1), while black pixels (value of 0) are considered the background. In this example, the crop rows are the area of interest that is denoted by white pixels, but the lanes between the rows are the contours that should be detected for row guidance. As such, the image can be inverted, so that all of the pixels representing the crop rows are flipped from a value of 1 to a value of 0 and all of the pixels representing the lanes are flipped from 0 to 1. Considering the threshold value of 50, the image is inverted, and the point (300, 210) is now black, while the lane pixels are white (Fig. 6).

Fig. 6. Inverted binary image based on a threshold of 50.

In various software platforms, such as MATLAB, predefined functions are available to detect contours without having to program the algorithm. In MATLAB, the function "bwconncomp.m" can be used to find all of the connected objects within a binary image. This function stores the x-y coordinates of all of the points that make up the contour, which can later be used to calculate the center of a lane or row. After the contours are detected in the image, they can be displayed on the image. In the row detection case, the lanes between the rows should be the largest contours in the area of interest (Fig. 7). The two largest contours in this image are the lanes to the left and right of the center crop row. Based on the shape of the contours, the left lane is better detected than the right lane because of the shadows within the field. The lane detection can be improved in a variety of ways, including filtering the binary image to remove noise and tuning the threshold value to adapt to the ambient light in the field.

Fig. 7. Contours in lettuce row.
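The inversion and connected-component steps might look as follows in MATLAB; selecting the two largest components as the left and right lanes is an assumption matching the example above.

    % Invert the binary image so the lanes, not the rows, are the objects
    lanes = ~bw;
    cc = bwconncomp(lanes);                       % find all connected objects
    numPixels = cellfun(@numel, cc.PixelIdxList); % size of each object
    [~, order] = sort(numPixels, 'descend');
    laneIdx = order(1:2);                         % two largest contours = lanes
    % centroids of the two lanes, for later lane-center calculations
    stats = regionprops(cc, 'Centroid');
    laneCenters = cat(1, stats(laneIdx).Centroid);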
3.2. Crop row detection using Hough line transformation

In contrast to row detection, the lines which follow the edges of the crop row can be determined using the Hough transformation (Hough Transformation, 2016). Similar to the previous method, the first step is to isolate the green mask of the image (Fig. 8). Next, an edge detection algorithm can be used to convert the image to binary based on the shapes in the image. For example, Canny edge detection, which combines five steps (smoothing, finding gradients, non-maximum suppression, double thresholding, and edge tracking by hysteresis), can be used to distinguish between edges and the rest of the image (Canny Edge Detection, n.d.). The output image from the Canny edge detection algorithm is a binary image, with the edges of objects as white pixels and the rest of the image as black pixels (Fig. 9).

Fig. 8. Grayscale image from the green mask of the example image.
Fig. 9. Binary image output of the Canny edge detection algorithm.

Once the edges of the objects within the image have been detected, a Hough transform can be performed to determine the equations of the lines within the image. In MATLAB, the "hough.m" function can be used to implement a Standard Hough Transform, using the parametric representation of a line (Equation (1)):

ρ = x cos(θ) + y sin(θ)   (1)

where ρ is the perpendicular distance from the image origin to the line of interest in the image and θ is the angle, ranging between −90° and 90°, between the perpendicular projection and the x axis (Fig. 10).

Fig. 10. Graphical representation of ρ ("rho") and θ ("theta") from the Hough transform (Hough Transformation, 2016).

The inputs to the "hough" function are a binary image and optional rho and theta resolution values. The "hough" function outputs the Standard Hough Transform (SHT), which is a parameter space matrix with rows and columns corresponding to rho and theta respectively, in addition to the rho and theta arrays. The values in the SHT are the number of points that lie on the line specified by the particular rho and theta. The peak values in this matrix signify potential straight lines within the image.

The peak values in the SHT matrix can be determined using the "houghpeaks.m" function, which uses the SHT matrix and a specified number of peaks to look for. The output of "houghpeaks.m" is a matrix containing the rho and theta values of the specified number of peaks. Finally, the rho and theta values of the peaks can be used to calculate the equations of the lines in the image using "houghlines.m" based on the formula in Equation (1). These lines can be drawn in the image for visual comparison with the crop rows. For example, using the example image, the Hough transform was able to detect two straight lines along the edges of the center crop row (see Fig. 11).

Fig. 11. Hough transform lines in the example image.

It can be noted that while the line along the left side of the crop row follows the edge of the row well, the line along the right side seems to be angled slightly off from the actual row. Utilizing filters to remove noise from the image before performing the Canny edge detection, as well as adjusting the Hough transform configuration, can be useful in improving the line detection for this particular image. In addition, the Hough transform did not detect both edges of the lanes between the rows, but rather only detected the edges of the largest row. A limitation of this particular method is that the resulting lines are straight-line approximations of the crop row edges. Based on the curvature of the particular row, this straight-line approximation may not convey the necessary information for successful navigation. Selecting a smaller region within the collected image in which to perform the Hough transform to determine the row edges would improve the approximation.
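The full edge-detection and line-fitting chain described above maps directly onto MATLAB's built-in functions; the peak count and variable names here are assumptions for illustration.

    % Canny edge detection followed by the Standard Hough Transform
    edges = edge(gray, 'canny');       % binary edge image (Fig. 9 analogue)
    [H, theta, rho] = hough(edges);    % SHT matrix plus the theta and rho arrays
    peaks = houghpeaks(H, 2);          % look for the two strongest lines
    lines = houghlines(edges, theta, rho, peaks);
    % draw the detected lines over the image for visual comparison
    imshow(gray); hold on;
    for k = 1:numel(lines)
        xy = [lines(k).point1; lines(k).point2];
        plot(xy(:,1), xy(:,2), 'g', 'LineWidth', 2);
    end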
4. Lane detection with control strategies

Based on the two lane detection techniques described above, two approaches can be taken to guide a UGV through the crop rows. For one, the UGV can be configured to follow the edges of the lane between two crop rows. Another approach would be to configure the UGV to detect the edges of the crop row itself. Since the wheels (or tracks) of the vehicle will be on the ground between the crop rows, damage can occur if the wheels (or tracks) begin to roll over the crops themselves rather than over the soil between the rows. The image data used to detect the crop rows or the lanes between the rows can be used in combination with a PID or fuzzy logic controller to ensure that the UGV stays within the desired lane and avoids damaging the crops.

The configuration of the vehicle platform will determine which approach is appropriate. For example, if the vehicle is to use the edges of the crop row for navigation, it will need to take an image of the entire row. If the UGV is much shorter than the crops in the row, the camera image may be blocked by the surrounding crops. To overcome this, a platform can be used that is tall enough to mount the camera above the row. Alternately, a platform could be used that straddles the crop row, with one side of the vehicle in the lane on the left and the other in the lane on the right side of the crop row. In contrast, an UGV which uses the edges of the lane between crop rows can navigate regardless of whether it can see over the tops of the plants. Both UGVs that fit within a lane and UGVs that straddle an entire crop row could utilize the lane detection approach. Since the lane detection approach seems to be better suited for a wider variety of UGV platforms, controllers using this approach are outlined below.

To convert the lane detection data to commands for the motors for UGV navigation, an area of interest within the image can be determined. For example, based on the UGV speed, the transmission speed of the image to the computer that processes the image, and the transmission speed of the navigation commands to the motors, navigation data at a certain distance in front of the vehicle is needed in order for movement correction to be applied in time. In a 2-D image, this area of interest can be selected as a set pixel value on the y-axis, which represents a particular distance from the front of the UGV.

In addition, the center of the vehicle relative to the image at this area of interest should be known. This can be determined through a calibration, even before developing the controller. Theoretically, each time the vehicle is centered and facing forward, the center of the row being detected should align to the same pixel value. Centering the vehicle within a sample row and processing a series of images taken by the robot at this location can be used to determine the pixel value of the row center. Depending on the accuracy of the image processing algorithm, this center pixel value should be applicable across rows of many widths, and can be used as the set-point.
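As a small illustration, the error signal fed to either controller reduces to a difference of pixel values at the area of interest; laneCenters is assumed to come from the contour sketch above, setPoint from the calibration, and the sign convention is a design choice.

    % Compute the set-point error at the area of interest
    leftLane   = min(laneCenters(:,1));        % x-coordinate of left lane centroid
    rightLane  = max(laneCenters(:,1));        % x-coordinate of right lane centroid
    laneCenter = (leftLane + rightLane) / 2;   % pixel value of the lane center
    err = laneCenter - setPoint;               % error in pixels from the set-point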
4.1. Method #1: lane detection with PID control

One of the oldest and most commonly used forms of system control is PID, which combines proportional, integral, and derivative terms. PID control requires a user to tune three constant parameters to control the system response: kp, ki, and kd. Increasing the kp term generally improves the system rise and settling times, while increasing ki improves the steady state error, and increasing kd improves the percent overshoot. However, changing each of the gain terms can also negatively impact the system response characteristics, as summarized in Table 1. For example, increasing kp can increase the percent overshoot of the response. As such, fine tuning to determine the optimal combination of the three gain terms is important.

Table 1
Impact of increasing PID gain terms on system response (Introduction: PID Controller Design, n.d.).

Gain Term    Rise Time       Overshoot    Settling Time    Steady State Error
kp           Decrease        Increase     Small Change     Decrease
ki           Decrease        Increase     Increase         Eliminate
kd           Small Change    Decrease     Decrease         No Change

The error between a measured system variable and a desired set-point is continuously calculated and is used to calculate an output to a process variable. Equation (2) displays the general form of a PID controller:

u(t) = kp e(t) + ki ∫₀ᵗ e(τ) dτ + kd de(t)/dt   (2)

where u(t) is the controller output variable and e(t) is the error between the desired location of the vehicle (set-point) and the actual location of the vehicle based on the image data acquired. The derivative and integral values can be approximated in discrete time for implementation in control software as seen in Equations (3) and (4):

de(t)/dt ≈ [e(t) − e(t − 1)]/T   (3)

∫₀ᵗ e(τ) dτ ≈ Σ_{k=0}^{t} e(k)·T   (4)

where t is the current time step, t − 1 is the previous time step, and T is the sampling rate. PID controllers are used for a variety of industrial applications, such as temperature control in furnaces and pH regulators (Common Industrial Applications of PID Control, n.d.). PID control has also been used in agricultural applications for driver assistance. Foster et al. utilized a PID controller to autonomously regulate the velocity of a hydrostatic windrower to improve the machine productivity (Mousazadeh, 2013).
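A discrete PID update following Equations (2)–(4) can be written as a short MATLAB function; this is a generic sketch rather than a controller from any of the cited systems, and the structure fields are assumptions.

    function [u, state] = pidStep(err, gains, state, T)
    % One discrete PID update: accumulate the integral, difference the error
    state.integral = state.integral + err * T;             % running sum, Equation (4)
    derivative = (err - state.prevError) / T;              % Equation (3)
    u = gains.kp * err + gains.ki * state.integral + gains.kd * derivative; % Eq. (2)
    state.prevError = err;                                 % store e(t-1) for next step
    end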
The implementation of a PID controller requires gain tuning in order to assist in guiding the system to a predetermined set-point. The following steps outline the software algorithm for lane detection using a PID controller (Fig. 12). In Step 1, a forward-facing camera connected to the UGV will be instructed to capture an image of the area. This image will be stored within the software, and then processed in Step 2 by the image processing algorithm. The goal of the image processing algorithm will be to determine the pixel value of the crop row on both the left and right sides of the robot at the predetermined area of interest (y-axis pixel value). With the pixel values of the crop rows surrounding the UGV, Step 3 can be completed to calculate the pixel value of the center of the crop lane. Then, the set-point pixel value can be subtracted from the lane center pixel value to determine the error from the set-point in Step 4. Once the error is determined, it can be fed through a PID controller with predefined gains in Step 5 to determine appropriate motor output values. Then, in Step 6, the PID output motor values can be sent to the motors. Finally, in Step 7, the algorithm is instructed to return to Step 1 and repeat the loop.
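The seven steps can be sketched as a simple control loop; the camera object cam, the image-processing helper detectRowEdges, and the motor command function setMotors are hypothetical placeholders for platform-specific code.

    % Steps 1-7 as a navigation loop (gains, setPoint, T from calibration)
    state = struct('integral', 0, 'prevError', 0);
    while true
        img = snapshot(cam);                        % Step 1: capture an image
        [leftPx, rightPx] = detectRowEdges(img);    % Step 2: rows at area of interest
        laneCenter = (leftPx + rightPx) / 2;        % Step 3: lane center pixel
        err = laneCenter - setPoint;                % Step 4: error from set-point
        [u, state] = pidStep(err, gains, state, T); % Step 5: PID output
        setMotors(baseSpeed + u, baseSpeed - u);    % Step 6: differential command
    end                                             % Step 7: repeat the loop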
Setting up the PID controller for navigation will require manual gain tuning efforts for the real system. The goals of this tuning effort should be to reduce the percent overshoot of the system response, to avoid running into crops after a correction. The desired maximum percent overshoot will vary based on the width of the crop lane that the vehicle travels down, as a higher overshoot may be allowable within wider rows. In addition, reducing the settling time of the system, such that the distance between the center of the UGV and the lane center is within a predetermined error from center (±1 or ±2 inches), will maximize the amount of time the robot is travelling forward at maximum speed.

4.2. Method #2: lane detection with fuzzy logic control

A Fuzzy Logic controller utilizes fuzzy logic to produce desired outputs based on given inputs. Fuzzy logic, in contrast to Boolean logic, allows for varying degrees of truthfulness between 0 and 1, rather than absolute truth and falsity. In order to design a fuzzy controller, membership functions must be developed for the system input and output, coupled with a set of rules to handle the inputs and determine what output is appropriate for the current state of the system (Gerla, 2005).

A Fuzzy control system has three parts: fuzzification, rule evaluation, and defuzzification. A set of crisp inputs, for example sensor input data, is transformed into a set of fuzzy inputs through fuzzification. A set of input membership functions, which encompass the relationship between all possible input values, is used to convert these sensor input values to fuzzy input values ranging between 0 and 1. Developing appropriate membership functions for the input set is important; using too few can lead to slow system response and using too many can cause instability in the system. After the crisp inputs are converted to fuzzy inputs, these values are fed through a set of rules developed for the system. These rules are used to determine the controller output based on the sensor input data in the form of IF-THEN statements, which relate the output (dependent) variables to the input (independent) variables. Based on the fuzzy input values, the rules are evaluated, and the rule that is most true is used to determine the fuzzy outputs. Finally, the fuzzy outputs are converted into crisp outputs through defuzzification, which requires a second set of membership functions, converting the fuzzy outputs between 0 and 1 to meaningful output values.
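A compact MATLAB sketch of these three stages for a single error input is shown below; the triangular membership functions, the three-rule base, and the weighted-average defuzzification are illustrative design choices, not a published controller.

    % Triangular membership function (zero outside [a, c], peak of 1 at b)
    tri = @(x, a, b, c) max(min((x - a)/(b - a), (c - x)/(c - b)), 0);

    e = err / maxErr;               % crisp input, normalized to [-1, 1]
    % Fuzzification: degrees of membership in Negative, Zero, Positive
    muN = tri(e, -2, -1, 0);
    muZ = tri(e, -1,  0, 1);
    muP = tri(e,  0,  1, 2);
    % Rule evaluation: IF error is Negative THEN steer left, and so on;
    % each rule maps to an output center (fraction of a full turn command)
    w = [muN, muZ, muP];
    centers = [-0.5, 0, 0.5];
    % Defuzzification: weighted average of rule outputs gives the crisp command
    u = sum(w .* centers) / sum(w);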
Fuzzy Logic control has been used for steering control in agricultural and military robots. A Fuzzy Logic controller was implemented on the DORIS robot at the University of Siegen in Germany to control the commands sent to the motors based on the steering wheel angle and the level of force applied to the brake or gas pedal, ensuring smooth turning maneuvers (Sailan et al., 2014). In addition, an UGV developed by Xue et al. utilized a Fuzzy Logic controller to guide a robot through a corn crop row based on the detected location of crop rows on either side of the robot. The Fuzzy controller used two inputs (offset from the center line and heading angle), fuzzified by five triangular membership functions with a uniform distribution. The input information was compared to the previous position of the robot, and various turn signals were outputted to the motors based on the position and heading angle (Xue et al., 2012).

The implementation of a Fuzzy Logic controller requires calibration and development of membership functions for both the input and output variables of the system. The following steps outline the software algorithm for lane detection using a Fuzzy Logic controller (Fig. 13). Similar to the PID controller, in Step 1, a forward-facing camera connected to the UGV will be instructed to capture an image of the area. This image will be stored within the software, and then processed in Step 2 by the image processing algorithm. With the pixel values of the crop rows surrounding the UGV, Step 3 can be completed to calculate the pixel value of the center of the crop lane. Then, the set-point pixel value can be subtracted from the lane center pixel value to determine the error from the set-point in Step 4. This step can be eliminated if the membership functions are calibrated to directly interpret the lane center information. Once the error is determined, it can be fuzzified using predetermined input membership functions for the system in Step 5. In Step 6, the fuzzified error value will be evaluated using the rule set for the Fuzzy Logic Controller, and used to determine fuzzified output motor values using the output membership functions. Then, in Step 7, the fuzzy output motor values can be defuzzified so that they can be sent to the motors. The defuzzified motor command values can be sent to the motors in Step 8. Finally, in Step 9, the algorithm is instructed to return to Step 1 and repeat.

In order to utilize the Fuzzy Logic controller, a calibration step must be performed to determine the values of the membership functions for the input and output variables. Calibration for the image input variable can be completed by setting up the UGV at different, known locations across the front of the crop lane to determine what range of error values corresponds to the physical position of the robot within the lane. The rules to evaluate the input should be set up such that the UGV performs certain turning maneuvers based on the input from the image data. Predetermined set speeds can be used to set up the membership functions for the output variables to enable the UGV to perform the various turning maneuvers to return to the center of the crop lane. Finally, the fuzzy output should be converted to a command value which is meaningful to the motors so that the maneuver can be performed.

5. Concluding remarks

Unmanned systems such as UGVs are implemented to address human labor shortages throughout the agricultural industry and to improve food safety throughout the production cycle of produce crops. The most common uses of UGVs in agriculture are detecting contaminated plants and crops, searching for the presence of animals or pests, and identifying crop funguses and molds. UGVs navigate crop rows, and as such, autonomous navigation strategies have been developed. These strategies typically make use of machine vision and of PID and fuzzy control methods. The purpose of this paper was to provide a comprehensive overview of autonomous navigation strategies for unmanned ground vehicles (UGVs) with applications to agricultural environments.

Appendix A. Supplementary data

Supplementary data related to this article can be found at https://doi.org/10.1016/j.eaef.2018.09.001.

References

Bakker, T., van Asselt, K., Bontsema, J., Müller, J., van Straten, G., 2010a. Systematic design of an autonomous platform for robotic weeding. J. Terramechanics 47 (2), 63–73.
Bakker, T., van Asselt, K., Bontsema, J., Müller, J., van Straten, G., 2010b. A path following algorithm for mobile robots. Aut. Robots 29 (1), 85–97.
Canny Edge Detection, n.d. OpenCV. Retrieved from: http://docs.opencv.org/master/da/d22/tutorial_py_canny.html#gsc.tab=0.
Common Industrial Applications of PID Control, n.d. Control Station Inc. Retrieved from: http://controlstation.com/pid-control/.
Demant, C., Garnica, C., Streicher-Abel, B., 2013. Overview: segmentation. In: Industrial Image Processing: Visual Quality Control in Manufacturing. Springer, Berlin, pp. 83–112.
English, A., Ross, P., Ball, D., Corke, P., 2014. Vision based guidance for robot navigation in agriculture. In: 2014 IEEE International Conference on Robotics & Automation (ICRA). Hong Kong.
Gerla, G., 2005. Fuzzy logic programming and fuzzy control. Stud. Logica 79 (2), 231–254.
Hamrita, T.K., Tollner, E.W., Schafer, R.L., 2000. Toward fulfilling the robotic farming vision: advances in sensors and controllers for agricultural applications. IEEE Trans. Ind. Appl. 36 (4), 1026–1032.
Hough Transformation, 2016. The MathWorks, Inc. Retrieved from: http://www.mathworks.com/help/images/ref/hough.html.
Husky Unmanned Ground Vehicle Robot, n.d. Clearpath Robotics. Retrieved from: http://www.clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/.
Introduction: PID Controller Design, n.d. University of Michigan. Retrieved from: http://ctms.engin.umich.edu/CTMS/index.php?example=Introduction&section=ControlPID.
Jackal Small Unmanned Ground Vehicle, n.d. Clearpath Robotics. Retrieved from: http://www.clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/.
Lefcourt, A., Kistler, R., Gadsden, S., Kim, M., 2016. Automated cart with VIS/NIR hyperspectral reflectance and fluorescence imaging capabilities. Appl. Sci. 7 (1).
Mousazadeh, H., 2013. A technical review on navigation systems of agricultural autonomous off-road vehicles. J. Terramechanics 50 (3), 211–232.
Patel, P., 2015, May 6. Cheap centimeter-precision GPS for cars, drones, virtual reality. IEEE Spectrum. Retrieved March 15, 2016, from: http://spectrum.ieee.org/tech-talk/transportation/self-driving/cheap-centimeterprecision-gps-for-cars-and-drones.
Rejas, J.-I., Sanchez, A., Glez-de-Rivera, G., Prieto, M., Garrido, J., 2015. Environment mapping using a 3D laser scanner for unmanned ground vehicles. Microprocess. Microsyst. 39 (8), 939–949.
Rows of Lettuce, n.d. National Geographic Partners, LLC. Retrieved from: http://intelligenttravel.nationalgeographic.com/2012/06/12/the-lettuce-of-wrath/salinas-ca-rows-of-lettuce/.
Sailan, K., Kuhnert, K.D., Karelia, H., 2014. Modeling, design and implement of steering fuzzy PID control system for DORIS robot. Int. J. Comp. Commun. Eng. 3 (1), 57–62.
Sistler, F.E., 1987. Robotics and intelligent machines in agriculture. IEEE J. Robot. Automation RA-3 (1), 3–6.
Takagaki, A., Masuda, R., Iida, M., Suguri, M., 2013. Image processing for ridge/furrow discrimination. In: 4th IFAC Conference on Modelling and Control in Agriculture, Horticulture and Post Harvest Industry. Espoo, Finland.
Xue, J., Xu, L., 2010. Autonomous agricultural robot and its row guidance. In: 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, pp. 725–729.
Xue, J., Zhang, L., Grift, T.E., 2012. Variable field-of-view machine vision based row guidance of an agricultural robot. Comput. Electron. Agric. 84, 85–91.
Yang, J., Dang, R., Luo, T., Liu, J., 2015. The development status and trends of unmanned ground vehicle control system. In: 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (CYBER). Shenyang.