
4 Mechatronics and Applications
4.1 Automation and Control
J. K. Schueller
Abstract. Sensors, actuators, and controllers can be combined into automation or control systems that control machines or systems in their desired tasks. Classical or modern control theory can aid the design and analysis of many systems. The components and techniques are discussed and some examples presented.
Keywords. Automation, Control, Sensors, Actuators.

4.1.1 Introduction
Machines or systems which have the capability to self-act or self-regulate are called
automated. Automation allows these machines and systems to perform their tasks in a
productive, efficient, reliable, and accurate manner without great amounts of human
intervention. Control is the exercise of regulation, whether by machine or human in-
tervention. Automation allows machines to control themselves. The development of
information technologies has brought more capabilities to automation and control.
Although there have long been some automatic controls, such as the float regulators
on ancient Greek water clocks, control theory and an engineering understanding of
automation have been rather recent developments. James Watt’s centrifugal fly ball
governor for steam engines was an early development and a vital contributor to the
Industrial Revolution. While some fundamental theory was developed in the 1800s by
Maxwell, Lyapunov, and others, most control theory was developed in the 1900s in
response to the needs of long-distance telephony, World War II military, and the aero-
space industry.
Agricultural and biological engineers have used automatic controls and control the-
ory in their efforts to get their machines and systems to respond properly within the
complex biological/chemical/physical environments in which they must operate. The
most famous early agricultural control example is the Ferguson System, developed by
Harry S. Ferguson in the 1920s, which allowed tractors to vary implement soil work-
ing depth to maintain a constant load on the tractor. There are now many examples of
successful agricultural and biological automatic control implementations. Fertilizer
applicators mix and apply the right fertilizer mixture according to a variable rate map
and GPS position location. Environmental controls of livestock buildings keep ani-
mals healthy and productive. Automated irrigation systems apply the correct amount
of water when and where it is needed. These are some examples of how automatic
controls are widely applied in agricultural engineering.

4.1.2 Control Theory


One way control systems are classified is by whether they are open-loop or closed-
loop. In open-loop systems a command is given to a system and it is assumed the sys-
tem performs properly. Closed-loop systems compare the results or output of the sys-
tem to the desired output and take appropriate corrective actions. Closed-loop systems
generally exhibit more accurate performance, but cost more and may tend to be more
unstable. Most systems to which control theory is applied are closed-loop.
Figure 1 shows an example of a closed-loop system. The input, which indicates the
desired output, is compared to the sensed output and the error between those two is
used to generate a command by the controller. The actuator then generates a control
action, which causes the plant (the machine or system being controlled) to behave in
the desired manner. An open-loop system will not have the sensor, which allows feed-
back of the output. Open-loop systems are therefore more sensitive to disturbances and
system parameter variations because the resulting changes in the output are not sensed.
Another way control systems can be classified is by whether they are sequential,
continuous, or discrete.
Sequential control systems cause a machine or system to go through a set series of
operations. They generally do not exercise much regulation within the operations. For
example, a sequential control system may remove the milking machine when a cow is
finished in a milking parlor, open the gate to let the cow out, close the gate after the
cow is out and then let the next cow in. Sequential control analysis and design often
makes use of ladder logic. Complex sequential control systems can be analyzed with
Boolean algebra, truth tables, flowcharts, or state diagrams. Contemporary sequential
control systems may use programmable logic controllers (PLCs) to allow the sequence
of control to be easily modified in software rather than requiring hardware or connec-
tion changes.
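The milking-parlor sequence above can be sketched as a simple table-driven sequential controller. The operation and condition names below are hypothetical, and a real PLC would typically express the same logic as a ladder diagram rather than in a general-purpose language.

```python
# Hypothetical sketch of a sequential controller for the milking-parlor
# example: each step performs one operation, then waits until the
# corresponding sensor condition becomes true before advancing.

SEQUENCE = [
    ("remove_milker",   "cow_finished"),  # remove the milking machine
    ("open_exit_gate",  "cow_out"),       # open the gate to let the cow out
    ("close_exit_gate", "gate_closed"),   # close the gate after the cow
    ("admit_next_cow",  "next_cow_in"),   # let the next cow in
]

def run_sequence(conditions):
    """Step through the fixed series of operations. 'conditions' maps
    condition names to booleans (simulated sensor readings). Returns the
    list of operations performed before the first unmet condition."""
    performed = []
    for operation, condition in SEQUENCE:
        performed.append(operation)
        if not conditions.get(condition, False):
            break  # the controller would wait here for the sensor
    return performed
```

For example, if the cow has finished but has not yet left, the controller performs the first two operations and then waits at the exit gate.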
Continuous control systems are the usual subject of control theory analysis [1-3].
Such systems are physical systems in which the input-output behavior of the system
can be described by ordinary differential equations in time. Many of the components
have behavior that can be modeled as a constitutive equation relating a through variable to an across variable. For example, the current through an electrical component or the fluid flow through an orifice can be related to the voltage
across or the pressure across that component.
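As an illustration, two standard textbook constitutive relations of this through/across kind are Ohm's law and the orifice equation; the numeric values used in the example below are arbitrary.

```python
import math

def resistor_current(voltage_across, resistance):
    """Electrical example: the through variable (current) follows from
    the across variable (voltage) via Ohm's law, i = v/R."""
    return voltage_across / resistance

def orifice_flow(pressure_drop, cd, area, density):
    """Hydraulic example: flow through an orifice from the pressure
    across it, q = Cd * A * sqrt(2 * dP / rho) (standard orifice
    equation; Cd is the discharge coefficient)."""
    return cd * area * math.sqrt(2.0 * pressure_drop / density)
```

Both functions map an across variable (voltage, pressure drop) to the corresponding through variable (current, volumetric flow).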

[Figure: the input (desired output) is compared with the sensed output; the resulting error enters the controller's control algorithm, whose command drives the actuator; the actuator's control action acts on the plant, and the plant output is measured by the sensor and fed back.]
Figure 1. Closed-loop control system.



Physical systems can be modeled by writing the differential equations for the vari-
ous components, be they electrical, mechanical, fluid, heat transfer, electromechanical,
hydraulic, or some other system. The equations can then be transformed to the Laplace
domain to get a transfer function which describes the dynamic input-output relation-
ship.
A block diagram, such as Figure 2, with mathematical transfer functions replacing
the component names can be generated to combine the various components into a
complete system. Block diagram algebra can then be used to simplify the representa-
tion and calculate the overall transfer function. Blocks indicate multiplication or divi-
sion and summers indicate addition or subtraction.

[Figure: block diagram of a hydromechanical servo built from three components: valve, q = Kv·Xv; linkage, Xv = (X − Xc)/2; cylinder, dXc/dt = (1/A)·q. Reduced by block diagram algebra, the closed-loop transfer function is Xc/X = (1/2)·Kv·(1/(As)) / (1 + (1/2)·Kv·(1/(As))) = 1/((2A/Kv)s + 1).]
Figure 2. Block diagram and transfer function of a hydromechanical servo.
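The servo's first-order behavior can be checked numerically. The sketch below Euler-integrates the combined component equations, dXc/dt = (Kv/(2A))·(X − Xc), for a unit step input; after one time constant (t = 2A/Kv) a first-order system reaches about 63.2% of the step.

```python
def servo_step_response(Kv, A, t_end, dt=1e-4):
    """Euler-integrate the hydromechanical servo of Figure 2:
    dXc/dt = (Kv/(2A)) * (X - Xc), a first-order lag with time
    constant tau = 2A/Kv. Unit step input X = 1, starting at Xc = 0."""
    X, Xc, t = 1.0, 0.0, 0.0
    while t < t_end:
        Xc += dt * (Kv / (2.0 * A)) * (X - Xc)
        t += dt
    return Xc
```

With Kv = 2 and A = 1 the time constant is 1 s, so the response after 1 s is close to 0.632, matching the analytical transfer function 1/((2A/Kv)s + 1).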



Most practical control theory, including all of that discussed here, applies only to linear systems, those that have a linear relationship between the input and the output. However, nonlinear systems can often be approximated by linearizing them at the
points of normal operation. Although agricultural and biological systems may also be
very complex, many times there are dominant behavioral characteristics. Many such
systems can be modeled as approximately being a gain, delay, integrator, first-order,
or second-order system. The output responses of such systems can be approximated by
modeling the inputs as being impulses, steps, or ramps.
The types of systems studied by agricultural and biological engineers are often so
complex or unknown that they cannot be modeled analytically. However, their re-
sponse behavior can be determined experimentally. Such systems can be subjected to
known inputs, such as steps, sinusoids, or random excitations, to find their behavioral
characteristics. The transient, frequency, or stochastic responses allow a dynamic
model to be developed. Care must be taken to avoid overfitting, exciting nonlinearities
(especially saturations and dead zones), and exceeding dynamic and frequency ranges
of the system or the instrumentation. But such step, swept-sine, and stochastic model-
ing can be very powerful.
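A minimal version of such experimental step-response modeling, assuming clean noise-free data from an approximately first-order system: the gain is estimated from the final value and the time constant from the time of the 63.2% crossing.

```python
import math

def estimate_first_order(times, outputs):
    """Estimate gain and time constant of a first-order system from a
    unit-step response record: gain = final value, tau = time at which
    the output first reaches 63.2% of that final value."""
    gain = outputs[-1]
    target = 0.632 * gain
    for t, y in zip(times, outputs):
        if y >= target:
            return gain, t
    return gain, None

# Synthetic "experimental" data for a system with K = 2, tau = 0.5 s:
times = [i * 0.01 for i in range(500)]
outputs = [2.0 * (1.0 - math.exp(-t / 0.5)) for t in times]
```

Real records would need filtering and care about the cautions above (noise, nonlinearities, and instrument dynamics), but the estimated parameters recover the true K = 2 and tau = 0.5 here.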
Many contemporary systems are controlled by computers. If the computer control
system is dynamically very fast compared to the system being controlled, it may be
modeled as a continuous controller. However, many times the theory of discrete, or
digital, control is used. This type of control recognizes that the computer only inter-
faces with the system at fixed instances in time. Most often, z-transforms are used in
place of Laplace transforms as the classical discrete analytical tool.
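The flavor of discrete control can be shown with the standard zero-order-hold discretization of a first-order lag; the resulting difference equation is what a digital controller would iterate at each fixed sample instant.

```python
import math

def discretize_first_order(tau, T):
    """Zero-order-hold discretization of dy/dt = (u - y)/tau with sample
    period T: y[k+1] = a*y[k] + (1 - a)*u[k], where a = exp(-T/tau).
    This is exact at the sample instants for piecewise-constant input."""
    a = math.exp(-T / tau)
    return a, 1.0 - a

def simulate(tau, T, steps, u=1.0):
    """Iterate the difference equation from rest for a constant input."""
    a, b = discretize_first_order(tau, T)
    y = 0.0
    for _ in range(steps):
        y = a * y + b * u
    return y
```

After 100 samples of 0.01 s with tau = 1 s (one time constant), the discrete simulation agrees with the continuous result 1 − e⁻¹.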
Modern control theory is replacing classical control theory in many situations.
Modern control theory uses state-space techniques in which the state of the system is
modeled by equations which describe how the state variables change. Computers and
the refinement of linear algebra have made modern control analyses tractable. Section
3.2 (Control and Optimization) of this handbook has more details on modern control
theory as well as some of the contemporary advances in the areas of robust control and
fuzzy control. Modern, robust, fuzzy, and other recently developed control techniques
are areas of much current research, development, and application to practice. They
have much promise in improving the performance and reliability of automated ma-
chines and systems.
4.1.3 Control and Automation Implementation
Automatic control was historically implemented by mechanical components. For
example, Watt’s governor used centrifugal force on rotating masses to move a linkage,
which actuated a steam valve. In a similar manner, Ferguson’s tractor draft control
system used the force against a spring to cause a mechanical displacement, which con-
trolled a hydraulic valve, in turn causing the draft-causing implement to be raised or
lowered. Many systems still use mechanical elements in the control system. But the
development of electronics and computers has widened the use of automatic controls
and improved performance.

[Figure: the desired pressure is set at the operator interface; an amplifier compares it with the sensed pressure voltage and drives a hydraulic valve, hydraulic motor, and fluid pump feeding a piping network; a pressure sensor measures the fluid pressure and feeds the voltage back.]
Figure 3. Pressure control system.

An example of the evolution of controls can be seen in the thermostatic temperature controls of buildings. Early fans, heaters, and air conditioning systems often used mechanical bimetallic strips with mercury switches to keep the temperature of buildings
housing plants, animals, or humans at the desired temperature. The differential thermal
expansion of the bimetallic strips repositioned a bubble of mercury in a curved tube,
thereby activating an electrical circuit supplying power to the climate control equip-
ment until the temperature changed to the desired setting and the bimetallic strips
caused the circuit to be unpowered. Contemporary climate control systems may use
computers, sensors, and actuators to achieve better and more sophisticated control.
The principle of sensing the output, comparing it to the desired output, and taking cor-
rective action remains whether the control system is mechanical or electronic.
In Figure 1, the plant is the object that is to be controlled. The machine or system
must have an output, which can be measured, and a means for the behavior of the
plant to be affected by the control action. The other components of Figure 1 are added
to the system to complete the closed-loop control system. As an example, Figure 3
shows a pressure control system of the type used in liquid pesticide or fertilizer appli-
cations. The valve, motor, pump and piping networks form the plant to be controlled.
Sensors must accurately measure the output of the machine or system; otherwise
the controller will take wrong actions and the output of the system will be accordingly
wrong. Besides accuracy, sensors must have adequate resolution and range. In addition,
the sensors should have fast dynamic response compared to the plant and the controller.
Analog sensors provide a voltage or current proportional to the plant’s output. Sen-
sors are commonly used to measure displacement (potentiometers, linear variable dif-
ferential transformers, resolvers, capacitive sensors), acceleration (accelerometers),
temperature (thermocouples, resistance thermometers, thermistors), strain (strain
gauges), and many other quantities. Sensors can be used with mechanical components
to measure another quantity. For example, strain gauges measure strain, but put on a
diaphragm, a strain gauge can measure pressure.
As computers and digital electronics are used more in automatic controls, there is
increased use of digital sensors. These may be analog sensors, which have integrated
electronics to supply a digital output, or they may be inherently digital. For example,
the speeds of shafts on many agricultural machines are determined by counting pulses
over short time periods from variable reluctance sensors near the teeth of rotating
gears. Recent advances have produced new kinds of sensors. The global positioning system (GPS) can be used for large-scale displacements, and machine vision and optical sensors are becoming more powerful and more commonly used.
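The pulse-counting speed measurement mentioned above reduces to a one-line calculation; the tooth count and window length below are arbitrary examples.

```python
def shaft_speed_rpm(pulse_count, teeth, window_s):
    """Shaft speed from pulses counted over a short time window, as with
    a variable-reluctance sensor facing a rotating gear: one pulse per
    tooth, so revolutions = pulses / teeth, scaled to rev/min."""
    revolutions = pulse_count / teeth
    return revolutions / window_s * 60.0
```

For a 60-tooth gear, 100 pulses counted in a 0.1 s window correspond to 1000 rpm. The window length trades resolution against response time, which matters for the dynamic-response requirement discussed above.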
The output feedback from the sensor is compared to the desired output (input) in
the controller. Based upon the error between the desired output and the actual output
feedback, and often the time history of that error, the controller issues a command.
Determining what command the controller should issue for different errors and error
histories is a task for the engineer designing the control system.
Control systems may be implemented in electronic components without the use of
computers. Many such systems use operational amplifiers to compare the input and
output feedback and then generate a command proportional to the error. It is possible
to add circuitry to make the command also partially proportional to the integral of the
error or the derivative of the error. This common type of control is known as PID-
proportional, integral, and derivative. Increasing the proportional gain (the amount of
command generated per unit of error) will cause the system to respond faster and have
less steady-state error, but it may also decrease stability and lead to more overshoot of
the desired output. Adding integral control may remove the steady-state error, but de-
crease stability. Adding derivative control may stabilize the system, but make it more
susceptible to noise and saturation. Tuning the controller to the best control settings
may improve the system’s performance without adding hardware.
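The PID law described above can be written as a minimal textbook controller. This is a sketch for illustration only; a production controller would add integral anti-windup, derivative filtering, and output limits.

```python
class PID:
    """Textbook PID controller:
    command = Kp*e + Ki*integral(e) + Kd*de/dt, computed at a fixed
    sample period dt. Gains are tuning choices, as discussed above."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired, measured):
        error = desired - measured
        self.integral += error * self.dt           # integral of error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Closing the loop around a simple first-order plant (dy/dt = u − y, integrated by Euler steps) with proportional plus integral action drives the steady-state error to zero, illustrating the trade-offs listed above.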
Small computers are often used as controllers. Microprocessors, microcontrollers,
and digital signal processors (DSPs) can quickly and efficiently implement simple or
complex software control routines. Often the computers run simple programs which
input the desired output and the actual output, calculate the appropriate control com-
mand, and then output the command in a continuous loop. The reliance on computers
is often not obvious to the users of these embedded controllers. The software may be
either interrupt-driven or program-driven.
Personal computers and larger computers can also be used for control applications.
Depending upon the needs of the control system, the computers may either run con-
ventional operating systems or operating systems specifically designed for real-time
control. When such general-purpose computers are used, control interfacing becomes
an issue. Analog sensor signals and actuators requiring analog commands imply that
the control systems have analog-to-digital and digital-to-analog converters to interface
the computers with the analog hardware. Digital sensors or actuators don’t require
such converters, but the communications connections or buses must be compatible.
Many controllers provide an output which can be immediately input into the plant.
However, in many other cases the controller output is not the appropriate physical
quantity (for example, current instead of force), does not have enough power, or is of
inappropriate scale for the plant input. Actuators are often used to convert the control
command into a control action that can influence the behavior of the plant. It is impor-
tant that the actuator provide the proper control action when commanded. In addition,
it must have significantly better dynamic response than the plant to avoid degrading
closed-loop control system performance. Although actuators are seldom discussed in
control theory, they are important in the types of systems encountered by agricultural
and biological engineers.


The most common type of actuator is electromechanical. A voltage or current input
provides electrical power to a solenoid or motor. The actuator output is a force, torque,
or displacement. Electromechanical actuators may be either linear or rotary. Due to the
high forces or torques involved in many agricultural applications, electrohydraulic
actuators are also common. The final component of the electrohydraulic actuator, ei-
ther a hydraulic motor or a cylinder depending upon whether rotary or linear output is
needed, follows an electrohydraulic valve in most applications. The performance of
the valves in such systems is obviously important. It must also be remembered that the
displacement of a hydraulic motor or cylinder is proportional to the integral of flow.
The control action from the actuator affects the plant and hopefully changes the
plant’s output to achieve the desired performance. As mentioned above, the plant is
the object being controlled. The control system is designed according to the character-
istics of the plant and the performance required or desired. The control systems are
easier to design, and usually perform better, if the plant is time invariant, meaning its
parameters do not change, and linear.
A wide variety of mathematical techniques are now available to aid the design of
control systems. Common classical techniques are described in most control engineer-
ing textbooks and include root locus, pole placement, compensator, and frequency
domain techniques. Modern control theory techniques often use optimization method-
ologies. Commercial software is available to perform most common analyses. When the
system is significantly complex, nonlinear, or time variant, the difficulty of obtaining
closed-form solutions usually leads to the use of commercial system simulation soft-
ware to find the time-domain responses of various candidate systems to typical inputs.
4.1.4 Automation and Control in Agriculture and Related Fields
The wide variety of agricultural systems and their diversity throughout the world make it difficult to generalize about the application of automation and control [4-6]. In many such applications, moreover, the operating conditions are demanding. The systems to be controlled may be a complex combination of physical, chemical, and biological components. Even the pH control system example in Figure 4
has electrical, mechanical, and chemical components. Many agricultural and biologi-
cal automation systems are located outdoors or in agricultural buildings where they
may be subject to a wide range of atmospheric and other environmental conditions,
such as temperatures, humidity, and vibrations. The systems are often installed in re-
mote or rural locations where the maintenance and service infrastructure is sparse. The
systems must be cheap, reliable, and easy for relatively unskilled operators to use. It is a demanding task.
The use of automation and control can be controversial. Whether the local eco-
nomic, social, and technical situation supports automation must be determined. This
can be a special concern in developing countries where the reduction of labor usage by
automation may not be desired. But where it fits, automation and control can often
increase the quantity and quality of the food and fiber produced, while helping to pro-
tect the environment.

[Figure: a tank with a recirculating pump; a pH sensor in the recirculation line feeds the controller, which compares the measured pH with the desired pH and drives an acid pump dosing acid into the flow.]
Figure 4. Example of a pH control system.

There are very many examples of automation and control being applied to agricul-
ture and related fields. Some are simple and some are sophisticated. Table 1 lists some
of the examples which can be found in other volumes of this Handbook. Many others
can be found in books, papers, conference proceedings, and other literature.
The early control systems used on agricultural equipment were mechanical or hy-
dromechanical, such as the Ferguson system and the self-leveling system for hillside
combines. Now, electronic and computer controls dominate new designs. The integration of mechanical, electronic, and software elements used in most automation systems
is often known as mechatronics, especially in Europe and East Asia [7, 8].
Mechatronic systems depend on goals or commands entered into their control com-
puters to guide their actions. Sensors also provide inputs on the state of the system and

Table 1. Some control systems in other volumes of the CIGR Handbook series.
Volume  Page  Controlled System
I       362   Irrigation
I       498   Irrigation water delivery
II      304   Aquaculture
III     45    Diesel engine injection
III     171   Ferguson system hitch
III     309   Direct injection pesticide sprayer
III     477   Greenhouse climate
III     610   Precision agriculture application
IV      42    Grain dryer
IV      345   Cold storage refrigeration

its environment. The mechatronic systems then use software to decide on the appro-
priate signals to be output to the actuators. Such systems are very flexible in that sim-
ple software changes can change the system behavior. They also allow more compli-
cated and sophisticated control algorithms.
One area in which mechatronics is achieving greater usage is the movement to X-
by-wire in vehicles. X-by-wire systems replace mechanical or hydromechanical func-
tions in a vehicle with a combination of mechanical, electronic, and software compo-
nents. For example, conventional vehicle brakes may be replaced with a system in
which there is no direct connection between the brake pedal and the brakes. Such sys-
tems require high reliability of the components and the overall system for safety.
Figure 5 shows a simplified schematic example of a system for steering a vehicle.
The driver’s positioning of the steering wheel is sensed and transmitted to a computer
that then determines and communicates a command to another computer that controls
the steering actuator. The actuator’s position is closed-loop controlled by the second
computer’s sensing of the position of the vehicle wheels. Feedback can be supplied to
the driver by the first computer through an actuator’s effects on the steering wheel.

[Figure: the steering wheel's position sensor and a feedback actuator connect to a first computer; a second computer drives the steering actuator and reads a position sensor at the steered wheels; the two computers communicate over a data bus.]
Figure 5. Simplified schematic of a vehicle steering-by-wire system.

Such a system is very flexible, due to the wide variety of algorithms which may be
implemented in the computers and the other information which may be accessed from
the data bus. For example, the steering system may have a variable ratio between the
steering wheel and the angle of the vehicle wheels, which in turn changes with vehicle
speed.
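Such a speed-dependent steering ratio can be sketched as a simple blending function. All names and numeric values below are hypothetical and not taken from any actual vehicle.

```python
def wheel_angle(steering_wheel_deg, speed_ms,
                ratio_low=12.0, ratio_high=24.0, speed_full=15.0):
    """Illustrative variable-ratio steer-by-wire mapping: a direct (low)
    ratio at standstill blends linearly toward a slower (high) ratio at
    'speed_full' m/s and above, so the same steering-wheel input turns
    the road wheels less at high speed. Returns the wheel angle in
    degrees."""
    blend = min(max(speed_ms / speed_full, 0.0), 1.0)  # clamp to [0, 1]
    ratio = ratio_low + blend * (ratio_high - ratio_low)
    return steering_wheel_deg / ratio
```

With these illustrative numbers, 120° of steering wheel gives 10° of wheel angle at standstill but only 5° at or above 15 m/s; in a real system this mapping would be one of many algorithms selectable in software.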
Since agricultural equipment operates in complex environments with weather,
fields, plants, or animals that can vary widely, there is a great advantage to equipment
with control systems that can respond to such variations. Historically, operating pa-
rameters of agricultural equipment, such as speeds and geometric clearances, were
either fixed or adjusted infrequently by human operators. Automation and control sys-
tems now allow the operating parameters to be adjusted automatically in response to
changing conditions to improve productivity, efficiency, and quality. In order for such
control systems to be used, the machine must be capable of being adjusted. For exam-
ple, fixed-ratio mechanical (such as belt, chain, or gear) drives might need to be re-
placed with variable-speed hydraulic or electrical drives with appropriate valving or
drivers.
Agricultural equipment has evolved to accommodate such control systems [9,10].
A contemporary grain combine harvester is a good example. It may have automatic
control of such items as header height, reel speed, travel speed, rotor speed, concave
opening, and sieve opening. The harvester has to be designed with drives and actuators
to permit control systems to do their jobs. The components of the control systems of-
ten allow more flexibility in the design and layout of agricultural equipment since
electrical, and to a lesser extent hydraulic, power and signals can be transferred more
easily from one part of the equipment to another than mechanical power and adjust-
ments. For example, a rotating shaft must run straight from one component to another, but
a hydraulic hose or electrical wire can bend along a convoluted path. Returning to the
grain combine harvester example, the operator can now control the many functions
from the operator station and the engine can be located far from power-consuming
components.
Due to the demands of increased performance from agricultural equipment and the
improvements in automation and control systems, especially sensors, actuators, and
algorithms, automation and control systems will continue to become more prevalent in
agricultural systems. The trend of networking control systems together under stan-
dards such as SAE J1939, DIN 9684, and ISO 11783 will continue to accelerate.
More, better, and coordinated automation and control will contribute to better agricul-
tural equipment.

References
1. Dorf, R. C., and R. H. Bishop. 2005. Modern Control Systems, 10th ed. Upper
Saddle River, NJ: Prentice-Hall.
2. Ogata, K. 2002. Modern Control Engineering, 4th ed. Upper Saddle River, NJ:
Prentice-Hall.
3. Wells, R. L., J. K. Schueller, and J. Tlusty. 1990. Feedforward and feedback control of a flexible robotic arm. IEEE Control Systems 10(1): 9-15.
4. Cox, S. W. R. 1997. Measurement and Control in Agriculture. Oxford, UK:
Blackwell Science.
5. Schueller, J. K. 1992. A review and integrating analysis of spatially-variable
control of crop production. Fertilizer Research 33: 1-34.
6. Searcy, S. W., ed. 1991. Automated Agriculture for the 21st Century. St. Joseph,
MI: ASAE.
7. De Silva, C. W. 2005. Mechatronics: An Integrated Approach. Boca Raton, FL:
CRC Press.
8. Histand, M. B., and D. G. Alciatore. 1999. Introduction to Mechatronics and
Measurement Systems. New York, NY: McGraw-Hill.
9. Klenin, N. I., I. F. Popov, and V. A. Sakun. 1970. Sel’skokhozyaistvennye
Mashiney (Agricultural Machines). Moscow, Russia: Kolos Publishers.
10. Srivastava, A. K., C. E. Goering, and R. P. Rohrbach. 1993. Engineering
Principles of Agricultural Machines. St. Joseph, MI: ASAE.

4.2 Positioning and Navigation


H. W. Griepentrog, B. S. Blackmore,
and S. G. Vougioukas
Abstract. This chapter covers some of the recent developments in positioning and
navigation of agricultural vehicles. Reliable absolute or relative positioning of a vehi-
cle is the basic requirement for manual and automated steering and essential for
navigation of autonomous systems. Furthermore, an agricultural vehicle has to be
able to perform several navigation modes within a field in order to succeed in per-
forming a field operation.
Keywords. Positioning, Absolute positioning, Relative positioning, Sensor fusion,
Navigation, Navigation modes.

4.2.1 Introduction
Vehicle navigation in the agricultural environment has special characteristics. This environment presents a very different set of circumstances from those encountered by a laboratory or indoor vehicle, and it raises a number of additional complications [1]:
• Operating areas can be large and geographically separated;
• Ground surfaces are often uneven with varying tractive conditions;
• Depending on the operation, wheel slippage may be far from negligible;
• Environmental conditions (rain, fog, dust, etc.) may affect sensor observations;
• Low cost systems are required.
Fortunately, agricultural operations are carried out in semi-natural environments: a farm can generally be described by fields with known boundaries, and crop plants within a field are often arranged in particular structures (i.e., oriented crop rows and patterns). This a priori information can be used to improve the positioning and navigation tasks and in general increases vehicle performance [2].
The absolute or relative position of a tractor is the basic requirement for manual
navigation and automated steering and essential for navigation of autonomous vehi-
cles. Applications in agriculture, such as asset surveying and precision farming, also
rely on positional information. Geo-referenced tracking of production processes, which allows subsequent tracing, is becoming more important today due to food safety concerns: documentation and traceability enhance consumers' trust.
There are two ways to define a position. The first is to use an absolute coordinate system (e.g., a map projection such as UTM based on a datum such as WGS84) and define the position
within this fixed frame of reference. The second way is to use a relative coordinate
system. This is usually relative to the position and orientation of the tractor. Both use
orthogonal Cartesian axes but the navigation directrix (which governs the movement
and position of a point) may be either Cartesian (x and y or easting and northing) or
vector (modulus and argument or heading and distance).
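The two descriptions are interchangeable. A sketch of the conversion, using the navigation convention of heading measured in degrees clockwise from north (easting = x, northing = y):

```python
import math

def cartesian_to_vector(dx, dy):
    """Convert a Cartesian offset (easting dx, northing dy) to vector
    form: (heading in degrees clockwise from north, distance)."""
    distance = math.hypot(dx, dy)
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    return heading, distance

def vector_to_cartesian(heading_deg, distance):
    """Inverse conversion: vector (heading, distance) back to a
    Cartesian offset (easting, northing)."""
    h = math.radians(heading_deg)
    return distance * math.sin(h), distance * math.cos(h)
```

Note the argument order `atan2(dx, dy)`: navigation headings are measured from north (the y axis), unlike the mathematical angle convention measured from the x axis.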
Furthermore, there are two main procedures for determining an unknown location: trilateration and triangulation. In trilateration, the position of a vehicle is determined from distance measurements to known points; such navigation systems usually have three or more transmitters mounted at known locations in the environment and one receiver on the rover. Global Positioning System (GPS) absolute positioning is an example of trilateration.
In triangulation there are three or more active transmitters mounted at known loca-
tions. A rotating sensor on board the vehicle measures the angles between the virtual
line to the transmitter beacons and the vehicle’s longitudinal axis. The unknown x-
and y-coordinates and the unknown vehicle orientation can be computed based on
these angle measurements.
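For the trilateration case, a minimal 2-D solver: subtracting the first range equation (x − xi)² + (y − yi)² = ri² from the other two eliminates the quadratic terms and leaves a 2×2 linear system. Exact distance measurements are assumed here; a real system would use more beacons and a least-squares solution.

```python
def trilaterate(beacons, distances):
    """2-D trilateration from exact distances to three known points.
    Subtracting the first circle equation from the other two gives the
    linear system: 2(xj-x1)x + 2(yj-y1)y = r1^2 - rj^2 + xj^2 - x1^2
    + yj^2 - y1^2, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the beacons are collinear
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det
```

The beacons must not be collinear, or the determinant vanishes and the position is not uniquely determined.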
4.2.2 Positioning
Absolute Positioning
The most common form of absolute positioning is the Global Navigation Satellite
System (GNSS), which is a space-based microwave positioning system that uses trilat-
eration between known positions of orbiting satellites. GNSS provides 24-hour three-
dimensional position, velocity, and time information to a user anywhere on or near the
surface of the earth. Currently there are two systems available: the Global Positioning
System (run by the NAVSTAR GPS Joint Program Office, http://gps.losangeles.af.mil) and the Global Orbiting Navigation Satellite System (GLONASS, operated by the Russian Federation). The European Commission decided in 2003 to build up a European GNSS which will be available in 2009 (GALILEO, http://europa.eu.int/comm/dgs/energy_transport/galileo/intro/future_en.htm).
The GPS system is the most commonly used GNSS today. The GPS is made up of
the space segment, the control segment, and the users. The space segment consists of a
constellation of at least 24 orbiting satellites.
4.2 Positioning and Navigation 197

GPS satellites transmit on two L-band carrier frequencies: 1.57542 GHz (L1) and
1.22760 GHz (L2). The L1 carrier is encoded with two codes, a precision (P) code
and a coarse/acquisition (C/A) code. The L2 carrier contains only the P-code, which
is encrypted for military and authorized civilian users. Commercial GPS receivers
utilize the L1 signal and the C/A code.
P-code users can calculate their geocentric positions to about 5 m with a single
hand-held satellite receiver. However, the military has encrypted the P-code, so
only authorized users can utilize it; ordinary civilian users cannot observe it.
Selective Availability (SA) was used to dither the positional accuracy with single
GPS receivers to 100 m horizontally and 156 m vertically at the 95% level. On 2 May
2000, SA was switched off and the current horizontal accuracy is about 5 to 25 m at
the 95% level.
Differential GPS (DGPS)
Differential positioning (DGPS) can be conducted with either post- or real-time
processing. The former is simpler and less expensive, while the latter is more
complicated because of the need for a radio link. Differential corrections may take the form of
measurement corrections or position corrections. With either approach the coordinates
of one point, which is used as a reference station, must be known and available. The
further a reference site is from a rover site, the more the errors at the two sites will
differ and the less accurate the position determination using differential techniques
will be.
To apply differential positioning in post-processing the logged data from the refer-
ence site and the rover are combined together on a computer. The proper differential
corrections are computed and applied by using algorithms to match the exact time of
the observations at the monitor receiver with the identical times from the rover re-
ceiver.
DGPS can provide data to an accuracy of a couple of meters in dynamic situations
and even better while stationary. That improvement has a significant effect on GPS
as a data resource: GPS is no longer limited to coarse navigation of boats and
planes; it becomes a universal measurement system capable of positioning points on
a very precise scale.
Real-time DGPS uses a base station located over a known control point where it is
continuously computing the difference between its known and its reported position. It
then sends this correction information via radio transmitters or other satellites which
broadcast the correction information. The user's GPS receiver must be paired with a
radio receiver, or be capable itself of reading correction information distributed
by satellites, and corrects the position recorded 1 or 2 s previously. The systems
can provide real-time positioning at accuracies of 2 to 5 m. One drawback of real-time
GPS systems is the time delay between when a measurement is made at the reference
site and the time it takes to send and implement the correction at the rover site. This
error will of course be avoided in the post-processing mode.
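The position-correction variant of DGPS amounts to subtracting the base station's known fix error from the rover's fix, on the assumption that the error is common to both sites. A minimal sketch with made-up coordinates:

```python
def dgps_correct(rover_fix, base_fix, base_truth):
    """Remove the common-mode GPS error observed at the base station from a
    rover fix (position-correction DGPS, simplified)."""
    dx = base_fix[0] - base_truth[0]   # base station's easting error
    dy = base_fix[1] - base_truth[1]   # base station's northing error
    return rover_fix[0] - dx, rover_fix[1] - dy

base_truth = (1000.0, 2000.0)   # surveyed base coordinates
base_fix   = (1003.2, 1998.5)   # what the base receiver currently reports
rover_fix  = (1503.1, 2101.4)   # raw rover fix, affected by similar errors
print(tuple(round(v, 1) for v in dgps_correct(rover_fix, base_fix, base_truth)))
# → (1499.9, 2102.9)
```

As the text notes, the accuracy of this correction degrades with the distance between base and rover, because the atmospheric errors at the two sites decorrelate.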
To improve GNSS performance so that it satisfies user requirements, various
satellite-based augmentation systems have been developed that transmit differential
corrections to the users.
198 Chapter 4 Mechatronics and Applications

These cover large areas and the correction-data receiver is often integrated into the GPS receiver.
This type of augmentation system includes the European Geostationary Navigation
Overlay Service (EGNOS). It is Europe’s first step into satellite navigation. EGNOS
will complement the military-controlled GPS and GLONASS systems. The correction
data will improve the accuracy of the current services from about 20 m to better than 5
m. The EGNOS coverage area includes all European states but can be extended to
non-European regions. Beginning in 2009 the EGNOS infrastructure will be integrated
into Galileo. North America has a similar system called the Wide Area Augmentation
System (WAAS) and Japan has the Multifunctional Transport Satellite Space-based
Augmentation System (MSAS).
Real-Time Kinematic GPS (RTK GPS)
Carrier-phase GPS is a method of position determination capable of providing cen-
timeter positional accuracy. It is used extensively for surveying applications. The re-
ceivers are much more complex and expensive than the hand-held code-based GPS
receivers. This technique rapidly became the means of conducting centimeter-
accuracy surveying and navigation for a variety of practical and scientific purposes.
Pseudolites
GPS pseudolites or pseudo-satellites are ground-based transmitters that transmit
GPS-identical signals in a local area. They can aid in GPS positioning in three main
ways [3, 4]. First, they can be used to augment the GPS satellite constellation by
providing additional ranging sources when the natural satellite coverage is inadequate.
Second, pseudolites can be used as an aid to carrier-cycle ambiguity resolution when
using carrier-phase differential GPS (CDGPS) for precise positioning. Third,
pseudolites can be used to replace completely the GPS satellite constellation. This is
generally done to emulate GPS indoor positioning or on extraterrestrial locations, e.g.
positioning a vehicle on Mars [3].
Relative Positioning
Dead reckoning is a mathematical procedure for determining the present vehicle
location by using previous positions combined with known course and velocity infor-
mation over a given time duration. Relative positioning in general can be seen as dead
reckoning because it always refers, relatively, backwards to known positions.
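A single dead-reckoning update from a known fix can be sketched as follows (heading measured clockwise from north, a common navigation convention; the function name is illustrative):

```python
import math

def dead_reckon(x, y, heading_deg, speed, dt):
    """Advance a known position by course and speed over a time step dt."""
    h = math.radians(heading_deg)
    return x + speed * dt * math.sin(h), y + speed * dt * math.cos(h)

# Start at the origin, head due east (90 deg) at 2 m/s for 10 s
x, y = dead_reckon(0.0, 0.0, 90.0, 2.0, 10.0)
print(round(x, 6), round(y, 6))  # → 20.0 0.0
```

Because each update builds on the previous estimate, any heading or speed error is carried forward into all later positions, which is why dead reckoning drifts over time.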
Odometry is a widely used relative positioning method of indoor mobile robots.
Due to accumulating error characteristics it is used only for short-term navigational
tasks. It is the integration of incremental motion information from wheel rotation
and/or steering orientation over time. Many papers are available that document the
extensive research carried out in this area [5]. For outdoor conditions several problems
occur that contribute to often-unacceptable errors in the positional information. In out-
door applications errors mostly appear due to wheel slippage and 3-dimensional posi-
tioning tasks.
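For a differential-drive vehicle, one odometry update integrates encoder increments into a pose change. The sketch below uses hypothetical encoder and geometry values and, like all odometry, silently assumes no wheel slip:

```python
import math

def odometry_update(pose, ticks_l, ticks_r, ticks_per_m, wheel_base):
    """Integrate incremental wheel rotation into a new pose (x, y, theta)."""
    x, y, th = pose
    dl = ticks_l / ticks_per_m           # left wheel travel [m]
    dr = ticks_r / ticks_per_m           # right wheel travel [m]
    dc = (dl + dr) / 2.0                 # travel of the vehicle center
    dth = (dr - dl) / wheel_base         # heading change [rad]
    x += dc * math.cos(th + dth / 2.0)   # midpoint heading approximation
    y += dc * math.sin(th + dth / 2.0)
    return (x, y, th + dth)

pose = (0.0, 0.0, 0.0)
pose = odometry_update(pose, 1000, 1000, 1000.0, 0.5)  # both wheels travel 1 m
print(pose)  # → (1.0, 0.0, 0.0)
```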
Inertial measuring units (IMU) use gyroscopes and accelerometers to determine ro-
tation and acceleration. Based on these measured data a course and the current posi-
tion relative to a known previous point—where the measuring started—can be calcu-
lated. IMUs for outdoor operation are often extended by electronic compasses that
measure the orientation of the vehicle relative to the earth’s magnetic field. IMUs
achieve high positioning accuracies when they are integrated, but are still expensive.
Referenced Positioning
Landmarks are distinct features in an environment that a sensor can recognize.
Landmarks can be geometric shapes and may include additional information
such as particular patterns. Landmarks have a fixed and known position from which a
vehicle can relatively localize itself. Before a vehicle can use landmarks for naviga-
tion, the characteristics of the landmarks must be known and stored in its controller
memory. The accuracy of the calculated vehicle position depends then on how reliable
landmarks are recognized and used for the localization process.
There are two types of landmarks: artificial and natural landmarks. The terms are
defined by [6] as follows:
• Natural landmarks are those objects or features that are already in the environ-
ment and have a function other than vehicle navigation [7];
• Artificial landmarks are specially designed objects or markers that need to be
placed in the environment with the sole purpose of enabling positioning.
In outdoor natural landmark navigation, the detection and matching of the charac-
teristic features from the sensory inputs is the crucial problem. Computer vision is the
most suitable sensor type for this task. Today, typical computer vision-based natural
landmarks for navigation purposes in agriculture are crop rows. Another common fea-
ture recognized today is crop edges detected by a laser scanner-based proximity sensor
(see Section 4.3 of this handbook). The automated operations based on these features
are inter-row hoeing and steering of combine harvesters.
In artificial landmark positioning a simple configuration uses three or more detec-
tors positioned around the workspace. A vehicle-mounted laser is swept horizontally
and the time at which the beam was detected is communicated to the positioning sys-
tem. By using triangulation the location of the vehicle can be determined. This system
has the disadvantage of requiring a communication link between the vehicle and the
detectors.
Systems that rely on a vehicle-mounted laser, when used on rough terrain, have the
drawback of missing the targets. An alternative is to fix lasers in the field and mount
the detector on the vehicle [8].
Map-based (or map-matching) positioning is a technique in which the vehicle uses
its sensors to create a map of the unknown environment. This local map is then com-
pared to a map previously stored in memory or requested and received from a con-
nected GIS system. The robot can compute its actual position and orientation in the
environment by matching the measured and prior stored environmental features [9].
Sensor Fusion
No single positioning sensor alone can adequately provide the required information
under all operational conditions. Because neither relative nor absolute positioning
offers a single good method, it is recommended to combine methods, at least one
from each group. This process, based on different sensor types, is called sensor
fusion. Most present navigation sensor integration techniques are based on Kalman
filtering procedures, which represent one of the best solutions for multisensor
integration. A Kalman filter is a linear, model-based estimator that uses
stochastic, recursive, weighted least-squares computing algorithms [10]. It has
been proven for GPS parallel-tracking applications [11].
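As an illustration of the recursive, weighted estimation involved, here is the simplest member of the family: a scalar Kalman filter for a constant state. The noise variances are arbitrary example values, not tuned for any real sensor:

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances."""
    p = p + q                 # predict: uncertainty grows per the model
    k = p / (p + r)           # gain: weights prediction vs. measurement
    x = x + k * (z - x)       # pull the estimate toward the measurement
    p = (1.0 - k) * p         # updated uncertainty shrinks
    return x, p

x, p = 0.0, 1.0               # deliberately poor initial guess
for z in [5.2, 4.8, 5.1, 4.9, 5.0]:    # noisy readings around a true value of 5
    x, p = kalman_step(x, p, z, q=1e-4, r=0.25)
print(round(x, 2))            # the estimate converges toward 5.0
```

Multisensor fusion extends this to a state vector (position, heading, velocity) and applies one update step per sensor, each weighted by that sensor's own measurement variance.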
4.2.3 Navigation
Most agricultural tasks need to be planned and optimized in advance, before the ac-
tual field operation will be executed [12, 13]. However, robot operation in the field is
not deterministic and cannot be entirely planned, because the environment is only
semi-known and semi-structured. Furthermore, the environment is dynamic; agricul-
tural robots have to operate in the presence of other robots, humans, and/or animals.
A navigation process includes, besides the position, information about velocity,
attitude and heading, acceleration, and angular rate. Although the positional data
(relative and/or absolute) is among the most important information, the vehicle's
attitude and behavior must also be determined by sensors. With sensor fusion the
same information can then be used for either location or behavior determination.
Vehicles are currently classified into map-based and sensor-based systems.
Map-based vehicles, or automated guided vehicles (AGVs), are not free to alter
their planned navigation route; therefore these systems are almost fully
deterministic in their behavior. The positional information is the most important
data; without it, no acceptable behavior is possible. Sensor-based or self-guided
vehicles (SGVs) perform differently and rely mostly on their own local
environmental awareness.
and rely mostly on their own local environmental awareness. Sophisticated motion
control, obstacle avoidance, pattern recognition, and autonomous navigation are the
basic functions to achieve safe, reliable, and accurate operations for these mobile
agents. The positional information is one among other important data to describe the
vehicle’s attitude and its closer environment.
Furthermore, the AGV and SGV mobile vehicle types can also be described as
Cartesian map-based and relative sensor-based systems [14]. A hybrid system
comprising a combination of both is necessary if the behavioral actions of vehicles
are to become acceptable. Such a hybrid system combines adaptive and goal-oriented
control.
Both vehicle types require navigation tasks, where, besides the position, velocity, atti-
tude and heading, acceleration and angular rates are included in the process.
Navigation Tasks
For an AGV to move from A to B (navigation) a planned route is needed, including
the start and end positions. The positioning system together with the automatic steer-
ing keeps the vehicle on the route.
SGVs only need to know the target position and the current position. The actual
route will be calculated depending on the information from sensor systems about ob-
stacle avoidance and distance left to the target position.
Especially for unknown environments, the navigation mode has to become more
advanced. Such hybrid systems have been proposed; they consist of not only a sensor
fusion but also behavior fusion [12, 15]. The main components of the system are an
obstacle avoider, a goal seeker, a navigation supervisor, and an environment evaluator.
This navigator is able to perform successfully in various unknown or partially known
environments, and has satisfactory ability in tackling moving obstacles [12].
An agricultural robot has to be able to perform several navigation modes within a
field in order to succeed in performing a field operation.
Navigation Mode Changer
For hybrid systems complex navigation tasks in dynamic environments require that
certain elementary behaviors are activated and deactivated, or that some of their pa-
rameters are changed as the vehicle interacts with the environment and progresses in
different stages of the agricultural operation. In hybrid architectures, this task is as-
signed to a mediator process, which acts as a hybrid automaton [16], or a discrete
event system [17], i.e., an automaton with each node corresponding to a distinct behavior.
Navigation Modes
Vougioukas et al. [12] proposed a mode changer for a robotic agricultural opera-
tion. It uses eight simple navigation tasks: initialization, calibration, path planning,
path tracking, watch-and-wait, obstacle avoidance, failure, and completion (Figure 1).
The path-tracking mode is purely deterministic, while the obstacle-avoidance mode
is completely reactive. The function of the navigation task is to move the vehicle along
a path that is computed by a path planner before the vehicle starts moving, while
avoiding any obstacles that may exist somewhere on the path. The navigation tasks
were implemented and tested with a particular platform [12]. In the following a de-
scription of the implementation of the operating modes and the mode changer is given.

Figure 1. Mode changer: Different navigation tasks for field operations [12].
• Task-initialization mode—In this operating mode the robot launches and con-
nects to all the actuator and sensor objects and performs all the necessary mem-
ory allocation for the active objects. If any problem that would prohibit the con-
tinuation of the execution of the task is encountered an appropriate message is
sent to the mode manager.
• Task-calibration mode—In this mode a number of calibration procedures that
involve robot motion are performed. More specifically, the compass is calibrated
using filtered heading information from the GPS, while the robot moves along a
small predefined distance. Also, in this mode, if any problem that would prohibit
the continuation of the execution of the task is encountered an appropriate mes-
sage is sent to the mode manager.
• Path-planning mode—Generally, in this mode a collision-free path that connects
its current position and orientation to a desired (goal) position and orientation is
computed for the vehicle, based on the task requirements (e.g., field coverage),
on detailed knowledge of the field and vehicle geometry, and on the position and
geometry of any existing obstacles. Path planning should take into account ki-
nematic and dynamic constraints introduced by the vehicle and any implement it
is carrying. In the case of multiple feasible paths, optimization criteria, such as
minimum travel time, minimum fuel consumption, etc., can be used [18, 19].
• Path-tracking mode—Path tracking or route following is the operation in which
the vehicle follows a predetermined route. It constitutes a major research area in
robotics and autonomous agricultural vehicles. The task is nontrivial, especially
in the case of non-holonomic vehicles. A path can be defined as a sequence of
waypoints connected via straight-line segments.
• Obstacle-avoidance mode—Obstacle avoidance is the operation in which the
vehicle keeps moving in a desired direction while avoiding static and dynamic
obstacles. The obstacle-avoidance operating mode is entered when some range
sensor detects an obstacle in the “green zone” of the robot. Its implementation is
based on the virtual force field (VFF) method [20]. This method combines the
evidence grids (or occupancy grids) for obstacle representation with the poten-
tial field method for real-time obstacle avoidance [21].
• Watch-and-wait mode—This mode is entered when an obstacle suddenly ap-
pears close to the robot. In this case, the robot will stand still for a period of
time, reading its range sensors, and wait until the obstacle leaves or is removed.
If this does not happen, an appropriate message will be sent to the mode
changer, which will result in entering the task failure mode.
• Task-failure mode—This operating mode can be reached from all other modes
except for the task-completion mode. The actions taken in it depend greatly on
the type of task being executed. In this particular case study the robot is com-
manded to stay still, all active objects are freed from memory, data logging files
are closed, and informative error messages are printed on the standard output.
• Task-completion mode—In this mode the robot is commanded to stay still, all
active objects are freed from memory, and data logging files are closed.
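The mode changer described above behaves like a table-driven finite state machine. The sketch below uses mode and event names taken from the text, but the transition table is a simplified illustration, not the cited implementation:

```python
# (mode, event) -> next mode; unknown events leave the mode unchanged
TRANSITIONS = {
    ("initialization", "ok"): "calibration",
    ("initialization", "failed"): "failure",
    ("calibration", "ok"): "path_planning",
    ("calibration", "failed"): "failure",
    ("path_planning", "path_computed"): "path_tracking",
    ("path_tracking", "goal_reached"): "completion",
    ("path_tracking", "obstacle_in_green_zone"): "obstacle_avoidance",
    ("path_tracking", "obstacle_in_red_zone"): "watch_and_wait",
    ("obstacle_avoidance", "goal_reached"): "completion",
    ("obstacle_avoidance", "failure"): "failure",
    ("watch_and_wait", "red_zone_empty"): "path_tracking",
    ("watch_and_wait", "timeout"): "failure",
}

def step(mode, event):
    """Return the next navigation mode for an event."""
    return TRANSITIONS.get((mode, event), mode)

mode = "initialization"
for ev in ["ok", "ok", "path_computed", "obstacle_in_green_zone", "goal_reached"]:
    mode = step(mode, ev)
print(mode)  # → completion
```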
References
1. Hague, T., J. A. Marchant, and N. D. Tillett. 2000. Ground based sensing systems
for autonomous agricultural vehicles. Computers and Electronics in Agriculture
25: 11-28.
2. Tillett, N. D., T. Hague, and S. J. Miles. 2002. Inter-row vision guidance for
mechanical weed control in sugar beet. Computers and Electronics in Agriculture
33: 163-177.
3. LeMaster, E. 2003. GPS on the web—Applications of GPS pseudolites. GPS
Solutions 6: 268-270.
4. Barnes, J., C. Rizos, J. Wang, D. Small, G. Voigt, and N. Gambale. 2003. Locata:
The positioning technology of the future? Proc. SatNav 2003 the 6th International
Symposium on Satellite Navigation Technology Including Mobile Positioning and
Location Services.
5. Borenstein, J. 1996. Measurement and correction of systematic odometry errors
in mobile robots. IEEE Trans. Robot. Autom. 12(6): 869-880.
6. Borenstein, J., H. R. Everett, L. Feng, and D. Wehe. 1997. Mobile robot
positioning—Sensors and techniques. J. Robotic Systems 14(4): 231-249.
7. Olson, C. F. 2002. Selecting landmarks for localization in natural terrain.
Autonomous Robots 12(2): 201-210.
8. Shmulevich, I., G. Zeltzer, and A. Brunfeld. 1989. Laser scanning method for
guidance of field machinery. Trans. ASAE 32(2): 425-430.
9. Yang, H., K. Park, J. G. Lee, and H. Chung. 2000. A rotating sonar and a
differential encoder data fusion for map-based dynamic positioning. J. Intelligent
and Robotic Systems 29(3): 211-232.
10. De Schutter, J., J. De Geeter, T. Lefebvre, and H. Bruyninckx. 1999. Kalman
filters—A tutorial. Leuven, Belgium: Katholieke Universiteit Leuven.
11. Han, S., Q. Zhang, and H. Noh. 2002. Kalman filtering of DGPS positions for a
parallel tracking application. Trans. ASAE 45(3): 553-560.
12. Vougioukas, S., S. Fountas, B. S. Blackmore, and L. Tang. 2004. Navigation task
in agricultural robots. Proc. International Conference on Information Systems
and Innovation Technologies in Agriculture, Food and Environment
(Thessaloniki, Greece), 55-64.
13. Blackmore, B. S., S. Fountas, S. Vougioukas, L. Tang, C. G. Sørensen, and R. N.
Jørgensen. 2004. A method to define agricultural robot behaviours. Proc.
Mechatronics and Robotics.
14. Freyberger, F., and G. Jahns. 2000. Symbolic course description for
semiautonomous agricultural vehicles. Computers and Electronics in Agriculture
25(1-2): 121-132.
15. Ye, C., and D. Wang. 2001. A novel navigation method for autonomous mobile
vehicles. J. Intelligent and Robotic Systems 32: 361-388.
16. Egerstedt, M., K. Johansson, K. Lygeros, and S. Sastry. 1999. Behavior based
robotics using regularized hybrid automata. Proc. IEEE Conference on Decision
and Control (CDC ‘99).
17. Kosecka, J., and R. Bajcsy. 1994. Discrete event systems for autonomous mobile
agents. J. Robotics and Autonomous Systems.
18. Sørensen, C. G., T. Bak, and R. N. Jørgensen. 2004. Mission planner for
agricultural robotics. Proc. AgEng, Leuven, Belgium.
19. Stoll, A. 2003. Automatic operation planning for GPS-guided machinery. Proc.
4th European Conference on Precision Agriculture (ECPA), Berlin, Germany, ed.
J. V. Stafford, 657-664. Wageningen, NL: Wageningen Academic Publishers.
20. Borenstein, J., and Y. Koren. 1989. Real-time obstacle avoidance for fast mobile
robots. Transactions on Systems, Man and Cybernetics 19(5): 1179-1187.
21. Khatib, O. 1986. Real-time obstacle avoidance for manipulators and mobile
robots. International J. Robotics Research 5(1): 90-98.

4.3 Autonomous Vehicles and Robotics


B. S. Blackmore and H. W. Griepentrog
Abstract. This section covers some of the recent developments in vehicle automa-
tion ranging from various forms of driver-assisted steering through totally autono-
mous vehicles. Commercial and research examples are given with a description of
how behavioral robotics can be applied to agriculture.
Keywords. Autonomous tractor, Automatic steering, Agricultural robotics, Behav-
ioral robotics.

4.3.1 Driver Assistance


Various driver assistance aids are available to help reduce the complexity and diffi-
culty of field operations as well as to help improve efficiency. These aids fall into two
main groups: assistance with steering and automated implement tasks.
To improve the field efficiency, overlaps and skips of operations should be kept to
a minimum. The distance between the working envelope of the implement and the
previously treated area is usually judged by eye and relies on the skill and experience
of the driver. As implements get wider, this task becomes more difficult, so different
marking systems have been developed, such as using a disc coulter to mark the correct
driving distance while cultivating and using foam markers while spraying.
Steering Assist: Crop Edge Detection
As combine harvester sizes have grown, the corresponding header widths have in-
creased, so that header widths of 10 m are not unusual. Judging the distance between
the divider and the crop edge becomes difficult, especially as it can be 5 m to one
side and slightly ahead of the driver. One method of overcoming this has been to use a
forward-looking laser scanner mounted over the divider. The laser scanner sweeps an
arc over the edge of the standing crop and the range to each point at one-degree
intervals is calculated. As the standing crop is higher than the cut stubble, the
crop edge can be distinguished. The distance between the crop edge and the divider
can be calculated and fed into the steering control system to keep the header full.

Figure 1. A laser scanner used to distinguish the edge of a standing crop.
Steering Assist: Light Bar or Graphic Display
Driving in a straight line, at the correct distance and parallel to a previous track, has
always been important to minimize skips and overlaps of field operations. Soil-
engaging discs and foam markers have been used, but the combination of higher-
accuracy positioning systems and simple driver interfaces have enabled a semi-skilled
driver to produce a skilled output. Given that the vehicle position can be measured
more accurately than the driver can judge, the driver records two points A and B down
the initial crop row, called the AB line. The working width of the implement is entered
and the system calculates an infinite series of tracks parallel to the AB line. The driver
watches the light bar and uses it to select the next track and to keep the tractor on
course. Some commercially available systems now offer the ability to automatically
steer the tractor while in the straight part of the route. Manual steering is used to turn
at the end of the rows.
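Generating the parallel tracks is a geometric offset of the recorded AB line along its unit normal. A minimal sketch in a local planar frame (function name and coordinates are illustrative):

```python
import math

def track_offset_line(a, b, width, k):
    """Endpoints of parallel track k (k = 0 is the AB line itself),
    offset perpendicular to A->B by k times the implement working width."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    norm = math.hypot(dx, dy)
    nx, ny = dy / norm, -dx / norm     # unit normal, to the right of travel
    off = k * width
    return ((a[0] + nx * off, a[1] + ny * off),
            (b[0] + nx * off, b[1] + ny * off))

a, b = (0.0, 0.0), (0.0, 100.0)        # AB line running due north
print(track_offset_line(a, b, 6.0, 1)) # → ((6.0, 0.0), (6.0, 100.0))
```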


Figure 2. The AB line and the infinite (not shown) parallel tracks
and a John Deere tractor using the Autotrac steering system.
Figure 3. Light bars. Detailed indicator (above, ©Trimble); simple LED arrangement (below).

The light bar itself consists of a horizontal line of LEDs that represent the deviation
of the desired position from the actual tractor position. The driver steers the tractor so
that the central LEDs remain lit. Green LEDs can be used for the central dead band
while red LEDs show the desired path to the left and right. More advanced versions
give heading guidance and error as well as direction to the next feature (used for soil
sampling).
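The quantity driven onto the bar is the signed cross-track error relative to the AB line. The sketch below computes it with a 2-D cross product and maps it onto an LED row; the nine-LED layout and 10 cm band width are assumptions for illustration, not a vendor specification:

```python
import math

def cross_track_error(a, b, p):
    """Signed lateral deviation of point p from line A->B
    (positive = right of the direction of travel)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    # 2-D cross product over |AB| gives the signed perpendicular distance
    return (apx * aby - apy * abx) / math.hypot(abx, aby)

def lit_led(error_m, n_leds=9, band_m=0.1):
    """Map the deviation to an LED index, one LED per band, clamped at the ends."""
    center = n_leds // 2
    idx = center + round(error_m / band_m)
    return max(0, min(n_leds - 1, idx))

a, b = (0.0, 0.0), (0.0, 100.0)                        # AB line due north
print(lit_led(cross_track_error(a, b, (0.23, 50.0))))  # → 6 (right of center)
```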
Driver Assistance: Self-Guiding Inter-Row Weeder
Mechanical inter-row weeding can be very efficient when carried out by skilled
operators, but it is difficult to keep high accuracy over long periods of time. Lateral
crop-row position data can be extracted from a camera mounted above the crop by
binarising the image into soil and green, and using the knowledge that the crop was
planted in rows, to find the best regression line to approximate the center of the row.
These lines can be extrapolated backward to the weeder, which can be shifted laterally
to run in the center of the row spacing.
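The regression step can be sketched as ordinary least squares over the binarized plant-pixel coordinates. Fitting lateral position x as a function of forward position y keeps near-vertical rows well-conditioned; the pixel data below is synthetic:

```python
def fit_row_line(pixels):
    """Least-squares fit of x = m*y + c through (x, y) plant-pixel coordinates."""
    n = len(pixels)
    sx = sum(x for x, _ in pixels)
    sy = sum(y for _, y in pixels)
    sxy = sum(x * y for x, y in pixels)
    syy = sum(y * y for _, y in pixels)
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)
    c = (sx - m * sy) / n
    return m, c

# Synthetic plant pixels scattered around a row centered at x = 100 + 0.5*y
pixels = [(100 + 0.5 * y + dx, y) for y in range(0, 200, 10) for dx in (-2, 0, 2)]
m, c = fit_row_line(pixels)
print(round(m, 3), round(c, 1))  # → 0.5 100.0
```

Extrapolating the fitted line back to the toolbar position then gives the lateral shift command for the side-shift mechanism.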

Figure 4. Crop row regression lines and an intra-row weeder schematic.
4.3.2 Automatic Steering


There are a number of commercially available systems that can be retrofitted to
conventional tractors that allow the steering function to be automated, while the driver
attends to other tasks. Although these systems may remove part of the steering task
from the operator, they cannot be considered to be autonomous, as the driver must
carry out many other tasks apart from steering.
Figure 5 shows a flowchart of an automatic steering and implement control system.
The position and orientation of the tractor are assessed by combining absolute posi-
tion from an RTK GPS, relative accelerations from the IMU (fiber-optic gyroscope),
and ground speed and distance from the odometry, by using a Kalman filter. The Kal-
man filter is used to ameliorate the errors from the positional sensors by creating an
individual probabilistic error function for each sensor and adjusting a weighting func-
tion in favor of the most appropriate one. A computer in the cab (with an operator in-
terface) sends steering and implement-control messages to a control CPU on the trac-
tor and a job computer on the implement via a Control Area Network (CAN) bus. The
tractor control CPU is tightly coupled to a number of closed-loop feedback systems
such as steering, tractor speed, gearbox control, engine speed, linkage control, etc. The
implement job computer is dedicated to the specific implement and controls its par-
ticular function whether it is spraying, weeding, plowing, etc. At present the tractor
and operator control the implement but in the future the implement may well control
the tractor.
As all of the functions, such as the proposed route and application rates, have been
defined before the job starts, it can be considered a deterministic process.
Figure 5. Flowchart of the basic tractor and implement control functions.


Figure 6. Route plan and treatment map for an automatically steered tractor.

A georeferenced field boundary is imported into the route planning software and
the tractor and implement parameters are defined. Some parameters, such as turning
circle, working speeds, implement width, etc., are extracted from a predefined data-
base. A working direction is defined, usually along the length of the field (similar to
the A-B line above) and the software creates a set of suggested linear routes on the
headlands and in the main part of the field at the working width distance apart. The
user then identifies the particular route required and the places where different opera-
tions or applications should occur. This route plan and treatment map is then trans-
ferred to the tractor controller as in Figure 5. The tractor is driven manually for a few
seconds to calibrate the Kalman filter and then to the start position before being
switched into automatic mode. The tractor then follows the route, correcting for any
sensed positional or steering errors. Implement control is started when the tractor has
reached the boundary of a treatment area. A log file is recorded showing all actual
routes and treatments [1].
4.3.3 Autonomous Tractors
In the past, agricultural engineers have developed many ways to automate agricul-
tural tasks, and the goal of developing an autonomous tractor has almost become the
“holy grail” of agricultural engineering with hundreds of papers and patents dating
from the 1920s. Although many of these systems were successful in terms of automat-
ing particular tasks, none have been able to deal with the real-world complexity of the
agricultural environment to become truly autonomous.
A clear distinction can be made between an automatically steered tractor and an
autonomous tractor. An automatically steered tractor, as described above, needs an
operator to attend to unknown object avoidance, safety, and other non-automated
tasks; an autonomous tractor must be capable of working without an operator. It is
obvious, from a human perspective, that we need more intelligent control of the tractor
but what we humans find easy is often difficult to achieve in a computer. Furthermore,
intelligence is difficult to define and can only be compared to human intelligence. An-
other approach is to define the actions of the tractor in terms of tasks and behaviors.
Figure 7. An early concept drawing of a wire-following tractor.

Many researchers working in robotics consider behavior-based robotics to be the most
appropriate way to develop truly autonomous vehicles. In this way a definition of
autonomous tractor behavior can be expressed as “sensible long-term behavior, unat-
tended, in a semi-natural environment, while carrying out a useful task.”
This sensible long-term behavior is made up of a number of parts. First, sensible
behavior, which at the moment is device-independent, needs to be defined. Alan Tur-
ing defined a simple test for artificial intelligence [2], which is in essence: If a ma-
chine’s behavior is indistinguishable from a person’s then it must be intelligent. We
cannot yet develop an intelligent machine but we can make it more intelligent than it is
today by defining a set of behavior modes that make it react in a sensible way (defined
by people) to a predefined set of stimuli in the form of an expert system. Second, it
must be able to carry out its task over prolonged periods, unattended. When it needs to
refuel or resupply, it must be capable of returning to base and restocking. Third, safety
behaviors are important. The operational modes of the machine must make it safe for
others as well as for itself, and it must be capable of safely deactivating when
subsystems malfunction. Catastrophic failure must be avoided, so multiple levels of
system redundancy must be designed into the vehicle. Fourth, because the vehicle is
interacting with a complex semi-natural environment (in horticulture, agriculture,
parkland, and forestry uses), it must use sophisticated sensing and control systems to be
able to behave correctly. Many projects in the past have found ways to simplify the
environment to suit the vehicle, but the approach should now be to embed enough
intelligence within the tractor to allow suitable emergent behavior to work in an un-
modified environment [3].
Purposeful Autonomous Behavior
The operation of an autonomous vehicle can be divided into two parts: tasks and
behaviors. The task is what the tractor has been instructed to do: navigate, plow, seed,
etc. The way in which it carries out the task is then called the behavior.

Figure 8. Comparison between the traditional and behavior-based approaches
(traditional: sensors → perception → world model → planning → actuators;
behavior-based: layered behaviors such as avoid objects, explore, build maps, monitor
changes, identify objects, and plan changes to the world connect sensors directly to
actuators, with priority increasing with layer level).

Tasks and behaviors can be determined before an operation starts (deterministic) but the tractor
requires the ability to react to new or unknown situations. This requires a reactive re-
sponse to a changing context. The combination of both forms is called a hybrid system.
Some low-level tasks and behaviors can be brought together to form new higher-level
behaviors that may not always give the expected results. These are called emergent
behaviors. Higher-level behaviors can be said to subsume lower, more primitive, be-
haviors [4]. Further reading on behavior-based robotics can be found in [5].
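The layered, priority-based arbitration described above can be sketched in a few lines. This is a minimal illustration only; the behavior names and sensor fields are invented for the example, not taken from any of the cited systems:

```python
# Minimal sketch of priority-based (subsumption-style) behavior arbitration.
# Behavior names and sensor fields are illustrative, not from a real system.

def avoid_obstacle(sensors):
    # Higher-priority behavior: subsumes navigation when something is close.
    if sensors["nearest_obstacle_m"] < 1.0:
        return {"speed": 0.0, "steer": 0.5}   # stop and turn away
    return None                               # not applicable; yield control

def follow_route(sensors):
    # Lower-priority behavior: steer toward the next waypoint.
    return {"speed": 1.5, "steer": sensors["heading_error_rad"] * 0.8}

BEHAVIORS = [avoid_obstacle, follow_route]    # ordered highest priority first

def arbitrate(sensors):
    """Return the command of the highest-priority applicable behavior."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return {"speed": 0.0, "steer": 0.0}       # default: halt

cmd = arbitrate({"nearest_obstacle_m": 0.6, "heading_error_rad": 0.1})
```

With an obstacle at 0.6 m, the avoidance behavior subsumes route following; once the obstacle clears, control falls through to the lower layer again.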
4.3.4 Sensing Systems
For a vehicle to be able to interact with its environment in a sensible manner, it
must be able to sense and understand its local proximity or environment. There are
two main tasks: the need to identify characteristics about a particular target or point of
interest (not covered here), and the ability to sense nearby objects that may become
obstacles as it navigates. These sensors give proximity data relative to the vehicle;
the absolute position of sensed points can be calculated from the absolute position
and pose of the vehicle. A number of non-contact rangefinders are available, but the
two most commonly used are ultrasonic rangefinders and laser scanners.
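Converting relative sensor data to absolute coordinates, as described above, is a 2-D rotation by the vehicle heading plus a translation by the vehicle position. A minimal sketch, with an assumed frame convention (x forward, y left, heading in radians):

```python
import math

def relative_to_absolute(vehicle_x, vehicle_y, vehicle_heading, rel_x, rel_y):
    """Transform a point sensed in the vehicle frame (rel_x forward, rel_y left)
    into the absolute frame, given the vehicle pose (x, y, heading in radians).
    Frame conventions here are assumptions for illustration."""
    cos_h, sin_h = math.cos(vehicle_heading), math.sin(vehicle_heading)
    abs_x = vehicle_x + rel_x * cos_h - rel_y * sin_h
    abs_y = vehicle_y + rel_x * sin_h + rel_y * cos_h
    return abs_x, abs_y

# A target 10 m straight ahead of a vehicle at (100, 200) heading 90 degrees
# lies at approximately (100, 210) in the absolute frame.
ax, ay = relative_to_absolute(100.0, 200.0, math.pi / 2, 10.0, 0.0)
```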
Sensing Proximity by an Ultrasonic Ring
An ultrasonic ring is made up from a number of individual ultrasonic rangefinders
set at fixed angles to give coverage around the vehicle (Figure 9). Each rangefinder
emits a directed ultrasound “chirp” and the time taken to pick up the returned echo is
proportional to the distance to the reflecting object.
4.3 Autonomous Vehicles and Robotics 211

Figure 9. An ultrasonic ring composed of 17 ultrasonic rangefinders (the figure shows
each unit's 30° dispersion angle and 10 m range).

At programmable intervals, each unit transmits an ultrasonic burst, waits until the
ringing in the unit has stopped, and amplifies the returned signal. Amplification is
increased over time to compensate for the reduced energy that has been attenuated
over distance. Distances are calculated from the elapsed time between transmission
and reception and the velocity of sound in air (which is taken as a constant). The oper-
ating range is between 20 cm and 10 m with a 30° dispersion angle [6].
These rangefinders are cheap and reliable but are prone to signal loss when sensing
sound-absorbent materials such as soft fabrics.
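The time-of-flight arithmetic is simple: half the round-trip echo delay multiplied by the speed of sound, which the text treats as a constant. A sketch, using the 20 cm to 10 m operating range quoted above as a validity gate (the function name is illustrative):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C, taken as constant

def echo_to_distance(elapsed_s, min_m=0.20, max_m=10.0):
    """Convert an echo round-trip time to a one-way distance in metres.
    Returns None for echoes outside the sensor's usable range."""
    distance = SPEED_OF_SOUND * elapsed_s / 2.0   # sound travels out and back
    if not (min_m <= distance <= max_m):
        return None
    return distance

# A 29.2 ms round trip corresponds to roughly 5 m.
d = echo_to_distance(0.0292)
```

Range gating rejects both the residual ringing of the transducer (too close) and echoes too weak to be meaningful (too far).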
Sensing Proximity by Laser Scanners
Laser scanners are used to detect an intersecting surface profile from a laser plane.
One laser measurement system (Figure 10) emits a pulsed rotating laser beam at 75 Hz
through 180° and the distance to each point is calculated at 1° intervals. The range
depends on the reflectivity of the object being sensed, but it varies between 30 m and
150 m. Resolution is nominally 10 mm but the statistical error increases to 40 mm at
the upper limits of its range. Output of the distance measurements is via an RS232
serial communications port [7].

Figure 10. Laser scanner used for forward proximity sensing (the figure shows the
180° sweep at 1° intervals and a 30 m range).
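Turning such a sweep of ranges at fixed angular intervals into points in the scanner frame is a polar-to-Cartesian conversion. A minimal sketch, with an assumed angle convention (0° to the vehicle's right, 90° straight ahead):

```python
import math

def scan_to_points(ranges_m, start_deg=0.0, step_deg=1.0):
    """Convert a list of laser ranges into (x, y) points in the scanner frame.
    Angle convention (an assumption here): 0 deg right, 90 deg straight ahead."""
    points = []
    for i, r in enumerate(ranges_m):
        angle = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Three beams at 0, 90, and 180 degrees, each returning 2 m:
pts = scan_to_points([2.0, 2.0, 2.0], start_deg=0.0, step_deg=90.0)
```

A full 180° sweep at 1° intervals would simply pass 181 ranges with the default step.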



Figure 11. Augmented reality: Video images with plant count, likely stem position,
and calculated leaf area [8].

Sensing Obstacles
An object becomes an obstacle when it is likely to interfere with navigation. Most
robotic systems under development assume a 2.5-dimensional world. That means all
obstacles are considered to have infinite height and must be avoided. Consider that in
a 3-D world a small rock is not an obstacle if a spray boom can pass safely over the
top, but it is an obstacle if the tractor attempts to run over it. This type of 3-D interac-
tion is difficult to achieve.
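The 2.5-D versus 3-D distinction above reduces to comparing an object's height against the clearance of the machine part that would pass over it. A deliberately simplified sketch:

```python
def is_obstacle(object_height_m, clearance_m):
    """In a 2.5-D world every object is treated as infinitely tall and is an
    obstacle; in 3-D, only objects taller than the clearance of the machine
    part passing over them are obstacles."""
    return object_height_m >= clearance_m

# A 0.15 m rock is passable under a spray boom with 0.5 m clearance,
# but not under a tractor axle with 0.1 m clearance.
boom_safe = not is_obstacle(0.15, 0.5)   # boom passes over safely
axle_safe = not is_obstacle(0.15, 0.1)   # tractor must steer around it
```

The hard part, as the text notes, is not this comparison but reliably estimating object heights and knowing which part of the machine will pass over which point.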
Sensing Targets
A target or point of interest is the position or object that the task is working on.
This can take the form of a waypoint, crop row, or even an individual plant. Although
RTK GPS can give absolute positions to the centimeter level, which may be all that is
needed for some operations, specialized relative sensors are needed to identify indi-
vidual targets. Cameras and machine vision techniques are often used for this purpose
(see Figure 11).
4.3.5 Multiple Vehicles
Once one autonomous tractor has been developed, it will be relatively easy to
combine a number of them to increase work rates. Three levels of interaction have
been identified.
• Coordination of multiple vehicles can be carried out centrally. Each vehicle
works independently and does not necessarily know about other vehicles, but
has its own task to carry out. An example would be each vehicle carrying out a
different task in a different field.
• Cooperation is where multiple vehicles are working in the same field and are
aware of each other and what others are doing. If three vehicles were carrying
out the same task, such as mechanical weeding in the same field, then each vehi-
cle should know which rows other vehicles are working in before they select a
new row to start in. It would not make sense for two vehicles to come head-to-
head in the same row. Real-time communications between vehicles on a peer-to-
peer basis would be needed.
• Collaboration is where multiple vehicles could share the same task at the same
time. An example would be for multiple vehicles to pull a large trailer that one
vehicle could not pull on its own. This is a very difficult situation to manage ef-
fectively.
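The row-selection rule in the cooperation level can be sketched as claiming rows from a shared set: a vehicle only starts a row no peer has claimed. In a real system the claim set would be kept consistent over peer-to-peer communication rather than shared memory; all names here are illustrative:

```python
def select_row(free_rows, claimed_rows):
    """Pick the first unclaimed row, claim it, and return its number.
    Returns None when every row is already claimed by a peer.
    (In practice the claim would be broadcast to other vehicles.)"""
    for row in free_rows:
        if row not in claimed_rows:
            claimed_rows.add(row)
            return row
    return None

claimed = set()
row_a = select_row(range(1, 6), claimed)   # vehicle A takes row 1
row_b = select_row(range(1, 6), claimed)   # vehicle B takes row 2, not row 1
```

This guarantees that two vehicles never select the same row and so never meet head-to-head, provided claims propagate before the next selection.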

4.3.6 Agricultural Research Vehicles

Robotra
Robotra (Figure 12) has been developed as a tilling robot at the Institute of
Agricultural Machinery in Saitama, Japan, since 1993. It is a commercial tractor
retrofitted with a range of positioning systems (RTK GPS, surveying-grade laser
rangefinder, odometry, digital compass, and inertial measurement) and control
systems that interface with the tractor to allow high levels of automation [9].

Figure 12. Robotra, an autonomous research tractor.

Figure 13. A small purpose-built sensing platform for crops and weeds (©DIAS).

Autonomous Platform and Information System (API)
A small four-wheel-drive, four-wheel-steer platform was produced as a student
project [10] and later modified to take color cameras for weed detection and
hyperspectral cameras for crop health parameters. This omnidirectional platform has
a small ground footprint, high crop clearance, and good maneuverability, making it
an ideal crop-scouting platform.
Demeter
A New Holland 2250 windrower was retrofitted with DGPS, INS, odometry, and
two cameras used to grab images of the crop in front of the machine. An image
processing system was used to extract the relative position of the edge of the crop.
This gave a relative directrix for the harvester to follow. Reliability was improved by
integrating the multiple relative positioning systems with the absolute positioning
system (GPS), which removed accumulated positional offsets as well as providing
primary guidance. Rudimentary object avoidance algorithms were incorporated into
the image processing [11].

Figure 14. Demeter (image from http://www-old.rec.ri.cmu.edu/projects/demeter/index.shtml).

Figure 15. Autonomous windrowing (image from http://www-old.rec.ri.cmu.edu/projects/demeter/index.shtml).

References
1. Glasmacher, H. 2002. AGRO NAV Plan—Software for planning and evaluation
of the path and work of field robots. Automation Technology for Off-Road
Equipment, ed. Q. Zhang, 405-411.
2. Turing, A. 1950. Computing machinery and intelligence. Mind 59: 433-460.
3. Blackmore, B. S., H. Have, and S. Fountas. 2001. A specification of behavioral
requirements for an autonomous tractor. Proc. of the 6th International Symposium
on Fruit, Nut and Vegetable Production Engineering, eds. M. Zude, B. Herold,
and M. Geyer, 25-36. Potsdam-Bornim, Germany: Institut für Agrartechnik
Bornim e.V.
4. Brooks, R. A. 1986. A robust layered control system for a mobile robot. J.
Robotics and Automation RA-2: 14-23.
5. Arkin, R. C. 1998. Behavior Based Robotics. Cambridge, MA: MIT Press.
6. Polaroid. 1993. Polaroid ultrasonic developer’s kit. PXW6431. Polaroid
Corporation. Available at: www.polaroid-oem.com.
7. SICK. 1998. Laser Measurement System LMS 2000. Laser Measurement
Technology, SICK optics, Sebastian-Kneipp-Straße 1, D-79183 Waldkirch,
Germany. Available at: www.sick.de.
8. Tang, L., Tian, L., and B. L. Steward. 2000. Color image segmentation with
genetic algorithm for in-field weed sensing. Trans. ASAE 46: 1019-1028.
9. Matsuo, Y., Yamamoto, S., and O. Yukumoto. 2002. Development of tilling
robot and operation software. Automation Technology for Off-Road Equipment,
ed. Q. Zhang, 184-189.
10. Madsen, T. E., and H. L. Jakobsen. 2001. Mobile robot for weeding. Unpublished
MSc. thesis. Danish Technical University.
11. Pilarski, T., M. Happold, H. Pangels, M. Ollis, K. Fitzpatrick, and A. Stentz.
2002. The Demeter system for automated harvesting. Autonomous Robots 13: 19-
20.
