
IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 7, NO. 2, APRIL 1991

Blanche: An Experiment in Guidance and Navigation of an Autonomous Robot Vehicle


Ingemar J. Cox, Member, IEEE
Abstract: This paper describes the principal components and capabilities of Blanche, an autonomous robot vehicle. Blanche is designed for use in structured office or factory environments rather than unstructured natural environments. This is a significant restriction, but it still allows for many potential applications. An overview of the physical configuration of the vehicle is presented, as well as a description of its two sensors, an optical rangefinder and odometry. It is assumed that an off-line path planner provides the vehicle with a series of collision-free maneuvers, consisting of line and arc segments, to move the vehicle from its current position to a desired position. On board the vehicle, the line and arc segment specifications are sent to control software consisting of low-level trajectory generation and closed-loop motion control. The trajectory generator takes each segment specification and generates a reference vector at each control update cycle. The cart controller controls the front steering angle and drive velocity using conventional feedback compensation to maintain small errors between the reference and measured states. The controller assumes accurate knowledge of the vehicle's position. We believe that position estimation is a primary problem that must be solved for autonomous vehicles working in structured environments. Blanche's position estimation system consists of 1) an a priori map of its environment, represented as a collection of discrete line segments in the plane; 2) a matching algorithm that registers the range data with the map of line segments; this algorithm is robust against missing and spurious data and is reasonably fast, allowing matching to occur frequently (approximately every 8 s); and 3) an estimate, produced by the matching algorithm, of the precision of the corresponding match/correction, which is then optimally (in a maximum likelihood sense) combined with the current odometric position to provide an improved estimate of the vehicle's position. The vehicle and associated algorithms have all been implemented and tested within a structured office environment. Except for the off-line global path planner, the entire autonomous vehicle is self-contained, all processing being performed on board and with no recourse to passive or active beacons placed in the environment. We believe this vehicle is significant not just because of the sensing and algorithms to be described, but also because its implementation represents a high level of performance at low cost.

I. INTRODUCTION

Blanche [4] is an experimental vehicle designed to operate autonomously within a structured office or factory environment. We believe that a key problem for such a vehicle is accurate position estimation, particularly if active or passive beacons are not permitted. Blanche is an experiment in the guidance and navigation of an autonomous vehicle within an office environment. From a research perspective, Blanche has served as a testbed with which to experiment with such things as real-time programming languages, sensor integration/data fusion techniques, and techniques for error detection and recovery. From a commercial perspective, we have tried to be cost conscious, since many potential commercial applications are very cost sensitive.

This paper describes the principal components and capabilities of the vehicle. Section II describes the physical structure of the vehicle and its associated sensors. Section III describes the trajectory generation and guidance control system for the vehicle; experimental results are included. The guidance system compares the reference position and velocity vector with the measured vector and uses conventional linear feedback for control. Section IV describes the position estimation system employed by the vehicle. Included in this is a description of the robot's map representation, Section IV-A, and the algorithm used to match the range data to this map, Section IV-C, as well as some implementation details. Section IV-D discusses how the odometry position estimate is combined with the correction estimated by the matcher. The system has been implemented on the vehicle, and experimental results are included in Section IV-E. Section V contains a brief description of the software implementation.

(Manuscript received September 4, 1989; revised September 1, 1990. A portion of this work was presented at the IEEE International Workshop on Intelligent Robots and Systems, 1989. The author is with the NEC Research Institute, Princeton, NJ 08540. IEEE Log Number 9041908.)

II. OVERVIEW OF BLANCHE

Blanche, shown in Fig. 1, has a tricycle configuration consisting of a single steerable drive wheel at the front and two passive rear wheels. The vehicle is powered by two sealed 12-V 55-Ah batteries which, in the current configuration, provide a useful lifetime of approximately 7 h. Control of the cart is based on a Multibus system consisting of an MC68020 microprocessor with an MC68881 math coprocessor, 2 Mbyte of memory, an Ethernet controller, a custom two-axis motor controller, and an analog-to-digital convertor. The Motorola 68020 runs a real-time UNIX-derived executive called NRTX [16].

Blanche is designed to be low cost, and its dependence on only two sensors, an optical rangefinder and odometry, reflects this. Both sensors are low cost and together provide all the navigational sensing information available to the cart. The advantage of odometry is, of course, that it is both simple and inexpensive. However, it is prone to several sources of error. First, surface roughness and undulations may cause the distance to be overestimated. Second, wheel slippage can cause distance to be underestimated. Finally, variations in load can distort the odometer wheels and introduce additional errors. If the load can be measured, then the distortion can be modeled and corrected for [28]. Where appropriate, a simple and more accurate alternative is to provide a pair of knife-edge non-load-bearing wheels solely for odometry. Blanche uses this approach. However, even very small errors in the vehicle's initial position can lead to gross errors over a long enough path. Consequently, it is imperative that the vehicle's environment be sensed in order to annul these accumulating errors.


Fig. 1. Cart Blanche.

Fig. 2. Range map of room 2B-420 from a single rangefinder scan; each point is denoted by a circle of twice the standard deviation of its measurement.

A simple low-cost optical rangefinder has been developed specifically for cart navigation [20]. The rangefinder uses an approximately 1-in-diameter beam and a rotating mirror to provide 360° polar-coordinate coverage of both distance and reflectance out to about 20 ft. Both radial and range resolution correspond to about 1 in at a ranging distance of 5 ft, with an overall bandwidth of approximately 1 kHz. Thus, there is the capability to extract approximately 1000 samples per revolution, though, in practice, we limit the number to about 180. Fig. 2 shows a typical range map of a room obtained from a single scan of the rangefinder. A scan typically takes about 1 s. Each point is represented by its corresponding region of uncertainty, denoted by a circle of radius twice the standard deviation in the measurement. It should be pointed out that the error due to assuming that the range output is linear with distance may sometimes exceed the error due to noise in the rangefinder. This systematic error can be removed by using a table look-up technique to accurately map range output into distance. We assume that an off-line path planner [31] generates a series of collision-free maneuvers, consisting of line and arc segments, to move the vehicle from a current to a desired position. Since the cart has a minimum turning radius (approximately 2 ft), it is not possible to simply turn on the spot and vector to the desired position, as is the case for differentially driven vehicles. This path is downloaded to the vehicle, which then navigates along the commanded route. The next section describes how these line and arc segments are interpreted by the vehicle.

Fig. 3. Block diagram of the on-board control system: reference trajectory generator, cart controller, cart dynamics and kinematics, and real-time model-based position estimation from odometry.

III. GUIDANCE (TRAJECTORY GENERATION AND CONTROL)

This section is taken from Nelson and Cox [23]. The path plan received from the off-board path planner consists of a list, each line of which specifies a segment of the path. The segment specification includes the segment type and the desired state at the end of the segment. The segment specifications are sent to control software consisting of low-level trajectory generation and closed-loop motion control, as illustrated in Fig. 3 [23]. Briefly, the trajectory generator takes each segment specification and generates a reference vector at each control update cycle (every 0.1 s). The cart controller controls the front steering angle and drive velocity using conventional feedback compensation to maintain small errors between the reference and measured states, as described in Section III-A.

The trajectory generator (TG) continually provides an appropriate reference state for the cart controller to track. The TG computes at each control update cycle the reference state on the path, the reference steering angle and drive speed, and the distance to the end of the current path segment. The reader is directed to [23] for a detailed explanation of the generation of the reference vectors. When the distance to the end point of the segment becomes less than the distance between the preceding reference points, the TG requests the next segment in the path plan and begins generating reference states along the new segment (lines or continuous-curvature splines [22]).
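To make this concrete, here is a minimal sketch of a trajectory generator of this kind: a downloaded list of segment specifications is consumed one at a time, a reference state is emitted every 0.1 s, and the next segment is requested when the remaining distance drops below one reference-point spacing. Only line segments are shown, and all type and member names are illustrative rather than Blanche's actual code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Reference vector emitted each control update cycle.
struct RefState {
    double x, y, theta;   // reference position and heading on the path
    double v;             // reference speed
};

// One line of the downloaded path plan (only line segments are
// sketched here; Blanche also supports arcs and, later, polar splines).
struct Segment {
    double x0, y0, x1, y1;  // start and end points of a line segment
    double v;               // desired speed along the segment
    double length() const { return std::hypot(x1 - x0, y1 - y0); }
};

class TrajectoryGenerator {
public:
    explicit TrajectoryGenerator(std::vector<Segment> plan) : plan_(plan) {}

    // Called every 0.1 s. Advances the reference point along the current
    // segment and requests the next segment when the distance to the end
    // point becomes less than the reference-point spacing.
    RefState next() {
        const Segment& seg = plan_[current_];
        double step = seg.v * kCycle;               // spacing between reference points
        s_ += step;
        if (seg.length() - s_ < step && current_ + 1 < plan_.size()) {
            s_ = 0.0;                               // smooth hand-off to next segment
            ++current_;
        }
        const Segment& cur = plan_[current_];
        double frac = std::min(s_ / cur.length(), 1.0);
        return RefState{cur.x0 + frac * (cur.x1 - cur.x0),
                        cur.y0 + frac * (cur.y1 - cur.y0),
                        std::atan2(cur.y1 - cur.y0, cur.x1 - cur.x0),
                        cur.v};
    }

private:
    static constexpr double kCycle = 0.1;  // s, control update period
    std::vector<Segment> plan_;
    std::size_t current_ = 0;              // active segment
    double s_ = 0.0;                       // distance travelled along it
};
```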


The main advantage of the reference path guidance scheme over point-to-point guidance schemes [13], [29] is that the cart's on-board system is always providing a current (computed) point on the path where the cart should be, with smooth transitions across path-segment boundaries. When the distance between this computed current position and the end of the path segment is less than a threshold, a transition from the current to the next path segment occurs. A point-to-point control scheme compares the current measured position (rather than the current computed position) to the desired position. However, because of measurement errors, such a scheme may not detect the point at which the control algorithm should switch to a new target point, and hence may fail to make the proper transitions from one path segment to the next.

The TG scheme also provides a means for on-board monitoring of the operating status of the cart. Under normal operating conditions, the difference between the measured state of the cart and the current reference state should be small. If this state error ever exceeds some established limits, then the cart must be functioning abnormally. The cart controller can test for abnormal errors and take remedial action. By this means, system malfunctions, such as failure of motors or odometry sensors, can be detected in time for the cart to be safely stopped.

A third advantage of the TG scheme is that it provides a convenient separation between the path-guidance logic and the feedback control logic, which must be tuned to the characteristics of the closed-loop dynamics of the steering and drive systems of the cart. Once properly adjusted, the controller design remains fixed throughout all the different operating situations that may arise along the path; all the controller needs to know is what the current reference and measured states are. The TG takes care of the geometric computations necessary to evolve the path plan as points in time, and also of anomalies that may require local path modifications, either spatially, temporally, or both.

A. Cart Controller

The coordinates by which the cart is controlled are the front-wheel steering angle $\alpha$ and the front-wheel drive speed $\omega$. If the location of the cart is measured with respect to the center of rotation (CR) of the cart, located at the midpoint between the rear odometry wheels (see Fig. 4), the speed $v$, heading $\theta$, and position $(x, y)$ of the cart are related to the steering angle and drive speed coordinates $(\alpha, \omega)$ by the kinematic equations

$$v = R\omega\cos\alpha, \qquad \dot{\theta} = (R/b)\,\omega\sin\alpha, \qquad \dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta \tag{1}$$

where $R$ is the radius of the drive wheel and $b$ is the wheelbase, as shown in Fig. 4.¹

Fig. 4. Plan view of the cart coordinates.

¹The choice of the $x, y$ position determines what point on the cart tracks the reference path, as well as the form of the kinematic equations [1]. The choice to have the CR point track the path, rather than, say, the front wheel, was made for reasons related to path planning and cart maneuverability [21].
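For illustration, the kinematic model (1) can be integrated numerically, as in dead reckoning or simulation; the sketch below uses a simple Euler step over the 0.1-s control period. Function and parameter names are illustrative.

```cpp
#include <cmath>

// Cart state: CR-point position (x, y) and heading theta.
struct CartState {
    double x, y, theta;
};

// One Euler integration step of the kinematic equations (1):
//   v        = R * omega * cos(alpha)
//   thetadot = (R / b) * omega * sin(alpha)
//   xdot     = v * cos(theta),  ydot = v * sin(theta)
// R is the drive-wheel radius, b the wheelbase, alpha the steering
// angle, omega the drive-wheel angular speed, dt the update period.
CartState step(CartState s, double alpha, double omega,
               double R, double b, double dt) {
    double v = R * omega * std::cos(alpha);   // forward speed of the CR point
    s.x     += v * std::cos(s.theta) * dt;
    s.y     += v * std::sin(s.theta) * dt;
    s.theta += (R / b) * omega * std::sin(alpha) * dt;
    return s;
}
```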

The task for the cart controller is to steer and drive the front wheel of the cart so that the CR point accurately tracks the reference points specified by the trajectory generator. The cart position and velocity measurements necessary for performing this task are obtained from the position estimation system indicated in the cart control block diagram shown in Fig. 3. The cart controller consists of a path controller and two motor-control units. The motor-control units provide inner control loops for the steering and drive motors, while the path controller provides the outer control loop for trajectory tracking.

1) Motor Error Control: The front-wheel steering angle and the front-wheel drive speed depend on the motor-load dynamics in response to the control signals to the motors. The purpose of the motor-control units is to compensate for the dynamics in such a way that the motor outputs $(\alpha, \omega)$ respond rapidly and smoothly to accurately follow the commanded values $(\alpha_c, \omega_c)$ output from the path controller. This is done by conventional feedback compensation, based on error signals between the commanded values and the motor output values, measured by the angle encoders on the motor shafts. To achieve reasonably high-gain error control without causing undesirable, or even unstable, response, controller chips containing digital pole-zero compensating filters are used. The dynamics of the motor-load combination for the two units are represented to a close approximation by the linear model

$$\ddot{\alpha} = a_1 - H_1\dot{\alpha}, \qquad \dot{\omega} = a_2 - H_2\omega$$

where $a_1$ and $a_2$ are the control accelerations (torque per unit moment of inertia) developed by the steering and drive motors, respectively, and $H_1$ and $H_2$ are the friction coefficients of the two motor-load systems. The digital error signals driving the control units are

$$e_1(k) = C_e[\alpha_c(k) - \alpha(k)] \quad \text{and} \quad e_2(k) = C_e[\omega_c(k) - \omega(k)]$$

where $k$ denotes the $k$th sampling period and $C_e$ is the angle encoder gain (in counts per radian). The difference between the two control units is that one is controlling an angle while the other is controlling an angular velocity. Because of the extra integration in the angle control dynamics, the steering angle is the more difficult one for which to compensate properly.

2) Path Error Control: The path controller generates the steering and drive command signals $(\alpha_c, \omega_c)$, using the reference values $(\alpha_r, \omega_r)$ from the TG, and path errors derived from the measured state of the cart relative to the reference state, as shown in Fig. 5. The path error is resolved into four components. As diagrammed in Fig. 5, the distance error is resolved into a tangential error $e_t$ and a normal error $e_n$:

$$e_t = (x_r - x_m)\cos\theta_r + (y_r - y_m)\sin\theta_r$$
$$e_n = -(x_r - x_m)\sin\theta_r + (y_r - y_m)\cos\theta_r.$$

The velocity error is resolved into a heading error $e_h$ and a speed error $e_v$:

$$e_h = \theta_r - \theta_m, \qquad e_v = v_r - v_m.$$

Fig. 5. Error components used for path control (measured state, reference state, and the desired and actual paths).

Given the reference values $\alpha_r, \omega_r$ and these error components, the command steering and drive signals generated by the path controller in each update cycle are

$$\alpha_c = \alpha_r + C_1 e_n + C_2 e_h$$
$$\omega_c = \omega_r + C_3 e_t + C_4 e_v. \tag{2}$$

Note that when the cart is following the reference path without error, the command steering and speed are equal to their reference values. The weighting constants $C_1, \dots, C_4$ in (2) are chosen to optimize the overall path-tracking performance of the cart. As with the motor-control units, it was the steering control part of the outer-loop design that was the most sensitive to the choice of the control weights in (2). Because the cart speed affects the optimum steering gains through the kinematic relations of (1), work is currently under way to find an appropriate functional dependence on speed for these weights, such that path control is maintained at near-optimum during the critical docking segments, when the speed is being reduced to zero. Although not included in the experimental results described below, a quick analysis suggested that the gains $C_1$ and $C_2$ should be scaled by the inverse of $v^2$ and $v$, respectively, where $v$ is the speed of the vehicle. This has been implemented and appears to provide reasonable results over the range of tested speeds from 3 to 12 in/s.
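A sketch of this control law follows: the four error components are computed exactly as defined above and combined per (2). The gain values C1-C4 are placeholders, not Blanche's tuned constants.

```cpp
#include <cmath>

struct State   { double x, y, theta, v; };  // measured or reference state
struct Command { double alpha, omega; };    // steering and drive commands

// Path controller: computes the command signals of (2) from the
// reference values and the four path-error components.
Command pathControl(const State& ref, const State& meas,
                    double alpha_r, double omega_r) {
    double dx = ref.x - meas.x, dy = ref.y - meas.y;
    double c = std::cos(ref.theta), s = std::sin(ref.theta);
    double e_t =  dx * c + dy * s;            // tangential error
    double e_n = -dx * s + dy * c;            // normal error
    double e_h = ref.theta - meas.theta;      // heading error
    double e_v = ref.v - meas.v;              // speed error

    // Placeholder weights C1..C4; the paper scales C1 and C2 by
    // 1/v^2 and 1/v, respectively, to keep tracking near-optimal.
    const double C1 = 0.5, C2 = 1.0, C3 = 0.5, C4 = 1.0;
    return Command{alpha_r + C1 * e_n + C2 * e_h,    // (2), steering
                   omega_r + C3 * e_t + C4 * e_v};   // (2), drive
}
```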

B. Experimental Results

The test runs for the vehicle were limited to a 14 × 12 ft workspace in the robot laboratory. Hence, the test paths have been mostly figure-eight patterns with tight turns in which the steering angle is at or near the hard limit of 45°. Fig. 6 illustrates the performance of the cart over such a test path. The reference trajectory is the solid line, while the cart's measured trajectories are shown for three operating speeds: 2 in/s (dotted), 4 in/s (dashed), and 6 in/s (large dashed). In these experiments, measured refers only to the vehicle's odometric estimate of its position. The control algorithm clearly performs better at the slower speeds, as the gains have been optimized for the slower speeds. The performance degradation at higher speeds is usually acceptable, since high-speed transit maneuvers typically do not require as much precision as low-speed docking maneuvers. Some overshoot at the beginning of the arc segments is unavoidable, since the control cannot make the steering wheel follow the angle discontinuity at the transition point between line and arc segments. Furthermore, as the vehicle enters the tighter loop on the right, the overshoot at high speeds is exaggerated due to the steering wheel hitting the hard limit stop. The discontinuity in steering angle could be avoided if we were to control the point directly under the drive wheel rather than the midpoint between the rear wheels. However, controlling the orientation of the vehicle then becomes more complex, as does path planning. A further alternative would be to replace the arc segments with segments of curves that are continuous in curvature. We have recently incorporated polar splines [21] into the trajectory generator, which closely approximate the optimum minimum-square-jerk curves of Kanayama and Hartman [15]. In fact, the trajectory generator is easily extensible, as discussed in Section VI.

Fig. 6. Measured path trajectories for the vehicle (x in inches).

IV. NAVIGATION (POSITION ESTIMATION)

The guidance control subsystem assumes an accurate knowledge of the vehicle's position. By position, we mean the vehicle's $(x, y, \theta)$ configuration with respect to either a global or local coordinate frame, not topological position, e.g., "to the left of the wall." Dead reckoning using odometry sensors drifts with time, so the estimated position of the vehicle becomes increasingly poor. This complicates the process of position estimation. In order to correct for these cumulative errors, the vehicle must sense its environment and at least recognize key landmarks. Using sensory information to locate the robot in its environment is the most fundamental problem in providing a mobile robot with autonomous capabilities. One of the goals of this research has been to develop robust algorithms that will


allow a vehicle to navigate in a structured office or factory environment.

Navigation can be broadly separated into two distinct phases, reference and dead reckoning, as discussed in Cox and Wilfong [9]. Reference guidance refers to navigation with respect to a coordinate frame based on visible external landmarks. Dead reckoning refers to navigation with respect to a coordinate frame that is an integral part of the guidance equipment. Dead reckoning has the advantage that it is totally self-contained. Consequently, it is always capable of providing the vehicle with an estimate of its position. Its disadvantage is that the position error grows without bound unless an independent reference is used to periodically reduce the error. Reference guidance has the advantage that the position errors are bounded, but detection of external references or landmarks and real-time position fixing may not always be possible. Clearly, dead reckoning and external reference navigation are complementary, and combinations of the two approaches can provide very accurate positioning systems.

Position estimation based on the simultaneous measurement of the range or bearing to three or more known landmarks is well understood [26]. However, recognizing naturally occurring reference points within a robot's environment is not always easy, due to noise and/or difficulties in interpreting the sensory information. Placing easy-to-recognize beacons in the robot's workspace is one way to alleviate this problem. Many different types of beacons have been investigated, including 1) corner cubes and laser scanning systems, 2) bar-code, spot mark, or infrared diode beacons [27] and associated vision recognition systems [12], and 3) sonic or laser beacon systems. We chose not to rely on beacons, believing that the ability to operate in an unmodified environment was preferable from a user standpoint. There have been many efforts to use high-level vision, particularly stereo vision [2], [18], by which to navigate. However, conventional vision systems were ruled out because of the large computational and associated hardware costs: we want the vehicle to be economic.

Fig. 7 is an overview of Blanche's position estimation system. It consists of

1) an a priori map of its environment,
2) a combination of odometry and optical range sensing to sense its environment,
3) an algorithm for matching the sensory data to the map [7], and
4) an algorithm to estimate the precision of the corresponding match/correction [8], which allows the correction to be optimally (in a maximum likelihood sense) combined with the current odometric position to provide an improved estimate of the vehicle's position.

Provided the error models are accurate, the combined position estimate is less noisy than any one of the sets of individual measurements. The sensor integration process can, of course, be routinely mechanized by use of a Kalman filter [11], [19], but one is not explicitly used in this system.

Fig. 7. Expanded view of the position estimation subsystem (the matcher's correction is combined with the odometric estimate).

A. Map Representation

Many spatial representations have been proposed. However, it is worth reflecting on the purpose of a map representation. Our purpose is to compare sensed range data to a map in order to refine our position estimate. The map is not intended to be used for path planning; it is not even necessarily intended to be updated by sensory data. Its sole purpose is position estimation in an absolute coordinate frame. While many spatial representations have been proposed, few appear to have been tested on a real vehicle. One major exception is occupancy grids [10]. Occupancy grids represent space as a 2D or 3D array of cells, each cell holding an estimate of the confidence that it is occupied. A major reason given for not using a more geometric representation is that sensor data are very noisy, making geometric interpretation difficult. This is especially true of sonar data (which Elfes [10] and others used) and is one reason why sonar was not used on Blanche. The infrared range data are much less noisy. This, combined with the fact that factory or office buildings are easily described by collections of line segments, also influenced our choice of representation. We represent the environment as a collection of discrete line segments in the plane, as sketched below. A 2D representation was chosen because 1) much of the robot's environment is uniform in the vertical direction (there is not much to be gained from a 3D representation), 2) the range sensor currently provides only 2D information $(r, \theta)$, and 3) matching sensor data to 2D maps is significantly simpler than matching to 3D maps. More pragmatically, a line segment description was chosen because a matching algorithm had been developed that used line segments. Moreover, the matching algorithm used (see Section IV-C) does not require the explicit extraction of any features from the image.
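As sketched below, such a map reduces to a list of 2D segments plus the point-to-segment distance used by the matcher's correspondence search; the types and names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point   { double x, y; };
struct LineSeg { Point a, b; };        // one wall segment of the map

using Map = std::vector<LineSeg>;      // unordered segment list

// Euclidean distance from point p to the *finite* segment s,
// clamping the projection parameter t to [0, 1].
double distToSegment(const Point& p, const LineSeg& s) {
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0.0
        ? ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2
        : 0.0;
    t = std::clamp(t, 0.0, 1.0);
    return std::hypot(s.a.x + t * dx - p.x, s.a.y + t * dy - p.y);
}
```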

B. Sensor Data

Sensing of the environment is quite straightforward, relying on odometry and an infrared rangefinder, as previously mentioned. The choice of a rangefinder was based primarily on the belief that ultrasonic rangefinders were poor, with severe problems due to specular reflection. A characteristic of the position estimation system described here is that the vehicle does not move under odometric control for a period, then stop to locate beacons, update its position, and continue. Rather, its position is constantly being updated as the vehicle moves. Since range data are collected while the vehicle is moving, there is a need to remove any motion distortion that may arise. This is done by reading the odometric position at each range sample. The current position and range value are then used to convert the data point into world coordinates for matching to the map. If $(r, a)$ is the range and angle pair in the rangefinder's local coordinate frame, its position in world coordinates is given by



$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = C\,R \begin{pmatrix} r\cos a \\ r\sin a \\ 1 \end{pmatrix}$$

where $R$ is the homogeneous transformation describing the relative position of the rangefinder with respect to the cart, and $C$ describes the vehicle's position with respect to the base coordinate system.

Fig. 8. Map and range data of room 2B-420 (a) before registration (ranger scan 5) and (b) after registration (registered scan 5); distances in inches.
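A sketch of this per-sample conversion, treating the 2D homogeneous transforms as poses: R is the fixed rangefinder mounting on the cart and C is the odometric pose read when the sample was taken. Names, and the exact composition shown, are illustrative assumptions.

```cpp
#include <cmath>

struct Pose  { double x, y, theta; };  // 2D rigid transform (translation + rotation)
struct Point { double x, y; };

// Apply a pose as a homogeneous transform to a point.
Point apply(const Pose& T, const Point& p) {
    double c = std::cos(T.theta), s = std::sin(T.theta);
    return Point{T.x + c * p.x - s * p.y,
                 T.y + s * p.x + c * p.y};
}

// Convert one range sample (r, a) in the rangefinder's local polar
// frame to world coordinates: world = C * R * local, where R is the
// fixed mounting of the rangefinder on the cart and C is the
// odometric pose of the cart read when this sample was taken.
Point toWorld(double r, double a, const Pose& R, const Pose& C) {
    Point local{r * std::cos(a), r * std::sin(a)};
    return apply(C, apply(R, local));
}
```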

C. Matcher

We now describe how the matching of range data to the map is achieved. First, an example. The solid line in Fig. 8(a) is the line segment description of a laboratory room in which Blanche is moved. The model, consisting of 24 line segments, was constructed very simply from measurements based on the 1-ft² floor tile grid. Item a is a tall desk, item b is a ventilation duct, item c is some cardboard posters, item d is a small refrigerator, and item e is a large cupboard. It should be pointed out that the model is very simple; some items in the room are not modeled, and others are only roughly approximated, such as item c, which in fact is made up of several cardboard sections, all at different distances from the wall. The dotted points show the image based on range data acquired from a single scan of the sensor. In this example there is a small degree of rotation and a large translation, of the order of 9 ft in x and 8 ft in y. Fig. 8(b) shows the result of applying the algorithm. It is evident that the correct congruence has been found.

The general matching problem has been extensively studied by the computer-vision community. Most work has centered upon the general problem of matching an image of arbitrary position and orientation relative to a model. Matching is achieved by first extracting features, followed by determination of the correct correspondence between image and model features, usually by some form of constrained search. Once the correspondence is known, determination of the congruence is straightforward.

The following observations motivated the derivation of the matching algorithm described below:

1) The displacement of the image relative to the model is small, i.e., we roughly know where we are at all times. This assumption is almost always true for practical autonomous vehicle applications, particularly if position updating occurs frequently.

2) Feature extraction can be difficult and noisy. Ideally, we would like a matcher that does not require a feature extraction preprocessing stage but works directly on the range data points.
This section is taken from [7] and [8]. If the displacement between image and model is small, then for each point in the image, its corresponding line segment in the model is very likely to be its closest line segment in the model. Determination of the (partial) correspondence between image points and model lines thus reduces to a simple search to find the closest line to each point. The central feature of the original conception was devising a method to use the approximately correct correspondence between image points and model lines to find a congruence that greatly reduces the displacement. Iterating this method leads to the following algorithm (for actual use, the algorithm incorporates certain computational simplifications, as described below):

1) For each point in the image, find the line segment in the model that is nearest to the point. Call this the target.
2) Find the congruence that minimizes the total squared distance between the image points and their target lines.
3) Move the points by the congruence found in step 2).
4) Repeat steps 1-3 until the procedure converges.

The composite of all the step 3) congruences is the desired total congruence.
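The loop can be sketched as follows. The nearest-segment search of step 1 and the outlier gate described later are written out; the least-squares congruence of step 2 is left as a declared helper (a sketch of that solve appears after (3) below). All names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point   { double x, y; };
struct LineSeg { Point a, b; };
struct Congruence { double tx, ty, theta; };  // translation t and rotation theta

double distToSegment(const Point& p, const LineSeg& s) {
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2 : 0;
    t = std::clamp(t, 0.0, 1.0);
    return std::hypot(s.a.x + t * dx - p.x, s.a.y + t * dy - p.y);
}

// Step 2: least-squares congruence against the targets' infinite lines
// (a sketch of this solve follows the normal equation (3) below).
Congruence leastSquaresCongruence(const std::vector<Point>& pts,
                                  const std::vector<int>& target,
                                  const std::vector<LineSeg>& map);

Congruence match(std::vector<Point> pts, const std::vector<LineSeg>& map,
                 double gate /* outlier distance limit */) {
    Congruence total{0, 0, 0};
    for (int iter = 0; iter < 20; ++iter) {          // step 4: iterate
        std::vector<Point> kept;                     // points within the gate
        std::vector<int> target;
        for (const Point& p : pts) {                 // step 1: nearest segment
            int best = 0;
            double dmin = distToSegment(p, map[0]);
            for (std::size_t j = 1; j < map.size(); ++j) {
                double d = distToSegment(p, map[j]);
                if (d < dmin) { dmin = d; best = static_cast<int>(j); }
            }
            if (dmin <= gate) {                      // reject spurious points
                kept.push_back(p);
                target.push_back(best);
            }
        }
        Congruence d = leastSquaresCongruence(kept, target, map);  // step 2
        double c = std::cos(d.theta), s = std::sin(d.theta);
        for (Point& p : pts)                         // step 3: move all points
            p = Point{c * p.x - s * p.y + d.tx, s * p.x + c * p.y + d.ty};
        // Compose corrections (small-angle composition; adequate because
        // later iterations involve vanishingly small steps).
        total.tx += d.tx; total.ty += d.ty; total.theta += d.theta;
        if (std::hypot(d.tx, d.ty) < 1e-6 && std::fabs(d.theta) < 1e-8)
            break;                                   // converged
    }
    return total;
}
```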
The rationale for our method of performing step 2 is explained with the aid of Fig. 9. The model shown consists of two line segments. The image consists of several points from both segments that have been displaced by a translation up and to the right and by a slight rotation counterclockwise. For every point in the image, its target segment is the correct segment, as one would hope. It is natural to seek the congruence that minimizes the total squared distance from the image points to their target segments, i.e., the minimizing procedure tries to move the image points from each segment so that they lie on that segment, but it does not care where on the segment they go. In this case it is possible to reduce the total squared distance to 0, and there is a unique congruence that does so, namely, the inverse of the displacement used in forming the image, and one application of steps 1-3 perfectly recovers the desired congruence. If the original displacement was larger and/or the image contained points from near the ends of the line segments, the correspondence of image points to line segments might be imperfect and several iterations might be required. If the image points contain some error, a potentially infinite iteration process might be required, though in practice only a few iterations usually achieve convergence to sufficient accuracy.

Fig. 9. Example of line model and image points.

Step 2 is computationally the most complex step. For computational efficiency, two modifications are introduced, and the new version is called step 2′. First, each target is changed from a line segment to the infinite line containing the segment. Note, however, that step 1 continues to use the finite segments. Second, the dependence of the moved points on $\theta$ is nonlinear, so this dependence is approximated by the first-order terms in $\theta$. Such an approximate congruence is called a pseudocongruence. Note, however, that step 3 continues to use a real congruence, namely, the congruence with the same $(t_x, t_y, \theta)$ as the pseudocongruence found in step 2′. Since the algorithm is iterative and the final iterations involve vanishingly small displacements, the approximations involved in these two modifications do not cause error in the final result.

Any congruence can be described as a rotation by some angle $\theta$ followed by a translation by $t$ and can thus be denoted by $(t, \theta)$. Let the $i$th point of the image be $v_i = (v_{i1}, v_{i2})'$, where the prime indicates transposition, so that $v_i$ is a column vector, not a row vector. Let $\bar{v}$ be the center of gravity of the image points. Let $u_i$ be a unit vector that is orthogonal to the (infinite-line) target of point $v_i$. Let $r_i$ be the dot product of any point in the target with $u_i$, so the target consists of all points $z$ such that $u_i \cdot z = r_i$. Counterclockwise rotation by $\theta$ around $\bar{v}$ means moving a point $z$ to

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} (z - \bar{v}) + \bar{v}.$$

The first-order approximation, which is called a pseudorotation, means moving $z$ to

$$\begin{bmatrix} 1 & -\theta \\ \theta & 1 \end{bmatrix}(z - \bar{v}) + \bar{v} = \theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}(z - \bar{v}) + z.$$

Then the pseudocongruence used in step 2′ moves point $v_i$ to

$$t + \theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}(v_i - \bar{v}) + v_i.$$

The distance of this point from the target line is the dot product of this vector with $u_i$, minus $r_i$. Thus, by making the following definitions

$$x_{i1} = u_{i1}, \qquad x_{i2} = u_{i2}, \qquad x_{i3} = u_i \cdot \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}(v_i - \bar{v})$$
$$y_i = r_i - u_i \cdot v_i$$
$$b = (t_x, t_y, \theta)'$$

step 2′ can be described as minimizing the sum of squared deviations of $x_{i1}b_1 + x_{i2}b_2 + x_{i3}b_3$ from $y_i$.

Now suppose the image contains $n$ points. Let $X$ be the $n \times 3$ matrix whose rows are $(x_{i1}, x_{i2}, x_{i3})$, and let $y$ be the column vector with $n$ components $y_i$. Then the desired estimate $(\hat{t}_x, \hat{t}_y, \hat{\theta})$ is the least-squares solution to

$$Xb = y.$$

By definition, this constitutes least-squares linear regression. The least-squares solution $\hat{b}$ to this equation can be obtained by solving the equation

$$(X'X)\hat{b} = X'y. \tag{3}$$
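A sketch of this solve for the three unknowns: accumulate X'X and X'y from the row definitions above and solve (3) by Gaussian elimination with partial pivoting. Names are illustrative, and the degenerate corridor case noted in Section IV-D shows up here as a (near-)singular pivot.

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Row of X and entry of y for one image point, per the definitions above:
//   (x_i1, x_i2) = u_i,  x_i3 = u_i . J (v_i - vbar),  y_i = r_i - u_i . v_i
struct Row { double x1, x2, x3, y; };

// Solve (X'X) b = X'y for b = (tx, ty, theta)' by Gaussian elimination.
std::array<double, 3> solveNormalEquations(const std::vector<Row>& rows) {
    double A[3][4] = {};                      // augmented matrix [X'X | X'y]
    for (const Row& r : rows) {
        const double x[3] = {r.x1, r.x2, r.x3};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += x[i] * x[j];
            A[i][3] += x[i] * r.y;
        }
    }
    for (int i = 0; i < 3; ++i) {             // forward elimination w/ pivoting
        int piv = i;
        for (int k = i + 1; k < 3; ++k)
            if (std::fabs(A[k][i]) > std::fabs(A[piv][i])) piv = k;
        for (int j = 0; j < 4; ++j) std::swap(A[i][j], A[piv][j]);
        for (int k = i + 1; k < 3; ++k) {
            double f = A[k][i] / A[i][i];     // pivot can vanish when the
            for (int j = i; j < 4; ++j)       // geometry is degenerate, e.g.,
                A[k][j] -= f * A[i][j];       // a single corridor direction
        }
    }
    std::array<double, 3> b{};                // back substitution
    for (int i = 2; i >= 0; --i) {
        double s = A[i][3];
        for (int j = i + 1; j < 3; ++j) s -= A[i][j] * b[j];
        b[i] = s / A[i][i];
    }
    return b;                                 // (tx, ty, theta)
}
```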

The implementation details are given in the following. A major criticism of least-squares linear regression is its sensitivity to outliers. The algorithm as described so far is intrinsically robust against incompleteness of the image, i.e., missing data points. To make it robust against spurious data (outliers), e.g., people walking by, and against incompleteness of the model, it is modified further by deleting from consideration in step 2′ points whose distance to their target segments exceeds some limit. Note that, for the general matching problem, in which the assumption of small displacement is not necessarily valid, such an approach would not be possible. Our experience is that robustness is very desirable, yet very hard to achieve. The robustness of this algorithm is one of its major features.²

²Minimizing the median absolute deviation would probably lead to a more inherently robust algorithm [14]. However, it is computationally more expensive.


Step 1 of the algorithm requires that, for each point, its corresponding (closest) line segment be found. Since the map is currently unordered, this entails exhaustively calculating the distance of every line from each point in order to find the closest line segment. If there are $p$ points and $l$ line segments, the complexity of this operation is $O(pl)$. However, if the map is first preprocessed to compute the corresponding Voronoi diagram (a step that takes $O(l \log l)$ time), determination of the closest line segment can then be achieved in $O(p \log l)$ [24]. Presently, we have not implemented the Voronoi solution. Since determination of the partial correspondence can quickly dominate the processing time, it pays to keep $p$ and $l$ small. We therefore restrict the number of image points to approximately 180 out of a possible 1000. Originally the map was small enough that we did not have to worry about the size of $l$, the number of line segments. However, this is no longer the case. Fortunately, our knowledge of the vehicle's position allows us to extract only those lines within, say, a 20-ft radius. It is this subset of the entire map, which often numbers only four or five segments, that is then used by the matching algorithm. Currently, position updates occur approximately every 8 s for an image size of 180 points and a map of 24 line segments. This is sufficient for our requirements. However, it should be noted that the matching is the lowest priority process running on the uniprocessor system. A considerable improvement in speed would be gained by simply dedicating a processor board to this task. Finally, we note that the matcher returns an $(x, y, \theta)$ correction based on an origin at the centroid of the image/range scan. A coordinate transformation is therefore required to convert the matcher's centroid-centered correction to a correction based on the vehicle's current position.

D. Integrating Odometric and Matched Positions

It is assumed that the $x$, $y$, and $\theta$ position and orientation are independent of one another. We therefore describe how the $x$ position is updated, $y$ and $\theta$ being treated identically. We first need to estimate the standard deviation in the measurement of $x$ for both the matcher and odometry.

1) Estimating Matcher Accuracy: Although the rangefinder has an associated measurement uncertainty in $(r, a)$, we do not directly transform this uncertainty into $y_i$. Rather, the measurements are simply transformed into $(x, y)$ coordinates with an assumed error having mean 0 and unknown variance. Because the matching algorithm constitutes a least-squares linear regression, then, referring to (3), if $X$ is known and each $y_i$ is observed with an error having a mean of 0 and unknown variance $\sigma^2$, and if the errors are independent variables having the Gaussian (i.e., normal) distribution, then the least-squares solution of this equation has the Gaussian distribution whose mean value is the true underlying value and whose estimated covariance matrix is given by $s^2(X'X)^{-1}$, where $s^2$ is the unbiased estimate of $\sigma^2$,

$$s^2 = (y - X\hat{b}) \cdot (y - X\hat{b})/(n - 4).$$

The covariance matrix is calculated as part of the overall calculation for the congruence [8].

2) Estimating Odometry Accuracy: Estimation of the standard deviation for odometry is less rigorous. Presently, it is assumed to be directly proportional to the distance moved. Empirically, we have set the constant of proportionality to 0.01, i.e., after 100 ft the standard deviation is 1 ft. For a more rigorous treatment of odometric error, see [30].

3) Integration: Given $(x_o, \sigma_o)$ and $(x_m, \sigma_m)$, the position and standard deviation from odometry and matching, respectively, we can optimally combine these using the formulas [19], [3]

$$x_u = x_o + \frac{\sigma_o^2}{\sigma_o^2 + \sigma_m^2}(x_m - x_o), \qquad \sigma_u^2 = \frac{\sigma_o^2\,\sigma_m^2}{\sigma_o^2 + \sigma_m^2}$$

where $(x_u, \sigma_u)$ are the updated values for the $x$ position and its corresponding standard deviation. It is clear that if the standard deviation in the odometry is small compared with the matcher's, the error term $(x_m - x_o)$ has little effect. Alternatively, if the standard deviation in the odometry is large compared with the matcher's, almost all the error term is added to the corrected value. This is intuitively correct. In fact, for the case of the vehicle moving down a parallel corridor, very good estimates of the vehicle's orientation and position normal to the corridor can be made by the matcher, but the position of the vehicle along the corridor cannot be estimated by the matcher, and its associated standard deviation is infinite. In these situations we rely on odometry until the matcher detects line features that are not parallel to the corridor. This updated value is fed back to the odometry, where it is used as the new value from which the current position is estimated. This is referred to as an indirect feedback configuration [19], in comparison to an indirect feedforward configuration, since the error correction is fed back into the odometry subsystem. In this way, the odometry errors are not allowed to grow without bound. Further, since the odometry estimate is corrected after each match or registration, failure of the rangefinder or matching subsystem does not lead to immediate failure of the robot system. Instead, the robot is free to continue navigating for some period using only odometry, until such time as the odometric estimates of the standard deviation in position exceed some unacceptable limits.
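A sketch of the scalar update, assuming the formulas above (the standard maximum-likelihood combination of two independent Gaussian estimates); an infinite matcher standard deviation leaves the odometric estimate unchanged, matching the corridor case discussed above.

```cpp
#include <cmath>

struct Estimate { double x, sigma; };  // value and standard deviation

// Combine an odometric estimate and a matcher correction for one
// coordinate, per the update formulas above. If the matcher cannot
// observe this coordinate (sigma_m infinite, e.g., along a corridor),
// the weight goes to zero and odometry is returned unchanged.
Estimate fuse(const Estimate& odo, const Estimate& match) {
    if (std::isinf(match.sigma)) return odo;   // matcher uninformative
    double so2 = odo.sigma * odo.sigma;
    double sm2 = match.sigma * match.sigma;
    double w = so2 / (so2 + sm2);              // weight on the error term
    return Estimate{odo.x + w * (match.x - odo.x),
                    std::sqrt(so2 * sm2 / (so2 + sm2))};
}
```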
E. Experimental Results

Fig. 10 is a sequence of range images taken by the robot vehicle as it moved along a predetermined path. The vehicle is initially positioned at the origin (0, 0) and pointing in the positive x direction. The line segments illustrate the map of the environment, consisting of a corridor that opens up into an elevator bay, off of which are two additional corridors. Along the corridors are numerous doors that may be either open or closed but are not modeled in the map. Data points obtained from the inside of rooms are therefore (hopefully) treated as spurious and discarded by the matcher. The pairs of numbers for $x$, $y$, and $h$ represent the vehicle's estimate of its $x, y$ position and orientation, together with the associated standard deviations in these estimates.

At initialization the vehicle was given its initial position. In practice, the vehicle had been displaced approximately 3 in in its $(x, y)$ position. Its orientation was approximately correct. The experimental setup lacked an independent position measurement system, such as a real-time triangulation system. Consequently, the absolute accuracy of the vehicle could not be quantified except at its finish position. However, close examination of the path of the vehicle indicated that the mobile robot deviated from its path by no more than approximately 6 in. These deviations were not due to the low-level controller, which provided smooth trajectories. Small lurches in the vehicle's motion were visible when the vehicle's position was updated by the matcher because, at these times, the error between the commanded reference position and the estimated position would instantaneously become large (occasionally on the order of a few inches).


Fig. 10. Sequence of range images (scans 1-16) as the vehicle navigated along a corridor. Each scan shows the range data registered against the map (x in inches) and is annotated with the vehicle's estimated x, y position and heading h, together with the associated standard deviations.


Table I tabulates a partial set of corrections made by the vehicle as it traverses its path. The first correction correctly determines the error in the y direction. A correction of 0.7° is also applied to the heading, but examination of the heading corrections suggests that this is noise. However, there is no correction in the x direction. This is because the vehicle cannot tell where it is along the corridor.³ The matcher has an infinite standard deviation along the x direction. Only when the vehicle comes out into the elevator bay (see scan 9 of Fig. 10) is it able to begin to refine its x position.

TABLE I
A SEQUENCE OF CORRECTIONS ESTIMATED BY THE MATCHER

Correction | x (inches) | y (inches) | θ (degrees)
     1     |    0.0     |    2.8     |     0.7
     2     |    0.0     |    0.1     |    -0.5
     3     |    0.0     |   -0.1     |     1.0
     4     |    0.0     |    0.1     |    -0.8
     5     |    0.0     |    0.3     |     0.5
     6     |    0.0     |   -0.1     |    -0.2
     7     |    0.0     |   -0.3     |     0.3
     8     |    0.0     |   -0.1     |    -0.6
     9     |    1.2     |    0.1     |    -0.5
    10     |    6.7     |   -0.1     |     0.3
    11     |   -3.0     |    0.1     |    -0.2

Large collections of range points not associated with the model usually indicate people walking by. In addition, maintenance and cleaning personnel may occasionally leave unmodeled objects in the environment. However, almost all these data points are rejected as spurious because their distance to a modeled line segment is not within preset thresholds.

The relative precision of the odometry and matching estimates depends strongly on the particular data and on the frequency of matching. For a vehicle moving at 1 in/s with matching occurring every 10 s, the odometric standard deviation is approximately 1 in. Within the corridor, the matcher estimates the position of the vehicle perpendicular to the corridor with a standard deviation of approximately 0.1 in. Of course, the standard deviation along the corridor is infinite. For less extreme cases, such as the elevator bay (scans 9-11 of Fig. 10), the standard deviation in the x direction averaged 0.6 in.

The matcher's estimates of the standard deviations in (x, y, θ) assume Gaussian noise with zero mean. However, the prototype rangefinder appears to have systematic errors: range values have been seen to depend on the strength of the received signal. For example, very bright objects appear further away. We believe that the large corrections in the x direction at scans 9 through 11 are due in part to this problem. An improved rangefinder is currently under test that we hope will solve this problem. Other potential sources of systematic error include errors in the map and errors in the alignment of the rangefinder relative to the vehicle. A misalignment of only 1° would cause an almost 2-in error in the y direction as the vehicle traveled 9 ft along the x direction.

³Remember that doorways are not modeled. Locally, the environment appears as two parallel lines.

V. SOFTWARE IMPLEMENTATION NOTES

Blanche has been primarily developed for use as an autonomous robot operating in a structured office or factory environment. It has also served as a testbed for experiments in real-time programming environments. In particular, we have used the C++ programming language [25] exclusively. In a previous paper [5], we showed how the constructor mechanism associated with C++ classes can be used to guarantee the initialization and self-test of each subsystem of the robot. A complementary destructor mechanism can be used to guarantee safe termination of subsystems under most conditions. These mechanisms are completely transparent to both the user and his application program. Exception handling in robotics is discussed in [6], where it is shown how object-oriented programming facilities can be used to provide transparent recovery from subsystem failures during program execution, given some hardware redundancy. Most of the examples were demonstrated and tested using C++ running on Blanche.

I remain enthusiastic about the advantages of C++ for real-time robotics applications. In particular, data abstraction in the form of classes allows modular code to be written that often has a one-to-one correspondence with underlying physical subsystems. The object-oriented programming facilities have also been exploited at the trajectory generation level, where arc, line, and any other segment type are derived from a common base class; a sketch of this construction appears below. The low-level controller is unaware of the type of segment it is following, and this construction should allow for easy extensibility. Of course, C++, like most languages, requires discipline on behalf of the user: I am certainly guilty of some dreadful fragments of C++ code, but it has also saved me from many run-time bugs.
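The forward-referenced sketch of that construction: a common base class from which line and arc segments derive, so the trajectory generator and low-level controller see only the base interface. Class names and the particular geometry parameterizations are illustrative, not Blanche's actual source.

```cpp
#include <cmath>
#include <memory>
#include <vector>

struct RefState { double x, y, theta, v; };  // reference vector for the controller

namespace {
constexpr double kHalfPi = 1.57079632679489662;
}

// Common base class: the trajectory generator and the low-level
// controller see only this interface, never the concrete segment type.
class PathSegment {
public:
    virtual ~PathSegment() = default;
    virtual double length() const = 0;
    virtual RefState referenceAt(double s) const = 0;  // state at arc length s
};

// Straight-line segment from (x0, y0) to (x1, y1) at speed v.
class LinePath : public PathSegment {
public:
    LinePath(double x0, double y0, double x1, double y1, double v)
        : x0_(x0), y0_(y0), x1_(x1), y1_(y1), v_(v) {}
    double length() const override { return std::hypot(x1_ - x0_, y1_ - y0_); }
    RefState referenceAt(double s) const override {
        double f = s / length();
        return {x0_ + f * (x1_ - x0_), y0_ + f * (y1_ - y0_),
                std::atan2(y1_ - y0_, x1_ - x0_), v_};
    }
private:
    double x0_, y0_, x1_, y1_, v_;
};

// Circular arc: center (cx, cy), radius r, start angle a0, signed sweep phi.
class ArcPath : public PathSegment {
public:
    ArcPath(double cx, double cy, double r, double a0, double phi, double v)
        : cx_(cx), cy_(cy), r_(r), a0_(a0), phi_(phi), v_(v) {}
    double length() const override { return std::fabs(phi_) * r_; }
    RefState referenceAt(double s) const override {
        double a = a0_ + (phi_ >= 0 ? 1.0 : -1.0) * s / r_;   // angle after s
        double dir = a + (phi_ >= 0 ? kHalfPi : -kHalfPi);    // tangent heading
        return {cx_ + r_ * std::cos(a), cy_ + r_ * std::sin(a), dir, v_};
    }
private:
    double cx_, cy_, r_, a0_, phi_, v_;
};

// A path plan is simply a sequence of segments; a new segment type
// (e.g., a polar spline) needs no changes to the low-level controller.
using PathPlan = std::vector<std::unique_ptr<PathSegment>>;
```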

VI. CONCLUSIONS

Trajectory generation and control procedures for enabling a robot cart to accurately and autonomously navigate around a structured environment, given a set of path data from an off-line path planner, have been described. These procedures have been implemented in an experimental robot cart. The on-board trajectory generator has a number of advantages: 1) it greatly reduces the amount of data needed from the path planner, so even a very low data rate, intermittent link to the cart is sufficient; 2) it provides a convenient separation between the path guidance and the cart control logic; and 3) it supports fail-safe operation if the vehicle's reference and measured states are continuously monitored.

The experience gained so far in this study indicates that the basic approach described above for guidance and control of a robot cart is sound. The separation of trajectory generation from the low-level cart controller provides a robust method for maintaining accurate tracking of any feasible path, once the controller unit has been properly adjusted to stabilize the closed-loop dynamics of the path control loop. We are still investigating the functional dependence of the control weights $C_1$ and $C_2$ in (2) on speed. However, the current solution of scaling by the inverse of $v^2$ and $v$, respectively (where $v$ is the speed of the vehicle), appears to be satisfactory.

The cart guidance system assumes accurate knowledge of the vehicle's position. Although passive or active beacons can provide a very accurate and cost-effective solution to position


estimation, the consequent need to modify the environment was considered undesirable, and beacons were therefore not used. Instead, the position estimation system consists of 1) an a priori map of its environment, 2) a combination of odometry and optical range sensing, 3) an algorithm for matching the sensory data to the map, and 4) an algorithm to estimate the precision of the corresponding match/correction.

A 2D representation based on collections of line segments in the plane was chosen because 1) much of the robot's environment is uniform in the vertical direction (there is not much to be gained from a 3D representation), 2) the range sensor currently provides only 2D information $(r, \theta)$, 3) matching sensor data to 2D maps is significantly simpler than matching to 3D maps, and 4) pragmatically, an efficient and robust matching algorithm that used line segments had been developed.

The matching algorithm assumes that the displacement between image and model is small. This is a very reasonable assumption, since odometry provides the vehicle with a good estimate of position and matching occurs very frequently. Although the matching algorithm may be criticized for not being general, it does not have to be. Moreover, generality has been sacrificed for two very important characteristics: robustness and speed. Our experience is that while robustness is very desirable, it is very hard to achieve. The algorithm is intrinsically robust against incompleteness of the image, i.e., missing data points. Spurious data, e.g., people walking by, and incompleteness of the model are dealt with by deleting from consideration any points whose distance to their target segments exceeds some limit. Note that, for the general matching problem, in which the assumption of small displacement is not necessarily valid, such an approach would not be possible.

The vehicle and associated algorithms have all been implemented and tested within a structured office environment. The entire autonomous vehicle is self-contained, all processing being performed on board. We believe this vehicle is significant not just because of the sensing and algorithms described, but also because its implementation represents a very high level of performance at very low cost. There also appears to be a self-sustaining property to this configuration: accurate knowledge of position allows for fast, robust matching, which leads to accurate knowledge of position.

There are several areas in which the vehicle might be improved. First, the need to provide a map of the environment can become tedious, and it would therefore be desirable to automate this step, e.g., the vehicle might build its own map [17]. Second, the map has so far been assumed to be perfectly accurate, all errors being considered due to sensor noise or errors in the matcher. In practice, the map also has an associated accuracy that should be modeled. Third, the covariance matrix estimated by the matcher assumes there is no correlation between scans of the rangefinder. This is not necessarily true in practice because 1) of errors in the map and 2) sensor noise may be correlated with particular wall surfaces. Future work is directed at addressing these problems.

ACKNOWLEDGMENT

The development of Blanche would not have been possible without the help of many individuals. It is a pleasure to thank J. B. Kruskal and W. L. Nelson, whose collaboration on the matching and control algorithms, respectively, was critical to the success of the project. Thanks also to G. L. Miller and E. R. Wagner for the optical ranger and D. A. Kapilow for the real-time computing environment, especially the debugging tools. Finally, R. A. Boie, W. J. Kropfl, J. E. Shopiro, F. W. Sinden, and G. T. Wilfong all provided help and assistance of one kind or another. Thank you.

REFERENCES

[1] J. C. Alexander and J. H. Maddocks, "On the kinematics of wheeled mobile robots," Int. J. Robotics Res., to be published.
[2] N. Ayache and O. Faugeras, "Building, registrating, and fusing noisy visual maps," in Proc. Int. Conf. Computer Vision (London), 1987, pp. 73-79.
[3] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. New York: Academic, 1988.
[4] I. J. Cox, "Blanche: An autonomous robot vehicle for structured environments," in Proc. IEEE Int. Conf. Robotics Automat., 1988, pp. 978-982.
[5] I. J. Cox, "C++ language support for guaranteed initialization, failsafe termination and error recovery," in Proc. IEEE Int. Conf. Robotics Automat., 1988, pp. 641-643.
[6] I. J. Cox and N. H. Gehani, "Exception handling in robotics," Computer, vol. 22, no. 3, pp. 43-49, 1989.
[7] I. J. Cox and J. B. Kruskal, "On the congruence of noisy images to line segment models," in Proc. Int. Conf. Computer Vision, 1988.
[8] I. J. Cox, J. B. Kruskal, and D. A. Wallach, "Predicting and estimating the performance of a subpixel registration algorithm," unpublished, 1988.
[9] I. J. Cox and G. T. Wilfong, Eds., Autonomous Robot Vehicles. New York: Springer-Verlag, 1990.
[10] A. Elfes, "Sonar-based real-world mapping and navigation," IEEE J. Robotics Automat., vol. RA-3, no. 3, pp. 249-265, 1987.
[11] A. Gelb, Ed., Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974.
[12] G. Giralt, R. Chatila, and M. Vaisset, "An integrated navigation and motion control system for autonomous multisensory mobile robots," in Proc. 1st Int. Symp. Robotics Res. (Bretton Woods, NH), 1983, pp. 191-214.
[13] T. Hongo, H. Arakawa, G. Sugimoto, K. Tange, and Y. Yamamoto, "An automatic guidance system of a self-controlled vehicle," IEEE Trans. Industrial Electron., vol. IE-34, no. 1, pp. 5-10, 1987.
[14] P. J. Huber, Robust Statistics. New York: Wiley, 1981.
[15] Y. Kanayama and B. I. Hartman, "Smooth local path planning for autonomous vehicles," in Proc. IEEE Int. Conf. Robotics Automat., 1989, pp. 1265-1270.
[16] D. A. Kapilow, "Real-time programming in a UNIX environment," in Proc. Symp. Factory Automat. Robotics, 1985, pp. 28-29.
[17] J. Leonard, H. Durrant-Whyte, and I. J. Cox, "Dynamic map building for an autonomous mobile robot," in Proc. IEEE Int. Workshop Intell. Robots Syst., 1990, pp. 89-96.
[18] L. Matthies and S. A. Shafer, "Error modeling in stereo navigation," IEEE J. Robotics Automat., vol. RA-3, no. 3, pp. 239-248, 1987.
[19] P. S. Maybeck, Stochastic Models, Estimation, and Control, vol. 1. New York: Academic, 1979.
[20] G. L. Miller and E. R. Wagner, "An optical rangefinder for autonomous robot cart navigation," in Proc. SPIE Mobile Robots II, vol. 852, 1987, pp. 132-144.
[21] W. L. Nelson, "Continuous steering-function control of robot carts," IEEE Trans. Industrial Electron., vol. 36, no. 3, pp. 330-337, 1989.
[22] W. L. Nelson, "Continuous-curvature paths for autonomous vehicles," in Proc. IEEE Int. Conf. Robotics Automat., 1989, pp. 1260-1264.
[23] W. L. Nelson and I. J. Cox, "Local path control of an autonomous vehicle," in Proc. IEEE Int. Conf. Robotics Automat., 1988, pp. 1504-1510.
[24] F. P. Preparata and M. I. Shamos, Computational Geometry: An Introduction. New York: Springer-Verlag, 1985.
[25] B. Stroustrup, The C++ Programming Language. Reading, MA: Addison-Wesley, 1986.
[26] D. J. Torrieri, "Statistical theory of passive location systems," IEEE Trans. Aerospace Electron. Syst., vol. AES-20, no. 2, pp. 183-198, 1984.
[27] T. Tsumura, "Survey of automated guided vehicle in Japanese factory," in Proc. IEEE Int. Conf. Robotics Automat., 1986, pp. 1329-1334.
[28] T. Tsumura, N. Fujiwara, and M. Hashimoto, "An experimental system for self-contained position and heading measurement of ground vehicle," in Proc. Int. Conf. Advanced Robotics, 1983, pp. 269-276.
[29] S. Yuta et al., "An implementation of michi, a locomotion command system for intelligent mobile robots," in Proc. 15th ICAR, 1985, pp. 127-134.
[30] C. M. Wang, "Location estimation and uncertainty analysis for mobile robots," in Proc. IEEE Int. Conf. Robotics Automat., vol. 2, 1988, pp. 1230-1235.
[31] G. T. Wilfong, "Motion planning for an autonomous vehicle," in Proc. IEEE Conf. Robotics Automat., 1988, pp. 529-533.

Ingemar J. Cox (S'79-M'83) received the B.A. degree in electrical engineering and computer science from University College, London, in 1980 and the D.Phil. degree in engineering science from the University of Oxford in 1983. He was a Principal Investigator with AT&T Bell Laboratories, Murray Hill, NJ, from 1984 to 1989. He is currently a Senior Research Scientist with the NEC Research Institute, Princeton, NJ. His research interests are broadly in the areas of autonomous robot vehicles, computer vision, and real-time programming environments. He coedited (with G. T. Wilfong) the book Autonomous Robot Vehicles (Springer-Verlag). Dr. Cox is a member of the ACM.
