
A Tutorial on Visual Servo Control

Seth Hutchinson
Department of Electrical and Computer Engineering
The Beckman Institute for Advanced Science and Technology
University of Illinois at Urbana-Champaign
405 N. Mathews Avenue, Urbana, IL 61801
Email: [email protected]

Greg Hager
Department of Computer Science
Yale University
New Haven, CT 06520-8285
Phone: 203 432-6432
Email: [email protected]

Peter Corke
CSIRO Division of Manufacturing Technology
P.O. Box 883, Kenmore, Australia, 4069
[email protected]

May 14, 1996

Abstract
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

1 Introduction
Today there are over 800,000 robots in the world, mostly working in factory environments. This population continues to grow, but robots are excluded from many application areas where the work environment and object placement cannot be accurately controlled. This limitation is due to the inherent lack of sensory capability in contemporary commercial robot systems. It has long been recognized that sensor integration is fundamental to increasing the versatility and application domain of robots, but to date this has not proven cost effective for the bulk of robotic applications, which are in manufacturing. The 'new frontier' of robotics, which is operation in the everyday world, provides new impetus for this research. Unlike the manufacturing application, it will not be cost effective to re-engineer 'our world' to suit the robot.

Vision is a useful robotic sensor since it mimics the human sense of vision and allows for noncontact measurement of the environment. Since the seminal work of Shirai and Inoue [1] (who describe how a visual feedback loop can be used to correct the position of a robot to increase task accuracy), considerable effort has been devoted to the visual control of robot manipulators. Robot controllers with fully integrated vision systems are now available from a number of vendors. Typically visual sensing and manipulation are combined in an open-loop fashion, 'looking' then 'moving'. The accuracy of the resulting operation depends directly on the accuracy of the visual sensor and the robot end-effector. An alternative to increasing the accuracy of these subsystems is to use a visual-feedback control loop, which will increase the overall accuracy of the system, a principal concern in any application.

Taken to the extreme, machine vision can provide closed-loop position control for a robot end-effector; this is referred to as visual servoing. This term appears to have been first introduced by Hill and Park [2] in 1979 to distinguish their approach from earlier 'blocks world' experiments where the system alternated between picture taking and moving. Prior to the introduction of this term, the less specific term visual feedback was generally used. For the purposes of this article, the task in visual servoing is to use visual information to control the pose of the robot's end-effector relative to a target object or a set of target features.

Since the first visual servoing systems were reported in the early 1980s, progress in visual control of robots has been fairly slow, but the last few years have seen a marked increase in published research. This has been fueled by personal computing power crossing the threshold which allows analysis of scenes at a sufficient rate to 'servo' a robot manipulator. Prior to this, researchers required specialized and expensive pipelined pixel processing hardware. Applications that have been proposed or prototyped span manufacturing (grasping objects on conveyor belts and part mating), teleoperation, missile tracking cameras and fruit picking, as well as robotic ping-pong, juggling, balancing, car steering and even aircraft landing. A comprehensive review of the literature in this field, as well as the history and applications reported to date, is given by Corke [3] and includes a large bibliography.

Visual servoing is the fusion of results from many elemental areas including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different to the often described use of vision in hierarchical task-level robot control systems.

Many of the control and vision problems are similar to those encountered by active vision researchers who are building 'robotic heads'. However, the task in visual servoing is to control a robot to manipulate its environment using vision, as opposed to passively or actively observing it.

Given the current interest in this topic it seems both appropriate and timely to provide a tutorial introduction. We hope that this tutorial will assist researchers by providing a consistent terminology and nomenclature, and assist others in creating visually servoed systems and gaining an appreciation of possible applications. The growing literature contains solutions and promising approaches to many theoretical and technical problems involved. We have attempted here to present the most significant results in a consistent way in order to present a comprehensive view of the area. Another difficulty we faced was that the topic spans many disciplines. Some issues that arise, such as the control problem, which is fundamentally nonlinear and for which there is no complete established theory, and visual recognition, tracking, and reconstruction, which are fields unto themselves, cannot be adequately addressed in a single article. We have thus concentrated on certain fundamental aspects of the topic, and a large bibliography is provided to assist the reader who seeks greater detail than can be provided here. Our preference is always to present those ideas and techniques which have been found to function well in practice in situations where high control and/or vision performance is not required, and which appear to have some generic applicability. In particular we will describe techniques which can be implemented using a minimal amount of vision hardware, and which make few assumptions about the robotic hardware.

The remainder of this article is structured as follows. Section 2 establishes a consistent nomenclature and reviews the relevant fundamentals of coordinate transformations, pose representation, and image formation. In Section 3, we present a taxonomy of visual servo control systems (adapted from [4]). The two major classes of systems, position-based visual servo systems and image-based visual servo systems, are discussed in Sections 4 and 5 respectively. Since any visual servo system must be capable of tracking image features in a sequence of images, Section 6 describes some approaches to visual tracking that have found wide applicability and can be implemented using a minimum of special-purpose hardware. Finally, Section 7 presents a number of observations about the current directions of the research field of visual servo control.

2 Background and Definitions


Visual servo control research requires some degree of expertise in several areas, particularly robotics and computer vision. Therefore, in this section we provide a very brief overview of these subjects, as relevant to visual servo control. We begin by defining the terminology and notation required to represent coordinate transformations and the velocity of a rigid object moving through the workspace (Sections 2.1 and 2.2). Following this, we briefly discuss several issues related to image formation, including the image formation process (Sections 2.3 and 2.4), and possible camera/robot configurations (Section 2.5). The reader who is familiar with these topics may wish to proceed directly to Section 3.

2.1 Coordinate Transformations


In this paper, the task space of the robot, represented by $\mathcal{T}$, is the set of positions and orientations that the robot tool can attain. Since the task space is merely the configuration space of the robot tool, the task space is a smooth m-manifold (see, e.g., [5]). If the tool is a single rigid body moving arbitrarily in a three-dimensional workspace, then $\mathcal{T} = SE^3 = \Re^3 \times SO^3$, and m = 6. In some applications, the task space may be restricted to a subspace of $SE^3$. For example, for pick and place, we may consider pure translations ($\mathcal{T} = \Re^3$, for which m = 3), while for tracking an object and keeping it in view we might consider only rotations ($\mathcal{T} = SO^3$, for which m = 3).

Typically, robotic tasks are specified with respect to one or more coordinate frames. For example, a camera may supply information about the location of an object with respect to a camera frame, while the configuration used to grasp the object may be specified with respect to a coordinate frame at the end-effector of the manipulator. We represent the coordinates of a point P with respect to coordinate frame x by the notation ${}^xP$. Given two frames, x and y, the rotation matrix that represents the orientation of frame y with respect to frame x is denoted by ${}^xR_y$. The location of the origin of frame y with respect to frame x is denoted by the vector ${}^xt_y$. Together, the position and orientation of a frame are referred to as its pose, which we denote by the pair ${}^x x_y = ({}^xR_y, {}^xt_y)$. If x is not specified, the world coordinate frame is assumed. If we are given ${}^yP$ (the coordinates of point P relative to frame y), and ${}^x x_y = ({}^xR_y, {}^xt_y)$, we can obtain the coordinates of P with respect to frame x by the coordinate transformation
$${}^xP = {}^xR_y\,{}^yP + {}^xt_y \qquad (1)$$
$$\phantom{{}^xP} = {}^x x_y \bullet {}^yP. \qquad (2)$$
Often, we must compose multiple poses to obtain the desired coordinates. For example, suppose that we are given poses ${}^x x_y$ and ${}^y x_z$. If we are given ${}^zP$ and wish to compute ${}^xP$, we may use the composition of transformations
$${}^xP = {}^x x_y \bullet {}^yP \qquad (3)$$
$$\phantom{{}^xP} = {}^x x_y \bullet {}^y x_z \bullet {}^zP \qquad (4)$$
$$\phantom{{}^xP} = {}^x x_z \bullet {}^zP \qquad (5)$$

where

$${}^x x_z = ({}^xR_y\,{}^yR_z,\ {}^xR_y\,{}^yt_z + {}^xt_y). \qquad (6)$$
Thus, we will represent the composition of two poses by ${}^x x_z = {}^x x_y \bullet {}^y x_z$. We note that the operator $\bullet$ is used to represent both the coordinate transformation of a single point and the composition of two coordinate transformations. The particular meaning should always be clear from the context.

In much of the robotics literature, poses are represented by homogeneous transformation matrices, which are of the form

$${}^xT_y = \begin{bmatrix} {}^xR_y & {}^xt_y \\ 0 & 1 \end{bmatrix}. \qquad (7)$$
To simplify notation throughout the paper, we will represent poses and coordinate transformations as defined in (1). Some coordinate frames that will be needed frequently are referred to by the following superscripts/subscripts:

e: the coordinate frame attached to the robot end-effector
0: the base frame for the robot
c: the camera coordinate frame
When $\mathcal{T} = SE^3$, we will use the notation $x_e \in \mathcal{T}$ to represent the pose of the end-effector coordinate frame relative to the world frame. In this case, we often prefer to parameterize a pose using a translation vector and three angles (e.g., roll, pitch and yaw [6]). Although such parameterizations are inherently local, it is often convenient to represent a pose by a vector $r \in \Re^6$, rather than by $x_e \in \mathcal{T}$. This notation can easily be adapted to the case where $\mathcal{T} \subset SE^3$. For example, when $\mathcal{T} = \Re^3$, we will parameterize the task space by $r = [x, y, z]^T$. In the sequel, to maintain generality we will assume that $r \in \Re^m$, unless we are considering a specific task.

2.2 The Velocity of a Rigid Object


In visual servo applications, we are often interested in the relationship between the velocity of some object in the workspace (e.g., the manipulator end-effector) and the corresponding changes that occur in the observed image of the workspace. In this section, we briefly introduce notation to represent velocities of objects in the workspace.

Consider the robot end-effector moving in a workspace with $\mathcal{T} \subseteq SE^3$. In base coordinates, the motion is described by an angular velocity $\Omega(t) = [\omega_x(t), \omega_y(t), \omega_z(t)]^T$ and a translational velocity $T(t) = [T_x(t), T_y(t), T_z(t)]^T$. The rotation acts about a point which, unless otherwise indicated, we take to be the origin of the base coordinate system. Let P be a point that is rigidly attached to the end-effector, with base frame coordinates $[x, y, z]^T$. The derivatives of the coordinates of P with respect to base coordinates are given by

$$\dot{x} = z\omega_y - y\omega_z + T_x \qquad (8)$$
$$\dot{y} = x\omega_z - z\omega_x + T_y \qquad (9)$$
$$\dot{z} = y\omega_x - x\omega_y + T_z \qquad (10)$$

which can be written in vector notation as

$$\dot{P} = \Omega \times P + T. \qquad (11)$$

This can be written concisely in matrix form by noting that the cross product can be represented in terms of the skew-symmetric matrix

$$\mathrm{sk}(P) = \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix}$$

allowing us to write

$$\dot{P} = -\mathrm{sk}(P)\,\Omega + T. \qquad (12)$$

Together, T and $\Omega$ define what is known in the robotics literature as a velocity screw

$$\dot{r} = \begin{bmatrix} T_x \\ T_y \\ T_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}.$$
Note that $\dot{r}$ also represents the derivative of r when the angle parameterization is chosen to be the set of rotations about the coordinate axes (recall that r is a parameterization of $x_e$). Define the $3 \times 6$ matrix $A(P) = [\,I_3 \mid -\mathrm{sk}(P)\,]$, where $I_3$ represents the $3 \times 3$ identity matrix. Then (12) can be rewritten in matrix form as

$$\dot{P} = A(P)\,\dot{r}. \qquad (13)$$

Suppose now that we are given a point expressed in end-effector coordinates, ${}^eP$. Combining (1) and (13), we have

$$\dot{P} = A(x_e \bullet {}^eP)\,\dot{r}. \qquad (14)$$

Occasionally, it is useful to transform velocity screws among coordinate frames. For example, suppose that ${}^e\dot{r} = [{}^eT,\ {}^e\Omega]$ is the velocity of the end-effector in end-effector coordinates. Then the equivalent screw in base coordinates is

$$\dot{r} = \begin{bmatrix} T \\ \Omega \end{bmatrix} = \begin{bmatrix} R_e\,{}^eT - R_e\,{}^e\Omega \times t_e \\ R_e\,{}^e\Omega \end{bmatrix}.$$
2.3 Camera Projection Models


To control the robot using information provided by a computer vision system, it is necessary to understand the geometric aspects of the imaging process. Each camera contains a lens that forms a 2D projection of the scene on the image plane where the sensor is located. This projection causes direct depth information to be lost, so that each point on the image plane corresponds to a ray in 3D space. Therefore, some additional information is needed to determine the 3D coordinates corresponding to an image plane point.

Figure 1: The coordinate frame for the camera/lens system. A point P = (x, y, z) projects through the viewpoint onto the image plane at coordinates (u, v).
This information may come from multiple cameras, multiple views with a single camera, or knowledge of the geometric relationship between several feature points on the target. In this section, we describe three projection models that have been widely used to model the image formation process: perspective projection, scaled orthographic projection, and affine projection. Although we briefly describe each of these projection models, throughout the remainder of the tutorial we will assume the use of perspective projection. For each of the three projection models, we assign the camera coordinate system with the x- and y-axes forming a basis for the image plane, the z-axis perpendicular to the image plane (along the optic axis), and with origin located at distance $\lambda$ behind the image plane, where $\lambda$ is the focal length of the camera lens. This is illustrated in Figure 1.

Perspective Projection. Assuming that the projective geometry of the camera is modeled by perspective projection (see, e.g., [7]), a point ${}^cP = [x, y, z]^T$, whose coordinates are expressed with respect to the camera coordinate frame, will project onto the image plane with coordinates $p = [u, v]^T$, given by

$$\begin{bmatrix} u \\ v \end{bmatrix} = \frac{\lambda}{z}\begin{bmatrix} x \\ y \end{bmatrix}. \qquad (15)$$

If the coordinates of P are expressed relative to coordinate frame x, we must first perform the coordinate transformation ${}^cP = {}^c x_x \bullet {}^xP$.

Scaled orthographic projection. Perspective projection is a nonlinear mapping from Cartesian to image coordinates. In many cases, it is possible to approximate this mapping by the linear scaled orthographic projection. Under this model, image coordinates for a point ${}^cP$ are given by

$$\begin{bmatrix} u \\ v \end{bmatrix} = s\begin{bmatrix} x \\ y \end{bmatrix} \qquad (16)$$

where s is a fixed scale factor. Orthographic projection models are valid for scenes where the relative depth of the points in the scene is small compared to the distance from the camera to the scene, for example, an airplane flying over the earth, or a camera with a long focal length lens placed several meters from the workspace.

Affine projection. Another linear approximation to perspective projection is known as affine projection. In this case, the image coordinates for the projection of a point ${}^cP$ are given by

$$\begin{bmatrix} u \\ v \end{bmatrix} = A\,{}^cP + c \qquad (17)$$

where A is an arbitrary $2 \times 3$ matrix and c is an arbitrary 2-vector. Note that orthographic projection is a special case of affine projection. Affine projection does not correspond to any specific imaging situation. Its primary advantage is that it is an unconstrained linear imaging model. As a result, given a set of corresponding pairs $\{({}^cP_i, [u_i, v_i]^T)\}$, A and c are easily computed using linear regression techniques. Hence, the calibration problem is greatly simplified for this model.
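To illustrate the three models, the sketch below (Python/NumPy; not from the tutorial, and the function names are ours) evaluates the perspective and scaled orthographic projections and fits the affine parameters A and c to corresponding pairs by linear least squares, as suggested above.

import numpy as np

def perspective(P_c, lam):
    # Perspective projection (15): [u, v] = (lam / z) [x, y].
    x, y, z = P_c
    return (lam / z) * np.array([x, y])

def scaled_orthographic(P_c, s):
    # Scaled orthographic projection (16): [u, v] = s [x, y].
    return s * np.asarray(P_c[:2])

def fit_affine(P_c, uv):
    # Fit the affine model (17), uv_i ~ A P_i + c, by linear least squares.
    # P_c: (n, 3) camera-frame points; uv: (n, 2) measured image coordinates.
    X = np.hstack([P_c, np.ones((len(P_c), 1))])
    W, _, _, _ = np.linalg.lstsq(X, uv, rcond=None)   # (4, 2) stacked [A^T; c^T]
    return W[:3].T, W[3]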

2.4 Image Features and the Image Feature Parameter Space


In the computer vision literature, an image feature is any structural feature that can be extracted from an image (e.g., an edge or a corner). Typically, an image feature will correspond to the projection of a physical feature of some object (e.g., the robot tool) on the camera image plane. We define an image feature parameter to be any real-valued quantity that can be calculated from one or more image features. Examples include moments, relationships between regions or vertices, and polygon face areas. Jang [8] provides a formal definition of what we term feature parameters as image functionals. Most commonly the coordinates of a feature point or a region centroid are used. A good feature point is one that can be located unambiguously in different views of the scene, such as a hole in a gasket [9] or a contrived pattern [10, 11].

Image feature parameters that have been used for visual servo control include the image plane coordinates of points in the image [10, 12-17], the distance between two points in the image plane and the orientation of the line connecting those two points [9, 18], perceived edge length [19], the area of a projected surface and the relative areas of two projected surfaces [19], the centroid and higher order moments of a projected surface [19-22], the parameters of lines in the image plane [10], and the parameters of an ellipse in the image plane [10]. In this tutorial we will restrict our attention to point features whose parameters are their image plane coordinates.

In order to perform visual servo control, we must select a set of image feature parameters. Once we have chosen a set of k image feature parameters, we can define an image feature parameter vector $f = [f_1, \ldots, f_k]^T$. Since each $f_i$ is a (possibly bounded) real valued parameter, we have $f = [f_1, \ldots, f_k]^T \in \mathcal{F} \subseteq \Re^k$, where $\mathcal{F}$ represents the image feature parameter space.

The mapping from the position and orientation of the end-effector to the corresponding image feature parameters can be computed using the projective geometry of the camera.

We will denote this mapping by F, where

$$F : \mathcal{T} \rightarrow \mathcal{F}. \qquad (18)$$

For example, if $\mathcal{F} \subseteq \Re^2$ is the space of (u, v) image plane coordinates for the projection of some point P onto the image plane, then, assuming perspective projection, $F = [u, v]^T$, where u and v are given by (15). The exact form of (18) will depend in part on the relative configuration of the camera and end-effector, as discussed in the next section.

Figure 2: Relevant coordinate frames: world, end-effector, camera and target.

2.5 Camera Configuration


Visual servo systems typically use one of two camera configurations: end-effector mounted, or fixed in the workspace. The first, often called an eye-in-hand configuration, has the camera mounted on the robot's end-effector. Here, there exists a known, often constant, relationship between the pose of the camera(s) and the pose of the end-effector. We represent this relationship by the pose ${}^e x_c$. The pose of the target¹ relative to the camera frame is represented by ${}^c x_t$. The relationship between these poses is shown in Figure 2.

The second configuration has the camera(s) fixed in the workspace. In this case, the camera(s) are related to the base coordinate system of the robot by ${}^0 x_c$ and to the object by ${}^c x_t$. In this case, the camera image of the target is, of course, independent of the robot motion (unless the target is the end-effector itself). A variant of this is for the camera to be agile, mounted on another robot or pan/tilt head in order to observe the visually controlled robot from the best vantage [23].

For either choice of camera configuration, prior to the execution of visual servo tasks, camera calibration must be performed. For the eye-in-hand case, this amounts to determining ${}^e x_c$. For the fixed camera case, calibration is used to determine ${}^0 x_c$. Calibration is a long standing research issue in the computer vision community (good solutions to the calibration problem can be found in a number of references, e.g., [24-26]).
¹ The word target will be used to refer to the object of interest, that is, the object that will be tracked.

Figure 3: Dynamic position-based look-and-move structure.


Figure 4: Dynamic image-based look-and-move structure.

3 Servoing Architectures
In 1980, Sanderson and Weiss [4] introduced a taxonomy of visual servo systems, into which all subsequent visual servo systems can be categorized. Their scheme essentially poses two questions:

1. Is the control structure hierarchical, with the vision system providing set-points as input to the robot's joint-level controller, or does the visual controller directly compute the joint-level inputs?

2. Is the error signal defined in 3D (task space) coordinates, or directly in terms of image features?

The resulting taxonomy, thus, has four major categories, which we now describe. These fundamental structures are shown schematically in Figures 3 to 6.

If the control architecture is hierarchical and uses the vision system to provide set-point inputs to the joint-level controller, thus making use of joint feedback to internally stabilize the robot, it is referred to as a dynamic look-and-move system. In contrast, direct visual servo² eliminates
the robot controller entirely, replacing it with a visual servo controller that directly computes joint inputs, thus using vision alone to stabilize the mechanism.

For several reasons, nearly all implemented systems adopt the dynamic look-and-move approach. First, the relatively low sampling rates available from vision make direct control of a robot end-effector with complex, nonlinear dynamics an extremely challenging control problem. Using internal feedback with a high sampling rate generally presents the visual controller with idealized axis dynamics [27]. Second, many robots already have an interface for accepting Cartesian velocity or incremental position commands. This simplifies the construction of the visual servo system, and also makes the methods more portable. Thirdly, look-and-move separates the kinematic singularities of the mechanism from the visual controller, allowing the robot to be considered as an ideal Cartesian motion device. Since many resolved rate [28] controllers have specialized mechanisms for dealing with kinematic singularities [29], the system design is again greatly simplified. In this article, we will utilize the look-and-move model exclusively.

The second major classification of systems distinguishes position-based control from image-based control. In position-based control, features are extracted from the image and used in conjunction with a geometric model of the target and the known camera model to estimate the pose of the target with respect to the camera. Feedback is computed by reducing errors in estimated pose space. In image-based servoing, control values are computed on the basis of image features directly. The image-based approach may reduce computational delay, eliminate the necessity for image interpretation, and eliminate errors due to sensor modeling and camera calibration. However, it does present a significant challenge to controller design since the plant is nonlinear and highly coupled.

In addition to these considerations, we distinguish between systems which only observe the target object and those which observe both the target object and the robot end-effector. The former are referred to as endpoint open-loop (EOL) systems, and the latter as endpoint closed-loop (ECL) systems. The primary difference is that an EOL system must rely on an explicit hand-eye calibration when translating a task specification into a visual servoing algorithm. Hence, the positioning accuracy of EOL systems depends directly on the accuracy of the hand-eye calibration. Conversely, systems that observe the end-effector as well as target features can perform with accuracy that is independent of hand-eye calibration error [30-32]. Note also that ECL systems can easily deal with tasks that involve the positioning of objects within the end-effector, whereas EOL systems must use an inferred object location. From a theoretical perspective, it would appear that ECL systems would always be preferable to EOL systems. However, since ECL systems must track the end-effector as well as the target object, the implementation of an ECL controller often requires solution of a more demanding vision problem.

² Sanderson and Weiss used the term "visual servo" for this type of system, but since then this term has come to be accepted as a generic description for any type of visual control of a robotic system. Here we use the term "direct visual servo" to avoid confusion.

Figure 5: Position-based visual servo (PBVS) structure as per Weiss.


Figure 6: Image-based visual servo (IBVS) structure as per Weiss.

4 Position-Based Visual Servo Control


We begin our discussion of visual servoing methods with position-based visual servoing. As described in the previous section, in position-based visual servoing features are extracted from the image and used to estimate the pose of the target with respect to the camera. Using these values, an error between the current and the desired pose of the robot is defined in the task space. In this way, position-based control neatly separates the control issues, namely the computation of the feedback signal, from the estimation problems involved in computing position or pose from visual data.

We now formalize the notion of a positioning task as follows:

Definition 4.1

A positioning task is represented by a function $E : \mathcal{T} \rightarrow \Re^m$. This function is referred to as the kinematic error function. A positioning task is fulfilled with the end-effector in pose $x_e$ if $E(x_e) = 0$.

If we consider a general pose $x_e$ for which the task is fulfilled, the error function will constrain some number, $d \leq m$, degrees of freedom of the manipulator. The value d will be referred to as the degree of the constraint. As noted by Espiau et al. [10, 33], the kinematic error function can be thought of as representing a virtual kinematic constraint between the end-effector and the target.

Once a suitable kinematic error function has been defined and the parameters of the function are instantiated from visual data, a regulator is defined which reduces the estimated value of the kinematic error function to zero. This regulator produces at every time instant a desired end-effector velocity screw $u \in \Re^6$ which is sent to the robot control subsystem. For the purposes of this section, we use simple proportional control methods for linear and linearized systems to compute u [34]. These methods are illustrated below, and are discussed in more detail in Section 5.

We now present examples of positioning tasks for end-effector and fixed cameras in both ECL and EOL configurations. In Section 4.1, several examples of positioning tasks based on directly observable features are presented. Following that, Section 4.2 describes positioning tasks based on target pose estimates. Finally, in Section 4.3, we briefly describe how point position and object pose can be computed using visual information.

4.1 Feature-Based Motions


We begin by considering a positioning task in which some point on the robot end-effector, ${}^eP$, is to be brought to a fixed stationing point, S, visible in the scene. We refer to this as point-to-point positioning. In the case where the camera is fixed, the kinematic error function may be defined in base coordinates as

$$E_{pp}(x_e;\, S, {}^eP) = x_e \bullet {}^eP - S. \qquad (19)$$

Here, as in the sequel, the arguments of the error function after the semicolon denote parameters defining the positioning task. $E_{pp}$ defines a three degree of freedom kinematic constraint on the robot end-effector position. If the robot workspace is restricted to be $\mathcal{T} = \Re^3$, this task can be thought of as a rigid link that fully constrains the pose of the end-effector relative to the target. When $\mathcal{T} \subseteq SE^3$ the constraint defines a virtual spherical joint between the object and the robot end-effector.

Let $\mathcal{T} = \Re^3$. We first consider the case in which one or more cameras calibrated to the robot base frame furnish an estimate, ${}^c\hat{S}$, of the stationing point coordinates with respect to a camera coordinate frame. Using the estimate of the camera pose in base coordinates, $\hat{x}_c$, from offline calibration and (1), we have $\hat{S} = \hat{x}_c \bullet {}^c\hat{S}$. Since $\mathcal{T} = \Re^3$, the control input to be computed is the desired robot translational velocity, which we denote by $u_3$ to distinguish it from the more general end-effector screw. Since (19) is linear in $x_e$, it is well known that in the absence of outside disturbances, the proportional control law

$$u_3 = -k\,E_{pp}(\hat{x}_e;\, \hat{x}_c \bullet {}^c\hat{S}, {}^eP) = -k\left(\hat{x}_e \bullet {}^eP - \hat{x}_c \bullet {}^c\hat{S}\right) \qquad (20)$$

will drive the system to an equilibrium state in which the estimated value of the error function is zero [34]. The value $k > 0$ is a proportional feedback gain. Note that we have written $\hat{x}_e$ in the feedback law to emphasize the fact that this value is also subject to errors.

The expression (20) is equivalent to open-loop positioning of the manipulator based on vision-based estimates of geometry. Errors in $\hat{x}_e$, $\hat{x}_c$ or ${}^c\hat{S}$ (robot kinematics, camera calibration and visual reconstruction, respectively) will lead to positioning errors of the end-effector.

Now, consider the situation when the cameras are mounted on the robot and calibrated to the end-effector. In this case, we can express (19) in end-effector coordinates:

$${}^eE_{pp}(x_e;\, S, {}^eP) = {}^eP - {}^e x_0 \bullet S. \qquad (21)$$

The camera(s) furnish an estimate of the stationing point, ${}^c\hat{S}$, which can be combined with information from the camera calibration and robot kinematics to produce $\hat{S} = \hat{x}_e \bullet {}^e\hat{x}_c \bullet {}^c\hat{S}$. We now compute

$${}^eu_3 = -k\,{}^eE_{pp}(\hat{x}_e;\, \hat{x}_e \bullet {}^e\hat{x}_c \bullet {}^c\hat{S}, {}^eP) = -k\left({}^eP - {}^e\hat{x}_0 \bullet {}^0\hat{x}_e \bullet {}^e\hat{x}_c \bullet {}^c\hat{S}\right) = -k\left({}^eP - {}^e\hat{x}_c \bullet {}^c\hat{S}\right). \qquad (22)$$

Notice that the terms involving $\hat{x}_e$ have dropped out. Thus (22) is not only simpler, but positioning accuracy is also independent of the accuracy of the robot kinematics.

The above formulations presumed an EOL system. For an ECL system we suppose that we can also directly observe ${}^eP$ and estimate its coordinates. In this case, (20) and (22) can be written:

$$u_3 = -k\,E_{pp}(\hat{x}_e;\, \hat{x}_c \bullet {}^c\hat{S}, {}^e\hat{x}_c \bullet {}^c\hat{P}) = -k\,\hat{x}_c \bullet ({}^c\hat{P} - {}^c\hat{S}) \qquad (23)$$
$${}^eu_3 = -k\,{}^eE_{pp}(\hat{x}_e;\, {}^e\hat{x}_c \bullet {}^c\hat{S}, {}^e\hat{x}_c \bullet {}^c\hat{P}) = -k\,{}^e\hat{x}_c \bullet ({}^c\hat{P} - {}^c\hat{S}) \qquad (24)$$

respectively. We now see that $u_3$ (respectively ${}^eu_3$) does not depend on $\hat{x}_e$ and is homogeneous in $\hat{x}_c$ (respectively ${}^e\hat{x}_c$). Hence, if ${}^c\hat{S} = {}^c\hat{P}$ then $u_3 = 0$ independent of errors in the robot kinematics or the camera calibration. This is an important advantage for systems where a precise camera/end-effector relationship is difficult or impossible to determine offline.

Suppose now that $\mathcal{T} \subseteq SE^3$. Now the control input is $u \in \Re^6$, which represents a complete velocity screw. The error function only constrains 3 degrees of freedom, so the problem of computing u from the estimated error is underconstrained. One way of proceeding is as follows. Consider the case of free-standing cameras. Then in base coordinates we know that $\dot{P} = u_3$. Using (13), we can relate this to the end-effector velocity screw as follows:

$$\dot{P} = u_3 = A(P)\,u. \qquad (25)$$

Unfortunately, A is not square and therefore cannot be inverted to solve for u. However, recall that the matrix right inverse for an $m \times n$ matrix M, $n > m$, is defined as $M^+ = M^T(MM^T)^{-1}$. The right inverse computes the minimum norm vector which solves the original system of equations. Hence, we have

$$u = A(P)^+ u_3 \qquad (26)$$

for free-standing cameras. Similar manipulations yield

$${}^eu = A({}^e\hat{S})^+\, {}^eu_3 \qquad (27)$$

for end-effector mounted cameras.
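A minimal sketch of the point-to-point regulator (Python/NumPy, not from the tutorial; the pose and point estimates are assumed to come from calibration and the vision system): it evaluates the proportional law (20) and lifts the translational command u3 to a full screw with the right inverse as in (26).

import numpy as np

def apply_pose(R, t, P):
    # Coordinate transformation (1): x . P = R P + t.
    return R @ P + t

def u3_point_to_point(k, R_e, t_e, e_P, R_c, t_c, c_S):
    # Proportional law (20): u3 = -k (x_e . eP - x_c . cS).
    return -k * (apply_pose(R_e, t_e, e_P) - apply_pose(R_c, t_c, c_S))

def lift_to_screw(P, u3):
    # Right inverse (26): u = A(P)^+ u3, with A(P) = [I | -sk(P)].
    x, y, z = P
    skP = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    Amat = np.hstack([np.eye(3), -skP])
    return Amat.T @ np.linalg.inv(Amat @ Amat.T) @ u3   # minimum-norm solution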

As a second example of feature-based positioning, consider that some point on the end-effector, ${}^eP$, is to be brought to the line joining two fixed points $S_1$ and $S_2$ in the world. Geometrically, the shortest path for performing this task is to move ${}^eP$ toward the line joining $S_1$ and $S_2$ along the perpendicular to the line. The error function describing this trajectory in base coordinates is:

$$E_{pl}(x_e;\, S_1, S_2, {}^eP) = (S_2 - S_1) \times \left((x_e \bullet {}^eP - S_1) \times (S_2 - S_1)\right).$$

Notice that although $E_{pl}$ is a mapping from $\mathcal{T}$ to $\Re^3$, placing a point on a line is a constraint of degree 2. From the geometry of the problem, we see that defining

$$u = -k\,A(\hat{x}_e \bullet {}^eP)^+\, E_{pl}(\hat{x}_e;\, S_1, S_2, {}^eP)$$

is a proportional feedback law for this problem.

Suppose that now we apply this constraint to two points on the end-effector:

$$E_{ppl}(x_e;\, S_1, S_2, {}^eP_1, {}^eP_2) = \begin{bmatrix} E_{pl}(x_e;\, S_1, S_2, {}^eP_1) \\ E_{pl}(x_e;\, S_1, S_2, {}^eP_2) \end{bmatrix}.$$

$E_{ppl}$ defines a four degree of freedom positioning constraint which aligns the points on the end-effector with those in target coordinates. The error function is again overparameterized. Geometrically, it is easy to see that one way of computing feedback is to compute a translation, T, which moves ${}^eP_1$ to the line through $S_1$ and $S_2$. Simultaneously, we can choose $\Omega$ so as to rotate ${}^eP_2$ about ${}^eP_1$ so that the line through ${}^eP_1$ and ${}^eP_2$ becomes parallel to that through $S_1$ and $S_2$. This leads to the proportional feedback law:

$$\Omega = -k_1\,(S_2 - S_1) \times \left[R_e\,({}^eP_2 - {}^eP_1)\right] \qquad (28)$$
$$T = -k_2\,(S_2 - S_1) \times \left((\hat{x}_e \bullet {}^eP_1 - S_1) \times (S_2 - S_1)\right) - \Omega \times (\hat{x}_e \bullet {}^eP_1) \qquad (29)$$

Note that we are still free to choose translations along the line joining $S_1$ and $S_2$ as well as rotations about it. Full six degree-of-freedom positioning can be attained by enforcing another point-to-line constraint using an additional point on the end-effector and an additional point in the world. See [35] for details. These formulations can be adjusted for end-effector mounted cameras and can be implemented as ECL or EOL systems. We leave these modifications as an exercise for the reader.
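The point-to-line feedback (28)-(29) is a direct cross-product computation; the sketch below (Python/NumPy, illustrative names only, not part of the tutorial) assumes the end-effector pose estimate and the two world points are given.

import numpy as np

def point_to_line_screw(k1, k2, R_e, t_e, e_P1, e_P2, S1, S2):
    # Proportional feedback (28)-(29) for the point-to-line task.
    d = S2 - S1                                   # direction of the target line
    P1 = R_e @ e_P1 + t_e                         # x_e . eP1 in base coordinates
    Omega = -k1 * np.cross(d, R_e @ (e_P2 - e_P1))                      # (28)
    T = -k2 * np.cross(d, np.cross(P1 - S1, d)) - np.cross(Omega, P1)   # (29)
    return np.hstack([T, Omega])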

4.2 Pose-Based Tasks


In the previous section, positioning was defined in terms of directly observable features. When working with a priori known objects, it is possible to recover the pose of the object, and to define stationing points in object coordinates. The methods of the previous section can be easily applied when object pose is available. For example, suppose ${}^tS$ is an arbitrary stationing point in a target object's coordinate system, and that we can compute ${}^e\hat{x}_t$ using end-effector mounted cameras. Then using (1) we can compute ${}^e\hat{S} = {}^e\hat{x}_t \bullet {}^tS$. This estimate can be used in any of the end-effector based feedback methods of the previous section in both ECL and EOL configurations. Similar remarks hold for systems utilizing free-standing cameras.

Given object pose, it is possible to directly define manipulator stationing in object coordinates. Let ${}^t x_e$ be a desired stationing point for the end-effector, and suppose the system employs free-standing cameras. We can define a positioning error

$$E_{rp}(x_e;\, {}^t x_e, x_t) = {}^e x_e = {}^e x_0 \bullet x_t \bullet {}^t x_e. \qquad (30)$$

(Note that in order for this error function to be in accord with our definition of kinematic error we must select a parameterization of rotations which is 0 when the end-effector is in the desired position.)

Using feature information and the camera calibration, we can directly estimate $\hat{x}_t = \hat{x}_c \bullet {}^c\hat{x}_t$. In order to compute a velocity screw, we first note that the rotation matrix ${}^eR_e$ of (30) can be represented as a rotation through an angle ${}^e\theta_e$ about an axis defined by a unit vector ${}^ek_e$ [6]. Thus, we can define

$$\Omega = k_1\,{}^e\hat{\theta}_e\,{}^e\hat{k}_e \qquad (31)$$
$$T = k_2\,{}^e\hat{t}_e - \Omega \times t_e \qquad (32)$$

where $t_e$ is the origin of the end-effector frame in base coordinates.

Note that if we can also observe the end-effector and estimate its pose, ${}^c\hat{x}_e$, we can rewrite (30) as follows:

$${}^e x_e = {}^e\hat{x}_c \bullet {}^c\hat{x}_0 \bullet {}^0\hat{x}_c \bullet {}^c\hat{x}_t \bullet {}^t x_e = {}^e\hat{x}_c \bullet {}^c\hat{x}_t \bullet {}^t x_e.$$
Once again we see that for an ECL system, both the robot kinematic chain and the camera pose relative to the base coordinate system have dropped out of the error equation. Hence, these factors do not affect the positioning accuracy of the system. The modifications of pose-based methods to end-effector based systems are completely straightforward and are left for the reader.
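For the pose-based regulator (31)-(32), the angle and axis of the error rotation can be extracted with a standard axis-angle formula; the sketch below (Python/NumPy, not from the tutorial and not the specific routine of [6]) additionally rotates the error quantities into the base frame, an implementation detail that (31)-(32) leave implicit.

import numpy as np

def axis_angle(R):
    # Unit axis k and angle theta such that R is a rotation by theta about k.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.array([0.0, 0.0, 1.0]), 0.0     # axis is arbitrary at zero rotation
    k = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return k / (2.0 * np.sin(theta)), theta

def pose_based_screw(k1, k2, e_R_err, e_t_err, R_e, t_e):
    # Proportional law (31)-(32); (e_R_err, e_t_err) is the desired end-effector
    # frame expressed in the current end-effector frame, from the estimate of (30).
    k_axis, theta = axis_angle(e_R_err)
    Omega = k1 * theta * (R_e @ k_axis)
    T = k2 * (R_e @ e_t_err) - np.cross(Omega, t_e)
    return np.hstack([T, Omega])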

4.3 Estimation
Obviously, a key issue in position-based visual servo is the estimation of the quantities used to parameterize the feedback. In this regard, position-based visual servoing is closely related to the problem of recovering scene geometry from one or more camera images. This encompasses problems including structure from motion, exterior orientation, stereo reconstruction, and absolute orientation. A comprehensive discussion of these topics can be found in a recent review article [36]. We divide the estimation problems that arise into single-camera and multiple-camera situations, which will be discussed in the following sections.

4.3.1 Single Camera


As noted previously, it follows from (15) that a point in a single camera image corresponds to a line in space. Although it is possible to perform geometric reconstruction using a single moving camera, the equations governing this process are often ill-conditioned, leading to stability problems [36]. Better results can be achieved if target features have some internal structure, or the features come from a known object. Below, we briefly describe methods for performing both point estimation and pose estimation with a single camera, assuming such information is available.

Single Points. Clearly, extra information is needed in order to reconstruct the Cartesian coordinates of a point in space from a single camera projection. In particular, if the feature has a known scale, this information can be used to compute point position. An example of such a feature is a circular opening with known diameter d whose image will be an ellipse. By estimating the parameters of the ellipse in the image, it is possible to compute the distance to the hole as well as the orientation of the plane of the hole relative to the camera imaging plane.

Object Pose. Accurate object pose estimation is possible if the vision system observes features of a known object, and uses those features to estimate object pose. This approach has been recently demonstrated by Wilson [37] for six DOF control of end-effector pose. A similar approach was recently reported in [38]. Briefly, such an approach proceeds as follows. Let ${}^tP_1, {}^tP_2, \ldots, {}^tP_n$ be a set of points expressed in an object coordinate system with unknown pose ${}^c x_t$ relative to an observing camera. The reconstruction problem is to estimate ${}^c x_t$ from the image locations of the corresponding observations $p_1, p_2, \ldots, p_n$. This is referred to as the pose estimation problem in the vision literature. Numerous methods of solution have been proposed and [39] provides a recent review of several techniques. Broadly speaking, solutions divide into analytic solutions and least-squares solutions which employ a variety of simplifications and/or iterative methods. Analytic solutions for three and four points are given by [40-44]. Unique solutions exist for four coplanar, but not collinear, points. Least-squares solutions can be found in [45-51]. Six or more points always yield unique solutions. The camera calibration matrix can be computed from features on the target, then decomposed [49] to yield the target's pose.

The least-squares solution proceeds as follows. Using (15), we can define an objective function of the unknown pose between the camera and the object:

$$O({}^c x_t) = O({}^cR_t, {}^ct_t) = \sum_{i=1}^{n} \left\| p_i - \pi({}^cR_t\,{}^tP_i + {}^ct_t) \right\|^2,$$

where $\pi(\cdot)$ denotes the perspective projection of (15).
This is a nonlinear optimization problem which has no known closed-form solution. Instead, iterative optimization techniques are employed. These techniques iteratively refine a nominal value of ${}^c x_t$ (e.g., the pose of the object in a previous image) to compute an updated value for the pose parameters. Because of the sensitivity of the reconstruction process to noise, it is often a good idea to incorporate some type of smoothing or averaging of the computed pose parameters, at the cost of some delay in response to changes in target pose. A particularly elegant formulation of this updating procedure results by application of statistical techniques such as the extended Kalman filter [52]. The reader is referred to [37] for details.
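A minimal sketch of such an iterative least-squares refinement of the reprojection objective (Python with SciPy; an illustrative stand-in rather than the specific algorithms of [37-51], with the rotation update parameterized by a rotation vector):

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, t_P, p_obs, lam):
    # Stacked reprojection errors p_i - pi(cR_t tP_i + ct_t) over all points.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    P_c = (R @ t_P.T).T + params[3:]
    proj = lam * P_c[:, :2] / P_c[:, 2:3]         # perspective projection (15)
    return (p_obs - proj).ravel()

def refine_pose(t_P, p_obs, lam, rvec0, t0):
    # Refine a nominal pose (rvec0, t0), e.g. the pose found in the previous image.
    sol = least_squares(residuals, np.hstack([rvec0, t0]), args=(t_P, p_obs, lam))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]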

4.3.2 Multiple Cameras


Many systems utilizing position-based control with stereo vision from free-standing cameras have been demonstrated. For example, Allen [53] shows a system which can grasp a toy train using stereo vision. Rizzi [54] demonstrates a system which can bounce a ping-pong ball. All of these systems are EOL. Cipolla [31] describes an ECL system using free-standing stereo cameras. One novel feature of this system is the use of the affine projection model (Section 2.3) for the imaging geometry. This leads to linear calibration and control at the cost of some system performance. The development of a position-based stereo eye-in-hand servoing system has also been reported [55]. Multiple cameras greatly simplify the reconstruction process as illustrated below.

Single Points. Let ${}^a x_{c_1}$ represent the location of a camera relative to an arbitrary base coordinate frame a. By inverting this transformation and combining (1) and (15) for a point ${}^aP = [x, y, z]^T$, we have

$$p_1 = \begin{bmatrix} u_1 \\ v_1 \end{bmatrix} = \frac{\lambda}{r_z \cdot {}^aP + t_z}\begin{bmatrix} r_x \cdot {}^aP + t_x \\ r_y \cdot {}^aP + t_y \end{bmatrix} \qquad (33)$$

where $r_x$, $r_y$ and $r_z$ denote the rows of ${}^{c_1}R_a$ and ${}^{c_1}t_a = [t_x, t_y, t_z]^T$. Multiplying through by the denominator of the right-hand side, we have

$$A_1(p_1)\,{}^aP = b_1(p_1) \qquad (34)$$

where

$$A_1(p_1) = \begin{bmatrix} \lambda\,r_x - u_1 r_z \\ \lambda\,r_y - v_1 r_z \end{bmatrix} \quad \text{and} \quad b_1(p_1) = \begin{bmatrix} t_z u_1 - \lambda t_x \\ t_z v_1 - \lambda t_y \end{bmatrix}.$$

Given a second camera at location ${}^{c_2} x_a$, we can compute $A_2(p_2)$ and $b_2(p_2)$ analogously. Stacking these together results in a matrix equation

$$\begin{bmatrix} A_1(p_1) \\ A_2(p_2) \end{bmatrix} {}^aP = \begin{bmatrix} b_1(p_1) \\ b_2(p_2) \end{bmatrix}$$

which is an overdetermined system that can be solved for ${}^aP$.
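The stacked system is a small linear least-squares problem; the sketch below (Python/NumPy, not from the tutorial) builds A_i(p_i) and b_i(p_i) from (34) for each calibrated camera and solves for the point.

import numpy as np

def camera_rows(p, R_ca, t_ca, lam):
    # A_i(p_i) and b_i(p_i) of (34) for a camera with pose (R_ca, t_ca) = ci_x_a.
    u, v = p
    rx, ry, rz = R_ca                             # rows of ci_R_a
    tx, ty, tz = t_ca
    A = np.vstack([lam * rx - u * rz, lam * ry - v * rz])
    b = np.array([tz * u - lam * tx, tz * v - lam * ty])
    return A, b

def triangulate(p1, cam1, p2, cam2, lam):
    # Solve the overdetermined stacked system for aP in the least-squares sense.
    A1, b1 = camera_rows(p1, cam1[0], cam1[1], lam)
    A2, b2 = camera_rows(p2, cam2[0], cam2[1], lam)
    aP, _, _, _ = np.linalg.lstsq(np.vstack([A1, A2]), np.hstack([b1, b2]), rcond=None)
    return aP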

Object Pose. Given a known object with three or more points in known locations with respect to an object coordinate system, it is relatively straightforward to solve the absolute orientation problem relating camera coordinates to object coordinates. The solution is based on noting that the centroid of a rigid set of points is invariant to coordinate transformations. Let ${}^tP_1, {}^tP_2, \ldots, {}^tP_n$ and ${}^c\hat{P}_1, {}^c\hat{P}_2, \ldots, {}^c\hat{P}_n$ denote n reference points in object coordinates and their corresponding estimates in camera coordinates. Define ${}^t\bar{C}$ and ${}^c\bar{C}$ to be the centroids of these point sets, respectively, and define ${}^t\bar{P}_i = {}^tP_i - {}^t\bar{C}$ and ${}^c\bar{P}_i = {}^c\hat{P}_i - {}^c\bar{C}$. Then we have

$$\left({}^c x_t \bullet {}^tP_i - {}^c x_t \bullet {}^t\bar{C}\right) - \left({}^c\hat{P}_i - {}^c\bar{C}\right) = \left({}^cR_t\,{}^tP_i + {}^ct_t - {}^cR_t\,{}^t\bar{C} - {}^ct_t\right) - \left({}^c\hat{P}_i - {}^c\bar{C}\right) = {}^cR_t\,{}^t\bar{P}_i - {}^c\bar{P}_i.$$

Note that the final expression depends only on ${}^cR_t$. The corresponding least-squares problem can either be solved explicitly for ${}^cR_t$ (see [56-58]), or solved incrementally using linearization. Given an estimate for ${}^cR_t$, the computation of ${}^ct_t$ is a linear least squares problem.
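One common closed-form route (sketched below in Python/NumPy; an SVD-based solution in the spirit of, but not necessarily identical to, the methods cited above) solves for cR_t from the centered point sets and then recovers ct_t linearly from the centroids.

import numpy as np

def absolute_orientation(t_P, c_P):
    # Estimate (cR_t, ct_t) from object points t_P (n,3) and their camera-frame
    # estimates c_P (n,3) via centroid subtraction and an SVD.
    t_C, c_C = t_P.mean(axis=0), c_P.mean(axis=0)
    H = (t_P - t_C).T @ (c_P - c_C)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against a reflection
    R = Vt.T @ D @ U.T                            # cR_t
    return R, c_C - R @ t_C                       # ct_t from the centroids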

4.4 Discussion
The principal advantage of position-based control is that it is possible to describe tasks in terms of positioning in Cartesian coordinates. Its primary disadvantage is that it is often highly calibration dependent. The impact of calibration dependency often depends on the situation. In an environment where moderate positioning accuracy is required from firmly mounted cameras, extant calibration techniques probably provide a sufficiently accurate solution. However, if the cameras are moving and high accuracy is required, calibration sensitivity is an important issue. Computation time for the relative orientation problem is often cited as a disadvantage of position-based methods. However, recent results show that solutions can be computed in only a few milliseconds even using iteration [39] or Kalman filtering [37].

Endpoint closed-loop systems are demonstrably less sensitive to calibration. However, particularly in stereo systems, small rotational errors between the cameras can lead to reconstruction errors which do impact the positioning accuracy of the system. Thus, endpoint closed-loop systems will work well, for example, with a moving stereo head in which the cameras are fixed and rigid. However, it still may cause problems when both cameras are free to move relative to one another.

Feature-based approaches tend to be more appropriate to tasks where there is no prior model of the geometry of the task, for example in teleoperation applications [59]. Pose-based approaches inherently depend on an existing object model. The pose estimation problem inherent in many position-based servoing problems requires solution of a potentially difficult correspondence problem. However, if the features are being tracked (see Section 6), then this problem need only be solved once at the beginning of the control process. Many of these problems can be circumvented by sensing target pose directly using a 3D sensor. Active 3D sensors based on structured lighting are now compact and fast enough to use for visual servoing. If the sensor is small and mounted on the robot [60-62], the depth and orientation information can be used for position-based visual servoing.

5 Image-Based Control
As described in Section 3, in image-based visual servo control the error signal is defined directly in terms of image feature parameters (in contrast to position-based methods, which define the error signal in task space coordinates). Thus, we posit the following definition.

Definition 5.1 An image-based visual servoing task is represented by an image error function $e : \mathcal{F} \rightarrow \Re^l$, where $l \leq k$ and k is the dimension of the image feature parameter space.

As described in Section 2.5, the system may use either a fixed camera or an eye-in-hand configuration. In either case, motion of the manipulator causes changes to the image observed by the vision system. Thus, the specification of an image-based visual servo task involves determining an appropriate error function e, such that when the task is achieved, e = 0. This can be done by directly using the projection equations (15), or by using a "teaching by showing" approach, in which the robot is moved to a goal position and the corresponding image is used to compute a vector of desired feature parameters, $f_d$. If the task is defined with respect to a moving object, the error e will be a function not only of the pose of the end-effector, but also of the pose of the moving object.

Although the error e is defined on the image parameter space, the manipulator control input is typically defined either in joint coordinates or in task space coordinates. Therefore, it is necessary to relate changes in the image feature parameters to changes in the position of the robot. The image Jacobian, introduced in Section 5.1, captures these relationships. We present an example image Jacobian in Section 5.2. In Section 5.3, we describe methods that can be used to "invert" the image Jacobian, to derive the robot velocity that will produce a desired change in the image. Finally, in Sections 5.4 and 5.5 we describe how controllers can be designed for image-based systems.

5.1 The Image Jacobian


In visual servo control applications, it is necessary to relate differential changes in the image feature parameters to differential changes in the position of the manipulator. The image Jacobian captures these relationships. Let r represent coordinates of the end-effector in some parameterization of the task space $\mathcal{T}$ and $\dot{r}$ represent the corresponding end-effector velocity (note, $\dot{r}$ is a velocity screw, as defined in Section 2.2). Let f represent a vector of image feature parameters and $\dot{f}$ the corresponding vector of image feature parameter velocities. The image Jacobian, $J_v$, is a linear transformation from the tangent space of $\mathcal{T}$ at r to the tangent space of $\mathcal{F}$ at f. In particular,

$$\dot{f} = J_v\,\dot{r} \qquad (35)$$

where $J_v \in \Re^{k \times m}$ and, writing $f = v(r)$ for the mapping from task space parameters to image feature parameters,

$$J_v(r) = \frac{\partial v}{\partial r} = \begin{bmatrix} \dfrac{\partial v_1(r)}{\partial r_1} & \cdots & \dfrac{\partial v_1(r)}{\partial r_m} \\ \vdots & & \vdots \\ \dfrac{\partial v_k(r)}{\partial r_1} & \cdots & \dfrac{\partial v_k(r)}{\partial r_m} \end{bmatrix}. \qquad (36)$$

Recall that m is the dimension of the task space $\mathcal{T}$. Thus, the number of columns in the image Jacobian will vary depending on the task.

The image Jacobian was first introduced by Weiss et al. [19], who referred to it as the feature sensitivity matrix. It is also referred to as the interaction matrix [10] and the B matrix [14, 15]. Other applications of the image Jacobian include [9, 12, 13, 22].

The relationship given by (35) describes how image feature parameters change with respect to changing manipulator pose. In visual servoing we are interested in determining the manipulator velocity, $\dot{r}$, required to achieve some desired value of $\dot{f}$. This requires solving the system given by (35). We will discuss this problem in Section 5.3, but first we present an example image Jacobian.

5.2 An Example Image Jacobian


Suppose that the end-effector is moving with angular velocity $\Omega(t)$ and translational velocity T(t) (as described in Section 2.2), both with respect to the camera frame in a fixed camera system. Let p be a point rigidly attached to the end-effector. The velocity of the point p, expressed relative to the camera frame, is given by

$${}^c\dot{p} = \Omega \times {}^cp + T. \qquad (37)$$

To simplify notation, let ${}^cp = [x, y, z]^T$. Substituting the perspective projection equations (15) into (8)-(10), we can write the derivatives of the coordinates of p in terms of the image feature parameters u, v as

$$\dot{x} = z\omega_y - \frac{vz}{\lambda}\omega_z + T_x \qquad (38)$$
$$\dot{y} = \frac{uz}{\lambda}\omega_z - z\omega_x + T_y \qquad (39)$$
$$\dot{z} = \frac{z}{\lambda}(v\omega_x - u\omega_y) + T_z. \qquad (40)$$

Now, let $F = [u, v]^T$ as above. Using the quotient rule,

$$\dot{u} = \lambda\,\frac{\dot{x}z - x\dot{z}}{z^2} \qquad (41)$$
$$\phantom{\dot{u}} = \frac{\lambda}{z^2}\left\{ z\left[z\omega_y - \frac{vz}{\lambda}\omega_z + T_x\right] - \frac{uz}{\lambda}\left[\frac{z}{\lambda}(v\omega_x - u\omega_y) + T_z\right] \right\} \qquad (42)$$
$$\phantom{\dot{u}} = \frac{\lambda}{z}T_x - \frac{u}{z}T_z - \frac{uv}{\lambda}\omega_x + \frac{\lambda^2 + u^2}{\lambda}\omega_y - v\omega_z. \qquad (43)$$

Similarly,

$$\dot{v} = \frac{\lambda}{z}T_y - \frac{v}{z}T_z - \frac{\lambda^2 + v^2}{\lambda}\omega_x + \frac{uv}{\lambda}\omega_y + u\omega_z. \qquad (44)$$

Finally, we may rewrite these two equations in matrix form to obtain

$$\begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix} = \begin{bmatrix} \dfrac{\lambda}{z} & 0 & -\dfrac{u}{z} & -\dfrac{uv}{\lambda} & \dfrac{\lambda^2 + u^2}{\lambda} & -v \\ 0 & \dfrac{\lambda}{z} & -\dfrac{v}{z} & -\dfrac{\lambda^2 + v^2}{\lambda} & \dfrac{uv}{\lambda} & u \end{bmatrix} \begin{bmatrix} T_x \\ T_y \\ T_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} \qquad (45)$$

which is an important result relating the image-plane velocity of a point to the relative velocity of the point with respect to the camera. Alternative derivations for this example can be found in a number of references including [63, 64]. It is straightforward to extend this result to the general case of using k/2 image points for the visual control by simply stacking the Jacobians for each pair of image point coordinates:

$$\begin{bmatrix} \dot{u}_1 \\ \dot{v}_1 \\ \vdots \\ \dot{u}_{k/2} \\ \dot{v}_{k/2} \end{bmatrix} = \begin{bmatrix} \dfrac{\lambda}{z_1} & 0 & -\dfrac{u_1}{z_1} & -\dfrac{u_1 v_1}{\lambda} & \dfrac{\lambda^2 + u_1^2}{\lambda} & -v_1 \\ 0 & \dfrac{\lambda}{z_1} & -\dfrac{v_1}{z_1} & -\dfrac{\lambda^2 + v_1^2}{\lambda} & \dfrac{u_1 v_1}{\lambda} & u_1 \\ \vdots & & & & & \vdots \\ \dfrac{\lambda}{z_{k/2}} & 0 & -\dfrac{u_{k/2}}{z_{k/2}} & -\dfrac{u_{k/2} v_{k/2}}{\lambda} & \dfrac{\lambda^2 + u_{k/2}^2}{\lambda} & -v_{k/2} \\ 0 & \dfrac{\lambda}{z_{k/2}} & -\dfrac{v_{k/2}}{z_{k/2}} & -\dfrac{\lambda^2 + v_{k/2}^2}{\lambda} & \dfrac{u_{k/2} v_{k/2}}{\lambda} & u_{k/2} \end{bmatrix} \begin{bmatrix} T_x \\ T_y \\ T_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}. \qquad (46)$$
Finally, note that the Jacobian matrices given in (45) and (46) are functions of the distance from the camera focal center to the point being imaged (i.e., they are functions of $z_i$). For a fixed camera system, when the target is the end-effector these z values can be computed using the forward kinematics of the robot and the camera calibration information. For an eye-in-hand system, determining z can be more difficult. This problem is discussed further in Section 7.1.
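Assembling the stacked image Jacobian (46) is mechanical; the sketch below (Python/NumPy, our function name, not part of the tutorial) builds the 2x6 block of (45) for each point from its image coordinates and depth.

import numpy as np

def image_jacobian(points_uv, depths, lam):
    # Stacked image Jacobian (46); points_uv is (k/2, 2), depths is (k/2,).
    blocks = []
    for (u, v), z in zip(points_uv, depths):
        blocks.append(np.array([
            [lam / z, 0.0, -u / z, -u * v / lam, (lam**2 + u**2) / lam, -v],
            [0.0, lam / z, -v / z, -(lam**2 + v**2) / lam, u * v / lam, u]]))
    return np.vstack(blocks)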

5.3 Using the Image Jacobian to Compute End-Effector Velocity


Visual servo control applications typically require the computation of $\dot{r}$, given as input $\dot{f}$. Methods for computing $\dot{f}$ are discussed in Section 6. Here, we focus on the determination of $\dot{r}$, assuming that $\dot{f}$ is given. There are three cases that must be considered: k = m, k < m, and k > m. We now discuss each of these.

When k = m and $J_v$ is nonsingular, $J_v^{-1}$ exists. Therefore, in this case, $\dot{r} = J_v^{-1}\dot{f}$. Such an approach has been used by Feddema [18], who also describes an automated approach to image feature selection in order to minimize the condition number of $J_v$. When $k \neq m$, $J_v^{-1}$ does not exist. In this case, assuming that $J_v$ is full rank (i.e., $\mathrm{rank}(J_v) = \min(k, m)$), we can compute a least squares solution, which, in general, is given by

$$\dot{r} = J_v^+\dot{f} + (I - J_v^+ J_v)\,b \qquad (47)$$

where $J_v^+$ is a suitable pseudoinverse for $J_v$, and b is an arbitrary vector of the appropriate dimension. The least squares solution gives a value for $\dot{r}$ that minimizes the norm $\|\dot{f} - J_v\dot{r}\|$.

We first consider the case k > m, that is, there are more feature parameters than task degrees of freedom. By the implicit function theorem [65], if, in some neighborhood of r, $m \leq k$ and $\mathrm{rank}(J_v) = m$ (i.e., $J_v$ is full rank), we can express the coordinates $f_{m+1}, \ldots, f_k$ as smooth functions of $f_1, \ldots, f_m$. From this, we deduce that there are $k - m$ redundant visual features. Typically, this will result in a set of inconsistent equations (since the k visual features will be obtained from a computer vision system, and therefore will likely be noisy). In this case, the appropriate pseudoinverse is given by

$$J_v^+ = (J_v^T J_v)^{-1} J_v^T. \qquad (48)$$

Here, we have $(I - J_v^+ J_v) = 0$ (the rank of the null space of $J_v$ is 0, since the dimension of the column space of $J_v$, m, equals $\mathrm{rank}(J_v)$). Therefore, the solution can be written more concisely as

$$\dot{r} = J_v^+\dot{f}. \qquad (49)$$

When k < m, the system is underconstrained. In the visual servo application, this implies that we are not observing enough features to uniquely determine the object motion $\dot{r}$, i.e., there are certain components of the object motion that cannot be observed. In this case, the appropriate pseudoinverse is given by

$$J_v^+ = J_v^T (J_v J_v^T)^{-1}. \qquad (50)$$

In general, for k < m, $(I - J_v^+ J_v) \neq 0$, and all vectors of the form $(I - J_v^+ J_v)\,b$ lie in the null space of $J_v$, which implies that those components of the object velocity that are unobservable lie in the null space of $J_v$. In this case, the solution is given by (47). For example, as shown in [64], the null space of the image Jacobian given in (45) is spanned by the four vectors

$$\begin{bmatrix} u \\ v \\ \lambda \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ u \\ v \\ \lambda \end{bmatrix}, \qquad \begin{bmatrix} uvz \\ -(u^2 + \lambda^2)z \\ \lambda v z \\ -\lambda^2 \\ 0 \\ u\lambda \end{bmatrix}, \qquad \begin{bmatrix} \lambda(u^2 + v^2 + \lambda^2)z \\ 0 \\ -u(u^2 + v^2 + \lambda^2)z \\ uv\lambda \\ -(u^2 + \lambda^2)\lambda \\ v\lambda^2 \end{bmatrix}. \qquad (51)$$
In some instances, there is a physical interpretation for the vectors that span the null space of the image Jacobian. For example, the vector u v 0 0 0]T re ects that the motion of a point along a projection ray cannot be observed. The vector 0 0 0 u v ]T re ects the fact that rotation 23

of a point on a projection ray about that projection ray cannot be observed. Unfortunately, not all basis vectors for the null space have such an obvious physical interpretation. The null space of the image Jacobian plays a signi cant role in hybrid methods, in which some degrees of freedom are controlled using visual servo, while the remaining degrees of freedom are controlled using some other modality 12].
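All three cases above reduce to a pseudoinverse computation; a minimal sketch (Python/NumPy, not from the tutorial) that returns the least-squares / minimum-norm screw and, when k < m, optionally adds a null-space component b as in (47):

import numpy as np

def screw_from_feature_velocity(Jv, f_dot, b=None):
    # Solve f_dot = Jv r_dot as in (47)-(50) using the Moore-Penrose pseudoinverse.
    Jv_plus = np.linalg.pinv(Jv)
    r_dot = Jv_plus @ f_dot
    if b is not None:                             # unobservable (null-space) motion
        r_dot = r_dot + (np.eye(Jv.shape[1]) - Jv_plus @ Jv) @ b
    return r_dot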

5.4 Resolved-Rate Methods


The earliest approaches to image-based visual servo control [9, 19] were based on resolved-rate motion control [28], which we briefly describe here. Suppose that the goal of a particular task is to reach a desired image feature parameter vector, f_d. If the control input is defined, as in Section 4, to be an end-effector velocity, then we have u = ṙ, and, assuming for the moment that the image Jacobian is square and nonsingular,

    u = J_v^{-1}(r) \dot{f}.        (52)

If we define the error function as e(f) = f_d − f, a simple proportional control law is given by

    u = K J_v^{-1}(r) e(f),        (53)

where K is a constant gain matrix of the appropriate dimension. For the case of a non-square image Jacobian, the techniques described in Section 5.3 would be used to compute u. Similar results have been presented in [12, 13].
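The control law (53) translates directly into a per-cycle computation. The sketch below is a minimal illustration, not a complete controller; the helper names and the placeholder loop are ours, and a pseudoinverse is used so that the same code also covers non-square Jacobians as described in Section 5.3:

    import numpy as np

    def resolved_rate_command(Jv, f, f_d, gain=0.5):
        # u = K Jv^{-1} e(f), with e(f) = f_d - f and K = gain * I.
        return gain * (np.linalg.pinv(Jv) @ (f_d - f))

    # Hypothetical servo loop (sensor and robot interfaces are placeholders):
    # while not converged:
    #     f  = measure_features()               # Section 6
    #     Jv = estimate_image_jacobian(f, z)    # parameterized by depth z
    #     send_velocity(resolved_rate_command(Jv, f, f_d))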

5.5 Example Servoing Tasks


In this section, we revisit the problems that were described in Section 4.1. Here, we describe image-based solutions for these problems.

Point to Point Positioning Consider the task of bringing some point P on the manipulator to a desired stationing point S. If two cameras are viewing the scene, a necessary and sufficient condition for P and S to coincide in the workspace is that the projections of P and S coincide in each image. If we let [u^l, v^l]^T and [u^r, v^r]^T be the image coordinates for the projection of P in the left and right images, respectively, then we may take f = [u^l, v^l, u^r, v^r]^T. If we let T = R^3, then F is a mapping from T to R^4. Let the projection of S have coordinates [u_s^l, v_s^l] and [u_s^r, v_s^r] in the left and right images. We then define the desired feature vector to be f_d = [u_s^l, v_s^l, u_s^r, v_s^r]^T, yielding

    e_{pp}(f) = f - f_d.        (54)

The image Jacobian for this problem can be constructed by using (45) for each camera (note that a coordinate transformation must be used for either the left or right camera, to relate the end-effector velocity screw to a common reference frame).
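A minimal sketch of the resulting stereo error and stacked Jacobian follows (the helper names are ours; the two per-camera 2x6 Jacobians are assumed to have already been expressed in a common end-effector frame, as noted above):

    import numpy as np

    def stereo_point_error(p_left, p_right, s_left, s_right):
        # e_pp(f) = f - f_d of (54); each argument is an image point [u, v].
        f = np.concatenate([p_left, p_right])
        f_d = np.concatenate([s_left, s_right])
        return f - f_d

    def stereo_point_jacobian(J_left, J_right):
        # Stack the two per-camera Jacobians of (45) into a single 4x6 Jacobian.
        return np.vstack([J_left, J_right])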

Point to Line Positioning Consider again the task in which some point P on the manipulator end-effector is to be brought to the line joining two fixed points S1 and S2 in the world.
If two cameras are viewing the workspace, it can be shown that a necessary and sufficient condition for P to be colinear with the line joining S1 and S2 is that the projection of P be colinear with the projections of the points S1 and S2 in both images (for non-degenerate camera configurations). The proof proceeds as follows. The origin of the coordinate frame for the left camera, together with the projections of S1 and S2 onto the left image, forms a plane. Likewise, the origin of the coordinate frame for the right camera, together with the projections of S1 and S2 onto the right image, forms a plane. The intersection of these two planes is exactly the line joining S1 and S2 in the workspace. When P lies on this line, it must lie simultaneously in both of these planes, and therefore must be colinear with the projections of the points S1 and S2 in both images.

We now turn to conditions that determine when the projection of P is colinear with the projections of the points S1 and S2. It is known that three vectors are coplanar if and only if their scalar triple product is zero. For the left image, let the projection of S1 have image coordinates [u_1^l, v_1^l], the projection of S2 have image coordinates [u_2^l, v_2^l], and the projection of P have image coordinates [u^l, v^l]. If the three vectors from the origin of the left camera to these image points are coplanar, then the three image points are colinear. Thus, we construct the scalar triple product

    e_{pl}^l([u^l, v^l]^T) = \left( \begin{bmatrix} u_1^l \\ v_1^l \\ \lambda \end{bmatrix} \times \begin{bmatrix} u_2^l \\ v_2^l \\ \lambda \end{bmatrix} \right) \cdot \begin{bmatrix} u^l \\ v^l \\ \lambda \end{bmatrix}.        (55)

We may proceed in the same fashion to derive conditions for the right image:

    e_{pl}^r([u^r, v^r]^T) = \left( \begin{bmatrix} u_1^r \\ v_1^r \\ \lambda \end{bmatrix} \times \begin{bmatrix} u_2^r \\ v_2^r \\ \lambda \end{bmatrix} \right) \cdot \begin{bmatrix} u^r \\ v^r \\ \lambda \end{bmatrix}.        (56)

Finally, we construct the error function

    e(f) = \begin{bmatrix} e_{pl}^l([u^l, v^l]^T) \\ e_{pl}^r([u^r, v^r]^T) \end{bmatrix},        (57)

where f = [u^l, v^l, u^r, v^r]^T. Again, the image Jacobian for this problem can be constructed by using (45) for each camera (note that a coordinate transformation must be used for either the left or right camera, to relate the end-effector velocity screw to a common reference frame).
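The scalar triple products of (55) and (56) are straightforward to compute; the sketch below (our code, with the focal length λ passed explicitly and assumed known) returns the stacked error (57) for a stereo pair:

    import numpy as np

    def point_to_line_error(p, s1, s2, lam):
        # Zero exactly when the projection of P is colinear with the projections
        # of S1 and S2; p, s1, s2 are image points [u, v] in one camera.
        ray = lambda q: np.array([q[0], q[1], lam])   # vector from optical center to image point
        return float(np.dot(np.cross(ray(s1), ray(s2)), ray(p)))

    def stereo_point_to_line_error(p_l, s1_l, s2_l, p_r, s1_r, s2_r, lam_l, lam_r):
        return np.array([point_to_line_error(p_l, s1_l, s2_l, lam_l),
                         point_to_line_error(p_r, s1_r, s2_r, lam_r)])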

Given a second point on the end-effector, a four degree-of-freedom positioning operation can be defined by simply stacking the error terms. It is interesting to note that these solutions to the point-to-line problem perform with an accuracy that is independent of calibration, whereas the position-based versions do not [66].

5.6 Discussion
One of the chief advantages of image-based control over position-based control is that the positioning accuracy of the system is less sensitive to camera calibration. This is particularly true for ECL image-based systems. For example, it is interesting to note that the ECL image-based solutions to the point-to-line positioning problem perform with an accuracy that is independent of calibration, whereas the position-based versions do not [66]. It is important to note, however, that most of the image-based control methods appearing in the literature still rely on an estimate of point position or target pose to parameterize the Jacobian. In practice, the unknown parameter for Jacobian calculation is the distance from the camera. Some recent papers present adaptive approaches for estimating this depth value [14], or develop feedback methods which do not use depth in the feedback formulation [67].

There are often computational advantages to image-based control, particularly in ECL configurations. For example, a position-based relative pose solution for an ECL single-camera system must perform two nonlinear least squares optimizations in order to compute the error function. The comparable image-based system must only compute a simple image error function, an inverse Jacobian solution, and possibly a single position or pose calculation to parameterize the Jacobian.

One disadvantage of image-based methods over position-based methods is the presence of singularities in the feature mapping function, which reflect themselves as unstable points in the inverse Jacobian control law. These instabilities are often less prevalent in the equivalent position-based scheme. Returning again to the point-to-line example, the Jacobian calculation becomes singular when the two stationing points are coplanar with the optical centers of both cameras. In this configuration, rotations and translations of the setpoints in the plane are not observable. This singular configuration does not exist for the position-based solution.

In the above discussion we have referred to f_d as the desired feature parameter vector, and implied that it is a constant. If it is a constant, then the robot will move to the desired pose with respect to the target. If the target is moving, the system will endeavour to track the target and maintain relative pose, but the tracking performance will be a function of the system dynamics, as discussed in Section 7.2.

Many tasks can be described in terms of the motion of image features, for instance aligning visual cues in the scene. Jang et al. [68] describe a generalized approach to servoing on image features, with trajectories specified in feature space, leading to trajectories (tasks) that are independent of target geometry. Skaar et al. [16] describe the example of a 1-DOF robot catching a ball. By observing visual cues such as the ball, the arm's pivot point, and another point on the arm, the interception task can be specified, even if the relationship between camera and arm is not known a priori. Feddema [9] uses a feature-space trajectory generator to interpolate feature parameter values, due to the low update rate of the vision system used.

6 Image Feature Extraction and Tracking


Irrespective of the control approach used, a vision system is required to extract the information needed to perform the servoing task. Hence, visual servoing pre-supposes the solution to a set of potentially difficult static and dynamic vision problems. To this end many reported implementations contrive the vision problem to be simple: e.g., painting objects white, using artificial targets, and so forth [9, 12, 54, 69]. Other authors use extremely task-specific clues: e.g., Allen [53] uses motion detection for locating a moving object to be grasped, and welding systems commonly use special filters that isolate the image of the welding tip. A review of tracking approaches used by researchers in this field is given in [3].

In less structured situations, vision has typically relied on the extraction of sharp contrast changes, referred to as "corners" or "edges", to indicate the presence of object boundaries or surface markings in an image. Processing the entire image to extract these features necessitates the use of extremely high-speed hardware in order to work with a sequence of images at camera rate. However, not all pixels in the image are of interest, and computation time can be greatly reduced if only a small region around each image feature is processed. Thus, a promising technique for making vision cheap and tractable is to use window-based tracking techniques [54, 70, 71]. Window-based methods have several advantages, among them computational simplicity, little requirement for special hardware, and easy reconfiguration for different applications. We note, however, that initialization of window or region-based systems typically presupposes an automated or human-supplied solution to a potentially complex vision problem.

In keeping with the minimalist approach of this tutorial, we concentrate on describing the window-based approach to tracking of features in an image. A discussion of methods which use specialized hardware combined with temporal and geometric constraints can be found in [72]. The remainder of this section is organized as follows. Section 6.1 describes how window-based methods can be used to implement fast detection of edge segments, a common low-level primitive for vision applications. Section 6.2 describes an approach based on temporally correlating image regions over time. Section 6.3 describes some general issues related to the use of temporal and geometric constraints, and Section 6.4 briefly summarizes some of the issues surrounding the choice of a feature extraction method for tracking.

6.1 Feature-Based Methods


In this section, we illustrate how window-based processing techniques can be used to perform fast detection of isolated straight edge segments of fixed length. Edge segments are intrinsic to applications where man-made parts contain corners or other patterns formed from physical edges and vertices. Images are comprised of pixels organized into a two-dimensional coordinate system. We adopt the notation I(x, t) to denote the pixel at location x = [u, v]^T in an image captured at time t.

A window can be thought of as a two-dimensional array of pixels related to a larger image by an invertible mapping from window coordinates to image coordinates. We consider rigid transformations consisting of a translation vector c = [x, y]^T and a rotation θ. A pixel value at x = [u, v]^T in window coordinates is related to the larger image by

    R(x; c, \theta, t) = I(c + R(\theta) x, t),        (58)

where R(θ) is a two-dimensional rotation matrix. We adopt the convention that x = 0 is the center of the window. In the sequel, the set X represents the set of all values of x.

Window-based tracking algorithms typically operate in two stages. In the first stage, one or more windows are acquired using a nominal set of window parameters. The pixel values for all x ∈ X are copied into a two-dimensional array that is subsequently treated as a rectangular image. Such acquisitions can be implemented extremely efficiently using line-drawing and region-fill algorithms commonly developed for graphics applications [73]. In the second stage, the windows are processed to locate features. Using the feature measurements, a new set of window parameters is computed. These parameters may be modified using external geometric constraints or temporal prediction, and the cycle repeats.

We consider an edge segment to be characterized by three parameters in the image plane: the u and v coordinates of the center of the segment, and the orientation of the segment relative to the image plane coordinate system. These values correspond directly to the parameters of the acquisition window used for edge detection. Let us first assume we have correct prior values c⁻ = (u⁻, v⁻) and θ⁻ for an edge segment. A window R⁻(x) = R(x; c⁻, θ⁻, t) extracted with these parameters would then have a vertical edge segment within it.

Isolated step edges can be localized by determining the location of the maximum of the first derivative of the signal [64, 72, 74]. However, since derivatives tend to increase the noise in an image, most edge detection methods combine spatial derivatives with a smoothing operation to suppress spurious maxima. Both derivatives and smoothing are linear operations that can be computed using convolution operators. Recall that the two-dimensional convolution of the window R⁻(·) by a function G is given by

    (R^- * G)(x) = \frac{1}{|X|} \sum_{s \in X} R^-(x - s) G(s).

By the associativity of linear operations, the derivative of a smoothed signal is equivalent to the signal convolved with the derivative of the smoothing function. Hence, smoothing and differentiation can be combined into a single convolution template. An extremely popular convolution kernel is the derivative of a Gaussian (DOG) [75]. In one dimension, the DOG is defined as

    g(x) = -x \exp(-x^2 / \sigma^2),

where σ is a design parameter governing the amount of smoothing that takes place. Although the DOG has been demonstrated to be the optimal filter for detecting step edges [75], it requires floating point arithmetic to be computed accurately. Another edge detector which can be implemented without floating point arithmetic is the derivative of a triangle (DOT) kernel. In one dimension the DOT is defined as

    g(x) = \mathrm{signum}(x).

For a kernel three pixels wide, this is also known as the Prewitt operator [64]. Although the latter is not optimal from a signal processing point of view, convolution by the DOT can be implemented using only four additions per pixel. Thus, it is extremely fast to execute on simple hardware.

Returning to detecting edge segments, convolutions are employed as follows. Let e be any derivative-based scalar edge detection kernel arranged as a single row. Compute the convolution R_1(x) = (R^- * e)(x). R_1 will have a response curve in each row which peaks at the location of the edge. Summing each column of R_1 superimposes the peaks and yields a one-dimensional response curve. If the estimated orientation θ⁻ was correct, the maximum of this response curve determines the offset of the edge in window coordinates. By interpolating the response curve about the maximum value, subpixel localization of the edge can be achieved.

If θ⁻ was incorrect, the response curves in R_1 will deviate slightly from one another and the superposition of these curves will form a lower, more spread-out aggregate curve. Thus, maximizing the maximum value of the aggregate response curve is a way to determine edge orientation. This can be approximated by performing the detection operation on windows acquired at θ⁻ as well as at two bracketing angles θ⁻ ± δθ, and performing quadratic interpolation on the maxima of the corresponding aggregate response curves. Computing the three oriented edge detectors is particularly simple if the range of angles is small. In this case, a single window is processed with the initial scalar convolution, yielding R_1. Three aggregate response curves are computed by summing along the columns of R_1 and along diagonals corresponding to angles of ±δθ. The maxima of all three curves are located and interpolated to yield edge orientation and position. Thus, for the price of one window acquisition, one complete scalar convolution, and three column sums, the vertical offset δo and the orientation offset δθ can be computed. Once these two values are determined, the state variables of the acquisition window are updated as
    \theta^+ = \theta^- + \delta\theta
    u^+ = u^- - \delta o \, \sin(\theta^+)
    v^+ = v^- + \delta o \, \cos(\theta^+)

An implementation of this method has shown that localizing a 20 pixel edge using a Prewitt-style mask 15 pixels wide, searching ±10 pixels and ±15 degrees, takes 1.5 ms on a Sun Sparc II workstation. At this rate, 22 edge segments can be tracked simultaneously at 30 Hz, the video frame rate used. Longer edges can be tracked at comparable speeds by subsampling along the edge.

Clearly, this edge-detection scheme is susceptible to mistracking caused by background or foreground occluding edges. Large acquisition windows increase the range of motions that can be tracked, but reduce the tracking speed and increase the likelihood that a distracting edge will disrupt tracking. Likewise, large orientation brackets reduce the accuracy of the estimated orientation, and make it more susceptible to edges that are not closely oriented to the underlying edge. There are several ways of increasing the robustness of edge tracking. One is to include some type of temporal component in the algorithm. For example, matching edges based on the sign or absolute value of the edge response increases its ability to reject incorrect edges. For more complex edge-based detection, collections of such oriented edge detectors can be combined to verify the location and position of the entire feature. Some general ideas in this direction are discussed in Section 6.3.
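The localization step itself can be sketched compactly. The code below is ours and deliberately simplified (a single orientation, unit weighting, and a small DOT-style kernel); it localizes a roughly vertical edge in an acquired window to subpixel accuracy:

    import numpy as np

    def localize_vertical_edge(window):
        e = np.array([1.0, 1.0, 0.0, -1.0, -1.0])             # derivative-style row kernel
        rows = [np.convolve(row, e, mode="same") for row in window]
        response = np.abs(np.sum(rows, axis=0))               # aggregate response curve
        i = int(np.argmax(response))
        if 0 < i < len(response) - 1:                         # 3-point quadratic interpolation
            y0, y1, y2 = response[i - 1], response[i], response[i + 1]
            denom = y0 - 2.0 * y1 + y2
            if denom != 0.0:
                i = i + 0.5 * (y0 - y2) / denom
        return i - window.shape[1] / 2.0                      # offset from the window center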

6.2 Area Correlation-Based Methods


Feature-based methods do not make strong use of the actual appearance of the features they are tracking. If the desired feature is a specific pattern that changes little over time, then tracking can be based on correlating the appearance of the feature (in terms of its gray-values) in a series of images. SSD (Sum of Squared Differences) is a variation on correlation which tracks a region of an image by exploiting its temporal consistency: the observation that the appearance of a small region in an image sequence changes very little.

Consider only windows that differ in the location of their center. We assume some reference window was acquired at time t at location c. Some small time interval τ later, a candidate window of the same size is acquired at location c + d. The correspondence between these two images is measured by the sum of the squared differences of corresponding pixels:

    O(d) = \sum_{x \in X} \left( R(x; c, t) - R(x; c + d, t + \tau) \right)^2 w(x), \qquad w(x) > 0,        (59)

where w(·) is a weighting function over the image region. The aim is to find the displacement, d, that minimizes O(d). Since images are inherently discrete, a natural solution is to select a finite range of values D and compute

    \hat{d} = \arg\min_{d \in D} O(d).

The advantage of a complete discrete search is that the true minimum over the search region is guaranteed to be found. However, the larger the area covered, the greater the computational burden. This burden can be reduced by performing the optimization starting at low resolution and proceeding to higher resolution, and by ordering the candidates in D from most to least likely and terminating the search once a candidate with an acceptably low SSD value is found [15]. Once the discrete minimum is found, the location can be refined to subpixel accuracy by interpolation of the SSD values about the minimum. Even with these improvements, [15] reports that a special signal processor is required to attain frame-rate performance.
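A brute-force version of this discrete search is easy to write down. The sketch below is ours, with w(x) = 1 and the candidate window assumed to remain inside the image; it returns the displacement in D = [-radius, radius]^2 with the smallest SSD value:

    import numpy as np

    def ssd(ref, img, c, d):
        # O(d) of (59) with unit weighting; c and d are (column, row) offsets.
        h, w = ref.shape
        r0 = c[1] + d[1] - h // 2
        c0 = c[0] + d[0] - w // 2
        cand = img[r0:r0 + h, c0:c0 + w].astype(float)
        return float(np.sum((ref.astype(float) - cand) ** 2))

    def discrete_ssd_search(ref, img, c, radius=8):
        best, best_d = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                o = ssd(ref, img, c, (dx, dy))
                if o < best:
                    best, best_d = o, (dx, dy)
        return best_d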

It is also possible to solve (59) using continuous optimization methods [76-79]. The solution begins by expanding R(x; c, t) in a Taylor series about (c, t), yielding

    R(x; c + d, t + \tau) \approx R(x; c, t) + R_x(x) d_x + R_y(x) d_y + R_t(x) \tau,

where R_x, R_y and R_t are the spatial and temporal derivatives of the image, computed using convolution as follows:

    R_x(x) = \left( R * \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \right)(x)

    R_y(x) = \left( R * \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \right)(x)

    R_t(x) = \left( \left( R(\cdot; c, t + \tau) - R(\cdot; c, t) \right) * \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \right)(x)

Substituting into (59) yields

    O(d) \approx \sum_{x \in X} \left( R_x(x) d_x + R_y(x) d_y + R_t(x) \tau \right)^2 w(x).        (60)

Define

    g(x) = \begin{bmatrix} R_x(x) \sqrt{w(x)} \\ R_y(x) \sqrt{w(x)} \end{bmatrix} \quad \text{and} \quad h(x) = R_t(x) \sqrt{w(x)}.

Expression (60) can now be written more concisely as

    O(d) \approx \sum_{x \in X} \left( g(x) \cdot d + h(x) \tau \right)^2.        (61)

Notice that O is now a quadratic function of d. Computing the derivatives of O with respect to the components of d, setting the result equal to zero, and rearranging yields a linear system of equations:

    \left[ \sum_{x \in X} g(x) g(x)^T \right] d = -\tau \sum_{x \in X} h(x) g(x).        (62)

Solving for d yields an estimate, d̂, of the offset that would cause the two windows to have maximum correlation. We then compute c⁺ = c⁻ + d̂, yielding the updated window location for the next tracking cycle. This is effectively a proportional control algorithm for "servoing" the location of an acquisition window to maintain the best match with the reference window over time.

In practice this method will only work for small motions (it is mathematically correct only for a fraction of a pixel). This problem can be alleviated by first performing the optimization at low levels of resolution and using the result as a seed for computing the offset at higher levels of resolution. For example, reducing the resolution by a factor of two by summing groups of four neighboring pixels doubles the maximum displacement between two images. It also speeds up the computations, since fewer operations are needed to compute d̂ for the smaller low-resolution image. Another drawback of this method is the fact that it relies on an exact match of the gray values; changes in contrast or brightness can bias the results and lead to mistracking. Thus, it is common to normalize the images to have zero mean and consistent variance. With these modifications, it is easy to show that solving (62) is equivalent to maximizing the correlation between the two windows.

Continuous optimization has two principal advantages over discrete optimization. First, a single updating cycle is usually faster to compute. For example, (62) can be computed and solved in less than 5 ms on a Sparc II computer [79]. Second, it is easy to incorporate other window parameters such as rotation and scaling into the system without greatly increasing the computation time [78, 79]. It is also easy to show that including parameters for contrast and brightness in (60) makes SSD tracking equivalent to finding the maximum correlation between the two image regions [76]. Thus, SSD methods can be used to perform template matching as well as tracking of image regions.
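The linearized solution also lends itself to a compact implementation. The following sketch is ours and simplified: it uses plain forward differences instead of the 2x2 masks above, takes w(x) = 1, and measures the temporal difference over one frame so that τ = 1 in frame units. It performs a single update of the form (62):

    import numpy as np

    def ssd_step(ref, cur, tau=1.0):
        # One continuous-optimization update: solve (sum g g^T) d = -tau * sum h g.
        ref = ref.astype(float)
        cur = cur.astype(float)
        Rx = (ref[:, 1:] - ref[:, :-1])[:-1, :]      # horizontal spatial derivative
        Ry = (ref[1:, :] - ref[:-1, :])[:, :-1]      # vertical spatial derivative
        Rt = (cur - ref)[:-1, :-1]                   # temporal derivative
        g = np.stack([Rx.ravel(), Ry.ravel()], axis=1)
        h = Rt.ravel()
        A = g.T @ g
        b = -tau * (g.T @ h)
        return np.linalg.solve(A, b)                 # displacement estimate d_hat

The window location is then updated as c⁺ = c⁻ + d̂, and the step can be repeated at successively finer resolutions as described above.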

6.3 Filtering and Feedforward


Window-based tracking implicitly assumes that the interframe motions of the tracked feature do not exceed the size of the search window, or, in the case of SSD tracking, a few pixels from the expected location of the image region. In the simplest case, the previous location of the image feature can be used as a predictor of its current location. Unfortunately, as feature velocity increases, the search window must be enlarged, which adversely affects computation time. The robustness and speed of tracking can be significantly increased with knowledge about the dynamics of the observed features, which may be due to motion of the camera or target. For example, given knowledge of the image feature location f_t at time t, the image Jacobian J_v, the end-effector velocity u_t, and the interframe time τ, the expected location of the search window can be computed from the prediction f_{t+τ} = f_t + τ J_v u_t. Likewise, if the dynamics of a moving object are known, then it is possible to use this to enhance prediction. For example, Rizzi [54] describes the use of a Newtonian flight dynamics model to make it possible to track a ping-pong ball during flight. Predictors based on α-β tracking filters and Kalman filters have also been used [37, 53, 72]. Multiresolution techniques can be used to provide further performance improvements, particularly when a dynamic model is not available and large search windows must be used.
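A minimal sketch of this feedforward prediction is given below (the naming is ours; the Jacobian, the commanded end-effector velocity, and the interframe time are assumed known from the control loop):

    import numpy as np

    def predict_feature_location(f_t, Jv, u_t, tau):
        # f_{t+tau} = f_t + tau * Jv @ u_t: center the next search window here.
        return f_t + tau * (Jv @ u_t)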

6.4 Discussion
Prior to executing or planning visually controlled motions, a specific set of visual features must be chosen. Discussion of the issues related to feature selection for visual servo control applications can be found in [18, 19]. The "right" image feature tracking method to use is extremely application dependent. For example, if the goal is to track a single special pattern or surface marking that is approximately planar and moving at slow to moderate speeds, then SSD tracking is appropriate. It does not require special image structure (e.g., straight lines), it can accommodate a large set of image distortions, and for small motions it can be implemented to run at frame rates. In comparison to the edge detection methods described above, SSD tracking is extremely sensitive to background changes or occlusions. Thus, if a task requires tracking several occluding contours of an object with a changing background, edge-based methods are clearly faster and more robust.

In many realistic cases, neither of these approaches by itself yields the robustness and performance desired. For example, tracking occluding edges in an extremely cluttered environment is sure to distract edge tracking as "better" edges invade the search window, while the changing background would ruin the SSD match for the region. Such situations call for the use of more global task constraints (e.g., the geometry of several edges), more global tracking (e.g., extended contours or snakes [80]), or improved or specialized detection methods.

To illustrate these tradeoffs, suppose a visual servoing task relies on tracking the image of a circular opening over time. In general, the opening will project to an ellipse in the camera. There are several candidate algorithms for detecting this ellipse and recovering its parameters:

1. If the contrast between the interior of the opening and the area around it is high, then binary thresholding followed by a calculation of the first and second central moments can be used to localize the feature [54].

2. If the ambient illumination changes greatly over time, but the brightness of the opening and the brightness of the surrounding region are roughly constant, a circular template could be localized using SSD methods augmented with brightness and contrast parameters. In this case, (59) must also include parameters for scaling and aspect ratio [70].

3. The opening could be selected in an initial image, and subsequently located using SSD methods. This differs from the previous method in that this calculation does not compute the center of the opening, only its correlation with the starting image. Although useful for servoing a camera to maintain the opening within the field of view, this approach is probably not useful for manipulation tasks that need to attain a position relative to the center of the opening.

4. If the contrast and background are changing, the opening could be tracked by performing edge detection and fitting an ellipse to the edge locations. In particular, short edge segments could be located using the techniques described in Section 6.1. Once the segments have been fit to an ellipse, the orientation and location of the segments would be adjusted for the subsequent tracking cycle using the geometry of the ellipse.

During task execution, other problems arise. The two most common problems are occlusion of features and visual singularities. Solutions to the former include intelligent observers that note the disappearance of features and continue to predict their locations based on dynamics and/or feedforward information [54], or redundant feature specifications that can perform even with some loss of information. Solutions to the latter require some combination of intelligent path planning and/or intelligent acquisition and focus-of-attention to maintain the controllability of the system.

It is probably safe to say that image processing presents the greatest challenge to general-purpose hand-eye coordination. As an effort to help overcome this obstacle, the methods described above and other related methods have been incorporated into a publicly available "toolkit." The interested reader is referred to [70] for details.

7 Related Issues
In this section, we briefly discuss a number of related issues that were not addressed in the tutorial.

7.1 Image-Based versus Position-Based Control


The taxonomy of visual servo systems introduced in Section 1 has four major architectural classes. Most systems that have been reported fall into the dynamic position- or image-based look-and-move structure. That is, they employ axis-level feedback, generally of position, for reasons outlined earlier. No reports of an implementation of the position-based direct visual servo structure are known to the authors. Weiss's proposed image-based direct visual servoing structure does away entirely with axis sensors: dynamics and kinematics are controlled adaptively based on visual feature data. This concept has a certain appeal, but in practice it is overly complex to implement and appears to lack robustness (see, e.g., [81] for an analysis of the effects of various image distortions on such control schemes). The concepts have only ever been demonstrated in simulation for up to 3-DOF, and then with simplistic models of axis dynamics which ignore `real world' effects such as Coulomb friction and stiction. Weiss showed that even when these simplifying assumptions were made, sample intervals of 3 ms were required. This would necessitate significant advances in sensor and processing technology, and the usefulness of controlling manipulator kinematics and dynamics this way must be open to question.

Many systems based on image-based and position-based architectures have been demonstrated, and the computational costs of the two approaches are comparable and readily achieved. The often-cited advantage of the image-based approach, reduced computational burden, is doubtful in practice. Many reports are based on using a constant image Jacobian, which is computationally efficient, but valid only over a small region of the task space. The general problem of Jacobian update remains, and in particular there is the difficulty that many image Jacobians are a function of target depth, z. This necessitates a partial pose estimation, which is the basis of the position-based approach. The cited computational disadvantages of the position-based approach have been ameliorated by recent research: photogrammetric solutions can now be computed in a few milliseconds, even using iteration.

7.2 Dynamic Issues in Closed-loop Systems


A visual servo system is a closed-loop discrete-time dynamical system. The sample rate is the rate at which images can be processed, and it is ultimately limited by the frame rate of the camera, though many reported systems operate at a sub-multiple of the camera frame rate due to limited computational ability. Negative feedback is applied to a plant which generally includes a significant time delay. The sources of this delay include charge integration time within the camera, serial pixel transport from the camera to the vision system, and computation time for feature parameter extraction. In addition, most reported visual servo systems employ a relatively low-bandwidth communications link between the vision system and the robot controller, which introduces further latency. Some robot controllers operate with a sample interval which is not related to the sample rate of the vision system, and this introduces still further delay. A good example of this is the common Unimate Puma robot, whose position loops operate at a sample interval of 14 or 28 ms, while vision systems operate at sample intervals of 33 or 40 ms for RS-170 or CCIR video respectively [27].

It is well known that a feedback system including delay will become unstable as the loop gain is increased. Many visual closed-loop systems are tuned empirically, increasing the loop gain until overshoot or oscillation becomes intolerable. While such closed-loop systems will generally converge to the desired pose with zero error, the same is not true when tracking a moving target. The tracking performance is a function of the closed-loop dynamics, and for simple proportional controllers it will exhibit a very significant time lag or phase delay. If the target motion is constant, then prediction (based upon some assumption of target motion) can be used to compensate for the latency, but combined with a low sample rate this results in poor disturbance rejection and long reaction time to target `maneuvers'. Predictors based on autoregressive models, Kalman filters, and α-β and α-β-γ tracking filters have been demonstrated for visual servoing.

In order for a visual servo system to provide good tracking performance for moving targets, considerable attention must be paid to modelling the dynamics of the robot and the vision system and to designing an appropriate control system. Other issues for consideration include whether or not the vision system should `close the loop' around robot axes which are position, velocity or torque controlled. A detailed discussion of these dynamic issues in visual servo systems is given by Corke [27, 82].
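The gain limit imposed by latency is easy to reproduce in simulation. The toy model below is ours and deliberately simplified (a pure integrator plant under a proportional law acting on a measurement that is delay_samples frames old); it converges for small gains and oscillates or diverges as the gain is raised:

    def simulate_delayed_loop(gain, delay_samples, steps=200):
        target, position = 1.0, 0.0
        history = [target - position] * (delay_samples + 1)   # oldest measurement first
        errors = []
        for _ in range(steps):
            position += gain * history[0]                     # act on the delayed error
            history = history[1:] + [target - position]
            errors.append(abs(target - position))
        return errors

    # With delay_samples = 2, a gain near 0.3 converges, while a gain near 1.0 diverges.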

7.3 Mobile robots


The discussion above has assumed that the moving camera is mounted on an arm-type robot manipulator. For mobile robots the pose of the robot is generally poorly known and can be estimated from the relative pose of known fixed objects or landmarks. Most of the techniques described above are directly applicable to the mobile robot case. Visual servoing can be used for navigation with respect to landmarks or obstacles and to control docking (see, e.g., [83]).

7.4 A Light-Weight Tracking and Servoing Environment


The design of many task-specific visual tracking and vision-based feedback systems used in visual servoing places a strong emphasis on system modularity and reconfigurability. This has motivated the development of a modular, software-based visual tracking system for experimental vision-based robotic applications [84]. The system design emphasizes flexibility and efficiency on standard scientific workstations and PCs. The system is intended to be a portable, inexpensive tool for rapid prototyping and experimentation for teaching and research. The system is written as a set of classes in C++. The use of object-oriented methods hides the details of how specific methods are implemented, and structures applications through a prespecified set of generic interfaces. It also enhances the portability of the system by supporting device abstraction. The current system runs on several framegrabbers, including models available from most common manufacturers. More information on the system and directions for retrieving it can be found at http://www.cs.yale.edu/HTML/YALE/CS/AI/VisionRobotics/YaleAI.html. The same design philosophy is currently being used to develop a complementary hand-eye coordination toolkit which is to be available in the near future.

7.5 Current Research Problems


There are many open problems in visual servo control, too numerous to describe here. These include control issues, such as adaptive visual servo control [14, 85], hybrid control (e.g., hybrid vision/position control [12], or hybrid force/vision control), and multi-rate system theory [86]; issues related to automatic planning of visually controlled robot motions [87, 88]; applications in mobile robotics, including nonholonomic systems [83]; and feature selection [18, 78]. Many of these are described in the proceedings of a recent workshop on visual servo control [89].

7.6 The future


The future for applications of visual servoing should be bright. Cameras are relatively inexpensive devices and the cost of image processing systems continues to fall. Most visual servo systems make use of cameras that conform to broadcast television standards, and this is now a significant limiting factor toward achieving high-performance visual servoing. Those standards were developed over 50 years ago with very specific design aims and to their credit still serve. Advances in digital image processing technology over the last decade have been rapid, and vision systems are now capable of processing far more pixels per second than a camera can provide. Breaking this bottleneck would allow the use of higher resolution images, higher frame rates, or both. The current push for HDTV will have useful spinoffs for higher resolution visual servoing. New standards (such as those promoted by the AIA) for digital output cameras are spurring the development of new cameras that are not tied to the frame rates and interlacing of the old broadcast standards.

Visual servoing also requires high-speed image processing and feature extraction. The increased performance and falling cost of computer vision systems, and computing systems in general, is a consequence of Moore's Law. This law, originally postulated in 1964, predicts that computer circuit density will double every year, and 30 years later it still holds true. Robust scene interpretation is perhaps the greatest current limitation to the wider application of visual servoing. Considerable progress is required if visual servo systems are to move out of environments lined with black velvet.

8 Conclusion
This paper has presented, for the first time, a tutorial introduction to robotic visual servo control. Since the topic spans many disciplines, we have concentrated on certain fundamental aspects of the topic. However, a large bibliography is provided to assist the reader who seeks greater detail than can be provided here. The tutorial covers, using consistent notation, the relevant fundamentals of coordinate transformations, pose representation, and image formation. Since no standards yet exist for terminology or symbols, we have attempted, in Section 2, to establish a consistent nomenclature. Where necessary we relate this to the notation used in the source papers. The two major approaches to visual servoing, image-based and position-based control, were discussed in detail in Sections 5 and 4. The topics have been discussed formally, using the notation established earlier, and illustrated with a number of realistic examples. An important part of any visual servo system is image feature parameter extraction. Section 6 discussed two broad approaches to this problem, with an emphasis on methods that have been found to function well in practice and that can be implemented without specialized image processing hardware. Section 7 presented a number of related issues that are relevant to image-based or position-based visual servo systems. These included closed-loop dynamics, the relative pros and cons of the different approaches, open problems, and the future.

References
1] Y. Shirai and H. Inoue, \Guiding a robot by visual feedback in assembling tasks," Pattern Recognition, vol. 5, pp. 99{108, 1973. 2] J. Hill and W. T. Park, \Real time control of a robot with a mobile camera," in Proc. 9th ISIR, (Washington, DC), pp. 233{246, Mar. 1979. 3] P. Corke, \Visual control of robot manipulators | a review," in Visual Servoing (K. Hashimoto, ed.), vol. 7 of Robotics and Automated Systems, pp. 1{31, World Scienti c, 1993. 4] A. C. Sanderson and L. E. Weiss, \Image-based visual servo control using relational graph error signals," Proc. IEEE, pp. 1074{1077, 1980. 5] J. C. Latombe, Robot Motion Planning. Boston: Kluwer Academic Publishers, 1991. 6] J. J. Craig, Introduction to Robotics. Menlo Park: Addison Wesley, second ed., 1986. 7] B. K. P. Horn, Robot Vision. MIT Press, Cambridge, MA, 1986. 8] W. Jang, K. Kim, M. Chung, and Z. Bien, \Concepts of augmented image space and transformed feature space for e cient visual servoing of an \eye-in-hand robot"," Robotica, vol. 9, pp. 203{212, 1991. 9] J. Feddema and O. Mitchell, \Vision-guided servoing with feature-based trajectory generation," IEEE Trans. Robot. Autom., vol. 5, pp. 691{700, Oct. 1989. 10] B. Espiau, F. Chaumette, and P. Rives, \A New Approach to Visual Servoing in Robotics," IEEE Transactions on Robotics and Automation, vol. 8, pp. 313{326, 1992. 11] M. L. Cyros, \Datacube at the space shuttle's launch pad," Datacube World Review, vol. 2, pp. 1{3, Sept. 1988. Datacube Inc., 4 Dearborn Road, Peabody, MA. 12] A. Castano and S. A. Hutchinson, \Visual compliance: Task-directed visual servo control," IEEE Transactions on Robotics and Automation, vol. 10, pp. 334{342, June 1994. 13] K. Hashimoto, T. Kimoto, T. Ebine, and H. Kimura, \Manipulator control with image-based visual servo," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 2267{2272, 1991. 14] N. P. Papanikolopoulos and P. K. Khosla, \Adaptive Robot Visual Tracking: Theory and Experiments," IEEE Transactions on Automatic Control, vol. 38, no. 3, pp. 429{445, 1993. 15] N. P. Papanikolopoulos, P. K. Khosla, and T. Kanade, \Visual Tracking of a Moving Target by a Camera Mounted on a Robot: A Combination of Vision and Control," IEEE Transactions on Robotics and Automation, vol. 9, no. 1, pp. 14{35, 1993. 37

16] S. Skaar, W. Brockman, and R. Hanson, \Camera-space manipulation," Int. J. Robot. Res., vol. 6, no. 4, pp. 20{32, 1987. 17] S. B. Skaar, W. H. Brockman, and W. S. Jang, \Three-Dimensional Camera Space Manipulation," International Journal of Robotics Research, vol. 9, no. 4, pp. 22{39, 1990. 18] J. T. Feddema, C. S. G. Lee, and O. R. Mitchell, \Weighted selection of image features for resolved rate visual feedback control," IEEE Trans. Robot. Autom., vol. 7, pp. 31{47, Feb. 1991. 19] A. C. Sanderson, L. E. Weiss, and C. P. Neuman, \Dynamic sensor-based control of robots with visual feedback," IEEE Trans. Robot. Autom., vol. RA-3, pp. 404{417, Oct. 1987. 20] R. L. Andersson, A Robot Ping-Pong Player. Experiment in Real-Time Intelligent Control. MIT Press, Cambridge, MA, 1988. 21] M. Lei and B. K. Ghosh, \Visually-Guided Robotic Motion Tracking," in Proc. Thirtieth Annual Allerton Conference on Communication, Control, and Computing, pp. 712{721, 1992. 22] B. Yoshimi and P. K. Allen, \Active, uncalibrated visual servoing," in Proc. IEEE International Conference on Robotics and Automation, (San Diego, CA), pp. 156{161, May 1994. 23] B. Nelson and P. K. Khosla, \Integrating Sensor Placement and Visual Tracking Strategies," in Proc. IEEE International Conference on Robotics and Automation, pp. 1351{1356, 1994. 24] I. E. Sutherland, \Three-dimensional data input by tablet," Proc. IEEE, vol. 62, pp. 453{461, Apr. 1974. 25] R. Tsai and R. Lenz, \A new technique for fully autonomous and e cient 3D robotics hand/eye calibra tion," IEEE Trans. Robot. Autom., vol. 5, pp. 345{358, June 1989. 26] R. Tsai, \A versatile camera calibration technique for high accuracy 3-D machine vision m etrology using o -the-shelf TV cameras and lenses," IEEE Trans. Robot. Autom., vol. 3, pp. 323{344, Aug. 1987. 27] P. I. Corke, High-Performance Visual Closed-Loop Robot Control. PhD thesis, University of Melbourne, Dept.Mechanical and Manufacturing Engineering, July 1994. 28] D. E. Whitney, \The mathematics of coordinated control of prosthetic arms and manipulators," Journal of Dynamic Systems, Measurement and Control, vol. 122, pp. 303{309, Dec. 1972. 29] S. Chieaverini, L. Sciavicco, and B. Siciliano, \Control of robotic systems through singularities," in Proc. Int. Workshop on Nonlinear and Adaptive Control: Issues i n Robotics (C. C. de Wit, ed.), Springer-Verlag, 1991. 30] S. Wijesoma, D. Wolfe, and R. Richards, \Eye-to-hand coordination for vision-guided robot control applications," International Journal of Robotics Research, vol. 12, no. 1, pp. 65{78, 1993. 31] N. Hollinghurst and R. Cipolla, \Uncalibrated stereo hand eye coordination," Image and Vision Computing, vol. 12, no. 3, pp. 187{192, 1994. 38

32] G. D. Hager, W.-C. Chang, and A. S. Morse, \Robot hand-eye coordination based on stereo vision," IEEE Control Systems Magazine, Feb. 1995. 33] C. Samson, M. Le Borgne, and B. Espiau, Robot Control: The Task Function Approach. Oxford, England: Clarendon Press, 1992. 34] G. Franklin, J. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems. AddisonWesley, 2nd ed., 1991. 35] G. D. Hager, \Six DOF visual control of relative position," DCS RR-1038, Yale University, New Haven, CT, June 1994. 36] T. S. Huang and A. N. Netravali, \Motion and structure from feature correspondences: A review," IEEE Proceeding, vol. 82, no. 2, pp. 252{268, 1994. 37] W. Wilson, \Visual servo control of robots using kalman lter estimates of robot pose relative to work-pieces," in Visual Servoing (K. Hashimoto, ed.), pp. 71{104, World Scienti c, 1994. 38] C. Fagerer, D. Dickmanns, and E. Dickmanns, \Visual grasping with long delay time of a free oating object in orbit," Autonomous Robots, vol. 1, no. 1, 1994. 39] C. Lu, E. J. Mjolsness, and G. D. Hager, \Online computation of exterior orientation with application to hand-eye calibration," DCS RR-1046, Yale University, New Haven, CT, Aug. 1994. To appear in Mathematical and Computer Modeling. 40] M. A. Fischler and R. C. Bolles, \Random sample consensus: a paradigm for model tting with applicatio ns to image analysis and automated cartography," Communications of the ACM, vol. 24, pp. 381{395, June 1981. 41] R. M. Haralick, C. Lee, K. Ottenberg, and M. Nolle, \Analysis and solutions of the three point perspective pose estimation problem," in Proc. IEEE Conf. Computer Vision Pat. Rec., pp. 592{598, 1991. 42] D. DeMenthon and L. S. Davis, \Exact and approximate solutions of the perspective-threepoint problem," IEEE Trans. Pat. Anal. Machine Intell., no. 11, pp. 1100{1105, 1992. 43] R. Horaud, B. Canio, and O. Leboullenx, \An analytic solution for the perspective 4-point problem," Computer Vis. Graphics. Image Process, no. 1, pp. 33{44, 1989. 44] M. Dhome, M. Richetin, J. Lapreste, and G. Rives, \Determination of the attitude of 3D objects from a single perspective view," IEEE Trans. Pat. Anal. Machine Intell., no. 12, pp. 1265{1278, 1989. 45] G. H. Rosen eld, \The problem of exterior orientation in photogrammetry," Photogrammetric Engineering, pp. 536{553, 1959. 46] D. G. Lowe, \Fitting parametrized three-dimensional models to images," IEEE Trans. Pat. Anal. Machine Intell., no. 5, pp. 441{450, 1991. 47] R. Goldberg, \Constrained pose re nement of parametric objects," Intl. J. Computer Vision, no. 2, pp. 181{211, 1994. 39

48] R. Kumar, \Robust methods for estimating pose and a sensitivity analysis," CVGIP: Image Understanding, no. 3, pp. 313{342, 1994. 49] S. Ganapathy, \Decomposition of transformation matrices for robot vision," Pattern Recognition Letters, pp. 401{412, 1989. 50] M. Fischler and R. C. Bolles, \Random sample consensus: A paradigm for model tting and automatic cartography," Commun. ACM, no. 6, pp. 381{395, 1981. 51] Y. Liu, T. S. Huang, and O. D. Faugeras, \Determination of camera location from 2-D to 3-D line and point correspondences," IEEE Trans. Pat. Anal. Machine Intell., no. 1, pp. 28{37, 1990. 52] A. Gelb, ed., Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974. 53] P. K. Allen, A. Timcenko, B. Yoshimi, and P. Michelman, \Automated Tracking and Grasping of a Moving Object with a Robotic Hand-Eye System," IEEE Transactions on Robotics and Automation, vol. 9, no. 2, pp. 152{165, 1993. 54] A. Rizzi and D. Koditschek, \An active visual estimator for dexterous manipulation," in Proceedings, IEEE International Conference on Robotics and Automaton, 1994. 55] J. Pretlove and G. Parker, \The development of a real-time stereo-vision system to aid robot guidance in carrying out a typical manufacturing task," in Proc. 22nd ISRR, (Detroit), pp. 21.1{21.23, 1991. 56] B. K. P. Horn, H. M. Hilden, and S. Negahdaripour, \Closed-form solution of absolute orientation using orthonomal matrices," J. Opt. Soc. Amer., vol. A-5, pp. 1127{1135, 198. 57] K. S. Arun, T. S. Huang, and S. D. Blostein, \Least-squares tting of two 3-D point sets," IEEE Trans. Pat. Anal. Machine Intell., vol. 9, pp. 698{700, 1987. 58] B. K. P. Horn, \Closed-form solution of absolute orientation using unit quaternion," J. Opt. Soc. Amer., vol. A-4, pp. 629{642, 1987. 59] G. D. Hager, G. Grunwald, and G. Hirzinger, \Feature-based visual servoing and its application to telerobotics," DCS RR-1010, Yale University, New Haven, CT, Jan. 1994. To appear at the 1994 IROS Conference. 60] G. Agin, \Calibration and use of a light stripe range sensor mounted on the hand of a robot," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 680{685, 1985. 61] S. Venkatesan and C. Archibald, \Realtime tracking in ve degrees of freedom using two wristmounted laser range nders," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 2004{ 2010, 1990. 62] J. Dietrich, G. Hirzinger, B. Gombert, and J. Schott, \On a uni ed concept for a new generation of light-weight robots," in Experimental Robotics 1 (V. Hayward and O. Khatib, eds.), vol. 139 of Lecture Notes in Control and Information Sciences, pp. 287{295, Springer-Verlag, 1989. 40

63] J. Aloimonos and D. P. Tsakiris, \On the mathematics of visual tracking," Image and Vision Computing, vol. 9, pp. 235{251, Aug. 1991. 64] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision. Addison Wesley, 1993. 65] F. W. Warner, Foundations of Di erentiable Manifolds and Lie Groups. New York: SpringerVerlag, 1983. 66] G. D. Hager, \Calibration-free visual control using projective invariance," DCS RR-1046, Yale University, New Haven, CT, Dec. 1994. To appear Proc. ICCV '95. 67] D. Kim, A. Rizzi, G. Hager, and D. Koditschek, \A \robust" convergent visual servoing system." Submitted to Intelligent Robots and Systems 1995, 1994. 68] W. Jang and Z. Bien, \Feature-based visual servoing of an eye-in-hand robot with improved tracking performance," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 2254{2260, 1991. 69] R. L. Anderson, \Dynamic sensing in a ping-pong playing robot," IEEE Transaction on Robotics and Automation, vol. 5, no. 6, pp. 723{739, 1989. 70] G. D. Hager, \The \X-Vision" system: A general purpose substrate for real-time vision-based robotics." Submitted to the 1995 Workshop on Vision for Robotics, Feb. 1995. 71] E. Dickmanns and V. Graefe, \Dynamic monocular machine vision," Machine Vision and Applications, vol. 1, pp. 223{240, 1988. 72] O. Faugeras, Three-Dimensional Computer Vision. Cambridge, MA: MIT Press, 1993. 73] J. Foley, A. van Dam, S. Feiner, and J. Hughes, Computer Graphics. Addison Wesley, 1993. 74] D. Ballard and C. Brown, Computer Vision. Englewood Cli s, NJ: Prentice-Hall, 1982. 75] J. Canny, \A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., pp. 679{98, Nov. 1986. 76] B. D. Lucas and T. Kanade, \An iterative image registration technique with an application to stereo vision," in Proc. International Joint Conference on Arti cial Intelligence, pp. 674{679, 1981. 77] P. Anandan, \A computational framework and an algorithm for the measurement of structure from motion," International Journal of Computer Vision, vol. 2, pp. 283{310, 1989. 78] J. Shi and C. Tomasi, \Good features to track," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593{600, IEEE Computer Society Press, 1994. 79] J. Huang and G. D. Hager, \Tracking tools for vision-based navigation," DCS RR-1046, Yale University, New Haven, CT, Dec. 1994. Submitted to IROS '95. 80] M. Kass, A. Witkin, and D. Terzopoulos, \Snakes: active contour models," International journal of Computer Vision, vol. 1, no. 1, pp. 321{331, 1987. 41

81] B. Bishop, S. A. Hutchinson, and M. W. Spong, \Camera modelling for visual servo control applications," Mathematical and Computer Modelling { Special issue on Modelling Issues in Visual Sensing. 82] P. Corke and M. Good, \Dynamic e ects in visual closed-loop systems," Submitted to IEEE Transactions on Robotics and Automation, 1995. 83] S. B. Skaar, Y. Yalda-Mooshabad, and W. H. Brockman, \Nonholonomic camera-space manipulation," IEEE Transactions on Robotics and Automation, vol. 8, pp. 464{479, Aug. 1992. 84] G. D. Hager, S. Puri, and K. Toyama, \A framework for real-time vision-based tracking using o -the-shelf hardware," DCS RR-988, Yale University, New Haven, CT, Sept. 1993. 85] A. C. Sanderson and L. E. Weiss, \Adaptive visual servo control of robots," in Robot Vision (A. Pugh, ed.), pp. 107{116, IFS, 1983. 86] N. Mahadevamurty, T.-C. Tsao, and S. Hutchinson, \Multi-rate analysis and design of visual feedback digital servo control systems," ASME Journal of Dynamic Systems, Measurement and Control, pp. 45{55, Mar. 1994. 87] R. Sharma and S. A. Hutchinson, \On the observability of robot motion under active camera control," in Proc. IEEE International Conference on Robotics and Automation, pp. 162{167, May 1994. 88] A. Fox and S. Hutchinson, \Exploiting visual constraints in the synthesis of uncertaintytolerant motion plans," IEEE Transactions on Robotics and Automation, vol. 11, pp. 56{71, 1995. 89] G. Hager and S. Hutchinson, eds., Proc. IEEE Workshop on Visual Servoing: Achievements, Applications and Open Problems. Inst. of Electrical and Electronics Eng., Inc., 1994.

