Visual Robot Control

This document summarizes a research paper presented at the 30th Annual Conference of the IEEE Industrial Electronics Society in November 2004 in Busan, Korea. The paper introduces a new visual feedback control method for guiding a mobile robot using the vanishing point of parallel lines in a corridor. This method does not require any prior information other than the width of the corridor and robot. It is based on projective geometry concepts like perspective projection, epipolar geometry, and vanishing points. The proposed method can be used to easily design a visual feedback system to guide the robot and help it avoid obstacles while navigating indoor corridors.


The 30th Annual Conference of the IEEE Industrial Electronics Society, November 2-6, 2004, Busan, Korea

New Visual Feedback Control Design about Guidance of a Mobile Robot Using Vanishing Point

Shigemi Uchikado, Sun Lili and Midori Nagayoshi
Information & Sciences Department, Tokyo Denki University, Ishizaka, Hatoyama-machi, Hiki-gun, Saitama 350-0394, Japan
e-mail: [email protected], 03smi [email protected], 04smi [email protected]

Abstract— We consider a problem of navigating a mobile robot with a camera in an indoor environment. A new visual control method using the vanishing point of the parallel lines at both sides of the corridor is introduced, and we call this the vanishing point visual control method. This method gives a lot of useful information for the design. Therefore we need no a priori information except the widths of the corridor and the robot, and we can easily design the visual feedback system for guidance and obstacle avoidance by using this method. The idea is based on projective geometry, namely perspective projection, epipolar geometry, vanishing points, and projective transformation.

I. INTRODUCTION

Two typical methods [1][2] using one camera have been proposed. One uses fluorescent tubes as natural landmarks, and also needs a map of the lights provided in advance. The other uses the wall-following scheme and a special omni-directional sensor for moving along a corridor.

The problem considered in this paper is to design guidance in a corridor and at a corner, and obstacle avoidance. To achieve these, we propose a new method called the vanishing point control method. The method is based on important concepts of projective geometry, and a visual control system of the robot can be designed easily using it.

Fig. 1 Four Coordinate Systems

First of all, we present projective geometry concepts such as perspective projection, epipolar geometry, and the vanishing point. By using a projective transformation between the image planes of the onboard camera and an imaginary camera, the epipoles of these image planes indicate the robot's translation direction. Hence we can easily turn the robot's translation direction to an arbitrary direction. The design consists of three methods: methods to guide the robot in the corridor and at a corner, and a method to avoid an obstacle. Here a new visual control method using the vanishing point of the parallel lines at both sides of the corridor is introduced, and we call this the vanishing point visual control method. This method gives a lot of useful information for the design. We need no a priori information except the widths of the corridor and the robot, and we can easily design the visual feedback system for guidance and obstacle avoidance by using this method.

II. PROJECTIVE GEOMETRY

A. Coordinate Systems

Here we consider three coordinate systems, that is, the world, robot, and camera coordinate systems, with the subscripts "W", "R" and "C", respectively; these also denote their current positions. A Cartesian "xy" coordinate system on the normalized image plane, called the image coordinate system, is taken in such a way that the x- and y-axes are parallel to the X_C- and Y_C-axes, respectively.

Fig. 2 Perspective Projection of Pinhole Camera Model

The robot and the camera coordinate systems are related by a rotation and a translation: R_off is the 3 × 3 rotation matrix, T_off is the 3 × 1 translation vector, and (R_off, T_off) are the offset values between the robot and the camera coordinate systems.

B. Pinhole Camera Model

We use a pinhole camera, shown in Fig. 2, as the simplest ideal model of camera function.

C. Perspective Projection

Suppose that the points P_C and x_C = (x, y) are in the camera and normalized image coordinate systems, respectively. Let a camera C' be obtained by rotating and translating the camera C by R and T. The relation between P_C and P'_C can be described as

    P'_C = R·P_C + T    (1)

Fig. 3 The Camera C' rotated and translated by R and T

where (R, T) are called the extrinsic parameters and Π, Π' are the normalized image planes of the cameras C and C', respectively. Then the normalized image coordinates (x, y) are related to the camera coordinates (X_C, Y_C, Z_C) by

    x = f·X_C / Z_C ,   y = f·Y_C / Z_C    (2)

In homogeneous coordinates, equations (1) and (2) with f = 1 become

    λ·x̃_C = (I | 0)·P̃_C    (3)
    λ'·x̃'_C = (I | 0)·P̃'_C = (R | T)·P̃_C    (4)

where x̃_C, x̃'_C and P̃_C are homogeneous coordinates, and λ, λ' ∈ R.

Now the following 3 × 3 upper triangular matrix A, containing the intrinsic parameters of the camera, is introduced [4][5]. Then we can transform the normalized image coordinates into pixel coordinates by

    m̃_C = A·x̃_C ,   m̃'_C = A·x̃'_C    (5)

where m̃_C = (u, v, 1)^T and m̃'_C = (u', v', 1)^T are the homogeneous coordinates of the points (u, v) and (u', v') in the pixel image coordinate systems corresponding to the normalized image systems, respectively. These yield the following equations:

    λ·m̃_C = A·(I | 0)·P̃_C    (6)
    λ'·m̃'_C = A·(R | T)·P̃_C    (7)

D. Epipolar Geometry

The epipolar constraint says that the three points, that is, the centers of the camera C and the imaginary camera C', and the point P_C in the camera coordinate system, are all coplanar [4][5]. Then the constraint can be expressed simply as

    x̃'_C^T·(T × R·x̃_C) = 0    (8)

Fig. 4 Epipolar Plane

By defining [T]_× as the matrix such that [T]_×·y = T × y for any vector y, we can rewrite the above equation as a linear equation:

    x̃'_C^T·E·x̃_C = 0    (9)

where E = [T]_×·R is called the Essential matrix [4][5]. Moreover, using equations (5), it follows that

    m̃'_C^T·F·m̃_C = 0    (10)

where F = A^{-T}·[T]_×·R·A^{-1} is the Fundamental matrix [4][5]. Furthermore, from equations (6) and (7), we have

    λ'·m̃'_C = λ·A·R·A^{-1}·m̃_C + η·A·T ,   η ∈ R    (11)
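As a numerical sanity check of Eqs. (1)-(10), the projection chain and the epipolar constraint can be exercised in a few lines. The values of A, R and T below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def skew(t):
    """[T]_x such that skew(t) @ y == np.cross(t, y)."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

# Assumed intrinsic and extrinsic parameters (illustrative only)
A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
th = 0.05
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
T = np.array([0.3, 0., 0.1])

P_C = np.array([0.5, -0.2, 2.0])            # a point in camera C's frame
P_Cp = R @ P_C + T                          # Eq. (1): same point in camera C'

x = np.append(P_C[:2] / P_C[2], 1.0)        # Eqs. (2)-(3): normalized coords, f = 1
xp = np.append(P_Cp[:2] / P_Cp[2], 1.0)     # Eq. (4)
m, mp = A @ x, A @ xp                       # Eq. (5): pixel coordinates

E = skew(T) @ R                             # Eq. (9): Essential matrix
F = np.linalg.inv(A).T @ E @ np.linalg.inv(A)   # Eq. (10): Fundamental matrix

print(xp @ (E @ x))   # ~0: epipolar constraint, Eqs. (8)-(9)
print(mp @ (F @ m))   # ~0: epipolar constraint, Eq. (10)
```

Both residuals vanish up to floating-point error for any point and any (R, T), since (R·P_C + T)·(T × R·P_C) = 0 identically.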
Now define the points z̃ and z̃' as

    z̃ = A·R^T·T    (13)
    z̃' = A·T    (14)

Then these points satisfy the following equations:

    F·z̃ = 0    (15)
    F^T·z̃' = 0    (16)

Hence z̃ and z̃' are called the epipoles.

E. Vanishing Point

Now consider two parallel lines in the world, written as

    X̃_j(ν_j) = X̃_j + ν_j·ṽ ,   ν_j ∈ R ,   ṽ = (v^T, 0)^T ∈ R^{4×1} ,   j = 1, 2    (17)

The common image v_p of the direction ṽ is called the vanishing point [4][5].

F. Projective Transformation

First of all, since the rank of F is 2, we have the following Singular Value Decomposition of the fundamental matrix [7]:

    F = U·D·V^T    (18)

where U, V are orthogonal matrices with U·U^T = V·V^T = I (the unit matrix), and

    D = [r 0 0; 0 s 0; 0 0 0] ,   0 < r, s ∈ R : the singular values of F.

Furthermore we can decompose F by means of the following projective transformation:

    x̃_p = F_m·x̃    (19)
    x̃'_p = F'_m·x̃'    (20)

where

    F_m = [0 1 0; -1 0 0; 0 0 1]·V^T ,   F'_m = [r 0 0; 0 s 0; 0 0 1]·U^T    (21)

Then a new fundamental matrix F_p is given by

    F_p = [0 -1 0; 1 0 0; 0 0 0]    (22)

so that F = F'_m^T·F_p·F_m. Consequently, this projective transformation results in the parallel camera configuration with respect to the z-axis. That is, using Eqs. (19) and (20), corresponding points are on corresponding epipolar lines.

III. ROBOT MODEL

Here we consider a simple car, shown in figure 5, as the mobile robot [6]. Suppose that the camera can be moved only vertically. However, we can also horizontally turn the camera in any direction by rotating the car. Hence we can set the camera to an arbitrary pose. The robot can move only in the yz plane, and it can be modeled as

    ż(t) = v(t)·cos θ(t)
    ẏ(t) = −v(t)·sin θ(t)    (23)
    θ̇(t) = ω(t)

where

    v(t) = (v_R(t) + v_L(t)) / 2
    ω(t) = (v_R(t) − v_L(t)) / 2d    (24)

Fig. 5 Car Body

Here v_R and v_L are the velocities of the car's right and left wheels, 2d is its body width, ω(t) its angular velocity, θ(t) the angle of its translation direction, P a target point, and (y, z) its position on the yz plane. Now consider a case in which |T_off| is very small. The epipolar geometry between a point P_W, the camera C, and an imaginary camera C_R located at the robot's center then becomes

    λ_R·m̃_R = λ·A·R_off·A^{-1}·m̃_C    (25)

where the homogeneous coordinates x̃_R and m̃_R are points on the normalized image plane of the imaginary camera C_R and on its pixel image plane, respectively. We can identify the intrinsic matrix A using the projective transformation. Defining a transformed plane formed with m̃_R, we have the following.
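The epipoles of Eqs. (13)-(16) and the rectifying decomposition of Eqs. (18)-(22) can be verified numerically: the epipoles fall out of the SVD as the null vectors of F and F^T, and F factors exactly through the parallel-configuration matrix F_p. Again, A, R and T are assumed illustrative values:

```python
import numpy as np

def skew(t):
    """[T]_x such that skew(t) @ y == np.cross(t, y)."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

# Fundamental matrix built from assumed parameters (Eq. 10), for illustration
A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
th = 0.05
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
T = np.array([0.3, 0., 0.1])
Ainv = np.linalg.inv(A)
F = Ainv.T @ skew(T) @ R @ Ainv

# Eq. (18): SVD of the rank-2 matrix F
U, D, Vt = np.linalg.svd(F)
r, s = D[0], D[1]              # the two nonzero singular values

# Eqs. (15)-(16): the epipoles are the null vectors of F and F^T
z = Vt[-1]                     # F z   = 0
zp = U[:, -1]                  # F^T z' = 0

# Eq. (13): z is collinear with A R^T T
e = A @ R.T @ T
print(np.allclose(np.cross(z, e / np.linalg.norm(e)), 0, atol=1e-8))

# Eqs. (19)-(22): rectifying projective transformation
F_m = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]]) @ Vt
F_mp = np.diag([r, s, 1.]) @ U.T
F_p = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
print(np.allclose(F_mp.T @ F_p @ F_m, F))   # True: F = F'_m^T F_p F_m
```

The final check holds because diag(r, s, 1)·F_p·[0 1 0; -1 0 0; 0 0 1] = diag(r, s, 0), which reproduces the SVD of Eq. (18) exactly.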
The center of the transformed plane is the translation direction of the robot [7]. So we can easily set the car to an arbitrary translation direction by turning the car and the camera C.

Fig. 6 The Transformed Plane

IV. STATEMENT OF THE PROBLEM

Our research deals with a mobile robot that is used in indoor environments such as offices and hospitals. Hence the space in which the robot moves has various restrictions, as shown in figure 7; that is, there are corridors, corners, and obstacles such as flowerpots and people.

So suppose the following. The car's weight can be ignored, and the intrinsic matrix A has already been identified. The offset values (R_off, T_off) are known, and |T_off| is small. Furthermore, both widths of the car and the corridor, 2d and d_c, are known, and both sides of the corridor are straight parallel lines that are identified on the image plane.

Fig. 7 Indoor Figure of the Building considered in this Paper

Given a navigation course shown by the dashed line in figure 7, the aim of our research is to design safe guidance of the robot from the start to the destination by using only the current image taken with the camera C. In order to achieve this, we have to design the following.

Guidance: how to guide the car in a corridor and at a corner.
Control: how to avoid an obstacle.

V. DESIGN METHOD OF THE SAFE GUIDANCE

A. Method to guide the robot in the corridor

Now we propose a vanishing point visual control method (VPVCM) for guiding the robot in a corridor. The VPVCM uses the vanishing point of the parallel lines at both sides of the corridor, as shown in figure 8: if the vanishing point v_p on the normalized image plane coincides with the epipole, then the robot goes straight toward the point v_p along the dashed line in figure 8, parallel to the parallel lines at both sides of the corridor. This method is similar to the wall-following scheme [2], which needs a special omnidirectional sensor.

Fig. 8 The Vanishing Point Control Method

In this case, the robot is a controlled system and is controlled as follows:

    Reference output:  r = θ(v_p)
    Output error:      e = r − θ(P)
    Controller:        Δv_R = k₁·e ,  Δv_L = −k₁·e
                       v_R = v_RC + Δv_R ,  v_L = v_LC + Δv_L

where v_R, v_L are the inputs of the controlled system, θ(P) is its output, P is a point on the image plane denoting the robot's translation direction, k₁ > 0 is an arbitrary constant control gain, and v_RC, v_LC are the same constant normal speeds. The control system for the robot is shown in figure 9.
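The closed loop above can be checked with a short forward-Euler simulation of the kinematic model (23)-(24). The gain k₁, nominal speed, half-width d, time step, and the differential sign Δv_L = −k₁e are assumed illustrative values (the scan leaves the sign of Δv_L ambiguous), so this is a sketch of the controller's behavior, not the paper's experiment:

```python
import math

def simulate_vpvcm(theta0, r_ref, k1=2.0, v_nom=0.5, d=0.2, dt=0.01, steps=500):
    """Forward-Euler simulation of the unicycle model (23)-(24) driven by
    the VPVCM controller: e = r - theta, dv_R = +k1*e, dv_L = -k1*e."""
    y, z, theta = 0.0, 0.0, theta0
    for _ in range(steps):
        e = r_ref - theta                  # output error e = r - theta(P)
        v_R = v_nom + k1 * e               # right-wheel speed
        v_L = v_nom - k1 * e               # left-wheel speed
        v = (v_R + v_L) / 2.0              # Eq. (24): translation speed
        w = (v_R - v_L) / (2.0 * d)        # Eq. (24): angular velocity
        z += v * math.cos(theta) * dt      # Eq. (23)
        y += -v * math.sin(theta) * dt     # Eq. (23)
        theta += w * dt                    # Eq. (23)
    return theta

# Starting 0.3 rad off the corridor axis, the heading converges to the
# vanishing-point reference r = 0 (prints True).
print(abs(simulate_vpvcm(theta0=0.3, r_ref=0.0)) < 1e-3)
```

With these values the heading error contracts by a factor 1 − k₁·dt/d = 0.9 per step, so the robot settles onto the corridor axis within a few seconds of simulated time.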
Fig. 9 The Control System for the Robot

That is, the robot can be easily guided and translated in the corridor by keeping a constant distance d from the wall.

(1) Identification of the Vanishing Point of the Corridor

Many methods [8][9] to identify the parallel lines of the corridor have been proposed, and consequently the vanishing point can be easily calculated from these identified lines.

(2) Determination of a Translation Lane in the Image Plane

It is difficult to determine the translation lane when one camera is used. However, it can easily be done if the property of the vanishing point is used. Now suppose that d, d_c and the offset values are known. Then the translation lane is determined so that the following relationship between the parallel lines and the lane is satisfied on the transformed plane, as shown in figure 10:

    d / d_c = |y₂ − y₁| / |y₃ − y₁|

where p₁ = (x₁, y₁), p₂ = (x₂, y₂), p₃ = (x₃, y₃) and x₁ = x₂ = x₃.

Fig. 10 The Figure Explaining the Determination of the Lane

B. Method to avoid an obstacle

Three ways in which the robot avoids bumping against an obstacle in front of it are considered: keeping going, changing lanes, and stopping before bumping. The proposed methods are illustrated in figure 11: (1) not changing lanes, (2) changing lanes, (3) stopping there.

Fig. 11 The Idea about the three Methods

Here d₁ = 2d·d_r / d_c. If there is a space d₁ on the left, then the robot moves without changing lanes. If there is a space on the right but not on the left, then the robot changes lanes onto the right side. Furthermore, if d_c − d_o ≤ d₁, then the robot stops there. However, there remains the problem of how to change lanes.
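The three-way rule above can be sketched as a small decision function. The free-space arguments and the use of d₁ as the required lane width are our paraphrase of figure 11 (the scan of the original d₁ condition is only partially legible), so treat this as an illustrative sketch rather than the paper's exact rule:

```python
def avoidance_decision(free_left, free_right, d1):
    """Three-way obstacle rule of Fig. 11 (paraphrased):
    (1) keep the current lane if the required lane width d1 fits on the left,
    (2) change to the right lane if it only fits on the right,
    (3) otherwise stop before the obstacle."""
    if free_left >= d1:
        return "not changing lanes"
    if free_right >= d1:
        return "changing lanes"
    return "stopping there"

# d1 = 2d * d_r / d_c with assumed numbers: robot width 2d = 0.4,
# corridor width d_c = 1.8, and the OCR-illegible factor d_r taken as 1.
d1 = (0.4 * 1.0) / 1.8
print(avoidance_decision(0.5, 0.1, d1))   # "not changing lanes"
```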

Step 1: Translation to P₅
The reference output is set to θ(P₅), and the point P₅ is set so that the following is satisfied, where Δp = |p₄ − P₄| and only the case d(Δp)/dt > 0 is considered. This means that the control point P₅ is set in proportion to the relative speed of the obstacle, which is calculated from the rate at which the obstacle grows larger in the image. This step ends when a space for the translation lane of the robot is formed on the right side.

Step 2: VPVCM on the right side
The robot is controlled again, on the right side, by using the VPVCM as in the method proposed in Section V-A.

Step 3: VPVCM on the left side
After the robot passes the obstacle and a constant time passes, the robot will again be controlled by Step 1 so that it can be guided on the left side. A control point like P₅ can be arbitrarily set in this case.

Fig. 12 An Image of the Camera on the Robot near a Corner

C. Method to guide the robot at a corner

Figure 12 is an image of the camera on the robot as it moves near a corner. It is noticed from the left line of the corridor in figure 12 that the corridor curves to the left at the corner. So the problem is how to control the robot to the left at the corner. All significant information in figure 12 is lost during the turn, and we cannot use it for guidance. So we consider a simple guidance of the robot that uses only the point P₆ and the distance d₂. Namely, if the point P₆ vanishes from the image plane and then a constant time passes, the robot will begin to turn to the left by the following open loop control law until the next vanishing point appears:

    Δv_R = k₃·(1/d₂) ,   Δv_L = 0

where k₃ > 0 is an arbitrary design parameter; the robot turns at a speed which is inversely proportional to the distance d₂.

VI. CONCLUSIONS

In this paper we proposed a new visual control method for navigation of a mobile robot in an indoor environment. The method uses the vanishing point of the parallel lines at both sides of the corridor, and we call it the vanishing point visual control method. The vanishing point visual control method gives a lot of useful information for the design. By using the method, we can easily design guidance in a corridor and at a corner, and obstacle avoidance. We also show a visual open loop control system for guidance at a corner. Finally, a numerical simulation was performed in order to investigate the effect of the proposed method on a simple example, and good results were obtained.

VII. REFERENCES

[1] F. Launay, A. Ohya and S. Yuta, "Vision-Based Navigation of Mobile Robot using Fluorescent Tubes", IEEE/RSJ International Conference on Intelligent Robots and Systems, Kagawa, Japan, 2000.
[2] A. K. Das, R. Fierro, V. Kumar, B. Southall, J. Spletzer and C. J. Taylor, "Real-Time Vision-Based Control of a Nonholonomic Mobile Robot", The 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea, 2001.
[3] K. S. Roh, W. H. Lee and I. S. Kweon, "Obstacle Detection and Self-Localization without Camera Calibration Using Projective Invariants", IEEE/RSJ International Conference on Intelligent Robots and Systems, 1997.
[4] K. Kanatani, "Group-Theoretical Methods in Image Understanding", Springer-Verlag, Berlin, 1990.
[5] Z. Zhang and O. Faugeras, "3D Dynamic Scene Analysis", Springer Series in Information Sciences, Vol. 27, Springer-Verlag, Berlin, 1992.
[6] F. Miyazaki, Y. Masutani and A. Nishikawa, "Introduction to Robotics, Chapter 6", Kyoritsu Press, 2000.
[7] R. I. Hartley, "Kruppa's Equations Derived from the Fundamental Matrix", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 2, pp. 133-135, 1997.
[8] P. V. C. Hough, "Method and means for recognizing complex patterns", U.S. Patent 3,069,654, 1962.
[9] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures", Comm. ACM, Vol. 15, No. 1, pp. 11-15, 1972.
