Robot Motion Planning For Map Building: Benjamín Tovar, Rafael Murrieta-Cid and Claudia Esteves
Abstract
The goal of this work is to develop techniques that allow one or more robotic observers to operate with full autonomy while accomplishing the task of model building. The planning algorithm operates using simple but flexible models of the observer's sensor and actuator abilities. We provide techniques that allow us to implement these sensor models on top of the capabilities of the actual (off-the-shelf) sensors we have. Three points are worth keeping in mind regarding our goals: 1) even with completely idealized sensing and mobility capabilities, the algorithmic task of model building is quite challenging; 2) computational techniques can be used to approximate and implement these idealized sensors on top of actual sensors; 3) the quality and success of the generated plans depend significantly on the observer's capabilities, and the study of this dependency in terms of high-level parameters describing the sensors (e.g., maximum sensing distance, viewing frustum) is part of this work.
Introduction
Previous Work
Our approach
The first step evaluates, for each candidate position, the possible size of the unexplored space, the energy needed, the distance to the nearest free edge, and the distance to the nearest known walls and objects. All these parameters are inferred from the current robot position (by computing the visibility region [18]), which means that the robot does not have to travel to every candidate position. The length of the free edges is used to estimate the size of the unexplored space. A very simple uncertainty model is used, in which the uncertainty grows proportionally to the square root of the distance to travel. This model also includes a term accounting for the rotation cost, so that traveling straight is preferred over rotating.
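The uncertainty-plus-rotation cost described above can be sketched as a small Python function; the weighting constants and names are illustrative, not taken from the paper:

```python
import math

def motion_cost(travel_dist, rotation_angle, k_trans=1.0, k_rot=0.5):
    """Sketch of the simple cost model: positional uncertainty grows
    with the square root of the distance traveled, and a rotation term
    penalizes turning so that straight-line travel is preferred.
    k_trans and k_rot are illustrative weights."""
    uncertainty = k_trans * math.sqrt(travel_dist)
    rotation_cost = k_rot * abs(rotation_angle)
    return uncertainty + rotation_cost
```

Under this model, two candidate positions at the same distance are ranked by how much the robot must turn to reach them.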
In the second step, the robot actually moves to the m best-evaluated positions and selects the best one, taking perceptual information into account. Put simply, it is as if the robot took a glimpse from the threshold of several doors and then explored the room it likes best. The robot stops exploring when there are no free edges left to visit.
The perceptual information is composed of the identification probabilities of the objects perceived by the robot and of particular features, such as corners, that allow matching against the areas already explored. The utility function is constructed so that it integrates all these features, and the next position that maximizes it is kept. The function prefers positions that combine proximity to the robot, proximity to a free edge, a small uncertainty in the robot position, a high object-identification probability, and the ability to see features such as corners. Positions near walls and objects are discarded because many sensors become blind when objects are very close.
The function is composed of terms representing each of the measures mentioned above. We propose a utility function with a multiplicative form: a position with a very low value in at least one of these measures is discarded even if it scores very well on the others. For instance, a position very close to a free edge (with a great chance of discovering new space) must be discarded if the robot has no information with which to integrate this new area into the explored space.
The utility function T is given by the following expression [19]:

    T = ( e^{-s_v} · (1/(s+1)) · Σ_{j=1}^{n} p_j + e^{-l_v} · N_e ) · f_min(d_l)    (1)

where:

    s      distance from the robot to the evaluated position
    s_v    uncertainty of the robot position at the evaluated position
    l_v    distance from the evaluated position to the nearest free edge
    p_j    identification probability of landmark j
    n      number of landmarks visible from the evaluated position
    N_e    length of the visible free edges (estimate of the unexplored space)
    d_l    distance to the nearest object
    f_min  threshold function of the distance to the nearest object (Figure 1)
dl
For f_min we choose a function like the one shown in Figure 1. Before a distance threshold t the function takes a low value, discriminating against positions near objects; after the threshold it takes the value 1, so that the value of T depends on the other parameters.
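A minimal sketch of a multiplicative utility of this kind, including the threshold factor f_min, might look as follows; the constants, exact exponential arguments, and names are illustrative stand-ins for the paper's formulation:

```python
import math

def f_min(d_l, t=0.5, low=0.1):
    """Distance-to-object factor: a low value before the threshold t
    (penalizing positions too close to objects), 1.0 afterwards."""
    return low if d_l < t else 1.0

def utility(s, s_v, l_v, p, n_e, d_l):
    """Multiplicative utility: any near-zero factor discards the
    position even if the other measures are good.
    s   : distance from the robot to the candidate position
    s_v : position-uncertainty estimate at the candidate
    l_v : distance from the candidate to the nearest free edge
    p   : identification probabilities of the visible landmarks
    n_e : length of visible free edges (unexplored-space estimate)
    d_l : distance to the nearest object"""
    landmark_term = math.exp(-s_v) * sum(p) / (s + 1.0)
    frontier_term = math.exp(-l_v) * n_e
    return (landmark_term + frontier_term) * f_min(d_l)
```

Because the f_min factor multiplies the whole expression, a candidate standing too close to an object scores poorly no matter how promising its free edges and landmarks are.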
Here p_j is the identification probability of a landmark. A landmark is defined as a remarkable object; it should have properties that distinguish it from other objects, namely:

Discrimination. A landmark should be easy to differentiate from other surrounding objects.

Accuracy. A landmark must be useful for reducing the uncertainty of the robot position.
3.1 Landmark identification
Landmarks in indoor environments are often structures such as corners, doors, columns, posters, etc. To compute the distance between a landmark and the robot using a single camera, we assume that the landmark size is known. Our landmarks are posters, doors, and columns; assuming that the size of these objects is known a priori for a given environment is not unrealistic. An object is labeled as a landmark if and only if its object-identification probability is above a given threshold.
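With a known landmark size, the distance estimate from a single calibrated camera follows the standard pinhole model; this one-liner is a generic sketch (parameter names are illustrative), not the paper's exact procedure:

```python
def landmark_distance(focal_px, real_size_m, image_size_px):
    """Pinhole-camera distance estimate for a landmark of known
    physical size: an object of real size H (meters) that appears
    h pixels tall in a camera of focal length f (pixels) lies at
    distance Z = f * H / h."""
    return focal_px * real_size_m / image_size_px
```

For example, a 2 m door imaged 100 pixels tall by a camera with a 500-pixel focal length is estimated to be 10 m away.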
Line-fitting
Model-matching
where

    h(P, Q) = max_{p ∈ P} min_{q ∈ Q} ||p − q||    (2)

and the symmetric Hausdorff distance is

    H(P, Q) = max( h(P, Q), h(Q, P) )    (3)
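The directed distance of Eq. (2) and its symmetric companion can be computed directly for small point sets:

```python
import math

def directed_hausdorff(P, Q):
    """h(P, Q) = max over p in P of min over q in Q of ||p - q||,
    as in Eq. (2). P and Q are sequences of point tuples."""
    return max(min(math.dist(p, q) for q in Q) for p in P)

def hausdorff(P, Q):
    """Symmetric Hausdorff distance: max(h(P, Q), h(Q, P))."""
    return max(directed_hausdorff(P, Q), directed_hausdorff(Q, P))
```

Note that h is not symmetric: every point of P must be near some point of Q for h(P, Q) to be small, but not vice versa, which is why the symmetric form takes the maximum of both directions.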
EXPLORATION(R, W)
1. P_n ← TAKE_PERCEPTION(R, W)
2. M.add(P_n)
3. U_f ← ID_UNXP_AREAS(M)
4. if U_f ≠ ∅ then
   (a) [x, y, θ] ← SELECT_NEW_POS(U_f, R, M)
   (b) if NOVALID(x, y, θ) then
       i. Return M, NOK
   (c) R.move(x, y, θ)
   (d) goto 1
5. else Return M, OK

SELECT_NEW_POS(U_f, R, M)
1. for each u_f ∈ U_f do
   (a) l_n ← RDN_SAMPLE(n, u_f, R, M)
   (b) L_q ← L_q ∪ l_n
2. q_nbv ← MAXIMIZE(T, L_q)
3. Return q_nbv
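The SELECT_NEW_POS step admits a compact Python sketch; here `sampler` and `utility` are caller-supplied stand-ins for RDN_SAMPLE and the utility function T, and the signature is illustrative:

```python
def select_new_pos(free_edges, sampler, utility, n=10):
    """Draw n candidate configurations near each free edge, pool
    them (L_q <- L_q U l_n), and return the candidate maximizing
    the utility function (q_nbv). Returns None when no candidate
    is available, mirroring the NOVALID branch of EXPLORATION."""
    candidates = []
    for edge in free_edges:
        candidates.extend(sampler(n, edge))
    if not candidates:
        return None
    return max(candidates, key=utility)
```

The exploration loop then alternates perception, free-edge identification, and this selection until no free edge remains.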
We do not deal with simultaneous robot motion: only one robot moves at a given time, while the other robots remain motionless. A state S_k(V(q_k)) changes to S_{k+1}(V(q_k)) when robot k has reached the location q_goal_k.
Figure 3:
Figure 4:
Figure 6: Environment map at time t
Figure 7: Environment map at time t + 2
Figure 8: Laser data
Figure 10:
Figure 11:
Figure 12:
Figure 13:
Figure 14:
Figure 15:
Our approach improves on previous methods because the robot plans its motions so that its localization uncertainty is minimized through data registration and landmark information. At the same time, the motion strategy takes into account that the robot must move to discover unexplored regions of the environment while minimizing energy consumption. The proposed motion strategy yields fast and reliable map building.
As future research, we will complete our global architecture and perform multi-robot map-building experiments. The representation of the environment proposed here is useful for several robotics tasks, especially visibility-based ones such as target tracking and target finding. We believe that the proposed model can be used to select good locations from which to perform 3D sensing operations. A 3D model of the environment is an intermediate goal toward other tasks (object manipulation, assembly, etc.); this 3D modeling is one of our possible future research directions.
Acknowledgments
The authors thank Héctor González-Baños for his contribution to the ideas presented in this paper. This work was funded by CONACyT project J34670-A and by the ITESM Campus Ciudad de México.
References
[1] N. Ayache and O.D. Faugeras. Building, registrating, and fusing noisy visual maps. International Journal of Robotics Research, 7(6):45-65, 1988.
[2] B. Kuipers, R. Froom, W.K. Lee, and D. Pierce. The semantic hierarchy in robot learning. In IEEE Int. Conf. on Robotics and Automation, 1993.
[3] R. Chatila and J.P. Laumond. Position referencing and consistent world modeling for mobile robots. In IEEE Int. Conf. on Robotics and Automation, 1985.
[4] H. Choset and J. Burdick. Sensor Based Motion Planning: The Hierarchical Generalized Voronoi Diagram. In Algorithms for Robotic Motion and Manipulation, J.P. Laumond and M. Overmars (eds.), A K Peters, Wellesley (MA), 46-61, 1997.
[5] J.L. Crowley. World modeling and position estimation for a mobile robot using ultrasonic ranging. In IEEE Int. Conf. on Robotics and Automation, 1989.
[6] H. Bulata and M. Devy. Incremental Construction of a Landmark-based and Topological Model of Indoor Environments by a Mobile Robot. In IEEE Int. Conf. on Robotics and Automation, 1996.
[7] A. Elfes. Sonar-based real world mapping and navigation. IEEE Journal on Robotics and Automation, 3(3):249-264, 1987.
[8] L.J. Guibas, J.-C. Latombe, S.M. LaValle, D. Lin, and R. Motwani. Visibility-based pursuit-evasion in a