Towards Lifelong Visual Maps

Kurt Konolige and James Bowman


Willow Garage
Menlo Park, CA
konolige,[email protected]

Abstract— The typical SLAM mapping system assumes a static environment and constructs a map that is then used without regard for ongoing changes. Most SLAM systems, such as FastSLAM, also require a single connected run to create a map. In this paper we present a system of visual mapping, using only input from a stereo camera, that continually updates an optimized metric map in large indoor spaces with movable objects: people, furniture, partitions, etc. The system can be stopped and restarted at arbitrary disconnected points, is robust to occlusion and localization failures, and efficiently maintains alternative views of a dynamic environment. It operates completely online at a 30 Hz frame rate.
I. INTRODUCTION

A mobile robot existing in a space over a period of time has to deal with a changing environment. Some of these changes are ephemeral and can be filtered, such as moving people. Others are more permanent: objects like furniture are moved, posters change, doors open and close, and more infrequently, walls are torn down or built. The typical SLAM mapping system assumes a static environment and constructs a map, usually in a single continuous run. This map is then enshrined as the ground truth, and used without regard for ongoing changes, with the hope that a robust localization filter will be sufficient for navigation and other tasks.

In this paper we propose to end these limitations by focusing on the idea of a lifelong map. A lifelong map system must deal with at least three phenomena:

1) Incremental mapping. The system should be able to add new sections onto its map at any point, that is, it should continuously localize and map. It should be able to wake up anywhere, even outside the current map, and connect itself to the map when it is encountered. It should continually check for loop closures and optimize them. It should work online.
2) Dynamic environment. When the world changes, the system should repair its map to reflect the changes. The system should maintain a balance between remembering past environments (to deal with short-term occlusions) and efficient map storage.
3) Localization and odometry failure. Typically a robot will fail to localize if its sensors are blocked or degraded in some way. The system should recover from these errors by relocalizing in the map when it gets the chance.

These principles have been explored independently in many research papers (see Section II on related work). But they have not yet been articulated as a coherent set of rules for a practical robotic system to exist in a dynamic environment. No current mapping and localization system adheres to all of them, and it is not obvious that combining current techniques would lead to a consistent realtime system.

Lifelong mapping as a concept is independent of the sensor suite. But just as laser sensors helped to solve a static SLAM problem that was difficult for sonars, so new techniques in visual place recognition (PR) can help with the difficult parts of lifelong mapping: loop closure and robust relocalization. Visual sensors have much more data, and are better at distinguishing scenes from a single snapshot – 2D laser scans are more ambiguous, making loop closure and relocalization a harder task.

The high information content in individual views also allows visual maps to more easily encode conflicting information that arises from dynamic environments. For example, two snapshots of a doorway, one with it open and one closed, could exist in a visual map. These views can be matched independently to a current view, and the best one chosen. In contrast, laser maps use a set of scans in a local area to construct a map, making them harder to dynamically update or represent alternative scenes.

In this paper we build on our recent work in online visual mapping using PR [19], and extend it to include incremental map stitching and repair, relocalization, and view deletion, which is critical to maintaining the efficiency of the map. The paper presents two main contributions: first, an overall system for performing lifelong mapping that satisfies the above criteria; and second, a view deletion model that maximizes the ability of the map to recognize situations that have occurred in the past and are likely to occur again. This model is based on clustering similar views, keeping exemplars of past clusters while allowing new information to be added. Experiments demonstrate that the lifelong map system is able to cope with map construction over many separate runs at different times, recovers from occlusion and other localization errors, and efficiently incorporates changes in an online manner.

II. RELATED WORK

Work on mapping dynamic environments with laser-based systems has concentrated on two areas: ephemeral objects such as moving people that are distractors to an otherwise static map, and longer-term but less frequent changes to geometry such as displaced furniture. The former are dealt with
using probabilistic filters to identify range measurements not consistent with a static model [11], [15], or by tracking moving objects [20], [29]. With visual maps, the problem of ephemeral objects is largely bypassed with geometric consistency of view matching, which rejects independently-moving objects [1].

Dealing with longer-term changes is a difficult problem, and relatively little work has been done. Burgard et al. [5] learn distinct configurations of laser-generated local maps – for example, with a door open and closed. They use fuzzy k-means to cluster similar configurations of local occupancy grids. They extend particle-filter localization to include the configuration that is most consistent with the current laser scan. Our approach is similar in spirit, but clustering is done by a view matching measure in a small neighborhood, and is on a much finer scale. We also concentrate on efficiency in keeping as few views as possible to represent different scenes.

Another approach with similarities to ours is the long-term mapping work of Biber and Duckett [4]. They sample local laser maps at different time scales, and merge the samples in each set to create maps with short- and long-term elements. The best map for localization is chosen by its consistency with current readings. They have shown improved localization using their updated maps over time scales of weeks.

Both these methods rely on good localization to maintain their dynamic maps. They create an initial static map, and cannot tolerate localization failures: there is no way to incorporate large new sections into the map. In contrast, our method explicitly deals with localization failure, incremental map additions, and large map changes.

A related and robust area of research is topological visual mapping, usually using omnidirectional cameras ([26], [27], [12] among many). As with laser mapping, there have been few attempts to deal with changing environments. Dayoub and Duckett [9] develop a system that gradually moves stable appearance features to a long-term memory, which adapts over time to a changing environment. They choose to adapt the set of features from many views, which is a visual analog to the Biber and Duckett time-scale approach; they report increased localization performance over a static map. Andreasson et al. [3] also provide a robust method for global place recognition in scenes subject to change over long periods of time, but without modifying the initial views. Both these methods assume the map positions are known, while we continuously build and optimize a metric map of views.

For place recognition, we rely on the hierarchical vocabulary trees proposed by Nistér and Stewénius [21]; other methods include approximate nearest neighbor [23] and various methods for improving the response or efficiency of the tree [8], [16], [17]. Callmer et al. [6], Eade and Drummond [10], Williams et al. [28], and Fraundorfer et al. [12] all use vocabulary tree methods to perform fast place recognition and close loops or recover from localization failures.

III. BACKGROUND: VIEW-BASED MAPS

A. FrameSLAM and Skeleton Graphs

The view map system, which derives from our work on FrameSLAM [2], [18], [19], is most simply explained as a set of nonlinear constraints among camera views, represented as nodes and edges (see Figures 6 and 7 for sample graphs). Constraints are input to the graph from two processes, visual odometry (VO) and place recognition (PR). Both rely on geometric matching of stereo views to find relative pose relationships. The poses are in full 3D, that is, 6 degrees of freedom, although for simplicity planar projections are shown in the figures of this paper.

VO and PR differ only in their search method and features. VO uses FAST features [22] and SAD correlation, continuously matching the current frame of the video stream against the last keyframe, until a given distance has transpired or the match becomes too weak. This produces a stream of keyframes at a spaced distance, which become the backbone of the constraint graph, or skeleton. PR functions opportunistically, trying to find any other views that match the current keyframe, using random tree signatures [7] for viewpoint independence. This is much more difficult, especially in systems with large loops. Finally, an optimization process finds the best placement of the nodes in the skeleton.

For two views c_i and c_j with a known relative pose, the constraint between them is

∆z_ij = c_i ⊖ c_j, with covariance Λ^{-1}    (1)

where ⊖ is the inverse motion composition operator – in other words, c_j's position in c_i's frame; we abbreviate the constraint as c_ij. The covariance expresses the strength of the constraint, and arises from the geometric matching step that generates the constraint. In our case, we match two stereo frames using a RANSAC process with 3 random points to generate a relative pose hypothesis. The hypothesis with the most inliers is refined in a final nonlinear estimation, which also yields a covariance estimate. In cases where there are too few inliers, the match is rejected; the threshold varies for VO (usually 30) and PR (usually 80).
Given a constraint graph, the optimal position of the nodes is a nonlinear optimization problem of minimizing Σ_ij ∆z_ij^T Λ ∆z_ij; a standard solution is to use preconditioned conjugate gradient [2], [14]. For realtime operation, it is more convenient to run an incremental relaxation step, and the recent work of Grisetti et al. [13] on stochastic gradient descent provides an efficient method of this kind, called Toro, which we use for the experiments. Toro has an incremental mode that allows amortizing the cost of optimization over many view insertions.

Because Toro accepts only a connected graph, we have used the concept of a weak link to connect a disjoint sequence to the main graph. A weak link has a very high covariance so as not to interfere with the rest of the graph, and is deleted as soon as a normal connection is made via place recognition.
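The following is a minimal sketch of how the skeleton described above could be represented: views as nodes, relative-pose constraints with covariances as edges, and weak links as ordinary constraints with a very large covariance. The class and field names are our own assumptions, not taken from the authors' code.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Constraint:
    i: int                # view id of c_i
    j: int                # view id of c_j
    delta_z: np.ndarray   # ∆z_ij: pose of c_j in c_i's frame (6-DOF vector)
    cov: np.ndarray       # 6x6 covariance from the geometric match
    weak: bool = False    # weak links only keep the graph connected

@dataclass
class SkeletonGraph:
    views: dict = field(default_factory=dict)        # id -> view data (features, pose estimate)
    constraints: list = field(default_factory=list)  # VO and PR edges

    def add_weak_link(self, i, j):
        # huge covariance: the link barely influences the optimizer
        self.constraints.append(
            Constraint(i, j, np.zeros(6), np.eye(6) * 1e6, weak=True))

    def remove_weak_links(self, j):
        # once PR supplies a real constraint for view j, drop its weak links
        self.constraints = [c for c in self.constraints
                            if not (c.weak and j in (c.i, c.j))]
```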
B. Deleting Views

View deletion is the process of removing a view from the skeleton graph, while preserving connections in the graph. We show this process here for a simple chain. Let c_0, c_1 and c_2 be three views, with constraints c_01 and c_12. We can construct a constraint c_02 that represents the relative pose and covariance between c_0 and c_2. The construction is:

∆z_02 = ∆z_01 ⊕ ∆z_12,    (2)
Γ_02^{-1} = J_1 Γ_01^{-1} J_1^T + J_2 Γ_12^{-1} J_2^T.    (3)

J_1 and J_2 are the Jacobians of the transformation z_02 with respect to c_1 and c_2, respectively. The pose difference is constructed by compounding the two intermediate pose differences, and the covariances Γ are rotated and summed appropriately via the Jacobians (see [24]).

Under the assumption of independence and linearity, this formula is exact, and the node c_1 can be deleted if it is only desired to retain the relation between c_0 and c_2; otherwise it is approximately correct when c_0 and c_2 are separated by ∆z_02. We use view deletion extensively in Section V to get rid of unnecessary views and keep the graph small.
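As a worked example of Eqs. (2)-(3), the sketch below composes two relative-pose constraints and propagates their covariances through the Jacobians of the composition. It uses a planar (x, y, θ) parameterization for brevity, whereas the paper works in full 6-DOF; the numbers at the end are made up for illustration.

```python
import numpy as np

def compose(dz01, dz12):
    """∆z02 = ∆z01 ⊕ ∆z12 for planar poses (x, y, theta)."""
    x1, y1, th1 = dz01
    x2, y2, th2 = dz12
    c, s = np.cos(th1), np.sin(th1)
    return np.array([x1 + c * x2 - s * y2,
                     y1 + s * x2 + c * y2,
                     th1 + th2])

def compose_covariance(dz01, cov01, dz12, cov12):
    """Covariance of the composed constraint via the Jacobians J1, J2 of Eq. (3)."""
    _, _, th1 = dz01
    x2, y2, _ = dz12
    c, s = np.cos(th1), np.sin(th1)
    J1 = np.array([[1, 0, -s * x2 - c * y2],
                   [0, 1,  c * x2 - s * y2],
                   [0, 0,  1]])
    J2 = np.array([[c, -s, 0],
                   [s,  c, 0],
                   [0,  0, 1]])
    return J1 @ cov01 @ J1.T + J2 @ cov12 @ J2.T

# Deleting the middle node c1 of a chain c0 - c1 - c2: keep a direct c0 -> c2 edge.
dz01, cov01 = np.array([1.0, 0.0, 0.1]), np.diag([1e-2, 1e-2, 1e-3])
dz12, cov12 = np.array([1.0, 0.2, 0.0]), np.diag([1e-2, 1e-2, 1e-3])
dz02 = compose(dz01, dz12)
cov02 = compose_covariance(dz01, cov01, dz12, cov12)
```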
C. Place Recognition

The place recognition problem (matching one image to a database of images) has received recent attention in the vision community [17], [21], [23]. We have implemented a place recognition scheme based on the vocabulary trees of Nistér and Stewénius [21] which has good performance for both inserting and retrieving images. Features are described with a compact version of random-tree signatures [7], [19], which are efficient to compute and match, and have good performance under view change.

For a given reference image, the vocabulary tree generates an ordered set of matches. We pick the top N candidates (where N is ∼15) and subject them to a geometric consistency check. In previous work [19], we found experimentally and theoretically that a match count of 30 features or greater produces no false positives.¹ We also found, in test cases, that the vocabulary tree produced at least one good match in the top 15 candidates for 97% of matchable reference views. For any given reference view, we expect almost 60% of the correct matches to appear in the top 15 results. Figure 1 shows the recognition rate for an actual map (Figure 6), as a function of distance and angle to a view. Within a 0.5m radius, the place recognition algorithm gives very high recall when the angle is 10 degrees or less.

¹Visual aliasing, e.g., the same large poster in two locations, could produce false positives, although we haven't yet found such a case in many hundreds of thousands of matches. Filters based on positional information could be used in these cases.

Fig. 1. Recognition rate. The plot shows the proportion of recognized poses for varying pose angle and pose distance. The poses are taken from the final map in Figure 6.
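A schematic version of this retrieve-then-verify step is sketched below. The `vocab_tree` object and the `geometric_check` callback are placeholders for the vocabulary-tree lookup and the RANSAC verification described in Section III-A; the thresholds follow the numbers quoted above (top 15 candidates, at least 30 matched features), and the function name is our own.

```python
TOP_N = 15          # candidates taken from the vocabulary-tree ranking
MIN_INLIERS = 30    # geometric-consistency threshold quoted above

def place_recognition_links(ref_view, vocab_tree, geometric_check):
    """Return (candidate, relative_pose, covariance) for every verified match."""
    links = []
    for cand in vocab_tree.query(ref_view, top_n=TOP_N):
        result = geometric_check(ref_view, cand)     # RANSAC match of Section III-A
        if result is not None and result.inliers >= MIN_INLIERS:
            links.append((cand, result.pose, result.cov))
    return links
```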
IV. LIFELONG VISUAL MAPPING

In the introduction, we presented the three principles of lifelong mapping, which can be summarized as:
• Continual, incremental mapping and loop closing.
• Map repair to reflect a changing environment.
• Recovery from localization failure.

Here we show how a lifelong map can be created and maintained using the techniques of the view map skeleton. The most interesting and difficult part is deciding when to delete views from the map.

A. System-Level Algorithm

We assume that the robot stores a skeleton map M that represents its current global map. Every time the robot wakes up, it runs the following algorithm for visual mapping.

Lifelong Mapping
Input: skeleton view map M
Output: updated map M
1) On wakeup, initialize the current keyframe Kc and insert a weak link between Kc and the last encountered map keyframe.
2) Get new stereo frame S
3) Perform VO to get the relative pose Kc ↔ S
4) VO failure?
   a) Add weak link from S to Kc
   b) If previous S was a VO failure, delete it
   c) Continue at step (2)
5) Switch keyframes?
   a) Kc ⇐ S
   b) Add skeleton node?
      i) M ⇐ M ∪ {S}
      ii) Place recognition for S?
         A) Add PR links to M
         B) Remove any weak links
         C) Check for view deletion using Kc
         D) Incrementally optimize M
6) If not shut down, continue from step (2)

This algorithm uses techniques from Section III to maintain the global view map; the one addition is view deletion in line 5C, which is explained further below.
In general form, the algorithm is very simple. Waking up, the robot is lost, and inserts a weak link to keep the map connected. Then it processes stereo frames at 30 Hz, using VO to connect each frame to the last. If there is a failure, it proceeds as with wakeup, putting in a weak link. Otherwise, it tests for a keyframe addition, which happens if the match score falls below a threshold, or the robot has moved a certain amount (usually 0.3m or 10 degrees). On keyframe addition, a further check is made on whether to add a skeleton view, which is the same test as for keyframe switching, with a further constraint of at least 15 frames (0.5 seconds) between skeleton views. If a skeleton view is added, it checks all views in the graph for matches, and adds any links it finds, removing the now-unnecessary weak link. The view deletion algorithm is run on Kc (see Section V) to thin out crowded skeleton views. Finally, the graph is incrementally optimized, and the process repeats until the robot shuts down.
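The loop below condenses the system-level algorithm into code form. It is our paraphrase, not the released implementation: every helper (VO, the keyframe tests, PR, view deletion, incremental Toro) is passed in as a callable, and details such as dropping consecutive VO-failure frames are omitted.

```python
def lifelong_mapping(camera, graph, last_map_keyframe, visual_odometry,
                     switch_keyframe, add_skeleton_view,
                     place_recognition_links, lru_view_deletion, optimize_incremental):
    kc = camera.grab()                          # current keyframe on wakeup
    graph.add_view(kc)
    graph.add_weak_link(kc.id, last_map_keyframe)
    while not camera.shut_down():
        s = camera.grab()                       # new stereo frame, ~30 Hz
        pose = visual_odometry(kc, s)
        if pose is None:                        # VO failure: stay connected via a weak link
            graph.add_weak_link(s.id, kc.id)
            continue
        if switch_keyframe(pose):               # moved ~0.3 m / 10 deg, or match too weak
            kc = s
            if add_skeleton_view(s):            # at most one skeleton view per 15 frames
                graph.add_view(s)
                links = place_recognition_links(s, graph)
                if links:
                    graph.add_constraints(links)
                    graph.remove_weak_links(s.id)
                lru_view_deletion(graph, kc)    # Section V: thin out crowded neighborhoods
                optimize_incremental(graph)     # amortized Toro step
    return graph
```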
B. Map Stitching

Loop closure, recovery from VO failure, and global localization on wakeup are handled by a uniform place-recognition mechanism. To illustrate, consider two small sequences that overlap somewhere along their length. These can represent any of the three scenarios above. In Figure 2, the first sequence in red ends in a VO failure or robot shutdown. The second sequence (green) starts some 5m away, without any a priori knowledge of its relation to the first. The weak link (dotted line) connects the last view of the red sequence with the first view of the green sequence, to maintain a connected graph. After traveling along the green sequence, PR makes matches to the first sequence. Optimization then brings the sequences into correct alignment, and the weak link can be deleted.

Fig. 2. Sequence stitching. The robot traverses the first sequence (red), then is transported 5m and restarted (green). After continuing a short time, a correct view match inserts the new trajectory into the map.
C. Computation and Storage

The Lifelong Mapping algorithm can be run online using a single processor core. The time spent in view integration is broken down by category in Figure 3. VO takes 11 ms average per frame; there are a maximum of two skeleton views added per second, leaving at least 330 ms to process each one. Averages for adding to and searching the vocabulary tree are 25 ms, and for the geometry check, 65 ms. Optimization by Toro in incremental mode uses less than 10 ms per view.

Storage for each skeleton view consumes 60KB (average 300 features at 200 bytes for the descriptor); we can easily accommodate 50K views in memory (3GB). With a 50m x 50m building, assume that the robot's trajectories are spread over, say, 33% of the building's area; then the maximum density of views in the map is 50 per square meter, more than adequate for good recognition.²

²We are experimenting with just saving the vocabulary word index for each feature as in [12], which would increase the limit to 500K views, and a density of 500 per square meter.

Fig. 3. Timing for view integration per view during integration of the last sequence of Figure 6.
V. FORGETTING VIEWS

A robot running continuously in a closed environment will eventually accumulate enough images in its graph to stress its computational abilities. Pruning the graph is essential to long-term viability. The question is how to prune it in a “reasonable” way, that is, what criteria guide the deletion of views? If the environment were static, a reasonable choice is reconstruction quality, which was successfully developed in the Photo Tourism project [25]. With a dynamic map, we want to maximize the chance that a given view in the map will be used in recognizing the pose of the robot via a new view.

A. Visual Environment Types

Consider a stereo camera at a fixed position capturing images at intervals throughout a period of time. Figure 4 shows the matching evolution for different persistence classes. In a completely static scene, a view will continue to match all views that come after it. In a static scene with some small changes (e.g., dishes or chairs being moved), there will be a slow degradation of the matching score over time, stabilizing to the score for features in the static areas. If there is a large, abrupt change (e.g., a large poster moved), then there is a large falloff in the matching score. There are changes that repeat, like a door opening and closing, that lead to occasional spikes in the response graph. Finally, a common occurrence is an ephemeral view, caused by almost complete occlusion by a dynamic object such as a person – this view matches nothing subsequently. An environment could have a mixture of these types; for example, occlusion can occur with any of them.

Our main idea is similar to the environment-learning strategy of [5], which attempts to learn distinct configurations of laser-generated local maps – for example, with a door open and closed. In our case, we want to learn clusters of views that represent a similar and persistent visual environment.

Fig. 4. View environments. On the left, a schematic of view responses over time. Every bar represents how well a view matches subsequent views at the same position for the given environment type. On the right, the clusters that are induced by the environment (see Section V-D). The red circled cluster is from the original view, which does not survive in the Ephemeral case.
B. View Clustering

To cluster views based on their recognition environment, the obvious score is the inlier match percentage. Let v and v′ be two views, m the minimum count of their two feature sets, and m_e the number of inliers in their match. The closeness of the two views is defined as

c(v, v′) ≡ m / m_e − 1.    (4)

The closeness measure is zero if two views match exactly, and increases to infinity when there are no matching features. Note that closeness is not a distance measure, since it does not obey the triangle inequality (informally, two images may be close to a third image, but not close to each other).

The closeness measure defines the graph of the skeleton set, where an edge between two views exists if c(v, v′) < τ for some match threshold τ. For a set S of views, a cluster of S is any maximal connected subset of S.

In deleting a view v from a cluster S, it is important to retain the connectedness of edges coming into a cluster. If there is an edge (v, a) from an external node a that has no other link to the cluster, then a new link is formed from a to a node v′ of the cluster that is connected to v, using the technique of Section III-B. If the cluster is a singleton, then all pairs of nodes a, b linked to v are connected.
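Eq. (4) and the cluster definition translate directly into code. The sketch below is illustrative only, with an `inlier_count` callback standing in for the actual feature matching: it computes closeness and groups a set of views into clusters as the connected components of the thresholded closeness graph.

```python
def closeness(n_feat_v, n_feat_w, inliers):
    """Eq. (4): zero for a perfect match, growing to infinity as matches vanish."""
    m = min(n_feat_v, n_feat_w)
    return float('inf') if inliers == 0 else m / inliers - 1.0

def clusters(views, inlier_count, tau):
    """views: list of (view_id, n_features); inlier_count(a, b) -> matched inliers.
    Returns clusters as lists of view ids (maximal connected subsets)."""
    ids = [v for v, _ in views]
    nfeat = dict(views)
    parent = {i: i for i in ids}

    def find(i):                       # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in ids:
        for b in ids:
            if a < b and closeness(nfeat[a], nfeat[b], inlier_count(a, b)) < tau:
                parent[find(a)] = find(b)

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```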
C. Metric Neighborhood

While running, the robot's views in the skeleton will seldom fall on exactly the same pose. Thus, to cluster views, we need to define a view neighborhood η_d, η_φ over which to delete redundant views. The size of the neighborhood is dependent on the scale of the application and the statistics of the match data. In large outdoor maps it may be sufficient to have a view every 10m or so; for close-up reconstruction of a desktop we may want views every 10cm. For the typical indoor environments of this paper, the data of Figure 1 suggest that η_d = 0.5m is a reasonable neighborhood, and we use this in the experiments of Section VI.

The view angle η_φ is also important in defining the neighborhood, since matching views must have an overlapping FOV. Again, the data suggest a neighborhood value of η_φ = 10°.

The skeleton graph induces a global 3D metric on its nodes, but the metric is not necessarily globally accurate: as the graph distance between nodes increases, so does their relative uncertainty. The best way to see this is to note that a large loop could cause two nodes at the beginning and end of the loop to be near each other in the graph global pose, but not in ground truth, if the uncertainty of the links connecting the nodes is high. So, we search just the local graph around a view to find views that are within its neighborhood.

D. An LRU Algorithm

Within a metric neighborhood consisting of a set S of views, we want to do the following:
• limit the number of views to a maximum of κ;
• preserve diversity of views;
• preferentially remove older, unmatched views.

The main idea of the following view deletion algorithm is to use clusters of S to indicate redundant views, and try to preserve single exemplars from as large a number of clusters as possible, using a least-recently used (LRU) algorithm.

LRU View Deletion
Input: set of n ≤ κ views v_i in a neighborhood, a new view v, a view limit κ, and a view match threshold τ.
Output: updated set of views of size ≤ κ.
• Add v to the graph
• If c(v, v_i) > τ for all v_i (no match), set the timestamp of v to -1 (oldest timestamp)
• While n > κ do
  – If any cluster has more than one exemplar, delete the oldest exemplar among these clusters
  – Else delete the oldest exemplar
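A compact rendering of the LRU View Deletion box follows, again as an illustrative sketch rather than the authors' code. Clustering is delegated to a `cluster_of` callback (for instance the `clusters` function sketched in Section V-B), and each view is reduced to an (id, timestamp) pair.

```python
def lru_view_deletion(views, new_view, matches_existing, kappa, cluster_of):
    """views: list of (view_id, timestamp) already in the neighborhood.
    matches_existing: True if c(new_view, v_i) <= tau for some existing v_i.
    cluster_of(ids) -> dict mapping view_id to a cluster label (Section V-B)."""
    vid, ts = new_view
    views = views + [(vid, ts if matches_existing else -1)]   # unmatched views start "oldest"
    while len(views) > kappa:
        labels = cluster_of([i for i, _ in views])
        counts = {}
        for i, _ in views:
            counts[labels[i]] = counts.get(labels[i], 0) + 1
        multi = [v for v in views if counts[labels[v[0]]] > 1]
        pool = multi if multi else views              # prefer clusters with >1 exemplar
        victim = min(pool, key=lambda v: v[1])[0]     # delete the oldest exemplar
        views = [v for v in views if v[0] != victim]
    return views
```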
E. Analysis

This algorithm obviously preserves the sparsity of views in a neighborhood, keeping it at or below κ. To preserve exemplar diversity, the algorithm will keep adding views until κ is reached. Thereafter, it will add new (unmatched) views at the expense of thinning out all the clusters, until some cluster must be deleted. Then it chooses the oldest cluster. Figure 4 shows examples of cluster evolution for the different environments. In the case of static environments, there is just a single cluster, populated by the most recent views. When there is a large change, a new cluster will grow to have all the exemplars but one – should the change be reversed as in the repetition scene, that cluster will again grow. Notice that the bias towards newer views reduces older clusters to single exemplars in the long term.

One of the interesting aspects of the view deletion algorithm is how it deals with new ephemeral views. Such a view v starts a new cluster with the oldest timestamp. The next view to be added will not match v; assuming the neighborhood is full, and all clusters are singletons, v will be deleted. Only if the cluster is confirmed with the next addition matching will it survive.

In the long term, the equilibrium configuration of any neighborhood is a set of singleton clusters, representing the most recent κ stable environments with the most recent exemplars for each environment. This is the most desired outcome for any algorithm.

Our clustering algorithm has similarities to the fuzzy k-means clustering done by Burgard et al. [5]. For their 2D laser maps, a local occupancy grid is treated as a vector, and the vectors are clustered to find representative environments. The number of clusters is traded off against the divergence of each cluster using a Bayesian Information Criterion. In our case, we have a more direct means of comparing views, using the closeness measure c(v, v′). We can vary the threshold τ to obtain different bounds on the cluster variance. Our technique has the advantage of low storage requirements and an aging process to get rid of long-unused views. Finally, there is no need to choose a particular cluster for localization, as in [5], because a new view is compared against all views to find the best match.

Our algorithm differs from the sample-based approach of Biber and Duckett [4], which forms clusters as randomly-chosen samples at different time scales and synthesizes the common features into an exemplar. Instead, we keep single recent exemplars for each cluster, which have a better chance of matching new views.

VI. EXPERIMENTS

We performed a series of experiments using a robot equipped with a stereo head from Videre Design. The FOV was approximately 90 degrees, with a baseline of 9 cm, and a resolution of 640x480. The experiments took place in the Willow Garage building, a space of approximately 50m x 50m, containing several large open areas with movable furniture, workstations, whiteboards, etc. All experiments were done during the day with no attempt to change the environment or discourage people from moving around the robot. Figure 5 shows some representative scenes from the runs.

Fig. 5. Representative scenes from the large office loop, showing matched features in green. Note blurring, people, cluttered texture, nearly blank walls.

A. Incremental Construction

Over the course of two days, we collected a set of six sequences covering a major portion of Willow Garage. The sequences were done without regard to forming a full loop or connecting to each other – see the four submaps on the left of Figure 6. There were no VO failures in the sequences, even with lighting changes, narrow corridors, and walls with little texture. The map stitching result (right side, Figure 6) shows that PR and optimization melded the maps into a consistent global whole. A detail of the map in Figure 8 shows the density of links between sequences in a stable portion of the environment, even after several days between sequences.

Fig. 6. Trajectories from robot runs through an indoor environment. Left: four typical trajectories shown without correction. Right: the complete lifelong map, using multiple trajectories. The map has 1228 views and 3826 connecting links. Distances are in meters.

Fig. 8. Detail of a portion of the large map of Figure 6. The cross-links between the different sequences are shown in blue.

To show that the map can be constructed incrementally without regard to the ordering of the sequences, we redid the runs with a random ordering of the sequences, producing the same overall map with only minor variation.

For the whole map, there were a total of 29K stereo frames, resulting in a skeleton map with 1228 view nodes and 3826 links. The timings when adding the last sequence are given in Figure 3. Given that we run PR and skeleton integration only at 2 Hz, timings show that the system can run online.

B. Map Repair

After an interval of 4 days, we made an additional sequence of some 13K frames, covering just a small 10m x 5m portion of the map. During this sequence, the stereo pair was covered at random times, simulating 20 extended VO failures. The area was a high-traffic one with lots of movable furniture, so the new sequence had a significantly different visual environment from the original sequences. The object was to test the system's ability to repair a map, and do so under adverse conditions, namely VO failures.

In Figure 7, the new sequence is overlaid against the original ones. Despite numerous long VO failures, the new sequence is integrated correctly and put into the right relation with the original map. Because the environment was changed, there are relatively few links between the original sequences and the new one (see the detail on the right), while there are very dense connections among the older sequences. In the new environment, the old sequences are no longer as relevant for matching, and the map has been effectively repaired by the new sequence.

Fig. 7. Additional later sequence added to the map of Figure 6; closeup is on the left. The new VO path is in green, and the intra-path links are in gray. There are 650 views and 2,676 links in this sequence, bringing the total map to almost 2K views. All inter-path links are shown in blue. Note the relatively few links between the new path and the old ones, showing the environment has changed considerably. No view deletion is done in this figure.

C. Deletion in a Small Area

We did not do view deletion in the first experiment because the density of views was not sufficient to warrant it. With the small area sequence just presented, there were enough views to make deletion worthwhile. We collected statistics on the map with different values for κ, the maximum number of views allowed in a neighborhood. These statistics are for the area occupied by the new sequence.
κ            ∞      7      5      4      3      2
Views        643    232    162    134    104    78
Edges        2465   361    213    184    293    269
Views/nb     17.7   6.1    4.6    3.8    2.9    2.0
Clusters/nb  2.0    2.3    2.1    2.0    1.8    1.5

The Views line shows the total number of new views in the map, which decreases significantly with κ. New edges also decline until κ = 4, at which point more edges are needed as clusters are deleted and their neighbors must be linked. The most interesting line is clusters per neighborhood. In general, there are two clusters, one for each direction the robot traverses along a pathway – views from these directions have no overlapping FOV. Note that decreasing κ keeps the number of clusters approximately constant, while reducing the number of views substantially. It is the clusters that preserve view diversity.

VII. CONCLUSION

We believe this paper makes a significant step towards a mapping system that is able to function for long periods of time in a dynamic environment. One of the main contributions of the paper is to present the criteria for practical lifelong mapping, and show how such a system can be deployed. The key technique is the use of robust visual place recognition to close loops, stitch together sequences made at different times, repair maps that have changed, and recover from localization failures, all in real time. To operate in larger environments, it is also necessary to build a realtime, optimized map structure connecting views, a role filled by the skeleton map and Toro.

The role of view deletion in maintaining an efficient map has been highlighted. With the LRU deletion algorithm, we have shown that exemplars of different environments can be maintained in a fine-grained manner, while minimizing the storage required for views.

One of the main limitations of the paper is the lack of long-term data and results on how the visual environment changes, and how to maintain matches over long-term changes. Not all features are stable in time, and picking out those that are is a good idea. We have begun exploring the use of linear features as key matching elements, since indoors the intersections of walls, floors and ceilings are generally stable.

VIII. ACKNOWLEDGMENTS

We would like to thank Giorgio Grisetti and Cyrill Stachniss for providing Toro, and for their help in getting the incremental version to work. We also would like to thank Michael Calonder and Patrick Mihelich for their work on the place recognition system.

REFERENCES

[1] M. Agrawal and K. Konolige. Rough terrain visual odometry. In Proc. International Conference on Advanced Robotics (ICAR), August 2007.
[2] M. Agrawal and K. Konolige. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Transactions on Robotics, 24(5), October 2008.
[3] H. Andreasson, A. Treptow, and T. Duckett. Self-localization in non-stationary environments using omni-directional vision. Robotics and Autonomous Systems, 55(7):541–551, July 2007.
[4] P. Biber and T. Duckett. Dynamic maps for long-term operation of mobile service robots. In RSS, 2005.
[5] W. Burgard, C. Stachniss, and D. Haehnel. Mobile robot map learning from range data in dynamic environments. In Autonomous Navigation in Dynamic Environments, volume 35 of Springer Tracts in Advanced Robotics. Springer Verlag, 2007.
[6] J. Callmer, K. Granström, J. Nieto, and F. Ramos. Tree of words for visual loop closure detection in urban SLAM. In Proceedings of the 2008 Australasian Conference on Robotics and Automation, page 8, 2008.
[7] M. Calonder, V. Lepetit, and P. Fua. Keypoint signatures for fast learning and recognition. In ECCV, 2008.
[8] M. Cummins and P. M. Newman. Probabilistic appearance based navigation and loop closing. In ICRA, 2007.
[9] F. Dayoub and T. Duckett. An adaptive appearance-based map for long-term topological localization of mobile robots. In IROS, 2008.
[10] E. Eade and T. Drummond. Unified loop closing and recovery for real time monocular SLAM. In BMVC, 2008.
[11] D. Fox and W. Burgard. Markov localization for mobile robots in dynamic environments. Journal of Artificial Intelligence Research, 11:391–427, 1999.
[12] F. Fraundorfer, C. Engels, and D. Nistér. Topological mapping, localization and navigation using image collections. In IROS, pages 3872–3877, 2007.
[13] G. Grisetti, C. Stachniss, S. Grzonka, and W. Burgard. A tree parameterization for efficiently computing maximum likelihood maps using gradient descent. In RSS, 2007.
[14] J. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In Proc. IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pages 318–325, Monterey, California, November 1999.
[15] D. Haehnel, R. Triebel, W. Burgard, and S. Thrun. Map building with mobile robots in dynamic environments. In ICRA, pages 1557–1563, 2003.
[16] H. Jegou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, 2008.
[17] H. Jegou, H. Harzallah, and C. Schmid. A contextual dissimilarity measure for accurate and efficient image search. In CVPR, 2007.
[18] K. Konolige and M. Agrawal. Frame-frame matching for realtime consistent visual mapping. In Proc. International Conference on Robotics and Automation (ICRA), 2007.
[19] K. Konolige, J. Bowman, J. Chen, P. Mihelich, M. Calonder, V. Lepetit, and P. Fua. View-based maps. In Submitted, 2009.
[20] M. Montemerlo and S. Thrun. Conditional particle filters for simultaneous mobile robot localization and people-tracking. In ICRA, pages 695–701, 2002.
[21] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
[22] E. Rosten and T. Drummond. Machine learning for high-speed corner detection. In European Conference on Computer Vision, volume 1, 2006.
[23] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, volume 2, page 1470, 2003.
[24] R. C. Smith and P. Cheeseman. On the representation and estimation of spatial uncertainty. International Journal of Robotics Research, 5(4), 1986.
[25] N. Snavely, S. M. Seitz, and R. Szeliski. Skeletal sets for efficient structure from motion. In Proc. Computer Vision and Pattern Recognition, 2008.
[26] I. Ulrich and I. Nourbakhsh. Appearance-based place recognition for topological mapping. In ICRA, 2000.
[27] C. Valgren, A. Lilienthal, and T. Duckett. Incremental topological mapping using omnidirectional vision. In IROS, 2006.
[28] B. Williams, G. Klein, and I. Reid. Real-time SLAM relocalisation. In ICCV, 2007.
[29] D. Wolf and G. Sukhatme. Online simultaneous localization and mapping in dynamic environments. In ICRA, 2004.
