SLAM in The Dynamic Context of Robot Soccer Games
Given the position of the observed object relative to the robot, ξ = (x, y)^T, and the height of the robot's camera r, equations 1 and 2 can be used to calculate the sensor model in form of the observation matrix H as in equation 3. Note that the velocity is not observable by processing a single camera image.
h_1(\xi) = \operatorname{atan2}(r, |\xi|) \qquad (1)

h_2(\xi) = \operatorname{atan2}(y, x) \qquad (2)

H = \begin{pmatrix} \frac{-r x}{|\xi|^3 + r^2 |\xi|} & \frac{-r y}{|\xi|^3 + r^2 |\xi|} & 0 & 0 \\ \frac{-y}{x^2 + y^2} & \frac{x}{x^2 + y^2} & 0 & 0 \end{pmatrix} \qquad (3)
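For illustration, the sensor model can be written down directly; the following minimal Python/NumPy sketch (variable names are ours, not taken from the implementation described here) evaluates equations 1-3 for an object at relative position (x, y):

import numpy as np

def sensor_model(x, y, r):
    """Bearing/elevation observation of an object at (x, y) relative to the
    robot, seen from a camera at height r; the state is (x, y, vx, vy).
    A sketch of equations 1-3, not the authors' implementation."""
    d = np.hypot(x, y)                      # |xi|, planar distance to object
    h = np.array([np.arctan2(r, d),         # h1: elevation angle (eq. 1)
                  np.arctan2(y, x)])        # h2: bearing angle (eq. 2)
    # Observation Jacobian H (eq. 3); the velocity columns are zero because
    # a single image carries no velocity information.
    H = np.array([
        [-r * x / (d**3 + r**2 * d), -r * y / (d**3 + r**2 * d), 0.0, 0.0],
        [-y / (x**2 + y**2),          x / (x**2 + y**2),         0.0, 0.0],
    ])
    return h, H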
For objects which are simply governed by the physical laws of motion, instead of being motorized or controlled, the motion model for the control update consists of a continuous motion slowed down by a friction factor k = F_{friction}/m, i.e. the force generated by the friction divided by the mass of the object. Since the state is modeled in local coordinates, the robot's own motion, given by the translational and rotational odometry (Δx, Δy, Δθ)^T, has to be compensated for as well. This yields the velocity update

v_t =: V\, v_{t-1} = \begin{cases} \left(1 + \frac{k\,\Delta t}{|v_{t-1}|}\right) \Omega(\Delta\theta)\, v_{t-1} & \text{for } |v_{t-1}| \geq |k\,\Delta t| \\ \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} v_{t-1} & \text{else,} \end{cases} \qquad (4)
where

\Omega(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}

is the rotation by the angle α. The full time update therefore predicts the state from μ_{t-1} according to equation 5:

\mu_t = g(\mu_{t-1}) = \begin{pmatrix} \Omega(\Delta\theta) & \Delta t\,\Omega(\Delta\theta) \\ 0 & V \end{pmatrix} \mu_{t-1} - \begin{pmatrix} \Delta x \\ \Delta y \\ 0 \\ 0 \end{pmatrix} \qquad (5)
This results in the Jacobian matrix G for the process update as the partial derivatives of x, y, v_x and v_y at (x_{t-1}, y_{t-1}, v_{x,t-1}, v_{y,t-1}):

G = \begin{pmatrix} \Omega(\Delta\theta) & \Delta t\,\Omega(\Delta\theta) \\ 0 & M \end{pmatrix} \qquad (6)

M = \begin{cases} \begin{pmatrix} \frac{\partial g_{v_x}}{\partial v_x} & \frac{\partial g_{v_x}}{\partial v_y} \\ \frac{\partial g_{v_y}}{\partial v_x} & \frac{\partial g_{v_y}}{\partial v_y} \end{pmatrix} & \text{if } |v_{t-1}| \geq |k\,\Delta t| \\ \Omega(\Delta\theta) & \text{else} \end{cases} \qquad (7)
with

\frac{\partial g_{v_x}}{\partial v_x} = \left(1 + \frac{k\,\Delta t}{|v|}\right)\cos(\Delta\theta) - \frac{k\,\Delta t\, v_x \left(\cos(\Delta\theta)\,v_x - \sin(\Delta\theta)\,v_y\right)}{|v|^3} \qquad (8)

\frac{\partial g_{v_x}}{\partial v_y} = -\left(1 + \frac{k\,\Delta t}{|v|}\right)\sin(\Delta\theta) - \frac{k\,\Delta t\, v_y \left(\cos(\Delta\theta)\,v_x - \sin(\Delta\theta)\,v_y\right)}{|v|^3} \qquad (9)

\frac{\partial g_{v_y}}{\partial v_x} = \left(1 + \frac{k\,\Delta t}{|v|}\right)\sin(\Delta\theta) - \frac{k\,\Delta t\, v_x \left(\sin(\Delta\theta)\,v_x + \cos(\Delta\theta)\,v_y\right)}{|v|^3} \qquad (10)

\frac{\partial g_{v_y}}{\partial v_y} = \left(1 + \frac{k\,\Delta t}{|v|}\right)\cos(\Delta\theta) - \frac{k\,\Delta t\, v_y \left(\sin(\Delta\theta)\,v_x + \cos(\Delta\theta)\,v_y\right)}{|v|^3} \qquad (11)

where v = v_{t-1}.
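Equations 4-11 translate into a compact time update routine. The sketch below (Python/NumPy, with hypothetical names) assumes k < 0, since the friction force opposes the motion, and uses the reconstruction of the else branch of equation 7 given above:

import numpy as np

def rot(a):
    """Rotation matrix Omega(a)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def time_update(mu, odo, k, dt):
    """Predict the local state mu = (x, y, vx, vy) one step ahead (eq. 5)
    and return the process Jacobian G (eqs. 6-11). odo = (dx, dy, dtheta)
    is the odometry, k = F_friction / m (negative). A sketch, not the
    authors' code."""
    dx, dy, dth = odo
    R = rot(dth)
    v = mu[2:4]
    n = np.linalg.norm(v)
    if n >= abs(k * dt):
        # decelerated, rotated velocity (eq. 4, first case)
        v_new = (1.0 + k * dt / n) * (R @ v)
        c, s, vx, vy = np.cos(dth), np.sin(dth), v[0], v[1]
        M = np.array([                       # eqs. 8-11
            [(1 + k*dt/n)*c - k*dt*vx*(c*vx - s*vy)/n**3,
             -(1 + k*dt/n)*s - k*dt*vy*(c*vx - s*vy)/n**3],
            [(1 + k*dt/n)*s - k*dt*vx*(s*vx + c*vy)/n**3,
             (1 + k*dt/n)*c - k*dt*vy*(s*vx + c*vy)/n**3],
        ])
    else:
        v_new = np.zeros(2)   # friction stops the object (eq. 4, else case)
        M = rot(dth)          # else branch of eq. 7 as reconstructed above
    pos_new = R @ mu[0:2] + dt * (R @ v) - np.array([dx, dy])
    G = np.block([[R, dt * R], [np.zeros((2, 2)), M]])
    return np.concatenate([pos_new, v_new]), G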
Thus local models of dynamic objects in the robot's environment can be maintained using separate Kalman filters. Since the motion of autonomous robots is unpredictable, it is possible to neglect the estimation of their velocity and to apply high process noise instead.
The separate localization module described in section 3.1, in itself also a buffer integrating information from static, known world features into a localization belief model, is used analogously to those percept buffers, but its state is not deleted periodically after forwarding the belief to the SLAM part of the algorithm. This localization reflects part of the SLAM state, and changes to this part of the SLAM state are fed back into the localization module's state. Thus the virtual localization measurements used to update the SLAM state are essentially the innovation introduced by new static feature observations. Therefore those measurements are still conditionally independent from previous measurements given the current belief state, so the Markov assumption is not violated.
3.3 Local and Distributed Knowledge Integration
The state of the full model of the robot's environment consists of its own pose p_0 = (p_{0,x}, p_{0,y}, p_{0,θ})^T, the poses of all cooperating robots (p_i = (p_{i,x}, p_{i,y}, p_{i,θ})^T with i ∈ {1, ..., n}), and the states of the dynamic objects. While only a small subset of the cooperating robots or other elements is observed at any one time and modeled according to section 3.2 in each time interval, all of them remain part of the full model during intervals in which they are not observed. It
is possible to dynamically shrink or expand the state vector if new unknown
robots are observed. Alternatively a separate mechanism could keep track of
active and inactive slots in the state vector by using time-to-live counters. This
latter approach has been chosen here to prevent frequent rescaling of both the
state vector and its covariance matrix.
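A minimal sketch of such time-to-live bookkeeping follows; the structure, slot count and TTL constant are placeholder assumptions, not the implementation used here:

class SlotManager:
    """Fixed-size pool of state-vector slots for unknown robots, activated
    and retired via time-to-live counters instead of resizing the state."""
    def __init__(self, n_slots, ttl=30):
        self.ttl = ttl
        self.counters = [0] * n_slots   # 0 = inactive slot

    def observe(self, slot):
        self.counters[slot] = self.ttl  # refresh on every observation

    def activate_free_slot(self):
        for i, c in enumerate(self.counters):
            if c == 0:
                self.counters[i] = self.ttl
                return i                # index of the newly activated slot
        return None                     # pool exhausted

    def tick(self):
        """Call once per frame; slots decay and eventually deactivate."""
        self.counters = [max(0, c - 1) for c in self.counters]

    def active(self):
        return [i for i, c in enumerate(self.counters) if c > 0]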
The integration of the locally accumulated and the distributed information into the model is done in the process and sensor updates. The robot's own pose and those of cooperating robots can be updated with the pose changes propagated from the individual localization modules relative to the pose used for the last update. The ball is updated using a motion model similar to the one in equation 5, but without the odometry-related rotations due to the local coordinate system. Other autonomous agents can either be updated according to the latest velocity estimations, or just using an identity and appropriately high process noise, following the reasoning proposed in section 2.2.
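In the latter case the process update degenerates to inflating the covariance while keeping the mean; a sketch, with a placeholder noise level:

import numpy as np

def predict_unpredictable(mu_i, Sigma_i, q_pos=0.25):
    """Identity process model for a non-cooperating robot: the mean is kept
    unchanged and the uncertainty grows by a large process noise.
    q_pos (variance added per step) is a placeholder, not a tuned value."""
    Q = q_pos * np.eye(len(mu_i))
    return mu_i, Sigma_i + Q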
The sensor update consists of two different kinds of observations. If a robot, either the local robot itself or any of the communicating robots in the team, has made observations of static world elements which have been used to update the separate localization estimate in the first stage (cf. section 3.1), then this absolute pose estimate is used as a direct measurement of the corresponding pose in the state vector, i.e. the measurement Jacobian is an identity in the corresponding submatrix.
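In code, the Jacobian of such a direct pose measurement is simply an identity block placed at the offset of the observed robot's pose in the state vector (a sketch; the flat state layout is an assumption):

import numpy as np

def pose_measurement_jacobian(n_state, offset):
    """Jacobian of a direct pose observation of one robot: an identity in
    the 3x3 block at the robot's pose offset, zero everywhere else.
    Assumes a flat state vector with (x, y, theta) per robot pose."""
    H = np.zeros((3, n_state))
    H[:, offset:offset + 3] = np.eye(3)
    return H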
The other case is the observation of a dynamic feature by one of the robots in the team. If the observed dynamic feature is a robot (without further identified characteristics such as team markers etc.), this dynamic object may either be any of the other robots in the team, or one of a number of non-cooperating robots in the environment. In this case, the maximum likelihood correspondence will be chosen to be updated, or a new model will be inserted or activated if the other choices are too unlikely (see the sketch below). The corresponding expected observation is in a robot-relative Euclidean coordinate system, since this is the format of the local models distributed as aggregated percepts. It is expressed as a function of the observed object's model (m_x, m_y, m_{v_x}, m_{v_y}) and its observer's pose p_i, with i = 0 for local observations and i ∈ {1, ..., n} for communicated ones, which are otherwise not distinguished any further.
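The maximum likelihood correspondence mentioned above can be sketched as gated nearest-neighbor association in Mahalanobis distance; the gating threshold and the data layout below are placeholder assumptions:

import numpy as np

def ml_correspondence(z, candidates, gate=9.21):
    """Pick the model whose expected observation best explains z.
    candidates: list of (model_id, z_hat, S) with innovation covariance S.
    gate: chi-square threshold (9.21 is roughly the 99% quantile for two
    degrees of freedom, a placeholder). Returns a model id, or None if a
    new model should be inserted or activated instead."""
    best_id, best_d2 = None, np.inf
    for model_id, z_hat, S in candidates:
        nu = z - z_hat                           # innovation
        d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis dist.
        if d2 < best_d2:
            best_id, best_d2 = model_id, d2
    return best_id if best_d2 <= gate else None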
The observation model is given by equations 12 and 13,

h_{m_x, m_y}(p_i) = \Omega(-p_{i,\theta}) \left( (m_x, m_y)^T - (p_{i,x}, p_{i,y})^T \right) \qquad (12)

h_{m_{v_x}, m_{v_y}}(p_i) = \Omega(-p_{i,\theta}) \left( m_{v_x}, m_{v_y} \right)^T \qquad (13)

from which the corresponding entries in the measurement Jacobian can be calculated as in equation 14, with c_θ := cos(p_{i,θ}), s_θ := sin(p_{i,θ}), and the columns ordered as (p_{i,x}, p_{i,y}, p_{i,θ}, m_x, m_y, m_{v_x}, m_{v_y}):

\begin{pmatrix}
-c_\theta & -s_\theta & -(m_x - p_{i,x})\,s_\theta + (m_y - p_{i,y})\,c_\theta & c_\theta & s_\theta & 0 & 0 \\
 s_\theta & -c_\theta & -(m_x - p_{i,x})\,c_\theta - (m_y - p_{i,y})\,s_\theta & -s_\theta & c_\theta & 0 & 0 \\
 0 & 0 & -m_{v_x}\,s_\theta + m_{v_y}\,c_\theta & 0 & 0 & c_\theta & s_\theta \\
 0 & 0 & -m_{v_x}\,c_\theta - m_{v_y}\,s_\theta & 0 & 0 & -s_\theta & c_\theta
\end{pmatrix} \qquad (14)
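The following sketch evaluates equations 12-14 for one observer-object pair, using the column ordering given above (again an illustration, not the original implementation):

import numpy as np

def dynamic_observation(p, m):
    """Expected robot-relative observation of a dynamic object (eqs. 12-13)
    and its Jacobian block (eq. 14). p = (px, py, ptheta) is the observer's
    pose, m = (mx, my, mvx, mvy) the object's global model."""
    px, py, th = p
    mx, my, mvx, mvy = m
    c, s = np.cos(th), np.sin(th)
    Rinv = np.array([[c, s], [-s, c]])           # Omega(-ptheta)
    z_hat = np.concatenate([Rinv @ (np.array([mx, my]) - np.array([px, py])),
                            Rinv @ np.array([mvx, mvy])])
    # Columns: px, py, ptheta, mx, my, mvx, mvy
    H = np.array([
        [-c, -s, -(mx-px)*s + (my-py)*c,  c,  s,  0, 0],
        [ s, -c, -(mx-px)*c - (my-py)*s, -s,  c,  0, 0],
        [ 0,  0, -mvx*s + mvy*c,          0,  0,  c, s],
        [ 0,  0, -mvx*c - mvy*s,          0,  0, -s, c],
    ])
    return z_hat, H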
Re-localization events can be handled by resetting the corresponding state variables and removing the covariances, i.e. setting all entries in the corresponding rows and columns of the covariance matrix to zero. If such a previous mis-localization of a team member resulted in modeled false positives, those will stay as isolated features in the state for some time and will be deleted or inactivated after a certain time without observation. This serves as a self-repair routine to remove clutter from the environmental model, and to prevent growth of the state by the accumulation of models of such elements. The same is done if two models of unknown features are determined to correspond to the same origin after a series of observations, so that the information needs to be fused into the first model while the second is deactivated. Alternatively it would be possible to keep multiple environment models for each localization hypothesis, as done in [4].
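Such a reset amounts to overwriting the affected state entries and clearing their covariance rows and columns; a sketch, where the index set and the reinitialization variance are placeholders:

import numpy as np

def reset_substate(mu, Sigma, idx, new_values, var0=1.0):
    """Reset the state entries idx to new_values and decouple them from the
    rest of the estimate by clearing covariance rows/columns (sketch)."""
    mu, Sigma = mu.copy(), Sigma.copy()
    idx = np.asarray(idx)
    mu[idx] = new_values
    Sigma[idx, :] = 0.0        # remove the cross-covariances ...
    Sigma[:, idx] = 0.0
    Sigma[idx, idx] = var0     # ... and reinitialize the own variances
    return mu, Sigma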
4 Evaluation
The modeling process is complex and incorporates a multitude of different information sources, so that a step-by-step illustration of the working principle is not practical. To evaluate the presented approach, a simulated situation first illustrates the theoretical possibilities and the qualitative effect in section 4.1, followed by a quantitative analysis in soccer games using experiments with real robots in section 4.2. Both setups use an SPL scenario as specified by the 2011 rules.
4.1 Qualitative Demonstration
Figure 2 illustrates a simple scenario in a simulated environment. The robots in a team share their information for distributed cooperative modeling. Figure 2(b) shows the resulting model with 2D covariance ellipses extracted from the full state. In the following, one robot looks down and does not see any static field features any more, and both it and the ball are teleported to another location on the field (see figure 3). The use of distributed percepts and the modeling of the own pose together with the ones of other robots and the ball position and velocity allows the robot to correct not only its position but also its orientation.
Fig. 2. Scenario with a team of robots looking around and sharing perception information to cooperatively model their environment. (a) Setup of the robots on the field. (b) World model generated from local and distributed information.
Fig. 3. Following the situation in figure 2, one robot looks down and only sees the ball but no landmarks, and it and the ball are teleported. The shared information however still allows for a correction of both position and orientation of the robot. (a) Scenario after teleportation of the ball and the downwards-looking robot. (b) World model generated from local and distributed information.
This simple experiment shows the potential usefulness of such a combined modeling of a robot's dynamic environment and its pose in it. RoboCup SPL games contain periods where robots are chasing the ball, approaching it for precise positioning to shoot at the goal, or even dribbling it. During those periods odometry errors accumulate in the robot's localization if they are not countered by frequently looking up at static field features to correct the robot's pose estimate. If looking at the ball also allows the correction of those odometry errors, especially of the orientation, this is expected to be a clear advantage.
4.2 Quantitative Performance Evaluation
The artificial situation created in the previous section merely serves as an example of how localization benefits may be gained. To allow a quantitative evaluation of the approach's performance, the perceptions of a robot have been recorded during normal game situations with real robots on a regular SPL field. Those perceptions include the proprioception, i.e. odometry, orientation and joint angle information, the exteroception, i.e. perceptions of objects by means of image processing, the distributed local models of other cooperating robots running the same code, and ground truth information provided by a camera system mounted above the field.
This set of input information is then processed by two different module configurations. One is the configuration described in section 3. The second uses the same localization but a simpler module for cooperative tracking of dynamic objects without any feedback into the localization; it was used to win second place at RoboCup 2011. This experiment is not set up to show that the localization works, since both solutions are based on the same competitive solution for the localization problem with all features described in [10], but to evaluate the additional benefit gained by unified modeling of the full state.
A first evaluation of several recorded situations did not show any conclusive results: the positive and negative effects of the full state modeling canceled each other out most of the time, and in a small percentage of cases the full system even showed a slightly decreased localization quality. Closer evaluation showed that the currently used visual robot recognition introduces too much uncertainty, or even uncorrected systematic errors such as in the distance estimation, to be beneficial for the localization.
A second configuration of the system, which ignores the robot perceptions for the modeling of the robot's own position but still uses the much more precise ball perceptions, showed the expected results. As can be seen in the representative extract visualized in figure 4, the proposed system provides beneficial information for the robot's own localization most of the time. A direct comparison of the localization quality of both systems shows that the robot pose translation error of the full system model is below 25 cm for 83% of the time, compared to only 72% of the time for the unassisted underlying localization module; the average errors over the whole experiment are 166 mm compared to 213 mm. However, note that with this second configuration of the system, the robot's orientation could not be recovered as easily in the teleportation experiment as described in section 4.1.
Fig. 4. Difference of the translation errors of the two described systems. Negative values mean larger errors of the unassisted localization compared to modeling the full state.
5 Conclusion
This paper presents the advantages of modeling the full environment state as compared to only localizing in said environment. A competitive stand-alone localization module is extended to perform as a full state model, and the additional gain in localization performance is evaluated both in a simulated situation as well as in several real-world experiments with multiple robots and ground truth provided by an external camera system. While the robot perception in the current vision system is not good enough to benefit from using temporary opponent models as additional features for localization, usage of the ball as a dynamic feature significantly improves the localization quality.
An additional advantage of estimating the full state in a cooperative modeling approach is the existence of a single model which contains all information in a globally consistent way. This renders the switching between local tracking of the ball and a global team ball model obsolete, for example, and therefore simplifies behavior specification.
References
1. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics (Intelligent Robotics and
Autonomous Agents). The MIT Press (2005)
2. Hähnel, D., Schulz, D., Burgard, W.: Map building with mobile robots in populated environments. In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (2002)
3. Schulz, D., Burgard, W., Fox, D., Cremers, A.B.: Tracking multiple moving objects with a mobile robot. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). Volume 1. (2001) 371
4. Czarnetzki, S., Rohde, C.: Handling heterogeneous information sources for multi-robot sensor fusion. In: Proceedings of the 2010 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2010), Salt Lake City, Utah (September 2010) 133-138
5. Kümmerle, R., Steder, B., Dornhege, C., Kleiner, A., Grisetti, G., Burgard, W.: Large scale graph-based SLAM using aerial images as prior information. In: Proceedings of Robotics: Science and Systems (RSS), Seattle, WA, USA (June 2009)
6. Stroupe, A., Martin, M., Balch, T.: Distributed sensor fusion for object position estimation by multi-robot systems. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2001. (2001)
7. Dietl, M., Gutmann, J.S., Nebel, B.: Cooperative sensing in dynamic environments. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (2001) 1706-1713
8. Howard, A.: Multi-robot simultaneous localization and mapping using particle filters. The International Journal of Robotics Research 25(12) (2006) 1243-1256
9. Zhou, X.S., Roumeliotis, S.I.: Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China (2006) 1785-1792
10. Jochmann, G., Kerner, S., Tasse, S., Urbann, O.: Efficient multi-hypotheses unscented Kalman filtering for robust localization. In Röfer, T., Mayer, N.M., Savage, J., Saranli, U., eds.: RoboCup 2011: Robot Soccer World Cup XV. Lecture Notes in Computer Science. Springer Berlin / Heidelberg (2012) to appear