
Autonomous Mobile Robots

Summary

[Figure: the "see-think-act" scheme — Perception of the Real World Environment feeds Localization ("Position", Local Map, Global Map, Environment Model), which feeds Cognition (Path), which feeds Motion Control acting back on the Environment]

Zürich Autonomous Systems Lab


Autonomous Mobile Robots

Locomotion and Kinematics

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
3

Characteristics of Wheeled Robots and Vehicles


 Stability of a vehicle can be guaranteed with 3 wheels
 if the center of gravity lies within the triangle formed by the ground contact
points of the wheels.
 Stability is improved by 4 and more wheels
 however, these arrangements are hyperstatic and require a flexible suspension
system.
 Bigger wheels allow the robot to overcome higher obstacles
 but they require higher torque or reductions in the gearbox.
 Most arrangements are non-holonomic (see chapter 3)
 require high planning effort
 require low control inputs
 Combining actuation and steering on one wheel makes the design complex
and adds additional errors to the odometry.

© R. Siegwart, M. Chli, M. Rufli, D. Scaramuzza, ETH Zurich - ASL


12 - Summary
4

Different Arrangements of Wheels I


 Two wheels (COG below the axle)

 Three wheels (e.g. omnidirectional drive, synchro drive)

[Figures: example two- and three-wheel arrangements]


12 - Summary
5

Different Arrangements of Wheels II


 Four wheels

 Six wheels



12 - Summary
6

Kinematic Constraints: Fixed Standard Wheel

[Figure: fixed standard wheel — a wheel of radius r attached to the robot chassis at polar position (l, α), with wheel-plane orientation β; robot pose ξ = (x, y, θ)ᵀ, wheel spin speed φ̇, contact-point speed v = rφ̇]

Rolling constraint (motion in the wheel plane matches the wheel spin):

  [ sin(α+β)   −cos(α+β)   (−l)cos β ] R(θ) ξ̇ = rφ̇

No-sliding constraint (no motion orthogonal to the wheel plane):

  [ cos(α+β)   sin(α+β)   l sin β ] R(θ) ξ̇ = 0


Autonomous Mobile Robots

Sensors and Perception

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
9

Perception for Mobile Robots

[Figure: abstraction levels of perception, from raw data up to places/situations]

 Places / Situations — a specific room, a meeting situation, …
   • used for servicing / reasoning
   • functional / contextual relationships of objects (imposed or learned)
 Objects — doors, humans, Coke bottle, car, …
   • used for interaction
   • models / semantics (imposed or learned)
 Features — lines, contours, colors, phonemes, …
   • used for navigation
   • models (imposed or learned)
 Raw Data — vision, laser, sound, smell, …

Moving up the hierarchy compresses information (spatially / temporally / semantically).
12 - Summary
10

Classification of Sensors

 What:
 Proprioceptive sensors
• measure values internal to the system (robot),
• e.g. motor speed, wheel load, heading of the robot, battery status
 Exteroceptive sensors
• acquire information from the robot's environment
• e.g. distances to objects, intensity of the ambient light, unique features

 How:
 Passive sensors
• measure energy coming from the environment
 Active sensors
• emit their own energy and measure the reaction of the environment
• better performance, but some influence on the environment



12 - Summary
11

General Classification (1)



12 - Summary
12

General Classification (2)



Autonomous Mobile Robots

Perception / Robot Vision

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
14

From Pinhole to Perspective Camera


 Pinhole camera  thin lens equation:  1/f = 1/z + 1/e
   (f: focal length, z: object distance, e: image distance)

 When the object is out of focus, a scene point is imaged as a blur circle of radius R
   [Figure: object, lens of diameter L with focal point, focal plane and image plane at distances z and e, blur circle of radius R]

 Perspective camera: adjust the image plane so that objects at infinity are in focus

 Perspective = dependence of the apparent size of objects on their distance from the observer
   [Figure: object of height h at distance z projected through the optical center C to an image of height h' at focal distance f]
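The thin lens relation above can be sketched numerically. The focal length, object distance, and aperture below are illustrative values, and the blur-circle expression is the standard similar-triangles result, not a formula taken from the slide:

```python
# Thin lens: 1/f = 1/z + 1/e  =>  image distance e for an object at distance z.
# All numeric values here are illustrative, not from the lecture.

def image_distance(f, z):
    """Distance e behind the lens where an object at distance z is in focus."""
    return 1.0 / (1.0 / f - 1.0 / z)

def blur_circle_radius(L, e_focus, e_sensor):
    """Radius of the blur circle when the sensor sits at e_sensor but the
    sharp image forms at e_focus, for an aperture of diameter L
    (similar triangles through the lens aperture)."""
    return (L / 2.0) * abs(e_sensor - e_focus) / e_focus

f = 0.05          # 50 mm lens
z = 5.0           # object 5 m away
e = image_distance(f, z)
print(e)          # ~0.0505 m: image forms slightly behind the focal plane
print(blur_circle_radius(0.025, e, f))  # sensor at the focal plane -> small blur
```

Note that for z → ∞ the image distance tends to f, which is exactly why the perspective camera places the image plane at the focal distance.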
12 - Summary
15

Perspective Projection

 Project a 3D world point into pixel coordinates in the image plane:
1. Project the 3D point Pc (camera-frame coordinates) to local image-plane coordinates p.
2. Convert the local image-plane point p to pixel coordinates (u, v)
    matrix of intrinsic parameters: K
3. Generalize the projection to any 3D point Pw in world-frame coordinates
    projection matrix: K[R|T], where [R|T] are the extrinsic parameters

[Figure: world frame (Xw, Yw, Zw), camera frame (Xc, Yc, Zc), focal length f]

Camera calibration:
 Use point correspondences to find the elements of the Projection Matrix
 Decompose this matrix to get K, R, and T.
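Steps 1–3 can be sketched as follows; the intrinsic matrix K and extrinsics [R|T] below are illustrative values, not calibration results from the lecture:

```python
# Pinhole projection p ~ K [R|T] Pw, sketched with plain Python lists.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

K = [[500.0,   0.0, 320.0],   # fx,  0, cx
     [  0.0, 500.0, 240.0],   #  0, fy, cy
     [  0.0,   0.0,   1.0]]

# Extrinsics: identity rotation, camera 2 m behind the world origin along Z.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 2.0]

def project(Pw):
    # Pc = R*Pw + T, then normalize by depth and apply the intrinsics K.
    Pc = [matvec(R, Pw)[i] + T[i] for i in range(3)]
    uvw = matvec(K, Pc)
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

print(project([0.0, 0.0, 2.0]))  # on the optical axis -> principal point (320.0, 240.0)
```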

12 - Summary
16

Stereo Vision
 Depth estimation for the simplified case where both cameras are identical
and are aligned on the horizontal axis (baseline b, focal length f):

• Disparity: d = ul − ur
• Depth of P = (XP, YP, ZP):  ZP = b·f / d
• General stereo: triangulation for constructing disparity maps and scene
reconstruction
• Optical Flow

[Figure: point P observed by the left/right cameras Cl, Cr at image coordinates ul, ur]

 Epipolar Geometry & applications
 Epipolar constraint for efficient image search: the match of p1 = (u1, v1) lies
on the corresponding epipolar line in the other image, p2 = (u2, v2)
 Epipolar rectification  align images

[Figure: epipolar lines 1 and 2, epipoles E1, E2 of cameras C1, C2]
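The simplified depth-from-disparity case can be sketched directly; baseline, focal length, and pixel coordinates below are made-up values:

```python
# Depth from disparity for the rectified, axis-aligned stereo case:
# Z = b * f / (ul - ur). Numeric values are illustrative, not from the lecture.

def stereo_depth(b, f, ul, ur):
    """b: baseline [m], f: focal length [px], ul/ur: horizontal image coords [px]."""
    d = ul - ur                 # disparity (positive for points in front of the rig)
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return b * f / d

print(stereo_depth(0.12, 700.0, 420.0, 406.0))  # 0.12 * 700 / 14 = 6.0 m
```

The inverse relation between depth and disparity is why stereo depth resolution degrades quadratically with distance.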
12 - Summary
17

Image Filtering and Image Features


 Filtering: accept/reject certain image components, e.g. smoothing

 1D/2D Correlation and its uses in:
 Gaussian image smoothing
 Taking derivatives
 Template matching
 Correlation versus Convolution

[Figure: image I filtered to J = F(I) over a neighborhood Sxy around (x, y)]

 Separable filters for more efficient 2D correlation

 Edge detection and the derivative theorem of convolution

 Harris Corner Detection:
Find a patch that exhibits intensity change in ≥ 2 directions
[Figure: flat region — no intensity change; edge — no change along the edge direction; corner — significant change in at least 2 directions]
 Harris properties
 invariant to rotation and linear intensity changes
 NOT invariant to scale changes

12 - Summary
18

SIFT keypoints

 Methodology:
 Detect salient regions by using a Difference of
Gaussians pyramid
 Describe keypoints using local intensity gradients

 SIFT features are invariant to
 Rotation
 Small image scale changes
 Small viewpoint changes

 Note: Harris corners only correspond to a detection
technique, whereas SIFT incorporates methods for both
detection and description

SIFT:
 high-quality, distinctive features
 computationally demanding  usually not suited to real-time applications
 A combination of the Harris detector + image patches for description can be a more
suitable alternative.

12 - Summary
19

Object recognition

Q: Is this Book present in the Scene?
• Extract keypoints in both images (e.g. SIFT)
• Look for corresponding matches
• Most of the Book's keypoints are present in the Scene
 A: The Book is present in the Scene

 Bag of words approach:
 Use analogies from text retrieval: visual words, vocabulary of visual words
 Cluster visual words to build a Visual Vocabulary
 Use hierarchical clustering for a Vocabulary Tree

12 - Summary
20

Error Propagation Law

 What is the uncertainty of the extracted line (parameters α, r) given the
uncertainties of the points used to extract it?

 General case: a system maps n uncertain inputs X1 … Xn to m outputs Y1 … Ym,
   Yj = fj(X1 … Xn)

 The output covariance matrix CY is given by the error propagation law:
   CY = FX CX FXᵀ
 where FX is the Jacobian of f with respect to X, and CX is the covariance matrix
representing the input uncertainties.

 Application to line extraction
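A minimal numerical sketch of the error propagation law; the function f, operating point, and input covariance below are illustrative assumptions:

```python
# Error propagation law C_Y = F_X C_X F_X^T, with a numerical Jacobian.

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f: R^n -> R^m at x (lists of floats)."""
    y0 = f(x)
    J = [[0.0] * len(x) for _ in range(len(y0))]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        yp = f(xp)
        for i in range(len(y0)):
            J[i][j] = (yp[i] - y0[i]) / eps
    return J

def propagate(f, x, Cx):
    """Output covariance C_Y = F_X C_X F_X^T at the operating point x."""
    J = jacobian(f, x)
    m, n = len(J), len(x)
    JC = [[sum(J[i][k] * Cx[k][j] for k in range(n)) for j in range(n)] for i in range(m)]
    return [[sum(JC[i][k] * J[j][k] for k in range(n)) for j in range(m)] for i in range(m)]

# Example: y = [x0 + x1, x0 - x1] with independent unit-variance inputs.
Cy = propagate(lambda x: [x[0] + x[1], x[0] - x[1]],
               [1.0, 2.0],
               [[1.0, 0.0], [0.0, 1.0]])
print(Cy)  # approximately [[2, 0], [0, 2]]
```

For line extraction, f would map the measured range points to the line parameters (α, r), and CX would hold the per-point measurement uncertainties.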


12 - Summary
21

Line extraction from Range Data

 Line Fitting/Extraction: given points that belong to a line,
how to estimate the line parameters?
 4 algorithms for line extraction: Split-and-merge, Line Regression,
RANSAC, Hough Transform
 Application to lane detection, door detection, …

 Comparison of line extraction algorithms:
• Split-and-merge & Line Regression:
fast, use the sequential order of raw scan points
• RANSAC and Hough Transform:
precise/robust for general point clouds



Autonomous Mobile Robots

Localization

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
23

Map based localization

[Figure: the general localization loop — encoder data feeds a position Prediction (e.g. odometry); Observations are extracted from raw sensor data (Perception); the predicted position and the observations are Matched against the Map database; matched observations drive the Position Update (estimation)]

• Odometry, Dead Reckoning
• Localization based on external sensors, beacons or landmarks
• Probabilistic Map Based Localization
12 - Summary
24

Odometry: The Differential Drive Robot (2)

 Kinematics: with wheel travel distances Δsr, Δsl and wheelbase b,
   Δs = (Δsr + Δsl)/2,   Δθ = (Δsr − Δsl)/b
   p' = p + [ Δs cos(θ + Δθ/2),  Δs sin(θ + Δθ/2),  Δθ ]ᵀ
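The standard differential-drive odometry update — Δs = (Δsr + Δsl)/2, Δθ = (Δsr − Δsl)/b — can be sketched as follows; the wheel travels and wheelbase are illustrative values:

```python
# One odometry update for a differential-drive robot.
import math

def odometry_step(x, y, theta, dsr, dsl, b):
    """Update pose (x, y, theta) from wheel travels dsr, dsl [m], wheelbase b [m]."""
    ds = (dsr + dsl) / 2.0        # distance travelled by the robot center
    dtheta = (dsr - dsl) / b      # heading change
    # Evaluate the direction at the midpoint heading theta + dtheta/2.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight-line motion: both wheels travel 1 m.
print(odometry_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5))  # (1.0, 0.0, 0.0)
```

Iterating this step while propagating the wheel-travel uncertainty through the error propagation law produces exactly the growing pose uncertainty shown on the next slide.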

12 - Summary
25
Odometry: Growth of Pose Uncertainty for Straight-Line Movement

 Note: Errors perpendicular to the direction of movement grow much faster!

12 - Summary
26
Odometry: example of non-Gaussian error model

 Note: Errors are not shaped like ellipses!
Courtesy AI Lab, Stanford

[Fox, Thrun, Burgard, Dellaert, 2000]


12 - Summary
27

Belief Representation
 a) Continuous map
with single hypothesis
probability distribution

 b) Continuous map
with multiple hypotheses
probability distribution

 c) Discretized map
with probability distribution

 d) Discretized topological
map with probability
distribution



12 - Summary
28

Action and perception updates


 In robot localization, we distinguish two update steps:
 ACTION (or prediction) update:
• the robot moves and estimates its position through its proprioceptive sensors.
During this step, the robot uncertainty grows.

 PERCEPTION (or measurement) update:
• the robot makes an observation using its exteroceptive sensors and corrects its
position by opportunely combining its belief before the observation with the
probability of making exactly that observation.
During this step, the robot uncertainty shrinks.

[Figure: the perception update combines the robot belief before the observation with the probability of making this observation]

12 - Summary
29

The solution to the probabilistic localization problem


 A probabilistic approach to the mobile robot localization problem is a method able
to compute the probability distribution of the robot configuration during each
Action and Perception step.

 The ingredients are:

1. The initial probability distribution p(x) at t = 0

2. The statistical error model of the
proprioceptive sensors (e.g. wheel encoders)

3. The statistical error model of the
exteroceptive sensors (e.g. laser, sonar, camera)

4. Map of the environment
(If the map is not known a priori then the robot needs to build a map of the environment and then
localize in it. This is called SLAM, Simultaneous Localization And Mapping)

12 - Summary
30

The solution to the probabilistic localization problem


 How do we solve the Action and Perception updates?

 Action update uses the theorem of total probability:
   bel̄(xt) = ∫ p(xt | ut, xt−1) bel(xt−1) dxt−1

 Perception update uses the Bayes rule:
   bel(xt) = η p(zt | xt) bel̄(xt)
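The two updates can be sketched for a discrete 1D world; the cyclic world, motion noise, and sensor model below are illustrative assumptions in the spirit of the classic door/wall example:

```python
# Discrete 1D Markov localization: action update by total probability
# (convolution with the motion model), perception update by Bayes rule.

def action_update(bel, move, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Shift the belief by `move` cells in a cyclic world, with motion noise."""
    n = len(bel)
    out = [0.0] * n
    for i in range(n):
        out[i] = (p_exact * bel[(i - move) % n]
                  + p_under * bel[(i - move + 1) % n]
                  + p_over * bel[(i - move - 1) % n])
    return out

def perception_update(bel, world, z, p_hit=0.6, p_miss=0.2):
    """Weight the belief by the measurement likelihood, then normalize (eta)."""
    unnorm = [b * (p_hit if w == z else p_miss) for b, w in zip(bel, world)]
    eta = sum(unnorm)
    return [u / eta for u in unnorm]

world = ['door', 'wall', 'door', 'wall', 'wall']
bel = [0.2] * 5                               # uniform initial belief
bel = perception_update(bel, world, 'door')   # seeing a door -> belief peaks at doors
bel = action_update(bel, 1)                   # moving blurs the belief again
print(bel)
```

The action step spreads probability mass (uncertainty grows); the perception step concentrates it (uncertainty shrinks), matching the two slides above.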

12 - Summary
31

Illustration of probabilistic map based localization

Initial probability distribution p(x) at t = 0

[Figure: evolution of the belief bel(x) over position x through successive updates]

Perception update:  bel(xt) = η p(zt | xt) bel̄(xt)

Action update:  bel̄(xt) = ∫ p(xt | ut, xt−1) bel(xt−1) dxt−1

Perception update:  bel(xt) = η p(zt | xt) bel̄(xt)




12 - Summary
35

Markov versus Kalman: summary


 Remember: both methods solve the convolution and the Bayes rule for the
action and the perception update respectively, but:



Autonomous Mobile Robots

SLAM

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
37

Simultaneous Localization And Mapping


The SLAM problem:
How can a body navigate in a previously unknown environment while constantly
building and updating a map of its workspace using on-board sensors only?

 Full SLAM: estimate p(x0:t, m0:n | z0:k, u0:t)

 Online SLAM: estimate p(xt, m0:n | z0:k, u0:t)

[Figure: SLAM as a Bayes network — true scene features m0 … m8, measurements z0 … z16, robot poses x0, x1, x2, x3, …, and control inputs u0, u1, u2, u3]
12 - Summary
38

SLAM Approaches

 Full graph optimization (OFF-LINE)
 Eliminate observation & control-input nodes
 Solve a global optimization

 Filtering (e.g. MonoSLAM)
 Eliminate past poses
 Summarize all experience with respect
to the last pose via a state vector and
covariance matrix

 Keyframes (e.g. PTAM)
 Retain the most representative poses
and their interdependencies
 Optimize the resulting graph

[Figure: the corresponding graphs over features m0 … m8 and poses x0 … x3 for each approach]


12 - Summary
39

SLAM methods

 EKF SLAM:
 the state vector yt stacks robot parameters and feature parameters:

   yt = [xt, m1, …, mn−1]ᵀ

   Pyt = | Pxx      Pxm1     …  Pxmn−1    |
         | Pm1x     Pm1m1    …  Pm1mn−1   |
         | …        …        …  …         |
         | Pmn−1x   Pmn−1m1  …  Pmn−1mn−1 |

 As the robot moves and makes measurements,
yt and Pyt are updated using the standard EKF equations
 Scales badly with the number of features  sparsify the covariance matrix

 Particle Filter SLAM
 Use a set of particles to represent the PDF of robot pose and map
 Each particle holds an estimate of the state vector and has an associated weight

 GraphSLAM
 Interprets SLAM as a network of springs, imposing constraints between robot
poses and feature locations
 Solution = the state of minimal energy of this network



Autonomous Mobile Robots

Cognition / Planning

[Figure: see-think-act scheme]

Zürich Autonomous Systems Lab


12 - Summary
41

The Planning Problem


 We can generally distinguish between
 (global) path planning and
 (local) obstacle avoidance.

 First step:
 Transformation of the map into a representation useful for planning
 This step is planner-dependent

 Second step:
 Plan a path on the transformed map

 Third step:
 Send motion commands to controller
 This step is planner-dependent (e.g. Model based feed forward, path following)



12 - Summary
42

Graph Search
 Methods
 Breadth First
 Depth First
 Dijkstra
 A* and variants
 D* and variants
 ...

 Discriminators
 f(n) = g(n) + ε·h(n)
 g(n') = g(n) + c(n, n')



12 - Summary
43

Graph Search Strategies: Breadth-First Search

[Figure: step-by-step breadth-first expansion of a search tree from the initial node A to the goal node L; the first path found to the goal is the optimal one]


12 - Summary
44

Graph Search Strategies: Breadth-First Search


 Corresponds to a wavefront expansion on a 2D grid
 Use of a FIFO queue
 First-found solution is optimal if all edges have equal costs
 Dijkstra's search is a "g(n)-sorted" HEAP variation of breadth-first search
 First-found solution is guaranteed to be optimal no matter the (positive!) cell cost

Fig. 1: NF1: put in each cell its L1-distance from the goal position (used also in local
path planning)
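The FIFO-queue wavefront expansion (NF1) can be sketched as follows; the grid below is an illustrative assumption (0 = free, 1 = obstacle), not a map from the lecture:

```python
# NF1 / wavefront expansion: each free cell gets its 4-connected (L1)
# distance from the goal, computed breadth-first with a FIFO queue.
from collections import deque

def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()           # FIFO -> breadth-first expansion
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
d = wavefront(grid, (0, 0))
print(d[2][2])  # 4: shortest 4-connected path length around the obstacle
```

A path is then recovered by descending the distance values from the start, which is exactly the local use mentioned in Fig. 1.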



12 - Summary
45

Graph Search Strategies: A* Search


 Similar to Dijkstra's algorithm, A* also uses a HEAP (but "f(n)-sorted")
 A* uses a heuristic function h(n) (often the Euclidean distance)
 f(n) = g(n) + ε·h(n)
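The f(n)-sorted heap can be sketched on a 4-connected grid; the Manhattan-distance heuristic, the grid, and ε = 1 are illustrative assumptions:

```python
# A* on a 4-connected unit-cost grid with f(n) = g(n) + eps * h(n).
import heapq

def astar(grid, start, goal, eps=1.0):
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic
    g = {start: 0}
    heap = [(eps * h(start), start)]        # f(n)-sorted HEAP
    while heap:
        f, node = heapq.heappop(heap)
        if node == goal:
            return g[node]                  # cost of the optimal path (eps <= 1)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1            # g(n') = g(n) + c(n, n'), unit cost
                if ng < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = ng
                    heapq.heappush(heap, (ng + eps * h((nr, nc)), (nr, nc)))
    return None                             # goal unreachable

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))  # 4
```

Setting ε > 1 inflates the heuristic: the search expands fewer nodes but may return a suboptimal path, which is the trade-off the ε in f(n) controls.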





12 - Summary
47

Exam
 Oral, 30 minutes

 3 Questions selected by examiners


 Application
 Basics 1
 Basics 2

 Questions given beforehand (Webpage, sent to all participants)


 http://www.asl.ethz.ch/education/master/mobile_robotics/Questions2012.pdf

 Example:
• Application: all-terrain demining in unstructured environments
• Basics 1: 3.2.3 Wheel kinematic constraints of the 5 wheel types, pros/cons of the wheel types
• Basics 2: 5.2.4 Odometric position estimation and error model for a differential drive robot, and their use in Markov and EKF localization
