Lecture 3
Mobile Robots
Pangun Park
[Figure: closed-loop block diagram with controller, robot model, and sensors]
[Figure: differential-drive robot with wheel radius R, wheelbase L, wheel velocities vl and vr, position (x, y), and heading φ]

Differential-drive kinematics:

ẋ = (R/2)(vr + vl) cos φ
ẏ = (R/2)(vr + vl) sin φ
φ̇ = (R/L)(vr − vl)
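As a concrete illustration (mine, not from the lecture), the kinematics above can be evaluated numerically; the function name and the parameter values below are placeholders.

import math

def diff_drive_kinematics(phi, v_r, v_l, R, L):
    """Differential-drive model: returns (x_dot, y_dot, phi_dot)."""
    x_dot = (R / 2.0) * (v_r + v_l) * math.cos(phi)
    y_dot = (R / 2.0) * (v_r + v_l) * math.sin(phi)
    phi_dot = (R / L) * (v_r - v_l)
    return x_dot, y_dot, phi_dot

# Equal wheel speeds -> straight-line motion (phi_dot = 0).
print(diff_drive_kinematics(phi=0.0, v_r=1.0, v_l=1.0, R=0.05, L=0.3))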
[Figure: unicycle at position (x, y) with heading φ]

Inputs: v, ω

Dynamics:

ẋ = v cos φ
ẏ = v sin φ
φ̇ = ω
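A minimal sketch (not from the slides) of simulating the unicycle model with a forward-Euler step; the step size, inputs, and number of steps are arbitrary choices.

import math

def unicycle_step(x, y, phi, v, omega, dt):
    """One forward-Euler step of the unicycle dynamics."""
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    phi += omega * dt
    return x, y, phi

# Constant v and omega drive the robot along a circular arc.
x, y, phi = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, phi = unicycle_step(x, y, phi, v=0.5, omega=0.2, dt=0.1)
print(x, y, phi)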
Matching the unicycle model

ẋ = v cos φ
ẏ = v sin φ
φ̇ = ω

with the differential-drive model

ẋ = (R/2)(vr + vl) cos φ
ẏ = (R/2)(vr + vl) sin φ
φ̇ = (R/L)(vr − vl)

gives

v = (R/2)(vr + vl)  →  2v/R = vr + vl
ω = (R/L)(vr − vl)  →  ωL/R = vr − vl

and solving for the wheel velocities:

vr = (2v + ωL) / (2R)
vl = (2v − ωL) / (2R)
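This mapping can be wrapped in a small helper; R and L are the wheel radius and wheelbase of whatever robot is used (the values below are made up).

def uni_to_diff(v, omega, R, L):
    """Map unicycle inputs (v, omega) to wheel velocities (v_r, v_l)."""
    v_r = (2.0 * v + omega * L) / (2.0 * R)
    v_l = (2.0 * v - omega * L) / (2.0 * R)
    return v_r, v_l

# Pure rotation (v = 0): the wheels spin in opposite directions.
print(uni_to_diff(v=0.0, omega=1.0, R=0.05, L=0.3))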
Two possibilities:
External sensors
Internal sensors
Orientation: Compass, ...
Position: Accelerometers, Gyroscopes, ...
Wheel Encoders
[Figure: distances Dl and Dr traveled by the left and right wheels; Dc is the distance traveled by the center of the robot]

Dc = (Dl + Dr) / 2

x' = x + Dc cos φ
y' = y + Dc sin φ
φ' = φ + (Dr − Dl) / L
DRIFT!!!
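A sketch of the encoder-based odometry update, assuming raw tick counts and an encoder resolution N (ticks per wheel revolution); all names here are illustrative. Because the update only adds increments, small measurement errors accumulate over time, which is exactly the drift noted above.

import math

def odometry_update(x, y, phi, ticks_l, ticks_r, N, R, L):
    """Update the pose estimate from wheel-encoder tick counts."""
    D_l = 2.0 * math.pi * R * ticks_l / N   # distance rolled by the left wheel
    D_r = 2.0 * math.pi * R * ticks_r / N   # distance rolled by the right wheel
    D_c = (D_l + D_r) / 2.0                 # distance traveled by the center
    x_new = x + D_c * math.cos(phi)
    y_new = y + D_c * math.sin(phi)
    phi_new = phi + (D_r - D_l) / L
    return x_new, y_new, phi_new

print(odometry_update(0.0, 0.0, 0.0, ticks_l=100, ticks_r=120, N=360, R=0.05, L=0.3))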
Robots need to know what the world around them looks like
Instead of worrying about the resolution of the sensors, assume we know the distance and direction to all obstacles around us (that are close enough).

Measurement to obstacle 1: (d1, φ1)
x1 = x + d1 cos(φ1 + φ)
y1 = y + d1 sin(φ1 + φ)
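Converting a measured (d1, φ1) pair into world coordinates is one line per axis; the sketch below simply restates the two equations above.

import math

def obstacle_position(x, y, phi, d1, phi1):
    """World-frame position of an obstacle seen at range d1 and bearing phi1."""
    x1 = x + d1 * math.cos(phi1 + phi)
    y1 = y + d1 * math.sin(phi1 + phi)
    return x1, y1

# Robot at (1, 2) heading pi/2; obstacle 3 m away, dead ahead (phi1 = 0).
print(obstacle_position(1.0, 2.0, math.pi / 2, d1=3.0, phi1=0.0))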
Given a desired heading φd, what should ω be?

r = φd,  e = φd − φ,  φ̇ = ω
This typically will not work since we are dealing with angles:
φd = 0, φ = 100π → e = −100π, even though φ = 100π is the same heading as φd = 0, so the true error is 0.
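One common fix (not spelled out on the slide) is to wrap the heading error back into (−π, π], for example via atan2; a small sketch:

import math

def angle_error(phi_d, phi):
    """Heading error wrapped to (-pi, pi]."""
    return math.atan2(math.sin(phi_d - phi), math.cos(phi_d - phi))

print(angle_error(0.0, 100 * math.pi))         # ~0, not -100*pi
print(angle_error(math.pi / 2, -math.pi / 2))  # ~pi: a half-turn error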
[Figure: robot at (x, y), goal at (xg, yg)]

φd = arctan((yg − y)/(xg − x))

ω = K(φd − φ)    ANGLES!!!

“ω = K(φd − φ)”, with the difference treated as an angle (wrapped into (−π, π]):    JUST RIGHT!!!
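Putting the pieces together, a hedged sketch of a proportional go-to-goal heading controller; the gain K is something to tune, and atan2 is used so that the desired heading lands in the correct quadrant.

import math

def go_to_goal_omega(x, y, phi, x_g, y_g, K=1.0):
    """Proportional heading controller toward the goal (x_g, y_g)."""
    phi_d = math.atan2(y_g - y, x_g - x)                          # desired heading
    e = math.atan2(math.sin(phi_d - phi), math.cos(phi_d - phi))  # wrapped error
    return K * e                                                  # omega

print(go_to_goal_omega(x=0.0, y=0.0, phi=0.0, x_g=1.0, y_g=1.0))  # turn left (~0.785*K)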
[Figure: obstacle at (xo, yo)]

φ1 = φobst + π              This is “pure” avoidance
φ2 = φobst ± π/2
φ3 = φgoal                  Pure go-to-goal
φ4 = F(φobst, φgoal)        Blended
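The candidate headings can be computed from the direction toward the obstacle; the blend function F and the weight alpha below are illustrative choices of mine, not the lecture's definition.

import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def candidate_headings(x, y, x_o, y_o, phi_goal, alpha=0.5):
    phi_obst = math.atan2(y_o - y, x_o - x)       # direction toward the obstacle
    phi_1 = wrap(phi_obst + math.pi)              # pure avoidance: straight away
    phi_2 = (wrap(phi_obst + math.pi / 2),
             wrap(phi_obst - math.pi / 2))        # slide around the obstacle
    phi_3 = phi_goal                              # pure go-to-goal
    # One possible blend F: a weighted combination of avoidance and goal headings.
    phi_4 = wrap(alpha * phi_1 + (1 - alpha) * phi_goal)
    return phi_1, phi_2, phi_3, phi_4

print(candidate_headings(0.0, 0.0, x_o=1.0, y_o=0.0, phi_goal=math.pi / 4))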