Motion Model
In this assignment you will recursively estimate the position of a vehicle along a trajectory using
available measurements and a motion model.
The vehicle is equipped with a very simple type of LIDAR sensor, which returns range and
bearing measurements corresponding to individual landmarks in the environment. The global
positions of the landmarks are assumed to be known beforehand. We will also assume known
data association, that is, which measurement belongs to which landmark.
Motion and Measurement Models
Motion Model
The vehicle motion model receives linear and angular velocity odometry readings as inputs, and
outputs the state (i.e., the 2D pose) of the vehicle:
$$\begin{align} \mathbf{x}_{k} &= \mathbf{x}_{k-1} + T \begin{bmatrix} \cos\theta_{k-1} & 0 \\ \sin\theta_{k-1} & 0 \\ 0 & 1 \end{bmatrix} \left( \begin{bmatrix} v_k \\ \omega_k \end{bmatrix} + \mathbf{w}_k \right) \, , \, \, \, \, \, \mathbf{w}_k \sim \mathcal{N}\left(\mathbf{0}, \mathbf{Q}\right) \end{align}$$
where $\mathbf{x}_k = \begin{bmatrix} x & y & \theta \end{bmatrix}^T$ is the current 2D pose of the vehicle, and $v_k$ and $\omega_k$ are the linear and angular velocity odometry readings, which we use as inputs to the model.
The process noise $\mathbf{w}_k$ has a (zero mean) normal distribution with a constant covariance $\mathbf{Q}$.
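As a quick illustration, the noise-free part of this model is straightforward to write down in code. The sketch below is ours (the function name and argument order are not part of the assignment code), with `T` denoting the sampling period:

```python
import numpy as np

def motion_model(x_prev, v_k, om_k, T):
    """Propagate the 2D pose [x, y, theta] one step using the noise-free motion model."""
    theta = x_prev[2]
    B = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    u = np.array([v_k, om_k])  # odometry inputs [v_k, omega_k]
    return x_prev + T * B.dot(u)
```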
Measurement Model
The measurement model relates the current pose of the vehicle to the LIDAR range and bearing
measurements $\mathbf{y}^l_k$.
$$\begin{align} \mathbf{y}^l_k = \begin{bmatrix} \sqrt{(x_l - x_k - d\cos\theta_{k})^2 + (y_l - y_k - d\sin\theta_{k})^2} \\ \mathrm{atan2}\left(y_l - y_k - d\sin\theta_{k}, \, x_l - x_k - d\cos\theta_{k}\right) - \theta_k \end{bmatrix} + \mathbf{n}^l_k \, , \, \, \, \, \, \mathbf{n}^l_k \sim \mathcal{N}\left(\mathbf{0}, \mathbf{R}\right) \end{align}$$
where $x_l$ and $y_l$ are the known global coordinates of landmark $l$, $x_k$, $y_k$, and $\theta_k$ make up the current vehicle pose, and $d$ is the known distance between the robot center and the LIDAR.
The landmark measurement noise $\mathbf{n}^l_k$ has a (zero mean) normal distribution with a constant covariance $\mathbf{R}$.
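Similarly, a minimal sketch of the noise-free measurement model (the helper name is ours; `d` is the LIDAR offset and `landmark` holds the known global position of landmark $l$):

```python
import numpy as np

def measurement_model(x_k, landmark, d):
    """Expected (noise-free) range and bearing from the LIDAR to one landmark."""
    x, y, theta = x_k
    d_x = landmark[0] - x - d * np.cos(theta)
    d_y = landmark[1] - y - d * np.sin(theta)
    r = np.sqrt(d_x**2 + d_y**2)          # range
    phi = np.arctan2(d_y, d_x) - theta    # bearing, relative to the heading
    return np.array([r, phi])
```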
Getting Started
Since the models above are nonlinear, we recommend using the extended Kalman filter (EKF) as
the state estimator. Specifically, you will need to provide code implementing the following steps:
- the prediction step, which uses odometry measurements and the motion model to produce a state and covariance estimate at a given timestep, and
- the correction step, which uses the range and bearing measurements provided by the LIDAR to correct the pose and the pose covariance estimates.
Unpack the Data
First, let's unpack the available data:
In [1]:
import pickle
import numpy as np
import matplotlib.pyplot as plt

# load the dataset (the pickle file path is an assumption; adjust it to your copy)
with open('data/data.pickle', 'rb') as f:
    data = pickle.load(f)

# input signal
v = data['v']    # translational velocity input [m/s]
om = data['om']  # rotational velocity input [rad/s]
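The filter will also need the timestamps, the initial pose, the landmark positions, the range and bearing measurements, and the LIDAR offset. Assuming the data dictionary exposes them under the keys below (an assumption; adjust the keys to match your data file), they can be unpacked the same way:

```python
t = data['t']              # timestamps [s]
x_init = data['x_init']    # initial x position [m]
y_init = data['y_init']    # initial y position [m]
th_init = data['th_init']  # initial heading [rad]

# bearing and range measurements, plus LIDAR constants
b = data['b']  # bearing to each landmark, in the frame attached to the laser [rad]
r = data['r']  # range measurements to each landmark [m]
l = data['l']  # global x, y positions of the landmarks [m]
d = data['d']  # distance between the robot center and the LIDAR [m]
```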
Notice that the orientation values are wrapped to the $[-\pi, \pi]$ range in radians.
Initializing Parameters
Now that our data is loaded, we can begin getting things set up for our solver. One of the most
important aspects of designing a filter is determining the input and measurement noise
covariance matrices, as well as the initial state and covariance values. We set the values here:
In [11]:
v_var = 0.01  # translational velocity variance
om_var = 0.01  # rotational velocity variance
# you are allowed to tune the measurement variances below
# r_var = 0.1  # range measurement variance
r_var = 0.01
# b_var = 0.1  # bearing measurement variance
b_var = 10
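With the variances chosen, we can also assemble the noise covariance matrices and allocate the arrays that will hold the estimates. The names below (Q_km, cov_y, x_est, P_est) are the ones used in the filter loop later in this notebook; the initial covariance is only a rough guess and can be tuned:

```python
Q_km = np.diag([v_var, om_var])   # input (process) noise covariance
cov_y = np.diag([r_var, b_var])   # measurement noise covariance

x_est = np.zeros([len(v), 3])     # estimated states: x, y, theta
P_est = np.zeros([len(v), 3, 3])  # state covariance matrices

x_est[0] = np.array([x_init, y_init, th_init])  # initial state
P_est[0] = np.diag([1, 1, 0.1])                 # initial state covariance (a guess)
```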
In order for the orientation estimates to coincide with the bearing measurements, it is also necessary to wrap all estimated $\theta$ values to the $[-\pi, \pi]$ range.
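A small helper can take care of the wrapping; this is one possible implementation (the name `wraptopi` matches the calls in the code below):

```python
def wraptopi(x):
    """Wrap an angle (or array of angles) in radians to the range [-pi, pi)."""
    return (x + np.pi) % (2 * np.pi) - np.pi
```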
Correction Step
First, let's implement the measurement update function, which takes an available landmark measurement $l$ and updates the current state estimate $\check{\mathbf{x}}_k$. For each landmark measurement received at a given timestep $k$, you should implement the following steps:

1. Compute the measurement model Jacobians $\mathbf{H}_k$ and $\mathbf{M}_k$.
2. Compute the Kalman gain $\mathbf{K}_k = \check{\mathbf{P}}_k \mathbf{H}_k^T \left(\mathbf{H}_k \check{\mathbf{P}}_k \mathbf{H}_k^T + \mathbf{M}_k \mathbf{R} \mathbf{M}_k^T\right)^{-1}$.
3. Correct the predicted state, $\hat{\mathbf{x}}_k = \check{\mathbf{x}}_k + \mathbf{K}_k\left(\mathbf{y}^l_k - \check{\mathbf{y}}^l_k\right)$, where $\check{\mathbf{y}}^l_k$ is the measurement predicted by the model.
4. Correct the covariance, $\hat{\mathbf{P}}_k = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\check{\mathbf{P}}_k$.
x_k = x_check[0]
y_k = x_check[1]
theta_k = wraptopi(x_check[2])

x_l = lk[0]
y_l = lk[1]

# expected range and bearing from the LIDAR to landmark l
d_x = x_l - x_k - d * np.cos(theta_k)
d_y = y_l - y_k - d * np.sin(theta_k)
r = np.sqrt(d_x**2 + d_y**2)
phi = np.arctan2(d_y, d_x) - theta_k

# 1.-3. Compute the Jacobians, the Kalman gain K_k, and the state correction
#       (one possible completion is sketched after this cell)

# 4. Correct covariance
P_check = (np.identity(3) - K_k.dot(H_k)).dot(P_check)
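One possible completion of the skeleton above is sketched here. The Jacobians follow from differentiating the measurement model with respect to the state and the noise; the signature (landmark position `lk`, range `rk`, bearing `bk`, plus the predicted state and covariance) is an assumption that matches how the function is called in the main-loop sketch further down, and `cov_y` and `d` are the measurement noise covariance and LIDAR offset set up earlier:

```python
def measurement_update(lk, rk, bk, P_check, x_check):
    # unpack the predicted pose (x_check is a 3x1 column vector)
    x_k, y_k, theta_k = x_check.ravel()
    theta_k = wraptopi(theta_k)

    x_l, y_l = lk[0], lk[1]

    # expected range and bearing from the LIDAR to landmark l
    d_x = x_l - x_k - d * np.cos(theta_k)
    d_y = y_l - y_k - d * np.sin(theta_k)
    r = np.sqrt(d_x**2 + d_y**2)
    phi = np.arctan2(d_y, d_x) - theta_k

    # 1. Measurement model Jacobians w.r.t. the state and the measurement noise
    H_k = np.array([
        [-d_x / r,    -d_y / r,    d * (d_x * np.sin(theta_k) - d_y * np.cos(theta_k)) / r],
        [ d_y / r**2, -d_x / r**2, -d * (d_x * np.cos(theta_k) + d_y * np.sin(theta_k)) / r**2 - 1]])
    M_k = np.identity(2)

    # 2. Kalman gain
    K_k = P_check.dot(H_k.T).dot(
        np.linalg.inv(H_k.dot(P_check).dot(H_k.T) + M_k.dot(cov_y).dot(M_k.T)))

    # 3. Correct the predicted state (wrap the bearing innovation and the heading)
    innovation = np.array([[rk - r], [wraptopi(bk - phi)]])
    x_check = x_check + K_k.dot(innovation)
    x_check[2] = wraptopi(x_check[2, 0])

    # 4. Correct the covariance
    P_check = (np.identity(3) - K_k.dot(H_k)).dot(P_check)

    return x_check, P_check
```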
Prediction Step
Now, implement the main filter loop, defining the prediction step of the EKF using the motion
model provided:
$$\begin{align} \check{\mathbf{x}}_k &= \mathbf{f}\left(\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k}, \mathbf{0}\right) \\ \check{\mathbf{P}}_k &= \mathbf{F}_{k-1}\hat{\mathbf{P}}_{k-1}\mathbf{F}_{k-1}^{T} + \mathbf{L}_{k-1}\mathbf{Q}_{k-1}\mathbf{L}_{k-1}^{T} \end{align}$$
where
$$\begin{align} \mathbf{F}_{k-1} = \frac{\partial \mathbf{f}}{\partial \mathbf{x}_{k-1}}\bigg|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k}, 0} \, , \, \, \, \, \mathbf{L}_{k-1} = \frac{\partial \mathbf{f}}{\partial \mathbf{w}_{k}}\bigg|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k}, 0} \, . \end{align}$$
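For reference, evaluating these partial derivatives for the motion model above (at the noise-free operating point, with $T$ the timestep between samples) gives

$$\begin{align} \mathbf{F}_{k-1} = \begin{bmatrix} 1 & 0 & -T v_k \sin\theta_{k-1} \\ 0 & 1 & T v_k \cos\theta_{k-1} \\ 0 & 0 & 1 \end{bmatrix} \, , \, \, \, \, \mathbf{L}_{k-1} = T \begin{bmatrix} \cos\theta_{k-1} & 0 \\ \sin\theta_{k-1} & 0 \\ 0 & 1 \end{bmatrix} \, . \end{align}$$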
In [18]:
#### 5. Main Filter Loop
#######################################################################
# set the initial values
P_check = P_est[0]
x_check = x_est[0, :].reshape(3, 1)

for k in range(1, len(t)):  # start at 1 because we've set the initial prediction

    # 1. Update the state with the odometry readings
    # 2. Compute the motion model Jacobian F_km with respect to the last state
    # 3. Compute the motion model Jacobian L_km with respect to the noise
    #    (one possible completion of steps 1-3 is sketched after this cell)

    # 4. Propagate uncertainty
    P_check = F_km.dot(P_check.dot(F_km.T)) + L_km.dot(Q_km.dot(L_km.T))
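One way to fill in the remaining steps of the loop is sketched below; it follows the prediction equations above, calls the measurement_update sketch from the previous section for every landmark visible at timestep $k$, and stores the results. The names (t, v, om, r, b, l, Q_km) match the earlier cells; treat this as a sketch rather than the reference solution:

```python
for k in range(1, len(t)):  # start at 1 because the initial state is already set
    delta_t = t[k] - t[k - 1]       # time elapsed since the previous measurement
    theta = wraptopi(x_check[2, 0])

    # 1. Update the state with the odometry readings (noise-free motion model)
    B = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    u = np.array([[v[k]], [om[k]]])
    x_check = x_check + delta_t * B.dot(u)
    x_check[2] = wraptopi(x_check[2, 0])

    # 2. Motion model Jacobian with respect to the previous state
    F_km = np.array([[1.0, 0.0, -delta_t * v[k] * np.sin(theta)],
                     [0.0, 1.0,  delta_t * v[k] * np.cos(theta)],
                     [0.0, 0.0,  1.0]])

    # 3. Motion model Jacobian with respect to the process noise
    L_km = delta_t * B

    # 4. Propagate uncertainty
    P_check = F_km.dot(P_check.dot(F_km.T)) + L_km.dot(Q_km.dot(L_km.T))

    # 5. Correct with every landmark measurement available at this timestep
    for i in range(len(r[k])):
        x_check, P_check = measurement_update(l[i], r[k, i], b[k, i], P_check, x_check)

    # store the results
    x_est[k, :] = x_check.ravel()
    P_est[k, :, :] = P_check
```

After the loop finishes, the estimates stored in x_est can be plotted: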
e_fig = plt.figure()
ax = e_fig.add_subplot(111)
ax.plot(t[:], x_est[:, 2])
ax.set_xlabel('Time [s]')
ax.set_ylabel('theta [rad]')
ax.set_title('Estimated trajectory')
plt.show()
Are you satisfied with your results? The resulting trajectory should closely resemble the ground
truth, with minor "jumps" in the orientation estimate due to angle wrapping. If this is the case,
run the code below to produce your solution file.
In [20]:
with open('submission.pkl', 'wb') as f:
    pickle.dump(x_est, f, pickle.HIGHEST_PROTOCOL)