
Probabilistic Robotics exam, spring 2012

May 30, 2012; 8.15–12.15.

The maximum number of points is 40. To pass the exam (with grade G), you need 20 points.
To pass with distinction (with grade VG), you need 30 points. Martin will come by at around
10 o’clock to clarify any questions that need clarification. Allowed tools are pen, paper, and a
calculator.
Good luck!

Question 1 (4 points). Both SLAM with Rao-Blackwellised particle filters and graph-based SLAM
have been covered in this course. For both approaches, give two examples of situations that
would “break” the algorithm.

Solution  Examples for particle filters:

1. A state space with more than three or four degrees of freedom (for
   example, fully three-dimensional poses).

2. A particle that is assigned a low probability at some point and therefore
   does not get resampled in the next iteration, but would have turned out
   to represent a correct path and map at a later stage.

3. Updating the filter (resampling) while the robot is stationary and does
   not receive new motion updates. The filter would deteriorate to fill up
   with copies of one single particle.

Examples for pose-graph SLAM:

1. The front-end inserts an erroneous edge with too high confidence (for
   example, a wrong data association between two distinct locations that
   look very similar, or a scan-registration error).

2. Failing to add a constraint between parts of the robot trajectory,
   leaving an unconnected graph.

3. A robot continuously running around a small loop in which all features
   are observable from each pose. This would take away the sparsity of
   the graph, eventually making the back-end (the graph optimisation
   step) prohibitively slow.

Question 2 (5 points). At the end of this exam is a multiple-choice question in which each item
has exactly two possible answers. Assume that a student knows the correct answer to a
proportion k of all the questions and makes a random guess for the remaining questions.
The teacher grading this exam observes that question two is correctly answered (Z2 =
correct) by this student. What is the probability that the student was guessing, given this
observation? Derive the formula for the conditional probability and calculate the actual
percentage for k = 0.5.

Solution Let the random variable X2 ∈ {guessing, ¬guessing} denote whether the
student is guessing or not for question two.
Using the assumption we have for how many questions the student knows
the correct answer to, the prior probability that the student is guessing is
p(X2 = guessing) = 1 − k = 0.5.
The conditional probability for providing a correct answer when guessing
is
p(Z2 = correct | X2 = guessing) = 0.5,
given that each question has two possible answers. The probability of a
correct answer when the student knows the answer is

p(Z2 = correct | X2 = ¬guessing) = 1.

Now, what is the posterior probability after an observation?

p(X2 = guessing | Z2 = correct)
  = p(correct | guessing) p(guessing) / p(correct)                                  (Bayes’ rule)
  = p(correct | guessing) p(guessing)
    / [p(correct | guessing) p(guessing) + p(correct | ¬guessing) p(¬guessing)]     (total probability)

This is the form of the conditional probability that was asked for in the
question. The actual percentage for k = 0.5 is

p(X2 = guessing | Z2 = correct) = (0.5 · 0.5) / (0.5 · 0.5 + 1 · 0.5) = 1/3.

In other words, the estimated probability that this student was guessing
decreased from 1/2 to 1/3 when observing a correct answer.
This answer assumes a perfect “sensor model” — that is, that the teacher
marks the answer as correct if and only if the answer actually is correct ;)
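
The result is easy to sanity-check numerically. A minimal sketch (the function and
variable names are my own, not part of the exam):

```python
def p_guessing_given_correct(k, n_alternatives=2):
    """Posterior probability that the student guessed, given a correct answer.

    k: proportion of questions the student actually knows.
    n_alternatives: number of possible answers per question (2 in this exam).
    """
    p_guess = 1.0 - k                       # prior: student guesses
    p_correct_given_guess = 1.0 / n_alternatives
    p_correct_given_known = 1.0             # perfect "sensor model"
    p_correct = (p_correct_given_guess * p_guess
                 + p_correct_given_known * k)          # total probability
    return p_correct_given_guess * p_guess / p_correct  # Bayes' rule

print(p_guessing_given_correct(0.5))  # 0.333... = 1/3
```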

Question 3 (2 + 3 points). Consider a world with only three possible robot locations:
X = {x1, x2, x3}. Consider a Monte Carlo localisation algorithm which may use N samples
among these locations. Initially, the samples are uniformly distributed over the three
locations. (As usual, it is perfectly acceptable if there are fewer particles than locations.)
Let Z be a Boolean sensor variable characterised by the following probabilities:

p(z | x1) = 0.8    p(¬z | x1) = 0.2
p(z | x2) = 0.4    p(¬z | x2) = 0.6
p(z | x3) = 0.1    p(¬z | x3) = 0.9

In other words, we have a high probability of observing Z = z at location x1, and a high
probability of observing Z = ¬z at location x3.
MCL uses these probabilities to generate particle weights, which are subsequently
normalised and used in the resampling process. For simplicity, let us assume we only generate
one new sample in the resampling process, regardless of N. This sample might correspond to
any of the three locations in X. Thus, the sampling process defines a probability distribution
over X. With N = ∞ this distribution would be equal to the true posterior.

a) (2 points) Based on the uniform prior distribution, calculate the true posterior p(xi | z)
for each of the locations X = {x1, x2, x3}.

b) (3 points) Assume that you use only two particles: N = 2. There are 3² = 9 possible
combinations for the initial particle set. The following table contains values which could be
used to calculate the resampling probability for the new sample. Fill in the missing values,
and compare the resulting probability distribution to the answer in question a). Are the
distributions the same? In what way is the particle filter with two particles biased? Explain
this difference.

no.  sample set  prob. of set  p(z|s) per sample  weights     resampling prob. (x1, x2, x3)
1    x1, x1      1/9           0.8, 0.8           1/2, 1/2    1/9,   0,     0
2    x1, x2      1/9           ..., 0.4           2/3, 1/3    2/27,  1/27,  0
3    x1, x3      1/9           ..., 0.1           8/9, ...    ...,   0,     1/81
4    x2, x1      1/9           ..., ...           ..., ...    ...,   1/27,  0
5    x2, x2      1/9           ..., ...           ..., ...    ...,   ...,   ...
6    x2, x3      1/9           ..., ...           4/5, 1/5    ...,   ...,   1/45
7    x3, x1      1/9           0.1, 0.8           1/9, 8/9    ...,   ...,   ...
8    x3, x2      1/9           ..., ...           ..., ...    ...,   ...,   ...
9    x3, x3      1/9           ..., ...           ..., ...    ...,   ...,   ...
Σ                                                             ... +  0.363 + ... = 1

Solution
a) This is another exercise in applying Bayes’ theorem.

p(xi | z) = p(z | xi) p(xi) / p(z) = p(z | xi) p(xi) / Σj p(z | xj) p(xj)

p(x1 | z) = (0.8 · 1/3) / (0.8 · 1/3 + 0.4 · 1/3 + 0.1 · 1/3) = 0.267 / 0.433 = 0.616
p(x2 | z) = (0.4 · 1/3) / (0.8 · 1/3 + 0.4 · 1/3 + 0.1 · 1/3) = 0.133 / 0.433 = 0.308
p(x3 | z) = (0.1 · 1/3) / (0.8 · 1/3 + 0.4 · 1/3 + 0.1 · 1/3) = 0.033 / 0.433 = 0.077
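
These values are easy to verify numerically. A minimal sketch (variable names are my own,
not from the exam):

```python
# True posterior over the three locations given Z = z, starting from a uniform prior.
priors = {"x1": 1/3, "x2": 1/3, "x3": 1/3}
likelihood_z = {"x1": 0.8, "x2": 0.4, "x3": 0.1}   # p(z | xi) from the question

evidence = sum(likelihood_z[x] * priors[x] for x in priors)            # p(z)
posterior = {x: likelihood_z[x] * priors[x] / evidence for x in priors}
print(posterior)   # {'x1': 0.615..., 'x2': 0.307..., 'x3': 0.076...}
```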

b) With a finite particle set, the filter is biased towards the prior distribution.
The full table for N = 2 looks like this:

no.  sample set  prob. of set  p(z|s) per sample  weights     resampling prob. (x1, x2, x3)
1    x1, x1      1/9           0.8, 0.8           1/2, 1/2    1/9,   0,     0
2    x1, x2      1/9           0.8, 0.4           2/3, 1/3    2/27,  1/27,  0
3    x1, x3      1/9           0.8, 0.1           8/9, 1/9    8/81,  0,     1/81
4    x2, x1      1/9           0.4, 0.8           1/3, 2/3    2/27,  1/27,  0
5    x2, x2      1/9           0.4, 0.4           1/2, 1/2    0,     1/9,   0
6    x2, x3      1/9           0.4, 0.1           4/5, 1/5    0,     4/45,  1/45
7    x3, x1      1/9           0.1, 0.8           1/9, 8/9    8/81,  0,     1/81
8    x3, x2      1/9           0.1, 0.4           1/5, 4/5    0,     4/45,  1/45
9    x3, x3      1/9           0.1, 0.1           1/2, 1/2    0,     0,     1/9
Σ                                                             0.457, 0.363, 0.180

(For an even more limited particle set, N = 1, the table would look as
follows.)
no.  sample set  prob. of set  p(z|s)  weight  resampling prob. (x1, x2, x3)
1    x1          1/3           0.8     1       1/3,    0,      0
2    x2          1/3           0.4     1       0,      1/3,    0
3    x3          1/3           0.1     1       0,      0,      1/3
Σ                                              0.333,  0.333,  0.333
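
The full enumeration can also be reproduced programmatically. The sketch below (my own
code, not part of the exam) enumerates every equally likely initial particle set, normalises
the particle weights, and accumulates the probability that the single resampled particle
lands on each location; the result matches the Σ rows above and shows the bias towards the
uniform prior.

```python
from itertools import product

locations = ["x1", "x2", "x3"]
likelihood_z = {"x1": 0.8, "x2": 0.4, "x3": 0.1}   # p(z | xi)

def resampling_distribution(n_particles):
    """Distribution of the single resampled particle, averaged over all
    equally likely initial particle sets of size n_particles."""
    resample_prob = {x: 0.0 for x in locations}
    sets = list(product(locations, repeat=n_particles))
    p_set = 1.0 / len(sets)                          # uniform prior over sets
    for particle_set in sets:
        weights = [likelihood_z[x] for x in particle_set]
        total = sum(weights)
        for x, w in zip(particle_set, weights):
            resample_prob[x] += p_set * w / total    # normalised weight
    return resample_prob

print(resampling_distribution(2))  # {'x1': 0.457..., 'x2': 0.363..., 'x3': 0.179...}
print(resampling_distribution(1))  # uniform: each 0.333...
```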

Question 4 (5 points). The following figure shows the set of possible positions (x, y) at time
(t + 1) of a mobile robot using an odometry-based motion model; i.e., the action u is given by
the odometry information (drot1, drot2, dtrans). The errors in drot1 and dtrans are assumed to
follow zero-mean uniform density functions prot1(a) and ptrans(d), respectively. Give the
analytical expressions of the two density functions!

Solution  The probability density function for the rotation should be

    prot1(a) = 1/α              for −α/2 ≤ a ≤ α/2,
    prot1(a) = 0                otherwise.

The function for the translation should be

    ptrans(d) = 1/(r1 − r2)     for −(r1 − r2)/2 ≤ d ≤ (r1 − r2)/2,
    ptrans(d) = 0               otherwise.
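
For illustration, a sketch of how one might sample from this motion model using the uniform
noise terms above (my own code and parameter names; the exam only asks for the densities,
and noise on drot2 is left out since the question does not specify it):

```python
import math
import random

def sample_uniform(width):
    """Draw from a zero-mean uniform density of total width `width`."""
    return random.uniform(-width / 2.0, width / 2.0)

def sample_motion(pose, u, alpha, width_trans):
    """Propagate a pose (x, y, theta) with odometry u = (drot1, drot2, dtrans),
    adding zero-mean uniform noise of width alpha to drot1 and width
    width_trans (= r1 - r2 above) to dtrans; drot2 is left noise-free here."""
    x, y, theta = pose
    d_rot1, d_rot2, d_trans = u
    d_rot1_hat = d_rot1 + sample_uniform(alpha)
    d_trans_hat = d_trans + sample_uniform(width_trans)
    x += d_trans_hat * math.cos(theta + d_rot1_hat)
    y += d_trans_hat * math.sin(theta + d_rot1_hat)
    theta += d_rot1_hat + d_rot2
    return (x, y, theta)
```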

Question 5 (4 points). Grid maps, as their name suggests, are grids where the value of each
cell represents the probability of that cell being occupied. The probability of each cell is
updated using a binary Bayes filter. What are the two main simplifying assumptions used in
such a filter to keep the problem tractable (i.e., less computationally complex than using no
assumptions)?

Solution  The two main simplifying assumptions are:

• The occupancy of each cell is independent of the other cells, given the
  measurement data.

• The pose of the robot is fully known.

An extra assumption is that

• the map is static (does not change over time).

Any combination of two of the three assumptions above has been counted as
a correct answer to the question.
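
For context, this is what the resulting per-cell update looks like in log-odds form (a minimal
sketch; the inverse sensor model values 0.7 and 0.3 are made-up placeholders, not from the exam):

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def update_cell(l_prev, p_occ_given_z, l_prior=0.0):
    """One binary Bayes filter step for a single grid cell in log-odds form.
    p_occ_given_z is the inverse sensor model p(cell occupied | z, pose)."""
    return l_prev + log_odds(p_occ_given_z) - l_prior

# Example: a cell hit by a beam endpoint twice (p = 0.7), then passed through once (p = 0.3).
l = 0.0                                   # prior log odds (p = 0.5)
for p in (0.7, 0.7, 0.3):
    l = update_cell(l, p)
prob = 1.0 - 1.0 / (1.0 + math.exp(l))    # back to probability: 0.7
print(prob)
```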

Question 6 (5 points). Explain the different steps of the Kalman filter. Divide them into
prediction and correction, and explain how the state and uncertainty evolve.

Solution  The idea was to check understanding of the overall concept of a
Kalman filter, rather than just a set of equations (possibly only memorised).
However, since this was not stated explicitly in the question, answers
containing only equations are also considered OK.
Your answer should include written text (or equations) describing the KF.
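
For reference, one prediction/correction cycle of a linear Kalman filter might be sketched as
follows (matrix names follow the common textbook convention; they are not prescribed by the exam):

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter cycle.
    Prediction: the state estimate is pushed through the motion model and the
    uncertainty grows (R is the motion noise covariance).
    Correction: the predicted measurement is compared with z and the Kalman
    gain K pulls the estimate towards the measurement, shrinking uncertainty
    (Q is the measurement noise covariance)."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```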

Question 7 (5 points). Extend the previous question with respect to EKF localisation. What
sensor data is typically used, and where? What models of uncertainty need to be provided,
and where are they used?

Solution  The following topics should be included in your answer to get the full five points:

• Stating, by equation or in text, that linearisation is used in the
  prediction and correction steps (2 points).

• Discussion of the sensor data (range / bearing / etc.), or providing a
  measurement equation (1 point).

• Models of uncertainty: the odometry (control input) covariance matrix in
  the prediction step (1 point), and the measurement covariance matrix in
  the correction step (1 point).
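
To illustrate where the linearisation and the measurement covariance enter, here is a sketch of
an EKF correction step for a single range/bearing measurement of a known landmark (my own
minimal example, not the exam's notation):

```python
import numpy as np

def ekf_correct_range_bearing(mu_bar, Sigma_bar, z, landmark, Q):
    """EKF correction with a range/bearing measurement z = (r, phi) of a
    landmark at a known position; Q is the measurement noise covariance."""
    x, y, theta = mu_bar
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    # Expected measurement h(mu_bar) and its Jacobian H (the linearisation).
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    S = H @ Sigma_bar @ H.T + Q
    K = Sigma_bar @ H.T @ np.linalg.inv(S)
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap angle
    mu = mu_bar + K @ innovation
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar
    return mu, Sigma
```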

Question 8 (7 points). True or false? A correct answer gives +1 point per item; a false answer
results in −1 point, but you cannot get negative total points for this question.

1. If two variables A and B are independent, then they remain independent given knowledge
of any other variable C.

2. Bayes filters assume conditional independence of sensor measurements taken at different
points in time, given the current and all past states.

3. A likelihood-field range-sensor model is more accurate than a beam model.

4. Occupancy grid maps in log-odds form are numerically more stable than in probability
form.

5. In certain degenerate cases a particle filter could still work even with a single particle.

6. Given the accumulated 3D rotation error between two nodes a and b in a pose graph,
encoded by a matrix R(x, y, z), distributing the rotation error over the n nodes between
a and b can accurately be done by applying R((i/n)·x, (i/n)·y, (i/n)·z) to each node i
between a and b.

7. When using the normal-distributions transform (NDT) to represent laser scan data,
an advantage of using a mixture of a normal and a uniform distribution (as opposed
to just a normal distribution) is that spurious scan points don’t “inflate” the normal
distribution unnecessarily.

Solution
1. false
The fact that p(A | B) = p(A) does not necessarily mean that p(A | B, C) =
p(A). For example, if A denotes the outcome of a roll of one die, B
denotes the outcome of another die, and C denotes whether the sum
of the two dice is odd or even, knowing the outcome of both B and C
does change the probability distribution for A.
2. true
This is the Markov assumption.
3. false
Using likelihood fields is an approach that can be used to speed up
evaluation of the sensor model in a grid map and to make the model
smoother, but it is not more accurate than a beam model.

4. true
Log odds go from minus to plus infinity, probabilities go from 0 to 1
(with 1 being the unstable value).

5. true
The degenerate case would be the one of fully deterministic motion,
and with a known start pose.

6. false
This is discussed in the paper by Grisetti et al. Three-dimensional
rotations do not commute, so the interpolation has to be done with
something like spherical linear interpolation (slerp) instead.

7. true
Trying to fit a single normal distribution to points in a plane, plus a
couple of extraneous points (for example, from a person passing by)
would result in a distribution with large covariance, which would not
in a meaningful way describe the underlying surface.
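
To make the slerp argument in item 6 concrete, a small sketch (using scipy; my own example,
not from the exam) that distributes a rotation error over n nodes by interpolating the rotation
vector, and compares it with naive per-axis scaling of the Euler angles:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Accumulated rotation error between nodes a and b (as Euler angles x, y, z, in radians).
R_err = Rotation.from_euler("xyz", [0.4, 1.2, -0.8])
n = 5  # number of nodes between a and b

for i in range(1, n + 1):
    frac = i / n
    # Geodesic interpolation (equivalent to slerp from the identity rotation).
    R_slerp = Rotation.from_rotvec(frac * R_err.as_rotvec())
    # Naive per-axis scaling of the Euler angles, as in item 6.
    R_euler = Rotation.from_euler("xyz", frac * R_err.as_euler("xyz"))
    # The two generally disagree, which is why per-axis scaling is inaccurate.
    diff_deg = np.degrees((R_slerp.inv() * R_euler).magnitude())
    print(f"node {i}: difference {diff_deg:.2f} deg")
```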
