AGV Software Task
Congratulations on making it this far into the selections for the AGV Software Team!
For this final phase of selection, five (5) tasks are given, the first one being mandatory. Attempt as many tasks as you can. For each task, you first need to understand the problem statement, read up on the concepts from the internet (some resources are provided in this document itself), and then implement (code) it.
The task explanations are given at the end of this document in a separate section.
1. Make your programs object-oriented if possible, although this is not a requirement.
2. The resources given in this document are not sufficient. You are expected to find more on your own.
3. You may discuss among yourselves and help each other, but sharing your code is strictly forbidden. The use of plagiarized code will result in complete and immediate disqualification of both candidates: the one from whom the code/idea was copied and the one who copied it.
4. The test data and their format for all tasks will be shared with you separately.
5. For the tasks that require it, you can visualize the path or data using Matplotlib or OpenCV.
6. We have allotted the time with sufficient consideration for various factors and hence cannot give an extension to anyone, as that would be extremely unfair to the rest of the candidates.
A word of advice: the first task is mandatory, but you are encouraged to do more (typically, candidates attempt two to three tasks in our selections). This will increase your chances of getting selected and will also help you explore your interests in the different fields we work on and get a general idea about them.
While collaborative discussions with peers are encouraged, participants are strictly
advised against plagiarizing code, as it may lead to disqualification.
ViZDoom
Introduction
While the task that follows does not directly relate to autonomous vehicles, many of the ideas and concepts you will use to solve it carry over directly to autonomous vehicles and automation in general. This includes, but is not limited to, planning in continuous spaces, designing controllers to carry out your planned trajectory, and working with simulators and integration so that you can test your algorithms before deploying them in the real world.
For this task, you will be using ViZDoom, an AI research platform/simulator based on Doom, the game that arguably birthed the FPS genre. You can visit their GitHub repo here. Follow the instructions there to set it up on your machine (for the most part, just a simple pip install) and go through the documentation and examples on how to use it. The overall goal of this task is to navigate a maze and reach a checkpoint.
Level 1
Load this custom .wad file into the simulator. A WAD file is a game data file used by Doom and Doom II, as well as other first-person shooters that use the original Doom engine. On correctly loading the .wad file, you will see something like this:
[Figure: initial first-person view after loading the custom .wad]
Global Planning
You can either use the automap from the simulator or use the map directly from here. The white pixel is the initial position of your ego, and blue is where you will find the blue skull needed to finish the level. You can use any planning algorithm you like to plan a trajectory, but we prefer RRT*. You will need to keep an eye out for the kinematic and dynamic constraints of your ego, since an arbitrary path connecting the two points might not be feasible for your controller to follow in the next level.
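If you want a starting point, below is a minimal RRT sketch on a 2D occupancy grid built from the map image; RRT* additionally runs a choose-parent and rewire step after each extension. The grid layout, start/goal pixel coordinates, and step size are illustrative assumptions, not values from the task.

import numpy as np

def rrt(grid, start, goal, step=10, max_iters=20000, goal_tol=15):
    # grid: 2D array, 0 = free, 1 = obstacle; start/goal in (row, col) pixels.
    nodes = [np.array(start, float)]
    parents = [0]
    free = lambda p: (0 <= p[0] < grid.shape[0] and 0 <= p[1] < grid.shape[1]
                      and grid[int(p[0]), int(p[1])] == 0)
    for _ in range(max_iters):
        sample = np.random.uniform([0, 0], grid.shape)
        nearest = min(range(len(nodes)),
                      key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        # Coarse collision check: only the endpoint is tested here; a real
        # planner should also check the segment between the nodes.
        if not free(new):
            continue
        nodes.append(new)
        parents.append(nearest)
        if np.linalg.norm(new - np.array(goal, float)) < goal_tol:
            # Walk back up the tree to recover the path from start to goal.
            path, i = [new], len(nodes) - 1
            while i != 0:
                i = parents[i]
                path.append(nodes[i])
            return path[::-1]
    return None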
Trajectory Following
You will now need to translate your global trajectory into actions in space and direct your ego to the blue skull. You might need a closed-loop controller that corrects the errors that can build up while following the trajectory.
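As one possible closed-loop scheme, a simple proportional heading controller can chase successive waypoints along the planned path. The sketch below assumes the scenario config exposes the TURN_LEFT_RIGHT_DELTA and MOVE_FORWARD buttons in that order, and that the pose game variables are enabled; adjust it to your actual button layout.

import math
import vizdoom as vzd

def follow_waypoint(game, wx, wy, kp=0.5):
    # Current pose from game variables (enable these in the scenario config).
    x = game.get_game_variable(vzd.GameVariable.POSITION_X)
    y = game.get_game_variable(vzd.GameVariable.POSITION_Y)
    yaw = math.radians(game.get_game_variable(vzd.GameVariable.ANGLE))
    # Heading error toward the waypoint, wrapped to [-pi, pi].
    desired = math.atan2(wy - y, wx - x)
    err = math.atan2(math.sin(desired - yaw), math.cos(desired - yaw))
    # Proportional turn command; TURN_LEFT_RIGHT_DELTA expects degrees.
    # Flip the sign of `turn` if the ego turns the wrong way.
    turn = math.degrees(kp * err)
    game.make_action([turn, 1.0])   # assumed order: [turn delta, move forward]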
Level 2
You will no longer have access to the automap or a predefined map; you are only allowed to use the data from the buffers. The objective remains the same. You will most likely need to implement some form of depth-first search (DFS) through the space while keeping approximate track of where you are, along with controls for backtracking and behaviors for finding open passages; however, you are free to take any approach you like. An example depth buffer is shown below:
[Figure: an example depth buffer rendered by the simulator]
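For reference, a minimal sketch of enabling and reading the depth buffer through the ViZDoom Python API might look like this (the config name and action vector below are placeholders for whatever your scenario defines):

import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("level2.cfg")           # placeholder config name
game.set_depth_buffer_enabled(True)      # expose state.depth_buffer
game.init()

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    depth = state.depth_buffer           # 2D numpy array, per-pixel depth
    # e.g. a crude open-passage heuristic: steer toward the image column
    # with the largest mean depth.
    # column_depths = depth.mean(axis=0)
    game.make_action([0.0, 0.0, 1.0])    # must match the buttons in the cfg
game.close()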
Submission
To make your submission for this task, use this modified version of the ViZDoom repository. The examples folder contains two Python files, one each for Level 1 and Level 2 of this task. Add your code to these files and submit them. We have already set up the .wad file of the custom map in this repository and updated the configuration files accordingly.
Feel free to run and modify the existing scripts in the examples folder to get a better understanding of the environment.
In addition to the code, submit an image of the final global path that your algorithm produced for the given map image, and a screen recording of the remaining parts of the task.
Localization
Consider the odometry-based motion model s_{t+1} = g(u_t, s_t), where s_{t+1} = [x_{t+1}, y_{t+1}, θ_{t+1}] is the prediction of the motion model. Derive the Jacobians of g with respect to the state, G = ∂g/∂s, and with respect to the control, V = ∂g/∂u.
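For reference, assuming the standard odometry parameterization u_t = [δ_rot1, δ_trans, δ_rot2] (verify this against the starter code, which may parameterize the control differently), the model and its Jacobians work out to:

g(u_t, s_t) = [ x_t + δ_trans·cos(θ_t + δ_rot1),
                y_t + δ_trans·sin(θ_t + δ_rot1),
                θ_t + δ_rot1 + δ_rot2 ]

G = ∂g/∂s = [ 1  0  −δ_trans·sin(θ_t + δ_rot1)
              0  1   δ_trans·cos(θ_t + δ_rot1)
              0  0   1 ]

V = ∂g/∂u = [ −δ_trans·sin(θ_t + δ_rot1)   cos(θ_t + δ_rot1)   0
               δ_trans·cos(θ_t + δ_rot1)   sin(θ_t + δ_rot1)   0
               1                            0                   1 ]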
Description of Task
In this task, you will implement an Extended Kalman Filter (EKF) and Particle Filter
(PF) for localizing a robot based on landmarks.
We will use the odometry-based motion model. We assume that there are landmarks
present in the robot’s environment. The robot receives the bearings (angles) to the
landmarks and the ID of the landmarks as observations: (bearing, landmark ID).
We assume a noise model for the odometry motion model with parameters α, and a separate noise model for the bearing observations with parameter β. The landmark ID observation is noise-free. See the provided starter code for implementation details.
At each timestep, the robot starts from the current state and moves according to the
control input. The robot then receives a landmark observation from the world. You
will use this information to localize the robot over the whole time sequence with an
EKF and PF.
Code Overview
The starter code is written in Python and depends on NumPy and Matplotlib. This
section gives a brief overview of each file. Feel free to make changes to the skeleton
code given as well.
• localization.py
- The main script; run it to simulate the robot and evaluate your filters.
• soccer_field.py
- This implements the dynamics and observation functions, as well as the noise models for both. Add your Jacobian implementations here!
• utils.py
- This contains assorted plotting functions, as well as a useful function for normalizing an angle to [−π, π].
• policies.py
• ekf.py
- Add your extended Kalman filter implementation here.
• pf.py
- Add your particle filter implementation here.
Command-Line Interface
To visualize the robot in the soccer field environment, run localization.py; the -h flag lists all available options:
$ python localization.py -h
The blue line traces out the robot's position, a result of noisy actions. The green line traces the robot's position assuming the actions weren't noisy. After implementing a filter, the filter's estimate of the robot's position will be drawn in red.
Data Format
• state: [x, y, θ]
Hints
• Be sure to call utils.minimized_angle whenever an angle or angle difference could fall outside [−π, π].
EKF Implementation
Implement the extended Kalman filter algorithm in ekf.py. You need to complete the ExtendedKalmanFilter.update method and the Field methods G, V, and H. A successful implementation should produce results comparable to the reference plots provided with the task.
(a) Plot the real robot path and the filter path under the default parameters pro-
vided.
(b) Plot the mean position error as the α and β factors range over
r = [1/64, 1/16, 1/4, 4, 16, 64] and discuss any interesting observations. Run 10 trials
per value of r.
(c) Plot the mean position error and ANEES (average normalized estimation error
squared) as the filter α, β factors vary over r (as above), while the data is generated
with the default. Discuss any interesting observations.
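As a rough shape of the computation, a single EKF predict/update step could look like the sketch below. The env.* method names (forward, observe, G, V, H, noise_from_motion) mirror the files described above but are assumptions; check them against the starter code's actual signatures. Shapes assume column-vector states.

import numpy as np
from utils import minimized_angle  # wraps an angle to [-pi, pi]

def ekf_step(env, mu, sigma, u, z, marker_id, alphas, beta):
    # Predict: push the mean through the motion model g, and the covariance
    # through its Jacobians G (w.r.t. state) and V (w.r.t. control).
    G = env.G(mu, u)
    V = env.V(mu, u)
    M = env.noise_from_motion(u, alphas)       # control noise covariance
    mu_bar = env.forward(mu, u)
    sigma_bar = G @ sigma @ G.T + V @ M @ V.T

    # Update: correct with the bearing observation to the given landmark.
    H = env.H(mu_bar, marker_id)               # 1x3 observation Jacobian
    S = H @ sigma_bar @ H.T + beta             # innovation covariance
    K = sigma_bar @ H.T @ np.linalg.inv(S)     # Kalman gain (3x1)
    innovation = minimized_angle(z - env.observe(mu_bar, marker_id))
    mu = mu_bar + K @ np.atleast_2d(innovation)
    sigma = (np.eye(3) - K @ H) @ sigma_bar
    return mu, sigma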
PF Implementation
Implement the particle filter algorithm in pf.py. You need to complete the
ParticleFilter.update and ParticleFilter.resample methods.
(a) Plot the real robot path and the filter path under the default parameters.
(b) Plot the mean position error as the α, β factors range over r and discuss.
(c) Plot the mean position error and ANEES as the filter α, β factors vary over r while
the data is generated with the default.
(d) Plot the mean position error and ANEES as the α, β factors range over r, and the
number of particles varies over [20, 50, 500].
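For the resampling step, low-variance (systematic) resampling is a common choice. A sketch, under the assumption that particles are stored as an (n, 3) array with one weight per particle:

import numpy as np

def low_variance_resample(particles, weights):
    # Systematic resampling: one random offset, then n evenly spaced
    # pointers into the cumulative weight distribution.
    n = len(particles)
    weights = weights / weights.sum()
    positions = (np.arange(n) + np.random.uniform()) / n
    indices = np.searchsorted(np.cumsum(weights), positions)
    indices = np.minimum(indices, n - 1)   # guard against float round-off
    return particles[indices], np.full(n, 1.0 / n)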
Submission
Submit a zip file containing all files with your code, and a pdf file with the results
and the plots mentioned in the implementation details above.
Note: since the factors are multiplied with the variances, this range corresponds to between 1/8 and 8 times the default noise values.
ImageMatchX
Introduction
The task revolves around a document classification method known as Bag of Words
(BoW). This technique represents documents as vectors or histograms, where each
word’s count within the document is recorded. The objective is to identify documents
of the same category by comparing their word distributions. By analyzing a new
document’s word frequencies and comparing them to existing class histograms, we
can determine its likely classification. This method assumes that documents within
the same class will share similar word distributions, enabling effective categorization
based on word occurrence.
Provided Code
1. createFilterBank: This function generates a set of image convolution filters; see Figure 6. There are 4 filter types, each made at 5 different scales, for a total of 20 filters. The filter types are Gaussian, Laplacian of Gaussian, X gradient of Gaussian, and Y gradient of Gaussian.
10. getVisualWords.py: A function to map each pixel in the image to its closest
word in the dictionary.
Problem Description
Write a function to extract filter responses, applying all n filters to each of the 3 color channels of the input image.
In your write-up: Show an image from the data set and 3 of its filter responses.
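A possible shape for this function, assuming SciPy is available and the filter bank is a list of 2D kernels (as createFilterBank suggests):

import numpy as np
from scipy import ndimage

def extract_filter_responses(image, filter_bank):
    # Stack the response of every filter on every color channel:
    # output is H x W x (3 * len(filter_bank)).
    image = image.astype(np.float64)
    responses = []
    for f in filter_bank:
        for c in range(3):
            responses.append(ndimage.convolve(image[:, :, c], f))
    return np.stack(responses, axis=-1)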
1.2
Write two functions that return a list of points in an image; these points will then be used to generate visual words.
Next, write a function that uses the Harris corner detection algorithm to select key points from an input image.
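One way to implement the Harris selector with OpenCV; alpha (the number of points kept) and k are illustrative defaults, not values from the handout:

import cv2
import numpy as np

def get_harris_points(image, alpha=500, k=0.04):
    # Harris response on the grayscale image, then keep the alpha
    # strongest locations as (row, col) pairs.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)
    flat = np.argsort(response, axis=None)[-alpha:]
    rows, cols = np.unravel_index(flat, response.shape)
    return np.stack([rows, cols], axis=1)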
1.3
Write a function to map each pixel in the image to its closest word in the dictionary.
In your write-up: Show the word maps for 3 different images from two different
classes (6 images total).
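A sketch of the mapping, reusing the extract_filter_responses sketch from 1.1 and assuming the dictionary is a K x d array of cluster centers:

import numpy as np
from scipy.spatial.distance import cdist

def get_visual_words(image, dictionary, filter_bank):
    # Nearest dictionary word (Euclidean distance in filter-response space)
    # for every pixel; returns an H x W word map.
    H, W = image.shape[:2]
    responses = extract_filter_responses(image, filter_bank).reshape(H * W, -1)
    distances = cdist(responses, dictionary)
    return np.argmin(distances, axis=1).reshape(H, W)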
2.2
Create a function that extracts the histogram of visual words within the given image.
In your write-up: Show the word maps for 3 different images from two different
classes (6 images total). Do this for each of the two dictionary types (random and
Harris).
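A minimal sketch of the histogram feature, assuming L1 normalization (the handout may specify a different normalization):

import numpy as np

def get_image_features(wordmap, dict_size):
    # Count how often each of the dict_size words occurs, then normalize
    # so histograms of different-sized images are comparable.
    hist, _ = np.histogram(wordmap, bins=np.arange(dict_size + 1))
    return hist / hist.sum()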
2.3
Making use of nearest-neighbor classification, write a script that saves visionRandom.pkl and visionHarris.pkl; in each pickle, store a dictionary that contains:
You will need to load the train image names and train labels from traintest.pkl.
Load the dictionaries from dictionaryRandom.pkl and dictionaryHarris.pkl that you saved in part 1.3.
Write a function that searches for the most similar image within a set of images, using a suitable similarity measure.
In your write-up: Show the images most similar to a query image, along with their similarity scores.
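For instance, with histogram intersection as the similarity measure (an assumption; any reasonable histogram distance works), the search could look like:

import numpy as np

def histogram_intersection(h, hists):
    # Similarity of query histogram h to each row of hists (higher = closer).
    return np.minimum(h, hists).sum(axis=1)

def find_most_similar(query_hist, train_hists, train_labels):
    sims = histogram_intersection(query_hist, train_hists)
    best = int(np.argmax(sims))
    return best, train_labels[best], sims[best]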
Links
• Data: Google Drive Link
3D Reconstruction
Introduction
In the realm of autonomous ground vehicles, the ability to perceive and understand
the surrounding environment accurately is crucial for safe and efficient operation.
One key component of perception is 3D reconstruction, which involves creating a
detailed and comprehensive representation of the scene in three dimensions. This
process plays a vital role in enabling autonomous vehicles to make informed deci-
sions, accurately detect objects, navigate complex environments, and effectively plan
their trajectories.
Task
This task requires you to construct a 3D representation of the surroundings from multiple-view images. A detailed description of the concept behind the task and how you are expected to tackle it is given in the Link. All the required data is also provided at the same link.
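As a starting point, here is a minimal two-view sketch with OpenCV (feature matching, essential-matrix estimation, and triangulation). It assumes known camera intrinsics K, whereas the linked description covers the full multi-view pipeline expected for the task.

import cv2
import numpy as np

def two_view_points(img1, img2, K):
    # Detect and match ORB features between the two views.
    orb = cv2.ORB_create(5000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Estimate relative pose from the essential matrix (RANSAC for outliers).
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    # Triangulate matched points into 3D (first camera at the origin).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4[:3] / pts4[3]).T    # N x 3 points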
Contact Information
Name Phone Number Email