AUV IITK Report
Abstract
This report presents a concise description of the motion control software that enables the underwater vehicle Varun to perform various tasks. The main objective is to take data from the cameras and sensors and then perform manoeuvres, using a robust algorithm and precise motion control to complete the required tasks. The software has been developed over time not merely to serve the bare minimum purpose, but to be as easy to use as possible.
Keywords: AUV, IITK, ROS, OpenCV, Gazebo
Contents
1 Introduction
1.1 Line Detection
1.2 Buoy Hitting
1.3 Passing through a Gate
1.4 Torpedo Shooting
2 Implementation
2.1 Sensors i.e. Hardware Layer
2.2 Motion Library Layer
2.3 Task Handler Layer
2.4 Master Layer
2.5 Debug Layer
2.6 Software and Tools used
2.6.1 Robot Operating System a.k.a. ROS
2.6.2 OpenCV
2.6.3 Gazebo
2.6.4 Other Tools and Software
4 Future Work
1. Introduction
Motion control is an indispensable part of an autonomous vehicle, as it is responsible for the movement of the robot underwater. The main objective is to take input from the cameras after image processing and then perform the required tasks, such as detection, hitting, and dropping, by carefully analyzing the data from sensors such as the IMU and the pressure sensor (interfaced through an Arduino) and then applying the required controls to achieve the desired motion. To decrease the complexity of the task, we broke it down into levels of abstraction with the help of ROS nodes. We also have a master node that coordinates the work of all other nodes so that the program runs coherently. We explain the tasks at hand in the following sub-sections and then explain how they lead to a similar solution.
2. Implementation
The task at hand is gargantuan and needs to be abstracted into different modules. ROS is perfect for this abstraction: it divides each task into a node with a specific purpose and a fixed structure. Input from the sensors and output to the Arduino are handled by the sensor nodes in the Hardware layer. Individual calibrated motions are carried out by nodes in the Motion Library layer. Finally, in the top layer of abstraction, the Task Handler layer contains nodes that implement individual tasks with the help of nodes lower in the hierarchy.
2.1. Sensors i.e. Hardware Layer
• Arduino Node - handles communication between ROS and the thrusters and the pressure sensor (a minimal sketch of such a node follows this item):
– Subscribes to data on four different topics for the four different motions, and publishes the pressure sensor data on a topic.
– When a PWM value arrives on a topic, its callback function is called. It first applies calibration to the PWM to obtain the individual PWM of each thruster, then sends it to the thrusters.
– For the turn motion, it uses the side thrusters if the robot is also moving sideways; otherwise it uses the thrusters at the ends.
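The node's source is not included in the report; the following is a minimal roscpp sketch of what such a hardware-layer node could look like. The topic names, calibration constants, and the sendToThrusters helper are all assumptions made for illustration, not the team's actual interfaces.

```cpp
#include <ros/ros.h>
#include <std_msgs/Int32.h>
#include <std_msgs/Float64.h>

// Hypothetical per-thruster calibration: scale and offset are placeholders;
// the real values would come from the thruster calibration described later.
static int calibrate(int pwm, double scale, int offset) {
  return static_cast<int>(pwm * scale) + offset;
}

// Placeholder for the serial write to the Arduino driving the thrusters.
static void sendToThrusters(int left, int right) {
  ROS_INFO("thruster PWM: left=%d right=%d", left, right);
}

// Callback for the forward-motion topic: apply calibration per thruster,
// then forward the result to the hardware.
void forwardCb(const std_msgs::Int32::ConstPtr& msg) {
  int left = calibrate(msg->data, 1.00, 0);
  int right = calibrate(msg->data, 0.93, 12);  // assumed asymmetry
  sendToThrusters(left, right);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "hardware_layer_node");
  ros::NodeHandle nh;

  // One subscriber per motion; only forward is shown here.
  ros::Subscriber fwd = nh.subscribe("/motion/forward_pwm", 10, forwardCb);

  // Publish pressure-sensor readings for the depth controller.
  ros::Publisher pressure =
      nh.advertise<std_msgs::Float64>("/sensors/pressure", 10);

  ros::Rate rate(20);  // assumed sensor rate
  while (ros::ok()) {
    std_msgs::Float64 p;
    p.data = 101.3;  // placeholder: read from the actual sensor
    pressure.publish(p);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```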
• Camera and Image Processing - This node takes raw frames from the two cameras (front and bottom) and publishes them on two different topics for processing by the task nodes. This is done frame by frame at a fixed rate (a sketch follows this item).
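Again the source is not reproduced in the report; below is a minimal sketch of such a camera node, assuming an OpenCV capture device and the hypothetical topic name camera/front/image_raw:

```cpp
#include <ros/ros.h>
#include <std_msgs/Header.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/videoio.hpp>

int main(int argc, char** argv) {
  ros::init(argc, argv, "front_camera_node");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);

  // Hypothetical topic name; the bottom camera would get its own node/topic.
  image_transport::Publisher pub = it.advertise("camera/front/image_raw", 1);

  cv::VideoCapture cap(0);  // assumed device index for the front camera
  if (!cap.isOpened()) {
    ROS_ERROR("Could not open front camera");
    return 1;
  }

  ros::Rate rate(15);  // assumed publishing rate
  cv::Mat frame;
  while (ros::ok()) {
    cap >> frame;  // grab one frame
    if (!frame.empty()) {
      // Convert the OpenCV image to a ROS sensor_msgs/Image and publish.
      pub.publish(
          cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg());
    }
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```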
2.2. Motion Library Layer
• Forward
• Sideward
• Turn
• Upward
All four motions use a similar control algorithm; a minimal sketch of such a control loop follows.
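The controller itself is not given in the report; since the report later mentions calibrating PID constants for the motion library, a per-axis PID loop is a plausible form for it. The gains, output limits, and names below are assumptions:

```cpp
#include <algorithm>

// Minimal PID controller for one motion axis (e.g. depth from the
// pressure sensor, or yaw from the IMU). Gains are placeholders.
class Pid {
 public:
  Pid(double kp, double ki, double kd) : kp_(kp), ki_(ki), kd_(kd) {}

  // error = setpoint - measurement; dt in seconds.
  double update(double error, double dt) {
    integral_ += error * dt;
    double derivative = (error - prev_error_) / dt;
    prev_error_ = error;
    double out = kp_ * error + ki_ * integral_ + kd_ * derivative;
    // Clamp to the PWM range the thrusters accept (assumed symmetric).
    return std::clamp(out, -400.0, 400.0);
  }

 private:
  double kp_, ki_, kd_;
  double integral_ = 0.0;
  double prev_error_ = 0.0;
};
```

Each motion node would run such a loop at a fixed rate on its own error signal (e.g. yaw for Turn, depth for Upward) and publish the resulting PWM on its topic for the hardware layer.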
• Image Calibration - used to find the HSV values of the line and the buoy before the task starts (a sketch of a typical tool follows).
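A common way to build such a tool with OpenCV is trackbar-driven HSV thresholding; the sketch below assumes that approach and is not the team's actual tool:

```cpp
#include <opencv2/opencv.hpp>

int main() {
  // HSV bounds tuned interactively via trackbars (initial values arbitrary).
  int lh = 0, ls = 100, lv = 100, uh = 179, us = 255, uv = 255;

  cv::namedWindow("mask");
  cv::createTrackbar("low H", "mask", &lh, 179);
  cv::createTrackbar("low S", "mask", &ls, 255);
  cv::createTrackbar("low V", "mask", &lv, 255);
  cv::createTrackbar("high H", "mask", &uh, 179);
  cv::createTrackbar("high S", "mask", &us, 255);
  cv::createTrackbar("high V", "mask", &uv, 255);

  cv::VideoCapture cap(0);  // assumed camera index
  cv::Mat frame, hsv, mask;
  while (cap.read(frame)) {
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Keep only pixels inside the current HSV range (e.g. the orange line).
    cv::inRange(hsv, cv::Scalar(lh, ls, lv), cv::Scalar(uh, us, uv), mask);
    cv::imshow("mask", mask);
    if (cv::waitKey(30) == 27) break;  // Esc quits with the chosen values
  }
  return 0;
}
```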
2.6.1. Robot Operating System a.k.a. ROS
ROS features and packages that we used include:
• actionlib for communication between the different layers and for handling the actions of nodes.
• Dynamic Reconfigure to change the parameters used by various nodes at run time.
• roslaunch files to automate launching the individual processes, saving a lot of time.
• The rqt_reconfigure package, an interactive graphical user interface (GUI) used to operate the robot and adjust parameters.
• rosbag to record the data of all sensors and topics during testing for later playback and simulation.
2.6.2. OpenCV
For the computer vision code we use OpenCV, an open-source C++ library with various built-in algorithms for object detection. It can easily be used with the ROS framework, which is essential for our purpose.
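As an illustration of this kind of detection (here for the line-following task), the sketch below estimates the orientation of a line in a binary mask using a probabilistic Hough transform; the function name and all parameters are placeholder assumptions, not the team's code:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Estimate the orientation (degrees) of the pool-floor line in a binary
// mask produced by HSV thresholding. Returns false if no line is found.
bool lineOrientation(const cv::Mat& mask, double* angle_deg) {
  std::vector<cv::Vec4i> lines;
  // Parameters (vote threshold, min length, max gap) are placeholders.
  cv::HoughLinesP(mask, lines, 1, CV_PI / 180, 50, 50.0, 10.0);
  if (lines.empty()) return false;

  // Average the segment angles as a crude orientation estimate.
  double sum = 0.0;
  for (const cv::Vec4i& l : lines)
    sum += std::atan2(l[3] - l[1], l[2] - l[0]);
  *angle_deg = (sum / lines.size()) * 180.0 / CV_PI;
  return true;
}
```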
2.6.3. Gazebo
• Gazebo is the robot simulator that we used for the project.
• We designed the arena in the simulator and applied thrust and drag forces to reproduce the environment in which the bot will operate (a minimal drag-force plugin sketch follows this list).
• The simulator approximates real-life conditions, which helps in debugging the code before actual testing in the pool.
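One way to apply such forces in Gazebo is through a model plugin. The sketch below assumes Gazebo classic, quadratic drag, a link named base_link, and an arbitrary drag coefficient; the actual plugin may differ:

```cpp
#include <functional>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <ignition/math/Vector3.hh>

namespace gazebo {

// Model plugin that applies a quadratic drag force F = -k * |v| * v
// to the vehicle's hull link on every simulation step.
class SimpleDragPlugin : public ModelPlugin {
 public:
  void Load(physics::ModelPtr model, sdf::ElementPtr /*sdf*/) override {
    link_ = model->GetLink("base_link");  // assumed link name
    update_ = event::Events::ConnectWorldUpdateBegin(
        std::bind(&SimpleDragPlugin::OnUpdate, this));
  }

 private:
  void OnUpdate() {
    ignition::math::Vector3d v = link_->WorldLinearVel();
    link_->AddForce(v * (-kDrag * v.Length()));
  }

  physics::LinkPtr link_;
  event::ConnectionPtr update_;
  const double kDrag = 20.0;  // assumed drag coefficient
};

GZ_REGISTER_MODEL_PLUGIN(SimpleDragPlugin)

}  // namespace gazebo
```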
2.6.4. Other Tools and Software
• Git - a version control system that lets more than one person work on the same code.
• We calibrated the PID constants for the motion library to make the motion smooth and fast.
• The thrusters produced different thrusts, so the robot did not move in a straight line. We therefore calibrated the thrusters and plotted the data to obtain the relation between their forces and PWM, and then applied the PWM accordingly (see the sketch after this list).
• As the testing of algorithms cannot always be done underwater, we also built a ground bot and tested our code on it.
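The calibration data itself is not reproduced in the report; the sketch below shows one way a fitted force-to-PWM curve could be applied, with invented sample points standing in for the measured ones:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <utility>

// Measured (force in N, PWM) pairs for one thruster; the numbers here are
// invented placeholders, the real table comes from the plotted calibration.
const std::array<std::pair<double, int>, 4> kCurve = {{
    {0.0, 1500}, {5.0, 1600}, {10.0, 1680}, {15.0, 1740}}};

// Linearly interpolate the PWM needed for a desired force, so that two
// thrusters with different curves produce the same thrust.
int pwmForForce(double force) {
  force = std::clamp(force, kCurve.front().first, kCurve.back().first);
  for (std::size_t i = 1; i < kCurve.size(); ++i) {
    if (force <= kCurve[i].first) {
      double t = (force - kCurve[i - 1].first) /
                 (kCurve[i].first - kCurve[i - 1].first);
      return static_cast<int>(kCurve[i - 1].second +
                              t * (kCurve[i].second - kCurve[i - 1].second));
    }
  }
  return kCurve.back().second;
}
```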
4. Future Work
There is a lot of scope for improvement in the existing motion control system. The initial aim of the project was to get a rudimentary motion control system ready that could get the work done; not much focus was given to automation or to the efficiency of the algorithms.
To make the robot truly autonomous, we can apply various concepts from machine learning. One example is improving the detection tasks: one can build a data-set and then use convolutional neural networks with window scanning to obtain the position and orientation of the object to be detected in the image. This might be tough, though, given the resources required to build such a data-set.
Another idea is inverse kinematics using machine learning. Path planning gives us the trajectory of the most efficient path, but to move along that path we have to fire our actuators and accelerate in the right direction. This is done by inverse kinematics, i.e. mapping displacements to actuator voltages. The task involves implementing automatic calibration tools that use machine learning algorithms such as SVMs to calibrate these mappings, using the simulator to acquire test data.
Apart from that, we can add filters to our sensor data to reject noise, and we can improve our thruster calibration to make the motion smoother and more accurate. We can also add more sensors, such as a Doppler Velocity Log to measure velocity in each direction, with which we can localize objects with respect to the robot.