Developing A Real Time Collision Avoidant Low Cost Mobile Manipulator Using ROS
MASTER OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
(M.Tech in IT, specialization in Robotics)
Submitted by
Rajan Kumar Singh
IRO2011014
July, 2013
CANDIDATE’S DECLARATION
I do hereby recommend that the thesis work prepared under my supervision by Mr.
Rajan Kumar Singh titled “Developing a Real Time Collision Avoidant Low
Cost Mobile Manipulator Using ROS” be accepted in partial fulfillment of the
requirements of the degree of Master of Technology in Information Technology,
for examination.
CERTIFICATE OF APPROVAL
The foregoing thesis is hereby approved as a credible study in the area of Information
Technology Engineering and its allied areas carried out and presented in a manner
satisfactory to warrant its acceptance as a prerequisite to the degree for which it has been
submitted. It is understood that by this approval the undersigned do not
necessarily endorse or approve any statement made, opinion expressed or conclusion
drawn therein but approve the thesis only for the purpose for which it is submitted.
Signature & Name of the Committee members On final examination and approval of the
thesis
ACKNOWLEDGEMENTS
I would like to thank my professors, Prof. GC Nandi and Dr. Pavan Chokroborti, who guided me
in the right direction to complete my thesis. Special thanks to Mr. Abhijit Makhal, who helped
me a great deal with my thesis, and to Mr. Avinash Singh for his support. I would like to thank
my parents for their blessings, financial support and motivation. Finally, thanks to all my
friends for their cooperation and motivation, and to all my juniors for their support and love.
ABSTRACT
Many researchers and robotics scientists want to bring robots into our daily life. Manipulators are
already used in many places, such as industries, but their use is restricted by complex environmental
conditions and by the narrowness of their applications: those manipulators have no intelligence to work
in different environments or for various applications. Our aim is therefore to develop an autonomous
mobile manipulator that can work under complex environmental conditions and can be used for various
applications: table-top operation, kitchen, office, house and industrial environments, and even hazardous places.
In this thesis we have tried to build an autonomous robotic arm that can work under such complex
environmental conditions and serve various applications. The thesis can broadly be divided into three
parts: perception, planning and actuation.
For perception we use the Microsoft Kinect, which uses IR to capture a depth image; the depth image
can then be converted to a laser scan using the pointcloud_to_laserscan package. Through perception,
ROS obtains full information about the environment and about the initial and goal positions.
In the planning part, ROS uses OMPL (the Open Motion Planning Library) to plan the trajectory. The
kinematics of our arm is solved by the atom_arm_kinematics package, which is generated by the planning
description configuration wizard from the URDF (Unified Robot Description Format) file of our robot.
After the planner computes the trajectory, the robotic arm actuates on receiving the joint trajectory
messages. Our arm is a 4 DOF arm using four AX-12 Dynamixel servo motors, interfaced through a
USB2Dynamixel device; it continuously publishes its joint states and receives joint trajectory messages.
The transforms of the arm are maintained by the package named tf.
That is the overall plan of this thesis: to create a robotic arm that can further help many young
researchers and scientists in their work.
Table of Contents:
1. Introduction ........................................................... 7
   1.1 Currently existing technologies ................................... 7
   1.2 Analysis of previous research in this area ........................ 9
   1.3 Problem definition and scope ..................................... 15
   1.4 Formulation of the present problem ............................... 16
   1.5 Organization of the thesis ....................................... 17
2. Description of Hardware and Software Used ............................. 17
   2.1 Hardware ......................................................... 17
       2.1.1 AX-12 Dynamixel servo motor ................................ 17
       2.1.2 Controller ................................................. 20
       2.1.3 Perception and sensors ..................................... 21
       2.1.4 Phidget I/O board .......................................... 24
       2.1.5 Power supply ............................................... 24
   2.2 Software ......................................................... 26
       2.2.1 PCL (Point Cloud Library) .................................. 26
       2.2.2 ArbotiX controller ......................................... 27
       2.2.3 TF (transform) ............................................. 28
       2.2.4 RViz ....................................................... 29
3. Theoretical Tools – Analysis and Development .......................... 30
   3.1 Forward kinematics ............................................... 30
   3.2 Inverse kinematics ............................................... 33
   3.3 Jacobian ......................................................... 36
4. System architecture ................................................... 37
   4.1 Perception pipeline .............................................. 38
       4.1.1 Sensor input ............................................... 38
       4.1.2 Dealing with noisy point clouds ............................ 39
       4.1.3 Construction of the collision environment .................. 41
   4.2 Motion planning algorithms ....................................... 42
       4.2.1 Sampling-based motion planning ............................. 42
       4.2.2 CHOMP ...................................................... 43
List of Figures:
Chapter: 1
1. Introduction:
1.1 Currently existing technologies:
We all know the importance of robots in our daily life. Every roboticist desires to build a social
robot that can be used in daily work. Some robots are created for housework, such as collecting and
cleaning plates in kitchen environments; some are created for industry, such as pick-and-place
manipulation; and some are created for hazardous places, such as working in atomic reactors where
very high radiation levels exist. Robots can thus play a vital role in our daily life, and I expect
that in a few years we will depend heavily on robots, both social and industrial. Some robots have
already been created with such capabilities: if I say "make an omelet for me" and the robot knows
the method, it will cook it; if it does not, it will search the internet for the method, download
the procedure and follow it to finish the work. So robots of a type usable in our daily life have
already been developed.
In the last few years many roboticists have devoted themselves to creating robots that help us in
daily life under some circumstances. A few years back, Japanese engineers astonished everyone with
a beautiful biped robot named ASIMO, designed by HONDA in 2000. ASIMO walks on two legs and has an
impressive body structure; being biped makes it more human-like and sets it apart from robots that
move on wheels.
Biped locomotion is a very active research topic for robotics engineers, and many robotics
scientists work on it. Within biped locomotion, push recovery is the latest challenging problem,
on which universities and research organizations all over the world are focusing. Some of my
friends at my institute are also doing research on push recovery.
Here we are trying to create a mobile manipulator robot, and my research is on the autonomous
robotic arm of this robot platform, named ATOM (Autonomous Testbed of Mobile Manipulation). The
ATOM platform is based on ROS (Robot Operating System).
ROS (Robot Operating System) is a software framework initially designed at Stanford University in
2007; development was continued by Willow Garage from 2008. ROS provides platforms, libraries,
drivers, visualizers and more, so we can build our robot very easily with it. It runs on Ubuntu,
and a Windows version is currently being developed by Willow Garage.
Many robots use ROS (Robot Operating System). Some of them are so costly that common people cannot
afford them; some cost somewhat less. Here we have built our robot ATOM at a very low price, so
that common people can afford it for their different areas (housework, office work and education).
Earlier ROS-based robots are described below.
It is every robotics engineer's dream to build an autonomous robot that can interact naturally
with human beings, perform object manipulation and help in people's daily lives, and wander
autonomously through any cluttered environment without collision.
These dreams came true when Willow Garage launched the ROS-based robot PR2 (Personal Robot 2). But
the cost of the PR2 is very high, approximately $400,000; common people, and even most robotics
engineers, cannot afford it, and only about 50 PR2 robots exist in the world and at Willow Garage.
Willow Garage was founded in the fall of 2006 by recruiting scientists from Stanford University,
initially to build a solar powered boat for the open ocean. In 2007 Willow Garage released its
hardware and the open source software called ROS (Robot Operating System), and it first released
the PR2 robot, which can manipulate objects autonomously and wander through any unstructured,
cluttered environment on its own.
PR2 robot:
Many robotics engineers' dream of an autonomous robot came true when Willow Garage launched the
PR2 (Personal Robot 2) on the market. Initially its cost was so high that common people, and even
many researchers, could not afford it; after some time the cost came down to $400,000, and now
many research institutes have this robot for research work.
The PR2 robot project was funded by the NSF Major Research Instrumentation Program (MRI), Small
Business Innovation Research (SBIR), the NSF Grant Opportunities for Academic Liaison with
Industry (GOALI), the Defense University Research Instrumentation Program (DURIP), and the DOD
Multidisciplinary University Research Initiative (MURI).
The PR2 robot can be divided into several modules, all described below.
Willow Garage's first success was building the PR2: a two-armed, wheeled autonomous robot usable
for many purposes, such as education (a testbed for research) and work in home, office and kitchen
environments. It has two autonomous arms, described in the next module, and its physical size is
about that of a human.
ROS (Robot Operating System) is the software part of the PR2 robot: open source software developed
by Willow Garage. In ROS you get many libraries that can be used in the creation of a robot. If
you want to work on the PR2 and do not have one, don't worry: Willow Garage has created an open
source simulator named Gazebo and an open source visualization tool named RViz, where you can do
your research and immerse yourself in developing the field of robotics. ROS gives the platform,
the libraries and the tools to create advanced robotics applications. You can follow the research
and core libraries Willow Garage is currently working on, stay updated on the current challenging
problems, do your own research on them, and submit it by making a repository of your package, from
where anyone can download, test and further build on it. In this way Willow Garage is trying to
create a worldwide community of researchers.
2 Manipulation:
Manipulation is the second module of the PR2 robot. The PR2 arm is a 7 DOF (degrees of freedom)
autonomous robotic arm; it is backdrivable and current-controlled, so it can manipulate in any
cluttered and unstructured environment.
The arm has 7 DOF in total: 2 DOF in the shoulder (shoulder_lift and shoulder_pan), 2 DOF in the
elbow (elbow_lift and elbow_roll) and 3 DOF in the wrist (yaw, roll, pitch). It has a two-sided
gripper, so a total of 9 servo motors are used to build one such robotic arm.
When we give the PR2 a goal position at which to place an object, it first performs perception:
using the tilting laser scan it creates a collision map with the PCL (Point Cloud Library).
Through PCL it obtains every piece of information about the environment, the obstacles and the
object. In this way it acquires the initial and goal positions by perception, and then its planner
starts planning a trajectory that avoids collision with every obstacle in the environment.
3 Mobility:
Mobility is the third module of the PR2 robot. Moving a robot through a cluttered, unstructured
environment is also a challenging problem. The PR2 is a wheeled robot with navigation capability
and can navigate an optimal path through the environment. After a 3D scan of the environment using
the tilting laser scanner, it creates a collision map and thereby gains information about all
obstacles present (it recognizes objects and obtains the coordinates of the obstacles); its
planning module then plans a collision-free trajectory. It uses SLAM (simultaneous localization
and mapping) for navigation.
Sensors play an important role in robotics, and the PR2 uses many of them for taking input from
the environment, for grasping and for perception. A manipulation stereo pair with a texture
projector, an environment stereo pair and a 5 MP camera are used to visualize the target object
and the obstacles present in the environment. A base planar laser scanner serves localization and
mapping. When we command the PR2 to pick an object and put it at a goal position, moving the arm
to grasp sometimes occludes part of the scene so that the head cameras can no longer see the
object; a forearm camera is used to avoid this occlusion. The gripper uses a pressure sensor array
that sends feedback continuously; the gripper increases pressure just enough to hold the object
without slipping and without damaging it, applying exactly the pressure needed to hold. Finally, a
tilting planar laser scanner creates the collision map using point clouds.
Fig. 6: (a) stereo with texture projector and 5 MP camera, (b) base laser scanner, (c) forearm
camera, (d) gripper pressure sensor array, (e) tilting laser scanner.
Many manipulators and robotic arms available on the market could be bought and used, but the main
problem is their cost: it is too high, and a common person cannot afford such a manipulator. Their
cost can approach a million dollars, so buying such a high cost manipulator for one's own work or
research is practically impossible.
My main aim is to build a manipulator with functioning similar to those robotic arms, at a cost
within the reach of common researchers, who can then buy this robotic arm for their various
applications or further research.
The cost of this robotic arm is approximately INR 30,000 (nearly $450), which is very low.
Researchers can build their own robotic arm using our method and do research on it for further
enhancement.
We are using Dynamixel servo motors, at approximately INR 2,500 per motor, to build this
autonomous robotic arm, plus one USB2Dynamixel device to connect the motors to the computer. A
high-end computer is also needed, whose cost is not included in the cost of the arm. For the
gripper we use a pressure sensor array, which measures the total pressure applied to the object.
For perception we are using the Microsoft Kinect, which costs approximately INR 11,000; other
manipulators use a high cost perception device, the laser range finder, which serves the same
function as the Kinect but costs approximately $3,800 (INR 200,000). One highly regulated 12 V DC
power supply runs the servo motors.
The software used in this project is completely free: the open source ROS (Robot Operating
System), developed by Willow Garage.
So overall my approach is to keep the cost of building such an autonomous robotic arm to a
minimum. Our method creates a robotic arm at minimum cost with the same functions that high cost
manipulators have, and with comparable accuracy.
The scope of this project is that anyone can build their own autonomous robotic arm for various
applications by following this project's method. Anyone can use this platform for research on
table-top operations, picking and placing small objects on a table, or on tasks such as serving a
water or wine glass. It can also be used for fine manipulation if higher cost hardware is used,
which would give very precise results. Since it has search and path planning algorithms, it can
even play chess.
So overall the cost is as low as possible while the arm remains as accurate as a high cost
manipulator; its cost is very small compared to other high cost autonomous robotic arms, yet it
has similar functions. I have published one journal paper based on the literature survey; in that
paper all the other high cost manipulators are described and compared with my robot. The platform
has wide scope in the field of manipulation.
One more possible application of this robot is in the surgery of a patient. When a doctor is not
available in the hospital, this robot could follow the doctor's movements and perform the surgery.
This is very fine manipulation, so for this purpose we would need more advanced servo motors with
good feedback for precise actuation. The most important requirement in this application is highly
calibrated perception, for which we would need a laser range finder, whose cost is very high. We
have to model our URDF (Unified Robot Description Format); the URDF is the simulated model of our
robot, interfaced and synchronized with the real robot through the robot_state_publisher and
joint_state_publisher nodes. With these two nodes the URDF continuously publishes its state in ROS
messages, and the real robot receives that state and follows it. The URDF could be controlled by
hand gestures: the doctor watches the patient over a live camera and moves his hand; the movement
is captured and applied to the URDF, the URDF publishes the robot state, and at the same time the
real robot receives that state and follows the doctor's hand movement.
One thing is very important before starting to build our robotic arm: we should know our
constraints, our environment and our application. Then we formulate our strategy to build a robot
that will operate properly under the given constraints and in the given environment.
First we have to model our robot description format, the URDF. This is the main artifact that
needs to be built properly; we should build our URDF very precisely, keeping in mind the approach
to be applied.
I am developing a low cost mobile manipulator for pick and place applications; for that
application I built a 4 degree of freedom robotic arm using four Dynamixel servo motors (high cost
manipulators use high cost servo motors). Perception is the main step, through which the robot
perceives the environment using the Microsoft Kinect.
I was motivated by SMART (Social Mobile Autonomous Robot for Testbed), developed in the Robotics
and AI Lab at IIIT Allahabad. SMART's arm is not autonomous; that is why I chose to develop an
autonomous robotic arm for various applications. I developed this arm in the Robotics and AI Lab
of the Indian Institute of Information Technology, Allahabad, a very advanced lab for robotics and
AI where much work on autonomy and artificial intelligence is going on and every facility for
doing our research is available.
Chapter: 2
2. Description of hardware and software used
As we know, we are building an autonomous robotic arm, and for that we have to take care of both
the hardware and the software aspects. First we build the hardware and interface it with ROS
(Robot Operating System); after developing the hardware we take care of the software part. Before
developing the software we should know the whole idea of our application as well as the
environmental conditions: only with knowledge of everything related to the project can the
software part be developed properly. The thesis can basically be divided into two parts, hardware
and software; further descriptions of both are given below.
2.1 Hardware:
The many pieces of hardware used in this thesis are listed and described below.
2.1.1 AX-12 Dynamixel servo motor:
The name "servo" basically comes from the term "servant": servo motors have their own controller
and, like a servant, follow the instructions of that controller, which is why we call them servo
motors. In many places these AX-12 Dynamixel servo motors are also used to build humanoid robots.
The servo motor is one of the main hardware parts and is used as the actuator. The AX-12 is among
the most advanced servos available on the market and can be used by many researchers and
scientists to build their own robots.
To build our project with these Dynamixel servo motors, we connect the motors in series: the first
motor is connected to the computer, and each subsequent motor is connected to the previous one.
Every motor has two ports, one for connecting to the previous servo and one for the next. Each
port has three pins: one for 12 V, one for ground, and a third for data.
Every electronic device operates properly only under certain conditions, and every device has
ratings under which it gives the best results. The AX-12 Dynamixel motor's specifications are
given below:
- Operating voltage: 9-12 V DC (recommended 11.1 V).
- Torque: 15.3 kg·cm (212 oz·in).
- No-load speed: 59 RPM (0.169 s/60°).
- Weight: 55 g.
- Dimensions: 32 × 50 × 40 mm.
- Resolution (minimum rotation angle): 0.29°.
- Reduction ratio: 1/254.
- Operating angle: 300°, or continuous turn.
- Maximum current: 900 mA.
- Operating temperature: -5 to 85 °C.
2.1.2 Controller:
Controllers are used to connect the Dynamixel servo motors directly to the computer; all servo
motors are daisy-chained to the controller. Many types of controllers are available on the market,
such as the CM-2, CM-2+, CM-5, CM-510 and USB2Dynamixel, and all of them connect to the USB port
of a personal computer. Basically these controllers convert USB to a serial port, so with such a
device we can easily get serial data through the USB port.
Shown in the figure above is the USB2Dynamixel controller, which we have used in our project. The
USB2Dynamixel has a status display LED, a serial connector, a 3-pin connector, a 4-pin connector
and a function selection switch. The status display LED indicates the power supply, TXD
(transmitting data) and RXD (receiving data) status. The function selection switch selects the
communication method: TTL, RS-485 or RS-232. AX-series motors use the 3-pin connector,
communicating over TTL; DX-series and RX-series servo motors use the 4-pin connector,
communicating over RS-485; and the serial connector uses RS-232, turning the USB port into a
serial port.
As shown in the figure below, the CM-5 is another way to control AX-12 Dynamixel servo motors.
The CM-5 can store four programs at a time for execution; you can run them by selecting different
modes and have your robot perform different tasks.
As shown in the figure, the power switch turns the CM-5 on and off, and the power jack connects
its supply. The status LEDs indicate the state of the CM-5: RXD (receiving data), TXD
(transmitting data) or charging mode. There are four direction buttons on the CM-5. The mode
button changes the CM-5's mode of operation, the start button selects the mode, and the play
button executes the program stored in the CM-5.
2.1.3 Perception and sensors:
Perception is the main part of arm navigation. Many high cost robots such as the PR2 use a laser
range finder as the perception device, but its cost is very high, approximately INR 190,000, which
many young researchers and scientists cannot afford. We are using another perception device whose
cost is very low compared to the laser range finder: Microsoft launched a product named the
Kinect, initially developed for gaming, and we are using this device for perception on our robot.
The Hokuyo UTM-30LX is the latest version of the laser range finder, with some advanced properties
over other models. Its cost is very high, approximately $5,600. Its angular resolution is 0.27°,
its scan range is 30 m, it draws relatively little power (approximately 8.4 W at 12 V), and its
size is 6.0 × 6.0 × 8.5 cm.
One more perception device exists, the PML (poor man's lidar), which can be constructed in one's
own lab. We tried to build a PML in our lab using a laser projector, a receiver and one servo
motor: the projector and receiver are mounted on the servo motor, following the concept of the
laser range finder. The PML's cost is very low, but it did not give good results, which is why we
did not use it in the robot. For comparison, a laser range finder gives 650 points in a single
240° sweep, while the PML gives only 24 points in a single 180° sweep.
We are using the Microsoft Kinect in our robot for perception. The Kinect is a Microsoft product
developed for gaming; it contains an IR projector, an IR receiver, a stereo camera and an array of
microphones. The IR projector casts IR rays onto the environment, and together the projector and
receiver provide a depth image. The depth image can then be converted to a laser scan using the
pointcloud_to_laserscan package, which produces a fake laser scan from the point cloud by slicing
the image horizontally and taking the nearest distance in each column.
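To make the conversion concrete, here is a minimal sketch in plain Python/NumPy of that idea (not
the actual pointcloud_to_laserscan implementation; all parameter values are illustrative
assumptions): slice the cloud horizontally and keep the nearest range in each angular column.

    import numpy as np

    def fake_laser_scan(points, angle_min=-1.57, angle_max=1.57,
                        angle_increment=0.01, min_z=-0.1, max_z=0.1):
        """Keep a horizontal slice of the cloud and record the nearest
        range in each angular bin, mimicking pointcloud_to_laserscan."""
        n_bins = int((angle_max - angle_min) / angle_increment)
        ranges = np.full(n_bins, np.inf)
        for x, y, z in points:
            if not (min_z <= z <= max_z):            # outside the horizontal slice
                continue
            angle = np.arctan2(y, x)
            if not (angle_min <= angle < angle_max):
                continue
            i = int((angle - angle_min) / angle_increment)
            ranges[i] = min(ranges[i], np.hypot(x, y))   # nearest point per column
        return ranges

    # A single point one metre ahead of the sensor lands in the centre bin.
    print(fake_laser_scan([(1.0, 0.0, 0.0)])[157])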
As shown in the figure, the Kinect has a 3D depth sensor (the IR transmitter and receiver), one
RGB camera, and an array of highly sensitive, noise-suppressed microphones that can be used for
speech recognition.
We use pressure sensors on the palm and thumb of the robotic arm for grasping. We can train our
robot to apply a suitable force for each individual object, so that no object is harmed while the
grasping process is going on.
2.1.4 Phidget I/O board:
The Phidget I/O board is used to connect the pressure sensors, which measure the exact pressure
applied to objects by the thumb and palm. The board is easily interfaced with ROS (Robot Operating
System).
Basically any analog sensor can be interfaced to the Phidget I/O board: pressure sensors, gyro
sensors, rotating potentiometers and so on can all be connected to ROS through it.
As shown in the figure, the Phidget I/O board provides separate 8-bit digital input and 8-bit
digital output ports. One port connects the board to the computer, and seven separate ports are
provided to connect seven different sensors to the Phidget for input.
2.1.5 Power supply:
The power supply is one of the main parts of our project; supplying power to all the devices used
is a serious matter. Different devices have different power ratings, so we need to build separate
regulator circuits for each rating.
As we know, the power rating of the Dynamixel servo motor is 9 to 12 V DC, with a maximum current
of 900 mA. We use a Li-ion battery, which gives 12 V DC, as the main power source, and supply
everything from it.
The Microsoft Kinect's normal power cord takes AC input. An AC source can be used in static
setups, where the Kinect does not move, but that is not possible for a mobile robot: since a
mobile robot moves through the whole environment, we have to power the Kinect from the DC source.
The Phidget I/O board takes its power from the USB port of the computer (CPU), so no separate
supply is needed for it.
2.2 Software:
We are using ROS (Robot Operating System), open source software developed by Willow Garage; it was
started at Stanford University, and Willow Garage then took over its development. As I already
mentioned, ROS is open source, so you can do software research while developing your robot. Many
open source libraries are available and can be used without any problem; some of them are
described below.
2.2.1 PCL (Point Cloud Library):
For point cloud processing, PCL (the Point Cloud Library) is used. PCL implements many algorithms
used in cloud processing, including filtering, feature extraction, segmentation, surface
reconstruction, registration and model fitting.
PCL is mainly used in the perception part of our robot. As I already mentioned, we use the
Microsoft Kinect for perception: the Kinect's IR (infrared) projector and receiver give us a depth
image, we convert the depth image into a laser scan with the pointcloud_to_laserscan package, and
with that we build the collision map of our environment.
PCL is of great importance in perception tasks such as object detection and object recognition,
described below.
Object detection and recognition is a very important step in perception. Object detection means
that the perception device, using PCL, detects an object in the environment, mainly on the basis
of size. For example, TurtleBot block detection looks for 3 cm blocks to detect, pick and place;
you can edit the launch file of the TurtleBot's block detection package to detect your own blocks
of a different size. The TurtleBot also uses the Kinect for perception: the Kinect projects IR,
and wherever it finds a point cloud 3 cm in height, width and length, it simply reports a block.
The recognition process is basically based on the shape and color of the object. Suppose a robot
has to recognize a bottle. First it creates the point cloud of that bottle; then it matches the
shape and color of that point cloud against the object database created and saved on our computer.
It classifies the given data and thereby gains knowledge about that bottle. This process is known
as object recognition.
If it cannot find any match in the database created on our own PC, it turns to the internet and
checks a huge point cloud database of objects. This connection between our robot and the internet
is made using the ROS package roboearth, which is used to create object databases, upload them to
the internet, and connect the robot to them. In this way our robot can search a huge database and
recognize the object.
After recognizing an object, the robot has prior knowledge about it, such as how much pressure to
apply to grasp it without damaging it, and where its grasping points are.
2.2.2 ArbotiX:
This is the ROS stack in which the device controllers live: the controller for the Dynamixel
motors and other sensor devices, the messages for publishing, and the firmware. It contains the
following ROS packages:
- arbotix_controllers
- arbotix_firmware
- arbotix_msgs
- arbotix_python
- arbotix_sensors
2.2.3 TF (transform):
tf is the ROS package a robot uses to keep track of multiple coordinate frames over time. tf
maintains the relationships between coordinate frames in a tree structure buffered in time, and
transforms points and vectors between any two coordinate frames at any desired point in time. As
you know, a robot has many coordinate frames, such as the world frame, base frame, arm base frame,
elbow frame, wrist frame and gripper frame, and each frame changes at every moment. tf is used to
track the coordinate frames with respect to any other frame, so by using tf the robot has full
information about any coordinate frame relative to any other.
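As a small illustration, the following rospy node looks up the gripper pose in the arm base frame
with the standard tf listener API (the frame names here are assumptions, not necessarily the names
used on ATOM):

    #!/usr/bin/env python
    import rospy
    import tf

    rospy.init_node('tf_demo')
    listener = tf.TransformListener()
    rate = rospy.Rate(10.0)
    while not rospy.is_shutdown():
        try:
            # Pose of the gripper frame expressed in the arm base frame.
            trans, rot = listener.lookupTransform('/arm_base_link',
                                                  '/gripper_link',
                                                  rospy.Time(0))
            rospy.loginfo('gripper at %s', trans)
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass                      # transform not available yet
        rate.sleep()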
2.2.4 RViz:
RViz is an open source 3D visualization tool for ROS. It is mainly used to visualize the URDF
(Unified Robot Description Format). A central application of RViz is the interactive marker,
through which you can interact with your robot's URDF: you can control robot joints by rotating
them, change the robot's position, and select special menu entries assigned to each marker.
The interactive marker server continuously updates RViz, so you see your robot model move in RViz
while you interact with the URDF; the interactive markers continuously take feedback from RViz to
check its status.
As shown in the figure, you can see the interactive control of the right wrist joint, which has 3
DOF, and in figure 1 you can see special menu options from which you can select IK, trajectory
planning, trajectory filtering and so on.
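A minimal interactive marker server, in the style of the standard ROS tutorials, is sketched
below; the frame and marker names are assumptions, and a real setup would tie the feedback to the
robot's joints:

    #!/usr/bin/env python
    import rospy
    from interactive_markers.interactive_marker_server import InteractiveMarkerServer
    from visualization_msgs.msg import InteractiveMarker, InteractiveMarkerControl

    def on_feedback(feedback):
        p = feedback.pose.position
        rospy.loginfo('wrist marker moved to (%.2f, %.2f, %.2f)', p.x, p.y, p.z)

    rospy.init_node('wrist_marker')
    server = InteractiveMarkerServer('wrist_marker')

    marker = InteractiveMarker()
    marker.header.frame_id = 'base_link'   # assumed frame name
    marker.name = 'wrist_control'
    marker.scale = 0.2

    control = InteractiveMarkerControl()   # one rotation ring about the x axis
    control.name = 'rotate_x'
    control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS
    control.orientation.w = 1.0
    control.orientation.x = 1.0
    marker.controls.append(control)

    server.insert(marker, on_feedback)     # publish the marker to RViz
    server.applyChanges()
    rospy.spin()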
Chapter: 3
3 Theoretical Tools – Analysis and Development:
In this chapter we discuss the theoretical analysis of the autonomous robotic arm. The dynamics
behind any manipulator rests on three basic concepts, given below.
3.1 Forward kinematics:
Forward kinematics is the relation between the joint space and the pose, i.e. the position and
orientation, of the rigid robot (manipulator). Forward kinematics is used when the joint
parameters of a manipulator are given and we need to find the pose of its end effector; we use the
kinematic equations for this purpose.
Our autonomous robotic arm is a serial chain of motors, so to find its forward kinematics we need
the transform matrix. The joints of a serial chain may be of any type: fixed, revolute, prismatic,
or joints more complex than revolute and prismatic, such as ball-and-socket joints. A revolute
joint rotates about a single axis, while a prismatic joint provides linear movement along an axis.
After finding the transform matrix, we simply apply the joint parameters to it to obtain the pose
parameters; in other words, we have the position of the end effector of our robot.
So forward kinematics is used to calculate the position of the end effector in Cartesian space,
given the joint angles in joint space.
Given parameters: the length of each link and the angle of each joint.
Target parameter: the position of the end effector, i.e. its (x, y, z) coordinates.
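As a concrete illustration (the textbook planar two-link case, not our actual 4 DOF arm), with
link lengths l1, l2 and joint angles θ1, θ2, forward kinematics gives the end effector position
directly:

\[
x = l_1\cos\theta_1 + l_2\cos(\theta_1+\theta_2), \qquad
y = l_1\sin\theta_1 + l_2\sin(\theta_1+\theta_2)
\]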
We now move on to calculating the transform matrix. To find it we use the DH (Denavit-Hartenberg)
convention.
DH (Denavit-Hartenberg) convention:
Jacques Denavit and Richard Hartenberg introduced this convention in 1955 to standardize the
coordinate frames of spatial linkages. It is used in robotics especially for selecting frames of
reference.
In a kinematic chain of joints and links, every link has its own coordinate frame, so we need a
transform matrix to calculate the position and orientation of the next link's frame with respect
to the previous frame. Applying this method along the chain, we calculate the position and
orientation of the end effector of the robotic arm.
To analyze the kinematics of the robotic arm we attach a coordinate frame to every link, as in the
figure: Oi (Xi, Yi, Zi).
We attach Oi (Xi, Yi, Zi) to link i, meaning that when the robot actuates, all points on link i
keep the same coordinates in frame i. The base coordinate frame O0 (X0, Y0, Z0) is attached to the
base of the robot.
The transformation that gives the position and orientation of the next link's frame Oi (Xi, Yi,
Zi) with reference to Oi-1 (Xi-1, Yi-1, Zi-1) is known as the homogeneous transformation matrix.
Once we have calculated the homogeneous transformation matrix of each link, we can calculate the
transformation matrix of the whole robotic arm; multiplying in the joint parameters then gives the
position and orientation of our end effector.
In the DH convention, four parameters θi, di, ai and αi are associated with link i and joint i;
details of these four parameters are given below.
Each homogeneous transformation matrix is a function of a single variable: only one of the four
parameters varies, and the remaining three are constant for each joint. For a revolute joint θi
varies, and for a prismatic joint di varies.
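With these four parameters, the homogeneous transformation from frame i-1 to frame i takes the
standard DH form (a rotation about z by θi, a translation along z by di, a translation along x by
ai, and a rotation about x by αi):

\[
A_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
\]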
3.2 Inverse kinematics:
As we already discussed, forward kinematics maps the joint space Q to the Cartesian space W.
Inverse kinematics is just the opposite: it maps the Cartesian space W to the joint space Q.
Writing the forward map as W = f(Q), the inverse kinematics relation that maps Cartesian space to
joint space is Q = f⁻¹(W).
Inverse kinematics plays a very important role in robotics, because the input to a robot is
normally in Cartesian space: we usually define the initial position and the goal position in
Cartesian coordinates, and whenever we give a target or initial location we can only give it in
the Cartesian coordinate system. Given the initial and goal positions, the robot solves for the
angle of each joint using inverse kinematics. Solving the inverse kinematics yields either a
unique solution or multiple solutions. With a unique solution the robot normally executes the
trajectory and follows the resulting joint angles; with multiple solutions it first selects the
optimal one, such that it can move through the environment without colliding with other objects,
i.e. on a collision-free path. Some of the multiple solutions are not physically realizable.
In inverse kinematics we use the atan2 function to calculate angles; atan2 is more helpful when
calculating joint angles from Cartesian coordinates.
atan2 is used in many places where coordinates are given and you have to find the angle. If you
are given the point (3, 4) in Cartesian coordinates and asked to calculate its angle, you simply
apply the inverse tangent, atan(y/x) = atan(4/3). But if you are given the point (-3, -4) and
again asked for the angle, applying the same formula gives atan(-4/-3) = atan(4/3), the same
result, even though the two points do not have the same angle: they lie in different quadrants,
(3, 4) in the 1st and (-3, -4) in the 3rd.
That is why we use the atan2 function: it discriminates between these two cases and gives the
accurate result.
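The following small Python check reproduces the example above:

    import math

    # atan collapses opposite quadrants: (3, 4) and (-3, -4) give the same value.
    print(math.degrees(math.atan(4 / 3.0)))     # 53.13
    print(math.degrees(math.atan(-4 / -3.0)))   # 53.13 again: wrong quadrant

    # atan2 keeps the signs of y and x separate, so the quadrant is preserved.
    print(math.degrees(math.atan2(4, 3)))       # 53.13   (1st quadrant)
    print(math.degrees(math.atan2(-4, -3)))     # -126.87 (3rd quadrant)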
3.3 Jacobian:
Simply put, the Jacobian is the time derivative of the kinematic equations. The Jacobian gives the
relation between the joint rates and the linear and angular velocity of the end effector of the
robot. It also relates the torques applied by the joints to the force and torque produced at the
end effector.
For a given manipulator with known joint angles and link lengths, we may have to calculate the
linear velocity of the end effector when the joint rates are given, or the reverse: calculate the
required joint rates when the end effector velocity is given.
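In the usual notation these two relations read as follows, where v and ω are the linear and
angular velocity of the end effector, q̇ the joint rates, τ the joint torques and F the force and
torque (wrench) at the end effector:

\[
\begin{bmatrix} v \\ \omega \end{bmatrix} = J(q)\,\dot{q}, \qquad \tau = J^{T}(q)\,F
\]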
Chapter: 4
4 System Architecture:
For safe planning and execution of trajectories, we follow the architecture below to perform our
manipulation. In this chapter we describe the step-by-step development of the whole software part:
1. Perception pipeline
2. Motion planning algorithm
3. Motion execution monitor
These are the steps to follow when designing the software part of the manipulator.
4.1 Perception pipeline:
The perception pipeline plays the key role of building a representation of the world as a
collision map. It contributes two basic functions:
(a) It connects the sensors to the planner, posing the manipulation problem in a cluttered
environment from sensor data.
(b) It handles occlusions and noisy data, including noisy laser scans and shadowing effects.
The perception pipeline is a complete package of the three parts described below.
4.1.1 Sensor input:
As I already mentioned, I have used the Microsoft Kinect for perception. The input from this
sensor comes in the form of a point cloud: the Kinect scans the environment with its IR sensor,
and the data is then transformed to a laser scan using the pointcloud_to_laserscan package. A
point cloud is simply a set of points in space corresponding to detected objects; this sensor
provides its input as a point cloud of the 3D environment.
4.1.2 Dealing with noisy point clouds:
Sensor point cloud data is often noisy, and that should be taken care of. When the robot is
performing a manipulation, its arm may be in front of the perception device, and during point
cloud generation the points projected from the arm are also treated as obstacles, even though this
projected part of the arm is not an obstacle in the environment. So we need to deal with this
noisy data by separating these points, i.e. the point cloud of the projected part of the arm, from
the sensor input cloud. We use a simple method: if any part of the input point cloud is the same
as the point cloud of the geometric shape of the projected part of the arm, that portion of the
point cloud is not treated as an obstacle of the environment.
One more problem that commonly occurs is the shadowing effect. It arises when the Kinect grazes
some part of our robot's body: points seen at the edges of the robot arm then appear to be farther
away and are treated as part of the environment. When the robot arm moves through the environment,
these veiling points appear to lie on its desired path, and our process halts. To remove these
veiling points, a small padding distance is added to the collision representation of our robot's
links.
These two problems need to be filtered out of the input point cloud.
4.1.2.1 Filtering:
To remove the shadow points appearing in the point clouds of the environment, we use filters such
as the voxel grid filter, as given in the figure: the shadow points present in the first image are
removed after applying the filter.
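A minimal sketch of such a filter using the python-pcl bindings (assuming they are installed; the
file name and the 1 cm leaf size are illustrative):

    import pcl  # python-pcl bindings (assumed installed)

    # Load a raw Kinect cloud (file name is hypothetical).
    cloud = pcl.load('kinect_snapshot.pcd')

    # Downsample with a voxel grid: one representative point per 1 cm cube.
    vg = cloud.make_voxel_grid_filter()
    vg.set_leaf_size(0.01, 0.01, 0.01)
    filtered = vg.filter()

    print('%d points before, %d after' % (cloud.size, filtered.size))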
4.1.2.2 Segmentation:
After successful creation of the point cloud of the environment, the cloud is unstructured, so the
system would take a long time to process it; we therefore need to segment this unstructured point
cloud data.
Segmentation here is Euclidean cluster extraction; it uses clustering methods such as
nearest-neighbor search and RANSAC to segment the objects.
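A sketch of Euclidean cluster extraction with the python-pcl bindings (the parameter values are
illustrative assumptions, not our tuned settings):

    import pcl  # python-pcl bindings (assumed installed)

    cloud = pcl.load('table_scene.pcd')   # hypothetical input file

    tree = cloud.make_kdtree()
    ec = cloud.make_EuclideanClusterExtraction()
    ec.set_ClusterTolerance(0.02)     # points closer than 2 cm join a cluster
    ec.set_MinClusterSize(100)        # drop tiny noise clusters
    ec.set_MaxClusterSize(25000)      # and very large surfaces such as the table
    ec.set_SearchMethod(tree)
    cluster_indices = ec.Extract()    # one list of point indices per object

    print('found %d objects' % len(cluster_indices))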
4.1.3 Construction of the collision environment:
The collision environment is essentially the collision map of the environment. Properly building
the collision map with frequent sensor updates allows occluded data to be handled correctly: we
keep the current collision map and the next collision map obtained from a sensor update, and
handle occlusion by comparing the two.
Suppose we have the collision map C (initially empty) and a new collision map N from a sensor
update. We first calculate the difference D between these two collision maps, which contains the
occluded data or the data removed from the environment:
D = C - N
A point of D is either a moving obstacle or a part of the map occluded by the robot arm. To decide
whether a point d of D is occluded, we check whether the line segment between d and the sensor
origin intersects a body that is part of the robot arm. If it does, the obstacle is said to be
occluded, and d is added back to N so that the data is complete after removing the occlusion.
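The bookkeeping can be sketched in a few lines of Python; robot_body_intersects is a hypothetical
predicate standing in for the ray-intersection test described above:

    def update_collision_map(current, new, robot_body_intersects):
        """Merge a new sensor collision map N into the running map C.

        current, new: sets of occupied voxel coordinates (x, y, z).
        robot_body_intersects(voxel): True when the segment from the
        sensor origin to the voxel passes through the robot arm.
        """
        difference = current - new              # D = C - N
        for voxel in difference:
            if robot_body_intersects(voxel):    # occluded by the arm,
                new.add(voxel)                  # so keep it in the map
        return new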
The collision map is a very important input for motion planning and the path execution process.
For this purpose our robot's map is restricted to a box extending 2 m forward, 1.5 m to each side
and 2 m upward with respect to the base of the robot; this box contains the entire workspace the
arm can reach for its manipulation.
4.2 Motion planning algorithms:
After the collision map is successfully built, the filtered collision environment data is
transferred to the motion planner, which plans the trajectory between the initial position and the
goal position. Basically, three types of planners are used for planning the trajectory.
4.2.1 Sampling-based motion planning:
Sampling-based planning generally takes sample points between the initial and goal points while
taking care of obstacles, and then calculates the inverse kinematics for each point; thus the
trajectory is generated. It gives a collision-free trajectory very quickly.
This sampling-based planner uses tree-based algorithms from the OMPL library, including the
following:
When the robot receives a planning request, the planner selects an algorithm based on the type of
request. If the goal is available, i.e. the goal state can be retrieved from the request, LBKPIECE
or SBL is used; when the goal is not given, KPIECE or RRT is used.
The planner selects the algorithm on the basis of priority, and this priority is increased or
decreased depending on whether the planner finds a solution; LBKPIECE has the highest priority and
RRT the lowest.
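A hypothetical sketch of this priority-driven selection (the real logic lives inside the
OMPL-based arm navigation stack; the names and the reward/penalty rule here are illustrative):

    priorities = {'LBKPIECE': 4, 'SBL': 3, 'KPIECE': 2, 'RRT': 1}

    def plan(request, planners):
        if request.has_goal_state:
            candidates = ['LBKPIECE', 'SBL']     # goal state known
        else:
            candidates = ['KPIECE', 'RRT']       # only a goal region known
        for name in sorted(candidates, key=lambda n: -priorities[n]):
            path = planners[name].solve(request)
            if path is not None:
                priorities[name] += 1            # reward a successful planner
                return path
            priorities[name] = max(1, priorities[name] - 1)  # demote on failure
        return None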
4.2.2 CHOMP:
CHOMP stands for Covariant Hamiltonian Optimization for Motion Planning. It is a trajectory
optimizer based on the covariant gradient descent technique, and it is also used to smooth paths
generated by the sampling-based planner. CHOMP minimizes a cost function that is the sum of a
smoothness cost and a collision cost. The smoothness cost comes from summing the squared
derivatives of each joint along the trajectory, and the collision cost comes from a signed
distance field; the collision cost is in Cartesian space and can be transformed to joint space
using the Jacobian matrix. For our robot, CHOMP optimizes a trajectory in 1 to 4 seconds,
depending on the type of request.
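In the notation of the CHOMP literature, the objective over a trajectory ξ is the weighted sum of
the two costs described above (λ is the trade-off weight):

\[
U(\xi) = F_{\mathrm{smooth}}(\xi) + \lambda\,F_{\mathrm{obs}}(\xi), \qquad
F_{\mathrm{smooth}}(\xi) = \frac{1}{2}\int_{0}^{1} \Bigl\| \frac{d}{dt}\,\xi(t) \Bigr\|^{2} dt
\]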
Motion execution monitor:
Motion plans can be calculated for the robot within a collision map of the environment that is
continuously updated in real time by the perception pipeline and motion planning. The motion plan
is then sent to the trajectory controller, which tries to follow it as closely as possible. In the
motion execution monitor, two motion planners are used.
The long-range planner uses sampling-based planning: it quickly generates a path from the initial
point either to the goal point, if the goal point is not in a collision state, or to a point near
the goal. The short-range planner uses CHOMP-based planning; it is applied when the goal position
is in collision or very near obstacles.
If a goal is in a collision state or very close to obstacles, we first quickly execute a
trajectory from the initial point to a point near the goal using sampling-based motion planning,
and then from that near point to the goal using CHOMP-based planning, so that the arm can grasp
the object without any collision; CHOMP-based planning is slower.
The algorithm shown above describes how the motion planning and motion execution system works.
This algorithm searches for states in a goal region instead of a single goal state. If a state is
found in which the robot's end effector satisfies all conditions and has no collision, the
long-range planner can be executed; if no valid goal state is found, a genetic algorithm is
applied to find a valid state as close to the goal as possible.
Because the environment data, i.e. the collision map, is properly updated at every moment, the
system continuously checks whether the robot arm is in a collision state; if it is, execution is
terminated and a trajectory replanning request is issued (see the sketch below).
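The monitoring loop can be sketched as follows; in_collision, send_to_controller and replan are
hypothetical callbacks standing in for the collision map check, the trajectory controller and the
planner:

    def execute_with_monitoring(trajectory, in_collision, send_to_controller, replan):
        """Follow the trajectory waypoint by waypoint; abort and replan as
        soon as the continuously updated collision map flags a collision."""
        for i, waypoint in enumerate(trajectory):
            if in_collision(waypoint):           # map is updated every moment
                new_trajectory = replan(trajectory[i - 1] if i else waypoint)
                if new_trajectory is None:
                    return False                 # no collision-free path found
                return execute_with_monitoring(new_trajectory, in_collision,
                                               send_to_controller, replan)
            send_to_controller(waypoint)         # hand waypoint to the controller
        return True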
Chapter: 5
5 Software and hardware development:
This chapter presents the actual work done by me; I have developed both the hardware and the
software parts. My thesis topic is Real Time Collision Avoidant Arm Navigation for a Low Cost
Robotic Arm Manipulator using ROS, and the main aim of my thesis is to develop a low cost, real
time, collision avoidant robotic arm: an arm cheap enough for the many young researchers and
scientists who want to work in the arm navigation field but do not have sufficient funding for
their research. This thesis will surely be very helpful to those young researchers and scientists.
Our robotic arm can operate in any unstructured and cluttered environment and can avoid any
obstacle present between the initial position and the goal position. It is mainly intended for
pick and place, and it can also be used for table-top manipulation. An example of table-top
manipulation: suppose I want a red pen, and pens of many different colors are kept on the table. I
give the robot the command "give me the red pen" (either through speech or through a typed
command). The robot receives the instruction and first creates a point cloud of the objects kept
on the table. Then it searches for the red pen's point cloud in the object database to recognize
it; if it succeeds in finding the red pen there, fine, otherwise it searches the internet for a
red pen point cloud database. Having found the database entry, it matches it against the input
point cloud taken from the table and classifies the data to recognize the red pen. Once the red
pen is recognized, the robot has the coordinates of the grasping point and the goal position at
which to deliver it. It then plans a trajectory from the initial position to the goal position,
avoiding the obstacles in between, executes the trajectory, and the actuators actuate and perform
the proper operation. This is the overview of what I am trying to do.
To develop this type of advanced autonomous robotic arm, we have to develop both the hardware and
the software. I now describe their step-by-step development.
Hardware development is the initial step of the project: first we have to design the hardware of
the autonomous robotic arm.
For designing the robotic arm we need a BIOLOID robot kit. The BIOLOID kit is basically meant for
simple humanoid robots, which can dance, strike various gymnastic poses, salute you, run and so
on; you can do various research on such a robot. The hardware listed here is available in the
BIOLOID kit.
All these parts come with the BIOLOID kit if you buy it. Its cost is $899.00, which is a little
expensive; if you cannot afford it, you can order only the parts you need, which is cheaper.
Requirements:
For our project we only need a few things: Dynamixel servo motors, serial cables, a USB2Dynamixel
controller, a 12 V battery, a Microsoft Kinect and cables. We also use a circuit to power the
Microsoft Kinect from a DC source. For developing the base of the robot we need a steel plate, a
steel rod, wooden circular bases and so on.
The first thing we developed is the base of the robot, on which all devices can be mounted easily
and stably. We need a strong, stable base to fix the robot body, the autonomous robotic arm and
the perception devices. For the robot base we use four circular wooden plates, stacked to make the
base stable and mechanically strong.
After successfully creating the base, we add two torso links: an upper torso link and a lower
torso link. We fixed our autonomous robotic arm to the upper torso link, and we added our
Microsoft Kinect, the perception device, to the upper torso link as well. The upper torso link is
connected to the lower torso link, and the lower torso link to the base of the robot.
As shown in the figure, the full robot carries two 4 DOF autonomous robotic arms and two Microsoft
Kinects: one is used for arm navigation and the other for mapping and localization in the
environment.
The second step is developing the head of the robot. First we fix one Dynamixel servo motor to the
upper torso link; this motor provides the pan joint, with which the robot can rotate its head.
Then we attach another Dynamixel servo motor to the pan servo; this one provides the tilt joint,
with which the robot can tilt its head. So our head has two degrees of freedom, a pan joint and a
tilt joint.
As you can see in the figure of our robot's head, the two Dynamixel servo motors are arranged so
that the robot can move its head in both the pan and the tilt direction. We then add a steel plate
as the base for the stereo camera and the Microsoft Kinect: two HD stereo vision cameras are fixed
to the plate, and the Microsoft Kinect is also fixed on it, bound with a steel strip to the head
platform for stable mounting.
Now the main work starts: developing the arm itself using AX-12 Dynamixel servo motors. We faced a
lot of problems while developing the arm. Firstly we designed the arm using five Dynamixel servo
motors, a single motor for each joint, as you can see in the figure. You have to use different
motor IDs throughout the arm: every servo motor has its own ID, and a motor whose ID has not been
set is treated as ID 1, so only one such un-ID'd motor can be used in the whole robot. Initially
we developed a 5 DOF (degree of freedom) robotic arm with the joints shoulder_pan_joint,
shoulder_tilt_joint, elbow_tilt_joint, elbow_flex_joint and wrist_roll_joint, using one servo
motor per joint plus two motors for the gripper control. We initially designed a two-sided
gripper.
After building this robotic arm we faced many problems. It is a 5 degree of freedom arm, so five
servo motors are used in the arm and two in the gripper; the first motor, at the shoulder lift
joint, could not produce enough torque to lift the whole arm, because the two gripper servo motors
at the end of the arm add a large torque load.
To overcome this problem we switched to a single-sided gripper control: only one Dynamixel servo
motor is used in the gripper, and the other end of the gripper is fixed.
Still facing problems, we then thought of a dual-motor concept: for each joint we use two
Dynamixel servo motors. This dual-motor arrangement overcomes the torque problem; the arm as
initially developed was not able to produce enough torque, and when we increased the load it would
not work properly. That is why we used two servo motors for each joint except the shoulder_pan
joint. This new arm is a 4 degree of freedom arm, with the number of motors per joint given below.
As you can see in the figure below, the 4 DOF autonomous robotic arm uses 1 servo motor for the
shoulder pan joint, 2 motors for the shoulder lift joint, 2 motors for the elbow flex joint, 1
servo motor for the wrist roll joint, and finally 1 motor for the gripper control.
This dual-motor arm is capable of lifting a coke can, because the paired motors provide enough
resisting torque. So this arm serves our design purpose better and is working properly in our lab.
It can lift moderately heavy things such as a beer bottle, a soda bottle or a coke can, so overall
this autonomous arm can be used in our daily life as a helper.
After the dual-motor arm, we developed a 4-DOF arm with a single servo motor per joint for the purpose of block manipulation, taking some hints from the TurtleBot robot arm: one motor each for the shoulder pan, shoulder lift, elbow flex and wrist flex joints, and one motor for gripper control, using a single-sided gripper.
This type of arm is well suited to block manipulation: it can pick and place light objects such as a 3 cm wooden cube or a marker pen.
For any robotic arm the gripper is the most important part, since it is what picks objects up and places them at the goal position. Care must be taken that the maximum force the gripper applies does not damage the object. Some objects are delicate; an egg, for example, will crack if too much force is applied while grasping it. The gripper therefore needs continuous feedback so that the object does not slip, and if the object does start to slip during grasping, the gripper force should be increased.
Two types of gripper are commonly used in robotics: the one-sided gripper and the two-sided gripper.
The figure above shows the two-sided gripper, which uses two servo motors; both jaws are actuated, so it can hold an object tightly from both sides. The next figure shows the single-sided gripper, which uses only one servo motor: one jaw is fixed and the other is connected to the motor, which rotates to grasp.
In this chapter we discuss the software development in our project. As mentioned earlier, the robot is fully ROS based: we use ROS packages throughout, all of which are open source and available free of cost. We describe the software development step by step. Before starting, ROS and the supporting tools and packages must be installed; since ROS and all of its tools and packages are open source and freely available, no money needs to be spent on software.
URDF stands for Unified Robot Description Format. The URDF is the model of the robot, so the first step in the software development is to create a correct URDF; everything that follows depends on it being built properly.
The URDF gives a complete description of the robot as a tree structure. As shown in the figure, the links are defined first; a joint always exists between two links. After defining the links, the tree is built by defining each joint together with its parent link and child link. In the figure we have defined links Link1, Link2, Link3 and Link4. Joint1 lies between Link1 and Link2, with Link1 as parent and Link2 as child; joint2 lies between Link1 and Link3, with Link1 as parent and Link3 as child; joint3 lies between Link3 and Link4, with Link3 as parent and Link4 as child. In this way the links form a tree-like structure.
In the URDF one must specify the exact dimensions of the links in metres, their moments of inertia, material, colour, origin and orientation, the type of each joint (fixed, revolute or prismatic), and optionally meshes for the links.
The block diagram shows the dimensions and orientations of the links. This is the first URDF we developed for our robot, in which plain blocks stand in for the links and the servo motors, so it does not look very realistic; exact sizes and dimensions are not given, so this URDF is not accurate. The joint state publisher visible in the figure lets us control every joint.
After developing this URDF we tried to make it look like the real robot. We used a mesh for the Dynamixel AX-12 rather than a black cuboid, and meshes for the link connectors F1, F2, F3, ..., F16 that serve as links and connectors in the arm. With meshes, the model looks the same as the real hardware: the Dynamixel servo motors look like real Dynamixel servo motors, and the links and connectors look like the original pieces. Using meshes also spares us from measuring sizes by hand, because we use exactly the connectors found in the real arm, so the sizes match the real robotic arm.
The figure shows the finished URDF using the dual motors per joint described earlier; this model now looks the same as the real robot.
After building the dual-motor arm in URDF and in hardware, we turned to block manipulation. Taking hints from the TurtleBot arm, we designed our 4-DOF arm using four Dynamixel servo motors plus one motor for the gripper. This arm is used for block manipulation: it can pick up a block of size 3 cm and place it at the goal position.
To work with arm navigation, the first step is to generate the robot model, i.e. the URDF. Once the URDF is in good shape, the second step is to generate the kinematics package for the robot using the planning description configuration wizard.
Fig. 37: Package generation for our robot model using the planning description configuration wizard
The planning description configuration wizard generates a package for our robot model, i.e. for our robot's URDF. It produces launch files for motion planning, the IK solver, the planning components visualizer, the planning scene warehouse viewer, and so on.
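For reference, the wizard is typically invoked as shown below; the package and file names for our URDF are placeholders here.

    roslaunch planning_environment planning_description_configuration_wizard.launch \
        urdf_package:=atom_description urdf_path:=urdf/atom.urdf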
We then developed the main launch file for the robot arm, which launches each node: the URDF, the joint parameters, robot_state_publisher, joint_state_publisher, the Dynamixel follow controller, TF, and so on. We also created a parameter file in .yaml format containing, for each joint, the joint name, the corresponding motor ID, the maximum and minimum speed, the motor temperature, etc.; this file is loaded alongside the URDF in the launch file.
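One entry of such a .yaml file might look as follows, in the dynamixel_controllers format; the ID, speed and position limits shown are placeholder values for an AX-12.

    shoulder_pan_controller:
      controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
      joint_name: shoulder_pan_joint
      joint_speed: 1.0        # assumed maximum joint speed (rad/s)
      motor:
        id: 1                 # motor ID assigned to this joint
        init: 512             # centre position (AX-12 range 0..1023)
        min: 0
        max: 1023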
5.2.2.1 joint_state_publisher / robot_state_publisher:
These packages keep our robot model (URDF) synchronized with the real robot. The real robot publishes its joint states to ROS, the model side subscribes to those joint states, and the URDF then follows the state of the real robot.
The first and second figures show the synchronization between the real robot and the URDF, and the third figure shows the same with the transform (TF) coordinate frames.
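A minimal sketch of the state-publisher portion of the launch file is given below; the package name and URDF path are placeholders. On the real robot the Dynamixel controllers publish the joint states, while joint_state_publisher is useful when testing the bare URDF.

    <launch>
      <param name="robot_description"
             textfile="$(find atom_description)/urdf/atom.urdf"/>
      <!-- publishes TF frames computed from the URDF and joint states -->
      <node pkg="robot_state_publisher" type="state_publisher"
            name="robot_state_publisher"/>
      <!-- publishes joint states when no real hardware is connected -->
      <node pkg="joint_state_publisher" type="joint_state_publisher"
            name="joint_state_publisher"/>
    </launch>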
The Dynamixel follow controller makes the arm follow the joint trajectory generated by the planner. It receives its input in the form of joint angles and issues the corresponding actuation commands to the Dynamixel motors. We configured this controller so that our robot's Dynamixel motors track the joint angles supplied by the planner.
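To illustrate the interface, the sketch below sends a single-point joint trajectory to such a controller through the standard FollowJointTrajectory action; the action name and the joint values are assumptions for illustration.

    #!/usr/bin/env python
    # Sketch: command the arm to a set of joint angles via the follow controller.
    import rospy
    import actionlib
    from control_msgs.msg import (FollowJointTrajectoryAction,
                                  FollowJointTrajectoryGoal)
    from trajectory_msgs.msg import JointTrajectoryPoint

    rospy.init_node('send_arm_goal')
    client = actionlib.SimpleActionClient(
        'arm_controller/follow_joint_trajectory', FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = ['shoulder_pan_joint', 'shoulder_lift_joint',
                                   'elbow_flex_joint', 'wrist_roll_joint']
    point = JointTrajectoryPoint()
    point.positions = [0.0, 0.5, -0.8, 0.0]       # target joint angles (rad)
    point.time_from_start = rospy.Duration(2.0)   # reach the pose in 2 s
    goal.trajectory.points.append(point)

    client.send_goal(goal)
    client.wait_for_result()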
Chapter: 6
6 Results and analysis: Our aim is to develop a low-cost mobile autonomous robotic arm that can perform various applications: tabletop manipulation, kitchen and office tasks, block manipulation, and so on. In our lab we have tested it only on block manipulation. Using this robot arm we can pick a block from any position and place it at a given goal position. As noted earlier, the arm can serve several purposes, but due to the limited resources in our lab we have used it only for block manipulation; this activity can be seen in the figure.
Fig. 40: Collision-free trajectory generation using the planning scene warehouse viewer
We generated collision-free trajectories for our robot using the planning scene warehouse viewer; the figures show how. With the warehouse viewer we can filter a trajectory and place obstacles in the scene that the robot must avoid during trajectory execution. Using this tool we successfully generated trajectories for our robot.
The figure above presents the torque analysis of each motor during block manipulation: the first plot shows the shoulder pan joint, the second the shoulder lift joint, the third the elbow flex joint, and the last the wrist flex joint.
Chapter: 7
7 Conclusion:
The main objective of this thesis was to develop a low-cost mobile manipulator that can perform pick-and-place operations even in cluttered environments. We did our best to develop an autonomous robot arm that avoids collisions in real time and can perform object manipulation.
We first developed the hardware of the robot arm using the BIOLOID kit (AX-12 Dynamixel servo motors, connectors, screws, nuts, etc.).
After successfully building the hardware, we focused on the software, for which we used ROS (Robot Operating System). The first step was to develop an accurate URDF model of the robot arm. From this URDF we generated our atom_arm_navigation package using the planning description configuration wizard; it contains the forward kinematics, inverse kinematics, motion planning and perception packages, among others. We then created the robot launch file; launching it brings the robot online, ready to perform whatever application is required. This launch file starts many ROS nodes: it uploads the actuator YAML file for every joint, loads the URDF, and launches the perception node, robot_state_publisher, joint_state_publisher, and so on.
We then concentrated on block manipulation. We first built the simple_arm_server for our robot, through which the real arm communicates with ROS. After that we used the turtlebot_block_manipulation package to perform block manipulation with the real robot. With all of this in place, the robot performs block manipulation: it detects 3 cm blocks, autonomously plans its trajectory once it receives a goal, and then picks the block up and places it at the goal position.
This is the overview of my thesis, "Real Time Collision Avoidant Low Cost Autonomous Robotic Arm using ROS".
Chapter: 8
8 Recommendation and future work:
This robot arm is essentially a testbed on which anyone can validate an application they have developed; any researcher who wants to work in this area can use it as a testbed and extend it. Young researchers can work on the recognition of household objects such as a pen, a cup, a spoon, a toothpaste box or a bar of soap. They can also work on grasping those objects: by developing their own algorithms they can find the best grasping points and test them on our robot arm.
We are using very low-cost Dynamixel AX-12 servo motors, which limit performance. For better performance, higher-grade (and more expensive) servo motors should be used first, so that the errors introduced by the low-cost motors are removed and the system improves.
As mentioned earlier, our organization has no mechanical workshop in which we could make the robot's mechanical structure strong and stable, so the structure is not very rigid given the limited resources in our laboratory. This loose mechanical structure caused many problems during the development of the arm.
I therefore suggest addressing these problems first: use higher-grade servo motors, which exhibit less jerk and fewer temperature-induced errors, and build a stable frame for the robot, for better performance.