Lab05
Introduction
In the previous lab we practised using move_base with actionlib, as well as using roslaunch.
Today we will work with TF, to understand a bit better how it works. We will also take a look at 3D cameras,
although independently of the robot.
At the end of this lab sheet you have a summary of how to connect to the robot and the required
passwords, as well as a troubleshooting section. If you need any other information on how to connect, refer to
the previous labs' sheets.
Goals for today's lab: Use TF to get transformations from one frame to another and see them in RViz.
See 3D point clouds from a 3D camera in RViz.
Important reminder: Laptops are shared by all the students of the module. Therefore, make sure to delete
any data you download to the computer before leaving the lab. Please copy anything you need from your
laptop to the USB drive at the start of the practical and copy it back to your laptop at the end. USB
drives may be erased without prior notice between sessions.
Now, let's change the view so we can see the 3D frames. Go to Panels, and select Views. This will open a
new panel on the right. Change the view type to "Orbit (rviz)".
You can now move the view around to see something like this:
Now, click on Add in the bottom left corner of RViz, and select TF. You should see something like
this:
This shows all the frames available to the robot, in the correct position. If you expand the TF widget on
the left, and expand the frames, you can unselect some frames and clear it out a bit. For instance, this:
Will show only the laser frame and the camera frame, as follows:
Note that RViz is using TF to display everything in the right place. However, when the robot moves, the laser will
move with it. Thanks to TF, the robot can compute the displacement between the laser and any of the
other frames. RViz transforms anything it receives to the fixed frame, which is usually set to map. If you
change it under "Global Options" and set it to base_link, for instance, you will see that the robot then becomes
the center, and the map moves around it.
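If you are curious how this looks from code, here is a minimal sketch that asks TF for the displacement between base_link and the laser frame (called base_scan on this robot, as we will see later in this sheet); the node name is just a placeholder:

import rospy
import tf2_ros

# Minimal sketch: ask TF for the displacement between two frames.
rospy.init_node("tf_lookup_example")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Transform from base_link (target) to base_scan (source), at the latest available time,
# waiting up to 2 seconds for the transform to become available.
t = tf_buffer.lookup_transform("base_link", "base_scan", rospy.Time(0), rospy.Duration(2.0))
print(t.transform.translation)  # where the laser sits relative to the robot base
print(t.transform.rotation)     # and how it is oriented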
Getting points from the laser to the map
Our goal will be to get the point which is closest to the robot as observed by the laser, and to display this in
RViz.
We will be subscribing to the /scan topic to get laser scan information. Check what the published
messages look like by using the rostopic echo (the --noarr option may be helpful), rostopic type, and
rosmsg show commands.
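If you prefer to peek at the data from Python instead of the command line, a minimal subscriber along these lines (the node and callback names are just placeholders) prints a short summary of every scan:

import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(msg):
    # ranges is a long array, so only print a summary of the message.
    rospy.loginfo("%d readings, angle_min: %.4f rad, angle_increment: %.4f rad",
                  len(msg.ranges), msg.angle_min, msg.angle_increment)

rospy.init_node("scan_inspector")
rospy.Subscriber("/scan", LaserScan, scan_callback)
rospy.spin()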
The messages sent are of type sensor_msgs/LaserScan. Take a moment to understand the
get_laser_x_y() method of the attached template. That code gets the information from the laser and
iterates over the ranges (the distances seen by the laser), keeping the minimum range as well as the angle
at which that observation was made.¹ The laser observes in all directions (360 degrees) and gives us the
information in an array (called ranges). It also gives us angle_increment, which describes by how much
the angle increases at each step of the array. Therefore, the element at position "i" in the array will
have been observed at the angle "i * angle_increment". The rest uses trigonometry to get the x and y positions
relative to the robot.
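For reference, a method along those lines might look roughly like this (a sketch, not necessarily the template's exact code):

import math

def get_laser_x_y(scan):
    # scan is a sensor_msgs/LaserScan message.
    min_range = float('inf')
    min_index = 0
    for i, r in enumerate(scan.ranges):
        # Skip invalid readings (outside the laser's valid range).
        if scan.range_min < r < scan.range_max and r < min_range:
            min_range = r
            min_index = i
    # The i-th reading was taken at angle_min + i * angle_increment
    # (this sheet assumes angle_min is 0, so it reduces to i * angle_increment).
    angle = scan.angle_min + min_index * scan.angle_increment
    # Trigonometry: convert (range, angle) into x, y in the laser frame.
    x = min_range * math.cos(angle)
    y = min_range * math.sin(angle)
    return x, y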
The method will give you the x, y coordinates of the closest point as seen by the robot's laser. This point
will be in the frame "base_scan", as this is the frame of the robot's laser.
Your task now is to complete the code to transform the coordinates from base_scan to map inside the
transform_coords method. You shouldn't need to change anything in the main method. Look for
comments with the "TODO" text, which mark the parts you need to check or complete.
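One common way to do this is with tf2. The following is only a sketch: it assumes the class creates a tf2_ros.Buffer and a tf2_ros.TransformListener somewhere (for instance in its __init__), and the template may be organised differently:

import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2's transform()

# Assumed to exist in the class __init__ (names are placeholders):
#   self.tf_buffer = tf2_ros.Buffer()
#   self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)

def transform_coords(self, pose, target_frame="map"):
    # pose is a geometry_msgs/PoseStamped; its header.frame_id says which frame
    # it is currently expressed in (base_scan here). tf2 looks up the transform
    # to target_frame and returns the same pose expressed in that frame.
    return self.tf_buffer.transform(pose, target_frame, rospy.Duration(1.0))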
This will show a yellow ball as published by the code. If the transformation worked well, the yellow ball
should correspond to the closest obstacle around the robot, as seen here:
¹ A video explaining how to deal with laser data will be uploaded later on.
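For reference, a marker publisher along these lines would produce the yellow ball described above (a hypothetical sketch; the template already provides publish_marker, so you do not need to write it, and the topic name here is made up):

import rospy
from visualization_msgs.msg import Marker

# Assumes a publisher created elsewhere, e.g.:
#   self.marker_pub = rospy.Publisher("/closest_point_marker", Marker, queue_size=1)
def publish_marker(self, pose):
    # pose is a geometry_msgs/PoseStamped; RViz will draw the marker in
    # whatever frame pose.header.frame_id says.
    marker = Marker()
    marker.header = pose.header
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD
    marker.pose = pose.pose
    marker.scale.x = marker.scale.y = marker.scale.z = 0.2  # sphere of about 20 cm
    marker.color.r = 1.0
    marker.color.g = 1.0   # red + green = yellow
    marker.color.a = 1.0   # fully opaque
    self.marker_pub.publish(marker)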
Now, let’s try some things:
• Check the differences in the pose before and after the transformation (so, in the base_scan and map
frames) by printing the pose variable. Both poses represent the same location, but from different
points of view.
• Comment out the line in main where the transformation method is called (pose =
lab5.transform_coords(pose)). Make sure that the pose sent to publish_marker is in the
base_scan frame. If everything went well, the code should still work! Why is that? This is
because we are now sending the PoseStamped with the frame_id set to base_scan. RViz receives it,
and knows it needs to display things in the map frame (as that is the fixed frame it has set), so
RViz transforms the position for us!
• Let's see what happens if we send the pose in the wrong frame (which is something that
happens quite often). In the line "pose = lab5.coord_to_posestamped(coords,
frame_id="base_scan")", change the frame_id to map, and comment out the
transform_coords call again (a sketch of what this helper roughly looks like is given after this discussion).
Now the yellow ball won't be where it should be, and will even show up
where no obstacles are present (and not even at the height of the laser as before!):
This is because we are sending the position measured in the base_scan frame as if it were in the map frame. Thus,
the position is interpreted relative to the origin of the map (the axes shown on the left in the image above). If
you place the robot at the origin of the map, the yellow ball will be in the correct position, as both
frames are then aligned, but of course this no longer holds once the robot moves. That is why TF is
important, and why we need to make sure we specify the correct frame for each message we send.
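To make the role of the frame_id concrete, a helper like coord_to_posestamped might look roughly like this (a sketch; the template's actual code may differ):

import rospy
from geometry_msgs.msg import PoseStamped

def coord_to_posestamped(coords, frame_id="base_scan"):
    # coords is assumed to be an (x, y) pair expressed in frame_id.
    pose = PoseStamped()
    pose.header.frame_id = frame_id       # the frame these coordinates live in
    pose.header.stamp = rospy.Time.now()  # when the observation was made
    pose.pose.position.x = coords[0]
    pose.pose.position.y = coords[1]
    pose.pose.orientation.w = 1.0         # neutral orientation
    return pose

The header is the only thing telling the rest of the system (and RViz) which frame the numbers refer to, which is exactly what the experiments above play with.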
Let's now take a look at the 3D camera. Do not place your fingers on the front of the camera. Always grasp the camera from the top or bottom, and
avoid leaving fingerprints on the front!
Connect the camera to the laptop. Disconnect the robot and stop all the ROS masters. Open a new terminal, and then
run from the container:
roslaunch realsense2_camera demo_pointcloud.launch
A new RViz window should have opened, and you should see an image. Add two new Image widgets (from
the Add button in the bottom left corner of RViz). In one, set the topic to "/camera/depth/image_rect_raw", and in the other
to "/camera/color/image_raw". Add a TF widget too.
The color image is a normal image, as from any camera. The depth image is what captures the depth
(the distance to the camera): there, the color of each pixel represents how far that pixel is from the camera. Joining both
images together, we can create the 3D point cloud, which is a 3D representation of the two images and
can be seen in RViz. Rotate it around and see how it works. Also pay attention to the different frames that
are published by the camera.
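To give an idea of how a depth pixel becomes a 3D point, here is a small sketch of the usual pinhole-camera computation (fx, fy, cx, cy are hypothetical intrinsic parameters of the camera; the driver does all of this for us):

def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    # (u, v) is the pixel, depth is its distance along the optical axis (metres),
    # fx/fy are focal lengths and cx/cy the optical centre, all in pixels.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return x, y, z  # one point of the cloud, in the depth camera's frame

Doing this for every pixel of the depth image, and colouring each point with the matching pixel of the color image, gives the cloud you see in RViz.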
Pay special attention to camera_color_frame and camera_depth_frame. You will see that they are not
in the same place; that is because one belongs to the RGB camera (on the right), while the other belongs to the depth
camera (on the left). This small displacement must be taken into account for precise interaction with the world.
Also note how objects closer to the camera cast a shadow of empty space behind them. Can you find
the limits of the camera: how far can it see? And what happens when we get close to it (without
touching the lenses!)?
Depth cameras are very useful in robotics, as they allow the robot not only to see the world, but also to
know at what distance the objects seen by the camera are.
Appendix – Reminders on passwords, connection to the robot, and troubleshooting
Starting and logging into the laptop
Every robot has an assigned laptop (both labelled ITR-X, where X is a number). When you first log into
the computer, you will need to decrypt the hard drive. The passwords for the laptop follow below; make
sure to remember them for the whole module. The user you will be using is called "student". The
laptop will automatically log on to this user, but in case you need it, the password is below.
Once we are logged in, our prompt will change to ubuntu@tb-itrX:~$ showing that we are inside the
robot’s computer.
Note. If the ssh command fails, make sure the robot can be found by running “ping 192.168.1.X”,
where X completes the IP address of the robot. If it works, you should see packets being sent and
received.
Troubleshooting
1. My rosrun command doesn’t run. The error says: “[rosrun] Couldn’t find executable named
<node_name>.py below”. This means that either:
a. Your script is not called exactly as you wrote on the rosrun command.
b. Your script is not executable (run chmod +x <node_name>.py in the folder where the
node is).
c. Your script is not in the correct folder or package.
2. My rosrun command doesn't run. The error says: "[rosrun] package 'XXX' not found". This means
ROS does not know your package (you should be able to run roscd <package_name> if everything is
correct). This happens because either:
a. You did not create the package with exactly that name.
b. You did not catkin_make.
c. You did not source the devel/setup.bash of the workspace you’re running things from.
3. rosrun works, but the robot does not move. Make sure that the environment variables are correct
and well set; refer to the Lab1 sheet for more info. Make sure that you can see topics from and
to the robot and the laptop, as in the exercises from Lab1.
4. rosrun works, but the robot does not move. Make sure you wrote the correct topic name without
typos. It is common to make typos in topic names, and this creates a new topic with that name,
so everything may be working but you’re just not publishing the information where the robot is
waiting to receive it!
5. Everything is well set up, I can publish and echo messages both from the robot and the
computer, but the robot still does not move. In this case, the robot is probably making some
beeping sounds: it means it is running out of battery and doesn't have the strength to move (as
happens to some people in the morning :-) ).
6. For anything else, try to understand the error message first (error messages are there to tell you why
something is not working), and ask the TA afterwards. Please do not use TAs as human debuggers (but do
ask them questions if you don't understand something or are struggling! They are there to help).
7. I get an error "ERROR: unable to contact ROS master at [http://192.168.1.X]". You probably forgot
to add ":11311" at the end of the ROS_MASTER_URI.
8. I get some error message of the type “Costmap2DROS transform timeout. Current time:
1664362190.8577, global_pose stamp: 1650545888.8141, tolerance: 0.5000”. You probably
forgot to synchronise the time between the robot and the laptop.