ROS BY EXAMPLE
VOLUME 2
A PI ROBOT PRODUCTION
R. PATRICK GOEBEL
Version 1.1.0
For ROS Indigo
ROS BY EXAMPLE. Copyright © 2014 by R. Patrick Goebel
All Rights Reserved. No part of this work may be reproduced or transmitted in any form
or by any means, electronic or mechanical, including photocopying, recording, or by any
information storage or retrieval system, without the prior written permission of the
copyright owner and the publisher.
ISBN: 978-1-312-39266-3
Products and company names mentioned herein may be the trademarks of their
respective owners. Rather than use a trademark symbol with every occurrence of a
trademark name, we are using the names only in an editorial fashion and to the benefit
of the trademark owner, with no intention of infringement of the trademark.
Information contained in this work (Paperback or eBook) has been obtained from
sources believed to be reliable. However, the author does not guarantee the accuracy or
completeness of any information contained in it, and the author shall not be responsible
for any errors, omissions, losses, or damages caused or alleged to be caused directly or
indirectly by the information published herein. This work is published with the
understanding that the author is supplying information but is not attempting to render
professional services. This product almost certainly contains errors. It is your
responsibility to examine, verify, test, and determine the appropriateness of use, or
request the assistance of an appropriate professional to do so.
PREFACE
In February 2013, ROS stewardship was transferred from Willow Garage to the Open
Source Robotics Foundation (OSRF). The stated mission of OSRF is "to support the
development, distribution, and adoption of open source software for use in robotics
research, education, and product development." The two major projects overseen by
OSRF are ROS and Gazebo, the advanced 3D robot simulator.
New ROS packages continue to be released on a regular basis by individuals and groups
working in both university labs and the robotics industry. Interest in learning ROS
continues to grow, and the popularity of ROS By Example Volume 1 has exceeded my
expectations. Yet going beyond the basics still requires a fairly steep learning curve, and
realistic examples dealing with more advanced topics are not always easy to find.
The overall goal of Volume 2 is to introduce a collection of ROS packages and tools that
are essential for programming truly autonomous robots. This includes the use of
executive task managers like SMACH and Behavior Trees, creating custom URDF
models for different robots, monitoring the health of the robot using ROS diagnostics,
multiplexing control inputs and assigning priorities, controlling a multi-jointed arm for
reaching and grasping, monitoring and controlling the robot through a web browser, and
performing realistic simulations using Gazebo. When we are finished, you will have the
tools necessary to program an "always on" robot that can monitor its own subsystems
and perform a series of tasks without human intervention.
Please note that the code in this book is written for ROS Indigo. While some of it might
work on earlier versions of ROS, it is highly recommended that the reader use ROS
Indigo with this book.
PRINTED VS PDF VERSIONS OF THE BOOK
The printed and PDF versions of this book are nearly the same with a few important
differences. The page formatting is identical but most PDF readers start page
numbering from the first page and ignore what is set by the document itself. Images and
code syntax are in color in the PDF version but grayscale in the printed version to keep
the cost reasonable. The PDF version is full of clickable links to other resources on the
Web. In the printed version, these links appear as underlined text with a numbered
superscript. The expanded URLs are then listed as endnotes at the end of the book.
Staying Up-To-Date: If you'd like to ask questions about the book or the sample code,
or receive notifications regarding updates and bug fixes, please join the ros-by-example
Google Group.
Main Chapter Headings
Preface................................................................................................................vii
Printed vs PDF Versions of the Book...............................................................ix
Changes Since ROS Hydro..............................................................................xix
1. Scope of this Volume.......................................................................................1
2. Installing the ros-by-example Code...............................................................3
3. Task Execution using ROS.............................................................................7
4. Creating a URDF Model for your Robot....................................................89
5. Controlling Dynamixel Servos: Take 2......................................................147
6. Robot Diagnostics........................................................................................167
7. Dynamic Reconfigure..................................................................................189
8. Multiplexing Topics with mux & yocs.......................................................201
9. Head Tracking in 3D...................................................................................213
10. Detecting and Tracking AR Tags.............................................................231
11. Arm Navigation using MoveIt!.................................................................251
12. Gazebo: Simulating Worlds and Robots.................................................369
13. Rosbridge: Building a Web GUI for your Robot...................................405
Appendix: Plug and Play USB Devices for ROS: Creating udev Rules......439
Contents
Preface................................................................................................................vii
Printed vs PDF Versions of the Book...............................................................ix
Changes Since ROS Hydro..............................................................................xix
1. Scope of this Volume.......................................................................................1
2. Installing the ros-by-example Code...............................................................3
3. Task Execution using ROS.............................................................................7
3.1 A Fake Battery Simulator.....................................................................................8
3.2 A Common Setup for Running the Examples.....................................................10
3.3 A Brief Review of ROS Actions........................................................................11
3.4 A Patrol Bot Example.........................................................................................12
3.5 The Patrol Bot using a Standard Script...............................................................13
3.6 Problems with the Script Approach....................................................................16
3.7 SMACH or Behavior Trees?..............................................................................16
3.8 SMACH: Tasks as State Machines.....................................................................17
3.8.1 SMACH review.......................................................................................................18
3.8.2 Patrolling a square using SMACH..........................................................................19
3.8.3 Testing SMACH navigation in the ArbotiX simulator............................................23
3.8.4 Accessing results from a SimpleActionState...........................................................26
3.8.5 SMACH Iterators.....................................................................................................27
3.8.6 Executing commands on each transition.................................................................30
3.8.7 Interacting with ROS topics and services................................................................31
3.8.8 Callbacks and Introspection.....................................................................................35
3.8.9 Concurrent tasks: Adding the battery check to the patrol routine...........................36
3.8.10 Comments on the battery checking Patrol Bot......................................................44
3.8.11 Passing user data between states and state machines............................................44
3.8.12 Subtasks and hierarchical state machines..............................................................48
3.8.13 Adding the battery check to the house cleaning robot...........................................54
3.8.14 Drawbacks of state machines................................................................................54
3.9 Behavior Trees...................................................................................................55
3.9.1 Behavior Trees versus Hierarchical State Machines...............................................56
3.9.2 Key properties of behavior trees..............................................................................57
3.9.3 Building a behavior tree..........................................................................................58
3.9.4 Selectors and sequences...........................................................................................60
3.9.5 Customizing behaviors using decorators (meta-behaviors).....................................61
3.10 Programming with Behavior Trees and ROS....................................................62
3.10.1 Installing the pi_trees library.................................................................................62
3.10.2 Basic components of the pi_trees library..............................................................63
3.10.3 ROS-specific behavior tree classes........................................................................68
3.10.4 A Patrol Bot example using behavior trees..........................................................73
3.10.5 A house cleaning robot using behavior trees........................................79
3.10.6 Parallel tasks..........................................................................................................85
3.10.7 Adding and removing tasks...................................................................................87
4. Creating a URDF Model for your Robot....................................................89
4.1 Start with the Base and Wheels..........................................................................90
4.1.1 The robot_state_publisher and joint_state_publisher nodes....................................91
4.1.2 The base URDF/Xacro file......................................................................................92
4.1.3 Alternatives to using the /base_footprint frame......................................................97
4.1.4 Adding the base to the robot model.........................................................................97
4.1.5 Viewing the robot's transform tree..........................................................................98
4.1.6 Using a mesh for the base........................................................................................99
4.2 Simplifying Your Meshes.................................................................................104
4.3 Adding a Torso.................................................................................................104
4.3.1 Modeling the torso.................................................................................................105
4.3.2 Attaching the torso to the base..............................................................................106
4.3.3 Using a mesh for the torso.....................................................................................107
4.3.4 Adding the mesh torso to the mesh base...............................................................108
4.4 Measure, Calculate and Tweak.........................................................................110
4.5 Adding a Camera..............................................................................................110
4.5.1 Placement of the camera........................................................................................111
4.5.2 Modeling the camera.............................................................................................112
4.5.3 Adding the camera to the torso and base...............................................................114
4.5.4 Viewing the transform tree with torso and camera................................................115
4.5.5 Using a mesh for the camera.................................................................................116
4.5.6 Using an Asus Xtion Pro instead of a Kinect........................................................118
4.6 Adding a Laser Scanner (or other Sensors)......................................................118
4.6.1 Modeling the laser scanner....................................................................................119
4.6.2 Attaching a laser scanner (or other sensor) to a mesh base...................................120
4.6.3 Configuring the laser node launch file..................................................................121
4.7 Adding a Pan and Tilt Head..............................................................................122
4.7.1 Using an Asus Xtion Pro instead of a Kinect........................................................124
4.7.2 Modeling the pan-and-tilt head..............................................................................124
4.7.3 Figuring out rotation axes......................................................................................127
4.7.4 A pan and tilt head using meshes on Pi Robot......................................................128
4.7.5 Using an Asus Xtion Pro mesh instead of a Kinect on Pi Robot...........................129
4.8 Adding One or Two Arms................................................................................129
4.8.1 Placement of the arm(s).........................................................................................129
4.8.2 Modeling the arm...................................................................................................129
4.8.3 Adding a gripper frame for planning.....................................................................132
4.8.4 Adding a second arm.............................................................................................133
4.8.5 Using meshes for the arm servos and brackets......................................................135
4.9 Adding a Telescoping Torso to the Box Robot.................................................137
4.10 Adding a Telescoping Torso to Pi Robot........................................................138
4.11 A Tabletop One-Arm Pi Robot.......................................................................138
4.12 Testing your Model with the ArbotiX Simulator............................................140
4.12.1 A fake Box Robot................................................................................................141
4.12.2 A fake Pi Robot...................................................................................................143
4.13 Creating your own Robot Description Package..............................................144
4.13.1 Copying files from the rbx2_description package...............................................145
4.13.2 Creating a test launch file....................................................................................145
5. Controlling Dynamixel Servos: Take 2......................................................147
5.1 Installing the ArbotiX Packages.......................................................................147
5.2 Launching the ArbotiX Nodes..........................................................................148
5.3 The ArbotiX Configuration File.......................................................................152
5.4 Testing the ArbotiX Joint Controllers in Fake Mode........................................158
5.5 Testing the ArbotiX Joint Controllers with Real Servos....................................160
5.6 Relaxing All Servos..........................................................................................163
5.7 Enabling or Disabling All Servos.....................................................................166
6. Robot Diagnostics........................................................................................167
6.1 The DiagnosticStatus Message.........................................................................168
6.2 The Analyzer Configuration File......................................................................169
6.3 Monitoring Dynamixel Servo Temperatures....................................................170
6.3.1 Monitoring the servos for a pan-and-tilt head.......................................................170
6.3.2 Viewing messages on the /diagnostics topic.........................................................173
6.3.3 Protecting servos by monitoring the /diagnostics topic.........................................175
6.4 Monitoring a Laptop Battery............................................................................179
6.5 Creating your Own Diagnostics Messages.......................................................180
6.6 Monitoring Other Hardware States...................................................................186
7. Dynamic Reconfigure..................................................................................189
7.1 Adding Dynamic Parameters to your own Nodes.............................................190
7.1.1 Creating the .cfg file..............................................................................................190
7.1.2 Making the .cfg file executable.............................................................................191
7.1.3 Configuring the CMakeLists.txt file......................................................................192
7.1.4 Building the package.............................................................................................192
7.2 Adding Dynamic Reconfigure Capability to the Battery Simulator Node........192
7.3 Adding Dynamic Reconfigure Client Support to a ROS Node.........................196
7.4 Dynamic Reconfigure from the Command Line...............................................199
8. Multiplexing Topics with mux & yocs.......................................................201
8.1 Configuring Launch Files to use mux Topics...................................................202
8.2 Testing mux with the Fake TurtleBot...............................................................203
8.3 Switching Inputs using mux Services...............................................................204
8.4 A ROS Node to Prioritize mux Inputs..............................................................205
8.5 The YOCS Controller from Yujin Robot..........................................................208
8.5.1 Adding input sources.............................................................................................211
9. Head Tracking in 3D...................................................................................213
9.1 Tracking a Fictional 3D Target.........................................................................214
9.2 Tracking a Point on the Robot..........................................................................215
9.3 The 3D Head Tracking Node............................................................................218
9.3.1 Real or fake head tracking.....................................................................................218
9.3.2 Projecting the target onto the camera plane...........................................................219
9.4 Head Tracking with Real Servos......................................................................222
9.4.1 Real servos and fake target....................................................................................223
9.4.2 Real servos, real target...........................................................................................224
9.4.3 The nearest_cloud.py node and launch file...........................................................226
10. Detecting and Tracking AR Tags.............................................................231
10.1 Installing and Testing the ar_track_alvar Package..........................................232
10.1.1 Creating your own AR Tags................................................................................232
10.1.2 Generating and printing the AR tags...................................................................234
10.1.3 Launching the camera driver and ar_track_alvar node.......................................234
10.1.4 Testing marker detection.....................................................................................236
10.1.5 Understanding the /ar_pose_marker topic...........................................................236
10.1.6 Viewing the markers in RViz..............................................................................238
10.2 Accessing AR Tag Poses in your Programs....................................................239
10.2.1 The ar_tags_cog.py script....................................................................................239
10.2.2 Tracking the tags with a pan-and-tilt head..........................................................243
10.3 Tracking Multiple Tags using Marker Bundles..............................................244
10.4 Following an AR Tag with a Mobile Robot....................................................244
10.4.1 Running the AR follower script on a TurtleBot .................................................247
10.5 Exercise: Localization using AR Tags............................................................248
11. Arm Navigation using MoveIt!.................................................................251
11.1 Do I Need a Real Robot with a Real Arm?.....................................................252
11.2 Degrees of Freedom.......................................................................................252
11.3 Joint Types.....................................................................................................253
11.4 Joint Trajectories and the Joint Trajectory Action Controller.........................254
11.5 Forward and Inverse Arm Kinematics............................................................257
11.6 Numerical versus Analytic Inverse Kinematics..............................................258
11.7 The MoveIt! Architecture...............................................................................258
11.8 Installing MoveIt!...........................................................................................260
11.9 Creating a Static URDF Model for your Robot .............................................260
11.10 Running the MoveIt! Setup Assistant...........................................................261
11.10.1 Load the robot's URDF model...........................................................................262
11.10.2 Generate the collision matrix.............................................................................263
11.10.3 Add the base_odom virtual joint.......................................................................264
11.10.4 Adding the right arm planning group................................................................265
11.10.5 Adding the right gripper planning group...........................................................268
11.10.6 Defining robot poses..........................................................................................270
11.10.7 Defining end effectors.......................................................................................272
11.10.8 Defining passive joints......................................................................................272
11.10.9 Generating the configuration files.....................................................................272
11.11 Configuration Files Created by the MoveIt! Setup Assistant........................274
11.11.1 The SRDF file (robot_name.srdf)......................................................................274
11.11.2 The fake_controllers.yaml file...........................................................................275
11.11.3 The joint_limits.yaml file..................................................................................276
11.11.4 The kinematics.yaml file...................................................................................277
11.12 The move_group Node and Launch File.......................................................279
11.13 Testing MoveIt! in Demo Mode...................................................................279
11.13.1 Exploring additional features of the Motion Planning plugin...........................283
11.13.2 Re-running the Setup Assistant at a later time..................................................284
11.14 Testing MoveIt! from the Command Line....................................................285
11.15 Determining Joint Configurations and End Effector Poses...........................288
11.16 Using the ArbotiX Joint Trajectory Action Controllers................................291
11.16.1 Testing the ArbotiX joint trajectory action controllers in simulation...............291
11.16.2 Testing the ArbotiX joint trajectory controllers with real servos......................299
11.17 Configuring MoveIt! Joint Controllers.........................................................300
11.17.1 Creating the controllers.yaml file......................................................................301
11.17.2 Creating the controller manager launch file......................................................303
11.18 The MoveIt! API..........................................................................................304
11.19 Forward Kinematics: Planning in Joint Space..............................................304
11.20 Inverse Kinematics: Planning in Cartesian Space.........................................313
11.21 Pointing at or Reaching for a Visual Target..................................................321
11.22 Setting Constraints on Planned Trajectories.................................................323
11.22.1 Executing Cartesian Paths.................................................................................323
11.22.2 Setting other path constraints............................................................................329
11.23 Adjusting Trajectory Speed..........................................................................333
11.24 Adding Obstacles to the Planning Scene......................................................336
11.25 Attaching Objects and Tools to the Robot....................................................345
11.26 Pick and Place..............................................................................................347
11.27 Adding a Sensor Controller..........................................................................359
11.28 Running MoveIt! on a Real Arm..................................................................363
11.28.1 Creating your own launch files and scripts.......................................................364
11.28.2 Running the robot's launch files........................................................................364
11.28.3 Forward kinematics on a real arm.....................................................................365
11.28.4 Inverse kinematics on a real arm.......................................................................365
11.28.5 Cartesian paths on a real arm.............................................................................366
11.28.6 Pick-and-place on a real arm.............................................................................366
11.28.7 Pointing at or reaching for a visual target..........................................................367
12. Gazebo: Simulating Worlds and Robots.................................................369
12.1 Installing Gazebo............................................................................................370
12.2 Hardware Graphics Acceleration....................................................................371
12.3 Installing the ROS Gazebo Packages..............................................................372
12.4 Installing the Kobuki ROS Packages..............................................................373
12.5 Installing the Fetch Robot ROS Packages......................................................373
12.6 Using the Gazebo GUI...................................................................................373
12.7 Testing the Kobuki Robot in Gazebo..............................................................375
12.7.1 Accessing simulated sensor data.........................................................................378
12.7.2 Adding safety control to the Kobuki...................................................................382
12.7.3 Running the nav_square.py script from Volume 1..............................................384
12.8 Loading Other Worlds and Objects................................................................385
12.9 Testing the Fetch Robot in Gazebo.................................................................386
12.9.1 Fetch joint trajectories.........................................................................................387
12.9.2 Fetch and MoveIt!...............................................................................................388
12.9.3 Fetch Pick-and-Place...........................................................................................389
12.10 Real Pick-and-Place using the simple_grasping Perception Pipeline............391
12.10.1 Limitations of depth cameras............................................................................392
12.10.2 Running the demo..............................................................................................392
12.10.3 Understanding the real_pick_and_place.py script.............................................398
12.11 Running Gazebo Headless + RViz...............................................................401
13. Rosbridge: Building a Web GUI for your Robot...................................405
13.1 Installing the rosbridge Packages...................................................................405
13.2 Installing the web_video_server Package.......................................................406
13.3 Installing a Simple Web Server (mini-httpd)..................................................408
13.4 Starting mini-httpd, rosbridge and web_video_server....................................409
13.5 A Simple rosbridge HTML/Javascript GUI....................................................411
13.6 Testing the GUI with a Fake TurtleBot..........................................................413
13.7 Testing the GUI with a Real Robot.................................................................414
13.8 Viewing the Web GUI on another Device on your Network..........................414
13.9 Using the Browser Debug Console.................................................................415
13.10 Understanding the Simple GUI.....................................................................416
13.10.1 The HTML layout: simple_gui.html.................................................................417
13.10.2 The JavaScript code: simple_gui.js...................................................................421
13.11 A More Advanced GUI using jQuery, jqWidgets and KineticJS..................432
13.12 Rosbridge Summary.....................................................................................437
Appendix: Plug and Play USB Devices for ROS: Creating udev Rules......439
13.13 Adding yourself to the dialout Group...........................................................439
13.14 Determining the Serial Number of a Device.................................................440
13.15 UDEV Rules.................................................................................................441
13.16 Testing a UDEV Rule...................................................................................442
13.17 Using a UDEV Device Name in a ROS Configuration File..........................442
CHANGES SINCE ROS HYDRO
If Indigo is your first experience with ROS and this book, then you can safely skip this
chapter. However, if you have already been using the previous version of this book with
ROS Hydro, there are a few changes to be noted. You can read the official list of
differences between ROS Hydro and Indigo on the Hydro → Indigo migration page.
Here are some of the items that affect the code used with this book.
• The laptop_battery.py script has been moved out of the turtlebot meta
package and now lives in a package of its own, ros-indigo-laptop-
battery-monitor. Similarly, the LaptopChargeStatus message type that
used to be found in the linux_hardware package has been replaced with the
SmartBatteryStatus message found in the smart_battery_msgs
package.
• The previous openni meta-package has been superseded by two new
packages: openni2_camera for use with the Asus Xtion, Xtion Pro and
Primesense 1.08/1.09 cameras, and freenect_camera for the Microsoft Kinect.
This has some potentially important consequences depending on which camera
you use:
◦ The earlier openni drivers for the Kinect supported resolutions down to
160x120 pixels. Unfortunately, the lowest resolution supported by the
freenect driver for the Kinect is 640x480 pixels (VGA) while the lowest
resolution supported on the Xtion Pro using the openni2 driver is 320x240
(QVGA). While these resolutions will generally work OK when using a fast
laptop or PC, be aware that lower-power CPUs or smaller single-board
computers may struggle to process video streams at resolutions above
320x240. For example, it can be difficult to get smooth object tracking at
640x480 pixels on a Core Duo processor without graphics acceleration since
tracking tends to lag behind the video processing at this resolution.
Fortunately, we can often use the camera driver's data skipping function to
get around this as we will see in Chapter 8.
◦ When using the earlier openni (version 1) driver with previous revisions of
this book, the default color image topic was /camera/rgb/image_color
for both the Kinect and Xtion Pro cameras. However, when using the
openni2 driver with an Asus Xtion Pro, the default color image topic is
/camera/rgb/image_raw while the topic /camera/rgb/image_color
is not used at all. On the other hand, when using the freenect driver with
a Microsoft Kinect, the color image data is published on both
/camera/rgb/image_raw and /camera/rgb/image_color. Since
/camera/rgb/image_raw is used by both cameras, we will switch to this
topic instead of /camera/rgb/image_color for this revision of the book.
The sample code launch files have therefore been updated to use this topic
name.
◦ The openni2 and freenect drivers use different units for depth image
values. The freenect driver uses millimeters when publishing depth
images while the openni2 driver uses meters. To convert the freenect
values to meters we divide by 1000. This has been done in the sample code.
Note that this only applies to depth images. When it comes to point clouds,
both drivers publish depth values in meters.
◦ The openni2 driver does not appear to generate disparity data for the
Asus Xtion cameras. Consequently, the disparity_view node that we
used in previous book revisions can no longer be used to view colored depth
images from these cameras. However, we can still view a grayscale version
of the depth image. For the Kinect, the freenect driver does publish
disparity data on the topic /camera/depth/disparity.
◦ The original openni.org website was shut down on April 23, 2014.
However, OpenNI 2 binaries are being preserved on the Structure website.
The freenect drivers are being developed through the OpenKinect project.
• The default camera resolution in all sample code has been changed from
320x240 to 640x480 to match the default resolution used by the camera launch
files distributed with ROS and the minimum resolution supported by the
freenect driver for the Kinect. If you are running vision processing on a low-
powered CPU, and you are using an Asus Xtion camera, you will get
significantly better frame rates and tracking performance if you reduce the
resolution to 320x240 using rqt_reconfigure or by adjusting the settings in the
camera's launch file. As noted above, the lowest resolution available on the
Kinect when using the freenect driver is 640x480 so using a smaller
resolution is not possible for this camera.
• A new CallbackTask has been added to the pi_trees library that turns any
Python function into a task.
• The UBR-1 robot model used in the Gazebo chapter has been replaced by the
Fetch robot by Fetch Robotics. The script names used with the model have been
updated accordingly.
• There was a change in the ROS Indigo xdot package that causes the
smach_viewer utility to crash. As of this writing, this issue still has not been
fixed in the smach_viewer package so we need to install a modified version
of the xdot package instead. Instructions are provided in Chapter 3.
• Finally, at the time of this writing, the IKFast plugin built using OpenRAVE
does not work under ROS Indigo. Consequently, Section 11.29, Creating a
Custom IK Fast Plugin has been removed from this revision of the book.
1. SCOPE OF THIS VOLUME
In Volume 1 we learned how to program a robot using the fundamental components of
ROS including the control of a mobile base, SLAM, robot vision (OpenCV, OpenNI and
a little bit of PCL), speech recognition, and joint control using Dynamixel servos. In
this volume we will tackle some of the more advanced ROS concepts and packages that
are essential for programming truly autonomous robot behaviors including:
• creating a model for your own robot using URDF/Xacro descriptions, including
a pan-and-tilt head, multi-jointed arm(s) and gripper(s), a telescoping torso, and
the placement of sensors such as a laser scanner
• using the ROS diagnostics package to enable your robot to monitor its own
systems such as battery levels and servo temperatures
• multiplexing ROS topics using mux and yocs so that control inputs can be
prioritized and not work against each other
• controlling a multi-jointed arm and gripper using the new MoveIt! framework
including the execution of forward and inverse kinematics, collision avoidance,
grasping, and pick-and-place tasks
• working with simulated robots and environments using the sophisticated Gazebo
simulator
• creating a web-based GUI for your robot using rosbridge, HTML and Javascript
NOTE: The chapter on Gazebo assumes we already have a robot model like the Kobuki
or Fetch whose developers have provided the necessary Gazebo properties and plugins
to handle the simulated physics of the robot. Creating either worlds or Gazebo plugins
from scratch is beyond the scope of this volume.
If you are reading the printed version of the book, then copy and paste is probably not an
option. Instead, you can use the following commands to download a small shell script
called rbx2-prereq.sh that will run the apt-get command above:
We will also need the code from the Volume 1 repository (rbx1) even if you do not have
the book. To install the rbx1 code for ROS Indigo (in case you don't already have it),
run the following commands:
$ cd ~/catkin_ws/src
$ git clone -b indigo-devel https://github.com/pirobot/rbx1.git
$ cd ~/catkin_ws
$ catkin_make
$ source ~/catkin_ws/devel/setup.bash
To clone and build the Volume 2 repository (rbx2) for ROS Indigo, follow these steps:
$ cd ~/catkin_ws/src
$ git clone -b indigo-devel https://github.com/pirobot/rbx2.git
$ cd ~/catkin_ws
$ catkin_make
$ source ~/catkin_ws/devel/setup.bash
NOTE: The last line above should be added to the end of your ~/.bashrc file if you
haven't done so already. This will ensure that your catkin packages are added to your
ROS_PACKAGE_PATH whenever you open a new terminal.
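For example, assuming the default catkin workspace location used throughout this book,
the line can be appended with a single command:
$ echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc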
If the ROS By Example code is updated at a later time, you can merge the updates with
your local copy of the repository by using the following commands:
$ cd ~/catkin_ws/src/rbx2
$ git pull
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
Staying Up-To-Date: If you'd like to ask questions about the code, report bugs, or
receive notifications of updates, please join the ros-by-example Google Group.
All of the ROS By Example Volume 2 packages begin with the letters rbx2. To list the
packages, move into the parent of the rbx2 meta-package and use the Linux ls
command:
Throughout the book we will be using the roscd command to move from one package
to another. For example, to move into the rbx2_dynamixels package, you would use
the command:
$ roscd rbx2_dynamixels
Note that you can run this command from any directory and ROS will find the package.
IMPORTANT: If you are using two computers to control or monitor your robot, such
as a laptop on the robot together with a second computer on your desktop, be sure to
clone and build the Indigo branch of the rbx2 and rbx1 repositories on both machines.
• pause and resume: It is often desirable to pause a currently running task (or
tasks) when a higher-priority task is given control and then to resume the
preempted task(s) when the executive yields back control. For example, the
Neato vacuum cleaner is able to return to the place it left off after recharging.
• task hierarchy: Tasks can often be broken down into subtasks to handle the
details. For instance, a high level task called recharge might consist of three
subtasks: navigate to the docking station, dock the robot, and charge the battery.
Similarly, the docking task could be broken down into: align with beacon; drive
forward; stop when docked.
• conditions: Both sensor data and internal programming variables can place
constraints on when and how a task will be executed. In ROS, these variables
often take the form of messages being published by various nodes. For
example, the battery monitoring task will subscribe to a diagnostics topic that
includes the current battery level. When a low battery level is detected, a check
battery condition should fire and the recharging task will begin execution while
other tasks are paused or aborted.
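To make the last point concrete, here is a minimal rospy sketch of such a battery-check
condition. It is an illustration only: the topic name battery_level, the Float32 message
type and the threshold value are assumptions standing in for whatever your robot actually
publishes.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

class BatteryCondition:
    def __init__(self, threshold=50.0):
        self.threshold = threshold
        self.low_battery = False
        # Topic name and message type are assumed for this sketch
        rospy.Subscriber('battery_level', Float32, self.battery_cb)

    def battery_cb(self, msg):
        # The "check battery" condition fires when the level drops below threshold
        self.low_battery = msg.data < self.threshold

if __name__ == '__main__':
    rospy.init_node('battery_condition_sketch')
    BatteryCondition()
    rospy.spin()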
You can verify that the simulator is working by opening another terminal and running
the command:
data: 91.0
---
data: 90.6666641235
---
data: 90.3333358765
---
etc
The node also defines a ROS service called set_battery_level that takes a floating
point value as an argument and sets the battery charge to that level. We will use this
service to simulate a recharge of the battery by setting the level to 100 or to simulate a
sudden depletion of the battery by setting a low value.
The set_battery_level service can be used as follows:
where the argument is a number between 0 and 100. To simulate a recharge, use the
set_battery_level service to set the level to a high number such as 100. To
simulate a sudden drop in the battery level, set the value to a lower level such as 30.
This will allow us to test how various nodes respond to a low battery condition.
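For example, assuming the service lives under a battery_simulator namespace (check
rosservice list for the actual name on your system), the calls would look something like:

$ rosservice call /battery_simulator/set_battery_level 100    # simulate a full recharge
$ rosservice call /battery_simulator/set_battery_level 30     # simulate a sudden drop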
The battery simulator can also be controlled using rqt_reconfigure. To change the
battery runtime or manually set the battery level, bring up rqt_reconfigure:
$ rosrun rqt_reconfigure rqt_reconfigure
The setup of these variables is taken care of by the file task_setup.py located in the
directory rbx2_tasks/src/rbx2_tasks. In each of our examples, we will import
this file to establish the basic environment. Some of the more important variables that
are set in task_setup.py are as follows:
• n_patrols (default: 2)
We also define the waypoint locations as well as the location of the docking station.
Any of these can be changed to suit your needs.
Finally, we define a move_base client and a cmd_vel publisher for controlling the
movement of the robot.
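A minimal sketch of how these two objects are typically created with rospy and actionlib
is shown below; the variable names are illustrative rather than the exact ones used in
task_setup.py.

import rospy
import actionlib
from geometry_msgs.msg import Twist
from move_base_msgs.msg import MoveBaseAction

rospy.init_node('task_setup_sketch')

# Action client for sending navigation goals to the move_base node
move_base = actionlib.SimpleActionClient('move_base', MoveBaseAction)
move_base.wait_for_server(rospy.Duration(60))

# Publisher for sending velocity commands directly to the base
cmd_vel_pub = rospy.Publisher('cmd_vel', Twist, queue_size=5)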
And to see the list of possible statuses returned by the result, run the command:
All of this can be done using ROS actions but one would have to create an action server
for each task, then coordinate the tasks with a number of if-then conditions or
interacting callbacks among the actions. While certainly possible, the resulting code
would be rather tedious. Fortunately, both SMACH and behavior trees help to make
these more complicated situations easier to program.
• Initialization:
◦ CHECK_BATTERY
◦ RECHARGE
◦ PATROL
◦ drive motors
where NAV_DOCK means navigate to the docking station. The PATROL task can also be
broken down into a sequence of navigation subtasks:
PATROL: NAV_0 → NAV_1 → NAV_2 → NAV_3
where we have indexed each navigation task by the waypoint number (one for each
corner of the square). The navigation tasks can then be implemented using standard
ROS MoveBaseAction goals and the navigation stack as we did in Volume 1.
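As a reminder of what such a goal looks like, here is a hedged sketch that builds one
MoveBaseGoal per waypoint. The square size, the identity orientations and the map frame
are assumptions for illustration; the actual poses are defined in task_setup.py.

import rospy
from geometry_msgs.msg import Pose, Point, Quaternion
from move_base_msgs.msg import MoveBaseGoal

rospy.init_node('waypoint_goals_sketch')

# Four corner poses of a 1 meter square (orientations left as identity for brevity)
waypoints = [Pose(Point(x, y, 0.0), Quaternion(0.0, 0.0, 0.0, 1.0))
             for (x, y) in [(0, 0), (1, 0), (1, 1), (0, 1)]]

nav_goals = []
for waypoint in waypoints:
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'       # assumed global frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose = waypoint
    nav_goals.append(goal)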
Before learning how to implement the Patrol Bot using SMACH or behavior trees, let's
review how it could be done using a standard script.
This check is done at the same frequency as the rate at which messages are received on
the battery level topic.
In the meantime, our main control loop might start off looking something like this:
while n_patrols < max_patrols:
if low_battery:
recharge()
else:
patrol()
At the start of each patrol, we check the battery level and recharge if necessary.
Otherwise, we start the patrol. Of course, this simple strategy won't work in practice
since the battery is likely to run down in between battery checks when the robot is part
way around the course. Let's see how we might correct this problem.
def patrol():
for location in waypoints:
nav_to_waypoint(location)
When we write it out this way, we see that we should move the battery check inside the
nav_to_waypoint() function:
def nav_to_waypoint(location):
if low_battery:
recharge()
else:
move_to(location)
At least now we are checking the battery level before we move on to each waypoint.
However, the move_to(location) function could take some time to complete
depending on how far away the next waypoint is located. So we really need to move the
battery check even deeper and place it inside the move_to() routine.
move_base.send_goal(goal, feedback_cb=self.nav_feedback_cb)
Now we are checking the battery status every time we receive a feedback message from
the MoveBaseAction server which should be frequent enough to avoid a dead battery.
The recharge() function will cancel the current move_base goal before sending a
new goal to the MoveBaseAction server to navigate the robot to the docking station for
charging.
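A minimal sketch of this feedback-driven battery check is shown below. The callback
name matches the snippet above, but the recharge() helper and the low_battery flag are
stand-ins for the fuller logic found in patrol_script.py.

import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction

class PatrolSketch:
    def __init__(self):
        self.low_battery = False
        self.move_base = actionlib.SimpleActionClient('move_base', MoveBaseAction)

    def nav_feedback_cb(self, feedback):
        # Called on every feedback message from the move_base action server.
        # If the battery has fallen below threshold, abandon the current
        # waypoint and head for the docking station instead.
        if self.low_battery:
            self.move_base.cancel_goal()
            self.recharge()

    def recharge(self):
        # Placeholder: send a new move_base goal for the docking station here
        rospy.loginfo("Low battery: navigating to the docking station")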
The entire script outlined here can be found in the file patrol_script.py located in
the rbx2_tasks/nodes directory.
Link to source: patrol_script.py
The script is fairly straightforward and will not be described in detail. However, you
can test it out as follows.
The robot should execute two loops around the square while monitoring its battery level.
Whenever the battery falls below the threshold defined in the script (50), the robot will
move to the circular docking station in the middle of the square. Once recharged
(battery level is set back to 100), the robot should continue its patrol where it left off.
While the camera is being panned back and forth, we still need to keep tabs on the
battery level but now we are no longer running a move_base goal. This means we need
to add a second battery check, this time to the head panning routine. If the battery level
falls below threshold at this point, we no longer need to cancel a navigation goal;
instead, we need to recenter the camera before moving to the docking station.
For each additional task, we need to add another battery check which results in
redundancy in the code and makes our overall task executive less modular.
Conceptually, the battery check should appear prominently near the top of our task
hierarchy and we should really only have to check the battery level once on each pass
through the task list.
Another shortcoming of the script approach is that we are making direct calls to ROS
actions, topics and services rather than through reusable wrappers that hide the common
details. For example, in addition to subscribing to the battery level topic, it is likely that
we will also subscribe to other topics generated by additional sensors such as a depth
camera, bump sensors, or a laser scanner. All these subscribers share a similar setup
pattern in terms of a topic name, topic type and callback function. Task frameworks like
SMACH define wrappers for ROS topics, services and actions that take care of the setup
details thereby allowing these objects to be added to or deleted from a given task
executive without having to repeat numerous lines of code.
We will also make use of a simple GUI package called python-easygui in some of
our examples so let's install that now as well:
$ sudo apt-get install python-easygui
Finally, there was a change in the ROS Indigo xdot package that causes the
smach_viewer utility to crash. As of this writing, this issue still has not been fixed in
the smach_viewer package so we need to install a modified version of the xdot
package instead as follows:
$ cd ~/catkin_ws/src
$ git clone https://github.com/jbohren/xdot.git
$ cd ~/catkin_ws
$ catkin_make
$ rospack profile
One of the simplest real-world examples of a state machine is a lock and key. The lock
can be in one of two states, LOCKED or UNLOCKED while turning the key clockwise or
counterclockwise is the input. (By convention, state names are written in all upper
case.) The possible state transitions are from LOCKED to UNLOCKED and from
UNLOCKED to LOCKED depending on the direction the key is turned. (For completeness,
we could also specify the transitions LOCKED to LOCKED and UNLOCKED to UNLOCKED.)
The outcomes are that the door can be opened or that it cannot.
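To make the example concrete, here is a tiny Python sketch of the lock-and-key machine
using a plain dictionary as the transition table. It assumes, arbitrarily, that turning the key
clockwise unlocks the door.

# (state, input) -> next state
transitions = {
    ('LOCKED', 'turn_clockwise'): 'UNLOCKED',
    ('LOCKED', 'turn_counterclockwise'): 'LOCKED',
    ('UNLOCKED', 'turn_clockwise'): 'UNLOCKED',
    ('UNLOCKED', 'turn_counterclockwise'): 'LOCKED',
}

state = 'LOCKED'
for key_turn in ['turn_clockwise', 'turn_counterclockwise', 'turn_clockwise']:
    state = transitions[(state, key_turn)]
    outcome = 'door can be opened' if state == 'UNLOCKED' else 'door cannot be opened'
    print(state, '->', outcome)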
The SMACH package provides a standalone Python library for creating state machines
and a ROS wrapper for integrating the library with ROS topics, services and actions.
The SMACH Wiki pages include a number of excellent tutorials, and the reader is
encouraged to work through as many of them as possible. At the very least, it is
essential to understand the Getting Started page. We will assume some familiarity with
these tutorials as we walk through our examples.
SMACH has a lot of features and may appear a little overwhelming at first. So we will
take each component in turn using examples that provide some visual feedback. But
first, a short review of what you learned in the online tutorials.
3.8.1 SMACH review
We assume that you have worked through at least some of the tutorials on the SMACH
Wiki so we will provide only a review of the essential concepts here. If you need more
details on a given class or function, you can look it up in the SMACH API. You can
also go directly to the source on GitHub.
SMACH states are Python classes that extend the smach.State class by overriding the
execute() method to return one or more possible outcomes. The execute method
can also take an optional argument defining a collection of userdata that can be used
to pass information between states. The actual computations performed by the state can
be essentially anything you want, but there are a number of predefined state types that
can save a lot of unnecessary code. In particular, the SimpleActionState class turns
a regular ROS action into a SMACH state. Similarly, the MonitorState wraps a ROS
topic and the ServiceState handles ROS services. The CBState uses the
@smach.cb_interface decorator to turn an ordinary Python callback into a state, and so on.
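As a quick reminder of the basic pattern, here is a minimal custom state. It is a generic
sketch rather than code from the book:

import rospy
import smach

class Pause(smach.State):
    def __init__(self, seconds=1.0):
        # Declare the possible outcomes of this state up front
        smach.State.__init__(self, outcomes=['succeeded', 'preempted'])
        self.seconds = seconds

    def execute(self, userdata):
        # Do the actual work of the state, then return one of the declared outcomes
        rospy.sleep(self.seconds)
        if self.preempt_requested():
            self.service_preempt()
            return 'preempted'
        return 'succeeded'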
A state machine itself must have an outcome and can therefore be a state in another state
machine. In this way, state machines can be nested to form a hierarchy. For example, a
state machine called "clean house" could contain the nested state machines "vacuum
living room", "mop kitchen", "wash tub", and so on. The higher level state machine
"clean house" will determine how the outcomes of the nested state machines map into
the overall outcome (e.g. "all tasks complete" or "not all tasks complete").
There are a number of predefined SMACH containers that can also save you a lot of
programming. The Concurrence container returns an outcome that depends on more
than one state and allows one state to preempt another. The Sequence container
automatically generates sequential transitions among the states that are added to it. And
the Iterator container allows you to loop through one or more states until some
condition is met. We will learn more about these containers as we need them.
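As one small illustration, the Sequence container can be used roughly as follows, reusing
the Pause state sketched above; the labels and outcomes are placeholders rather than code
from the book.

from smach import Sequence

# States added to a Sequence are automatically chained together on the
# connector outcome, here 'succeeded'
sq = Sequence(outcomes=['succeeded', 'aborted', 'preempted'],
              connector_outcome='succeeded')

with sq:
    Sequence.add('WARM_UP', Pause(1.0))
    Sequence.add('COOL_DOWN', Pause(2.0))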
3.8.2 Patrolling a square using SMACH
In Volume 1, we programmed our robot to navigate around a square using a variety of
methods including move_base actions. And earlier in this chapter we used a Python
script to do the same thing while also monitoring a simulated battery level. Let us now
see how we can use SMACH to accomplish the same goal.
In this section, we will leave out the battery check and simply move the robot around the
square. In the next section we will add in the battery check and enable the robot to
recharge when necessary.
One way to conceptualize the patrol problem is that we want the robot to be in one of
four states—namely, the poses that define the four corners of the square. Furthermore,
we want the robot to transition through these states in a particular order. Equivalently,
we could say that we want the robot to execute four tasks; namely, to move to each of
the corner locations in sequence.
NAV_STATE_1 → NAV_STATE_2
NAV_STATE_2 → NAV_STATE_3
NAV_STATE_3 → NAV_STATE_0
Here we have defined the last transition to take us back to the starting state which
therefore causes the whole state machine to repeat the cycle. If instead we want the
robot to stop after navigating the square just once, we can define another state
NAV_STATE_4 to have the same goal pose as NAV_STATE_0 and then change the
last transition above to:
NAV_STATE_3 → NAV_STATE_4
We could then terminate the machine (and therefore stop the robot) by adding a final
transition to the empty state:
NAV_STATE_4 → ''
In SMACH, the transitions depend on the outcome of the previous state. In the case of our
robot moving between locations, the outcome can be "succeeded", "aborted" or
"preempted".
These ideas are implemented in the script patrol_smach.py found in the
rbx2_tasks/nodes directory. To save space, we won't display the entire listing but
will focus on the key sections instead. (You can click on the script name above to go to
the online listing or bring up the script in your own editor.)
In the import block near the top of the script, we need to bring in the SMACH objects we
want to use:
from smach import StateMachine
from smach_ros import SimpleActionState, IntrospectionServer
We will need the StateMachine object to build the overall state machine, the
SimpleActionState to wrap our calls to move_base, and the
IntrospectionServer so we can use the smach_viewer.
As you know from the online tutorial, the SimpleActionState type allows us to wrap
a regular ROS action into a SMACH state. Assuming we have already assigned the corner
poses of the square to a list of waypoints, we create one such state for each waypoint and
collect them in a list:
nav_states.append(move_base_state)
First we create an empty list called nav_states to hold our navigation states, one for
each corner of the square. Next, we loop through each of the waypoints, creating a
standard MoveBaseGoal using that waypoint as the desired pose. We then turn this
goal into a SMACH state using the statement:
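The statement looks roughly like the sketch below, where nav_goal is the MoveBaseGoal
built for the current waypoint; the exec_timeout value is an assumption, so check the
patrol_smach.py source for the exact arguments used in the book.

import rospy
from smach_ros import SimpleActionState
from move_base_msgs.msg import MoveBaseAction

move_base_state = SimpleActionState('move_base', MoveBaseAction,
                                    goal=nav_goal,
                                    exec_timeout=rospy.Duration(10.0))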
# Add the states to the state machine with the appropriate transitions
with self.sm_patrol:
StateMachine.add('NAV_STATE_0', nav_states[0],
transitions={'succeeded':'NAV_STATE_1','aborted':'NAV_STATE_1'})
StateMachine.add('NAV_STATE_1', nav_states[1],
transitions={'succeeded':'NAV_STATE_2','aborted':'NAV_STATE_2'})
StateMachine.add('NAV_STATE_2', nav_states[2],
transitions={'succeeded':'NAV_STATE_3','aborted':'NAV_STATE_3'})
StateMachine.add('NAV_STATE_3', nav_states[3],
transitions={'succeeded':'NAV_STATE_0','aborted':'NAV_STATE_0'})
Next, we add each navigation state to the state machine together with a dictionary of
transitions from outcomes to the next state. The first argument in each line is an
arbitrary name we assign to the state so that the transitions have something to refer to.
By convention, these state names are written in upper case. For example, the first line
above adds the state nav_states[0] to the state machine and gives it the name
NAV_STATE_0. The transitions for this state tell us that if the state succeeds (the robot
makes it to the goal location), we want the next state to be NAV_STATE_1 which is
defined on the second line and represents the second goal pose stored in the state
nav_states[1].
Note how we also map the outcome 'aborted' to the next state. While this is optional
and not always desired, it works nicely with MoveBase goals since the base planner
might not always succeed in getting the robot to the current goal due to obstacles or time
constraints. In such cases, the outcome would be aborted and instead of simply
stopping the robot, we continue on to the next goal.
Note also how the final state transition returns to the first state, NAV_STATE_0. In this
case, the state machine and the robot will continue to loop around the square
indefinitely. If we want the robot to stop after the first loop, we can create the following
state machine instead:
with self.sm_patrol:
StateMachine.add('NAV_STATE_0', nav_states[0],
transitions={'succeeded':'NAV_STATE_1','aborted':'NAV_STATE_1'})
StateMachine.add('NAV_STATE_1', nav_states[1],
transitions={'succeeded':'NAV_STATE_2','aborted':'NAV_STATE_2'})
StateMachine.add('NAV_STATE_2', nav_states[2],
transitions={'succeeded':'NAV_STATE_3','aborted':'NAV_STATE_3'})
StateMachine.add('NAV_STATE_3', nav_states[3],
transitions={'succeeded':'NAV_STATE_4','aborted':'NAV_STATE_4'})
StateMachine.add('NAV_STATE_4', nav_states[0],
transitions={'succeeded':'','aborted':''})
The final state NAV_STATE_4 is assigned the same pose state as the starting point and
we map both outcomes into the empty state thus terminating the state machine and
stopping the robot. The script patrol_smach.py implements this version of the state
machine but places the execution in a loop that enables us to control the number of times
the robot completes its patrol as we show next.
Note that, unlike a regular script, we cannot simply place a while loop around the
actual states of our state machine since that would result in looping through the
construction of the states before they are even run.
We will also tend to add the following pair of lines to most of our scripts:
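A typical version of these lines (shown here as a sketch; the server name and path strings used in the actual script may differ):

intro_server = IntrospectionServer('patrol', self.sm_patrol, '/SM_ROOT')
intro_server.start()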
The SMACH Introspection server enables us to view the running state machine in the
graphical smach_viewer utility as we will see below.
Next, terminate any running instances of RViz, then bring it up with the nav_tasks
config file:
$ rosrun rviz rviz -d `rospack find rbx2_tasks`/nav_tasks.rviz
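Then, in another terminal, run the patrol_smach.py script:

$ rosrun rbx2_tasks patrol_smach.py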
You should see the robot move around the square three times and then stop. The view in
RViz should look something like this:
At the same time, you should see the following messages in the terminal you used to
launch the patrol_smach.py script:
Here we see that SMACH reports when each state transition occurs as well as the final
outcome of the state machine as a whole. The lines that report the "Success rate" are
something extra created by our patrol_smach.py script that we will examine in more
detail in the next section.
We can also see a graph of the running state machine by using the smach_viewer.py
utility. To fire up the viewer, open another terminal and run:
$ rosrun smach_viewer smach_viewer.py
The display in the SMACH viewer should look something like this:
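The "Success rate" messages mentioned above are generated by a result callback that the script attaches to each SimpleActionState through the result_cb argument. A sketch of the counting portion of that callback (the function name is illustrative and the counters are assumed to be initialized elsewhere in the script; GoalStatus comes from the actionlib_msgs package):

def move_base_result_cb(self, userdata, status, result):
    # Tally the outcome reported by move_base for this waypoint
    if status == GoalStatus.SUCCEEDED:
        self.n_succeeded += 1
    elif status == GoalStatus.ABORTED:
        self.n_aborted += 1
    elif status == GoalStatus.PREEMPTED:
        self.n_preempted += 1

The callback then logs the running success rate: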
try:
rospy.loginfo("Success rate: " + str(100.0 * self.n_succeeded /
(self.n_succeeded + self.n_aborted + self.n_preempted)))
except:
pass
The callback function takes three arguments: the current userdata (which we will
explore more later on), as well as the status and result returned by the underlying
ROS action (in this case, move_base). As it turns out, the move_base action does not
use the result field. Instead, it places the results in the status field which is why our
test condition above checks the value of the status field.
As you can see from the above code, our callback simply increments the counters for the
number of move_base attempts that are successful, aborted or preempted. We also
print out the percent success so far.
Suppose we want more control over the number of times the robot patrols the square. The
patrol_smach_iterator.py script, also found in the rbx2_tasks/nodes directory,
accomplishes this by wrapping the patrol state machine inside a SMACH Iterator
container. The Iterator is handed the collection to iterate over through the it argument
(here a lambda returning range(0, self.n_patrols)), stores the current item in the
userdata key named by it_label, and returns the exhausted_outcome (here
'succeeded') when the iteration completes. The key part of the construction looks like
this:
with self.sm:
    # Initialize the iterator
    self.sm_patrol_iterator = Iterator(outcomes=['succeeded', 'preempted', 'aborted'],
                                       input_keys=[],
                                       it=lambda: range(0, self.n_patrols),
                                       output_keys=[],
                                       it_label='index',
                                       exhausted_outcome='succeeded')

    with self.sm_patrol_iterator:
        # Initialize the patrol state machine
        self.sm_patrol = StateMachine(outcomes=['succeeded', 'aborted', 'preempted', 'continue'])

        # Add the states to the state machine with the appropriate transitions
        with self.sm_patrol:
            StateMachine.add('NAV_STATE_0', nav_states[0],
                transitions={'succeeded':'NAV_STATE_1', 'aborted':'NAV_STATE_1', 'preempted':'NAV_STATE_1'})
            StateMachine.add('NAV_STATE_1', nav_states[1],
                transitions={'succeeded':'NAV_STATE_2', 'aborted':'NAV_STATE_2', 'preempted':'NAV_STATE_2'})
            StateMachine.add('NAV_STATE_2', nav_states[2],
                transitions={'succeeded':'NAV_STATE_3', 'aborted':'NAV_STATE_3', 'preempted':'NAV_STATE_3'})
            StateMachine.add('NAV_STATE_3', nav_states[3],
                transitions={'succeeded':'NAV_STATE_4', 'aborted':'NAV_STATE_4', 'preempted':'NAV_STATE_4'})
            StateMachine.add('NAV_STATE_4', nav_states[0],
                transitions={'succeeded':'continue', 'aborted':'continue', 'preempted':'continue'})
Next we create the patrol state machine as we have done before with two differences.
First, the state machine is tucked inside the "with self.sm_patrol_iterator"
statement. Second, we have added a new outcome labeled 'continue' to both the
overall patrol machine and the final state, NAV_STATE_4. Why we do this will become
clear in the final lines below.
Iterator.set_contained_state('PATROL_STATE', self.sm_patrol,
                             loop_outcomes=['continue'])

This call makes the patrol state machine the contained state of the Iterator and declares
'continue' as the outcome that causes the Iterator to start the next iteration rather than
exit. The final step is to add the Iterator itself as a state in the overall state machine, as
sketched below.
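That final line looks something like the following sketch (the state label and outcome map are illustrative):

StateMachine.add('PATROL_ITERATOR', self.sm_patrol_iterator,
                 {'succeeded':'succeeded', 'aborted':'aborted', 'preempted':'preempted'})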
To test the script, make sure you have the fake TurtleBot up and running as in the
previous sections as well as RViz with the nav_tasks.rviz config file, then run the
iterator script:
$ rosrun rbx2_tasks patrol_smach_iterator.py
The result should be the same as before: the robot should make two complete patrols of
the square.
To test the callback version of the script, make sure you have the fake TurtleBot up and
running as in the previous sections as well as RViz with the nav_tasks.rviz config
file, then run:
$ rosrun rbx2_tasks patrol_smach_callback.py
Unlike the previous scripts, this one will run indefinitely until you abort it with Ctrl-C.
sm_battery_monitor = StateMachine(outcomes=[])

with sm_battery_monitor:
    # Add a MonitorState to subscribe to the battery level topic
    StateMachine.add('MONITOR_BATTERY',
                     MonitorState('battery_level', Float32, self.battery_cb),
                     transitions={'invalid':'RECHARGE_BATTERY',
                                  'valid':'MONITOR_BATTERY',
                                  'preempted':'MONITOR_BATTERY'})
The first state we add to the state machine is called MONITOR_BATTERY and it uses the
SMACH MonitorState to keep tabs on the battery level. The arguments to the
MonitorState constructor are the topic we want to monitor, the message type for that
topic and a callback function (here called self.battery_cb) that is described below.
The key names in the transitions dictionary come from the pre-defined outcomes for the
MonitorState type, which are valid, invalid and preempted. Note, however, that
inside the callback function the valid and invalid outcomes are represented by the
return values True and False, respectively.
StateMachine.add('RECHARGE_BATTERY',
                 ServiceState('battery_simulator/set_battery_level',
                              SetBatteryLevel, request=100),
                 transitions={'succeeded':'MONITOR_BATTERY',
                              'aborted':'MONITOR_BATTERY',
                              'preempted':'MONITOR_BATTERY'})
The second state we add to the state machine is the RECHARGE_BATTERY state that uses
the SMACH ServiceState. The arguments to the ServiceState constructor are the
service name, the service type, and the request value to send to the service. The
SetBatteryLevel service type is defined in the rbx2_msgs package, which is why we
imported rbx2_msgs.srv at the top of our script. Setting the request value to 100
essentially performs a simulated recharge of the battery to full strength. The
ServiceState returns the traditional outcomes of succeeded, aborted and
preempted. We map all outcomes back to the MONITOR_BATTERY state.
The final part of the script that requires explaining is the callback function for our
MonitorState:
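A sketch of this callback, assuming a low_battery_threshold value set elsewhere in the script (the log message is illustrative):

def battery_cb(self, userdata, msg):
    rospy.loginfo("Battery Level: " + str(msg.data))
    if msg.data < self.low_battery_threshold:
        return False    # maps to the 'invalid' outcome
    else:
        return True     # maps to the 'valid' outcome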
When you run the script, the first INFO message indicates that the state machine is
initialized in the MONITOR_BATTERY state. The next series of messages shows a
countdown of the battery level, which is the result of the rospy.loginfo() statements
in the battery_cb function described above. When the level falls below threshold,
MONITOR_BATTERY returns an outcome of invalid which triggers a state transition to
the RECHARGE_BATTERY state. Recall that RECHARGE_BATTERY calls the
set_battery_level service and sets the battery level back to 100. It then returns an
outcome of succeeded which triggers a state transition back to MONITOR_BATTERY. The
process then continues indefinitely.
Note that a transition callback always has access to the userdata, the list of
active_states, and any callback arguments that were passed when the callback was
registered. In particular, the active_states variable holds the current state of the state
machine and we will use this in the next section to determine where the robot was when it
was interrupted to recharge its battery.
3.8.9 Concurrent tasks: Adding the battery check to the patrol routine
Now that we know how to patrol the square and check the battery, it is time to put the
two together. We want a low battery signal to take top priority so that the robot stops its
patrol and navigates to the docking station. Once recharged, we want the robot to
continue its patrol beginning with the last waypoint it was heading for before being
interrupted.
SMACH provides the Concurrence container for running tasks in parallel and enabling
one task to preempt the other when a condition is met. So we will set up our new state
machine with a Concurrence container which will hold the navigation machine and the
battery check machine. We will then set up the container so that the battery check can
preempt navigation when the battery level falls below threshold.
Before looking at the code, let's try it out. Terminate the monitor_battery.py node
if it is still running from an earlier session. (It may take a while to respond to Ctrl-C
before the node eventually dies.)
If you don't already have it running, bring up the SMACH viewer with the command:
$ rosrun smach_viewer smach_viewer.py
Bring up RViz with the nav_tasks config file if it is not already running:
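$ rosrun rviz rviz -d `rospack find rbx2_tasks`/nav_tasks.rviz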
Finally, make sure you can see the RViz window in the foreground, then run the
patrol_smach_concurrence.py script:
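$ rosrun rbx2_tasks patrol_smach_concurrence.py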
You should see the robot move around the square two times and then stop. Whenever
the battery falls below threshold (set by default to 50 in the task_setup.py file), the
robot will interrupt its patrol and head to the docking station for a recharge. Once
recharged, it will continue its patrol where it left off.
While the script is running, you can also view the state machine in the SMACH viewer.
The image should look something like the following:
Let us now examine the code that makes this work. The following is a summary of the
overall steps:
• create a state machine called sm_nav that takes care of moving the robot from
one waypoint to the next

• create a second state machine called sm_recharge that moves the robot to the
docking station and performs a recharge

• create a third machine, a Concurrence container called nav_patrol, that runs
the sm_nav machine in parallel with a MonitorState that watches the battery
level

• finally, create a fourth state machine called sm_top that includes the
nav_patrol and sm_recharge machines as well as a STOP state that allows us
to terminate the entire patrol once we have completed the specified number of
loops.

The resulting hierarchy of state machines looks like this:

• sm_top
    ◦ nav_patrol
        ▪ monitor_battery
        ▪ sm_nav
    ◦ sm_recharge
self.sm_nav.register_transition_cb(self.nav_transition_cb, cb_args=[])
Next we register a callback function on the sm_nav state machine that fires on a state
transition. The callback function looks like this:
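A sketch of the callback, using the standard transition callback signature described earlier:

def nav_transition_cb(self, userdata, active_states, *cb_args):
    # Remember the state that was active when the transition fired
    self.last_nav_state = active_states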
As you can see, the function simply stores the last active state in the variable
self.last_nav_state. This will allow us to continue the patrol where we left off
after the robot completes a recharge.
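The top-level state machine that ties everything together is constructed roughly as follows (a sketch based on the transitions described below; the exact outcome list declared for sm_top may differ in the actual script):

self.sm_top = StateMachine(outcomes=['succeeded', 'aborted', 'preempted'])

with self.sm_top:
    StateMachine.add('PATROL', self.nav_patrol,
                     transitions={'succeeded':'PATROL', 'recharge':'RECHARGE', 'stop':'STOP'})
    StateMachine.add('RECHARGE', self.sm_recharge,
                     transitions={'succeeded':'PATROL'})
    StateMachine.add('STOP', Stop(),
                     transitions={'succeeded':''})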
Here we see that the overall state machine is composed of three components: the
nav_patrol machine labeled 'PATROL', the sm_recharge machine labeled
'RECHARGE', and a custom state labeled 'STOP' (defined by the Stop class shown below).
Furthermore, the transition table for the nav_patrol machine indicates that an
outcome of 'succeeded' should lead back to the PATROL state while an outcome of
'recharge' should transition to the RECHARGE state and an outcome of 'stop' should
fire up the STOP state. The RECHARGE state has only one transition defined which is to
return to the PATROL state if the recharge outcome is 'succeeded'. Finally, the STOP
state transitions to an empty state thereby terminating the entire state machine.
Let's now break down each of these component states and state machines. The last state
labeled STOP is defined earlier in the script and looks like this:
class Stop(State):
    def __init__(self):
        State.__init__(self, outcomes=['succeeded','aborted','preempted'])

    def execute(self, userdata):
        # The exact message printed by the script may differ
        rospy.loginfo("Shutting down the state machine")
        return 'succeeded'
Here we create a SMACH state that simply prints out a message and returns 'succeeded'.
Its only function is to give us a state we can transition to when we have finished all
patrols.
with self.sm_recharge:
StateMachine.add('NAV_DOCKING_STATION', nav_docking_station,
transitions={'succeeded':'RECHARGE_BATTERY'})
StateMachine.add('RECHARGE_BATTERY',
ServiceState('battery_simulator/set_battery_level', SetBatteryLevel,
100, response_cb=self.recharge_cb), transitions={'succeeded':''})
The first state gets the robot to the docking station using the nav_docking_station
SimpleActionState and the second state simulates a recharge by using a SMACH
ServiceState to call the set_battery_level service with a value of 100 (full
charge).
Finally, the PATROL state is defined by the nav_patrol state machine that looks like
this:
with self.nav_patrol:
Concurrence.add('SM_NAV', self.sm_nav)
Concurrence.add('MONITOR_BATTERY', MonitorState("battery_level", Float32,
self.battery_cb))
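For completeness, the Concurrence container itself is constructed earlier in the script along these lines (a sketch; Concurrence comes from the smach package and the child termination callback name here is illustrative):

self.nav_patrol = Concurrence(outcomes=['succeeded', 'recharge', 'stop'],
                              default_outcome='succeeded',
                              child_termination_cb=self.concurrence_child_termination_cb,
                              outcome_cb=self.concurrence_outcome_cb)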
The MonitorState state takes three arguments: the topic we want to subscribe to, the
message type for that topic, and a callback function that gets called whenever a message
is received on the subscribed topic. In other words, it wraps a standard ROS subscribe
operation. In our case, the callback function, self.battery_cb looks like this:
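It is essentially the same as the battery callback sketched earlier for the battery monitor state machine:

def battery_cb(self, userdata, msg):
    if msg.data < self.low_battery_threshold:
        return False    # 'invalid' outcome: battery is low
    else:
        return True     # 'valid' outcome: battery is OK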
The final question is: how does a low battery level condition preempt the current
navigation task, start the recharge task, then resume the previous navigation task? You'll
notice that the Concurrence container defines two callback functions: the
child_termination_cb, which fires when either child state terminates, and the
outcome_cb, which fires when all child states have terminated. Let's look at
these in turn.
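A sketch of the child termination callback, based on the description that follows (the function name and log message are illustrative; UserData comes from the smach package):

def concurrence_child_termination_cb(self, outcome_map):
    # If the navigation machine reached the current waypoint, stop the concurrence
    if outcome_map['SM_NAV'] == 'succeeded':
        return True
    # If the battery monitor reported 'invalid' (low battery), remember where we
    # were and stop the concurrence so the recharge can begin
    if outcome_map['MONITOR_BATTERY'] == 'invalid':
        rospy.loginfo("LOW BATTERY! NEED TO RECHARGE...")
        self.sm_nav.set_initial_state(self.last_nav_state, UserData())
        return True
    return False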
The child termination callback fires when either the sm_nav machine or the battery
MonitorState returns so we have to check for either condition. The outcome map is
indexed by the name of the state machines that make up the Concurrence container
which in our case are SM_NAV and MONITOR_BATTERY. So first we check the status of
the SM_NAV machine and if it has succeeded (i.e. the robot just made it to a waypoint)
we return True. Otherwise, we check the status of the battery monitor state. Recall that
a MonitorState maps True into valid and False into invalid. If we see an
outcome of invalid (the battery has fallen below threshold), we reset the initial state of
the sm_nav machine to be the last successful navigation goal and return True.
The outcome callback is executed when both the sm_nav machine and the
MonitorState have terminated. In our case, the callback function looks like this:
def concurrence_outcome_cb(self, outcome_map):
    # If the battery is below threshold, return the 'recharge' outcome
    if outcome_map['MONITOR_BATTERY'] == 'invalid':
        return 'recharge'
    # Otherwise, if the last nav goal succeeded, return 'succeeded' or 'stop'
    elif outcome_map['SM_NAV'] == 'succeeded':
        self.patrol_count += 1
        rospy.loginfo("FINISHED PATROL LOOP: " + str(self.patrol_count))
        # If we have not completed all patrols, start again at the beginning
        # (this completion is a sketch; the actual script may reset the nav
        # machine differently)
        if self.patrol_count < self.n_patrols:
            self.sm_nav.set_initial_state(['NAV_STATE_0'], UserData())
            return 'succeeded'
        # Otherwise, we are finished patrolling, so return 'stop'
        else:
            return 'stop'
As with the child termination callback, we test the outcome map for each of the
Concurrence states. In this case we test the MONITOR_BATTERY outcome first and if it
is invalid (the battery is low), we return an outcome of 'recharge'. Recall that in our
top-level state machine, this outcome will cause a transition to the RECHARGE state
which in turn is defined by the sm_recharge state machine that will move the robot to
the docking station. Otherwise, we test the outcome of the SM_NAV state and, if it
returns 'succeeded', meaning we have arrived at a waypoint, we then test to see if we
have finished the designated number of patrols and can stop or keep patrolling.
So far, the robot has always patrolled the waypoints in a fixed order. SMACH also allows
states to share data through userdata, which we can use to have the robot visit the
waypoints in random order instead. The random_patrol_smach.py script in the
rbx2_tasks/nodes directory does this with two custom states: a PickWaypoint()
state that chooses the next waypoint at random, and a Nav2Waypoint() state that moves
the robot to the chosen waypoint, with the choice passed between them through
userdata. Let's look at the key lines of the script.
self.sm_patrol = StateMachine(outcomes=['succeeded','aborted','preempted'])
Here we create the overall patrol state machine as usual with the standard set of
outcomes.
self.sm_patrol.userdata.waypoints = self.waypoints
To make the pre-defined set of waypoints available to the state machine and any states
we subsequently add to it, we assign the array to a userdata variable. This variable
can have any name you like but it makes sense to simply call it waypoints in this case.
with self.sm_patrol:
StateMachine.add('PICK_WAYPOINT', PickWaypoint(),
transitions={'succeeded':'NAV_WAYPOINT'},
remapping={'waypoint_out':'patrol_waypoint'})
StateMachine.add('NAV_WAYPOINT', Nav2Waypoint(),
transitions={'succeeded':'PICK_WAYPOINT',
'aborted':'PICK_WAYPOINT',
'preempted':'PICK_WAYPOINT'},
remapping={'waypoint_in':'patrol_waypoint'})
The first state, which we call PICK_WAYPOINT, wraps the custom PickWaypoint()
state described below. It selects a random waypoint index and stores it in the output key
waypoint_out, which we remap to the intermediate userdata variable
patrol_waypoint. Next we add the second state, which we call NAV_WAYPOINT and
which represents the custom state Nav2Waypoint(), also defined below. As we will
see, this state looks for a userdata variable named waypoint_in that tells the state
where to navigate the robot. Note that we don't pass this variable to Nav2Waypoint()
as an argument; instead, we use a remapping dictionary for the state that maps the
intermediary patrol_waypoint variable to waypoint_in.
The last two pieces to examine are the custom states PickWaypoint() and
Nav2Waypoint(). Here is the PickWaypoint() code:
class PickWaypoint(State):
    def __init__(self):
        State.__init__(self, outcomes=['succeeded'], input_keys=['waypoints'],
                       output_keys=['waypoint_out'])

    def execute(self, userdata):
        # Pick a random index into the waypoints list
        # (randrange comes from Python's random module)
        waypoint_out = randrange(len(userdata.waypoints))
        userdata.waypoint_out = waypoint_out
        return 'succeeded'
As you can see, this state has two arguments defining a set of input_keys and
output_keys. For this state, there is only one input_key called 'waypoints' and
one output_key called 'waypoint_out'. The execute() function automatically gets
passed the userdata object as an argument. So first we pick a random number from 0
to the length of the userdata.waypoints array, then we assign it to the
userdata.waypoint_out variable.
Finally, the Nav2Waypoint() state wraps a call to the move_base action. In its
__init__() function it prepares a MoveBaseGoal in the map frame:

self.goal = MoveBaseGoal()
self.goal.target_pose.header.frame_id = 'map'

and in its execute() function it fills in the goal pose from the waypoint_in userdata
key, sends the goal to move_base, and checks for preemption along the way:

if self.preempt_requested():
    self.service_preempt()
    return 'preempted'
To try out the script, run the following commands. If you don't already have the fake
TurtleBot running, bring it up now:
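$ roslaunch rbx2_tasks fake_turtlebot.launch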
If you don't already have RViz running with the nav_tasks config file, bring it up
with:
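$ rosrun rviz rviz -d `rospack find rbx2_tasks`/nav_tasks.rviz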
Finally, make sure you can see the RViz window in the foreground, then run the
random_patrol_smach.py script:
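$ rosrun rbx2_tasks random_patrol_smach.py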
You should see the robot move randomly from waypoint to waypoint indefinitely.
For bonus credit, the reader is encouraged to write a new script that combines the
random patrol with the battery check using a SMACH Concurrence container.
3.8.12 Subtasks and hierarchical state machines
As we have seen, SMACH enables us to nest state machines inside each other. This
allows us to break down a complex set of goals into primary tasks and subtasks. For
example, a house cleaning robot might be programmed to navigate from one room to the
next while carrying out a number of subtasks specific to each room. The procedure
might go something like this:
• START_LOCATION → LIVING_ROOM
• VACUUM_FLOOR → DUST_FURNITURE → CLEAN_WINDOWS
• LIVING_ROOM → KITCHEN
• MOP_FLOOR → DO_DISHES
• KITCHEN → BATHROOM
• WASH_TUB → MOP_FLOOR
• BATHROOM → HALLWAY
If you don't already have RViz running with the nav_tasks config file, fire it up now:
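$ rosrun rviz rviz -d `rospack find rbx2_tasks`/nav_tasks.rviz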
Finally, make sure you can see the RViz window in the foreground, then run the
clean_house_smach.py script:
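$ rosrun rbx2_tasks clean_house_smach.py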
You should see the robot move from room to room and perform various cleaning tasks.
You can also watch the progress in graphical form in the smach_viewer window. As
each cleaning task is completed, a pop-up window will appear. Click OK to allow the
robot to continue. (The pop-up windows are created by the easygui Python module and
are used just to illustrate how we can call other programs from within a state machine.)
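The living room machine is constructed roughly like this (a sketch; the timer value passed to the task is illustrative):

sm_living_room = StateMachine(outcomes=['succeeded','aborted','preempted'])

with sm_living_room:
    StateMachine.add('VACUUM_FLOOR', VacuumFloor('living_room', 5),
                     transitions={'succeeded':'','aborted':'','preempted':''})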
Here we create a state machine for the living room and then add a state called
VACUUM_FLOOR. This state is defined in terms of a custom class VacuumFloor()
which we will describe in detail below.
Once we have a state machine for each room containing the various cleaning tasks, we
put them together into the overall state machine as follows:
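A sketch of that construction, assuming the navigation state for each room was created earlier as a SimpleActionState and stored in a dictionary called nav_states (the names used here are illustrative):

sm_clean_house = StateMachine(outcomes=['succeeded','aborted','preempted'])

with sm_clean_house:
    # Navigate to the hallway, then head for the living room
    StateMachine.add('START', nav_states['hallway'],
                     transitions={'succeeded':'LIVING_ROOM','aborted':'LIVING_ROOM'})
    # Navigate to the living room, then run its cleaning tasks
    StateMachine.add('LIVING_ROOM', nav_states['living_room'],
                     transitions={'succeeded':'LIVING_ROOM_TASKS','aborted':'LIVING_ROOM_TASKS'})
    StateMachine.add('LIVING_ROOM_TASKS', sm_living_room,
                     transitions={'succeeded':'KITCHEN','aborted':'KITCHEN'})
    # ... and similarly for the KITCHEN, BATHROOM and HALLWAY states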
The construction of the clean_house state machine reads somewhat like a list of
chores:
• Add a state called START that navigates the robot to the hallway, then
transition to a state called LIVING_ROOM.
• Add a state called LIVING_ROOM that navigates the robot to the living room and
once there, transitions to a state called LIVING_ROOM_TASKS.
• Add a state called KITCHEN that navigates the robot to the kitchen and once
there, transition to a state called KITCHEN_TASKS
• etc.
With the overall state machine constructed, we can later add tasks to any of the
individual room state machines without having to modify other parts of the code. For
example, the bathroom state machine actually has two subtasks:
with sm_bathroom:
StateMachine.add('SCRUB_TUB', ScrubTub('bathroom', 7),
transitions={'succeeded':'MOP_FLOOR'})
StateMachine.add('MOP_FLOOR', MopFloor('bathroom', 5),
transitions={'succeeded':'','aborted':'','preempted':''})
Let's now turn to the simulated cleaning tasks themselves. Recall that we added the
VACUUM_FLOOR state to the sm_living_room state machine using the following line of
code:
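StateMachine.add('VACUUM_FLOOR', VacuumFloor('living_room', 5),   # timer value is illustrative
                 transitions={'succeeded':'','aborted':'','preempted':''})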
Here we have used SMACH's ability to define a state in terms of another class defined
elsewhere in the script. As you will recall from running the simulation in the ArbotiX
simulator, each "cleaning task" is represented by some scripted motions of the robot that
are meant to look like that action. So we need a state that can essentially run arbitrary
code to produce this kind of behavior.
SMACH enables us to extend the generic State class and override the default execute()
function with whatever code we like. The code below defines the custom state class
used in the clean_house_smach.py script to represent the VacuumFloor state:
class VacuumFloor(State):
    def __init__(self, room, timer):
        State.__init__(self, outcomes=['succeeded','aborted','preempted'])
        self.task = 'vacuum_floor'
        self.room = room
        self.timer = timer
        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist)
    def execute(self, userdata):   # the motion loop below is a reconstructed sketch
        cmd_vel_msg = Twist()
        cmd_vel_msg.linear.x = 0.05
        for i in range(self.timer):
            if self.preempt_requested():
                self.service_preempt()
                return 'preempted'
            self.cmd_vel_pub.publish(cmd_vel_msg)
            cmd_vel_msg.linear.x *= -1
            rospy.sleep(1)
        self.cmd_vel_pub.publish(Twist())
        message = "Finished vacuuming the " + str(self.room) + "!"
        easygui.msgbox(message, title=self.task)
        update_task_list(self.room, self.task)
        return 'succeeded'
The VacuumFloor state takes a room name and a timer value as arguments. We then call
the __init__() function of the generic State class and define three possible outcomes:
succeeded, aborted and preempted. The generic State class assumes the callback
function is called execute(), which in turn receives the standard userdata argument.
Otherwise, we are free to run nearly any code we want here as long as we return one of
the outcomes listed earlier.
In the example above, we use a cmd_vel publisher to move the robot back and forth
while counting down the timer. Note how we also check for a preempt request using
the preempt_requested() function which is inherited from the State object. If a
preempt request is received, the state stops what it is doing and returns the outcome
preempted.
If the execute() function is allowed to proceed all the way to the end, the global
function update_task_list() is used to check this task off the list of chores and an
outcome of succeeded is returned. We also use the easygui module to display a pop-
up message that has to be clicked before the robot will continue. Simply comment out
this line if you want the robot to perform the task without interruption.
The other "cleaning" tasks such as Mop and Scrub are defined in a similar manner in the
clean_house_smach.py script.
Behavior trees break down complex tasks by branching them into a set of conditions and
subtasks. The tree is always executed from top to bottom and left to right. For a given
level of the tree, nodes on the left have higher priority than nodes on the right. In the
tree shown above, this means that the STAY_HEALTHY branch of the tree will always be
run before the PATROL branch. Similarly, the CHECK_BATTERY task will always be
executed before the RECHARGE behavior.
Behaviors or conditions higher in the tree are more abstract such as "stay healthy" while
those at lower levels are more concrete such as "navigate to the docking station". In
fact, only the terminal nodes in each branch of the tree result in actual behavior.
Terminal nodes are also known as "leaf nodes" and are colored red in the diagram
above. We might also refer to them as "action nodes". The blue nodes are interior (or
composite) nodes; they do not produce behavior themselves but direct the flow of
execution to their child nodes.
• Nodes represent tasks and conditions rather than states. The terms "task" and
"behavior" will be used interchangeably. As will see later on, checking a
condition can be thought of as just another kind of task.
• A task or behavior is generally a piece of code that runs for one or more cycles
and produces a result of either SUCCESS or FAILURE. If a task takes more than
one cycle to complete, it will have a status of RUNNING before returning its
result. A task's current status is always passed up to its parent task in the tree.
• Tasks or behaviors can also represent composite behaviors whose status depends
on two or more child behaviors. The two most commonly used composite
behaviors are selectors and sequences.
◦ A selector attempts to execute its first child behavior and if it succeeds, the
selector also succeeds. Otherwise, the selector attempts to execute the next
child behavior and so on. If all child behaviors fail, the selector also fails.
In this way, a selector is like a problem solver: first try one solution and if
that fails, try the next solution, and so on.
◦ A sequence attempts to run all of its child tasks one after the other. If any
child task fails, then the sequence fails. If the sequence is executed without
failure all the way to the last child behavior, then the sequence succeeds.
• When a task has more than one subtask, the subtasks are prioritized based on
their list order; i.e. from left to right in a standard tree diagram or from top to
bottom when the tree is written out as a bulleted list.
• The execution of a behavior tree always begins at the root node and proceeds in
depth-first order beginning with the left most branch. When a given node is run,
the result (SUCCESS, FAILURE or RUNNING) is passed up to its parent. Only
leaf nodes result directly in the production of a behavior or the checking of a
condition. Interior nodes are used to direct the flow of processing to their child
nodes using selectors and sequences. An important property of behavior trees
is that execution begins again at the root of the tree on every "tick" of the clock.
This ensures that any change in the status of higher priority behaviors will result
in the appropriate change in execution flow even if other nodes are already
running.
• Behaviors can be augmented with decorators that modify the results of the
behavior. For example, we might want to execute a sequence of subtasks but
ignore failures so that the sequence continues through to the end even when
individual subtasks do not succeed. For this we could use an "ignore failure"
decorator that turns a result of FAILURE into SUCCESS for the task it decorates.
We will see an example of this in the next section where we revisit the Patrol
Bot scenario.
• Many behavior trees make use of a global "black board" that can store data
about the environment as well as the results of earlier behaviors. Individual
nodes can read and write to the black board.
3.9.3 Building a behavior tree
Constructing a behavior tree can be done either from the top down or from the bottom
up or even some combination of the two. It is often easier to begin with the root node
and work our way down, beginning with the more abstract composite behaviors and then
adding the more concrete conditions and action tasks.
For the Patrol Bot example, the root node will have two composite child behaviors:
STAY_HEALTHY and PATROL. Using a bulleted list, we can represent our tree so far like
this:

• BEHAVE
    • STAY_HEALTHY
    • PATROL

where we have used the label BEHAVE for the root node.
Note that the priority of a child behavior is determined by its order in the list since we
will always run through the child nodes in list order. So in this case, the
STAY_HEALTHY behavior has a higher priority than the PATROL behavior. After all, if
we let the battery run down, the robot will not be able to continue its patrol.
Next, let's flesh out these two high level behaviors. The STAY_HEALTHY task will
consist of two child tasks: CHECK_BATTERY and RECHARGE. It could also include
checking servos for overheating, or watching for excessive current to the drive motors
(perhaps the robot is stuck on something). The PATROL task will navigate to the four
corners of a square using four child tasks. So the behavior tree now looks like this:
• BEHAVE
    • STAY_HEALTHY
        • CHECK BATTERY
        • RECHARGE
    • PATROL
        • NAV_0
        • NAV_1
        • NAV_2
        • NAV_3
Once again, list order is important. For example, we want to check the battery before
we decide to recharge, and we want to patrol the corners of the square in a particular
order.
Finally, the RECHARGE task consists of two child behaviors: navigating to the docking
station and charging the robot. So our final behavior tree looks like this:
• BEHAVE
    • STAY_HEALTHY
        • CHECK BATTERY
        • RECHARGE
            • NAV_DOCK
            • CHARGE
    • PATROL
        • NAV_0
        • NAV_1
        • NAV_2
        • NAV_3
There is nothing stopping you from creating any number of meta-behaviors of your own
for use in your behavior tree.
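Turning to the pi_trees library itself, the core classes live in the pi_trees_lib library. The status values can be sketched as a simple class of constants (the exact numeric values are unimportant):

class TaskStatus(object):
    """ A class for enumerating task statuses """
    FAILURE = 0
    SUCCESS = 1
    RUNNING = 2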
First we define the possible task status values using the class TaskStatus as a kind of
enum. One can include additional status values such as ERROR or UNKNOWN but these
three seem to be sufficient for most applications.
class Task(object):
    """ The base Task class """
    def __init__(self, name, children=None, *args, **kwargs):
        self.name = name
        self.status = None
        if children is None:
            children = []
        self.children = children

    def run(self):
        pass

    def reset(self):
        for c in self.children:
            c.reset()

    def add_child(self, c):
        self.children.append(c)

    def remove_child(self, c):
        self.children.remove(c)

    def prepend_child(self, c):
        self.children.insert(0, c)

    def insert_child(self, c, i):
        self.children.insert(i, c)

    def get_status(self):
        return self.status

    def announce(self):
        print("Executing task " + str(self.name))
The base Task class defines the core object of the behavior tree. At a minimum it must
have a name and a run() function that in general will not only perform some behavior
but also return its status. The other key functions are add_child() and
remove_child() that enable us to add or remove sub-tasks to composite tasks such as
selectors and sequences (described below). You can also use the prepend_child() or
insert_child() functions to add a sub-task with a specific priority relative to the
other tasks already in the list.
When creating your own tasks, you will override the run() function with code that
performs your task's actions. It will then return an appropriate task status depending on
the outcome of the action. This will become clear when we look at the Patrol Bot
example later on.
The reset() function is useful when we want to zero out any counters or other
variables internal to a particular task and its children.
class Selector(Task):
    """
    Run each subtask in sequence until one succeeds or we run out of tasks.
    """
    def __init__(self, name, *args, **kwargs):
        super(Selector, self).__init__(name, *args, **kwargs)

    def run(self):
        for c in self.children:
            c.status = c.run()
            if c.status != TaskStatus.FAILURE:
                return c.status
        return TaskStatus.FAILURE
A Selector runs each child task in list order until one succeeds or until it runs out of
subtasks. Note that if a child task returns a status of RUNNING, the selector also returns
RUNNING until the child either succeeds or fails.
class Sequence(Task):
"""
Run each subtask in sequence until one fails or we run out of tasks.
"""
def __init__(self, name, *args, **kwargs):
super(Sequence, self).__init__(name, *args, **kwargs)
def run(self):
for c in self.children:
c.status = c.run()
if c.status != TaskStatus.SUCCESS:
return c.status
return TaskStatus.SUCCESS
A Sequence runs each child task in list order until one fails or until it runs out of
subtasks. Note that if a child task returns a status of RUNNING, the sequence also returns
RUNNING until the child either succeeds or fails.
class Iterator(Task):
    """
    Iterate through all child tasks ignoring failure.
    """
    def __init__(self, name, *args, **kwargs):
        super(Iterator, self).__init__(name, *args, **kwargs)

    def run(self):
        for c in self.children:
            c.status = c.run()
            # Keep going even if a child fails, but pass a RUNNING status up the tree
            if c.status == TaskStatus.RUNNING:
                return c.status
        return TaskStatus.SUCCESS
class ParallelOne(Task):
"""
A parallel task runs each child task at (roughly) the same time.
The ParallelOne task returns success as soon as any child succeeds.
"""
def __init__(self, name, *args, **kwargs):
super(ParallelOne, self).__init__(name, *args, **kwargs)
def run(self):
for c in self.children:
c.status = c.run()
if c.status == TaskStatus.SUCCESS:
return TaskStatus.SUCCESS
return TaskStatus.FAILURE
The key difference between the ParallelOne composite task and a Selector is that
the ParallelOne task runs all of its tasks on every "tick" of the clock unless (or until)
one subtask succeeds. A Selector continues running the first subtask until that task
either succeeds or fails before moving on to the next subtask or returning altogether.
class ParallelAll(Task):
"""
A parallel task runs each child task at (roughly) the same time.
The ParallelAll task requires all subtasks to succeed for it to succeed.
"""
def __init__(self, name, *args, **kwargs):
super(ParallelAll, self).__init__(name, *args, **kwargs)
def run(self):
n_success = 0
n_children = len(self.children)
for c in self.children:
c.status = c.run()
if c.status == TaskStatus.SUCCESS:
n_success += 1
if c.status == TaskStatus.FAILURE:
return TaskStatus.FAILURE
if n_success == n_children:
return TaskStatus.SUCCESS
else:
return TaskStatus.RUNNING
Similar to the ParallelOne task, the ParallelAll task runs each subtask on each
tick of the clock but continues until all subtasks succeed or until one of them fails.
class Loop(Task):
    """
    Loop over one or more subtasks for the given number of iterations
    """
    # (the class header and __init__ signature here are reconstructed)
    def __init__(self, name, announce=True, *args, **kwargs):
        super(Loop, self).__init__(name, *args, **kwargs)

        self.iterations = kwargs['iterations']
        self.announce = announce
        self.loop_count = 0
        self.name = name
        print("Loop iterations: " + str(self.iterations))

    def run(self):
        while True:
            if self.iterations != -1 and self.loop_count >= self.iterations:
                return TaskStatus.SUCCESS

            for c in self.children:
                while True:
                    c.status = c.run()
                    if c.status == TaskStatus.SUCCESS:
                        break
                    else:
                        return c.status
                c.reset()

            self.loop_count += 1
            if self.announce:
                print(self.name + " COMPLETED " + str(self.loop_count) + " LOOP(S)")
The Loop task simply executes its child task(s) for the given number of iterations. A
value of -1 for the iterations parameter means "loop forever". Note that a Loop
task is still a task in its own right.
class IgnoreFailure(Task):
"""
Always return either RUNNING or SUCCESS.
"""
def __init__(self, name, *args, **kwargs):
super(IgnoreFailure, self).__init__(name, *args, **kwargs)
    def run(self):
        for c in self.children:
            c.status = c.run()
            # Pass a RUNNING status up the tree; otherwise ignore failures
            if c.status == TaskStatus.RUNNING:
                return c.status
        return TaskStatus.SUCCESS
The IgnoreFailure task simply turns a FAILURE into a SUCCESS for each of its child
behaviors. If the status of a child task is RUNNING, the IgnoreFailure also takes on a
status of RUNNING.
The CallbackTask turns any function into a task. The function name is passed to the
constructor as the cb argument along with optional arguments. The only constraint on
the callback function is that it must return 0 or False to represent a TaskStatus of
FAILURE and 1 or True to represent SUCCESS. Any other return value is interpreted as
RUNNING.
class CallbackTask(Task):
"""
Turn any callback function (cb) into a task
"""
def __init__(self, name, cb=None, cb_args=[], cb_kwargs={}, **kwargs):
super(CallbackTask, self).__init__(name, cb=None, cb_args=[],
cb_kwargs={}, **kwargs)
self.name = name
self.cb = cb
self.cb_args = cb_args
self.cb_kwargs = cb_kwargs
    def run(self):
        status = self.cb(*self.cb_args, **self.cb_kwargs)
        if status == 0 or status == False:
            return TaskStatus.FAILURE
        elif status == 1 or status == True:
            return TaskStatus.SUCCESS
        else:
            return TaskStatus.RUNNING
class MonitorTask(Task):
"""
Turn a ROS subscriber into a Task.
"""
def __init__(self, name, topic, msg_type, msg_cb, wait_for_message=True,
timeout=5):
super(MonitorTask, self).__init__(name)
self.topic = topic
self.msg_type = msg_type
self.timeout = timeout
self.msg_cb = msg_cb
if wait_for_message:
try:
rospy.wait_for_message(topic, msg_type, timeout=self.timeout)
rospy.loginfo("Connected.")
except:
rospy.loginfo("Timed out waiting for " + topic)
        # Subscribe to the given topic; the user's callback is executed via run()
        rospy.Subscriber(self.topic, self.msg_type, self._msg_cb)

    def _msg_cb(self, msg):
        # Store the task status returned by the user's callback function
        self.status = self.msg_cb(msg)

    def run(self):
        return self.status

    def reset(self):
        pass
The MonitorTask subscribes to a given ROS topic and executes a given callback
function. The callback function is defined by the user and is responsible for returning
one of the three allowed task status values: SUCCESS, FAILURE or RUNNING.
class ServiceTask(Task):
"""
Turn a ROS service into a Task.
"""
    def __init__(self, name, service, service_type, request, result_cb=None,
                 wait_for_service=True, timeout=5):
        super(ServiceTask, self).__init__(name)

        self.result = None
        self.request = request
        self.timeout = timeout
        self.result_cb = result_cb
        # Create the service proxy used by run()
        self.service_proxy = rospy.ServiceProxy(service, service_type)
        if wait_for_service:
            rospy.loginfo("Waiting for service")
            rospy.wait_for_service(service, timeout=self.timeout)
            rospy.loginfo("Connected.")
def run(self):
try:
result = self.service_proxy(self.request)
if self.result_cb is not None:
self.result_cb(result)
return TaskStatus.SUCCESS
except:
rospy.logerr(sys.exc_info())
return TaskStatus.FAILURE
def reset(self):
pass
The ServiceTask wraps a given ROS service and optionally executes a user-defined
callback function. By default, a ServiceTask will simply call the corresponding ROS
service and return SUCCESS unless the service call itself fails in which case it returns
FAILURE. If the user passes in a callback function, this function may simply execute
some arbitrary code or it may also return a task status.
class SimpleActionTask(Task):
"""
Turn a ROS action into a Task.
"""
def __init__(self, name, action, action_type, goal, rate=5,
connect_timeout=10, result_timeout=30, reset_after=False, active_cb=None,
done_cb=None, feedback_cb=None):
super(SimpleActionTask, self).__init__(name)
self.action = action
self.goal = goal
self.tick = 1.0 / rate
self.rate = rospy.Rate(rate)
self.result = None
self.connect_timeout = connect_timeout
self.result_timeout = result_timeout
self.reset_after = reset_after
        if active_cb == None:
            active_cb = self.default_active_cb
        self.active_cb = active_cb
        if done_cb == None:
            done_cb = self.default_done_cb
        self.done_cb = done_cb
        if feedback_cb == None:
            feedback_cb = self.default_feedback_cb
        self.feedback_cb = feedback_cb

        self.action_started = False
        self.action_finished = False
        self.goal_status_reported = False
        self.time_so_far = 0.0

        # Text names for the actionlib goal status codes, indexed by status value
        self.goal_states = ['PENDING', 'ACTIVE', 'PREEMPTED', 'SUCCEEDED',
                            'ABORTED', 'REJECTED', 'PREEMPTING', 'RECALLING',
                            'RECALLED', 'LOST']

        # Create the action client and wait for the action server to become available
        self.action_client = actionlib.SimpleActionClient(action, action_type)
        try:
            self.action_client.wait_for_server(rospy.Duration(self.connect_timeout))
        except:
            rospy.loginfo("Timed out connecting to the action server " + action)
    def run(self):
        # Send the goal the first time run() is called
        if not self.action_started:
            rospy.loginfo("Sending " + str(self.name) + " goal to action server...")
            self.action_client.send_goal(self.goal, done_cb=self.done_cb,
                                         active_cb=self.active_cb,
                                         feedback_cb=self.feedback_cb)
            self.action_started = True

        # While the action is underway, run() returns RUNNING on each tick. Once
        # the done callback fires, the final actionlib status code (held in the
        # variable status in that part of the method) is translated into a task
        # status and reported a single time:
        if not self.goal_status_reported:
            rospy.loginfo(str(self.name) + " ended with status " +
                          str(self.goal_states[status]))
            self.goal_status_reported = True
    def default_done_cb(self, status, result):
        # Minimal reconstruction: record the result and flag the action as finished
        self.result = result
        self.action_finished = True

    def default_active_cb(self):
        pass

    def default_feedback_cb(self, msg):
        pass

    def reset(self):
        self.action_started = False
        self.action_finished = False
        self.goal_status_reported = False
        self.time_so_far = 0.0
The robot should make two loops around the square, stopping to recharge when
necessary, then stop. Let's now look at the code.
We begin by importing the pi_trees_ros library which in turn imports the core
pi_trees classes from the pi_trees_lib library. The first key block of code
involves the creation of the navigation tasks, sketched below:
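A sketch of this block (the list name and goal frame are illustrative; MoveBaseGoal and MoveBaseAction come from the move_base_msgs package):

MOVE_BASE_TASKS = list()

for i, waypoint in enumerate(self.waypoints):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.pose = waypoint
    move_base_task = SimpleActionTask("MOVE_BASE_TASK_" + str(i), "move_base",
                                      MoveBaseAction, goal, reset_after=False)
    MOVE_BASE_TASKS.append(move_base_task)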
Here we see nearly the same procedure as we used with SMACH although now we are
using the SimpleActionTask from the pi_trees library instead of the
SimpleActionState from the SMACH library.
The root behavior is a Sequence labeled BEHAVE that will have two child branches; one
that starts with the Selector labeled STAY_HEALTHY and a second branch labeled
LOOP_PATROL that uses the Loop decorator to loop over the patrol task. We then add
the two child branches to the root node in the order that defines their priority. In this
case, the STAY_HEALTHY branch has higher priority than LOOP_PATROL.
Next we take care of the rest of the patrol nodes. The patrol sequence itself is
constructed as an Iterator called PATROL. We then add each move_base task to the
iterator. Finally, we add the entire patrol to the LOOP_PATROL task.
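Putting the pieces described above together, the construction looks roughly like the following sketch (the variable names follow the prose; n_patrols is the desired number of loops):

# The root node
BEHAVE = Sequence("BEHAVE")

# The "stay healthy" selector and the looping patrol branch
STAY_HEALTHY = Selector("STAY_HEALTHY")
LOOP_PATROL = Loop("LOOP_PATROL", iterations=self.n_patrols)

# Add the two branches to the root node in order of priority
BEHAVE.add_child(STAY_HEALTHY)
BEHAVE.add_child(LOOP_PATROL)

# The patrol itself is an Iterator over the move_base tasks
PATROL = Iterator("PATROL")
for task in MOVE_BASE_TASKS:
    PATROL.add_child(task)

# Add the patrol iterator to the looping decorator
LOOP_PATROL.add_child(PATROL)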
# Add the battery check and recharge tasks to the "stay healthy" task
with STAY_HEALTHY:
    # The check battery condition (uses MonitorTask)
    CHECK_BATTERY = MonitorTask("CHECK_BATTERY", "battery_level",
                                Float32, self.check_battery)

    # The charge robot task (uses ServiceTask)
    CHARGE_ROBOT = ServiceTask("CHARGE_ROBOT",
                               "battery_simulator/set_battery_level",
                               SetBatteryLevel, 100,
                               result_cb=self.recharge_cb)

    # Build the recharge sequence using inline construction
    RECHARGE = Sequence("RECHARGE", [NAV_DOCK_TASK, CHARGE_ROBOT])
We then construct the RECHARGE task as a Sequence whose child tasks are
NAV_DOCK_TASK and CHARGE_ROBOT. Note how we have used the inline syntax to
illustrate how we can add child tasks at the same time that we construct the parent.
Equivalently, we could have used the three lines:
RECHARGE = Sequence("RECHARGE")
RECHARGE.add_child(NAV_DOCK_TASK)
RECHARGE.add_child(CHARGE_ROBOT)
STAY_HEALTHY.add_child(CHECK_BATTERY)
STAY_HEALTHY.add_child(RECHARGE)

We complete the STAY_HEALTHY branch of the tree by adding the CHECK_BATTERY and
RECHARGE tasks. Note again that the order is important since we want to check the
battery first to see if we need to recharge.
Before starting execution, we use the print_tree() function from the pi_trees
library to display a representation of the behavior tree on the screen. The tree itself is
executed by calling the run() function on the root node. The run() function makes
one pass through the nodes of the tree so we need to place it in a loop.
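A minimal sketch of that loop (the sleep interval is illustrative):

# Display the tree structure, then tick the tree until shutdown
print_tree(BEHAVE)

while not rospy.is_shutdown():
    BEHAVE.run()
    rospy.sleep(0.1)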
Recall that the check_battery() callback function was assigned to the CHECK_BATTERY
MonitorTask, which monitors the battery_level topic (see the sketch below).
We therefore check the battery level against the
low_battery_threshold parameter. If the level is below threshold, we return a task
status of FAILURE. Otherwise we return SUCCESS. Because the CHECK_BATTERY task
is the highest priority task in the STAY_HEALTHY selector, if it returns FAILURE, then
the selector moves on to its next subtask which is the RECHARGE task.
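A sketch of this callback (the log message is illustrative):

def check_battery(self, msg):
    if msg.data < self.low_battery_threshold:
        rospy.loginfo("LOW BATTERY - level: " + str(int(msg.data)))
        return TaskStatus.FAILURE
    else:
        return TaskStatus.SUCCESS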
The patrol_tree.py script illustrates an important property of behavior trees that
helps distinguish them from ordinary hierarchical state machines like SMACH. You'll
notice that after a recharge, the robot continues its patrol where it left off even though
nowhere in the script did we explicitly save the last waypoint reached. Remember that
in the SMACH example (patrol_smach_concurrence.py), we had to save the last
state just before a recharge so that the robot would know where to continue after being
charged. Behavior trees inherently store their state by virtue of each node's status
property. In particular, if the robot is on its way to a waypoint, the navigation task doing
the work of moving the robot has a status of RUNNING. If the robot is then diverted to
the docking station for a recharge, the status of the previously active navigation task is
still RUNNING. This means that when the robot is fully charged and the
CHECK_BATTERY task returns SUCCESS, control returns automatically to the running
navigation node.
3.10.5 A house cleaning robot using behavior trees
Earlier in the chapter we used SMACH to simulate a house cleaning robot. Let us now do
the same using behavior trees. Our new script is called clean_house_tree.py and is
found in the rbx2_tasks/nodes subdirectory. The program is similar to the
patrol_tree.py script but this time we will add a few tasks that simulate vacuuming,
scrubbing and mopping just as we did with the SMACH example. We will also include
battery checking and recharge behavior.
Before describing the code, let's try it out. If you don't already have the
fake_turtlebot.launch file running, bring it up now:
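$ roslaunch rbx2_tasks fake_turtlebot.launch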
Recall that this launch file also runs a move_base node, the map server with a blank
map, and the fake battery node with a default runtime of 60 seconds.
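Then run the clean_house_tree.py script:

$ rosrun rbx2_tasks clean_house_tree.py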
The robot should make one circuit of the square, performing cleaning tasks in each room
and recharging when necessary.
The overall behavior tree implemented by the clean_house_tree.py script looks like
this:
In addition to the main tasks shown above, we also require a few condition nodes such
as "is the room already clean" and "are we at the desired location?" The need for these
condition checks arises from the recharge behavior of the robot. For example, suppose
the robot is in the middle of mopping the kitchen floor when its battery level falls below
threshold. The robot will navigate out of the kitchen and over to the docking station.
Once the robot is recharged, control will return to the last running task—mopping the
kitchen floor—but now the robot is no longer in the kitchen. If we don't check for this,
the robot will start mopping the docking station! So to get back to the kitchen, we
include a task that checks the robot's current location and compares it to where it is
supposed to be. If not, the robot navigates back to that location.
A good way to understand the behavior tree we will create is to imagine that you are
asked to clean a room yourself. If you were asked to clean the bathroom, you might use
a strategy like the following:
• First find out if the bathroom even needs cleaning. Perhaps your roommate felt
energetic and took care of it already. If there is a task checklist somewhere such
as on the refrigerator, make sure the bathroom isn't already checked off.
• Once you are in the bathroom, check the list of tasks to perform. After each task
is completed, put a check mark beside the task on the list.
We can mimic this very same process in a behavior tree. For a given room, the subtree
will look something like the following. In parentheses beside each task, we have
indicated the type of task it is: selector, sequence, iterator, condition, or action.
• CLEANING_ROUTINE (selector)
    • IS_ROOM_CLEAN (condition)
    • CLEAN_ROOM (sequence)
        • NAV_ROOM (selector)
            • CHECK_LOCATION (condition)
            • MOVE_BASE (action)
        • TASK_LIST (iterator)
            • DO_TASK_1 (sequence)
                • CHECK_LOCATION (condition)
                • EXECUTE_TASK (action)
                • UPDATE_TASK_LIST (action)
            • DO_TASK_2 (sequence)
                • CHECK_LOCATION (condition)
                • EXECUTE_TASK (action)
                • UPDATE_TASK_LIST (action)
            • ETC
Here's how the tree would look if the only task was to vacuum the living room and we
omit the battery checking subtree:
Once we are at the target room, the TASK_LIST iterator begins. First we check that we
are still at the correct location, then we execute each task in the iterator and update the
task list.
To make our script more readable, the simulated cleaning tasks can be found in the file
cleaning_tasks_tree.py under the rbx2_tasks/src folder. We then import this
file at the top of the clean_house_tree.py script. Let's look at the definition of one
of these simulated tasks:
class VacuumFloor(Task):
    def __init__(self, name, room, timer, *args):
        super(VacuumFloor, self).__init__(name, *args)

        self.name = name
        self.room = room
        self.counter = timer
        self.finished = False

        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist)
        self.cmd_vel_msg = Twist()
        self.cmd_vel_msg.linear.x = 0.05

    def run(self):
        if self.finished:
            return TaskStatus.SUCCESS
        else:
            rospy.loginfo('Vacuuming the floor in the ' + str(self.room))
            # One pass of "vacuuming" per tick of the tree; the motion details
            # here are a reconstructed sketch
            while self.counter > 0:
                self.cmd_vel_pub.publish(self.cmd_vel_msg)
                self.cmd_vel_msg.linear.x *= -1
                self.counter -= 1
                rospy.sleep(1)
                return TaskStatus.RUNNING

            self.finished = True
            self.cmd_vel_pub.publish(Twist())
            message = "Finished vacuuming the " + str(self.room) + "!"
            rospy.loginfo(message)
            return TaskStatus.SUCCESS
Recall that some behavior trees use an object called the global "black board" for tracking
certain properties of the tree and the world. In Python, the black board can be a simple
class with a number of variables to hold the data. At the top of the
clean_house_tree.py script we define the BlackBoard() class, sketched below, with
a list variable to store the task list and a ROS Point variable to track the robot's
current coordinates on the map.
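A sketch of such a class (the variable names are illustrative; Point comes from geometry_msgs.msg):

class BlackBoard():
    def __init__(self):
        # A list to track the cleaning tasks remaining for each room
        self.task_list = list()

        # The robot's current position on the map
        self.robot_position = Point()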
black_board = BlackBoard()
Next we create an instance of the BlackBoard class and create an ordered list of
cleaning tasks using the task definitions from the file clean_house_tasks_tree.py
in the src/rbx2_tasks subdirectory. This task list will be convenient for iterating
through all the tasks assigned to the robot. It also means that we can add or remove
tasks by simply editing the list here at the top of the script.
The heart of the script involves creating the desired behavior tree from this task list. The
tree is built by looping over all the tasks in the task list stored on the black board,
building a subtree like the one outlined above for each room and its tasks, and then
adding each room's subtree to the overall CLEAN_HOUSE task.
Notice how both the message task and the counting task run to completion before the
script exits but that the output alternates between the two tasks since they are running in
parallel.
Let's take a look at the core part of the code:
class ParallelExample():
    def __init__(self):
        # The root node
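        # (The rest of the constructor is a sketch; the constructor arguments of
        # PrintMessage() and Count() are illustrative.)
        BEHAVE = ParallelAll("PRINT_AND_COUNT")

        # A task that prints a message, one word per "tick" of the tree
        PRINT_MESSAGE = PrintMessage("PRINT_MESSAGE", "Take me to your leader!")

        # A task that counts from 1 to 10, one number per tick
        COUNT_TO_10 = Count("COUNT_TO_10", 10)

        # Add both tasks to the root node and tick the tree until it succeeds
        BEHAVE.add_child(PRINT_MESSAGE)
        BEHAVE.add_child(COUNT_TO_10)

        while True:
            status = BEHAVE.run()
            time.sleep(0.1)
            if status == TaskStatus.SUCCESS:
                break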
The construction of the behavior tree shown above should be fairly self explanatory
from the inline comments. Note that the PrintMessage() and Count() tasks are
defined later in the script and are fairly straightforward so we will not display them here.
If you modify the parallel_tree.py script so that the ParallelAll task used for the
root node is replaced with a ParallelOne task instead, the output should look like this:
Behavior Tree Structure
--> PRINT_AND_COUNT
--> PRINT_MESSAGE
--> COUNT_TO_10
Take 1 me 2 to 3 your 4 leader!
while True:
status = BEHAVE.run()
time.sleep(0.1)
if status == TaskStatus.SUCCESS:
BEHAVE.reset()
if remove:
PARALLEL_DEMO.remove_child(COUNT_WORDS)
else:
PARALLEL_DEMO.add_child(COUNT_WORDS)
It's not hard to imagine a behavior tree where nodes or branches are added or pruned
based on a robot's experience to better adapt to the conditions at hand. For example, one
could add or remove entire branches of the tree depending on which room the robot is in
or switch its behavior from cleaning to patrolling by simply snipping one branch of the
tree and adding the other.
URDF models can use either mesh objects (STL or Collada) or simple box and cylinder
components. We will cover both cases in the sections that follow. Our general strategy
will be as follows:
• Create individual URDF/Xacro files for the main components including a base
with wheels, a torso, camera, laser scanner, pan-and-tilt head, and arm(s). All
dimensions and offsets will be stored as properties at the top of each file so that
modifications can be made easily at any time.
• Create separate files for materials (e.g. colors) and hardware components (e.g.
Dynamixel servos and brackets).
• Construct our final URDF model by including the relevant files created earlier
and attaching the various components in the right places for our particular robot.
You should see a small window appear called "Joint State Publisher" with two slider
controls, one for each drive wheel:
</launch>
The <arg> line near the top points to our robot's URDF/Xacro file and the <param>
line loads that file onto the ROS parameter server as the parameter named
robot_description. In fact, you can view the XML of the loaded model using the
command:
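$ rosparam get /robot_description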
though there is generally little need to do this except perhaps during debugging to verify
that the parameter has been set.
The next few lines in the launch file bring up the robot_state_publisher node.
This node reads the geometry defined in the robot model and publishes a set of
transforms that make up the robot's tf tree. Without the tf tree, very little else could
be done with the robot including navigation, 3D vision, or controlling an arm.
The next set of lines launches the joint_state_publisher node. This optional
node is used mostly when testing a URDF model in RViz as we are doing now.
Running the node with the use_gui parameter set to True brings up the slider control
window we saw earlier for setting the positions of any movable joints defined in the
robot model. At the moment we only have the two wheel joints but when we add a
multi-jointed arm later in the chapter, we will be able to set its configuration as well.
NOTE: When the robot's joints are under the control of an actual hardware driver, the
joint_state_publisher node is no longer used and in fact, it would conflict with
the real joint positions being published by the driver.
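Those two opening tags typically look like the following sketch (the xacro namespace attribute shown here is the standard one):

<?xml version="1.0"?>
<robot xmlns:xacro="http://ros.org/wiki/xacro">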
All URDF/Xacro files will begin with these two opening tags. Everything after the
<robot> tag will define our component and we will close the whole file with an ending
</robot> tag.
The properties section allows us to assign all dimensions and offsets to variables that
can then be used throughout the rest of the file. If changes are made to the robot at a
later time, simply tweak these values accordingly. Keep in mind the following points
when setting property values:
To see the axes attached to the base in RViz, set the Alpha value of the base_link to
something like 0.5 like the following:
A wheel is defined as a macro so that we can use it for each drive wheel without having
to repeat the XML. We won't describe the syntax in detail as it is already covered in the
URDF tutorials. However, note the use of the reflect parameter which takes on the
values 1 or -1 for the left and right sides respectively and allows us to flip both the
wheel and the sign of the y offset.
Next we define the macro for the base itself. In this case we are using a simple box
geometry for the <visual> component incorporating the parameters defined at the top
of the file. For the <collision> block, we have defined a wider box to encompass the
wheels. This provides a safety margin around the robot to help prevent the wheels
getting caught against obstacles. (Note: if your robot's wheels lie inside the perimeter of
the base, then you could simply use the same box used for the visual component.)
56 <link name="base_footprint">
57 <visual>
58 <origin xyz="0 0 0" rpy="0 0 0" />
59 <geometry>
60 <box size="0.05 0.05 0.001" />
61 </geometry>
62 <material name="TransparentGreen" />
63 </visual>
64 </link>
65
66 <joint name="base_joint" type="fixed">
67 <origin xyz="0 0 ${base_size_z/2 - wheel_offset_z}" rpy="0 0 0" />
66 <joint name="base_joint" type="fixed">
Here we define the base_footprint link and a fixed joint that defines its relationship
to the base_link. As explained in the next section, the role of the base_footprint
is essentially to define the elevation of the base above the ground. As you can see
above, the joint between the footprint and the base raises the base by an amount
calculated from the base height and the z-offset of the wheels. To understand why we
have to elevate the robot by an amount base_size_z/2, change the Fixed Frame in
RViz to /base_link. If you use your mouse to shift the viewpoint so that you are
looking at the ground nearly edge-on, the image should look like this:
Note how the origin of a URDF box component is placed at the center of the box; i.e. at
a height of base_size_z/2 meters above the bottom of the robot. Consequently, the
fixed joint between the /base_footprint frame (which rests on the ground) and the
/base_link frame must include a translation in the z-direction by a distance of
base_size_z/2 meters just to get the bottom of the robot up to the ground plane.
Here we call the <wheel> macro twice with the reflect parameter set first to 1 for the
left wheel and -1 for the right to make sure the wheels get mounted on opposite sides of
the base. The suffix parameter causes the <wheel> macro to give each wheel link a
unique name.
The only problem with this approach is that the ground_clearance variable then has
to be added to the origin of all the components in your model, including wheels, torso,
camera, arms, etc. If you do use this approach, then remember that other ROS packages
like the Navigation stack will require using the /base_link frame rather than
/base_footprint in a number of configuration files.
<xacro:include filename="$(find
rbx2_description)/urdf/box_robot/base.urdf.xacro" />
</robot>
As you can see, the file is quite simple. First we add a name for our robot to the opening
<robot> tag (highlighted in bold above). Then we include two macro files—the
materials file that defines various colors, and the base macro file we just created.
Finally, we add the base to the robot by calling the <base> macro with a name and
color.
As we will see, adding a torso, camera, laser scanner and arm is nearly as
straightforward.
4.1.5 Viewing the robot's transform tree
To view an image of the robot's tf tree, use the view_frames utility:
$ cd ~
$ rosrun tf view_frames
This will create a PDF file called frames.pdf in the current directory. It can be
viewed using any PDF reader such as evince:
$ evince frames.pdf
To bring up just the base, first terminate any other URDF launch file you might have
running from an earlier section, then run the following:
$ roslaunch rbx2_description pi_robot_base_only.launch
(Remember to toggle the checkbox beside the RobotModel display if you are still
seeing a previously loaded model.)
To see how the mesh was included in the URDF model, take a look at the file
pi_base.urdf.xacro in the directory rbx2_description/urdf/pi_robot. The
file is nearly identical to the box model of the base with a few key differences. Here are
the key property lines and the block defining the base macro:
<property name="base_radius" value="0.152" />
<property name="base_height" value="0.241" />
<property name="ground_clearance" value="0.065" />
Since Pi Robot's base is cylindrical rather than a box, we set its radius and height as
properties. We also set the ground clearance and base mesh scale values. (More on the
mesh scale below.)
The key line is highlighted in bold above. Instead of defining a simple box (or in this
case a cylinder) for the visual component, we load the 3D model of the base using the
<mesh> tag. The filename parameter is set to the ROS package path of the
appropriate file, in this case, the file pi_robot_base.stl in the directory
rbx2_description/meshes/pi_robot. Since meshes created in CAD programs
will not necessarily use meters as the base unit, we also use a scale parameter that
takes three values for the x, y and z scale multipliers. (The scale will almost always be
the same in all three directions.) Pi's base mesh was created in Google Sketchup and it
turns out that the appropriate scale factor is 0.0254. This is the value set for the
base_mesh_scale property near the top of the file.
Note that we use a simple cylinder rather than the mesh in the <collision> block.
The reason is that the collision parameters are used by other ROS packages like the
Navigation stack for checking that the robot is not about to run into an obstacle. It is
much quicker to do collision checking with simple geometric shapes like boxes and
cylinders than to use a 3D mesh which might have thousands of faces. As long as the
simpler shape envelops the visual mesh, it serves just as well for collision checking.
Note also that we have set the z origin component for the collision cylinder to
base_height/2. This is because the coordinate frame of the base mesh is actually
attached to the bottom of the mesh rather than the middle as we will explain below.
To see the collision cylinder in RViz, simply check the box beside Collision Enabled
under the RobotModel display and the view should look like this:
• un-check the checkboxes under the Links section for all the links except the
base_link
As you can see, this particular mesh defines its origin in the center of the bottom of the
base. (Recall that a box or cylinder component has the origin in the middle of the
component.) Fortunately the axes are oriented the way we want with the x-axis pointing
in the robot's forward direction.
The ground clearance of the robot can now be seen to be simply the difference between
the wheel radius and the wheel's z-offset. So the <joint> definition for the
base_footprint link looks like this:
<joint name="base_joint" type="fixed">
<origin xyz="0 0 ${wheel_offset_z - wheel_radius}" rpy="0 0 0" />
<parent link="base_link"/>
<child link="base_footprint" />
</joint>
The top-level URDF/Xacro model that includes the mesh base is stored in the file
pi_robot_base_only.xacro in the rbx2_templates/urdf/pi_robot directory
and it looks like this:
<?xml version="1.0"?>
<robot name="pi_robot">
</robot>
As you can see, the file is quite simple. First we add a name for our robot to the opening
<robot> tag. Then we include two macro files, the materials file that defines various
colors and the base macro file we just created. Finally, we add the base to the robot by
calling the <base> macro with a desired color.
As we will see, adding a torso, camera and arm is just as straightforward.
(Remember to toggle the checkbox beside the RobotModel display if you are still
seeing a previously loaded model.)
Change the Fixed Frame back to /base_footprint if you still have it set to
/base_link from the previous section.
Note how the new torso_link is listed under the Links section of the RobotModel
display in the left hand panel. You can turn off the display of the torso by un-checking
the checkbox beside the link name.
4.3.1 Modeling the torso
Let's look at the URDF/Xacro file torso.urdf.xacro defining the torso:
<?xml version="1.0"?>
<robot>
</robot>
At the top of the file we define the dimensions of the torso as a post 0.7 meters tall and
1" x 1" in cross-section. We then define the torso macro as a <joint> and a <link>.
The joint block connects the torso to the parent link which is passed as the parent
parameter to the macro. The point of attachment is determined by the *origin block
parameter which is also passed to the macro. Recall from the URDF Xacro tutorial that
a block parameter is inserted using the <xacro:insert_block> tag as we see above.
We'll see how to pass the *origin parameter in the next section.
</robot>
As you can see, the file is similar to our base-only robot model. First we add the three
torso offset parameters that determine where the torso is attached to the base. We also
include the torso.urdf.xacro file. Finally, we attach the torso to the base by calling
the <torso> macro with the parent parameter set to "base" and the xyz part of the
<origin> block parameter set to the torso offset values. Note that we could also assign
orientation values to the rpy parameter but in this case the default vertical orientation of
the torso is what we want.
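Putting the two pieces together, a simplified sketch of the torso macro and the call that
attaches it to the base might look like the following. (The property names, joint and link
names, and the color are illustrative; the actual files in rbx2_description are a bit
more detailed.)

<property name="torso_size_x" value="0.0254" />
<property name="torso_size_y" value="0.0254" />
<property name="torso_size_z" value="0.7" />

<macro name="torso" params="parent *origin">
   <!-- Attach the torso to the parent link at the point given by the origin block -->
   <joint name="torso_joint" type="fixed">
      <xacro:insert_block name="origin" />
      <parent link="${parent}_link" />
      <child link="torso_link" />
   </joint>

   <!-- The torso itself is a tall thin box, raised by half its height so that
        the joint sits at the bottom of the post -->
   <link name="torso_link">
      <visual>
         <origin xyz="0 0 ${torso_size_z/2}" rpy="0 0 0" />
         <geometry>
            <box size="${torso_size_x} ${torso_size_y} ${torso_size_z}" />
         </geometry>
         <material name="Grey" />
      </visual>
      <collision>
         <origin xyz="0 0 ${torso_size_z/2}" rpy="0 0 0" />
         <geometry>
            <box size="${torso_size_x} ${torso_size_y} ${torso_size_z}" />
         </geometry>
      </collision>
   </link>
</macro>

The call in the top-level robot file then passes the parent name and the offsets:

<torso parent="base">
   <origin xyz="${torso_offset_x} ${torso_offset_y} ${torso_offset_z}" rpy="0 0 0" />
</torso>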
4.3.3 Using a mesh for the torso
Pi Robot uses an 8020 T-slot post for a torso. As it turns out, the 3D Warehouse for
Google Sketchup has a model for just such an item so we can use it as a mesh. Pi's mesh
torso is defined in the file pi_torso.urdf.xacro in the directory
rbx2_description/urdf/pi_robot. The file looks like this:
<?xml version="1.0"?>
<robot>
The <torso> macro takes the parent and *origin as parameters so we know how to
attach it to the robot. As with Pi Robot's base mesh, we include a <mesh> tag in the
geometry section of the <link> block. In this case, the mesh filename points to the
package location of the t_slot.stl file. We also scale the mesh by a factor stored in
the property torso_mesh_scale. As it turns out, we need a value of 0.0127 to scale
the T-slot mesh to match the rest of the robot.
4.3.4 Adding the mesh torso to the mesh base
Adding the mesh torso to the rest of Pi Robot is the same as we did for the Box Robot.
To see how it looks, run the following launch file:
$ roslaunch rbx2_description pi_robot_with_torso.launch
<xacro:include filename="$(find
rbx2_description)/urdf/pi_robot/pi_base.urdf.xacro" />
<xacro:include filename="$(find
rbx2_description)/urdf/pi_robot/pi_torso.urdf.xacro" />
</robot>
As with the Box Robot model, first we define the torso offsets relative to the base. Then
we include the materials file and the two mesh macro files, one for the base and one for
the torso. Finally, we call the base macro followed by the torso macro with the origin
block parameters set to the desired point of attachment.
Even with meshes and careful measurements, you may see gaps or overlaps of the robot
parts in RViz. This is the "tweaking" part of the process. Change your offsets by small
amounts and check RViz for the result. It is often helpful to set the Alpha values for a
component to something less than 1 (e.g. 0.5) so that you can see if another component
is actually penetrating it. Then adjust the offset until the parts no longer overlap.
As you can see, we have modeled the Kinect using three pieces: a box for the base, a
cylinder for the supporting post and another box for the camera body. Let's now look at
the details.
<link name="${name}_base_link">
<visual>
<origin xyz="0 0.0 0.0" rpy="0 0 0" />
<geometry>
<box size="${kinect_base_x} ${kinect_base_y} ${kinect_base_z}"/>
</geometry>
<material name="${color}" />
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="${kinect_base_x} ${kinect_base_y} ${kinect_base_z}"/>
</geometry>
</collision>
</link>
First we store the dimensions of the Kinect in a number of properties separating out the
three pieces: the base, the supporting post, and the camera body.
Next we define a box-shaped link for the camera base and a joint that connects it to the
parent link (which will be the torso as we will see in the next section). Note how we use
the variable ${name} instead of the fixed string "camera" for the prefix for the link and
joint names. This allows us to attach more than one camera if desired and give each
camera a different name which then propagates through its set of links and joints.
<joint name="camera_base_post_joint" type="fixed">
<origin xyz="0 0 ${(kinect_base_z + kinect_base_post_height)/2}" rpy="0
0 0" />
<link name="${name}_base_post_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<cylinder radius="${kinect_base_post_radius}" length="${kinect_base_post_height}"/>
</geometry>
<material name="${color}" />
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<cylinder radius="${kinect_base_post_radius}" length="${kinect_base_post_height}"/>
</geometry>
</collision>
</link>
<link name="${name}_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="${kinect_body_x} ${kinect_body_y} ${kinect_body_z}"/>
</geometry>
<material name="${color}" />
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="${kinect_body_x} ${kinect_body_y} ${kinect_body_z}"/>
</geometry>
</collision>
</link>
Here the construction process continues as we attach the cylindrical post to the camera
base at the bottom and the camera body at the top.
The rest of the URDF/Xacro file defines the relationship between the various optical and
depth frames of the camera. The offsets were copied from the "official" Kinect URDF
file found in the turtlebot_description package. For example, the relation
between the camera body (camera_link frame) and the depth frame is given by the
block:
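In outline it looks like the following fixed joint (the joint name and the sign of the
offset are assumptions rather than the exact values from the file):

<joint name="${name}_depth_joint" type="fixed">
   <origin xyz="0 0.0125 0" rpy="0 0 0" />
   <parent link="${name}_link" />
   <child link="${name}_depth_frame" />
</joint>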
Here we see that the depth frame is laterally displaced from the center of the camera body
by 1.25 cm. This relation, as well as the others specified in the camera file, enables the
robot_state_publisher to include the various camera reference frames in the tf
tree. This in turn connects the camera frames to the rest of the robot which allows the
robot to view an object in the depth frame but know where it is relative to the robot's
base (for example). All the complicated frame transformations are done for you by ROS
thanks to the tf library.
<xacro:include filename="$(find
rbx2_description)/urdf/box_robot/base.urdf.xacro" />
<xacro:include filename="$(find
rbx2_description)/urdf/box_robot/torso.urdf.xacro" />
<xacro:include filename="$(find
rbx2_description)/urdf/box_robot/kinect_box.urdf.xacro" />
</robot>
As you can see, the file is similar to our base-plus-torso robot model. First we add the
three camera offset parameters that determine where the camera is attached to the torso.
We also include the kinect_box.urdf.xacro file. Next we attach the torso to the
base using the <torso> macro and finally, we attach the camera to the torso by calling
the <camera> macro with the parent parameter set to the torso and the origin
parameters set to the camera offset values.
To use an Asus Xtion Pro camera instead of a Kinect, terminate the previous launch file
and run the command:
$ roslaunch rbx2_description box_robot_with_xtion.launch
$ cd /tmp
$ rosrun tf view_frames
This will create a PDF file called frames.pdf in the current directory. It can be
viewed using any PDF reader such as evince:
$ evince frames.pdf
And here is a closer view of the mesh including the coordinate axes for the depth and
RGB camera frames:
And to launch the Pi Robot model with a mesh version of the Xtion Pro, run the
command:
<xacro:include filename="$(find
rbx2_description)/urdf/box_robot/base.urdf.xacro" />
<xacro:include filename="$(find
rbx2_description)/urdf/sensors/laser.urdf.xacro" />
</robot>
To see how it looks in RViz, terminate any current URDF launch files and run:
* Note: you might actually need to set the min_ang and max_ang values smaller than
the full range of your scanner if parts of the robot get in the way of the full range as in
the image above of Pi Robot's base.
After toggling the RobotModel display, the view should look like the following:
If RViz is still running from a previous session, don't forget to toggle the checkbox
beside the RobotModel display to update the model.
4.7.2 Modeling the pan-and-tilt head
The overall model for this robot is found in the file
box_robot_with_pan_tilt_head.xacro in the directory
rbx2_description/urdf/box_robot. This file is nearly the same as the
box_robot_with_kinect.xacro file we examined earlier although this time we
include the file box_pan_tilt_head.urdf.xacro instead of the
kinect.urdf.xacro file. The box_pan_tilt_head.urdf.xacro itself includes
the kinect.urdf.xacro as well as the link and joint definitions for the servos and
brackets. This way we can attach the camera to the head tilt bracket. The
box_pan_tilt_head.urdf.xacro file also includes another file called
dynamixel_box_hardware.xacro. This file defines a number of properties to hold
the dimensions of the servos and various types of Dynamixel brackets. It also defines a
collection of macros that models the servos and brackets using simple boxes. (We'll use
meshes for Pi Robot below.)
We won't go through the servo URDF/Xacro files line-by-line as we have done with the
other components as it would take too much space. The basic idea is that we define a
link for each servo and bracket and joints in between them. However, let's at least look
at a couple of XML blocks that illustrate how to model revolute joints. Referring to the
file box_pan_tilt_head.urdf.xacro, the following lines define the head_pan
joint and link in terms of an AX12 servo and F3 bracket:
<dynamixel_AX12_fixed parent="head_base" name="head_pan_servo">
<origin xyz="-0.012 0 ${-AX12_WIDTH/2}" rpy="${M_PI/2} 0 ${-M_PI/2}"/>
</dynamixel_AX12_fixed>
<link name="${name}_link">
<inertial>
<mass value="0.00001" />
<origin xyz="0 0 0" />
<inertia ixx="1.0" ixy="0.0" ixz="0.0"
iyy="1.0" iyz="0.0"
izz="1.0" />
</inertial>
<visual>
<origin xyz="0 0 0 " rpy="0 0 0" />
<geometry>
<box size="${AX12_HEIGHT} ${AX12_WIDTH} ${AX12_DEPTH}"/>
</geometry>
<material name="Black"/>
</visual>
<collision>
<origin xyz="0 0 -0.01241" rpy="0 0 0" />
<geometry>
<box size="${AX12_HEIGHT} ${AX12_WIDTH} ${AX12_DEPTH}"/>
</geometry>
</collision>
</link>
</macro>
This macro is fairly straightforward—it defines the link as a box with the same
dimensions of an AX-12 servo and it defines a fixed joint between itself and the parent
which in this case is the head_base_link.
The second macro above, bioloid_F3_head_revolute, does the work of actually
panning the head. This macro is also defined in the
dynamixel_box_head_hardware.xacro file and looks like this:
<macro name="bioloid_F3_head_revolute" params="parent name joint_name llimit
ulimit color *origin *axis">
<joint name="${joint_name}_joint" type="revolute">
<insert block name="origin"/>
<insert block name="axis"/>
<link name="${name}_link">
<inertial>
<mass value="0.00001" />
<origin xyz="0 0 0" />
<inertia ixx="1.0" ixy="0.0" ixz="0.0"
iyy="1.0" iyz="0.0"
izz="1.0" />
</inertial>
<visual>
<origin xyz="0 0 0 " rpy="0 0 0" />
<geometry>
<box size="${F3_DEPTH} ${F3_WIDTH} ${F3_HEIGHT}"/>
</geometry>
<material name="${color}"/>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="${F3_DEPTH} ${F3_WIDTH} ${F3_HEIGHT}"/>
</geometry>
</collision>
</link>
</macro>
Note how the joint type in this case (highlighted in bold above) is revolute instead of
fixed. A revolute joint takes parameters lower and upper for the lower and upper
rotation limits, specified in radians. These are set for a particular joint by passing the
values ${llimit} and ${ulimit} in the code above. An AX-12 servo has an
operating range of 300 degrees, so 150 degrees to either side of center. The value of
2.53 radians that is passed in for the lower and upper limits in the
bioloid_F3_head_revolute definition above is about 145 degrees—just a little less
than the full range so we don't force the servo against its own limits.
The joint also takes parameters velocity and effort which are limits on its dynamics
that should be respected by a real servo controller. A velocity limit of 1.571 radians per
second is equivalent to about 90 degrees per second which is moderately fast, especially
if the servo is panning a camera. Although we have set the effort to 1.0 to show the
syntax, neither the dynamixel_motor nor the arbotix package makes use of this
value.
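In the URDF, all four of these limits appear together in a single <limit> element inside
the <joint> block. For the head pan joint described above, the line looks something like
this:

<limit lower="${llimit}" upper="${ulimit}" velocity="1.571" effort="1.0" />

Here llimit and ulimit receive the ±2.53 radian values described above.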
Another key property of a revolute joint is this <axis> tag, also highlighted in bold
above. The xyz components of the axis tag are typically either 0 or 1 and define the
joint's axis of rotation.
Here we have set the Alpha value for the RobotModel display to 0.5 so that we can see
the link and joint frames more easily. Next, we have set the Link Tree Style to Joints
in Alphabetical Order instead of the default Links in Alphabetical Order. Looking
down the list of joints, we find the head_pan_joint and check the box beside Show
Axes. Referring back to the image above, we see that panning the head requires a
rotation around the z-axis (shown in blue in RViz) but that the reference frame is
inverted with the z-axis pointing downward. Therefore, we need to specify the rotation
axis as the negative z-axis ("0 0 -1" in the URDF model) which would then be
pointing upward. If you use the right-hand rule and point your thumb upward, your
fingers should curl in the direction of counter-clockwise rotation of the head which is
the convention ROS uses to define a positive rotation. You can verify that this is the
case by moving the head pan slider in the joint state GUI and watching which way the
head rotates in RViz.
The view in RViz should then look something like the following:
If RViz is still running from a previous session, don't forget to toggle the checkbox
beside the RobotModel display to update the model.
If RViz is still running, toggle the RobotModel display checkbox to refresh the model. The
joint state GUI should now appear as follows:
And the view in RViz should appear something like the following:
The key parameters to note here are the side and reflect parameters. For the right
arm, we set the side parameter to "right" and the reflect parameter to "-1". These
parameters are then passed to macros in the box_arm.urdf.xacro macro to give each
joint a unique name (e.g. right_arm_shoulder_pan_joint versus
left_arm_shoulder_pan_joint) and (optionally) reflect any displacements from
right to left. (As it turns out, our current models do not require the reflect parameter but it
is a good idea to know about it in case it is needed for a different model.) As we shall
see in the next section, adding a second arm can be done using the same
box_arm.urdf.xacro file with parameters side="left" and reflect="1".
Don't forget to check the rotations of the arm joints using the method described in section
4.7.3. If you find that a particular joint rotates in the wrong direction, simply change the
sign of the axis component in the arm's URDF/Xacro file.
<!-- Planning link and joint for the right gripper -->
<joint name="${side}_gripper_joint" type="fixed">
<origin xyz="0.05 0.0 -0.0375" rpy="${PI/2} 0 0"/>
<axis xyz="0 0 1" />
<parent link="${side}_gripper_static_finger_link"/>
<child link="${side}_gripper_link"/>
</joint>
<link name="${side}_gripper_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.001 0.0005 0.0005"/>
</geometry>
</visual>
</link>
The link itself is defined as a very small box called right_gripper_link and the
location of this small box is defined by the right_gripper_joint which displaces
and rotates the origin of the link relative to the parent
(right_gripper_static_finger_link in this case). The result is a gripper frame
located and oriented in a good place for grasping. We will use this gripper frame in the
chapter on arm navigation when it comes to picking up objects.
NOTE: Using a virtual gripper frame like this is optional and not a standard ROS
practice. It is more common to use the last arm link (e.g. wrist roll or wrist flex) attached
to the gripper as the reference frame for planning. That way one can swap out different
grippers without having much of an effect on existing planning software. However,
adding a virtual planning frame can make it easier to visualize and describe grasping
goals relative to the gripper.
Note how the two arms and grippers use the same macros <arm> and <gripper> but
with opposite values for the parameters side and reflect. The side parameter gives
the links and joints of each arm different prefixes (e.g.
right_arm_shoulder_lift_joint versus left_arm_shoulder_lift_joint)
and the reflect parameter can be used to mirror displacements or rotation directions
although this is not necessary in the case of the current arm models. As for how each
arm is mounted, note that the y-offset of the left arm has the opposite sign of the right arm's
and the left arm is rotated around the z-axis by 90 degrees in the opposite direction.
After toggling the RobotModel display in RViz and adjusting the joint slider controls,
the model should look something like the following:
• the pi_robot_with_arm.xacro file in turn includes Xacro files for the base,
torso, pan-and-tilt head, arm, and gripper
(If RViz is still running from a previous session, toggle the checkbox beside the
RobotModel display to reload the model.) The image should look like the following:
It is relatively straightforward to add a linear joint to the model of our robot's torso so
that we can experiment with adjusting the robot's height on the fly. To try it out with
our box robot, run the launch file:
(If RViz is still running from a previous session, toggle the checkbox beside the
RobotModel display to reload the model.)
Note that the joint control panel now has a slider for the torso joint. Moving this control
will adjust the torso height up and down.
The URDF that makes this happen can be found in the file tele_torso.urdf.xacro
in the directory rbx2_description/urdf/box_robot. The key element is the
addition of a prismatic (linear) joint that splits the torso into upper and lower segments
like this:
(ROS uses the keyword "prismatic" to indicate a linear joint.) We define the axis to be
in the positive z direction and we define the origin to have a 10 cm (0.1 meter) offset
so that the starting position places the robot at a neutral height. We then give the joint a
15 cm range in both the up and down directions using the lower and upper
parameters.
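Putting those numbers together, the joint block looks something like the following sketch.
(The joint and link names as well as the velocity and effort limits are illustrative here,
not the exact values used in tele_torso.urdf.xacro.)

<joint name="torso_joint" type="prismatic">
   <axis xyz="0 0 1" />
   <origin xyz="0 0 0.1" rpy="0 0 0" />
   <limit lower="-0.15" upper="0.15" velocity="0.1" effort="1000.0" />
   <parent link="lower_torso_link" />
   <child link="upper_torso_link" />
</joint>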
The URDF model for the telescoping torso can be found in the file
pi_tele_torso.urdf.xacro found in the directory
rbx2_description/urdf/pi_robot. The code is nearly identical to the box robot
as described in the previous section.
(If RViz is still running from a previous session, toggle the checkbox beside the
RobotModel display to reload the model.) The image should look like this:
To load the model with the gripper, terminate the earlier launch file and run the
command:
(If RViz is still running from a previous session, toggle the checkbox beside the
RobotModel display to reload the model.) The image should look like this:
It is instructive to note that the very same URDF/Xacro files for the arm and gripper are
used with this "Pedestal Pi" model as with the mobile version of Pi Robot. We are
simply attaching the arm and gripper to different base models for the two robots.
The output on the screen should look something like this:
Here we see that we have started three nodes: the main arbotix driver, a driver for the
right gripper, and the robot_state_publisher. We also see INFO lines for the two
joint trajectory controllers (called FollowControllers in the arbotix package), one
for the right arm and one for the gripper, and the DiffController for the differential
drive base. We will postpone discussion of these nodes and controllers until the chapter
on MoveIt! and Arm Navigation. For now, let us simply verify that we can move the
joints in an appropriate manner.
If RViz is still running from a previous session, terminate it now and bring it up with the
sim.rviz config file as follows:
Note that under the Global Options category, we have set the Fixed Frame to /odom.
Recall from Volume 1 that we need to be in the /odom frame in order to see the robot
move when publishing Twist commands on the /cmd_vel topic.
Now open up a new terminal and bring up the ArbotiX GUI utility:
$ arbotix_gui
Use the simulated track pad on the left to drive the robot around in RViz by grabbing the
little red dot and dragging it around. (If the robot does not appear to move, double-
check that you have set the Fixed Frame to /odom under the Global Options category
in RViz.) To move a joint, click the checkbox beside a servo name then use the slide
control to rotate it.
The arbotix_gui is a nice way to test the basic functionality of your robot. Moving
the red dot on the track pad publishes Twist messages on the /cmd_vel topic. If the
base controller parameters are set properly, the fake robot should move in the expected
manner.
Similarly, the servo slider controls can be used to test the motion of individual joints.
These controls publish a Float64 message representing the joint position in radians on
the topic /joint_name/command. For example, the slider labeled head_pan_joint
publishes its position commands on the /head_pan_joint/command topic.
In later chapters, we will learn how to control the joints programmatically from within
our own nodes.
If RViz is not still running, bring it up now with the sim.rviz config file:
$ arbotix_gui
Now you can drive Pi Robot's base and control his servos as we did with the Box Robot.
$ cd mybot_description
$ mkdir urdf
$ mkdir meshes
$ mkdir launch
You will now need to edit the package.xml file to include your name and email
address as the package maintainer as well as the license type. You can edit the
description field as you like.
$ cd ~/catkin_ws
$ catkin_make
And to ensure your new package is added to your ROS_PACKAGE_PATH for the current
terminal session, run:
$ rospack profile
<xacro:include filename="$(find
rbx2_description)/urdf/pi_robot/pi_base.urdf.xacro" />
This line pulls in Pi's base model from the file pi_base.urdf.xacro found in the
same package directory. Simply edit this line in your copy of the file to match your
package name and the location of your robot's base file.
Be sure to change the name of the package and the .xacro file highlighted above in
bold to match your package name and robot model file.
To view the model in RViz, use the config file in the rbx2_description package as
we have done in the rest of this chapter:
Alternatively, copy the config file into your new package directory:
$ roscd mybot_description
$ roscp rbx2_description urdf.rviz .
You can then run RViz with your own config file and any changes you make to the
settings will be saved to your copy:
• The arbotix package includes gripper controllers that are not available in the
dynamixel_motor package.
• The arbotix package provides a fake mode that uses essentially the same
configuration file used with real servos. This allows us to test the operation of
our scripts and nodes using RViz as a kind of fake simulator before trying them
out on a real robot.
• The arbotix package can be used with either the ArbotiX controller or the
USB2Dynamixel controller while the dynamixel_motor package only runs
with the USB2Dynamixel device.
ROS uses the use_sim_time parameter when running bag files or the Gazebo
simulator. It is therefore a good idea to explicitly set this parameter to false at the top
of a launch file designed for a real controller in case you have inadvertently left it set to
true from an earlier session.
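For example, the following line near the top of a launch file does the job:

<param name="/use_sim_time" value="false" />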
The arbotix driver uses a parameter called sim to determine whether it runs in fake
mode or controls real servos. Here we define an argument by the same name so that we
can pass a value of true or false for the parameter when we run the launch file. This
allows us to use the same launch file for running the fake robot or the real robot. We
also set a default value of true so that if we run the launch file without an argument, it
will run in fake mode. So to run the one-arm version of Pi Robot in fake mode, we would
run the launch file like this:
To connect to the real Pi Robot, we would run the launch file with the sim argument set
to false like this:
Here we define an argument for the serial port that will be used if we are connecting to a
real controller. We also set the default to the most likely USB port so that if your
controller is on /dev/ttyUSB0, you do not have to supply the port argument when
running the launch file. So if Pi Robot is using a USB2Dynamixel on port
/dev/ttyUSB0, and we want to control real servos rather than running in fake mode, we
would run the launch file like this:
If your controller is always on USB1, you could modify the launch file or make a copy
of the original and substitute /dev/ttyUSB1 for the default. See also the Appendix on
configuring plug and play devices under Ubuntu so that your controller is always
assigned the same device name.
By now you should be familiar with this line that loads the URDF model we want to
use. In this case, we are loading the model for the one-arm version of Pi Robot
including a gripper as defined by the URDF/Xacro file
pi_robot_with_gripper.xacro in the rbx2_description package under the
subdirectory urdf/pi_robot.
Here we launch the arbotix gripper controller for Pi's one-sided gripper. The
arbotix_controllers package includes the gripper_controller node that can
control a number of different gripper types including the one-sided single-servo model
like Pi's, dual servo models like those used on Maxwell, or single-servo parallel grippers
like the PhantomX gripper from Trossen Robotics. For Pi's gripper, we set the model
parameter to singlesided as well as the pad_width and finger_length given in
meters. The single joint name must match Pi's gripper URDF model and is set to
right_gripper_finger_joint since that joint corresponds to the servo that rotates
the movable finger plate.
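To give a feel for the syntax, a gripper controller block along the following lines would
do the job. The node type string, the parameter name for the joint, and the numeric values
are placeholders here rather than the exact contents of Pi's launch file:

<node name="right_gripper_controller" pkg="arbotix_controllers" type="gripper_controller" output="screen">
   <param name="model" value="singlesided" />
   <!-- pad_width and finger_length are given in meters; the numbers below are placeholders -->
   <param name="pad_width" value="0.01" />
   <param name="finger_length" value="0.08" />
   <!-- the parameter naming the single gripper servo joint; "joint" is an assumed parameter name -->
   <param name="joint" value="right_gripper_finger_joint" />
</node>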
Next we run the robot_state_publisher node that takes care of mapping the robot's
URDF model and joint states into the corresponding tf tree.
Note how we make use of the unless keyword inside the <node> tag in Line 38. This
tells roslaunch not to run this node if the sim parameter is true; i.e. if we are running
in fake mode. (There is no need to relax a fake servo!)
The arbotix node publishes servo diagnostics automatically so here we fire up the
diagnostic_aggregator that will enable us to keep an eye on servo loads and
temperatures. Our diagnostics configuration is stored in the
dynamixel_diagnostics.yaml file located in the rbx2_dynamixels/config
directory. (ROS diagnostics is explored fully in the next chapter.) This configuration
file instructs the aggregator to summarize diagnostic information for joints and
controllers. Note that we again use the unless keyword inside the <node> tag so we
don't run the node in fake mode.
The official description of all available arbotix parameters can be found on the
arbotix_python Wiki page. So let's focus on the parameter values that we have set in
our current configuration file.
1 port: /dev/ttyUSB0
2 baud: 1000000
3 rate: 100
4 sync_write: True
5 sync_read: False
6 read_rate: 10
7 write_rate: 10
• baud (default: 115200) – the baud rate of the serial port connected to the
controller. If you are using a USB2Dynamixel controller, this parameter has to
be set to 1000000. If you are using an ArbotiX controller, the default baud rate
is 115200.
• rate (default: 100) – how fast (in Hz) to run the main driver loop. The default
value of 100 works well, especially when it comes to generating joint
trajectories as we will do later on. The reason is that the arbotix controllers
interpolate joint positions when moving the servos and a high rate here results in
a finer degree of interpolation and therefore smoother servo motion.
• read_rate (default: 10) – how frequently (Hz) data is read from the
Dynamixels.
9 joints: {
10 head_pan_joint: {id: 1, neutral: 512, min_angle: -145, max_angle: 145},
11 head_tilt_joint: {id: 2, neutral: 512, min_angle: -90, max_angle: 90},
12 right_arm_shoulder_roll_joint: {id: 3, neutral: 512, invert: True},
13 right_arm_elbow_flex_joint: {id: 4, neutral: 512, min_angle: -90,
max_angle: 90},
14 right_arm_forearm_flex_joint: {id: 5, neutral: 512, min_angle: -90,
max_angle: 90},
In Lines 9-19 we set the parameters for our servos by using the joints parameter
which is a dictionary of joint names together with a separate parameter dictionary for
each servo. (More on these parameters below.) Note that the joint names must be the
same here as they are in the URDF model for your robot. You can use the above
configuration file as a template for your own robot, but be sure to edit the names to
match your robot's URDF/Xacro model.
IMPORTANT NOTE: The AX and RX series of Dynamixel servos have a 10-bit
angular resolution so that position values range from 0-1023 with a mid-point setting of
512. They also have a range of 300 degrees. The EX and MX series have a 12-bit
angular resolution so that position values range from 0-4095 with a mid-point value of
2048. These servos have a range of a full 360 degrees. Pi Robot uses a pair of MX-
28T servos for his shoulder pan and lift joints and AX-12 servos everywhere else. You
will therefore see that the ticks and neutral parameters for these two joints above
refer to the 12-bit range of the MX servos. More details are given below in the
parameter summary.
Each joint or servo has its own set of parameters as follows:
• ticks (default: 1024) – the range of position values for this servo. The default
value is valid for AX and RX series Dynamixels. For EX and MX models, a
value of 4096 must be used in the configuration file.
• neutral (default: 512) – the neutral position value is mapped by the arbotix
driver into 0 radians when publishing joint states. The default value of 512 is
usually appropriate for AX and RX model servos. However, for EX and MX
servos, a mid-point of 2048 must be explicitly specified in the configuration
file. Depending on how a particular servo connects to the rest of the robot, you
will sometimes set a neutral value other than the mid-point of its range. For
example, for Pi Robot's shoulder lift joint (which uses an MX-28T servo), the
neutral parameter is set to 1024 which represents ¼ of a turn. This was done so
• range (default: 300) – the working angular range of the servo. The default
value of 300 applies to AX and RX servos but a value of 360 must be explicitly
specified in the configuration file for EX and MX models.
• max_speed (default: 684) – the maximum speed of the servo in degrees per
second. The default value is suitable for an AX-12 servo. For an MX-28 servo,
use 702.
• min_angle (default: -150) – the minimum angle (in degrees) that the
controller will turn the servo. The default value of -150 is half the full range of
an AX or RX servo. For EX or MX servos, that value can be as high as -180.
Note that you can set smaller values if the rotation of the servo is restricted by
the way it is mounted. For example, Pi Robot's elbow joint can rotate only 90
degrees in either direction before its bracket runs into the upper arm. We
therefore specify min and max angles parameters of -90 and 90 in the
configuration file shown above.
• max_angle (default: 150) – The maximum angle (in degrees) that the
controller will turn the servo. The default value of 150 is half the full range of
an AX or RX servo. For EX or MX servos, that value can be as high as 180.
Note that you can set smaller values if the rotation of the servo is restricted by
the way it is mounted. For example, Pi Robot's elbow joint can rotate only 90
degrees in either direction before its bracket runs into the upper arm. We
therefore specify min and max angles parameters of -90 and 90 in the
configuration file shown above.
For each joint listed in the configuration file, the arbotix driver launches a joint
controller and several ROS services similar to those we used in Volume 1 with the
dynamixel_motor package. For example, to control the position (in radians) of the
right_arm_elbow_flex_joint, we can publish a Float64 message on the
/right_arm_elbow_flex_joint/command topic. To set the speed of the head pan
servo, we can call the /head_pan_joint/set_speed service.
Here we see the controllers section of the configuration file. In the case of Pi Robot, we
are configuring three controllers: the base controller for a differential drive base, a
trajectory action controller for the right arm joints, and a second trajectory action
controller for the head joints. Let's look at each of these in turn.
For a real robot, the base controller settings appearing in Line 22 above are only
relevant if you are using an actual ArbotiX controller to control a motor driver and a
differential drive base. However, we also use the base controller in fake mode when we
want to test a mobile robot in the ArbotiX simulator as we did in Volume 1 when
learning about ROS navigation. The official reference for the base parameters can be
found on the ArbotiX diff_controller Wiki page. When using a fake robot, the most
important parameter here is the base_frame_id which we have set to
base_footprint. If your robot does not use a base_footprint frame, change the
value here accordingly; e.g. base_link.
Next we turn to the joint trajectory controllers for the arm and head. The use of ROS
joint trajectories to control multiple joints simultaneously will be covered in detail in the
chapter on Arm Navigation. For now we will simply describe the various parameters
used in the configuration file. Let's start with the arm controller.
In Line 24 above we define a second trajectory action controller, this time for the pan-
and-tilt servos of the head. Although we won't use the head controller much in this
volume, it can be used to send motion goals to both head joints simultaneously.
The key items to note are highlighted in bold above: the right gripper controller is
started; the ArbotiX controller is running in simulation mode; and the trajectory
controllers for the right arm and head have been launched.
Now bring up RViz with the urdf.rviz config file from the rbx2_description
package:
$ rosrun rviz rviz -d `rospack find rbx2_description`/urdf.rviz
Next, bring up a new terminal and let's try publishing a few simple joint position
commands. We first learned about these commands in Volume 1 in the chapter on
controlling Dynamixel servos with ROS. We also ran some similar tests near the end of
the chapter on URDF models.
The first command should pan the head to the left (counter-clockwise) through 1.0
radians or about 57 degrees:
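$ rostopic pub -1 /head_pan_joint/command std_msgs/Float64 -- 1.0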
Now try raising the arm 2.0 radians (about 114 degrees) using the
right_arm_shoulder_lift_joint:
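$ rostopic pub -1 /right_arm_shoulder_lift_joint/command std_msgs/Float64 -- 2.0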
To disable a servo so that it no longer responds to position commands, use the enable
service with the argument false:
Notice how the servo does not respond. To re-enable the servo, run the command:
This time the servo should move. The enable service comes in handy when we want to
disable a servo for a period of time to let it cool down or otherwise take it offline for a
bit. In the meantime, other nodes can still publish position commands and they will
simply be ignored. We will look at two more services, relax and set_speed in the
next section on testing real servos.
We will defer testing arm and head trajectory controllers until the chapter on Arm
Navigation after we have learned more about joint trajectories and arm kinematics.
If your USB2Dynamixel controller is connected to a different USB port, you can run the
launch file with the port argument. For example, if your controller is on USB port
/dev/ttyUSB1, use the command:
(In the Appendix, we show how to create more descriptive port names like
/dev/usb2dynamixel that will work no matter what underlying physical port is used
by the OS.)
After a brief delay, you should see the rqt_robot_monitor GUI appear that should
look like the following image:
If the USB2Dynamixel controller has successfully connected to the two servos, the two
joints will appear in the rqt_robot_monitor window as shown above. We will
explore this monitor in detail in the next chapter on ROS Diagnostics.
As with the fake robot, let's begin by trying to move the pan-and-tilt head:
$ rostopic pub -1 /head_pan_joint/command std_msgs/Float64 -- 1.0
This command should pan the head smoothly to the left (counter-clockwise) through
1.0 radians or about 57 degrees. (If the head rotates clockwise, then your arbotix's
configuration file needs to be edited so that the head servo's invert parameter is set to
True. Or, if it was already set to True for some reason, set it to False instead. Then
restart your robot's launch file and try the above command again.)
To change the speed of the head pan servo to 1.5 radians per second, use the
set_speed service:
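$ rosservice call /head_pan_joint/set_speed 1.5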
Then try panning the head back to center at the new speed:
$ rostopic pub -1 /head_pan_joint/command std_msgs/Float64 -- 0.0
To relax a servo so that you can move it by hand, use the relax service:
$ rosservice call /head_pan_joint/relax
Now try rotating the head by hand. Note that relaxing a servo does not prevent it from
moving when it receives a new position command. For example, re-issue the last
panning command:
$ rostopic pub -1 /head_pan_joint/command std_msgs/Float64 -- 1.0
The position command will automatically re-enable the torque and move the servo.
Note also that the servo also remembers the last speed setting.
To fully disable a servo so that it relaxes and no longer responds to position commands,
use the enable service with the argument false:
$ rosservice call /head_pan_joint/enable false
You should now be able to turn the servo by hand. Next, try sending the servo a new
position:
This time the servo should not respond. To re-enable the servo, run the command:
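$ rosservice call /head_pan_joint/enable true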
Now the servo should move again. The enable service comes in handy when we want to
disable a servo for a period of time to let it cool down or otherwise take it offline for a
bit. In the meantime, other nodes can still publish position commands and they will
simply be ignored.
Let's take a quick look at the script since it nicely illustrates how to access the arbotix
joint services from within a ROS node.
1 #!/usr/bin/env python
2
3 import rospy
4 from std_srvs.srv import Empty
5 from arbotix_msgs.srv import SetSpeed
6
7 class Relax():
8 def __init__(self):
First we import the two service message types we will need. The arbotix relax
service uses an Empty message from the standard services package (std_srvs). The
set_speed service uses the SetSpeed message from the arbotix_msgs package.
The arbotix_driver node stores all of its parameters under the /arbotix
namespace on the ROS parameter server. The /arbotix/joints parameter contains
the list of joints (and their individual parameters). We will need this list so we can
iterate through all the joints to relax each one in turn.
Here we initialize a pair of lists to hold the relax and set_speed services for each
servo we will connect to shortly.
Now we iterate through the list of joints, connect to the relax and set_speed service
for each joint, then append a proxy to that service to the appropriate list. For example, if
the first joint in the list is called head_pan_joint, then the variable relax_service
is set to '/head_pan_joint/relax' which we know from the previous section is the
correct service name. We then use a call to wait_for_service() to make sure the
service is alive before appending the appropriate ServiceProxy object to the list of
relax services. Notice how we use the Empty service type we imported at the top of the
script to specify the type for the relax service. The same steps are then followed for the
set_speed service.
With all servos set to a safe speed, we then iterate through the relax services to turn off
the torque for each servo. Note how in Line 39 we call the relax() service proxy with
an empty argument since the relax service uses the Empty message type.
This script can be used as a kind of software "stop switch" since disabling the servos
will cause them to both relax and become unresponsive to any further motion requests.
• battery levels for both the robot and onboard computer (e.g. laptop battery)
• temperatures and loads on the joints (to avoid frying the servos)
It is up to the hardware driver for a given device to read the raw values of these
variables. But we need a way to represent the values using a standard message type so
that other ROS nodes can monitor them without having to worry about the underlying
driver and measurement units.
To this end, ROS provides the diagnostics stack for collecting and summarizing data
from sensors and actuators that can be used to flag potential problems. The key
components of the diagnostics stack include the DiagnosticStatus message type,
the diagnostic_updater API, the diagnostic_aggregator, and an analyzer
configuration file. The API enables us to convert raw sensor values to an array of
DiagnosticStatus fields which is then published as a DiagnosticArray message
on the /diagnostics topic. The diagnostic_aggregator categorizes and
organizes these messages into a hierarchical tree as specified by one or more analyzer
configuration files and publishes the result on the /diagnostics_agg topic. The
rqt_robot_monitor utility subscribes to the /diagnostics_agg topic and displays
the status of any monitored components in a color-coded graphical form that makes it
easy to spot a problem, or to drill down to a more detailed view of a given component.
You can also write your own nodes that can subscribe to the /diagnostics or
/diagnostics_agg topics to monitor the status of various components and act
appropriately when something is in trouble.
In some cases, the ROS drivers for the hardware you are using will already publish
diagnostics information on the /diagnostics topic; you can then simply subscribe to
the /diagnostics topic to make use of the information in your own nodes or use
rqt_robot_monitor to visually inspect diagnostic information. In other cases, you
might have to write your own driver for publishing the diagnostics information you
require. Or, if a driver already exists but the data is not yet in the form of a ROS
diagnostics message, you can write a simple node that converts the raw data into that
form. The DiagnosticStatus message is defined as follows:
byte OK=0
byte WARN=1
byte ERROR=2
byte level
string name
string message
string hardware_id
diagnostic_msgs/KeyValue[] values
string key
string value
As we can see above, the message first enumerates three general status levels: OK, WARN
and ERROR. The level field holds the current status of the device itself and will have a
value of 0, 1 or 2 to indicate a status of OK, WARN or ERROR, respectively. Next we see
three string fields for storing the name of the component, an arbitrary message, and
its hardware_id. The rest of the message is an array of key-value pairs that can
store the raw values returned by the hardware driver. For example, the pair
key=temperature, value=31 would mean that the temperature of the device is 31
degrees Celsius.
NOTE: For readers upgrading from ROS Hydro, a new status of STALE has been added
to the DiagnosticStatus message. This means that the message now has a new MD5
signature and will not be compatible with nodes running under ROS Hydro or earlier.
Therefore, be sure not to mix diagnostics nodes or robot monitors running on pre-Indigo
versus post-Indigo ROS distributions.
A given hardware driver will typically publish an array of such values using the
DiagnosticArray message type which looks like this:
std_msgs/Header header
diagnostic_msgs/DiagnosticStatus[] status
1 pub_rate: 1.0
2 analyzers:
3 joints:
4 type: GenericAnalyzer
5 path: 'Joints'
6 timeout: 5.0
7 contains: '_joint'
While it is possible to define custom analyzers for different devices, we will only need
to use the GenericAnalyzer type. You can find out more about analyzer
configuration parameters in the GenericAnalyzer Tutorial on the ROS Wiki. For now,
the parameters shown above will suffice. Let's break it down line by line.
Line 1 specifies the rate at which we want the aggregator to publish diagnostics
information. A rate of 1.0 Hz is probably fast enough for most devices such as servos,
batteries, drive motors, etc.
Line 2 specifies that all our parameters will fall under the ~analyzers namespace
which is required when using a GenericAnalyzer with the diagnostic aggregator. We
won't need to access this namespace directly but the line must appear in our config file.
Line 3 indicates that we will use the joints namespace under the ~analyzers
namespace to store the aggregated diagnostics for servo joints.
Line 4 indicates that we will use the GenericAnalyzer plugin type to analyze these
devices.
In Line 5, the path parameter defines the string that will be used to categorize these
devices when viewed in the rqt_robot_monitor GUI. In this case, we want all our
servos to be listed under the category 'Joints'.
If your USB2Dynamixel controller is connected to a different USB port, you can run the
launch file with the port argument. For example, if your controller is on USB port
/dev/ttyUSB1, use the command:
After a brief delay, you should see the rqt_robot_monitor window appear like the
image on the right. Here we see that the monitor has successfully detected our two head
servos and indicates that the overall status for each is OK. You can double-click on a
joint name to bring up a detailed status screen such as the one shown below for the head
pan joint:
This status screen indicates that the head pan servo is essentially centered (position =
-0.005), the temperature is a safe 33 degrees Celsius, the input voltage is 12V
(multiplied by 10 for some reason—it is not 120 volts!), the error rate is a perfect 0.0%
and the torque is currently off meaning the servo is relaxed and can be turned by hand.
Let us now take a closer look at the pi_robot_head_only.launch file to see how
the aggregator and rqt monitor nodes are run. Near the bottom of the launch file you
will find the following lines:
<node pkg="diagnostic_aggregator" type="aggregator_node"
name="diagnostic_aggregator" clear_params="true" unless="$(arg sim)">
<rosparam command="load" file="$(find
rbx2_dynamixels)/config/head_only_diagnostics.yaml" />
</node>
Here we run the diagnostic_aggregator node which loads the configuration file
head_only_diagnostics.yaml file found in the rbx2_dynamixels/config
directory. We'll look at that file shortly. We also delete any diagnostic aggregator
parameters that might be left over from previous runs by using the
clear_params="true" argument.
Next we fire up the rqt_robot_monitor node which generates the GUI shown earlier
for viewing servo status visually.
The head_only_diagnostics.yaml file defines the analyzers we want to run and
how the information should be organized. Here's what that file looks like:
1 pub_rate: 1.0
2 analyzers:
3 joints:
4 type: GenericAnalyzer
5 path: 'Joints'
6 timeout: 5.0
7 contains: '_joint'
To see the messages being published on the /diagnostics topic while you are
connected to the servos, open another terminal and run the command:
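$ rostopic echo /diagnostics | more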
The first part of the output should look something like this:
header:
seq: 2125
stamp:
secs: 1405518802
nsecs: 322613000
frame_id: ''
status:
-
level: 0
name: head_controller
message: OK
hardware_id: ''
values:
-
key: State
value: Not Active
-
level: 0
name: head_pan_joint
message: OK
hardware_id: ''
values:
-
key: Position
value: -1.01242732
-
First we see the header fields, then the beginning of the status array. The hyphen (-)
character indicates the beginning of an array entry. The first hyphen above indicates the
beginning of the first entry of the status array. This entry has array index 0 so this
first entry would be referred to as /diagnostics/status[0]. In this case, the entry
refers to the head controller that controls both servos. Since the head controller is a
software component, it does not have position or temperature so let's look at the next
array entry.
The second block above would have array index 1 so this entry would be accessed as
/diagnostics/status[1]. In this case, the entry refers to the head pan joint. The
indented hyphens below the values keyword indicate the entries in the key-value
array for this diagnostic element. So the Temperature value for this servo would be
indexed as /diagnostics/status[1].values[1]. Note how we use a period (.) to
indicate subfields of an array element. If you wanted to echo just the temperature of the
head pan servo, you could use the following command:
$ rostopic echo /diagnostics/status[1].values[1]
Returning now to the previous output above, notice that the first few status fields for the
head pan joint are:
status:
-
level: 0
name: head_pan_joint
message: OK
hardware_id: ''
The critical result here is the value for the level subfield which is 0 in this case. Recall
that a value of 0 maps to a diagnostics status of OK. The relevant component should be
identified by the name and sometimes the hardware_id fields. Here we see that this
array element refers to the head_pan_joint but that the hardware ID is not specified.
If you issue the command 'rostopic echo /diagnostics | more' again and keep
hitting the space bar to scroll through the messages, you will see status messages for
each servo. In particular, the message for the head tilt joint begins like this:
So we see that the value of the name field is simply the joint name we assigned to each
servo in our ArbotiX config file. Since each joint name ends with the string '_joint',
our analyzer configuration file, head_only_diagnostics.yaml, can use this string
for the value of the contains parameter to let the aggregator know that diagnostic
messages containing this string in their name field are the ones we're interested in.
6.3.3 Protecting servos by monitoring the /diagnostics topic
The rbx2_diagnostics package includes a script called monitor_dynamixels.py
in the nodes subdirectory. This node subscribes to the /diagnostics topic, extracts
the messages relevant to the servos and disables the servos if they appear to be in
trouble. The script then counts off a timer to let the servos cool down, then re-enables
them when their temperature is back to normal.
The script is fairly long so let's focus on the key lines.
4 from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus
5 from arbotix_msgs.srv import Relax, Enable
Near the top of the script, we import the diagnostics message types we will need as well
as the Relax and Enable services from the arbotix_msgs package.
The list of joints controlled by the arbotix_driver node is stored in the ROS
parameter /arbotix/joints. In fact, this parameter is the same as the joints
parameter we defined in our arbotix configuration file. So we store the joint list
(actually a dictionary) in the variable self.joints.
Next we initialize a timer to track how long we have disabled a servo as well as a couple
of boolean flags to indicate when the servos are disabled and when we are resting. Then
we call the connect_servos() function (described below) that takes care of
connecting to the various topics and services related to controlling the Dynamixels.
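Although the full initialization code is not reproduced here, a minimal sketch of these
steps looks something like the following (the flag names and the default rest interval
are assumptions; the parameter, topic and message names are those given above):

    self.joints = rospy.get_param('/arbotix/joints', dict())

    # How long (in seconds) to let the servos rest once disabled
    self.minimum_rest_interval = rospy.get_param('~minimum_rest_interval', 60)

    # Timer and flags used to track the rest period
    self.rest_timer = 0
    self.servos_disabled = False
    self.resting = False

    # Connect to the ArbotiX topics and services for each servo
    self.connect_servos()

    # Monitor the /diagnostics topic
    rospy.Subscriber('diagnostics', DiagnosticArray, self.get_diagnostics)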
The final line above subscribes to the /diagnostics topic and sets the callback
function to self.get_diagnostics() that we will look at next.
35 def get_diagnostics(self, msg):
36     if self.rest_timer != 0:
37         if rospy.Time.now() - self.rest_timer < rospy.Duration(self.minimum_rest_interval):
38             return
39         else:
40             self.resting = False
41             self.rest_timer = 0
In the first part of the callback function, we check the status of the rest_timer and if
we still have some time left on the clock we return immediately. Otherwise we reset the
timer to 0 and the resting flag to False, then continue on to the following lines.
45     for k in range(len(msg.status)):
46         # Check for the Dynamixel identifying string in the name field
47         if not '_joint' in msg.status[k].name:
48             # Skip other diagnostic messages
49             continue
Now that we know we are dealing with a joint message, we check the diagnostic status
level. If the status indicates ERROR, then this servo is overheated and we need to disable
it. We could disable just the one servo, but to simplify things for now, we disable all
servos when any one of them overheats.
If the servo is not hot enough for an ERROR status but it is warm enough for a WARN
status, we display a warning message but do not disable the servos. We also set a couple
of flags so we don't keep repeating the same warning message.
Finally, if no servos are overheating and we are not currently in a rest period, re-enable
the servos.
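A sketch of this check, placed inside the loop shown above and using the
DiagnosticStatus constants imported at the top of the script, might look like the
following (the log messages and the self.warned flag name are assumptions):

    if msg.status[k].level == DiagnosticStatus.ERROR:
        if not self.servos_disabled:
            rospy.logwarn("Servo overheating! Disabling all servos to cool down.")
            self.disable_servos()
            self.servos_disabled = True
            self.resting = True
            self.rest_timer = rospy.Time.now()
    elif msg.status[k].level == DiagnosticStatus.WARN:
        if not self.warned:
            rospy.logwarn("Servo " + msg.status[k].name + " is getting warm.")
            self.warned = True
    else:
        # No overheating: re-enable the servos if they were disabled
        # and the rest period has expired
        if self.servos_disabled and not self.resting:
            rospy.loginfo("Servo temperatures back to normal. Re-enabling servos.")
            self.enable_servos()
            self.servos_disabled = False
            self.warned = False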
Let's now take a look at the three functions we called earlier in the script:
connect_servos(), disable_servos() and enable_servos().
99  def disable_servos(self):
100     for joint in sorted(self.joints):
101         self.enable[joint](False)
102
103 def enable_servos(self):
104     for joint in sorted(self.joints):
105         self.enable[joint](True)
Once we have the relax and enable service proxies defined, it is straightforward to
disable or enable all servos by iterating through the list of joints and calling the
appropriate service. To relax and disable a servo, we call the enable service with the
request value set to False and the servo will relax and ignore any future position
requests. To re-enable a servo, we call the enable service with a request value of
True.
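The connect_servos() function itself is not listed above, but based on the
description it simply creates a pair of service proxies for each joint. A sketch,
assuming the ArbotiX convention of per-joint services named <joint>/relax and
<joint>/enable, would look like this:

    def connect_servos(self):
        self.relax = dict()
        self.enable = dict()

        for joint in sorted(self.joints):
            # Service to relax (remove torque from) a servo
            rospy.wait_for_service(joint + '/relax')
            self.relax[joint] = rospy.ServiceProxy(joint + '/relax', Relax)

            # Service to enable or disable a servo
            rospy.wait_for_service(joint + '/enable')
            self.enable[joint] = rospy.ServiceProxy(joint + '/enable', Enable)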
To try out the script, first make sure you are running the launch file for your
Dynamixels, like the pi_robot_head_only.launch file we used in the previous
section. Then run the monitor_dynamixels.launch file:

$ roslaunch rbx2_diagnostics monitor_dynamixels.launch
You can now run any other nodes such as head tracking that use the servos. The
monitor_dynamixels node will monitor the servo temperatures and if any servo gets
too hot, it will disable all servos until they are all back to a safe temperature. In the
meantime, other nodes can continue publishing servo commands but they will simply be
ignored until the Dynamixels are re-enabled.
The launch file specifies the rate at which to publish the diagnostics data (default 1 Hz)
and the acpi_path to the battery file (default /proc/acpi/battery/BAT1).
Assuming the launch file runs without error, take a look at the data being published on
the /diagnostics topic:
header:
As you can see, the laptop battery node publishes the raw voltage, charge, and capacity
values as key-value pairs. What is not apparent until you look carefully at the
laptop_battery.py script is that the overall diagnostics level will always be 0 (i.e.
OK) as long as the battery is detected and its status has been updated in a timely manner.
In other words, the status published here is not related to the battery's charge—it only
reflects whether or not the battery is present and can be monitored. A new script,
detailed in the next section, will publish a diagnostics message based on the charge
itself.
header:
As you can see from the output, the message includes the voltage, rate of charge or
discharge (depending on the sign), charge level, capacity, percent charge remaining,
charge_state (0 = discharging, 1 = charging, 2 = charged) and whether or not it is
present.
What we'd like to do is turn the percent charged value into a diagnostics level of OK,
WARN or ERROR and publish it on the /diagnostics topic as a standard ROS
diagnostics message. Our new script, monitor_laptop_charge.py,
subscribes to the /laptop_charge topic and converts the percent charge to one of the
three standard diagnostic status levels: OK, WARN or ERROR.
We will use the monitor_laptop_charge.launch file to bring up the node and run
the diagnostic aggregator with an appropriate configuration file. The launch file looks
like this:
1 <launch>
2 <node pkg="rbx2_diagnostics" type="monitor_laptop_charge.py"
name="monitor_laptop_charge" output="screen">
3 <param name="warn_percent" value="50" />
4 <param name="error_percent" value="20" />
5 </node>
6
7 <node pkg="diagnostic_aggregator" type="aggregator_node"
name="diagnostic_aggregator" clear_params="true">
8 <rosparam command="load" file="$(find
rbx2_diagnostics)/config/power.yaml" />
9 </node>
10
11 <node pkg="rqt_robot_monitor" type="rqt_robot_monitor"
name="rqt_robot_monitor" />
12 </launch>
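The aggregator's configuration file, power.yaml, is not reproduced in full here, but
based on the line-by-line description that follows it takes the same general form as the
head_only_diagnostics.yaml file we used for the servos, something like this (the
timeout value and the contains strings are illustrative assumptions):

1 pub_rate: 1.0
2 analyzers:
3     power:
4         type: GenericAnalyzer
5         path: 'Power System'
6         timeout: 5.0
7         contains: ['Battery', 'Charge']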
The publishing rate of 1.0 Hz specified by the pub_rate parameter seems appropriate
for a battery monitor since battery levels do not change very quickly.
Line 3 indicates that we will use the power namespace under the ~analyzers
namespace to store the aggregated diagnostics for battery power.
Line 4 indicates that we will use the GenericAnalyzer plugin type to analyze these
devices.
In Line 5, the path parameter defines the string that will be used to categorize these
devices when viewed in the rqt_robot_monitor GUI. In this case, we want our
power-related status messages to be listed under the category 'Power System'.
Line 6 defines a timeout for receiving an update from the device. If no data is received
from the device within this time frame, the device will be marked as "Stale" in the
rqt_robot_monitor.
Finally, in Line 7 we specify a list of strings that should appear somewhere in the
diagnostic array message to identify the type of device we want to monitor.
To try it out, first make sure you are on a laptop and then run the
laptop_battery.launch file if it is not already running:
After a brief delay, the rqt_robot_monitor window should appear, looking something
like this:
Under the Power System category, we have two status messages: one for the battery
overall (i.e. it can be detected) and one for the current charge level. The first status is
provided by the laptop_battery.py node whereas the second is published by our
new node monitor_laptop_charge.py. Let's take a look at that script now.
Link to source: monitor_laptop_charge.py
1  #!/usr/bin/env python
2
3  import rospy
4  from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue
5  from smart_battery_msgs.msg import SmartBatteryStatus
6
7  class MonitorLaptopCharge():
8      def __init__(self):
9          rospy.init_node("monitor_laptop_charge")
10
11         # Get the parameters for determining WARN and ERROR status levels
12         self.warn_percent = rospy.get_param("~warn_percent", 50)
13         self.error_percent = rospy.get_param("~error_percent", 20)
14
15         # A diagnostics publisher
16         self.diag_pub = rospy.Publisher('diagnostics', DiagnosticArray,
                                           queue_size=5)
17
19         rospy.Subscriber('laptop_charge', SmartBatteryStatus,
                            self.pub_diagnostics)
Here we create a subscriber to monitor the /laptop_charge topic. The callback
function self.pub_diagnostics (explained below) will convert the laptop charge to
a ROS diagnostics message.
Recall that the entries in the values array of a DiagnosticStatus message are stored as
KeyValue pairs where both members of the pair are strings. So here we append a
KeyValue pair where the key is the string 'percent_charge' and the value is the actual
percent charge converted to a string.
We then set the value of the level and message fields of the diag_msg variable
according to the thresholds we have set in the error_percent and warn_percent
parameters.
Finally, we append the diagnostic status message we have created to the diagnostics
array and publish the result.
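Putting these pieces together, a sketch of the pub_diagnostics() callback looks
something like the following, assuming the percentage field of the
SmartBatteryStatus message holds the percent charge (the status name and message
strings are illustrative assumptions):

    def pub_diagnostics(self, msg):
        # Create the container for the diagnostics message
        diag_arr = DiagnosticArray()
        diag_arr.header.stamp = rospy.Time.now()

        # The status message for the laptop charge
        diag_msg = DiagnosticStatus()
        diag_msg.name = "Laptop Charge"

        # Add the percent charge as a KeyValue pair (both fields must be strings)
        diag_msg.values.append(KeyValue('percent_charge', str(msg.percentage)))

        # Set the level and message based on our thresholds
        if msg.percentage < self.error_percent:
            diag_msg.level = DiagnosticStatus.ERROR
            diag_msg.message = "Battery needs recharging"
        elif msg.percentage < self.warn_percent:
            diag_msg.level = DiagnosticStatus.WARN
            diag_msg.message = "Battery is getting low"
        else:
            diag_msg.level = DiagnosticStatus.OK
            diag_msg.message = "Battery charge is OK"

        # Append the status message to the array and publish
        diag_arr.status.append(diag_msg)
        self.diag_pub.publish(diag_arr)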
After a short delay, a window similar to the following should appear:
In the image above, we have expanded the camera node and then selected the driver
category. In the right-hand panel we can then change a number of the driver's
parameters including the resolution using the pull-down menus labeled image_mode
and depth_mode.
As you can see, the file is fairly simple. First we set the PACKAGE variable to the
package we are in. Then we import the catkin dynamic reconfigure generator library.
This allows us to create a ParameterGenerator() object and add a couple of integer
parameters. Line 9 adds the battery_runtime parameter with a default value of 60
seconds and a range of 1-7200 seconds. Line 11 adds the new_battery_level
parameter in a similar fashion. Each parameter passed to the generator's add()
function is defined by a name followed by several arguments, the most important of
which are:
• type - the type of value stored, and can be any of int_t, double_t, str_t, or
bool_t
• level - a bitmask which will later be passed to the dynamic reconfigure callback.
When the callback is called all of the level values for parameters that have been
changed are ORed together and the resulting value is passed to the callback.
This can be used to organize parameters into groups such as those that control
sensors that must be shut down and restarted when a parameter is changed. See
this tutorial for an example.
• min - the min value (optional and does not apply to string and bool types)
• max - the max value (optional and does not apply to string and bool types)
NOTE: the name string cannot be the value "i", "state", or "name" as discussed in
this issue.
The final line of the cfg file must be written carefully:
13 exit(gen.generate(PACKAGE, "battery_simulator", "BatterySimulator"))
The first string inside quotations should be the same as the name of the node we are
configuring but without the file extension. The second string inside quotations must be
the name of the cfg file itself without the .cfg extension.
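Putting it all together, a BatterySimulator.cfg file along the lines described above
might look like this (a sketch; the description strings and the default and range for
new_battery_level are assumptions):

#!/usr/bin/env python

PACKAGE = "rbx2_utils"

from dynamic_reconfigure.parameter_generator_catkin import *

gen = ParameterGenerator()

gen.add("battery_runtime", int_t, 0, "Battery runtime in seconds", 60, 1, 7200)
gen.add("new_battery_level", int_t, 0, "Reset the battery level", 100, 0, 100)

exit(gen.generate(PACKAGE, "battery_simulator", "BatterySimulator"))

Don't forget to make the .cfg file executable: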
$ roscd rbx2_utils/cfg
$ chmod a+x BatterySimulator.cfg
Since much of the script uses already familiar concepts, let's focus on the lines related to
dynamic reconfigure.
7 import dynamic_reconfigure.server
8 from rbx2_utils.cfg import BatterySimulatorConfig
First we import the dynamic reconfigure server library as well as the config file for the
battery simulator itself. Notice that this BatterySimulatorConfig object was created
by catkin_make from our .cfg file.
Here we create the dynamic reconfigure server object that will service requests of type
BatterySimulatorConfig and we set the callback function to
self.dynamic_reconfigure_callback() that will be described next.
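The line that creates the server is not shown above, but it is a one-liner along these
lines (a sketch; the attribute name self.server is an assumption):

    self.server = dynamic_reconfigure.server.Server(BatterySimulatorConfig,
                                                    self.dynamic_reconfigure_callback)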
4 import dynamic_reconfigure.client
First we import the dynamic reconfigure client library rather than the server library.
12 client = dynamic_reconfigure.client.Client("battery_simulator",
timeout=30, config_callback=self.callback)
Here we create a client connection to the battery simulator node, set the timeout to 30
seconds, and assign a callback function defined below. Note that the callback function
will be called whenever the parameters of the battery simulator node are changed, even
by another node such as rqt_reconfigure. This allows the dyna_client node to
monitor parameter changes in the battery simulator node and adjust its own behavior if
desired.
13 r = rospy.Rate(0.1)
16 charge = True
17
18 while not rospy.is_shutdown():
19     if charge:
20         level = 100
21     else:
22         level = 0
23
24     charge = not charge
25
26     client.update_configuration({"new_battery_level": level})
27
28     r.sleep()
We use the charge variable to alternate between a battery level of 100 and 0. We
then enter the main loop and run the client.update_configuration() function to
set the battery simulator's new_battery_level parameter to either 100 or 0
depending on the value of charge. Note how the parameter name and value are
specified as a Python dictionary. If we wanted to change both the
new_battery_level and battery_runtime parameters at the same time, the update
line would look like this:
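client.update_configuration({"new_battery_level": level, "battery_runtime": 60})

(The battery_runtime value of 60 used here is just an illustration; any value within the
range defined in the .cfg file would work.)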
The dynamic reconfigure client callback function automatically receives the current
parameter configuration as an argument that we have named config above. This
argument is a dictionary of values and we can pull out the current value of any
parameter by name. In Line 30 above, we simply display the value of the
new_battery_level parameter in the terminal window.
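The callback itself is not reproduced above, but a sketch consistent with the description
would be:

    def callback(self, config):
        rospy.loginfo("new_battery_level is now: " + str(config['new_battery_level']))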
To see the script in action, first fire up the battery simulator node:
$ roslaunch rbx2_utils battery_simulator.launch
If you monitor the dyna_client.py terminal window, you will see the output from the
rospy.loginfo() command above every 10 seconds and the value of the
new_battery_level parameter will alternate between 100 and 0. To verify that the
battery simulator is actually responding to the new battery level, open another terminal
and monitor the battery level itself as published on the /battery_level topic:

$ rostopic echo /battery_level
Here you should see the battery level count down as usual starting at 100 but every 10
seconds it will jump either to 0 or back to 100.
The dump and load commands are useful if you have spent some time tweaking a set of
parameters using rqt_reconfigure or another method and want to save those changes
to a file that can be loaded at a later time. To save the current configuration of a node
called my_node to a file called my_node.yaml, use the command:

$ rosrun dynamic_reconfigure dynparam dump /my_node my_node.yaml
NOTE: There is currently a bug in ROS Indigo running on Ubuntu 14.04 that prevents
reloading parameters. You can follow the progress of this issue on Github.
Then to load this configuration at a later time, run the command:
$ rosrun dynamic_reconfigure dynparam load /my_node my_node.yaml
You can also load previously saved parameters by way of a launch file like this:
<launch>
  <node pkg="dynamic_reconfigure" type="dynparam" name="dynparam"
        args="load /my_node $(find my_package)/my_node.yaml" />
</launch>
The <node> line above will load the dynamic parameters from the file my_node.yaml
located in the package my_package. Of course, you can have other nodes launched in
the same launch file.
The mux node enables us to multiplex several input topics into one output topic. Only
one input at a time is passed through to the output. Services are provided for selecting
the active input as well as for adding and deleting input topics as we shall describe later.
Note that even without multiplexing, one can control a ROS base controller using
multiple inputs simultaneously. The problem is that all such inputs will typically
publish their Twist messages on the same /cmd_vel topic. The result is jerky motion
of the robot as it alternately responds to one Twist message or another from the various
sources. The mux utility gets around this problem by allowing only one input through at
any given time.
In the case of our base controller example, let's call the navigation input topic
move_base_cmd_vel and the joystick input topic joystick_cmd_vel. Let the
output topic be cmd_vel. The following command would then set up our multiplexer:
$ rosrun topic_tools mux cmd_vel move_base_cmd_vel joystick_cmd_vel \
mux:=mux_cmd_vel
Here we run the mux node from the topic_tools package with the first argument
specifying the output topic and the next two arguments specifying the two input topics.
The last argument above gives the mux node a unique name and determines the
namespace for the service and topic names as we will see later on.
The key line in the mux_fake_move_base.launch file simply remaps the cmd_vel
topic that move_base normally publishes on to the move_base_cmd_vel topic. That's all
there is to it. The mux_joystick_teleop.launch file uses a similar remapping:
<launch>
<!--- Teleop Joystick -->
<node pkg="turtlebot_teleop" type="turtlebot_teleop_joy"
name="turtlebot_teleop_joystick" output="screen">
<remap from="turtlebot_teleop_joystick/cmd_vel" to="joystick_cmd_vel" />
Once again, the highlighted line provides the remapping, this time from
turtlebot_teleop_joystick/cmd_vel that is used by default by the
turtlebot_teleop_joy node we are using to the topic joystick_cmd_vel that we
will use with our mux node. (We could have just used the topic name
turtlebot_teleop_joystick/cmd_vel directly with our mux node but we chose
joystick_cmd_vel to be more generic and so we need to remap it here.)
Assuming you have a joystick attached to your computer, bring up the mux joystick
node:
The first thing you will notice is that the fake TurtleBot does not respond to any control
input—neither the joystick nor by setting 2D Nav goals in RViz. That is because the
ArbotiX base controller we are using for the fake TurtleBot is listening on the
/cmd_vel topic for Twist messages but all of our control inputs are publishing on their
own topics. So now let's run the mux node described earlier:

$ rosrun topic_tools mux cmd_vel move_base_cmd_vel joystick_cmd_vel \
mux:=mux_cmd_vel

Since the first input topic listed on the mux command line above is
move_base_cmd_vel and since this is the topic used by our
mux_fake_move_base.launch file, you should now be able to control the fake
TurtleBot by setting 2D navigation goals in RViz with the mouse. However, at this
point, the joystick should not have any effect on the robot's motion. Let's turn to that
next.
The one we are likely to use the most is the select service for selecting which input
topic currently has control. Let's select the joystick as the control input. Open another
terminal and run the command:

$ rosservice call /mux_cmd_vel/select joystick_cmd_vel
Notice how the select service name reflects the name of the mux node we started earlier,
in this case mux_cmd_vel. The output on the terminal should be:
prev_topic: /move_base_cmd_vel
You should now be able to control the fake TurtleBot with the joystick. At the same
time, you will no longer be able to move the robot by setting navigation goals in RViz.
To go back to navigation control using RViz and move_base, run the service call:

$ rosservice call /mux_cmd_vel/select move_base_cmd_vel
Now give the robot a navigation goal using RViz and pick a goal location far away from
the robot so it takes some time to get there. As the robot is moving toward the goal,
override its motion with the joystick. You should find the robot reacts smoothly to your
joystick input. As soon as you release the dead man button on the joystick, the robot
should head back toward the navigation goal.
1  #!/usr/bin/env python
2
3  import rospy
4  from geometry_msgs.msg import Twist
5  from topic_tools.srv import MuxSelect
6  import thread
7
8  class SelectCmdVel:
9      def __init__(self):
10         rospy.init_node("select_cmd_vel")
11
12         # The rate at which to update the input selection
13         rate = rospy.get_param('~rate', 5)
14
15         # Convert to a ROS rate
Near the top of the script we first import the Twist message type and the MuxSelect
service type. We also import the thread library as we will need to lock our callbacks.
19 self.lock = thread.allocate_lock()
We use a handful of variables to set the initial control mode as well as to track the last
control mode used.
Here we subscribe to the two cmd_vel topics used with our mux node, one for the
joystick and one for move_base. We also set the callback function to point to functions
defined later in the script.
Before connecting to the mux select service, we wait to see if it is alive. Then we
assign a ServiceProxy to the variable mux_select to be used later.
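A sketch of these connections looks something like the following (the callback names
are assumptions; the service name is derived from the mux node name mux_cmd_vel
we used earlier):

    # Subscribe to the two input topics used with the mux node
    rospy.Subscriber('joystick_cmd_vel', Twist, self.joystick_cb)
    rospy.Subscriber('move_base_cmd_vel', Twist, self.move_base_cb)

    # Wait for the mux select service, then create a proxy for it
    rospy.wait_for_service('mux_cmd_vel/select', 60)
    mux_select = rospy.ServiceProxy('mux_cmd_vel/select', MuxSelect)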
Jumping down the script a bit, these are the two callback functions to handle messages
received on the joystick_cmd_vel topic and the move_base_cmd_vel topic. Since
ROS subscriber callbacks run in separate threads, we need to use the lock created earlier
before setting the self.joystick and self.move_base flags.
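For example, the joystick callback might look like this minimal sketch (the actual script
may do more bookkeeping):

    def joystick_cb(self, msg):
        self.lock.acquire()
        self.joystick = True
        self.lock.release()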
Finally, we jump back to the main loop where the actual input selection is done. Since
we want the joystick to take priority, we check it first. If the self.joystick flag is
True (which means joystick input was detected in the joystick_cb function described
above), we use the mux_select service proxy to set the input control topic to
joystick_cmd_vel. Otherwise, we give control to the move_base_cmd_vel topic.
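A sketch of that selection loop, consistent with the description above, might look like
this (the self.last_selection variable and the flag resetting details are assumptions):

    while not rospy.is_shutdown():
        if self.joystick:
            if self.last_selection != 'joystick_cmd_vel':
                mux_select('joystick_cmd_vel')
                self.last_selection = 'joystick_cmd_vel'
        else:
            if self.last_selection != 'move_base_cmd_vel':
                mux_select('move_base_cmd_vel')
                self.last_selection = 'move_base_cmd_vel'

        # Clear the joystick flag so it must be refreshed by the callback
        self.lock.acquire()
        self.joystick = False
        self.lock.release()

        r.sleep()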
publisher: "output/cmd_vel"
Here we see that our two input topics are listed under the subscribers section. Each
subscriber is given a descriptive name, the topic on which it is listening for commands,
a timeout (in seconds) and a priority where larger numbers have higher priority and
must be unique—i.e., two or more subscribers cannot have the same priority. The
output topic is specified by the publisher parameter and in this case is set to
output/cmd_vel. Note that since the output topic does not have a leading "/", it will
be prepended with the name of the nodelet namespace which is /cmd_vel_mux so that
the full output topic name is /cmd_vel_mux/output/cmd_vel.
To test it all out, terminate any nodes and launch files from the previous two sections
and start from scratch as follows.
Bring up the fake TurtleBot:
$ roslaunch rbx1_bringup fake_turtlebot.launch
Assuming you have a joystick attached to your computer, bring up the mux joystick
node:
We can now run the yocs_cmd_vel.launch file. This takes the place of both the mux
node we ran earlier and our select_cmd_vel.py node:
Here we see that the nodelet manager is started followed by the cmd_vel_mux nodelet.
subscribers:
  - name: "Joystick control"
    topic: "/joystick_cmd_vel"
    timeout: 1.0
    priority: 2
publisher: "output/cmd_vel"
Here we assume that the keyboard teleop node is publishing Twist commands on the
/keyboard_cmd_vel topic. We have also given it the lowest priority. This
configuration can be found in the file yocs_cmd_vel_with_keyboard.yaml in the
rbx2_nav/config directory. You can try it out by terminating
yocs_cmd_vel.launch file from the previous section, then running the following two
commands:
NOTE: Most keyboard teleop nodes (including the TurtleBot teleop node we are using
in the mux_keyboard_teleop.launch file) continue to publish an empty Twist
message even when all keys have been released. On the one hand, this is a safety
feature since it means the robot will stop if the user is not pressing a key. On the other
hand, it does not play well with the yocs_cmd_vel nodelet: unless the keyboard
input is given the lowest priority, its continuous stream of messages will always
override any inputs with lower priority.
A PoseStamped message can represent the location of a face or colored object, but it
can also track other points in space such as the location of the gripper on the robot's arm that
might not even be in view of the camera. (Recall that we always know the location of
the gripper or other parts of the robot relative to the camera by way of the robot's URDF
model and the robot_state_publisher.)
Next, fire up RViz using the fake_target.rviz config file that is configured to
display the 3D marker:
You should see a yellow sphere moving in RViz in front of the robot.
Back in RViz, you should see the head track the motion of the yellow sphere. You can
change the way the sphere moves using rqt_reconfigure:
For example, setting the move_target_rate to 1 and the speed to 2.0 will cause the
sphere to jump over larger gaps instead of moving smoothly in between.
The frame we want to track is specified in the launch file by the parameter
target_frame. By default, this frame is set to 'right_gripper_link'.
Back in RViz, you should see the head track rotate downward and to the right to look at
the right gripper.
Finally, bring up the arbotix_gui control so that we can manually move the arm:
$ arbotix_gui
Use the slider controls on the arbotix_gui to move the arm joints. As you move the
sliders, the arm should move in RViz and the head will move to track the position of the
right gripper.
1  #!/usr/bin/env python
2
3  import rospy
4  from geometry_msgs.msg import PoseStamped
5
6  class PubFrame():
7      def __init__(self):
8          rospy.init_node('pub_tf_frame')
9
10         # The rate at which we publish target messages
11         rate = rospy.get_param('~rate', 20)
12
13         # Convert the rate into a ROS rate
14         r = rospy.Rate(rate)
As you can see, the script is fairly straightforward. After reading in the publishing rate
parameter, we get the target frame as a parameter that defaults to
right_gripper_link and can be changed in the launch file. Then we define a
publisher for the target_pose topic. We then set the PoseStamped target to be the
origin of the target frame by giving it position coordinates (0, 0, 0) and orientation
values (0, 0, 0, 1). Finally, we enter a loop to publish this pose with a new timestamp on
each cycle.
You might be wondering: if we are publishing the same pose every time, how can the
head tracker know that the link is actually moving? The answer is that the head tracker
script uses the tf transformPoint() function to look up the latest transformation
between the target frame and the camera frame each time a new target message arrives,
so the transformed point moves whenever the link does.
Position tracking uses a fixed servo speed and aims the camera right at the target point.
On the other hand, recall that speed tracking aims the camera ahead of the target but
adjusts the speed to be proportional to the displacement of the target from the center of
the field of view. The use of position tracking with real hardware causes jerky motion
of the servos which is why we use speed tracking instead. However, for the fake robot,
position tracking works fine.
Near the top of the head tracking script, you will see the following lines:
# For fake mode we use the arbotix controller package and position tracking
By setting the parameter sim to True in the head_tracker.launch file, we can run
head tracking in the ArbotiX simulator using position tracking. When using real servos,
set this parameter to False and speed tracking will be used instead.
Further down in the head_tracker.py script, the controller type and tracker type
determine the callback we run on the PoseStamped target type:
else:
    rospy.Subscriber('target_topic', PoseStamped, self.update_joint_speeds)
First we acquire a lock at the beginning of the callback. This is to protect the variables
self.pan_speed, self.tilt_speed and self.target_visible which are also
modified in our main loop.
14 target = PointStamped()
15 target.header.frame_id = msg.header.frame_id
16 target.point = msg.pose.position
Before projecting the PoseStamped message into the camera plane, we extract the
position component since we do not require the orientation of the target for this process.
Here we use the transformPoint() function from the tf library to project the
PointStamped target onto the camera frame. (A good reference for other
transformation functions can be found on the ROS Wiki on the page Using Python tf.)
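A sketch of the projection step looks something like the following, assuming self.tf is
a tf.TransformListener and self.camera_link holds the name of the camera frame;
the sign conventions for pan and tilt depend on how the camera frame is oriented:

    # Wait for the transform, then project the target point into the camera frame
    self.tf.waitForTransform(self.camera_link, target.header.frame_id,
                             rospy.Time(), rospy.Duration(1.0))
    camera_target = self.tf.transformPoint(self.camera_link, target)

    # Displacements of the target relative to the center of view
    pan = -camera_target.point.y
    tilt = -camera_target.point.z

    # Distance to the target, used to normalize the displacements below
    distance = camera_target.point.x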
30 try:
31     pan /= distance
32     tilt /= distance
33 except:
34     pan /= 0.5
35     tilt /= 0.5
We need to convert the pan and tilt values to angular displacements (in radians) which
we can do by dividing each by the distance to the target. In case the distance is zero, we
set it to 0.5 meters which is roughly the minimum distance a Kinect or Xtion Pro can
reliably measure.
The rest of the script is essentially the same as the head tracking script we used in
Volume 1 and so will not be described further here.
In the next chapter we will make use of this new head tracker node to locate special
patterns called AR tags in 3-dimensional space.
The launch file assumes your USB2Dynamixel controller is on /dev/ttyUSB0 but you
can use the port parameter as shown above to change the port if your controller has a
different USB ID, e.g. /dev/ttyUSB1.
If all goes well, you should eventually see the rqt_robot_monitor window appear
displaying the status of the pan and tilt servos under the Joints category.
Next bring up the Dynamixel monitoring node we created in the chapter on Diagnostics.
Remember that this node helps prevent the servos from overheating by disabling all
servos if any one of them exceeds 60 degrees C.
Finally, start the head tracker node, but this time with the sim parameter set to false:
$ roslaunch rbx2_dynamixels head_tracker.launch sim:=false
As we did in the earlier demo, bring up rqt_reconfigure and try changing the
motion parameters of the target:
$ rosrun rqt_reconfigure rqt_reconfigure
To stop the tracking and center the servos, simply type Ctrl-C in the terminal window
you used to launch the head tracking node.
Next, Ctrl-C out of the pub3d_target.launch file if it is still running. The yellow
sphere should stop moving in RViz.
Next, bring up the OpenNI node for your depth camera. For example, for a Microsoft
Kinect:

$ roslaunch freenect_launch freenect.launch
Now bring up the nearest cloud node using the corresponding launch file:
This launch file runs a VoxelGrid filter and two PassThrough filters to restrict the
point cloud processing to a box in front of the camera that is 0.6 meters wide (about a
foot on either side), extends outward to 1.0 meters (about 3 feet), and runs 0.3 meters
(about 1 foot) below the camera and 0.15 meters (6 inches) above the camera. This
creates a kind of "focus of attention" for the robot so that we can track objects that are
relatively near the camera. The resulting point cloud is output on the topic called
/cloud_filtered.
Next, exit out of RViz if it is still running then fire it up again with the
track_pointcloud.rviz config file:
Finally, if the head tracker node is not already running, start it now with the sim
argument set to false:
$ roslaunch rbx2_dynamixels head_tracker.launch sim:=false
After a brief delay, the pan-and-tilt head should begin tracking the nearest object in front
of the camera. Keep in mind that neither the Kinect nor the Xtion Pro can measure depth
for objects closer than about 0.5 meters (a little less than 2 feet). Also remember that the
nearest_cloud.launch file filters out points that are further away than 1 meter.
(You can run rqt_reconfigure and adjust the parameters for the VoxelGrid and
PassThrough filters on the fly, or change them in the launch file.)
If you stand in front of the camera within the filter box and move left or right, the
camera should track your motion. If you move your body back beyond 1 meter from the
camera and then extend your hand into the filter box, the head and camera should track
your hand. If you are in doubt about what the camera is detecting, make sure you can
see RViz on your computer screen so you can monitor the point cloud image.
To access the points in the depth cloud, we need the message class from roslib and
the point_cloud2 library from the ROS sensor_msgs package. We also need the
PointCloud2 message type.
We will publish the COG of the nearest point cloud on the topic /target_pose as a
PoseStamped message. This way we can use our head tracker script out of the box to
track the cloud.
Here we begin the callback function that fires whenever we receive a point cloud
message on the cloud_filtered topic. First we initialize a list called points that we
will use to store the points in the point cloud. Then we loop through all the points in the
incoming message using the read_points() function from the point_cloud2 library
that we imported at the top of the script. The first three values of each point get
appended to the points list. These first three values are the x, y and z coordinates of
the point relative to the camera depth frame.
Next we convert the points list into a numpy array so that we can quickly compute the
COG using the numpy mean() function. It is also a good idea to test for NaNs (not a
number) that can sometimes occur in point clouds returned from depth cameras.
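A sketch of these steps, assuming numpy has been imported as np and the callback is
named get_nearest_cloud(), looks like this:

    def get_nearest_cloud(self, msg):
        points = list()

        # Collect the x, y, z coordinates of each point, skipping any NaNs
        for point in point_cloud2.read_points(msg, skip_nans=True):
            points.append(point[:3])

        if not points:
            return

        # Convert to a numpy array and compute the center of gravity
        points_arr = np.float32(points)
        cog = np.mean(points_arr, axis=0)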
To prepare the COG for publishing on the /target_pose topic, we first store the x, y
and z values as the components of a ROS Point object.
We then turn the COG into a PoseStamped message by adding a timestamp and the
frame_id of the incoming point cloud message as well as a unit orientation.
53 self.target_pub.publish(target)
It turns out that certain kinds of checkerboard-like patterns like those shown above are
particularly easy to recognize under different lighting conditions, viewing angles, and
distances. These patterns are called fiducial markers or AR tags. AR stands for
"augmented reality" because such tags can be added to objects and detected in video
recordings so that animations or other artificial labels (such as the name of the object)
can be overlaid on the video in place of the tag.
AR tags can also be used for robot localization by placing a number of tags on walls or
ceilings in such a way that the robot can always compute its position and orientation
relative to one or more tags.
$ cd ~/catkin_ws/src
$ git clone -b indigo-devel \
https://github.com/sniekum/ar_track_alvar.git
$ cd ~/catkin_ws
$ catkin_make
$ rospack profile
Now run the createMarker utility included in the ar_track_alvar package. The
createMarker command takes an ID argument from 0-65535 specifying the particular
pattern we want to generate. Let's start with ID 0:

$ rosrun ar_track_alvar createMarker 0
This command will create a file called MarkerData_0.png in the current directory.
You can view the image using a utility such as eog ("eye of Gnome"):
$ eog MarkerData_0.png
Usage:
/opt/ros/indigo/lib/ar_track_alvar/createMarker [options] argument
Of the options listed, you will probably make most use of the -s parameter to adjust the
size (default is 9 x 9 units). You can also use the -u parameter to change the
measurement units (the default is cm). So to make a marker with ID 7 that is 5cm x 5cm
big, use the command:
Next we need to print out the markers. You can do so either one at a time or bring all
three into an image editor like Gimp and print them on one sheet. You can find all three
on a single graphic in the file Markers_0_2.png in the rbx2_ar_tags/config
directory. Print this file so that we can test all three markers at once.
10.1.3 Launching the camera driver and ar_track_alvar node
The ar_track_alvar package can utilize depth data from an RGB-D camera to assist
in the detection and localization of a marker. We will therefore assume we have a
Kinect or Xtion Pro Live camera. Bring up the appropriate camera driver now:
For the Microsoft Kinect:
$ roslaunch freenect_launch freenect.launch
<launch>
<arg name="marker_size" default="6.6" />
<arg name="max_new_marker_error" default="0.08" />
<arg name="max_track_error" default="0.05" />
Note that we have set the marker size to 6.6 even though we created the markers with a
size parameter of 5cm. Depending on your printer, this size discrepancy is to be
expected but the number we want to use in the launch file is the size you actually
measure on the printout. In my case, the three markers were each 6.6cm on a side.
These messages indicate that the ar_track_alvar node has detected three markers
with ID's 0, 1 and 2. Try covering one of the markers with your hand and you should
see only two IDs displayed.
10.1.5 Understanding the /ar_pose_marker topic
The ar_track_alvar node publishes the ID and pose of each detected marker on the
/ar_pose_marker topic. Each message on this topic consists of a header and an array
of markers. Each marker has message type AlvarMarker whose fields can be seen by
issuing the command:
$ rosmsg show ar_track_alvar_msgs/AlvarMarker
As you can see, the message is quite straightforward including a header, the marker's ID,
a confidence value (not currently implemented), and the estimated pose of the marker
relative to the output frame listed in the top level header frame_id.
To see the marker poses currently being detected, hold the test printout up to the camera
and echo the /ar_pose_marker topic:
$ rostopic echo /ar_pose_marker
Remember that the message is an array of markers, so to view the data for a single
marker, use an array index like this:

$ rostopic echo /ar_pose_marker/markers[0]
(NOTE: This command will fail with an error unless you are already holding the test
markers in front of the camera. If this happens to you, make sure you hold the printout
in front of the camera before running the command. Note also that the array index does
not correspond to the marker ID. For example, index 0 might hold the data for marker
ID 2, so do not rely on the indexes to identify the markers.)
As you move the markers around in front of the camera, you should see the position and
orientation values change accordingly. Note that you can rotate the markers as much as
you like and even tilt the printout so that the camera views the tags at an oblique angle,
yet tracking should still be successful for most poses of the targets.
If you now hold the test printout in front of the camera, you should see all three markers
appear in RViz in the same relative position and orientation as they appear on the page.
Try rotating and moving the page and the markers should move accordingly. If you
cover a tag with your hand, it should disappear from RViz. The image below shows
the view when all three markers are detected. We have also displayed the reference axes
for one of the markers as computed by the ar_track_alvar node.
Next, open another terminal and monitor the messages on the /target_pose topic:

$ rostopic echo /target_pose
The ar_tags_cog.launch file runs the ar_tags_cog.py node and sets an optional
list of valid IDs:
<launch>
<node pkg="rbx2_ar_tags" name="ar_tags_cog" type="ar_tags_cog.py"
output="screen">
<rosparam>
tag_ids: [0, 1, 2]
</rosparam>
</node>
</launch>
In this case, we have indicated that we only want to track IDs 0, 1 and 2. Any other tag
IDs detected will be ignored by the ar_tags_cog.py node as we shall now see.
Link to source: ar_tags_cog.py
1  #!/usr/bin/env python
2
3  import rospy
4  from geometry_msgs.msg import Point, PoseStamped
5  from ar_track_alvar.msg import AlvarMarkers
6
7  class TagsCOG():
8      def __init__(self):
9          rospy.init_node("ar_tags_cog")
10
11         # Read in an optional list of valid tag ids
12         self.tag_ids = rospy.get_param('~tag_ids', None)
13
14         # Publish the COG on the /target_pose topic as a PoseStamped message
15         self.tag_pub = rospy.Publisher("target_pose", PoseStamped,
                                          queue_size=5)
16
17         rospy.Subscriber("ar_pose_marker", AlvarMarkers, self.get_tags)
18
19         rospy.loginfo("Publishing combined tag COG on topic /target_pose...")
20
21     def get_tags(self, msg):
22         # Initialize the COG as a PoseStamped message
23         tag_cog = PoseStamped()
24
25         # Get the number of markers
26         n = len(msg.markers)
27
28         # If no markers detected, just return
29         if n == 0:
30             return
31
By this point in your ROS programming, this script will probably appear fairly basic so
let's just touch on the key lines.
To improve reliability, we can read in an optional list of tag IDs. Any IDs not in the list
will be ignored when computing the COG. The valid tag IDs can be set in the launch
file as we saw earlier.
Next we define a publisher to output the COG of the tracked tags as a PoseStamped
message on the topic /target_pose.
Here we subscribe to the /ar_pose_marker topic and set the callback to the function
get_tags() described next.
The get_tags() callback begins by checking the number of markers detected. If there are none, we
simply return.
Here we iterate through the markers in the message array and sum up the x, y and z
coordinates of each marker. If we have defined a list of valid tag IDs, then any IDs not
in the list are skipped. If the tag list is empty (None), we include all detected tags. We
then compute the mean values for each component which become the position values for
the COG. We also assign the COG a unit orientation just to make it a proper pose
message. After adding a time stamp and frame_id, we publish the COG on the
/target_pose topic.
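Continuing from the listing above, a sketch of that averaging loop would look like this
(the variable names are assumptions):

    # Sum the position components of the valid markers
    n_valid = 0
    for tag in msg.markers:
        if self.tag_ids is not None and tag.id not in self.tag_ids:
            continue
        tag_cog.pose.position.x += tag.pose.pose.position.x
        tag_cog.pose.position.y += tag.pose.pose.position.y
        tag_cog.pose.position.z += tag.pose.pose.position.z
        n_valid += 1

    if n_valid == 0:
        return

    # Divide by the number of valid tags to get the mean position
    tag_cog.pose.position.x /= n_valid
    tag_cog.pose.position.y /= n_valid
    tag_cog.pose.position.z /= n_valid

    # Give the COG a unit orientation to make it a proper pose
    tag_cog.pose.orientation.w = 1.0

    # Add a time stamp and frame_id, then publish
    tag_cog.header.stamp = rospy.Time.now()
    tag_cog.header.frame_id = msg.markers[0].header.frame_id
    self.tag_pub.publish(tag_cog)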
Now run the ar_tags_cog launch file so that the COG of any detected tags will be
published on the /target_pose topic:

$ roslaunch rbx2_ar_tags ar_tags_cog.launch
Next, launch the startup file for your robot and servos. We'll use the head-only version
of Pi Robot as an example:
$ roslaunch rbx2_bringup pi_robot_head_only.launch sim:=false
Now bring up the 3D head tracker node we developed in the last chapter:

$ roslaunch rbx2_dynamixels head_tracker.launch sim:=false
If you move the AR tag(s) in front of the camera, the head should track the COG of any
detected markers.
Notice how we set the marker_size parameter to 17.5 cm in line 2. We also set a fairly
high max_track_error value of 0.5 in line 4. This value was found by trial and error
to reduce the number of frames where the ar_track_alvar node loses the target. Of
course, you can try different values with your setup and rqt_reconfigure.
Our callback function set_cmd_vel() computes the Twist command we want to send
to the robot to keep it facing the marker and to stay within a given distance of it. Let's
start with the first few lines:
78 try:
79     marker = msg.markers[0]
80     if not self.target_visible:
81         rospy.loginfo("FOLLOWER is Tracking Target!")
82         self.target_visible = True
83 except:
84     # If target is lost, stop the robot by slowing it incrementally
85     self.move_cmd.linear.x /= 1.5
86     self.move_cmd.angular.z /= 1.5
87
88     if self.target_visible:
89         rospy.loginfo("FOLLOWER LOST Target!")
90         self.target_visible = False
91
92     return
Assuming we have a marker, we get the lateral offset from the positional y component
of the marker pose and the fore/aft offset from the x component. How do we know
these are the right components? Recall that we set the output_frame for the
ar_track_alvar node to be /base_footprint. This means that the marker poses
as published in the AlvarMarkers messages and received by our callback function are
already defined in our base frame. Furthermore, our base frame is oriented with the x-
axis pointing forward and the y-axis pointing left.
With the offsets in hand, it is a simple matter to set the rotational and linear components
of the Twist command appropriately as is done in Lines 101-116. As with our earlier
follower programs, we first check that the offset exceeds a minimal threshold, then we
set the robot speeds proportional to the offsets while respecting the maximum and
minimum speeds set in the launch file.
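A sketch of that logic, assuming from math import copysign and parameter names
similar to those used in the earlier follower scripts, looks like this:

    # Offsets of the marker in the base frame
    target_offset_y = marker.pose.pose.position.y
    target_offset_x = marker.pose.pose.position.x

    # Rotate the robot to keep the marker centered (y is the lateral offset)
    if abs(target_offset_y) > self.y_threshold:
        speed = self.y_scale * target_offset_y
        self.move_cmd.angular.z = copysign(max(self.min_angular_speed,
                                           min(self.max_angular_speed, abs(speed))), speed)
    else:
        self.move_cmd.angular.z = 0.0

    # Drive forward or backward to stay at the goal distance from the marker
    if abs(target_offset_x - self.goal_x) > self.x_threshold:
        speed = self.x_scale * (target_offset_x - self.goal_x)
        self.move_cmd.linear.x = copysign(min(self.max_linear_speed,
                                          max(self.min_linear_speed, abs(speed))), speed)
    else:
        self.move_cmd.linear.x = 0.0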
10.4.1 Running the AR follower script on a TurtleBot
If you have a TurtleBot, you can use the following commands to test the
ar_follower.py script. You can use similar commands with other robots by simply
changing the launch file used to bring up your robot driver.
Start by launching the TurtleBot's startup files. Note that we are using the launch file
from the Volume 1 package and this command must be run on the TurtleBot's laptop:
$ roslaunch rbx1_bringup turtlebot_minimal_create.launch
Next, bring up the OpenNI node for the robot's depth camera. This command must also
be run on the robot's computer.
NOTE: In the commands above, we set the publish_tf argument to false since the
TurtleBot's launch files and URDF model already take care of publishing the camera
frames.
Next, bring up the ar_track_alvar node for large markers. You can run this on
either the robot's computer or your desktop assuming the two are networked
appropriately:
This command can also be run on either the robot or a networked desktop.
Now hold the large marker printout in front of the robot's camera a few feet away and
start moving backward, forward, left and right. Once the ar_track_alvar node has
locked on to the marker, the robot should start to follow your movements.
• performing a pick-and-place operation where the arm uses its gripper to grasp an
object and move it to another location
But before we get to these tasks, we have to lay some groundwork.
• revolute - a joint like a servo that rotates along an axis and has a limited range
specified by the upper and lower limits.
• continuous – a joint like a drive wheel that rotates around an axis and has no
upper or lower limits
• prismatic - a sliding joint like a linear actuator that has a limited range
specified by the upper and lower limits
• fixed - not really a joint because it cannot move. All degrees of freedom are
locked.
• floating - this joint type allows motion along all 6 degrees of freedom.
• planar - this joint type allows motion in a plane perpendicular to the axis
Most of the joints we will deal with are revolute like a regular servo. We will therefore
typically use the terms "joint angle" and "joint position" interchangeably. If we are
dealing specifically with a prismatic joint like a telescoping torso, we will use the term
"position" instead of "angle".
This plot was generated by running rqt_plot on the relevant joint positions while the
arm was moving. The actual command used was:
$ rqt_plot /joint_states/position[2], /joint_states/position[3],
/joint_states/position[4], /joint_states/position[5],
/joint_states/position[6], /joint_states/position[8]
We can see the various fields in a ROS JointTrajectory message using the command:

$ rosmsg show trajectory_msgs/JointTrajectory
[trajectory_msgs/JointTrajectory]:
std_msgs/Header header
uint32 seq
Here we see that a joint trajectory consists of a standard ROS header together with an
array of joint names and an array of trajectory points. A trajectory point consists of an
array of positions, velocities, accelerations and efforts that describe the motion of the
joints at that point. Note that typical trajectories will have all zeros for the joint
velocities for the first and last point meaning that the arm starts and finishes at rest.
Each trajectory point also includes a time_from_start that specifies the number of
seconds after the time stamp in the header when we expect the trajectory to pass through
this point.
The simplest joint trajectory would contain a single point specifying how we want the
joint configuration to look at a given time from start. But most trajectories will include
many points that describe the arm configuration at various points over time. It is the job
of the joint action trajectory controller to interpolate these points (using splines for
example) to create a smooth trajectory that can be implemented on the underlying
hardware. Both the arbotix and dynamixel_motor packages provide a joint
trajectory action controller that works with Dynamixel servos. So to move a multi-
jointed Dynamixel arm a certain way over time, we can send the desired joint trajectory
to one of these controllers which will then compute the required interpolation between
points and control the servos to move along the requested trajectory. We will see how
this works later in the chapter.
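As a concrete illustration, here is how a minimal single-point trajectory for two of Pi
Robot's arm joints could be constructed in Python (a sketch; the target positions and
timing are arbitrary):

import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

traj = JointTrajectory()
traj.joint_names = ['right_arm_shoulder_pan_joint', 'right_arm_shoulder_lift_joint']

point = JointTrajectoryPoint()
point.positions = [0.5, -1.0]                 # target joint angles in radians
point.velocities = [0.0, 0.0]                 # start and finish at rest
point.time_from_start = rospy.Duration(3.0)   # reach the goal 3 seconds after the header stamp

traj.points.append(point)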
You might be wondering how on earth can we figure out the positions, velocities,
accelerations and efforts of every joint along an entire trajectory? And the answer is that
we almost never have to do this ourselves. Instead, we will normally specify the much
simpler goal of moving the end-effector to a specific position and orientation in space.
We will then rely on MoveIt! to compute a joint trajectory that will move the end-
effector into the desired pose, all while avoiding obstacles and satisfying any other
constraints specific to the task. This trajectory will then be sent to the joint trajectory
action controller that will command the servos to execute the planned trajectory on the
actual arm.
In the first three images, the arm has two revolute joints and therefore two degrees of
freedom. The location of the goal also requires two values, namely its x and y
coordinates in the plane. In situations like (a), the arm generally has two inverse
kinematic solutions (sometimes called "elbow up" and "elbow down"). However, if a
joint has a limited range of motion as shown in (b), then only one of these solutions may
actually be realizable by the arm. If the goal is out of reach as shown in (c), then there
are no solutions. In (d) we add a third joint so now the arm has an extra degree of
freedom. As you might expect, this opens up additional solutions.
Things get even trickier when we add constraints such as the presence of obstacles. If a
potential solution would cause any part of the arm to collide with the obstacle, then that
solution must be ruled out.
• URDF → SRDF: The URDF model of the robot is used to create an enhanced
Semantic Robot Description File that includes tags defining move groups, end
effectors, and self-collision information. The SRDF file is created automatically
from the URDF file by running the MoveIt! Setup Assistant.
• The Setup Assistant also creates a number of launch files for running key
planning and motion control modules including:
◦ the move_group node that provides ROS topics, services and actions used
for kinematics, planning and execution, pick and place, state validation and
planning scene queries and updates
If you also want to experiment with the MoveIt! configuration for the Willow Garage
PR2, run the command:
If you are using your own robot model located in a different package and/or directory,
alter the commands accordingly. Also, the name of the resulting URDF file can be
anything you like but you'll need to remember it when running the MoveIt! Setup
Assistant.
To double-check the validity of your URDF model, run the ROS check_urdf utility:
$ check_urdf pi_robot.urdf | less
Note how the robot name is output correctly as well as the root link which is
base_footprint in our case. The rest of the output shows how all the robot's links
are connected as a tree off the root link.
NOTE: If you need to change your robot model at a later time, do not edit the static
URDF file. Instead, edit the original URDF/Xacro file(s) and then run the xacro utility
again to regenerate the static file.
• To create a new MoveIt configuration package, click the button labeled Create
New MoveIt Configuration Package.
• Under the section labeled "Load a URDF or Collada Robot Model", click the
Browse button and navigate to the URDF file we created above:
rbx2_description/urdf/pi_robot/pi_robot.urdf
• If all goes well, you should see an image of Pi Robot in the right hand panel.
(The black background makes certain parts of the model a little hard to see.)
You can rotate, move and zoom the model using your mouse just like in RViz.
$ export LIBGL_ALWAYS_SOFTWARE=1
• Click on the large button labeled Regenerate Default Collision Matrix. You
can generally leave the sampling density set to the default value of 10000.
The result should be a list of link pairs that cannot collide with one another because of
the geometric constraints of the robot.
We will add the arm's joints to the planning group as a serial kinematic chain as this is
required when using the KDL plugin. Therefore, click the Add Kin. Chain button.
Click the link labeled Expand All near the bottom of the window to see the entire link
tree of the robot as shown below. Then select the right_arm_base_link in the tree
and click the Choose Selected button next to the Base Link text box. The result should
look like this:
Finally, click the Save button. The final screen should look like this:
This time, click the Add Links button. The screen should look like the following:
• enter a name for the pose in the Pose Name text box.
• use the slider controls to position the arm the way you want for the given pose
The image below shows the screen for the "resting" pose of the right arm:
• navigate to your catkin/src directory and create a new folder to hold your
robot's MoveIt configuration files. Most people choose a folder name of the
form robot_name_moveit_config. So for Pi Robot, we use
pi_robot_moveit_config. This will also become the package name for your
robot's MoveIt! configuration.
To exit the assistant, click the button labeled Exit Setup Assistant.
Finally, to ensure that ROS can find your new package, run the following command:
$ rospack profile
We only need to run this command this one time. New terminals will automatically pick
up your new package.
The SRDF file is plain text and easy to read so you can bring it up in a text editor to
view its contents. The key tags and definitions are as follows:
• A <group> element for each planning group listing the chains, joints or links
making up that group. For example, the <group> element for Pi Robot's right arm
looks like this:
<group name="right_arm">
<chain base_link="right_arm_base_link" tip_link="right_gripper_link" />
</group>
• A <group_state> element that stores the joint positions for any named poses you
created. The element that stores the "wave" pose for Pi Robot's arm looks like this:
<group_state name="wave" group="right_arm">
<joint name="right_arm_elbow_flex_joint" value="0.1416" />
<joint name="right_arm_forearm_flex_joint" value="0.2629" />
<joint name="right_arm_shoulder_lift_joint" value="-2.6168" />
<joint name="right_arm_shoulder_pan_joint" value="-1.0315" />
<joint name="right_arm_shoulder_roll_joint" value="-1.0288" />
<joint name="right_arm_wrist_flex_joint" value="-0.2116" />
</group_state>
• An <end_effector> element indicating the group and parent link for the end-
effector. Here is the element for Pi Robot's right gripper:
<end_effector name="right_end_effector" parent_link="right_gripper_link"
group="right_gripper" />
By storing this information in a static file, the MoveIt! motion planners can save a
significant amount of time by not checking these link pairs for collisions whenever a
new arm motion is planned.
- name: fake_right_gripper_controller
  joints:
    - right_gripper_finger_joint
Note that we have a controller for each planning group which in this case includes the
right arm and the gripper. We will learn more about this file later in the chapter when
dealing with real controllers so for now it is enough to notice that each fake controller
simply lists the set of joints it controls.
joint_limits:
  right_arm_shoulder_pan_joint:
    has_velocity_limits: true
    max_velocity: 3.14
    has_acceleration_limits: false
    max_acceleration: 0
  right_arm_shoulder_lift_joint:
    has_velocity_limits: true
    max_velocity: 3.14
    has_acceleration_limits: false
    max_acceleration: 0
If you find that your robot's arm does not move quickly enough, try changing the
parameters in this file. We will see later how we can always slow down the motion even
without making changes here. But to get faster motion, these are the parameters to
tweak. Start by setting the has_acceleration_limits parameters to true and
setting the max_acceleration value to something greater than 0. Start with low
values like something in between 1.0 and 3.0. After making the changes, you will need to
restart your MoveIt! launch files (such as demo.launch) so that the new limits are loaded.
The syntax of the kinematics.yaml file is fairly simple and assigns an IK plugin plus
a few parameters for each planning group created when running the Setup Assistant.
For a single-arm robot like Pi Robot, the file would look like this:
right_arm:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_attempts: 3
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05
Recall that the name of the planning group (here called right_arm) as well as the
kinematics solver and its parameters were chosen when running the Setup Assistant.
There is usually no need to change these and if you do, it is usually better to re-run the
Setup Assistant rather than editing the file manually.
Here is another example kinematics.yaml file, this time for the PR2 which has two
arms and a telescoping torso:
right_arm:
  kinematics_solver: pr2_arm_kinematics/PR2ArmKinematicsPlugin
  kinematics_solver_attempts: 3
  kinematics_solver_search_resolution: 0.001
  kinematics_solver_timeout: 0.05
right_arm_and_torso:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_attempts: 3
  kinematics_solver_search_resolution: 0.001
  kinematics_solver_timeout: 0.05
left_arm:
In this case, the left and right arms use a custom kinematics solver created specifically
for the PR2 that is much faster than the stock KDL solver. Two additional planning
groups are also defined which include each arm and the torso. These planning groups
use the KDL solver.
From the ROS Wiki we learn the following details regarding the parameters used above:
• kinematics_solver:
kdl_kinematics_plugin/KDLKinematicsPlugin: The KDL kinematics
plugin wraps around the numerical inverse kinematics solver provided by
the Orocos KDL package.
◦ It obeys joint limits specified in the URDF (and will use the safety limits if
they are specified in the URDF).
NOTE: If the demo launch file aborts, type Ctrl-C and try launching again. If it
continues to crash, the culprit is most likely RViz. Turning off hardware acceleration
for RViz usually helps. This can be done by issuing the following command before
launching the demo.launch file:
$ export LIBGL_ALWAYS_SOFTWARE=1
If this solves the crashing problem, it might be a good idea to put the above command at
the end of your ~/.bashrc file so it is run automatically in new terminal sessions.
Once RViz successfully appears, you should see Pi Robot in the right panel and the
Motion Planning panel on the left as shown below:
• Next, click on the Planning tab in the Motion Planning panel and you should see the
following screen:
• Click on Select Start State under the Query column. Most of the time, you will
simply leave this selected on <current> but you can also use the pull down menu
to select a different starting configuration of the arm. If you defined some named
poses for the arm when running the Setup Assistant, they will be listed in the menu.
• Once you have your starting configuration set, click on Select Goal State under the
Query column. Use the pull down menu to select the goal configuration of the arm,
then click the Update button. The goal configuration will appear as an orange arm
in RViz. The image below indicates that we have selected the "straight_forward"
configuration:
Alternatively, you can use the interactive markers on the orange colored arm to set
the goal position. As noted earlier, when using the interactive markers, you do not
click the Update button when you are finished positioning the arm. (You can also
select a pre-defined pose, click Update, then tweak the pose with the interactive
markers.)
• It can be instructive to choose a number of <random valid> states for the Goal
State which will give you an idea of the range and types of trajectories to expect
from your arm.
To exit demo mode at any time, type Ctrl-C in the terminal window in which you ran
the demo.launch file.
11.13.1 Exploring additional features of the Motion Planning plugin
The RViz Motion Planning plugin has additional features that we do not have space to
explore in this Volume. For example, the Manipulation tab is designed to be connected
to an object detection pipeline so that one can run a "pick-and-place" operation from
within the GUI. While we won't attempt to set up the GUI in this way, we will cover the
basics of pick-and-place later in the chapter using virtual target objects and then again in
the next chapter with a real robot and real objects.
The Scene Objects tab allows you to add mesh objects to the scene from local files or
URLs and then store the result for later use. One place to look for compatible mesh
files is the Gazebo database at http://gazebosim.org/models. For example, if you click
on the model directory for bowl and then on the meshes subdirectory, you will find the
mesh file bowl.dae. Right-click on this file and select "copy link address" or
equivalent. Then use the Import URL button in the Scene Objects tab and paste the
URL into the pop-up box. The bowl will appear in RViz, probably shown in
green and rolled over on its side somewhere near or intersecting with the robot's base.
You can move it around with the interactive markers or the Position and Rotation
controls on the Scene Objects panel. You can also change the bowl's size with the
Scale slider. Add other objects in the same way and then click on the Export As Text
button to save the scene to a text file. You can import this scene later using the Import
From Text button.
The next screen shot shows a scene created by adding three meshes from the
gazebosim.org site including a box, a bowl and a hammer. The objects were then scaled
and moved into position using the Scale slider and interactive markers on each object.
Then click the button labeled Edit Existing MoveIt Configuration Package. A file
dialog box will open. Navigate to your MoveIt! configuration directory, and click the
Open button. Then click the Load Files button. Your current configuration will be
loaded and you can make any changes you like. When you are finished, click the
Configuration Files tab on the left and then click the Generate Package button to save
your changes.
NOTE: Don't forget to recreate the static version of your robot's URDF file before re-
running the Setup Assistant if you make changes to any of your model's component
Xacro/URDF files.
Next, un-check the boxes beside Query Start State and Query Goal State as follows:
All commands in MoveIt! operate on a move group as defined in the configuration files
for the robot in use. In the case of Pi Robot, first select the right_arm group with the
use command:
> use right_arm
After a brief delay, you should see an output similar to the following and a new
command prompt for the right_arm group:
With the robot visible in RViz, let's begin by moving the arm to a random set of joint
angles using the go rand command:
MoveIt! will check the validity of the selected angles against the kinematic model of the
arm. The target pose will be considered invalid if (1) the selected joint angles fall
outside the joint limits set in the model, (2) the arm would collide with another part of
the robot such as the head or torso, or (3) the motion would cause the arm to strike any
nearby objects. (More on this later.) If any of these checks fail, a message similar to
the following will appear:
On the other hand, if the joint goal represents a reachable configuration of the arm,
MoveIt! will plan a trajectory from the starting configuration to the target configuration
and pass that trajectory on to the arm's trajectory controller. The controller will then
move the arm smoothly into the new position. In this case, you will see an output like
the following:
right_arm> go rand
Moved to random target [-1.33530406365 0.898460602916 0.184664419448
1.20867797686 0.171914381348 1.08589163089]
right_arm>
Try issuing the go rand command a few times to see how the arm responds to different
target configurations.
Let's explore a few of the other commands. To see the current joint values and the pose
of the end-effector, use the current command:
right_arm> current
At the top of the output we see a list of the joint angles in radians. The joints are listed
in the same order as they are linked in the arm, starting at the shoulder and working
toward the gripper. Next comes the pose (position and orientation) of the end-effector
(gripper) relative to the /odom reference frame. At the end of the output is the
orientation of the end-effector given in terms of roll, pitch and yaw (RPY) around the x,
y and z axes respectively.
To record the current configuration into a variable called 'c', use the command:
right_arm> record c
This stores the current joint values in the variable c. To create a goal configuration from
this variable, enter:
right_arm> goal = c
The goal joint values can now be tweaked using the appropriate index. For example, to
set the second joint value (right_arm_shoulder_lift_joint) to 0.2, use the command:
right_arm> goal[1] = 0.2
Finally, to move the arm to the new goal configuration, make sure the robot is visible in
RViz, then send the goal to the arm using the command:
right_arm> go goal
Remember that the new goal may fail if it does not pass all the checks described earlier.
If you defined one or more named poses in the Setup Assistant, you can use them in the
Commander. For Pi Robot, you can send the "resting", "straight_forward" or "wave"
pose:
right_arm> go resting
right_arm> go straight_forward
right_arm> go wave
To see the other commands supported by the MoveIt! Commander, type help at the
command prompt:
right_arm> help
Now that we know that MoveIt! is talking to the arm, it's time to make it do something
useful.
One option is to use MoveIt! in demo mode together with the command line interface.
Using the interactive markers in RViz, we can position the arm or end-effector the way
we want, then read off the resulting joint angles or end-effector pose using the current
command in the MoveIt! command line interface as we did earlier.
For example, suppose we have positioned the arm using the interactive markers in RViz
as shown below and we want to get either the joint angles or end-effector pose so we can
use it later as a goal in one of our programs.
right_arm> current
The joint angles are in the order they appear in the kinematic chain beginning with the
shoulder pan joint, and the pose refers to the pose of the end-effector relative to the
/odom frame. (This also works for the /base_footprint frame because of the planar
virtual joint connecting the /odom frame and the /base_footprint frame that we
defined in the MoveIt! Setup Assistant.) These numbers can be copied into your script and used
as target joint angles or a target pose for the end-effector.
You can also determine the end-effector pose directly in RViz by expanding the Links
tree under the Scene Robot and then expanding the right_gripper link to reveal its
current position and orientation. However, using the technique above yields both the
joint angles and the end-effector pose and does not require all that clicking in RViz.
Finally, you can use the handy script called get_arm_pose.py found in the
rbx2_arm_nav/scripts directory. The script uses the MoveIt! API so the robot's
move_group.launch file must already be running. Run the script with the command:
$ rosrun rbx2_arm_nav get_arm_pose.py
The output includes the list of active joints and their current positions as well as the pose
of the end-effector. These values are in a format that allows them to be copy-and-pasted
into a script to be used as target values.
If you still have the MoveIt! demo.launch file running from an earlier session,
terminate it now. Then begin by launching the one-arm version of Pi Robot in fake
mode (sim:=true):
$ roslaunch rbx2_bringup pi_robot_with_gripper.launch sim:=true
Next, bring up RViz with the arm_sim.rviz config file from the rbx2_arm_nav
package:
$ rosrun rviz rviz -d `rospack find rbx2_arm_nav`/config/arm_sim.rviz
Now run the trajectory_demo.py script. Note that we are not using MoveIt! at all at
this point. The script simply sends a pair of joint trajectory requests (one for the arm
and one for the head) to the trajectory action controllers that were started by our launch
file above. If the controllers succeed in implementing the requested trajectories, the arm
and head should move upward and to the right smoothly and at the same time:
$ rosrun rbx2_arm_nav trajectory_demo.py _reset:=false _sync:=true
To move the arm and head back to their original configurations, run the command with
the reset parameter set to true:
$ rosrun rbx2_arm_nav trajectory_demo.py _reset:=true
To move the arm first and then the head, run the script with the sync parameter set to
false:
$ rosrun rbx2_arm_nav trajectory_demo.py _reset:=false _sync:=false
First we need to import a number of message and action types for use with trajectories.
We have already met the FollowJointTrajectoryAction action and
FollowJointTrajectoryGoal message earlier in the chapter. You will recall that
the key component of these objects is the specification of a joint trajectory in terms of
joint positions, velocities, accelerations and efforts. We therefore also need to import
the JointTrajectory and JointTrajectoryPoint message types.
The reset parameter allows us to move the arm and head back to their starting
positions. The sync parameter determines whether we run the arm and head trajectories
simultaneously or one after the other.
Next we create two lists containing the joint names for the arm and head respectively.
We will need these joint names when specifying a trajectory goal.
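A sketch of these lines might look as follows, using Pi Robot's joint names; the parameter names correspond to the _reset and _sync arguments we passed on the command line:

# Read the run-time parameters (set on the command line with _reset:= and _sync:=)
reset = rospy.get_param('~reset', False)
sync = rospy.get_param('~sync', True)

# Joint names for the arm, in the same order as the kinematic chain
arm_joints = ['right_arm_shoulder_pan_joint',
              'right_arm_shoulder_lift_joint',
              'right_arm_shoulder_roll_joint',
              'right_arm_elbow_flex_joint',
              'right_arm_forearm_flex_joint',
              'right_arm_wrist_flex_joint']

# Joint names for the pan-and-tilt head
head_joints = ['head_pan_joint', 'head_tilt_joint']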
31 if reset:
32 # Set the arm back to the resting position
33 arm_goal = [0, 0, 0, 0, 0, 0]
34
Here we define two goal configurations in terms of joint positions, one for the arm and
one for the head. There is nothing special about these particular values and you can try
other joint angles as you like. However, remember that the joint positions are listed in
the same order as the joint names defined above. If the reset parameter is set to True
on the command line, then the goal positions are set back to their neutral positions with
the arm hanging straight down and the head centered.
47 arm_client =
actionlib.SimpleActionClient('right_arm_controller/follow_joint_trajectory',
FollowJointTrajectoryAction)
48
49 arm_client.wait_for_server()
Next we create a simple action client that connects to the joint trajectory action server
for the right arm. Recall that the namespace for this controller was defined in the
configuration file for the arbotix controllers using the action_name parameter.
56 head_client =
actionlib.SimpleActionClient('head_controller/follow_joint_trajectory',
FollowJointTrajectoryAction)
57
58 head_client.wait_for_server()
And here we do the same for the head trajectory action server. We can now use the
arm_client and head_client objects to send trajectory goals to these two joint
groups.
The rest of the script simply repeats the procedure for the head trajectory.
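The goal-construction step itself is not reproduced above, but it follows the same pattern for both groups. Here is a hedged sketch for the arm, using the message types imported earlier and assuming arm_goal holds the list of target joint positions and that we use the 3-second time_from_start discussed below:

from control_msgs.msg import FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

# Build a single-point trajectory that moves the arm to arm_goal in 3 seconds
arm_trajectory = JointTrajectory()
arm_trajectory.joint_names = arm_joints

point = JointTrajectoryPoint()
point.positions = arm_goal
point.velocities = [0.0] * len(arm_joints)
point.time_from_start = rospy.Duration(3.0)
arm_trajectory.points.append(point)

# Wrap the trajectory in a goal and send it to the action server
arm_goal_msg = FollowJointTrajectoryGoal()
arm_goal_msg.trajectory = arm_trajectory
arm_goal_msg.goal_time_tolerance = rospy.Duration(0.0)

arm_client.send_goal(arm_goal_msg)
arm_client.wait_for_result(rospy.Duration(5.0))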
There are a few things worth noting about the trajectories generated by the script:
• Each joint (either in the arm or the pan-and-tilt head) stops moving at the same
time. You can confirm this visually by watching the fake robot in RViz or you
can use rqt_plot to test it graphically. For example, the following command
will plot the joint velocities for the head_pan and head_tilt joints over the
course of the motion:
$ rqt_plot /joint_states/velocity[1]:velocity[7]
The graph below shows the pan and tilt joint velocities while running the
trajectory_demo.py script:
• Recall that we set the time_from_start for both the arm and head trajectories
to be 3 seconds. In the figure above, the numbers along the x-axis represent
time in seconds. You can therefore see that the head trajectory took just a little
over 3 seconds.
11.16.2 Testing the ArbotiX joint trajectory controllers with real servos
For an arm configured exactly like Pi Robot's, the trajectory_demo.py script can be
run on the real servos without modification. However, since not everyone has access to
a real Dynamixel arm, let's test the simpler case of a pan-and-tilt head.
The script head_trajectory_demo.py in the rbx2_dynamixels/scripts
directory is essentially the same as the trajectory_demo.py script that we just
examined but only includes the code relevant to the head trajectory. Before running the
script, we have to connect to the real servos. Terminate any fake or real Dynamixel
related launch files you might already have running and bring up the head-only version
of Pi Robot, or use your own launch file if you have one:
$ roslaunch rbx2_bringup pi_robot_head_only.launch sim:=false
Note how we set the sim argument to false in the command above.
Assuming your servos are detected properly by the arbotix driver, run the head
trajectory script with the reset parameter set to false:
$ rosrun rbx2_dynamixels head_trajectory_demo.py _reset:=false
The head should rotate smoothly to the right (clockwise) and look up and it should take
about 3 seconds to execute the trajectory. Re-center the servos again, but this time with
a duration of 10 seconds:
If your robot has a multi-jointed arm like Pi Robot's, you can try the
trajectory_demo.py script as we did with the fake robot. Of course, you will need
to modify the joint names and position values in the script if your arm
configuration is different from Pi's.
Now that we have tested the basic function of the arbotix joint trajectory controllers,
we are ready to configure MoveIt! to work with them.
As you can see, this file lists a controller for each of the MoveIt! planning groups we
defined using the Setup Assistant. In Pi's case, we have one for the right_arm group
and one for the right_gripper group. For each group, the file includes a name for
the controller, the namespace it operates in, the type of ROS action it implements,
whether or not it is the default controller for this group (almost always set to true) and
the list of joints in that group.
Let's break this down for each of our two planning groups. For the right arm we have:
name: right_arm_controller
action_ns: follow_joint_trajectory
Taken together, these two parameters specify that commands for this controller will be
accepted on the action namespace
/right_arm_controller/follow_joint_trajectory. To fully understand what
this means, we need the next parameter:
type: FollowJointTrajectory
The type parameter indicates that the arm will be controlled using a ROS
FollowJointTrajectory action. You can view the full definition of a
FollowJointTrajectory action using the command:
$ rosmsg show control_msgs/FollowJointTrajectoryAction
control_msgs/FollowJointTrajectoryActionGoal action_goal
  std_msgs/Header header
    uint32 seq
    time stamp
    string frame_id
  actionlib_msgs/GoalID goal_id
    time stamp
    string id
  control_msgs/FollowJointTrajectoryGoal goal
    trajectory_msgs/JointTrajectory trajectory
      std_msgs/Header header
        uint32 seq
        time stamp
        string frame_id
      string[] joint_names
      trajectory_msgs/JointTrajectoryPoint[] points
        float64[] positions
        float64[] velocities
        float64[] accelerations
        float64[] effort
        duration time_from_start
...
This first part of the message definition shows us the essence of what we need to know:
namely, that controlling the arm requires the specification of a joint trajectory which in
turn consists of an array of positions, velocities, accelerations and efforts at each point
along the way for each joint in the arm. The parameter time_from_start specifies
the desired timing of each joint configuration along the trajectory relative to the time
stamp in the header.
Putting it all together, MoveIt! expects our right_arm planning group to be controlled
by FollowJointTrajectoryAction messages published on the
/right_arm_controller/follow_joint_trajectory action topics. This in
turn means that our actual joint controller for the right arm must implement a ROS
action server that consumes these types of messages and implements a callback to map
FollowJointTrajectoryGoal messages into actual joint trajectories. Both the
arbotix_ros and dynamixel_motor packages provide action servers that work with
Dynamixel servos.
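To make the requirement concrete, here is a bare-bones sketch of such an action server. This is not the arbotix implementation; the hardware interface is left as a stub:

#!/usr/bin/env python

import rospy
import actionlib
from control_msgs.msg import (FollowJointTrajectoryAction,
                              FollowJointTrajectoryResult)

class MinimalTrajectoryController:
    def __init__(self):
        # Action server on the namespace MoveIt! expects for the right arm
        self.server = actionlib.SimpleActionServer(
            'right_arm_controller/follow_joint_trajectory',
            FollowJointTrajectoryAction,
            execute_cb=self.execute_cb,
            auto_start=False)
        self.server.start()

    def execute_cb(self, goal):
        # goal.trajectory.joint_names lists the joints; each point carries
        # positions, velocities, accelerations and a time_from_start
        for point in goal.trajectory.points:
            # A real controller would interpolate between points and command
            # the servos here, honoring point.time_from_start
            pass
        self.server.set_succeeded(FollowJointTrajectoryResult())

if __name__ == '__main__':
    rospy.init_node('minimal_trajectory_controller')
    MinimalTrajectoryController()
    rospy.spin()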
Turning now to the right_gripper group we have:
name: right_gripper
action_ns: gripper_action
type: GripperCommand
Gripper control is much simpler than controlling the rest of the arm as it typically
amounts to a simple squeezing of one or two finger joints activated by one or two
servos. From the three parameters listed above, MoveIt! expects the gripper to be
controlled by GripperCommandAction goals sent to the topic
/right_gripper/gripper_action. To see the first part of the
GripperCommandAction definition, run the command:
$ rosmsg show control_msgs/GripperCommandAction
Near the bottom of the output above we see that the gripper goal is given as a simple
float position value specifying the desired distance between the fingers, and a
max_effort for how hard to squeeze.
A real gripper controller must therefore define an action server that can consume
GripperCommandGoal messages and translate the desired finger position(s) into
rotation angles of the attached servos.
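As an illustration only, the position-to-angle mapping at the heart of such a controller might look like the following sketch. The finger length and the single-pivot geometry assumed here are hypothetical and would need to match your actual gripper:

import math

# Hypothetical geometry: distance from the finger pivot to the finger tip (meters)
FINGER_LENGTH = 0.08

def opening_to_servo_angle(goal):
    """Map a GripperCommandGoal to a servo angle for a single-servo gripper."""
    # goal.command.position is the requested opening in meters;
    # goal.command.max_effort says how hard to squeeze (unused in this sketch)
    opening = goal.command.position

    # Hypothetical model: one finger rotates about a pivot, so the opening
    # is roughly FINGER_LENGTH * sin(angle)
    ratio = max(-1.0, min(1.0, opening / FINGER_LENGTH))
    return math.asin(ratio)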
11.17.2 Creating the controller manager launch file
In addition to the controllers.yaml file, we also have to create a launch file for the
overall MoveIt! controller manager. This file lives in the launch subdirectory of the
robot's MoveIt! package and is given the name
robot_name_moveit_controller_manager.launch.xml. The MoveIt! Setup
Assistant will create an empty version of this file so it is up to us to fill in the details. In
Pi Robot's case, we edit the file
pi_robot_moveit_controller_manager.launch.xml to look like this:
<launch>
<!-- Set the param that trajectory_execution_manager needs to find the
controller plugin -->
• set either a joint space goal or a Cartesian goal (position and orientation) for the
end-effector
The first line defines the move group to be the right arm. Then we define a list of joint
positions (radians for revolute or continuous joints, meters for prismatic joints). The
order of the joints is the same as the link order in the arm, starting with the base of the
arm. We then use the set_joint_value_target() function to pass the target angles
to the motion planner, followed by the go() function to plan and execute the motion.
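A minimal sketch of the lines being described, assuming moveit_commander has already been imported and initialized and using arbitrary joint values:

right_arm = moveit_commander.MoveGroupCommander('right_arm')

# One target angle (in radians) per joint, in kinematic-chain order
joint_positions = [-0.5, 1.0, 0.5, -1.0, 0.3, 0.6]

right_arm.set_joint_value_target(joint_positions)
right_arm.go()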
• moves the arm to the named configuration called "resting" that was defined in
the Setup Assistant and stored in the SRDF file
• moves the arm to a joint configuration defined by setting target angles for each
joint
• moves the arm to the named configuration called "straight_forward" that was
defined in the Setup Assistant and stored in the SRDF file
• returns the arm to the named configuration called "resting" that was defined in
the Setup Assistant and stored in the SRDF file
Note how we run the launch file with the sim argument set to true. Later on we will
use the same launch file with real servos by simply setting the sim argument to false.
After running the launch file you should see the following output:
process[arbotix-1]: started with pid [11850]
process[right_gripper_controller-2]: started with pid [11853]
process[robot_state_publisher-3]: started with pid [11859]
[INFO] [WallTime: 1401976945.363225] ArbotiX being simulated.
[INFO] [WallTime: 1401976945.749586] Started FollowController
(right_arm_controller). Joints: ['right_arm_shoulder_pan_joint',
'right_arm_shoulder_lift_joint', 'right_arm_shoulder_roll_joint',
'right_arm_elbow_flex_joint', 'right_arm_forearm_flex_joint',
'right_arm_wrist_flex_joint'] on C1
[INFO] [WallTime: 1401976945.761165] Started FollowController
(head_controller). Joints: ['head_pan_joint', 'head_tilt_joint'] on C2
The key items to note are highlighted in bold above: the right gripper controller is
started; the arbotix driver is running in simulation mode; and the trajectory controllers
for the right arm and head have been launched.
If RViz is already running, shut it down then bring it up again using the arm_nav.rviz
config file from the rbx2_arm_nav package:
$ rosrun rviz rviz -d `rospack find rbx2_arm_nav`/config/arm_nav.rviz
Finally, open another terminal and run the forward kinematics demo script while
keeping an eye on RViz:
$ rosrun rbx2_arm_nav moveit_fk_demo.py
You should see the robot's arm and gripper move to the various joint poses defined in
the script.
Since this is our first MoveIt! script, let's take it line by line.
First we import the Python wrapper to the overall moveit_commander interface. This
will give us access to the functions and objects we need to control the robot's arm and
gripper.
To open or close the gripper, we need the GripperCommand message type from the
control_msgs package.
10 moveit_commander.roscpp_initialize(sys.argv)
When running the MoveIt! Python API, this command initializes the underlying
moveit_commander C++ system and should always be run near the top of your script.
16 GRIPPER_OPEN = [0.05]
17 GRIPPER_CLOSED = [-0.03]
18 GRIPPER_NEUTRAL = [0.01]
Here we set three basic gripper openings. The values are in meters and specify the
desired spacing in between the finger tips with 0.0 representing the half-way point. It is
up to the gripper controller to map the desired opening into joint angles using the
geometry of the gripper.
21 right_arm = moveit_commander.MoveGroupCommander('right_arm')
The primary object we will use from the moveit_commander library is the
MoveGroupCommander class for controlling a specific move group (i.e. planning
group) defined in the robot's SRDF configuration. In the case of one-arm Pi Robot, we
have a move group for the right arm and another for the right gripper. Recall that we
defined the names for the robot's planning groups when we ran the MoveIt! Setup
Assistant. Here we initialize the right arm group by passing its name to the
MoveGroupCommander class.
24 right_gripper = moveit_commander.MoveGroupCommander('right_gripper')
In a similar manner, we create the move group object for the right gripper.
27 end_effector_link = right_arm.get_end_effector_link()
Here we store the name of the end-effector link so that we can refer to it later when
setting pose targets for the gripper.
33 right_arm.set_named_target("resting")
Next we use the set_named_target() function to prepare the arm to move to the
configuration called "resting" stored in the SRDF file. This particular configuration has
the arm hanging straight down. Note that we haven't actually told the arm to move yet.
36 traj = right_arm.plan()
With a target configuration set, we now use the plan() function to compute a trajectory
from the current configuration to the target configuration. The plan() function returns
a MoveIt! RobotTrajectory object that includes a joint trajectory for moving the
arm toward the goal. We store this trajectory in the variable named traj.
39 right_arm.execute(traj)
Finally, we use the execute() command to actually move the arm along the planned
trajectory. Notice how we pass the pre-computed trajectory to the execute() function
as an argument. The job of actually implementing the trajectory is passed along to the
underlying joint trajectory action controller which in our case is the arbotix controller.
MoveIt! includes another command called go() that combines the plan() and
execute() commands into one. So instead of using:
traj = right_arm.plan()
right_arm.execute(traj)
we can simply use:
right_arm.go()
Notice how the go() command does not use the intermediary trajectory object. While
this simplifies the code, we sometimes want to modify the planned trajectory before
executing it. We will therefore usually use the go() command unless we need to apply
some transformation to the trajectory before execution and in those cases, we will use
plan() and execute().
Next we set a joint value target for the right gripper to the "neutral" position defined
earlier in the script. Note that we haven't commanded the gripper to move yet—we do
that next:
48 right_gripper.go()
Here we use the go() command to plan and execute the gripper trajectory.
Next we prepare to move the arm into a configuration defined by a specific set of
positions for each joint. First we set an array of joint positions, one for each of the six
degrees of freedom of the arm. The order of the joints is the same as the order in which
they are linked together in the URDF model. In the case of Pi Robot's arm, the order is:
['right_arm_shoulder_pan_joint', 'right_arm_shoulder_lift_joint',
'right_arm_shoulder_roll_joint', 'right_arm_elbow_flex_joint',
'right_arm_forearm_flex_joint', 'right_arm_wrist_flex_joint']
If you want to display the active joint names in the correct order as we have done above,
or store their names in an array, you can use the get_active_joints() command for
the planning group in question. For example, the following line would store the active
joint names for the right_arm group in an array named
right_arm_active_joints:
right_arm_active_joints = right_arm.get_active_joints()
With the desired joint configuration stored in the variable joint_positions, we are
ready to set it as a goal configuration for the arm:
56 right_arm.set_joint_value_target(joint_positions)
Here we use the set_joint_value_target() function like we did with the gripper,
only this time we are passing in values for six joints instead of one.
59 right_arm.go()
62 rospy.sleep(2)
65 right_arm.remember_joint_values('saved_config', joint_positions)
Here we use the remember_joint_values() function to store the values from the
joint_positions array as a named configuration that we have called 'saved_config'.
We will then use this named configuration later in the script to move the arm back to
this position. Note that instead of the fixed joint_positions array, you could read
the current joint values using the function get_current_joint_values(). This
means that at any time you would like to create a named configuration, you can use a
pair of commands like the following:
current_positions = right_arm.get_current_joint_values()
right_arm.remember_joint_values('named_target', current_positions)
Note that named configurations created dynamically like this are not stored in the SRDF
file and will only last as long as the script is running.
68 right_gripper.set_joint_value_target(GRIPPER_CLOSED)
71 right_gripper.go()
74 # Set the arm target to the "straight_forward" pose from the SRDF file
75 right_arm.set_named_target("straight_forward")
76
77 # Plan and execute the motion
78 right_arm.go()
Set the arm configuration to the named "straight_forward" pose stored in the SRDF file
and then plan and execute the motion.
Recall that we stored one of the earlier arm configurations under the name
'saved_config'. Here we use the set_named_target() function to return to that pose.
94 # Return the arm to the named "resting" pose stored in the SRDF file
95 right_arm.set_named_target("resting")
96 right_arm.go()
99 right_gripper.set_joint_value_target(GRIPPER_NEUTRAL)
100 right_gripper.go()
Cleanly shut down both the moveit_commander and the script using a pair of utility
commands. It is a good idea to always end your scripts with these two commands.
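The pair typically looks something like the following sketch; roscpp_shutdown() cleanly shuts down the moveit_commander C++ back end, and os._exit() then exits the script without waiting on any lingering threads:

import os
import moveit_commander

# Shut down the moveit_commander C++ back end cleanly
moveit_commander.roscpp_shutdown()

# Exit the script immediately
os._exit(0)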
end_effector_link = right_arm.get_end_effector_link()
right_arm.set_pose_target(target_pose, end_effector_link)
right_arm.go()
In this code snippet we begin by connecting to the right arm move group. We then get
the name of the end-effector link that we will use later for setting its pose. Next, we set
the target pose of the end-effector as a PoseStamped message relative to the
base_footprint frame. Finally, we use the set_pose_target() function to set the
desired end-effector pose.
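A slightly fuller sketch of the snippet being described, with an illustrative target pose relative to the base_footprint frame:

from geometry_msgs.msg import PoseStamped

right_arm = moveit_commander.MoveGroupCommander('right_arm')
end_effector_link = right_arm.get_end_effector_link()

# Target pose for the end-effector relative to the base_footprint frame
target_pose = PoseStamped()
target_pose.header.frame_id = 'base_footprint'
target_pose.header.stamp = rospy.Time.now()
target_pose.pose.position.x = 0.20
target_pose.pose.position.y = -0.1
target_pose.pose.position.z = 0.85
target_pose.pose.orientation.w = 1.0

right_arm.set_pose_target(target_pose, end_effector_link)
right_arm.go()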
To see this in action, we will use the script moveit_ik_demo.py in the directory
rbx2_arm_nav/scripts. This script performs the following steps:
• moves the arm to the named configuration called "resting" that was defined in
the Setup Assistant and is stored in the SRDF file
• sets the target pose described above for the end-effector relative to the
base_footprint frame using the set_pose_target() function
• runs the go() command which uses the inverse kinematics of the arm to plan
and execute a trajectory that places the end-effector in the desired pose
• moves the arm to the named configuration called "wave" that was defined in the
Setup Assistant and stored in the SRDF file
• returns the arm to the named configuration called "resting" that was defined in
the Setup Assistant and stored in the SRDF file
Before looking at the code, let's try it in the ArbotiX simulator. If you don't already
have Pi Robot's launch file running from the previous section, fire it up now:
$ roslaunch rbx2_bringup pi_robot_with_gripper.launch sim:=true
If you don't already have the robot's move_group.launch file running, run it now as
well:
And if you don't have RViz running with the arm_nav.rviz config file, run the
following command:
$ rosrun rviz rviz -d `rospack find rbx2_arm_nav`/config/arm_nav.rviz
You should see the robot's arm move to the various poses defined in the script as well as
any messages displayed in the terminal window.
If you observe the terminal window where we ran the move_group.launch file, you
will see messages regarding the number of planning attempts and the time it took to find
a solution for each arm movement. For example, the following messages are displayed
when computing the IK solution required to move the arm from the initial resting
position to the first pose:
The line highlighted in bold above indicates that it took over half a second to compute
the IK solution using the generic KDL solver. Typical solution times for different poses
range from 0.15 to 0.6 seconds on my i5 laptop. At the end of the chapter we will learn
how to create a custom IK solver for the arm that reduces these computation times by
a factor of 10 or more.
The first part of the script is similar to the moveit_fk_demo.py node so we will skip
ahead to what is new.
It is a good idea to explicitly set the reference frame relative to which you will set pose
targets for the end-effector. In this case we use the base_footprint frame.
33 right_arm.allow_replanning(True)
Allowing replanning gives MoveIt! permission to try again if the first planning attempt
fails.
36 right_arm.set_goal_position_tolerance(0.01)
37 right_arm.set_goal_orientation_tolerance(0.05)
These two lines set how close the end-effector must come to the goal position (in meters)
and orientation (in radians) for the goal to be considered reached.
45 target_pose = PoseStamped()
46 target_pose.header.frame_id = reference_frame
47 target_pose.header.stamp = rospy.Time.now()
48 target_pose.pose.position.x = 0.20
49 target_pose.pose.position.y = -0.1
50 target_pose.pose.position.z = 0.85
51 target_pose.pose.orientation.x = 0.0
52 target_pose.pose.orientation.y = 0.0
53 target_pose.pose.orientation.z = 0.0
54 target_pose.pose.orientation.w = 1.0
Here we set a target pose for the end-effector relative to the reference_frame we set
earlier. This particular pose has the gripper oriented horizontally and positioned 0.85
meters off the ground, 0.10 meters to the right of the torso and 0.20 meters forward of
the center of the base.
57 right_arm.set_start_state_to_current_state()
Before sending the target pose to the IK solver, it is a good idea to explicitly set the start
state to the current state of the arm.
57 right_arm.set_pose_target(target_pose, end_effector_link)
Next we use the set_pose_target() function to set the target pose for the end-
effector. This function takes a pose as the first argument and the name of the end
effector link which we stored earlier in the script in the variable end_effector_link.
As with the set_joint_value_target() function we used in the FK demo, the
set_pose_target() function does not initiate planning or cause the arm to move so
we take care of that next.
73 right_arm.go()
To execute the shift, we use the go() command. As before, we could call the plan()
and execute() commands separately or combine them as we have here.
Here we use the shift_pose_target() function again but this time we rotate the
end-effector 90 degrees around the roll axis. (The roll axis is the same as the x-axis in
the frame of the end-effector.)
82 saved_target_pose = right_arm.get_current_pose(end_effector_link)
85 right_arm.set_named_target("wave")
86 right_arm.go()
Here we move the arm to a named configuration called "wave" stored in the SRDF file.
90 right_arm.set_pose_target(saved_target_pose, end_effector_link)
91 right_arm.go()
Next we use the set_pose_target() function to set the target pose to the saved pose.
Running the go() command then calls the IK solver to find a corresponding set of joint
angles and executes a trajectory that will place the end-effector in that pose.
95 right_arm.set_named_target("resting")
96 right_arm.go()
And if you don't already have Pi Robot's move_group.launch file running, fire it up
now:
Now bring up RViz with the fake_target.rviz config file from the rbx2_utils
package:
The yellow balloon should appear in RViz and move slowly in front of the robot. Recall
that the pub_3d_target.py node publishes the pose of the balloon on
the /target_pose topic.
After a few moments, the head should start tracking the balloon in RViz.
If all goes well, Pi should periodically update the position of his arm so that the gripper
reaches toward the balloon.
If you are not already running Pi Robot's MoveIt! launch file, bring it up now:
If RViz is already running, terminate it now then fire it up again with the
arm_paths.rviz configuration file. (Alternatively, use the Open Config option under
the File menu and navigate to rbx2_arm_nav/config/arm_paths.rviz.)
Finally, run the Cartesian paths demo script with the cartesian parameter set to true:
If you see a message similar to:
Path planning failed with only 0.25 success after 100 attempts.
then simply run the script again. The Cartesian path planner requires that an IK solution
can be found for each pose along the path. If the solver fails to find a solution for one or
more of the waypoints, then the path is aborted. Running the script again usually finds a
solution due to the random nature of the KDL IK solver.
The path is displayed in RViz because the arm_paths.rviz config enables the Show
Trail checkbox for the right_gripper_link in the Planned Path list of links
as shown in the image.
This time, the view in RViz should look something like this.
Notice that all the trajectories are now curved. Note also that the gripper no longer
stays horizontal along the triangular portion of the trajectory. What's more, if you run
the command again, you will get a potentially different set of trajectories between the
same waypoints. This highlights another useful feature of the
compute_cartesian_path() function: unlike the general-purpose planner, it moves
the gripper along the same path time after time.
First we read in the parameter called cartesian that is True by default. If the
parameter is True, we calculate Cartesian paths between the set of waypoints. If it is set
to False, we use the regular set_pose_target() and go() commands to move the
arm. This way we can compare the straight line trajectories with those computed the
normal way.
Next we get the name of the end-effector link and set the arm to the "straight_forward"
configuration that we defined when running the MoveIt! Setup Assistant.
After moving the arm to the "straight_forward" pose, we set the start_pose and
end_pose variables to be the same as the resting pose so we can use these as the first
and last waypoints.
Next we initialize a list to hold the waypoints and set the wpose variable to a copy of
the start_pose. If we are executing a Cartesian path, we then append the first
waypoint to the waypoints list. Note that we aren't yet moving the arm—we are
simply building up a list of waypoints.
Here we set the next waypoint pose 0.2 meters back and 0.2 meters to the right of the
starting pose. If the cartesian parameter is set to True, then we append this new pose
to the waypoints list. Otherwise, we move the end-effector to the new pose right away
and in whatever manner the planner chooses by using the standard
set_pose_target() and go() commands.
The next few lines (that we'll skip here) simply add another waypoint that positions the
end-effector down and back toward the mid-line of the robot. The final waypoint is set
to the starting pose ("straight_forward").
95 fraction = 0.0
96 maxtries = 100
97 attempts = 0
98
99 right_arm.set_start_state_to_current_state()
Before we compute and execute the Cartesian path, we set a counter for the number of
attempts we will allow and a variable called fraction that will be explained below.
We also set the arm's internal start state to the current state.
The key statement here begins on Line 103 above and continues to Line 107. Here we
invoke the function compute_cartesian_path() which takes four arguments: the list
of waypoint poses; the eef_step, which sets the resolution (in meters) at which the
end-effector path is interpolated; the jump_threshold, which limits how far the joint
configuration may jump between consecutive points (a value of 0.0 disables the check);
and a flag indicating whether or not to check the path for collisions. The function
returns the computed plan together with the fraction of the waypoints it was able to
follow.
Finally, if we obtain a plan that passes through 100% of the waypoints, we execute the
plan. Otherwise, we display a "plan failed" message.
11.22.2 Setting other path constraints
Suppose the robot's task is to grasp and move an object while keeping it upright. For
example, the object might be a container of liquid like a cup of coffee so that the cup has
to be kept from tipping and spilling the fluid. MoveIt! allows us to specify path
constraints for the planned trajectory that can accomplish this type of goal.
In this case, we need to constrain the orientation of the gripper so that it stays level
during the motion. The script moveit_constraints_demo.py in the
rbx2_arm_nav/scripts directory illustrates how it works. This script performs the
following actions:
• moves the arm to an initial pose with the gripper about shoulder height and
horizontal
• sets a target pose for the gripper in front of the robot and down around table
height
• plans and executes a path to the target pose while satisfying the orientation
constraint
NOTE: Don't be alarmed if the robot appears to take a long time to perform this task or
fails altogether. If the motion is successful, the gripper should move vertically
downward in front of the robot keeping the gripper horizontal. However, you might
notice quite a long pause (possibly as long as 2 minutes on an i5 computer) while the
planner attempts to compute a path for the gripper that satisfies the constraints.
Furthermore, it is not unusual for the planner to fail to find a solution at all, even after
the default 5 attempts. If this happens to you, run the script again until you see the
desired motion. If you monitor the terminal window in which we ran the
move_group.launch file, you can see the status of each attempt.
To use orientation constraints, we first import the general Constraints message type
as well as the OrientationConstraint message from the MoveIt! messages
package. (There is also a PositionConstraint message type that works in a similar
way.)
# Increase the planning time since constraint planning can take a while
right_arm.set_planning_time(15)
Here we set the planning time per attempt to 15 seconds since computing IK solutions
for each point along the path can take a while.
To create a path constraint, we first initialize a Constraints list and give it a name.
We then start adding constraints to the list. In this case, we are going to add a single
orientation constraint.
The constraint header is typically the same as the header of the pose of the arm just
before applying the constraint. The link_name refers to the link we want to constrain,
which in our case is the gripper (end-effector).
Finally, we give this constraint a weight which would have more relevance if we were
applying multiple constraints at the same time.
In these next two lines, we first append the orientation constraint to the list of constraints
and then, since that is the only constraint we are going to apply, we use the
set_path_constraints() function to apply the constraints to the right_arm move
group.
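Pulling these pieces together, the constraint setup looks roughly like the following sketch. The tolerance values are illustrative, and start_pose is assumed to hold the stamped pose of the arm just before the constraint is applied:

from moveit_msgs.msg import Constraints, OrientationConstraint

# Create a constraints list and give it a name
constraints = Constraints()
constraints.name = "Keep gripper horizontal"

# Build an orientation constraint for the end-effector link
orientation_constraint = OrientationConstraint()
orientation_constraint.header = start_pose.header
orientation_constraint.link_name = end_effector_link
orientation_constraint.orientation.w = 1.0

# Keep roll and pitch tight but allow free rotation about the yaw (z) axis
orientation_constraint.absolute_x_axis_tolerance = 0.1
orientation_constraint.absolute_y_axis_tolerance = 0.1
orientation_constraint.absolute_z_axis_tolerance = 3.14
orientation_constraint.weight = 1.0

# Append the constraint and apply it to the right_arm move group
constraints.orientation_constraints.append(orientation_constraint)
right_arm.set_path_constraints(constraints)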
# Set the start state and target pose, then plan and execute
right_arm.set_start_state_to_current_state()
right_arm.set_pose_target(target_pose, end_effector_link)
right_arm.go()
rospy.sleep(1)
We then set the next target pose and execute the trajectory as usual. The planner sees
that we have applied the constraint on the right_arm group and therefore incorporates
the constraint into the planning.
right_arm.clear_path_constraints()
When you want to go back to regular planning, clear all constraints using the
clear_path_constraints() function.
The robot should first move the arm to the straight out position at normal speed, then
back to the resting position at roughly ¼ normal speed.
1 #!/usr/bin/env python
2
3 import rospy
4 from moveit_msgs.msg import RobotTrajectory
5 from trajectory_msgs.msg import JointTrajectoryPoint
6 from geometry_msgs.msg import PoseStamped, Pose
7
8 def scale_trajectory_speed(traj, scale):
9 # Create a new trajectory object
10 new_traj = RobotTrajectory()
11
12 # Initialize the new trajectory to be the same as the input trajectory
13 new_traj.joint_trajectory = traj.joint_trajectory
14
15 # Get the number of joints involved
16 n_joints = len(traj.joint_trajectory.joint_names)
17
18 # Get the number of points on the trajectory
19 n_points = len(traj.joint_trajectory.points)
20
21 # Store the trajectory points
22 points = list(traj.joint_trajectory.points)
23
24 # Cycle through all points and joints and scale the time from start,
25 # speed and acceleration
26 for i in range(n_points):
27 point = JointTrajectoryPoint()
28
29 # The joint positions are not scaled so pull them out first
30 point.positions = traj.joint_trajectory.points[i].positions
31
32 # Next, scale the time_from_start for this point
33 point.time_from_start =
traj.joint_trajectory.points[i].time_from_start / scale
34
35 # Get the velocities for each joint for this point
36 point.velocities = list(traj.joint_trajectory.points[i].velocities)
37
38 # Get the accelerations for each joint for this point
39 point.accelerations =
list(traj.joint_trajectory.points[i].accelerations)
40
41 # Scale the velocity and acceleration for each joint at this point
42 for j in range(n_joints):
43 point.velocities[j] = point.velocities[j] * scale
44 point.accelerations[j] = point.accelerations[j] * scale * scale
45
46 # Store the scaled trajectory point
47 points[i] = point
48
49 # Assign the modified points to the new trajectory
50 new_traj.joint_trajectory.points = points
51
52 # Return the new trajectory
53 return new_traj
Next we get a count of the number of joints and the number of points in the trajectory.
We convert the trajectory points to a Python list so we can enumerate through them.
26 for i in range(n_points):
27 point = JointTrajectoryPoint()
28
29 # The joint positions are not scaled so pull them out first
30 point.positions = traj.joint_trajectory.points[i].positions
31
32 # Next, scale the time_from_start for this point
33 point.time_from_start =
traj.joint_trajectory.points[i].time_from_start / scale
We then pull out the list of joint velocities and accelerations for this point so that we can
scale them next.
42 for j in range(n_joints):
43 point.velocities[j] = point.velocities[j] * scale
44 point.accelerations[j] = point.accelerations[j] * scale * scale
Here we scale the joint velocities and accelerations for this trajectory point. Note how
accelerations get multiplied by the square of the scale since acceleration is the rate of
change of velocity.
47 points[i] = point
With the scaling complete, we store the modified trajectory point as the ith point in the
new trajectory.
50 new_traj.joint_trajectory.points = points
When all points have been scaled, we assign the entire list to the new trajectory.
53 return new_traj
The obstacles initially appear all the same color in RViz which is determined by the
setting under the Motion Planning display named Scene Geometry → Scene Color
also shown above. Our script sets different colors for individual objects. Real obstacles
can be added to the planning scene either from existing knowledge of the robot's
environment, or from sensor data such as laser scans and point clouds as we will see
later in the chapter.
Before we look at the code, let's run the simulation. If you're not already running the
one-arm version of Pi Robot in the ArbotiX simulator, bring it up now:
$ roslaunch rbx2_bringup pi_robot_with_gripper.launch sim:=true
The script first adds a floating table top and two boxes to the scene. It then moves the
arm to the "resting" position in case it is not already there. Next we set a target pose for
the gripper to place it in between the two boxes and a few centimeters above the table.
MoveIt! then deftly controls the arm to move the gripper to the target pose while
avoiding any collisions with any part of the arm with the obstacles. Finally, we move
the arm back to the resting position. The view in RViz should look something like this
part way through the sequence:
17 scene = PlanningSceneInterface()
Here we create the PlanningSceneInterface object that we will use to add objects to,
and remove objects from, the MoveIt! planning scene. The script also defines a scene
publisher (not shown above) that will be used to publish object color information
back to the planning scene.
We will set the object poses relative to the base_footprint frame so we assign it to
the variable reference_frame so that it can be reused throughout the script. We also
set the arm's pose reference frame to the same frame to keep things simple. You can use
different frames for scene objects and arm poses if it makes more sense for a given
situation.
Each scene object requires a unique name which we assign to the variables table_id,
box1_id and box2_id.
56 scene.remove_world_object(box1_id)
57 scene.remove_world_object(box2_id)
58 scene.remove_world_object(table_id)
Since we might want to run the script more than once in succession, we use the
remove_world_object() function to remove any scene objects that would have been
added by a previous run.
Here we set the height of the floating table off the ground as well as the dimensions (in
meters) of the table and the two boxes.
78 table_pose = PoseStamped()
79 table_pose.header.frame_id = reference_frame
80 table_pose.pose.position.x = 0.26
81 table_pose.pose.position.y = 0.0
82 table_pose.pose.position.z = table_ground + table_size[2] / 2.0
83 table_pose.pose.orientation.w = 1.0
Next we set the pose for the table, placing it 0.26 meters in front of the reference frame
(/base_footprint) and at a height determined by the table_ground variable (set to
0.75 meters earlier) plus half the height of the table itself.
Now we can add the table to the planning scene using the add_box() function. This
function takes a string argument for the name of the object followed by the pose and size
of the box.
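The call itself is a one-liner, sketched here using the names and variables defined above:

# Add the table to the planning scene: name, pose and size (x, y, z in meters)
scene.add_box(table_id, table_pose, table_size)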
We will skip the lines where we set the poses of the two boxes and add them to the
scene the same way we did for the table.
While certainly not necessary, it is nice to give the objects different colors so that they
appear more distinct in RViz. Here we use a convenience function defined later in the
script to color the table red and the two boxes orange.
Here we set the target pose for the end effector to be in between the two boxes and 5 cm
above the table. The target orientation is to have the gripper horizontal.
Finally, we set the target pose for the arm and run the go() command to plan and
execute a trajectory that avoids the obstacles.
The script finishes up with a couple of auxiliary functions for setting the object colors.
These functions were borrowed from Mike Ferguson's moveit_python package. The
setColor() function is fairly self-explanatory and uses the ObjectColor message
type from the moveit_msgs package. However, the sendColors() function could
use a little explanation so let's look at it now:
150 def sendColors(self):
151 # Initialize a planning scene object
152 p = PlanningScene()
153
154 # Need to publish a planning scene diff
155 p.is_diff = True
156
157 # Append the colors from the global color dictionary
158 for color in self.colors.values():
The key idea here is that we are making an update to the planning scene rather than
creating one from scratch. The objects have already been added earlier in the script and
we just want to paint them a different color. The PlanningScene message includes a
field called is_diff and if this is set to True as we do above, then the overall planning
scene will be updated rather than replaced by the information we publish. The last line
in the function uses the scene publisher that we defined near the top of the script to
actually publish the object colors.
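Putting it together, the whole function amounts to something like the following sketch, where self.colors and self.scene_pub are assumed to be the color dictionary and scene publisher created earlier in the script:

from moveit_msgs.msg import PlanningScene

def sendColors(self):
    # Initialize a planning scene diff
    p = PlanningScene()
    p.is_diff = True

    # Append the colors stored by setColor()
    for color in self.colors.values():
        p.object_colors.append(color)

    # Publish the scene diff so the colors are applied to the existing objects
    self.scene_pub.publish(p)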
The attached object will turn purple in color when the attachment is successful. It will
then turn green again at the end when it is detached. Note how the movement of the arm
compensates for the attached object and prevents it from striking any part of the table or
the base of the robot.
In Line 62 we set the dimensions of the box-shaped tool and give it a length of 0.3
meters (30 cm) and a width and height of 0.02 meters (2 cm).
Lines 64-76 define the pose we want the object to have relative to the end-effector.
The pose used here orients the long dimension of the tool to be parallel to the gripper
fingers and pointing out from the arm. To position the tool so that it appears to be
grasped by the gripper at one end, we slide it ½ the length of the tool so that its end is
positioned just at the finger tips, then bring it back 2.5 cm so the gripper appears to have a
good hold on it.
Finally, in Line 79, we use the attach_box() function on the scene object to attach
our box-shaped tool to the end-effector. This function takes four arguments: the link to
which we want to attach the object; the name (given as a string) assigned to the object
we are attaching (so it can be referred to later); the pose defining the object's relation to
the attachment link; the size of the box we are attaching given as a triple specifying
length, width and height. You can also supply an optional fifth argument which is the
list of "touch links": the links (such as the gripper fingers) that are allowed to remain in
contact with the attached object without it being reported as a collision.
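A sketch of the pose construction and attachment described above; the 2.5 cm setback matches the description, while the choice of the x axis as the outward direction is an assumption:

from geometry_msgs.msg import PoseStamped

# Pose of the tool relative to the end-effector link
p = PoseStamped()
p.header.frame_id = end_effector_link

# Slide the box out half its length, then back 2.5 cm so the
# gripper appears to hold it near one end
p.pose.position.x = tool_size[0] / 2.0 - 0.025
p.pose.position.y = 0.0
p.pose.position.z = 0.0
p.pose.orientation.w = 1.0

# Attach the box to the end-effector: link, object name, pose and size
scene.attach_box(end_effector_link, 'tool', p, tool_size)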
Toward the end of the script, we detach the tool from the end-effector with the
statement:
102 scene.remove_attached_object(end_effector_link, 'tool')
Note how we reference the tool by the name we gave it earlier during the attach
operation. Note also that like the attach_box() function, the
remove_attached_object() function is defined on the scene object, not the move
group for the arm.
Like the obstacles demo, the script first adds a floating table top and two boxes to the
scene. It also adds a narrow box in between the first two boxes and this is set as the
target for the pick-and-place operation. After generating a number of test grasps, the
arm is moved to place the gripper in the correct position to grasp the target. The gripper
then closes around the target and the target is placed at a new location on the table.
NOTE 1: You will notice that the movement of the grasped object seems to lag the
movement of the gripper. This is an artifact of some as yet unidentified bug or
limitation in the MoveIt! RViz plugin. You can see smooth motion of the object at the
expense of jerky movement of the arm by changing the following settings in RViz:
Now run the pick-and-place demo again. This time the arm motion will not be as
smooth but the grasped object will move in sync with the gripper.
NOTE 2: You might find that either the pick operation or the place operation fails after
the five attempts allowed in the script. This can sometimes happen when the IK solver
fails to find a solution for moving either the gripper (during pick) or the grasped object
(during place). We'll discuss methods for increasing the chances of success below. For
now, simply run the script again until both the pick and place operations succeed.
GROUP_NAME_ARM = 'right_arm'
GROUP_NAME_GRIPPER = 'right_gripper'
GRIPPER_FRAME = 'right_gripper_link'
GRIPPER_CLOSED = [-0.06]
GRIPPER_OPEN = [0.05]
GRIPPER_NEUTRAL = [0.0]
GRIPPER_JOINT_NAMES = ['right_gripper_finger_joint']
GRIPPER_EFFORT = [1.0]
REFERENCE_FRAME = 'base_footprint'
For convenience, we define a number of global variables to hold the names of various
links, frames and joints. These could also be read in as ROS parameters. The values
listed for the three gripper positions (closed, open and neutral) were determined
somewhat empirically.
During debugging, it is sometimes helpful to visualize the gripper poses that are
attempted when trying to pick up the target object. Here we define a publisher that will
allow us to publish the various poses on the "gripper_pose" topic. We can then
subscribe to this topic in RViz to view the poses.
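Such a publisher takes only one line; a sketch, where the variable name and queue_size are arbitrary and PoseStamped comes from geometry_msgs.msg:

# Publish candidate gripper poses so they can be visualized in RViz
gripper_pose_pub = rospy.Publisher('gripper_pose', PoseStamped, queue_size=10)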
The MoveIt! pick-and-place operation is not guaranteed to succeed on the first try so we
specify a maximum number of attempts for the pick operation and the place operation
separately.
# Set the target pose in between the boxes and on the table
target_pose = PoseStamped()
target_pose.header.frame_id = REFERENCE_FRAME
target_pose.pose.position.x = 0.22
target_pose.pose.position.y = 0.0
target_pose.pose.position.z = table_ground + table_size[2] + target_size[2] /
2.0
target_pose.pose.orientation.w = 1.0
Here we set the pose of the target so that it sits on the table in between the two boxes.
We then add the target to the scene using the add_box() function.
Next we specify the location where we want the target object to be placed after it is
grasped by the robot. The pose defined above is located on the table to the right of the
right box (as viewed by the robot). By setting the orientation to the unit quaternion, we
are also indicating that the object should be placed in the upright position.
# Initialize the grasp pose to the target pose
grasp_pose = target_pose
The MoveIt! pick operation requires that we provide at least one possible grasp pose for
the gripper that might work for holding the target object. In fact, we will provide a
range of different grasp poses but we start with a pose that matches the target pose.
# Shift the grasp pose by half the width of the target to center it
Here we tweak the initial grasp pose a little to the right of the target's center. The only
reason we are doing this is that Pi Robot's gripper moves only the right finger while the
left finger is fixed. By shifting the grasp pose a little toward the fixed finger, the closing
of the right finger against the target will result in a more centered grasp. If you are using
a parallel gripper with either two servos (one for each finger) or a prismatic joint that
moves both fingers evenly with a single servo, then this adjustment would not be
necessary.
As we mentioned earlier, we will generate a range of grasp poses for MoveIt! to try
during the pick operation. Here we use a utility function called make_grasps() that
we will describe in detail later. The function takes two arguments: the initial grasp pose
to try and a list of object IDs that the gripper is allowed to touch on its final approach to
the target. In this case, we allow it to touch the table. The make_grasps() function
returns a list of possible grasp poses for the gripper.
As explained earlier, it is sometimes helpful to visualize the grasp poses in RViz. Here
we publish each pose in rapid succession so we can quickly view the poses if desired.
We have finally reached the block where we attempt to pick up the target object using
our list of grasp poses. First we set a flag to test the result that will allow us to abort
the loop if we find a successful grasp pose. We also track the number of times
(n_attempts) that we run through the collection of grasps. Note that each attempt
involves a run through the entire grasp list. Because of random perturbations used in the
underlying pick and IK algorithms, it often pays to cycle through the list more than once
if earlier attempts fail.
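A sketch of the retry loop, assuming the MoveGroupCommander pick() interface (object name plus a list of grasps) and variable names such as target_id, grasps and max_pick_attempts as described above:

from moveit_msgs.msg import MoveItErrorCodes

result = None
n_attempts = 0

# Keep cycling through the grasp list until a grasp succeeds
# or we exhaust the allowed number of attempts
while result != MoveItErrorCodes.SUCCESS and n_attempts < max_pick_attempts:
    result = right_arm.pick(target_id, grasps)
    n_attempts += 1
    rospy.loginfo("Pick attempt: " + str(n_attempts))
    rospy.sleep(0.2)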
First we reset the result flag and the number of attempts. We then use the utility
function make_places() defined later in the script to generate a list of candidate poses
to place the object. We give make_places() the desired place_pose we defined
earlier in the script as an argument. The function then returns a list of poses similar to
the target pose but that might vary slightly in position and/or orientation to give the IK
solver a better chance of succeeding.
Here we execute a loop similar to the one we used for pick(). The actual place()
operation occurs on the line highlighted in bold above. Unlike the pick() function,
place() takes a single pose rather than a list of poses as an argument. So we manually
cycle through our list of places and run the place() operation on each pose. If the
place() operation determines that the object can be successfully moved to the given
pose, the arm will place the object at that pose and the gripper will release the object.
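Since that block may be easier to follow in outline, here is a minimal sketch of the place loop under the same assumptions as the pick sketch above, where places is the list returned by make_places() and max_place_attempts is the attempt limit for the place operation:

result = None
n_attempts = 0

while result != MoveItErrorCodes.SUCCESS and n_attempts < max_place_attempts:
    for place in places:
        # Unlike pick(), place() accepts a single pose per call
        result = right_arm.place(target_id, place)
        if result == MoveItErrorCodes.SUCCESS:
            break
    n_attempts += 1
    rospy.loginfo("Place attempt: " + str(n_attempts))
    rospy.sleep(0.2)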
That essentially completes the script. Next we look at the all-important make_grasps() utility function.
With these requirements in mind, let's now break down the make_grasps() function
line by line.
3 g = Grasp()
6 g.pre_grasp_posture = self.make_gripper_posture(GRIPPER_OPEN)
7 g.grasp_posture = self.make_gripper_posture(GRIPPER_CLOSED)
Next we set the pre_grasp_posture for the gripper to be the open position and the
grasp posture to be the closed position. Note that we are using another utility function
called make_gripper_posture() that is also defined in the script and which we will
describe later.
Referring to Line 10 above, we want the gripper to approach along the x-direction,
[1.0, 0.0, 0.0], which is aligned with the forward direction of the base since the
robot is standing behind the target. We want the minimum approach distance to be 1cm
and the desired approach distance to be 10cm.
In a similar manner, Line 11 defines a retreat direction upward and to the right, [0.0,
-1.0, 1.0], since we want to lift the object over one of the boxes and move it to the
right of that box. Here we also set a minimum retreat distance of 10cm and a desired
distance of 15cm.
14 g.grasp_pose = initial_pose_stamped
Next we will generate a number of alternate grasp poses with various values for pitch
and (optionally) yaw angles. The pitch_vals and yaw_vals lists above indicate the
angles we will try (in radians). Note that in this case, we are only going to vary the pitch
angle. You could also add a roll parameter if you find it necessary.
For each value of yaw and pitch, we create a quaternion from the two Euler components
using the quaternion_from_euler function that we imported at the top of the script.
32 g.grasp_pose.pose.orientation.x = q[0]
33 g.grasp_pose.pose.orientation.y = q[1]
34 g.grasp_pose.pose.orientation.z = q[2]
35 g.grasp_pose.pose.orientation.w = q[3]
We then set the grasp pose orientation components to those of the computed quaternion.
38 g.id = str(len(grasps))
Every grasp pose requires a unique id so we simply set it to the length of the list of
grasps which grows by one as we add each grasp.
41 g.allowed_touch_objects = allowed_touch_objects
A grasp can be given a list of objects it is allowed to touch. In our case, we will allow
the gripper to touch the table if needed on its final approach to the target. The
allowed_touch_objects list is passed as an argument to the make_grasps()
function and recall that we passed the list [table_id] to allow the gripper to touch the
table.
If we were controlling a real arm, we would set a reasonable value for the maximum
contact force but since we are working only in simulation, we set a value of 0.
The MoveIt! pick operation expects the grasp pose candidates to be ranked by quality.
Here we assume that our initial guess (the target pose) has the highest quality (1.0)
which corresponds to a pitch value of 0. For other pitch values, we degrade the quality
by subtracting the absolute value of the pitch from 1.0 which reflects the fact that we
don't expect high values of pitch to be necessary.
50 grasps.append(deepcopy(g))
Finally, we append the grasp to the list of grasps. We use the Python deepcopy()
function to ensure that we add a truly distinct copy of the grasp object g. Otherwise,
subcomponents of g might only contain references to a previous version of g as we
cycle through our loop.
53 return grasps
When we have finished generating the list of grasp poses, the list is returned to the
calling program.
Finally, we have the make_places() function that was used to generate a number of
alternative poses to place the grasped object on the table.
1 def make_places(self, init_pose):
2     # Initialize the place location as a PoseStamped message
3     place = PoseStamped()
4
5     # Start with the input place pose
6     place = init_pose
7
8     # A list of x shifts (meters) to try
9     x_vals = [0, 0.005, 0.01, 0.015, -0.005, -0.01, -0.015]
10
11    # A list of y shifts (meters) to try
12    y_vals = [0, 0.005, 0.01, 0.015, -0.005, -0.01, -0.015]
13
As you can see, this function is very similar to the make_grasps() utility that was
described in detail above. Instead of returning a list of grasp poses that vary in
orientation, we are returning a list of place poses that vary in position around the desired
pose.
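Since only the first part of the function is shown above, the following sketch captures the rest of the idea: for every combination of x and y shift we copy the input pose, offset its position, and append the result to the list. This is a simplified version, not the exact code, and the real function may also vary the orientation:

from copy import deepcopy

def make_places_sketch(init_pose, x_vals, y_vals):
    # Return a list of candidate place poses shifted around the desired pose
    places = []
    for dx in x_vals:
        for dy in y_vals:
            place = deepcopy(init_pose)
            place.pose.position.x += dx
            place.pose.position.y += dy
            places.append(place)
    return places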
This file defines our collection of sensors through the use of MoveIt! sensor plugins. In
this case, we are defining a single plugin that uses the PointCloudOctomapUpdater
to generate an octree occupancy map. The plugin expects to subscribe to the topic
named by the point_cloud_topic parameter which in this case is
/camera/depth_registered/points. We then specify parameters for the
maximum depth range we want to process in meters, sub-sampling rates for the video
stream (frame_subsample) and point cloud grid (point_subsample). The
padding_offset (in centimeters) and padding_scale determine how much of a
buffer around the robot itself we want to use when creating the self-filter. The resulting
point cloud with the robot parts removed is published on the filtered_cloud_topic.
where the parameter octomap_frame specifies the coordinate frame in which this
representation will be stored. If you are working with a mobile robot, this frame should
be a fixed frame in the world. The octomap_resolution parameter specifies the
resolution at which this representation is maintained (in meters).
To test things out, first fire up the OpenNI camera driver for your Kinect or Xtion Pro:
Now bring up the MoveIt! environment for Pi Robot which includes the sensor manager
created above:
Assuming there are objects within view of the camera, the image in RViz should look
something like the following:
Where the beginning and ending comment characters are shown in bold. Next,
terminate your move_group.launch file if it is running, then delete any existing
sensor parameters before re-running your move_group.launch file:
You only have to do this once since move_group.launch will no longer set the
sensor parameters, thereby disabling octomap processing.
To protect the servos from overheating, we will also run the monitor_dynamixels
node we created in Chapter 6:
These launch files can stay running indefinitely as long as you are connected to your
robot.
While not necessary when using the real robot, you can also bring up RViz:
$ rosrun rviz rviz -d `rospack find rbx2_arm_nav`/config/arm_nav.rviz
Hopefully your robot's arm will move in the desired way. When the script has finished,
it is a good idea to relax all the servos to let them cool down. If you are using
Dynamixel servos and the arbotix package, you can use the following command:
This pose is 85cm above the base_footprint, 20cm ahead of center and 10cm to the
right of center. These numbers will have to be adjusted to create a pose that is attainable
by your robot's arm and gripper. One way to do this is as follows:
• position the gripper manually at the desired target position and orientation
$ rosrun rbx2_arm_nav
You can now release the arm and read off the desired position and orientation
components from the screen. Copy and paste these values into your script for the target
pose.
As in the previous section, it's best to test things out with a simulated version of your
robot, either using the MoveIt! demo.launch file for your robot or the ArbotiX
simulator with a configuration file customized for your robot.
When you are ready, you can run the script the same way with the real robot. In Pi
Robot's case we would run:
$ rosrun rbx2_arm_nav moveit_ik_demo.py
When the arm has gone through its paces, relax the servos:
Before starting the arm tracker, we need to run the head tracker node so that the robot
will keep the depth camera pointed at the nearest object. Notice how we set the sim
argument to false in the command below:
If all goes well, the robot should periodically update the position of his arm so that the
gripper more or less reaches toward the nearest object positioned in front of the camera
(but at least 0.5 meters away).
For extra bonus points, it is left to the reader to improve the arm_tracker.py script by
setting the gripper orientation so that it actually points directly at the target rather than
always being set to a horizontal orientation as it is now. For example, if the target is
above the level of the shoulder, not only will the arm reach upward as it does now, but
the gripper will also angle upward toward the target.
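As a hint for that exercise, one possible approach is sketched below: compute the pitch angle from the height difference between the target and the shoulder and convert it to a quaternion. The function name, the shoulder_height value and the assumption that the target point is expressed in the base frame are illustrative, not part of the existing script:

import math
from tf.transformations import quaternion_from_euler

def gripper_orientation_toward(target, shoulder_height=0.85):
    # Pitch the gripper up or down so that it points at the target
    # instead of remaining horizontal
    dz = target.z - shoulder_height
    dx = target.x
    pitch = -math.atan2(dz, dx)    # negative pitch tilts the gripper upward
    return quaternion_from_euler(0.0, pitch, 0.0)   # returns (x, y, z, w)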
For more installation information, including source installs, see the online tutorial.
After completing the installation, test the basic functionality by starting the Gazebo
server:
$ gzserver
You should see some startup messages that look something like this:
Open another terminal window and run the Gazebo client to bring up the GUI:
$ gzclient
If all goes well, the Gazebo GUI should appear which looks like the following:
Terminate any gzserver or gzclient processes you might have already running, then
bring up the server and client together with the command:
$ gazebo
This will almost always solve the problem although at the cost of consuming more CPU
cycles. If this works for you, place the above command at the end of your ~/.bashrc
file (if you haven't already) so it will be run each time you open a new terminal.
To test your Gazebo ROS installation, run the empty_world.launch file from the
gazebo_ros package:
If successful, you should eventually see the Gazebo GUI with an empty world. If the
launch process crashes on the first attempt, type Ctrl-C and try again two or three
times. If it keeps on crashing, see the previous section on turning off hardware
acceleration.
NOTE: The Kobuki Gazebo package provides a model of the Kobuki's base only
without a Kinect depth camera.
When the robot and world have been loaded, the Gazebo screen should look like this:
We can now try to control the robot using standard ROS commands. But first we need
to know the topics and services used by the simulated robot. Get the list of current
topics by running the command:
$ rostopic list
Most of the Kobuki-related topics live under the namespace /mobile_base. The topic
/mobile_base/commands/velocity accepts Twist messages to control the base
while the /mobile_base/events/cliff and /mobile_base/events/bumper
topics register events from the cliff and bumper sensors. The standard /odom topic is
used to publish the robot's odometry data.
Since the Kobuki expects Twist commands on the topic
/mobile_base/commands/velocity instead of /cmd_vel, try moving the robot
forward at 0.2 meters per second by running the following command:
$ rostopic pub -r 10 /mobile_base/commands/velocity \
geometry_msgs/Twist '{linear: {x: 0.2}}'
You should see the robot move across the table and fall off the edge where we created
the gap. To keep the poor robot from bouncing around on the floor (or driving out of the
scene if it landed on its wheels), type Ctrl-C to kill the rostopic command and then
click on the Edit menu in the Gazebo GUI and select Reset Model Poses.
Since a real Kobuki robot is equipped with cliff sensors, why did the simulated robot
drive off the edge of the table? The Kobuki's Gazebo model includes plugins for the
cliff sensors and they do indeed detect the drop-off at the edge of the simulated table;
however, in this example we controlled the robot through the topic
/mobile_base/commands/velocity that sends Twist commands directly to the
base controller and ignores any data coming from the sensors. In the next section, we
will see how to examine the data returned by the simulated sensors and how to drive the
robot more safely.
This warning is expected since we are running a simulation, not a real robot, and the
cliff sensors only publish a message when at least one of them detects a drop-off.
Now repeat the experiment described in the previous section and set the robot heading
for a table edge while keeping your eye on the terminal window monitoring the cliff
messages. It might help to slow down the speed of the robot so this time change the
linear speed to 0.1 meters per second:
$ rostopic pub -r 10 /mobile_base/commands/velocity \
geometry_msgs/Twist '{linear: {x: 0.1}}'
As the robot passes over the table edge, you should see the following three messages on
the /mobile_base/events/cliff topic:
These messages indicate that all three cliff sensors (labeled 0, 1 and 2 in the sensor
field above) have entered state 1 which means "cliff detected". The values displayed
in the bottom field represent the distance to the floor although it is not clear from the
Kobuki documentation what the units are. Also, keep in mind that the IR sensors used
for cliff detection have a range of only 2-15 cm and the table tops are much higher than
this off the floor.
To see the definition of the Kobuki cliff message, run the command:
$ rosmsg show -r kobuki_msgs/CliffEvent
The -r option stands for "raw" and includes the comments written in the message
definition file. The key part of the output is:
# cliff sensor
uint8 LEFT = 0
uint8 CENTER = 1
uint8 RIGHT = 2
# cliff sensor state
uint8 FLOOR = 0
uint8 CLIFF = 1
uint8 sensor
uint8 state
# distance to floor when the cliff was detected
uint16 bottom
The message definition indicates that each message has three fields, sensor, state and
bottom, and that the sensor field can take the values 0, 1 and 2 to indicate the left, center and
right IR sensors, while the state field takes the value 0 if a floor is detected or 1 if a
cliff is detected.
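To see how these messages could be used in your own code, here is a minimal sketch of a node that subscribes to the cliff topic and publishes a zero Twist whenever a cliff is reported. This is only an illustration; note that a single stop message will be overridden if another node keeps publishing velocity commands, which is why a proper velocity multiplexer is used later in this chapter:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from kobuki_msgs.msg import CliffEvent

def cliff_cb(msg):
    if msg.state == CliffEvent.CLIFF:
        rospy.logwarn("Cliff detected by sensor " + str(msg.sensor))
        cmd_vel_pub.publish(Twist())    # an all-zero Twist means stop

if __name__ == '__main__':
    rospy.init_node('cliff_stop_sketch')
    cmd_vel_pub = rospy.Publisher('/mobile_base/commands/velocity', Twist, queue_size=1)
    rospy.Subscriber('/mobile_base/events/cliff', CliffEvent, cliff_cb)
    rospy.spin()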
To monitor the messages on the bump sensors topic, run the command:
$ rostopic echo /mobile_base/events/bumper
Then flip back to the other terminal window and keep an eye out for bumper messages
while watching the robot in Gazebo. The moment the robot strikes the block, you
should see a series of messages on the bumper topic that look like this:
As with the cliff sensors, the bumpers are numbered 0, 1 and 2 for left, center and right.
So the message above indicates that the center bumper has struck an object, bounced
back enough to release the bumper, then struck again as the robot continued to move
forward. (Depending on the processing speed of your computer, the robot might not
bounce off the block so that you will only see the first message above.) To see the
BumperEvent message definition, run the command:
$ rosmsg show -r kobuki_msgs/BumperEvent
# bumper
uint8 LEFT = 0
uint8 CENTER = 1
uint8 RIGHT = 2
# state
uint8 RELEASED = 0
uint8 PRESSED = 1
uint8 bumper
uint8 state
which shows us that there are three sensors that can have a state value of either 0
(released) or 1 (pressed).
With the velocity publisher still running, use your mouse to move the robot back away
from the block but this time aim it so that it strikes the block on either the left or right
side of the robot. This time you should see the bumper message indicate that either
bumper 0 (left) or 2 (right) was depressed.
Note that running into an object does not cause the robot to stop*, just like the cliff
sensors did not automatically stop the robot from plunging off the table. In the next
section we'll see how to control the robot more safely.
(* When the simulated Kobuki runs head on against an object, it does appear to stop.
However, this is simply because the forward motion is being impeded by the object, not
because the bump sensor has triggered any stopping behavior.)
This launch file loads the config file yocs_safety_mux.yaml from the directory
rbx2_gazebo/config. The file has the same basic layout as the configuration files
we used in Chapter 8 and is shown below:
subscribers:
  - name: "Safe reactive controller"
    topic: "safety_controller"
    timeout: 1.0
    priority: 2
publisher: "output/cmd_vel"
Note how we have given the safety controller top priority, followed by a joystick node
that publishes on the /joystick_cmd_vel topic like we used in Chapter 8, then any
other node that publishes on the standard /cmd_vel topic. Since the arbotix_gui
node is publishing on /cmd_vel, you should now be able to control the robot using the
trackpad; however, you will not be able to drive the robot off the table since the cliff
sensors will alert the safety controller which in turn will override the input from the
trackpad. Give it a try! (Just don't drive the robot backward over an edge since the
robot only has cliff sensors in front.) You can also try driving into the cinder blocks and
you should see that the safety controller causes the robot to "bounce" by backing it up a
bit after impact.
If you have a joystick, you can also add it to the mix by running the launch file:
$ roslaunch rbx2_nav mux_joystick_teleop.launch
Now drive the robot around with the joystick and observe how the safety controller
keeps you out of trouble.
The tables in this world are about 1 meter long and half a meter wide so two tables
together makes a 1 meter square that should nicely accommodate our nav_square.py
script from the rbx1_nav package. Notice how we have positioned the robot so that it
is pointing to the left parallel to the table edge and a couple of inches away from the
edge behind it. When you are finished positioning the robot, make sure it is no longer
selected by clicking on the arrow tool on the toolbar and selecting anything other
than the robot. (If you don't do this, the robot will not move when we send motion
commands below.)
To ensure the robot does not fall off the edge of the table, bring up the safety controller if
it is not already running:
$ roslaunch rbx2_gazebo kobuki_yocs_safety_controller.launch
The robot should do one loop around the perimeter of the table. If it gets too close to an
edge, one of the cliff detectors will fire and cause the safety controller to kick in. The
robot should then continue on its path.
NOTE: If you find that the robot simply spins in place when trying to make the first
turn around the table, chances are the simulator needs to be restarted. (The author
noticed on one trial that the simulated odometry had stopped registering rotations on the
/odom topic.) Exit the Gazebo GUI then type Ctrl-C in the terminal used to launch the
Kobuki playground world. Launch the world again and repeat the experiment described
above.
From this demonstration we see how Gazebo can be used to test a robot in a realistic
environment without having access to the real robot or even the test setup (e.g.
tables and cinder blocks in this case). Our local robotics club (HBRC) holds a
"TableBot Challenge" every year where the goal is to program a robot to locate an
object on the table and move it from one location to another, all while keeping the robot
from falling off. Using Gazebo would allow testing different robots and control
strategies in a variety of environments before risking the robot in what might be a
potentially expensive process of trial and error learning!
When the Gazebo GUI is up, click on the Insert tab. Here you should see a list of
models in your local cache as well as those in the database on the gazebosim.org site. (It
might take a minute or so for the remote database to be loaded.) Open the remote
database and select the RoboCup 2009 SPL Field. It will take a few moments to load,
then place it in the scene with your mouse and click the left mouse button to drop it in
place with the robot somewhere near the center of the field. Next add a RoboCup 3D
Simulation Ball using the same technique.
$ arbotix_gui cmd_vel:=/mobile_base/commands/velocity
Now test your soccer skills by driving the robot using the simulated track pad and
attempt to knock the ball into the goal. If you miss and the ball starts rolling out of the
field, simply select Reset Model Poses from the Edit menu.
Try inserting other objects listed under the Insert tab. If you end up creating a world
you like, you can save it using the Save World As option under the File menu. (By
tradition, Gazebo world files are saved with either a .world or .sdf extension.) Be
sure to save the world inside one of your ROS package directories so you can load it
using a standard launch file later if desired. For an example of how this is done, take a
look at the kobuki_playground.launch file in the kobuki_gazebo package:
$ roscd kobuki_gazebo/launch
$ cat kobuki_playground.launch
Note that it might take a little while before the robot appears depending upon the speed
of your computer and graphics card. Eventually, the screen should look something like
the one shown below.
Fetch's arm and head should move upward and to the right while the torso joint moves
upward. To lower the torso, re-center the head, and return the arm to the tucked
position, run the same script with the reset parameter set to true:
Now that we know that the basic simulation is working, let's try some arm navigation
using MoveIt!.
If you don't already have the Fetch simulation running, bring it up now:
It might take a few minutes for the robot to appear and for its controllers to load.
Now let's try an inverse kinematics demo. The script called fetch_ik_demo.py
(located in the rbx2_gazebo/scripts directory) is similar to our earlier
moveit_ik_demo.py script with the following differences:
• The new script does not refer to the named poses (like "resting") that were
defined in Pi Robot's MoveIt! configuration since these poses are not defined in
the Fetch MoveIt! configuration.
• The Fetch MoveIt! configuration uses the wrist_roll_link as the end-
effector rather than the gripper link as we did with Pi Robot. This allows the
robot to use interchangeable end-effectors but it also means that an extra
calculation is required when specifying a target pose for the gripper itself.
To try out the script, run the following command while keeping an eye on the robot in
Gazebo:
You should see the arm and torso move upward while the gripper assumes a horizontal
pose pointing forward.
To tuck the arm back into its resting position, use the fetch_tuck_arm.py script
included in the rbx2_arm_nav package:
The arm should first move along a non-Cartesian path to a starting configuration defined
in the script. The next trajectory should move the gripper along a Cartesian triangle
ending up back at the starting position. If MoveIt! fails to find a solution, simply re-run
the script and eventually it should work.
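For reference, a Cartesian move like the triangle just described can be generated with MoveIt!'s compute_cartesian_path() function. The sketch below is not the Fetch demo script itself; arm is assumed to be a MoveGroupCommander for the arm and the waypoint offsets are purely illustrative:

from copy import deepcopy

start_pose = arm.get_current_pose().pose

waypoints = []
wp = deepcopy(start_pose)
wp.position.y += 0.15                    # first leg: slide sideways
waypoints.append(deepcopy(wp))
wp.position.z += 0.10                    # second leg: up and back over
wp.position.y -= 0.15
waypoints.append(deepcopy(wp))
waypoints.append(deepcopy(start_pose))   # third leg: return to the start

# Interpolate end-effector poses every 1 cm with the jump threshold disabled
plan, fraction = arm.compute_cartesian_path(waypoints, 0.01, 0.0)

# Only execute the trajectory if the full path was computed
if fraction == 1.0:
    arm.execute(plan)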
Don't be alarmed if it takes a while for the simulated robot and environment to appear.
When it does, you should see Fetch at the entrance of a room together with a table and a
blue cube on the tabletop as shown below:
The launch file will fire up the MoveIt! configuration for Fetch as well as ROS nodes
for navigation, perception and grasping. The robot should move to the edge of the table,
pick up the cube, then carry it to the adjacent room and put it back down on the table
there.
• fit a plane to the depth camera's point cloud to detect the support surface
(usually a table top)
• remove the points from the cloud that belong to that surface (called "inliers") so
that the remaining points must correspond to objects sitting on the surface
• group the remaining points into clusters based on the Euclidean distance
between points
• fit either boxes or cylinders to the clusters and compute their poses
• once we have the pose of a target object, compute a set of suitable grasp poses
for the gripper like we did earlier with the virtual pick-and-place demo
Note that this technique does not use the RGB image from the camera at all so in theory,
the method would even work in the dark.
Although we will use the simple_grasping perception code for our demonstration,
another useful resource is the handle_detector package that uses 3D point cloud data
to locate grasping locations on different shaped objects. There is also the
To see how it works, you'll need a target object such as a small box or cylinder sitting on
a table top within reach of your robot's arm and gripper. A travel-sized toothpaste box
or pill bottle works well for small grippers. A soda can will also work if your robot's gripper
opens wide enough.
First terminate any running Gazebo simulations as well as any Fetch specific nodes or
MoveIt! launch files.
If your robot uses Dynamixel servos and the ArbotiX ROS driver like Pi Robot, you
might want to keep the servos from overheating by running the
monitor_dynamixels.py node we created in the chapter on ROS Diagnostics:
Fire up an image_view node so you can see what the camera sees:
Make sure the camera is at least a couple of feet or so above the table and angle it
downward so that it is looking at the target object. The author used a small toothpaste
box as the target object and the view through the camera looked like the image below on
the right along with a side shot of the setup on the left:
Now bring up the MoveIt! nodes for your robot. For Pi Robot we would run:
The simple grasping perception pipeline creates a ROS action server in the namespace
basic_grasping_perception/find_objects that can be called to trigger a
segmentation of the visual scene into a support surface (or surfaces) and the target object
(or objects). The result is a set of object primitives (boxes and cylinders) and their poses
that can be inserted into the MoveIt! planning scene. We can then use these poses to
guide the robot's arm to grasp and move the target while avoiding collisions with the
table top or other objects. Additional details will be given after we run the demo.
We are now ready to run our real pick-and-place script called
real_pick_and_place.py found in the rbx2_arm_nav/scripts directory. This
script begins by calling the simple grasping perception action server to get the shapes
and poses of the table top and supported objects. It then uses these poses with code
from our earlier moveit_pick_and_place_demo.py script to generate appropriate
grasp poses before executing the pick() and place() operations. (The place pose
used for the demonstration was chosen so that the robot would move the object to a
location just to the left of the torso and at the same distance in front of the robot as the
original target location.)
To test the perception pipeline but without trying to move the arm, run the script with
the --objects argument.
Initially, you might see a number of messages like the following in the terminal window:
Simply adjust the tilt angle of the camera either up or down until ideally only one object
is detected:
Back in RViz, the view should look something like the following:
Here we see that the perception pipeline has successfully identified the table (shown in
transparent gray) and the shape and location of the box sitting on top (shown in orange).
Note that the box might sometimes appear as a cylinder rather than rectangular (and vice
versa for cylindrical targets) since the fitting process is not perfect. For the most part,
this will not affect grasping as long as the width of the virtual object is roughly the same
in both cases.
Assuming your target object can be detected, Ctrl-C out of the
real_pick_and_place.py script, make sure your robot is ready for the real test, and
run the script again with the --once argument. This will attempt to execute one pick-
and-place cycle to grasp and move the object and then exit:
$ rosrun rbx2_arm_nav real_pick_and_place.py --once
It is important to note that once the pick operation begins, there is no further visual
feedback to guide the arm toward the target. In other words, once the table and target
object are located by the perception pipeline, the arm begins to move along a preplanned
trajectory all the way to the target. This means that the gripper will sometimes bump
the target or fail to grasp it altogether. Similarly, the place operation assumes that the
table is in the same position as it was originally detected. If we were to remove the table
part way through the arm movement, the robot would simply drop the object on the floor
at the place location. Finally, both the pick and place operations can fail due to a failure
by the IK solver to find a solution.
find_objects = actionlib.SimpleActionClient("basic_grasping_perception/find_objects",
                                            FindGraspableObjectsAction)

# Create an empty goal for the find_objects action
goal = FindGraspableObjectsGoal()

# Turn off the simple_grasping grasp planner; we generate our own grasps
goal.plan_grasps = False

# Send the goal request to the find_objects action server which will trigger
# the perception pipeline
find_objects.send_goal(goal)

# Wait up to 5 seconds for a result
find_objects.wait_for_result(rospy.Duration(5.0))

# The result will contain support surface(s) and object(s) if any are detected
find_result = find_objects.get_result()
To trigger the perception pipeline, we first create an empty goal object using the
FindGraspableObjectsGoal message type. Then we turn off the
simple_grasping grasp planner since it assumes a parallel gripper which Pi Robot
does not have. (We will use our own grasp generator as we did earlier with fake pick-
and-place.) Next we send the empty goal to the action server using our find_objects
action client which triggers the visual segmentation process on the action server. We
then wait for the results for up to 5 seconds and store whatever is returned in the variable
find_result. The result will be in the form of a
FindGraspableObjectsResult message which contains a wealth of information
about the shape and pose of any detected support surfaces and objects. You can see the
full definition of the message by running the command:
Now that we have the perception results we can get the poses of the support surface and
target object.
This loop cycles through all detected objects in the results and adds their shapes and
poses to the MoveIt! planning scene.
As we loop through the objects, we keep track of the one that is closest to the robot.
This will become our target. We get the size of the target from the dimensions of the
object's primitive (e.g. box, cylinder, etc.) and the pose from the object's
primitive_poses array.
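The loop looks something like the following sketch, where scene is assumed to be the MoveIt! PlanningSceneInterface, find_result is the action result obtained above and REFERENCE_FRAME is the planning frame; the code in real_pick_and_place.py may differ in detail:

from geometry_msgs.msg import PoseStamped
from shape_msgs.msg import SolidPrimitive

closest_x = float('inf')
target_size = None
target_pose = None

for i, graspable in enumerate(find_result.objects):
    primitive = graspable.object.primitives[0]
    pose = graspable.object.primitive_poses[0]

    # Wrap the raw pose in a PoseStamped for the planning scene
    ps = PoseStamped()
    ps.header.frame_id = REFERENCE_FRAME
    ps.pose = pose

    # Add the object to the planning scene as a box
    if primitive.type == SolidPrimitive.BOX:
        scene.add_box('object_' + str(i), ps, primitive.dimensions)

    # Remember the object closest to the robot along the x axis
    if pose.position.x < closest_x:
        closest_x = pose.position.x
        target_size = primitive.dimensions
        target_pose = ps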
target_pose.pose.orientation.x = 0.0
target_pose.pose.orientation.y = 0.0
target_pose.pose.orientation.z = 0.0
target_pose.pose.orientation.w = 1.0
Since we generally want the arm to approach the target with the gripper oriented
horizontally, we set the target pose accordingly. Note however that you could also try
using the object's pose directly for the target pose.
A similar loop in the script (which we won't repeat here) cycles through any detected
support surfaces and inserts them into the planning scene as we did for objects. In our
simple setup there is only one support surface so we can assume it is the table top in
front of the robot.
place_pose = PoseStamped()
place_pose.header.frame_id = REFERENCE_FRAME
place_pose.pose.position.x = target_pose.pose.position.x
place_pose.pose.position.y = 0.03
place_pose.pose.position.z = table_size[2] + target_size[2] / 2.0 + 0.015
place_pose.pose.orientation.w = 1.0
Here we define the place_pose which is the position and orientation we want the
object to be moved to after it is grasped. In this case we want the object to end up the
same distance in front of the robot (position.x), then 3 cm to the left of the torso
(position.y) and on the table surface (position.z) which we compute from the
table height, half the target height and a calibration factor of 0.015 meters upward just
to be sure we do not try to embed the object inside the table.
# Shift the grasp pose half the size of the target to center it in the gripper
try:
grasp_pose.pose.position.x += target_size[0] / 2.0
grasp_pose.pose.position.y -= 0.01
grasp_pose.pose.position.z += target_size[2] / 2.0
The last step before starting the pick operation is to set the initial grasp pose and
generate a collection of alternative grasps to try with the pick planner. Here we initialize
the grasp pose to the target pose. We then shift the pose by half the target size in the x
and z dimensions to better center the grasp on the object. The lateral shift in the y
dimension of -1.0 cm was determined empirically to center the object between the
gripper fingers.
1 <launch>
2
3 <env name="GAZEBO_MODEL_PATH" value="$(find fetch_gazebo)/models:$(optenv GAZEBO_MODEL_PATH)" />
4
5 <arg name="robot" default="fetch"/>
6 <arg name="debug" default="false"/>
7 <arg name="gui" default="true"/>
8 <arg name="headless" default="false"/>
9
10 <!-- Start Gazebo with a blank world -->
11 <include file="$(find gazebo_ros)/launch/empty_world.launch">
12 <arg name="debug" value="$(arg debug)" />
13 <arg name="gui" value="$(arg gui)" />
14 <arg name="paused" value="false"/>
15 <arg name="use_sim_time" value="true"/>
16 <arg name="headless" value="false"/>
Note that Line 11 above includes the standard empty_world.launch file from the
gazebo_ros package. Lines 12-16 pass a number of arguments to that launch file; in
particular, the gui argument (taken from the gui arg value set earlier in the launch
file) determines whether the gzclient GUI is started alongside the gzserver
process. By overriding the gui and headless arguments on the command line as we do
below, we cause the empty_world.launch file to run the gzserver process without the
gzclient GUI, i.e. headless.
To try it out, terminate any running Gazebo simulations and launch the Fetch
playground simulation in headless mode as follows:
$ roslaunch fetch_gazebo playground.launch headless:=true gui:=false
You should see various startup messages related to the Fetch controllers, but the Gazebo
GUI will not appear.
Now fire up RViz with the fetch.rviz config file found in the
rbx2_gazebo/config directory:
After a few seconds delay, you should see the model of the Fetch in RViz. Note that the
RViz world appears empty since the walls and tables in the simulated room are not
published in a way that can be visualized in RViz. However, the table and cube will be
come visible after we run the demo below and the perception pipeline detects both
objects and publishes them on the MoveIt! planning scene topic.
The first part of the pick and place script makes a call to the simple grasping PCL-based
perception pipeline that can detect planar surfaces and any objects on top of them.
Although we do not see the table and cube initially in RViz, they are being simulated in
Gazebo as is the robot's depth camera.
NOTE: You will probably notice that the movement of the grasped cube seems to lag
the movement of the gripper. This is an artifact of some as yet unidentified bug in the
MoveIt! RViz plugin. You can see smooth motion of the cube at the expense of jerky
movement of the arm by changing the following settings in RViz:
Now run the pick-and-place demo again. This time the arm motion will not be as
smooth but the grasped object will move in sync with the gripper.
We will also need a number of JavaScript packages that are not typically distributed as
Debian packages. These packages are already included in the rbx2_gui/js directory
but if you need to reinstall them for some reason, you can run the install-js-
packages.sh script as follows:
$ roscd rbx2_gui/scripts
$ sh install-js-packages.sh
To test the server, first connect to a camera. For a Kinect or Asus Xtion Pro fire up the
OpenNI node:
If you are using a webcam, you can use the usb_cam driver that we installed in Volume
One:
Note that web_video_server uses port 8080 by default. If you are already using that
port for something else, you can run web_video_server with a different port
parameter. The following example uses port 8181:
NOTE: When you run web_video_server with a different port as shown above, the
port number is stored on the ROS parameter server. So if you run web_video_server
in a later session without the port parameter, it will pick up the port number from the
parameter server instead of the default port 8080. To override the value on the
parameter server, you can specify the default port on the command line just as we did
above with the alternate port. You can also use the included web_video_server
launch file which sets the port parameter to the default 8080:
$ roslaunch rbx2_gui web_video_server.launch
If you need to run web_video_server on a different port, you can edit this file or
make a copy of it and set the port accordingly.
With both the camera node and web_video_server running, you should now be able
to view the camera image at the following URL:
http://localhost:PORT/stream_viewer?topic=/IMAGE_TOPIC
http://localhost:8080/stream_viewer?topic=/camera/rgb/image_raw
Assuming you can see the live streaming image and your camera driver supports
dynamic reconfigure, you can even change the image resolution on the fly using
rqt_reconfigure:
Note that if you are using a depth camera, you can also view the depth image using
web_video_server. For example, if you are using a Kinect or Xtion Pro camera, the
URL to view the depth image is:
http://localhost:8080/stream_viewer?topic=/camera/depth/image
Your web browser should display a grayscale video stream where darker pixels
represent points closer to the camera and lighter pixels represent points further away.
So to view the color image at a size of 1280x960 (regardless of the original resolution)
and a jpeg quality of 50, use the URL:
http://localhost:8080/stream_viewer?
topic=/camera/rgb/image_raw&width=1280&height=960&quality=50
Using a lower quality setting can be useful for reducing the bandwidth needed for the
video stream. To learn about other parameters applicable to the web_video_server
node, see the online Wiki page.
Ignore this for now as we will use our own configuration file located in the rbx2_gui
directory.
which can be ignored as there seems to be a bug in mini-httpd that causes this
message to be displayed even when the port is free. If the port really is in use by
another process, change the port in the mini-httpd.conf file to something else, like
8282 and try running it again. (But don't use ports 8080 or 9090 as these will be used
by other processes needed by rosbridge.) Remember the port number you end up
using as we will need it below.
<rosparam ns="/robot_gui">
maxLinearSpeed: 0.5
maxAngularSpeed: 2.0
videoTopic: /camera/rgb/image_color
</rosparam>
</launch>
First we set a few parameters under the namespace /robot_gui. These will be used
later on in our Javascript files. Next we launch the mini-httpd server, followed by the
web_video_server node and a robot_pose_publisher node which is only
required if your rosbridge application uses 2D navigation. We then launch the
laptop battery node in case we are on a laptop and want to monitor its battery level.
Finally, we run the rosbridge websocket server.
Before running the launch file, terminate any web_video_server node you might still
have running from an earlier session. Then run the command:
$ roslaunch rbx2_gui rosbridge.launch
If you are using a webcam, you can use the usb_cam driver that we installed in Volume
One:
Next, if you haven't already run the rosbridge.launch file, bring it up now:
$ roslaunch rbx2_gui rosbridge.launch
Next, let's focus on the areas above and below the video display. The row along the top
includes a text box for the rosbridge host and port that have been given default values
of localhost and 9090. The checkbox is initially checked and causes the GUI to
connect to the rosbridge server when it is first loaded. Try un-checking the box and the
video stream should either disappear or freeze. Check the box again and the video
should become live again. If the video doesn't reappear for some reason, simply reload
the page. We'll see how to map HTML controls into actions when we look at the
underlying Javascript.
Now take a look at the area below the video display. Notice that the Publish and
Subscribe check boxes are currently un-checked. Open a terminal window and issue
the command:
$ rostopic echo /chatter
Keeping the terminal window visible, return to your web browser and check the Publish
checkbox. You should see a series of messages in the terminal like the following:
data: Greetings Humans!
---
data: Greetings Humans!
---
data: Greetings Humans!
---
So checking the Publish checkbox results in publishing the message entered in the
Message text box beside it. Now check the Subscribe checkbox. You should see the
message "Greetings Humans!" appear in the Message text box to the right. Keeping
both check boxes checked, enter some new text in the Publish text box. As you type,
the message in the Subscribe Message box should mirror the new message. Returning
to your terminal window, you should also see the new message displayed there as well.
You can test the parameter buttons and text boxes in a similar fashion. Entering a name
and value for a parameter and clicking the Set button will store that parameter value on
the ROS parameter server. Entering a parameter name and clicking the Get button will
read the parameter value into the corresponding text box.
We will explore the simple_gui.html file and its associated Javascript below. But
first let's try out the navigation controls using the fake TurtleBot.
If you don't already have the rosbridge launch file running, fire it up now as well:
where a.b.c.d is the IP address of your rosbridge host. For example, if the machine
running rosbridge has local IP address 10.0.0.3, then enter the following URL on
your tablet or other device:
http://10.0.0.3:8181/simple_gui.html
(If the two machines are not connected through the same router then you will need to
configure any firewalls between the two device to allow access on ports 8080, 9090 and
8181.)
<title>Robot GUI</title>
<script type="text/javascript">
function init() {
var readyStateCheckInterval = setInterval(function() {
if (document.readyState === "complete") {
init();
clearInterval(readyStateCheckInterval);
}
}, 100);
}
</script>
We then define an init() function that simply waits for the page to be loaded
completely before continuing. This ensures that we won't try to connect to the
rosbridge and mjpeg servers until the various HTML elements are all in place.
<body onload="init();init_ros();">
Here we begin the layout of our GUI. We use a number of nested tables to split the
screen into four main panels: upper left for the rosbridge connection and video image,
lower left for a number of publish/subscribe boxes and buttons; upper right for the page
title and status area; and lower right for the motion controls. (You'll probably find the
source easier to read using your own editor or by viewing the code on Github.)
The first few lines shown above create a row just above the video display containing a
pair of text boxes for the rosbridge host and port and a checkbox that toggles the
connection off or on. The connectDisconnect() function assigned to the checkbox
is defined in our simple_gui.js file.
<!-- *** Table Row 2: Display the video image -->
<tr>
<td colspan="2" width="100%"><div id="videoCanvas" style="display:
block;"></div><br/></td>
</tr>
Next we insert a <div> tag named videoCanvas where the mjpegcanvas client will
display the video image.
<!-- *** Table Row 3: Publish a message on the /chatter topic -->
These lines add a checkbox to publish the message entered in the accompanying text box
on the /chatter topic. The toggleChatter() function is defined in
simple_gui.js. Note how we also assign the function updateChatterMsg() to the
onInput event handler. The updateChatterMsg() function is also defined in
simple_gui.js.
<!-- *** Table Row 4: Subscribe to the message on the /chatter topic -->
<tr>
<td style="vertical-align: top; text-align: right;">
Subscribe: <input id="chatterSub" type="checkbox" class="checkbox"
onClick="subChatter();">
</td>
<td style="vertical-align: top; text-align: left;">
Topic: /chatter
Message: <input readonly="readonly" type="text" size="30"
id="chatterData">
</td>
</tr>
Here we have a checkbox and read-only text box used to subscribe to the /chatter
topic and display the current message.
In these two blocks, we create buttons and text boxes to set or get a ROS parameter from
the parameter server.
Here we begin the right hand panel where we place four arrow icons for controlling the
robot's motion. In the lines above, we first insert two labels for displaying the current
linear and angular speeds as well as instructions on how to use the navigation arrows.
<tr>
<td align="right"><span style="font-size: 14pt;">Max Angular Spd:</span></td>
<td><input type="range" id="maxAngularSpeed" min="0.01" max="2.0" step="0.01"
value="2.0"
onChange="writeStatusMessage('maxAngularSpeedDisplay', this.value);"
onMouseUp="setMaxAngularSpeed(this.value);"
onInput="maxAngularSpeedDisplay.value=this.value;"></td>
<td><span style="font-weight: bold;"><output id="maxAngularSpeedDisplay"
size="4"></output></span></td>
</tr>
These lines create a pair of slider controls for changing the max linear and angular speed
of the robot. We set callbacks on the onChange(), onMouseUp() and onInput()
events so that we can display the new settings and update the new values in our
underlying Javascript variables.
13.10.2 The JavaScript code: simple_gui.js
Next, let's examine the simple_gui.js file located in the js subdirectory.
Link to source: simple_gui.js
// The default video topic (can be set in the rosbridge launch file)
var videoTopic = "/camera/rgb/image_raw";
We begin by setting the port numbers for the rosbridge and mjpeg servers. Here we
use the standard ports but you can change them if you are running the server processes
on alternate ports. We also pick up the hostname from the server and then construct the
websocket URL for connecting to the rosbridge server. The mjpeg host and port will
be used later in the script for connecting to the video stream.
Next we set a default ROS topic name for the video stream that can be overridden in the
rosbridge.launch file and we set the video quality to 50%. This setting generally
produces an acceptable image and significantly reduces the load on the CPU from the
web_video_server process. However, feel free to adjust to your liking.
// A variable to hold the chatter topic and publisher
var chatterTopic;

// A variable to hold the chatter message
var chatterMsg;
Here we define two variables to hold the chatter topic name and message, both of which
will be defined later in the script.
Here we test to see if we are on a touch device such as a tablet and if so, the variable
isTouchDevice is set to true; otherwise it will be false. We can then use this
variable to customize the display depending on the type of device we are on.
We use the variable shiftKey to indicate when the user is holding down the Shift
key when using a conventional display and mouse. This will allow us to use the Shift
key as a dead man's switch so that we can stop the robot if the key is released.
// The topic on which to publish Twist messages
var cmdVelTopic = "/cmd_vel";
The next set of variables relate to moving the robot base. Most ROS robots will
subscribe to the /cmd_vel topic for instructions on how to move so we set that topic as
the default here. We then set defaults for the maximum linear and angular speeds and
how much to increment each speed when clicking on the navigation arrows.
We will use a Javascript timer for publishing Twist messages on the /cmd_vel topic
as well as the text message on the /chatter topic (if selected). We will also use a stop
timer for automatically stopping the robot when using a touch device when the user lifts
their finger off the controls. Here we define two variables to store handles to these
timers and will set the timers themselves later in the code. The rate variable
determines how often we publish messages.
// Set the video width to 1/2 of the window width and scale the height
// appropriately.
var videoWidth = Math.round(windowWidth / 2.0);
var videoHeight = Math.round(videoWidth * 240 / 320);
To ensure a good fit of the video stream regardless of the size of the user's display, we
set the video width and height variables to be proportional to the current window width
and height.
// Connect to rosbridge
function init_ros() {
ros.connect(serverURL);
Here we create the main ROSLIB.Ros() object and store it in the variable named ros.
This object is defined in the roslib.min.js library that we imported at the top of the
simple_gui.html file. We will use this object shortly to connect to rosbridge. We
also initialize the hostname and port text fields in our GUI from the values defined
earlier in the script.
The ROSLIB object defines a number of event callbacks. In these lines we set the "on
error" callback to simply display the error in the Javascript console.
videoTopicParam.get(function(value) {
if (value != null) {
videoTopic = value;
}
});
Now we get into the heart of the script. Note how we place the key ROS-related
functions inside the "on connection" callback for our main ros object. This ensures that
we don't try to read or set ROS parameters or topics until we have a connection to the
rosbridge server.
In the lines above, we first use the ROSLIB.Param() object to read in the video topic as
defined in our rosbridge.launch file. We test for a null value in which case we will
use the default topic set earlier in the script.
Next we define the mjpegViewer variable as an instance of the
MJPEGCANVAS.Viewer() object. This object type is defined in the
mjpegcanvas.min.js library that we imported in the simple_gui.html file. The
mjpeg viewer object requires the ID of the <div> used in the HTML file to display the
video stream. Recall that we used <div id="videoCanvas"> in the
simple_gui.html file. We also have to set the host, port, width, height, video quality
writeStatusMessage('maxLinearSpeedDisplay',
maxLinearSpeed.toFixed(1));
}
});
This next block attempts to read in a parameter value for maxLinearSpeed that might
have been set in the rosbridge.launch file. If a non-null value is obtained, then it
overrides the global maxLinearSpeed variable set near the top of the script. We then
update the slider control on the GUI with the new values. Note that we have to do this
inside the parameter callback since roslib functions run asynchronously to prevent
locking up the overall script. We therefore want to be sure to update the form element
just after the parameter value is returned.
The next block in the script is nearly identical to the one above (so we won't display it
here) but sets the maxAngularSpeed in the same way. Next we turn to the chatter
message.
// Create the chatter topic and publisher
chatterTopic = new ROSLIB.Topic({
ros : ros,
name : '/chatter',
messageType : 'std_msgs/String',
});
document.addEventListener('keyup', function(e) {
if (!e.shiftKey) {
shiftKey = false;
stopRobot();
}
}, true);
These next two blocks assign callback functions to the keydown and keyup events. In
particular, we look for the user to depress the Shift key and set a flag to indicate
whether or not the Shift key is currently being pressed. Recall that we will use this flag
to implement a dead man's switch when using the mouse to control the motion of the
robot. If the Shift key is not depressed, we call the function stopRobot() which is
defined later in the script. Otherwise we pass along any detected keycode (like the
press of an arrow key) to the setSpeed() function, also defined later in the script.
Finally, we start the publisher loop. We use the Javascript setInterval() function
that can be used like a timer to execute a function given as the first argument every t
milliseconds as specified by the second argument. In our case, we are running the
function called refreshPublishers (described below) every 200 milliseconds which
gives us our desired rate of 5 times per second.
That concludes the "on connection" part of the script. We now turn to the remaining
functions that can be called after the connection is up and running.
function toggleChatter() {
var pubChatterOn = document.getElementById('chatterToggle').checked;
if (pubChatterOn) chatterTopic.advertise();
else chatterTopic.unadvertise();
}
function updateChatterMsg(msg) {
chatterMsg.data = msg;
}
Recall that the toggleChatter() function is bound to the Publish checkbox defined
in the simple_gui.html file. If the checkbox is checked, we call the advertise()
function; otherwise we call unadvertise(). Note that advertising a topic does not
automatically cause the corresponding message to be published. Publishing the message
is taken care of by the next function.
function refreshPublishers() {
// Keep the /cmd_vel messages alive
pubCmdVel();
if (chatterTopic.isAdvertised)
chatterTopic.publish(chatterMsg);
}
function pubCmdVel() {
vx = Math.min(Math.abs(vx), maxLinearSpeed) * sign(vx);
vz = Math.min(Math.abs(vz), maxAngularSpeed) * sign(vz);
if (isNaN(vx) || isNaN(vz)) {
vx = 0;
vz = 0;
}
cmdVelPub.publish(cmdVelMsg);
}
Publishing a Twist message on the /cmd_vel topic introduces a couple of new ROSLIB
objects. First we create an instance of the ROSLIB.Topic object that takes the main
roslib object, the topic name, and the message type as arguments. We then define the
function pubCmdVel() that uses the current global variables vx and vz representing the
desired linear and angular velocities of the robot and creates an instance of a
ROSLIB.Message object configured with the Twist message type and vx and vz in the
appropriate slots. At the end of the function, we publish the actual message on the
/cmd_vel topic using the publish() function that is defined for all ROSLIB.Topic
objects.
Recall that in the simple_gui.html file that defines the layout of our GUI, we
assigned the setSpeed() function to the onClick and onTouchStart events for the
navigation arrow icons. Here at last we define that function. The speed code can either
be a string such as 'forward' or 'left' or a keycode that will be available when a key is
pressed such as an arrow key. Depending on the keycode detected, we increase or
decrease the linear and/or angular velocities vx and vz appropriately. Since the
pubLoop() is running continually on the timer we set earlier, these new values will be
published on the /cmd_vel topic.
function stopRobot() {
vx = vz = 0;
var statusMessage = "vx: " + vx.toFixed(2) + " vz: " + vz.toFixed(2);
writeStatusMessage('cmdVelStatusMessage', statusMessage);
pubCmdVel();
}
function timedStopRobot() {
stopHandle = setTimeout(function() { stopRobot() }, 1000);
}
function clearTimedStop() {
clearTimeout(stopHandle);
}
The timedStopRobot() function is used when using a touch device and calls
stopRobot() after a delay of one second (1000 milliseconds). This function is
assigned to the onTouchEnd event in our GUI so that if the user stops touching the
arrow icons for one second or more, the robot will stop. The clearTimedStop()
function is then used to cancel the stop timer.
function subChatter() {
var subscribe = document.getElementById('chatterSub').checked;
var chatterData = document.getElementById('chatterData');
var listener = chatterTopic;
if (subscribe) {
console.log('Subscribed to ' + listener.name);
listener.subscribe(function(msg) {
chatterData.value = msg.data;
});
} else {
listener.unsubscribe();
console.log('Unsubscribed from ' + listener.name);
}
}
Recall that the subChatter() function is bound to the Subscribe checkbox on the
GUI. First we determine if the checkbox is checked and assign the result to the
subscribe variable. We also get a pointer to the chatterData text box on the GUI
so we know where to display the results.
Next we assign the chatterTopic object to the local listener variable. While not
strictly necessary, the assignment reminds us that we are now running in "listen" mode
rather than publishing. The ROSLIB.Topic object includes a subscribe function that
takes another function as a callback. The callback function takes a single argument
which is the message received on the topic subscribed to. In the code above, if the
subscribe checkbox is checked, we create a callback that displays the received
message in the chatterData textbox on the GUI. Otherwise, we call the
unsubscribe() function to stop listening.
The remaining few functions in the script should now be fairly self-explanatory given
the discussion above. We turn next to a more complete GUI using more modern
Javascript libraries especially suitable to touch devices.
If you are using a webcam, you can use the usb_cam driver that we installed in Volume
One:
Notice how we are using the same rosbridge launch file as before but we will now
point our browser to a different web page. This allows us to have different GUIs served
up by the same rosbridge server.
On the Main tab, the base control arrows have been replaced with a simulated touch pad
(colored blue) similar to the base control provided by the ArbotiX Gui we have used
earlier in the book. The touch pad is created using the KineticJS toolkit and the code is
located in the robot_gui.js file. Drag the yellow disc either with the mouse or your
finger (on a touch screen) and move it in the direction you want the robot to move. The
further away from center you move the disc, the faster the robot will move. Release the
disc and the robot will stop. The control pad is positioned so that if you are holding a
tablet, the yellow disc should fall nicely under your thumb for controlling the base.
The maximum linear and angular speeds can be adjusted by the slider controls beneath
the touch pad.
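The actual touch pad code lives in robot_gui.js; the following is only a rough
sketch of the idea, with made-up element ids, dimensions and speed limits, to show
how a draggable KineticJS shape can be mapped to the vx and vz velocities published
on /cmd_vel:

// Rough sketch only: not the book's robot_gui.js code.
// Assumes the global vx and vz variables used by the publishing loop.
var maxLinearSpeed  = 0.3;   // would normally come from the linear speed slider
var maxAngularSpeed = 1.0;   // would normally come from the angular speed slider

var stage = new Kinetic.Stage({container: 'basePad', width: 300, height: 300});
var layer = new Kinetic.Layer();
var puck  = new Kinetic.Circle({x: 150, y: 150, radius: 30,
                                fill: 'yellow', draggable: true});

puck.on('dragmove', function() {
    // Displacement from center scales the speeds up to the chosen maxima.
    vx = maxLinearSpeed  * (150 - puck.getY()) / 150;
    vz = maxAngularSpeed * (150 - puck.getX()) / 150;
});

puck.on('dragend', function() {
    // Releasing the disc snaps it back to center and stops the robot.
    puck.setX(150);
    puck.setY(150);
    vx = vz = 0;
    layer.draw();
});

layer.add(puck);
stage.add(layer);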
The charge level of the robot's battery and the laptop battery (if running on a laptop) are
displayed using a pair of jqWidgets fuel gauges defined in the file tabs/main.html.
Subscribing to the appropriate battery level topics is taken care of in the robot_gui.js
file and will need to be customized for your robot. We will use the fake battery
simulator to test them out below.
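As a hypothetical example of the kind of subscription involved, assuming the battery
level arrives as a std_msgs/Float32 on a topic called /battery_level and the gauge
element has the id robotBatteryGauge, the code would look roughly like this:

// Hypothetical names: adjust the topic, message type and element id for your robot.
var batteryTopic = new ROSLIB.Topic({
    ros : ros,
    name : '/battery_level',
    messageType : 'std_msgs/Float32'
});

batteryTopic.subscribe(function(msg) {
    // Push the latest battery level into the jqWidgets fuel gauge.
    $('#robotBatteryGauge').jqxGauge({ value: msg.data });
});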
The Misc tab contains the same controls and code we used with the simple GUI.
The Diagnostics and Parameters tabs are currently just placeholders defined by the
files tabs/diagnostics.html and tabs/parameters.html. Edit these files to
include your own code, or remove the tabs altogether by editing the file
robot_gui.html. For now, the pages simply display a placeholder message.
With the fake battery node running, the Robot Battery fuel gauge on the rosbridge GUI
should now start falling from 100 to 0 over the course of 15 minutes.
Next bring up RViz with the nav.rviz configuration file so that we can confirm that
the map has been loaded and that the fake TurtleBot can be seen on it.
$ rosrun rviz rviz -d `rospack find rbx2_gui`/nav.rviz
• a camera driver
where x.y.z.w is the IP address of the machine running the web server.
For example, if your login name is robomeister, then use the command:
Then log out of your window session completely, and log back in again. Alternatively,
simply reboot your computer.
13.14 Determining the Serial Number of a Device
Suppose we have an Arduino that we want to plug into one of our robot's USB ports.
Before plugging in the device, run the following command in any open terminal:
$ tail -f /var/log/syslog | grep tty
Now plug in your device (the Arduino in our example) and monitor the output in the
terminal window. In my case, after a short delay I see the output:
Oct 9 18:50:21 pi-robot-z935 kernel: [75067.120457] cdc_acm 2-1.3:1.0:
ttyACM0: USB ACM device
The appearance of ttyACM0 in the output tells me that the Arduino was assigned device
name /dev/ttyACM0. You can verify this with the command:
$ ls -l /dev/ttyACM0
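The output should look something like the following, where the owner, group and
timestamp shown here are only illustrative and will differ on your system:

crw-rw---- 1 root dialout 166, 0 Oct  9 18:50 /dev/ttyACM0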
(Note that the timestamp should be close to the time when you plugged in the device.)
Now that we know that the device name for the Arduino is /dev/ttyACM0, we can use
the udevadm utility to get its serial number:
$ udevadm info -q all -n /dev/ttyACM0 | grep -w ID_SERIAL_SHORT
E: ID_SERIAL_SHORT=74133353437351200150
The number we want is the part after the equals sign above:
74133353437351200150
Now that we have the Arduino's serial number, let's map it to a device filename.
13.15 UDEV Rules
A mapping between serial numbers and device filenames can be created using a set of
UDEV rules that we store in a file in the directory /etc/udev/rules.d. The first
thing we need to do is create a file in this directory to hold our rules.
Begin by moving into the directory and listing the files that are already there:
$ cd /etc/udev/rules.d
$ ls -l
Create a file with a name like 40-my-robot.rules using your favorite editor,
choosing a leading number that is not already used by another rules file. Note that
since we are creating a file in a system directory, we need to use sudo:
$ sudo gedit 40-my-robot.rules
Add the following rule to the file, all on one line:
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="74133353437351200150",
MODE="0666", OWNER="robomeister", GROUP="robomeister",
SYMLINK+="arduino"
where we have set the ID_SERIAL_SHORT variable to the value we found earlier for the
Arduino; change this to reflect your own device's ID. The MODE can always be set to
0666, and the OWNER and GROUP should be set to your Ubuntu username.
The SYMLINK variable defines the name we want our device to have: in this case, it will
be /dev/arduino since "/dev" will be automatically prepended to the name we
choose.
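You can add one rule per device to the same file. For example, if a USB2Dynamixel
controller reported the serial number A4012B5X (a made-up value used only for
illustration), a second rule mapping it to /dev/usb2dynamixel would look like this:

SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="A4012B5X",
MODE="0666", OWNER="robomeister", GROUP="robomeister",
SYMLINK+="usb2dynamixel"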
Save your changes and exit the editor. Then run the following command to reload the
UDEV rules:
$ sudo service udev restart
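If restarting the udev service does not work on your version of Ubuntu, reloading the
rules directly with udevadm should have the same effect:

$ sudo udevadm control --reload-rules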
That's all there is to it. We're now ready to test our new rule.
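With the Arduino plugged in (unplug and re-plug it if it was already connected), list
the new device name. The output should show a symbolic link similar to the following,
although the timestamp and the ttyACM number will vary on your system:

$ ls -l /dev/arduino
lrwxrwxrwx 1 root root 7 Oct  9 19:02 /dev/arduino -> ttyACM0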
As you can see, a symbolic link has been created between /dev/arduino and the real
device file /dev/ttyACM0. If there had been another ACM device plugged in before we
plugged in the Arduino, then /dev/ttyACM0 would have been taken and the link would
have been made to /dev/ttyACM1 instead. But in either case, we can just use
/dev/arduino in our ROS configuration files and not worry about the underlying
physical device name anymore.
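Once a device has a fixed udev name, any ROS configuration file can refer to that name
directly. For example, if a rule like the hypothetical USB2Dynamixel rule shown earlier
maps the servo controller to /dev/usb2dynamixel, an arbotix-style configuration such as
the following can list that name as its port and will keep working no matter which
ttyUSB or ttyACM number the hardware happens to receive: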
port: /dev/usb2dynamixel
baud: 1000000
rate: 100
sync_write: True
sync_read: False
read_rate: 10
write_rate: 10
joints: {
    head_pan_joint: {id: 1, neutral: 512, min_angle: -145, max_angle: 145},
    head_tilt_joint: {id: 2, neutral: 512, min_angle: -90, max_angle: 90}
}
controllers: {
    head_controller: {onboard: False, action_name: head_controller/follow_joint_trajectory, type: follow_controller, joints: [head_pan_joint, head_tilt_joint]}
}