Principles of Robot Motion
Intelligent Robotics and Autonomous Agents
Ronald C. Arkin, editor

Behavior-Based Robotics, Ronald C. Arkin, 1998
Robot Shaping: An Experiment in Behavior Engineering, Marco Dorigo and Marco Colombetti, 1998
Layered Learning in Multiagent Systems: A Winning Approach to Robotic Soccer, Peter Stone, 2000
Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines, Stefano Nolfi and Dario Floreano, 2000
Reasoning about Rational Agents, Michael Wooldridge, 2000
Introduction to AI Robotics, Robin R. Murphy, 2000
Strategic Negotiation in Multiagent Environments, Sarit Kraus, 2001
Mechanics of Robotic Manipulation, Matthew T. Mason, 2001
Designing Sociable Robots, Cynthia L. Breazeal, 2002
Introduction to Autonomous Mobile Robots, Roland Siegwart and Illah R. Nourbakhsh, 2004
Principles of Robot Motion: Theory, Algorithms, and Implementations, Howie Choset, Kevin Lynch, Seth Hutchinson, George Kantor, Wolfram Burgard, Lydia Kavraki, and Sebastian Thrun, 2005

Principles of Robot Motion
Theory, Algorithms, and Implementation

Howie Choset, Kevin Lynch, Seth Hutchinson, George Kantor, Wolfram Burgard, Lydia Kavraki, and Sebastian Thrun

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England

©2005 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please email special_sales@mitpress.mit.edu or write to Special Sales Department, The MIT Press, 5 Cambridge Center, Cambridge, MA 02142.

This book was set in LaTeX2e by Interactive Composition Corporation and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Principles of robot motion : theory, algorithms, and implementation / Howie Choset ... [et al.]. p. cm.
(Intelligent robotics and autonomous agents) "A Bradford book." Includes bibliographical references and index. ISBN 0-262-03327-5 (alk. paper) 1. Robots—Motion. I. Choset, Howie M. II. Series. TJ211.4.P75 2004 629.8'92—dc22 2004044906

10 9 8 7 6 5 4 3 2 1

To our families

Contents

Foreword xv
Preface xvii
Acknowledgments xxi
1 Introduction 1
1.1 Overview of Concepts in Motion Planning 9
1.2 Overview of the Book 12
1.3 Mathematical Style 13
2 Bug Algorithms 17
2.1 Bug1 and Bug2 17
2.2 Tangent Bug 23
2.3 Implementation 30
2.3.1 What Information: The Tangent Line 31
2.3.2 How to Infer Information with Sensors: Distance and Gradient 32
2.3.3 How to Process Sensor Information: Continuation Methods 35
3 Configuration Space 39
3.1 Specifying a Robot's Configuration 40
3.2 Obstacles and the Configuration Space 43
3.2.1 Circular Mobile Robot 43
3.2.2 Two-Joint Planar Arm 45
3.3 The Dimension of the Configuration Space 47
3.4 The Topology of the Configuration Space 50
3.4.3 Connectedness and Compactness 58
3.5.1 Matrix Representations of Rigid-Body Configuration 60
3.6 Parameterizations of SO(3) 66
4 Potential Functions 77
4.1 Additive Attractive/Repulsive Potential 80
4.2 Gradient Descent 84
4.3 Computing Distance for Implementation in the Plane 85
4.3.1 Mobile Robot Implementation 86
4.3.2 Brushfire Algorithm: A Method to Compute Distance on a Grid 86
4.4 Local Minima Problem 89
4.5 Wavefront Planner 90
4.6 Navigation Potential Functions 93
4.6.1 Sphere Space 93
4.6.2 Star Space 96
4.7 Potential Functions in Non-Euclidean Spaces 99
4.7.1 Relationship between Forces in the Workspace and Configuration Space 100
4.7.2 Potential Functions for Rigid-Body Robots 101
4.7.3 Path Planning for Articulated Bodies 103
5 Roadmaps 107
5.1 Visibility Maps: The Visibility Graph 110
5.1.1 Visibility Graph Definition 110
5.1.2 Visibility Graph Construction 113
5.2 Deformation Retracts: Generalized Voronoi Diagram 117
5.2.1 GVD Definition 118
5.2.2 GVD Roadmap Properties 119
5.2.3 Deformation Retract Definition 121
5.2.4 GVD Dimension: The Preimage Theorem and Critical Points 123
5.2.5 Construction of the GVD 126
5.3 Retract-like Structures: The Generalized Voronoi Graph 129
5.3.1 GVG Dimension: Transversality 130
5.3.2 Retract-like Structure Connectivity 133
5.3.3 Lyapunov Control: Sensor-Based Construction of the HGVG 136
5.4 Piecewise Retracts: The Rod-Hierarchical Generalized Voronoi Graph 138
5.5 Silhouette Methods 141
5.5.1 Canny's Roadmap Algorithm 142
5.5.2 Opportunistic Path Planner 151
6 Cell Decompositions 161
6.1 Trapezoidal Decomposition 162
6.2 Morse Cell Decompositions 168
6.2.1 Boustrophedon Decomposition 169
6.2.2 Morse Decomposition Definition 170
6.2.3 Examples of Morse Decomposition: Variable Slice 172
6.2.4 Sensor-Based Coverage 178
6.2.5 Complexity of Coverage 182
6.3 Visibility-Based Decompositions for Pursuit/Evasion 187
7 Sampling-Based Algorithms 197
7.1 Probabilistic Roadmaps 202
7.1.1 Basic PRM 203
7.1.2 A Practical Implementation of Basic PRM 208
7.1.3 PRM Sampling Strategies 216
7.1.4 PRM Connection Strategies 225
7.2 Single-Query Sampling-Based Planners 227
7.2.1 Expansive-Spaces Trees 230
7.2.2 Rapidly-Exploring Random Trees 233
7.2.3 Connection Strategies and the SBL Planner 238
7.3 Integration of Planners: Sampling-Based Roadmap of Trees 238
7.4 Analysis of PRM 242
7.4.1 PRM Operating in R^d 243
7.4.2 (ε, α, β)-Expansiveness 246
7.4.3 Abstract Path Tiling 250
7.5 Beyond Basic Path Planning 253
7.5.1 Control-Based Planning 253
7.5.2 Multiple Robots 254
7.5.3 Manipulation Planning 257
7.5.4 Assembly Planning 259
7.5.5 Flexible Objects 260
7.5.6 Biological Applications 262
8 Kalman Filtering 269
8.1 Probabilistic Estimation 270
8.2 Linear Kalman Filtering 272
8.2.1 Overview 273
8.2.2 A Simple Observer 274
8.2.3 Observing with Probability Distributions 277
8.2.4 The Kalman Filter 282
8.2.5 Kalman Filter Summary 284
8.2.6 Example: Kalman Filter for Dead Reckoning 285
8.2.7 Observability in Linear Systems 287
8.3 Extended Kalman Filter 289
8.3.1 EKF for Range and Bearing Localization 290
8.3.2 Data Association 292
8.3.3 EKF for Range-Only Localization 294
8.4 Kalman Filter for SLAM 294
8.4.1 Simple SLAM 294
8.4.2 Range and Bearing SLAM 296
9 Bayesian Methods 301
9.1 Localization 301
9.1.1 The Basic Idea of Probabilistic Localization 302
9.1.2 Probabilistic Localization as Recursive Bayesian Filtering 304
9.1.3 Derivation of Probabilistic Localization 308
9.1.4 Representations of the Posterior 310
9.1.5 Sensor Models 322
9.2 Mapping 328
9.2.1 Mapping with Known Locations of the Robot 328
9.2.2 Bayesian Simultaneous Localization and Mapping 337
10 Robot Dynamics 349
10.1 Lagrangian Dynamics 349
10.2 Standard Forms for Dynamics 353
10.3 Velocity Constraints 357
10.4 Dynamics of a Rigid Body 361
10.4.1 Planar Rotation 362
10.4.2 Spatial Rotation 363
11 Trajectory Planning 373
11.1 Preliminaries 374
11.2 Decoupled Trajectory Planning 374
11.2.1 Zero Inertia Points 378
11.2.2 Global Time-Optimal Trajectory Planning 384
11.3 Direct Trajectory Planning 384
11.3.1 Optimal Control 385
11.3.2 Nonlinear Optimization 389
11.3.3 Grid-Based Search 392
12 Nonholonomic and Underactuated Systems 401
12.1 Preliminaries 402
12.1.1 Tangent Spaces and Vector Fields 405
12.1.2 Distributions and Constraints 407
12.1.3 Lie Brackets 409
12.2 Control Systems 414
12.3 Controllability 416
12.3.1 Local Accessibility and Controllability 419
12.3.2 Global Controllability 422
12.4 Simple Mechanical Control Systems 424
12.4.1 Simplified Controllability Tests 425
12.4.2 Kinematic Reductions for Motion Planning 434
12.4.3 Simple Mechanical Systems with Nonholonomic Constraints 438
12.5 Motion Planning 440
12.5.1 Optimal Control 440
12.5.2 Steering Chained-Form Systems Using Sinusoids 444
12.5.3 Nonlinear Optimization 445
12.5.4 Gradient Methods for Driftless Systems 446
12.5.5 Differentially Flat Systems 447
12.5.6 Cars and Cars Pulling Trailers 450
12.5.7 Kinematic Reductions of Mechanical Systems 462
12.5.8 Other Approaches 465
A Mathematical Notation 473
B Basic Set Definitions 475
C Topology and Metric Spaces 478
C.1 Topology 478
C.2 Metric Spaces 479
C.3 Normed and Inner Product Spaces 480
C.4 Continuous Functions 481
C.5 Jacobians and Gradients 483
D Curve Tracing 487
D.1 Implicit Function Theorem 487
D.2 Newton-Raphson Convergence Theorem 488
E Representations of Orientation 489
E.1 Euler Angles 489
E.2 Roll, Pitch, and Yaw Angles 491
E.3 Axis-Angle Parameterization 492
E.4 Quaternions 494
F Polyhedral Robots in Polyhedral Worlds 499
F.1 Representing Polygons in Two Dimensions 499
F.2 Intersection Tests for Polygons 502
F.3 Configuration Space Obstacles in Q = R^2: The Star Algorithm 507
F.4 Configuration Space Obstacles in Q = SE(2) 508
F.5 Computing Distances between Polytopes in R^2 and R^3 509
G Analysis of Algorithms and Complexity Classes 513
G.1 Running Time 513
G.2 Complexity Theory 515
G.3 Completeness 520
H Graph Representation and Basic Search 521
H.1 Graphs 521
H.2 A* Algorithm 527
H.2.1 Basic Notation and Assumptions 530
H.2.2 Discussion: Completeness, Efficiency, and Optimality 531
H.2.3 Greedy-Search and Dijkstra's Algorithm 532
H.2.4 Example of A* on a Grid 533
H.2.5 Nonoptimistic Example 535
H.3 D* Algorithm 536
H.4 Optimal Plans 546
I Statistics Primer 547
I.1 Distributions and Densities 548
I.2 Expected Values and Covariances 550
I.3 Multivariate Gaussian Distributions 551
J Linear Systems and Control 552
J.1 State Space Representation 552
J.2 Stability 554
J.3 LTI Control Systems 557
J.4 Observing LTI Systems 559
J.5 Discrete Time Systems 562
J.5.1 Stability 562
J.5.2 Controllability and Observability 563
Bibliography 565
Index 597

Foreword

THIS IMPRESSIVE book is the result of a serious undertaking of distinguished motion planning researchers led by Howie Choset. Over the years, motion planning has become a major research theme in robotics. The goal is to enable robots to automatically compute their motions from high-level descriptions of tasks and models acquired through sensing. This goal has recently become even more crucial. On the one hand, robotics has expanded from a largely dominant focus on industrial manufacturing into areas where tasks are less repetitive and environments less structured, for instance, medical surgery, ocean and space exploration, assistance to the elderly, and search-and-rescue. In these areas, it is impractical to explicitly program the robots for each new goal. On the other hand, the need for automatic motion-planning capabilities has expanded outside the realm of robotics, into domains such as computer animation (e.g., to generate motions of avatars), computer-aided design (e.g., to test that a product can be assembled or serviced), verification of building codes (e.g., to check access of key facilities to the disabled), exploration of virtual environments (to help the user navigate in geometric models made of tens of millions of triangles), or even computational biology (to help analyze important molecular motions, like folding and binding). Today, progress in motion planning is increasingly motivated by these new applications.
By confronting novel and difficult problems, researchers have made considerable progress in recent years. Not only have faster and more robust algorithms been developed and tested, but the range of motion-planning problems has continuously expanded. In the '80s and part of the '90s, finding collision-free paths was the main or only goal. Today, while obstacle avoidance remains a key issue, other important constraints are considered as well, for instance, visibility, coverage, kinodynamic, optimality, equilibrium, and uncertainty constraints. These constraints make problems more interesting and lead to more useful algorithms. In addition, while research in motion planning used to be neatly divided between theory and practice, this distinction has now largely disappeared. Most recent contributions to the field combine effective algorithms tested on significant problems, along with some formal guarantees of performance.

Although journal and conference papers in motion planning have proliferated, there has not been any comprehensive reference text in more than a decade. This book fills this gap in outstanding fashion. It covers both the early foundations of the field and the recent theoretical and practical progress that has been made. It beautifully demonstrates how the enduring contributions of early researchers in the field, like Lozano-Pérez (configuration space) and Reif (complexity of motion planning), have led to a rich and vibrant research area, with ramifications that were unsuspected only a decade ago.

I am usually suspicious of books in which chapters have been written by different authors. But, to my good surprise, this book is more than a standard textbook. The fact that seven authors collaborated on this book attests to the diversity of the research going on in motion planning and the excitement associated with each research topic.
Simultaneously, the authors have done excellent work in providing a unified presentation of the core concepts and methodologies, and thus the book can be used as a textbook. This book will serve well the growing community of students, researchers, and engineers interested in motion planning.

Jean-Claude Latombe
Stanford, California

Preface

PEOPLE HAVE always dreamed of building intelligent machines to perform tasks. Today, these machines are called robots, derived from the Czech word robota, meaning servitude or drudgery. Robots have inspired the imagination of many, appearing in mythology, literature, and popular movies. Some popular robotic characters include Robby the Robot, R2D2 and C3PO, Golem, Pushpack, Wanky and Fanny, Gundam, and Lt. Cmdr. Data. Just like their literary counterparts, robots can take on many forms, and constructing them involves addressing many challenges in engineering, computer science, cognitive science, language, and so on. Regardless of the form of the robot or the task it must perform, robots must maneuver through our world. This book is about automatic planning of robot motions. However, the approaches developed in this book are not limited to robots: recently, they have been used for "designing" pharmaceutical drugs, planning routes on circuit boards, and directing digital actors in the graphics world.

The robot motion field and its applications have become incredibly broad, and this is why the book has seven co-authors. This type of book requires a broad spectrum of expertise. However, it should be stressed that this is indeed a textbook and not a collection of independent chapters put together in an edited volume. Each author participated in writing each of the chapters, and all of the chapters are integrated with each other.

This book is aimed at the advanced undergraduate or new graduate student interested in robot motion, and it may be read in a variety of ways.
Our goal in writing this book is threefold: to create an updated textbook and reference for robot motion, to make the fundamental mathematics behind robot motion accessible to the novice, and to stress implementation, relating low-level details to high-level algorithmic concepts. Since the robot motion field is indeed broad, this book cannot cover all the topics, nor do we believe that any book can contain exhaustive coverage on robot motion. We do, however, point the reader to Jean-Claude Latombe's Robot Motion Planning [262]. Latombe's book was one of the first text and reference books aimed at the motion-planning community, and it certainly was a guide for us when writing this book. In the decade since Latombe's book was published, there have been great advances in the motion-planning field, particularly in probabilistic methods, mechanical systems, and sensor-based planning, so we intended to create a text with these new advances. However, there are many topics not included in our text that are included in his, including assembly planning, geometric methods in dealing with uncertainty, multiple moving obstacles, approximate cell decompositions, and obstacle representations.

We also believe that concepts from control theory and statistical reasoning have gained greater relevance to robot motion. Therefore, we have included an appendix briefly reviewing linear control systems, which serves as background for our presentation on Kalman filtering. Our description of Kalman filtering differs from others in that it relies on a rich geometric foundation. We present a comprehensive description of Bayesian-based approaches. Concepts from mechanics and dynamics have also had great impact on robot motion. We have included a chapter on dynamics which serves as a basis for our description of trajectory planning and planning for underactuated robots.

This book can be read from cover to cover.
In doing so, there are four logical components to the book: geometric motion planning approaches (chapters 2 through 6), probabilistic methods (chapters 7, 8, and 9), mechanical control systems (chapters 10, 11, and 12), and the appendices. Covering the entire book could require a full-year course. However, not all of the topics in this book need be covered for a course on robot motion. For semester-long courses, the following themes are suggested:

Theme                 Chapter and Appendix Sequence
Path Planning         3, 4, G, 5, 7, and 6
Mobile Robotics       2, H, 3, 4, 5, D, and 6
Mechanical Control    3, 10, 11, and 12
Position Estimation   I, J, 8, and 9

The algorithms and approaches presented in this book are based on geometry and thus rest on a solid mathematical basis. Beyond anything superficial, in order to understand the many motion-planning algorithms, one must understand these mathematical underpinnings. One of the goals of this book is to make mathematical concepts more accessible to students of computer science and engineering. In this book, we introduce the intuition behind new mathematical concepts on an "as needed" basis to understand both how and why certain motion planning algorithms work. Some salient concepts are formally defined in each chapter, and the appendices contain overviews of some basic topics in more detail. The idea here is that the reader can develop an understanding of motion planning algorithms without getting bogged down by mathematical details, but can turn to them in the appendices when necessary. It is our hope that the reader will gain enough new knowledge in algebra, graph theory, geometry, topology, probability, filtering, and so on, to be able to read the state-of-the-art literature in robot motion.

We discuss implementation issues, and it is important to note that such issues are not mere details, but pose deep theoretical problems as well.
In chapters 2, 4, 5, and 6, we discuss specific issues on how to integrate range sensor information into a planner. The Kalman filtering (chapter 8) and Bayesian-based (chapter 9) approaches have been widely used in the robot motion field to deal with positioning and sensor uncertainty. Finally, we discuss in chapters 11 and 12 issues involving kinematic and dynamic constraints that real robots experience.

We have also included pseudocode for many of the algorithms presented throughout the book. In appendix H, we have included a discussion of graph search with detailed examples to enable the novice to implement some standard graph search approaches, with applicability well beyond robot motion. Finally, at the end of each chapter, we present problems that stress implementation.

Acknowledgments

WE FIRST and foremost want to thank our students, who were incredibly supportive of us when writing this book. We would like to thank the members of the Biorobotics/Sensor Based Planning Lab at Carnegie Mellon, especially Ji Yeong Lee; the Laboratory for Intelligent Mechanical Systems at Northwestern; the robotics group in the Beckman Institute at the University of Illinois, Urbana-Champaign; the Physical and Biological Computing Group at Rice, especially Andrew Ladd and Erion Plaku; the Lab for Autonomous Intelligent Systems at the University of Freiburg, especially Dirk Hähnel and Cyrill Stachniss; and the Stanford and Carnegie Mellon Learning Labs, for their contributions and efforts for this book. We thank Alfred Anthony Rizzi and Matt Mason for their inspiration and support, and Ken Goldberg and Jean-Claude Latombe for their input and advice.
For input in the form of figures or feedback on drafts, we thank Ercan Acar, Srinivas Akella, Nancy Amato, Serkan Apaydin, Prasad Atkar, Denise Bafman, Devin Balkcom, Francesco Bullo, Joel Burdick, Prasun Choudhury, Cynthia Cobb, Dave Conner, Al Costa, Frank Dellaert, Bruce Donald, Dieter Fox, Bob Grabowski, Aaron Greenfield, David Hsu, Pekka Isto, James Kuffner, Christian Laugier, Jean-Paul Laumond, Steve LaValle, Brad Lisien, Julie Nord, Jim Ostrowski, Nancy Pollard, Cédric Pradalier, Ioannis Rekleitis, Elie Shammas, Thierry Siméon, Sarjoun Skaff, and M. Dick Tsuyuki. We are also indebted to the many students who helped debug this text for us. Portions of this text were used in Carnegie Mellon's Sensor Based Motion Planning course, Carnegie Mellon's Mechanical Control Systems reading group, Northwestern's ME 450 Geometry in Robotics, University of Illinois' ECE 450 Advanced Robotic Planning, and University of Freiburg's Autonomous Systems course.

1 Introduction

SOME OF the most significant challenges confronting autonomous robotics lie in the area of automatic motion planning. The goal is to be able to specify a task in a high-level language and have the robot automatically compile this specification into a set of low-level motion primitives, or feedback controllers, to accomplish the task. The prototypical task is to find a path for a robot, whether it is a robot arm, a mobile robot, or a magically free-flying piano, from one configuration to another while avoiding obstacles. From this early piano mover's problem, motion planning has evolved to address a huge number of variations on the problem, allowing applications in areas such as animation of digital characters, surgical planning, automatic verification of factory layouts, mapping of unexplored environments, navigation of changing environments, assembly sequencing, and drug design. New applications bring new considerations that must be addressed in the design of motion planning algorithms.
Since actions in the physical world are subject to physical laws, uncertainty, and geometric constraints, the design and analysis of motion planning algorithms raises a unique combination of questions in mechanics, control theory, computational and differential geometry, and computer science. The impact of automatic motion planning, therefore, goes beyond its obvious utility in applications. The possibility of building computer-controlled mechanical systems that can sense, plan their own motions, and execute them has contributed to the development of our math and science base by asking fundamental theoretical questions that otherwise might never have been posed.

This book addresses the theory and practice of robot motion planning, with an eye toward applications. To focus the discussion, and to point out some of the important concepts in motion planning, let us first look at a few motivating examples.

Piano Mover's Problem

The classic path planning problem is the piano mover's problem [373]. Given a three-dimensional rigid body, for example a polyhedron, and a known set of obstacles, the problem is to find a collision-free path for the omnidirectional free-flying body from a start configuration to a goal configuration. The obstacles are assumed to be stationary and perfectly known, and execution of the planned path is exact. This is called offline planning, because planning is finished in advance of execution. Variations on this problem are the sofa mover's problem, where the body moves in a plane among planar obstacles, and the generalized mover's problem, where the robot may consist of a set of rigid bodies linked at joints, e.g., a robot arm.

The key problem is to make sure no point on the robot hits an obstacle, so we need a way to represent the location of all the points on the robot.
This representation is the configuration of the robot, and the configuration space is the space of all configurations the robot can achieve. An example of a configuration is the set of joint angles for a robot arm or the one orientation and two position variables for a sofa in the plane. The configuration space is generally non-Euclidean, meaning that it does not look like an n-dimensional Euclidean space R^n. The dimension of the configuration space is equal to the number of independent variables in the representation of the configuration, also known as the degrees of freedom (DOF). The piano has six degrees of freedom: three to represent the position (x-y-z) and three to represent the orientation (roll-pitch-yaw). The problem is to find a curve in the configuration space that connects the start and goal points and avoids all configuration space obstacles that arise due to obstacles in the space.

The Mini AERCam

NASA's Johnson Space Center is developing the Mini AERCam, or Autonomous Extravehicular Robotic Camera, for visual inspection tasks in space (figure 1.1). It is a free-flying robot equipped with twelve cold gas thrusters, allowing it to generate a force and torque in any direction. When operating in autonomous mode, it must be able to navigate in a potentially complex three-dimensional environment. In this respect the problem is similar to the piano mover's problem. Since we have to apply thrusts to cause motion, however, we need to plan not only the path the robot is to follow, but also the speed along the path. This is called a trajectory, and the thruster inputs are determined by the dynamics of the robot. In the piano mover's problem, we only worried about geometric, or kinematic, issues.

Figure 1.1 NASA's Mini AERCam free-flying video inspection robot.

Figure 1.2 (a) The CyCab.
(b) The Segway Human Transporter.

Personal Transport Vehicles

Small personal transport vehicles may become a primary means of transportation in pedestrian-filled urban environments where the size, speed, noise, and pollution of automobiles is undesirable. One concept is the CyCab [355], a small vehicle designed by a consortium of institutions in France to transport up to two people at speeds up to 30 km/h (figure 1.2a). Another concept is the Segway HT, designed to carry a single rider at speeds up to 20 km/h (figure 1.2b).

To simplify control of vehicles in crowded environments, one capability under study is automatic parallel parking. The rider would initiate the parallel-parking procedure, and the onboard computer would take over from there. Such systems will soon be commercially available in automobiles. On the surface, this problem sounds like the sofa mover's problem, since both involve a body moving in the plane among obstacles. The difference is that cars and the vehicles in figure 1.2 cannot instantaneously slide sideways like the sofa. The velocity constraint preventing instantaneous sideways motion is called a nonholonomic constraint, and the motion planner must take this constraint into account. Systems without velocity constraints, such as the sofa, are omnidirectional in the configuration space.

Museum Tour Guides

In 1997, a mobile robot named RHINO served as a fully autonomous tour guide at the Deutsches Museum Bonn (figure 1.3). RHINO was able to lead museum visitors from one exhibit to the next by calculating a path using a stored map of the museum. Because the perfect execution model of the piano mover's problem is unrealistic in this setting, RHINO had to be able to localize itself by comparing its sensor readings to its stored map.

Figure 1.3 RHINO, the interactive mobile tour guide robot.

Figure 1.4 The Mars rover Sojourner.
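The nonholonomic constraint from the parallel-parking discussion above can be made concrete with the standard unicycle model often used for such vehicles. The following is a minimal sketch, not from the text; the speeds, turn rate, and step size are illustrative assumptions.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of unicycle kinematics: the vehicle rolls forward
    at speed v and turns at rate omega, but its sideways velocity is
    always zero (the nonholonomic constraint)."""
    xdot = v * math.cos(theta)
    ydot = v * math.sin(theta)
    # The constraint xdot*sin(theta) - ydot*cos(theta) = 0 holds
    # identically, whatever controls v and omega we choose.
    assert abs(xdot * math.sin(theta) - ydot * math.cos(theta)) < 1e-12
    return x + xdot * dt, y + ydot * dt, theta + omega * dt

# Even though the vehicle can never slide sideways, sequencing forward
# and turning motions still reaches poses "beside" the start pose,
# which is exactly what a parallel-parking planner exploits.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = unicycle_step(x, y, theta, v=1.0, omega=0.5, dt=0.01)
```

The planner's job (chapter 12) is to find control sequences like this one that respect the constraint while still connecting arbitrary start and goal configurations.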
To deal with uncertainty and changes in the environment, RHINO employed a sensor-based planning approach, interleaving sensing, planning, and action.

Planetary Exploration

One of the most exciting successes in robot deployment was a mobile robot, called Sojourner (figure 1.4), which landed on Mars on July 4, 1997. Sojourner provided up-close images of the Martian terrain surrounding the lander. Sojourner did not move very far from the lander and was able to rely on motion plans generated offline on Earth and uploaded. Sojourner was followed by its fantastically successful cousins, Spirit and Opportunity, rovers that landed on Mars in January 2004 and have provided a treasure trove of scientific data. In the future, robots will explore larger areas and thus will require significantly more autonomy. Beyond navigation capability, such robots will have to be able to generate a map of the environment using sensor information. Mapping an unknown space with a robot that experiences positioning error is an especially challenging "chicken and egg" problem: without a map the robot cannot determine its own position, and without knowledge about its own position the robot cannot compute the map. This problem is often called simultaneous localization and mapping, or simply SLAM.

Demining

Mine fields stifle economic development and result in tragic injuries and deaths each year. As recently as 1994, 2.5 million mines were placed worldwide while only 100,000 were removed.
If the planner is guaranteed to find a path that covers every point in the field when such a path exists, then we call the planner complete. Completeness is obviously a crucial requirement for this task, Coverage has other applications including floor cleaning { 16], lawn mowing (198), ‘unexploded ordnance hunting [260], and harvesting [341]. In all ofthese applications, the robot must simultaneously localize itself to ensure complete coverage. Fixed-base Robot Arms in Industry Inhighly strvctured spaces, fixed-base robot arms performa variety of tasks, including assembly, welding, and painting. In painting, for example, the robot must deposit a uniform coating over all points on a target surface (figure 1.5). This coverage problem presents new challenges because (1) ensuring equal paint deposition is a more severe requirement than mere coverage, (2) the surface is not usually flat, and (3) the robot ‘must properly coordinate its internal degrees of freedom to drive the paint atomizer over the surface. Industrial robot installations are clearly driven by economic factors, so there is a high priority on minimizing task execution time, This motivates motion planners that return time-optimal motion plans. Other kinds of tasks may benefit from other kinds of optimality, such as energy- or fuel-optimality for mobile robots. Figure 15 ABB painting robot named Tobe. 1 Introduction Fi Ire 1.7 The Carnegie Mellon snake robot Holt mounted on 2 mobile base. Snake Robots for Urban Search and Rescue When a robot has more degrees of freedom than required to complete its task, the robot is called redundant. When a robot has many extra degrees of freedom, then itis called hyper-redundant. These robots have multidimensional non-Evclidean configu- ration spaces. 
Hyper-redundant serial mechanisms look like elephant trunks or snakes (figures 1.6 and 1.7), and they can use their extra degrees of freedom to thread through tightly packed volumes to reach locations inaccessible to humans and conventional machines. These robots may be particularly well-suited to urban search and rescue, where it is of paramount importance to locate survivors in densely packed rubble as quickly and safely as possible.

Robots in Surgery
Robots are increasingly used in surgery applications. In noninvasive stereotactic radiosurgery, high-energy radiation beams are cross-fired at brain tumors. In certain cases, these beams are delivered with high accuracy, using six-degree-of-freedom robotic arms (e.g., the CyberKnife system [1]). Robots are also used in invasive procedures. They often enhance the surgeon's ability to perform technically precise maneuvers. For example, the da Vinci Surgical System [2] can assist in advanced surgical techniques such as cutting and suturing. The ZEUS System [3] can assist in the control of blunt retractors, graspers, and stabilizers. Clearly, as robotics advances, more and more of these systems will be developed to improve our healthcare.

Figure 1.8 The motions of a digital actor are computed automatically. (Courtesy of J. C. Latombe)

Digital Actors
Algorithms developed for motion planning or sensor interpretation are not just for robots anymore. In the entertainment industry, motion planning has found a wide variety of applications in the generation of motion for digital actors, opening the way to exciting scenarios in video games, animation, and virtual environments (figure 1.8).

Drug Design
An important problem in drug design and the study of disease is understanding how a protein folds to its native or most stable configuration.
By considering the protein as an articulated linkage (figure 1.9), researchers are using motion planning to identify likely folding pathways from a long straight chain to a tightly folded configuration. In pharmaceutical drug design, proteins are combined with smaller molecules to form complexes that are vital for the prevention and cure of disease. Motion planning methods are used to analyze molecular binding motions, allowing the automated testing of drugs before they are synthesized in the laboratory.

Figure 1.9 A molecule represented as an articulated linkage.

1.1 Overview of Concepts in Motion Planning
The previous examples touched on a number of ways to characterize the motion planning problem and the algorithm used to address it. Here we summarize some of the important concepts. Our characterization of a motion planner is according to the task it addresses, properties of the robot solving the task, and properties of the algorithm.¹ We focus on topics that are covered in this book (table 1.1).

1. This classification into three categories is somewhat arbitrary but will be convenient for introduction.

Task
The most important characterization of a motion planner is according to the problem it solves. This book considers four tasks: navigation, coverage, localization, and mapping. Navigation is the problem of finding a collision-free motion for the robot system from one configuration (or state) to another. The robot could be a robot arm, a mobile robot, or something else. Coverage is the problem of passing a sensor or tool over all points in a space, such as in demining or painting. Localization is the problem of using a map to interpret sensor data to determine the configuration of the robot. Mapping is the problem of exploring and sensing an unknown environment
to construct a representation that is useful for navigation, coverage, or localization. Localization and mapping can be combined, as in SLAM.

Table 1.1 Some of the concepts covered in this book.

  Task     | Properties of the Robot                | Properties of the Algorithm
  ---------|----------------------------------------|------------------------------------------
  Navigate | Configuration space, degree of freedom | Optimal/nonoptimal motions
  Map      | Kinematic/dynamic                      | Computational complexity
  Cover    | Omnidirectional or                     | Completeness (resolution, probabilistic)
  Localize | motion constraints                     | Online/offline
           |                                        | Sensor-based/world model

There are a number of interesting motion planning tasks not covered in detail in this book, such as navigation among moving obstacles, manipulation and grasp planning, assembly planning, and coordination of multiple robots. Nonetheless, algorithms in this book can be adapted to those problems.

Properties of the Robot
The form of an effective motion planner depends heavily on properties of the robot solving the task. For example, the robot and the environment determine the number of degrees of freedom of the system and the shape of the configuration space. Once we understand the robot's configuration space, we can ask if the robot is free to move instantaneously in any direction in its configuration space (in the absence of obstacles). If so, we call the robot omnidirectional. If the robot is subject to velocity constraints, such as a car that cannot translate sideways, both the constraint and the robot are called nonholonomic. Finally, the robot could be modeled using kinematic equations, with velocities as controls, or using dynamic equations of motion, with forces as controls.

Properties of the Algorithm
Once the task and the robot system are defined, we can choose between algorithms based on how they solve the problem. For example, does the planner find motions that are optimal in some way, such as in length, execution time, or energy consumption? Or does it simply find a solution satisfying the constraints?
In addition to the quality of the output of the planner, we can ask questions about the computational complexity of the planner. Are the memory requirements and running time of the algorithm constant, polynomial, or exponential in the "size" of the problem description? The size of the input could be the number of degrees of freedom of the robot system, the amount of memory needed to describe the robot and the obstacles in the environment, etc., and the complexity can be defined in terms of the worst case or the average case. If we expect to scale up the size of the inputs, a planner is often only considered practical if it runs in time polynomial or better in the inputs. When a polynomial-time algorithm has been found for a problem that previously could only be solved in exponential time, some key insight into the problem has typically been gained.

Some planners are complete, meaning that they will always find a solution to the motion planning problem when one exists or indicate failure in finite time. This is a very powerful and desirable property. For the motion planning problem, as the number of degrees of freedom increases, complete solutions may become computationally intractable. Therefore, we can seek weaker forms of completeness. One such form is resolution completeness. It means that if a solution exists at a given resolution of discretization, the planner will find it. Another weaker form of completeness is probabilistic completeness. It means that the probability of finding a solution (if one exists) converges to 1 as time goes to infinity.

Optimality, completeness, and computational complexity naturally trade off with each other. We must be willing to accept increased computational complexity if we demand optimal motion plans or completeness from our planner.

We say a planner is offline if it constructs the plan in advance, based on a known model of the environment, and then hands the plan off to an executor.
The planner is online if it incrementally constructs the plan while the robot is executing. In this case, the planner can be sensor-based, meaning that it interleaves sensing, computation, and action. The distinction between offline algorithms and online sensor-based algorithms can be somewhat murky; if an offline planner runs quickly enough, for example, then it can be used in a feedback loop to continually replan when new sensor data updates the environment model. The primary distinction is computation time, and practically speaking, algorithms are often designed and discussed with this distinction in mind. A similar issue arises in control theory when attempting to distinguish between feedforward control (commands based on a reference trajectory and dynamic model) and feedback control (commands based on error from the desired trajectory), as techniques like model predictive control essentially use fast feedforward control generation in a closed loop. In this book we will not discuss the low-level feedback controllers needed to actually implement robot motions, but we will assume they are available.

1.2 Overview of the Book
Chapter 2 dives right into a class of simple and intuitive "Bug" algorithms requiring a minimum of mathematical background to implement and analyze. The task is to navigate a point mobile robot to a known goal location in a plane filled with unknown static obstacles. The Bug algorithms are sensor-based: the robot uses a contact sensor or a range sensor to determine when it is touching or approaching an obstacle, as well as odometry or other sensing to know its current position in the plane. It has two basic motion primitives, moving in a straight line and following a boundary, and it switches between these based on sensor data. These simple algorithms guarantee that the robot will arrive at the goal if it is reachable.
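As a minimal illustration of the two-primitive structure just described, the behavior switch can be sketched as a tiny state machine. This is only a sketch; the predicates `touching_obstacle` and `leave_condition_met` are placeholders, not the precise Bug1/Bug2 hit- and leave-point rules developed in chapter 2.

```python
# A sketch of the Bug-style behavior switch: two motion primitives,
# selected from sensor predicates.  The predicates themselves (hit-point
# detection, leave condition) are placeholders here.

MOTION_TO_GOAL = "motion-to-goal"
FOLLOW_BOUNDARY = "boundary-following"

def next_behavior(current, touching_obstacle, leave_condition_met):
    """Return the motion primitive to use on the next control cycle."""
    if current == MOTION_TO_GOAL and touching_obstacle:
        return FOLLOW_BOUNDARY   # hit point: start circumnavigating
    if current == FOLLOW_BOUNDARY and leave_condition_met:
        return MOTION_TO_GOAL    # leave point: head toward the goal again
    return current
```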
To move beyond simple point robots, in chapter 3 we study the configuration space of more general robot systems, including rigid bodies and robot arms. The mathematical foundations in this chapter allow us to view general path planning problems as finding paths through configuration space. We study the dimension (degrees of freedom), topology, and parameterizations of non-Euclidean configuration spaces, as well as representations of these configuration spaces as surfaces embedded in higher-dimensional Euclidean spaces. The forward kinematic map is introduced to relate one choice of configuration variables to another. The differential of this map, often called the Jacobian, is used to relate the velocities in the two coordinate systems. Material in this chapter is referenced throughout the remainder of the book.

Chapter 4 describes a class of navigation algorithms based on artificial potential functions. In this approach we set up a virtual potential field in the configuration space to make obstacles repulsive and the goal configuration attractive to the robot. The robot then simply follows the downhill gradient of the artificial potential. For some navigation problems, it is possible to design the potential field to ensure that following the gradient will always take the robot to the goal. If calculating such a potential field is difficult or impossible, we can instead use one that is easy to calculate but may have the undesirable property of local minima, locations where the robot gets "stuck." In this case, we can simply use the potential field to guide a search-based planner. Potential fields can be generated offline, using a model of the environment, or calculated in real time using current sensor readings. Purely reactive gradient-following potential field approaches always run the risk of getting stuck in local minima, however.
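The gradient-following idea can be sketched as follows. This is an illustrative assumption-laden sketch, not the constructions of chapter 4: the quadratic attractive well, the single disc obstacle, and all gains and step sizes are made up for the example.

```python
import math

# Sketch: gradient descent on an artificial potential (attractive
# quadratic well at the goal plus a repulsive term near one disc
# obstacle).  All gains and the obstacle model are illustrative.

def grad_potential(q, goal, obs, obs_r, k_att=1.0, k_rep=1.0, d0=1.0):
    """Gradient of U_att + U_rep at configuration q = (x, y)."""
    gx = k_att * (q[0] - goal[0])          # attractive gradient
    gy = k_att * (q[1] - goal[1])
    dx, dy = q[0] - obs[0], q[1] - obs[1]
    dist_c = math.hypot(dx, dy)
    d = dist_c - obs_r                     # distance to the obstacle
    if 0 < d < d0:                         # repulsion acts only nearby
        nx, ny = dx / dist_c, dy / dist_c
        c = -k_rep * (1.0 / d - 1.0 / d0) / d**2
        gx += c * nx
        gy += c * ny
    return gx, gy

def descend(q, goal, obs, obs_r, step=0.05, iters=500):
    """Follow the downhill gradient until it (nearly) vanishes."""
    x, y = q
    for _ in range(iters):
        gx, gy = grad_potential((x, y), goal, obs, obs_r)
        if math.hypot(gx, gy) < 1e-4:      # at a minimum (goal or local)
            break
        x -= step * gx
        y -= step * gy
    return x, y
```

Note that `descend` stops at any gradient zero, which is exactly the local-minimum hazard mentioned above.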
In chapter 5, we introduce more concise representations of the robot's free space that a planner can use to plan paths between two configurations. These structures are called roadmaps. A planner can also use a roadmap to explore an unknown space. By using sensors to incrementally construct the roadmap, the robot can then use the roadmap for future navigation problems. This chapter describes several roadmaps including the visibility graph, the generalized Voronoi diagram, and Canny's original roadmap.

Chapter 6 describes an alternative representation of the free space called a cell decomposition, which consists of a set of cells of the free space and a graph of cells with connections between adjacent cells. A cell decomposition is useful for coverage tasks, and it can be computed offline or incrementally using sensor data.

Constructing complete and exact roadmaps of an environment is generally quite computationally complex. Therefore, chapter 7 develops sampling-based algorithms that trade completeness guarantees for a reduction of the running time of the planner. This chapter highlights recent work in probabilistic roadmaps, expansive-spaces trees, and rapidly-exploring random trees, and the broad range of motion planning problems to which they are applicable.

Probabilistic reasoning can also address the problems of sensor uncertainty and positioning error that plague mobile robot deployment. We can model these uncertainties and errors as probability distributions and use Kalman filtering (chapter 8) and Bayesian estimation (chapter 9) to address localization, mapping, and SLAM problems.

Just as the description of configuration space in chapter 3 provides many of the kinematic and geometric tools used in path planning, the description of second-order robot dynamics in chapter 10 is necessary for feasible trajectory planning, i.e., finding motions parameterized by time.
We can then pose time- and energy-optimal trajectory planning problems for dynamic systems subject to actuator limits, as described in chapter 11.

Chapter 11 assumes that the robot has an actuator for each degree of freedom. In chapter 12 we remove that assumption and consider robot systems subject to nonholonomic (velocity) constraints and/or acceleration constraints due to missing actuators, or underactuation. We study the reachable states for such systems, i.e., controllability, using tools from differential geometry. The chapter ends by describing planners that find motions for systems such as cars, cars pulling trailers, and spacecraft or robot arms with missing actuators.

1.3 Mathematical Style
Our goal is to present topics in an intuitive manner while helping the reader appreciate the deeper mathematical concepts. Often we suppress mathematical rigor, however, when intuition is sufficient. In many places proofs of theorems are omitted, and the reader is referred to the original papers. For the most part, mathematical concepts are introduced as they are needed. Supplementary mathematical material is deferred to the appendices to allow the reader to focus on the main concepts of the chapter.

Figure 1.10 A path is a curve in the free configuration space Q_free connecting c(0) to c(1).

Throughout this book, robots are assumed to operate in a planar (R²) or three-dimensional (R³) ambient space, sometimes called the workspace W. This workspace will often contain obstacles; let WO_i be the ith obstacle. The free workspace is the set of points W_free = W \ ∪_i WO_i, where \ is the set difference operator.

Motion planning, however, does not usually occur in the workspace. Instead, it occurs in the configuration space Q (also called C-space), the set of all robot configurations. We will use the notation R(q) to denote the set of points of the ambient space occupied by the robot at configuration q.
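The definition W_free = W \ ∪_i WO_i can be sketched directly by representing each obstacle as a membership predicate; a workspace point is free exactly when no obstacle contains it. The disc-shaped obstacles here are an illustrative assumption, not part of the text's definition.

```python
# Sketch of W_free = W \ (union of WO_i): each obstacle WO_i is a
# predicate "does this point lie inside me?", and a point is in the
# free workspace iff no obstacle claims it.  Disc obstacles are an
# illustrative choice.

def make_disc(center, radius):
    """Return a membership predicate for a closed disc obstacle."""
    cx, cy = center
    return lambda x, y: (x - cx)**2 + (y - cy)**2 <= radius**2

def in_free_workspace(x, y, obstacles):
    """True iff (x, y) lies in W_free, i.e., outside every WO_i."""
    return not any(contains(x, y) for contains in obstacles)
```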
An obstacle in the configuration space corresponds to configurations of the robot that intersect an obstacle in the workspace, i.e., QO_i = {q | R(q) ∩ WO_i ≠ ∅}. Now we can define the free configuration space as Q_free = Q \ ∪_i QO_i. We sometimes simply refer to "free space" when the meaning is unambiguous.

In this book we make a distinction between path planning and motion planning. A path is a continuous curve in the configuration space. It is represented by a continuous function that maps some path parameter, usually taken to be in the unit interval [0, 1], to a curve in Q_free (figure 1.10). The choice of unit interval is arbitrary; any parameterization would suffice. The solution to the path planning problem is a continuous function c ∈ C⁰ (see appendix C for a definition of continuous functions) such that c : [0, 1] → Q where c(0) = q_start, c(1) = q_goal, and c(s) ∈ Q_free for all s ∈ [0, 1]. When the path is parameterized by time t, then c(t) is a trajectory, and velocities and accelerations can be computed by taking the first and second derivatives with respect to time. This means that c should be at least twice-differentiable, i.e., in the class C². Finding a feasible trajectory is called trajectory planning or motion planning.

In this book, configuration, velocity, and force vectors will be written as column vectors when they are involved in any matrix algebra. For example, a configuration q ∈ Rⁿ will be written in coordinates as q = [q_1 q_2 ... q_n]ᵀ. When the vector will not be used in any computation, we may simply refer to it as a tuple of coordinates, e.g., q = (q_1, q_2, ..., q_n), without bothering to make it a column vector.

2 Bug Algorithms

Even a simple planner can present interesting and difficult issues. The Bug1 and Bug2 algorithms [301] are among the earliest and simplest sensor-based planners with provable guarantees. These algorithms assume the robot is a point operating in the plane with a contact sensor or a zero range sensor to detect obstacles.
When the robot has a finite range (nonzero range) sensor, then the Tangent Bug algorithm [217] is a Bug derivative that can use that sensor information to find shorter paths to the goal. The Bug and Bug-like algorithms are straightforward to implement; moreover, a simple analysis shows that their success is guaranteed, when possible. These algorithms require two behaviors: move on a straight line and follow a boundary. To handle boundary-following, we introduce a curve-tracing technique based on the implicit function theorem at the end of this chapter. This technique is general to following any path, but we focus on following a boundary at a fixed distance.

2.1 Bug1 and Bug2
Perhaps the most straightforward path planning approach is to move toward the goal, unless an obstacle is encountered, in which case, circumnavigate the obstacle until motion toward the goal is once again allowable. Essentially, the Bug1 algorithm formalizes the "common sense" idea of moving toward the goal and going around obstacles. The robot is assumed to be a point with perfect positioning (no positioning error) with a contact sensor that can detect an obstacle boundary if the point robot "touches" it. The robot can also measure the distance d(x, y) between any two points x and y. Finally, assume that the workspace is bounded. Let B_r(x) denote a ball of radius r centered on x, i.e., B_r(x) = {y ∈ R² | d(x, y) < r}.

The robot concludes that O₂ indeed blocks a path to the goal and therefore drives toward O₁. So, even though driving toward O₂ may initially minimize d(x, O_i) + d(O_i, q_goal) more than driving toward O₁, the planner effectively assigns an infinite cost to d(O₂, q_goal) because it has enough information to conclude that any path through O₂ will be suboptimal.

The set {O_i} is continuously updated as the robot moves toward a particular O_i, which can be seen in figure 2.9. When t = 1, the robot has not sensed the obstacle, hence the robot moves toward the goal.
When t = 2, the robot initially senses the obstacle, depicted by a thick solid curve. The robot continues to move toward the goal, but off to the side of the obstacle, heading toward the discontinuity in ρ. For t = 3 and t = 4, the robot senses more of the obstacle and continues to decrease distance toward the goal while hugging the boundary.

The robot undergoes the motion-to-goal behavior until it can no longer decrease the heuristic distance to the goal. Put differently, it finds a point that is like a local minimum of d(x, O_i) + d(O_i, q_goal) restricted to the path that motion-to-goal dictates. When the robot switches to boundary-following, it finds the point M on the sensed portion of the obstacle which has the shortest distance on the obstacle to the goal. Note that if the sensor range is zero, then M is the same as the hit point from the Bug1 and Bug2 algorithms. This sensed obstacle is also called the followed obstacle. We make a distinction between the followed obstacle and the blocking obstacle.

Figure 2.9 Demonstration of motion-to-goal behavior for a robot with a finite sensor range moving toward a goal which is "above" the light gray obstacle.

Figure 2.10 The workspace is the same as in figure 2.7. The solid and dashed segments represent the path generated by motion-to-goal and the dotted path represents the boundary-following path. Note that M is the "local minimum" point.

Let x be the current position of the robot. The blocking obstacle is the closest obstacle within sensor range that intersects the segment (1 − λ)x + λq_goal for all λ ∈ [0, 1]. Initially, the blocking obstacle and the followed obstacle are the same.

Now the robot moves in the same direction as if it were in the motion-to-goal behavior. It continuously moves toward the O_i on the followed obstacle in the chosen direction (figure 2.10).
While undergoing this motion, the planner also updates two values: d_followed and d_reach. The value d_followed is the shortest distance between the boundary which had been sensed and the goal. Let Λ be all of the points within line of sight of x with range R that are on the followed obstacle WO_f, i.e., Λ = {y ∈ ∂WO_f : λx + (1 − λ)y ∈ Q_free for all λ ∈ [0, 1]}. The value d_reach is the distance between the goal and the closest point on the followed obstacle that is within line of sight of the robot, i.e., d_reach = min_{c ∈ Λ} d(c, q_goal). When d_reach < d_followed, the robot terminates the boundary-following behavior.

Let T be the point where a circle centered at x of radius R intersects the segment that connects x and q_goal. This is the point on the periphery of the sensing range that is closest to the goal when the robot is located at x. Starting with x = q_start, the robot executes algorithm 3.

Algorithm 3 Tangent Bug Algorithm
Input: A point robot with a range sensor
Output: A path to q_goal or a conclusion that no such path exists
 1: while True do
 2:   repeat
 3:     Continuously move toward the point n ∈ {T, O_i} which minimizes d(x, n) + d(n, q_goal)
 4:   until
        - the goal is encountered, or
        - the direction that minimizes d(x, n) + d(n, q_goal) begins to increase d(x, q_goal), i.e., the robot detects a "local minimum" of d(·, q_goal)
 5:   Choose a boundary-following direction which continues in the same direction as the most recent motion-to-goal direction.
 6:   repeat
 7:     Continuously update d_reach, d_followed, and {O_i}.
 8:     Continuously move toward n ∈ {O_i} that is in the chosen boundary direction.
 9:   until
        - the goal is reached, or
        - the robot completes a cycle around the obstacle, in which case the goal cannot be achieved, or
        - d_reach < d_followed
10: end while

Figure 2.11 The path generated by Tangent Bug with zero sensor range. The dashed lines correspond to the motion-to-goal behavior and the dotted lines correspond to boundary-following.

Figure 2.12 Path generated by Tangent Bug with finite sensor range.
The dashed lines correspond to the motion-to-goal behavior and the dotted lines correspond to boundary-following. The dash-dotted circles correspond to the sensor range of the robot.

See figures 2.11 and 2.12 for example runs. Figure 2.11 contains a path for a robot with zero sensor range. Here the robot invokes a motion-to-goal behavior until it encounters the first obstacle at hit point H₁. Unlike Bug1 and Bug2, encountering a hit point does not change the behavior mode for the robot. The robot continues with the motion-to-goal behavior by turning right and following the boundary of the first obstacle. The robot turned right because that direction minimized its heuristic distance to the goal. The robot departs this boundary at a depart point D₁. The robot continues with the motion-to-goal behavior, maneuvering around a second obstacle, until it encounters the third obstacle at H₃. The robot turns left and continues to invoke the motion-to-goal behavior until it reaches M₃, a minimum point. Now, the planner invokes the boundary-following behavior until the robot reaches L₃. Note that since we have zero sensing range, d_reach is the distance between the robot and the goal. The procedure continues until the robot reaches the goal. Only at M₃ and L₃ does the robot switch between behaviors.

Figure 2.13 Path generated by Tangent Bug with infinite sensor range. The dashed lines correspond to the motion-to-goal behavior and there is no boundary-following.

Figures 2.12 and 2.13 contain examples where the robot has finite and infinite sensing ranges, respectively.

2.3 Implementation
Essentially, the Bug algorithms have two behaviors: drive toward a point and follow an obstacle. The first behavior is simply a form of gradient descent of d(·, n) where n is either q_goal or an O_i. The second behavior, boundary-following, presents a challenge because the obstacle boundary is not known a priori.
Therefore, the robot planner must rely on sensor information to determine the path. However, we must concede that the full path to the goal will not be determined from one sensor reading: the sensing range of the robot may be limited and the robot may not be able to "see" the entire world from one vantage point. So, the robot planner has to be incremental. We must determine first what information the robot requires and then where the robot should move to acquire more information. This is indeed the challenge of sensor-based planning. Ideally, we would like this approach to be reactive, with sensory information feeding into a simple algorithm that outputs translational and rotational velocity for the robot.

There are three questions: What information does the robot require to circumnavigate the obstacle? How does the robot infer this information from its sensor data? How does the robot use this information to determine (locally) a path?

2.3.1 What Information: The Tangent Line
If the obstacle were flat, such as a long wall in a corridor, then following the obstacle is trivial: simply move parallel to the obstacle. This is readily implemented using a sensing system that can determine the obstacle's surface normal n(x), and hence a direction parallel to its surface. However, the world is not necessarily populated with flat obstacles; many have nonzero curvature. The robot can follow a path that is consistently orthogonal to the surface normal; this direction can be written as n(x)⊥, and the resulting path satisfies ċ(t) = v, where v is a basis vector in (n(c(t)))⊥. The sign of v is based on the "previous" direction of ċ.

Consistently determining the surface normal can be quite challenging and therefore for implementation, we can assume that obstacles are "locally flat." This means the sensing system determines the surface normal, the robot moves orthogonal to this normal for a short distance, and then the process repeats.
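This "locally flat" step can be sketched as follows, assuming the sensing system has already produced a unit surface normal (n_x, n_y); the step length and the left/right convention are illustrative choices.

```python
# Sketch of one "locally flat" boundary-following step: given the
# sensed unit surface normal n(x), take a short step along a direction
# orthogonal to it.  The sign choice (left/right of the normal) encodes
# the following direction; step length is illustrative.

def tangent_step(x, y, nx, ny, step=0.1, left=True):
    """Move a short distance orthogonal to the unit normal (nx, ny)."""
    # rotate the normal by +/- 90 degrees to obtain a tangent direction
    tx, ty = (-ny, nx) if left else (ny, -nx)
    return x + step * tx, y + step * ty
```

After each such step the process repeats: re-sense the normal, re-rotate, step again.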
In a sense, the robot determines the sequence of short straight-line segments to follow, based on sensor information.

This flat line, loosely speaking, is the tangent (figure 2.14). It is a linear approximation of the curve at the point where the tangent intersects the curve. The tangent can also be viewed as a first-order approximation to the function that describes the curve. Let c : [0, 1] → W_free be the function that defines a path. Let x = c(s₀) for s₀ ∈ [0, 1]. The tangent at x is dc(s)/ds evaluated at s = s₀. The tangent space can be viewed as a line whose basis vector is dc(s)/ds|_{s=s₀}, i.e., {α dc(s)/ds|_{s=s₀} | α ∈ R}.

Figure 2.14 The solid curve is the offset curve. The dashed line represents the tangent to the offset curve at x.

2.3.2 How to Infer Information with Sensors: Distance and Gradient
The next step is to infer the tangent from sensor data. Instead of thinking of the robot as a point in the plane, let's think of it as a circular base which has a fine array of tactile sensors radially distributed along its circumference (figure 2.15). When the robot contacts an obstacle, the direction from the contacted sensor to the robot's center approximates the surface normal. With this information, the robot can determine a sequence of tangents to follow the obstacle.

Unfortunately, using a tactile sensor to prescribe a path requires the robot to collide with obstacles, which endangers the obstacles and the robot. Instead, the robot should follow a path at a safe distance W* ∈ R from the nearest obstacle. Such a path is called an offset curve [381].

Figure 2.15 A fine-resolution tactile sensor.

Figure 2.16 The global minimum of the rays determines the distance to the closest obstacle; the gradient points in a direction away from the obstacle along the ray.

Let D(x) be the distance from x to the closest obstacle, i.e.,
D(x) = min_{c ∈ ∪_i WO_i} d(x, c).

To measure this distance with a mobile robot equipped with an onboard range sensing ring, we use the raw distance function again. However, instead of looking for discontinuities, we look for the global minimum; in other words, D(x) = min_s ρ(x, s) (figure 2.16).

We will need to use the gradient of distance. In general, the gradient is a vector that points in the direction that maximally increases the value of a function. See appendix C.5 for more details. Typically, the ith component of the gradient vector is the partial derivative of the function with respect to its ith coordinate. In the plane, ∇D(x) = [∂D/∂x₁ ∂D/∂x₂]ᵀ, which points in the direction that increases distance the most. Finally, the gradient is the unit direction associated with the smallest value of the raw distance function. Since the raw distance function seemingly approximates a sensing system with individual range sensing elements radially distributed around the perimeter of the robot, an algorithm defined in terms of D can often be implemented using realistic sensors.

There are many choices for range sensors; here, we investigate the use of ultrasonic sensors (figure 2.17), which are commonly found on mobile robots. Conventional ultrasonic sensors measure distance using time of flight.

Figure 2.17 The disk on the right is the standard Polaroid ultrasonic transducer found on many mobile robots; the circuitry on the left drives the transducer.

Figure 2.18 Beam pattern for the Polaroid transducer.

When the speed of sound
Therefore, the distance information that sonars provide is fairy accurate in depth, but not in azimuth, The beam pattem can be approximated wih cone (gue 219). Forthe commonly wed | Polaroid transduce, the archase is 22.5 degrees. When the reading ofthe sensorisd, 3 the point of reflection can be anywhere along the ar base of length 225 233 23° Implementation 35 Sensor Measurement Axis Beam Pattern Robot Figure 219 Centerline model. Initially, assume that the echo originates from the center of the sonar cone, We acknowledge that this is a naive model, and we term this the centerline model (igure 2.19). The ultrasonic sensor with the smallest reading approximates the global ‘minimum of the raw distance function, and hence D(x), The ditection that this sensor is facing approximates the negated gradient ~V D(x) because this sensor faces the closest obstacle. The tangent is then the line orthogonal to the direction associated ‘with the smallest sensor reading. How to Process Sensor Information: Continuation Methods The tangent to the offset curve is (VD(x))*, the line orthogonal to VD(x) (fig: ure 2.14). The vector V D(x) points in the direction that maximally increases distance; likewise, the vector V(x) points in the dection that maximally decreases dis- tance; they both point along the same line, but in opposite directions. Therefor, the vector (VD(2x))* points in the direction that locally maintains distanee; it is perpen- dicular to both VD(x) and ~V D(x). This would be the tangent of the offset curve ‘hich maintains distance tothe nearby obstacle. 
Another way to see why (∇D(x))⊥ is the tangent is to look at the definition of the offset curve. For a safety distance W*, we can define the offset curve implicitly as the set of points where G(x) = D(x) − W* maps to zero. The set of nonzero points (or vectors) that map to zero is called the null space of a map. For a curve implicitly defined by G, the tangent space at a point x is the null space of DG(x), the Jacobian of G [410]. In general, the (i, j)th component of the Jacobian matrix is the partial derivative of the ith component function with respect to the jth coordinate, and thus the Jacobian is a mapping between tangent spaces. Since in this case G is a real-valued function (i = 1), the Jacobian is just the row vector DD(x). Here, we are reusing the symbol D; the reader is forced to use context to determine if D means distance or differential. In Euclidean spaces, the ith component of a single-row Jacobian equals the ith component of the gradient, and thus ∇D(x) = (DD(x))ᵀ. Therefore, since the tangent space is the null space of DD(x), the tangent for boundary-following in the plane is the line orthogonal to ∇D(x), i.e., (∇D(x))⊥, and can be derived from sensor information.

Using distance information, the robot can determine the tangent direction to the offset curve. If the obstacles are flat, then the offset curve is also flat, and simply following the tangent is sufficient to follow the boundary of an unknown obstacle. Consider, instead, an obstacle with curvature. We can, however, assume that the obstacle is locally flat. The robot can then move along the tangent for a short distance, but since the obstacle has curvature, the robot will not follow the offset curve, i.e., it will "fall off" of the offset curve. To reaccess the offset curve, the robot moves either toward or away from the obstacle until it reaches the safety distance W*. In doing so, the robot is moving along a line defined by ∇D(x), which can be derived from sensor information.
Essentially, the robot is performing a numerical procedure of prediction and correction. The robot uses the tangent to locally predict the shape of the offset curve and then invokes a correction procedure once the tangent approximation is not valid. Note that the robot does not explicitly trace the path but instead "hovers" around it, resulting in a sampling of the path, not the path itself (figure 2.20).

A numerical tracing procedure can be posed as one which traces the roots of the expression G(x) = 0, where in this case G(x) = D(x) − W*. Numerical curve-tracing techniques rest on the implicit function theorem [9, 232, 307], which locally defines a curve that is implicitly defined by a map G: Y × ℝ → Y. Specifically, the roots of G locally define a curve parameterized by λ ∈ ℝ. See appendix D for a formal definition.

Figure 2.20 The dashed line is the actual path, but the robot follows the thin black lines, predicting and correcting along the path. The black circles are samples along the path.

For boundary following at a safety distance W*, the function G(y, λ) = D(y, λ) − W* implicitly defines the offset curve. Note that the λ-coordinate corresponds to a tangent direction and the y-coordinates to the line or hyperplane orthogonal to the tangent. Let Y denote this hyperplane and D_y G be the matrix formed by taking the derivative of G(x) = D(x) − W* = 0 with respect to the y-coordinates. It takes the form D_y G(x) = D_y D(x), where D_y denotes the differential with respect to the y-coordinates. If D_y G(y, λ) is surjective at x = (λ, y)ᵀ, then the implicit function theorem states that the roots of G(y, λ) locally define a curve that follows the boundary at a distance W* as λ is varied, i.e., y(λ). By numerically tracing the roots of G, we can locally construct a path. While there are a number of curve-tracing techniques [232], let us consider an adaptation of a common predictor-corrector scheme.
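The hovering behavior of figure 2.20 can be sketched numerically for a disk-shaped obstacle, where D and ∇D have closed forms; on a real robot both would come from range readings. The obstacle, safety distance, and step size below are illustrative assumptions:

```python
import math

# Predictor-corrector sketch for tracing the offset curve G(x) = D(x) - W* = 0
# around a disk obstacle (an illustrative stand-in for sensed distance).

CENTER, RADIUS, W_STAR = (0.0, 0.0), 1.0, 0.5

def D(x):
    """Distance from x to the disk obstacle."""
    return math.hypot(x[0] - CENTER[0], x[1] - CENTER[1]) - RADIUS

def grad_D(x):
    """Unit vector pointing away from the obstacle."""
    r = math.hypot(x[0] - CENTER[0], x[1] - CENTER[1])
    return ((x[0] - CENTER[0]) / r, (x[1] - CENTER[1]) / r)

def trace_step(x, dlam=0.1):
    g = grad_D(x)
    x = (x[0] - dlam * g[1], x[1] + dlam * g[0])  # predict: step along tangent
    while abs(D(x) - W_STAR) > 1e-10:             # correct: Newton step on G
        gx, g = D(x) - W_STAR, grad_D(x)
        x = (x[0] - gx * g[0], x[1] - gx * g[1])
    return x

samples = [(RADIUS + W_STAR, 0.0)]               # start on the offset curve
for _ in range(20):
    samples.append(trace_step(samples[-1]))
```

Each sample lies back on the offset curve after correction, mirroring the "predict, fall off, correct" cycle described in the text.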
Assume that the robot is located at a point x which is a fixed distance W* away from the boundary. The robot takes a "small" step, Δλ, in the λ-direction (i.e., the tangent to the local path). In general, this prediction step takes the robot off the offset path. Next, a correction method is used to bring the robot back onto the offset path. If Δλ is small, then the local path will intersect a correcting plane, which is a plane orthogonal to the λ-direction at a distance Δλ away from the origin.

The correction step finds the location where the offset path intersects the correcting plane and is an application of the Newton convergence theorem [232]. See appendix D.2 for a more formal definition of this theorem. The Newton convergence theorem also requires that D_y G(y, λ) be full rank at every (y, λ) in a neighborhood of the offset path. This is true because for G(x) = D(x) − W*, DG(y, λ) = [0 D_y G(y, λ)]. Since DG(y, λ) is full rank, so must be D_y G(y, λ) on the offset curve. Since the set of nonsingular matrices is an open set, we know there is a neighborhood around each (y, λ) in the offset path where D_y G(y, λ) is full rank, and hence we can use the iterative Newton method to implement the corrector step. If yʰ and λʰ are the hth estimates of y and λ, the (h + 1)st iteration is defined as

yʰ⁺¹ = yʰ − (D_y G)⁻¹ G(yʰ, λʰ),

where D_y G is evaluated at (yʰ, λʰ). Note that since we are working in a Euclidean space, we can determine D_y G solely from the distance gradient, and hence, sensor information.

Problems

1. Prove that D(x) is the global minimum of ρ(x, s) with respect to s.

2. What are the tradeoffs between the Bug1 and Bug2 algorithms?

3. Extend the Bug1 and Bug2 algorithms to a two-link manipulator.

4. What is the difference between the Tangent Bug algorithm with zero range detector and Bug2? Draw examples.

5. What are the differences between the path in figure 2.11 and the paths that Bug1 and Bug2 would have generated?
6. The Bug algorithms also assume the planner knows the location of the goal and the robot has perfect positioning. Redesign one of the Bug algorithms to relax the assumption of perfect positioning. Feel free to introduce a new type of "reasonable" sensor (not a high-resolution Global Positioning System).

7. In the Bug1 algorithm, prove or disprove that the robot does not encounter any obstacle that does not intersect the disk of radius d(q_start, q_goal) centered at q_goal.

8. What assumptions do the Bug1, Bug2, and Tangent Bug algorithms make on robot localization, both in position and orientation?

9. Prove the completeness of the Tangent Bug algorithm.

10. Adapt the Tangent Bug algorithm so that it has a limited field of view sensor, i.e., it does not have a 360 degree field of view range sensor.

11. Write out D_y G for boundary following in the planar case.

12. Let G₁(x) = D(x) + 1 and let G₂(x) = D(x) + 2. Why are their Jacobians the same?

13. Let G(x, y) = y² + y − 2x. Write out y as a function of x in an interval about the origin for the curve defined by G(x, y) = 0.

3 Configuration Space

TO CREATE motion plans for robots, we must be able to specify the position of the robot. More specifically, we must be able to give a specification of the location of every point on the robot, since we need to ensure that no point on the robot collides with an obstacle. This raises some fundamental questions: How much information is required to completely specify the position of every point on the robot? How should this information be represented? What are the mathematical properties of these representations? How can obstacles in the robot's world be taken into consideration while planning the path of a robot?

In this chapter, we begin to address these questions. We first discuss exactly what is meant by a configuration of a robot and introduce the concept of the configuration space, one of the most important concepts in robot motion planning.
We then briefly discuss how obstacles in the robot's environment restrict the set of admissible paths. We then begin a more rigorous investigation of the properties of the configuration space, including its dimension, how it sometimes can be represented by a differentiable manifold, and how manifolds can be represented by embeddings and parameterizations. We conclude the chapter by discussing mappings between different representations of the configuration, and the Jacobian of these mappings, which relates velocities in the different representations.

3.1 Specifying a Robot's Configuration

To make our discussion more precise, we introduce the following definitions. The configuration of a robot system is a complete specification of the position of every point of that system. The configuration space, or C-space, of the robot system is the space of all possible configurations of the system. Thus a configuration is simply a point in this abstract configuration space. Throughout the text, we use q to denote a configuration and Q to denote the configuration space.¹ The number of degrees of freedom of a robot system is the dimension of the configuration space, or the minimum number of parameters needed to specify the configuration.

To illustrate these definitions, consider a circular mobile robot that can translate without rotating in the plane. A simple way to represent the robot's configuration is to specify the location of its center, (x, y), relative to some fixed coordinate frame. If we know the radius r of the robot, we can easily determine from the configuration q = (x, y) the set of points occupied by the robot. We will use the notation R(q) to denote this set. When we define the configuration as q = (x, y), we have

R(q) = {(x′, y′) | (x′ − x)² + (y′ − y)² ≤ r²},

and we see that these two parameters, x and y, are sufficient to completely determine the configuration of the circular robot. Therefore, for the circular mobile robot,
we can represent the configuration space by ℝ² once we have chosen a coordinate frame in the plane.

Robots move in a two- or three-dimensional Euclidean ambient space, represented by ℝ² or ℝ³, respectively. We sometimes refer to this ambient space as the workspace. Other times we have a more specific meaning for "workspace." For example, for a robot arm, often we call the workspace the set of points of the ambient space reachable by a point on the hand or end effector (see figure 3.3). For the translating mobile robot described above, the workspace and the configuration space are both two-dimensional Euclidean spaces, but it is important to keep in mind that these are different spaces. This becomes clear when we consider even slightly more complicated robots, as we see next.

Consider a two-joint planar robot arm, as shown in figure 3.1. A point on the first link of the arm is pinned, so that the only possible motion of the first link is a rotation about this joint. Likewise, the base of the second link is pinned to a point at the end of the first link, and the only possible motion of the second link is a rotation about this joint. Therefore, if we specify the parameters θ₁ and θ₂ as shown in figure 3.1, we

1. While q is used almost universally to denote a configuration, the configuration space is sometimes denoted by C, particularly in the path-planning community.

Figure 3.1 The angles θ₁ and θ₂ specify the configuration of the two-joint robot.

have specified the configuration of the arm. For now we will assume no joint limits, so the two links can move over each other. Each joint angle θᵢ corresponds to a point on the unit circle S¹, and the configuration space is S¹ × S¹ = T², the two-dimensional torus. It is common to picture a torus as the surface of a doughnut because a torus has a natural embedding in ℝ³, just as a circle S¹ has a natural embedding in ℝ².
By cutting the torus along the θ₁ = 0 and θ₂ = 0 curves, we can flatten the torus onto the plane, as shown in figure 3.2. With this planar representation, we are identifying points on S¹ with points in the interval [0, 2π) ⊂ ℝ. While this representation covers all points in S¹, the interval [0, 2π), being a subset of the real line, does not naturally wrap around like S¹, so there is a discontinuity in the representation. As we discuss in section 3.4, this is because S¹ is topologically different from any interval of ℝ.

We define the workspace of the two-joint manipulator to be the set of points reachable by the end effector. The workspace for our two-joint manipulator is an annulus (figure 3.3), which is a subset of ℝ². All points in the interior of the annulus are reachable in two ways, with the arm in a right-arm and a left-arm configuration, sometimes called elbow-up and elbow-down. Therefore, the position of the end effector is not a valid configuration (not a complete description of the location of all points of the robot), so the annulus is not a configuration space for this robot.

So we have seen that the configuration spaces of both the translating mobile robot and the two-joint manipulator are two-dimensional, but they are quite different. The torus T² is doughnut-shaped with finite area, while ℝ² is flat with infinite area. We delve further into these sorts of differences when we discuss topology in section 3.4.

Figure 3.2 (a) A two-joint manipulator. (b) The configuration of the robot is represented as a point on the torus configuration space. (c) The torus can be cut and flattened onto the plane. This planar representation has "wraparound" features where each edge of the square is connected to the opposite edge.

Figure 3.3 The workspace for this two-joint manipulator is an annulus, a disk with a smaller disk removed from it. Note that all points in the interior of the annulus are reachable with a right-arm configuration and a left-arm configuration.
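The claim that interior points of the annulus are reachable in exactly two ways can be checked numerically. A sketch of the forward and inverse kinematics of a 2R arm, using the standard law-of-cosines solution; the link lengths are illustrative assumptions:

```python
import math

# 2R planar arm: forward kinematics (theta1, theta2) -> (x, y), and the two
# inverse solutions (elbow-down, elbow-up) for an interior workspace point.
L1, L2 = 2.0, 1.0   # link lengths (illustrative)

def forward(theta1, theta2):
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Both joint-angle solutions for a point in the annulus interior."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = math.acos(c2)                       # elbow-down; -t2 is elbow-up
    solutions = []
    for theta2 in (t2, -t2):
        k1, k2 = L1 + L2 * math.cos(theta2), L2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        solutions.append((theta1, theta2))
    return solutions

elbow_down, elbow_up = inverse(1.5, 1.0)     # two distinct configurations,
                                             # both reaching (1.5, 1.0)
```

Both configurations map back to the same end-effector position under `forward`, which is exactly why the end-effector position alone is not a configuration.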
3.2 Obstacles and the Configuration Space

Equipped with our understanding of configurations and of configuration spaces, we can define the path-planning problem to be that of determining a continuous mapping, c: [0, 1] → Q, such that no configuration in the path causes a collision between the robot and an obstacle. It is useful to define explicitly the set of configurations for which such a collision occurs. We define a configuration space obstacle QOᵢ to be the set of configurations at which the robot intersects an obstacle WOᵢ in the workspace, i.e.,

QOᵢ = {q ∈ Q | R(q) ∩ WOᵢ ≠ ∅}.

The free space or free configuration space Q_free is the set of configurations at which the robot does not intersect any obstacle, i.e.,

Q_free = Q \ (∪ᵢ QOᵢ).

With this notation, we define a free path to be a continuous mapping c: [0, 1] → Q_free, and a semifree path to be a continuous mapping c: [0, 1] → cl(Q_free), in which cl(Q_free) denotes the closure of Q_free. A free path does not allow contact between the robot and obstacles, while a semifree path allows the robot to contact the boundary of an obstacle. We assume that Q_free is open unless otherwise noted.

We now examine how obstacles in the workspace can be mapped into the configuration space for the robots that we discussed above.

3.2.1 Circular Mobile Robot

Consider the circular mobile robot in an environment with a single polygonal obstacle in the workspace, as shown in figure 3.4. In figure 3.4(b), we slide the robot around the obstacle to find the constraints the obstacle places on the configuration of the robot, i.e., the possible locations of the robot's reference point. We have chosen to use the center of the robot, but could easily choose another point. Figure 3.4(c) shows the resulting obstacle in the configuration space.
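The definition of QOᵢ translates directly into a collision test for the circular robot: a configuration q = (x, y) is in the obstacle exactly when some workspace obstacle point lies within distance r of the robot's center. A grid-sampling sketch, where the obstacle, radius, and grid are illustrative assumptions:

```python
# q is in QO exactly when the disk R(q) of radius r intersects the obstacle;
# for a set of obstacle points that reduces to a distance test.

def in_qo(q, obstacle_points, r):
    """True if configuration q = (x, y) collides, i.e., q is in QO."""
    x, y = q
    return any((x - ox) ** 2 + (y - oy) ** 2 <= r * r
               for ox, oy in obstacle_points)

obstacle = [(2.0, 2.0)]   # a single point obstacle (illustrative)
r = 1.0                   # robot radius (illustrative)

# Sample Q_free on a coarse grid over [0, 4] x [0, 4].
free = [(x * 0.5, y * 0.5)
        for x in range(9) for y in range(9)
        if not in_qo((x * 0.5, y * 0.5), obstacle, r)]
```

Notice that the free set is the workspace with the obstacle "grown" by the robot radius, which is the construction pictured in figure 3.4.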
Motion planning for the circular robot in figure 3.4(a) is now equivalent to motion planning for a point in the configuration space, as shown in figure 3.4(c). Figure 3.5 shows three mobile robots of different radii in the same environment. In each case, the robot is trying to find a path from one configuration to another. To transform the workspace obstacles into configuration obstacles, we "grow" the polygon outward and the walls inward. The problem is now to find a path for the point robot in the configuration space. We see that the growing process has disconnected the free configuration space Q_free for the largest robot, showing that there is no solution for this robot.

Figure 3.4 (a) The circular mobile robot approaches the workspace obstacle. (b) By sliding the mobile robot around the obstacle and keeping track of the curve traced out by the reference point, we construct the configuration space obstacle. (c) Motion planning for the robot in the workspace representation in (a) has been transformed into motion planning for a point robot in the configuration space.

Figure 3.5 The top row shows the workspace and the bottom row shows the configuration space for (a) a point mobile robot, (b) a circular mobile robot, and (c) a larger circular mobile robot.

Although the example in figure 3.5 is quite simple, the main point is that it is easier to think about points moving around than bodies with volume. Keep in mind that although both the workspace and the configuration space for this system can be represented by ℝ², and the obstacles appear to simply "grow" in this example, the configuration space and workspace are different spaces, and the transformation from workspace obstacles to configuration space obstacles is not always so simple.
For example, appendix F discusses how to generate configuration space obstacles for a polygon that translates and rotates among polygonal obstacles in the plane. The two-joint arm example is examined next.

3.2.2 Two-Joint Planar Arm

For the case of the circular mobile robot in a world populated with polygonal obstacles, it is easy to compute configuration space obstacles. When the robot is even slightly more complex, it becomes much more difficult to do so. For this reason, grid-based representations of the configuration space are sometimes used. Consider the case of the two-joint planar arm, for which Q = T². We can define a grid on the surface of the torus, and for each point on this grid we can perform a fairly simple test to see if the corresponding configuration causes a collision between the arm and any obstacle in the workspace. If we let each grid point be represented by a pixel, we can visualize the configuration space obstacle by "coloring" pixels appropriately. This method was used to obtain figures 3.6, 3.7, and 3.8.² In each of the figures, the image on the left depicts a two-joint arm in a planar workspace, while the image on the right depicts the configuration space. In each case the arm on the left is depicted in several configurations, and these are indicated in the configuration spaces on the right.

Figure 3.6 (Left) A path for the two-joint manipulator through its workspace, where the start and goal configurations are darkened. (Right) The path in the configuration space.

2. These figures were obtained using the applet at http://ford.ieor.berkeley.edu/cspace.

Figure 3.7 (Left) The workspace for a two-joint manipulator where the start and goal configurations are shown. (Right) The configuration space shows there is no free path between the start and goal configurations.

Figure 3.8 (Left) The workspace for a two-joint manipulator where the start and goal configurations are darkened.
(Right) The path shows the wraparound of the planar representation of the configuration space.

While pictures such as those in figures 3.6, 3.7, and 3.8 are useful for visualizing configuration space obstacles, they are not sufficient for planning collision-free motions. The reason for this is that the grid only encodes collision information for the discrete set of points lying on the grid. A path includes not only grid points, but also the points on the curves that connect the grid points. One possible remedy for this problem is to "thicken" the robot when we test it at a grid point, so that if the thickened robot is collision-free, then paths to adjacent grid points are also collision-free.³ We could also choose to ignore this problem by choosing a grid resolution that is "high enough."

3.3 The Dimension of the Configuration Space

In our introduction to configuration space above, we restricted our attention to two-dimensional configuration spaces that are easy to visualize. For each example, it was fairly straightforward to conclude that there were only two degrees of freedom: for a translating robot, the configuration was specified by a point in the familiar Euclidean plane, while for the two-joint arm the two joint angles gave a complete specification of the arm's position. In this section we determine the number of degrees of freedom of more complex systems by considering constraints on the motions of individual points of the systems.

As a first example, suppose the robot is a point that can move in the plane. The configuration can be given by two coordinates, typically q = (x, y) ∈ Q = ℝ², once we have chosen a reference coordinate frame fixed somewhere in space. Thus the robot has two degrees of freedom; the configuration space is two-dimensional.

Now consider a system consisting of three point robots, A, B, and C, that are free to move in the plane.
Since the points can move independently, in order to specify the configuration of the system we need to specify the configuration of each of A, B, and C. By simply generalizing the case for a single point, we see that a configuration for this system can be given by the six coordinates x_A, y_A, x_B, y_B, x_C, and y_C (assuming that the points can overlap). The system has six degrees of freedom, and in this case we have Q = ℝ⁶.

Real robots are typically modeled as a set of rigid bodies connected by joints (or a single rigid body for the case of most mobile robots), not a set of points that are free to move independently. So, suppose now that the robot is a planar rigid body that can both translate and rotate in the plane. Define A, B, and C to be three distinct points that are fixed to the body. To place the body in the plane, we are first free to choose the position of A by choosing its coordinates (x_A, y_A). Now we wish to choose the coordinates of B, (x_B, y_B), but the rigidity of the body requires that this point maintain a constant distance d(A, B) from A:

d(A, B) = √((x_A − x_B)² + (y_A − y_B)²).

3. This approach is called conservative, as a motion planner using this approach will never find an incorrect solution but it might miss solutions when they exist. As a result, the planner can only be resolution complete, not complete.

This equation constrains B to lie somewhere on a circle of radius d(A, B) centered at (x_A, y_A), and our only freedom in placing B is the angle θ from A to B. Now when we try to choose coordinates (x_C, y_C) for C, we see that our choice is subject to two constraints:

d(A, C) = √((x_A − x_C)² + (y_A − y_C)²),
d(B, C) = √((x_B − x_C)² + (y_B − y_C)²).

In other words, C has already been placed for us. In fact every point on the body has been placed once we have chosen (x_A, y_A, θ), making this a good representation of the configuration. The body has three degrees of freedom, and its configuration space is ℝ² × S¹.
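The claim that (x_A, y_A, θ) fixes every point of the body can be verified numerically: place body-fixed points with a rotation and translation, and check that the pairwise distances never change. The body-frame coordinates of B and C below are illustrative assumptions:

```python
import math

# A configuration q = (xA, yA, theta) determines the world position of every
# body-fixed point; the distance constraints then hold automatically.
B_BODY, C_BODY = (2.0, 0.0), (1.0, 1.5)   # body-frame points (illustrative)

def place(point_body, q):
    """World position of a body-fixed point under configuration q."""
    xa, ya, th = q
    px, py = point_body
    return (xa + px * math.cos(th) - py * math.sin(th),
            ya + px * math.sin(th) + py * math.cos(th))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# d(A,B), d(A,C), d(B,C) come out the same in every configuration tried.
configs = [(0.0, 0.0, 0.0), (3.0, -1.0, 0.7), (5.0, 2.0, -2.1)]
distances = [(dist((q[0], q[1]), place(B_BODY, q)),
              dist((q[0], q[1]), place(C_BODY, q)),
              dist(place(B_BODY, q), place(C_BODY, q)))
             for q in configs]
```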
Each of the distance constraints above is an example of a holonomic constraint. A holonomic constraint is one that can be expressed purely as a function g of the configuration variables (and possibly time), i.e., of the form

g(q, t) = 0.

Each linearly independent holonomic constraint on a system reduces the dimension of the system's configuration space by one. Thus a system described by n coordinates subject to m independent holonomic constraints has an (n − m)-dimensional configuration space. In this case, we normally attempt to represent the configuration space by a smaller set of n − m coordinates subject to no constraints, e.g., the coordinates (x_A, y_A, θ) for the planar body above.

Nonholonomic constraints are velocity constraints of the form

g(q, q̇, t) = 0,

which do not reduce the dimension of the configuration space. Nonholonomic constraints are left to chapter 12.

We can apply the counting method above to determine the number of degrees of freedom of a three-dimensional rigid body. Choose three noncollinear points on the body, A, B, C. The coordinates (x_A, y_A, z_A) of point A are first chosen arbitrarily. After fixing A, the distance constraint from A forces B to lie on the two-dimensional surface of a sphere centered at A. After both A and B are fixed, the distance constraints from A and B force C to lie on a one-dimensional circle about the axis formed by A and B. Once this point is chosen, all other points on the body are fixed. Thus the configuration of a rigid body in space can be described by nine coordinates subject to three constraints, yielding a six-dimensional configuration space.

We have already seen that a rigid body moving in a plane has three degrees of freedom, but we can arrive at this same conclusion if we imagine a spatial rigid body
The book can be confined to a plane (eg. the plane ofa tabletop) using the three hholonomic constraints wastes zc 30. ‘The tworjoint planar arm can be shown to have two degrees of freedom by this (somewhat indirect) counting method. Each of the two links can be thought of as 2 rigid spatial body with six degrees of freedom. Six constraints restrict the bodies to a plane (three for each link), 1wo constraints restrict a point on the first Tink (the first joint) to be at a fixed location inthe plane, and once the age ofthe frst Link is chosen, two constraints restrict a point on the second link (the second joint) tobe at 8 fixed location. Therefore we have (12 coordinates) — (10 constraints) = 2 degrees of freedom. Of course we usually count the degrees of freedom of an apen-chain jointed robot, also known as a serial mechanism, by adding the degrees of freedom at each joint. Common joints with one degree of freedom are revolute (R) joints, joints which rotate bout an axis, and prismatic () joints, joints which allow translational motion along an axis. Our two:joint robot is sometimes ealled an RR oF 2R robot, indicating that both joints ae revolute. An RP robot, on the other hand, has a revolute joint followed by a prismatic joint. Another common joint is a spherical (ball-and-socket joint, ‘which has three degrees of freedom. ‘A closed:-chain robo, also known as a parallel mechanism, is one where the links form one or mare closed loops. Ifthe mechanism has k links, then one is designated sa stationary “ground” link, and k ~ 1 Tinks are movable. 
To determine the number of degrees of freedom, note that each movable link has N degrees of freedom, where N = 6 for a spatial mechanism and N = 3 for a planar mechanism. Therefore the system has N(k − 1) degrees of freedom before the joints are taken into account. Now each of the n joints between the links places N − fᵢ constraints on the feasible motions of the links, where fᵢ is the number of degrees of freedom at joint i (e.g., fᵢ = 1 for a revolute joint, and fᵢ = 3 for a spherical joint). Therefore, the mobility M of the mechanism, or its number of degrees of freedom, is given by

M = N(k − 1) − Σᵢ (N − fᵢ) = N(k − n − 1) + Σᵢ fᵢ.

Figure 3.9 A planar mechanism with six links (A through F), seven revolute joints, and one degree of freedom.

This is known as Grübler's formula for closed chains, and it is only valid if the constraints due to the joints are independent. In the planar mechanism of figure 3.9, there are seven joints, each with one degree of freedom, and six links, yielding a mobility of

M = 3(6 − 7 − 1) + 7 = 1.

As an application of the ideas in this section, determine the number of degrees of freedom of your arm by adding the degrees of freedom at your shoulder, elbow, and wrist. To test your answer, place your palm flat down on a table with your elbow bent. Without moving your torso or your palm, you should find that it is still possible to move your arm. This internal freedom means that your arm is redundant with respect to the task of positioning your hand (or a rigid body grasped by your hand) in space: an infinity of arm configurations puts your hand in the same place.⁴ How many internal degrees of freedom do you have? How many holonomic constraints were placed on your arm's configuration when you fixed your hand's position and orientation? The sum of the number of constraints and internal degrees of freedom is the number of degrees of freedom of your (unconstrained) arm, and you should find that your arm has more than six degrees of freedom.
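Grübler's formula is easy to mechanize. A sketch, with the figure 3.9 mechanism as a check; the four-bar and single-joint examples are added illustrations, not from the text:

```python
# Grubler's formula for closed chains: M = N(k - 1) - sum(N - f_i),
# valid only when the joint constraints are independent.
# N = 3 for planar mechanisms, N = 6 for spatial ones.

def mobility(N, k, joint_freedoms):
    """Degrees of freedom of a mechanism with k links and the given joints."""
    return N * (k - 1) - sum(N - f for f in joint_freedoms)

figure_3_9 = mobility(3, 6, [1] * 7)   # six links, seven revolute joints -> 1
four_bar = mobility(3, 4, [1] * 4)     # classic planar four-bar linkage -> 1
```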
A robot is said to be hyper-redundant with respect to a task if it has many more degrees of freedom than required for the task. (There is no strict definition of "many" here.)

3.4 The Topology of the Configuration Space

Now that we understand how to determine the dimension of configuration space, we can begin to explore its topology and geometry, each of which plays a role in developing and analyzing motion-planning algorithms. Some basic concepts from topology are discussed in appendixes B and C.

4. Provided your arm is away from its joint limits.

Figure 3.10 The surfaces of the coffee mug and the torus are topologically equivalent.

Topology is a branch of mathematics that considers properties of objects that do not change when the objects are subjected to arbitrary continuous transformations, such as stretching or bending. For this reason, topology is sometimes referred to as "rubber sheet geometry." Imagine a polygon drawn on a rubber sheet. As the sheet is stretched in various directions, the polygon's shape changes; however, certain properties of the polygon do not change. For example, points that are inside the polygon do not move to the outside of the polygon simply because the sheet is stretched.

Two spaces are topologically different if cutting or pasting is required to turn one into the other, as cutting and pasting are not continuous transformations. For example, the configuration spaces of the circular mobile robot (ℝ²) and the two-joint planar arm (T²) are topologically different. If we imagine T² as the surface of a rubber doughnut, we see that no matter how we stretch or deform the doughnut (without tearing it), the doughnut will always have a hole in it. Also, if we imagine ℝ² as an infinite rubber sheet, there is no way to stretch it (without tearing it) such that a hole will appear in the sheet.
To a topologist, all rubber doughnuts are the same, regardless of how they are stretched or deformed (figure 3.10). Likewise, all rubber sheet versions of ℝ² are the same.

One reason that we care about the topology of configuration space is that it will affect our representation of the space. Another reason is that if we can derive a path-planning algorithm for one kind of topological space, then that algorithm may carry over to other spaces that are topologically equivalent (see, e.g., chapter 4, section 4.6).

Since topology is concerned with properties that are preserved under continuous transformations, we begin our study of the topology of configuration spaces by describing two types of continuous transformations: homeomorphisms and diffeomorphisms. Appendix C provides an introduction to differentiable transformations.

3.4.1 Homeomorphisms and Diffeomorphisms

Figure 3.11 Representative ways of looking at surjective, injective, and bijective mappings (panels: injection, bijection, discontinuous bijection, homeomorphism, diffeomorphism). Bijections may become homeomorphisms or diffeomorphisms if they are sufficiently differentiable.

A mapping φ: S → T is a rule that places elements of S into correspondence with elements of T. We respectively define the image of S under φ and the preimage
Bijective maps have the property that their inverse exists at all points in the range T, and thus they allow us to move easily back and forth between the two spaces S and T. In our case, we will use bijective maps to move back and forth between configuration spaces (whose geometry can be quite complicated) and Euclidean spaces. We will consider two important classes of bijective mappings.

DEFINITION 3.4.1 If φ : S → T is a bijection, and both φ and φ⁻¹ are continuous, then φ is a homeomorphism. When such a φ exists, S and T are said to be homeomorphic.

Figure 3.12: A circle, an ellipse, and a racetrack.

A mapping φ : U → V is said to be smooth if all partial derivatives of φ, of all orders, are well defined (i.e., φ is of class C∞). With the notion of smoothness, we define a second type of bijection.

DEFINITION 3.4.2 A smooth map φ : U → V is a diffeomorphism if φ is bijective and φ⁻¹ is smooth. When such a φ exists, U and V are said to be diffeomorphic.

The condition for diffeomorphisms (smoothness) is stronger than the condition for homeomorphisms (continuity), and thus all diffeomorphisms are homeomorphisms.

To illustrate these ideas, consider three one-dimensional surfaces: a circle, denoted by Mc; an ellipse, denoted by Me; and a "racetrack," denoted by Mr. The racetrack consists of two half-circles connected by straight lines (figure 3.12). We define these shapes mathematically as

  Mc = {(x, y) | fc(x, y) = 0},
  Me = {(x, y) | fe(x, y) = 0},
  Mr = {(x, y) | fr(x, y) = 0},

with

  fc(x, y) = x² + y² − 1,
  fe(x, y) = (x/2)² + y² − 1,

           ⎧ x² + (y − 1)² − 1   for y ≥ 1,
  fr(x, y) = ⎨ x − 1               for −1 ≤ y ≤ 1, x > 0,
           ⎨ x + 1               for −1 ≤ y ≤ 1, x < 0,
           ⎩ x² + (y + 1)² − 1   for y ≤ −1.

Note that these surfaces are implicitly defined as the set of points that satisfy some equation f(x, y) = 0. In some ways, these three surfaces are similar. For example, they are all simple, closed curves in the plane, and all of fc(x, y), fe(x, y), and fr(x, y) are continuous.
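These implicit definitions can be probed numerically. The sketch below assumes the circle fc(x, y) = x² + y² − 1 and, purely for illustration, an ellipse fe(x, y) = (x/2)² + y² − 1 with semi-axes 2 and 1; under those assumptions the candidate map φ(x, y) = (2x, y) carries every point of the circle onto the ellipse, which is the kind of evidence one collects when proposing a homeomorphism:

```python
import math

def f_c(x, y):          # circle: x^2 + y^2 - 1 = 0
    return x**2 + y**2 - 1

def f_e(x, y):          # ellipse (assumed for illustration): (x/2)^2 + y^2 - 1 = 0
    return (x / 2)**2 + y**2 - 1

def phi(x, y):          # candidate homeomorphism from the circle to the ellipse
    return (2 * x, y)

# Every sampled point on the circle maps to a point on the ellipse.
for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = math.cos(t), math.sin(t)      # a point with f_c(x, y) = 0
    u, v = phi(x, y)
    assert abs(f_e(u, v)) < 1e-12
```

A numerical check like this is not a proof, of course; it only rules out gross errors in the proposed map.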
In other ways, they seem quite different. For example, both fc(x, y) and fe(x, y) are continuously differentiable, while fr(x, y) is not. We can more precisely state the similarities and differences between these surfaces using the concepts of homeomorphism and diffeomorphism. In particular, it can be shown that Mc, Me, and Mr are all homeomorphic to each other. For example, the map φ : Mc → Me given by φ(x, y) = (2x, y) is a homeomorphism. For this choice of φ, both φ and φ⁻¹ are smooth, and therefore Mc is diffeomorphic to Me. Neither Mc nor Me is diffeomorphic to Mr, however. This is because fr(x, y) is not continuously differentiable, while both fc(x, y) and fe(x, y) are. This can be seen by examining the curvatures of the circle, ellipse, and racetrack. For the circle, the curvature is constant (and thus continuous), and for the ellipse, the curvature is continuous. For the racetrack, there are discontinuities in curvature (at the points (1, 1), (−1, 1), (1, −1), and (−1, −1)), and therefore there is no smooth mapping from either the circle or the ellipse to the racetrack.

We are often concerned only with the local properties of configuration spaces. Local properties are defined on neighborhoods. For metric spaces,⁵ neighborhoods are most easily defined in terms of open balls. For a point p of some manifold M, we define an open ball of radius ε by

  Bε(p) = {p′ ∈ M | d(p, p′) < ε},

where d is a metric on M.⁶ A neighborhood of a point p ∈ M is any subset U ⊂ M with p ∈ U such that for every p′ ∈ U, there exists an open ball Bε(p′) ⊂ U. Any open ball is itself a neighborhood. The open disk in the plane is an example. For the point (x0, y0) in the plane, an open ball defined by the Euclidean metric is

  Bε(x0, y0) = {(x, y) | (x − x0)² + (y − y0)² < ε²}.

We say that S is locally diffeomorphic (resp. locally homeomorphic) to T if for each p ∈ S there exists a diffeomorphism (resp. homeomorphism) f from S to T on some neighborhood U with p ∈ U. The sphere presents a familiar example of these concepts.
At any point on the sphere, there exists a neighborhood of that point that is diffeomorphic to the plane. It is not surprising that people once believed the world was flat: they were only looking at their neighborhoods!

5. A metric space is a space equipped with a distance metric. See appendix C.
6. One can define all topological properties, including neighborhoods, without resorting to the use of metrics, but for our purposes it will be easier to assume a metric on the configuration space and exploit the metric properties.

Let us now reflect on the two examples from the beginning of this chapter. For the circular mobile robot, the workspace and the configuration space are diffeomorphic. This is easy to see, since both are copies of R². In this case, the identity map φ(x) = x is a perfectly fine global diffeomorphism between the workspace and configuration space. In contrast, the two-joint manipulator has a configuration space that is T², the torus. The torus T² is not diffeomorphic to R², but it is locally diffeomorphic. If the revolute joints in the two-joint manipulator have lower and upper limits, θℓ < θ < θu, so that they cannot perform a complete revolution, however, then the configuration space of the two-joint manipulator becomes an open subset of the torus, which is diffeomorphic to R² (globally). This follows from the fact that each joint angle lies in an open interval of R¹, and we can "stretch" that open interval to cover the line R¹. An example of such a stretching function is tan : (−π/2, π/2) → R¹.

3.4.2 Differentiable Manifolds

For all of the configuration spaces that we have seen so far, we have been able to uniquely specify a configuration by n parameters, where n is the dimension of the configuration space (two for the planar two-joint arm, three for a polygon in the plane, etc.). The reason that we could do so was that these configuration spaces were all "locally like" n-dimensional Euclidean spaces.
Such spaces, called manifolds, are a central topic of topology.

DEFINITION 3.4.3 (Manifold) A set S is a k-dimensional manifold if it is locally homeomorphic to R^k, meaning that each point in S possesses a neighborhood that is homeomorphic to an open set in R^k.

While a general k-dimensional manifold is locally homeomorphic to R^k, the configuration spaces that we will consider are locally diffeomorphic to R^k, an even stronger relationship. In fact, when we parameterized configurations in section 3.1, we were merely constructing diffeomorphisms from configuration spaces to R². If we construct enough of these diffeomorphisms (so that every configuration in Q is in the domain of at least one of them), and if these diffeomorphisms are compatible with one another (an idea that we will formalize shortly), then this set of diffeomorphisms together with the configuration space define a differentiable manifold. We now make these ideas more precise.

DEFINITION 3.4.4 (Chart) A pair (U, φ), such that U is an open set in a k-dimensional manifold and φ is a diffeomorphism from U to some open set in R^k, is called a chart.

The use of the term chart is analogous to its use in cartography, since the subset U is "charted" onto R^k in much the same way that cartographers chart subsets of the globe onto a plane when creating maps. Charts are sometimes referred to as coordinate systems because each point in the set U is assigned a set of coordinates in a Euclidean space [410]. The inverse diffeomorphism, φ⁻¹ : R^k → U, is referred to as a parameterization of the manifold.

As an example, consider the one-dimensional manifold S¹ = {x = (x1, x2) ∈ R² | x1² + x2² = 1}. For any point x ∈ S¹ we can define a neighborhood that is diffeomorphic to R¹. For example, consider the upper portion of the circle, U1 = {x ∈ S¹ | x2 > 0}. The chart φ1 : U1 → (−1, 1) is given by φ1(x) = x1, and thus x1 can be used to define a local coordinate system for the upper semicircle.
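The chart φ1 and its parameterization can be sketched directly in code. The block below is an illustration, not from the text: it assumes the natural parameterization z ↦ (z, (1 − z²)^(1/2)) of the upper semicircle, and it also introduces, purely for demonstration, a second chart on the right semicircle (reading off x2) so that the transition between overlapping charts can be evaluated:

```python
import math

# Chart on the upper semicircle U1 = {x in S^1 : x2 > 0}: read off x1.
def phi1(x):
    return x[0]

def phi1_inv(z):
    # the natural parameterization of U1 by z in (-1, 1)
    return (z, math.sqrt(1 - z**2))

# A second chart (assumed here) on the right semicircle: read off x2.
def phi3_inv(z):
    return (math.sqrt(1 - z**2), z)

# phi1 is a bijection between U1 and (-1, 1) ...
for k in range(-9, 10):
    z = k / 10
    x = phi1_inv(z)
    assert abs(x[0]**2 + x[1]**2 - 1) < 1e-12   # lands on S^1
    assert abs(phi1(x) - z) < 1e-12             # phi1 inverts phi1_inv

# ... and on the overlap (the first quadrant) the transition map
# phi1 o phi3_inv is the smooth function z -> sqrt(1 - z^2).
for k in range(1, 10):
    z = k / 10
    assert abs(phi1(phi3_inv(z)) - math.sqrt(1 - z**2)) < 1e-12
```

The transition map evaluated at the end is exactly the kind of "compatibility" condition between charts that is formalized in the definition of C∞-related charts.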
In the other direction, the upper portion of the circle can be parameterized by z ∈ (−1, 1) ⊂ R¹, with φ1⁻¹(z) = (z, (1 − z²)^(1/2)), which maps the open interval to the upper semicircle. But S¹ is not globally diffeomorphic to R¹; we cannot find a single chart whose domain includes all of S¹.

We have already used this terminology in section 3.1, when we referred to θ1, θ2 as parameters that represent a configuration of the two-joint arm. Recall that (θ1, θ2) ∈ R², and that when considered as a representation of the configuration, they define a point in T², the configuration space, which is a manifold. We now see that in section 3.1, when we represented a configuration of the planar arm by the pair (θ1, θ2), we were in fact creating a chart from a subset of the configuration space to a subset of R².

A single mapping from T² to R², as shown in figure 3.2, encounters continuity problems at θi ∈ {0, 2π}. For many interesting configuration spaces, it will be the case that we cannot construct a single chart whose domain contains the entire configuration space. In these cases, we construct a collection of charts that cover the configuration space. We are not free to choose these charts arbitrarily; any two charts in this collection must be compatible for parts of the manifold on which their domains overlap. Two charts with such compatibility are said to be C∞-related (figure 3.13).

DEFINITION 3.4.5 (C∞-related) Let (U, φ) and (V, ψ) be two charts on a k-dimensional manifold. Let X be the image of U ∩ V under φ and Y be the image of U ∩ V under ψ, i.e.,

  X = {φ(x) ∈ R^k | x ∈ U ∩ V},  Y = {ψ(y) ∈ R^k | y ∈ U ∩ V}.

These two charts are said to be C∞-related if both of the composite functions

  ψ ∘ φ⁻¹ : X → Y,  φ ∘ ψ⁻¹ : Y → X

are C∞.

Figure 3.13: The charts (U, φ) and (V, ψ) map open sets on the k-dimensional manifold to open sets in R^k.

If two charts are C∞-related, we can switch back and forth between them in a smooth way when their domains overlap. This idea will be made more concrete in the example
of S¹ below.

A set of charts that are C∞-related, and whose domains cover the entire configuration space Q, forms an atlas for Q. An atlas is sometimes referred to as the differentiable structure for Q. Together, the atlas and Q comprise a differentiable manifold. There are other ways to define differentiable manifolds, as we will see in section 3.5.

As an example, consider again the one-dimensional manifold S¹. Above, we defined a single chart, (U1, φ1). If we define three more charts, we can construct an atlas for S¹. These four charts are given by

  U1 = {x ∈ S¹ | x2 > 0},  φ1(x) = x1,
  U2 = {x ∈ S¹ | x2 < 0},  φ2(x) = x1,
  U3 = {x ∈ S¹ | x1 > 0},  φ3(x) = x2,
  U4 = {x ∈ S¹ | x1 < 0},  φ4(x) = x2.

The corresponding parameterizations are given by φi⁻¹ : (−1, 1) → Ui with

  φ1⁻¹(z) = (z, (1 − z²)^(1/2)),
  φ2⁻¹(z) = (z, −(1 − z²)^(1/2)),
  φ3⁻¹(z) = ((1 − z²)^(1/2), z),
  φ4⁻¹(z) = (−(1 − z²)^(1/2), z).

Figure 3.14: Four charts covering the circle S¹.

It is clear that the Ui cover S¹, so to verify that these charts form an atlas it is only necessary to show that they are C∞-related (figure 3.14). Note that U1 ∩ U2 = U3 ∩ U4 = ∅, so we need only check the four pairs of composite maps:

  φ1 ∘ φ3⁻¹ : (0, 1) → (0, 1),    φ3 ∘ φ1⁻¹ : (0, 1) → (0, 1),
  φ1 ∘ φ4⁻¹ : (0, 1) → (−1, 0),   φ4 ∘ φ1⁻¹ : (−1, 0) → (0, 1),
  φ2 ∘ φ3⁻¹ : (−1, 0) → (0, 1),   φ3 ∘ φ2⁻¹ : (0, 1) → (−1, 0),
  φ2 ∘ φ4⁻¹ : (−1, 0) → (−1, 0),  φ4 ∘ φ2⁻¹ : (−1, 0) → (−1, 0).

In each case, φi ∘ φj⁻¹(z) is smooth on each of the open intervals that define the domains for the composite mappings given above. For example, φ1 ∘ φ3⁻¹(z) = (1 − z²)^(1/2). This collection of four charts is not minimal; it is straightforward to find two charts to cover S¹ (see problem 9).

3.4.3 Connectedness and Compactness

We say that a manifold is path-connected, or just connected, if there exists a path between any two points of the manifold.⁷ All of the obstacle-free configuration spaces that we will consider in this text, e.g., R^n, S^n, and T^n, are connected. The presence of obstacles, however, can disconnect the free configuration space Q_free. In this case, the free configuration space is broken into a set of connected components, the maximal connected subspaces.
In figure 3.5(c), for example, obstacles break the mobile robot's free configuration space into two connected components. There can be no solution to a motion-planning problem if q_start and q_goal do not lie in the same connected component of Q_free.

7. For more general spaces the concepts of path-connectedness and connectedness are not equivalent, but for a manifold they are the same. More generally, a space is connected if there is no continuous mapping from the space onto a discrete set of more than one element.

A space is compact⁸ if it resembles a closed, bounded subset of R^n. A space is closed if it includes all of its limit points. As examples, the half-open interval [0, 1) ⊂ R¹ is bounded but not compact, while the closed interval [0, 1] is bounded and compact. The space R¹ is not bounded and therefore not compact. The spaces S^n and T^n are both compact, as they can be expressed as closed and bounded subsets of Euclidean spaces. The unit circle S¹, e.g., can be expressed as a closed and bounded subset of R².

In configuration spaces with obstacles or joint limits, the modeling of the obstacles may affect whether the space is compact or not. For a revolute joint subject to joint limits, the set of joint configurations is compact if the joint is allowed to hit the limits, but not compact if the joint can only approach the limits arbitrarily closely.

The product of compact configuration spaces is also compact. For a noncompact space M1 × M2, if M1 is compact, then it is called the compact factor of the space. Compact and noncompact spaces cannot be diffeomorphic.

3.4.4 Not All Configuration Spaces Are Manifolds

We are focusing on configuration spaces that are manifolds, and more specifically differentiable manifolds, but it is important to keep in mind that not all configuration spaces are manifolds. As a simple example, the closed unit square [0, 1] × [0, 1] ⊂ R²
is not a manifold, but a manifold with boundary, obtained by pasting the one-dimensional boundary on the two-dimensional open set (0, 1) × (0, 1). Also, some parallel mechanisms with one degree of freedom have configurations from which there are two distinct possible motion directions (i.e., the configuration space is a self-intersecting figure eight). It is beyond the scope of this chapter to discuss other types of configuration spaces, but be aware: if you cannot show it to be a manifold, it may not be!

8. In topology, a space is defined to be compact if every open cover of the space admits a finite subcover, but we will not use these concepts here.

3.5 Embeddings of Manifolds in R^n

Although a k-dimensional manifold can be represented using as few as k parameters, we have seen above that doing so may require multiple charts. An alternative is to use a representation with "extra" numbers, subject to constraints, to achieve a global representation. As an example, S¹ is a one-dimensional manifold that we can represent as S¹ = {(x, y) | x² + y² = 1}; we "embed" S¹ in R². The fact that we cannot find a single chart for all of S¹ tells us that we cannot embed S¹ in R¹. Likewise, although the torus T² is a two-dimensional manifold, it is not possible to embed the torus in R², which is why we typically illustrate the torus as a doughnut shape in R³.

The manifolds S¹ and T² can be viewed as submanifolds of R² and R³, respectively. Submanifolds are smooth subsets of an ambient space that inherit the differentiability properties of the ambient space. Submanifolds are often created by a smooth set of equality constraints on R^n, as we see in the example of the circle S¹ above. Any differentiable manifold can be viewed as an embedded submanifold of R^n for large enough n.

When we are confronted with a configuration space that does not permit a single global coordinate chart, we are faced with a choice.
We can either use a single set of parameters and suffer the consequences of singularities in the representation, use multiple charts to construct an atlas, or use a single global representation by embedding the configuration space in a higher-dimensional space. One advantage of the last approach is that the representation can facilitate other operations. Important examples are representations of orientation using complex numbers and quaternions (see appendix E) and matrix representations for the configuration of a rigid body in the plane or in space, as discussed next.

3.5.1 Matrix Representations of Rigid-Body Configuration

It is often convenient to represent the position and orientation of a rigid body using an m × m matrix of real numbers. The m² entries of this matrix must satisfy a number of smooth equality constraints, making the manifold of such matrices a submanifold of R^(m²). One advantage of such a representation is that these matrices can be multiplied to get another matrix in the manifold. More precisely, these matrices form a group with the group operation of matrix multiplication.⁹ Simple matrix multiplication can be used to change the reference frame of a representation or to rotate and translate a configuration.

We describe the orientation of a rigid body in n-dimensional space (n = 2 or 3) by the matrix groups SO(n), and the position and orientation by the matrix groups SE(n). After describing these representations abstractly, we look in detail at examples

9. In fact, the matrix representations in this section are Lie groups, as (1) they are differentiable manifolds which are also groups, (2) the group operation is C∞, and (3) the mapping from an element of the group to its inverse is C∞.

Figure 3.15: The rotation matrix for a body is obtained by expressing the unit vectors x̂, ŷ,
and ẑ of the body x-y-z frame in a stationary frame xs-ys-zs.

of the use of SE(n) for representing the configuration of a body, for changing the reference frame of the representation, and for translating and rotating a configuration.

Orientation: SO(2) and SO(3)

Figure 3.15 shows a rigid body with a frame x-y-z attached to it. Our representation of the orientation of the body will be as a 3 × 3 matrix

  R = [ x̂  ŷ  ẑ ] = [ R11  R12  R13
                        R21  R22  R23
                        R31  R32  R33 ] ∈ SO(3),

where x̂ = [x̂1, x̂2, x̂3]^T is the unit vector in the body x-direction expressed in a stationary coordinate frame xs-ys-zs. The vectors ŷ and ẑ are defined similarly (see figure 3.15).

The matrix R is often called the rotation matrix representation of the orientation. This representation uses nine numbers to represent the three angular degrees of freedom, so there are six independent constraints on the matrix entries: each column (and row) is a unit vector, giving three constraints, and the columns are mutually orthogonal, yielding three more constraints. Because we are assuming right-handed frames,¹⁰ the determinant of R is +1. Matrices satisfying these conditions belong to the special orthogonal group of 3 × 3 matrices SO(3). "Special" refers to the fact that the determinant is +1, not −1.

In the planar case, R is the 2 × 2 matrix

  R = [ x̂  ŷ ] = [ cos θ  −sin θ
                    sin θ   cos θ ] ∈ SO(2),

where θ is the orientation of the x-y frame relative to the xs-ys frame. Generalizing, orientations in n-dimensional space can be written

  SO(n) = {R ∈ R^(n×n) | R Rᵀ = I, det(R) = 1},

where I is the identity matrix. This implies the relation R⁻¹ = Rᵀ.

Position and Orientation: SE(2) and SE(3)

Figure 3.16 shows a rigid body with an attached x-y-z coordinate frame relative to a stationary frame xs-ys-zs. Let p = [p1, p2, p3]^T ∈ R³ be the vector from the origin of the stationary frame to the origin of the body frame, as measured in the stationary frame, and let R ∈ SO(3) be the rotation matrix as described above, as if the body frame were translated back to the stationary frame.
Then we represent the position and orientation of the body frame relative to the stationary frame as the 4 × 4 transform matrix

  T = [ R        p
        0 0 0    1 ] ∈ SE(3).

More generally, the position and orientation of a rigid body in n-dimensional space can be written as an element of the special Euclidean group SE(n):

  SE(n) = { [ R  p
              0  1 ]  |  R ∈ SO(n), p ∈ R^n },

where the bottom row consists of n zeros and a one.

10. To make a right-handed frame, point straight ahead with your right index finger, point your middle finger 90 degrees to the left, and stick your thumb straight up. Your index finger is pointing in the +x direction, your middle finger in the +y direction, and your thumb in the +z direction.

Figure 3.16: The body frame x-y-z relative to a stationary world frame xs-ys-zs.

Uses of the Matrix Representations

The matrix groups SO(n) and SE(n) can be used to

1. represent rigid-body configurations,
2. change the reference frame for the representation of a configuration or a point, and
3. displace (move) a configuration or a point.

When the matrix is used for representing a configuration, we often call it a frame. When it is used for displacement or coordinate change, we often call it a transform. The various uses are best demonstrated by example.

Figure 3.17 shows three coordinate frames on a regular grid of unit spacing. These frames are confined to a plane with their z-axes pointing out of the page. Let T_AB be the configuration of frame B relative to frame A, and let T_BC be the configuration of frame C relative to frame B. It is clear from the figure that

  T_AB = [ R_AB  p_AB ] = [ −1   0   0  −2
           0     1   ]     [  0  −1   0   0
                           [  0   0   1   0
                           [  0   0   0   1 ],

  T_BC = [ R_BC  p_BC ] = [  0   1   0  −4
           0     1   ]     [ −1   0   0  −1
                           [  0   0   1   0
                           [  0   0   0   1 ].

From these, we can find T_AC, the configuration of frame C relative to frame A, by performing a change of reference frame on T_BC.
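This change of reference frame is a plain matrix product, and it can be carried out numerically. The sketch below uses the frame values as read off figure 3.17; since the scan of the example is unclear in places, treat the specific matrix entries as reconstructions:

```python
def mat_mul(A, B):
    # product of a 4x4 matrix with a 4xN matrix (N = 4 or 1)
    return [[sum(A[i][k] * B[k][j] for k in range(4))
             for j in range(len(B[0]))] for i in range(4)]

# Frames read off figure 3.17 (z-axes out of the page); values reconstructed.
T_ab = [[-1, 0, 0, -2],
        [ 0,-1, 0,  0],
        [ 0, 0, 1,  0],
        [ 0, 0, 0,  1]]
T_bc = [[ 0, 1, 0, -4],
        [-1, 0, 0, -1],
        [ 0, 0, 1,  0],
        [ 0, 0, 0,  1]]

# Change of reference frame: T_AB T_BC = T_AC.
T_ac = mat_mul(T_ab, T_bc)
assert T_ac == [[0, -1, 0, 2], [1, 0, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]

# Subscript-canceling rule for points (homogeneous coordinates):
# T_BC w_C = w_B and T_AC w_C = w_A.
w_c = [[-2], [1], [0], [1]]
w_b = mat_mul(T_bc, w_c)
w_a = mat_mul(T_ac, w_c)
assert [row[0] for row in w_b] == [-3, 1, 0, 1]
assert [row[0] for row in w_a] == [1, -1, 0, 1]
```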
This involves premultiplying by T_AB, based on the rule for coordinate transformations that the second subscript of the matrix on the left cancels with the first subscript of the matrix on the right, if they are the same subscript. In other words, T_AB T_BC = T_AC. We find that

  T_AC = [  0  −1   0   2
            1   0   0   1
            0   0   1   0
            0   0   0   1 ],

which we can verify by inspection.

Figure 3.17: Three frames in a plane with their z-axes pointing out of the page.

The representation of the point w in the coordinates of frame C is written w_C. From figure 3.17, we can see that the coordinates of w in C are [−2, 1, 0]^T. To facilitate matrix multiplications, however, we will express points in homogeneous coordinates by appending a 1 to the end of the vector, i.e., w_C = [−2, 1, 0, 1]^T. To find the representation of the point w in other frames, we use a modification of the subscript canceling rule to get

  T_BC w_C = w_B = [−3, 1, 0, 1]^T

and

  T_AB T_BC w_C = T_AC w_C = w_A = [1, −1, 0, 1]^T,

which can be verified by inspection.

Elements of SE(n) can also be used to displace a point. For example, T_AB w_A does not satisfy the subscript canceling rule, and the result is not simply a representation

Figure 3.18: Displacing a point by the transformation T_AB. (a) The frames A and B and the point w. (b) Rotating frame A to the orientation of frame B, carrying the point w along with it. (c) Translating frame A to the location of frame B, carrying w along with it. (d) The final point T_AB w_A.
This transformation is shown graphically in figure 3.18, Finally, we can use elements of SE(3) to displace frames, not just points. For example, given a frame B represented by Tg relative to frame A, and a transform 7; € SEQ), then Tan, Ryopi + Paw 1 isthe representation ofthe transformed frame BY relative to A after rotating B about its origin by R, (expressed inthe B frame) and then tansating by py ia the original 2B frame (before it was rotated), On the other hand, RiRas Ripse +P on Bear isthe representation ofthe transformed frame B” relative to A after rotating B about the origin of A by Ry (expressed in the A frame) and then translating by py in the A frame, Note that Tye and Taps are generally different, as matrix multiplication is not commutative If we consider frame B to be attached to a moving body, we call 7, a body frame transformation if itis multiplied on the right, as the rotation and tanslation are expressed relative tothe body frame B. If A is a stationary world frame, we call 7, ‘world-frame transformation ifitis multiplied onthe left asthe rotation andtranslation Toor = TT [ 66 3.6 3 Configuration Space Figure 3.19 (a) The inal frame B relative to A. (b) ' is obtained by rotating about B and then translating inthe original yp-diection. (c) B” is obtained by rotating about A and then translating in the y4-dicetion are expressed relative to the fixed A frame. An example is shown in figure 3.19 for “1 00 -2 1 000 Cr) o-to1 T=) go 01 of ™={ 0 0 1 f° ooo 1 o 001 giving 100 -2 1002 010-1 0101 T= ToT =|) 9 1 9) Tr =hTe=|5 9 1 0 ooo 1 0001 Applying n world-frame transformations yields Tyg = T, ... 727; Tas, while n body-frame transformations yields Tay = TasTiTs Ty Parameterizations of $O(3) ‘We have scen tht the nine elements R,; of a rotation matrix R € SO(3) are subjest” to six constrains, leaving three rotational degrees of freedom. 
Thus, we expect that SO(3) can be locally parameterized using three variables. Euler angles are a common parameterization. However, just as we cannot find a global parameterization for a circle with a single variable, we cannot build a global parameterization of SO(3) with Euler angles.

Given two coordinate frames F0 and F3, we can specify the orientation of frame F3 relative to frame F0 by three angles (φ, θ, ψ), known as Z-Y-Z Euler angles. These Euler angles are defined by three successive rotations as follows. Initially, the two frames are coincident. Rotate F0 about its z-axis by the angle φ to obtain frame F1. Next, rotate frame F1 about its y-axis by the angle θ to obtain frame F2. Finally, rotate frame F2 about its z-axis by the angle ψ to obtain frame F3. This is illustrated in figure 3.20.

Figure 3.20: Euler angle representation.

The corresponding rotation matrix R can thus be generated by successive multiplication of rotation matrices that define rotations about coordinate axes,

  R = R(z,φ) R(y,θ) R(z,ψ)

    = [ cφ  −sφ  0 ] [ cθ  0  sθ ] [ cψ  −sψ  0 ]
      [ sφ   cφ  0 ] [  0  1   0 ] [ sψ   cψ  0 ]
      [  0    0  1 ] [ −sθ 0  cθ ] [  0    0  1 ]

    = [ cφcθcψ − sφsψ   −cφcθsψ − sφcψ   cφsθ ]
      [ sφcθcψ + cφsψ   −sφcθsψ + cφcψ   sφsθ ]
      [ −sθcψ             sθsψ             cθ  ],

where cφ denotes cos φ, sφ denotes sin φ, and so on. Note that successive rotation matrices are multiplied on the right, as successive rotations are defined about axes in the changing "body" frame.

Parameterization of SO(3) using Euler angles, along with some other representations of SO(3), is described in detail in appendix E.

3.7 Example Configuration Spaces

In most cases, we can model robots as rigid bodies, articulated chains, or combinations of these two. Some common robots and representations of their configuration spaces are given in table 3.1. When designing a motion planner, it is often important to understand the underlying structure of the robot's configuration space. In particular, we note the following:

  • S¹ × S¹ × ··· × S¹ (n times) = T^n, the n-dimensional torus
  • S¹ × S¹ × ··· × S¹
(n times) ≠ S^n, the n-dimensional sphere in R^(n+1)
  • S¹ × S¹ × S¹ ≠ SO(3)
  • SE(2) ≠ R³
  • SE(3) ≠ R⁶

It is sometimes important to know whether a manifold is compact. The manifolds S^n, T^n, and SO(n) are all compact, as are all of their direct products. The manifolds R^n and SE(n) are not compact, and therefore R^n × M is not compact, regardless of whether or not the manifold M is compact.

Despite their differences, all of these configuration spaces have an important similarity. When equipped with an atlas, each is a differentiable manifold. In particular,

  • R¹ and SO(2) are one-dimensional manifolds;
  • R², S², and T² are two-dimensional manifolds;
  • R³, SE(2), and SO(3) are three-dimensional manifolds;
  • R⁶, T⁶, and SE(3) are six-dimensional manifolds.

Thus, for example, all of R³, SE(2), and SO(3) can be represented locally by a set of three coordinates.

Table 3.1: Some common robots and their configuration spaces.

  Type of robot                                         Representation of Q
  Mobile robot translating in the plane                 R²
  Mobile robot translating and rotating in the plane    SE(2) or R² × S¹
  Rigid body translating in three-space                 R³
  A spacecraft                                          SE(3) or R³ × SO(3)
  An n-joint revolute arm                               T^n
  A planar mobile robot with an attached n-joint arm    SE(2) × T^n

3.8 Transforming Configuration and Velocity Representations

We often need to transform from one representation of the configuration of a robot, q ∈ Q, to some other representation, x ∈ M. A common example occurs when q represents the joint angles of a robot arm and x represents the configuration of the end effector as a rigid body in the ambient space. The representation x is more convenient when planning manipulation tasks in the world, but control of the robot arm is more easily expressed in q variables, so we need an easy way of switching back and forth. It is often the case that Q and M are not homeomorphic; the dimensions of the two spaces may not even be equal.
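To make this concrete before developing the general machinery, here is a sketch of one such representation change for a planar 2R arm: the standard forward kinematics map from joint angles to the endpoint position, together with its Jacobian, which relates joint velocities to endpoint velocities. The link lengths and joint values used in the spot-checks are our own illustrative choices:

```python
import math

def fk_2r(L1, L2, t1, t2):
    # standard forward kinematics of a planar 2R arm: endpoint position
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def jacobian_2r(L1, L2, t1, t2):
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

# Fully stretched arm: endpoint at distance L1 + L2 along the x-axis.
x = fk_2r(1, 1, 0.0, 0.0)
assert abs(x[0] - 2.0) < 1e-12 and abs(x[1]) < 1e-12

# Endpoint velocity xdot = J(q) qdot for qdot = [1, 0]^T at
# theta1 = pi/4, theta2 = pi/2 (illustrative values).
J = jacobian_2r(1, 1, math.pi / 4, math.pi / 2)
qdot = [1, 0]
xdot = [J[0][0] * qdot[0] + J[0][1] * qdot[1],
        J[1][0] * qdot[0] + J[1][1] * qdot[1]]
assert abs(xdot[0] + math.sqrt(2)) < 1e-12 and abs(xdot[1]) < 1e-12

# det J = L1 L2 sin(theta2) vanishes at theta2 = 0: a singular configuration,
# where the two-dimensional set of qdot maps to a one-dimensional set of xdot.
Js = jacobian_2r(1, 1, 0.5, 0.0)
assert abs(Js[0][0] * Js[1][1] - Js[0][1] * Js[1][0]) < 1e-12
```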
Using the robot-arm inspiration, we define the forward kinematics map φ : Q → M and the inverse kinematics map φ⁻¹ : M → Q. These maps may not be homeomorphisms even if the dimensions of Q and M are equal. As the robot system moves, the time derivative ẋ = dx/dt is related to the time derivative q̇ = dq/dt by

  ẋ = J(q) q̇,

where J = ∂φ/∂q is the Jacobian of the map φ, also known as the differential Dφ (see appendix C). The Jacobian is also useful for transforming forces expressed in one set of coordinates to another (see chapter 4, section 4.7, and chapter 10).

EXAMPLE 3.8.1 The 2R robot arm of figure 3.21 has link lengths L1 and L2. Its configuration space is Q = T², and we represent the configuration by the two joint angles q = [θ1, θ2]^T. The endpoint of the hand in the Cartesian space is x = [x1, x2]^T ∈ M ⊂ R². In this case, the dimensions of Q and M are equal, but they are not homeomorphic. The forward kinematics map φ : Q → M is

  x = φ(q) = [ L1 cos θ1 + L2 cos(θ1 + θ2)
               L1 sin θ1 + L2 sin(θ1 + θ2) ].

The inverse kinematics map φ⁻¹ is one-to-two at most points of M, meaning that the robot can be chosen to be in either the right-arm or left-arm configuration. The inverse kinematics of the 2R arm is most easily found geometrically using the law of cosines and is left for problem 20.

Figure 3.21: The 2R robot arm and the velocity at its endpoint.

The Jacobian of the forward kinematics map is

  J(q) = ∂φ/∂q = [ −L1 sin θ1 − L2 sin(θ1 + θ2)   −L2 sin(θ1 + θ2)
                    L1 cos θ1 + L2 cos(θ1 + θ2)    L2 cos(θ1 + θ2) ].

Plugging in L1 = L2 = 1, θ1 = π/4, θ2 = π/2, and q̇ = [1, 0]^T, as shown in figure 3.21, we find that

  ẋ = J(q) q̇ = [−√2, 0]^T,

matching the motion seen in the figure. If θ2 = 0 or θ2 = π, the arm is at a singular configuration. In this case, the two-dimensional set of joint velocities q̇ maps to a one-dimensional set of endpoint velocities ẋ; instantaneous endpoint motion is impossible in one direction.

EXAMPLE 3.8.2 A polygon moving in the plane is represented by the configuration q = [q1, q2, q3]^T ∈ Q = R² × S¹, where (q1,
q2) gives the position of a reference frame Fp attached to the polygon relative to a world frame F, and q3 gives the orientation of Fp relative to F (see figure 3.22). A point is fixed on the polygon at r = [r1, r2]^T in the polygon frame Fp, and we let x = [x1, x2]^T ∈ M = R² be the position of this point in the plane. Then the forward kinematics mapping is

  x = φ(q) = [ q1 ] + [ cos q3  −sin q3 ] [ r1 ]
             [ q2 ]   [ sin q3   cos q3 ] [ r2 ],

where we recognize the 2 × 2 rotation matrix. The inverse map φ⁻¹ in this example is one-to-many, as the dimension of Q is greater than the dimension of M. The velocities ẋ and q̇ are related by the Jacobian

  J(q) = ∂φ/∂q = [ 1  0  −r1 sin q3 − r2 cos q3
                   0  1   r1 cos q3 − r2 sin q3 ].

Figure 3.22: The point on the polygon is at r in the polygon frame Fp and at x in the world frame F.

Problems

1. Invent your own nontrivial robot system. It could consist of one or more robot arms, mobile platforms, conveyor belts, fixed obstacles, movable objects, etc. Describe the configuration space mathematically. Explain whether or not the configuration space is compact and, if not, describe the compact factors. Describe the connected components of the configuration space. Draw a rough picture of your robot system.

2. Give the dimension of the configuration spaces of the following systems. Explain your reasoning.
(a) Two mobile robots rotating and translating in the plane.
(b) Two translating and rotating planar mobile robots tied together by a rope.
(c) Two translating and rotating planar mobile robots connected rigidly by a bar.
(d) The two arms of a single person (with torso stationary) holding on firmly to a car's steering wheel.
(e) A train on train tracks. What if we include the wheel angles? (The wheels roll without slipping.)
(f) A spacecraft with a 6R robot arm.
(g) The end effector of the 6R robot arm of a spacecraft.
(h) Your legs as you pedal a bicycle (remaining seated with feet fixed to the pedals).
(i) A sheet of paper.

3. Describe the Bug2 algorithm for a two-joint manipulator.
What are the critical differences between the Bug2 algorithm for a mobile base and for the two-joint manipulator? What does "straight line" mean in the arm's configuration space? Can Bug2 be made to work for the two-joint arm?

4. Prove that the configuration space obstacle of a convex mobile robot translating in a plane with a convex obstacle is convex.

5. Prove that the union operator propagates from the workspace to the configuration space. That is, the union of two configuration space obstacles is the configuration space obstacle of the union of the two workspace obstacles. In other words, assuming Q(·) is the configuration space operator, show that

    Q(WO₁) ∪ Q(WO₂) = Q(WO₁ ∪ WO₂).

6. How many degrees of freedom does a rigid body in space have? How many of them are translational? Prove this two ways: (a) by choosing a number of points on the body and sequentially adding their independent degrees of freedom until each point on the body is fixed, and (b) by using the definitions of SE(n) and SO(n).

7. Use cardboard and pushpins to create a closed chain with four links, a four-bar mechanism. One of these bars is considered stationary, or fixed to the ground. Going around the loop, the link lengths between joints are 6.5 (the ground link), 3.0, 1.5, and 3.0 inches (or centimeters) in length. Poke a hole at the midpoint of the 1.5 inch link and trace the path that the hole follows. Describe a good representation of the configuration space of the linkage.

8. Give a homeomorphism from the racetrack to the ellipse in figure 3.12.

9. Find two charts for the unit circle S¹ and prove they form an atlas.

10. Explain why the latitude-longitude chart we often place on the Earth is not a global parameterization. Find two charts for the sphere and prove that they form an atlas.

11. The set of right-arm and left-arm configurations of the 2R manipulator in figure 3.3 each give an annulus of reachable positions by the end effector, neither of which is diffeomorphic to the robot's configuration space.
Consider the right-arm and left-arm workspaces as two separate annuli, and describe how they can be glued together to make a single space that is a valid representation of the configuration space. Comment on the topology of this glued space.

12. For the 2R manipulator of figure 3.7, how many connected components of free configuration space are there? Copy the figure, color each of the connected components a different color, and give a drawing of the robot in each of these connected components.

13. Show that compact and noncompact spaces are never diffeomorphic.

14. Find a diffeomorphism from any open interval (a, b) ⊂ ℝ to the whole real line ℝ.

15. Give an implicit constraint equation f(x, y, z) = 0 that embeds a torus in ℝ³.

16. For T ∈ SE(3) consisting of the rotation matrix R ∈ SO(3) and the translation p ∈ ℝ³, find the inverse transform T⁻¹, so that TT⁻¹ = T⁻¹T = I. Your answer should not contain any matrix inverses.

17. Consider two three-dimensional frames aligned with each other, called A and B. Rotate B 90 degrees about the x-axis of A, then rotate again by 90 degrees about the y-axis of A, then move the origin of B by three units in the z-direction of A. (Make sure you rotate in the right direction! Use the right-hand rule: thumb points along the positive axis, fingers curl in the direction of positive rotation.) Give the matrix T_AB ∈ SE(3) describing the frame B relative to the frame A. Consider the point x_B = [4, 3, 1, 1]ᵀ in homogeneous coordinates in frame B. What is the point x_A (expressed in the frame A)? Consider the point y_A = [1, 2, 1, 1]ᵀ in homogeneous coordinates in frame A, and perform the transformation T_BA. Where is the new point y_B?

18. Write a program to calculate the configuration space for a convex polygonal robot translating in an environment with convex obstacles. The program should read in from a file a counterclockwise list of vertices representing the robot, where (0, 0) is the robot reference point.
From a second file, the program should read in a set of obstacles in the workspace. The user enters an orientation for the robot and the program calculates the configuration space obstacles (see, e.g., appendix F). Display the configuration space for different orientations of the robot to show that translating paths between two points may exist for some orientations of the robot, but not for others.

19. Write a program to display the configuration space for a 2R manipulator in a polygonal environment. The program should read in a file containing the location of the base of the robot, the length of the links, and the lists of vertices representing the obstacles. The program should check for collision at 1 degree increments for each joint and create a plot similar to that shown in figure 3.7.

Figure 3.23 A 3R planar robot with a frame attached to the end effector.

20. Find the inverse kinematics map φ⁻¹ from the end-effector coordinates x to the joint coordinates q for the 2R robot arm in example 3.8.1. Note that for most reachable points x, there will be two solutions, corresponding to the right- and left-arm configurations. Your solution will likely make use of the two-argument arctangent atan2(x₂, x₁), which returns the unique angle in [−π, π) to the point (x₁, x₂) in the plane, as well as the law of cosines a² = b² + c² − 2bc cos A, where a, b, and c are the lengths of the three edges of a triangle and A is the angle opposite to edge a. Solve for general L₁, L₂ (do not plug in numbers).

21. Give the forward kinematics for the planar 3R arm shown in figure 3.23, from joint angles q to the position and orientation of the end-effector frame in the plane. Find the manipulator Jacobian.

22. For the problem above, show that the forward kinematics mapping is injective, surjective, bijective, or none of these, when viewed as a mapping from T³ to ℝ² × S¹. Find a "large" set of joint angle ranges U ⊂ T³ and a set of end-effector configurations V ⊂ ℝ²
× S¹ for which the mapping is a diffeomorphism.

23. The topology of SE(2) is equivalent to ℝ² × SO(2). Let's represent an element of ℝ² × SO(2) by (x, R), where x ∈ ℝ² and R ∈ SO(2). As we have seen, we can make SE(2) a group by giving it a group operation, namely, matrix multiplication. We can also make ℝ² × SO(2) a group by using the direct product structure to define the composition of two elements:

    (x₁, R₁)(x₂, R₂) = (x₁ + x₂, R₁R₂) ∈ ℝ² × SO(2).

We are using vector addition as the group operation on ℝ² and matrix multiplication on SO(2). With this group operation, is ℝ² × SO(2) commutative? Is SE(2) commutative? The spaces SE(2) and ℝ² × SO(2) are topologically equivalent, but are they equivalent as groups?

4 Potential Functions

HAVING SEEN the difficulty of explicitly representing the configuration space, an alternative is to develop search algorithms that incrementally "explore" free space while searching for a path. Already, we have seen that the Bug algorithms maneuver through free space without constructing the configuration space, but the Bug algorithms are limited to simple two-dimensional configuration spaces. Therefore, this chapter introduces navigation planners that apply to a richer class of robots and produce a greater variety of paths than the Bug methods; i.e., they apply to more general configuration spaces, including those that are multidimensional and non-Euclidean.

A potential function is a differentiable real-valued function U : ℝⁿ → ℝ. The value of a potential function can be viewed as energy, and hence the gradient of the potential is force. The gradient is the vector ∇U(q) = DU(q)ᵀ = [∂U/∂q₁(q), …, ∂U/∂qₙ(q)]ᵀ, which points in the direction that locally maximally increases U. See appendix C.5 for a more rigorous definition of the gradient. We use the gradient to define a vector field, which assigns a vector to each point on a manifold. A gradient vector field, as its name suggests, assigns the gradient of some function to each point.
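To make the gradient concrete, the short sketch below evaluates ∇U numerically by central differences and checks it against the analytic gradient. The quadratic potential and the goal location used here are illustrative choices of our own, not taken from the text.

```python
import numpy as np

def U(q):
    # Illustrative potential (our own choice): U(q) = 1/2 ||q - q_goal||^2,
    # whose analytic gradient is q - q_goal.
    q_goal = np.array([2.0, 1.0])
    return 0.5 * np.dot(q - q_goal, q - q_goal)

def grad(U, q, h=1e-6):
    """Central-difference approximation of the gradient of U at q."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        e = np.zeros_like(q)
        e[i] = h
        g[i] = (U(q + e) - U(q - e)) / (2 * h)
    return g

q = np.array([0.0, 0.0])
print(grad(U, q))   # close to the analytic gradient q - q_goal = [-2, -1]
```

A numerical gradient like this is often all a planner needs when U is available only through evaluation, e.g., on a grid.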
When U is energy, the gradient vector field has the property that the work done along any closed path is zero.

The potential function approach directs a robot as if it were a particle moving in a gradient vector field. Gradients can be intuitively viewed as forces acting on a positively charged particle robot which is attracted to the negatively charged goal. Obstacles also have a positive charge which forms a repulsive force directing the robot away from obstacles. The combination of repulsive and attractive forces hopefully directs the robot from the start location to the goal location while avoiding obstacles (figure 4.1).

Figure 4.1 The negative charge attracts the robot and the positive charge repels it, resulting in a path, denoted by the dashed line, around the obstacle and to the goal.

Note that in this chapter, we mainly deal with first-order systems (i.e., we ignore dynamics). Potential functions can be viewed as a landscape where the robots move from a "high-value" state to a "low-value" state. The robot follows a path "downhill" by following the negated gradient of the potential function. Following such a path is called gradient descent, i.e.,

    ċ(t) = −∇U(c(t)).

The robot terminates motion when it reaches a point where the gradient vanishes, i.e., it has reached a q* where ∇U(q*) = 0. Such a point q* is called a critical point of U. The point q* is either a maximum, a minimum, or a saddle point (figure 4.2). One can look at the second derivative to determine the type of critical point. For real-valued functions, this second derivative is the Hessian matrix

    H(q) = [ ∂²U/∂qᵢ∂qⱼ ].

When the Hessian is nonsingular at q*, the critical point at q* is nondegenerate, implying that the critical point is isolated [173]. When the Hessian is positive-definite,
Generally, we consider potenti functions whose Hessians are nonsingular, i, those which only have isolated critical points. This also means that the potential function is never at. For gradient descent methods, we do not have to compute the Hessian because the robot generically terminates its motion at a focal minimum, not at a local maximum ‘ora saddle point. Since gradient descent decreases U, the robot cannot arrive ata Tocal ‘maximum, unless of course the robot starts at a maximum. Since we assume that the function s never Mat, the set of maxima contains just isolated points, and the likelihood Of starting at one is practically zero. However, even ifthe robot stars at a maximum, any perturbation ofthe robot position frees the robot, allowing the gradient vector field to induce motion onto the robot. Arriving at a saddle point is also unlikely, because they are unstable as well. Local minima, on the other hand, are stable because after any perturbation from a minimum, gradient descent returns the robot to the minimum. ‘Therefore, the only critical point where the robot can generically terminate is a local rwinimum, Hopefully this is where the goal is located. See figure 4.3 for an example of a configuration space with its corresponding potential function, along with its cenergy surface landscape and gradient vector field. 80 44 4 Povential Functions Figure 43 (a) A configuration space with three circular obstacles bounded by a circle, (6) Potential function energy surface. (c) Contour plot for energy surface. (4) Gradient vectors for potential function ‘There are many potential functions other than the attractive/repulsive potential Many of these potential functions are efficient (© compute and can be computed online [234]. Unfortunately they all suffer one problem—the existence of focal min- {ma not corresponding to the goal. 
This problem means that potential functions may lead the robot to a point which is not the goal; in other words, many potential functions do not lead to complete path planners. Two classes of approaches address this problem: the first class augments the potential field with a search-based planner, and the second defines a potential function with one local minimum, called a navigation function [239]. Although complete (or resolution complete), both methods require full knowledge of the configuration space prior to the planning event.

Finally, unless otherwise stated, the algorithms presented in this chapter apply to configuration spaces of arbitrary dimension, even though the figures are drawn in two dimensions. Also, we include some discussion of implementation on a mobile base operating in the plane (i.e., a point in a two-dimensional Euclidean configuration space).

4.1 Additive Attractive/Repulsive Potential

The simplest potential function in Q_free is the attractive/repulsive potential. The intuition behind the attractive/repulsive potential is straightforward: the goal attracts the robot while the obstacles repel it. The sum of these effects draws the robot to the goal while deflecting it from obstacles. The potential function can be constructed as the sum of attractive and repulsive potentials

    U(q) = U_att(q) + U_rep(q).

The Attractive Potential

There are several criteria that the attractive potential U_att should satisfy. First, U_att should be monotonically increasing with distance from q_goal. The simplest choice is the conic potential, measuring a scaled distance to the goal, i.e., U(q) = ζ d(q, q_goal). The parameter ζ is used to scale the effect of the attractive potential. The attractive gradient is ∇U(q) = (ζ / d(q, q_goal)) (q − q_goal). The gradient vector points away from the goal with magnitude ζ at all points of the configuration space except the goal, where it is undefined. Starting from any point other than the goal, by following the negated gradient, a path is traced toward the goal.

When numerically implementing this method, gradient descent may have "chattering" problems since there is a discontinuity in the attractive gradient at the origin. For this reason, we would prefer a potential function that is continuously differentiable, such that the magnitude of the attractive gradient decreases as the robot approaches q_goal. The simplest such potential function is one that grows quadratically with the distance to q_goal,
Starting from any point other than the goal, by following the negated ‘gradient, a path is traced toward the goal ‘When numerically implementing this method, gradient descent may have “chatter- ing” problems since there isa discontinuity in te attractive gradient at the origin, For this reason, we would prefer a potential function that is continuously differentiable, such that the magnitude of the attractive gradient decreases as the robot approaches ‘Gao. The simplest such potential function is one that grows quadratically with the distance t0 drt 8 1 Uaaha) = 34°C4, dors with the gradient Ls v (i (ato) Wag) = 1 = Ag VAG. dye 36944. da) 504 ~ deat) which isa vector based atq, points away from qjeq and has a magnitude proportional to the distance from q to gpai- The farther away q is from ggouy the bigger the magnitude of the vector. In other words, when the robot is far away from the goal, the robot quickly approaches it; when the robot is elose to the goal, the robot slowly ‘approaches it. This feature is useful for mobile robots because it reduces “overshoot” of the goal (resulting from step quantization). In figure 4.4(a), the goal isin the center and the gradient vectors for various points are drawn. Figure 4.4(b) contains a contour plot for Uy; each solid circle corresponds toa set of points q where Uss(q) is constant. Finally, figure 4.4(c) plots the graph of the attractive potential Note that while the gradient VUyi(q) converges linearly to zero as q approaches eva (Which is a desirable property), it grows without bound as q moves away from eo Hf uae i far from dpn, this May produce a desired Velocity that is too large, For this reason, we may choose to combine the quadratic and conie potentials so that the 2 a2 43) 4 Potential Function @ O) © Figure 44 (a) Attractive gradient vector fel. (b) Atractive potential isocontours.(¢) Graph ofthe attractive potential. 
= ‘conic potential attracts the robot when it is very distant from gaya and the quadratic potential attracts the robot when it is near dys. Of course it is necessary thatthe ‘gradient be defined at the boundary between the conic and quadratic portions. Such afield can be defined by : CC ee Ula) = : : Fou 44, Geos) ~ 36(Agas)*s (4, Qe) > Ai and in this case we have Gq —Apct» 14s Gast) S Fi Wag) = Sa tou) ‘eam (4, dant) > Ajo ‘where df. isthe threshold distance from the goal where the planner switches between conic and quadratic potentials. The gradient is well defined atthe boundary ofthe two iE ~ fields since at the boundary where d(q, gga) = djays the gradient of the quadratic potential is equal to the gradient of the conic potential, VUsu(g) = (4 ~ duo) ‘The Repulsive Potential A repulsive potential Keeps the robot away from an obstacle. The strength of the repulsive force depends upon the robot's proximity to the an obstacle, The closer the robot is to an obstacle, the stronger the repulsive force should be. Therefore, the 4 (44) 4s 466) 4. Additive Anractive/Repulsive Potential 83 Figure 4.5 The epulsive gradient operates only in a domain wear the obstacte. repulsive potential is usually defined in terms of distance to the closest obstacle D(q), Unga) = {ais + Dig) <0", 0, Dig) > Q", ‘whose gradient is, Wees(q) = {" (eos BVO), Dg) < O°, 0 Dq@)>o ‘where the Q” € R factor allows the robot to ignore obstacles sufficiently far away from it and the 7 can be viewed as a gain on the repulsive gradient. These scalars are usually determined by trial and error. (See figure 4.5.) ‘When numerically implementing this solution, a path may form that oscillates around points that are two-way equidistant from obstacles,"ie., points where D is nonsmooth. 
To avoid these oscillations, instead of defining the repulsive potential function in terms of the distance to the closest obstacle, the repulsive potential function is redefined in terms of distances to individual obstacles, where d_i(q) is the distance to obstacle QO_i, i.e.,

    d_i(q) = min_{c ∈ QO_i} d(q, c).

Note that the min operator returns the smallest d(q, c) for all points c in QO_i.

Figure 4.6 The distance between x and QO_i is the distance to the closest point on QO_i. The gradient is a unit vector pointing away from the nearest point.

It can be shown for convex obstacles QO_i, where c is the closest point to q, that the gradient of d_i(q) is

    ∇d_i(q) = (q − c) / d(q, c).

The vector ∇d_i(q) describes the direction that maximally increases the distance to QO_i from q (figure 4.6). Now, each obstacle has its own potential function,

    U_rep,i(q) = { ½ η (1/d_i(q) − 1/Q_i*)²,    if d_i(q) ≤ Q_i*,
                 { 0,                            if d_i(q) > Q_i*,

where Q_i* defines the size of the domain of influence for obstacle QO_i. Then U_rep(q) = Σ_{i=1}^{n} U_rep,i(q). Assuming that there are only convex obstacles, or that nonconvex ones can be decomposed into convex pieces, oscillations do not occur because the closest point no longer changes radically as the robot moves.

4.2 Gradient Descent

Gradient descent is a well-known approach to optimization problems. The idea is simple. Starting at the initial configuration, take a small step in the direction opposite the gradient. This gives a new configuration, and the process is repeated until the gradient is zero. More formally, we can define a gradient descent algorithm (algorithm 4).

Algorithm 4 Gradient Descent
Input: A means to compute the gradient ∇U(q) at a point q
Output: A sequence of points {q(0), q(1), …, q(i)}
  q(0) = q_start
  i = 0
  while ∇U(q(i)) ≠ 0 do
    q(i + 1) = q(i) − α(i) ∇U(q(i))
    i = i + 1
  end while
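A minimal Python rendering of algorithm 4 might look like the sketch below; the fixed step size, tolerance, and iteration cap are practical choices of our own rather than part of the algorithm as stated.

```python
import numpy as np

def gradient_descent(grad_U, q_start, alpha=0.05, eps=1e-3, max_iters=10_000):
    """Algorithm 4 with the practical stopping test ||grad U(q)|| < eps.

    grad_U : function returning the gradient of the potential at q
    alpha  : fixed step size (the text allows a per-iteration alpha(i))
    Returns the sequence of iterates q(0), q(1), ..., q(i).
    """
    path = [np.asarray(q_start, dtype=float)]
    for _ in range(max_iters):
        g = grad_U(path[-1])
        if np.linalg.norm(g) < eps:      # forgiving version of "gradient is zero"
            break
        path.append(path[-1] - alpha * g)
    return path

# Example: pure quadratic attraction to a goal at (2, 1); the iterates converge to the goal.
q_goal = np.array([2.0, 1.0])
path = gradient_descent(lambda q: q - q_goal, np.zeros(2))
print(path[-1])   # close to [2, 1]
```

With a combined attractive/repulsive gradient plugged in for grad_U, the same loop traces a path through free space, subject to the local minima problem discussed below.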
Tis important that (i) be small enough that the robot is not allowed to “jump into” obstacles, while being large enough thatthe algorithm does not require excessive computation time. In ‘motion planning problems, the choice for (i) is often made on an ad hoc orempirical ‘basis, perhaps based on the distance to the nearest obstacle or tothe goal. A number of systematic methods for choosing ai) can be found in the optimization literature [45]. Finally, it is unlikely that we will ever exactly satisfy the condition VU(q(@) = 0. For this reason, this condition is often replaced with the more forgiving condition IIVU(@Q)I < «, in which ¢ is chosen to be sufficiently small, based on the task requirements. Computing Distance for Implementation in the Plane In this section, we discuss some implementation issues in constructing the attrac tivefcepulsive potential function. The attractive potential function is rather straight- forward if the robot knows its current location and the goal location. The challenge. lies in computing the repulsive function because it requires calculation of distance to obstacles. Therefore, in this section, we discuss two different methods to com pute distance, and hence the repulsive potential function. The first method deals with sensor-based implementation on a mobile robot and borrows ideas from chapter 2 in inferring distance information from sensors. The second method assumes the contig- uration space has been discretized into a grid of pixels and computes distance on the eri, 86 43. 43.2 4 Potential Functions Figure 4.7 Local minima of rays determine the distance to nearby obstacles ‘Mobile Robot Implementation Thus fa, the discussion has been general to any configuration space where we can define distance. 
Now let's consider some issues in implementing these potential functions on a planar mobile robot equipped with range sensors radially distributed around its circumference, These range sensors approximate value ofthe raw distance function p defined in chapter 2. Whereas D(q) corresponds tothe global minimum of the raw distance function p, a d(4) corresponds to a Jocal minimum with respect 10 of pl, 8) (Figure 4.7) For example, any sensor in the sonar array whose value is less than that of both its lft and right neighbors isa local minimum. Such sensors face te closest points on their corresponding obstacles, Therefore, these sensors pot inthe direction that maximally brings the robot closest to the obstacles, ie., -Vei(q)- “The distance gradient points inthe opposite direction. An obstacle distance function may be incorrect if the obstacle is partially occluded by another Brushfire Algorithm: A Method 1o Compute Distance on a Grid In this subsection, we explain a method for computing distance from a map repre- sentation called a grid, which is a wwo-dimensional array of square elements called pisels. A pixel has a value of zero if itis completely free of obstacles and one if itis completely or even partially occupied by an obstacle. 43° Computing Distance for Implementation in the Plane 87 at [a2 | os a [2 | a3 | ot Bites} ns m4 16 | a7 | one | on nt [one | as I Four point Eight point Figure4.8 Four point vs. eight-point connectivity The user or planner has @ choice in determining the neighboring relationships of pixels ina grid. When only the north, south, east, and west pixels are considered neigh- bors, the grid has four-point connectivity. When the grid also includes the diagonals. as neighbors, then it has eight-point connectivity (igure 4.8). Four-point connectivity has the advantage in that it respects the Manhattan distance function (the L! metric) because it measures distance as if one were driving in city blocks in midtown Manhattan. 
‘The brushfire algorithm uses a grid to approximate distance, and hence the repul- sive function, The mput to the algorithm isa grid of pixels where the free-space pixels havea value of zero and the obstacles havea value of one. The outputisa grid of pixels ‘whose corresponding values each measure distance tothe nearest obstacle. These val- ues can then be used to compute a repulsive potential function, as well sits gradient. In the first step of the brushfire algorithm, all 2r0-valued pixels neighboring one- valued pixels are labeled with a two. The algorithm can use four-point or eight-point connectivity to determine adjacency. Next, all zero-valued pixels adjacent to two's are labeled with a three. This procedure repeats i, all zero-valued pixels adjacent toan{ are labeled with an + 1,as fa brushfir is growing from each of the obstacles ‘until the fie consumes all of the fre-space pixels. The procedure terminates when all pixels have an assigned value (gure 49) ‘The brushfire method produces @ map of distance values to the nearest obstacle. ‘The gradient of distance ata pixel is determined by finding a neighbor withthe lowest, pixel value, The gradient is then a vector which points to this neighbor. Note that this vector points in either one of four or one of eight possible directions. If there are multiple neighbors with the same lowest value, simply pick one to define the gradient Just asthe grid is an approximation ofthe workspace, so is the computed gradient an approximation of the actual distance gradient. 4 Potential Functions Figure 4.9 Propagation of the brushfive algorithm with eight-point connectivity. lines passthrough pixels where fronts collide. 88 44 44 Local Minima Problem 89 ‘With distance and gradient to the nearest obstacle inhand, a planner can compute the repulsive function. The attractive potential function can be computed analytically and together with the cepulsive function, a planner can invoke the additive attractive! 
repulsive function described in section 4.1 Itis worth noting thatthe method described here generalizes into higher dimensions ‘where pixels then become volume elements. For example, in three-dimensions, four- point connectivity generalizes to six-point connectivity and eight-point connectivity ‘generalizes to twenty-six-point connectivity. So, when assigning incremental pixel values to neighboring pixels in higher dimensions, the algorithm choses the appro- priate adjacency relationship and then iterates through as described above. Although possible, it would become computationally intractable to compute the brushfire in higher dimensions, Local Minima Problem “The problem tha plagues all gradient descent algorithms is the possible existence of local minima in the potential field, For appropriate choice ofa’) itean be shown that the gradient descent algorithm is generically guaranteed to converge to a minimum inthe field, but there is no guarantee that this minimum will be the global minimum. “This means that there is no guarantee that gradient descent will find a path 0 ggya In figure 4.10, the robot is initially attracted to the goal as it approaches the horseshoe shaped obstacle. The goal continues to attract the robot, but the bottom arm of the ‘obstacle deflects the robot upward until the top arm of the horseshoe begins to influence _ ‘a Figuce 4.10 Local sniniswum inside the concavity. The robot moves into the caneavity until dient balances out the attractive gradient. the repulsive gn 90 45 4 Potential Functions >So co Figure 4.11 Local minimum without concave obstaces. The robot moves away from the two, convex obstacles until i reaches a point where the gradient vanishes; at tis pont, the sum of the attractive gradient and the repulsive gradient is 2et0. ‘the robot. At tis point, the effect ofthe top and bottom arms keeps the robot halfway between them and the goal continues to attract the robot. 
The robot reaches a point where the effect ofthe obstacle’s base counteracts the atraction of the goal. In other words, the robot has reached a g* where VU(g") = O and q* is not the goal. Note, this problem is not limited to concave obstacles as can be seen in figure 4.11. Local minima present a significant drawback to the attractive/repulsive approach, and thus the attractive/repulsive technique is not complete 5 Barraquand and Latombe (37] developed search techniques other than gradient 7 ddescentto overcome the problem of local minima present when planning with potential functions. Their planner, the Randomized Path Planner (RPP) (37], used a variety of potential functions some of which were simplified expressions of the potentials presented inthis chapter. RPP followed the negative gradient ofthe specified potential function and when stuck at a local minimum, it initiated a series of random walks. Often the random walks allowed RPP to escape the local minimurn and in that case, the negative gradient to the goal was followed again. Wave-Front Planner ‘The wave-front planner [38,208] affords the simplest solution to the local minima problem, but can only be implemented in spaces that are represented as grids, For the sake of discussion, consider a two-dimensional space. Initially, the planner starts with the standard binary grid of zeros corresponding to free space and ones to obstacles. The 45° Wave-Front Planner 91 planner also knows the pixel locations ofthe start and goal. The goal pixel is tabeted ‘with a two. In the first step, all zero-valued pixels neighboring the goal are labeled with a three, Next, all zero-valued pixels adjacent to threes are labeled with four. ‘This procedure essentially grows a wave front from the goal where at each iteration, all pixels on the wave front have the same path length, measured with respect to the ‘grid, (0 the goal. 
This procedure terminates when the wave front reaches the pixel that contains the robot start location.

The planner then determines a path via gradient descent on the grid starting from the start. Essentially, the planner determines the path one pixel at a time. Assume that the value of the start pixel is 33. The next pixel in the path is any neighboring pixel whose value is 32. There could be multiple choices; simply pick any one of them. The next pixel is then one whose value is 31. Boundedness of the free space (and hence the discretization) and continuity of the distance function ensure that construction of the wave front guarantees that there will always be a neighboring pixel whose value is one less than that of the current pixel, and that this procedure forms a path in the grid to the goal, i.e., to the pixel whose value is two.

Figure 4.12 contains six panels that demonstrate various stages of the wave-front propagation using four-point connectivity. Note that all points on the wave front have the same Manhattan distance to the goal. In the lower-left panel, note how the wave front seemingly collides on itself. We will see later that the point of initial collision corresponds to a saddle point of the function that measures distance to the goal. This point then propagates away from the start as well. The trace of this propagation corresponds to a set of points that have two choices for shortest paths back to the goal, either going around the top of the triangle or below it.

The wave-front planner essentially forms a potential function on the grid which has one local minimum and thus is resolution complete. The planner also determines the shortest path, but at the cost of coming dangerously close to obstacles. The major drawback of this method is that the planner has to search the entire space for a path.

Finally, just like the brushfire method, the wave-front planner generalizes into higher dimensions as well. Consider the three-dimensional case first. Just as a pixel has four edges, a voxel (a three-dimensional pixel) has six faces. Therefore, the analogy to four-point connectivity with pixels is six-point connectivity with voxels. For a voxel with value i, if we assume six-point connectivity, then we assign i + 1 to the surrounding six voxels that share a face with the current voxel. Likewise, if we assume twenty-six-point connectivity (analogous to eight-point connectivity), then we assign i + 1 to all surrounding voxels that share a face, edge, or vertex. It should be noted, however, that implementation of the wave-front planner in higher dimensions becomes computationally intractable.

Figure 4.12 Stages of the wave-front propagation with four-point connectivity.
Just as a pixel hhas four edges, a voxel (a three-dimensional pixel) has six faces, Therefore, the analogy to four-point connectivity with pixels is six-point connectivity with voxels. For a voxel with value i, if we assume six-point connectivity, then we assign / +1 to the surrounding six voxels that share a face with the current voxel. Likewise, if we assume twenty-six-point connectivity (analogous to eight-point connectivity), then wwe assign 7 + 1 to all surrounding voxels that share a face, edge or vertex. It should be noted, however, implementation of the wavefront planner in higher dimensions becomes computationally intractable. 2 Figure 4.12, in the uppers 4.6 4.641 46 Navigation Potential Functions 93 Navigation Potential Functions ‘Thus far, we have seen in chapter ? thatthe Bug algorithms are complete sensor-based planners that work in unknown spaces, but are limited to planar configuration spaces. Then, at the beginning of this chapter, we have seen that the attractive/repulsive potential Function approach applies to a general class of configuration spaces, but suffers from local minima problems, and hence is not complete. The wave-front planner addresses the local minima problem, but requires time and storage exponential in the dimension of the space. In this section, we introduce a new potential that is a function of distance to the obstacles, has only one minimum and applies to a limited class of configuration spaces with dimension two, three, and more. Such potential functions are called navigation functions, formally defined in [239, 364]. DEFINITION 4.6.1 A function @ : Ques > (0, 1] is called a navigation function ifit = is smooth (or atleast C* fork > 2), ‘= has a unique minimum at ggoa in the connected component of the fre space that contains dt ‘= is uniformly maximal on the boundary of the free space, and = is Morse A Morse function is one whose critical points are all non-degenerate. 
This means that critical points are isolated, and if a Morse function is used for gradient descent, any random perturbation will destabilize saddles and maxima. The navigation function approach represents obstacles as QO_i = {q | β_i(q) ≤ 0}; in other words, β_i(q) is negative in the interior of QO_i, zero on the boundary of QO_i, and positive in the exterior of QO_i.

4.6.1 Sphere-Space

This approach initially assumes that the configuration space is bounded by a sphere centered at q_0 and has n dim(Q_free)-dimensional spherical obstacles centered at q_1, ..., q_n. The obstacle distance functions are easy to define as

β_0(q) = −d²(q, q_0) + r_0²,
β_i(q) = d²(q, q_i) − r_i²,

where r_i is the radius of the ith sphere. Note that β_i(q) increases continuously as q moves away from the obstacle. Instead of considering the distance to the closest obstacle or the distance to each individual obstacle, we consider

(4.8) β(q) = ∏_{i=0}^{n} β_i(q),

which is positive in the interior of the free space. This presumes that the obstacles are disjoint.

This approach uses β to form a repulsive-like function. The attractive portion of the navigation function is a power of the distance to the goal, i.e.,

(4.9) γ_κ(q) = (d(q, q_goal))^{2κ}.

A first candidate potential combines the two as

(4.10) φ̂(q) = γ_κ(q) / β(q),

which grows without bound as q approaches the boundary of any obstacle. More importantly, for a large enough κ, the function φ̂(q) has a unique minimum. This is true because as κ increases, the term ∂γ_κ/∂q dominates ∂β/∂q, meaning that the gradient of φ̂ points toward the goal. Essentially, increasing κ has the effect of making γ_κ take the form of a steep bowl, so that the repulsive influence is small relative to the overwhelming influence of the attractive field. Near an obstacle, only that obstacle has a significant effect on the value of φ̂. Therefore, the only opportunity for a local minimum to appear is along a radial line between the obstacle and the goal. On this line, near the boundary of the obstacle, the Hessian of φ̂ cannot be positive definite because φ̂ is quickly decreasing in value moving from the obstacle to the goal.
Therefore, there cannot be any local minimum for large κ, except at the goal [239].

So φ̂ has a unique minimum, but unfortunately it can have arbitrarily large values, making it difficult to compute. Therefore, we introduce the analytical switch, which is defined as

(4.11) σ_λ(x) = x / (λ + x).

Since σ_λ(x) is zero at x = 0, converges to one as x approaches ∞, and is continuous (figure 4.13), we can use σ_λ(x) to bound the value of the function φ̂, i.e.,

s(q, λ) = (σ_λ ∘ φ̂)(q) = γ_κ(q) / (γ_κ(q) + λβ(q)).

The function s(q, λ) has a zero value at the goal, unitary value on the boundary of any obstacle, and varies continuously in the free space. It has a unique minimum for a large enough κ. However, it is still not necessarily a Morse function because it may have degenerate critical points. So, we introduce another function that essentially sharpens s(q, λ) so that its critical points become nondegenerate, i.e., so that s(q, λ) can become a Morse function. This sharpening function is

(4.12) ξ(x) = x^{1/κ}.

Figure 4.13 Analytic switch function, which is used to map the range of a function to the unit interval.

Figure 4.14 Configuration space bounded by a circle with five circle obstacles.

For λ = 1, the resulting navigation function on a sphere-world is then

(4.13) φ(q) = (ξ ∘ σ_λ ∘ φ̂)(q) = d²(q, q_goal) / (d^{2κ}(q, q_goal) + β(q))^{1/κ},

which is guaranteed to have a single minimum at q_goal for a sufficiently large κ [239]. Consider the configuration space in figure 4.14. The effect of increasing κ can be seen in figure 4.15, which plots the contour lines of φ as κ increases. For κ = 3, φ has three local minima, one of which is the global minimum. For κ = 4 and 6, the local minima become more apparent because it is easier to see the contour lines (actually loops) that encircle the local minima. For κ = 7 and 8, the "bad" minima disappear; however, as κ grows, the potential becomes nearly flat over much of the free space, with sharp transitions in between (figure 4.16).
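The sphere-world navigation function of equation (4.13) is simple to evaluate numerically. The sketch below is an illustration, not code from the book; the world geometry in the test is made up for the example. It takes a bounding circle and a list of disc obstacles and returns φ(q), which is 0 at the goal and 1 on every obstacle boundary:

```python
def navfn(q, goal, world_center, world_radius, obstacles, kappa):
    """Navigation function on a planar sphere-world, eq. (4.13) with lambda = 1.

    obstacles: list of (center, radius) pairs for disc obstacles.
    Returns phi(q) in [0, 1]: 0 at the goal, 1 on every boundary.
    """
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # beta(q): product of the bounding-circle and obstacle distance functions.
    beta = world_radius ** 2 - d2(q, world_center)
    for center, radius in obstacles:
        beta *= d2(q, center) - radius ** 2
    gamma = d2(q, goal)                       # squared distance to the goal
    return gamma / (gamma ** kappa + beta) ** (1.0 / kappa)
```

Increasing `kappa` in this sketch reproduces the effect described above: the potential flattens toward 0 away from the obstacles while staying at 1 on their boundaries, with increasingly sharp transitions in between.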
This makes implementation of a gradient descent approach quite difficult because of numerical errors.

4.6.2 Star-Space

The result for sphere-spaces is just the first step toward a more general planner. A sphere-space can serve as a "model space" for any configuration space that is diffeomorphic to the sphere-space. Once we have a navigation function for the model space, to find a navigation function for the diffeomorphic configuration space we need only find the diffeomorphism relating the two spaces.

Figure 4.16 Navigation function for a sphere-space with five obstacles for κ = 5 and κ = 10.

Figure 4.17 (Left) Star-shaped set. (Right) Not a star-shaped set.

In this subsection we consider star-spaces, consisting of a star-shaped configuration space populated by star-shaped obstacles. A star-shaped set S is a set in which there exists at least one point that is within line of sight of all other points in the set, i.e.,

∃x ∈ S such that ∀y ∈ S, tx + (1 − t)y ∈ S ∀t ∈ [0, 1].

See figure 4.17. All convex sets are star-shaped, but the converse is not true.

The approach is to map a configuration space populated by star-shaped obstacles into a space populated by sphere-shaped obstacles. It can be shown [364] that for two free configuration spaces M and F, if φ : M → [0, 1] is a navigation function on M and h : F → M is a diffeomorphism, then φ ∘ h is a navigation function on F (figure 4.18). The diffeomorphism ensures that there is a one-to-one correspondence between critical points. We will use this property to define navigation functions in star-spaces using results from sphere-spaces.

The h mapping between the star- and sphere-spaces will be constructed using a translated scaling map

(4.14) T_i(q) = ν_i(q)(q − q_i) + p_i, where

(4.15) ν_i(q) = r_i (1 + β_i(q))^{1/2} / ‖q − q_i‖,

where q_i is the center of the star-shaped set, and p_i and r_i are, respectively, the center and radius of the corresponding spherical obstacle. Here β_i(q) defines a star-shaped set such that β_i(q) is negative in the interior, zero on the boundary, and positive in the exterior.
Note that if q is on the boundary of the star-shaped obstacle, then (1 + β_i(q))^{1/2} = 1, and thus T_i(q) = r_i (q − q_i)/‖q − q_i‖ + p_i. In other words, T_i(q) maps points on the boundary of the star-shaped set to a sphere.

For the star-shaped obstacle QO_i, we define the analytical switch

(4.16) s_i(q, λ) = γ_κ(q) β̄_i(q) / (γ_κ(q) β̄_i(q) + λ β_i(q)), where

(4.17) β̄_i(q) = ∏_{j≠i} β_j(q),

i.e., β̄_i is zero on the boundary of all obstacles except the "current" obstacle QO_i. Note that s_i(q, λ) is one on the boundary of QO_i, but is zero at the goal and on the boundary of all other obstacles. We define a similar switch for the goal, which is one at the goal and zero on the boundary of the free space, i.e.,

(4.18) s_goal(q, λ) = 1 − Σ_i s_i(q, λ).

Now, using the above switches and the translated scaling maps, we can define a mapping between star-space and sphere-space as

(4.19) h_λ(q) = s_goal(q, λ) T_id(q) + Σ_i s_i(q, λ) T_i(q),

where T_id(q) = q is just the identity map, used for notational consistency. Note that h_λ(q) is exactly T_i(q) on the boundary of QO_i because s_i is one on the boundary of QO_i and, for all j ≠ i, s_j is zero on the boundary of QO_i (here we include s_goal as one of the s_j's). In other words, for each i, h_λ is T_i on the boundary of obstacle QO_i, which maps the boundary of a star to a sphere. Moreover, h_λ(q) is continuous, and thus h_λ(q) maps the entire star-space to a sphere-space. It can be shown that for a suitable λ, h_λ(q) is smooth, bijective, and has a smooth inverse, i.e., it is a diffeomorphism [239]. Therefore, since we have a navigation function on a sphere-space, we also have a navigation function on the star-space.

4.7 Potential Functions in Non-Euclidean Spaces

Putting the issue of local minima aside for a moment, another major challenge in implementing potential functions is constructing the configuration space in the first place. This is especially challenging when the configuration space is non-Euclidean and multidimensional.
In order to deal with this difficulty, we will define a potential function in the workspace, which is Euclidean, and then lift it to the configuration space. Here, we compute a gradient in the configuration space as a function of gradients in the workspace. To do so, instead of thinking of gradients as velocity vectors, we will now think of them as forces. We first establish a relationship between a workspace force and a configuration space force. Then we apply this relationship to a single rigid-body robot, i.e., we show how to derive a configuration space force using workspace forces acting on a rigid-body robot. Finally, we apply this relationship to a multibody robot. We focus the discussion in this section on the attractive/repulsive potentials from section 4.1.

4.7.1 Relationship between Forces in the Workspace and Configuration Space

Since the workspace is a subset of a low-dimensional space (either R² or R³), it is much easier to implement and evaluate potential functions over the workspace than over the configuration space. Now, we will treat the gradient in the workspace as a force. Naturally, workspace potential functions give rise to workspace forces, but ultimately we will need forces in the configuration space to determine the path for the robot.

Let x and q be coordinate vectors representing a point in the workspace and the configuration of the robot, respectively, where the coordinates x and q are related by the forward kinematics (chapter 3), x = φ(q). Let f and u denote generalized forces in the workspace and the configuration space, respectively. To represent a force f acting at a point x = φ(q) in the workspace as a generalized force u acting in the robot's configuration space, we use the principle of virtual work, which essentially says that work (or power) is a coordinate-independent quantity. This means that power measured in workspace coordinates must be equal to power measured in configuration space coordinates.
In the workspace, the power done by a force f is the familiar f^T ẋ. In the configuration space, power is given by u^T q̇. From chapter 3, section 3.8, we know that ẋ = Jq̇, where J = ∂φ/∂q is the Jacobian of the forward kinematic map. Therefore f^T ẋ = f^T Jq̇ = u^T q̇, and the mapping from workspace forces to configuration space forces is given by

(4.20) u = J^T f.

EXAMPLE 4.7.1 (A Force Acting on a Vertex of a Polygonal Robot) Consider the polygonal robot shown in figure 4.19. The vertex a has coordinates (a_x, a_y)^T in the robot's local coordinate frame. Therefore, if the robot's coordinate frame is located at (x, y)^T with orientation θ, the forward kinematic map for vertex a (i.e., the mapping from q = (x, y, θ)^T to the global coordinates of the vertex a) is given by

(4.21) φ_a(q) = [ x + a_x cos θ − a_y sin θ , y + a_x sin θ + a_y cos θ ]^T.

The corresponding Jacobian matrix is given by

(4.22) J_a(q) = [ 1 0 −a_x sin θ − a_y cos θ ; 0 1 a_x cos θ − a_y sin θ ].

Figure 4.19 The robot A, with coordinate frame oriented at angle θ from the world frame, and vertex a with local coordinates (a_x, a_y).

Therefore, the configuration space force is given by

(4.23) [u_x, u_y, u_θ]^T = J_a^T(q) f = [ f_x , f_y , −f_x(a_x sin θ + a_y cos θ) + f_y(a_x cos θ − a_y sin θ) ]^T,

and u_θ corresponds to the torque exerted about the origin of the robot frame. Our result for u_θ can be verified by the familiar torque equation τ = r × f, where r is the vector from the robot's origin to the point of application of f, and τ = u_θ.

4.7.2 Potential Functions for Rigid-Body Robots

As before, our goal in defining potential functions is to construct a potential function that repels the robot from obstacles, with a global minimum that corresponds to q_goal. In the configuration space, this task was conceptually simple because the robot was represented by a single point, which we treated as a point mass under the influence of a potential field. In the workspace, things are not so simple; the robot has finite area in the plane and volume in three dimensions.
Evaluating the effect of a potential on the robot would involve computing an integral over the area/volume defined by the robot, and this can be quite complex (both mathematically and computationally). An alternative approach is to select a subset of points on the robot, called control points, and to define a workspace potential for each of these points. Evaluating the effect of the potential field on a single point is then no different from evaluating it for a point robot. The number of attractive control points needed is the number of points required to "pin down" the robot. For example, for a rigid-body robot in the plane, we can fix the position of the robot by fixing the position of two of its points.

For each attractive control point r_i, the attractive potential is

(4.24) U_att,i(q) = ½ ζ_i d²(r_i(q), r_i(q_goal)), if d(r_i(q), r_i(q_goal)) ≤ d*_goal;
U_att,i(q) = d*_goal ζ_i d(r_i(q), r_i(q_goal)) − ½ ζ_i (d*_goal)², if d(r_i(q), r_i(q_goal)) > d*_goal.

With this potential function, the workspace force for attractive control point r_i is defined by

F_att,i(q) = −ζ_i (r_i(q) − r_i(q_goal)), if d(r_i(q), r_i(q_goal)) ≤ d*_goal;
F_att,i(q) = −d*_goal ζ_i (r_i(q) − r_i(q_goal)) / d(r_i(q), r_i(q_goal)), if d(r_i(q), r_i(q_goal)) > d*_goal.

For the workspace repulsive potential fields, we use the same control points {r_j} and define the repulsive potential for r_j as

(4.25) U_rep,ij(q) = ½ η_j (1/d_i(r_j(q)) − 1/Q*_i)², if d_i(r_j(q)) ≤ Q*_i;
U_rep,ij(q) = 0, if d_i(r_j(q)) > Q*_i,

where d_i(r_j(q)) is the shortest distance between the control point r_j and the obstacle WO_i, and Q*_i is the workspace distance of influence for the obstacle. The gradient of each U_rep,ij corresponds to a workspace force,

(4.26) F_rep,ij(q) = η_j (1/d_i(r_j(q)) − 1/Q*_i) (1/d_i²(r_j(q))) ∇d_i(r_j(q)), if d_i(r_j(q)) ≤ Q*_i;
F_rep,ij(q) = 0, if d_i(r_j(q)) > Q*_i.

Figure 4.20 The repulsive forces exerted on the robot vertices r_1 and r_2 may not be sufficient to prevent a collision between edge E_1 and the obstacle.

Often the vertices of the robot are used as the repulsive control points, but it is important to note that this selection of repulsive control points does not guarantee that the robot cannot collide with an obstacle. Figure 4.20 shows an example where this is the case.
In this figure, the repulsive control points r_1 and r_2 are very far from the obstacle WO_1, and therefore the repulsive influence may not be great enough to prevent the robot edge E_1 from colliding with the obstacle. For this reason, we could add a floating repulsive control point, r_float. The floating control point is defined as that point on the boundary of the robot that is closest to any workspace obstacle. Obviously the choice of r_float depends on the configuration q. For the example shown in figure 4.20, r_float would be located at the center of edge E_1, thus repelling the robot from the obstacle. The repulsive force acting on r_float is defined in the same way as for the other control points, using (4.25).

The total configuration space force acting on the robot is the sum of the configuration space forces that result from all attractive and repulsive control points, i.e.,

(4.27) u(q) = Σ_i J_i^T(q) F_att,i(q) + Σ_j J_j^T(q) F_rep,j(q),

in which J_i(q) is the Jacobian matrix for control point r_i. It is essential that the addition of forces be done in the configuration space and not in the workspace.

4.7.3 Path-Planning Algorithm

Having defined a configuration space force, which we will again treat as a velocity, we can use the same gradient descent method for this case as in section 4.1. As before, there are a number of design choices that must be made.

ζ_i controls the relative influence of the attractive potential for control point r_i. It is possible to weight one of the control points more heavily than the others, producing a "follow the leader" type of motion, in which the leader control point is quickly attracted to its final position, and the robot then reorients itself so that the other attractive control points reach their final positions.
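The mapping u = J^T f from example 4.7.1 is the building block of the total-force summation above, and it can be checked numerically. The sketch below is an illustration with made-up numbers, not code from the book; it lifts a workspace force applied at one vertex of a planar rigid body and confirms that the third component equals the familiar planar torque r × f:

```python
import math

def vertex_jacobian(q, a):
    """Jacobian of the vertex map for a planar rigid body, eq. (4.22).
    q = (x, y, theta); a = (ax, ay) in the body frame."""
    _, _, th = q
    ax, ay = a
    return [[1.0, 0.0, -ax * math.sin(th) - ay * math.cos(th)],
            [0.0, 1.0,  ax * math.cos(th) - ay * math.sin(th)]]

def cspace_force(q, a, f):
    """u = J^T f: lift a workspace force at vertex a to (u_x, u_y, u_theta)."""
    J = vertex_jacobian(q, a)
    return [J[0][0] * f[0] + J[1][0] * f[1],
            J[0][1] * f[0] + J[1][1] * f[1],
            J[0][2] * f[0] + J[1][2] * f[1]]

# Check u_theta against the planar torque r x f = r_x f_y - r_y f_x,
# where r is the world-frame vector from the robot origin to the vertex
# (the translation (x, y) does not enter the torque about the robot origin).
q, a, f = (1.0, 2.0, math.pi / 6), (0.5, 0.2), (0.3, -0.4)
u = cspace_force(q, a, f)
th = q[2]
r = (a[0] * math.cos(th) - a[1] * math.sin(th),
     a[0] * math.sin(th) + a[1] * math.cos(th))
assert abs(u[2] - (r[0] * f[1] - r[1] * f[0])) < 1e-12
```

The first two components of u simply pass the workspace force through, while the third collects the moment it exerts — exactly the structure of equation (4.23).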
η_j controls the relative influence of the repulsive potential for control point r_j. As with the ζ_i, it is not necessary that all of the η_j be set to the same value.

Q*_i: We can define a distinct Q*_i for each obstacle. In particular, we do not want any obstacle's region of influence to include the goal position of any repulsive control point. We may also wish to assign distinct Q*_i's to the obstacles to avoid the possibility of overlapping regions of influence for distinct obstacles.

4.7.4 Path Planning for Articulated Bodies

The same formalism applies to articulated manipulators. Attractive control points are defined on the end effector, and repulsive control points are placed on the links. It may be a good idea to use floating repulsive control points as well, since we would like to prevent the links from colliding with obstacles. For each control point, a Jacobian matrix is computed (see chapter 3, section 3.8). These Jacobians map workspace forces to generalized configuration space forces (joint torques for revolute joints, joint forces for prismatic joints). With these exceptions, the formalism of section 4.7.2 can be directly applied to the path-planning problem for articulated arms (of course, the implementation may be a bit more difficult, since the Jacobians may be a bit more difficult to construct, and computing distances to polyhedrons is harder).

Problems

1. Prove that d_i(x) is a local minimum of ρ(x, s) with respect to s. Show that D(x) can be defined in terms of d_i(x).

2. Does the wave-front planner in a discrete grid yield the shortest distance? (If so, in what metric?)

3. Write a program that determines a path between two points in a planar grid using the wave-front planner. Input from a file a set of obstacles in a known workspace. This file should be a list of vertices, and the program should automatically convert the polygonal representation to a binary grid of pixels. Input from the keyboard a start and goal location, and display meaningful output showing that a path is indeed determined. Use either four-point or eight-point connectivity.
4. Write a program that determines a path for a planar convex robot that can orient, from a start to a final configuration, using the wave-front planner. Input from a file a robot and, from a separate file, a set of obstacles in a known workspace. Input a start and goal configuration from the keyboard. Hand in meaningful output where the robot must orient to get from start to goal. Hand in meaningful output where a path is not found.

5. The two-link manipulator in figure 4.21 has no joint limits. Use the wave-front planner to draw the shortest path between the start and goal configurations.

Figure 4.21 (Left) The initial configuration of a two-link manipulator in a polygonal workspace. (Right) The configuration space of the two-link manipulator with a start and goal configuration labeled S and G, respectively.

6. Implement the attractive/repulsive potential function for a point robot in a configuration space with the following obstacles:
(a) polygons
(b) polygons and circles
(c) polyhedrons
(d) polyhedrons and spheres
(e) polyhedrons, spheres, and cylinders.

7. Adapt the attractive/repulsive potential function method to handle moving obstacles.

8. Explain why the paths resulting from the Bug2 algorithm and the navigation potential function look similar.

5 Roadmaps

AS DESCRIBED in chapters 2 and 4, a planner plans a path from a particular start configuration to a particular goal configuration. If we knew that many paths were to be planned in the same environment, then it would make sense to construct a data structure once and then use that data structure to plan subsequent paths more quickly. This data structure is often called a map, and mapping is the task of generating models of robot environments from sensor data. Mapping is important when the robot does not have a priori information about its environment and must rely on its sensors to gather information to incrementally construct its map.
In the context of indoor systems, three map concepts prevail: topological, geometric, and grids (see figure 5.1).

Topological representations aim at representing environments with graphlike structures, where nodes correspond to "something distinct" and edges represent an adjacency relationship between nodes. For example, places may be locations with specific distinguishing features, such as intersections and T-junctions in an office building, and edges may correspond to specific behaviors or motion commands that enable the robot to move from one location to another, such as wall-following. Recently, it has become popular to augment topological maps with metric information (e.g., relative distance, angle) to help disambiguate places that "look" the same [108, 250, 382, 418] or to use them for navigation [188, 213, 240, 339].

Geometric models use geometric primitives for representing the environment. Mapping then amounts to estimating the parameters of the primitives to best fit the sensor observations. In the past, different representations have been used with great success. Many researchers use line segments [27, 122, 169, 180, 334] to represent parts of the environment. Popular approaches also represent three-dimensional structures of the environment with triangle meshes [17, 161, 182, 416]. Grid representations tessellate the space into cells, where the value of each pixel corresponds to the likelihood that its corresponding portion of workspace or configuration space is occupied [142]. Occupancy grids were first introduced for mapping unknown spaces with wide-angle ultrasonic sensors; this topic is discussed in chapter 9.

Figure 5.1 Different ways to represent an environment: topologically, geometrically, and using grids.

This chapter focuses on a class of topological maps called roadmaps [91, 262]. A roadmap is embedded in the free space, and hence the nodes and edges of a roadmap also carry physical meaning. For example, a roadmap node corresponds to a specific location and an edge corresponds to a path between neighboring locations. So, in addition to being a graph, a roadmap is a collection of one-dimensional manifolds that captures the salient topology of the free space.

A roadmap is used much as a highway system is: a motorist first plans a path to a network of highways, then travels along the highway system, and finally goes from the highway to the destination. The bulk of the motion occurs on the highway system, which brings the motorist from near the start to near the goal (figure 5.2). Likewise, using a roadmap, the planner can construct a path between any two points in a connected component of the robot's free space by first finding a collision-free path onto the roadmap, traversing the roadmap to the vicinity of the goal, and then constructing a collision-free path from a point on the roadmap to the goal. The bulk of the motion occurs on the roadmap, and thus searching does not occur in a multidimensional space, whether it be the workspace or the configuration space. If the robot knows the roadmap, then it in essence knows the environment. So one way a robot can explore an unknown environment is by relying on sensor data to construct a roadmap and then using that roadmap to plan future excursions into the environment. We now formally define the roadmap.

Figure 5.2 Los Angeles freeway system: planning a path from Pasadena to Manhattan Beach requires finding a path onto the 110, then to the 105 and 405, and finally from the 405 to the beach. Courtesy of Mapquest.

DEFINITION 5.0.2 (Roadmap) A union of one-dimensional curves is a roadmap RM if, for all q_start and q_goal in Q_free that can be connected by a path, the following properties hold:

1. Accessibility: there exists a path from q_start ∈ Q_free to some q'_start ∈ RM,
2. Departability: there exists a path from some q'_goal ∈ RM to q_goal ∈ Q_free, and
3.
Connectivity: there exists a path in RM between q'_start and q'_goal.

In this chapter, we consider five types of roadmaps: visibility maps, deformation retracts, retract-like structures, piecewise retracts, and silhouettes. All of these roadmaps have a corresponding graph representation. Visibility maps tend to apply to configuration spaces with polygonal obstacles. Nodes of the map are the vertices of the polygons, and for visibility maps we can use the terms node and vertex interchangeably. Two nodes of a visibility map share an edge if their corresponding vertices are within line of sight of each other. Deformation retractions are analogous to melting ice or burning grassland. As an arbitrarily shaped piece of ice melts, a resulting "stick figure" forms. The ice represents the robot's free space, and since the stick figure captures the macroscopic properties of the piece of ice, it can be used for path planning in the robot's free space. The representation used for silhouette methods is constructed by repeatedly projecting a shadow of the robot's multidimensional free space onto lower-dimensional spaces until a one-dimensional network is formed.

5.1 Visibility Maps: The Visibility Graph

The defining characteristics of a visibility map are that its nodes share an edge if they are within line of sight of each other, and that all points in the robot's free space are within line of sight of at least one node on the visibility map. This second statement implies that visibility maps, by definition, possess the properties of accessibility and departability. Connectivity must then be explicitly proved for each map for the structure to be a roadmap. In this section, we consider the simplest visibility map, called the visibility graph [262, 298].

5.1.1 Visibility Graph Definition

The standard visibility graph is defined in a two-dimensional polygonal configuration space (figure 5.3).
The nodes v_i of the visibility graph include the start location, the goal location, and all the vertices of the configuration space obstacles. The edges e_ij are straight-line segments that connect two line-of-sight nodes v_i and v_j, i.e.,

e_ij ≠ ∅  ⟺  s v_i + (1 − s) v_j ∈ cl(Q_free)  ∀s ∈ [0, 1].

Figure 5.3 Polygonal configuration space with a start and goal.

Figure 5.4 The thin solid lines delineate the edges of the visibility graph for the three obstacles represented as filled polygons. The thick dotted line represents the shortest path between the start and goal.

Note that we are embedding the nodes and edges in the free space, and that the edges of the polygonal obstacles also serve as edges in the visibility graph. By definition, the visibility graph has the properties of accessibility and departability. We leave it to the reader as an exercise to prove that the visibility graph is connected in a connected component of free space. Using the standard two-norm (Euclidean distance), the visibility graph can be searched for the shortest path (figure 5.4) [366]. The visibility graph can be defined for a three-dimensional configuration space populated with polyhedral obstacles, but it does not necessarily contain the shortest paths in such a space.

Unfortunately, the visibility graph has many needless edges. The use of supporting and separating lines can reduce the number of edges. A supporting line is tangent to two obstacles such that both obstacles lie on the same side of the line. For nonsmooth obstacles, such as polygons, a supporting line l can be tangent at a vertex v if l ∩ int(QO_i) = ∅. A separating line is tangent to two obstacles such that the obstacles lie on opposite sides of the line. See figure 5.5 for an example of supporting and separating lines. The reduced visibility graph is solely constructed from supporting and separating lines.
In other words, all edges of the original visibility graph that do not lie on a supporting or separating line are removed. Figure 5.6 contains the reduced visibility graph of the example in figure 5.4. The notion of separating and supporting lines can be used to generalize the visibility graph method for curved obstacles [294].

Figure 5.5 Supporting and separating lines; for polygonal obstacles we use a nonsmooth notion of tangency.

Figure 5.6 Reduced visibility graph.

Figure 5.7 Reduced visibility graph with nonconvex obstacles.

At first, the definitions of the supporting and separating lines may seem to apply only to convex obstacles. However, these definitions apply to nonconvex shapes as well. Here, we use the notion of local convexity. Recall that convex sets in the plane have the property that for all points on their boundary, there exists a line orthogonal to the surface normal that separates the convex set; this means that the set lies entirely on one side of the line. A set is locally convex at a point c if the hyperplane tangent at c separates the points in a neighborhood of c on the boundary of the set QO_i. In other words, when N is the surface normal at c, QO_i is locally convex at c if either (x − c) · N ≥ 0 for all x ∈ ∂QO_i ∩ nbhd(c), or (x − c) · N ≤ 0 for all such x. Convex obstacles are locally convex everywhere on their boundary. Figure 5.7 contains a reduced visibility graph for a configuration space with nonconvex obstacles. The reduced visibility graph is beneficial because it has fewer edges, making the search for the shortest path more efficient.

5.1.2 Visibility Graph Construction

Let V = {v_1, ..., v_n} be the set of vertices of the polygons in the configuration space, as well as the start and goal configurations. To construct the visibility graph, for each v ∈ V we must determine which other vertices are visible to v. The most obvious way to make this determination is to test all line segments v_i v_j, i ≠ j, to see if they intersect an edge of any polygon.
For a particular segment v_i v_j, there are O(n) intersections to check because there are O(n) edges among the obstacles. Now, there are O(n) potential segments emanating from v_i, so for a particular v_i there are O(n²) tests to determine which vertices are indeed visible from it. This must be done for all v ∈ V, and thus the construction of the visibility graph has complexity O(n³).

We can do better with a sweep technique. To see the idea, imagine the rotating beacon of a lighthouse: at any instant, the beam illuminates the object that is closest to the lighthouse. Furthermore, as the beam rotates, the obstacle that is illuminated changes only at a finite number of orientations of the beam. If the obstacles in the space are polygons, these orientations occur when the beam is incident on a vertex of some polygon. This insight gives rise to a class of algorithms known in the computational geometry literature as plane sweep algorithms. A plane sweep algorithm solves a problem by sweeping a line, called the sweep line, across the plane, pausing at each of the vertices of the obstacles. At each vertex, the algorithm updates a partial solution to the problem. Plane sweep algorithms are used to efficiently compute the intersections of a set of line segments in the plane, to compute intersections of polygons, and to solve many other computational geometry problems.

For the problem of computing the set of vertices visible from v, we will let the sweep line, l, be a half-line emanating from v, and we will use a rotational sweep, rotating l from 0 to 2π. The key to this algorithm is to incrementally maintain the set of edges that intersect l, sorted in order of increasing distance from v. If a vertex v_i is visible to v, then it should be added to the visibility graph (algorithm 5).

Algorithm 5 Rotational Plane Sweep Algorithm
Input: A set of vertices {v_i} (whose edges do not intersect) and a vertex v
Output: A subset of vertices from {v_i} that are within line of sight of v
1: For each vertex v_i, calculate α_i, the angle from the horizontal axis to the line segment v v_i.
2: Create the vertex list E, containing the α_i's sorted in increasing order.
3: Create the active list S, containing the sorted list of edges that intersect the horizontal half-line emanating from v.
4: for all α_i do
5:   if v_i is visible to v then
6:     Add the edge (v, v_i) to the visibility graph.
7:   end if
8:   if v_i is the beginning of an edge, E, not in S then
9:     Insert the edge E into S.
10:  end if
11:  if v_i is the end of an edge in S then
12:    Delete the edge from S.
13:  end if
14: end for

Figure 5.8 An example of the sweep line algorithm at work for an environment containing two rectangular obstacles.

Given S, it is straightforward to determine whether a vertex v_i is visible to v. Let S be the sorted list of edges that intersect the half-line emanating from v; the set S is incrementally constructed as the algorithm runs. If the line segment v v_i does not intersect the closest edge in S, and if l does not lie between the two edges incident on v_i (i.e., the sweep line does not intersect the interior of the obstacle at v_i), then v_i is visible from v.

Figure 5.8 shows an example configuration space containing two obstacles with vertices v_1, ..., v_8. Table 5.1 shows how the data structures are updated as the algorithm proceeds from initialization to termination. Step 1 of the algorithm determines the angles α_i at which the line l will pause; such angles correspond to the vertices of the obstacles. In step 2 of the algorithm, these angles are used to construct the vertex list E, and in step 3 the active list S is initialized: after initialization, E contains the eight angles α_i sorted in increasing order, and S contains the edges crossing the horizontal half-line, sorted by increasing distance from v.
ADD (v, v)) 0 visibility graph as (Es, Ei, En Es) Add By 0S. Add Es 10 S. ” (Es, Ez, Ba, Es} Delete E, from S. Add E; 10S. a (Es, Ex, Ea, Es) Delete Bs from S. Add Ey t0 S. ‘Termination Table $.1. Table showing the progress of the rotational plane sweep algorithm for the env ronment of figure 5.8. Attermination, the algorithm has added three new edges tothe visibility graph: (0,04), (2,2), and (v,»). : The complexity of algorithm 5 is O(n? log). The'time required by step 1 i (O(n), since each vertex must be visited exactly once. For step 2. the required time is = (O(n logn), since this is the time required to sorta lst of n elements. For step 3, he's set of active edges can be computed in O(n) time by merely testing exch edge to seeifit intersects the horizontal axis. In the worst case, if every edge were to intersect the hot; ‘zontal axis, this set could be sorted in time O(n log n). The main loop ofthe program (step 4 iterates n times (once for each vertex) At each iteration, the algorithm perform basic bookkeeping operations (insert or delete), but these can be done in tim O(logn) if an appropriate data structure, such as a balanced tree, is used to ma tain S Thus, the time required by step 4 is O(n log), and therefore the total time complexity ofthe algorithm is O(n? logn). Finally, we have not considered here the case when / may simultaneously intersect multiple vertices. In order for this to occur, three vertices must be collinear. When this does occur, the problem can be resolved by slightly perturbing the position of one of
