
Compliant Control of Whole-Body Multi-Contact Behaviors in Humanoid Robots

Luis Sentis

1 Introduction

In 2008, the US National Intelligence Council published a report listing six disruptive technologies expected by the year 2025. It included subjects in biology, energy, robotics, and information technology. On the subject of robotics, it reports: "Robots have the potential to replace humans in a variety of applications with far-reaching implications. [...]. The development and implementation of robots for elder-care applications, and the development of human-augmentation technologies, mean that robots could be working alongside humans in looking after and rehabilitating people." In this spirit, I explore here a methodology to enable greater maneuverability and interactivity of robots in human environments. As the expectation grows that humanoid robots will operate as human helpers and social companions, the need to improve their motor skills becomes increasingly important. Equipped with highly dexterous anthropomorphic systems, sensors for environmental awareness, and powerful computers, humanoids should be capable of handling many of the basic chores that humans do. Yet they cannot, not to the level that would make them practical. The goal of this chapter is to summarize my collaborative work in the area of compliant control of humanoid robots, with a focus on the design and execution of advanced whole-body multi-contact skills. In particular, I will address the important subject of unified whole-body force and motion behaviors under geometrical and multi-contact constraints, and the synthesis of whole-body skills using action primitives. In essence, I summarize here the following contributions: (1) whole-body task and postural control published in [30], (2) prioritized torque control under geometric constraints published in [54], (3) synthesis of whole-body behaviors published in [54], and (4) modeling and control of multi-contact behaviors published in [55]. Humanoids are difficult systems to model and control because they are highly dimensional and underactuated, and because they operate under contact and geometrical constraints. The work I summarize here describes my efforts towards a unified torque-level control methodology that simultaneously optimizes all required processes to achieve advanced compliant manipulation and maneuvering behaviors under the physical constraints imposed by human environments. The strength of this methodology relies on the underlying whole-body kinematic and dynamic models, which exploit the high dexterity of the humanoid's mechanism and account for the underactuated modes to create high-end skills

Fig. 1. Service application: This snapshot, taken from a simulated control scenario, depicts the operation of mobile robots in human environments. In the near future, tedious chores such as tidying up the house or cooking might be performed by skilled robots.

without relying on high-dimensional motion planning computations. Operating at the torque control level enables the intuitive implementation of force, motion, and compliant multi-contact behaviors while removing the need to generate inverse kinematic trajectories. Dynamic models are incorporated into the framework to enhance physical performance, allowing controllers to use lower feedback gains to achieve whole-body compliant contact interactions. A hierarchical organization of the controller imposes priorities between the control objectives according to their relative importance in the execution chain. Priorities are used as a mechanism to temporarily override non-critical criteria in order to fulfill critical constraints. A supervisory layer monitors the feasibility of the control objectives and provides feedback to high-level planners to modify the control policy or reason about the current action. In the last two years I have co-developed an embedded software architecture that implements the methods described here. This architecture currently operates the upper body of a research version of the Honda Asimo humanoid robot, which uses torques as control inputs [31]. Although the architecture is not yet published, it is based on my collaborative work initiated in [29] and [47]. To handle the computational complexity of the dynamic models, we have based our design on a multi-process and multi-rate architecture where several servo processes running at fast rates control all interactive operations while another

process running at low rates updates the models. Performance benchmarking has been conducted for simulated robotic structures, achieving the necessary speeds to govern high-dimensional humanoid robots. In Section 2, I will focus on the description of whole-body kinematic and dynamic models under contact constraints and will describe my recent work on modeling and control of multi-contact behaviors. In Section 3, I will summarize my contributions to prioritized torque-level control in the form of an architecture that unifies motion and force behaviors while fulfilling geometric and contact constraints. In Section 4, I will study several simulated examples involving whole-body multi-contact and manipulation behaviors.

2 Modeling Humanoids under Multi-Contact Constraints

One of the fundamental differences between humanoid robots and fixed robotic manipulators is their ability to maneuver in their environment using whole-body multi-contact interactions. Gravity forces push the humanoid's body against the ground, providing a supporting platform to move in the terrain. The robot's behavior is therefore determined not only by the displacements of its articulated joints but also by the maneuvers of its body and by the interactions between closed loops formed by the bodies in contact. In this section I will describe (1) kinematic models that take into account underactuated degrees of freedom and multi-contact constraints, (2) whole-body kinematic and dynamic models of task objectives, and (3) mechanical models of the internal forces and moments taking place between contact closed loops. These models will be used in the next section to construct compliant torque controllers for whole-body force and motion interactions under geometrical and contact constraints. Contact interactions in robots have been addressed since the early 1980s with work on dynamics and force control in the context of robotic manipulation [25] [50]. Cooperative distributed manipulation became important to enable the handling of big or heavy objects [2]. To describe the behavior of the object independently of the manipulators, an augmented object model was proposed based on dynamically consistent models [28]. Research began to focus on modeling multi-grasp behaviors and the associated internal forces acting between manipulators [45]. Using a closed-chain mechanism called the virtual linkage model, decoupled object behavior and accurate dynamic control of internal forces was addressed [64]. Mobile robotic platforms equipped with robotic manipulators were developed [20] and multi-grasp manipulation was implemented using efficient operational space algorithms [8]. Related work on constrained kinematic and dynamic models includes [15, 43, 62, 66]. The work described in this section is closely connected to locomotion. It is therefore important to review modern developments in this field of research. Dynamic legged locomotion has been a center of attention since the 1960s [16]. The Zero Moment Point (ZMP) criterion was developed to evaluate center of mass (CoM) acceleration boundaries in coplanar multi-contact situations [61].

Implementations of simple dynamic control algorithms for multi-legged running robots followed [49]. ZMP methods for humanoid robots were pioneered around the same time [58, 59] and later perfected as part of the Honda humanoid program [19]. To enable generalized multi-contact locomotion behaviors, extensions to the ZMP dynamic evaluation criterion were developed [18]. Compliant multi-contact behaviors using optimal distribution of contact forces have recently been explored [21]. Finding CoM static placements given frictional constraints for multi-legged systems was tackled in [3, 10]. The field of legged locomotion is vast and continues to broaden with pioneering contributions in areas such as hybrid non-linear control [60, 63], biomimetic coupled oscillators [6, 22], and passive dynamic walking [11, 42].

2.1 Kinematic and Dynamic Models

The kinematic and dynamic behavior of a humanoid robot interacting with its environment is determined not only by the robot's movement but also by its interactions with the environment through contact forces. To characterize these complex interactions, we represent humanoids as free-floating articulated systems pushed against the ground by gravity forces and in contact with ground surfaces using their limbs or body parts to gain maneuvering support. In Figure 2 we show a kinematic representation of a humanoid robot supported by its four limbs. The mobility of the robot with respect to the environment is determined by the relative position and orientation of a root base located on its body with respect to an inertial frame of reference, i.e.

$$x_b = \begin{bmatrix} x_{b(p)} \\ x_{b(r)} \end{bmatrix} \in \mathbb{R}^6, \qquad (1)$$

where $x_{b(p)}$ and $x_{b(r)}$ represent respectively the relative position and rotation of the base with respect to the inertial frame of reference. Computationally, we describe whole-body kinematics by assigning revolute and prismatic joints to the robot's free-floating base, including a spherical joint representing the base orientation and three prismatic joints representing the base Cartesian position. We compute branching kinematic variables using the efficient algorithms described in [8, 14]. The humanoid system is therefore treated as a branching articulated system with n actuated joints and 6 underactuated DOFs.

Definition 1 (Robot Generalized Coordinates). The robot position and orientation in space and the position of its articulated joints can be fully described by the set

$$\{x_b, q\} = \{x_{b(p)}, x_{b(r)}, q_1, q_2, \ldots, q_n\} \in \mathbb{R}^{6+n}, \qquad (2)$$

where the vector $x_b$ represents the coordinates of the base link and the $n \times 1$ vector $q$ represents actuated joint positions.

Fig. 2. Kinematic representation of a humanoid robot: The free moving base is represented as a virtual spherical joint in series with three prismatic virtual joints. Reaction forces appear at the contact points due to gravity forces pushing the body against the ground. Contact constraints are expressed as rigid constraints with zero velocities and accelerations at the supporting bodies.

Using the Euler-Lagrangian formalism we can derive the following equation of motion describing the system's generalized dynamics under actuator torques and external contact forces:

$$A \begin{bmatrix} \ddot{x}_b \\ \ddot{q} \end{bmatrix} + b + g + J_s^T F_r = U^T \Gamma, \qquad (3)$$

where $A$ is the $(n+6) \times (n+6)$ inertia matrix, $b$ and $g$ are $(n+6) \times 1$ vectors of Coriolis/centrifugal and gravity forces respectively,

$$U = \begin{bmatrix} 0_{n \times 6} & I_{n \times n} \end{bmatrix} \in \mathbb{R}^{n \times (n+6)} \qquad (4)$$

is a selection matrix selecting actuated DOFs, and $\Gamma$ is the $n \times 1$ vector of actuation torques. As such, $U$ determines the underactuated/actuated degrees of freedom of the robot plus its free-floating base by assigning zero torques to the base. Moreover, $J_s$ is the cumulative Jacobian of all supporting bodies in contact with the ground and $F_r$ is the associated vector of reaction forces and moments. As shown in Equation (3), reaction forces against the ground translate into forces acting on passive and actuated DOFs. With the premise that the

contact bodies lie flat against the ground, a simple contact model can be defined where no relative movement occurs between the contact surfaces and the supporting surfaces [65], i.e.

$$\dot{x}_s = 0_{6n_s \times 1}, \qquad \ddot{x}_s = 0_{6n_s \times 1}. \qquad (5)$$

Here $\dot{x}_{s(i)}$ is the $6 \times 1$ vector of linear and angular velocities of the i-th contact support. Note that more sophisticated contact models should be considered in case the contact interactions take place against deformable surfaces, e.g. a stiffness/damper mechanical model. Solving Equation (3) for the above set of constraints leads to a model-based estimate of the reaction forces and moments in terms of the actuation torques, i.e.

$$F_r = \bar{J}_s^T U^T \Gamma - \bar{J}_s^T (b + g) + \Lambda_s \dot{J}_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}, \qquad (6)$$

where $\Lambda_s$ is the apparent inertia matrix projected in the space defined by the supporting bodies, $J_s$ is the associated Jacobian matrix, and $\bar{J}_s$ is its dynamically consistent generalized inverse [26]. Plugging the above equation into (3) we obtain the following dynamic model under contact constraints:

$$A \begin{bmatrix} \ddot{x}_b \\ \ddot{q} \end{bmatrix} + N_s^T (b + g) + J_s^T \Lambda_s \dot{J}_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix} = (U N_s)^T \Gamma, \qquad (7)$$

where $N_s$ is the dynamically consistent null space of $J_s$. Notice that the projection $N_s$ in the above equation ensures that the applied torques fulfill the contact constraints of Equation (5). Because the robot is constrained by the supporting ground, the position of its base can be derived from the position of the actuated joints alone. In turn, the coordinates of arbitrary task descriptors can be obtained from actuated joint positions only. Let us study this dependency more closely. The velocity constraint on the supporting extremities given in Equation (5) means that base and joint velocities lie in the algebraic null space of the support Jacobian. This dependency leads to the following expression,

$$\begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}^* = N_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}, \qquad (8)$$

where $\dot{x}_b$ and $\dot{q}$ are arbitrary vectors of base and joint velocities, and $\dot{x}_b^*$ and $\dot{q}^*$ are the corresponding constrained variables. Moreover, constrained joint velocities alone can be obtained by multiplying the above equation by the actuation matrix $U$, i.e.

$$\dot{q}^* = U N_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}. \qquad (9)$$

When the robot is in single support stance, the matrix $U N_s$ is full rank and therefore $\dot{q}^*$ can take arbitrary values. However, when the robot is in multi-contact stance, the presence of closed loops causes $U N_s$ to be singular and, as a result, there are inter-dependencies between the constrained joint velocities $\dot{q}^*$.

Fig. 3. Dependency between base and joint displacements: Due to the effect of the supporting contact constraints, the position of the end effector, $x$, can be expressed using joint positions alone, $q$, i.e. disregarding the position of the base, $x_{base}$.

Solving the above equation yields the equality

$$\begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}^* = \overline{U N_s}\, \dot{q}^*, \qquad (10)$$

where $\overline{U N_s}$ is the dynamically consistent generalized inverse of $U N_s$. To verify that the above expansion is correct, it must produce zero velocities at the support points, i.e.

$$\dot{x}_s = J_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}^* = J_s\, \overline{U N_s}\, \dot{q}^* = 0. \qquad (11)$$

This is fulfilled when $\overline{U N_s}$ is the dynamically consistent generalized inverse of $U N_s$ as defined in [53], i.e.

$$\overline{U N_s} \triangleq A^{-1} (U N_s)^T \left[ U N_s\, A^{-1}\, (U N_s)^T \right]^{+}, \qquad (12)$$

where $(\cdot)^+$ is the Moore-Penrose pseudo-inverse operator. To verify the correctness of Equation (11) we use the equalities $A^{-1} N_s^T = N_s A^{-1}$ and $J_s N_s = 0$, which are demonstrated in [53].

2.2 Task Kinematics and Dynamics Under Supporting Constraints

To execute complex actions, a humanoid must simultaneously coordinate the behavior of multiple task objectives. We consider the vector of task descriptors

Fig. 4. Whole-body manipulation behavior: We show here overlapping pictures from a simulated example consisting of a behavior of placing screws on a stair by means of a screw gun. To achieve this complex behavior we simultaneously optimize multiple task objectives, shown on the right-hand side of the image. Each objective is represented by a coordinate vector and a Jacobian transformation to express its relationship with respect to joint displacements.

(e.g. see Figure 4)

$$x_{skill} \triangleq \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n_t} \end{bmatrix}, \qquad (13)$$

where each $x_k$ describes the coordinates of a desired task descriptor, e.g. the Cartesian position and orientation of a point on the robot's body, and $n_t$ is the number of task descriptors. A general definition of a task descriptor is any linear or non-linear transformation of the generalized coordinates defined in Definition 1. When using (10), task velocities can be expressed in terms of joint velocities alone, i.e.

$$\dot{x}_k = J_k \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix} = J_k\, \overline{U N_s}\, \dot{q}. \qquad (14)$$

Definition 2 (Support-consistent reduced Jacobian). The task operator $J_k \overline{U N_s}$ in the above equation behaves as a constrained Jacobian transformation mapping joint velocities into task velocities, and I will refer to it using the symbol

$$J_k^* \triangleq J_k\, \overline{U N_s}. \qquad (15)$$

This expression is motivated by the dependency of base velocities on joint velocities, as shown in Figure 3.
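To make Equations (4), (7), (12), and (15) concrete, the sketch below assembles the support null space $N_s$, the dynamically consistent inverse $\overline{U N_s}$, and a support-consistent reduced Jacobian $J_k^*$ for a toy system. It is an illustrative numerical example, not the chapter's implementation; the dimensions, the random test matrices, and the helper name dyn_consistent_pinv are assumptions made for this sketch.

```python
import numpy as np

def dyn_consistent_pinv(J, A_inv):
    """Dynamically consistent generalized inverse: J_bar = A^-1 J^T (J A^-1 J^T)^+."""
    return A_inv @ J.T @ np.linalg.pinv(J @ A_inv @ J.T)

def support_consistent_jacobian(J_task, J_s, A, n_act):
    """Compute J_k^* = J_k bar(U N_s) for a task Jacobian under support constraints."""
    n_tot = A.shape[0]                       # n + 6 generalized DOFs
    A_inv = np.linalg.inv(A)
    J_s_bar = dyn_consistent_pinv(J_s, A_inv)
    N_s = np.eye(n_tot) - J_s_bar @ J_s      # support null space appearing in Eq. (7)
    U = np.hstack([np.zeros((n_act, 6)), np.eye(n_act)])   # actuation selection (Eq. 4)
    UNs = U @ N_s
    UNs_bar = dyn_consistent_pinv(UNs, A_inv)               # Eq. (12)
    return J_task @ UNs_bar                                 # Eq. (15)

# toy example with consistent but arbitrary dimensions: n = 4 actuated joints
rng = np.random.default_rng(0)
n_act, n_tot = 4, 10
M = rng.standard_normal((n_tot, n_tot)); A = M @ M.T + n_tot * np.eye(n_tot)  # SPD inertia
J_s = rng.standard_normal((6, n_tot))        # one 6-DOF contact
J_hand = rng.standard_normal((3, n_tot))     # a 3-DOF task point
J_star = support_consistent_jacobian(J_hand, J_s, A, n_act)
print(J_star.shape)    # (3, n_act): task velocities from actuated joint velocities alone
```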

Fig. 5. Decomposition of internal forces and moments: We decompose internal forces and moments into contact centers of pressure, internal tensions, and normal moments. Contact centers of pressure allow us to control contact rotational behavior while internal tensions and normal moments allow us to control the behavior of contact points with respect to surface friction properties.

2.3 Modeling of Contact Centers of Pressure, Internal Forces, and CoM Behavior

We consider whole-body contact scenarios where multiple extremities of the robot are in stable contact against flat surfaces (see Figure 5). In this case, every contact imposes six constraints on the robot's mobility. We assume that each extremity in contact has enough joints with respect to the base link to enable the independent control of its position and orientation. This condition translates into the existence of six or more independent mechanical joints between the base link and the extremity in contact.

Fig. 6. Forces acting on a supporting body (e.g. the right foot): Establishing the balance of moments on each contact body allows us to determine the position of contact centers of pressure.

We consider contact scenarios involving an arbitrary number of supporting extremities, represented by the number $n_s$. Flat supporting contacts impose $6n_s$ constraints on the robot's motion, where 6 of these constraints provide the support to maneuver the robot's center of mass and the other $6(n_s - 1)$ describe the internal forces and moments acting on the closed loops that exist between supporting extremities [64]. Internal forces and moments play two different roles in characterizing the contact behavior of the robot: (1) contact centers of pressure define the behavior of the contact links with respect to edge or point rotations and (2) internal tensions and moments describe the behavior of the contact links with respect to the friction characteristics associated with the contact surfaces. For $n_s$ links in contact we associate $n_s$ contact CoPs. Each contact center of pressure is defined as the 2D point on the contact surface where the tangential moments are equal to zero. Therefore, $2n_s$ coordinates describe all contact pressure points. In Figure 5 I illustrate a multi-contact scenario involving three supporting contacts on the robot's feet and left hand and an operational task designed to interact with the robot's right hand. We focus on the analysis of the forces and moments taking place on a particular contact body, as shown in Figure 6. Based on [61], we abstract the influence of the robot's body above the k-th supporting extremity by the inertial and gravity

force $f_{s_k}$ and the inertial and gravity moment $m_{s_k}$ acting on a sensor point $S_k$. For simplicity, I assume the sensor is located at the mechanical joint of the contact extremity. Here, $P_k$ represents the contact center of pressure of the k-th contact extremity, $f_{r_k}$ is the vector of reaction forces acting on $P_k$, and $m_{r_k}$ is the vector of reaction moments acting on $P_k$. The frame $\{O\}$ represents an inertial frame of reference located outside of the robot and the frame $\{S_k\}$ represents a frame of reference located at the sensor point. All force quantities are described with respect to the sensor frame. Assuming the supporting extremities are in static contact, the center of pressure for the k-th contact link can be written as

$$P_{k_x} = S_{k_x} - \frac{m_{s_y}}{f_{r_{k_z}}} - \frac{f_{r_{k_x}}}{f_{r_{k_z}}} \left( S_{k_z} - P_{k_z} \right), \qquad (16)$$

$$P_{k_y} = S_{k_y} + \frac{m_{s_x}}{f_{r_{k_z}}} - \frac{f_{r_{k_y}}}{f_{r_{k_z}}} \left( S_{k_z} - P_{k_z} \right). \qquad (17)$$

Here, x and y refer to the tangential directions with respect to the local surface frames. The same analysis applies to the other extremities in contact, defining the $n_s$ contact centers of pressure. Characterizing contact CoPs is an important step towards developing contact models that describe intuitively the contact state of the robot. Based on these models, in the next section I will describe methods that efficiently control the internal force state of the robot. In particular, I will present control methods that enable the manipulation of contact CoPs to desired positions on the contact surfaces. By manipulating contact CoPs away from contact edges we ensure that contact surfaces stay flat against the supporting surfaces, avoiding undesired contact rotations. Additionally, controlling contact CoPs will result in compliant contact behaviors since it implies neutralizing tangential moments exerted by contact surfaces. The various properties of contact CoPs make them an effective abstraction for the control and analysis of contact rotational behaviors. The analysis of contact CoPs and CoM behaviors will be exploited next to develop a virtual linkage model that characterizes the interaction and maneuvers of humanoids in unstructured multi-contact environments. In [55], a new instance of the virtual linkage model [64] was presented to describe the complex contact relationships formed between contact closed loops. The virtual linkage model is a parallel multi-articulated mechanical model connecting closed loops between contact extremities using virtual prismatic and spherical joints. It was first used to describe the relationship between resultant and internal forces of an object shared between multiple manipulators. In the case of humanoid robots, we use the virtual linkage model to describe the behavior of internal forces and moments as well as the behavior of forces and moments acting at the robot's CoM. To define the mechanical structure associated with the virtual linkage model, we choose anchoring points located at the contact CoP positions previously defined. Therefore, contact CoPs act as the nodes connecting the linkages. To prevent supporting extremities from rotating along contact edges, our approach is to place contact CoPs as close as possible to the geometric centers of the extremities in contact.

Fig. 7. Virtual linkage model for humanoid robots: We define a virtual linkage model anchored through contact CoPs. It enables the characterization of internal tensions and moments against contact surfaces. The virtual linkage model abstracts the behavior of internal and CoM forces with respect to reaction forces. These characteristics make the virtual linkage model a powerful tool for the analysis and efficient control of CoM maneuvers and contact interactions.

Hence, unwanted transitions to edge or point contacts will be avoided if contact disturbances occur. Note that placing contact CoPs at the centers of the links is not a necessary constraint. They can be placed at any position below the links in contact, but away from contact vertices and edges to prevent rotations. Therefore, in this chapter we only consider flat surface contact interactions. Extensions to corner and edge contacts could also be explored using a similar methodology.
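As a small illustration of Equations (16) and (17), the sketch below computes a contact CoP from force/torque readings at the sensor point of one supporting extremity. The variable names, the example numbers, and the assumption that the contact plane height $P_{k_z}$ is known (zero for flat ground) are mine, not the chapter's.

```python
import numpy as np

def contact_cop(f_r, m_s, S, P_z=0.0):
    """Contact center of pressure (Eqs. 16-17) for a flat support.

    f_r : reaction force expressed in the surface frame, [fx, fy, fz]
    m_s : inertial/gravity moment at the sensor point, [mx, my, mz]
    S   : sensor position [Sx, Sy, Sz] in the surface frame
    P_z : height of the contact plane (0 for flat ground).
    """
    fx, fy, fz = f_r
    Px = S[0] - m_s[1] / fz - (fx / fz) * (S[2] - P_z)   # Eq. (16)
    Py = S[1] + m_s[0] / fz - (fy / fz) * (S[2] - P_z)   # Eq. (17)
    return np.array([Px, Py, P_z])

# example: mostly vertical load with a small tangential moment (placeholder numbers)
print(contact_cop(f_r=[5.0, -3.0, 400.0], m_s=[2.0, -4.0, 0.1], S=[0.0, 0.0, 0.08]))
```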

We associate a virtual linkage model connecting all contact centers of pressure. In the scenario shown in Figure 7, each body in contact introduces a tension with respect to its neighboring nodes as well as normal and tangential moments. For contacts with $n_s > 2$ we can independently specify $3(n_s - 2)$ tensions, $n_s$ normal moments, and $2n_s$ tangential moments describing the centers of pressure. Any additional link in contact introduces three new tensions with respect to its neighbors and three more moments. No more than three tensions between neighboring nodes can be independently specified for a given contact. Internal tensions and normal moments characterize the behavior of contact bodies with respect to the friction cones and rotational friction properties of the surfaces in contact. In [55], we derived the following expression describing the virtual linkage model:

$$\begin{bmatrix} F_{com} \\ F_{int} \end{bmatrix} = G\, F_r, \qquad (18)$$

where $F_{com}$ is the six-dimensional vector of inertial and gravitational forces and moments at the robot's center of mass, $F_{int}$ is the vector of internal tensions between closed loops and of tangential and normal moments to the contact surfaces,

$$F_r \triangleq \begin{bmatrix} f_{r_1} \\ \vdots \\ f_{r_{n_s}} \\ m_{r_1} \\ \vdots \\ m_{r_{n_s}} \end{bmatrix} \in \mathbb{R}^{6 n_s} \qquad (19)$$

is the cumulative vector of reaction forces and moments projected onto the contact CoPs and expressed in the global frame, and

$$G \triangleq \begin{bmatrix} W_{com} \\ W_{int} \end{bmatrix} \in \mathbb{R}^{6 n_s \times 6 n_s} \qquad (20)$$

is the grasp/contact matrix defined in [55]. The upper part of the grasp/contact matrix defines the CoM behavior as

$$W_{com} \triangleq \begin{bmatrix} [I]_{3\times3} & \cdots & [I]_{3\times3} & [0]_{3\times3} & \cdots & [0]_{3\times3} \\ \hat{P}_1 - \hat{P}_{com} & \cdots & \hat{P}_{n_s} - \hat{P}_{com} & [I]_{3\times3} & \cdots & [I]_{3\times3} \end{bmatrix}, \qquad (21)$$

where $\hat{P}_{com}$ is the cross-product operator associated with the position of the CoM with respect to the global frame, $\hat{P}_i$ is the cross-product operator associated with the i-th center of pressure point, and $[I]_{3\times3}$ and $[0]_{3\times3}$ are the $3 \times 3$ identity and zero matrices respectively. The lower part of $G$ describes the internal force behavior, i.e. the tensions between contact nodes and the surface moments, and its detailed expression can be found in [55]. In the next section, we will use these virtual linkage models to develop controllers that can govern internal forces and CoM behavior. We will also study the application of the matrix $G$ to find solutions of CoM and internal force behaviors that comply with rotational and frictional contact constraints.
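The following sketch shows one way the CoM block $W_{com}$ of the grasp/contact matrix in Equation (21) can be assembled and applied to a stacked vector of contact reaction forces and moments. It only covers the upper block; the internal block $W_{int}$ is given in [55] and is not reproduced here. The moment convention (moments taken about the CoM) follows the reconstruction of Equation (21) above and should be treated as an assumption of this example.

```python
import numpy as np

def hat(p):
    """Cross-product (skew-symmetric) operator for a 3-vector p."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def w_com(p_com, cop_list):
    """Upper block of the grasp/contact matrix (Eq. 21): maps the stacked reaction
    forces/moments at the contact CoPs to the resultant force and moment at the CoM."""
    ns = len(cop_list)
    top = np.hstack([np.eye(3)] * ns + [np.zeros((3, 3))] * ns)
    bot = np.hstack([hat(p) - hat(p_com) for p in cop_list] + [np.eye(3)] * ns)
    return np.vstack([top, bot])             # 6 x 6*ns

# three contacts (two feet and one hand); F_r stacked as [f_1..f_ns, m_1..m_ns]
cops = [np.array([0.1, 0.1, 0.0]), np.array([0.1, -0.1, 0.0]), np.array([0.5, 0.3, 0.8])]
W = w_com(np.array([0.15, 0.0, 0.7]), cops)
F_r = np.concatenate([np.array([0.0, 0.0, 200.0])] * 3 + [np.zeros(3)] * 3)
print(W @ F_r)    # resultant CoM force (about 600 N vertical) and moment
```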

2.4 Friction Boundaries for Planning CoM and Internal Force Behaviors

The control of center of mass and internal force behavior is directly related to the friction properties of the supporting surfaces. We analyze here this relationship and its effect on contact stability with respect to surface friction boundaries. The torque command that we will see in Equation (49) entails controlling both the robot's center of mass and the internal force behaviors. As discussed in [55], center of mass behavior is always one of the task objectives involved in the torque reference of a humanoid robot. The trajectories and values of CoM and internal force behavior cannot take arbitrary values. They need to be chosen to fulfill contact and frictional constraints at the supporting surfaces. Ensuring that reaction forces remain within friction boundaries is needed to prevent the robot's contact extremities from sliding and rotating with respect to the environment. To facilitate the analysis of friction behaviors with respect to surface properties, we rotate the reaction forces and moments, which are normally expressed in the global frame as shown in Equation (6), to align with the frames attached to the contact surfaces, so their z components are parallel to the surface normals, i.e.

$$F_{surf} \triangleq R_{surf}\, F_r. \qquad (22)$$

Here $F_{surf}$ represents the rotated forces and moments and $R_{surf}$ is a $6n_s \times 6n_s$ rotation matrix that aligns z components with surface normals. Using the above transformation, Equation (18) can be written as

$$\begin{bmatrix} F_{com} \\ F_{int} \end{bmatrix} = G\, R_{surf}^{-1}\, F_{surf}, \qquad (23)$$

where

$$F_{com} \triangleq \begin{bmatrix} f_{com} \\ m_{com} \end{bmatrix}, \qquad F_{int} \triangleq \begin{bmatrix} f_{int} \\ m_{int} \end{bmatrix} \qquad (24)$$

are the forces and moments associated with the robot's center of mass and internal force behaviors, and $R_{surf}^{-1}$ is the inverse cumulative rotation. The above expression can be used to estimate the surface forces due to center of mass and internal force commands, i.e.

$$F_{surf} = R_{surf}\, G^{-1} \begin{bmatrix} F_{com,ref} \\ F_{int,ref} \end{bmatrix}, \qquad (25)$$

where $F_{com,ref}$ and $F_{int,ref}$ are reference values induced by the CoM and internal force behavior policies. Using the above expression we can analyze the feasibility of whole-body maneuvers and force behaviors with respect to the frictional properties of the contact

surfaces. To evaluate the feasibility of these behaviors, we consider the following indexes associated with linear friction cones and rotational friction ratios for the k-th contact extremity:

$$\alpha_{surf,k} \triangleq \begin{bmatrix} \tan^{-1}\!\left( \dfrac{f_{surf,k_z}}{f_{surf,k_x}} \right) \\[2mm] \tan^{-1}\!\left( \dfrac{f_{surf,k_z}}{f_{surf,k_y}} \right) \end{bmatrix}, \qquad (26)$$

$$\beta_{surf,k} \triangleq \frac{m_{surf,k_z}}{f_{surf,k_z}}, \qquad (27)$$

where $\alpha_{surf,k}$ is the vector of angles measured between the surface plane and the reaction force vector and $\beta_{surf,k}$ is a ratio between the normal moment and the normal force which characterizes the rotational friction behavior. This last index provides an intuitive metric of rotational friction characteristics since it normalizes normal moments with respect to the normal force load. We determine the boundaries of the above indexes by using simple models involving static friction cones and a heuristic model for rotational friction, i.e.

$$\alpha_{surf,k} \geq \text{fricCone}(k), \qquad (28)$$

$$\beta_{surf,k} \leq \text{rotIndex}(k). \qquad (29)$$

On the other hand, the boundaries of the center of pressure locations are determined by the surfaces in contact, i.e.

$$P_k \in \text{surfArea}(k), \qquad (30)$$

where $P_k$ are the contact CoPs and $\text{surfArea}(k)$ is the surface area below the k-th contact body. Overall, Equations (28)-(30) determine the boundaries of frictional and rotational stability that comply with the contact model defined in Equation (5). As we mentioned earlier, we control contact CoPs to be located close to the contact geometric centers. The choice of CoPs defines the anchors of the virtual linkage model and therefore determines the model of the grasp/contact matrix. Once $G$ is defined, we use the boundaries defined in Equations (28)-(30) to obtain feasible values of CoM and internal forces by means of Equation (25). In this context the grasp/contact matrix $G$ emerges as a powerful tool for the planning of advanced maneuvers in arbitrary 3D multi-contact environments.
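As an illustration of the boundaries in Equations (28)-(30), the sketch below checks whether a single contact's rotated surface forces, moments, and CoP stay within a static friction cone, a rotational friction bound, and the contact area. The chapter states the cone bound as an angle index (Eq. 26); the equivalent tangential-to-normal force ratio test is used here, and the numerical values of mu and rot_index are placeholders, not values from the chapter.

```python
import numpy as np

def contact_feasible(f_surf, m_surf, cop, support_bounds, mu=0.6, rot_index=0.02):
    """Check one contact against the boundaries of Eqs. (28)-(30).

    f_surf, m_surf : reaction force/moment rotated into the surface frame
                     (z along the surface normal), as in Eq. (22).
    cop            : contact CoP in the surface frame.
    support_bounds : axis-aligned bounds (xmin, xmax, ymin, ymax) of the link in contact.
    mu, rot_index  : assumed friction coefficient and rotational friction bound.
    """
    fx, fy, fz = f_surf
    if fz <= 0.0:
        return False                                   # contact must push, not pull
    in_cone = np.hypot(fx, fy) <= mu * fz              # linear friction cone (Eq. 28)
    in_rot = abs(m_surf[2]) / fz <= rot_index          # rotational friction ratio (Eq. 29)
    xmin, xmax, ymin, ymax = support_bounds
    in_area = xmin <= cop[0] <= xmax and ymin <= cop[1] <= ymax   # CoP inside surface (Eq. 30)
    return bool(in_cone and in_rot and in_area)

print(contact_feasible([20.0, 10.0, 300.0], [0.5, -0.3, 2.0],
                       cop=[0.01, 0.0], support_bounds=(-0.05, 0.05, -0.12, 0.12)))
```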

3 Prioritized Whole-Body Torque Control

In this section, we use the models previously described to provide a control framework that can (1) create whole-body compliant behaviors, (2) unify the control of force, motion, and multi-contact behaviors, (3) fulfill geometric and contact constraints, and (4) plan body displacements that are feasible in the

terrain. A key aspect we focus on is the fulfillment of geometrical and contact constraints. This problem has traditionally been addressed as a motion planning process. However, for fast response, we address it here as a reactive control problem. We seek control strategies that can adapt very quickly to geometric changes in the operational environment as well as to fast contact transitions. To accomplish these goals, I describe here my collaborative work on model-based control for the creation and execution of whole-body compliant skills that comply with geometric and contact constraints. The highlights of this framework are (1) a prioritized organization between task objectives that ensures that critical tasks are accomplished first, (2) fulfillment of geometric and contact constraints, (3) dynamically correct control of tasks and postures using all actuation redundancy, and (4) multi-contact compliant control. Historically, inverse kinematic control strategies have been widely used for manipulator control, because they enable the direct computation of trajectories according to geometric plans [48, 51]. Extensions for multi-objective control and task prioritization were later added to enable the control of redundant manipulators [17, 56]. However, with increasing demand for contact interactions, torque control strategies became highly relevant. In particular, the operational space control formulation [27] enabled the unified control of force and motion behaviors in mobile manipulators. The control of humanoid robots has gone through a different path, since a differentiating issue in humanoids is the presence of a free moving base and contact forces on the supporting limbs. For many years, the main focus of humanoids had been on balance and on the generation of dynamic locomotion trajectories [19]. Full-body motions of humanoids under geometric and balance constraints were later addressed as a motion planning problem [33]. A highly popular method to generate whole-body behaviors involves the generation of linear and angular momenta including task and balance constraints [23]. To enable whole-body compliant behaviors and exploit the redundancy of humanoids under geometric and contact constraints, prioritized torque control was developed [54]. Recently, a torque-level control framework based on minimum norm distribution of contact forces has been introduced [21]. Accurate control of internal forces and contact centers of pressure in the context of prioritized whole-body control has recently been developed [55] and is summarized in this section. Important contributions in the field of constrained whole-body control have also been made recently by the AIST-CNRS Joint Robotics Laboratory [1, 57]. The topics discussed in this section include representations of whole-body skills, prioritized torque control structures, realtime handling of dynamic constraints, task feasibility, and control of contact centers of pressure and internal tensions and moments. We will also discuss several simulated examples.

3.1 Representation of Whole-Body Skills

Although the goal of humanoid robots is to execute advanced dexterous and maneuvering tasks, much low-level coordination needs to take place to use their mechanical advantage efficiently. To create advanced skills we must first understand

the many coordinated processes that need to take place to manipulate, maneuver, and respond to the environment. From a purely physical standpoint, a robot is a parallel underactuated structure with motion and force interactions taking place with respect to the environment and between the limbs in contact. Displacements can be modeled as center of mass and end-effector trajectories, while manipulation tasks can be modeled in terms of force and motion interactions. It is logical to select the body's center of mass and the limb end effectors as operational points, each of them controlled as a separate, but cooperating, process. Another important objective is the coordination of internal forces and moments, which are characterized from the models of the previous section. Not surprisingly, the list of coordinated processes that need to be simultaneously optimized for efficient humanoid maneuvers and interactions is long. Postural behavior to enhance workspace reachability and effort minimization are other important criteria to be controlled. A process that monitors and prevents joint limits from being violated is required for safety, as is a process that monitors self-collisions between body parts and enforces a minimum separation. Table 1 illustrates the list of control objectives that we use to define a manipulation skill while standing up.
Table 1. Decomposition of a prototypical manipulation task.

Task Primitive      Coordinates            Control Policy
Joint limits        joint positions        locking attractor
Self collisions     distances              repulsion field
Balance             CoM                    dynamic trajectory
Right hand          Cartesian              force and position
Gaze                head orientation       position
Standing posture    marker coordinates     captured sequences
Internal forces     tensions / moments     optimal contact
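A skill such as the one in Table 1 can be encoded in software as an ordered collection of task primitives, each pairing a coordinate representation with a control policy and a priority level (the ordering below follows Proposition 1 later in the chapter). This is only a possible sketch of such a representation; the class and field names are invented for illustration and are not the controller's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class TaskPrimitive:
    name: str          # task primitive from Table 1
    coordinates: str   # coordinate representation used by the task
    policy: str        # control policy applied to those coordinates
    priority: int      # 1 = highest; lower-priority tasks run in the null space above

STANDING_MANIPULATION = [
    TaskPrimitive("joint limits", "joint positions", "locking attractor", 1),
    TaskPrimitive("self collisions", "distances", "repulsion field", 2),
    TaskPrimitive("balance", "CoM", "dynamic trajectory", 3),
    TaskPrimitive("right hand", "Cartesian", "force and position", 4),
    TaskPrimitive("gaze", "head orientation", "position", 5),
    TaskPrimitive("standing posture", "marker coordinates", "captured sequences", 6),
    TaskPrimitive("internal forces", "tensions / moments", "optimal contact", 7),
]

for task in STANDING_MANIPULATION:
    print(f"{task.priority}: {task.name} -> {task.policy}")
```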

Task kinematics under supporting constraints were analyzed in the previous section. When a primitive defining a complex skill, such as the one in Table 1, is considered, each descriptor can be kinematically characterized through unique coordinates and constrained Jacobians, i.e.

$$x_k = T[x_b, q], \qquad (31)$$

$$\dot{x}_k = J_k^*\, \dot{q}, \qquad (32)$$

where $T[\cdot]$ is a transformation defining the objective to be controlled, $x_k$ is the coordinate representing the k-th objective, $J_k^*$ is the associated constrained Jacobian, and $x_b$ and $q$ are base and joint coordinates.

3.2 Prioritized Torque Control

Torque control of humanoid robots represents an alternative to other popular control approaches, such as inverse kinematics and resolved momentum

control [23]. In contrast with these two other approaches, prioritized torque control is based on two key features: (1) using prioritized force models to unify whole-body motion and force interactions while ensuring the fulfillment of geometric and contact constraints, and (2) developing whole-body compliant multi-contact capabilities for effective interactions with the environment. Over the years, I have developed and matured prioritized torque control methods that can create complex interactive skills for operations of humanoids in highly constrained environments. The motivation behind prioritized structures is to develop mechanisms to temporarily override non-critical task processes in order to fulfill critical constraints as well as to exploit the actuation redundancy efficiently for postural behavior. I will summarize this work here. Note that for the proofs of the mathematical statements, the reader should refer to the author's previously published work. In general, for $n_t$ arbitrary task objectives defining a skill we use the following recursive prioritized whole-body torque structure for control:

$$\Gamma = \Gamma_1 + \Gamma_{2|prec(2)} + \cdots + \Gamma_{n_t|prec(n_t)} = \sum_{k=1}^{n_t} \Gamma_{k|prec(k)}, \qquad (33)$$
where the subscript $k|prec(k)$ is used to indicate that the k-th task operates in the null space of all higher priority tasks.

Definition 3 (Prioritized torques). The following expression determines the projection of lower priority tasks into the null space of higher priority tasks:

$$\Gamma_{k|prec(k)} \triangleq N_{prec(k)}^T\, \Gamma_k, \qquad (34)$$

where $N_{prec(k)}$ is the combined null space of all higher priority tasks (i.e. all preceding tasks) to the k-th level. The hierarchical torque control structure (33) unifies forces and motion by defining the recursive structure

$$\Gamma = J_1^T F_1 + J_{2|1}^T F_{2|1} + \cdots + J_{n_t|prec(n_t)}^T F_{n_t|prec(n_t)} + N_t^T \Gamma_{posture} = \sum_{k=1}^{n_t} J_{k|prec(k)}^T F_{k|prec(k)} + N_t^T \Gamma_{posture}, \qquad (35)$$

where the matrices $J_{k|prec(k)}$ correspond to prioritized task Jacobians as defined in (36), the vectors $F_{k|prec(k)}$ correspond to the operational forces that control the k-th priority tasks, $N_t$ corresponds to the product of all null spaces, which defines the residual movement redundancy for posture control, and $\Gamma_{posture}$ corresponds to the control policy for posture control, which is described in detail in [53].

Definition 4 (Prioritized Jacobian). The following prioritized Jacobian is associated with the k-th priority task:

$$J_{k|prec(k)} \triangleq J_k^*\, N_{prec(k)}, \qquad (36)$$

where $J_k^*$ is the constrained Jacobian associated with the k-th task and $N_{prec(k)}$ is the prioritizing null space of all preceding tasks.

Theorem 1 (Prioritized operational space control). The following control vector yields linear control of forces for the k-th prioritized task:

$$\Gamma_{k|prec(k)} = J_{k|prec(k)}^T\, F_{k|prec(k)}, \qquad (37)$$

where $\Gamma_{k|prec(k)}$ is the k-th component of the prioritized torque control structure shown in (33), $J_{k|prec(k)}$ is the prioritized Jacobian for the k-th task point discussed in (36), and $F_{k|prec(k)}$ is the vector of commanded operational forces.

Corollary 1 (Prioritized motion control). In the absence of contact forces at the operational point, the following control command yields linear control of task accelerations:

$$F_{k|prec(k)} = \Lambda_{k|prec(k)}\, a_k^{ref} + \mu_{k|prec(k)} + p_{k|prec(k)} - \Lambda_{k|prec(k)} J_k A^{-1} (U N_s)^T \sum_{i=1}^{k-1} \Gamma_{i|prec(i)}. \qquad (38)$$

Here $F_{k|prec(k)}$ is the control force shown in Equation (37), $a_k^{ref}$ is a feedback acceleration-level control policy for the k-th priority task, and the remaining dynamic quantities in the above equation have the following expressions:

$$\mu_{k|prec(k)} \triangleq \Lambda_{k|prec(k)} J_k A^{-1} N_s^T\, b \;-\; \Lambda_{k|prec(k)} \dot{J}_k \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix} \;+\; \Lambda_{k|prec(k)} J_k A^{-1} J_s^T \Lambda_s \dot{J}_s \begin{bmatrix} \dot{x}_b \\ \dot{q} \end{bmatrix}, \qquad (39)$$

$$p_{k|prec(k)} \triangleq \Lambda_{k|prec(k)} J_k A^{-1} N_s^T\, g. \qquad (40)$$

When using the above structure for control we achieve linear control of task accelerations, i.e.

$$\ddot{x}_k = a_k^{ref}. \qquad (41)$$

Using $a_k^{ref}$ we define the feedback control policy for motion control.

Corollary 2 (Prioritized force control). During contact interactions, the following control vector yields feedback control of contact forces:
$$F_{k|prec(k)} = -K_p\, (F_k - F_{des}) - K_v\, \dot{F}_k - K_i \int_0^t (F_k - F_{des})\, dt, \qquad (42)$$

where $F_k$ is the vector of forces and moments measured at the operational point, $F_{des}$ is the vector of desired forces and moments, and $K_p$, $K_v$, and $K_i$ are proportional, derivative, and integral gain matrices. When using the above control structure we directly control task forces at the operational point.
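A minimal sketch of the force feedback law of Equation (42) is shown below, with scalar gains standing in for the gain matrices $K_p$, $K_v$, $K_i$ and with the reconstructed negative feedback signs; the gains, dimensions, and class name are assumptions made for the example.

```python
import numpy as np

class ForcePID:
    """Operational-space force feedback in the spirit of Eq. (42); gains are placeholders."""

    def __init__(self, kp, kv, ki, dim=6):
        self.kp, self.kv, self.ki = kp, kv, ki
        self.integral = np.zeros(dim)

    def command(self, f_meas, f_meas_dot, f_des, dt):
        err = f_meas - f_des
        self.integral += err * dt
        # proportional, derivative, and integral terms of the force error
        return -self.kp * err - self.kv * f_meas_dot - self.ki * self.integral

pid = ForcePID(kp=0.8, kv=0.05, ki=2.0)
f_cmd = pid.command(f_meas=np.array([0, 0, 12.0, 0, 0, 0]),
                    f_meas_dot=np.zeros(6),
                    f_des=np.array([0, 0, 10.0, 0, 0, 0]), dt=0.001)
print(f_cmd)
```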

Corollary 3 (Dynamically consistent prioritized null space matrix). The following matrix fulfills the prioritization constraints:

$$N_{prec(k)} = I - \sum_{i=1}^{k-1} \overline{J}_{i|prec(i)}\, J_{i|prec(i)}, \qquad (43)$$

where $\overline{J}_{i|prec(i)}$ is the dynamically consistent generalized inverse of the prioritized Jacobian. Moreover, we use the conventions $N_{prec(1)} \triangleq I$ and $N_{1|prec(1)} \triangleq N_1 = I - \overline{J}_1 J_1$.
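The sketch below composes whole-body torques following the recursion of Equations (33)-(37) and the null space construction of Equation (43). To stay short it works directly with constrained task Jacobians expressed in actuated-joint space and omits the Coriolis, gravity, and higher-priority compensation terms of Equations (38)-(40), so it should be read as a structural illustration under those simplifying assumptions rather than the full controller.

```python
import numpy as np

def dyn_pinv(J, A_inv):
    """Dynamically consistent generalized inverse of J."""
    return A_inv @ J.T @ np.linalg.pinv(J @ A_inv @ J.T)

def prioritized_torques(tasks, A, gamma_posture):
    """Compose torques as in Eqs. (33)-(37) with the null spaces of Eq. (43).

    tasks : list of (J_k, F_k) pairs ordered from highest to lowest priority,
            with J_k a constrained task Jacobian in actuated-joint space.
    A     : inertia matrix seen at the actuated joints (a simplification of this sketch).
    """
    n = A.shape[0]
    A_inv = np.linalg.inv(A)
    accumulated = np.zeros((n, n))       # running sum of J_bar_i|prec(i) J_i|prec(i)
    gamma = np.zeros(n)
    N_prec = np.eye(n)                   # N_prec(1) = I
    for J_k, F_k in tasks:
        J_prec = J_k @ N_prec                        # prioritized Jacobian (Eq. 36)
        gamma += J_prec.T @ F_k                      # task torque contribution (Eq. 37)
        accumulated += dyn_pinv(J_prec, A_inv) @ J_prec
        N_prec = np.eye(n) - accumulated             # Eq. (43)
    return gamma + N_prec.T @ gamma_posture          # residual redundancy for posture (Eq. 35)

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
tasks = [(rng.standard_normal((3, n)), np.array([0.0, 0.0, 9.8])),    # e.g. a CoM objective
         (rng.standard_normal((6, n)), rng.standard_normal(6))]       # e.g. a hand objective
print(prioritized_torques(tasks, A, gamma_posture=np.zeros(n)))
```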

3.3 Realtime Handling of Dynamic Constraints

The goal of this section is to describe the implementation of prioritized torque control strategies to reactively handle dynamically changing geometric constraints. The use of prioritization provides an alternative to high-dimensional path planning strategies, which scale poorly to highly redundant systems. Prioritization enables responding in realtime to dynamic constraints while optimizing the desired actions. It also prevents conflicting task objectives from being accomplished if they violate geometric constraints acting on the robot. The constraint models described here are not meant to replace the need for path planning, but to complement planners in behaviors that do not require global optimization. Therefore, with the methods we describe here, whole-body control is reduced to planning the behavior of a few operational points regarding maneuvering and manipulation goals while other objectives, such as fast response to geometric and contact constraints, and postural behavior, are handled as reactive processes. Internal constraints such as self-collision and joint limit avoidance are especially relevant when generating goal-based behaviors. Our approach to handling self collisions relies on implementing repulsion fields on proximity points of nearby links, creating dual repulsion forces on link pairs. Self-collision constraints have been previously studied in the context of motion verification [24, 34]. However, our control approach goes further by providing support to modify the robot's pose in response to self collisions. To handle joint limits, our approach consists of locking joints before they reach their physical limits. Strategies to handle joint limit constraints date back to [37]. With the implementation of visual servoing techniques, joint limit prevention has recently regained importance [13, 41, 40]. Our methods extend these approaches to operate on full humanoid systems, exploiting the overall mechanical redundancy to accomplish the desired actions given the internal and external constraints. In contrast with previous methods, our approach relies on enforcing constraints as priority tasks while other operational tasks operate in the residual redundancy. This technique prevents operational tasks from violating constraints and provides a metric to determine task feasibility under the acting constraints. Obstacle avoidance constraints are handled reactively via repulsion fields against incoming obstacles. Obstacle avoidance techniques have been popular in the context of path relaxation [32, 7, 4], reactive control [39, 26, 5], and collision-free paths [44, 9, 38, 35, 36, 33].

Fig. 8. Handling joint limit constraints: (a) The robot's right hand is commanded to move towards the drill. (b) A constraint handling task is activated to prevent the right elbow from reaching its physical limit. In turn, the reaching task is projected into the null space of the constraint. (c) The robot reaches the drill while complying with the elbow constraint. When a new hand command is issued, the elbow joint is unlocked.

Our techniques enhance and complement previous reactive and non-reactive techniques. Realtime response to motion constraints has been extensively addressed as a secondary process. In contrast, our approach consists of handling motion constraints as priority tasks and executing operational tasks in the null space that complies with the constraints. To illustrate our approach, we consider the whole-body behavior illustrated in Figure 8. The task decomposition to enforce the constraints while executing the behavior outlined in Table 1 is given below.

Proposition 1 (Example of constraint-consistent whole-body control). The following control structure creates a whole-body behavior that fulfills constraints while preventing lower priority tasks from violating them:

$$\Gamma = \Gamma_{jlimits} + \Gamma_{scollision|p(2)} + \Gamma_{balance|p(3)} + \Gamma_{reaching|p(4)} + \Gamma_{gaze|p(5)} + \Gamma_{posture|p(6)} + \Gamma_{internal|p(7)}. \qquad (44)$$

Here the subscripts $\{taskname|p(priority)\}$ indicate the priority order.

Fig. 9. Obstacle avoidance illustration: When an incoming obstacle approaches the robot's body, a repulsion field is applied to the closest point on the robot's body. As a result, a safety distance is enforced to avoid the obstacle.

As shown in Figure 8, implementing this ordering enables the right arm to stretch to reach the object, even when the right elbow joint is locked at the joint limit. The task becomes unfeasible when the current configuration of the robot uses all the dynamic redundancy to fulfill the constraints. Projecting operational tasks into the null space of the constraints is not only used to prevent constraint violations but also provides a metric to measure task feasibility under the constraints. Monitoring task feasibility is used as a mechanism to change behavior at runtime in response to dynamic constraints. Control structures similar to the one shown in the above proposition can be used to handle obstacle constraints as priority tasks. To handle obstacles we apply repulsion fields in the direction of the approaching objects, as shown in Figure 9.
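One common way to realize the repulsion fields described above is a distance-based potential acting on the closest point between the robot and the obstacle (or between two link pairs, for self collisions). The chapter does not give the exact field, so the form, safety distance, and gain below are assumptions chosen for illustration.

```python
import numpy as np

def repulsion_force(p_body, p_obstacle, d_safe=0.25, k_rep=40.0):
    """Distance-based repulsion field applied at the closest point on the robot's body.

    Returns a force pushing the body point away from the obstacle once it enters
    the safety distance d_safe; zero outside. Gain and distance are placeholders."""
    diff = p_body - p_obstacle
    d = np.linalg.norm(diff)
    if d >= d_safe or d == 0.0:
        return np.zeros(3)
    # magnitude grows as the obstacle penetrates the safety margin
    return k_rep * (d_safe - d) * diff / d

print(repulsion_force(np.array([0.4, 0.0, 1.0]), np.array([0.55, 0.0, 1.0])))
```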

Fig. 10. Task feasibility during a reaching task: The robot attempts to reach the object until no more dynamic redundancy is available under the motion constraints. At this point the Jacobian of the reaching task, analyzed through Singular Value Decomposition, becomes singular. The singular directions determine the uncontrollable space (shown in red).

3.4 Task Feasibility

Task feasibility can be measured by evaluating the condition number of the prioritized Jacobians, i.e.

$$\kappa(J_{k|prec(k)}) \triangleq \frac{\sigma_1(J_{k|prec(k)})}{\sigma_{dim(k)}(J_{k|prec(k)})}, \qquad (45)$$

where $\kappa(\cdot)$ represents the condition number and $\sigma_i(\cdot)$ corresponds to the i-th singular value of the prioritized Jacobian. Let's look at the example of Figure 10. The robot stretches the arm and body to move toward the object. Joint limits at the hip and arm, and balance constraints, take up the available movement redundancy. As a result, the prioritized Jacobian associated with the reaching task becomes singular.

3.5 Control of Contact Centers of Pressure and Internal Tensions/Moments

In this section, I will summarize my recent collaborative work on internal force control [55]. In this work, a controller that governs the positions of contact centers of pressure and regulates internal tensions and normal moments to contact

surfaces is described. This controller is integrated with the framework for whole-body prioritized control described earlier, unifying the control of center of mass maneuvers, operational tasks, and internal forces between contact closed loops. The space of internal forces consists of torques that have no effect on the robot's motion, and can therefore be derived by analyzing the right-hand side of Equation (7). This condition leads to the following constraint on the torques:

$$(U N_s)^T\, \Gamma = 0, \qquad (46)$$

which, when plugged into (7), results in zero generalized accelerations. Therefore, the torque manifold that fulfills the above constraint belongs to the null space of $(U N_s)$, defined by the projection

$$L^* \triangleq I - (U N_s)\, \overline{U N_s} \;\in\; \mathbb{R}^{n \times n}, \qquad (47)$$

where we use the symbol $L$ to denote contact closed loops, and the superscript $*$ to indicate that the projection operates in contact space. The torques associated with internal forces are those that do not contribute to net movement, i.e.

$$\Gamma = L^{*T}\, \Gamma_{int}, \qquad (48)$$

where $\Gamma_{int}$ denotes the torque command to control internal forces and moments. Notice that plugging the above torques into the right-hand side of (7) cancels out $\Gamma_{int}$, fulfilling the cancellation of acceleration effects. We integrate the above structure with our prioritized controller discussed in Equation (35), leading to the unified torque structure

$$\Gamma = \sum_{k=1}^{n_t} J_{k|prec(k)}^T\, F_{k|prec(k)} + N_t^T\, \Gamma_{posture} + L^{*T}\, \Gamma_{int}. \qquad (49)$$
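A small numerical check of the projection in Equations (47)-(49) is sketched below: torques of the form $L^{*T}\Gamma_{int}$ should map to zero through $(U N_s)^T$ and therefore produce no net accelerations. A synthetic rank-deficient matrix stands in for $U N_s$ (as in the multi-contact case with closed loops), and the dimensions and random data are assumptions of this sketch.

```python
import numpy as np

def dyn_pinv(J, A_inv):
    """Dynamically consistent generalized inverse."""
    return A_inv @ J.T @ np.linalg.pinv(J @ A_inv @ J.T)

def internal_torque_projection(UNs, A_inv):
    """L* = I - (U N_s) bar(U N_s): torques of the form L*^T gamma_int satisfy Eq. (46)."""
    return np.eye(UNs.shape[0]) - UNs @ dyn_pinv(UNs, A_inv)

rng = np.random.default_rng(2)
n_act, n_tot = 6, 12
M = rng.standard_normal((n_tot, n_tot))
A_inv = np.linalg.inv(M @ M.T + n_tot * np.eye(n_tot))
# rank-deficient stand-in for U N_s, as happens with contact closed loops
UNs = rng.standard_normal((n_act, 4)) @ rng.standard_normal((4, n_tot))
L_star = internal_torque_projection(UNs, A_inv)
gamma = L_star.T @ rng.standard_normal(n_act)     # internal-force torques of Eq. (48)
print(np.linalg.norm(UNs.T @ gamma))              # ~0: the constraint of Eq. (46) holds
```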

In [55], we formulated the relationship between internal forces and moments with respect to contact forces as

$$F_{int} \triangleq \begin{bmatrix} f_t \\ m_{cop} \\ m_n \end{bmatrix} = W_{int}\, F_r \;\in\; \mathbb{R}^{6(n_s - 1)}, \qquad (50)$$

where $F_r$ is the cumulative vector of contact forces and moments described in Equation (19), $W_{int}$ is the operator in the grasp matrix shown in Equation (20) that maps contact forces into internal force/moment variables, $f_t$ is the $3(n_s - 2)$ dimensional vector of tension forces, $m_{cop}$ is the $2n_s$ dimensional vector of tangential moments computed at the desired center of pressure points, and $m_n$ is the $n_s$ dimensional vector of normal moments to the contact surfaces. The above definition of internal forces implies defining a virtual linkage model anchored at the contact CoP locations, as described earlier in the chapter. We discussed the need

to select contact CoP locations as close as possible to the geometric centers of the contact surfaces to avoid unwanted rotations along edges and corners of the supporting links. To ensure that these locations become the actual CoPs we neutralize the CoP moments at these points, i.e. $m_{cop} = 0$. We focus our attention on the problem of choosing internal forces that fulfill the contact constraints. As we mentioned earlier, we control contact CoPs to be located close to the contact geometric centers. The choice of CoPs defines the anchors of the virtual linkage model and therefore determines the form of the grasp/contact matrix. Once $G$ is defined, we use the boundaries defined in Equations (28)-(30) to obtain feasible values of CoM and internal forces by means of Equation (25). In turn, the feasible solutions correspond to the range of values of $F_{com}$ and $F_{int}$ that fulfill the contact constraints. The problem of choosing internal and CoM force control policies in optimal ways is a subject we are currently researching and therefore is not discussed here in detail. The next step consists of implementing a controller that regulates internal force behavior to desired values, i.e.

$$F_{int} \longrightarrow F_{int,ref} \triangleq \begin{bmatrix} f_{t,ref} \\ [0]_{2n_s \times 1} \\ m_{n,ref} \end{bmatrix}, \qquad (51)$$

where $f_{t,ref}$ and $m_{n,ref}$ are desired internal force values, obtained either through optimization processes or chosen manually. To do that, in [55] we designed a new torque controller that can directly manipulate internal force behavior. Plugging Equation (49) into Equation (6) and using (50) we obtain the equality

$$F_{int} = \overline{J}_{i|l}\, \Gamma_{int} + F_{int,\{t,p\}} - \mu_i - p_i, \qquad (52)$$

where

$$\overline{J}_{i|l} \triangleq \left( L^*\, U\, \overline{J}_s\, W_{int}^T \right)^T \qquad (53)$$

is a transformation matrix from torques to forces,

$$F_{int,\{t,p\}} \triangleq W_{int}\, \overline{J}_s^T\, U^T \left[ \sum_{k=1}^{n_t} J_{k|prec(k)}^T\, F_{k|prec(k)} + N_t^T\, \Gamma_{posture} \right] \qquad (54)$$

are the forces induced by the task and postural behavior with the torques shown in Equation (49), and $\mu_i$ and $p_i$ are Coriolis/centrifugal and gravity terms defined as

$$\mu_i \triangleq W_{int}\, \mu_r, \qquad (55)$$

$$p_i \triangleq W_{int}\, p_r. \qquad (56)$$

Inverting Equation (52) we obtain the following internal force torque controller:

$$\Gamma_{int} = \widehat{J}_{i|l} \left( F_{int,ref} - F_{int,\{t,p\}} + \mu_i + p_i \right), \qquad (57)$$

where $\widehat{J}_{i|l}$ is a Moore-Penrose left pseudo-inverse of (53) and the subscript $\{i|l\}$ denotes internal quantities operating in the space of contact closed loops. Plugging the above expression into (52), and provided that $\overline{J}_{i|l}$ is full row rank, we obtain the linear equality

$$F_{int} = F_{int,ref}. \qquad (58)$$

To ensure that $\overline{J}_{i|l}$ is full row rank, $L^*$ needs to span all internal force and moment quantities. This applies if there are at least six independent mechanical joints separating each contact extremity from the base link. A second required condition is to ensure that $W_{int}$ defines independent internal quantities. This is already ensured in our derivation of the virtual linkage model. The term $\overline{J}_{i|l}$ might be interpreted as a forward kinematic quantity mapping infinitesimal displacements of the joint positions to infinitesimal displacements of the virtual linkage, i.e.

$$\delta x_{int} = \overline{J}_{i|l}\, \delta q. \qquad (59)$$

Equation (57) will work in open loop, but to achieve accurate tracking of internal forces under actuator friction and mechanical modeling errors, a feedback force control law involving proportional-integral-derivative (PID) feedback is needed. Given an appropriate choice of the control law, the above linear relationship will ensure convergence to the desired internal force behaviors. The above control structure provides a dynamically correct internal force controller that has no coupling effects on the operational tasks, center of mass maneuvers, and postural behavior, hence enabling the efficient control of whole-body multi-contact interactions. It provides the support to simultaneously control the positions of multiple contact centers of pressure and the internal tensions and normal moments acting between contact closed loops. A block diagram of the overall whole-body torque controller with internal force commands is shown in Figure 11.
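The sketch below evaluates the open-loop internal force torque command of Equation (57), assuming the quantities $\overline{J}_{i|l}$, $F_{int,\{t,p\}}$, $\mu_i$, and $p_i$ have already been computed; here a random matrix of the right shape stands in for Equation (53) and the bias terms are set to zero. A PID term as discussed above would be added on top for closed-loop tracking.

```python
import numpy as np

def internal_force_torques(J_il_bar, F_int_ref, F_int_tp, mu_i, p_i):
    """Open-loop internal force controller of Eq. (57):
    gamma_int = pinv(J_il_bar) (F_int_ref - F_int_{t,p} + mu_i + p_i)."""
    return np.linalg.pinv(J_il_bar) @ (F_int_ref - F_int_tp + mu_i + p_i)

# toy dimensions: n = 10 actuated joints, n_s = 3 contacts -> 6*(n_s - 1) = 12 internal quantities
rng = np.random.default_rng(3)
n, n_int = 10, 12
J_il_bar = rng.standard_normal((n_int, n))      # stand-in for the matrix of Eq. (53)
gamma_int = internal_force_torques(J_il_bar,
                                   F_int_ref=np.zeros(n_int),
                                   F_int_tp=rng.standard_normal(n_int),
                                   mu_i=np.zeros(n_int),
                                   p_i=np.zeros(n_int))
print(gamma_int.shape)      # (n,): joint torques realizing the internal force reference
```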

4 Simulation Results

We study various examples on a simulated model of the child-size humanoid robot Asimo. The objective of Asimo is to operate in offices and homes, assisting humans in a variety of service and caregiving chores. Recently, my colleagues at Stanford have developed a research version of Asimo that uses torque control commands [31, 67]. Torque controlled robots have the ability to execute whole-body compliant behaviors, which is needed to operate efficiently and safely in human environments where contact interactions occur often. A dynamic simulation environment [8] and a contact solver based on propagation of forces and impacts [52] are used to simulate the execution of the methods described in this chapter. The whole-body controller described in Equation (49) is implemented on a task-based software environment that enables the online optimization of whole-body behaviors.

Fig. 11. Whole-body torque controller diagram: Block (a) represents the decisions made by a high-level planning module such as the decision module presented in [47]. Block (b) outlines the information contained in behavioral entities representing the desired skills, here depicted for a multi-phase multi-contact behavior. These entities are defined by state machines where the states consist of action primitives. Behavior primitives receive information from a sensor-based database which is used to update the action states and their representations. Block (c) shows the description of action primitives as collections of task objects organized according to a desired priority ordering. The action primitive shown above corresponds to one of the states of the desired multi-contact behavior. Block (d) describes task objects as entities containing coordinate representations, differential kinematic representations, goal-based potential functions, and force-based control policies. Block (e) represents a system currently under development that is used to automatically solve internal force behavior given a desired action representation. Block (g) represents our prioritized torque-based controller, which uses the previous behavior representations and control policies to create whole-body control torques. Block (h) represents the estimated dynamic behavior of the underactuated robot under multi-contact constraints in response to the previous torque commands.

Fig. 12. Compliant multi-contact behavior: A four-contact stance is shown here. Contact centers of pressure are controlled to stay at the centers of the bodies in contact. The center of mass is controlled to remain at a fixed location. The table is pivoted rapidly by an external user to challenge the pose. To add additional difficulty, the internal tension between the left hand and the right foot is commanded to track a sinusoidal trajectory. All other tensions and normal moments are commanded to become null. The robot was able to track all tension behaviors effectively, with errors around 0.3 N. CoM positions were tracked with error values below 0.1 millimeters.

on a task-based software environment that enables the online optimization of whole-body behaviors. Using this environment we create various behaviors involving biped and multi-contact stances as well as operational task interactions and center of mass maneuvers.

4.1 Multi-contact behavior

In the example shown in Figure 12, we study a compliant behavior that emerges from controlling internal forces and moments. The robot is commanded to establish a four-contact stance leaning against a pivoting table. We implement a multi-contact behavior that involves the following phases: (1) moving the robot's hands and its center of mass towards a table, (2) establishing four contacts, (3) perturbing the tabletop position by applying random external interactions, and (4) controlling the internal forces and moments to respond compliantly to the variations of the table. This sequence of actions is accomplished using a state machine where each state involves optimizing multiple low-level task objectives. To achieve the desired global behavior we simultaneously control center of mass maneuvers, operational tasks, and postural stance as indicated in Proposition 1, as well as the internal force policy defined in Equation (57). Using Equation (18), we construct two virtual linkage models for the phases involving two and four contacts. For simplicity, all tension forces and normal moments are controlled to become null, except for one internal tension: to demonstrate force tracking at the internal level, the tension between the left hand and the right foot is commanded to track a sinusoidal trajectory. At the same time, a user interacts with the pivoting table, moving it up and down in arbitrary fast motions. The position of contact CoPs on hands and feet is chosen to be at the geometric center of the extremities in contact. During the bipedal phase, the center of mass is commanded to stay between the feet while moving the hands towards the table, with CoM and CoP positions controlled simultaneously. In the phases with four contacts, the center of mass is commanded to maneuver towards the table by tracking a trajectory that fulfills the contact stability constraints defined by the boundaries of Equations (28-30). A feedback controller to track this trajectory is implemented using force commands in CoM space. Postural behavior is controlled by optimizing a criterion that minimizes the distance with respect to pre-recorded human postures, using a method similar to the one described in [12]. Because contact CoPs are commanded to stay at the center of the extremities in contact, the hands respond compliantly to table movement, remaining flat against the surface to maintain the desired CoP positions. The accompanying data graphs show tangential and normal moments, the tension between the left hand and the right foot, and the sagittal position of the CoM. The tracking error for the internal tension is small, with a maximum value around 0.3 N. This error is mainly caused by the unmodeled movement of the table. As we recall, our framework assumes that the table is static, which is implied in Equation (5). However, because the table undergoes fast accelerations, the model is inaccurate.
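To make the phase structure concrete, the following minimal sketch encodes the behavior as a state machine whose states are action primitives, i.e. ordered lists of prioritized task objectives. The state names, task labels, and transition predicates are illustrative placeholders rather than the actual software entities.

```python
# Minimal sketch of the multi-contact behavior as a state machine.
# Each state is an action primitive: an ordered (prioritized) list of
# task objectives. Task names and transition predicates are hypothetical.

PHASES = {
    "reach_table":   ["contact_constraints", "com_between_feet",
                      "hand_motion_to_table", "posture"],
    "four_contacts": ["contact_constraints", "contact_cops",
                      "internal_forces_null", "com_toward_table", "posture"],
    "comply":        ["contact_constraints", "contact_cops",
                      "internal_tension_sine", "com_hold", "posture"],
}

TRANSITIONS = {
    "reach_table":   ("hands_in_contact", "four_contacts"),
    "four_contacts": ("com_shift_done", "comply"),
    "comply":        (None, None),  # terminal phase
}

def step_behavior(state, sensor_flags):
    """Advance the state machine given boolean sensor predicates."""
    predicate, next_state = TRANSITIONS[state]
    if predicate is not None and sensor_flags.get(predicate, False):
        return next_state, PHASES[next_state]
    return state, PHASES[state]

# Example usage:
state = "reach_table"
state, tasks = step_behavior(state, {"hands_in_contact": True})
print(state, tasks)  # -> four_contacts and its prioritized task list
```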

Fig. 13. Response to rapidly moving obstacles: An interactive behavior that responds to dynamically moving obstacles is simulated. Obstacles are handled as priority tasks with repulsion fields. Center of mass maneuvers, hand behavior, and postural behavior are controlled as lower-priority tasks in the order listed. When the obstacle approaches the body (b), the posture changes since it operates with lower priority. In (a), the obstacle approaches the hand, which is controlled to remain at a fixed location. A conflict between objectives takes place, and as a result the hand task becomes unfeasible.

Despite this inaccuracy, the tracking behavior is very good. In contrast, if the tabletop remains at a fixed location, the tracking error converges to zero (not shown here).

4.2 Realtime response to dynamic constraints

Let us briefly review another example demonstrating the response to rapidly moving obstacles, as shown in Figure 13. More details on this example can be found in [53]. This behavior is accomplished using the controllers described in the section on geometric constraints. The graphs depict the response when the obstacle approaches the hand, which is implementing an operational position task to remain fixed in place. Because the obstacle operates with highest priority, the desired safety distance is maintained. The prioritized Jacobian of the hand task becomes singular, and a planner uses this condition to remove the hand task from the controller. When the obstacle approaches the head, it is avoided using the whole-body redundancy. Stable balance is maintained at all times.
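A hedged sketch of such a feasibility test is shown below: the prioritized Jacobian of the hand task is monitored through its smallest singular value, and the task is flagged for removal when that value drops below a threshold. The threshold value and function name are assumptions for illustration, not the planner actually used.

```python
import numpy as np

def task_is_feasible(J_task_prioritized, sigma_min_threshold=1e-3):
    """Return False when the prioritized task Jacobian is (near) singular,
    i.e. when the task conflicts with higher-priority objectives such as
    obstacle avoidance and can no longer be executed."""
    sigma = np.linalg.svd(J_task_prioritized, compute_uv=False)
    return sigma.min() > sigma_min_threshold

# A planner could use this test to remove the hand task from the controller
# while the obstacle occupies the hand's goal region, and reinsert the task
# once feasibility is recovered.
```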

4.3 Dual arm manipulation

A third example, shown in Figure 14 and involving dual arm manipulation, is briefly reviewed. The Asimo version we have at Stanford has only four degrees of freedom per arm. Therefore, the hands have been removed and replaced by cylinders to engage in point-contact manipulation. The action primitive we use for this behavior is depicted below in order of priorities:
Task Primitive                  Coordinates               Control Policy
Augmented object position      Cartesian                 Proportional-derivative
Augmented object orientation   Cartesian                 Proportional-derivative
Dual hand tension              Tension                   Integral
Whole-body posture             Generalized coordinates   Proportional-derivative
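Purely as an illustration, this primitive could be encoded in software as an ordered list of task descriptors, for example as sketched below; the field names and the descriptor structure are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TaskDescriptor:
    name: str            # task identifier
    coordinates: str     # coordinate representation
    control_policy: str  # feedback law used to generate task forces

# Prioritized action primitive for dual-arm point-contact manipulation,
# listed from highest to lowest priority (field names are illustrative).
DUAL_ARM_PRIMITIVE = [
    TaskDescriptor("augmented_object_position",    "Cartesian",               "proportional-derivative"),
    TaskDescriptor("augmented_object_orientation", "Cartesian",               "proportional-derivative"),
    TaskDescriptor("dual_hand_tension",            "tension",                 "integral"),
    TaskDescriptor("whole_body_posture",           "generalized coordinates", "proportional-derivative"),
]
```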

An augmented object position task defines the Cartesian position of the object with respect to both hands. Therefore, the coordinate representation and the associated Jacobian for a task that controls the object position are

\[ x_{\text{object pos}} \triangleq \tfrac{1}{2}\,\bigl(p_{rh} + p_{lh}\bigr), \qquad (60) \]
\[ J_{\text{object pos}} \triangleq \tfrac{1}{2}\,\bigl(J_{rh} + J_{lh}\bigr), \qquad (61) \]

where $p_{rh}$ and $p_{lh}$ are the positions of the right and left hands respectively, and $J_{rh}$ and $J_{lh}$ are the associated Jacobians. An augmented object orientation task defines the 2D orientation that can be realized with two point contacts. A virtual frame is defined with the $y$ axis aligned with the line connecting the two hands and the $z$ axis pointing upwards, perpendicular to the gravity vector. Using point contacts, only the orientations with respect to the frame's $x$ and $z$ axes are controllable. Therefore, the coordinate representation and the associated Jacobian for an orientation task of that type are

\[ x_{\text{object ori}} \triangleq \lambda_{\text{virtual frame}}, \qquad (62) \]
\[ J_{\text{object ori}} \triangleq {}^{0}R_{v}\, S_{xz}\, {}^{v}R_{0}\, Q_{rl}\, J_{rl}. \qquad (63) \]

Here $\lambda_{\text{virtual frame}}$ is the quaternion associated with the orientation of the virtual frame connecting the two hands. Since the orientation of the frame is roughly defined by the position of the hands, the Jacobian of the orientation task involves only the cumulative position rows of the Jacobians of the right and left hands, $J_{rl}$, i.e. ignoring the rotation components. In the above equation, the operator $Q_{rl}$ defines the cross-product operator of the right and left hand positions, i.e.

\[ Q_{rl} \triangleq \tfrac{1}{2} \begin{pmatrix} \hat{p}_{rh} & \hat{p}_{lh} \end{pmatrix}, \qquad (64) \]

where the operator $\hat{(\,\cdot\,)}$ indicates the cross product. Because the $y$ orientation of the task frame is not controllable, we rotate the quantity $(Q_{rl} J_{rl})$ in the Jacobian

Fig. 14. Dual arm manipulation: This sequence is taken from a simulation of a dual arm point-manipulation behavior. The position and 2D orientation of the object are defined and controlled using PD tracking control. A tension task defined between the points of contact is controlled to grasp the object efficiently. A posture control law optimizes the configuration of the residual redundancy.

of Equation (63) to align with the virtual orientation frame using the rotation matrix from global to virtual frames, ${}^{v}R_{0}$, and we eliminate by selection the $y$ direction using the matrix $S_{xz}$. Finally, we rotate the Jacobian back to the global frame using ${}^{0}R_{v}$ to unify the task representation with respect to the global frame. Although the orientation task is composed of three coordinates, removing the $y$ coordinate in the Jacobian will cause our whole-body controller to optimize only the two controllable orientation directions while discarding the non-controllable direction. To control both the position and orientation of the object we use simple PD controllers. In these controllers, the angular velocity of the object is needed and can be determined using the expressions

\[ \omega \triangleq \tfrac{1}{2}\,\bigl(\omega_{rh} + \omega_{lh}\bigr), \qquad (65) \]
\[ \omega_{rh} = \frac{\hat{p}_{rh}\, v_{rh}}{\|p_{rh}\|}, \qquad \omega_{lh} = \frac{\hat{p}_{lh}\, v_{lh}}{\|p_{lh}\|}, \qquad (66) \]

where the $\omega_{i}$'s correspond to angular velocities and the $v_{i}$'s correspond to linear velocities. Moreover, the operator $\hat{(\,\cdot\,)}$ indicates the cross product.
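The sketch below assembles Equations (60)-(66) numerically; the skew-symmetric helper, the selection-matrix layout, and the argument conventions (hand position Jacobians of size 3 x n, and R_0v denoting the rotation from the virtual frame to the global frame) are assumptions made for illustration, not the actual implementation.

```python
import numpy as np

def skew(p):
    """Cross-product (skew-symmetric) matrix: skew(p) @ v == np.cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def augmented_object_tasks(p_rh, p_lh, J_rh, J_lh, R_0v):
    """Position and orientation Jacobians of the two-hand augmented object.

    p_rh, p_lh : hand positions (3,); J_rh, J_lh : hand position Jacobians (3, n)
    R_0v       : rotation from the virtual frame to the global frame (3, 3)
    """
    # Eq. (60)-(61): augmented object position task.
    J_obj_pos = 0.5 * (J_rh + J_lh)

    # Eq. (63)-(64): orientation task; the y row is zeroed in the virtual frame.
    J_rl = np.vstack((J_rh, J_lh))                        # stacked position Jacobians (6, n)
    Q_rl = 0.5 * np.hstack((skew(p_rh), skew(p_lh)))      # cross-product operator (3, 6)
    S_xz = np.diag([1.0, 0.0, 1.0])                       # keep x and z, drop y
    J_obj_ori = R_0v @ S_xz @ R_0v.T @ Q_rl @ J_rl        # (3, n), non-controllable row zeroed
    return J_obj_pos, J_obj_ori

def object_angular_velocity(p_rh, v_rh, p_lh, v_lh):
    """Eq. (65)-(66) as written: object angular velocity estimated from hand motion."""
    w_rh = np.cross(p_rh, v_rh) / np.linalg.norm(p_rh)
    w_lh = np.cross(p_lh, v_lh) / np.linalg.norm(p_lh)
    return 0.5 * (w_rh + w_lh)
```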

To grasp the object using forces, we simultaneously implement a dual hand tension task, which we define through the differential forces acting along the line that connects the points of contact. The Jacobian that represents this task is

\[ J_{\text{tension}} \triangleq S_{y}\, {}^{v}R_{0}\, \Delta_{rl}\, J_{rl}, \qquad (67) \]

with

\[ \Delta_{rl} \triangleq \begin{pmatrix} I_{3\times3} & -I_{3\times3} \end{pmatrix} \qquad (68) \]

representing a differential operator of the coordinates of both hands. The matrix $S_{y}$ selects only the $y$ dimension along the line connecting the robot hands. The control command that we use for the tension task is purely integral, i.e.

\[ F_{\text{tension}} = K_{i} \int_{0}^{t} \bigl(F_{k} - F_{\text{des}}\bigr)\, dt, \qquad (69) \]

where $F_{k}$ is the force read by the sensor and $F_{\text{des}}$ is the desired force. As we can see in Figure 14, we succeeded in tracking both the position and orientation trajectories of the object while regulating the tension to the desired value. In this experiment, object trajectories were provided by a user using keyboard commands.
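As a final sketch, the code below assembles the tension Jacobian of Equations (67)-(68) and the integral force law of Equation (69); the selection-matrix layout, the gain, and the discrete integration step are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def tension_jacobian(J_rh, J_lh, R_0v):
    """Eq. (67)-(68): Jacobian of the differential force along the hand-to-hand line.

    J_rh, J_lh : hand position Jacobians (3, n)
    R_0v       : rotation from the virtual frame to the global frame (3, 3)
    """
    J_rl = np.vstack((J_rh, J_lh))                  # stacked position Jacobians (6, n)
    Delta_rl = np.hstack((np.eye(3), -np.eye(3)))   # differential operator (3, 6)
    S_y = np.array([[0.0, 1.0, 0.0]])               # select the virtual frame's y axis
    return S_y @ R_0v.T @ Delta_rl @ J_rl           # (1, n)

def tension_integral_command(f_meas, f_des, err_integral, k_i=2.0, dt=0.001):
    """Eq. (69) as written: purely integral force law, discretized with step dt."""
    err_integral = err_integral + (f_meas - f_des) * dt
    return k_i * err_integral, err_integral
```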

Conclusion and Discussion

In this chapter, I have summarized some of my contributions and collaborative work in the area of whole-body compliant control of humanoid robots. Developing whole-body torque control methods addresses several key aspects for advanced humanoid robot interactions, including compliant behaviors, handling of geometric and contact constraints, unified motion and force control, prioritized multi-objective optimization, and skill representation, among others. Prioritization is a mechanism that replaces the need for search-based or global optimization methods in the generation of several types of behaviors: (1) handling of geometric and contact constraints, (2) exploiting the available redundancy to execute operational tasks, and (3) exploiting the available redundancy to optimize postural performance. Using prioritized compliant control enables fast adaptation to contact interactions and fast response to dynamically changing geometric constraints. By identifying the set of internal forces and moments between contact closed loops, we enable the characterization and control of the complex multi-contact interactions of the robot with the environment. By enabling the precise control of contact centers of pressure, we create compliant contact behaviors, and by placing contact CoPs near the center of the bodies in contact we prevent unwanted rotations along contact edges. By characterizing the behavior of internal tensions and moments, as well as the behavior of the robot's center of mass with respect to contact reaction forces, we provide tools to plan whole-body interactions and maneuvers that satisfy frictional constraints. Other methods solely based on ZMP

modeling disregard the local interactions between contact bodies, hindering the ability to comply with contact constraints and to create compliant contact behaviors. Our methods are dynamically correct, enabling the simultaneous control of operational tasks, center of mass maneuvers, postural behaviors, and internal force behaviors with high performance. We have demonstrated these capabilities through whole-body multi-contact examples involving upper and lower extremities. Using dynamic modeling enables safe open-loop compliant interactions. It also enables the simultaneous control of multiple task points with high accuracy and without introducing coupling effects between task objectives. Moreover, dynamic modeling of the interactions between contact closed loops allows us to derive internal force controllers that operate simultaneously with motion objectives. The controllers developed here exhibit strong robustness with respect to model errors, including errors in link masses and their associated center of mass positions. In the last years, I have co-developed with colleagues at Stanford a realtime embedded software architecture that implements the methodology described in this chapter. This software has been adapted to control a research version of the humanoid robot Asimo and several robotic arms, including the PUMA and the Barrett WAM arms. To handle the computational complexity of the models, the architecture is organized in multiple processes running at different rates. A servo process implements the feedback force and position controllers and runs at fast rates. The model process computes the complex dynamic and prioritized operators and runs at slower speeds than the servo process. A layer to create complex skills as aggregations of task objectives is provided in this software in the form of behavioral entities. These behavioral abstractions are designed to be coordinated from high-level planning processes. Recently, we presented a position paper outlining our view for bridging the gap between semantic planning and continuous control of humanoid and mobile robots [47]. Future work includes the development of planners to dynamically maneuver using multi-contact interactions in unstructured terrains. The models of multi-contact and center of mass behavior we have developed in this chapter provide a powerful environment to sample potential contact transitions and feasible center of mass maneuvering trajectories. Another important area of research is compliant behaviors for multi-finger manipulation tasks. Here, the models of multi-contact interactions that I have presented can be extended for planning dexterous hand manipulation tasks.

Acknowledgements
Many thanks to Dr. Kensuke Harada, Dr. Eiichi Yoshida, and Dr. Kazuhito Yokoi for their effort to put this book together. Many thanks to Prof. Oussama Khatib, Dr. Jae-Heung Park, Dr. Roland Philippsen, Taizo Yoshikawa, and Francois Conti for their contribution and advice. Many thanks to Dr. Abderrahmane Kheddar for reviewing the manuscript. I am very grateful to Honda Motor Co. for supporting my research at Stanford during the last years.

References
1. K. Abderrahmane and A. Escande. Challenges in contact-support planning for acyclic motion of humanoids and androids. In International Symposium on Robotics, pages 7407455, Seoul, Korea, October 2008.
2. C.O. Alford and S.M. Belyeu. Coordinated control of two robot arms. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 468–473, March 1984.
3. T. Bretl and S. Lall. Testing static equilibrium for legged robots. IEEE Transactions on Robotics, 24(4):794–807, August 2008.
4. O. Brock and O. Khatib. Elastic strips: A framework for motion generation in human environments. International Journal of Robotics Research, 21(12):1031–1052, 2002.
5. R. Brooks. A robust layered control system for a mobile robot. International Journal of Robotics and Automation, 2(1):14–23, March 1986.
6. J. Buchli, F. Iida, and A.J. Ijspeert. Finding resonance: Adaptive frequency oscillators for dynamic legged locomotion. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
7. C.E. Buckley. The Application of Continuum Methods to Path Planning. PhD thesis, Stanford University, Stanford, USA, 1986.
8. K.C. Chang and O. Khatib. Operational space dynamics: Efficient algorithms for modeling and control of branching mechanisms. In Proceedings of the IEEE International Conference on Robotics and Automation, April 2000.
9. R. Chatila. Système de Navigation pour un Robot Mobile Autonome: Modélisation et Processus Décisionnels. PhD thesis, Université Paul Sabatier, Toulouse, France, 1981.
10. C. Collette, A. Micaelli, C. Andriot, and P. Lemerle. Robust balance optimization control of humanoid robots with multiple non-coplanar grasps and frictional contacts. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, USA, May 2008.
11. S. Collins, A. Ruina, R. Tedrake, and M. Wisse. Efficient bipedal robots based on passive-dynamic walkers. Science Magazine, 307(5712):1082–1085, 2005.
12. E. Demircan, L. Sentis, V. DeSapio, and O. Khatib. Human motion reconstruction by direct control of marker trajectories. In Advances in Robot Kinematics (ARK), 11th International Symposium, Batz-sur-Mer, France, June 2008. Springer.
13. B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313–326, 1992.
14. R. Featherstone. Robot Dynamics Algorithms. Kluwer Academic Publishers, Norwell, USA, 1987.
15. E.F. Fichter and E.D. McDowell. A novel design for a robot arm. In Advancements of Computer Technology, pages 250–256. ASME, 1980.
16. A.A. Frank. Control Systems for Legged Locomotion Machines. PhD thesis, University of Southern California, Los Angeles, USA, 1968.
17. H. Hanafusa, T. Yoshikawa, and Y. Nakamura. Analysis and control of articulated robot with redundancy. In Proceedings of IFAC Symposium on Robot Control, volume 4, pages 1927–1932, 1981.

18. K. Harada, S. Kajita, K. Kaneko, and H. Hirukawa. ZMP analysis for arm/leg coordination. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 75–81, Las Vegas, USA, October 2003.
19. K. Hirai, M. Hirose, Y. Haikawa, and T. Takenaka. The development of Honda humanoid robot. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 2, pages 1321–1326, Leuven, Belgium, 1998.
20. R. Holmberg and O. Khatib. Development and control of a holonomic mobile robot for mobile manipulation tasks. International Journal of Robotics Research, 19(11):1066–1074, 2000.
21. S-H. Hyon, J. Hale, and G. Cheng. Full-body compliant human-humanoid interaction: Balancing in the presence of unknown external forces. IEEE Transactions on Robotics, 23(5):884–898, October 2007.
22. A.J. Ijspeert, J. Buchli, A. Selverston, M. Rabinovich, M. Hasler, W. Gerstner, A. Billard, H. Markram, and D. Floreano, editors. Dynamical principles for neuroscience and intelligent biomimetic devices. Abstracts of the EPFL-LATSIS Symposium, Lausanne, Switzerland, 2006.
23. S. Kajita, F. Kanehiro, K. Kaneko, K. Fujiwara, K. Harada, K. Yokoi, and H. Hirukawa. Resolved momentum control: Humanoid motion planning based on the linear and angular momentum. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1644–1650, Las Vegas, USA, October 2003.
24. F. Kanehiro and H. Hirukawa. Online self-collision checking for humanoids. In Proceedings of the 19th Annual Conference of the Robotics Society of Japan, Tokyo, Japan, September 2001.
25. O. Khatib. Commande Dynamique dans l'Espace Opérationnel des Robots Manipulateurs en Présence d'Obstacles. PhD thesis, École Nationale Supérieure de l'Aéronautique et de l'Espace, Toulouse, France, 1980.
26. O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5(1):90–98, 1986.
27. O. Khatib. A unified approach for motion and force control of robot manipulators: The operational space formulation. International Journal of Robotics Research, 3(1):43–53, 1987.
28. O. Khatib. Object manipulation in a multi-effector robot system. In R. Bolles and B. Roth, editors, Robotics Research 4, pages 137–144. MIT Press, 1988.
29. O. Khatib, O. Brock, K.C. Chang, F. Conti, D. Ruspini, and L. Sentis. Robotics and interactive simulation. Communications of the ACM, 45(3):46–51, March 2002.
30. O. Khatib, L. Sentis, J.H. Park, and J. Warren. Whole body dynamic behavior and control of human-like robots. International Journal of Humanoid Robotics, 1(1):29–43, March 2004.
31. O. Khatib, P. Thaulaud, and J. Park. Torque-position transformer for task control of position controlled robots. Patent, November 2006. Patent Number: 20060250101.
32. B.H. Krogh. A generalized potential field approach to obstacle avoidance control. In Proceedings of the International Robotics Research Conference, Bethlehem, USA, August 1984.
33. J. Kuffner, K. Nishiwaki, S. Kagami, M. Inaba, and H. Inoue. Motion planning for humanoid robots. In Proceedings of the International Symposium of Robotics Research, Siena, Italy, October 2003.
34. J. Kuffner, K. Nishiwaki, S. Kagami, Y. Kuniyoshi, M. Inaba, and H. Inoue. Self-collision detection and prevention for humanoid robots. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2265–2270, Washington, USA, May 2002.

35. J.C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, Boston, USA, 1991.
36. J.P. Laumond and P.E. Jacobs. A motion planner for nonholonomic mobile robots. IEEE Transactions on Robotics and Automation, 10(5):577–593, 1994.
37. A. Liegois. Automatic supervisory control of the configuration and behavior of multibody mechanisms. IEEE Transactions on Systems, Man, and Cybernetics, 7:868–871, 1977.
38. T. Lozano-Perez. Spatial planning: a configuration space approach. IEEE Transactions on Computers, 32(2):108–120, 1983.
39. A.A. Maciejewski and C.A. Klein. Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments. International Journal of Robotics Research, 4(3):109–117, 1985.
40. N. Mansard and O. Khatib. Continuous control law from unilateral constraints. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, USA, May 2008.
41. E. Marchand and G. Hager. Dynamic sensor planning in visual servoing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 3, pages 1988–1993, Leuven, Belgium, May 1998.
42. T. McGeer. Passive dynamic walking. The International Journal of Robotics Research, 9(2):62–68, 1990.
43. J.-P. Merlet. Redundant parallel manipulators. Laboratory of Robotics and Automation, 8(1):17–24, February 1996.
44. H. Moravec. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. PhD thesis, Stanford University, Stanford, USA, 1980.
45. Y. Nakamura, H. Hanafusa, and T. Yoshikawa. Mechanics of coordinative manipulation by multiple robotic mechanisms. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 991–998, April 1987.
46. Olivier Stasse, Neo Ee Sian, Florent Lamiraux, Takeshi Sakaguchi, Abderrahmane Kheddar, and Kazuhito Yokoi. Architectures and models for humanoid robots in collaborative working environments. In International Symposium on Robotics, pages 354–359, 2008.
47. R. Philippsen, N. Negati, and L. Sentis. Bridging the gap between semantic planning and continuous control for mobile manipulation using a graph-based world representation. In Proceedings of the Workshop on Hybrid Control of Autonomous Systems, Pasadena, July 2009.
48. D.L. Pieper. The Kinematics of Manipulators Under Computer Control. PhD thesis, Stanford University, Stanford, USA, 1968.
49. M.H. Raibert, M. Chepponis, and H.B. Brown. Running on four legs as though they were one. IEEE Journal of Robotics and Automation, 2(2):70–82, June 1986.
50. M.H. Raibert and J.J. Craig. Hybrid position force control of manipulators. ASME J. Dyn. Sys. Measurement Contr., 103(2):126–133, 1981.
51. B. Roth, J. Rastegar, and V. Sheinmann. On the design of computer controlled manipulators. In First CISM IFToMM Symposium, pages 93–113, 1973.
52. D. Ruspini and O. Khatib. Collision/contact models for dynamic simulation and haptic interaction. In The 9th International Symposium of Robotics Research (ISRR'99), pages 185–195, Snowbird, USA, October 1999.
53. L. Sentis. Synthesis and Control of Whole-Body Behaviors in Humanoid Systems. PhD thesis, Stanford University, Stanford, USA, 2007.

54. L. Sentis and O. Khatib. Synthesis of whole-body behaviors through hierarchical control of behavioral primitives. International Journal of Humanoid Robotics, 2(4):505–518, December 2005.
55. L. Sentis, J. Park, and O. Khatib. Modeling and control of multi-contact centers of pressure and internal forces in humanoid robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, USA, October 2009.
56. B. Siciliano and J. Slotine. A general framework for managing multiple tasks in highly redundant robotic systems. In Proceedings of the IEEE International Conference on Advanced Robotics, pages 1211–1216, Pisa, Italy, June 1991.
57. O. Stasse, A. Escande, N. Mansard, S. Miossec, P. Evrard, and A. Kheddar. Real-time (self)-collision avoidance task on a HRP-2 humanoid robot. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3200–3205, Pasadena, May 2008.
58. A. Takanishi, M. Ishida, Y. Yamazaki, and I. Kato. The realization of dynamic walking robot WL-10RD. In Proceedings of the International Conference of Advanced Robotics, 1985.
59. A. Takanishi, H.O. Lim, M. Tsuda, and I. Kato. Realization of dynamic biped walking stabilized by trunk motion on sagittally uneven surface. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1990.
60. R. Tedrake, T.W. Zhang, M. Fong, and H.S. Seung. Actuating a simple 3D passive dynamic walker. In Proceedings of the IEEE International Conference on Robotics and Automation, 2004.
61. M. Vukobratović and B. Borovac. Zero-moment point: thirty five years of its life. International Journal of Humanoid Robotics, 1(1):157–173, 2004.
62. J.T. Wen and L. Wilfinger. Kinematic manipulability of general constrained rigid multibody systems. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 15, pages 558–567, Detroit, USA, May 1999.
63. E.R. Westervelt, J.W. Grizzle, C. Chevallereau, J.H. Choi, and B. Morris. Feedback Control of Dynamic Bipedal Robot Locomotion. CRC Press, 2007.
64. D. Williams and O. Khatib. The virtual linkage: A model for internal forces in multi-grasp manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1025–1030, Atlanta, USA, October 1993.
65. K. Yamane and Y. Nakamura. Dynamics filter: concept and implementation of online motion generator for human figures. IEEE Transactions on Robotics and Automation, 19(3):421–432, 2003.
66. Y. Yi, J. McInroy, and Y. Chen. General over-constrained rigid multi-body systems: Differential kinematics, and fault tolerance. In Proceedings of the SPIE International Symposium on Smart Structures and Materials, volume 4701, pages 189–199, San Diego, USA, March 2002.
67. T. Yoshikawa and O. Khatib. Compliant and safe motion control of a humanoid robot in contact with the environment and humans. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, September 2008.
