
Mathematisch-Naturwissenschaftliche Fakultät

Fabien Lagriffoul | Benjamin Andres

Combining task and motion planning: A culprit detection problem

Suggested citation referring to the original publication:
The International Journal of Robotics Research 35(8) (2015)
DOI: https://doi.org/10.1177/0278364915619022
ISSN (print) 0278-3649
ISSN (online) 1741-3176

Postprint archived at the Institutional Repository of the Potsdam University in:
Postprints der Universität Potsdam
Mathematisch-Naturwissenschaftliche Reihe ; 422
ISSN 1866-8372
http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-405124

Article

The International Journal of Robotics Research
2015, Vol. 35(8) 890–927
© The Author(s) 2016
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0278364915619022
ijr.sagepub.com

Combining task and motion planning: A culprit detection problem

Fabien Lagriffoul¹ and Benjamin Andres²

¹ AASS Cognitive Robotic Systems Lab, Örebro University, Sweden
² Knowledge Processing and Information Systems, University of Potsdam, Germany

Corresponding author:
Fabien Lagriffoul, AASS Cognitive Robotic Systems Lab, Örebro University, S-70182 Örebro, Sweden. Email: [email protected]

Abstract
Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric
search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions
are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent
backtracks between symbolic and geometric levels make the search inefficient. We address this problem by guiding symbolic
search with rich information extracted from the geometric level through culprit detection mechanisms.

Keywords
Combined Task and Motion Planning, Manipulation Planning

1. Introduction

Popular robotic platforms such as ASIMO have demonstrated impressive skills for various types of tasks. These platforms embody the most recent achievements from the fields of computer vision, motion planning, automatic control, and actuation, which provide them with the capacity to achieve a great deal of complex actions. However, these impressive results rely for a large part on human intervention for scripting the sequences of actions executed by the robot. Setting aside the inherent issues of uncertainty in perception and execution, we focus on the planning techniques that could be used for replacing human scripting by a fully automated process. Automated planning techniques exist for computing symbolic plans containing hundreds of actions; likewise, efficient motion planning techniques exist that could compute a motion path for each such action. Unfortunately, combining both planning techniques is not straightforward. The main problem is that symbolic planning works on idealized representations of the real world, hence symbolic plans are not always geometrically feasible at the outset. Consequently, finding a geometrically feasible plan requires combining search across both symbolic and geometric levels. This problem is referred to as Combined Task and Motion Planning (CTAMP).

Searching in the combined search space is intractable in most cases, because the cross product of both search spaces is too large. Decoupling both search spaces is not workable either, because in the case of geometrically intricate problems, the dependencies between geometric actions (which are not captured by the symbolic level) lead to intensive backtracking between symbolic and geometric levels. The key idea for achieving intelligent search across both search spaces is to leverage information from the geometric level in order to guide search at the symbolic level (or vice versa). This idea has been used by many authors, but not fully exploited (see Section 2). In most cases, the information fed back to the task planner relates to a motion path that was unfeasible, or to an object that was occluding another object. We argue that such simple feedback cannot efficiently guide the task planner, because it provides a local explanation of failure, i.e. an explanation that is only valid for the particular sequence of actions that produced it.

If the task planner is fed back with local explanations for geometric failures, it may repeatedly end up with plans leading to similar failures. Consider for instance the problem illustrated in Figure 1. The task is to create a pile of blocks a-b-c-d, at any location. A geometric failure is detected when the motion planner is called for the last action place(d, c). If the task planner is only notified that this action is unfeasible, it will backtrack to a previous decision point in order to reach the goal through a different sequence of actions. But without the explicit knowledge that the cause of failure is rooted in the choice of p1 as the location for block a, there is no reason for directly backtracking to that particular decision point. Hence, it may try out a large number of symbolic plans before finding one which avoids this pitfall. This could be avoided if the actual cause of failure was precisely identified. Our approach consists of focusing the computational effort on finding minimal explanations for geometric failures, in order to precisely guide the task planner towards a feasible plan. This idea is similar in principle to well-known search techniques used in artificial intelligence (AI) such as dependency-directed backtracking (Stallman and Sussman, 1977) or conflict-driven back-jumping (Dechter and Frost, 2002).

In this paper, we describe the core component of our approach, a geometric reasoner capable of computing minimal explanations for failures occurring in the process of geometrically instantiating a symbolic sequence of actions. These explanations are then used as logical constraints by a task planner based on answer set programming (ASP) (Lifschitz, 2008). Computing minimal explanations essentially boils down to a culprit detection problem, which is a difficult problem in general, since it reduces to the set covering problem (Bylander et al., 1991). We propose two techniques to address it. The first one is a polynomial-time algorithm for culprit detection in a constraint network representing a relaxed version of the geometric part of the CTAMP problem. The second one consists of constructing a graph of the geometric dependencies between the actions of unfeasible symbolic plans, in order to extract subsequences of actions which are separately evaluated as potential culprit subsequences. Beyond these two techniques, the main contribution of this paper is to propose a novel view on the problem of combining task and motion planning, by pointing out a culprit detection problem at the interface between the symbolic and geometric search spaces.

The rest of this paper is organized as follows. After reviewing some related work in Section 2, we describe the general principles of our approach in Section 3, which motivate the choices made for the architecture of our system, presented in Section 4. A brief introduction to planning with Answer Set Programming is given in Section 5. Then, the symbolic and geometric domains used for our experiments are described in Sections 6 and 7. The core of the article describes the culprit detection mechanisms in Sections 8, 9 and 10. Finally, we present the results of the experimental evaluation of the proposed approach in Section 11 and end with some concluding remarks.

Fig. 1. Stacking the last block is not possible because of collisions between the gripper and a fixed obstacle.

2. Related work

Different approaches to CTAMP have been devised, with different schemes for integrating symbolic and geometric reasoning. We review this work in the light of the topic of this paper, i.e. how the information at one level is used in order to guide the search at the other level. A number of relevant related problems in the motion planning and constraint programming literature are also reviewed.

In some approaches, the geometric level steers the search and gets guidance from the symbolic level. In SamplSGD (Plaku and Hager, 2010) for instance, the system mainly works on a motion planning problem, while a heuristic task planner (FF, Hoffmann and Nebel (2001)) is repeatedly called in order to compute a utility value based on the length of the symbolic plan that achieves the goal. ASyMov (Cambon et al., 2009) uses a similar principle, but takes into account both the symbolic distance to the goal and the number of failures of the path planner (based on probabilistic roadmaps (PRMs), Kavraki et al. (1996)) to determine the heuristic values of the search nodes. These nodes represent hybrid symbolic/geometric states, and a plan is found using A* search. In this type of approach, symbolic and geometric reasoning are tightly intertwined, i.e. each visited geometric state triggers a call to the task planner. This may be an issue for large problems, in which decoupling search spaces is necessary for reaching a solution. Our approach addresses this difficulty by alternating pure symbolic search and pure geometric search.

In a more common type of approach, the task planner steers the search, while a geometric reasoner is called to geometrically evaluate the preconditions and compute the geometric effects of actions. These approaches include semantic attachments (Dornhege et al., 2009; Guitton and Farges, 2009; Karlsson et al., 2012). HPN (Kaelbling and Lozano-Pérez, 2011) differs by using a late-commitment approach, more suited for interleaving execution and plan refinement. In all these approaches, the feedback from the geometric level to the symbolic level consists in a mere "success" or "failure", the latter resulting in a dead-end for the task planner. This opens the door to repeatedly encountering similar geometric failures, as explained in the introductory example. By contrast, our approach prevents this by analyzing the very cause of geometric failures, and provides meaningful feedback to the task planner so that the same failure cannot occur again.

The approach of Srivastava et al. (2014) allows richer feedback by means of logical predicates. They present a general interface which takes care of the geometric details, and assume optimistic default values for geometric preconditions. If a geometric failure is detected, the symbolic state is updated accordingly and re-planning is triggered. This approach relies on the assumption that the actual cause of geometric failures lies in individual actions, and that the plan can be repaired from the current state. Again, this stands in contrast with our approach, which is based on the observation that locally dealing with geometric failures may cause them to re-occur over and over again.
Garrett et al. (2014) tightly connect geometric and symbolic levels via a conditional reachability graph used for computing the heuristic of the task planner. The heuristic implicitly informs the symbolic level about occluding objects that need to be moved and in which order they are to be moved. This approach is somewhat the opposite of Asymov (which uses FF as a heuristic guiding a PRM planner), since it uses a PRM planner to compute a heuristic for FF. It is not possible to pre-compute the conditional reachability graph for all possible situations, therefore these computations are performed on demand while the heuristic is computed. The problem is then similar to Asymov: each visited symbolic state triggers geometric computations, thus both search spaces are tightly intertwined. This may be problematic for large problems that require decoupling of the search spaces.

Lozano-Pérez and Kaelbling (2014) frame the CTAMP problem as a discrete Constraint Satisfaction Problem (CSP) (Rossi et al., 2006) for quickly assessing whether a given symbolic plan is geometrically feasible or not. A solution to the CSP provides grasps and placements that do not interfere with each other. The strength of their approach is to account for path existence constraints in the CSP formulation, by pre-computing a map representing the free space. This approach focuses on the geometric aspects of CTAMP, assuming a symbolic plan given by an external task planner, but it does not provide a mechanism for integrating geometric constraints in the symbolic search space, as we propose in this paper.

In previous work (Bidot et al., 2015; Lagriffoul et al., 2014), we combined an HTN (Hierarchical Task Network) planner (Nau et al., 2004) for task planning with a bidirectional RRT path planner (LaValle, 2006) for motion planning, and used a linear constraint network for pruning out kinematically inconsistent choices for grasps and placements. The limitation of this approach is that HTN (and more generally state-space planning) does not allow us to exploit geometric constraints in a meaningful way at the symbolic level. The reason for this is that geometric constraints are fed back to the task planner through the preconditions of symbolic operators. This restricts the expressiveness of the constraints that can be fed back to the symbolic level, since they inherently relate to single actions. In the presented work, we address this limitation by replacing the HTN planner with a logic programming approach, which allows us to leverage meaningful geometric constraints in the task planning process (see Section 3.2).

A third type of approach consists of stating the symbolic planning problem in terms of logic programming (Gelfond and Lifschitz, 1998; Kautz and Selman, 1992; Lifschitz, 2002). The main difference with the previous type of approaches (based on state-space planners) is the way the symbolic space is traversed. With logic-based planning, the task planner operates in a search space comparable to the space of plans. Such a search space enables pruning out families of plans regardless of the exact chronology of their actions, unlike state-space planners, which can only prune out sub-trees rooted in the state currently visited. This feature is exploited in the approach presented in this paper: the geometric failures detected in a small number of unfeasible plans are used for pruning out entire families of plans containing the same flaws, although their sequences of actions may be very different.

In this vein, Choi and Amir (2009) use a sampling-based motion graph to build an action theory, from which a plan is computed. Only feasible actions are represented in the graph, hence failures do not directly guide the search. However, the reachability of objects is associated with modes, which implicitly represent the fact that some combinations of actions prevent some objects from being reached. In Luna et al. (2014), a Satisfiability Modulo Theories (SMT) solver is used for plan synthesis. As in the approach of Choi and Amir (2009), the failures are not explicitly fed back, but the feasibility of geometric paths with respect to object placements is connected with the logical level, through a manipulation graph (computed offline) encoded in the formula. Erdem et al. (2011) use the action language C+ to encode the planning problem into a logic program. The failures detected at the geometric level are fed back in the form of logical constraints and a new plan is computed. The feedback is limited to collisions or infeasibility of motion paths. A similar approach is taken by Aker et al. (2012), but using ASP programs. A similar scheme is used in our work, but the major difference in our approach (besides using culprit detection mechanisms) is the level of granularity used for symbolically representing the world (see Section 6.6 for a comparison between both approaches).

Toussaint (2015) addresses sequential manipulation planning problems of building stable piles of objects. His approach stands out from the previously mentioned ones in the sense that the symbolic level is not guided by geometric failures, but rather by a heuristic value calculated by optimizing different costs, computed for different levels of refinement of the symbolic action sequence. At the lowest level of refinement, the cost is given by optimizing the stability of the resulting pile, and in further refinements, kinematic constraints are taken into account for optimizing motions. As mentioned by the author, this approach is valid for problems where collisions are not of major concern (a flying robotic arm is used), because collisions are considered only at a later stage. However, this may be inefficient for more "classical" CTAMP scenarios, i.e. where collisions are the main cause of reconsidering symbolic decisions.

Related problems

A number of techniques have been developed beyond basic motion planning in order to cope with robotic problems.
The limitations of motion planning arise when obstacles need to be moved, or when task constraints impose discrete steps on motion paths. A general approach to the Manipulation Planning problem was proposed by Simeon (2004), based on the composition of several Probabilistic Roadmaps (PRMs). Stilman and Kuffner (2008) and Stilman et al. (2007) address the difficult problem of robot Navigation Among Movable Obstacles (NAMO), with a backward search algorithm that recursively moves occluding objects out of the space which the robot has to traverse. Multi-modal Motion Planning addresses high-dimensional motion planning problems by planning discrete mode switches in which lower-dimensional subspaces are sampled using domain-dependent strategies. This approach has been successfully applied by Hauser and Latombe (2010) to climbing robots, or to push-planning by a humanoid robot (Hauser et al., 2007). These works, however, do not take causal reasoning into consideration.

Recent work by Hauser (2014) on the Minimum Constraint Removal (MCR) problem is relevant to the present work. It is proven that deciding the minimum number of obstacles to remove for making a path feasible is NP-hard. A greedy algorithm is presented, which can compute parsimonious explanations for path planning failures. This falls in line with our approach, which aims at computing minimal explanations for geometric failures, but is currently lacking methods for detecting path planning failures.

Several search techniques developed in other areas are also relevant to this work. Although they address different types of problems, they share with the present work the use of culprit detection mechanisms for pruning the search space. Stallman and Sussman (1977) introduced the Dependency-Directed Backtracking scheme to reduce the complexity of electronic circuit analysis. The possible operating regions of electronic devices are represented by discrete states, and their physics are described by algebraic relations. As a physical contradiction is detected, a dependency structure is used to compute a relevant explanation and prevent similar choices occurring again. Similar techniques are used in Boolean Satisfiability (SAT) solvers. The conflicts occurring during search are analyzed by specialized procedures (Silva and Sakallah, 1996), and a clause expressing the negation of the cause of conflict is re-injected into the clause database for pruning the search space. Backjumping techniques (Dechter and Frost, 2002) analyze the dead-ends reached during search to identify inconsistent partial solutions, which allows the algorithm to backtrack several levels up in the decision tree, skipping irrelevant variables. Similarly, in this paper, specialized procedures perform culprit detection at the geometric level, whose results are then exploited by the intelligent backtracking mechanisms of the ASP solver.

3. General principles

Our approach relies on two key components. First, there is a geometric reasoner capable of analyzing the cause of geometric failures. This is achieved by two culprit detection mechanisms which we introduce in this section. Secondly, since the cause of failure is not a mere "success/failure" answer, we need a common language between the geometric reasoner and the task planner, so that the cause of failure can directly be used by the symbolic search process. In most approaches, the common language is defined by the preconditions of symbolic operators, which are true/false depending on the success/failure of the geometric reasoner. Here, since we use a logic programming approach for task planning, the common language is more expressive since it can be any logical expression supported by the task planner.

Fig. 2. A set of bounding boxes (bbox) representing all the possible poses that the center of each object can occupy is computed. In the illustrated example, block e is placed on p2 and block f is placed on p3. Some bounding boxes are also computed for the intermediate poses of objects, but they are not represented in this figure.

3.1. Finding minimal explanations for geometric failures

Consider again the blocks-world problem illustrated in Figure 1. If the task planner initially decides to build the pile at location p1, the action place(d, c) always fails geometrically, because the gripper always collides with the fixed obstacle. The cause of failure is not the action place(d, c) per se, because this action would be feasible if the pile was built at location p2 or p3. Rather, it is the result of the choice of p1 as a location for a, combined with the geometric effects of actions place(b, a), place(c, b), place(d, c), and the position of the fixed obstacle relative to p1. Note also that, during the last action (place(d, c)), blocks e and f have been moved to some temporary locations, but neither the choice of these locations nor the order in which blocks are moved is relevant for explaining the failure. Wherever blocks e and f are placed, and whatever order is chosen for actions, the same problem will eventually occur. Therefore, a minimal explanation of the failure should only depend on blocks a, b, c, d and p1, otherwise the task planner may return an infinite number of unfeasible plans by permuting the temporary locations of blocks e and f, by permuting the order of actions, or by increasing the number of actions. Isolating the minimal number of factors explaining the failure is the culprit detection problem that we propose to solve.
Definition (Culprit Detection Problem). The input of a culprit detection problem is defined by a set of hypotheses and a set of observations to be explained. The output is an explanation, i.e. a parsimonious set of hypotheses which explains all the observations (Bylander et al., 1991).

Next, two methods are sketched out for addressing this culprit detection problem. In the first method, the hypotheses are a set of linear constraints, and the observation is the inconsistency of the constraint network. In the second method, the hypotheses are symbolic actions, and the observation is that a sequence of actions is not geometrically feasible.

The first method consists of computing a set of bounding boxes, which encompass all the poses that each object can possibly occupy at each time step (see Figure 2). The sizes and positions of the bounding boxes are computed using the spatial relations between objects (e.g. on(a, p1), on(b, a)) taken from the symbolic plan, plus some numeric information from the geometric domain, e.g. the pose of p1, the dimensions of block a, etc. The bounding boxes are represented by a network of linear constraints. The constraint network is used to detect geometric failures caused by violation of kinematic constraints, and the bounding boxes are used to sample object/robot poses for detecting geometric failures caused by collisions. For instance, the bounding box of the gripper (bbox(gripper) in Figure 2) is used to sample a discrete subset of the poses that the gripper can possibly occupy, and perform a collision check for each of them. Since all the samples cause a collision, the sequence of actions is unfeasible. Then, using the constraint network and culprit detection mechanisms, it is possible to prove that the pose of bbox(gripper) only depends on the poses of p1 and blocks a, b, c and d, and create an explanation of the failure which depends neither on blocks e and f, nor on the order of actions. The culprit detection mechanisms for achieving this are presented in Section 8. The drawback of this method is that the bounding boxes cover volumes which are often larger than what the manipulator can actually reach. Hence, some sample positions that are actually not feasible can be found to be collision-free, which causes some failures not to be detected.
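To make the first method more concrete, here is a toy sketch that chains axis-aligned bounding boxes along on-relations and records which symbolic choices each box depends on; a single box-versus-obstacle overlap test stands in for the per-sample collision checks described above. This is an illustrative Python sketch, not the authors' implementation: the names Box and stack_on, and all numeric values, are made up for the example.

    from dataclasses import dataclass

    @dataclass
    class Box:
        name: str
        lo: tuple                      # (x, y, z) lower corner
        hi: tuple                      # (x, y, z) upper corner
        deps: frozenset = frozenset()  # symbolic choices this box depends on

    def stack_on(support, obj_name, size):
        # Linear constraint: the object rests somewhere on the top face of 'support'.
        lo = (support.lo[0], support.lo[1], support.hi[2])
        hi = (support.hi[0], support.hi[1], support.hi[2] + size[2])
        return Box(obj_name, lo, hi, support.deps | {obj_name})

    def overlaps(box, other_lo, other_hi):
        return all(box.lo[i] < other_hi[i] and other_lo[i] < box.hi[i] for i in range(3))

    # Pile a-b-c-d built at p1, as in Figure 1 (blocks e and f never enter the chain).
    p1 = Box("p1", (0.0, 0.0, 0.0), (0.1, 0.1, 0.0), frozenset({"p1"}))
    blk = (0.05, 0.05, 0.05)
    a = stack_on(p1, "a", blk)
    b = stack_on(a, "b", blk)
    c = stack_on(b, "c", blk)
    d = stack_on(c, "d", blk)
    gripper = stack_on(d, "gripper", (0.05, 0.05, 0.10))   # bbox(gripper) above d

    obstacle_lo, obstacle_hi = (-0.5, -0.5, 0.20), (0.5, 0.5, 0.40)  # fixed obstacle
    if overlaps(gripper, obstacle_lo, obstacle_hi):
        print("failure explained by:", sorted(gripper.deps))  # a, b, c, d, gripper, p1

Because the dependency sets are propagated together with the boxes, the reported explanation mentions only p1 and the blocks of the pile, mirroring the behaviour described in the text.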
The second method copes with this problem by discretizing the poses of robots and objects using the same process that is used for finding a geometric plan (see Section 7.3), the difference being that motion planning is not performed, i.e. only the initial and final configurations of actions are considered (more about this point in Section 4.3), and only subsets of actions from the task plan are considered. The goal is to find a minimum subset of actions causing the geometric failure. Imagine for instance a sequence of symbolic actions A1, ..., An. Let us assume that a geometric failure occurred for action A7. It may be that the problem is intricate and, regardless of the geometric instances chosen for the symbolic actions, there is no solution to the problem. But most often, geometric failures are caused by one or two actions only. For instance, if a large object is placed in a box (A3), it is impossible to place another object in that box later on (A7), and this problem is independent from the actions performed in between (A4, A5, A6). Proving this is a culprit detection problem. In order to detect the culprit action(s), several subsequences of actions are tested in isolation, e.g. A1, A7; A2, A7; etc. All the possible combinations within the sets of discretized geometric instances of each action in the subsequence are tried out (see Figure 3), and if all of them fail, the subsequence is reported as unfeasible. Section 9 describes how this is done in practice, in particular how to select proper subsequences of actions, since trying all of them is intractable. Next, we discuss how to represent the causes of geometric failures, and how to use them within the task planning process.

Fig. 3. Schematic illustration of the test of the subsequence A3, A7. The black dots represent geometric instances of symbolic actions resulting from the discretization process (see Section 7.3). All the combinations within these discretized geometric instances of A3 and A7 are tested.
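The test of a single candidate subsequence pictured in Figure 3 can be written down in a few lines. The sketch below is illustrative only: it assumes that each action comes with a finite list of discretized geometric instances and that a compatibility test (kinematic checks plus pairwise collision checks on initial/final configurations, no motion planning) is available as the placeholder function compatible.

    from itertools import product

    def subsequence_is_unfeasible(subseq, instances, compatible):
        # subseq: tuple of action indices, e.g. (3, 7) for the pair A3, A7
        # instances[a]: discretized geometric instances of action a (black dots in Figure 3)
        # compatible(assignment): True if the chosen instances can coexist
        for choice in product(*(instances[a] for a in subseq)):
            if compatible(dict(zip(subseq, choice))):
                return False   # at least one combination works: no culprit here
        return True            # every combination fails: report the subsequence

If subsequence_is_unfeasible((3, 7), instances, compatible) returns True, the pair A3, A7 is a culprit regardless of the actions performed in between.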
3.2. Reasoning about failures in the planning process

Continuing on the blocks-world example (Figure 1), let us assume that the actual cause of failure is detected by the geometric reasoner, and returned to the task planner. As mentioned above, the cause of failure should not include the positions of blocks e and f, neither should it refer to the order of actions. One could represent it as a conjunction of logical statements, for instance

on(a, p1) ∧ on(b, a) ∧ on(c, b) ∧ on(d, c)    (1)

This information is valuable only if it can quickly guide the task planner to backtrack to the action place(a, p1) and build the pile at a different location. If the planning problem is modeled in propositional logic, this process is facilitated, because the planning problem is represented as a set of clauses P (Kautz et al., 1996), and expression (1) can be added as a logical constraint to the problem. Informally,

P ∧ ¬(on(a, p1) ∧ on(b, a) ∧ on(c, b) ∧ on(d, c))

which is equivalent to

P ∧ ¬(on(a, p1) ∧ goal)

Since the goal must be true, a simple inference mechanism entails that on(a, p1) is false.
Most state-of-the-art planners are based on heuristic search and do not support this type of global inference mechanism. Therefore, we use the logic programming paradigm for the task planning component of our system. It allows us to efficiently guide task planning with logical constraints formulated from the explanations of geometric failures, and the inference mechanisms of logic programming. In the present work, we opted for ASP, which provides an expressive language and effective solvers. Note that the proposed approach does not specifically rely on ASP: the only requirement for the task planner is to support global inference mechanisms. Other logic programming languages, or satisfiability-based planners, could be used as well.

Fig. 4. The two main components of our system: the ASP solver for task planning, and the geometric reasoner to geometrically instantiate the symbolic plan or to analyze the causes of failure.

4. System overview

4.1. Overall architecture

The overall architecture of our system is simple (see Figure 4). The ASP solver takes as input a domain definition file and a problem definition file, both written in AnsProlog, the logic programming language of ASP. The domain describes the actions, when they can apply, and which logical effects they have. It also contains a set of rules which describe what does not change (frame axioms), and what is indirectly changed by the actions (indirect effects). More details about these rules are given in the next section and in Section 6. A problem definition file contains a symbolic description of the initial state and the goal state. The geometric reasoner takes as input the geometric description of the initial scene, i.e. the initial poses of robots, objects, and obstacles. The scene also includes the 3D representations of each robot, object, and obstacle. The geometric reasoner also gets some information from the symbolic domain: which objects are movable, and the kinematic structure of compound robots, i.e. which base is connected to which manipulator. The working process is a simple loop where (i) the ASP solver finds a symbolic solution plan, (ii) the plan is analyzed by the geometric reasoner, and (iii) the geometric reasoner feeds back the (potential) cause of failure to the ASP solver in the form of a logical constraint, or returns a geometric instance of the symbolic plan otherwise. With this logical constraint added to the problem, the ASP solver generates a new plan that is free from the detected failures, and the cycle repeats until a feasible plan is found.
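This working loop can be summarized by the sketch below. The object names and methods (find_plan, evaluate, explanation) are placeholders chosen for illustration; the actual interfaces of the two components are described in the remainder of this section.

    def solve_ctamp(asp_solver, geometric_reasoner, max_iterations=100):
        constraints = []                                  # logical constraints learned so far
        for _ in range(max_iterations):
            plan = asp_solver.find_plan(constraints)      # (i) symbolic search
            if plan is None:
                return None                               # no symbolic plan left
            result = geometric_reasoner.evaluate(plan)    # (ii) layers (1)-(3)
            if result.feasible:
                return result.geometric_plan              # geometric instance of the plan
            constraints.append(result.explanation)        # (iii) culprit fed back as a constraint
        return None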
For the goals of this paper, the core component of our system is the geometric reasoner (Figure 5). The geometric reasoner takes as input a sequence of symbolic actions. First, this sequence of actions is searched for geometric failures in layer (1), spatial relations, and layer (2), geometric dependencies chains. If no failure is found, it attempts to geometrically instantiate the symbolic plan in layer (3) by searching for a motion path for each action. If a failure is detected in any of these layers, the geometric reasoner returns a logical expression describing the cause of failure. The layers are hierarchically organized from a high level of abstraction down to the motion planning level. The lower in this hierarchy, the more computationally expensive it is to detect a failure. Hence, when a failure is detected, a logical constraint is returned and the remaining, more primitive checks are not performed. A more detailed description of the different layers is given in Section 4.2.

Fig. 5. The geometric reasoner analyzes the cause of failure through three layers representing the plan at different levels of abstraction: (1) spatial relations, (2) geometric dependencies chains, and (3) whole plan with motion paths.

4.2. The geometric reasoner

Finding a culprit subset of elements is a difficult problem in general, because it requires checking all the subsets in the power set of these elements, which requires up to 2^N checks, N being the number of actions in a symbolic plan in our case. This quickly leads us to an intractable number of subsets of actions to be checked.
In CTAMP, this combinatorial problem is made worse by the fact that "checking" one subset of actions implies that various geometric computations are performed, including searching for feasible grasps and placements, and motion planning. The first layer of the geometric reasoner, "Spatial relations" (1), copes with this complexity by working on an abstraction of the space of grasps and placements, by building a set of bounding boxes which encompass all the possible poses that objects/robots can occupy after completion of each action. Although this representation is not precise, it allows us to detect some geometric inconsistencies in polynomial time (see Section 8). These bounding boxes are also used to perform various collision checks which are explained in Section 10. No motion planning is performed in this layer. The logical constraints returned by this layer are expressed in terms of spatial relations between objects (see Section 6.4).

Since the first layer does not take into account the kinematics of robots, it may let some geometric failures go by undetected. The "geometric dependencies chains" layer (2) copes with this problem. This layer analyzes the geometric dependencies between the actions of a symbolic plan. For example, if two mobile robots located at some distance from each other pick up two different objects, there are no geometric dependencies between both actions, but if robot A reaches robot B and hands over an object, then geometric dependencies between these actions exist. Layer (2) constructs a graph of the geometric dependencies between the actions of a plan, and uses this graph to select some subsequences of actions to be extracted from that plan. These subsequences are then geometrically evaluated separately from the other actions in the plan. Several types of collision checks are performed during this evaluation (see Section 10). The details of this process are presented in Section 9. Motion planning is not performed in this layer, i.e. only the feasibility of the initial and final configurations of the paths is checked. The logical constraints returned by this layer represent unfeasible subsequences as partially-ordered subsequences of actions.
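As a rough illustration of how the dependency graph restricts the candidate subsequences, the sketch below only pairs the failed action with actions it is geometrically connected to, and enumerates the smallest candidates first. The actual selection procedure used by layer (2) is described in Section 9; depends_on is a placeholder for the dependency test.

    from itertools import combinations

    def candidate_subsequences(plan, failed, depends_on, max_size=3):
        # depends_on(a, b): True if actions a and b share geometry (same object,
        # same support, a hand-over, ...); independent actions are never tested together.
        related = [a for a in plan if a != failed and depends_on(a, failed)]
        for size in range(1, max_size):
            for subset in combinations(related, size):
                yield subset + (failed,)   # evaluated in isolation, smallest first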
Finally, if no geometric failure is detected by the previous layer, layer (3) evaluates the whole sequence of actions through geometric backtrack search, that is, depth-first search in the search space of possible grasps and placements until a motion path for each action is found (see Section 9.1, and our previous work, Bidot et al. (2015)). For this last step, a cutoff time is set. If no solution is found within the time limit, the logical constraint returned by the geometric reasoner is the subsequence of actions that it managed to instantiate within the time limit. For the ASP solver, this means that it must no longer return any plans that begin with this subsequence of actions.

4.3. Assumptions and completeness issues

In Section 3.1, the second method for culprit detection analyzes subsequences of actions by only considering initial and final configurations of motions. The reason why we exclusively consider these two configurations is that we do not consider heavily cluttered environments. Therefore, when kinematically feasible initial and final configurations have been found, finding a motion path is possible in most cases. Furthermore, the manipulators are more subject to kinematic constraints at grasp and release positions, because the pose of the gripper is constrained by the pose of the object to be grasped / target pose to place the object in, which is not the case during the transfer of the object. Note however that all the paths are computed in any case, i.e. our system does not produce motions which may cause collisions. But if an action is invalidated because of a path planning failure, no meaningful explanation is fed back to the symbolic level, and the same failure may be encountered again in a different sequence of actions. This owes to the fact that identifying the culprit colliding object(s) in a path planning failure is a difficult problem (Hauser, 2014). This issue is discussed further in the conclusion.

The proposed approach is not complete in different respects. Although the motions performed for each action are computed by a resolution-complete path planner, the start and goal configurations of these paths are a priori discretized, therefore many start and goal configurations are excluded from the search space. Another source of incompleteness lies in the fact that the geometric problem is broken down by the task planner into a sequence of sub-problems, each of which is solved within a subspace of the configuration space. For instance, if the symbolic plan contains an action commanding the right arm of a humanoid robot, the subspace is the configuration space of the right manipulator, while the left arm acts as an obstacle. Potential solutions are lost in this way, compared with if the problem had been stated in the combined search space of both arms. For the same reason, only the objects represented in the symbolic domain can be acted upon. Therefore, if they are not symbolically represented, occluding objects cannot be moved away, nor can a flat surface be used as a temporary location.

Another issue with completeness concerns the detection of failures. Proving a continuous-space problem unfeasible is not possible with sampling-based techniques. However, our simplified approach for multi-step motion planning facilitates this process: since the resulting configurations of actions are discretized, and since they act as obligatory pathways for a global solution, failures can be easily detected by considering these configurations in priority. Failures owing to kinematic violations and failures owing to collisions present us with two different cases. In the case of kinematic violations, our approach is conservative, i.e. the bounding boxes always overestimate the actual capacities of manipulators, or the size of regions in which objects can be. Consequently, violations of kinematic constraints can be safely fed back to the ASP solver without loss of solutions. This is not true for failures owing to collisions, because collision checks are performed on a finite set of samples, therefore feasible configurations may not be discovered. In this case, the constraint returned to the ASP solver prunes out potential solutions.
5. Planning with ASP

A formal definition of the ASP language is given in Appendix A. For illustration purposes, a small ASP example will be presented in this section. The problem in the example scenario is for a robotic arm to move a block a from a green_tray to a red_tray. The scenario also includes an additional block b which may obstruct a trivial solution.

Fig. 6. Initial situation of the minimal example of placing block b onto the red_tray.

As common in ASP, we divide the encoding of the example into two parts, a fact format for representing problem instances and a generic encoding for solving pick&place problems. Figure 7 presents the problem instance of our example. The facts in Lines 2 to 4 define our environment, consisting of two blocks, two locations and an actuator (the robotic arm). Lines 6 and 7 present the initial conditions, with block a placed on the green_tray and block b on the red_tray. Finally, Line 8 defines the goal condition of our example with block a on the red_tray.

1 #program base.
2 block(a). block(b).
3 location(green_tray). location(red_tray).
4 actuator(arm).

6 init(on(a,green_tray)).
7 init(on(b,red_tray)).
8 goal(on(a,red_tray)).

Fig. 7. ASP instance for the minimal example.

A general ASP encoding for solving this problem is shown in Figure 8. Note that this code is for illustration purposes, and is a simplified version of the actual code (Appendix C presents samples of the actual encoding). The encoding consists of three parts: base, incremental and end. The base part represents the initial situation of the scenario. In the incremental part, possible actions and their effect on the environment at a specific point in time are defined. Finally, the goal conditions are described in the end part. As long as the incremental part is insufficient to satisfy the goal condition in the end part, an additional time step (action) is appended to the incremental part. This is handled by an outside controller, as well as the identification of the new final action. While expanding the encoding with an incremental part, all occurrences of t are substituted by an integer representing the time step to be added.

Fig. 8. ASP encoding for the minimal example.

In the encoding shown in Figure 8, the base part extends from Lines 1 to 6, the incremental part from Lines 8 to 28, and the end part from Lines 30 to 31. The rules in Lines 3 and 4 formulate the potential actions the robotic arm is able to execute, with pick_up(Block) stating that Block is to be picked up and place(Location) that a currently grasped object is to be placed on Location.

The first rule of the incremental part is a choice rule (Line 10), stating that on every time step the task plan may include one action from the set of potential actions specified above for each actuator. The do(Actuator,Action,t) predicate represents that an Actuator performs an Action at time step t. The integrity rule in Line 11 ensures that any chosen action for each actuator must be possible for it to execute in this incremental step. Possible actions are defined by the rules in Lines 13 to 20. The first rule states that it is possible for an actuator to pick_up any block, given that the block was placed on a location and the actuator was not grasping anything in the previous step. The second rule states that it is possible for an actuator to place a block at any location if the actuator was grasping the block in the previous step.

Lines 22 to 28 model the logical consequences of chosen actions, implementing the frame axiom in ASP. If a pick_up action is chosen for an actuator, it holds for the current step that the object is grasped by the actuator (Line 22), while the condition that the object is on a location stops (Line 23). The rules are equivalent for the place action, but the block is now on the placed location (Line 25) and stops being grasped by the corresponding actuator (Line 26). Line 28 declares that any fluent held in the previous step also holds in the current step unless it was stopped.

The end part of the encoding starts with the external literal horizon(t), which in Line 30 identifies the last incremental step of the solution, i.e. the last action of the action plan. Being external, the value of the literal is determined by the controller, not the solver. The controller sets horizon(t) to true if t is the last incremental step and to false if not. The integrity rule in Line 31 excludes all answer sets in which the goal of the example is not fulfilled in the last incremental step.

Since it is not possible to move block a to the red_tray in only one action, the ASP solver fails to find a solution with only the base and incremental(1) part. Thus, the controller adds the incremental(2) part to the encoding and a solution can be found:

do(arm,pick_up(a),1)
do(arm,place(red_tray),2)

Assuming the red_tray is not large enough to hold both blocks, the geometric solver rejects the plan and feeds back an integrity constraint describing the cause of error to the ASP solver, i.e. that a and b may not be on red_tray at the same time step.
Since there are now no valid solutions with only two actions, the encoding is extended by two additional incremental steps (3 and 4). The next solution is found with four actions, by first placing block b on the green_tray and then block a on the red_tray:

do(arm,pick_up(b),1)
do(arm,place(green_tray),2)
do(arm,pick_up(a),3)
do(arm,place(red_tray),4)

Note that the plan length is only increased if the ASP solver proved that there are no valid plans for the current length.
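The outside controller described above can be realized, for instance, with the clingo Python API. The sketch below is an assumption-laden illustration rather than the authors' controller: it assumes the encoding of Figure 8 is stored in encoding.lp, the instance of Figure 7 in instance.lp, that both the incremental and the end parts are parametrized by t, and that clingo 5 is used.

    from clingo import Control, Function, Number

    ctl = Control()
    ctl.load("encoding.lp")          # base / incremental / end parts (Figure 8)
    ctl.load("instance.lp")          # facts of Figure 7
    ctl.ground([("base", [])])

    step, plan = 0, None
    while plan is None:              # a real controller would also bound the horizon
        step += 1
        ctl.ground([("incremental", [Number(step)]), ("end", [Number(step)])])
        if step > 1:                 # the horizon moves to the new last step
            ctl.release_external(Function("horizon", [Number(step - 1)]))
        ctl.assign_external(Function("horizon", [Number(step)]), True)
        with ctl.solve(yield_=True) as handle:
            for model in handle:     # first answer set found = task plan
                plan = [str(s) for s in model.symbols(shown=True)]
                break

    print(plan)   # e.g. ['do(arm,pick_up(b),1)', ..., 'do(arm,place(red_tray),4)']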
6. Symbolic domain

In order to make the presentation of our techniques more concrete, we will use examples based on a concrete domain, in which we use three simulated robots with different capabilities. The first robot is Justin, the DLR humanoid robot (Ott et al., 2006), with two arms with 7 degrees of freedom (DoF) each, and two dexterous hands. The second robot is Fabot, a mobile manipulator with a 3 DoF arm that can translate along the vertical bar attached to its base, which allows it to grasp objects on the floor, or to reach high locations (see Figure 9). For the mobile part, Fabot's base is holonomic. The third robot is r2d2, a mobile robot with a holonomic base and a flat area on top, which can be used as a mobile tray. Justin is constrained to be fixed, in order to enforce the cooperation between the robots.

6.1. Representing robots

Robot parts are referred to as components, represented by variables, and some predicates are used to define properties or relationships between them. For instance, Fabot is defined as follows:

component(fabot_base)
component(fabot_arm)
architecture_child(fabot_base,fabot_arm)
base(fabot_base)
able(fabot_base,moving)
able(fabot_arm,manipulating)
skilled(fabot_arm)

The predicate architecture_child indicates that the two mentioned components belong to the same robot, and the predicate base identifies its base. The specific abilities of the components are represented with the able predicate, which determines which actions each component supports. The skilled predicate specifies that Fabot can pick piles of objects (because the design of its manipulator prevents the gripper from tilting). This can be easily modeled by adding a constraint in the domain (see Appendix C, 3) without the need for defining a different pick action for each robot. This scheme allows us to model more complex robots such as Justin (see Appendix C, 5).

Objects and locations are also represented by variables and predicates. For instance:

location(table)
object(cup)
object(block_a)

Types can be assigned to objects, for domain-specific use:

block(block_a)
A robot such as r2d2 can also be used as a location, i.e. objects can be placed on it:

location(r2d2)

6.2. Geometric predicates

Geometric predicates form a language which allows the ASP solver to symbolically reason about the physical world. These predicates accept a time parameter (see parameter t in Figure 8) which is omitted here for brevity.

1. moved(X) represents the fact that X moves or is moved. X can be a component, object, or location.
2. connected(Parent,Child) implies that if Parent is moved, then Child moves as well (but not necessarily the converse). It applies to a wide range of situations: robot composition, object grasp, or object support. Examples: connected(fabot_base,fabot_arm), connected(fabot_arm,cup), connected(tray,cup).
3. on_location(Object,X) represents the relation resulting from the transitivity of the connected relation. X can be a location or a component. Example: block_a is on the tray and block_b is on block_a, then on_location(block_b,tray).
4. oriented(Object,Orientation) represents the gross orientation of Object, i.e. its alignment/anti-alignment with one of the reference axes (see Section 7.1). Orientation can be x1, x2, y1, y2, z1 or z2, e.g. z1 represents upright and z2 upside-down.
5. reachable(X,Component) represents the fact that Component is located at a sufficient distance from X for attempting a pick, place or stack action. X can be an object or a location.
6. manipulated(Component,Object) represents the fact that Object is actively acted upon by Component, directly or indirectly. Examples of manipulated cup: Fabot grasps a cup, Fabot moves its base while holding a cup, r2d2 moves with a cup on top of it.

The value of these predicates changes over time by the direct effect of actions, but also indirectly through side-effects and ramification. For instance, when the base of a robot moves, the locations/objects reachable by the manipulator are not reachable any longer. Similarly, if the robot is holding an object, this object is also moved. By using ASP as a modeling language, we are able to express ramifications and indirect effects in a native way. Appendix C (1, 2) presents rule samples that illustrate how this can be handled.
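The ramification from connected to on_location and moved can be pictured procedurally with a few lines of Python. This is a simplification made for illustration (it assumes that every object or component has at most one direct support); the corresponding ASP rules in Appendix C express the same closure declaratively.

    def on_location(connected, obj):
        # connected: dict child -> parent, e.g. {"cup": "fabot_arm", "fabot_arm": "fabot_base"}
        result, parent = [], connected.get(obj)
        while parent is not None:
            result.append(parent)
            parent = connected.get(parent)
        return result            # everything obj transitively rests on / is carried by

    def moved(connected, component):
        # Indirect effect: whatever is transitively connected to a moved component moves too.
        return [child for child in connected
                if component in on_location(connected, child)] + [component]

    links = {"fabot_arm": "fabot_base", "cup": "fabot_arm"}
    assert on_location(links, "cup") == ["fabot_arm", "fabot_base"]
    assert set(moved(links, "fabot_base")) == {"fabot_base", "fabot_arm", "cup"}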
6.3. Actions

The symbolic domain consists of six actions: the manipulators are able to perform pick, place and stack actions, while the bases can do reach and dock actions. All components can perform the move action. A general description of the actions is given here; the reader may consult Appendix C for a complete AnsProlog implementation of the pick action as an example.

1. pick(Object,GraspType) represents the action of picking Object using a given grasp type (top, side, or bottom).
2. place(Orientation,GraspType,Location) represents the action of placing the held object on Location using GraspType in a given orientation (z1 or z2). Location must be of type location.
3. stack(Orientation,GraspType,Location) is similar to place, but the target location must be of type object. Geometrically, sample poses for stacking are limited to one point centered on top of the supporting object, with different orientations.
4. reach(X,Manipulator) moves the base to which Manipulator is connected so that X becomes reachable by it. X can be a location or an object.
5. dock(Base,Manipulator) is the converse of reach: it moves Base so that Manipulator can reach it (used by the r2d2 robot).
6. move(Component) simply moves Component away from its current pose¹. This action is used if a component needs to be moved.

Fig. 9. The three types of spatial relations.

6.4. Spatial relations

Just as symbolic actions are the symbolic counterparts of geometric actions, spatial relations are the symbolic counterparts of spatial constraints (see Section 8). They are used in order for the ASP solver to reason upon the logical constraints fed back from the "Spatial relations layer". The general form of a spatial relation predicate is:

relation(X, Y, type, p1, …, pn, t)

where X and Y represent the two objects/robots/locations on which the relation applies. We define three types of spatial relations: grasp, placement, and dock, which are illustrated in Figure 9.

A grasp relation exists between an object and a Tool Center Point (TCP) whenever an object is picked, placed, or stacked.
It represents the fact that, at some point during the action, the TCP of the robot is necessarily within a volume centered at the object. A grasp relation does not hold any longer if:

• the object is not grasped and the TCP is moved; or
• the object is not grasped and the object is moved.

A placement relation exists between a robot/object/location o1 and an object o2 whenever o2 is placed or stacked. It represents the fact that, at the end of the action, o2 is necessarily located in a region centered around o1. A placement relation does not hold any longer if:

• o2 is not connected with o1 and o1 is moved; or
• o2 is not connected with o1 and o2 is moved.

A dock relation exists between a location/robot r1 and a robot r2 whenever r1 docks to r2 or when r2 reaches a location/robot r1. It represents the fact that, at the end of the action, r2 is necessarily located in a region centered at the location/robot r1. A dock relation does not hold any longer if:

• r1 is moved; or
• r2 is moved.

The implementation of these rules is provided in Appendix C (4). Note that spatial relations, just as spatial constraints, do not represent an actual state of the world. Their role is rather to express a necessary relation resulting from an action. For instance, the existence of a grasp relation does not mean that the object can be grasped; however, if the grasp relation cannot hold, then a fortiori the object cannot be grasped (nor placed, nor stacked). Therefore, when a spatial constraint is proven not to hold (by the geometric reasoner), the corresponding spatial relation can be used by the ASP solver for pruning the actions by which it is entailed.

Fig. 10. Entailment of spatial relations.

Spatial relations are entailed by actions as depicted in Figure 10. Spatial relations are used when the cause of failure is detected by the "Spatial relations layer" of the geometric reasoner. A logical constraint expressed with spatial relations is more powerful than a logical constraint expressed with actions, because (i) there is a "many-to-one" mapping between actions and relations, and (ii) the predicates of spatial relations sometimes have fewer parameters than the predicates of actions. For instance, imagine that the geometric reasoner computes the following constraint:

:- relation(r2d2, bottle, placement, z1, t)
   relation(r2d2, fabot, dock, t)
   relation(justin, bottle, grasp, top, t)

This means that Justin cannot pick/place the bottle in upright position (z1²) from/on r2d2 with a top-grasp, while r2d2 is docked to Fabot. This constraint is powerful because (i) it applies to both pick and place actions, (ii) it does not explicitly say how the placement relation is created (it could be any robot using any type of grasp), and (iii) it does not say how the dock relation is created (it could be r2d2 docking to Fabot or Fabot reaching r2d2). This type of constraint can rule out a large number of symbolic plans. Therefore, constraints expressed in terms of spatial relations achieve a stronger guidance of the task planner, compared to constraints expressed in terms of actions.

6.5. Generalized constraints with types

In many scenarios, it is necessary to manipulate several objects that are instances of the same type. One can reasonably assume that during the perception of the scene, it is possible to compare the shape of objects and assign them to different classes. In the present work, we manually assigned the type "block" to all objects:

block(block_a). block(block_b). block(block_c)...

The idea is to make some logical constraints more general by using typed variables instead of object instances. Consider for instance the geometric failure depicted in Figure 1. This failure can be described using spatial relations:

:- relation(p1, block_a, placement, z1, t)
   relation(block_a, block_b, placement, z1, t)
   relation(block_b, block_c, placement, z1, t)
   relation(block_c, block_d, placement, z1, t)
   relation(gripper, block_d, grasp, top, t)

Since all blocks have the same geometry and can afford the same grasps, this constraint is actually valid for any combination of block instances. Hence, we can use the following generalized constraint instead:

:- relation(p1, X1, placement, z1, t)
   relation(X1, X2, placement, z1, t)
   relation(X2, X3, placement, z1, t)
   relation(X3, X4, placement, z1, t)
   relation(gripper, X4, grasp, top, t)
   block(X1), block(X2), block(X3), block(X4)

Note that if an object instance has a feature that is not shared by all instances, this substitution is not allowed. For example, if an object instance is in its initial pose, it is unique with respect to reachability/graspability, and therefore cannot be substituted. This technique results in additional computational costs, because the ASP solver must ground the constraint with respect to all possible variable substitutions. Nevertheless, the planning performance is radically improved because generalized constraints have a stronger pruning effect.
Lagriffoul and Andres 901

6.6. Granularity of symbolic representations


An important issue for designing the symbolic domain is
how detailed the symbolic representations should be, e.g.
should the precise poses of objects be symbolically rep-
resented, should the modeling of grasping actions include
the movement of the base, or the opening/closing of the
Fig. 11. Examples of template transformations T side_z (a) and
gripper? A good point of comparison with our work is
T top_z (b) for the left gripper. Side grasp poses for instance, can
the work by Aker et al. (2012) which sometimes uses
be parametrized by pi , γi and uq , applying the translation pi , the
more detailed representations, and sometimes less detailed
rotation γi about uq (z in this example), and the transformation
ones. For object/robots poses, they gridize the geometric
T side_z to the gripper (c). ( u, v, w) represents the body-fixed frame
space and each cell is represented in the symbolic domain
attached to the gripper.
with a row-column scheme, while the choice of grasp is
entirely dealt with at the geometric level. Conversely in our
approach, object/robots pose are dealt with by the geomet- the pose of a body oi will be noted ( pi , T p , uq , γi ), where
ric reasoner and the type of grasp is decided at the symbolic pi =( xi , yi , zi ) ∈ R3 represents the translation of the ith body.
level. One may argue that this is solely an issue about dele- T p represents a transformation of the body-fixed frame in
gating more or less computational effort to the task planner the world frame, which we define as a template transforma-
or to the geometric reasoner, and that the overall cost is the tion. Template transformations represent natural positions
same. Next, we present some arguments for nuancing this of interest for objects and grippers, i.e. upright or upside-
statement. down for objects, and top, bottom, or side grasps for grip-
An obvious limitation of using detailed symbolic repre- pers. uq ∈ R3 is a unit vector which we define as reference
sentations is that the task planner is literally “drowned in axis, and γi ∈ R an angle of rotation about the axis uq .
details”, and therefore cannot efficiently reason about the T p belongs to a predefined set of transformations, and uq is
big picture, i.e. causal or temporal aspects of the problem. chosen among a predefined set of axes. Both are determined
Another limitation of this approach is that it prevents us by the geometric reasoner depending on symbolic informa-
from using specialized (algebraic, constraint-based) meth- tion. As an example, for sampling poses of an object to
ods for dealing more efficiently with the continuous aspects be placed in upright position on a table (see Figure 12),
of CTAMP, which the task planner is not designed for. the geometric reasoner selects the upright template trans-
On the other hand, our domain uses semi-detailed sym- formation of this object, and uq = z as template axis. The
bolic representations for some geometric aspects. For geometric reasoner computes the z parameter according to
instance, we represent the gross orientation of objects and the pose of the table and the height of the object. Then,
the type of grasp used for picking objects. This increases translations (x, y) and orientations (γ ) are sampled and the
the complexity at the symbolic level, but it also simplifies transformation matrix M of the object is then given by
the work of the geometric reasoner by excluding unfeasible  
actions from its search space (see Appendix C, 3 and 6). In Ruq ( γ ) p
M= T upright
specific cases, it can prove geometric tasks unfeasible only O 1
by means of causal reasoning, e.g. the robot Fabot cannot with
bring an object from upside-down to the upright position ⎡ ⎤ ⎡ ⎤
on its own. cos( γ ) −sin( γ ) 0 x
Ruq = Rz = ⎣ sin( γ ) cos( γ ) 0 ⎦ and p = ⎣ y ⎦
0 0 1 z
7. Geometric domain
The limitation of this representation is that all possi-
This section describes how object poses, actions, and states ble orientations cannot be represented, since finite sets of
are represented at the geometric level. It also explains how template transformations and reference axes are used. The
the continuous configuration space is discretized. advantage is that the orientations of the TCPs and objects
can be represented at the symbolic level by ignoring the
7.1. Hybrid pose representation intrinsic orientation γi , using instead the gross orienta-
tion x1, x2, y1, y2, z1 or z2 (see Section 6.2). As
In order to have a symbolic representation of the orienta- explained in Section 6.6, symbolic reasoning about orien-
tions of objects/TCPs, we use a hybrid discrete-continuous tations presents some advantages. In the next sections, the
scheme to represent the pose of a body. We use bold low- pose of a body is simply noted as ( pi , γi ) for clarity.
ercase letters to denote a column vector, e.g. p, and bold
capital letters to denote matrices, e.g. T. All coordinates
are expressed in the world frame. The pose of a body is
7.2. Representing actions and states
obtained by applying a rotation, a translation and a tem- Let A1 , . . . , An  be a sequence of symbolic actions. We
plate transformation to the body (see Figure 11). Hence, denote by sj the geometric state resulting from applying
902 The International Journal of Robotics Research 35(8)

Fig. 13. Reach and pick example with Fabot and a cup,
top view. Crosses represent possible locations for the mobile
Fig. 12. The discretization schemes for different actions. base for different values of γ1 . The orientation of the base is
determined by γ2 .
the symbolic action Aj on the previous geometric state. We
consider m rigid bodies. The ith object is denoted by oi , When dealing with robot manipulation tasks, an impor-
i ∈ {1, . . . , m}. The position of object oi in state sj (i.e. after tant issue is to sample transition configurations at the inter-
(j)
action Aj has been completed) is denoted by pi , and its section of different sub-spaces, e.g. the space of a mobile
(j)
orientation, by γi robot moving its base towards an object, and the space of its
 manipulator grasping the object. This problem is addressed
(j−1) (j−1) Aj  (j) (j)
pi , γi −
→ pi , γi in the multi-modal planning literature Hauser and Latombe
(2010), the idea is to use intelligent strategies for sampling
Finally, we define a geometric state, or configuration as the transition configurations between the different modes of the
set of values representing the poses of all objects, mobile system. A similar but simpler approach is used here, the dif-
bases, and TCPs ference being that transitions between different modes are
decided at the symbolic level. Next, we illustrate through an
c = {p1 , γ1 , . . . , pm , γm , example the simple domain dependent strategies used for a
pbase1 , γbase1 , pbase2 , γbase2 , . . . “reach and pick” task, which involves two distinct modes.
ptcp1 , γtcp1 , ptcp2 , γtcp2 , . . . In this example (see Figure 13), the mobile manipulator
Fabot is to pick a cup which is out of reach. Unlike multi-
q1 , q2 , . . . }
modal planning, the problem is not defined by an initial and
where qi represents the configuration chosen for the ith a goal configuration, but rather by an initial configuration,
robotic manipulator to place the gripper at ( ptcpi , γtcpi ). In an initial symbolic state, and a symbolic plan (computed by
addition to the geometric state, we also need to keep track of the ASP solver). In the initial symbolic state, the cup is not
which objects are attached to which ones, in order to predict reachable by Fabot (symbolically), while the action pick
how the state will change when robots are actuated. only applies to reachable objects. The action reach makes
At the geometric level, a symbolic action Aj can be per- an object reachable by a robot. Hence, the ASP solver
formed in various ways, e.g. a pick action can be performed computes a symbolic plan consisting of a reach action fol-
with different orientations of the TCP, a place action can lowed by a pick action. These actions are discretized (see
result in different positions/orientations of the object, and a Figure 12 and Table 1) as follows;
dock/reach action can result in different positions/orienta-
1. The reach action is discretized into 40 poses
tions for the mobile robot (see Figure 12). We denote one
parametrized by two angular values γ1 and γ2 . The
geometric instantiation of a symbolic action Aj by k aj , k ∈
poses are distributed on a circle3 centered around the
{1, . . . , r}, where r, the resolution, depends on the type of
reached object, with radius R depending on the type
action and the resolution used for discretization. k is later
of robot performing the action (see Figure 13). R was
referred to as the action index.
empirically determined such that the gripper affords
a wide range of approach directions, while the base
7.3. Domain dependent discretization remains far enough from the object to minimize the risk
Discretization of grasps and placements is a limitation of of collision with a potential supporting object.
this approach, but we emphasize the fact that discretization 2. The pick action is discretized into 16 grasp frames,
only concerns the resulting configuration of each action. In which are pre-computed for all possible gripper-object
other words, the final motion plan consists of discretized pairs. These grasp frames are such that the gripper does
configurations (one for each action) which are connected not collide with a potential flat surface under the object.
to each other by calling a bi-directional RRT algorithm For objects with axial symmetry, the grasp frames are
(LaValle, 2006) working in the continuous domain. obtained by incremental rotations of a template grasp
Lagriffoul and Andres 903

Table 1. Parametrization and typical resolutions used for discretization. The “index” column refers to a list of pre-computed grasp
frames.
Action x y z γ1 γ2 Index Total

Pick – – – – – 16 16
Stack 1 1 1 16 – – 16
Place 7 7 1 16 – – 784
Dock / Reach – – – 8 5 – 40

frame. For other objects (e.g. a tray), the grasp frames


are manually created for different graspable areas of
the object. The grasp frame is then applied to the grip-
per, and inverse kinematic (IK) solutions are computed
for the manipulator. The first collision-free solution is
selected.

The problem boils down to finding a resulting configu-


ration for the reach action, from which a feasible grasp
can be performed. With the typical resolutions shown in
Table 1, this may require 40 × 16, namely 640 configu-
rations to check in the worst case. This is done by the
GeometricBacktracking() algorithm (see Algorithm 1, Sec- Fig. 14. Representation of the spatial constraints as bounding
tion 9.1), which proceeds in a depth-first search manner. In boxes.
a nutshell, each action is computed “backwards”, i.e. (i) the
resulting configuration is computed first, (ii) collisions are
checked, (iii) a path from the current configuration to the is not a culprit detection technique, but a way of selecting
resulting configuration is computed. In case of success, the subsequences of actions from the symbolic plan.
next action is processed, otherwise another resulting config-
uration is tried. If none of the configurations work, the algo-
rithm backtracks to the previous action. Configurations are
8. Culprit detection with spatial constraints
chosen according to van der Corput sequences (Kuipers and This section describes the first test performed by the geo-
Niederreiter, 1974), which guarantee a uniform distribution metric reasoner: the “consistency check” (see Figure 5,
of the samples. layer (1)). The problem is relaxed by representing the poses
This approach for multi-step motion planning is incom- of objects/robots by a set of bounding boxes. A network
plete because of the naive discretization and because the of linear constraints is built, from which inconsistencies
modes are enforced by the symbolic level, e.g. a robot can- are detected using linear programming techniques. These
not fold/unfold its manipulator for getting through a narrow inconsistencies reveal violations of kinematic constraints or
corridor if the implementation of the reach action does not reachability problems. It is crucial that the bounding boxes
allow for it. We do not use sophisticated sampling strategies always cover a larger space than the space actually occupied
for sampling grasp configurations, e.g. using loop-closure by objects/robots, in order to guarantee that only unfeasible
constraints (Cortés and Siméon, 2004), but simply select geometric states are rejected by the constraints.
among the set of discretized grasps, those which accept an
IK solution. However, although the focus of this work is
not multi-modal planning, tasks requiring complex object
8.1. Building the linear constraint network
manipulation could be solved, as shown in the experimen- The spatial constraints are the geometric counterparts of the
tal evaluation (Section 11). We also refer the reader to the spatial relations introduced in Section 6.4. Hence, we also
GeRT (Generalizing Robot manipulation Tasks) project for define three types of spatial constraints: grasp, placement,
more details about the application of these techniques on and dock (see Figure 9). These constraints are automatically
the real robotic platform Justin (Ott et al., 2006) and provide generated from the symbolic plan. One can see them as a
video links for concrete illustration4 . set of bounding boxes, which encompass all the possible
In the next sections, we describe the techniques used poses in which each object/base/TCP can be. For instance,
for culprit detection in the different layers of the geomet- a placement constraint can be visualized as a polyhedral
ric reasoner (see Figure 5): spatial constraints in Section region encompassing the location in which the center point
8, and unavoidable collisions in Section 10. Then, Sec- of the object has to be after the corresponding place action
tion 9 focuses on geometric dependencies chains, which has been executed. The placement constraint on the pose of
904 The International Journal of Robotics Research 35(8)

imposes the new variable r2d2(1) to be within a bounding


box centered around left_base(0) . left_base(0) is a variable
representing the pose of the first link of the left arm of
Justin. A placement constraint P (1) is created between r2d2
and block B, since the block is connected to the robot (this
is known from the symbolic state). After the pick action
(2)
A2 , a new variable tcpright is created since the right TCP is
Fig. 15. Polyhedral region for the TCP relative to the object moved. A grasp constraint is added to the network, that con-
during a grasp action. strains the right TCP to be within a bounding box located
above block A (see Figure 14). The exact size and position
an object oi with respect to a fixed location at step j can thus of this bounding box is determined using the predefined
be written as a linear inequality grasp frames of this object class. Finally, after the stack
action A3 , the right TCP and block A are moved, hence new
(j)
a(j) ≤ pi ≤ b(j) (2) variables are created for both. Two constraints are created:
a grasp constraint G (3) between the TCP and the object, and
with a(j) =( locxmin , locymin , loczmin ) a placement constraint P (3) between block B and block A.
and b(j) =( locxmax , locymax , loczmax ) The kinematic constraints for manipulators (K(2) and K(3) )
can be modeled as a box centered on the first joint of the
where a(j) and b(j) define a bounding box around the loca- manipulator with dimensions depending on the length of
tion in the world frame. In most cases, the constraints are the manipulator, although a better approximation is possible
between two objects that can move, e.g. a grasp constraint (Lagriffoul et al., 2012). The resulting constraint network is
between an object at pose pk and a TCP at pose pi at step j shown in Figure 16.
can be written as We define the vector of the variables representing the
(j) (j) (j) poses of all objects/bases/TCPs in the problem
pk + c(j) ≤ pi ≤ pk + d (j) (3)
x =( x1 , x2 , . . . , xN )
with c(j) =( −, −, δ)
and d (j) =( , , δ) The bounding boxes are represented by a set of intervals that
define an upper bound and a lower bound on these variables,
where  and δ are some parameters that can be extracted
which we call the domain D of the problem
from the grasp frame (see Figure 15). Note that depending
on the type of constraint, the bounding box is not neces- D = [x1 , x1 ], [x2 , x2 ], . . . , [xN , xN ]
sarily centered around the object. The poses of unmovable
objects and the initial poses of movable objects are modeled The set of all linear constraints of the problem
as variables subject to unary equality constraints, e.g.
C = {G (j) , P (j) , D(j) , K(j) }, j ∈ {1, . . . , n}
(0)
pi = pinit (4)
can be expressed as
where pinit is a constant.
Dx ≤ e (5)
The linear constraint network is initialized with the ini-
tial poses of objects/bases/TCPs, and built by iterating over where D and e aggregate all the spatial constraints of the
the actions of the symbolic plan. For each action, one or problem (see expressions (2), (3) and (4)).
several constraints are added. A new set of variables is cre-
ated for each object/base/TCP that is moved. From now on,
we use the term “variable” to denote the translation p of an 8.2. Culprit detection in the linear program
object, which actually consists of three variables ( x, y, z). Identifying a culprit subset of constraints in a constraint
Let us describe this process with an example. Consider for network is a difficult problem in general. In the case of
instance the symbolic plan: linear programming, there exists efficient methods (imple-
mented in most solvers) to compute a so called Irreducible
A1 : dock (r2d2, left_base)
Infeasible Set (IIS). An IIS is an infeasible subset of con-
A2 : pick (right, top, block_a, table)
straints, from which removing one constraint makes the
A3 : stack (right, top, block_b, z1, block_a)
unfeasible problem feasible. IISs are useful for diagnosing
In the initial state s0 , the bounding box of each variable is a a potential cause of infeasibility in simple cases, but often a
point corresponding to the initial pose. After the dock action problem has many IISs (potentially an exponential number)
A1 , r2d2 is moved, and so is block B, which is placed on and finding the actual cause of failure requires a tedious
(1)
r2d25 , hence two variables r2d2(1) and blockB are created. find-and-repair process. IISs are essentially a tool for find-
A dock constraint is posted to the constraint network, which ing modeling errors, which is not useful here. Imagine for
Lagriffoul and Andres 905

c0 : p0 = pinit
c1 : p0 − c (1)
≤ p1 ≤ p0 + d (1)
c2 : p1 − c(2) ≤ p2 ≤ p1 + d (2)
...
ci−1 : pi−2 − c(i−1) ≤ pi−1 ≤ pi−2 + d (i−1)

Fig. 17. Linear inequalities resulting from splitting a line-


constraint network in two parts. pinit , c(1) , d (1) , . . . , c(i−1) , d (i−1)
are constants.

Fig. 16. The spatial constraints graph for the plan A1 , A2 , A3  when the placement constraint P (3) is posted. Then, one can
from Section 8.1. The arrows represent the constraints created trace back the following candidate culprit set
for each action: grasp constraints (G), placement constraints (P),
dock constraints (D), and kinematic constraints (K). The variables S = {K(3) , G (3) , P (3) , P (1) , D(1) }
in the figure represent the three components of the translation x, y
and z. The nodes indexed by 0 are constants. Tracing the candidate culprit set is a preliminary step before
finding the culprit set, which eliminates irrelevant con-
straints (K(2) and G (2) in this example). In order to prove
instance that the problem illustrated in Figure 14 gives rise
that the candidate culprit set S is the culprit set, we have to
to an inconsistency, i.e. the right TCP cannot reach r2d2 in
prove that there is no smaller subset of S causing inconsis-
order to stack block A on block B. In this case, an IIS would
tency. This may require many consistency checks, because
tell us that the inconsistency could be removed if the dock-
it requires checking all the subsets in the power set of S.
ing area was larger, or if the right arm of Justin was longer,
Another approach consists in proving that all the subsets
etc. These constraints are not modeling errors, but the real
of cardinality n − 1 are consistent. This proves that S is
constraints of the problem. Rather, what is useful here is
the smallest inconsistent set, hence the culprit set. In other
to determine the culprit set of constraints which causes
words, it is only needed to show that removing any one of
inconsistency. The solution to this problem is to compute
the constraints in S removes the inconsistency. We propose
a set of constraints which contains at least one constraint
a simple way to proceed in case the candidate culprit set is
from each IIS in the model (Chinneck, 1996). This prob-
a line-network, i.e. a network of which the topology is a tree
lem, referred to as the IIS set covering problem, is known
with branching factor equal to 1.
to be NP-hard (Chakravarti, 1994). This problem is related
to computing the maximum cardinality feasible subsystem Proposition 1: In an inconsistent line-network with con-
(see, e.g. Parker and Ryan (1996)). In the present work, straints of type bounding box, removing any one of the
we take advantage of the specific structure of the constraint constraints removes the inconsistency.
network to devise a simpler technique. Proof: Let s = {c0 , . . . , cn } be a line constraint network.
Removing a constraint ci from s always results in two line-
8.3. Identifying the culprit set in a line-network networks {c0 , . . . , ci−1 } and {ci+1 , . . . , cn }, where c0 and cn
are unary equality constraints of type (4) (because they
While constraints are added to the network, a hypergraph of
correspond to the initial pose of an object), and the con-
the constraint network is built, in which the edges represent
straints c1 , . . . , ci−1 , ci+1 , . . . , cn−1 are binary constraints of
constraints of type bounding box, and the nodes contain the
type “bounding box” (see equation (2) or equation (3)). Fig-
variables of the pose of an object at a certain time step (see
ure 17 represents the linear inequalities composing such a
Figure 16). We call this graph the spatial constraints graph.
line-network
Each time a constraint C is posted, a consistency check is
performed. Therefore, when an inconsistency is detected, The two subsets of constraints are trivially feasible because
we know that C belongs to the set of culprit constraints. it is always possible to recursively construct a solution
Then, the graph is used for tracing back all the constraints ( p0 , p1 , . . . , pi−1 ) (see Figure 18)
in relation with C, until a unary equality constraint (4) is
reached. When such a constraint is reached, there is no 1 (k)
with pk = ( d − c(k) ) +pk−1 , k = 1, . . . , i − 1 (6)
need to continue the process, since the associated variable 2
is constant, hence its value cannot be affected by other con-

straints. We call the constraint network resulting from this
process the candidate culprit set. Let us consider the exam- Therefore, if the candidate culprit set is a line-network, then
ple in Figure 14. Imagine that an inconsistency is detected it is necessarily the culprit set.
906 The International Journal of Robotics Research 35(8)

Fig. 18. Example of a trivial solution with real intervals instead


of bounding boxes: p1 = 12 ( 3 − 1) +0 = 1, p2 = 12 ( 1 − 2) +1 =
1 , p = 1 ( 2 − 1) + 1 = 1. But if a unary constraint is imposed on
2 3 2 2
p3 , the system may be inconsistent, for example with p3 = x, x <
−10 or x > 10.
.

With the culprit set, one can automatically generate a


logical constraint for the task planner, using the mapping
between geometric constraints and spatial relations:
Fig. 19. The spatial constraints graph for the sequence of actions
G (3) → relation(right, blockA, grasp, top, 3)
A1 , A2 , A22 , A3 , A32 .
P (3) → relation(blockB, blockA, placement, z1, 3)
P (1) → relation(r2d2, blockB, placement, z1, 1)
D(1) → relation(r2d2, left base, dock, 1)
and the mapping between variables and symbolic objects:
right_base(0) → right_base
left_base(0) → left_base
The logical constraint is constructed as a conjunction of
terms, which simply are the spatial relations associated to
the geometric constraints. Besides, using Proposition 1,
we observe that the inconsistency in the culprit set can be
removed if either c0 or cn are removed. This means that the Fig. 20. The simplified spatial constraint graph after tracing back
inconsistency can be avoided if one of the objects that are in and simplification.
their initial pose was moved. This information is included
into the constraint as well (in the last two lines). Finally, the
certain extent, hence they prune out entire families of plans
following expression is generated:
regardless of the specific ordering of actions.
:- relation(right, blockA, grasp, top, t)
relation(blockB, blockA, placement, z1, t)
relation(r2d2, blockB, placement, z1, t) 8.4. Identifying the culprit set in a tree
relation(r2d2, left_base, dock, t) In more complicated cases, the candidate culprit set is not a
not 1{moved(left_base, 1..t-1); moved(right_base, line-network, but a tree6 . In this case, there may be several
1..t-1)} possible culprit sets. Consider for instance the following
In natural language, this means that the ASP solver cannot sequence of actions:
return a plan in which r2d2 is docked at the left base, while
A1 : dock (r2d2, left_base)
B is placed on r2d2 and A is placed on B, while the right
A2 : pick (right, top, block_a, table)
TCP is grasping B, unless the left base or the right base is
A22 : pick (left, top, block_c, table)
moved in a previous time step.
A32 : place (left, top, block_c, z1, r2d2)
Note that in this constraint, the relations are indexed by
A3 : stack (right, top, block_b, z1, block_a)
the same variable t, whereas the corresponding constraints
have been posted in the network at different time steps. This This is the same plan as in the previous example, with the
is not wrong, since the spatial relations persist at the sym- addition that in parallel with actions A2 and A3 , Justin picks
bolic level until they are destroyed according to the rules and places another block C from the table onto r2d2 with the
defined in Section 6.4. Therefore, this constraint prevents left arm (actions A22 and A32 ). The corresponding constraint
the task planner from returning the plan A1 , A2 , A3 , as network is given in Figure 19. If the constraints associated
well as all the plans that cause these relations to be true to A3 are posted before the constraints associated to A32 ,
at the same time. This is another reason why spatial con- then an inconsistency is detected and we can trace back a
straints are effective: they make abstraction of time to a set of constraints which is a line-network, as in the previous
Lagriffoul and Andres 907

example. Otherwise, the following candidate culprit set is (Boyd and Vandenberghe, 2004). The second reason is that
traced back we maintain a graph representing the constraint network,
and take advantage of its structure to trace back a candidate
S  = {K(3) , G (3) , P (3) , P (1) , D(1) , P (3) , G (3) , K(3) } culprit set. If the candidate culprit set is a line-network, it is
the culprit set itself. If it is a tree, the culprit set can be found
Figure 20 represents a simplification of the candidate cul-
with a simple procedure with linear worst-case complexity.
prit set. Note that the variable left_base(0) can be seen as
two different leave nodes because it is a constant. In this
example, the culprit subset is obviously {C1 , C3 } because the
actions A22 and A32 do not resolve the kinematic problem of 9. Culprit detection in geometric
the right manipulator being unable to reach r2d2. But auto- dependencies chains
matically finding the culprit set is not easy in the general This layer of the geometric reasoner also works on a relax-
case. It requires finding the smallest inconsistent subset, ation of the problem, but unlike the “Spatial relations” layer,
i.e. performing consistency checks on the power set of the the actions are evaluated with their exact kinematic con-
set of constraints, considering subsets of increasing size. In straints, and a search in the space of grasps and placements
this way, the inconsistent set with the smallest possible car- (geometric backtracking) is performed. The relaxation con-
dinality can be found. We refer to this set as the optimal sists of (i) isolating subsequences of actions in the symbolic
culprit set, but there may be other culprit sets, which we plan, (ii) not performing motion planning (only the final
refer to as minimal culprit sets. These sets are minimal in configurations resulting from actions are considered, i.e.
the sense that removing any constraint from them removes grasp/release positions for pick/place actions, or final pose
the inconsistency, but their cardinality is not minimal. of the robot for dock/reach actions). First, we introduce the
Finding a culprit set in a tree constraint network is com- geometric backtracking process.
putationally expensive, but in all the scenarios addressed in
this paper, the structure of the problem allows us to use a
simpler technique. The reason is that in the problems we
9.1. Geometric backtracking
address, the culprit set is always a line-network. Indeed, the
topology of the constraint network maps to the kinematic Geometric backtracking is a search process which allows
relations between robots/objects at the time of inconsis- us, when an action fails, to reconsider the choices made at
tency. This means that a culprit constraint network consist- the geometric level for previous actions (Bidot et al., 2015;
ing of three or more branches, would result from a situation Karlsson et al., 2012). In the present work, we reconsider
in which three or more robots are simultaneously interact- the choices made for grasps and placements. It is also possi-
ing with the same object, which never occurs, because it is ble to reconsider the choices of inverse kinematic solutions
not allowed by the symbolic domain. for manipulators, but we found that few problems need this
Using the assumption that the culprit set is a line- feature to be solved. Algorithm 1 implements geometric
network, we can use Proposition 1 and iterate over all the backtracking in a systematic depth-first search manner, but
constraints in order to test if they belong to the culprit sub- it can be implemented in different ways, e.g. by combining
set or not, hence isolating the culprit set from the candidate several probabilistic roadmaps (Cambon et al., 2009).
culprit set. We use the following procedure The function is initially called with the initial configura-
tion (see Section 7.2) which gives a geometric description
Given a candidate culprit set S = {C1 , . . . , Cn } with bound-
of the initial scene, a symbolic plan (from the ASP solver),
ing box constraints, for each Ci ∈ S:
and an empty list sol. At each call, the function takes the
if S \ {Ci } remains inconsistent, then S ← S \ {Ci }
first action A in the sequence S, and keeps the remaining
which requires n consistency checks. The constraint net- list T for the recursive call (line 9). The resolution r of an
work resulting from this process is a minimal culprit set. action is the number of ways a symbolic action can be geo-
Once a minimal culprit set is found, a logical constraint metrically instantiated. Then, the function iterates over all
is automatically generated as previously explained, and the possible action indexes k (see Section 7.2) for the action
returned to the ASP solver. Note that there may exist several A. The function resultConfig( ) (line 5) returns the configu-
culprit sets depending on the order in which the constraints ration resulting from applying the geometric action k a on
are tested. It is possible to enumerate all of them and send the configuration c, or null if the action is unfeasible. If the
them in bulk to the ASP solver, or simply return the first action is feasible, the temporary solution sol is appended
one and detect the other ones during the next iterations. with the current action index (line 7). In case some actions
In summary, detecting the culprit spatial relations is remain to be evaluated (T = ∅), the function is recursively
achieved by detecting inconsistencies in a constraint net- called with the new configuration and the list of remaining
work representing the spatial constraints of the problem. actions. In case of failure, the next action index k is tried.
This is done efficiently for two reasons. First, inconsisten- If all of them fail (line 14), the null value is returned to
cies are detected using linear programming, for which effi- the calling function through line 9, and the calling func-
cient methods with polynomial worst-case complexity exist tion tries the next action index. If the last action is reached
908 The International Journal of Robotics Research 35(8)

Algorithm 1: GeometricBacktracking
Function GeometricBacktracking( c, S, sol)
input : c: a configuration
S: a sequence of symbolic actions
sol: a list of action indexes
1 A = S.head( )
2 T = S.tail( )
3 r = A.resolution Fig. 21. Illustration of a part of the symbolic plan A1 , . . . , An .
4 for k ← 1 . . . r do
5 c = resultConfig(k a, c)
6 if c = null then
We illustrate these through an example. Consider for
7 sol ← sol ∪ k
instance the following subsequence of symbolic actions,
8 if T = ∅ then illustrated in Figure 21. We assume that the plan contains
9 temp = GeometricBacktracking( c , T , sol ) some other actions performed by other robots on other
10 if temp = null then objects, and that block A was not manipulated prior to
11 return temp action Ac5 :
...
12 else
Ac1 : pick (r2, side, bottle, cellar)
13 return sol
...
Ac2 : reach (r2, table)
14 return null Ac3 : place (r2, bottle, z1, table)
...
Ac4 : reach (r1, table)
(T = 0, line 13), the solution is returned to the initial call- Ac5 : pick (r1, side, block_a, table)
ing function through line 11 in the form of a list of action ...
indexes, which indicates which grasp/placement to use for The first type of geometric dependency (A) exists between
each symbolic action. actions Ac2 and Ac3 , or Ac4 and Ac5 . The geometric reachable
GeometricBacktracking() is a depth-first search algo- set for the actions Ac3 and Ac5 is affected by the geomet-
rithm with no heuristic to guide the search. Although some ric instance chosen for the actions Ac2 and Ac4 , because
work has been initiated in this direction (Bidot et al., 2015; the poses that the TCP can reach depend on the pose of
Lagriffoul et al., 2012), geometric backtracking remains the base relative to the table. The second type of geomet-
a difficult problem because of the large branching factor ric dependency (B) exists between actions Ac1 and Ac3 ,
and because geometric computations such as motion plan- because the set of poses in which the bottle can be placed
ning are not computationally reducible. In practice, Geo- on the table depends on how the bottle has been grasped,
metricBacktracking() cannot complete in reasonable time if even if the action Ac2 is performed in the same way. This
the depth exceeds 4-5 actions. Therefore, a cutoff time has depends on the connection between the gripper and the
to be used. However, if the problem contains few geometric bottle. The third type of geometric dependency (C) exists
dependencies, then less geometric backtracking is needed, between actions Ac3 and Ac5 because placing the bottle on
and it is possible to find a solution for a symbolic plan con- the table may cause collisions between the bottle and r1,
taining dozens of actions. Next, we define different types which may change the geometric reachable set of the action
of geometric dependencies and the concept of geometri- Ac5 . We denote a geometric dependency of type T between
cally ground sequence of actions, which is used to isolate T
Ai and Aj by: Ai  Aj .
subsequences of actions to be separately evaluated. According to definition 2, geometric dependencies of
types A and B are direct geometric dependencies, since the
9.2. Geometric dependencies geometric reachable set is only affected because of kine-
matic issues. Note also that in a direct geometric depen-
This section refers to the notion of Geometric Reachable dir.
dency Ai  Aj , Aj cannot be geometrically instantiated if Ai
Set and Geometric dependency between two actions, which is not geometrically instantiated. Consider for example the
are formally defined in Appendix B. Next, we will consider B
three types of geometric dependencies in particular: relation Ac1  Ac3 : without a geometric instance for Ac1 ,
the position of the bottle within the gripper is unknown,
(A) dependencies based on reachability; hence no geometric instance can be defined for Ac3 . Sim-
A
(B) dependencies based on body connection; ilarly with Ac4  Ac5 , the geometric instantiation of the
(C) dependencies based on collisions. pick action requires the pose of the robot to be defined.
Lagriffoul and Andres 909

Some actions require several actions in order to be instanti-


dir.
ated, then we denote it by {Ai1 , . . . , Aik }  Aj . In contrast,
when two actions have a non-direct geometric dependency,
the second action may be instantiated even if the first action
C
is not. Consider Ac3  Ac5 for instance: Ac5 can be geo-
metrically instantiated without the pose of the bottle on the Fig. 22. A manipulation problem with two robots. The goal state
table being defined. is to have both blocks stacked on table1 (dashed line).
We say that an action is independent (with respect to
the plan containing it) if it has no direct dependency with
other actions. In our example, Ac2 and Ac4 are indepen- Therefore, in order to detect culprit subsequences, one
dent because they only depend on the position of the table, must first find the ground subsequences, i.e. identifying
which cannot be moved (although they may have non-direct the direct geometric dependencies that exist between the
dependencies with other actions, because of collisions). actions of the plan. Consider the actions of the symbolic
Definition 3 Geometrically ground subsequence of actions domain and their parameters (we limit ourselves to pick,
Let P = A1 , . . . , An  be a sequence of actions, and Q = place, and reach for clarity):
Ai1 , . . . , Aiq  a subsequence extracted from P, i.e. 1 ≤ i1 < pick (Robot, Grasp_type, Object, Location)
· · · < iq ≤ n. The subsequence Q is geometrically ground place (Robot, Object, Axis, Location)
iff ∀Aj ∈ Q, reach (Robot, Location)
Aj is independent or The direct dependencies can be determined by mapping the
dir.
∀Ap ∈ P such that Ap  Aj , we have Ap ∈ Q. parameters of actions in the symbolic plan with domain
knowledge provided by the user about how actions affect
In other words, a ground sequence of actions is “self-
object reachability and body connection (see Tables 2 and
contained”, and can be geometrically instantiated even if
3). Using this information, the graph of direct geometric
it is a subsequence of actions extracted from a larger
dependencies Gdir can be automatically constructed.
sequence. Examples of ground subsequences of action are
We illustrate the construction of this graph with an exam-
given in the next subsection. For the sake of brevity, we
ple. Consider the problem illustrated in Figure 22. The goal
simply use the term “ground” in the remainder of the article.
is to have block B stacked on block A on table1. A possible
symbolic solution plan for this problem could start with the
following action sequence:
9.3. Finding culprit subsequences
A1 : reach (r1, table2)
In order to find a culprit subsequence of actions in a sym-
A2 : reach (r2, table3)
bolic plan A1 , . . . , An , one needs to find a minimal sub-
A3 : pick (r2, top, block_b, block_a)
sequence of actions Ac1 , . . . , Acq  which is unfeasible. In
A4 : reach (r2, table2)
order to prove that a subsequence is not feasible, we deac-
A5 : place (r2, block_b, z1, table2)
tivate collision detection for objects which are not manip-
A6 : pick (r1, top, block_b, table2)
ulated in the subsequence, and use the algorithm Geo-
A7 : reach (r2, table3)
metricBacktracking(), which searches a possible geometric
A8 : reach (r1, table1)
instantiation of that subsequence. If it returns false, then
A9 : pick (r2, top, block_a, table3)
the subsequence would a fortiori be unfeasible if executed
A10 : reach (r2, table2)
within the original plan, and therefore is a culprit one.
A11 : place (r1, block_b, z1, table1)
The difficulty is that one must work with the power set
...
of {A1 , . . . , An }, i.e. checking all the subsequences with one
action, two actions, three actions, etc. However, the basic The first step in constructing Gdir is to determine for each
problem is not about the combinatorial, but rather about action, using Table 3 and the parameters of the symbolic
finding subsequences of actions which are ground, because actions, which objects are moved, and which connections
a subsequence which is not ground cannot be geometrically between bodies are created (see Figure 23). Then, for each
instantiated in the first place. Consider again the problem action Aj , using the table of dependencies (Table 2), the pre-
in Figure 21 for example. The plan fails because wherever vious actions are listed for finding the last action(s) that
the bottle is placed on the table, the bottle prevents r1 to changed what Aj depends on (arrows in Figure 23). Note
grasp block A. Intuitively, the culprit subsequence seems to that this process runs in time quadratic in the number of
be Ac3 , Ac5 . In order to prove it, one needs to show that actions of the plan.
all the possible combinations of the geometric actions k ac3 As an example, the action A5 place(r2, block_b, z1,

and k ac5 are unfeasible. The problem is that Ac3 , Ac5  is table2) depends on action A4 because A4 moved the robot’s
not ground because it cannot be geometrically instantiated base (type (A) dependency) and on action A3 because A3
without the actions Ac1 , Ac2 and Ac4 . created a connection between the TCP and the object (type
910 The International Journal of Robotics Research 35(8)

A7 , A9 , A10 ,
A1 , A2 , A3 , A4 , A5 , A6 , A8 , A11 }
As an example, the subsequence A7 , A9  is ground because
it can be instantiated regardless of other actions (it depends
on the pose of table3 which is unmovable, and it depends
on the pose of block_a which has not been moved yet). In
contrast, the subsequence A8 , A11  is not ground because
action A11 requires the prior grasp of block_b.
In practice, since geometric backtracking is computation-
ally expensive, only ground subsequences containing up to
four actions are evaluated. It is possible to construct more
(and longer) ground subsequences by combining the ele-
ments of Sground with each other, but this is not done for the
same reason. Culprit subsequences are detected by running
the algorithm GeometricBacktracking() on each ground
subsequence in Sground . Let us denote by Ag1 , . . . , Agm 
a ground subsequence. In case of failure, the depth d
at which the failure occurs is recorded. The subsequence
Ag1 , . . . , Agd  is therefore a culprit subsequence.
During the evaluation of a subsequence, different types of
collision checks are performed (see Section 10) in order to
identify the objects that always collide. If the subsequence
is unfeasible, a constraint is returned to the task planner, for
example:
:- action1 (param11, param12, t1),
action2 (param21, param22, t2),
action3 (param31, param32, param33, t3),
...
Fig. 23. Graph of direct geometric dependencies. The column on t1<t2, t2<t3, ...,
the right indicates the objects/bodies that are moved because of not moved(colliding_object1, 1..t2-1); not
their connection with a mobile base. moved(colliding_object2, 1..t3-1); ...}.
The culprit subsequence is defined by a partial order on the
Table 2. Actions and their associated dependencies. actions. This representation is general, hence a large num-
Action Depends on pose of Depends on connection of ber of symbolic plans can be ruled out by this constraint.
However, this type of constraint is weaker than the con-
pick Robot_base, Object Location and Object straints returned by the “Spatial relations” layer, in which
place / stack Robot_base, Location TCP and Object no order on the actions is imposed.
reach / dock Location ∅
In this section, we have described the second layer of the
geometric reasoner, which detects culprit subsequences of
Table 3. Actions and their effects on pose and connection. actions within a symbolic plan. It uses the direct geomet-
ric dependencies between actions in order to extract ground
Action Changes the pose of Creates a connection between
subsequences which can be tested independently. In the next
pick TCP TCP and Object section, we explain in more details the different types of col-
place / stack TCP, Object Location and Object lision checks that are performed in both the first and second
reach / dock Robot_base ∅ layers.

(B) dependency). A5 also depends on the pose of the loca- 10. Different types of collisions checks
tion table2, but in this plan, table2 was not moved in the pre-
Unavoidable collisions checks are performed in layers (1)
vious actions, hence no dependency is represented. Then, a
and (2) of the geometric reasoner (see Figure 5). Unavoid-
set of ground subsequences Sground can be built by tracing
able collisions are a common cause of infeasibility in
back the direct dependencies from each action:
CTAMP problems: when all the geometric instantiations
Sground = { k
aj of an action Aj in the plan result in a collision. We
A1 , A2 , A2 , A3 , A4 , A2 , A3 , A4 , A5 , detect three types of unavoidable collisions, which return
A1 , A2 , A3 , A4 , A5 , A6 , A7 , A8 , constraints of different strength:
Lagriffoul and Andres 911

• strong unavoidable collisions: collisions with fixed Algorithm 2: DetectStrongUnavoidableCollisions


obstacles only (see Figure 1);
• weak unavoidable collisions: collisions with objects that Function DetectStrongUnavoidableCollisions( D, oi , j)
have not yet been moved in previous time steps; input : D: a domain
• collision with unavoidable volumes, i.e. regions of oi : an object/base/TCP
space which are necessarily occupied at some time step. j: a step in the symbolic plan
1 deactivate all movable objects but oi
These three types of collision detection checks are per-
2 bbox = {[xoi (j) , xoi (j) ], [yoi (j) , yoi (j) ], [zoi (j) , zoi (j) ]} ⊂ D
formed both in the “Spatial relations” layer (1) and in the
“Geometric dependencies chains” layer (2) (see Figure 5). 3 unavoidable = ∅
In both cases, the same types of collision checks are per- 4 forall the ( x, y, z) ∈ bbox, γ ∈ [0, 2π ] do
formed, but the way the geometric configurations are sam-
5 setPose(oi , x, y, z, γ )
pled is different. In this section, we describe how this is 6 collisions = collide()
done in the “Spatial relations” layer, i.e. using bounding
7 if collisions = ∅ then
boxes.
8 return false
9 else
10.1. Strong unavoidable collisions 10 unavoidable ← unavoidable ∪ collisions
When this test is performed, a set of bounding boxes (D)
11 return unavoidable
has been computed for each object/base/TCP at different
time steps (see Section 8). Algorithm 2 is run for each of
them, following the chronological order of the steps. First
of all, Algorithm 2 deactivates collision detection for each
object/base/TCP, except oi (line 1). This means that only the
collisions between oi and the fixed obstacles are considered
during the collision detection phase. Then, it retrieves from
D the bounding box bbox associated to the object/base/TCP
oi (line 2). From this box, a discrete set of positions is uni-
formly sampled, as well as a set of orientations (line 4),
and the pose of the object/base/TCP is updated accordingly
(line 5). The orientation γ represents the rotation around
a reference axis applied to a template transformation, as Fig. 24. Example of weak unavoidable collision between the TCP
described in Section 7.1. The function collide() returns a and the bottle.
list of colliding objects, if any (line 6). If one sampled pose
is collisions-free (line 7), this means that collisions are not
unavoidable and the function returns false (line 8). Other-
wise, a list of objects causing the collisions is populated at that have not been moved yet (according to the ordering of
line 10. actions). If an object consistently causes such collisions, a
The process is demanding when an unavoidable colli- logical constraint is generated using the spatial constraints
sion exists because all the samples need to be tested. In graph as previously explained. In addition, the constraint
such case, a logical constraint is automatically generated enforces that the colliding object(s) have to be moved dur-
as explained earlier. The candidate culprit set is found by ing a previous time step. Consider for instance the situation
(j) illustrated in Figure 24. The bottle prevents r1 from grasp-
back-tracing from the node oi in the spatial constraints
ing block A. In this case, a weak unavoidable collision is
graph (see Section 8.3). For instance, the problem depicted
detected between the gripper and the bottle, and the follow-
in Figure 1 would result in the following constraint:
ing constraint is returned to the task planner:
:- relation(p1, block_a, placement, z1, t)
relation(block_a, block_b, placement, z1, t) :- relation(area1, r1, dock, t) (a)
relation(block_b, block_c, placement, z1, t) relation(block_a, gripper, grasp, side, t) (b)
relation(block_c, block_d, placement, z1, t) relation(r2, block_a, placement, z1, t) (c)
relation(gripper, block_d, grasp, top, t) not 1{moved(area1, 1..t-1); moved(r2, 1..t-1)} (d)
not 1{moved(p1, 1..t-1)} not moved(bottle, t2) (e)
t2<t

Like in the case of a spatial inconsistency, the extremities of


10.2. Weak unavoidable collisions the culprit chain are used to determine that moving area1 or
This test is similar to the strong unavoidable collisions r2 may relax the linear constraints, hence possibly avoid the
test, but it also includes collisions with the movable objects collision (d).
912 The International Journal of Robotics Research 35(8)

Fig. 26. Example of collision between the gripper and the


unavoidable volume of the lid of the box.

Fig. 25. 2D example of construction of an UV: the center of a store which UVs are occurring at each time step, and which
rectangular object is to be placed on a square target location (top). objects these UVs correspond to.
First, the unavoidable volume by rotation is computed (disc at bot- As an example, consider the situation in Figure 26, from
tom left). Then the intersection of all possible translations of that which the following sequence of actions is to be executed:
disc on the target location (bottom right). A1 : pick (left, top, block_a, table)
A2 : stack (left, top, block_a, z1, block_b)
A3 : pick (right, top, lid, table)
This “occlusion problem” is often mentioned in similar A4 : stack (right, top, lid, z1, box)
works. A common strategy is to use an ad hoc “occlud-
ing” predicate in order to artificially trigger the removal of Regardless of how block A is stacked on block B, this
the object. Here, occlusions are just a special case of weak sequence is doomed to fail at step 4, when the lid is stacked
unavoidable collision. The advantage is that the ASP solver on top of the box. This problem can be detected as an
is not tied to a predefined strategy, and can find other ways unavoidable collision between the left TCP and the UV of
to solve the problem by using the logical constraint and the the lid. Note that this problem cannot be detected either
inference mechanisms of the solver. Removing the object is by the strong unavoidable collision check (because it only
not the only possibility. Basically, any plan that makes one considers fixed obstacles), or by the weak unavoidable col-
term of the constraint (a, b, c, d, e) false resolves the prob- lision check (because the left TCP is moved before the lid
lem. For example, without changing the length of the plan, is stacked, hence it is deactivated).
the solver could decide to choose a different docking area Collisions with UVs are detected together with weak
(a = false), or use a top grasp (b = false). If no solution unavoidable collisions, using specific rules for activating
is found this way, some actions can be added to the plan, UVs at the correct time step. For instance in our example,
e.g. moving block A to another location with another robot the UV of the lid must be activated because the position of
(c = false), moving the robot r2 (d = false), or picking the the left TCP at step 2 is not changed until the lid is stacked,
bottle up and placing it away (e = false). at step 4. This can be determined using the list Luv and the
symbolic plan P. P contains the information that the left
TCP is not moved after step 2, and Luv indicates that an UV
10.3. Unavoidable volumes exists for the lid at step 4.
The logical constraint is generated as for weak unavoid-
An unavoidable volume (UV) represents a region of space
able collisions: a culprit set is found by back-tracing from
which is necessarily occupied at a given time step. It is con- (j)
the node oi (left_tcp(2) ) in the spatial relation graph. But
structed by intersecting the volumes of an object in all the
in addition, another culprit set is back-traced from the node
possible poses it can occupy as the result of an action. Fig-
corresponding to the object associated to the colliding UV
ure 25 depicts how UVs can be geometrically constructed.
(lid (4) ). The resulting constraint is built as a conjunction of
In our implementation, these volumes are not computed in
the terms of both culprit sets. In our example, this process
this way. We use a predefined set of cylinders with differ-
results in the following constraint:
ent radius and height, which are selected with ad hoc rules
when needed. :- relation(left_tcp, block_a, grasp, top, t)
According to the example in Figure 25, UVs can only be relation(block_b, block_a, placement, z1, t)
computed when the pose of a large object is constrained to relation(box, block_b, placement, z1, t)
be inside a small region. However, UVs can be computed not 1{moved(left_base,1..t-1); moved(box,1..t-1)}
in many situations, e.g. during stacking actions or grasping relation(right_tcp, lid, grasp, top, t)
actions, because both the TCP and the grasped object are relation(box, lid, placement, z1, t)
confined in a small region. Unavoidable volumes are com- not 1{moved(right_base,1..t-1);
puted after a symbolic plan is found. The bounding boxes moved(box,1..t-1)}
are used to determine the size of the regions that each object Note that UVs are also useful during geometric back-
occupies at each time step. A data structure Luv is used to track search, because in some problems, they prevent from
Lagriffoul and Andres 913

Table 4. Average times for the different checks performed by the


geometric reasoner, assuming a plan with 30 actions.

Check (layer) Success Failure

Fig. 27. Geometric bodies attached to the TCP for Justin (left) Consistency (1) 3s 7s
and Fabot (right). Unavoidable collisions (1) 0.07 s 0.15 s
Unavoidable collisions (2) 2s 3s
Geometric backtracking (3) 45 s 1 min
placing an object in a pose which may compromise a future
action.
The collision checks described in this section are done in
all the layers of the geometric reasoner. For layers (1) and
(2), in the special case where the tested body oi is a mobile
robot, only the base is used for collision detection. If oi is a
gripper, only the body attached to the TCP (see Figure 27)
is considered, i.e. the links of the manipulator are ignored.
None of these restrictions apply in layer (3).

11. Experimental evaluation


In this section, we evaluate our approach on three differ-
ent scenarios, which present different difficult aspects of
CTAMP. Scenario 1 is based on the introductory example.
The difficulty lies in the peculiar geometric configuration Fig. 28. The 3D version of the introductory example. The picture
which requires a culprit detection mechanism in order to shows the experiment with 10 blocks. For the experiments with
avoid repeatedly encountering the same failure. This sce- fewer blocks, the top-most blocks are removed from the piles.
nario is used for evaluating the scalability of our approach.
Scenario 2 illustrates the generality of our approach, by
addressing a problem which usually requires ad hoc algo- The planning times given in the results do not include the
rithms to be solved. Scenario 3 shows an example of a path smoothing time. When a solution is found, the motion
problem where the causes of failure need to be generalized plans consist of raw RRT paths that guarantee feasibility,
in order to solve the problem efficiently. Scenario 4 shows but which need to be smoothed. We do not include this time
a limitation of our approach: geometric constraints cannot in the results because we consider that smoothing can be
be fed back to the ASP solver, because symbolic aspects done during execution (except for the first action), with an
(allocation of tasks to robots) play a predominant role. Sam- average time of 2 seconds per action. We also use the term
ple videos of the solutions to these problems can be found iteration, which refers to the fact that a failure has been
online7 . detected and that a new plan is generated by the ASP solver
The linear programs are solved with Gurobi8 , and col- (see Figure 4).
lision detection with the library V-Collide (Hudson et al.,
1997). Motion planning (for Justin’s manipulators, Fabot’s
manipulator and Fabot’s base) is done with bi-directional
11.1. Scenario 1
RRT (LaValle, 2006) implemented in Java. The ASP com- This experiment is inspired by the introductory scenario
bined grounder-solver is clingo 4.2.18 . The rest is imple- as shown in Figure 28. The aim of the experiment is to
mented in Java, and all the experiments are conducted on a show the scalability of our approach on a combinatorial task
MacBook Pro with Intel Core 2 duo i5, 2.4 GHz. planning blocks-world problem with a non-trivial geomet-
For a better understanding of the results, we present in ric problem. The task is to build a pile by stacking all the
Table 4 an order of magnitude of the time spent during the blocks in a randomly chosen order. The location of the pile
culprit detection checks, for a plan containing 30 actions. is not specified, it may be on any of the modeled locations
These checks are performed in sequence, and if a check (the table or one of the three trays).
fails, the latter ones are not performed. It may seem counter- A fixed obstacle is located above the table at a distance
intuitive to do the consistency check before the unavoidable such that it is impossible to build a pile with more than
collisions checks which are faster. The reason is that the two blocks on the table, otherwise the TCP would collide
consistency check is also used to compute the bounding with this obstacle. The right-most tray is not covered by
boxes necessary for the subsequent checks. The resolutions the obstacle, but its higher position prevents the robot from
used for discretizing actions are those presented in Table 1. stacking more than six blocks on it, because of kinematic
Note that when the geometric backtracking check succeeds, constraints. These constraints are not logically encoded in
a valid solution is found. the symbolic domain, the ASP solver will receive them
914 The International Journal of Robotics Research 35(8)

Fig. 29. Total average planning time with respect to the number Fig. 30. Two pathological cases making the problem unfeasible
of blocks. The numbers above the bars correspond to the average or difficult. On the left, a pile is built which prevents grasping
number of actions in the solution plans. block C later on. On the right, blocks are placed on the table in
an intermediate position, and due to clutteredness, some actions
become difficult to perform.

from the geometric reasoner. If these two problems are


not detected, the problem is intractable because the planner
gets trapped in trying different plans which all lead to the
same failures, i.e. collision with the obstacle or kinematic
problem.
The problem was scaled up by increasing the number of
blocks from three to ten. The initial configuration resem-
bles what is depicted in Figure 28. Each problem was run
with four different initial positions for the tray on the table,
and with block A located on the tray. We did not change Fig. 31. Details for the average time spent on geometric rea-
the position of the piles, because of reachability issues for soning. “Unavoidable collisions” refers to the three types of
blocks C and J. In total, 96 runs were conducted. For the unavoidable collisions performed in layers (2) and (3) in Figure 5.
experiments with nine and ten blocks, it happens that no
solution is found because the planner ends up in a patholog-
ical situation where block C is occluded by the pile under
construction (see Figure 30, left). As a result, 91.7% of the
runs were solved.
The global results of the scalability experiment are pre-
sented in Figure 29. The trend is exponential, reaching up
to 15 minutes average planning time for ten blocks (52
actions). Nevertheless, the planner is able to find a solu-
tion in reasonable time (less than 1 minute) for problems
up to six or seven blocks, with plans containing around
30 actions. The time spent by the ASP solver on com- Fig. 32. This figure shows which checks in the geometric reasoner
puting the symbolic plan(s) increases faster than the time detected the failures during the iteration process (see Figure 4). (1)
spent on geometric reasoning. This is owing to the fact that refers to the “Spatial relations” layer in Figure 5.
the algorithms used in the geometric reasoner run either in
polynomial time, or have a cutoff time.
The detailed results for the time spent on geometric rea- by building the pile on one of the two trays which are not
soning are given in Figure 31. Detection of unavoidable occluded by the obstacle.
collisions all together is fast, and increases linearly with the For problems with more than six blocks, the consis-
number of actions, while most of the time is spent on geo- tency check is triggered, because the planner finds solu-
metric backtracking. Figure 32 shows the average number tion plans in which the pile is built on the right-most tray.
of iterations needed to solve the problems, and the pro- Then an inconsistency is detected because it is not possi-
portion of each type of failure encountered during culprit ble to stack more than six blocks for kinematic reasons.
detection. For problems up to six blocks, all the failures are For nine and ten blocks, the planner attempts to build the
detected by some strong unavoidable collision checks in the pile on the table, which is also impossible with respect to
“spatial relations” layer. It takes on average 4-5 iterations kinematic constraints. That is why we observe two extra
for the geometric reasoner to detected that it is impossible failed consistency checks. Note that this is also not possi-
to build any pile on the table with more than two blocks, ble because of the obstacle, but recall that in the sequence
using the left or right TCPs. Then a valid plan is found, of checks, the consistency check is performed before the
Lagriffoul and Andres 915

collision checks. These consistency checks lead the planner


to choose the left-most tray as the only possible location
to stack the blocks, which explains why less unavoidable
collision checks are observed for nine and ten blocks.
Even when the main causes of failure have been detected,
the planner needs to iterate over several solution plans,
because it fails at the geometric level for reasons that are
not detected by any specific check. This happens when the
choice made at the geometric level for an action Ai causes
occlusions or motion planning failures for another action Aj . Fig. 33. Erdem et al. (2011) experiment on rearrangement plan-
If Ai and Aj are too far from each other in the plan, the geo- ning of multiple objects (left), and our setup (right). The difference
metric backtracking layer cannot reconsider Ai and returns is that they use a mobile manipulator whereas in our setup Justin
failure. These situations typically occur because of pecu- remains in a fixed position.
liar configurations, or when the number of objects increases
(see Figure 30). This explains why the number of geomet-
ric backtracking checks increases for more than six blocks
(Figure 32). Consequently, the geometric backtracking time
increases as well (see Figure 31), secondarily because the
plans are longer, but mainly because the geometric back-
tracking check is performed multiple times. Another prob-
lem of geometric backtracking failures is that they are not
informative for the ASP solver, because they just prevent it
from returning one specific plan.
In summary, our approach shows a decent performance
for plans up to 30 actions, although two factors appear
as a limitation for larger problem instances. (i) Task plan-
ning is a difficult problem in general. (ii) At the geometric
level, intricate situations occur more frequently for larger
problem instances. Often, they cannot be solved by geo-
metric backtracking, and lead to failures that do not guide
Fig. 34. Left: Details for the average time spent on geometric rea-
the ASP solver. Therefore, the planner has to iterate over
soning. Right: Checks in the geometric reasoner used to detect
different plans until a plan without intricacies is found by
the failures during the iteration process (see Figure 4). (1) Refers
chance. This is time-consuming because geometric back-
to the “Spatial relations” layer and (2) refers to the “Geometric
track search has to be done several times.
dependencies chains” layer in Figure 5.

11.2. Scenario 2 space for the tray. The cup has to be moved to a tempo-
In this scenario, we replicate the experiment by Havur rary position, and placed at its final position after the tray
et al. (2014) on rearrangement planning of multiple objects. has been moved. Note that it is also possible to move the
The aim of this experiment is to show the generality of tray first. The limited space on the table, plus the fact that
our approach, by applying it to a problem which usually Justin can only use one manipulator makes the task diffi-
requires specific techniques to be solved. Rearrangement cult. It is also forbidden to stack the objects on each other,
planning is a variation of navigation among movable obsta- otherwise the problem is less challenging. At the task level,
cles (Stilman et al., 2007). These problems are known to the problem is simple if the objects that need to be moved
be complex, therefore a common assumption applied for are identified. At the geometric level however, the tempo-
addressing them is to restrict the space of solutions to mono- rary poses of objects need to be carefully chosen because
tone plans, i.e. plans in which objects are moved at most the space is limited.
once, which is an incomplete approach. We refer the reader We conducted 100 runs, with randomized initial posi-
to Havur et al. (2014) for the related work. They propose an tions for all objects (ensuring that they are reachable). 100%
approach with multiple stages, including the gridization of of the runs were solved. Depending on the initial configura-
the continuous plane and hybrid planning combining ASP tion, the solution plans contain six, eight, or ten actions in
and geometric reasoning. respectively 29%, 46%, and 25% of the problem instances.
The scenario is illustrated in Figure 33. The goal is to When the problem is solved with six actions (which is the
swap the position of the cup with the position of the tray simplest case), the plan consists in moving the cup (respec-
(meaning the center of the cup with the center of the tray). tively the tray) to an intermediate position, moving the tray
Blocks A and B have to be moved in order to free some to the position of the cup (respectively the tray), and then
916 The International Journal of Robotics Research 35(8)

moving the cup to the position of the tray (respectively the A1 : pick(right, border, z1, tray, table)
cup). Two or four extra actions are used when blocks A A2 : place(right, border, table, z1, tray)
and/or B need to be moved. The results are summarized in A3 : pick(right, top,z1, cup, table)
Figure 34. A4 : stack(right, top, target1, z1, cup)
The time spent on ASP solving is negligible, hence the A5 : pick(right, border, z1, tray, table)
chart on the left in Figure 34 practically represents the total A6 : stack(right, border, target2, z1, tray)
planning time: on average 12, 54, and 87 s for respec- But a problem remains because of block B that still pre-
tively six, eight, and ten actions. Although most of the time vents the tray from being placed on target2. Again, a weak
is spent on geometric backtracking, the chart on the right unavoidable collision is detected in the first layer and the
shows that the type of failure which is the most frequently following constraint is returned:
detected is weak unavoidable collisions in the first layer. :-relation(tray, right, grasp, border, t)
These checks have not much impact on the geometric rea- relation(target2, tray, placement, z1, t)
soning time because they are fast (Table 4), and occur early not 1{moved(block_b, 1..t-1)}
in the sequence of checks (see Figure 5).
The ASP solver returns the fourth plan which satisfies all
Let us illustrate the iteration process with an example
the constraints, i.e. block B is moved before the tray is
where block B is close to the cup, and therefore has to
placed, and the tray is moved before the cup is placed:
be moved. We use two virtual locations (5×5 cm squares),
symbolically labeled target1 and target2, corresponding to A1 : pick(right, top, z1, block_b, table)
the initial centers of respectively the tray and the cup. The A2 : place(right, top, table, z1, block_b)
goal is defined as: A3 : pick(right, border, z1, tray, table)
A4 : stack(right, border, target2, z1, tray)
:- not connected(target1, cup, t), horizon(t)
A5 : pick(right, top, z1, cup, table)
:- not connected(target2, tray, t), horizon(t)
A6 : stack(right, top, target1, z1, cup)
Even though the initial state is randomized at the geometric
But the problem remains that the cup has to be removed
level, it remains symbolically the same for each run. There-
before the tray is placed. Note that this problem was
fore, the first plan returned by the ASP solver is always:
detected at the first iteration, but since several colliding
objects were detected, the constraint contained a disjunc-
A1 : pick(right, border, z1, tray, table) tion with respect to the objects to be moved (block B or
A2 : stack(right, border, target2, z1, tray) cup). Through several iterations however, this disjunction is
A3 : pick(right, top, z1, cup, table) incrementally resolved. Finally, another weak unavoidable
A4 : stack(right, top, target1, z1, cup) collision is detected and this constraint is returned:
For action A2 , the geometric reasoner detects an unavoid- :-relation(tray, right, grasp, border, t)
able collision with block B. The collision is detected as relation(target2, tray, placement, z1, t)
“weak” since block B has not been moved yet. It is detected not 1{moved(cup, 1..t-1)}
in the first layer, i.e. by sampling all the positions of the
With this constraint, the ASP solver cannot find a solution
tray in a bounding box centered on target2. The following
plan with six actions, but it finds one with eight actions, by
constraint is returned:
inserting two actions at the beginning of the fourth plan that
:-relation(tray, right, grasp, border, t) move the cup onto the table.
relation(target2, tray, placement, z1, t) We showed through an example how our system solves a
not 1{moved(block_b, 1..t-1); moved(cup, 1..t-1)} particular problem instance. Five iterations are necessary to
detect which objects have to be moved, and in which order.
The second plan avoids this problem by starting to move This is achieved quickly because the geometric checks
the cup instead of the tray. But a similar problem occurs involved are fast, and the problem is simple at the symbolic
because of a collision with the tray. Therefore a weak level. The difficulty is at the geometric level, in the choice of
unavoidable collision with the tray is detected and the the intermediate poses for the objects. They have to be cho-
following constraint is returned: sen in a way that does not compromise any future action,
:-relation(cup, right, grasp, top, t) which is not trivial because of the limited space together
relation(target1, cup, placement, z1, t) with the kinematic constraints of the manipulator. These
not 1{moved(tray, 1..t-1)} geometric choices are facilitated by the unavoidable vol-
umes (see Section 10.3), which can be computed because
Now, there are no more solutions within plans of length 4.
the tray is a large object to be placed on a small area. Nev-
The solver increases the length to 5, for which there is no
ertheless, the UV of the tray is a cylinder which is smaller
solution, and then searches for a plan of length 6. The first
than the actual tray (see Figure 25), hence the possibility
constraint enforces to move block B or the cup before plac-
remains that some objects are placed in positions occupied
ing the tray on target2. The third plan takes this constraint
by the tray in the next steps. Consequently, a significant
into account by moving the cup in action A4 :
Lagriffoul and Andres 917

effort remains to be spent on geometric backtrack search in


order to find appropriate intermediate poses for all objects.

11.3. Scenario 3
This scenario demonstrates the capacity of our approach to
generalize from the detected failures, i.e. after detecting an
inconsistent configuration with two particular blocks, the
planner is able to prune out the plans leading to the same
failure with another combination of blocks. In the initial
configuration, an open box is located on the right side of
Justin, containing a pile with six blocks and a bottle, and
the lid of the box is set on the table (see Figure 35). The
Fig. 35. A complex scenario combining several difficulties.
goal is to have the six blocks inside the box and the box
closed with the lid:
:- not on_location(block_a,cylbox,t), horizon(t).
:- not on_location(block_b,cylbox,t), horizon(t).
:- not on_location(block_c,cylbox,t), horizon(t).
:- not on_location(block_d,cylbox,t), horizon(t).
:- not on_location(block_e,cylbox,t), horizon(t).
:- not on_location(block_f,cylbox,t), horizon(t).
:- not connected(cylbox,cylbox_lid,t), horizon(t).

There is no predicate to represent that an object is “inside”


the box, because we did not implement an action able to
achieve this geometric effect. Instead, we use the predicate
on_location, which represents the fact that an object is
directly connected to the box, or connected to a pile of
objects located in the box (see Section 6.2). We use the
following rule:
on_location(Object, Location, t) :- Fig. 36. Left: Details for the average time spent on geometric rea-
connected(Location, Object, t). soning. Right: Checks in the geometric reasoner used to detect the
and a rule that ensures the transitivity: failures during the iteration process (see Figure 4). (1) Refers to
the “Spatial relations” layer in Figure 5.
on_location(Object2, Location, t) :-
on_location(Object1, Location, t),
connected(Object1, Object2, t).
From these rules, there is a large number of combinations
that satisfy the goal: 1 pile with 6 blocks, 2 piles with 1 and
5 blocks, 2 and 4 blocks, or 3 and 3 blocks, etc., modulo all
the possible orderings of the blocks. We added a symbolic
constraint stating that no more than 3 piles can be made
inside the box, otherwise the problem gets difficult, due to
the size of the robot’s hands with respect to the size of the
box. Geometrically, it is not possible to close the box if the
bottle is inside, nor if more than 2 blocks are stacked on Fig. 37. Examples of interference between both robots during
each other. Therefore the only solution is to create 3 piles geometric backtracking. One of the robots cannot move its arm
with 2 blocks each. after a pick action because the other robot’s arm is above (left), or
The mobile robot Fabot was used in this scenario. It can less common: Fabot’s TCP is trapped between Justin’s hand and
grasp all the objects with a side grasp, and unlike Justin, the grasped object (right).
it can grasp an object which is not clear, i.e. it can grasp a
whole pile of objects. With this additional robot, the prob-
lem can be solved in fewer steps, since some actions can be The initial positions of the bottle and the pile were ran-
done in parallel. On the other hand, parallel actions some- domized, and 100 runs were conducted. A solution was
times lead to intricate situations where both robots interfere found in 75% of the runs using a cutoff time of 5 minutes.
with each other (see Figure 37), which triggers costly GBT. The average time for finding a solution is 131 s (standard
918 The International Journal of Robotics Research 35(8)

deviation 59 s). The plans consist of 9 or 10 steps, contain-


ing 16 to 19 actions. The detailed results are presented in
Figure 36. We aggregated the results into two groups. In
the first group (28 runs out of 75), the problem was solved
in 72 s on average (standard deviation 9.6 s). In the sec-
ond group (47 runs out of 75), the problem was solved in
163 s on average (standard deviation 49 s). The ASP solv-
ing time is negligible: respectively 2.9 s and 4.5 s. The time
spent on geometric reasoning is dominated by geometric
backtracking, although not as strongly as in the previous
scenarios.
Both groups share the same three types of checks.
Unavoidable volumes are mainly used to detect that it is
not possible to place a block (or stack 2 blocks) in the box,
and leave the hand of Justin there while the box is closed. Fig. 38. A transportation scenario with two mobile robots.
They also detect that it is not possible to have a pile with 3
blocks and the lid on the box. Strong unavoidable collisions
detect that Fabot cannot pick/place a block in the box, or
stack a block on 1 or 2 other blocks in the box, without the
TCP colliding with the rim of the box (the TCP of Fabot is
by construction always horizontal). Weak unavoidable colli-
sions detect that it is not possible to place the lid on the box
without moving block C and/or D. Whether D needs to be
moved or not depends on the position of the pile: the handle
of the lid collides with block D if the pile is located in the
middle of the box. Once these culprits have been detected,
the problem is basically solved, unless the initial position
of the pile and the bottle are prone to interference, as illus-
trated in Figure 37 on the left, which explain the 25% of Fig. 39. Total average planning time with respect to the number
failures, because GBT does not complete. of blocks. The numbers above the bars correspond to the average
In order to explain why the second group of runs takes number of actions in the solution plans.
twice the time to solve the problem, we need to analyze
what happens during the first iterations of the algorithm.
achieved thanks to the generalized constraints with types as
The first symbolic plan is always the same: putting the lid
explained in Section 6.5. Once the system has tried to create
on the box. Consequently the geometric reasoner feeds back
a pile in the box, e.g. F-E-D, and detected that this prevents
a constraint saying that this is not possible unless the bottle
from placing the lid on the box, it generates a constraint
or blocks C and/or D are moved. At the second iteration,
with object types, that prevents all possible combinations
the ASP solver finds a plan in which the bottle is moved, by
of 3 blocks in the box with the lid on the box. Moreover,
stacking the bottle on block F with the right arm and putting
the ASP solver can infer that it is impossible to build a pile
the lid on the box with Fabot. Depending on the initial con-
with 4, 5 or 6 blocks, because this is logically impossible
figuration, an inconsistency may be detected, because the
without first building a pile with 3 blocks. Therefore, it is
neck of the bottle may be too high to be reachable by the
only possible to build 3 piles with 2 blocks each (remember
right arm. But in some cases, no inconsistency is detected:
that we imposed a constraint of maximum 3 piles in the box
the TCP of Justin grasping the bottle is inside its bound-
for space reasons). This example shows that the ASP solver
ing box, although the exact kinematics do not allow that
is not only used for planning action sequences, but also for
movement. If it is detected, the problem is “easy”, otherwise
reasoning about spatial relations between objects.
the system needs more iterations to figure out that stacking
the bottle on block F is not a good option. During these
iterations, extra consistency checks and geometric back-
tracking checks are performed as indicated in Figure 36,
11.4. Scenario 4
which increases the planning time. In this scenario, the task is to clean a set of objects. In the
This scenario contains another difficulty which does not initial state, objects are dirty and located at their respec-
appear in the presented results. There is a large number of tive tables (see Figure 38). In the goal state, they have to be
possible arrangements of the blocks to achieve the goal at clean, and back to their initial position. Cleaning an object
the symbolic level, from which only a small subset is geo- is simulated by the action of placing that object on the table
metrically feasible, i.e. 3 piles with 2 blocks each. This is in front of Justin. Note that this problem is not solvable by
Lagriffoul and Andres 919

multimodal planning techniques, since symbolic reasoning a different plan in case of failure. With this setup, the sys-
is needed to achieve the goal. tem becomes equivalent (in terms of type of information fed
A simple solution consists of Fabot moving back and back to the ASP solver) to the approaches by Erdem et al.
forth between Justin’s table and the smaller tables, trans- (2011) and Aker et al. (2012).
porting each object one after the other, which requires Scenario 1: No problem instance solved
6-7 actions for each object9 . But since parallel actions are Without culprit detection mechanisms, none of the prob-
allowed, since r2d2 can carry several objects, and since lem instances could be solved, not even those instances
Fabot can manipulate piles of objects, better solutions exist, with three blocks. Let us consider the simpler three-blocks
in which robots wisely cooperate with each other. case in detail. A solution for this problem requires six
The problem is geometrically simple, i.e. there are no steps, for instance (some parameters have been omitted for
narrow passages, and only weak geometric dependencies concision):
between actions. Therefore, the symbolic plan returned
at the first iteration is always geometrically feasible, and 1. pick(right, block_c) pick(left, block_a)
the geometric reasoning time increases linearly with the 2. place(right, block_c, table) place(left, block_a,
number of actions (see Figure 39). At the symbolic level blue_tray)
however, 4 robots can manipulate the objects, which leads 3. pick(left, block_b)
to a large number of combinations. Remember that the 4. stack(left, block_b, block_a)
ASP solver only increases its search horizon when it has 5. pick(left, block_c)
proven that no valid plan exists for the current length. 6. stack(left, block_c, block_b)
The advantage is that the plans are optimal (in terms of
However, without geometric feedback, a logically feasi-
number of steps), but the drawback is that the computa-
ble plan consists in leaving block_a in place, and stacking
tional cost increases exponentially with the horizon length,
block_b and block_c on top, which can be done in three
as shown in Figure 39. This type of problem would be
steps. There exists four such plans of length 3, 99 plans
more efficiently solved with state-space heuristic planning
of length 4, and 1193 plans of length 5. Since the ASP
approaches (Dornhege et al., 2009; Srivastava et al., 2014),
solver increases its horizon after exhausting all the solu-
or the approach by Kaelbling and Lozano-Pérez (2011) for
tions, the system therefore needs to geometrically evaluate
long-horizon problems.
1296 plans before considering solutions of length 6, which
It is interesting to compare these results with the results
is not possible within the allotted time (20 min).
of scenario 1, in which plans with 52 actions (for the
Scenario 2: Time increased by one order of magnitude
10-blocks problem instance) are found in much less time.
For this scenario, the average solving time is increased by
Setting aside the fact that both problems are different, we
a factor 10, and not all instances could be solved. Problems
hypothesize the following explanation: in scenario 1, a solu-
requiring six and eight actions were all solved, on average
tion is found after 9 iterations. During these iterations, the
in 135 and 606 s respectively. For problems requiring ten
ASP solver finds several shorter plans, which are unfeasi-
actions, only two problem instances were solved (out of
ble, but which enable to compute 9 logical constraints that
25), in 17 and 19 min, the rest being cut off. The prob-
can be used for pruning the symbolic search space. This
lems requiring six actions could be solved because after
does not occur in the present scenario, i.e. the ASP solver
trying the two possible 4-actions plans (swapping the cup
has to solve a difficult problem from the start, and no log-
and the tray), the ASP solver enumerates 6-actions plans,
ical constraint is used for pruning, since only one iteration
which consist of swapping the cup and the tray, plus mov-
is performed.
ing an extra object. If by chance the extra object is the cup
Note that this poor performance owes to the fact that an
or the tray, a solution is found. Problems requiring eight
optimal plan is sought. The performance is dramatically
actions present more combinations, but a plan moving the
improved if the ASP solver is asked for a solution within
occluding object can be found by chance within 20 min-
more steps. For instance, the 5-blocks problem instance is
utes. Few plans requiring ten actions were solved because
optimally solved with 26 steps in 2 hours. Solving the same
reaching to that search horizon requires exhausting all the
problem within 50 steps takes only 28 s, although the plan
plans of length 6 and 8, which is rarely possible within the
contains useless actions (see the videos for a comparison of
allotted time.
the optimal versus non-optimal plan).
Scenario 3: No problem instance solved
The problem in scenario 3 is similar to the case of scenario
1. Logically speaking, the problem is feasible within two
11.5. Comparative experiments steps: since the blocks are on_location with respect to
11.5.1. Experiments without culprit detection In order to the box in the initial state, it is sufficient to pick the lid
assess the impact of the culprit detection mechanisms, the and place it on the box for achieving the goal. However, the
same experiments have been done, with layers (1) and (2) actual feasible plans consist of at least nine steps. Consider-
deactivated. Layer (3) was maintained for geometric evalua- ing the number of possible arrangements of blocks, and the
tion of symbolic plans, and forcing the ASP solver to return combinations for deciding which robot manipulates which
920 The International Journal of Robotics Research 35(8)

blocks, the problem is clearly intractable without feedback red tray or the blue tray (say, nside actions). Therefore, the
from the geometric level. planner does not reconsider moving block_a to the blue tray
Scenario 4: Similar results or the red tray as long as the heuristic “sees” solutions with
Similar results were observed for scenario 4 because one less than nside actions. Although the function Fstop prevents
iteration is sufficient to solve all problem instances. The the planner from exploring all such unfeasible plans, there
time spent on geometric reasoning was not significantly remains many states to visit from which a plan with less
reduced, because the time spent on geometric backtracking than nside actions seems possible. For instance, the blocks
dominates the time spent on culprit detection. from the initial piles can be unstacked in different orders
or to different locations, and worse, nside − ngreen irrelevant
actions can be inserted without making the current state
less promising than moving block_a to the red tray or the
11.5.2. Comparison with heuristic planning approaches
blue tray. The planner cannot escape this local minimum for
In this experiment, we evaluate how approaches based on
problem instances with more than five blocks. If the native
state-space heuristic planning, which interleave symbolic
heuristic function of FF is used, the planner is trapped in
and geometric reasoning, e.g. Dornhege et al. (2009); Sri-
a heuristic plateau for the same reasons, and never escapes
vastava et al. (2014), would perform on scenario 1. Since
from it. These results cannot be generalized to other types of
their setup cannot be exactly replicated, we instead emu-
problem, since scenario 1 was constructed with the aim of
lated geometric reasoning in order to address a simpler
highlighting this issue. Nevertheless, it suggests that similar
problem, and therefore assess a lower bound on the results
pitfalls may be encountered each time geometric constraints
that may be obtained for solving the same problem with
are not well captured by the heuristic function.
these types of approaches. The tested hypothesis is that for
Table 5 also includes the results obtained with the ASP
some problems, heuristic state-space planners may perform
solver on the same problem instances (see Figure 29). A
poorly because the geometric constraints of the problem are
fair comparison is difficult: the ASP solver did not have
not captured by the heuristic function.
the information computed by the FF heuristic, and the FF
An equivalent pick-and-place domain was implemented
planner did not have the logical constraints computed by
in PDDL (Ghallab et al., 1998), and the forward state-space
the culprit detection mechanisms. It is also inappropriate to
planner FF (Hoffmann and Nebel, 2001) was used. No
compare optimal and non-optimal planning. The question is
detailed geometric representations were used, i.e. only one
how would the FF planner perform if the information com-
symbol for each location, one type of grasp, were used. The
puted with culprit detection mechanisms was integrated in
fact that the red tray (respestively blue tray) is only reach-
the computation of the heuristic. This raises the issue of the
able by the right (respectively left) arm, was hardcoded in
feasibility of such integration in the first place. Although
the domain. In order to emulate geometric reasoning, we
the work by Garrett et al. (2014) presents a method for
implemented a function Fstop which symbolically evaluates
including information about occluding objects in the FF
if a pile of more than two blocks exists on the table or on
heuristic, extending this approach to the complex logical
the green tray (which always causes a collision between
expressions returned by our geometric reasoner appears as
the gripper and the obstacle). Fstop is called for each vis-
a very challenging problem.
ited symbolic state, and the state is not expanded if such a
Scenario 1 is also an example in which information about
pile is detected. The running time of this function is neg-
occluding objects may not be useful, since the occluding
ligible compared to the time for computing the heuristic,
object cannot be moved. This supports our initial claim
therefore the timings obtained are a lower bound on the
that information about occluding objects or unfeasible paths
time that would be taken if actual geometric reasoning was
is not sufficient to efficiently guide the task planner in
performed.
all situations, and advocates for using more informative
The native heuristic function of FF (hFF ) estimates the
diagnosing methods, such as the proposed culprit detec-
goal distance by building a planning graph and comput-
tion mechanisms. Assuming that such mechanisms could
ing a plan ignoring the negative effects of actions. Instead,
be implemented in the heuristic function, one may expect
we used the heuristic function f = g + hFF (where g is
performance issues since the heuristic function is called for
the length of the partial plan at hand), otherwise the plan-
each expanded node. This problem does not arise with our
ner gets trapped in a heuristic plateau owing to the lack of
approach because symbolic and geometric reasoning are
geometric information, as explained next.
weakly coupled.
The results are presented in the second line of Table 5,
and confirm the hypothesis. In absence of information
about the obstacle above the table, the heuristic consid-
ers that bringing a block towards the green tray (or the
11.6. Discussion
table) requires two actions, whereas bringing a block from In the first scenario, the scalability of the planner was evalu-
one side to another requires four actions. Hence, it esti- ated. The experiments indicate an exponentially increasing
mates that stacking all blocks on the green tray requires less planning time, which is the rule for planning problems.
actions (say, ngreen actions) than stacking all blocks on the Nonetheless, the planning time remains below 1 min for
Lagriffoul and Andres 921

Table 5. Average task planning time (s) for scenario 1, ASP solving time versus FF planning time. Dashes represent cutoff time (30
min) or insufficient memory.

Number of blocks 3 4 5 6 7 8 9 10

ASP 0.2 0.36 1.04 3.9 12.9 56.1 134.7 319


FF 1.3 10.2 132 – – – – –

problems requiring 30 actions, which is a decent perfor- Finally, a brief comparison with state-space heuristic
mance with respect to the complexity of the problem. Sce- planning approaches on a specific problem points out poten-
nario 1 also demonstrates the ability of our approach to deal tial local minimum problems if the heuristic does not com-
with combinatorial task planning problems, which is possi- pletely capture the geometric constraints of the problem.
ble because of the weak coupling between the symbolic and
geometric levels.
12. Conclusion
In scenario 2, the challenge is at the geometric level.
After a few iterations, the objects that need to be moved We presented an approach for combining task and motion
are known by the task planner. The difficulty is then to planning which included two culprit detection mechanisms
carefully choose intermediate positions for these objects. in order to feed back rich information from the geometric
Although the performance is not impressive, the experi- level to the symbolic level. The first mechanism works on
ments showed that all instances were solved. This demon- a relaxed version of the geometric problem, in which the
strates the ability of our approach to solve, without specific poses of robots and objects are approximated by a set of
heuristics or strategies, a problem which usually requires ad bounding boxes represented by a network of linear con-
hoc algorithms. straints. We proposed techniques to detect spatial incon-
The third scenario, although it seems simple, hides sev- sistencies within this network in polynomial time. These
eral difficulties. The symbolic goal state can be achieved in bounding boxes are also used to detect different types of
many ways, although few of them are geometrically feasi- unavoidable collisions. The second mechanism relies on
ble. Solving the problem requires to detect geometric con- the construction of a graph of the geometric dependencies
straints on specific objects instances and generalize them to between actions. From this graph, shorter subsequences of
other objects of the same type. actions can be extracted and independently evaluated.
The last scenario points out a limitation of our approach, The failures detected by these mechanisms efficiently
i.e. when a difficult symbolic problem is to be solved in the guide the ASP solver because they do not simply report
first place. Then, it is not possible to iterate through sim- a “local” failure, but rather a context (expressed with spa-
pler solutions, that offer opportunities for adding geometric tial relations) or a culprit subsequence of partially ordered
constraints which can help solving the symbolic problem. actions (with geometric dependencies chains). Since the
This is the drawback of decoupling symbolic and geometric task planner is based on logic programming, the detected
search spaces. failures are fed back by simply adding logical constraints
The experiments also point out another limitation of to the problem. Therefore, the planning problem and the
our approach, i.e. when the symbolic solution plan leads geometric constraints are homogeneously integrated in the
to intricate geometric configurations (see Figure 30 and same search space. ASP is not strictly required for the task
37), which cannot be resolved by geometric backtrack- planning part though: other logic-programming languages,
ing. Since the geometric reasoner cannot explicitly express or satisfiability-based planners could be used as well. One
the cause of failure into a logical constraint, the system could also think of partial order planning as a suitable can-
needs to iterate over symbolic plans until the problem didate, since the logical constraints returned by the geo-
disappears by chance, which is very inefficient. A pos- metric reasoner can be interpreted in terms of constraints
sible approach to address this issue is discussed in the on partially instantiated plans, and therefore they could be
next section. mapped to a small number of decision points in the plan-
Comparative experiments have shown that culprit space. It is less obvious though how this could be integrated
detection mechanisms are the backbone of our approach, with a state-space planner.
in particular for solving intricate problems. In scenario 2, The experiments have demonstrated the capacity of
the difficulty was mainly at the geometric level, while in our system to solve various types of problems. Thanks
scenario 4, the difficult part was the symbolic problem. On to the weak coupling between symbolic and geometric
both scenarios, removing the culprit detection mechanisms search spaces, challenging task planning problems can be
did not affect the results much. On scenarios 1 and 3, where addressed, provided that geometric information can be used
symbolic and geometric aspects are more intricate, using to cut the search space. Thanks to geometric backtrack-
culprit detection mechanisms makes the difference. ing, intricate geometric problems can be solved, although
922 The International Journal of Robotics Research 35(8)

this may be expensive in some cases. The proposed cul- (iii) Task planning with ASP can be improved. The com-
prit detection mechanisms are effective: in most cases, a bined ASP grounder-solver clingo offers a general declara-
dozen iterations is sufficient to symbolically capture the tive framework for incorporating heuristics into the solving
main causes of geometric failures in a problem. This allows procedure (Gebser et al., 2013). In our future work we plan
the ASP solver to prune out large parts of the search space, to utilize this framework to specify a heuristic for task plan-
and quickly reach to a feasible plan. Last but not least, our ning problems in order to firstly reduce the solving time
system produces plans optimal in the number of steps, and of the ASP solver and, secondly to guide the ASP solver
supports for parallel actions, which few other approaches such that the task plan found is adjusted to the geometric
do. Nevertheless, we found limitations that need to be solver, e.g. avoiding unnecessary actions, spreading objects
addressed in order to apply this approach to a wider range over available locations, etc. This would further reduce the
of problems. overall solving time and would allow to approach more
(i) Path planning failures are not explained. The pro- challenging problems.
posed culprit detection mechanisms only consider the con-
figurations reached by a robot after completion of an action. Acknowledgments
As mentioned, this limitation is not of major concern for We also thank Lars Karlsson and Alessandro Saffiotti for their
the proposed scenarios, but it would have a greater impact insightful suggestions and help in improving this article.
if the robots had to operate in heavily cluttered environ-
ments. In order to handle path planning failures, one needs
Funding
to prove that a path does not exist, which remains a diffi-
cult and unsolved problem. The work by Hauser (2014) on This work was partially supported by EU FP7 project “Gen-
the Minimum Constraint Removal Problem is promising in eralizing Robot Manipulation Tasks” (GeRT) [contract number
this respect. It could plug this gap in our approach, by feed- 248273].
ing back to the ASP solver minimal explanations for path
planing failures, indicating which objects need to be moved Notes
away.
1. Geometrically, manipulators are moved away from their “nat-
(ii) Geometric backtrack search is difficult. In the exper- ural” workspace, e.g. for Justin, away from the space in front
iments, some of the runs could not be solved because the of the torso. Bases are moved to a circular region one meter
symbolic plan led to an intricate configuration (see Fig- away from the current position.
ure 30), and since the culprit decision was made early, 2. The meaning of this symbol is explained in the next section.
3. This could be improved by sampling with in a ring or a disk,
geometric backtracking is not able to backtrack up to that
but a circular domain proved suffcient in all the experiments.
decision before the cutoff time is reached. In related work 4. https://www.youtube.com/user/MRLabSweden
(Bidot et al., 2015), we proposed a heuristically guided geo- 5. This is known from the symbolic state: since r2d2 and block
metric backtrack search algorithm to address this issue. It B are connected, if r2d2 is moved, then block B is moved by
turned out to be difficult to find heuristics that can handle ramification.
6. Because new variables are created each time an object is
all situations, because geometric dependencies result from
moved, no cycles can be created, but this is out of the scope of
subtle interactions between the kinematics of the robot and this article.
the configuration space of obstacles. 7. http://aass.oru.se/~fll/videos_ijrr.html
In most such cases however, we observed that intricate 8. Gurobi Optimization, Inc (2013) Gurobi optimizer reference
situations only concern one or two actions, while the rest manual. http://www.gurobi.com
9. Potassco (2014) Potassco, the Potsdam answer set solving
of the plan can be geometrically instantiated without diffi-
collection. http://potassco.sourceforge.net/
culty, since the geometric reasoner has already detected the 10. reach small_table1, pick block_a, reach table, clean, pick
main possible causes of failure. In most cases, one could block_a, reach small_table1, place block_a, pick block_b,...
circumvent such problems by simply switching the order
of two actions, or by inserting actions that move undesir-
References
able objects away, while preserving the rest of the plan. We
will therefore focus our future efforts on integrating local Aker E, Patoglu V and Erdem E (2012) Answer set program-
symbolic plan repair strategies to the current approach. ming for collaborative housekeeping robotics: Representation,
Another possibility which could be explored is to relax reasoning, and execution. Intelligent Service Robotics 5(4):
some constraints during geometric backtracking, which is 275–291.
Bidot J, Karlsson L, Lagriffoul F et al. (2015) Geometric back-
for now inflexible with respect to kinematic constraints and
tracking for combined task and motion planning in robotic
collisions, and to perform the needed adjustments at exe- systems. Artificial Intelligence (Online).
cution time by local reasoners (Scioni et al., 2015; Winkler Boyd S and Vandenberghe L (2004) Convex Optimization. New
et al., 2012). In other words, delegating some of the prob- York: Cambridge University Press.
lems encountered offline during geometric backtracking to Bylander T, Allemang D, Tanner MC et al. (1991) The computa-
online execution processes, which would also provide more tional complexity of abduction. Artificial Intelligence 49(1–3):
robust execution. 25–60.
Lagriffoul and Andres 923

Cambon S, Alami R and Gravot F (2009) A hybrid approach to Kaelbling LP and Lozano-Pérez T (2011) Hierarchical task and
intricate motion, manipulation and task planning. The Interna- motion planning in the now. In: Proceedings of International
tional Journal of Robotics Research 28(1): 104–126. Conference on Robotics and Automation (ICRA), IEEE, pp.
Chakravarti N (1994) Some results concerning post-infeasibility 1470–1477.
analysis. European Journal of Operational Research 73(1): Karlsson L, Bidot J, Lagriffoul F et al. (2012) Combining task
139–143. and path planning for a humanoid two-arm robotic system. In:
Chinneck JW (1996) An effective polynomial-time heuristic for TAMPRA: ICAPS Workshop on Combining Task and Motion
the minimum-cardinality IIS set-covering problem. Annals of Planning for Real-World Applications, pp. 13–20.
Mathematics and Artificial Intelligence 17(1-2): 127–144. Kautz H and Selman B (1992) Planning as satisfiability. In: IN
Choi J and Amir E (2009) Combining planning and motion plan- ECAI-92. Vienna: Wiley, pp. 359–363.
ning. In: Proceedings of International Conference on Robotics Kautz HA, McAllester D and Selman B (1996) Encoding plans
and Automation (ICRA). Piscataway: IEEE Press, pp. 238–244. in propositional logic. In: Proceedings of KR96, Morgan Kauf-
Cortés J and Siméon T (2004) Sampling-based motion planning mann, pp. 374–384.
under kinematic loop-closure constraints. Zeist: IEEE. Kavraki L, Svestka P, Latombe J et al. (1996) Probabilistic
Dechter R and Frost D (2002) Backjump-based backtracking for roadmaps for path planning in high-dimensional configura-
constraint satisfaction problems. Artificial Intelligence 136(2): tion spaces. In: Proceedings of International Conference on
147–188. Robotics and Automation (ICRA), IEEE, pp. 566–580.
Dornhege C, Eyerich P, Keller T et al. (2009) Semantic attach- Kuipers L and Niederreiter H (1974) Uniform distribution of
ments for domain-independent planning systems. In: Proceed- sequences. New York: Wiley.
ings of International Conference on Automated Planning and Lagriffoul F, Dimitrov D, Bidot J et al. (2014) Efficiently com-
Scheduling (ICAPS). pp. 114–121. bining task and motion planning using geometric constraints.
Erdem E, Haspalamutgil K, Palaz C et al. (2011) Combining high- The International Journal of Robotics Research 33(14): 1726–
level causal reasoning with low-level geometric reasoning and 1747.
motion planning for robotic manipulation. In: Proceedings of Lagriffoul F, Dimitrov D, Saffiotti A et al. (2012) Constraint prop-
International Conference on Robotics and Automation (ICRA). agation on interval bounds for dealing with geometric back-
pp. 4575–4581. tracking. In: Proceedings of the International Conference on
Garrett CR, Lozano-Pérez T and Kaelbling LP (2014) Heuristic Intelligent Robots and Systems (IROS), IEEE, pp. 957–964.
search for task and motion planning. In: Proceedings of the LaValle S (2006) Planning Algorithms. Cambridge: Cambridge
Workshop on Planning and Robotics (PlanRob). pp. 148–156. University Press.
Gebser M, Kaufmann B, Otero R et al. (2013) Domain-specific Lifschitz V (2002) Answer set programming and plan generation.
heuristics in answer set programming. In: Proceedings of AAAI, Artificial Intelligence 138: 2002.
Springer, pp. 350–356. Lifschitz V (2008) What is answer set programming. In: Pro-
Gelfond M and Lifschitz V (1998) Action languages. Electronic ceedings of the Twenty-Third AAAI Conference on Artificial
Transactions on AI 3, Royal Swedish Academy of Science, pp. Intelligence. Chicago: AAAI, pp. 1594–1597.
193–210. Lozano-Pérez T and Kaelbling LP (2014) A constraint-based
Ghallab M, Howe A, Knoblock C et al. (1998) PDDL—The method for solving sequential manipulation planning problems.
planning domain definition language. In: IEEE/RSJ International Conference on Intelligent Robots
Guitton J and Farges JL (2009) Taking into account geometric and Systems, IEEE, pp. 3684–3691.
constraints for task-oriented motion planning. In: ICAPS Work- Luna R, Lahijanian M, Moll M et al. (2014) Fast stochastic motion
shop on Bridging the Gap Between Task And Motion Planning, planning with optimality guarantees using local policy recon-
BTAMP’09, pp. 26–33. figuration. In: IEEE International Conference on Robotics and
Hauser K (2014) The minimum constraint removal problem with Automation (ICRA), IEEE. To appear.
three robotics applications. International Journal of Robotics Nau D, Ghallab M and Traverso P (2004) Automated Plan-
Research 33(1): 5–17. ning: Theory & Practice. San Francisco: Morgan Kaufmann
Hauser K and Latombe JC (2010) Multi-modal motion plan- Publishers Inc.
ning in non-expansive spaces. International Journal of Robotic Ott C, Eiberger O, Friedl W et al. (2006) A humanoid two-arm
Research 29(7): 897–915. system for dexterous manipulation. In: Proceedings of Interna-
Hauser K, Ng-Thow-Hing V and González-Baños HH (2007) tional Conference on Humanoid Robots (Humanoids), IEEE,
Multi-modal motion planning for a humanoid robot manipu- pp. 276–283.
lation task. In: Proceedings of the International Symposium on Parker M and Ryan J (1996) Finding the minimum weight IIS
Robotic Research, Springer, pp. 307–317. cover of an infeasible system of linear inequalities. Annals of
Havur G, Ozbilgin G, Erdem E et al. (2014) Geometric rear- Mathematics and Artificial Intelligence 17(1-2): 107–126.
rangement of multiple movable objects on cluttered surfaces: A Plaku E and Hager G (2010) Sampling-based motion planning
hybrid reasoning approach. In: Proceedings of the 2014 IEEE with symbolic, geometric, and differential constraints. In: Pro-
International Conference on Robotics and Automation (ICRA ceedings of International Conference on Robotics and Automa-
2014), IEEE, pp. 445–452. tion (ICRA), IEEE, pp. 5002–5008.
Hoffmann J and Nebel B (2001) The ff planning system: Fast Rossi F, van Beek P and Walsh T (2006) Handbook of Constraint
plan generation through heuristic search. Journal of Artificial Programming. New York: Elsevier.
Intelligence Research 14(1): 253–302. Scioni E, Borghesan G, Bruyninckx H et al. (2015) Bridging
Hudson TC, Lin MC, Cohen J et al. (1997) V-collide: Acceler- the gap between discrete symbolic planning and optimization-
ated collision detection for vrml. In: Proceedings of VRML. based robot control. In: 2015 IEEE International Conference
Monterey: ACM, pp. 119–125. on Robotics and Automation, IEEE. Accepted for publication.
Silva JaPM and Sakallah KA (1996) GRASP: A new search algorithm for satisfiability. In: Proceedings of the 1996 IEEE/ACM International Conference on Computer-Aided Design, ICCAD '96. IEEE Computer Society, pp. 220–227.
Simeon T (2004) Manipulation planning with probabilistic roadmaps. The International Journal of Robotics Research 23(7-8): 729–746.
Simons P, Niemelä I and Soininen T (2002) Extending and implementing the stable model semantics. Artificial Intelligence 138(1-2): 181–234.
Srivastava S, Fang E, Riano L et al. (2014) Combined task and motion planning through an extensible planner-independent interface layer. In: IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 639–646.
Stallman RM and Sussman GJ (1977) Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence 9(2): 135–196.
Stilman M and Kuffner J (2008) Planning among movable obstacles with artificial constraints. In: Algorithmic Foundation of Robotics VII, Springer Tracts in Advanced Robotics, volume 47. Berlin: Springer, pp. 119–135.
Stilman M, Schamburek JU, Kuffner J et al. (2007) Manipulation planning among movable obstacles. In: Robotics and Automation, 2007 IEEE International Conference on, pp. 3327–3332.
Toussaint M (2015) Logic-geometric programming: An optimization-based approach to combined task and motion planning. In: International Joint Conferences on Artificial Intelligence, AAAI.
Ullman J (1988) Principles of Database and Knowledge-Base Systems. New York: Computer Science Press.
Winkler J, Bartels G, Mösenlechner L et al. (2012) Knowledge Enabled High-Level Task Abstraction and Execution. First Annual Conference on Advances in Cognitive Systems 2(1): 131–148.

Appendices

A. Answer set programming

The reader may wish to consult a brief introduction to Answer Set Programming (Lifschitz, 2008) as a complement to this section.
A rule r is of the following form

H ← B_1, ..., B_m, ∼B_{m+1}, ..., ∼B_n

We use head(r) = H and body(r) = {B_1, ..., B_m, ∼B_{m+1}, ..., ∼B_n} to denote the head and the body of r, respectively, where "∼" stands for default negation.¹⁰ The head H is an atom a belonging to some alphabet A, the falsum ⊥, or a count constraint L {ℓ_1, ..., ℓ_k} U. In the latter, ℓ_i = a_i or ℓ_i = ∼a_i is a literal for a_i ∈ A and 1 ≤ i ≤ k; L and U are integers providing a lower and an upper bound. Either or both of L and U can be omitted, in which case they are identified with the (trivial) bounds 0 and ∞, respectively. If body(r) = ∅, r is called a fact, and we skip "←" when writing facts below. A rule r such that head(r) = ⊥ is an integrity constraint; one with a count constraint in the head is a choice rule, because it amounts to choosing some literals among ℓ_1, ..., ℓ_k subject to the encompassing bounds. Each body component B_i is either an atom or a count constraint, for 1 ≤ i ≤ n.
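For concreteness, the rule forms above can be written as follows in gringo/clingo-style input syntax. This is a minimal illustrative sketch; the predicates p, q, r and s are invented for the example and are not part of the encoding used in this work.

    p(a). p(b).                % facts: the body is empty, so the arrow ":-" is omitted
    r(b).                      % another fact
    q(X) :- p(X), not r(X).    % a normal rule; default negation is written "not"
    :- q(a), r(a).             % an integrity constraint: the head is the falsum
    1 { s(X) : p(X) } 2.       % a choice rule: a count constraint with bounds L = 1 and U = 2 as head

The last rule, for instance, allows an answer set to contain either one or two atoms among s(a) and s(b).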
We adhere to the definition of answer sets provided in Simons et al. (2002). A count constraint L {ℓ_1, ..., ℓ_k} U holds with respect to a set X of atoms if L ≤ |{a | ℓ_j = a, 1 ≤ j ≤ k, a ∈ X} ∪ {∼a | ℓ_j = ∼a, 1 ≤ j ≤ k, a ∉ X}| ≤ U. A body literal B_i (or ∼B_i) holds with respect to X if B_i holds (respectively does not hold) with respect to X, where an atom a holds if a ∈ X. A rule r is satisfied with respect to X if some body literal of r does not hold with respect to X, if H is a count constraint holding with respect to X, or if H ∈ X. Note that an integrity constraint is unsatisfied if all literals in its body hold with respect to X.
A ground logic program Π is a set of ground (i.e. variable-free) rules. A set X of atoms is a model of Π if each r ∈ Π is satisfied with respect to X. An answer set of Π is a model X of Π such that every atom in X is derivable from Π. Roughly speaking, the latter means that, for each a ∈ X, Π contains a rule r with head H = a, or with H being a count constraint comprising a, such that all body literals of r hold with respect to X. We also note that programs are required to be safe (Ullman, 1988), that is, each variable must occur in a positive body literal. Predicates such as p and variables such as Y in "p(Y)" are written as lowercase and uppercase strings, respectively. Default negation ∼ is written as "not".
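As a small worked example of these definitions (again a hypothetical toy program, unrelated to the planning encoding), consider:

    item(a). item(b).                        % facts
    1 { chosen(X) : item(X) } 1.             % choice rule: pick exactly one item
    rejected(X) :- item(X), not chosen(X).   % default negation "not"
    :- chosen(b).                            % integrity constraint ruling out item b

After grounding, the unique answer set is {item(a), item(b), chosen(a), rejected(b)}: every rule is satisfied (in particular, the count constraint 1 {chosen(a), chosen(b)} 1 holds because exactly one of its literals is in the set), every atom in the set is derivable from a rule whose body holds, and the alternative candidate containing chosen(b) is ruled out by the integrity constraint. Both rules with variables are safe, since X occurs in the positive body literal item(X).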
B. Definitions

Definition 1 Geometric reachable set
Let us consider a sequence of actions P = ⟨A_1, ..., A_n⟩. We denote a geometric instantiation of P applied on the geometric state s by

π(s) = ⟨ ^{i_1}a_1, ..., ^{i_n}a_n ⟩

where i_1, ..., i_n represent the action indexes of each action, and we denote the resolution used for discretizing an action A_j by r_j (see Section 7.2). Let k_{π(s)} (respectively c_{π(s)}) be the boolean function returning true if all actions in π(s) are kinematically valid (respectively collision-free), and false otherwise. We define the Geometric Reachable Set from the geometric state s by an action A_q ∈ P as:

R(A_q, s, P) = { a_q ∈ π(s) | π(s) is a geometric instance of P, a_q is a geometric instance of A_q, k_{π(s)}, c_{π(s)} }

and similarly, we define the Geometric kinematically Reachable Set from the geometric state s by an action A_q ∈ P, which does not take collisions into account, as:

R_kin(A_q, s, P) = { a_q ∈ π(s) | π(s) is a geometric instance of P, a_q is a geometric instance of A_q, k_{π(s)} }.
Definition 2 Geometric dependencies between two actions
Let A_p, A_q ∈ P = ⟨A_1, ..., A_n⟩ be two symbolic actions such that p < q. Let s_0 be the initial geometric state on which P applies. Let π(s_0) and π'(s_0) be two geometric instantiations of the symbolic subsequence ⟨A_1, ..., A_p⟩ which only differ with respect to the geometric instance chosen for A_p

π(s_0) = ⟨ ^{i_1}a_1, ^{i_2}a_2, ..., ^{i_p}a_p ⟩
π'(s_0) = ⟨ ^{i_1}a_1, ^{i_2}a_2, ..., ^{i'_p}a_p ⟩,  with i'_p ≠ i_p

and s_p, s'_p the geometric states resulting from π(s_0) and π'(s_0), respectively. A_p is geometrically dependent on A_q iff ∃ π(s_0), π'(s_0) such that

R(A_q, s_p, ⟨A_{p+1}, ..., A_q⟩) ≠ R(A_q, s'_p, ⟨A_{p+1}, ..., A_q⟩)

which we denote by A_p ↝ A_q. In other words, A_p is geometrically dependent on A_q if changing the geometric instance of A_p can lead to a different geometric reachable set for A_q. Similarly, A_p is directly geometrically dependent on A_q iff ∃ π(s_0), π'(s_0) such that:

R_kin(A_q, s_p, ⟨A_{p+1}, ..., A_q⟩) ≠ R_kin(A_q, s'_p, ⟨A_{p+1}, ..., A_q⟩),

which we denote by A_p ↝_dir A_q.
C. ASP encoding of the planning problem
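As a point of reference, the listing below sketches the general shape of a STRIPS-like planning encoding in gringo/clingo-style syntax. The action, fluent and predicate names (pick and place of a single object obj, occurs/2, holds/2) are invented for this illustration and do not reproduce the actual encoding used in this work.

    #const horizon = 5.
    time(1..horizon).

    % Hypothetical actions and fluents for a single object.
    action(pick(obj)).  action(place(obj)).
    fluent(holding(obj)).  fluent(ontable(obj)).

    % Initial state.
    holds(ontable(obj),0).

    % Choice rule: exactly one action occurs at each time step.
    1 { occurs(A,T) : action(A) } 1 :- time(T).

    % Preconditions, expressed as integrity constraints.
    :- occurs(pick(obj),T),  not holds(ontable(obj),T-1).
    :- occurs(place(obj),T), not holds(holding(obj),T-1).

    % Effects.
    holds(holding(obj),T) :- occurs(pick(obj),T).
    holds(ontable(obj),T) :- occurs(place(obj),T).

    % Frame axiom: a fluent persists unless it is terminated.
    holds(F,T) :- holds(F,T-1), fluent(F), time(T), not terminated(F,T).
    terminated(ontable(obj),T) :- occurs(pick(obj),T).
    terminated(holding(obj),T) :- occurs(place(obj),T).

    % Goal: the object is held at the last time step.
    :- not holds(holding(obj),horizon).

Each answer set of such a program corresponds to one symbolic plan of bounded length; in a combined task and motion planning setting, additional constraints derived from geometric failures can be added to prune plans that are not geometrically feasible.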