Behavior Planning For Character Animation
Abstract
This paper explores a behavior planning approach to automatically generate realistic motions for animated characters. Motion clips are abstracted as high-level behaviors and associated with a behavior finite-state machine (FSM) that defines the movement capabilities of a virtual character. During runtime, motion is generated automatically by a planning algorithm that performs a global search of the FSM and computes a sequence of behaviors for the character to reach a user-designated goal position. Our technique can generate interesting animations using a relatively small amount of data, making it attractive for resource-limited game platforms. It also scales efficiently to large motion databases, because the search performance is primarily dependent on the complexity of the behavior FSM rather than on the amount of data. Heuristic cost functions that the planner uses to evaluate candidate motions provide a flexible framework from which an animator can control character preferences for certain types of behavior. We show results of synthesized animations involving up to one hundred human and animal characters planning simultaneously in both static and dynamic environments.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Animation
1. Introduction
Creating realistic motion for animated characters is an important problem with applications ranging from the special effects industry to interactive games and simulations. The use of motion capture data for animating virtual characters has become a popular technique in recent years. By capturing the movement of a real human and replaying this movement on a virtual character, the resulting motion exhibits a high degree of realism. However, it can be difficult to re-use existing motion capture data, and to adapt existing data to different environments and conditions.
This paper presents a behavior planning approach to automatically generate realistic motions for animated characters. We first organize motion clips into an FSM of behaviors. Each state of the FSM contains a collection of motions representing a high-level behavior. Given this behavior FSM and a pre-defined environment, our algorithm searches the FSM and plans a sequence of behaviors that allows the character to reach a user-specified goal. The distinguishing features of our approach are the representation of motion as abstract high-level behaviors, and the application of a global planning technique that searches over these behaviors.

Figure 1: Planned behaviors for 100 animated characters navigating in a complex dynamic environment.

We represent sequences of motion capture clips as high-level behaviors, which are then connected together into an FSM. Each state consists of a collection of long sequences of individual poses that represent a high-level behavior. The output of the planner is a sequence of behaviors that moves the character to a specified goal location. Our planner does not need to know the details of the underlying motion or poses. This abstract representation of motion offers a number of advantages over previous methods, detailed in Section 2 of the paper.

There are a number of techniques that generate motion from a graph-like data structure built from motion capture data [AF02, KGP02, LCR∗02]. These methods utilize search techniques that explore links between large databases of individual poses of motion data. In contrast, our planning algorithm searches over the behavior states of the FSM of behaviors.
Our planning technique synthesizes motion by performing a global search of the behavior FSM. Previous methods synthesize final motion by using a local policy mapping from state to motion. Gleicher et al.'s [GSKJ03] method builds a data structure of motion clips and then replays these clips to generate motion. At runtime, the graph of motions is used as a policy that specifies a motion for the character depending on the state the character is in. The game industry uses move trees [Men99, MBC01] to generate motion for human-like characters. These approaches are reactive, and typically make decisions based only on local information about the environment. In contrast, our approach plans and searches through the FSM to explore candidate behavior sequence solutions. It builds a search tree of the states in the FSM, and considers many different possible paths in a global sense before deciding on the final output motion. This enables our algorithm to avoid many of the pitfalls that purely reactive techniques are prone to, such as becoming trapped in local minima in the environment.
2. Background and Motivation

A number of methods have been proposed in the animation literature for generating motion for synthetic characters. Because virtual environments require continuous streams of motion, approaches that only create individual, static clips cannot be readily utilized. These include keyframing, motion capture editing [BW95, WP95, Gle98, LS99, PW99], and motion interpolation [WH97, RCB98]. By contrast, procedural animation techniques can generate arbitrarily long streams of motion. These include behavior scripting [PG96] and physically based methods that simulate natural dynamics [LP02, HWBO95, SHP04, FP03]. The key issues with using physically based techniques for interactive environments have been high computational costs and providing the appropriate amount of high-level control.
A variety of planning and search techniques have been used previously to create meaningful movements for animated characters. Planning approaches that preprocess static environment geometry with graph structures, and subsequently use motion capture data, can produce human-like motion [CLS03, PLS03]. Preprocessing the environment with a roadmap has also been used for generating flocking behaviors [BLA02]. Animations of object manipulation tasks have been synthesized using planning techniques [KKKL94, YKH04]. Planning algorithms have also been used to generate cyclic motions such as walking and crawling [SYN01]. Techniques incorporating the search of control actions or motion primitives include examples involving grasp primitives [KvdP01], various body controllers [FvdPT01], and precomputed vehicle trajectories [GVK04].
Our work is closely related to recent techniques [AF02, KGP02, LCR∗02, PB02, GSKJ03] that build graph-like data structures of motions. These approaches facilitate the re-use of large amounts of motion capture data by automating the process of building a graph of motion. In contrast, our algorithm requires the existence of a behavior FSM and motion data that has been appropriately segmented. However, we have found that it is not difficult to construct and re-use our FSMs because of their small size. More importantly, we believe that by abstracting the raw motion data into high-level behaviors, our approach offers a number of advantages over previous techniques:
Scalability and Efficiency: Our approach can scale to a large amount of data. Gleicher et al. [GSKJ03] described how unstructured motion graphs can be inappropriate for interactive systems that require fast response times. Lee et al. [LL04] explain similarly that for a large motion set, the time required for searching a motion graph is the bottleneck of the method. The number of behavior states in our FSMs (25 in our largest example) is relatively small compared to motion graph approaches, which typically have on the order of thousands of nodes corresponding to individual poses of motion (and potentially tens of thousands of edges). Because of the small size of our FSM, and because our branching factor can be very small compared to that of unstructured graphs, our planner is able to generate long animation sequences very efficiently. Moreover, once we have an FSM that is large enough to produce interesting motions, the inclusion of additional motion data in existing behavior states does not change the complexity; it only adds to the variety of the synthesized motions. In Figure 2, for example, the addition of another jog left motion clip leads to the same FSM. This is in contrast to unstructured motion graph approaches [KGP02, LCR∗02] that require recomputation of the graph whenever additional motion data is added.
Memory Usage: Our method requires a relatively small amount of data in order to generate interesting motions, making it particularly appealing to resource-limited game systems. As an example, our synthesized horse motions (Figure 8) are generated from only 194 frames of data. While these motions do not have as much variety as the synthesized human motions, they may be appropriate for simple characters in some games.

Intuitive Structure: Because the FSM of behaviors is well-structured, the solutions that the planner returns can be understood intuitively. The high-level structure of behaviors makes it easier for a non-programmer or artist to understand and work with our system. For example, a virtual character that wants to retrieve a book from inside a desk in another room needs to do the following: exit the room it is in, get to the other room, enter it, walk over to the desk, open the drawer, and pick up the book. It is relatively difficult for previous techniques to generate motion for such a long and complex sequence of behaviors. Because the FSM effectively partitions the motions into distinct high-level behaviors, planning a solution and synthesizing a resulting animation can be performed in a natural way.
Generality: We can apply our algorithm to different characters and environments without having to design new behavior FSMs. For example, we generated animations for a skateboarder and a horse (Figure 8) using essentially the same FSM (Figure 3). In addition, by representing the motion data at the behavior level, there is no dependence on any particular environment structure. While others have demonstrated synthesized motion for a single character navigating in a flat empty environment or heavily preprocessed terrain, we show examples of up to 100 characters moving simultaneously in a variety of complex dynamic environments.

Optimality: Our method computes optimal sequences of behaviors. The maze example in Lee et al. [LCR∗02] uses a best-first search technique to synthesize motion for a character to follow a user-sketched path. The user-specified path is useful for providing high-level control of the character. However, the greedy nature of these searches may cause unexpected or undesirable deviations from the path. Our approach overcomes this limitation because our FSM is small enough to permit optimal planning. In some cases, generating optimal sequences of motion may be undesirable. For these situations, we can relax the optimality criterion and use other non-optimal search techniques inside the planning module. In addition to providing optimality, carefully designed behavior FSMs can provide coverage guarantees, which is an issue for unstructured graphs [RP04]. Unstructured graphs have no pre-determined connections between motions, and can make no guarantees about how quickly one motion can be reached from another.
Anytime Algorithm: For game systems with limited CPU resources and real-time constraints, our planning algorithm can be interrupted at any time and asked to return the best motion computed up until that point, as in [GVK04].
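To make this interruption pattern concrete, the sketch below shows one way such an interruptible best-first loop could look in Python. It is a minimal illustration, not the authors' implementation; the time budget, the heap layout, and the callables expand and goal_reached are all assumed names.

    import heapq
    import itertools
    import time

    _tie = itertools.count()  # tie-breaker so the heap never compares nodes directly

    def push(open_list, priority, node):
        heapq.heappush(open_list, (priority, next(_tie), node))

    def plan_anytime(open_list, expand, goal_reached, budget_s):
        """Interruptible best-first search: if the time budget expires, return
        the node at the head of the priority queue as the best motion so far."""
        deadline = time.monotonic() + budget_s
        while open_list:
            if time.monotonic() >= deadline:
                return open_list[0][2], False   # best partial plan, incomplete
            _, _, node = heapq.heappop(open_list)
            if goal_reached(node):
                return node, True               # full plan reaching the goal
            expand(node, open_list)             # pushes successors via push()
        return None, False                      # open list empty: no path exists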
3. Behavior Planning

We explain our behavior planning approach in more detail. The algorithm takes as input an FSM of behaviors, information about the environment, and starting and goal locations for the character. It uses an A*-search planner [HNR68] to find a sequence of behaviors that allows the character to move from the start to the goal.
3.1. Behavior Finite-State Machine

The behavior FSM defines the movement capabilities of the character. Each state consists of a collection of motion clips that represent a high-level behavior, and each directed edge represents a possible transition between two behaviors. Figure 2 shows a simple FSM. The start state allows the character to transition from standing still to jogging, and the end state allows the character to transition from jogging to standing still. We define a behavior to be the same as a state. However, we may have more than one state labeled, for example, "jog left". This is because it is possible to have multiple "jog left" states, each with different transitions and different connections within the FSM.

Figure 2: A simple FSM of behaviors. Arcs indicate allowable transitions between behavior states. Each state contains a set of example motion clips for a particular behavior.

There can be multiple motion clips within a state. Having multiple clips that differ slightly in the style or details of the motion adds to the variety of the synthesized motions, especially if there are many characters utilizing the same FSM. However, clips of motion in the same state should be fairly similar at the macro scale, differing only in subtle details. For example, if one "jog left" clip covers a significantly longer distance than another "jog left" clip, they should be placed in different states and assigned different costs.

Individual clips of motion may be extracted from longer clips. Each motion clip has transition labels that correspond to the first and last frame of every clip, along with their nearby frames, taking the velocities of the character into account. Transitions are possible if the end of one clip is similar to the beginning of another. It is advantageous for the beginning and end of every clip to be similar: every clip would then be able to transition to all the others, allowing for a larger variety of possible output motions. In practice, we have found that it is a good idea to include some motion clips that are relatively short compared to the length of the expected solutions. This makes it easier for the planner to globally arrange the clips in a way that avoids obstacles, even in cluttered environments.

Figure 3 shows an example of the FSM used for the human-like character. The most complicated FSM that we used has a similar structure, except for: (1) additional jogging states mostly connected to each other; and (2) more specialized behaviors such as jumping. We captured our human motion data using a Vicon optical system at a frequency of 120 Hz. We used seven types of jogging behaviors: one moving forward, three types of turning left, and three of turning right.
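The paper does not give an implementation of this data structure, but it maps naturally onto a small record type. Below is a minimal Python sketch of a behavior FSM; the class names, the example states, and all numbers are illustrative placeholders, not data from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        """One motion clip: relative root displacement, duration, and cost."""
        name: str
        dx: float        # root translation in the character's local frame (m)
        dy: float
        dtheta: float    # change in root orientation (radians)
        frames: int      # duration in frames (captured at 120 Hz)
        cost: float      # root travel distance multiplied by a user weight

    @dataclass
    class BehaviorState:
        """A high-level behavior, e.g. "jog left"; holds one or more similar clips."""
        name: str
        clips: list = field(default_factory=list)
        successors: list = field(default_factory=list)  # names of reachable states

        def cost(self):
            # A state has a single cost: the average of its clips' costs.
            return sum(c.cost for c in self.clips) / len(self.clips)

    # A toy FSM in the spirit of Figure 2: stand -> jog variants -> stand.
    fsm = {
        "stand_start": BehaviorState("stand_start", [Clip("idle", 0, 0, 0, 30, 0.1)],
                                     ["jog_forward"]),
        "jog_forward": BehaviorState("jog_forward", [Clip("jog_f", 2.0, 0, 0, 60, 2.0)],
                                     ["jog_forward", "jog_left", "stand_end"]),
        "jog_left":    BehaviorState("jog_left", [Clip("jog_l", 1.8, 0.6, 0.5, 60, 2.2)],
                                     ["jog_forward"]),
        "stand_end":   BehaviorState("stand_end", [Clip("idle", 0, 0, 0, 30, 0.1)], []),
    }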
Algorithm 1: BEHAVIORPLANNER

    Tree.Initialize(s_init);
    Queue.Insert(s_init, DistToGoal(s_init, s_goal));
    while !Queue.Empty() do
        s_best ← Queue.RemoveMin();
        if GoalReached(s_best, s_goal) then
            return s_best;
        end
        e ← E(s_best.time);
        A ← F(s_best, e);
        foreach a ∈ A do
            s_next ← T(s_best, a);
            if G(s_next, s_best, e) then
                Tree.Expand(s_next, s_best);
                Queue.Insert(s_next, DistToGoal(s_next, s_goal));
            end
        end
    end
    return no possible path found;
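Algorithm 1 translates almost line-for-line into Python. The sketch below is an illustrative reconstruction, not the authors' code: E, F, T, G, goal_reached, and dist_to_goal are passed in as callables matching the roles described in the text, states are assumed hashable, and the priority follows the text's description (path cost plus expected cost to the goal), with an assumed inflation weight w (w = 1 gives plain A*; w = 2 gives the inflated variant discussed below).

    import heapq
    import itertools

    def behavior_planner(s_init, s_goal, E, F, T, G, goal_reached, dist_to_goal, w=1.0):
        """A* search over the behavior FSM, mirroring Algorithm 1."""
        tie = itertools.count()              # heap tie-breaker
        parent = {s_init: None}              # the search tree, as back-pointers
        queue = [(w * dist_to_goal(s_init, s_goal), next(tie), s_init)]
        while queue:
            _, _, s_best = heapq.heappop(queue)      # lowest estimated total cost
            if goal_reached(s_best, s_goal):
                path = []                            # walk the tree back to the root
                while s_best is not None:
                    path.append(s_best)
                    s_best = parent[s_best]
                return path[::-1]
            e = E(s_best.time)                       # environment at this node's time
            for a in F(s_best, e):                   # actions the FSM allows here
                s_next = T(s_best, a)                # apply Equation 1
                if G(s_next, s_best, e):             # collision-free, not yet visited
                    parent[s_next] = s_best
                    priority = s_next.cost + w * dist_to_goal(s_next, s_goal)
                    heapq.heappush(queue, (priority, next(tie), s_next))
        return None                                  # no possible path found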
Each action a in the FSM stores the following information: (1) the relative change in the character's root position and orientation (x(a), y(a), θ(a)); (2) the change in time t(a) (represented by the change in number of frames); and (3) the change in cost, cost(a).
Pseudocode for the behavior planner is shown in Algorithm 1. The planner initializes the root of the tree with the state s_init, which represents the starting configuration of the character at time t = 0. Most of our synthesized results were generated using A* search, so the total cost is the sum of the cost of the path up to that node and the expected cost to reach the goal (DistToGoal). In addition to A* search, we also experimented with truncated A* and inflated A* search. For inflated A*, we set the relative weight of the estimated cost to reach the goal to be twice the weight of the cost of the path taken so far. The planner iteratively expands the lowest-cost node s_best in the queue until either a solution is found, or the queue is empty, in which case there is no possible solution. If s_best reaches s_goal (within some small tolerance ε), then the solution path from the root node to s_best is returned. Otherwise, the successor states of s_best are considered for expansion.

The function F returns the set of actions A that the character is allowed to take from s_best. This set is determined by the transitions in the FSM. Some transitions may only be valid when the character's position is in the "near" regions of special obstacles. Moreover, F can add more variety to the synthesized motion by randomly selecting a motion clip within each chosen state, if there are multiple clips in a state.

The function T takes the input state s_in and an action a as parameters, and returns the output state s_out resulting from the execution of that action (Equation 1). The function f represents the translation and rotation that may take place for each clip of motion. The cost of each clip is computed as the distance that the root position travels, multiplied by a user weight. The distance that the root position travels is estimated by the Euclidean distance between the projections of the start and end frames onto the ground plane. If there are multiple clips in a state, their costs should be similar; otherwise they should be placed in different states. Each state has only one cost: for multiple clips in the same state, we take the average of the costs of the clips. The search algorithm is optimal with respect to the states' costs.

    s_out.pos  = s_in.pos + f(s_in.ori, x(a), y(a), θ(a))
    s_out.ori  = s_in.ori + θ(a)
    s_out.time = s_in.time + t(a)
    s_out.cost = s_in.cost + cost(a)                          (1)
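As one concrete reading of Equation 1 and the cost definition above, the sketch below implements T in Python, with f interpreted as rotating the clip's local root displacement into the world frame. The State fields and the Clip record (from the earlier FSM sketch) are assumptions for illustration, not structures prescribed by the paper.

    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        x: float
        y: float
        ori: float     # root orientation (radians)
        time: int      # accumulated frames
        cost: float    # accumulated path cost g

    def f(ori, dx, dy, dtheta):
        """Rotate the clip's root displacement, given in the character's
        local frame, into the world frame at the current orientation."""
        c, s = math.cos(ori), math.sin(ori)
        return c * dx - s * dy, s * dx + c * dy

    def T(s_in, a):
        """Equation 1: the state after executing action a (one clip) from s_in."""
        wx, wy = f(s_in.ori, a.dx, a.dy, a.dtheta)
        return State(x=s_in.x + wx, y=s_in.y + wy,
                     ori=s_in.ori + a.dtheta,
                     time=s_in.time + a.frames,
                     cost=s_in.cost + a.cost)

    def clip_cost(root_start, root_end, weight):
        """Cost of a clip: Euclidean distance between the ground-plane
        projections of the first and last frames, times a user weight."""
        return weight * math.hypot(root_end[0] - root_start[0],
                                   root_end[1] - root_start[1])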
The user weights assigned to each action correspond to the character's preference for executing a particular action. For example, the weights for jogging-and-turning are set slightly higher than the weights for jogging forward. The weights for jumping are higher than those for any jogging motion, reflecting the relatively higher effort required to jump. Hence the character will prefer to jog rather than jump over the same distance whenever jogging is possible. The stopping and waiting motions have the highest weights; the character should prefer the other motions unless it is much better to stop and wait for a short time. There is also an additional cost for walking up or down sloped terrain. This makes it preferable to choose a path that is flat, if such a path exists, over one that requires traversing rough terrain. The user can easily change these relative weights in order to define a particular set of preferences for the character's output motion. Because the number of behavior states in the FSM is relatively small, we have not found parameter tuning to be a major issue in practice.

The function G determines whether we should expand s_next as a child node of s_best in the tree. First, collision checking is performed on the position of s_next. This also checks the intermediate positions of the character between s_best and s_next. The discretization of the positions between these two states should be set appropriately according to the speed and duration of the action. The amount of discretization is a tradeoff between the search speed and the accuracy of the collision checking. For special actions such as jumping, we also check whether the character is inside the "within" regions of any corresponding obstacles during the execution of the action. In the case of a jumping motion, for example, since we have annotated when the jump occurs, we can add this time to the accumulated time at that point (s_best.time) and use the total time to index the function E.

As a final step for function G, we utilize a state-indexed table to keep track of locations in the environment that have previously been visited. If the global position and orientation of a potential node s_next has been visited before, the function G returns false, thereby keeping the node from being expanded again.
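A minimal sketch of such a G, assuming straight-line interpolation between the two root positions (the real clip trajectory could be sampled instead) and illustrative values for the sampling step, the grid cell size, and the number of orientation bins:

    import math

    def make_G(step=0.25, cell=0.5, angle_bins=16):
        """Build the expansion test: swept collision checking plus a
        state-indexed table of visited (position, orientation) cells.
        All parameters here are assumed, not taken from the paper."""
        visited = set()

        def G(s_next, s_best, e):
            # 1. Collision-check intermediate positions between the two states.
            dx, dy = s_next.x - s_best.x, s_next.y - s_best.y
            n = max(1, int(math.hypot(dx, dy) / step))
            for i in range(n + 1):
                x = s_best.x + dx * i / n
                y = s_best.y + dy * i / n
                if e.occupied(x, y):   # assumed API; could also take s_next.time
                    return False
            # 2. Reject nodes whose quantized pose cell was already visited.
            key = (round(s_next.x / cell), round(s_next.y / cell),
                   round(s_next.ori / (2 * math.pi) * angle_bins) % angle_bins)
            if key in visited:
                return False
            visited.add(key)
            return True

        return G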
Figure 6: A dynamic environment with a falling tree. Left: Before it falls, the characters are free to jog normally. Center: As it is falling, the characters can neither jog past nor jump over it. Right: After it has fallen, the characters can jump over it. See the accompanying video for the animation.
For steeper slopes, it would be best to include a "jog up terrain" behavior in our FSM. Second, an animator can control a character's preferences for certain types of behavior by adjusting the cost functions that the planner uses to evaluate candidate motions. In the example, one solution (the light blue path) is generated using the normal costs for jumping and navigating uphill. Another solution (the dark blue path) is generated using very high costs for these two motions, with everything else being the same. The one with normal costs produces a more direct path: the character jogs through the terrain and executes a jumping motion. The one with higher costs produces a longer path, but it is optimal with respect to the costs, given that the character prefers neither to jump nor to navigate the elevated part of the terrain if possible. Third, our technique scales efficiently to large motion databases, since the search performance is primarily dependent on the complexity of the behavior FSM rather than on the amount of motion data. We artificially create an FSM with over 100,000 motion clips by including the same motion clip in a state 100,000 times. This does not affect the resulting motion, but it simulates a large database of motion. For this artificial example, the database size increased by about 1000 times (from 2,000 frames to 2,000,000), while the search time only doubled (from about 2 seconds to 4).

Figure 8: Top: Snapshots of a skateboarder stopping and jumping over a moving obstacle. Bottom: Three horses galloping across a field of fast-moving obstacles.

Figure 8 (top) shows a skateboarder stopping and then jumping over a moving hook-shaped obstacle. The skateboarder executes a "stop and wait" motion to prepare for the timing of its jump. The planner takes about one second to generate 20 seconds of motion.

An example of three horses simultaneously avoiding each other and a number of moving obstacles is shown in Figure 8 (bottom). The motions of the moving obstacles are pre-generated by a rigid-body dynamics solver. Their positions at discrete time steps are then automatically stored in the time-indexed gridmaps representing the environment.
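The paper does not detail the gridmap layout; one plausible reconstruction is a dictionary of occupancy sets keyed by time step, rasterized once from the solver output and then queried by a state's accumulated time. The cell size and the frames-per-step value below are assumptions.

    import math

    class TimeIndexedGridmap:
        """Occupancy grids stamped at discrete time steps, for moving
        obstacles whose trajectories are known a priori."""
        def __init__(self, cell=0.5, frames_per_step=12):  # 0.1 s steps at 120 Hz
            self.cell = cell
            self.frames_per_step = frames_per_step
            self.steps = {}              # time step -> set of occupied cells

        def _key(self, x, y):
            return int(math.floor(x / self.cell)), int(math.floor(y / self.cell))

        def add_obstacle(self, step, x, y):
            """Rasterize one obstacle sample (e.g. from the rigid-body solver)."""
            self.steps.setdefault(step, set()).add(self._key(x, y))

        def occupied(self, x, y, time_frames=0):
            """Query the grid at the step containing the given accumulated time."""
            step = time_frames // self.frames_per_step
            return self._key(x, y) in self.steps.get(step, set())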
5. Discussion

We have presented a behavior planning approach to automatically generate realistic motions for animated characters. We model the motion data as abstract high-level behaviors. Our behavior planner then performs a global search of a data structure of these behaviors to synthesize motion. Although a designer has to connect these behaviors together into a behavior FSM, the payoff of this well-organized data structure is that the graph size, the behavior semantics, the search time, and the level of control over the resulting motion are all vastly improved.

Our behavior planning approach can provide guarantees of completeness and optimality. It is complete in the sense that if a solution exists, the algorithm will find it, and if no solution exists, it will fail in a finite amount of time. The method is optimal with respect to the behavior costs defined for the A* search. This optimality criterion, however, is not a necessity. We have shown that other non-optimal searches such as truncated or inflated A* can be used to produce reasonable results. A limitation of being complete and optimal is that A* is an exponential search method. But this does not present a problem in practice when applied to small data structures such as our behavior FSMs. Our planner can search through several thousand states in a fraction of a second, and cover fairly long distances in a virtual environment.
The primary drawback of our approach compared to motion graphs [AF02, KGP02, LCR∗02] is that motion graphs can be constructed automatically from a general set of motion data. Our behavior planning approach relies on the existence of a behavior FSM, and on motion data clips that have been appropriately categorized into sets of behaviors. In practice, we have found the FSM design process to be relatively simple, and once the FSM is constructed, it can be re-used for different characters and environments. Furthermore, we can add individual clips of motion to the states easily once the basic structure is defined. The majority of the work is, therefore, in the segmentation of the data. We do require that the motions be carefully segmented and that transition labels be identified. However, the added benefit of having abstracted motion clips organized into behaviors outweighs the extra effort of manually segmenting the data; there are many advantages (see Section 2) that come from building a well-defined FSM.

An issue regarding the organization of the motion clips is that it is difficult to know when to group similar clips into the same state. We currently assign a clip to an existing state if the motion matches the high-level description of the behavior and it has the same transition labels as that state. Otherwise, we assign a new state for the clip, or we eliminate the clip completely if it does not fit into the states we want to have in the FSM. This grouping process is qualitative, and is a potential candidate for automation in the future. Arikan et al. [AF02] clustered the edges in their graph of motion data. Lee et al. [LCR∗02] have tried clustering to group similar poses. However, it is difficult to define an automatic algorithm that performs clustering on long sequences of motion accurately for a general set of data. It is difficult even for humans to group similar clips of motion together, and hence getting ground truth for comparison purposes will be a challenge. The segmentation process requires identifying poses of the data that are similar, and can potentially be automated in the future. Identifying similar poses from a set of motion data is already an automated step when building a motion graph, and this is an active area of research [GSKJ03, BSP∗04].

Our examples show planned motions in dynamic environments whose obstacle motions were known a priori. This is reasonable for offline applications in the movie industry, and for some interactive games where the motions of non-player entities are known in advance. Other interactive applications benefit from "on-the-fly" motion generation in environments where human-controlled characters move unpredictably. For this case, we suggest an anytime algorithm (Section 2), in which the planning process is repeatedly interrupted and returns the best motion computed at that point (obtained by examining the state at the top of the planning priority queue). Provided that replanning happens at a high enough frequency relative to the maximum speed of the unpredictable entities, the characters will be able to avoid obstacles "on-the-fly". There is a tradeoff between the computation time and the optimality of the generated motion. For interactive game systems, the designer can minimize the planning time as much as possible, while keeping the synthesized motion reasonable.

Our behavior planner does not allow the virtual character to exactly match precise goal postures. Our focus is on efficiently generating complex sequences of large-scale motions across large, complex terrain involving different behaviors. Given a small number of appropriately designed "go straight", "turn left", and "turn right" actions, our planner can generate motions that cover all reachable space at the macro scale. No motion editing is required to turn by fractional amounts or traverse a fractional distance, because we are computing motion for each character to travel over relatively long distances (compared to each motion clip). The algorithm globally arranges the motion clips in a way that avoids obstacles in cluttered environments while reaching distant goals. The character stops when it is within a small distance ε of the goal location. If matching a precise goal posture is required, motion editing techniques [BW95, WP95] may be used after the blending stage.

Although our results include animations of multiple characters, we did not originally intend to build a system for generating motions for crowds. We do not claim our method to be better than existing specialized commercial crowd animation systems. Nevertheless, the application of our planning technique to multiple characters produces results that are quite compelling. In addition, to the best of our knowledge, existing crowd animation systems utilize local reactive steering methods [Rey87] rather than a global planning approach.
A possible direction for future work is to parametrize the states in the FSM. Instead of having a "jog left" behavior state, we can have a "jog left by x degrees" state. Such a state might use interpolation methods [WH97, RCB98] to generate an arbitrary turn-left motion given a few input clips. This can decrease the amount of input data needed, while increasing the variety of motion the planner can generate. We can also have behaviors such as "jump forward x meters over an object of height h". This would allow our system to work in a larger variety of environments.
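As a rough illustration of what such a parametrized state could look like, the sketch below linearly blends the root displacements of two example clips that bracket the requested angle, reusing the hypothetical Clip record from the earlier FSM sketch. The linear blend is an assumption for illustration only; a real system would interpolate full joint trajectories [WH97, RCB98].

    import math

    def blend_turn(clip_a, clip_b, angle):
        """Synthesize a "jog left by `angle` radians" clip from two examples."""
        t = (angle - clip_a.dtheta) / (clip_b.dtheta - clip_a.dtheta)
        t = min(1.0, max(0.0, t))        # stay within the span of the examples
        lerp = lambda p, q: p + t * (q - p)
        return Clip(name="jog_left_%.0fdeg" % math.degrees(angle),
                    dx=lerp(clip_a.dx, clip_b.dx),
                    dy=lerp(clip_a.dy, clip_b.dy),
                    dtheta=angle,
                    frames=int(round(lerp(clip_a.frames, clip_b.frames))),
                    cost=lerp(clip_a.cost, clip_b.cost))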
Acknowledgements: We thank Moshe Mahler for the human character models and for helping with video production. We thank Justin Macey for cleaning the motion capture data, and the CMU Graphics Lab for providing many resources used for this project. Finally, we are grateful to Alias/Wavefront for the donation of their Maya software. This research was partially supported by NSF grants ECS-0325383, ECS-0326095, and ANI-0224419.
References

[AF02] ARIKAN O., FORSYTH D. A.: Interactive motion generation from examples. ACM Transactions on Graphics 21, 3 (July 2002), 483–490.

[BLA02] BAYAZIT O. B., LIEN J.-M., AMATO N. M.: Roadmap-based flocking for complex environments. In Pacific Conference on Computer Graphics and Applications (2002), pp. 104–115.

[BSP∗04] BARBIČ J., SAFONOVA A., PAN J.-Y., FALOUTSOS C., HODGINS J. K., POLLARD N. S.: Segmenting motion capture data into distinct behaviors. In Proceedings of Graphics Interface 2004 (July 2004), pp. 185–194.

[BW95] BRUDERLIN A., WILLIAMS L.: Motion signal processing. In SIGGRAPH 95 Conference Proceedings (1995), ACM SIGGRAPH, pp. 97–104.

[CLS03] CHOI M. G., LEE J., SHIN S. Y.: Planning biped locomotion using motion capture data and probabilistic roadmaps. ACM Transactions on Graphics 22, 2 (Apr. 2003), 182–203.

[FP03] FANG A. C., POLLARD N. S.: Efficient synthesis of physically valid human motion. ACM Transactions on Graphics (SIGGRAPH 2003) 22, 3 (July 2003), 417–426.

[FvdPT01] FALOUTSOS P., VAN DE PANNE M., TERZOPOULOS D.: The virtual stuntman: dynamic characters with a repertoire of autonomous motor skills. Computers and Graphics 25, 6 (2001), 933–953.

[Gle98] GLEICHER M.: Retargetting motion to new characters. In Proc. ACM SIGGRAPH 98 (Annual Conference Series) (1998), pp. 33–42.

[GSKJ03] GLEICHER M., SHIN H. J., KOVAR L., JEPSEN A.: Snap-together motion: assembling run-time animations. ACM Transactions on Graphics 22, 3 (July 2003), 702.

[GVK04] GO J., VU T., KUFFNER J.: Autonomous behaviors for interactive vehicle animations. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Aug. 2004).

[HNR68] HART P., NILSSON N., RAPHAEL B.: A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Sys. Sci. and Cyb. 4 (1968), 100–107.

[HWBO95] HODGINS J., WOOTEN W., BROGAN D., O'BRIEN J. F.: Animating human athletics. In Proc. ACM SIGGRAPH 95 (Annual Conference Series) (1995), pp. 71–78.

[KGP02] KOVAR L., GLEICHER M., PIGHIN F.: Motion graphs. ACM Transactions on Graphics 21, 3 (July 2002), 473–482.

[KKKL94] KOGA Y., KONDO K., KUFFNER J. J., LATOMBE J.-C.: Planning motions with intentions. In Proceedings of SIGGRAPH 94 (July 1994), pp. 395–408.

[KvdP01] KALISIAK M., VAN DE PANNE M.: A grasp-based motion planning algorithm for character animation. J. Visualization and Computer Animation 12, 3 (2001), 117–129.

[LaV] LAVALLE S. M.: Planning Algorithms. Cambridge University Press (also available at http://msl.cs.uiuc.edu/planning/). To be published in 2006.

[LCR∗02] LEE J., CHAI J., REITSMA P. S. A., HODGINS J. K., POLLARD N. S.: Interactive control of avatars animated with human motion data. ACM Transactions on Graphics 21, 3 (July 2002), 491–500.

[LL04] LEE J., LEE K. H.: Precomputing avatar behavior from human motion data. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004), ACM Press, pp. 79–87.

[LP02] LIU C. K., POPOVIĆ Z.: Synthesis of complex dynamic character motion from simple animations. ACM Transactions on Graphics 21, 3 (July 2002), 408–416.

[LS99] LEE J., SHIN S. Y.: A hierarchical approach to interactive motion editing for human-like figures. In Proc. ACM SIGGRAPH 99 (Annual Conference Series) (1999), pp. 39–48.

[MBC01] MIZUGUCHI M., BUCHANAN J., CALVERT T.: Data driven motion transitions for interactive games. Eurographics 2001 Short Presentations (September 2001).

[Men99] MENACHE A.: Understanding Motion Capture for Computer Animation and Video Games. Morgan Kaufmann Publishers Inc., 1999.

[PB02] PULLEN K., BREGLER C.: Motion capture assisted animation: Texturing and synthesis. ACM Transactions on Graphics 21, 3 (July 2002), 501–508.

[PG96] PERLIN K., GOLDBERG A.: Improv: A system for scripting interactive actors in virtual worlds. In Proc. ACM SIGGRAPH 96 (Annual Conference Series) (1996), pp. 205–216.

[PLS03] PETTRÉ J., LAUMOND J.-P., SIMÉON T.: A 2-stages locomotion planner for digital actors. In Symposium on Computer Animation (Aug. 2003), pp. 258–264.

[PW99] POPOVIĆ Z., WITKIN A.: Physically based motion transformation. In Proc. ACM SIGGRAPH 99 (Annual Conference Series) (1999), pp. 11–20.

[RCB98] ROSE C., COHEN M., BODENHEIMER B.: Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications 18, 5 (1998), 32–40.

[Rey87] REYNOLDS C. W.: Flocks, herds, and schools: A distributed behavioral model. In Computer Graphics (SIGGRAPH '87 Proceedings) (July 1987), vol. 21, pp. 25–34.

[RP04] REITSMA P. S. A., POLLARD N. S.: Evaluating motion graphs for character navigation. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Aug. 2004).

[SHP04] SAFONOVA A., HODGINS J. K., POLLARD N. S.: Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. ACM Transactions on Graphics (SIGGRAPH 2004) 23, 3 (Aug. 2004).

[SYN01] SHILLER Z., YAMANE K., NAKAMURA Y.: Planning motion patterns of human figures using a multi-layered grid and the dynamics filter. In Proceedings of the IEEE International Conference on Robotics and Automation (2001), pp. 1–8.

[WH97] WILEY D., HAHN J.: Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications 17, 6 (1997), 39–45.

[WP95] WITKIN A. P., POPOVIĆ Z.: Motion warping. In Proceedings of SIGGRAPH 95 (Aug. 1995), Computer Graphics Proceedings, Annual Conference Series, pp. 105–108.

[YKH04] YAMANE K., KUFFNER J. J., HODGINS J. K.: Synthesizing animations of human manipulation tasks. ACM Transactions on Graphics (SIGGRAPH 2004) 23, 3 (Aug. 2004).
© The Eurographics Association 2005.