Motion Capture Assisted Animation: Texturing and Synthesis
We discuss a method for creating animations that allows the animator to sketch an animation by setting a small number of keyframes on a fraction of the possible degrees of freedom. Motion capture data is then used to enhance the animation. Detail is added to the degrees of freedom that were keyframed, a process we call texturing. Degrees of freedom that were not keyframed are synthesized. The method takes advantage of the fact that joint motions of an articulated figure are often correlated, so that given an incomplete data set, the missing degrees of freedom can be predicted from those that are present.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation; J.5 [Arts and Humanities]: performing arts

Keywords: animation, motion capture, motion texture, motion synthesis
1 Introduction
As the availability of motion capture data has increased, there has been more and more interest in using it as a basis for creating computer animations when life-like motion is desired. However, there are still a number of difficulties to overcome concerning its use. As a result, most high quality animations are still created by keyframing by skilled animators.

Animators usually prefer to use keyframes because they allow precise control over the actions of the character. However, creating a life-like animation by this method is extremely labor intensive. If too few keyframes are set, the motion may lack the detail we are used to seeing in live motion (figure 1). The curves generated between key poses by the computer are usually smooth splines or other forms of interpolation, which may not represent the way a live human or animal moves. The animator can put in as much detail as he or she wants, even to the point of specifying the position at every time, but more detail requires more time and effort. A second reason keyframing can be extremely labor intensive is that a typical model of an articulated figure has over 50 degrees of freedom, each of which must be painstakingly keyframed.

Figure 1: Comparison of keyframed data and motion capture data for root y translation during walking. (a) keyframed data, with keyframes indicated by red dots. (b) motion capture data. In this example, the keyframed data has been created by setting the minimum possible number of keys needed to describe the motion. Notice that while it is very smooth and sinusoidal, the motion capture data shows irregularities and variations. These natural fluctuations are inherent to live motion. A professional animator would achieve such detail by setting more keys.

Motion capture data, on the other hand, provides all the detail and nuance of live motion for all the degrees of freedom of a character. However, it has the disadvantage of not giving the animator full control over the motion. Motion capture sessions are labor intensive and costly, and if the actor does not do exactly what the animator had in mind, or if plans change after the motion capture session, it can be difficult and time consuming to adapt the motion capture data to the desired application.

A more subtle problem with motion capture data is that it is not an intuitive way to begin constructing an animation. Animators are usually trained to use keyframes, and will often build an animation by first making a rough version with a few keyframes to sketch out the motion, then adding complexity and detail on top of that. It is not easy or convenient for an animator to start creating an animation from a detailed motion he or she did not create and does not know every aspect of.
We propose a method for combining the strengths of keyframe animation with those of using motion capture data. The animator begins by creating a rough sketch of the scene by setting a small number of keyframes on a few degrees of freedom. Our method then uses the information in motion capture data to add detail to the degrees of freedom that were animated, if desired, a process we call adding "texture" to the motion. Degrees of freedom that were not keyframed at all are synthesized. The result is an animation that does exactly what the animator wants it to, but has the nuance of live motion.
2 Related Work

There has been a great deal of past research in a number of areas related to our project. We divide this work into four main categories, described below.

We are also interested in adapting motion capture data to different situations. However, rather than starting with the live data, we start with a sketch, created by the animator, of what the final result should be, and fit the motion capture data onto that framework. As a result, our method can be used to create motions substantially different from those in the original data.
Figure 4: Breaking data into fragments. The bands of the keyframed data and motion capture data shown with red dashed lines in figure 3 are broken into fragments where the sign of the first derivative changes. (a) keyframed data. (b) motion capture data. (c) keyframed data broken into fragments. (d) motion capture data broken into fragments.

Figure 5: Matching. Each keyframed fragment is compared to all of the motion capture fragments, and the K closest matches are kept. Shown is the process of matching the first fragment of figure 4(c). (a) The keyframed fragment to be matched. (b) The keyframed fragment, shown as a thick blue line, compared to all of the motion capture fragments, shown as thin black lines. (c) Same as (b), but the motion capture fragments have been stretched or compressed to the same length as the keyframed fragment. (d) Same as (c), but only the 5 closest matches are shown.
The data for the matching angles (the left hip and left knee x angles, in our example) are broken into fragments at those locations (figure 4). Note that in the figures we illustrate the process for just one of the matching angles, the hip, but the process is actually applied to all of the matching angles simultaneously. We also match the first derivative of the chosen band of each of these angles. Including the first derivatives in the matching helps choose fragments of real data that are more closely matched not only in value but in dynamics to the keyframed data. Note that the sign change of the first derivative of only one of the angles is used to determine where to break all of the data corresponding to the matching angles into fragments, so that all are broken at the same locations.
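To make the fragmenting step concrete, the sketch below breaks a band of joint-angle curves into fragments at the sign changes of one reference angle's first derivative, cutting all angles at the same frames. It assumes the band is stored as a NumPy array of shape (num angles, num frames); the function and variable names are ours, not the paper's.

    import numpy as np

    def break_into_fragments(band, reference_index=0):
        """Break a (num_angles, num_frames) band into fragments wherever
        the first derivative of one reference angle changes sign, so that
        every angle is cut at the same frame indices."""
        velocity = np.diff(band[reference_index])
        # Frames where the sign of the first derivative flips.
        flips = np.where(np.diff(np.sign(velocity)) != 0)[0] + 1
        boundaries = np.concatenate(([0], flips, [band.shape[1] - 1]))
        # Adjacent fragments share their boundary frame (a one-sample
        # overlap), as assumed later by the joining step.
        return [band[:, boundaries[i]:boundaries[i + 1] + 1]
                for i in range(len(boundaries) - 1)]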
All of the fragments of keyframed data in the chosen frequency band, along with their first derivatives, are stepped through one by one, and for each we ask which fragment of real data is most similar (figure 5). To make this comparison, we stretch or compress the real data fragments in time, by linearly resampling them, to make them the same length as the keyframed fragment. In the motion capture data, there are often unnatural poses held for relatively long periods of time for calibration purposes. To avoid choosing these fragments, any real fragment that was originally more than 4 times as long as the fragment of keyframed data being matched is rejected. We find the sum of squared differences between the keyframed fragment being matched and each of the real data fragments, and keep the K closest matches. As we save fragments of the matching angles, we also save the corresponding fragments of original motion capture data (not just the frequency band being matched) for all of the angles to be synthesized or textured (figure 6).
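The comparison itself can be sketched as follows: each real fragment is linearly resampled to the keyframed fragment's length, fragments originally more than 4 times as long are rejected, and the remainder are ranked by the sum of squared differences over both values and first derivatives. This is an illustrative reconstruction, not the paper's code.

    import numpy as np

    def resample(frag, length):
        """Linearly resample a (num_angles, n) fragment to a new length."""
        old_t = np.linspace(0.0, 1.0, frag.shape[1])
        new_t = np.linspace(0.0, 1.0, length)
        return np.stack([np.interp(new_t, old_t, row) for row in frag])

    def k_closest_matches(key_frag, real_frags, k):
        """Return indices of the K real fragments closest to the keyframed
        fragment in summed squared difference over values and derivatives."""
        n = key_frag.shape[1]
        key = np.concatenate([key_frag, np.gradient(key_frag, axis=1)])
        scores = []
        for frag in real_frags:
            # Reject calibration-style fragments held far longer than the sketch.
            if frag.shape[1] > 4 * n:
                scores.append(np.inf)
                continue
            r = resample(frag, n)
            cand = np.concatenate([r, np.gradient(r, axis=1)])
            scores.append(np.sum((cand - key) ** 2))
        return np.argsort(scores)[:k]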
At this point, it is sometimes beneficial to include a simple scale factor. Let A be the m × n matrix of values in the keyframed data being matched, where m is the number of matching angles and n is the length of those fragments. Let M be the m × n matrix of one of the K choices of matching fragments. Then to scale the data, we look for the scale factor s that minimizes ‖Ms − A‖. The factor s is then multiplied by all of the data being synthesized. In practice such a scale factor is useful only in a limited set of cases, because it assumes a linear relationship between the magnitude of the matching angles and the magnitude of the rest of the angles, which is not usually likely to be true. However, it can improve the resulting animations for cases in which the keyframed data is similar to the motion capture data and the action is fairly constrained, such as walking.
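Finding this scale factor is a one-dimensional least-squares problem with a closed form: treating M and A as long vectors, the minimizer of ‖Ms − A‖ is s = ⟨M, A⟩ / ⟨M, M⟩. A minimal sketch, with names of our choosing:

    import numpy as np

    def optimal_scale(M, A):
        """Least-squares scale factor s minimizing ||Ms - A|| for a candidate
        matching fragment M and keyframed fragment A, both m x n arrays."""
        return np.sum(M * A) / np.sum(M * M)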
More fragments than just the closest match are saved because there is more to consider than just how close the data fragment is to the original. We must also take into consideration which fragments come before and after, and we would like to encourage the use of consecutive chunks of data, as described in the next section.

Figure 6: Matching and synthesis. (a) The five closest matches for a series of fragments of keyframed data are shown. The keyframed data is shown with a thick blue line; the matching motion capture fragments are shown with thin black lines. (b) An example of one of the angles being synthesized is shown, the lowest spine joint angle rotation about the x axis. The five fragments for each section come from the spine motion capture data at the same location in time as the matching hip angle fragments shown in plot (a). (c) An example of a possible path through the chosen spine angle fragments is shown with a thick red line.

Figure 7: Choosing a path by maximizing the instances of consecutive fragments. In the table we show a hypothetical example of a case where four keyframed fragments were matched, and the K = 3 closest matches of motion capture fragments were kept for each keyframed fragment. The matches at the tops of the columns are the closest of the 3 matches. Blue lines are drawn between fragments that were consecutive in the motion capture data, and the cost matrices between each set of possible matches are shown below.
3.3 Path finding

Now that we have the K closest matches for each fragment, we must choose a path through the possible choices to create a single data set. The resulting animation is usually more pleasing if there are sections in time where fragments that were consecutive in the data are used consecutively to create the path. As a result, our algorithm considers the neighbors of each fragment, and searches for paths that maximize the use of consecutive fragments.

For each join between fragments, we create a cost matrix, the ijth component of which gives the cost of joining fragment i with fragment j. A score of zero is given if the fragments were consecutive in the original data, and one if they were not. We find all of the possible combinations of fragments that go through the points of zero cost.

This technique is easiest to explain using an example, which is diagrammed in figure 7. Suppose we had 4 fragments of keyframed data to match, and saved the 3 nearest matches for each. In the illustration we show that for fragment 1 of the keyframed data, the best matches were to fragments 4, 1, and 3 of the real data; for fragment 2 of the keyframed data the closest matches were to fragments 5, 7, and 2 of the real data; and so on. We have drawn lines between fragments to indicate paths of zero cost. Here there are three best choices. One is fragments 4, 5, 6, and 2 from the real data. In this case we choose fragment 2 of the real data to match the fourth fragment of keyframed data, rather than 8 or 5, because it was originally the closest match. A second possible path would be 4, 5, 4, and 5, and a third would be 1, 2, 4, 5. All three would yield two instances of zero cost. An example of an actual path taken through fragments chosen by matching is shown in figure 6c.

Note that for z instances of zero cost, there can be no more than z paths to consider, and in fact there will usually be far fewer, because the instances can be linked up. In our example (figure 7) there were four instances of zero cost, but only three possible paths that minimize the cost. The P best paths (where P is a parameter set by the animator) are saved for the animator to look at. All are valid choices, and it is an artistic decision which is best.

In practice we found that saving roughly 1/10 the total number of fragments produced good results. Saving too many matches resulted in motions that were not coordinated with the rest of the body, and saving too few did not allow for sufficient temporal coherence when seeking a path through the fragments.
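One way to realize this search is as a dynamic program over the K candidates per keyframed fragment, with a join cost of zero between real fragments that were consecutive in the original data and one otherwise. The paper keeps the P best paths for the animator to review; for brevity, the sketch below (our own formulation) returns a single minimum-cost path, breaking ties toward better-ranked matches because each candidate list is ordered best-first.

    def find_best_path(candidates, consecutive):
        """candidates[t] lists the K real-fragment indices matched to
        keyframed fragment t, best match first; consecutive(i, j) is True
        if real fragments i and j were adjacent in the motion capture data.
        Returns one minimum-cost sequence of real-fragment indices."""
        k = len(candidates[0])
        cost = [0.0] * k          # cheapest path cost ending at each candidate
        back = []                 # back-pointers, one list per transition
        for t in range(1, len(candidates)):
            new_cost, pointers = [], []
            for j in range(k):
                # Join cost: 0 if consecutive in the original data, else 1.
                options = [cost[i] + (0 if consecutive(candidates[t - 1][i],
                                                       candidates[t][j]) else 1)
                           for i in range(k)]
                best_i = min(range(k), key=options.__getitem__)
                pointers.append(best_i)
                new_cost.append(options[best_i])
            cost = new_cost
            back.append(pointers)
        # Trace back from the cheapest final candidate.
        j = min(range(k), key=cost.__getitem__)
        path = [candidates[-1][j]]
        for t in range(len(candidates) - 1, 0, -1):
            j = back[t - 1][j]
            path.append(candidates[t - 1][j])
        return path[::-1]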
3.4 Joining

Now that we have the best possible paths, the ends may still not quite line up in cases where the fragments were not originally consecutive. For example, in figure 6c we show an example of data after matching and choosing the paths. To take care of these discontinuities, we join the ends together by the following process. For fragment i, we define the new endpoints as follows. The new first point will be the mean of the first point of fragment i and the last point of fragment i − 1. (Note that there is overlap between the ends of the fragments; if the last point of fragment i is placed at time t, the first point of fragment i + 1 is also at time t.) The new last point of fragment i will be the mean of the last point of fragment i and the first point of fragment i + 1.

The next step is to skew the fragment to pass through the new endpoints. To achieve this warping, we define two lines, one that passes through the old endpoints and one that passes through the new endpoints. We subtract the line that passes through the old endpoints and add the line that passes through the new endpoints to yield the shifted fragment. The process is diagrammed in figure 8.

In order to further smooth any remaining discontinuity, a quadratic function is fit to the join region, from N points away from the join point to within 2 points of the join point, where N is a parameter. A smaller value of N keeps the data from being altered too greatly from what was in the motion capture data, and a larger value blends more effectively between different fragments. In practice we found an N from 5 to 20 to be effective, corresponding to 0.2 to 0.8 seconds. The resulting quadratic is blended with the original joined data using a sine squared function, as follows. Define the blend function f as

    f(t) = (cos(πt / (2N)) + 1) / 2                              (1)

where N is half the length of the shortest of the fragments that are to be joined and t is the time, shifted to be zero at the join region. If we define q as the quadratic function we obtained from the fit, and m as the data after matching, then the data s after smoothing is

    s(t) = f(t)q(t) + (1 − f(t))m(t).                            (2)

An example of this process is shown in figure 9.
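A sketch of the joining and smoothing steps for a single one-dimensional fragment follows. The skewing subtracts and adds lines through the old and new endpoints, as described above; the blending implements equations (1) and (2). The blend window, with f falling from 1 at the join to 0 at |t| = 2N, is our reading of the text, and the names are ours.

    import numpy as np

    def skew_to_endpoints(frag, new_first, new_last):
        """Shift a 1-D fragment so it passes through new endpoint values:
        subtract the line through the old endpoints, add the line through
        the new ones."""
        t = np.linspace(0.0, 1.0, len(frag))
        old_line = frag[0] + (frag[-1] - frag[0]) * t
        new_line = new_first + (new_last - new_first) * t
        return frag - old_line + new_line

    def smooth_join(m, center, N, quad):
        """Blend the joined data m with a quadratic fit quad(t) around the
        join point at index center: s = f q + (1 - f) m, with
        f(t) = (cos(pi t / 2N) + 1) / 2 and t = 0 at the join."""
        s = m.astype(float)
        for i in range(max(0, center - 2 * N), min(len(m), center + 2 * N + 1)):
            t = i - center
            f = (np.cos(np.pi * t / (2.0 * N)) + 1.0) / 2.0  # 1 at join, 0 at |t| = 2N
            s[i] = f * quad(t) + (1.0 - f) * m[i]
        return s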
4 Experiments

We tested our method on several different situations, three of which are described below. All of these examples are presented on the accompanying video tape.

4.1 Walking

A short animation of two characters walking toward each other, slowing to a stop, stomping, and crouching was created using keyframes. Keyframes were set only on the positions of the root (not the rotations) and feet. Inverse kinematics were used on the feet at the ankle joint, as is customary in keyframe animation of articulated figures. The joint angles for the hips and knees were read out afterwards for use in texturing and synthesis.

Figure 8: Joining the ends of selected fragments. (a) Four fragments of spine angle data that were chosen in the matching step are shown. Note this graph is a close-up view of the first part of the path illustrated in figure 6c. There are significant discontinuities between the first and second fragments, as well as between the third and fourth. (b) The original endpoints of the fragments are marked with black circles, the new endpoints with blue stars. The second and third fragments were consecutive in the motion capture data, so the new and old endpoints are the same. (c) For each fragment, the line between the old endpoints (black dashes) and the line between the new endpoints (blue solid line) are shown. (d) For each fragment, the line between the old endpoints is subtracted, and the line between the new endpoints is added, to yield the curve of joined fragments. The new endpoints are again marked with blue stars.

Each character's motion was enhanced using a different motion capture data set of walking motion. The two data sets each consisted of walking with roughly a single step size, but each exhibited a very different style of walking. One was a relatively "normal" walk, but rather bouncy, and the other was of a person imitating a drag queen and was quite stylized, containing unusual arm and head gestures. The length of each data set was 440 time points at 24 frames per second, or about 18 seconds worth of data. A Laplacian pyramid was used for the frequency analysis. The 4th highest band was used for matching. For texturing, bands 2-3 were synthesized, and for synthesis, all bands 2 and lower. The very highest frequency band tended to add only undesirable noise to the motion.
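For concreteness, frequency bands of this kind can be obtained with a one-dimensional Laplacian decomposition of each joint-angle curve. The sketch below builds a Laplacian stack (a pyramid without the subsampling step, so every band keeps full temporal resolution, which is convenient for matching); the 5-tap smoothing kernel and band count are our assumptions, as the paper does not specify them here. The bands sum back to the original curve, so synthesized or textured bands can simply be summed with the others for reconstruction.

    import numpy as np

    def laplacian_bands(curve, num_bands):
        """Split a 1-D joint-angle curve into frequency bands.
        bands[0] is the highest-frequency band; the last entry is the
        low-frequency residual. sum(bands) reconstructs the input."""
        kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        bands, current = [], np.asarray(curve, dtype=float)
        for _ in range(num_bands - 1):
            low = np.convolve(current, kernel, mode="same")
            bands.append(current - low)   # detail (high-frequency) band
            current = low
        bands.append(current)             # low-frequency residual
        return bands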
The upper body degrees of freedom could successfully be synthesized using a number of different combinations for the matching angles. For example, both hip x angles; the left hip x and left knee x angles; or the right hip x and right knee x angles all gave good results. The most pleasing results were obtained by using data from the left hip and left knee x angles during the stomp (the character stomps his left foot) and data from both hips for the rest of the animation. Scaling after matching also improved the results in this case: for example, when the character slows down and comes to a stop, scaling caused the body and arm motions to diminish in coordination with the legs.

The method does not directly incorporate hard constraints, so we used the following approach to maintain the feet contact with the floor. First the pelvis and upper body motions were synthesized. Since altering the pelvis degrees of freedom causes large scale motions of the body, inverse kinematic constraints were subsequently applied to keep the feet in place on the floor. This new motion was used for texturing the lower body motion during times the feet were not in contact with the floor.

Figure 9: Smoothing at the join point. A close-up of the join between fragments 1 and 2 from figure 8 is shown with a red solid line. (a) The quadratic fit using the points on either side of the join point (as described in the text) is shown with a black dashed line. (b) The data after blending with the quadratic fit is shown with a blue dashed line.

The motion of the characters was much more life-like after enhancement. The upper body moved in a realistic way and responded appropriately to the varying step sizes and the stomp, even though these motions were not explicit in the motion capture data. In addition, the style of walking for each character clearly came from the data set used for the enhancement. Some example frames are shown in figure 10.
Figure 11: Example frames from animations of the otter character. On the top row are some frames from the original keyframed animation, while on the bottom are the corresponding frames after texturing.

Figure 12: Example frames from the dance animations. The blue character, on the left in each image, represents the keyframed sketch. The purple character, on the right in each image, shows the motion after enhancement.
4.2 Otter Character

Although we have focused on the idea of filling in missing degrees of freedom by synthesis or adding detail by texturing, the method can also be used to alter the style of an existing animation that already has a large amount of detail in it.

To test this possibility, we used an otter character that had been animated by keyframe animation to run. Using the motion capture sets of walking described above, we could affect the style of the character's run by texturing the upper body motions, using the hip and knee angles as the matching angles. The effect was particularly noticeable when using the drag queen walk for texturing: the otter character picked up some of the head bobs and asymmetrical arm usage of the motion capture data. Some example frames are shown in figure 11.
4.3 Modern Dance

In order to investigate a wider range of motions than those related to walking or running, we turned to modern dance. Unlike ballet and other classical forms of dance, modern dance does not have a set vocabulary of motions, and yet it uses the whole body at its full range of motion. It therefore provides a situation where the correlations between joints exist only extremely locally in time, making it a stringent test of our method.

A modern dance phrase was animated by sketching only the lower body and root motions with keyframes. Motion capture data of several phrases of modern dance was collected, and a total of 1097 time points (24 frames per second) from 4 phrases was used. The upper body was synthesized, and the lower body textured. The same method for maintaining feet contact with the floor that was described above for the walking experiments was used here. The frequency analysis was the same as for the walking, except that the 6th highest band was used for matching. A lower frequency band was used because the large motions in the dance data set tended to happen over longer times than the steps in walking.

The results were quite successful here, especially for synthesis of the upper body motions. The motions were full and well coordinated with the lower body, and looked like something the dancer who performed for the motion capture session could have done, but did not actually do. Some example frames are shown in figure 12. The best results were obtained by using all of the hip and knee angles as the matching angles, but some good animations could also be created using fewer degrees of freedom. In these experiments, the effects of choosing different paths through the matched data became especially noticeable. Because of the wide variation within the data, different paths yielded significantly different upper body motions, all of which were well coordinated with the lower body.
5 Conclusion and Future Work

Presently, the two main methods by which motions for computer animations are created are keyframing and motion capture. Keyframing is labor intensive, but has the advantage of allowing the animator precise control over the actions of the character. Motion capture data has the advantage of providing a complete data set with all the detail of live motion, but the animator does not have full control over the result. In this work we present a method that combines the advantages of both, by allowing the animator to control an initial rough animation with keyframes, and then filling in missing degrees of freedom and detail using the information in motion capture data. The results are particularly striking for the case of synthesis: one can create an animation of only the lower body and, given some motion capture data, automatically create life-like upper body motions that are coordinated with the lower body.

One drawback of the method as it currently stands is that it does not directly incorporate hard constraints. As a result, texturing cannot be applied to cases where the feet are meant to remain in contact with the floor unless it is combined with an inverse kinematic solver in the animation package being used. Currently we are working to remedy this deficiency.

Another active area of research is to determine a more fundamental method for breaking the data into fragments. In this work we used the sign change of the derivative of one of the joint angles used for matching, because it is simple to detect and often represents a change from one movement idea to another. The exact choice of where to break the data into fragments is not as important as it may seem. What is important is that both the keyframed and real data are broken at analogous locations, which is clearly the case with our method. The method could be made more efficient by detecting more fundamental units of movement that may yield larger fragments. However, due to the complexity of human motion, this problem is a challenging one, and an ongoing topic of research.

On the surface, another drawback of the method may appear to be the need to choose a particular frequency band for the matching step. However, the choice is not difficult to make, and in fact the results are not highly dependent upon it. Any low frequency band that provides information about the overall motion will provide good results. The resulting animations sometimes vary from one another depending upon which frequency band is chosen, as slightly different regions of data are matched, but more often they are quite similar. If too high a band is chosen, however, the resulting animation has an uncoordinated look, as the overall motion is not accurately represented in the matching step.

Similarly, another apparent drawback is that the animator must specify which degrees of freedom to use as the matching angles. However, if one has spent some time keyframing a character, it is quite easy in practice to specify this information. The simplest approach is to use all of the degrees of freedom for which the animator has sketched out the motion with keyframes. In many cases, however, fewer degrees of freedom can be specified and equally good results obtained. If the motion has less variation, such as walking, the results will still be pleasing if fewer angles are chosen as the matching angles. In fact, it is fascinating how correlated the motions of the human body are: given only the data in two angles, such as the hip and knee x angles of one leg, one can synthesize the rest of the body. However, for a motion with more variation in it, such as dancing, it is better to include more angles to ensure good choices during matching. If fewer joints are used for matching in this case, some of the resulting paths may still give good results, but others may appear somewhat uncoordinated with the full body.

In fact, the goal of this project was not to create a completely automatic method, but to give the animator another tool for incorporating the information in motion capture data into his or her creations. Different choices of the matching angles can yield different results and provide the animator with different possibilities to use in the final animation. Another source of different motions comes from examining different paths through the best matches. The animator has the option of looking at several possibilities and making an artistic decision about which is best. Ultimately we hope that methods such as this one will further allow animators to take advantage of the benefits of motion capture data without sacrificing the control they are used to having when keyframing.

6 Acknowledgments

Special thanks to Rearden Steel studios for providing the motion capture data, and to Electronic Arts for providing the otter model.