1 History
2 Early Examples
3 3D animation
Cel-shaded animation
Morph target animation
Skeletal animation
Motion capture
History
There is no single person who can be considered the "creator" of the art of
film animation, as several people were working, at around the same time, on
projects that could be considered various types of animation.
J. Stuart Blackton was possibly the first American filmmaker to use the
techniques of stop-motion and hand-drawn animation. Introduced to
filmmaking by Edison, he pioneered these concepts at the turn of the 20th
century, with his first copyrighted work dated 1900. Several of his films,
among them The Enchanted Drawing (1900) and Humorous Phases of
Funny Faces (1906), were film versions of Blackton's "lightning artist"
routine, and utilized modified versions of Méliès' early stop-motion
techniques to make a series of blackboard drawings appear to move and
reshape themselves. Humorous Phases of Funny Faces is regularly cited as
the first true animated film, and Blackton is considered the first true
animator.
Early Examples
Cel-shaded animation
Object with a basic cel-shader (also known as a toon shader) and border detection.
Black "ink" outlines and contour lines can be created using a variety of methods.
One popular method is to first render a black silhouette, slightly larger than the
object itself. Backface culling is inverted and the back-facing triangles are drawn in
black. To dilate the silhouette, these back-faces may be drawn in wireframe multiple
times with slight changes in translation. Alternately, back-faces may be rendered
solid-filled, with their vertices translated along their vertex normals in a vertex
shader. After drawing the silhouette, back-face culling is set back to normal to draw
the shading and optional textures of the object. Finally, the image is composited via
Z-buffering, as the back-faces always lie deeper in the scene than the front-faces.
The result is that the object is drawn with a black outline and interior contour lines.
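The geometric step in the silhouette pass, pushing vertices outward along their vertex normals to dilate the outline, is a simple per-vertex computation. Below is a minimal sketch of that step in Python with NumPy, using invented example data; in a real renderer this would run inside a vertex shader.

import numpy as np

# Sketch of the silhouette-dilation step: each vertex is pushed outward
# along its unit vertex normal before the back-faces are drawn in black.
# The vertex data below is invented example data.
def extrude_along_normals(positions, normals, outline_width):
    positions = np.asarray(positions, dtype=float)
    normals = np.asarray(normals, dtype=float)
    # Normalize so that outline_width is a distance in model units.
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return positions + outline_width * normals

# One triangle of a mesh, with per-vertex normals.
verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
norms = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
print(extrude_along_normals(verts, norms, outline_width=0.02))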
Alternately, the outlines can be produced as a post-process: edges are detected in
image space and stored in an edge texture, which is then composited with the color
texture to produce the final rendered image. As with most image-processing
techniques, the performance penalty for this method is not affected by scene
complexity.
Some of the more prominent games, films, and television series that have featured cel-shaded graphics:
Breath of Fire: Dragon Quarter
Cel Damage
Crackdown
Crazyracing Kartrider
Dark Cloud 2
Fear Effect
Gungrave series
Harvest Moon: Save the Homeland
Jackie Chan Adventures
Killer7
Klonoa 2
The Legend of Zelda: Phantom Hourglass
Mega Man X Command Mission
Metal Gear Acid 2
Samurai Legend
Star Wars: Clone Wars
Steamboy
Silver Surfer
Skyland
Sonic X
Team Galaxy
The Iron Giant
Morph target animation
Depending on the renderer, the vertices either move along interpolated paths to fill
in the time between keyframes, or the renderer simply switches between the discrete
keyframe positions, creating a somewhat jerky look. The former approach is used
more commonly.
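The interpolating case amounts to blending the stored vertex positions of neighboring keyframes. Below is a minimal sketch of that blend in Python with NumPy; the triangle keyframes are invented example data.

import numpy as np

# Sketch of per-vertex interpolation between two keyframes (morph targets).
# t = 0.0 gives the first keyframe, t = 1.0 the second; values in between
# give the positions shown on in-between frames.
def interpolate_vertices(frame_a, frame_b, t):
    frame_a = np.asarray(frame_a, dtype=float)
    frame_b = np.asarray(frame_b, dtype=float)
    return (1.0 - t) * frame_a + t * frame_b

# A single triangle morphing between two keyframed shapes (invented data).
key0 = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
key1 = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.5], [0.0, 1.5, 0.0]]
print(interpolate_vertices(key0, key1, 0.25))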
There are advantages to using morph target animation over skeletal animation. The
artist has more control over the movements because he or she can define the
individual positions of the vertices within a keyframe, rather than being constrained
by skeletons. This can be useful for animating cloth, skin, and facial expressions
because it can be difficult to conform those things to the bones that are required for
skeletal animation.
However, there are also disadvantages. Vertex animation is usually far more time-
consuming than skeletal animation because every vertex position has to be
calculated individually. (3D models in modern computer and video games often
contain on the order of 4,000-9,000 vertices.) Also, in rendering methods where
vertices move from position to position during in-between frames, a distortion is
created that does not occur with skeletal animation; critics of the technique
describe the result as looking "shaky". On the other hand, this distortion may
be part of the desired "look".
Not all morph target animation has to be done by editing vertex positions by hand.
It is also possible to take the vertex positions produced by skeletal animation and
then render them as morph target animation.
Skeletal animation
This technique is used by constructing a series of 'bones'. Each bone has a three
dimensional transformation (which includes its position, scale and orientation), and
an optional parent bone. The bones therefore form a hierarchy. The full transform
of a child node is the product of its parent transform and its own transform. So
moving a thigh-bone will move the lower leg too. As the character is animated, the
bones change their transformation over time, under the influence of some animation
controller.
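As an illustration of how the hierarchy composes, below is a minimal sketch in Python with NumPy; the bone names, local translations, and 4x4 matrix convention are invented example data, not taken from any particular engine.

import numpy as np

# Sketch of a bone hierarchy using 4x4 homogeneous transform matrices.
def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Each bone maps to (parent name or None, local transform relative to parent).
bones = {
    "pelvis": (None, translation(0.0, 1.0, 0.0)),
    "thigh": ("pelvis", translation(0.1, -0.1, 0.0)),
    "shin": ("thigh", translation(0.0, -0.45, 0.0)),
}

# The full transform of a bone is its parent's full transform times its own,
# so changing the thigh's local transform also moves the shin.
def world_transform(name):
    parent, local = bones[name]
    if parent is None:
        return local
    return world_transform(parent) @ local

print(world_transform("shin")[:3, 3])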
Each bone in the skeleton is associated with some portion of the character's visual
representation. In the most common case of a polygonal mesh character, the bone is
associated with a group of vertices; for example, in a model of a human being, the
'thigh' bone would be associated with the vertices making up the polygons in the
model's thigh. Portions of the character's skin can normally be associated with
multiple bones, each one having a scaling factor called a vertex weight, or blend
weight. The movement of skin near the joint between two bones can therefore be
influenced by both bones.
For a polygonal mesh, each vertex can have a blend weight for each bone. To
calculate the final position of the vertex, each bone transformation is applied to the
vertex position, the result is scaled by the corresponding weight, and the weighted
results are summed. This algorithm is called matrix palette skinning, because the
set of bone transformations (stored as transform matrices) forms a palette for the
skin vertex to choose from.
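A minimal sketch of that weighted blend in Python with NumPy follows; the bone matrices, weights, and vertex below are invented example data.

import numpy as np

# Sketch of matrix palette skinning (linear blend skinning) for one vertex.
def skin_vertex(position, bone_matrices, weights):
    # Work in homogeneous coordinates so 4x4 transforms apply directly.
    p = np.append(np.asarray(position, dtype=float), 1.0)
    blended = np.zeros(4)
    # Apply each bone transform, scale by its blend weight, and sum.
    for matrix, weight in zip(bone_matrices, weights):
        blended += weight * (np.asarray(matrix, dtype=float) @ p)
    return blended[:3]

# A vertex near a joint, influenced equally by a static bone and a moved one.
static_bone = np.eye(4)
moved_bone = np.eye(4)
moved_bone[:3, 3] = [0.0, -0.2, 0.1]
print(skin_vertex([0.0, 0.5, 0.0], [static_bone, moved_bone], [0.5, 0.5]))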
Motion capture
Motion capture, motion tracking, or mocap is a technique of digitally recording
movements for entertainment, sports, and medical applications. In the context of
filmmaking (where it is sometimes called performance capture), it refers to the
technique of recording the actions of human actors, and using that information to
animate digital character models in 3D animation.
The procedure
In the motion capture session, the movements of one or more actors are sampled
many times per second. High resolution optical motion capture systems can be used
to sample body, facial and finger movement at the same time.
A motion capture session records only the movements of the actor, not his/her visual
appearance. These movements are recorded as animation data which are mapped to
a 3D model (human, giant robot, etc.) created by a computer artist, to move the
model the same way. This is comparable to the older technique of rotoscoping, in
which the motion of an actor was filmed and the footage was then used as a guide
for the frame-by-frame motion of a hand-drawn animated character.
If desired, a camera can pan, tilt, or dolly around the stage while the actor is
performing and the motion capture system can capture the camera and props as
well. This allows the computer generated characters, images and sets, to have the
same perspective as the video images from the camera. A computer processes the
data and displays the movements of the actor, as inferred from the 3D position of
each marker. If desired, a virtual or real camera can be tracked as well, providing
the desired camera positions in terms of objects in the set.
A related technique, match moving, can derive 3D camera movement from a single
2D image sequence without the use of photogrammetry, but it is often ambiguous
below centimeter resolution, due to the inability to distinguish pose and scale
characteristics from a single vantage point. One might extrapolate that future
technology could include full-frame imaging from many camera angles to record
the exact position of every part of the actor’s body, clothing, and hair for the entire
duration of the session, resulting in a higher resolution of detail than is possible
today.
After processing, the software exports animation data, which computer animators
can associate with a 3D model and then manipulate using normal computer
animation software. If the actor’s performance was good and the software
processing was accurate, this manipulation is limited to placing the actor in the
scene that the animator has created and controlling the 3D model’s interaction with
objects.