CG Unit 5
Werner Purgathofer, TU Wien
3D-Viewing
█ Concepts of 3D-Viewing
To represent 3D objects on a 2D screen in a nice or recognizable way, many techniques are combined. First, the projection has to be defined, as described in the next paragraph. After the projection has been set, any of the following representations can be generated:
Wire-Frame-Representation = Only the edges of polygons are drawn. Hidden edges may or may not be
drawn.
█ 3D-Viewing-Pipeline
The viewing pipeline in three dimensions is almost the same as the 2D viewing pipeline; only, after the definition of the viewing direction and orientation (i.e., of the camera), an additional projection step is performed, which reduces the 3D data onto a projection plane:
object-coord. --(creation of objects and scenes)--> world-coord.
world-coord. --(definition of mapping region + camera orientation)--> viewing-coord.
viewing-coord. --(projection onto image plane)--> proj.-coord.
proj.-coord. --(mapping onto unity image region)--> norm. device-coord.
norm. device-coord. --(transform. to specific device)--> device-coord.
This projection step can be arbitrarily complex, depending on which 3D-viewing concepts should be used.
█ Viewing-Coordinates
Similar to photography there are certain degrees of freedom when specifying the camera:
1. Camera position in space
2. Viewing direction from this position
3. Orientation of the camera (view-up vector)
4. Size of the display window (corresponds to the focal length of a photo-camera)
With these parameters the camera-coordinate system is defined (viewing coordinates). Usually the xy-plane of this viewing-coordinate system is orthogonal to the main viewing direction, and the viewing direction points along the negative z-axis.
Based on the camera position the usual way to define the viewing-coordinate system is:
1. Choose a camera position (also called eye-point, or view-point).
2. Choose a viewing direction = choose the z-direction of the viewing coordinates.
3. Choose a direction "upwards". From this, the x-axis and y-axis can be calculated: the image plane is orthogonal to the viewing direction, and the parallel projection of the "view-up vector" onto this image plane defines the y-axis of the viewing coordinates.
4. Calculate the x-axis as the cross product of the z-axis and the y-axis.
5. The distance of the image plane from the eye-point defines the viewing angle, i.e. the size of the scene to be displayed.
In animations the camera definition is often calculated automatically according to certain conditions, e.g. when the camera moves around an object or in flight simulations, so that the desired effects can be achieved in an uncomplicated way.
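The construction of the viewing-coordinate axes can be sketched in code. This is a minimal sketch: the function name is hypothetical, and it uses the common right-handed convention u = up × n, v = n × u (so the y-axis is the projection of the view-up vector onto the image plane, as in step 3):

```python
import math

def view_basis(eye, look_at, up):
    """Build the viewing-coordinate axes (u = x, v = y, n = z) from the
    eye point, a look-at point, and the view-up vector. The viewing
    direction is along -n, i.e. the negative z-axis."""
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    def norm(a):
        length = math.sqrt(sum(c * c for c in a))
        return [c / length for c in a]
    n = norm(sub(eye, look_at))  # z-axis points from the scene toward the eye
    u = norm(cross(up, n))       # x-axis, orthogonal to up and n
    v = cross(n, u)              # y-axis: view-up projected into the image plane
    return u, v, n
```

For a camera at (0, 0, 5) looking at the origin with up = (0, 1, 0), this yields the identity basis, matching the default camera orientation described above.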
█ Projections
In geometry several types of projections are known. In computer graphics, however, mainly the parallel and the perspective projection are of interest.
Parallel Projection
A parallel projection can either be done orthogonally to an image-plane, or obliquely (e.g. when casting
shadows).
Perspective Projection
With a perspective projection some laws of affine transformations are no longer valid (e.g. parallel lines need not remain parallel after the perspective projection has been applied). Since it is not an affine transformation, it cannot be described by a 3x3 matrix. Luckily, homogeneous coordinates can help again in this case. However, this is the sole case where the homogeneous component h of the result is not equal to 1; therefore, a subsequent division by this value is needed.
With the projection reference point on the z-axis at z = zprp, the view plane at z = zvp, and dp = zprp − zvp the distance between them, similar triangles give:
xp : x = dp : (zprp − z)   or   xp = x·dp / (zprp − z)
yp : y = dp : (zprp − z)   or   yp = y·dp / (zprp − z)
zp = zvp.
By representing projections in matrix form, it is possible to formulate the whole transformation from model-
coordinates to device-coordinates by multiplying the single matrices to one matrix. Only if a perspective
projection is involved, it must not be forgotten to divide the result by the homogeneous component h.
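The matrix form and the division by h can be sketched as follows. This is a sketch under the assumptions used above (projection reference point at (0, 0, zprp), view plane z = zvp); the 4×4 layout is one common column-vector convention, and the function names are hypothetical:

```python
def perspective_matrix(zprp, zvp):
    """Homogeneous perspective matrix: multiplying (x, y, z, 1) gives a
    result with h = (zprp - z)/dp, so the division by h yields
    xp = x*dp/(zprp - z), yp = y*dp/(zprp - z), zp = zvp."""
    dp = zprp - zvp  # distance from projection reference point to view plane
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, -zvp / dp, zvp * zprp / dp],
            [0.0, 0.0, -1.0 / dp, zprp / dp]]

def project(m, p):
    """Multiply the matrix with (x, y, z, 1), then divide by the
    homogeneous component h -- the extra step perspective requires."""
    v = [p[0], p[1], p[2], 1.0]
    r = [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]
    h = r[3]
    return (r[0] / h, r[1] / h, r[2] / h)
```

For example, with zprp = 2 and zvp = 0, the point (1, 1, −2) projects to (0.5, 0.5, 0), consistent with xp = x·dp/(zprp − z) = 1·2/4.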
Finally, it should be mentioned that the number of principal vanishing points is dependent on how the axes
of the coordinate system are positioned against the image plane. If 2 coordinate axes are parallel to the
image plane it is called 1-point perspective projection, if only one is parallel to the image plane we call it 2-
point perspective projection, and if none of the three axes is parallel to the image plane it is called 3-point
perspective projection (because then there are 3 principal vanishing points).
Composite Transformation:
A sequence of transformations can be combined into a single one, called a composition. The resulting matrix is called the composite matrix, and the process of combining is called concatenation.
Suppose we want to perform a rotation about an arbitrary point. We can do this with a sequence of three transformations:
1. Translation
2. Rotation
3. Reverse translation
The order of these transformations must not be changed. If points are represented as column vectors, the composite transformation is performed by multiplying the matrices from right to left: the output of each matrix is the input to the next one.
Example showing composite transformations:
The enlargement is with respect to the object's center. For this, the following sequence of transformations is performed and combined into a single one:
Step 1: The object is at its initial position, as in fig (a).
Step 2: The object is translated so that its center coincides with the origin, as in fig (b).
Step 3: The object is scaled while it is kept at the origin, as in fig (c).
Step 4: A second translation is done. This reverse translation positions the object back at its original location.
The above transformation can be represented as Tv · S · Tv^(−1) (read right to left).
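The four steps can be sketched as one composite matrix in 2D homogeneous coordinates. A minimal sketch; the helper names are hypothetical:

```python
def translate(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def scale(sx, sy):
    return [[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale_about(cx, cy, sx, sy):
    """Composite Tv . S . Tv^-1: translate the center to the origin,
    scale there, then translate back (matrices apply right to left)."""
    m = matmul(scale(sx, sy), translate(-cx, -cy))
    return matmul(translate(cx, cy), m)

def apply(m, x, y):
    """Apply a 3x3 homogeneous matrix to the point (x, y)."""
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])
```

Scaling by 2 about the center (1, 1) maps the point (2, 2) to (3, 3): the center stays fixed and distances from it double.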
Advantages of composition (concatenation) of matrices:
1. The transformations become compact.
2. The number of operations is reduced.
3. Defining transformations as individual equations is more complex than using matrices.
Composition of two translations:
Let (t1, t2) and (t3, t4) be the translation vectors of two translations T1 and T2, represented as homogeneous matrices, and let T be the final transformation matrix obtained by their multiplication. The resultant matrix translates by (t1 + t3, t2 + t4), showing that two successive translations are additive.
Composition of two rotations: two rotations are also additive.
Composition of two scalings: the composition of two scalings is multiplicative. If S1 and S2 are the matrices to be multiplied, the composite scales by the products of the scale factors.
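These composition rules can be checked directly on the homogeneous matrices. A minimal sketch with hypothetical helper names:

```python
def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Two successive translations are additive:
assert matmul(translate(3, 4), translate(1, 2)) == translate(1 + 3, 2 + 4)
# Two successive scalings are multiplicative:
assert matmul(scale(2, 3), scale(4, 5)) == scale(2 * 4, 3 * 5)
```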
INTRODUCTION
Designing an animation sequence typically involves these steps:
Storyboard layout
Object definitions
Key-frame specifications
Generation of in-between frames
STORYBOARD LAYOUT
🞭 It is an outline of the action.
🞭 It defines the motion sequence as a set of basic events that are to take place.
🞭 Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches or a list of the basic ideas for the motion.
OBJECT DEFINITION
🞭 An object definition is given for each participant in the action.
🞭 Objects can be defined in terms of basic shapes, such as polygons or splines.
🞭 Along with the shape, the associated movements for each object are
specified.
IN-BETWEEN FRAMES
🞭 In-betweening is the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second. In-betweens are the drawings between the key frames which help to create the illusion of motion.
🞭 In-betweens are the intermediate frames between the key frames.
🞭 The number of in-betweens needed is determined by the medium used to display the animation. Film requires 24 frames per second, and graphics terminals are refreshed at a rate of 30 to 60 frames per second.
🞭 Time intervals for the motion are set up so that there are three to five in-betweens for each pair of key frames.
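A minimal sketch of generating in-betweens, assuming simple linear interpolation of matching vertex positions (both key frames must already have the same number of vertices; the function name is hypothetical):

```python
def in_betweens(key1, key2, n):
    """Generate n in-between frames between two key frames, each given as
    a list of (x, y) vertices, by linear interpolation of matching
    vertices at evenly spaced parameter values t = j/(n+1)."""
    frames = []
    for j in range(1, n + 1):
        t = j / (n + 1)
        frames.append([(x1 + t * (x2 - x1), y1 + t * (y2 - y1))
                       for (x1, y1), (x2, y2) in zip(key1, key2)])
    return frames
```

With two in-betweens between a vertex at (0, 0) and one at (3, 3), the interpolated positions fall at (1, 1) and (2, 2).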
There are several other tasks that may be required, depending on the application. They include:
🞭 Motion verification
🞭 Editing
🞭 Camera motions
🞭 Generation of in-betweens
🞭 In animation packages, one function is provided to store and manage the object database; object shapes and associated parameters are stored and updated in the database.
🞭 Other object functions include those for motion generation and object rendering. Motions can be generated according to specified constraints using two-dimensional or three-dimensional transformations.
We can also animate objects along 2D motion paths using color-table transformations: the pixel values at successive positions along the motion path of an object are stored in the color table. The pixel at the first position is set "on", and the pixels at the other object positions are set to the background color; cycling these color-table entries makes the object appear to move along the path.
KEY-FRAME SYSTEMS
If all surfaces are described with polygon meshes, then the number
of edges per polygon can change from one frame to the next. Thus, the total
number of line segments can be different in different frames.
MORPHING
Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or the number of vertices) is the same for the two frames.
A straight-line segment in key frame k is transformed into two line segments in key frame k+1. Since key frame k+1 has an extra vertex, we add a vertex between vertices 1 and 2 in key frame k to balance the number of vertices (and edges) in the two key frames. Using linear interpolation to generate the in-betweens, we transition the added vertex in key frame k into vertex 3' along the straight-line path shown in the figure.
With Vk and Vk+1 denoting the numbers of vertices in the two key frames, the vertex preprocessing uses:
Vmax = max(Vk, Vk+1)
Vmin = min(Vk, Vk+1)
Nls = (Vmax − 1) mod (Vmin − 1)
Np = int((Vmax − 1) / (Vmin − 1))
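The preprocessing rules above can be sketched directly (the function name is hypothetical):

```python
def morph_counts(vk, vk1):
    """Given the vertex counts of key frames k and k+1, return
    (Nls, Np) per the rules: Nls = (Vmax - 1) mod (Vmin - 1),
    Np = int((Vmax - 1) / (Vmin - 1))."""
    vmax, vmin = max(vk, vk1), min(vk, vk1)
    nls = (vmax - 1) % (vmin - 1)
    np_sections = (vmax - 1) // (vmin - 1)
    return nls, np_sections
```

For example, key frames with 4 and 6 vertices give Nls = 5 mod 3 = 2 and Np = int(5/3) = 1; equal vertex counts give Nls = 0.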
For n in-betweens, the time for the jth in-between would then be calculated as
tBj = t1 + Δt [1 − cos( jπ / (2(n + 1)) )],   j = 1, 2, ..., n
where Δt is the time difference between the two key frames. This figure gives a plot of the trigonometric acceleration function and the in-between spacing for n = 5.
Often, motions contain both speed-ups and slow-downs. We can model a combination of increasing and decreasing speed by first increasing the in-between time spacing and then decreasing it. A function to accomplish these time changes is
tBj = t1 + Δt { [1 + sin( jπ/(n + 1) − π/2 )] / 2 },   j = 1, 2, ..., n
with Δt denoting the time difference between the two key frames. Time intervals for the moving object first increase and then decrease, as shown in the next figure:
We can model decreasing speed (deceleration) with sin θ in the range 0 < θ < π/2. The time position of an in-between is now defined as
tBj = t1 + Δt sin( jπ / (2(n + 1)) ),   j = 1, 2, ..., n
A plot of this function and the decreasing size of the time intervals is shown in the next figure for five in-betweens.
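The three timing behaviors described above (acceleration, combined speed-up/slow-down, deceleration) can be sketched together. A minimal sketch; function names are assumptions, with Δt = t2 − t1 and j = 1..n:

```python
import math

def accelerate(t1, t2, n):
    """In-between times with increasing spacing:
    tB_j = t1 + dt*(1 - cos(j*pi/(2(n+1))))."""
    dt = t2 - t1
    return [t1 + dt * (1 - math.cos(j * math.pi / (2 * (n + 1))))
            for j in range(1, n + 1)]

def decelerate(t1, t2, n):
    """In-between times with decreasing spacing:
    tB_j = t1 + dt*sin(j*pi/(2(n+1)))."""
    dt = t2 - t1
    return [t1 + dt * math.sin(j * math.pi / (2 * (n + 1)))
            for j in range(1, n + 1)]

def accel_decel(t1, t2, n):
    """Spacing first increases, then decreases:
    tB_j = t1 + dt*(1 + sin(j*pi/(n+1) - pi/2))/2."""
    dt = t2 - t1
    return [t1 + dt * (1 + math.sin(j * math.pi / (n + 1) - math.pi / 2)) / 2
            for j in range(1, n + 1)]
```

For n = 5 between t1 = 0 and t2 = 1, `accelerate` produces strictly growing gaps between successive frame times, `decelerate` strictly shrinking gaps, and `accel_decel` gaps that grow toward the middle of the interval and shrink again.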