
CG Unit 5

This document discusses concepts and techniques for 3D viewing and projection. It begins by describing wireframe representation, depth cueing, visibility, shading, illumination models, shadows, reflections, textures, and surface details as techniques for representing 3D objects on a 2D screen. It then explains the 3D viewing pipeline, which involves defining a camera and projection plane, then projecting 3D data onto that plane. Two main types of projection are discussed: parallel projection and perspective projection. The document provides mathematical descriptions of how each type of projection transforms 3D coordinates. It concludes by noting that multiple transformations can be combined into a single composite transformation matrix.

Uploaded by Lalitha Ponnam

Computergraphik 1 – Textblatt engl-04 Vs. 10
Werner Purgathofer, TU Wien

3D-Viewing
█ Concepts of 3D-Viewing
To represent 3D objects on a 2D screen in a recognizable way, many techniques are combined.
First, the projection has to be defined, as described in the next paragraph. After the projection
has been set, any of the following qualities can be generated:

Wire-Frame Representation = Only the edges of polygons are drawn. Hidden edges may or may not be drawn.

Depth-Cueing = Edges or parts which are nearer to the viewer are displayed with more intensity (brighter, broadened, more saturated); edges which are farther away from the viewer are displayed with less intensity (darker, thinner, grayed out).

Correct Visibility = Surface elements (edges, polygons) which are occluded by other surface elements are not drawn, so that only visible areas are shown.

Shading = Depending on the angle of view or the angle of incident light, surfaces are colored brighter or darker.

Illumination Models = Physical simulation of lighting conditions and light propagation and their influence on the appearance of surfaces.

Shadows = Areas which have no line of sight to the light source are displayed darker.

Reflections, Transparency = Reflecting objects show mirror images, and through transparent objects the background can be seen.

Textures = Patterns or samples are "painted" on surfaces to give the objects a more complex and much more realistic look.

Surface Details = Small geometric structures on surfaces (like orange peel, bark, cobblestones, tire profiles) are simulated using tricks.

Stereo Images = A separate image is created and presented (with various techniques) for each eye to generate a 3D impression.

█ 3D-Viewing-Pipeline

The viewing pipeline in three dimensions is almost the same as the 2D viewing pipeline. The only
difference is that, after the definition of the viewing direction and orientation (i.e., of the camera),
an additional projection step is done: the reduction of the 3D data onto a projection plane:

object coord. → [creation of objects and scenes] → world coord. → [definition of mapping region + orientation] → viewing coord. → [projection onto image plane] → proj. coord. → [mapping on unity image region] → norm. device coord. → [transform. to specific device] → device coord.

This projection step can be arbitrarily complex, depending on which 3D-viewing concepts should be used.

█ Viewing-Coordinates

Similar to photography, there are certain degrees of freedom when specifying the camera:
1. Camera position in space
2. Viewing direction from this position
3. Orientation of the camera (view-up vector)
4. Size of the display window (corresponds to the focal length of a photo camera)
With these parameters the camera coordinate system is defined (viewing coordinates). Usually the xy-plane of
this viewing-coordinate system is orthogonal to the main viewing direction, and the viewing direction is in
the direction of the negative z-axis.

Based on the camera position, the usual way to define the viewing-coordinate system is:
1. Choose a camera position (also called eye-point, or view-point).
2. Choose a viewing direction = choose the z-direction of the viewing coordinates.
3. Choose a direction "upwards". From this, the x-axis and y-axis can be calculated: the image plane is
orthogonal to the viewing direction, and the parallel projection of the "view-up vector" onto this image
plane defines the y-axis of the viewing coordinates.
4. Calculate the x-axis as the vector product of the z- and y-axes.
5. The distance of the image plane from the eye-point defines the viewing angle, i.e. the size of the
scene to be displayed.

In animations the camera-definition is often automatically calculated according to certain conditions, e.g.
when the camera moves around an object or in flight-simulations, such that desired effects can be achieved
in an uncomplicated way.

To convert world coordinates to viewing coordinates, a series of simple transformations is needed: mainly a
translation of the coordinate origins onto each other and afterwards 3 rotations, such that the coordinate axes
also coincide (two rotations for the first axis, one for the second axis, and the third axis is already correct
then). Of course, all these transformations can be merged by multiplication into one matrix, which
looks about like this:

MWC,VC = Rz · Ry · Rx · T
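The construction steps above can be sketched in code. The following is a minimal NumPy illustration (not from the original text; the function name `world_to_viewing` and its argument names are hypothetical), building the rotation part directly from the camera basis vectors of steps 2–4:

```python
import numpy as np

def world_to_viewing(eye, look_at, up):
    """Build the 4x4 world-to-viewing matrix M_WC,VC.

    The rotation rows are the camera basis vectors (steps 2-4
    above); the translation moves the eye-point into the origin.
    """
    eye, look_at, up = (np.asarray(v, dtype=float) for v in (eye, look_at, up))
    # z-axis: opposite to the viewing direction, so the camera
    # looks down the negative z-axis.
    z = eye - look_at
    z /= np.linalg.norm(z)
    # x-axis: orthogonal to the view-up vector and the z-axis.
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    # y-axis: the view-up vector projected into the image plane.
    y = np.cross(z, x)
    # Combined with the translation of the eye-point onto the
    # origin, this is the composite rotation-times-translation
    # matrix from above.
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = x, y, z
    m[:3, 3] = -m[:3, :3] @ eye
    return m
```

For a camera at (0, 0, 5) looking at the origin with view-up (0, 1, 0), the world origin maps to (0, 0, −5): five units in front of the camera along the negative z-axis, as expected.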

█ Projections

In geometry several types of projections are known. However, in computer graphics mainly the parallel
and the perspective projection are of interest.

Parallel Projection
A parallel projection can either be done orthogonally to an image-plane, or obliquely (e.g. when casting
shadows).

The orthogonal parallel projection is very simple. Assuming that the direction of projection is the z-axis,
simply omit the z-value of a point, i.e. set its z-value to zero. The corresponding matrix is therefore
also very simple, because the z-coordinate is just replaced by zero.

An oblique parallel projection, which is defined by two angles α and φ, can be done as follows: for
any point, the distance L is the distance between two points, the one orthogonally projected onto
the plane z = 0, which results in (x, y, 0), and the result of the oblique projection, which results in
(xp, yp, 0). This distance L is then one cathetus of the right-angled triangle with vertices (x, y, z),
(x, y, 0) and (xp, yp, 0), and at the same time L is the hypotenuse of the also right-angled triangle in the
image plane with the catheti (xp − x) and (yp − y). From this L = z/tan α, and xp = x + L·cos φ and
yp = y + L·sin φ. If L is substituted into the other two equations, this produces the given matrix.
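The derivation above can be checked with a small script. The following Python sketch is illustrative (the function name is hypothetical) and implements the oblique projection onto the plane z = 0:

```python
import math

def oblique_project(x, y, z, alpha_deg, phi_deg):
    """Obliquely project the point (x, y, z) onto the plane z = 0.

    L = z / tan(alpha) is the length of the shift in the image
    plane, and phi gives its direction, as derived above.
    """
    alpha = math.radians(alpha_deg)
    phi = math.radians(phi_deg)
    L = z / math.tan(alpha)
    xp = x + L * math.cos(phi)
    yp = y + L * math.sin(phi)
    return xp, yp
```

For α = 90° the shift L is (numerically) zero and the orthogonal parallel projection is recovered; α = 45° gives L = z, the classical cavalier projection.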

Perspective Projection
With a perspective projection, some laws of affine transformations are no longer valid (e.g. parallel lines
may not be parallel anymore after the perspective projection has been applied). Therefore, since it is no
longer an affine transformation, it cannot be described by a 3×3 matrix. Luckily, homogeneous
coordinates can again help in this case. However, this is the sole case where the homogeneous component h is
not equal to 1 in the result, so in a subsequent step a division by this value is needed.

Let us first examine the case where the projected image lies on the plane z = 0. The image of a point
P(x, y, z) is the point where the line through the point itself and the center of projection (0, 0, zprp)
cuts the plane z = 0. When viewed from above, similar right-angled triangles can be found with the catheti
dp and xp on one hand, and dp − z (because z is on the negative side of the z-axis) and x on the other hand.

This yields xp : x = dp : (dp − z), or xp = x·dp/(dp − z),
and in analogy yp : y = dp : (dp − z), or yp = y·dp/(dp − z),
and of course zp = 0.

If we project onto any other plane z = zvp ≠ 0, not much has to be changed: dp is now the distance from
the center of projection to the view plane, dp = zprp − zvp, and in the denominators dp − z is replaced
by zprp − z:

xp : x = dp : (zprp − z), or xp = x·dp / (zprp − z)
yp : y = dp : (zprp − z), or yp = y·dp / (zprp − z)
zp = zvp.

This can also be written as a homogeneous matrix. In general, the value h of the resulting vector will
not be equal to 1 (h is calculated as (zprp − z)/dp). This is the reason why the resulting vector has to be
divided by h (in other words, it is scaled with 1/h) to get the correct resulting point:

xp = xh/h, yp = yh/h, and zp results in zvp.

By representing projections in matrix form, it is possible to formulate the whole transformation from model
coordinates to device coordinates by multiplying the single matrices into one matrix. Only if a perspective
projection is involved, it must not be forgotten to divide the result by the homogeneous component h.
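As a sketch of this procedure (the helper function is hypothetical, not from the text), the homogeneous matrix and the subsequent division by h can be written as:

```python
import numpy as np

def perspective_project(point, z_prp, z_vp):
    """Project a point onto the plane z = z_vp from the center of
    projection (0, 0, z_prp), via the homogeneous matrix followed
    by the division by h."""
    x, y, z = point
    dp = z_prp - z_vp  # distance from projection center to view plane
    # The last row makes h = (z_prp - z)/dp instead of 1;
    # the third row yields z_p = z_vp after the division.
    M = np.array([
        [1.0, 0.0, 0.0,          0.0],
        [0.0, 1.0, 0.0,          0.0],
        [0.0, 0.0, -z_vp / dp,   z_vp * z_prp / dp],
        [0.0, 0.0, -1.0 / dp,    z_prp / dp],
    ])
    xh, yh, zh, h = M @ np.array([x, y, z, 1.0])
    return xh / h, yh / h, zh / h
```

With z_prp = 5, z_vp = 0 and the point (2, 2, −5), we get h = (5 − (−5))/5 = 2, so the projected point is (1, 1, 0): the coordinates are halved because the point lies as far behind the view plane as the projection center lies in front of it.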

Finally, it should be mentioned that the number of principal vanishing points depends on how the axes
of the coordinate system are positioned relative to the image plane. If two coordinate axes are parallel to the
image plane it is called a 1-point perspective projection, if only one axis is parallel to the image plane we
call it a 2-point perspective projection, and if none of the three axes is parallel to the image plane it is
called a 3-point perspective projection (because then there are 3 principal vanishing points).

Composite Transformation:
A number of transformations, or a sequence of transformations, can be combined into a single one,
called a composition. The resulting matrix is called the composite matrix, and the process of
combining is called concatenation.
Suppose we want to perform a rotation about an arbitrary point; then we can perform it by the
sequence of three transformations:
1. Translation
2. Rotation
3. Reverse translation
The ordering of this sequence of transformations must not be changed. If points are represented as
column vectors, then the composite transformation is performed by multiplying the matrices in order
from right to left: the output obtained from the previous matrix is multiplied with the next matrix.
Example showing composite transformations:
The enlargement is with respect to the center of the object. For this, the following sequence of
transformations is performed, and all are combined into a single one:
Step 1: The object is kept at its position, as in fig (a).
Step 2: The object is translated so that its center coincides with the origin, as in fig (b).
Step 3: The object is scaled while it is kept at the origin, as in fig (c).
Step 4: A second translation is done. This is called a reverse translation: it positions the object
back at its original location.
The above transformation can be represented as Tv·S·Tv⁻¹.
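A minimal NumPy sketch of this composition (the helper names are illustrative, not from the text):

```python
import numpy as np

def translate(tx, ty):
    """Homogeneous 2D translation matrix (column-vector convention)."""
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def scale(sx, sy):
    """Homogeneous 2D scaling matrix about the origin."""
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], dtype=float)

def scale_about(cx, cy, sx, sy):
    """Composite matrix Tv . S . Tv^-1 for scaling about the fixed
    point (cx, cy): translate the point to the origin, scale, then
    translate back. With column vectors, the matrices apply from
    right to left."""
    return translate(cx, cy) @ scale(sx, sy) @ translate(-cx, -cy)
```

Scaling by 2 about the center (2, 2) maps the corner (1, 1) to (0, 0), while the center itself stays fixed.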
Advantages of composition or concatenation of matrices:
1. The transformations become compact.
2. The number of operations is reduced.
3. Rules for defining transformations in the form of equations are complex compared to the
matrix form.
Composition of two translations:
Let (tx1, ty1) and (tx2, ty2) be two translation vectors, defining translations T1 and T2. If T1 and
T2 are represented as homogeneous matrices, the final transformation matrix P obtained after
multiplication is again a translation, by (tx1 + tx2, ty1 + ty2).

The resulting matrix shows that two successive translations are additive.
Composition of two rotations: two rotations are also additive.
Composition of two scalings: the composition of two scalings is multiplicative. If S1 and S2 are the
scaling matrices to be multiplied, the composite scales by the products of the scaling factors.
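Both rules can be verified numerically. A short sketch (the helper names T and S are illustrative):

```python
import numpy as np

def T(tx, ty):
    """Homogeneous 2D translation matrix."""
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def S(sx, sy):
    """Homogeneous 2D scaling matrix about the origin."""
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], dtype=float)

# Two successive translations are additive ...
assert np.allclose(T(3, 4) @ T(1, 2), T(1 + 3, 2 + 4))
# ... whereas two successive scalings are multiplicative.
assert np.allclose(S(2, 3) @ S(4, 5), S(2 * 4, 3 * 5))
```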
INTRODUCTION

• The term computer animation generally refers to any time sequence of visual changes in a scene.
• In addition to changing object position with translations or rotations, a computer-generated
animation could display time variations in object size, color, transparency, or surface texture.
• Computer animations can be generated by changing camera parameters, such as position,
orientation, and focal length.
• Computer animations can also be produced by changing lighting effects.
DESIGN OF ANIMATION SEQUENCES
An animation sequence is designed with the following steps:
• Storyboard layout
• Object definitions
• Key-frame specifications
• Generation of in-between frames
STORYBOARD LAYOUT
• It is an outline of the action.
• It defines the motion sequence as a set of basic events that are to take place.
• Depending on the type of animation to be produced, the storyboard could consist of a set of
rough sketches or a list of the basic ideas for the motion.

OBJECT DEFINITION
• An object definition is given for each participant in the action.
• Objects can be defined in terms of basic shapes, such as polygons or splines.
• Along with the shape, the associated movements for each object are specified.

KEY-FRAME SPECIFICATION
• A key frame is a detailed drawing of the scene at a certain time in the animation sequence.
• Within each key frame, each object is positioned according to the time for that frame.
• Some key frames are chosen at extreme positions in the action; others are spaced so that the
time interval between key frames is not too great.

IN-BETWEEN FRAMES
• In-betweening is the process of generating intermediate frames between two images to give the
appearance that the first image evolves smoothly into the second. In-betweens are the drawings
between the key frames which help to create the illusion of motion.
• In-betweens are the intermediate frames between the key frames.
• The number of in-betweens needed is determined by the medium to be used to display the
animation. Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.
• Time intervals for the motion are set up so that there are from three to five in-betweens for
each pair of key frames.
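The simplest way to generate in-betweens is linear interpolation of corresponding vertex positions. A minimal sketch (hypothetical function, assuming both key frames already have the same vertex count):

```python
def in_betweens(key_a, key_b, n):
    """Generate n linearly interpolated in-between frames between
    two key frames, each given as a list of (x, y) vertices."""
    frames = []
    for j in range(1, n + 1):
        u = j / (n + 1)  # interpolation parameter, 0 < u < 1
        frames.append([(ax + u * (bx - ax), ay + u * (by - ay))
                       for (ax, ay), (bx, by) in zip(key_a, key_b)])
    return frames
```

For a single vertex moving from (0, 0) to (3, 0) with n = 2, the in-betweens lie at (1, 0) and (2, 0).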

There are several other tasks that may be required, depending on the application. They include:
• Motion verification
• Editing
• Production and synchronization of a soundtrack

GENERAL COMPUTER ANIMATION FUNCTIONS
• Some steps included in the development of an animation sequence are:
  – Object manipulations and rendering
  – Camera motions
  – Generation of in-betweens
• Animation packages, such as Wavefront, provide special functions for designing the animation
and processing individual objects.
• In animation packages, one function is provided to store and manage the object database.
Object shapes and associated parameters are stored and updated in the database.
• Other object functions include those for motion generation and object rendering. Motions can
be generated according to specified constraints using two-dimensional or three-dimensional
transformations.
• Another function simulates camera movements. Standard camera motions are zooming,
panning (rotating horizontally or vertically), and tilting.
RASTER ANIMATIONS
• On raster systems, we can generate real-time animations in limited applications using raster
operations, such as 2D or 3D transformations of screen areas.
• We can also animate objects along 2D motion paths using color-table transformations: the
object is predefined at successive positions along the motion path, and the pixel values at these
positions are stored as successive color-table entries. The pixels at the first position are set to
the object color, while the pixels at the other object positions are set to the background color.
The animation is then produced by cycling the color-table values, so that the object color steps
from one position to the next.
KEY-FRAME SYSTEMS

A key frame in animation is a drawing that defines the starting and ending points of any smooth
transition. The drawings are called "frames" because their position in time is measured in frames
on a strip of film. A sequence of key frames defines which movement the viewer will see, whereas
the position of the key frames on the film, video or animation defines the timing of the movement.
Because only two or three key frames over the span of a second do not create the illusion of
movement, the remaining frames are filled with in-betweens.

With complex object transformations, the shapes of objects may change over time. Examples are
clothes, facial features, magnified detail, evolving shapes, exploding or disintegrating objects, and
transforming one object into another object.

If all surfaces are described with polygon meshes, then the number of edges per polygon can
change from one frame to the next. Thus, the total number of line segments can be different in
different frames.
MORPHING

Transformation of object shapes from one form to another is termed morphing, a short form of
metamorphosis. This method can be applied to any motion or transition involving a change in shape.

Given two key frames for an object transformation, we first adjust the object specification in one of
the frames so that the number of polygon edges (or the number of vertices) is the same for the two
frames.

A straight-line segment in key frame k is transformed into two line segments in key frame k+1.
Since key frame k+1 has an extra vertex, we add a vertex between vertices 1 and 2 in key frame k
to balance the number of vertices (and edges) in the two key frames. Using linear interpolation to
generate the in-betweens, we transition the added vertex in key frame k into vertex 3' along the
straight-line path shown in the figure.

An example of a triangle linearly expanding into a quadrilateral is given in the figure. The general
preprocessing rules for equalizing key frames are given in terms of either the number of edges or
the number of vertices to be added to a key frame.

Suppose we equalize the edge count, and let the parameters Lk and Lk+1 denote the number of line
segments in two consecutive frames. We then define

Lmax = max(Lk, Lk+1)
Lmin = min(Lk, Lk+1)
Ne = Lmax mod Lmin
Ns = int(Lmax/Lmin)

The preprocessing is accomplished by:
• dividing Ne edges of keyframe_min into Ns + 1 sections;
• dividing the remaining edges of keyframe_min into Ns sections.

For example, if Lk = 15 and Lk+1 = 11, we divide Ne = 4 edges of keyframe_k+1 into 2 sections
each. The remaining edges of keyframe_k+1 are left intact.
If we equalize the vertex counts, then the parameters Vk and Vk+1 denote the number of vertices in
the two consecutive frames. In this case we define

Vmax = max(Vk, Vk+1)
Vmin = min(Vk, Vk+1)
Nls = (Vmax − 1) mod (Vmin − 1)
Np = int((Vmax − 1)/(Vmin − 1))

Preprocessing using the vertex count is performed by:
• adding Np points to Nls line sections of keyframe_min;
• adding Np − 1 points to the remaining edges of keyframe_min.

For the triangle-to-quadrilateral example, Vk = 3 and Vk+1 = 4. Both Nls and Np are 1, so we would
add one point to one edge of keyframe_k; no points would be added to the remaining edges of
keyframe_k.
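The equalization parameters above can be computed directly. A small sketch (the helper names are hypothetical):

```python
def edge_equalization(Lk, Lk1):
    """Ne and Ns for equalizing the edge counts of two key frames."""
    Lmax, Lmin = max(Lk, Lk1), min(Lk, Lk1)
    Ne = Lmax % Lmin    # edges of keyframe_min split into Ns+1 sections
    Ns = Lmax // Lmin   # remaining edges split into Ns sections
    return Ne, Ns

def vertex_equalization(Vk, Vk1):
    """Nls and Np for equalizing the vertex counts of two key frames."""
    Vmax, Vmin = max(Vk, Vk1), min(Vk, Vk1)
    Nls = (Vmax - 1) % (Vmin - 1)   # sections receiving Np added points
    Np = (Vmax - 1) // (Vmin - 1)   # points added per such section
    return Nls, Np
```

For the examples in the text: edge_equalization(15, 11) gives Ne = 4 and Ns = 1 (4 edges divided into 2 sections each), and vertex_equalization(3, 4) gives Nls = Np = 1 (one point added to one edge).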
SIMULATING ACCELERATIONS
Curve-fitting techniques are often used to specify the animation paths between key frames. Given
the vertex positions at the key frames, we can fit the positions with linear or nonlinear paths.
The figure illustrates a nonlinear fit of key-frame positions. To simulate accelerations, we can
adjust the time spacing for the in-betweens.

For constant speed (zero acceleration), we use equal-interval time spacing for the in-betweens.
Suppose we want n in-betweens for key frames at times t1 and t2. The time interval between the
key frames is then divided into n + 1 subintervals, yielding an in-between spacing of

Δt = (t2 − t1)/(n + 1)

We can calculate the time for the jth in-between as

tBj = t1 + j·Δt,  j = 1, 2, …, n

Nonzero accelerations are used to produce realistic displays of speed changes, particularly at the
beginning and end of a motion sequence. We can model the start-up and slowdown portions of an
animation path with spline or trigonometric functions. Parabolic and cubic time functions have
been applied to acceleration modelling, but trigonometric functions are more commonly used in
animation packages.

To model increasing speed (positive acceleration), we want the time spacing between frames to
increase so that greater changes in position occur as the object moves faster. We can obtain an
increasing interval size with the function

1 − cos θ,  0 < θ < π/2

For n in-betweens, the time for the jth in-between is then calculated as

tBj = t1 + Δt·[1 − cos(jπ/(2(n + 1)))],  j = 1, 2, …, n

where Δt is the time difference between the two key frames. The figure gives a plot of this
trigonometric acceleration function and the in-between spacing for n = 5.
Often, motions contain both speed-ups and slow-downs. We can model a combination of increasing
and decreasing speed by first increasing the in-between time spacing and then decreasing it. A
function to accomplish these time changes is

(1 − cos θ)/2,  0 < θ < π

The time for the jth in-between is now calculated as

tBj = t1 + Δt·(1 − cos(jπ/(n + 1)))/2

with Δt denoting the time difference between the two key frames. Time intervals for the moving
object first increase and then decrease, as shown in the next figure.
We can model decreasing speed (deceleration) with sin θ in the range 0 < θ < π/2. The time
position of the jth in-between is now defined as

tBj = t1 + Δt·sin(jπ/(2(n + 1)))

A plot of this function and the decreasing size of the time intervals is shown in the next figure for
five in-betweens.
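The timing rules of this section can be collected into one function. This is a sketch (the function name and mode strings are hypothetical) based on the trigonometric spacing functions above:

```python
import math

def in_between_times(t1, t2, n, mode="constant"):
    """Times of the n in-betweens between key frames at t1 and t2.

    mode selects the spacing: "constant" speed, "accelerate"
    (1 - cos theta), "decelerate" (sin theta), or "both"
    (speed-up followed by slow-down)."""
    dt = t2 - t1
    times = []
    for j in range(1, n + 1):
        if mode == "constant":
            u = j / (n + 1)
        elif mode == "accelerate":
            u = 1 - math.cos(j * math.pi / (2 * (n + 1)))
        elif mode == "decelerate":
            u = math.sin(j * math.pi / (2 * (n + 1)))
        else:  # "both": increasing, then decreasing spacing
            u = 0.5 * (1 - math.cos(j * math.pi / (n + 1)))
        times.append(t1 + u * dt)
    return times
```

With one in-between on [0, 1], constant spacing places it at 0.5; acceleration pushes it earlier (about 0.293, so the later interval is larger) and deceleration pushes it later (about 0.707).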
