CS602 Final Term Merged
Computer Graphics
(Solved MCQs)
Lectures 23 to 45
[email protected] FOR MORE VISIT JUNAID MALIK
VULMSHELP.COM 0304-1659294
Rotation is performed around a fixed point called ______.
Pivot point rotation Page no: 114
True
False
Trivial Reject
Trivial Accept Page no : 137
None of given
10. Dark lights are nothing more than lights in which one or more of the color values are ________.
Unknown
Negative Page no: 230
Positive
Zero
11. At a physical surface, our eye's perception of the color depends on the distribution of photon energies that arrive and trigger our _______ cells.
Eye
Retina
Cone Page no: 393
12. This projection technique has the direction of projection perpendicular to the viewing plane, but the viewing direction is NOT perpendicular to one of the principle faces.
Orthographic Parallel Projection
Axonometric Parallel Projection Page no: 189
Oblique Parallel Projection
Ambient
Diffuse
Specular
Emissive
15. A plane is two dimensional since in order to uniquely define any point on its surface we require _______ numbers.
Two Page no: 359
Three
Four
Five
16. In perspective projection, for your view to come out correctly, you will also want the _______ to pass through the middle of the screen.
X axis
Y axis
Z axis Page no : 195
None
17. Neither floating-point nor signed integer values are clamped to the range ________ before updating the current color.
0 , -1.0
-1 , 1
1 , -1
0, 1 Page no : 316
18. Bezier curve is the ideal standard for representing the ________ piecewise polynomial curves.
Most complex
Less complex
None of given
More complex Page no : 333
19. An object's _______ determine its orientation relative to the light sources. For
each vertex, OpenGL uses the assigned normal to determine how much light that
particular vertex receives from each light source.
Unit
Normal Page no : 395
None of given
21. Which of the following affine transforms does NOT affect vectors?
Scale
Rotation
Shear
Translation Page no : 113
22) This projection technique does NOT have the direction of projection
perpendicular to the viewing plane.
23) This projection technique has the direction of projection perpendicular to the
viewing plane, and the viewing direction is perpendicular to one of the principle
faces.
24) In OpenGL, there are several different matrices. We have discussed two of
them in class. Which one of the below would be used in conjunction with a
glRotatef function call?
25) In OpenGL, there are several different matrices. We have discussed two of
them in class. Which one of the below would be used in conjunction with
glFrustum?
GL_MODELVIEW
GL_PROJECTION Page no : 369
26) Which of the following is the order that geometry operations are performed in
OpenGL (where we read the order from left to right)?
28) Which of the following is NOT a modern application for Computer Graphics?
30) TV series are made as simply as possible from the animation point of view. This approach is generally known as ---------------------.
Full animation
Limited animation Page no : 423
Low animation
High resolution
31) An eight frame run cycle that ------------------ frame/frames to each step gives a
fast and vigorous dash. At this speed the successive leg positions are quite widely
separated and may need dry brush or speed lines to make the movement flow.
► Two
► One
► Three
► Four Page no: 432
32) ----------- Reflection is the effect of reflecting light toward the direction from which it came, no matter the orientation of the surface.
► Forward scattering
► Diffuse Lambertian
► Backscattering
► Retro Page no : 288
33)What makes this really challenging to model is that the index of refraction for
most materials is a function of the ------------------ of the light. This means that not
only is there a shift in the angle of refraction, but that the shift is different for
differing --------------- of light.
34) The reflected light wave turns out to be a -------------------- case since light is
reflected at the same angle as the incident wave (when the surface is smooth and
uniform, as we'll assume for now).
► Abnormal
► Complex
► Simple Page no : 291
► Unknown
36) ________ sets the reshape callback for the current window. The reshape callback is triggered when a window is reshaped.
► glutMainLoop
► glutIdleFunc
► glutReshapeFunc Page no : 307
► glutDisplayFunc
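For reference, a minimal sketch of registering this callback in a GLUT program (the callback body and window title are illustrative assumptions, not from the handout):

    #include <GL/glut.h>

    /* Triggered by GLUT whenever the window is reshaped. */
    static void reshape(int width, int height)
    {
        glViewport(0, 0, width, height);   /* track the new window size */
    }

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("reshape demo");
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);   /* sets the reshape callback for the current window */
        glutMainLoop();
        return 0;
    }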
37) Signed integer colour components, when specified, are linearly mapped to floating-point values such that the most positive representable value maps to 1.0, and the most negative representable value maps to ---------------. Floating-point values are mapped directly.
► -1.0
► 0.0
► 2.0
► 1.0 Page no : 315
41) Keep polygon orientations consistent to make sure that when viewed from the outside, all the polygons on the surface are oriented in the ________ direction.
► None of the given
► Neither
► Different
► Same (page 345)
42) The ----------------- is the simplest example that exhibits the property of self-similarity.
► Moss
► Fern Page no : 350
► None of the given
► Thohar
43) A common mistake people make when creating three-dimensional graphics is to start thinking too soon that the final image appears on a flat, two-dimensional screen. Avoid thinking about which pixels need to be drawn, and instead try to visualize ----------------- space.
► Multi-dimensional
► One-dimensional
► Two-dimensional
► Three-dimensional Page no: 366
44)Which of the following properties of rational Bezier curves fails if the weight
assigned to a control point is negative?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull page no : 335
45)In the Phong reflection model, there are 3 constants (a, b, c) which are used to
describe the qualities of which of the following phenomena?
► Specular
► Diffuse
► Ambient
48) When you hit a surface in ray tracing, generally shadow rays are tested against
all objects in a scene. If these rays come back saying they hit an object in the
scene, which of the following do you do?
► add all components (i.e. ambient, diffuse and specular) from that light
source to the object.
► add all EXCEPT the ambient light from that light source to the object
(i.e. diffuse and specular)
► add only the ambient light from that light source to the object
► add none of the light from that light source to the object
49) The Color Space tool is a handy tool that we can use to interactively add two
colours together to see the effects of the various strategies for handling
oversaturated colours.
► False
► True page no : 230
► Ending lines
► Points
► Vertices Page no : 243
► Edges
51) Which of the following properties of Bezier curves guarantees that a line passes
through the control polygon as many times or more times than the line passes
through the Bezier curve itself?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
► Edge
► Vertices
► Pixels Page no : 80
► None of the given
53) The actual filling process in boundary filling algorithm begins when a point
_ of the figure is selected.
► At boundary
1) In class, we discussed three forms of shading for "Utah" graphics. Which was the first to use per-vertex normals?
Flat Shading
Phong Shading
Gouraud Shading Page no : 240
59) Given any implicit equation, which of the following is true for all (x, y, z) that
make the equation exactly zero?
All those points are inside the object defined by the implicit equation
All those points are on the surface of the object defined by the implicit
equation Page no :205
All those points are outside the object defined by the implicit equation
You can’t know anything without knowing what the implicit equation is
60) When solving ray-sphere intersections using the implicit equation for a sphere, you must solve the quadratic equation. Which of the following do you know if B²-4AC (i.e. the part under the square root) is negative?
The ray intersects the sphere at a negative t… discard this result
The ray intersects the sphere at a positive t… continue to the solution
The ray does not intersect the sphere… discard this result Page no : 265
The ray begins inside the sphere… this is a special case
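A sketch of how the discriminant test typically looks in code (A, B, C are the quadratic coefficients; the names here are illustrative):

    #include <math.h>

    /* Returns 0 if the ray misses the sphere, 1 otherwise; on a hit,
       *t_near receives the smaller root (the nearer intersection). */
    int ray_hits_sphere(double A, double B, double C, double *t_near)
    {
        double disc = B * B - 4.0 * A * C;   /* the part under the square root */
        if (disc < 0.0)
            return 0;                        /* no real roots: no intersection, discard */
        *t_near = (-B - sqrt(disc)) / (2.0 * A);
        return 1;
    }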
73) Bezier curve can represent the more complex piecewise ________ curve.
Polynomial Page no: 338
Exponential
Cubic
None of above
74) Curve and surface equations can be expressed in either a parametric or a non-parametric form.
True
False Page no : 333
75) Using a lighting model based upon the Blinn Phong model means that we'll
always get a uniform specular highlight based upon the colour of the ---------------
light and material, which means that all reflections based on this model, will be
reminiscent of plastic.
Union
Refracting
Intersection
Reflecting Page no: 291
76) If the current matrix (according to glMatrixMode) is multiplied by the
translation matrix, with the product replacing the current matrix. That is, if M is the
current matrix and T is the translation matrix, then M is replaced with ---------------.
M-T
M+T
M/T
M*T (Page 317)
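In code this post-multiplication happens implicitly; a sketch (the translation values are arbitrary):

    glMatrixMode(GL_MODELVIEW);       /* select the current matrix M */
    glTranslatef(2.0f, 0.0f, 0.0f);   /* builds T and replaces M with M * T */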
77) With similar expressions for y(u) and z(u). Again the a, b, c and d terms are constant coefficients. As we did with the equation for a plane curve, we combine the x(u), y(u), and z(u) expressions into a single vector equation P(u) = ________.
au² + bu + cu + d
au⁴ + bu³ + cu² + d
au³ + bu² + cu² + d
au³ + bu² + cu + d Page no: 326
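A sketch of evaluating such a cubic component-wise in Horner form (the Vec3 type and function name are assumptions):

    typedef struct { double x, y, z; } Vec3;

    /* P(u) = a*u^3 + b*u^2 + c*u + d, evaluated per component, 0 <= u <= 1. */
    Vec3 eval_cubic(Vec3 a, Vec3 b, Vec3 c, Vec3 d, double u)
    {
        Vec3 p;
        p.x = ((a.x * u + b.x) * u + c.x) * u + d.x;
        p.y = ((a.y * u + b.y) * u + c.y) * u + d.y;
        p.z = ((a.z * u + b.z) * u + c.z) * u + d.z;
        return p;
    }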
78) The matrix generated by gluPerspective is multiplied by the current matrix, just
as if glMultMatrix were called with the generated matrix. To load the perspective
matrix onto the current matrix stack instead, precede the call to gluPerspective with
a call to ________.
glRotated
gluPerspective
glTranslated
glLoadIdentity Page no : 313
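The usual call sequence, sketched (the field-of-view and clip distances are arbitrary):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                             /* start from identity, so the perspective
                                                     matrix is loaded rather than multiplied
                                                     onto whatever was there before */
    gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);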
79) Each number that makes up a matrix is called an ________ of the matrix.
Element Page no : 101
Variable
Value
Component
80) Which one of the following steps is not involved in writing a pixel using video BIOS services?
Setting desired video mode
Using BIOS service to set color of a screen pixel
Calling BIOS interrupt to execute the process of writing pixel.
Using OpenGL service to set color of a screen pixel
81) Shadow mask methods can display a ________ range of colors.
Small
Wide Page no : 29
Random
Crazy
82) Using Cohen-Sutherland line clipping, it is impossible for a vertex to be
labeled 1111.
True
False
83) Intensity of the electron beam is controlled by setting ________ levels on the control grid, a metal cylinder that fits over the cathode.
Amplitude
Current
Voltage Page no : 26
Electron
84) Which of the following is NOT a modern application for Computer Graphics?
97) Bezier curve is the ideal standard for representing the ----------------------------
piecewise polynomial curves.
► None of the given
► Non complex
► Most complex
► More complex (Page 338)
98) Keep polygon orientations consistent to make sure that when viewed from the
outside, all the polygons on the surface are oriented in the same direction.
► None of the given
► Neither
► Different
► Same (page 345)
99) The ----------------- is most simple example that exhibits the property self-
similarity.
► Mosse
► Fern (Page 355)
► None of the given
► Thohar
100) A common mistake people make when creating three-dimensional graphics is to start thinking too soon that the final image appears on a flat, two-dimensional screen. Avoid thinking about which pixels need to be drawn, and instead try to visualize ----------------- space.
► Multi-dimensional
► One-dimensional
► Two-dimensional
► Three-dimensional (Page 371)
101)Which of the following properties of rational Bezier curves fails if the weight
assigned to a control point isnegative?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
102) We want our scene to look more realistic, we should use lights.
Ambient (Page 282)
Point Parallel
Spot
None of the given
103) This is a simple example of line clipping: the display window is the canvas and also the default -------------------, thus all line segments inside the canvas are drawn.
Clipping Rectangle (Page 141)
Clipping Circle
Clipping Polygon
Clipping Angle
104) One problem with Gouraud shading is that the ------------ intensities can never be greater than the intensities at the edges.
Triangles (Page 246)
Squares
Rectangles
Polygons
105) There is more penetration of light in case of ________ surfaces.
Conductor (like metals)
Nonconductor (like dielectrics) (Page 235)
Both conductor and nonconductor
None of the given
106) ________ lights should be avoided because they are not for real time environment.
Point
Parallel
Spot (Page 244)
None of the given
107) The physical range of colors a device can display is called the ________ of the device.
Sharpness
Gamut (Page 229)
Colouring
Colouring with Sharpness
108) ________ is simply the calculation of color reflected by the surface.
Shading (Page 240)
Clamping
Scaling
None of the given
109) When obtaining normals for a triangle, which of the following mathematical
constructs is NOT used?
Vector normalization
Vector cross products
Vector dot products
Point-Point subtraction
110) Loosely, the alpha component of the RGBA quad represents the ________ of a surface.
Opaqueness (Page 227)
Light
Darkness
Shine
111) An algorithm that clips a polygon must deal with many ------------------ cases. The case is particularly noteworthy in that the concave polygon is clipped into ----------- isolated polygons.
Similar, three
Different, two (Page 146)
Different, three
Similar, two
112) ________ lighting is not dependent on any light source.
Ambient
Diffuse
Specular
Emissive
113) In order to get a more realistic representation of lighting, we'll need to understand how light passes through a medium and how hitting the boundary layer at the ---------------- of two media can affect light's properties.
Intersection (Page 296)
Union
Endpoints
Edges
114) Lambertian shading was used mostly back when computers weren't fast
enough to do ________ in real time.
Phong shading
Processing
Shading
Gouraud shading (Page 245)
115) In Perspective Projection the point of view (POV) must lie on the ________.
All axis
Z axis (Page 200)
X axis
Y axis
116) If we want any object to glow, we should use ________ lights.
Ambient
Diffuse
Specular
Emissive (Page 240)
117) There are not many different ways of representing the intensity of a particular
color element.
True
False (Page 276)
118) In Perspective Projection the screen plane must be parallel to the ________.
Y-Z plane
X-Y plane (Page 200)
Z-Y plane
X-Z plane
119) ________ light is reflected in all directions from the surface.
Ambient
Diffuse (page 239)
Specular
Emissive
120) A space curve can be confined to a plane.
Yes
No (Page 331)
121) To convert the information in the A matrix into that required for the P matrix, we do some simple matrix algebra. First we have UA = UNP; then simply A = -------------.
UP
NP (Page 333)
UN
None
122) Perspective projection is specified with the function glFrustum().
Yes (Page 376)
No
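A sketch of specifying the perspective projection with glFrustum (the clip-plane values are arbitrary):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0,    /* left, right */
              -1.0, 1.0,    /* bottom, top */
               1.5, 20.0);  /* near, far   */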
122) Choose a camera lens or adjust the zoom
projection transformation (Page 372)
viewport transformation
modeling transformation
viewing transformation
123) Using a lighting model based upon the Blinn-Phong model means that we'll always get a uniform specular highlight based upon the color of the --------------- light and material, which means that all reflections based on this model will be reminiscent of plastic.
Union
Refracting
Intersection
Reflecting (Page 296)
124) Refractive index is a function of temperature, mostly due to density changes in materials with changes in temperature.
True (Page 300)
False
125) Length L depends on the angle alpha and the z coordinate of the point to be projected, and L can be represented by -----------------.
z * 1/ tan (alpha) (Page 198)
z * L2
z * 1/ tan (beta)
z * 1/ tan (gamma)
126) The traditional approach in real-time computer graphics has been to calculate lighting at a vertex as a sum of the ________ light.
Ambient
Ambient, diffuse, and specular (Page 281)
Specular
Diffuse, and specular
127) Another way to define a space curve is by using intermediate points and the tangents at each end for making the curve.
Yes
No (Page 334)
128) An independent consortium, the OpenGL Architecture Review Board, guides the OpenGL specification. With broad industry support, OpenGL is the only truly open, vendor-neutral, ---------------- graphics standard.
Tertiary
Binary
Single platform
Multiplatform (Page 301)
129) glutReshapeWindow requests a change in the size of the current window. The width and height parameters are size extents in pixels. The width and height must be ________ values.
Neutral
Negative
Positive (Page 311)
None of the given
130) A space curve is not confined to a plane. It is free to twist through space. To define a space curve we must use parametric functions that are --------------------.
Binary polynomials
Mono polynomials
Quadratic polynomials
Cubic polynomials (Page 331)
131) Refractive index is a function of temperature, mostly due to changes in ---------------------- of materials with changes in temperature. A simple correction can be applied in most circumstances to allow us to use a value given at one temperature at another.
Density (Page 300)
pressure
nature
volume
132) If we assign a different value to the parametric variable for the intermediate point, then we obtain different values for the coefficients. This, in turn, means that a different curve is produced, although it passes through the -------------- three points.
isolate
different
same (Page 328)
none
133) The attenuation formula is f = -----------------------, where C, L and Q are the constant, linear and quadratic attenuation factors and d is the distance between the vertex being lit and the light source.
1 / (C + Ld + Qd²)
1 / (C + Ld + Qd)
1 / (C + L + d + Qd²)
1 / (Cd + Ld + Qd²)
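The first option transcribes directly into a small helper; a sketch (the parameter names simply mirror the question):

    /* f = 1 / (C + L*d + Q*d^2), with d the distance from the lit vertex
       to the light source. */
    double attenuation(double C, double L, double Q, double d)
    {
        return 1.0 / (C + L * d + Q * d * d);
    }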
144) Bezier curve is tangent to the lines connecting ________.
First two points
Last two points
First two points and last two points (Page 340)
None of the given
145) Given end points and an intermediate point on the curve, we now have --------------------- quantities that we can express in terms of these coefficients (3 points x 3 coordinates each), and we can use these three points to define a unique curve.
Six
Three
Two
Nine (Page 326)
146) Choose a camera lens or adjust the zoom
projection transformation (Page 372)
viewport transformation
modeling transformation
viewing transformation
148) ________ OpenGL function is used for aiming and positioning the camera towards the object.
glLoadIdentity()
gluLookAt() (Page 375)
glFrustum()
None of Above
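A sketch of aiming the camera with gluLookAt (the eye, center and up values are arbitrary):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();            /* clear any previous viewing transform */
    gluLookAt(0.0, 0.0, 5.0,     /* eye position */
              0.0, 0.0, 0.0,     /* point the camera looks at */
              0.0, 1.0, 0.0);    /* up direction */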
149) A parametric curve is one whose defining equations are given in terms of a -------------, common, independent variable called the parametric variable.
Triple
Double
Single (Page 325)
None of the given
150) The reflection coefficients are in the -------------------------- range and are specified as part of the material property. However, they are strictly empirical, and since they simply adjust the overall intensity of the material color, the material color values are usually adjusted so the color intensity varies rather than using a reflection coefficient.
[0, 10]
[0, 1] (Page 281)
[0, 5]
[0, 2]
151) To ensure a smooth transition from one section of a piecewise __________ to the next, we can impose various continuity conditions at the connection points.
non parametric curve
parametric curve
polygon vector (not confirmed) (Page 245)
None of these
152) The curve is always contained within the ________ of the control points.
Tangents
Convex Hull (Page 340)
Subdivision
None of Above
Question # 23
Projection can be defined as a mapping of point P(x, y, z) onto its image P'(x', y', z') in the ----------------, which constitutes the display surface. The mapping is determined by a projection line called the projector that passes through P and intersects the -----------------.
Two Coordinate Planes
View plane or projection plan (Page 193)
Three Coordinate Planes
Mapping plane
Question # 24
Determine how large we want the final photograph to be - for example, we might
want it enlarged
projection transformation
viewport transformation (Page 372)
modeling transformation
viewing transformation
Question # 25
Ambient light is the light that comes from ---------------------- directions, thus all surfaces are illuminated equally regardless of orientation. However, this is a big hack in traditional lighting calculations, since "real" ambient light really comes from the light reflected from the "environment."
All (Page 281)
Opposite
Same
Four different
Question # 26
Silhouette edges occur when the dot product of the surface normal vector and the view vector is ________.
Zero (Page 345)
One
Both zero and one
Question # 27
If the current matrix (according to glMatrixMode) is multiplied by the translation matrix, with the product replacing the current matrix. That is, if M is the current matrix and T is the translation matrix, then M is replaced with ---------------.
M-T
M+T
M/T
M*T (Page 317)
Question # 28
Arrange the scene to be photographed into the desired composition
projection transformation
viewport transformation
modeling transformation (Page 317)
viewing transformation
Question # 29
In the forms of texture mapping, image to world space and world space to image, each suffers from different problems related to magnification and minification. Which of the two shows the following problem: when the texture is larger than the screen space it maps to, many texture units (texels) are never sampled?
Image to world space
World space to image
X-axis
Y-axis
Question # 31
Imagine a curve in three-dimensional space; each point on the curve has a unique set of coordinates: a specific x value, y value, and z value. Each coordinate is controlled by a -------------- parametric equation.
Opposite
Similar
Separate (Page 325)
Question # 32
We allow the parametric variable to take on values only in the interval ----------------.
-1 <= u <= 0
0 <= u <= 2
0 <= u <= 1 (Page 326)
-1 <= u <= 1
Question # 33
Bezier curve can represent the more complex piecewise ________ curve.
Polynomial (Page 338)
Exponential
Cubic
None of above
Question # 34
A fractal generally has a property called ________.
Fractal Dimension
Self-similarity (Page 355)
Koch Curve
None of above
Question # 35
Normalized cross product of two vectors on that surface provides the normal vector.
Yes (Page 347)
No
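A sketch of that computation for a triangle (the Vec3 type and function name are assumptions):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Unit normal of triangle (p0, p1, p2): normalize((p1-p0) x (p2-p0)). */
    Vec3 triangle_normal(Vec3 p0, Vec3 p1, Vec3 p2)
    {
        Vec3 u = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
        Vec3 v = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
        Vec3 n = { u.y * v.z - u.z * v.y,    /* cross product u x v */
                   u.z * v.x - u.x * v.z,
                   u.x * v.y - u.y * v.x };
        double len = sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        n.x /= len; n.y /= len; n.z /= len;  /* normalize; assumes a non-degenerate triangle */
        return n;
    }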
Question # 36
Every point on a curve has a straight line associated with it called the ________.
State line
tangent line (Page 334)
curved line
None of the given
Question # 36
The value returned is a unique small integer identifier for the window. The range of allocated identifiers starts at ------------------. This window identifier can be used when calling glutSetWindow.
Three
Two
One (Page 308)
Zero
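A sketch of using the returned identifier (the window title is arbitrary):

    int win = glutCreateWindow("main window");  /* identifiers are allocated starting at 1 */
    /* ... later, make that window current again: */
    glutSetWindow(win);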
Question # 37
Curve and surface equations can be expressed in either a parametric or a non
parametric form.
True
False
Question # 38
Bernstein polynomial functions are the basic functions of ________ curves.
NURBS
Bezier (Page 342)
Both NURBS and Bezier
None of the given
Question # 39
Geometric patterns that are repeated at ever smaller scales to produce irregular shapes and surfaces are called:
Geometric patterns
Fractals (Page 352)
Animated components
Segments
Question # 40
In order to get a more realistic representation of lighting, we'll need to understand how light passes through a medium and how hitting the boundary layer at the ---------------- of two media can affect light's properties.
Intersection (Page 296)
Union
Endpoints
Edges
Question # 41
________ sets the global idle callback to be 'func' so a GLUT program can perform background processing tasks or continuous animation when window system events are not being received.
glutIdleFunc (Page 313)
glutMainLoop
glutDisplayFunc
glutReshapeFunc
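A sketch of registering an idle callback (the callback body is illustrative):

    /* Runs whenever no window-system events are pending. */
    static void idle(void)
    {
        /* advance the animation state here ... */
        glutPostRedisplay();   /* ... then request a redraw */
    }

    /* in initialization code: */
    glutIdleFunc(idle);        /* passing NULL instead would disable the idle callback */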
Question # 42
A tangent vector certainly defines the slope at one end of the curve, but a vector has characteristics of ________.
direction
magnitude
both direction and magnitude (Page 336)
None of the given
Question # 43
The degree of a Bezier curve is equal to n-1, where n is the number of control
points
Yes (Page 339)
No
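So four control points give a cubic (degree 3). A sketch of evaluating one coordinate of such a curve by repeated linear interpolation, de Casteljau's algorithm (the names are assumptions):

    /* Evaluates a Bezier curve of degree n-1 at parameter u (0 <= u <= 1),
       given the n control-point coordinates in p[]. Overwrites p[]. */
    double bezier_de_casteljau(double p[], int n, double u)
    {
        for (int r = 1; r < n; r++)           /* n-1 rounds of interpolation */
            for (int i = 0; i < n - r; i++)
                p[i] = (1.0 - u) * p[i] + u * p[i + 1];
        return p[0];
    }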
Question # 44
Bit mask to select a window with multisampling support. If multisampling is not available, a ----------------- window will automatically be chosen.
Non-multisampling (Page 310)
Multisampling
Mono-multisampling
Di-multisampling
Question # 45
OpenGL is well structured with an intuitive design and logical commands. Efficient OpenGL routines typically result in applications with fewer lines of code than those that make up programs generated using other graphics libraries or packages. In addition, OpenGL drivers ---------------- information about the underlying hardware, freeing the application developer from having to design for specific hardware features.
Encapsulate (Page 302)
Shows
Hibernates
None of the given
Question # 46
With similar expressions for y(u) and z(u). Again the a, b, c and d terms are constant coefficients. As we did with the equation for a plane curve, we combine the x(u), y(u), and z(u) expressions into a single vector equation P(u) = ________.
au² + bu + cu + d
au⁴ + bu³ + cu² + d
au³ + bu² + cu² + d
au³ + bu² + cu + d (Page 331)
Question # 48
The matrix generated by gluPerspective is multiplied by the current matrix, just as if glMultMatrix were called with the generated matrix. To load the perspective matrix onto the current matrix stack instead, precede the call to gluPerspective with a call to --------------------.
glRotated
gluPerspective (Page 318)
glTranslated
glLoadIdentity
Question # 49
The basic functions fi(u) in Bezier curve must be symmetric with respect to u and
(u-2)
yes
no (Page 341)
Question # 50
Arrange the scene to be photographed into the desired composition
projection transformation
viewport transformation
modeling transformation (Page 372)
viewing transformation
Question No: 51
NURBS stands for --------------------.
Non Universal Rational Binary Spline
Non Uniform Rational Binary Splines
Non Uniform Rational Beta Splines (Page 325)
Non Universal Rational Beta Spline
Question No: 1
Which of the following is NOT a modern application for Computer Graphics?
► Stop-motion animation (Page 6)
► Computer Aided Geometric Design
► Video Games
► Scientific Visualization
Question No: 52
Both Boundary Filling and Flood Filling algorithms are non-recursive techniques.
► False
► True
Question No: 53
TV series are made as simply as possible from the animation point of view. This approach is generally known as ---------------------.
► Full animation
► Limited animation (Page 431)
► Low animation
► High resolution
Question No: 54
An eight frame run cycle that ------------------ frame/frames to each step gives a fast and vigorous dash. At this speed the successive leg positions are quite widely separated and may need dry brush or speed lines to make the movement flow.
► Two
► One
► Three
► Four (Page 437)
Question No: 55
----------- Reflection is the effect of reflecting light toward the direction from
which it came, no matter the orientation of the surface.
► Forward scattering
► Diffuse Lambertian
► Backscattering
► Retro (Page 296)
Question No: 56
What makes this really challenging to model is that the index of refraction for most
materials is a function of the ------------------ of the light. This means that not only is
there a shift in the angle of refraction, but that the shift is different for differing ---------------- of light.
Question No: 57
The reflected light wave turns out to be a --------------------- case since light is
reflected at the same angle as the incident wave (when the surface is smooth and
uniform, as we'll assume for now).
► Abnormal
► Complex
► Simple (Page 299)
► Unknown
Question No: 59
________ sets the reshape callback for the current window. The reshape
callback is triggered when a window is reshaped.
► glutMainLoop
► glutIdleFunc
► glutReshapeFunc (Page 315)
► glutDisplayFunc
Question No: 60
Signed integer color components, when specified, are linearly mapped to floating-
point values such that the most positive representable value maps to 1.0, and the
most negative representable value maps to --------------- --. Floating-point values
are mapped directly.
► -1.0
► 0.0
► 2.0
► 1.0 (Page 323)
Question No: 65
Keep polygon orientations consistent to make sure that when viewed from
the outside, all the polygons on the surface are oriented in the same
direction.
► None of the given
► Neither
► Different
► Same (page 347)
Question No: 66
The ----------------- is the simplest example that exhibits the property of self-similarity.
► Moss
► Fern (Page 358)
► None of the given
► Thohar
Question No: 67
A common mistake people make when creating three-dimensional graphics is to start thinking too soon that the final image appears on a flat, two-dimensional screen. Avoid thinking about which pixels need to be drawn, and instead try to visualize ----------------- space.
► Multi-dimensional
► One-dimensional
► Two-dimensional
► Three-dimensional (Page 374)
Question No: 68
Which of the following properties of rational Bezier curves fails if the weight
assigned to a control point is negative?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
Question No: 69
Actually, each component is a rational Bézier curve. We have made it very clear
that all weights must be non-negative. If some of them are negative, the strong
convex hull property or even the convex hull property will not hold.
Question No: 70
In the Phong reflection model, there are 3 constants (a, b, c) which are used to describe the qualities of which of the following phenomena?
► Specular
► Diffuse
► Ambient
Question No: 72
When you hit a surface in ray tracing, generally shadow rays are tested against all
objects in a scene. If these rays come back saying they hit an object in the scene,
which of the following do you do?
► add all components (i.e. ambient, diffuse and specular) from that
light source to the object.
► add all EXCEPT the ambient light from that light source to the object
(i.e. diffuse and specular)
► add only the ambient light from that light source to the object
► add none of the light from that light source to the object
Question No: 73
The Color Space tool is a handy tool that we can use to interactively add two
colours together to see the effects of the various strategies for handling
oversaturated colours.
► False
► True (Page 238)
Question No: 75
Which of the following properties of Bezier curves guarantees that a line passes
through the control polygon as many times or more times than the line passes
through the Bezier curve itself?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
Question No: 76
► Edge
► Vertices
► Pixel (Page 80)
► None of the given
Question No: 77
The actual filling process in boundary filling algorithm begins when a point ________ of the figure is selected.
Question No: 79
If a line connecting any two points within a polygon does not intersect any edge, then it will be a ________ polygon.
Question No: 80
________ can be defined as a mapping of point P(x, y, z) onto its image P'(x', y', z') in the view plane which constitutes the display surface.
► Mapping plane
► Three Coordinate Planes
► View plane
► Projection (Page 265)
Question No: 81
► Unknown
► Simple (Page 299)
► Complex
► Abnormal
Question No: 82
OpenGL has become the industry's most widely used and supported ________ graphics application programming interface (API), bringing thousands of applications to a wide variety of computer platforms.
2-Dimensional
3-Dimensional
2-Dimensional and 3-Dimensional (Page 301)
Question No: 84
-------- sets the global idle callback to be 'func' so a GLUT program can perform background processing tasks or continuous animation when window system events are not being received.
Question No: 85
x²/a² - y²/b² = 1 is an equation of:
Hyperbola (Page 70)
Parabola (y² = 4px)
None of given
Ellipse (x²/a² + y²/b² = 1)
Question No: 86
Trivial Reject
Trivial Accept (Page 145)
None of given
Question No: 90
Dark lights are nothing more than lights in which one or more of the color values are ________.
Unknown
Negative (Page 238)
Positive
Zero
Question No: 91
At a physical surface, our eye's perception of the color depends on the distribution of photon energies that arrive and trigger our ________ cells.
Eye
Retina
Cone (Page 401)
Question No: 93
A plane is two dimensional since in order to uniquely define any point on its surface we require ________ numbers.
Two (Page 359)
Three
Four
Question No: 96
In perspective projection, for your view to come out correctly, you will also want the ________ to pass through the middle of the screen.
X axis
Y axis
Z axis (Page 203)
None
Question No: 97
Neither floating-point nor signed integer values are clamped to the range ________ before updating the current color.
0 , -1.0
-1 , 1
1 , -1
0, 1 (Page 324)
Question No: 98
An object's ________ determine its orientation relative to the light sources. For each
vertex, OpenGL uses the assigned normal to determine how much light that
particular vertex receives from each light source.
Unit
Normal (Page 403)
None of given
Question No: 99
Which of the following affine transforms does NOT affect vectors?
Scale
Rotation
Shear
Translation
Question No: 101
This is a simple example of line clipping: the display window is the canvas and also the default --------------------, thus all line segments inside the canvas are drawn.
One problem with Gouraud shading is that the ------------ intensities can never be greater than the intensities at the edges.
________ lights should be avoided because they are not for real time environment.
Point
Parallel
Spot (Page 247)
None of the given
Question No: 106
The physical range of colors a device can display is called the ________ of the device.
Sharpness
Gamut (Page 232)
Colouring
Colouring with Sharpness
Question No: 107
When obtaining normals for a triangle, which of the following mathematical constructs is NOT used?
Vector normalization
Vector cross products
Vector dot products
Point-Point subtraction
Question No: 109
An algorithm that clips a polygon must deal with many ----------------- cases. The case is particularly noteworthy in that the concave polygon is clipped into ----------- isolated polygons.
Similar, three
Different, two (Page 146)
Different, three
Similar, two
Question No: 111
True
False (Page 60)
Question No: 112
True
False
Question No: 113
A + B = B + A
a(A + B) = aA + aB
(Aᵀ)ᵀ = Aᵀ
A + (B + C) = (A + B) + C
According to the Odd Parity Rule, a point is inside the polygon if:
Line from an outside point to this point does not cross the edges an odd number of times
Line from any point to this point crosses the edges an odd number of times
Line from an outside point to this point crosses the edges an odd number of times (Page 80)
Line from this point to any point outside the polygon intersects any edge
Question No: 115
As opposed to direct memory access method, BIOS routines provide an easier and
faster method of drawing pixels on screen.
True
False (Page 47)
Question No: 116
When a point P(x,y) is rotated by θ the coordinates of transformed point P' are
given as:
True
False (Page 27)
Question No: 118
Incremental line drawing algorithm makes use of the equation of straight line.
True
False (page 54)
Question No: 119
In matrix multiplication:
Question No: 120
In Horizontal retrace, after completion of all the pixels in a scan line, the refreshing continues from the 1st pixel of the next scan line.
True
False (Page 28)
Question No: 121
When dot product of two vectors equals zero, this implies that the two vectors are:
In Pixmap exactly one bit is used to hold color value of each pixel.
True
False (Page 28)
Question No: 124
To show 256 colors, the number of bits required for each pixel is (2⁸ = 256):
8 (Page 39)
16
32
64
Two matrices are said to be equal if they have:
same order
same corresponding elements
Same order and same corresponding elements (page 11)
Different elements.
Question No: 128
The equation of a hyperbola centered at the origin (if the transverse axis is along the x-axis) can be given as:
x²/b² + y²/a² - 1 = 0
x²/b² + y²/a² + 1 = 0
x²/a² - y²/b² - 1 = 0
x²/b² - y²/a² - 1 = 0
Question No: 132
Which one is not valid out code to perform trivial accept / reject test in line
clipping:
1101
1001 (Page 143)
0101
0110
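A sketch of computing an outcode (the bit layout below is one common convention; the handout's layout on page 143 may differ, and the layout decides exactly which four-bit patterns can never occur). Whatever the layout, a code that sets both bits of an opposite pair (left and right, or top and bottom) is impossible:

    enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };   /* assumed bit assignment */

    int outcode(double x, double y,
                double xmin, double ymin, double xmax, double ymax)
    {
        int code = 0;
        if (x < xmin)      code |= LEFT;     /* at most one of LEFT/RIGHT is set */
        else if (x > xmax) code |= RIGHT;
        if (y < ymin)      code |= BOTTOM;   /* at most one of BOTTOM/TOP is set */
        else if (y > ymax) code |= TOP;
        return code;
    }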
FastGL
OpenGL
DirectX
EasyGL (Page 47)
Question No: 134
Global coordinate systems can be defined with respect to local coordinate system
True
False (Page 258)
Question No: 138
The cross product of 2 vectors is a ________.
Magnitude
Vector (Page 117)
Scalar
Value
Question No: 139
Rotation
Translation (Page 121)
Reflection
Scaling factor
Question No: 142
If the value of scaling factors sx and sy is greater than 1, then size of objects will be ________.
Reduced
Enlarged (Page 121)
Remain same
Shear
Question No: 144
Set up a tripod and point the camera at the scene
projection transformation
viewport transformation
modeling transformation
viewing transformation (Page 375)
Question No: 147
Bernstein polynomial functions are the basic functions of ________ curves.
NURBS
Bezier (Page 342)
Both NURBS and Bezier
None of the given
Question No: 149
Which of the following does NOT figure into the Field of View of a pinhole
camera?
In class, we discussed the purpose of the front and back clipping planes in
OpenGL. Which of the following was NOT a purpose for using clipping planes?
division by zero
objects behind the center of projection mapping onto the projection plane
avoiding the problems of infinite viewing volume size
Question No: 152
true
false
Question No: 154
Ray Tracing
Radiosity
Photon Mapping
RenderMan
Question No: 155
Ray Tracing
Radiosity
Photon Mapping
RenderMan
Question No: 156
When solving for ray-polygon intersections, after intersecting the ray with a plane, the dominant component of the plane normal is found. This is used to:
ignore any component other than the dominant when you project to 2D
ignore the dominant component when you project to 2D
solve the inside-outside test only for that component
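A sketch of the projection step (the array layout and names are assumptions): the coordinate matching the dominant normal component is dropped, which keeps the projected polygon as large as possible and avoids a degenerate projection.

    #include <math.h>

    /* Project a 3D point to 2D by discarding the coordinate that corresponds
       to the largest-magnitude component of the plane normal n. */
    void project_to_2d(const double p[3], const double n[3], double out[2])
    {
        int dom = 0;                                /* index of dominant component */
        if (fabs(n[1]) > fabs(n[dom])) dom = 1;
        if (fabs(n[2]) > fabs(n[dom])) dom = 2;
        for (int i = 0, k = 0; i < 3; i++)
            if (i != dom)                           /* keep the two non-dominant axes */
                out[k++] = p[i];
    }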
Question No: 158
The majority of the execution time of a ray tracer is spent in ray-object intersection
code.
true
false
Question No: 160
In the Pixar short “Geri’s Game”, the trees in the background were created using
which of the following techniques?
Fractals
Bump mapping
Environment mapping
Catmull-Clark Subdivision Surfaces
Question No: 161
The basic functions fi(u) in Bezier curve must be symmetric with respect to u and
(u-2)
yes
no (Page 344)
Question No: 162
In the Pixar short “Geri’s Game”, Geri’s glasses seemed to bend the light as it
passed through. Which of the following techniques was used?
Fractals
Bump mapping
Environment mapping
Catmull-Clark Subdivision Surfaces
Question No: 163
2Dimensional
3Dimensional (Page 248)
Multidimensional
None
Question No: 165
_________ is based on a characteristic size or scale.
Fractal Geometry (fractal shapes are self-similar and independent of size or scaling)
Traditional Geometry
Euclidean Geometry (Euclidean shapes normally have a few characteristic sizes or length scales) (Page 361)
None of Above
Question No: 166
Arrange the scene to be photographed into the desired composition
projection transformation
viewport transformation
modeling transformation (Page 375)
viewing transformation
Question No: 167
Which language API defines graphics operations independent of the operating system or computer hardware?
DirectX
Graphic Windowing Toolkit
CGI
OpenGL (Page 305)
Question No: 168
Shadow mask methods can display a __________ range of colors.
Small
Wide (Page 28)
Random
Crazy
FINALTERM EXAMINATION
Spring 2010
CS602 - Computer Graphics
TV series are made as simply as possible from the animation point of view. This approach is generally known as ---------------------.
► Full animation
► Limited animation (Page 428)
► Low animation
► High resolution
Question No: 4 ( Marks: 1 ) - Please choose one
An eight frame run cycle that ------------------ frame/frames to each step gives a fast and vigorous dash. At this
speed the successive leg positions are quite widely separated and may need dry brush or speed lines to make the
movement flow.
► Two
► One
► Three
► Four (Page 437)
► Forward scattering
► Diffuse Lambertian
► Backscattering
► Retro (Page 293)
► Abnormal
► Complex
► Simple (Page 296)
► Unknown
Question No: 9 ( Marks: 1 ) - Please choose one
__________ sets the reshape callback for the current window. The reshape callback is triggered when a window
is reshaped.
► glutMainLoop
► glutIdleFunc
► glutReshapeFunc (Page 312)
► glutDisplayFunc
► -1.0
► 0.0
► 2.0
► 1.0 (Page 320)
Question No: 14 ( Marks: 1 ) - Please choose one
Bezier curve is the ideal standard for representing the ---------------------------- piecewise polynomial curves.
The ----------------- is the simplest example that exhibits the property of self-similarity.
► Moss
► Fern (Page 355)
► None of the given
► Thohar
► Multi-dimensional
► One-dimensional
► Two-dimensional
► Three-dimensional (Page 371)
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull Click here 4 detail
Question No: 19 ( Marks: 1 ) - Please choose one
In the Phong reflection model, there are 3 constants (a, b, c) which are used to describe the qualities of which of
the following phenomena?
► Specular
► Diffuse
► Ambient
► add all components (i.e. ambient, diffuse and specular) from that light source to the object.
► add all EXCEPT the ambient light from that light source to the object (i.e. diffuse and specular)
► add only the ambient light from that light source to the object
► add none of the light from that light source to the object
► False
► True (Page 235)
► Ending lines
► Points
► Vertices (Page 248)
► Edges
Question No: 24 ( Marks: 1 ) - Please choose one
Which of the following properties of Bezier curves guarantees that a line passes through the control polygon as
many times or more times than the line passes through the Bezier curve itself?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
► Edge
► Vertices
► Pixel (Page 80)
► None of the given
Question No: 29 ( Marks: 1 ) - Please choose one
__________ can be defined as a mapping of point P(x, y, z) onto its image P'(x', y', z') in the view plane which constitutes the display surface.
► Mapping plane
► Three Coordinate Planes
► View plane
► Projection (Page 193)
FINALTERM EXAMINATION
Spring 2010
CS602- Computer Graphics
Question No: 6 ( Marks: 1 ) - Please choose one
Rotation is performed around a fixed point called ______.
Pivot point rotation (Page 119)
At a physical surface, our eye's perception of the color depends on the distribution of photon energies that arrive and trigger our _______ cells.
Eye
Retina
Cone (Page 398)
Question No: 12 ( Marks: 1 ) - Please choose one
This projection technique has the direction of projection perpendicular to the viewing plane, but the viewing direction is NOT perpendicular to one of the principle faces.
Orthographic Parallel Projection
Axonometric Parallel Projection
Oblique Parallel Projection
Ambient
Diffuse
Specular
Emissive
X axis
Y axis
Z axis (Page 200)
None
0 , -1.0
-1 , 1
1 , -1
0, 1 (Page 321)
Question No: 17 ( Marks: 1 ) - Please choose one
An object's _______ determine its orientation relative to the light sources. For each vertex, OpenGL uses the
assigned normal to determine how much light that particular vertex receives from each light source.
Unit
Normal (Page 400)
None of given
Scale
Rotation
Shear
Translation
Final Term MCQs and Quizzes
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
We want our scene to look more realistic, we should use _________ lights.
Question # 5 of 10 ( Total Marks: 1 ) Select correct option:
_________ lights should be avoided because they are not for real time environment.
Point
Parallel
Spot (Page 244)
None of the given
Sharpness
Gamut (Page 229)
Colouring
Colouring with Sharpness
Vector normalization
Vector cross products
Vector dot products
Point-Point subtraction
Question # 10 of 10 ( Total Marks: 1 ) Select correct option:
An algorithm that clips a polygon must deal with many ----------------- cases. The case is particularly noteworthy in that the concave polygon is clipped into ----------- isolated polygons.
Similar, three
Different, two (Page 146)
Different, three
Similar, two
Phong shading
Processing
Shading
Gouraud shading (Page 245)
All axis
Z axis (Page 200)
X axis
Y axis
Question # 5 of 10 ( Total Marks: 1 ) Select correct option:
If we want any object to glow, we should use ________________ lights.
Ambient
Diffuse
Specular
Emissive (Page 240)
Y-Z plane
X-Y plane (Page 200)
Z-Y plane
X-Z plane
Yes
No (Page 331)
UP
NP (Page 333)
UN
None
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
Perspective projection is specified with the function glFrustum().
Union
Refracting
Intersection
Reflecting (Page 296)
Question # 6 of 10 ( Total Marks: 1 ) Select correct option:
The traditional approach in real-time computer graphics has been to calculate lighting at a vertex as a sum of
the ________ light.
Ambient
Ambient, diffuse, and specular (Page 281)
Specular
Diffuse, and specular
Yes
No (Page 334)
Tertiary
Binary
Single platform
Multiplatform (Page 301)
Neutral
Negative
Positive (Page 311)
None of the given
Binary polynomials
Mono polynomials
Quadratic polynomials
Cubic polynomials (Page 331)
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
Refractive index is a function of temperature, mostly due to changes in ---------------------- of materials with
changes in temperature. A simple correction can be applied in most circumstances to allow us to use a value
given at one temperature at another.
Six
Three
Two
Nine (Page 326)
Question # 6 of 10 ( Total Marks: 1 ) Select correct option:
Choose a camera lens or adjust the zoom
projection transformation (Page 372)
viewport transformation
modeling transformation
viewing transformation
Triple
Double
Single (Page 325)
None of the given
[0, 10]
[0, 1] (Page 281)
[0, 5]
[0, 2]
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
The curve is always contained within the _______ of the control points
Tangents
Convex Hull (Page 340)
Subdivision
None of Above
projection transformation
viewport transformation (Page 372)
modeling transformation
viewing transformation
Question # 6 of 10 ( Total Marks: 1 ) Select correct option:
If the current matrix (according to glMatrixMode) is multiplied by the translation matrix, with the product
replacing the current matrix. That is, if M is the current matrix and T is the translation matrix, then M is
replaced with -----------------.
M-T
M+T
M/T
M*T (Page 317)
projection transformation
viewport transformation
modeling transformation (Page 317)
viewing transformation
Opposite
Similar
Separate (Page 325)
-1 <= u <= 0
0 <= u <= 2
0 <= u <= 1 (Page 326)
-1 <= u <= 1
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
Bezier curve can represent the more complex piecewise ___________ curve.
Polynomial (Page 338)
Exponential
Cubic
None of above
State line
tangent line (Page 334)
curved line
None of the given
Three
Two
One (Page 308)
Zero
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
Bernstein polynomial functions are the basic functions of ______________ curves.
NURBS
Bezier (Page 342)
Both NURBS and Bazier
None of the given
Geometric patterns
Fractals (Page 352)
Animated components
Segments
Ambient
Diffuse
Specular
a) GL_MODELVIEW
b) GL_PROJECTION
Question # 2 of 10 ( Total Marks: 1 ) Select correct option:
In OpenGL, there are several different matrices. We have discussed two of them in class. Which one of the
below would be used in conjunction with glFrustum?
a) GL_MODELVIEW
b) GL_PROJECTION
a) All those points are inside the object defined by the implicit equation
b) All those points are on the surface of the object defined by the implicit equation Click here 4 detail
c) All those points are outside the object defined by the implicit equation
d) You can’t know anything without knowing what the implicit equation is
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
When solving ray-sphere intersections using the implicit equation for a sphere, you must solve the quadratic equation. Which of the following do you know if B²-4AC (i.e. the part under the square root) is negative?
direction
magnitude
both direction and magnitude (Page 336)
None of the given
Question # 2 of 10 ( Total Marks: 1 ) Select correct option:
OpenGL is well structured with an intuitive design and logical commands. Efficient OpenGL routines typically
result in applications with fewer lines of code than those that make up programs generated using other graphics
libraries or packages. In addition, OpenGL drivers --------------- information about the underlying hardware,
freeing the application developer from having to design for specific hardware features.
glRotated
gluPerspective (Page 318)
glTranslated
glLoadIdentity
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
Shadow mask methods can display a __________ range of colors.
Small
Wide (Page 28)
Random
Crazy
True
False
o True
o False (Page 60)
o A + B = B + A
o a(A + B) = aA + aB
o (Aᵀ)ᵀ = Aᵀ
o A + (B + C) = (A + B) + C
Question # 3 of 10 ( Total Marks: 1 ) Select correct option:
According to Odd Parity Rule, a point is inside the polygon, if:
o Line from an outside point to this point does not cross the edges odd number of times
o Line from any point to this point crosses the edges odd number of times
o Line from an outside point to this point crosses the edges odd number of times (Page 80)
o Line from this point to any point outside the polygon intersects any edge
o True
o False (Page 27)
Question # 9 of 10 ( Total Marks: 1 ) Select correct option:
In Horizontal retrace, after completion of all the pixels in a scan line, the refreshing continues from the 1st pixel
of the next scan line.
o True
o False (Page 28)
Question # 5 of 10 ( Total Marks: 1 ) Select correct option:
25 * 80 resolution with 16 colors supports
Question # 10 of 10 ( Total Marks: 1 ) Select correct option:
The equation of a hyperbola centered at the origin (if the transverse axis is along the x-axis) can be given as:
a. x²/b² + y²/a² - 1 = 0
b. x²/b² + y²/a² + 1 = 0
c. x²/a² - y²/b² - 1 = 0 Click here for detail
d. x²/b² - y²/a² - 1 = 0
a. 1101
b. 1001 (Page 143)
c. 0101
d. 0110
a. FastGL
b. OpenGL
c. DirectX
d. EasyGL (Page 42)
Question # 5 of 10 ( Total Marks: 1 ) Select correct option:
According to the architecture of raster graphics system, display processor memory will act as _________.
Video controller (Page 36)
System memory
Frame buffer
Video controller and System memory
True
False (Page 255)
Question # 1 of 10 ( Total Marks: 1 ) Select correct option:
If the values of scaling factors sx and sy are less than 1, then size of object will be ___________________.
projection transformation
viewport transformation
modeling transformation
viewing transformation (Page 372)
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
_________ is based on a characteristic size or scale.
Fractal Geometry
Traditional Geometry
Euclidean Geometry (Page 359)
None of Above
NURBS
Bezier (Page 342)
Both NURBS and Bazier
None of the given
a) division by zero
b) objects behind the center of projection mapping onto the projection plane
c) avoiding the problems of infinite viewing volume size
Question # 2 of 10 ( Total Marks: 1 ) Select correct option:
In class, we discussed how the image of the Double Eagle Tanker was obtained for the large poster in the main
hall of Sitterson. It required rendering several perspective images using OpenGL. Which of the following was
NOT a step required in that process?
a) Surface Normal
b) Direction to Viewer
c) Direction to Material Center
d) Direction to Light
a) true
b) false
a) Ray Tracing
b) Radiosity
c) Photon Mapping
d) RenderMan
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
We discussed several global illumination algorithms in class. Which of the following is generally characterized
by shiny spheres and checkerboards?
a) Ray Tracing
b) Radiosity
c) Photon Mapping
d) RenderMan
a) Ray Tracing
b) Radiosity
c) Photon Mapping
d) RenderMan
a) ignore any component other than the dominant when you project to 2D
b) ignore the dominant component when you project to 2D
c) solve the inside-outside test only for that component
Question # 2 of 10 ( Total Marks: 1 ) Select correct option:
The majority of the execution time of a ray tracer is spent in ray-object intersection code.
a) true
b) false
a) start rays
b) shadow rays
c) reflection rays
d) transmission rays
a) true
b) false
b) jittering
a) Fractals
b) Bump mapping
c) Environment mapping
d) Catmull-Clark Subdivision Surfaces
a) Fractals
b) Bump mapping
c) Environment mapping
d) Catmull-Clark Subdivision Surfaces
Question # 8 of 10 ( Total Marks: 1 ) Select correct option:
The basic functions fi(u) in Bezier curve must be symmetric with respect to u and (u-2)
yes
no (Page 341)
a) Fractals
b) Bump mapping
c) Environment mapping
d) Catmull-Clark Subdivision Surfaces
Fractal Geometry (Fractal shapes are self similar and independent of size or scaling)
Traditional Geometry
Euclidean Geometry (Euclidean shapes normally have a few characteristic sizes or length scales) (Page 359)
None of Above
Question # 4 of 10 ( Total Marks: 1 ) Select correct option:
Which language API defines graphics operations independent of the operating system or computer hardware?
Additional hardware specific libraries are used to provide an interface between API and the hardware and
between the user and the platform specific windowing system.
a. DirectX
b. Graphix Windowing Toolkit
c. CGI
d. OpenGL (Page 302)
Question # 7 of 10 ( Total Marks: 1 ) Select correct option:
Match the pictures on the right with the corresponding term on the left. The arrows in the picture denote light rays. The dashed lines represent the material type to be considered. The key is in the interaction of the light rays with the material.
Specular - (b)
Diffuse - (a)
Transparent - (d)
Translucent - (c)
Solved by: Well Wisher (Sahar) Class BSCS 6th Semester
2. OpenGL has become the industry's most widely used and supported ____________ graphics application
programming interface (API), bringing thousands of applications to a wide variety of computer
platforms.
2-Dimensional
3-Dimensional
2-Dimensional and 3-Dimensional
Ref: http://www.opengl.org/about/
3. ____________ sets the reshape callback for the current window.
glutIdleFunc
glutKeyboardFunc
glutReshapeFunc
glutDisplayFunc
Ref:
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/glu
tReshapeFunc.3.html
4. _________sets the global idle callback to be ‘func’ so a GLUT program can perform background
processing tasks or continuous animation when window system events are not being received.
glutIdleFunc
glutKeyboardFunc
glutReshapeFunc
glutDisplayFunc
Ref: http://www.cs.uccs.edu/~ssemwal/man.html
5. x^2/a^2 - y^2/b^2 = 1 is an equation of
Hyperbola Page no : 70
Parabola
None of given
Ellipse
8. Computer graphics is very helpful in producing graphical representations for scientific visualization
and analysis
True
False
10.Dark lights are nothing more than lights in which one or more of the color values are _____.
Unknown
Negative Page no : 230
Positive
Zero
11.A series of _______ computer operations convert an object's three-dimensional coordinates to pixel
positions on the screen. Transformations, which are represented by matrix multiplication, include
modeling, viewing, and projection operations. Such operations include rotation, translation, scaling,
reflecting, orthographic projection, and perspective projection.
Three Page no : 366
Two
Four
Ten
At a physical surface, our eye's perception of the color depends on the distribution of photon
energies that arrive and trigger our _______ cells.
Eye
Retina
Cone Page no : 393
12.This projection technique has the direction of projection perpendicular to the viewing plane, but the
viewing direction is NOT perpendicular to one of the principal faces.
Orthographic Parallel Projection
Axonometric Parallel Projection Page no : 189
Oblique Parallel Projection
13.The Phong reflection model simplifies light-matter interactions into (essentially) 4 vectors and a
number of constants. Which piece of the Phong model is responsible for giving spheres their bright
white spots?
Specular Page no : 234
Ambient
Diffuse
14.In the Phong Reflection model, _______ light is the same everywhere.
a. Ambient
b. Diffuse
c. Specular
d. Emissive
Ref: www.cs.unc.edu/~jwendt/classes/COMP136/quizzes/quiz3Answers.doc
15.A plane is two dimensional since in order to uniquely define any point on its surface we require _______
numbers.
Two
Three
Four
Five
16.In perspective projection, for your view to come out correctly, you will also want the _______ to pass
through the middle of the screen.
X axis
Y axis
Z axis Page no : 195
None
17.Neither floating-point nor signed integer values are clamped to the range ________ before updating the
current color.
0 , -1.0
-1 , 1
1 , -1
0, 1 Page no : 316
18.Bezier curve is the ideal standard for representing the ________ piecewise polynomial curves.
Most complex
Less complex
None of given
More complex Page no : 333
19.An object's _______ determine its orientation relative to the light sources. For each vertex, OpenGL uses
the assigned normal to determine how much light that particular vertex receives from each light
source.
Unit
Normal Page no : 395
None of given
20.Which is the oldest shading model?
a. Flat Shading
b. Phong Shading
c. Gouraud Shading
a) Scale
b) Rotation
c) Shear
d) Translation Page no : 113
22) This projection technique does NOT have the direction of projection perpendicular to the viewing
plane.
a) Orthographic Parallel Projection
b) Axonometric Parallel Projection
c) Oblique Parallel Projection Page no : 189
23) This projection technique has the direction of projection perpendicular to the viewing plane, and
the viewing direction is perpendicular to one of the principal faces.
a) Orthographic Parallel Projection Page no : 189
b) Axonometric Parallel Projection
c) Oblique Parallel Projection
24) In OpenGL, there are several different matrices. We have discussed two of them in class. Which
one of the below would be used in conjunction with a glRotatef function call?
a) GL_MODELVIEW Page no : 388
b) GL_PROJECTION
25) In OpenGL, there are several different matrices. We have discussed two of them in class. Which
one of the below would be used in conjunction with glFrustum?
a) GL_MODELVIEW
b) GL_PROJECTION Page no : 369
26) Which of the following is the order that geometry operations are performed in OpenGL (where we
read the order from left to right)?
a) GL_PROJECTION GL_MODELVIEW Perspective division
b) GL_MODELVIEW GL_PROJECTION Perspective division
c) Perspective division GL_PROJECTION GL_MODELVIEW
d) GL_MODELVIEW Perspective division GL_PROJECTION
e) GL_PROJECTION Perspective division GL_MODELVIEW
27) The Phong reflection model simplifies light-matter interactions into (essentially) 4 vectors and a
number of constants. Each piece of the Phong model uses different vectors and constants. Which
portion does NOT include taking a dot product?
a) Ambient Page no : 234
b) Diffuse
c) Specular
FINAL TERM PAPER 2010
Both Boundary Filling and Flood filling algorithms are non-recursive techniques.
► False Page no : 97
► True
TV series are made as simply as possible from the animation point of view. This approach is generally
known as ------------------------.
► Full animation
► Limited animation Page no : 423
► Low animation
► High resolution
An eight frame run cycle that ------------------ frame/frames to each step gives a fast and vigorous dash. At
this speed the successive leg positions are quite widely separated and may need dry brush or speed
lines to make the movement flow.
► Two
► One
► Three
► Four Page no :432
► Forward scattering
► Diffuse Lambertian
► Backscattering
► Retro Page no : 288
What makes this really challenging to model is that the index of refraction for most materials is a
function of the------------------- of the light. This means that not only is there a shift in the angle of
refraction, but that the shift is different for differing ---------------of light.
The reflected light wave turns out to be a ---------------------case since light is reflected at the same angle
as the incident wave (when the surface is smooth and uniform, as we'll assume for now).
► Abnormal
► Complex
► Simple Page no : 291
► Unknown
__________ sets the reshape callback for the current window. The reshape callback is triggered when a window is reshaped.
► glutMainLoop
► glutIdleFunc
► glutReshapeFunc Page no : 307
► glutDisplayFunc
Signed integer colour components, when specified, are linearly mapped to floating-point values such that the most positive representable value maps to 1.0, and the most negative representable value maps to ------------------. Floating-point values are mapped directly.
► -1.0
► 0.0
► 2.0
► 1.0 Page no : 315
Question No: 11 ( Marks: 1 ) - Please choose one
Bezier curve is numerically the ----------------------- of all the polynomial-based curves used in these
applications.
► None of the given
► Most stable
► Less stable
► Most unstable
Ref: http://books.google.com.pk/books?id=YmQy799flPkC&pg=PA264&lpg=PA264&dq=Bezier+curve+is+numerically+the+-----------------------+of+all+the+polynomial-based+curves+used+in+these+applications.&source=bl&ots=MHnr87FLlQ&sig=wG0oXJ00vxWtEY7RnfNnOXJrc08&hl=en&sa=X&ei=nTroUaDdL6eB4ASioIHgBA&ved=0CCoQ6AEwAA#v=onepage&q=Bezier%20curve%20is%20numerically%20the%20-----------------------%20of%20all%20the%20polynomial-based%20curves%20used%20in%20these%20applications.&f=false
Bezier curve is the ideal standard for representing the ---------------------------- piecewise polynomial
curves.
► None of the given
► Non complex
► Most complex
► More complex repeated
Keep polygon orientations consistent to make sure that when viewed from the outside, all the polygons
on the surface are oriented in the ____ direction.
The ---------------- is the simplest example that exhibits the property of self-similarity.
► Mosse
► Fern Page no : 350
► None of the given
► Thohar
A common mistake people make when creating three-dimensional graphics is to start thinking too soon
that the final image appears on a flat, two-dimensional screen. Avoid thinking about which pixels need
to be drawn, and instead try to visualize ----------------- space.
► Multi-dimensional
► One-dimensional
► Two-dimensional
► Three-dimensional Page no: 366
Which of the following properties of rational Bezier curves fails if the weight assigned to a control point
is negative?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull page no : 335
In the Phong reflection model, there are 3 constants (a, b, c) which are used to describe the qualities of
which of the following phenomena?
The Phong reflection model simplifies light-matter interactions into (essentially) 4 vectors and a
number of constants. Which piece of the Phong model is responsible for giving spheres their bright
white spots?
► Specular repeated
► Diffuse
► Ambient
When you hit a surface in ray tracing, generally shadow rays are tested against all objects in a scene. If
these rays come back saying they hit an object in the scene, which of the following do you do?
► add all components (i.e. ambient, diffuse and specular) from that light source to the object.
► add all EXCEPT the ambient light from that light source to the object (i.e. diffuse and specular)
► add only the ambient light from that light source to the object
► add none of the light from that light source to the object
The Color Space tool is a handy tool that we can use to interactively add two colours together to see the
effects of the various strategies for handling oversaturated colours.
► False
► Ending lines
► Points
► Vertices Page no : 243
► Edges
Which of the following properties of Bezier curves guarantees that a line passes through the control
polygon as many times or more times than the line passes through the Bezier curve itself?
► End-point interpolation
► Variation Diminishing
► Symmetry
► Convex-Hull
Ref: http://cagd.cs.byu.edu/~557/text/ch2.pdf
Parity is a concept used to determine which _____________ lie within a polygon. (Choose best suitable
answer)
► Edge
► Vertices
► Pixels Page no : 80
► None of the given
The actual filling process in boundary filling algorithm begins when a point _____________ of the figure is
selected.
► Outside the boundary
► Inside the boundary
► At boundary
► None of the given
Ref: http://groups.csail.mit.edu/graphics/classes/6.837/F98/Lecture8/Slide05.html
Question No: 27 ( Marks: 1 ) - Please choose one
Weiler-Atherton Polygon Clipping technique modifies the vertex-processing procedures for window
boundaries so that _________ polygons are displayed correctly.
► Convex
► Concave Page no : 245
► Complex
► None of the given
If a line connecting any two points within a polygon does not intersect any edge, then it will be a _________
polygon.
► Convex Page no : 78
► Concave
► Complex
► None of the given
__________ can be defined as a mapping of point P(x, y, z) onto its image P'(x', y', z') in the view plane which constitutes the display surface.
► Mapping plane
► Three Coordinate Planes
► View plane Repeated
► Projection
The reflected light wave turns out to be a / an ______________ case since light is reflected at the same
angle as the incident wave (when the surface is smooth and uniform, as we'll assume for now).
► Unknown
► Simple Page no: 291
► Complex
► Abnormal
1) In class, we discussed three forms of shading for “Utah” graphics. Which was the first to use per
vertex normals?
a) Flat Shading
b) Phong Shading
c) Gouraud Shading Page no : 240
2) Given any implicit equation, which of the following is true for all (x, y, z) that make the equation
exactly zero?
a) All those points are inside the object defined by the implicit equation
b) All those points are on the surface of the object defined by the implicit equation Page no :
205
c) All those points are outside the object defined by the implicit equation
d) You can’t know anything without knowing what the implicit equation is
3) When solving ray-sphere intersections using the implicit equation for a sphere, you must solve
the quadratic equation. Which of the following do you know if the B2-4AC (i.e. the part under the
square root) is negative?
a) The ray intersects the sphere at a negative t… discard this result
b) The ray intersects the sphere at a positive t… continue to the solution
c) The ray does not intersect the sphere… discard this result Page no : 265
d) The ray begins inside the sphere… this is a special case
4)
_________________ sets the global idle callback to be 'func' so a GLUT program can perform
background processing tasks or continuous animation when window system events are not
being received.
Select correct option:
glutIdleFunc
glutMainLoop
glutDisplayFunc
glutReshapeFunc
Ref: http://www.opengl.org/resources/libraries/glut/spec3/node63.html
5)
A space curve can be confined to a plane.
Select correct option:
True
False Page no : 326
6)
A tangent vector certainly defines the slope at one end of the curve, but a vector has
characteristics of......
Select correct option:
direction
magnitude
both direction and magnitude Page no : 331
None of the given
7)
We allow the parametric variable to take on values only in the interval ----------------.
Select correct option:
-1 <= u <= 0
0 <= u <= 2
0 <= u <= 1 Page no : 321
-1 <= u <= 1
8)
The degree of a Bezier curve is equal to n-1, where n is the number of control points
Select correct option:
NURBS
Bezier Page no : 337
Both NURBS and Bezier
None of the given
10)
A parametric curve is one whose defining equations are given in terms of a -------------, common,
independent variable called the parametric variable.
Select correct option:
Triple
Double
Single Page no : 320
None of the given
11)
Bit mask to select a window with multisampling support. If multisampling is not available, a ------
----------- window will automatically be chosen.
Select correct option:
12)
Bezier curve is tangent to the lines connecting _____________.
Select correct option:
First two points
Last two points
First two points and last two points
None of the given
13)
OpenGL is well structured with an intuitive design and logical commands. Efficient OpenGL
routines typically result in applications with fewer lines of code than those that make up
programs generated using other graphics libraries or packages. In addition, OpenGL drivers ------
--------- information about the underlying hardware, freeing the application developer from
having to design for specific hardware features.
Select correct option:
A space curve is not confined to a plane. It is free to twist through space. To define a space curve
we must use parametric functions that are ----------------------.
Select correct option:
Binary polynomials
Mono polynomials
Quadratic polynomials
Cubic polynomials Page no : 326
15)
Given end points and an intermediate point on the curve, we now have --------------------- quantities that
we can express in terms of these coefficients (3 points x 3 coordinates each), and we can use
these three points to define a unique curve.
Select correct option:
Six
Three
Two
Nine Page no : 321
16)
To convert the information in the A matrix into that required for the P matrix, we do some simple matrix algebra. First we have UA = UNP, then simply A = -------------
Select correct option:
UP
NP
UN
None of the given
17)
If we assign a different value to the parametric variable for the intermediate point, then we
obtain different values for the coefficients. This, in turn, means that a different curve is
produced, although it passes through the -------------- three points.
Select correct option:
Isolate
Different
Same Page no : 323
None of the given
16)
In order to get a more realistic representation of lighting, we'll need to understand how light
passes through a medium and how hitting the boundary layer at the ----------------- of two media
can affect light's properties.
Select correct option:
17)
To ensure a smooth transition from one section of a piecewise __________ to the next, we can
impose various continuity conditions at the connection points
Select correct option:
non parametric curve
parametric curve
polygon vector
None of these
Ref : www.mrl.snu.ac.kr/courses/CourseGraphics/Splines.ppt
18)
Bezier curve can represent the more complex piecewise ___________ curve.
Select correct option:
Polynomial Page no : 33
Exponential
Cubic
None of above
19)
Curve and surface equations can be expressed in either a parametric or a non parametric form.
Select correct option:
True
False Page no : 333
20)
Using a lighting model based upon the Blinn Phong model means that we'll always get a uniform
specular highlight based upon the colour of the --------------- light and material, which means that
all reflections based on this model, will be reminiscent of plastic.
Select correct option:
Union
Refracting Page no : 291
Intersection
Reflecting
21)
If the current matrix (according to glMatrixMode) is multiplied by the translation matrix, with
the product replacing the current matrix. That is, if M is the current matrix and T is the
translation matrix, then M is replaced with -----------------.
Select correct option:
M-T
M+T
M/T
M*T
22)
With similar expressions for y(u) and z(u). Again the a, b, c and d terms are constant coefficients. As we did with the equation for a plane curve, we combine the x(u), y(u), and z(u) expressions into a single vector equation P(u) = ---------------------------------------.
Select correct option:
au^2 + bu^1 + cu + d
au^4 + bu^3 + cu^2 + d^1
au^3 + bu^2 + cu^2 + d
au^3 + bu^2 + cu + d Page no : 326
23)
The matrix generated by gluPerspective is multiplied by the current matrix, just as if
glMultMatrix were called with the generated matrix. To load the perspective matrix onto the
current matrix stack instead, precede the call to gluPerspective with a call to -----------------------.
Select correct option:
glRotated
gluPerspective
glTranslated
glLoadIdentity Page no : 313
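A minimal sketch of the call order this answer implies (the numeric arguments are arbitrary example values):

glMatrixMode( GL_PROJECTION );
glLoadIdentity();                               // reset the current matrix first
gluPerspective( 60.0, 4.0 / 3.0, 1.0, 100.0 );  // fovy, aspect, zNear, zFar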
24)
Each number that makes up a matrix is called an __________ of the matrix.
Element Page no : 101
Variable
Value
Component
25)
Which one of the following steps is not involved in writing a pixel using video BIOS services?
Setting desired video mode
Using BIOS service to set color of a screen pixel
Calling BIOS interrupt to execute the process of writing pixel.
Using OpenGL service to set color of a screen pixel
26)
Shadow mask methods can display a __________ range of colors.
Small
Wide Page no : 29
Random
Crazy
27)
Using Cohen-Sutherland line clipping, it is impossible for a vertex to be labeled 1111.
True
False
28)
Intensity of the electron beam is controlled by setting __________ levels on the control grid, a metal
cylinder that fits over the cathode.
Amplitude
Current
Voltage Page no : 26
Electron
18-3D Transformations II
Rotation
Rotation is the process of moving a point in space in a non-linear manner. More particularly, it involves moving the point from one position on a sphere whose center is at the origin to another position on the sphere. Why would you want to do something like this? As we will show in a later section, allowing the point of view to move around is only an illusion – projection requires that the POV be at the origin. When the user thinks the POV is moving, you are actually translating all your points in the opposite direction; and when the user thinks the POV is looking down a new vector, you are actually rotating all the points in the opposite direction.
Normalization: Note that this process of moving your points so that your POV is at the
origin looking down the +Z axis is called normalization.
You need to know three different angles: how far to rotate around the X axis (YZ rotation, or "pitch"); how far to rotate around the Y axis (XZ rotation, or "yaw"); and how far to rotate around the Z axis (XY rotation, or "roll"). Conceptually, you do the three
rotations separately. First, you rotate around one axis, followed by another, then the last.
The order of rotations is important when you cascade rotations; we will rotate first around
the Z axis, then around the X axis, and finally around the Y axis.
To show how the rotation formulas are derived, let’s rotate the point <x,y,z> around the Z
axis through an angle of θ degrees.
ROLL:-
If you look closely, you should note that when we rotate around the Z axis, the Z element
of the point does not change. In fact, we can just ignore the Z – we already know what it
will be after the rotation. If we ignore the Z element, then we have the same case as if we
were rotating the two-dimensional point <x,y> through the angle θ.
This is the way to rotate a 2-D point. For simplicity, consider the pivot at origin and rotate
point P (x,y) where x = r cosФ and y = r sinФ
If rotated by θ then:
x' = r cos(Ф + θ) = r cosФ cosθ – r sinФ sinθ
y' = r sin(Ф + θ) = r cosФ sinθ + r sinФ cosθ
Substituting x = r cosФ and y = r sinФ gives the familiar 2-D rotation equations:
x' = x cosθ – y sinθ
y' = x sinθ + y cosθ
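Here is a minimal sketch of these equations in code (point2 is a hypothetical helper struct, not part of the lecture's own code):

#include <cmath>

struct point2 { float x, y; };

// Rotate a 2-D point about the origin by theta radians.
point2 Rotate2D( const point2& p, float theta )
{
    float c = std::cos( theta );
    float s = std::sin( theta );
    // x' = x cos(theta) - y sin(theta), y' = x sin(theta) + y cos(theta)
    return point2{ p.x * c - p.y * s, p.x * s + p.y * c };
}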
In general we work with “square” matrices. This means that the number of vectors in the
matrix is the same as the number of elements in the vectors that comprise it.
Mathematically, we show a matrix as a 2-D array of numbers surrounded by vertical
lines. For example:
|x1 y1 z1|
|x2 y2 z2|
|x3 y3 z3|
we designate this as a 3*3 matrix ( the first 3 is the number of rows, and the second 3 is
the number of columns).
The “rows” of the matrix are the horizontal vectors that make it up; in this case, <x1,
y1,z1>, <x2,y2,z2>, and <x3,y3,z3>. In mathematics, we call the vertical vectors
“columns.” In this case they are < x1,x2,x3>, <y1,y2,y3> and <z1,z2,z3>.
The most important thing we do with a matrix is to multiply it by a vector or another matrix. We follow one simple rule when multiplying something by a matrix: multiply each column by the multiplicand and store this as an element in the result. Now, as I said earlier, you can consider each column to be a vector, so when we multiply by a matrix, we are just doing a bunch of vector multiplies. So which vector multiply do you use - the dot product, or the cross product? You use the dot product.
We also follow one simple rule when multiplying a matrix by something: multiply each row by the multiplier. Again, rows are just vectors, and the type of multiplication is the dot product.
Let's look at some examples. First, let's assume that I have a matrix M, and I want to multiply it by a point <x,y,z>. The first thing I know is that the vector rows of the matrix must contain three elements (in other words, three columns). Why? Because I have to multiply those rows by my point using a dot product, and to do that, the two vectors must have the same number of elements. Since I am going to get a dot product for each row in M, I will end up with a tuple that has one element for each row in M. As I stated earlier, we work almost exclusively with square matrices; since I must have three columns, M will also have three rows. Let's see:
          | 1 0 0 |
<x,y,z> * | 0 1 0 | = { <x,y,z>*<1,0,0>, <x,y,z>*<0,1,0>, <x,y,z>*<0,0,1> } = { x, y, z }
          | 0 0 1 |
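As a sketch, the same row-vector-times-matrix rule in code (plain float arrays so it stays self-contained; these are not the lecture's own classes):

// Multiply a row vector v by a 3x3 matrix m: each element of the result
// is the dot product of v with one column of m.
void MulVecMat3( const float v[3], const float m[3][3], float out[3] )
{
    for ( int col = 0; col < 3; col++ )
    {
        out[col] = v[0] * m[0][col] + v[1] * m[1][col] + v[2] * m[2][col];
    }
}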
Example:
To show this happening, let's manually rotate the point <2,0,0> 45 degrees clockwise
about the z axis.
Now you can take an object and apply a sequence of transformations to it to make it do
whatever you want. All you need to do is figure out the sequence of transformations
needed and then apply the sequence to each of the points in the model.
As an example, let's say you want to rotate an object sitting at a certain point p around its
z axis. You would perform the following sequence of transformations to achieve this:
The first transformation moves a point such that it is situated about the world origin
instead of being situated about the point p. The next one rotates it (remember, you can
only rotate about the origin, not arbitrary points in space). Finally, after the point is
rotated, you want to move it back so that it is situated about p. The final translation
accomplishes this.
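A minimal sketch of that translate-rotate-translate sequence, written with explicit arithmetic instead of matrices (point3 is a hypothetical helper struct):

#include <cmath>

struct point3 { float x, y, z; };

// Rotate 'pt' about the z axis through the pivot point 'p' by theta radians.
point3 RotateAboutPoint( point3 pt, const point3& p, float theta )
{
    pt.x -= p.x; pt.y -= p.y; pt.z -= p.z;   // 1. move the pivot to the origin
    float c = std::cos( theta );
    float s = std::sin( theta );
    float rx = pt.x * c - pt.y * s;          // 2. rotate about the z axis (roll)
    float ry = pt.x * s + pt.y * c;
    pt.x = rx; pt.y = ry;
    pt.x += p.x; pt.y += p.y; pt.z += p.z;   // 3. move back
    return pt;
}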
The first step in such case would be to translate the object such that the arbitrary axis
coincides with the x-axis.
The next step would be to rotate the object w.r.t. the x-axis through angle θ.
Then the object is translated such that the arbitrary axis gets back to its original position.
Now, if the arbitrary axis is not parallel to any of the coordinate axes, then the problem is
slightly more difficult. It only adds to the number of steps required to get the job done.
Let P1, P2 define the arbitrary axis.
In the first step, the translation takes place that coincides the point P1 to the origin. Points
after this step are P1’ and P2’.
Now the arbitrary axis is rotated such that the point P2’ rotates to become P2’’ and lies on
the z-axis.
Now the object of interest is rotated about the origin such that the arbitrary axis is positioned as in the above figure. Point P2’’ gets back to its previous position P2’.
Finally the translation takes place to position the arbitrary axis back to its original
position.
Scaling
Coordinate transformations for scaling relative to the origin are
X` = X . Sx
Y` = Y. Sy
Z` = Z. Sz
Scaling an object with a transformation changes the size of the object and repositions the object relative to the coordinate origin. If the transformation parameters are not all equal, relative dimensions in the object are changed.
Uniform Scaling : We preserve the original shape of an object with a uniform scaling (
Sx = Sy = Sz)
| Sx 0  0  0 |
| 0  Sy 0  0 |
| 0  0  Sz 0 |
| 0  0  0  1 |
Scaling with respect to a selected fixed position (Xf, Yf, Zf) can be represented with the following transformation sequence:

| 1 0 0 Xf |   | Sx 0  0  0 |   | 1 0 0 -Xf |   | Sx 0  0  (1-Sx)Xf |
| 0 1 0 Yf | . | 0  Sy 0  0 | . | 0 1 0 -Yf | = | 0  Sy 0  (1-Sy)Yf |
| 0 0 1 Zf |   | 0  0  Sz 0 |   | 0 0 1 -Zf |   | 0  0  Sz (1-Sz)Zf |
| 0 0 0 1  |   | 0  0  0  1 |   | 0 0 0  1  |   | 0  0  0  1        |
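The same fixed-point scaling as a sketch in code (point3 is a hypothetical helper struct); note that x' = Sx·x + (1 - Sx)·Xf is just Xf + Sx·(x - Xf):

struct point3 { float x, y, z; };

// Scale point p about the fixed position f by factors sx, sy, sz.
point3 ScaleAboutFixedPoint( const point3& p, const point3& f,
                             float sx, float sy, float sz )
{
    return point3{ f.x + sx * ( p.x - f.x ),
                   f.y + sy * ( p.y - f.y ),
                   f.z + sz * ( p.z - f.z ) };
}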
Reflection
A three-dimensional reflection can be performed relative to a selected reflection axis or
with respect to a selected reflection plane. In general, three-dimensional reflection
matrices are set up similarly to those for two dimensions. Reflections relative to a given
axis are equivalent to 180 degree rotations.
The matrix representation for this reflection of points relative to the X axis
| 1  0  0  0 |
| 0 -1  0  0 |
| 0  0 -1  0 |
| 0  0  0  1 |
The matrix representation for this reflection of points relative to the Y axis
| -1 0  0  0 |
| 0  1  0  0 |
| 0  0 -1  0 |
| 0  0  0  1 |
The matrix representation for this reflection of points relative to the xy plane is
| 1 0 0  0 |
| 0 1 0  0 |
| 0 0 -1 0 |
| 0 0 0  1 |
Shears
Shearing transformations can be used to modify object shapes.
As an example of three-dimensional shearing, the following transformation produces a
z-axis shear:
| 1 0 a 0 |
| 0 1 b 0 |
| 0 0 1 0 |
| 0 0 0 1 |
Parameters a and b can be assigned any real values. The effect of this transformation
matrix is to alter x and y- coordinate values by an amount that is proportional to the z
value, while leaving the z coordinate unchanged.
19-Projections
For centuries, artists, engineers, designers, drafters, and architects have been facing difficulties and constraints imposed by the problem of representing a three-dimensional object or scene in a two-dimensional medium -- the problem of projection. The implementers of a computer graphics system face the same challenge.
Projection can be defined as a mapping of point P(x,y,z) onto its image P`(x`,y`,z`) in the
projection plane or view plane, which constitutes the display surface. The mapping is
determined by a projection line called the projector that passes through P and intersects
the view plane.
Parallel Projection
Perspective Projection
These methods are used to solve the basic problems of pictorial representations
Parallel Projection
Parallel projection methods are used by drafters and engineers to create working drawings
of an object which preserves its scale and shape. The complete representation of these
details often requires two or more views (projections) of the object onto different view
planes.
In parallel projection, image points are found as the intersection of the view plane with a
projector drawn from the object point and having a fixed direction. The direction of
projection is the prescribed direction for all projections. Orthographic projections are
characterized by the fact that the direction of projection is perpendicular to the view
plane. When the direction of projection is parallel to any of the principal axes, this
produces the front, top, and side views of mechanical drawings (also referred to as multi
view drawings).
Axonometric projections are orthographic projections in which the direction of projection
is not parallel to any of the three principal axes. Non-orthographic parallel projections are called oblique parallel projections.
Look at the parallel projection of a point (x, y, z). (Note the left handed coordinate
system). The projection plane is at z = 0. x, y are the orthographic projection values and
xp, yp are the oblique projection values (at angle a with the projection plane)
1) Isometric
The projection plane intersects the x, y, z axes at equal distances and the projection plane normal makes an equal angle with the three axes.
To form an orthographic projection xp = x, yp = y, zp = 0. To form different types, e.g. Isometric, just manipulate the object with 3D transformations.
2) Dimetric
The direction of projection makes equal angles with exactly two of the principal axes
3) Trimetric
The direction of projection makes unequal angles with the three principal axes
Oblique Projection
If the direction of projection is not perpendicular to the projection plane then it is an
oblique projection.
The projectors are not perpendicular to the projection plane but are parallel from the
object to the projection plane.
Transformation equations for an orthographic parallel projection are straightforward. If
the view plane is placed at position Zvp along the Z axis, then any point (x,y,z) in
viewing coordinates is transformed to projection coordinates as:
Xp = x
Yp = y
Where the original Z-coordinate value is preserved for the depth information needed in
depth cueing and visible-surface determination procedures.
An oblique projection is obtained by projecting points along parallel lines that are
not perpendicular to the projection plane. In some applications packages, an oblique
projection vector is specified with two angles, alpha and phi, as shown in the figure. Point
(x,y,z) is projected to position(Xp,Yp) on the view plane. Orthographic projection
coordinates on the plane are (x,y). The oblique projection line from (x,y,z) to (Xp,Yp)
makes an angle alpha with the line on the projection plane that joins (Xp,Yp) and (x, y).
This line, of length L, is at an angle phi with the horizontal direction in the projection
plane. We can express the projection coordinates in terms of x, y, L, and phi as
cos(phi) = (Xp – x) / L
sin(phi) = (Yp – y) / L
Xp = x + L cos(phi)
Yp = y + L sin(phi)
Length L depends on the angle alpha and the z coordinate of the point to be projected:
tan (alpha) = z / L
Thus,
L = z * 1/ tan (alpha)
L = z * L1
Where L1 is the inverse of tan(alpha), which is also the value of L when z = 1, we can
then write the oblique projection equations.
Xp = x + z (L1 cos(phi) )
Yp = y + z (L1 sin(phi) )
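A sketch of these oblique projection equations in code (angles in radians; the function name is mine, not from the notes):

#include <cmath>

void ObliqueProject( float x, float y, float z, float alpha, float phi,
                     float& xp, float& yp )
{
    float L1 = 1.0f / std::tan( alpha );      // the value of L when z = 1
    xp = x + z * ( L1 * std::cos( phi ) );
    yp = y + z * ( L1 * std::sin( phi ) );
}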
The transformation matrix for producing any parallel projection onto the xy plane can be
written as
1) Cavalier
tan (Alpha) = 1, Alpha = 45°, L1 = 1
Lines which are perpendicular to the projection plane are projected at full length. This is a Cavalier projection.
2) Cabinet
tan (Alpha) = 2, Alpha = 63.40°, L1 = 1 / 2
Lines which are perpendicular to the projection plane are projected at 1 / 2 length. This is a Cabinet projection.
20-Perspective Projections
Now that you have a structure that can store a three dimensional point (Point3D), how do
you calculate the corresponding screen pixel? First, let’s look at what you are modeling.
Following figure shows how it would look.
2. The second approach is where the POV lies at the origin, and the screen lies on a plane
at some +z coordinate, as shown in figure given below:
As we will see later, this second approach is much more convenient when we add features
making it possible for the POV to move around the 3D world or for objects to move
around in the world.
Calculating the screen pixel that correlates to a 3D point is now a matter of simple
geometry. From a viewpoint above the screen and POV (looking at the X-Z plane), the
geometry appears like the one shown in figure below:
In geometric terms, we say that the triangle from A to B to S is similar to the triangle
from A to C to P because the three angles that make up the triangles are the same: the
angle from AB to AS is the same as the angle from AC to AP, the two right angles are
both 90 degrees, and therefore the remaining two angles are the same ( the sum of the
angles in a triangle is always 180 degrees). What also holds true from similar triangles is
that the ratio of two sides holds between the similar triangles; this means that the ratio of
BS to AB is the same as the ratio of CP to AC. But we know what AB is-it is Screen.z!
and we know what AC is-it is point.z! and we know what CP is-it is point.x! Therefore:
|BS| / |AB| = |CP| / |AC|
|BS| = |AB| * |CP| / |AC|
|BS| = Screen.z * point.x / point.z
Screen.z is the distance d from the point of view at origin or the scaling factor.
Notice that |BS| is the length of the line segment that goes from B to S in world units. But
we normally address the screen with the point (0,0) at the top left, with +X pixels moving
to the right, and +Y pixels moving down—and not from the middle of the screen. And we
draw to the screen in pixel units – not our world units (unless, of course, 1.0 in your
world represents one pixel).
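A sketch of the perspective divide just derived, including the shift into pixel coordinates described here (Point3D as in the notes; xCenter and yCenter are assumed to be half the screen width and height):

struct Point3D { float x, y, z; };

// Project a 3-D point onto a screen plane at distance d from the POV.
void ProjectToScreen( const Point3D& p, float d, float xCenter, float yCenter,
                      float& sx, float& sy )
{
    sx = xCenter + d * p.x / p.z;   // similar triangles: d * x / z
    sy = yCenter - d * p.y / p.z;   // minus because +Y pixels run down the screen
}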
There is a final transformation that the points must go through in the transformation
process. This transformation maps 3D points defined with respect to the view origin (in
view space) and turns them into 2D points that can be drawn on the display. After
transforming and clipping the polygons that make up the scene such that they are visible
on the screen, the final step is to move them into 2D coordinates, since in order to
actually draw things on the screen you need to have absolute x, y coordinates on the
screen to draw.
The way this used to be done was without matrices, just as an explicit projection
calculation. The point (x,y,z) would be mapped to (x', y') using the following equations
Here xCenter and yCenter were half of the width and height of the screen, respectively. These days more complex equations are used, especially since there is now the need to make provisions for z-buffering. While you want x and y to still behave the same way, you don't want to use a value as arbitrary as scale.
Instead, a better value to use in the calculation of the projection matrix is the horizontal
field of view (fov). The horizontal fov will be hard coded, and the code chooses a vertical
field of view that will keep the aspect ratio of the screen. This makes sense: You couldn't
get away with using the same field of view for both horizontal and vertical directions
unless the screen was square; it would end up looking vertically squashed.
Finally, you also want to scale the z values appropriately. We'll cover z-buffering later, but for right now just make note of an important feature: it lets you clip out certain values of z-range. Given the two variables znear and zfar, nothing in front of znear will be drawn, nor will anything behind zfar. To make the z-buffer work swimmingly on all ranges of znear and zfar, you need to scale the valid z values to the range of 0.0 to 1.0.
Just for a sanity check, check out the result of this matrix multiplication:
This is almost the result wanted, but there is more work to be done. Remember that in
order to extract the Cartesian (x,y,z) coordinates from the vector, the homogenous w
component must be 1.0. Since, after the multiplication, it's set to z (which can be any
value), all four components need to be divided by z to normalize it (and have the
homogeneity factor equal 1). This gives the following Cartesian coordinate:
As you can see, this is exactly what was wanted. The width and height are still scaled by
values as in the above equation and they are still divided by z. The visible x and y pixels
are mapped to [−1,1], so before rasterization the application multiplies and adds the number
by xCenter or yCenter. This, in essence, maps the coordinates from [−1,1] to [0,width]
and [0,height].
With this last piece of the puzzle, it is now possible to create the entire transformation
pipeline. When you want to render a scene, you set up a world matrix (to transform an
object's local coordinate points into world space), a view matrix (to transform world
coordinate points into a space relative to the viewer), and a projection matrix (to take
those viewer-relative points and project them onto a 2D surface so that they can be drawn
on the screen). You then multiply the world, view, and projection matrices together (in
that order) to get a total matrix that transforms points from object space to screen space.
To draw a triangle, for example, you would take its local space points defining its three
corners and multiply them by the transformation matrix. Then you have to remember to
divide through by the w component. The points are now in screen space and can be filled
in using a 2D raster algorithm. Drawing multiple objects is a snap, too. For each object in
the scene all you need to do is change the world matrix and reconstruct the total
transformation matrix.
The Perspective Projection Matrix Used by Microsoft Direct3D
The projection matrix is typically a scale and perspective projection. The projection
transformation converts the viewing frustum into a cuboid shape. Because the near end of
the viewing frustum is smaller than the far end, this has the effect of expanding objects
that are near to the camera; this is how perspective is applied to the scene.
The Viewing Frustum
A viewing frustum is a 3-D volume in a scene, positioned relative to the viewport's camera.
The shape of the volume affects how models are projected from camera space onto the
screen. The most common type of projection, a perspective projection, is responsible for
making objects near the camera appear bigger than objects in the distance. For
perspective viewing, the viewing frustum can be visualized as a pyramid, with the camera
positioned at the tip. This pyramid is intersected by a front and back clipping plane. The
volume within the pyramid between the front and back clipping planes is the viewing
frustum. Objects are visible only when they are in this volume.
If you imagine that you are standing in a dark room and looking through a square
window, you are visualizing a viewing frustum. In this analogy, the near clipping plane is
the window, and the back clipping plane is whatever finally interrupts your view—the
skyscraper across the street, the mountains in the distance, or nothing at all. You can see
everything inside the truncated pyramid that starts at the window and ends with whatever
interrupts your view, and you can see nothing else.
The viewing frustum is defined by fov (field of view) and by the distances of the front and
back clipping planes, specified in z-coordinates.
In this illustration, the variable D is the distance from the camera to the origin of the
space that was defined in the last part of the geometry pipeline—the viewing
transformation. This is the space around which you arrange the limits of your viewing
frustum. For information about how this D variable is used to build the projection matrix, see The Matrix section below.
The Matrix
In the viewing frustum, the distance between the camera and the origin of the viewing
transformation space is defined arbitrarily as D, so the projection matrix looks like:
The viewing matrix translates the camera to the origin by translating in the z direction by
- D. The translation matrix is as follows:
Multiplying the translation matrix by the projection matrix (T*P) gives the composite
projection matrix. It looks like:
The following illustration shows how the perspective transformation converts a viewing
frustum into a new coordinate space. Notice that the frustum becomes a cuboid and also
that the origin moves from the upper-right corner of the scene to the center.
In the perspective transformation, the limits of the x- and y-directions are -1 and 1. The
limits of the z-direction are 0 for the front plane and 1 for the back plane.
This matrix translates and scales objects based on a specified distance from the camera to
the near clipping plane, but it doesn't consider the field of view (fov), and the z-values that
it produces for objects in the distance can be nearly identical, making depth comparisons
difficult. The following matrix addresses these issues, and it adjusts vertices to account
for the aspect ratio of the viewport, making it a good choice for the perspective
projection.
In this matrix, Zn is the z-value of the near clipping plane. The variables w, h, and Q have the following meanings. Note that fovw and fovh represent the viewport's horizontal and vertical fields of view, in radians.
For your application, using field-of-view angles to define the x- and y-scaling coefficients
might not be as convenient as using the viewport's horizontal and vertical dimensions (in
camera space). As the math works out, the following two formulas for w and h use the
viewport's dimensions, and are equivalent to the preceding formulas.
In these formulas, Zn represents the position of the near clipping plane, and the Vw and Vh variables represent the width and height of the viewport, in camera space.
21-Triangles and Planes
It is impossible to see triangles that face away from you. (You can find this out by
computing the triangle's plane normal and performing a dot product with a vector from
the camera location to a location on the plane.)
Now let's move on to the code. To help facilitate using multiple types, I'll implement the triangle structure as a template. I only define constructors and keep the access public.
template <class type>
struct tri
{
    type v[3]; // the three vertices

    tri()
    {
        // nothing
    }

    tri( type v0, type v1, type v2 )
    {
        v[0] = v0;
        v[1] = v1;
        v[2] = v2;
    }
};
Strips and Fans
Lists of triangles are generally represented in one of three ways. The first is an explicit
list or array of triangles, where every three elements represent a new triangle. However,
there are two additional representations, designed to save bandwidth while sending
triangles to dedicated hardware to draw them. They are called triangle strips and triangle
fans.
Triangle fans, conceptually, look like the folding fans you see in Asian souvenir shops.
They are a list of triangles that all share a common point. The first three elements indicate
the first triangle. Then each new element is combined with the first element and the
current last element to form a new triangle. Note that an N-sided polygon can be
represented efficiently using a triangle fan
Figure below illustrates what I'm talking about.
Planes
The next primitive to discuss is the plane. Planes are to 3D what lines are in 2D; they're
n–1 dimensional hyperplanes that can help you accomplish various tasks. Planes are
defined as infinitely large, infinitely thin slices of space, like big pieces of paper.
Triangles that make up your model each exist in their own plane. When you have a plane
that represents a slice of 3D space, you can perform operations like classification of
points and polygons and clipping.
So how do you represent planes? Well it is best to build a structure from the equation that defines a plane in 3D. The implicit equation for a plane is:
ax + by + cz + d = 0
What do these numbers represent? The triplet <a,b,c> represents what is called the
normal of the plane. A normal is a unit vector that, conceptually speaking, sticks directly
out of a plane. A stronger mathematical definition would be that the normal is a vector
that is perpendicular to all of the points that lie in the plane.
The d component in the equation represents the distance from the plane to the origin. The
distance is computed by tracing a line towards the plane until you hit it. Finally the triplet
<x,y,z> is any point that satisfies the equation. The set of all points <x,y,z> that solve the
equation is exactly all the points that lie in the plane.
All of the pictures I'm showing you will be of the top-down variety, and the 3D planes
will be on edge, appearing as 2D lines. This makes figure drawing much easier.
Following are two examples of planes. The first has the normal pointing away from the
origin, which causes d to be negative (try some sample values for yourself if this doesn't
make sense). The second has the normal pointing towards the origin, so d is positive. Of
course, if the plane goes through the origin, d is zero (the distance from the plane to the
origin is zero). Figures 1 and Figure 2 provide some insight into this relation.
Figure 1: d is negative when the normal faces away from the origin
Figure 2: d is positive when the normal faces towards the origin
Given three points that lie in the plane, you can take the cross product of two edge vectors and find a normal for the plane. After generating the normal and making it unit length, finding the d value for the plane is just a matter of storing the negative dot product of the normal with any of the points. This holds because it essentially solves the plane equation above for d. Of course plugging a point in the plane equation will make it equal 0, and this constructor has three of them. Following is the code to construct a plane from three points.
To calculate a plane from 3 given points we first calculate the normal. If we imagine the 3
points form three edges in the plane then we can take two of the edges and calculate the
cross-product between them. The resulting directional vector will be the normal, and then
we can plug any of the 3 known points into the plane equation to solve for k. For points
p1,p2 and p3 we get:
normal = (p1-p2) x (p3-p2)
k = normal * p1
Note that it is extremely important to keep track of which direction your points are stored
in. Let's take 3 points stored in clockwise direction in the x/y plane:
Normal vector = n
n = cross product ( (b-a),(c-a) )
Normalize(n)   // make n a unit vector
d = - dot product(n,a)
If you already have a normal and also have a point on the plane, the first step can be
skipped.
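Here is a self-contained sketch of that construction (point3 and the vector helpers are minimal stand-ins, not the lecture's actual classes):

#include <cmath>

struct point3 { float x, y, z; };

point3 Cross( const point3& u, const point3& v )
{
    return point3{ u.y * v.z - u.z * v.y,
                   u.z * v.x - u.x * v.z,
                   u.x * v.y - u.y * v.x };
}

float Dot( const point3& u, const point3& v )
{
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// Build the plane n·p + d = 0 through points a, b, c.
void PlaneFromPoints( const point3& a, const point3& b, const point3& c,
                      point3& n, float& d )
{
    point3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    point3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    n = Cross( e1, e2 );
    float len = std::sqrt( Dot( n, n ) );
    n.x /= len; n.y /= len; n.z /= len;   // make the normal unit length
    d = -Dot( n, a );                     // solves the plane equation for d
}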
This brings up an important point. If you have an n-sided polygon, nothing discussed up
to this point is forcing all of the points to be coplanar. However, problems can crop up if
some of the points in the polygon aren't coplanar. For example, when I discuss back-face
culling in a moment, you may misidentify what is actually behind the polygon, since there
won't be a plane that clearly defines what is in front of and what is behind the plane. That
is one of the advantages of using triangles to represent geometry—three points define a
plane exactly.
Defining Locality with Relation to a Plane
One of the most important operations planes let you perform is defining the location of a
point with respect to a plane. If you drop a point into the equation, it can be classified into
three cases: in front of the plane, in back of the plane, or coplanar with the plane. Front is
defined as the side of the plane the normal sticks out of.
Here, once again, precision will rear its ugly head. Instead of doing things the theoretical
way, having the planes infinitely thin, I'm going to give them a certain thickness of (you
guessed it) epsilon.
How do you orient a point in relation to a plane? Well, simply plug x, y, and z into the
equation, and see what you get on the right side. If you get zero (or a number close
enough to zero by plus or minus epsilon), then the point satisfied the equation and lies on
the plane. Points like this can be called coplanar. If the number is greater than zero, then
you know that you would have to travel farther from the origin, following the path of the normal, than you would need to go to reach the plane, so the point must be in front of the
plane. If the number is negative, it must be behind the plane. Note that the first three
terms of the equation simplify to the dot product of the input vector and the plane normal.
Figure below has a visual representation of this operation.
A polygon, in turn, can lie entirely in front of the plane, entirely in back, coplanar, or partially in front and partially in back. I'll refer to this last state as splitting the plane. It's just a term; the element isn't actually splitting anything.
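A sketch of the point test with the epsilon thickness (minimal stand-in types again):

struct point3 { float x, y, z; };

float Dot( const point3& u, const point3& v )
{
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// Returns +1 in front of the plane, -1 in back, 0 coplanar (within eps).
int ClassifyPoint( const point3& p, const point3& n, float d, float eps )
{
    float side = Dot( n, p ) + d;   // the sign of n·p + d locates the point
    if ( side >  eps ) return  1;
    if ( side < -eps ) return -1;
    return 0;
}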
Back-face Culling
Now that you know how to define a point with respect to a plane, you can perform back-
face culling, one of the most fundamental optimization techniques of 3D graphics.
Let's suppose you have a triangle whose elements are ordered in such a fashion that when
viewing the triangle from the front, the elements appear in clockwise order. Back-face
culling allows you to take triangles defined with this method and use the plane equation
to discard triangles that are facing away. Conceptually, any closed mesh, a cube for
example, will have some triangles facing you and some facing away. You know for a fact
that you'll never be able to see a polygon that faces away from you; they are always
hidden by triangles facing towards you. This, of course, doesn't hold if you're allowed to
view the cube from its inside, but this shouldn't be allowed to happen if you want to really
optimize your engine.
Rather than perform the work necessary to draw all of the triangles on the screen, you can
use the plane equation to find out if a triangle is facing towards the camera, and discard it
if it is not. How is this achieved? Given the three points of the triangle, you can define a
plane that the triangle sits in. Since you know the elements of the triangle are listed in
clockwise order, you also know that if you pass the elements in order to the plane
constructor, the normal to the plane will be on the front side of the triangle. If you then
think of the location of the camera as a point, all you need to do is perform a point-plane
test. If the point of the camera is in front of the plane, then the triangle is visible and
should be drawn.
There's an optimization to be had. Since you know three points that lie in the plane (the
three points of the triangle) you only need to hold onto the normal of the plane, not the
entire plane equation. To perform the back-face cull, just subtract one of the triangle's
points from the camera location and perform a dot product with the resultant vector and
the normal. If the result of the dot product is greater than zero, then the view point was in
front of the triangle. Figure below can help explain the point.
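A sketch of that optimized test (again with stand-in types; 'vertex' is any one of the triangle's points and 'normal' is the triangle's plane normal):

struct point3 { float x, y, z; };

float Dot( const point3& u, const point3& v )
{
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// True when the camera is on the front side of the triangle's plane.
bool IsFrontFacing( const point3& vertex, const point3& normal,
                    const point3& camera )
{
    point3 toCam{ camera.x - vertex.x, camera.y - vertex.y, camera.z - vertex.z };
    return Dot( normal, toCam ) > 0.0f;   // > 0: visible, draw it
}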
Clipping Lines
One thing that you'll need is the ability to take two points (a and b) that lie on different sides of a plane, defining a line segment, and find the point where that segment intersects the plane.
This is easy enough to do. Think of this parametrically. Point a can be thought of as the
point at time 0 and point b as the point at time 1, and the point of intersection you want to
find is somewhere between those two.
Take the dot product of each of a and b with the plane's normal. Using them and the negative of the plane's d parameter, you can find the scale value (which is a value between 0 and 1 that defines the parametric location of the point where the segment intersects the plane). Armed with that, you just use the scale value, plugging it into the linear parametric equation to find the intersection location. Figure 5.17 shows this happening visually, and Listing 5.20 has the code.
inline const point3 plane3::Split( const point3 &a, const point3 &b ) const
{
    float aDot = (a * n);                           // n · a
    float bDot = (b * n);                           // n · b
    float scale = ( -d - aDot ) / ( bDot - aDot );  // parametric t where the segment crosses the plane
    return a + ( scale * (b - a) );                 // plug t into the parametric line equation
}
22-Triangle Rasterization
Introduction
The first step in triangle rasterization is to be able to render a solid filled triangle. All
triangle drawing routines should fill the same pixels on the screen so it makes sense to
start with the simplest example and work up. The goal is to draw a filled triangle by
plotting pixels on the screen given three vertex points.
The first step is to sort the triangle vertices by y. Label the top vertex (x0, y0), the middle vertex (x1, y1), and the bottom vertex (x2, y2). Now the triangle fill can be thought of as two separate routines, filling the top half (the region between y0 and y1) and filling the bottom half (the region between y1 and y2). Each of the fill routines consists of filling the triangle region one scanline at a time, using the DDA algorithm to find the x values of the beginning and the end of each pixel span to draw. The top half uses DDA to find the x values on edge 01 and edge 02. The bottom half uses DDA to find the x values on edge 12 and edge 02.
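As a sketch, the top-half fill loop might look like this (DrawSpan is a hypothetical stand-in for the routine that actually writes a pixel span; assumes y0 < y1 and that edge 01 is the left edge):

static void DrawSpan( int y, int xl, int xr )
{
    // fill pixels xl..xr on scanline y (left as a stub here)
}

void FillTopHalf( float x0, float y0, float x1, float y1, float x2, float y2 )
{
    float dxdy01 = ( x1 - x0 ) / ( y1 - y0 );   // slope of edge 01
    float dxdy02 = ( x2 - x0 ) / ( y2 - y0 );   // slope of edge 02
    float xl = x0, xr = x0;
    for ( int y = (int)y0; y < (int)y1; y++ )
    {
        DrawSpan( y, (int)xl, (int)xr );
        xl += dxdy01;   // DDA step: advance each edge's x by its slope
        xr += dxdy02;
    }
}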
Sub-pixel Accuracy
The aforementioned rasterization technique works well when vertex coordinates are
integers, but there are some subtle changes that should be made to the DDA algorithm
when the vertex coordinates do not fall on integer bounds. In essence, sub-pixel accuracy
is a way of accounting for the fractional components of the vertex positions in the triangle
rasterizer. The changes that need to be made are mainly used to prevent jumpiness when
there are amounts of motion that are smaller than a pixel. The edges of the triangle reflect
the fractional change, and without sub-pixel accuracy, the entire triangle would jump
down a pixel at a time. Also, the calculations performed for sub-pixel accuracy allow for
quicker edge anti-aliasing.
The idea of sub-pixel accuracy is to pre-step the x coordinate of each of the edge DDAs
an amount corresponding to the fractional component of the y position of the vertex. For
notation sake we denote the upper vertex of an edge as xa,ya and the lower vertex as xb,yb.
For each edge the starting x coordinate for the DDA algorithm x s is obtained by a prestep
amount xprestep to the original x coordinate of the vertex:
The pre step amount ‘xf’ for x, is calculated by multiplying x/y by the fix up distance, y f
. This fix up distance is just the distance of ya to the next lowest scan line. For
clarification purposes, here is some pseudo code for the sub-pixel accurate DDA, which
can be used to find endpoints for the pixel spans in our triangle rasterizer. This technique
can be used to draw sub-pixel accurate lines also.
yf=yai-ya;      /* fix-up: distance from ya down to the next scan line (yai is ya rounded up) */
xp=(dxdy*yf);   /* pre-step amount */
x=xa+xp;        /* sub-pixel accurate starting x */
So far we have described a method for solid filled triangle rasterization, but there are a variety of other fill types in use. Smooth shaded triangles can be used to approximate the effects of lighting over a surface. They can be used for light falloff, or to give the appearance of a curved surface.
The idea behind smooth shading is to linearly interpolate the vertex colors over the triangle being drawn. Luckily for us, we already have the tool to do this: DDA. In fact, drawing smooth shaded polygons is not much more difficult than drawing solid filled ones. The vertex colors must be interpolated along each edge of the triangle using DDA. This gives us a separate pair of colors for the beginning and end of each pixel span on each scan line. The last step is to use DDA to interpolate the colors across each pixel span.
To do smooth shading with RGB color, you must use separate DDA interpolation routines for the red, green, and blue components of the color. Also note that inside the smooth shading routine, r, g, and b must be represented as a type with a fractional component (type float is a good choice). To avoid visual artifacts it is recommended that you use the sub-texel accuracy technique described further on in these notes.
Another common triangle fill method is called texture mapping. Texture mapping is a
technique for interpolating an image over the triangle being rasterized. The image being
interpolated is known as a texture map, and each pixel in the texture map is known as a
texel. Because it allows for the use of images to represent a surface on an object, it has
the potential to greatly reduce the number of triangles needed to represent an object. In
addition to this, texture mapping can also be used to simulate the effects of complex
lighting conditions on an object.
This section describes bilinear texture mapping, which is the simplest technique to
implement. In order to perform bilinear texture mapping, each vertex contains a u, v
texture coordinate. This specifies the location in the texture map that this vertex
corresponds to. Given these texture coordinates, texture mapping isn’t much more
difficult than smooth shading. The texture coordinates u, v are interpolated over the
triangle using DDA just like the r, g, and b values are in smooth shading. The difference
is that the resulting u, v location for every pixel is used to look up a color value in the texture map image for the pixel to be drawn.
However, there is the problem of how to deal with the fractional component of the u, v values; looking up color values in the texture map requires integer coordinates.
One technique is to round the u, v values to the nearest integer. This is the quickest approach, but it produces a blocky looking triangle image when the texture map is small in comparison to the triangle size. Most software based texture mapping routines used in computer games use this approach because of the speed advantage. However, the majority of hardware based texture mapping routines have an option to do bilinear sampling. Bilinear sampling uses the fractional component of the u, v coordinate to
perform a weighted average of 4 adjacent texel colors. The fractional components of u, v
are used to find the distance of u, v from the texels themselves. This distance is used as
the weighting, and the formula for the pixel color to be drawn, cbilin_samp, is:

cbilin_samp = (1-ufrac)*(1-vfrac)*c00 + (1-ufrac)*vfrac*c01 + ufrac*(1-vfrac)*c10 + ufrac*vfrac*c11

In code:
{
    color c00,c01,c10,c11;
    int u0,u1,v0,v1;
    float ufrac,vfrac;
    u0=floor(u);
    u1=ceiling(u);
    v0=floor(v);
    v1=ceiling(v);
    ufrac=u-u0;
    vfrac=v-v0;
    c00=texMap[u0][v0];
    c01=texMap[u0][v1];
    c10=texMap[u1][v0];
    c11=texMap[u1][v1];
    /* weighted average of the four adjacent texels */
    cbilin_samp = (1-ufrac)*(1-vfrac)*c00 + (1-ufrac)*vfrac*c01
                + ufrac*(1-vfrac)*c10 + ufrac*vfrac*c11;
}
Sub-Texel Accuracy
The disparity between integer screen pixel locations and the mathematical equations for the triangle also causes problems for texture mapping and smooth shading. Any value
that is interpolated over the triangle such as r, g, and b for smooth shading and u and v for
texture mapping must take into account the fractional component of the vertex
information. Taking these fractional quantities into account is called sub-texel accuracy, because the technique is most commonly used with texture mapping. In actuality, sub-texel
accuracy can be applied to any quantity interpolated over the triangle. Without sub-texel
accuracy, the texture will visibly jump around by a pixel when the triangle undergoes
small amounts of motion.
The sub-texel accurate DDA interpolators for texture mapping are very similar to the sub-
pixel accurate DDA routine presented earlier. For each edge of the triangle, the sub-texel
DDA for the interpolated values is identical to the sub-pixel DDA, when u or v is
substituted for x. However, for each scan line, the beginning and end x locations of the
pixel span have fractional components which need to be accounted for. To interpolate the texel coordinates correctly over the pixel span for each scan line, a sub-texel accurate pixel span DDA is required. Luckily for us, this formulation is also virtually identical to the sub-pixel DDA: all that needs to be done is to substitute u or v for x in the original, and substitute x for y, as sketched below.
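For instance, a hedged sketch of the span pre-step, in the same style as the edge pseudocode above (xs is the span's fractional starting x, xsi its ceiling, dudx the u delta per pixel, and ua the u value at xs; these names are assumptions):

xf=xsi-xs;      /* fix-up: distance from xs to the first pixel centre in the span */
up=dudx*xf;     /* pre-step amount for u */
u=ua+up;        /* sub-texel accurate starting u for this span */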
41
© Copyright Virtual University of Pakistan
22-Triangle Rasterization VU
Drawing a triangle (or in general a convex polygon, but as we discussed we will use only triangles) is very simple. The basic idea of the scanline triangle drawing algorithm is as follows.
For each scan line (horizontal line on the screen), find the points of intersection with the edges of the triangle. Then draw a horizontal line between the intersections, and do this for all scan lines.
But how can we find these points quickly?
Using linear interpolation!
We have 3 vertices and we want to find coordinates of all points belonging to segments
determined by these vertices.
Assume we have segment given by points:
(xa,ya) and (xb,yb).
Our task is to find points: (xc,ya+1), (xd,ya+2), ... , (xm,yb-1), (xn,yb).
Notice that x changes from xa to xb in (yb-ya) steps, adding a constant delta at each step:
x(ya+k) = xa + k*delta, for k = 0, 1, ..., (yb-ya),
where delta = (xb-xa)/(yb-ya).
The coordinates of the vertices are (A.x,A.y), (B.x,B.y), (C.x,C.y); we assume that A.y <= B.y <= C.y (you should sort them first).
S = A means that S.x = A.x; S.y = A.y;
S=E=A
I ought to explain what the comparison dx1 > dx2 is for. It's an optimization trick: in the horizontal line routine, we don't need to compare the x's (S.x is always less than or equal to E.x).
Gouraud Shading
The idea of the Gouraud and flat triangle routines is nearly the same. Gouraud takes only three parameters more (the color value of each of the vertices), and the routine just interpolates among them, drawing a beautiful, shaded triangle.
You can use 256-color mode, in which the vertices' colors are simply indices into a palette, or hi-color mode (recommended).
The flat triangle routine interpolates only one value (x in connection with y); 256-color Gouraud needs three (x related to y, color related to y, and color related to x); hi-color Gouraud needs seven (x related to y; the red, green, and blue components of color related to y; and the same three components of color related to x).
Drawing a Gouraud triangle, we add only two parts to the flat triangle routine. The horizline routine gets a bit more complicated due to the interpolation of the color value related to x, but the main routine itself remains nearly the same.
We'll give you a full Gouraud routine, because good pseudocode is better than the best description:
The coordinates of the vertices are (A.x,A.y), (B.x,B.y), (C.x,C.y); we assume that A.y<=B.y<=C.y (you should sort them first).
Vertex A has color (A.r,A.g,A.b), B (B.r,B.g,B.b), and C (C.r,C.g,C.b), where X.r is the color's red component, X.g its green component, and X.b its blue component.
dx1, dx2, dx3 are the deltas used in the interpolation of the x coordinate.
dr1,dr2,dr3, dg1,dg2,dg3, db1,db2,db3 are the deltas used in the interpolation of the color components.
putpixel(P) plots a pixel with coordinates (P.x,P.y) and color (P.r,P.g,P.b).
S=A means that S.x=A.x; S.y=A.y; S.r=A.r; S.g=A.g; S.b=A.b;
Drawing triangle:
/* deltas along edge AB (used while filling the top half) */
if (B.y-A.y > 0) {
dx1=(B.x-A.x)/(B.y-A.y);
dr1=(B.r-A.r)/(B.y-A.y);
dg1=(B.g-A.g)/(B.y-A.y);
db1=(B.b-A.b)/(B.y-A.y);
} else
dx1=dr1=dg1=db1=0;
/* deltas along edge AC (this edge spans the whole triangle) */
if (C.y-A.y > 0) {
dx2=(C.x-A.x)/(C.y-A.y);
dr2=(C.r-A.r)/(C.y-A.y);
dg2=(C.g-A.g)/(C.y-A.y);
db2=(C.b-A.b)/(C.y-A.y);
} else
dx2=dr2=dg2=db2=0;
/* deltas along edge BC (used while filling the bottom half) */
if (C.y-B.y > 0) {
dx3=(C.x-B.x)/(C.y-B.y);
dr3=(C.r-B.r)/(C.y-B.y);
dg3=(C.g-B.g)/(C.y-B.y);
db3=(C.b-B.b)/(C.y-B.y);
} else
dx3=dr3=dg3=db3=0;
S=E=A;
/* dx1 > dx2 means edge AB lies to the right of edge AC, so S (the span start)
   walks AC and E (the span end) walks AB */
if(dx1 > dx2) {
for(;S.y<=B.y;S.y++,E.y++) {
if(E.x-S.x > 0) {
dr=(E.r-S.r)/(E.x-S.x);
dg=(E.g-S.g)/(E.x-S.x);
db=(E.b-S.b)/(E.x-S.x);
} else
dr=dg=db=0;
P=S;
for(;P.x < E.x;P.x++) {
putpixel(P);
P.r+=dr; P.g+=dg; P.b+=db;
}
S.x+=dx2; S.r+=dr2; S.g+=dg2; S.b+=db2;
E.x+=dx1; E.r+=dr1; E.g+=dg1; E.b+=db1;
}
E=B;
for(;S.y<=C.y;S.y++,E.y++) {
if(E.x-S.x > 0) {
dr=(E.r-S.r)/(E.x-S.x);
dg=(E.g-S.g)/(E.x-S.x);
db=(E.b-S.b)/(E.x-S.x);
} else
dr=dg=db=0;
P=S;
for(;P.x < E.x;P.x++) {
putpixel(P);
P.r+=dr; P.g+=dg; P.b+=db;
}
S.x+=dx2; S.r+=dr2; S.g+=dg2; S.b+=db2;
E.x+=dx3; E.r+=dr3; E.g+=dg3; E.b+=db3;
}
} else {
for(;S.y<=B.y;S.y++,E.y++) {
if(E.x-S.x > 0) {
dr=(E.r-S.r)/(E.x-S.x);
dg=(E.g-S.g)/(E.x-S.x);
db=(E.b-S.b)/(E.x-S.x);
} else
dr=dg=db=0;
P=S;
for(;P.x < E.x;P.x++) {
putpixel(P);
P.r+=dr; P.g+=dg; P.b+=db;
}
S.x+=dx1; S.r+=dr1; S.g+=dg1; S.b+=db1;
E.x+=dx2; E.r+=dr2; E.g+=dg2; E.b+=db2;
}
S=B;
for(;S.y<=C.y;S.y++,E.y++) {
if(E.x-S.x > 0) {
dr=(E.r-S.r)/(E.x-S.x);
dg=(E.g-S.g)/(E.x-S.x);
db=(E.b-S.b)/(E.x-S.x);
} else
dr=dg=db=0;
P=S;
for(;P.x < E.x;P.x++) {
putpixel(P);
P.r+=dr; P.g+=dg; P.b+=db;
}
S.x+=dx3; S.r+=dr3; S.g+=dg3; S.b+=db3;
E.x+=dx2; E.r+=dr2; E.g+=dg2; E.b+=db2;
}
}
Textured Triangles
The left triangle is the one drawn onto the screen. A single scanline (one call to the horizline routine) is pointed out as an example. The triangle on the right is the same triangle in bitmap space, with the same scanline drawn into it from another point of view. So we just need to interpolate, interpolate, and once more interpolate in the texture filler: an easy job if you've understood the idea of the Gouraud filler.
An optimization trick: the color deltas in Gouraud and the (u,v) coordinate deltas in texturing remain constant, so we need to calculate them only once per polygon. Let's take the u delta in linear texturing as an example. Assume that dx2 <= dx3 (we are using the same symbols as in the flat and Gouraud fillers). As we know, we need to interpolate from S.u to E.u in the horizline routine in (E.x-S.x) steps. We need a u delta (du) that is the same for the whole polygon. So instead of calculating this in each scanline:
du = (E.u-S.u) / (E.x-S.x),
we do the following in the setup part of the polygon routine. We know that

S.x = A.x + (B.y-A.y) * dx1,
S.u = A.u + (B.y-A.y) * du1,
E.x = B.x = A.x + (B.y-A.y) * dx2,
E.u = B.u = A.u + (B.y-A.y) * du2,

when y = B.y (i.e., when y is the y coordinate of the second vertex). When we substitute these values of S.u, E.u, S.x and E.x into the u delta statement, we get:

du = ([A.u+(B.y-A.y)*du2] - [A.u+(B.y-A.y)*du1]) / ([A.x+(B.y-A.y)*dx2] - [A.x+(B.y-A.y)*dx1])
   = ((B.y-A.y)*(du2-du1)) / ((B.y-A.y)*(dx2-dx1))
   = (du2-du1) / (dx2-dx1)

In other words:

innerUdelta = (outerUdelta2 - outerUdelta1) / (outerXdelta2 - outerXdelta1)
Nice! But what if dx2 = dx1? That of course means the polygon is just one line, so du doesn't need any specific value; zero does the job very well.
Note: I find it hard to get good results here using fixed-point math because of inadequate precision.
Environmental Mapping
As I said in the 'shading' part, the way demos do environment mapping is very simple: take the X and Y components of your pseudo-normal vectors (the normals at the vertices) and use them to index your texture map!
Using texturing and shading at the same time is quite straightforward to implement: the basic idea is that we interpolate both the texture and shade values and blend them in a suitable ratio (alpha-blending).
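A minimal sketch of that lookup, assuming the normal components n.x and n.y lie in [-1,1] and that texW x texH is the size of the environment texture (all names assumed):

/* remap the normal's x and y from [-1,1] into texel coordinates */
u = (n.x * 0.5f + 0.5f) * (texW - 1);
v = (n.y * 0.5f + 0.5f) * (texH - 1);
c = texMap[(int)u][(int)v];   /* the environment-mapped color for this vertex */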
23-Lighting I
struct color4
{
union {
struct
{
float r, g, b, a; // red, green, blue, and alpha color data
};
float c[4];
};
color4(){}
void Saturation() // clamp each component to the displayable [0,1] range
{
if( r > 1 )
r = 1.f;
if( g > 1 )
g = 1.f;
if( b > 1 )
b = 1.f;
if( a > 1 )
a = 1.f;
if( r < 0.f )
r = 0.f;
if( g < 0.f )
g = 0.f;
if( b < 0.f )
b = 0.f;
if( a < 0.f )
a = 0.f;
}
};
We should also point out that when dealing with colors, particularly with some of the
subtleties that we'll be getting into with lights and shades, we should understand the
gamut of the target device. This is where our beautiful clean mathematics meets the real
world. The gamut of a device is simply the physical range of colors the device can
display. Typically, a high-quality display has a better gamut than a cheap one. A good
printer has a gamut that is significantly different from that of a monitor. If we are
interested in getting some color images for printing, we shall have to do some
manipulation on the color values to make the printed image look like the one our program
generated on the screen. We should also be aware that there are color spaces other than
the RGB color space. HSV (hue, saturation, and value) is one that's typically used by
printers, for example.
In one of Mike Abrash's early magazine articles [ABRASH 1992], he tells a story about going from a 256-color palette to hardware that supported 256 levels for each RGB color: 16 million colors! What would we do with all those colors? He goes on to relate a story told by Sheldon Linker at the eighth Annual Computer Graphics Show about how the folks at the Jet Propulsion Lab back in the 1970s had a printer that could print over 50 million distinct colors. As a test, they printed out words on paper where the background color was only one color index away from the words' color. To their surprise, it was easy to discern the words; the human eye is very sensitive to color gradations and edge detection. The JPL team then did the same tests on color monitors and discovered that only about 16 million colors could be distinguished. It seems that the eye is (not too surprisingly) better at perceiving detail from reflected light (such as from a printed page) than from emissive light (such as from a CRT). The moral is that the eye is a lot more perceptive than you might think. Twenty-four bits of color really is not that much range, particularly if we are performing multiple passes. Round-off error can and will show up if we aren't careful!
An example of the various gamuts is shown in the figure below. The CIE diagrams are the traditional way of displaying perceived color space which, we should note, is very different from the linear color space used by today's graphics hardware. The colored area is the gamut of the human eye. The gamuts of printers and monitors are subsets of this gamut.
Figure 1: The 1931 CIE diagram shows the gamut of the eye and the lesser gamut of
output devices.
First we need to be aware of how to treat colors. The calculation of the color of a particular pixel depends, for example, on the surface's material properties that we've programmed in, the color of the ambient light (lighting model), the color of any light shining on the surface (and perhaps the angle of that light to the surface), the angle of the surface to the viewpoint, the color of any fog or other scattering material that's between the surface and the viewpoint, and so on. No matter how you calculate the color of the pixel, it all comes down to color calculations, at least on current hardware, on rgb or rgba vectors where the individual color elements are limited to the [0,1] range.
Operations on colors are done piecewise; that is, even though we represent colors as rgb vectors, they aren't really vectors in the mathematical sense. Vector multiplication is different from the operation we perform to multiply colors. We'll use the symbol ⊗ to indicate such piecewise multiplication.
Colors are multiplied to describe the interaction between a surface and a light source. The colors of each are multiplied together to estimate the reflected light color; this is the color of the light that this particular light reflects off this surface. The problem with the standard rgb model is just that we're simulating the entire visible spectrum by three colors with a limited range.
Let's start with a simple example of using reflected colors. Later on, when we discuss lighting, we'll discover how to calculate the intensity of a light source; for now, just assume that we've calculated the intensity of a light, and it's a value called id. This intensity of our light is represented by, say, a nice lime green color.
Let's say we shine this light on a nice magenta surface given by cs.
To calculate the color contribution of this surface from this particular light, we perform a piecewise multiplication of the color values.
This gives us the dark plum color shown in the figure below. We should note that since the surface has no green component, no matter what value we use for the light color, there will never be any green component in the resulting calculation. Thus a pure green light would provide no contribution to the intensity of a surface if that surface contained a zero value for its green intensity. Thus it's possible to illuminate a surface with a bright light and get little or no illumination from that light. We should also note that using anything other than a full-bright white light [1,1,1] will involve multiplication by values less than one, which means that a single light source will only illuminate a surface to a maximum intensity of its color value, never more. This same problem also happens when a texture is modulated by a surface color. The color of the surface will be multiplied by the colors in the texture. If the surface color is anything other than full white, the texture will become darker. Multiple texture passes can make a surface very dark very quickly.
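A minimal sketch of such piecewise modulation, reusing the color4 structure from earlier (the function name Modulate is illustrative):

// component-by-component (piecewise) color multiplication
color4 Modulate( const color4 &light, const color4 &surface )
{
    color4 out;
    out.r = light.r * surface.r;   // a zero surface component yields zero, whatever the light
    out.g = light.g * surface.g;
    out.b = light.b * surface.b;
    out.a = light.a * surface.a;
    return out;
}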
Figure 2: Multiplying (modulating) color values results in a color equal to or darker than either of the original two.
Given that using a colored light in a scene makes the scene darker, how do we make the scene brighter? There are a few ways of doing this. Although color multiplication will never result in a brighter color, this is offset a bit since we end up summing all the light contributions together, which, as we'll see in the next section, brings its own problems. But if we are just interested in increasing the brightness of one particular light or texture, one way is to use the API (library routines, e.g. OpenGL or DirectX) to artificially brighten the source; this is typically done with texture preprocessing. Or we can artificially brighten the source, be it a light or a texture, by adjusting the values after we modulate them.
On the other hand, what if we have too much contribution to a color? While the colors of lights are modulated by the color of the surface, each light source that illuminates the surface is added to the final color. All these colors are summed up to calculate the final color. Let's look at such a problem. We'll start by summing the reflected colors off a surface from two lights. The first light is an orange color with rgb values [1.0,0.49,0.0], and the second is a light green with rgb values [0.0,1.0,0.49]. Summing these two colors yields [1.0,1.49,0.49], which we can't display because of the value larger than one, as the figure below shows.
Figure 3: Adding colors can result in colors that are outside the displayable range.
So, what can be done when color values exceed the range that the hardware can display? It turns out that there are three common approaches [HALL 1990].
Clamping the color values is implemented in hardware, so for shaders (the technology used in today's computer graphics for lighting and shading) it's the default; it just means that we clamp any values outside the [0,1] range. Unfortunately, this results in a shift in the color.
The second most common approach is to scale the colors by the largest component. This maintains the color but reduces the overall intensity.
The third is to try to maintain the intensity of the color by shifting (or clipping) the color toward pure bright white, reducing the components that are too bright while increasing the other components and maintaining the overall intensity. Since we can't see what the actual color for the figure above is, let's see what color each of these methods yields (figure below).
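As a hedged sketch, the first two strategies might look like this with the color4 structure (ClampColor, ScaleColor, and max3 are illustrative names):

static float max3( float a, float b, float c )
{
    return ( a > b ) ? ( a > c ? a : c ) : ( b > c ? b : c );
}

// strategy 1: clamp each component to [0,1] (what the hardware does by default)
color4 ClampColor( color4 c )
{
    c.Saturation();
    return c;
}

// strategy 2: scale by the largest component; keeps the color, reduces intensity
color4 ScaleColor( color4 c )
{
    float m = max3( c.r, c.g, c.b );
    if( m > 1.f ) { c.r /= m; c.g /= m; c.b /= m; }
    return c;
}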
Figure 4: The results of three strategies for dealing with the same oversaturated color.
As we can see, we get three very different results. In terms of perceived color, the scaled version is probably the closest, though it's darker than the actual color values. If we were interested less in the color and more in the saturation, then the clipped color is closer. Finally, the clamped value is what we get by default, and as you can see, the green component is biased down so that we lose a good sense of the "greenness" of the color we were trying to create.
24-Lighting II
Now it's perfectly fine to end up with an oversaturated color and pass the result along to the graphics engine. What happens in the pipeline is an implicit clamping of the color values: any value greater than one is clamped to one, and any value less than zero is clamped to zero. This has the benefit of requiring no effort on the part of the shader writer (the shader being the technology used today for lighting and shading, supported by graphics hardware). Though this may make the rendering engine happy, it probably isn't what we want. Intuitively, we'd think that shining orange and green lights on a white surface would yield a strong green result, but letting the hardware clamp eradicates any predominant effect from the green light. Clamping is fast, but it tends to lose fidelity in the scene, particularly in areas where we would want and expect subtle changes as the light intensities interact; those interactions get eradicated because the differences are all clamped away by the graphics hardware.
Figure 1: Adding colors can result in colors that are outside the displayable range.
One problem with clamping or scaling colors is that the results get darker (lose saturation). An alternative to scaling is to maintain saturation by shifting color values. This technique is called clipping, and it's a bit more complicated than color scaling or clamping. The idea is to create a gray-scale vector that runs along the black-white axis of the color cube with the same brightness as the original color, and then to draw a ray at right angles to this vector that intersects (i.e., clips) the original color's vector. We need to check that the gray-scale vector is itself within the [0,1] range, and then check the sign of the ray elements to see if the color elements need to be increased or decreased. As you have probably guessed, this can result in adding in a color value that wasn't in the original color, but this is a direct result of wanting to make sure that the overall brightness is the same as the original color. And, of course, everything goes to hell in a handbasket if we've got overly bright colors, which leaves us with decisions about how to nudge the gray-scale vector into the [0,1] range, since that means we can't achieve the input color's saturation value. Then we're back to clamping or scaling again.
ColorSpace Tool
The ColorSpace tool is a handy tool that we can use to interactively add two colors together and see the effects of the various strategies for handling oversaturated colors. We simply use the sliders to select the rgb values for each color. The four displays in Figure 5 show the composite, unmodified values of the resulting color (with no color square) and the clamped, clipped, and scaled color rgb values, along with a color square illustrating those color values.
We may be wondering: if we can have color values greater than the range in intermediate calculations, can we have negative values? Yes, we can! They are called "darklights" after their description in an article [GLASSNER 1992] in Graphics Gems III. Since this is all just math until we pass it back to the graphics hardware, we can pretty much do anything we want, which is pretty much the idea behind programmable shaders (the technology used by today's graphics hardware for lighting and shading)! Darklights are nothing more than lights in which one or more of the color values are negative. Thus instead of contributing to the overall lighting in a scene, we can specify a light that diminishes the overall lighting. Darklights are used to eliminate bright areas when we're happy with all the lighting in our scene except for an overly bright spot. Darklights can also be used to filter out a specific rgb color: if we wanted a night-vision effect, we could use a darklight with negative red and blue values, for example, which would leave just the green channel.
Alpha Blending
Up to this point, we've been fairly dismissive of the mysterious alpha component that rides along in the color4 structure. Now we may finally learn its dark secrets. A lot of power is hidden away inside the alpha component.
Loosely, the alpha component of the RGBA quad represents the opaqueness of a surface. An alpha value of 0xFF (255) means the color is completely opaque, and an alpha value of 0x00 (0) means the color is completely transparent. Of course, the value of the alpha component is fairly meaningless unless we actually activate the alpha blending step. If we want, we can set things up a different way, such as having 0x00 (0) mean that the color is completely opaque; the meaning of alpha depends on how we set up the alpha blending step.
As we rasterize primitives, each pixel that we wish to change in the frame buffer gets sent through the alpha blending step. That pixel is combined, using blending factors, with the pixel that is currently in the frame buffer. We can add the two pixels together, multiply them together, linearly combine them using the alpha component, and so forth. The name "alpha blending" comes from the fact that generally the blending factors used are either the alpha or the inverse of the alpha.
The Alpha Blending Equation
The equation that governs the behavior of the blending is defined as follows:

final color = (source color * source blend factor) + (destination color * destination blend factor)
Final color is the color that goes to the frame buffer after the blending operation. Source
is the pixel we are attempting to draw to the frame buffer, generally one of the many
pixels in a triangle we have to draw. Destination is the pixel that already exists in the
frame buffer before we attempt to draw a new one. The source and destination blend
factors are variables that modify how the colors are combined together. The blend factors
are the components we have control over in the equation; we cannot modify the positions
of any of the terms or modify the operations performed on them.
For example, say we want an alpha blending equation to do nothing: to just draw the pixel from the triangle and not consider what was already there at all. An equation that would accomplish this is:

final color = (source color * 1) + (destination color * 0)
In this equation, the source blend factor is set to one and the destination blend factor is set to zero. Since the destination color is multiplied by zero, the right-hand side of the equation drops away and we are left with:

final color = source color
Code Example
//This code will blend one image into the second
struct COLOR3{
    BYTE b;
    BYTE g;
    BYTE r;
};
// The body of the original loop is a hedged reconstruction of a per-pixel
// linear blend; the names secondImage, Pixels, and alpha (a blend factor
// in [0,1]) are assumptions, not from the original listing.
COLOR3 *p1 = (COLOR3*)firstImage.Pixels;
COLOR3 *p2 = (COLOR3*)secondImage.Pixels;
for(int k=0; k<firstImage.Width*firstImage.Height; k++)
{
    p1->r = (BYTE)( alpha*p1->r + (1.f-alpha)*p2->r );
    p1->g = (BYTE)( alpha*p1->g + (1.f-alpha)*p2->g );
    p1->b = (BYTE)( alpha*p1->b + (1.f-alpha)*p2->b );
    p1++;
    p2++;
}
BlitData(displaydeviceContext, 0,0,firstImage.Width,firstImage.Height);
The following images show the result of the above code.
25-Mathematics of Lighting and Shading, Part I
In order to understand how an object's color is determined, we'll need to understand the
parts that come into play to create the final color. First, we need a source of illumination,
typically in the form of a light source in our scene. A light has the properties of color (an
rgb value) and intensity. Typically, these are multiplied to give scaled rgb values. Lights
can also have attenuation, which means that their intensity is a function of the distance
from the light to the surface. Lights can additionally be given other properties such as a
shape (e.g., spotlights) and position (local or directional), but that's more about the implementation than the math of lighting effects. Given a source of illumination, we'll need a surface on which the light will shine. Here's where we get interesting effects. Two types of phenomena are important in lighting calculations.
The first is the interaction of light with the surface boundary, and the second is the effect of light as it gets absorbed, transmitted, and scattered by interacting with the actual material itself. Since we really only have tools for describing the surfaces of objects and not the internal material properties, light-surface boundary interactions are the most common type of calculation we'll see used, though we can do some interesting simulations of the interaction of light with material internals.
Materials are typically richer in their descriptions, in an effort to mimic the effects seen in real light-material surface interactions. Materials are typically described using two to four separate colors in an effort to catch the nuances of real-world light-material surface interactions.
ambient and specular frequently grouped together, and emissive specified only for objects
that generate light themselves. The reason there are different colors is to give different
effects arising from different environmental causes. The most common lights are as
follows:
Ambient lighting:
It is the overall color of the object due to the global ambient light level. This is the color of the object when there's no particular light, just the general environmental illumination. That is, the ambient light is an approximation for the global illumination in the environment; it relies upon no particular light in the scene. It's usually a global value that's added to every object in a scene.
Diffuse lighting:
It is the color of the object due to the effect of a particular light. The diffuse light is the color of the surface as if the surface were perfectly matte. Diffuse light is reflected in all directions from the surface and depends only on the angle of the light to the surface normal.
Specular lighting:
It is the color of the highlights on the surface. The specular light mimics the shininess of a
surface, and its intensity is a function of the light's reflection angle off the surface.
Emissive lighting:
When we need an object to "glow" in a scene, we can do this with an emissive light. This is just an additional color source added to the final light of the object. Because we're only simulating an object giving off its own light, we'd still have to add a real "light" to get an effect on other objects in the scene.
Before we get into exactly what these types of lighting are, let's put it in perspective for
our purpose of writing shader code. Shading is simply calculating the color reflected off a
surface (which is pretty much what shaders do). When a light reflects off a surface, the
light colors are modulated by the surface color (typically, the diffuse or ambient surface
color). Modulation means multiplication, and for colors, since we are using rgb values,
this means component-by-component multiplication. So for a light source l with color (rl,gl,bl) shining on a surface s with color (rs,gs,bs), the resulting color r would be:

r = (rl*rs, gl*gs, bl*bs)

where the rgb values of the light and surface are multiplied out to get the final color's rgb values.
The final step after calculating all the lighting contributions is to add together all the
lights to get the final color. So a shader might typically do the following:
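As a hedged sketch of that final summation (names such as numLights, lightColor, and materialDiffuse are illustrative, not from any particular API):

color4 final = emissive;                             // the object's own glow, if any
final.r += globalAmbient.r * materialAmbient.r;      // ambient contribution
final.g += globalAmbient.g * materialAmbient.g;
final.b += globalAmbient.b * materialAmbient.b;
for( int i = 0; i < numLights; i++ )
{
    final.r += lightColor[i].r * materialDiffuse.r;  // modulate each light, then accumulate
    final.g += lightColor[i].g * materialDiffuse.g;
    final.b += lightColor[i].b * materialDiffuse.b;
}
final.Saturation();                                  // clamp the summed result to [0,1]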
In the real world, we get some sort of interaction (reflection, etc.) when a photon interacts with a surface boundary. Thus we see the effects not only when we have a transparent-opaque boundary (like air-plastic), but also a transparent-transparent boundary (like air-water). The key feature here is that we get some visual effect when a photon interacts with some boundary between two different materials. The conductivity of the materials directly affects how the photon is reflected. At the surface of a conductor (metals, etc.), the light is mostly reflected. For dielectrics (nonconductors), there is usually more penetration and transmittance of the light. For both kinds of materials, the dispersion of the light is a function of the roughness of the surface (Figures 1 and 2).
Figure 2: Light reflecting from a rough and smooth surface of a dielectric showing some
penetration.
The simplest model assumes that the roughness of the surface is so fine that light is dispersed equally in all directions, as shown in Figure 1, though later we'll look at fixing this assumption. A generalization is that conductors are opaque and dielectrics are transparent. This gets confusing, since most of the dielectric surfaces that we are interested in modeling are mixtures that don't fall into the simple models we've described so far. Consider a thick colored lacquer surface. The lacquer itself is transparent, but suspended in the lacquer are reflective pigment particles, off which light gets reflected, bounced, split, shifted, or altered before perhaps reemerging from the surface. This can be seen in Figure 3, where the light rays are not just reflected but bounced around a bit inside the medium before getting retransmitted to the outside.
Metallic paint, brushed metal, velvet, etc. are all materials for which we need better models to represent their surfaces. But with a little creativity in the modeling, it's possible to mimic the effect. Figure 4 shows what you get when you use multiple broad specular terms for multiple base colors combined with a more traditional shiny specular term. There's also a high-frequency normal perturbation that simulates the sparkle from a metallic flake pigment. As we can see, we can get something that looks particularly striking with a fairly simple model.
Figure 4: A simple shader to simulate metallic paint: (a) shows the two-tone paint
shading pass; (b) shows the specular sparkle shading pass; (c) shows the environment
mapping pass; (d) shows the final composite image
The traditional model gives us a specular term and a diffuse term. We have been able to
add in texture maps to give our scenes some uniqueness, but the lighting effects have
been very simple. Shaders allow us to be much more creative with lighting effects. As
Figure 4 shows, with just a few additional specular terms, we can bring forth a very
interesting look. But before we go off writing shaders, we'll need to take a look at how it
all fits together in the graphics pipeline. And a good place to start is by examining the
traditional lighting model.
26-Mathematics of Lighting and Shading, Part II: Light Types and Shading Models
Parallel lights cheat a little bit. They represent light that comes from an infinitely far away
light source. Because of this, all of the light rays that reach the object are parallel (hence
the name). The standard use of parallel lights is to simulate the sun. While it's not
infinitely far away, 93 million miles is good enough!
The great thing about parallel lights is that a lot of the math goes away. The attenuation factor is always 1 (for point lights and spotlights, it generally involves divisions, if not square roots). The incoming light vector for the calculation of the diffuse reflection factor is the same for all considered points, whereas point lights and spotlights require a vector subtraction and a normalization per vertex.
Typically, lighting is the kind of effect that is sacrificed for processing speed. Parallel
light sources are the easiest and therefore fastest to process. If we can't afford to do the
nicer point lights or spotlights, falling back to parallel lights can keep our frame rates at
reasonable levels.
The light direction is different for each surface location (otherwise the point light would look just like a directional light). The equation for it is:

L = (lightPos - p) / |lightPos - p|

where p is the surface location and lightPos is the position of the point light.
Spotlights are the most expensive type of light we discuss in this course and should be avoided if possible, because they are costly for real-time environments. We model a spotlight not unlike the type we would see in a theatrical production: they are point lights, but light only leaves the point in a particular direction, spreading out based on the aperture of the light.
Spotlights have two angles associated with them. One is the internal cone, whose angle is generally referred to as theta (θ). Points within the internal cone receive all of the light of the spotlight; the attenuation is the same as it would be for a point light. There is also an angle that defines the outer cone, referred to as phi (φ). Points outside the outer cone receive no light. Points outside the inner cone but inside the outer cone receive light, usually with a linear falloff based on how close the point is to the inner cone; the sketch after the figure shows one way to compute it.
Figure 2: A spotlight
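A hedged sketch of that falloff (SpotFactor and its parameters are illustrative; cosAngle is the cosine of the angle between the spotlight axis and the direction to the point, and cosInner and cosOuter are the cosines of the theta/2 and phi/2 half-angles):

// returns the spotlight intensity factor for a point, in [0,1]
float SpotFactor( float cosAngle, float cosInner, float cosOuter )
{
    if( cosAngle >= cosInner ) return 1.f;   // inside the inner cone: full light
    if( cosAngle <  cosOuter ) return 0.f;   // outside the outer cone: no light
    // between the cones: linear falloff toward the outer edge
    return ( cosAngle - cosOuter ) / ( cosInner - cosOuter );
}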
If we think all of this sounds mathematically expensive, we're right. Some library packages like OpenGL and Direct3D implement lighting for us, so we won't need to worry about the implementation of the math behind spotlights, but rest assured that they're extremely expensive and can slow down a graphics application a great deal. Then again, they do provide an incredible amount of atmosphere when used correctly, so we will have to find a balance between performance and aesthetics.
Shading Models
Once we've found the basic lighting information, we need to know how to draw the triangles with the supplied information. There are currently three ways to do this; the third has just become a hardware feature with DirectX 9.0. In our previous lectures we have already studied the flat and Gouraud triangle shading algorithms.
I. Lambert
Triangles that use Lambertian shading are painted with one solid color instead of a gradient. Typically each triangle is lit using that triangle's normal. The resulting object looks very angular and sharp. Lambertian shading was used mostly back when computers weren't fast enough to do Gouraud shading in real time. To light a triangle, you compute the lighting equation using the triangle's normal and any one of the three vertices of the triangle.
III. Phong
Phong shading is the most realistic shading model We are going to talk about, and also
the most computationally expensive. It tries to solve several problems that arise when we
use Gouraud shading. If we're looking for something more realistic, some authors have
also discussed nicer shading models like Tarrence-Sparrow, but they aren't real time (at
least not right now). First of all, Gouraud shading uses a linear gradient. Many objects in
real life have sharp highlights, such as the shiny spot on an apple. This is difficult to
handle with pure Gouraud shading. The way Phong does this is by interpolating the
normal across the triangle face, not the color value, and the lighting equation is solved
individually for each pixel.
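A hedged sketch of the per-pixel work inside one span (vec3, Lerp, Normalize, Dot, and the light names are assumed helpers; a real routine would add specular terms and more lights):

float span = (float)( xe - xs );
if( span < 1.f ) span = 1.f;                         // guard against zero-width spans
for( int x = xs; x <= xe; x++ )
{
    float t = ( x - xs ) / span;
    vec3  n = Normalize( Lerp( nStart, nEnd, t ) );  // interpolate the normal, not the color
    float diff = Dot( n, lightDir );                 // solve the lighting equation per pixel
    if( diff < 0.f ) diff = 0.f;
    putpixel( x, y, diff * lightColor );             // diffuse-only, for brevity
}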
27-Review II
This is a simple example of line clipping: the display window is the canvas and also the default clipping rectangle; thus all line segments inside the canvas are drawn.
The red box is the clipping rectangle we will use later, and the dotted lines are the extensions of the four edges of the clipping rectangle.
Vertices which are kept after clipping against one window edge are saved for clipping
against the remaining edges.
Note that the number of vertices usually changes and will often increase.
We are using the Divide and Conquer approach.
Another approach is to check the final vertex list for multiple vertex points along any clip window boundary and correctly join pairs of vertices. Finally, we could use a more general polygon clipper, such as either the Weiler-Atherton algorithm or the Weiler algorithm described in the next section.
In this technique, the vertex-processing procedures for window boundaries are modified so that concave polygons are displayed correctly. This clipping procedure was developed as a method for identifying visible surfaces, so it can be applied with arbitrary polygon-clipping regions.
The basic idea in this algorithm is that instead of always proceeding around the polygon edges as vertices are processed, we sometimes want to follow the window boundaries. Which path we follow depends on the polygon-processing direction (clockwise or counterclockwise) and whether the pair of polygon vertices currently being processed represents an outside-to-inside pair or an inside-to-outside pair. In the following figure, the processing direction in the Weiler-Atherton algorithm and the resulting clipped polygon are shown for a rectangular clipping window, assuming clockwise processing of the polygon vertices.
27.11 3D Concepts
With the Cartesian coordinate system, you can define any point in space by saying how
far along each of the three axes you need to travel in order to reach the point if you start at
the origin.
Following are the types of coordinate systems. A 2-D coordinate system:
• Can define points, segments, lines, rays, curves, polygons (any planar geometry)
• Can have multiple origins (frames of reference) and transform coordinates among them
A 3-D coordinate system:
• Can define cubes, cones, spheres, etc. (volumes in space) in addition to all one- and two-dimensional entities
• Can have multiple origins (frames of reference) and transform coordinates among them
A right-handed Cartesian coordinate system describes the relationship of the X, Y, and Z axes in the following manner:
(Figure: right-handed axes, with +X pointing east, +Y up, -Z north, and +Z south; the origin at the center.)
A left-handed Cartesian coordinate system describes the relationship of the X, Y, and Z axes in the following manner:
(Figure: left-handed axes, with +X pointing east, +Y up, +Z north, and -Z south; the origin at the center.)
P = (X, Y, Z)
Thus the origin of the coordinate system is located at point (0,0,0), while a point five units to the right of the origin would be located at (5,0,0).
Local coordinate systems can be defined with respect to the global coordinate system.
In fact, there usually are multiple coordinate systems within any 3-D scene.
Individual coordinate systems are often hierarchically linked within the scene.
27.26 Primitives
Primitives are the fundamental geometric entities within a given data structure.
For example, 100 individual triangles, each requiring 3 vertices, would require 100 x 3 = 300 vertex definitions to be stored in the 3-D database.
Meshes also provide continuity across surfaces, which is important for shading calculations.
With curved surfaces, the accuracy of the approximation is directly proportional to the
number of polygons used in the representation.
But more polygons also exact greater computational overhead, thereby degrading
interactive performance, increasing render times, etc.
27.27 Rendering
The process of computing a two-dimensional image using a combination of a three-dimensional database, scene characteristics, and viewing transformations. Various algorithms can be employed for rendering, depending on the needs of the application.
27.28 Tessellation
The subdivision of an entity or surface into one or more non-overlapping primitives.
Typically, renderers decompose surfaces into triangles as part of the rendering
process.
27.29 Sampling
The process of selecting a representative but finite number of values along a
continuous function sufficient to render a reasonable approximation of the function
for the task at hand.
27.31 Transformations
The process of moving points in space is called transformation.
Where:

P = (x, y, z),   P' = (x', y', z'),   T = (tx, ty, tz),   and   P' = P + T
3D Translation Example
We may want to move a point "3 meters east, -2 meters up, and 4 meters north." What would be done in such an event?
Steps for Translation
Given a point in 3D and a translation vector, it can be translated as shown in the sketch below.
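A minimal sketch of those steps, using the axis convention shown earlier (+X east, +Y up, -Z north), so "4 meters north" means tz = -4; the point3 name follows these notes, and the rest is illustrative:

struct point3 { float x, y, z; };

// translate point p by the vector t: add the components
point3 Translate( point3 p, point3 t )
{
    p.x += t.x;
    p.y += t.y;
    p.z += t.z;
    return p;
}

// "3 meters east, -2 meters up, 4 meters north":
// point3 t = { 3.f, -2.f, -4.f };
// p = Translate( p, t );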
Homogeneous Coordinates
Analogous to their 2D counterpart, the homogeneous coordinates for 3D translation can be expressed as:
[x']   [1  0  0  tx]   [x]
[y'] = [0  1  0  ty] . [y]
[z']   [0  0  1  tz]   [z]
[1 ]   [0  0  0  1 ]   [1]
Abbreviated as:
P’ = T (tx, ty , tz) . P
On solving the RHS of the matrix equation, we get:
x' = x + tx
y' = y + ty
z' = z + tz
Which shows that each of the 3 coordinates gets translated by the corresponding
translation distance.
Rotation
Rotation is the process of moving a point in space along a circular path (rather than along a straight line, as in translation).
… Now in 3D
Rotation can be about any of the three axes:
• About the z-axis (i.e. in the xy plane): roll
• About the x-axis (i.e. in the yz plane): pitch
• About the y-axis (i.e. in the xz plane): yaw
The rotation equations for the other two axes can be obtained by cyclic permutation of the coordinate parameters x, y, and z.
b) SCALING:-
Coordinate transformations for scaling relative to the origin are
X` = X . Sx
Y` = Y . Sy
Z` = Z . Sz
Uniform Scaling
We preserve the original shape of an object with a uniform scaling
( Sx = Sy = Sz)
Differential Scaling
We do not preserve the original shape of an object with a differential scaling
( Sx <> Sy <> Sz)
Scaling w.r.t. the origin:

[Sx  0   0   0]
[0   Sy  0   0]
[0   0   Sz  0]
[0   0   0   1]
27.33 PROJECTION
Projection can be defined as a mapping of a point P(x,y,z) onto its image P'(x',y',z') in the projection plane or view plane, which constitutes the display surface.
Methods of Projection
•Parallel projection (further classified as orthographic or oblique)
•Perspective projection
Axonometric projections:
There are three axonometric projections:
•Isometric
•Dimetric
•Trimetric
1. Isometric
The projection plane intersects each coordinate axis in the model coordinate system at an
equal distance or the direction of projection makes equal angles with all of the three
principal axes
2. Dimetric
The direction of projection makes equal angles with exactly two of the principal axes
3. Trimetric
The direction of projection makes unequal angles with the three principal axes
For an oblique projection, the projected coordinates are:
Xp = x + z ( L1 cos(Ф) )
Yp = y + z ( L1 sin(Ф) )
where L1 = L/z
28-Review III
We know what AC is: it is point.z! And we know what CP is: it is point.x! Therefore:

|BS| / |AB| = |CP| / |AC|
|BS| = |AB| * |CP| / |AC|
|BS| = Screen.z * point.x / point.z

Screen.z is the distance d from the point of view at the origin, i.e., the scaling factor.
Notice that |BS| is the length of the line segment that goes from B to S in world units. But we normally address the screen with the point (0,0) at the top left, with +X pixels moving to the right and +Y pixels moving down, not from the middle of the screen. And we draw to the screen in pixel units, not our world units (unless, of course, 1.0 in your world represents one pixel).
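A hedged sketch of that last step (width, height, and screenZ are illustrative names; screenZ is the scaling distance d used above):

/* perspective-project a camera-space point, then move the origin from the
   screen centre to the top-left corner and flip +Y to point down */
float sx = screenZ * point.x / point.z;
float sy = screenZ * point.y / point.z;
int   px = (int)( width  / 2.f + sx );
int   py = (int)( height / 2.f - sy );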
28.2 Triangles
Triangles are to 3D graphics what pixels are to 2D graphics. Every PC hardware accelerator under the sun uses triangles as the fundamental drawing primitive (well … scan-line aligned trapezoids actually, but that's a hardware implementation issue). When you draw a polygon, hardware devices really draw a fan of triangles. Triangles "flesh out" a 3D object; connected together they form a skin or mesh that defines the boundary surface of an object. Triangles, like polygons, generally have an orientation associated with them, to help in normal calculations. The ordering of the vertices goes clockwise around the triangle. The figure below shows what a clockwise-ordered triangle would look like.
It is impossible to see triangles that face away from you. (You can find this out by
computing the triangle's plane normal and performing a dot product with a vector from
the camera location to a location on the plane.)
Now let's move on to the code. To help facilitate using multiple vertex types, I'll implement a triangle structure. I only define constructors and keep the access public.
struct tri
{
    point3 v[3];   // the three vertices, in clockwise order

    tri()
    {
        // nothing
    }

    tri( const point3 &v0, const point3 &v1, const point3 &v2 )
    {
        v[0] = v0;
        v[1] = v1;
        v[2] = v2;
    }
};
Triangle fans, conceptually, look like the folding fans you see in Asian souvenir shops. They are a list of triangles that all share a common point. The first three elements indicate the first triangle. Then each new element is combined with the first element and the current last element to form a new triangle. Note that an N-sided polygon can be represented efficiently using a triangle fan; the figure below illustrates what I'm talking about.
28.4 Planes
The next primitive to discuss is the plane. Planes are to 3D what lines are to 2D; they're n-1 dimensional hyperplanes that can help you accomplish various tasks. Planes are defined as infinitely large, infinitely thin slices of space, like big pieces of paper. The triangles that make up your model each exist in their own plane. When you have a plane that represents a slice of 3D space, you can perform operations like classification of points and polygons, and clipping.
So how do you represent planes? It is best to build a structure from the equation that defines a plane in 3D. The implicit equation for a plane is:

ax + by + cz + d = 0
What do these numbers represent? The triplet <a,b,c> represents what is called the normal of the plane. A normal is a unit vector that, conceptually speaking, sticks directly out of the plane. A stronger mathematical definition would be that the normal is a vector perpendicular to all of the vectors that lie in the plane.
The d component in the equation represents the distance from the plane to the origin, measured along the normal's direction. Finally, the triplet <x,y,z> is any point that satisfies the equation. The set of all points <x,y,z> that solve the equation is exactly the set of points that lie in the plane.
All of the pictures shown here are of the top-down variety, with the 3D planes on edge, appearing as 2D lines. This makes figure drawing much easier.
Following are two examples of planes. The first has the normal pointing away from the origin, which causes d to be negative (try some sample values for yourself if this doesn't make sense). The second has the normal pointing towards the origin, so d is positive. Of course, if the plane goes through the origin, d is zero (the distance from the plane to the origin is zero). Figures 1 and 2 provide some insight into this relation.
Figure 1: d is negative when the normal faces away from the origin
It's important to notice that technically the normal <a,b,c> does not have to be unit length for the plane equation to be valid. But things end up nicer if the normal is unit length, so we will keep it normalized.
Constructing a plane given three points that lie in the plane is a simple task. You just perform a cross product between the two vectors made up by the three points to find a normal for the plane. After generating the normal and making it unit length, finding the d value for the plane is just a matter of storing the negative dot product of the normal with any of the points. This holds because it essentially solves the plane equation above for d: plugging a point in the plane into the equation must make it equal 0, and this constructor has three such points.
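A hedged sketch of that constructor, following these notes' convention that * between points is the dot product (Cross and Normalize are assumed helper names):

plane3::plane3( const point3 &p0, const point3 &p1, const point3 &p2 )
{
    n = Cross( p1 - p0, p2 - p0 );   // perpendicular to both edge vectors
    n = Normalize( n );              // things end up nicer with a unit-length normal
    d = -( n * p0 );                 // so that n . p + d = 0 for every point in the plane
}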
Let's suppose you have a triangle whose elements are ordered in such a fashion that, when viewing the triangle from the front, the elements appear in clockwise order. Back-face culling allows you to take triangles defined with this method and use the plane equation to discard triangles that are facing away. Conceptually, any closed mesh, a cube for example, will have some triangles facing you and some facing away. You know for a fact that you'll never be able to see a polygon that faces away from you; it is always hidden by triangles facing towards you. This, of course, doesn't hold if you're allowed to view the cube from its inside, but that shouldn't be allowed to happen if you want to really optimize your engine.
Rather than perform the work necessary to draw all of the triangles on the screen, you can use the plane equation to find out whether a triangle is facing towards the camera, and discard it if it is not. How is this achieved? Given the three points of the triangle, you can define the plane that the triangle sits in. Since you know the elements of the triangle are listed in clockwise order, you also know that if you pass the elements in order to the plane constructor, the normal to the plane will be on the front side of the triangle. If you then think of the location of the camera as a point, all you need to do is perform a point-plane test: if the point of the camera is in front of the plane, then the triangle is visible and should be drawn.
There's an optimization to be had. Since you know three points that lie in the plane (the three points of the triangle), you only need to hold onto the normal of the plane, not the entire plane equation. To perform the back-face cull, just subtract one of the triangle's points from the camera location and perform a dot product of the resultant vector with the normal. If the result of the dot product is greater than zero, the viewpoint is in front of the triangle. The figure and sketch below can help explain the point.
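A hedged sketch of that optimized test, using the tri structure above (Cross is an assumed helper; the winding convention is as described):

// returns true if the camera is on the front side of the triangle
bool IsFrontFacing( const tri &t, const point3 &camera )
{
    point3 normal = Cross( t.v[1] - t.v[0], t.v[2] - t.v[0] );  // front-side normal
    return ( ( camera - t.v[0] ) * normal ) > 0.f;              // dot-product sign test
}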
Drawing a triangle (or in general a convex polygon, but as we discussed we will use only triangles) is very simple. The basic idea of the scanline triangle drawing algorithm is as follows.
For each scan line (horizontal line on the screen), find the points of intersection with the edges of the triangle. Then draw a horizontal line between the intersections, and do this for all scan lines.
But how can we find these points quickly?
Using linear interpolation!
We have 3 vertices and we want to find coordinates of all points belonging to segments
determined by these vertices.
Assume we have segment given by points:
(xa,ya) and (xb,yb).
Our task is to find the points (xc,ya+1), (xd,ya+2), ... , (xm,yb-1), (xn,yb).
Notice that x changes from xa to xb in (yb-ya) steps. We also have:
xa = xa + 0*delta,
xc = xa + 1*delta,
xd = xa + 2*delta,
... and so on, where delta = (xb-xa)/(yb-ya).
So each scan line's x coordinate is obtained from the previous one by adding the constant delta.
The coordinates of the vertices are (A.x, A.y), (B.x, B.y), (C.x, C.y); we assume that
A.y <= B.y <= C.y (you should sort them first).
S = A means that S.x = A.x; S.y = A.y.
dx1 = (B.y - A.y) > 0 ? (B.x - A.x) / (B.y - A.y) : 0;   /* slope of edge AB */
dx2 = (C.y - A.y) > 0 ? (C.x - A.x) / (C.y - A.y) : 0;   /* slope of edge AC */
dx3 = (C.y - B.y) > 0 ? (C.x - B.x) / (C.y - B.y) : 0;   /* slope of edge BC */
S = E = A;
if (dx1 > dx2)
{
for(;S.y<=B.y;S.y++,E.y++,S.x+=dx2,E.x+=dx1)
horizline(S.x,E.x,S.y,color);
E=B;
for(;S.y<=C.y;S.y++,E.y++,S.x+=dx2,E.x+=dx3)
horizline(S.x,E.x,S.y,color);
}
else
{
for(;S.y<=B.y;S.y++,E.y++,S.x+=dx1,E.x+=dx2)
horizline(S.x,E.x,S.y,color);
S=B;
for(;S.y<=C.y;S.y++,E.y++,S.x+=dx3,E.x+=dx2)
horizline(S.x,E.x,S.y,color);
}
I ought to explain what the comparison dx1 > dx2 is for. It's an optimization trick: thanks to it, in the horizline routine we don't need to compare the x's (S.x is always less than or equal to E.x).
The idea of the Gouraud and flat triangle routines is nearly the same. Gouraud takes only three parameters more (the color value of each of the vertices), and the routine just interpolates among them, drawing a beautiful, shaded triangle. You can use 256-color mode, in which the vertices' colors are simply indices into the palette, or hi-color mode (recommended).
The flat triangle interpolated only one value (x in relation to y); a 256-color Gouraud triangle needs three (x related to y, color related to y, and color related to x), and a hi-color Gouraud triangle needs seven (x related to y; the red, green, and blue components of the color related to y; and the three components of the color related to x). Drawing a Gouraud triangle, we add only two parts to the flat triangle routine. The horizline routine gets a bit more complicated due to the interpolation of the color value related to x, but the main routine itself remains nearly the same.
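As a hedged sketch of that per-span color interpolation, a 256-color Gouraud horizline might look like the following (the function and putpixel are our own illustration, not the lecture's code; the idea is that the color step per pixel is computed once per span, exactly as delta was computed once per edge):

extern void putpixel(int x, int y, int color);   /* assumed framebuffer write */

/* Draw one Gouraud-shaded scan line from (xs, y) to (xe, y).
   cs and ce are the palette color indices at the two ends of the span. */
void gouraud_horizline(int xs, int xe, int y, float cs, float ce)
{
    float c = cs;
    float dc = (xe > xs) ? (ce - cs) / (float)(xe - xs) : 0.0f;  /* color step per pixel */
    int x;
    for (x = xs; x <= xe; x++, c += dc)
        putpixel(x, y, (int)c);
}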
The left triangle is the triangle which is drawn onto the screen. There's a single scanline
(one call to the horizline routine) pointed out as an example. The triangle on the right is
the same triangle in the bitmap space, and there's the same scanline drawn from another
point of view into it, too. So we just need to interpolate, interpolate, and once more interpolate in the texture filler: an easy job if you've understood the idea of the Gouraud filler.
28.11 COLOR
It is important to understand how color is represented in computer graphics so that we can
manipulate it effectively. A color is usually represented in the graphics pipeline by a
three-element vector representing the intensities of the red, green, and blue components,
or for a more complex object, by a four-element vector containing an additional value
called the alpha component that represents the opacity of the color. Thus we can talk
about rgb or rgba colors and mean a color that's made up of either three or four elements.
There are many different ways of representing the intensity of a particular color element.
Colors can also be represented as floating point values in the range [0,1].
Nowadays every PC we can buy has hardware that can render images with thousands or
millions of individual colors. Rather than have an array with thousands of color entries,
the images instead contain explicit color values for each pixel. A 16-bit display is so named because each pixel in a 16-bit image takes up 16 bits (2 bytes): 5 bits of red
information, 6 bits of green information, and 5 bits of blue information. Incidentally, the
extra bit (and therefore twice as much color resolution) is given to green because our eyes
are more sensitive to green. A 24-bit display, of course, uses 24 bits, or 3 bytes per pixel,
for color information. This gives 1 byte, or 256 distinct values each, for red, green, and
blue. This is generally called true color, because 256^3 (about 16.7 million) colors is about as
much as your eyes can discern, so more color resolution really isn't necessary, at least for
computer monitors.
Finally, there is 32-bit color, something seen on most new graphics cards. Many 3D
accelerators keep 8 extra bits per pixel around to store transparency information, which is
generally referred to as the alpha channel, and therefore take up 4 bytes, or 32 bits, of
storage per pixel. Rather than reimplement the display logic on 2D displays that don't
need alpha information, these 8 bits are usually just wasted.
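As a small illustration of the 5-6-5 layout described above, here's a C helper that packs 8-bit channel values into one 16-bit pixel (the function name is ours; the bit layout, with the extra bit given to green, is the one from the text):

#include <stdint.h>

/* Pack 8-bit r, g, b values into a 16-bit 5-6-5 pixel.
   Each channel is first truncated to its available resolution. */
uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) |   /* 5 bits of red in the top bits  */
                      ((g >> 2) << 5)  |   /* 6 bits of green in the middle  */
                       (b >> 3));          /* 5 bits of blue at the bottom   */
}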
In one of the early magazine articles of Mike Abrash [ABRASH 1992], he tells a story
about going from a 256-color palette to hardware that supported 256 levels for each RGB
color–16 million colors! What would we do with all those colors? He goes on to tell of a
story by Sheldon Linker at the eighth Annual Computer Graphics Show on how the folks
at the Jet Propulsion Lab back in the 1970s had a printer that could print over 50 million
distinct colors. As a test, they printed out words on paper where the background color was
only one color index from the word's color. To their surprise, it was easy to discern the words: the human eye is very sensitive to color gradations and edge detection. The JPL
team then did the same tests on color monitors and discovered that only about 16 million
colors could be distinguished. It seems that the eye is (not too surprisingly) better at
perceiving detail from reflected light (such as from a printed page) than from emissive
light (such as from a CRT). The moral is that the eye is a lot more perceptive than you might think. Twenty-four bits of color really is not that much range, particularly if we are performing multiple passes. Round-off error can and will show up if we aren't careful!
An example of the various gamuts is shown in the figure below. The CIE diagrams are
the traditional way of displaying perceived color space, which, we should note, is very
different from the linear color space used by today's graphics hardware. The colored area
is the gamut of the human eye. The gamuts of printers and monitors are subsets of this gamut.
Figure 1: The 1931 CIE diagram shows the gamut of the eye and the lesser gamut of
output devices.
First we need to be aware of how to treat colors. The calculation of the color of a
particular pixel depends, for example, on the surface's material properties that we've
programmed in, the color of the ambient light (lighting model), the color of any light
shining on the surface (perhaps modified by the angle of the light to the surface), the angle of the
surface to the viewpoint, the color of any fog or other scattering material that's between
the surface and the viewpoint, etc. No matter how you are calculating the color of the
pixel, it all comes down to color calculations, at least on current hardware, on rgb or
rgba vectors where the individual color elements are limited to the [0,1] range.
Operations on colors are done piecewise; that is, even though we represent colors as rgb vectors, they aren't really vectors in the mathematical sense. Vector multiplication (dot or cross product) is different from the operation we perform to multiply colors, which is done component by component. We'll use the symbol ⊗ to indicate such piecewise multiplication.
Colors are multiplied to describe the interaction between a surface and a light source. The
colors of each are multiplied together to estimate the reflected light color–this is the color
of the light that this particular light reflects off this surface. The problem with the
standard rgb model is just that we're simulating the entire visible spectrum by three colors
with a limited range.
Let's start with a simple example of using reflected colors. Later on, when we discuss lighting, we'll discover how to calculate the intensity of a light source; for now, just assume that we've calculated the intensity of a light, and it's a value called id. This intensity of our light is represented by, say, a nice lime green color. Let's say we shine this light on a nice magenta surface, given by cs. So, to calculate the color contribution of this surface from this particular light, we perform a piecewise multiplication of the color values: i = id ⊗ cs.
This gives us the dark plum color shown in the figure below. We should note that since the surface has no green component, no matter what value we used for the light color, there would never be any green component in the resulting calculation. Thus a pure
green light would provide no contribution to the intensity of a surface if that surface
contained a zero value for its green intensity. Thus it's possible to illuminate a surface
with a bright light and get little or no illumination from that light. We should also note
that using anything other than a full-bright white light [1,1,1] will involve multiplication
of values less than one, which means that using a single light source will only illuminate a
surface to a maximum intensity of its color value, never more. This same problem also
happens when a texture is modulated by a surface color. The color of the surface will be
multiplied by the colors in the texture. If the surface color is anything other than full
white, the texture will become darker. Multiple texture passes can make a surface very
dark very quickly.
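Here's a quick numeric sketch of that modulation in C; the exact rgb values for the lecture's lime green light and magenta surface aren't reproduced in the text, so the ones below are our own stand-ins:

/* Component-wise (piecewise) color multiplication: i = id (x) cs. */
typedef struct { float r, g, b; } Color;

Color modulate(Color a, Color b)
{
    Color out = { a.r * b.r, a.g * b.g, a.b * b.b };
    return out;
}

/* Illustrative values only: a lime-green light on a magenta surface. */
Color id = { 0.3f, 1.0f, 0.3f };   /* assumed light intensity */
Color cs = { 1.0f, 0.0f, 1.0f };   /* assumed surface color   */
/* modulate(id, cs) = { 0.3, 0.0, 0.3 }: a dark plum. The green component
   of the result is zero no matter how green the light is, since the
   surface's green intensity is zero. */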
Figure 2: Multiplying (modulating) color values results in a color equal to or less than
(darker) the original two.
Given that using a colored light in a scene makes the scene darker, how do we make the
scene brighter? There are a few ways of doing this. Although color multiplication will never result in a brighter color, this is offset a bit by the fact that we end up summing all the light contributions together, which, as we'll see in the next section, brings with it its own problems. But if we are just interested in increasing the brightness of one particular light or texture, one way is to use the API (library routines, e.g., OpenGL or DirectX) to artificially brighten the source; this is typically done with texture preprocessing. Or, we
can artificially brighten the source, be it a light or a texture, by adjusting the values after
we modulate them.
On the other hand, what if we have too much contribution to a color? While the colors of
lights are modulated by the color of the surface, each light source that illuminates the
surface is added to the final color. All these colors are summed up to calculate the final
color. Let's look at such a problem. We'll start with summing the reflected colors off a
surface from two lights. The first light is an orange color and has rgb values
[1.0,0.49,0.0], and the second light is a nice light green with rgb values [0.0,1.0,0.49].
Summing these two colors yields [1.0, 1.49, 0.49], which we can't display because of the values larger than one, as the figure below shows.
Figure 3: Adding colors can result in colors that are outside the displayable range.
So, what can be done when color values exceed the range that the hardware can display?
It turns out that there are three common approaches [HALL 1990].
Clamping the color values is implemented in hardware, so for shaders (the technology used in today's computer graphics for lighting and shading) it's the default, and it just means that we clamp any values outside the [0,1] range. Unfortunately, this results in a shift in the color.
The second most common approach is to scale the colors by the largest component. This
maintains the color but reduces the overall intensity of the color.
The third is to try to maintain the intensity of the color by shifting (or clipping) the color
toward pure bright white by reducing the colors that are too bright while increasing the
other colors and maintaining the overall intensity. Since we can't see what the actual color
for (figure above) is, let's see what color each of these methods yields (figure below).
Figure 4: The results of three strategies for dealing with the same oversaturated color.
As we can see, we get three very different results. In terms of perceived color, the scaled version is probably the closest, though it's darker than the actual color values. If we were less interested in the color and more in the saturation, then the clipped color is closer. Finally, the clamped value is what we get by default, and as you can see, the green component is biased down so that we lose a good sense of the "greenness" of the color we were trying to create.
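Here's a hedged C sketch of the three strategies. Clamping and scaling are standard; the exact shift-toward-white formula varies by implementation, so treat the third function as one plausible reading of [HALL 1990] rather than the definitive one:

typedef struct { float r, g, b; } Color;

static float maxf(float a, float b) { return a > b ? a : b; }
static float clampf(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

/* 1. Clamp each component to [0,1]; the hardware default. The hue shifts. */
Color clamp_color(Color c)
{
    Color out = { clampf(c.r), clampf(c.g), clampf(c.b) };
    return out;
}

/* 2. Scale by the largest component; hue is kept, overall intensity drops. */
Color scale_color(Color c)
{
    float m = maxf(c.r, maxf(c.g, c.b));
    Color out = c;
    if (m > 1.0f) { out.r /= m; out.g /= m; out.b /= m; }
    return out;
}

/* 3. One possible "shift toward white": spread the excess of any oversaturated
   component onto all channels, then clamp, roughly preserving intensity. */
Color shift_color(Color c)
{
    float excess = maxf(c.r - 1.0f, 0.0f) + maxf(c.g - 1.0f, 0.0f) + maxf(c.b - 1.0f, 0.0f);
    Color out = { c.r + excess * 0.5f, c.g + excess * 0.5f, c.b + excess * 0.5f };
    return clamp_color(out);
}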
29 - Mathematics of Lighting and Shading Part III
i_total = k_a i_a + (k_d i_d + k_s i_s)

where i_total is the intensity of light (as an rgb value) from the sum of the intensity of the global ambient value and the diffuse and specular components of the light from the light sources. This is called a local lighting model, since the only light on a vertex is from a
light source, not from other objects. That is, lights are lights, not objects. Objects that are
brightly lit don't illuminate or shadow any other objects. We've included the reflection coefficients, k, for each term for completeness, since we'll frequently see the lighting equation written this way. The reflection coefficients are in the [0, 1] range and are specified as part of the
material property. However, they are strictly empirical, and since they simply adjust the overall intensity of the material color, the material color values are usually adjusted so the color intensity varies rather than using a reflection coefficient; we'll therefore ignore them in our
actual color calculations. This is a very simple lighting equation and gives fairly good
results. However, it does fail to take into account any gross roughness or anything other
than perfect isotropic reflection. That is, the surface is treated as being perfectly smooth
and equally reflective in all directions. Thus this equation is really only good at modeling
the illumination of objects that don't have any "interesting" surface properties. By this we
mean anything other than a smooth surface (like fur or sand) or a surface that doesn't
really reflect light uniformly in all directions (like brushed metal, hair, or skin). However,
with liberal use of texture maps to add detail, this model has served pretty well and can
still be used for a majority of the lighting processing to create a realistic environment in
real time. Let's take a look at the individual parts of the traditional lighting pipeline.
Ambient Light
Ambient light is the light that comes from all directions—thus all surfaces are illuminated
equally regardless of orientation. However, this is a big hack in traditional lighting
calculations since "real" ambient light really comes from the light reflected from the
"environment." This would take a long time to calculate and would require ray tracing or
the use of radiosity methods, so traditionally, we just say that there's x amount of global
ambient light and leave it at that. This makes ambient light a little different from the other
lighting components since it doesn't depend on a light source. However, we typically do
want ambient light in our scene because having a certain amount of ambient light makes
the scene look natural. One large problem with the simplified lighting model is that there
is no illumination of an object with reflected light—the calculations required are
enormous for a scene of any complexity (every object can potentially reflect some light
and provide some illumination for every other object in a scene) and are too time
consuming to be considered for real-time graphics. So, like most things in computer
graphics, we take a look at the real world, decide it's too complicated, and fudge up something that kinda works. Thus the ambient light term is the "fudge factor" that
accounts for our simple lighting model's lack of an inter-object reflectance term. The
ambient light equation is given by
i_a = m_a ⊗ s_a
where i_a is the ambient light intensity, m_a is the ambient material color, and s_a is the light source's ambient color. Typically, the ambient light is some amount of white (i.e.,
equal rgb values) light, but we can achieve some nice effects using colored ambient light.
Though it's very useful in a scene, ambient light doesn't help differentiate objects in a
scene since objects rendered with the same value of ambient tend to blend since the
resulting color is the same. Figure 1 shows a scene with just ambient illumination. We
can see that it's difficult to make out details or depth information with just ambient light.
Ambient lighting is our friend. With it we make our scene seem more realistic than it is.
A world without ambient light is one filled with sharp edges, of bright objects surrounded
by sharp, dark, harsh shadows. A world with too much ambient light looks washed out
and dull. Since the number of actual light sources supported by the hardware FFP is limited (typically to eight simultaneous), we're better off applying the lights to add detail to the area our user is focused on and letting ambient light fill in the rest. Before anyone points out that the hardware limit on the number of lights has no meaning for shaders, where we do the lighting calculations ourselves, we'll note that eight lights were typically the maximum that the hardware engineers built into their hardware; it was a performance consideration. There's nothing stopping us (except buffer size) from writing a shader that calculates the effects of a hundred simultaneous lights, but we think we'll find that it runs much too slowly to be used to render our entire scene. The nice thing about shaders, though, is that we can.
Diffuse Light
Diffuse light is the light that is absorbed by a surface and is reflected in all directions. In
the traditional model, this is ideal diffuse reflection—good for rough surfaces where the
reflected intensity is constant across the surface and is independent of viewpoint but
depends only upon the direction of the light source to the surface. This means that
regardless of the direction from which we view an object with a stationary diffuse light
source on it, the brightness of any point on the surface will remain the same. Thus, unlike
ambient light, the intensity of diffuse light is directional and is a function of the angle of
the incoming light and the surface. This type of shading is called Lambertian shading
after Lambert's cosine law, which states that the intensity of the light reflected from an
ideal diffuse surface is proportional to the cosine of the direction of the light to the vertex
normal. Since we're dealing with vertices here and not surfaces, each vertex has a normal
associated with it. We might hear talk of per-vertex normals vs. per-polygon normals, the difference being that per-polygon shading has one normal shared by all vertices in a polygon, whereas per-vertex shading has a normal for each vertex. OpenGL has the ability to specify per-polygon normals, and Direct3D does not. Since vertex shaders can't share information between vertices (unless we explicitly copy the data ourselves), we'll focus on per-vertex lighting. Figure 2 shows the intensity of reflected light as a function of the angle between
the vertex normal and the light direction.
Figure 2: Diffuse light decreases as the angle between the light vector and the surface
normal increases.
The diffuse lighting equation is

i_d = m_d ⊗ s_d (n · l)

which is similar to the ambient light equation, except that the diffuse light term is now multiplied by the dot product of the unit normal of the vertex, n, and the unit direction vector from the vertex to the light, l (not the direction from the light). Note that the m_d value is a color vector, so there are rgb or rgba values that will get modulated.
Since n · l = cos(θ) for unit vectors, where theta is the angle between the vectors, when the angle between them is zero, cos(θ) is 1 and the diffuse light is at its maximum. When the angle is 90°, cos(θ) is zero and the diffuse light is zero. One calculational advantage is that when the cos(θ) value is negative, the light isn't illuminating the vertex at all. However, since we (probably!) don't want the light illuminating sides that it physically can't shine on, we want to clamp the contribution of the diffuse light to contribute only when cos(θ) is positive. Thus the equation in practice looks more like

i_d = m_d ⊗ s_d max(n · l, 0)

where we've clamped the diffuse value to only positive values. Figure 3 was rendered
with just diffuse lighting. Notice how we can tell a lot more detail about the objects and
pick up distance cues from the shading.
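A minimal C sketch of this per-vertex diffuse term, assuming n and l are unit length (the type and function names are our own, not the lecture's):

typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Color;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* i_d = m_d (x) s_d * max(n . l, 0), with n and l assumed unit length. */
Color diffuse_term(Color m_d, Color s_d, Vec3 n, Vec3 l)
{
    float k = dot3(n, l);
    if (k < 0.0f) k = 0.0f;   /* clamp: the light can't illuminate back sides */
    {
        Color out = { m_d.r * s_d.r * k, m_d.g * s_d.g * k, m_d.b * s_d.b * k };
        return out;
    }
}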
The problem with just diffuse lighting is that it's independent of the viewer's direction.
That is, it's strictly a function of the surface normal and the light direction. Thus as we
change the viewing angle to a vertex, the vertex's diffuse light value never changes. You
have to rotate the object (change the normal direction) or move the light (change the light
direction) to get a change in the diffuse lighting of the object. However, when we
combine the ambient and diffuse, as in Figure 4, we can see that the two types of light
give a much more realistic representation than either does alone. This combination of
ambient and diffuse is used for a surprisingly large number of items in rendered scenes
since when combined with texture maps to give detail to a surface we get a very
convincing shading effect.
Figure 4: When diffuse and ambient terms are combined, you get more detail and a more
natural-looking scene. The final color is the combination of the ambient and diffuse
colors.
Specular Light
Ambient light is the light that comes from the environment (i.e., it's directionless); diffuse
light is the light from a light source that is reflected by a surface evenly in all directions
(i.e., it's independent of the viewer's position). Specular light is the light from a light
source that is reflected by a surface and is reflected in such a manner that it's both a
function of the light's vector and the viewer's direction. While ambient light gives the
object an illuminated matte surface, specular light is what gives the highlights to an
object. These highlights are greatest when the viewer is looking directly along the
reflection angle from the surface. This is illustrated in Figure 5.
Most discussions of lighting (including this one) start with Phong's lighting equation
(which is not the same as Phong's shading equation). In order to start discussing specular
lighting, let's look at a diagram of the various vectors that are used in a lighting equation.
We have a light source, some point the light is shining on, and a viewpoint. The light
direction (from the point to the light) is vector l, the reflection vector of the light vector
(as if the surface were a mirror) is r, the direction to the viewpoint from the point is
vector v. The point's normal is n.
Warnock [WARNOCK 1969] and Romney [ROMNEY 1969] were the first to try to
simulate highlights using a cos^n(θ) term. But it wasn't until Phong Bui-Tong [BUI 1998] reformulated this into a more general model, formalizing the power value as a measure of surface roughness, that we arrived at the terms used today for specular highlights.
Phong's equation for specular lighting is

i_s = m ⊗ s (r · v)^ms

where m and s are the specular colors of the material and the light, and ms is the power term discussed below.
Figure 6: The relationship between the normal n, the light vector l, the view direction v, and the reflection vector r.
It basically says that the more the view direction, v, is aligned with the reflection
direction, r, the brighter the specular light will be. The big difference is the introduction
of the ms term, which is a power term that attempts to approximate the distribution of
specular light reflection. The ms term is typically called the "shininess" value. The larger
the ms value, the "tighter" (but not brighter) the specular highlights will be. This can be seen in Figure 7, which shows the values of (r · v)^m for values of m ranging from 1 to 128. As we can see, the specular highlights get narrower for higher values, but they don't get any brighter.
Figure 7: Phong's specular term for various values of the "shininess" term. Note that the
values never get above 1.
Now, as we can see, this requires some calculation, since we can't know r beforehand: it's the light vector l reflected around the point's normal. To calculate r we can use the following equation:

r = 2(n · l)n - l

The general form divides by (n · n); if l and n are normalized, the resulting r is normalized and the equation simplifies to the one above.
And just as we did for diffuse lighting, if the dot product is negative, then the term is
ignored.
Figure 8 shows the scene with just specular lighting. As we can see, we get an impression
of a very shiny surface.
When we add the ambient, diffuse, and specular terms together, we get Figure 8A. The
three terms all act in concert to give us a fairly good imitation of a nice smooth surface
that can have a varying degree of shininess to it. We may have noticed that computing the
reflection vector took a fair amount of effort. In the early days of computer graphics,
there was a concerted effort to reduce anything that took a lot of computation, and the
reflection vector of Phong's equation was one such item.
Now it's computationally expensive to calculate specular lighting using Phong's equation
since computing the reflection vector is expensive. Blinn [BLINN 1977] suggested,
instead of using the reflection and view vectors, that we create a "half" vector that lies
between the light and view vectors. This is shown as the h vector in Figure 9. Just as
Phong's equation maximizes when the reflection vector is coincident with the view vector
(thus the viewer is looking directly along the reflection vector), so does Blinn's. When the
half vector is coincident with the normal vector, then the angle between the view vector
and the normal vector is the same as between the light vector and the normal vector.
Blinn's version of Phong's equation is

i_s = m ⊗ s (n · h)^ms

Figure 9: The half-angle vector is an averaging of the light and view vectors.
where the half vector h is defined as

h = (l + v) / |l + v|

The advantage is that no reflection vector is needed; instead, we can use values that are
readily available, namely, the view and light vectors. Note that both OpenGL and the
DirectX FFP use Blinn's equation for specular light. Besides a speed advantage, there are
some other effects to note between Phong's specular equation and Blinn's. If we multiply
Blinn's exponent by 4, we approximate the results of Phong's equation. Thus if there's an
upper limit on the value of the exponent, Phong's equation can produce sharper
highlights. For l • v angles greater than 45° (i.e., when the light is behind an object and
we're looking at an edge), the highlights are longer along the edge direction for Phong's
equation. Blinn's equation produces results closer to those seen in nature.
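To make the two specular flavors concrete, here's a hedged C sketch of both; unit-length input vectors are assumed, and the names are our own illustration rather than the lecture's code:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize3(Vec3 v)
{
    float len = sqrtf(dot3(v, v));
    Vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* Phong: reflect l about n, then raise (r . v) to the shininess power. */
float phong_specular(Vec3 n, Vec3 l, Vec3 v, float shininess)
{
    float nl = dot3(n, l);
    Vec3 r = { 2.0f*nl*n.x - l.x, 2.0f*nl*n.y - l.y, 2.0f*nl*n.z - l.z };
    float rv = dot3(r, v);
    return rv > 0.0f ? powf(rv, shininess) : 0.0f;   /* negative terms are ignored */
}

/* Blinn-Phong: use the half vector h = (l + v)/|l + v| instead of r;
   roughly matches Phong when given about 4x the exponent. */
float blinn_specular(Vec3 n, Vec3 l, Vec3 v, float shininess)
{
    Vec3 h = { l.x + v.x, l.y + v.y, l.z + v.z };
    float nh = dot3(n, normalize3(h));
    return nh > 0.0f ? powf(nh, shininess) : 0.0f;
}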
For an in-depth discussion of the differences between the two equations, there's an
excellent discussion in [FISHER 1994]. Figure 10 shows the difference between Phong lighting and Blinn-Phong lighting.
Figure 10: Blinn-Phong specular on the left, Phong specular on the right.
Note: Some of the material for the preparation of this lecture is taken from the book Real-Time Shader Programming by Ron Fosner.
30 - Mathematics of Lighting and Shading Part IV
Our final scene with ambient, diffuse, and (Blinn's) specular light contributions (with one
white light above and to the left of the viewer) looks like Figure 1.
It may be surprising to discover that there's more than one way to calculate the shading of
an object, but that's because the model is empirical, and there's no correct way, just
different ways that all have tradeoffs. Until now though, the only lighting equation we've
been able to use has been the one we just formulated. Most of the interesting work in
computer graphics is tweaking that equation, or in some cases, throwing it out altogether
and coming up with something new.
The next sections will discuss some refinements and alternative ways of calculating the
various coefficients of the lighting equation.
Light Attenuation
Light in the real world loses its intensity as the inverse square of the distance from the
light source to the surface being illuminated. However, when put into practice, this
seemed to drop off the light intensity in too abrupt a manner and then not to vary too
much after the light was far away. An empirical model was developed that seems to give satisfactory results. This is the attenuation model used in OpenGL and DirectX. The f_atten factor is the attenuation factor, and the distance d between the light and the vertex is always positive. The attenuation factor is calculated by the following equation:

f_atten = 1 / (k_c + k_l d + k_q d^2)
where the k_c, k_l, and k_q parameters are the constant, linear, and quadratic attenuation constants, respectively. To get the "real" inverse-square attenuation factor, we can set k_q to one and the others to zero. The attenuation factor is multiplied by the light's diffuse and specular values; typically, each light will have its own set of these parameters. The lighting equation with the attenuation factor looks like this:

i_total = k_a i_a + f_atten (k_d i_d + k_s i_s)
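A one-function C sketch of this attenuation model (the names are ours):

/* OpenGL-style distance attenuation: 1 / (kc + kl*d + kq*d*d).
   With kc = kl = 0 and kq = 1 this reduces to physical inverse-square falloff. */
float attenuation(float d, float kc, float kl, float kq)
{
    float f = kc + kl * d + kq * d * d;
    return f > 0.0f ? 1.0f / f : 1.0f;   /* guard against a zero denominator */
}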
Figure 2 shows a sample of what attenuation looks like. This image is the same as the one
shown in Figure 1, but with light attenuation added.
Figure 2: A scene with light attenuation. The white sphere is the light position.
If we write the specular term as S = (r · v)^ms or S = (n · h)^ms, where S is either the Phong or Blinn-Phong flavor of the specular lighting equation, then Schlick's simplification is to replace that part of the specular equation with

S = t / (ms - ms t + t), where t is the dot product (r · v or n · h)

which eliminates the need for an exponential term. At first glance, a plot of Schlick's function looks very similar to the exponential equation (Figure 3).
Figure 3: Schlick's term for specular looks very much like the more expensive Phong
term.
If we plot both equations in the same graph (Figure 4), we can see some differences and
evaluate just how well Schlick's simplification works. The blue values are Schlick's, and
the red are the exponential plot. As the view and light angles get closer (i.e., get closer to
zero on the x axis), we can see that the values of the curves are quite close. (For a value of
zero, they overlap.) As the angles approach a grazing angle, we can see that the
approximation gets worse. This would mean that when there is little influence from a
specular light, Schlick's equation would be slightly less sharp for the highlight.
We might notice the green line in Figure 4. Unlike the limit of a value of 128 for the
exponential imposed in both OpenGL and DirectX FFP, we can easily make our values in
the approximation any value we want. The green line is a value of 1024 in Schlick's
equation. We may be thinking that we can make a very sharp specular highlight using
Schlick's approximation with very large values—sharper than is possible using the
exponential term. Unfortunately, we can't since we really need impractically large values
(say, around 100 million) to boost it significantly over the exponential value for 128. But
that's just the kind of thinking that's going to get our creative juices flowing when writing
our own shaders! If the traditional way doesn't work, figure out something that will.
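For completeness, here's Schlick's approximation next to the exponential term it replaces, as a small C sketch (function names ours):

#include <math.h>

/* Schlick's rational approximation of the specular power function:
   t^n  ~=  t / (n - n*t + t), for t = cos(angle) in [0,1]. */
float schlick_specular(float t, float n)
{
    return t / (n - n * t + t);
}

/* For comparison, the exact exponential term it approximates. */
float phong_power(float t, float n)
{
    return powf(t, n);
}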
Figure 5: The full moon is a good example of something that doesn't show Lambertian
diffuse shading.
Figure 6: The same dirt field showing wildly differing reflection properties.
Notice how the backscattering image shows a near uniform diffuse illumination, whereas
the forward scattering image shows a uniform dull diffuse illumination. Also note that we
can see specular highlights and more color variation because of the shadows due to the
rough surface, whereas the backscattered image washes out the detail. In an effort to better model rough surfaces, Oren and Nayar [OREN 1992] came up with a generalized version of a Lambertian diffuse shading model that tries to account for the roughness of the surface. They applied the Torrance-Sparrow model for rough surfaces with isotropic roughness and provided parameters to account for the various surface structures found in the Torrance-Sparrow model. By comparing their model with actual data, they simplified their model to the terms that had the most significant impact. The Oren-Nayar diffuse shading model looks like this:

i = (ρ/π) E_0 cos(θ_i) [A + B max(0, cos(φ_r - φ_i)) sin(α) tan(β)]

where

A = 1 - 0.5 σ² / (σ² + 0.33)
B = 0.45 σ² / (σ² + 0.09)
Now this may look daunting, but it can be simplified to something we can appreciate if we replace the original notation with the notation we've already been using. ρ/π is a surface reflectivity property, which we can replace with our surface diffuse color. E_0 is a light input energy term, which we can replace with our light diffuse color. And the θ_i term is just our familiar angle between the vertex normal and the light direction. Making these exchanges gives us

i_d = m_d ⊗ s_d max(n · l, 0) [A + B max(0, cos(φ_r - φ_i)) sin(α) tan(β)]

which looks a lot more like the equations we've used. There are still some parameters to explain.
σ is the surface roughness parameter: the standard deviation, in radians, of the angular distribution of the microfacets in the surface roughness model. The larger the value, the rougher the surface.
θ_r is the angle between the vertex normal and the view direction.
φ_r - φ_i is the circular angle (about the vertex normal) between the light vector and the view vector.
α is max(θ_i, θ_r).
β is min(θ_i, θ_r).
Note that if the roughness value is zero, the model is the same as the Lambertian diffuse
model. Oren and Nayar also note that we can replace the value 0.33 in coefficient A with
0.57 to better account for surface inter-reflection.
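Putting the pieces together, here's a hedged C sketch of the simplified Oren-Nayar factor; the angles are assumed to be precomputed from the vectors, in radians, and the names are our own:

#include <math.h>

/* Simplified Oren-Nayar diffuse factor.
   theta_i: angle between normal and light; theta_r: angle between normal and view;
   dphi: circular angle between light and view about the normal; sigma: roughness. */
float oren_nayar(float theta_i, float theta_r, float dphi, float sigma)
{
    float s2 = sigma * sigma;
    float A = 1.0f - 0.5f * s2 / (s2 + 0.33f);   /* use 0.57 to add inter-reflection */
    float B = 0.45f * s2 / (s2 + 0.09f);
    float alpha = theta_i > theta_r ? theta_i : theta_r;   /* max of the two angles */
    float beta  = theta_i < theta_r ? theta_i : theta_r;   /* min of the two angles */
    float c = cosf(dphi);
    if (c < 0.0f) c = 0.0f;                      /* max(0, cos(phi_r - phi_i)) */
    return cosf(theta_i) * (A + B * c * sinf(alpha) * tanf(beta));
}

Note that with sigma = 0 this gives A = 1 and B = 0, so the factor collapses to cos(theta_i), the plain Lambertian model, just as the text says.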
31 - Mathematics of Lighting and Shading Part V
In order to get a more realistic representation of lighting, we need to move away from the
simplistic models that are found hard coded in most graphics pipelines and move to
something that is based more in a physical representation of light as a wave with
properties of its own that can interact with its environment. To do this, we'll need to
understand how light passes through a medium and how hitting the boundary layer at the
intersection of two media can affect light's properties. In Figure 1, there's an incident light
hitting a surface. At the boundary of the two media (in this case, air and glass), there are
two resulting rays of light. The reflected ray is the one that we've already discussed to
some extent, and the other ray is the refracted or transmitted ray.
In addition to examining the interaction of light with the surface boundary, we need a
better description of real surface geometries. Until now, we've been treating our surfaces
as perfectly smooth and uniform. Unfortunately, this prevents us from getting some
interesting effects. We'll go over trying to model a real surface later, but first let's look at
the physics of light interacting at a material boundary.
Reflection
Reflection of a light wave is the change in direction of the light ray when it bounces off
the boundary between two media. The reflected light wave turns out to be a simple case, since light is reflected at the same angle as the incident wave (when the surface is smooth and uniform, as we'll assume for now). Thus, for a light wave reflecting off a perfectly smooth surface,

θ_reflected = θ_incident
Until now, we've treated all of our specular lighting calculations as essentially reflection
off a perfect surface, a surface that doesn't interact with the light in any manner other than
reflecting light in proportion to the color of the surface itself. Using a lighting model based upon the Blinn-Phong model means that we'll always get a uniform specular highlight based upon the color of the reflecting light and material, which means that all reflections based on this model will be reminiscent of plastic. In order to get a more
interesting and realistic lighting model, we need to add in some nonlinear elements to our
calculations. First, let's examine what occurs when light is reflected off a surface. For a perfect reflecting surface, the angle of the incoming light (the angle of incidence) is equal to that of the reflected light, and Phong's equation just blurs out the highlight a bit in a symmetrical fashion. We'll improve on this once we start dealing with non-uniform, rough surfaces in a manner a bit more realistic than Phong's.
Refraction
Refraction happens when a light wave goes from one medium into another. Because of the difference in the speed of light in the two media, light bends when it crosses the boundary. Snell's law gives the change in angles:

n_1 sin(θ_1) = n_2 sin(θ_2)
Snell's law states that when light refracts through a surface, the refracted angle is shifted by a function of the ratio of the two materials' indices of refraction. The index of refraction of vacuum is 1, and all other materials' indices of refraction are greater than 1. What this means is that in order to realistically model refraction, we need to know the
indices of refraction of the two materials that the light is traveling through. Let's look at
an example (Figure 2) to see what this really means. Let's take a simple case of a ray of
light traveling through the air (n air = 1) and intersecting a glass surface (n Glass = 1.5). If
the light ray hits the glass surface at 45°, at what angle does the refracted ray leave the
interface?
Figure 2: The refracted ray's angle is less than the incoming ray's.
The angle of incidence is the angle between the incoming vector and the surface normal. Rearranging Snell's law, we can solve for the refracted angle: θ_2 = arcsin((n_1/n_2) sin θ_1) = arcsin(sin(45°)/1.5) ≈ 28.1°.
This is a fairly significant change in the angle! If we change things around so that we are following a light ray emerging from the glass into the air, we can run into another phenomenon. Since the index of refraction is just a measure of the change in speed that
light travels in a material, we can observe from Snell's law (and the fact that the index of
refraction in a vacuum is 1) that light bends toward the normal when it slows down (i.e.,
when the material it's intersecting with has a higher index of refraction). Consequently,
when we intersect a medium that has a lower index of refraction (e.g., going from glass to
air), then the angle will increase. Ah, we must be thinking, we're approaching a
singularity here since we can then easily generate numbers that we can't take the inverse
sine of! If we use Snell's law for light going from water to air, and plug in 90° for the
refracted angle, we get 41.8° for the incident angle. This is called the critical angle at
which we observe the phenomenon of total internal reflection. At any angle greater than
this, light will not pass though a boundary but will be reflected internally. One place that
we get interesting visual properties is in the diamond—air interface. The refractive index
of a diamond is fairly high, 2.24, which means that it's got a very low critical angle, just
24.4°. This means that a good portion of the light entering a diamond will bounce around
the inside of the diamond hitting a number of air—diamond boundaries, and as long as
the angle is 24.4° or greater, it will keep reflecting internally. This is why diamonds are
cut to be relatively flatish on the top but with many faceted sides, so that light entering in
one spot will bounce around and exit at another, giving rise to the sparkle normally
associated with diamonds.
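Here's a small C sketch of Snell's law with the total internal reflection check (our own helper; angles are measured from the normal, in radians):

#include <math.h>

/* Refract an incident angle across an n1 -> n2 boundary.
   Returns 1 and writes the refracted angle, or 0 on total internal reflection. */
int snell_refract(float theta_i, float n1, float n2, float* theta_t)
{
    float s = (n1 / n2) * sinf(theta_i);
    if (s > 1.0f)            /* past the critical angle: no transmitted ray */
        return 0;
    *theta_t = asinf(s);
    return 1;
}

With n1 = 1.5 and n2 = 1.0, the critical angle asinf(1.0f/1.5f) works out to the 41.8° quoted above.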
Another place where a small change in the indices of refraction occurs is on a road heated
by the sun when viewed from far away (hence a glancing incident angle). The hot air at
the road's surface has a slightly smaller index of refraction than the denser, cooler air
above it. This is why we get the effect of a road looking as though it were covered with water and reflecting the image above it: the light waves are actually reflected off the warm-air/cold-air interface.
What makes this really challenging to model is that the index of refraction for most
materials is a function of the wavelength of the light. This means that not only is there a
shift in the angle of refraction, but that the shift is different for differing wavelengths of light. Figures 4 and 5 show the index of refraction for fused quartz and sapphire plotted against the wavelength. We can see the general trend that shorter-wavelength light (bluish) tends to bend more than the longer (reddish) wavelengths.
This is the phenomenon that's responsible for the spectrum that can be seen when white
light is passed through a prism (Figure 6). It's refraction that will break apart a light
source into its component colors, not reflection.
This is one area where our simplistic model of light breaks down since we're not
computing an entire spectrum of light waves, but we're limited to three primary colors.
For reference, the rgb values can be assigned to representative wavelengths; roughly, the CIE primaries: red ≈ 700 nm, green ≈ 546 nm, blue ≈ 436 nm.
There's a lot more to color science than just determining wavelengths, but that's beyond
our scope.
While the spectrum-spreading effect of refraction is interesting in itself, the rgb nature of computer color representation precludes performing this spreading directly: we can't break up a color value into multiple color values. However, with some work, we can
compute the shade of the color for a particular angle of refraction and then use that as the
material color to influence the refracted color.
where the actual temperature we want is t, and 25 is the temperature (both in °C) at which the index we have, η_25, was measured.
32 - Introduction to OpenGL
Where Applicable:
OpenGL is built for compatibility across hardware and operating systems. This
architecture makes it easy to port OpenGL programs from one system to another. While
each operating system has unique requirements, the OpenGL code in many programs can
be used as is.
Developer Audience:
Designed for use by C/C++ programmers
Run-time Requirements:
OpenGL can run on Linux and all versions of 32-bit Microsoft Windows.
Most Widely Adopted Graphics Standard
Stable
OpenGL implementations have been available for more than seven years on a
wide variety of platforms. Additions to the specification are well controlled, and
proposed updates are announced in time for developers to adopt changes.
Evolving
Because of its thorough and forward-looking design, OpenGL allows new
hardware innovations to be accessible through the API via the OpenGL extension
mechanism. In this way, innovations appear in the API in a timely fashion, letting
application developers and hardware vendors incorporate new features into their
normal product release cycles.
Scalable
Easy to use
OpenGL is well structured with an intuitive design and logical commands.
Efficient
OpenGL routines typically result in applications with fewer lines of code
than those that make up programs generated using other graphics libraries or
packages. In addition, OpenGL drivers encapsulate information about the
underlying hardware, freeing the application developer from having to design for
specific hardware features.
Well-documented
Numerous books have been published about OpenGL, and a great deal of sample
code is readily available, making information about OpenGL inexpensive and
easy to obtain.
Simplifies Software Development, Speeds Time-to-Market
All elements of the OpenGL state—even the contents of the texture memory and the
frame buffer—can be obtained by an OpenGL application. OpenGL also supports
visualization applications with 2D images treated as types of primitives that can be
manipulated just like 3D geometric objects. As shown in the OpenGL visualization
programming pipeline diagram above, images and vertices defining geometric primitives
are passed through the OpenGL pipeline to the frame buffer.
Available Everywhere
Supported on all UNIX® workstations, and shipped standard with every Windows
95/98/2000/NT and MacOS PC, no other graphics API operates on a wider range of
hardware platforms and software environments. OpenGL runs on every major operating
system including Mac OS, OS/2, UNIX, Windows 95/98, Windows 2000, Windows NT,
Linux, OPENStep, and BeOS; it also works with every major windowing system,
including Win32, MacOS, Presentation Manager, and X-Window System. OpenGL is
callable from Ada, C, C++, Fortran, Python, Perl and Java and offers complete
independence from network protocols and topologies.
Architected for Flexibility and Differentiation!
Although the OpenGL specification defines a particular graphics processing pipeline,
platform vendors have the freedom to tailor a particular OpenGL implementation to meet
unique system cost and performance objectives. Individual calls can be executed on
dedicated hardware, run as software routines on the standard system CPU, or
implemented as a combination of both dedicated hardware and software routines. This
implementation flexibility means that OpenGL hardware acceleration can range from
simple rendering to full geometry and is widely available on everything from low-cost
PCs to high-end workstations and supercomputers. Application developers are assured
consistent display results regardless of the platform implementation of the OpenGL
environment.
Using the OpenGL extension mechanism, hardware developers can differentiate their
products by developing extensions that allow software developers to access additional
performance and technological innovations.
Main purpose of OpenGL
As a software interface for graphics hardware, the main purpose of OpenGL is to render
two- and three-dimensional objects into a frame buffer. These objects are described as
sequences of vertices (that define geometric objects) or pixels (that define images).
OpenGL performs several processes on this data to convert it to pixels to form the final
desired image in the frame buffer.
Primitives and Commands discusses points, line segments, and polygons as the
basic units of drawing; and the processing of commands.
Primitives are defined by a group of one or more vertices. A vertex defines a point, an
endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of
vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with
a vertex, and each vertex and its associated data are processed independently, in order,
and in the same way. The only exceptions to this rule are cases in which the group of
vertices must be clipped so that a particular primitive fits within a specified region. In this
case, vertex data may be modified and new vertices created. The type of clipping depends
on which primitive the group of vertices represents.
Commands are always processed in the order in which they are received, although there
may be an indeterminate delay before a command takes effect. This means that each
primitive is drawn completely before any subsequent command takes effect. It also means
that state-querying commands return data that is consistent with complete execution of all
previously issued OpenGL commands.
Determines which portions of the frame buffer OpenGL may access at any given
time.
Therefore, there are no OpenGL commands to configure the frame buffer or initialize
OpenGL. Frame buffer configuration is done outside of OpenGL in conjunction with the
window system; OpenGL initialization takes place when the window system allocates a
window for OpenGL rendering.
Basic OpenGL Operation
The following diagram illustrates how OpenGL processes data. As shown, commands
enter from the left and proceed through a processing pipeline. Some commands specify
geometric objects to be drawn, and others control how the objects are handled during
various processing stages.
Per-fragment operations: these are the final operations performed on the data
before it's stored as pixels in the frame buffer.
Per-fragment operations include conditional updates to the frame buffer based on
incoming and previously stored z values (for z buffering) and blending of
incoming pixel colors with stored colors, as well as masking and other logical
operations on pixel values.
Data can be input in the form of pixels rather than vertices. Data in the form of pixels,
such as might describe an image for use in texture mapping, skips the first stage of
processing described above and instead is processed as pixels, in the pixel operations
stage. Following pixel operations, the pixel data is either:
Stored in texture memory, for use in the rasterization stage.
Rasterized, with the resulting fragments merged into the frame buffer just as if
they were generated from geometric data.
OpenGL Processing Pipeline
Many OpenGL functions are used specifically for drawing objects such as points, lines,
polygons, and bitmaps. Some functions control the way that some of this drawing occurs
(such as those that enable antialiasing or texturing). Other functions are specifically
concerned with frame buffer manipulation. The topics in this section describe how all of
the OpenGL functions work together to create the OpenGL processing pipeline. This
section also takes a closer look at the stages in which data is actually processed, and ties
these stages to OpenGL functions.
The following diagram details the OpenGL processing pipeline. For most of the pipeline,
you can see three vertical arrows between the major stages. These arrows represent
vertices and the two primary types of data that can be associated with vertices: color
values and texture coordinates. Also note that vertices are assembled into primitives, then
into fragments, and finally into pixels in the framebuffer.
The OpenGL Visualization Programming Pipeline
33 - OpenGL Programming I
For writing a program using OpenGL on any of the operating systems, we can use the OpenGL Utility Toolkit, "glut". "glut" is used to initialize OpenGL on any platform, e.g., Microsoft Windows or Linux, because it is platform independent. "glut" can create a window, get keyboard input, and run an event handler or message loop in our graphics application.
All those functions that start with the prefix "gl" are core OpenGL functions, and those that start with "glu" or "glut" are OpenGL utility library functions.
Let's write a program that uses "glut" and then uses OpenGL functions to create graphics.
#include <GL/glut.h>
int main(int argc, char** argv)
{
    glutInit(&argc, argv);        /* initialize GLUT before creating a window */
    glutCreateWindow("first graphics window");
    glutMainLoop();               /* the window is not shown until the main loop runs */
    return 0;
}
glutCreateWindow
glutCreateWindow creates a top-level window.
Usage
int glutCreateWindow(char *name);
name:
ASCII character string for use as window name.
Implicitly, the current window is set to the newly created window. Each created window
has a unique associated OpenGL context. State changes to a window's associated
OpenGL context can be done immediately after the window is created.
The display state of a window is initially set for the window to be shown. But the window's display state is not actually acted upon until glutMainLoop is entered. This means that until glutMainLoop is called, rendering to a created window is ineffective, because the window cannot yet be displayed.
The value returned is a unique small integer identifier for the window. The range of
allocated identifiers starts at one. This window identifier can be used when calling
glutSetWindow.
X Implementation Notes
The proper X Inter-Client Communication Conventions Manual (ICCCM) top-level
properties are established. The WM_COMMAND property that lists the command line
used to invoke the GLUT program is only established for the first window created.
This is the simple program that we have written so far. Now we will use more features of the "glut" library.
#include <GL/glut.h>
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    // Set up the OpenGL rendering context: RGB color, double buffering, a depth buffer.
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("first graphics window");
    glutMainLoop();
    return 0;
}
In the above program we have used “glut” functions. Let us discuss them in detail.
glutInitDisplayMode
Usage
void glutInitDisplayMode(unsigned int mode);
mode
Display mode, normally the bitwise OR-ing of GLUT display mode bit masks. See
values below:
GLUT_RGBA
Bit mask to select an RGBA mode window. This is the default if neither
GLUT_RGBA nor GLUT_INDEX is specified.
GLUT_RGB
An alias for GLUT_RGBA.
GLUT_INDEX
Bit mask to select a color index mode window. This overrides GLUT_RGBA if it
is also specified.
GLUT_SINGLE
Bit mask to select a single buffered window. This is the default if neither
GLUT_DOUBLE nor GLUT_SINGLE is specified.
GLUT_DOUBLE
Bit mask to select a double buffered window. This overrides GLUT_SINGLE if it
is also specified.
GLUT_ACCUM
Bit mask to select a window with an accumulation buffer.
GLUT_ALPHA
Bit mask to select a window with an alpha component to the color buffer(s).
GLUT_DEPTH
Bit mask to select a window with a depth buffer.
GLUT_STENCIL
Bit mask to select a window with a stencil buffer.
GLUT_MULTISAMPLE
Bit mask to select a window with multisampling support. If multisampling is not
available, a non-multisampling window will automatically be chosen. Note: both
the OpenGL client-side and server-side implementations must support the
GLX_SAMPLE_SGIS extension for multisampling to be available.
GLUT_STEREO
Bit mask to select a stereo window.
GLUT_LUMINANCE
Bit mask to select a window with a "luminance" color model. This model
provides the functionality of OpenGL's RGBA color model, but the green and
blue components are not maintained in the frame buffer. Instead each pixel's red
component is converted to an index between zero and
glutGet(GLUT_WINDOW_COLORMAP_SIZE)-1 and looked up in a per-
window color map to determine the color of pixels within the window. The initial
colormap of GLUT_LUMINANCE windows is initialized to be a linear gray
ramp, but can be modified with GLUT's colormap routines.
Description
The initial display mode is used when creating top-level windows, subwindows, and
overlays to determine the OpenGL display mode for the to-be-created window or overlay.
Note that GLUT_RGBA selects the RGBA color model, but it does not request any bits
of alpha (sometimes called an alpha buffer or destination alpha) be allocated. To request
alpha, specify GLUT_ALPHA. The same applies to GLUT_LUMINANCE.
glutReshapeWindow
glutReshapeWindow requests a change to the size of the current window.
Usage
void glutReshapeWindow(int width, int height);
width
New width of window in pixels.
height
New height of window in pixels.
Description
glutReshapeWindow requests a change in the size of the current window. The width and
height parameters are size extents in pixels. The width and height must be positive values.
glutKeyboardFunc
glutKeyboardFunc sets the keyboard callback for the current window.
Usage
void glutKeyboardFunc(void (*func)(unsigned char key,
int x, int y));
func
The new keyboard callback function.
Description
glutKeyboardFunc sets the keyboard callback for the current window. When a user types
into the window, each key press generating an ASCII character will generate a keyboard
callback. The key callback parameter is the generated ASCII character. The state of
modifier keys such as Shift cannot be determined directly; their only effect will be on the
returned ASCII data. The x and y callback parameters indicate the mouse location in
window relative coordinates when the key was pressed. When a new window is created,
no keyboard callback is initially registered, and ASCII key strokes in the window are
ignored. Passing NULL to glutKeyboardFunc disables the generation of keyboard
callbacks.
During a keyboard callback, glutGetModifiers may be called to determine the state of
modifier keys when the keystroke generating the callback occurred.
We can also see glutSpecialFunc for a means to detect non-ASCII key strokes.
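As a small illustration (a sketch, not from the notes), a keyboard callback that quits when the Escape key is pressed could look like this:

#include <stdlib.h>   /* for exit() */

void keyboard(unsigned char key, int x, int y)
{
    if (key == 27)    /* 27 is the ASCII code of the Escape key */
        exit(0);
    /* x and y give the mouse position, in window coordinates,
       at the moment the key was pressed */
}

/* registered once for the current window: */
glutKeyboardFunc(keyboard);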
glutDisplayFunc
glutDisplayFunc sets the display callback for the current window.
Usage
void glutDisplayFunc(void (*func)(void));
func
The new display callback function.
Description
glutDisplayFunc sets the display callback for the current window. When GLUT
determines that the normal plane for the window needs to be redisplayed, the display
callback for the window is called. Before the callback, the current window is set to the
window needing to be redisplayed and (if no overlay display callback is registered) the
layer in use is set to the normal plane. The display callback is called with no parameters.
The entire normal plane region should be redisplayed in response to the callback (this
includes ancillary buffers if your program depends on their state).
GLUT determines when the display callback should be triggered based on the window's
redisplay state. The redisplay state for a window can be either set explicitly by calling
glutPostRedisplay or implicitly as the result of window damage reported by the window
system. Multiple posted redisplays for a window are coalesced by GLUT to minimize the
number of display callbacks called.
When an overlay is established for a window, but there is no overlay display callback
registered, the display callback is used for redisplaying both the overlay and normal plane
(that is, it will be called if either the redisplay state or overlay redisplay state is set). In
this case, the layer in use is not implicitly changed on entry to the display callback.
See glutOverlayDisplayFunc to understand how distinct callbacks for the overlay and
normal plane of a window may be established.
When a window is created, no display callback exists for the window. It is the
responsibility of the programmer to install a display callback for the window before the
window is shown. A display callback must be registered for any window that is shown. If
a window becomes displayed without a display callback being registered, a fatal error
occurs. Passing NULL to glutDisplayFunc is illegal as of GLUT 3.0; there is no way to
"deregister" a display callback (though another callback routine can always be
registered).
Upon return from the display callback, the normal damaged state of the window (returned by calling glutLayerGet(GLUT_NORMAL_DAMAGED)) is cleared. If there is no overlay display callback registered, the overlay damaged state of the window (returned by calling glutLayerGet(GLUT_OVERLAY_DAMAGED)) is also cleared.
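A typical display callback therefore redraws the whole normal plane and is triggered via glutPostRedisplay rather than being called directly. A minimal sketch (not from the notes):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);   /* redisplay the entire region */
    /* ... draw the scene here ... */
    glutSwapBuffers();              /* or glFlush() if single buffered */
}

glutDisplayFunc(display);           /* register before the window is shown */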
glutReshapeFunc
glutReshapeFunc sets the reshape callback for the current window.
Usage
void glutReshapeFunc(void (*func)(int width, int height));
func
The new reshape callback function.
Description
glutReshapeFunc sets the reshape callback for the current window. The reshape callback
is triggered when a window is reshaped. A reshape callback is also triggered immediately
before a window's first display callback after a window is created or whenever an overlay
for the window is established. The width and height parameters of the callback specify
the new window size in pixels. Before the callback, the current window is set to the
window that has been reshaped.
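A minimal reshape callback usually just resets the viewport to the new window size; this sketch (not from the notes) does only that:

void reshape(int width, int height)
{
    /* map normalized device coordinates to the freshly sized window */
    glViewport(0, 0, width, height);
}

glutReshapeFunc(reshape);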
glutIdleFunc
glutIdleFunc sets the global idle callback.
Usage
void glutIdleFunc(void (*func)(void));
Description
glutIdleFunc sets the global idle callback to be ‘func’ so a GLUT program can perform
background processing tasks or continuous animation when window system events are
not being received. If enabled, the idle callback is continuously called when events are
not being received. The callback routine has no parameters. The current window and
current menu will not be changed before the idle callback. Programs with multiple
windows and/or menus should explicitly set the current window and/or current menu and
not rely on its current setting.
The amount of computation and rendering done in an idle callback should be minimized
to avoid affecting the program's interactive response. In general, not more than a single
frame of rendering should be done in an idle callback.
Passing NULL to glutIdleFunc disables the generation of the idle callback.
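A common pattern (a sketch, not from the notes) is to advance some animation state in the idle callback and then post a redisplay, leaving all rendering to the display callback:

static float angle = 0.0f;

void idle(void)
{
    angle += 0.5f;          /* advance the animation state */
    glutPostRedisplay();    /* mark the window for redisplay */
}

glutIdleFunc(idle);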
glutMainLoop
glutMainLoop enters the GLUT event processing loop.
Usage
void glutMainLoop(void);
Description
glutMainLoop enters the GLUT event processing loop. This routine should be called at
most once in a GLUT program. Once called, this routine will never return. It will call as
necessary any callbacks that have been registered.
glutSwapBuffers
glutSwapBuffers swaps the buffers of the current window if double buffered.
Usage
void glutSwapBuffers(void);
Description
Performs a buffer swap on the layer in use for the current window. Specifically,
glutSwapBuffers promotes the contents of the back buffer of the layer in use of the
current window to become the contents of the front buffer. The contents of the back
buffer then become undefined. The update typically takes place during the vertical retrace
of the monitor, rather than immediately after glutSwapBuffers is called.
An implicit glFlush is done by glutSwapBuffers before it returns. Subsequent OpenGL
commands can be issued immediately after calling glutSwapBuffers, but are not executed
until the buffer exchange is completed.
If the layer in use is not double buffered, glutSwapBuffers has no effect.
34-OpenGL Programming – II VU
#include "GL/glut.h"
#include <stdlib.h>
glBegin(GL_TRIANGLES);
glVertex3f(0.5f,0.2f,0.0f);
glVertex3f(0.5f,0.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glEnd();
glutSwapBuffers();
}
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
317
© Copyright Virtual University of Pakistan
34-OpenGL Programming – II VU
glLoadIdentity();
glTranslated(0.0, 0.0, -5.0 );
}
int main( int argc, char* argv[] )
{
return 0;
}
//eof
In the above program we first set the perspective projection matrix and then rendered a triangle from the idle function.
glMatrixMode
The glMatrixMode function specifies which matrix is the current matrix.
void glMatrixMode(
GLenum mode
);
Parameters
mode
The matrix stack that is the target for subsequent matrix operations. The mode
parameter can assume one of three values:
Value Meaning
GL_MODELVIEW Applies subsequent matrix operations to the modelview matrix stack.
GL_PROJECTION Applies subsequent matrix operations to the projection matrix stack.
GL_TEXTURE Applies subsequent matrix operations to the texture matrix stack.
Remarks
The glMatrixMode function sets the current matrix mode.
The following function retrieves information related to glMatrixMode: glGet with argument GL_MATRIX_MODE.
Error Codes
The following are the error codes generated and their conditions.
Error code Condition
GL_INVALID_ENUM mode was set to an unaccepted value.
GL_INVALID_OPERATION glMatrixMode was called between a call to glBegin and the corresponding call to glEnd.
glLoadIdentity
The glLoadIdentity function replaces the current matrix with the identity matrix.
void glLoadIdentity(
void
);
Remarks
The glLoadIdentity function replaces the current matrix with the identity matrix. It is semantically equivalent to calling glLoadMatrix with the identity matrix.
glTranslated, glTranslatef
The glTranslated and glTranslatef functions multiply the current matrix by a translation
matrix.
void glTranslated(
GLdouble x,
GLdouble y,
GLdouble z
);
void glTranslatef(
GLfloat x,
GLfloat y,
GLfloat z
);
Parameters x, y, z
The x, y, and z coordinates of a translation vector.
Remarks
The glTranslate function produces the translation specified by (x, y, z). The translation vector is used to compute a 4x4 translation matrix:

    | 1 0 0 x |
T = | 0 1 0 y |
    | 0 0 1 z |
    | 0 0 0 1 |
The current matrix (see glMatrixMode) is multiplied by this translation matrix, with the
product replacing the current matrix. That is, if M is the current matrix and T is the
translation matrix, then M is replaced with M•T.
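Because each call multiplies on the right, the transformation written last in the code is applied to the vertices first. A small sketch with illustrative values (not from the notes):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(1.0f, 0.0f, 0.0f);       /* M = I * T */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   /* M = T * R */
/* vertices are now transformed by T*R: rotated first, then translated */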
gluPerspective
The gluPerspective is the function of gl utility library. This function sets up a perspective
projection matrix.
void gluPerspective(
GLdouble fovy,
GLdouble aspect,
GLdouble zNear,
GLdouble zFar
);
Parameters
fovy
The field of view angle, in degrees, in the y-direction.
aspect
The aspect ratio that determines the field of view in the x-direction. The aspect
ratio is the ratio of x (width) to y (height).
zNear
The distance from the viewer to the near clipping plane (always positive).
zFar
The distance from the viewer to the far clipping plane (always positive).
Remarks
The gluPerspective function specifies a viewing frustum into the world coordinate
system. In general, the aspect ratio in gluPerspective should match the aspect ratio of the
associated viewport. For example, aspect = 2.0 means the viewer's angle of view is twice
as wide in x as it is in y. If the viewport is twice as wide as it is tall, it displays the image
without distortion.
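For example, a reshape handler can recompute the frustum so the aspect ratio always matches the viewport. A sketch with illustrative values (not from the notes):

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* guard against a zero height to avoid dividing by zero */
    gluPerspective(60.0, (GLdouble)w / (GLdouble)(h ? h : 1), 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
}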
glRotated, glRotatef
The glRotated and glRotatef functions multiply the current matrix by a rotation matrix.
void glRotated(
GLdouble angle,
GLdouble x,
GLdouble y,
GLdouble z );
void glRotatef(
GLfloat angle,
GLfloat x,
GLfloat y,
GLfloat z );
Parameters
angle
The angle of rotation, in degrees.
x, y, z
The x, y, and z coordinates of a vector, respectively.
Remarks
The glRotate function computes a matrix that performs a counterclockwise rotation of
angle degrees about the vector from the origin through the point (x, y, z).
The current matrix (see glMatrixMode) is multiplied by this rotation matrix, with the product replacing the current matrix. That is, if M is the current matrix and R is the rotation matrix, then M is replaced with M•R.
glClearColor
The glClearColor function specifies clear values for the color buffers.
void glClearColor(
GLclampf red,
GLclampf green,
GLclampf blue,
GLclampf alpha
);
Parameters
red, green, blue, alpha
The red, green, blue, and alpha values that glClear uses to clear the color buffers.
The default values are all zero.
Remarks
The glClearColor function specifies the red, green, blue, and alpha values used by
glClear to clear the color buffers. Values specified by glClearColor are clamped to the
range [0,1].
Error Codes
The following is the error code generated and its condition.
Error code Condition
GL_INVALID_OPERATION glClearColor was called between a call to glBegin and the corresponding call to glEnd.
glColor
The glColor function sets the current color; its variants differ in the number and type of their arguments.
The glColor3 variants specify new red, green, and blue values explicitly, and set
the current alpha value to 1.0 implicitly.
Neither floating-point nor signed integer values are clamped to the range [0,1] before
updating the current color. However, color components are clamped to this range before
they are interpolated or written into a color buffer.
You can update the current color at any time. In particular, you can call glColor between
a call to glBegin and the corresponding call to glEnd.
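For instance, since glColor may appear between glBegin and glEnd, each vertex of a primitive can carry its own color. A sketch (not from the notes):

glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);  glVertex2f( 0.0f,  1.0f);  /* red   */
    glColor3f(0.0f, 1.0f, 0.0f);  glVertex2f(-1.0f, -1.0f);  /* green */
    glColor3f(0.0f, 0.0f, 1.0f);  glVertex2f( 1.0f, -1.0f);  /* blue  */
glEnd();  /* with smooth shading the colors are interpolated across the face */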
glClear
The glClear function clears buffers to preset values.
void glClear(
GLbitfield mask
);
Parameters
mask
Bitwise OR operators of masks that indicate the buffers to be cleared. The four
masks are as follows.
Mask Buffer to be Cleared
GL_COLOR_BUFFER_BIT The buffers currently enabled for color writing.
GL_DEPTH_BUFFER_BIT The depth buffer.
GL_ACCUM_BUFFER_BIT The accumulation buffer.
GL_STENCIL_BUFFER_BIT The stencil buffer.
Remarks
The glClear function sets the bitplane area of the window to values previously selected
by glClearColor, glClearIndex, glClearDepth, glClearStencil, and glClearAccum.
You can clear multiple color buffers simultaneously by selecting more than one buffer at
a time using glDrawBuffer.
The pixel-ownership test, the scissor test, dithering, and the buffer writemasks affect the
operation of glClear. The scissor box bounds the cleared region. The glClear function
ignores the alpha function, blend function, logical operation, stenciling, texture mapping,
and z-buffering.
The glClear function takes a single argument (mask) that is the bitwise OR of several
values indicating which buffer is to be cleared.
The value to which each buffer is cleared depends on the setting of the clear value for that
buffer.
If a buffer is not present, a glClear call directed at that buffer has no effect.
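A typical frame therefore sets the clear values once and then clears color and depth with a single call. A sketch (not from the notes):

/* once, at startup */
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   /* color buffers clear to opaque black */
glClearDepth(1.0);                      /* depth buffer clears to the far plane */

/* at the start of each frame */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);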
Error Codes
The following are the error codes generated and their conditions.
Error code Condition
GL_INVALID_VALUE A bit other than the four defined buffer bits was set in mask.
GL_INVALID_OPERATION glClear was called between a call to glBegin and the corresponding call to glEnd.
glBegin, glEnd
The glBegin and glEnd functions delimit the vertices of a primitive or a group of like
primitives.
void glBegin(
GLenum mode
);
void glEnd(
void
);
Parameters
mode
The primitive or primitives that will be created from vertices presented between
glBegin and the subsequent glEnd. The following are accepted symbolic
constants and their meanings:
GL_POINTS
Treats each vertex as a single point. Vertex n defines point n. N points are drawn.
GL_LINES
Treats each pair of vertices as an independent line segment. Vertices 2n-1 and 2n define line n. N/2 lines are drawn.
GL_LINE_STRIP
Draws a connected group of line segments from the first vertex to the last.
Vertices n and n+1 define line n. N-1 lines are drawn.
GL_LINE_LOOP
Draws a connected group of line segments from the first vertex to the last, then
back to the first. Vertices n and n+1 define line n. The last line, however, is
defined by vertices N and 1. N lines are drawn.
GL_TRIANGLES
Treats each triplet of vertices as an independent triangle. Vertices 3n-2, 3n-1, and 3n define triangle n. N/3 triangles are drawn.
GL_TRIANGLE_STRIP
Draws a connected group of triangles. One triangle is defined for each vertex
presented after the first two vertices. For odd n, vertices n, n + 1, and n + 2 define
triangle n. For even n, vertices n + 1, n, and n + 2 define triangle n. N-2 triangles
are drawn.
GL_TRIANGLE_FAN
Draws a connected group of triangles. One triangle is defined for each vertex
presented after the first two vertices. Vertices 1, n + 1, and n + 2 define triangle n.
N-2 triangles are drawn.
GL_QUADS
Treats each group of four vertices as an independent quadrilateral. Vertices 4n-3, 4n-2, 4n-1, and 4n define quadrilateral n. N/4 quadrilaterals are drawn.
GL_QUAD_STRIP
Draws a connected group of quadrilaterals. One quadrilateral is defined for each
pair of vertices presented after the first pair. Vertices 2n-1, 2n, 2n + 2, and 2n + 1
define quadrilateral n. N quadrilaterals are drawn. Note that the order in which
vertices are used to construct a quadrilateral from strip data is different from that
used with independent data.
GL_POLYGON
Draws a single, convex polygon. Vertices 1 through N define this polygon.
Remarks
The glBegin and glEnd functions delimit the vertices that define a primitive or a group of
like primitives. The glBegin function accepts a single argument that specifies which of
ten primitives the vertices compose. Taking n as an integer count starting at one, and N as
the total number of vertices specified, the interpretations are as follows:
You can use only a subset of OpenGL functions between glBegin and glEnd. The
functions you can use are:
glVertex
glColor
glIndex
glNormal
glMaterial
You can also use glCallList or glCallLists to execute display lists that include only
the preceding functions. If any other OpenGL function is called between glBegin and
glEnd, the error flag is set and the function is ignored.
Regardless of the value chosen for mode in glBegin, there is no limit to the
number of vertices you can define between glBegin and glEnd. Lines, triangles,
quadrilaterals, and polygons that are incompletely specified are not drawn.
Incomplete specification results when either too few vertices are provided to specify
even a single primitive or when an incorrect multiple of vertices is specified. The
incomplete primitive is ignored; the complete primitives are drawn.
Modes that require a certain multiple of vertices are GL_LINES (2), GL_TRIANGLES
(3), GL_QUADS (4), and GL_QUAD_STRIP (2).
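As a small illustration of the vertex-ordering rules above (a sketch, not from the notes), four vertices sent as a GL_TRIANGLE_STRIP yield N - 2 = 2 triangles:

glBegin(GL_TRIANGLE_STRIP);
    glVertex2f(0.0f, 0.0f);   /* v1 */
    glVertex2f(1.0f, 0.0f);   /* v2 */
    glVertex2f(0.0f, 1.0f);   /* v3: triangle 1 is v1, v2, v3 */
    glVertex2f(1.0f, 1.0f);   /* v4: triangle 2 is v3, v2, v4 (even-n order) */
glEnd();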
Error Codes
The following are the error codes generated and their conditions.
Error code Condition
GL_INVALID_ENUM mode was set to an unaccepted value.
GL_INVALID_OPERATION A function other than glVertex, glColor, glIndex, glNormal, glTexCoord, glEvalCoord, glEvalPoint, glMaterial, glEdgeFlag, glCallList, or glCallLists was called between glBegin and the corresponding glEnd; the function glEnd was called before the corresponding glBegin was called; or glBegin was called within a glBegin/glEnd sequence.
glVertex
These functions specify a vertex.
35-Curves VU
We all know what a curve is. In this lecture we will explore the mathematical definition of a curve in a form that is very useful to geometric modeling and other computer graphics applications: that definition consists of a set of parametric equations. The mathematics of parametric equations is the basis for Bezier, NURBS (Non-Uniform Rational B-Splines), and Hermite curves. We will discuss both plane curves and space curves here, as well as the tangent vector, blending functions, conic curves, re-parameterization, continuity, and composite curves.
A parametric curve is one whose defining equations are given in terms of a single, common, independent variable called the parametric variable. We have already encountered parametric variables in earlier discussions of vectors, lines, and planes. Imagine a curve in three-dimensional space; each point on the curve has a unique set of coordinates: a specific x value, y value, and z value. Each coordinate is controlled by a separate parametric equation, whose general form looks like

x = x(u), y = y(u), z = z(u)   (1)

Each point on a curve is defined by a vector p (Figure 1). The components of this vector are x(u), y(u), and z(u). We express this as

p = p(u)   (2)

which says that the vector p is a function of the parametric variable u.
Plane Curves
To define plane curves, we use parametric functions that are second-degree polynomials:

x(u) = ax u² + bx u + cx
y(u) = ay u² + by u + cy   (4)
z(u) = az u² + bz u + cz

or, in vector form,

p(u) = au² + bu + c   (5)

We allow the parametric variable to take on values only in the interval 0 ≤ u ≤ 1. This ensures that the equation produces a bounded curve segment. The coefficients a, b, c in this equation are vectors, and each has three components; for example, a = [ax ay az].
This curve has serious limitations. Although it can generate all the conic curves, or a close approximation to them, it cannot generate a curve with an inflection point, such as an S-shaped curve, no matter what values we select for the coefficients a, b, c. To do this requires a cubic polynomial.
How do we define a specific plane curve, one that we can display, with defined end points and a precise orientation in space? First, note in Equation 4 or 5 that there are nine coefficients that we must determine: ax, bx, ..., cz. If we know the two end points and an intermediate point on the curve, then we know nine quantities that we can express in terms of these coefficients (3 points x 3 coordinates each = 9 known quantities), and we can use these three points to define a unique curve (Figure 2). By applying some simple algebra to these relationships, we can rewrite Equation 5 in terms
of the three points. To one of the end points we assign u = 0, and to the other u = 1. To
the intermediate point, we arbitrarily assign u = 0.5. We can write these points as
P0 = [x0 y0 z0 ]
P0.5 = [x0.5 y0.5 z0.5] (6)
P1 = [x1 y1 z1 ]
Where the subscripts indicate the value of the parametric variable at each point.
Now we solve Equations 4 for the ax, bx, ..., cz coefficients in terms of these points. Thus, for x at u = 0, u = 0.5, and u = 1, we have
x0 = cx
x0.5 = 0.25ax + 0.5bx + cx (7)
x1 = ax + bx + cx
Next we solve these three equations in three unknowns for ax, bx, and cx, finding

ax = 2x0 - 4x0.5 + 2x1
bx = -3x0 + 4x0.5 - x1   (8)
cx = x0
Using this result and equivalent expressions for y(u) and z(u), we combine them into a single vector equation:

p(u) = (2u² - 3u + 1)p0 + (-4u² + 4u)p0.5 + (2u² - u)p1   (11)
Equation 11 produces the same curve as Equation 5. The curve will always lie in a plane no matter what three points we choose. Furthermore, it is interesting to note that the point p0.5, which is on the curve at u = 0.5, is not necessarily halfway along the length of the curve between p0 and p1. We can show this quite convincingly by choosing three points to define a curve such that two of them are relatively close together (Figure 3). In fact, if we assign a different value to the parametric variable for the intermediate point, then we obtain different values for the coefficients in Equations 8. This, in turn, means that a different curve is produced, although it passes through the same three points.
Equation 5 is the algebraic form and equation 11 is the geometric form. Each of these
equations can be written more compactly with matrices. Compactness is not the only
advantage to matrix notation. Once a curve is defined in matrix form, we can use the full
power of matrix algebra to solve many geometry problems.
p(u) = [u² u 1][a b c]^T = au² + bu + c   (12)

U = [u² u 1]   (13)

A = [a b c]^T   (14)
p(u) = UA   (15)
Remember that A is really a matrix of vectors, so that
    | ax ay az |
A = | bx by bz |   (16)
    | cx cy cz |

    | p0   |
p(u) = [(2u² - 3u + 1)  (-4u² + 4u)  (2u² - u)] | p0.5 |   (17)
    | p1   |
Using the following substitutions:

F = [(2u² - 3u + 1)  (-4u² + 4u)  (2u² - u)]   (18)

    | p0   |   | x0   y0   z0   |
P = | p0.5 | = | x0.5 y0.5 z0.5 |   (19)
    | p1   |   | x1   y1   z1   |
where P is the control point matrix and the nine terms on the right are its elements or the
geometric coefficients, we can now write
p(u) = FP   (20)
This is the matrix version of the geometric form.
Because it is the same curve in algebraic form, p(u)=UA, or geometric form, p(u)=FP, we
can write
FP = UA   (21)
The F matrix is itself the product of two other matrices:
              |  2 -4  2 |
F = [u² u 1]  | -3  4 -1 |   (22)
              |  1  0  0 |
The matrix on the left we recognize as U, and we can denote the other matrix as

    |  2 -4  2 |
M = | -3  4 -1 |   (23)
    |  1  0  0 |

so that

F = UM   (24)
Using this we substitute appropriately to find

UMP = UA   (25)

Pre-multiplying each side of this equation by U⁻¹ yields

MP = A   (26)

This expresses a simple relationship between the algebraic and geometric coefficients:

A = MP   (27)

or

P = M⁻¹A   (28)
The matrix M is called a basis transformation matrix, and F is called a blending function
matrix.
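As a small sketch (a hypothetical helper, not from the notes), the geometric form of Equation 17 can be evaluated directly in C; at u = 0, 0.5, and 1 it reproduces p0, p0.5, and p1:

/* Evaluate the plane curve of Equation 17 at parameter u. */
void curve_point(const float p0[3], const float p05[3], const float p1[3],
                 float u, float out[3])
{
    float F1 = 2*u*u - 3*u + 1;   /* blending function for p0   */
    float F2 = -4*u*u + 4*u;      /* blending function for p0.5 */
    float F3 = 2*u*u - u;         /* blending function for p1   */
    int i;
    for (i = 0; i < 3; i++)
        out[i] = F1*p0[i] + F2*p05[i] + F3*p1[i];
}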
36-Space Curves VU
A cubic curve can be made to pass through four given points p1, p2, p3, p4, to which we assign the parametric values u = 0, 1/3, 2/3, and 1, respectively. Writing the x component as x(u) = ax u³ + bx u² + cx u + dx (Equation 1) and evaluating it at these four values of u gives four conditions, the last of which is

x4 = ax + bx + cx + dx

Now we can express ax, bx, cx and dx in terms of x1, x2, x3, and x4. After doing the necessary algebra, we obtain

ax = -(9/2)x1 + (27/2)x2 - (27/2)x3 + (9/2)x4
bx = 9x1 - (45/2)x2 + 18x3 - (9/2)x4   (4)
cx = -(11/2)x1 + 9x2 - (9/2)x3 + x4
dx = x1

We substitute these results into Equation 1, producing

x(u) = [-(9/2)x1 + (27/2)x2 - (27/2)x3 + (9/2)x4]u³
     + [9x1 - (45/2)x2 + 18x3 - (9/2)x4]u²
     + [-(11/2)x1 + 9x2 - (9/2)x3 + x4]u + x1   (5)

All this looks a bit messy right now, but we can put it into a neat, much more compact form. We begin by rewriting Equation 5 as follows:

x(u) = [-(9/2)u³ + 9u² - (11/2)u + 1]x1 + [(27/2)u³ - (45/2)u² + 9u]x2
     + [-(27/2)u³ + 18u² - (9/2)u]x3 + [(9/2)u³ - (9/2)u² + u]x4   (6)

Using equivalent expressions for y(u) and z(u), we can summarize them as a single vector equation:

p(u) = [-(9/2)u³ + 9u² - (11/2)u + 1]p1 + [(27/2)u³ - (45/2)u² + 9u]p2
     + [-(27/2)u³ + 18u² - (9/2)u]p3 + [(9/2)u³ - (9/2)u² + u]p4   (7)

This means that, given four points assigned successive values of u (in this case u = 0, 1/3, 2/3, 1), Equation 7 produces a curve that starts at p1, passes through p2 and p3, and ends at p4.
Now let's take one more step toward a more compact notation. Using the four parametric functions appearing in Equation 7, we define a new matrix, G = [G1 G2 G3 G4], where

G1 = -(9/2)u³ + 9u² - (11/2)u + 1
G2 = (27/2)u³ - (45/2)u² + 9u
G3 = -(27/2)u³ + 18u² - (9/2)u   (8)
G4 = (9/2)u³ - (9/2)u² + u

and then define a matrix P containing the control points, P = [p1 p2 p3 p4]^T, so that

p(u) = GP   (9)

The matrix G is the product of two other matrices, U and N:

G = UN   (10)

where U = [u³ u² u 1] and

    | -9/2   27/2  -27/2   9/2 |
N = |  9    -45/2   18    -9/2 |   (11)
    | -11/2   9    -9/2    1   |
    |  1      0     0      0   |

(Note that N is another example of a basis transformation matrix.)
Now we let

    | ax ay az |
A = | bx by bz |   (12)
    | cx cy cz |
    | dx dy dz |

so that, as in the previous lecture, the algebraic form is p(u) = UA (13).
To convert the information in the A matrix into that required for the P matrix, we do some simple matrix algebra, using Equations 9, 10 and 13. First we have

GP = UNP   (14)

and then

UA = UNP   (15)

or more simply

A = NP   (16)
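A sketch of Equation 7 in C (a hypothetical helper, not from the notes): evaluating the four blending functions at u reproduces p1 through p4 at u = 0, 1/3, 2/3, and 1.

/* Evaluate the four-point cubic of Equation 7 at parameter u. */
void cubic_point(const float p[4][3], float u, float out[3])
{
    float u2 = u*u, u3 = u2*u;
    float G1 = -4.5f*u3 +  9.0f*u2 - 5.5f*u + 1.0f;
    float G2 = 13.5f*u3 - 22.5f*u2 + 9.0f*u;
    float G3 = -13.5f*u3 + 18.0f*u2 - 4.5f*u;
    float G4 =  4.5f*u3 -  4.5f*u2 + u;
    int i;
    for (i = 0; i < 3; i++)
        out[i] = G1*p[0][i] + G2*p[1][i] + G3*p[2][i] + G4*p[3][i];
}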
37-The Tangent Vector VU
Another way to define a space curve does not use intermediate points. It uses the tangents
at each end of the curve, instead. Every point on a curve has a straight line associated
with it called the tangent line, which is related to the first derivation of the Parametric
functions x(u), y(u), and z(u), such as those given by Equation 2 of previous lecture. Thus
d
xu,
d
yu,
d
and zu (1)
du du du
From elementary calculus, we can compute, for example,
dy dyu/ du (2)
du dxu/ du
We can treat dxu du, dyu du, and dz u du as components of a vector along the
tangent line to the curve. We call this the tangent vector, and define it as
d
Pu u
x u i
d
zuk
d
y u j (3)
du du du
Or more simply as
Pu xu yu zu (4)
(Here the superscript u indicates the first derivative operation with respect to the
independent variable u). This is a very powerful idea, and we will now see how to use it
to define a curve.
In the last section, we discussed how to define a curve by specifying four points. Now we
have another way to define a curve. We will still use the two end points, but instead of
two intermediate points, we will use the tangent vectors at each end to supply the
information we need to define a curve. By manipulating these tangent vectors, we can control the slope at each end. The set of vectors p0, p1, p0^u, and p1^u is called the
boundary conditions. This method itself is called the cubic Hermite interpolation, after C.
Hermite (1822-1901) the French mathematician who made significant contributions to
our understanding of cubic and quadratic polynomials.
[Figure 1: a curve segment from p0 to p1, showing the tangent vector p0^u at p0.]
x(u) = ax u³ + bx u² + cx u + dx   (1A)
Applying the four boundary conditions (position and tangent at u = 0 and u = 1) to Equation 1A and solving for the coefficients yields, among the results, dx = x0.
Substituting these results into Equation (1A) yields

x(u) = (2x0 - 2x1 + x0^u + x1^u)u³ + (-3x0 + 3x1 - 2x0^u - x1^u)u² + x0^u u + x0   (8)

Collecting the terms on the points and tangent vectors instead of the powers of u gives

x(u) = (2u³ - 3u² + 1)x0 + (-2u³ + 3u²)x1 + (u³ - 2u² + u)x0^u + (u³ - u²)x1^u   (9)

Because y(u) and z(u) have equivalent forms, we can include them by rewriting Equation 9 in vector form:

p(u) = (2u³ - 3u² + 1)p0 + (-2u³ + 3u²)p1 + (u³ - 2u² + u)p0^u + (u³ - u²)p1^u   (10)
F1 = 2u³ - 3u² + 1
F2 = -2u³ + 3u²
F3 = u³ - 2u² + u   (11)
F4 = u³ - u²
These matrix elements are the polynomial coefficients of the vectors, which we rewrite as

p(u) = F1 p0 + F2 p1 + F3 p0^u + F4 p1^u   (12)

Collecting the blending functions into F = [F1 F2 F3 F4] and the boundary conditions into B = [p0 p1 p0^u p1^u]^T (13), we then have

p(u) = FB   (14)
Here again we write the matrix F as the product of two matrices, U and M, so that
F = UM   (15)
where
U = [u³ u² u 1]   (16)
and
    |  2 -2  1  1 |
M = | -3  3 -2 -1 |   (17)
    |  0  0  1  0 |
    |  1  0  0  0 |
Rewriting Equation 14 using these substitutions, and comparing with the algebraic form p(u) = UA, we obtain

A = MB   (20)
Consider the four vectors that make up the boundary condition matrix. There is nothing extraordinary about the vectors defining the end points, but what about the two tangent vectors? A tangent vector certainly defines the slope at one end of the curve, but a vector has characteristics of both direction and magnitude. All we need to specify the slope is a unit tangent vector at each end, say t0 and t1. But p0, p1, t0, and t1 supply only 10 of the 12 pieces of information needed to completely determine the curve. So the magnitude of the tangent vector is also necessary and contributes to the shape of the curve. In fact, we can write p0^u and p1^u as:
p0^u = m0 t0   (21)

and

p1^u = m1 t1   (22)
For example, for a curve with end points p0 = [0 0 0] and p1 = [1 0 0] and unit tangent vectors inclined at 45 degrees to the x-axis, the boundary condition matrix is

    | p0   |   | 0      0      0 |
B = | p1   | = | 1      0      0 |   (24)
    | m0t0 |   | 0.707  0.707  0 |
    | m1t1 |   | 0.707 -0.707  0 |
Carefully consider this array of 12 elements; they uniquely define the curve. By changing either m0 or m1, or both, we can change the shape of the curve. But it is a restricted kind of change, because not only do the end points remain fixed, but the end slopes are also unchanged!
The three curves drawn with light lines in Figure 3 show the effects of varying m0 and m1. This is a very powerful tool for designing curves, making it possible to join many curves end-to-end in a smooth way and still exert some control over the interior shape of each individual curve. For example, as we increase the value of m0 while holding m1 fixed, the curve seems to be pushed toward p1. Keeping m0 and m1 equal but increasing their value increases the maximum deflection of the curve from the x-axis and increases the curvature at the maximum. (Under some conditions, not necessarily desirable, we can force a loop to form.)
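The effect of the tangent magnitudes is easy to experiment with in code; this sketch (a hypothetical helper, not from the notes) evaluates the Hermite form of Equation 12 with explicit m0 and m1:

/* Evaluate a cubic Hermite curve: end points p0, p1 and unit tangents
   t0, t1 scaled by the magnitudes m0, m1. */
void hermite_point(const float p0[3], const float p1[3],
                   const float t0[3], const float t1[3],
                   float m0, float m1, float u, float out[3])
{
    float u2 = u*u, u3 = u2*u;
    float F1 = 2*u3 - 3*u2 + 1;
    float F2 = -2*u3 + 3*u2;
    float F3 = u3 - 2*u2 + u;
    float F4 = u3 - u2;
    int i;
    for (i = 0; i < 3; i++)
        out[i] = F1*p0[i] + F2*p1[i] + F3*m0*t0[i] + F4*m1*t1[i];
}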
38-Bezier Curves VU
In the early 1960s, Pierre Bezier began looking for a better way to define curves and surfaces, one that would be useful to a design engineer. He was familiar with the work of Ferguson and Coons and their parametric cubic curves and bicubic surfaces. However, these did not offer an intuitive way to alter and control shape. The results of Bezier's research led to the curves and surfaces that bear his name and became part of the UNISURF system. The French automobile manufacturer Renault used UNISURF to design the sculptured surfaces of many of its products.
A Geometric Construction
We can draw a Bezier curve using a simple recursive geometric construction. Let’s begin
by constructing a second-degree curve. We select three points A, B, C so that line AB is a
tangent to the curve at A, and BC is tangent at C. The curve begins at A and ends at C.
For any ratio ui, we construct points D and E so that

AD/AB = BE/BC = ui   (1)
To define this curve in a coordinate system, let point A = (xA, yA), B = (xB, yB), and C = (xC, yC). Then the coordinates of points D and E for some value of ui are

xD = xA + ui(xB - xA)
yD = yA + ui(yB - yA)   (2)
and

xE = xB + ui(xC - xB)
yE = yB + ui(yC - yB)   (3)
Point F is then located on line DE such that DF/DE = ui (4). Substituting Equations 2 and 3 produces

xF = (1 - ui)²xA + 2ui(1 - ui)xB + ui²xC   (5)
yF = (1 - ui)²yA + 2ui(1 - ui)yB + ui²yC
We generalize this set of equations for any point on the curve using the following substitutions:

x(u) = xF
y(u) = yF   (6)

and we let

x0 = xA,  x1 = xB,  x2 = xC
y0 = yA,  y1 = yB,  y2 = yC   (7)
so that

x(u) = (1 - u)²x0 + 2u(1 - u)x1 + u²x2
y(u) = (1 - u)²y0 + 2u(1 - u)y1 + u²y2   (8)

This is the set of second-degree equations for the coordinates of points on a Bezier curve based on our construction.
We express this construction process and Equations 8 in terms of vectors with the following substitutions. Let the vector p0 represent point A, p1 point B, and p2 point C. From vector geometry we have d = p0 + u(p1 - p0) and e = p1 + u(p2 - p1); if we let f = d + u(e - d), we see that

p(u) = p0 + u(p1 - p0) + u{[p1 + u(p2 - p1)] - [p0 + u(p1 - p0)]}   (9)
We rearrange terms to obtain a more compact vector equation of a second-degree Bezier curve:

p(u) = (1 - u)²p0 + 2u(1 - u)p1 + u²p2   (10)
The ratio u is the parametric variable. Later we will see that this equation is an example of
a Bernstein polynomial. Note that the curve will always lie in the plane containing the
three control points, but the points do not necessarily lie in the xy plane.
Similar constructions apply to Bezier curves of any degree. In fact the degree of a Bezier
curve is equal to n-1, where n is the number of control points.
Figure 2 shows the construction of a point on a cubic Bezier curve, which requires four control points A, B, C, and D to define it. The curve begins at point A, tangent to AB, and ends at D, tangent to CD. We construct points E, F, and G so that
AE/AB = BF/BC = CG/CD = ui   (11)

Similarly, on EF and FG we construct points H and I so that

EH/EF = FI/FG = ui   (12)

Finally, on HI we locate J so that

HJ/HI = ui   (13)
We can make no more subdivisions, which means that point J is on the curve. If we
continue this process for a sequence of points, then their locus defines the curve. If points A, B, C, and D are represented by the vectors p0, p1, p2, and p3, respectively, then expressing the construction of the intermediate points E, F, G, H, and I in terms of these vectors to produce point J, or p(u), yields

p(u) = h + u(i - h),  where
e = p0 + u(p1 - p0),  f = p1 + u(p2 - p1),  g = p2 + u(p3 - p2),
h = e + u(f - e),  i = f + u(g - f)   (14)
This awkward expression simplifies nicely to

p(u) = (1 - u)³p0 + 3u(1 - u)²p1 + 3u²(1 - u)p2 + u³p3   (15)
Of course, this construction of a cubic curve with its four control points is done in the
plane of the paper. However, the cubic polynomial allows a curve that is nonplanar; that
is, it can represent a curve that twists in space.
The geometric construction of a Bezier curve shows how the control points influence its
shape. The curve begins on the first point and ends on the last point. It is tangent to the
lines connecting the first two points and the last two points. The curve is always
contained within the convex hull of the control points.
No one spends time constructing and plotting the points of a Bezier curve by hand, of
course. A computer does a much faster and more accurate job. However, it is worth doing
several curves this way for insight into the characteristics of Bezier curves.
An Algebraic Definition
Bezier began with the idea that any point p(u) on a curve segment should be given by an
equation such as the following:
p(u) = Σ pi fi(u),  i = 0, ..., n   (16)
Equation 16 is a compact way to express the sum of several similar terms, because what it says is this: multiply each control point pi by its corresponding blending function fi(u), and add the products together to obtain a point p(u) on the curve.
A family of functions called Bernstein polynomials satisfies these requirements. They are
the basis functions of the Bezier curve (Other curves, such as the NURBS curves, use
different, but related, basis functions). We rewrite Equation 16 using them so that
p(u) = Σ pi Bi,n(u),  i = 0, ..., n   (18)
where the basis functions are

Bi,n(u) = C(n, i) uⁱ(1 - u)ⁿ⁻ⁱ   (19)

and C(n, i) is the binomial coefficient

C(n, i) = n! / [i!(n - i)!]   (20)
Expanding Equation 18 for a second-degree Bezier curve (when n = 2 and there are three control points) produces

p(u) = p0 B0,2(u) + p1 B1,2(u) + p2 B2,2(u)   (21)

where

B0,2(u) = (1 - u)²   (22)
B1,2(u) = 2u(1 - u)   (23)
B2,2(u) = u²   (24)
p(u) = (1 - u)²p0 + 2u(1 - u)p1 + u²p2   (25)
This is the same expression we found from the geometric construction, Equation 10. The
variable u is now called the parametric variable.
Now, let’s expand Equation 18 for a cubic Bezier curve, where n=3:
p(u) = p0 B0,3(u) + p1 B1,3(u) + p2 B2,3(u) + p3 B3,3(u)   (26)
and from Equation 20 we find

B0,3(u) = (1 - u)³   (27)
B1,3(u) = 3u(1 - u)²   (28)
B2,3(u) = 3u²(1 - u)   (29)
B3,3(u) = u³   (30)
Bezier curve equations are well suited for expression in matrix form. Substituting Equations 27-30 into Equation 26 gives

p(u) = (1 - u)³p0 + 3u(1 - u)²p1 + 3u²(1 - u)p2 + u³p3   (31)

We can expand the cubic parametric functions and rewrite Equation 31 as
p(u) = [(1 - 3u + 3u² - u³)  (3u - 6u² + 3u³)  (3u² - 3u³)  (u³)] [p0 p1 p2 p3]^T   (32)

or as

                    | -1  3 -3  1 | | p0 |
p(u) = [u³ u² u 1]  |  3 -6  3  0 | | p1 |   (33)
                    | -3  3  0  0 | | p2 |
                    |  1  0  0  0 | | p3 |
If we let

U = [u³ u² u 1]   (34)

P = [p0 p1 p2 p3]^T   (35)

and

    | -1  3 -3  1 |
M = |  3 -6  3  0 |   (36)
    | -3  3  0  0 |
    |  1  0  0  0 |

then the cubic Bezier curve is simply p(u) = UMP.
Note that the composition of the matrices U, M, and P varies according to the number of
control points (that is, the degree of the Bernstein polynomial basis functions).
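As a closing sketch (a hypothetical helper, not from the notes), the cubic Bezier curve of Equation 31 can be evaluated directly from the Bernstein basis:

/* Evaluate a cubic Bezier curve (Equation 31) at parameter u. */
void bezier3_point(const float p[4][3], float u, float out[3])
{
    float v  = 1.0f - u;
    float B0 = v*v*v;           /* B0,3(u) */
    float B1 = 3.0f*u*v*v;      /* B1,3(u) */
    float B2 = 3.0f*u*u*v;      /* B2,3(u) */
    float B3 = u*u*u;           /* B3,3(u) */
    int i;
    for (i = 0; i < 3; i++)
        out[i] = B0*p[0][i] + B1*p[1][i] + B2*p[2][i] + B3*p[3][i];
}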
39-Building Polygonal Models of Surfaces VU
Keep polygon orientations consistent. Make sure that when viewed from the
outside, all the polygons on the surface are oriented in the same direction (all
clockwise or all counterclockwise). Consistent orientation is important for
polygon culling and two-sided lighting. Try to get this right the first time, since it's
excruciatingly painful to fix the problem later. (If you use glScale*() to reflect
geometry around some axis of symmetry, you might change the orientation with
glFrontFace() to keep the orientations consistent.)
When you subdivide a surface, watch out for any nontriangular polygons. The
three vertices of a triangle are guaranteed to lie on a plane; any polygon with four
or more vertices might not. Nonplanar polygons can be viewed from some
orientation such that the edges cross each other, and OpenGL might not render
such polygons correctly.
There's always a trade-off between the display speed and the quality of the image.
If you subdivide a surface into a small number of polygons, it renders quickly but
might have a jagged appearance; if you subdivide it into millions of tiny polygons,
it probably looks good but might take a long time to render. Ideally, you can
provide a parameter to the subdivision routines that indicates how fine a
subdivision you want, and if the object is farther from the eye, you can use a
coarser subdivision. Also, when you subdivide, use large polygons where the
surface is relatively flat, and small polygons in regions of high curvature.
For high-quality images, it's a good idea to subdivide more on the silhouette edges
than in the interior. If the surface is to be rotated relative to the eye, this is tougher
to do, since the silhouette edges keep moving. Silhouette edges occur where the
normal vectors are perpendicular to the vector from the surface to the viewpoint -
that is, when their vector dot product is zero. Your subdivision algorithm might
choose to subdivide more if this dot product is near zero.
Try to avoid T-intersections in your models (see Figure 1). As shown, there's no
guarantee that the line segments AB and BC lie on exactly the same pixels as the
segment AC. Sometimes they do, and sometimes they don't, depending on the
transformations and orientation. This can cause cracks to appear intermittently in
the surface.
If you're constructing a closed surface, make sure to use exactly the same numbers
for coordinates at the beginning and end of a closed loop, or you can get gaps and
cracks due to numerical round-off. Here's a two-dimensional example of bad code:
#define PI 3.14159265
#define EDGES 30
/* draw a circle */
glBegin(GL_LINE_STRIP);
for (i = 0; i <= EDGES; i++)
    /* when i == EDGES this repeats the i == 0 vertex only if the
       sine and cosine come out exactly equal */
    glVertex2f(cos((2*PI*i)/EDGES), sin((2*PI*i)/EDGES));
glEnd();
The edges meet exactly only if your machine manages to calculate the sine and cosine of
0 and of (2*PI*EDGES/EDGES) and gets exactly the same values. If you trust the
floating-point unit on your machine to do this right, the authors have a bridge they'd like
to sell you. To correct the code, make sure that when i == EDGES, you use 0 for the
sine and cosine, not 2*PI*EDGES/EDGES. (Or simpler still, use GL_LINE_LOOP
instead of GL_LINE_STRIP, and change the loop termination condition to i < EDGES.)
To illustrate some of the considerations that arise in approximating a surface, let's look at
some example code sequences. This code concerns the vertices of a regular icosahedron
(which is a Platonic solid composed of twenty faces that span twelve vertices, each face
of which is an equilateral triangle). An icosahedron can be considered a rough
approximation for a sphere. Example 1 defines the vertices and triangles making up an
icosahedron and then draws the icosahedron.
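The vertex and triangle tables referenced below were dropped from these notes; the version in the OpenGL Programming Guide, from which this example is taken, defines them as follows:

#define X .525731112119133606
#define Z .850650808352039932

static GLfloat vdata[12][3] = {
   {-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},
   {0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},
   {Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0}
};
static GLuint tindices[20][3] = {
   {0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},
   {8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},
   {7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6},
   {6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11}
};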
int i;
glBegin(GL_TRIANGLES);
for (i = 0; i < 20; i++) {
/* color information here */
glVertex3fv(&vdata[tindices[i][0]][0]);
glVertex3fv(&vdata[tindices[i][1]][0]);
glVertex3fv(&vdata[tindices[i][2]][0]);
}
glEnd();
The strange numbers X and Z are chosen so that the distance from the origin to any of the vertices of the icosahedron is 1.0. The coordinates of the twelve vertices are given in the array vdata[][], where the zeroth vertex is {-X, 0.0, Z}, the first is {X, 0.0, Z}, and so on. The array tindices[][] tells how to link the vertices to make triangles. For
example, the first triangle is made from the zeroth, fourth, and first vertex. If you take the
vertices for triangles in the order given, all the triangles have the same orientation.
The line that mentions color information should be replaced by a command that sets the
color of the ith face. If no code appears here, all faces are drawn in the same color, and
it'll be impossible to discern the three-dimensional quality of the object. An alternative to
explicitly specifying colors is to define surface normals and use lighting, as described in
the next subsection.
Note: In all the examples described in this section, unless the surface is to be drawn only
once, you should probably save the calculated vertex and normal
coordinates so that the calculations don't need to be repeated each time that the surface is
drawn. This can be done using your own data structures or by
constructing display lists.
The function normcrossprod() produces the normalized cross product of two vectors, as
shown in Example 3.
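The body of Example 3 is missing from these notes; a sketch consistent with the version in the OpenGL Programming Guide is:

/* requires <math.h> for sqrt() */
void normalize(float v[3])
{
   GLfloat d = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
   if (d == 0.0) return;           /* zero-length vector: leave it alone */
   v[0] /= d; v[1] /= d; v[2] /= d;
}

void normcrossprod(float v1[3], float v2[3], float out[3])
{
   /* cross product of v1 and v2 ... */
   out[0] = v1[1]*v2[2] - v1[2]*v2[1];
   out[1] = v1[2]*v2[0] - v1[0]*v2[2];
   out[2] = v1[0]*v2[1] - v1[1]*v2[0];
   normalize(out);                 /* ... scaled to unit length */
}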
If you're using an icosahedron as an approximation for a shaded sphere, you'll want to use
normal vectors that are perpendicular to the true surface of the sphere, rather than being
perpendicular to the faces. For a sphere, the normal vectors are simple; each points in the
same direction as the vector from the origin to the corresponding vertex. Since the
icosahedron vertex data is for an icosahedron of radius 1, the normal and vertex data is
identical. Here is the code that would draw an icosahedral approximation of a smoothly
shaded sphere
glBegin(GL_TRIANGLES);
for (i = 0; i < 20; i++) {
glNormal3fv(&vdata[tindices[i][0]][0]);
glVertex3fv(&vdata[tindices[i][0]][0]);
glNormal3fv(&vdata[tindices[i][1]][0]);
glVertex3fv(&vdata[tindices[i][1]][0]);
glNormal3fv(&vdata[tindices[i][2]][0]);
glVertex3fv(&vdata[tindices[i][2]][0]);
}
glEnd();
Generalized Subdivision
A recursive subdivision technique such as the one described in Example 5 can be used for
other types of surfaces. Typically, the recursion ends either if a certain depth is reached or
if some condition on the curvature is satisfied (highly curved parts of surfaces look better
with more subdivision).
To look at a more general solution to the problem of subdivision, consider an arbitrary
surface parameterized by two variables u[0] and u[1]. Suppose that two routines are provided (their exact prototypes were dropped from these notes; these match the usage described below):

void surf(GLfloat u[2], GLfloat vertex[3], GLfloat normal[3]);
GLfloat curv(GLfloat u[2]);
If surf() is passed u[], the corresponding three-dimensional vertex and normal vectors (of
length 1) are returned. If u[] is passed to curv(), the curvature of the surface at that point
is calculated and returned. (See an introductory textbook on differential geometry for
more information about measuring surface curvature.)
Example 6 shows the recursive routine that subdivides a triangle either until the
maximum depth is reached or until the maximum curvature at the three
vertices is less than some cutoff.
void subdivide(float u1[2], float u2[2], float u3[2], float cutoff, long depth)
{
GLfloat v1[3], v2[3], v3[3], n1[3], n2[3], n3[3];
GLfloat u12[2], u23[2], u31[2];
GLint i;
if (depth == maxdepth || (curv(u1) < cutoff && curv(u2) < cutoff && curv(u3) <
cutoff)) {
surf(u1, v1, n1); surf(u2, v2, n2); surf(u3, v3, n3);
glBegin(GL_POLYGON);
glNormal3fv(n1); glVertex3fv(v1);
glNormal3fv(n2); glVertex3fv(v2);
glNormal3fv(n3); glVertex3fv(v3);
glEnd();
return;
}
for (i = 0; i < 2; i++) {
u12[i] = (u1[i] + u2[i])/2.0;
u23[i] = (u2[i] + u3[i])/2.0;
u31[i] = (u3[i] + u1[i])/2.0;
}
subdivide(u1, u12, u31, cutoff, depth+1);
subdivide(u2, u23, u12, cutoff, depth+1);
subdivide(u3, u31, u23, cutoff, depth+1);
subdivide(u12, u23, u31, cutoff, depth+1);
}
40-Fractals VU
Fractals are geometric patterns that are repeated at ever smaller scales to produce irregular shapes and surfaces that cannot be represented by classical geometry. Fractals are used in computer modeling of irregular patterns and structures in nature.
According to Webster's Dictionary a fractal is defined as being "derived from the Latin
word fractus meaning broken, uneven: any of various extremely irregular curves or
shapes that repeat themselves at any scale on which they are examined."
"I coined fractal from the Latin adjective fractus. The corresponding Latin verb
frangere means 'to break:' to create irregular fragments. It is therefore sensible -
and how appropriate for our needs! - that, in addition to 'fragmented' (as in
fraction or refraction), fractus should also mean 'irregular,' both meanings being
preserved in fragment"[3]
1. A line segment can be broken into smaller pieces, each of which is 1/4th the length of the original; it takes 4 of the smaller pieces to recreate the original.
2. The square below is also broken into smaller pieces, each of which is 1/4th the size of the original. In this case it takes 16 of the smaller pieces to create the original.
3. As with the others, the cube is also broken down into smaller cubes, each 1/4th the size of the original. It takes 64 of these smaller cubes to create the original cube.
N = S^D

where N is the number of small pieces that go into the larger one, S is the scale factor by which the smaller pieces compare to the larger one, and D is the dimension.
We now have the tools to be able to calculate the dimension. Just solve for D in the
previous equation. When we do this we find that the Dimension is:
D = log N / log S
This dimension is the Hausdorff-Besicovitch dimension.
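As a quick worked check of this formula (an added example, not from the notes): for the line, square, and cube above, S = 4 and N is 4, 16, and 64, giving D = log 4/log 4 = 1, D = log 16/log 4 = 2, and D = log 64/log 4 = 3, the expected integer dimensions. For the Koch curve described below, each segment is replaced by N = 4 pieces at scale S = 3, so D = log 4/log 3 ≈ 1.26, a fractional dimension.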
Koch Curve
Euclidean Geometry is the stuff we all learn in school. It is the geometry of lines, planes,
circles etc. It's simple and it works, and for a long time, mathematicians thought it was a
reasonable representation of nature. However, people soon discovered that they could
draw (or at least begin to draw) certain curves and surfaces that could not be described by
the classical geometry.
How hard can it be to draw a curve? Let us attempt to describe one. This is the Koch curve:
Draw a triangle.
If we say that each line is of length 1, then the total length of the curve is 3.
Now take each edge in turn and add another triangle, a third of the size. So now there are
12 edges and 12 points. The length of the curve is now 4. Repeat the process again, and
again, forever.
Length = 5.3333
Length = 7.1111
Length = 12.6420
As we continue adding edges, the length of the curve increases. If we add edges forever,
then the length of the curve reaches infinity, but the whole curve nevertheless covers a finite
area. The curve is infinitely detailed. No matter how closely we zoom into the image, it
always shows up more detail.
Self Similarity
So what do these mathematical curiosities have to do with the real world? Well,
everything as it turns out. Such objects turn up all the time in the natural world. Animals,
plants, rocks, crystals and liquids all exhibit fractal properties and self similarity.
Let’s take a look at a common plant, the fern. The fern is typical of many plants in that it
exhibits self similarity. A fern consists of a leaf, which is made up from many similar, but
smaller leaves, each of which, in turn, is made from even smaller leaves. The closer we
look the more detail we see.
The following figure is a standard fern, which we may well have found while being dragged on long walks in the country by our parents, long before we were able to fully appreciate the beauty of nature. We see the overall theme of repeating leaves. Each smaller leaf looks similar to the larger leaf.
Looking a little closer, we can see that those small leaves are made up from even smaller
leaves.
Of course, in reality, a fern does have a smallest leaf, though we’re sure every fern aspires
to be like that one. What is interesting is that the program to generate this image is only a
few lines long. The same tends to be true for all fractals. A very simple algorithm can
explain an infinitely complex object.
Fractal Geometry
Almost all geometric forms used for building man made objects belong to Euclidean
geometry, they are comprised of lines, planes, rectangular volumes, arcs, cylinders,
spheres, etc. These elements can be classified as belonging to an integer dimension, 1, 2,
or 3. This concept of dimension can be described both intuitively and mathematically.
Intuitively we say that a line is one dimensional because it only takes 1 number to
uniquely define any point on it. That one number could be the distance from the start of
the line. This applies equally well to the circumference of a circle, a curve, or the
boundary of any object.
A plane is two dimensional since in order to uniquely define any point on its surface we
require two numbers. There are many ways to arrange the definition of these two numbers
but we normally create an orthogonal coordinate system. Other examples of two
dimensional objects are the surface of a sphere or an arbitrary twisted plane.
The volume of some solid object is 3 dimensional on the same basis as above: it takes three numbers to uniquely define any point within the object.
The process starts with a single line segment and continues forever. The first few
iterations of this procedure are shown below.
This demonstrates how a very simple generation rule for this shape can generate some
unusual (fractal) properties. Unlike Euclidean shapes this object has detail at all levels. If
one magnifies a Euclidean shape such as the circumference of a circle it becomes a
different shape, namely a straight line. If we magnify this fractal more and more detail is
uncovered, the detail is self similar or rather it is exactly self similar. Put another way,
any magnified portion is identical to any other magnified portion.
Note also that the "curve" on the right is not a fractal but only an approximation of one.
This is no different from when one draws a circle, it is only an approximation to a perfect
circle. At each iteration the length of the curve increases by a factor of 4/3. Thus the
limiting curve is of infinite length and indeed the length between any two points of the
curve is infinite. This curve manages to compress an infinite length into a finite area of
the plane without intersecting itself! Considering the intuitive notion of 1 dimensional
shapes, although this object appears to be a curve with one starting point and one end
point, it is not possible to uniquely specify any position along the curve with one number
as we expect to be able to do with Euclidean curves which are 1 dimensional. Although
the method of creating this curve is straightforward, there is no algebraic formula that
describes the points on the curve. Some of the major differences between fractal and
Euclidean geometry are outlined in the following table.
Firstly, the recognition of fractals is very modern; they have only been formally studied in the last 10 years, compared to Euclidean geometry, which goes back over 2000 years.
Secondly, whereas Euclidean shapes normally have a few characteristic sizes or length scales (e.g. the radius of a circle or the length of a side of a cube), fractals have no characteristic sizes. Fractal shapes are self similar and independent of size or scaling.
Third, Euclidean geometry provides a good description of man-made objects, whereas fractals are required for a representation of naturally occurring geometries. It is likely that this limitation of our traditional language of shape is responsible for the striking difference between mass-produced objects and natural shapes. Finally, Euclidean geometries are defined by algebraic formulae; for example,

x² + y² + z² = r²

defines a sphere. Fractals are normally the result of an iterative or recursive construction or algorithm.
L-Systems
The following is based on L-Systems as described in "Lecture Notes in Biomathematics" by Przemyslaw Prusinkiewicz and James Hanan. A brief description of a 0L system will be presented here, but for a more complete description the user should consult the literature.
Recent usage of L-Systems is for the creation of realistic-looking objects that occur in nature, and in particular the branching structure of plants. One of the important characteristics of L-Systems is that only a small amount of information is required to represent very complex objects. So while the bushes in Figure 9 contain many thousands of lines, they can be described in a database by only a few bytes of data; the actual bushes are "grown" only when required for visual presentation. Using suitably designed algorithms it is possible to create L-System production rules that will generate a particular class of plant; a small sketch of the string-rewriting idea follows.
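The following C fragment is illustrative only (the rule, axiom, and buffer sizes are assumptions, not from the course): it performs 0L-system rewriting passes with the production F -> F+F--F+F, a rule often used for Koch-like curves, producing a string that a turtle-graphics renderer would interpret.

#include <stdio.h>
#include <string.h>

/* One 0L-system rewriting pass: every 'F' is replaced by the
   production F -> F+F--F+F; all other symbols are copied through. */
static void rewrite(const char *in, char *out, size_t outsize)
{
    const char *rule = "F+F--F+F";
    size_t used = 0;
    for (; *in != '\0'; in++) {
        if (*in == 'F') {
            size_t n = strlen(rule);
            if (used + n >= outsize) break;   /* avoid buffer overflow */
            memcpy(out + used, rule, n);
            used += n;
        } else if (used + 1 < outsize) {
            out[used++] = *in;
        }
    }
    out[used] = '\0';
}

int main(void)
{
    static char a[1 << 16], b[1 << 16];
    strcpy(a, "F");                  /* the axiom */
    for (int i = 0; i < 3; i++) {    /* three derivation steps */
        rewrite(a, b, sizeof(b));
        strcpy(a, b);
    }
    printf("%s\n", a);  /* string to be interpreted by a turtle renderer */
    return 0;
}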
Further examples of L-System-generated plants are shown in the accompanying figures.
Featured on the cover of the HPC (High Performance Computing) magazine, 3 August
2001.
each with an assigned probability. To run the system, an initial point is chosen, and on each iteration one of the transformations is chosen randomly according to the assigned probabilities; the resulting points (xn, yn) are drawn on the page. As in the case of L-Systems, if the IFS code for a desired image can be determined (by something called the Collage theorem), then large data compression ratios can be achieved. Instead of storing the geometry of the very complex object, just the IFS generator needs to be stored, and the image can be generated when required. The fundamental iterative process involves replacing rectangles with a series of rectangles called the generator. The rectangles are replaced by a suitably scaled, translated, and rotated version of the generator.
For example, consider the generator shown in the accompanying figure. It consists of three rectangles, each with its own center, dimensions, and rotation angle. The initial conditions usually consist of a single square; the first iteration then consists of replacing this square by a suitably positioned, scaled, and rotated version of the generator.
The next iteration involves replacing each of the rectangles in the current system by suitably positioned, scaled, and rotated versions of the generator, resulting in the following.
[Figures: an IFS fern, a random IFS, and Wada basins.]
41-Viewing VU
Note that these steps correspond to the order in which we specify the desired
transformations in our program, not necessarily the order in which the relevant
mathematical operations are performed on an object's vertices. The viewing
transformations must precede the modeling transformations in our code, but we can
specify the projection and viewport transformations at any point before drawing occurs.
Figure 2 shows the order in which these operations occur on our computer.
Each of these transformations is applied by multiplying the vertex v by a 4x4 matrix M:

v' = Mv
(Remember that vertices always have four coordinates (x, y, z, w), though in most cases
w is 1 and for two-dimensional data z is 0.) Note that viewing and modeling
transformations are automatically applied to surface normal vectors, in addition to
vertices. (Normal vectors are used only in eye coordinates.) This ensures that the normal
vector's relationship to the vertex data is properly preserved.
The viewing and modeling transformations we specify are combined to form the
modelview matrix, which is applied to the incoming object coordinates to yield eye
coordinates. Next, if we've specified additional clipping planes to remove certain objects
from the scene or to provide cutaway views of objects, these clipping planes are applied.
After that, OpenGL applies the projection matrix to yield clip coordinates. This
transformation defines a viewing volume; objects outside this volume are clipped so that
they're not drawn in the final scene. After this point, the perspective division is performed
by dividing coordinate values by w, to produce normalized device coordinates. Finally,
the transformed coordinates are converted to window coordinates by applying the
viewport transformation. We can manipulate the dimensions of the viewport to cause the
final image to be enlarged, shrunk, or stretched. We might correctly suppose that the x
and y coordinates are sufficient to determine which pixels need to be drawn on the screen.
However, all the transformations are performed on the z coordinates as well. This way, at
the end of this transformation process, the z values correctly reflect the depth of a given
vertex (measured in distance away from the screen). One use for this depth value is to
eliminate unnecessary drawing. For example, suppose two vertices have the same x and y
values but different z values. OpenGL can use this information to determine which
surfaces are obscured by other surfaces and can then avoid drawing the hidden surfaces.
As we've probably guessed by now, getting the most out of this lecture requires some
familiarity with matrix mathematics, which we have covered in previous lectures.
void display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glColor3f (1.0, 1.0, 1.0);
glLoadIdentity (); /* clear the matrix */
/* viewing transformation */
gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glScalef (1.0, 2.0, 1.0); /* modeling transformation */
glutWireCube (1.0);
glFlush ();
}
void reshape (int w, int h)
{
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glFrustum (-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
glMatrixMode (GL_MODELVIEW);
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
init ();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
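The init() routine called from main() is not reproduced in the notes; a minimal version consistent with this example might be:

void init (void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);  /* black background */
glShadeModel (GL_FLAT);
}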
Recall that the viewing transformation is analogous to positioning and aiming a camera.
In this code example, before the viewing transformation can be specified, the current
matrix is set to the identity matrix with glLoadIdentity(). This step is necessary since
most of the transformation commands multiply the current matrix by the specified matrix
and then set the result to be the current matrix. If we don't clear the current matrix by
loading it with the identity matrix, we continue to combine previous transformation
matrices with the new one we supply. In some cases, we do want to perform such
combinations, but we also need to clear the matrix sometimes. In Example 1, after the
matrix is initialized, the viewing transformation is specified with gluLookAt(). The
arguments for this command indicate where the camera (or eye position) is placed, where
it is aimed, and which way is up. The arguments used here place the camera at (0, 0, 5),
aim the camera lens towards (0, 0, 0), and specify the up-vector as (0, 1, 0). The up-vector
defines a unique orientation for the camera.
If gluLookAt() is not called, the camera has a default position and orientation. By
default, the camera is situated at the origin, points down the negative z-axis, and has an
up-vector of (0, 1, 0). So in Example 1, the overall effect is that gluLookAt() moves the
camera 5 units along the z-axis.
We use the modeling transformation to position and orient the model. For example, we
can rotate, translate, or scale the model - or perform some combination of these
operations. In Example 1, glScalef() is the modeling transformation that is used. The
arguments for this command specify how scaling should occur along the three axes. If all
the arguments are 1.0, this command has no effect. In Example 1, the cube is drawn twice
as large in the y direction. Thus, if one corner of the cube had originally been at (3.0, 3.0,
3.0), that corner would wind up being drawn at (3.0, 6.0, 3.0). The effect of this modeling
transformation is to transform the cube so that it isn't a cube but a rectangular box.
Try This
Change the gluLookAt() call in Example 1 to the modeling transformation
glTranslatef() with parameters (0.0, 0.0, -5.0). The result should look exactly the same as
when we used gluLookAt(). Why are the effects of these two commands similar?
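That is, the viewing step in display() becomes:

glTranslatef (0.0, 0.0, -5.0); /* viewing transformation */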
Note that instead of moving the camera (with a viewing transformation) so that the cube
could be viewed, we could have moved the cube away from the camera (with a modeling
transformation). This duality in the nature of viewing and modeling transformations is
why we need to think about the effect of both types of transformations simultaneously. It
doesn't make sense to try to separate the effects, but sometimes it's easier to think about
them one way rather than the other. This is also why modeling and viewing
transformations are combined into the modelview matrix before the transformations are
applied.
Also note that the modeling and viewing transformations are included in the display()
routine, along with the call that's used to draw the cube, glutWireCube(). This way,
display() can be used repeatedly to draw the contents of the window if, for example, the
window is moved or uncovered, and we've ensured that each time, the cube is drawn in
the desired way, with the appropriate transformations. The potential repeated use of
display() underscores the need to load the identity matrix before performing the viewing
and modeling transformations, especially when other transformations might be performed
between calls to display().
Specifying the projection transformation is like choosing a lens for a camera. We can
think of this transformation as determining what the field of view or viewing volume is
and therefore what objects are inside it and to some extent how they look. This is
equivalent to choosing among wide-angle, normal, and telephoto lenses, for example.
With a wide-angle lens, we can include a wider scene in the final photograph than with a
telephoto lens, but a telephoto lens allows us to photograph objects as though they're
closer to us than they actually are. In computer graphics, we don't have to pay $10,000 for
a 2000-millimeter telephoto lens; once we've bought our graphics workstation, all we
need to do is use a smaller number for our field of view. In addition to the field-of-view
considerations, the projection transformation determines how objects are projected onto
the screen, as its name suggests. Two basic types of projections are provided for us by
OpenGL, along with several corresponding commands for describing the relevant
parameters in different ways. One type is the perspective projection, which matches how
we see things in daily life. Perspective makes objects that are farther away appear
smaller; for example, it makes railroad tracks appear to converge in the distance. If we're
trying to make realistic pictures, we'll want to choose perspective projection, which is
specified with the glFrustum() command in this code example. The other type of
projection is orthographic, which maps objects directly onto the screen without affecting
their relative size. Orthographic projection is used in architectural and computer-aided
design applications where the final image needs to reflect the measurements of objects
rather than how they might look. Architects create perspective drawings to show how
particular buildings or interior spaces look when viewed from various vantage points; the
need for orthographic projection arises when blueprint plans or elevations are generated,
which are used in the construction of buildings. Before glFrustum() can be called to set
the projection transformation, the matrix mode must be set to GL_PROJECTION with
glMatrixMode(), as is done in the reshape() routine of Example 1.
Try This
Change the glFrustum() call in Example 1 to the more commonly used Utility Library
routine gluPerspective() with parameters (60.0, 1.0, 1.5, 20.0). Then experiment with
different values, especially for fov (field of view) and the near and far clipping planes.
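A sketch of the modified reshape() routine with those parameters:

void reshape (int w, int h)
{
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (60.0, 1.0, 1.5, 20.0); /* fov, aspect ratio, near, far */
glMatrixMode (GL_MODELVIEW);
}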
Viewing and modeling transformations are inextricably related in OpenGL and are in fact
combined into a single modelview matrix. (See "A Simple Example: Drawing a Cube.")
One of the toughest problems newcomers to computer graphics face is understanding the
effects of combined three-dimensional transformations. As we've already seen, there are
alternative ways to think about transformations - do we want to move the camera in one
direction, or move the object in the opposite direction? Each way of thinking about
transformations has advantages and disadvantages, but in some cases one way more
naturally matches the effect of the intended transformation. If we can find a natural
approach for our particular application, it's easier to visualize the necessary
transformations and then write the corresponding code to specify the matrix
manipulations. The first part of this section discusses how to think about transformations;
later, specific commands are presented. For now, we use only the matrix-manipulation
commands we've already seen. Finally, keep in mind that we must call glMatrixMode()
with GL_MODELVIEW as its argument prior to performing modeling or viewing
transformations.
Now let's talk about the order in which we specify a series of transformations. All
viewing and modeling transformations are represented as 4 × 4 matrices. Each successive
glMultMatrix*() or transformation command multiplies a new 4 × 4 matrix M by the
current modelview matrix C to yield CM. Finally, vertices v are multiplied by the current
modelview matrix. This process means that the last transformation command called in our
program is actually the first one applied to the vertices: CMv. Thus, one way of looking
at it is to say that we have to specify the matrices in the reverse order. Like many other
things, however, once we've gotten used to thinking about this correctly, backward will
seem like forward. Consider the following code sequence, which draws a single point
using three transformations:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(N); /* apply transformation N (a 4 × 4 matrix stored as GLfloat N[16]) */
glMultMatrixf(M); /* apply transformation M */
glMultMatrixf(L); /* apply transformation L */
glBegin(GL_POINTS);
glVertex3fv(v); /* draw transformed vertex v (GLfloat v[3]) */
glEnd();
With this code, the modelview matrix successively contains I, N, NM, and finally NML,
where I represents the identity matrix. The transformed vertex is NMLv. Thus, the vertex
transformation is N(M(Lv)) - that is, v is multiplied first by L, the resulting Lv is
multiplied by M, and the resulting MLv is multiplied by N. Notice that the
transformations to vertex v effectively occur in the opposite order than they were
specified. (Actually, only a single multiplication of a vertex by the modelview matrix
occurs; in this example, the N, M, and L matrices are already multiplied into a single
matrix before it's applied to v.)
Thus, if we like to think in terms of a grand, fixed coordinate system - in which matrix
multiplications affect the position, orientation, and scaling of our model - we have to
think of the multiplications as occurring in the opposite order from how they appear in the
code. Using the simple example shown on the left side of Figure 4 (a rotation about the
origin and a translation along the x-axis), if we want the object to appear on the axis after
the operations, the rotation must occur first, followed by the translation. To do this, we'll
need to reverse the order of operations, so the code looks something like this (where R is
the rotation matrix and T is the translation matrix):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(T); /* translation */
glMultMatrixf(R); /* rotation */
draw_the_object();
Another way to view matrix multiplications is to forget about a grand, fixed coordinate
system in which our model is transformed and instead imagine that a local coordinate
system is tied to the object we’re drawing. All operations occur relative to this changing
coordinate system. With this approach, the matrix multiplications now appear in the
natural order in the code. (Regardless of which analogy we're using, the code is the same,
but how we think about it differs.) To see this in the translation-rotation example, begin
by visualizing the object with a coordinate system tied to it. The translation operation
moves the object and its coordinate system down the x-axis. Then, the rotation occurs
about the (now-translated) origin, so the object rotates in place in its position on the axis.
This approach is what we should use for applications such as articulated robot arms,
where there are joints at the shoulder, elbow, and wrist, and on each of the fingers. To
figure out where the tips of the fingers go relative to the body, we'd like to start at the
shoulder, go down to the wrist, and so on, applying the appropriate rotations and
translations at each joint. Thinking about it in reverse would be far more confusing. This
second approach can be problematic, however, in cases where scaling occurs, and
especially so when the scaling is non-uniform (scaling different amounts along the
different axes). After uniform scaling, translations move a vertex by a multiple of what
they did before, since the coordinate system is stretched. Non-uniform scaling mixed with
rotations may make the axes of the local coordinate system non-perpendicular.
Modeling transformations are discussed first, even though viewing transformations are actually
issued first. This order for discussion also matches the way many programmers think
when planning their code: Often, they write all the code necessary to compose the scene,
which involves transformations to position and orient objects correctly relative to each
other. Next, they decide where they want the viewpoint to be relative to the scene they've
composed, and then they write the viewing transformations accordingly.
Modeling Transformations
Translate
void glTranslate{fd}(TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that moves (translates) an object by the given x,
y, and z values (or moves the local coordinate system by the same amounts).
Note that using (0.0, 0.0, 0.0) as the argument for glTranslate*() is the identity operation
- that is, it has no effect on an object or its local coordinate system.
Rotate
void glRotate{fd}(TYPE angle, TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that rotates an object (or the local coordinate
system) in a counterclockwise direction about the ray from the origin through the point
(x, y, z). The angle parameter specifies the angle of rotation in degrees.
The effect of glRotatef(45.0, 0.0, 0.0, 1.0), which is a rotation of 45 degrees about the z-
axis, is shown in Figure 6.
Note that an object that lies farther from the axis of rotation is more dramatically rotated
(has a larger orbit) than an object drawn near the axis. Also, if the angle argument is zero,
the glRotate*() command has no effect.
Scale
void glScale{fd}(TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an object
along the axes. Each x, y, and z coordinate of every point in the object is multiplied by the
corresponding argument x, y, or z. With the local coordinate system approach, the local
coordinate axes are stretched, shrunk, or reflected by the x, y, and z factors, and the
associated object is transformed with them.
Figure 7 shows the effect of glScalef(2.0, -0.5, 1.0).
glScale*() is the only one of the three modeling transformations that changes the apparent
size of an object: Scaling with values greater than 1.0 stretches an object, and using
values less than 1.0 shrinks it. Scaling with a -1.0 value reflects an object across an axis.
The identity values for scaling are (1.0, 1.0, 1.0). In general, we should limit our use of
glScale*() to those cases where it is necessary. Using glScale*() decreases the
performance of lighting calculations, because the normal vectors have to be renormalized
after transformation.
Note: A scale value of zero collapses all object coordinates along that axis to zero. It's
usually not a good idea to do this, because such an operation cannot be undone.
Mathematically speaking, the matrix cannot be inverted, and inverse matrices are required
for certain lighting operations. Sometimes collapsing coordinates does make sense,
however; the calculation of shadows on a planar surface is a typical application. In
general, if a coordinate system is to be collapsed, the projection matrix should be used
rather than the modelview matrix.
Example 2 is a portion of a program that renders a triangle four times, as shown in Figure
8: solid, then translated (dashed lines), scaled (long-dashed lines), and rotated (dotted lines).
glLoadIdentity();
draw_triangle();                    /* solid lines */
glEnable(GL_LINE_STIPPLE);          /* dashed lines */
glLineStipple(1, 0xF0F0);
glLoadIdentity();
glTranslatef(-20.0, 0.0, 0.0);
draw_triangle();
glLineStipple(1, 0xF00F);           /* long dashed lines */
glLoadIdentity();
glScalef(1.5, 0.5, 1.0);
draw_triangle();
glLineStipple(1, 0x8888);           /* dotted lines */
glLoadIdentity();
glRotatef (90.0, 0.0, 0.0, 1.0);
draw_triangle ();
glDisable (GL_LINE_STIPPLE);
Viewing Transformations
A viewing transformation can be constructed in several ways:
Use one or more modeling transformation commands (that is, glTranslate*() and
glRotate*()). We can think of the effect of these transformations as moving the camera
position or as moving all the objects in the world relative to a stationary camera.
Use the Utility Library routine gluLookAt() to define a line of sight. This routine
encapsulates a series of rotation and translation commands.
Create our own utility routine that encapsulates rotations and translations. Some
applications might require custom routines that allow us to specify the viewing
transformation in a convenient way. For example, we might want to specify the roll,
pitch, and heading rotation angles of a plane in flight, or we might want to specify a
transformation in terms of polar coordinates for a camera that's orbiting around an object.
In the simplest case, we can move the viewpoint backward, away from the objects; this
has the same effect as moving the objects forward, or away from the viewpoint.
Remember that by default forward is down the negative z-axis; if we rotate the viewpoint,
forward has a different meaning. So, to put 5 units of distance between the viewpoint and
the objects by moving the viewpoint, as shown in Figure 10, use glTranslatef(0.0, 0.0, -5.0);
This routine moves the objects in the scene -5 units along the z axis. This is also
equivalent to moving the camera +5 units along the z axis.
Now suppose we want to view the objects from the side. Should we issue a rotate
command before or after the translate command? If we're thinking in terms of a grand,
fixed coordinate system, first imagine both the object and the camera at the origin. We
could rotate the object first and then move it away from the camera so that the desired
side is visible. Since we know that with the fixed coordinate system approach, commands
have to be issued in the opposite order in which they should take effect, we know that we
need to write the translate command first in our code and follow it with the rotate
command. Now let's use the local coordinate system approach. In this case, think about
moving the object and its local coordinate system away from the origin; then, the rotate
command is carried out using the now-translated coordinate system. With this approach,
commands are issued in the order in which they're applied, so once again the translate
command comes first. Thus, the sequence of transformation commands to produce the
desired result is shown below.
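The commands themselves are missing from the notes; a sketch consistent with the discussion (assuming a 90-degree rotation about the y-axis to bring the side of the object into view):

glTranslatef(0.0, 0.0, -5.0);    /* first in the code: move everything back 5 units */
glRotatef(90.0, 0.0, 1.0, 0.0);  /* then rotate to view the object from the side */
draw_the_object();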
If we're having trouble keeping track of the effect of successive matrix multiplications, try
using both the fixed and local coordinate system approaches and see whether one makes
more sense to us. Note that with the fixed coordinate system, rotations always occur about
the grand origin, whereas with the local coordinate system, rotations occur about the
origin of the local system. We might also try using the gluLookAt() utility routine
described in the next section.
Often, programmers construct a scene around the origin or some other convenient
location, then they want to look at it from an arbitrary point to get a good view of it. As
its name suggests, the gluLookAt() utility routine is designed for just this purpose. It
takes three sets of arguments, which specify the location of the viewpoint, define a
reference point toward which the camera is aimed, and indicate which direction is up.
Choose the viewpoint to yield the desired view of the scene. The reference point is
typically somewhere in the middle of the scene. (If we've built our scene at the origin, the
reference point is probably the origin.) It might be a little trickier to specify the correct
up-vector. Again, if we've built some real-world scene at or around the origin and if we've
been taking the positive y-axis to point upward, then that's our up-vector for
gluLookAt(). However, if we're designing a flight simulator, up is the direction
perpendicular to the plane's wings, from the plane toward the sky when the plane is right-
side up on the ground.
The gluLookAt() routine is particularly useful when we want to pan across a landscape,
for instance. With a viewing volume that's symmetric in both x and y, the (eyex, eyey,
eyez) point specified is always in the center of the image on the screen, so we can use a
series of commands to move this point slightly, thereby panning across the scene.
void gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx,
GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy, GLdouble upz);
Defines a viewing matrix and multiplies it to the right of the current matrix. The desired
viewpoint is specified by eyex, eyey, and eyez. The centerx, centery, and centerz
arguments specify any point along the desired line of sight, but typically they're some
point in the center of the scene being looked at. The upx, upy, and upz arguments indicate
which direction is up (that is, the direction from the bottom to the top of the viewing
volume).
In the default position, the camera is at the origin, is looking down the negative z-axis,
and has the positive y-axis as straight up. This is the same as calling
gluLookAt (0.0, 0.0, 0.0, 0.0, 0.0, -100.0, 0.0, 1.0, 0.0);
The z value of the reference point is -100.0, but could be any negative z, because the line
of sight will remain the same. In this case, we don't actually want to call gluLookAt(),
because this is the default and we are already there! (The lines extending from the camera
represent the viewing volume, which indicates its field of view.)
Figure 12 shows the effect of a typical gluLookAt() routine. The camera position (eyex,
eyey, eyez) is at (4, 2, 1). In this case, the camera is looking right at the model, so the
reference point is at (2, 4, -3). An orientation vector of (2, 2, -1) is chosen to rotate the
viewpoint to this 45-degree angle.
Note that gluLookAt() is part of the Utility Library rather than the basic OpenGL library.
This isn't because it's not useful, but because it encapsulates several basic OpenGL
commands - specifically, glTranslate*() and glRotate*(). To see this, imagine a camera
located at an arbitrary viewpoint and oriented according to a line of sight, both as
specified with gluLookAt() and a scene located at the origin. To "undo" what
gluLookAt() does, we need to transform the camera so that it sits at the origin and points
down the negative z-axis, the default position. A simple translate moves the camera to the
origin. We can easily imagine a series of rotations about each of the three axes of a fixed
coordinate system that would orient the camera so that it pointed toward negative z
values. Since OpenGL allows rotation about an arbitrary axis, we can accomplish any
desired rotation of the camera with a single glRotate*() command.
Note: We can have only one active viewing transformation; we cannot combine the
effects of two viewing transformations, any more than a camera can have two tripods. If
we want to change the position of the camera, make sure we call glLoadIdentity() to
wipe away the effects of any current viewing transformation.
Advanced
To transform any arbitrary vector so that it's coincident with another arbitrary vector (for
instance, the negative z-axis), we need to do a little mathematics. The axis about which
we want to rotate is given by the cross product of the two normalized vectors. To find the
angle of rotation, normalize the initial two vectors. The cosine of the desired angle
between the vectors is equal to the dot product of the normalized vectors. The angle of
rotation around the axis given by the cross product is always between 0 and 180 degrees.
Note that computing the angle between two normalized vectors by taking the inverse
cosine of their dot product is not very accurate, especially for small angles. But it should
work well enough to get us started.
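A sketch of this computation in C; rotateOnto() is a hypothetical helper name, and the code assumes the two vectors are neither parallel nor antiparallel (in which case the cross product vanishes and no unique axis exists):

#include <math.h>
#include <GL/gl.h>

/* Rotate vector "from" onto vector "to": the axis is their cross product
 * and the angle is the arccosine of the dot product of the normalized
 * vectors, issued here as a single glRotatef() call. */
static void rotateOnto(const float from[3], const float to[3])
{
    float f[3], t[3], axis[3];
    float lf = sqrtf(from[0]*from[0] + from[1]*from[1] + from[2]*from[2]);
    float lt = sqrtf(to[0]*to[0] + to[1]*to[1] + to[2]*to[2]);
    for (int i = 0; i < 3; i++) { f[i] = from[i]/lf; t[i] = to[i]/lt; }
    axis[0] = f[1]*t[2] - f[2]*t[1];     /* cross product gives the axis */
    axis[1] = f[2]*t[0] - f[0]*t[2];
    axis[2] = f[0]*t[1] - f[1]*t[0];
    float dot = f[0]*t[0] + f[1]*t[1] + f[2]*t[2];
    float angle = acosf(dot) * 180.0f / 3.14159265f;  /* degrees */
    glRotatef(angle, axis[0], axis[1], axis[2]);
}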
For some specialized applications, we might want to define our own transformation
routine. Since this is rarely done and in any case is a fairly advanced topic, it's left mostly
as an exercise for the reader. The following exercises suggest two custom viewing
transformations that might be useful.
Try This
Suppose we're writing a flight simulator and we'd like to display the world from the point
of view of the pilot of a plane. The world is described in a coordinate system with the
origin on the runway and the plane at coordinates (x, y, z). Suppose further that the plane
has some roll, pitch, and heading (these are rotation angles of the plane relative to its
center of gravity).
Show that the following routine could serve as the viewing transformation:
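The routine itself is missing from the notes; a sketch consistent with the description (the name pilotView and the axis assignments follow the OpenGL Programming Guide's version of this exercise):

void pilotView(GLdouble planex, GLdouble planey, GLdouble planez,
GLdouble roll, GLdouble pitch, GLdouble heading)
{
glRotated(roll, 0.0, 0.0, 1.0);
glRotated(pitch, 0.0, 1.0, 0.0);
glRotated(heading, 1.0, 0.0, 0.0);
glTranslated(-planex, -planey, -planez);
}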
Suppose our application involves orbiting the camera around an object that's centered at
the origin. In this case, we'd like to specify the viewing transformation by using polar
coordinates. Let the distance variable define the radius of the orbit, or how far the camera
is from the origin. (Initially, the camera is moved distance units along the positive z-axis.)
The azimuth describes the angle of rotation of the camera about the object in the x-y
plane, measured from the positive y-axis. Similarly, elevation is the angle of rotation of
the camera in the y-z plane, measured from the positive z-axis. Finally, twist represents
the rotation of the viewing volume around its line of sight. Show that the following
routine could serve as the viewing transformation:
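Again the routine is missing from the notes; a sketch consistent with the description (the name polarView and the rotation signs follow the OpenGL Programming Guide's version of this exercise):

void polarView(GLdouble distance, GLdouble twist, GLdouble elevation,
GLdouble azimuth)
{
glTranslated(0.0, 0.0, -distance);
glRotated(-twist, 0.0, 0.0, 1.0);
glRotated(-elevation, 1.0, 0.0, 0.0);
glRotated(azimuth, 0.0, 0.0, 1.0);
}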
The modelview and projection matrices we've been creating, loading, and multiplying
have only been the visible tips of their respective icebergs. Each of these matrices is
actually the topmost member of a stack of matrices (see Figure 20).
Consider, for example, drawing a car with four wheels, each attached by five bolts; the
matrix stack makes this straightforward. Draw the car body. Remember where we are, and translate to the right front wheel. Draw
the wheel and throw away the last translation so our current position is back at the origin
of the car body. Remember where we are, and translate to the left front wheel...
Similarly, for each wheel, we want to draw the wheel, remember where we are, and
successively translate to each of the positions that bolts are drawn, throwing away the
transformations after each bolt is drawn.
Since the transformations are stored as matrices, a matrix stack provides an ideal
mechanism for doing this sort of successive remembering, translating, and throwing
away. All the matrix operations that have been described so far (glLoadMatrix(),
glMultMatrix(), glLoadIdentity() and the commands that create specific transformation
matrices) deal with the current matrix, or the top matrix on the stack. We can control
which matrix is on top with the commands that perform stack operations:
glPushMatrix(), which copies the current matrix and adds the copy to the top of the
stack, and glPopMatrix(), which discards the top matrix on the stack, as shown in Figure
21. In effect, glPushMatrix() means "remember where we are" and glPopMatrix()
means "go back to where we were."
void glPushMatrix(void);
Pushes all matrices in the current stack down one level. The current stack is determined
by glMatrixMode(). The topmost matrix is copied, so its contents are duplicated in both
the top and second-from-the-top matrix. If too many matrices are pushed, an error is
generated.
void glPopMatrix(void);
Pops the top matrix off the stack, destroying the contents of the popped matrix. What was
the second-from-the-top matrix becomes the top matrix. The current stack is determined
by glMatrixMode(). If the stack contains a single matrix, calling glPopMatrix()
generates an error.
Example 4 draws an automobile, assuming the existence of routines that draw the car
body, a wheel, and a bolt.
draw_wheel_and_bolts()
{
long i;
draw_wheel();
for(i=0;i<5;i++){
glPushMatrix();
glRotatef(72.0*i,0.0,0.0,1.0);
glTranslatef(3.0,0.0,0.0);
draw_bolt();
glPopMatrix();
}
}
draw_body_and_wheel_and_bolts()
{
draw_car_body();
glPushMatrix();
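/* The remainder of this routine fell on a page missing from the notes.
 * Given the wheel positions described below (40 units forward, 30 units
 * to either side), it presumably continues along these lines: */
glTranslatef(40.0, 0.0, 30.0);   /* move to the right front wheel */
draw_wheel_and_bolts();
glPopMatrix();                   /* discard that transformation */
glPushMatrix();
glTranslatef(40.0, 0.0, -30.0);  /* move to the left front wheel */
draw_wheel_and_bolts();
glPopMatrix();
/* ... the two rear wheels are drawn in the same way ... */
}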
This code assumes the wheel and bolt axes are coincident with the z-axis, that the bolts
are evenly spaced every 72 degrees, 3 units (maybe inches) from the center of the wheel,
and that the front wheels are 40 units in front of and 30 units to the right and left of the
car's origin.
A stack is more efficient than an individual matrix, especially if the stack is implemented
in hardware. When we push a matrix, we don't need to copy the current data back to the
main process, and the hardware may be able to copy more than one element of the matrix
at a time. Sometimes we might want to keep an identity matrix at the bottom of the stack
so that we don't need to call glLoadIdentity() repeatedly.
42-Examples of Composing Several Transformations
The program described in this section draws a simple solar system with a planet and a
sun, both using the same sphere-drawing routine. To write this program, we need to use
glRotate*() for the revolution of the planet around the sun and for the rotation of the
planet around its own axis. We also need glTranslate*() to move the planet out to its
orbit, away from the origin of the solar system. Remember that we can specify the desired
size of the two spheres by supplying the appropriate arguments for the glutWireSphere()
routine. To draw the solar system, we first want to set up a projection and a viewing
transformation. For this example, gluPerspective() and gluLookAt() are used. Drawing
the sun is straightforward, since it should be located at the origin of the grand, fixed
coordinate system, which is where the sphere routine places it. Thus, drawing the sun
doesn't require translation; we can use glRotate*() to make the sun rotate about an
arbitrary axis. To draw a planet rotating around the sun, as shown in Figure 24, requires
several modeling transformations. The planet needs to rotate about its own axis once a
day. And once a year, the planet completes one revolution around the sun.
To determine the order of modeling transformations, visualize what happens to the local
coordinate system. An initial glRotate*() rotates the local coordinate system that initially
coincides with the grand coordinate system. Next, glTranslate*() moves the local
coordinate system to a position on the planet's orbit; the distance moved should equal the
radius of the orbit. Thus, the initial glRotate*() actually determines where along the orbit
the planet is (or what time of year it is). A second glRotate*() rotates the local coordinate
system around the local axes, thus determining the time of day for the planet. Once we've
issued all these transformation commands, the planet can be drawn.
In summary, these are the OpenGL commands to draw the sun and planet; the full
program is shown in Example 6.
glPushMatrix();
glutWireSphere(1.0, 20, 16); /* draw sun */
glRotatef ((GLfloat) year, 0.0, 1.0, 0.0);
glTranslatef (2.0, 0.0, 0.0);
glRotatef ((GLfloat) day, 0.0, 1.0, 0.0);
glutWireSphere(0.2, 10, 8); /* draw smaller planet */
glPopMatrix();
glutSwapBuffers();
/* Fragment of the keyboard() callback from Example 6; the earlier key
 * cases, which adjust the day and year variables similarly, were lost
 * to a page break. */
glutPostRedisplay();
break;
case 'Y':
year = (year - 5) % 360;
glutPostRedisplay();
break;
default:
break;
}
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
init ();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutKeyboardFunc(keyboard);
glutMainLoop();
return 0;
}
Try This
Try adding a moon to the planet. Or try several moons and additional planets. Hint: Use
glPushMatrix() and glPopMatrix() to save and restore the position and orientation of the
coordinate system at appropriate moments. If we're going to draw several moons around a
planet, we need to save the coordinate system prior to positioning each moon and restore
the coordinate system after each moon is drawn.
Try tilting the planet's axis.
We can use a scaled cube as a segment of the robot arm, but first we must call the
appropriate modeling transformations to orient each segment. Since the origin of the local
coordinate system is initially at the center of the cube, we need to move the local
coordinate system to one edge of the cube. Otherwise, the cube rotates about its center
rather than the pivot point.
After we call glTranslate*() to establish the pivot point and glRotate*() to pivot the
cube, translate back to the center of the cube. Then the cube is scaled (flattened and
widened) before it is drawn. The glPushMatrix() and glPopMatrix() restrict the effect of
glScale*(). Here's what our code might look like for this first segment of the arm (the
entire program is shown in Example 7):
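The code block itself is missing from the notes; a sketch matching the description (translate to the pivot, rotate, translate back, then scale and draw; shoulder is the assumed joint-angle variable):

glTranslatef (-1.0, 0.0, 0.0);   /* move the pivot to the cube's edge */
glRotatef ((GLfloat) shoulder, 0.0, 0.0, 1.0);
glTranslatef (1.0, 0.0, 0.0);    /* move back to the cube's center */
glPushMatrix();
glScalef (2.0, 0.4, 1.0);        /* flatten and widen the cube */
glutWireCube (1.0);
glPopMatrix();                   /* restrict the scale to this cube */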
To build a second segment, we need to move the local coordinate system to the next pivot
point. Since the coordinate system has previously been rotated, the x-axis is already
oriented along the length of the rotated arm. Therefore, translating along the x-axis moves
the local coordinate system to the next pivot point. Once it's at that pivot point, we can
use the same code to draw the second segment as we used for the first one. This can be
continued for an indefinite number of segments (shoulder, elbow, wrist, fingers).
Try This
Modify Example 7 to add additional segments onto the robot arm.
Modify Example 7 to add additional segments at the same position. For example, give the
robot arm several "fingers" at the wrist, as shown in Figure 26. Hint: Use glPushMatrix()
and glPopMatrix() to save and restore the position and orientation of the coordinate
system at the wrist. If we're going to draw fingers at the wrist, we need to save the current
matrix prior to positioning each finger and restore the current matrix after each finger is
drawn.
43-Real-World and OpenGL Lighting
When we look at a physical surface, our eye's perception of the color depends on the
distribution of photon energies that arrive and trigger our cone cells. Those photons come
from a light source or combination of sources, some of which are absorbed and some are
reflected by the surface. In addition, different surfaces may have very different properties
- some are shiny and preferentially reflect light in certain directions, while others scatter
incoming light equally in all directions. Most surfaces are somewhere in between.
OpenGL approximates light and lighting as if light can be broken into red, green, and
blue components. Thus, the color of light sources is characterized by the amount of red,
green, and blue light they emit, and the material of surfaces is characterized by the
percentage of the incoming red, green, and blue components that is reflected in various
directions. The OpenGL lighting equations are just an approximation but one that works
fairly well and can be computed relatively quickly. If we desire a more accurate (or just
different) lighting model, we have to do our own calculations in software. Such software
can be enormously complex, as a few hours of reading any optics textbook should
convince us. In the OpenGL lighting model, the light in a scene comes from several light
sources that can be individually turned on and off. Some light comes from a particular
direction or position, and some light is generally scattered about the scene. For example,
when we turn on a light bulb in a room, most of the light comes from the bulb, but some
light comes after bouncing off one, two, three, or more walls. This bounced light (called
ambient) is assumed to be so scattered that there is no way to tell its original direction, but
it disappears if a particular light source is turned off.
Finally, there might be a general ambient light in the scene that comes from no particular
source, as if it had been scattered so many times that its original source is impossible to
determine. In the OpenGL model, the light sources have an effect only when there are
surfaces that absorb and reflect light. Each surface is assumed to be composed of a
material with various properties. A material might emit its own light (like headlights on
an automobile), it might scatter some incoming light in all directions, and it might reflect
some portion of the incoming light in a preferential direction like a mirror or other shiny
surface. The OpenGL lighting model considers the lighting to be divided into four
independent components: emissive, ambient, diffuse and specular. All four components
are computed independently and then added together.
These are the steps required to add lighting to our scene:
1. Define NORMAL vectors for each vertex of every object. These normals determine
the orientation of the object relative to the light sources.
2. Create, select, and position one or more light sources.
3. Create and select a lighting model, which defines the level of global ambient light
and the effective location of the viewpoint (for the purposes of lighting calculations).
4. Define material properties for the objects in the scene.
The lighting-related calls are in the init() command; they're discussed briefly in the
following paragraphs and in more detail later in the chapter. One thing to note about
Example 1 is that it uses RGBA color mode, not color-index mode. The OpenGL lighting
calculation is different for the two modes, and in fact the lighting capabilities are more
limited in color-index mode. Thus, RGBA is the preferred mode when doing lighting.
void glLight{if}(GLenum light, GLenum pname, TYPE param);
void glLight{if}v(GLenum light, GLenum pname, TYPE *param);
Creates the light specified by light, which can be GL_LIGHT0, GL_LIGHT1, ... , or
GL_LIGHT7. The characteristic of the light being set is defined by pname, which
specifies a named parameter (see Table 1). param indicates the values to which the
pname characteristic is set; it's a pointer to a group of values if the vector version is used,
or the value itself if the nonvector version is used. The nonvector version can be used to
set only single-valued light characteristics.
Note: The default values listed for GL_DIFFUSE and GL_SPECULAR in Table 1 apply
only to GL_LIGHT0. For other lights, the default value is (0.0, 0.0, 0.0, 1.0) for both
GL_DIFFUSE and GL_SPECULAR.
Example 2 shows how to use glLight*():
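The listing itself is missing here; given the discussion below (the first three calls set the GL_LIGHT0 defaults), it would read along these lines:

GLfloat light_ambient[] = { 0.0, 0.0, 0.0, 1.0 };
GLfloat light_diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };

glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);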
As we can see, arrays are defined for the parameter values, and glLightfv() is called
repeatedly to set the various parameters. In this example, the first three calls to
glLightfv() are superfluous, since they're being used to specify the default values for the
GL_AMBIENT, GL_DIFFUSE, and GL_SPECULAR parameters.
The GL_DIFFUSE parameter probably most closely correlates with what we naturally
think of as "the color of a light." It defines the RGBA color of the diffuse light that a
particular light source adds to a scene. By default, GL_DIFFUSE is (1.0, 1.0, 1.0, 1.0) for
GL_LIGHT0, which produces a bright white light. The default value for any other light
(GL_LIGHT1, ... , GL_LIGHT7) is (0.0, 0.0, 0.0, 0.0).
The GL_SPECULAR parameter affects the color of the specular highlight on an object.
Typically, a real-world object such as a glass bottle has a specular highlight that's the
color of the light shining on it (which is often white). Therefore, if we want to create a
realistic effect, set the GL_SPECULAR parameter to the same value as the
GL_DIFFUSE parameter. By default, GL_SPECULAR is (1.0, 1.0, 1.0, 1.0) for
GL_LIGHT0 and (0.0, 0.0, 0.0, 0.0) for any other light.
Note: The alpha component of these colors is not used until blending is enabled.
As previously mentioned, we can choose whether to have a light source that's treated as
though it's located infinitely far away from the scene or one that's nearer to the scene. The
first type is referred to as a directional light source; the effect of an infinite location is
that the rays of light can be considered parallel by the time they reach an object. An
example of a real-world directional light source is the sun. The second type is called a
positional light source, since its exact position within the scene determines the effect it
has on a scene and, specifically, the direction from which the light rays come. A desk
lamp is an example of a positional light source. The light used in Example 1 is a
directional one:
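The defining calls are missing from the notes; consistent with the position (1.0, 1.0, 1.0) quoted later, they would be (the w component of 0.0 is what makes the light directional; a w of 1.0 would make it positional):

GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);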
Note: Remember that the colors across the face of a smooth-shaded polygon are
determined by the colors calculated for the vertices. Because of this, we probably want to
avoid using large polygons with local lights. If we locate the light near the middle of the
polygon, the vertices might be too far away to receive much light, and the whole polygon
will look darker than we intended. To avoid this problem, break up the large polygon into
smaller ones.
For real-world lights, the intensity of light decreases as distance from the light increases.
Since a directional light is infinitely far away, it doesn't make sense to attenuate its
intensity over distance, so attenuation is disabled for a directional light. However, we
might want to attenuate the light from a positional light.
OpenGL attenuates a light source by multiplying the contribution of that source by an
attenuation factor:
attenuation factor = 1 / (kc + kl*d + kq*d^2)
where
d = distance between the light's position and the vertex
kc = GL_CONSTANT_ATTENUATION
kl = GL_LINEAR_ATTENUATION
kq = GL_QUADRATIC_ATTENUATION
By default, kc is 1.0 and both kl and kq are zero, but we can give these parameters
different values:
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 2.0);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 1.0);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.5);
Note that the ambient, diffuse, and specular contributions are all attenuated. Only the
emission and global ambient values aren't attenuated. Also note that since attenuation
requires an additional division (and possibly more math) for each calculated color, using
attenuated lights may slow down application performance.
Spotlights
As previously mentioned, we can have a positional light source act as a spotlight - that is,
by restricting the shape of the light it emits to a cone. To create a spotlight, we need to
determine the spread of the cone of light we desire. (Remember that since spotlights are
positional lights, we also have to locate them where we want them. Again, note that
nothing prevents us from creating a directional spotlight, but it won't give us the result we
want.) To specify the angle between the axis of the cone and a ray along the edge of the
cone, use the GL_SPOT_CUTOFF parameter. The angle of the cone at the apex is then
twice this value, as shown in Figure 2.
Note that no light is emitted beyond the edges of the cone. By default, the spotlight
feature is disabled because the GL_SPOT_CUTOFF parameter is 180.0. This value
means that light is emitted in all directions (the angle at the cone's apex is 360 degrees, so
it isn't a cone at all). The value for GL_SPOT_CUTOFF is restricted to being within the
range [0.0,90.0] (unless it has the special value 180.0). The following line sets the cutoff
parameter to 45 degrees:
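The line itself is missing from the notes; it would be:

glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);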
We also need to specify a spotlight's direction, which determines the axis of the cone of
light:
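The call is missing from the notes; it takes a three-component direction vector (the values here are illustrative):

GLfloat spot_direction[] = { -1.0, -1.0, 0.0 };
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);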
The direction is specified in object coordinates. By default, the direction is (0.0, 0.0, -1.0),
so if we don't explicitly set the value of GL_SPOT_DIRECTION, the light points down
the negative z-axis. Also, keep in mind that a spotlight's direction is transformed by the
modelview matrix just as though it were a normal vector, and the result is stored in eye
coordinates.
In addition to the spotlight's cutoff angle and direction, there are two ways we can control
the intensity distribution of the light within the cone. First, we can set the attenuation
factor described earlier, which is multiplied by the light's intensity. We can also set the
GL_SPOT_EXPONENT parameter, which by default is zero, to control how concentrated
the light is. The light's intensity is highest in the center of the cone. It's attenuated toward
the edges of the cone by the cosine of the angle between the direction of the light and the
direction from the light to the vertex being lit, raised to the power of the spot exponent.
Thus, higher spot exponents result in a more focused light source.
Multiple Lights
As mentioned, we can have at least eight lights in our scene (possibly more, depending on
our OpenGL implementation). Since OpenGL needs to perform calculations to determine
how much light each vertex receives from each light source, increasing the number of
lights adversely affects performance. The constants used to refer to the eight lights are
GL_LIGHT0, GL_LIGHT1, GL_LIGHT2, GL_LIGHT3, and so on. In the preceding
discussions, parameters related to GL_LIGHT0 were set. If we want an additional light,
we need to specify its parameters; also, remember that the default values are different for
these other lights than they are for GL_LIGHT0, as explained in Table 1. Example 3
defines a white attenuated spotlight.
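The listing for Example 3 is missing; a white attenuated spotlight on GL_LIGHT1 would look something like the following (the numeric values are illustrative):

GLfloat light1_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
GLfloat light1_diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light1_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light1_position[] = { -2.0, 2.0, 1.0, 1.0 };
GLfloat spot_direction[] = { -1.0, -1.0, 0.0 };

glLightfv(GL_LIGHT1, GL_AMBIENT, light1_ambient);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light1_diffuse);
glLightfv(GL_LIGHT1, GL_SPECULAR, light1_specular);
glLightfv(GL_LIGHT1, GL_POSITION, light1_position);
glLightf(GL_LIGHT1, GL_CONSTANT_ATTENUATION, 1.5);
glLightf(GL_LIGHT1, GL_LINEAR_ATTENUATION, 0.5);
glLightf(GL_LIGHT1, GL_QUADRATIC_ATTENUATION, 0.2);
glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 45.0);
glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, spot_direction);
glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 2.0);

glEnable(GL_LIGHT1);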
If these lines were added to Example 1, the sphere would be lit with two lights, one
directional and one spotlight.
Try This
Modify Example 1 in the following manner:
Change the first light to be a positional colored light rather than a directional white one.
Add an additional colored spotlight. Hint: Use some of the code shown in the preceding
section. Measure how these two changes affect performance.
As we can see, the viewport and projection matrices are established first. Then, the
identity matrix is loaded as the modelview matrix, after which the light position is set.
Since the identity matrix is used, the originally specified light position (1.0, 1.0, 1.0) isn't
changed by being multiplied by the modelview matrix. Then, since neither the light
position nor the modelview matrix is modified after this point, the direction of the light
remains (1.0, 1.0, 1.0).
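The listing for Example 5 is missing at this point; the display() routine described in the next paragraph (two push/pop pairs isolating the viewing transformation and the light rotation) is essentially the following sketch:

void display(void)
{
GLfloat light_position[] = { 0.0, 0.0, 1.5, 1.0 };
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glPushMatrix();
glRotated((GLdouble) spin, 1.0, 0.0, 0.0);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glPopMatrix();
glutSolidTorus (0.275, 0.85, 8, 15);
glPopMatrix();
glFlush();
}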
spin is a global variable and is probably controlled by an input device. display() causes
the scene to be redrawn with the light rotated spin degrees around a stationary torus. Note
the two pairs of glPushMatrix() and glPopMatrix() calls, which are used to isolate the
viewing and modeling transformations, all of which occur on the modelview stack. Since
in Example 5 the viewpoint remains constant, the current matrix is pushed down the stack
and then the desired viewing transformation is loaded with gluLookAt(). The matrix
stack is pushed again before the modeling transformation glRotated() is specified. Then
the light position is set in the new, rotated coordinate system so that the light itself
appears to be rotated from its previous position. (Remember that the light position is
stored in eye coordinates, which are obtained after transformation by the modelview
matrix.) After the rotated matrix is popped off the stack, the torus is drawn.
Example 6 is a program that rotates a light source around an object. When the left mouse
button is pressed, the light position rotates an additional 30 degrees. A small, unlit,
wireframe cube is drawn to represent the position of the light in the scene.
void init (void)
{
/* ... earlier lines (clear color, shade model, material) lost to a page break ... */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
}
/* Here is where the light position is reset after the modeling
* transformation (glRotated) is called. This places the
* light at a new position in world coordinates. The cube
* represents the position of the light.
*/
void display(void)
{
GLfloat position[] = { 0.0, 0.0, 1.5, 1.0 };
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix ();
glTranslatef (0.0, 0.0, -5.0);
glPushMatrix ();
glRotated ((GLdouble) spin, 1.0, 0.0, 0.0);
glLightfv (GL_LIGHT0, GL_POSITION, position);
glTranslated (0.0, 0.0, 1.5);
glDisable (GL_LIGHTING);
glColor3f (0.0, 1.0, 1.0);
glutWireCube (0.1);
glEnable (GL_LIGHTING);
glPopMatrix ();
glutSolidTorus (0.275, 0.85, 8, 15);
glPopMatrix ();
glFlush ();
}
void reshape (int w, int h)
{
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluPerspective(40.0, (GLfloat) w/(GLfloat) h, 1.0, 20.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void mouse(int button, int state, int x, int y)
{
switch (button) {
case GLUT_LEFT_BUTTON:
if (state == GLUT_DOWN) {
spin = (spin + 30) % 360;
glutPostRedisplay();
}
break;
default:
break;
}
}
This section explains how to specify a lighting model. It also discusses how to enable
lighting - that is, how to tell OpenGL that we want lighting calculations performed.
The command used to specify all properties of the lighting model is glLightModel*():
void glLightModel{if}(GLenum pname, TYPE param);
void glLightModel{if}v(GLenum pname, TYPE *param);
glLightModel*() has two arguments: the lighting model property and the desired value
for that property.
As discussed earlier, each light source can contribute ambient light to a scene. In addition,
there can be other ambient light that's not from any particular source. To specify the
RGBA intensity of such global ambient light, use the GL_LIGHT_MODEL_AMBIENT
parameter as follows:
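The call is missing from the notes; with the default values it reads:

GLfloat lmodel_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);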
In this example, the values used for lmodel_ambient are the default values for
GL_LIGHT_MODEL_AMBIENT. Since these numbers yield a small amount of white
ambient light, even if we don't add a specific light source to our scene, we can still see the
objects in the scene.
Enabling Lighting
With OpenGL, we need to explicitly enable (or disable) lighting. If lighting isn't enabled,
the current color is simply mapped onto the current vertex, and no calculations
concerning normals, light sources, the lighting model, and material properties are
performed. Here's how to enable lighting:
glEnable(GL_LIGHTING);
To disable lighting, call glDisable() with GL_LIGHTING as the argument. We also need
to explicitly enable each light source that we define, after we've specified the parameters
for that source. Example 1 uses only one light,
GL_LIGHT0:
glEnable(GL_LIGHT0);
We've seen how to create light sources with certain characteristics and how to define the
desired lighting model. This section describes how to define the material properties of the
objects in the scene: the ambient, diffuse, and specular colors, the shininess, and the color
of any emitted light. Most of the material properties are conceptually similar to ones
we've already used to create light sources. The mechanism for setting them is similar,
except that the command used is called glMaterial*().
void glMaterial{if}(GLenum face, GLenum pname, TYPE param);
void glMaterial{if}v(GLenum face, GLenum pname, TYPE *param);
Specifies a current material property for use in lighting calculations. face can be
GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK to indicate which face of the
object the material should be applied to. The particular material property being set is
identified by pname and the desired values for that property are given by param, which is
either a pointer to a group of values (if the vector version is used) or the actual value (if
the nonvector version is used). The nonvector version works only for setting
GL_SHININESS. The possible values for pname are shown in Table 3. Note that
GL_AMBIENT_AND_DIFFUSE allows us to set both the ambient and diffuse material
colors simultaneously to the same RGBA value.
Note that most of the material properties set with glMaterial*() are (R, G, B, A) colors.
Regardless of what alpha values are supplied for other parameters, the alpha value at any
particular vertex is the diffuse-material alpha value (that is, the alpha value given to
GL_DIFFUSE with the glMaterial*() command, as described in the next section). Also,
none of the RGBA material properties apply in color-index mode.
The GL_DIFFUSE and GL_AMBIENT parameters set with glMaterial*() affect the
color of the diffuse and ambient light reflected by an object. Diffuse reflectance plays the
most important role in determining what we perceive the color of an object to be. It's
affected by the color of the incident diffuse light and the angle of the incident light
relative to the normal direction. (It's most intense where the incident light falls
perpendicular to the surface.) The position of the viewpoint doesn't affect diffuse
reflectance at all.
Ambient reflectance affects the overall color of the object. Because diffuse reflectance is
brightest where an object is directly illuminated, ambient reflectance is most noticeable
where an object receives no direct illumination. An object's total ambient reflectance is
affected by the global ambient light and ambient light from individual light sources. Like
diffuse reflectance, ambient reflectance isn't affected by the position of the viewpoint. For
real-world objects, diffuse and ambient reflectance are normally the same color. For this
reason, OpenGL provides us with a convenient way of assigning the same value to both
simultaneously with glMaterial*():
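The call is missing from the notes; given the color quoted below, it reads:

GLfloat mat_amb_diff[] = { 0.1, 0.5, 0.8, 1.0 };
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, mat_amb_diff);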
In this example, the RGBA color (0.1, 0.5, 0.8, 1.0) - a deep blue color - represents the
current ambient and diffuse reflectance for both the front- and
back-facing polygons.
Specular Reflection
Specular reflection from an object produces highlights. Unlike ambient and diffuse
reflection, the amount of specular reflection seen by a viewer does depend on the location
of the viewpoint - it's brightest along the direct angle of reflection. To see this, imagine
looking at a metallic ball outdoors in the sunlight. As we move our head, the highlight
created by the sunlight moves with us to some extent. However, if we move our head too
much, we lose the highlight entirely.
OpenGL allows us to set the effect that the material has on reflected light (with
GL_SPECULAR) and control the size and brightness of the highlight (with
GL_SHININESS). We can assign a number in the range of [0.0, 128.0] to
GL_SHININESS - the higher the value, the smaller and brighter (more focused) the
highlight.
Twelve spheres, each with different material parameters. The row properties are as
labeled above. The first column uses a blue diffuse material color with no specular
properties. The second column adds white specular reflection with a low shininess
exponent. The third column uses a high shininess exponent and thus has a more
concentrated highlight. The fourth column uses the blue diffuse color and, instead of
specular reflection, adds an emissive component.
In above figure, the spheres in the first column have no specular reflection. In the second
column, GL_SPECULAR and GL_SHININESS are assigned values as follows:
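The assignments themselves are missing from the notes; a sketch with a low shininess exponent (the values are illustrative):

GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat low_shininess[] = { 5.0 };
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, low_shininess);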
Emission
By specifying an RGBA color for GL_EMISSION, we can make an object appear to be
giving off light of that color. Since most real-world objects (except lights) don't emit
light, we'll probably use this feature mostly to simulate lamps and other light sources in a
scene.
Notice that the spheres appear to be slightly glowing; however, they're not actually acting
as light sources. We would need to create a light source and position it at the same
location as the sphere to create that effect.
As we can see, glMaterialfv() is called repeatedly to set the desired material property for
each sphere. Note that it only needs to be called to change a property that needs to be
respecified. The second, third, and fourth spheres use the same ambient and diffuse
properties as the first sphere, so these properties do not need to be respecified. Since
glMaterial*() has a performance cost associated with its use, Example 8 could be
rewritten to minimize material-property changes. Another technique for minimizing
performance costs associated with changing material properties is to use
glColorMaterial().
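The call being described next is glColorMaterial(); its standard prototype is:

void glColorMaterial(GLenum face, GLenum mode);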
Causes the material property (or properties) specified by mode of the specified material
face (or faces) specified by face to track the value of the current color at all times. A
change to the current color (using glColor*()) immediately updates the specified material
properties. The face parameter can be GL_FRONT, GL_BACK, or
GL_FRONT_AND_BACK (the default). The mode parameter can be GL_AMBIENT,
GL_DIFFUSE, GL_AMBIENT_AND_DIFFUSE (the default), GL_SPECULAR, or
GL_EMISSION. At any given time, only one mode is active. glColorMaterial() has no
effect on color-index lighting.
Note that glColorMaterial() specifies two independent values: the first specifies which
face or faces are updated, and the second specifies which material property or properties
of those faces are updated. OpenGL does not maintain separate mode variables for each
face. After calling glColorMaterial(), we need to call glEnable() with
GL_COLOR_MATERIAL as the parameter. Then, we can change the current color using
glColor*() (or other material properties, using glMaterial*()) as needed as we draw:
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT, GL_DIFFUSE);
/* now glColor* changes diffuse reflection */
glColor3f(0.2, 0.5, 0.8);
/* draw some objects here */
glColorMaterial(GL_FRONT, GL_SPECULAR);
/* glColor* no longer changes diffuse reflection */
/* now glColor* changes specular reflection */
glColor3f(0.9, 0.0, 0.2);
/* draw other objects here */
glDisable(GL_COLOR_MATERIAL);
An excerpt from Example 8's mouse callback shows glColorMaterial() in use (the cases for the other mouse buttons are omitted at this point in the source):

void mouse(int button, int state, int x, int y)
{
   switch (button) {
      /* cases for the other buttons are omitted in the source */
      case GLUT_RIGHT_BUTTON:
         if (state == GLUT_DOWN) {   /* change blue */
            diffuseMaterial[2] += 0.1;
            if (diffuseMaterial[2] > 1.0)
               diffuseMaterial[2] = 0.0;
            glColor4fv(diffuseMaterial);
            glutPostRedisplay();
         }
         break;
      default:
         break;
   }
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
init ();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMouseFunc(mouse);
glutMainLoop();
return 0;
}
Try This
Modify Example 8 in the following manner:
Change the global ambient light in the scene. Hint: Alter the value of the
GL_LIGHT_MODEL_AMBIENT parameter (a short sketch of these calls follows this list).
Change the diffuse, ambient, and specular reflection parameters, the shininess exponent,
and the emission color. Hint: Use the glMaterial*() command, but avoid making
excessive calls.
Use two-sided materials and add a user-defined clipping plane so that we can see the
inside and outside of a row or column of spheres. (Review the earlier material on
user-defined clipping planes if needed.) Hint: Turn on two-sided lighting with
GL_LIGHT_MODEL_TWO_SIDE, set the desired material properties, and add a
clipping plane.
Remove all the glMaterialfv() calls, and use the more efficient glColorMaterial() calls
to achieve the same lighting.
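A hedged sketch of the calls these hints refer to (the numeric values are illustrative):

GLfloat global_ambient[] = { 0.5, 0.5, 0.5, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);  /* global ambient light */
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);         /* two-sided lighting */
GLdouble eqn[] = { 0.0, 1.0, 0.0, 0.0 };                 /* clip everything below y = 0 */
glClipPlane(GL_CLIP_PLANE0, eqn);
glEnable(GL_CLIP_PLANE0);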
44-Evaluators, Curves and Surfaces
Evaluators
An evaluator computes points on a parametric curve C(u) or a parametric surface
S(u, v), where u and v can both vary in some domain. The range isn't necessarily three-
dimensional as shown here. You might want two-dimensional output for curves on a
plane or texture coordinates, or you might want four-dimensional output to specify
RGBA information. Even one-dimensional output may make sense for gray levels. For
each u (or u and v, in the case of a surface), the formula for C() (or S()) calculates a point
on the curve (or surface). To use an evaluator, first define the function C() or S(), enable
it, and then use the glEvalCoord1() or glEvalCoord2() command instead of glVertex*().
This way, the curve or surface vertices can be used like any other vertices - to form points
or lines, for example. In addition, other commands automatically generate series of
vertices that produce a regular mesh uniformly spaced in u (or in u and v). One- and two-
dimensional evaluators are similar, but the description is somewhat simpler in one
dimension, so that case is discussed first.
One-Dimensional Evaluators
The program shown in Example 1 draws a cubic Bézier curve using four control points,
as shown in Figure 1.
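The body of Example 1 is missing from these notes. The following is a minimal reconstruction in the spirit of the classic Red Book Bézier-curve program; the control-point values are illustrative assumptions, but the glMap1f()/glEvalCoord1f() usage is exactly the technique the text describes. It ends with the opening of main(), which continues below:

#include <GL/glut.h>

GLfloat ctrlpoints[4][3] = {
   {-4.0, -4.0, 0.0}, {-2.0, 4.0, 0.0},
   {2.0, -4.0, 0.0}, {4.0, 4.0, 0.0}};

void init(void)
{
   glClearColor(0.0, 0.0, 0.0, 0.0);
   glShadeModel(GL_FLAT);
   /* define the evaluator: u runs from 0.0 to 1.0, stride 3, order 4 */
   glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 4, &ctrlpoints[0][0]);
   glEnable(GL_MAP1_VERTEX_3);
}

void display(void)
{
   int i;
   glClear(GL_COLOR_BUFFER_BIT);
   glColor3f(1.0, 1.0, 1.0);
   glBegin(GL_LINE_STRIP);
   /* evaluate the curve at 31 evenly spaced values of u */
   for (i = 0; i <= 30; i++)
      glEvalCoord1f((GLfloat) i/30.0);
   glEnd();
   glFlush();
}

void reshape(int w, int h)
{
   glViewport(0, 0, (GLsizei) w, (GLsizei) h);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   if (w <= h)
      glOrtho(-5.0, 5.0, -5.0*(GLfloat)h/(GLfloat)w,
              5.0*(GLfloat)h/(GLfloat)w, -5.0, 5.0);
   else
      glOrtho(-5.0*(GLfloat)w/(GLfloat)h,
              5.0*(GLfloat)w/(GLfloat)h, -5.0, 5.0, -5.0, 5.0);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
}

int main(int argc, char** argv)
{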
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
init ();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
A cubic Bézier curve is described by four control points, which appear in this example in
the ctrlpoints[][] array. This array is one of the arguments to glMap1f(). All the
arguments for this command are as follows:
glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 4, &ctrlpoints[0][0]);
Note that the second and third arguments control the parameterization of the curve - as
the variable u ranges from 0.0 to 1.0, the curve goes from one end to the other. The call to
glEnable() enables the one-dimensional evaluator for three-dimensional vertices.
The curve is drawn in the routine display() between the glBegin() and glEnd() calls.
Since the evaluator is enabled, the command glEvalCoord1f() is just like issuing a
glVertex() command with the coordinates of a vertex on the curve corresponding to the
input parameter u.
If Pi represents a set of control points (one-, two-, three-, or even four-dimensional), then
the equation

C(u) = sum over i = 0 .. n of B_i^n(u) * Pi, with B_i^n(u) = (n choose i) * u^i * (1 - u)^(n - i)

represents a Bézier curve as u varies from 0.0 to 1.0; the B_i^n are the Bernstein
polynomials. To represent the same curve but allowing u to vary between u1 and u2
instead of 0.0 and 1.0, evaluate

C((u - u1) / (u2 - u1))
The command glMap1() defines a one-dimensional evaluator that uses these equations.
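For reference, its prototype (in the Red Book's notation, where {fd} stands for the f or d variant and TYPE for GLfloat or GLdouble accordingly) is:

void glMap1{fd}(GLenum target, TYPE u1, TYPE u2, GLint stride, GLint order, const TYPE *points);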
Defines a one-dimensional evaluator. The target parameter specifies what the control
points represent, as shown in Table 1, and therefore how many values need to be supplied
in points. The points can represent vertices, RGBA color data, normal vectors, or texture
coordinates. For example, with GL_MAP1_COLOR_4, the evaluator generates color
data along a curve in four-dimensional (RGBA) color space. You also use the parameter
values listed in Table 1 to enable each defined evaluator before you invoke it. Pass the
appropriate value to glEnable() or glDisable() to enable or disable the evaluator. The
second two parameters for glMap1*(), u1 and u2, indicate the range for the variable u.
The variable stride is the number of single- or double-precision values (as appropriate)
in each block of storage. Thus, it's an offset value between the beginning of one control
point and the beginning of the next. The order is the degree plus one, and it should agree
with the number of control points. The points parameter points to the first coordinate of
the first control point. Using the example data structure for glMap1*(), use the following
for points:
(GLfloat *)(&ctlpoints[0].x)
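The example data structure itself is not shown in these notes; a layout consistent with that cast (an assumption) would be:

typedef struct {
   GLfloat x, y, z;   /* three consecutive GLfloats, so stride = 3 */
} Point3;
Point3 ctlpoints[4];  /* order 4: four control points */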
More than one evaluator can be evaluated at a time. If you have both a
GL_MAP1_VERTEX_3 and a GL_MAP1_COLOR_4 evaluator defined and enabled, for
example, then calls to glEvalCoord1() generate both a position and a color. Only one of
the vertex evaluators can be enabled at a time, although you might have defined both of
them. Similarly, only one of the texture evaluators can be active. Other than that,
however, evaluators can be used to generate any combination of vertex, normal, color,
and texture-coordinate data. If more than one evaluator of the same type is defined and
enabled, the one of highest dimension is used. Use glEvalCoord1*() to evaluate a
defined and enabled one-dimensional map.
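Its prototypes (standard OpenGL signatures) are:

void glEvalCoord1{fd}(TYPE u);
void glEvalCoord1{fd}v(const TYPE *u);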
Causes evaluation of the enabled one-dimensional maps. The argument u is the value (or
a pointer to the value, in the vector version of the command) of the domain coordinate.
For evaluated vertices, values for color, color index, normal vectors, and texture
coordinates are generated by evaluation. Calls to glEvalCoord*() do not use the current
values for color, color index, normal vectors, and texture coordinates. glEvalCoord*()
also leaves those values unchanged.
You can use glEvalCoord1() with any values for u, but by far the most common use is
with evenly spaced values, as shown previously in Example 1. To obtain evenly spaced
values, define a one-dimensional grid using glMapGrid1*() and then apply it using
glEvalMesh1().
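Their standard prototypes are:

void glMapGrid1{fd}(GLint n, TYPE u1, TYPE u2);
void glEvalMesh1(GLenum mode, GLint p1, GLint p2);

glMapGrid1*() defines a grid of n partitions from u1 to u2, and glEvalMesh1() applies it with mode GL_POINT or GL_LINE. Calling glEvalMesh1(mode, p1, p2) is then nearly equivalent to: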
glBegin(GL_POINTS); /* OR glBegin(GL_LINE_STRIP); */
for (i = p1; i <= p2; i++)
glEvalCoord1(u1 + i*(u2-u1)/n);
glEnd();
except that if i = 0 or i = n, then glEvalCoord1() is called with exactly u1 or u2 as its
parameter.
Two-Dimensional Evaluators
In two dimensions, everything is similar to the one-dimensional case, except that all the
commands must take two parameters, u and v, into account. Points, colors, normals, or
texture coordinates must be supplied over a surface instead of a curve. Mathematically,
the definition of a Bézier surface patch is given by

S(u, v) = sum over i = 0 .. n and j = 0 .. m of B_i^n(u) * B_j^m(v) * Pij
where Pij are a set of m*n control points, and the Bi are the same Bernstein polynomials
for one dimension. As before, the Pij can represent vertices, normals, colors, or texture
coordinates.
The procedure to use two-dimensional evaluators is similar to the procedure for one
dimension.
1. Define the evaluator(s) with glMap2*().
2. Enable them by passing the appropriate value to glEnable().
3. Invoke them either by calling glEvalCoord2() between a glBegin() and
glEnd() pair or by specifying and then applying a mesh with glMapGrid2() and
glEvalMesh2().
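The command described next is glMap2*(), whose prototype (same {fd}/TYPE notation as before) is:

void glMap2{fd}(GLenum target, TYPE u1, TYPE u2, GLint ustride, GLint uorder, TYPE v1, TYPE v2, GLint vstride, GLint vorder, const TYPE *points);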
The target parameter can have any of the values in Table 1, except that the string MAP1
is replaced with MAP2. As before, these values are also used with glEnable() to enable
the corresponding evaluator. Minimum and maximum values for both u and v are
provided as u1, u2, v1, and v2. The parameters ustride and vstride indicate the number of
single- or double-precision values (as appropriate) between independent settings for
these values, allowing users to select a subrectangle of control points out of a much
larger array. For example, if the data appears in the form
GLfloat ctlpoints[100][100][3];
and you want to use the 4x4 subset beginning at ctlpoints[20][30], choose ustride to be
100*3 and vstride to be 3. The starting point, points, should be set to
&ctlpoints[20][30][0]. Finally, the order parameters, uorder and vorder, can be
different, allowing patches that are cubic in one direction and quadratic in the other, for
example.
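The command described next is glEvalCoord2*(), with prototypes:

void glEvalCoord2{fd}(TYPE u, TYPE v);
void glEvalCoord2{fd}v(const TYPE *values);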
Causes evaluation of the enabled two-dimensional maps. The arguments u and v are the
values (or a pointer to the values u and v, in the vector version of the command) for the
domain coordinates. If either of the vertex evaluators is enabled (GL_MAP2_VERTEX_3
or GL_MAP2_VERTEX_4), then the normal to the surface is computed analytically. This
normal is associated with the generated vertex if automatic normal generation has been
enabled by passing GL_AUTO_NORMAL to glEnable(). If it's disabled, the
corresponding enabled normal map is used to produce a normal. If no such map exists,
the current normal is used.
Example 2 draws a wire frame Bézier surface using evaluators, as shown in Figure 2. In
this example, the surface is drawn with nine curved lines in each direction. Each curve is
drawn as 30 segments. To get the whole program, add the reshape() and main() routines
from Example 1.
#include <GL/glut.h>
GLfloat ctrlpoints[4][4][3] = {
{{-1.5, -1.5, 4.0}, {-0.5, -1.5, 2.0},
{0.5, -1.5, -1.0}, {1.5, -1.5, 2.0}},
{{-1.5, -0.5, 1.0}, {-0.5, -0.5, 3.0},
{0.5, -0.5, 0.0}, {1.5, -0.5, -1.0}},
{{-1.5, 0.5, 4.0}, {-0.5, 0.5, 0.0},
{0.5, 0.5, 3.0}, {1.5, 0.5, 4.0}},
{{-1.5, 1.5, -2.0}, {-0.5, 1.5, -2.0},
{0.5, 1.5, 0.0}, {1.5, 1.5, -1.0}}
};
void display(void)
{
int i, j;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
glPushMatrix ();
glRotatef(85.0, 1.0, 1.0, 1.0);
for (j = 0; j <= 8; j++) {
glBegin(GL_LINE_STRIP);
for (i = 0; i <= 30; i++)
glEvalCoord2f((GLfloat)i/30.0, (GLfloat)j/8.0);
glEnd();
glBegin(GL_LINE_STRIP);
for (i = 0; i <= 30; i++)
glEvalCoord2f((GLfloat)j/8.0, (GLfloat)i/30.0);
glEnd();
}
glPopMatrix ();
glFlush();
}
void init(void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 4, 0, 1, 12, 4, &ctrlpoints[0][0][0]);
glEnable(GL_MAP2_VERTEX_3);
glMapGrid2f(20, 0.0, 1.0, 20, 0.0, 1.0);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_FLAT);
}
In two dimensions, the glMapGrid2*() and glEvalMesh2() commands are similar to the
one-dimensional versions, except that both u and v information must be included.
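The prototypes of the two commands described next are:

void glMapGrid2{fd}(GLint nu, TYPE u1, TYPE u2, GLint nv, TYPE v1, TYPE v2);
void glEvalMesh2(GLenum mode, GLint i1, GLint i2, GLint j1, GLint j2);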
Defines a two-dimensional map grid that goes from u1 to u2 in nu evenly spaced steps,
from v1 to v2 in nv steps (glMapGrid2*()), and then applies this grid to all enabled
evaluators (glEvalMesh2()). The only significant difference from the one-dimensional
versions of these two commands is that in glEvalMesh2() the mode parameter can be
GL_FILL as well as GL_POINT or GL_LINE. GL_FILL generates filled polygons using
the quad-mesh primitive. Stated precisely, glEvalMesh2() is nearly equivalent to one of
the following three code fragments. (It's nearly equivalent because when i is equal to nu
or j to nv, the parameter is exactly equal to u2 or v2, not to u1 + nu*(u2 - u1)/nu, which
might be slightly different due to round-off error.)
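The three fragments themselves are missing from these notes; the following are reconstructed sketches (after the Red Book, ignoring the end-point adjustment just described), for a grid defined by glMapGrid2*(nu, u1, u2, nv, v1, v2) and a call glEvalMesh2(mode, i1, i2, j1, j2):

glBegin(GL_POINTS);   /* mode == GL_POINT */
for (i = i1; i <= i2; i++)
   for (j = j1; j <= j2; j++)
      glEvalCoord2(u1 + i*(u2-u1)/nu, v1 + j*(v2-v1)/nv);
glEnd();

or

for (i = i1; i <= i2; i++) {   /* mode == GL_LINE: iso-lines in v ... */
   glBegin(GL_LINE_STRIP);
   for (j = j1; j <= j2; j++)
      glEvalCoord2(u1 + i*(u2-u1)/nu, v1 + j*(v2-v1)/nv);
   glEnd();
}
for (j = j1; j <= j2; j++) {   /* ... and in u */
   glBegin(GL_LINE_STRIP);
   for (i = i1; i <= i2; i++)
      glEvalCoord2(u1 + i*(u2-u1)/nu, v1 + j*(v2-v1)/nv);
   glEnd();
}

or

for (i = i1; i < i2; i++) {   /* mode == GL_FILL: quad strips */
   glBegin(GL_QUAD_STRIP);
   for (j = j1; j <= j2; j++) {
      glEvalCoord2(u1 + i*(u2-u1)/nu, v1 + j*(v2-v1)/nv);
      glEvalCoord2(u1 + (i+1)*(u2-u1)/nu, v1 + j*(v2-v1)/nv);
   }
   glEnd();
}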
Example 3 shows the differences necessary to draw the same Bézier surface as Example
2, but using glMapGrid2() and glEvalMesh2() to subdivide the square domain into a
uniform 20x20 grid. This program also adds lighting and shading, as shown in Figure 3.
void initlights(void)
{
GLfloat ambient[] = {0.2, 0.2, 0.2, 1.0};
GLfloat position[] = {0.0, 0.0, 2.0, 1.0};
GLfloat mat_diffuse[] = {0.6, 0.6, 0.6, 1.0};
GLfloat mat_specular[] = {1.0, 1.0, 1.0, 1.0};
GLfloat mat_shininess[] = {50.0};
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
glLightfv(GL_LIGHT0, GL_POSITION, position);
glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glRotatef(85.0, 1.0, 1.0, 1.0);
glEvalMesh2(GL_FILL, 0, 20, 0, 20);
glPopMatrix();
glFlush();
}
void init(void)
{
glClearColor(0.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 4,
0, 1, 12, 4, &ctrlpoints[0][0][0]);
glEnable(GL_MAP2_VERTEX_3);
glEnable(GL_AUTO_NORMAL);
glMapGrid2f(20, 0.0, 1.0, 20, 0.0, 1.0);
initlights();
}
45-Animations
Einstein, among other well-known names in the world of science, made a special
study of time in relation to his research in physics. His theory of relativity
maintains that space and time are merely different aspects of the same thing. Since
then other physicists have pointed out that objects can be moved backward and
forward in space, but nothing can be moved back in time.
Another method of describing the concept of time is through the ‘three arrows of
time’. The first ‘arrow’ is thermodynamic and can be seen operating when sugar
dissolves in hot water. Second is the historical ‘arrow’, whereby a single-celled
organism evolves to produce more complex and varied species. The third is the
cosmological ‘arrow’ which is the theory that the universe is expanding from a
‘big bang’ in the past. This cosmic expansion cannot be reversed in time. While
the principle of relationships between the ‘time arrows’ is still to be worked out on
a scientific level, the actual application of it is constantly related to all work which
utilizes it, such as music and the performing arts. In the latter it is one of the most
important raw materials.
In terms of animation, the idea of film time is one of the most vital concepts to
understand and to use. It is an essential raw material which can be compressed or
expanded and used for effects and moods in a highly creative way. It is, therefore,
essential to learn and to understand how time can be applied to animation. The
great advantage of animation is that the animator can creatively manipulate time
since an action must be timed prior to carrying out the actual physical work on a
film.
It is also essential to understand how the audience will react to the manipulation of
time from their point of view. Time sense or ‘a sense of timing’, therefore, is just
as important as color sense and skill of drawing or craftsmanship in film
animation.
It has to be realized that while a performance on stage and on the screen requires a
basic understanding of how timing works, this lecture is primarily confined to
hand drawn animation which up to this point in film history still comprises 90% of
all output in the animation medium.
Timing for TV series
For economic reasons, TV series are made as simply as possible from the
animation point of view. This approach is generally known as limited animation.
Animation is expensive, non-animation is cheaper. So to keep the films lively the
plots are usually carried along by means of dialogue. It is often necessary to work
with prerecorded blocks of dialogue which must remain intact. If this dialogue is
well recorded for maximum dramatic effect, lengths of pauses between phrases
cannot be changed (except within very narrow limits) without destroying that
effect. In this case, the overall timing of long sections of the film is governed
entirely by the dialogue. (There could be, however, considerable flexibility for
more detailed timing within this fixed overall length.)
Between these fixed sections of dialogue, however, the director has room to
maneuver. So, if the total timing for all the
recorded dialogue is subtracted from the required length for the whole film, this
gives the amount of time that is available without dialogue. This can then be split
up in the normal way and distributed throughout the film to give the best effect.
Limited animation
With limited animation as many repeats as possible are used within the 24 frames
per second. A hold is also lengthened to reduce the number of drawings. As a rule
not more than 6 drawings are produced for one second of animation. Limited
animation requires almost as much skill on the part of the animator as full
animation, since he must create an illusion of action with the greatest sense of
economy.
Full animation
Full animation implies a large number of drawings per second of action. Some
action may require that every single frame of the 24 frames within the second is
animated in order to achieve an illusion of fluidity on the screen. Neither time nor
money is spared on animation. As a rule, only TV commercials and feature-length
animated films can afford this luxury.
Ideally, the director should be able to view line test loops of the film as it
progresses and so have a chance to make adjustments. But often there is no time to
make corrections in limited animation and the aim is to make the animation work
the first time.
Timing for Animation in general
Timing in animation is an elusive subject. It only exists whilst the film is being
projected, in the same way that a melody only exists when it is being played. A
melody is more easily appreciated by listening to it than by trying to explain it in
words. So with cartoon timing, it is difficult to avoid using a lot of words to
explain what may seem fairly simple when seen on the screen.
So if having looked through the following pages you can see a better way to
achieve an effect, then go ahead and do it!
A live actor faced with these problems moves his muscles and limbs and deals
with gravity automatically from habit, and so can concentrate on acting. An
animator has to worry about making his flat, weightless drawings move like solid,
heavy objects, as well as making them act in a convincing way. In both these
aspects of animation, timing is of primary importance.
Part of the working storyboard of The Story of the Bible by Halas and Batchelor.
At this stage the director works out the smooth visual flow of the film, the editing,
camera movements and so on. All these elements combine to tell the story in an
interesting way.
The storyboard
A smooth visual flow is the major objective in any film, especially if it is an
animated one. Good continuity depends on coordinating the action of the
character, choreography, scene changes and camera movement. All these different
aspects cannot be considered in isolation. They must work together to put across a
story point. Furthermore, the right emphasis on such planning, including the
behaviour of the character, must also be realised.
The storyboard should serve as a blueprint for any film project and as the first
visual impression of the film. It is at this stage that the major decisions are taken
as far as the film's content is concerned. It is generally accepted that no production
should proceed until a satisfactory storyboard is achieved and most of the creative
and technical problems which may arise during the film's production have been
considered.
There is no strict rule as to how many sketches are required for a film. It depends
on the type, character and content of the project. A rough guideline is
approximately 100 storyboard sketches for each minute of film. If, however, a
film is technically complex, the number of sketches could double. For a TV
commercial, more sketches are produced as a rule because there are usually more
scene changes and more action than in longer films.
The basic unit of time in animation
The basis of timing in animation is the fixed projection speed of 24 frames per
second (fps) for film and video. While other projection speeds have been used in
the past, the standard projection rate for film of all formats (16mm, 35mm and
70mm) remains 24 fps. On television and video this becomes 25 frames per second
(PAL) or 30 fps (NTSC), but the difference is usually imperceptible.
The thing to remember is that if an action on the screen takes one second it covers
24 frames of film, and if it takes half a second it covers 12 frames and so on.
24 frames of film go through the projector every second (25 on television). This
fixed number of frames provides the basis on which all actions are planned and
timed by the director.
For single frame animation, where one drawing is done for each frame, a second
of action needs 24 drawings. If the same action is animated on double frames,
where each drawing is photographed twice in succession, 12 drawings are
necessary, but the number of frames, and hence the speed of the action, would be
the same in both cases.
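As a minimal illustration of this arithmetic (the helper below is hypothetical, not part of the lecture), the number of drawings needed follows directly from the projection rate and the number of frames each drawing is held for:

#include <stdio.h>

/* drawings needed for an action: frames = seconds * fps,
   and each drawing is held for framesPerDrawing frames */
int drawingsNeeded(double seconds, int fps, int framesPerDrawing)
{
   int frames = (int)(seconds * fps + 0.5);  /* round to nearest frame */
   return (frames + framesPerDrawing - 1) / framesPerDrawing;
}

int main(void)
{
   printf("%d\n", drawingsNeeded(1.0, 24, 1));  /* 24 drawings on singles */
   printf("%d\n", drawingsNeeded(1.0, 24, 2));  /* 12 drawings on doubles */
   return 0;
}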
Whatever the mood or pace of the action that appears on the screen, whether it be
a frantic chase or a romantic love scene, all timing calculations must be based on
the fact that the projector continues to hammer away at its constant projection rate.
That is—24 fps for film and either 25 fps or 30 fps for television and video
depending on format. The unit of time within which an animator works is,
therefore, 1/24 sec, 1/25 sec or 1/30 sec, and an important part of the skill which
the animator has to learn is what this specific timing ‘feels’ like on screen. With
practice the animator also learns what multiples of this unit look like—3 frames, 8
frames, 12 frames and so on.
Animation and properties of matter
The basic question which an animator is continually asking himself is: ‘What will
happen to this object when a force acts upon it?’ And the success of his animation
largely depends on how well he answers this question.
All objects in nature have their own weight, construction and degree of flexibility,
and therefore each behaves in its own individual way when a force acts upon it.
This behavior, a combination of position and timing, is the basis of animation.
Animation consists of drawings, which have neither weight nor any
forces acting on them. In certain types of limited or abstract animation, the
drawings can be treated as moving patterns. However, in order to give meaning to
movement, the animator must consider Newton's laws of motion which contain all
the information necessary to move characters and objects around. There are many
aspects of his theories which are important here. However, it is not
necessary to know the laws of motion in their verbal form, but in the way which is
familiar to everyone, that is by watching things move. For instance, everyone
knows that things do not start moving suddenly from rest—even a cannonball has
to accelerate to its maximum speed when fired. Nor do things suddenly stop
dead—a car hitting a wall of concrete carries on moving after the first impact,
during which time it folds itself rapidly up into a wreck.
It is not the exaggeration of the weight of the object which is at the centre of
animation, but the exaggeration of the tendency of the weight—any weight—to
move in a certain way.
The timing of a scene for animation has two aspects:
With inanimate objects the problems are straightforward dynamics. ‘How long
does a door take to slam?’, ‘How quickly does a cloud drift across the sky?’,
‘How long does it take a steamroller, running out of control downhill, to go
through a brick wall?’.
With living characters the same kind of problems occur because a character is a
piece of flesh which has to be moved around by the action of forces on it. In
addition, however, time must be allowed for the mental operation of the character,
if he is to come alive on the screen. He must appear to be thinking his way
through his actions, making decisions and finally moving his body around under
the influence of his own will power and muscle.
The animator's job is to synthesize movements and to apply just the right amount
of creative exaggeration to make the movement look natural within the cartoon
medium.
Cartoon film is a medium of caricature. The character of each subject and the
movement it expresses are exaggerated. The subjects can be considered as
caricatured matter acted upon by caricatured forces.
When a cannon is fired, the explosive charge is very large indeed, and this is
sufficient to accelerate the cannonball to a considerable speed. A smaller force
acting for a short time, say a
strong kick, may have no effect on the cannonball at all. In fact it is more likely to
damage the kicker's toe. However, persistent force, even if not very strong, would
gradually start the cannonball rolling and it would eventually be travelling fairly
quickly.
A cannonball needs a lot of force to start it moving. Once moving, it takes a lot
of stopping.
The body normally leans forward in the direction of movement, although for
comic effect a backward lean can sometimes work. If a faster run than an eight
frame repeat is needed, then perhaps several foot positions can be given on each
drawing, to fill up the gaps in the movement, or possibly the legs can become a
complete blur treated entirely in dry-brush.
In the first example, drawing 4 is equivalent to the ‘step’ position in a walk, with
the maximum forward and backward leg and arm movement. In a run it is also the
point at which the centre of gravity of the body is farthest from the ground.
These are both examples of eight frame run cycles. This means four drawings to
each step. Drawings 1 and 5 show the same leg and arm positions but with
opposite feet, and so do 2 and 6, 3 and 7, 4 and 8. In such a short cycle these
positions should be varied slightly to avoid a mechanical effect.
Timing and music
Ever since the very first animated sound productions, Disney's Steamboat Willie and
Fischinger's abstract film Brahms' Hungarian Dances, it was clear that there is a
strong relationship between animation and music. This relationship can be
explained on two accounts. First, both elements have a basic mathematical
foundation and move forward at a determined speed. Second, since animation is
created manually frame by frame, it can be fitted to music in a very exact manner.
Animation is further able to capture the music's rhythm and mood, and to hit the
beat right to the frame.
Most animation makes good use of this advantage.
In general principle it is more difficult to follow the mood of a musical
composition than its beat. The latter aspect of the music is easily
measured, since beats are fitted into bar units of defined time length and are
interpreted in time units.
Bars can contain various numbers of beats and these must be measured to the film
frame. Having done this, it is comparatively easy to fit the animation to the speed
of the beat and find the right type of movement to follow the music, whether it is a
slow waltz of 36 frames, or 4 frames for rock music. A beat can be emphasised by
synchronisation of the feet but it works better if the whole body is used. In quick
beats of 3, 4 or 6 frames it is possible to follow every second beat without losing
the rhythm. It is always better to work to specially prepared music if this can be
afforded.
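Translating a musical tempo into frames is simple arithmetic; this small helper (hypothetical, not from the lecture) shows the conversion:

#include <stdio.h>

/* frames per musical beat at a given projection rate:
   beats per minute -> seconds per beat -> frames per beat */
double framesPerBeat(double bpm, double fps)
{
   return fps * 60.0 / bpm;
}

int main(void)
{
   /* e.g. a 120 bpm track at 24 fps gives 12 frames per beat */
   printf("%.1f frames per beat\n", framesPerBeat(120.0, 24.0));
   return 0;
}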
Camera movements
Tracks are used to move into a closer field or pull back to a more distant one.
They are done by moving the camera frame by frame, up or down its vertical
pillar, above the animation drawings. Usually the field centre also moves during a
track, and as the camera travels on a fixed axis, this movement in a north-south,
east-west direction is done by moving the table.
Tracks and table moves are worked out in terms of general timing—track lengths
and the action which the various fields must include—by the director on bar
sheets before production begins.
When the animator finalises the action of the scene in detail, he converts the
director's timing into specific instructions to the cameraman. He writes down the
field sizes and marks the frames where camera movements start and stop in the
‘camera instructions’ column on the exposure chart (Fig. A). He also provides a
drawn field key with field centres marked (Fig. B).
It then becomes the cameraman's responsibility to achieve the required effect
smoothly and accurately on the screen. Briefly, the procedure is as follows: in
Fig. B the track is made from field X to field Y, so the screen centre moves
towards the south-east. This means that under the camera, the table must move
north-west. Fig. C is an enlargement of this table move, showing how the
cameraman divides the line to achieve a smooth movement from X to Y. At the same
time he measures the distance the camera travels on its column during the track,
and divides this
in exactly the same way as Fig. C, so that camera and table top move smoothly
together. Fig. D is a similar track and table move which includes a tilt. This would
also be done as a table move.