Module 4
18CS62 – Computer Graphics and Visualization. V.N. Manju, Dept. of CSE, GCEM, Bangalore. 1
Overview of Three-Dimensional
Viewing Concepts
• Viewing a Three-Dimensional Scene:
• To display a three-dimensional scene, we first set up a viewing-coordinate
reference. This reference defines the position and orientation for a view
plane (or projection plane) that corresponds to a camera film plane, as
shown in the figure below.
• Two methods:
• parallel projection
• perspective projection
• Depth Cueing
• Identifying Visible Lines and Surfaces
• Exploded and Cutaway Views
• Exploded and cutaway views of such objects can then be used to show the
internal structure and relationship of the object parts.
• An alternative to exploding an object into its component parts is a cutaway
view, which removes part of the visible surfaces to show internal structure
• Three-Dimensional and Stereoscopic Viewing
• Three-dimensional views can be obtained by reflecting a raster image from a
vibrating, flexible mirror.
• The vibrations of the mirror are synchronized with the display of the scene on
the cathode ray tube (CRT).
• As the mirror vibrates, the focal length varies so that each point in the scene
is reflected to a spatial position corresponding to its depth.
• Stereoscopic devices present two views of a scene: one for the left eye and
the other for the right eye.
• The viewing positions correspond to the eye positions of the viewer. These
two views are typically displayed on alternate refresh cycles of a raster
monitor
The Three-Dimensional Viewing
Pipeline
Three-Dimensional Viewing-
Coordinate Parameters
⮚ Select a world-coordinate position P0 = (x0, y0, z0) for the viewing origin,
called the view point or viewing position, and specify a view-up vector V,
which defines the yview direction.
The View-Plane Normal Vector
• Because the viewing direction is usually along the zview axis, the view
plane, also called the projection plane, is normally assumed to be
perpendicular to this axis.
• Thus, the orientation of the view plane, as well as the direction for the
positive zview axis, can be defined with a view-plane normal vector N.
The View-Up Vector
The uvn Viewing-Coordinate
Reference Frame
Using the input values for N and V, we can compute a third vector, U,
that is perpendicular to both N and V.
Generating Three-Dimensional
Viewing Effects
• By varying the viewing parameters, we can obtain different views of objects in a
scene.
• We could change the direction of N to display objects at positions around the
viewing-coordinate origin.
• We could also vary N to create a composite display consisting of multiple views
from a fixed camera position.
• In interactive applications, the normal vector N is the viewing parameter that is
most often changed. Of course, when we change the direction for N, we also
have to change the other axis vectors to maintain a right-handed viewing-
coordinate system.
• If we want to simulate an animation panning effect, as when a camera moves
through a scene or follows an object that is moving through a scene, we can
keep the direction of N fixed as we move the view point.
Transformation from World to Viewing
Coordinates
• Translate the viewing-coordinate origin to the origin of the
world-coordinate system.
• Apply rotations to align the xview, yview, and zview axes with the world
xw, yw, and zw axes, respectively.
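The two steps can be combined: subtracting P0 and then projecting onto the unit axis vectors u, v, n is equivalent to the composite rotation-translation matrix. A minimal sketch (hypothetical names; assumes u, v, n are already unit length and mutually perpendicular):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Transform a world-coordinate point p into viewing coordinates,
   given the viewing origin P0 and the uvn axis vectors.
   Equivalent to applying the translation T(-P0) followed by the
   rotation whose rows are u, v, and n. */
Vec3 worldToView(Vec3 p, Vec3 P0, Vec3 u, Vec3 v, Vec3 n) {
    Vec3 d = { p.x - P0.x, p.y - P0.y, p.z - P0.z };
    Vec3 r = { dot(u, d), dot(v, d), dot(n, d) };
    return r;
}
```

With the viewing origin at (0, 0, 5) and the uvn axes aligned with the world axes, a world point (1, 2, 5) lands at viewing coordinates (1, 2, 0), as expected: it sits on the view origin's z level.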
Projection Transformations
void display(void)
{
    /* Clear the color and depth buffers */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    /* Place the camera at viewer[], aimed at the origin, with +y as view up */
    gluLookAt(viewer[0], viewer[1], viewer[2], 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    /* Draw the cube as quads (24 indices) from the vertex list */
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
    glFlush();
    glutSwapBuffers();   /* show the newly rendered frame (double buffering) */
}
void keys(unsigned char key, int x, int y)
{
    /* Move the viewer along a coordinate axis:
       lowercase decreases, uppercase increases */
    if (key == 'x') viewer[0] -= 1.0;
    if (key == 'X') viewer[0] += 1.0;
    if (key == 'y') viewer[1] -= 1.0;
    if (key == 'Y') viewer[1] += 1.0;
    if (key == 'z') viewer[2] -= 1.0;
    if (key == 'Z') viewer[2] += 1.0;
    glutPostRedisplay();   /* request a redraw with the new viewpoint */
}
void myinit()
{
    /* Define a perspective view volume: left, right, bottom, top, near, far */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-2.0, 2.0, -2.0, 2.0, 2.0, 20.0);
    glMatrixMode(GL_MODELVIEW);   /* back to modelview for viewing transforms */
}
4.6 Orthogonal Projections
• A transformation of object descriptions to a view
plane along lines that are all parallel to the view-
plane normal vector N is called an orthogonal
projection (also termed an orthographic projection).
• This produces a parallel-projection transformation
in which the projection lines are perpendicular to
the view plane.
• Orthogonal projections are most often used to
produce the front, side, and top views of an
object
• Front, side, and rear orthogonal projections of an
object are called elevations; and a top orthogonal
projection is called a plan view
Axonometric and Isometric
Orthogonal Projections
• We can also form orthogonal projections
that display more than one face of an
object. Such views are called axonometric
orthogonal projections.
• The most commonly used axonometric
projection is the isometric projection,
which is generated by aligning the
projection plane (or the object) so that the
plane intersects each coordinate axis in
which the object is defined, called the
principal axes, at the same distance from
the origin
Orthogonal Projection Coordinates
• With the projection direction parallel to the zview axis, the transformation
equations for an orthogonal projection are trivial. For any position (x, y, z)
in viewing coordinates, as in Figure below, the projection coordinates are
xp = x, yp = y
Clipping Window and Orthogonal-
Projection View Volume
• In OpenGL, we set up a clipping window for
three-dimensional viewing just as we did for
two-dimensional viewing, by choosing two-
dimensional coordinate positions for its lower-
left and upper-right corners.
• For three-dimensional viewing, the clipping
window is positioned on the view plane with its
edges parallel to the xview and yview axes, as
shown in Figure below . If we want to use
some other shape or orientation for the clipping
window, we must develop our own viewing
procedures
Cont…
• The edges of the clipping window specify the x and y limits for the part of
the scene that we want to display.
• These limits are used to form the top, bottom, and two sides of a clipping
region called the orthogonal-projection view volume.
• Because projection lines are perpendicular to the view plane, these four
boundaries are planes that are also perpendicular to the view plane and
that pass through the edges of the clipping window to form an infinite
clipping region, as in Figure below.
Clipping Window and Orthogonal-
Projection View Volume
• To bound the view volume in the z direction, we
also specify a near plane and a far plane; these two
planes are called the near-far clipping planes, or
the front-back clipping planes.
• The near and far planes allow us to exclude
objects that are in front of or behind the part
of the scene that we want to display.
• When the near and far planes are specified,
we obtain a finite orthogonal view volume
that is a rectangular parallelepiped, as
shown in Figure below along with one
possible placement for the view plane
Normalization Transformation for an
Orthogonal Projection
• Once we have established the limits for the
view volume, coordinate descriptions
inside this rectangular parallelepiped are
the projection coordinates, and they can
be mapped into a normalized view volume
without any further projection processing.
• Some graphics packages use a unit cube
for this normalized view volume, with each
of the x, y, and z coordinates normalized in
the range from 0 to 1.
• Another normalization-transformation
approach is to use a symmetric cube, with
coordinates in the range from −1 to 1
• We can convert projection coordinates into
positions within a left-handed normalized-
coordinate reference frame, and these
coordinate positions will then be transferred to
left-handed screen coordinates by the viewport
transformation.
• To illustrate the normalization transformation, we
assume that the orthogonal-projection view
volume is to be mapped into the symmetric
normalization cube within a left-handed
reference frame.
• Also, z-coordinate positions for the near and far
planes are denoted as znear and zfar,
respectively. Figure below illustrates this
normalization transformation
Normalization Transformation for an
Orthogonal Projection
The normalization transformation for the
orthogonal view volume is given by the matrix shown below.
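Since the normalization matrix appears on the slide as a figure, here is a sketch of the mapping it encodes (hypothetical function name; assumes the symmetric cube from −1 to 1 and the switch to a left-handed frame, so znear maps to −1 and zfar to +1):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Map orthogonal-projection coordinates into the symmetric
   normalization cube [-1, 1].  The z mapping flips handedness:
   z = znear goes to -1 and z = zfar goes to +1. */
Vec3 orthoNormalize(Vec3 p,
                    double xwmin, double xwmax,
                    double ywmin, double ywmax,
                    double znear, double zfar) {
    Vec3 r;
    r.x = (2.0 * p.x - (xwmax + xwmin)) / (xwmax - xwmin);
    r.y = (2.0 * p.y - (ywmax + ywmin)) / (ywmax - ywmin);
    r.z = (-2.0 * p.z + (znear + zfar)) / (znear - zfar);
    return r;
}
```

For a clipping window [−2, 2] × [−2, 2] with znear = −2 and zfar = −20, the near lower-left corner (−2, −2, −2) maps to (−1, −1, −1) and the far upper-right corner (2, 2, −20) maps to (1, 1, 1).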
4.7. Perspective Projections
• In a perspective projection, we approximate the geometric optics of a camera
or the eye by projecting objects to the view plane along converging paths to a
position called the projection reference point (or center of projection).
• Objects are then displayed with foreshortening effects, and projections of
distant objects are smaller than the projections of objects of the same size
that are closer to the view plane
Cont…
• The figure below shows the projection path of a spatial position (x, y, z) to a
general projection reference point at (xprp, yprp, zprp).
• The projection line intersects the view plane at the coordinate position (xp,
yp, zvp), where zvp is some selected position for the view plane on the zview
axis.
• We can write equations describing coordinate positions along this
perspective-projection line in parametric form as
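A sketch of where these parametric equations lead (hypothetical function name): solving for the parameter value at which the line reaches the view plane z = zvp and substituting back gives the projected x and y coordinates.

```c
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Project (x, y, z) onto the view plane z = zvp along the line
   toward the projection reference point (xprp, yprp, zprp).
   A point on the line is P + t * (PRP - P); its z coordinate
   equals zvp when t = (zvp - z) / (zprp - z). */
Vec2 perspProject(double x, double y, double z,
                  double xprp, double yprp, double zprp,
                  double zvp) {
    double t = (zvp - z) / (zprp - z);
    Vec2 p = { x + t * (xprp - x), y + t * (yprp - y) };
    return p;
}
```

With the projection reference point at the origin and the view plane at zvp = −1, the point (2, 2, −2) projects to (1, 1): it is twice as far from the reference point as the view plane, so its x and y are halved.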
Cont…
glFrustum
void glFrustum(GLdouble left, GLdouble right,
               GLdouble bottom, GLdouble top,
               GLdouble nearVal, GLdouble farVal);

Example: glFrustum(-2.0, 2.0, -2.0, 2.0, 2.0, 20.0);
left, right - Specify the coordinates for the left and right vertical clipping planes.
bottom, top - Specify the coordinates for the bottom and top horizontal clipping planes.
nearVal, farVal - Specify the distances to the near and far depth clipping planes. Both distances must be positive.
Perspective-Projection Transformation
Coordinates
• The figure below shows the projection path of a spatial position (x, y, z) to a
general projection reference point at (xprp, yprp, zprp).
• The projection line intersects the view plane at the coordinate position
(xp, yp, zvp).
Projection Equation
Perspective-Projection Equations:
Special Cases
• Case 1:
• To simplify the perspective calculations, the projection reference point is
limited to positions along the zview axis, so that
• xprp = yprp = 0
Case 2:
Case 3:
Case 4:
Cont…
• The view plane is usually placed between the projection reference point
and the scene, but, in general, the view plane could be placed anywhere
except at the projection point.
• If the projection reference point is between the view plane and the scene,
objects are inverted on the view plane (see the figure below).
Pinhole Camera
Perspective effects also depend on the distance between
the projection reference point and the view plane
• If the projection reference point
is close to the view plane,
perspective effects are
emphasized; that is, closer
objects will appear much larger
than more distant objects of
the same size.
• Similarly, as the projection
reference point moves farther
from the view plane, the
difference in the size of near
and far objects decreases
Vanishing Points for Perspective
Projections
• The point at which a set of projected
parallel lines appears to converge is
called a vanishing point.
• Each set of projected parallel lines has a
separate vanishing point.
• For a set of lines that are parallel to one
of the principal axes of an object, the
vanishing point is referred to as a
principal vanishing point.
• We control the number of principal
vanishing points (one, two, or three) with
the orientation of the projection plane,
and perspective projections are
accordingly classified as one-point, two-
point, or three-point projections
Cont…
Frustum
• A perspective-projection view volume is often referred to as a pyramid of vision
because it approximates the cone of vision of our eyes or a camera.
• The displayed view of a scene includes only those objects within the pyramid, just
as we cannot see objects beyond our peripheral vision, which are outside the cone
of vision.
• By adding near and far clipping planes that are perpendicular to the zview axis (and
parallel to the view plane), we chop off parts of the infinite perspective-projection
view volume to form a truncated pyramid, or frustum, view volume.
• But with a perspective projection, we could also use the near clipping plane to take
out large objects close to the view plane that could project into unrecognizable
shapes within the clipping window.
• Similarly, the far clipping plane could be used to cut out objects far from the
projection reference point that might project to small blots on the view plane
Figure
Perspective-Projection Transformation
Matrix
• We can use a three-dimensional, homogeneous-coordinate representation
to express the perspective-projection equations in the form
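The matrix itself appears as a figure; as a sketch of the homogeneous form it encodes (hypothetical names; the z row, with its normalization parameters sz and tz, is omitted here): the matrix step produces xh, yh, and the homogeneous parameter h = zprp − z, and dividing by h recovers the projection coordinates of the earlier equations.

```c
#include <math.h>

typedef struct { double xh, yh, h; } Homog;

/* Step 1: homogeneous coordinates from the perspective matrix. */
Homog perspMatrixStep(double x, double y, double z,
                      double xprp, double yprp, double zprp,
                      double zvp) {
    Homog r;
    r.xh = x * (zprp - zvp) + xprp * (zvp - z);
    r.yh = y * (zprp - zvp) + yprp * (zvp - z);
    r.h  = zprp - z;          /* homogeneous parameter */
    return r;
}

/* Step 2: divide by h to recover projection coordinates. */
double homogX(Homog r) { return r.xh / r.h; }
double homogY(Homog r) { return r.yh / r.h; }
```

With the projection reference point at the origin and zvp = −1, the point (2, 2, −2) again projects to (1, 1), matching the direct parametric calculation.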
Cont…
• The perspective-projection transformation of a viewing-coordinate position is
then accomplished in two steps.
• First, we calculate the homogeneous coordinates using the perspective-
transformation matrix:
• Parameters sz and tz are the scaling and translation factors for normalizing
the projected values of z-coordinates.
• Specific values for sz and tz depend on the normalization range we select.
Symmetric Perspective-Projection
Frustum
• The line from the projection reference point
through the center of the clipping window and
on through the view volume is the center line for
a perspective projection frustum.
• If this center line is perpendicular to the view
plane, we have a symmetric frustum (with
respect to its center line)
• Because the frustum centerline intersects the
view plane at the coordinate location (xprp, yprp,
zvp), we can express the corner positions for the
clipping window in terms of the window
dimensions:
Cont…
• Another way to specify a symmetric perspective projection is to use
parameters that approximate the properties of a camera lens.
• A photograph is produced with a symmetric perspective projection of a scene
onto the film plane.
• Reflected light rays from the objects in a scene are collected on the film plane
from within the “cone of vision” of the camera.
• This cone of vision can be referenced with a field-of-view angle, which is a
measure of the size of the camera lens.
• A large field-of-view angle, for example, corresponds to a wide-angle lens.
• In computer graphics, the cone of vision is approximated with a symmetric
frustum, and we can use a field-of-view angle to specify an angular size for
the frustum.
Figure
Figure
• For a given projection reference point and view-plane position, the field-of-view
angle determines the height of the clipping window. From the right
triangles in the diagram of the figure below, we see that
Cont…
• so that the clipping-window height can be calculated as
• Therefore, the diagonal elements with the value zprp − zvp could be replaced
by either of the following two expressions
Oblique Perspective-Projection
Frustum
• In this case, we can first transform the view volume to a symmetric frustum
and then to a normalized view volume.
• An oblique perspective-projection view volume can be converted to a
symmetric frustum by applying a z-axis shearing-transformation matrix.
• This transformation shifts all positions on any plane that is perpendicular to
the z axis by an amount that is proportional to the distance of the plane
from a specified z-axis reference position.
• The computations for the shearing transformation, as well as for the
perspective and normalization transformations, are greatly reduced if we
take the projection reference point to be the viewing-coordinate origin.
• Taking the projection reference point as (xprp, yprp, zprp) = (0, 0, 0), we obtain
the elements of the required shearing matrix as
• Therefore, the parameters for this shearing transformation are
• Concatenating the simplified perspective-projection matrix with the shear
matrix we have
Normalized Perspective-Projection
Transformation Coordinates
• When we divide the homogeneous coordinates by the homogeneous
parameter h, we obtain the actual projection coordinates, which are
orthogonal-projection coordinates
• The final step in the perspective transformation process is to map this
parallelepiped to a normalized view volume.
• The transformed frustum view volume, which is a rectangular
parallelepiped, is mapped to a symmetric normalized cube within a left-
handed reference frame
• Because the centerline of the rectangular parallelepiped view volume is
now the zview axis, no translation is needed in the x and y normalization
transformations: We require only the x and y scaling parameters relative to
the coordinate origin.
• The scaling matrix for accomplishing the xy normalization is
• Concatenating the xy-scaling matrix produces the following normalization
matrix for a perspective-projection transformation.
• From this transformation, we obtain the homogeneous coordinates:
• To normalize this perspective transformation, we want the projection
coordinates to be (xp, yp, zp) = (−1, −1, −1) when the input coordinates are
(x, y, z) = (xwmin, ywmin, znear), and we want the projection coordinates to be
(xp, yp, zp) = (1, 1, 1) when the input coordinates are (x, y, z) = (xwmax, ywmax,
zfar).
• And the elements of the normalization transformation matrix for a general
perspective projection are
OpenGL Three-Dimensional Viewing
Functions
• OpenGL Viewing-Transformation Function
• glMatrixMode (GL_MODELVIEW);
• gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
• OpenGL Orthogonal-Projection Function
• glMatrixMode (GL_PROJECTION);
• glOrtho (xwmin, xwmax, ywmin, ywmax, dnear, dfar);
• OpenGL General Perspective-Projection Function
• glFrustum (xwmin, xwmax, ywmin, ywmax, dnear, dfar);
• OpenGL Viewports and Display Windows
• glViewport (xvmin, yvmin, vpWidth, vpHeight);
Perspective projection:
• The center of projection is at a finite distance from the viewing plane.
• We explicitly specify a center of projection.
• The size of an object is inversely proportional to its distance from the center of projection.
• Produces realistic views, but does not preserve the relative proportions of objects.
• Not useful for recording the exact shape and measurements of an object.
• Parallel lines do not, in general, project as parallel lines.

Parallel projection:
• The center of projection is at infinity, which results in a parallel projection.
• We specify a direction of projection.
• There is no change in the size of an object.
• Preserves the relative proportions of objects, but does not give a realistic representation of their appearance.
• Used for exact measurement.
• Parallel lines remain parallel.
Syllabus – Part B
• Visible Surface Detection Methods:
• Classification of visible surface Detection algorithms
• Depth buffer method only
• OpenGL visibility detection functions.
Classification of Visible-Surface
Detection Algorithms
• We can broadly classify visible-surface detection algorithms according to whether they
deal with the object definitions or with their projected images.
• Object-space methods: compares objects and parts of objects to each other within
the scene definition to determine which surfaces, as a whole, we should label as
visible.
• Image-space methods: visibility is decided point by point at each pixel position on
the projection plane.
• Although there are major differences in the basic approaches taken by the various visible-
surface detection algorithms, most use sorting and coherence methods to improve
performance.
• Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a
scene according to their distance from the view plane.
• Coherence methods are used to take advantage of regularities in a scene
Back-Face Detection
• A fast and simple object-space method for locating the back faces of a
polyhedron is based on front-back tests: a point (x, y, z) is behind a
polygon surface if
Ax + By + Cz + D < 0
where A, B, C, and D are the plane parameters for the polygon.
• We can simplify the back-face test by considering the direction of the
normal vector N for a polygon surface. If Vview is a vector in the viewing
direction from our camera position, as shown in Figure below, then a
polygon is a back face if
Vview · N > 0
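A minimal sketch of this test (hypothetical names): a polygon is a back face when its normal points away from the viewer, i.e., when the dot product of the viewing direction and the normal is positive.

```c
typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Back-face test: Vview is a vector in the viewing direction,
   N is the polygon's outward surface normal. */
int isBackFace(Vec3 Vview, Vec3 N) {
    return dot(Vview, N) > 0.0;
}
```

Looking down the −z axis (Vview = (0, 0, −1)), a polygon whose normal also points along −z faces away from us and is culled, while one whose normal points along +z faces toward us and is kept.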
Cont...
• In a right-handed viewing system with the viewing direction along the
negative zv axis (Figure below), a polygon is a back face if the z
component, C, of its normal vector N satisfies C < 0.
• Also, we cannot see any face whose normal has z component C = 0,
because our viewing direction is grazing that polygon. Thus, in general, we
can label any polygon as a back face if its normal vector has a z-component
value that satisfies the inequality C ≤ 0.
• Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C, and D can be calculated from polygon
vertex coordinates specified in a clockwise direction.
• Inequality 1 then remains a valid test for points behind the polygon.
• For other objects, such as the concave polyhedron in Figure below, more tests must
be carried out to determine whether there are additional faces that are totally or
partially obscured by other faces
• In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
Depth-Buffer Method
• A commonly used image-space approach for detecting visible surfaces is
the depth-buffer method, which compares surface depth values throughout
a scene for each pixel position on the projection plane.
• The algorithm is usually applied to scenes containing only polygon
surfaces, because depth values can be computed very quickly and the
method is easy to implement.
Figure
• These surfaces can be processed in any order.
• If a surface is closer than any previously processed surfaces, its surface
color is calculated and saved, along with its depth.
• As implied by the name of this method, two buffer areas are required. A
depth buffer is used to store depth values for each (x, y) position as
surfaces are processed, and the frame buffer stores the surface-color
values for each pixel position.
Depth-Buffer Algorithm
1. Initialize the depth buffer and frame buffer so that, for all buffer positions
(x, y), depthBuff(x, y) = 1.0 and frameBuff(x, y) = backgndColor.
2. Process each polygon in the scene, one at a time. For each projected (x, y)
pixel position of a polygon, calculate the depth z. If z < depthBuff(x, y),
compute the surface color at that position and set depthBuff(x, y) = z and
frameBuff(x, y) = surfColor(x, y).
The depth z can be calculated incrementally for each pixel along a scan line
from the polygon's plane equation.
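A minimal sketch of the two buffers and the per-pixel depth test (hypothetical names; a tiny 4×4 buffer with integer color indices stands in for a real frame buffer):

```c
#include <string.h>

#define BUF_W 4
#define BUF_H 4

double depthBuff[BUF_H][BUF_W];
int    frameBuff[BUF_H][BUF_W];   /* color index per pixel */

/* Step 1: maximum normalized depth everywhere, background color 0. */
void clearBuffers(void) {
    for (int y = 0; y < BUF_H; y++)
        for (int x = 0; x < BUF_W; x++) {
            depthBuff[y][x] = 1.0;
            frameBuff[y][x] = 0;
        }
}

/* Step 2: store a surface point only if it is closer than
   whatever has already been processed at that pixel. */
void plotIfCloser(int x, int y, double z, int color) {
    if (z < depthBuff[y][x]) {
        depthBuff[y][x] = z;
        frameBuff[y][x] = color;
    }
}
```

Processing order does not matter: a farther surface written after a nearer one is simply rejected, and a nearer surface written later overwrites the stored color and depth.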
Cont...
• The method can be applied to curved surfaces by determining depth and
color values at each surface projection point.
• In addition, the basic depth-buffer algorithm often performs needless
calculations.
• Objects are processed in an arbitrary order, so that a color can be
computed for a surface point that is later replaced by a closer surface.
OpenGL Visibility-Detection Functions
OpenGL Polygon-Culling Functions
• Back-face removal is accomplished with the functions
• glEnable (GL_CULL_FACE);
• glCullFace (mode);
• where parameter mode is assigned the value GL_BACK, GL_FRONT, or
GL_FRONT_AND_BACK.
• By default, parameter mode in the glCullFace function has the value
GL_BACK
• The culling routine is turned off with glDisable (GL_CULL_FACE);
OpenGL Depth-Buffer Functions
• The OpenGL depth-buffer visibility-detection routines are activated with the
following function:
• glEnable (GL_DEPTH_TEST);
• And we deactivate the depth-buffer routines with
• glDisable (GL_DEPTH_TEST);
• We can also apply depth-buffer visibility testing using some other initial
value for the maximum depth, and this initial value is chosen with
the OpenGL function:
• glClearDepth (maxDepth);
• Parameter maxDepth can be set to any value between 0.0 and 1.0.
• Projection coordinates in OpenGL are normalized to the range from
−1.0 to 1.0, and the depth values between the near and far clipping
planes are further normalized to the range from 0.0 to 1.0.
• As an option, we can adjust these normalization values with
• glDepthRange (nearNormDepth, farNormDepth);
• By default, nearNormDepth = 0.0 and farNormDepth = 1.0.
• But with the glDepthRange function, we can set these two parameters
to any values within the range from 0.0 to 1.0, including
nearNormDepth > farNormDepth