UNIT 3 Notes
Syllabus
Unit – III (8 hours)
Three Dimensional Geometric and Modeling Transformations: Translation,
Rotation, Scaling, Reflections, shear, Composite Transformation.
Projections: Parallel Projection, Perspective Projection.
Visible Surface Detection Methods: Back-Face Detection, Depth Buffer, A-Buffer,
Scan-Line Algorithm, Painter's Algorithm.
Scaling
You can change the size of an object using a scaling transformation. In the scaling process, you either expand or compress the dimensions of the object. Scaling is achieved by multiplying the original coordinates of the object by the scaling factors to get the desired result. The figure shows the effect of 3D scaling −
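In homogeneous coordinates (assuming column vectors and scaling factors s_x, s_y, s_z applied about the origin), the scaling transformation can be sketched in matrix form as −

\[
P' = S \cdot P, \qquad
S =
\begin{bmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
x' = s_x x, \quad y' = s_y y, \quad z' = s_z z
\]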
Shear
A transformation that slants the shape of an object is called a shear transformation. As with 2D shear, we can shear an object along the X-axis, Y-axis, or Z-axis in 3D, and the shear can be represented in 3D matrix form.
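For example, a Z-axis shear with parameters sh_x and sh_y (assuming column vectors in homogeneous coordinates) can be written as −

\[
SH_z =
\begin{bmatrix}
1 & 0 & sh_x & 0 \\
0 & 1 & sh_y & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
x' = x + sh_x \, z, \quad y' = y + sh_y \, z, \quad z' = z
\]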
Parallel Projection
In parallel projection, the z-coordinate is discarded and parallel lines (projectors) from each vertex of the object are extended until they intersect the view plane. In parallel projection, we specify a direction of projection instead of a center of projection.
In parallel projection, the distance from the center of projection to the projection plane is infinite. In this type of projection, we connect the projected vertices by line segments that correspond to connections on the original object.
Parallel projections are less realistic, but they are good for exact measurements. In this type of projection, parallel lines remain parallel but angles are not preserved. The various types of parallel projections are shown in the following hierarchy.
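The idea can be sketched in a few lines of Python (the function name parallel_project and the choice of z = 0 as the view plane are illustrative assumptions, not from the notes) −

    import numpy as np

    def parallel_project(points, direction):
        # Project 3D points onto the view plane z = 0 along a fixed direction of projection.
        dx, dy, dz = direction                    # dz must be non-zero
        pts = np.asarray(points, dtype=float)
        t = -pts[:, 2] / dz                       # parameter that carries each point to z = 0
        x_p = pts[:, 0] + t * dx
        y_p = pts[:, 1] + t * dy
        return np.column_stack([x_p, y_p])        # the z-coordinate is discarded

    # Two vertices of a parallel edge project to a parallel edge, whatever the direction.
    print(parallel_project([[0, 0, 1], [1, 0, 1]], direction=(0, 0, -1)))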
Orthographic Projection
In orthographic projection, the direction of projection is normal to the projection plane. There are three types of orthographic projections −
Front Projection
Top Projection
Side Projection
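A minimal sketch of these three views, assuming the usual axis alignment (front view onto the xy-plane, top view onto the xz-plane, side view onto the yz-plane) −

    import numpy as np

    P = np.array([2.0, 3.0, 5.0])     # a sample point (x, y, z)

    front = P[[0, 1]]                 # drop z: project onto the xy-plane
    top   = P[[0, 2]]                 # drop y: project onto the xz-plane
    side  = P[[1, 2]]                 # drop x: project onto the yz-plane

    print(front, top, side)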
Oblique Projection
In oblique projection, the direction of projection is not normal to the projection plane. In oblique projection, we can view the object better than in orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. The Cavalier projection makes a 45° angle with the projection plane. The projection of a line perpendicular to the view plane has the same length as the line itself in Cavalier projection. In a Cavalier projection, the foreshortening factors for all three principal directions are equal.
The Cabinet projection makes a 63.4° angle with the projection plane. In Cabinet projection, lines perpendicular to the viewing surface are projected at ½ their actual length.
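Both cases can be sketched with the common oblique-projection formula (assuming the view plane z = 0; phi is the angle of the receding axis in the view plane, often 30° or 45°, and L is the foreshortening factor for lines perpendicular to the view plane) −

    import math

    def oblique_project(x, y, z, L, phi_deg):
        # Oblique projection onto the plane z = 0.
        phi = math.radians(phi_deg)
        return (x + L * z * math.cos(phi),
                y + L * z * math.sin(phi))

    # Cavalier: projectors at 45 degrees to the plane, so L = 1 (no foreshortening).
    # Cabinet:  projectors at about 63.4 degrees, so L = 1/2 (half-length receding edges).
    print(oblique_project(1.0, 1.0, 1.0, L=1.0, phi_deg=45))
    print(oblique_project(1.0, 1.0, 1.0, L=0.5, phi_deg=45))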
Isometric Projections
Orthographic projections that show more than one side of an object are called axonometric orthographic projections. The most common axonometric projection is the isometric projection, in which the projection plane intersects each coordinate axis of the model coordinate system at an equal distance. In this projection, parallelism of lines is preserved but angles are not. The following figure shows an isometric projection −
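The equal-distance property can be checked numerically with a small sketch (the particular rotation angles, 45° about the vertical axis and about 35.26° about the horizontal axis, are one common way to build an isometric view) −

    import numpy as np

    a = np.radians(45.0)                        # rotation about the vertical (y) axis
    b = np.arctan(1.0 / np.sqrt(2.0))           # about 35.26 degrees, about the x axis

    Ry = np.array([[ np.cos(a), 0, np.sin(a)],
                   [ 0,         1, 0        ],
                   [-np.sin(a), 0, np.cos(a)]])
    Rx = np.array([[1, 0,          0         ],
                   [0, np.cos(b), -np.sin(b)],
                   [0, np.sin(b),  np.cos(b)]])

    M = Rx @ Ry                                 # orient the object, then drop z (orthographic)
    for axis in np.eye(3):                      # project the unit x, y and z axes
        x, y, _ = M @ axis
        print(np.hypot(x, y))                   # each prints about 0.816: equal foreshortening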
Perspective Projection
In perspective projection, the distance from the center of projection to the projection plane is finite, and the size of the object varies inversely with distance, which looks more realistic.
Distances and angles are not preserved, and parallel lines do not remain parallel; instead, they converge to a single point called the vanishing point. There are three types of perspective projections, which are shown in the following chart.
One-point perspective projection is simple to draw.
Two-point perspective projection gives a better impression of depth.
Three-point perspective projection is the most difficult to draw.
The following figure shows all three types of perspective projection −
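As a sketch of the basic relation (assuming the center of projection at the origin and the view plane at z = d) −

\[
x_p = \frac{d\,x}{z}, \qquad y_p = \frac{d\,y}{z},
\]

so the projected size falls off as 1/z; in homogeneous form,

\[
\begin{bmatrix} x_h \\ y_h \\ z_h \\ w \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 1/d & 0
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix},
\qquad
x_p = \frac{x_h}{w}, \quad y_p = \frac{y_h}{w}.
\]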
Translation
In 3D translation, we transfer the Z coordinate along with the X and Y coordinates. The process for translation in 3D is similar to 2D translation. A translation moves an object to a different position on the screen.
The following figure shows the effect of translation −
A point can be translated in 3D by adding the translation values (tx, ty, tz) to the original coordinates (X, Y, Z) to get the new coordinates (X', Y', Z').
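Assuming column vectors in homogeneous coordinates, this can be written in matrix form as −

\[
P' = T \cdot P, \qquad
T =
\begin{bmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
x' = x + t_x, \quad y' = y + t_y, \quad z' = z + t_z
\]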
Depth-Buffer (Z-Buffer) Method
This is an image-space method that compares surface depth values at each pixel position. If the new depth value is greater than the previously stored one, the Z-buffer is updated with the new value and the frame buffer with the corresponding surface intensity.
Algorithm:
Step 1. Initialize the depth buffer and the frame buffer so that, for every position (X, Y), depth(X, Y) = 0 and frame(X, Y) = background intensity.
Step 2. During the scan-conversion process, for each pixel position of each polygon surface, compare the depth value to the value previously stored in the depth buffer to determine visibility.
Calculate the Z value for each (X, Y) position on the polygon.
If Z > depth(X, Y), then set depth(X, Y) = Z and frame(X, Y) = the surface intensity at (X, Y).
Step 3. STOP.
NOTE: The Z value can be evaluated from the polygon's plane equation Ax + By + Cz + D = 0 as Z = (-AX - BY - D) / C.
Advantages: It is easy to implement and requires no sorting of the surfaces.
Disadvantages: It needs a large amount of memory for the depth buffer, and it records only one visible (opaque) surface at each pixel position.
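A minimal sketch of the comparison step (following the convention used above, where the buffer starts at 0 and a larger Z means a closer surface; the scan conversion of a polygon is simplified here to a ready-made list of pixel samples) −

    import numpy as np

    WIDTH, HEIGHT = 4, 4
    depth = np.zeros((HEIGHT, WIDTH))                             # Step 1: depth buffer = 0
    frame = np.full((HEIGHT, WIDTH), "background", dtype=object)  # frame buffer = background

    def process_surface(samples, intensity):
        # samples: (x, y, z) triples produced by scan-converting one polygon surface.
        for x, y, z in samples:
            if z > depth[y, x]:                   # Step 2: the new point is closer
                depth[y, x] = z
                frame[y, x] = intensity

    process_surface([(1, 1, 0.4), (2, 1, 0.4)], "red")
    process_surface([(1, 1, 0.7)], "blue")        # closer, so it overwrites the red pixel
    print(frame)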
A-Buffer:
It is an extension of the depth-buffer idea. It represents an antialiased, area-averaged accumulation-buffer method. A drawback of the Z-buffer is that it can find only one visible surface at each pixel position.
In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed.
The A-buffer extends the Z-buffer so that each position in the buffer can reference a linked list of surfaces. Thus more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.
Each position in the A-buffer has two fields:
I. Depth field − stores a positive or negative real number.
II. Intensity field − stores surface-intensity information or a pointer value.
If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes −
RGB intensity components
Opacity Parameter
Depth
Percent of area coverage
Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity
values are used to determine the final color of a pixel.
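The two fields and the per-surface record can be sketched as simple Python structures (the class and field names are illustrative, not from the notes) −

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SurfaceData:                 # one node of the linked list for a pixel
        rgb: tuple                     # RGB intensity components
        opacity: float                 # opacity parameter
        depth: float
        coverage: float                # percent of pixel area covered
        surface_id: int
        next: Optional["SurfaceData"] = None

    @dataclass
    class ABufferCell:
        depth: float = 0.0             # >= 0: depth of a single overlapping surface
                                       #  < 0: several surfaces contribute to this pixel
        intensity: object = None       # RGB + coverage, or a pointer to a SurfaceData list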
Scan-Line Method
It is an image-space method to identify visible surfaces. This method keeps depth information for only a single scan line. In order to acquire one scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains the coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is
formed. The active list stores only those edges that cross the scan-line in order of increasing
x. Also a flag is set for each surface to indicate whether a position along a scan-line is either
inside or outside the surface.
Pixel positions across each scan-line are processed from left to right. At the left
intersection with a surface, the surface flag is turned on and at the right, the flag is turned
off. You only need to perform depth calculations when multiple surfaces have their flags
turned on at a certain scan-line position.
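The two tables and the surface flag can be sketched as follows (the field names are illustrative, not prescribed by the notes) −

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Edge:                        # one entry of the edge table
        y_top: float                   # coordinate endpoints of the line
        y_bottom: float
        x_at_y_top: float
        inv_slope: float               # inverse slope, used to step x between scan lines
        surfaces: List[int]            # pointers into the polygon table

    @dataclass
    class Polygon:                     # one entry of the polygon table
        plane: Tuple[float, float, float, float]   # plane coefficients (A, B, C, D)
        material: dict                 # surface material properties and other surface data
        in_flag: bool = False          # on while the scan line is inside this surface

    # Per scan line: keep an active list of edges crossing it, sorted by increasing x;
    # toggle each surface's in_flag at its edge crossings, and compute depths from the
    # plane coefficients only where more than one in_flag is on at the same position.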
Depth Sorting Method / Painter’s Algorithm / Priority Algorithm
The depth-sorting method uses both image-space and object-space operations. It performs two basic functions −
First, the surfaces are sorted in order of decreasing depth.
Second, the surfaces are scan-converted in order, starting with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in image space. This
method for solving the hidden-surface problem is often referred to as the painter's
algorithm. The following figure shows the effect of depth sorting −
The algorithm begins by sorting by depth. For example, the initial “depth” estimate of a
polygon may be taken to be the closest z value of any vertex of the polygon.
Let us take the polygon P at the end of the list. Consider all polygons Q whose z-extents overlap P's. Before drawing P, we make the following tests. If any of the following tests is positive, then we can assume P can be drawn before Q.
Do the x-extents of P and Q not overlap?
Do the y-extents of P and Q not overlap?
Is P entirely on the opposite side of Q's plane from the viewpoint?
Is Q entirely on the same side of P's plane as the viewpoint?
Do the projections of P and Q onto the view plane not overlap?
If all the tests fail, then we split either P or Q using the plane of the other. The new cut polygons are inserted into the depth order and the process continues. Theoretically, this partitioning could generate O(n²) individual polygons, but in practice the number of polygons is much smaller.
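A minimal back-to-front sketch (the overlap tests and the splitting step are omitted; larger z is assumed to mean farther from the viewer here, and draw is any routine that fills one projected polygon) −

    def painters_draw(polygons, draw):
        # polygons: list of vertex lists [(x, y, z), ...]; sort so the deepest comes first.
        ordered = sorted(polygons, key=lambda poly: max(v[2] for v in poly), reverse=True)
        for poly in ordered:
            draw(poly)                 # closer polygons are scan-converted later, painting over

    # Tiny example with a "renderer" that just reports the drawing order.
    painters_draw(
        [[(0, 0, 1), (1, 0, 1), (0, 1, 1)],    # near triangle
         [(0, 0, 9), (1, 0, 9), (0, 1, 9)]],   # far triangle, drawn first
        draw=print)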