UNIT 3 Notes

Unit: 03

Syllabus
Unit – III (8 hours)
Three Dimensional Geometric and Modeling Transformations: Translation, Rotation, Scaling, Reflections, Shear, Composite Transformation.
Projections: Parallel Projection, Perspective Projection.
Visible Surface Detection Methods: Back-Face Detection, Depth Buffer, A-Buffer, Scan-Line Algorithm, Painter's Algorithm.

Three Dimensional Geometric and Modeling Transformations:


Rotation
3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation along with the axis of rotation. We can perform 3D rotation about the X, Y, and Z axes. They are represented in matrix form as below −
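The rotation matrices referenced above were in a figure that is not reproduced here; the standard homogeneous forms for rotation by an angle $\theta$ about each coordinate axis are:

$$
R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad
R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

A rotated point is obtained as $P' = R(\theta)\,P$, with $P$ in homogeneous column form $[X\;\,Y\;\,Z\;\,1]^T$.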
Scaling
You can change the size of an object using a scaling transformation. In the scaling process, you either expand or compress the dimensions of the object. Scaling is achieved by multiplying the original coordinates of the object by the scaling factors to get the desired result. The figure shows the effect of 3D scaling −

In a 3D scaling operation, three coordinates are used. Let us assume that the original coordinates are (X, Y, Z), the scaling factors are (Sx, Sy, Sz) respectively, and the produced coordinates are (X', Y', Z'). This can be mathematically represented as shown below −
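The scaling matrix itself is not in the extracted text; in homogeneous coordinates the standard form is:

$$
\begin{bmatrix} X' \\ Y' \\ Z' \\ 1 \end{bmatrix} =
\begin{bmatrix} S_x & 0 & 0 & 0 \\ 0 & S_y & 0 & 0 \\ 0 & 0 & S_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad\text{i.e. } X' = X \cdot S_x,\; Y' = Y \cdot S_y,\; Z' = Z \cdot S_z .
$$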


Shear
A transformation that slants the shape of an object is called a shear transformation. As in 2D shear, we can shear an object along the X-axis, Y-axis, or Z-axis in 3D.

As shown in the above figure, there is a coordinate P. You can shear it to get a new coordinate P', which can be represented in 3D matrix form as below −
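The shear matrix referenced above is missing from the extracted text; as one standard example, a shear relative to the Z-axis (with shear parameters $sh_x$ and $sh_y$) has the homogeneous form:

$$
SH_z = \begin{bmatrix} 1 & 0 & sh_x & 0 \\ 0 & 1 & sh_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\qquad X' = X + sh_x \cdot Z,\; Y' = Y + sh_y \cdot Z,\; Z' = Z .
$$

Analogous matrices give shears relative to the X-axis and Y-axis.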


Parallel Projection
In parallel projection, the z-coordinate is discarded and parallel lines from each vertex of the object are extended until they intersect the view plane. In parallel projection, we specify a direction of projection instead of a center of projection.

In parallel projection, the distance from the center of projection to the projection plane is infinite. In this type of projection, we connect the projected vertices by line segments that correspond to connections on the original object.

Parallel projections are less realistic, but they are good for exact measurements. In this type of projection, parallel lines remain parallel but angles are not preserved. The various types of parallel projections are shown in the following hierarchy.

Orthographic Projection
In orthographic projection, the direction of projection is normal to the projection plane. There are three types of orthographic projections −
 Front Projection
 Top Projection
 Side Projection
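As an illustration (not part of the original notes), an orthographic projection onto the view plane z = 0 simply drops the depth coordinate:

$$
x_p = x, \qquad y_p = y, \qquad z_p = 0 .
$$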

Oblique Projection
In oblique projection, the direction of projection is not normal to the projection plane. In oblique projection, we can view the object better than in orthographic projection.

There are two types of oblique projections − Cavalier and Cabinet. The Cavalier projection makes a 45° angle with the projection plane. In a Cavalier projection, the projection of a line perpendicular to the view plane has the same length as the line itself; that is, the foreshortening factors for all three principal directions are equal.


The Cabinet projection makes a 63.4° angle with the projection plane. In a Cabinet projection, lines perpendicular to the viewing surface are projected at one-half their actual length.
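As a sketch of the underlying equations (not reproduced from the notes), an oblique projection onto the plane z = 0 maps a point (x, y, z) to

$$
x_p = x + z \, L \cos\varphi, \qquad y_p = y + z \, L \sin\varphi,
$$

where $\varphi$ is the angle the projected z-direction makes in the view plane and $L$ is the foreshortening factor for lines perpendicular to the view plane. $L = 1$ gives the Cavalier projection (projectors at 45° to the view plane) and $L = 1/2$ gives the Cabinet projection (projectors at about 63.4°).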

Isometric Projections
Orthographic projections that show more than one side of an object are called axonometric orthographic projections. The most common axonometric projection is the isometric projection, where the projection plane intersects each coordinate axis of the model coordinate system at an equal distance. In this projection, parallelism of lines is preserved but angles are not. The following figure shows an isometric projection −
Perspective Projection
In perspective projection, the distance from the center of projection to the projection plane is finite, and the size of the object varies inversely with distance, which looks more realistic.

Distances and angles are not preserved, and parallel lines do not remain parallel. Instead, they all converge at a single point called the center of projection or projection reference point.
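As a brief sketch (not part of the original notes), with the center of projection at the origin and the view plane at z = d, a point (x, y, z) projects to

$$
x_p = \frac{x \, d}{z}, \qquad y_p = \frac{y \, d}{z}, \qquad z_p = d,
$$

which shows the projected size varying inversely with the depth z.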
There are 3 types of perspective projections
which are shown in the following chart.
 One-point perspective projection is simple to draw.
 Two-point perspective projection gives a better impression of depth.
 Three-point perspective projection is the most difficult to draw.
The following figure shows all the three types of perspective projection −


Translation
In 3D translation, we transfer the Z coordinate along with the X and Y coordinates. The process for translation in 3D is similar to 2D translation. A translation moves an object to a different position on the screen.

The following figure shows the effect of translation −

A point can be translated in 3D by adding the translation distances (tx, ty, tz) to the original coordinate (X, Y, Z) to get the new coordinate (X', Y', Z').
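In homogeneous coordinates this is the standard translation matrix (the figure with the matrix is not reproduced here):

$$
\begin{bmatrix} X' \\ Y' \\ Z' \\ 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad X' = X + t_x,\; Y' = Y + t_y,\; Z' = Z + t_z .
$$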

Depth Buffer Algorithm / Z Buffer Algorithm:


The depth-buffer or Z-buffer algorithm is one of the simplest and most commonly used image-space approaches to eliminating hidden surfaces.
 It is called the Z-buffer because object depth is measured from the view plane along the Z axis of the viewing system.
 It compares surface depths at each pixel position on the projection plane.
 When the object description is converted to projection coordinates, each pixel position on the view plane is specified by its (x, y) coordinate, and the Z value gives the depth information.
 It is implemented in normalized coordinates, so that the Z value ranges from '0' at the back clipping plane to '1' at the front clipping plane.
 One additional buffer, called the Z-buffer, is required to store the depth value for each position.
 At the beginning, the Z-buffer is initialized to '0' and the frame buffer to the background color.
 As a scan line passes over the surface list, the Z value is calculated at each pixel (x, y) and compared with the previously stored one.


 If the new value is greater than the previous one, the Z-buffer is updated with the new value and the frame buffer with the corresponding surface intensity.

Algorithm:
Step 1. Initialize the buffers for all positions (x, y): depthBuff(x, y) = 0 and frameBuff(x, y) = background color.

Step 2. During the scan-conversion process, for each pixel position of each polygon surface, compare the depth value to the previously stored value in the depth buffer to determine visibility.
Calculate the Z value for each (X, Y) position on the polygon.
If Z > depthBuff(X, Y), then set depthBuff(X, Y) = Z and frameBuff(X, Y) = surfColor(X, Y).

Step 3. STOP.
NOTE: The Z value can be evaluated from the polygon's plane equation AX + BY + CZ + D = 0 as Z = (−AX − BY − D) / C.
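A minimal Python sketch of the steps above (the helpers projected_pixels, depth_at and color_at are illustrative assumptions, not part of the notes):

def z_buffer(surfaces, width, height, background):
    # Step 1: depth buffer starts at 0 (back clipping plane), frame buffer at background color
    depth_buff = [[0.0] * width for _ in range(height)]
    frame_buff = [[background] * width for _ in range(height)]

    # Step 2: scan-convert every surface, keeping the closest depth seen at each pixel
    for surf in surfaces:
        for (x, y) in surf.projected_pixels():        # pixels covered by the projected polygon
            z = surf.depth_at(x, y)                   # normalized depth in [0, 1], 1 = front
            if z > depth_buff[y][x]:                  # new surface is closer than the stored one
                depth_buff[y][x] = z
                frame_buff[y][x] = surf.color_at(x, y)
    return frame_buff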

Advantages:
 Easy to implement.
 The total number of polygons in a picture may be large; the algorithm still handles them without sorting.

Disadvantages:
 Requires an additional buffer.
 Time-consuming, as each pixel is compared.

A-Buffer:
The A-buffer is an extension of the depth-buffer idea. It is an antialiased, area-averaged method. A drawback of the Z-buffer is that it can find only one visible surface at each pixel position.
In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed.
The A-buffer extends the Z-buffer so that each position in the buffer can reference a linked list of surfaces. Thus more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.
Each position in the A-buffer has two fields:
I. Depth field: stores a positive or negative real number.


II. Intensity field: stores the surface intensity information.

If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The
intensity field then stores a pointer to a linked list of surface data. The surface buffer in the A-
buffer includes −
 RGB intensity components
 Opacity Parameter
 Depth
 Percent of area coverage
 Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity
values are used to determine the final color of a pixel.
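A rough Python sketch of the per-pixel record described above (the class and field names are illustrative assumptions, not from the notes):

class SurfaceData:
    def __init__(self, rgb, opacity, depth, coverage, surface_id, next_node=None):
        self.rgb = rgb                  # RGB intensity components
        self.opacity = opacity          # opacity parameter
        self.depth = depth              # depth
        self.coverage = coverage        # percent of area coverage
        self.surface_id = surface_id    # surface identifier
        self.next = next_node           # next surface contributing to this pixel

class ABufferEntry:
    def __init__(self):
        self.depth = 0.0        # >= 0: depth of a single surface; < 0: multiple surfaces contribute
        self.intensity = None   # single surface: (rgb, coverage); multiple: head of a SurfaceData linked list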

Scan-Line Method
The scan-line method is an image-space method for identifying visible surfaces. It maintains depth information for only a single scan line. In order to obtain one scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − contains the coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − contains the plane coefficients, surface material properties, other surface data, and possibly pointers into the edge table.


To facilitate the search for surfaces crossing a given scan-line, an active list of edges is
formed. The active list stores only those edges that cross the scan-line in order of increasing
x. Also a flag is set for each surface to indicate whether a position along a scan-line is either
inside or outside the surface.

Pixel positions across each scan-line are processed from left to right. At the left
intersection with a surface, the surface flag is turned on and at the right, the flag is turned
off. You only need to perform depth calculations when multiple surfaces have their flags
turned on at a certain scan-line position.
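A condensed Python sketch of the per-scan-line logic described above (the edge and surface helpers, and the "larger depth value means closer" convention, are illustrative assumptions):

def scan_line_visibility(y, active_edges, width, background):
    # active_edges: edges crossing scan line y, each knowing which surface it bounds
    color = [background] * width
    flags = set()                                   # surfaces whose flag is currently "on"
    edges = iter(sorted(active_edges, key=lambda e: e.x_at(y)))
    edge = next(edges, None)
    for x in range(width):                          # process pixels left to right
        while edge is not None and edge.x_at(y) <= x:
            flags.symmetric_difference_update({edge.surface})   # toggle flag at each crossing
            edge = next(edges, None)
        if len(flags) == 1:
            color[x] = next(iter(flags)).color      # only one surface: no depth test needed
        elif len(flags) > 1:
            nearest = max(flags, key=lambda s: s.depth_at(x, y))  # depth test only when several flags are on
            color[x] = nearest.color
    return color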
Depth Sorting Method / Painter’s Algorithm / Priority Algorithm
Depth sorting method uses both image space and object-space operations. The
depth-sorting method performs two basic functions −

 First, the surfaces are sorted in order of decreasing depth.

 Second, the surfaces are scan-converted in order, starting with the surface of
greatest depth.

The scan conversion of the polygon surfaces is performed in image space. This
method for solving the hidden-surface problem is often referred to as the painter's
algorithm. The following figure shows the effect of depth sorting −

The algorithm begins by sorting by depth. For example, the initial “depth” estimate of a
polygon may be taken to be the closest z value of any vertex of the polygon.

Let us take the polygon P at the end of the list. Consider all polygons Q whose z-extents
overlap P’s. Before drawing P, we make the following tests. If any of the following tests is
positive, then we can assume P can be drawn before Q.

 Do the x-extents not overlap?


 Do the y-extents not overlap?
 Is P entirely on the opposite side of Q’s plane from the viewpoint?
 Is Q entirely on the same side of P’s plane as the viewpoint?
 Do the projections of the polygons not overlap?


If all the tests fail, then we split either P or Q using the plane of the other. The new cut polygons are inserted into the depth order and the process continues. Theoretically, this partitioning could generate O(n²) individual polygons, but in practice the number of polygons is much smaller.
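A compact Python sketch of the overall procedure (the extent, side-of-plane and splitting helpers are assumed, not given in the notes):

def painters_algorithm(polygons, viewpoint, draw):
    # Sort in order of decreasing depth: the surface of greatest depth is drawn first.
    order = sorted(polygons, key=lambda p: p.max_depth(viewpoint), reverse=True)

    i = 0
    while i < len(order):
        p = order[i]
        for q in order[i + 1:]:
            if not p.z_extent_overlaps(q):
                continue
            # The five tests from the list above; if any succeeds, P may safely precede Q.
            if (not p.x_extent_overlaps(q)
                    or not p.y_extent_overlaps(q)
                    or p.opposite_side_of_plane(q, viewpoint)
                    or q.same_side_of_plane(p, viewpoint)
                    or not p.projection_overlaps(q)):
                continue
            # All tests failed: split P by Q's plane and re-insert the pieces in depth order.
            order[i:i + 1] = sorted(p.split_by_plane_of(q),
                                    key=lambda piece: piece.max_depth(viewpoint), reverse=True)
            break
        else:
            draw(p)       # scan-convert P, painting over anything already drawn
            i += 1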

PROBLEM: Given the points P1, P2, P3 and the viewpoint C (0, 0, −10), determine which point obscures the others when viewed from C.
ANSWER: Parametrize the line joining C (0, 0, −10) and P1 (1, 2, 0):
X = XC + (X1 − XC)U = 0 + (1 − 0)U = U
Y = YC + (Y1 − YC)U = 0 + (2 − 0)U = 2U
Z = ZC + (Z1 − ZC)U = −10 + (0 − (−10))U = −10 + 10U
Determine whether P2 lies on the line CP1 or not.
For P2 = (3, 6, 20): X = 3, Y = 6 and Z = 20 each give U = 3.
Hence P2 lies on the line.
Then check whether P1 or P2 comes first,
by comparing the distances CP1 and CP2 along the line:
C occurs on the line at U = 0,
P1 at U = 1,
P2 at U = 3.
Hence P1 is in front of P2; therefore P1 obscures P2.
Then check P3: taking U = 2 gives the point (2, 4, 10) on the line, which is not P3.
Hence P3 does not lie on the line CP1, so it neither obscures nor is obscured by P1 or P2.
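The parametric check can be verified with a few lines of Python (a quick illustrative sketch, using the coordinates from the worked answer):

def param_u(c, p1, q):
    # Solve q = c + (p1 - c) * u component-wise; q lies on the line CP1
    # only if every component yields the same u.
    return [(qi - ci) / (p1i - ci) for ci, p1i, qi in zip(c, p1, q) if p1i != ci]

C, P1, P2 = (0, 0, -10), (1, 2, 0), (3, 6, 20)
print(param_u(C, P1, P2))   # [3.0, 3.0, 3.0] -> P2 is on line CP1 at U = 3, behind P1 (U = 1)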

