Unit - V Computer Graphics
Three-Dimensional Viewing
Viewing in 3D involves the following considerations:
- We can view an object from any spatial position, e.g., in front of an object, behind the object, in the middle of a group of objects, or inside an object.
- 3D descriptions of objects must be projected onto the flat viewing surface of the output device.
- The clipping boundaries enclose a volume of space.
Viewing Pipeline
Modelling Coordinates
  -> [Modelling Transformation] -> World Coordinates
  -> [Viewing Transformation] -> Viewing Coordinates
  -> [Projection Transformation] -> Projection Coordinates
  -> [Workstation Transformation] -> Device Coordinates
Modelling Transformation and Viewing Transformation can be done by 3D transformations. The viewing-coordinate system is used in graphics packages as a reference for specifying the observer's viewing position and the position of the projection plane. Projection operations convert the viewing-coordinate description (3D) to coordinate positions on the projection plane (2D); this step is usually combined with clipping, visible-surface identification, and surface rendering. Workstation transformation maps the coordinate positions on the projection plane to the output device.
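As a minimal sketch of how the stages compose, assuming 4x4 homogeneous matrices and numpy (the matrix names are illustrative placeholders, each standing in for a real stage):

    import numpy as np

    # Hypothetical stage matrices; each would be built from the scene setup.
    M_model = np.eye(4)   # modelling transformation: object -> world
    M_view  = np.eye(4)   # viewing transformation: world -> viewing
    M_proj  = np.eye(4)   # projection transformation: viewing -> projection

    # A vertex in homogeneous modelling coordinates.
    p_model = np.array([1.0, 2.0, 3.0, 1.0])

    # The pipeline applies the stages in order; as matrices they
    # compose right-to-left.
    p_proj = M_proj @ M_view @ M_model @ p_model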
Viewing Transformation
Conversion of object descriptions from world to viewing coordinates is equivalent to a transformation that superimposes the viewing reference frame onto the world frame using the basic geometric translate-rotate operations:
1. Translate the view reference point to the origin of the world-coordinate system.
2. Apply rotations to align the xv, yv, and zv axes (viewing coordinate system) with the world xw, yw, zw axes, respectively.
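A minimal sketch of these two steps, assuming the view reference point p0 and orthonormal viewing axes u, v, n are already known as vectors in world coordinates (the function name is illustrative):

    import numpy as np

    def world_to_viewing(p0, u, v, n):
        # Step 1: translate the view reference point p0 to the origin.
        T = np.eye(4)
        T[:3, 3] = -p0
        # Step 2: rotate so that u, v, n line up with xw, yw, zw; the
        # rows of the rotation matrix are the viewing axis unit vectors.
        R = np.eye(4)
        R[0, :3] = u
        R[1, :3] = v
        R[2, :3] = n
        return R @ T   # composite world -> viewing matrix

    # Example: viewpoint at (0, 0, 5), viewing axes parallel to world axes.
    M = world_to_viewing(np.array([0.0, 0.0, 5.0]),
                         np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]))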
Projections
Projection operations convert the viewing-coordinate description (3D) to coordinate positions on the projection plane (2D). There are 2 basic projection methods:
1. Parallel Projection transforms object positions to the view plane along parallel lines. A parallel projection preserves relative proportions of objects, and accurate views of the various sides of an object are obtained, but it does not give a realistic representation.
2. Perspective Projection transforms object positions to the view plane while converging to a center point of projection. Perspective projection produces realistic views but does not preserve relative proportions: projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.
Parallel Projection
Classification: Orthographic Parallel Projection and Oblique Projection.
Orthographic parallel projections are done by projecting points along parallel lines that are perpendicular to the projection plane. Oblique projections are obtained by projecting along parallel lines that are NOT perpendicular to the projection plane. Some special orthographic parallel projections are the Plan View (top projection), Side Elevations, and Isometric Projection. A sketch of both cases follows.
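A minimal sketch of the two cases, assuming the view plane is z = 0 in viewing coordinates; the oblique shear factors shown (L1 = 0.5, phi = 45 degrees, i.e. a cabinet projection) are just one common choice:

    import math

    def orthographic(x, y, z):
        # Projectors are perpendicular to the view plane z = 0,
        # so the projection simply discards the z coordinate.
        return (x, y)

    def oblique(x, y, z, L1=0.5, phi=math.radians(45.0)):
        # Projectors are parallel but NOT perpendicular to the view
        # plane; points are sheared in proportion to their depth z.
        return (x + z * L1 * math.cos(phi),
                y + z * L1 * math.sin(phi))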
Perspective Projection
Perspective projection is done in 2 steps: perspective transformation and parallel projection. These steps are described in the following section.
Perspective Transformation and Perspective Projection
To produce the perspective viewing effect, after the Modelling Transformation, the Viewing Transformation is carried out to transform objects from the world coordinate system to the viewing coordinate system. Afterwards, objects in the scene are further processed with the Perspective Transformation: the view volume in the shape of a frustum becomes a regular parallelepiped. The transformation equations are shown as follows and are applied to every vertex of each object:
x' = x * (d/z)
y' = y * (d/z)
z' = z
where d is the distance from the center of projection (at the viewing origin) to the projection plane.
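Applied per vertex, a minimal sketch (assuming z > 0 for all vertices in front of the viewer):

    def perspective_transform(x, y, z, d):
        # x' = x * (d/z), y' = y * (d/z), z' = z; z is kept unchanged
        # so depth comparisons remain possible after the transformation.
        return (x * d / z, y * d / z, z)

    # A vertex twice as far away projects at half the size:
    print(perspective_transform(2.0, 2.0, 10.0, 5.0))  # (1.0, 1.0, 10.0)
    print(perspective_transform(2.0, 2.0, 20.0, 5.0))  # (0.5, 0.5, 20.0)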
Back-Face Detection
Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates specified in a clockwise direction (unlike the counterclockwise direction used in a right-handed system).
Also, back faces have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive Zv axis. By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
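A sketch of the test, assuming three polygon vertices given as numpy vectors listed clockwise in a left-handed viewing system, so that the plane parameter C is the z component of the cross product of two edge vectors:

    import numpy as np

    def is_back_face(v0, v1, v2):
        # Plane normal (A, B, C) from two edge vectors of the polygon;
        # vertices are assumed clockwise in a left-handed viewing system.
        A, B, C = np.cross(v1 - v0, v2 - v0)
        # Viewing along the positive Zv axis: C >= 0 marks a back face.
        return C >= 0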
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system Renders Everything You Ever Saw (REYES).
The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer.
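A minimal sketch of the accumulation-buffer idea: instead of a single depth per pixel, each pixel keeps a list of surface fragments so transparent surfaces can be retained and blended later (the field names are illustrative, not the original REYES structures):

    from dataclasses import dataclass, field

    @dataclass
    class Fragment:
        depth: float      # z value of this surface piece at the pixel
        color: tuple      # RGB contribution of the surface piece
        opacity: float    # 1.0 means fully opaque

    @dataclass
    class APixel:
        # Unlike a depth-buffer entry, an A-buffer pixel accumulates
        # every fragment covering it; the final color is resolved by
        # depth-sorting and blending the list.
        fragments: list = field(default_factory=list)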
Algorithm
Step-1 − Set the buffer values:
Depthbuffer(x, y) = 0
Framebuffer(x, y) = background color
Step-2 − Process each polygon (one at a time):
For each projected (x, y) pixel position of a polygon, calculate depth z.
If z > Depthbuffer(x, y):
compute surface color,
set Depthbuffer(x, y) = z,
Framebuffer(x, y) = surfacecolor(x, y)
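The two steps above, written out as a sketch; the depth convention follows the pseudocode (buffers initialized to 0, larger z wins), and poly.rasterize() is a hypothetical helper yielding (x, y, z, color) samples for a polygon:

    def depth_buffer_render(polygons, width, height, background):
        # Step 1: set the buffer values.
        depthbuffer = [[0.0] * width for _ in range(height)]
        framebuffer = [[background] * width for _ in range(height)]
        # Step 2: process each polygon, one at a time.
        for poly in polygons:
            for x, y, z, color in poly.rasterize():  # hypothetical helper
                if z > depthbuffer[y][x]:
                    depthbuffer[y][x] = z
                    framebuffer[y][x] = color
        return framebuffer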
Advantages
It is easy to implement.
It reduces the speed problem if implemented in hardware.
It processes one object at a time.
Disadvantages
It requires large memory.
It is a time-consuming process.
Scan-Line Method
It is an image-space method to identify visible surfaces. This method keeps depth information for only a single scan line. In order to acquire one scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.
To facilitate the search for surfaces crossing a given scan line, an active list of edges is formed. The active list stores only those edges that cross the scan line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position along a scan line is inside or outside the surface.
Pixel positions across each scan line are processed from left to right. At the left intersection with a surface, the surface flag is turned on, and at the right intersection, the flag is turned off. You only need to perform depth calculations when multiple surfaces have their flags turned on at a certain scan-line position.
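A sketch of processing one scan line, assuming the active edge list is already reduced to integer (x, surface) intersections sorted by increasing x; depth_at and color_at are hypothetical helpers, and smaller depth is taken to mean nearer the view plane:

    def process_scan_line(active_edges, depth_at, color_at, background, width):
        # active_edges: sorted list of (x, surface) intersection pairs.
        inside = set()                 # surfaces whose flag is currently on
        line = [background] * width
        x_prev = 0
        for x, surface in active_edges + [(width, None)]:  # sentinel at end
            for px in range(x_prev, x):
                if len(inside) == 1:
                    line[px] = color_at(next(iter(inside)), px)
                elif len(inside) > 1:
                    # Depth test only where several flags are on at once.
                    nearest = min(inside, key=lambda s: depth_at(s, px))
                    line[px] = color_at(nearest, px)
            if surface is not None:
                # Toggle the flag: on at the left edge, off at the right.
                inside.symmetric_difference_update({surface})
            x_prev = x
        return line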