
Unit V

Three-Dimensional Viewing
Viewing in 3D involves the following considerations:
- We can view an object from any spatial position, e.g., in front of the object, behind the object, in the middle of a group of objects, or inside an object.
- 3D descriptions of objects must be projected onto the flat viewing surface of the output device.
- The clipping boundaries enclose a volume of space.
Viewing Pipeline

Modelling Coordinates → (Modelling Transformation) → World Coordinates → (Viewing Transformation) → Viewing Coordinates → (Projection Transformation) → Projection Coordinates → (Workstation Transformation) → Device Coordinates
Modelling Transformation and Viewing Transformation are carried out with 3D transformations.
The viewing-coordinate system is used in graphics packages as a reference for specifying the observer's viewing position and the position of the projection plane. Projection operations convert the viewing-coordinate description (3D) to coordinate positions on the projection plane (2D); this stage is usually combined with clipping, visible-surface identification, and surface rendering. The workstation transformation maps the coordinate positions on the projection plane to the output device.

Viewing Transformation
Conversion of object descriptions from world to viewing coordinates is equivalent to a transformation that superimposes the viewing reference frame onto the world frame using the basic geometric translate-rotate operations:
1. Translate the view reference point to the origin of the world-coordinate system.
2. Apply rotations to align the xv, yv, and zv axes (viewing coordinate system) with the world xw, yw, zw axes, respectively.
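The two steps above can be composed into a single world-to-viewing matrix. Below is a minimal Python sketch, assuming the viewing frame is given by a view reference point and orthonormal axis vectors u, v, n; the function name and the use of NumPy are illustrative assumptions, not part of the original notes.

import numpy as np

def world_to_viewing_matrix(view_ref_point, u, v, n):
    """Build the 4x4 world-to-viewing matrix: translate the view
    reference point to the origin, then rotate so that u, v, n
    align with the world x, y, z axes."""
    # Translation that moves the view reference point to the origin
    T = np.identity(4)
    T[:3, 3] = -np.asarray(view_ref_point, dtype=float)

    # Rotation whose rows are the (orthonormal) viewing axes
    R = np.identity(4)
    R[0, :3] = u
    R[1, :3] = v
    R[2, :3] = n

    return R @ T  # translate first, then rotate

# Example: viewing frame located at (2, 3, 5) with world-aligned axes
M = world_to_viewing_matrix([2, 3, 5], [1, 0, 0], [0, 1, 0], [0, 0, 1])
print(M @ np.array([2, 3, 5, 1]))  # the view reference point maps to the origin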

Projections
Projection operations convert the viewing-coordinate description (3D) to coordinate positions on the projection plane (2D). There are 2 basic projection methods:
1. Parallel Projection transforms object positions to the view plane along parallel lines. A parallel projection preserves relative proportions of objects, and accurate views of the various sides of an object are obtained, but it does not give a realistic representation.

2. Perspective Projection transforms object positions to the view plane along lines that converge to a center of projection. A perspective projection produces realistic views but does not preserve relative proportions: projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.
Parallel Projection
Classification: Orthographic Parallel Projection and Oblique Projection.

Orthographic parallel projections are done by projecting points along parallel lines that are perpendicular to the projection plane.
Oblique projections are obtained by projecting along parallel lines that are NOT perpendicular to the projection plane. Some special orthographic parallel projections are the plan view (top projection), side elevations, and the isometric projection.

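As an illustration of the difference, here is a minimal Python sketch (assumed for illustration, not part of the original notes) that projects a 3D point onto the z = 0 view plane both orthographically and obliquely, using the standard angle/length-ratio formulation of oblique projection.

import math

def orthographic_project(x, y, z):
    """Project along lines perpendicular to the z = 0 view plane."""
    return (x, y)

def oblique_project(x, y, z, L1=1.0, phi_deg=45.0):
    """Project along parallel lines that are NOT perpendicular to the
    view plane. L1 is the length ratio of the projected z-axis
    (L1 = 1 gives a cavalier projection, L1 = 0.5 a cabinet projection)
    and phi is the angle the projected z-axis makes with the x-axis."""
    phi = math.radians(phi_deg)
    xp = x + z * L1 * math.cos(phi)
    yp = y + z * L1 * math.sin(phi)
    return (xp, yp)

# A unit-cube corner at depth z = 1
print(orthographic_project(1, 1, 1))      # (1, 1)
print(oblique_project(1, 1, 1))           # cavalier view of the same corner
print(oblique_project(1, 1, 1, L1=0.5))   # cabinet view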

Perspective Projection
Perspective projection is done in 2 steps: perspective transformation and parallel projection. These steps are described in the following section.

Perspective Transformation and Perspective Projection
To produce the perspective viewing effect, after the Modelling Transformation, the Viewing Transformation is carried out to transform objects from the world coordinate system to the viewing coordinate system. Afterwards, objects in the scene are further processed with the Perspective Transformation: the view volume in the shape of a frustum becomes a regular parallelepiped. The transformation equations are shown as follows and are applied to every vertex of each object:
x' = x * (d/z)
y' = y * (d/z)
z' = z
where (x, y, z) is the original position of a vertex, (x', y', z') is the transformed position of the vertex, and d is the distance of the image plane from the center of projection.

Note that perspective transformation is different from perspective projection: perspective projection projects a 3D object onto a 2D plane perspectively, whereas perspective transformation converts a 3D object into a deformed 3D object. After the transformation, the depth value of an object remains unchanged. Before the perspective transformation, all the projection lines converge to the center of projection; after the transformation, all the projection lines are parallel to each other. Finally we can apply a parallel projection to project the object onto the 2D image plane.
Perspective Projection = Perspective Transformation + Parallel Projection
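A minimal Python sketch of this two-step process using the equations above (the function names and the choice of projecting onto the image plane by dropping z are assumptions for illustration):

def perspective_transform(vertex, d):
    """Apply the perspective transformation from the text:
    x' = x * (d/z), y' = y * (d/z), z' = z (depth is unchanged)."""
    x, y, z = vertex
    return (x * d / z, y * d / z, z)

def parallel_project(vertex):
    """Parallel projection onto the image plane: drop the depth coordinate."""
    x, y, _z = vertex
    return (x, y)

# Perspective projection = perspective transformation + parallel projection
d = 1.0                 # distance of the image plane from the center of projection
v = (4.0, 2.0, 8.0)     # a vertex in viewing coordinates
print(parallel_project(perspective_transform(v, d)))   # (0.5, 0.25)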

Visible-Surface Detection Methods


Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).
We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if
V.N > 0
Furthermore, if object descriptions are converted to projection coordinates and the viewing direction is parallel to the viewing z-axis, then
V = (0, 0, Vz) and V.N = Vz * C
so that we only need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with the viewing direction along the negative zv axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since the viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value
C <= 0

Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C, and D can be calculated from polygon vertex coordinates specified in a clockwise direction (unlike the counterclockwise direction used in a right-handed system).
Also, back faces have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive zv axis. By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
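A minimal sketch of the test, assuming counterclockwise vertex ordering in a right-handed system (the helper names are illustrative assumptions):

def polygon_normal(p0, p1, p2):
    """Normal (A, B, C) of a polygon from three of its vertices,
    assuming counterclockwise ordering in a right-handed system."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back face if V.N > 0; with V along the negative z-axis this
    reduces to checking whether C (the z component of N) is negative."""
    return sum(v * n for v, n in zip(view_dir, normal)) > 0

# Triangle facing the viewer (normal along +z): not a back face
n = polygon_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_back_face(n))   # False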
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw).
The A-buffer expands on the depth-buffer method to allow transparency. The key data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields −


 Depth field − It stores a positive or negative real number
 Intensity field − It stores surface-intensity information or a pointer value
If depth >= 0, the number stored at that position is the depth of a single surface overlapping
the corresponding pixel area. The intensity field then stores the RGB components of the
surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. Each surface entry in this list includes −
 RGB intensity components
 Opacity Parameter
 Depth
 Percent of area coverage
 Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values
are used to determine the final color of a pixel.
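A minimal sketch of the accumulation-buffer cell described above (the class and field names are assumptions for illustration):

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SurfaceData:
    """One entry in the linked list of surfaces contributing to a pixel."""
    rgb: Tuple[float, float, float]   # RGB intensity components
    opacity: float                    # opacity parameter
    depth: float                      # depth of this surface fragment
    coverage: float                   # percent of pixel area covered
    surface_id: int                   # surface identifier

@dataclass
class ABufferCell:
    """One position in the A-buffer.
    depth >= 0: a single surface; intensity holds its color and coverage.
    depth <  0: multiple surfaces; surfaces holds the contribution list."""
    depth: float = 0.0
    intensity: Optional[Tuple[float, float, float]] = None
    surfaces: List[SurfaceData] = field(default_factory=list)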
Depth-Buffer (Z-Buffer) Method
This method was developed by Catmull. It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest (visible) surface.
In this method each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared, and the surface closest to the view plane determines the color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, the frame buffer and the depth buffer, are used.
The depth buffer stores a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
The frame buffer stores the intensity (color) value at each position (x, y).
The z-coordinates are usually normalized to the range [0, 1]. A z-coordinate of 0 indicates the back clipping plane and a z-coordinate of 1 indicates the front clipping plane.

Algorithm
Step 1 − Set the buffer values −
depthbuffer(x, y) = 0
framebuffer(x, y) = background color
Step 2 − Process each polygon (one at a time).
For each projected (x, y) pixel position of a polygon, calculate the depth z.
If z > depthbuffer(x, y) −
compute the surface color,
set depthbuffer(x, y) = z,
framebuffer(x, y) = surfacecolor(x, y)
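A minimal Python sketch of the algorithm, assuming polygons have already been rasterized into (x, y, z, color) fragments (this fragment representation is an assumption for illustration):

def z_buffer_render(width, height, polygons, background=(0, 0, 0)):
    """Depth-buffer algorithm: keep, per pixel, the fragment with the
    largest normalized depth (1 = front clipping plane, 0 = back)."""
    depthbuffer = [[0.0] * width for _ in range(height)]
    framebuffer = [[background] * width for _ in range(height)]

    for polygon in polygons:               # process one polygon at a time
        for x, y, z, color in polygon:     # projected pixel fragments
            if z > depthbuffer[y][x]:      # closer than what is stored
                depthbuffer[y][x] = z
                framebuffer[y][x] = color
    return framebuffer

# Two overlapping one-pixel "polygons"; the nearer one (z = 0.9) wins
polys = [[(1, 1, 0.4, (255, 0, 0))], [(1, 1, 0.9, (0, 255, 0))]]
print(z_buffer_render(3, 3, polys)[1][1])   # (0, 255, 0)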
Advantages
 It is easy to implement.
 It reduces the speed problem if implemented in hardware.
 It processes one object at a time.
Disadvantages
 It requires large memory.
 It is a time-consuming process.
Scan-Line Method
This is an image-space method for identifying visible surfaces. It keeps depth information for only a single scan-line; therefore all polygons intersecting a given scan-line must be grouped and processed together before the next scan-line is processed. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.

To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed. The active list stores only those edges that cross the current scan-line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position along a scan-line is inside or outside the surface.
Pixel positions across each scan-line are processed from left to right. At the left intersection with a surface the surface flag is turned on, and at the right intersection it is turned off. Depth calculations are needed only when multiple surfaces have their flags turned on at a given scan-line position.
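A minimal sketch of the flag-based bookkeeping on one scan-line, assuming each surface's left/right edge crossings for that scan-line are already known; the names, the span representation, and the simplified depth function are assumptions, not the full algorithm from the notes.

def scan_line_visible(width, crossings, depth_at):
    """For one scan-line: a surface's flag is on between its left and right
    edge crossings; where more than one flag is on, do a depth test and
    keep the surface closest to the viewer.

    crossings: {surface_id: (x_left, x_right)} for this scan-line
    depth_at:  function (surface_id, x) -> depth (smaller = closer here)"""
    visible = [None] * width
    for x in range(width):
        active = [s for s, (xl, xr) in crossings.items() if xl <= x < xr]
        if len(active) == 1:
            visible[x] = active[0]          # only one flag on: no depth test needed
        elif len(active) > 1:               # overlapping surfaces: depth test
            visible[x] = min(active, key=lambda s: depth_at(s, x))
    return visible

# Two constant-depth surfaces overlapping in x = 4..6; surface 1 is closer
spans = {0: (2, 7), 1: (4, 9)}
print(scan_line_visible(10, spans, lambda s, x: [0.8, 0.3][s]))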
