
UNIT – 3: THREE DIMENSIONAL GRAPHICS
Three-dimensional computer graphics use a three-dimensional representation of geometric data that is stored in the computer for the purpose of performing calculations and rendering 2D images.
In a three-dimensional model we can view the structure from different viewpoints. 3D graphics programs allow objects to be created on an X – Y – Z scale (width, height, depth).
For example, an architect can view a building structure from different angles, or an automobile engineer can view an automobile design in different views. Every point in 3D is located using three coordinate values instead of two. In order to define points with three coordinates, we define a third axis, called the Z – axis.
Three Dimensional Co-ordinate system
The conventional orientation for the coordinate axes in a 3D Cartesian reference system is shown below. This is called a right-handed system because the right-hand thumb points in the positive Z-axis direction.
If the thumb of the right hand points in the positive Z direction as the fingers curl around the Z-axis from the positive X-axis to the positive Y-axis (through 90 degrees), then the coordinates form a right-handed system.
If the thumb of the left hand points in the positive Z direction when we imagine grasping the Z-axis, so that the fingers of the left hand curl from the positive X-axis to the positive Y-axis through 90 degrees, then the coordinates form a left-handed system.

Note: Normally the right-handed system is used in mathematics, but in computer graphics the left-handed system is often used, since objects behind the display screen then have positive z values.
Three Dimensional Display Techniques: On a graphics display it is impossible to produce an image that is a perfectly realistic representation of an actual scene.
The factors that should be taken into account are:
1. The different kinds of aspects needed by the application.
2. The amount of processing required to generate the image.
3. The capability of the display hardware.
4. The amount of detail recorded in the image.
5. The perceptual effects of the image on the observer.
The basic technique for achieving realism in three-dimensional graphics is projection.
Projection can be defined as the mapping of a point P(x, y, z) onto its image P′(x′, y′, z′) in the projection plane or view plane. Projections are classified into two types:
1. Parallel Projection
2. Perspective Projection
1. Parallel Projection: Parallel projection generates a view of a solid object by projecting points on the object surface along parallel lines onto the display plane. By selecting different viewing positions, we can project visible points of the object onto the display plane and obtain different two-dimensional views of the object.
In parallel projection, parallel lines in the world-coordinate scene project as parallel lines on the 2D display plane. The parallel projection technique is used in engineering and architectural drawings to represent an object with a set of views.

Parallel projection preserves relative proportions of objects. Accurate views of the various sides of an
object are obtained with a parallel projection, but this does not give us a realistic representation of the
appearance of a 3D object.
2. Perspective Projection: For a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point.
This causes objects farther from the viewing position to be displayed smaller than objects of the same size that are closer to the viewer. For example, a distant aeroplane or ship appears small even though it is large.

Figure 5.8 Perspective projection of an object


In perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines. Scenes displayed using perspective projection appear more realistic, since this is the way that our eyes and camera lenses form images.
Object positions are transformed to the view plane along lines that converge to a point. In perspective projection, the farther an object is from the viewer, the smaller it appears. This provides the viewer with a depth cue, indicating which parts of the image are closer and which are farther away. The lines of projection are not parallel; they all converge at a single point, called the center of projection, which is the intersection of these converging lines (Figure 5.8).
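To make the difference concrete, the sketch below (a minimal Python illustration, not part of the original text) assumes the view plane is at z = d with the projection reference point at the origin, and compares an orthographic parallel projection with a perspective projection of the same points.

```python
# A minimal sketch (assumed setup) contrasting parallel and perspective projection.
# Parallel projection here is orthographic along the z-axis; perspective projection
# uses a projection reference point at the origin and a view plane at z = d.

def parallel_project(x, y, z):
    """Orthographic parallel projection: depth is simply dropped."""
    return x, y

def perspective_project(x, y, z, d=1.0):
    """Perspective projection onto the view plane z = d.
    Points farther from the viewer (larger z) are scaled down more."""
    return x * d / z, y * d / z

# The same offset point, once near and once far from the viewer:
print(parallel_project(2.0, 1.0, 5.0))      # (2.0, 1.0)   - unchanged
print(parallel_project(2.0, 1.0, 50.0))     # (2.0, 1.0)   - same size, no depth cue
print(perspective_project(2.0, 1.0, 5.0))   # (0.4, 0.2)   - near point
print(perspective_project(2.0, 1.0, 50.0))  # (0.04, 0.02) - far point appears smaller
```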
Intensity Cueing or Depth Cueing: Depth information is important so that we can easily identify, for a
particular viewing direction, which is the front and which is the back of displayed objects. There are
several ways in which we can include depth information in the two – dimensional representation of solid
objects.

A simple method for indicating depth with wireframe displays is to vary the intensity of objects
according to their distance from the viewing position. The lines closest to the viewing position are
displayed with highest intensities, and lines farther away are displayed with decreasing intensities.
Depth cueing is applied by choosing maximum and minimum intensity values and a range of distances
over which the intensities are to vary.
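The following small sketch (an assumed linear interpolation, with illustrative function and parameter names) shows how an intensity value can be chosen between the maximum and minimum intensities over a range of distances.

```python
# A sketch of depth cueing: line intensity is linearly interpolated between a chosen
# maximum and minimum intensity over a chosen range of distances from the viewer.

def depth_cue_intensity(distance, d_min, d_max, i_min=0.2, i_max=1.0):
    """Return the display intensity for a point at the given distance.
    The closest points (distance <= d_min) get i_max; the farthest get i_min."""
    if distance <= d_min:
        return i_max
    if distance >= d_max:
        return i_min
    t = (distance - d_min) / (d_max - d_min)   # 0 at the near limit, 1 at the far limit
    return i_max + t * (i_min - i_max)

print(depth_cue_intensity(1.0, d_min=1.0, d_max=10.0))   # 1.0  (nearest, brightest)
print(depth_cue_intensity(5.5, d_min=1.0, d_max=10.0))   # 0.6  (midway)
print(depth_cue_intensity(10.0, d_min=1.0, d_max=10.0))  # 0.2  (farthest, dimmest)
```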
Visible Line and Surface Identification:
➢ The simplest way to identify visible lines is to highlight them or to display them in a different color.
➢ Another method is to display the non-visible lines as dashed lines.
Surface Rendering: Surface rendering method is used to generate a degree of realism in a displayed
scene.
• Realism is attained in displays by setting the surface intensity of objects according to the lighting
conditions in the scene and surface characteristics.
• Lighting conditions include the intensity and positions of light sources and the background
illumination.
• Surface characteristics include degree of transparency and how rough or smooth the surfaces are to be.
Exploded and Cutaway Views: Exploded and cutaway views of objects can be used to show the internal structure and relationships of an object's parts.
• An alternative to exploding an object into its component parts is the cutaway view, which removes part of the visible surfaces to show the internal structure.
Three dimensional and stereoscopic views:
• Three-dimensional views can be obtained by reflecting a raster image from a vibrating, flexible mirror.
• The vibrations of the mirror are synchronized with the display of the scene on the CRT.
• Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye.
Differences between Parallel and Perspective Projection:

Parallel Projection:
• The center of projection is at infinity.
• The projectors are parallel to each other.
• Less realistic view, because there is no foreshortening.
• Preserves parallel lines.

Perspective Projection:
• The center of projection is a finite point.
• The projectors intersect at the center of projection.
• The visual effect is similar to the human visual system, which has perspective foreshortening.
• Does not preserve parallel lines.
Three Dimensional Transformation: Geometric transformations play a vital role in generating images of three-dimensional objects; with their help, the location of objects relative to one another can be easily expressed. Sometimes the viewpoint changes rapidly, or objects move in relation to each other, and for this a number of transformations may be carried out repeatedly.
1. Translation: Translation is the movement of an object from one position to another. It is done using a translation vector, which in 3D has three components instead of two, one each in the x, y, and z directions. The translation in the x-direction is represented by Tx, in the y-direction by Ty, and in the z-direction by Tz.
If a point P with coordinates (x, y, z) is translated, its coordinates after translation are (x1, y1, z1), where Tx, Ty, and Tz are the translation distances in the x, y, and z directions respectively.
x1 = x + Tx
y1 = y + Ty
z1 = z + Tz
Three-dimensional transformations are performed by transforming each vertex of the object. If an object
has five corners, then the translation will be accomplished by translating all five points to new locations.
Figure 1 below shows the translation of a point, and figure 2 shows the translation of a cube.
Matrix for translation:
In homogeneous coordinates, the translation of a point can be written as
| x1 |   | 1  0  0  Tx |   | x |
| y1 | = | 0  1  0  Ty | . | y |
| z1 |   | 0  0  1  Tz |   | z |
| 1  |   | 0  0  0  1  |   | 1 |
The point shown in the figure is (x, y, z); it becomes (x1, y1, z1) after translation, where Tx, Ty, Tz are the translation distances.
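As a sketch of the same operation in code (assuming NumPy is available and using the column-vector convention P′ = T · P used in this unit), the translation matrix above can be built and applied as follows:

```python
# A minimal sketch of 3D translation using the homogeneous 4x4 matrix shown above.
import numpy as np

def translation_matrix(tx, ty, tz):
    """Build the 4x4 homogeneous translation matrix T(Tx, Ty, Tz)."""
    return np.array([[1, 0, 0, tx],
                     [0, 1, 0, ty],
                     [0, 0, 1, tz],
                     [0, 0, 0, 1]], dtype=float)

p = np.array([2.0, 3.0, 4.0, 1.0])       # point (x, y, z) in homogeneous form
t = translation_matrix(5.0, -1.0, 2.0)   # Tx = 5, Ty = -1, Tz = 2
print(t @ p)                             # [7. 2. 6. 1.] -> (x1, y1, z1) = (7, 2, 6)
```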
2. Rotation: Rotation moves an object through an angle about an axis. The movement can be clockwise or anticlockwise. 3D rotation is more complex than 2D rotation: for 2D we describe only the angle of rotation, but for 3D both the angle of rotation and the axis of rotation are required. The axis can be the x, y, or z axis.
The following figures show rotation about the x, y, and z axes.

Positive rotation angles produce counterclockwise rotations about a coordinate axis, if we are looking along the positive half of the axis towards the coordinate origin, as in the figures above.
Coordinate axis Rotation:
➢ Z – axis rotation
➢ X – axis rotation
➢ Y – axis rotation
1. Z – axis rotation: The two-dimensional Z-axis rotation equations are easily extended to three dimensions. The transformation equations for rotation about the Z-axis are:
x′ = x cosθ − y sinθ
y′ = x sinθ + y cosθ ------- Equation 1
z′ = z
Parameter θ specifies the rotation angle.
In matrix form:
| x′ |   | cosθ  −sinθ   0   0 |   | x |
| y′ | = | sinθ   cosθ   0   0 | . | y |        P′ = Rz(θ) · P
| z′ |   |  0      0     1   0 |   | z |
| 1  |   |  0      0     0   1 |   | 1 |
Rotation about the Z-axis
Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y, and z in Equation 1, using the replacements
x → y → z → x
2. X – axis rotation: Cyclically permuting the coordinates in Equation 1 gives the transformation equations for rotation about the X-axis:
y′ = y cosθ - z sinθ
z′ = y sinθ + z cosθ -------- Equation 2
x′ = x
The matrix form is:
| x′ |   | 1    0      0    0 |   | x |
| y′ | = | 0   cosθ  −sinθ  0 | . | y |        P′ = Rx(θ) · P
| z′ |   | 0   sinθ   cosθ  0 |   | z |
| 1  |   | 0    0      0    1 |   | 1 |
Rotation about the X-axis
3. Y – axis rotation: Cyclically permuting the coordinates in Equation 2 gives the transformation equations for rotation about the Y-axis:
z′ = z cosθ - x sinθ
x′ = z sinθ + x cosθ -------- Equation 3
y′ = y
The matrix form is:
| x′ |   |  cosθ   0   sinθ   0 |   | x |
| y′ | = |   0     1    0     0 | . | y |        P′ = Ry(θ) · P
| z′ |   | −sinθ   0   cosθ   0 |   | z |
| 1  |   |   0     0    0     1 |   | 1 |
Rotation about the Y-axis
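The three rotation matrices can be sketched in code as follows (assuming NumPy; angles in radians; column-vector convention P′ = R · P). The example rotates a point on the x-axis by 90 degrees about the z-axis.

```python
# A short sketch of the coordinate-axis rotation matrices Rz, Rx, Ry (Equations 1-3).
import numpy as np

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def rotate_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

def rotate_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

p = np.array([1.0, 0.0, 0.0, 1.0])
print(rotate_z(np.pi / 2) @ p)   # ~[0, 1, 0, 1]: the x-axis point is rotated onto the y-axis
```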


3. Scaling: Scaling is used to change the size of an object. The size can be increased or decreased. The
three scaling factors required are Sx, Sy and Sz.
Sx=Scaling factor in x- direction
Sy=Scaling factor in y-direction
Sz=Scaling factor in z-direction
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate
origin can be written as:
| x′ |   | Sx   0    0   0 |   | x |
| y′ | = | 0    Sy   0   0 | . | y |        or   P′ = S · P
| z′ |   | 0    0    Sz  0 |   | z |
| 1  |   | 0    0    0   1 |   | 1 |
where the scaling parameters Sx, Sy, and Sz may be assigned any positive values, so that
x′ = x · Sx,   y′ = y · Sy,   z′ = z · Sz
Scaling an object with transformation changes the size of the object and repositions the object relative to
the coordinate origin.
Scaling of an object relative to a fixed point: The following steps are performed when scaling an object about a fixed point (a, b, c):
1. Translate the fixed point to the origin.
2. Scale the object relative to the origin.
3. Translate the object back to its original position.
Note: If all scaling factors are equal (Sx = Sy = Sz), the scaling is called uniform scaling. If scaling is done with different scaling factors, it is called differential scaling.
In figure (a) the fixed point (a, b, c) and the object to be scaled are shown; the steps are shown in figures (b), (c) and (d).
The matrix representation for an arbitrary fixed-point scaling can be expressed as the concatenation of these translate-scale-translate transformations:
T(a, b, c) · S(Sx, Sy, Sz) · T(−a, −b, −c) =
| Sx   0    0   (1 − Sx)·a |
| 0    Sy   0   (1 − Sy)·b |
| 0    0    Sz  (1 − Sz)·c |
| 0    0    0        1     |
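A short sketch of this fixed-point scaling as a translate-scale-translate concatenation is given below (NumPy assumed; helper names are illustrative). Note that the fixed point itself is left unchanged while other points move relative to it.

```python
# A sketch of scaling about a fixed point (a, b, c). Matrices multiply right-to-left,
# so the rightmost matrix is applied to the point first.
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def scale_about_point(sx, sy, sz, a, b, c):
    # 1. translate (a, b, c) to the origin, 2. scale, 3. translate back
    return translate(a, b, c) @ scale(sx, sy, sz) @ translate(-a, -b, -c)

m = scale_about_point(2.0, 2.0, 2.0, a=1.0, b=1.0, c=1.0)
print(m @ np.array([1.0, 1.0, 1.0, 1.0]))  # fixed point stays at (1, 1, 1)
print(m @ np.array([2.0, 2.0, 2.0, 1.0]))  # (3, 3, 3): distances from the fixed point doubled
```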
Polygon Surfaces: The boundary representation for a three – dimensional object is a set of surface
polygons that enclose the object interior. Graphics systems store all object descriptions as sets of surface
polygons. This simplifies and speeds up the surface rendering and display of objects. All surfaces are
described with linear equations. The surface of an object is represented as a polygon mesh. (The wire
frame model is called a polygonal net or polygonal mesh.)

The wireframe outline can be displayed quickly to give a general indication of the surface structure. The polygon-mesh approximation can be improved by dividing the surface into smaller polygons.
Polygon Tables: The graphics package organizes the polygon surface data into tables. The table may contain
geometric, topological and attribute properties.

As information for each polygon is input, the data are placed into tables that are to be used in the subsequent
processing, display, and manipulation of the object in a scene.

Polygon data tables can be organized into two groups:

➢ Geometric tables

➢ Attribute tables

Geometric data tables: These contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces. To store the geometric data, three lists are created:

1) A Vertex table: Stores the coordinate values of each vertex in the object.

2) An Edge table: Contains pointers back into the vertex table to identify the two endpoint vertices of each edge.

3) A polygon table: Defines a polygon by providing pointers to the edges that make up the polygon.
Geometric data table representing two adjacent polygon surfaces.
Attribute data table: Contains information for an object and parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics.
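A minimal sketch of these tables for two adjacent polygons is shown below; the identifiers (V1, E1, S1, ...) and the attribute fields are illustrative assumptions, not a standard API.

```python
# Vertex, edge, polygon, and attribute tables for two adjacent triangles sharing edge E3.

vertex_table = {                      # vertex id -> coordinates
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
    "V4": (0.0, 1.0, 1.0),
}

edge_table = {                        # edge id -> pointers to its two endpoint vertices
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),
    "E4": ("V3", "V4"),
    "E5": ("V4", "V1"),
}

polygon_table = {                     # surface id -> pointers to its edges
    "S1": ["E1", "E2", "E3"],
    "S2": ["E3", "E4", "E5"],         # S1 and S2 share edge E3
}

attribute_table = {                   # per-surface attribute data
    "S1": {"transparency": 0.0, "reflectivity": 0.4, "texture": "matte"},
    "S2": {"transparency": 0.5, "reflectivity": 0.1, "texture": "glass"},
}

# Recover the vertex coordinates of surface S1 by following the pointers:
for edge in polygon_table["S1"]:
    v_start, v_end = edge_table[edge]
    print(edge, vertex_table[v_start], vertex_table[v_end])
```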
OCTREES: Octrees are the hierarchical tree structures used to represent solid objects in graphics
systems. An octree is a tree data structure in which each internal node has exactly eight children. Octrees
are most often used to partition a three dimensional space by recursively subdividing it into eight
octants. Octree representations are used in Medical imaging and other applications for an object cross
section view.
Quadtree Encoding: The octree encoding procedure for a 3D space is an extension of an encoding
scheme for 2D space, called quadtree encoding.
➢ A quadtree divides a two-dimensional region (usually a square) into quadrants.
➢ Each node in the quadtree has four data elements, one for each of the quadrants in the region.
➢ If all pixels within a quadrant have the same color (a homogeneous quadrant), the corresponding data element in the node stores that color.
➢ Suppose all pixels in quadrant 2 of the figure shown below are found to be red. The color code for red is then placed in data element 2 of the node.

(Figure: a square region divided into quadrants 0, 1, 2, and 3, and the representative quadtree node with its four data elements 0-3.)
➢ Otherwise, the quadrant is said to be heterogeneous, and that quadrant is itself divided into quadrants
as shown below:

Region of two dimensional space with two levels of quadrant division.


➢ The corresponding data element in the node now flags the quadrant as heterogeneous and stores the
pointer to the next node in the quadtree.
➢ An algorithm for generating a quadtree tests pixel - intensity values and sets up the quadtree nodes
accordingly.
➢ If each quadrant in the original space has a single color specification, the quadtree has only one node.
➢ For a heterogeneous region of space, the successive subdivisions into quadrants continue until all
quadrants are homogeneous.
➢ The figure above shows a quadtree representation for a region containing one area with a solid color
that is different from the uniform color specified for all other areas in the region.
➢ Quadtree encodings provide considerable savings in storage when large color areas exist in a region
of space.
Quadtree representations for a region containing one foreground color pixel on a solid background
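The quadtree encoding procedure can be sketched as a small recursive function, as below. The quadrant numbering and the sample image are illustrative assumptions; a homogeneous quadrant stores its color, and a heterogeneous quadrant is subdivided further.

```python
# A sketch of quadtree encoding for a square 2D pixel region.

def build_quadtree(image, x, y, size):
    """image is a square 2D list of color codes; (x, y) is the top-left corner
    of the region and size its side length (a power of two)."""
    colors = {image[y + j][x + i] for j in range(size) for i in range(size)}
    if len(colors) == 1:                       # homogeneous quadrant: store its color
        return colors.pop()
    half = size // 2                           # heterogeneous: subdivide into 4 quadrants
    return [build_quadtree(image, x,        y,        half),   # quadrant 0 (top-left)
            build_quadtree(image, x + half, y,        half),   # quadrant 1 (top-right)
            build_quadtree(image, x,        y + half, half),   # quadrant 2 (bottom-left)
            build_quadtree(image, x + half, y + half, half)]   # quadrant 3 (bottom-right)

image = [["red", "red",   "blue", "blue"],
         ["red", "red",   "blue", "blue"],
         ["red", "red",   "red",  "red"],
         ["red", "green", "red",  "red"]]

print(build_quadtree(image, 0, 0, 4))
# ['red', 'blue', ['red', 'red', 'red', 'green'], 'red']
```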
Octree Encoding: An Octree encoding scheme divides regions of three – dimensional space (usually
cubes) into octants and stores eight data elements in each node of the tree as shown below.
The elements of a three-dimensional space are called volume elements, or voxels. When all voxels in an octant are of the same type, this type value is stored in the corresponding data element of the node.
Empty regions of space are represented by voxel type “void”. Any heterogeneous octant is subdivided
into octants, and the corresponding data element in the node points to the next node in the octree.
Procedures for generating octrees are similar to those for quadtrees. Each node in the octree can now
have from zero to eight immediate descendants.
Algorithm for generating Octrees:
Step 1: Accept object definitions (such as a polygon mesh or solid geometry constructions).
Step 2: Use the minimum and maximum coordinate values of the object.
Step 3: The object in the three – dimensional space is tested, octant by octant, to generate the octree
representation.
Step 4: The established octree representation for the solid object can be applied with various
manipulation routines.
Step 5: Set operations - Union, Intersection, or Difference - can be applied to two octree representations.
Step 6: For a union operation, a new octree is constructed with the combined regions of both objects. Intersection operations are performed by selecting the common regions of overlap in the two octrees. A difference operation gives the region occupied by one object but not the other.
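As a rough sketch of Steps 5-6, the fragment below represents an octree node as a list of eight children (leaves hold voxel types such as "void" or "solid") and forms the union of two octrees; the representation and names are assumptions for illustration only.

```python
# A sketch of an octree node and a union set operation on two octrees.

EMPTY = "void"

def octree_union(a, b):
    """Combine two octree representations occupying the same region of space."""
    if a == EMPTY:
        return b
    if b == EMPTY:
        return a
    if not isinstance(a, list) and not isinstance(b, list):
        return a        # both octants homogeneous and filled: keep a's voxel type
    # promote homogeneous nodes to eight identical children so subdivisions line up
    a_children = a if isinstance(a, list) else [a] * 8
    b_children = b if isinstance(b, list) else [b] * 8
    return [octree_union(ca, cb) for ca, cb in zip(a_children, b_children)]

# one object occupies octant 0, the other occupies octant 3:
a = ["solid", EMPTY, EMPTY, EMPTY, EMPTY, EMPTY, EMPTY, EMPTY]
b = [EMPTY, EMPTY, EMPTY, "solid", EMPTY, EMPTY, EMPTY, EMPTY]
print(octree_union(a, b))
# ['solid', 'void', 'void', 'solid', 'void', 'void', 'void', 'void']
```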
Visible – surface or hidden surface identification is carried out by searching the octants from front to
back. The first object detected is visible, so that information can be transferred to a quadtree
representation for display.
CURVES and SURFACES: Smooth curves and surfaces must be generated in many computer graphics
applications. There are several approaches to modelling curved surfaces. One is an analog of polyhedral
models. Instead of using polygons, we model an object by using small curved surface patches placed
next to each other. Another approach is to use surfaces that define solid objects, such as sphere,
cylinders and cones. A model can be constructed with these solid objects using building blocks. This
process is called solid modelling.
Properties for designing curves:
1. Control Points: Control points, or knots, are a way to control the shape of a curve. The idea is to locate points through which the curve must pass, or points that control the curve's shape in a predictable way. A curve is said to interpolate the control points if it passes through them.
2. Multiple values: In general, a curve is not a graph of a single valued function, irrespective of choice
of coordinate system.

3. Axis independence: The shape of an object must not change when the control points are calculated in
a different coordinate system. For example, if the control points are rotated 90 degrees, the curve should
rotate 90 degree but not change shape.
4. Global or Local Control: As a designer manipulates a control point, a curve may change shape only in the region near that control point (local control), or it may change shape throughout (global control).
5. Variation – diminishing property: A curve that oscillates about its control points is usually
undesirable. Variation diminishing curves tend to smooth out a sequence of control points.
6. Versatility: A curve representation should not be limited to a small variety of shapes. A flexible way for the designer to control the versatility of a curve representation is to add or remove control points.
7. Order of Continuity: A complex shape is created by joining several curves together end – to – end.
The order of the curve determines the minimum number of control points necessary to define the curve.
The order of the curve also affects how the curve behaves when a control point is moved.
No continuity: The curves do not meet at all.
Zero order continuity: It means the two curves meet.
First order continuity: it requires the curve to be tangent at the point of intersection.
Second order continuity: It requires that curvature must be the same.
The above requirements are the basis for the Bezier and B-spline formulations of curves and surfaces.
BEZIER CURVES: Bezier curves were widely publicized in 1962 by the French engineer Pierre Bezier, who used them in the body design of Renault automobiles.
The three-dimensional location of a control point Pi is (Xi, Yi, Zi). The figure below shows a Bezier curve in the plane; the Z coordinate of each control point is zero. The curve shown has six control points.
A Bezier curve is a polynomial of degree one less than the number of control points used: three points generate a parabola, four points a cubic curve, and so on.
Properties of Bezier curves:
A Bezier curve always passes through the first and last control points.
It lies within the convex polygon boundary (convex hull) of the control points. This follows from the properties of the Bezier blending functions: they are all positive and their sum is always 1, i.e.
B0,n(u) + B1,n(u) + ... + Bn,n(u) = 1

CUBIC BEZIER CURVES:


Cubic Bezier curves are generated with four control points. The four blending functions for cubic Bezier curves are obtained by substituting n = 3 into the general Bezier blending-function equation.
Plots of the four cubic Bezier blending function are given in the figure below. The form of the blending function
determines how the control points influence the shape of the curve for values of parameter u over the range from 0
to 1.
At u = 0, the function B0,3 = 1 and all other blending functions are zero.
At u = 1, the function B3,3 = 1. Thus the cubic Bezier curve will always pass through the control points P0 and P3. B1,3 and B2,3 control the shape of the curve at intermediate values of u, so that the resulting curve tends towards points P1 and P2.
Blending function B1,3 is maximum at u=1/3 and B2,3 is maximum at u=2/3.
Note: The Bezier curves do not allow for local control of the curve shape. If we reposition any one of the control
points, the entire curve will be affected.
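A small sketch of a cubic Bezier curve built from the four blending functions B0,3 ... B3,3 is given below; the control points are illustrative. It confirms that the curve starts at P0, ends at P3, and is pulled toward P1 and P2 in between.

```python
# A sketch of cubic Bezier evaluation from the Bernstein blending functions (n = 3).

def cubic_blending(u):
    """Return (B0,3(u), B1,3(u), B2,3(u), B3,3(u))."""
    return ((1 - u) ** 3,
            3 * u * (1 - u) ** 2,
            3 * u ** 2 * (1 - u),
            u ** 3)

def bezier_point(p0, p1, p2, p3, u):
    """Blend the four control points with the blending functions at parameter u."""
    b = cubic_blending(u)
    return tuple(b[0] * p0[i] + b[1] * p1[i] + b[2] * p2[i] + b[3] * p3[i]
                 for i in range(len(p0)))

p0, p1, p2, p3 = (0, 0), (1, 2), (3, 2), (4, 0)
print(cubic_blending(0.0))                  # (1.0, 0.0, 0.0, 0.0) -> curve starts at P0
print(cubic_blending(1.0))                  # (0.0, 0.0, 0.0, 1.0) -> curve ends at P3
print(bezier_point(p0, p1, p2, p3, 0.5))    # (2.0, 1.5): midpoint pulled toward P1 and P2
```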
Hidden surface removal: In 3D computer graphics, hidden surface detection (also known as hidden surface removal, HSR) or visible surface detection (VSD) is the process used to determine which surfaces, and parts of surfaces, are not visible from a certain viewpoint.
Hidden-line and hidden-surface algorithms share several characteristics:
➢ They use some form of geometric sorting to distinguish the visible parts of an object from those that are hidden.
➢ They capitalize on various forms of coherence in order to reduce the computation required to generate an image.
Coherence: Coherence means taking advantage of the regularities that exist in a scene, i.e. of properties that remain locally constant. For example, when we move from one polygon of an object to another polygon of the same object, the color and shading often remain nearly unchanged.
Types of Coherence
1. Edge coherence
2. Object coherence
3. Face coherence
4. Area coherence
5. Depth coherence
6. Scan line coherence
7. Frame coherence
8. Implied edge coherence
1. Edge coherence: The visibility of an edge changes only where it crosses another edge or penetrates a visible face.
2. Object coherence: Each object is considered separately from the others. In object coherence the comparison is done using whole objects instead of edges or vertices: if object A is entirely farther away than object B, there is no need to compare their edges and faces.
3. Face coherence: Faces or polygons are generally small compared with the size of the image, so surface properties computed for one part of a face can often be applied across the whole face.
4. Area coherence: A group of adjacent pixels is often covered by the same visible face.
5. Depth coherence: Adjacent parts of the same surface are typically at similar depths. Once the depth of a surface at one point is calculated, the depth of the remaining points on the surface can often be determined by a simple difference equation.
6. Scan line coherence: The set of surfaces intersected by one scan line usually differs little from the set intersected by the next, so the intercepts computed for one scan line can be reused for the next.
7. Frame coherence: This is used for animated objects, when there is little change in the image from one frame to the next.
8. Implied edge coherence: If one face penetrates another, the line of intersection can be determined from two points of intersection.
Types of hidden surface detection algorithms:
1. Object space methods 2. Image space methods
1. Object space algorithm:
➢ It compares objects and parts of objects to each other within the scene definition to determine which
surfaces are visible.
➢ It performs geometric calculations with as much precision as possible. Since the precision of the solution is greater than that of a display device, the image can be displayed enlarged many times without losing accuracy.
➢ The computation time will tend to grow with the number of objects in the scene, whether visible or
not.
➢ Object space methods are used in hidden line algorithm.
➢ Object space method is used in Back Face Detection.
2. Image - space algorithm:
➢ In an image – space algorithm, visibility is decided point by point at each pixel position on the
projection plane.
➢ It performs calculations with only enough precision to match the resolution of the display screen used
to present the image.
➢ The computation time will tend to grow with the complexity of the visible parts of the image.
➢ The cost of an image-space algorithm grows more slowly than that of object-space algorithms as the complexity of the scene increases.
➢ Image space methods are used in hidden surface algorithm.
➢ Image space method is used in Depth Buffer Method.
Back Face Detection / Removal: A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the surface, the polygon must be a back face: we are inside that face and cannot see the front of it from our viewing position.
We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian
components A,B,C.
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if
V · N > 0
Furthermore, if object descriptions are converted to projection coordinates and the viewing direction is parallel to the viewing z-axis, then
V = (0, 0, Vz) and V · N = Vz · C
so that we only need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with viewing direction along the negative Zv axis, the polygon is a
back face if C < 0.

Similar methods can be used in packages that employ a left-handed viewing system. In these packages,
plane parameters A, B, C and D can be calculated from polygon vertex coordinates specified in a
clockwise direction. Back faces have normal vectors that point away from the viewing position and are
identified by C >= 0 when the viewing direction is along the positive Zv axis. By examining parameter
C for the different planes defining an object, we can immediately identify all the back faces.
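The back-face test can be sketched as below for a right-handed viewing system, assuming the surface normal N = (A, B, C) points outward from the object; with the viewing direction along the negative z-axis, the test reduces to checking the sign of C.

```python
# A small sketch of the back-face test V . N > 0.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_back_face(normal, view_dir):
    """normal = (A, B, C), the outward plane normal; view_dir = viewing direction vector."""
    return dot(view_dir, normal) > 0

# Viewing along the negative z-axis (looking toward -z):
view = (0.0, 0.0, -1.0)
print(is_back_face((0.0, 0.0, 1.0), view))   # False: normal points toward the viewer (front face)
print(is_back_face((0.0, 0.0, -1.0), view))  # True: normal points away (back face, i.e. C < 0)
```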
Depth Buffer Method: It is an image-space approach. The basic idea is to test the Z-depth of each
surface to determine the closest visible surface.
In this method each surface is processed separately, one pixel position at a time across the surface. The depth values recorded for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.
The depth buffer stores a depth value for each (x, y) position as surfaces are processed, with 0 ≤ depth ≤ 1.
The frame buffer stores the intensity (color) value at each (x, y) position.
The z-coordinates are usually normalized to the range [0, 1]: a z value of 0 indicates the back clipping plane and a value of 1 indicates the front clipping plane.
Algorithm:
Step 1: Set the buffer values:
    depthbuffer(x, y) = 0
    framebuffer(x, y) = background color
Step 2: Process each polygon, one at a time.
    For each projected (x, y) pixel position of a polygon, calculate the depth z.
    If z > depthbuffer(x, y), the polygon is closer to the observer than any surface already recorded for this pixel:
        compute the surface color,
        set depthbuffer(x, y) = z,
        set framebuffer(x, y) = surfacecolor(x, y).
Step 3: If z < depthbuffer(x, y), the point is behind a surface already recorded for this pixel, and the polygon is ignored at that position.
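A compact sketch of this algorithm is given below, using the convention above (normalized depth 0 at the back clipping plane, 1 at the front, so a larger z means closer). The surface samples are illustrative data, not a rendering pipeline.

```python
# A sketch of the depth-buffer (z-buffer) method on a tiny frame.

WIDTH, HEIGHT = 4, 3
BACKGROUND = "black"

depth_buffer = [[0.0 for _ in range(WIDTH)] for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND for _ in range(WIDTH)] for _ in range(HEIGHT)]

surfaces = [
    # each surface: a list of precomputed (x, y, depth z, color) pixel samples
    [(1, 1, 0.40, "red"), (2, 1, 0.40, "red")],
    [(2, 1, 0.70, "blue"), (3, 1, 0.20, "blue")],
]

for surface in surfaces:                       # surfaces may be processed in any order
    for x, y, z, color in surface:
        if z > depth_buffer[y][x]:             # this surface is closer than what is stored
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color

print(frame_buffer[1])   # ['black', 'red', 'blue', 'blue']
```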
Advantages:
➢ It is easy to implement.
➢ It reduces the speed problem if implemented in hardware.
➢ It processes one object at a time.
Disadvantages:
➢ It requires large memory.
➢ It is time consuming process.
Scan-Line Method: An image-space method used for removing hidden surfaces.
➢ It is an extension of scan-line polygon filling (with multiple surfaces).
➢ The idea is to intersect each polygon with a particular scan line and solve the hidden-surface problem for just that scan line.
➢ Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane.
➢ When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
➢ The cost of processing a scene is roughly proportional to its depth complexity.
➢ It is an efficient method for shallowly-occluded scenes.
➢ Polygons may need to be split.
Two important tables, edge table and polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each
line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed. The
active list stores only those edges that cross the scan-line in order of increasing x. Also a flag is set for
each surface to indicate whether a position along a scan-line is either inside or outside the surface.
Scan-lines are processed from left to right. At the leftmost boundary of a surface, the surface flag is
turned on, and at the rightmost boundary, it is turned off.
The figure below illustrates the scan-line method for locating visible portions of surfaces for pixel positions along the line.
Dashed lines indicate the boundaries of hidden surfaces.
➢ The list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For
positions along scan line 1 between edges AB and BC, only the flag for surface S1 is on.
➢ Between edges EH and FG, only the flag for surface S2 is on. The intensity values in the other areas are set to the background intensity, since there are no other positions of intersection along scan line 1.
➢ For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on, but between edges EH and BC the flags for both surfaces S1 and S2 are on.
➢ Depth calculations are made using the plane coefficients of the two surfaces.
Example: if the depth of surface S1 is assumed to be less than that of S2, intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered.
Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
➢ Scan line 3 has the same list of edges as scan line 2. Since there are no changes in line intersections, depth calculations between edges EH and BC are not necessary.
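The depth comparison made along a scan line can be sketched as below: once the active edge list has determined which surface flags are on at a pixel, the depth of each such surface is recovered from its plane coefficients and the nearest surface is selected. The names and the depth convention (view plane at z = 0, depth increasing away from it) are illustrative assumptions.

```python
# A simplified sketch of the per-pixel depth comparison in the scan-line method.
# From Ax + By + Cz + D = 0 the depth is z = -(A*x + B*y + D) / C.

def depth_at(plane, x, y):
    a, b, c, d = plane
    return -(a * x + b * y + d) / c

def visible_surface(active_surfaces, x, y):
    """active_surfaces: {name: (A, B, C, D)} for surfaces whose flag is on at this pixel.
    Returns the surface nearest to the view plane (smallest depth in this convention)."""
    return min(active_surfaces, key=lambda s: depth_at(active_surfaces[s], x, y))

planes = {"S1": (0.0, 0.0, 1.0, -2.0),   # the plane z = 2
          "S2": (0.0, 0.0, 1.0, -5.0)}   # the plane z = 5
print(visible_surface(planes, x=10, y=4))   # 'S1' is nearer, so its intensity is used
```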
