UNIT 5 - Visible Surface Detection Methods
For a realistic display of a scene, we have to identify those parts of the scene that are visible from a chosen
viewing position. The algorithms used for this purpose are referred to as visible-surface detection
methods or hidden-surface elimination methods.
Visible-surface detection algorithms are broadly classified according to whether they deal with
object definitions directly or with their projected images. These two approaches are called object-space
methods and image-space methods, respectively.
An object-space method compares objects and parts of objects to each other within the scene
definition to determine which surfaces, as a whole, we should label as visible. In an image-space algorithm,
visibility is decided point by point at each pixel position on the projection plane. Most visible-surface
algorithms use image-space methods, although object space methods can be used effectively to locate visible
surfaces in some cases. Line display algorithms, on the other hand, generally use object-space methods to
identify visible lines in wire frame displays, but many image-space visible-surface algorithms can be adapted
easily to visible-line detection.
We will examine four visible-surface detection methods:
1. Back-Face Detection
2. Depth-Buffer (Z-Buffer) Method
3. Scan-Line Method
4. Depth-Sorting Method
1. BACK-FACE DETECTION:
1. A fast and simple object-space method for identifying the back faces of a polyhedron is based on the
"inside-outside" test. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
2. When an inside point is along the line of sight to the surface, the polygon must be a back face (we are
inside that face and cannot see the front of it from our viewing position).
3. We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian
components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or "camera")
position, as shown in the figure, then this polygon is a back face if
V · N > 0
4. Furthermore, if object descriptions have been converted to projection coordinates and our viewing
direction is parallel to the viewing z_v axis, then V = (0, 0, Vz) and
V · N = Vz C
so that we only need to consider the sign of C, the z component of the normal vector N.
5. In a right-handed viewing system with viewing direction along the negative z_v axis, the polygon is a back
face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since our viewing
direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal
vector has a z-component value
C ≤ 0
6. In a left-handed viewing system with viewing direction along the positive z_v axis, the polygon is a back
face if C > 0. Also, we cannot see any face whose normal has z component C = 0, since our viewing
direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal
vector has a z-component value
C ≥ 0
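The test above reduces to a single dot product and a sign check. The following is a minimal sketch of it in Python; the vector representation and function names are illustrative assumptions, not part of the notes.

def is_back_face(normal, view_dir):
    # normal = (A, B, C) from the plane equation; view_dir = viewing vector V.
    # The polygon is a back face when V . N > 0.
    a, b, c = normal
    vx, vy, vz = view_dir
    return a * vx + b * vy + c * vz > 0

def is_back_face_rh(normal):
    # Right-handed viewing system, viewing along the negative z_v axis:
    # only the sign of C matters, and C <= 0 marks a back (or edge-on) face.
    return normal[2] <= 0

# Example: a face whose normal points away from a viewer looking along -z.
print(is_back_face((0.0, 0.0, -1.0), (0.0, 0.0, -1.0)))   # True
print(is_back_face_rh((0.0, 0.0, -1.0)))                  # True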
2. DEPTH-BUFFER METHOD (Z-BUFFER METHOD):
1. A commonly used image-space approach to detecting visible surfaces is the depth-buffer method,
which compares surface depths at each pixel position on the projection plane. This procedure is also
referred to as the z-buffer method, since object depth is usually measured from the view plane along
the z axis of a viewing system.
2. Each surface of a scene is processed separately, one point at a time across the surface. The method is
usually applied to scenes containing only polygon surfaces, because depth values can be computed
very quickly and the method is easy to implement. However, the method can also be applied to
nonplanar surfaces.
3. With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon
surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each
pixel position (x, y) on the view plane, object depths can be compared by comparing z values.
4. The figure below shows three surfaces at varying distances along the orthographic projection line from
position (x, y) in a view plane taken as the x_v y_v plane. Surface S1 is closest at this position, so its surface
intensity value at (x, y) is saved. As implied by the name of this method, two buffer areas are required:
a depth buffer, which stores a depth value for each (x, y) position, and a refresh buffer, which stores the
corresponding intensity value.
The algorithm can be summarized as follows:
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),
depth(x, y) = 0, refresh(x, y) = I_background (the background intensity)
2. For each position on each polygon surface, compare depth values to previously stored
values in the depth buffer to determine visibility.
a) Calculate the depth z for each (x,y) position on the polygon.
b) If z > depth(x, y), then set
depth(x, y) = z, refresh(x, y) = I_surf(x, y)
After all surfaces have been processed, the depth buffer contains depth values for the
visible surfaces and the refresh buffer contains the corresponding intensity values for those
surfaces.
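A minimal sketch of this procedure in Python follows. The polygon representation (a list of sampled (x, y, z, intensity) points per surface) and the buffer sizes are illustrative assumptions; as in the steps above, depth values are taken so that 0 is the farthest depth and larger z means closer to the viewer.

WIDTH, HEIGHT = 8, 8
I_BACKGROUND = 0.0

depth   = [[0.0] * WIDTH for _ in range(HEIGHT)]            # depth buffer
refresh = [[I_BACKGROUND] * WIDTH for _ in range(HEIGHT)]   # refresh buffer

def process_surface(points):
    # points: iterable of (x, y, z, intensity) samples on one polygon surface.
    for x, y, z, intensity in points:
        if z > depth[y][x]:          # new point is closer than the stored depth
            depth[y][x] = z
            refresh[y][x] = intensity

# Two overlapping surfaces; the second is closer (larger z) where they overlap.
process_surface([(x, 3, 0.4, 0.5) for x in range(0, 6)])
process_surface([(x, 3, 0.7, 0.9) for x in range(3, 8)])
print(refresh[3])   # columns 3..7 now hold 0.9, the intensity of the nearer surface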
We first determine the y-coordinate extents of each polygon and process the surface from the topmost scan
line down to the bottom scan line. Depth values for each surface position (x, y) are calculated from the plane
equation of the surface as
z = (-Ax - By - D) / C
and along a scan line the depth of the next position (x + 1, y) follows incrementally as z' = z - A/C.
Starting at a top vertex, we can recursively calculate x positions down a left edge of the polygon as
x' = x - 1/m, where m is the slope of the edge.
Depth values down the edge are then obtained recursively as
z' = z + (A/m + B) / C
If we are processing down a vertical edge, the slope is infinite and the recursive calculations reduce to
z' = z + B / C
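These incremental updates can be checked directly against the plane equation; the sketch below does so in Python with arbitrary sample coefficients (the values of A, B, C, D and m are illustrative assumptions).

# Plane equation A*x + B*y + C*z + D = 0 with sample coefficients.
A, B, C, D = 1.0, 2.0, 4.0, -20.0

def depth(x, y):
    # Depth from the plane equation: z = (-A*x - B*y - D) / C.
    return (-A * x - B * y - D) / C

# Along a scan line: moving from x to x + 1 changes the depth by -A/C.
z = depth(2.0, 3.0)
assert abs((z - A / C) - depth(3.0, 3.0)) < 1e-9

# Down a non-vertical left edge with slope m: x' = x - 1/m, y' = y - 1,
# and the depth changes by (A/m + B) / C.
m = 2.0
assert abs((z + (A / m + B) / C) - depth(2.0 - 1.0 / m, 2.0)) < 1e-9

# Down a vertical edge only y changes, so the depth changes by B/C.
assert abs((z + B / C) - depth(2.0, 2.0)) < 1e-9
print("incremental depth updates match the plane equation")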
3. SCAN-LINE METHOD:
1. This method is an extension of the scan-line algorithm for filling polygon interiors.
2. This method is an example of an image-space method.
3. All polygons intersecting each scan line are processed from left to right; depth calculations are made
for each overlapping surface, and the intensity of the nearest surface is entered into the refresh buffer.
Polygon tables:
The following polygon tables are used to store the coordinate descriptions of the polygons along with their
surface properties:
Vertex table: contains all vertices and their coordinates.
Edge table: contains all edges along with
o The coordinate endpoints of each edge
o The slope of each edge
o Pointers into the surface table identifying the surfaces bounded by each edge
Surface (facet) table: contains all surfaces along with
o The coefficients of the plane equation for each surface
o Intensity information for the surface
o Pointers into the edge table
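These tables can be represented with a few small data structures. The sketch below is a minimal Python version for a single square surface S1; the field names and sample values are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class Edge:
    v_start: int                # index into the vertex table
    v_end: int                  # index into the vertex table
    slope: float                # slope of the edge in the projection plane
    surfaces: list = field(default_factory=list)   # surfaces bounded by this edge

@dataclass
class Surface:
    plane: tuple                # (A, B, C, D) plane-equation coefficients
    intensity: float
    edges: list = field(default_factory=list)      # indices into the edge table

vertex_table  = [Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(1, 1, 0), Vertex(0, 1, 0)]
edge_table    = [Edge(0, 1, 0.0, ["S1"]), Edge(1, 2, float("inf"), ["S1"]),
                 Edge(2, 3, 0.0, ["S1"]), Edge(3, 0, float("inf"), ["S1"])]
surface_table = {"S1": Surface((0, 0, 1, 0), 0.8, [0, 1, 2, 3])}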
4. For each scan line, maintain an active edge list and a flag for each surface:
Active edge list
o Contains only the edges that cross the current scan line
o Sorted in order of increasing x
Flag for each surface
o Indicates whether the current position is inside or outside the surface
o Turned on at the leftmost boundary of the surface
o Turned off at the rightmost boundary of the surface
Example:
For scan line 1, the active edge list contains the edges AB, BC, EH, and FG.
o Between AB and BC, only the flag for surface S1 is on, so no depth calculations are necessary and the
intensity for surface S1 is entered into the refresh buffer.
o Similarly, between EH and FG, only the flag for surface S2 is on.
For scan lines 2 and 3, the active edge list contains the edges AD, EH, BC, and FG.
o Between AD and EH, only the flag for S1 is on.
o Between EH and BC, the flags for both surfaces are on, so a depth calculation is needed; the intensities
for S1 are loaded into the refresh buffer until BC is reached. A simplified code sketch of this per-scan-line
processing is given below.
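The sketch below shows the per-scan-line idea in Python. The surface representation (one span with a constant depth per surface on the current scan line) is an illustrative simplification of the tables above, and here a smaller depth value means nearer to the view plane.

def scan_line(surfaces, width, background=" "):
    # surfaces: list of (name, x_left, x_right, depth) for the current scan line.
    line = [background] * width

    # Build the sorted list of edge crossings for this scan line.
    crossings = []
    for s in surfaces:
        name, x_left, x_right, depth = s
        crossings.append((x_left, s, True))     # left boundary: flag turned on
        crossings.append((x_right, s, False))   # right boundary: flag turned off
    crossings.sort(key=lambda c: c[0])

    active = []   # surfaces whose flag is currently on
    for i, (x, s, is_left) in enumerate(crossings):
        if is_left:
            active.append(s)
        else:
            active.remove(s)
        # Fill pixels up to the next crossing with the nearest active surface.
        x_next = crossings[i + 1][0] if i + 1 < len(crossings) else x
        if active:
            nearest = min(active, key=lambda surf: surf[3])
            for px in range(x, x_next):
                line[px] = nearest[0]
    return "".join(line)

# Two overlapping surfaces; "2" is nearer (smaller depth) where they overlap.
print(scan_line([("1", 2, 10, 0.8), ("2", 6, 14, 0.5)], 16))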
4. DEPTH-SORTING METHOD:
1. This algorithm involves both image-space and object-space operations:
o Sorting operations are carried out in both image and object space
o The scan conversion of the polygon surfaces is performed in image space
2. Basic functions:
o Surfaces are sorted in order of decreasing depth
o Surfaces are scan-converted in order, starting with the surface of greatest depth
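A minimal sketch of this back-to-front ordering in Python follows. The surface representation is an illustrative assumption, and the reordering tests for overlapping surfaces that the full depth-sorting method performs are omitted; here a larger depth value means farther from the view plane.

frame = {}   # (x, y) -> intensity, standing in for the refresh buffer

def scan_convert(surface):
    name, depth, pixels, intensity = surface
    for xy in pixels:
        frame[xy] = intensity    # later (nearer) surfaces overwrite earlier ones

surfaces = [
    ("S1", 0.9, [(x, 3) for x in range(0, 6)], 0.4),   # farthest surface
    ("S2", 0.3, [(x, 3) for x in range(4, 9)], 0.8),   # nearest surface
]

# Sort in order of decreasing depth, then scan-convert starting with the
# surface of greatest depth.
for s in sorted(surfaces, key=lambda s: s[1], reverse=True):
    scan_convert(s)

print(frame[(5, 3)])   # 0.8: the nearer surface S2 wins where the two overlap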