Chapter 5 Visible Surface Detection
Visible surface detection is the process of identifying those parts of a scene that are visible from a chosen viewing position. There are numerous algorithms for the efficient identification of visible objects in different types of applications. These algorithms are referred to as visible-surface detection methods; sometimes they are also referred to as hidden-surface elimination methods.
The two names describe complementary views of the same task:
To identify those parts of a scene that are visible from a chosen viewing position (visible-surface detection methods).
Surfaces that are obscured by other opaque (solid) surfaces along the line of sight are invisible to the viewer and so can be eliminated (hidden-surface elimination methods).
Visible surface detection methods are broadly classified according to whether they deal with objects or with their projected images.
Object-Space Methods (OSM):
Algorithms that determine which parts of the shapes are to be rendered, working in 3D coordinates.
Methods based on comparing objects' 3D positions and dimensions with respect to the viewing position.
For N objects, this may require on the order of N × N comparison operations.
Efficient for a small number of objects, but difficult to implement.
Examples: depth sorting and area-subdivision methods.
Methods:
1st Method: Back-Face Detection
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0 (from the plane equation).
When an inside point is along the line of sight to the surface, the polygon must be a back face: we are inside that face and cannot see the front of it from our viewing position.
Q. Find the visibility of surface AED in a rectangular pyramid where an observer is at P(5, 5, 5).
Q. Find the visibility of surface AED in a rectangular pyramid where an observer is at P(0, 0.5, 0).
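A minimal C sketch of this test follows; the plane coefficients and observer position in main are illustrative placeholders (the pyramid figure for the exercises is not reproduced here, so substitute the actual plane of surface AED).

    #include <stdio.h>

    /* Back-face test: evaluate the plane equation A*x + B*y + C*z + D
       at the viewing position. If the result is negative, the viewer is
       on the "inside" of the face, so the face is a back face (hidden);
       otherwise the face is potentially visible. */
    int isBackFace(double A, double B, double C, double D,
                   double px, double py, double pz)
    {
        return (A * px + B * py + C * pz + D) < 0.0;
    }

    int main(void)
    {
        /* Placeholder coefficients: plane x = 1, observer P(5, 5, 5). */
        double A = 1.0, B = 0.0, C = 0.0, D = -1.0;
        double px = 5.0, py = 5.0, pz = 5.0;

        printf("Surface is %s\n",
               isBackFace(A, B, C, D, px, py, pz)
                   ? "hidden (back face)" : "visible");
        return 0;
    }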
Disadvantage: back-face detection can only identify the faces of an object that point away from the viewer; by itself it cannot handle surfaces that are partially or fully obscured by other objects, so additional visibility tests are needed for general scenes.
Depth-Buffer (Z-Buffer) Method:
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the stored value, the new depth is stored, and the surface intensity at that position is determined and placed in the same (x, y) location of the refresh buffer.
Depth values for a surface position (x, y) are calculated from the plane equation:
z = (−Ax − By − D) / C          (i)
The depth z′ of the next position (x + 1, y) along the scan line is then:
z′ = z − A/C          (ii)
The ratio −A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition.
On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line. Depth values at each successive position across the scan line are then calculated by equation (ii).
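A compact C sketch of the depth-buffer loop using equations (i) and (ii); the window size, the two hard-coded planes, and the flat intensities are illustrative assumptions, not a full polygon-table implementation.

    #include <stdio.h>

    #define WIDTH  8
    #define HEIGHT 4

    /* A flat-shaded surface covering the whole window:
       plane A*x + B*y + C*z + D = 0 with a single intensity. */
    typedef struct { double A, B, C, D, intensity; } Surface;

    int main(void)
    {
        double depth[HEIGHT][WIDTH], refresh[HEIGHT][WIDTH];
        int x, y, s;

        /* Depth buffer to 0 (minimum depth); refresh buffer to the
           background intensity. */
        for (y = 0; y < HEIGHT; y++)
            for (x = 0; x < WIDTH; x++) {
                depth[y][x]   = 0.0;
                refresh[y][x] = 0.1;
            }

        Surface surfaces[2] = {
            {  0.00, 0.0, 1.0, -0.30, 0.5 },  /* plane z = 0.3         */
            { -0.05, 0.0, 1.0, -0.20, 0.9 },  /* plane z = 0.2 + 0.05x */
        };

        for (s = 0; s < 2; s++) {
            Surface *p = &surfaces[s];
            for (y = 0; y < HEIGHT; y++) {
                /* Depth at the left edge, from equation (i). */
                double z = (-p->A * 0.0 - p->B * y - p->D) / p->C;
                for (x = 0; x < WIDTH; x++) {
                    if (z > depth[y][x]) {   /* greater depth = closer */
                        depth[y][x]   = z;
                        refresh[y][x] = p->intensity;
                    }
                    z -= p->A / p->C;        /* equation (ii)          */
                }
            }
        }

        for (y = 0; y < HEIGHT; y++) {       /* show the result        */
            for (x = 0; x < WIDTH; x++)
                printf("%.1f ", refresh[y][x]);
            printf("\n");
        }
        return 0;
    }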
A-Buffer Method:
The A-buffer (anti-aliased, area-averaged, accumulation buffer) is an
extension of the ideas in the depth-buffer method (other end of the alphabet
from "z-buffer").
A drawback of the depth-buffer method is that it deals only with opaque (solid) surfaces and cannot accumulate intensity values for more than one transparent surface.
The A-buffer method is an extension of the depth-buffer method so that each
position in the buffer can reference a linked list of surfaces. Thus, more than
one surface intensity can be taken into consideration at each pixel position,
and object edges can be anti-aliased.
If the depth field is >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area (the single-surface case). The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as in the figure.
If the depth field is negative, this indicates multiple-surface contributions to the pixel intensity (the multiple-surface case). The intensity field then stores a pointer to a linked list of surface data, as in the figure.
Data for each surface in the linked list includes: RGB intensity components, opacity parameter (percent of transparency), depth, percent of area coverage, surface identifier, other surface-rendering parameters, and a pointer to the next surface.
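One possible C layout for an A-buffer pixel entry, matching the fields listed above; the struct and field names are illustrative, and real implementations differ in detail.

    /* One surface contribution in the linked list (multiple-surface case). */
    typedef struct SurfaceData {
        float r, g, b;             /* RGB intensity components             */
        float opacity;             /* opacity parameter (1 = fully opaque) */
        float depth;               /* depth of this surface fragment       */
        float coverage;            /* percent of pixel area covered        */
        int   surfaceId;           /* surface identifier                   */
        struct SurfaceData *next;  /* pointer to next surface in the list  */
    } SurfaceData;

    /* One A-buffer pixel: the sign of the depth field selects the case. */
    typedef struct {
        float depth;               /* >= 0: single surface; < 0: list      */
        union {
            struct {               /* single-surface case                  */
                float r, g, b;     /* surface color                        */
                float coverage;    /* percent of pixel coverage            */
            } surf;
            SurfaceData *list;     /* multiple-surface case                */
        } u;
    } APixel;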
Scan lines are processed to determine surface overlaps of pixels across the individual scan lines.
Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries.
The opacity factors and percent of surface overlaps are used to determine the pixel intensity as an average of the contributions from the overlapping surfaces.
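A minimal sketch of that averaging step for one color channel, assuming the SurfaceData list defined above; weighting each contribution by opacity times coverage is one simple choice, and a full A-buffer would first sort the fragments by depth and blend front to back.

    /* Average the red-channel contributions of all surfaces in the
       list, weighting each by its opacity and pixel coverage; any
       uncovered weight shows the background. The same logic applies
       to the green and blue channels. */
    float pixelIntensityR(const SurfaceData *list, float backgroundR)
    {
        float sum = 0.0f, weight = 0.0f;
        const SurfaceData *s;

        for (s = list; s != NULL; s = s->next) {
            float w = s->opacity * s->coverage;
            sum    += w * s->r;
            weight += w;
        }
        if (weight <= 0.0f)
            return backgroundR;             /* nothing covers the pixel */
        if (weight < 1.0f)                  /* partial coverage: blend  */
            return sum + (1.0f - weight) * backgroundR;
        return sum / weight;                /* normalize overlaps       */
    }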