Visible Surface Detection
Although there are major differences in the basic approach taken by the various
visible-surface detection algorithms, most use sorting and coherence methods to
improve performance. Sorting is used to facilitate depth comparisons by ordering the
individual surfaces in a scene according to their distance from the view plane.
Coherence methods are used to take advantage of regularities in a scene. An individual
scan line can be expected to contain intervals (runs) of constant pixel intensities, and
scan-line patterns often change little from one line to the next. Animation frames
contain changes only in the vicinity of moving objects. And constant relationships often
can be established between objects and surfaces in a scene.
Back-face detection uses the plane equation Ax + By + Cz + D = 0 of each polygon
surface. Back faces have normal vectors that point away from the viewing position and
are identified by C >= 0 when the viewing direction is along the positive zv axis. By
examining parameter C for the different planes defining an object, we can immediately
identify all the back faces.
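As a minimal sketch of this test (assuming the plane coefficients come from three
polygon vertices taken in order, so the cross product of the two edge vectors gives
the outward normal):

```python
def plane_coefficients(p1, p2, p3):
    """A, B, C, D of the plane through three vertices.

    The normal (A, B, C) is the cross product of the edge vectors
    p2 - p1 and p3 - p1, so vertex order fixes the 'outside' side.
    """
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    a = (y2 - y1) * (z3 - z1) - (z2 - z1) * (y3 - y1)
    b = (z2 - z1) * (x3 - x1) - (x2 - x1) * (z3 - z1)
    c = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    d = -(a * x1 + b * y1 + c * z1)
    return a, b, c, d


def is_back_face(c):
    """Back face when the viewing direction is along the +zv axis."""
    return c >= 0
```

Examining only C classifies every face of the object in one pass over its plane table.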
As implied by the name of this method, two buffer areas are required. A depth
buffer is used to store depth values for each (x, y) position as surfaces are processed,
and the refresh buffer stores the intensity values for each position. Initially, all positions
in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to
the background intensity. Each surface listed in the polygon tables is then processed,
one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The
calculated depth is compared to the value previously stored in the depth buffer at that
position. If the calculated depth is greater than the value stored in the depth buffer, the
new depth value is stored, and the surface intensity at that position is determined and
stored in the same xy location in the refresh buffer.
(Fig. 4-12)
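The update rule can be sketched as follows. The per-pixel sample format and
`process_surface` are hypothetical stand-ins for scan-converting one surface; the
buffers follow the convention above, where a larger z value means closer to the
view plane:

```python
WIDTH, HEIGHT = 8, 8
BACKGROUND = 0.0

depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]        # 0 = minimum depth
refresh_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]


def process_surface(pixels):
    """pixels: iterable of (x, y, z, intensity) samples for one surface.

    A sample wins the pixel only if it is closer (larger z in this
    convention) than whatever was stored there by earlier surfaces.
    """
    for x, y, z, intensity in pixels:
        if z > depth_buffer[y][x]:
            depth_buffer[y][x] = z
            refresh_buffer[y][x] = intensity
```

Surfaces can be processed in any order; the depth comparison alone resolves visibility.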
Depth values for a surface position (x, y) are calculated from the plane
equation for each surface:

z = (-Ax - By - D) / C
(Fig. 4-13)
For any scan line (Fig. 4-13), adjacent horizontal positions across the line differ
by 1, and a vertical y value on an adjacent scan line differs by 1. If the depth of position
(x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along
the scan line is obtained from the plane equation as

z' = z - A/C
The ratio -A/C is constant for each surface, so succeeding depth values across a
scan line are obtained from preceding values with a single addition. On each scan line,
we start by calculating the depth on a left edge of the polygon that intersects that scan
line (Fig. 4-14). Depth values at each successive position across the scan line are then
calculated by the previous equations. We first determine the y-coordinate extents of
each polygon, and process the surface from the topmost scan line to the bottom scan
line, as shown in Fig. 4-14. Starting at a top vertex, we can recursively calculate x
positions down a left edge of the polygon as x' = x - 1/m, where m is the slope of the
edge (Fig. 4-15). Depth values down the edge are then obtained recursively as

z' = z + (A/m + B)/C
(Fig. 4-14: scan lines intersecting a polygon surface, showing the top scan line, a
y scan line, the left edge, and its intersection positions.)
If we are processing down a vertical edge, the slope is infinite and the recursive
calculations reduce to
z' = z + B/C
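The three incremental updates can be checked against the full plane equation. This is
an illustrative sketch; the helper names are not from the text:

```python
def depth_at(a, b, c, d, x, y):
    """Exact depth from the plane equation: z = (-a*x - b*y - d) / c."""
    return (-a * x - b * y - d) / c


def step_across(z, a, c):
    """Depth at (x + 1, y) from the depth z at (x, y): one addition."""
    return z - a / c


def step_down_edge(z, a, b, c, m):
    """Depth at the edge intersection on the next scan line (y - 1),
    where the left edge has slope m; a vertical edge (m = infinity)
    reduces the step to b / c."""
    if m == float("inf"):
        return z + b / c
    return z + (a / m + b) / c
```

Each step adds a constant for the surface, so no per-pixel plane evaluation is needed
after the first pixel on each scan line.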
An alternate approach is to use a midpoint method or Bresenham-type algorithm
for determining x values on left edges for each scan line. The method can also be
applied to curved surfaces by determining depth and intensity values at each surface
projection point.
(Fig. 4-15)
For polygon surfaces, the depth-buffer method is very easy to implement, and it
requires no sorting of the surfaces in a scene. But it does require the availability of a
second buffer in addition to the refresh buffer. A system with a resolution of 1024 by
1024, for example, would require over a million positions in the depth buffer, with each
position containing enough bits to represent the number of depth increments needed.
One way to reduce storage requirements is to process one section of the scene at a time,
using a smaller depth buffer. After each view section is processed, the buffer is reused
for the next section.
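A sketch of this sectioned processing, assuming a hypothetical `shade(surface, x, y)`
helper that returns a `(z, intensity)` sample or `None` where the surface does not
project:

```python
def render_in_sections(surfaces, width, height, band_height, shade):
    """Process the scene one horizontal band at a time, so the depth
    buffer needs only band_height rows instead of the full frame."""
    image = [[0] * width for _ in range(height)]
    for y0 in range(0, height, band_height):
        rows = min(band_height, height - y0)
        depth = [[0.0] * width for _ in range(rows)]   # reused per band
        for surf in surfaces:
            for y in range(y0, y0 + rows):
                for x in range(width):
                    sample = shade(surf, x, y)
                    if sample and sample[0] > depth[y - y0][x]:
                        depth[y - y0][x] = sample[0]
                        image[y][x] = sample[1]
    return image
```

The trade-off is that every surface must be revisited once per band, exchanging memory
for extra processing passes.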
(Fig. 4-16)
The A-buffer method extends the depth-buffer idea so that a pixel can accumulate
contributions from several surfaces. Each position in the A-buffer has two fields:
Depth field - stores a positive or negative real number
Intensity field - stores surface-intensity information or a pointer value
If depth >= 0, the number stored at that position is the depth of a single surface
overlapping the corresponding pixel area. The intensity field then stores the RGB
components of the surface color at that point and the percent of pixel coverage.
If depth < 0, this indicates multiple-surface contributions to the pixel intensity.
The intensity field then stores a pointer to a linked list of surface data.
Surface information in the A-buffer includes:
a. RGB intensity components
b. Opacity parameter
c. Depth
d. Percent of area coverage
e. Surface identifier
f. Other surface rendering parameters
The algorithm proceeds just like the depth-buffer algorithm. The depth and
opacity values are used to determine the final color of a pixel.
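A minimal sketch of one A-buffer position, using a Python list in place of the linked
list of surface data (the field and class names are illustrative, not from the text):

```python
from dataclasses import dataclass


@dataclass
class Fragment:
    rgb: tuple       # RGB intensity components
    opacity: float   # opacity parameter
    depth: float
    coverage: float  # fraction of the pixel area covered
    surface_id: int


class ABufferCell:
    """One A-buffer position: a depth field and an intensity field.

    depth >= 0: a single surface covers the pixel, and intensity holds
    that fragment. depth < 0: intensity holds a list of fragments,
    standing in for the linked list of surface data.
    """

    def __init__(self):
        self.depth = 0.0
        self.intensity = None

    def add(self, frag):
        if self.intensity is None:
            self.depth = frag.depth
            self.intensity = frag
        elif self.depth >= 0:
            # second contribution: flip to the multi-surface form
            first = self.intensity
            self.depth = -1.0
            self.intensity = [first, frag]
        else:
            self.intensity.append(frag)
```

The negative depth acts as a tag, so the renderer can distinguish the simple case from
the multi-surface case without a separate flag field.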
(Fig. 4-17)
Figure 4-17 illustrates the scan-line method for locating visible portions of surfaces for
pixel positions along the line. The active list for line 1 contains information from the
edge table for edges AB, BC, EH, and FG. For positions along this scan line between
edges AB and BC, only the flag for surface S1 is on. Therefore no depth calculations are
necessary, and intensity information for surface S1, is entered from the polygon table
into the refresh buffer. Similarly, between edges EH and FG, only the flag for surface
S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values
in the other areas are set to the background intensity. The background intensity can be
loaded throughout the buffer in an initialization routine.
For scan lines 2 and 3 in Fig. 4-17, the active edge list contains edges AD,
EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface
S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this
interval, depth calculations must be made using the plane coefficients for the two
surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2,
so intensities for surface S1 are loaded into the refresh buffer until boundary BC is
encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are
stored until edge FG is passed.
We can take advantage of coherence along the scan lines as we pass from
one scan line to the next. In Fig. 4-17, scan line 3 has the same active list of edges as
scan line 2. Since no changes have occurred in the line intersections, it is unnecessary
to make depth calculations again between edges EH and BC. The two surfaces must be in
the same relative orientation as determined on scan line 2, so the intensities for
surface S1 can be entered without further calculations.
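The flag bookkeeping for one scan line can be sketched as follows. The
`depth_of(s, x)` and `intensity_of(s)` helpers are hypothetical, the edge semantics
are simplified, and smaller depth is taken to mean closer to the view plane, as in
the discussion above:

```python
def shade_scan_line(crossings, width, depth_of, intensity_of, background=0):
    """crossings: sorted list of (x, surface_id) edge intersections on
    this scan line; each crossing toggles that surface's flag."""
    line = [background] * width
    active = set()           # surfaces whose flags are currently on
    i = 0
    for x in range(width):
        while i < len(crossings) and crossings[i][0] <= x:
            s = crossings[i][1]
            if s in active:
                active.remove(s)
            else:
                active.add(s)
            i += 1
        if len(active) == 1:
            # one flag on: no depth calculation needed
            line[x] = intensity_of(next(iter(active)))
        elif len(active) > 1:
            # several flags on: shade with the closest surface
            nearest = min(active, key=lambda s: depth_of(s, x))
            line[x] = intensity_of(nearest)
    return line
```

Depth comparisons happen only in the intervals where more than one flag is on, which
is exactly the saving the method is built around.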
(Fig. 4-18)
In the depth-sorting method, we first sort surfaces according to their distance from
the view plane. The intensity
values for the farthest surface are then entered into the refresh buffer. Taking each
succeeding surface in turn (in decreasing depth order), we "paint" the surface intensities
onto the frame buffer over the intensities of the previously processed surfaces.
If a depth overlap is detected at any point in the list, we need to make some
additional comparisons to determine whether any of the surfaces should be reordered.
We make the following tests for each surface that overlaps with S, the surface
currently at the end of the sorted list. If any one of
these tests is true, no reordering is necessary for that surface. The tests are listed in
order of increasing difficulty.
1) The bounding rectangles in the xy plane for the two surfaces do not overlap
2) Surface S is completely behind the overlapping surface relative to the viewing
position.
3) The overlapping surface is completely in front of S relative to the viewing
position.
4) The projections of the two surfaces onto the view plane do not overlap.
We substitute the coordinates of all vertices of S into the plane equation for the
overlapping surface and check the sign of the result. If the plane equations are set up
so that the outside of the surface is toward the viewing position, then S is behind S'
if all vertices of S are "inside" S'.
If tests 1 through 3 have all failed, we try test 4 by checking for intersections
between the bounding edges of the two surfaces using line equations in the xy plane.
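Tests 1 and 2 can be sketched directly. This is an illustrative sketch, assuming
bounding rectangles given as (xmin, xmax, ymin, ymax) and planes oriented with their
outside toward the viewing position:

```python
def rects_overlap(r1, r2):
    """Test 1: do the bounding rectangles in the xy plane overlap?
    If not, no reordering can be needed for this pair."""
    return not (r1[1] < r2[0] or r2[1] < r1[0] or
                r1[3] < r2[2] or r2[3] < r1[2])


def completely_behind(vertices, plane):
    """Test 2: substitute every vertex of S into the plane equation
    (a, b, c, d) of the overlapping surface. With the outside of the
    plane toward the viewer, S is behind it if every vertex lies on
    the 'inside' (non-positive) side."""
    a, b, c, d = plane
    return all(a * x + b * y + c * z + d <= 0 for (x, y, z) in vertices)
```

The cheap rectangle test is tried first precisely because it frequently succeeds and
avoids the vertex-by-vertex plane evaluation of test 2.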
An easy way to do this is to successively divide the area into four equal parts at
each step. There are four possible relationships that a surface can have with a specified
area boundary:
Surrounding surface - one that completely encloses the area.
Overlapping surface - one that is partly inside and partly outside the area.
Inside surface - one that is completely inside the area.
Outside surface - one that is completely outside the area.
The tests for determining surface visibility within an area can be stated in terms
of these four classifications.
No further subdivisions of a specified area are needed if one of the following
conditions is true:
1) All surfaces are outside surfaces with respect to the area.
2) Only one inside, overlapping, or surrounding surface is in the area.
3) A surrounding surface obscures all other surfaces within the area boundaries.
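The recursive structure of the area-subdivision method can be sketched as follows,
with hypothetical helpers: `classify(surface, area)` returns one of 'surrounding',
'overlapping', 'inside', or 'outside', and `obscures_all(s, surfaces, area)` decides
condition 3:

```python
def subdivide(area, surfaces, classify, obscures_all, shade, min_size=1):
    """area = (x, y, size), a square region of the view plane.
    shade(area, surfaces) fills the area once a stopping condition holds."""
    x, y, size = area
    relevant = [s for s in surfaces if classify(s, area) != 'outside']
    stop = (
        not relevant                    # 1) all surfaces are outside
        or len(relevant) == 1           # 2) only one surface in the area
        or any(classify(s, area) == 'surrounding'
               and obscures_all(s, relevant, area)
               for s in relevant)       # 3) a surrounding surface hides the rest
        or size <= min_size             # pixel-sized area: just shade it
    )
    if stop:
        shade(area, relevant)
        return
    half = size // 2
    for qx, qy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
        subdivide((qx, qy, half), relevant, classify, obscures_all,
                  shade, min_size)
```

Note that each quadrant only reconsiders the surfaces still relevant to its parent, so
the candidate list shrinks as the areas do.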