Visible-Surface Detection Methods: CS3162 Introduction To Computer Graphics
Characteristics of approaches:
- Require large memory size?
- Require long processing time?
- Applicable to which types of objects?
Considerations:
- Complexity of the scene
- Type of objects in the scene
- Available equipment
- Static or animated?
- Coherence: making use of the results calculated for one part of the scene or image for other nearby parts.
- Coherence is the result of local similarity.
- As objects have continuous spatial extent, object properties vary smoothly within a small local
region in the scene. Calculations can then be made incrementally.
Types of coherence:
1. Object Coherence:
Visibility of an object can often be decided by examining a circumscribing solid (which may be of
simple form, e.g. a sphere or a polyhedron).
2. Face Coherence:
Surface properties computed for one part of a face can be applied to adjacent parts after small
incremental modification. (E.g. if the face is small, we can sometimes assume that if one part of the
face is invisible to the viewer, the entire face is also invisible.)
3. Edge Coherence:
The visibility of an edge changes only when it crosses another edge, so if one segment of a non-
intersecting edge is visible, the entire edge is also visible.
6. Depth Coherence:
The depths of adjacent parts of the same surface are similar.
7. Frame Coherence:
Pictures of the same scene at successive points in time are likely to be similar, despite small changes
in objects and viewpoint, except near the edges of moving objects.
Most visible surface detection methods make use of one or more of these coherence properties of a
scene.
We can also take advantage of regularities in a scene; for example, constant relationships can often
be established between objects and surfaces in a scene.
CS3162 Introduction to Computer Graphics
Helena Wong, 2001
In a solid object, there are surfaces which are facing the viewer (front faces) and there are surfaces
which are opposite to the viewer (back faces).
These back faces account for approximately half of the total number of surfaces. Since we cannot
see these surfaces anyway, we can save processing time by removing them before the clipping process
with a simple test.
Each surface has a normal vector. If this vector is pointing in the direction of the center of projection,
it is a front face and can be seen by the viewer. If it is pointing away from the center of projection, it
is a back face and cannot be seen by the viewer.
The test is very simple: suppose the z axis points towards the viewer. If the z component of the
normal vector is negative, the surface is a back face; if the z component is positive, it is a
front face.
Note that this technique handles only nonoverlapping convex polyhedra well.
For other cases, where there are concave polyhedra or overlapping objects, we
still need to apply other methods to determine whether faces are partially or
completely hidden by other objects (e.g. using the Depth-Buffer Method or the
Depth-Sort Method).
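The test above can be sketched as a dot product between the outward surface normal and the viewing direction (a minimal sketch; the function and parameter names are illustrative, not from the notes):

```python
def is_back_face(normal, view_dir=(0.0, 0.0, 1.0)):
    """Return True if the surface faces away from the viewer.

    Assumes the z axis points towards the viewer, as in the notes;
    `normal` is the outward surface normal (nx, ny, nz).
    """
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    # With view_dir = (0, 0, 1) the dot product reduces to nz, so a
    # negative z component means a back face, exactly as described;
    # faces seen edge-on (dot product zero) are treated as front.
    return nx * vx + ny * vy + nz * vz < 0
```

With the default viewing direction, a normal of (0, 0, -1) is reported as a back face and (0, 0, 1) as a front face.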
Algorithm:
1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the back
clipping plane).
2. The image buffer is set to the background color.
3. Surfaces are rendered one at a time.
4. For the first surface, the depth value of each pixel is calculated.
5. If this depth value is smaller than the corresponding depth value in the z-buffer (ie. it is closer to
the view point), both the depth value in the z-buffer and the color value in the image buffer are
replaced by the depth value and the color value of this surface calculated at the pixel position.
6. Repeat steps 4 and 5 for the remaining surfaces.
7. After all the surfaces have been processed, each pixel of the image buffer represents the color of a
visible surface at that pixel.
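The steps above can be sketched as follows (a minimal sketch; the surface representation here, a per-pixel depth function plus a flat colour, is an assumption for illustration, not part of the notes):

```python
def zbuffer_render(width, height, surfaces, background, max_depth):
    """Depth-buffer (z-buffer) sketch.

    `surfaces` is a list of (depth_at, colour) pairs, where
    depth_at(x, y) returns the surface's depth at a pixel, or None
    if the surface does not cover that pixel.
    """
    # Steps 1-2: z-buffer at the back clipping plane depth, image
    # buffer at the background colour.
    zbuf = [[max_depth] * width for _ in range(height)]
    image = [[background] * width for _ in range(height)]
    # Steps 3-6: process the surfaces one at a time.
    for depth_at, colour in surfaces:
        for y in range(height):
            for x in range(width):
                d = depth_at(x, y)
                # Step 5: keep whichever surface is closest so far.
                if d is not None and d < zbuf[y][x]:
                    zbuf[y][x] = d
                    image[y][x] = colour
    # Step 7: the image buffer holds the visible-surface colours.
    return image
```

Note that no sorting is needed: surfaces may be submitted in any order, and the per-pixel depth comparison resolves visibility.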
- This method requires an additional buffer (compared with the Depth-Sort Method) and incurs
  the overheads involved in updating the buffer. So this method is less attractive when only
  a few objects in the scene are to be rendered.
- Simple and does not require additional data structures.
- The z-value of a polygon can be calculated incrementally.
- No pre-sorting of polygons is needed.
- No object-object comparison is required.
- Can be applied to non-polygonal objects.
- Hardware implementations of the algorithm are available in some graphics workstations.
- For large images, the algorithm could be applied to, e.g., the 4 quadrants of the image separately,
so as to reduce the requirement for a large additional buffer.
In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined
to determine which are visible. Across each scan line, depth calculations are made for each
overlapping surface to determine which is nearest to the view plane. When the visible surface has
been determined, the intensity value for that position is entered into the image buffer.
- Step 2 is not efficient because not all polygons necessarily intersect with the scan line.
- Depth calculation in 2a is not needed if only 1 polygon in the scene is mapped onto a segment of
the scan line.
- To speed up the process:
Recall the basic idea of polygon filling: For each scan line crossing a polygon,
this algorithm locates the intersection points of the scan line with the polygon
edges. These intersection points are sorted from left to right. Then, we fill the
pixels between each intersection pair.
With a similar idea, we fill every scan line span by span. When polygons overlap on a scan
line, we perform depth calculations at their edges to determine which polygon should be
visible at which span.
Any number of overlapping polygon surfaces can be processed with this method. Depth
calculations are performed only where polygons overlap.
We can take advantage of coherence along the scan lines as we pass from one scan line to the
next. If there is no change in the pattern of the intersection of polygon edges with the
successive scan lines, it is not necessary to do depth calculations.
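The span-by-span idea can be sketched as follows, assuming each polygon's intersection with the current scan line has already been reduced to a single interval with a constant depth (a simplification for illustration; the names are not from the notes):

```python
def visible_spans(intervals):
    """Scan-line visibility sketch for one scan line.

    Each entry is (x_left, x_right, depth, colour): one polygon's
    intersection with the current scan line, with constant depth
    across the line (a simplifying assumption). Returns a list of
    (x_start, x_end, colour) spans from left to right.
    """
    # Span boundaries: the sorted intersection points, exactly as
    # in polygon filling.
    xs = sorted({x for left, right, _, _ in intervals
                 for x in (left, right)})
    spans = []
    for x0, x1 in zip(xs, xs[1:]):
        mid = (x0 + x1) / 2.0
        # Polygons covering this span.
        covering = [(d, c) for left, right, d, c in intervals
                    if left <= mid <= right]
        if covering:
            # Depth comparison is needed only where polygons
            # overlap; the nearest (smallest depth) polygon wins.
            _, colour = min(covering)
            spans.append((x0, x1, colour))
    return spans
```

For example, a far polygon A on [0, 4] and a near polygon B on [2, 6] yield spans coloured A on [0, 2] and B on [2, 6].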
This works only if surfaces do not cut through or otherwise cyclically overlap each other. If
cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
1. Sort all surfaces according to their distances from the view point.
2. Render the surfaces to the image buffer one at a time starting from the farthest surface.
3. Surfaces close to the view point will replace those which are far away.
4. After all surfaces have been processed, the image buffer stores the final image.
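The four steps above can be sketched as follows (a minimal sketch; the surface representation, a distance plus a pixel-to-colour map, is an illustrative assumption, and the depth sort is assumed unambiguous):

```python
def depth_sort_render(surfaces):
    """Depth-sort (painter's algorithm) sketch.

    `surfaces` is a list of (distance, pixels) pairs, where pixels
    maps (x, y) -> colour. Assumes no unresolved overlaps in depth,
    so a plain sort suffices.
    """
    frame = {}
    # Steps 1-2: render from the farthest surface to the nearest.
    for _, pixels in sorted(surfaces, key=lambda s: s[0], reverse=True):
        # Step 3: nearer surfaces overwrite farther ones.
        frame.update(pixels)
    # Step 4: frame now stores the final image.
    return frame
```

The overwriting in step 3 is what gives the method its "painter's" character: later (nearer) surfaces are simply painted over earlier (farther) ones.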
The basic idea of this method is simple. When there are only a few objects in the scene, this method
can be very fast. However, as the number of objects increases, the sorting process can become very
complex and time consuming.
If there are any overlaps in depth, we need to make some additional comparisons to
determine whether a pair of surfaces should be reordered. The checking is as follows:
a. The bounding rectangles in the xy plane for the 2 surfaces do not overlap
b. The surface S with greater depth is completely behind the overlapping surface relative to the
viewing position.
c. The overlapping surface is completely in front of the surface S with greater depth relative to the
viewing position.
d. The projections of the 2 surfaces onto the view plane do not overlap.
If any of the above tests is passed, the surfaces need not be reordered.
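Test (a), for instance, can be sketched as a bounding-rectangle overlap check (an illustrative sketch; the (xmin, ymin, xmax, ymax) rectangle layout is an assumption):

```python
def xy_rects_overlap(r1, r2):
    """Test (a): do two surfaces' xy bounding rectangles overlap?

    Rectangles are (xmin, ymin, xmax, ymax) tuples. If they do not
    overlap, the surface pair need not be reordered.
    """
    x1min, y1min, x1max, y1max = r1
    x2min, y2min, x2max, y2max = r2
    # The rectangles are disjoint exactly when one lies entirely to
    # the left of, right of, below, or above the other.
    return not (x1max < x2min or x2max < x1min or
                y1max < y2min or y2max < y1min)
```

Because this test is cheap, it is typically tried first; the more expensive half-space tests (b) and (c) and the exact projection test (d) are only reached when the rectangles do overlap.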
Discussion:
- Back-face removal is achieved by not displaying a polygon if the viewer is located in its back
half-space.
- It is an object space algorithm (sorting and intersection calculations are done in object space
precision)
- If the view point changes, the BSP needs only minor re-arrangement.
- A new BSP tree is built if the scene changes
- The algorithm displays polygons back to front (cf. Depth-Sort).
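The back-to-front display order can be sketched as an in-order traversal of the BSP tree (the node layout and the side(eye) half-space test are illustrative assumptions, not from the notes):

```python
def bsp_draw(node, eye, out):
    """Back-to-front traversal of a BSP tree.

    Each node is a dict with 'polygon', 'front' and 'back' subtrees,
    and a 'side' function: side(eye) says which half-space of the
    node's plane the viewer is in ('front' or 'back').
    """
    if node is None:
        return
    if node['side'](eye) == 'front':
        # Viewer in the front half-space: the back subtree is
        # farther away, so draw it first, then this node's polygon,
        # then the front subtree.
        bsp_draw(node['back'], eye, out)
        out.append(node['polygon'])
        bsp_draw(node['front'], eye, out)
    else:
        # Viewer in the back half-space: the roles are swapped.
        bsp_draw(node['front'], eye, out)
        out.append(node['polygon'])
        bsp_draw(node['back'], eye, out)
```

When the view point changes, only the side tests change; the tree itself is reused, which is the "minor re-arrangement" noted above.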
The area-subdivision method takes advantage of area coherence in a scene by locating those view
areas that represent part of a single surface.
The total viewing area is successively divided into smaller and smaller rectangles until each small
area is simple, ie. it is a single pixel, or is covered wholly by a part of a single visible surface or no
surface at all.
The procedure to determine whether we should subdivide an area into smaller rectangles is:
1. We first classify each of the surfaces, according to their relations with the area:
Surrounding surface - a single surface completely encloses the area
Overlapping surface - a single surface that is partly inside and partly outside the area
Inside surface - a single surface that is completely inside the area
Outside surface - a single surface that is completely outside the area.
To improve the speed of classification, we can make use of the bounding rectangles of surfaces for
early confirmation or rejection of whether a surface belongs to a given type.
2. Check the result from 1; if any of the following conditions is true, then no subdivision of this
area is needed:
a. All surfaces are outside the area.
b. Only one inside, overlapping, or surrounding surface is in the area.
c. A surrounding surface obscures all other surfaces within the area boundaries.
For cases b and c, the color of the area can be determined from that single surface.
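The recursive subdivision can be sketched as follows (a simplified sketch: the classify and shade helpers are illustrative assumptions, and the depth test for case c is elided):

```python
def subdivide(area, surfaces, classify, shade, min_size=1):
    """Warnock-style area subdivision sketch.

    `area` is (x, y, w, h); classify(surface, area) returns one of
    'outside', 'inside', 'overlapping', 'surrounding'; and
    shade(area, s) colours the area from surface s (None meaning
    background). All of these helpers are illustrative assumptions.
    """
    x, y, w, h = area
    relevant = [(s, classify(s, area)) for s in surfaces]
    relevant = [(s, c) for s, c in relevant if c != 'outside']
    if not relevant:                      # case a: all outside
        shade(area, None)
        return
    if len(relevant) == 1:                # case b: a single surface
        shade(area, relevant[0][0])
        return
    if w <= min_size and h <= min_size:   # pixel-sized area: stop
        # The depth test for case c (a surrounding surface obscuring
        # the rest) is elided; the sketch just picks one surface.
        shade(area, relevant[0][0])
        return
    # Otherwise split the area into four quadrants and recurse.
    hw, hh = (w + 1) // 2, (h + 1) // 2
    for qx, qy, qw, qh in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh),
                           (x + hw, y + hh, w - hw, h - hh)):
        if qw > 0 and qh > 0:
            subdivide((qx, qy, qw, qh), [s for s, _ in relevant],
                      classify, shade, min_size)
```

Only the surfaces still relevant to an area are passed down to its quadrants, so the surface lists shrink as the recursion deepens.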
In these methods, octree nodes are projected onto the viewing surface
in a front-to-back order. Any surfaces toward the rear of the front
octants (0,1,2,3) or in the back octants (4,5,6,7) may be hidden by the
front surfaces.
The intensity of a pixel in an image is due to a ray of light that, after being reflected from some
object in the scene, passes through the centre of the pixel.
So, visibility of surfaces can be determined by tracing a ray of light from the centre of projection
(viewer's eye) to objects in the scene. (backward-tracing).
The ray-casting approach is an effective visibility-detection method for scenes with curved surfaces,
particularly spheres.
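For spheres, the ray-object intersection at the heart of ray casting reduces to solving a quadratic (a minimal sketch; the function name is illustrative and the ray direction is assumed to be normalised):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Ray-sphere intersection for ray casting.

    Returns the distance t >= 0 along the ray to the nearest hit,
    or None on a miss. `direction` is assumed to be normalised.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Vector from the sphere centre to the ray origin.
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    # Quadratic t^2 + b*t + c = 0 (the t^2 coefficient is 1 for a
    # normalised direction).
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None               # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None  # nearest hit in front of the eye
```

For instance, a ray from the origin along -z hits a unit sphere centred at (0, 0, -5) at distance t = 4, the sphere's near surface.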
2. Using Hierarchies
- If a parent bounding volume does not intersect with a ray, none of its
  children's bounding volumes intersect with the ray, and they need
  not be processed.
- This reduces the number of intersection calculations.
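This pruning rule can be sketched as follows (the node layout and the hits_volume test are illustrative assumptions, not from the notes):

```python
def candidate_objects(node, ray, hits_volume):
    """Prune ray-object tests with a bounding-volume hierarchy.

    Each internal node is {'bound': ..., 'children': [...]}; each
    leaf is {'bound': ..., 'object': ...}. hits_volume(ray, bound)
    reports whether the ray intersects the bounding volume.
    """
    # If the parent volume is missed, the whole subtree is skipped:
    # no child volume needs to be tested.
    if not hits_volume(ray, node['bound']):
        return []
    if 'object' in node:
        return [node['object']]
    out = []
    for child in node['children']:
        out.extend(candidate_objects(child, ray, hits_volume))
    return out
```

Only the objects returned here need the full (and more expensive) exact ray-surface intersection test.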