Unit 4 Visible Surface Detection and Surface-Rendering
Visible surface detection methods are broadly classified according to whether they deal with objects or
with their projected images.
Image-Space methods: Visibility is decided point by point at each pixel position on the projection
plane.
Most visible-surface detection algorithms use image-space methods, but in some cases object-space methods can also be used effectively.
If V is a vector in the viewing direction from the eye position, then a polygon is a back face if

    V.N > 0
In a right-handed viewing system with the viewing direction along the negative z_v axis, we can in general label any polygon as a back face if its normal vector N = (A, B, C) has a z-component value

    C ≤ 0
For other cases, where there are concave polyhedra or overlapping objects, we still need to apply other methods to determine where obscured faces are partially or completely hidden by other objects (e.g., the depth-buffer method or the depth-sort method).
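As a minimal sketch, the two forms of the back-face test above can be written as follows (the plain-tuple layout for V and N is an assumption for illustration):

```python
def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back-face test: the polygon faces away from the viewer if V.N > 0.
    normal is the polygon's plane normal (A, B, C); view_dir is the
    viewing vector V, by default looking along the negative z axis."""
    v_dot_n = sum(v * n for v, n in zip(view_dir, normal))
    return v_dot_n > 0

def is_back_face_z(normal):
    """Reduced test for a right-handed viewing system looking along the
    negative z axis: a polygon is a back face if C <= 0.  (This form also
    culls edge-on faces with C = 0, which project to nothing.)"""
    return normal[2] <= 0.0
```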
Depth-Buffer Method
The depth-buffer method is a commonly used image-space method for detecting visible surfaces. It is also known as the z-buffer method, because object depth is usually measured from the view plane along the z axis of the viewing system. It compares surface depths at each pixel position on the projection plane.
Each surface of the scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because their depth values can be computed very quickly and the method is easy to implement; it can, however, also be applied to nonplanar surfaces.
With the object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths are compared through their z values.
After all surfaces are processed, the depth buffer contains the
depth value of the visible surface and refresh buffer contains the
corresponding intensity values for those surfaces.
The depth value at a surface position (x, y) is calculated from the plane equation of the surface, Ax + By + Cz + D = 0:

    z = (-Ax - By - D) / C
The depth z' at the position (x + 1, y) along the same scan line is then

    z' = (-A(x + 1) - By - D) / C = z - A/C

The ratio A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single subtraction.
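A minimal depth-buffer sketch of the procedure above; the surface record (plane coefficients, a flat intensity, and the list of projected pixel positions it covers) is invented for illustration, and larger z is treated as nearer the view plane:

```python
def z_buffer_render(width, height, surfaces, background=0):
    """Minimal depth-buffer sketch.  Each surface is a dict with plane
    coefficients 'A', 'B', 'C', 'D' (Ax + By + Cz + D = 0), a flat
    'intensity', and a 'pixels' list of (x, y) projection-plane positions
    it covers.  The visible surface at a pixel is the one with the
    largest z (closest to the view plane in this convention)."""
    INF = float('-inf')
    depth = [[INF] * width for _ in range(height)]
    frame = [[background] * width for _ in range(height)]
    for s in surfaces:
        A, B, C, D = s['A'], s['B'], s['C'], s['D']
        for x, y in s['pixels']:
            z = -(A * x + B * y + D) / C      # depth from the plane equation
            if z > depth[y][x]:               # nearer than what is stored
                depth[y][x] = z
                frame[y][x] = s['intensity']
    return frame
```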
Step 2 is not efficient, because not all polygons necessarily intersect a given scan line. The depth calculation in step 2a is not needed when only one polygon in the scene maps onto a segment of the scan line.
Figure above illustrates the scan-line method for locating visible portions of surfaces for pixel positions
along the line. The active list for scan line 1 contains information from the edge table for edges AB, BC,
EH, and FG. For positions along this scan line between edges AB and BC, only the flag for surface S1 is on.
Therefore, no depth calculations are necessary, and intensity information for surface S1 is entered from
the polygon table into the refresh buffer. Similarly, between edges EH and FG, only the flag for surface
S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values in the other areas
are set to the background intensity. The background intensity can be loaded throughout the buffer in an
initialization routine.
For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from
edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both
surfaces are on. In this interval, depth calculations must be made using the plane coefficients for the
two surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2, so
intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the
flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
We can take advantage of coherence along the scan lines as we pass from one scan line to the next. In
Fig., scan line 3 has the same active list of edges as scan line 2. Since no changes have occurred in line intersections, it is unnecessary to make depth calculations between edges EH and BC again. The two
surfaces must be in the same orientation as determined on scan line 2, so the intensities for surface S1
can be entered without further calculations.
Any number of overlapping polygon surfaces can be processed with this scan-line method. Flags for the
surfaces are set to indicate whether a position is inside or outside, and depth calculations are performed
when surfaces overlap. When these coherence methods are used, we need to be careful to keep track of
which surface section is visible on each scan line. This works only if surfaces do not cut through or
otherwise cyclically overlap each other.
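One scan line of this flag-based scheme can be sketched as below; the surface record (span endpoints, a flat intensity, and a depth function from the plane equation) is a simplification for illustration, smaller depth means nearer the viewer, and the per-pixel loop ignores the coherence optimizations described above:

```python
def scanline_pixels(surfaces, width, y, background=0):
    """One scan line of the scan-line method (a simplified sketch).
    Each surface supplies its span (x_left, x_right) on this line, a
    flat 'intensity', and a depth(x, y) function derived from its plane
    equation.  Depth is evaluated only where more than one surface
    flag is on."""
    line = [background] * width
    for x in range(width):
        active = [s for s in surfaces if s['x_left'] <= x <= s['x_right']]
        if len(active) == 1:                  # single flag on: no depth test
            line[x] = active[0]['intensity']
        elif active:                          # overlap: nearest surface wins
            nearest = min(active, key=lambda s: s['depth'](x, y))
            line[x] = nearest['intensity']
    return line
```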
This algorithm is also called the "Painter's Algorithm", as it simulates how a painter typically produces a painting: starting with the background and then progressively adding new (nearer) objects to the canvas.
Problem: One of the major problems in this algorithm is intersecting polygon surfaces. As shown in fig.
below.
Solution: For intersecting polygons, we can split one polygon into two or more polygons, which can then be painted from back to front. Computing the intersections between polygons takes additional time, so the algorithm becomes more complex when such surfaces are present.
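The painting order itself is just a sort on depth. This sketch, with a dict-of-pixels canvas invented for illustration, applies only the basic back-to-front ordering and omits the overlap tests and polygon splitting:

```python
def paint(canvas, surfaces):
    """Painter's algorithm sketch: sort surfaces by decreasing depth
    (distance from the viewer) and paint back to front, so nearer
    surfaces simply overwrite farther ones pixel by pixel."""
    for s in sorted(surfaces, key=lambda s: s['depth'], reverse=True):
        for pixel in s['pixels']:
            canvas[pixel] = s['intensity']
    return canvas
```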
Example
Assume we are viewing along the z axis. The surface S with the greatest depth is compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlaps occur, S can be scan-converted, and this process is repeated for the next surface in the list. However, if a depth overlap is detected, additional comparisons are needed to determine whether any of the surfaces should be reordered.
BSP-Tree Method
Here the plane P1 partitions the space into two sets of objects: one set is behind the partitioning plane and the other is in front of it, relative to the viewing direction. Since one object is intersected by plane P1, we divide that object into two separate objects, labeled A and B. Objects A and C are now in front of P1, and B and D are behind P1.
We next partition the space with plane P2 and construct the binary tree shown in fig. (b). In this tree, the objects are represented as terminal nodes, with front objects as left branches and back objects as right branches.
When the BSP tree is complete, we process it by selecting the surfaces for display in back-to-front order, so that foreground objects are painted over background objects.
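The back-to-front traversal can be sketched as below, assuming objects sit at the terminal nodes and, for simplicity, that the viewer is on the front side of every partition plane:

```python
class BSPNode:
    """front = subtree in front of the partition plane (left branch),
    back = subtree behind it (right branch); terminal nodes hold obj."""
    def __init__(self, obj=None, front=None, back=None):
        self.obj, self.front, self.back = obj, front, back

def back_to_front(node, out):
    """List objects back to front: everything behind a plane is emitted
    (and hence painted) before everything in front of it."""
    if node is None:
        return out
    back_to_front(node.back, out)     # objects behind the plane first
    if node.obj is not None:
        out.append(node.obj)
    back_to_front(node.front, out)    # then the objects in front
    return out
```

For one possible arrangement of the example above (P1 separating {A, C} from {B, D}, P2 subdividing each pair), the traversal lists the back objects B and D before the front objects A and C.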
Octree Method
When an octree representation is used for the viewing volume, hidden-surface elimination is accomplished by projecting octree nodes onto the viewing surface in a front-to-back order. In the figures below, the front face of a region of space is formed with octants 0, 1, 2, and 3. Surfaces in the front of these octants are visible to the viewer; the back octants 4, 5, 6, and 7 are not visible. After octant subdivision and construction of the octree, the entire region is traversed depth-first.
Fig. 1: Objects in octants 0, 1, 2, and 3 obscure objects in the back octants (4, 5, 6, 7) when the viewing direction is as shown.
Fig. 2: Octant divisions for a region of space and the corresponding quadrant plane.
Ray-Casting Method
Trace the path of an imaginary ray from the viewing position (eye) through the viewing plane to the objects in the scene.
Identify the visible surface by determining which surface is intersected first by the ray.
Ray casting can easily be combined with lighting algorithms to generate shadows and reflections.
It handles curved surfaces well but is too slow for real-time applications.
Ray casting, as a visibility detection tool, is based on geometric optics methods, which trace the paths of
light rays. Since there are an infinite number of light rays in a scene and we are interested only in those
rays that pass through pixel positions, we can trace the light-ray paths backward from the pixels through
the scene. The ray-casting approach is an effective visibility-detection method for scenes with curved
surfaces, particularly spheres.
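For spheres, the first-hit test reduces to solving a quadratic for the ray parameter t; this sketch assumes the ray direction is a unit vector:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t >= 0 to the nearest intersection of the ray
    origin + t * direction with a sphere, or None if the ray misses.
    direction is assumed to be a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t >= 0 else None
```

The visible surface at a pixel is then the object returning the smallest t among all objects tested along that pixel's ray.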
Light Sources
Light sources are sometimes classified as light-emitting objects and light-reflecting objects. Generally, the term light source means an object that emits radiant energy, e.g., the Sun.
Point source: A point source is the simplest light emitter, e.g., a light bulb.
Distributed light source: e.g., a fluorescent light.
Fig.: Diverging ray paths from a point light source
Fig.: An object illuminated with a distributed light source
When light is incident on an opaque surface, part of it is reflected and part of it is absorbed. Surfaces that are rough or grainy tend to scatter the reflected light in all directions; this scattered light is called diffuse reflection.

Fig.: Diffuse reflection

When light sources create highlights, or bright spots, the effect is called specular reflection.

Fig.: Specular reflection
Illumination models
Illumination models are used to calculate the light intensity that we should see at a given point on the surface of an object. Lighting calculations are based on the optical properties of surfaces, the background lighting conditions, and the light-source specifications. All light sources are considered to be point sources.
2. Diffuse reflection
Objects illuminated only by ambient light are uniformly lit across their surfaces, appearing more or less bright in direct proportion to the ambient intensity. When an object is illuminated by a point light source, whose rays emanate uniformly in all directions from a single point, the object's brightness varies from one part to another, depending on the direction of and distance to the light source.
The fractional amount of the incident light that is diffusely reflected can be set for each surface with a parameter K_d, the coefficient of diffuse reflection. The value of K_d lies in the interval 0 to 1: for a highly reflective surface, K_d is set near 1; for a surface that absorbs most of the incident light, K_d is set near 0.
If a surface is exposed only to ambient light, the diffuse-reflection intensity at any point on the surface is

    I_ambdiff = K_d I_a
Assuming diffuse reflections from the surface are scattered with equal intensity in all directions, independent of the viewing direction (such surfaces are called ideal diffuse reflectors, or Lambertian reflectors, and are governed by Lambert's cosine law),

    I_diff = K_d I_l cos θ

where I_l is the intensity of the point light source and θ is the angle of incidence. If N is the unit normal vector to the surface and L is the unit vector in the direction of the point light source, then

    I_l,diff = K_d I_l (N.L)
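The ambient and Lambertian diffuse terms combine into a short function. Using the same K_d for both terms follows the notes above (some texts use a separate ambient coefficient K_a), and clamping N.L at zero handles surfaces facing away from the light:

```python
def diffuse_intensity(k_d, i_ambient, i_light, normal, to_light):
    """Ambient plus Lambertian diffuse term:
       I = K_d * I_a + K_d * I_l * max(N.L, 0)
    normal and to_light are assumed to be unit vectors; the clamp at
    zero removes the diffuse term for surfaces facing away from the
    light source."""
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    return k_d * i_ambient + k_d * i_light * max(n_dot_l, 0.0)
```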
For an ideal reflector (a perfect mirror), incident light is reflected only in the specular-reflection direction, i.e., V and R coincide (φ = 0). Shiny surfaces have a narrow specular-reflection range (small φ), and dull surfaces have a wider reflection range (larger φ).
An empirical model for calculating the specular-reflection range, developed by Phong Bui Tuong and called the Phong specular-reflection model (or simply the Phong model), sets the intensity of specular reflection proportional to cos^ns φ, where cos φ varies from 0 to 1 and ns is a specular-reflection parameter determined by the type of surface.
The intensity of specular reflection depends on the material properties of the surface and the angle of incidence (θ), as well as other factors such as the polarization and color of the incident light. We can approximately model monochromatic specular-intensity variations using a specular-reflection coefficient W(θ) for each surface, over the range θ = 0° to θ = 90°. In general, W(θ) tends to increase as the angle of incidence increases.
Vector R in this expression can be calculated in terms of vectors L and N. As seen in Fig. above,
the projection of L onto the direction of the normal vector is obtained with the dot product N.L.
Therefore, from the diagram, we have
R + L = (2N.L)N
and the specular-reflection vector is obtained as
R = (2N.L)N - L
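The reflection vector and the resulting Phong specular term can be sketched as follows; V is assumed to be the unit vector toward the viewer, and the clamp at zero handles viewing directions outside the highlight:

```python
def reflect(normal, to_light):
    """Specular-reflection direction R = (2 N.L) N - L, for unit
    vectors N (surface normal) and L (direction to the light)."""
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    return tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, to_light))

def phong_specular(w_theta, i_light, r, v, ns):
    """Phong model: I_spec = W(theta) * I_l * (cos phi)^ns, where
    cos phi = V.R for unit vectors V and R; ns controls how sharp
    the highlight is."""
    cos_phi = max(sum(a * b for a, b in zip(v, r)), 0.0)
    return w_theta * i_light * cos_phi ** ns
```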
2. Interpolated Shading:
As an alternative to evaluating the illumination equation at each point on the polygon, we can use interpolated shading, in which shading information is linearly interpolated across a triangle from the values determined for its vertices. Gouraud generalized this technique to arbitrary polygons. It is particularly easy to add to a scan-line algorithm that already interpolates the z value across a span from the interpolated z values computed for the span's endpoints.
Gouraud Shading
Gouraud shading, also called intensity-interpolation shading or color-interpolation shading, eliminates the intensity discontinuities that occur in flat shading. Each polygon surface is rendered with Gouraud shading by performing the following calculations:
1. Determine the average unit normal vector at each polygon vertex.
2. Apply an illumination model to each vertex to calculate the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.
Step 1: At each polygon vertex, we obtain the vertex normal by averaging the surface normals of all polygons sharing that vertex:

    N_v = (sum_{k=1..n} N_k) / |sum_{k=1..n} N_k|

Here, in the example:

    N_v = (N1 + N2 + N3 + N4) / |N1 + N2 + N3 + N4|

where N_v is the normal vector at a vertex shared by the 4 surfaces shown in the figure.
Step 2: Once we have the vertex normals (Nv), we can determine the intensity at the vertices from a
lighting model.
Step 3: To interpolate intensities along the polygon edges, consider the following figure (next page):
In the figure, the intensities at vertices 1, 2, and 3 are I_1, I_2, and I_3, obtained by averaging the normals of the surfaces sharing each vertex and applying an illumination model. For each scan line, the intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the intensities at the edge endpoints. The intensity at a point P on the polygon surface along the scan line is obtained by linearly interpolating the intensities I_4 and I_5:
    I_p = ((x_5 - x_p) / (x_5 - x_4)) I_4 + ((x_p - x_4) / (x_5 - x_4)) I_5
Incremental calculations are then used to obtain successive edge-intensity values between scan lines and successive intensities along a scan line. As shown in the figure below, if the intensity at edge position (x, y) is interpolated as
    I = ((y - y_2) / (y_1 - y_2)) I_1 + ((y_1 - y) / (y_1 - y_2)) I_2
then we can obtain the intensity along this edge for the next scan line, at position y - 1, as

    I' = ((y - 1 - y_2) / (y_1 - y_2)) I_1 + ((y_1 - (y - 1)) / (y_1 - y_2)) I_2 = I + (I_2 - I_1) / (y_1 - y_2)
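The interpolation formulas above translate directly into code; integer scan-line coordinates with y1 > y2 are assumed for the incremental edge walk:

```python
def scanline_intensity(i4, i5, x4, x5, xp):
    """I_p from the two edge-intersection intensities on a scan line:
    I_p = ((x5 - xp)/(x5 - x4)) I4 + ((xp - x4)/(x5 - x4)) I5"""
    return ((x5 - xp) / (x5 - x4)) * i4 + ((xp - x4) / (x5 - x4)) * i5

def edge_intensities(i1, i2, y1, y2):
    """Walk an edge from scan line y1 down to y2 (y1 > y2), updating the
    intensity incrementally: I' = I + (I2 - I1)/(y1 - y2) per scan line,
    instead of re-evaluating the full interpolation formula."""
    step = (i2 - i1) / (y1 - y2)
    i = float(i1)
    for y in range(y1, y2 - 1, -1):
        yield y, i
        i += step
```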
Phong Shading
A more accurate method for rendering a polygon surface is to interpolate normal vector and then apply
illumination model to each surface point. This method is called Phong shading or normal vector
interpolation shading. It displays more realistic highlights and greatly reduces the mach-band effect.
A polygon surface is rendered with Phong shading by carrying out following calculations.
1. Determine the average unit normal vector at each polygon vertex.
2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
Incremental calculations are used to evaluate normals between scan lines and along each individual scan line, as in Gouraud shading. Phong shading produces more accurate results than direct intensity interpolation, but it requires considerably more calculation.
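The essential difference from Gouraud shading, interpolating the normal and only then applying the lighting model, can be sketched as below (diffuse term only, with t in [0, 1] the interpolation parameter along a span):

```python
import math

def phong_shade(n1, n2, t, to_light, k_d, i_l):
    """Interpolate between two vertex normals, renormalize, and only
    then evaluate the diffuse model K_d * I_l * max(N.L, 0) at the
    interpolated normal.  Gouraud shading would instead interpolate
    the two vertex intensities directly."""
    n = [(1.0 - t) * a + t * b for a, b in zip(n1, n2)]
    length = math.sqrt(sum(x * x for x in n))
    n = [x / length for x in n]                     # renormalize
    n_dot_l = max(sum(a * b for a, b in zip(n, to_light)), 0.0)
    return k_d * i_l * n_dot_l
```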
Omitting the reflectivity and attenuation parameters, we can write the calculation for light-source diffuse reflection from a surface point (x, y) as

    I_diff(x, y) = (L.N) / (|L| |N|)
                 = (L.(Ax + By + C)) / (|L| |Ax + By + C|)
                 = ((L.A)x + (L.B)y + L.C) / (|L| |Ax + By + C|)

where the surface normal N at position (x, y) is expressed as the interpolation N = Ax + By + C for vectors A, B, and C. Rewriting this gives

    I_diff(x, y) = (ax + by + c) / (dx^2 + exy + fy^2 + gx + hy + i)^(1/2)    -------------------- (1)
This approximation still takes about twice as long as Gouraud shading, and normal (full) Phong shading takes six to seven times as long as Gouraud shading.