Chapter 6

Visible Surface Detection


Visible surface detection, or hidden surface removal, is a major concern in realistic graphics: it identifies those parts of a scene that are visible from a chosen viewing position. Several algorithms have been developed; some require more memory, some require more processing time, and some apply only to special types of objects.
Visible surface detection methods are broadly classified according to whether they deal with objects or
with their projected images.
➢ Object-Space Methods (OSM):
An object-space method compares objects and parts of objects to each other within the scene
definition to determine which surfaces, as a whole, we should label as visible.
E.g. Back-face detection method
➢ Image-Space Methods (ISM):
In an image-space algorithm, visibility is decided point by point at each pixel position on the
projection plane. Most visible-surface algorithms use image-space methods.
E.g. Depth-buffer method, Scan-line method, Area-subdivision method
➢ List-Priority Algorithms
This is a hybrid model that combines both object- and image-precision operations. Depth comparison and object splitting are done with object precision, while scan conversion (which relies on the ability of the graphics device to overwrite pixels of previously drawn objects) is done with image precision.
E.g. Depth-sorting method, BSP-tree method

A. Back Face Detection Method (Plane Equation method)


In a solid object, some surfaces face the viewer (front faces) and some face away from the viewer (back faces). Back faces account for approximately half of the total number of surfaces. Since we cannot see these surfaces anyway, we can remove them before the clipping process with a simple test, saving processing time.
Each surface has a normal vector. If this vector points toward the center of projection, the surface is a front face and can be seen by the viewer. If it points away from the center of projection, it is a back face and cannot be seen by the viewer.
With the viewing direction along the positive z axis, the test is very simple: if the z component of the normal vector is positive, the surface is a back face; if the z component is negative, it is a front face.
Principle:
➢ Remove all surfaces pointing away from the
viewer
➢ Eliminate the surface if it is completely obscured
by other surfaces in front of it
➢ Render only the visible surfaces facing the viewer
Back-facing and front-facing surfaces can be identified using the sign of V • N, where V is the view vector and N is the surface normal.

➢ If V • N > 0, back face
➢ If V • N < 0, front face
➢ If V • N = 0, on the line of view
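The sign test above can be sketched in a few lines of Python; the function name and the example vectors are illustrative, not part of the text:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def classify_face(view, normal):
    """Classify a polygon by the sign of V . N."""
    s = dot(view, normal)
    if s > 0:
        return "back"
    if s < 0:
        return "front"
    return "on line of view"

# The two polygons from the worked example below:
print(classify_face((-1, 0, -1), (2, 1, 2)))    # front  (V . N = -4)
print(classify_face((-1, 0, -1), (-3, 1, -2)))  # back   (V . N = 5)
```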

Compiled By: Mohan Bhandari, Ranjan Adhikari, Bhesh Thapa


Numerical
Q. Find the visibility of the surface AED as seen by an observer at P(5, 5, 5).
Here, the surface AED has vertices A(1, 0, 0), E(0, 1, 0) and D(0, 0, 1), so
AE = (0−1) i + (1−0) j + (0−0) k = −i + j
AD = (0−1) i + (0−0) j + (1−0) k = −i + k

Step 1: Normal vector N for AED:
N = AE × AD = (1·1 − 0·0) i − ((−1)·1 − 0·(−1)) j + ((−1)·0 − 1·(−1)) k = i + j + k
Step 2: The observer is at P(5, 5, 5), so we construct the view vector V from P toward the surface point A(1, 0, 0) as:
V = PA = (1−5) i + (0−5) j + (0−5) k = −4i − 5j − 5k
Step 3: To find the visibility of the surface, we take the dot product of the view vector V and the normal vector N:
V • N = (−4i − 5j − 5k) • (i + j + k) = −4 − 5 − 5 = −14 < 0
Since V • N < 0, the surface is a front face and is visible to the observer.
Q. Find the visibility of the surfaces with normals n1 and n2 for the view vector v = (−1, 0, −1).

n1 • v = (2, 1, 2) • (−1, 0, −1) = −2 + 0 − 2 = −4
i.e. n1 • v < 0, so n1 is a front-facing polygon.

n2 • v = (−3, 1, −2) • (−1, 0, −1) = 3 + 0 + 2 = 5
i.e. n2 • v > 0, so n2 is a back-facing polygon.
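As a sketch, the cross- and dot-product arithmetic of the AED example can be checked in Python (the vector names mirror the example; the helper functions are illustrative):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

AE = (-1, 1, 0)            # E - A
AD = (-1, 0, 1)            # D - A
N = cross(AE, AD)
print(N)                   # (1, 1, 1), i.e. N = i + j + k

V = (1 - 5, 0 - 5, 0 - 5)  # A - P for observer P(5, 5, 5)
print(dot(V, N))           # -14 < 0, so the surface is visible
```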

B. Depth Buffer (Z-Buffer) Method


This method, developed by Edwin Catmull, is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest (visible) surface.
Each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.
The method applies very efficiently to polygon surfaces, and surfaces can be processed in any order. So that closer polygons override farther ones, two buffers, the frame buffer and the depth buffer, are used.
➢ The depth buffer stores a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
➢ The frame buffer stores the intensity (color) value at each position (x, y).
The z-coordinates are usually normalized to the range [0, 1], where 0 indicates the back clipping plane and 1 indicates the front clipping plane, so a larger z value means a closer surface.
Algorithm
Step 1 − Initialize the buffer values:
➢ depthbuffer(x, y) = 0
➢ framebuffer(x, y) = background color
Step 2 − Process each polygon (one at a time):
➢ For each projected (x, y) pixel position of a polygon, calculate the depth z.
➢ If z > depthbuffer(x, y):
- compute the surface color,
- set depthbuffer(x, y) = z,
- set framebuffer(x, y) = surfacecolor(x, y)
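A minimal sketch of these steps, assuming the [0, 1] depth convention above (larger z = closer); the buffer and function names are illustrative:

```python
# Small buffers initialized as in Step 1.
WIDTH, HEIGHT = 4, 3
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [["bg"] * WIDTH for _ in range(HEIGHT)]

def draw(fragments):
    """Step 2 for one polygon. fragments: iterable of (x, y, z, color)."""
    for x, y, z, color in fragments:
        if z > depth_buffer[y][x]:      # closer than what is stored
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color

draw([(0, 0, 0.5, "red"), (1, 0, 0.5, "red")])  # far polygon
draw([(0, 0, 0.9, "blue")])                     # nearer polygon wins pixel (0, 0)
print(frame_buffer[0][:2])                      # ['blue', 'red']
```

Note that polygons can be submitted to `draw` in any order; the depth test alone decides visibility.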
Advantages
• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time.
Disadvantages
• It requires a large amount of memory.
• It is a time-consuming process.

C. Scan-Line Method
This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors, except that here we deal with multiple surfaces rather than one. As each scan line is processed, depth calculations are made for overlapping surfaces to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from information in the edge table; it contains only the edges that cross the current scan line, sorted in order of increasing x. In addition, we define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside the surface. Scan lines are processed from left to right: at the leftmost boundary of a surface the surface flag is turned on, and at the rightmost boundary it is turned off.

The active list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For positions along this scan line between edges AB and BC, only the flag for surface S1 is on. Therefore, no depth calculations are necessary, and intensity information for surface S1 is entered from the polygon table into the refresh buffer. Similarly, between edges EH and FG, only the flag for surface S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values in the other areas are set to the background intensity.
For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2, from edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations must be made using the plane coefficients for the two surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
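A much-simplified sketch of one scan line, with precomputed spans standing in for the active edge list and polygon table (all names are illustrative, and larger depth means closer here, as in the depth-buffer section):

```python
def scan_line(spans, width, background="bg"):
    """spans: list of (x_left, x_right, depth, intensity) per surface."""
    line = [background] * width
    for x in range(width):
        # Surfaces whose flag would be "on" at this x position.
        covering = [(d, i) for xl, xr, d, i in spans if xl <= x < xr]
        if covering:
            line[x] = max(covering)[1]  # nearest surface supplies the intensity
    return line

# Two overlapping surfaces like S1 and S2 in the figure:
print(scan_line([(1, 5, 0.6, "S1"), (3, 8, 0.4, "S2")], 10))
# ['bg', 'S1', 'S1', 'S1', 'S1', 'S2', 'S2', 'S2', 'bg', 'bg']
```

The real algorithm avoids the per-pixel search by toggling surface flags only at edge crossings; this sketch just shows the depth comparison in overlap intervals.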

D. Area-Subdivision Method
The area-subdivision method takes advantage of area coherence by locating those view areas that represent part of a single surface. The total viewing area is divided into smaller and smaller rectangles until each small area is the projection of part of a single visible surface, or of no surface at all. This process continues until the subdivisions are easily analyzed as belonging to a single surface, or until they are reduced to the size of a single pixel. An easy way to do this is to successively divide the area into four equal parts at each step. There are four possible relationships that a surface can have with a specified area boundary:
• Surrounding surface − one that completely encloses the area.
• Overlapping surface − one that is partly inside and partly outside the area.
• Inside surface − one that is completely inside the area.
• Outside surface − one that is completely outside the area.

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivision of a specified area is needed if one of the following conditions is true:
• All surfaces are outside surfaces with respect to the area.
• Only one inside, overlapping or surrounding surface is in the area.
• A surrounding surface obscures all other surfaces within the area boundaries.
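The four classifications can be sketched with axis-aligned rectangles standing in for projected surfaces (a simplification: real projections need not be rectangular):

```python
def classify(surface, area):
    """Classify rectangle `surface` against rectangle `area`.
    Rectangles are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    sx1, sy1, sx2, sy2 = surface
    ax1, ay1, ax2, ay2 = area
    if sx1 <= ax1 and sy1 <= ay1 and sx2 >= ax2 and sy2 >= ay2:
        return "surrounding"      # surface encloses the area
    if sx1 >= ax1 and sy1 >= ay1 and sx2 <= ax2 and sy2 <= ay2:
        return "inside"           # surface entirely within the area
    if sx2 <= ax1 or sx1 >= ax2 or sy2 <= ay1 or sy1 >= ay2:
        return "outside"          # no overlap at all
    return "overlapping"          # partly in, partly out

area = (0, 0, 10, 10)
print(classify((-1, -1, 11, 11), area))  # surrounding
print(classify((2, 2, 5, 5), area))      # inside
print(classify((20, 20, 30, 30), area))  # outside
print(classify((5, 5, 15, 15), area))    # overlapping
```

Recursive subdivision would call such a test for each surface in each quadrant and stop when one of the three conditions above holds.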

E. A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility detection method developed at Lucasfilm for the REYES rendering system ("Renders Everything You Ever Saw").
The A-buffer expands on the depth-buffer method to allow transparency. The key data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields:
• Depth field − stores a positive or negative real number.
• Intensity field − stores surface-intensity information or a pointer value.



➢ If depth >= 0, the number stored at that position is the depth of a single surface overlapping the
corresponding pixel area.

➢ If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field
then stores a pointer to a linked list of surface data.

➢ The surface buffer in the A-buffer includes:
• RGB intensity components
• Opacity Parameter
• Depth
• Percent of area coverage
• Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values are used to
determine the final color of a pixel.
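A sketch of the cell layout under these rules, using hypothetical Python classes for illustration:

```python
from dataclasses import dataclass

@dataclass
class Surface:
    rgb: tuple          # RGB intensity components
    opacity: float      # opacity parameter
    depth: float
    coverage: float     # percent of area coverage
    surface_id: int     # surface identifier

@dataclass
class ABufferCell:
    depth: float = 0.0
    intensity: object = None  # a single Surface, or a list of Surface records

def add_surface(cell, surf):
    """Record one surface contribution at this pixel."""
    if cell.intensity is None:           # first surface at this pixel
        cell.depth, cell.intensity = surf.depth, surf
    else:
        if cell.depth >= 0:              # switch to multi-surface mode
            cell.intensity = [cell.intensity]
            cell.depth = -1.0            # negative depth flags multiple surfaces
        cell.intensity.append(surf)

cell = ABufferCell()
add_surface(cell, Surface((1, 0, 0), 0.5, 0.7, 1.0, 1))
print(cell.depth)   # 0.7  (single surface: depth stored directly)
add_surface(cell, Surface((0, 0, 1), 1.0, 0.4, 1.0, 2))
print(cell.depth)   # -1.0 (multiple surfaces: intensity now holds a list)
```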

F. Depth Sorting Method


The depth-sorting method uses both image-space and object-space operations. It performs two basic functions:
• First, the surfaces are sorted in order of decreasing depth.
• Second, the surfaces are scan-converted in order, starting with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in image space. This method for solving the
hidden-surface problem is often referred to as the painter's algorithm.
The algorithm begins by sorting by depth:
➢ Sort all surfaces according to their distances from the view point.
➢ Render the surfaces to the image buffer one at a time, starting from the farthest surface.
➢ Surfaces close to the view point will replace those which are far away.
➢ After all surfaces have been processed, the image buffer stores the final image.
Example:
Assuming we are viewing along the z axis, the surface S with the greatest depth is compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlaps occur, S can be scan converted, and the process is repeated for the next surface in the list. If a depth overlap is detected, however, additional comparisons are needed to determine whether any of the surfaces should be reordered.
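The painter's algorithm steps above can be sketched in 1D, one span per surface (names illustrative; the overlap-reordering tests are omitted):

```python
def painters(surfaces, width):
    """surfaces: list of (depth, x_left, x_right, color);
    larger depth = farther from the view point."""
    image = ["bg"] * width
    # Sort by decreasing depth, then paint farthest-first so that
    # nearer surfaces overwrite farther ones pixel by pixel.
    for depth, xl, xr, color in sorted(surfaces, reverse=True):
        for x in range(xl, xr):
            image[x] = color
    return image

print(painters([(1.0, 0, 3, "near"), (5.0, 1, 6, "far")], 8))
# ['near', 'near', 'near', 'far', 'far', 'far', 'bg', 'bg']
```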



G. BSP Tree Method
A binary space partitioning (BSP) tree is an efficient method for determining object visibility by painting
surfaces onto the screen from back to front as in the painter's algorithm.
The BSP tree is particularly useful when the view reference point changes but the objects in a scene are at fixed positions.
Applying a BSP tree to visibility testing involves identifying surfaces that are "inside" or "outside" the partitioning plane at each step of the space subdivision, relative to the viewing direction. It is an efficient way to calculate visibility among a static group of 3D polygons as seen from an arbitrary viewpoint.
In the following figure,

[Figure: planes P1 and P2 partition objects A, B, C and D relative to the viewing direction, and the corresponding BSP tree places front objects on left branches and back objects on right branches.]

Here, plane P1 partitions the space into two sets of objects: one set is behind and one set is in front of the partitioning plane, relative to the viewing direction. Since one object is intersected by plane P1, we divide that object into two separate objects, labeled A and B. Objects A and C are now in front of P1, while B and D are behind P1.
We next partition the space with plane P2 and construct the binary tree shown in the figure. In this tree, the objects are represented as terminal nodes, with front objects as left branches and back objects as right branches.
When the BSP tree is complete, we process it by selecting the surfaces for display in back-to-front order, so foreground objects are painted over background objects.
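As a much-reduced sketch, BSP back-to-front traversal can be illustrated in 1D, with a point standing in for each partitioning plane (a real implementation splits polygons against planes in 3D; all names here are illustrative):

```python
class Node:
    """Interior node: a partition value with front and back subtrees."""
    def __init__(self, plane, front, back):
        self.plane, self.front, self.back = plane, front, back

def build(objects):
    """Build a 1D BSP tree; leaves are (possibly empty) object lists."""
    if len(objects) <= 1:
        return objects
    plane = objects[0]
    front = [o for o in objects[1:] if o < plane]   # front = smaller side
    back = [o for o in objects[1:] if o >= plane]
    return Node(plane, build(front), build(back))

def paint(tree, viewer, out):
    """Collect objects in back-to-front order relative to `viewer`."""
    if isinstance(tree, list):
        out.extend(tree)
        return
    if viewer < tree.plane:             # viewer on the front side
        paint(tree.back, viewer, out)   # far side first
        out.append(tree.plane)
        paint(tree.front, viewer, out)  # near side last
    else:                               # viewer on the back side
        paint(tree.front, viewer, out)
        out.append(tree.plane)
        paint(tree.back, viewer, out)

order = []
paint(build([5, 2, 8, 1]), viewer=0, out=order)
print(order)   # [8, 5, 2, 1] -- farthest from the viewer painted first
```

Because the tree is built once, repainting for a new viewer position only repeats the cheap traversal, which is why BSP trees suit static scenes with a moving viewpoint.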
