Chapter 5: Visible Surface Detection

Visible surface detection is the process of identifying parts of a scene that are visible from a specific viewing position, using various algorithms classified into object-space methods (OSM) and image-space methods (ISM). OSM deals directly with object definitions and requires extensive calculations for visibility, while ISM focuses on pixel positions and is resolution-dependent. Techniques such as depth-buffer and A-buffer methods are employed to manage visibility and intensity for opaque and transparent surfaces, respectively.


VISIBLE SURFACE DETECTION

It is the process of identifying those parts of a scene that are visible from a
chosen viewing position. There are numerous algorithms for efficient
identification of visible objects for different types of applications. These various
algorithms are referred to as visible-surface detection methods. Sometimes these
methods are also referred to as hidden-surface elimination methods.
The goal is to identify those parts of a scene that are visible from a chosen viewing position (visible-surface detection methods). Surfaces that are obscured by other opaque (solid) surfaces along the line of sight are invisible to the viewer and so can be eliminated (hidden-surface elimination methods).

Visible surface detection methods are broadly classified according to whether they
deal with objects or with their projected images.
Object-Space Methods (OSM):
• Algorithms that determine which parts of the objects are to be rendered, working in 3D coordinates.
• Methods based on comparing the 3D positions and dimensions of objects with respect to the viewing position.
• For N objects, may require on the order of N*N comparison operations.
• Efficient for a small number of objects but difficult to implement.
• E.g. depth-sorting and area-subdivision methods.

• Deal with object definitions directly.
• Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled as visible.
• It is a continuous method.
• Compare each object with all other objects to determine the visibility of the object parts.
• E.g. back-face detection method.
Image-Space Methods (ISM):
• Deal with the projected images of the objects, not directly with the objects.
• Visibility is decided point by point at each pixel position on the projection plane.
• It is a discrete method.
• Accuracy of the calculation is bounded by the display resolution.
• A change of display resolution requires re-calculation.
• Work on the pixels to be drawn in 2D and try to determine which object should contribute to each pixel.
• E.g. depth-buffer method, scan-line method, area-subdivision method.


Object Space Method | Image Space Method
It is an object-based method; it concentrates on the geometrical relations among the objects in the scene. | It is a pixel-based method; it is concerned with the final image, i.e. what is visible within each raster pixel.
Surface visibility is determined. | Line visibility or point visibility is determined.
It is performed at the precision with which each object is defined; no display resolution is considered. | It is performed using the resolution of the display device.
Calculations are not based on the resolution of the display, so a change of object can be easily adjusted. | Calculations are resolution-based, so a change of resolution is difficult to adjust.
These methods were developed for vector graphics systems. | These methods were developed for raster devices.
Vector displays used for object-space methods have a large address space. | Raster systems used for image-space methods have a limited address space.
Object precision is suitable for applications where accuracy is required. | Image precision is suitable for applications where speed is required.
The image can be enlarged without losing accuracy. | Enlarging the image requires a lot of recalculation.
If the number of objects in the scene increases, computation time also increases. | Complexity increases with the complexity of the visible parts of the image.
Back-Face Detection:
• A fast and simple object-space method for identifying the back faces of a polyhedron.
• It is based on performing an inside-outside test.

Methods:
1st Method:
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0 (from the plane equation).
• When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see its front from our viewing position).

• In the equation Ax + By + Cz + D = 0, if A, B, and C remain constant, then varying the value of D results in a whole family of parallel planes:
o If D > 0, the plane is behind the origin (away from the observer) and the surface is invisible.
o If D <= 0, the plane is in front of the origin (toward the observer) and the surface is visible.
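As a small illustration of this inside test (the plane coefficients and test points below are made-up values, not taken from the text), a Python sketch:

```python
# Hypothetical inside-outside check against the plane x + 2y + 4z - 20 = 0.
# A point (x, y, z) is "inside" the polygon surface when Ax + By + Cz + D < 0.

def plane_value(A, B, C, D, point):
    x, y, z = point
    return A * x + B * y + C * z + D

A, B, C, D = 1, 2, 4, -20
print(plane_value(A, B, C, D, (0, 0, 0)))   # -20 < 0 -> the origin is "inside"
print(plane_value(A, B, C, D, (10, 5, 5)))  #  20 > 0 -> this point is "outside"
```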
2nd Method:
• To simplify this test, let N be the normal vector to a polygon surface, with Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if V · N > 0. A small numeric sketch of this test follows below.
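A minimal sketch of the V · N test (the helper names and example vectors are illustrative assumptions, not from the text):

```python
# Back-face test: a polygon is a back face when V . N > 0, where N = (A, B, C)
# is the outward surface normal and V is a vector in the viewing direction.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def is_back_face(normal, view_dir):
    """normal = (A, B, C); view_dir = viewing-direction vector V."""
    return dot(view_dir, normal) > 0

# Viewing along the negative z-axis, so V = (0, 0, -1): a face whose normal has
# C < 0 points away from the viewer and is reported as a back face.
print(is_back_face((0, 0, -1), (0, 0, -1)))  # True  -> back face (culled)
print(is_back_face((0, 0, 1), (0, 0, -1)))   # False -> front face (kept)
```

This matches the figure below: with the viewing direction along the negative zv axis, any polygon whose plane parameter C is negative is a back face.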


Figure. A polygon surface with plane parameter C < 0 in a right-handed viewing coordinate system is identified as a back face when the viewing direction is along the negative zv axis.

Q. Find the visibility of the surface AED in a rectangular pyramid where an observer is at P(5, 5, 5).


This shows that the surface is invisible to the observer.

Q. Find the visibility of the surface AED in a rectangular pyramid where an observer is at P(0, 0.5, 0).
Disadvantage:
Back-face detection removes only the back faces of objects. When a scene contains overlapping objects or concave polyhedra, front faces may still be partially or completely obscured by other faces, so additional visibility tests are needed to produce a correct image.


Depth-Buffer Method (Z-Buffer Method):


• A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane.
• It is also called the z-buffer method, since depth is usually measured along the z-axis.
• Each surface of a scene is processed separately, one point at a time across the surface, and each (x, y, z) position on a polygon surface corresponds to the projection point (x, y) on the view plane.
This method requires two buffers:
• A z-buffer or depth buffer: stores a depth value for each pixel position (x, y).
• A frame buffer (refresh buffer): stores the surface-intensity or color value for each pixel position.
• As surfaces are processed, the frame buffer is used to store the color value of each pixel position and the z-buffer is used to store the depth value for each (x, y) position.

Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh
buffer is initialized to the background intensity. Each surface listed in the polygon tables
is then processed, one scan line at a time, calculating the depth (z- value) at each (x, y)
pixel position. The calculated depth is compared to the value previously stored in the
depth buffer at that position. If the calculated depth is greater than the value stored in the
depth buffer, the new depth value is stored, and the surface intensity at that position is
determined and placed in the same xy location in the refresh buffer.
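The procedure above can be summarised in a short sketch; the surface interface (covered_pixels, depth_at, color_at), buffer dimensions, and background value are assumptions made for illustration:

```python
# Depth-buffer (z-buffer) sketch following the steps described above.
# Convention as in the text: the depth buffer starts at 0 (minimum depth),
# larger depth values are closer to the viewer, and the frame (refresh)
# buffer starts at the background intensity.

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def z_buffer(surfaces):
    """surfaces: objects exposing covered_pixels(), depth_at(x, y), color_at(x, y)."""
    for surface in surfaces:
        for (x, y) in surface.covered_pixels():            # scan-convert the surface
            z = surface.depth_at(x, y)                     # depth at this pixel
            if z > depth_buffer[y][x]:                     # closer than the stored depth
                depth_buffer[y][x] = z
                frame_buffer[y][x] = surface.color_at(x, y)
```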

A drawback of the depth-buffer method is that it identifies only one visible surface at each pixel position; it deals only with opaque surfaces and cannot accumulate intensity values for transparent surfaces.


For a polygon surface with plane equation Ax + By + Cz + D = 0, the depth at any projected pixel position (x, y) is

z = (-Ax - By - D) / C ... (i)

If the depth at position (x, y) is z, then the depth z' at the next position (x + 1, y) along the scan line is

z' = z - A/C ... (ii)

The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition.
On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line. Depth values at each successive position across the scan line are then calculated from equation (ii).
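A small sketch of this incremental evaluation (the plane coefficients, scan line, and pixel range in the example are made up for illustration):

```python
# Incremental depth calculation across one scan line, using equations (i) and (ii):
# the depth changes by the constant -A/C for each unit step in x.

def scanline_depths(A, B, C, D, y, x_left, x_right):
    """Yield (x, z) along scan line y for the plane Ax + By + Cz + D = 0."""
    z = (-A * x_left - B * y - D) / C      # equation (i): depth at the left edge
    step = -A / C                          # constant increment from equation (ii)
    for x in range(x_left, x_right + 1):
        yield x, z
        z += step                          # single addition per pixel

# Example: plane x + 2y + 4z - 20 = 0 on scan line y = 2, pixels x = 0..3
for x, z in scanline_depths(1, 2, 4, -20, 2, 0, 3):
    print(x, z)    # z starts at 4.0 and decreases by 0.25 per pixel
```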


A-Buffer Method:
• The A-buffer (anti-aliased, area-averaged, accumulation buffer) is an extension of the ideas in the depth-buffer method (its name comes from the other end of the alphabet from "z-buffer").
• A drawback of the depth-buffer method is that it deals only with opaque (solid) surfaces and cannot accumulate intensity values for more than one transparent surface.
• The A-buffer method extends the depth-buffer method so that each position in the buffer can reference a linked list of surfaces. Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be anti-aliased.

Each position in the A-buffer has two fields:


• Depth field: stores a positive or negative real number.
o Positive: a single surface contributes to the pixel intensity.
o Negative: multiple surfaces contribute to the pixel intensity.
• Intensity field: stores surface-intensity information or a pointer value.
o Single surface: stores the RGB components of the surface color at that point and the percent of pixel coverage.
o Multiple surfaces: stores a pointer to a linked list of surface data.


If the depth field is >= 0 (single surface), the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as in the figure.

If the depth field is < 0 (multiple surfaces), there are multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data, as in the figure.
Data for each surface in the linked list includes: RGB intensity components, opacity parameter (percent of transparency), depth, percent of area coverage, surface identifier, other surface-rendering parameters, and a pointer to the next surface.
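One way to picture the A-buffer entry described above is the following sketch; the class and field names are illustrative assumptions, since the text only specifies a depth field and an intensity field:

```python
# Sketch of an A-buffer pixel entry: depth >= 0 means a single surface whose
# color and coverage are stored directly; depth < 0 means the entry points to
# a linked list of per-surface data.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:                        # one node of the linked list of surfaces
    rgb: tuple                            # RGB intensity components
    opacity: float                        # opacity parameter (percent of transparency)
    depth: float
    coverage: float                       # percent of area coverage
    surface_id: int
    next: Optional["SurfaceData"] = None  # pointer to the next surface in the list

@dataclass
class ABufferPixel:
    depth: float                          # >= 0: single surface; < 0: multiple surfaces
    rgb: Optional[tuple] = None           # surface color, used when depth >= 0
    coverage: Optional[float] = None      # percent of pixel coverage (single surface)
    surfaces: Optional[SurfaceData] = None  # head of the linked list (multiple surfaces)
```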

• Scan lines are processed to determine surface overlaps of pixels across the individual scan lines.
• Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries.
• The opacity factors and the percent of surface overlap are used to determine the pixel intensity as an average of the contributions from the overlapping surfaces.

Scan-Line Method:
• This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors, except that we now deal with multiple surfaces rather than just one.
• Each scan line is processed by computing the depth of every surface that it intersects; the nearest surface at each position determines the visible surface, and its intensity value is entered into the refresh buffer.
• To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from information in the edge table; it contains only the edges that cross the current scan line, sorted in order of increasing x.
• In addition, we define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside the surface. Scan lines are processed from left to right.
• At the leftmost boundary of a surface, the surface flag is turned on; at the rightmost boundary, it is turned off.
Data Structure:
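The original figure for this data structure is not reproduced here; as a rough, hedged sketch of the tables the method relies on (an edge table, a surface table with plane and intensity information, an active edge list, and per-surface flags), with all names being assumptions rather than the text's own:

```python
# Illustrative data structures for the scan-line method: an edge table bucketed
# by scan line, a surface table with plane coefficients and color, and an
# active edge list whose crossings toggle per-surface "inside" flags.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Edge:
    y_max: int          # last scan line the edge spans
    x: float            # x intersection with the current scan line
    inv_slope: float    # 1/m, used to update x from one scan line to the next
    surface_id: int     # surface this edge belongs to

@dataclass
class Surface:
    plane: tuple        # (A, B, C, D), used for the depth calculation
    color: tuple        # surface intensity (RGB)
    flag: bool = False  # on/off: currently inside this surface?

edge_table: Dict[int, List[Edge]] = {}    # edges grouped by their starting scan line
surface_table: Dict[int, Surface] = {}

def cross_edge(edge: Edge) -> None:
    """Toggle the surface flag when the scan passes one of its edges
    (the flag turns on at the leftmost boundary, off at the rightmost)."""
    s = surface_table[edge.surface_id]
    s.flag = not s.flag
```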
