UNIT 5 Visible Surface Detection Method

The document discusses methods for determining visible surfaces when rendering 3D scenes from a particular viewpoint. It describes two main approaches: object-space methods that compare objects and parts of objects, and image-space methods that decide visibility point-by-point at each pixel. Specific methods discussed include back-face detection, depth buffer/z-buffer, and scan-line algorithms. The depth buffer method stores depth and color values for each pixel, comparing depths to correctly render only the visible surfaces.


Unit 5: Visible Surface Detection Method
Nipun Thapa (Graphics)
1/19/2018
https://genuinenotes.com

Visible Surface Determination

Visible surface determination is the process of identifying those parts of a scene that are visible from a chosen viewing position. There are numerous algorithms for efficiently identifying visible objects in different types of applications. These algorithms are referred to as visible-surface detection methods; sometimes they are also referred to as hidden-surface elimination methods.
• Visible-surface detection: identify those parts of a scene that are visible from a chosen viewing position.
• Hidden-surface elimination: surfaces obscured by other opaque surfaces along the line of sight are invisible to the viewer, so they can be eliminated.

Visible Surface Determination

Visible-surface detection methods are broadly classified according to whether they deal with objects themselves or with their projected images. The two approaches are:
• Object-space methods:
  • Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labeled visible.
  • Deal with object definitions.
  • E.g. back-face detection method.
• Image-space methods:
  • Visibility is decided point by point at each pixel position on the projection plane.
  • Deal with the projected image.
  • E.g. depth-buffer method, scan-line method, area-subdivision method.
• List-priority algorithms:
  • A hybrid model that combines both object-precision and image-precision operations. Depth comparison and object splitting are done with object precision, while scan conversion (which relies on the ability of the graphics device to overwrite pixels of previously drawn objects) is done with image precision.
  • E.g. depth-sorting method, BSP-tree method.

{Note: Most visible-surface detection algorithms use image-space methods, but in some cases object-space methods are also used.}

Visible Surface Determination
(figure slides: illustrations only)

Object-Space Methods
(figure slides: illustrations only)

Image-Space Methods
(figure slides: illustrations only)

Back-Face Detection Method

• A fast and simple object-space method for identifying the back faces of a polyhedron.
• It is based on performing an inside-outside test.

TWO METHODS:
First Method:
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0 (from the plane equation).
• When an inside point is along the line of sight to the surface, the polygon must be a back face.
• In the equation Ax + By + Cz + D = 0, if A, B, and C remain constant, then varying the value of D results in a whole family of parallel planes:
  if D > 0, the plane is behind the origin (away from the observer);
  if D < 0, the plane is in front of the origin (toward the observer).

Back-Face Detection Method

Second Method:
• Let N be the normal vector to a polygon surface, with Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if V·N > 0.

Back-Face Detection Method
(figure slides: illustrations only)
Back-Face Detection Method

Let V be a vector in the viewing direction (from the eye toward the surface) and N the outward normal of a face. The dot product of these two vectors indicates visibility as follows:

Case-I (FRONT FACE):
If V·N < 0, the face is a front face and is visible.

Case-II (BACK FACE):
If V·N > 0, the face is a back face and is hidden.

Case-III:
For other objects, such as the concave polyhedron in the figure, more tests need to be carried out to determine whether there are additional faces that are totally or partly obscured by other faces.
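The V·N test above can be sketched directly. This is a minimal illustration, assuming each face's vertices are listed counter-clockwise when seen from outside (so the cross-product normal points outward); the function and variable names are illustrative, not from any particular library.

```python
# Back-face test: a face is a back face if V . N > 0,
# where V is the viewing direction and N the outward normal.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_back_face(face_vertices, view_dir):
    """Back face if V . N > 0 (V = viewing direction from the eye)."""
    v0, v1, v2 = face_vertices[:3]
    n = cross(sub(v1, v0), sub(v2, v0))   # outward normal (A, B, C)
    return dot(view_dir, n) > 0

# Viewing down the -z axis: a face whose normal points toward +z faces the eye.
front = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # normal = (0, 0, 1)
print(is_back_face(front, (0, 0, -1)))       # False: front face, visible
```

Flipping the viewing direction to (0, 0, 1) makes the same face a back face, matching Case-II.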

Concave Polyhedron
(figure slide: illustration only)

Cartesian Components of a Vector

Consider a Cartesian coordinate system OXYZ consisting of an origin O and three mutually perpendicular coordinate axes, OX, OY, and OZ (see figure).
Depth-Buffer Method (Z-Buffer Method)

• A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane.
• It is also called the z-buffer method, since depth is usually measured along the z-axis.
• Each surface of a scene is processed separately, one point at a time across the surface. Each (x, y, z) position on a polygon surface corresponds to the projection point (x, y) on the view plane.
Depth-Buffer Method (Z-Buffer Method)

This method requires two buffers:
• A z-buffer (depth buffer): stores a depth value for each pixel position (x, y).
• A frame buffer (refresh buffer): stores the surface-intensity (color) value for each pixel position.

As surfaces are processed, the frame buffer stores the color value of each pixel position and the z-buffer stores the depth value for each (x, y) position.
Depth-Buffer Method (Z-Buffer Method)

Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z-value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the stored value, the new depth value is stored, and the surface intensity at that position is determined and placed in the same (x, y) location in the refresh buffer.

A drawback of the depth-buffer method is that it can find only one visible surface at each pixel: it handles opaque (solid) surfaces but cannot accumulate intensity values for transparent surfaces.
Depth-Buffer Method (Z-Buffer Method)

Algorithm:
1. Initialize both the depth buffer and the refresh buffer for all buffer positions (x, y):
   depth(x, y) = 0
   refresh(x, y) = Ibackground
   (where Ibackground is the value of the background intensity.)
2. Process each polygon surface in the scene, one at a time:
   2.1. Calculate the depth z for each (x, y) position on the polygon.
   2.2. If z > depth(x, y), then set
        depth(x, y) = z
        refresh(x, y) = Isurf(x, y)
        (where Isurf(x, y) is the intensity value of the surface at pixel position (x, y).)
3. After all surfaces have been processed, the depth buffer holds the depth of the visible surface at each pixel and the refresh buffer holds its intensity; the image is drawn from the refresh buffer.
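The algorithm above can be sketched on a tiny raster. This is a minimal illustration following the convention used here (depth initialized to 0; a larger z means closer to the viewer); representing each surface as a pixel list with a constant depth is an assumption for brevity, since a real renderer would rasterize polygons and interpolate z.

```python
# Minimal depth-buffer (z-buffer) sketch on a 4x4 raster.
WIDTH, HEIGHT = 4, 4
depth   = [[0.0] * WIDTH for _ in range(HEIGHT)]    # z-buffer, 0 = farthest
refresh = [["bg"] * WIDTH for _ in range(HEIGHT)]   # frame buffer, background

def process_surface(pixels, z, color):
    """pixels: iterable of (x, y) covered by the surface at constant depth z."""
    for x, y in pixels:
        if z > depth[y][x]:          # new point is closer than the stored one
            depth[y][x] = z
            refresh[y][x] = color

# A far red square covering the whole raster, then a nearer blue square
# covering its top-left quadrant; blue must win where they overlap.
process_surface([(x, y) for y in range(4) for x in range(4)], 0.3, "red")
process_surface([(x, y) for y in range(2) for x in range(2)], 0.7, "blue")

print(refresh[0][0])   # blue (overlap: nearer surface visible)
print(refresh[3][3])   # red  (only the far surface covers this pixel)
```

Note that the result is independent of the order in which the surfaces are processed, which is the key practical advantage of the z-buffer over depth sorting.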
Depth-Buffer Method (Z-Buffer Method)

• After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.

The depth value for a surface position (x, y) is obtained from the plane equation Ax + By + Cz + D = 0:

z = (-Ax - By - D)/C ..........(i)

The depth z' at the next pixel along the scan line, (x + 1, y), is

z' = (-A(x + 1) - By - D)/C

or

z' = z - A/C ..........(ii)

so each successive depth along a scan line is obtained from the previous one with a single subtraction.
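Equations (i) and (ii) can be checked numerically. The plane coefficients below are arbitrary illustrative values; the point is that the incremental update matches direct evaluation of the plane equation at every pixel.

```python
# Incremental depth along a scan line: consecutive depths differ by -A/C.
A, B, C, D = 2.0, 1.0, 4.0, -8.0     # illustrative plane coefficients

def z_at(x, y):
    return (-A * x - B * y - D) / C  # equation (i)

y = 1
z = z_at(0, y)
for x in range(1, 4):
    z = z - A / C                    # equation (ii): one subtraction per pixel
    assert abs(z - z_at(x, y)) < 1e-9   # matches direct evaluation
print(round(z, 2))
```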

A-Buffer Method

• The A-buffer (anti-aliased, area-averaged, accumulation buffer) is an extension of the ideas in the depth-buffer method (the name comes from the other end of the alphabet from "z-buffer").
• A drawback of the depth-buffer method is that it deals only with opaque (solid) surfaces and cannot accumulate intensity values from more than one surface per pixel, as needed for transparency.
• The A-buffer is incorporated into the REYES ("Renders Everything You Ever Saw") 3-D rendering system.
• The A-buffer method calculates the surface intensity for multiple surfaces at each pixel position, and object edges can be antialiased.

A-Buffer Method

• The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer.

A-Buffer Method

Each pixel position in the A-buffer has two fields:
• Depth field: stores a positive or negative real number.
  • Positive: a single surface contributes to the pixel intensity.
  • Negative: multiple surfaces contribute to the pixel intensity.
• Intensity field: stores surface-intensity information or a pointer value.
  • For a single surface, it stores the RGB components of the surface color at that point and the percent of pixel coverage.
  • For multiple surfaces, it stores a pointer to a linked list of surface data, each node holding:
    • RGB intensity components
    • opacity parameter (percent of transparency)
    • depth
    • percent of area coverage
    • surface identifier
    • other surface-rendering parameters
    • pointer to the next surface (linked list)
A-Buffer Method

If depth >= 0, the data field stores the depth of that pixel position as before (SINGLE SURFACE). The number stored is the depth of the single surface overlapping the corresponding pixel area; the intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as illustrated in the first figure.

If depth < 0, the data field stores a pointer to a linked list of surface data (MULTIPLE SURFACES). A negative depth indicates multiple-surface contributions to the pixel intensity; the intensity field then stores a pointer to a linked list of surface data, as in the second figure. Data for each surface in the linked list includes: RGB intensity components, opacity parameter (percent of transparency), depth, percent of area coverage, surface identifier, other surface-rendering parameters, and a pointer to the next surface.
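The two-field pixel record described above can be sketched as a data structure. The class and field names here are illustrative, not from a real A-buffer implementation; the sketch only shows how the sign of the depth field selects between a single-surface record and a linked list.

```python
# Sketch of an A-buffer pixel: depth >= 0 means a single surface,
# depth < 0 flags a linked list of contributing surfaces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:                         # one node of the linked list
    rgb: tuple                             # RGB intensity components
    opacity: float                         # percent of transparency
    depth: float
    coverage: float                        # percent of pixel area covered
    surface_id: int
    next: Optional["SurfaceData"] = None   # pointer to next surface

@dataclass
class APixel:
    depth: float                           # >= 0: single surface; < 0: list
    data: object                           # (rgb, coverage) or list head

# Single-surface pixel: positive depth, color + coverage stored directly.
single = APixel(depth=0.4, data=((255, 0, 0), 1.0))

# Two overlapping surfaces: negative depth flags the linked list.
back  = SurfaceData((0, 0, 255), 0.0, 0.2, 1.0, surface_id=2)
front = SurfaceData((255, 0, 0), 0.5, 0.6, 1.0, surface_id=1, next=back)
multi = APixel(depth=-1.0, data=front)

print(multi.depth < 0, multi.data.next.surface_id)
```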

A-Buffer Method

• The A-buffer can be constructed using methods similar to those in the depth-buffer algorithm. Scan lines are processed to determine surface overlaps of pixels across the individual scan lines. Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries. Using the opacity factors and percent of surface overlap, we can calculate the intensity of each pixel as an average of the contributions from the overlapping surfaces.

A-Buffer Method

• The algorithm proceeds just like the depth-buffer algorithm.
• The depth and opacity values are used to determine the final color of a pixel.
• Scan lines are processed to determine surface overlaps of pixels across the individual scan lines.
• Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries.
• The opacity factors and percent of surface overlap are used to determine the pixel intensity as an average of the contributions from the overlapping surfaces.

DEPTH SORT (Painter's Algorithm)

• This method uses both object-space and image-space operations.
• The surfaces representing a 3D object are sorted in order of decreasing depth from the viewer.
• The sorted surfaces are then scan-converted in order, starting with the surface of greatest depth from the viewer.

The conceptual steps performed in the depth-sort algorithm are:
1. Sort all polygon surfaces according to the smallest (farthest) z-coordinate of each.
2. Resolve any ambiguities that arise when the z-extents of polygons overlap, splitting polygons if necessary.
3. Scan-convert each polygon in ascending order of smallest z-coordinate, i.e. farthest surface first (back to front).
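Steps 1 and 3 above can be sketched in a few lines. This is a minimal illustration under the assumption that a larger z means closer to the viewer and that each surface is given as a pixel list with a farthest-z key; step 2 (splitting overlapping polygons) is deliberately omitted.

```python
# Painter's-algorithm sketch: paint back to front so nearer
# surfaces overwrite farther ones, like layers of paint.
frame = {}   # pixel (x, y) -> color

surfaces = [
    {"color": "blue", "z_min": 0.6, "pixels": [(0, 0), (1, 0)]},  # near
    {"color": "red",  "z_min": 0.1, "pixels": [(0, 0), (2, 0)]},  # far
]

# Step 1: sort by smallest (farthest) z, so the farthest surface comes first.
for surf in sorted(surfaces, key=lambda s: s["z_min"]):
    for p in surf["pixels"]:          # step 3: scan-convert back to front
        frame[p] = surf["color"]      # each layer covers the previous one

print(frame[(0, 0)])   # blue: the nearer surface was painted last
print(frame[(2, 0)])   # red:  only the far surface covers this pixel
```

Unlike the z-buffer, correctness here depends entirely on the paint order, which is why the ambiguous cases in step 2 matter.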

DEPTH SORT (Painter's Algorithm)

• In this method, a newly displayed surface may partly or completely obscure previously displayed surfaces. Essentially, we sort the surfaces into priority order such that a surface with lower priority (lower z, farther from the viewer) can be obscured by one with higher priority (higher z, nearer to the viewer).

DEPTH SORT (Painter's Algorithm)

• This algorithm is called the "painter's algorithm" because it simulates how a painter typically produces a painting: starting with the background and then progressively adding new (nearer) objects to the canvas.
• Each layer of paint covers up the previous layer.
• Similarly, we first sort surfaces according to their distance from the view plane. The intensity values for the farthest surface are entered into the refresh buffer first. Taking each succeeding surface in turn (in decreasing depth order), we "paint" the surface intensities onto the frame buffer over the intensities of the previously processed surfaces.


DEPTH SORT (Painter's Algorithm)

Problem
• One of the major problems with this algorithm is intersecting polygon surfaces, as shown in the figure below:
  o Different polygons may have the same depth.
  o The nearest polygon could also be the farthest.
We cannot use simple depth sorting to remove the hidden surfaces in such images.

DEPTH SORT (Painter's Algorithm)

Solution
• For intersecting polygons, we can split one polygon into two or more polygons that can then be painted from back to front. This requires additional time to compute the intersections between polygons, so the algorithm becomes complex when such surfaces exist.

DEPTH SORT (Painter's Algorithm)
(figure slides: illustrations only)

Scan-Line Method

• An extension of the scan-line algorithm for filling polygon interiors: instead of filling just one surface, we deal with multiple surfaces.
• As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible.
• Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane.
• When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.

DATA STRUCTURES
A. Edge table, containing:
  • coordinate endpoints for each line in the scene
  • the inverse slope of each line
  • pointers into the polygon table to identify the surfaces bounded by each line
B. Surface table, containing:
  • coefficients of the plane equation for each surface
  • intensity information for each surface
  • pointers into the edge table
C. Active edge list:
  • keeps track of which edges are intersected by the current scan line

Note:
• The edges are sorted in order of increasing x.
• A flag is defined for each surface to indicate whether a position is inside or outside the surface.

Scan-Line Method

I. Initialize the necessary data structures:
  1. edge table containing endpoint coordinates, inverse slopes, and polygon pointers
  2. surface table containing plane coefficients and surface intensities
  3. active edge list
  4. a flag for each surface
II. For each scan line, repeat:
  1. update the active edge list
  2. determine the points of intersection and set each surface flag on or off
  3. if exactly one surface flag is on, store that surface's intensity in the refresh buffer
  4. if more than one surface is on, do a depth sort and store the intensity of the surface nearest to the view plane in the refresh buffer
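Step II above can be sketched for a single scan line. This is a simplified illustration that assumes each surface is already reduced to its [x_left, x_right) span on this scan line plus a constant depth (larger z = closer); a full implementation would derive the spans from the active edge list and the depths from the plane equations.

```python
# One scan line of the scan-line visibility algorithm (simplified).
def scan_line(surfaces, width, background="bg"):
    line = [background] * width
    for x in range(width):
        # Flags: a surface is "on" at x if its span contains x.
        on = [s for s in surfaces if s["x_left"] <= x < s["x_right"]]
        if len(on) == 1:
            line[x] = on[0]["color"]                  # no depth test needed
        elif len(on) > 1:
            nearest = max(on, key=lambda s: s["z"])   # depth comparison
            line[x] = nearest["color"]
    return line

s1 = {"x_left": 0, "x_right": 5, "z": 0.2, "color": "s1"}
s2 = {"x_left": 3, "x_right": 8, "z": 0.7, "color": "s2"}   # nearer
print(scan_line([s1, s2], 10))
# ['s1', 's1', 's1', 's2', 's2', 's2', 's2', 's2', 'bg', 'bg']
```

This mirrors the example that follows: depth comparisons happen only where more than one surface flag is on.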

Scan-Line Method

• For scan line 1:
  • The active edge list contains edges AB, BC, EH, and FG.
  • Between edges AB and BC only the flag for S1 is on, and between edges EH and FG only the flag for S2 is on.
  • No depth calculation is needed, and the corresponding surface intensities are entered into the refresh buffer.
• For scan line 2:
  • The active edge list contains edges AD, EH, BC, and FG.
  • Between edges AD and EH, only the flag for surface S1 is on.
  • Between edges EH and BC, the flags for both surfaces are on, so a depth calculation (using the plane coefficients) is needed. In this example, say S2 is nearer to the view plane than S1, so intensities for surface S2 are loaded into the refresh buffer until boundary BC is encountered.
  • Between edges BC and FG, the flag for S1 is off and the flag for S2 is on, so intensities for S2 are loaded into the refresh buffer.
• For scan line 3:
  • The active edge list shows the same coherence properties as scan line 2, so no depth calculations are needed.

Scan-Line Method

Problem:
• Dealing with cut-through surfaces and cyclic overlap is problematic when coherence properties are used.
• Solution: divide the surfaces to eliminate the overlap or cut-through.

Unit 5 Finished