Mod 1 - 3 CG Hidden Lines

The document discusses hidden line and surface removal in computer graphics, highlighting the challenges of rendering realistic images by eliminating invisible parts of objects. It outlines two main types of algorithms for visible-surface detection: object space methods, which focus on geometrical relationships among objects, and image space methods, which determine visibility at the pixel level. Additionally, it details specific algorithms like back-face elimination and the Z-buffer algorithm, comparing their functionalities and applications.


Module 1

Computer graphics
Hidden Lines/Surfaces
Visible-Surface Detection
Problem:
Given a scene and a projection,
what can we see?
Hidden line/surface removal
Figure: the same object shown as a wire frame, after hidden line removal, and after hidden surface removal.


Hidden line/surface removal
▪ One of the most challenging problems in computer graphics is the removal of hidden parts from images of solid objects.
▪ In real life, the opaque material of these objects obstructs the light rays from hidden parts and prevents us from seeing them.
▪ In computer-generated images, no such automatic elimination takes place when objects are projected onto the screen coordinate system.
▪ Instead, all parts of every object, including many parts that should be invisible, are displayed.
▪ To remove these parts and create a more realistic image, we must apply a hidden line or hidden surface algorithm to the set of objects.
Types of hidden surface detection algorithms
▪ Object space methods
▪ Image space methods

Object space methods:


▪ In this method, various parts of objects are compared; after the comparison, each surface is classified as visible, invisible, or hardly visible.
▪ These methods generally decide surface visibility. In the wireframe model, they are used to determine visible lines, so those algorithms are line based instead of surface based. The method proceeds by determining the parts of an object whose view is obstructed by other objects and drawing those hidden parts in the background color so that they do not appear.
Continued…
Image space methods: Here the positions of the various pixels are determined. These methods are used to locate visible surfaces rather than visible lines. Each point is tested for visibility: if a point is visible, the corresponding pixel is turned on; otherwise it is off. For each pixel, the object closest to the viewer that is pierced by the projector through that pixel is determined, and the pixel is drawn in the appropriate color.
Comparison of Object space and Image space methods

1. Object space: It is object based; it concentrates on the geometrical relations among the objects in the scene.
   Image space: It is a pixel-based method; it is concerned with the final image and with what is visible within each raster pixel.
2. Object space: Surface visibility is determined.
   Image space: Line visibility or point visibility is determined.
3. Object space: It is performed at the precision with which each object is defined; no display resolution is considered.
   Image space: It is performed at the resolution of the display device.
4. Object space: Calculations are not based on the resolution of the display, so changes (of resolution, for example) are easy to accommodate.
   Image space: Calculations are resolution based, so such changes are difficult to adjust to.
5. Object space: These methods were developed for vector graphics systems.
   Image space: These methods were developed for raster devices.
6. Object space: Object-based algorithms operate on continuous object data.
   Image space: Image-based algorithms operate on discrete, sampled pixel data.
7. Object space: Vector displays used with object-space methods have a large address space.
   Image space: Raster systems used with image-space methods have a limited address space.
8. Object space: Object precision is used for applications where speed is required.
   Image space: Image-space methods are suitable for applications where accuracy is required.
9. Object space: A lot of calculation is required if the image is to be enlarged.
   Image space: The image can be enlarged without losing accuracy.
10. Object space: If the number of objects in the scene increases, the computation time also increases.
    Image space: The complexity increases with the complexity of the visible parts.
Visible-Surface Detection
Two main types of algorithms:
Object space: Determine which parts of the objects are visible
Image space: Determine, per pixel, which point of an object is visible



Visible-Surface Detection
Four algorithms:
• Back-face elimination
• Depth-buffer
• Depth-sorting
• Ray-casting
But there are many others.
Algorithm Classifications
• Object-Space Methods:
– Visibility is decided by comparing objects in object-
space.
e.g. Back face elimination, painter’s algorithm.

• Image-Space Methods:
– Visibility is decided at each pixel position in the
projection plane.
e.g. Z-Buffer algorithm
Back-Face Elimination
Don't draw surfaces facing away from the viewpoint:
– Assumes objects are solid polyhedra
– Usually combined with additional methods
– Compute the polygon normal n:
  • Assume counter-clockwise vertex order
  • For a triangle (a, b, c): n = (b − a) × (c − a)
– Compute the vector v from the viewpoint to any point p on the polygon
– The polygon faces away (don't draw it) if n and v point in the same general direction → dot product n · v > 0
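
A minimal C sketch of this test (the Vec3 type and all function names here are illustrative assumptions, not part of the slides): the normal is built from three counter-clockwise vertices and compared against the viewpoint-to-polygon vector.

#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 p, Vec3 q)   { return (Vec3){ p.x - q.x, p.y - q.y, p.z - q.z }; }
static Vec3 cross(Vec3 p, Vec3 q) { return (Vec3){ p.y*q.z - p.z*q.y,
                                                   p.z*q.x - p.x*q.z,
                                                   p.x*q.y - p.y*q.x }; }
static double dot(Vec3 p, Vec3 q) { return p.x*q.x + p.y*q.y + p.z*q.z; }

/* Normal of triangle (a, b, c), assuming counter-clockwise vertex order. */
Vec3 triangle_normal(Vec3 a, Vec3 b, Vec3 c)
{
    return cross(sub(b, a), sub(c, a));
}

/* Back face: n and the viewpoint-to-polygon vector v point the same way,
   so n . v > 0 and the polygon can be skipped.                           */
bool is_back_face(Vec3 n, Vec3 viewpoint, Vec3 p_on_polygon)
{
    Vec3 v = sub(p_on_polygon, viewpoint);  /* v runs from the viewpoint to the polygon */
    return dot(n, v) > 0.0;
}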
Back Face Elimination
• A polygon is a back face if V · N > 0.
Back-face elimination
We cannot see the back faces of solid objects; hence, these can be ignored.

V · N > 0 : back face

Back-face elimination
We can see the front faces of solid objects; hence, those faces are accepted.

V · N < 0 : front face
Back-face elimination
• Object-space method
• Works fine for convex polyhedra: roughly 50% of the faces are removed
• Concave or overlapping polyhedra require additional processing
• The interior of objects cannot be viewed
Partially visible front faces
Back-Face Culling Example
Given v = (-1, 0, -1), n1 = (2, 1, 2), n2 = (-3, 1, -2):

n1 · v = (2, 1, 2) · (-1, 0, -1) = -2 + 0 - 2 = -4
So n1 · v < 0: n1 belongs to a front-facing polygon.

n2 · v = (-3, 1, -2) · (-1, 0, -1) = 3 + 0 + 2 = 5
So n2 · v > 0: n2 belongs to a back-facing polygon.
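
As a quick sanity check, the same arithmetic as a small self-contained C program (the vectors are copied from this example):

#include <stdio.h>

int main(void)
{
    double v[3]  = { -1, 0, -1 };   /* vector from the viewpoint to the polygons */
    double n1[3] = {  2, 1,  2 };   /* normal of the first polygon               */
    double n2[3] = { -3, 1, -2 };   /* normal of the second polygon              */

    double d1 = n1[0]*v[0] + n1[1]*v[1] + n1[2]*v[2];   /* -2 + 0 - 2 = -4 */
    double d2 = n2[0]*v[0] + n2[1]*v[1] + n2[2]*v[2];   /*  3 + 0 + 2 =  5 */

    printf("n1 . v = %g -> %s\n", d1, d1 > 0 ? "back facing" : "front facing");
    printf("n2 . v = %g -> %s\n", d2, d2 > 0 ? "back facing" : "front facing");
    return 0;
}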
Z-Buffering
Z-Buffer Algorithm
▪ It is also called the Depth-Buffer Algorithm.
▪ The depth-buffer algorithm is the simplest image-space algorithm.
▪ For each pixel on the display screen, we keep a record of the depth of the object within that pixel that lies closest to the observer.
▪ In addition to depth, we also record the intensity that should be displayed to show the object.
The depth buffer is an extension of the frame buffer. The depth-buffer algorithm requires two arrays, intensity and depth, each of which is indexed by pixel coordinates (x, y).
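
A minimal C sketch of these two arrays (the dimensions, the Color type and the clear value are assumptions for illustration; 1.0 stands for the far plane of a normalized view volume):

#define WIDTH  640
#define HEIGHT 480

typedef struct { unsigned char r, g, b; } Color;

static float depth[HEIGHT][WIDTH];   /* depth of the closest object found so far at each pixel */
static Color frame[HEIGHT][WIDTH];   /* intensity/color to be displayed at each pixel          */

/* Reset both buffers before rendering a frame: every pixel starts at the
   far plane (depth 1.0) and at the background color.                     */
void clear_buffers(Color background)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            depth[y][x] = 1.0f;
            frame[y][x] = background;
        }
}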
Z-Buffer Algorithm
• Most widely used image-space algorithm.
• Easy to implement, both in software and in hardware.
• Incremental computation.
Depth-Buffer Algorithm
• Image-space method
• Also called the z-buffer algorithm

Algorithm: draw the polygons and, for each pixel of the view plane, remember the color of the frontmost (visible) surface in the normalized view volume.

Stores two arrays:
1. Z depth, z(x, y)
2. Pixel data, I(x, y)
Depth-Buffer Algorithm
Fast calculation of z: use coherence along a scan line of the polygon.
For a polygon lying in the plane Ax + By + Cz + D = 0:

Hence:  z(x, y) = (-Ax - By - D) / C
Also:   z(x + 1, y) = (-A(x + 1) - By - D) / C
Thus:   z(x + 1, y) = z(x, y) - A/C
Similarly, stepping down to the next scan line: z(x, y - 1) = z(x, y) + B/C
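
A hedged C sketch of this coherence trick for a single scan line (the plane coefficients and the pixel span are assumed to come from the polygon's scan conversion; the function name is illustrative):

/* Depths along one scan line of a polygon lying in the plane
   A*x + B*y + C*z + D = 0 (with C != 0). Rather than evaluating
   z = (-A*x - B*y - D) / C at every pixel, evaluate it once at the
   left edge and then add the constant step -A/C for each step in x.
   z_out must hold at least (x_right - x_left + 1) entries.          */
void scanline_depths(double A, double B, double C, double D,
                     int x_left, int x_right, int y, double *z_out)
{
    double z    = (-A * x_left - B * y - D) / C;   /* exact depth at the left edge */
    double step = -A / C;                          /* z(x+1, y) - z(x, y) = -A/C   */

    for (int x = x_left; x <= x_right; x++) {
        z_out[x - x_left] = z;
        z += step;
    }
}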
Depth-Buffer Algorithm
+ Easy to implement
+ Hardware supported
+ Polygons can be processed in arbitrary order
+ Fast: ~ #polygons, #covered pixels

- Costs memory
- Color calculation sometimes done multiple times
- Transparency is tricky
Depth-Buffer Algorithm
var zbuf: array[N,N] of real;  { z-buffer: 0 = near, 1 = far }
    fbuf: array[N,N] of color; { frame buffer }

for all 1 <= i, j <= N do
    zbuf[i,j] := 1.0; fbuf[i,j] := BackgroundColour;

for all polygons do                      { scan conversion }
    for all covered pixels (i,j) do
        calculate depth z;
        if z < zbuf[i,j] then            { closer! }
            zbuf[i,j] := z;
            fbuf[i,j] := surfacecolor(i,j);
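
The same per-pixel update written as C, reusing the buffer layout sketched earlier (scan conversion and shading are left out; the function and parameter names are illustrative assumptions, not part of the slides):

/* Core z-buffer test for one candidate pixel: keep the fragment only if it
   is closer (smaller z) than whatever the buffers already hold there.      */
void zbuffer_write(float *zbuf, Color *fbuf, int width,
                   int x, int y, float z, Color c)
{
    int i = y * width + x;       /* buffers stored as one flat row-major array */
    if (z < zbuf[i]) {           /* closer than anything drawn so far          */
        zbuf[i] = z;             /* remember the new nearest depth             */
        fbuf[i] = c;             /* and the color to display                   */
    }
}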
Comparison
• Hardware available? Use the depth buffer, possibly in combination with back-face elimination or depth-sort for part of the scene.
• If not, choose depending on the complexity of the scene and the type of objects:
  – Simple scene, few objects: depth-sort
  – Quadratic surfaces: ray-casting
  – Otherwise: depth-buffer
• Many additional methods exist to boost performance (kD-trees, scene decomposition, etc.)
