
Computer Graphics and Animation: Unit 5

Visible Surface Determination (Hidden-Surface Removal)

When we view a picture containing non-transparent objects and surfaces, we cannot see the objects that lie behind other objects closer to the eye. These hidden surfaces must be removed to obtain a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

There are two approaches to the hidden-surface problem: the object-space method and the image-space method. The object-space method is implemented in the physical (world) coordinate system, while the image-space method is implemented in the screen coordinate system.

When we want to display a 3D object on a 2D screen, we need to identify those parts of the scene that are visible from a chosen viewing position.

Object-space Methods: Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled as visible. For each object in the scene do
Begin
1. Determine those parts of the object whose view is unobstructed by other parts of it or by any other object, with respect to the viewing specification.
2. Draw those parts in the object color.
End
 Compare each object with all other objects to determine the visibility of the object parts.
 If there are n objects in the scene, the complexity is O(n²).
 Calculations are performed at the resolution at which the objects are defined (limited only by the computation hardware). The process is unrelated to the display resolution or to individual pixels in the image, and its result is applicable to different display resolutions.
 The display is more accurate but computationally more expensive than image-space methods, because step 1 is typically more complex, e.g. due to the possibility of intersections between surfaces.
 Suitable for scenes with a small number of objects, where the objects have simple relationships with each other.
Image-space Methods (Mostly used)
Visibility is determined point by point at each pixel position on the projection plane. For each pixel in the
image do

Begin
1. Determine the object closest to the viewer that is pierced by the projector through the pixel
2. Draw the pixel in the object colour.
End
 For each pixel, examine all n objects to determine the one closest to the viewer.
 If there are p pixels in the image, complexity depends on n and p ( O(np) ).
 Accuracy of the calculation is bounded by the display resolution.
 A change of display resolution requires re-calculation.
Application of Coherence in Visible Surface Detection Methods:
Making use of the results calculated for one part of the scene or image for other nearby parts.
Coherence is the result of local similarity.
As objects have continuous spatial extent, object properties vary smoothly within a small local region in
the scene. Calculations can then be made incremental.
Types of coherence:
1. Object Coherence: The visibility of an object can often be decided by examining a circumscribing solid (which may be of simple form, e.g. a sphere or a polyhedron).
2. Face Coherence: Surface properties computed for one part of a face can be applied to adjacent parts after a small incremental modification (e.g. if the face is small, we can sometimes assume that if one part of the face is invisible to the viewer, the entire face is also invisible).
3. Edge Coherence: The visibility of an edge changes only where it crosses another edge, so if one segment of a nonintersecting edge is visible, the entire edge is also visible.
4. Scan line Coherence: Line or surface segments visible in one scan line are also likely to be visible in
adjacent scan lines. Consequently, the image of a scan line is similar to the image of adjacent scan lines.
5. Area and Span Coherence: A group of adjacent pixels in an image is often covered by the same visible
object. This coherence is based on the assumption that a small enough region of pixels will most likely lie
within a single polygon. This reduces computation effort in searching for those polygons which contain a
given screen area (region of pixels) as in some subdivision algorithms.
6. Depth Coherence: The depths of adjacent parts of the same surface are similar (a sketch of incremental depth calculation follows this list).
7. Frame Coherence: Pictures of the same scene at successive points in time are likely to be similar, despite small changes in objects and viewpoint, except near the edges of moving objects.
Most visible-surface detection methods make use of one or more of these coherence properties of a scene to take advantage of its regularities, e.g. the constant relationships that can often be established between objects and surfaces in a scene.
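To illustrate depth coherence, here is a minimal Python sketch (the function names are hypothetical) of incremental depth calculation along a scan line, assuming the polygon lies in the plane Ax + By + Cz + D = 0 with C ≠ 0: once the depth at the first pixel of a span is known, each step in x only needs the constant increment −A/C instead of a full re-evaluation.

# Depth coherence sketch: incremental depth along one scan line of a planar polygon.

def depth_at(A, B, C, D, x, y):
    """Depth of the plane Ax + By + Cz + D = 0 at pixel (x, y); assumes C != 0."""
    return (-A * x - B * y - D) / C

def scanline_depths(A, B, C, D, y, x_start, x_end):
    """Depths across one scan line, computed incrementally: z(x+1) = z(x) - A/C."""
    z = depth_at(A, B, C, D, x_start, y)
    dz = -A / C                       # constant per-pixel step along the scan line
    depths = []
    for _ in range(x_start, x_end + 1):
        depths.append(z)
        z += dz
    return depths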
Depth Buffer (Z-Buffer) Method

This method was developed by Catmull. It is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest (visible) surface.

In this method each surface is processed separately, one pixel position at a time across the surface. The depth values computed for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.

It is applied very efficiently to polygon surfaces. Surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers are used: the frame buffer and the depth buffer.

The depth buffer stores a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).

The frame buffer stores the intensity (color) value at each (x, y) position.
The z-coordinates are usually normalized to the range [0, 1]: the value 0 for the z-coordinate indicates the back clipping plane and the value 1 indicates the front clipping plane.

Algorithm

Step-1 − Set the buffer values −

Depthbuffer (x, y) = 0

Framebuffer (x, y) = background color

Step-2 − Process each polygon (One at a time)

For each projected (x, y) pixel position of a polygon, calculate the depth z.

If z > Depthbuffer (x, y)

Compute surface color,

set Depthbuffer (x, y) = z,

Framebuffer (x, y) = surfacecolor (x, y)
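The following is a minimal Python sketch of the procedure above, following the same convention (the depth buffer is initialised to 0, and a larger z means closer to the viewer, with z = 1 at the front clipping plane). The polygon object and its projected_pixels/color_at helpers are hypothetical placeholders for the projection and shading steps.

# Minimal depth-buffer (z-buffer) sketch. Buffer sizes and the polygon
# interface (projected_pixels, color_at) are hypothetical.

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

def render(polygons):
    # Step 1: initialise the buffers.
    depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
    frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

    # Step 2: process each polygon, in any order.
    for poly in polygons:
        for (x, y, z) in poly.projected_pixels():   # z normalised to [0, 1]
            if z > depth_buffer[y][x]:              # closer than what is stored
                depth_buffer[y][x] = z
                frame_buffer[y][x] = poly.color_at(x, y)

    return frame_buffer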

Advantages

• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time.

Disadvantages

• It requires large memory.


• It is a time-consuming process.
Back-Face Detection
In a solid object, there are surfaces which face the viewer (front faces) and surfaces which face away from the viewer (back faces). These back faces contribute approximately half of the total number of surfaces. Since we cannot see these surfaces anyway, we can remove them before the clipping process with a simple test, to save processing time. Each surface has a normal vector. If this vector points towards the center of projection, the surface is a front face and can be seen by the viewer. If it points away from the center of projection, it is a back face and cannot be seen by the viewer. The test is very simple: if the z component of the normal vector is positive, it is a back face; if the z component is negative, it is a front face. Note that this technique only caters well for non-overlapping convex polyhedra. For other cases, where there are concave polyhedra or overlapping objects, we still need to apply other methods to determine whether the remaining faces are partially or completely hidden by other objects (e.g. using the depth-buffer method or the depth-sort method).

Back Face Removal Algorithm


It is used to plot only the surfaces that face the camera; the surfaces on the back side are not visible. This method removes about 50% of the polygons in a scene if parallel projection is used. If perspective projection is used, more than 50% of the invisible area can be removed: the nearer the object is to the center of projection, the more back-facing polygons are removed.

The method applies to individual objects; it does not consider the interaction between various objects. Many back faces are obscured by front faces that lie closer to the viewer, and the back-face removal algorithm is used to discard such faces cheaply.

When the projection is taken, any projector ray from the center of projection through the viewing screen pierces the object at two points: one on a visible front surface and the other on an invisible back surface.

This algorithm acts as a preprocessing step for other hidden-surface algorithms. The back-face test can be expressed geometrically. Each polygon has several vertices, numbered in clockwise order. The normal N1 is generated as the cross product of any two successive edge vectors; N1 is perpendicular to the face and points outward from the polyhedron surface:

N1 = (V2 − V1) × (V3 − V2)

If N1 · P ≥ 0, the face is visible;
if N1 · P < 0, the face is invisible
(where P is a vector from the surface towards the viewer, i.e. towards the center of projection).
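A minimal Python sketch of this test follows; the vertex tuples and the view_vector argument (the vector P above) are assumptions about how the data is represented.

# Back-face test: the face normal is the cross product of two successive
# edge vectors, and the sign of its dot product with P decides visibility.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_visible(v1, v2, v3, view_vector):
    """True if the face containing v1, v2, v3 faces the viewer."""
    n = cross(sub(v2, v1), sub(v3, v2))   # N1 = (V2 - V1) x (V3 - V2)
    return dot(n, view_vector) >= 0       # N1 . P >= 0  =>  visible (front face)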
Algorithm for a left-handed system:
1) Compute the normal N = (A, B, C) for every face of the object.
2) If the Z component C > 0, the face is a back face: do not draw it;
else it is a front face: draw it.
The back-face detection method is very simple. For a left-handed system, if the Z component of the normal vector is positive, the face is a back face; if the Z component is negative, it is a front face.
Algorithm for a right-handed system:
1) Compute the normal N = (A, B, C) for every face of the object.
2) If the Z component C < 0, the face is a back face: do not draw it;
else it is a front face: draw it.
Thus, for a right-handed system, if the Z component of the normal vector is negative, the face is a back face; if the Z component is positive, it is a front face.
Back-face detection can identify all the hidden surfaces in a scene that contains only non-overlapping convex polyhedra.
Recalling the polygon surface (plane) equation Ax + By + Cz + D = 0, a point is behind the surface when Ax + By + Cz + D < 0.
While determining whether a surface is a back face or a front face, we must also consider the viewing direction. The normal of the surface is given by:
N = (A, B, C)
A polygon is a back face if Vview · N > 0. It should be kept in mind, however, that after the viewing transformation is applied, the viewer is looking down the negative Z-axis. Therefore, a polygon is a back face if:
(0, 0, −1) · N > 0
that is, if C < 0.
The viewer is also unable to see a surface with C = 0, so we identify a polygon surface as a back face if C ≤ 0.
Considering case (a):
V · N = |V||N| cos(angle)
If 0 ≤ angle < 90°, then cos(angle) > 0 and V · N > 0. Hence, back face.
Considering case (b):
V · N = |V||N| cos(angle)
If 90° < angle ≤ 180°, then cos(angle) < 0 and V · N < 0. Hence, front face.

Limitations:
1) This method works fine for convex polyhedra, but not necessarily for concave polyhedra.
2) This method can only be used on solid objects modeled as a polygon mesh.
The Painter's Algorithm
It comes under the category of list-priority algorithms and is also called the depth-sort algorithm. In this algorithm, the objects are ordered by visibility: if the objects are rendered in the correct order, the correct picture results.

Objects are sorted by z coordinate and rendered in order of decreasing distance from the view plane, so that farther objects are drawn first and nearer objects are drawn over them: the pixels of a nearer object overwrite the pixels of farther objects. If the z extents of two objects do not overlap, the correct order can be determined from the z values alone, as shown in figure (a).

If objects overlap each other in z, as in figure (b), the correct order can be maintained by splitting the objects.

The depth-sort algorithm, or painter's algorithm, was developed by Newell, Newell, and Sancha. It is called the painter's algorithm because the frame buffer is painted in decreasing order of distance from the view plane: the polygons at greater distance are painted first.
The concept is taken from the way a painter or artist works. When the painter makes a painting, he first paints the entire canvas with the background color. Then more distant objects, such as mountains and trees, are added. Finally, the nearer (foreground) objects are added to the picture. We use a similar approach: we sort the surfaces according to their z values and scan convert them into the refresh buffer in that order.
Steps performed in depth sort:
1. Sort all surfaces according to their distances from the view point.
2. Render the surfaces to the image buffer one at a time starting from the farthest surface.
3. Surfaces close to the view point will replace those which are far away.

4. After all surfaces have been processed, the image buffer stores the final image.
The basic idea of this method is simple. When there are only a few objects in the scene, this method can be
very fast. However, as the number of objects increases, the sorting process can become very complex and
time consuming.
Algorithm
Step1: Start Algorithm

Step2: Sort all polygons by z value, keeping the largest z value (the farthest polygon) first.

Step3: Scan convert the polygons in this order.


The following tests are applied to a polygon A that overlaps another polygon B in depth:
1. Is A behind and non-overlapping with B in the z dimension, as shown in fig (a)?
2. Is A behind B in z, with no overlap in x or y, as shown in fig (b)?
3. Is A behind B in z and totally outside B with respect to the view plane, as shown in fig (c)?
4. Is A behind B in z and B totally inside A with respect to the view plane, as shown in fig (d)?

The success of any one of these tests for a single overlapping polygon allows the farther polygon to be painted.
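The following is a minimal Python sketch of the overall back-to-front painting idea (the splitting and overlap tests above are omitted). The polygon attributes max_depth and draw are hypothetical, and z is assumed here to measure distance from the view plane, so a larger z means farther away.

# Painter's (depth-sort) sketch: paint surfaces from farthest to nearest so
# that nearer surfaces overwrite farther ones in the frame buffer.

def painters_algorithm(polygons, frame_buffer):
    for poly in sorted(polygons, key=lambda p: p.max_depth, reverse=True):
        poly.draw(frame_buffer)   # scan convert; overwrites pixels of farther polygons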

Scan-Line Method

In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to
determine which are visible. Across each scan line, depth calculations are made for each overlapping surface
to determine which is nearest to the view plane. When the visible surface has been determined, the intensity
value for that position is entered into the image buffer.
For each scan line do

Begin

For each pixel (x,y) along the scan line do ------------ Step 1

Begin

z_buffer(x,y) = maximum depth

Image_buffer(x,y) = background_color

End

For each polygon in the scene do ----------- Step 2

Begin

For each pixel (x,y) along the scan line that is covered by the polygon do

Begin

2a. Compute the depth or z of the polygon at pixel location (x,y).

2b. If z < z_buffer(x,y) then

Set z_buffer(x,y) = z

Set Image_buffer(x,y) = polygon's colour

End

End

End
- Step 2 is not efficient because not all polygons necessarily intersect with the scan line.

- Depth calculation in 2a is not needed if only 1 polygon in the scene is mapped onto a segment of the scan
line.

- To speed up the process:

Recall the basic idea of polygon filling: For each scan line crossing a polygon, this algorithm locates the
intersection points of the scan line with the polygon edges. These intersection points are sorted from left
to right. Then, we fill the pixels between each intersection pair.
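As a minimal Python sketch of this filling step (the vertex representation is assumed to be a list of (x, y) tuples; horizontal edges and vertex-crossing special cases are ignored for brevity):

def scanline_spans(vertices, y):
    """Return the [x_left, x_right] spans of a polygon on scan line y."""
    xs = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):       # edge crosses the scan line
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))          # intersection x
    xs.sort()                                      # sort intersections left to right
    return list(zip(xs[0::2], xs[1::2]))           # fill between each pair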

With a similar idea, we fill every scan line span by span. When polygons overlap on a scan line, we perform depth calculations at their edges to determine which polygon should be visible over which span. Any number of overlapping polygon surfaces can be processed with this method, and depth calculations are performed only where polygons overlap. We can also take advantage of coherence along the scan lines as we pass from one scan line to the next: if there is no change in the pattern of intersections of polygon edges with successive scan lines, it is not necessary to redo the depth calculations. This works only if surfaces do not cut through or otherwise cyclically overlap each other; if cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.

• The algorithm is applicable to non-polygonal surfaces (use of surface and active surface table, z value is
computed from surface representation).
• Memory requirement is less than that for depth-buffer method.
• A lot of sorting is done on x-y coordinates and on depths.
COLOR CONCEPTS
A color can be made lighter by adding white or darker by adding black. Graphics packages therefore provide the user with color palettes based on two or more color models.
NOTE:
Hue: the actual color.
Saturation: indicates the amount of grey in a color.
Brightness: indicates how much black (or white) is mixed into a color; it is, for example, what distinguishes bread from burnt toast.
Types of Color Models

• RGB, e.g. monitors
• YIQ, e.g. TV
• CMY, e.g. printers and plotters
• HSV, e.g. color palettes

RGB COLOR MODEL


• This color model is represented by a unit cube with one corner located at the origin of a 3-D color coordinate system.
• The origin represents black and the vertex with coordinates (1, 1, 1) is white. Vertices of the cube on the axes represent the primary colors, and the remaining vertices represent the complementary color of each primary color.
• (Complementary colors are pairs of colors which, when combined, cancel each other out.)
• The RGB color scheme is an additive model: intensities of the primary colors are added to produce other colors.
• Each color point within the bounds of the cube can be represented as the triple (R, G, B), where the values of R, G, and B are assigned in the range from 0 to 1.
For example, since the model is additive, magenta is obtained by adding red (1, 0, 0) and blue (0, 0, 1) to get (1, 0, 1), the coordinate of magenta. White at (1, 1, 1) is the sum of red (1, 0, 0), green (0, 1, 0), and blue (0, 0, 1). Shades of gray are represented along the main diagonal of the cube, from the origin (black) to the white vertex; each point along this diagonal has an equal contribution from each primary color.
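A small Python illustration of this additive mixing (the helper name is hypothetical; values are clamped to the unit cube):

def add_rgb(c1, c2):
    """Add two RGB triples component-wise, clamped to the [0, 1] cube."""
    return tuple(min(1.0, a + b) for a, b in zip(c1, c2))

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)

magenta = add_rgb(red, blue)                    # (1, 0, 1)
white = add_rgb(add_rgb(red, green), blue)      # (1, 1, 1)
gray = (0.5, 0.5, 0.5)                          # on the main diagonal of the cube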

YIQ COLOR MODEL


• In the YIQ color model, Luminance (brightness) information is contained in the Y parameter, while
chromaticity information (hue and purity/saturation) is incorporated into the I and Q parameters.
• A combination of red, green, and blue intensities is chosen for the Y parameter.
• Black-and-white television monitors use only the Y signal.
• Parameter I contains orange-cyan hue information.
• Parameter Q carries green-magenta hue information.
• An RGB signal can be converted to a television signal using an NTSC encoder, which converts RGB values to YIQ values.
The conversion from RGB values to YIQ values is accomplished with the transformation
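The transformation matrix itself is not reproduced here; the following Python sketch uses the standard NTSC coefficients (rounded values, which vary slightly between sources):

def rgb_to_yiq(r, g, b):
    """Convert RGB (each in [0, 1]) to YIQ using approximate NTSC coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance
    i = 0.596 * r - 0.275 * g - 0.321 * b    # orange-cyan chromaticity axis
    q = 0.212 * r - 0.523 * g + 0.311 * b    # green-magenta chromaticity axis
    return y, i, q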

An NTSC video signal can be converted to an RGB signal using an NTSC decoder.
CMY COLOR MODEL
The primary colors are cyan, magenta, and yellow (CMY).
• It is a subtractive model: the CMY model defines color with a subtractive process inside a unit cube.

In the CMY model, point (1, 1, 1) represents black, because all components of the incident light are
subtracted.
• The origin represents white light.
• Equal amounts of each of the primary colors produce grays, along the main diagonal of the cube.
• A combination of cyan and magenta ink produces blue light, because the red and green components of
the incident light are absorbed.
• Other color combinations are obtained by a similar subtractive process
• The conversion from an RGB representation to a CMY representation:

[C]   [1]   [R]
[M] = [1] − [G]
[Y]   [1]   [B]

• The conversion from a CMY representation to RGB:

[R]   [1]   [C]
[G] = [1] − [M]
[B]   [1]   [Y]
where black is represented in the CMY system as the unit column vector (1, 1, 1).
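A minimal Python sketch of these component-wise conversions (values assumed to lie in [0, 1]):

def rgb_to_cmy(r, g, b):
    """(C, M, Y) = (1, 1, 1) - (R, G, B)."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """(R, G, B) = (1, 1, 1) - (C, M, Y)."""
    return 1 - c, 1 - m, 1 - y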
The printing process used with the CMY model generates a color point with a collection of four ink dots, one for each of the primary colors and one for black.
A black dot is included because the combination of cyan, magenta, and yellow inks in practice produces a dark gray instead of black.
