CG Unit5
When we view a picture containing non-transparent objects and surfaces, we cannot see objects that lie behind other objects closer to the eye. These hidden surfaces must be removed to obtain a realistic screen image; the identification and removal of such surfaces is called the hidden-surface problem.
There are two approaches to removing hidden surfaces: the object-space method and the image-space method. The object-space method works in the physical (world) coordinate system, while the image-space method works in the screen coordinate system.
When we want to display a 3D object on a 2D screen, we need to identify those parts of the scene that are visible from a chosen viewing position.
Object-space Methods: Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled visible:
For each object in the scene do
Begin
1. Determine those parts of the object whose view is unobstructed by other parts of it or by any other object, with respect to the viewing specification.
2. Draw those parts in the object color.
End
Compare each object with all other objects to determine the visibility of the object parts. If there are n objects in the scene, the complexity is O(n²).
Calculations are performed at the resolution at which the objects are defined (limited only by the computation hardware). The process is unrelated to the display resolution or to individual pixels in the image, so its result is applicable to different display resolutions.
The display is more accurate but computationally more expensive than image-space methods, because step 1 is typically more complex, e.g. due to the possibility of intersections between surfaces.
These methods are suitable for scenes with a small number of objects whose relationships to each other are simple.
Image-space Methods (Mostly used)
Visibility is determined point by point at each pixel position on the projection plane.
For each pixel in the image do
Begin
1. Determine the object closest to the viewer that is pierced by the projector through the pixel.
2. Draw the pixel in the object color.
End
For each pixel, all n objects are examined to determine the one closest to the viewer. If there are p pixels in the image, the complexity depends on both n and p: O(np).
Accuracy of the calculation is bounded by the display resolution, and a change of display resolution requires recalculation.
Application of Coherence in Visible Surface Detection Methods:
Coherence means making use of the results calculated for one part of the scene or image for other nearby parts. Coherence is the result of local similarity: because objects have continuous spatial extent, object properties vary smoothly within a small local region of the scene, so calculations can be made incrementally.
Types of coherence:
1. Object Coherence: The visibility of an object can often be decided by examining a circumscribing solid (which may be of simple form, e.g. a sphere or a polyhedron).
2. Face Coherence: Surface properties computed for one part of a face can be applied to adjacent parts after small incremental modification (e.g. if the face is small, we can sometimes assume that if one part of the face is invisible to the viewer, the entire face is also invisible).
3. Edge Coherence: The visibility of an edge changes only where it crosses another edge, so if one segment of a nonintersecting edge is visible, the entire edge is visible.
4. Scan line Coherence: Line or surface segments visible in one scan line are also likely to be visible in
adjacent scan lines. Consequently, the image of a scan line is similar to the image of adjacent scan lines.
5. Area and Span Coherence: A group of adjacent pixels in an image is often covered by the same visible
object. This coherence is based on the assumption that a small enough region of pixels will most likely lie
within a single polygon. This reduces computation effort in searching for those polygons which contain a
given screen area (region of pixels) as in some subdivision algorithms.
6. Depth Coherence: The depths of adjacent parts of the same surface are similar.
7. Frame Coherence: Pictures of the same scene at successive points in time are likely to be similar, despite small changes in objects and viewpoint, except near the edges of moving objects.
Most visible-surface detection methods make use of one or more of these coherence properties to take advantage of regularities in a scene, e.g. constant relationships that can often be established between the objects and surfaces in a scene.
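As an illustration of depth coherence, the depth of a planar surface can be updated incrementally along a scan line instead of being recomputed from scratch at every pixel. The sketch below uses made-up plane coefficients A, B, C, D for the surface equation Ax + By + Cz + D = 0:

```python
# Depth-coherence sketch: the plane coefficients below are hypothetical,
# not taken from the notes.
A, B, C, D = 2.0, 1.0, 4.0, -20.0

def depth(x, y):
    """Depth solved directly from the plane equation A*x + B*y + C*z + D = 0."""
    return -(A * x + B * y + D) / C

# Stepping from (x, y) to (x + 1, y) changes z by the constant -A/C,
# so one subtraction per pixel replaces a full re-evaluation.
z = depth(0, 3)
for x in range(1, 5):
    z -= A / C
    assert abs(z - depth(x, 3)) < 1e-9   # incremental value matches direct value
```

The same idea applies when stepping from one scan line to the next, where z changes by the constant -B/C.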
Depth Buffer (Z-Buffer) Method
This method was developed by Catmull. It is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest (visible) surface.
In this method each surface is processed separately, one pixel position at a time across the surface. The depth values computed for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, the frame buffer and the depth buffer, are used.
The depth buffer stores a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1). The frame buffer stores the intensity or color value at each position (x, y).
The z-coordinates are usually normalized to the range [0, 1]: a depth of 0 indicates the back clipping plane and a depth of 1 indicates the front clipping plane, so under this convention a larger depth value means a closer surface.
Algorithm
Step 1: Initialize the buffers for all (x, y) positions:
    depthbuffer(x, y) = 0
    framebuffer(x, y) = background color
Step 2: For each polygon surface, and for each projected pixel position (x, y) on that surface, calculate the depth z at (x, y).
Step 3: If z > depthbuffer(x, y), the surface is the closest one found so far at this pixel, so set:
    depthbuffer(x, y) = z
    framebuffer(x, y) = surface color at (x, y)
Step 4: After all surfaces have been processed, the frame buffer holds the final visible-surface image.
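The algorithm can be sketched in code as below. The surfaces here are hypothetical per-pixel depth samples (not from the notes), following the convention that a larger normalized depth means a closer surface:

```python
# Minimal z-buffer sketch; surface data is made up for illustration.
WIDTH, HEIGHT = 4, 3
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]          # 0 = back plane
frame_buffer = [["background"] * WIDTH for _ in range(HEIGHT)]

# Each surface: (color, {(x, y): depth at that pixel}).
surfaces = [
    ("red",  {(1, 1): 0.4, (2, 1): 0.4}),
    ("blue", {(1, 1): 0.7}),             # closer than red at (1, 1)
]

for color, pixels in surfaces:           # surfaces may be processed in any order
    for (x, y), z in pixels.items():
        if z > depth_buffer[y][x]:       # closer than what is already stored
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color

print(frame_buffer[1][1])   # blue (it overwrote red at this pixel)
print(frame_buffer[1][2])   # red  (no closer surface covers this pixel)
```

Note that the result is independent of the order in which the surfaces are processed, which is the main practical advantage of the method.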
Advantages
• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time.
Disadvantages
It applies to individual objects; it does not consider the interaction between various objects.
Back-Face Detection
Many polygons in a scene are obscured because they face away from the viewer and can never be visible, so a back-face removal algorithm is used to discard them. When the projection is taken, any projector ray from the center of projection through the viewing screen pierces the object at two points: one on a visible front surface and one on a hidden back surface. The back-face algorithm therefore acts as a preprocessing step for other hidden-surface algorithms.
The back-face test can be represented geometrically. Each polygon has several vertices, numbered in clockwise order. The normal N is generated as the cross product of two successive edge vectors; N is perpendicular to the face and points outward from the polyhedron surface:
N = (V2 − V1) × (V3 − V2)
If N · P ≥ 0, the face is visible;
if N · P < 0, the face is invisible,
where P is a vector from the face toward the viewer.
Algorithm for a left-handed system:
1) Compute N = (A, B, C) for every face of the object.
2) If C > 0 (the z component of N)
then it is a back face: do not draw it
else
it is a front face: draw it
The back-face detection method is very simple. For a left-handed system, if the z component of the normal vector is positive, the face is a back face; if the z component is negative, it is a front face.
Algorithm for a right-handed system:
1) Compute N = (A, B, C) for every face of the object.
2) If C < 0 (the z component of N)
then it is a back face: do not draw it
else
it is a front face: draw it
Thus, for a right-handed system, if the z component of the normal vector is negative, the face is a back face; if the z component is positive, it is a front face.
Back-face detection can identify all the hidden surfaces in a scene that contain non-overlapping convex
polyhedra.
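The right-handed test can be sketched as follows; the normal is formed from two successive edge vectors as described above, and the triangle vertices are made-up examples:

```python
# Back-face test sketch for a right-handed system (vertices are hypothetical).

def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(a, b):
    """Component-wise difference a - b."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def is_back_face(v1, v2, v3):
    """Right-handed system: back face when the normal's z component C < 0."""
    n = cross(sub(v2, v1), sub(v3, v2))   # N = (V2 - V1) x (V3 - V2)
    return n[2] < 0

# Counter-clockwise triangle whose normal points toward the viewer (+z):
print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # False (front face)
# The same triangle with reversed winding faces away from the viewer:
print(is_back_face((0, 1, 0), (1, 0, 0), (0, 0, 0)))  # True (back face)
```

For a left-handed system the comparison would simply flip to `n[2] > 0`.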
Recalling the polygon surface equation:
Ax + By + Cz + D = 0
While determining whether a surface is a back face or a front face, the viewing direction must also be considered. The normal of the surface is given by:
N = (A, B, C)
A polygon is a back face if V_view · N > 0. Keep in mind, however, that after the viewing transformation has been applied, the viewer is looking down the negative z-axis. Therefore, a polygon is a back face if:
(0, 0, −1) · N > 0, i.e. if C < 0.
The viewer is also unable to see any surface with C = 0, so a polygon surface is identified as a back face if C ≤ 0.
Considering case (a):
V · N = |V||N| cos(angle)
If 0 ≤ angle < 90°, then cos(angle) > 0 and V · N > 0. Hence, it is a back face.
Considering case (b):
V · N = |V||N| cos(angle)
If 90° < angle ≤ 180°, then cos(angle) < 0 and V · N < 0. Hence, it is a front face.
Limitations:
1) This method works fine for convex polyhedra, but not necessarily for concave polyhedra.
2) This method can only be used on solid objects modeled as a polygon mesh.
The Painter's Algorithm
It comes under the category of list-priority algorithms and is also called the depth-sort algorithm. In this algorithm an ordering of the objects by visibility is established; if the objects are rendered in that order, the correct picture results.
Objects are arranged in order of z coordinate and rendered farthest first, so nearer objects obscure farther ones: the pixels of a nearer object overwrite the pixels of farther objects. If the z extents of two objects do not overlap, the correct order can be determined directly from the z values, as shown in figure (a). If objects overlap in z, as in figure (b), the correct order can still be maintained by splitting the objects.
The depth-sort (painter's) algorithm was developed by Newell and Sancha. It is called the painter's algorithm because the frame buffer is painted in decreasing order of distance from the view plane: the polygons at greater distance are painted first.
The concept is taken from the way a painter or artist works. When making a painting, the painter first covers the entire canvas with the background color; then distant objects such as mountains and trees are added; then the nearer, foreground objects are added to the picture. We use a similar approach: we sort the surfaces according to their z values and paint them into the refresh (frame) buffer in that order.
Steps performed in-depth sort
1. Sort all surfaces according to their distances from the view point.
2. Render the surfaces to the image buffer one at a time starting from the farthest surface.
3. Surfaces close to the view point will replace those which are far away.
4. After all surfaces have been processed, the image buffer stores the final image.
The basic idea of this method is simple. When there are only a few objects in the scene, this method can be
very fast. However, as the number of objects increases, the sorting process can become very complex and
time consuming.
Algorithm
Step 1: Start.
Step 2: Sort all polygons by z value, keeping the largest (farthest) z first.
Step 3: Take the farthest polygon F and test it against every polygon Q whose z extent overlaps F's. The standard tests, in order of increasing cost, check that: the x extents of F and Q do not overlap; the y extents do not overlap; F lies entirely on the far side of Q's plane from the viewpoint; Q lies entirely on the near side of F's plane; or the screen projections of F and Q do not overlap.
Step 4: The success of any test with each single overlapping polygon allows F to be painted; otherwise the polygons must be reordered or split and the tests repeated.
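Setting the overlap tests aside, the core back-to-front painting loop can be sketched as below; the polygon list and its depth values are hypothetical:

```python
# Painter's algorithm sketch: sort by distance from the view plane and
# paint back to front, so nearer polygons overwrite farther ones.
# The polygon data here is made up for illustration.
polygons = [
    {"color": "tree",     "depth": 2.0},   # nearest
    {"color": "sky",      "depth": 9.0},   # farthest
    {"color": "mountain", "depth": 6.0},
]

frame_buffer = []   # records the paint order for illustration
for poly in sorted(polygons, key=lambda p: p["depth"], reverse=True):
    frame_buffer.append(poly["color"])     # farthest painted first

print(frame_buffer)   # ['sky', 'mountain', 'tree']
```

In a real renderer each polygon would be scan-converted into the frame buffer rather than appended to a list, but the ordering logic is the same.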
Scan-Line Method
In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to
determine which are visible. Across each scan line, depth calculations are made for each overlapping surface
to determine which is nearest to the view plane. When the visible surface has been determined, the intensity
value for that position is entered into the image buffer.
For each scan line do
Begin
    For each pixel (x, y) along the scan line do ------------ Step 1
    Begin
        z_buffer(x, y) = 0
        Image_buffer(x, y) = background_color
    End
    For each polygon in the scene do ------------ Step 2
    Begin
        For each pixel (x, y) along the scan line that is covered by the polygon do
        Begin
            2a. Compute the depth z of the polygon at (x, y)
            2b. If z > z_buffer(x, y) then
                Set z_buffer(x, y) = z
                Set Image_buffer(x, y) = polygon color
        End
    End
End
- Step 2 is not efficient because not all polygons necessarily intersect the scan line.
- The depth calculation in 2a is not needed if only one polygon in the scene maps onto a segment of the scan line.
Recall the basic idea of polygon filling: For each scan line crossing a polygon, this algorithm locates the
intersection points of the scan line with the polygon edges. These intersection points are sorted from left
to right. Then, we fill the pixels between each intersection pair.
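The intersection step can be sketched as follows for a made-up rectangular polygon; each edge that spans the scan line contributes one x intersection, and the sorted list is then filled in pairs:

```python
# Scan-line fill sketch: find where a scan line at height y crosses the
# polygon's edges.  The polygon below is hypothetical.

def scanline_intersections(vertices, y):
    """Sorted x coordinates where the scan line at height y crosses the edges."""
    xs = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # Half-open test (y1 <= y < y2 or y2 <= y < y1) counts each vertex
        # crossing exactly once and skips horizontal edges.
        if (y1 <= y < y2) or (y2 <= y < y1):
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    return sorted(xs)

square = [(1, 1), (6, 1), (6, 5), (1, 5)]
xs = scanline_intersections(square, 3)
print(xs)   # [1.0, 6.0] -> fill the pixels between this pair
```

For visible-surface determination, each such span would additionally carry the depth of the polygon so overlapping spans can be resolved at their edges.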
Using a similar idea, we fill every scan line span by span. When polygons overlap on a scan line, we perform depth calculations at their edges to determine which polygon should be visible over which span. Any number of overlapping polygon surfaces can be processed with this method, and depth calculations are performed only where polygons overlap. We can also take advantage of coherence along the scan lines as we pass from one scan line to the next: if the pattern of intersections of polygon edges with successive scan lines does not change, no new depth calculations are necessary. This works only if surfaces do not cut through or otherwise cyclically overlap each other; if cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
• The algorithm is applicable to non-polygonal surfaces (using a surface table and an active-surface table; the z value is computed from the surface representation).
• The memory requirement is less than that of the depth-buffer method.
• A lot of sorting is done on x-y coordinates and on depths.
COLOR CONCEPTS
A color can be made lighter by adding white, or darker by adding black. Graphics packages therefore provide the user with color palettes based on two or more color models.
NOTE:
Hue: the actual color.
Saturation: the amount of grey mixed into a color.
Brightness: tells the difference between plain bread and burnt toast, i.e. how much black (or white) is mixed into a color.
Types of Color Models
• RGB — e.g. a monitor
• HSV — e.g. a color palette
An NTSC video signal can be converted to an RGB signal using an NTSC decoder.
CMY COLOR MODEL
The primary colors are Cyan, Magenta and Yellow (CMY).
• It is a subtractive model: the CMY model defines colors with a subtractive process inside a unit cube.
In the CMY model, point (1, 1, 1) represents black, because all components of the incident light are
subtracted.
• The origin represents white light.
• Equal amounts of each of the primary colors produce grays, along the main diagonal of the cube.
• A combination of cyan and magenta ink produces blue light, because the red and green components of
the incident light are absorbed.
• Other color combinations are obtained by a similar subtractive process
• The conversion from an RGB representation to a CMY representation:
    [C]   [1]   [R]
    [M] = [1] − [G]
    [Y]   [1]   [B]
• The conversion from a CMY representation to RGB:
    [R]   [1]   [C]
    [G] = [1] − [M]
    [B]   [1]   [Y]
where black is represented in the CMY system as the unit column vector.
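Both conversions are simple per-channel subtractions from 1, with channel values normalized to [0, 1]; a minimal sketch:

```python
# RGB <-> CMY conversion sketch: each channel is subtracted from 1,
# with all values normalized to the range [0, 1].

def rgb_to_cmy(r, g, b):
    """(C, M, Y) = (1, 1, 1) - (R, G, B)."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """(R, G, B) = (1, 1, 1) - (C, M, Y)."""
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # red   -> (0.0, 1.0, 1.0)
print(rgb_to_cmy(0.0, 0.0, 0.0))   # black -> (1.0, 1.0, 1.0)
```

Note that RGB black maps to CMY (1, 1, 1), matching the unit cube description above.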
The printing process used with the CMY model generates a color point with a collection of four ink dots: one dot for each of the primary colors and one dot for black. A black dot is included because the combination of cyan, magenta, and yellow inks produces gray instead of black.
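One common simple way to compute that black (K) component, not covered in these notes, is to pull out the gray component shared by C, M and Y and print it as black ink; a hedged sketch:

```python
# A common simple CMY -> CMYK conversion (an assumption, not from the
# notes): extract the shared gray component as black ink K, then rescale
# the remaining C, M, Y.

def cmy_to_cmyk(c, m, y):
    k = min(c, m, y)                 # black ink replaces the common component
    if k == 1.0:                     # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(cmy_to_cmyk(1.0, 1.0, 1.0))   # -> (0.0, 0.0, 0.0, 1.0): black ink only
```

Real printer drivers use more elaborate under-color removal curves, but this illustrates why the fourth (black) dot exists.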