Unit 4

The document discusses various concepts related to light, color, shading, and hidden surface removal in computer graphics. It covers chromaticity, shading algorithms such as constant intensity, Gouraud, and Phong shading, as well as halftone shading and hidden surface detection algorithms like the Z-buffer and Painter algorithms. Additionally, it explains the challenges of hidden surface removal and presents different methods for achieving realistic image rendering.


UNIT 4: Light, Color, Shading and Hidden Surfaces

CIE Chromaticity
1. The chromaticity diagram represents the spectral colours and their mixtures in terms of the amounts of the primary colours (red, green and blue) they contain.
2. Chromaticity comprises two parameters, hue and saturation; taken together, hue and saturation are known as chrominance.
3. The saturated pure spectral colours are represented along the curved perimeter of the diagram, whose interior contains mixtures of the three primary colours: red, green and blue. The point C marked in the chromaticity diagram represents a particular white light used as a reference.
Shading Algorithms:
1. Shading refers to the application of an illumination model at the pixel positions or polygon surfaces of graphics objects.
2. A shading model is used to compute the intensities and colours to display for a surface.
3. The shading model has two primary ingredients: the properties of the surface and the properties of the illumination falling on it.
4. The principal surface property is its reflectance, which determines how much of the incident light is reflected.
5. If a surface has different reflectance for light of different wavelengths, it appears coloured, because it reflects some wavelengths more strongly than others.

The simplest form of shading considers only diffuse illumination:

Epd = Rp Id

where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse illumination falling on the entire scene, and Rp is the reflectance coefficient at P.
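As a minimal illustration of the formula above (the function and variable names simply mirror the symbols Epd, Rp and Id; they are not taken from any particular graphics library):

```python
def diffuse_energy(reflectance, diffuse_illumination):
    """E_pd = R_p * I_d: energy leaving point P under uniform diffuse illumination."""
    return reflectance * diffuse_illumination

# A surface that reflects 40% of incident light, under illumination 0.9:
e_pd = diffuse_energy(0.4, 0.9)  # approximately 0.36
```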
Constant Intensity Shading
1. A fast and straightforward method for rendering an object with polygon surfaces is constant intensity shading, also called flat shading.
2. In this method, a single intensity is calculated for each polygon; all points over the surface of the polygon are then displayed with the same intensity value.
3. Constant shading can be useful for quickly displaying the general appearance of a curved surface, as shown in fig:
Gouraud shading
1. This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud Shading, renders a polygon surface by linearly interpolating intensity values across the surface.
2. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.
3. Each polygon surface is rendered with Gouraud Shading by performing the following calculations:
a. Determine the average unit normal vector at each polygon vertex.
b. Apply an illumination model to each vertex to determine the vertex intensity.
c. Linearly interpolate the vertex intensities over the surface of the polygon.
4. At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in fig:
5. Thus, for any vertex position V, we acquire the unit vertex normal with the calculation NV = (N1 + N2 + … + Nn) / |N1 + N2 + … + Nn|, where N1 … Nn are the normals of the polygons sharing V.

● Apply an illumination model to each vertex to determine the vertex intensity.
● Interpolate intensities along the polygon edges:

1. For each scan line, the intensities at the intersections of the scan line with the polygon edges are linearly interpolated from the intensities at the edge endpoints.
2. For example, in fig, the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of the scan line.
3. Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and 3. Once these bounding intensities are established for a scan line, the intensity at an interior point (such as point P in the previous fig) is interpolated from the bounding intensities at points 4 and 5.
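The edge and interior interpolation steps above can be sketched as follows (a simplified illustration that expresses the coordinate ratios as a parameter t; the numeric values are hypothetical):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def edge_intensity(i1, y1, i2, y2, y_scan):
    """Intensity where a scan line at y_scan crosses the edge from (y1, i1) to (y2, i2)."""
    t = (y_scan - y1) / (y2 - y1)
    return lerp(i1, i2, t)

def interior_intensity(i_left, x_left, i_right, x_right, x):
    """Intensity of an interior pixel between the two edge intersections of a scan line."""
    t = (x - x_left) / (x_right - x_left)
    return lerp(i_left, i_right, t)

# A scan line at y = 5 crossing an edge from (y = 0, I = 0.2) to (y = 10, I = 0.8):
i4 = edge_intensity(0.2, 0, 0.8, 10, 5)  # approximately 0.5
```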
Phong Shading
1. A more accurate method for rendering a polygon surface is to interpolate the normal vector and then
apply the illumination model to each surface point.
2. This method, developed by Phong Bui Tuong, is called Phong Shading or normal-vector interpolation shading.
3. It displays more realistic highlights on a surface and greatly reduces the Mach-band effect.
4. A polygon surface is rendered using Phong shading by carrying out the following steps:
a. Determine the average unit normal vector at each polygon vertex.
b. Linearly interpolate the vertex normals over the surface of the polygon.
c. Apply an illumination model along each scan line to calculate projected pixel intensities for the
surface points.
5. The surface normal is interpolated along a polygon edge between two vertices as shown in fig:
6. Incremental methods are used to evaluate normals between scan lines and along each scan
line. At each pixel position along a scan line, the illumination model is applied to determine the
surface intensity at that point.
7. Intensity calculations using an approximated normal vector at each point along the scan line
produce more accurate results than the direct interpolation of intensities, as in Gouraud
Shading.
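Step b above, interpolating vertex normals, can be sketched like this (the renormalization after blending is the key difference from Gouraud's intensity interpolation; function names and the sample normals are illustrative):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def interpolate_normal(n1, n2, t):
    """Linearly interpolate two unit normals, then renormalize (Phong shading)."""
    blended = tuple(a + (b - a) * t for a, b in zip(n1, n2))
    return normalize(blended)

# Halfway between normals pointing along +x and +y:
n = interpolate_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
# n points diagonally in the xy-plane and has unit length again
```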
Halftone Shading

1. Halftone is the reprographic technique that simulates continuous tone imagery through the
use of dots, varying either in size, in shape or in spacing.
2. "Halftone" can also be used to refer specifically to the image that is produced by this
process.
3. The human visual system has a tendency to average brightness over small areas, so the
black dots and their white background merge and are perceived as an intermediate shade
of grey.
4. The process of generating a binary pattern of black and white dots from an image is
termed halftoning.
5. Dots of varying size are used to represent intensities, with the area of each dot proportional to the intensity in the image.
6. Grouping pixels into an n x n cluster allows n² + 1 distinct intensity levels to be represented.
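A minimal sketch of the n x n clustering idea using a 2 x 2 threshold matrix, one common way to implement halftoning (the particular threshold values are a standard ordered-dither pattern; treat the details as illustrative):

```python
# 2x2 threshold matrix: a pixel is turned on when the image intensity,
# scaled to the range 0..4, exceeds its threshold. This yields
# n*n + 1 = 5 distinct grey levels from purely black/white dots.
DITHER_2X2 = [[0, 2],
              [3, 1]]

def halftone_cell(intensity):
    """Map an intensity in [0, 1] to a 2x2 binary dot pattern."""
    level = round(intensity * 4)  # quantize to 0..4
    return [[1 if level > DITHER_2X2[r][c] else 0 for c in range(2)]
            for r in range(2)]

# intensity 0.5 turns on half the dots in the cluster:
mid_grey = halftone_cell(0.5)
```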
Hidden Surface Removal
1. One of the most challenging problems in computer graphics is the removal of hidden parts from images of solid
objects.
2. In real life, the opaque material of these objects obstructs the light rays from hidden parts and prevents us from
seeing them.
3. In computer-generated images, no such automatic elimination takes place when objects are projected onto the screen coordinate system.
4. Instead, all parts of every object, including the many parts that should be invisible, are displayed.
5. To remove these parts and create a more realistic image, we must apply a hidden-line or hidden-surface algorithm to the set of objects.
6. Hidden line and Hidden surface algorithms capitalize on various forms of coherence to reduce the computing
required to generate an image.
7. Different types of coherence are related to different forms of order or regularity in the image.
8. Scan line coherence arises because the display of a scan line in a raster image is usually very similar to the display
of the preceding scan line.
9. Frame coherence in a sequence of images designed to show motion recognizes that successive frames are very
similar.
10. Object coherence results from relationships between different objects or between separate parts of the same
objects.
Types of hidden surface detection algorithms

Object space methods:

● In this method, various parts of objects are compared; after the comparison, the visible, invisible, or partially visible surfaces are determined.
● These methods generally determine visible surfaces. In the wireframe model they are used to determine visible lines, so such algorithms are line-based rather than surface-based.

Image space methods:

● Here the positions of various pixels are determined. These methods locate visible surfaces rather than visible lines.
● Each point is tested for visibility: if the point is visible, its pixel is turned on; otherwise it is off.
● Thus the object closest to the viewer that is pierced by the projector through a pixel is determined, and that pixel is drawn in the appropriate color.
Algorithms used for hidden line and surface detection

1. Back Face Removal Algorithm
2. Z-Buffer Algorithm
3. Painter Algorithm
4. Subdivision Algorithm
Back Face Removal Algorithm
1. It is used to plot only the surfaces that face the camera; objects on the back side are not visible.
2. This method removes 50% of the polygons in the scene if parallel projection is used. If perspective projection is used, more than 50% of the invisible area is removed.
3. The nearer the object is to the center of projection, the more back polygons are removed.
4. It applies to individual objects and does not consider the interaction between objects. Many back-facing polygons are obscured by front faces that are closer to the viewer; the back-face removal algorithm removes such faces.
5. When the projection is taken, any projector ray from the center of projection through the viewing screen pierces the object at two points: one on the visible front surface, and another on the invisible back surface.
6. This algorithm acts as a preprocessing step for other algorithms. The back-face test can be expressed geometrically. Each polygon has several vertices.
7. All vertices are numbered in clockwise order. The normal N1 is generated as the cross product of any two successive edge vectors.
8. N1 represents the vector perpendicular to the face, pointing outward from the polyhedron surface:

N1 = (v2 − v1) × (v3 − v2)
If N1 · P ≥ 0, the surface is visible;
if N1 · P < 0, it is invisible.
Back Face Removal Algorithm
Repeat for all polygons in the scene:

1. Number all vertices of the polygon in clockwise direction, i.e. v1, v2, v3, …, vn.
2. Calculate the normal vector N1 = (v2 − v1) × (v3 − v2).
3. Consider the projector P, the projection from any vertex, and calculate the dot product Dot = N1 · P.
4. Test and plot whether the surface is visible or not:
If Dot ≥ 0 then
    the surface is visible
else
    it is not visible
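The steps above can be sketched as follows (the sign convention follows the text, treating a face as visible when N1 · P ≥ 0; winding conventions vary in practice, and the sample triangle and view vector are illustrative):

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(v1, v2, v3, view_dir):
    """Back-face test: N1 = (v2 - v1) x (v3 - v2); visible when N1 . view_dir >= 0."""
    e1 = tuple(b - a for a, b in zip(v1, v2))
    e2 = tuple(b - a for a, b in zip(v2, v3))
    n1 = cross(e1, e2)
    return dot(n1, view_dir) >= 0

# Triangle in the xy-plane whose normal points along +z:
front = is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))  # True
```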
Z-Buffer Algorithm
1. It is also called the Depth Buffer Algorithm. The depth-buffer algorithm is the simplest image-space algorithm.
2. For each pixel on the display screen, we keep a record of the depth of the object within the pixel that lies closest to the observer.
3. In addition to depth, we also record the intensity that should be displayed to show the object.
4. The depth buffer is an extension of the frame buffer. The depth-buffer algorithm requires two arrays, intensity and depth, each indexed by pixel coordinates (x, y).

Algorithm

1. For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y] to a background value.

2. For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of the polygon when projected onto the screen. For each of these pixels:

(a) Calculate the depth z of the polygon at (x, y).

(b) If z < depth [x, y], this polygon is closer to the observer than any other recorded for this pixel. In this case, set depth [x, y] to z and intensity [x, y] to a value corresponding to the polygon's shading. If instead z > depth [x, y], the polygon already recorded at (x, y) lies closer to the observer than this new polygon, and no action is taken.

3. After all polygons have been processed, the intensity array will contain the solution.
4. The depth-buffer algorithm illustrates several features common to all hidden-surface algorithms.

5. First, it requires a representation of all opaque surfaces in the scene, polygons in this case.

6. These polygons may be faces of polyhedra recorded in the model of the scene, or may simply represent thin opaque 'sheets' in the scene.
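The algorithm above can be sketched as a simple loop over pre-rasterized fragments (a simplification: real implementations compute z incrementally during scan conversion; the names and sample values here are illustrative):

```python
def z_buffer_render(width, height, fragments, background=0):
    """Minimal depth-buffer loop. `fragments` is an iterable of
    (x, y, z, intensity) tuples with z in [0, 1]; smaller z is closer."""
    depth = [[1.0] * width for _ in range(height)]        # initialized to far plane
    image = [[background] * width for _ in range(height)]  # initialized to background
    for x, y, z, intensity in fragments:
        if z < depth[y][x]:          # closer than anything recorded so far at this pixel
            depth[y][x] = z
            image[y][x] = intensity
    return image

# Two fragments land on the same pixel; the closer one (z = 0.3) wins:
img = z_buffer_render(2, 2, [(0, 0, 0.8, 5), (0, 0, 0.3, 9)])
```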
Painter Algorithm
● It comes under the category of list-priority algorithms and is also called the depth-sort algorithm. In this algorithm an ordering of the visibility of objects is established; if objects are rendered in that order, a correct picture results.
● Objects are sorted by z coordinate and rendered back to front. Nearer objects obscure farther ones: pixels of nearer objects overwrite the pixels of farther objects. If the z extents of two objects do not overlap, we can determine the correct order from the z values alone, as shown in fig (a).
● If objects overlap in z, as in fig (b), the correct order can still be maintained by splitting the objects.
● The depth-sort, or painter, algorithm was developed by Newell, Newell and Sancha. It is called the painter algorithm because painting into the frame buffer is done in decreasing order of distance from the view plane: the most distant polygons are painted first.
● The concept is taken from the way a painter or artist works. When making a painting, the painter first covers the entire canvas with a background color; then distant objects such as mountains and trees are added; then nearer, foreground objects are added to the picture. We use a similar approach: surfaces are sorted according to their z values before being painted into the refresh buffer.

Steps performed in depth sort:

1. Sort all polygons according to z coordinate.
2. Resolve ambiguities, if any: check whether the z coordinates overlap, and split polygons if necessary.
3. Scan-convert each polygon, farthest first.

Painter Algorithm
Step 1: Start the algorithm.
Step 2: Sort all polygons by z value, keeping the largest value of z first.
Step 3: Scan-convert the polygons in this order.


The following tests are applied to each pair of polygons A (farther) and B (nearer):

1. Is A behind and non-overlapping with B in the z dimension, as shown in fig (a)?
2. Is A behind B in z, with no overlap in x or y, as shown in fig (b)?
3. Is A behind B in z and totally outside B with respect to the view plane, as shown in fig (c)?
4. Is A behind B in z, with B totally inside A with respect to the view plane, as shown in fig (d)?

The success of any one of these tests against each overlapping polygon allows A to be painted.
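The back-to-front sort at the heart of the algorithm can be sketched as follows (a simplification that keys on each polygon's farthest vertex and assumes larger z means farther from the view plane, with no ambiguous overlaps that would require splitting):

```python
def painter_order(polygons):
    """Sort polygons back-to-front by their farthest vertex z.
    Each polygon is a list of (x, y, z) vertex tuples."""
    return sorted(polygons, key=lambda poly: max(v[2] for v in poly), reverse=True)

# Two illustrative triangles: `far` must be painted before `near`,
# so that near pixels overwrite far ones.
near = [(0, 0, 2), (1, 0, 2), (0, 1, 2)]
far = [(0, 0, 9), (1, 0, 9), (0, 1, 9)]
order = painter_order([near, far])  # far first, then near
```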
Area Subdivision Algorithm

● It was invented by John Warnock and is also called the Warnock Algorithm.
● It is based on a divide-and-conquer method and exploits area coherence.
● It is used to resolve visibility. It classifies polygons into two cases: trivial and non-trivial.
● Trivial cases are easily handled.
● For non-trivial cases, the window is divided into four equal subwindows.
● The subwindows are recursively subdivided until every polygon can be classified as a trivial case.
Classification Scheme
It classifies polygon surfaces into four categories:

1. Inside surface: a surface that is completely inside the surrounding window or specified boundary, as shown in fig (b).

2. Outside surface: a polygon surface completely outside the surrounding window, as shown in fig (d).

3. Surrounding surface: a polygon surface that completely encloses the surrounding window, as shown in fig (e).

4. Overlapping surface: a surface that is partially inside and partially outside the window, as shown in fig (c).
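The four categories can be sketched with a bounding-box test (a conservative simplification of the exact polygon-against-window test; the box layout and coordinates are illustrative):

```python
def classify(poly_box, window):
    """Classify a polygon's bounding box against a window, Warnock-style.
    Boxes are (xmin, ymin, xmax, ymax) tuples."""
    px0, py0, px1, py1 = poly_box
    wx0, wy0, wx1, wy1 = window
    # Disjoint in x or y: the polygon cannot affect this window.
    if px1 < wx0 or px0 > wx1 or py1 < wy0 or py0 > wy1:
        return "outside"
    # The polygon's box covers the whole window.
    if px0 <= wx0 and py0 <= wy0 and px1 >= wx1 and py1 >= wy1:
        return "surrounding"
    # The polygon's box fits entirely within the window.
    if px0 >= wx0 and py0 >= wy0 and px1 <= wx1 and py1 <= wy1:
        return "inside"
    return "overlapping"

window = (0, 0, 10, 10)
kind = classify((5, 5, 15, 15), window)  # "overlapping"
```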
