
Prepared By: Milan Chikanbanjar, Computer Department, Computer Graphics

Chapter 6

Polygon Tables
We specify a polygon surface with a set of vertex coordinates and associated attribute parameters. As the information for each
polygon is input, the data are placed into tables that are used in the subsequent processing, display, and manipulation of the
objects in the scene. Polygon data tables can be organized into two groups: geometric tables and attribute tables.

Geometric tables contain the vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.

Attribute information for an object includes parameters specifying the degree of transparency of the object and its
surface reflectivity and texture characteristics.
In summary, two types of polygon table:
 Attribute Table:
 Information about degree of transparency, surface reflectivity, texture characteristics, etc.
 Geometric Table:
 Vertex Table
 Edge Table
 Surface Table

A convenient organization for storing geometric data is to create three lists: a vertex table, an edge table, and a polygon table.
The coordinate values for each vertex in the object are stored in the vertex table. The edge table contains pointers back into the
vertex table to identify the vertices for each polygon edge. And the polygon table contains pointers back into the edge table
to identify the edges for each polygon. This is shown in the figure below for two adjacent polygons on an object
surface. In addition, individual objects and their component polygon faces can be assigned object and facet identifiers for
easy reference. Listing the geometric data in three tables as shown below provides a convenient reference to the
individual components (vertices, edges, and polygons) of each object.
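For concreteness, the three tables can be sketched as follows for two adjacent triangles sharing the edge between v1 and v3; this is a minimal Python illustration with made-up coordinates, not data from the text.

    vertex_table = [              # coordinates of each vertex
        (0.0, 0.0, 0.0),          # v1
        (1.0, 0.0, 0.0),          # v2
        (1.0, 1.0, 0.0),          # v3
        (0.0, 1.0, 0.0),          # v4
    ]
    edge_table = [                # pairs of indices into vertex_table
        (0, 1),                   # e1: v1-v2
        (1, 2),                   # e2: v2-v3
        (2, 0),                   # e3: v3-v1 (shared by both polygons)
        (2, 3),                   # e4: v3-v4
        (3, 0),                   # e5: v4-v1
    ]
    polygon_table = [             # tuples of indices into edge_table
        (0, 1, 2),                # s1 bounded by e1, e2, e3
        (2, 3, 4),                # s2 bounded by e3, e4, e5
    ]

Storing the shared edge e3 once and pointing to it from both polygons avoids duplicating geometry and makes consistency checks easier.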

Hidden Surface Removal/ Visible Surface Detection Method


 The major consideration in realistic graphical displays is identifying those parts of a scene that are visible from a
chosen viewing position.
 The algorithm chosen for a particular application may depend upon factors such as the complexity of the scene and
the types of objects to be displayed.
 The various algorithms are referred to as visible-surface detection methods.
 Sometimes they are also called hidden-surface elimination methods.
 In a wireframe display, we do not remove hidden areas but represent them with gridlines.

Visible-surface detection methods are broadly classified according to whether they deal with object definitions directly or with
their projected images. These two approaches are called object-space methods and image-space methods. They are
described below.


1. Object-Space methods:
- Compare the actual 3D representations with each other within the scene to decide visibility.
- Calculations are done without regard to a particular display resolution.

2. Image-Space methods:
- Decide visibility by looking point-by-point at each pixel position on the projection plane.
- Performed at the resolution of the display device.
- Simple, fast and works on any 3D objects.
- Most visible surface detection algorithms use this method.

Back-Face Detection
This method is a fast and simple object-space method for identifying the back faces. A point (x, y, z) is “inside” a polygon
surface with plane parameters A, B, C and D if

Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back surface. We can simplify this test
by considering the surface normal vector N = (A, B, C) and the viewing vector V, as shown in the figure below. The polygon
is a back face if
V.N > 0
This test is easy to apply to all polygons; however, it provides a necessary but not sufficient condition.

Fig: A polygon surface with plane parameter C < 0 in a right-handed viewing coordinate system is
identified as a back face when the viewing direction is along the negative z-axis.

Partially hidden faces cannot be determined by the back-face detection method.
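As a quick illustration, the test transcribes into a few lines of Python; the example vectors below are ours, chosen only to show the C < 0 case from the figure.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def is_back_face(normal, view_dir):
        # Polygon with outward normal N = (A, B, C) is a back face if V.N > 0.
        return dot(view_dir, normal) > 0

    # Viewing along the negative z-axis: V = (0, 0, -1), so V.N = -C and the
    # test reduces to C < 0.
    print(is_back_face((0.2, 0.3, -0.9), (0.0, 0.0, -1.0)))   # True: back face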

Depth Buffer Methods


The depth-buffer method is also called the Z-buffer method. This method compares surface depths at each pixel position on the
projection plane. The object depth is usually measured from the view plane along the z-axis of the viewing system. Each
surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes
containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to
implement. But the method can also be applied to non-planar surfaces.

We can implement the depth-buffer algorithm in normalized coordinates, so that z-values range from 0 at the back clipping
plane to z_max at the front clipping plane. This method requires two buffer areas: a depth buffer is used to store depth
values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each
position.

Fig: At view-plane position (x, y), surface S1 has the smallest depth from the view plane and so
is visible at that position.


Initially, all positions in the depth buffer are set to 0 (minimum depth) and the refresh buffer is initialized to the background
intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z-value) at
each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If
the calculated depth is greater than the stored value, the new depth value is stored, and the surface intensity at that position is
determined and placed in the same (x, y) location in the refresh buffer. Depth values for a surface position (x, y) are calculated
from the plane equation for each surface:

z = (−Ax − By − D) / C
For any scan line, adjacent horizontal positions across the line differ by 1, and vertical y values on adjacent
scan lines differ by 1. If the depth of position (x, y) has been determined to be z, then the depth z' of the next position
(x + 1, y) along the scan line is

z' = (−A(x + 1) − By − D) / C
   = (−Ax − By − D) / C − A/C

or

z' = z − A/C

The ratio −A/C is a constant for each surface, so succeeding depth values across a scan line are obtained from
preceding values with a single addition.

Depth values down an edge are obtained in a similar way. For an edge with slope m, the intersection x' of the edge
with the next scan line y − 1 satisfies

m = (y − (y − 1)) / (x − x') = 1 / (x − x')

so x' = x − 1/m. The depth z' at position (x − 1/m, y − 1) down the edge is then

z' = (−A(x − 1/m) − B(y − 1) − D) / C
   = z + (A/m + B) / C

If we move down a vertical edge, the slope m is infinite, so 1/m = 0 and the calculation reduces to

z' = z + B/C

- This method is very easy to implement, but at a resolution of 1024 × 1024 over one million positions would be
needed in the z-buffer.
- One way to reduce the storage requirement is to process the scene one section at a time, using a smaller depth buffer.

The steps of a depth-buffer algorithm are summarized below:


1. Initialize the depth buffer and refresh buffer so that depth(x, y) = 0 and refresh(x, y) = I_background, where
I_background is the background intensity.
2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer
to determine visibility.
 Calculate the depth Z for each (x, y) position on the polygon surface.
 If Z > depth(x, y), then set depth(x, y) = Z and refresh(x, y) = I_surface(x, y), where I_surface(x, y) is the
intensity value of the surface at point (x, y).

After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh
buffer contains the corresponding intensity values for those surfaces.
The depth-buffer method is very easy to implement but it does require the availability of a second buffer in addition
to the refresh buffer.
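A minimal sketch of the algorithm in Python is given below. The surface objects and their covered_pixels, depth_at, and intensity_at helpers are hypothetical stand-ins for the polygon-table and plane-equation machinery; depth is measured so that larger values are closer to the view plane, as above.

    def depth_buffer(surfaces, width, height, i_background=0.0):
        depth = [[0.0] * width for _ in range(height)]            # step 1: minimum depth
        refresh = [[i_background] * width for _ in range(height)]
        for s in surfaces:                                        # step 2
            for x, y in s.covered_pixels():       # pixels covered by the projection
                z = s.depth_at(x, y)              # z = (-A*x - B*y - D) / C
                if z > depth[y][x]:               # greater stored depth = closer
                    depth[y][x] = z
                    refresh[y][x] = s.intensity_at(x, y)
        return refresh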

A-Buffer Method/ Accumulation Buffer Method:

The A-buffer method is an image-space approach to hidden-surface removal. It extends the Z-buffer idea: the name
stands for the anti-aliased, area-averaged, accumulation-buffer method, developed at Lucasfilm for
implementation in the surface-rendering system called REYES.
The Z-buffer method has the drawback that it can identify only one visible surface at each pixel position; it deals only
with opaque surfaces and cannot accumulate intensity values for more than one surface. The A-buffer method
resolves visibility among an arbitrary collection of opaque, transparent, and intersecting objects. Each position in the
A-buffer has two fields:
Depth field: stores a positive or negative real number.
Intensity field: stores surface-intensity information or a pointer value.

The depth field in the A-buffer indicates whether a single surface or multiple surfaces contribute to the intensity of the
corresponding pixel. If the depth is positive, the number stored at that position is the depth of a single surface; if the depth
is negative, multiple surfaces contribute to the pixel intensity. The surface field then stores the information for each
contributing surface: a surface identifier, percentage of area coverage, depth, percentage of transparency, RGB intensity
components, and other surface-rendering parameters.
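One A-buffer pixel entry can be sketched as below; the field and class names are ours, and the multiple-surface case is represented as a linked list of per-surface records.

    class SurfaceRecord:
        # One contributing surface at a pixel (linked-list node).
        def __init__(self, surface_id, coverage, depth, transparency, rgb, next_rec=None):
            self.surface_id = surface_id      # surface identifier
            self.coverage = coverage          # percentage of pixel area covered
            self.depth = depth
            self.transparency = transparency  # percentage of transparency
            self.rgb = rgb                    # RGB intensity components
            self.next = next_rec              # next contributing surface, if any

    class APixel:
        def __init__(self):
            self.depth = 0.0      # > 0: depth of a single surface; < 0: multiple surfaces
            self.surface = None   # intensity value, or pointer to a SurfaceRecord list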

Scan-Line Method
This method for removing hidden surfaces is also an image-space method. In this method, all polygon surfaces
intersecting a scan line are examined to determine which are visible. Across each scan line, depth calculations are
made for each overlapping surface to determine which one is nearest. When the visible surface has been
determined, the intensity value for that position is entered into the refresh buffer.

The method requires various tables, including the edge table and polygon table for the surfaces in the scene. The edge and
surface tables are kept in expanded form and include additional information.
The edge table contains:

 The coordinate endpoints for each line in the scene.


 The inverse slope of each line and
 Pointers into the polygon table to identify the surfaces bounded by each line.
Similarly, the polygon table contains:
 Coefficients of the plane equation for each surface,
 Intensity information for the surfaces and
 Pointers into the edge table.

To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from
information in the edge table. This active list contains only edges that cross the current scan line, stored in order
of increasing x. In addition, we define a flag for each surface that is set on or off to indicate whether a position
along a scan line is inside or outside the surface. Scan lines are processed from left to right. At the leftmost
boundary of a surface, the surface flag is turned on, and at the rightmost boundary, it is turned off.

Fig: Scan lines crossing the projection of two surfaces, S1 and S2, in the view plane.

The above figure illustrates the scan-line method for locating visible portions of surfaces at pixel positions along
each scan line. The active list for scan line 1 contains information from the edge table for the edges AB, BC, EH, and FG.
For positions along the scan line between edges AB and BC, only the flag for surface S1 is on. Therefore no depth
calculations are necessary, and the intensity information for surface S1 is entered into the refresh buffer. Similarly,
between edges EH and FG, only the flag for S2 is on; in this region the intensity information for surface S2 is entered
into the refresh buffer. No other positions along scan line 1 intersect surfaces, so the intensity values in the other
areas are set to the background intensity.
For scan line 2, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH,
only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this
interval, depth calculations must be made using the plane coefficients for the two surfaces. For this example, the
depth of surface S1 is less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until
boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge
FG is passed. The procedure is repeated in the same way for scan line 3.
The algorithm for Scan-Line Method is as follows:
1. Set up an active edge list and assign flag for each surface.
2. For each scan line:
 Update the active edge list, in increasing order of x, for the scan line at position y.
 Start scanning from left to right.
 If one flag is ON, assign the refresh buffer at that position the intensity information of the corresponding
polygon surface.
 If two or more flags are ON at the same time, perform depth calculations for each surface and assign the refresh
buffer at that position the intensity of the surface having minimum depth, as in the sketch below.
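A simplified Python sketch of one scan line under this flag scheme follows. It assumes each active edge knows its crossing x_at(y) and the single surface it bounds, and that surfaces expose intensity and depth_at helpers; real edges can bound two surfaces, so this is an illustration, not a full implementation.

    def render_scan_line(y, active_edges, refresh_row, i_background=0.0):
        flags = {}
        # Edge crossings for this scan line, in order of increasing x.
        crossings = sorted(((e.x_at(y), e.surface) for e in active_edges),
                           key=lambda c: c[0])
        for x in range(len(refresh_row)):
            while crossings and crossings[0][0] <= x:   # toggle flags at each crossing
                surface = crossings.pop(0)[1]
                flags[surface] = not flags.get(surface, False)
            on = [s for s, f in flags.items() if f]
            if not on:
                refresh_row[x] = i_background           # outside all surfaces
            elif len(on) == 1:
                refresh_row[x] = on[0].intensity        # no depth test needed
            else:                                       # overlap: nearest surface wins
                nearest = min(on, key=lambda s: s.depth_at(x, y))
                refresh_row[x] = nearest.intensity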

Illumination Models & Surface-Rendering Methods


An illumination model, also called a lighting model or shading model, is used to calculate the intensity of light that we
should see at a given point on the surface of an object.
A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity
for all projected pixel positions for the various surfaces in a scene. Surface rendering can be performed by applying the
illumination model to every visible surface point, or the rendering can be accomplished by interpolating intensities across
the surface from a small set of illumination-model calculations.


Photorealism in computer graphics involves two elements:


a) Accurate graphical representations of objects, and
b) Good physical descriptions of the lighting effects in a scene such as light reflections, surface texture, and
shadows.

Scan-line algorithms use interpolation schemes. Ray tracing algorithms invoke the illumination model at each pixel
position. Surface-rendering procedures are termed surface-shading methods.
Illumination model or a lighting model = the model for calculating light intensity at a single surface point.
Surface rendering = procedure for applying a lighting model to obtain pixel intensities for all the projected surface
positions in a scene.
Given the parameters for the optical properties of surfaces (opaque/transparent, shiny/dull, surface-texture), the relative
positions of the surfaces in a scene, the color and positions of the light sources, and the position and orientation of the
viewing plane, illumination models calculate the intensity projected from a particular surface point in a specified viewing
direction.
To minimize intensity calculations, most packages use empirical models based on simplified photometric calculations.

Light Sources
When we view an opaque non-luminous object, we see reflected light from the surfaces of the object.
The total reflected light is the sum of the contributions from light sources and other reflecting surfaces in the scene.

Light sources = light-emitting sources;


Reflecting surfaces = light-reflecting sources.

Figure 1. Light viewed from an opaque surface is in general a combination of reflected light
from a light source and reflections of light from other surfaces.

Point Source
The simplest model for a light emitter is a point source. Point sources are
abstractions of real-world sources of light such as light bulbs, candles, or the sun.
The light originates at a particular place and arrives at a surface from a particular
direction over a particular distance. For point sources, the position and orientation
of the object's surface relative to the light source determine how much light the
surface receives and, in turn, how bright it appears. Surfaces facing towards and
positioned near the light source receive more light than those facing away from or
far removed from the source. Rays follow radially diverging paths from the
point-source position, as shown in Fig. 2.

Fig. 2: Diverging ray paths from a point light source.

The point light source emits rays in radial directions from its position. A point light source is a fair approximation for
sources that are small compared to the size of objects in the scene, such as a local light source (a light bulb).
The direction of the light to each point on a surface changes when a point light source is used. Thus, a normalized vector to
the light emitter must be computed for each point that is illuminated.

Distributed Light Source

A nearby source, such as the long fluorescent light shown in Fig. 3, is more
accurately modeled as a distributed light source. Here, the illumination
effects cannot be approximated realistically with a point source, because
the area of the source is not small compared to the surfaces in the scene.
An accurate model for the distributed source is one that considers the
accumulated illumination effects of the points over the surface of the
source.

Fig. 3: An object illuminated with a distributed light source.

All of the rays from a directional (distributed) light source have the same direction and no point of origin. It is as if the light
source were infinitely far away from the surface that it is illuminating.
Sunlight is an example of an infinite light source.

When light is incident on an opaque surface, part of it is reflected and part is
absorbed. Shiny materials reflect more of the incident light, and dull surfaces
absorb more of it. For an illuminated transparent surface, some of the incident
light will be reflected and some will be transmitted through the material.

Grainy surfaces scatter the reflected light in all directions. This scattered light is
called diffuse reflection. The surface appears equally bright from all viewing
directions. What we call the color of an object is the color of the diffuse
reflection of the incident light.

Fig. 4: Diffuse reflection from a surface.

Light sources also create highlights, or bright spots, called specular reflection,
which is more pronounced on shiny surfaces than on dull ones.

Fig. 5: Specular reflection superimposed on diffuse reflection vectors.
Basic Illumination Models
Lighting calculations are based on:
• Optical properties of surfaces, such as glossy, matte, opaque, and transparent. This controls the amount of
reflection and absorption of incident light.
• The background lighting conditions.
• The light-source specifications. All light sources are considered to be point sources, specified with a coordinate
position and intensity value (color).

Ambient Light
Even though an object in a scene is not directly lit, it will still be visible, because
light is reflected from nearby objects.
Ambient light has no spatial or directional characteristics.
The amount of ambient light incident on each object is a constant for all surfaces and
over all directions.
The amount of ambient light that is reflected by an object is independent of the object's
position or orientation and depends only on the optical properties of the surface.
The level of ambient light in a scene is a parameter Ia, and each surface is illuminated
with this constant value.
The illumination equation for ambient light is

I = ka Ia

where
I is the resulting intensity,
Ia is the incident ambient-light intensity, and
ka is the object's basic intensity, the ambient-reflection coefficient.

Fig. 6: Ambient light shading.

Since ambient light produces a flat shading for each surface (Fig. 6), at least one light source is usually included in a scene, often
as a point source at the viewing position.

Diffuse Reflection

Diffuse reflections are constant over each surface in a scene, independent of the viewing direction.
The amount of the incident light that is diffusely reflected can be set for each surface with the parameter kd, the
diffuse-reflection coefficient, or diffuse reflectivity:
0 ≤ kd ≤ 1;
kd near 1: highly reflective surface;
kd near 0: surface that absorbs most of the incident light;
kd is a function of surface color.

Even though there is equal light scattering in all directions from a surface, the brightness of the surface does depend on the
orientation of the surface relative to the light source:

Fig. 8: A surface perpendicular to the direction of the incident light (a) is more
illuminated than an equal-sized surface at an oblique angle (b) to the
incoming light direction.

As the angle between the surface normal and the incoming light direction increases, less of the incident light falls on the
surface.
We denote the angle of incidence between the incoming light direction and the surface normal as θ. Thus, the amount of
illumination depends on cos θ. If the incoming light from the source is perpendicular to the surface at a particular point, that
point is fully illuminated. If Il is the intensity of the point light source, then the diffuse-reflection equation for a point on the
surface can be written as

I_l,diff = kd Il cos θ

or

I_l,diff = kd Il (N.L)

where N is the unit normal vector to the surface and L is the unit direction vector
to the point light source from the position on the surface.

Fig. 9: Angle of incidence θ between the unit light-source direction vector L
and the unit surface normal N.

Figure 10 illustrates the illumination with diffuse reflection, using various
values of the parameter kd between 0 and 1.

Fig. 10: Series of pictures of a sphere illuminated by the diffuse-reflection model only, using different kd values
(0.4, 0.55, 0.7, 0.85, 1.0).

We can combine the ambient and point-source intensity calculations to obtain an expression for the total diffuse reflection:

I_diff = ka Ia + kd Il (N.L)

where both ka and kd depend on surface material properties and are assigned values in the range from 0 to 1.

Fig. 11: Series of pictures of a sphere illuminated by the ambient and diffuse reflection
model. Ia = Il = 1.0, kd = 0.4, and ka values (0.0, 0.15, 0.30, 0.45, 0.60).
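The combined ambient-and-diffuse equation transcribes directly into Python; the clamp of N.L at zero, for surfaces facing away from the light, is a common convention we add here.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def diffuse_intensity(ka, ia, kd, il, n, l):
        # I_diff = ka*Ia + kd*Il*(N.L), with N and L unit vectors
        return ka * ia + kd * il * max(0.0, dot(n, l))

    # Light directly overhead a horizontal surface: N.L = 1
    print(diffuse_intensity(0.15, 1.0, 0.7, 1.0, (0, 0, 1), (0, 0, 1)))   # 0.85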

Specular Reflection and the Phong Model

Specular reflection is the result of total, or near total, reflection of the incident
light in a concentrated region around the specular-reflection angle.
Figure 13 shows the specular-reflection direction at a point on the illuminated
surface. In this figure:

• R represents the unit vector in the direction of specular reflection;
• L is the unit vector directed toward the point light source;
• V is the unit vector pointing to the viewer from the surface position;
• Angle Φ is the viewing angle relative to the specular-reflection direction R.

Fig. 13: Modeling specular reflection.

Shiny surfaces have a narrow specular-reflection range; dull surfaces have a wider reflection range.

The Phong model is an empirical model for calculating the specular-reflection range:

• It sets the intensity of specular reflection proportional to cos^ns Φ;
• Angle Φ is assigned values in the range 0° to 90°, so that cos Φ varies from 0 to 1;
• The specular-reflection parameter ns is determined by the type of surface: a very shiny surface is modeled with a large
value for ns (say, 100 or more), and small values are used for duller surfaces. For a perfect reflector (a perfect mirror),
ns is infinite;
• The specular-reflection coefficient ks is set equal to some value in the range 0 to 1 for each surface.

Fig. 14: Modeling specular reflection with parameter ns: a shiny surface (large ns) has a narrow
specular-reflection range; a dull surface (small ns) has a wide one.

Fig. 15: Plots of cos^ns Φ for several values of the specular parameter ns.
The Phong specular-reflection model:

I_spec = ks Il cos^ns Φ

Since V and R are unit vectors in the viewing and specular-reflection directions, we can calculate the value of cos^ns Φ with
the dot product V.R:

I_spec = ks Il (V.R)^ns

Fig. 16: Calculation of vector R by considering projections onto the direction of the normal vector N.


Projecting L onto the direction of the normal vector (Fig. 16) gives

R + L = (2 N.L) N, so R = (2 N.L) N − L

A somewhat simpler calculation uses the halfway vector H along the bisector of the angle between L and V, where α = Φ/2
(Fig. 17):

H = (L + V) / |L + V|
I_spec = ks Il (N.H)^ns

Fig. 17: Halfway vector H along the bisector of the angle between L and V.

Combine Diffuse and Specular Reflections with Multiple Light Sources


For a single point light source, we can model the combined diffuse and specular reflections from a point on an illuminated
surface as

I = I_diff + I_spec = ka Ia + kd Il (N.L) + ks Il (N.H)^ns
If we place more than one point source in a scene, we obtain the light reflection at any surface point by summing the
contributions from the individual sources:

I = ka Ia + Σ (i = 1 to n) Il_i [ kd (N.L_i) + ks (N.H_i)^ns ]
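The multi-source model transcribes as below, recomputing the halfway vector H for each source; the zero clamps on N.L and N.H are, again, a common convention rather than something the text specifies.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = math.sqrt(dot(v, v))
        return tuple(c / length for c in v)

    def illuminate(ka, ia, kd, ks, ns, n, v, lights):
        # lights: list of (Il_i, L_i) pairs, L_i a unit vector toward source i
        intensity = ka * ia
        for il, l in lights:
            h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))   # H = (L+V)/|L+V|
            intensity += il * (kd * max(0.0, dot(n, l))
                               + ks * max(0.0, dot(n, h)) ** ns)
        return intensity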

Intensity Attenuation
As radiant energy from a point light source travels through space, its amplitude is attenuated by the factor 1/d^2, where d is
the distance that the light has travelled.
A surface close to the light source (small d) receives a higher incident intensity from the source than a distant surface
(large d).
There is a problem in using the factor 1/d^2 to attenuate intensities:
it produces too much intensity variation when d is small, and very little variation when d is large.
We can compensate for these problems by using inverse linear or quadratic functions of d to attenuate intensities.
For example, a general inverse quadratic attenuation function is

f(d) = 1 / (a0 + a1 d + a2 d^2)

The value of the constant term a0 can be adjusted to prevent f(d) from becoming too large when d is very small.
With a given set of attenuation coefficients, we can limit the magnitude of the attenuation function to 1 with the calculation

f(d) = min( 1, 1 / (a0 + a1 d + a2 d^2) )

Using this function, we can then write our basic illumination model as

I = ka Ia + Σ (i = 1 to n) f(d_i) Il_i [ kd (N.L_i) + ks (N.H_i)^ns ]

where d_i is the distance light has travelled from light source i.
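The clamped attenuation function is a one-liner; each source's bracketed contribution in the model above is simply scaled by attenuation(d_i).

    def attenuation(d, a0=1.0, a1=0.0, a2=0.0):
        # f(d) = min(1, 1/(a0 + a1*d + a2*d^2)); a0 keeps f(d) bounded for small d
        return min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))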

Shadows
• Hidden-surface methods can be used to locate areas where light sources produce shadows.
o Apply a hidden-surface method with the light source taken as the viewing position.
o Shadow patterns generated by a hidden-surface method are valid for any selected viewing position, as
long as the light-source positions are not changed.
• In polygon-based system, we can add surface-detail polygons that correspond to shadow areas of surface
polygons.
• We can display shadow areas with ambient light intensity only, or we can combine the ambient light with
specified surface texture.

Polygon Rendering Methods


• Polygon rendering is the application of an illumination model to the rendering of standard graphics objects, which are
formed with polygon surfaces. Polygon-surface rendering can be done in two ways: each polygon can be rendered with a
single intensity, or the intensity can be obtained at each point of the surface using an interpolation scheme.
• Three methods, each of them treats a single polygon independent of all others (non-global):
o Constant-Intensity Shading / Flat Shading;
o Intensity-Interpolation Shading / Gouraud Shading;
o Normal-vector Interpolation Shading / Phong Shading

Constant-Intensity Shading/ Flat Shading


• Fast & simple.
• A single intensity is calculated for each polygon.
• All points over the surface of the polygon are displayed with the same intensity value.
• Useful for quickly displaying the general appearance of a curved surface.
• Flat shading provides an accurate rendering for an object if all of the following assumptions are valid:
o The object is a polyhedron and is not an approximation of an object with a curved surface;


o All light sources illuminating the object are far from the surface so that N.L and the attenuation function
are constant over the surface;
o The viewing position is also far from the surface so that V.R is constant over the surface;
• Reasonable approximation of surface lighting effects by flat shading:
o Using small polygon facets;
o Calculate the intensity for each facet at the center of the polygon;

Gouraud Shading

o The intensity-interpolation scheme, referred to as Gouraud shading, renders a polygon surface by linearly
interpolating intensity values across the surface.
o Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus
eliminating the intensity discontinuities that can occur in flat shading.

Each polygon surface is rendered with Gouraud shading by performing the following calculations:

o Determine the average unit normal vector at each polygon vertex;


o Apply an illumination model to each vertex to calculate the vertex intensity;
o Linearly interpolate the vertex intensities over the surface of the polygon;

Step 1:
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that
vertex. For any vertex, we obtain the unit vertex normal with the following calculation:

Nv = (Σ_k N_k) / |Σ_k N_k|

That is, the normal vector Nv is calculated as the normalized average of the surface normals N_k of each polygon sharing that vertex.

Step 2:
Once we have the vertex normals, we can calculate the intensity at the vertices from the lighting model.

Step 3:

For each scan line, the intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the
intensities at the edge endpoints.
A fast method for obtaining this intensity is to interpolate between intensities of endpoints by using only the vertical
displacement of the scan line:
I_a = I_1 (y_s − y_2)/(y_1 − y_2) + I_2 (y_1 − y_s)/(y_1 − y_2)
I_b = I_1 (y_s − y_3)/(y_1 − y_3) + I_3 (y_1 − y_s)/(y_1 − y_3)

Once these bounding intensities are established for a scan line, an interior point (such as p) is interpolated from the
bounding intensities at points a and b as

I_p = I_a (x_b − x_p)/(x_b − x_a) + I_b (x_p − x_a)/(x_b − x_a)
Incremental calculations:

Fig: Incremental interpolation of intensity values (I1, I, I', I2) along a polygon edge for successive scan lines y and y − 1.

If the intensity at edge position (x, y) is interpolated as

I = I_1 (y − y_2)/(y_1 − y_2) + I_2 (y_1 − y)/(y_1 − y_2)

then we can obtain the intensity along this edge for the next scan line, y − 1, as

I' = I + (I_2 − I_1)/(y_1 − y_2)

Similar incremental calculations are used to obtain intensities at successive horizontal pixel positions along each scan line.
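Both interpolations transcribe directly into code; the variable names below mirror the equations.

    def edge_intensity(i1, y1, i2, y2, ys):
        # Intensity where scan line ys crosses the edge from (y1, I1) to (y2, I2)
        t = (ys - y2) / (y1 - y2)
        return t * i1 + (1.0 - t) * i2

    def span_intensity(ia, xa, ib, xb, xp):
        # Intensity at an interior point xp between edge intersections a and b
        return ((xb - xp) * ia + (xp - xa) * ib) / (xb - xa)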
Disadvantages of Gouraud shading:
• Highlights on the surface are sometimes displayed with anomalous shapes.
• Can cause bright or dark intensity streaks to appear on the surface (Mach-band effect).

Remedy:
• The Mach-Band effect can be removed by dividing the surface into a greater number of polygon faces or using
other methods such as Phong Shading.

Phong Shading
• Gouraud shading lacks specular highlights except near the vertices; Phong shading eliminates this problem.
• Phong shading is a more accurate method for rendering a polygon surface: it interpolates normal vectors and then
applies the illumination model to each surface point.
• It is therefore also called normal-vector interpolation shading.
• The method was developed by Phong Bui Tuong.
• It produces more realistic highlights and greatly reduces the Mach-band effect.

In Phong shading, the following steps are performed:

• Determine the average unit normal vector at each polygon vertex.
• Linearly interpolate the vertex normals over the surface of the polygon.
• Apply the illumination model along each scan line to calculate projected pixel intensities for the surface points.

Step 2:
The normal vector N for the scan-line intersection point along the edge between vertices 1 and 2 can be obtained by
vertically interpolating between the edge endpoint normals:

N = N_1 (y − y_2)/(y_1 − y_2) + N_2 (y_1 − y)/(y_1 − y_2)

Fig: Interpolation of surface normals (N1, N2, N3) along a polygon edge crossed by a scan line.

Incremental methods are used to evaluate normals between scan lines and along each individual scan line.
At each pixel position along a scan line, the illumination model is applied to determine the surface intensity at that point,
as in the sketch below.
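Normal interpolation follows the same pattern as intensity interpolation in Gouraud shading, with one extra step: the interpolated vector must be renormalized to unit length before the illumination model uses it (implied by the model's use of unit vectors).

    import math

    def interp_edge_normal(n1, y1, n2, y2, y):
        # N = N1*(y - y2)/(y1 - y2) + N2*(y1 - y)/(y1 - y2), then renormalize
        t = (y - y2) / (y1 - y2)
        n = tuple(t * a + (1.0 - t) * b for a, b in zip(n1, n2))
        length = math.sqrt(sum(c * c for c in n))
        return tuple(c / length for c in n)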


• Phong shading produces more accurate results.
• Trade-off: Phong shading requires considerably more calculation than Gouraud shading.
• Bishop & Weimer developed a fast approximation using a Taylor-series expansion.
• (Study the Fast Phong method yourself.)
