UNIT 3 VISUAL REALISM
Structure
3.1 Introduction
Objectives
3.1 INTRODUCTION
CAD/CAM software has been recognized as an essential tool in the designing and
manufacturing of a product due to its ability to depict the designs and tool paths with
friendly visual displays and effects. No CAD/CAM package exists without visualization
of the design of a product on the computer screen. Visualization embraces both image
understanding and image synthesis. Thus, generating images that appear realistic
becomes one of the challenging tasks in computer graphics. The quest for visual realism
began in the early 1970s, when memory prices dropped low enough for raster scan
technology to become cost-effective over the then prevailing calligraphic displays. The
need for visual realism is ever increasing in every discipline of science and engineering.
You may appreciate the applications of visual realism in enhancing our knowledge and
training skills as described below :
(a) Robot Simulation : Visualization of the movement of links, joints, end
effectors, etc.
(b) CNC Program Verification : Visualization of tool movement along the
prescribed path, and estimation of cusp height, surface finish, etc.
(c) Discrete Event Simulation : Most DES packages enable the user to create a
shop floor environment on the screen to visualize the layout of facilities, the
movement of material handling systems, and the performance of machines and tools.
(d) Scientific Computing : Visualization of the results of FEM analysis, such as
iso-stress and iso-strain regions, deformed shapes and stress contours;
temperature and heat flux in heat-transfer analysis; and the display and
animation of mode shapes in vibration analysis.
(e) Flight Simulation : Cockpit training for pilots is first provided on flight
simulators, which virtually simulate the surroundings that an actual flight
would pass through.
Objectives
After studying this unit, you should be able to
• know the specific needs of realism,
• add realism to pictures by eliminating hidden lines and surfaces on solid
objects,
• know several techniques for removing hidden lines and surfaces,
• determine how light reflects off surfaces,
• see how to approximate a smooth surface with a polygon net, and
• know the use of various colour models.
If after all this, we add a background of our choice, we have the total 3D image – except
that it is virtual, imaginary, not real, but one fully under our control, to be manipulated as
we wish.
Certainly, there is still one more major item missing, before we can look at a computer
display or plot and perceive it just as we see a real object, namely the stereo vision we
experience with our two eyes. That too is now available, with software producing two
colour-coded, polarized, or alternating images for left and right eyes, and a set of special
glasses to merge them for the observer. In fact, this technology is very much in use by
architects who wish to show off their latest creations to their (rich) clients, and by doctors
who wish to plan and rehearse microsurgery of the brain or heart.
The development of perspective projection made constructed objects look more realistic.
But for a realistic scene, we should only draw those lines and polygons which could
actually be seen, not those which would be hidden by other objects. What is hidden and
what is visible depend upon the point of view.
The main problem in visualization is the display of three-dimensional objects and scenes
on two-dimensional screens. How can the third dimension, the depth, be displayed on the
screen? How can the visual complexities of the real environment such as lighting, colour,
shadows, and texture be represented as image attributes? What complicates the display of
three-dimensional objects even further is the centralized nature of the databases of their
geometric models. If we project a complex three-dimensional model onto a screen, we
get a complex maze of lines and curves. To interpret this maze, curves and surfaces that
cannot be seen from the given viewpoint should be removed. Hidden line and surface
removal eliminates the ambiguities of the displays of three-dimensional models and is
considered the first step toward visual realism.
However, the displayed image shows even the lines that are supposed to be hidden and
hence invisible, interfering with and disturbing our perception of the object. In essence,
the surfaces are all transparent and the edges are all visible, leading to ambiguity and
confusion. For instance, in Figure 3.2(a), which of the two corners is nearer to the
viewer : P or Q? This will be a problem in whatever view of the object we wish to
display.
This skeletal view is called a Wire-frame Model. Although not a realistic representation
of the object, it is still very useful in the early stages of image development for making
sure that all the corners and edges are where they should be. For some situations such as
preliminary designs, centre-line diagrams of structural frames, etc., these are not only
quite adequate, but even preferred over the more realistic views.
Sometimes, for checking during image creation, or for easier interpretation of the final
display, hidden lines and surfaces are not completely omitted, but shown as dashed or
lighter lines, as in Figure 3.2(b). This is known as Partial Wire-frame.
The most realistic view, of course, is the one that our eyes would see, namely with the
hidden lines and surfaces completely removed, as in Figures 3.1(b) and 3.2(c). Thus, for
realism in 3-D representation, the lines and surfaces which are hidden by the surfaces
visible to the observer must be rubbed out or covered up, or better yet, not drawn in the
first place, removed from the plotting data by a computer graphics algorithm.
Hidden-line removal refers to wire-frame diagrams without surface rendering and
polygonal surfaces with straight edges. Hidden-surface removal refers to all kinds of
surfaces, including curved ones. For the simplified and conceptual treatment planned
herein, we shall use the phrases “hidden-line” and “hidden-surface” interchangeably.
Hidden surface removal is the most troublesome and most expensive aspect of 3-D
computer graphics. Not only is hidden surface removal extremely complex in
conceptualization and computation, but it is also very tedious and time-consuming in
implementation and processing.
Various approaches to achieve visual realism exist. They are directly related to the types
of geometric models utilized to represent three-dimensional objects. Thus, one would
expect an upward trend in the efficiency and automation level of these approaches as the
geometric modeling techniques have advanced from wire-frames, to surfaces, to solids.
Among the existing visualization approaches are parallel projections, perspective
projections, hidden line removal, hidden surface removal, hidden solid removal, and the
generation of shaded images of models and scenes.
A wide variety of hidden line and hidden surface removal (visibility) algorithms are in
existence today. The development of these algorithms is influenced by the types of
graphics display devices they support (whether they are vector or raster) and by the type
of data structure or geometric modeling they operate on (wire-frame, surface, or solid
modeling). Some algorithms utilize parallel processing, rather than the traditional serial
approach, to
speed up their execution. The formalization and generalization of these algorithms are
useful and are required if one attempts to design and build special-purpose hardware to
support hidden line and hidden surface removal, which is not restricted to a single
algorithm. However, it is not a trivial task to convert the different algorithmic
formulations into a form that allows them to be mapped onto a generalized scheme.
Algorithms that are applied to a set of objects to remove hidden parts to create a more
realistic image are usually classified into hidden line and hidden surface algorithms. The
former supports line-drawing devices such as vector displays and plotters, while the latter
supports raster displays.
The algorithms used for removal of hidden line and hidden surfaces are broadly classified
according to whether they deal with object definitions directly or with their projected
images. These two approaches are called object-space methods and image-space
methods, respectively. An object-space method compares objects and parts of objects to
each other within the scene definition to determine which surfaces, as a whole, we should
label as visible. In an image-space algorithm, visibility is decided point by point at each
pixel position on the projection plane. Most visible-surface algorithms use image-space
methods, although object-space methods can be used effectively to locate visible surfaces
in some cases. Line-display algorithms, on the other hand, generally use object-space
methods to identify visible lines in wire-frame displays, but many
image-space visible-surface algorithms can be adapted easily to visible-line detection.
3.3.2 Back Face Detection Method
A convex polyhedron is a solid in which the internal angle between any two adjacent
faces does not exceed 180°. For such a solid there is a simple object-space method to
determine which faces are the visible front faces, and which are the hidden back faces.
The faces are characterized by their outward normals, and for this reason, the method is
also known as the Outward Normal Method.
The outward normal for every face of the convex solid is determined by taking the
cross-product of any two adjacent sides of the face. For consistency in checking, when
the viewer looks directly at the face under consideration, the vertex numbering (naming)
scheme for all faces must be in the same direction, say counter-clockwise.
We need to start with a single polygon numbered in the proper order. There are analytical
methods for doing this, but where the user has control over the input, it is much easier to
number the first polygon counter-clockwise when visible to the viewer.
Once this is done, any adjacent polygon will have the common edge traversed in the
opposite direction, and hence no further decision-making will be necessary regarding the
direction of numbering. This logic is particularly helpful in naming the faces away from
the observer and those that are approximately horizontal, eliminating the need for the
observer to imagine himself at all sorts of odd viewing angles.
The algorithm of this method is as follows :
Algorithm for Back Face Detection Method
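A minimal sketch of the visibility test in Python, assuming the viewer looks along the negative z-axis and each face is supplied with its vertices already numbered counter-clockwise as seen from outside the solid (the function names are illustrative only) :

def outward_normal(p0, p1, p2):
    # Cross product of two adjacent edges, (p1 - p0) x (p2 - p1).
    ax, ay, az = p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]
    bx, by, bz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_visible(face):
    # face : list of (x, y, z) vertices in counter-clockwise order.
    # Only the k (z) coefficient of the outward normal matters :
    # positive means the face points towards the viewer.
    return outward_normal(face[0], face[1], face[2])[2] > 0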
[Figure : four views, (a) to (d), of the example solid, with vertices P and Q marked]
Now the outward normals of the four faces, found by taking the cross product of
any two adjacent edges of each polygon, are found to be as follows :
NPQR = 4i + 2j + 6k
NQSR = -2i + 4j + 2k
NPRS = 3i - j - 3k
NPSQ = -5i - 5j - 5k
The sign of the coefficient of k determines the direction in which each normal
is pointing, positive being in the positive direction of the z-axis. In fact, it is
sufficient to compute only the coefficient of k; the coefficients of i and j are not
needed for the visibility decision. The normal vectors are given in full only to
indicate their orientations in Figure 3.4(a), with the two normals with positive
k values being marked in full lines, and the two with negative k values being
marked in broken lines.
Figure 3.4 : Back-Face Detection Worked Example : (a) Normals; and (b) Display
In the present case, the coefficients of k for the faces PQR and QSR are positive,
and hence those two faces are visible, while the other two faces PRS and PSQ with
negative k coefficients are hidden from view. Hence, the display will be as shown
in Figure 3.4(b), with the hidden edge PS shown in thin broken line.
Application of Algorithm
If the algorithm given earlier is applied to the four polygons initially named (say)
as PQR, PQS, PRS, and QRS, we get :
i = 1 : PQR chosen correctly. Confirm : k = 6 > 0, visible, plot.
i = 2 : j = 1 : PQS (P) versus PQR (P), PQ-PQ straight match, reverse PQS to SQP.
k = -5 < 0, not visible. (First vertex repeated at end, shown within brackets.)
i = 3 : j = 1 : PRS (P) versus PQR (P), PR traversed opposite to RP, reverse match, so PRS is kept as named.
k = -3 < 0, not visible.
i = 4 : j = 1 : QRS (Q) versus PQR (P), QR-QR straight match, reverse QRS to SRQ.
k = 2 > 0, visible, plot.
Thus, faces PQR and SRQ are visible, and the other two faces PQS and PRS are
hidden, as already shown.
The computer algorithm checks one face at a time, and plots the edges and fills in the
area only if the k coefficient is positive. If it is desired to show the hidden edges in
broken lines (or with thinner lines or lighter colour), the algorithm may plot the edges of
the hidden polygons first as broken (or thinner or lighter) lines, without filling in the
areas and then plot the visible edges and faces, so that the visible edges will overlap the
earlier hidden-coded line rendering at the common edges.
3.3.3 Z-Buffer Algorithm
Also known as the Depth-Buffer algorithm, this image-space method simply selects for
display the polygon or portion of a polygon that is nearest to the viewer, by checking the
z-value of every pixel. Let us consider the quadrilateral ABCD of Figure 3.5, with the
triangle EFG in front of it, and the rectangle PQRS behind it.
Figure 3.5 : Z-Buffer Algorithm : (a) Front View; and (b) Practical View
As shown in Figure 3.5(a), for any point H in the triangle, its projection H′ on the
quadrilateral will be behind H, and zH will be less than zH′. Hence, the entire triangle
EFG will be visible.
On the other hand, the rectangle PQRS is partially hidden by the quadrilateral, the portion
PQUT being hidden and the portion TURS being visible. This is established
mathematically by the fact that for all points (x, y) within the region PQUT, the Z for the
quadrilateral will be less than the Z for the rectangle, while for all points within the
region TURS, there will be no common (x, y) between the quadrilateral and the rectangle,
leaving only the rectangle portion to be visible.
The rays for the determination of H′, T′, and U′ from H, T, and U are shown in parallel
projection in Figure 3.5(b). The z-buffer memory size by this method is only dependent
on the viewport, and any number of polygons can be handled.
The algorithm proceeds as follows :
Algorithm for Z-Buffer Method
(a) Initialize every pixel in the viewport to the largest value of z, namely z0, the
z-value of the rear clipping plane or “background”. Store all values in the
z-buffer.
(b) Start polygon loop i = 1 to n, n being the number of polygons.
(i) Scan convert each pixel, that is, find the value of z for each (x, y) for
the polygon.
(ii) Compare it with the z′ value stored for that (x, y). If z < z′, then the
new point is closer to the viewer than the previous point, and hence is
visible in front of the previous point; reset z′ to the new value.
(c) End loop on i.
(d) Plot only the final z′ values in the z-buffer for all (x, y) in the viewport, with
the appropriate colour or gray scale and other attributes.
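A minimal sketch of the procedure in Python, assuming scan conversion has already produced the pixel fragments (x, y, z, colour) of every polygon, and following the convention above that a smaller z means a point nearer the viewer :

def z_buffer(fragments, width, height, z_far):
    # fragments : iterable of (x, y, z, colour) tuples; z_far is the
    # z-value of the rear clipping plane (the background).
    depth = [[z_far] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]   # background colour
    for x, y, z, colour in fragments:
        if z < depth[y][x]:            # nearer than what is stored
            depth[y][x] = z
            frame[y][x] = colour
    return frame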
3.3.4 Scan-Line Algorithm
This image-space method for removing hidden surfaces is an extension of the scan-line
algorithm for filling polygon interiors. Instead of filling just one surface, we now deal
with multiple surfaces. As each scan line is processed, all polygon surfaces intersecting
that line are examined to determine which are visible. Across each scan line, depth
calculations are made for each overlapping surface to determine which is nearest to the
view plane. When the visible surface has been determined, the intensity value for that
position is entered into the refresh buffer.
We assume that tables are set up for the various surfaces, which include both an edge
table and a polygon table. The edge table contains coordinate end points for each line in
the scene, the inverse slope of each line, and pointers into the polygon table to identify
the surfaces bounded by each line. The polygon table contains coefficients of the plane
equation for each surface, intensity information for the surfaces, and possibly pointers
into the edge table. To facilitate the search for surfaces crossing a given scan line, we can
set up an active list of edges from information in the edge table. This active list will
contain only edges that cross the current scan line, sorted in order of increasing x. In
addition, we define a flag for each surface that is set on or off to indicate whether a
position along a scan line is inside or outside of the surface. Scan lines are processed
from left to right. At the leftmost boundary of a surface, the surface flag is turned on; and
at the rightmost boundary, it is turned off.
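Where depth calculations are needed, the z value at a pixel follows directly from the plane coefficients stored in the polygon table. A minimal sketch, assuming the plane equation Ax + By + Cz + D = 0 :

def depth_at(plane, x, y):
    # plane = (A, B, C, D) with A*x + B*y + C*z + D = 0 and C != 0.
    A, B, C, D = plane
    return -(A * x + B * y + D) / C

Moving one pixel to the right along a scan line changes this depth by the constant -A/C, so successive depths can be obtained with a single addition rather than a full re-evaluation.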
Figure 3.6 illustrates the scan-line method for locating visible portions of surfaces at
pixel positions along the line. The active list for scan line 1 contains information from the
edge table for edges AB and BC. Between edges AB and BC, only the flag for surface S1
is on. Therefore, no depth
calculations are necessary, and intensity information for surface S1 is entered from the
polygon table into the refresh buffer. Similarly, between edges EH and FG, only the flag
for surface S2 is on. No other positions along scan line 1 intersect surfaces, so the
intensity values in the other areas are set to the background intensity. The background
intensity can be loaded throughout the buffer in an initialization routine.
For scan lines 2 and 3 in Figure 3.6, the active edge list contains edges AD, EH, BC, and
FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But
between edges EH and BC, the flags for both surfaces are on. In this interval, depth
calculations must be made using the plane coefficients for the two surfaces. For this
example, the depth of surface S1 is assumed to be less than that of S2, so intensities for
surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the
flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
We can take advantage of coherence along the scan lines as we pass from one scan line to
the next. In Figure 3.6, scan line 3 has the same active list of edges as scan line 2. Since
no changes have occurred in line intersections, it is unnecessary again to make depth
calculations between edges EH and BC. The two surfaces must be in the same orientation
as determined on scan line 2, so the intensities for surface S1 can be entered without
further calculations.
Figure 3.6 : Scan Lines Crossing the Projection of Two Surfaces, S1 and S2, in the View Plane. Dashed
Lines Indicate the Boundaries of Hidden Surfaces
Any number of overlapping polygon surfaces can be processed with this scan-line
method. Flags for the surfaces are set to indicate whether a position is inside or outside,
and depth calculations are performed when surfaces overlap. When these coherence
methods are used, we need to be careful to keep track of which surface section is visible
on each scan line. This works only if surfaces do not cut through or otherwise cyclically
overlap each other as shown in Figure 3.7. If any kind of cyclic overlap is present in a
scene, we can divide the surfaces to eliminate the overlaps. The dashed lines in this
figure indicate where planes could be subdivided to form two distinct surfaces, so that the
cyclic overlaps are eliminated.
Figure 3.7 : Surfaces that Cut Through or Cyclically Overlap One Another; the Dashed Subdividing Lines Show Where the Planes Could be Split into Two Distinct Surfaces
SAQ 1
(a) Write a program to implement the z buffer to generate shaded images.
(b) Test the back-face removal algorithm by drawing a die. Provide different
views of the die so that each face is displayed at least once.
The x- and y-extents of two polygons will enable decisions to be made on their overlap, and the
z-extent will define their relative positions with respect to the viewer.
(a) Boxes and Polygons Do Not Overlap; (b) Boxes Overlap but Polygons Do Not;
(c) Boxes and Polygons Overlap; (d) Minimax Test of Individual Edges
Figure 3.10 : Minimax Tests for Typical Polygons and Edges
Runtime : O(p × n), where p is the number of pixels and n is the number of polygons.
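A minimal sketch of the minimax (bounding box) overlap test in Python, with each rectangle given as an (xmin, xmax, ymin, ymax) tuple :

def boxes_overlap(a, b):
    # The boxes are disjoint if one lies entirely to the left of,
    # right of, below, or above the other.
    return not (a[1] < b[0] or b[1] < a[0] or a[3] < b[2] or b[3] < a[2])

If the boxes do not overlap, the polygons cannot overlap and no further comparison is needed; if the boxes do overlap, the polygons themselves may still be disjoint, as in Figure 3.10(b), and the edge-level tests must follow.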
3.3.8 Franklin Algorithm
We mentioned how the number of possible comparisons of polygons grows as the square
of the number of polygons in the scene. Many of the hidden-surface algorithms exhibit
this behaviour and have serious performance problems on complex scenes. Franklin
developed an approach which gives linear time behaviour for most scenes. This is done
by overlaying a grid of cells on the scene (similar to Warnock's approach, only these cells
are not subdivided). The size of the cells is on the order of the size of an edge in the
scene. At each cell the algorithm looks for a covering face and determines which edges
are in front of this face. It then computes the intersections of these edges and determines
their visibility. The idea is that as objects are added to the scene and the number of
polygons increases, the new objects will either be hidden by objects already in the scene
or will hide other objects in the scene. While the number of objects increases, the
complexity of the final scene (after hidden portions are removed) does not increase. By
considering only the edges in front of the covering face for a cell, the algorithm considers
only the edges likely to be in the final image. Although the total number of edges may
increase, this increase occurs, for the most part, behind the covering faces, and the
number of edges in front will remain small.
3.3.9 Binary Space Partition
A binary space-partitioning (BSP) tree is an efficient method for determining object
visibility by painting surfaces onto the screen from back to front, as in the painter’s
algorithm. The BSP tree is particularly useful when the view reference point changes, but
the objects in a scene are at fixed positions.
Applying a BSP tree to visibility testing involves identifying surfaces that are “inside”
and “outside” the partitioning plane at each step of the space subdivision, relative to the
viewing direction. Figure 3.12 illustrates the basic concept in this algorithm. With plane
P1, we first partition the space into two sets of objects. One set of objects is behind plane
P1 relative to the viewing direction, and the other set is in front of P1. Since one object is
intersected by plane P1, we divide that object into two separate objects, labeled A and B.
Objects A and C are in front of P1, and objects B and D are
behind P1. We next partition the space again with plane P2 and construct the binary tree
representation shown in Figure 3.12(b). In this tree, the objects are represented as
terminal nodes, with front objects as left branches and back objects as right branches.
Figure 3.12 : A Region of Space (a) is Partitioned with Two Planes P1 and P2 to form the BSP Tree
Representation in (b)
For objects described with polygon facets, we choose the partitioning planes to coincide
with the polygon planes. The polygon equations are then used to identify “inside” and
“outside” polygons, and the tree is constructed with one partitioning plane for each
polygon face. Any polygon intersected by a partitioning plane is split into two parts.
When the BSP tree is complete, we process the tree by selecting the surfaces for display
in the order back to front, so that foreground objects are painted over the background
objects. Fast hardware implementations for constructing and processing BSP trees are
used in some systems.
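A minimal sketch of the back-to-front traversal in Python, assuming each tree node stores its partitioning plane as the coefficients (A, B, C, D) of Ax + By + Cz + D = 0, the polygons lying on that plane, and its front and back subtrees :

def bsp_paint(node, eye, out):
    # Appends polygons to out in back-to-front order as seen from eye,
    # so painting them in this order overwrites hidden surfaces.
    if node is None:
        return
    (A, B, C, D), polys, front, back = node
    eye_side = A * eye[0] + B * eye[1] + C * eye[2] + D
    if eye_side > 0:                  # eye is on the front side
        bsp_paint(back, eye, out)     # far half first
        out.extend(polys)
        bsp_paint(front, eye, out)    # near half last
    else:
        bsp_paint(front, eye, out)
        out.extend(polys)
        bsp_paint(back, eye, out)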
3.3.10 Area Subdivision Method
In this method, the viewport is examined for clear decisions on the polygons situated in
it, in regard to their overlap and visibility to the viewer.
For the visibility of a polygon within the viewport, its x- and y-extents are compared with
the horizontal and vertical spans of the viewport, and classified as one of the following,
with respect to Figure 3.13, in which the x-y extents of the polygon are marked by its
bounding rectangle :
(a) Surrounding
(b) Contained
(c) Disjoint
(d) Overlapping or Intersecting
Surrounding
A polygon surrounds a viewport if it completely encloses or covers the viewport.
This happens if none of its sides cuts any edge of the viewport. It is not sufficient
if the x-y extent encloses the viewport, because even with the same extent, if the
right edge of the viewport were to be shifted a little more to the right as shown
dotted in Figure 3.13(a), the polygon’s right edge would cut the top and right
edges of the viewport. If the surrounding polygon is the closest to the viewer, the
entire viewport is covered by the colour and other attributes of the polygon.
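A minimal sketch in Python of the extent-based part of this classification; as noted above, extent enclosure alone cannot confirm the surrounding case, so that result is only a candidate pending the edge tests :

def classify_extents(poly, view):
    # poly, view : (xmin, xmax, ymin, ymax) extents.
    px0, px1, py0, py1 = poly
    vx0, vx1, vy0, vy1 = view
    if px1 < vx0 or vx1 < px0 or py1 < vy0 or vy1 < py0:
        return "disjoint"
    if vx0 <= px0 and px1 <= vx1 and vy0 <= py0 and py1 <= vy1:
        return "contained"
    if px0 <= vx0 and vx1 <= px1 and py0 <= vy0 and vy1 <= py1:
        return "surrounding (candidate; edge tests must confirm)"
    return "overlapping or intersecting"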
At the zero-th level, namely for the entire viewport, the only decision that can be made is
that the x-y extent of the triangle does not intersect the extent of any of the other
polygons, and hence can be declared visible.
The subdivision of the viewport and further analysis are illustrated in Figure 3.15. In the
figure, the z locations of the five objects are schematically indicated on the left, with T
being closest to the viewer.
Figure 3.15 : Subdivision Method Worked Example : (a) Stages and Decisions; (b) Details for Stages
III and IV; (c) Final Plot; and (d) z Locations
It has to be recognized that, in a real-life problem, the resolution may be of the order of
1024 by 768, and many lines and surfaces will have to be reduced to the pixel level for
decisions. However, the decisions themselves are very basic, involving only the
identification of the attributes of the regions considered, with a few simple computations.
The procedure is summarized in Table 3.1. If two polygons overlap or intersect, their
edge crossings and penetration lines will have to be determined by computation, and then
the algorithm applied.
Table 3.1
Level Segment Polygon Decision
0 All T Contained. No other overlap. Visible.
I 1 Q Extent Contained. Visible.
2, 3, 4 Q, S, H, V Cannot decide. Further Subdivide.
II 2.1, 2.2 Q Contained, Visible.
2.3 Q Surrounding, Visible.
2.4 Q Surrounding, Visible.
2.4 S Contained, Behind Q. Hidden.
3.1, 3.3 None Background colour.
3.2 QH Q Surrounding, Visible. H Contained, in front of Q. Visible.
3.4 Q Contained, Visible.
4.1 QH Q Surrounding, Visible. H Contained, in front of Q. Visible.
4.2 QV Q Surrounding, Visible. V Contained, in front of Q. Visible.
4.3 Q Contained, Visible.
4.4 Q, V Cannot decide. Further Subdivide.
III 4.4.1 Q, V Q Contained, Visible. V Surrounding, in front of Q, Visible, covers Q.
4.4.2 Q, V Cannot decide. Further Subdivide.
4.4.3 V Surrounding, Visible.
4.4.4 Q, V Q Contained, Visible. V Surrounding, in front of Q, Visible, covers Q.
IV 4.4.2.1 Q, V Both Surrounding. V in front of Q, Visible, covers Q.
4.4.2.2 Q Surrounding, Visible.
4.4.2.3 V Surrounding, Visible.
4.4.2.4 None Background colour.
SAQ 2
(a) Some special cases, which cause problems for some hidden-surface
algorithms, are penetrating faces and cyclic overlap. A penetrating face
occurs when polygon A passes through polygon B. Cyclic overlap occurs
when polygon A is in front of polygon B, which is in front of polygon C,
which is in front of Polygon A. Actually, we need only two polygons for
cyclic overlap; imagine a rectangle threaded through a polygon shaped like
the letter C so that it is behind the top of the C but in front of the bottom
part. For the various hidden-surface methods we have presented, discuss
whether or not they can handle penetrating faces and cyclic overlap.
(b) (i) Show that no polygon subdivision takes place in applying the binary
space partition method to a convex object.
(ii) For the case of convex object compare the cost of the back-face
removal method with that of the binary space partition method for a
single view.
(iii) Suppose we wish to display a sequence of views of a convex object.
How would the cost of using back-face removal compare to the binary
space partition scheme?
(c) Modify the back-face algorithm for unfilled polygons so that instead of
removing back faces it draws them in a less pronounced line style (e.g., as
dashed lines).
(d) Test the painter’s algorithm by showing several filled polygons with
different interior styles and different states of overlap, entered in mixed
order.
(e) Test the painter’s algorithm by showing two houses composed of filled
polygons with different interior styles. Select a view such that one house
partially obscures the other house.
(f) Sketch the minimax boxes for the tangent polygons shown in figure. What
conclusions can you make?
(g) Find the face and priority lists of scenes in figure. Assume that back faces
have been removed to simplify the problem.
3.4.1 Illumination
The colour or shade that a surface appears to the human eye depends primarily on three
factors :
• Colour and strength of incoming illumination
• Colour and texture (rough/smooth) of the surface
• Relative positions and orientations of the surface, light source and observer
3.4.2 Simplifying Assumptions
Neglect colour - consider Intensity: For now we shall forget about colour and restrict our
discussion just to the intensity of light. We shall consider white light where the intensity
of all colour components (red, green and blue) is equal. This will give us a monochrome
(black and white) picture.
Intensity and the 1/r² Effect : Usually a light source is a mixture of diffuse background
illumination and one or more point sources of light. For the purpose of this discussion
(mainly to simplify the mathematics) we shall assume that the light source we are
considering is at a large distance from the scene.
This has two effects :
• All light rays from the source are (virtually) parallel
• There is no change in the intensity of the light across the scene, i.e. there is
no fall off of intensity as a 1/r² effect.
In this simple diagram (Figure 3.17), the intensity of light falling on a surface can be thought of simply
as the number of rays which hit it. It can be seen that more rays fall on cube A than on
cube B and it can quite easily be shown that the number is proportional to the reciprocal
of the square of the distance between the light source and the object (i.e. 1/r² where r is
the distance between source and surface).
Parallel Rays : It can also be seen in Figure 3.17 that the light rays crossing cube B are
nearly parallel whereas the rays crossing cube A are highly divergent. This means for
distant illumination, there is little variation in intensity between one side of an object and
the other (which means we only need to do one calculation of intensity for the whole
surface), whereas this is not true for close illumination. If the need exists to implement a
physically accurate illumination model, we could not make this assumption and would
have to take account of these effects, but for most purposes, the simple model will
suffice.
3.4.3 Components of Illumination
Consider now the light reaching the eye of an observer of a scene:
The light reaching the eye when looking at a surface has clearly come from a source (or
sources) of illumination and bounced off the surface. In fact the light reaching the eye
can be considered as being made up of 3 different components :
• that from diffuse illumination (incident rays come from all over, not just one
direction)
• that from a point source which is scattered diffusely from the surface
• that from a point source which is specularly reflected.
We will consider each of these components separately and then combine them into one.
3.4.4 Diffuse Illumination
Diffuse illumination means light that comes from all directions not from one particular
source. Think about the light of a grey cloudy day as compared to a bright sunny one :
On a cloudy day, there are no shadows cast, the light from the sun is scattered by the
clouds and seems to come equally from all directions.
[Figure : variation of the specular reflection term cosⁿ s with the angle s from the surface, shown for n = 1 and n = 10]
[Figure : actual intensity versus perceived intensity across adjacent polygons]
In order to achieve this, the colour must be calculated for each pixel instead of one colour
for the entire polygon. By ensuring that the method we use to calculate the colour results
in the neighbouring pixels across the border between two polygons end up with
approximately the same colours, we will be able to blend the shades of the two polygons
and avoid the sudden discontinuity at the border.
Lambert shading is based upon calculating a single normal vector for the surface (which
is then compared to the lighting vector and the viewpoint vector to determine the colour).
Gouraud shading is based upon calculating a vertex normal rather than a surface normal.
A vertex normal is an artificial construct (a true normal cannot exist for a point such as a
vertex). A vertex normal can be thought of as the average of the normals of all the
polygons that share that vertex.
nv = Σ ni / | Σ ni | (sums taken over the N faces, i = 1 to N) . . . (3.5)
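A minimal sketch of Eq. (3.5) in Python, where face_normals holds the normals of every polygon sharing the vertex :

def vertex_normal(face_normals):
    # Sum the normals of all faces meeting at the vertex and
    # renormalise the result to unit length (Eq. (3.5)).
    sx = sum(n[0] for n in face_normals)
    sy = sum(n[1] for n in face_normals)
    sz = sum(n[2] for n in face_normals)
    m = (sx * sx + sy * sy + sz * sz) ** 0.5
    return (sx / m, sy / m, sz / m)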
Having found the vertex normals for each vertex of the polygon we want to shade (Figure
3.26), we can calculate the colour at each vertex using the same formula that we did for
Lambert shading. Calculating the colour for all the remaining pixels in the polygon is
simply a matter of interpolating from the vertices, i.e. if you are half-way along one of the
edges, the colour value needs to be half-way between the colour values at the ends of the
edge. A value for the colour can be given more formally by considering a scan-line
through the polygon. Where the scan line at height yscan crosses the edge between
vertices b and c, the intensity is

Is1 = Ic - (Ic - Ib) (yc - yscan) / (yc - yb)

and, with Is1 and Is2 the intensities at the two edge crossings, the intensity at an interior
point P of the scan line is

IP = Is2 - (Is2 - Is1) (xs2 - xP) / (xs2 - xs1)
By performing 3 separate calculations, one for red, one for green and one for blue, a
complete colour value can be achieved.
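A minimal sketch of the two interpolation steps in Python, applied once per colour channel :

def edge_intensity(I_c, y_c, I_b, y_b, y_scan):
    # Intensity where the scan line crosses the edge from vertex c to b.
    return I_c - (I_c - I_b) * (y_c - y_scan) / (y_c - y_b)

def span_intensity(I_s1, x_s1, I_s2, x_s2, x_p):
    # Intensity at an interior point P between the two edge crossings.
    return I_s2 - (I_s2 - I_s1) * (x_s2 - x_p) / (x_s2 - x_s1)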
3.5.2 Phong Shading
Phong shading too is based on interpolation, but instead of interpolating the colour value,
it is the normal vector, which is interpolated for each point and a colour value calculated
for each pixel based on the interpolated value of the normal vector.
The interpolation is (like Gouraud shading) based upon calculating the vertex normals
(red arrows in Figure 3.28), using these as the basis for interpolation along the polygon
edges (blue arrows) and then using these as the basis for interpolating along a scan line to
produce the internal normals (green vectors).
Phong shading allows us to counteract the fact that we are using a flat surface to
approximate to a curved one.
The arrows (and thus the interpolated vectors) give an indication of the curvature of the
smooth surface, which the flat polygon is approximating to.
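A minimal sketch of the per-pixel interpolation in Python; the interpolated vector is renormalised before being used in the shading calculation :

def interp_normal(n1, n2, t):
    # Linear interpolation between two unit normals, t in [0, 1],
    # followed by renormalisation to unit length.
    x = n1[0] + (n2[0] - n1[0]) * t
    y = n1[1] + (n2[1] - n1[1]) * t
    z = n1[2] + (n2[2] - n1[2]) * t
    m = (x * x + y * y + z * z) ** 0.5
    return (x / m, y / m, z / m)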
3.5.3 Comparison of Gouraud and Phong Shading
Phong shading requires more calculations, but produces better results for specular
reflection than Gouraud shading in the form of more realistic highlights.
Consider the specular reflection term in Eq. (3.4) :
cosⁿ s
If n is large (the surface is a good smooth reflector) and one vertex has a very small value
of s (it is reflecting the light ray in the direction of the observer) whilst the rest of the
vertices have large values of s – a highlight occurs somewhere on our polygon.
With Gouraud shading, nowhere on the polygon can have a brighter colour (i.e. higher
value) than a vertex so unless the highlight occurs on or near a vertex, it will be missed
out altogether. When it is near a vertex, its effect is spread over the whole polygon. With
Phong shading however, an internal point may indeed have a higher value than a vertex
and the highlight will occur tightly focused in the (approximately) correct position.
3.6 COLOURING
The use of colours in CAD/CAM has two main objectives : to facilitate the creation of
geometry and to display images. Colours can be used in geometric construction. In this
case, various wireframe, surface, or solid entities can be assigned different colours to
distinguish them.
Colour is one of the two main ingredients (the second being texture) of shaded images
produced by shading algorithms. In some engineering applications such as finite element
analysis, colours can be used effectively to display contour images such as stress or heat-
flux contours.
Black and white raster displays provide achromatic colours while colour displays (or
television sets) provide chromatic colour. Achromatic colors are described as black,
various levels of gray (dark or light gray), and white. The only attribute of achromatic
light is its intensity, or amount. A scalar value between 0 (as black) and 1 (as white) is
usually associated with the intensity. Thus, a medium gray is assigned a value of 0.5. For
multiple-plane displays, different levels (scale) of gray can be produced. For example,
256 (2⁸) different levels of gray (intensities) per pixel can be produced for an eight-plane
display. The pixel value Vi (which is related to the voltage of the deflection beam) is
related to the intensity level Ii by the following equation :
Vi = (Ii / C)^(1/γ) . . . (3.6)
The values C and γ depend on the display in use. If the raster display has no lookup table,
Vi (e.g., 00010111 in an eight-plane display) is placed directly in the proper pixel. If there
is a table, i is placed in the pixel and Vi is placed in entry i of the table. Use of the lookup
table in this manner is called gamma correction, after the exponent in
Eq. (3.6).
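A minimal sketch of Eq. (3.6) in Python; the constants C and gamma are display-dependent, and the value 2.2 below is only a typical CRT figure used for illustration :

def pixel_value(intensity, C=1.0, gamma=2.2):
    # Vi = (Ii / C) ** (1 / gamma), Eq. (3.6).
    return (intensity / C) ** (1.0 / gamma)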
Figure 3.31 : Gamma Correction of RGB Values
C = rR + gG + bB . . . (3.9)
where C is the colour of the resulting light, and r, g and b are the relative amounts of the
red, green and blue primaries R, G and B.
Note how the primary colours define unit vectors along the axes. The three corners
opposite R, G, and B are cyan (C), magenta (M), and yellow (Y), the basis of the CMY
model. The line connecting black (0,0,0) and white (1,1,1) is the gray scale line.
Figure 3.32 illustrates the coordinates and colours of the corners of the RGB colour cube.
Most light, however, can be represented by a 3-D colour vector which terminates at some
arbitrary point in the interior of the cube. To understand the additional shadings possible
with the colour cube representation, consider the shadings possible on the surface of the
cube. In Figure 3.33, a transformed view of the colour cube is presented in which sub-
cubes interpolate the colour between the four corners of the cube.
The RGB colour model is particularly important because it is the basis for control of
most colour monitors. For this reason it is also the preferred colour model for graphics
languages and image processing programs. A typical interactive RGB colour picker for
selecting the three colour coordinates is shown in Figure 3.34.
Figure 3.34 : A Typical Interactive RGB Colour Picker
Figure 3.35 shows an interactive colour picker supporting both the RGB colour model
and the HSV (hue, saturation, value) colour model. The user can select any hue from the
colour wheel, either by pointing and clicking or by numerical control of the RGB arrows.
The brightness is controlled by the slide control along the right-hand side.
In the printing trade this model is frequently called the CMYK model in which the K
stands for black. The reason for black is that, although theoretically the correct mixture
of cyan, magenta, and yellow ink should absorb all the primary colours and print as
black, the best that can be achieved in practice is a muddy brown. Therefore, printers like
the Hewlett-Packard Paint Jet have a separate cartridge for black ink in addition to the
cyan, magenta, and yellow ink cartridge(s).
The CMY model can also be represented as a colour cube as shown in Figure 3.36.
Figure 3.36 : The CMY Colour Cube, with White at the Origin (0, 0, 0) and the Grey Axis Running along the Main Diagonal
Each corner is labeled with its (c, m, y) coordinates. Note that the RGB colour cube is
transformed into a CMY colour cube by interchanging colours across the major
diagonals.
One can understand the subtractive nature of the CMY model in the following sense.
When white light falls on a white page, virtually all the light is reflected and so the page
appears white. If white light strikes a region of the page, which has been printed with
cyan ink, however, the ink absorbs the red portion of the spectrum and only the green and
blue portions are reflected. This mixture of reflected light appears as the cyan hue.
In terms of the CMY colour cube coordinates, one can think of the origin, (0, 0, 0), as
three colour filters with a tint so faint that they appear as clear glass. In terms of
absorbing inks, the origin corresponds to pastel shades of cyan, magenta, and yellow so
faint as to appear white. As one moves up along the M axis from (0, 0, 0) towards
(0, 1, 0), it corresponds to turning the density of a tinted filter up towards the maximum
possible. In terms of inks, this motion up the M axis corresponds to moving from a pale
pastel towards a pure magenta. If one uses all three filters in sequence (or a mixture of C,
M, and Y inks), eventually all light is absorbed as one gets to pure colours of filters or
inks. This is point (1, 1, 1).
The RGB and CMY colour cubes are useful in expressing the transformations between
the two colour models. Suppose, for instance, that we know a certain ink may be
specified by the CMY coordinates, (C, M, Y), and we would like to know what mixture
of light, specified as (R, G, B) in the RGB cube, is reflected. Looking at Figure 3.38 we
note the following 3-D vector relationships:
The inverse transformation can be thought of as solving the following problem : Given
light of a certain colour, (R, G, B), reflected from a page illuminated with white light,
what mixture of ink, (C, M, Y), is required? Using Figure 3.32, we can write a similar set
of vector equations with White substituted for Black, since, on the RGB colour cube,
white has coordinates (1, 1, 1).
The CMYK colours are the Process Colours of offset printing. Several image processing,
drawing, and desktop publishing programs now have the capability of the colour
separation of colored images. The process of colour separation involves producing four
black-and-white images (or negative images) corresponding to the four colours, cyan,
magenta, yellow, and black. These separations are then used photographically to produce
the four plates for each of the four inks of the offset press. To produce the final colour
image, each sheet is printed separately with each of the
four-colour plates. Since alignment is critical, accurate crosshairs are printed on each of
the four-colour negatives to assist the printers in achieving good colour registry.
Introduction to Computer We can express the conversion from an RGB representation to a CMY representation
and Computer Graphics
with the matrix transformation
[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]
where the white is represented in the RGB system as the unit column vector. Similarly,
we convert from a CMY colour representation to an RGB representation with the matrix
transformation
[R]   [1]   [C]
[G] = [1] - [M]
[B]   [1]   [Y]
where black is represented in the CMY system as the unit column vector.
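A minimal sketch of both transformations in Python, with all components in the range 0 to 1 :

def rgb_to_cmy(r, g, b):
    # (C, M, Y) = (1, 1, 1) - (R, G, B)
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    # (R, G, B) = (1, 1, 1) - (C, M, Y)
    return (1.0 - c, 1.0 - m, 1.0 - y)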
3.6.4 YIQ Colour Model
Whereas an RGB monitor requires separate signals for the red, green, and blue
components of an image, a television monitor uses a single composite signal. The
National Television System Committee (NTSC) colour model for forming the composite
video signal is the YIQ model.
In the YIQ colour model, luminance (brightness) information is contained in the Y
parameter, while chromaticity information (hue and purity) is incorporated into the I and
Q parameters. A combination of red, green, and blue intensities is chosen for the Y
parameter to yield the standard luminosity curve. Since Y contains the luminance
information, black-and-white television monitors use only the Y signal. The largest
bandwidth in the NTSC video signal (about 4 MHz) is assigned to the Y information.
Parameter I contains orange-cyan hue information that provides the flesh-tone shading,
and occupies a bandwidth of approximately 1.5 MHz. Parameter Q carries
green-magenta hue information in a bandwidth of about 0.6 MHz.
An RGB signal can be converted to a television signal using an NTSC encoder, which
converts RGB values to YIQ values, then modulates and superimposes the I and Q
information on the Y signal. The conversion from RGB values to YIQ values is
accomplished with the transformation
[Y]   [0.299   0.587   0.114] [R]
[I] = [0.596  -0.275  -0.321] [G]
[Q]   [0.212  -0.528   0.311] [B]
This transformation is based on the NTSC standard RGB phosphor, whose chromaticity
coordinates were given in the preceding section. The larger proportions of red and green
assigned to parameter Y indicate the relative importance of these hues in determining
brightness, compared to blue.
An NTSC Video signal can be converted to an RGB signal using an NTSC decoder,
which separates the video signal into the YIQ components, then converts to RGB values.
We convert from YIQ space to RGB space with the inverse matrix transformation from
above Equation.
[R]   [1.000   0.956   0.620] [Y]
[G] = [1.000  -0.272  -0.647] [I]
[B]   [1.000  -1.108   1.705] [Q]
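A minimal sketch of the forward conversion in Python, using the coefficients above :

def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.528 * g + 0.311 * b
    return (y, i, q)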
3.6.5 HSV Colour Model
Instead of a set of colour primaries, the HSV model uses colour descriptions that have a
more intuitive appeal to a user. To give a colour specification, a user selects a spectral
colour and the amounts of white and black that are to be added to obtain different shades,
tints, and tones. Colour parameters in this model are hue (H), saturation (S), and value
(V).
Figure 3.37 : When the RGB Colour Cube (a) is Viewed along the Diagonal from White to Black, the
Colour-cube Outline is a Hexagon (b)
The three-dimensional representation of the HSV model is derived from the RGB cube. If
we imagine viewing the cube along the diagonal from the white vertex to the origin
(black), we see an outline of the cube that has the hexagon shape shown in Figure 3.37(b).
The boundary of the hexagon represents the various hues, and it is used as the top of the
HSV hexcone (Figure 3.38). In the hexcone, saturation is measured along a horizontal
axis, and value is along a vertical axis through the centre of the hexcone.
Hue is represented as an angle about the vertical axis, ranging from 0° at red through
360°. Vertices of the hexagon are separated by 60° intervals. Yellow is at 60°, green at
120°, and cyan opposite red at H = 180°. Complementary colours are 180° apart.
Figure 3.39 : Cross-section of the HSV Hexcone, Showing Regions for Shades, Tints, and Tones
Saturation S varies from 0 to 1. It is represented in this model as the ratio of the purity of
a selected hue to its maximum purity at S = 1. A selected hue is said to be one-quarter
pure at the value S = 0.25. At S = 0, we have the gray scale. Value V varies from 0 at the
apex of the hexcone to 1 at the top. The apex represents black. At the top of the hexcone,
colours have their maximum intensity. When V = 1 and S = 1, we have the “True” hues.
White is the point at V = 1 and S = 0.
This is a more intuitive model for most users. Starting with a selection for a pure hue,
which specifies the hue angle H and sets V = S = 1, we describe the colour we want in
terms of adding either white or black to the pure hue. Adding black decreases the setting
for V while S is held constant. To get a dark blue, V could be set to 0.4 with S = 1 and
H = 240°. Similarly, when white is to be added to the hue selected, parameter S is
decreased while keeping V constant. A light blue could be designated with S = 0.3 while
V = 1 and H = 240°. By adding some black and some white, we decrease both V and S.
An interface for this model typically presents the HSV parameter choices in a colour
palette.
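Python's standard colorsys module provides this model directly; a small sketch tying it to the dark blue example above (colorsys expresses H, S and V as fractions of 1, so H = 240° becomes 240/360) :

import colorsys

# Pure blue : H = 240 degrees, S = 1, V = 1.
h, s, v = colorsys.rgb_to_hsv(0.0, 0.0, 1.0)
print(h * 360, s, v)                      # approximately 240.0 1.0 1.0

# The dark blue from the text : H = 240 degrees, S = 1, V = 0.4.
r, g, b = colorsys.hsv_to_rgb(240 / 360, 1.0, 0.4)
print(r, g, b)                            # approximately (0.0, 0.0, 0.4)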
Colour concepts associated with the terms shades, tints, and tones are represented in a
cross-sectional plane of the HSV hexcone (Figure 3.39). Adding black to a pure hue
decreases V down the side of the hexcone; thus, various shades are represented with
values S = 1 and 0 ≤ V ≤ 1. Adding white to a pure tone produces different tints across
the top plane of the hexcone, where parameter values are V = 1 and 0 ≤ S ≤ 1. Various
tones are specified by adding both black and white, producing colour points within the
triangular cross-sectional area of the hexcone.
The human eye can distinguish about 128 different hues and about 130 different tints
(saturation levels). For each of these, a number of shades (value settings) can be detected,
depending on the hue selected. About 23 shades are discernible with yellow colours, and
about 16 different shades can be seen at the blue end of the spectrum. This means that we
can distinguish about 128 × 130 × 23 = 382,720 different colours. For most graphics
applications, 128 hues, 8 saturation levels, and 15 value settings are sufficient. With this
range of parameters in the HSV colour model, 16,384 colours would be available to a
user, and the system would need 14 bits of colour storage per pixel. Colour lookup tables
would be used to reduce the storage requirements per pixel and to increase the number of
available colours.
3.6.6 HLS Colour Model
This model has the double-cone representation shown in Figure 3.40. The three colour
parameters in this model are called hue (H), lightness (L), and Saturation (S).
Hue has the same meaning as in the HSV model. It specifies an angle about the vertical
axis that locates a chosen hue. In this model, H = 0° corresponds to blue. The remaining
colours are specified around the perimeter of the cone in the same order as in the HSV
model. Magenta is at 60°, red is at 120°, and cyan is located at H = 180°. Again,
complementary colours are 180° apart on the double cone.
The vertical axis in this model is called lightness, L. At L = 0, we have black, and white
is at L = 1. Gray scale is along the L axis, and the “pure hues” lie on the L = 0.5 plane.
Saturation parameter S again specifies relative purity of a colour. This parameter varies
from 0 to 1, and pure hues are those for which S = 1 and L = 0.5. As S decreases, the hues
are said to be less pure. At S = 0, we have the gray scale.
As in the HSV model, the HLS system allows a user to think in terms of making a
selected hue darker or lighter. A hue is selected with hue angle H, and the desired shade,
tint, or tone is obtained by adjusting L and S. Colours are made lighter by increasing L
and made darker by decreasing L. When S is decreased, the colours move toward gray.
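colorsys also supports this model; note that, unlike the convention described above, colorsys places red rather than blue at H = 0 :

import colorsys

# Pure blue : lightness L = 0.5 and saturation S = 1.
h, l, s = colorsys.rgb_to_hls(0.0, 0.0, 1.0)
print(h * 360, l, s)                      # approximately 240.0 0.5 1.0

# A lighter blue : raise L while keeping H and S fixed.
r, g, b = colorsys.hls_to_rgb(240 / 360, 0.75, 1.0)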
3.7 SUMMARY
Hidden line and surface removal eliminates the ambiguities of the displays of
three-dimensional models and is considered the first step toward visual realism.
A large number of techniques have been developed for removing hidden surfaces and
lines, which is a fundamental problem in computer graphics. Hidden surface removal
(HSR) algorithms are complex and take long execution times, so great effort has been
devoted to improving their efficiency. Though wire-frame views of objects can be drawn
very rapidly, it is difficult to interpret them when several objects in a scene overlap. Visual
realism is greatly enhanced when the faces of the objects are filled with some colours and
the surfaces that should be hidden are removed.
This unit discusses various algorithms for hidden line and hidden surface removal.
The pictures that are rendered by filling the faces of the object with colours and removing
the hidden surfaces still do not give the impression of objects residing in a scene,
illuminated by light source. A shading model describes how light reflects off a surface,
depending on the nature of the surface and its orientation to both the light source and the
camera’s eye. Shading procedures such as Gouraud and Phong shading are discussed.
This unit also discusses the nature of colour and how to represent it in computer graphics
through the use of various colour models.
FURTHER READINGS
David F. Rogers and J. Alan Adams (2002), Mathematical Elements of Computer
Graphics, TMH.
David F. Rogers (2002), Procedural Elements for Computer Graphics, TMH.
Glen Mullineux (1986), CAD : Computational Concepts, Macmillan Publishing,
New York.
Ibrahim Zeid (1991), CAD/CAM Theory and Practice, McGraw-Hill, New York.
Mikell P. Groover (1984), CAD/CAM : Computer Aided Design and Manufacturing,
Prentice Hall, Englewood Cliffs.
Michael E. Mortenson (1990), Computational Geometry, Industrial Press, New York.
P. N. Rao (2004), CAD/CAM : Principles and Applications, Tata McGraw-Hill.
INTRODUCTION TO COMPUTER AND COMPUTER GRAPHICS
In this block, the basics of computer aided design are discussed. This block consists of
three units.
Unit 1 deals with the hardware aspects necessary to know for computer aided design. It
introduces various display devices and input/output devices.
After discussing the current computer hardware for CAD in Unit 1, Unit 2 describes the
concepts of computer graphics applicable to computer aided design. In the beginning,
the concept of 2-D transformations has been discussed, followed by 3-D transformations.
Various types of projections have also been detailed.
Finally, in Unit 3, visual realism fundamentals have been discussed. This unit discusses
various hidden line and surface removal algorithms, like the z-buffer algorithm, back-face
detection method and painter’s algorithm, along with their applications. Also, various
methods of rendering, colouring and shading are elaborated.
COMPUTER AIDED DESIGN
This course consists of two blocks and contains seven units.
Block 1 consists of three units, which cover topics like display devices along with a
detailed discussion on raster refresh. Input and output devices are also detailed. Unit 2
covers 2-dimensional transformations, including reflection and the geometric
interpretation of homogeneous coordinates. This is further extended to 3-dimensional
transformations and projections. Unit 3 of this block, on visual realism, is devoted to the
concepts of hidden line and surface removal algorithms. After this, methods of rendering,
colouring and shading are also described.
In Block 2, there are four units. The first unit, i.e. Unit 4, describes the geometric
modeling of curves. In this unit, curve definition procedures for both explicit and
parametric representations are presented. Curve definition techniques also include the use
of conic sections, circular arc interpolation, cubic splines, parabolic blending, Bezier
curves and curves based on B-splines. In the next unit of this block, i.e. Unit 5, methods
to generate surfaces are discussed, whether the model already exists or is to be created
from scratch. Surface modeling is followed by solid modeling. Various methods of solid
modeling along with the methods to create 3-dimensional solid models are described. In
the last unit, CAD/CAM data exchange concepts are given. Here, different types of
interfaces along with the details of UPR architecture are detailed.