Unit - 3 Three Dimensional Graphics
Three dimensional computer graphics uses a three – dimensional representation of geometric data that is stored in the computer for the purpose of performing calculations and rendering 2D images.
In a three dimensional model we can view the structure from different viewpoints. 3D graphics programs allow objects to be created on an X – Y – Z scale (width, height, depth).
For example, the architect can view the building structure from different angles or the automobile
engineer can view the automobile design in different views. Every point in 3D is located using three
coordinate values instead of two coordinates. In order to define points with three coordinates, we define
a third axis, called the Z – axis.
Three Dimensional Co-ordinate system
The conventional orientation for the coordinate axes in a 3D Cartesian reference system is shown below. This is called a right – handed system because the right hand thumb points in the positive Z – axis direction.
If the thumb of the right hand points in the positive z direction as one curls the fingers around the z – axis from the positive x – axis to the positive y – axis (through 90 degrees), then the coordinates form a right handed system.
If the thumb of the left hand points in the positive z direction when we imagine grasping the z – axis, so that the fingers of the left hand curl from the positive x – axis to the positive y – axis through 90 degrees, then the coordinates form a left handed system.
Note: Normally the right handed system is used in mathematics, but in computer graphics the left handed system is often used, since objects behind the display screen then have positive z values.
Three Dimensional Display Techniques: On a graphics display it is impossible to produce an image that is a perfectly realistic representation of an actual scene.
The factors that should be taken into account are:
1. The different kinds of aspects needed by the application.
2. The amount of processing required to generate the image.
3. The capability of the display hardware.
4. The amount of detail recorded in the image.
5. The perceptual effects of the image on the observer.
The basic technique for achieving realism in three dimensional graphics is projection. Projection can be defined as the mapping of a point P(x, y, z) onto its image P′(x′, y′, z′) in the projection plane or view plane. There are two types of projection:
1. Parallel Projection
2. Perspective Projection
1. Parallel Projection: Parallel projection generates a view of a solid object by projecting points on the object surface along parallel lines onto the display plane. By selecting different viewing positions, we can project visible points of the object onto the display plane and obtain different two dimensional views of the object.
In a parallel projection, parallel lines in the world coordinate scene project as parallel lines on the 2D display plane. The parallel projection technique is used in engineering and architectural drawings to represent an object with a set of views.
Parallel projection preserves relative proportions of objects. Accurate views of the various sides of an
object are obtained with a parallel projection, but this does not give us a realistic representation of the
appearance of a 3D object.
2. Perspective Projection: In a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point.
This causes objects farther from the viewing position to be displayed smaller than the original objects, while closer objects appear near their true size; a distant aeroplane or ship, for example, appears as a small shape on the view plane.
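A minimal sketch of a perspective projection, assuming the projection reference point is at the coordinate origin and the view plane is at z = d (both assumptions made for illustration; the text does not fix a particular setup):

```python
def perspective_project(point, d=1.0):
    """Project (x, y, z) onto the view plane z = d.

    The projection reference point is assumed at the origin, so
    x' = d*x/z and y' = d*y/z. Assumes z > 0 (the point lies in
    front of the projection reference point).
    """
    x, y, z = point
    return (d * x / z, d * y / z)

# The same-sized object appears smaller when it is farther away.
print(perspective_project((2.0, 2.0, 4.0)))   # (0.5, 0.5)
print(perspective_project((2.0, 2.0, 10.0)))  # (0.2, 0.2)
```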
Depth Cueing: A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance from the viewing position. The lines closest to the viewing position are displayed with the highest intensities, and lines farther away are displayed with decreasing intensities.
Depth cueing is applied by choosing maximum and minimum intensity values and a range of distances over which the intensities are to vary.
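A sketch of depth cueing as just described; the linear interpolation between the chosen minimum and maximum intensities is an assumed choice, since the text only requires that intensities vary over a chosen range of distances:

```python
def depth_cue(base_intensity, distance, d_min, d_max,
              i_min=0.2, i_max=1.0):
    """Scale a line's intensity by its distance from the viewer.

    Lines at distance d_min (or nearer) keep the full i_max factor;
    lines at d_max (or farther) are dimmed to i_min.
    """
    # Clamp the distance into the chosen cueing range.
    distance = max(d_min, min(d_max, distance))
    t = (d_max - distance) / (d_max - d_min)  # 1 = near, 0 = far
    return base_intensity * (i_min + t * (i_max - i_min))

print(depth_cue(1.0, distance=1.0, d_min=1.0, d_max=10.0))   # 1.0
print(depth_cue(1.0, distance=10.0, d_min=1.0, d_max=10.0))  # 0.2
```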
Visible Line and Surface Identification:
➢ The simplest way to identify the visible lines is to highlight them or to display them in a different color.
➢ Another method is to display the non-visible lines as dashed lines.
Surface Rendering: Surface rendering methods are used to add a degree of realism to a displayed scene.
• Realism is attained in displays by setting the surface intensity of objects according to the lighting
conditions in the scene and surface characteristics.
• Lighting conditions include the intensity and positions of light sources and the background
illumination.
• Surface characteristics include degree of transparency and how rough or smooth the surfaces are to be.
Exploded and Cutaway Views: Exploded and cutaway views of objects can be used to show the internal structure and relationships of the object's parts.
• An alternative to exploding an object into its component parts is the cutaway view, which removes part of the visible surfaces to show the internal structure.
Three dimensional and stereoscopic views:
• Three dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror.
• The vibrations of the mirror are synchronized with the display of the scene on the CRT.
• Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye.
Differences between Parallel and Perspective Projection:
Parallel projection:
• Coordinate positions are projected to the view plane along parallel lines.
• Preserves the relative proportions of objects, giving accurate views of their various sides.
• Used in engineering and architectural drawings, but does not give a realistic representation of a 3D object.
Perspective projection:
• Object positions are projected along lines that converge to the projection reference point.
• Does not preserve relative proportions; objects farther from the viewing position appear smaller.
• Produces more realistic views of a 3D scene.
Three Dimensional Rotation: Positive rotation angles produce counterclockwise rotations about a coordinate axis if we are looking along the positive half of the axis towards the coordinate origin.
Coordinate axis Rotation:
➢ Z – axis rotation
➢ X – axis rotation
➢ Y – axis rotation
1. Z – axis rotation: The two – dimensional Z – axis rotation equations are easily extended to three dimensions. The transformation equations for rotation about the Z – axis are:
x′ = x cosθ − y sinθ
y′ = x sinθ + y cosθ ------- Equation 1
z′ = z
Parameter θ specifies the rotation angle.
In matrix form, using homogeneous coordinates:

x′     cosθ   −sinθ   0   0     x
y′  =  sinθ    cosθ   0   0  ·  y
z′      0       0     1   0     z
1       0       0     0   1     1

P′ = Rz (θ) . P
Rotation about Z - axis
Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y, and z; in Equation 1 we use the replacements x → y → z → x.
2. X – axis rotation: Cyclically permuting the coordinates in the above equations gives the transformation equations for an X – axis rotation.
y′ = y cosθ - z sinθ
z′ = y sinθ + z cosθ -------- Equation 2
x′ = x
The matrix form is:
x′     1    0      0     0     x
y′  =  0   cosθ  −sinθ   0  ·  y
z′     0   sinθ   cosθ   0     z
1      0    0      0     1     1
P′ = Rx (θ) . P
Rotation about X - axis
3. Y – axis Rotation: Cyclically permuting the coordinates in Equation 2 gives the transformation equations for a Y – axis rotation.
z′ = z cosθ - x sinθ
x′ = z sinθ + x cosθ -------- Equation 3
y′ = y
The matrix form is:
x′      cosθ   0   sinθ   0     x
y′  =    0     1    0     0  ·  y
z′     −sinθ   0   cosθ   0     z
1        0     0    0     1     1
P′ = Ry (θ) . P
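The three rotation matrices can be exercised with a short sketch (plain Python lists in homogeneous coordinates; the helper names are illustrative):

```python
import math

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    # Rz(theta) in homogeneous coordinates (Equation 1).
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def rot_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    # Rx(theta): cyclic permutation x -> y -> z -> x of Rz (Equation 2).
    return [[1, 0,  0, 0],
            [0, c, -s, 0],
            [0, s,  c, 0],
            [0, 0,  0, 1]]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    # Ry(theta): one more cyclic permutation (Equation 3).
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

def apply(m, p):
    # P' = R . P for a homogeneous point p = [x, y, z, 1].
    return [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]

# A 90-degree z-axis rotation carries (1, 0, 0) to (0, 1, 0).
print([round(v, 6) for v in apply(rot_z(math.pi / 2), [1, 0, 0, 1])])
```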
Polygon Surfaces: A wireframe outline can be displayed quickly to give a general indication of the surface structure. The approximation of a surface by a polygon mesh can be improved by dividing the surface into smaller polygons.
Polygon Tables: The graphics package organizes the polygon surface data into tables. The table may contain
geometric, topological and attribute properties.
As information for each polygon is input, the data are placed into tables that are to be used in the subsequent
processing, display, and manipulation of the object in a scene.
➢ Geometric tables
➢ Attribute tables
Geometric data tables: These contain vertex coordinates and parameters to identify the orientation of the polygon surfaces. To store the geometric data, three lists are created:
1) A vertex table: Stores the coordinate values of each vertex in the object.
2) An edge table: Contains pointers back into the vertex table to identify the vertices of each polygon edge.
3) A polygon table: Defines a polygon by providing pointers to the edges that make up the polygon.
Geometric data table representing two adjacent polygon surfaces.
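As a sketch of such tables, the fragment below builds the three lists for two adjacent triangles S1 and S2 sharing edge E3 (the coordinates are invented for illustration):

```python
# Vertex table: coordinate values of each vertex.
vertices = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
    "V4": (0.0, 1.0, 0.0),
}

# Edge table: pointers back into the vertex table.
edges = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),  # shared edge of S1 and S2
    "E4": ("V3", "V4"),
    "E5": ("V4", "V1"),
}

# Polygon table: pointers into the edge table.
polygons = {
    "S1": ("E1", "E2", "E3"),
    "S2": ("E3", "E4", "E5"),
}

# The shared edge E3 is stored once, so moving V1 or V3
# automatically keeps both adjacent surfaces consistent.
```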
Attribute data table: Contains information for an object and parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics.
OCTREES: Octrees are hierarchical tree structures used to represent solid objects in graphics systems. An octree is a tree data structure in which each internal node has exactly eight children. Octrees are most often used to partition a three dimensional space by recursively subdividing it into eight octants. Octree representations are used in medical imaging and other applications that require cross sectional views of an object.
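A minimal octree node sketch, assuming that a homogeneous octant stores a single value and a heterogeneous octant stores eight children (the class and attribute names are illustrative):

```python
class OctreeNode:
    """One node of an octree; each internal node has eight children.

    A homogeneous octant stores a single value (e.g. a color or a
    solid/empty flag); a heterogeneous octant is subdivided further.
    """
    def __init__(self, value=None):
        self.value = value    # set when the octant is homogeneous
        self.children = None  # list of 8 OctreeNode when subdivided

    def subdivide(self):
        # Recursively partition this octant into eight sub-octants.
        self.children = [OctreeNode() for _ in range(8)]
        self.value = None
```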
Quadtree Encoding: The octree encoding procedure for a 3D space is an extension of an encoding
scheme for 2D space, called quadtree encoding.
➢ A quadtree divides a two – dimensional region (usually a square) into quadrants.
➢ Each node in the quadtree has four data elements, one for each of the quadrants in the region.
➢ If all pixels within a quadrant have the same color (a homogeneous quadrant), the corresponding data element in the node stores that color.
➢ Suppose all pixels in quadrant 2 of the figure shown below are found to be red. The color code for red is then placed in data element 2 of the node.
[Figure: a square region divided into quadrants 0, 1, 2, and 3, and the four data elements in the representative quadtree node.]
➢ Otherwise, the quadrant is said to be heterogeneous, and that quadrant is itself divided into quadrants
as shown below:
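The subdivision rule can be sketched as follows, assuming the region is a square pixel grid whose side is a power of two (the row-major quadrant numbering in the code is an illustrative choice):

```python
def build_quadtree(pixels, x, y, size):
    """Encode a size x size square of `pixels` (a 2D list of colors).

    A homogeneous quadrant is stored as its color; a heterogeneous
    quadrant becomes a node with four data elements, one per quadrant.
    """
    colors = {pixels[y + j][x + i]
              for j in range(size) for i in range(size)}
    if len(colors) == 1:
        return colors.pop()  # homogeneous: store the color itself
    half = size // 2
    return [                 # heterogeneous: subdivide into quadrants
        build_quadtree(pixels, x, y, half),
        build_quadtree(pixels, x + half, y, half),
        build_quadtree(pixels, x, y + half, half),
        build_quadtree(pixels, x + half, y + half, half),
    ]

image = [["red", "red"],
         ["red", "blue"]]
print(build_quadtree(image, 0, 0, 2))  # ['red', 'red', 'red', 'blue']
```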
Curve Design Criteria: The following are among the criteria a good curve representation should satisfy.
3. Axis independence: The shape of an object must not change when the control points are measured in a different coordinate system. For example, if the control points are rotated 90 degrees, the curve should rotate 90 degrees but not change shape.
4. Global or Local Control: As a designer manipulates a control point, a curve may change shape only in the region near the control point (local control), or it may change shape throughout (global control).
5. Variation – diminishing property: A curve that oscillates about its control points is usually
undesirable. Variation diminishing curves tend to smooth out a sequence of control points.
6. Versatility: A curve representation that allows only a limited variety of shapes may frustrate a designer. A flexible technique lets the designer control the versatility of a curve representation by adding or removing control points.
7. Order of Continuity: A complex shape is created by joining several curves together end – to – end.
The order of the curve determines the minimum number of control points necessary to define the curve.
The order of the curve also affects how the curve behaves when a control point is moved.
No continuity: The curves do not meet at all.
Zero order continuity: The two curves meet at a common point.
First order continuity: The curves are tangent at the point of intersection.
Second order continuity: The curvatures of the two curves are the same at the point of intersection.
The above requirements are the basis for the Bezier and B-spline formulations of curves and surfaces.
BEZIER CURVES: Bezier curves were widely publicized in 1962 by the French engineer Pierre Bezier, who used them in the body design of Renault automobiles.
A Bezier curve is defined in terms of n + 1 control points by
P(u) = Σ (k = 0 to n) Pk BEZ k,n(u),   0 ≤ u ≤ 1
where the three dimensional location of control point Pk is (xk, yk, zk). The figure below shows a Bezier curve in the plane; the z coordinate of each control point is zero, and the curve has six control points.
A Bezier curve is a polynomial of degree one less than the number of control points used. Three points generate a parabola, four points a cubic curve, and so on.
Properties of Bezier curves:
A Bezier curve always passes through the first and last control points.
It lies within the convex polygon boundary of the control points. This follows from the properties of the Bezier blending functions: they are all positive and their sum is always 1, i.e.
Σ (k = 0 to n) BEZ k,n(u) = 1
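A sketch that evaluates a Bezier curve directly from the blending (Bernstein) polynomial form above; in practice de Casteljau's algorithm is the numerically preferred evaluation method:

```python
from math import comb

def bezier_point(control_points, u):
    """Evaluate a 2D Bezier curve at parameter u in [0, 1].

    n + 1 control points give a polynomial of degree n; the blending
    functions BEZ(k, n, u) are all positive and sum to 1, so the
    curve lies inside the convex hull of its control points.
    """
    n = len(control_points) - 1
    x = y = 0.0
    for k, (px, py) in enumerate(control_points):
        blend = comb(n, k) * (u ** k) * ((1 - u) ** (n - k))
        x += px * blend
        y += py * blend
    return (x, y)

pts = [(0, 0), (1, 2), (3, 2), (4, 0)]   # 4 points -> cubic curve
print(bezier_point(pts, 0.0))  # (0.0, 0.0): passes through first point
print(bezier_point(pts, 1.0))  # (4.0, 0.0): passes through last point
```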
Back-Face Detection: Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates specified in a clockwise direction. Back faces have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive Zv axis. By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
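A sketch of this test, assuming the plane parameters (A, B, C) are obtained as the cross product of two polygon edge vectors (a standard construction, though not spelled out in the text), with vertices listed clockwise in the left-handed convention above:

```python
def plane_normal(v1, v2, v3):
    """Plane parameters (A, B, C) from three polygon vertices.

    (A, B, C) is the cross product (v2 - v1) x (v3 - v1); the
    vertices are assumed to be specified in a clockwise direction
    when the face is viewed from outside the object.
    """
    ax, ay, az = (v2[i] - v1[i] for i in range(3))
    bx, by, bz = (v3[i] - v1[i] for i in range(3))
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

def is_back_face(v1, v2, v3):
    # Viewing direction along the positive Zv axis: the face points
    # away from the viewer (is a back face) when C >= 0.
    _, _, c = plane_normal(v1, v2, v3)
    return c >= 0
```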
Depth Buffer Method: This is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest visible surface.
In this method each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared, and the closest surface (the one with the largest normalized depth value, in the convention used below) determines the color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.
The depth buffer stores a depth value for each (x, y) position as surfaces are processed, with 0 ≤ depth ≤ 1. The frame buffer stores the intensity (color) value at each (x, y) position.
The z-coordinates are usually normalized to the range [0, 1]. A z value of 0 indicates the back clipping plane and a z value of 1 indicates the front clipping plane.
Algorithm:
Step 1: Set the buffer values:
depthbuffer(x, y) = 0
framebuffer(x, y) = background color
Step 2: Process each polygon, one at a time.
For each projected (x, y) pixel position of a polygon, calculate the depth z.
If z > depthbuffer(x, y), this polygon is closer to the observer than the one already recorded for this pixel, so:
compute the surface color,
set depthbuffer(x, y) = z,
framebuffer(x, y) = surfacecolor(x, y).
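A minimal sketch of the algorithm for a small frame, assuming each polygon has already been rasterized into (x, y, z, color) fragments (rasterization itself is omitted):

```python
WIDTH, HEIGHT = 4, 4
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]           # step 1
frame_buffer = [["background"] * WIDTH for _ in range(HEIGHT)]

def process_polygon(fragments):
    """Step 2: fragments are (x, y, z, color) with z in [0, 1],
    where 0 is the back clipping plane and 1 the front."""
    for x, y, z, color in fragments:
        if z > depth_buffer[y][x]:   # closer than what is stored
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color

# Polygons can be processed in any order; the nearer (larger z) wins.
process_polygon([(1, 1, 0.30, "red")])
process_polygon([(1, 1, 0.70, "blue")])
process_polygon([(1, 1, 0.50, "green")])  # hidden behind blue
print(frame_buffer[1][1])                 # blue
```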
Advantages:
➢ It is easy to implement.
➢ It reduces the speed problem if implemented in hardware.
➢ It processes one object at a time.
Disadvantages:
➢ It requires large memory.
➢ It is a time consuming process.
Scan-Line Method: An image-space method used for removing hidden surfaces.
➢ It is an extension of scan – line polygon filling (with multiple surfaces).
➢ The idea is to intersect each polygon with a particular scan line and solve the hidden surface problem for just that scan line.
➢ Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane.
➢ When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
➢ The cost of filling the scene is roughly proportional to its depth complexity.
➢ The method is efficient for shallowly occluded scenes.
➢ Surfaces may need to be split.
Two important tables, edge table and polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each
line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed. The
active list stores only those edges that cross the scan-line in order of increasing x. Also a flag is set for
each surface to indicate whether a position along a scan-line is either inside or outside the surface.
Scan-lines are processed from left to right. At the leftmost boundary of a surface, the surface flag is
turned on, and at the rightmost boundary, it is turned off.
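A simplified sketch of shading a single scan line with surface flags and depth comparisons, assuming the edge crossings and a per-pixel depth function for each surface are already known (here a smaller depth value means nearer the view plane, as in the example below):

```python
def render_scan_line(y, surfaces, width, background=0):
    """Shade scan line y.

    `surfaces` maps a surface id to (x_left, x_right, depth_fn, color):
    the left and right edge crossings on this scan line, a function
    giving the surface depth at pixel x, and the surface intensity.
    """
    line = [background] * width
    for x in range(width):
        # A surface's flag is on between its left and right boundaries.
        active = [s for s, (xl, xr, _, _) in surfaces.items()
                  if xl <= x <= xr]
        if not active:
            continue                   # no flag on: background stays
        # Depth calculations are needed only where surfaces overlap.
        nearest = min(active, key=lambda s: surfaces[s][2](x))
        line[x] = surfaces[nearest][3]
    return line

# Two overlapping surfaces: S1 is nearer wherever both flags are on.
surfaces = {"S1": (2, 6, lambda x: 0.3, "I1"),
            "S2": (4, 9, lambda x: 0.6, "I2")}
print(render_scan_line(2, surfaces, 12))
```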
The figure below illustrates the scan – line method for locating visible portions of surfaces for pixel positions along the line.
Dashed lines indicate the boundaries of hidden surfaces.
➢ The list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For
positions along scan line 1 between edges AB and BC, only the flag for surface S1 is on.
➢ Between edges EH and FG, only the flag for surface S2 is on. The intensity values in the other areas are set to the background intensity, since there are no other positions of intersection along scan line 1.
➢ For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. Between edges EH and BC, the flags for both surfaces S1 and S2 are on.
➢ In that interval, depth calculations are made using the plane coefficients for the two surfaces.
Example: the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered.
Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
➢ Scan line 3 has the same list of active edges as scan line 2. Since there are no changes in line intersections, depth calculations between edges EH and BC are not necessary.