Computer Graphics
3D Display Methods
Polygon Surface
Polygon Table
Plane Equation
Polygon Meshes
3D Display Methods
WHAT ARE 3D DISPLAY METHODS IN COMPUTER GRAPHICS?
PARALLEL PROJECTION
• Project points on the object surface along parallel lines onto
the display plane.
• Parallel lines are still parallel after projection.
• Used in engineering and architectural drawings.
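These properties can be illustrated with a minimal orthographic (parallel) projection onto the z = 0 plane, which simply drops the z coordinate; the function name is an assumption for illustration:

```python
# Orthographic (parallel) projection onto the z = 0 view plane:
# every point moves along a projector parallel to the z axis.
def parallel_project(point):
    x, y, z = point
    return (x, y)

# Two parallel 3D edges remain parallel after projection.
edge_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0)]
edge_b = [(0.0, 1.0, 5.0), (1.0, 1.0, 7.0)]
print([parallel_project(p) for p in edge_a])  # [(0.0, 0.0), (1.0, 0.0)]
print([parallel_project(p) for p in edge_b])  # [(0.0, 1.0), (1.0, 1.0)]
```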
PERSPECTIVE PROJECTION
The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines are not parallel. Instead, they all converge at a single point called the ‘center of projection’ or ‘projection reference point’.
This is the way that our eyes and a camera lens form images and so the
displays are more realistic.
The disadvantage is that if objects have only limited depth variation, the image may not provide adequate depth information and ambiguity appears.
PROJECTION REFERENCE POINT
Distances and angles are not preserved, and parallel lines do not remain parallel. Instead, they all converge at a single point called the center of projection or projection reference point. There are three types of perspective projections: one-point, two-point, and three-point.
• To create a realistic image, depth information is important so that we can easily identify, for a particular viewing direction, which are the front and which are the back surfaces of displayed objects. The depth of an object can be represented by the intensity of the image: the parts of the objects closest to the viewing position are displayed with the highest intensities, and objects farther away are displayed with decreasing intensities. This effect is known as ‘depth cueing’.
CONTINUED…
• Depth cueing is a simple method of varying the intensity of objects according to their distance from the viewing position.
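A minimal sketch of such intensity variation, assuming depth is normalized to [0, 1] and using an illustrative floor value for the farthest objects:

```python
# Depth cueing: scale a surface intensity linearly so that nearer
# objects (smaller depth) appear brighter. The floor value keeps the
# farthest objects from vanishing entirely (an illustrative choice).
def depth_cue(intensity, depth, min_factor=0.5):
    factor = 1.0 - (1.0 - min_factor) * depth
    return intensity * factor

print(depth_cue(1.0, 0.0))  # 1.0 (closest: full intensity)
print(depth_cue(1.0, 1.0))  # 0.5 (farthest: dimmed to the floor value)
```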
Visible-surface detection methods fall into two categories: object-space methods and image-space methods.
DEPTH BUFFER (Z-BUFFER) METHOD
It is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest visible surface.
So that closer polygons override farther ones, two buffers, named the frame buffer and the depth buffer, are used.
• The depth buffer is used to store the depth value for each (x, y) position as surfaces are processed.
• 0 ≤ depth ≤ 1
• The frame buffer is used to store the intensity or color value at each (x, y) position.
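A minimal sketch of the depth-buffer test described above, assuming depths are already normalized to [0, 1]; the buffer sizes and function name are illustrative:

```python
WIDTH, HEIGHT = 4, 3
# Initialize the depth buffer to the maximum depth (1.0) and the
# frame buffer to a background color.
depth_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [["background"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, depth, color):
    """Store the color only if this surface is closer than what is there."""
    if 0.0 <= depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color

plot(1, 1, 0.7, "blue")   # far surface drawn first
plot(1, 1, 0.3, "red")    # closer surface overrides it
plot(1, 1, 0.9, "green")  # farther surface is rejected
print(frame_buffer[1][1])  # red
```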
SCAN-LINE METHOD
This image-space method processes the scene one scan line at a time, maintaining two tables:
• The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
• The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers back to the edge table.
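The shape of these two tables can be sketched as plain records; all field names here are illustrative assumptions, not a fixed layout:

```python
# One edge-table entry and one polygon-table entry, cross-linked
# by list indices standing in for pointers.
edge_table = [
    {"lower_end": (2, 1), "upper_end": (5, 4),  # coordinate endpoints
     "inv_slope": 1.0,                          # 1/m, to step x per scan line
     "polygon": 0},                             # index into polygon_table
]
polygon_table = [
    {"plane": (0, 0, 1, -3),                    # coefficients A, B, C, D
     "material": "matte red",                   # surface material properties
     "edges": [0]},                             # back-pointers into edge_table
]
print(polygon_table[edge_table[0]["polygon"]]["material"])  # matte red
```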
AREA-SUBDIVISION METHOD
Each surface is classified relative to a given area of the screen as one of four types:
A. Surrounding surface − One that completely encloses the area.
B. Overlapping surface − One that is partly inside and partly outside
the area.
C. Inside surface − One that is completely inside the area.
D. Outside surface − One that is completely outside the area.
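Assuming surfaces and areas are axis-aligned rectangles given as (x0, y0, x1, y1), the four-way classification above can be sketched as:

```python
def classify(surface, area):
    """Classify a surface rectangle relative to a screen area."""
    sx0, sy0, sx1, sy1 = surface
    ax0, ay0, ax1, ay1 = area
    # A. Surrounding: the surface completely encloses the area.
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"
    # C. Inside: the surface is completely inside the area.
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"
    # D. Outside: the surface is completely outside the area.
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return "outside"
    # B. Overlapping: partly inside and partly outside.
    return "overlapping"

print(classify((0, 0, 10, 10), (2, 2, 4, 4)))  # surrounding
```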
A-BUFFER METHOD
The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer. Each position in the A-buffer has two fields:
• Depth field − It stores a positive or negative real number.
• Intensity field − It stores surface-intensity information or a pointer value.
CONTINUED…
If depth >= 0, the number stored at that position is the depth of a single
surface overlapping the corresponding pixel area. The intensity field then
stores the RGB components of the surface color at that point and the
percent of pixel coverage.
CONTINUED…
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity
field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer
includes :-
• RGB intensity components
• Opacity Parameter
• Depth
• Percent of area coverage
• Surface Identifier
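The single-surface versus multi-surface convention above can be sketched as follows; the cell layout and function names are illustrative assumptions:

```python
def make_cell():
    # Empty cell: single-surface mode with background depth.
    return {"depth": 1.0, "intensity": None}

def add_surface(cell, depth, surface):
    """surface: dict with rgb, opacity, coverage, surface_id fields."""
    if cell["intensity"] is None:
        # First surface: depth >= 0, intensity holds its data directly.
        cell["depth"] = depth
        cell["intensity"] = surface
    elif cell["depth"] >= 0:
        # Second surface: switch to multi-surface mode. A negative depth
        # flags that intensity now points to a list of surface data.
        first = {"depth": cell["depth"], **cell["intensity"]}
        cell["depth"] = -1.0
        cell["intensity"] = [first, {"depth": depth, **surface}]
    else:
        # Already in multi-surface mode: append to the list.
        cell["intensity"].append({"depth": depth, **surface})
```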
SURFACE RENDERING
Surface rendering involves setting the surface intensity of objects according to the lighting
conditions in the scene and according to assigned surface characteristics. The lighting conditions specify
the intensity and positions of light sources and the general background illumination required for a
scene.
On the other hand, the surface characteristics of objects specify the degree of transparency and
smoothness or roughness of the surface; usually the surface rendering methods are combined with
perspective and visible surface identification to generate a high degree of realism in a displayed scene.
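As one illustration of setting surface intensity from lighting conditions, a minimal diffuse (Lambertian) term combines background (ambient) illumination with a contribution depending on the angle between the surface normal and the light direction; the model and names here are simplifying assumptions, not a full rendering method:

```python
import math

def diffuse_intensity(normal, light_dir, ambient=0.1, diffuse=0.9):
    # Normalize both vectors so the dot product gives cos(theta).
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = norm(normal), norm(light_dir)
    # Lambert's cosine term, clamped to zero for light behind the surface.
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return ambient + diffuse * cos_theta

print(diffuse_intensity((0, 0, 1), (0, 0, 1)))  # 1.0 (light head-on)
```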
POLYGON SURFACE
• In this method, the surface is specified by the set of vertex coordinates and
associated attributes.
PLANE EQUATIONS
• The plane of a polygon surface can be expressed as Ax + By + Cz + D = 0, where (x, y, z) is any point on the plane, and the coefficients A, B, C, and D are constants describing the spatial properties of the plane. We can obtain the values of A, B, C, and D by solving a set of three plane equations using the coordinate values for three non-collinear points in the plane. Let us assume that three vertices of the plane are (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3).
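One common way to solve for the coefficients is to take the cross product of two edge vectors formed from the three vertices; a sketch, assuming the vertices are given as coordinate triples:

```python
def plane_coefficients(p1, p2, p3):
    """Plane coefficients A, B, C, D from three non-collinear points."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # Edge vectors from p1.
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    # Normal vector (A, B, C) = u x v.
    A = uy * vz - uz * vy
    B = uz * vx - ux * vz
    C = ux * vy - uy * vx
    # D from substituting p1 into A*x + B*y + C*z + D = 0.
    D = -(A * x1 + B * y1 + C * z1)
    return A, B, C, D

print(plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1, 0)
```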
POLYGON MESHES
• 3D surfaces and solids can be approximated by a set of polygonal and line elements. Such surfaces are called polygon meshes. In a polygon mesh, each edge is shared by at most two polygons. The set of polygons, or faces, together forms the “skin” of the object. This method can be used to represent a broad class of solids and surfaces in graphics. A polygon mesh can be rendered using hidden-surface removal algorithms. A polygon mesh can be represented in three ways −
Explicit representation