Unit III ME16501
✓ Remove the hidden lines & hidden surfaces and then shade only the visible portions.
✓ Lighting gives a clearer and better shading representation of the object.
✓ Various colouring schemes can be modeled for different components of the assembly.
[Figure: axonometric axes a, b, c with angles A, B, C, D]
2. Dimetric: two of the axis angles are equal.
3. Trimetric: none of the axis angles are equal.
❖ Many techniques have been proposed in the last ten years for managing
different data (surfaces, volumes, scattered points, vector fields, etc.).
❖ Visualization modalities allow sophisticated and differentiated insights
into the data to be obtained.
❖ Visualization is a crucial communication and analysis tool in the design and
simulation of many new products or systems.
MAJOR PROBLEM IN VISUALIZATION
➢ An object consists of a number of vertices, edges and surfaces which are
represented realistically in 3D modeling.
CAD systems have been celebrated in recent years for their ability to simulate
such "realistic" viewing conditions.
The generation of realistic images involves the application of
techniques in two distinct areas: the removal of hidden surfaces from the
image, and the shading or colouring of the visible surfaces in a manner
appropriate to the modelled lighting conditions.
Hidden-feature removal is classified into:
1. Hidden line removal,
2. Hidden surface removal,
3. Hidden solid removal.
FURTHER APPROACHES TO
ACHIEVE VISUAL REALISM
1. SHADING,
2. LIGHTING,
3. TRANSPARENCY &
4. COLOURING.
HIDDEN LINE REMOVAL
ALGORITHM
HIDDEN LINE ELIMINATION
HIDDEN LINE REMOVAL:
Removing hidden lines and surfaces greatly improves the visualization of objects by
displaying clearer and more realistic images.
➢ H.L.E. is stated as: "For a given three-dimensional scene, a given viewing point and a given
viewing direction, eliminate from an appropriate two-dimensional projection the edges and
faces which the observer cannot see."
Various hidden line and hidden surface removal algorithms may be classified into:
➢ Image-space methods (the visibility is decided point by point at each pixel position on
the view plane) - Raster algorithms & Vector algorithms.
1. Minimax test,
2. Containment test,
3. Surface test,
4. Computing Silhouettes,
5. Edge intersection,
6. Segment comparisons.
MINIMAX (Bounding Box) TEST
The minimax test checks whether two polygons overlap or not.
Here, each polygon is enclosed in a box by finding its maximum and
minimum x and y coordinates. Therefore, it is termed as minimax test.
Then these boxes are compared with each other to identify the
intersection for any two boxes.
If there is no intersection of two boxes as shown in Figure, their
surrounding polygons do not overlap and hence, no elements are
removed.
If two boxes intersect, the polygons may or may not overlap as
shown in Figure.
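The test described above can be sketched in a few lines; the polygon representation (a list of (x, y) vertices) and the function names are illustrative assumptions, not from the notes.

```python
# Minimax (bounding box) test: a quick rejection check for polygon overlap.

def bounding_box(polygon):
    """Return (xmin, ymin, xmax, ymax) of a polygon given as (x, y) vertices."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_intersect(p, q):
    """True if the bounding boxes of polygons p and q overlap.
    If False, the polygons themselves cannot overlap; if True, they
    may or may not overlap, and a finer test is needed."""
    px1, py1, px2, py2 = bounding_box(p)
    qx1, qy1, qx2, qy2 = bounding_box(q)
    return px1 <= qx2 and qx1 <= px2 and py1 <= qy2 and qy1 <= py2

# Two well-separated triangles: the minimax test rejects them immediately.
a = [(0, 0), (1, 0), (0, 1)]
b = [(5, 5), (6, 5), (5, 6)]
print(boxes_intersect(a, b))  # False
```

Note that a True result is inconclusive, which is why this test is used only as a cheap first filter before the more expensive tests below.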
CONTAINMENT TEST
❑ Odd intersection count - visible (point inside).
❑ Even intersection count - invisible or partially visible (point outside).
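A minimal sketch of the containment test using the even-odd (ray casting) rule; the point and polygon representations are illustrative assumptions.

```python
# Containment test sketch: cast a horizontal ray from the test point and count
# edge crossings. An odd count means the point lies inside the polygon; an
# even count means it lies outside.

def point_in_polygon(pt, polygon):
    """Even-odd (ray casting) containment test for a point in a 2D polygon."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the rightward horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing flips the parity
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```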
BACK FACE / SURFACE TEST:
In a solid object, there are surfaces which
are facing the viewer (front faces) and there are
surfaces which are opposite to the viewer (back
faces).
These back faces contribute to
approximately half of the total number of surfaces.
A back-face test is used to determine the
location of a surface with respect to other surfaces.
This test can provide an efficient way of
implementing the depth comparison to remove the
faces which are not visible in a specific view port.
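The back-face test reduces to a sign check on the dot product of the face normal and the viewing direction. A minimal sketch, assuming outward face normals and a view vector pointing from the eye into the scene (the sign convention is an assumption; it flips if the view vector points toward the eye):

```python
# Back-face test sketch: a face is a back face when its outward normal points
# away from the viewer, i.e. N . V > 0 for a view vector V directed from the
# eye into the scene.

def is_back_face(normal, view_dir):
    """True if the face with the given outward normal faces away from the viewer."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return nx * vx + ny * vy + nz * vz > 0

# Viewer looking down the -z axis: the view direction into the scene is (0, 0, -1).
view = (0.0, 0.0, -1.0)
print(is_back_face((0.0, 0.0, -1.0), view))  # True: the face points away from the eye
print(is_back_face((0.0, 0.0, 1.0), view))   # False: a front face
```

Because roughly half of a closed solid's faces fail this test, it cheaply discards about half the surfaces before any depth comparison.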
SILHOUETTES
❖ The hidden line algorithms described earlier are only suitable for
polyhedral objects which contain flat faces.
❖ The earliest visible-line algorithm was developed by Roberts. The
primary requirement of this algorithm is that each edge is a part of
the face of a convex polyhedron.
❖ In the first phase of this algorithm, all edges shared by a pair of a
polyhedron's back facing polygons are removed using a back-face
culling technique.
STEPS FOR THE ALGORITHM:
1. Treat each volume separately and eliminate self-hidden planes (back faces) and
self-hidden lines.
2. Treat each edge (or line segment) separately and eliminate those which are entirely
hidden by one or more other volumes.
DEPTH-BUFFER ALGORITHM OR Z-BUFFER
ALGORITHM
* Z values range from 0 (farthest from the view) to 1 (nearest to the view).
The z-buffer algorithm requires a z-buffer in which a z value can be stored for each
pixel.
The z-buffer is initialized to the smallest z-value, while the frame buffer is
initialized to the background pixel value.
Both the frame and z-buffers are indexed by pixel coordinates(x,y). These
coordinates are actually screen coordinates.
HOW IT WORKS?
* For each polygon in the scene, find all the pixels (x,y) that lie inside or on
the boundaries of the polygon when projected onto the screen.
* For each of these pixels, calculate the depth z of the polygon at (x,y).
* If z > depth(x,y), the polygon is closer to the viewing eye than the one
already stored for the pixel.
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the
refresh buffer is initialized to the background intensity (Zmin = 0, Zmax = 1).
In this case, the z-buffer is updated by setting the depth at (x,y) to
z. Similarly, the intensity of the frame buffer location corresponding to
the pixel is updated to the intensity of the polygon at (x,y).
After all the polygons have been processed, the frame buffer
contains the solution.
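The loop described above can be sketched as follows. Representing each polygon directly as pre-rasterized (x, y, z, intensity) samples is an illustrative simplification of real scan conversion, and the sketch follows the notes' convention that a larger z value is nearer to the viewer.

```python
# Depth-buffer (z-buffer) sketch: depth buffer initialized to 0 (minimum
# depth), frame buffer initialized to the background intensity, and a pixel
# is overwritten only when the new sample is nearer (larger z).

WIDTH, HEIGHT = 4, 4
BACKGROUND = 0

depth = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def rasterize(pixels):
    """Update the depth and frame buffers with one polygon's pixel samples."""
    for x, y, z, intensity in pixels:
        if z > depth[y][x]:          # nearer than what is stored so far
            depth[y][x] = z
            frame[y][x] = intensity

far_polygon = [(1, 1, 0.3, 5), (2, 1, 0.3, 5)]   # intensity 5, farther away
near_polygon = [(1, 1, 0.8, 9)]                   # intensity 9, nearer

rasterize(far_polygon)
rasterize(near_polygon)
print(frame[1][1], frame[1][2])  # 9 5: the nearer polygon wins at (1, 1)
```

After all polygons are processed, the frame buffer holds the visible image regardless of the order in which the polygons were submitted.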
Z-BUFFERING
IMAGE PRECISION ALGORITHM:
• Determine which object is visible at each pixel
• Order of polygons not critical
• Works for dynamic scenes
• Takes more memory
BASIC IDEA:
• Rasterize (scan-convert) each polygon
• Keep track of a z value at each pixel
• Interpolate z value of polygon vertices during rasterization
• Replace the pixel with the new colour if the new z value indicates the
object is closer to the eye
Z-Buffer:
➢ Simple and easy to implement.
❑ Aliasing occurs, since not all depth questions can be resolved at the pixel level.
Warnock’s Algorithm
[Figure sequence: initial scene, followed by the first, second, third and fourth subdivisions]
Surrounding surface: a surface that completely encloses the area.
Intersecting or overlapping surface: a surface that is partly inside and
partly outside the area.
Inside surface: a surface that is completely inside the area.
Outside surface: a surface that is completely outside the area.
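Using these four classifications, Warnock's recursive area subdivision can be sketched as follows. Restricting surfaces to axis-aligned rectangles and reducing the "simple enough" rule to its easiest cases are illustrative assumptions, not the full algorithm.

```python
# Warnock-style area subdivision sketch. A surface and an area are both
# axis-aligned rectangles (x1, y1, x2, y2).

def classify(surf, area):
    """One of the four relationships between a surface and a screen area."""
    sx1, sy1, sx2, sy2 = surf
    ax1, ay1, ax2, ay2 = area
    if sx2 <= ax1 or sx1 >= ax2 or sy2 <= ay1 or sy1 >= ay2:
        return "outside"
    if sx1 <= ax1 and sy1 <= ay1 and sx2 >= ax2 and sy2 >= ay2:
        return "surrounding"
    if sx1 >= ax1 and sy1 >= ay1 and sx2 <= ax2 and sy2 <= ay2:
        return "inside"
    return "intersecting"

def warnock(surfaces, area, min_size=1.0):
    """Recursively subdivide the area until the visibility decision is simple."""
    ax1, ay1, ax2, ay2 = area
    relevant = [s for s in surfaces if classify(s, area) != "outside"]
    # "Simple enough": at most one relevant surface, or a pixel-sized area.
    if len(relevant) <= 1 or (ax2 - ax1) <= min_size:
        return [(area, relevant)]            # this area can be displayed directly
    mx, my = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    quads = [(ax1, ay1, mx, my), (mx, ay1, ax2, my),
             (ax1, my, mx, ay2), (mx, my, ax2, ay2)]
    out = []
    for q in quads:
        out.extend(warnock(relevant, q, min_size))
    return out

result = warnock([(0, 0, 3, 3), (2, 2, 6, 6)], (0, 0, 8, 8))
print(len(result))  # number of displayable sub-areas produced
```

A full implementation would also resolve the case of one surrounding surface hiding everything behind it; here every undecided area is simply split into four quadrants, as in the figure sequence above.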
SCAN-LINE ALGORITHM OR WATKIN'S ALGORITHM
AMBIENT LIGHT:
It is a light of uniform brightness
and it is caused by the multiple
reflections of light from many sources
present in the environment.
The amount of ambient light
incident on each object is a constant for
all surfaces and over all directions.
POINT LIGHT SOURCE:
A light source is considered as a point source if it is specified with a
coordinate position and an intensity value. Object is illuminated in one
direction only. The light reflected from an object can be divided into
two components.
LIGHT-REFLECTING SOURCES
1. Specular reflection & 2. Diffuse reflection.
SHADING ALGORITHMS
Shading is expensive and requires a large number of
calculations. This section deals with more efficient shading methods for
surfaces defined by polygons. Each polygon can be drawn with a single
intensity, or with a different intensity obtained at each point on the surface.
A number of shading algorithms exist, as follows:
(i) Constant-intensity shading or Lambert shading
(ii) Gouraud or first-derivative shading
(iii) Phong or second-derivative shading
(iv) Half-tone shading
(i) Constant-intensity shading or Lambert shading:
The fastest and simplest method for shading a polygon is constant-intensity
shading, which is also known as Lambert shading, faceted
shading or flat shading.
EXISTING SHADING ALGORITHMS ARE
1. CONSTANT SHADING
2. GOURAUD SHADING OR FIRST-DERIVATIVE
3. PHONG SHADING OR SECOND-DERIVATIVE
Phong shading is far superior to flat and Gouraud shading, but it
requires a lot of processing time to produce its better output.
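The constant-intensity case can be sketched as a single intensity per polygon, combining an ambient term with Lambert's cosine law for the diffuse term; the coefficient values below are illustrative assumptions.

```python
# Flat (constant-intensity / Lambert) shading sketch:
#   I = ka * Ia + kd * Il * max(N . L, 0)
# computed once per polygon, not per pixel.

import math

def flat_shade(normal, light_dir, ka=0.2, ia=1.0, kd=0.7, il=1.0):
    """Single intensity for a whole polygon from ambient + diffuse terms."""
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light_dir))
    cos_theta = sum(n * l for n, l in zip(normal, light_dir)) / (n_len * l_len)
    diffuse = kd * il * max(cos_theta, 0.0)   # no diffuse light from behind
    return ka * ia + diffuse

# A face lit head-on versus a face at 60 degrees to the light direction.
print(round(flat_shade((0, 0, 1), (0, 0, 1)), 3))             # 0.9
print(round(flat_shade((0, 0, 1), (0, math.sqrt(3), 1)), 3))  # 0.55
```

Gouraud shading would evaluate this at each vertex and interpolate intensities across the polygon; Phong shading interpolates the normals instead and evaluates the lighting per pixel, which is why it costs more.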
This virtual plant illustrates the
effect of lighting conditions on
the shape and size of the
simulated models.
TEXTURING:
There are two types of colours: chromatic colour & achromatic colour.
Three characteristics of colour:
➢ hue: the dominant wavelength of the colour
➢ brightness: the luminance of the object
➢ saturation: the purity of the colour (e.g., the deep blue of the sky versus a pale blue)
COLOUR
❑ Chromatic colours provide a multi-colour image.
COLOR MODELS:
Colour model is an orderly system for creating a whole range of colours from a
small set of primary colours.
There are two types of colour models:
❑ Subtractive
❑ Additive.
Additive colour models use light to display colour, while subtractive models use printing
inks. Colours in additive models are the result of transmitted light, e.g. the
electro-luminance produced by CRT or TV monitors and LCD projectors.
Colours perceived in subtractive models are the result of reflected light.
There are number of colour
models available. Some of the important colour models are as follows.
1. RGB (Red, Green, Blue) color model
2. CMY (Cyan, Magenta, Yellow) color model
3. YIQ color model
4. HSV (hue, saturation, value) color model, also called HSB (brightness)
model.
Three hardware-oriented color models are RGB (used with color CRT monitors),
YIQ (the TV color system) and CMY (used with certain color-printing devices).
DISADVANTAGE:
They do not relate directly to intuitive color notions of hue,
saturation, and brightness.
PRIMARY AND SECONDARY COLORS
Due to the different absorption curves of the cones, colors are seen as variable
combinations of the so-called primary colors: red, green, and blue
Their wavelengths were standardized by the CIE in 1931:
red=700 nm,
green=546.1 nm, and
blue=435.8 nm
The primary colors can be added to produce the secondary colors of light:
magenta (R+B),
cyan (G+B), and
yellow (R+G).
[Figures: additive color model and subtractive color model]
RGB COLOR MODEL
R = G = B = 1: white colour.
R = G = B = 0: black colour.
If the values are all 0.5, the colour is still white but at half intensity, so it
appears gray.
If R = G = 1 and B = 0 (full red and green with no blue), the resulting colour is yellow.
The RGB model is more suitable for quantifying direct light, such as that generated
by a CRT monitor or a TV screen.
CMY COLOR MODEL
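The notes give no detail under this heading; a commonly used relation is that CMY is the subtractive complement of RGB, sketched below with all components normalized to [0, 1].

```python
# RGB <-> CMY: the subtractive complement of the additive RGB model.
# C = 1 - R, M = 1 - G, Y = 1 - B, and the inverse is the same relation.

def rgb_to_cmy(r, g, b):
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1, 0, 0))  # pure red -> (0, 1, 1): magenta + yellow inks
print(rgb_to_cmy(1, 1, 1))  # white -> (0, 0, 0): no ink at all
```

This matches the additive/subtractive distinction above: a printed page starts white, and each ink subtracts one of the RGB primaries from the reflected light.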
YIQ COLOR MODEL
The YIQ model takes advantage of the response characteristics of the human eye. The human
eye is more sensitive to luminance than to colour information.
In the NTSC video signal, a bandwidth of about 4 MHz is assigned to the Y parameter.
The eye is also more sensitive in the orange-blue range (I)
than in the green-magenta range (Q),
so a bandwidth of about 1.5 MHz is assigned to the I parameter and
0.6 MHz to the Q parameter.
The conversion from YIQ space to RGB space is achieved by a linear
transformation, the inverse of the RGB-to-YIQ matrix.
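The matrix itself is not reproduced in the notes; the commonly quoted NTSC coefficients are sketched below (rounded to three decimals, so round trips are only approximate).

```python
# RGB <-> YIQ using the commonly quoted NTSC conversion coefficients.
# All RGB components are normalized to [0, 1].

def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # green-magenta axis
    return y, i, q

def yiq_to_rgb(y, i, q):
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

# A grey value carries all of its information in Y: I and Q are (nearly) zero,
# which is exactly why Y deserves the widest bandwidth.
print(rgb_to_yiq(0.5, 0.5, 0.5))
```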
Level 1:
It is used only to interactively create, paint, store, retrieve and modify drawings.
These systems do not take much time. They are basically just graphics editors used only by designers.
Level 2:
It can compute "in-betweens" and move an object along a trajectory. These systems
generally take more time and they are mainly intended to be used by, or even to
replace, in-betweeners.
Level 3:
It provides the animator with operations which can be applied to objects for
example, translation or rotation. These systems may also include virtual camera operations
such as zoom, pan or tilt.
Level 4:
It provides a means of defining actors, i.e., objects which possess their own animation.
The motion of these objects may also be constrained.
Level 5:
They are extensible and they can learn as they work. With each use, such a system
becomes more powerful and "intelligent".
Computer animation can be further classified into two types based on its application in major
fields: (i) entertainment animation & (ii) engineering animation.
(I) ENTERTAINMENT ANIMATION:
Entertainment type of computer animation is mainly used to
make movies and advertisement for entertainment purposes.
The procedure is similar to the conventional animation procedure
described in Figure.
The drawings of “key frames” and “in-betweens” are created by
using computer generation techniques.
The drawings of key frames are created by using various
interactive graphics software programs which utilize the different
transformation techniques such as rotation, reflection, translation etc.
The entertainment animation can be further characterized by the following:
(a) exact representation and display of data, (b) high-speed and automatic production of
animation & (c) low host dependency.
ANIMATION TECHNIQUES:
(i) Keyframe animation
(ii) Linear interpolation
(iii) Curved interpolation
(iv) Interpolation of position and orientation
(v) Interpolation of shape
(vi) Interpolation of attributes
Keyframe animation:
A key frame is defined by its particular moment in the animation timeline as well as by
all parameters or attributes associated with it.
A sequence with three keyframes and two interpolations, one
quicker than the other.
Keyframe techniques have not proven their applicability for cartoon and
character animation.
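The in-between computation for the simplest scheme, linear interpolation between keyframes, can be sketched as follows; the (time, value) keyframe representation is an illustrative assumption.

```python
# Keyframe interpolation sketch: each keyframe is a (time, value) pair, and
# in-between frames are produced by linear interpolation within the interval
# that contains the requested time.

def interpolate(keyframes, t):
    """Linearly interpolate a parameter value at time t from sorted keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)      # normalized position in the interval
            return v0 + u * (v1 - v0)

# Three keyframes and two interpolation intervals: the second is "quicker"
# because the same change in value happens over a shorter time span.
keys = [(0.0, 0.0), (4.0, 10.0), (5.0, 20.0)]
print(interpolate(keys, 2.0))   # 5.0: halfway through the slow interval
print(interpolate(keys, 4.5))   # 15.0: halfway through the quick interval
```

Curved interpolation would replace the straight-line blend with a spline through the keyframe values, removing the abrupt speed change at each keyframe.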
ANIMATION TYPES
➢ In the preceding sections, animation systems were classified on the
basis of their role in the animation process. Another consideration is the mode
of production.