IV
Computer Graphics and Animation
Prelim Question Paper Solution
[Figure: cathode, heating filament, control grid, focusing system, magnetic deflection coils, electron beam, and phosphor-coated screen.]
Fig. (a) : Basic design of a magnetic-deflection CRT
[Figure: base, connector pins, electron gun, focusing system, horizontal and vertical deflection plates, electron beam, and phosphor-coated screen.]
Fig. (b) : Electrostatic deflection of the electron beam in CRT.
Working :
A beam of electrons (cathode rays) emitted by an electron gun passes through focusing and
deflection systems that direct the beam toward specified positions on the phosphor-coated
screen. When the electrons in the beam collide with the phosphor coating, they are stopped
and their kinetic energy is absorbed by the phosphor. The phosphor then emits a small spot
of light at each position contacted by the electron beam. Because the light emitted by the
phosphor fades very rapidly, some method is needed for maintaining the screen picture. One
way to keep the phosphor glowing is to redraw the picture repeatedly by quickly redirecting
the electron beam back over the same points. This type of display is called a 'Refresh CRT'.
Q.1(b) Explain raster scan and the difference between random scan and raster scan. [5]
Ans.: The most common type of graphics monitor employing a CRT is the raster-scan display,
based on television technology. In a raster-scan system, the electron beam is swept across
the screen, one row at a time from top to bottom. As the electron beam moves across each
row, the beam intensity is turned ON and OFF to create a pattern of illuminated spots.
Picture definition is stored in a memory area called the refresh buffer or frame buffer. This
memory area holds the set of intensity values for all the screen points. Stored intensity
values are then retrieved from the refresh buffer and "painted" on the screen one row
(scan line) at a time. Each screen point is referred to as a pixel or pel.
Vidyalankar : S.Y. B.Sc. (IT) CG
DDA Algorithm :
Step 1: Accept the end point co-ordinates of the line segment AB
i.e. A (x1, y1) and B (x2, y2)
Step 2: Calculate: dx = x2 - x1, dy = y2 - y1
Step 3: If abs (dx) ≥ abs (dy) then
    steps = abs (dx)
Else
    steps = abs (dy)
Step 4: Let x_increment = dx / steps
    y_increment = dy / steps
Step 5: Display the pixel at starting position
putpixel (x1, y1, WHITE)
Step 6: Compute the next co-ordinate position along the line path.
x(k+1) = xk + x_increment
y(k+1) = yk + y_increment
putpixel (x(k+1), y(k+1), WHITE)
Step 7: If x(k+1) = x2 AND y(k+1) = y2
Then Stop
Else go to Step 6.
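As an illustrative sketch (not part of the original solution), the steps above can be written in Python; `putpixel` is replaced by collecting the rounded pixel coordinates in a list:

```python
def dda_line(x1, y1, x2, y2):
    """DDA line rasterization following Steps 1-7 above."""
    dx = x2 - x1                        # Step 2
    dy = y2 - y1
    steps = max(abs(dx), abs(dy))       # Step 3
    if steps == 0:                      # degenerate segment: both endpoints equal
        return [(x1, y1)]
    x_inc = dx / steps                  # Step 4
    y_inc = dy / steps
    pixels = [(round(x1), round(y1))]   # Step 5: plot the starting position
    x, y = float(x1), float(y1)
    for _ in range(steps):              # Steps 6-7: walk along the line path
        x += x_inc
        y += y_inc
        pixels.append((round(x), round(y)))
    return pixels
```

For example, `dda_line(0, 0, 3, 0)` yields the four pixels of a horizontal segment.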
Step 5: Translate each calculated pixel position by T(xc, yc), where (xc, yc) is the centre, and display the pixel.
x = x(k+1) + xc
y = y(k+1) + yc
putpixel (x, y, WHITE)
Step 7: STOP.
The window divides the plane into nine regions; each end point is assigned a 4-bit region code B4 B3 B2 B1 (bits: Top, Bottom, Right, Left):

          1001 | 1000 | 1010
  ywmax  ------+------+------
          0001 | 0000 | 0010
  ywmin  ------+------+------
          0101 | 0100 | 0110
             xwmin   xwmax
Step 3:
(a) Completely Inside: If both end point codes are 0000 then line segment is completely
INSIDE
DISPLAY LINE
STOP
(b) If the logical AND operation of both end point codes is NOT 0000, then the line
segment is completely OUTSIDE.
DISCARD IT
STOP
(c) If both cases (a) and (b) are false, then the line segment is a clipping candidate.
Step 4: Determine the intersecting boundary for any outside point P(x, y).
Let code P = B4 B3 B2 B1 (bits: Top, Bottom, Right, Left).
If bit B1 = 1, the line intersects the LEFT boundary and x = xwmin.
If bit B2 = 1, the line intersects the RIGHT boundary and x = xwmax.
If bit B3 = 1, the line intersects the BOTTOM boundary and y = ywmin.
If bit B4 = 1, the line intersects the TOP boundary and y = ywmax.
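The region-code assignment can be sketched as a small Python helper (the names `outcode`, `LEFT`, `RIGHT`, `BOTTOM`, `TOP` are illustrative, not from the paper):

```python
# Region-code bits (B4 B3 B2 B1) = (TOP, BOTTOM, RIGHT, LEFT)
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xwmin, xwmax, ywmin, ywmax):
    """Compute the 4-bit region code of a point for the clipping window."""
    code = 0
    if x < xwmin:                # bit B1: left of the window
        code |= LEFT
    elif x > xwmax:              # bit B2: right of the window
        code |= RIGHT
    if y < ywmin:                # bit B3: below the window
        code |= BOTTOM
    elif y > ywmax:              # bit B4: above the window
        code |= TOP
    return code
```

Both endpoint codes equal to 0000 gives the trivial accept of case (a); a nonzero bitwise AND of the two codes gives the trivial reject of case (b).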
[Figure: window bounded by xwmin, xwmax, ywmin, ywmax, with line segments AB and CD crossing the boundaries at intersection points I(x, y).]
Find y :
For line AB,

(y - y1) / (y2 - y1) = (x - x1) / (x2 - x1)

y - y1 = [(y2 - y1) / (x2 - x1)] (x - x1)

where (y2 - y1) / (x2 - x1) = dy/dx = m (the slope), so

y = y1 + m (x - x1)
[Figure: line AB crossing ywmin at intersection point I(x, y), and line CD crossing ywmax at intersection point I(x, y).]
Find x :
For line AB,

(x - x1) / (x2 - x1) = (y - y1) / (y2 - y1)

x - x1 = [(x2 - x1) / (y2 - y1)] (y - y1)

x = x1 + (1/m) (y - y1)
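The two formulas, find y for a vertical window edge and find x for a horizontal one, can be sketched together (the helper name and argument convention are illustrative assumptions):

```python
def clip_intersection(x1, y1, x2, y2, boundary, value):
    """Intersection of segment (x1,y1)-(x2,y2) with a window edge.
    boundary == 'x': vertical edge x = value (xwmin or xwmax), find y.
    boundary == 'y': horizontal edge y = value (ywmin or ywmax), find x.
    Assumes the segment actually crosses the chosen edge."""
    if boundary == 'x':
        m = (y2 - y1) / (x2 - x1)        # slope; the segment is not vertical here
        return (value, y1 + m * (value - x1))   # y = y1 + m (x - x1)
    else:
        if x2 == x1:                     # vertical segment: x stays the same
            return (x1, value)
        m = (y2 - y1) / (x2 - x1)
        return (x1 + (value - y1) / m, value)   # x = x1 + (1/m)(y - y1)
```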
[Figure: Sutherland-Hodgman polygon clipping. The initial set of polygon vertices (A, B, C, D) is passed in turn through the LEFT, RIGHT, BOTTOM, and TOP clippers; each clipper takes the in/out status of each edge against the window boundary and outputs a new list of vertices and intersection points (I1, I2, ...), and the final output is the clipped polygon vertices.]
1) If the first vertex is outside the window boundary and the second vertex is inside, both
the intersection point of the polygon edge with the boundary and the second vertex are
stored. e.g., for edge AB crossing the left window boundary, A is OUT and B is IN:
save I1 (the intersection of edge AB with the left window boundary) and B (the second
vertex).
2) If both input vertices are inside the window boundary, only the second vertex is stored.
e.g., for edge BC, B is IN and C is IN: save C, i.e. the second vertex.
3) If the first vertex is inside the window boundary and the second vertex is outside, only
the edge intersection point is stored. e.g., for edge CD, C is IN and D is OUT:
save I2, i.e. the intersection point.
4) If both input vertices are outside the window boundary, nothing is stored. e.g., for edge
DA, D is OUT and A is OUT: save nothing.
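The four rules above can be combined into a single clipper stage, sketched in Python; the `inside` and `intersect` callbacks are caller-supplied, hypothetical helpers:

```python
def clip_polygon_edge(vertices, inside, intersect):
    """One Sutherland-Hodgman clipper stage applying rules 1-4 above.
    vertices: list of (x, y); inside(p) -> bool tests one window boundary;
    intersect(p1, p2) -> intersection of edge p1->p2 with that boundary."""
    out = []
    for i in range(len(vertices)):
        first, second = vertices[i - 1], vertices[i]   # polygon edge first -> second
        if inside(second):
            if not inside(first):                      # rule 1: OUT -> IN
                out.append(intersect(first, second))
            out.append(second)                         # rules 1 and 2: keep second vertex
        elif inside(first):                            # rule 3: IN -> OUT
            out.append(intersect(first, second))
        # rule 4: both outside -> store nothing
    return out
```

Applying this stage four times, once per window boundary, reproduces the LEFT/RIGHT/BOTTOM/TOP clipper pipeline shown in the figure.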
Let us consider another case where matrix elements a or d or both are not zero and c = b = 0.
The transformation operator T operates on the point P(x, y) to produce a scaling operation in
the x-direction, the y-direction, or in both directions, as the case may be. We discuss the
individual cases as given in Figure 1.
[Figure: four panels showing the original image and its scaled versions.]
Fig. 1 : Scaling transformations: (a) Original image; (b) scaling in x-direction; (c) scaling in y-direction; (d) scaling in xy-direction.
Now let the matrix elements a = 1 and d = sy ≠ 0 with b = c = 0. The transformation matrix
reduces to the matrix of transformation representing scaling in the y-direction by magnitude sy
(see Figure 1c). Therefore,

P'(x', y') = [x  y] . [Ty] = [x  y] | 1   0  | = [x   sy.y]
                                    | 0   sy |

That is, x' = x and y' = sy.y, and

Ty = | 1   0  |
     | 0   sy |

is the scaling transformation matrix in the y-direction. The operational significance is that
the points of the image are scaled in the y-direction by magnitude sy while x remains the same.

In general, if one wishes to produce scaling in both x and y simultaneously, by magnitude sx
in the x-direction and sy in the y-direction, then the condition for such an operation is
a = sx ≠ 1, d = sy ≠ 1 and b = c = 0 (see Figure 1d). Therefore, the matrix of transformation
for scaling in xy is

Txy = | sx  0  |
      | 0   sy |

where a = sx and d = sy; sx and sy are the scaling factors chosen in the x and y directions
respectively.
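A minimal Python sketch of the row-vector form P' = [x y] . Txy used above (function names are illustrative):

```python
def apply_2x2(point, t):
    """Row-vector convention: [x' y'] = [x y] . T for a 2x2 matrix T."""
    x, y = point
    (a, b), (c, d) = t
    return (x * a + y * c, x * b + y * d)

def scaling_matrix(sx, sy):
    """Txy = [[sx, 0], [0, sy]] scales by sx in x and sy in y."""
    return ((sx, 0.0), (0.0, sy))
```

For example, scaling the point (2, 3) by sx = 2 and sy = 0.5 gives (4, 1.5).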
Q.2 (b) Explain 2D Rotation about an Arbitrary point with a diagram? [5]
Ans.: Rotation about an Arbitrary Point
Consider a point P(x, y) in a 2D coordinate system. Rotation about Pc(xc, yc) by an angle θ
in the anticlockwise direction will move P(x, y) to P'(x', y') as shown in the figure.
The required transformation is accomplished by performing the following sequence of operations.

1. Translate the point Pc(xc, yc) to the origin, and correspondingly find the new position
   of P; let this new position be P1(x1, y1). The required translation matrix is

   T1 = | 1  0  -xc |
        | 0  1  -yc |
        | 0  0   1  |

   Pc' = T1 . Pc moves the point Pc to the origin, and P1 = T1 . P moves P to P1 such that
   Pc moves to the origin.

2. Rotate P1 to P1'(x1', y1') by an angle θ in the anticlockwise direction. The required
   rotation matrix is

   R = | cos θ  -sin θ  0 |
       | sin θ   cos θ  0 |
       | 0       0      1 |

   P1' = R . P1, i.e.

   | x1' |   | cos θ  -sin θ  0 |   | x1 |
   | y1' | = | sin θ   cos θ  0 | . | y1 |
   | 1   |   | 0       0      1 |   | 1  |

3. Finally, translate back so that Pc returns to its original position, using the translation
   T2 by (+xc, +yc). The composite transformation is R(θ, Pc) = T2 . R . T1.
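The translate-rotate-translate sequence can be sketched without matrices by composing the same three operations (an illustrative helper; the angle is in radians):

```python
import math

def rotate_about(point, center, theta):
    """Rotate `point` about `center` by theta radians, anticlockwise."""
    xc, yc = center
    x, y = point
    # Step 1: translate so that the pivot Pc goes to the origin
    x1, y1 = x - xc, y - yc
    # Step 2: rotate by theta anticlockwise about the origin
    x2 = x1 * math.cos(theta) - y1 * math.sin(theta)
    y2 = x1 * math.sin(theta) + y1 * math.cos(theta)
    # Step 3: translate back so that Pc returns to its original position
    return (x2 + xc, y2 + yc)
```

Rotating (2, 1) about (1, 1) by 90° sends the point one unit above the pivot, to (1, 2).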
[Figure: window-to-viewport transformation. A point P(xw, yw) in the window (bounded by xwmin, xwmax, ywmin, ywmax) is mapped to P(xv, yv) in the viewport (bounded by xvmin, xvmax, yvmin, yvmax).]
(d) Consider a point P(xw, yw) in the WCS mapped to P'(xv, yv) in the device co-ordinate
system. Then

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)   ...(1)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)   ...(2)

From equations (1) and (2),

xv = xvmin + (xw - xwmin) . sx   ...(3)
yv = yvmin + (yw - ywmin) . sy   ...(4)

where
sx = (xvmax - xvmin) / (xwmax - xwmin) = viewport x extent / window x extent
sy = (yvmax - yvmin) / (ywmax - ywmin) = viewport y extent / window y extent
Step 1 :
Translate (xwmin, ywmin) to the origin. The required translation matrix is

T1 = | 1  0  -xwmin |
     | 0  1  -ywmin |
     | 0  0   1     |

Step 2 :
Scale with respect to the origin. The required scaling matrix is

S = | sx  0   0 |
    | 0   sy  0 |
    | 0   0   1 |

Step 3 :
Translate to (xvmin, yvmin). The required translation matrix is

T2 = | 1  0  xvmin |
     | 0  1  yvmin |
     | 0  0  1     |

The composite transformation matrix is

V = (T2 . S) . T1

Any point P(x, y, 1) on the object will be transformed to P'(x', y', 1) such that P' = V . P.
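Equations (3) and (4) can be sketched directly, without building the matrices (the helper name and the 4-tuple argument convention are illustrative assumptions):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map (xw, yw) from window (xwmin, ywmin, xwmax, ywmax)
    to viewport (xvmin, yvmin, xvmax, yvmax) via equations (3) and (4)."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # viewport x extent / window x extent
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # viewport y extent / window y extent
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return (xv, yv)
```

The centre of a 10x10 window maps to the centre of a 100x50 viewport, i.e. (5, 5) maps to (50, 25).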
Q.2 (e) Consider a point P(2, 3) in a coordinate plane. Perform reflection of the point P [5]
through the x-axis and draw the same.
Ans.: To perform reflection through the x-axis, we have the matrix of reflection as

| 1   0 |
| 0  -1 |

Applying the reflection matrix to the point P(2, 3), we obtain [2  -3] as given below:

P'(x', y') = [2  3] | 1   0 | = [2  -3]
                    | 0  -1 |

The point P with coordinates (2, 3) is reflected about the x-axis and the new coordinates of
the transformed (reflected) point become (2, -3), as shown in the Figure.

[Figure: P(2, 3) above the x-axis and the reflected point P'(2, -3) below it.]
Fig. : Figure for Example 3.
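A one-line check of the worked example, under the same row-vector convention (the helper name is illustrative):

```python
def reflect_x_axis(x, y):
    # [x' y'] = [x y] . [[1, 0], [0, -1]] = [x, -y]
    return (x, -y)

print(reflect_x_axis(2, 3))   # -> (2, -3)
```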
2) Oblique Projection
If the direction of projection is not perpendicular to the projection plane, the projection is
called an oblique projection.
A multi-view projection displays a single face of a 3D object.
Oblique projections are classified as : (a) cavalier projection
(b) cabinet projection.

Cavalier (DOP = 45°): tan(α) = 1
Cabinet (DOP = 63.4°): tan(α) = 2
Finally, the direction vector IP of the positive p axis is chosen so that it is perpendicular to
JQ and, by convention, so that the triad IP, JQ, and N form a left-handed coordinate
system. That is:

IP = (N × JQ) / |N × JQ|
This coordinate system is called the view plane coordinate system or viewing coordinate
system.
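The normalized cross product above can be sketched as follows (the vector names follow the text; the helper name is hypothetical):

```python
import math

def view_plane_ip(n, jq):
    """IP = (N x JQ) / |N x JQ|: unit vector perpendicular to both N and JQ.
    n and jq are 3-tuples (x, y, z)."""
    cx = n[1] * jq[2] - n[2] * jq[1]     # cross product components
    cy = n[2] * jq[0] - n[0] * jq[2]
    cz = n[0] * jq[1] - n[1] * jq[0]
    mag = math.sqrt(cx * cx + cy * cy + cz * cz)
    return (cx / mag, cy / mag, cz / mag)
```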
the projection plane. If the distance from the one to the other is finite, then the
projection is perspective, and if infinite the projection is parallel. When we define a
perspective projection, we explicitly specify its center of projection; for a parallel
projection, we give its direction of projection. The visual effect of a perspective projection
is similar to that of photographic systems and of the human visual system, and is known as
perspective foreshortening. The size of the perspective foreshortening of an object varies
inversely with the distance of that object from the center of projection. Thus, although the
perspective projection of objects tends to look realistic, it is not particularly useful for
recording the exact shape and measurements of the objects; distances cannot be taken
from the projection, angles are preserved on only those faces of the object parallel to the
projection plane, and parallel lines do not in general project as parallel lines. The parallel
projection is a less realistic view because perspective foreshortening is lacking, although
there can be different constant foreshortenings along each axis. The projection can be
used for exact measurements, and parallel lines do remain parallel. As in the perspective
projection, angles are preserved only on faces of the object parallel to the projection plane.
The human visual system is a marvelously complex and highly nonlinear detector of
electromagnetic radiation with wavelengths ranging from 380 to 770 nanometers (nm). We
see light of different wavelengths as a continuum of colors ranging through the visible
spectrum: 650 nm is red, 540 nm is green, 450 nm is blue, and so on.
The sensitivity of the human eye to light varies with wavelength. A light source with a
radiance of one watt/m2-steradian of green light, for example, appears much brighter than
the same source with a radiance of one watt/m2 -steradian of red or blue light. In
photometry, we do not measure watts of radiant energy. Rather, we attempt to measure the
subjective impression produced by stimulating the human eye-brain visual system with
radiant energy.
This task is complicated immensely by the eye's nonlinear response to light. It varies not
only with wavelength but also with the amount of radiant flux, whether the light is constant
or flickering, the spatial complexity of the scene being perceived, the adaptation of the iris
and retina, the psychological and physiological state of the observer, and a host of other
variables. Nevertheless, the subjective impression of seeing can be quantified for "normal"
viewing conditions. In 1924, the Commission Internationale de l'Eclairage (International
Commission on Illumination, or CIE) asked over one hundred observers to visually match the
"brightness" of monochromatic light sources with different wavelengths under controlled
conditions. The statistical result, the so-called CIE photometric curve shown in the Figure,
shows the photopic luminous efficiency of the human visual system as a function of
wavelength. It provides a weighting function that can be used to convert radiometric into
photometric measurements.
2. Determine those parts of the object whose view is unobstructed by other parts of it or
any other object with respect to the viewing specification.
3. Draw those parts in the object color.
End
Compare each object with all other objects to determine the visibility of the
object parts.
If there are n objects in the scene, complexity = O(n²)
Calculations are performed at the resolution in which the objects are defined (only
limited by the computation hardware).
Process is unrelated to display resolution or the individual pixel in the image and the
result of the process is applicable to different display resolutions.
Display is more accurate but computationally more expensive as compared to image
space methods because step 1 is typically more complex, e.g. due to the possibility
of intersection between surfaces.
Suitable for scenes with a small number of objects, and objects with simple relationships
with each other.
The test is very simple: if the z component of the normal vector is positive, then it is a
back face; if the z component of the vector is negative, it is a front face.
Note that this technique only caters well for non-overlapping convex polyhedra.
For other cases where there are concave polyhedra or overlapping objects, we still
need to apply other methods to determine whether the obscured faces are partially or
completely hidden by other objects (e.g. using the Depth-Buffer Method or
Depth-sort Method).
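The z-component test can be sketched for a polygon's screen-space vertices; the sign convention below follows the text, and whether positive z means "back face" depends on your vertex winding and viewing direction, so treat it as an assumption to verify:

```python
def normal_z(v0, v1, v2):
    """z component of the cross product (v1 - v0) x (v2 - v0),
    using the (x, y) screen-space coordinates of three vertices."""
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx

def is_back_face(v0, v1, v2):
    # Per the text: a positive z component of the normal means a back face.
    return normal_z(v0, v1, v2) > 0
```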
Discussion
• Back face removal is achieved by not displaying a polygon if the viewer is located in its
back half-space.
• It is an object space algorithm (sorting and intersection calculations are done in object
space precision).
• If the view point changes, the BSP tree needs only minor re-arrangement.
• A new BSP tree is built if the scene changes.
• The algorithm displays polygons back to front (cf. Depth-sort).
BSP Algorithm
Procedure DisplayBSP(tree: BSP_tree)
Begin
    If tree is not empty then
        If viewer is in front of the root then
        Begin
            DisplayBSP(tree.back_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.front_child)
        End
        Else
        Begin
            DisplayBSP(tree.front_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.back_child)
        End
End
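The pseudocode above can be expressed as a runnable Python sketch; the tuple tree representation and the `viewer_in_front` callback are assumptions made for illustration:

```python
def display_bsp(tree, viewer_in_front, display_polygon):
    """Back-to-front traversal of a BSP tree (painter's order).
    tree: None or (root_polygon, front_child, back_child);
    viewer_in_front(polygon) -> bool; display_polygon draws one polygon."""
    if tree is None:
        return
    root, front, back = tree
    if viewer_in_front(root):
        # Viewer is in front of the root's plane: draw the far (back) side first.
        display_bsp(back, viewer_in_front, display_polygon)
        display_polygon(root)
        display_bsp(front, viewer_in_front, display_polygon)
    else:
        # Viewer is behind the root's plane: the front side is farther away.
        display_bsp(front, viewer_in_front, display_polygon)
        display_polygon(root)
        display_bsp(back, viewer_in_front, display_polygon)
```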
The x and y coordinates of the particle can be thought of as functions of a new variable t,
and so we can write, x = f(t), y = g(t), where f, g are functions of t. In some physical
problems t is thought of as time.
The equations,
x = f(t)
y = g(t)
are called the parametric equations of the point P(x, y), and the variable t is called a
parameter.
It is often very useful to take a cartesian equation y = F(x) and introduce a parameter t so
that the x and y coordinates of any point on the curve can be expressed in terms of this
parameter.
Here is a very simple example of a parametrisation. Suppose we begin with the line whose
equation is
y = 3x - 4.
We can introduce a new variable t and write x = t. Then we have y = 3t - 4.
Thus we can rewrite the equation y = 3x - 4 in parametric form as :
x = t
y = 3t - 4.
Curves are useful in geometric modeling and they should have a shape which has a clear and
intuitive relation to the path of the sequence of control points. One family of curves
satisfying this requirement is the Bezier curve.
The Bezier curve requires only two end points and other points that control the endpoint
tangent vectors.
A Bezier curve is defined by a sequence of N + 1 control points, P0, P1, ..., PN. We
define the Bezier curve using the algorithm (invented by de Casteljau), based on recursive
splitting of the intervals joining the consecutive control points.
The de Casteljau method is a purely geometric construction for Bezier splines which does
not rely on any polynomial formulation and is extremely easy to understand: it performs
repeated linear interpolation to compute splines of any order.
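A minimal sketch of the de Casteljau evaluation for 2D control points (illustrative, not taken from the paper):

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    linearly interpolating between consecutive control points."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # One level: interpolate each consecutive pair at parameter t.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```

For two control points this reduces to a straight line; for three or more it traces the familiar Bezier arc.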
However, we cannot easily control the curve locally. That is, any change to an individual
control point will cause changes in the curve along its full length. In addition, we cannot
create a local cusp in the curve, that is, we cannot create a sharp corner unless we create it
at the beginning or end of a curve where it joins another curve. Finally, it is not possible to
keep the degree of the Bezier curve fixed while adding additional points; any additional
points will automatically increase the degree of the curve. The so-called b-spline curve
addresses each of these problems with the Bezier curve. It provides the most powerful and
useful approach to curve design available today. However, the downside is that the b-spline
curve is somewhat more complicated to compute compared to the Bezier curve, but as we
shall see the two kinds of curves are closely related. In fact a b-spline curve degenerates
into a Bezier curve when the order of the b-spline curve is exactly equal to the number of
control points.
you can manoeuvre different articulations or bend it into a fluid curve. It can prove useful
in many situations, including cut-out animation. When paired with your creativity, the
Deformation Effect can produce some stunning results.
There are 2 main types of deformers :
• Bone Deformer
• Curve Deformer
All the Deformation Effect modules are available in the Module
Library, under the Deformation tab.
The Bone Deformer allows you to create a basic or advanced skeleton structure in which the
parent deformer will move the child deformers. The Bone Deformer is mostly used when
animating the extremities of a character such as arms or legs and will add fluidity and a
natural feel to the animation. The Bone effect can be manipulated to rotate a limb at an
articulation joint and to also shorten or elongate the extremities of a limb. Every Bone
module within a skeleton chain is linked together by an Articulation module.
The Curve Deformer has a hierarchy similar to that of the Bones Deformer and provides
you with complete flexibility. For example, when editing curves, you can deform a straight
line into an arc or a zig-zag with only a few clicks. Curve Deformers are mostly used to
animate elements that do not have joints, for example hair or facial features. However, in
some cases they can be used to animate limbs to create a specific animation genre, similar
to the early rubber hose style of animation with typically simple, flowing curves, without
articulation (no hinged wrists or elbows).
Character animation is generally defined as the art of making a particular character move in
a two- or three-dimensional context. It is a process central to the concept of animation.
Many associate early character animation with Walt Disney Studios, where cartoon artists
created particular characters and presented them with particular traits and characteristics
on screen. This requires combining a lot of technical drawing or animation with some top-level
ideas about how the character moves, "thinks," behaves and otherwise appears
consistently on screen.
The target frame rate for interactive applications such as games and simulations is often
25-60 hertz, with only a small fraction of the time allotted to an individual frame remaining
for physical simulation. Simplified models of physical behaviors are generally preferred if
they are more efficient, easier to accelerate (through pre-computation, clever data
structures, or SIMD / GPGPU), or satisfy desirable mathematical properties (such as
unconditional stability or volume conservation when a soft body undergoes deformation).
Fine details are not important when the overriding goal of a visualization is aesthetic appeal
or the maintenance of player immersion since these details are often difficult for humans
to notice or are otherwise impossible to distinguish at human scales.
Procedural animation is used to simulate particle systems (smoke, fire, water), cloth and
clothing, rigid body dynamics, and hair and fur dynamics, as well as character animation.
In video games, it is often used for simple things like turning a character's head when a
player looks around (as in Quake III Arena) and more complex things, like ragdoll physics,
which is usually used for the death of a character in which the ragdoll will realistically fall
to the floor. A ragdoll usually consists of a series of connected rigid bodies that are
programmed to have Newtonian physics acting upon them; therefore, very realistic effects
can be generated that would hardly be possible with traditional animation. For example,
a character can die slumped over a cliff and the weight of its upper-body can drag the rest
of it over the edge.
Even more complex examples of procedural animation can be found in the game Spore,
wherein user-created creatures will automatically be animated for all actions needed in the
game from walking, to driving, to picking things up. In the game Unreal Tournament 3, bodies
that have gone into ragdoll mode to fake death can arise from any position into which they
have fallen and get back on their feet. The canceled Indiana Jones game from LucasArts
shown at E3 2006 featured character motions that were animated entirely in real-time,
with characters dodging, punching, and reacting to the environment based on an engine
called Euphoria by NaturalMotion, which has since been used in games such as Grand Theft
Auto IV and Backbreaker.
The basic idea behind animation is to play back the recorded images at rates fast
enough to fool the human eye into interpreting them as continuous motion. Animation can
make a series of dead images come alive. Animation can be used in many areas like
entertainment, computer aided-design, scientific visualization, training, education, e-
commerce, and computer art.
In this technique, a storyboard is laid out and then the artists draw the major frames of the
animation. Major frames are the ones in which prominent changes take place. They are the key
points of animation. Keyframing requires that the animator specify critical or key positions for
the objects. The computer then automatically fills in the missing frames by smoothly
interpolating between those positions.
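The in-between generation described above can be sketched as linear interpolation between two keyframes (a simplified illustration; production systems typically use splines and easing curves rather than straight lines):

```python
def interpolate_frames(key_a, key_b, num_inbetweens):
    """Generate in-between frames between two keyframes by linearly
    interpolating each (x, y) point; key_a and key_b are lists of
    corresponding points describing the same object in the two keyframes."""
    frames = []
    for i in range(1, num_inbetweens + 1):
        t = i / (num_inbetweens + 1)          # parameter along the motion
        frames.append([(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                       for (x0, y0), (x1, y1) in zip(key_a, key_b)])
    return frames
```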
Procedural
In a procedural animation, the objects are animated by a procedure - a set of rules -not by
keyframing. The animator specifies rules and initial conditions and runs the simulation. Rules are
often based on physical rules of the real world expressed by mathematical equations.
Behavioral
In behavioral animation, an autonomous character determines its own actions, at least to a
certain extent. This gives the character some ability to improvise, and frees the animator from
the need to specify each detail of every character's motion.
This technology has enabled a number of famous athletes to supply the actions for
characters in sports video games. Motion capture is pretty popular with the animators
mainly because some of the commonplace human actions can be captured with relative ease.
However, there can be serious discrepancies between the shapes or dimensions of the
subject and the graphical character, and this may lead to problems of exact execution.
different sequences while maintaining physical realism. Secondly, real-time simulations allow
a higher degree of interactivity where the real person can maneuver the actions of the
simulated character.
In contrast, applications based on key-framing and motion capture select and modify motions
from a pre-computed library of motions. One drawback that simulation suffers from is the
expertise and time required to handcraft the appropriate control systems.
Keyframes are important frames during which an object changes its size, direction, shape or
other properties. The computer then figures out all the in-between frames, saving an
extreme amount of time for the animator. The following illustrations depict the frames
drawn by the user and the frames generated by the computer.