Computer Graphics & Image Processing

Computer Graphics
M.Sc. IT

Bindiya Patel
Revised By: Ms. Ujjwala
B.E. (I.T.)
Lecturer
Dept. of Information Technology
Biyani Girls College, Jaipur

Published by:
Think Tanks
Biyani Group of Colleges

While every effort is taken to avoid errors or omissions in this publication, any mistake or omission that may have crept in is not intentional. It may be noted that neither the publisher nor the author will be responsible for any damage or loss of any kind arising to anyone in any manner on account of such errors and omissions.
Preface

This book has been written for the students, keeping in mind their general weakness in understanding the fundamental concepts of the topics. The book is self-explanatory and adopts the "Teach Yourself" style. It is based on a question-answer pattern. The language of the book is quite easy and understandable, based on a scientific approach.

This book covers basic concepts of computer graphics and image processing, including graphics applications and hardware, transformations, output primitives, clipping, visible surface detection, image processing and multimedia.

Any further improvement in the contents of the book by way of corrections, omissions and inclusions will be gladly made on the basis of suggestions from the readers, for which the author shall be obliged.
I acknowledge special thanks to Mr. Rajeev Biyani, Chairman, and Dr. Sanjay Biyani, Director (Acad.), Biyani Group of Colleges, who are the backbone and main concept providers of this work, and who have been a constant source of motivation throughout this endeavour. They played an active role in coordinating the various stages of this endeavour and spearheaded the publishing work.
I look forward to receiving valuable suggestions from professors of various educational institutions, other faculty members and students for improvement of the quality of the book. Readers may feel free to send their comments and suggestions to the address mentioned below.
Author
Syllabus
M.Sc. IT
Computer Graphics
Content

S.No.  Name of Topic                                          Page No.

1.  Graphics Application and Hardware                         7-19
    1.1  Introduction to Computer Graphics
    1.2  Application of Computer Graphics
    1.3  Video Display Devices
    1.4  Raster Scan Displays
    1.5  Random Scan Displays
    1.6  Color CRT Monitor
    1.7  Shadow Mask Methods

2.  Transformation                                            20-30
    2.1  Transformation in 2-dimension & 3-dimension
    2.2  Rotation in 2-dimension & 3-dimension
    2.3  Scaling in 2-dimension & 3-dimension
    2.4  Composite Transformation
    2.5  Reflection
    2.6  Shear

3.  Output Primitives                                         31-49
    3.1  Line Drawing Algorithms
         (a) DDA
         (b) Bresenham's Algorithm
    3.2  Circle Drawing Algorithm
    3.3  Ellipse Drawing Algorithm
    3.4  Boundary Fill Algorithm
    3.5  Flood Fill Algorithm

4.  Clipping Algorithm                                        50-58
    4.1  Introduction to Clipping
    4.2  Application of Clipping
    4.3  Line Clipping Methods
         (a) Cohen Sutherland Method
         (b) Cyrus Beck Algorithm

5.  Visible Surface Detection                                 59-70
    5.1  Depth Buffer Method
    5.2  Z Buffer Method
    5.3  Object Space Method
    5.4  Image Space Method
    5.5  Painter's Algorithm
    5.6  Back Face Detection
    5.7  A Buffer Method
    5.8  Scan Line Method

6.                                                            71-82

7.  Image Processing                                          83-90
    7.1  Introduction to Image Processing
    7.2  Operations of Image Processing
    7.3  Application of Image Processing
    7.4  Image Enhancement Techniques

8.  Multimedia                                                91-102
    8.1  Introduction to Multimedia
    8.2  Application of Multimedia
    8.3  Tweening
    8.4  Morphing
    8.5  Frame Grabbers
    8.6  Scanners
    8.7  Digital Cameras
    8.8  JPEG Compression Technique
    8.9  MPEG Compression Technique
    8.10 Data Stream
    8.11 Hypertext / Hypermedia
Chapter-1

Graphics Application and Hardware

Q.1	What is Computer Graphics? What are its applications?

Ans.: The computer has become a powerful tool for the rapid and economical production of pictures. There is virtually no area in which graphical displays cannot be used to some advantage. Today computer graphics is used extensively in such areas as science, medicine, engineering etc.

Application of computer graphics:
(1)	Computer-Aided Design (CAD)
(2)	Presentation Graphics
(3)	Computer Art
(4)	Entertainment (animation, games, movies)
(5)	Education and Training
(6)	Visualization (scientific and business)
(7)	Image Processing
(8)	Graphical User Interfaces
Q.2
Ans.: The primary output device in a graphics system is a video controller. The
operation of most video monitors is based on the standard cathode-ray
tube (CRT) design.
Refresh Cathode Ray Tube : Fig (1) illustrates the basic operation of a
CRT
Fig.1
Here electron beam is emitted by the electron gun in a CRT. It passes
through focusing and deflection system that directs the beam towards
specified position on the phosphor coated system. The light emitted by the
phosphor fades very rapidly. In order to maintain the screen picture or to
keep the phosphor is to redraw the picture repeatedly by quickly directing
the electron beam over the same point. This process is called refresh CRT.
Fig. 2

The primary components of an electron gun in a CRT are the heated metal cathode and a control grid, as in Fig. 2.

Heat is supplied to the cathode by directing a current through a coil of wire, called the filament, inside the cylindrical cathode structure. This causes electrons to be "boiled off" the hot cathode surface; the freed electrons are then accelerated towards the phosphor screen by a high positive voltage.
The intensity of the electron beam is controlled by setting voltage levels on the control grid. A high negative voltage will shut off the beam by repelling electrons and stopping them from passing through. The amount of light emitted depends on the number of electrons striking the screen.

The focusing system in a CRT is needed to force the electron beam to converge into a small spot as it strikes the phosphor; otherwise the electrons would repel each other and the beam would spread out as it approaches the screen.
Focusing is accomplished with either electric or magnetic fields. Electrostatic focusing is commonly used in television and computer graphics monitors. With electrostatic focusing, the electron beam passes through a positively charged metal cylinder that forms an electrostatic lens, as shown in Fig. 2-3. The action of the electrostatic lens focuses the electron beam at the center of the screen, in exactly the same way that an optical lens focuses a beam of light at a particular focal distance. Similar lens focusing effects can be accomplished with a magnetic field set up by a coil mounted around the outside of the CRT envelope. Magnetic lens focusing produces the smallest spot size on the screen and is used in special-purpose devices.
Spots of light are produced on the screen by the transfer of the CRT beam energy to the phosphor. When the electrons in the beam collide with the phosphor coating, they are stopped and their kinetic energy is absorbed by the phosphor. Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons in the phosphor atoms to move up to higher quantum-energy levels. After a short time, the "excited" phosphor electrons begin dropping back to their stable ground state, giving up their extra energy as small quanta of light energy. What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level. The frequency (or color) of the light emitted by the phosphor is proportional to the energy difference between the excited quantum state and the ground state.
Different kinds of phosphors are available for use in a CRT. Besides color, a major difference between phosphors is their persistence: how long they continue to emit light (that is, have excited electrons returning to the ground state) after the CRT beam is removed. Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker. A phosphor with low persistence is useful for animation; a high-persistence phosphor is useful for displaying highly complex, static pictures. Graphics monitors are usually constructed with persistence in the range from 10 to 60 microseconds.
Figure 2-5 shows the intensity distribution of a spot on the screen. The intensity is greatest at the center of the spot and decreases with a Gaussian distribution out to the edges. This distribution corresponds to the cross-sectional electron-density distribution of the CRT beam.

The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution. A more precise definition of resolution is the number of points per centimeter that can be plotted horizontally and vertically, although it is often simply stated as the total number of points in each direction. Spot intensity has a Gaussian distribution (Fig. 2-5), so two adjacent spots will appear distinct as long as their separation is greater than the diameter at which each spot reaches about 60% of its maximum intensity, as in Fig. 2-6. Spot size also depends on intensity: as more electrons are accelerated toward the phosphor per second, the CRT beam diameter and the illuminated spot increase. In addition, the increased excitation energy tends to spread to neighboring phosphor atoms not directly in the path of the beam, which further increases the spot diameter. Thus, the resolution of a CRT depends on the type of phosphor, the intensity to be displayed, and the focusing and deflection systems. Typical resolution on a high-quality system is 1280 by 1024, with higher resolutions available on many systems. High-resolution systems are often referred to as high-definition systems. The physical size of a graphics monitor is given as the length of the screen diagonal, with sizes varying from about 12 inches to 27 inches or more. A CRT monitor can be attached to a variety of computer systems, so the number of screen points that can actually be plotted also depends on the capabilities of the system to which it is attached.

Another property of video monitors is aspect ratio. This number gives the ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions on the screen. (Sometimes aspect ratio is stated in terms of the ratio of horizontal to vertical points.) An aspect ratio of 3/4 means that a vertical line plotted with three points has the same length as a horizontal line plotted with four points.
Q.3	Differentiate between Raster Scan and Random Scan displays.

Ans.:

S.No.	Raster Scan	Random Scan
1.	The electron beam is swept across the screen one row (scan line) at a time, from top to bottom.	The electron beam is directed only to the parts of the screen where a picture is to be drawn.
2.	Picture definition is stored as a set of intensity values for all screen points in an area of memory called the frame buffer (refresh buffer).	Picture definition is stored as a set of line-drawing commands in an area of memory called the display file.
3.	It is well suited for displaying realistic scenes containing subtle shading and color patterns.	It is designed for line-drawing applications and cannot display realistic shaded scenes.
4.	Lines are plotted as discrete pixels, so slanted lines show a stair-step (jagged) appearance at lower resolutions.	Lines are drawn smoothly, since the beam directly traces each line path.
5.	Resolution is comparatively lower.	This provides higher resolution.
Chapter-2

Transformation

Q.1	What is Transformation? What are the general transformation techniques?

Ans.: A transformation changes the position, size or orientation of an object. The basic (general) transformations are Translation, Rotation and Scaling.

Translation: A translation repositions an object along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y):

x' = x + tx ,	y' = y + ty	_ _ _ (1)

Writing the coordinate pairs and the translation distances as vectors,

P = (x, y) ,	P' = (x', y') ,	T = (tx, ty)	_ _ _ (2)

the two-dimensional translation equations become

P' = P + T	_ _ _ (3)
Rotation: A two-dimensional rotation repositions an object along a circular path in the xy plane. To rotate a point P = (x, y) through an angle θ about the origin, first write the point in polar form with radius r and angle φ:

x = r cos φ ,	y = r sin φ	_ _ _ (4)

The rotated position is then

x' = r cos(θ + φ) = x cos θ − y sin θ
y' = r sin(θ + φ) = x sin θ + y cos θ	_ _ _ (5), (6), (7)

In matrix form,

P' = R · P ,  where  R = | cos θ   −sin θ |
                         | sin θ    cos θ |	_ _ _ (8)

Rotation of a point (x, y) about an arbitrary pivot position (xr, yr) is obtained as

x' = xr + (x − xr) cos θ − (y − yr) sin θ
y' = yr + (x − xr) sin θ + (y − yr) cos θ	_ _ _ (9)
Scaling: A scaling transformation alters the size of an object. A simple two-dimensional scaling is performed by multiplying the coordinate values (x, y) by scaling factors Sx and Sy:

x' = x · Sx ,	y' = y · Sy	_ _ _ (10)

In matrix form:

| x' |   | Sx  0  |   | x |
|    | = |        | · |   |	_ _ _ (11)
| y' |   | 0   Sy |   | y |

or	P' = S · P	_ _ _ (12)
Composite Transformation: With the matrix representations above, a sequence of transformations can be combined (concatenated) into a single composite transformation matrix.

Composite Translation: If two successive translations (tx1, ty1) and (tx2, ty2) are applied to a point P, the composite matrix is the product of the two translation matrices:

| 1  0  tx2 |   | 1  0  tx1 |   | 1  0  tx1 + tx2 |
| 0  1  ty2 | · | 0  1  ty1 | = | 0  1  ty1 + ty2 |	_ _ _ (4)
| 0  0   1  |   | 0  0   1  |   | 0  0      1     |

or	T(tx2, ty2) · T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2)	_ _ _ (5)

which shows that two successive translations are additive.

Composite Scaling: Similarly, concatenating two scaling matrices produces the composite scaling matrix

| sx2  0   0 |   | sx1  0   0 |   | sx1·sx2     0      0 |
| 0   sy2  0 | · | 0   sy1  0 | = |    0     sy1·sy2   0 |	_ _ _ (9)
| 0    0   1 |   | 0    0   1 |   |    0        0      1 |

or	S(sx2, sy2) · S(sx1, sy1) = S(sx1·sx2, sy1·sy2)	_ _ _ (10)

so successive scaling operations are multiplicative.
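The two composition identities above can be checked numerically. This is a small sketch of my own (not from the text), using plain nested lists for the 3×3 matrices:

```python
def matmul(a, b):
    # product of two 3x3 matrices: applying b first, then a
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# Two successive translations are additive, eq. (5):
assert matmul(translate(3, 4), translate(1, 2)) == translate(4, 6)
# Two successive scalings are multiplicative, eq. (10):
assert matmul(scale(2, 3), scale(4, 5)) == scale(8, 15)
print("composite identities verified")
```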
Q.3	Write short notes on: (1) Reflection (2) Shear

Ans.: (1) Reflection: A reflection is a transformation that produces a mirror image of an object. The mirror image is generated relative to an axis of reflection by rotating the object 180° about that axis.

Fig. 1: An object in its original position and its reflected position about the x-axis.

(i)	Reflection about the line y = 0 (the x-axis) is accomplished with the transformation matrix

|  1   0   0 |
|  0  −1   0 |	_ _ _ (8)
|  0   0   1 |

This transformation keeps x-values the same but flips the y-values of coordinate positions.

(ii)	Reflection about the y-axis flips x-coordinates, keeping y-coordinates the same. The matrix for this transformation is:

| −1   0   0 |
|  0   1   0 |	_ _ _ (9)
|  0   0   1 |
(2) Shear: A transformation that distorts the shape of an object, such that the transformed shape appears as if the object were composed of internal layers that slide over each other, is called a shear.

An x-direction shear relative to the x-axis is produced with

x' = x + shx · y ,	y' = y	_ _ _ (3), (4)

With a shear relative to a reference line y = yref, the transformation is

x' = x + shx (y − yref) ,	y' = y	_ _ _ (5), (6)

A y-direction shear relative to the line x = xref is produced with

x' = x ,	y' = shy (x − xref) + y	_ _ _ (7), (8)
Q.4	Explain three-dimensional (1) Translation, (2) Rotation and (3) Scaling.

Ans.: (1) Translation: In a three-dimensional homogeneous coordinate representation, a point is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation

| x' |   | 1  0  0  tx |   | x |
| y' | = | 0  1  0  ty | · | y |	_ _ _ (1)
| z' |   | 0  0  1  tz |   | z |
| 1  |   | 0  0  0  1  |   | 1 |

or	P' = T · P	_ _ _ (2)

which is equivalent to the three equations

x' = x + tx ,	y' = y + ty ,	z' = z + tz	_ _ _ (3)

(2) Rotation: To generate a rotation transformation for an object in 3-D, we must designate an axis of rotation about which the object is to be rotated. Rotations about the x, y and z coordinate axes are the easiest to handle.
z-axis rotation: The two-dimensional rotation equations extend directly; parameter θ specifies the rotation angle:

x' = x cos θ − y sin θ
y' = x sin θ + y cos θ	_ _ _ (4)
z' = z

In homogeneous coordinate form, the 3-D z-axis rotation equations are expressed as:

| x' |   | cos θ  −sin θ   0   0 |   | x |
| y' | = | sin θ   cos θ   0   0 | · | y |	_ _ _ (5)
| z' |   |   0       0     1   0 |   | z |
| 1  |   |   0       0     0   1 |   | 1 |

or	P' = Rz(θ) · P	_ _ _ (6)

x-axis rotation: Equations are:

y' = y cos θ − z sin θ
z' = y sin θ + z cos θ	_ _ _ (7)
x' = x

| x' |   | 1    0       0     0 |   | x |
| y' | = | 0  cos θ  −sin θ   0 | · | y |	_ _ _ (8)
| z' |   | 0  sin θ   cos θ   0 |   | z |
| 1  |   | 0    0       0     1 |   | 1 |

or	P' = Rx(θ) · P	_ _ _ (9)

y-axis rotation: Equations are:

z' = z cos θ − x sin θ
x' = z sin θ + x cos θ	_ _ _ (10), (11)
y' = y

or	P' = Ry(θ) · P	_ _ _ (12)
(3) Scaling: The matrix expression for the 3-D scaling transformation of a position P = (x, y, z) is given by

| x' |   | Sx  0   0   0 |   | x |
| y' | = | 0   Sy  0   0 | · | y |	_ _ _ (13)
| z' |   | 0   0   Sz  0 |   | z |
| 1  |   | 0   0   0   1 |   | 1 |

or	P' = S · P	_ _ _ (14)

where the scaling parameters Sx, Sy and Sz are assigned any positive values. Explicit expressions for the coordinate transformations are

x' = x · Sx ,	y' = y · Sy ,	z' = z · Sz	_ _ _ (15)
Chapter-3

Output Primitives

Q.1	Explain the DDA Line Drawing Algorithm.

Ans.: The Cartesian slope-intercept equation for a straight line is

y = m · x + b	_ _ _ (1)

with m representing the slope of the line and b the y intercept. Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), as shown in Fig. (1), we can determine values for the slope m and y intercept b with the following calculations:

m = (y2 − y1) / (x2 − x1)	_ _ _ (2)

b = y1 − m · x1	_ _ _ (3)

For any given x interval Δx along the line, we can compute the corresponding y interval as

Δy = m · Δx	_ _ _ (4)

Similarly, we can obtain the x interval corresponding to a specified Δy as

Δx = Δy / m	_ _ _ (5)

Fig. (1): Line path between endpoints (x1, y1) and (x2, y2).
The DDA (Digital Differential Analyzer) is a scan-conversion line algorithm based on these relations. For a line with positive slope less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as

yk+1 = yk + m	_ _ _ (6)

The subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached.

For lines with positive slope greater than 1, we reverse the roles of x and y. That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as:

xk+1 = xk + 1/m	_ _ _ (7)

Equations (6) and (7) are based on the assumption that lines are to be processed from the left endpoint to the right endpoint. If this processing is reversed, the signs are changed, so that Δx = −1 and Δy = −1:

yk+1 = yk − m	_ _ _ (8)

xk+1 = xk − 1/m	_ _ _ (9)

Equations (6) to (9) are also used to calculate pixel positions along a line with negative slope: when the start endpoint is at the right, we set Δx = −1 and obtain y positions from eq. (8); similarly, when the absolute value of a negative slope is greater than 1, we use Δy = −1 and eq. (9), or Δy = 1 and eq. (7).
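The case analysis above can be folded into one routine by always stepping along the axis of greatest change. The following is a minimal DDA sketch (my own, not from the text); it assumes non-negative integer endpoint coordinates:

```python
def dda_line(x1, y1, x2, y2):
    # DDA: sample at unit intervals along the axis of greatest change
    # (eqs. (6)-(9)) and round the other coordinate at each step.
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        # int(v + 0.5) rounds halves up (valid for non-negative coordinates)
        points.append((int(x + 0.5), int(y + 0.5)))
        x += x_inc
        y += y_inc
    return points

print(dda_line(0, 0, 4, 2))  # → [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Note the floating-point additions in the loop: this is exactly the incremental calculation of eqs. (6) and (7), and the accumulated rounding error it can cause for long lines is what motivates the integer-only Bresenham algorithm described next.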
Bresenham's Line Algorithm: An accurate and efficient raster line-generating algorithm, developed by Bresenham, scan-converts lines using only incremental integer calculations, and can be adapted to display circles and other curves. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns, as shown in Figs. (5) and (6).

Fig. 5 and Fig. 6: Sections of a screen grid (pixel columns 10-13 with scan lines 10-13, and columns 50-53 with scan lines 48-50) showing a specified line path passing between the candidate pixels at each sampling position.
To illustrate Bresenham's approach, consider a line with positive slope less than 1. Pixel positions along the line path are determined by sampling at unit x intervals. Assuming pixel (xk, yk) has been plotted, the next pixel is either (xk + 1, yk) or (xk + 1, yk + 1). At sampling position xk + 1, the y coordinate on the line is

y = m (xk + 1) + b	_ _ _(10)

Then	d1 = y − yk = m (xk + 1) + b − yk
and	d2 = (yk + 1) − y = yk + 1 − m (xk + 1) − b	_ _ _ (11)

A decision parameter Pk for the kth step in the line algorithm can be obtained by rearranging eq. (11) so that it involves only integer calculations. We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining

Pk = Δx (d1 − d2) = 2Δy · xk − 2Δx · yk + c	_ _ _ (12)

The sign of Pk is the same as the sign of d1 − d2, since Δx > 0 for our example. Parameter c is constant and has the value 2Δy + Δx(2b − 1), which is independent of pixel position. If the pixel at yk is closer to the line path than the pixel at yk + 1 (that is, d1 < d2), then the decision parameter Pk is negative. In that case we plot the lower pixel; otherwise we plot the upper pixel.

Fig. 8: Distances d1 and d2 between the candidate pixels at sampling position xk + 1 and the y coordinate on the line path.

Coordinate changes along the line occur in unit steps in either the x or y direction. Therefore we can obtain the values of successive decision parameters using incremental integer calculations. At step k + 1, the decision parameter is evaluated from eq. (12) as:

Pk+1 = 2Δy · xk+1 − 2Δx · yk+1 + c

Subtracting eq. (12) from this, and noting that xk+1 = xk + 1, we obtain

Pk+1 = Pk + 2Δy − 2Δx (yk+1 − yk)	_ _ _ (13)

The first parameter P0 is evaluated at the starting pixel position (x0, y0) as

P0 = 2Δy − Δx	_ _ _ (14)
Q.2	Write the steps of Bresenham's Line Drawing Algorithm, and digitize the line with endpoints (20, 10) & (30, 18) using it.

Ans.: Bresenham's line-drawing algorithm for |m| < 1:

(1)	Input the two line endpoints and store the left endpoint in (x0, y0).
(2)	Load (x0, y0) into the frame buffer; that is, plot the first point.
(3)	Calculate the constants Δx, Δy, 2Δy and 2Δy − 2Δx, and obtain the starting value of the decision parameter as P0 = 2Δy − Δx.
(4)	At each xk along the line, starting at k = 0, perform the following test: if Pk < 0, the next point to plot is (xk + 1, yk) and Pk+1 = Pk + 2Δy; otherwise, the next point to plot is (xk + 1, yk + 1) and Pk+1 = Pk + 2Δy − 2Δx.
(5)	Repeat step (4) Δx times.

Example: For the line with endpoints (20, 10) and (30, 18),

m = (y2 − y1)/(x2 − x1) = (18 − 10)/(30 − 20) = 8/10 = 0.8

Δx = 10 ,	Δy = 8

The initial decision parameter has the value

P0 = 2Δy − Δx = 2×8 − 10 = 6

Since P0 > 0, the next point is (xk + 1, yk + 1) = (21, 11).
36
Now k = 0,
Now k = 1,
Now k = 2
Now k = 3
Now k = 4
Now k = 5
Now k = 6
Pk+1
P1
Pk+1
P2
Pk+1
P2
Pk+1
P4
Pk+1
P5
Pk+1
P6
Pk+1
P7
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
Pk + 2y - 2x
P0 + 2y - 2x
6 + (-4)
2
Since P1 > 0,
Pk + 2y - 2x
2 + (- 4)
-2
Since P2 < 0,
Pk + 2y
- 2 + 16
14
Since P3 > 0,
Pk + 2y - 2x
14 4
10
Since P4 > 0,
Pk + 2y - 2x
10 4
6
Since P5 > 0,
Pk + 2y - 2x
64
2
Since P6 > 0,
Pk + 2y - 2x
2 + (- 4)
-2
Since P7 < 0,
Computer Graphics
Now k = 7
Now k = 8
Pk+1
P8
Pk+1
P9
=
=
=
=
=
=
Pk + 2y
- 2 + 16
14
Since P8 > 0,
Pk + 2y - 2x
14 4
10
Since P9 > 0,
K
0
1
2
3
4
5
6
7
8
9
Pk
6
2
-2
14
10
6
2
-2
14
10
37
(xk+1, yk+1)
(21, 11)
(22, 12)
(23, 12)
(24, 13)
(25, 14)
(26, 15)
(27, 16)
(28, 16)
(29, 17)
(30, 18)
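The iteration above is easy to mechanize. The following sketch (my own, not from the text) implements the five algorithm steps for a line with slope between 0 and 1, left endpoint first:

```python
def bresenham_line(x0, y0, x1, y1):
    # Bresenham's algorithm for 0 < m < 1, following eqs. (12)-(14).
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                # initial decision parameter, eq. (14)
    two_dy = 2 * dy
    two_dy_dx = 2 * (dy - dx)
    x, y = x0, y0
    points = [(x, y)]
    while x < x1:
        x += 1
        if p < 0:
            p += two_dy            # stay on the same scan line
        else:
            y += 1
            p += two_dy_dx         # step up to the next scan line
        points.append((x, y))
    return points

# the worked example: endpoints (20, 10) and (30, 18)
print(bresenham_line(20, 10, 30, 18))
```

Running it on the worked example reproduces the tabulated pixels (21, 11) through (30, 18). Note that only integer additions and comparisons occur inside the loop.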
Q.3	Explain the Mid Point Circle Drawing Algorithm.

Ans.: A circle is the set of points that are all at a given distance r from the center position (xc, yc). This distance relationship is given as:

(x − xc)² + (y − yc)² − r² = 0

This equation can be used to calculate the position of points along the circle path by moving in the x direction from (xc − r) to (xc + r) and determining the corresponding y values as:

y = yc ± √( r² − (xc − x)² )
However, this is not the best method to calculate the circle points, as it requires heavy computation (a square root at every step). Moreover, the spacing between the plotted points is not uniform. Another method is to calculate points using the polar coordinates r and θ:

x = xc + r cos θ
y = yc + r sin θ

Although this method results in equal spacing between the points, it also requires heavy computation (trigonometric evaluations). The efficient method is incremental calculation of a decision parameter.

Mid Point Algorithm: We move in unit steps in the x direction and calculate the closest pixel position along the circle path at each step. For a given radius r and screen center position (xc, yc), we first set up our algorithm to calculate the positions of points along a circle centered on the coordinate origin (0, 0). Each calculated position (x, y) is then moved to its proper screen position by adding xc to x and yc to y.

For the circle section from x = 0 to x = y in the first quadrant (the first octant), the slope of the curve varies from 0 to −1. We move in the positive x direction and use a decision parameter to determine which of the two candidate y positions is closer to the circle path at each step. Points in the other 7 octants are then obtained by symmetry.
Fig.: Eight-way symmetry of a circle. From a calculated point (x, y) in the octant from x = 0 (the y axis) to the 45° line x = y, the seven symmetric points (y, x), (y, −x), (x, −y), (−x, −y), (−y, −x), (−y, x) and (−x, y) are obtained.
To apply the midpoint method, we define a circle function:

fcircle(x, y) = x² + y² − r²	_ _ _ (1)

Any point (x, y) on the boundary of the circle with radius r satisfies fcircle(x, y) = 0. The relative position of any point (x, y) can be determined by checking the sign of the circle function:

fcircle(x, y)	< 0  if (x, y) is inside the circle boundary
            	= 0  if (x, y) is on the circle boundary	_ _ _ (2)
            	> 0  if (x, y) is outside the circle boundary

Fig.: The midpoint between the candidate pixels (xk + 1, yk) and (xk + 1, yk − 1) at sampling position xk + 1 along the circle path.

Assuming we have just plotted the pixel (xk, yk), the decision parameter is the circle function evaluated at the midpoint between the two candidate pixels:

Pk = fcircle(xk + 1, yk − ½) = (xk + 1)² + (yk − ½)² − r²	_ _ _ (3)

If Pk < 0, the midpoint is inside the circle boundary and the pixel on scan line yk is closer to the circle boundary. Otherwise, the midpoint is outside or on the boundary and we select the pixel on scan line yk − 1.

Successive decision parameters are obtained incrementally:

Pk+1 = Pk + 2xk+1 + 1	if Pk < 0
Pk+1 = Pk + 2xk+1 + 1 − 2yk+1	otherwise	_ _ _ (4)

where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.

The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

P0 = 5/4 − r

If r is an integer, we can simply round this to P0 = 1 − r.
Algorithm :
(1)	Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as (x0, y0) = (0, r).
(2)	Calculate the initial value of the decision parameter as P0 = 5/4 − r (or P0 = 1 − r for integer r).
(3)	At each xk position, starting at k = 0, perform the following test: if Pk < 0, the next point along the circle centered on (0, 0) is (xk + 1, yk) and Pk+1 = Pk + 2xk+1 + 1; otherwise, the next point is (xk + 1, yk − 1) and Pk+1 = Pk + 2xk+1 + 1 − 2yk+1.
(4)	Determine the symmetry points in the other seven octants.
(5)	Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values x = x + xc and y = y + yc.
(6)	Repeat steps (3) to (5) until x ≥ y.
Q.4	Demonstrate the Mid Point Circle Algorithm with circle radius r = 10.

Ans.:	P0 = 1 − r = 1 − 10 = −9

The initial point is (x0, y0) = (0, 10), and the initial terms for calculating the decision parameters are

2x0 = 0 ,	2y0 = 20

Since P0 < 0,	P1 = −9 + 3 = −6
Now P1 < 0,	P2 = −6 + 5 = −1
Now P2 < 0,	P3 = −1 + 7 = 6
Now P3 > 0,	P4 = 6 + 9 − 18 = −3
Now P4 < 0,	P5 = −3 + 11 = 8
Now P5 > 0,	P6 = 8 + 13 − 16 = 5

k	Pk	(xk+1, yk+1)	2xk+1	2yk+1
0	−9	(1, 10)	2	20
1	−6	(2, 10)	4	20
2	−1	(3, 10)	6	20
3	6	(4, 9)	8	18
4	−3	(5, 9)	10	18
5	8	(6, 8)	12	16
6	5	(7, 7)	14	14

The algorithm terminates at (7, 7), where x = y; the remaining points of the circle are obtained by symmetry.
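The r = 10 demonstration can be checked with a short program. This sketch (my own, not from the text) generates the first-octant points for a circle centered on the origin, using the integer form P0 = 1 − r:

```python
def midpoint_circle(r):
    # First-octant points of a circle of radius r centered at the origin,
    # using the incremental decision parameter of eq. (4).
    x, y = 0, r
    p = 1 - r                      # rounded initial parameter P0
    points = [(x, y)]
    while x < y:
        x += 1
        if p < 0:
            p += 2 * x + 1         # midpoint inside: keep scan line y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y # midpoint outside: step down to y - 1
        points.append((x, y))
    return points

print(midpoint_circle(10))
# → [(0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), (7, 7)]
```

The remaining seven octants are plotted by the symmetry relations listed earlier, and the whole circle is moved to center (xc, yc) by adding the offsets.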
Q.5	Explain the Mid Point Ellipse Drawing Algorithm.

Ans.: An ellipse is defined as the set of points such that the sum of the distances from two fixed points (the foci) is the same for all points. Given a point P = (x, y) whose distances from the two foci are d1 and d2, the equation is:

d1 + d2 = constant	_ _ _ (1)

In terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2):

√((x − x1)² + (y − y1)²) + √((x − x2)² + (y − y2)²) = constant	_ _ _ (2)

An ellipse in "standard position", with semi-major and semi-minor axes rx and ry parallel to the coordinate axes and center (xc, yc), is described by

((x − xc) / rx)² + ((y − yc) / ry)² = 1	_ _ _(4)

Fig.: An ellipse centered at the origin, showing the four-way symmetry points (x, y), (−x, y), (x, −y) and (−x, −y), and the two processing regions of the first quadrant, separated by the boundary point where the curve has slope = −1.
Considering an ellipse centered at the origin, we define an ellipse function:

fellipse(x, y) = ry² x² + rx² y² − rx² ry²	_ _ _ (5)

which is < 0 for points inside, = 0 on, and > 0 for points outside the ellipse boundary.

Starting at (0, ry), we process Region 1 by taking unit steps in the x direction as long as the magnitude of the curve slope is less than 1. The slope is obtained from eq. (5) as

dy/dx = − 2ry² x / (2rx² y)	_ _ _ (6)

At the boundary between the two regions dy/dx = −1, so 2ry² x = 2rx² y; we move out of Region 1 when

2ry² x ≥ 2rx² y	_ _ _ (7)

Fig. 1: The midpoint between the two candidate pixels at sampling position xk + 1 along the ellipse path in Region 1.

Assuming pixel (xk, yk) has been selected, the decision parameter for Region 1 is the ellipse function evaluated at the midpoint of the next two candidate pixels:

P1k = fellipse(xk + 1, yk − ½) = ry² (xk + 1)² + rx² (yk − ½)² − rx² ry²	_ _ _ (8)

If P1k < 0, the midpoint is inside the ellipse and the next point is (xk + 1, yk); otherwise the next point is (xk + 1, yk − 1). The increments are

P1k+1 = P1k + 2ry² xk+1 + ry²	if P1k < 0
P1k+1 = P1k + 2ry² xk+1 − 2rx² yk+1 + ry²	otherwise	_ _ _ (9)

with 2ry² xk+1 = 2ry² xk + 2ry² and 2rx² yk+1 = 2rx² yk − 2rx². These two terms are compared at each step, and we move out of Region 1 when condition (7) is satisfied. The initial decision parameter for Region 1 is calculated at (x0, y0) = (0, ry):

P10 = fellipse(1, ry − ½) = ry² + rx² (ry − ½)² − rx² ry²

P10 = ry² − rx² ry + ¼ rx²	_ _ _ (10)

In Region 2 we take unit steps in the negative y direction, with the midpoint taken between horizontal candidate pixels:

P2k = fellipse(xk + ½, yk − 1) = ry² (xk + ½)² + rx² (yk − 1)² − rx² ry²	_ _ _ (11)

If P2k > 0, the next point is (xk, yk − 1); otherwise it is (xk + 1, yk − 1). The initial parameter for Region 2 is evaluated at the last point (x0, y0) selected in Region 1:

P20 = fellipse(x0 + ½, y0 − 1)	_ _ _ (12)
Algorithm :
(1)	Input rx, ry and ellipse center (xc, yc), and obtain the first point on an ellipse centered on the origin as (x0, y0) = (0, ry).
(2)	Calculate the initial value of the Region 1 decision parameter as P10 = ry² − rx² ry + ¼ rx².
(3)	At each xk position in Region 1, starting at k = 0: if P1k < 0, the next point is (xk + 1, yk) and P1k+1 = P1k + 2ry² xk+1 + ry²; otherwise, the next point is (xk + 1, yk − 1) and P1k+1 = P1k + 2ry² xk+1 − 2rx² yk+1 + ry². Continue until 2ry² x ≥ 2rx² y.
(4)	Calculate the initial value of the Region 2 decision parameter, using the last point (x0, y0) calculated in Region 1, as P20 = fellipse(x0 + ½, y0 − 1).
(5)	At each yk position in Region 2, starting at k = 0: if P2k > 0, the next point is (xk, yk − 1) and P2k+1 = P2k − 2rx² yk+1 + rx²; otherwise, the next point is (xk + 1, yk − 1) and P2k+1 = P2k + 2ry² xk+1 − 2rx² yk+1 + rx². Continue until y = 0.
(6)	Determine the symmetry points in the other three quadrants.
(7)	Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values x = x + xc and y = y + yc.

Q.6	Demonstrate the Mid Point Ellipse Algorithm with rx = 8 and ry = 6.
Ans.:	ry² = 36 ,	rx² = 64 ,	2ry² = 72 ,	2rx² = 128

P10 = ry² − rx² ry + ¼ rx² = 36 − (64 × 6) + ¼ × 64 = 36 − 384 + 16 = −332

Since P10 < 0, the first step moves to (1, 6). The Region 1 steps are:

k	P1k	(xk+1, yk+1)	2ry²xk+1	2rx²yk+1
0	−332	(1, 6)	72	768
1	−224	(2, 6)	144	768
2	−44	(3, 6)	216	768
3	208	(4, 5)	288	640
4	−108	(5, 5)	360	640
5	288	(6, 4)	432	512
6	244	(7, 3)	504	384

At (7, 3) we have 2ry²x = 504 ≥ 2rx²y = 384, so we move out of Region 1. The initial decision parameter for Region 2 is

P20 = fellipse(7 + ½, 2) = 36 (7.5)² + 64 (2)² − 64 × 36 = −23

The Region 2 steps are:

k	P2k	(xk+1, yk+1)	2ry²xk+1	2rx²yk+1
0	−23	(8, 2)	576	256
1	361	(8, 1)	576	128
2	297	(8, 0)

The algorithm stops at y = 0; the points in the other three quadrants are obtained by symmetry.
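Both regions of the worked example can be reproduced in code. This is a minimal sketch of my own (not from the text) of the two-region midpoint ellipse algorithm for the first quadrant:

```python
def midpoint_ellipse(rx, ry):
    # First-quadrant points of an ellipse centered at the origin,
    # using the Region 1 / Region 2 decision parameters above.
    rx2, ry2 = rx * rx, ry * ry
    x, y = 0, ry
    points = [(x, y)]
    # Region 1: initial parameter P10 = ry^2 - rx^2*ry + rx^2/4, eq. (10)
    p = ry2 - rx2 * ry + 0.25 * rx2
    while 2 * ry2 * x < 2 * rx2 * y:
        x += 1
        if p < 0:
            p += 2 * ry2 * x + ry2
        else:
            y -= 1
            p += 2 * ry2 * x - 2 * rx2 * y + ry2
        points.append((x, y))
    # Region 2: initial parameter P20 = fellipse(x + 1/2, y - 1), eq. (12)
    p = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y > 0:
        y -= 1
        if p > 0:
            p += rx2 - 2 * rx2 * y
        else:
            x += 1
            p += 2 * ry2 * x - 2 * rx2 * y + rx2
        points.append((x, y))
    return points

print(midpoint_ellipse(8, 6))
```

For rx = 8, ry = 6 this yields (0, 6), (1, 6), (2, 6), (3, 6), (4, 5), (5, 5), (6, 4), (7, 3) in Region 1 and (8, 2), (8, 1), (8, 0) in Region 2, matching the tables above.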
Q.8	Explain the Boundary Fill and Flood Fill Algorithms.

Ans.: There are two basic approaches to area filling on raster systems: (1) determine the overlap intervals for scan lines that cross the area, and (2) start from a given interior position and paint outward from this point until the boundary conditions are encountered.
Fill methods starting from an interior point are useful with more complex boundaries and in interactive painting systems.

Boundary Fill Algorithm: One approach to area filling is to start at a point inside a region and paint the interior outward toward the boundary. If the boundary is specified in a single color, the fill algorithm proceeds outward pixel by pixel until the boundary color is encountered. This method is called the Boundary Fill Algorithm.

The boundary-fill procedure accepts as input the coordinates of an interior point (x, y), a fill color and a boundary color. Starting with (x, y), neighbouring points are tested against the boundary color, and those that are not boundary pixels are set to the fill color; their neighbouring points are then tested in turn. This process continues until all pixels up to the boundary color for the area have been tested.

A recursive boundary-fill algorithm may not fill regions correctly if some interior pixels are already displayed in the fill color. This occurs because the algorithm checks each next point both for the boundary color and for the fill color. The pixel positions to be processed are stored in the form of a stack in the algorithm.

Flood Fill Algorithm: Sometimes we want to fill in (or recolor) an area that is not defined within a single color boundary; suppose we take an area which is bordered by several different colors. We can paint such an area by replacing a specified interior color instead of searching for a boundary color value. This approach is called the Flood Fill Algorithm.

We start from an interior point (x, y) and reassign all pixel values that are currently set to a given interior color to the desired fill color. If the area we want to paint has more than one interior color, we can first reassign pixel values so that all interior points have the same color.
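The stack-based (non-recursive) formulation mentioned above can be sketched as follows. This is an illustrative flood fill of my own (not from the text), operating on a small character grid standing in for a frame buffer:

```python
def flood_fill(grid, x, y, fill):
    # 4-connected flood fill: repaint the region containing grid[y][x]
    # with `fill`, using an explicit stack instead of recursion.
    old = grid[y][x]
    if old == fill:
        return grid                # nothing to do; avoids infinite loop
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old:
            grid[cy][cx] = fill
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
    return grid

# 'B' marks boundary pixels, '.' the interior region to be filled
image = [list("BBBB"),
         list("B..B"),
         list("B..B"),
         list("BBBB")]
flood_fill(image, 1, 1, 'F')
print(["".join(row) for row in image])  # → ['BBBB', 'BFFB', 'BFFB', 'BBBB']
```

A boundary fill differs only in the test: instead of repainting pixels equal to the old interior color, it repaints any pixel that is neither the boundary color nor already the fill color.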
Chapter-4

Clipping Algorithm

Q.1	What is Clipping? Give the applications of clipping.

Ans.: Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called a clip window.

Applications of Clipping :
(1)	Extracting a selected part of a defined scene for viewing.
(2)	Identifying visible surfaces in three-dimensional views.
(3)	Antialiasing line segments or object boundaries.
(4)	Creating objects using solid-modeling procedures.
(5)	Displaying a multiwindow environment.
(6)	Drawing and painting operations that allow parts of a picture to be selected for copying, moving, erasing or duplicating.
Q.2	Explain the Cohen-Sutherland Line Clipping Method.

Ans.: A line clipping procedure involves several parts. First, we can test a given line segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify a line as completely inside or completely outside, we must perform intersection calculations with one or more clipping boundaries. We process lines through inside-outside tests by checking the line endpoints.

Fig.: Lines in various positions relative to a rectangular clip window: the line from P1 to P2 lies completely inside, the line P3P4 lies completely outside one boundary, and the remaining lines cross one or more clipping boundaries.

A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries (line P3P4 in the figure) is outside the window.

All other lines cross one or more clipping boundaries and may require calculation of multiple intersection points. For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, we use the parametric representation

x = x1 + u (x2 − x1)
y = y1 + u (y2 − y1) ,	0 ≤ u ≤ 1
Cohen-Sutherland Line Clipping: Every line endpoint is assigned a four-bit binary region code that identifies its location relative to the boundaries of the clipping rectangle:

1001	1000	1010
0001	0000 (window)	0010
0101	0100	0110

Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top or bottom. By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions are correlated with the bit positions as:

bit 1 : left ;	bit 2 : right ;	bit 3 : below ;	bit 4 : above

A value of 1 in any bit position indicates that the point is in that relative position; otherwise the bit is 0. A few points are to be kept in mind while checking:

(1)	Any line that is completely inside the clip window has a region code of 0000 for both endpoints.
(2)	Any line that has a 1 in the same bit position of the region codes for both endpoints is completely outside.
(3)	To perform this check we apply the AND operation to the two region codes; if the result is not 0000, the line is completely outside the clipping region.

Fig.: Lines extending from one coordinate region to another may pass through the clip window and must be checked for boundary intersections (e.g. segments P1P2 and P3P4 clipped against successive window edges).
Q.3	Explain the Cyrus-Beck Line Clipping Algorithm.

Ans.: Cyrus-Beck Technique: The Cyrus-Beck technique can be used to clip a 2-D line against a rectangle, or a 3-D line against an arbitrary convex polyhedron in 3-D space. (Liang and Barsky later developed a more efficient parametric line-clipping algorithm.) Here we follow the Cyrus-Beck development to introduce parametric clipping. In the parametric representation of the line, the algorithm computes a parameter value t for the point at which the segment intersects the infinite line on which each clip edge lies. Because all clip edges are, in general, intersected by the line, four values of t are calculated. A series of comparisons is then used to check which of the four values of t correspond to actual intersections; only then are the (x, y) values of the two or one actual intersections calculated.

Advantages of this over Cohen-Sutherland :
(1)	It avoids the repeated clipping iterations of Cohen-Sutherland, in which a line may have to be clipped against the same window edges more than once; each edge contributes a single parametric value t.
(2)	It generalizes to clipping against any convex polygonal window in 2-D, and any convex polyhedron in 3-D, not just an upright rectangle.
The line segment from P0 to P1 is represented parametrically as

P(t) = P0 + (P1 − P0) t ,	with t = 0 at P0 and t = 1 at P1

Fig. 1: A clip edge Ei with outward normal Ni, an arbitrary point PEi on the edge, and the parametric line P(t) from P0 to P1.

For each clip edge Ei, pick an arbitrary point PEi on the edge, and consider the vector P(t) − PEi from PEi to a point on the line. We can determine on which half plane of the edge a point lies by looking at the sign of the dot product Ni · [P(t) − PEi] :

Ni · [P(t) − PEi] is negative if the point is inside the half plane (inside the clip edge), zero if the point is on the line containing the edge, and positive if the point lies outside the half plane.

The intersection with the edge therefore occurs where

Ni · [P(t) − PEi] = 0

Substituting P(t) = P0 + D t, where D = P1 − P0, and solving for t:

t = Ni · [P0 − PEi] / (− Ni · D)	_ _ _ (1)
(2)	The next step, after finding the four values of t, is to determine which values correspond to internal intersections of the line segment:

(i)	Values of t outside the interval [0, 1] are discarded, since they do not lie on the segment from P0 to P1.
(ii)	Each remaining intersection is classified as potentially entering (PE) or potentially leaving (PL) by checking the sign of Ni · D: if Ni · D < 0, the line is moving from outside to inside the edge (PE); if Ni · D > 0, the line is moving from inside to outside (PL).

Fig. 2: Lines 1, 2 and 3 crossing the clip-window edges, with each intersection labelled potentially entering (PE) or potentially leaving (PL) between the endpoints P0 (t = 0) and P1 (t = 1).
(3)	The final step is to select a pair (PE, PL) that defines the clipped line. We choose the PE intersection with the largest t value, which we call tE, and the PL intersection with the smallest t value, tL. The intersection line segment is then defined by the range (tE, tL). But this was for the infinite line; since we want the range for the segment from P0 to P1, we set:

t = 0 as a lower bound for tE
t = 1 as an upper bound for tL

If tE > tL — as is the case for line 2 — no portion of P0P1 is in the clip rectangle, and the whole line is discarded. Otherwise, the values tE and tL, corresponding to actual intersections, are used to find the x and y coordinates of the clipped endpoints.
For a rectangular clip window, the calculations for each edge are summarized in the following table:

Clip edge	Ni	PEi	P0 − PEi	t = Ni·(P0 − PEi) / (−Ni·D)
left : x = xmin	(−1, 0)	(xmin, y)	(x0 − xmin, y0 − y)	(x0 − xmin) / (x0 − x1)
right : x = xmax	(1, 0)	(xmax, y)	(x0 − xmax, y0 − y)	(x0 − xmax) / (x0 − x1)
bottom : y = ymin	(0, −1)	(x, ymin)	(x0 − x, y0 − ymin)	(y0 − ymin) / (y0 − y1)
top : y = ymax	(0, 1)	(x, ymax)	(x0 − x, y0 − ymax)	(y0 − ymax) / (y0 − y1)

(Here the free coordinate in each PEi — y for the vertical edges, x for the horizontal edges — is arbitrary, since any point on the edge may be chosen.)
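The edge table translates directly into code. This is a sketch of my own (not from the text) of Cyrus-Beck clipping against a rectangle, including the parallel-line special case, which the derivation above leaves implicit:

```python
def cyrus_beck(p0, p1, xmin, ymin, xmax, ymax):
    # Cyrus-Beck clipping of segment p0-p1 against a rectangle,
    # using t = Ni.(P0 - PEi) / (-Ni.D) for each edge (eq. (1)).
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = x1 - x0, y1 - y0      # D = P1 - P0
    # (outward normal Ni, point PEi on the edge) for left/right/bottom/top
    edges = [((-1, 0), (xmin, ymin)),
             (( 1, 0), (xmax, ymin)),
             (( 0,-1), (xmin, ymin)),
             (( 0, 1), (xmin, ymax))]
    tE, tL = 0.0, 1.0              # lower bound for tE, upper bound for tL
    for (nx, ny), (ex, ey) in edges:
        ndotd = nx * dx + ny * dy
        num = nx * (x0 - ex) + ny * (y0 - ey)
        if ndotd == 0:             # segment parallel to this edge
            if num > 0:
                return None        # parallel and outside the half plane
            continue
        t = num / -ndotd
        if ndotd < 0:              # potentially entering (PE)
            tE = max(tE, t)
        else:                      # potentially leaving (PL)
            tL = min(tL, t)
    if tE > tL:
        return None                # no portion inside (like line 2)
    return ((x0 + dx * tE, y0 + dy * tE), (x0 + dx * tL, y0 + dy * tL))

print(cyrus_beck((-5, 5), (15, 5), 0, 0, 10, 10))  # → ((0.0, 5.0), (10.0, 5.0))
```

Replacing the four rectangle edges with the (normal, point) pairs of any convex polygon clips against that polygon unchanged, which is the generality advantage noted above.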
Chapter-5

Visible Surface Detection

Q.1	Explain the Depth Buffer (Z-Buffer) Method for visible surface detection.

Ans.: A commonly used image-space approach for detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. The procedure is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z axis of the viewing system. Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement.

With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values.

Fig. (1) shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the xv yv plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.

As implied by the name of this method, two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position.
60
yv
(x, y)
xv
z
Fig.(1) : At view plane position (x, y) surface s1 has the smallest depth
form the view plane and so is visible at that position.
Initially all values/positions in the depth buffer are set to 0 (minimum
depth) and the refresh buffer is initialized to the back ground intensity.
Each surface listed in polygon table is then processed, one scan line at a
time, calculating the depth (z-values) at each (x, y) pixel position. The
calculated depth is compared to the value previously stored in the depth
Buffer at that position. If the calculated depth is greater than the value
stored in the depth buffer, the depth value is stored and the surface
intensity at that position is determined and placed in the same xy location
in the refresh Buffer.
Depth Buffer Algorithm :
(1)	Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),
	depth(x, y) = 0 ,	refresh(x, y) = Ibackgnd
(2)	For each position on each polygon surface, calculate the depth z from the plane equation. If z > depth(x, y), then set
	depth(x, y) = z ,	refresh(x, y) = Isurf(x, y)
where Ibackgnd is the background intensity and Isurf(x, y) is the projected intensity of the surface at pixel position (x, y). After all surfaces have been processed, the depth buffer contains the depths of the visible surfaces and the refresh buffer contains the corresponding intensity values.

Depth values for a surface position (x, y) are calculated from the plane equation of the surface, Ax + By + Cz + D = 0, as

z = (− Ax − By − D) / C	_ _ _ (1)
Fig. (2): From position (x, y) on a scan line, the next position across the line has coordinate (x + 1, y), and the position immediately below on the next line has coordinate (x, y − 1).

The depth z' of the next position (x + 1, y) along the scan line is obtained from equation (1) as:

z' = (− A(x + 1) − By − D) / C	_ _ _ (2)

or	z' = z − A/C	_ _ _ (3)

The ratio −A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition.
Depth values down a polygon edge are obtained similarly. If we process down a left edge of slope m, the depth at the start of the next scan line is

z' = z + (A/m + B) / C	_ _ _ (4)

If we process down a vertical edge, the slope is infinite and the recursive calculations reduce to

z' = z + B/C	_ _ _ (5)
Q.3	Differentiate between the Z-Buffer and A-Buffer methods.

Ans.:

S.No.	Z Buffer Method	A Buffer Method
1.	The Z-buffer stores a single depth and intensity value for each pixel position.	The A-Buffer method represents an antialiased, area-averaged, accumulation-buffer method.
2.	It handles only opaque surfaces and cannot deal with transparency.	It can accumulate contributions from transparent as well as opaque surfaces.
3.	This can't accumulate intensity values for more than one surface.	Each buffer position can reference a linked list of surfaces, so intensity contributions from more than one surface can be accumulated.

Q.5	Explain the Painter's Algorithm (Depth Sorting Method) for visible surface detection.
Ans.: Depth sorting methods for solving hidden surface problem is often
refered to as Painters Algorithm.
Now using both image space and object space operations, the depth
sorting method performs following basic functions :
(1)
(2)
Sorting operations carried out in both image and object space, the scan
conversion of the polygon surface is performed in image space.
For e.g. : In creating an oil painting, an artist first paints the background colors. Next the most distant objects are added, then the nearer objects, and so forth. At the final step, the foreground objects are painted on the canvas over the background and the other objects that have been painted on the canvas. Each layer of paint covers up the previous layer. Using a similar technique, we first sort surfaces according to their distance from the view plane. The intensity values for the farthest surface are then entered into the refresh buffer. Taking each succeeding surface in turn (in decreasing depth order), we paint the surface intensities onto the frame buffer over the intensities of the previously processed surfaces.
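The back-to-front painting loop can be sketched as follows. The surface representation here — a greatest-depth value z_max plus a list of covered pixels — is a hypothetical simplification that ignores the overlap tests; larger z_max is assumed to mean farther from the view plane.

```python
def painters_paint(surfaces, framebuffer):
    """Paint surfaces back to front. Each surface is a hypothetical
    (z_max, pixels, intensity) tuple: z_max is its greatest depth and
    `pixels` the (x, y) positions it covers after scan conversion."""
    # sort by decreasing depth: the farthest surface is entered first
    for z_max, pixels, intensity in sorted(surfaces, key=lambda s: -s[0]):
        for x, y in pixels:
            framebuffer[(x, y)] = intensity   # nearer paint covers farther paint
    return framebuffer
```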
Fig. : Two surfaces with overlapping depth extents (zmin, zmax) along the zv axis.
This process is then repeated for the next surface in the list. As long as no
overlaps occur, each surface is processed in depth order until all have
been scan converted.
If a depth overlap is detected at any point in the list, we need to make some additional comparison tests. We make the following tests for each surface that overlaps with S; if any one of these tests is true, no reordering is necessary for that surface.
(1)
The bounding rectangles in the xy plane for the two surfaces do not overlap.
(2)
Surface S is completely behind the overlapping surface relative to
the viewing position.
(3)
The overlapping surface is completely in front of S relative to the
viewing position.
(4)
The projections of the two surfaces onto the view plane do not overlap.
We perform these tests in the order listed and proceed to the next overlapping surface as soon as we find one of the tests is true. If all the overlapping surfaces pass at least one of these tests, none of them is behind S; no reordering is then necessary and S is scan converted.
Fig. (a), (b) : Depth-overlap tests for surface S against an overlapping surface, shown by the x extents (xmin, xmax) along the xv axis.
Q. Write short notes on :
(i) Back Face Detection
(ii) A Buffer Method
(iii) Scan Line Method
Ans.: Back Face Detection : A fast and simple object space method for identifying the back faces of a polyhedron is based on the inside-outside tests discussed earlier. A point (x, y, z) is inside a polygon surface with plane parameters A, B, C and D if
Ax + By + Cz + D < 0   _ _ _ (1)
When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and can't see the front of it from our viewing position). We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or camera) position, as shown in Fig.(1), then this polygon is a back face if
V · N > 0   _ _ _ (2)
Fig.(1) : Vector V in the viewing direction and a back-face normal vector N = (A, B, C) of the polyhedron.
Fig.(2) : Viewing direction V along the negative zv axis (xv and zv axes shown).
If the viewing direction is parallel to the zv axis, as in Fig.(2), the polygon surface is a back face if C < 0. Also, we can't see any face whose normal has z component C = 0, since our viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value
C ≤ 0
By examining parameter C for the different planes defining an object, we can immediately identify all the back faces. For a single convex polyhedron, such as the pyramid in Fig.(2), this test identifies all the hidden surfaces on the object, since each surface is either completely visible or
completely hidden. For other objects, such as concave polyhedra, more tests need to be performed to determine whether there are additional faces that are totally or partly obscured by other faces.
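The V · N test of equation (2) is a one-liner. In this sketch the viewing direction defaults to the negative z axis, an assumed concrete choice; with that choice the dot product reduces to the sign of C (grazing faces with C = 0 are usually culled as well).

```python
def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back-face test of equation (2): the polygon with plane normal
    N = (A, B, C) faces away from the viewer when V . N > 0."""
    vx, vy, vz = view_dir
    a, b, c = normal
    return vx * a + vy * b + vz * c > 0
```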
A Buffer Method : An extension of the ideas in the depth buffer method is the A Buffer method. The A buffer method represents an antialiased, area averaged, accumulation buffer method.
The A Buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces. Thus more than one intensity can be taken into consideration at each pixel position, and object edges can be antialiased.
Each position in the A Buffer has two fields :
(1) Depth Field : Stores a positive or negative real number.
(2) Intensity Field : Stores surface intensity information or a pointer value.
Fig.(1) : Organization of an A Buffer pixel position : (a) depth field d > 0, single surface — the intensity field stores the surface intensity I; (b) depth field d < 0, multiple surfaces — the intensity field stores a pointer to a linked list of surface data (Surf 1, ...).
If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percentage of pixel coverage, as shown in Fig.(1).
Data for each surface in the linked list includes :
1) RGB intensity components
2) Opacity Parameter
3) Depth
4) Percent of area coverage
5) Surface Identifier
Scan Line Method : This image space method for removing hidden surfaces is an extension of the scan line algorithm for filling polygon interiors : instead of filling just one surface, we now deal with multiple surfaces.
As each scan line is processed, all Polygon surfaces intersecting that line
are examined to determine which are visible. Across each scan line depth
calculations are made for each overlapping surface to determine which is
nearest to the view plane. When the visible surface has been determined,
the intensity value for that position is entered into the refresh Buffer.
Two tables are used for various surfaces :
(i)
Edge Table
(ii)
Polygon Table
Edge Table : Contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.
Polygon Table : Contains the coefficients of the plane equation for each surface, intensity information for the surfaces, and possible pointers into the edge table.
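The two tables might be laid out as follows; the field names and sample values are illustrative assumptions, not from the text.

```python
# Hypothetical record layouts for the scan-line method's two tables.
edge_table = [
    {"ends": ((0, 0), (4, 8)),   # coordinate endpoints of the line
     "inv_slope": 0.5,           # inverse slope, for scan-line stepping
     "polys": [0]},              # pointer(s) into the polygon table
]
polygon_table = [
    {"plane": (0.0, 0.0, 1.0, -5.0),  # plane equation coefficients A, B, C, D
     "intensity": 0.8,                # surface intensity information
     "edges": [0]},                   # optional pointers back into edge table
]
```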
Chapter-6
Q.1 What are Bezier Curves and Surfaces? Write the properties of Bezier Curves.
Ans.: Suppose we are given n + 1 control point positions Pk, k = 0 to n. These are blended to give the position vector P(u) along the Bezier curve :
P(u) = Σ Pk BEZk,n(u)  (sum from k = 0 to n),  0 ≤ u ≤ 1   _ _ _ (1)
The Bezier blending functions BEZk,n(u) are the Bernstein polynomials :
BEZk,n(u) = C(n, k) u^k (1 - u)^(n-k)   _ _ _ (2)
where C(n, k) are the binomial coefficients :
C(n, k) = n! / [k! (n - k)!]   _ _ _ (3)
Equivalently, the blending functions satisfy the recursion :
BEZk,n(u) = (1 - u) BEZk,n-1(u) + u BEZk-1,n-1(u),  n > k ≥ 1   _ _ _ (4)
The three parametric coordinate equations are :
x(u) = Σ xk BEZk,n(u),  y(u) = Σ yk BEZk,n(u),  z(u) = Σ zk BEZk,n(u)  (sums from k = 0 to n)   _ _ _ (5)
Fig. (a), (b) : Bezier curves generated from three or four control points. Dashed lines connect the control point positions.
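Equations (1)–(3) can be evaluated directly; a small sketch for 2-D control points, using Python's math.comb for the binomial coefficients:

```python
from math import comb  # C(n, k), the binomial coefficient of equation (3)

def bezier_point(control_points, u):
    """Evaluate P(u) = sum over k of Pk * BEZ_{k,n}(u) with the Bernstein
    blending functions BEZ_{k,n}(u) = C(n, k) * u^k * (1 - u)^(n - k)."""
    n = len(control_points) - 1
    x = y = 0.0
    for k, (px, py) in enumerate(control_points):
        blend = comb(n, k) * u**k * (1 - u)**(n - k)
        x += px * blend
        y += py * blend
    return x, y
```

Evaluating at u = 0 and u = 1 confirms property (1) below: the curve passes through the first and last control points.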
Properties of Bezier Curve :
(1) It always passes through the first and last control points; that is, the boundary conditions at the two ends of the curve are :
P(0) = P0
P(1) = Pn   _ _ _ (6)
The values of the first derivative of a Bezier curve at the end points can be calculated from the control point coordinates as :
P'(0) = - nP0 + nP1   _ _ _ (7)
P'(1) = - nPn-1 + nPn   _ _ _ (8)
(2) Another property of the Bezier curve is that it always lies within the convex hull of the control points. This follows from the properties of the Bezier blending functions : they are all positive and their sum is always 1 :
Σ BEZk,n(u) = 1  (sum from k = 0 to n)   _ _ _ (9)
For cubic Bezier curves (n = 3), the curve can be written in matrix form as :
P(u) = [u3 u2 u 1] . MBez . [P0 P1 P2 P3]^T
where the Bezier matrix is :
MBez = | -1  3 -3  1 |
       |  3 -6  3  0 |
       | -3  3  0  0 |
       |  1  0  0  0 |
What are B-Spline Curves and Surfaces? Write the properties of B-Spline Curves.
Ans.: These are the most widely used class of approximating splines. B-splines have two advantages over Bezier splines :
(1) The degree of a B-spline polynomial can be set independently of the number of control points (with certain limitations).
(2) B-splines allow local control over the shape of a spline curve.
The trade-off is that B-splines are more complex than Bezier splines.
B spline Curves : The general expression for a B-spline curve in terms of the blending functions Bk,d is :
P(u) = Σ Pk Bk,d(u)  (sum from k = 0 to n),  umin ≤ u ≤ umax,  2 ≤ d ≤ n + 1
where the Pk are an input set of (n + 1) control points. There are several differences between this B-spline formulation and that for Bezier splines. The range of parameter u now depends on how we choose the B-spline knot parameters, and the B-spline blending functions Bk,d are polynomials of degree d - 1, where parameter d can be chosen to be any integer value in the range from 2 up to the number of control points, n + 1. Local control for B-splines is achieved by defining the blending functions over subintervals of the total range of u.
The blending functions are defined by the Cox-deBoor recursion formulas :
Bk,1(u) = 1 if uk ≤ u < uk+1, 0 otherwise
Bk,d(u) = [(u - uk) / (uk+d-1 - uk)] Bk,d-1(u) + [(uk+d - u) / (uk+d - uk+1)] Bk+1,d-1(u)
where each blending function is defined over d subintervals of the total range of u, and any terms of the form 0/0 are taken to be 0.
Properties of B spline curves :
(1) Any one control point can affect the shape of at most d curve sections.
(2) For any value of u in the interval from knot value ud-1 to un+1, the sum over all basis functions is 1 :
Σ Bk,d(u) = 1  (sum from k = 0 to n)
We need to specify the knot values to obtain the blending functions using the recurrence relation.
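The recurrence relation can be coded directly; a sketch where terms with zero denominators are treated as 0, the usual convention:

```python
def bspline_blend(k, d, u, knots):
    """Cox-deBoor recurrence for the B-spline blending function B_{k,d}(u).
    B_{k,1}(u) = 1 on [u_k, u_{k+1}); higher d blends two lower-degree
    functions, with 0/0 terms taken as 0 by convention."""
    if d == 1:
        return 1.0 if knots[k] <= u < knots[k + 1] else 0.0
    left = right = 0.0
    denom = knots[k + d - 1] - knots[k]
    if denom > 0:
        left = (u - knots[k]) / denom * bspline_blend(k, d - 1, u, knots)
    denom = knots[k + d] - knots[k + 1]
    if denom > 0:
        right = (knots[k + d] - u) / denom * bspline_blend(k + 1, d - 1, u, knots)
    return left + right
```

With a uniform knot vector the blending functions sum to 1 inside the valid parameter range, illustrating property (2) above.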
Classification of B splines according to the knot vectors :
Uniform, Periodic B splines : When the spacing between knot values is constant, the resulting curve is called a uniform B spline.
For e.g. : { - 1.5, -1.0, -0.5, 0.0}
Fig. : Periodic B spline blending functions (vertical axis from 0 to 1).
(P0 + 4P1 + P2)
6
P(1) =
1
(P1 + 4P2 + P3)
6
Computer Graphics
P(0) =
1
(P2 P0)
2
P(1) =
1
(P3 P1)
2
These boundary conditions can be expressed in matrix form as :
P(u) = [u3 u2 u 1] . MB . [P0 P1 P2 P3]^T
where the B spline matrix for periodic cubic splines is :
MB = (1/6) | -1  3 -3  1 |
           |  3 -6  3  0 |
           | -3  0  3  0 |
           |  1  4  1  0 |
Nonuniform B splines : For this class we can specify unequal knot spacing and repeated knot values, for e.g. :
[0, 1, 2, 3, 3, 4]
[0, 2, 2, 3, 3, 6]
Hermite Interpolation : A Hermite curve section between control points Pk and Pk+1 is determined by the boundary conditions :
P(0) = Pk
P(1) = Pk+1
P'(0) = DPk
P'(1) = DPk+1   _ _ _ (1)
with DPk and DPk+1 specifying the values of the parametric derivatives (slope of the curve) at control points Pk and Pk+1 respectively.
Vector equivalent equation :
P(u) = au3 + bu2 + cu + d,  0 ≤ u ≤ 1   _ _ _ (2)
where the x component of P(u) is x(u) = ax u3 + bx u2 + cx u + dx, and similarly for the y and z components. In matrix form :
P(u) = [u3 u2 u 1] . [a b c d]^T   _ _ _ (3)
and the derivative is :
P'(u) = [3u2 2u 1 0] . [a b c d]^T   _ _ _ (4)
Fig.(1) : Parametric point function P(u) for a Hermite curve section between control points Pk and Pk+1.
Now we express the Hermite boundary conditions in matrix form :

| Pk    |   | 0 0 0 1 |   | a |
| Pk+1  | = | 1 1 1 1 | . | b |   _ _ _ (5)
| DPk   |   | 0 0 1 0 |   | c |
| DPk+1 |   | 3 2 1 0 |   | d |

Solving for the polynomial coefficients :

| a |   | 0 0 0 1 |-1  | Pk    |        | Pk    |
| b | = | 1 1 1 1 |  . | Pk+1  | = MH . | Pk+1  |   _ _ _ (6)
| c |   | 0 0 1 0 |    | DPk   |        | DPk   |
| d |   | 3 2 1 0 |    | DPk+1 |        | DPk+1 |

where MH, the Hermite matrix, is the inverse of the boundary constraint matrix :

MH = |  2 -2  1  1 |
     | -3  3 -2 -1 |
     |  0  0  1  0 |
     |  1  0  0  0 |
Thus the Hermite curve section can finally be written as :
P(u) = [u3 u2 u 1] . MH . [Pk Pk+1 DPk DPk+1]^T   _ _ _ (7)
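Equation (7) can be evaluated numerically; a one-coordinate (scalar) sketch with the Hermite matrix MH written out:

```python
def hermite_point(pk, pk1, dpk, dpk1, u):
    """Evaluate P(u) = [u^3 u^2 u 1] . MH . [Pk Pk+1 DPk DPk+1]^T for one
    coordinate, using the Hermite matrix MH (inverse of the boundary
    constraint matrix)."""
    MH = [[ 2, -2,  1,  1],
          [-3,  3, -2, -1],
          [ 0,  0,  1,  0],
          [ 1,  0,  0,  0]]
    U = [u**3, u**2, u, 1.0]
    G = [pk, pk1, dpk, dpk1]                       # boundary condition vector
    # h[j] is the Hermite blending polynomial weighting boundary condition j
    h = [sum(U[i] * MH[i][j] for i in range(4)) for j in range(4)]
    return sum(h[j] * G[j] for j in range(4))
```

At u = 0 and u = 1 the result reduces to Pk and Pk+1 respectively, matching the boundary conditions (1).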
Q.5 Differentiate between Hermite curves and B spline curves.
Ans.:
S.No.   Hermite Curve                        B spline Curve
1.      This is a class of interpolating     This is a class of approximating
        spline.                              spline.
Explain Zero Order, First Order, and Second Order Continuity in Curve Blending.
Ans.: To ensure a smooth transition from one section of a piecewise curve to the next, we can impose various continuity conditions at the connection points. If each curve section is described by a set of parametric coordinate functions of the form
x = x(u), y = y(u), z = z(u),  u1 ≤ u ≤ u2   _ _ _ (1)
we set parametric continuity by matching the parametric derivatives of adjoining curve sections at their common boundary.
Zero order parametric continuity (C0) means simply that the curves meet : the two sections have the same coordinate position at the boundary point. First order parametric continuity (C1) means that the first parametric derivatives (tangent lines) of the two sections are equal at their joining point. Second order parametric continuity (C2) means that both the first and second parametric derivatives of the two curve sections are the same at the boundary.
Fig. : Convex hull shapes (dashed lines) for two sets of control points P0, P1, P2, P3.
With first order continuity only, the rate of change of the tangent vectors for the two sections can be quite different, so that the general shapes of the two adjacent sections can change abruptly.
Applications :
(1)
(2)
Chapter-7
Image Processing
Q.1 What is Image Processing? Write the operations of Image Processing.
Ans.: Image Processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a 2-D signal and applying standard signal processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
Operations of Image Processing :
(1)
(2)
(3)
(4)
(5)
Image editing
(6)
Image registration
(7)
Image stabilization
(8) Image segmentation
(9)
Q.2
(ii)
(iii)
Organizing Information
(iv)
(v)
Interaction (as the Input to a Device for Computer Human
Interaction)
(2)
(3)
(4)
(5)
(6)
Q.3
(7)
(8)
(9)
Photo Manipulation
Point Operation
Spatial Operation
Transformation
Information
Pseudo coding
(ii) Pseudo Coloring
(iii) Homomorphic Filtering
(1) Contrast Stretching : Low contrast images often occur due to poor or non-uniform illumination. Contrast stretching increases the dynamic range of the grey levels by a piecewise linear transformation :
v = α u,              0 ≤ u < a
v = β (u - a) + va,   a ≤ u < b
v = γ (u - b) + vb,   b ≤ u ≤ L
Typical choices are a = L/3 and b = 2L/3, with the slope β > 1 for the mid region, so that the middle range of grey levels is stretched.
(2) Thresholding and grey level slicing :
Fig. : Image histogram of u with threshold levels.
A specific range of grey levels [a, b] can be highlighted at full intensity :
V = L  for a ≤ u ≤ b,  0 otherwise
or sliced while preserving the original grey levels within the band :
V = u  for a ≤ u ≤ b,  0 otherwise
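Sketches of the two point operations above. The slope values α, β, γ are illustrative assumptions; only β > 1 for the mid region is taken from the text.

```python
def stretch(u, L=255, alpha=0.5, beta=2.0, gamma=0.5):
    """Piecewise-linear contrast stretch with a = L/3, b = 2L/3 and
    slope beta > 1 for the mid region (slopes here are illustrative)."""
    a, b = L / 3, 2 * L / 3
    va = alpha * a                    # value reached at u = a
    vb = va + beta * (b - a)          # value reached at u = b
    if u < a:
        return alpha * u
    if u < b:
        return beta * (u - a) + va
    return gamma * (u - b) + vb

def slice_grey(u, a, b, L=255, preserve_levels=False):
    """Grey-level slicing of the band [a, b]; outside the band the
    output is 0. With preserve_levels the original grey value is kept."""
    if a <= u <= b:
        return u if preserve_levels else L
    return 0
```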
(ii)
(iii)
(2) Spatial Operations : These are performed on local neighbourhoods of input pixels; often the image is convolved with a finite impulse response filter called a spatial mask.
(a) Spatial Averaging : Each output pixel is a weighted average of its neighbourhood :
v(m, n) = Σ a(k, l) y(m - k, n - l)  (sum over (k, l) in the window W)
where y(m, n) and v(m, n) are the input and output images respectively, a(k, l) are the filter weights and W is a suitably chosen window. Spatial averaging is used for noise smoothening, low pass filtering and subsampling of images. The observed image is modelled as
y(m, n) = u(m, n) + η(m, n)
where η(m, n) denotes noise with mean value 0.
(b) Directional Smoothening : To protect edges from blurring while smoothening, a directional averaging filter v(m, n : θ), computed over windows oriented along different directions θ, can be useful.
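A sketch of spatial averaging with a mask, skipping border pixels for simplicity (a real implementation would pad or clamp the borders):

```python
def spatial_average(img, mask):
    """Apply a spatial mask to an image (lists of lists): each interior
    output pixel is the weighted sum of its neighbourhood; border pixels
    are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    mh, mw = len(mask), len(mask[0])
    oh, ow = mh // 2, mw // 2
    out = [row[:] for row in img]
    for m in range(oh, h - oh):
        for n in range(ow, w - ow):
            out[m][n] = sum(mask[k][l] * img[m + k - oh][n + l - ow]
                            for k in range(mh) for l in range(mw))
    return out
```

A uniform 3x3 mask with weights 1/9 is the usual noise-smoothening box filter.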
(iii)
Low pass, high pass and band pass filtering : A high pass filter can be obtained from a low pass filter as
hHP(m, n) = δ(m, n) - hLP(m, n)
where hHP denotes the high pass filter and hLP the low pass filter; such a filter can be implemented by simply subtracting the low pass filter output from its input. Low pass filters are useful for noise smoothening and interpolation. High pass filters are useful in extracting edges and in sharpening images. Band pass filters are useful in enhancement of edges and other high pass image characteristics in the presence of noise.
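The subtract-the-low-pass idea can be shown in one dimension; a sketch using a simple moving-average low pass filter:

```python
def box_blur_1d(signal, radius=1):
    """Simple 1-D moving-average low-pass filter (window clamped at borders)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        window = signal[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def high_pass_1d(signal, radius=1):
    """High-pass output = input - low-pass output, per the relation above."""
    low = box_blur_1d(signal, radius)
    return [s - l for s, l in zip(signal, low)]
```

A constant (zero-frequency) signal is completely removed by the high pass filter, as expected.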
Chapter-8
Multimedia
Q.1 What is a Multimedia System? Write the applications of Multimedia.
Ans.: From the term multimedia system we derive that it is any system which supports more than a single kind of media.
We understand multimedia in a qualitative rather than quantitative way.
A multimedia system is characterized by computer controlled, integrated
production, manipulation, presentation storage and communication of
independent Information which is encoded at least through a continuous
(time dependent) and a discrete (time independent) medium.
Multimedia is very often used as an attribute of many systems, components, products, ideas etc. without satisfying the above presented characteristics.
Multimedia means, from the user's perspective, that computer information can be represented through audio or video, in addition to text, image, graphics and animation.
Applications of Multimedia :
(1)
(2)
Q.2
(3)
(4)
(5)
Research Centers
Tweaking
(ii)
Morphing a Motion
(iii)
Scanners
(ii)
Digital Cameras
(iii)
Frame Grabbers
(b)
(c)
(b)
(c)
(2)
(3)
Hand : Hand scanners are manual devices that are dragged across
the surface of the Image to be scanned.
(b)
Application :
(1)
(2)
Digital Cameras : Here the pictures are taken and stored in digital form as pixels.
Q.3 Explain the JPEG image compression standard.
(ii)
(2)
(3)
(4)
(5)
(6)
(7)
Fig. : Steps of JPEG compression : picture preparation (pixel, block, MCU), picture processing (predictor or FDCT), quantization, and entropy encoding (run length, Huffman, arithmetic).
Four variants of image processing can be determined, and these lead to four modes :
(1) The lossy, sequential DCT based mode (baseline process), which must be supported by every JPEG implementation.
(2) The expanded lossy DCT based mode, which provides a set of further enhancements.
(3) The lossless mode, which has a low compression ratio but allows perfect reconstruction of the original image.
(4) The hierarchical mode, which accommodates images at different resolutions.
RGB Signal : This signal consists of separate signals for the red, green and blue colors. Other colors are coded as combinations of these primaries; for example, R = G = B = 1 results in white.
YUV Signal : As human perception is more sensitive to brightness than to chrominance information, instead of separating colors we separate the brightness information (luminance Y) from the color information (chrominance U and V).
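A sketch of this separation, using the common BT.601 luma weights — the specific coefficients are an assumption, since the text does not give them:

```python
def rgb_to_yuv(r, g, b):
    """Separate brightness (Y) from chrominance (U, V) using the BT.601
    weighting -- an assumed concrete choice of coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # the eye is most sensitive to this
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return y, u, v
```

White (R = G = B = 1) yields full brightness and zero chrominance, matching the RGB remark above.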
(1)
The DC coefficients DCi-1 and DCi of consecutive blocks (block i-1 and block i) are DPCM coded :
DIF = DCi - DCi-1
Each quantization table entry is an 8 bit integer value. Quantization and dequantization must use the same table.
Entropy Encoding : During the initial step of entropy encoding, the quantized DC coefficients are treated separately from the quantized AC coefficients.
(a)
(b)
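The DPCM coding of the DC coefficients can be sketched as follows; the initial predictor value of 0 is an assumption matching common JPEG practice:

```python
def dpcm_dc(dc_coeffs):
    """DPCM-code the quantized DC coefficients of consecutive blocks:
    DIF_i = DC_i - DC_{i-1}, with the first block coded against 0."""
    prev = 0
    out = []
    for dc in dc_coeffs:
        out.append(dc - prev)   # only the difference to the previous block
        prev = dc
    return out
```

Because neighbouring blocks usually have similar average brightness, the differences are small and compress well.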
(ii)
Lossless Mode : The lossless mode shown in the figure uses data units of single pixels for image preparation. Any precision from 2 to 16 bits per pixel can be used.
Fig. : Lossless JPEG mode : uncompressed image data passes through a predictor and an entropy encoder (using tables) to produce the compressed image data.
In this mode, image processing & quantization use a predictive
technique instead of transformation encoding Technique.
Hierarchical Mode : This mode uses either the lossy DCT based algorithm or, alternatively, the lossless compression technique. The main feature of this mode is the encoding of an image at different resolutions, i.e. the encoded data contain images at several resolutions. The prepared image is initially sampled at a lower resolution (reduced by the factor 2^n).
Disadvantage : Requires more storage capacity.
Advantage : The compressed image is immediately available at different resolutions. Applications working with a lower resolution do not have to decode the whole image and subsequently apply image processing algorithms to reduce the resolution. It takes less CPU time to display an image at a reduced resolution directly than to decode the full resolution image and then scale it down to a reduced number of pixels.
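A stand-in for the initial reduction by a factor 2^n, using simple 2x2 averaging — the actual downsampling filter is not specified in the text:

```python
def downsample(img, n=1):
    """Reduce resolution by the factor 2^n by averaging 2x2 blocks
    (a simple stand-in for the hierarchical mode's initial sampling)."""
    for _ in range(n):
        img = [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
                 img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4
                for c in range(len(img[0]) // 2)]
               for r in range(len(img) // 2)]
    return img
```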
Q.4 Write a short note on the MPEG standard.
Ans.: The MPEG standard was developed by ISO to cover motion video as well as audio coding.
In 1993, MPEG was accepted as an International Standard (IS). The MPEG standard takes the functionalities of other standards into account :
(1)
(2)
(b)
(c)
(d)
Fig. : MPEG audio encoding : filter banks divide the signal into 32 sub-bands; a psychoacoustical model determines the noise level, which controls the quantization; the quantized values are entropy coded and passed to a multiplexer.
(2) Group of pictures layer
(3) Picture layer
(4) Next is the slice layer. Each slice consists of macroblocks, the number of which may vary from one image to the next.
(5) Macroblock layer
(6) Block layer
4) The relationship among the data and objects which are stored in the database is called the application database, and is referred to by the __?
A. Application programs
B. application model
C. graphics display
D. both a and b
10) RGB system needs __of storage for the frame buffer?
A. 100 megabytes
B. 10 megabytes
C. 3 megabytes
D. 2 Gb
11) The SRGP package provides the __ to a wide variety of display devices?
A. interface
B. connection
C. link
D. way
15) The midpoint circle drawing algorithm also uses the __of the circle to generate?
A. two-way symmetry
B. four-way symmetry
C. eight-way symmetry
D. both a & b
16) A polygon in which the line segment joining any 2 points within the polygon lies completely inside the polygon is called __?
A. convex polygon
B. concave polygon
C. both a and b
D. both a and b
17) A polygon in which the line segment joining any 2 points within the polygon may
not lie completely inside the polygon is called __?
A. convex polygon
B. concave polygon
C. both a and b
D. Hexagon
20) A line produced by a moving pen is __ at the end points than a line produced by pixel replication?
A. thin
B. straight
C. thicker
D. both a and b
21) The process of selecting and viewing the picture with different views is called__?
A. Windowing
B. clipping
C. projecting
D. both a and b
22) The process which divides each element of the picture into its visible and
invisible portions, allowing the invisible portion to be discarded is called__?
A. clipping
B. Windowing
C. both a and b
D. Projecting
25) A method used to test lines for total clipping is equivalent to the__?
A. logical XOR operator
B. logical OR operator
C. logical AND operator
D. both a and b
26) A process of changing the position of an object in a straight line path from one
coordinate location to another is called__?
A. Translation
B. rotation
C. motion
D. both b and c
Answer Key :
1. A    2. B    3. A    4. A    5. C
6. B    7. C    8. A    9. A    10. C
11. A   12. C   13. A   14. A   15. C
16. A   17. B   18. A   19. A   20. C
21. A   22. A   23. A   24. A   25. C
26. A   27. C   28. A   29. A   30. A
Case study
1.) Implement the polyline function using the DDA algorithm, given any number (n) of
input points.
2.) How much time is used up scanning across each row of pixels for the duration of
screen refresh on a raster system with a resolution of 1290 by 1024 and a refresh rate of
60 frames per second?
3.) Implement a procedure to perform a one-point perspective projection of an object.
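Hedged sketches for case studies 1 and 2. The rounding behavior and the assumption that the frame time divides evenly over the pixel rows are mine, not from the text.

```python
def dda_line(x0, y0, x1, y1):
    """Rasterize one segment with the DDA algorithm: step along the
    longer axis, incrementing the other coordinate by the slope."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    pts = []
    for _ in range(steps + 1):
        pts.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return pts

def polyline(points):
    """Case study 1: plot a polyline through any number of input points."""
    pts = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = dda_line(x0, y0, x1, y1)
        pts.extend(seg if not pts else seg[1:])   # avoid duplicating joints
    return pts

# Case study 2: each of the 1024 rows gets an equal share of the 1/60 s
# frame time (the horizontal resolution does not enter the calculation).
row_time = 1.0 / 60 / 1024   # about 16.3 microseconds per row
```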
Glossary
2D Graphics
Displayed representation of a scene or an object along two axes of reference: height and width (x
and y).
3D Graphics
Displayed representation of a scene or an object that appears to have three axes of reference:
height, width, and depth (x, y, and z).
3D Pipeline
The process of 3D graphics can be divided into three-stages: tessellation, geometry, and
rendering. In the tessellation stage, a described model of an object is created, and the object is
then converted to a set of polygons. The geometry stage includes transformation, lighting, and
setup. The rendering stage, which is critical for 3D image quality, creates a two dimensional
display from the polygons created in the geometry stage.
Anti-aliasing
Anti-aliasing is sub-pixel interpolation, a technique that makes edges appear to have better resolution.
Bitmap
A Bitmap is a pixel by pixel image.
Blending
Blending is the combining of two or more objects by adding them on a pixel-by-pixel basis.
Depth Cueing
Depth cueing is the lowering of intensity as objects move away from the viewpoint.
Dithering
Dithering is a technique for achieving 24-bit quality in 8 or 16-bit frame buffers. Dithering uses
two colors to create the appearance of a third, giving a smooth appearance to an otherwise abrupt
transition.
Flat Shading
The flat shading method is also called constant shading. For rendering, it assigns a uniform color
throughout an entire polygon. This shading results in the lowest quality, an object surface with a
faceted appearance and a visible underlying geometry that looks 'blocky'.
Interpolation
Interpolation is a mathematical way of regenerating missing or needed information. For example,
an image needs to be scaled up by a factor of two, from 100 pixels to 200 pixels. The missing
pixels are generated by interpolating between the two pixels that are on either side of the pixel
that needs to be generated. After all of the 'missing' pixels have been interpolated, 200 pixels exist
where only 100 existed before, and the image is twice as big as it used to be.
Lighting
There are many techniques for creating realistic graphical effects to simulate a real-life 3-D object
on a 2-D display. One technique is lighting. Lighting creates a real-world environment by means
of rendering the different grades of darkness and brightness of an object's appearance to make the
object look solid.
Line Buffer
A line buffer is a memory buffer used to hold one line of video. If the horizontal resolution of the
screen is 640 pixels and RGB is used as the color space, the line buffer would have to be 640
locations long by 3 bytes wide. This amounts to one location for each pixel and each color plane.
Line buffers are typically used in filtering algorithms.
Projection
The process of reducing three dimensions to two dimensions for display is called Projection. It is
the mapping of the visible part of a three dimensional object onto a two dimension screen.
Rasterization
Translating an image into pixels.
Rendering
The process of creating life-like images on a screen using mathematical models and formulas to add shading, color, and illumination to a 2D or 3D wireframe.
Transformation
Change of coordinates; a series of mathematical operations that act on output primitives and
geometric attributes to convert them from modeling coordinates to device coordinates.
Z-buffer
A part of off-screen memory that holds the distance from the viewpoint for each pixel, the Z-value. When objects are rendered into a 2D frame buffer, the rendering engine must remove
hidden surfaces.
Z-buffering
A process of removing hidden surfaces using the depth value stored in the Z-buffer. Before
bringing in a new frame, the rendering engine clears the buffer, setting all Z-values to 'infinity'.
When rendering objects, the engine assigns a Z-value to each pixel: the closer the pixel to the
viewer, the smaller the Z value. When a new pixel is rendered, its depth is compared with the
stored depth in the Z-buffer. The new pixel is written into the frame buffer only if its depth value
is less than the stored one.
Z-sorting
A process of removing hidden surfaces by sorting polygons in back-to-front order prior to
rendering. Thus, when the polygons are rendered, the forward-most surfaces are rendered last.
The rendering results are correct unless objects are close to or intersect each other. The advantage
is not requiring memory for storing depth values. The disadvantage is the cost in more CPU
cycles and limitations when objects penetrate each other.
Filtering
This is a broad term which can mean the removal of coffee grounds from the coffee. However, within the narrow usage of this book, a filtering operation is the same as a convolution operation (see "convolution"). Anti-aliasing is usually done by filtering.
flat projection
A method of projecting a 3D scene onto a 2D image such that the resulting object sizes
are not dependent on their position. Flat projection can be useful when a constant scale is
needed throughout an image, such as in some mechanical drawings.
Frame
One complete video image. When interlacing is used, a frame is composed of two fields,
each containing only half the scan lines.
GIF
A file format for storing images. GIF stands for Graphics Interchange format, and is
owned by Compuserve, Inc.
key frame
A selected frame of an animation at which all the scene state is defined. In the key frame
animation method, the scene state at key frames is interpolated to create the scene state at
the in-between frames.
key frame animation
An animation control method that works by specifying the complete scene state at
selected, or key, frames. The scene state for the remaining frames is interpolated from the
state at the key frames.
Raster Scan
The name for the pattern the electron beam sweeps out on a CRT face. The image is
made of closely spaced scan lines, or horizontal sweeps.
Refresh Rate
The rate at which parts of the image on a CRT are re-painted, or refreshed. The horizontal
refresh rate is the rate at which individual scan lines are drawn. The vertical refresh rate
is the rate at which fields are drawn in interlaced mode, or whole frames are drawn in
non-interlaced mode.
Refresh Rate, Horizontal
The rate at which scan lines are drawn when the image on a CRT is re-drawn, or
refreshed.
Refresh Rate, Vertical
The rate at which fields are re-drawn on a CRT when in interlaced mode, or the rate at
which the whole image is re-drawn when in non-interlaced mode.
Scan Line
One line in a raster scan. Also used to mean one horizontal row of pixels.
M.Sc. (Information Technology)
(SECOND SEMESTER) EXAMINATION, 2012
(New Scheme)
COMPUTER GRAPHICS AND MULTIMEDIA TECHNOLOGY
PAPER: 121
TIME ALLOWED: THREE HOURS
Maximum Marks : 80
Note:(1) No supplementary answer-book will be given to any candidate. Hence the
candidates should write the answers precisely in the Main answer-book only.
(2) All the parts of a question should be answered at one place in the answer-book. One complete question should not be answered at different places in the answer-book.
(b)
Write the steps required to plot a line whose slope is between 0° and 45°, using the slope intercept equation or using Bresenham's method.
4.(a)
(b)
5. (a)
What are the properties of a Bezier curve with respect to a set of control points? For the case of 4 control points, derive the blending functions and their shapes.
What are B-spline and Bezier curves? Explain the A-buffer based hidden surface algorithm.
(b)
6. (a)
(b)
7. (a)
(b)
8.
9.
Bibliography
1. http://www.cgsociety.org/
2. http://forums.cgsociety.org/
3. http://www.maacindia.com/
4. http://cg.tutsplus.com/
5. http://programmedlessons.org/VectorLessons/index.html