
AALIM MUHAMMED SALEGH COLLEGE OF ENGINEERING

DEPARTMENT OF INFORMATION TECHNOLOGY

CS8092 –COMPUTER GRAPHICS & MULTIMEDIA

YEAR / SEM: III / VI

PREPARED BY: ER.RUBIN JULIS M Asst.Prof


UNIT – I ILLUMINATION AND COLOUR MODELS

PART - A

1. State the difference between CMY and HSV color models.(nov/dec 2012)
The HSV (Hue, Saturation, Value) model is a color model that uses color descriptions with a more
intuitive appeal to the user. To give a color specification, the user selects a spectral color and the amounts of
white and black that are to be added to obtain different shades, tints, and tones.
A color model defined with the primary colors cyan, magenta, and yellow is useful for describing color
output to hard-copy devices.

2. What are subtractive colors?(may/june 2012)


The RGB model is an additive system, whereas the Cyan-Magenta-Yellow (CMY) model is a subtractive color model. In
a subtractive model, the more of an element that is added, the more it subtracts from white. So, if none of
these is present the result is white, and when all are fully present the result is black.
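For components normalized to the range [0, 1], this relationship is commonly written as CMY = (1, 1, 1) - RGB. A minimal Python sketch of the idea (illustrative function names, not from the prescribed notes):

# RGB <-> CMY relationship, with all components normalized to [0, 1].
def rgb_to_cmy(r, g, b):
    # An additive RGB colour and its subtractive CMY equivalent.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    # Full ink coverage (1, 1, 1) gives black; no ink gives white.
    return 1.0 - c, 1.0 - m, 1.0 - y

print(rgb_to_cmy(1.0, 1.0, 1.0))   # white paper, no ink -> (0.0, 0.0, 0.0)
print(cmy_to_rgb(1.0, 1.0, 1.0))   # all three inks -> black (0.0, 0.0, 0.0)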

3. Define YIQ color model


In the YIQ color model, luminance (brightness) information is contained in the Y parameter, while chromaticity
information (hue and purity) is contained in the I and Q parameters.
A combination of red, green and blue intensities is chosen for the Y parameter to yield the standard luminosity
curve. Since Y contains the luminance information, black-and-white TV monitors use only the Y signal.
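As an illustration, the RGB-to-YIQ conversion is often quoted with the NTSC coefficients used in the Python sketch below (the exact values vary slightly between references; the function name is illustrative):

# RGB -> YIQ using commonly quoted NTSC coefficients.
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (the only signal a B/W set needs)
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chromaticity, orange-blue axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chromaticity, purple-green axis
    return y, i, q

# A mid grey has zero chromaticity: only the Y signal carries information.
print(rgb_to_yiq(0.5, 0.5, 0.5))   # -> (0.5, ~0.0, ~0.0)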

4. What do you mean by shading of objects? (nov/dec 2011)


A shading model dictates how light is scattered or reflected from a surface. The shading models described
here focus on achromatic light. Achromatic light has brightness but no color; it is a shade of gray, so it is
described by a single value, its intensity.
A shading model uses two types of light source to illuminate the objects in a scene: point light sources and
ambient light.

5. What is texture?( nov/dec 2011)


The realism of an image is greatly enhanced by adding surface texture to the various faces of a mesh object. The
basic technique begins with some texture function, texture(s,t), in texture space, which has two parameters
s and t. The function texture(s,t) produces a color or intensity value for each value of s and t between
0 (dark) and 1 (light).

6. What are the types of reflection of incident light?(nov/dec 2013)


There are two different types of reflection of incident light:
• Diffuse scattering
• Specular reflection

7. Define rendering (may/june 2013)


Rendering is the process of generating an image from a model (or models in what collectively could
be called a scene file) by means of computer programs. The result of such a process can also be called a
rendering.

8. Differentiate flat and smooth shading (may/june 2013)


The main distinction is between a shading method that accentuates the individual polygons (flat shading)
and a method that blends the faces to de-emphasize the edges between them (smooth shading).

9. Define shading (may/june 2012)


Shading is a process used in drawing for depicting levels of darkness on paper by applying media more
densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas.

10. What is a shadow? (nov/dec 2012)


Shadows make an image more realistic. The way one object casts a shadow on another object gives
important visual clues as to how the two objects are positioned relative to each other. Shadows convey a
lot of information; in effect, you get a second look at the object from the viewpoint of the light source.

11. What are two methods for computing shadows?


• Shadows as texture.
• Creating shadows with the use of a shadow buffer.

12. Write any two Drawbacks of Phong Shading


• Relatively slow in speed.
• More computation is required per pixel.

13. What are the two common sources of textures?


• Bitmap textures.
• Procedural textures.

14. Write two types of smooth shading.


• Gouraud shading.
• Phong shading.

15. What is a color model?


A color model is a method for explaining the properties or behavior of color within some particular context.
Example: XYZ model, RGB model.

16. Define intensity of light.


Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit projected area of source.

17. What is hue?


The perceived light has a dominant frequency (or dominant wavelength). The dominant frequency is also called
the hue, or simply the color.

18. What is purity of light?


Purity describes how washed out or how “pure” the color of the light appears. Pastels and pale colors are
described as less pure.

19. Define the term chromaticity.


The term chromaticity is used to refer collectively to the two properties describing color characteristics: purity and
dominant frequency.

20. How is the color of an object determined?


When white light is incident upon an object, some frequencies are reflected and some are absorbed by the object.
The combination of frequencies present in the reflected light determines what we perceive as the color of the
object.

21. Define purity or saturation.


Purity describes how washed out or how "pure" the color of the light appears.

22. Define complementary colors.


If two color sources combine to produce white light, they are referred to as complementary colors. Examples
of complementary color pairs are red and cyan, green and magenta, and blue and yellow.

23. Define primary colors.


The two or three colors used to produce other colors in a color model are referred to as primary colors.

24. State the use of chromaticity diagram.


Comparing color gamuts for different sets of primaries. Identifying complementary colors. Determining dominant
wavelength and purity of a given color.
25. What is Color Look up table?
In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. It is
necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video
controller uses a look-up table to store the pixel values in RGB format. This look-up table is commonly
known as the colour table.
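A minimal sketch of how a colour look-up table works (the table entries and frame-buffer contents here are invented for illustration):

# Indexed colour: the frame buffer stores small indices, the CLUT stores the full RGB entries.
lookup_table = [
    (0, 0, 0),        # index 0 -> black
    (255, 0, 0),      # index 1 -> red
    (0, 255, 0),      # index 2 -> green
    (255, 255, 255),  # index 3 -> white
]

frame_buffer = [3, 1, 1, 0, 2, 3]   # indices, far smaller than full 24-bit colours

# The video controller resolves each index through the table during screen refresh.
displayed_pixels = [lookup_table[i] for i in frame_buffer]
print(displayed_pixels)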

26. What is the use of hidden line removing algorithm?


The hidden line removal algorithm determines the lines, edges, surfaces or volumes that are visible or invisible to
an observer located at a specific point in space.
27.Define Computer Graphics
Computer graphics remains one of the most exciting and rapidly growing computer fields. Computer graphics may
be defined as the pictorial or graphical representation of objects in a computer.

28.Define Random scan/Raster scan displays?


Random scan is a method in which the display is made by an electron beam that is directed only to the points or
parts of the screen where the picture is to be drawn. Raster scan is a scanning technique in which the
electron beam sweeps from top to bottom and from left to right, and the intensity is turned on or off to light or
unlight each pixel.

29.What is Aspect ratio?


The ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions of the
screen is called the aspect ratio. Usually the aspect ratio is ¾.

30.What is aliasing and antialiasing?


In line drawing algorithms, the rasterized locations do not all lie exactly on the true line, yet they have to represent a
straight line. This problem is severe on low-resolution screens, where a line appears like a stair-step. This
effect is known as aliasing. The process of adjusting the intensities of the pixels along the line to minimize this
effect is called antialiasing.

31. What is pixel phasing?


Pixel phasing is an antialiasing technique in which stair steps are smoothed out by moving the electron beam to
positions that more nearly approximate those specified by the object geometry.

32.What do you mean by emissive and non-emissive displays?


An emissive display converts electrical energy into light energy; plasma panels and thin-film electroluminescent
displays are examples. Non-emissive displays use optical effects to convert sunlight or light from some other
source into graphics patterns; a liquid crystal display is an example.

33.What do you mean by scan conversion?


A major task of the display processor is digitizing a picture definition given in an application program into a set of
pixel-intensity values for storage in the frame buffer. This digitization process is called scan conversion.

34. What is an output primitive?


Graphics programming packages provide functions to describe a scene in terms of basic geometric structures,
referred to as output primitives.

35.What is Transformation?
Transformation is the process of introducing changes in the shape, size and orientation of an object using scaling,
rotation, reflection, shearing, translation, etc.

36.What is translation?
Translation is the process of changing the position of an object in a straight-line path from one coordinate location
to another. Every point (x, y) in the object must undergo a displacement to
(x′, y′). The transformation is:
x′ = x + tx
y′ = y + ty
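A minimal Python sketch of applying this translation to every vertex of an object (illustrative only):

# Translate every vertex of a 2D object: x' = x + tx, y' = y + ty.
def translate(points, tx, ty):
    return [(x + tx, y + ty) for (x, y) in points]

triangle = [(0, 0), (4, 0), (2, 3)]
print(translate(triangle, 2, 5))   # -> [(2, 5), (6, 5), (4, 8)]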

37. What is rotation?


A 2-D rotation is done by repositioning the coordinates along a circular path: a point originally at (r cos φ, r sin φ) is
moved to x′ = r cos(θ + φ) and y′ = r sin(θ + φ), where θ is the rotation angle.

38. What is scaling?


The scaling transformation changes the size (and possibly the shape) of an object and is carried out by multiplying each vertex (x, y)
by the scaling factors Sx and Sy, where Sx is the scaling factor in x and Sy is the scaling factor in y.

39.What is shearing?
The shearing transformation slants the object along the X direction or the Y direction as required; i.e., this
transformation slants the shape of an object along a required plane.

40. What is reflection?


Reflection is the transformation that produces a mirror image of an object. For this, angles and
lines (axes) of reflection are used.

41. Distinguish between window port & view port.


A portion of a picture that is to be displayed by a window is known as window port. The display area of the part
selected or the form in which the selected part is viewed is known as view port.

42. Define clipping? And types of clipping.


Clipping is the method of cutting a graphics display to neatly fit a predefined graphics region or the viewport.
• Point clipping
• Line clipping

43. What is the need of homogeneous coordinates?


To perform more than one transformation at a time, homogeneous coordinates and matrices are used. They reduce
unwanted intermediate calculations, save time and memory, and allow a sequence of transformations to be combined into a single matrix.
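A small Python sketch of this idea, assuming 3x3 homogeneous matrices applied to column-vector points (illustrative code, not from the prescribed notes):

import math

# 2D transformations as 3x3 homogeneous matrices, composed by matrix multiplication.
def mat_mul(a, b):
    # Multiply two 3x3 matrices stored as row-major nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, point):
    x, y = point
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return xh / w, yh / w

# Scale by 2, rotate 90 degrees, then translate by (5, 0) -- collapsed into one matrix.
composite = mat_mul(translation(5, 0), mat_mul(rotation(math.pi / 2), scaling(2, 2)))
print(apply(composite, (1, 0)))   # (1,0) -> (2,0) -> (0,2) -> approximately (5, 2)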

44. What is fixed point scaling?


The location of a scaled object can be controlled by a position called the fixed point that is to remain unchanged
after the scaling transformation.

45. Define Affine transformation?


A coordinate transformation of the form x′ = axx·x + axy·y + bx, y′ = ayx·x + ayy·y + by is called a two-dimensional affine
transformation. Each of the transformed coordinates x′ and y′ is a linear function of the original coordinates x and
y, and the parameters aij and bk are constants determined by the transformation type.

46.List out the various Text clipping.


• All-or-none string clipping – if the entire string is inside a clip window, keep it; otherwise discard it.
• All-or-none character clipping – discard only those characters that are not completely inside the window. Any
character that either overlaps or is outside a window boundary is clipped.
47. What is the use of clipping?(may/june 2012)
Clipping in computer graphics is used to remove objects, lines or line segments that are outside the viewing
volume.

48. How will you clip a point?(may/june 2013)


Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the
following inequalities are satisfied:
xwmin ≤ x ≤ xwmax and ywmin ≤ y ≤ ywmax
where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate
window boundaries or viewport boundaries. If any one of these inequalities is not satisfied, the point is
clipped (not saved for display).
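A minimal Python sketch of this test (illustrative names):

# Keep a point only if it lies inside the rectangular clip window.
def clip_point(x, y, xw_min, xw_max, yw_min, yw_max):
    return xw_min <= x <= xw_max and yw_min <= y <= yw_max

print(clip_point(3, 4, 0, 10, 0, 10))    # True  -> point is saved for display
print(clip_point(12, 4, 0, 10, 0, 10))   # False -> point is clipped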

49. Define viewing transformation.


The mapping of a part of a world-coordinate scene to device coordinates is called the viewing transformation. The
two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the
windowing transformation.

PART – B
1. Explain in detail about XYZ color model.
Refer Unit 1 NotesPg(2-3)
2. Explain in detail about RGB color model.[DEC 2011,16,18,MAY 2016,18]
Refer Unit 1 Notes Pg(3-4)
3. Explain in detail about YIQ color model.[DEC 2016,17,MAY 2018]
Refer Unit 1 Notes Pg (4)
4. Explain in detail about CMY color model.[MAY 2012,18,DEC 18]
Refer Unit 1 Notes Pg(4-5)
5. Explain in detail about HSV color model.[DEC 2011,13,16,MAY 2013,16,17,18]
Refer Unit 1 Notes Pg(5-7)
6. Compare and contrast RGB and CMY.[MAY 2012]
Refer Unit 1 Notes Pg (3-5)
7. Explain in detail about the conversion between HSV and RGB color models.[MAY 2016,17]
Refer Unit 1 Notes Pg(3-4)
Refer Unit 1 Notes Pg(5-7)
8. Explain in detail about HLS color model.[MAY 2013,17,18]
Refer Unit 1 Notes Pg(7)
9. Derive equations and explain Bresenham’s Line Drawing algorithm with an example.[MAY
2016,DEC 12,18]
Refer Unit 1 Notes Pg(9-10)
10. Explain DDA Line Drawing Algorithm with an example in detail.[MAY 2016,19 DEC 2017]
Refer Unit 1 Notes Pg(7-9)
11. Derive equations and explain Midpoint Circle generation algorithm with an example.[DEC
2011,17,MAY 2017,2018] Refer Unit 1 Notes Pg(10-11)

12. Write down and explain the midpoint circle drawing algorithm by deriving its decision
parameter. Also calculate the pixel locations of a circle having center at (2, 3) and radius 10 units.
[DEC 2011,17,MAY 2017,2018] Refer Unit 1 Notes Pg(10-11)

13. Using the Bresenham line drawing algorithm, find out which pixels would be turned on for the
line with end points (4,4) to (12,9). [MAY 2016,DEC 12,18] Refer Unit 1 Notes Pg(9-10)
UNIT – II TWO DIMENSIONAL GRAPHICS
PART - A

1. What is Transformation?
Transformation is the process of introducing changes in the shape, size and orientation of an object
using scaling, rotation, reflection, shearing, translation, etc.

2.Write short notes on active and passive transformations.


In the active transformation the points x and y represent different coordinates of the same
coordinate system. Here all the points are acted upon by the same transformation and hence the shape
of the object is not distorted.

In a passive transformation the points x and y represent same points in the space but in a different
coordinate system. Here the change in the coordinates is merely due to the change in the type of the
user coordinate system.
3.What is translation?
Translation is the process of changing the position of an object in a straight-line path from one
coordinate location to another. Every point (x, y) in the object must undergo a displacement to
(x′, y′). The transformation is:
x′ = x + tx
y′ = y + ty

4. What is rotation?
A 2-D rotation is done by repositioning the coordinates along a circular path: a point originally at
(r cos φ, r sin φ) is moved to x′ = r cos(θ + φ) and y′ = r sin(θ + φ), where θ is the rotation angle.

5. What is scaling?
The scaling transformation changes the size (and possibly the shape) of an object and is carried out by multiplying
each vertex (x, y) by the scaling factors Sx and Sy, where Sx is the scaling factor in x and Sy is the scaling
factor in y.

6. What is shearing?[MAY 2016,DEC 2017]


The shearing transformation slants the object along the X direction or the Y direction as
required; i.e., this transformation slants the shape of an object along a required plane.

7. What is reflection?
Reflection is the transformation that produces a mirror image of an object. For this, angles and
lines (axes) of reflection are used.

8. Distinguish between window port & view port?[NOV/DEC 2011]


A portion of a picture that is to be displayed by a window is known as window port. The display area
of the part selected or the form in which the selected part is viewed is known as view port.

9. Define clipping? And types of clipping.[DEC 2017,2018]

Clipping is the method of cutting a graphics display to neatly fit a predefined graphics region or the
view port.
• Point clipping
• Line clipping
• Area clipping
• Curve clipping
• Text clipping

10. What is covering (exterior clipping)?[MAY 2017]


This is just opposite to clipping. This removes the lines coming inside the windows and displays the
remaining. Covering is mainly used to make labels on the complex pictures.

11. What is the need of homogeneous coordinates?


To perform more than one transformation at a time, homogeneous coordinates and matrices are used. They
reduce unwanted intermediate calculations, save time and memory, and allow a sequence of
transformations to be combined into a single matrix.

12. Distinguish between uniform scaling and differential scaling.


When the scaling factors sx and sy are assigned the same value, a uniform scaling is produced that
maintains relative object proportions. Unequal values for sx and sy result in a differential scaling that
is often used in design applications.

13. What is fixed point scaling?


The location of a scaled object can be controlled by a position called the fixed point that is to remain
unchanged after the scaling transformation.

14. Define Affine transformation.


A coordinate transformation of the form x′ = axx·x + axy·y + bx, y′ = ayx·x + ayy·y + by is called a two-
dimensional affine transformation. Each of the transformed coordinates x′ and y′ is a linear function
of the original coordinates x and y, and the parameters aij and bk are constants determined by the
transformation type.

15. Distinguish between bitBlt and pixBlt.


Raster functions that manipulate rectangular pixel arrays are generally referred to as raster ops.
Moving a block of pixels from one location to another is also called a block transfer of pixel values.
On a bilevel system this operation is called a bitBlt (bit-block transfer); on a multilevel system it is
called a pixBlt.

16. List out the various Text clipping.


• All-or-none string clipping – if the entire string is inside a clip window, keep it; otherwise
discard it.

• All-or-none character clipping – discard only those characters that are not completely inside
the window. Any character that either overlaps or is outside a window boundary is clipped.

• Individual characters – if an individual character overlaps a clip window boundary, clip off
the parts of the character that are outside the window.

19. Write down the shear transformation matrix. (nov/dec 2012)

A transformation that distorts the shape of an object such that the transformed shape appears
as if the object were composed of internal layers that had been caused to slide over each other is
called a shear. An x-direction shear relative to the x axis has the homogeneous matrix:
1   shx   0
0    1    0
0    0    1

20. What is the use of clipping?(may/june 2012)


Clipping in computer graphics is used to remove objects, lines or line segments that are outside the viewing
volume.

21. How will you clip a point?(may/june 2013)


Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for
display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax and ywmin ≤ y ≤ ywmax

where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate
window boundaries or viewport boundaries. If any one of these inequalities is not satisfied, the point
is clipped (not saved for display).

22. Define viewing transformation.


The mapping of a part of a world-coordinate scene to device coordinates is called the viewing
transformation. The two-dimensional viewing transformation is simply referred to as the window-to-viewport
transformation or the windowing transformation.

PART – B

1. Explain reflection and shear?[DEC 2012,13,18,MAY 2018]


Refer Unit 2 Notes pg(18-19)
2. Explain Liang-Barsky line clipping [MAY 2005,10,DEC 2011,2015]
Refer Unit 2 Notes pg(25-26)
3. Explain Sutherland Hodgeman polygon clipping[DEC 2016,MAY 2018]
Refer Unit 2 Notes pg(27)
4. Explain about clipping operations[DEC2016,MAY 2018]
Refer Unit 2 Notes pg(23-27)
5. Explain in detail about window to viewport coordinate transformation.[DEC 2015,MAY 18]
Refer Unit 2 Notes pg(21-22)
6. Write a detailed note on the basic two dimensional transformations.[DEC 2012,17,MAY 2013,17,18]
Refer Unit 2 Notes pg(14-19)
7. Explain with an example the Cohen-Sutherland line clipping algorithm.[MAY 2012,16,19,DEC
12,17]
Refer Unit 2 Notes pg(23-24)
8. Compare the Cohen-Sutherland line clipping algorithm and the Liang-Barsky line clipping algorithm.
[MAY 2005,10,DEC 2011,2015] Write a note on any one polygon clipping algorithm.
Refer Unit 2 Notes pg(23-24), Refer Unit 2 Notes pg(25-26), Refer Unit 2 Notes pg(27-28)
UNIT – III THREE DIMENSIONAL GRAPHICS

PART - A

1. What are the various representation schemes used in three dimensional objects?
• Boundary representation (B-reps) – describes a 3-dimensional object as a set of surfaces that
separate the object interior from the environment.

• Space-partitioning representation – describes interior properties by partitioning the spatial
region containing an object into a set of small, non-overlapping, contiguous solids.

2. What is Polygon mesh?


A polygon mesh is a method of representing a polygon surface; when the object surfaces are tiled, it is more
convenient to specify the surface facets with a mesh function. The common meshes are:
• Triangle strip – n points generate (n-2) connected triangles
• Quadrilateral mesh – an n x m array of points generates (n-1)(m-1) quadrilaterals

3. What is Bezier Basis Function?


Bezier Basis functions are a set of polynomials, which can be used instead of the primitive
polynomial basis, and have some useful properties for interactive curve design.

4. What is surface patch?


A single surface element can be defined as the surface traced out as two parameters (u, v) take all
possible values between 0 and 1 in a two-parameter representation. Such a single surface element is
known as a surface patch.

5. Write short notes on rendering bi-cubic surface patches of constant u and v method.
The simple way is to draw the iso-parametric lines of the surface. Discrete approximations to curves on
the surface are produced by holding one parameter constant and allowing the other to vary at discrete
intervals over its whole range. This produces curves of constant u and constant v.

6. What are the advantages of rendering polygons by scan line method?


i. The max and min values of each scan line are easily found.
ii. The intersection of scan lines with edges is easily calculated by a simple incremental method.
iii. The depth of the polygon at each pixel is easily calculated by an incremental method.

7. What are the advantages of rendering by patch splitting?


• It is fast, especially on workstations with a hardware polygon-rendering pipeline.
• Its speed can be varied by altering the depth of subdivision.


8. Define B-Spline curve.
A B-Spline curve is a set of piecewise (usually cubic) polynomial segments that pass close to a set of
control points. The curve does not pass through these control points; it only passes close to
them.

9. What is a spline?
To produce a smooth curve through a designated set of points, a flexible strip called a spline is used. Such
a spline curve can be mathematically described with a piecewise cubic polynomial function whose
first and second derivatives are continuous across the various curve sections.

10. What is the use of control points?


A spline curve can be specified by giving a set of coordinate positions, called control points, which
indicate the general shape of the curve.

11. What are the different ways of specifying spline curve?


• Using a set of boundary conditions that are imposed on the spline.
• Using the matrix that characterizes the spline.
• Using a set of blending functions that calculate the positions along the curve path by
specifying a combination of geometric constraints on the curve.

12. What are the important properties of a Bezier curve?


• It needs only four control points.
• It always passes through the first and last control points.
• The curve lies entirely within the convex hull formed by the four control points.
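A short Python sketch of evaluating a cubic Bezier curve from its four control points with the Bernstein blending functions (illustrative only; the control points are invented):

# Cubic Bezier evaluation: B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3.
def cubic_bezier(p0, p1, p2, p3, t):
    u = 1.0 - t
    bx = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    by = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return bx, by

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
# The curve passes exactly through the first and last control points ...
print(cubic_bezier(*ctrl, 0.0))   # -> (0.0, 0.0)
print(cubic_bezier(*ctrl, 1.0))   # -> (4.0, 0.0)
# ... and stays inside the convex hull of the four control points in between.
print(cubic_bezier(*ctrl, 0.5))   # -> (2.0, 1.5)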

13. Differentiate between interpolation spline and approximation spline.


When the spline curve passes through all the control points it is called an interpolation spline. When the
curve does not pass through all the control points it is called an approximation spline.

14. What do you mean by parabolic splines?


For parabolic splines a parabola is fitted through the first three points p1,p2,p3 of the data array of k
points. Then a second parabolic arc is found to fit the sequence of points p2, p3, p4. This continues in
this way until a parabolic arc is found to fit through points pn-2, pn-1 and pn. The final plotted curve
is a meshing together of all these parabolic arcs.

15. What is cubic spline?


Cubic splines are a straightforward extension of the concepts underlying parabolic splines. The total
curve in this case is a sequence of arcs of cubic rather than parabolic curves. Each cubic has the form
ax³ + bx² + cx + d.

16. What is a Blobby object?[DEC 2016,2018,MAY 2018]


Some objects do not maintain a fixed shape, but change their surface characteristics in certain motions
or when in proximity to other objects. These are known as blobby objects. Examples: molecular
structures, water droplets.

17. Define Octrees.


Hierarchical tree structures called octrees are used to represent solid objects in some graphics
systems. Medical imaging and other applications that require displays of object cross sections
commonly use the octree representation.

18. Define Projection.[MAY 2017]


The process of displaying 3D objects on a 2D display unit is known as projection. The projection transforms
3D objects onto a 2D projection plane. The process of converting the description of objects from world
coordinates to viewing coordinates precedes this projection.

19. What are the steps involved in 3D transformation?


• Modeling Transformation
• Viewing Transformation
• Projection Transformation
• Workstation Transformation

20. What do you mean by view plane?


A view plane is nothing but the film plane in a camera, which is positioned and oriented for a particular
shot of the scene.

21. What is view-plane normal vector?


This normal vector gives the direction perpendicular to the view plane, and it is denoted [DXN, DYN, DZN].

22. What is view distance?


The view plane normal vector is a directed line segment from the view plane to the view reference
point. The length of this directed line segment is referred to as view distance.

23. Mention some surface detection methods.


Back-face detection, depth-buffer method, A-buffer method, scan-line method, depth-sorting method,
BSP-tree method, area subdivision, octree method, ray casting.

24. What do you mean by parallel projection? [MAY 2012,16,18,19,DEC-2017,18]


Parallel projection is one in which the z coordinate is discarded and parallel lines from each vertex on
the object are extended until they intersect the view plane.

25. What do you mean by Perspective projection? [MAY 2012,16,18,19,DEC-2017,18]


Perspective projection is one in which the lines of projection are not parallel. Instead, they all
converge at a single point called the center of projection.
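As a rough illustration, with the centre of projection at the origin and the view plane at distance d along the z axis, a point projects to (x·d/z, y·d/z). The Python sketch below uses this simplified convention (an assumption for illustration, not the only convention used in textbooks):

# Simple perspective projection onto the view plane z = d, centre of projection at the origin.
def perspective_project(point, d):
    x, y, z = point
    if z == 0:
        raise ValueError("point lies in the plane of the centre of projection")
    return x * d / z, y * d / z

# Same x and y, different depths: the farther point projects closer to the centre (foreshortening).
print(perspective_project((2.0, 2.0, 4.0), 1.0))   # -> (0.5, 0.5)
print(perspective_project((2.0, 2.0, 8.0), 1.0))   # -> (0.25, 0.25)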

26. What is Projection reference point?


In Perspective projection, the lines of projection are not parallel. Instead, they all converge at a single
point called Projection reference point.

27. What is the use of Projection reference point?


In perspective projection, object positions are transformed to the view plane along these converging
projection lines, and the projected view of an object is determined by calculating the intersection of the
converging projection lines with the view plane.

28. What are the different types of parallel projections?


The parallel projections are basically categorized into two types, depending on the relation between
the direction of projection and the normal to the view plane. They are orthographic parallel projection
and oblique projection.

29. What is orthographic parallel projection?


When the direction of the projection is normal (perpendicular) to the view plane then the projection is
known as orthographic parallel projection

30. What is oblique projection?


When the direction of the projection is not normal (not perpendicular) to the view plane then the
projection is known as oblique projection.
31. What is an axonometric orthographic projection?
The orthographic projection can display more than one face of an object. Such an orthographic
projection is called axonometric orthographic projection.

32. What is cavalier projection?


The cavalier projection is one type of oblique projection, in which the direction of projection makes a
45-degree angle with the view plane.

33. What is cabinet projection?


The cabinet projection is one type of oblique projection, in which the direction of projection makes an
angle of arctan(2) ≈ 63.4° with the view plane.

34. What is vanishing point?


The perspective projections of any set of parallel lines that are not parallel to the projection plane
converge to a point known as the vanishing point.

35. What do you mean by a principal vanishing point?


The vanishing point of any set of lines that are parallel to one of the three principal axes of an object is
referred to as a principal vanishing point or axis vanishing point.

36. What is view reference point?


The view reference point is the center of the viewing coordinate system. It is often chosen to be close
to or on the surface of some object in the scene.

PART B
1. Explain Beizer Curves [DEC 2015,17,MAY 2017,18]
Refer Unit 3 Notes Pg(36-37)

2. Explain Back face detection method and Depth buffer method[NOV/DEC 2015,MAY/JUNE 2014,
MAY/JUNE 2013]
Refer Unit 3 Notes Pg(41-44)
3.Explain area subdivision and A- Buffer method
Refer Unit 3 Notes Pg(43-47)
4.Briefly explain about the basic transformations performed on three dimensional objects.[DEC
2011,MAY 2012]
Refer Unit 3 Notes Pg(48-50)
5.Write short notes on parallel and perspective projections.[DEC 2012,15]
Refer Unit 3 Notes Pg(53-54)
6.Explain in detail about three dimensional display methods.[DEC 2012,15]

Refer Unit 3 Notes Pg(53-54)


UNIT – IV
MULTIMEDIA SYSTEMS DESIGN & MULTIMEDIA FILE HANDLING
PART A (2 MARKS)
1. What is multimedia?[MAY 2013]
Multimedia is an efficient combination of multimedia objects such as text, images, video and
audio. It is a general term used for documents, applications, presentations and any
information dissemination that uses the different multimedia objects.
2. Define workflow
Workflow is defined as the automation of work among users where the system is intelligent
enough to act based on the definition of document or work type. The workflow in document
imaging systems defines the sequence for scanning images, performing quality checks,
performing data entry based on the contents of the images, indexing them and storing them.
3. Write the four different categories of image processing.
Image Recognition, Image Enhancement, Image Synthesis, Image Reconstruction are the four
different categories of image processing.
4. Write the different image enhancement techniques.
1. Image calibration
2. Real Time alignment
3. Gray scale normalization
4. RGB hue intensity adjustment
5. Color separation
6. Frame averaging
5. Define dual buffered VGA mixing/scaling.
In the double-buffer scheme there are two buffers: one, called the decompression buffer, is used to store
the original images, and the other, called the display buffer, is used to store the resized images.
6. Compare and contrast FDDI and ATM.
Advantages and disadvantages of ATM and FDDI:
1. They reduce network congestion.
2. They use wiring hubs, so fault isolation is easy.
3. They still suffer from LAN capacity limits and the significant cost of upgrading the network bandwidth.
The difference between the two network standards is that ATM allows the user to operate at the speed
desired by the user itself, while FDDI allows the user to connect only at the network speed.
7. What is a hypertext and hypermedia.[DEC 2015]
Hypertext is an application of indexing text to provide a rapid search of specific text strings in
one or more documents. Hypertext is an integral component of hypermedia documents.
Hypermedia is an extension of hypertext in that, in addition to text, hypermedia documents also contain
virtually any kind of information, such as audio, animated video and graphics.
8. Define abstract images.
Abstract images are computer generated images based on some arithmetic calculations. The
two types of abstract images are discrete function and continuous function.
9. Write about the benefits of multimedia?
1. Significant reduction of time and space needed to file, store and retrieve documents
in electronic form
2. Increased productivity by eliminating lost or missing file conditions
3. Providing simultaneous document access to multiple users.
4. Reduction of time and money spent on photocopying
10. What are the factors that affect the speed of retrieval?
Retrieval speed is the direct result of the storage latency, size of the data relative to the display
resolution, transmission media and speed and decompression efficiency.
11. Write the four different types of DBMS that support multimedia systems.
1. Extending the RDBMS to support the various objects of multimedia as binary objects
2. Extending RDBMS beyond basic binary objects to the concepts of inheritance and classes
3. Converting to a full fledged object oriented database that supports standard SQL language
4. Object oriented database management systems with support to object oriented programming
languages like C++.
12. Mention the key issues of database organization?
Data Independence, Common distributed database architecture, Distributed database servers,
Multimedia object management.
14. What is the use of Digital Video Interface(DVI)?
A technology from Intel corp. for compressing and decompressing data, audio and full-motion
video.
15. What is High-Definition Television (HDTV)?
A new digital broadcast standard aimed at changing the shape and doubling the quality of
television pictures. HDTV will provide 1125 lines instead of 525 lines, have the widescreen
16-to-9 shape, and come with surround sound of CD quality in five channels.
16. What is the use of Audio-Video Interleave(AVI)?
Microsoft Corp.’s video standard for digital video offerings, with a minimum resolution of 160-by-120
at 15 frames per second in the Microsoft Windows environment.
17. Mention some of the evolving techniques of multimedia?[DEC 2004,MAY 2005]
Hypermedia documents
Hypertext
Hyper speech HDTV and UDTV 3D technology and holography
Digital signal processing
19.What does CCITT Group 4 incorporate?
Compression standard based on two-dimensional compression where every scanline is the
reference line for the next line and only deltas are stored.
20.Define chunk.
A block of information of a specific type as used in TIFF and RIFF standards.
21.Write about DCT coefficients.
Each 8x8 block (16x16 is also used) of source image samples is effectively a 64-point discrete
signal which is a function of the two spatial dimensions x and y. This signal is decomposed
into 64 orthogonal basis signals; each of these 64 signals contains one of the 64 unique
two-dimensional spatial frequencies which make up the input signal's spectrum. The output
amplitudes of the set of 64 orthogonal basis signals are called DCT coefficients. In other
words, the value of each DCT coefficient is uniquely defined by the particular 64-point
input signal and can be regarded as the relative amount of the 2D spatial frequency
contained in the 64-point input signal. The coefficient with zero frequency in both
dimensions is called the DC coefficient and the remaining ones are called AC coefficients.
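A naive reference sketch of the 8x8 forward DCT in Python (written for clarity, not speed, and not intended as production JPEG code):

import math

# Forward 8x8 DCT-II: F(u,v) = 1/4 C(u) C(v) sum_x sum_y f(x,y) cos((2x+1)u*pi/16) cos((2y+1)v*pi/16)
def dct_8x8(block):
    def c(k):
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0
    coeffs = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            coeffs[u][v] = 0.25 * c(u) * c(v) * s
    return coeffs

# For a flat (constant) block all the energy ends up in the DC coefficient at (0, 0).
flat = [[100.0] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)
print(round(coeffs[0][0], 2))   # DC coefficient: 800.0 for a constant value of 100
print(round(coeffs[0][1], 2))   # AC coefficients are ~0 for a flat block
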
22. Define Huffman coding
Huffman coding requires that one or more sets of Huffman code tables be specified by the
application for coding as well as decoding to decompress data. The Huffman tables may be
predefined and used within an application as defaults, or computed specially for a given
image.
23. What is the purpose of JPEG (Joint Photographic Experts Group)?
A lossy compression scheme based on an ISO standard which specifies the encoding of still
image information using discrete cosine transform and quantization techniques.
24.Define MIDI (Musical Instrument Digital Interface).[MAY 2004]
A protocol for the interchange of musical information among musical instruments,
synthesizers, and sound boards.
25.What is MIDI synthesizer?
Allows an external MIDI device such as musical keyboard to connect to the sound board,
compose music, and store it on a PC. Multiple voices (musical instruments) can be
sequenced by a MIDI synthesizer.
26.Define motion compensation.
A predictive technique whereby sequential frames are compared for differences and future
frames are predicted based on the direction of changes.
27.What is Motion JPEG?
A proprietary extension of the JPEG standard that adds motion compensation techniques for
compression of moving images. Motion JPEG is simpler than MPEG.
What are moving images?
A sequence of digitally encoded images generated by computer animation or by digitizing the
output of a video camera.
28.What is the purpose of MPEG (Motion Picture Experts Group)
An ISO standard which specifies the encoding of video information, associated audio
information, and the interleaving of these two data streams, using discrete cosine transform
and quantization techniques. The standard specifies transmission rates up to 1.5 Mbits/sec
as the nominal rate for CDs.
29.Define MPEG 2.
Enhanced MPEG standard to address the needs of broadcast video encoding with extensibility
to HDTV picture sizes and data rates.
30.Define quantization.
Quantization is a process that attempts to determine what information can be safely discarded
without a significant loss in visual fidelity. It operates on the DCT coefficients and provides a
many-to-one mapping. The quantization process is fundamentally lossy due to its many-to-one
mapping.
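A minimal sketch of this step (the coefficient values and quantization table entries below are invented for illustration):

# JPEG-style quantization: divide each DCT coefficient by the matching table entry and round.
def quantize(coeffs, qtable):
    return [[round(coeffs[i][j] / qtable[i][j]) for j in range(len(coeffs[0]))]
            for i in range(len(coeffs))]

def dequantize(levels, qtable):
    # The inverse only multiplies back; the rounding error is never recovered (the lossy step).
    return [[levels[i][j] * qtable[i][j] for j in range(len(levels[0]))]
            for i in range(len(levels))]

coeffs = [[812.0, -33.0], [24.0, 5.0]]
qtable = [[16, 11], [12, 14]]
levels = quantize(coeffs, qtable)
print(levels)                       # e.g. [[51, -3], [2, 0]]
print(dequantize(levels, qtable))   # reconstruction differs from the original coefficients
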
31.What is zig-zag sequence?
Ordering of quantized DCT coefficients designed to facilitate entropy coding by placing low-
frequency coefficients before high-frequency coefficients.

32.What are the layers used in TWAIN architecture?


The TWAIN architecture consists of four layers:
• Application
• Protocol
• Acquisition
• Device
33.What are the characteristics of voice recognition systems?
An isolated-word speech recognition system requires that the speaker pause briefly
between words, whereas a continuous speech recognition system does not. Spontaneous, or
extemporaneously generated, speech contains disfluencies, and is much more difficult to
recognize than speech read from script.
34.What is Phoneme? Explain its uses.
A phoneme is a single "unit" of sound that has meaning in a given language. There are 44
phonemes in English (in the standard British model), each one representing a different
sound a person can make. Since there are only 26 letters in the alphabet, sometimes letter
combinations need to be used to make a phoneme.
35.Define Lossy compression.[DEC 2015]
The primary criterion is that removal of the real information should not perceptibly affect
the quality of the result. In the case of video, compression causes some information to be
lost; some information at a detail level is considered not essential for a reasonable
reproduction of the scene. This type of compression is called lossy compression.
36.Compare and Contrast lossy and lossless compression techniques[MAY 2005]
In lossless compression, data is not altered or lost in the process of compression or
decompression. Decompression generates an exact replica of the original object. Text
compression is a good example of lossless compression.
Lossy compression means that some loss occurs while compressing information objects.
Lossy compression is used for compressing audio, gray-scale or color images, and
video objects in which absolute data accuracy is not necessary.
The idea behind lossy compression is that, in the case of video, the human eye fills in the
missing information.
The following are some of the lossy compression mechanisms:

• Joint Photographic Experts Group (JPEG)

• Moving Picture Experts Group (MPEG)

• Intel DVI

• CCITT H.261 (P x 64) Video Coding Algorithm

• Fractals
Some of the commonly accepted lossless standards are given below:

• PackBits encoding (run-length encoding)

• CCITT Group 3 1D

• CCITT Group 3 2D

• CCITT Group 4

• Lempel-Ziv and Welch (LZW) algorithm
PART B (16 MARKS)

1. Explain the multimedia system architecture with a neat diagram[MAY-


2005,07,08,09,10,11,13,16,18,DEC-2003,08,10,15,16,18]
Refer Unit 4 Part 1(60-62)
2. Discuss the evolving technologies for the multimedia systems. [DEC-2009,2015,MAY
2010]
Refer Unit 4 Part 1(62-64)
3. Explain defining objects for multimedia systems[MAY 2016,DEC 2016,18]
Refer Unit 4 Part 1(64-65)
4. Explain multimedia database[DEC-2004,07,08,10,15,16,MAY-2007,09,10,13,16]
Refer Unit 4 Part 1(67-70)
5. Explain the characteristics of MDBMS.
Refer Unit 4 Part 1(67-70)
6. Explain the different file formats used in multimedia.[MAY 2005,08,10,18-DEC-
2004,10,15,16,18]

Refer Unit 4 Part 2 Pg(66-67)


7. Define MIDI. List its attribute. Compare and contrast the use of MIDI and digitized
audio in multimedia production.[DEC 2018]
Refer Unit 4 Part 2 Pg(66-67)

8. Explain JPEG, MPEG file format in detail [MAY 2004,16,DEC-2009,18]

Refer Unit 4 Part 2 Pg (81-84)


9. What are the types of compression available in multimedia? Explain any two types of
compression technology.[DEC 2003,04,May 2007,09]
Refer Unit 4 Part 2 Pg (87-90)
10. Explain the architecture of TWAIN. Also give its specification objectives [May
2007,2010]
Refer Unit 4 Part 2 Pg(100-102)
11.(i) Discuss the CCITT group of compression standards in detail.[OCT 15]
(ii) Explain the TIFF file format.[MAY 2008,18,DEC 2016,18]
Refer Unit 4 Part 2 Pg(91-93)
12.List the types of fixed and removable storage devices available for multimedia, and
discuss the strength and weakness of each one.
Refer Unit 4 Part 2 Pg (116-119)
UNIT-V
HYPERMEDIA
PART A (2 marks)
1. Define an Authoring system?
An authoring system is a software program that allows people to create multimedia
applications.
2. What are the design issues for multimedia authoring?[May 2013]
• Display resolution
• Data formats for captured data
• Compression algorithms
• Network interfaces
• Storage formats
3. What are the types of Multimedia authoring Systems?[May 2005]
• Dedicated authoring systems
• Timeline-based authoring
• Structured multimedia authoring
• Telephone authoring systems
4. What is the purpose of zooming?
Zooming allows the user to see more detail for a specific area of the image.
5. What is panning?
Panning implies that the image window is unable to display the full image at the selected
resolution for display. In that case the image can be panned left to right or right to
left as well as top to bottom or bottom to top. Panning is useful for finding
detail that is not visible in the full image.
6. What are the steps needed for Hypermedia report generation?
• Planning
• Creating each component
• Integrating components
7. What are the components of a distributed Multimedia system?[DEC 2008]
Application s/w, Container object store, Image and still video store, Audio and video
component store, Object directory service agent, Component service agent, User
interface service agent, networks.
8. What are the characteristics of Document store?
Primary document storage, Linked object storage, Linked object management.
9. What are key issues in data organization for multimedia systems?
• Data independence
• Common distributed database architecture
• Multiple data services
10. What are the key elements in object server architecture of multimedia
applications?
Object name server, Object directory manager, Object server, Object manager,
Network manager, Object data store.
11. What are the types of database replication?
• Round-robin replication
• Manual replication
• Scheduled replication
• Immediate replication
• Replication-on-demand
• Predictive replication
• Replicating references
• No replication
12. What are the primary n/w topologies used for multimedia?
• Traditional LANs
• Extended LANs
• High-speed LANs and WANs
13. What is the purpose of MIME?
Multipurpose Internet Mail Extension specification defines mechanisms for generalizing
the message content to include multiple body parts and multiple data types.
14. What are the characteristics of image and still video stores?
• Compressed information
• Multi-image documents
• Related annotations
• Large volumes
• Migration between high-volume media, such as an optical disk library, and
high-speed media, such as magnetic cache storage
• Shared access
15. What are the services provided by a directory service agent?
Directory service, Object assignment, Object status management, Directory service
domains, Directory service server elements, n/w access.
16. What are the services provided by User Interface Agent?
Services on workstations, Using display s/w.
17. Give the primary goal of MAPI.
MAPI (Messaging API) aims to separate client applications from the underlying messaging services,
make basic mail-enabling a standard feature of all applications, and support messaging-reliant
workgroup applications.
18.What is meant by hypermedia messaging?[DEC 2009,MAY 2010,16]
Messaging is one of the major multimedia applications. In hypermedia messaging,
different multimedia components are added to messages. When a multimedia
document is part of a messaging system, it is called a hypermedia message.
19.What are the standard types of multimedia object servers?[MAY 16]

Standard types of multimedia object servers are:


1. Data processing servers – RDBMS and ODBMS
2. Document database servers
3. Document imaging and still video servers
4. Audio and voice mail servers
5. Full-motion video servers

PART B (16 MARKS)

1. Explain different design issues for multimedia authoring.[DEC 2016]


Refer Unit 5 Pg(138-141)
2. Explain different types of multimedia authoring systems.[MAY
2004,07,11,16,18,DEC-2008,09,10,15]
Refer Unit 5 Pg (138-141)
3. Explain the key design issues in user interface design. [OCT 2015]
Refer Unit 5 Pg(143-146)
4. Explain linking and embedding in hypermedia design.[MAY 2008]
Refer Unit 5 Pg (150-151)
5. Explain any four distributed multimedia components.[DEC 2015]
Refer Unit 5 Pg (166-169)
6. Explain different multimedia servers with suitable examples.[MAY 2018]
Refer Unit 5 Pg(159-160)
7. Explain Mobile messaging and hypermedia messaging [DEC 2003,16,May-
2007,2008,09,10,16]
Refer Unit 5 Pg (148-150)
