
Computer graphics unitwise questions

UNIT I - 2 MARKS
1. Distinguish between bitmap and pixmap. (APRIL 2014)

On a Black-and-White system with one bit per pixel, the frame buffer is
commonly called a Bitmap. For systems with multiple bits per pixel, the frame buffer
is often referred to as a Pixmap.

2. What are Random-scan displays? (APRIL 2014)

Random scan is a method in which the picture is produced by an electron beam that is directed only to the points or parts of the screen where the picture is to be drawn.

3. Write a note on: Digitizers (APRIL 2014)


A common device for drawing, painting, or interactively selecting coordinate
positions on an object is a Digitizer. These devices can be used to input coordinate
values in either a two-dimensional or a three-dimensional space.

4. What is GKS? (APRIL 2014) (NOV 2016)

The Graphical Kernel System (GKS) is a standard produced by the International Standards Organization (ISO) that defines a common interface to interactive computer graphics for application programs.

5. Define: Scientific visualization. (NOV 2014)

Producing graphical representations for scientific, engineering and medical data sets and processes is generally referred to as Scientific Visualization.

6. What is meant by resolution? (NOV 2014) (NOV 2017)

The maximum number of points that can be displayed without overlap on a CRT is referred to as the Resolution.

7. What is pixmap? (NOV 2014)

On systems with multiple bits per pixel, the frame buffer is often referred to as a Pixmap.

8. Write down the application fields of computer graphics. (APRIL 2015)

Computer-Aided Design, Presentation Graphics, Computer Art, Entertainment, Education and Training, Visualization, Image Processing, and Graphical User Interfaces.
9. Write any two Input devices. (APRIL 2015)

Keyboards:
The keyboard is an efficient device for inputting nongraphic data such as picture labels, i.e., for entering text strings. Cursor-control keys, a numeric pad and function keys are common features on general-purpose keyboards.
Mouse:
A mouse is a small hand-held box used to position the screen cursor. Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of movement.

10. Write the uses of CAD methods. (NOV 2015)


A major use of Computer Graphics is in Design Processes, particularly for
engineering and architectural systems. CAD (Computer-Aided Design) methods are
used in the design of buildings, automobiles, aircraft, watercraft, spacecraft,
computers, textiles and many other products.

11. Write a note on: Display controller in Raster – Scan systems. (NOV 2015)
In Raster-Scan systems, the electron beam is swept across the screen, one row at
a time from top to bottom. As the electron beam moves across each row, the beam
intensity is turned on and off to create a pattern of illuminated spots. Picture definition
is stored in a memory area called the Refresh Buffer or Frame Buffer.

12. What are ink-jet methods? (NOV 2015)

Features of ink-jet printers:
1. They can print 2 to 4 pages per minute.
2. The resolution is about 360 d.p.i., therefore better print quality is achieved.
3. The operating cost is very low; the only part that requires replacement is the ink cartridge.
4. Four colours are available: cyan, magenta, yellow and black.

13. Define: Resolution. (APRIL 2016)


The maximum number of points that can be displayed without overlap on a CRT
is referred to as the Resolution.

14. Write a note on: Joysticks. (APRIL 2016)


A Joystick consists of a small, vertical lever (called the stick) mounted on a
base that is used to steer the screen cursor around. Most joysticks select screen
positions with actual stick movement; others respond to pressure on the stick. Some
are mounted on a keyboard; others function as stand-alone units. In another type of
movable joystick, 8 switches are arranged in a circle, so that the stick can select any
one of eight directions for cursor movement. Pressure-sensitive joysticks, also called
Isometric Joysticks, have a non-movable stick. Pressure on the stick is measured with
strain gauges and converted to movement of the cursor in the direction specified.

15. What is GUI? (NOV 2016)


Nowadays most software packages provide a graphical user interface (GUI). A major
component of a graphical interface is a window manager that allows a user to display
multiple-window areas. Each window can contain a different process that can contain
graphical or non-graphical displays. Interfaces also display menus and icons for fast
selection of processing operations and parameter values.

16. Write a note on Aspect ratio. (NOV 2016)


One property of a video monitor is the aspect ratio, which gives the ratio of vertical
points to horizontal points necessary to produce equal-length lines in both directions on
the screen.

17. What is Image Processing? (APRIL 2017)


Image Processing is a technique used to modify or interpret existing pictures, such
as photographs and TV scans. Two principal applications of Image Processing are:
i) improving picture quality
ii) machine perception of visual information, as used in robotics.
To apply image-processing methods, we first digitize a photograph or other picture
into an image file. Then digital methods can be applied to rearrange picture parts, to
enhance colour separations, or to improve the quality of shading.

18. Write a note on: Scan Conversion (APRIL 2017)


A major task of the display processor is digitizing a picture definition given in an
application program into a set of pixel-intensity values for storage in the frame buffer.
This digitization process is called scan conversion.
19. What are Nonimpact Printers? (APRIL 2017)

Nonimpact printers and plotters use laser techniques, ink-jet sprays, xerographic processes (photocopying), electrostatic methods, and electrothermal methods to get images onto paper.

20. Define the term Computer Graphics (NOV 2017)


The term computer graphics covers almost everything on a computer that is not text or sound. It is the art of drawing pictures, lines, charts, etc. using computers with the help of programming. In other words, graphics is the representation and manipulation of image data by a computer with the help of specialized software and hardware. Graphic design is done using the various software packages available for computers, which can produce 3D images in the required shape and dimension. Computer graphics helps us obtain realistic display experiences.
21. How to do Refreshing of the screen? (NOV 2017)
Refresh rate of a random-scan system depends on the number of lines to be
displayed. Picture definition is now stored as a set of line-drawing commands in an
area of memory referred to as the Refresh Display File. It is also called the Display List, Display Program, or Refresh Buffer.

22. What do you mean by addressability? (NOV 2017)


In computer graphics, addressability is the capability of a display surface or storage device to accommodate a specified number of uniquely identifiable points. In micrographics, it is the capability of a specified film frame to contain a specific number of uniquely identifiable points.

UNIT I - 5 MARKS
1. Discuss briefly use of computer graphics in image processing. (APRIL 2014)
2. Write short notes on: Direct –view storage tubes. (APRIL 2014)
3. What are raster – scan systems? Explain. (NOV 2014)
4. Write short notes on: Graphics software (NOV 2014) (APRIL 2016)
5. Write short notes on Hard copy devices. (APRIL 2015)
6. Discuss the essential characteristics of Graphics software. (APRIL 2015)
7. Write short notes on: Graphical user interfaces. (NOV 2015)
8. Discuss briefly on: Flat – panel displays. (NOV 2015)
9. Discuss about use of Image processing in computer graphics. (APRIL 2016)
10. Discuss about use of Computer Graphics in CAD. (NOV 2016)
11. Write short notes on: Random-Scan systems. (NOV 2016)
12. Discuss about the use of Computer Graphics in Presentation Graphics and Computer
Art. (APRIL 2017)
13. Write short notes on: Refresh Cathode-Ray Tubes (APRIL 2017)
14. Compare and construct the CRT and LCD. (NOV 2017)
15. Illustrate the organization of a simple random scan system (NOV 2017)

UNIT I - 10 MARKS
1. Explain the working of Refresh Cathode-Ray Tubes with a diagram. (APRIL 2014)
2. Discuss in detail, any four input devices used in computer graphics. (NOV 2014)
3. Explain the working principle of CRT. (APRIL 2015)
4. Discuss the following: (a) Random – Scan systems. (b) Graphics software. (NOV
2015)
5. Describe about direct-view storage tubes and flat-panel displays. (APRIL 2016)
6. Discuss about Hard-Copy Devices (NOV 2016)
7. Describe about any Four Graphical Input devices (APRIL 2017)
8. Describe the architecture of simple Raster graphics system (NOV 2017)
UNIT 2 - 2 MARKS
1. Write a note on: Setpixel () function. (APRIL 2014)

SetPixel() is a function that simply sets a pixel to a user-defined color. For Windows users, pixel operations can easily be done with the C/C++ programming languages.
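A minimal Win32 sketch of this idea (the coordinates and colour here are arbitrary, and the program assumes a Windows build linked against gdi32):

#include <windows.h>

int main(void)
{
    /* Obtain a device context for the whole screen (NULL window). */
    HDC hdc = GetDC(NULL);

    /* Set the pixel at (100, 100) to red. */
    SetPixel(hdc, 100, 100, RGB(255, 0, 0));

    ReleaseDC(NULL, hdc);
    return 0;
}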

2. What are color tables? (APRIL 2014)


A colour lookup table (or video lookup table) provides a scheme for storing colour values, where frame-buffer values are used as indices into the colour table.
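As a rough illustration (the type and table names below are made up for this sketch, and 8 bits per pixel is assumed), the lookup works like an array access:

/* Sketch of a colour lookup table: the value stored in the frame buffer for a
   pixel indexes the table, and the RGB entry found there is the colour that
   is actually displayed. */
typedef struct { unsigned char r, g, b; } RGB;

static RGB colourTable[256];          /* one entry per possible pixel code */

RGB displayedColour(unsigned char frameBufferValue)
{
    return colourTable[frameBufferValue];
}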

3. Write a note on: Marker Attributes. (APRIL 2014)


A marker symbol is a single character that can be displayed in different colors and
in different sizes. We select a particular character to be the marker symbol with
setMarkerType (mt)
where marker type parameter mt is set to an integer code.
Typical codes for marker type are the integers 1 through 5, specifying, respectively, a
dot (.), a vertical cross (+), an asterisk (*), a circle (o), and a diagonal cross (X).
Displayed marker types are centered on the marker coordinates.
We set the marker size with
setMarkerSizeScaleFactor (ms)
with parameter marker size ms assigned a positive number.

4. List the properties of Ellipses. (NOV 2014)


An Ellipse is defined as the set of points such that the sum of the distances from
two fixed positions (foci) is the same for all points.
If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1
and d2, then the general equation of an ellipse can be stated as,
d1 + d2 = constant
Expressing distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2), we have
sqrt((x - x1)^2 + (y - y1)^2) + sqrt((x - x2)^2 + (y - y2)^2) = constant
We can rewrite the general ellipse equation in the form
Ax^2 + By^2 + Cxy + Dx + Ey + F = 0
where the coefficients A, B, C, D, E, and F are evaluated in terms of the focal
coordinates and the dimensions of the major and minor axes of the ellipse.
5. What are line attributes? (NOV 2014) (NOV 2016)

Possible selections for the line-type attribute include solid lines, dashed lines,
and dotted lines.
setLinetype (lt)
where the line-type parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are, respectively, solid, dashed, dotted, or dash-dotted.
We set the line-width attribute with the command:
setLinewidthScaleFactor (lw)
Line-width parameter lw is assigned a positive number to indicate the relative width
of the line to be displayed. A value of 1 specifies a standard-width line.

6. Write a note on: Inquiry functions. (NOV 2014) (APRIL 2015)

Inquiry functions are used to retrieve the current settings of attributes and other parameters, such as workstation types and status, from the system lists. By using inquiry functions, the current value of any specified parameter can be saved and later used to check the state of the system if an error is encountered. For example,
inquireTextColor (lasttc)
returns the current text-colour setting in the parameter lasttc.

7. What are the basic line attributes? (APRIL 2015)

Possible selections for the line-type attribute include solid lines, dashed lines,
and dotted lines.
setLinetype (lt)
where the line-type parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are, respectively, solid, dashed, dotted, or dash-dotted.
We set the line-width attribute with the command:
setLinewidthScaleFactor (lw)
Line-width parameter lw is assigned a positive number to indicate the relative width
of the line to be displayed. A value of 1 specifies a standard-width line.

8. Write the properties of circles. (NOV 2015)


A Circle is defined as the set of points that are all at a given distance r from a center position (xc, yc). This distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as
(x - xc)^2 + (y - yc)^2 = r^2
9. Write the intensity codes for a four – level grayscale system. (NOV 2015)

Intensity code    Stored intensity in frame buffer    Binary code    Displayed gray scale
0.0               0                                   00             Black
0.33              1                                   01             Dark gray
0.67              2                                   10             Light gray
1.0               3                                   11             White

10. Define: Attributes. (APRIL 2016)


In general, any parameter that affects the way a primitive is to be displayed is
referred to as an attribute parameter. Some attribute parameters, such as color and
size, determine the fundamental characteristics of a primitive.

11. What is DDA? (APRIL 2016)


The Digital Differential Analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx.
Consider first a line with positive slope less than or equal to 1. We sample at unit x intervals (Δx = 1) and compute each successive y value as
yk+1 = yk + m
The subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached.
Since m can be any real number between 0 and 1, the calculated y values must be rounded to the nearest integer.
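A short C sketch of this sampling loop, assuming a setPixel() routine is supplied by the graphics system and that the slope lies between 0 and 1 with x1 < x2:

#include <math.h>

extern void setPixel(int x, int y);   /* assumed to be provided elsewhere */

void ddaLine(int x1, int y1, int x2, int y2)
{
    float m = (float)(y2 - y1) / (float)(x2 - x1);   /* slope, assumed in [0, 1] */
    float y = (float)y1;

    for (int x = x1; x <= x2; x++) {
        setPixel(x, (int)floorf(y + 0.5f));          /* round y to the nearest scan line */
        y += m;                                      /* y(k+1) = y(k) + m */
    }
}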

12. Write a note on: Grayscale. (APRIL 2016)


With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed primitives.
Numeric values over the range from 0 to 1 can be used to specify grayscale levels, which
are then converted to appropriate binary codes for storage in the raster. This allows the
intensity settings to be easily adapted to systems with differing grayscale capabilities.
If additional bits per pixel are available in the frame buffer, a value such as 0.33 is mapped to the nearest available level. With 3 bits per pixel, we can accommodate 8 gray levels, while 8 bits per pixel would give us 256 shades of gray.
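A small C sketch of this mapping (the function name is illustrative); with 2 bits per pixel, a level of 0.33 maps to code 1, matching the four-level table in question 9:

int grayCode(float level, int bitsPerPixel)
{
    int maxCode = (1 << bitsPerPixel) - 1;      /* 3 for 2 bits, 255 for 8 bits */
    int code = (int)(level * maxCode + 0.5f);   /* round to the nearest level */

    if (code < 0) code = 0;                     /* clamp to the valid range */
    if (code > maxCode) code = maxCode;
    return code;
}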

13. Define Ellipse. (NOV 2016)


An Ellipse is defined as the set of points such that the sum of the distances from
two fixed positions (foci) is the same for all points.
If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1
and d2, then the general equation of an ellipse can be stated as,
d1 + d2 = constant
Expressing distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2), we have
sqrt((x - x1)^2 + (y - y1)^2) + sqrt((x - x2)^2 + (y - y2)^2) = constant
We can rewrite the general ellipse equation in the form
Ax^2 + By^2 + Cxy + Dx + Ey + F = 0
where the coefficients A, B, C, D, E, and F are evaluated in terms of the focal
coordinates and the dimensions of the major and minor axes of the ellipse.

14. Write a note on text attributes. (NOV 2016)


There are a great many text options that can be made available to graphics
programmers. First of all, there is the choice of font (or typeface), which is a set of
characters with a particular design style such as New York, Courier, Helvetica, London,
Times Roman, and various special symbol groups.
The characters in a selected font can also be displayed with assorted underlining styles (solid, dotted, double), in boldface, in italics, and in outline or shadow styles.
A particular font and associated style is selected in a PHIGS program by setting an integer code for the text font parameter tf in the function
setTextFont (tf)
Color settings for displayed text are stored in the system attribute list and used by the procedures that load character definitions into the frame buffer. When a character
string is to be displayed, the current color is used to set pixel values in the frame
buffer corresponding to the character shapes and positions. Control of text color (or
intensity) is managed from an application program with
setTextColourIndex (tc)
where text color parameter tc specifies an allowable color code.

15. How to implement line-width options? (APRIL 2017)


We set the line-width attribute with the command:
setLinewidthScaleFactor (lw)
Line-width parameter lw is assigned a positive number to indicate the relative width
of the line to be displayed. A value of 1 specifies a standard-width line.

16. Write the equation of a Straight line. (APRIL 2017)


The Cartesian slope-intercept equation for a straight line is
y = m x + b                         (1)
with m representing the slope of the line and b the y intercept. Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), we can determine the slope m and y intercept b with the following calculations:
m = (y2 - y1) / (x2 - x1)           (2)
b = y1 - m x1                       (3)
Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in equations (2) and (3).
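A small C sketch of these two calculations (the function name is illustrative; a non-vertical line, x2 != x1, is assumed):

void lineCoefficients(float x1, float y1, float x2, float y2,
                      float *m, float *b)
{
    *m = (y2 - y1) / (x2 - x1);   /* equation (2) */
    *b = y1 - (*m) * x1;          /* equation (3) */
}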

17. What is meant by Odd-even rule? (APRIL 2017)


The odd-even rule, also called the odd-parity rule or the even-odd rule, determines whether a position P is inside a polygon by conceptually drawing a line from P to a distant point outside the coordinate extents of the object and counting the number of edge crossings along the line. If the number of polygon edges crossed by this line is odd, then P is an interior point; otherwise, P is an exterior point.
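One common way to code the odd-even test is to cast a horizontal ray from P and toggle an inside flag at each edge crossing; the sketch below assumes the polygon is given as n vertices (xs[i], ys[i]):

int insideOddEven(const float xs[], const float ys[], int n, float px, float py)
{
    int inside = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j, i) straddle the horizontal line y = py? */
        if ((ys[i] > py) != (ys[j] > py)) {
            /* x coordinate where the edge crosses that line. */
            float xCross = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
            if (px < xCross)
                inside = !inside;   /* toggle on each crossing to the right of P */
        }
    }
    return inside;
}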

18. What is Frame buffer? (NOV 2017)


Picture definition is stored in a memory area called the Refresh Buffer or
Frame Buffer. This memory area holds the set of intensity values for all the screen
points. Stored intensity values are then retrieved from the refresh buffer and “painted”
on the screen one row (scan line) at a time. Each screen point is referred to as a Pixel
or Pel (Picture Element).

UNIT II - 5 MARKS
1. Write the steps in Midpoint Circle Algorithm. (APRIL 2014)
2. Discuss briefly on: Line Attributes. (APRIL 2014)
3. Write the steps in Bresenham’s line drawing algorithm. (NOV 2014)
4. Write about character attributes (NOV 2014)
5. Describe the mid-point circle drawing algorithm. (APRIL 2015)
6. What is composite transformation? Explain. (APRIL 2015)
7. Write short notes on: Flood – fill algorithm. (NOV 2015) (APRIL 2017)
8. What are inquiry functions? Explain. (NOV 2015)
9. Explain briefly on: line attributes. (APRIL 2016)
10. Explain briefly on: Boundary-Fill Algorithm (NOV 2016)
11. Explain the DDA line drawing algorithm in detail (NOV 2017)

UNIT II - 10 MARKS

1. Describe in detail, Boundary-Fill Algorithm. (APRIL 2014)


2. Explain Ellipse-Generating (Midpoint) Algorithm. (NOV 2014) (APRIL 2017)
3. Describe about Bresenham’s line drawing algorithm (APRIL 2015) (NOV 2017)
4. Write short notes on: (a) Line attributes. (b) Character attributes. (NOV 2015)
5. Explain midpoint circle generation algorithm. (APRIL 2016)
6. Explain DDA Line Drawing Algorithm (NOV 2016)
UNIT III - 2 MARKS
1. What are Rubber-Band methods? (APRIL 2014)

A rubber-band line: the user specifies two endpoints. As the cursor moves from the first endpoint toward the second, the program displays a line from the first endpoint to the current cursor position. The effect is that of an elastic line stretched between the first endpoint and the cursor.

2. Define: 2D shear. (APRIL 2014)

A transformation that distorts (deform or alter) the shape of an object such that
the transformed shape appears as if the object were composed of internal layers that
had been caused to slide over each other is called a shear.
Two common shearing transformations are those that shift coordinate x values
and those that shift y values.

3. What is 2D translation? (NOV 2014)

A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances, tx and ty, to the original coordinate position (x, y) to move the point to a new position (x', y').
x' = x + tx,   y' = y + ty
The translation distance pair (tx, ty) is called a translation vector or shift vector.
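A tiny C sketch of the translation equations (the struct and function names are illustrative):

typedef struct { float x, y; } Point2D;

Point2D translate2D(Point2D p, float tx, float ty)
{
    Point2D q = { p.x + tx, p.y + ty };   /* x' = x + tx, y' = y + ty */
    return q;
}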

4. Define: Clipping. (NOV 2014) (NOV 2017)


Generally, any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called a clip window.
Applications of clipping include extracting part of a defined scene for viewing; identifying visible surfaces in three-dimensional views; antialiasing line segments or object boundaries; creating objects using solid-modeling procedures; displaying a multiwindow environment; and drawing and painting operations that allow parts of a picture to be selected for copying, moving, erasing, or duplicating.

5. What is transformation? (APRIL 2015)

Changes in orientation, size, and shape are accomplished with geometric transformations that alter the coordinate descriptions of objects. The basic geometric transformations are translation, rotation, and scaling. Other transformations that are often applied to objects include reflection and shear.

6. Define viewing. (APRIL 2015)


A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport.
• The window defines what is to be viewed.
• The viewport defines where it is to be displayed.
Often, windows and viewports are rectangles in standard position, with the rectangle edges parallel to the coordinate axes. In general, the mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation.

7. Define Grid. (APRIL 2015)


One kind of constraint is a grid of rectangular lines displayed in some part of the screen area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid lines.

8. What are Logical Input devices? (APRIL 2015)

An abstraction of one or more physical devices that delivers logical input values to an application. Graphics standards divide the primitive input devices into the logical classes locator, stroke, valuator, choice, pick, and string.

9. What is 2D reflection? (NOV 2015)


A reflection is a transformation that produces a mirror image of an object. The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.

10. Write a note on: Dragging. (NOV 2015)

In computer graphical user interfaces, drag and drop is a pointing-device gesture in which the user selects a virtual object by "grabbing" it and dragging it to a different location or onto another virtual object.

11. What do you mean by scaling? (APRIL 2016) (NOV 2017)


A scaling transformation alters the size of an object. This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'):
x' = x * sx,   y' = y * sy
The scaling factor sx scales objects in the x direction, while sy scales in the y direction.
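A tiny C sketch of these scaling equations, applied relative to the origin (the struct and function names are illustrative):

typedef struct { float x, y; } Point2D;

Point2D scale2D(Point2D p, float sx, float sy)
{
    Point2D q = { p.x * sx, p.y * sy };   /* x' = x * sx, y' = y * sy */
    return q;
}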

12. What is meant by clipping? (APRIL 2016)


Generally, any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called a clip window.
Applications of clipping include extracting part of a defined scene for viewing; identifying visible surfaces in three-dimensional views; antialiasing line segments or object boundaries; creating objects using solid-modeling procedures; displaying a multiwindow environment; and drawing and painting operations that allow parts of a picture to be selected for copying, moving, erasing, or duplicating.

13. What do you mean by Translation? (NOV 2016)

A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances, tx and ty, to the original coordinate position (x, y) to move the point to a new position (x', y').
x' = x + tx,   y' = y + ty
The translation distance pair (tx, ty) is called a translation vector or shift vector.

14. What is meant by Viewport? (NOV 2016) (NOV 2014)


An area on a display device to which a window is mapped is called a viewport.
The viewport defines where it is to be displayed.

15. What do you mean by 2D Shear? (APRIL 2017)

A transformation that distorts (deform or alter) the shape of an object such that
the transformed shape appears as if the object were composed of internal layers that
had been caused to slide over each other is called a shear.
Two common shearing transformations are those that shift coordinate x values and
those that shift y values.

16. What is meant by Window? (APRIL 2017)


A world-coordinate area selected for display is called a window. The window
defines what is to be viewed.

17. Write a note on point Clipping. (APRIL 2017)

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:
xwmin <= x <= xwmax
ywmin <= y <= ywmax
where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or viewport boundaries. If any one of these four inequalities is not satisfied, the point is clipped (not saved for display).
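A small C sketch of this test (the function name is illustrative; it returns 1 if the point is saved for display and 0 if it is clipped):

int clipPoint(float x, float y,
              float xwmin, float xwmax, float ywmin, float ywmax)
{
    return (xwmin <= x && x <= xwmax &&
            ywmin <= y && y <= ywmax);
}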

18. Define the term Transformation (NOV 2017)


Changes in orientation, size, and shape are accomplished with geometric
transformations that alter the coordinate descriptions of objects. The basic geometric
transformations are translation, rotation, and
scaling. Other transformations that are often applied to objects include reflection and
shear.
UNIT III - 5 MARKS

1. Discuss any TWO Interactive picture construction techniques. (APRIL 2014)


2. Discuss briefly about 2D composite transformations. (NOV 2014) (APRIL 2017)
3. Write short notes on Modeling concepts. (APRIL 2015)
4. Discuss about 2D basic transformations. (NOV 2015) (APRIL 2016)
5. Write short notes on: any four interactive picture construction techniques. (APRIL
2016)
6. Discuss about 2D Reflection and 2D Shear. (NOV 2016)
7. Write short notes on: Logical Classification of Input Devices (NOV 2016)
8. Explain about Interactive Picture Construction Techniques (APRIL 2017)
9. Describe the procedure for window to view port coordinate transformations (NOV
2017)
10. Write down the Sutherland-Hodgman polygon clipping algorithm (NOV 2017)

UNIT III - 10 MARKS

1. Explain about Matrix Representations and Homogeneous coordinates. (APRIL 2014)


2. Describe Sutherland – Hodgeman polygon clipping algorithm. (NOV 2014)
3. What are the additional transformation available in some packages? Explain (APRIL
2015)
4. Explain the working of Cohen – Sutherland line clipping algorithm. (NOV 2015)
(APRIL 2017) (NOV 2017)
5. Explain about logical classification of input devices. (APRIL 2016)
6. Explain about Window to View-port Transformation (NOV 2016)
UNIT IV - 2 MARKS
1. Define: Parallel Projection. (APRIL 2014) (APRIL 2016) (NOV 2017)

In parallel projections, lines that are parallel in three-dimensional space remain parallel in the two-dimensional projected image. A perspective projection of an object is often considered more realistic than a parallel projection, since it more closely resembles human vision and photography.

2. Write a note on: 3D scaling. (APRIL 2014)

Scaling in 3D is a straightforward extension of scaling in 2D. As in the 2D case, if Sx = Sy = Sz the object shape is maintained; otherwise it is distorted.
Scaling: x' = x * Sx
         y' = y * Sy
         z' = z * Sz
In matrix form (row-vector convention):
                            | Sx  0   0   0 |
(x' y' z' 1) = (x y z 1) *  | 0   Sy  0   0 |
                            | 0   0   Sz  0 |
                            | 0   0   0   1 |

As in 2D, if the object is not centered at the origin (0, 0, 0) the scaling transformation
causes both size change and movement of the object. Scaling about a fixed point P0
(x0,y0,z0) can be accomplished by the following:
1. translating P0 to the origin
2. scaling the object
3. translating P0 back to original position.
so the composite matrix is
T(-x0, -y0, -z0) * S(Sx, Sy, Sz) * T(x0, y0, z0)
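A small C sketch of scaling a point about a fixed point P0 by applying the three steps directly (the struct and function names are illustrative):

typedef struct { float x, y, z; } Point3D;

Point3D scaleAboutFixedPoint(Point3D p, Point3D p0, float sx, float sy, float sz)
{
    Point3D q;
    q.x = (p.x - p0.x) * sx + p0.x;   /* translate to origin, scale, translate back */
    q.y = (p.y - p0.y) * sy + p0.y;
    q.z = (p.z - p0.z) * sz + p0.z;
    return q;
}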

3. What is perspective projection? (NOV 2014) (NOV 2016)

In a perspective projection, the distance from the center of projection to the projection plane is finite, and the size of an object varies inversely with its distance, which looks more realistic. The projection lines are not parallel; instead, they all converge at a single point called the center of projection or projection reference point.

4. What are Polygon Tables? (APRIL 2015) (NOV 2015) (APRIL 2016)

We specify objects as a set of vertices and associated attributes. This information can be stored in tables, of which there are two types: geometric tables and attribute tables. The geometry can be stored as three tables: a vertex table, an edge table, and a polygon table. Each entry in the vertex table is a list of coordinates defining that point. Each entry in the edge table consists of pointers to the endpoints of that edge. And the entries in the polygon table define a polygon by providing pointers to the edges that make up the polygon.
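A rough C sketch of how these three tables can be laid out (all names and array sizes here are illustrative):

typedef struct { float x, y, z; } Vertex;

typedef struct { int v1, v2; } Edge;      /* indices into the vertex table */

typedef struct {
    int edges[8];                          /* indices into the edge table */
    int edgeCount;
} Polygon;

typedef struct {
    Vertex  vertexTable[100];
    Edge    edgeTable[200];
    Polygon polygonTable[50];
    int     nVertices, nEdges, nPolygons;
} GeometricTables;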
5. Define Scaling with 3D objects. (APRIL 2015)

Scaling in 3D is a straightforward extension of scaling in 2D. As in the 2D case, if Sx = Sy = Sz the object shape is maintained; otherwise it is distorted.
Scaling: x' = x * Sx
         y' = y * Sy
         z' = z * Sz
In matrix form (row-vector convention):
                            | Sx  0   0   0 |
(x' y' z' 1) = (x y z 1) *  | 0   Sy  0   0 |
                            | 0   0   Sz  0 |
                            | 0   0   0   1 |

As in 2D, if the object is not centered at the origin (0, 0, 0) the scaling transformation
causes both size change and movement of the object. Scaling about a fixed point P0
(x0,y0,z0) can be accomplished by the following:

1. translating P0 to the origin
2. scaling the object
3. translating P0 back to its original position.
So the composite matrix is
T(-x0, -y0, -z0) * S(Sx, Sy, Sz) * T(x0, y0, z0)

6. Define Boundary representations. ( NOV 2015)


Boundary representation, often abbreviated as B-rep or BREP, is a method for representing shapes using their boundaries (limits). A solid is represented as a collection of connected surface elements, which form the boundary between solid and non-solid.

7. What is 3D translation? ( NOV 2015) (APRIL 2017)

We translate a 3D point by adding translation distances, tx, ty, and tz, to the
original coordinate position (x,y,z): x' = x + tx, y' = y + ty, z' = z + tz

8. What are Cavalier Projections? (NOV 2016)


There are two types of oblique projections: Cavalier and Cabinet. In the Cavalier projection, the projection lines make a 45° angle with the projection plane. As a result, the projection of a line perpendicular to the view plane has the same length as the line itself.

9. Write a note on : Matrix Representation of 3D scaling (NOV 2016)


                            | Sx  0   0   0 |
(x' y' z' 1) = (x y z 1) *  | 0   Sy  0   0 |
                            | 0   0   Sz  0 |
                            | 0   0   0   1 |
10. What is surface patch? (NOV 2017)
Surface patches are analogous to the multiple polynomial arcs used to build a
spline. They allow more complex surfaces to be represented by a series of relatively
simple equation sets rather than a single set of complex equations.

UNIT IV - 5 MARKS

1. Write short notes on: Plane Equations. (APRIL 2014)


2. What are Polygon tables? Explain. (NOV 2014)
3. Explain any two three – dimensional display methods. (NOV 2015)
4. Discuss about Polygon Tables (NOV 2016)
5. Discuss about Plane Equations and Polygon Meshes (APRIL 2017)
6. Explain about the perspective projection with example (NOV 2017)

UNIT IV - 10 MARKS

1. Write short notes on: (a) Depth cueing (b) Polygon Meshes. (APRIL 2014)
2. Explain about three – dimensional basic transformations. (NOV 2014)
3. Explain Rotation and Translation with respect to 3D objects. (APRIL 2015)
4. Describe about 3D composite transformations. (NOV 2015)
5. Describe visible line and surface identification. (APRIL 2016)
6. Describe about 3D Rotation and 3D Reflections (NOV 2016)
7. Describe about Parallel and Perspective Transformations (APRIL 2017)
8. Describe any two three-dimensional display methods (NOV 2017)
UNIT V - 2 MARKS

1. What are Wireframe-Visibility methods? (APRIL 2014)

A wire-frame model is a visual presentation of a three-dimensional (3D) or physical object used in 3D computer graphics. It is created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using straight lines or curves.

2. What is orthographic parallel projection? (NOV 2014)

The term orthographic is sometimes reserved specifically for depictions of objects where the principal axes or planes of the object are parallel with the projection plane.

3. What are object – space methods? (NOV 2014) (APRIL 2015)

There are two approaches for removing hidden-surface problems: the Object-space method and the Image-space method. The Object-space method is implemented in the physical coordinate system, and the Image-space method is implemented in the screen coordinate system.

4. Why to remove the hidden surface? (APRIL 2015)

When we view a picture containing non-transparent objects and surfaces, we cannot see the objects that lie behind the objects closer to the eye. These hidden surfaces must be removed to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

5. What is oblique parallel projection? ( NOV 2015)

Oblique projection is a type of parallel projection: it projects an image by intersecting parallel rays (projectors) from the three-dimensional source object with the drawing surface (projection plane).

6. What is A – buffer method? ( NOV 2015)

The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw). The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer.
7. Write a note on: viewing pipeline. (APRIL 2016)

The viewing transformation, which maps picture co-ordinates in the WCS to display co-ordinates in the PDCS, is performed by the following transformations:
• Converting world co-ordinates to viewing co-ordinates.
• Normalizing viewing co-ordinates.
• Converting normalized viewing co-ordinates to device co-ordinates.

The steps involved in the viewing transformation:

1. Construct the scene in world co-ordinates using the output primitives and attributes.
2. Obtain a particular orientation for the window by setting a two-dimensional
viewing co-ordinate system in the world co-ordinate plane and define a window in
the viewing co-ordinate system.
3. Use viewing co-ordinates reference frame to provide a method for setting up
arbitrary orientations for rectangular windows.
4. Once the viewing reference frame is established, transform descriptions in world
co-ordinates to viewing co-ordinates.
5. Define a view port in normalized co-ordinates and map the viewing co-ordinates
description of the scene to normalized co-ordinates.
6. Clip all the parts of the picture which lie outside the viewport.

8. What is meant by an object-space method? (APRIL 2016)


There are two approaches for removing hidden-surface problems: the Object-space method and the Image-space method. The Object-space method is implemented in the physical coordinate system.
9. What is meant by an Image-space method? (NOV 2016)

There are two approaches for removing hidden-surface problems: the Object-space method and the Image-space method. The Image-space method is implemented in the screen coordinate system.

10. Write the use of View-up Vector (APRIL 2017)


The view-up vector (VUP) is used when specifying the viewing-coordinate system with respect to the world-coordinate system: together with the view-plane normal n, it fixes the orientation of the viewing frame. Rotating the view-up vector about the normal vector n corresponds to twisting the virtual camera about its viewing direction.

11. What is meant by Vanishing point? (APRIL 2017)

A vanishing point is an abstract point on the image plane where 2D projections (or drawings) of a set of parallel lines in 3D space appear to converge.

12. Define Color Model. (NOV 2017)

There are several established color models used in computer graphics, but the
two most common are the RGB model (Red-Green-Blue) for computer display and
the CMYK model (Cyan-Magenta-Yellow-Black) for printing.

UNIT V - 5 MARKS

1. Discuss briefly on: Transformation from world to viewing coordinates. (APRIL 2014)
2. Write the steps of a depth – buffer algorithm. (APRIL 2015)
3. Explain depth cueing. (APRIL 2015)
4. How to view a 3D objects? (APRIL 2015)
5. Write short notes on: Viewing pipeline. (NOV 2015)
6. Discuss about depth cueing. (APRIL 2016)
7. Write the steps of a depth-buffer method. (APRIL 2016)
8. Write the steps of A-Buffer Method (NOV 2016)
9. Write short notes on: Back-Face Detection (APRIL 2017) (NOV 2017)

UNIT V - 10 MARKS

1. Explain about A-Buffer Method. (APRIL 2014)


2. Discuss in detail, wireframe methods. (NOV 2014)
3. Discuss about the different types of projections (APRIL 2015)
4. Discuss the following: (a) General parallel – projection transformations. (b) Back –
face detection. (NOV 2015)
5. Explain about Wireframe methods. (APRIL 2016)
6. Explain about Viewing Pipeline and Viewing Coordinates (NOV 2016)
7. Explain about Depth-Buffer and A-Buffer Methods (APRIL 2017)
8. Explain the steps for a Depth-Buffer method (NOV 2017)
