CG 1
Q-1.2: What is computer interactive graphics & Conceptual framework for interactive graphics. Ans:
Computer interactive graphics: Computer interactive graphics is a computer graphics system that allows the operator or user to interact with the graphical information presented on the display using one or more input devices, some of which deliver positions relevant to the information being displayed. Conceptual framework for interactive graphics: The conceptual framework for interactive graphics has the following elements: Graphics Library: The graphics library is an intermediary between the application program and the display hardware (graphics system). Application Program: The application program maps application objects to views (images) of those objects by calling on the graphics library; the application model may contain a lot of non-graphical data (e.g., non-geometric object properties). Graphics System: An interface between the graphics library and the hardware. User interaction results in modification of the model and/or the image.
Q-1.3: Explain RGB color model: RGB stands for Red, Green, and Blue. The RGB color model is a way to represent colors and is one of the most widely used color representation methods in computer graphics. It uses a color coordinate system with three primary colors: R (red), G (green), B (blue). The RGB primaries are additive primaries; that is, the individual contributions of each primary are added together to yield the result. On the basis of this principle the RGB color model combines red, green and blue to produce a color. We can represent this color model with the unit cube defined on the R, G and B axes, as shown in the figure below:
The vertices of the cube on the axes represent the primary colors, and the remaining vertices represent the complementary color of each primary color. One end of the main diagonal represents black (0,0,0) and the other end represents white (1,1,1).
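To make the cube concrete, here is a minimal Python sketch (the function name complement is illustrative, not from any library) showing that a color's complement is the diagonally opposite vertex of the unit cube, obtained by subtracting each component from 1:

# Colors are (R, G, B) triples in the unit cube [0, 1]^3.
def complement(color):
    r, g, b = color
    return (1 - r, 1 - g, 1 - b)

print(complement((1, 0, 0)))  # red   -> (0, 1, 1) = cyan
print(complement((0, 0, 0)))  # black -> (1, 1, 1) = white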
Q-1.4: Different components of a basic raster scan CRT. Ans: CRT stands for Cathode Ray Tube. CRT is a technology used in traditional computer monitors and televisions. The image on a CRT display is created by firing electrons from the back of the tube at a phosphor coating located towards the front of the screen. When the electrons hit the phosphor, it lights up and the glow is projected on the screen. Components of a basic raster scan CRT are given below: (1) Electron Gun: The electron gun consists of a series of elements, primarily a heating filament (heater) and a cathode. The electron gun creates a source of electrons which are focused into a narrow beam directed at the face of the CRT. (2) Control Electrode: It is used to turn the electron beam on and off. (3) Focusing system: It is used to create a clear picture by focusing the electrons into a narrow beam. (4) Deflection Yoke: It is used to control the direction of the electron beam. It creates an electric or magnetic field which bends the electron beam as it passes through the area. (5) Phosphor-coated screen: The inside front surface of every CRT is coated with phosphors. Phosphors glow when a high-energy electron beam hits them. Phosphorescence is the term used to characterize the light given off by a phosphor after it has been exposed to an electron beam.
Q-1.5: Raster scan display and Random scan display. Ans: Raster
Scan Display: ∎A Raster Scan Display is based on intensity control of
pixels in the form of a rectangular box called Raster on the screen. ∎In
a raster scan system, the electron beam is swept across the screen, one
row at a time from top to bottom. As the electron beam moves across
each row, the beam intensity is turned on and off to create a pattern of
illuminated spots. ∎Picture definition is stored in memory area called
the Refresh Buffer or Frame Buffer. This memory area holds the set of
intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and
“painted” on the screen one scan line at a time. ∎Each screen point is
referred to as a pixel (picture element). At the end of each scan line, the electron beam returns to the left side of
the screen to begin displaying the next scan line. Random scan display: Random Scan System uses an electron
beam which operates like a pencil to create a line image on the CRT screen. The picture is constructed out of a
sequence of straight-line segments. Each line segment is drawn on the screen by directing the beam to move from
one point on the screen to the next, where its x & y coordinates define each point. After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second. It is also called
vector display, stroke-writing display, or calligraphic display.
Q-1.6: Explain YIQ color model: The YIQ (luminance, in-phase, quadrature) color model is used in television
broadcasting to encode color information in a way that's compatible with black-and-white TV signals. The YIQ
model was designed to be backward compatible with black-and-white television systems. In a black-and-white TV
set, only the Y component (luminance) is used, so it can display the image correctly without color. Here's a brief
explanation: ∎Components: *Y (Luminance): Represents brightness or the black-and-white information of the
image. *I (In-phase): Carries the color information along the horizontal axis. *Q (Quadrature): Carries the color
information along the vertical axis. ∎Encoding Color: *Y represents the black-and-white part of the image, similar
to a regular black-and-white TV signal. *I and Q carry color information by encoding the color differences between
the luminance (Y) and the actual color values.
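For illustration, the commonly cited NTSC RGB-to-YIQ conversion can be written as a matrix multiply; the coefficient values below are approximate and vary slightly between references, and the helper name rgb_to_yiq is illustrative:

import numpy as np

# Commonly cited NTSC RGB -> YIQ conversion matrix (values are approximate).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: in-phase chrominance
    [0.211, -0.523,  0.312],   # Q: quadrature chrominance
])

def rgb_to_yiq(rgb):
    """Convert an (R, G, B) triple with components in [0, 1] to (Y, I, Q)."""
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

print(rgb_to_yiq((1.0, 1.0, 1.0)))  # white -> Y close to 1, I and Q close to 0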
Q-1.7: Describe the working principle of vector graphics display system. Ans: A vector graphics system is
a type of computer graphics system that uses mathematical equations and geometric primitives to represent and
render images on a display screen. Here's how it works: Mathematical Descriptions: Shapes, lines, and curves are
represented by mathematical formulas rather than being made up of small dots (pixels). Coordinate System: The
display system operates within a coordinate system, defining points, lines, curves, and shapes based on
mathematical coordinates. These coordinates determine the location of elements on the screen. Geometric
Primitives: Shapes in vector graphics are constructed using geometric primitives such as points, lines, curves (like
Bezier or B-spline curves), and polygons. Drawing Commands: Instructions or commands are sent to the display
system to draw specific shapes or lines at designated coordinates. These commands contain mathematical
descriptions of the shapes to be rendered. Rendering Process: When the system receives drawing commands, it
interprets the mathematical descriptions and algorithms to generate and display the corresponding shapes or
lines on the screen. Scalability without Losing Quality: Because everything is described using math, you can make
things bigger or smaller without losing their sharpness. Efficient Storage: It's more efficient with memory and can
handle resizing without losing quality.
Q-1.8: Explain CMYK color model: The CMYK color model (also known as process color, or four color) is a
subtractive color model, based on the CMY color model, used in color printing, and is also used to describe the
printing process itself. The abbreviation CMYK refers to the four ink plates used: cyan, magenta, yellow, and key
(black). These four colors are needed to reproduce full color artwork in magazines, books and brochures. By combining cyan, magenta, yellow and black on paper in varying percentages, the illusion of a full range of colors is created. Given a CMY specification, black is used in place of equal amounts of C, M and Y according to the relations: K = min(C, M, Y); C = C − K; M = M − K; Y = Y − K. Working: The CMYK model works by partially or entirely masking colors on a
lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is
called subtractive because inks "subtract" the colors red, green and blue from white light; White light minus red
leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow.
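A minimal Python sketch of the relations above (a common variant additionally normalizes by dividing by 1 − K, which is omitted here to match the equations as stated; the function names are illustrative):

def cmy_to_cmyk(c, m, y):
    """Apply the relations from the text: K = min(C, M, Y), then subtract K."""
    k = min(c, m, y)
    return (c - k, m - k, y - k, k)

def rgb_to_cmyk(r, g, b):
    """RGB components in [0, 1]; CMY is the subtractive complement of RGB."""
    return cmy_to_cmyk(1 - r, 1 - g, 1 - b)

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red  -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.2, 0.2, 0.2))  # dark grey -> (0.0, 0.0, 0.0, 0.8)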
Q-1.9: What is output device? Explain different types of computer graphics output devices. Ans: An output
device in the context of computers is any hardware component that presents or displays data processed by the
computer. It takes digital information from the computer and converts it into a human-readable form, such as
text, images, or multimedia. Output devices enable users to interact with and perceive the results of their input or
the processed information; Here are some different types of computer graphics output devices:
Monitors/Displays: Monitors, commonly called Visual Display Units (VDUs), are the most common output
devices for computer graphics. Monitors or displays present visual information using pixels, showing text, images,
videos, and graphical user interfaces (GUIs). They come in various types such as CRT (Cathode Ray Tube), LCD
(Liquid Crystal Display), LED (Light Emitting Diode), and OLED (Organic Light Emitting Diode). Printers: Printers
produce hard copies of digital images or documents. There are different types of printers, including inkjet printers,
line printer, dot-matrix printer, laser printers, and 3D printers, each with specific capabilities for producing
graphics in various formats and qualities. Plotters: Plotters are devices used to produce high-quality vector
graphics by precisely positioning a pen or other writing instruments. They are commonly used in engineering and
architectural applications to create detailed technical drawings. Projection Systems: These devices project
computer-generated images onto surfaces such as screens, walls, or boards. Projectors are used in presentations,
home theaters, and large-scale displays. Virtual Reality (VR) Headsets: VR headsets are specialized output devices
that create immersive computer-generated environments. They use screens and sensors to display images and
track head movements, providing users with an immersive graphical experience. Speakers/Audio Output Devices:
While primarily used for sound output, speakers and audio devices also play a role in some computer graphics
applications, such as multimedia presentations, where audio complements visual information.
Q-1.10: Define MIDI & Animation. Ans: MIDI: MIDI stands for "Musical Instrument Digital Interface." It is
a widely used protocol and technology that allows electronic musical instruments, computers, and other devices
to communicate, control, and synchronize with each other for the purpose of creating and producing music. Here
are some key points about MIDI: ∎Communication Protocol: MIDI is a digital communication protocol that uses a
standardized set of commands and messages to transmit musical information between devices. These messages
include instructions for note pitches, duration, velocity (how hard a note is played), control changes, and more.
∎Universal Compatibility: One of the significant advantages of MIDI is its universal compatibility. MIDI-enabled
devices from different manufacturers can communicate seamlessly, allowing musicians to create complex setups
with various instruments and equipment. ∎MIDI Messages: MIDI messages are divided into several categories,
including note-on/off messages (to trigger and release musical notes), control change messages (to adjust
parameters like volume or modulation), program change messages (to select different instrument sounds or
patches), and system messages (for synchronization and system-level commands). ∎MIDI Connections: MIDI
devices are connected using MIDI cables (often 5-pin DIN connectors) or modern USB connections. MIDI data is
transmitted serially between devices. ∎MIDI Controllers: MIDI controllers are devices that generate MIDI data,
allowing musicians to manipulate and control other MIDI-equipped instruments or software. Examples of MIDI
controllers include electronic keyboards, drum pads, wind controllers, and MIDI foot pedals.
Animation: Animation is the process of creating the illusion of motion and change by rapidly displaying a
sequence of static images or frames. Each frame slightly differs from the previous one. These frames are usually
generated digitally or drawn by hand, and when played in sequence at a sufficient speed, they create the
perception of movement. It's a visual technique used in various mediums, including films, videos, games, and
multimedia presentations, to bring characters, objects, or drawings to life. Animation can be produced through
traditional hand-drawn techniques, stop motion, computer-generated imagery (CGI), or a combination of these
methods. It's a powerful storytelling tool that allows for the portrayal of movements, actions, and emotions in a
visually engaging and dynamic way.
Q-2.1: What is scan conversion & Side effects of scan conversion. Ans: It is the responsibility of the graphics system or the application program to convert each primitive from its geometric definition into the set of pixels that makes up the primitive in image space. This conversion task is generally referred to as scan conversion or rasterization. So we can say that the process of representing a continuous graphics object as a collection of discrete pixels is known as scan conversion. Side effects of scan conversion: Generally, four major side effects occur; these are: Aliasing: *It occurs due to the limited resolution of the screen, which cannot accurately represent the smoothness of shapes. *Aliasing makes smooth curves or diagonal lines appear jagged or pixelated on a digital display. Unequal intensity: *It occurs when neighboring pixels show different colors or brightness levels. *Inaccuracies in scan conversion lead to inconsistent pixel representations, causing irregularities
in color or brightness transitions. Overstrike: * Overstrike occurs when one graphical element overlaps and hides
parts of another. * It happens if the rendering order or depth information isn't managed correctly, resulting in
improper layering or occlusion of visual elements. Picket fence problem: * This problem arises as a visual artifact
resembling a picket fence on vertical edges. * Improper alignment or uneven spacing of pixels along vertical lines
causes a distorted or uneven appearance in the rendered image.
Q-2.2: DDA scan conversion/Line drawing algorithm. Ans: DDA (Digital Differential Analyzer) is a line
drawing algorithm used in computer graphics to generate a line segment between two specified endpoints. It is a
simple and efficient algorithm that works by using the incremental difference between the x-coordinates and y-
coordinates of the two endpoints to plot the line. This algorithm is explained step by step here:
Step 1: Declare x1, y1, x2, y2, dx, dy, x, y as variables.
Step 2: Input the two endpoints of the line segment, (x1, y1) and (x2, y2).
Step 3: Calculate the differences between the x and y coordinates: dx = x2 - x1 and dy = y2 - y1.
Step 4: If (absolute(dx) > absolute(dy)) then steps = absolute(dx); else steps = absolute(dy).
Step 5: xincrement = dx/steps; yincrement = dy/steps; assign x = x1 and y = y1.
Step 6: Set pixel (Round(x), Round(y)).
Step 7: x = x + xincrement; y = y + yincrement; set pixel (Round(x), Round(y)).
Step 8: Repeat step 7 until x reaches x2.
Step 9: End.
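A minimal Python sketch of the DDA steps above (integer endpoints are assumed; the function name dda_line is illustrative):

def dda_line(x1, y1, x2, y2):
    """Return the list of pixels on the line from (x1, y1) to (x2, y2) using DDA."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:                      # both endpoints coincide
        return [(round(x1), round(y1))]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    pixels = []
    for _ in range(steps + 1):          # include both endpoints
        pixels.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(2, 3, 10, 8))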
Q-2.3: Bresenham’s line drawing algorithm. Ans: Bresenham's line drawing algorithm is a fundamental
method used in computer graphics to draw a line between two given points on a grid-based display, such as a
computer screen. It's an efficient way to determine which pixels to turn on or off to create a straight line between
these points. The algorithm, developed by Jack E. Bresenham in 1962, operates by incrementally plotting the
pixels that best approximate the line path between two endpoints.
Step 1: Input the two endpoints of the line, (x1, y1) and (x2, y2). Calculate the differences in the x and y coordinates: dx = x2 - x1 and dy = y2 - y1, and determine the direction of the line (whether it is steep or shallow).
Step 2: Initialize the decision parameter: p = 2*dy - dx (for lines with slope between 0 and 1).
Step 3: For each x coordinate from x1 to x2: plot the pixel at (x, y) and increment x.
Step 4: Update the decision parameter: ⨀If p < 0, p = p + 2*dy (no change in y). ⨀If p ≥ 0, p = p + 2*dy - 2*dx, and y is also incremented.
Step 5: Continue plotting pixels until reaching x2.
Advantages: ∎Efficiency: Bresenham's algorithm avoids floating-point arithmetic and only involves integer operations, making it faster than DDA. ∎Accuracy: It produces more accurate results and does not suffer from rounding errors.
Limitations: ∎Limited to Straight Lines: Bresenham's algorithm is specifically designed for drawing straight lines and doesn't directly extend to other shapes.
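A minimal Python sketch of the steps above for the slope range 0 to 1 (integer endpoints with x2 ≥ x1 and 0 ≤ dy ≤ dx are assumed; the function name is illustrative):

def bresenham_line(x1, y1, x2, y2):
    """Integer-only Bresenham line for slopes between 0 and 1 (dx >= dy >= 0)."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx                 # initial decision parameter
    x, y = x1, y1
    pixels = []
    for _ in range(dx + 1):
        pixels.append((x, y))
        if p < 0:
            p += 2 * dy             # keep the same y
        else:
            p += 2 * dy - 2 * dx    # step up to the next y
            y += 1
        x += 1
    return pixels

print(bresenham_line(0, 0, 8, 4))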
Q-2.4: Bresenham’s circle drawing algorithm. Ans: Bresenham's circle algorithm is a method used to draw
circles in computer graphics. It's an efficient algorithm that utilizes integer arithmetic to plot the points on the
circumference of a circle, avoiding the need for expensive floating-point operations. The algorithm uses the
concept of symmetry to plot points in one octant of the circle and then mirrors these points to generate the
complete circle.
Step 1: Input the radius r and the center (p, q).
Step 2: Calculate the decision parameter d = 3 - 2r and set x = 0, y = r.
Step 3: If x ≥ y then go to step 6.
Step 4: Plot eight points by using the concept of eight-way symmetry. The center is at (p, q) and the current active pixel is (x, y): (x+p, y+q), (y+p, x+q), (-x+p, y+q), (-y+p, x+q), (x+p, -y+q), (y+p, -x+q), (-x+p, -y+q), (-y+p, -x+q).
Step 5: Find the location of the next pixel to be scanned: If d < 0, then d = d + 4x + 6 and increment x = x + 1. If d ≥ 0, then d = d + 4(x - y) + 10, increment x = x + 1 and decrement y = y - 1. Repeat step 3.
Step 6: Stop.
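A minimal Python sketch of the circle steps above, including the eight-way symmetric plotting (the function name is illustrative):

def bresenham_circle(p, q, r):
    """Return the pixels of a circle of radius r centred at (p, q)."""
    pixels = set()
    x, y = 0, r
    d = 3 - 2 * r                   # initial decision parameter
    while x <= y:
        # the eight symmetric points of (x, y) about the centre (p, q)
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.add((p + sx, q + sy))
        if d < 0:
            d += 4 * x + 6
        else:
            d += 4 * (x - y) + 10
            y -= 1
        x += 1
    return pixels

print(sorted(bresenham_circle(0, 0, 5)))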
Q-2.5: Mid-Point circle algorithm. Ans: Similar to Bresenham's circle algorithm, the Midpoint circle algorithm
exploits the symmetry of circles to calculate points efficiently in one octant and then mirrors them to the other
octants, reducing computational overhead; Whether the mid-point lies inside or outside the circle can be decided
by using the formula: given a circle centered at (0,0) with radius r and a point (x, y), F(x, y) = x² + y² − r²; if F(x, y) < 0, the point is inside the circle; if F(x, y) = 0, the point is on the perimeter; if F(x, y) > 0, the point is outside the circle.
Step 1: Input the radius r and the center, and take the starting point as (0, r).
Step 2: Set x = 0, y = r and calculate the initial decision parameter p using the formula p = 1 − r.
Step 3: Repeat while x ≤ y: plot (x, y); if (p < 0) then set p = p + 2x + 3, else p = p + 2(x − y) + 5 and y = y − 1 (end if); x = x + 1 (end loop).
Step 4: End.
Q-2.6: Draw the eight-way symmetry of a circle. Ans: ∎The first
thing we can notice to make our circle drawing algorithm more
efficient is that circles centered at (0, 0) have eight-way symmetry.
∎If the calculation of the point of one octant is done, then the
other seven points can be calculated easily by using the concept of
eight-way symmetry. ∎If the point (x, y) is on the circle, then we can trivially compute seven other points on the circle, as shown in the figure below. ∎Therefore, we need to compute only a 45° segment to determine the circle completely. We get a point's symmetric complements about these lines by permuting and negating the coordinates: {(x, -y), (-x, y), (-x, -y), (y, x), (y, -x), (-y, x), (-y, -x)}.
Q-2.8: What are the advantages of Bresenham line algorithm over DDA algorithm. Ans: Advantages of
Bresenham's Line Algorithm over DDA: Efficiency: Bresenham's algorithm uses only integer arithmetic
(incremental calculations and decision making based on integers), while DDA uses floating-point arithmetic
(multiplications and divisions). That is why Bresenham's algorithm is more efficient than the DDA algorithm. No Division
Operation: Bresenham's algorithm avoids divisions and multiplications, focusing mainly on additions and
subtractions; DDA involves calculating slopes (which may require division operations) and involves floating-point
multiplication. Accuracy: Bresenham's algorithm selects the nearest pixel to approximate the line, resulting in
more accurate representations compared to DDA; DDA may suffer from rounding errors due to floating-point
arithmetic. Ease of Implementation: Bresenham's algorithm uses only incremental calculations and requires only
additions and subtractions, making it simpler to implement in hardware or software.
Q-2.9: Write down the Bresenham’s line algorithm for line having slope between 0 and 45°. Ans:
1. Compute the initial values: dx = X2 − X1, dy = Y2 − Y1, Inc1 = 2dy, Inc2 = 2(dy − dx), d = Inc1 − dx.
2. Set (x, y) equal to the lower left-hand endpoint and Xend equal to the largest value of x. If dx < 0, then X = X2, Y = Y2, Xend = X1. If dx > 0, then X = X1, Y = Y1, Xend = X2.
3. Plot a point at the current (x, y) coordinates.
4. Test to see whether the entire line has been drawn. If X = Xend, stop.
5. Compute the location of the next pixel. If d < 0, then d = d + Inc1. If d ≥ 0, then d = d + Inc2, and then y = y + 1.
6. Increment X: X = X + 1.
7. Plot a point at the current (x, y) coordinates.
8. Go to step 4.
Q-3.1: Define Transformation & its type. Ans: Transformation: In computer graphics, transformation refers to
the process of manipulating the position, orientation, angle, and size of objects within a 2D or 3D space. It
involves altering the state or condition of an object, system, or entity to produce a different result or effect. These
transformations are fundamental for creating visual effects, animations, and rendering scenes. Types of
Transformation: There are two types of transformation, they are: Geometric Transformation: Geometric
transformations are mathematical operations that alter the position, shape, size, orientation, or other geometric
properties of objects in a graphical space. Geometric transformations include operations such as translation,
rotation, scaling, shearing, reflection, and more. These operations change the appearance of shapes while
preserving their fundamental properties like angles or lengths. These transformations are commonly used in
mathematics, computer graphics, image processing, and various engineering and scientific applications.
Coordinate Transformation: Coordinate transformation, also known as coordinate change or coordinate
conversion, refers to the process of changing from one system of coordinates to another. This transformation is
used in various fields including mathematics, physics, engineering, and computer graphics to represent and
analyze objects or phenomena in different reference frames or coordinate systems.
Q-3.2: What is composite transformation Ans: Composite transformation refers to the combination of multiple
individual transformations applied successively to an object or a point in a specific order. In computer graphics,
this technique involves applying several transformations (such as translation, rotation, and scaling) one after
another to achieve a final overall transformation. Composite transformation can be achieved by concatenation of
transformation matrices to obtain a combined transformation matrix. Example: Let's assume we have matrices for
translation (T), rotation (R), and scaling (S). To create a composite transformation matrix (C), we multiply these
matrices in the order we want the transformations to occur: C=S× R× T
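A minimal Python/NumPy sketch of composing T, R and S into a single matrix C = S·R·T, applied to a 2D point in homogeneous coordinates (the helper names are illustrative; NumPy is assumed to be available):

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# Composite matrix: translate first, then rotate, then scale (C = S @ R @ T).
T = translation(2, 3)
R = rotation(np.pi / 2)
S = scaling(2, 2)
C = S @ R @ T

point = np.array([1, 0, 1])   # homogeneous 2D point (1, 0)
print(C @ point)              # same result as S @ (R @ (T @ point))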
Q-3.3: Define 2D Discrete Cosine Transformation (DCT). Ans: In computer graphics, DCT stands for Discrete
Cosine Transform. It is a mathematical technique that is used to transform a finite sequence of data points into a
sum of cosine functions oscillating at different frequencies. It is a mathematical transformation widely used in
signal processing and image compression. It's often employed in applications such as JPEG compression for
images and MP3 compression for audio. The 2D DCT is an extension of the 1D DCT to two dimensions. The 2D DCT
separates the image into components of different frequencies. The 2D DCT of an image or a matrix is defined by
the following equation: F(u, v) = C(u) C(v) ∑ from x = 0 to M−1 ∑ from y = 0 to N−1 of f(x, y) · cos[(2x+1)uπ / 2M] · cos[(2y+1)vπ / 2N]. Where: ∎F(u,v) is the DCT coefficient at position (u,v) in the transformed domain. ∎f(x,y) is the pixel intensity at position (x,y) in the spatial domain. ∎M and N are the dimensions of the image or matrix in the spatial domain. ∎C(u) and C(v) are scaling factors defined as: C(u) = 1/√2 for u = 0 and C(u) = 1 otherwise; C(v) = 1/√2 for v = 0 and C(v) = 1 otherwise.
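A minimal Python/NumPy sketch that evaluates the formula above directly (note that many references, e.g. the JPEG/SciPy convention, include an extra 2/√(MN) normalization factor that this formula omits; the function name is illustrative and the quadruple loop is intended only for small blocks):

import numpy as np

def dct_2d(f):
    """2D DCT of an M x N array, following the formula given above."""
    M, N = f.shape
    F = np.zeros((M, N))
    for u in range(M):
        for v in range(N):
            cu = 1 / np.sqrt(2) if u == 0 else 1.0
            cv = 1 / np.sqrt(2) if v == 0 else 1.0
            total = 0.0
            for x in range(M):
                for y in range(N):
                    total += (f[x, y]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * M))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * N)))
            F[u, v] = cu * cv * total
    return F

block = np.arange(16, dtype=float).reshape(4, 4)   # a toy 4x4 "image"
print(np.round(dct_2d(block), 2))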
Q-3.4: Define Translation, Rotation, Scaling, Reflection & Shearing. Ans: The fundamental/basic
geometrical Transformation in Computer Graphics include: Translation, Rotation and Scaling. And the derived
geometrical Transformation in Computer Graphics are: Reflection & Shearing.
Translation: Translation changes the position of the object without altering its orientation or size. It involves moving an object from one place to another in a straight line; in translation, an object is displaced a given distance and direction from its original position. If the displacement is given by the vector V = txI + tyJ, the new object point P'(X', Y') can be found by applying the transformation Tv to P(X, Y): P' = Tv(P), where X' = X + tx and Y' = Y + ty.
Rotation: Rotation changes the orientation of an object around a fixed point. It involves rotating an object around a point by a specified angle. Rotation can be clockwise or anticlockwise; in rotation, the object is rotated θ° about the origin. The convention is that the direction of rotation is counterclockwise if θ is a positive angle and clockwise if θ is a negative angle. The transformation of rotation Rθ is P' = Rθ(P), where X' = Xcos(θ) − Ysin(θ) and Y' = Xsin(θ) + Ycos(θ).
Scaling: Scaling changes the size of an object. It is the process of expanding or compressing the dimensions of an object along one or more axes. ∎Positive scaling constants Sx and Sy are used to describe changes in length with respect to the X direction and Y direction respectively. ∎A scaling constant greater than one indicates an expansion of length, and less than one, a compression of length. The scaling transformation S(Sx, Sy) is given by P' = S(Sx, Sy)(P), where X' = Sx·X and Y' = Sy·Y. The figure shows the scaling transformation with scaling factors Sx = 2 and Sy = 1/2. If both scaling constants have the same value s, the scaling transformation is said to be homogeneous or uniform. Furthermore, if s > 1 it is a magnification, and for s < 1 it is a reduction.
Reflection: ∎It is a transformation which produces a mirror image of an object. Reflection (also known as flipping or mirroring) involves creating a mirror image of an object across a specified axis. The object is rotated by 180°. ∎The size of the reflected object is the same as that of the original object. ∎The reflected object is always formed on the other side of the mirror. Reflection about the x-axis: X' = X, Y' = −Y. Reflection about the y-axis: X' = −X, Y' = Y. Reflection relative to the xy plane (about the origin): X' = −X, Y' = −Y.
Shearing: Shearing distorts the shape of an object along a specific axis, keeping the other axes unchanged. It skews the object by changing the angles between the axes. Shx and Shy are the shearing factors. Shearing along the x-axis: X' = X + Shx·Y, Y' = Y. Shearing along the y-axis: X' = X, Y' = Y + Shy·X. Shearing in both x and y: X' = X + Shx·Y, Y' = Y + Shy·X.
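A minimal Python/NumPy sketch of the reflection and shearing equations above, applied to the corner points of a unit square (helper names are illustrative):

import numpy as np

def reflect_x():            # mirror about the x-axis: (x, y) -> (x, -y)
    return np.array([[1, 0], [0, -1]], dtype=float)

def reflect_y():            # mirror about the y-axis: (x, y) -> (-x, y)
    return np.array([[-1, 0], [0, 1]], dtype=float)

def shear_x(shx):           # shear along x: (x, y) -> (x + shx*y, y)
    return np.array([[1, shx], [0, 1]], dtype=float)

def shear_y(shy):           # shear along y: (x, y) -> (x, y + shy*x)
    return np.array([[1, 0], [shy, 1]], dtype=float)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # unit square
print(square @ shear_x(2).T)     # x-shear turns the square into a parallelogram
print(square @ reflect_y().T)    # mirror image across the y-axis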
Q-3.5: Deduce the formula of 3D geometric transformation of translation, rotation and scaling. Ans: In
three-dimensional space, geometric transformations involve the manipulation of points, vectors, or objects. Here
are the formulas for translation, rotation, and scaling in three dimensions:
Translation (T): Translation involves moving an object by a certain distance along each axis. For a translation vector (tx, ty, tz), the transformation of a point (x, y, z) in homogeneous coordinates is:
[X']   [1 0 0 tx] [X]
[Y'] = [0 1 0 ty] [Y]
[Z']   [0 0 1 tz] [Z]
[1 ]   [0 0 0 1 ] [1]
The transformed coordinates are (x′, y′, z′).
Rotation (R): Rotation involves rotating an object around an axis. The rotation matrix depends on the axis of rotation and the angle of rotation. For example, for rotation about the x-axis by an angle θ, the rotation matrix is:
[X']   [1    0        0      0] [X]
[Y'] = [0  cos(θ)  −sin(θ)   0] [Y]
[Z']   [0  sin(θ)   cos(θ)   0] [Z]
[1 ]   [0    0        0      1] [1]
Similarly, rotation matrices can be defined for rotations about the y-axis and z-axis.
Scaling (S): Scaling involves stretching or compressing an object along each axis. For scaling factors (Sx, Sy, Sz), the scaling transformation is given by:
[X']   [Sx 0  0  0] [X]
[Y'] = [0  Sy 0  0] [Y]
[Z']   [0  0  Sz 0] [Z]
[1 ]   [0  0  0  1] [1]
The transformed coordinates are (x′, y′, z′).
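A minimal Python/NumPy sketch of the three homogeneous 4×4 matrices above (helper names are illustrative):

import numpy as np

def translate_3d(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def rotate_x_3d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0, 0, 1]], dtype=float)

def scale_3d(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

p = np.array([1.0, 2.0, 3.0, 1.0])           # homogeneous 3D point
print(translate_3d(5, 0, 0) @ p)             # -> [6, 2, 3, 1]
print(rotate_x_3d(np.pi / 2) @ p)            # -> approximately [1, -3, 2, 1]
print(scale_3d(2, 2, 2) @ p)                 # -> [2, 4, 6, 1]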
Q-5.1: Define Projector & Projection. Ans: Projector: A projector is an optical device that projects an image
or video onto a surface, commonly a projection screen. Most projectors create an image by shining a light through a small transparent lens, but some newer types can project the image directly by using lasers. Projection: Projection refers to the process of displaying an image or video by casting or throwing it onto a surface. This process is achieved using a light source and optical components within the projector. It can be defined as a mapping of a point P(x, y, z) onto its image P'(x', y', z') in the projection plane or view plane, which constitutes the display surface. The figure shows the scheme.
Q-5.2: Various types of projection. Ans: There are two basic types of projection: ∎Perspective (converging
projectors): Perspective projection is a method of representing a three-dimensional (3D) object or scene in a two-
dimensional (2D) space, such as a drawing or an image. This technique mimics the way the human eye perceives
objects in the real world by creating the illusion of depth and distance. It is determined by prescribing a center of
projection and a view plane. Types: Depending on the number of vanishing points, this type of projection has
three subcategories: (i) One point (one principal vanishing point). (ii) Two point (two principal vanishing point).
(iii) Three point (three principal vanishing point). ∎Parallel (parallel projection): In a parallel projection, the
projection lines are parallel, meaning that the lines of sight from the viewer to the objects in the 3D space do not
converge. It is determined by prescribing a direction of projection vector and a view plane. Types: There are two
main types of parallel projection: 1. Orthographic (projectors perpendicular to view plane): In orthographic
projection, the projection lines are perpendicular to the projection plane. This results in a 2D representation
where all parallel lines in the 3D space remain parallel in the 2D image. Types: Orthographic projection is of two types: (i) Multiview (view plane parallel to principal planes) (ii) Axonometric (view plane not parallel to principal planes): Common axonometric projections include: ∎Isometric- All three axes are equally foreshortened. ∎Dimetric- Only two of the three axes are equally foreshortened. ∎Trimetric- All three axes are foreshortened differently. 2. Oblique (projectors not perpendicular to view plane): Oblique projection involves
projecting points in the 3D space onto the 2D plane along lines that are not necessarily perpendicular. It is a form
of parallel projection in which the three principal axes of an object are not equally foreshortened. Types: Oblique projection is of two types: Cavalier- All lines perpendicular to the projection plane are projected with no change in
length. Cabinet- All lines perpendicular to the projection plane are projected to one half of their length.
Q-5.3: What do you mean by center of projection. Ans: In perspective projection the center of projection is at a finite distance from the projection plane. It is the point where lines of projection that are not parallel to the projection plane appear to meet. It is an arbitrary point from which the lines are drawn to each point of an object. ∎If the COP is located at a finite point in 3D space, a perspective projection is the result. ∎If the COP is located at infinity, all the lines are parallel and the result is a parallel projection.
Q-5.4: Different perspective anomalies. Ans: The process of constructing a perspective view introduces
certain anomalies which enhance realism in terms of depth cues but also distort actual sizes and shapes. The
different perspective anomalies are explained below: Perspective foreshortening: The farther an object is from
the center of projection, the smaller it appears (i.e., its projected size becomes smaller). Vanishing points:
Projections of lines that are not parallel to the view plane (i.e., lines that are not perpendicular to the view plane
normal) appear to meet at some point on the view plane. View confusion: Objects behind the center of projection
are projected upside down and backward onto the view plane. Topological distortion: A finite line segment
joining a point which lies in front of the viewer to a point in back of the viewer is actually projected to a broken
line of infinite extent.
Q-5.5: Define wire frame model and Advantages & Disadvantages. Ans: A wireframe model is a visual
representation of a 3D object in computer graphics. It is created using lines and curves to outline the shape of the
object. The primary purpose of a wireframe model is to provide a skeletal overview of the object's structure. It
consists of edges, vertices, and polygons. Here vertices are connected by edges, and polygons are sequences of
vertices or edges. The edges may be curved or straight-line segments. In the latter case, the wireframe model is
called a polygonal net or polygonal mesh. Advantages: ∎ Simplicity: Wireframes are simple and uncluttered,
focusing on the fundamental structure of objects. ∎ Efficiency: Quick to create and computationally less intensive,
making them suitable for rapid design iterations. ∎ Conceptualization: Useful in the early stages of design for
conceptualization and planning. ∎ Easy to clip and manipulate through the use of geometric and coordinate
transformations. Disadvantages: ∎ Lack of Realism: Wireframes lack surface details, textures, and shading,
resulting in a less realistic representation. ∎ Limited Information: They do not convey information about surface
materials, reflections, or lighting effects. ∎ Ineffective for Rendering: Wireframes are not suitable for rendering
final images; additional steps are needed for realistic visualization.
Q-5.6: How can we use wire frame model to design an object. Ans: Here’s how we can use wireframe
model to design an object in computer graphics: Conceptualization: Define the overall design and key features.
Basic Structure: Create the wireframe with primary outlines. Detailing: Add lines and curves for features and
details. Reference Images: Refer to blueprints or reference images for accuracy. Connect Vertices: Form faces by
connecting vertices. Refinement: Adjust for proportions and symmetry. Consistency Check: Ensure uniformity and
alignment. Dimensions: Add measurements and annotations. Review and Collaborate: Gather feedback and
make adjustments. Export/Convert: Transition to other formats if necessary. Detailing/Rendering: Add more
details or move to rendering for realism.
Q-5.7: Describe different ways to represent a polygonal net model. Ans: Different ways to represent a
polygonal net model include: 1. Vertex-Edge Representation: Defined by vertices and edges, suitable for
wireframes. 2. Vertex-Face Representation: Uses vertices and faces, common in 3D graphics. 3. Half-Edge Data
Structure: Represents edges as two half-edges, efficient for certain operations. 4. Winged-Edge Data Structure:
Extends vertex-edge with face information, useful for editing. 5. Polygon Meshes with Normal Vectors: Includes
normal for shading and rendering. 6. Quad and Triangle Meshes: Combines quads and triangles for flexible
representation.
Q-6.1: Define hidden surface problem. Ans: The hidden surface problem is a challenge in computer graphics
related to rendering 3D scenes on a 2D display. When we view a picture containing non-transparent objects and surfaces, we cannot see the objects that are behind other objects closer to the eye. We must
remove these hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is
called Hidden-surface problem. There are two approaches for removing hidden surface problems − Object-Space
method and Image-space method. The Object-space method is implemented in physical coordinate system and
image-space method is implemented in screen coordinate system.
Q-6.2: Write down Z-buffer/Depth Buffer algorithm. Ans: One of the simplest and commonly used image
space approaches to eliminate hidden surfaces is the Z-buffer or depth buffer algorithm. It was developed by Catmull.
This algorithm compares surface depths at each pixel position on the projection plane. The surface depth is
measured from the view plane along the z-axis of a viewing system. Depth buffer algorithm requires 2 arrays,
intensity and depth each of which is indexed by pixel coordinates (x, y). The Z buffer algorithm consists of the
following steps:
Step 1: Initialize the Z-buffer and frame-buffer so that for all buffer positions: Z-buffer(x, y) = 0 and frame-buffer(x, y) = I-background.
Step 2: During the scan-conversion process, for each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility. Calculate the z-value for each (x, y) position on the polygon: If z > Z-buffer(x, y), then set Z-buffer(x, y) = z and frame-buffer(x, y) = I-surface(x, y).
Step 3: Stop.
Note that I-background is the value for the background intensity and I-surface is the projected intensity value for the surface at pixel position (x, y).
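A minimal Python/NumPy sketch of the algorithm above, using the same convention (Z-buffer initialized to 0, larger z treated as closer); representing each surface as a (depth_fn, intensity_fn) pair is an illustrative simplification, not part of the algorithm itself:

import numpy as np

def zbuffer_render(width, height, surfaces, background=0):
    """depth_fn(x, y) returns the surface's z at pixel (x, y), or None if the
    surface does not cover that pixel; intensity_fn(x, y) returns its color."""
    zbuf = np.zeros((height, width))                 # Z-buffer(x, y) = 0
    frame = np.full((height, width), background)     # frame-buffer = I-background
    for depth_fn, intensity_fn in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z is not None and z > zbuf[y, x]: # closer than what is stored
                    zbuf[y, x] = z
                    frame[y, x] = intensity_fn(x, y)
    return frame

# Two constant-depth "surfaces": the second is closer and wins where it overlaps.
far_plane  = (lambda x, y: 1.0, lambda x, y: 50)
near_patch = (lambda x, y: 2.0 if x < 4 else None, lambda x, y: 200)
print(zbuffer_render(8, 4, [far_plane, near_patch]))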
Q-6.3: How does Z-buffer algorithm determine which surface are hidden. Ans: The Z-buffer algorithm sets
up a two-dimensional array which is like the frame buffer. However, the Z buffer stores the depth value at each
pixel rather than the color, which is stored in the frame buffer. By setting the initial values of the Z buffer to some
large number, usually the distance of back clipping plane, the problem of determining which surfaces are closer is
reduced to simply comparing the present depth values stored in the Z buffer at pixel (x, y) with the newly
calculated depth value at pixel (x, y). If this new value is less than the present Z-buffer value (i.e., closer along the
line of sight), this value replaces the present value and the pixel color is changed to the color of the new surface.
Q-6.4: What is coherence & Types of coherence. Ans: Coherence denotes similarities between items or entities.
It describes the extent to which these items or entities are locally constant. Coherence is based on the principle of
locality whereby “nearby” things have the same or similar characteristics. In simple words, in order to reduce
the amount of calculation in each scan-line loop, we try to take advantage of relationships and dependencies
called coherences, between different elements that comprise a scene. Types of Coherence: There are four types
of coherence: (1) Scan-line coherence: If a pixel on a scan line lies within a polygon, pixels near it will most likely
lie within the polygon. (2) Edge coherence: If an edge of a polygon intersects a given scan line, it will most likely
intersect scan lines near the given one. (3) Area coherence: A small area of an image will most likely lie within a
single polygon. (4) Spatial coherence: Certain properties of an object can be determined by examining the extent
of the object, that is a geometric figure which circumscribes the given object.
Q: Why are hidden-surface algorithms needed? Ans: Hidden-surface algorithms are needed to determine
which objects and surfaces will obscure those objects and surfaces that are behind them, thus rendering a more
realistic image.
Painter’s algorithm has some limitations, these are: ∎It struggles when objects intersect or overlap, causing
rendering errors. ∎It doesn't handle transparency or semi-transparency well. ∎Sorting objects based on depth
can be computationally expensive, impacting performance for complex scenes.
Q-6.5: What kind of problem that we faced in hidden surface problem. Ans: The hidden surface problem in
computer graphics arises when determining which surfaces or objects should be visible in a rendered scene from a
particular viewpoint. Here are some specific challenges associated with this problem: Occlusion: Objects or
surfaces closer to the viewer can obstruct the view of objects behind them. Deciding which objects are in front
and visible, and which are behind and should be hidden, is essential for accurate rendering. Depth Complexity:
Scenes with many intersecting or overlapping surfaces create complexity in determining the order in which
objects should be rendered. Efficiency: For real-time applications like video games, the hidden surface problem
needs to be solved quickly. Algorithms used to determine visibility must be efficient enough to handle complex
scenes and large amounts of data within tight time constraints. Artifacts and Accuracy: Inaccuracies in
determining hidden surfaces can lead to visual artifacts, such as flickering, aliasing, or incorrect object occlusion,
which degrade the quality of the rendered image.
Q-6.6: Discuss Painter’s algorithm for visible surface determination. Ans: The Painter's algorithm is a basic
method used in computer graphics for visible surface determination. It's a simple algorithm that works by sorting
objects based on their depth (distance from the viewer) and then rendering them in this sorted order. The idea is
to draw the farthest objects first and then paint the nearer objects on top, simulating the correct visibility. Here's
an outline of the Painter's algorithm:
Scene Description: Begin with a scene containing various objects or polygons that need to be rendered.
Depth Sorting: ∎Calculate the depth (distance from the viewer) of each object or polygon in the scene. Typically, this involves determining the Z-coordinate of each object's centroid or a representative point. ∎Sort the objects based on their calculated depth values. Objects with greater depth (farther from the viewer) come first in the sorting order, while those with lesser depth (closer to the viewer) follow.
Rendering: ∎Start rendering the objects in the sorted order, from the farthest to the nearest. ∎For each object, draw it onto the screen, potentially covering portions of previously drawn objects that are farther away.
Overlap Handling: When rendering closer objects, they might cover parts of or completely occlude objects drawn earlier. Handle overlap by painting the nearer object's pixels over the ones from farther objects.
Completion: Render all objects according to the sorting order until the entire scene is displayed.
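A minimal Python sketch of the painter's algorithm outline above (the scene representation and the draw callback are illustrative simplifications):

def painters_render(polygons, draw):
    """polygons is a list of objects with a representative 'depth' (distance from
    the viewer); draw(poly) rasterizes one polygon. Farther polygons are drawn
    first so nearer ones paint over them."""
    for poly in sorted(polygons, key=lambda p: p["depth"], reverse=True):
        draw(poly)

scene = [
    {"name": "tree",     "depth": 5.0},
    {"name": "mountain", "depth": 40.0},
    {"name": "house",    "depth": 12.0},
]
painters_render(scene, lambda p: print("drawing", p["name"]))
# draws: mountain, house, tree (back to front)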
Q-6.7: How does the basic scan-line method determine which surfaces are hidden. Ans: The basic scan-line
method looks one at a time at each of the horizontal lines of pixels in the display area. For example, at the
horizontal pixel line y = α, the graphics data structure (consisting of all scan-converted polygons) is searched to
find all polygons with any horizontal (y) pixel values equal to α. Next, the algorithm looks at each individual pixel
in the α row. At pixel (α, β), the depth values (z values) of each polygon found above are compared to find the
polygon having the smallest z value at this pixel. The color of pixel (α, β) is then set to the color of the
corresponding polygon at this pixel.
Q-6.8: Write down the scan line algorithm. Ans: It is an image space algorithm. It processes one scan line at a time rather than one pixel at a time. It uses the concept of area coherence. The algorithm maintains an edge list and an active edge list. The edge list or edge table contains the coordinates of the two endpoints of each edge. The Active Edge List (AEL) contains the edges that a given scan line intersects during its sweep. Algorithm: Step 1: Initialize the required data structures: ∎Create a polygon table holding the color, edge pointers and plane coefficients of each polygon. ∎Establish an edge table containing, for each edge, the endpoints, a pointer to the owning polygon, and the inverse slope. ∎Create the active edge list; it will be kept sorted in increasing order of x. ∎Create a flag F for each surface; it will have two values, either on or off. Step 2: Perform the following steps for all scan lines: ∎Enter values into the active edge list (AEL) in sorted order, using y as the value. ∎Scan while no flag is on, filling with the background color. ∎When exactly one polygon flag is on, say for surface S1, enter its color intensity I1 into the refresh buffer. ∎When two or more surface flags are on, sort the surfaces according to depth and use the intensity value of the surface Sn with the least z depth (the closest surface). ∎Use the concept of coherence for the remaining planes. Step 3: Stop.
Q-4.1: Define Clipping, Shielding, View Ports, Windows, Aspect Ratio & Viewpoints. Ans: Clipping: In
computer graphics, clipping refers to the process of determining which parts of an object or image should be
visible and lie inside the window and which parts should be discarded or "clipped" away. It is essential for
rendering only the portions of objects or images that are within the boundaries of the viewing area, improving
performance and ensuring that graphics are displayed correctly. Shielding: Shielding typically refers to the use of
barriers or protective layers to prevent the transmission of unwanted signals or elements. In computer graphics or
design contexts, shielding might involve creating barriers or protective layers to control the visibility or interaction
of certain elements within a scene. Viewport: In computer graphics, a viewport is a rectangular area on a display
device where the graphical output is shown. Multiple viewports can exist within a single window, allowing
different perspectives or portions of a scene to be displayed simultaneously. Viewports are commonly used in
graphics software and 3D modeling applications. Windows: In computer graphics, a "window" refers to a
rectangular or defined area of the world coordinate space within a larger screen or graphical display. Windows in
this context are used to view a specific portion of an image, scene, or graphical content. They play a crucial role in
various aspects of computer graphics, including rendering, viewing, and interaction. Aspect Ratio: In the context
of computer graphics and display devices, the aspect ratio determines the proportional relationship between the
width and height of the screen or image. It is typically expressed as a ratio of the width to the height and plays a
crucial role in determining how images and content appear on a screen. It is expressed as two numbers separated
by a colon, such as 4:3 or 16:9. Viewpoints: In computer graphics and 3D modeling, a viewpoint refers to the
position and orientation from which a scene is observed or rendered. The viewpoint determines what is visible in
the final image and how objects are projected onto the two-dimensional screen. Adjusting the viewpoint allows
for different perspectives and views of a 3D scene.
Q-4.2: Cohen-Sutherland line clipping algorithm. Ans: In this algorithm, first of all, it is detected whether line
lies inside the screen or it is outside the screen. All lines come under any one of the following cases: Case1:
Visible: If both endpoints of a line are inside the window, the line is called a visible line. In this case there is no need for clipping and the algorithm can accept the line. Case2: Non-visible: If both endpoints of a line are outside the window, the line is called a non-visible line. In this case there is no need for clipping and the algorithm can reject the line. Case3: Partially visible: If one endpoint of a line is inside the window and the other is outside, the line is called a partially visible line. In this case clipping is required: an intersection point is found, and the part of the line outside the intersection point is clipped off;
It is a line clipping algorithm, which is used to clip the lines that are present outside the window or viewport. It
divides a 2D space into 9 regions and then efficiently determines the lines and portions of lines that are visible in
the central region of interest (the viewport). Algorithm:
Step 1: Assign a 4-bit region code to each endpoint of the line segment based on its position relative to the clipping window.
Step 2: Perform the OR operation on the codes of both endpoints.
Step 3: If OR = 0000, the line is considered completely visible. Else, perform the AND operation on both endpoints: if AND ≠ 0000, the line is invisible; else (AND = 0000) the line is partially inside the window and is considered for clipping.
Step 4: If a line is partially inside the window, find the intersection with the boundary of the window using the slope m = (y2 − y1)/(x2 − x1): (a) If Bit 1 is "1", the line intersects the left boundary of the window: y3 = y1 + m(Xmin − x1). (b) If Bit 2 is "1", the line intersects the right boundary: y3 = y1 + m(Xmax − x1). (c) If Bit 3 is "1", the line intersects the bottom boundary: x3 = x1 + (Ymin − y1)/m. (d) If Bit 4 is "1", the line intersects the top boundary: x3 = x1 + (Ymax − y1)/m.
Step 5: Overwrite the outside endpoint with the new intersection point and update its region code.
Step 6: Repeat step 4 until the line is completely clipped.
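A minimal Python sketch of the algorithm above (the concrete bit values chosen for left/right/bottom/top are an implementation choice; function names are illustrative):

# Region codes: bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland_clip(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment as (x1, y1, x2, y2), or None if invisible."""
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 | c2 == 0:               # OR = 0000: completely visible
            return (x1, y1, x2, y2)
        if c1 & c2 != 0:               # AND != 0000: completely invisible
            return None
        # Pick an endpoint that lies outside and move it to the boundary.
        c = c1 if c1 != 0 else c2
        if c & LEFT:
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        else:                          # TOP
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)

print(cohen_sutherland_clip(-5, 3, 15, 7, 0, 0, 10, 10))  # -> (0, 4.0, 10, 6.0)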
Q-4.3: Convert window-to-viewport coordinate. Ans: A window is specified by four world coordinates: WXmin,
WXmax, WYmin, and WYmax (see Fig. 5-2). Similarly, a viewport is described by four normalized device
coordinates: VXmin, VXmax, VYmin, and VYmax. The objective of window-to-viewport mapping is to convert the
world coordinates (WX, WY) of an arbitrary point to its corresponding normalized device coordinates (VX, VY). In
order to maintain the same relative placement of the point in the viewport as in the window, we require:
(WX − WXmin) / (WXmax − WXmin) = (VX − VXmin) / (VXmax − VXmin) and (WY − WYmin) / (WYmax − WYmin) = (VY − VYmin) / (VYmax − VYmin)
Thus,
VX = (VXmax − VXmin) / (WXmax − WXmin) · (WX − WXmin) + VXmin
VY = (VYmax − VYmin) / (WYmax − WYmin) · (WY − WYmin) + VYmin
Since the eight coordinate values that define the window and the viewport are just constants, we can express these two formulas for computing (VX, VY) from (WX, WY) in terms of a translate-scale-translate transformation N:
[VX]       [WX]
[VY] = N · [WY]
[1 ]       [1 ]
where, with sx = (VXmax − VXmin)/(WXmax − WXmin) and sy = (VYmax − VYmin)/(WYmax − WYmin),
    [1 0 VXmin]   [sx 0  0]   [1 0 −WXmin]
N = [0 1 VYmin] · [0  sy 0] · [0 1 −WYmin]
    [0 0 1    ]   [0  0  1]   [0 0 1     ]
Note that geometric distortions occur (e.g. squares in the window become rectangles in the viewport) whenever
the two scaling constants differ.
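A minimal Python sketch of the mapping above (the function name and the tuple layout for the window and viewport are illustrative):

def window_to_viewport(wx, wy, window, viewport):
    """Map world coordinates (wx, wy) from the window to the viewport.
    window and viewport are (xmin, ymin, xmax, ymax) tuples."""
    wxmin, wymin, wxmax, wymax = window
    vxmin, vymin, vxmax, vymax = viewport
    sx = (vxmax - vxmin) / (wxmax - wxmin)
    sy = (vymax - vymin) / (wymax - wymin)
    vx = sx * (wx - wxmin) + vxmin
    vy = sy * (wy - wymin) + vymin
    return (vx, vy)

# Window in world coordinates, viewport in normalized device coordinates.
print(window_to_viewport(50, 50, (0, 0, 100, 100), (0.0, 0.0, 0.5, 0.5)))  # (0.25, 0.25)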
Q-7.1: Define light, color. Ans: Light: Light, or visible light, is electromagnetic energy in the 400 to 700 nm, i.e., nanometer (10⁻⁹ meter), wavelength (λ) range of the spectrum. A typical light has its energy distributed across
the visible band and the proportions are described by a spectral energy distribution function P(λ). To model a light
or reproduce a given light with physical precision one would need to duplicate the exact energy distribution,
which is commonly referred to as spectral reproduction. Characteristics of light: Light can be characterized in
three perceptual terms: (1) The first one is brightness, which corresponds to its physical property called
luminance. Luminance measures the total energy in the light. (2) The second perceptual term is hue, which
distinguishes a white light from a red light or a green light. Hue corresponds to another physical property called
the dominant wavelength of the distribution. (3) The third perceptual term is saturation, which describes the
degree of vividness. Saturation corresponds to the physical property called excitation purity, which is defined to
be the percentage of luminance that is allocated to the dominant or pure color component. Color: In the context
of computer graphics, color refers to the visual property of objects that results from the way they interact with
light. Color in computer graphics is typically represented using the RGB (Red, Green, Blue) model, where different
intensities of red, green, and blue light are combined to create a wide range of colors.
Q-7.2: Define texture and types of it. Ans: Texture: While gradual shading based on illumination is an
important step towards achieving photo-realism, most real-life objects have regular or irregular surface features
(e.g., wood grain). These surface details are collectively referred to as surface texture. Types: There are three
approaches to adding surface texture: Projected texture: Projected texture is an effective tool when target
surfaces are relatively flat and facing the reference plane. It refers to the technique of casting a texture onto a
surface in a three-dimensional (3D) scene using a light source. It enhances visual realism by simulating how light
interacts with surfaces in a scene. Texture mapping: Texture mapping is the process of applying a texture to the
surface of a 3D model. The texture coordinates are mapped to the vertices of the model, allowing the texture to
be wrapped around and displayed on the model's surface. Solid texture: Solid texture also known as a procedural
solid texture or 3D texture, is a type of texture that is generated algorithmically rather than being based on an
image or predefined data.
Q-7.3: Describe the Phong model. Ans: This is a widely used and highly effective way to mimic the reflection of
light from object surfaces to the viewer's eye. It is considered an empirical approach because, although it is
consistent with some basic principles of physics, it is largely based on our observation of the phenomenon. It is
also referred to as a local illumination model because its main focus is on the direct impact of the light coming
from the light source; The Phong model combines three components to simulate the reflection of light on a
surface: Ambient reflection: It is a uniform background illumination that provides a basic level of brightness to a
surface. Diffuse reflection: It is proportional to the dot product of the light direction and the surface normal.
Specular reflection: Specular reflection models the shiny highlight that appears on a surface when illuminated. It
depends on the viewing direction, the light direction, and the surface's reflective properties;
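A minimal Python/NumPy sketch evaluating the three Phong terms at a single surface point for one light source; the coefficient values (ka, kd, ks, shininess) are illustrative, not canonical:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.7, ks=0.4, shininess=32,
                    i_ambient=1.0, i_light=1.0):
    """Return ambient + diffuse + specular intensity for a single light."""
    n = normalize(normal)
    l = normalize(light_dir)        # direction from the point towards the light
    v = normalize(view_dir)         # direction from the point towards the viewer
    ambient = ka * i_ambient
    diffuse = kd * i_light * max(np.dot(n, l), 0.0)
    r = 2 * np.dot(n, l) * n - l    # mirror reflection of the light direction
    specular = ks * i_light * max(np.dot(r, v), 0.0) ** shininess
    return ambient + diffuse + specular

print(phong_intensity(normal=np.array([0.0, 0.0, 1.0]),
                      light_dir=np.array([0.0, 0.0, 1.0]),
                      view_dir=np.array([0.0, 0.0, 1.0])))   # head-on: brightest case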
Q-7.4: Types of Interpolative shading methods: In the context of shading, interpolative shading methods are
crucial for determining how light interacts with surfaces across a polygon or other geometric primitives. These
methods are employed to create smooth transitions and realistic shading effects in rendered images. The
interpolation process involves calculating values for pixels or points within a polygon by considering the values at
the polygon's vertices. Types: Here are different types of interpolation shading methods: Flat shading: In flat
shading, the color or shading information for a polygon is determined by evaluating the lighting model once for
each polygon, and the resulting color is applied to the entire polygon. Gouraud Shading: Gouraud shading, named
after Henri Gouraud, calculates shading values at the vertices of a polygon and then linearly interpolates these
values across the polygon's surface. Phong Shading: Phong shading, introduced by Bui Tuong Phong, calculates
shading values at each pixel by interpolating the vertex normals across the polygon and then applying the lighting
model at each pixel.
Q-7.5: How do we got normals for Phong shading? At what points are these normals calculated. Ans: In
Phong shading, normals are crucial for accurately calculating the reflection of light on a surface. Normals
represent the direction perpendicular to the surface at a given point, and they are used to determine how light
interacts with that point. The normals are typically calculated at the vertices of polygons and then interpolated
across the surface of the polygon. Here's a step-by-step explanation of how normals are obtained for Phong
shading: (1) Define Normals at Vertices: At each vertex of a polygon, a normal vector is defined to be
perpendicular to the surface at that vertex. The normal vectors can be calculated based on the geometry of the
surface. (2) Interpolate Normals Across the Surface: Once the normal vectors are defined at the vertices, they are
interpolated across the surface of the polygon, which calculates the normal at each pixel based on its position
within the polygon. (3) Use Interpolated Normals in Shading: The interpolated normals are then used in the
shading equations, such as the Phong reflection model, at each pixel. These equations take into account the
direction of the light, the viewing direction, and the properties of the material to determine the intensity and
color of light reflected at that point.
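A minimal sketch of step (2), assuming the barycentric weights for the pixel are already known; the helper names are illustrative, and the interpolated normal must be renormalized before it is used in the lighting equations.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / length for c in v]

def interpolate_normal(weights, n0, n1, n2):
    """Blend the three vertex normals with barycentric weights (u, v, w)
    and renormalize, giving the per-pixel normal used by Phong shading."""
    u, v, w = weights
    blended = [u * a + v * b + w * c for a, b, c in zip(n0, n1, n2)]
    return normalize(blended)

# Pixel halfway between two vertices whose normals tilt left and right
print(interpolate_normal((0.5, 0.5, 0.0),
                         [-0.7071, 0.0, 0.7071],
                         [0.7071, 0.0, 0.7071],
                         [0.0, 0.0, 1.0]))  # ~[0.0, 0.0, 1.0]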
Q-1: Differences between Interpolation & Approximation curve. Ans:
Aspect | Interpolation Curve | Approximation Curve
Definition | Passes exactly through the given data points. | Does not necessarily pass through the data points.
Curve characteristics | Piecewise smooth (continuous at the data points). | Generally smooth; may be defined by control points.
Accuracy | Highly accurate at the data points. | Smooth representation of the overall data trend.
Use case | Shape representation, path animation, motion in computer graphics. | Curve/surface fitting, smooth design representation.
Handling noise or outliers | Prone to oscillation if the data contain noise or inconsistencies. | Smoother fit; reduced impact of noise/outliers.
Complexity | Can be computationally expensive for high-degree interpolation. | Generally more computationally efficient.
Constraints | Must have data points to interpolate. | Requires control points or data points to fit.
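A small sketch of the distinction, assuming NumPy is available: a cubic polynomial fitted to four points interpolates them exactly, while a degree-1 least-squares fit only approximates the overall trend.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 4.0])

# Interpolation: a degree-3 polynomial through 4 points reproduces them exactly
interp = np.polyfit(x, y, 3)
print(np.polyval(interp, x))   # ~[1. 3. 2. 4.]

# Approximation: a degree-1 least-squares fit follows the trend, not the points
approx = np.polyfit(x, y, 1)
print(np.polyval(approx, x))   # values near, but not equal to, y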
Q-2: Differences between parallel & perspective projections. Ans: a) Parallel projection represents the object as if it were viewed through a telescope, with parallel lines of sight. a) Perspective projection represents the object in a three-dimensional way, as the human eye or a camera sees it. b) It does not alter the shape or the size of the object on the projection plane. b) In perspective projection, objects that are far away appear smaller, while objects near the viewer's eye appear bigger. c) The distance of the object from the center of projection is infinite. c) The distance of the object from the center of projection is finite. d) Parallel projection can give an accurate (measurable) view of the object. d) Perspective projection cannot give an accurate (measurable) view of the object. e) The lines of projection are parallel, and the projectors in parallel projection are parallel. e) The lines of projection are not parallel, and the projectors in perspective projection converge at the center of projection.
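A minimal sketch of the two projections for a single 3D point, assuming a viewer looking down the z-axis; this is a simplified setup for illustration, not a full viewing pipeline.

def parallel_project(x, y, z):
    """Orthographic (parallel) projection onto the z = 0 plane:
    the z coordinate is simply dropped, so size does not depend on depth."""
    return x, y

def perspective_project(x, y, z, d=1.0):
    """Perspective projection with the center of projection at the origin and
    the view plane at z = d: coordinates are scaled by d / z, so distant
    points (large z) shrink toward the center."""
    return x * d / z, y * d / z

print(parallel_project(2.0, 2.0, 5.0))       # (2.0, 2.0) regardless of depth
print(parallel_project(2.0, 2.0, 50.0))      # (2.0, 2.0)
print(perspective_project(2.0, 2.0, 5.0))    # (0.4, 0.4)
print(perspective_project(2.0, 2.0, 50.0))   # (0.04, 0.04) - farther, so smaller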
Q-4: Write down the differences between raster display and vector display.
Ans: a) Raster displays represent images as a grid of pixels, where each pixel is a discrete point on the
screen. These pixels collectively form the image. a) Vector displays represent images using mathematical
equations to define shapes and lines. Instead of pixels, they use vectors, which are paths defined by endpoints
and equations. b) It renders images by scanning and filling in pixels. b) It renders images by generating lines and
shapes based on equations. c) Raster displays have a fixed resolution determined by the number of pixels on the
screen. Higher resolution often results in more detailed and sharper images. c) Vector displays are not limited by a
fixed grid of pixels, so they can theoretically produce sharp images at any size. The image quality is not dependent
on the screen resolution. d) It may lose quality when resized, as pixelation becomes apparent. d) It maintains
image quality when resized, shapes remain smooth. e) Examples: Computer monitors, LCD screens, most modern
TVs. e) Examples: Early arcade game displays (e.g., Asteroids), computer-aided design (CAD).
Q-3: Write down the differences between computer graphics and image processing. Ans: a) Computer graphics primarily deals with the creation, manipulation, and representation of visual images and animations. a)
Image processing focuses on the manipulation and analysis of existing images. b) The primary goal of computer
graphics is to create visually compelling and interactive content for human consumption, often focusing on
aesthetics and realism. b) Image processing aims to extract information from images, improve their quality, and
make them suitable for analysis or specific applications. c) In computer graphics, the input often starts with
geometric models or descriptions of objects and scenes. c) Image processing starts with digital images as input.
These images can be captured from various sources, such as cameras, scanners, or medical imaging devices. d)
Generates visual output, such as images, animations, or interactive graphics, often for entertainment, simulation,
or design purposes. d) Produces modified or enhanced images as output, with improvements in quality, clarity, or
the extraction of specific information. e) Computer graphics find applications in video games, movies, virtual
reality, computer-aided design (CAD), architectural visualization, and user interface design. e) Image processing is
used in medical imaging, satellite image analysis, facial recognition, quality control in manufacturing, and various
scientific and engineering applications.
Q-6: Differentiate between geometric transformation and coordinate transformation. Ans: Definition: a)
Geometric transformation involves altering the shape, size, orientation, or position of objects in a two-
dimensional (2D) or three-dimensional (3D) space. a) Coordinate transformation involves changing the coordinate
system in which points or objects are represented. Purpose: b) The primary goal of geometric transformation is to
change the appearance of an object while preserving its essential geometric properties, such as angles, distances,
and shapes. b) The primary goal of coordinate transformation is to map points from one coordinate system to
another, allowing for changes in the way positions are expressed without necessarily altering the shape or
appearance of the objects. Operations: c) Common geometric transformations include translation (shifting),
rotation, scaling (resizing), shearing, and reflection. c) Common coordinate transformations include translation of
the origin, rotation of the coordinate axes, scaling of coordinates, and shearing of coordinates. Representation: d)
Geometric transformations are often represented using transformation matrices, which encode the linear or
affine transformations applied to the coordinates of points in space. d) Coordinate transformations are typically
represented using matrices or mathematical equations that define the relationship between the coordinates in
the original and transformed coordinate systems.
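A small sketch of the contrast for a 2D rotation by an angle theta: a geometric transformation rotates the point itself, while the corresponding coordinate transformation leaves the point fixed and rotates the axes (equivalently, applies the inverse rotation to the coordinates). The function names are illustrative.

import math

def rotate_point(x, y, theta):
    """Geometric transformation: rotate the point by theta about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return c * x - s * y, s * x + c * y

def rotate_axes(x, y, theta):
    """Coordinate transformation: express the same (unmoved) point in a
    coordinate system whose axes are rotated by theta (the inverse rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    return c * x + s * y, -s * x + c * y

theta = math.pi / 2
print(rotate_point(1.0, 0.0, theta))  # point moves: approximately (0, 1)
print(rotate_axes(1.0, 0.0, theta))   # axes move: approximately (0, -1)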
Q-5.: What is color model and luminance & Purpose of chromaticity diagram. Ans: Color model: A color model
is a system used to describe a color. It is a mathematical representation or system that describes how colors can
be represented as combinations of various primary colors. Three of the most common color models are RGB,
CMYK, and HSL/HSV. Luminance: Luminance refers to the brightness or intensity of light emitted or reflected from
a surface, as perceived by the human eye. It is a measure of the amount of light energy emitted, transmitted, or
reflected per unit of area. It is distinct from the concept of color and is solely concerned with the intensity or
brightness of the light. Chromaticity diagram: A chromaticity diagram is a visual representation of colors without
considering their luminance or brightness. It's a two-dimensional graph that displays colors based on their
chromatic properties, specifically their hue and saturation; The most well-known chromaticity diagram is the CIE
1931 xy chromaticity diagram developed by the International Commission on Illumination (CIE). Purpose of
chromaticity diagram: Color Representation: Chromaticity diagrams provide a standardized way to represent and
visualize colors. Color Range: They illustrate the range of colors that can be produced or perceived by a particular
device. Color mixing: They show how colors combine to produce new ones; a mixture of two colors lies on the straight line joining them on the diagram. Setting standards: They help define standards for how colors should be reproduced on devices such as displays and printers and in designs. Quality control: Industries use them to verify that colors are correct and consistent in products such as lights, screens, and printed images. Research: They are also useful to scientists and engineers developing new ways to understand, measure, and reproduce color.
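As a small illustration of luminance being separate from chromaticity, here is a sketch that computes relative luminance from a linear RGB triple using the commonly cited Rec. 709 weights; the exact coefficients depend on which RGB color space is assumed.

def relative_luminance(r, g, b):
    """Relative luminance of a linear RGB color (Rec. 709 / sRGB primaries).
    Green contributes most because the eye is most sensitive to it."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(relative_luminance(1.0, 1.0, 1.0))  # white -> 1.0
print(relative_luminance(1.0, 0.0, 0.0))  # pure red has low luminance (0.2126)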