
CIT371 INTRODUCTION TO COMPUTER GRAPHICS AND ANIMATION

Q. What is Computer Graphics all about?

Computer graphics generally means the creation, storage, and manipulation of models and images. Such models come from a diverse and expanding set of fields, including physical, mathematical, artistic, biological, and even conceptual (abstract) structures. "Perhaps the best way to define computer graphics is to find out what it is not. It is not a machine. It is not a computer, nor a group of computer programs. It is not the know-how of a graphic designer, a programmer, a writer, a motion picture specialist, or a reproduction specialist. Computer graphics is all these: a consciously managed and documented technology directed toward communicating information accurately and descriptively." Computer graphics (CG) is the field of visual computing, where one utilizes computers both to generate visual images synthetically and to integrate or alter visual and spatial information sampled from the real world.

Q. Write on the history of Computer graphics under the following:

i) the age of Sutherland

In the early 1960s IBM, Sperry-Rand, Burroughs and a few other computer companies existed. The computers of the day had a few kilobytes of memory, no operating systems to speak of, and no graphical display monitors. The peripherals were Hollerith punch cards, line printers, and roll-paper plotters. The only programming languages supported were assembler, FORTRAN, and Algol. Function graphs and "Snoopy" calendars were about the only graphics done. In 1963 Ivan Sutherland presented his paper Sketchpad at the Summer Joint Computer Conference. Sketchpad allowed interactive design on a vector graphics display monitor with a light pen input device. Most people mark this event as the origin of computer graphics.

ii) the early '70s

The state of the art in computing was an IBM 360 computer with about 64
KB of memory, a Tektronix 4014 storage tube, or a vector display with a
light pen (but these were very expensive).

Software and Algorithms

Shading (rendering) techniques were developed by Gouraud and Phong at the University of Utah. Phong also introduced a reflection model that included specular highlights. Keyframe-based animation for 3-D graphics was demonstrated. Xerox PARC developed a "paint" program. Ed Catmull introduced parametric patch rendering, the z-buffer algorithm, and texture mapping. BASIC, C, and Unix were developed at Dartmouth and Bell Labs.

Hardware and Technology

An Evans & Sutherland Picture System was the high-end graphics computer. It was a vector display with hardware support for clipping and perspective. Xerox PARC introduced the Alto personal computer, and the 8-bit microprocessor was introduced by Intel.

iii) the '00s

Today most graphicists want an Intel PC with at least 256 MB of memory and a 10 GB hard drive. Their display should have a graphics board that supports real-time texture mapping. A flatbed scanner, color laser printer, digital video camera, DVD, and MPEG encoder/decoder are the peripherals one wants. The environment for program development is most likely Windows or Linux, with Direct3D and OpenGL, but Java 3D might become more important. Programs would typically be written in C++ or Java.

What will happen in the near future is difficult to say, but high-definition TV (HDTV) is poised to take off (after years of hype). Ubiquitous, untethered, wireless computing should become widespread, and audio and gestural input devices should replace some of the functionality of the keyboard and mouse.

You should expect 3-D modeling and video editing for the masses, computer vision for robotic devices and for capturing facial expressions, and realistic rendering of difficult things like a human face, hair, and water. With any luck C++ will fall out of favor.

Q. What are the application areas of computer graphics?

1. Medical Imaging

There are few endeavors more noble than the preservation of life. Today, it can honestly be said that computer graphics plays a significant role in saving lives. The range of application spans from tools for teaching and diagnosis all the way to treatment. Computer graphics is a tool in medical applications rather than a mere artifact. No cheating or tricks allowed.

2. Scientific Visualization

Computer graphics makes vast quantities of data accessible. Numerical simulations frequently produce millions of data values. Similarly, satellite-based sensors amass data at rates beyond our abilities to interpret them by any other means than visually. Mathematicians use computer graphics to explore abstract and high-dimensional functions and spaces. Physicists can use computer graphics to transcend the limits of scale; with it they can explore both microscopic and macroscopic worlds.

3. Computer Aided Design

Computer graphics has had a dramatic impact on the design process. Today, most mechanical and electronic designs are executed entirely on computer. Increasingly, architectural and product designs are also migrating to the computer. Automated tools are also available that verify tolerances and design constraints directly from CAD designs. CAD designs also play a key role in a wide range of processes, from the design of tooling fixtures to manufacturing.

4. Graphical User Interfaces (GUIs)

Computer graphics is an integral part of everyday computing. Nowhere is this fact more evident than in modern computer interface design. Graphical elements such as windows, cursors, menus, and icons are so commonplace that it is difficult to imagine computing without them. Once, graphics programming was considered a specialty. Today, nearly all professional programmers must have an understanding of graphics in order to accept input and present output to users.

5. Games

Games are an important driving force in computer graphics. In this class we are going to talk about games. We'll discuss how they work. We'll also question how they get so much done with so little to work with.

6. Entertainment

If you can imagine it, it can be done with computer graphics. Obviously, Hollywood has caught on to this. Each summer, we are amazed by state-of-the-art special effects. Computer graphics is now as much a part of the entertainment industry as stunt men and makeup. The entertainment industry plays many other important roles in the field of computer graphics.

Q. What is Interactive Computer Graphics?

The user controls the contents, structure, and appearance of objects and their displayed images via rapid visual feedback. The basic components of an interactive graphics system are input (e.g., mouse, tablet and stylus, force feedback device, scanner, live video streams), processing (and storage), and display/output (e.g., screen, paper-based printer, video recorder, non-linear editor).

Q. What do we need in computer graphics?

In computer graphics we work with points and vectors defined in terms of some coordinate frame (a positioned coordinate system). We also need to change the coordinate representation of points and vectors, and hence to transform between different coordinate frames. A mathematical background in geometry and algebra is therefore essential, as is a knowledge of basic programming in the C language.

Q. Briefly describe the basic graphics rendering pipeline

The Graphics Rendering Pipeline

Rendering is the conversion of a scene into an image. The scene is composed of models in three-dimensional space. Models are composed of primitives supported by the rendering system, and are entered by hand or created by a program.

For our purposes, we assume the models have already been generated. The image may be drawn on a monitor, printed on a laser printer, or written to a raster in memory or a file. These different possibilities require us to consider device independence.

Classically, the "model" to "scene" to "image" conversion is broken into finer steps, called the graphics pipeline. The pipeline is commonly implemented in graphics hardware to get interactive speeds. At a high level, the pipeline proceeds through a sequence of stages.

Each stage refines the scene, converting primitives in modelling space to
primitives in device space, where they are converted to pixels (rasterized).

A number of coordinate systems are used:

MCS: Modeling Coordinate System.

WCS: World Coordinate System.

VCS: Viewer Coordinate System.

NDCS: Normalized Device Coordinate System.

DCS or SCS: Device Coordinate System or, equivalently, the Screen Coordinate System.

Keeping these straight is the key to understanding a rendering system. The transformation between two coordinate systems is represented with a matrix. Derived information may be added (lighting and shading), and primitives may be removed (hidden surface removal) or modified (clipping). A sketch of such a matrix-driven transform chain is given below.
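As an illustration, here is a minimal sketch in C of carrying a point from modeling coordinates (MCS) through world (WCS) to viewer coordinates (VCS) by matrix multiplication. The names are hypothetical and the identity matrices are placeholders for real transforms, not any particular API:

#include <stdio.h>

typedef struct { double m[4][4]; } Mat4;   /* 4x4 transform matrix */
typedef struct { double v[4]; } Vec4;      /* homogeneous 3D point */

/* r = A * p : apply a transform to a point */
Vec4 mat4_apply(const Mat4 *A, Vec4 p) {
    Vec4 r;
    for (int i = 0; i < 4; i++) {
        r.v[i] = 0.0;
        for (int j = 0; j < 4; j++)
            r.v[i] += A->m[i][j] * p.v[j];
    }
    return r;
}

int main(void) {
    /* identity placeholders for the modeling and viewing transforms */
    Mat4 modelToWorld = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    Mat4 worldToView  = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    Vec4 p = {{1.0, 2.0, 3.0, 1.0}};               /* point in MCS */

    Vec4 world = mat4_apply(&modelToWorld, p);     /* MCS -> WCS */
    Vec4 view  = mat4_apply(&worldToView, world);  /* WCS -> VCS */
    printf("view-space point: (%g, %g, %g)\n", view.v[0], view.v[1], view.v[2]);
    return 0;
}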

Q. Explain various display devices.

An important component is the "refresh buffer" or "frame buffer", which is a random-access memory containing one or more values per pixel, used to drive the display. The video controller translates the contents of the frame buffer into signals used by the CRT to illuminate the screen. It works as follows:

1. The display screen is coated with "phosphors" which emit light when excited by an electron beam. (There are three types of phosphor, emitting red, green, and blue light.) They are arranged in rows, with three phosphor dots (R, G, and B) for each pixel.

2. The energy exciting the phosphors dissipates quickly, so the entire screen
must be refreshed 60 times per second.

3. An electron gun scans the screen, line by line, mapping out a scan
pattern. On each scan of the screen, each pixel is passed over once. Using
the contents of the frame buffer, the controller controls the intensity of the
beam hitting each pixel, producing a certain color.

1. Cathode Ray Tube (CRT)

The electron gun sends a beam aimed (deflected) at a particular point on the screen. It traces out a path on the screen ("scan lines"), hitting each pixel once per cycle.

 Phosphors emit light (phosphorescence); the output decays rapidly (exponentially, in 10 to 60 microseconds).
 As a result of this decay, the entire screen must be redrawn (refreshed) at least 60 times per second. This is called the refresh rate. If the refresh rate is too slow, we will see a noticeable flicker on the screen. The CFF (Critical Fusion Frequency) is the minimum refresh rate needed to avoid flicker. This depends to some degree on the human observer. It also depends on the persistence of the phosphors; that is, how long it takes for their output to decay.
 The horizontal scan rate is defined as the number of scan lines traced out per second.
 The most common form of CRT is the shadow-mask CRT. Each pixel consists of a group of three phosphor dots (one each for red, green, and blue), arranged in a triangular form called a triad. The shadow mask is a layer with one hole per pixel. To excite one pixel, the electron gun (actually three guns, one for each of red, green, and blue) fires its electron stream through the hole in the mask to hit that pixel.
 The dot pitch is the distance between the centers of two triads. It is used to measure the resolution of the screen.

(Note: On a vector display, a scan is in the form of a list of lines to be drawn, so the time to refresh is dependent on the length of the display list.)

2. Liquid Crystal Display (LCD)

A liquid crystal display consists of 6 layers, arranged in the following order (back-to-front):

 A reflective layer which acts as a mirror
 A horizontal polarizer, which acts as a filter, allowing only the horizontal component of light to pass through
 A layer of horizontal grid wires used to address individual pixels
 The liquid crystal layer
 A layer of vertical grid wires
 A vertical polarizer, which acts as a filter, allowing only the vertical component of light to pass through

How it works:

The liquid crystal rotates the polarity of incoming light by 90 degrees. Ambient light is captured, vertically polarized, rotated to horizontal polarity by the liquid crystal layer, passes through the horizontal filter, is reflected by the reflective layer, and passes back through all the layers, giving an appearance of lightness. However, if the liquid crystal molecules are charged, they become aligned and no longer change the polarity of light passing through them. If this occurs, no light can pass through the horizontal filter, so the screen appears dark. The principle of the display is to apply this charge selectively to points in the liquid crystal layer, thus lighting or not lighting points on the screen. Crystals can be dyed to provide color. An LCD may be backlit, so as not to be dependent on ambient light. TFT (thin film transistor) is the most popular LCD technology today.

Q. What are the different hardware and software components of graphics?

Graphics software

Graphics software (that is, the software tool needed to create graphics applications) has taken the form of subprogram libraries. The libraries contain functions to do things like: draw points, lines, and polygons; apply transformations; fill areas with color; and handle user interactions. An important goal has been the development of standard hardware-independent libraries such as:

 CORE
 GKS (Graphical Kernel System)
 PHIGS (Programmer's Hierarchical Interactive Graphics System)
 X Windows
 OpenGL

Hardware vendors may implement some of the OpenGL primitives in hardware for speed.

Hardware

"Vector graphics": early graphic devices were line-oriented, for example, a "pen plotter" from H-P. The primitive operation is line drawing.

"Raster graphics": today's standard. A raster is a 2-dimensional grid of pixels (picture elements). Each pixel may be addressed and illuminated independently, so the primitive operation is to draw a point; that is, assign a color to a pixel. Everything else is built upon that. There are a variety of raster devices, both hardcopy and display.

Hardcopy:

 Laser printer
 Ink-jet printer
 Film recorder
 Electrostatic printer
 Pen plotter

Q. List the six major elements of a graphic system.

1. Mouse
2. Tablet and stylus
3. Force feedback device
4. Scanner
5. Live video streams
6. Display/output (e.g., screen, paper-based printer, video recorder, non-linear editor)

Q. List four major areas of computer graphics.

1. Medical Imaging
2. Scientific Visualization
3. Computer Aided Design
4. Graphical User Interfaces (GUIs)

Q. What do you understand by BRDF?

The bidirectional reflectance distribution function (BRDF) is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. A BRDF is defined as the ratio of the quantity of reflected light in direction wo to the amount of light that reaches the surface from direction wi. To make this clear, let's call the quantity of light reflected from the surface in direction wo, Lo, and the amount of light arriving from direction wi, Ei; the BRDF is then the ratio Lo/Ei.

Q. Explain the two classes and two properties of BRDFs.

There are two classes of BRDFs and two important properties. BRDFs can be classified into two classes: isotropic BRDFs and anisotropic BRDFs. The two important properties of BRDFs are reciprocity and conservation of energy.

The term isotropic is used to describe BRDFs that represent reflectance properties that are invariant with respect to rotation of the surface around the surface normal vector. Consider a small, relatively smooth surface element and fix the light and viewer positions. If we were to rotate the surface about its normal, the BRDF value (and consequently the resulting illumination) would remain unchanged. Materials with this characteristic, such as smooth plastics, have isotropic BRDFs.

Anisotropy, on the other hand, refers to BRDFs that describe reflectance properties that do exhibit change with respect to rotation of the surface around the surface normal vector. Some examples of materials that have anisotropic BRDFs are brushed metal, satin, and hair. Strictly, any material that exhibits even the slightest anisotropic reflection has an anisotropic BRDF, and many real-world surfaces have at least subtle anisotropy. Still, most real-world BRDFs are close to isotropic, and the notion of isotropic BRDFs is useful because many classes of analytical BRDF models fall within this class.

Q. State two additional properties of physically based BRDFs

To be based on physical laws and considered physically plausible, BRDFs must have two properties: reciprocity and conservation of energy. The reciprocity property says that if the sense of the traveling light is reversed, say, by swapping the incoming and outgoing directions, the value of the BRDF remains unchanged. The conservation of energy property states that the total quantity of light scattered during the light-matter interaction cannot exceed the original quantity of light arriving at the surface. A small sketch illustrating reciprocity for the simplest plausible BRDF follows.
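For illustration, here is a minimal sketch in C of the simplest physically plausible BRDF, the Lambertian model (the function names are hypothetical). Its value albedo/pi is independent of direction, so swapping wi and wo trivially leaves it unchanged, exactly as reciprocity requires:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } Vec3;

/* Lambertian BRDF: f(wi, wo) = albedo / pi for all direction pairs. */
double brdf_lambert(Vec3 wi, Vec3 wo, double albedo) {
    (void)wi; (void)wo;   /* directions unused: isotropic and reciprocal */
    return albedo / M_PI;
}

int main(void) {
    Vec3 wi = {0.0, 0.7071, 0.7071}, wo = {0.7071, 0.0, 0.7071};
    double f1 = brdf_lambert(wi, wo, 0.5);
    double f2 = brdf_lambert(wo, wi, 0.5);          /* reversed light path */
    printf("f(wi,wo)=%f  f(wo,wi)=%f\n", f1, f2);   /* equal: reciprocity */
    return 0;
}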

Q. Explain the term transformations.

Transformations are often considered to be one of the hardest concepts in elementary computer graphics. A cartographer can change the size of charts and topographical maps; likewise, if graphics images are coded as numbers, the numbers can be stored in memory. These numbers are modified by mathematical operations called transformations. The purpose of using computers for drawing is to give the user the ability to view an object from different angles, and to enlarge or reduce the scale or shape of the object; this too is transformation. Transformation means changing some graphics into something else by applying rules. We can have various types of transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called a 2D transformation.

Two essential aspects of transformation are given below:

1. Each transformation is a single entity. It can be denoted by a unique name or symbol.
2. It is possible to combine two transformations into a single transformation, e.g., if A is a transformation for translation and the B transformation performs scaling, the combination of the two is C = AB. C is obtained by the concatenation property.

There are two complementary points of view for describing object transformation.

1. Geometric Transformation: The object itself is transformed relative to the coordinate system or background. The mathematical statement of this viewpoint is defined by geometric transformations applied to each point of the object.
2. Coordinate Transformation: The object is held stationary while the coordinate system is transformed relative to the object. This effect is attained through the application of coordinate transformations.

Q. What are homogeneous coordinates?

Homogeneous coordinates are another way to represent points, used to simplify the way in which we express affine transformations. Homogeneous coordinates are a more convenient notation for 2D transformations:

 An equivalent representation
 Require only a single matrix to represent general affine transformations
 Can be used to represent perspective transformations (later)

A worked example in code follows.
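Here is a minimal sketch in C (names hypothetical) in which a 2D translation A and a scaling B are each a single 3x3 homogeneous matrix, and their concatenation C = AB from the previous question applies both with one matrix:

#include <stdio.h>

typedef struct { double m[3][3]; } Mat3;

Mat3 mat3_mul(Mat3 a, Mat3 b) {            /* C = A * B */
    Mat3 c;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            c.m[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                c.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return c;
}

int main(void) {
    Mat3 A = {{{1,0,4},{0,1,2},{0,0,1}}};  /* translate by (4, 2)   */
    Mat3 B = {{{3,0,0},{0,3,0},{0,0,1}}};  /* scale by 3            */
    Mat3 C = mat3_mul(A, B);               /* scale, then translate */

    double p[3] = {1.0, 1.0, 1.0};         /* point (1,1), homogeneous form */
    double q[3] = {0.0, 0.0, 0.0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            q[i] += C.m[i][j] * p[j];
    printf("(%g, %g)\n", q[0], q[1]);      /* prints (7, 5) */
    return 0;
}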

Q. Define focal length.

The focal length of an optical system is a measure of how strongly the system converges or diverges light; it is the inverse of the system's optical power. A positive focal length indicates that a system converges light, while a negative focal length indicates that the system diverges light. A system with a shorter focal length bends the rays more sharply, bringing them to a focus in a shorter distance or diverging them more quickly. For the special case of a thin lens in air, a positive focal length is the distance over which initially collimated (parallel) rays are brought to a focus; alternatively, a negative focal length indicates how far in front of the lens a point source must be located to form a collimated beam. For more general optical systems, the focal length has no intuitive meaning; it is simply the inverse of the system's optical power.

Q. Outline two important properties of perspective projection.

1. Parallel lines do not remain parallel; rendered object size decreases with distance from the image plane.
2. It is more realistic and provides a sense of being in the scene; it is used for immersive environments.

Q. What do you understand by rasterization?

Rasterisation (or rasterization) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines which, when displayed together, create the image that was represented via shapes). Before PCs could support 3-D graphics in hardware, most programmers wrote software for scan conversion (rasterization), used the painter's algorithm for hidden surface removal, and developed "tricks" for real-time animation.

Q. Explain Z-buffering and state one of its uses.

Z-buffering, also known as depth buffering, is a technique in computer graphics programming. It is used to determine whether an object (or part of an object) is visible in a scene. It can be implemented either in hardware or software, and is used to increase rendering efficiency.

Advantages:

 Easy to implement in hardware (and software)
 Fast with hardware support (fast depth-buffer memory)
 Polygons may be processed in arbitrary order
 Handles polygon interpenetration trivially

Disadvantages:

 Needs lots of memory for the z-buffer
 Limited precision of integer depth values
 Prone to aliasing (often countered by super-sampling)
 Overhead in z-checking: requires fast memory

A minimal sketch of the algorithm is given below.
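This is a minimal sketch of the z-buffer test in C, assuming a small fixed-size buffer (real implementations live in graphics hardware). Each candidate fragment is kept only if it is nearer than what is already stored:

#include <float.h>

#define W 640
#define H 480

static float zbuf[H][W];          /* depth per pixel */
static unsigned color[H][W];      /* color per pixel */

void zbuf_clear(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x] = FLT_MAX; /* "infinitely far away" */
            color[y][x] = 0;
        }
}

/* Called once per fragment, in any polygon order. */
void zbuf_write(int x, int y, float z, unsigned c) {
    if (z < zbuf[y][x]) {         /* nearer than current occupant? */
        zbuf[y][x] = z;
        color[y][x] = c;
    }
}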

Q. Outline the three major mapping techniques.

Two-part Mapping

An alternative is to texture a whole object at one go using a projection from a simpler intermediate object.

This is a two-part mapping:

1. Map the texture onto a regular surface, e.g., a sphere, cylinder, or box.

2. Project the texture onto the object, using the normal from the intermediate surface, the normal to the intermediate surface, or a projector from the centre of the object.

All we have to do is define the mappings onto the intermediate object.

Environmental Mapping

We can simulate the appearance of a reflective object in a complex environment without using ray tracing. This is called environment or reflection mapping. We position a viewpoint within the object looking out, then use the resulting image as a texture to apply to the object:

 Replace the reflective object S by a projection surface P

 Compute image of environment on P and project image from P to S

Bump Mapping

Bump maps are used to capture fine-scale surface detail or roughness:

 Apply a perturbation function to the surface normal
 Use the perturbed normal in lighting calculations

Elements from the bump map are mapped to a polygon in exactly the same way as a surface texture, but they are interpreted as a perturbation to the surface normal, which in turn affects the rendered intensity. The bump map may contain:

 Random patterns
 Regular patterns
 Surface detail

Q. What do you understand by keyframing?

Keyframing is an animation technique where motion curves are interpolated through states at given times, (q1, ..., qT), called keyframes, which are specified by a user.

The underlying technique is interpolation: the in-between frames are interpolated from the keyframes. Originally this was done by armies of underpaid animators, but now it is done with computers. A key frame or keyframe is a location on a timeline which marks the beginning or end of a transition. It holds special information that defines where a transition should start or stop. The intermediate frames are interpolated over time between those definitions to create the illusion of motion. A minimal interpolation sketch follows.
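Here is a minimal sketch in C of evaluating one motion curve by linear interpolation between keyframes (t_i, q_i); the function name is hypothetical, and production systems typically use smoother spline interpolation:

/* t[] holds keyframe times (ascending), q[] the keyframe values, n >= 2. */
double keyframe_eval(const double *t, const double *q, int n, double time) {
    if (time <= t[0])     return q[0];
    if (time >= t[n - 1]) return q[n - 1];
    int i = 0;
    while (t[i + 1] < time) i++;                  /* find surrounding keys */
    double u = (time - t[i]) / (t[i + 1] - t[i]); /* 0..1 within segment   */
    return (1.0 - u) * q[i] + u * q[i + 1];       /* the in-between value  */
}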

Q. State two advantages and two disadvantages of keyframing.

Advantages of keyframing

1. The advantage of using Auto-Key is that your work is reduced to moving to a new time and then editing objects, the camera, or the environment.
2. You are freed from the added thought process and actions needed to generate keyframes for each move your objects make.

Q. Define Kinematics

Kinematics describes the properties of shape and motion independent of the physical forces that cause motion. Kinematic techniques are used often in keyframing, with an animator either setting joint parameters explicitly with forward kinematics or specifying a few key joint orientations and having the rest computed automatically with inverse kinematics.

Forward Kinematics

With forward kinematics, a point p is positioned by p = f(θ), where θ is a state vector (θ1, θ2, ..., θn) specifying the position, orientation, and rotation of all joints.

For the above (two-link) example, p = (l1 cos(θ1) + l2 cos(θ1 + θ2), l1 sin(θ1) + l2 sin(θ1 + θ2)).
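The same formula as a minimal sketch in C (function and type names hypothetical):

#include <math.h>

typedef struct { double x, y; } Point2;

/* End-effector position of a two-link arm with link lengths l1, l2
   and joint angles theta1, theta2. */
Point2 fk_two_link(double l1, double l2, double theta1, double theta2) {
    Point2 p;
    p.x = l1 * cos(theta1) + l2 * cos(theta1 + theta2);
    p.y = l1 * sin(theta1) + l2 * sin(theta1 + theta2);
    return p;
}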

Inverse Kinematics

With inverse kinematics, a user specifies the position of the end effector, p, and the algorithm has to evaluate the required θ given p. That is, θ = f−1(p). Usually, numerical methods are used to solve this problem, as it is often nonlinear and either underdetermined or overdetermined. A system is underdetermined when there is not a unique solution, such as when there are fewer equations than unknowns. A system is overdetermined when it has more equations than unknowns; it may then be inconsistent and have no solutions. Extra constraints are necessary to obtain unique and stable solutions. For example, constraints may be placed on the range of joint motion, and the solution may be required to minimize the kinetic energy of the system. A closed-form sketch for the two-link case follows.
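For the two-link arm above, a closed-form inverse kinematics solution exists (in general, numerical methods are required). The sketch below, in C with hypothetical names, also shows where non-uniqueness appears: of the two mirror "elbow-up/elbow-down" solutions, one is discarded arbitrarily, an instance of needing extra constraints for a unique answer:

#include <math.h>

/* Solve for joint angles reaching target (px, py); returns 0 if the
   target is out of reach. */
int ik_two_link(double l1, double l2, double px, double py,
                double *theta1, double *theta2) {
    double d2 = px * px + py * py;
    double c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    if (c2 < -1.0 || c2 > 1.0) return 0;        /* target unreachable  */
    *theta2 = acos(c2);                          /* one of two elbow
                                                    solutions, chosen
                                                    arbitrarily        */
    *theta1 = atan2(py, px)
            - atan2(l2 * sin(*theta2), l1 + l2 * cos(*theta2));
    return 1;
}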

Q. Explain what is meant by motion capture?

In motion capture, an actor has a number of small, round markers attached to his or her body that reflect light in frequency ranges that motion capture cameras are specifically designed to pick up. With enough cameras, it is possible to reconstruct the position of the markers accurately in 3D.

In practice, this is a laborious process. Markers tend to be hidden from cameras and 3D reconstructions fail, requiring a user to manually fix such dropouts. The resulting motion curves are often noisy, requiring yet more effort to clean up the motion data to more accurately match what an animator wants.

Despite the labor involved, motion capture has become a popular technique
in the movie and game industries, as it allows fairly accurate animations to
be created from the motion of actors. However, this is limited by the density
of markers that can be placed on a single actor. Faces, for example, are still
very difficult to convincingly reconstruct. Motion capture is one of the
primary animation techniques for computer games.

Q. Enumerate two advantages and two disadvantages of motion capture.

Advantages of Motion Capture

1. Once you have the program, you can get lots of motion.
2. It reduces the overall cost of keyframe-based animation in the entertainment industry.

Disadvantages of Motion Capture

1. The animation is generally hard to control, which makes it hard to tell a story with purely procedural means.
2. It requires particular hardware and a special software program in order to produce and process data.

Q. Explain the thin lens Camera Model

In optics, a thin lens is a lens with a thickness (distance along the optical axis between the two surfaces of the lens) that is negligible compared to the radii of curvature of the lens surfaces. Lenses whose thickness is not negligible are sometimes called thick lenses.
The thin lens approximation ignores optical effects due to the thickness of lenses and simplifies ray tracing calculations. It is often combined with the paraxial approximation in techniques such as ray transfer matrix analysis.
The focal length, f, of a thin lens in air is given by the lensmaker's equation: 1/f = (n − 1)(1/R1 − 1/R2), where n is the refractive index of the lens material and R1, R2 are the radii of curvature of its two surfaces.

Q. List five graphic hard copy devices

1. Printers
2. Dot-Matrix Printers
3. Daisy Wheel Printers
4. Line Printers
5. Drum Printers

Q. Define aliasing and antialiasing.

Aliasing
In signal processing and related disciplines, aliasing is an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled. It also often refers to the distortion or artefact that results when a signal reconstructed from samples is different from the original continuous signal.
Aliasing can occur in signals sampled in time, for instance digital audio, and is referred to as temporal aliasing. It can also occur in spatially sampled signals (e.g. moiré patterns in digital images); this type of aliasing is called spatial aliasing.
Aliasing can also occur in videos, where it is called temporal aliasing because it is caused by the frequency of the frames rather than the pixelation of the image. Because of the limited frame rate, a fast-moving object like a wheel looks like it's turning in reverse or too slowly; this is called the wagon-wheel effect. This is determined by the frame rate of the camera and can be avoided by using temporal aliasing reduction filters during filming.

Antialiasing

In computer graphics, antialiasing is a software technique for diminishing jaggies: stairstep-like lines that should be smooth. Jaggies occur because the output device, the monitor or printer, does not have a high enough resolution to represent a smooth line. Anti-aliasing may refer to any of a number of techniques to combat the problems of aliasing in a sampled signal such as a digital image or digital audio recording.

Q. What is light?

Light as we perceive it is electromagnetic radiation from a narrow band of the complete spectrum of electromagnetic radiation called the visible spectrum. The physical nature of light has elements that behave like particles (when we discuss photons) and like waves. Recall that a wave can be described either in terms of its frequency, measured say in cycles per second, or the inverse quantity of wavelength. The electromagnetic spectrum ranges from very low frequency (long wavelength) radio waves (greater than 10 centimeters in wavelength) to microwaves, infrared, visible light, ultraviolet and x-rays, and high frequency (short wavelength) gamma rays (less than 0.01 nm in wavelength). Visible light lies in the range of wavelengths from around 400 to 700 nm, where nm denotes a nanometer, or 10−9 of a meter.

Physically, the light energy that we perceive as color can be described in terms of a function of wavelength λ, called the spectral distribution function or simply spectral function, f(λ). As we walk along the wavelength axis (from long to short wavelengths), the associated colors that we perceive vary along the colors of the rainbow: red, orange, yellow, green, blue, indigo, violet. (Remember the "Roy G. Biv" mnemonic.) Of course, these color names are human interpretations, and not physical divisions.

Q. Explain the following properties of light: reflection and refraction.

Reflection of light

Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves.

How do we model illumination due to light reflected off of other surfaces? (Figure: (a) light leaves the light source and arrives directly; (b) light leaves the light source and is reflected off the back wall before arriving.)

Refraction of light

Refraction is the change in direction of a wave passing from one medium to another, or from a gradual change in the medium. Refraction of light is the most commonly observed phenomenon, but other waves such as sound waves and water waves also experience refraction.

Q. Briefly explain color space

A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components (e.g. RGB and CMYK are color models). However, a color model with no associated mapping function to an absolute color space is a more or less arbitrary color system with little connection to the requirements of any given application. Adding a certain mapping function between the color model and a certain reference color space results in a definite "footprint" within the reference color space. This "footprint" is known as a gamut, and, in combination with the color model, defines a new color space. For example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB model.

Q. What is a Vector?

In computing, a vector processor or array processor is a central processing unit that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors, as compared to scalar processors, whose instructions operate on single data items. In graphics, however, a vector u, v, w is a directed line segment (with no concept of position). Vectors are represented in a coordinate system by an n-tuple v = (v1, ..., vn).

The dimension of a vector is dim(v) = n.

The length |v| and direction of a vector are invariant with respect to the choice of coordinate system.

Points, Vectors and Notation

Much of Computer Graphics involves discussion of points in 2D or 3D. Usually we write such points as Cartesian coordinates, e.g. p = [x, y]T or q = [x, y, z]T. Point coordinates are therefore vector quantities, as opposed to a single number, e.g. 3, which we call a scalar quantity. In these notes we write vectors in bold and underlined once. Matrices are written in bold, double-underlined.

The superscript [...]T denotes transposition of a vector, so points p and q are column vectors (coordinates stacked on top of one another vertically). This is the convention used by most researchers with a Computer Vision background, and is the convention used throughout this course. By contrast, many Computer Graphics researchers use row vectors to represent points. For this reason you will find row vectors in many Graphics textbooks, including Foley et al., one of the course texts. Bear in mind that you can convert equations between the two forms using transposition. Suppose we have a 2 × 2 matrix M acting on the 2D point represented by column vector p. We would write this as Mp. If p were transposed into a row vector p′ = pT, we could write the above transformation as p′MT. So to convert between the forms (e.g. from row to column form when reading the course texts), remember that: Mp = (pTMT)T.

Q. What is raster graphics?

The Evans & Sutherland Corporation and General Electric started building flight simulators with real-time raster graphics. Unix, X and Silicon Graphics GL were the operating system, window system and application programming interface (API) that graphicists used. Shaded raster graphics were starting to be introduced in motion pictures. PCs started to get decent, but they still could not support 3-D graphics, so most programmers wrote software for scan conversion (rasterization), used the painter's algorithm for hidden surface removal, and developed "tricks" for real-time animation.

Q. Explain the following colour models

RGB colour model

The additive colour model used for computer graphics is represented by the RGB colour cube, where R, G, and B represent the colours produced by red, green and blue phosphors, respectively.

YIQ colour model

YIQ is the color space used by the NTSC color TV system, employed mainly
in North and Central America, and Japan. I stands for in-phase,
while Q stands for quadrature, referring to the components used
in quadrature amplitude modulation. Some forms of NTSC now use
the YUV color space, which is also used by other systems such as PAL.
The Y component represents the luma information, and is the only
component used by black-and-white television receivers. I and Q represent
the chrominance information. In YUV, the U and V components can be
thought of as X and Y coordinates within the color space. I and Q can be
thought of as a second pair of axes on the same graph, rotated 33°; therefore
IQ and UV represent different coordinate systems on the same plane.

The YIQ system is intended to take advantage of human color-response
characteristics. The eye is more sensitive to changes in the orange-blue (I)
range than in the purple-green range (Q)—therefore less bandwidth is
required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to
0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which
keeps the bandwidth of the overall signal down to 4.2 MHz. In YUV systems,
since U and V both contain information in the orange-blue range, both
components must be given the same amount of bandwidth as I to achieve
similar color fidelity.
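As an illustration, here is a minimal C sketch of the RGB-to-YIQ conversion using commonly quoted NTSC weights (the function name is hypothetical, and the exact coefficients vary slightly between references):

/* Convert RGB (components in [0, 1]) to YIQ. */
void rgb_to_yiq(double r, double g, double b,
                double *y, double *i, double *q) {
    *y = 0.299 * r + 0.587 * g + 0.114 * b;   /* luma: the only component
                                                 B&W receivers use        */
    *i = 0.596 * r - 0.274 * g - 0.322 * b;   /* orange-blue axis         */
    *q = 0.211 * r - 0.523 * g + 0.312 * b;   /* purple-green axis        */
}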
CMYK colour model

To produce blue, one would mix cyan and magenta inks, as they both reflect blue while each absorbs one of green and red. Unfortunately, inks also interact in non-linear ways. This makes the process of converting a given monitor colour to an equivalent printer colour a challenging problem. Black ink is used to ensure that a high quality black can always be printed, and is often referred to as K. Printers thus use a CMYK colour model.

A colour transformation can yield the colour on monitor 2 which is equivalent to a given colour on monitor 1, but quality conversion to and from printer gamuts is difficult. As a first approximation, C, M, and Y are the complements of R, G, and B; a fourth colour, K, can then be used to replace equal amounts of C, M, and Y.

HSV and HSL colour model

Models such as HSV (hue, saturation, value) and HLS (hue, lightness, saturation) are designed for intuitive understanding. Using these colour models, the user of a paint program can quickly select a desired colour. HSL (hue, saturation, lightness) (or HSB (hue, saturation, brightness)) and HSV (hue, saturation, value) are alternative representations of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes. In these models, colors of each hue are arranged in a radial slice around a central axis of neutral colors, which ranges from black at the bottom to white at the top.
The HSV representation models the way paints of different colors mix together, with the saturation dimension resembling various tints of brightly colored paint, and the value dimension resembling the mixture of those paints with varying amounts of black or white paint. The HSL model attempts to resemble more perceptual color models such as the Natural Color System (NCS) or Munsell color system, placing fully saturated colors around a circle at a lightness value of 1/2, where a lightness value of 0 or 1 is fully black or white, respectively.

Q. The table below summarizes the properties of the four primary types of printing ink. Fill in the missing gaps.

Green paper is green because it reflects green and absorbs other wavelengths. The following table summarizes the properties of the four primary types of printing ink (the standard absorb/reflect behaviour is filled in):

Ink        Absorbs    Reflects
cyan       red        green, blue
magenta    green      red, blue
yellow     blue       red, green
black      all        none

Q. Discuss any 3 processes of traditional animation

Traditional Animation

In the early days of the history of animation, it took a lot of effort to make an animation, even the shortest ones. In film, every second requires 24 picture frames for the movement to be so smooth that humans cannot recognise discrete changes between frames. Before the appearance of cameras and computers, animations were produced by hand: artists had to draw every single frame and then combine them into one animation. It is worth mentioning some of the techniques that were used to produce animations in the early days and that are still employed in computer-based animations:

1. Key frames: This technique is used to subdivide the whole animation into key points between which a lot of action happens. For example, to specify an action of raising a hand, at this stage the animator only specifies the start and finish positions of the hand, without having to worry about the image sequence in between. It is then the artist's job to draw the images in between the start and finish positions of the hand, a process called in-betweening. Using this technique, many people can be involved in producing one animation, and hence it helps reduce the amount of time needed to get the product done. In today's computer animation packages, the key frame technique is used as a powerful design tool; here, the software does the in-betweening.

2. Cel animation: This is also a very powerful technique for producing animations. It is common that only a few objects change in the animation, and it is time-consuming to draw the whole background image for every single frame. When using the cel animation method, moving objects and the background are drawn on separate pictures and are laid on top of each other when merging. This technique significantly reduces production time by reducing the work and allowing many people to work independently at the same time. Motion can bring the simplest of characters to life. Even simple polygonal shapes can convey a number of human qualities when animated: identity, character, gender, mood, intention, emotion, and so on. Animation makes objects change over time according to scripted actions.

In general, animation may be achieved by specifying a model with n parameters that identify the degrees of freedom an animator may be interested in, such as:

 polygon vertices
 spline control
 joint angles
 muscle contraction
 camera parameters
 color

3. Kinematics: Kinematics describes the properties of shape and motion independent of the physical forces that cause motion. Kinematic techniques are used often in keyframing, with an animator either setting joint parameters explicitly with forward kinematics or specifying a few key joint orientations and having the rest computed automatically with inverse kinematics.

Q. Explain fully what is meant by a raster graphics image.

The IBM PC was marketed in 1981, the Apple Macintosh started production in 1984, and microprocessors began to take off with the Intel x86 chipset, but these were still toys. Computers with a mouse, bitmapped (raster) display, and Ethernet became the standard in academic and science and engineering settings. In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels, viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats. A bitmap is a rectangular grid of pixels, with each pixel's color being specified by a number of bits. A bitmap might be created for storage in the display's video memory or as a device-independent bitmap file. A raster is technically characterized by the width and height of the image in pixels and by the number of bits per pixel (or color depth, which determines the number of colors it can represent).

Q. What is animation?

Animation is the display of a sequence of frames in rapid succession to create the illusion of motion. In the early days of the history of animation, it took a lot of effort to make an animation, even the shortest ones. In film, every second requires 24 picture frames for the movement to be so smooth that humans cannot recognise discrete changes between frames. Before the appearance of cameras and computers, animations were produced by hand: artists had to draw every single frame and then combine them into one animation. Old machines, such as the ZX Spectrum, required more CPU time to iterate through each location in the frame buffer than it took for the video hardware to refresh the screen. In an animation, this would cause undesirable flicker due to partially drawn frames. To compensate, the byte range [0, (W − 1)] in the buffer was written to the first scan-line, as usual.

Q. Explain what is meant by simulation

A simulation is an approximate imitation of the operation of a process or system that represents its operation over time. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimization, safety engineering, testing, training, education, and video games. Graphics has had a tremendous effect on society, and things that affect society often lead to ethical and legal issues. For example, graphics are used in battles and their simulation, medical diagnosis, crime reenactment, cartoons and films. Computer graphics makes vast quantities of data accessible. Numerical simulations frequently produce millions of data values. Similarly, satellite-based sensors amass data at rates beyond our abilities to interpret them by any other means than visually.

Q. Outline two types of simulation.

Live: Simulation involving real people operating real systems.

1. Involves individuals or groups
2. May use actual equipment
3. Should provide a similar area of operations
4. Should be close to replicating the actual activity

Virtual: Simulation involving real people operating simulated systems. Virtual simulations inject a Human-In-The-Loop in a central role by exercising:

1. Motor control skills (e.g., flying an airplane)
2. Decision skills (e.g., committing fire control resources to action)
3. Communication skills (e.g., members of a C4I team)

Q. Enumerate four (4) Pixel Operations


1. Fog: blend pixels with the fog colour, with blending governed by the Z coordinate.

2. Antialiasing: replace pixels by the average of their own and their nearest neighbours' colours (see the sketch after this list).

3. Colour balancing: modify colours as they are written into the colour buffer.

4. Direct manipulation: copy or replace pixels directly.

As well as the colour and depth buffers, OpenGL provides:
a stencil buffer, used for masking areas of other buffers;
an accumulation buffer, used for whatever you want.
A bitwise XOR operator is also provided, usually hardwired in the graphics chip.
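Here is a minimal C sketch of pixel operation 2 above: each pixel is replaced by the average of itself and its nearest neighbours via a 3x3 box filter (a grayscale image is assumed, borders are skipped for brevity, and the function name is hypothetical):

void box_filter(const unsigned char *src, unsigned char *dst, int w, int h) {
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)       /* 3x3 neighbourhood */
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * w + (x + dx)];
            dst[y * w + x] = (unsigned char)(sum / 9);
        }
}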

Q. List any three (3) display hardware


1. Cathode Ray Tube (CRT)
2. Liquid Crystal Display (LCD)
3. Vector Displays

Q. BSP trees (Binary Space Partitioning trees) can be viewed as a generalization of k-d trees. Describe BSP trees, giving their characteristics and organization

BSP trees (short for binary space partitioning trees) can be viewed as a generalization of k-d trees. Like k-d trees, BSP trees are binary trees, but now the orientation and position of a splitting plane can be chosen arbitrarily.

A Binary Space Partition tree (BSP tree) is a very different way to represent a scene: nodes hold facets, and the structure of the tree encodes spatial information about the scene. It is useful for HSR (hidden surface removal) and related applications.

Characteristics of BSP Tree

A BSP tree is a binary tree. Nodes can have 0, 1, or 2 children. The order of child nodes matters, and if a node has just 1 child, it matters whether this is its left or right child. Each node holds a facet. This may be only part of a facet from the original scene: when constructing a BSP tree, we may need to split facets.

Organization: Each facet lies in a unique plane (in 2-D, a unique line). For each facet, we choose one side of its plane to be the "outside" (the other direction is "inside"). This can be the side the normal vector points toward. Rule: for each node,

 Its left descendant subtree holds only facets "inside" it.
 Its right descendant subtree holds only facets "outside" it.

A minimal sketch of such a node structure is given below.
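This is a minimal C sketch of a 2-D BSP node following the rule above (field names are hypothetical, and facet splitting is omitted):

typedef struct Facet {
    double x0, y0, x1, y1;        /* a 2-D line segment               */
    double nx, ny;                /* normal; points toward "outside"  */
} Facet;

typedef struct BSPNode {
    Facet facet;                  /* the facet stored at this node    */
    struct BSPNode *inside;       /* left child:  facets "inside" it  */
    struct BSPNode *outside;      /* right child: facets "outside" it */
} BSPNode;

/* Which side of node n's line is point (px, py) on?  > 0 means "outside". */
double bsp_side(const BSPNode *n, double px, double py) {
    return n->facet.nx * (px - n->facet.x0)
         + n->facet.ny * (py - n->facet.y0);
}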

Q. Define the following terms:

Spectroradiometer
A device to measure the spectral energy distribution. It can therefore also provide the CIE xyz tristimulus values.

Illuminant C
A standard for white light that approximates sunlight. It is defined by a colour temperature of 6774 K.

The RGB Colour Cube
The additive colour model used for computer graphics is represented by the RGB colour cube, where R, G, and B represent the colours produced by red, green and blue phosphors, respectively.

Complementary colours
Colours which can be mixed together to yield white light. For example,
colours on segment CD are complementary to the colours on segment CB.

Dominant wavelength
The spectral colour which can be mixed with white light in order to
reproduce the desired colour. Colour B in the above figure is the dominant
wavelength for colour A.

Non-spectral colours
Colours not having a dominant wavelength, for example, colour E in the above figure.

Perceptually uniform colour space
A colour space in which the distance between two colours is always proportional to the perceived distance. The CIE XYZ colour space and the CIE chromaticity diagram are not perceptually uniform. The CIE LUV colour space is designed with perceptual uniformity in mind.

Colour Gamuts
The chromaticity diagram can be used to compare the "gamuts" of various
possible output devices (i.e., monitors and printers). Note that a colour
printer cannot reproduce all the colours visible on a colour monitor.


Q. Enumerate the Basic Vector Algebra

Just as we can perform basic operations such as addition, multiplication, etc. on scalar values, so we can generalize such operations to vectors. The figure below summarizes some of these operations in diagrammatic form.

(Figure: illustrating vector addition (left) and subtraction (middle). Right: vectors have direction and magnitude; lines (sometimes called 'rays') are vectors plus a starting point.)

Q. Write short notes on four (4) basic vector algebra operations

Vector Addition
When we add two vectors, we simply sum their elements at corresponding positions. So for a pair of 2D vectors a = [u, v]T and b = [s, t]T we have:
a + b = [u + s, v + t]T

Vector Subtraction
Vector subtraction is identical to the addition operation with a sign change,
since when we negate a vector we simply flip the sign on its elements.
−b = [−s,−t]T
a − b = a + (−b) = [u − s, v − t]T

Vector Scaling
If we wish to increase or reduce a vector quantity by a scale factor λ then we multiply each element in the vector by λ:
λa = [λu, λv]T

Vector Magnitude
We write the length or magnitude of a vector s as |s|. We use Pythagoras' theorem to compute the magnitude of a 2D vector a = [u, v]T:

|a| = √(u² + v²)

The figure shows this to be valid, since u and v are distances along the principal axes (x and y) of the space, and so the distance of a from the origin is the hypotenuse of a right-angled triangle. If we have an n-dimensional vector q = [q1, q2, ..., qn] then the definition of vector magnitude generalises to:

|q| = √(q1² + q2² + ... + qn²)

A short sketch of these operations in code follows.
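Here is a minimal C sketch of these four operations, plus the dot product used in the figure below (all names are hypothetical):

#include <math.h>

typedef struct { double u, v; } Vec2;

Vec2   vadd(Vec2 a, Vec2 b)     { return (Vec2){a.u + b.u, a.v + b.v}; }
Vec2   vsub(Vec2 a, Vec2 b)     { return (Vec2){a.u - b.u, a.v - b.v}; }
Vec2   vscale(double s, Vec2 a) { return (Vec2){s * a.u, s * a.v}; }
double vmag(Vec2 a)             { return sqrt(a.u * a.u + a.v * a.v); }  /* Pythagoras */
double vdot(Vec2 a, Vec2 b)     { return a.u * b.u + a.v * b.v; }  /* = |a||b|cos(theta) */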

(Figure: (a) demonstrating how the dot product can be used to measure the component of one vector in the direction of another (i.e. a projection, shown here as p). (b) The geometry used to prove a · b = |a||b|cos θ via the Law of Cosines.)

Q. Explain the term ray casting and the basic ideas behind it, and highlight two of its goals

Ray casting is the use of ray–surface intersection tests to solve a variety of problems in 3D computer graphics and computational geometry. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering constructive solid geometry models. Consider each element of the view window one at a time (i.e., each image pixel), and test all of the objects in the scene to determine which one affects the image pixel.

The goal of ray casting is to determine the color of each pixel in the view window by considering all of the objects in the scene:
 What part of the scene affects a single pixel?
 For a single pixel, we see a finite volume of the scene.

A sketch of the underlying intersection test is given below.
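Here is a minimal C sketch of the core ray–surface test, in this case against a sphere (the ray direction is assumed normalized, and the names are hypothetical):

#include <math.h>

typedef struct { double x, y, z; } V3;

static double dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Ray p = o + t*d against a sphere centred at c with radius r.
   Returns 1 and the nearest hit distance t, or 0 on a miss. */
int ray_sphere(V3 o, V3 d, V3 c, double r, double *t) {
    V3 oc = {o.x - c.x, o.y - c.y, o.z - c.z};
    double b = dot3(oc, d);                 /* d assumed unit length */
    double disc = b * b - (dot3(oc, oc) - r * r);
    if (disc < 0.0) return 0;               /* ray misses the sphere */
    double t0 = -b - sqrt(disc);            /* nearest intersection  */
    if (t0 < 0.0) return 0;                 /* sphere behind origin  */
    *t = t0;
    return 1;
}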

Q. Define the following:


(i) Dominant wavelength
The spectral colour which can be mixed with white light in order to
reproduce the desired colour. Colour B in the above figure is the dominant
wavelength for colour A.

(ii) complementary colours


Colours which can be mixed together to yield white light. For example,
colours on segment CD are complementary to the colours on segment CB.

(iii) spectroradiometer
A device to measure the spectral energy distribution. It can therefore also provide the CIE xyz tristimulus values. (Illuminant C is a related standard for white light that approximates sunlight, defined by a colour temperature of 6774 K.)

(iv) rendering
Shading (rendering) techniques were developed by Gouraud and Phong at the University of Utah; Phong also introduced a reflection model that included specular highlights. Rendering is the conversion of a scene into an image: the scene is composed of models in three-dimensional space, models are composed of primitives supported by the rendering system, and models are entered by hand or created by a program.

Q. What is Ray tracing?

Ray tracing is a rendering technique that is, in large part, a simulation of geometric optics. It first creates a mathematical representation of all the objects, materials, and lights in a scene, and then shoots infinitesimally thin rays of light from the viewpoint through the image plane into the scene. The rays are then tested against all objects in the scene to determine their intersections. In a simple ray tracer, a single ray is shot through each pixel on the view plane, and the nearest object that the ray hits is determined. The pixel is then shaded according to the way the object's surface reflects the light. If the ray does not hit any object, the pixel is shaded with the background colour. Ray tracing was developed as one approach to modeling the properties of global illumination.

Q. Express Affine transformation in:

Affine Transformations

Affine transformations are coordinate transforms that can be described by linear systems, i.e., affine transformations can be written in the following forms.

(i) linear form:
x′ = a·x + b·y + e
y′ = c·x + d·y + f

(ii) matrix form:
[x′]   [a  b] [x]   [e]
[y′] = [c  d] [y] + [f]

(iii) using homogeneous coordinates to represent the general affine transformation as a single matrix:
[x′]   [a  b  e] [x]
[y′] = [c  d  f] [y]
[1 ]   [0  0  1] [1]

Q. State Snell's law
Snell's law states that the ratio of the sines of the angles of incidence and refraction of a wave as it travels through a boundary between two media is a constant, determined by the refractive indices of the two media.
Snell's law:
sin θr = (n1/n2) sin θi
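As an illustration, a minimal C sketch of applying this relation (the function name is hypothetical); it reports failure when (n1/n2)·sin θi leaves the valid range of sine, which corresponds to total internal reflection:

#include <math.h>

/* Compute the refraction angle from Snell's law.
   Returns 0 on total internal reflection. */
int snell_refract(double n1, double n2, double theta_i, double *theta_r) {
    double s = (n1 / n2) * sin(theta_i);  /* sin(theta_r) by Snell's law */
    if (s < -1.0 || s > 1.0) return 0;    /* total internal reflection   */
    *theta_r = asin(s);
    return 1;
}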

Q. Outline any 4 coordinate systems that can be employed in graphics rendering

1. MCS: Modeling Coordinate System
2. WCS: World Coordinate System
3. VCS: Viewer Coordinate System
4. NDCS: Normalized Device Coordinate System

Q. Mention 3 strategies to build BV trees


Essentially, there are 3 strategies to build BV trees:
1. Bottom-up
2. Top-down
3. Insertion

Q. Briefly explain the components of a video interface card

A typical video interface card contains a display processor, a frame buffer, and a video controller. The frame buffer is a random access memory containing some memory (at least one bit) for each pixel, indicating how the pixel is supposed to be illuminated. The depth of the frame buffer measures the number of bits per pixel. A video controller then reads from the frame buffer and sends control signals to the monitor, driving the scan and refresh process. The display processor processes software instructions to load the frame buffer with data.

(Note: In early PCs, there was no display processor. The frame buffer was part of the physical address space addressable by the CPU. The CPU was responsible for all display functions.)

Some typical examples of frame buffer structures:
1. For a simple monochrome monitor, just use one bit per pixel.
2. A gray-scale monitor displays only one color, but allows for a range of intensity levels at each pixel. A typical example would be to use 6-8 bits per pixel, giving 64-256 intensity levels. For a color monitor, we need a range of intensity levels for each of red, green, and blue. There are two ways to arrange this.
3. A color monitor may use a color lookup table (LUT). For example, we could have a LUT with 256 entries. Each entry contains a color represented by red, green, and blue values. We then could use a frame buffer with a depth of 8: for each pixel, the frame buffer contains an index into the LUT, thus choosing one of the 256 possible colors. This approach saves memory, but limits the number of colors visible at any one time (see the sketch after this list).
4. A frame buffer with a depth of 24 has 8 bits for each color, thus 256 intensity levels for each color, and 2^24 colors may be displayed. Any pixel can have any color at any time. For a 1024x1024 monitor we would need 3 megabytes of memory for this type of frame buffer.
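Here is a minimal C sketch of structure 3, the LUT-indexed frame buffer (the sizes and names are illustrative only):

typedef struct { unsigned char r, g, b; } RGB;

static RGB lut[256];                        /* 256 displayable colours */
static unsigned char framebuf[1024 * 1024]; /* 8 bits per pixel        */

/* What the video controller does for each pixel during scan-out. */
RGB pixel_colour(int x, int y) {
    return lut[framebuf[y * 1024 + x]];     /* index -> actual colour  */
}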

Q. How does an LCD work?

The liquid crystal rotates the polarity of incoming light by 90 degrees.
Ambient light is captured, vertically polarized, rotated to horizontal polarity
by the liquid crystal layer, passes through the horizontal filter, is reflected
by the reflective layer, and passes back through all the layers, giving an
appearance of lightness. However, if the liquid crystal molecules are
charged, they become aligned and no longer change the polarity of light
passing through them. If this occurs, no light can pass through the
horizontal filter, so the screen appears dark. The principle of the display is
to apply this charge selectively to points in the liquid crystal layer, thus
lighting or not lighting points on the screen. Crystals can be dyed to provide
color. An LCD may be backlit, so as not to be dependent on ambient light.
TFT (thin film transistor) is the most popular LCD technology today.

Q. How can you achieve bump maps?

Bump maps are used to capture fine-scale surface detail or roughness:

 Apply a perturbation function to the surface normal
 Use the perturbed normal in lighting calculations

Elements from the bump map are mapped to a polygon in exactly the same way as a surface texture, but they are interpreted as a perturbation to the surface normal, which in turn affects the rendered intensity. The bump map may contain:
 Random patterns
 Regular patterns
 Surface detail

Q. State the equation of a straight line

The equation of a straight line is y = mx + c, where m is the slope and c is the y-intercept. It can be rewritten in implicit form as Ax + By + C = 0.
Q. What is a texture?

Texture can be used to modulate diffuse and ambient reflection coefficients, as with Gouraud shading. We simply need a way to map each point on the surface to a point in texture space: e.g., given an intersection point p(λ*), convert it into parametric form s(α, β) and use (α*, β*) to find the texture coordinates (μ, ν). Unlike Gouraud shading, we don't need to interpolate (μ, ν) over polygons; we get a new (μ, ν) for each intersection point. A minimal sketch of one such mapping follows.
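Here is a minimal C sketch of one common parametric form, mapping a point on a unit sphere to texture coordinates (the function name is hypothetical):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } P3;

/* p is assumed to lie on the unit sphere centred at the origin. */
void sphere_uv(P3 p, double *mu, double *nu) {
    *mu = 0.5 + atan2(p.z, p.x) / (2.0 * M_PI);  /* longitude -> [0,1] */
    *nu = 0.5 - asin(p.y) / M_PI;                /* latitude  -> [0,1] */
}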

Explain each of the following forms of continuity:

i. C0
A curve is C0 and G0 continuous if adjacent segments join at a common endpoint.

ii. C1
A curve is G1 continuous if the parametric first derivative is continuous across its joints, i.e., the tangent vectors of adjacent segments are collinear (on the same line) at the shared endpoint. A curve is C1 continuous if the spatial first derivative is continuous across joints, i.e., the tangent vectors of adjacent segments are collinear and have the same magnitude at their shared endpoint.

iii. C∞
A curve is Cn continuous if the nth derivatives of adjacent segments are collinear and have the same magnitude at their shared endpoint (a C∞ curve is Cn continuous for all n). Curve continuity has a significant impact on the quality of a curve or surface, and different industries have different standards:
 Computer graphics often requires G1 continuity, which is 'good enough' for animations and games.
 The automotive industry often requires G2 continuity, for visually appealing surface reflections off of car bodies.
 Aircraft and race cars may require G4 or G5 continuity, to avoid turbulence when air flows over the surface of the vehicle.
