Cit371 Introduction To Computer Graphics and Animation
In the early 1960s IBM, Sperry-Rand, Burroughs and a few other computer
companies existed. The computers of the day had a few kilobytes of memory,
no operating systems to speak of, and no graphical display monitors. The
peripherals were Hollerith punch cards, line printers, and roll-paper
plotters. The only programming languages supported were assembler,
FORTRAN, and Algol. Function graphs and "Snoopy" calendars were about
the only graphics done. In 1963 Ivan Sutherland presented his paper
Sketchpad at the Summer Joint Computer Conference. Sketchpad allowed
interactive design on a vector graphics display monitor with a light pen
input device. Most people mark this event as the origin of computer
graphics.
The state of the art in computing was an IBM 360 computer with about 64
KB of memory, a Tektronix 4014 storage tube, or a vector display with a
light pen (but these were very expensive).
Software and Algorithms
Today most graphicists want an Intel PC with at least 256 MB of memory and
a 10 GB hard drive. Their display should have a graphics board that supports
real-time texture mapping. A flatbed scanner, color laser printer, digital
video camera, DVD, and MPEG encoder/decoder are the peripherals one
wants. The environment for program development is most likely Windows
or Linux, with Direct3D and OpenGL, but Java 3D might become more
important. Programs would typically be written in C++ or Java.
What will happen in the near future is difficult to say, but high definition TV
(HDTV) is poised to take off (after years of hype). Ubiquitous, untethered,
wireless computing should become widespread, and audio and gestural
input devices should replace some of the functionality of the keyboard and
mouse.
You should expect 3-D modeling and video editing for the masses, computer
vision for robotic devices and facial expression capture, and realistic
rendering of difficult things like a human face, hair, and water. With any
luck, C++ will fall out of favor.
1. Medical Imaging
There are few endeavors more noble than the preservation of life. Today, it
can honestly be said that computer graphics plays a significant role in
saving lives. The range of application spans from tools for teaching and
diagnosis all the way to treatment. Computer graphics is a tool in medical
applications rather than a mere artifact. No cheating or tricks allowed.
2. Scientific Visualization
5. Games
6. Entertainment
If you can imagine it, it can be done with computer graphics. Obviously,
Hollywood has caught on to this. Each summer, we are amazed by
state-of-the-art special effects. Computer graphics is now as much a part of the
entertainment industry as stunt men and makeup. The entertainment
industry plays many other important roles in the field of computer graphics.
User controls contents, structure, and appearance of objects and their
displayed images via rapid visual feedback. The basic components of an
interactive graphics system are: input (e.g., mouse, tablet and stylus, force
feedback device, scanner, live video streams), processing (and storage),
and display/output (e.g., screen, paper-based printer, video recorder,
non-linear editor).
For our purposes here, assume the models are already generated. The image
may be drawn on a monitor, printed on a laser printer, or written to a raster
in memory or a file. These different possibilities require us to consider
device independence.
Each stage refines the scene, converting primitives in modelling space to
primitives in device space, where they are converted to pixels (rasterized).
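To make the stages concrete, here is a minimal sketch of the idea (an assumed 2D pipeline with illustrative matrix values, not any particular system's API): a point is refined from modelling space through world space to device space, then rasterized to a pixel.

```python
# Minimal sketch of pipeline stages (assumed 2D for brevity): a point in
# modelling space is carried through model -> world -> device coordinates,
# then snapped to a pixel. Matrix contents are illustrative only.
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0,  0, 1]], dtype=float)

# Modelling -> world: place the object at (5, 3).
model = translate(5, 3)
# World -> device: map a 10x10 world window onto a 640x480 screen.
viewport = scale(640 / 10, 480 / 10)

p_model = np.array([1.0, 1.0, 1.0])                # homogeneous 2D point
p_device = viewport @ model @ p_model              # refine through each stage
pixel = tuple(np.floor(p_device[:2]).astype(int))  # rasterize: snap to pixel
print(pixel)                                       # (384, 192)
```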
1. The display screen is coated with "phosphors" which emit light when
excited by an electron beam. (There are three types of phosphor, emitting red,
green, and blue light.) They are arranged in rows, with three phosphor dots
(R, G, and B) for each pixel.
2. The energy exciting the phosphors dissipates quickly, so the entire screen
must be refreshed 60 times per second.
3. An electron gun scans the screen, line by line, mapping out a scan
pattern. On each scan of the screen, each pixel is passed over once. Using
the contents of the frame buffer, the controller controls the intensity of the
beam hitting each pixel, producing a certain color.
A liquid crystal display consists of 6 layers, arranged in the following order
(back-to-front):
How it works:
Graphics software
Graphics software (that is, the software tools needed to create graphics
applications) has taken the form of subprogram libraries. The libraries
contain functions to do things like: draw points, lines, and polygons; apply
transformations; fill areas with color; and handle user interactions. An
important goal has been the development of standard hardware-independent
libraries such as:
PHIGS (Programmer's Hierarchical Interactive Graphics System)
X Windows
OpenGL
Hardware vendors may implement some of the OpenGL primitives in
hardware for speed.
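As a concrete example of such a library, the sketch below uses the OpenGL API named above through the PyOpenGL binding (assumed installed, with GLUT available); it draws a single shaded triangle.

```python
# Minimal sketch using OpenGL via the PyOpenGL binding (an assumption:
# PyOpenGL and GLUT must be installed). Draws one filled polygon; the
# vendor's driver may execute the primitive in hardware.
from OpenGL.GL import (glBegin, glEnd, glVertex2f, glColor3f,
                       glClear, glFlush, GL_TRIANGLES, GL_COLOR_BUFFER_BIT)
from OpenGL.GLUT import (glutInit, glutCreateWindow, glutDisplayFunc,
                         glutMainLoop)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)            # draw a polygon primitive
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f( 0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f( 0.0,  0.5)
    glEnd()
    glFlush()                        # push commands to the display

glutInit()
glutCreateWindow(b"triangle")        # GLUT expects a byte string here
glutDisplayFunc(display)
glutMainLoop()
```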
Hardware
Hardcopy:
Laser printer
Ink-jet printer
Film recorder
Electrostatic printer
Pen plotter
1. Mouse
2. Tablet and stylus
3. Force feedback device
4. Scanner
5. Live video streams
6. Display/output (e.g., screen, paper-based printer, video recorder,
non-linear editor).
1. Medical Imaging
2. Scientific Visualization
3. Computer Aided Design
4. Graphical User Interfaces (GUIs)
The bidirectional reflectance distribution function (BRDF) is a function of
four real variables that defines how light is reflected at an opaque surface:
it relates the quantity of light reflected from the surface in direction wo,
call it Lo, to the amount of light that reaches the surface from direction wi,
call it Ei. It is employed in the optics of real-world light, in computer
graphics algorithms, and in computer vision algorithms.
BRDFs can be classified into two classes, isotropic BRDFs and anisotropic
BRDFs, and have two important properties: reciprocity and conservation of
energy.
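A standard compact statement of the definition and of the two properties, in the notation introduced above (given here for reference):

```latex
% Definition: the BRDF is the ratio of reflected radiance to incident irradiance.
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{\mathrm{d}E_i(\omega_i)}

% Reciprocity: incoming and outgoing directions may be swapped.
f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)

% Conservation of energy: no more light is reflected than arrives.
\int_{\Omega} f_r(\omega_i, \omega_o)\,\cos\theta_o\,\mathrm{d}\omega_o \le 1
\quad \text{for all } \omega_i
```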
Q. Explain the term transformations.
An equivalent representation, homogeneous coordinates, requires only a
single matrix to represent general affine transformations
Can be used to represent perspective transformations (later)
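A minimal numeric sketch of the single-matrix idea, assuming 2D points in homogeneous form (the matrix values are illustrative): a rotation followed by a translation collapses into one matrix.

```python
# With homogeneous coordinates, rotation followed by translation becomes
# ONE 3x3 matrix that maps 2D points in a single multiply.
import numpy as np

theta = np.radians(90)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0,              0,             1]])
translation = np.array([[1, 0, 4],
                        [0, 1, 2],
                        [0, 0, 1]])

affine = translation @ rotation      # one matrix for the whole transform
p = np.array([1.0, 0.0, 1.0])        # the point (1, 0) in homogeneous form
print(affine @ p)                    # approx. [4, 3, 1]: rotated, then moved
```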
1. Parallel lines do not remain parallel; rendered object size decreases
with distance from the image plane.
2. More realistic; provides a sense of being in the scene. Used for
immersive environments.
an object) is visible in a scene. It can be implemented either in hardware or
software, and is used to increase rendering efficiency.
Advantages:
Disadvantages:
Two-part Mapping
Environmental Mapping
Compute the image of the environment on the intermediate surface P and
project the image from P to the object S
Bump Mapping
Elements from the bump map are mapped to a polygon in exactly the same
way as a surface texture, but they are interpreted as a perturbation to the
surface normal, which in turn affects the rendered intensity. The bump map
may contain:
Random patterns
Regular patterns
Surface detail
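A minimal sketch of the perturbation step, assuming a flat patch with normal (0, 0, 1) and a grey-scale height image as the bump map (all names here are illustrative):

```python
# Derive per-pixel perturbed normals from a 2D height array (the bump map).
import numpy as np

def perturbed_normals(bump):
    """Perturb a flat (0, 0, 1) normal by the bump map's height gradients."""
    dh_dy, dh_dx = np.gradient(bump.astype(float))
    n = np.dstack([-dh_dx, -dh_dy, np.ones_like(bump, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)  # renormalise

rng = np.random.default_rng(0)
random_pattern = rng.random((64, 64))     # a random bump map, as above
normals = perturbed_normals(random_pattern)
# A diffuse shader would now use these normals instead of the true one,
# so the rendered intensity varies even though the surface stays flat.
```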
Underlying technique is interpolation
The in-between frames are interpolated from the keyframes. Originally this
was done by armies of underpaid animators; now it is done with computers. A
key frame or keyframe is a location on a timeline which marks the beginning
or end of a transition. It holds special information that defines where a
transition should start or stop. The intermediate frames are interpolated
over time between those definitions to create the illusion of motion (see
the sketch below).
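A minimal in-betweening sketch, assuming simple linear interpolation of a single keyframed value (real packages also offer splines and easing):

```python
# The computer does the in-betweening: interpolate linearly between two
# keyframe values (here, a hand height) over a number of frames.
def inbetween(key_start, key_end, frames):
    """Generate the interpolated frames between two keyframes."""
    return [key_start + (key_end - key_start) * t / (frames - 1)
            for t in range(frames)]

# Hand raised from y=0 to y=10 over 5 frames (endpoints are the keyframes).
print(inbetween(0.0, 10.0, 5))   # [0.0, 2.5, 5.0, 7.5, 10.0]
```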
Advantages of keyframing
Q. Define Kinematics
Forward Kinematics
For the above example, p = (l1 cos(θ1) + l2 cos(θ1 + θ2), l1 sin(θ1) +
l2 sin(θ1 + θ2)).
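The formula transcribes directly into code; this sketch assumes link lengths l1 and l2 and joint angles in radians:

```python
# Forward kinematics of the two-link arm: joint angles -> end effector p.
import math

def forward_kinematics(l1, l2, theta1, theta2):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both links length 1, both joints at 45 degrees:
print(forward_kinematics(1.0, 1.0, math.radians(45), math.radians(45)))
# approx. (0.7071, 1.7071)
```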
Inverse Kinematics
With inverse kinematics, a user specifies the position of the end effector, p,
and the algorithm has to evaluate the required θ given p. That is, θ = f^-1(p).
Usually, numerical methods are used to solve this problem, as it is often
nonlinear and either underdetermined or overdetermined. A system is
underdetermined when there is not a unique solution, such as when there
are more unknowns than equations. A system is overdetermined when it is
inconsistent and has no solutions, such as when there are more equations
than unknowns. Extra constraints are necessary to obtain unique and stable
solutions. For example, constraints may be placed on the range of joint
motion and the solution may be required to minimize the kinetic energy of
the system.
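For the two-link arm above a closed form happens to exist; the sketch below returns one of the (generally two) elbow solutions and is only an illustration of the idea, since the general problem needs the numerical methods just described:

```python
# Closed-form inverse kinematics for the two-link arm: target (x, y) ->
# joint angles. Returns None when the target is out of reach.
import math

def inverse_kinematics(l1, l2, x, y):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return None                     # target unreachable
    theta2 = math.acos(c2)              # one of the two elbow solutions
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse_kinematics(1.0, 1.0, 0.7071, 1.7071)
print(math.degrees(t1), math.degrees(t2))   # close to 45.0 and 45.0
```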
Despite the labor involved, motion capture has become a popular technique
in the movie and game industries, as it allows fairly accurate animations to
be created from the motion of actors. However, this is limited by the density
of markers that can be placed on a single actor. Faces, for example, are still
very difficult to convincingly reconstruct. Motion capture is one of the
primary animation techniques for computer games.
Advantages of Motion Capture
1. Once you have the program, you can get lots of motion
2. It reduces the overall cost of keyframe-based animation in the
entertainment industry.
In optics, a thin lens is a lens with a thickness (distance along the optical
axis between the two surfaces of the lens) that is negligible compared to
the radii of curvature of the lens surfaces. Lenses whose thickness is not
negligible are sometimes called thick lenses.
The thin lens approximation ignores optical effects due to the thickness of
lenses and simplifies ray tracing calculations. It is often combined with
the paraxial approximation in techniques such as ray transfer matrix
analysis.
The focal length, f, of a lens in air is given by the lensmaker's equation,
which in the thin lens approximation reduces to 1/f = (n − 1)(1/R1 − 1/R2),
where n is the refractive index of the lens material and R1 and R2 are the
radii of curvature of the two lens surfaces.
1. Printers
2. Dot-Matrix Printers
3. Daisy Wheel Printers
4. Line Printers
5. Drum Printers
Aliasing
In signal processing and related disciplines, aliasing is an effect that causes
different signals to become indistinguishable (or aliases of one another)
when sampled. It also often refers to the distortion or artefact that results
when a signal reconstructed from samples is different from the original
continuous signal.
Aliasing can occur in signals sampled in time, for instance digital audio, and
is referred to as temporal aliasing. It can also occur in spatially sampled
signals (e.g. moiré patterns in digital images); this type of aliasing is
called spatial aliasing.
Aliasing can also occur in videos, where it is called temporal aliasing
because it is caused by the frequency of the frames rather than the
pixelation of the image. Because of the limited frame rate, a fast-moving
object like a wheel looks like it's turning in reverse or too slowly; this is
called the wagon-wheel effect. This is determined by the frame rate of the
camera and can be avoided by using temporal aliasing reduction filters
during filming.
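A tiny numeric illustration of the effect, assuming a 9 Hz sine sampled at 10 Hz (above the 5 Hz Nyquist limit), which is indistinguishable from a 1 Hz alias at the sample points:

```python
# Aliasing in miniature: a 9 Hz sine sampled at 10 Hz coincides, sample by
# sample, with a (phase-reversed) 1 Hz sine -- the same effect that makes a
# wheel appear to spin backwards.
import numpy as np

fs = 10.0                                # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)              # one second of samples
fast = np.sin(2 * np.pi * 9 * t)         # 9 Hz signal, above Nyquist (5 Hz)
slow = np.sin(2 * np.pi * -1 * t)        # its 1 Hz alias (reversed phase)
print(np.allclose(fast, slow))           # True: the samples coincide
```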
Antialiasing
Q. What is light?
violet. (Remember the "Roy G. Biv" mnemonic.) Of course, these color names
are human interpretations, and not physical divisions.
Reflection of light
(A) Light leaves the light source; (B) light leaves the light source and is
reflected off the back wall
Refraction of light
Q. What is a Vector?
In computing, a vector processor or array processor is a central processing
unit that implements an instruction set containing instructions that operate
on one-dimensional arrays of data called vectors, in contrast to scalar
processors, whose instructions operate on single data items. In geometry, a
vector u, v, w is a directed line segment (no concept of position). Vectors
are represented in a coordinate system by an n-tuple v = (v1, ..., vn).
The Evans & Sutherland Corporation and General Electric started building
flight simulators with real-time raster graphics. Unix, X and Silicon
Graphics GL were the operating system, window system and application
programming interface (API) that graphicists used. Shaded raster graphics
were starting to be introduced in motion pictures. PCs started to get decent,
but still they could not support 3-D graphics, so most programmers wrote
software for scan conversion (rasterization), used the painter's algorithm
for hidden surface removal, and developed "tricks" for real-time animation.
Q. Explain the following colour models
The additive colour model used for computer graphics is represented by the
RGB colour cube, where R, G, and B represent the colours produced by red,
green and blue phosphors, respectively.
YIQ is the color space used by the NTSC color TV system, employed mainly
in North and Central America, and Japan. I stands for in-phase,
while Q stands for quadrature, referring to the components used
in quadrature amplitude modulation. Some forms of NTSC now use
the YUV color space, which is also used by other systems such as PAL.
The Y component represents the luma information, and is the only
component used by black-and-white television receivers. I and Q represent
the chrominance information. In YUV, the U and V components can be
thought of as X and Y coordinates within the color space. I and Q can be
thought of as a second pair of axes on the same graph, rotated 33°; therefore
IQ and UV represent different coordinate systems on the same plane.
The YIQ system is intended to take advantage of human color-response
characteristics. The eye is more sensitive to changes in the orange-blue (I)
range than in the purple-green range (Q)—therefore less bandwidth is
required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to
0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which
keeps the bandwidth of the overall signal down to 4.2 MHz. In YUV systems,
since U and V both contain information in the orange-blue range, both
components must be given the same amount of bandwidth as I to achieve
similar color fidelity.
CMYK colour model
To produce blue, one would mix cyan and magenta inks, as they both reflect
blue while each absorbs one of green and red. Unfortunately, inks also
interact in non-linear ways. This makes the process of converting a given
monitor colour to an equivalent printer colour a challenging problem. Black
ink is used to ensure that a high quality black can always be printed, and is
often referred to as K. Printers thus use a CMYK colour model.
Models such as HSV (hue, saturation, value) and HLS (hue, luminosity,
saturation) are designed for intuitive understanding. Using these colour
models, the user of a paint program would quickly be able to select a desired
colour. HSL (hue, saturation, lightness) (or HSB (hue, saturation,
brightness)) and HSV (hue, saturation, value) are alternative representations
of the RGB color model, designed in the 1970s by computer
graphics researchers to more closely align with the way human vision
perceives color-making attributes. In these models, colors of each hue are
arranged in a radial slice, around a central axis of neutral colors which
ranges from black at the bottom to white at the top.
The HSV representation models the way paints of different colors mix
together, with the saturation dimension resembling various tints of brightly
colored paint, and the value dimension resembling the mixture of those
paints with varying amounts of black or white paint. The HSL model
attempts to resemble more perceptual color models such as the Natural
Color System (NCS) or Munsell color system, placing fully saturated colors
around a circle at a lightness value of 1⁄2, where a lightness value of 0 or 1 is
fully black or white, respectively.
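A quick sketch of moving between the RGB cube and the HSV model using Python's standard-library colorsys module (all channel values lie in [0, 1]):

```python
# Round-tripping between RGB and HSV with the standard library.
import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)   # pure red
print(h, s, v)                                  # 0.0 1.0 1.0

# Halving V mixes the colour toward black, as the HSV description says:
print(colorsys.hsv_to_rgb(h, s, 0.5))           # (0.5, 0.0, 0.0)
```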
Q. The table below summarizes the properties of the four primary types of
printing ink. Fill in the missing gaps
Traditional Animation
In the early days of animation it took a lot of effort to make an
animation, even the shortest ones. In film, every second requires 24 picture
frames for the movement to be so smooth that humans cannot recognise
discrete changes between frames. Before the appearance of cameras and
computers, animations were produced by hand. Artists had to draw every
single frame and then combine them into one animation. It is worth
mentioning some of the techniques that were used to produce animations in
the early days and are still being employed in computer-based animations:
1. Key frames: This technique is used to subdivide the whole animation
into key points between which a lot of action happens. For example, to
specify an action of raising a hand, at this stage one only specifies
the start and finish positions of the hand, without having to worry about the
image sequence in between. It is then the artist's job to draw the images in
between the start and finish positions of the hand, a process called
in-betweening. Using this technique, many people can be involved in producing
one animation, which helps reduce the amount of time needed to get the
product done. In today's computer animation packages, the key frame technique
is used as a powerful design tool; here, the software does the in-betweening.
Virtually any scene parameter can be keyframed and interpolated, for example:
polygon vertices
spline control
joint angles
muscle contraction
camera parameters
color
The IBM PC was marketed in 1981. The Apple Macintosh started production
in 1984, and microprocessors began to take off, with the Intel x86 chipset,
but these were still toys. Computers with a mouse, bitmapped (raster)
display, and Ethernet became the standard in academic and science and
engineering settings. In computer graphics, a raster graphics or bitmap
image is a dot matrix data structure that represents a generally rectangular
grid of pixels, viewable via a monitor, paper, or other display medium.
Raster images are stored in image files with varying formats. A bitmap is a
rectangular grid of pixels, with each pixel's color being specified by a
number of bits. A bitmap might be created for storage in the display's video
memory or as a device-independent bitmap file. A raster is technically
characterized by the width and height of the image in pixels and by the
number of bits per pixel (or color depth, which determines the number of
colors it can represent).
Q. What is animation?
Animation is the illusion of motion created by displaying a sequence of
images over time. In film, every second requires 24 picture frames for the
movement to be so smooth that humans cannot recognise discrete changes
between frames. Old machines, such as the ZX Spectrum, required more CPU
time to iterate through each location in the frame buffer than it took for
the video hardware to refresh the screen. In an animation, this would cause
undesirable flicker due to partially drawn frames. To compensate, the byte
range [0, (W − 1)] in the buffer was written to the first scan-line, as usual.
cartoons and films. Computer graphics makes vast quantities of data
accessible. Numerical simulations frequently produce millions of data
values. Similarly, satellite-based sensors amass data at rates beyond our
abilities to interpret them by any other means than visually.
2. Antialiasing: replace pixels by the average of their own and their nearest
neighbours' colours (see the sketch after this list)
3. Colour balancing: modify colours as they are written into the colour
buffer.
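A minimal sketch of the neighbour-averaging idea from item 2, assuming a grey-scale image array and a 3x3 box filter:

```python
# Replace each pixel by the average of itself and its 8 nearest neighbours
# (a 3x3 box filter), softening jagged edges.
import numpy as np

def antialias(img):
    """Average every pixel with its neighbours (edge pixels replicated)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : padded.shape[0] - 1 + dy,
                          1 + dx : padded.shape[1] - 1 + dx]
    return out / 9.0

edge = np.zeros((4, 4)); edge[:, 2:] = 1.0   # hard vertical edge
print(antialias(edge))                        # edge now ramps 0, 1/3, 2/3, 1
```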
Q. The BSP trees (Binary Space Partitioning) can be viewed as a
generalization of k-d trees. Describe the BSP trees giving its characteristics
and organization
BSP trees (short for binary space partitioning trees) can be viewed as a
generalization of k-d trees. Like k-d trees, BSP trees are binary trees, but
now the orientation and position of a splitting plane can be chosen
arbitrarily. The figure below depicts the structure of a BSP tree.
A binary space partition tree (BSP tree) is a very different way to represent
a scene: nodes hold facets, and the structure of the tree encodes spatial
information about the scene. It is useful for hidden surface removal (HSR)
and related applications.
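A minimal structural sketch (illustrative names, with 2D splitting lines for brevity): each node stores an arbitrarily oriented splitting plane, the facets lying on it, and front/back subtrees.

```python
# A bare-bones BSP node: splitting plane, facets on the plane, two subtrees.
# The plane test is shown for 2D lines (a, b, c) with a*x + b*y + c = 0.
class BSPNode:
    def __init__(self, plane, facets, front=None, back=None):
        self.plane = plane        # (a, b, c): the splitting line/plane
        self.facets = facets      # facets embedded in this plane
        self.front = front        # subtree on the positive side
        self.back = back          # subtree on the negative side

def classify(plane, point):
    """Return which side of the splitting plane a point lies on."""
    a, b, c = plane
    d = a * point[0] + b * point[1] + c
    return "front" if d > 0 else "back" if d < 0 else "on"

root = BSPNode(plane=(1, 0, 0), facets=["wall"])   # split along x = 0
print(classify(root.plane, (2, 5)))                 # front
```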
Spectroradiometer
A device to measure the spectral energy distribution. It can therefore also
provide the CIE xyz tristimulus values.
Illuminant C
A standard for white light that approximates sunlight. It is defined by a
colour temperature of 6774 K.
Complementary colours
Colours which can be mixed together to yield white light. For example,
colours on segment CD are complementary to the colours on segment CB.
Dominant wavelength
The spectral colour which can be mixed with white light in order to
reproduce the desired colour. Colour B in the above figure is the dominant
wavelength for colour A.
Non-spectral colours
Colours not having a dominant wavelength. For example, colour E in the
above figure.
Perceptually uniform colour space
A colour space in which the distance between two colours is always
proportional to the perceived distance. The CIE XYZ colour space and the
CIE chromaticity diagram are not perceptually uniform, as the following
figure illustrates. The CIE LUV colour space is designed with perceptual
uniformity in mind.
Colour Gamuts
The chromaticity diagram can be used to compare the "gamuts" of various
possible output devices (i.e., monitors and printers). Note that a colour
printer cannot reproduce all the colours visible on a colour monitor.
The RGB Colour Cube
The additive colour model used for computer graphics is represented by the
RGB colour cube, where R, G, and B represent the colours produced by red,
green and blue phosphors, respectively.
Illustrating vector addition (left) and subtraction (middle). Right: vectors
have direction and magnitude; lines (sometimes called 'rays') are vectors
plus a starting point.
Vector Addition
When we add two vectors, we simply sum their elements at corresponding
positions. So for a pair of 2D vectors a = [u, v]T and b = [s, t]T we have:
a + b = [u + s, v + t]T
Vector Subtraction
Vector subtraction is identical to the addition operation with a sign change,
since when we negate a vector we simply flip the sign on its elements.
−b = [−s,−t]T
a − b = a + (−b) = [u − s, v − t]T
Vector Scaling
If we wish to increase or reduce a vector quantity by a scale factor λ then we
multiply each element in the vector by λ.
λa = [λu, λv] T
Vector Magnitude
We write the length or magnitude of a vector a as |a|. We use Pythagoras'
theorem to compute the magnitude: |a| = sqrt(u^2 + v^2).
The figure shows this to be valid, since u and v are distances along the
principal axes (x and y) of the space, and so the distance of a from the
origin is the hypotenuse of a right-angled triangle. If we have an
n-dimensional vector q = [q1, q2, ..., qn] then the definition of vector
magnitude generalises to |q| = sqrt(q1^2 + q2^2 + ... + qn^2).
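The four operations above, checked with numpy (a 2D example):

```python
# Vector addition, subtraction, scaling, and magnitude in numpy.
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

print(a + b)               # addition:      [4. 6.]
print(a - b)               # subtraction:   [2. 2.]
print(2.0 * a)             # scaling:       [6. 8.]
print(np.linalg.norm(a))   # magnitude |a|: 5.0 (sqrt(3^2 + 4^2))
```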
Figure: (a) demonstrating how the dot product can be used to measure the
component of one vector in the direction of another (i.e. a projection, shown
here as p). (b) The geometry used to prove a · b = |a||b| cos θ via the Law of
Cosines.
Q. Explain the term ray casting, the basic idea behind it, and highlight two
of its goals
The goal of ray casting is to determine the color of each pixel in the view
window by considering all of the objects in the scene.
What part of the scene affects a single pixel? For a single pixel, we see a
finite volume of the scene (see the sketch below).
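A common first step in a ray caster is intersecting the pixel's ray with the scene's objects; the sketch below assumes a sphere primitive and returns the nearest hit distance along the ray:

```python
# Ray-sphere intersection: the nearest positive root of the quadratic
# |o + t*d - c|^2 = r^2 gives the visible hit for this pixel's ray.
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# Ray from the eye straight down -z toward a unit sphere at z = -5:
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))   # 4.0
```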
(iii) spectroradiometer
A device to measure the spectral energy distribution. It can therefore also
provide the CIE xyz tristimulus values.
Illuminant C: a standard for white light that approximates sunlight. It is
defined by a colour temperature of 6774 K.
(iv) rendering
Smooth shading techniques were developed by Gouraud and Phong at the
University of Utah. Phong also introduced a reflection model that included
specular highlights. Rendering is the conversion of a scene into an image:
the scene is composed of models in three-space; models are composed of
primitives supported by the rendering system, and are entered by hand or
created by a program.
Q. State Snell's law
Snell's law states that the ratio of the sines of the angle of incidence and
the angle of refraction of a wave as it travels through a boundary between
two media is a constant determined by the refractive indices of the two
media.
Snell‘s Law:
sinθr = (n1/n2) sinθi
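Solving the stated law for the refraction angle gives a small utility; this sketch also reports total internal reflection when no real angle exists:

```python
# Snell's law solved for the refraction angle: sin(theta_r) = (n1/n2) sin(theta_i).
import math

def refraction_angle(theta_i, n1, n2):
    """Angle of refraction (radians) for incidence theta_i, media n1 -> n2."""
    s = (n1 / n2) * math.sin(theta_i)    # sin(theta_r) from Snell's law
    if abs(s) > 1:
        return None                      # total internal reflection
    return math.asin(s)

# Light entering glass (n = 1.5) from air (n = 1.0) at 30 degrees:
theta_r = refraction_angle(math.radians(30), 1.0, 1.5)
print(math.degrees(theta_r))             # about 19.47 degrees
```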
(Note: In early PCs, there was no display processor. The frame buffer was
part of the physical address space addressable by the CPU. The CPU was
responsible for all display functions.)
Some Typical Examples of Frame Buffer Structures:
1. For a simple monochrome monitor, just use one bit per pixel.
2. A gray-scale monitor displays only one color, but allows for a range of
intensity levels at each pixel. A typical example would be to use 6-8 bits per
pixel, giving 64-256 intensity levels. For a color monitor, we need a range of
intensity levels for each of red, green, and blue. There are two ways to
arrange this.
3. A color monitor may use a color lookup table (LUT). For example, we
could have a LUT with 256 entries. Each entry contains a color represented
by red, green, and blue values. We then could use a frame buffer with depth
of 8. For each pixel, the frame buffer contains an index into the LUT, thus
choosing one of the 256 possible colors. This approach saves memory, but
limits the number of colors visible at any one time.
4. A frame buffer with a depth of 24 has 8 bits for each color, thus 256
intensity levels for each color. 2^24 (about 16.7 million) colors may be
displayed. Any pixel can have any color at any time. For a 1024x1024 monitor
we would need 3 megabytes of memory for this type of frame buffer.
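The arithmetic in these examples is easy to check with a small sketch:

```python
# Frame buffer memory = width * height * bits-per-pixel / 8 bytes.
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(1024, 1024, 1))    # monochrome: 131072 B (128 KB)
print(framebuffer_bytes(1024, 1024, 8))    # LUT-indexed: 1 MB
print(framebuffer_bytes(1024, 1024, 24))   # true colour: 3 MB, as stated
print(2 ** 24)                             # 16777216 displayable colours
```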
Q. What is a texture?
i. C 0
A curve is C0 and G0 continuous if adjacent segments join at a common
endpoint.
ii. C 1
A curve is G1 continuous if the geometric first derivative is continuous
across its joints, i.e., the tangent vectors of adjacent segments are
collinear (on the same line) at the shared endpoint. A curve is C1
continuous if the parametric first derivative is continuous across joints,
i.e., the tangent vectors of adjacent segments are collinear and have the
same magnitude at their shared endpoint
iii. Cn
A curve is Cn continuous if the nth derivatives of adjacent segments are
collinear and have the same magnitude at their shared endpoint. Curve
continuity has a significant impact on the quality of a curve or surface,
and different industries have different standards:
• Computer graphics often requires G1 continuity: 'good enough' for
animations and games
• The automotive industry often requires G2 continuity: visually appealing
surface reflections off of car bodies
• Aircraft and race cars may require G4 or G5 continuity: to avoid
turbulence when air flows over the surface of the vehicle
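As an illustration, for two cubic Bezier segments meeting at a shared endpoint, C1 continuity can be tested by comparing the end tangent of the first segment, 3(P3 − P2), with the start tangent of the second, 3(Q1 − Q0):

```python
# C1 test at the joint of two cubic Bezier segments: end and start tangents
# must match in both direction and magnitude.
import numpy as np

def c1_continuous(P, Q, tol=1e-9):
    """P, Q: 4 control points each; segments assumed to share P[3] == Q[0]."""
    end_tangent = 3.0 * (P[3] - P[2])     # derivative of the first at t = 1
    start_tangent = 3.0 * (Q[1] - Q[0])   # derivative of the second at t = 0
    return np.allclose(end_tangent, start_tangent, atol=tol)

P = np.array([[0, 0], [1, 0], [2, 1], [3, 1]], dtype=float)
Q = np.array([[3, 1], [4, 1], [5, 0], [6, 0]], dtype=float)
print(c1_continuous(P, Q))   # True: both tangents are (3, 0)
```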