Computer Graphics and Visualization
Visualization
Computer Graphics
Example
Applications of CG
1. Display of Information
2. Design
3. Simulation and Animation
4. User Interface
A Graphics System
[Figure: a graphics system, with input devices feeding the system and the image formed in the frame buffer (FB).]
The Frame Buffer
The image we see on the output device is an array (the raster) of picture elements, or pixels, produced by the graphics system.
Pixels are stored in a part of memory called the frame buffer.
The frame buffer can be viewed as the core element of a graphics system.
The resolution, the number of pixels in the frame buffer, determines the detail that you can see in the image.
The depth, or precision, of the frame buffer is defined as the number of bits that are used for each pixel.
It determines properties such as how many colors can be represented on a given system.
For example, a 1-bit-deep frame buffer allows only two colors, while an 8-bit-deep frame buffer allows 2^8 = 256 colors.
In full-color systems, there are 24 (or more) bits per pixel (RGB).
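As a rough sketch (an illustration, not from the slides), a full-color frame buffer can be modeled as an array of 24-bit RGB pixels, with 2^depth representable colors:

#include <stdio.h>
#include <stdlib.h>

/* One 24-bit RGB pixel: 8 bits per channel. */
typedef struct { unsigned char r, g, b; } Pixel;

int main(void) {
    const int width = 640, height = 480;  /* resolution: pixels in the buffer */
    const int depth = 8 * sizeof(Pixel);  /* bits per pixel: 24 */

    /* The frame buffer itself: width * height pixels. */
    Pixel *fb = calloc((size_t)width * height, sizeof(Pixel));
    if (!fb) return 1;

    printf("depth = %d bits, colors = %lu\n", depth, 1UL << depth);
    free(fb);
    return 0;
}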
A triangle is specified by its three vertices, but its outline is displayed by the three line segments connecting those vertices.
The graphics system must generate a set of pixels that appear as line segments to the viewer.
The conversion of geometric entities to pixel colors and locations in the frame buffer is known as rasterization, or scan conversion.
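As an illustration of scan conversion (not from the slides), here is a simple DDA line rasterizer; write_pixel is an assumed frame-buffer primitive, as in the raster model described later:

#include <math.h>

void write_pixel(int x, int y, int color);  /* assumed frame-buffer access */

/* DDA scan conversion: color the pixels along the segment (x0,y0)-(x1,y1). */
void dda_line(float x0, float y0, float x1, float y1, int color) {
    float dx = x1 - x0, dy = y1 - y0;
    int steps = (int)fmaxf(fabsf(dx), fabsf(dy));  /* about one step per pixel */
    if (steps == 0) {                              /* degenerate segment */
        write_pixel((int)(x0 + 0.5f), (int)(y0 + 0.5f), color);
        return;
    }
    float xinc = dx / steps, yinc = dy / steps;
    float x = x0, y = y0;
    for (int i = 0; i <= steps; i++) {
        write_pixel((int)(x + 0.5f), (int)(y + 0.5f), color);
        x += xinc;
        y += yinc;
    }
}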
Graphics Processing Units (GPUs) are custom-tailored to carry out specific graphics functions.
The GPU can be either on the motherboard of the system or on a graphics card.
The frame buffer is accessed through the GPU and usually is on the same circuit board as the GPU.
GPUs are so powerful that they are often used as mini supercomputers for general-purpose computing.
Monochrome System
Cathode Ray Tube (CRT)
[Figures: CRT structure with deflectors and phosphor screen; phosphor decay; interlacing of fields; the shadow-mask triad.]
Color Displays
Shadow Mask CRT
Light-emitting diode (LED) panels, liquid-crystal displays (LCDs), and plasma panels all use a two-dimensional grid to address individual light-emitting elements.
The two outside plates each contain parallel grids of wires that are oriented perpendicular to each other.
Electrical signals sent to the proper wires in each grid determine the electrical field at a location given by the intersection of two wires.
The middle plate in an LED panel contains light-
emitting diodes that can be turned on and off by
the electrical signals sent to the grid.
In an LCD display, the electrical field controls the
polarization of the liquid crystals in the middle
panel.
A plasma panel uses the voltages on the grids to
energize gases embedded between the glass
panels holding the grids. The energized gas
becomes a glowing plasma.
Input Devices
Image Formation
Modern systems can exploit the capabilities of software and hardware to create realistic images of computer-generated 3D objects.
This task involves many aspects of image formation, such as lighting, shading, and the properties of materials.
Computer-generated images are synthetic.
Traditional imaging methods include cameras and the human visual system.
Elements of Image Formation
- Objects
- Viewer
- Light source(s)
The point (xp, yp, d) is called the projection of the point (x, y, z).
The field, or angle, of view of our camera is the angle made by the largest object that our camera can image on its film plane.
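For a concrete sketch (assuming the pinhole at the origin and the projection plane at z = d, a convention not spelled out on the slide), similar triangles give xp = x*d/z and yp = y*d/z:

#include <stdio.h>

/* Pinhole projection sketch: pinhole at the origin, projection plane z = d.
   By similar triangles, (x, y, z) projects to (x*d/z, y*d/z, d). */
void project(double x, double y, double z, double d,
             double *xp, double *yp) {
    *xp = x * d / z;
    *yp = y * d / z;
}

int main(void) {
    double xp, yp;
    project(1.0, 2.0, 4.0, 1.0, &xp, &yp);  /* point at z = 4, plane at d = 1 */
    printf("projection: (%g, %g, 1)\n", xp, yp);  /* prints (0.25, 0.5, 1) */
    return 0;
}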
The ideal pinhole camera has an infinite depth of
field: Every point within its field of view is in focus.
Every point in its field of view projects to a point
on the back of the camera.
The pinhole camera has two disadvantages.
- First, because the pinhole is so small (it admits only a single ray from a point source), almost no light enters the camera.
- Second, the camera cannot be adjusted to have a different angle of view.
By replacing the pinhole with a lens, we solve the two
problems of the pinhole camera.
First, the lens gathers more light than can pass through
the pinhole. The larger the aperture of the lens, the more
light the lens can collect.
Second, by picking a lens with the proper focal length (equivalent to choosing d for the pinhole camera), we can achieve any desired angle of view (up to 180 degrees).
Lenses, however, do not have an infinite depth of field: Not
all distances from the lens are in focus.
As with the pinhole camera, computer graphics produces images in which all objects are in focus.
Human Visual System
[Figure: projection of a point p: a projector through p meets the image plane at the projection of p; all projectors pass through the center of projection.]
The image is formed on the
film plane at the back of the
camera.
The specification of the objects
is independent of the
specification of the viewer.
Hence, within a graphics
library, there will be separate
functions for specifying the
objects and the viewer.
The Programmer's Interface
The Pen-Plotter Model
A pen plotter produces images by
moving a pen held by a gantry, a
structure that can move the pen in
two orthogonal directions across the
paper.
moveto(x, y);  /* move the pen to (x, y) without drawing */
lineto(x, y);  /* draw a line from the current position to (x, y) */

/* Trace the outline of a unit square: */
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
A raster-based, but still limited, 2D model relies on writing pixels directly into a frame buffer.
write_pixel(x, y, color);
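For instance (a sketch; write_pixel is the primitive above), an image is built up one pixel at a time:

void write_pixel(int x, int y, int color);  /* the primitive above */

/* Fill a w-by-h rectangle whose lower-left corner is (x0, y0)
   by writing each pixel directly into the frame buffer. */
void fill_rect(int x0, int y0, int w, int h, int color) {
    for (int y = y0; y < y0 + h; y++)
        for (int x = x0; x < x0 + w; x++)
            write_pixel(x, y, color);
}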
Three-Dimensional APIs
Advantages
Object Specification
glBegin(GL_POLYGON);            /* type of object */
    glVertex3f(0.0, 0.0, 0.0);  /* location of each vertex */
    glVertex3f(0.0, 1.0, 0.0);
    glVertex3f(0.0, 0.0, 1.0);
glEnd();
Camera Specification
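The slide's code is not preserved here; in the classic fixed-function OpenGL/GLU API, a camera might be specified along these lines (illustrative values):

/* Projection: a perspective view volume. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 2.0, 20.0);  /* left, right, bottom, top, near, far */

/* Viewer: place the camera at (0, 0, 5), looking at the origin. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,   /* eye position */
          0.0, 0.0, 0.0,   /* point the camera looks at */
          0.0, 1.0, 0.0);  /* up direction */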
Lights and Materials
Types of lights
- Point sources vs. distributed sources
- Spotlights
- Near and far sources
- Color properties
Material properties (see the code sketch after this list)
- Absorption: color properties
- Scattering
  - Diffuse
  - Specular
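In fixed-function OpenGL these properties map onto light and material calls; a minimal sketch with illustrative values:

GLfloat light_pos[] = { 1.0f, 1.0f, 1.0f, 0.0f };  /* w = 0: a far (directional) source */
GLfloat white[]     = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat red[]       = { 1.0f, 0.0f, 0.0f, 1.0f };

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
glLightfv(GL_LIGHT0, GL_DIFFUSE, white);

/* Material: a red diffuse surface with white specular highlights. */
glMaterialfv(GL_FRONT, GL_DIFFUSE, red);
glMaterialfv(GL_FRONT, GL_SPECULAR, white);
glMaterialf(GL_FRONT, GL_SHININESS, 50.0f);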
Additive and Subtractive Color
Additive color
- Form a color by adding amounts of three primaries
- Primaries are red (R), green (G), and blue (B)
- Examples: CRTs, projection systems, positive film
Subtractive color (see the sketch after this list)
- Form a color by filtering white light with cyan (C), magenta (M), and yellow (Y) filters
- Examples: light-material interactions, printing, negative film
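The two models are complements: each subtractive primary filters out one additive primary. The idealized conversion, as a sketch:

/* Idealized conversion between additive (RGB) and subtractive (CMY)
   color, with channels in [0, 1]: each filter subtracts from white. */
void rgb_to_cmy(float r, float g, float b, float *c, float *m, float *y) {
    *c = 1.0f - r;  /* cyan filter removes red */
    *m = 1.0f - g;  /* magenta filter removes green */
    *y = 1.0f - b;  /* yellow filter removes blue */
}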
Global vs Local Lighting
Why not ray tracing?
Graphics Architectures
Early graphics systems used general-purpose computers
with the standard von Neumann architecture. Such
computers are characterized by a single processing unit
that processes a single instruction at a time.
The display in these systems was based on a calligraphic
CRT display that included the necessary circuitry to
generate a line segment connecting two points.
Display Processor
Rather than have the host computer try to refresh the display, early systems used a special-purpose computer called a display processor (DPU).
The Graphics Pipeline
Process objects one at a time in the order they are generated by the application.
There is no point in building a pipeline unless we will do the same operation on many data sets.
In computer graphics, large sets of vertices and pixels must be processed in the same manner.
The four major steps in the imaging process (sketched in code after this list):
1. Vertex processing
2. Clipping and primitive assembly
3. Rasterization
4. Fragment processing
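For orientation, here is a minimal skeleton of that data flow; all types and function names are hypothetical, not a real API:

typedef struct Vertex Vertex;        /* position, color, ...              */
typedef struct Primitive Primitive;  /* e.g., a triangle's vertices       */
typedef struct Fragment Fragment;    /* potential pixel: location, color  */

void process_vertices(Vertex *v, int n);                /* 1. transform + color   */
int  assemble_and_clip(Vertex *v, int n, Primitive *p); /* 2. build + clip prims  */
int  rasterize(const Primitive *p, Fragment *frags);    /* 3. prims -> fragments  */
void process_fragments(const Fragment *f, int n);       /* 4. update frame buffer */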
1. Vertex Processing
Each vertex is processed independently.
The two major functions of this block are to carry out
coordinate transformations and to compute a color for
each vertex.
Much of the work in the pipeline is in converting object representations from one coordinate system to another:
- Object coordinates
- Camera (eye) coordinates
- Screen coordinates
Successive changes in coordinate systems are carried out by multiplying, or concatenating, the individual matrices into a single matrix.
The vertex processor also computes vertex colors.
After multiple stages of transformation, the geometry is transformed by a projection transformation.
The assignment of vertex colors can be as simple as the program specifying a color, or as complex as the computation of a color from a physically realistic lighting model that incorporates the surface properties of the object and the characteristic light sources in the scene.
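Concatenation is just a 4 x 4 matrix product; a minimal sketch, assuming row-major matrices (a convention, not specified in the slides):

/* Concatenate two 4x4 transformation matrices: out = a * b,
   so that applying `out` equals applying b first, then a. */
void mat4_mul(const float a[4][4], const float b[4][4], float out[4][4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 4; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}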
Projection
2. Primitive Assembly
No imaging system can see the whole world at once.
The human retina has a limited size corresponding to an approximately 90-degree field of view.
Cameras have film of limited size and can adjust their fields of view by selecting different lenses.
Vertices must be collected into geometric objects before clipping and rasterization can take place:
- Line segments
- Polygons
- Curves and surfaces
2. Clipping
Just as a real camera cannot see the whole world, the virtual camera can see only part of the world or object space: the view volume.
- Objects that are not within this volume are said to be clipped out of the scene.
Clipping must be done on a primitive-by-primitive basis rather than on a vertex-by-vertex basis.
Thus, within this stage of the pipeline, one must assemble
sets of vertices into primitives, such as line segments and
polygons, before clipping can take place.
Consequently, the output of this stage is a set of primitives
whose projections can appear in the image.
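One classic clipping strategy (named here for illustration; the slides do not specify one) is Cohen-Sutherland clipping, which computes an outcode per endpoint to trivially accept or reject line segments against a rectangular clip region:

/* Cohen-Sutherland outcodes against the clip rectangle
   [xmin, xmax] x [ymin, ymax]. */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

int outcode(double x, double y,
            double xmin, double xmax, double ymin, double ymax) {
    int code = INSIDE;
    if (x < xmin) code |= LEFT;   else if (x > xmax) code |= RIGHT;
    if (y < ymin) code |= BOTTOM; else if (y > ymax) code |= TOP;
    return code;
}

/* Trivial tests for a segment with endpoint outcodes c0 and c1:
   - (c0 | c1) == 0: both endpoints inside, accept the whole segment.
   - (c0 & c1) != 0: both endpoints off the same side, reject it.
   - Otherwise the segment must be subdivided against the crossed edges. */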
3. Rasterization
The primitives that emerge from the clipper are still
represented in terms of their vertices and must be
converted to pixels in the frame buffer.
For example, if three vertices specify a triangle with a solid
color, the rasterizer must determine which pixels in the
frame buffer are inside the polygon.
The output of the rasterizer is a set of fragments for each
primitive.
A fragment can be thought of as a potential pixel that
carries with it information, including its color and location,
that is used to update the corresponding pixel in the frame
buffer.
Fragments can also carry along depth information.
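A common rasterization test (an illustrative choice; the slides do not prescribe one) uses edge functions: a pixel center is inside a triangle if it lies on the inner side of all three edges:

/* Signed area test: > 0 if point (px, py) is to the left of edge a->b. */
double edge(double ax, double ay, double bx, double by, double px, double py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* A pixel center (px, py) is covered by a triangle with counterclockwise
   vertices v0, v1, v2 if it is on the inner side of all three edges.
   (For clockwise triangles, flip the comparisons.) */
int inside_triangle(double v0x, double v0y, double v1x, double v1y,
                    double v2x, double v2y, double px, double py) {
    return edge(v0x, v0y, v1x, v1y, px, py) >= 0 &&
           edge(v1x, v1y, v2x, v2y, px, py) >= 0 &&
           edge(v2x, v2y, v0x, v0y, px, py) >= 0;
}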
4. Fragment Processing
Programmable Pipelines
Techniques such as ray tracing, radiosity, and photon mapping cannot, in general, achieve real-time behavior: the ability to render complex dynamic scenes so that the viewer sees the display without defects.
Vertex programs can alter the location or color of each vertex as it flows through the pipeline.
Programmability is now available at every level, including hand-held devices such as cell phones. WebGL is being built into Web browsers.
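For instance, a minimal vertex program in GLSL, shown as a C string as it would be handed to glShaderSource; the attribute and uniform names are illustrative:

/* A pass-through vertex program: transforms each vertex's position by a
   matrix and forwards a per-vertex color to the rasterizer. */
const char *vertex_shader_src =
    "attribute vec4 aPosition;\n"
    "attribute vec4 aColor;\n"
    "uniform mat4 uModelViewProjection;\n"
    "varying vec4 vColor;\n"
    "void main() {\n"
    "    gl_Position = uModelViewProjection * aPosition;\n"
    "    vColor = aColor;\n"
    "}\n";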
Performance Characteristics
There are two fundamentally different types of processing: at the front end, geometric processing, and at the back end, rasterization.
Pipeline architectures dominate the graphics field, especially where real-time performance is of importance.
Commodity graphics cards incorporate the pipeline within
their GPUs. Cards that cost less than $100 can render
millions of shaded texture-mapped polygons per second.
Graphics cards use GPUs that contain the entire pipeline
within a single chip. The latest cards implement the entire
pipeline using floating point arithmetic and have floating-
point frame buffers.