Module 1 ppt

Computer graphics involves the creation and manipulation of images using computers, encompassing hardware, software, and applications. It has evolved from simple line displays to complex simulations used in training, entertainment, and design. Key components of a graphics system include input devices, processors, memory, frame buffers, and output devices, with applications spanning information display, design, simulation, and user interfaces.

What is Computer Graphics?

Angel: Interactive Computer Graphics 5E © Addison-Wesley 2009
Chapter 1: Graphics System and Models
1. Applications of Computer Graphics
2. A Graphics System
3. Images: Physical and Synthetic
4. Imaging Systems
5. The Synthetic Camera Model
6. The Programmer’s Interface
7. Graphics Architecture
8. Programmable Pipeline
9. Performance Characteristics.
Computer Graphics
• Computer graphics is concerned with all aspects of producing pictures or
images using a computer
• Hardware
• Software
• Applications
• The field began humbly almost 50 years ago, with the display of a few lines on a
cathode-ray tube (CRT); now, we can create images by computer that are
indistinguishable from photographs of real objects.
• We routinely train pilots with simulated airplanes, generating graphical displays
of a virtual environment in real time.
• Feature-length movies made entirely by computer have been successful, both
critically and financially.
• Massive multiplayer games can involve tens of thousands of concurrent
participants.
Preliminary Answer

• Application: an artist's rendition of the sun for an animation to be shown in a domed environment (planetarium)
• Software: Maya for modeling and rendering; Maya itself is built on top of OpenGL
• Hardware: PC with a graphics card for modeling and rendering
Applications of computer graphics
• Computer Graphics: All aspects of producing pictures or
images using a computer.
• The applications of computer graphics are many and
varied; we can, however, divide them into four major
areas:
• Display of information
• Design
• Simulation and animation
• User interfaces
1. Display
•Architectural drawings, e.g. plan of a building
•Maps: geographical information
•Plotting statistical graphs, e.g. share prices
•Medical images: Computed Tomography (CT),
Magnetic Resonance Imaging (MRI)
•Scientific visualization
2. Design (interaction important)
• Computer Aided Design (CAD)
• Design of very-large-scale integrated (VLSI) circuits

3. Simulation
• Flight Simulation for training pilots
• Computer Games
• Television and Computer-Animated Films: Toy Story (Pixar), Ice Age
• Virtual Reality
4. User interfaces
• Window-Based Operating Systems: Microsoft Windows, Macintosh, X
Windows
• Internet Browsers: Netscape, Explorer
1.2 A Graphics System
• A computer graphics system is a computer system; as such, it must have all the components of a general-purpose computer system.
• There are five major elements in our system:
1. Input devices
2. Processor
3. Memory
4. Frame buffer
5. Output devices
[Figure: block diagram of a graphics system: input devices feed the processor and memory, the image is formed in the frame buffer (FB), and the result is shown on the output device]
1.2.1 Pixels and the Frame Buffer
• Almost all graphics systems are raster based.
• A picture is produced as an array—the raster—of picture elements, or pixels,
within the graphics system.
• As we can see from Figure 1.2, each pixel corresponds to a location, or
small area, in the image.

• The pixels are stored in a part of memory called the frame buffer.
• Its resolution—the number of pixels in the frame buffer—determines the detail that you can see in the image.
• The depth, or precision, of the frame buffer, defined as the number of bits that are used for each pixel, determines properties such as how many colors can be represented on a given system.
For example,
• A 1-bit-deep frame buffer allows only two colors.
• An 8-bit-deep frame buffer allows 2^8 (256) colors.
• In full-color systems, there are 24 (or more) bits per pixel.
• A 24-bit image offers 16.7 million (2^24) color values.
• Such systems can display sufficient colors to represent most images realistically.
• They are also called true-color systems, or RGB-color systems, because individual
groups of bits in each pixel are assigned to each of the three primary colors—red,
green, and blue—used in most displays.
Frame Buffer
• In a very simple system, the frame buffer holds only the colored
pixels that are displayed on the screen.
• In most systems, the frame buffer holds far more information, such
as depth information needed for creating images from three-
dimensional data.
• In these systems, the frame buffer comprises multiple buffers.
• For now, we can use the terms frame buffer and color buffer
synonymously without confusion.
• The conversion of geometric entities to pixel colors and locations in
the frame buffer is known as rasterization, or scan conversion.
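To make the depth and rasterization ideas concrete, here is a minimal sketch in C (not from the textbook; the type and function names and the 640 × 480 size are illustrative assumptions) of a true-color frame buffer. Each pixel holds 8 bits per channel, 24 bits in total, giving 2^24 ≈ 16.7 million colors, and rasterization ultimately reduces to writing colors at pixel locations in this array.

#include <stdint.h>

/* Illustrative dimensions; real systems take these from the display mode. */
#define FB_WIDTH  640
#define FB_HEIGHT 480

/* One true-color (RGB) pixel: 24 bits of depth. */
typedef struct {
    uint8_t r, g, b;
} Pixel;

/* The frame buffer: a raster of FB_HEIGHT x FB_WIDTH pixels. */
static Pixel frame_buffer[FB_HEIGHT][FB_WIDTH];

/* Rasterization ends here: store a color at a pixel location (with clipping). */
static void set_pixel(int x, int y, Pixel color)
{
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        frame_buffer[y][x] = color;
}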
Special-Purpose
Graphics Processing
Units (GPUs)
• In early graphics systems, the frame buffer was part of the
standard memory that could be directly addressed by the CPU.
• Today, virtually all graphics systems are characterized by special-
purpose graphics processing units (GPUs), custom-tailored to carry
out specific graphics functions.
• The GPU can be either on the motherboard of the system or on a
graphics card.
• The frame buffer is accessed through the graphics processing unit
and may be included in the GPU.
1.2.2 Output Devices
• For many years, the dominant type of display (or monitor) has been the
cathode ray tube (CRT).
• Although various flat-panel technologies are now more popular, the basic
functioning of the CRT has much in common with these newer displays.
1.2.2 Output Devices
• A simplified picture of a CRT is shown in Figure.

CRT Display Principles
• Raster-Scan Displays
• Picture elements: screen points referred to as “pixels”
• Picture information stored in refresh (frame) buffer
• When electrons strike the phosphor coating on the tube, light is
emitted.
• The direction of the beam is controlled by two pairs of deflection
plates.
• The output of the computer is converted, by digital-to-analog
converters, to voltages across the x and y deflection plates.
• Light appears on the surface of the CRT when a sufficiently intense
beam of electrons is directed at the phosphor.
• Such a device is known as a random-scan, calligraphic, or vector CRT, because the beam can be moved directly from any position to any other position.
• If the intensity of the beam is turned off, the beam can be moved to a new position without changing any visible display.
Refresh Rates
• A typical CRT will emit light for only a short time—usually, a
few milliseconds— after the phosphor is excited by the
electron beam.
• For a human to see a steady, flicker-free image on most CRT
displays, the same path must be retraced, or refreshed, by the
beam at a sufficiently high rate, the refresh rate.
• The frequency at which a picture is redrawn on the screen is
referred to as the “refresh rate”
Refresh Rates
• As each screen refresh takes place, we tend to see each frame as a
smooth continuation of the patterns in the previous frame, so long as
the refresh rate is not too low.
• Below 24 fps (frames per second), the picture appears to flicker.
• Old silent films, for example, show this effect because they were
photographed at a rate of 16 fps.

• When sound systems were developed in the 1920s, motion-picture


film rates increased to 24 fps, which removed flickering and the
accompanying jerky movements of the actors.
• Current refresh rates is from 60-80 fps
• Some systems has up to 120 fps
Interlaced Display
• In a raster system, the graphics system takes pixels from the frame
buffer and displays them as points on the surface of the display in one
of two fundamental ways.
• In a noninterlaced or progressive display, the pixels are displayed row
by row, or scan line by scan line, at the refresh rate.
• In an interlaced display, odd rows and even
rows are refreshed alternately.
• In an interlaced display operating at 60 Hz, the
screen is redrawn in its entirety only 30 times
per second.
• Interlaced displays are used in commercial
television.
• Viewers located near the screen, however, can
tell the difference between the interlaced and
noninterlaced displays.

Noninterlaced displays are becoming more widespread, even though these displays process pixels at twice the rate of interlaced displays.
Color CRT
• Color CRTs have three different colored phosphors (red, green, and
blue), arranged in small groups.
• One common style arranges the phosphors in triangular groups called
triads, each triad consisting of three phosphors, one of each primary.
• Most color CRTs have three electron beams, corresponding to the
three types of phosphors.

[Figure: delta electron gun arrangement and in-line electron gun arrangement]
Shadow Mask
• In the shadow-mask CRT
(Figure 1.4), a metal screen
with small holes—the
shadow mask—ensures that
an electron beam excites
only phosphors of the proper
color.
Flat Panel Display

• Although CRTs are still the most common display device, they are rapidly
being replaced by flat-screen technologies.
• Flat-panel monitors are inherently raster.
• Although there are multiple technologies available, including light-
emitting diodes (LEDs), liquid-crystal displays (LCDs), and plasma panels,
all use a two-dimensional grid to address individual light-emitting
elements.
• The middle plate in an LED panel contains light-emitting diodes that can
be turned on and off by the electrical signals sent to the grid.
Aspect Ratio
• Until recently, most displays had a 4:3 width-to-height ratio (or aspect ratio) that corresponds to commercial television.
• Computer displays moved up to the popular resolutions of
1024 × 768 (XGA) and 1280 × 1024 (SXGA).
• The newer High Definition Television (HDTV) standard uses
a 16:9 aspect ratio
1.2.3 Input Devices
• Most graphics systems provide a keyboard and at least one other input
device.
• The most common input devices are the mouse, the joystick, and the data
tablet.
• Each provides positional information to the system, and each usually is
equipped with one or more buttons to provide signals to the processor.
• Often called pointing devices, these devices allow a user to indicate a
particular location on the display.
• Game consoles lack keyboards but include a greater variety of
input devices than a standard workstation.
• A typical console might have multiple buttons, a joystick, and dials.
• Devices such as the Nintendo Wii are wireless and can sense
accelerations in three dimensions.
• Games and virtual reality applications have all generated the need
for input devices that provide more than two-dimensional data.
• Higher-dimensional data can be obtained by devices such as data
gloves, which include many sensors, and computer vision systems
1.3 Images: Physical and Synthetic
• Computer-generated images are synthetic or
artificial, in the sense that the objects being
imaged may not exist physically.
• The preferred method to form computer-
generated images is similar to traditional
imaging methods, such as cameras and the
human visual system.
• Hence, before we discuss the mechanics of
writing programs to generate images, we
discuss the way images are formed by optical
systems.
1.3.1 Objects and Viewers
Image Formation
• In computer graphics, we form images which are generally two dimensional
using a process analogous to how images are formed by physical imaging
systems
• Cameras
• Microscopes
• Telescopes
• Human visual system
1.3.1 Objects and Viewers contd..,
Elements of Image Formation
• Objects
• Viewer
Objects
• The object exists in space independent of any image-formation
process and of any viewer.
• In computer graphics, where we deal with synthetic objects, we
form objects by specifying the positions in space of various
geometric primitives, such as points, lines, and polygons
• In most graphics systems, a set of locations in space, or
of vertices, is sufficient to define, or approximate, most
objects.
• For example, a line can be specified by two vertices;
• A polygon can be specified by an ordered list of vertices;
• A sphere can be specified by two vertices that give its
center and any point on its circumference.
Viewers
• Every imaging system must provide a means of forming
images from objects.
• To form an image, we must have someone or something
that is viewing our objects, be it a person, a camera, or a
digitizer.
• It is the viewer that forms the image of our objects.
• In the human visual system, the image is formed on the
back of the eye.
• In a camera, the image is formed in the film plane.
• Figure 1.7 shows a camera system viewing a building.
• Here we can observe that both the object and the viewer exist in a
three-dimensional world.
• However, the image that they define—what we find on the film
plane—is two dimensional.
1.3.2 Light and Images

• We have yet to mention light.


• If there were no light sources, the objects would be dark, and there
would be nothing visible in our image.
• In Figure 1.8, we see a physical object and a viewer (the camera);
now, however, there is a light source in the scene.
• Light from the source strikes various surfaces of the object, and a
portion of the reflected light enters the camera through the lens.
• The details of the interaction between light and the surfaces of the
object determine how much light enters the camera.
• Light is a form of electromagnetic radiation.
• The electromagnetic spectrum (Figure 1.9) includes radio waves, infrared
(heat), and a portion that causes a response in our visual systems.
• This visible spectrum, which has wavelengths in the range of 350 to 780
nanometers (nm), is called (visible) light.
• A given light source has a color determined by the energy that it emits at
various wavelengths.
• Geometric optics models light sources as emitters of light energy, each of which has a fixed intensity.
• In this model, light travels in straight lines from the sources to the objects with which it interacts.
• An ideal point source emits energy from a single location
at one or more frequencies equally in all directions.
• More complex sources, such as a light bulb, can be
characterized as emitting light over an area and by
emitting more light in one direction than another.
1.3.3 Image Formation Models
• We include the viewer in the figure 1.10
because we are interested in the light that
reaches her eye.
• The viewer can also be a camera, as shown
in Figure 1.11.
• A ray is a semi-infinite line that emanates
from a point and travels to infinity in a
particular direction.
• A portion of these infinite rays contributes to
the image on the film plane of our camera.
Ray Tracing
• Ray tracing and photon mapping are image-formation techniques that
are based on these ideas and that can form the basis for producing
computer-generated images.
1.4 Imaging Systems
• We now introduce two physical imaging systems:
• The pinhole camera and
• The human visual system.
• The pinhole camera is a simple example of an imaging
system that will enable us to understand the functioning
of cameras and of other optical imagers.
• We emulate it to build a model of image formation.
• The human visual system is extremely complex but still
obeys the physical principles of other optical imaging
systems.
1.4.1 The Pinhole Camera
• A pinhole camera is a box with a small hole in the center of one
side of the box; the film is placed inside the box on the side
opposite the pinhole.
• The film plane is located a distance d from the pinhole.
• A side view (Figure 1.13) allows us to calculate where the image of
the point (x, y, z) is on the film plane z = −d.
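From similar triangles in that side view (a standard result, restated here), the image of the point (x, y, z) on the film plane z = -d is:

$$ x_p = -\frac{x}{z/d}, \qquad y_p = -\frac{y}{z/d}, \qquad z_p = -d. $$

The division by z/d is what makes distant objects project to smaller images; the synthetic-camera model discussed later reproduces the same nonuniform foreshortening.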
The pinhole camera has two disadvantages.
• First, because the pinhole is so small—it admits only a
single ray from a point source—almost no light enters
the camera.
• Second, the camera cannot be adjusted to have a
different angle of view.
The Human Visual System

• The major components of the visual system are shown in Figure 1.15.
• Light enters the eye through the cornea, a transparent structure that protects the eye, and then the lens.
• The iris opens and closes to adjust the amount of light entering the eye.
• The lens forms an image on a two-dimensional structure called the retina at the
back of the eye.
• The rods and cones (so named because of their appearance when magnified)
are light sensors and are located on the retina.
• They are excited by electromagnetic energy in the range of 350 to 780 nm.
• The rods are low-level-light sensors that account for our night
vision and are not color sensitive; the cones are responsible for
our color vision.
• Whereas intensity is a physical measure of light energy,
brightness is a measure of how intense we perceive the light
emitted from an object to be.
• Brightness is an overall measure of how we react to the
intensity of light.
• The initial processing of light in the human visual system is
based on the same principles used by most optical systems.
1.5 The Synthetic-Camera Model
• We look at creating a computer generated image as being similar to forming
an image using an optical system.
• This paradigm has become known as the synthetic-camera model.
• Consider the imaging system shown in Figure 1.16.
• Again we see objects and a viewer.
• First, the specification of the objects is independent of the specification
of the viewer.
• Hence, we should expect that, within a graphics library, there will be separate
functions for specifying the objects and the viewer.
• Second, we can compute the image using simple geometric calculations,
just as we did with the pinhole camera.
• In Figure 1.17, the view in part (a) is similar to that of the pinhole camera.
• Whereas with a real camera, we would simply flip the film to regain the
original orientation of the object, with our synthetic camera we can avoid
the flipping by a simple trick.
• We draw another plane in front of the lens (Figure 1.17(b)), and work in
three dimensions, as shown in Figure 1.18.
• We find the image of a point on the object on the virtual image
plane by drawing a line, called a projector, from the point to the
center of the lens, or the center of projection (COP).
• In our synthetic camera, the virtual image plane that we have
moved in front of the lens is called the projection plane.
• The image of the point is located where the projector passes
through the projection plane.
• As we saw, not all objects can be imaged onto the pinhole camera’s
film plane.
• The angle of view expresses this limitation. In the synthetic camera,
we can move this limitation to the front by placing a clipping
rectangle, or clipping window, in the projection plane (Figure 1.19)
Given the location of the center of projection, the location and orientation of the
projection plane, and the size of the clipping rectangle, we can determine which objects
will appear in the image.
1.6 The Programmer’s Interface
• A user can interact with a graphics system through completely self-contained packages, such as the ones used in the CAD community. In such packages, the user develops images through interactions with the display using input devices, such as a mouse and a keyboard.
• Ex: In a typical application, such as the painting program shown in Figure 1.20, the user sees menus and icons that represent possible actions.
• By clicking on these items, the user guides the software and produces images without having to write programs.
• Of course, someone has to develop the code for these applications.
• The interface between an application program and a graphics system can be
specified through a set of functions that resides in a graphics library.
• These specifications are called the application programmer’s interface (API).
• The application programmer’s model of the system is shown in Figure 1.21.
• The software drivers are responsible for interpreting the output of the API and
converting these data to a form that is understood by the particular hardware.
1.6.1 The Pen-Plotter Model
• Historically, most early graphics systems were two-dimensional systems.
• The conceptual model that they used is now referred to as the pen-plotter
model, referencing the output device that was available on these systems.
• A pen plotter (Figure 1.22) produces images by moving a pen held by a gantry,
a structure that can move the pen in two orthogonal directions across the
paper.
• The plotter can raise and lower the pen as required to create the desired
image.
• Pen plotters are still in use; they are well suited for drawing large diagrams,
such as blueprints.
• Various APIs—such as LOGO and
PostScript—have their origins in this model.
• We can describe such a graphics system with the following drawing functions:
moveto(x,y)
lineto(x,y)
• Execution of the moveto function moves the pen to the location (x, y) on the
paper without leaving a mark.
• The lineto function moves the pen to (x, y) and draws a line from the old to
the new location of the pen.
• Here is a fragment of a simple program in such a system:
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
• This fragment would generate the output shown in the figure: a unit square with opposite corners at (0, 0) and (1, 1).
• An alternate raster-based, but still limiting, two-dimensional
model relies on writing pixels directly into a frame buffer.
• Such a system could be based on a single function of the
form
write_pixel(x, y, color)
• where x,y is the location of the pixel in the frame buffer and
color gives the color to be written there.
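As a bridge between the pen-plotter and raster models, here is a minimal sketch in C, assuming a small in-memory frame buffer and a stored pen position (all names and sizes here are illustrative, not a real API), of how moveto and lineto could be layered on write_pixel with a simple DDA-style loop. Coordinates are taken directly in pixel units to keep the example short.

#include <math.h>

#define WIDTH  256
#define HEIGHT 256

static unsigned int frame_buffer[HEIGHT][WIDTH];  /* hypothetical color buffer */
static double pen_x = 0.0, pen_y = 0.0;           /* current pen position */
static unsigned int current_color = 0xFFFFFFu;    /* white */

/* write_pixel in the spirit of the raster model described above. */
static void write_pixel(int x, int y, unsigned int color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame_buffer[y][x] = color;
}

/* moveto: change the current position without drawing. */
static void moveto(double x, double y)
{
    pen_x = x;
    pen_y = y;
}

/* lineto: rasterize a line from the current position to (x, y),
   stepping once per pixel along the longer axis, then update the pen. */
static void lineto(double x, double y)
{
    double dx = x - pen_x, dy = y - pen_y;
    int i, steps = (int)fmax(fabs(dx), fabs(dy));
    if (steps == 0) steps = 1;
    for (i = 0; i <= steps; i++) {
        double t = (double)i / steps;
        write_pixel((int)(pen_x + t * dx + 0.5),
                    (int)(pen_y + t * dy + 0.5), current_color);
    }
    pen_x = x;
    pen_y = y;
}

A call sequence such as moveto(10, 10); lineto(100, 10); lineto(100, 100); lineto(10, 100); lineto(10, 10); then rasterizes a square directly into the buffer, mirroring the earlier pen-plotter fragment.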
1.6.2 Three-Dimensional APIs
• If we are to follow the synthetic-camera model, we need functions in the
API to specify the following:
• Objects
• A viewer
• Light sources
• Material properties
• Objects are usually defined by sets of vertices.
• For simple geometric objects— such as line segments, rectangles, and
polygons—there is a simple relationship between a list of vertices, or
positions in space, and the object.
• For more complex objects, there may be multiple ways of defining the
object from a set of vertices.
• We can define the coordinate frame for the screen display window with the following statements:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
• The glLoadIdentity call ensures that the coordinate values passed to gluOrtho2D are not combined with any values we may have previously set for the projection matrix.
Contd..
• The display window will then be referenced by coordinates (xmin, ymin)
at the lower-left corner and by coordinates (xmax, ymax) at the upper-
right corner.

[Figure: display window with (xmin, ymin) at the lower-left corner and (xmax, ymax) at the upper-right corner]
OpenGL Point Functions
• OpenGL primitives are displayed with a default size and color.
• The default color for primitives is white, and the default point size is
equal to the size of a single screen pixel.
• We use the following OpenGL function to state the coordinate values
for a single position:

glVertex* ( );

where the asterisk (*) indicates that suffix codes are required for this function
• Coordinate positions in OpenGL can be given in two, three, or four
dimensions.
• We use a suffix value of 2, 3, or 4 on the glVertex function to
indicate the dimensionality of a coordinate position.
• A four-dimensional specification indicates a homogeneous-coordinate representation.
• The second suffix code on the glVertex function specifies the numerical data type: i (integer), s (short), f (float), or d (double).
• If we use an array specification for a coordinate position, we need
to append v (for “vector”) as a third suffix code.
• Calls to glVertex functions must be placed between a glBegin
function and a glEnd function.
• The argument of the glBegin function is used to identify the kind of
output primitive that is to be displayed, and glEnd takes no
arguments.
• For point plotting, the argument of the glBegin function is the
symbolic constant GL_POINTS.
• Thus, the form for an OpenGL specification of a point position is
glBegin (GL_POINTS);
glVertex* ( );
glEnd ( );
Example: Three equally spaced points are plotted along a two-dimensional straight-line path with a slope of 2.

glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );
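Putting the pieces together, the following is a minimal sketch of a complete program that plots these three points. It assumes the GLUT toolkit for window creation and the event loop; the window size, colors, and coordinate range are arbitrary illustrative choices.

#include <GL/glut.h>

/* Display callback: clear the window and plot the three points. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
        glVertex2i(50, 100);
        glVertex2i(75, 150);
        glVertex2i(100, 200);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 300);
    glutCreateWindow("Three points");

    glClearColor(1.0, 1.0, 1.0, 0.0);   /* white background */
    glColor3f(0.0, 0.0, 0.0);           /* black points */

    /* 2D coordinate frame, as in the gluOrtho2D slide above. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 200.0, 0.0, 250.0);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}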
OpenGL Line Functions
• In OpenGL, we select a single endpoint coordinate position using the
glVertex function, just as we did for a point position.

• And we enclose a list of glVertex functions between the glBegin/glEnd pair.


• But now we use a symbolic constant as the argument for the glBegin
function that interprets a list of positions as the endpoint coordinates for
line segments.
• There are three symbolic constants in OpenGL.
• GL_LINES , GL_LINE_LOOP, GL_LINE_STRIP
GL_LINES
• A set of straight-line segments between each successive pair of endpoints in a list is generated using the primitive line constant GL_LINES.
• With the five endpoints below, one segment is drawn from p1 to p2 and another from p3 to p4; the last endpoint, p5, has no partner and is ignored.

glBegin (GL_LINES);
glVertex2iv (p1);   /* p1, ..., p5 are two-element integer coordinate arrays */
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
GL_LINE_STRIP
• With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline.
• The display is a sequence of connected line segments from the first endpoint in the list to the last endpoint.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
GL_LINE_LOOP
• The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline.
• Lines are drawn as with GL_LINE_STRIP, but an additional line is drawn to connect the last coordinate position and the first coordinate position.

glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
• Here is an example of specifying three vertex positions in a three-dimensional world reference frame:

glBegin (GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0); /* vertex A */
glVertex3f(0.0, 1.0, 0.0); /* vertex B */
glVertex3f(0.0, 0.0, 1.0); /* vertex C */
glEnd ( );
OpenGL Point-Attribute Functions
• Color is specified with glColor function.
• We set the size for an OpenGL point with
glPointSize (size);
• Point is then displayed as a square block of pixels.
• Thus, a point size of 1.0 displays a single pixel, and a
point size of 2.0 displays a 2 × 2 pixel array.
• The glColor function may be listed inside or outside of a glBegin/glEnd pair, but glPointSize may not be called between glBegin and glEnd, so each point size must be set before its own glBegin/glEnd block:
glColor3f (1.0, 0.0, 0.0);    /* standard-size red point */
glPointSize (1.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glEnd ( );
glColor3f (0.0, 1.0, 0.0);    /* double-size green point */
glPointSize (2.0);
glBegin (GL_POINTS);
glVertex2i (75, 150);
glEnd ( );
glColor3f (0.0, 0.0, 1.0);    /* triple-size blue point */
glPointSize (3.0);
glBegin (GL_POINTS);
glVertex2i (100, 200);
glEnd ( );
• We can specify a viewer or camera in a variety of ways
• If we look at the camera shown in Figure 1.25, we can
identify four types of necessary specifications:
• Position: The camera location usually is given by the
position of the center of the lens, which is the center of
projection (COP).
• Orientation: Once we have positioned the camera, we can
place a camera coordinate system with its origin at the
center of projection. We can then rotate the camera
independently around the three axes of this system.
• Focal length: The focal length of the lens determines the
size of the image on the film plane or, equivalently, the
portion of the world the camera sees
• Film plane: The back of the camera has a height and a width.
• The classical two-point perspective of a cube shown in Figure 1.26 is a two-
point perspective because of a particular relationship between the viewer
and the planes of the cube.

• Although the OpenGL API allows us to set transformations with complete freedom, it also provides helpful extra functions.
• For example, consider the following function calls:
gluLookAt(cop_x, cop_y, cop_z, at_x, at_y, at_z, up_x, up_y, up_z);
gluPerspective(field_of_view, aspect_ratio, near, far);
• The first function call points the camera from a center of
projection toward a desired point (the at point), with a
specified up direction for the camera.
• The second selects a lens for a perspective view (the field
of view) and how much of the world that the camera
should image (the aspect ratio and the near and far
distances).
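For a concrete, purely illustrative use of these two calls, the fragment below places the camera at (0, 0, 5), aims it at the origin with +y as the up direction, and selects a 60-degree field of view on a 4:3 window; the numeric values are assumptions chosen only for the example. Note that gluLookAt is normally applied to the modelview matrix and gluPerspective to the projection matrix.

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);   /* field of view, aspect ratio, near, far */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,    /* center of projection */
          0.0, 0.0, 0.0,    /* at point */
          0.0, 1.0, 0.0);   /* up direction */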
• Light sources are defined by their location, strength,
color, and directionality.
• Material properties are characteristics, or attributes,
of the objects, and such properties are specified
through a series of function calls at the time that each
object is defined.
1.7 Graphics Architectures

• A simple model of early graphics systems is shown in Figure 1.28.


• The display in these systems was based on a calligraphic CRT display that
included the necessary circuitry to generate a line segment connecting two
points.
• The job of the host computer was to run the application program and to
compute the endpoints of the line segments in the image (in units of the display).
• This information had to be sent to the display at a rate high enough to avoid
flicker on the display.
• In the early days of computer graphics, computers were so slow that refreshing
even simple images, containing a few hundred line segments, would burden an
expensive computer
1.7.1 Display Processor
• Display processors were early special-purpose graphics systems built to relieve the host computer of continuously refreshing the display. They had conventional architectures (Figure 1.29) but included instructions to display primitives on the CRT.
• The main advantage of the display processor was that the instructions to
generate the image could be assembled once in the host and sent to the display
processor, where they were stored in the display processor’s own memory as a
display list, or display file.
• The display processor would then execute repetitively the program in the display
list, at a rate sufficient to avoid flicker, independently of the host, thus freeing the
host for other tasks.
1.7.2 Pipeline Architectures
• For computer-graphics applications, the most important use of custom VLSI
circuits has been in creating pipeline architectures.
• The concept of pipelining is illustrated in Figure 1.30 for a simple arithmetic
calculation. In our pipeline, there is an adder and a multiplier.
• If we use this configuration to compute a + (b ∗ c), then the calculation takes one
multiplication and one addition—the same amount of work required if we use a
single processor to carry out both operations.
• However, suppose that we have to carry out the same computation with
many values of a, b, and c.
• Now, the multiplier can pass on the results of its calculation to the adder
and can start its next multiplication while the adder carries out the second
step of the calculation on the first set of data.
• Here the rate at which data flows through the system, the throughput of
the system, has been doubled.
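A quick count makes the claim precise. Under the usual idealized assumption that a multiplication and an addition each take one stage time, processing n triples $(a_i, b_i, c_i)$ costs

$$ T_{\text{single}} = 2n, \qquad T_{\text{pipeline}} = n + 1, \qquad \frac{T_{\text{single}}}{T_{\text{pipeline}}} = \frac{2n}{n+1} \longrightarrow 2 \quad (n \to \infty), $$

so for long streams of data, exactly the situation in graphics where millions of vertices flow through the pipeline, throughput is effectively doubled.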
1.7.3 The Graphics Pipeline
• In a complex scene, there may be thousands—even millions—of vertices
that define the objects.
• We must process all these vertices in a similar manner to form an image in
the frame buffer.
• To process the geometry of our objects and obtain an image, we can employ the block diagram in Figure 1.31, which shows the four major steps in the imaging process:
1. Vertex processing
2. Clipping and primitive assembly
3. Rasterization
4. Fragment processing
1) Vertex Processing:
• In the first block of our pipeline, each vertex is processed independently.
• The two major functions of this block are to carry out coordinate
transformations and to compute a color for each vertex.
2) Clipping and Primitive Assembly:
• The second fundamental block in the implementation of the standard
graphics pipeline is for clipping and primitive assembly.
• We must do clipping because of the limitation that no imaging system can
see the whole world at once.
• We obtain the equivalent property in the synthetic camera by considering a
clipping volume
• The projections of objects in this volume appear in the image.
• Those that are outside do not and are said to be clipped out.
3) Rasterization:
• The primitives that emerge from the clipper are still represented in
terms of their vertices and must be further processed to generate
pixels in the frame buffer.
• For example, if three vertices specify a triangle filled with a solid color,
the rasterizer must determine which pixels in the frame buffer are
inside the polygon.
4) Fragment Processing:
• The final block in our pipeline takes in the fragments generated by the
rasterizer and updates the pixels in the frame buffer.
• If the application generated three-dimensional data, some fragments
may not be visible because the surfaces that they define are behind
other surfaces.
• The color of a fragment may be altered by texture mapping
