Module 1: Computer Graphics with OpenGL
From Chapter 2 of Computer Graphics with OpenGL®, Fourth Edition, Donald Hearn, M. Pauline Baker, Warren R. Carithers.
Copyright © 2011 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.
Computer Graphics Hardware
FIGURE 1: Basic design of a magnetic-deflection CRT, showing the base with connector pins, the electron gun, the focusing system, the magnetic deflection coils, the electron beam, and the phosphor-coated screen.
FIGURE 2: Operation of an electron gun with an accelerating anode, showing the heating filament, cathode, control grid, focusing anode, accelerating anode, and the electron beam path.
In the vacuum inside the CRT envelope, the free, negatively charged electrons are
then accelerated toward the phosphor coating by a high positive voltage. The
accelerating voltage can be generated with a positively charged metal coating
on the inside of the CRT envelope near the phosphor screen, or an accelerating
anode, as in Figure 2, can be used to provide the positive voltage. Sometimes
the electron gun is designed so that the accelerating anode and focusing system
are within the same unit.
Intensity of the electron beam is controlled by the voltage at the control grid,
which is a metal cylinder that fits over the cathode. A high negative voltage
applied to the control grid will shut off the beam by repelling electrons and
stopping them from passing through the small hole at the end of the control-
grid structure. A smaller negative voltage on the control grid simply decreases
the number of electrons passing through. Since the amount of light emitted by
the phosphor coating depends on the number of electrons striking the screen, the
brightness of a display point is controlled by varying the voltage on the control
grid. This brightness, or intensity level, is specified for individual screen positions
with graphics software commands.
The focusing system in a CRT forces the electron beam to converge to a small
cross section as it strikes the phosphor. Otherwise, the electrons would repel each
other, and the beam would spread out as it approaches the screen. Focusing is
accomplished with either electric or magnetic fields. With electrostatic focusing,
the electron beam is passed through a positively charged metal cylinder so that
electrons along the center line of the cylinder are in an equilibrium position. This
arrangement forms an electrostatic lens, as shown in Figure 2, and the electron
beam is focused at the center of the screen in the same way that an optical lens
focuses a beam of light at a particular focal distance. Similar lens focusing effects
can be accomplished with a magnetic field set up by a coil mounted around the
outside of the CRT envelope, and magnetic lens focusing usually produces the
smallest spot size on the screen.
Additional focusing hardware is used in high-precision systems to keep the
beam in focus at all screen positions. The distance that the electron beam must
travel to different points on the screen varies because the radius of curvature for
most CRTs is greater than the distance from the focusing system to the screen
center. Therefore, the electron beam will be focused properly only at the center
of the screen. As the beam moves to the outer edges of the screen, displayed
images become blurred. To compensate for this, the system can adjust the focusing
according to the screen position of the beam.
As with focusing, deflection of the electron beam can be controlled with either
electric or magnetic fields. Cathode-ray tubes are now commonly constructed
with magnetic-deflection coils mounted on the outside of the CRT envelope, as
illustrated in Figure 1. Two pairs of coils are used for this purpose. One pair is
mounted on the top and bottom of the CRT neck, and the other pair is mounted
on opposite sides of the neck. The magnetic field produced by each pair of coils
results in a transverse deflection force that is perpendicular to both the direction
of the magnetic field and the direction of travel of the electron beam. Horizontal
deflection is accomplished with one pair of coils, and vertical deflection with the
other pair. The proper deflection amounts are attained by adjusting the current
through the coils. When electrostatic deflection is used, two pairs of parallel plates
are mounted inside the CRT envelope. One pair of plates is mounted horizontally
to control vertical deflection, and the other pair is mounted vertically to control
horizontal deflection (Fig. 3).
Spots of light are produced on the screen by the transfer of the CRT beam
energy to the phosphor.
FIGURE 3: Electrostatic deflection of the electron beam in a CRT. Vertical and horizontal deflection plates are mounted between the focusing system and the phosphor-coated screen.
When the electrons in the beam collide with the phosphor
coating, they are stopped and their kinetic energy is absorbed by the phosphor.
Part of the beam energy is converted by friction into heat energy, and the remain-
der causes electrons in the phosphor atoms to move up to higher quantum-energy
levels. After a short time, the “excited” phosphor electrons begin dropping back
to their stable ground state, giving up their extra energy as small quantums of
light energy called photons. What we see on the screen is the combined effect of all
the electron light emissions: a glowing spot that quickly fades after all the excited
phosphor electrons have returned to their ground energy level. The frequency (or
color) of the light emitted by the phosphor is in proportion to the energy difference
between the excited quantum state and the ground state.
Different kinds of phosphors are available for use in CRTs. Besides color, a
major difference between phosphors is their persistence: how long they continue
to emit light (that is, how long it is before all excited electrons have returned to
the ground state) after the CRT beam is removed. Persistence is defined as the
time that it takes the emitted light from the screen to decay to one-tenth of its
original intensity. Lower-persistence phosphors require higher refresh rates to
maintain a picture on the screen without flicker. A phosphor with low persistence
can be useful for animation, while high-persistence phosphors are better suited
for displaying highly complex, static pictures. Although some phosphors have
persistence values greater than 1 second, general-purpose graphics monitors are
usually constructed with persistence in the range from 10 to 60 microseconds.
FIGURE 4: Intensity distribution of an illuminated phosphor spot on a CRT screen.
Figure 4 shows the intensity distribution of a spot on the screen. The intensity is greatest at the center of the spot, and it decreases with a Gaussian
distribution out to the edges of the spot. This distribution corresponds to the
cross-sectional electron density distribution of the CRT beam.
The maximum number of points that can be displayed without overlap on
a CRT is referred to as the resolution. A more precise definition of resolution is
the number of points per centimeter that can be plotted horizontally and ver-
tically, although it is often simply stated as the total number of points in each
direction. Spot intensity has a Gaussian distribution (Fig. 4), so two adjacent
spots will appear distinct as long as their separation is greater than the diameter
at which each spot has an intensity of about 60 percent of that at the center of
the spot. This overlap position is illustrated in Figure 5.
FIGURE 5: Two illuminated phosphor spots are distinguishable when their separation is greater than the diameter at which a spot intensity has fallen to 60 percent of maximum.
Spot size also depends on intensity. As more electrons are accelerated toward the phosphor per second, the diameters of the CRT beam and the illuminated spot increase. In addition, the increased excitation energy tends to spread to neighboring phosphor atoms not directly in the path of the beam, which further increases the spot diameter.
Thus, resolution of a CRT is dependent on the type of phosphor, the intensity
to be displayed, and the focusing and deflection systems. Typical resolution on
high-quality systems is 1280 by 1024, with higher resolutions available on many
systems. High-resolution systems are often referred to as high-definition systems.
The physical size of a graphics monitor, on the other hand, is given as the length of
the screen diagonal, with sizes varying from about 12 inches to 27 inches or more.
A CRT monitor can be attached to a variety of computer systems, so the number
of screen points that can actually be plotted also depends on the capabilities of
the system to which it is attached.
Raster-Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan
display, based on television technology. In a raster-scan system, the electron beam
is swept across the screen, one row at a time, from top to bottom. Each row is
referred to as a scan line. As the electron beam moves across a scan line, the beam
intensity is turned on and off (or set to some intermediate value) to create a pattern
of illuminated spots. Picture definition is stored in a memory area called the
refresh buffer or frame buffer, where the term frame refers to the total screen area.
This memory area holds the set of color values for the screen points. These stored
color values are then retrieved from the refresh buffer and used to control the
intensity of the electron beam as it moves from spot to spot across the screen. In this
way, the picture is “painted” on the screen one scan line at a time, as demonstrated
in Figure 6. Each screen spot that can be illuminated by the electron beam
is referred to as a pixel or pel (shortened forms of picture element). Since the
refresh buffer is used to store the set of screen color values, it is also sometimes
called a color buffer. Also, other kinds of pixel information, besides color, are
stored in buffer locations, so all the different buffer areas are sometimes referred
to collectively as the “frame buffer.” The capability of a raster-scan system to
store color information for each screen point makes it well suited for the realistic
display of scenes containing subtle shading and color patterns. Home television
sets and printers are examples of other systems using raster-scan methods.
Raster systems are commonly characterized by their resolution, which is the
number of pixel positions that can be plotted.
FIGURE 6: A raster-scan system displays an object as a set of discrete points across each scan line.
Another property of video monitors
is aspect ratio, which is now often defined as the number of pixel columns divided
by the number of scan lines that can be displayed by the system. (Sometimes this
term is used to refer to the number of scan lines divided by the number of pixel
columns.) Aspect ratio can also be described as the number of horizontal points
to vertical points (or vice versa) necessary to produce equal-length lines in both
directions on the screen. Thus, an aspect ratio of 4/3, for example, means that
a horizontal line plotted with four points has the same length as a vertical line
plotted with three points, where line length is measured in some physical units
such as centimeters. Similarly, the aspect ratio of any rectangle (including the total
screen area) can be defined to be the width of the rectangle divided by its height.
The range of colors or shades of gray that can be displayed on a raster system
depends on both the types of phosphor used in the CRT and the number of bits
per pixel available in the frame buffer. For a simple black-and-white system, each
screen point is either on or off, so only one bit per pixel is needed to control
the intensity of screen positions. A bit value of 1, for example, indicates that the
electron beam is to be turned on at that position, and a value of 0 turns the beam
off. Additional bits allow the intensity of the electron beam to be varied over
a range of values between “on” and “off.” Up to 24 bits per pixel are included
in high-quality systems, which can require several megabytes of storage for the
frame buffer, depending on the resolution of the system. For example, a system
with 24 bits per pixel and a screen resolution of 1024 by 1024 requires 3 MB of
storage for the refresh buffer. The number of bits per pixel in a frame buffer is
sometimes referred to as either the depth of the buffer area or the number of bit
planes. A frame buffer with one bit per pixel is commonly called a bitmap, and
a frame buffer with multiple bits per pixel is a pixmap, but these terms are also
used to describe other rectangular arrays, where a bitmap is any pattern of binary
values and a pixmap is a multicolor pattern.
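As a quick check of the storage arithmetic mentioned above, the following short C program (an illustrative sketch, not part of the original text) computes the frame-buffer size for a given resolution and color depth:

#include <stdio.h>

int main (void)
{
    long width = 1024, height = 1024;   /* Screen resolution.            */
    long bitsPerPixel = 24;             /* Full-color frame buffer.      */

    /* Total storage: one color value per pixel, 8 bits per byte.       */
    long bytes = width * height * bitsPerPixel / 8;

    /* 1024 x 1024 x 3 bytes = 3,145,728 bytes = 3 MB.                  */
    printf ("Frame-buffer storage: %ld bytes (%ld MB)\n",
            bytes, bytes / (1024 * 1024));
    return 0;
}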
As each screen refresh takes place, we tend to see each frame as a smooth
continuation of the patterns in the previous frame, so long as the refresh rate is
not too low. Below about 24 frames per second, we can usually perceive a gap
between successive screen images, and the picture appears to flicker. Old silent
films, for example, show this effect because they were photographed at a rate of
16 frames per second. When sound systems were developed in the 1920s, motion-
picture film rates increased to 24 frames per second, which removed flickering
and the accompanying jerky movements of the actors. Early raster-scan computer
systems were designed with a refresh rate of about 30 frames per second. This
produces reasonably good results, but picture quality is improved, up to a point,
with higher refresh rates on a video monitor because the display technology on the
monitor is basically different from that of film. A film projector can maintain the
continuous display of a film frame until the next frame is brought into view. But
on a video monitor, a phosphor spot begins to decay as soon as it is illuminated.
Therefore, current raster-scan displays perform refreshing at the rate of 60 to
80 frames per second, although some systems now have refresh rates of up to
120 frames per second. And some graphics systems have been designed with a
variable refresh rate. For example, a higher refresh rate could be selected for a
stereoscopic application so that two views of a scene (one from each eye position)
can be alternately displayed without flicker. But other methods, such as multiple
frame buffers, are typically used for such applications.
Sometimes, refresh rates are described in units of cycles per second, or hertz
(Hz), where a cycle corresponds to one frame. Using these units, we would
describe a refresh rate of 60 frames per second as simply 60 Hz. At the end of
each scan line, the electron beam returns to the left side of the screen to begin
displaying the next scan line. The return to the left of the screen, after refreshing
FIGURE 7: Interlacing scan lines on a raster-scan display. First, all points on the even-numbered (solid) scan lines are displayed; then all points along the odd-numbered (dashed) lines are displayed.
each scan line, is called the horizontal retrace of the electron beam. And at the end of each frame (displayed in 1/80 to 1/60 of a second), the electron beam returns to the upper-left corner of the screen (vertical retrace) to begin the next frame.
On some raster-scan systems and TV sets, each frame is displayed in two
passes using an interlaced refresh procedure. In the first pass, the beam sweeps
across every other scan line from top to bottom. After the vertical retrace, the
beam then sweeps out the remaining scan lines (Fig. 7). Interlacing of the scan
lines in this way allows us to see the entire screen displayed in half the time that
it would have taken to sweep across all the lines at once from top to bottom.
This technique is primarily used with slower refresh rates. On an older, 30 frame-per-second, non-interlaced display, for instance, some flicker is noticeable. But with interlacing, each of the two passes can be accomplished in 1/60 of a second, which brings the refresh rate nearer to 60 frames per second. This is an effective
technique for avoiding flicker—provided that adjacent scan lines contain similar
display information.
Random-Scan Displays
When operated as a random-scan display unit, a CRT has the electron beam
directed only to those parts of the screen where a picture is to be displayed.
Pictures are generated as line drawings, with the electron beam tracing out the
component lines one after the other. For this reason, random-scan monitors are
also referred to as vector displays (or stroke-writing displays or calligraphic
displays). The component lines of a picture can be drawn and refreshed by a
random-scan system in any specified order (Fig. 8). A pen plotter operates in a
similar way and is an example of a random-scan, hard-copy device.
Refresh rate on a random-scan system depends on the number of lines to be
displayed on that system. Picture definition is now stored as a set of line-drawing
commands in an area of memory referred to as the display list, refresh display file,
vector file, or display program. To display a specified picture, the system cycles
through the set of commands in the display file, drawing each component line in
turn. After all line-drawing commands have been processed, the system cycles
back to the first line command in the list. Random-scan displays are designed to
draw all the component lines of a picture 30 to 60 times each second, with up to
100,000 “short” lines in the display list. When a small set of lines is to be displayed,
each refresh cycle is delayed to avoid very high refresh rates, which could burn
out the phosphor.
Random-scan systems were designed for line-drawing applications, such as
architectural and engineering layouts, and they cannot display realistic shaded
scenes. Since picture definition is stored as a set of line-drawing instructions rather
than as a set of intensity values for all screen points, vector displays generally have
higher resolutions than raster systems.
FIGURE 8: A random-scan system draws the component lines of an object in any specified order.
Also, vector displays produce smooth line drawings because the CRT beam directly follows the line path. A raster system, by
contrast, produces jagged lines that are plotted as discrete point sets. However,
the greater flexibility and improved line-drawing capabilities of raster systems
have resulted in the abandonment of vector technology.
Computer Graphics Software
1 Coordinate Representations
2 Graphics Functions
3 Software Standards
4 Other Graphics Packages
5 Introduction to OpenGL
6 Summary
From Chapter 3 of Computer Graphics with OpenGL®, Fourth Edition, Donald Hearn, M. Pauline Baker, Warren R. Carithers.
Copyright © 2011 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.
Examples of such graphics packages include GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling Language), Java 2D, and Java 3D. A set of graphics functions is often called a computer-graphics application
programming interface (CG API) because the library provides a software interface
between a programming language (such as C++) and the hardware. So when we write
an application program in C++, the graphics routines allow us to construct and display
a picture on an output device.
1 Coordinate Representations
To generate a picture using a programming package, we first need to give the
geometric descriptions of the objects that are to be displayed. These descriptions
determine the locations and shapes of the objects. For example, a box is specified
by the positions of its corners (vertices), and a sphere is defined by its center posi-
tion and radius. With few exceptions, general graphics packages require geomet-
ric descriptions to be specified in a standard, right-handed, Cartesian-coordinate
reference frame. If coordinate values for a picture are given in some other ref-
erence frame (spherical, hyperbolic, etc.), they must be converted to Cartesian
coordinates before they can be input to the graphics package. Some packages
that are designed for specialized applications may allow use of other coordi-
nate frames that are appropriate for those applications.
In general, several different Cartesian reference frames are used in the process
of constructing and displaying a scene. First, we can define the shapes of individ-
ual objects, such as trees or furniture, within a separate reference frame for each
object. These reference frames are called modeling coordinates, or sometimes
local coordinates or master coordinates. Once the individual object shapes have
been specified, we can construct (“model”) a scene by placing the objects into
appropriate locations within a scene reference frame called world coordinates.
This step involves the transformation of the individual modeling-coordinate
frames to specified positions and orientations within the world-coordinate frame.
As an example, we could construct a bicycle by defining each of its parts
(wheels, frame, seat, handlebars, gears, chain, pedals) in a separate modeling-
coordinate frame. Then, the component parts are fitted together in world coor-
dinates. If both bicycle wheels are the same size, we need to describe only one
wheel in a local-coordinate frame. Then the wheel description is fitted into the
world-coordinate bicycle description in two places. For scenes that are not too
complicated, object components can be set up directly within the overall world-
coordinate object structure, bypassing the modeling-coordinate and modeling-
transformation steps. Geometric descriptions in modeling coordinates and world
coordinates can be given in any convenient floating-point or integer values, with-
out regard for the constraints of a particular output device. For some scenes, we
might want to specify object geometries in fractions of a foot, while for other
applications we might want to use millimeters, or kilometers, or light-years.
After all parts of a scene have been specified, the overall world-coordinate
description is processed through various routines onto one or more output-device
reference frames for display. This process is called the viewing pipeline. World-
coordinate positions are first converted to viewing coordinates corresponding to the
view we want of a scene, based on the position and orientation of a hypothetical
camera. Then object locations are transformed to a two-dimensional (2D) projec-
tion of the scene, which corresponds to what we will see on the output device.
The scene is then stored in normalized coordinates, where each coordinate value
is in the range from −1 to 1 or in the range from 0 to 1, depending on the system.
FIGURE 1: The transformation sequence from modeling coordinates to device coordinates for a three-dimensional scene. Object shapes can be individually defined in modeling-coordinate reference systems. Then the shapes are positioned within the world-coordinate scene. Next, world-coordinate specifications are transformed through the viewing pipeline to viewing and projection coordinates and then to normalized coordinates. At the final step, individual device drivers transfer the normalized-coordinate representation of the scene to the output devices (such as a video monitor or plotter) for display.
2 Graphics Functions
A general-purpose graphics package provides users with a variety of functions
for creating and manipulating pictures. These routines can be broadly classified
according to whether they deal with graphics output, input, attributes, transfor-
mations, viewing, subdividing pictures, or general control.
The basic building blocks for pictures are referred to as graphics output
primitives. They include character strings and geometric entities, such as points,
straight lines, curved lines, filled color areas (usually polygons), and shapes
defined with arrays of color points. In addition, some graphics packages pro-
vide functions for displaying more complex shapes such as spheres, cones, and
cylinders. Routines for generating output primitives provide the basic tools for
constructing pictures.
Attributes are properties of the output primitives; that is, an attribute
describes how a particular primitive is to be displayed. This includes color spec-
ifications, line styles, text styles, and area-filling patterns.
We can change the size, position, or orientation of an object within a scene
using geometric transformations. Some graphics packages provide an additional
set of functions for performing modeling transformations, which are used to con-
struct a scene where individual object descriptions are given in local coordinates.
Such packages usually provide a mechanism for describing complex objects (such
as an electrical circuit or a bicycle) with a tree (hierarchical) structure. Other pack-
ages simply provide the geometric-transformation routines and leave modeling
details to the programmer.
After a scene has been constructed, using the routines for specifying the object
shapes and their attributes, a graphics package projects a view of the picture onto
an output device. Viewing transformations are used to select a view of the scene,
the type of projection to be used, and the location on a video monitor where the
view is to be displayed. Other routines are available for managing the screen
display area by specifying its position, size, and structure. For three-dimensional
scenes, visible objects are identified and the lighting conditions are applied.
Interactive graphics applications use various kinds of input devices, including
a mouse, a tablet, and a joystick. Input functions are used to control and process
the data flow from these interactive devices.
Some graphics packages also provide routines for subdividing a picture
description into a named set of component parts. And other routines may be
available for manipulating these picture components in various ways.
Finally, a graphics package contains a number of housekeeping tasks, such as
clearing a screen display area to a selected color and initializing parameters. We
can lump the functions for carrying out these chores under the heading control
operations.
3 Software Standards
The primary goal of standardized graphics software is portability. When packages
are designed with standard graphics functions, software can be moved easily
from one hardware system to another and used in different implementations and
applications. Without standards, programs designed for one hardware system
often cannot be transferred to another system without extensive rewriting of the
programs.
International and national standards-planning organizations in many coun-
tries have cooperated in an effort to develop a generally accepted standard for
computer graphics. After considerable effort, this work on standards led to the
development of the Graphical Kernel System (GKS) in 1984. This system was
adopted as the first graphics software standard by the International Standards
Organization (ISO) and by various national standards organizations, including
the American National Standards Institute (ANSI). Although GKS was origi-
nally designed as a two-dimensional graphics package, a three-dimensional GKS
extension was soon developed. The second software standard to be developed
and approved by the standards organizations was Programmer’s Hierarchical
Interactive Graphics System (PHIGS), which is an extension of GKS. Increased
capabilities for hierarchical object modeling, color specifications, surface render-
ing, and picture manipulations are provided in PHIGS. Subsequently, an extension
of PHIGS, called PHIGS+, was developed to provide three-dimensional surface-
rendering capabilities not available in PHIGS.
As the GKS and PHIGS packages were being developed, the graphics work-
stations from Silicon Graphics, Inc. (SGI), became increasingly popular. These
workstations came with a set of routines called GL (Graphics Library), which
very soon became a widely used package in the graphics community. Thus,
GL became a de facto graphics standard. The GL routines were designed for
fast, real-time rendering, and soon this package was being extended to other
hardware systems. As a result, OpenGL was developed as a hardware-
independent version of GL in the early 1990s. This graphics package is
now maintained and updated by the OpenGL Architecture Review Board,
which is a consortium of representatives from many graphics companies and
organizations. The OpenGL library is specifically designed for efficient process-
ing of three-dimensional applications, but it can also handle two-dimensional
scene descriptions as a special case of three dimensions where all the z coordinate
values are 0.
Graphics functions in any package are typically defined as a set of specifica-
tions independent of any programming language. A language binding is then
defined for a particular high-level programming language. This binding gives
the syntax for accessing the various graphics functions from that language. Each
language binding is defined to make best use of the corresponding language capa-
bilities and to handle various syntax issues, such as data types, parameter passing,
and errors. Specifications for implementing a graphics package in a particular lan-
guage are set by the ISO. The OpenGL bindings for the C and C++ languages are
the same. Other OpenGL bindings are also available, such as those for Java and
Python.
Later in this book, we use the C/C++ binding for OpenGL as a framework
for discussing basic graphics concepts and the design and application of graphics
packages. Example programs in C++ illustrate applications of OpenGL and the
general algorithms for implementing graphics functions.
4 Other Graphics Packages
Many other computer-graphics packages have been developed, including libraries for rendering scenes using a variety of lighting models. Finally, graphics libraries are often provided in other types of systems, such as Mathematica, MatLab, and Maple.
5 Introduction to OpenGL
A basic library of functions is provided in OpenGL for specifying graphics prim-
itives, attributes, geometric transformations, viewing transformations, and many
other operations. As we noted in the last section, OpenGL is designed to be hard-
ware independent, so many operations, such as input and output routines, are
not included in the basic library. However, input and output routines and many
additional functions are available in auxiliary libraries that have been developed
for OpenGL programs.
Certain functions require that one (or more) of their arguments be assigned
a symbolic constant specifying, for instance, a parameter name, a value for a
parameter, or a particular mode. All such constants begin with the uppercase
letters GL. In addition, component words within a constant name are written in
capital letters, and the underscore (_) is used as a separator between all component
words in the name. The following are a few examples of the several hundred
symbolic constants available for use with OpenGL functions:
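GL_2D     GL_RGB     GL_CCW     GL_POLYGON     GL_AMBIENT_AND_DIFFUSE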
The OpenGL functions also expect specific data types. For example, an
OpenGL function parameter might expect a value that is specified as a 32-bit inte-
ger. But the size of an integer specification can be different on different machines.
To indicate a specific data type, OpenGL uses special built-in, data-type names,
such as
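GLbyte,   GLshort,   GLint,   GLfloat,   GLdouble,   GLboolean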
Each data-type name begins with the capital letters GL, and the remainder of the
name is a standard data-type designation written in lowercase letters.
Some arguments of OpenGL functions can be assigned values using an array
that lists a set of data values. This is an option for specifying a list of values as a
pointer to an array, rather than specifying each element of the list explicitly as a
parameter argument. A typical example of the use of this option is in specifying
xyz coordinate values.
Related Libraries
In addition to the OpenGL basic (core) library, there are a number of associ-
ated libraries for handling special operations. The OpenGL Utility (GLU) pro-
vides routines for setting up viewing and projection matrices, describing complex
objects with line and polygon approximations, displaying quadrics and B-splines using linear approximations, and performing other complex tasks.
Header Files
In all of our graphics programs, we will need to include the header file for the
OpenGL core library. For most applications we will also need GLU, and on many
systems we will need to include the header file for the window system. For
instance, with Microsoft Windows, the header file that accesses the WGL rou-
tines is windows.h. This header file must be listed before the OpenGL and GLU
header files because it contains macros needed by the Microsoft Windows version
of the OpenGL libraries. So the source file in this case would begin with
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
However, if we use GLUT to handle the window-management operations, we do not need these two header references, because the GLUT header ensures that gl.h and glu.h are included correctly. So we can replace them with the single statement
#include <GL/glut.h>
(We could include gl.h and glu.h as well, but doing so would be redundant and
could affect program portability.) On some systems, the header files for OpenGL
and GLUT routines are found in different places in the filesystem. For instance,
on Apple OS X systems, the header file inclusion statement would be
#include <GLUT/glut.h>
In addition, we will often need to include header files that are required by the
C++ code. For example,
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
With the ISO/ANSI standard for C++, these header files are called cstdio, cst-
dlib, and cmath.
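Using these standard C++ names, the corresponding include statements are
#include <cstdio>
#include <cstdlib>
#include <cmath>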
Next, we can state that a display window is to be created on the screen with
a given caption for the title bar. This is accomplished with the function
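glutCreateWindow ("An Example OpenGL Program");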
where the single argument for this function can be any character string that we
want to use for the display-window title.
Then we need to specify what the display window is to contain. For this,
we create a picture using OpenGL functions and pass the picture definition to
the GLUT routine glutDisplayFunc, which assigns our picture to the display
window. As an example, suppose we have the OpenGL code for describing a line
segment in a procedure called lineSegment. Then the following function call
passes the line-segment description to the display window:
glutDisplayFunc (lineSegment);
But the display window is not yet on the screen. We need one more GLUT
function to complete the window-processing operations. After execution of the
following statement, all display windows that we have created, including their
graphic content, are now activated:
glutMainLoop ( );
This function must be the last one in our program. It displays the initial graphics
and puts the program into an infinite loop that checks for input from devices such
as a mouse or keyboard. Our first example will not be interactive, so the program
will just continue to display our picture until we close the display window. In
later chapters, we consider how we can modify our OpenGL programs to handle
interactive input.
Although the display window that we created will be in some default location
and size, we can set these parameters using additional GLUT functions. We use the
glutInitWindowPosition function to give an initial location for the upper-
left corner of the display window. This position is specified in integer screen
coordinates, whose origin is at the upper-left corner of the screen.
FIGURE 2: A 400 by 300 display window at position (50, 100) relative to the top-left corner of the video display.
For instance,
the following statement specifies that the upper-left corner of the display window
should be placed 50 pixels to the right of the left edge of the screen and 100 pixels
down from the top edge of the screen:
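glutInitWindowPosition (50, 100);
Similarly, the glutInitWindowSize function sets the initial pixel width and height of the display window; the 400 by 300 window of Figure 2 is obtained with
glutInitWindowSize (400, 300);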
After the display window is on the screen, we can reposition and resize it.
We can also set a number of other options for the display window, such as
buffering and a choice of color modes, with the glutInitDisplayMode func-
tion. Arguments for this routine are assigned symbolic GLUT constants. For exam-
ple, the following command specifies that a single refresh buffer is to be used for
the display window and that we want to use the color mode which uses red,
green, and blue (RGB) components to select color values:
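glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);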
The values of the constants passed to this function are combined using a logical or
operation. Actually, single buffering and RGB color mode are the default options.
But we will use the function now as a reminder that these are the options that
are set for our display. Later, we discuss color modes in more detail, as well as
other display options, such as double buffering for animation applications and
selecting parameters for viewing three-dimensional scenes.
Using RGB color values, we set the background color for the display window
to be white, as in Figure 2, with the OpenGL function:
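glClearColor (1.0, 1.0, 1.0, 0.0);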
The first three arguments in this function set the red, green, and blue component
colors to the value 1.0, giving us a white background color for the display window.
If, instead of 1.0, we set each of the component colors to 0.0, we would get a
black background. And if all three of these components were set to the same
intermediate value between 0.0 and 1.0, we would get some shade of gray. The
fourth parameter in the glClearColor function is called the alpha value for the
specified color. One use for the alpha value is as a “blending” parameter. When we
activate the OpenGL blending operations, alpha values can be used to determine
the resulting color for two overlapping objects. An alpha value of 0.0 indicates a
totally transparent object, and an alpha value of 1.0 indicates an opaque object.
Blending operations will not be used for a while, so the value of alpha is irrelevant
to our early example programs. For now, we will simply set alpha to 0.0.
Although the glClearColor command assigns a color to the display win-
dow, it does not put the display window on the screen. To get the assigned window
color displayed, we need to invoke the following OpenGL function:
glClear (GL_COLOR_BUFFER_BIT);
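In addition to the background color, we need a color for the objects we will display. Here we set the line segment of this example to a dark green with
glColor3f (0.0, 0.4, 0.2);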
The suffix 3f on the glColor function indicates that we are specifying the three
RGB color components using floating-point (f) values. This function requires that
the values be in the range from 0.0 to 1.0, and we have set red = 0.0, green = 0.4,
and blue = 0.2.
For our first program, we simply display a two-dimensional line segment.
To do this, we need to tell OpenGL how we want to “project” our picture onto
the display window because generating a two-dimensional picture is treated by
OpenGL as a special case of three-dimensional viewing. So, although we only
want to produce a very simple two-dimensional line, OpenGL processes our
picture through the full three-dimensional viewing operations. We can set the
projection type (mode) and other viewing parameters that we need with the fol-
lowing two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
Objects within this world-coordinate rectangle will be shown within the display win-
dow. Anything outside this coordinate range will not be displayed. Therefore,
the GLU function gluOrtho2D defines the coordinate reference frame within the
display window to be (0.0, 0.0) at the lower-left corner of the display window and
(200.0, 150.0) at the upper-right window corner. Since we are only describing
a two-dimensional object, the orthogonal projection has no other effect than to
“paste” our picture into the display window that we defined earlier. For now, we
will use a world-coordinate rectangle with the same aspect ratio as the display
window, so that there is no distortion of our picture. Later, we will consider how
we can maintain an aspect ratio that does not depend upon the display-window
specification.
Finally, we need to call the appropriate OpenGL routines to create our line seg-
ment. The following code defines a two-dimensional, straight-line segment with
integer, Cartesian endpoint coordinates (180, 15) and (10, 145).
glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );
Now we are ready to put all the pieces together. The following OpenGL
program is organized into three functions. We place all initializations and related
one-time parameter settings in function init. Our geometric description of the
“picture” that we want to display is in function lineSegment, which is the
function that will be referenced by the GLUT function glutDisplayFunc. And
the main function contains the GLUT functions for setting up the display window
and getting our line segment onto the screen. Figure 3 shows the display window
and line segment generated by this program.
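Putting the pieces together, the program takes the following form; the only statements not already discussed are glutInit, which initializes GLUT and must be called before any other GLUT functions, and glFlush, which forces execution of the OpenGL routines as quickly as possible.

#include <GL/glut.h>                        // (Or others, depending on system.)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);      // Set display-window color to white.

    glMatrixMode (GL_PROJECTION);           // Set projection parameters.
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);          // Clear display window.

    glColor3f (0.0, 0.4, 0.2);              // Set line-segment color to dark green.
    glBegin (GL_LINES);
        glVertex2i (180, 15);               // Specify line-segment geometry.
        glVertex2i (10, 145);
    glEnd ( );

    glFlush ( );                            // Process all OpenGL routines as quickly as possible.
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);                          // Initialize GLUT.
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // Set display mode.
    glutInitWindowPosition (50, 100);                // Set top-left display-window position.
    glutInitWindowSize (400, 300);                   // Set display-window width and height.
    glutCreateWindow ("An Example OpenGL Program");  // Create display window.

    init ( );                                // Execute initialization procedure.
    glutDisplayFunc (lineSegment);           // Send graphics to display window.
    glutMainLoop ( );                        // Display everything and wait.
    return 0;
}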
FIGURE 3: The display window and line segment produced by the example program.
Graphics Output Primitives
A general software package for graphics applications, sometimes referred to as a computer-graphics application programming interface (CG API), provides a library of functions that we can use within a programming language such as C++ to create pictures. The set of library functions can be subdivided into several categories. One of the first things we need to do when creating a picture is to describe the component parts of the scene to be displayed. Picture components could be trees and terrain, furniture and walls, storefronts and street scenes, automobiles and billboards, atoms and molecules, or stars and galaxies. For each type of scene, we need to describe the structure of the individual objects and their coordinate locations within the scene. Those functions in a graphics package that we use to describe the various picture components are called the graphics output primitives, or simply primitives. The output primitives describing the geometry of objects are typically referred to as geometric primitives. Point positions and straight-line segments are the simplest geometric components of pictures.
From Chapter 4 of Computer Graphics with OpenGL®, Fourth Edition, Donald Hearn, M. Pauline Baker, Warren R. Carithers.
Copyright © 2011 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.
Screen Coordinates
Locations on a video monitor are referenced in integer screen coordinates, which correspond to the pixel positions in the frame buffer. Pixel coordinate values give the scan line number (the y value) and the column number (the x value along a scan line). Hardware processes, such as screen refreshing, typically address pixel positions with respect to the top-left corner of the screen. Scan lines are then referenced from 0, at the top of the screen, to some integer value, ymax, at the bottom of the screen, and pixel positions along each scan line are numbered from 0 to xmax, left to right. However, with software commands, we can set up any convenient reference frame for screen positions. For example, we could specify an integer range for screen positions with the coordinate origin at the lower-left of a screen area (Figure 1), or we could use noninteger Cartesian values for a picture description. The coordinate values we use to describe the geometry of a scene are then converted by the viewing routines to integer pixel positions within the frame buffer.
FIGURE 1: Pixel positions referenced with respect to the lower-left corner of a screen area.
Scan-line algorithms for the graphics primitives use the defining coordinate descriptions to determine the locations of pixels that are to be displayed. For
example, given the endpoint coordinates for a line segment, a display algorithm
must calculate the positions for those pixels that lie along the line path between
the endpoints. Since a pixel position occupies a finite area of the screen, the
finite size of a pixel must be taken into account by the implementation algo-
rithms. For the present, we assume that each integer screen position references
the center of a pixel area.
Once pixel positions have been identified for an object, the appropriate color
values must be stored in the frame buffer. For this purpose, we will assume that
we have available a low-level procedure of the form
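setPixel (x, y);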
This procedure stores the current color setting into the frame buffer at integer
position (x, y), relative to the selected position of the screen-coordinate origin. We
sometimes also will want to be able to retrieve the current frame-buffer setting for
a pixel location. So we will assume that we have the following low-level function
for obtaining a frame-buffer color value:
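getPixel (x, y, color);
In this function, parameter color receives an integer value corresponding to the combined color bit codes stored for the specified pixel at position (x, y).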
2 Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL
The gluOrtho2D command is a function we can use to set up any two-
dimensional Cartesian reference frame. The arguments for this function are the
four values defining the x and y coordinate limits for the picture we want to dis-
play. Since the gluOrtho2D function specifies an orthogonal projection, we
need also to be sure that the coordinate values are placed in the OpenGL projec-
tion matrix. In addition, we could assign the identity matrix as the projection
matrix before defining the world-coordinate range. This would ensure that the
coordinate values were not accumulated with any values we may have previously
set for the projection matrix. Thus, for our initial two-dimensional examples, we
can define the coordinate frame for the screen display window with the follow-
ing statements:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
The display window will then be referenced by coordinates (xmin, ymin) at the
lower-left corner and by coordinates (xmax, ymax) at the upper-right corner, as
shown in Figure 2.
We can then designate one or more graphics primitives for display using the
coordinate reference specified in the gluOrtho2D statement. If the coordinate
extents of a primitive are within the coordinate range of the display window, all
of the primitive will be displayed. Otherwise, only those parts of the primitive
within the display-window coordinate limits will be shown. Also, when we set up
the geometry describing a picture, all positions for the OpenGL primitives must
be given in absolute coordinates, with respect to the reference frame defined in
the gluOrtho2D function.
FIGURE 2: World-coordinate limits for a display window, as specified in the gluOrtho2D function.
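3 OpenGL Point Functions
To specify the geometry of a point, we simply give a coordinate position within the world reference frame, using the following OpenGL function: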
glVertex* ( );
where the asterisk (*) indicates that suffix codes are required for this function.
These suffix codes are used to identify the spatial dimension, the numerical data
type to be used for the coordinate values, and a possible vector form for the
coordinate specification. Calls to glVertex functions must be placed between a
glBegin function and a glEnd function. The argument of the glBegin function
is used to identify the kind of output primitive that is to be displayed, and glEnd
takes no arguments. For point plotting, the argument of the glBegin function is
the symbolic constant GL_POINTS. Thus, the form for an OpenGL specification
of a point position is
glBegin (GL_POINTS);
glVertex* ( );
glEnd ( );
Although the term vertex strictly refers to a “corner” point of a polygon, the
point of intersection of the sides of an angle, a point of intersection of an
ellipse with its major axis, or other similar coordinate positions on geometric
structures, the glVertex function is used in OpenGL to specify coordinates for
any point position. In this way, a single function is used for point, line, and poly-
gon specifications—and, most often, polygon patches are used to describe the
objects in a scene.
Coordinate positions in OpenGL can be given in two, three, or four dimen-
sions. We use a suffix value of 2, 3, or 4 on the glVertex function to indi-
cate the dimensionality of a coordinate position. A four-dimensional specifica-
tion indicates a homogeneous-coordinate representation, where the homogeneous
parameter h (the fourth coordinate) is a scaling factor for the Cartesian-coordinate
values. Homogeneous-coordinate representations are useful for expressing
transformation operations in matrix form. Because OpenGL treats two-dimen-
sions as a special case of three dimensions, any (x, y) coordinate specification is
equivalent to a three-dimensional specification of (x, y, 0). Furthermore, OpenGL
represents vertices internally in four dimensions, so each of these specifications
are equivalent to the four-dimensional specification (x, y, 0, 1).
We also need to state which data type is to be used for the numerical-
value specifications of the coordinates. This is accomplished with a second
suffix code on the glVertex function. Suffix codes for specifying a numeri-
cal data type are i (integer), s (short), f (float), and d (double). Finally, the
coordinate values can be listed explicitly in the glVertex function, or a sin-
gle argument can be used that references a coordinate position as an array. If we
use an array specification for a coordinate position, we need to append v (for
“vector”) as a third suffix code.
FIGURE 3: Display of three point positions generated with glBegin (GL_POINTS).
In the following example, three equally spaced points are plotted along a two-
dimensional, straight-line path with a slope of 2 (see Figure 3). Coordinates are
given as integer pairs:
glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );
Alternatively, we could specify the coordinate values for the preceding points in
arrays such as
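int point1 [] = {50, 100};
int point2 [] = {75, 150};
int point3 [] = {100, 200};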
and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );
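Point positions can also be given as explicit floating-point values in a three-dimensional world reference frame, as in the following example: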
glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );
We could also define a C++ class or structure (struct) for specifying point
positions in various dimensions. For example,
class wcPt2D {
public:
GLfloat x, y;
};
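Using this class definition, we could specify a two-dimensional, world-coordinate point position with the declaration
wcPt2D pointPos;
and then assign values to its coordinate members as follows: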
pointPos.x = 120.75;
pointPos.y = 45.30;
glBegin (GL_POINTS);
glVertex2f (pointPos.x, pointPos.y);
glEnd ( );
Also, we can use the OpenGL point-plotting functions within a C++ procedure
to implement the setPixel command.
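For example, a minimal sketch of such a procedure (assuming the coordinate frame and current color have already been set up) is

void setPixel (GLint xCoord, GLint yCoord)
{
    glBegin (GL_POINTS);
        glVertex2i (xCoord, yCoord);    // Plot one point at the given position.
    glEnd ( );
}

4 OpenGL Line Functions
A set of straight-line segments between each successive pair of endpoints in a list is generated using the OpenGL primitive constant GL_LINES. For example, with five endpoint positions p1 through p5, we could write

glBegin (GL_LINES);
    glVertex2iv (p1);
    glVertex2iv (p2);
    glVertex2iv (p3);
    glVertex2iv (p4);
    glVertex2iv (p5);
glEnd ( );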
Thus, we obtain one line segment between the first and second coordinate
positions and another line segment between the third and fourth positions. In
this case, the number of specified endpoints is odd, so the last coordinate position
is ignored.
With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline.
In this case, the display is a sequence of connected line segments between the first
endpoint in the list and the last endpoint. The first line segment in the polyline is
displayed between the first endpoint and the second endpoint; the second line
segment is between the second and third endpoints; and so forth, up to the last line
endpoint. Nothing is displayed if we do not list at least two coordinate positions.
FIGURE 4: Line segments displayed with a list of five endpoint positions, p1 through p5: (a) an unconnected set of lines generated with GL_LINES, (b) a polyline generated with GL_LINE_STRIP, and (c) a closed polyline generated with GL_LINE_LOOP.
Using the same five coordinate positions as in the previous example, we obtain
the display in Figure 4(b) with the code
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. Lines are drawn as with GL_LINE_STRIP, but an additional line is drawn to connect the last coordinate position and the first coordinate position.
Figure 4(c) shows the display of our endpoint list when we select this line option,
using the code
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Attributes of Graphics Primitives
One way to incorporate attribute options into a graphics package is to extend the
parameter list associated with each graphics-primitive function to include the appro-
priate attribute values. A line-drawing function, for example, could contain additional
parameters to set the color, width, and other properties of a line. Another approach is to
maintain a system list of current attribute values. Separate functions are then included
in the graphics package for setting the current values in the attribute list. To generate a
primitive, the system checks the relevant attributes and invokes the display routine for
that primitive using the current attribute settings. Some graphics packages use a com-
bination of methods for setting attribute values, and other libraries, including OpenGL,
assign attributes using separate functions that update a system attribute list.
A graphics system that maintains a list for the current values of attributes and other
parameters is referred to as a state system or state machine. Attributes of output
primitives and some other parameters, such as the current frame-buffer position, are
referred to as state variables or state parameters. When we assign a value to one or
more state parameters, we put the system into a particular state, and that state remains
in effect until we change the value of a state parameter.
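As a brief illustration of this state-machine behavior (using the OpenGL color and point functions discussed elsewhere in this text), a current color value remains in effect for every primitive defined after it is set:

glColor3f (1.0, 0.0, 0.0);      // Set current color state to red.
glBegin (GL_POINTS);
    glVertex2i (50, 50);        // This point is displayed in red.
glEnd ( );

glColor3f (0.0, 0.0, 1.0);      // Change the color state to blue.
glBegin (GL_POINTS);
    glVertex2i (75, 50);        // Subsequent primitives are displayed in blue.
glEnd ( );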
TABLE 1: The eight RGB color codes for a 3-bit-per-pixel frame buffer

Color Code   Red   Green   Blue   Displayed Color
    0         0      0      0     Black
    1         0      0      1     Blue
    2         0      1      0     Green
    3         0      1      1     Cyan
    4         1      0      0     Red
    5         1      0      1     Magenta
    6         1      1      0     Yellow
    7         1      1      1     White
Color information can be stored in the frame buffer in two ways: We can store red, green, and blue
(RGB) color codes directly in the frame buffer, or we can put the color codes into
a separate table and use the pixel locations to store index values referencing the
color-table entries. With the direct storage scheme, whenever a particular color
code is specified in an application program, that color information is placed in the
frame buffer at the location of each component pixel in the output primitives to
be displayed in that color. A minimum number of colors can be provided in this
scheme with 3 bits of storage per pixel, as shown in Table 1. Each of the three
bit positions is used to control the intensity level (either on or off, in this case) of
the corresponding electron gun in an RGB monitor. The leftmost bit controls the
red gun, the middle bit controls the green gun, and the rightmost bit controls the
blue gun. Adding more bits per pixel to the frame buffer increases the number
of color choices that we have. With 6 bits per pixel, 2 bits can be used for each
gun. This allows four different intensity settings for each of the three color guns,
and a total of 64 color options are available for each screen pixel. As more color
options are provided, the storage required for the frame buffer also increases.
With a resolution of 1024 × 1024, a full-color (24-bit per pixel) RGB system needs
3 MB of storage for the frame buffer.
Color tables are an alternate means for providing extended color capabilities
to a user without requiring large frame buffers. At one time, this was an impor-
tant consideration; but today, hardware costs have decreased dramatically and
extended color capabilities are fairly common, even in low-end personal com-
puter systems. So most of our examples will simply assume that RGB color codes
are stored directly in the frame buffer.
Color Tables
Figure 1 illustrates a possible scheme for storing color values in a color lookup
table (or color map). Sometimes a color table is referred to as a video lookup
table. Values stored in the frame buffer are now used as indices into the color
table. In this example, each pixel can reference any of the 256 table positions, and
each entry in the table uses 24 bits to specify an RGB color. For the hexadecimal
color code 0x0821, a combination green-blue color is displayed for pixel location
(x, y). Systems employing this particular lookup table allow a user to select any
256 colors for simultaneous display from a palette of nearly 17 million colors.
FIGURE 1: A color lookup table with 24 bits per entry that is accessed from a frame buffer with 8 bits per pixel. A value of 196 stored at pixel position (x, y) references the location in this table containing the hexadecimal value 0x0821 (a decimal value of 2081). Each 8-bit segment of this entry controls the intensity level of one of the three electron guns in an RGB monitor.
Compared to a full-color system, this scheme reduces the number of simulta-
neous colors that can be displayed, but it also reduces the frame-buffer storage
requirement to 1 MB. Multiple color tables are sometimes available for handling
specialized rendering applications, such as antialiasing, and they are used with
systems that contain more than one color output device.
A color table can be useful in a number of applications, and it can provide
a “reasonable” number of simultaneous colors without requiring large frame
buffers. For most applications, 256 or 512 different colors are sufficient for a sin-
gle picture. Also, table entries can be changed at any time, allowing a user to be
able to experiment easily with different color combinations in a design, scene,
or graph without changing the attribute settings for the graphics data structure.
When a color value is changed in the color table, all pixels with that color index
immediately change to the new color. Without a color table, we can change the
color of a pixel only by storing the new color at that frame-buffer location. Simi-
larly, data-visualization applications can store values for some physical quantity,
such as energy, in the frame buffer and use a lookup table to experiment with
various color combinations without changing the pixel values. Also, in visual-
ization and image-processing applications, color tables are a convenient means
for setting color thresholds so that all pixel values above or below a specified
threshold can be set to the same color. For these reasons, some systems provide
both capabilities for storing color information. A user can then elect either to use
color tables or to store color codes directly in the frame buffer.
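The indexing idea itself is easy to model in software. The following sketch is not OpenGL code; the names and sizes are illustrative, matching the 256-entry table of Figure 1:

/* A software model of color-index storage: each pixel holds an 8-bit
   index, and recoloring is done by rewriting one table entry. */
typedef struct { float r, g, b; } RGBColor;

RGBColor colorTable [256];                /* 24-bit color entries       */
unsigned char frameBuffer [1024][1024];   /* pixel values are indices   */

void setTableEntry (int index, float r, float g, float b)
{
   /* Every pixel whose stored value equals 'index' now displays the
      new color; the frame buffer itself is untouched. */
   colorTable [index].r = r;
   colorTable [index].g = g;
   colorTable [index].b = b;
}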
Grayscale
Because color capabilities are now common in computer-graphics systems, we
use RGB color functions to set shades of gray, or grayscale, in an application
program. When an RGB color setting specifies an equal amount of red, green, and
blue, the result is some shade of gray. Values close to 0 for the color components
produce dark gray, and higher values near 1.0 produce light gray. Applications
for grayscale display methods include enhancing black-and-white photographs
and generating visualization effects.
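In an application program, a gray level is therefore selected simply by repeating one value for all three RGB components; for instance (the values here are illustrative):

glColor3f (0.25, 0.25, 0.25);   /* dark gray  */
glColor3f (0.75, 0.75, 0.75);   /* light gray */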
3 OpenGL Color Functions

In an application program, RGB mode is selected with the GLUT display-mode routine:

glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);

The first constant in the argument list states that we are using a single buffer for
the frame buffer, and the second constant puts us into the RGB mode, which is the
default color mode. If we wanted to specify colors by an index into a color table,
we would replace the OpenGL constant GLUT_RGB with GLUT_INDEX. When we
specify a particular set of color values for primitives, we define the color state of
OpenGL. The current color is applied to all subsequently defined primitives until
we change the color settings. A new color specification affects only the objects we
define after the color change.
In RGB (or RGBA) mode, we select the current color components with the function

glColor* (colorComponents);
Suffix codes are similar to those for the glVertex function. We use a code of
either 3 or 4 to specify the RGB or RGBA mode along with the numerical data-type
code and an optional vector suffix. The suffix codes for the numerical data types
are b (byte), i (integer), s (short), f (float), and d (double), as well as unsigned
numerical values. Floating-point values for the color components are in the range
from 0.0 to 1.0, and the default color components for glColor, including the
alpha value, are (1.0, 1.0, 1.0, 1.0), which sets the RGB color to white and the alpha
value to 1.0. If we select the current color using an RGB specification (i.e., we
use glColor3 instead of glColor4), the alpha component will be automatically
set to 1.0 to indicate that we do not want color blending. As an example, the
following statement uses floating-point values in RGB mode to set the current
color for primitives to cyan (a combination of the highest intensities for green and
blue):
glColor3f (0.0, 1.0, 1.0);
Using an array specification for the three color components, we could set the color
in this example as

GLfloat colorArray [3] = {0.0, 1.0, 1.0};

glColor3fv (colorArray);
However, if we were to use unsigned 32-bit integers (suffix code ui), the range
is 0 to 4,294,967,295! At this scale, small changes in color component values are
essentially invisible; to make a one-percent change in the intensity of a single color
component, for instance, we would need to change that component’s value by
42,949,673. For that reason, the most commonly used data types are floating-point
and small integer types.
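For illustration, the same cyan specification can be written with any of these data types; only the value ranges differ:

glColor3f  (0.0, 1.0, 1.0);                 /* floats in the range 0.0 to 1.0 */
glColor3ub (0, 255, 255);                   /* unsigned bytes, 0 to 255       */
glColor3ui (0, 4294967295u, 4294967295u);   /* unsigned 32-bit integers       */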
In color-index mode, we set the current color with the function

glIndex* (colorIndex);
Parameter colorIndex is assigned a nonnegative value, which can be specified as
an unsigned byte, a short or regular integer, a double, or a floating-point number.
The data type is indicated with a suffix code of ub, s, i, d, or f, and the number
of index positions in a color table is always a power of 2, such as 256 or 1024. The number of bits available
at each table position depends on the hardware features of the system. As an
example of specifying a color in index mode, the following statement sets the
current color index to the value 196:
glIndexi (196);
All primitives defined after this statement will be assigned the color stored at that
position in the color table until the current color is changed.
There are no functions provided in the core OpenGL library for loading values
into a color-lookup table because table-processing routines are part of a window
system. Also, some window systems support multiple color tables and full color,
while other systems may have only one color table and limited color choices.
However, we do have a GLUT routine that interacts with a window system to set
color specifications into a table at a given index position as follows:
glutSetColor (index, red, green, blue);
Color parameters red, green, and blue are assigned floating-point values in
the range from 0.0 to 1.0. This color is then loaded into the table at the position
specified by the value of parameter index.
Routines for processing three other color tables are provided as extensions
to the OpenGL core library. These routines are part of the Imaging Subset of
OpenGL. Color values stored in these tables can be used to modify pixel values as
they are processed through various buffers. Some examples of using these tables
are setting camera focusing effects, filtering out certain colors from an image,
enhancing certain intensities or making brightness adjustments, converting a
grayscale photograph to color, and antialiasing a display. In addition, we can
use these tables to change color models; that is, we can change RGB colors to
another specification using three other “primary” colors (such as cyan, magenta,
and yellow).
A particular color table in the Imaging Subset of OpenGL is activated
with the glEnable function using one of the table names: GL_COLOR_TABLE,
GL_POST_CONVOLUTION_COLOR_TABLE, or GL_POST_COLOR_MATRIX_COLOR_TABLE.
We can then use routines in the Imaging Subset to select a particular color table,
set color-table values, copy table values, or specify which component of a pixel's
color we want to change and how we want to change it.
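The statements below sketch the activation of the basic color table, assuming an implementation that supports the Imaging Subset; the table array here is a placeholder to be filled with the desired entries:

GLfloat table [256][3];   /* filled elsewhere with the desired RGB entries */

glEnable (GL_COLOR_TABLE);
glColorTable (GL_COLOR_TABLE, GL_RGB, 256, GL_RGB, GL_FLOAT, table);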
Color Blending

If color blending is not activated, an object's color simply replaces the frame-buffer
contents at the object's location.
Colors can be blended in a number of different ways, depending on the effects
that we want to achieve, and we generate different color effects by specifying two
sets of blending factors. One set of blending factors is for the current object in the
frame buffer (the “destination object”), and the other set of blending factors is for
the incoming (“source”) object. The new, blended color that is then loaded into
the frame buffer is calculated as
(Sr Rs + Dr Rd, Sg Gs + Dg Gd, Sb Bs + Db Bd, Sa As + Da Ad)        (1)

where the RGBA source color components are (Rs, Gs, Bs, As), the destination
color components are (Rd, Gd, Bd, Ad), the source blending factors are
(Sr, Sg, Sb, Sa), and the destination blending factors are (Dr, Dg, Db, Da). Computed
values for the combined color components are clamped to the range from
0.0 to 1.0. That is, any sum greater than 1.0 is set to the value 1.0, and any sum
less than 0.0 is set to 0.0.
We select the blending-factor values with the OpenGL function

glBlendFunc (sFactor, dFactor);

Parameters sFactor and dFactor, the source and destination factors, are each
assigned an OpenGL symbolic constant specifying a predefined set of four blending
coefficients. For example, the constant GL_ZERO yields the blending factors
(0.0, 0.0, 0.0, 0.0) and GL_ONE gives us the set (1.0, 1.0, 1.0, 1.0). We could set all four
blending factors either to the destination alpha value or to the source alpha value
using GL_DST_ALPHA or GL_SRC_ALPHA. Other OpenGL constants that are
available for setting the blending factors include GL_ONE_MINUS_DST_ALPHA,
GL_ONE_MINUS_SRC_ALPHA, GL_DST_COLOR, and GL_SRC_COLOR. These
blending factors are often used for simulating transparency, and they are discussed
in greater detail in Section 18-4. The default value for parameter sFactor
is GL_ONE, and the default value for parameter dFactor is GL_ZERO. Hence,
the default values for the blending factors result in the incoming color values
replacing the current values in the frame buffer.
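For example, a common transparency configuration weights the incoming color by its alpha value and the frame-buffer color by one minus that alpha; this fragment assumes blending is wanted for subsequently drawn primitives:

glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f (1.0, 0.0, 0.0, 0.25);   /* red at one-quarter opacity */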
OpenGL Color Arrays

We can also specify color values for a scene in combination with vertex-array
coordinate values. For RGB color mode, we specify the location and format of
the color components with

glColorPointer (nColorComponents, dataType,
                offset, colorArray);
To use the arrays, we activate both the vertex-array and the color-array features
of OpenGL:

glEnableClientState (GL_VERTEX_ARRAY);
glEnableClientState (GL_COLOR_ARRAY);
We can even stuff both the colors and the vertex coordinates into one inter-
laced array. Each of the pointers would then reference the single interlaced array
with an appropriate offset value. For example,
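The array declaration itself is not shown above; a version consistent with the following description, using illustrative color and coordinate values, might be

static GLint hueAndPt [ ] = {
   1, 0, 0,   100, 100, 0,    /* red;   first vertex   */
   0, 1, 0,   175, 150, 0,    /* green; second vertex  */
   0, 0, 1,   175, 250, 0     /* blue;  third vertex   */
};

glColorPointer  (3, GL_INT, 6 * sizeof(GLint), hueAndPt);
glVertexPointer (3, GL_INT, 6 * sizeof(GLint), hueAndPt + 3);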
The first three elements of this array specify an RGB color value, the next three
elements specify a set of (x, y, z) vertex coordinates, and this pattern continues to
the last color-vertex specification. We set the offset parameter to the number of
bytes between successive color, or vertex, values, which is 6*sizeof(GLint)
for both. Color values start at the first element of the interlaced array, which
is hueAndPt [0], and vertex values start at the fourth element, which is
hueAndPt [3].
Because a scene generally contains several objects, each with multiple planar
surfaces, OpenGL provides a function in which we can specify all the vertex and
color arrays at once, as well as other types of information. If we change the color
and vertex values in this example to floating-point, we use this function in the
form
glInterleavedArrays (GL_C3F_V3F, 0, hueAndPt);

The first (format) argument indicates three floating-point color components followed by three floating-point vertex coordinates for each point, and the second argument gives the stride between successive color-vertex groups (a value of 0 means the data are tightly packed).
For color-index mode, the corresponding routine is

glIndexPointer (type, stride, colorIndex);
Color indices are listed in the array colorIndex and the type and stride
parameters are the same as in glColorPointer. No size parameter is needed
because color-table indices are specified with a single value.
The background ("clear") color for a display window is selected in RGBA mode
with the OpenGL function

glClearColor (red, green, blue, alpha);
Each color component in the designation (red, green, and blue), as well as the
alpha parameter, is assigned a floating-point value in the range from 0.0 to 1.0. The
default value for all four parameters is 0.0, which produces the color black. If each
color component is set to 1.0, the clear color is white. Shades of gray are obtained
with identical values for the color components between 0.0 and 1.0. The fourth
parameter, alpha, provides an option for blending the previous color with the
current color. This can occur only if we activate the blending feature of OpenGL;
color blending cannot be performed with values specified in a color table.
There are several color buffers in OpenGL that can be used as the current
refresh buffer for displaying a scene, and the glClearColor function specifies
the color for all the color buffers. We then apply the clear color to the color
buffers with the command
glClear (GL_COLOR_BUFFER_BIT);
We can also use the glClear function to set initial values for other buffers that are
available in OpenGL. These are the accumulation buffer, which stores blended-color
information, the depth buffer, which stores depth values (distances from the view-
ing position) for objects in a scene, and the stencil buffer, which stores information
to define the limits of a picture.
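Several buffers can be cleared with a single call by combining (ORing) their bit masks; for example, to reset both the refresh buffer and the depth buffer:

glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);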
In color-index mode, we use the following function (instead of glClear-
Color) to set the display-window color:
glClearIndex (index);
The window background color is then assigned the color that is stored at position
index in the color table; and the window is displayed in this color when we issue
the glClear (GL_COLOR_BUFFER_BIT) function.
Many other color functions are available in the OpenGL library for dealing
with a variety of tasks, such as changing color models, setting lighting effects for
a scene, specifying camera effects, and rendering the surfaces of an object. We
examine other color functions as we explore each of the component processes in
a computer-graphics system. For now, we limit our discussion to those functions
relating to color specifications for graphics primitives.
4 Point Attributes
Basically, we can set two attributes for points: color and size. In a state system,
the displayed color and size of a point are determined by the current values stored
in the attribute list. Color components are set with RGB values or an index into a
color table. For a raster system, point size is an integer multiple of the pixel size,
so that a large point is displayed as a square block of pixels.
5 OpenGL Point-Attribute Functions

The size of an OpenGL point is set with the function

glPointSize (size);
and the point is then displayed as a square block of pixels. Parameter size is
assigned a positive floating-point value, which is rounded to an integer (unless
the point is to be antialiased). The number of horizontal and vertical pixels in
the display of the point is determined by parameter size. Thus, a point size
of 1.0 displays a single pixel, and a point size of 2.0 displays a 2 × 2 pixel array. If
we activate the antialiasing features of OpenGL, the size of a displayed block of
pixels will be modified to smooth the edges. The default value for point size is 1.0.
Attribute functions may be listed inside or outside of a glBegin/glEnd pair.
For example, the following code segment plots three points in varying colors and
sizes. The first is a standard-size red point, the second is a double-size green point,
and the third is a triple-size blue point:
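The code segment itself is not reproduced above; a sketch consistent with that description (the point coordinates here are illustrative) is

glColor3f (1.0, 0.0, 0.0);    /* standard-size red point  */
glBegin (GL_POINTS);
   glVertex2i (50, 100);
glEnd ( );

glPointSize (2.0);            /* double-size green point  */
glColor3f (0.0, 1.0, 0.0);
glBegin (GL_POINTS);
   glVertex2i (75, 150);
glEnd ( );

glPointSize (3.0);            /* triple-size blue point   */
glColor3f (0.0, 0.0, 1.0);
glBegin (GL_POINTS);
   glVertex2i (100, 200);
glEnd ( );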
6 Line Attributes
A straight-line segment can be displayed with three basic attributes: color, width,
and style. Line color is typically set with the same function for all graphics prim-
itives, while line width and line style are selected with separate line functions. In
addition, lines may be generated with other effects, such as pen and brush strokes.
Line Width
Implementation of line-width options depends on the capabilities of the output
device. A heavy line could be displayed on a video monitor as adjacent parallel
lines, while a pen plotter might require pen changes to draw a thick line.
Line Style
Possible selections for the line-style attribute include solid lines, dashed lines, and
dotted lines. We modify a line-drawing algorithm to generate such lines by setting
the length and spacing of displayed solid sections along the line path. With many
graphics packages, we can select the length of both the dashes and the inter-dash
spacing.
FIGURE 2
Pen and brush shapes for line display.
7 OpenGL Line-Attribute Functions

Line width is set in OpenGL with the function

glLineWidth (width);

A floating-point value is assigned to parameter width, and this value is rounded
to the nearest nonnegative integer. Line style is selected with the function

glLineStipple (repeatFactor, pattern);
Parameter pattern is used to reference a 16-bit integer that describes how the
line should be displayed. A 1 bit in the pattern denotes an “on” pixel position, and
a 0 bit indicates an “off” pixel position. The pattern is applied to the pixels along
the line path starting with the low-order bits in the pattern. The default pattern is
0xFFFF (each bit position has a value of 1), which produces a solid line. Integer pa-
rameter repeatFactor specifies how many times each bit in the pattern is to be
repeated before the next bit in the pattern is applied. The default repeat value is 1.
With a polyline, a specified line-style pattern is not restarted at the beginning
of each segment. It is applied continuously across all the segments, starting at the
first endpoint of the polyline and ending at the final endpoint for the last segment
in the series.
As an example of specifying a line style, suppose that parameter pattern is
assigned the hexadecimal representation 0x00FF and the repeat factor is 1. This
would display a dashed line with eight pixels in each dash and eight pixel po-
sitions that are “off” (an eight-pixel space) between two dashes. Also, because
low-order bits are applied first, a line begins with an eight-pixel dash starting
at the first endpoint. This dash is followed by an eight-pixel space, then another
eight-pixel dash, and so forth, until the second endpoint position is reached.
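In code, this dash pattern would be selected with the repeat factor and pattern of the example:

glLineStipple (1, 0x00FF);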
Before a specified line pattern takes effect, however, we must activate the
line-style feature of OpenGL with

glEnable (GL_LINE_STIPPLE);
If we forget to include this enable function, solid lines are displayed; that is, the
default pattern 0xFFFF is used to display line segments. At any time, we can turn
off the line-pattern feature with
glDisable (GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern (solid lines).
In the following program outline, we illustrate use of the OpenGL line-
attribute functions by plotting three line graphs in different styles and widths.
Figure 3 shows the data plots that could be generated by this program.
glBegin (GL_LINE_STRIP);
   for (k = 0; k < 5; k++)
      glVertex2f (dataPts [k].x, dataPts [k].y);
glEnd ( );
glFlush ( );
}
In the calling routine, the line-pattern feature is turned on before the styled plots
are generated and turned off afterward:

glEnable (GL_LINE_STIPPLE);
   /* select a pattern with glLineStipple and a width with glLineWidth,
      then plot each data set */
glDisable (GL_LINE_STIPPLE);
FIGURE 3
Plotting three data sets with three different
OpenGL line styles and line widths: single-width
dash-dot pattern, double-width dash pattern, and
triple-width dot pattern.
For example, the following code segment assigns a different color to each endpoint of a line segment. With the default smooth shading, the displayed color is then interpolated along the line from blue at the first endpoint to red at the second:
glBegin (GL_LINES);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (50, 50);
glColor3f (1.0, 0.0, 0.0);
glVertex2i (250, 250);
glEnd ( );
Function glShadeModel can also be given the argument GL_FLAT. In that case,
the line segment is displayed in a single color: the color of the second endpoint,
(250, 250). That is, we would generate a red line. Actually, GL_SMOOTH is the
default, so we would generate a smoothly interpolated color line segment even
if we did not include this function in our code.
We can produce other effects by displaying adjacent lines that have different
colors and patterns. In addition, we can use the color-blending features of OpenGL
by superimposing lines or other objects with varying alpha values. A brush stroke
and other painting effects can be simulated with a pixmap and color blending.
The pixmap can then be moved interactively to generate line segments. Individual
pixels in the pixmap can be assigned different alpha values to display lines as
brush or pen strokes.
8 Curve Attributes
Parameters for curve attributes are the same as those for straight-line segments. We
can display curves with varying colors, widths, dot-dash patterns, and available
pen or brush options. Methods for adapting curve-drawing algorithms to accommodate
attribute selections are similar to those for line drawing.
Implementation Algorithms for
Graphics Primitives and Attributes
1 Line-Drawing Algorithms
2 Parallel Line Algorithms
3 Setting Frame-Buffer Values
4 Circle-Generating Algorithms
5 Ellipse-Generating Algorithms
6 Other Curves
7 Parallel Curve Algorithms
8 Pixel Addressing and Object
Geometry
9 Attribute Implementations for
Straight-Line Segments and Curves
10 General Scan-Line Polygon-Fill
Algorithm
11 Scan-Line Fill of Convex Polygons
12 Scan-Line Fill for Regions with
Curved Boundaries
13 Fill Methods for Areas with
Irregular Boundaries
14 Implementation Methods for Fill Styles
15 Implementation Methods for Antialiasing
16 Summary

In this chapter, we discuss the device-level algorithms for implementing OpenGL primitives. Exploring the implementation algorithms for a graphics library will give us valuable insight into the capabilities of these packages. It will also provide us with an understanding of how the functions work, perhaps how they could be improved, and how we might implement graphics routines ourselves for some special application. Research in computer graphics is continually discovering new and improved implementation techniques to provide us with methods for special applications, such as Internet graphics, and for developing faster and more realistic graphics displays in general.
From Chapter 6 of Computer Graphics with OpenGL®, Fourth Edition, Donald Hearn, M. Pauline Baker, Warren R. Carithers.
Copyright © 2011 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.
FIGURE 1
Stair-step effect (jaggies) produced
when a line is generated as a series of
pixel positions.
1 Line-Drawing Algorithms
A straight-line segment in a scene is defined by the coordinate positions for the
endpoints of the segment. To display the line on a raster monitor, the graphics sys-
tem must first project the endpoints to integer screen coordinates and determine
the nearest pixel positions along the line path between the two endpoints. Then the
line color is loaded into the frame buffer at the corresponding pixel coordinates.
Reading from the frame buffer, the video controller plots the screen pixels. This
process digitizes the line into a set of discrete integer positions that, in general,
only approximates the actual line path. A computed line position of (10.48, 20.51),
for example, is converted to pixel position (10, 21). This rounding of coordinate
values to integers causes all but horizontal and vertical lines to be displayed with
a stair-step appearance (known as “the jaggies”), as represented in Figure 1. The
characteristic stair-step shape of raster lines is particularly noticeable on systems
with low resolution, and we can improve their appearance somewhat by dis-
playing them on high-resolution systems. More effective techniques for smooth-
ing a raster line are based on adjusting pixel intensities along the line path (see
Section 15 for details).
Line Equations

We determine pixel positions along a straight-line path from the geometric properties
of the line. The Cartesian slope-intercept equation for a straight line is

y = m · x + b        (1)

with m as the slope of the line and b as the y intercept. Given that the two endpoints
of a line segment are specified at positions (x0, y0) and (xend, yend), as shown in
Figure 2, we can determine values for the slope m and y intercept b with the
following calculations:

m = (yend − y0) / (xend − x0)        (2)

b = y0 − m · x0        (3)

FIGURE 2  Line path between endpoint positions (x0, y0) and (xend, yend).

Algorithms for displaying straight lines are based on Equation 1 and the calculations
given in Equations 2 and 3.
For any given x interval δx along a line, we can compute the corresponding
y interval, δy, from Equation 2 as
δy = m · δx (4)
Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy / m        (5)
These equations form the basis for determining deflection voltages in analog displays,
such as a vector-scan system, where arbitrarily small changes in deflection
voltage are possible. For lines with slope magnitudes |m| < 1, δx can be set proportional
to a small horizontal deflection voltage, and the corresponding vertical
deflection is then set proportional to δy as calculated from Equation 4. For lines
whose slopes have magnitudes |m| > 1, δy can be set proportional to a small vertical
deflection voltage with the corresponding horizontal deflection voltage set
proportional to δx, calculated from Equation 5. For lines with m = 1, δx = δy and
the horizontal and vertical deflection voltages are equal. In each case, a smooth
line with slope m is generated between the specified endpoints.

On raster systems, lines are plotted with pixels, and step sizes in the horizontal
and vertical directions are constrained by pixel separations. That is, we must
"sample" a line at discrete positions and determine the nearest pixel to the line at
each sampled position. This scan-conversion process for straight lines is illustrated
in Figure 3 with discrete sample positions along the x axis.

FIGURE 3  Straight-line segment with five sampling positions along the x axis between x0 and xend.

DDA Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on
calculating either δy or δx, using Equation 4 or Equation 5. A line is sampled
at unit intervals in one coordinate and the corresponding integer values nearest
the line path are determined for the other coordinate.
We consider first a line with positive slope, as shown in Figure 2. If the slope
is less than or equal to 1, we sample at unit x intervals (δx = 1) and compute
successive y values as
yk+1 = yk + m (6)
Subscript k takes integer values starting from 0, for the first point, and increases
by 1 until the final endpoint is reached. Because m can be any real number
between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer
corresponding to a screen pixel position in the x column that we are processing.
For lines with a positive slope greater than 1.0, we reverse the roles of x and y.
That is, we sample at unit y intervals (δy = 1) and calculate consecutive x values as
xk+1 = xk + 1/m        (7)
In this case, each computed x value is rounded to the nearest pixel position along
the current y scan line.
Equations 6 and 7 are based on the assumption that lines are to be pro-
cessed from the left endpoint to the right endpoint (Figure 2). If this processing is
reversed, so that the starting endpoint is at the right, then either we have δx = −1
and
yk+1 = yk − m (8)
or (when the slope is greater than 1) we have δy = −1 with
xk+1 = xk − 1/m        (9)
Similar calculations are carried out using Equations 6 through 9 to deter-
mine pixel positions along a line with negative slope. Thus, if the absolute value
of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and
calculate y values with Equation 6. When the starting endpoint is at the right
(for the same slope), we set δx = −1 and obtain y positions using Equation 8.
For a negative slope with absolute value greater than 1, we use δy = −1 and
Equation 9, or we use δy = 1 and Equation 7.
This algorithm is summarized in the following procedure, which accepts as
input two integer screen positions for the endpoints of a line segment. Horizontal
and vertical differences between the endpoint positions are assigned to parame-
ters dx and dy. The difference with the greater magnitude determines the value of
parameter steps. This value is the number of pixels that must be drawn beyond
the starting pixel; from it, we calculate the x and y increments needed to generate
the next pixel position at each step along the line path. We draw the starting pixel
at position (x0, y0), and then draw the remaining pixels iteratively, adjusting x
and y at each step to obtain the next pixel’s position before drawing it. If the magni-
tude of dx is greater than the magnitude of dy and x0 is less than xEnd, the values
for the increments in the x and y directions are 1 and m, respectively. If the greater
change is in the x direction, but x0 is greater than xEnd, then the decrements −1
and −m are used to generate each new point on the line. Otherwise, we use a unit
increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.
#include <stdlib.h>
#include <math.h>
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
int dx = xEnd - x0, dy = yEnd - y0, steps, k;
float xIncrement, yIncrement, x = x0, y = y0;
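   /* Completion of the procedure, following the description in the text.
      setPixel is the frame-buffer routine used throughout this chapter;
      round is the C math-library rounding function. */
   if (abs (dx) > abs (dy))
      steps = abs (dx);
   else
      steps = abs (dy);
   xIncrement = (float) dx / (float) steps;
   yIncrement = (float) dy / (float) steps;

   setPixel (round (x), round (y));   /* draw the starting pixel */
   for (k = 0; k < steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (round (x), round (y));
   }
}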
FIGURE 4  A section of a display screen where a straight-line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

The DDA algorithm is a faster method for calculating pixel positions than one
that directly implements Equation 1. It eliminates the multiplication in Equation 1
by using raster characteristics, so that appropriate increments are applied
in the x or y directions to step from one pixel position to another along the line path.
The accumulation of round-off error in successive additions of the floating-point
increment, however, can cause the calculated pixel positions to drift away from
the true line path for long line segments. Furthermore, the rounding operations
and floating-point arithmetic in this procedure are still time-consuming. We can
improve the performance of the DDA algorithm by separating the increments
m and 1/m into integer and fractional parts so that all calculations are reduced
to integer operations. A method for calculating 1/m increments in integer steps
is discussed in Section 10. In the next section, we consider a more general scan-line
approach that can be applied to both lines and curves.

FIGURE 5  A section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.

Bresenham's Line Algorithm

In this section, we introduce an accurate and efficient raster line-generating algorithm,
developed by Bresenham, that uses only incremental integer calculations.
In addition, Bresenham's line algorithm can be adapted to display circles and
other curves. Figures 4 and 5 illustrate sections of a display screen where
straight-line segments are to be drawn. The vertical axes show scan-line positions,
and the horizontal axes identify pixel columns. Sampling at unit x intervals
in these examples, we need to decide which of two possible pixel positions is
closer to the line path at each sample step. Starting from the left endpoint shown
in Figure 4, we need to determine at the next sample position whether to plot
the pixel at position (11, 11) or the one at (11, 12). Similarly, Figure 5 shows a
negative-slope line path starting from the left endpoint at pixel position (50, 50).
In this one, do we select the next pixel position as (51, 50) or as (51, 49)? These
questions are answered with Bresenham's line algorithm by testing the sign of
an integer parameter whose value is proportional to the difference between the
vertical separations of the two pixel positions from the actual line path.

FIGURE 6  A section of the screen showing a pixel in column xk on scan line yk that is to be plotted along the path of a line segment with slope 0 < m < 1.

To illustrate Bresenham's approach, we first consider the scan-conversion
process for lines with positive slope less than 1.0. Pixel positions along a line
path are then determined by sampling at unit x intervals. Starting from the left
endpoint (x0, y0) of a given line, we step to each successive column (x position)
and plot the pixel whose scan-line y value is closest to the line path. Figure 6
demonstrates the kth step in this process. Assuming that we have determined that
the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot
in column xk+1 = xk + 1. Our choices are the pixels at positions (xk + 1, yk) and
(xk + 1, yk + 1).
At sampling position xk + 1, we label the vertical pixel separations from the
mathematical line path as dlower and dupper (Figure 7). The y coordinate on the
mathematical line at pixel column position xk + 1 is calculated as

y = m(xk + 1) + b        (10)

Then

dlower = y − yk = m(xk + 1) + b − yk        (11)

and

dupper = (yk + 1) − y = yk + 1 − m(xk + 1) − b        (12)

FIGURE 7  Vertical distances between pixel positions and the line y coordinate at sampling position xk + 1.

To determine which of the two pixels is closest to the line path, we can set up an
efficient test that is based on the difference between the two pixel separations as
follows:

dlower − dupper = 2m(xk + 1) − 2yk + 2b − 1        (13)
A decision parameter pk for the kth step in the line algorithm can be obtained
by rearranging Equation 13 so that it involves only integer calculations. We
accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical
and horizontal separations of the endpoint positions, and defining the decision
parameter as

pk = Δx(dlower − dupper) = 2Δy · xk − 2Δx · yk + c        (14)

The sign of pk is the same as the sign of dlower − dupper, because Δx > 0 for our
example. Parameter c is constant and has the value 2Δy + Δx(2b − 1), which is
independent of the pixel position and will be eliminated in the recursive calculations
for pk. If the pixel at yk is "closer" to the line path than the pixel at yk + 1
(that is, dlower < dupper), then decision parameter pk is negative. In that case, we
plot the lower pixel; otherwise, we plot the upper pixel.
Coordinate changes along the line occur in unit steps in either the x or y
direction. Therefore, we can obtain the values of successive decision parameters
using incremental integer calculations. At step k + 1, the decision parameter is
evaluated from Equation 14 as

pk+1 = 2Δy · xk+1 − 2Δx · yk+1 + c

Subtracting Equation 14 from the preceding equation, we have

pk+1 − pk = 2Δy(xk+1 − xk) − 2Δx(yk+1 − yk)

However, xk+1 = xk + 1, so that

pk+1 = pk + 2Δy − 2Δx(yk+1 − yk)        (15)

where the term yk+1 − yk is either 0 or 1, depending on the sign of parameter pk.

This recursive calculation of decision parameters is performed at each integer
x position, starting at the left coordinate endpoint of the line. The first parameter,
p0, is evaluated from Equation 14 at the starting pixel position (x0, y0) and with
m evaluated as Δy/Δx as follows:

p0 = 2Δy − Δx        (16)

We summarize Bresenham line drawing for a line with a positive slope less
than 1 in the following outline of the algorithm. The constants 2Δy and 2Δy −
2Δx are calculated once for each line to be scan-converted, so the arithmetic
involves only integer addition and subtraction of these two constants. Step 4 of
the algorithm will be performed a total of Δx times.
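Bresenham's Line-Drawing Algorithm for |m| < 1.0
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and obtain the starting value for the decision parameter as p0 = 2Δy − Δx.
4. At each xk along the line, starting at k = 0, perform the following test: If pk < 0, the next point to plot is (xk + 1, yk) and pk+1 = pk + 2Δy. Otherwise, the next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2Δy − 2Δx.
5. Repeat step 4 Δx − 1 times.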
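As an illustration, consider the line with endpoints (20, 10) and (30, 18), the path plotted in Figure 8. This line has a slope of 0.8, with Δx = 10 and Δy = 8, so the initial decision parameter is p0 = 2Δy − Δx = 6 and the two increments are 2Δy = 16 and 2Δy − 2Δx = −4. We plot the initial point at (20, 10), and successive pixel positions follow from the sign of each decision parameter:

k     pk     (xk+1, yk+1)
0      6     (21, 11)
1      2     (22, 12)
2     −2     (23, 12)
3     14     (24, 13)
4     10     (25, 14)
5      6     (26, 15)
6      2     (27, 16)
7     −2     (28, 16)
8     14     (29, 17)
9     10     (30, 18)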
A plot of the pixels generated along this line path is shown in Figure 8.

FIGURE 8  Pixel positions along the line path between endpoints (20, 10) and (30, 18), plotted with Bresenham's line algorithm.
This algorithm is implemented in the following procedure; setPixel is, as before,
the routine that sets the color of a given frame-buffer position.

#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
   int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
   int p = 2 * dy - dx;
   int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
   int x, y;

   /* Determine which endpoint to use as the start position. */
   if (x0 > xEnd) {
      x = xEnd;
      y = yEnd;
      xEnd = x0;
   }
   else {
      x = x0;
      y = y0;
   }
   setPixel (x, y);
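   /* Stepping loop, completing the procedure according to the outline
      above; each pass applies the incremental calculation of Eq. 15. */
   while (x < xEnd) {
      x++;
      if (p < 0)
         p += twoDy;
      else {
         y++;
         p += twoDyMinusDx;
      }
      setPixel (x, y);
   }
}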
Displaying Polylines
Implementation of a polyline procedure is accomplished by invoking a line-
drawing routine n − 1 times to display the lines connecting the n endpoints. Each
successive call passes the coordinate pair needed to plot the next line section,
where the first endpoint of each coordinate pair is the last endpoint of the previ-
ous section. Once the color values for pixel positions along the first line segment
have been set in the frame buffer, we process subsequent line segments starting
with the next pixel position following the first endpoint for that segment. In this
way, we can avoid setting the color of some endpoints twice. We discuss methods
for avoiding the overlap of displayed objects in more detail in Section 8.
Color Models and Color Applications
by spraying the ink for the three primary colors over each other and allowing
them to mix before they dry. For black-and-white or grayscale printing, only the
black ink is used.
where the white point in RGB space is represented as the unit column vector. And
we convert from a CMY color representation to an RGB representation using the
matrix transformation
⎡R⎤   ⎡1⎤   ⎡C⎤
⎢G⎥ = ⎢1⎥ − ⎢M⎥        (12)
⎣B⎦   ⎣1⎦   ⎣Y⎦
In this transformation, the unit column vector represents the black point in the
CMY color space.
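Equation 12 translates directly into code. The helper below is a sketch; the type and function names are ours, not OpenGL's:

typedef struct { float r, g, b; } RGBColor;
typedef struct { float c, m, y; } CMYColor;

/* Convert CMY ink values to RGB intensities, a direct transcription
   of Equation 12. */
RGBColor cmyToRGB (CMYColor ink)
{
   RGBColor rgb = { 1.0f - ink.c, 1.0f - ink.m, 1.0f - ink.y };
   return rgb;
}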
For the conversion from RGB to the CMYK color space, we first set K =
1 − max(R, G, B), which is the smallest of the three ink values given by Equation 11.
Then K is subtracted from each of C, M, and Y in Equation 11. Similarly, for the
transformation from CMYK to RGB, we first add K to each of C, M, and Y and
then apply Equation 12. In practice, these transformation equations are often
modified to improve the printing quality for a particular system.