CHAPTER 7
FROM VERTICES TO FRAGMENTS
We now turn to the next steps in the pipeline: clipping, rasterization, and
hidden-surface removal. Although we have yet to consider some major parts
of OpenGL that are available to the application programmer, including discrete
primitives, texture mapping, and curves and surfaces, there are several reasons for
considering these topics at this point. First, you may be wondering how your pro-
grams are processed by the system that you are using: how lines are drawn on the
screen, how polygons are filled, and what happens to primitives that lie outside the
viewing volumes defined in your program. Second, our contention is that if we are
to use a graphics system efficiently, we need to have a deeper understanding of the
implementation process: which steps are easy, and which ones tax our hardware and
software. Third, our discussion of implementation will open the door to new capa-
bilities that are supported by the latest hardware.
Learning implementation involves studying algorithms. As when we study other
algorithms, we must be careful to consider such issues as theoretical versus practical
performance, hardware versus software implementations, and the specific character-
istics of an application. Although we can test whether an OpenGL implementation
works correctly, in the sense that it produces the correct pixels on the screen, there
are many choices for the algorithms employed. We focus on the basic operations that
are necessary to implement a standard API and are required, whether the rendering
is done by a pipeline architecture or by some other method, such as a ray tracer. Con-
sequently, we present a variety of the basic algorithms for each of the principal tasks
in an implementation.
In this chapter, we are concerned with the basic algorithms that are used to
implement the rendering pipeline used by OpenGL. We will focus on three issues:
clipping, rasterization, and hidden-surface removal. Clipping involves eliminating
objects that lie outside the viewing volume and thus cannot be visible in the image.
Rasterization produces fragments from the remaining objects. These fragments can
contribute to the final image. Hidden-surface removal determines which fragments
correspond to objects that are visible, namely those that are in the view volume and
are not blocked from view by other objects closer to the camera.
[Figure 7.1: Rendering as a black box: the application program sends vertices to the graphics system, which writes pixels into the frame buffer.]
for(each_object) render(object);
A pipeline renderer fits this description. Vertices are defined by the program and
flow through a sequence of modules that transforms them, colors them, and deter-
mines whether they are visible. A polygon might flow through the steps illustrated in
Figure 7.2. Note that after a polygon passes through geometric processing, the ras-
terization of this polygon can potentially affect any pixel in the frame buffer. Most
implementations that follow this approach are based on construction of a render-
ing pipeline that contains hardware or software modules for each of the tasks. Data
(vertices) flow forward through the system.
[Figure 7.2: A polygon flows through the steps of geometric processing—a sequence of transformations in three-dimensional (x, y, z) frames—before rasterization writes to the frame buffer.]

In the past, the major limitations of the object-oriented approach were the large
amount of memory required and the high cost of processing each object indepen-
dently. Any geometric primitive that emerges from the geometric processing poten-
tially can affect any set of pixels in the frame buffer; thus, the entire color buffer—and
various other buffers, such as the depth buffer used for hidden-surface removal—
must be of the size of the display and must be available at all times. Before memory
became inexpensive and dense, this requirement was considered to be a serious prob-
lem. Now various pipelined geometric processors are available that can process tens of
millions of polygons per second. In fact, precisely because we are doing the same op-
erations on every primitive, the hardware to build an object-based system is fast and
relatively inexpensive, with many of the functions implemented with special-purpose
chips.
Today, the main limitation of object-oriented implementations is that they can-
not handle most global calculations. Because each geometric primitive is processed
independently—and in an arbitrary order—complex shading effects that involve
multiple geometric objects, such as reflections, cannot be handled, except by approx-
imate methods. The major exception is hidden-surface removal, where the z-buffer
is used to store global information.
Image-oriented approaches loop over pixels, or rows of pixels called scanlines,
that constitute the frame buffer. In pseudocode, the outer loop of such a program is
of the following form:
for(each_pixel) assign_a_color(pixel);
For each pixel, we work backward, determining which geometric primitives can
contribute to its color. The advantages of this approach are that we need only limited
display memory at any time and that we can hope to generate pixels at the rate and
in the order required to refresh the display. Because the results of most calculations
do not differ greatly from pixel to pixel (or scanline to scanline), we can use this
coherence in our algorithms by developing incremental forms for many of the steps
in the implementation. The main disadvantage of this approach is that unless we first
build a data structure from the geometric data, we do not know which primitives
affect which pixels. Such a data structure can be complex and may imply that all
the geometric data must be available at all times during the rendering process. For
problems with very large databases, even having a good data representation may not
avoid memory problems. However, because image-oriented approaches have access
to all objects for each pixel, they are well suited to handle global effects such as
shadows and reflections. Ray tracing (Chapter 13) is an example of the image-based
approach.
Because our primary interest is in interactive applications, we lean toward the
object-based approach, although we look at examples of algorithms suited for both
approaches. In addition, as hardware becomes faster and cheaper, approaches that
are not competitive today may well be practical in the near future.
7.2.1 Modeling
The usual results of the modeling process are sets of vertices that specify a group of
geometric objects supported by the rest of the system. We have seen a few examples
that required some modeling by the application, such as the approximation of spheres
in Chapter 6. In Chapters 10 and 11, we explore other modeling techniques.
We can look at the modeler as a black box that produces geometric objects and is
usually an application program that may be interactive. The modeler might perform
other tasks in addition to producing geometry. Consider, for example, clipping: the
process of eliminating parts of objects that cannot appear on the display because they
lie outside the viewing volume. A user can generate geometric objects in her program,
and she can hope that the rest of the system can process these objects at the rate
at which they are produced; or the modeler can attempt to ease the burden on the
rest of the system by minimizing the number of objects that it passes on. This latter
approach often means that the modeler may do some of the same jobs as the rest
of the system, albeit with different algorithms. In the case of clipping, the modeler,
knowing more about the specifics of the application, can often use a good heuristic to
eliminate many, if not most, primitives before they are sent on through the standard
viewing process.

[Figure 7.3: The principal tasks of the pipeline—modeling, geometry processing, rasterization, and fragment processing—ending at the frame buffer and display.]
Collectively, these operations constitute what has been called front-end process-
ing. All involve three-dimensional calculations, and all require floating-point arith-
metic. All generate similar hardware and software requirements. All are carried out
on a vertex-by-vertex basis. We will discuss clipping, the only geometric step that we
have yet to discuss, in Section 7.3.
7.2.3 Rasterization
Even after geometric processing has taken place, we still need to retain depth infor-
mation for hidden-surface removal. However, only the x, y values of the vertices are
needed to determine which pixels in the frame buffer can be affected by a primi-
tive. For example, after perspective division, a line segment that was defined origi-
nally in three dimensions by two vertices becomes a line segment defined by a pair
of three-dimensional vertices in normalized device coordinates. To generate a set of
fragments that give the locations of the pixels in the frame buffer corresponding to
these vertices, we only need their x, y components or, equivalently, the results of
the orthogonal projection of these vertices. We determine these fragments through
a process called rasterization or scan conversion. For line segments, rasterization
determines which fragments should be used to approximate a line segment between
the projected vertices. For polygons, rasterization determines which pixels lie inside
the two-dimensional polygon determined by the projected vertices.
The colors that we assign to these fragments can be determined by the color at-
tributes or obtained by interpolating the shades at the vertices that are computed, as
in Chapter 6. Objects more complex than line segments and polygons are usually ap-
proximated by multiple line segments and polygons, and thus most graphics systems
do not have special rasterization algorithms for them. We will see exceptions to this
rule for some special curves and surfaces in Chapter 12.
The rasterizer starts with vertices in normalized device coordinates but outputs
fragments whose locations are in units of the display—window coordinates. As we
saw in Chapters 2 and 5, the projection of the clipping volume must appear in the
assigned viewport. In OpenGL, this final transformation is done after projection
and is two-dimensional. The preceding transformations have normalized the view
volume such that its sides are of length 2 and line up with the sides of the viewport
(Figure 7.4), so this transformation is simply
xv = xvmin + ((x + 1.0)/2.0)(xvmax − xvmin),

yv = yvmin + ((y + 1.0)/2.0)(yvmax − yvmin),

zv = zvmin + ((z + 1.0)/2.0)(zvmax − zvmin).
Recall that for perspective viewing, these z-values have been scaled nonlinearly
by perspective normalization. However, they retain their original depth order, so they
can be used for hidden-surface removal. We will use the term screen coordinates to
refer to the two-dimensional system that is the same as window coordinates but
lacks the depth coordinate.

[Figure 7.4: The viewport transformation maps a point (x, y) in the normalized view volume to (xv, yv) in the viewport, whose upper corner is at (xvmax, yvmax).]
7.3 CLIPPING
We can now turn to clipping, the process of determining which primitives, or parts of
primitives, fit within the clipping or view volume defined by the application program.
Clipping is done before the perspective division that is necessary if the w component
of a clipped vertex is not equal to 1. The portions of all primitives that can possibly
be displayed—we have yet to apply hidden-surface removal—lie within the cube as
follows:
− w ≤ x ≤ w,
− w ≤ y ≤ w,
− w ≤ z ≤ w.
This coordinate system is called clip coordinates, and it depends on neither the orig-
inal application units nor the particulars of the display device, although the infor-
mation to produce the correct image is retained in this coordinate system. Note also
that projection has been carried out only partially. We still must do the perspective
division and the final orthographic projection. After perspective division, we have a
three-dimensional representation in normalized device coordinates. By carrying out
clipping in clip coordinates, we avoid doing the perspective division for primitives
that lie outside the clipping volume.
We will concentrate on clipping of line segments and polygons because they
are the most common primitives to pass down the pipeline. Although the OpenGL
pipeline does clipping on three-dimensional objects, there are other systems in which
the objects are first projected into the x, y plane. Fortunately, many of the most
efficient algorithms are almost identical in two and three dimensions and we will
focus on these algorithms.
[Figure 7.7: Breaking up of space into regions by the extended sides of the clipping window, with example line segments whose endpoints are labeled A through H.]
Likewise, b1 is 1 if y < ymin , and b2 and b3 are determined by the relationship between
x and the left and right sides of the window. The resulting codes are indicated in
Figure 7.7. For each endpoint of a line segment, we first compute the endpoint’s
outcode, a step that can require eight floating-point subtractions per line segment.
Consider a line segment whose outcodes are given by o1 = outcode(x1, y1) and
o2 = outcode(x2 , y2). We can now reason on the basis of these outcodes. There are
four cases:
1. (o1 = o2 = 0.) Both endpoints are inside the clipping window, as is true for
segment AB in Figure 7.7. The entire line segment is inside, and the segment
can be sent on to be rasterized.
[Figure: Example line segments, with endpoints labeled A through J, in various positions relative to the clipping window.]
only a single division should be required in any case. If we use the standard explicit
form of a line,
y = mx + h,
where m is the slope of the line and h is the line’s y intercept, then we can compute
m and h from the endpoints. However, vertical lines cannot be represented in this
form—a critical weakness of the explicit form. If we were interested in only the
Cohen-Sutherland algorithm, it would be fairly simple to program all cases directly,
because the sides of the clipping rectangle are parallel to the axes. However, we are
interested in more than just clipping; consequently, other representations of the line
and line segment are of importance. In particular, parametric representations are
almost always used in computer graphics. We have already seen the parametric form
of the line in Chapter 4; the parametric representation of other types of curves is
considered in Chapter 12.
The parametric form of the line segment from p1 to p2 is p(α) = (1 − α)p1 + αp2. Note that this form is robust and needs no changes for horizontal or vertical lines.
As the parameter α varies from 0 to 1, we move along the segment from p1 to p2.
Negative values of α yield points on the line on the other side of p1 from p2. Similarly,
values of α > 1 give points on the line past p2 going off to infinity.
Consider the line segment and the line of which it is part, as shown in Fig-
ure 7.8(a). As long as the line is not parallel to a side of the window (if it is, we can
handle that situation with ease), there are four points where the line intersects the
extended sides of the window. These points correspond to the four values of the pa-
rameter: α1, α2, α3, and α4. One of these values corresponds to the line entering the
window; another corresponds to the line leaving the window. Leaving aside, for the
moment, how we compute these intersections, we can order them, and can determine
which correspond to intersections that we need for clipping. For the given example,
[Figure 7.8: (a) A line segment whose four intersections with the extended window edges, at parameter values α1 < α2 < α3 < α4, all lie between 0 and 1; (b) a segment whose four intersections occur in a different order.]
Hence, all four intersections are inside the original line segment, with the two in-
nermost (α2 and α3) determining the clipped line segment. We can distinguish this
case from the case in Figure 7.8(b), which also has the four intersections between the
endpoints of the line segment, by noting the order in which the line crosses the
extended sides of the window: in case (b), the line intersects both the top and the
bottom of the window before it intersects either the left or the right; thus, the
entire line segment must be rejected. Other cases
of the ordering of the points of intersection can be argued in a similar way.
Efficient implementation of this strategy requires that we avoid computing inter-
sections until they are needed. Many lines can be rejected before all four intersections
are known. We also want to avoid floating-point divisions where possible. If we use
the parametric form to determine the intersection with the top of the window, we
find the intersection at the value
α = (ymax − y1) / (y2 − y1).
Similar equations hold for the other three sides of the window. Rather than com-
puting these intersections, at the cost of a division for each, we instead write the
equation as

α(y2 − y1) = ymax − y1.

All the tests required by the algorithm can be restated in terms of ymax, y1, and y2;
similar terms can be computed for the other sides of the window. Thus, all decisions
about clipping can be made without floating-point division. Only if an intersection is
needed (because a segment has to be shortened) is the division done. The efficiency of
this approach, compared to that of the Cohen-Sutherland algorithm, is that we avoid
multiple shortening of line segments and the related reexecutions of the clipping
algorithm. We forgo discussion of other efficient two-dimensional line-clipping al-
gorithms because, unlike the Cohen-Sutherland and Liang-Barsky algorithms, these
algorithms do not extend to three dimensions.
Implementation Algorithms for Graphics Primitives and Attributes

1 Line-Drawing Algorithms
2 Parallel Line Algorithms
3 Setting Frame-Buffer Values
4 Circle-Generating Algorithms
5 Ellipse-Generating Algorithms
6 Other Curves
7 Parallel Curve Algorithms
8 Pixel Addressing and Object Geometry
9 Attribute Implementations for Straight-Line Segments and Curves
10 General Scan-Line Polygon-Fill Algorithm
11 Scan-Line Fill of Convex Polygons
12 Scan-Line Fill for Regions with Curved Boundaries
13 Fill Methods for Areas with Irregular Boundaries
14 Implementation Methods for Fill Styles
15 Implementation Methods for Antialiasing
16 Summary

In this chapter, we discuss the device-level algorithms for implementing OpenGL primitives. Exploring the implementation algorithms for a graphics library will give us valuable insight into the capabilities of these packages. It will also provide us with an understanding of how the functions work, perhaps how they could be improved, and how we might implement graphics routines ourselves for some special application. Research in computer graphics is continually discovering new and improved implementation techniques to provide us with methods for special applications, such as Internet graphics, and for developing faster and more realistic graphics displays in general.
FIGURE 1 Stair-step effect (jaggies) produced when a line is generated as a series of pixel positions.
1 Line-Drawing Algorithms
A straight-line segment in a scene is defined by the coordinate positions for the
endpoints of the segment. To display the line on a raster monitor, the graphics sys-
tem must first project the endpoints to integer screen coordinates and determine
the nearest pixel positions along the line path between the two endpoints. Then the
line color is loaded into the frame buffer at the corresponding pixel coordinates.
Reading from the frame buffer, the video controller plots the screen pixels. This
process digitizes the line into a set of discrete integer positions that, in general,
only approximates the actual line path. A computed line position of (10.48, 20.51),
for example, is converted to pixel position (10, 21). This rounding of coordinate
values to integers causes all but horizontal and vertical lines to be displayed with
a stair-step appearance (known as “the jaggies”), as represented in Figure 1. The
characteristic stair-step shape of raster lines is particularly noticeable on systems
with low resolution, and we can improve their appearance somewhat by dis-
playing them on high-resolution systems. More effective techniques for smooth-
ing a raster line are based on adjusting pixel intensities along the line path (see
Section 15 for details).
Line Equations

We determine pixel positions along a straight-line path from the geometric properties of the line. The Cartesian slope-intercept equation for a straight line is

y = m · x + b    (1)

with m as the slope of the line and b as the y intercept. Given that the two endpoints of a line segment are specified at positions (x0, y0) and (xend, yend), as shown in Figure 2, we can determine values for the slope m and y intercept b with the following calculations:

m = (yend − y0) / (xend − x0)    (2)

b = y0 − m · x0    (3)

FIGURE 2 Line path between endpoint positions (x0, y0) and (xend, yend).

Algorithms for displaying straight lines are based on Equation 1 and the calculations given in Equations 2 and 3.
For any given x interval δx along a line, we can compute the corresponding
y interval, δy, from Equation 2 as
δy = m · δx (4)
Similarly, we can obtain the x interval δx corresponding to a specified δy as

δx = δy / m    (5)
These equations form the basis for determining deflection voltages in analog dis-
plays, such as a vector-scan system, where arbitrarily small changes in deflection
voltage are possible. For lines with slope magnitudes |m| < 1, δx can be set pro-
portional to a small horizontal deflection voltage, and the corresponding vertical
deflection is then set proportional to δy as calculated from Equation 4. For lines
whose slopes have magnitudes |m| > 1, δy can be set proportional to a small vertical
deflection voltage with the corresponding horizontal deflection voltage set
proportional to δx, calculated from Equation 5. For lines with m = 1, δx = δy and
the horizontal and vertical deflection voltages are equal. In each case, a smooth
line with slope m is generated between the specified endpoints.

On raster systems, lines are plotted with pixels, and step sizes in the horizontal
and vertical directions are constrained by pixel separations. That is, we must
“sample” a line at discrete positions and determine the nearest pixel to the line at
each sampled position. This scan-conversion process for straight lines is illustrated
in Figure 3 with discrete sample positions along the x axis.

FIGURE 3 Straight-line segment with five sampling positions along the x axis between x0 and xend.

DDA Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on
calculating either δy or δx, using Equation 4 or Equation 5. A line is sampled
at unit intervals in one coordinate and the corresponding integer values nearest
the line path are determined for the other coordinate.
We consider first a line with positive slope, as shown in Figure 2. If the slope
is less than or equal to 1, we sample at unit x intervals (δx = 1) and compute
successive y values as
yk+1 = yk + m (6)
Subscript k takes integer values starting from 0, for the first point, and increases
by 1 until the final endpoint is reached. Because m can be any real number
between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer
corresponding to a screen pixel position in the x column that we are processing.
For lines with a positive slope greater than 1.0, we reverse the roles of x and y.
That is, we sample at unit y intervals (δy = 1) and calculate consecutive x values as
xk+1 = xk + 1/m    (7)
In this case, each computed x value is rounded to the nearest pixel position along
the current y scan line.
Equations 6 and 7 are based on the assumption that lines are to be pro-
cessed from the left endpoint to the right endpoint (Figure 2). If this processing is
reversed, so that the starting endpoint is at the right, then either we have δx = −1
and
yk+1 = yk − m (8)
or (when the slope is greater than 1) we have δy = −1 with
xk+1 = xk − 1/m    (9)
Similar calculations are carried out using Equations 6 through 9 to deter-
mine pixel positions along a line with negative slope. Thus, if the absolute value
of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and
calculate y values with Equation 6. When the starting endpoint is at the right
(for the same slope), we set δx = −1 and obtain y positions using Equation 8.
For a negative slope with absolute value greater than 1, we use δy = −1 and
Equation 9, or we use δy = 1 and Equation 7.
This algorithm is summarized in the following procedure, which accepts as
input two integer screen positions for the endpoints of a line segment. Horizontal
and vertical differences between the endpoint positions are assigned to parame-
ters dx and dy. The difference with the greater magnitude determines the value of
parameter steps. This value is the number of pixels that must be drawn beyond
the starting pixel; from it, we calculate the x and y increments needed to generate
the next pixel position at each step along the line path. We draw the starting pixel
at position (x0, y0), and then draw the remaining pixels iteratively, adjusting x
and y at each step to obtain the next pixel’s position before drawing it. If the magni-
tude of dx is greater than the magnitude of dy and x0 is less than xEnd, the values
for the increments in the x and y directions are 1 and m, respectively. If the greater
change is in the x direction, but x0 is greater than xEnd, then the decrements −1
and −m are used to generate each new point on the line. Otherwise, we use a unit
increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.
#include <stdlib.h>
#include <math.h>

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    /* The axis of greater change sets the number of unit steps
       (the endpoints are assumed to be distinct). */
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);
    xIncrement = (float) dx / (float) steps;
    yIncrement = (float) dy / (float) steps;

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}
FIGURE 4 A section of a display screen where a straight-line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

The DDA algorithm is a faster method for calculating pixel positions than one that directly implements Equation 1. It eliminates the multiplication in Equation 1 by using raster characteristics, so that appropriate increments are applied in the x or y directions to step from one pixel position to another along the line path. The accumulation of round-off error in successive additions of the floating-point increment, however, can cause the calculated pixel positions to drift away from the true line path for long line segments. Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time-consuming. We can improve the performance of the DDA algorithm by separating the increments m and 1/m into integer and fractional parts so that all calculations are reduced to integer operations. A method for calculating 1/m increments in integer steps is discussed in Section 10. In the next section, we consider a more general scan-line approach that can be applied to both lines and curves.

FIGURE 5 A section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.

Bresenham’s Line Algorithm

In this section, we introduce an accurate and efficient raster line-generating algorithm, developed by Bresenham, that uses only incremental integer calculations. In addition, Bresenham’s line algorithm can be adapted to display circles and other curves. Figures 4 and 5 illustrate sections of a display screen where
straight-line segments are to be drawn. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns. Sampling at unit x intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step. Starting from the left endpoint shown in Figure 4, we need to determine at the next sample position whether to plot the pixel at position (11, 11) or the one at (11, 12). Similarly, Figure 5 shows a negative-slope line path starting from the left endpoint at pixel position (50, 50). In this one, do we select the next pixel position as (51, 50) or as (51, 49)? These questions are answered with Bresenham’s line algorithm by testing the sign of an integer parameter whose value is proportional to the difference between the vertical separations of the two pixel positions from the actual line path.

FIGURE 6 A section of the screen showing a pixel in column xk on scan line yk that is to be plotted along the path of a line segment with slope 0 < m < 1.

To illustrate Bresenham’s approach, we first consider the scan-conversion process for lines with positive slope less than 1.0. Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the pixel whose scan-line y value is closest to the line path. Figure 6 demonstrates the kth step in this process. Assuming that we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1. Our choices are the pixels at positions (xk + 1, yk) and (xk + 1, yk + 1).
At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path as dlower and dupper (Figure 7). The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as

y = m(xk + 1) + b    (10)

Then

dlower = y − yk = m(xk + 1) + b − yk    (11)

and

dupper = (yk + 1) − y = yk + 1 − m(xk + 1) − b    (12)

FIGURE 7 Vertical distances between pixel positions and the line y coordinate at sampling position xk + 1.

To determine which of the two pixels is closest to the line path, we can set up an efficient test that is based on the difference between the two pixel separations as follows:

dlower − dupper = 2m(xk + 1) − 2yk + 2b − 1    (13)
A decision parameter pk for the kth step in the line algorithm can be obtained by rearranging Equation 13 so that it involves only integer calculations. We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining the decision parameter as

pk = Δx(dlower − dupper) = 2Δy · xk − 2Δx · yk + c    (14)

The sign of pk is the same as the sign of dlower − dupper, because Δx > 0 for our example. Parameter c is constant and has the value 2Δy + Δx(2b − 1), which is independent of the pixel position and will be eliminated in the recursive calculations for pk. If the pixel at yk is “closer” to the line path than the pixel at yk + 1 (that is, dlower < dupper), then decision parameter pk is negative. In that case, we plot the lower pixel; otherwise, we plot the upper pixel.
Coordinate changes along the line occur in unit steps in either the x or y direction. Therefore, we can obtain the values of successive decision parameters using incremental integer calculations. At step k + 1, the decision parameter is evaluated from Equation 14 as

pk+1 = 2Δy · xk+1 − 2Δx · yk+1 + c

Subtracting Equation 14 from the preceding equation, we have

pk+1 − pk = 2Δy(xk+1 − xk) − 2Δx(yk+1 − yk)

However, xk+1 = xk + 1, so that

pk+1 = pk + 2Δy − 2Δx(yk+1 − yk)    (15)

where the term yk+1 − yk is either 0 or 1, depending on the sign of parameter pk.

This recursive calculation of decision parameters is performed at each integer x position, starting at the left coordinate endpoint of the line. The first parameter, p0, is evaluated from Equation 14 at the starting pixel position (x0, y0) and with m evaluated as Δy/Δx as follows:

p0 = 2Δy − Δx    (16)

We summarize Bresenham line drawing for a line with a positive slope less than 1 in the following outline of the algorithm. The constants 2Δy and 2Δy − 2Δx are calculated once for each line to be scan-converted, so the arithmetic involves only integer addition and subtraction of these two constants. Step 4 of the algorithm will be performed a total of Δx times.
A plot of the pixels generated along this line path is shown in Figure 8.

FIGURE 8 Pixel positions along the line path between endpoints (20, 10) and (30, 18), plotted with Bresenham’s line algorithm.
#include <stdlib.h>
#include <math.h>
else {
x = x0;
y = y0;
}
setPixel (x, y);
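Only the opening includes and the closing endpoint-selection branch of the routine survive above. The following self-contained sketch fills in the standard procedure described by the algorithm outline; the name lineBres follows the fragment, but collecting the pixel coordinates in a vector (instead of calling an OpenGL setPixel) is our own adaptation for illustration.

```cpp
#include <cstdlib>   // std::abs
#include <utility>
#include <vector>

// Bresenham line algorithm for slopes with |m| < 1.0. The line is
// always stepped from its left endpoint; yStep handles negative slopes.
// Returns the plotted pixel positions instead of writing a frame buffer.
std::vector<std::pair<int, int>> lineBres(int x0, int y0, int xEnd, int yEnd)
{
    int dx = std::abs(xEnd - x0), dy = std::abs(yEnd - y0);
    int p = 2 * dy - dx;                   // initial decision parameter p0
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    // Determine which endpoint to use as the start position.
    if (x0 > xEnd) {
        x = xEnd;
        y = yEnd;
        yEnd = y0;
    }
    else {
        x = x0;
        y = y0;
    }
    int yStep = (yEnd >= y) ? 1 : -1;      // direction of the y increments

    std::vector<std::pair<int, int>> pixels;
    pixels.push_back({x, y});              // plot the first point
    while (dx-- > 0) {                     // step 4 is performed dx times
        ++x;
        if (p < 0)
            p += twoDy;                    // stay on the same scan line
        else {
            y += yStep;                    // move to the next scan line
            p += twoDyMinusDx;
        }
        pixels.push_back({x, y});
    }
    return pixels;
}
```

Run on the endpoints of Figure 8, (20, 10) and (30, 18), the routine produces the eleven pixels of that plot, ending exactly on the second endpoint.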
Displaying Polylines
Implementation of a polyline procedure is accomplished by invoking a line-
drawing routine n − 1 times to display the lines connecting the n endpoints. Each
successive call passes the coordinate pair needed to plot the next line section,
where the first endpoint of each coordinate pair is the last endpoint of the previ-
ous section. Once the color values for pixel positions along the first line segment
have been set in the frame buffer, we process subsequent line segments starting
with the next pixel position following the first endpoint for that segment. In this
way, we can avoid setting the color of some endpoints twice. We discuss methods
for avoiding the overlap of displayed objects in more detail in Section 8.
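The bookkeeping described above can be sketched as follows. The helper linePixels and the function polyline are our own illustrative names; the line plotter is a minimal Bresenham-style routine restricted to slopes 0 ≤ m ≤ 1 with x0 < xEnd, which is enough to show how each section after the first skips its shared starting endpoint.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

using Pixel = std::pair<int, int>;

// Minimal decision-parameter line plotter for 0 <= m <= 1, x0 < xEnd.
static std::vector<Pixel> linePixels(int x0, int y0, int xEnd, int yEnd)
{
    std::vector<Pixel> out;
    int dx = xEnd - x0, dy = yEnd - y0, p = 2 * dy - dx, y = y0;
    out.push_back({x0, y0});
    for (int x = x0 + 1; x <= xEnd; ++x) {
        if (p < 0) p += 2 * dy;
        else { ++y; p += 2 * (dy - dx); }
        out.push_back({x, y});
    }
    return out;
}

// Display n endpoints as n - 1 connected line sections, skipping the
// first pixel of every section after the first so that shared endpoints
// are not set twice in the frame buffer.
std::vector<Pixel> polyline(const std::vector<Pixel>& pts)
{
    std::vector<Pixel> out;
    for (std::size_t k = 0; k + 1 < pts.size(); ++k) {
        std::vector<Pixel> seg = linePixels(pts[k].first, pts[k].second,
                                            pts[k + 1].first, pts[k + 1].second);
        // The previous section has already plotted seg.front().
        out.insert(out.end(), seg.begin() + (k == 0 ? 0 : 1), seg.end());
    }
    return out;
}
```

For the three endpoints (0, 0), (4, 4), (8, 4), the two sections contribute 5 and 4 pixels respectively, with the shared endpoint (4, 4) plotted only once.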
among the various processors available. One approach to the partitioning prob-
lem is to adapt an existing sequential algorithm to take advantage of multiple
processors. Alternatively, we can look for other ways to set up the processing so
that pixel positions can be calculated efficiently in parallel. An important consid-
eration in devising a parallel algorithm is to balance the processing load among
the available processors.
Given np processors, we can set up a parallel Bresenham line algorithm by
subdividing the line path into np partitions and simultaneously generating line
segments in each of the subintervals. For a line with slope 0 < m < 1.0 and left
endpoint coordinate position (x0 , y0 ), we partition the line along the positive x
direction. The distance between beginning x positions of adjacent partitions can
be calculated as

Δxp = (Δx + np − 1) / np     (17)

where Δx is the width of the line, and the value for partition width Δxp is com-
puted using integer division. Numbering the partitions, and the processors, as 0,
1, 2, up to np − 1, we calculate the starting x coordinate for the kth partition as

xk = x0 + kΔxp     (18)
The change Δyp in the y direction over each partition interval is obtained from
the line slope m and the partition width Δxp:

Δyp = mΔxp     (19)

so the starting y coordinate for the kth partition is

yk = y0 + round(kΔyp)     (20)

The initial decision parameter for Bresenham's algorithm at the start of the kth
subinterval is obtained from Equation 14:

pk = (kΔxp)(2Δy) − round(kΔyp)(2Δx) + 2Δy − Δx     (21)
Each processor then calculates pixel positions over its assigned subinterval
using the preceding starting decision parameter value and the starting coordinates
(xk , yk ). Floating-point calculations can be reduced to integer arithmetic in the
computations for starting values yk and pk by substituting m = Δy/Δx and
rearranging terms. We can extend the parallel Bresenham algorithm to a line
with slope greater than 1.0 by partitioning the line in the y direction and calcu-
lating beginning x values for the partitions. For negative slopes, we increment
coordinate values in one direction and decrement in the other.
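The partition start values of Equations 17, 18, and 20, together with the decision parameter obtained from Equation 14, can be sketched as below. The function and struct names are our own; for clarity the rounding uses floating point rather than the pure-integer rearrangement mentioned above.

```cpp
#include <cmath>
#include <vector>

struct PartitionStart { int x, y; long long p; };

// Starting pixel and decision parameter for each of np partitions of a
// Bresenham line with slope 0 < m < 1 and left endpoint (x0, y0).
std::vector<PartitionStart> bresenhamPartitions(int x0, int y0,
                                                int xEnd, int yEnd, int np)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int dxp = (dx + np - 1) / np;          // Eq. 17, integer division
    double dyp = double(dy) / dx * dxp;    // Eq. 19: vertical change per partition

    std::vector<PartitionStart> starts;
    for (int k = 0; k < np; ++k) {
        int xk = x0 + k * dxp;                        // Eq. 18
        int yk = y0 + int(std::lround(k * dyp));      // Eq. 20
        // Decision parameter from Eq. 14 at the partition start:
        long long pk = (long long)(k * dxp) * (2 * dy)
                     - (long long)std::lround(k * dyp) * (2 * dx)
                     + 2 * dy - dx;
        starts.push_back({xk, yk, pk});
    }
    return starts;
}
```

For the line of Figure 8 split over two processors, partition 0 starts at (20, 10) with p = 6, and partition 1 starts at (25, 14) with the same decision parameter a sequential scan would carry at that pixel.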
Another way to set up parallel algorithms on raster systems is to assign each
processor to a particular group of screen pixels. With a sufficient number of pro-
cessors, we can assign each processor to one pixel within some screen region. This
Moving across a scan line, we can calculate the frame-buffer address for the pixel
at (x + 1, y) as the following offset from the address for position (x, y):
addr(x + 1, y) = addr(x, y) + 1 (24)
Stepping diagonally up to the next scan line from (x, y), we get to the frame-buffer
address of (x + 1, y + 1) with the calculation
addr(x + 1, y + 1) = addr(x, y) + xmax + 2 (25)
140
Implementation Algorithms for Graphics Primitives and Attributes
where the constant xmax + 2 is precomputed once for all line segments. Similar
incremental calculations can be obtained from Equation 23 for unit steps in the
negative x and y screen directions. Each of the address calculations involves only
a single integer addition.
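Equation 23 itself falls outside this excerpt; assuming the usual linear layout addr(x, y) = addr(0, 0) + y(xmax + 1) + x for a screen of xmax + 1 columns, the two address increments above can be checked directly:

```cpp
// Linear frame-buffer addressing for a screen with columns 0..xmax:
// pixel (x, y) lies at offset y * (xmax + 1) + x from the base address
// (the form of Equation 23 assumed here).
inline int addr(int x, int y, int xmax, int base = 0)
{
    return base + y * (xmax + 1) + x;
}
```

Moving one pixel right adds 1 to the address, and stepping diagonally up adds xmax + 2, as in Equations 24 and 25.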
Methods for implementing these procedures depend on the capabilities of
a particular system and the design requirements of the software package. With
systems that can display a range of intensity values for each pixel, frame-buffer
address calculations include pixel width (number of bits), as well as the pixel
screen location.
4 Circle-Generating Algorithms
Because the circle is a frequently used component in pictures and graphs, a proce-
dure for generating either full circles or circular arcs is included in many graphics
packages. In addition, sometimes a general function is available in a graphics
library for displaying various kinds of curves, including circles and ellipses.

FIGURE 11 Circle with center coordinates (xc, yc) and radius r.

Properties of Circles
A circle (Figure 11) is defined as the set of points that are all at a given distance r
from a center position (xc , yc ). For any circle point (x, y), this distance relationship
is expressed by the Pythagorean theorem in Cartesian coordinates as
(x − xc)² + (y − yc)² = r²     (26)

We could use this equation to calculate the position of points on a circle circumfer-
ence by stepping along the x axis in unit steps from xc − r to xc + r and calculating
the corresponding y values at each position as

y = yc ± √(r² − (xc − x)²)     (27)
However, this is not the best method for generating a circle. One problem with
this approach is that it involves considerable computation at each step. Moreover,
the spacing between plotted pixel positions is not uniform, as demonstrated in
Figure 12. We could adjust the spacing by interchanging x and y (stepping
through y values and calculating x values) whenever the absolute value of the
slope of the circle is greater than 1; but this simply increases the computation and
processing required by the algorithm.

FIGURE 12 Upper half of a circle plotted with Equation 27 and with
(xc, yc) = (0, 0).

Another way to eliminate the unequal spacing shown in Figure 12 is to
calculate points along the circular boundary using polar coordinates r and θ
(Figure 11). Expressing the circle equation in parametric polar form yields the
pair of equations
x = xc + r cos θ
(28)
y = yc + r sin θ
When a display is generated with these equations using a fixed angular step size,
a circle is plotted with equally spaced points along the circumference. To reduce
calculations, we can use a large angular separation between points along the cir-
cumference and connect the points with straight-line segments to approximate
the circular path. For a more continuous boundary on a raster display, we can
set the angular step size at 1/r. This plots pixel positions that are approximately one
unit apart. Although polar coordinates provide equal point spacing, the trigono-
metric calculations are still time-consuming.
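The polar method with angular step 1/r can be sketched as follows (circlePolar is our own illustrative name; the points are kept in floating point rather than rounded to pixels):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Plot approximately unit-spaced points on a circle using the polar
// form of Equation 28 with an angular step of 1/r radians.
std::vector<std::pair<double, double>> circlePolar(double xc, double yc, double r)
{
    const double twoPi = 2.0 * 3.14159265358979323846;
    std::vector<std::pair<double, double>> pts;
    double dtheta = 1.0 / r;               // step size chosen as 1/r
    for (double theta = 0.0; theta < twoPi; theta += dtheta)
        pts.push_back({xc + r * std::cos(theta), yc + r * std::sin(theta)});
    return pts;
}
```

Every generated point lies exactly on the circle (up to floating-point error), and consecutive points are roughly one unit apart, which is why the step 1/r gives a near-continuous raster boundary.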
For any of the previous circle-generating methods, we can reduce computa-
tions by considering the symmetry of circles. The shape of the circle is similar in
each quadrant. Therefore, if we determine the curve positions in the first quad-
rant, we can generate the circle section in the second quadrant of the xy plane
by noting that the two circle sections are symmetric with respect to the y axis.
Also, circle sections in the third and fourth quadrants can be obtained from sec-
tions in the first and second quadrants by considering symmetry about the x axis.
We can take this one step further and note that there is also symmetry between
octants. Circle sections in adjacent octants within one quadrant are symmetric
with respect to the 45° line dividing the two octants. These symmetry conditions
are illustrated in Figure 13, where a point at position (x, y) on a one-eighth
circle sector is mapped into the seven circle points in the other octants of the
xy plane. Taking advantage of the circle symmetry in this way, we can generate
all pixel positions around a circle by calculating only the points within the sec-
tor from x = 0 to x = y. The slope of the curve in this octant has a magnitude
less than or equal to 1.0. At x = 0, the circle slope is 0, and at x = y, the slope
is −1.0.

FIGURE 13 Symmetry of a circle. Calculation of a circle point (x, y) in one
octant yields the circle points shown for the other seven octants.
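The eight-way mapping of Figure 13 can be sketched as a small helper (the name circlePlotPoints is a common choice for this routine, and returning the points rather than plotting them is our own adaptation):

```cpp
#include <utility>
#include <vector>

// Map a point (x, y) computed in the sector from x = 0 to x = y into
// the corresponding points in all eight octants of a circle centered
// at (xc, yc), using the symmetry of Figure 13.
std::vector<std::pair<int, int>> circlePlotPoints(int xc, int yc, int x, int y)
{
    return {
        {xc + x, yc + y}, {xc - x, yc + y},   // reflect across the y axis
        {xc + x, yc - y}, {xc - x, yc - y},   // reflect across the x axis
        {xc + y, yc + x}, {xc - y, yc + x},   // reflect across the 45-degree line
        {xc + y, yc - x}, {xc - y, yc - x},
    };
}
```

For instance, the single first-octant point (3, 4) on a radius-5 circle about the origin yields all eight points of the form (±3, ±4) and (±4, ±3).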
Determining pixel positions along a circle circumference using symmetry and
either Equation 26 or Equation 28 still requires a good deal of computation.
The Cartesian equation 26 involves multiplications and square-root calcula-
tions, while the parametric equations contain multiplications and trigonometric
calculations. More efficient circle algorithms are based on incremental calculation
of decision parameters, as in the Bresenham line algorithm, which involves only
simple integer operations.
Bresenham’s line algorithm for raster displays is adapted to circle generation
by setting up decision parameters for finding the closest pixel to the circumference
at each sampling step. The circle equation 26, however, is nonlinear, so that
square-root evaluations would be required to compute pixel distances from a
circular path. Bresenham’s circle algorithm avoids these square-root calculations
by comparing the squares of the pixel separation distances.
However, it is possible to perform a direct distance comparison without a
squaring operation. The basic idea in this approach is to test the halfway position
between two pixels to determine if this midpoint is inside or outside the circle
boundary. This method is applied more easily to other conics; and for an integer
circle radius, the midpoint approach generates the same pixel positions as the
Bresenham circle algorithm. For a straight-line segment, the midpoint method is
equivalent to the Bresenham line algorithm. Also, the error involved in locating
pixel positions along any conic section using the midpoint test is limited to half
the pixel separation.
Increments for obtaining pk+1 are either 2xk+1 + 1 (if pk is negative) or 2xk+1 +
1−2yk+1 . Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as
2xk+1 = 2xk + 2
2yk+1 = 2yk − 2
At the start position (0, r ), these two terms have the values 0 and 2r , respectively.
Each successive value for the 2xk+1 term is obtained by adding 2 to the previous
value, and each successive value for the 2yk+1 term is obtained by subtracting 2
from the previous value.
The initial decision parameter is obtained by evaluating the circle function at
the start position (x0 , y0 ) = (0, r):

p0 = fcirc(1, r − 1/2)
   = 1 + (r − 1/2)² − r²

or

p0 = 5/4 − r     (33)
If the radius r is specified as an integer, we can simply round p0 to
p0 = 1 − r (for r an integer)
because all increments are integers.
As in Bresenham’s line algorithm, the midpoint method calculates pixel posi-
tions along the circumference of a circle using integer additions and subtractions,
assuming that the circle parameters are specified in integer screen coordinates.
We can summarize the steps in the midpoint circle algorithm as follows:
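As a sketch of those steps, the following routine computes the pixels of one octant, starting at (0, r) with the rounded initial decision parameter p0 = 1 − r; the function name and the choice to return the pixel list are ours, and the full circle would be obtained by passing each point through the octant-symmetry mapping.

```cpp
#include <utility>
#include <vector>

// Midpoint circle algorithm for a circle of integer radius r centered
// at the origin: generates the octant from (0, r) until x >= y using
// only integer additions and subtractions.
std::vector<std::pair<int, int>> midpointCircleOctant(int r)
{
    int x = 0, y = r;
    int p = 1 - r;                  // rounded initial parameter, p0 = 1 - r
    std::vector<std::pair<int, int>> pts;
    pts.push_back({x, y});
    while (x < y) {
        ++x;
        if (p < 0)
            p += 2 * x + 1;         // midpoint inside: keep the same y
        else {
            --y;
            p += 2 * x + 1 - 2 * y; // midpoint outside: step down to y - 1
        }
        pts.push_back({x, y});
    }
    return pts;
}
```

For r = 10 this produces (0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), (7, 7), matching the first-octant positions plotted in Figure 15.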
FIGURE 15 Pixel positions (solid circles) along a circle path centered on the
origin and with radius r = 10, as calculated by the midpoint circle algorithm.
Open (“hollow”) circles show the symmetry positions in the first quadrant.
#include <GL/glut.h>
class screenPt
{
private:
GLint x, y;
public:
/* Default Constructor: initializes coordinate position to (0, 0). */
screenPt ( ) {
x = y = 0;
}
void setCoords (GLint xCoordValue, GLint yCoordValue) {
x = xCoordValue;
y = yCoordValue;
}
5 Ellipse-Generating Algorithms
Loosely stated, an ellipse is an elongated circle. We can also describe an ellipse
as a modified circle whose radius varies from a maximum value in one direc-
tion to a minimum value in the perpendicular direction. The straight-line seg-
ments through the interior of the ellipse in these two perpendicular directions are
referred to as the major and minor axes of the ellipse.

FIGURE 16 Ellipse generated about foci F1 and F2.

Properties of Ellipses
A precise definition of an ellipse can be given in terms of the distances from any
point on the ellipse to two fixed positions, called the foci of the ellipse. The sum
of these two distances is the same value for all points on the ellipse (Figure 16).
If the distances to the two focus positions from any point P = (x, y) on the ellipse
are labeled d1 and d2 , then the general equation of an ellipse can be stated as

d1 + d2 = constant     (34)

Expressing distances d1 and d2 in terms of the focal coordinates F1 = (x1 , y1 )
and F2 = (x2 , y2 ), we have

√((x − x1)² + (y − y1)²) + √((x − x2)² + (y − y2)²) = constant     (35)

By squaring this equation, isolating the remaining radical, and squaring again,
we can rewrite the general ellipse equation in the form

Ax² + By² + Cxy + Dx + Ey + F = 0     (36)
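The defining property (34) is easy to check numerically. As an illustrative example (the function name and values are ours): for an axis-aligned ellipse centered at the origin with semi-major axis rx and semi-minor axis ry, the foci lie at (±c, 0) with c = √(rx² − ry²), and the sum of focal distances equals 2rx at every point of the ellipse.

```cpp
#include <cmath>

// Sum of distances from (x, y) to the two foci of an axis-aligned
// ellipse centered at the origin with semi-major axis rx and
// semi-minor axis ry; for points on the ellipse this equals 2 * rx.
double focalDistanceSum(double x, double y, double rx, double ry)
{
    double c = std::sqrt(rx * rx - ry * ry);  // distance of each focus from center
    double d1 = std::hypot(x - c, y);
    double d2 = std::hypot(x + c, y);
    return d1 + d2;
}
```

With rx = 5 and ry = 3 the foci sit at (±4, 0), and the sum is 10 at (0, 3), at (5, 0), and at the intermediate ellipse point (3, 2.4).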