
MODULE 5

CHAPTER 7
FROM VERTICES TO FRAGMENTS

We now turn to the next steps in the pipeline: clipping, rasterization, and
hidden-surface removal. Although we have yet to consider some major parts
of OpenGL that are available to the application programmer, including discrete
primitives, texture mapping, and curves and surfaces, there are several reasons for
considering these topics at this point. First, you may be wondering how your pro-
grams are processed by the system that you are using: how lines are drawn on the
screen, how polygons are filled, and what happens to primitives that lie outside the
viewing volumes defined in your program. Second, our contention is that if we are
to use a graphics system efficiently, we need to have a deeper understanding of the
implementation process: which steps are easy, and which ones tax our hardware and
software. Third, our discussion of implementation will open the door to new capa-
bilities that are supported by the latest hardware.
Learning implementation involves studying algorithms. As when we study other
algorithms, we must be careful to consider such issues as theoretical versus practical
performance, hardware versus software implementations, and the specific character-
istics of an application. Although we can test whether an OpenGL implementation
works correctly, in the sense that it produces the correct pixels on the screen, there
are many choices for the algorithms employed. We focus on the basic operations that
are necessary to implement a standard API and are required, whether the rendering
is done by a pipeline architecture or by some other method, such as a ray tracer. Con-
sequently, we present a variety of the basic algorithms for each of the principal tasks
in an implementation.
In this chapter, we are concerned with the basic algorithms that are used to
implement the rendering pipeline used by OpenGL. We will focus on three issues:
clipping, rasterization, and hidden-surface removal. Clipping involves eliminating
objects that lie outside the viewing volume and thus cannot be visible in the image.
Rasterization produces fragments from the remaining objects. These fragments can
contribute to the final image. Hidden-surface removal determines which fragments
correspond to objects that are visible, namely those that are in the view volume and
are not blocked from view by other objects closer to the camera.

FIGURE 7.1 High-level view of the graphics process: vertices from the application program enter the graphics system, and pixels are written to the frame buffer.

7.1 BASIC IMPLEMENTATION STRATEGIES


Let’s begin with a high-level view of the implementation process. In computer graph-
ics, we start with an application program, and we end with an image. We can again
consider this process as a black box (Figure 7.1) whose inputs are the vertices and
states defined in the program—geometric objects, attributes, camera specifications—
and whose output is an array of colored pixels in the frame buffer.
Within the black box, we must do many tasks, including transformations, clip-
ping, shading, hidden-surface removal, and rasterization of the primitives that can
appear on the display. These tasks can be organized in a variety of ways, but regard-
less of the strategy that we adopt, we must always do two things: We must pass every
geometric object through the system, and we must assign a color to every pixel in the
color buffer that is displayed.
Suppose that we think of what goes into the black box in terms of a single
program that carries out the entire process. This program takes as input sets of
vertices specifying geometric objects and produces as output pixels in the frame
buffer. Because this program must assign a value to every pixel and must process
every geometric primitive (and every light source), we expect this program to contain
at least two loops that iterate over these basic variables.
If we wish to write such a program, then we must immediately address the
following question: Which variable controls the outer loop? The answer we choose
determines the flow of the entire implementation process. There are two fundamental
answers. The two strategies that follow are often called the image-oriented and the
object-oriented approaches.
In the object-oriented approach, the outer loop is over the objects. We can think
of the program as controlled by a loop of the following form:

for(each_object) render(object);

A pipeline renderer fits this description. Vertices are defined by the program and
flow through a sequence of modules that transforms them, colors them, and deter-
mines whether they are visible. A polygon might flow through the steps illustrated in
Figure 7.2. Note that after a polygon passes through geometric processing, the ras-
terization of this polygon can potentially affect any pixel in the frame buffer. Most
implementations that follow this approach are based on construction of a render-
ing pipeline that contains hardware or software modules for each of the tasks. Data
(vertices) flow forward through the system.
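The loop above can be fleshed out into a skeleton. The following is a minimal sketch only; Primitive, transformVertices, clipPrimitive, and rasterize are invented placeholders for the pipeline stages, not part of OpenGL or any real API:

struct Primitive { /* vertex data elided */ };

/* Placeholder stages; invented names, not a real library interface. */
Primitive transformVertices (const Primitive &p) { return p; }
bool      clipPrimitive     (Primitive *p)       { return true; }
void      rasterize         (const Primitive &p) { /* writes fragments */ }

void renderObjectOrder (const Primitive scene[], int numObjects)
{
    /* One pass over the objects; each primitive flows through the
       whole pipeline independently and may affect any pixel. */
    for (int i = 0; i < numObjects; i++) {
        Primitive p = transformVertices (scene[i]);
        if (clipPrimitive (&p))
            rasterize (p);
    }
}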
FIGURE 7.2 Object-oriented approach: a primitive passes through clipping, projection, and rasterizing stages on its way to the frame buffer.

In the past, the major limitations of the object-oriented approach were the large amount of memory required and the high cost of processing each object independently. Any geometric primitive that emerges from the geometric processing poten-
tially can affect any set of pixels in the frame buffer; thus, the entire color buffer—and
various other buffers, such as the depth buffer used for hidden-surface removal—
must be of the size of the display and must be available at all times. Before memory
became inexpensive and dense, this requirement was considered to be a serious prob-
lem. Now various pipelined geometric processors are available that can process tens of
millions of polygons per second. In fact, precisely because we are doing the same op-
erations on every primitive, the hardware to build an object-based system is fast and
relatively inexpensive, with many of the functions implemented with special-purpose
chips.
Today, the main limitation of object-oriented implementations is that they can-
not handle most global calculations. Because each geometric primitive is processed
independently—and in an arbitrary order—complex shading effects that involve
multiple geometric objects, such as reflections, cannot be handled, except by approx-
imate methods. The major exception is hidden-surface removal, where the z-buffer
is used to store global information.
Image-oriented approaches loop over pixels, or rows of pixels called scanlines,
that constitute the frame buffer. In pseudocode, the outer loop of such a program is
of the following form:

for(each_pixel) assign_a_color(pixel);

For each pixel, we work backward, determining which geometric primitives can
contribute to its color. The advantages of this approach are that we need only limited
display memory at any time and that we can hope to generate pixels at the rate and
in the order required to refresh the display. Because the results of most calculations
do not differ greatly from pixel to pixel (or scanline to scanline), we can use this
coherence in our algorithms by developing incremental forms for many of the steps
in the implementation. The main disadvantage of this approach is that unless we first
build a data structure from the geometric data, we do not know which primitives
affect which pixels. Such a data structure can be complex and may imply that all
the geometric data must be available at all times during the rendering process. For
problems with very large databases, even having a good data representation may not
avoid memory problems. However, because image-oriented approaches have access

to all objects for each pixel, they are well suited to handle global effects such as
shadows and reflections. Ray tracing (Chapter 13) is an example of the image-based
approach.
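For contrast, a corresponding image-order skeleton might look as follows. This is a sketch in the spirit of a ray caster, not any particular system; firstVisibleObject and shade are invented placeholders for the per-pixel visibility search and shading step:

struct Color  { float r, g, b; };
struct Object { /* geometry elided */ };

/* Invented placeholders: search every object for the one visible at (x, y). */
const Object *firstVisibleObject (int x, int y)  { return 0; }
Color         shade (const Object *obj, int x, int y) { return Color{1, 1, 1}; }

void renderImageOrder (Color *frame, int width, int height, Color background)
{
    /* One pass over the pixels; every object must be available for
       every pixel, which is what makes global effects tractable here. */
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            const Object *obj = firstVisibleObject (x, y);
            frame[y * width + x] = obj ? shade (obj, x, y) : background;
        }
}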
Because our primary interest is in interactive applications, we lean toward the
object-based approach, although we look at examples of algorithms suited for both
approaches. In addition, as hardware improves in both speed and lowered cost, ap-
proaches that are not competitive today may well be practical in the near future.

7.2 FOUR MAJOR TASKS


We start by reviewing the blocks in the pipeline, focusing on those blocks that we
have yet to discuss in detail. There are four major tasks that any graphics system
must perform to render a geometric entity, such as a three-dimensional polygon, as
that entity passes from definition in a user program to possible display on an output
device:
Modeling
Geometry processing
Rasterization
Fragment processing
Figure 7.3 shows how these tasks might be organized in a pipeline implementation.
Regardless of the approach, all the tasks must be carried out.

7.2.1 Modeling
The usual results of the modeling process are sets of vertices that specify a group of
geometric objects supported by the rest of the system. We have seen a few examples
that required some modeling by the application, such as the approximation of spheres
in Chapter 6. In Chapters 10 and 11, we explore other modeling techniques.
We can look at the modeler as a black box that produces geometric objects and is
usually an application program that may be interactive. The modeler might perform
other tasks in addition to producing geometry. Consider, for example, clipping: the
process of eliminating parts of objects that cannot appear on the display because they
lie outside the viewing volume. A user can generate geometric objects in her program,
and she can hope that the rest of the system can process these objects at the rate
at which they are produced; or the modeler can attempt to ease the burden on the
rest of the system by minimizing the number of objects that it passes on.

FIGURE 7.3 Implementation tasks: modeling, geometry processing, rasterization, and fragment processing, ending with display of the frame buffer.

This latter approach often means that the modeler may do some of the same jobs as the rest of the system, albeit with different algorithms. In the case of clipping, the modeler,
knowing more about the specifics of the application, can often use a good heuristic to
eliminate many, if not most, primitives before they are sent on through the standard
viewing process.

7.2.2 Geometry Processing


Geometry processing works with vertices. The goals of the geometry processor are to
determine which geometric objects can appear on the display and to assign shades
or colors to the vertices of these objects. Four processes are required: projection,
primitive assembly, clipping, and shading.
Usually, the first step in geometry processing is to change representations from object coordinates to camera or eye coordinates; the combined modeling and viewing transformations that carry out this conversion determine the model-view matrix. As we saw in Chapter 5, the conversion to eye coordinates is only the first part of the viewing process. The second step is to transform vertices using the projection transformation to a normalized view volume in which objects that might be visible are contained in a cube centered at the origin. Vertices are now represented in clip coordinates. Not only does this normalization convert both perspective and parallel projections to a simple orthographic projection in a simple volume, but it also simplifies the clipping process, as we will see in Section 7.7.
Geometric objects are transformed by a sequence of transformations that may
reshape and move them (modeling) or may change their representations (viewing).
Eventually, only those primitives that fit within a specified volume, the view volume,
can appear on the display after rasterization. We cannot, however, simply allow all
objects to be rasterized, hoping that the hardware will take care of primitives that lie
wholly or partially outside the view volume. The implementation must carry out this
task before rasterization. One reason is that rasterizing objects that lie outside the
view volume is inefficient because such objects cannot be visible. Another reason is
that when vertices reach the rasterizer, they can no longer be processed individually
and first must be assembled into primitives. Primitives that lie partially in the viewing
volume can generate new primitives with new vertices for which we must carry out
shading calculations. Before clipping can take place, vertices must be grouped into
objects, a process known as primitive assembly.
Note that even though an object lies inside the view volume, it will not be vis-
ible if it is obscured by other objects. Algorithms for hidden-surface removal (or
visible-surface determination) are based on the three-dimensional spatial relation-
ships among objects. This step is normally carried out as part of fragment processing.
Colors must be assigned to each vertex. These colors are either determined
by the current color or, if lighting is enabled, are computed for each vertex using
the modified Phong model. Per-vertex lighting calculations are part of geometric
processing.
After clipping takes place, the remaining vertices are still in four-dimensional
homogeneous coordinates. Perspective division converts them to three-dimensional
representation in normalized device coordinates.

Collectively, these operations constitute what has been called front-end process-
ing. All involve three-dimensional calculations, and all require floating-point arith-
metic. All generate similar hardware and software requirements. All are carried out
on a vertex-by-vertex basis. We will discuss clipping, the only geometric step that we
have yet to discuss, in Section 7.3.

7.2.3 Rasterization
Even after geometric processing has taken place, we still need to retain depth infor-
mation for hidden-surface removal. However, only the x, y values of the vertices are
needed to determine which pixels in the frame buffer can be affected by a primi-
tive. For example, after perspective division, a line segment that was defined origi-
nally in three dimensions by two vertices becomes a line segment defined by a pair
of three-dimensional vertices in normalized device coordinates. To generate a set of
fragments that give the locations of the pixels in the frame buffer corresponding to
these vertices, we only need their x, y components or, equivalently, the results of
the orthogonal projection of these vertices. We determine these fragments through
a process called rasterization or scan conversion. For line segments, rasterization
determines which fragments should be used to approximate a line segment between
the projected vertices. For polygons, rasterization determines which pixels lie inside
the two-dimensional polygon determined by the projected vertices.
The colors that we assign to these fragments can be determined by the color at-
tributes or obtained by interpolating the shades at the vertices that are computed, as
in Chapter 6. Objects more complex than line segments and polygons are usually ap-
proximated by multiple line segments and polygons, and thus most graphics systems
do not have special rasterization algorithms for them. We will see exceptions to this
rule for some special curves and surfaces in Chapter 12.
The rasterizer starts with vertices in normalized device coordinates but outputs
fragments whose locations are in units of the display—window coordinates. As we
saw in Chapters 2 and 5, the projection of the clipping volume must appear in the
assigned viewport. In OpenGL, this final transformation is done after projection
and is two-dimensional. The preceding transformations have normalized the view
volume such that its sides are of length 2 and line up with the sides of the viewport
(Figure 7.4), so this transformation is simply
xv = xvmin + ((x + 1.0)/2.0)(xvmax − xvmin),
yv = yvmin + ((y + 1.0)/2.0)(yvmax − yvmin),
zv = zvmin + ((z + 1.0)/2.0)(zvmax − zvmin).
Recall that for perspective viewing, these z-values have been scaled nonlinearly by perspective normalization. However, they retain their original depth order, so they can be used for hidden-surface removal. We will use the term screen coordinates to refer to the two-dimensional system that is the same as window coordinates but lacks the depth coordinate.

FIGURE 7.4 Viewport transformation: a point (x, y) in the normalized volume, bounded by (xmin, ymin) and (xmax, ymax), maps to (xv, yv) in the viewport bounded by (xvmin, yvmin) and (xvmax, yvmax).
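Written out directly, the three equations above become a small routine. This is a sketch only; the function and parameter names are ours, and the viewport limits are passed explicitly rather than taken from OpenGL state:

/* Sketch of the viewport transformation; inputs are normalized
   device coordinates in [-1, 1]. */
void viewportTransform (float x, float y, float z,
                        float xvMin, float xvMax,
                        float yvMin, float yvMax,
                        float zvMin, float zvMax,
                        float *xv, float *yv, float *zv)
{
    *xv = xvMin + 0.5f * (x + 1.0f) * (xvMax - xvMin);
    *yv = yvMin + 0.5f * (y + 1.0f) * (yvMax - yvMin);
    *zv = zvMin + 0.5f * (z + 1.0f) * (zvMax - zvMin);  /* kept for depth */
}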

7.2.4 Fragment Processing


In the simplest situations, fragments are assigned colors by the rasterizer and these
colors are the ones that are placed in the frame buffer at the locations corresponding
to the fragments’ locations. However, there are many other possibilities.
The separate pixel pipeline (Chapter 8), supported by architectures such as
OpenGL, merges with the results of the geometric pipeline at the rasterization stage.
Consider what happens when a shaded and texture-mapped polygon is processed.
Vertex lighting is computed as part of the geometric processing. The texture values are
not needed until after rasterization when the renderer has generated fragments that
correspond to locations inside a polygon. At this point, interpolation of per-vertex
colors and texture coordinates takes place, and the texture parameters determine how
to combine texture colors and fragment colors to determine final colors in the color
buffer.
As we have noted, objects that are in the view volume will not be visible if they
are blocked by any opaque objects closer to the viewer. The required hidden-surface
removal process is typically carried out on a fragment-by-fragment basis.
Until now, we have assumed that all objects are opaque and thus an object located
behind another object is not visible. We can also allow objects to be translucent
so that some light passes through them. In this case, fragment colors may have to
be blended with the colors of pixels already in the color buffer. We consider this
possibility in Chapter 8.
In most displays, the process of taking the image from the frame buffer and
displaying it on an output device happens automatically and is not of concern to
the application program. However, there are numerous problems with the quality of
display, such as the jaggedness associated with images on raster displays. In Chapter 8,
we introduce algorithms for reducing this jaggedness, or aliasing, and we discuss
problems with color reproduction on displays.

7.3 CLIPPING
We can now turn to clipping, the process of determining which primitives, or parts of
primitives, fit within the clipping or view volume defined by the application program.
Clipping is done before the perspective division that is necessary if the w component
of a clipped vertex is not equal to 1. The portions of all primitives that can possibly
be displayed—we have yet to apply hidden-surface removal—lie within the cube as
follows:
−w ≤ x ≤ w,
−w ≤ y ≤ w,
−w ≤ z ≤ w.

This coordinate system is called clip coordinates, and it depends on neither the orig-
inal application units nor the particulars of the display device, although the infor-
mation to produce the correct image is retained in this coordinate system. Note also
that projection has been carried out only partially. We still must do the perspective
division and the final orthographic projection. After perspective division, we have a
three-dimensional representation in normalized device coordinates. By carrying out
clipping in clip coordinates, we avoid doing the perspective division for primitives
that lie outside the clipping volume.
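As a small illustration, the containment test for a single vertex follows directly from these inequalities. This sketch (our own name) assumes w > 0, as it is for points in front of a perspective camera:

/* Trivial containment test for one homogeneous vertex (x, y, z, w)
   in clip coordinates, before perspective division; assumes w > 0. */
bool insideClipVolume (float x, float y, float z, float w)
{
    return -w <= x && x <= w &&
           -w <= y && y <= w &&
           -w <= z && z <= w;
}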
We will concentrate on clipping of line segments and polygons because they
are the most common primitives to pass down the pipeline. Although the OpenGL
pipeline does clipping on three-dimensional objects, there are other systems in which
the objects are first projected into the x, y plane. Fortunately, many of the most
efficient algorithms are almost identical in two and three dimensions and we will
focus on these algorithms.

7.4 LINE-SEGMENT CLIPPING


A clipper decides which primitives, or parts of primitives, can possibly appear on
the display and are passed on to the rasterizer. Primitives that fit within the specified
view volume pass through the clipper, or are accepted. Primitives that cannot appear
on the display are eliminated, or rejected or culled. Primitives that are only partially
within the view volume must be clipped such that any part lying outside the volume
is removed.
Clipping can occur at one or more places in the viewing pipeline. The modeler
may clip to limit the primitives that the hardware must handle. The primitives may
be clipped after they have been projected from three- to two-dimensional objects.
In OpenGL, primitives are clipped against a three-dimensional view volume before
rasterization. We will develop a sequence of clippers. For both pedagogic and his-
toric reasons, we start with two two-dimensional line-segment clippers. Both extend
directly to three dimensions and to clipping of polygons.

FIGURE 7.5 Two-dimensional clipping: segments AB, CD, EF, and GH relative to the clip window.

7.4.1 Cohen-Sutherland Clipping


The two-dimensional clipping problem for line segments is shown in Figure 7.5. We
can assume for now that this problem arises after three-dimensional line segments
have been projected onto the projection plane, and that the window is part of the
projection plane that is mapped to the viewport on the display. All values are specified
as real numbers. We can see that the entire line segment AB appears on the display,
whereas none of CD appears. EF and GH have to be shortened before being displayed.
Although a line segment is completely determined by its endpoints, GH shows that,
even if both endpoints lie outside the clipping window, part of the line segment may
still appear on the display.
We could compute the intersections of the lines of which the segments are parts
with the sides of the window, and thus could determine the necessary information
for clipping. However, we want to avoid intersection calculations, if possible, because
each intersection requires a floating-point division. The Cohen-Sutherland algorithm
was the first to seek to replace most of the expensive floating-point multiplications
and divisions with a combination of floating-point subtractions and bit operations.

FIGURE 7.6 Breaking up of space and outcodes: reading the nine regions left to right, the top row (y > ymax) has outcodes 1001, 1000, 1010; the middle row has 0001, 0000, 0010; and the bottom row (y < ymin) has 0101, 0100, 0110, with the vertical boundaries at x = xmin and x = xmax.

The algorithm starts by extending the sides of the window to infinity, thus breaking up space into the nine regions shown in Figure 7.6. Each region can be assigned a unique 4-bit binary number, or outcode, b0b1b2b3, as follows. Suppose that (x, y) is a point in the region; then,

b0 = 1 if y > ymax, and b0 = 0 otherwise.

Likewise, b1 is 1 if y < ymin , and b2 and b3 are determined by the relationship between
x and the left and right sides of the window. The resulting codes are indicated in
Figure 7.7. For each endpoint of a line segment, we first compute the endpoint’s
outcode, a step that can require eight floating-point subtractions per line segment.
Consider a line segment whose outcodes are given by o1 = outcode(x1, y1) and
o2 = outcode(x2 , y2). We can now reason on the basis of these outcodes. There are
four cases:

1. (o1 = o2 = 0.) Both endpoints are inside the clipping window, as is true for
segment AB in Figure 7.7. The entire line segment is inside, and the segment
can be sent on to be rasterized.

FIGURE 7.7 Cases of outcodes in the Cohen-Sutherland algorithm: segments AB, CD, EF, GH, and IJ.

2. (o1 = 0, o2 ≠ 0; or vice versa.) One endpoint is inside the clipping window;
one is outside (see segment CD in Figure 7.7). The line segment must be
shortened. The nonzero outcode indicates which edge, or edges, of the win-
dow are crossed by the segment. One or two intersections must be computed.
Note that after one intersection is computed, we can compute the outcode of
the point of intersection to determine whether another intersection calcula-
tion is required.
3. (o1 & o2 ≠ 0.) By taking the bitwise AND of the outcodes, we determine
whether or not the two endpoints lie on the same outside side of the window.
If so, the line segment can be discarded (see segment EF in Figure 7.7).
4. (o1 & o2 = 0.) Both endpoints are outside, but they are on the outside of
different edges of the window. As we can see from segments GH and IJ in
Figure 7.7, we cannot tell from just the outcodes whether the segment can be
discarded or must be shortened. The best we can do is to intersect with one
of the sides of the window, and to check the outcode of the resulting point.

All our checking of outcodes requires only Boolean operations. We do intersection


calculations only when they are needed, as in the second case, or as in the fourth
case, where the outcodes did not contain enough information.
The Cohen-Sutherland algorithm works best when there are many line segments
but few are actually displayed. In this case, most of the line segments lie fully outside
one or two of the extended sides of the clipping rectangle, and thus can be eliminated
on the basis of their outcodes. The other advantage is that this algorithm can be
extended to three dimensions. The main disadvantage of the algorithm is that it
must be used recursively. Consider line segment GH in Figure 7.7. It must be clipped
against both the left and top sides of the clipping window. Generally, the simplest
way to do so is to use the initial outcodes to determine the first side of the clipping
window to clip against. After this first shortening of the original line segment, a new
outcode is computed for the new endpoint created by shortening, and the algorithm
is reexecuted.
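The outcode computation itself can be written compactly. The following sketch uses one possible bit assignment (named masks rather than the text's b0b1b2b3 ordering) and passes the window limits as parameters; the intersection step for cases 2 and 4 is omitted:

/* Sketch of outcode computation for Cohen-Sutherland clipping. */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8 };

int outcode (float x, float y,
             float xmin, float xmax, float ymin, float ymax)
{
    int code = INSIDE;
    if      (x < xmin) code |= LEFT;    /* each test is effectively a  */
    else if (x > xmax) code |= RIGHT;   /* floating-point subtraction  */
    if      (y < ymin) code |= BELOW;
    else if (y > ymax) code |= ABOVE;
    return code;
}

/* Trivial tests on a segment's two outcodes o1 and o2:
   o1 == 0 && o2 == 0   -> accept (case 1)
   (o1 & o2) != 0       -> reject (case 3)
   otherwise            -> intersect one crossed edge and retest (cases 2, 4) */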
We have not discussed how to compute any required intersections. The form this
calculation takes depends on how we choose to represent the line segment, although

only a single division should be required in any case. If we use the standard explicit
form of a line,

y = mx + h,

where m is the slope of the line and h is the line’s y intercept, then we can compute
m and h from the endpoints. However, vertical lines cannot be represented in this
form—a critical weakness of the explicit form. If we were interested in only the
Cohen-Sutherland algorithm, it would be fairly simple to program all cases directly,
because the sides of the clipping rectangle are parallel to the axes. However, we are
interested in more than just clipping; consequently, other representations of the line
and line segment are of importance. In particular, parametric representations are
almost always used in computer graphics. We have already seen the parametric form
of the line in Chapter 4; the parametric representation of other types of curves is
considered in Chapter 12.

7.4.2 Liang-Barsky Clipping


If we use the parametric form for lines, we can approach the clipping of line segments
in a different—and ultimately more efficient—manner. Suppose that we have a line
segment defined by the two endpoints p1 = [x1, y1]T and p2 = [x2 , y2]T . We can use
these endpoints to define a unique line that we can express parametrically, either in
matrix form,

p(α) = (1 − α)p1 + αp2 ,

or as two scalar equations,

x(α) = (1 − α)x1 + αx2 ,

y(α) = (1 − α)y1 + αy2 .

Note that this form is robust and needs no changes for horizontal or vertical lines.
As the parameter α varies from 0 to 1, we move along the segment from p1 to p2.
Negative values of α yield points on the line on the other side of p1 from p2. Similarly,
values of α > 1 give points on the line past p2 going off to infinity.
Consider the line segment and the line of which it is part, as shown in Fig-
ure 7.8(a). As long as the line is not parallel to a side of the window (if it is, we can
handle that situation with ease), there are four points where the line intersects the
extended sides of the window. These points correspond to the four values of the pa-
rameter: α1, α2, α3, and α4. One of these values corresponds to the line entering the
window; another corresponds to the line leaving the window. Leaving aside, for the
moment, how we compute these intersections, we can order them, and can determine
which correspond to intersections that we need for clipping. For the given example,

1 > α4 > α3 > α2 > α1 > 0.



FIGURE 7.8 Two cases of a parametric line and a clipping window, with the intersection parameters α1, α2, α3, and α4 labeled in (a) and (b).

Hence, all four intersections are inside the original line segment, with the two in-
nermost (α2 and α3) determining the clipped line segment. We can distinguish this
case from the case in Figure 7.8(b), which also has the four intersections between the
endpoints of the line segment, by noting that the order for this case is

1 > α4 > α2 > α3 > α1 > 0.

The line intersects both the top and the bottom of the window before it intersects
either the left or the right; thus, the entire line segment must be rejected. Other cases
of the ordering of the points of intersection can be argued in a similar way.
Efficient implementation of this strategy requires that we avoid computing inter-
sections until they are needed. Many lines can be rejected before all four intersections
are known. We also want to avoid floating-point divisions where possible. If we use
the parametric form to determine the intersection with the top of the window, we
find the intersection at the value
α = (ymax − y1)/(y2 − y1).

Similar equations hold for the other three sides of the window. Rather than com-
puting these intersections, at the cost of a division for each, we write the
equation as

α(y2 − y1) = α Δy = ymax − y1 = Δymax.

All the tests required by the algorithm can be restated in terms of Δymax and Δy, and
similar terms can be computed for the other sides of the window. Thus, all decisions
about clipping can be made without floating-point division. Only if an intersection is
needed (because a segment has to be shortened) is the division done. The efficiency of
this approach, compared to that of the Cohen-Sutherland algorithm, is that we avoid
multiple shortening of line segments and the related reexecutions of the clipping
algorithm. We forgo discussion of other efficient two-dimensional line-clipping al-
gorithms because, unlike the Cohen-Sutherland and Liang-Barsky algorithms, these
algorithms do not extend to three dimensions.
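A sketch of a Liang-Barsky clipper is given below; the function names are ours. The helper clipTest applies the entering/leaving reasoning for one window edge. For clarity this version divides inside the test; as the text notes, the comparisons can be rewritten as multiplications so that division is deferred until a shortened endpoint is actually computed:

/* denom > 0: segment may be entering across this edge;
   denom < 0: may be leaving; denom == 0: parallel to it. */
bool clipTest (float denom, float num, float *aIn, float *aOut)
{
    if (denom > 0.0f) {                    /* potentially entering */
        float a = num / denom;
        if (a > *aOut) return false;       /* enters after it leaves */
        if (a > *aIn)  *aIn = a;
    }
    else if (denom < 0.0f) {               /* potentially leaving */
        float a = num / denom;
        if (a < *aIn)  return false;       /* leaves before it enters */
        if (a < *aOut) *aOut = a;
    }
    else if (num > 0.0f) return false;     /* parallel and outside */
    return true;
}

bool liangBarsky (float *x1, float *y1, float *x2, float *y2,
                  float xmin, float xmax, float ymin, float ymax)
{
    float dx = *x2 - *x1, dy = *y2 - *y1;
    float aIn = 0.0f, aOut = 1.0f;         /* surviving parameter range */

    if (clipTest ( dx, xmin - *x1, &aIn, &aOut) &&     /* left   */
        clipTest (-dx, *x1 - xmax, &aIn, &aOut) &&     /* right  */
        clipTest ( dy, ymin - *y1, &aIn, &aOut) &&     /* bottom */
        clipTest (-dy, *y1 - ymax, &aIn, &aOut)) {     /* top    */
        if (aOut < 1.0f) { *x2 = *x1 + aOut * dx; *y2 = *y1 + aOut * dy; }
        if (aIn  > 0.0f) { *x1 = *x1 + aIn  * dx; *y1 = *y1 + aIn  * dy; }
        return true;                       /* accepted, possibly shortened */
    }
    return false;                          /* rejected */
}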
Implementation Algorithms for Graphics Primitives and Attributes

1 Line-Drawing Algorithms
2 Parallel Line Algorithms
3 Setting Frame-Buffer Values
4 Circle-Generating Algorithms
5 Ellipse-Generating Algorithms
6 Other Curves
7 Parallel Curve Algorithms
8 Pixel Addressing and Object Geometry
9 Attribute Implementations for Straight-Line Segments and Curves
10 General Scan-Line Polygon-Fill Algorithm
11 Scan-Line Fill of Convex Polygons
12 Scan-Line Fill for Regions with Curved Boundaries
13 Fill Methods for Areas with Irregular Boundaries
14 Implementation Methods for Fill Styles
15 Implementation Methods for Antialiasing
16 Summary

In this chapter, we discuss the device-level algorithms for implementing OpenGL primitives. Exploring the implementation algorithms for a graphics library will give us valuable insight into the capabilities of these packages. It will also provide us with an understanding of how the functions work, perhaps how they could be improved, and how we might implement graphics routines ourselves for some special application. Research in computer graphics is continually discovering new and improved implementation techniques to provide us with methods for special applications, such as Internet graphics, and for developing faster and more realistic graphics displays in general.


FIGURE 1 Stair-step effect (jaggies) produced when a line is generated as a series of pixel positions.

1 Line-Drawing Algorithms
A straight-line segment in a scene is defined by the coordinate positions for the
endpoints of the segment. To display the line on a raster monitor, the graphics sys-
tem must first project the endpoints to integer screen coordinates and determine
the nearest pixel positions along the line path between the two endpoints. Then the
line color is loaded into the frame buffer at the corresponding pixel coordinates.
Reading from the frame buffer, the video controller plots the screen pixels. This
process digitizes the line into a set of discrete integer positions that, in general,
only approximates the actual line path. A computed line position of (10.48, 20.51),
for example, is converted to pixel position (10, 21). This rounding of coordinate
values to integers causes all but horizontal and vertical lines to be displayed with
a stair-step appearance (known as “the jaggies”), as represented in Figure 1. The
characteristic stair-step shape of raster lines is particularly noticeable on systems
with low resolution, and we can improve their appearance somewhat by dis-
playing them on high-resolution systems. More effective techniques for smooth-
ing a raster line are based on adjusting pixel intensities along the line path (see
Section 15 for details).

Line Equations
We determine pixel positions along a straight-line path from the geometric properties of the line. The Cartesian slope-intercept equation for a straight line is

y = m · x + b    (1)

with m as the slope of the line and b as the y intercept. Given that the two endpoints of a line segment are specified at positions (x0, y0) and (xend, yend), as shown in Figure 2, we can determine values for the slope m and y intercept b with the following calculations:

m = (yend − y0) / (xend − x0)    (2)

b = y0 − m · x0    (3)

FIGURE 2 Line path between endpoint positions (x0, y0) and (xend, yend).

Algorithms for displaying straight lines are based on Equation 1 and the calculations given in Equations 2 and 3.
For any given x interval δx along a line, we can compute the corresponding
y interval, δy, from Equation 2 as
δy = m · δx (4)
Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy / m    (5)
These equations form the basis for determining deflection voltages in analog dis-
plays, such as a vector-scan system, where arbitrarily small changes in deflection
voltage are possible. For lines with slope magnitudes |m| < 1, δx can be set pro-
portional to a small horizontal deflection voltage, and the corresponding vertical
deflection is then set proportional to δy as calculated from Equation 4. For lines


whose slopes have magnitudes |m| > 1, δy can be set proportional to a small vertical deflection voltage with the corresponding horizontal deflection voltage set proportional to δx, calculated from Equation 5. For lines with m = 1, δx = δy and the horizontal and vertical deflection voltages are equal. In each case, a smooth line with slope m is generated between the specified endpoints.
On raster systems, lines are plotted with pixels, and step sizes in the horizontal and vertical directions are constrained by pixel separations. That is, we must “sample” a line at discrete positions and determine the nearest pixel to the line at each sampled position. This scan-conversion process for straight lines is illustrated in Figure 3 with discrete sample positions along the x axis.

FIGURE 3 Straight-line segment with five sampling positions along the x axis between x0 and xend.

DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either δy or δx, using Equation 4 or Equation 5. A line is sampled
at unit intervals in one coordinate and the corresponding integer values nearest
the line path are determined for the other coordinate.
We consider first a line with positive slope, as shown in Figure 2. If the slope
is less than or equal to 1, we sample at unit x intervals (δx = 1) and compute
successive y values as
yk+1 = yk + m (6)
Subscript k takes integer values starting from 0, for the first point, and increases
by 1 until the final endpoint is reached. Because m can be any real number
between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer
corresponding to a screen pixel position in the x column that we are processing.
For lines with a positive slope greater than 1.0, we reverse the roles of x and y.
That is, we sample at unit y intervals (δy = 1) and calculate consecutive x values as
xk+1 = xk + 1/m    (7)
In this case, each computed x value is rounded to the nearest pixel position along
the current y scan line.
Equations 6 and 7 are based on the assumption that lines are to be pro-
cessed from the left endpoint to the right endpoint (Figure 2). If this processing is
reversed, so that the starting endpoint is at the right, then either we have δx = −1
and
yk+1 = yk − m (8)
or (when the slope is greater than 1) we have δy = −1 with
xk+1 = xk − 1/m    (9)
Similar calculations are carried out using Equations 6 through 9 to deter-
mine pixel positions along a line with negative slope. Thus, if the absolute value
of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and
calculate y values with Equation 6. When the starting endpoint is at the right
(for the same slope), we set δx = −1 and obtain y positions using Equation 8.
For a negative slope with absolute value greater than 1, we use δy = −1 and
Equation 9, or we use δy = 1 and Equation 7.
This algorithm is summarized in the following procedure, which accepts as
input two integer screen positions for the endpoints of a line segment. Horizontal
and vertical differences between the endpoint positions are assigned to parame-
ters dx and dy. The difference with the greater magnitude determines the value of
parameter steps. This value is the number of pixels that must be drawn beyond
the starting pixel; from it, we calculate the x and y increments needed to generate


the next pixel position at each step along the line path. We draw the starting pixel
at position (x0, y0), and then draw the remaining pixels iteratively, adjusting x
and y at each step to obtain the next pixel’s position before drawing it. If the magni-
tude of dx is greater than the magnitude of dy and x0 is less than xEnd, the values
for the increments in the x and y directions are 1 and m, respectively. If the greater
change is in the x direction, but x0 is greater than xEnd, then the decrements −1
and −m are used to generate each new point on the line. Otherwise, we use a unit
increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.

#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    if (fabs (dx) > fabs (dy))
        steps = fabs (dx);
    else
        steps = fabs (dy);
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}

FIGURE 4 A section of a display screen where a straight-line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

The DDA algorithm is a faster method for calculating pixel positions than one that directly implements Equation 1. It eliminates the multiplication in Equation 1 by using raster characteristics, so that appropriate increments are applied
in the x or y directions to step from one pixel position to another along the line path.
The accumulation of round-off error in successive additions of the floating-point
increment, however, can cause the calculated pixel positions to drift away from
the true line path for long line segments. Furthermore, the rounding operations
and floating-point arithmetic in this procedure are still time-consuming. We can
improve the performance of the DDA algorithm by separating the increments m and 1/m into integer and fractional parts so that all calculations are reduced to integer operations. A method for calculating 1/m increments in integer steps is discussed in Section 10. In the next section, we consider a more general scan-line approach that can be applied to both lines and curves.

FIGURE 5 A section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.

Bresenham’s Line Algorithm
In this section, we introduce an accurate and efficient raster line-generating algorithm, developed by Bresenham, that uses only incremental integer calculations. In addition, Bresenham’s line algorithm can be adapted to display circles and other curves. Figures 4 and 5 illustrate sections of a display screen where


straight-line segments are to be drawn. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns. Sampling at unit x intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step. Starting from the left endpoint shown in Figure 4, we need to determine at the next sample position whether to plot the pixel at position (11, 11) or the one at (11, 12). Similarly, Figure 5 shows a negative-slope line path starting from the left endpoint at pixel position (50, 50). In this one, do we select the next pixel position as (51, 50) or as (51, 49)? These questions are answered with Bresenham’s line algorithm by testing the sign of an integer parameter whose value is proportional to the difference between the vertical separations of the two pixel positions from the actual line path.

FIGURE 6 A section of the screen showing a pixel in column xk on scan line yk that is to be plotted along the path of a line segment with slope 0 < m < 1.

To illustrate Bresenham’s approach, we first consider the scan-conversion process for lines with positive slope less than 1.0. Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the pixel whose scan-line y value is closest to the line path. Figure 6 demonstrates the kth step in this process. Assuming that we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1. Our choices are the pixels at positions (xk + 1, yk) and (xk + 1, yk + 1).
At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path as dlower and dupper (Figure 7). The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as

y = m(xk + 1) + b    (10)

Then

dlower = y − yk = m(xk + 1) + b − yk    (11)

and

dupper = (yk + 1) − y = yk + 1 − m(xk + 1) − b    (12)

FIGURE 7 Vertical distances between pixel positions and the line y coordinate at sampling position xk + 1.
To determine which of the two pixels is closest to the line path, we can set up an
efficient test that is based on the difference between the two pixel separations as
follows:
dlower − dupper = 2m(xk + 1) − 2yk + 2b − 1 (13)
A decision parameter pk for the kth step in the line algorithm can be obtained by rearranging Equation 13 so that it involves only integer calculations. We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining the decision parameter as

pk = Δx(dlower − dupper) = 2Δy · xk − 2Δx · yk + c    (14)

The sign of pk is the same as the sign of dlower − dupper, because Δx > 0 for our example. Parameter c is constant and has the value 2Δy + Δx(2b − 1), which is independent of the pixel position and will be eliminated in the recursive calculations for pk. If the pixel at yk is “closer” to the line path than the pixel at yk + 1 (that is, dlower < dupper), then decision parameter pk is negative. In that case, we plot the lower pixel; otherwise, we plot the upper pixel.


Coordinate changes along the line occur in unit steps in either the x or y direction. Therefore, we can obtain the values of successive decision parameters using incremental integer calculations. At step k + 1, the decision parameter is evaluated from Equation 14 as

pk+1 = 2Δy · xk+1 − 2Δx · yk+1 + c

Subtracting Equation 14 from the preceding equation, we have

pk+1 − pk = 2Δy(xk+1 − xk) − 2Δx(yk+1 − yk)

However, xk+1 = xk + 1, so that

pk+1 = pk + 2Δy − 2Δx(yk+1 − yk)    (15)

where the term yk+1 − yk is either 0 or 1, depending on the sign of parameter pk.
This recursive calculation of decision parameters is performed at each integer x position, starting at the left coordinate endpoint of the line. The first parameter, p0, is evaluated from Equation 14 at the starting pixel position (x0, y0) and with m evaluated as Δy/Δx as follows:

p0 = 2Δy − Δx    (16)

We summarize Bresenham line drawing for a line with a positive slope less than 1 in the following outline of the algorithm. The constants 2Δy and 2Δy − 2Δx are calculated once for each line to be scan-converted, so the arithmetic involves only integer addition and subtraction of these two constants. Step 4 of the algorithm will be performed a total of Δx times.

Bresenham’s Line-Drawing Algorithm for |m| < 1.0

1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and obtain the starting value for the decision parameter as
   p0 = 2Δy − Δx
4. At each xk along the line, starting at k = 0, perform the following test: If pk < 0, the next point to plot is (xk + 1, yk) and
   pk+1 = pk + 2Δy
   Otherwise, the next point to plot is (xk + 1, yk + 1) and
   pk+1 = pk + 2Δy − 2Δx
5. Repeat step 4 Δx − 1 more times.

EXAMPLE 1 Bresenham Line Drawing

To illustrate the algorithm, we digitize the line with endpoints (20, 10) and (30, 18). This line has a slope of 0.8, with

Δx = 10, Δy = 8

The initial decision parameter has the value

p0 = 2Δy − Δx = 6

and the increments for calculating successive decision parameters are

2Δy = 16, 2Δy − 2Δx = −4

We plot the initial point (x0, y0) = (20, 10), and determine successive pixel positions along the line path from the decision parameter as follows:

k    pk    (xk+1, yk+1)        k    pk    (xk+1, yk+1)
0     6    (21, 11)            5     6    (26, 15)
1     2    (22, 12)            6     2    (27, 16)
2    −2    (23, 12)            7    −2    (28, 16)
3    14    (24, 13)            8    14    (29, 17)
4    10    (25, 14)            9    10    (30, 18)

A plot of the pixels generated along this line path is shown in Figure 8.

FIGURE 8 Pixel positions along the line path between endpoints (20, 10) and (30, 18), plotted with Bresenham’s line algorithm.

An implementation of Bresenham line drawing for slopes in the range 0 < m < 1.0 is given in the following procedure. Endpoint pixel positions for the line are passed to this procedure, and pixels are plotted from the left endpoint to the right endpoint.

#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for 0 < m < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as start position. */
    if (x0 > xEnd) {
        x = xEnd;
        y = yEnd;
        xEnd = x0;
    }
    else {
        x = x0;
        y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}

Bresenham’s algorithm is generalized to lines with arbitrary slope by consid-


ering the symmetry between the various octants and quadrants of the xy plane.
For a line with positive slope greater than 1.0, we interchange the roles of the x
and y directions. That is, we step along the y direction in unit steps and calculate
successive x values nearest the line path. Also, we could revise the program to
plot pixels starting from either endpoint. If the initial position for a line with pos-
itive slope is the right endpoint, both x and y decrease as we step from right to
left. To ensure that the same pixels are plotted regardless of the starting endpoint,
we always choose the upper (or the lower) of the two candidate pixels whenever
the two vertical separations from the line path are equal (dlower = dupper ). For neg-
ative slopes, the procedures are similar, except that now one coordinate decreases
as the other increases. Finally, special cases can be handled separately: Horizontal
lines (Δy = 0), vertical lines (Δx = 0), and diagonal lines (|Δx| = |Δy|) can each
be loaded directly into the frame buffer without processing them through the
line-plotting algorithm.

Displaying Polylines
Implementation of a polyline procedure is accomplished by invoking a line-
drawing routine n − 1 times to display the lines connecting the n endpoints. Each
successive call passes the coordinate pair needed to plot the next line section,
where the first endpoint of each coordinate pair is the last endpoint of the previ-
ous section. Once the color values for pixel positions along the first line segment
have been set in the frame buffer, we process subsequent line segments starting
with the next pixel position following the first endpoint for that segment. In this
way, we can avoid setting the color of some endpoints twice. We discuss methods
for avoiding the overlap of displayed objects in more detail in Section 8.
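A minimal polyline routine along these lines might be written as follows. This is a sketch; scrPt is an assumed point type, and the refinement of skipping each segment's shared first pixel is left out:

/* Display n - 1 connected line sections through n endpoints,
   using the lineBres procedure given above. */
typedef struct { int x, y; } scrPt;

void polyline (int n, const scrPt endpoints[])
{
    for (int k = 0; k < n - 1; k++)
        lineBres (endpoints[k].x,   endpoints[k].y,
                  endpoints[k+1].x, endpoints[k+1].y);
}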

2 Parallel Line Algorithms


The line-generating algorithms we have discussed so far determine pixel po-
sitions sequentially. Using parallel processing, we can calculate multiple pixel
positions along a line path simultaneously by partitioning the computations


among the various processors available. One approach to the partitioning prob-
lem is to adapt an existing sequential algorithm to take advantage of multiple
processors. Alternatively, we can look for other ways to set up the processing so
that pixel positions can be calculated efficiently in parallel. An important consid-
eration in devising a parallel algorithm is to balance the processing load among
the available processors.
Given n p processors, we can set up a parallel Bresenham line algorithm by
subdividing the line path into n p partitions and simultaneously generating line
segments in each of the subintervals. For a line with slope 0 < m < 1.0 and left
endpoint coordinate position (x0 , y0 ), we partition the line along the positive x
direction. The distance between beginning x positions of adjacent partitions can
be calculated as
Δxp = (Δx + np − 1) / np    (17)

where Δx is the width of the line, and the value for partition width Δxp is computed using integer division. Numbering the partitions, and the processors, as 0, 1, 2, up to np − 1, we calculate the starting x coordinate for the kth partition as

xk = x0 + k Δxp    (18)

For example, if we have np = 4 processors, with Δx = 15, the width of the


partitions is 4 and the starting x values for the partitions are x0 , x0 + 4, x0 + 8,
and x0 + 12. With this partitioning scheme, the width of the last (rightmost) sub-
interval will be smaller than the others in some cases. In addition, if the line
endpoints are not integers, truncation errors can result in variable-width partitions
along the length of the line.
To apply Bresenham’s algorithm over the partitions, we need the initial value for the y coordinate and the initial value for the decision parameter in each partition. The change Δyp in the y direction over each partition is calculated from the line slope m and partition width Δxp:

Δyp = m Δxp    (19)

At the kth partition, the starting y coordinate is then

yk = y0 + round(k Δyp)    (20)

The initial decision parameter for Bresenham’s algorithm at the start of the kth subinterval is obtained from Equation 14:

pk = (k Δxp)(2Δy) − round(k Δyp)(2Δx) + 2Δy − Δx    (21)

Each processor then calculates pixel positions over its assigned subinterval
using the preceding starting decision parameter value and the starting coordinates
(xk , yk ). Floating-point calculations can be reduced to integer arithmetic in the
computations for starting values yk and pk by substituting m = Δy/Δx and
rearranging terms. We can extend the parallel Bresenham algorithm to a line
with slope greater than 1.0 by partitioning the line in the y direction and calcu-
lating beginning x values for the partitions. For negative slopes, we increment
coordinate values in one direction and decrement in the other.
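The startup values of Equations 17 through 21 can be gathered into a small routine, sketched below with our own names; a full parallel implementation would then run Bresenham's loop over each partition's subinterval on its own processor:

/* Per-partition startup values for processor k of np, for a line
   with slope 0 < m < 1; dx and dy are the endpoint separations. */
void partitionStart (int k, int np, int x0, int y0, int dx, int dy,
                     int *xk, int *yk, int *pk)
{
    int   dxp  = (dx + np - 1) / np;          /* Eq. 17, integer division */
    float dyp  = (float) dy * dxp / dx;       /* Eq. 19: m * dxp          */
    int   yOff = (int) (k * dyp + 0.5f);      /* round(k * dyp)           */

    *xk = x0 + k * dxp;                       /* Eq. 18 */
    *yk = y0 + yOff;                          /* Eq. 20 */
    *pk = (k * dxp) * 2 * dy - yOff * 2 * dx
          + 2 * dy - dx;                      /* Eq. 21 */
}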
Another way to set up parallel algorithms on raster systems is to assign each
processor to a particular group of screen pixels. With a sufficient number of pro-
cessors, we can assign each processor to one pixel within some screen region. This


approach can be adapted to a line display by assigning one processor to each of the pixels within the limits of the coordinate extents of the line and calculating pixel distances from the line path. The number of pixels within the bounding box of a line is Δx · Δy (as illustrated in Figure 9). Perpendicular distance d from the line in Figure 9 to a pixel with coordinates (x, y) is obtained with the calculation

d = Ax + By + C    (22)

where

A = −Δy / linelength
B = Δx / linelength
C = (x0 Δy − y0 Δx) / linelength

with

linelength = √(Δx² + Δy²)

FIGURE 9 Bounding box for a line with endpoint separations Δx and Δy.
Once the constants A, B, and C have been evaluated for the line, each processor
must perform two multiplications and two additions to compute the pixel dis-
tance d. A pixel is plotted if d is less than a specified line thickness parameter.
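The distance test of Equation 22 might be coded as in the following sketch, with our own function name; because d as written is signed, the sketch compares its absolute value against the thickness parameter:

#include <math.h>

/* Per-pixel distance test of Equation 22; each processor would
   evaluate this for its own assigned pixel (x, y). dx and dy are
   the endpoint separations of the line. */
int pixelOnLine (int x, int y, int x0, int y0, int dx, int dy,
                 float thickness)
{
    float len = sqrtf ((float) (dx * dx + dy * dy));  /* linelength   */
    float A = -dy / len, B = dx / len;                /* unit normal  */
    float C = (x0 * dy - y0 * dx) / len;
    float d = A * x + B * y + C;                      /* signed distance */
    return fabsf (d) < thickness;                     /* plot if close */
}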
Instead of partitioning the screen into single pixels, we can assign to each
processor either a scan line or a column of pixels depending on the line slope. Each
processor then calculates the intersection of the line with the horizontal row or
vertical column of pixels assigned to that processor. For a line with slope |m| < 1.0,
each processor simply solves the line equation for y, given an x column value.
For a line with slope magnitude greater than 1.0, the line equation is solved for x
by each processor, given a scan line y value. Such direct methods, although slow
on sequential machines, can be performed efficiently using multiple processors.

3 Setting Frame-Buffer Values


A final stage in the implementation procedures for line segments and other objects
is to set the frame-buffer color values. Because scan-conversion algorithms gen-
erate pixel positions at successive unit intervals, incremental operations can also
be used to access the frame buffer efficiently at each step of the scan-conversion
process.
As a specific example, suppose the frame buffer array is addressed in row-
major order and that pixel positions are labeled from (0, 0) at the lower-left corner
to (xmax , ymax ) at the top-right corner (Figure 10) of the screen. For a bilevel
system (one bit per pixel), the frame-buffer bit address for pixel position (x, y) is
calculated as
addr(x, y) = addr(0, 0) + y(xmax + 1) + x (23)

Moving across a scan line, we can calculate the frame-buffer address for the pixel
at (x + 1, y) as the following offset from the address for position (x, y):
addr(x + 1, y) = addr(x, y) + 1 (24)

Stepping diagonally up to the next scan line from (x, y), we get to the frame-buffer
address of (x + 1, y + 1) with the calculation
addr(x + 1, y + 1) = addr(x, y) + xmax + 2 (25)

FIGURE 10  Pixel screen positions stored linearly in row-major order within the frame buffer.

where the constant xmax + 2 is precomputed once for all line segments. Similar
incremental calculations can be obtained from Equation 23 for unit steps in the
negative x and y screen directions. Each of the address calculations involves only
a single integer addition.
Methods for implementing these procedures depend on the capabilities of
a particular system and the design requirements of the software package. With
systems that can display a range of intensity values for each pixel, frame-buffer
address calculations include pixel width (number of bits), as well as the pixel
screen location.
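These address calculations can be made concrete with a small sketch. The class below models a hypothetical one-byte-per-pixel frame buffer (the layout and names are assumptions, not a description of any particular system); note that only the initial address computation uses a multiplication, while the unit steps of Equations 24 and 25 each reduce to a single integer addition.

#include <vector>

/* Row-major frame buffer with one byte per pixel. */
class FrameBuffer {
public:
    FrameBuffer (int xMax, int yMax)
        : width (xMax + 1), pixels ((xMax + 1) * (yMax + 1), 0) { }

    int addr (int x, int y) const { return y * width + x; }   // Eq. 23

    /* Demonstrates the incremental steps (bounds checks omitted). */
    void demoSteps (int x, int y) {
        int a = addr (x, y);
        int right = a + 1;            // (x + 1, y):     Eq. 24
        int diag  = a + width + 1;    // (x + 1, y + 1): Eq. 25, offset xmax + 2
        pixels[right] = pixels[diag] = 255;
    }

private:
    int width;                        // xmax + 1 pixels per scan line
    std::vector<unsigned char> pixels;
};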

4 Circle-Generating Algorithms
Because the circle is a frequently used component in pictures and graphs, a procedure for generating either full circles or circular arcs is included in many graphics packages. In addition, sometimes a general function is available in a graphics library for displaying various kinds of curves, including circles and ellipses.

Properties of Circles
A circle (Figure 11) is defined as the set of points that are all at a given distance r from a center position (xc, yc).

FIGURE 11  Circle with center coordinates (xc, yc) and radius r.

For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

(x − xc)² + (y − yc)² = r²          (26)

We could use this equation to calculate the position of points on a circle circumfer-
ence by stepping along the x axis in unit steps from xc − r to xc + r and calculating
the corresponding y values at each position as

y = yc ± √(r² − (xc − x)²)          (27)

However, this is not the best method for generating a circle. One problem with this approach is that it involves considerable computation at each step. Moreover, the spacing between plotted pixel positions is not uniform, as demonstrated in Figure 12.

FIGURE 12  Upper half of a circle plotted with Equation 27 and with (xc, yc) = (0, 0).

We could adjust the spacing by interchanging x and y (stepping through y values and calculating x values) whenever the absolute value of the slope of the circle is greater than 1, but this simply increases the computation and processing required by the algorithm.

Another way to eliminate the unequal spacing shown in Figure 12 is to calculate points along the circular boundary using polar coordinates r and θ (Figure 11). Expressing the circle equation in parametric polar form yields the pair of equations

x = xc + r cos θ
y = yc + r sin θ          (28)

When a display is generated with these equations using a fixed angular step size, a circle is plotted with equally spaced points along the circumference. To reduce calculations, we can use a large angular separation between points along the circumference and connect the points with straight-line segments to approximate the circular path. For a more continuous boundary on a raster display, we can instead set the angular step size at 1/r. This plots pixel positions that are approximately one unit apart. Although polar coordinates provide equal point spacing, the trigonometric calculations are still time-consuming.
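For reference, here is a brief sketch of circle generation from the parametric form (28) with the angular step set to 1/r. The code is illustrative only (setPixel is the assumed frame-buffer write routine, and r is taken to be at least 1):

#include <GL/glut.h>
#include <cmath>

void setPixel (GLint x, GLint y);   // Assumed frame-buffer write, as before.

/* Plots a circle from the polar form of Eq. 28. With dTheta = 1/r,
   successive plotted positions are approximately one unit apart. */
void circlePolar (GLint xc, GLint yc, GLfloat r)
{
    const GLfloat twoPi = 6.2831853f;
    GLfloat dTheta = 1.0f / r;

    for (GLfloat theta = 0.0f; theta < twoPi; theta += dTheta)
        setPixel ((GLint) std::lround (xc + r * std::cos (theta)),
                  (GLint) std::lround (yc + r * std::sin (theta)));
}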
For any of the previous circle-generating methods, we can reduce computations by considering the symmetry of circles. The shape of the circle is similar in each quadrant. Therefore, if we determine the curve positions in the first quadrant, we can generate the circle section in the second quadrant of the xy plane by noting that the two circle sections are symmetric with respect to the y axis. Also, circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis. We can take this one step further and note that there is also symmetry between octants. Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants. These symmetry conditions are illustrated in Figure 13, where a point at position (x, y) on a one-eighth circle sector is mapped into the seven circle points in the other octants of the xy plane.

FIGURE 13  Symmetry of a circle. Calculation of a circle point (x, y) in one octant yields the circle points shown for the other seven octants.

Taking advantage of the circle symmetry in this way, we can generate all pixel positions around a circle by calculating only the points within the sector from x = 0 to x = y. The slope of the curve in this octant has a magnitude less than or equal to 1.0. At x = 0, the circle slope is 0, and at x = y, the slope is −1.0.
Determining pixel positions along a circle circumference using symmetry and
either Equation 26 or Equation 28 still requires a good deal of computation.
The Cartesian equation 26 involves multiplications and square-root calcula-
tions, while the parametric equations contain multiplications and trigonometric
calculations. More efficient circle algorithms are based on incremental calculation
of decision parameters, as in the Bresenham line algorithm, which involves only
simple integer operations.
Bresenham’s line algorithm for raster displays is adapted to circle generation
by setting up decision parameters for finding the closest pixel to the circumference
at each sampling step. The circle equation 26, however, is nonlinear, so that
square-root evaluations would be required to compute pixel distances from a
circular path. Bresenham’s circle algorithm avoids these square-root calculations
by comparing the squares of the pixel separation distances.
However, it is possible to perform a direct distance comparison without a
squaring operation. The basic idea in this approach is to test the halfway position
between two pixels to determine if this midpoint is inside or outside the circle
boundary. This method is applied more easily to other conics; and for an integer
circle radius, the midpoint approach generates the same pixel positions as the
Bresenham circle algorithm. For a straight-line segment, the midpoint method is
equivalent to the Bresenham line algorithm. Also, the error involved in locating
pixel positions along any conic section using the midpoint test is limited to half
the pixel separation.

Midpoint Circle Algorithm


As in the raster line algorithm, we sample at unit intervals and determine the
closest pixel position to the specified circle path at each step. For a given radius
r and screen center position (xc , yc ), we can first set up our algorithm to calculate
pixel positions around a circle path centered at the coordinate origin (0, 0). Then
each calculated position (x, y) is moved to its proper screen position by adding xc
to x and yc to y. Along the circle section from x = 0 to x = y in the first quadrant,
the slope of the curve varies from 0 to −1.0. Therefore, we can take unit steps in
the positive x direction over this octant and use a decision parameter to determine
which of the two possible pixel positions in any column is vertically closer to the
circle path. Positions in the other seven octants are then obtained by symmetry.
To apply the midpoint method, we define a circle function as
fcirc(x, y) = x² + y² − r²          (29)
Any point (x, y) on the boundary of the circle with radius r satisfies the equation fcirc(x, y) = 0. If the point is in the interior of the circle, the circle function is negative; and if the point is outside the circle, the circle function is positive. To summarize, the relative position of any point (x, y) can be determined by checking the sign of the circle function as follows:

            ⎧ < 0,  if (x, y) is inside the circle boundary
fcirc(x, y) ⎨ = 0,  if (x, y) is on the circle boundary          (30)
            ⎩ > 0,  if (x, y) is outside the circle boundary

The tests in (30) are performed for the midpositions between pixels near the circle path at each sampling step. Thus, the circle function is the decision parameter in the midpoint algorithm, and we can set up incremental calculations for this function as we did in the line algorithm.

Figure 14 shows the midpoint between the two candidate pixels at sampling position xk + 1. Assuming that we have just plotted the pixel at (xk, yk), we next need to determine whether the pixel at position (xk + 1, yk) or the one at position (xk + 1, yk − 1) is closer to the circle.

FIGURE 14  Midpoint between candidate pixels at sampling position xk + 1 along a circular path.

Our decision parameter is the circle function (29) evaluated at the midpoint between these two pixels:

pk = fcirc(xk + 1, yk − 1/2)
   = (xk + 1)² + (yk − 1/2)² − r²          (31)
If pk < 0, this midpoint is inside the circle and the pixel on scan line yk is closer
to the circle boundary. Otherwise, the midposition is outside or on the circle
boundary, and we select the pixel on scan line yk − 1.
Successive decision parameters are obtained using incremental calculations. We obtain a recursive expression for the next decision parameter by evaluating the circle function at sampling position xk+1 + 1 = xk + 2:

pk+1 = fcirc(xk+1 + 1, yk+1 − 1/2)
     = [(xk + 1) + 1]² + (yk+1 − 1/2)² − r²

or

pk+1 = pk + 2(xk + 1) + (yk+1² − yk²) − (yk+1 − yk) + 1          (32)

where yk+1 is either yk or yk − 1, depending on the sign of pk .

Increments for obtaining pk+1 are either 2xk+1 + 1 (if pk is negative) or 2xk+1 + 1 − 2yk+1 (if pk is nonnegative). Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk − 2
At the start position (0, r ), these two terms have the values 0 and 2r , respectively.
Each successive value for the 2xk+1 term is obtained by adding 2 to the previous
value, and each successive value for the 2yk+1 term is obtained by subtracting 2
from the previous value.
The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

p0 = fcirc(1, r − 1/2)
   = 1 + (r − 1/2)² − r²

or

p0 = 5/4 − r          (33)

If the radius r is specified as an integer, we can simply round p0 to

p0 = 1 − r    (for r an integer)
because all increments are integers.
As in Bresenham’s line algorithm, the midpoint method calculates pixel posi-
tions along the circumference of a circle using integer additions and subtractions,
assuming that the circle parameters are specified in integer screen coordinates.
We can summarize the steps in the midpoint circle algorithm as follows:

Midpoint Circle Algorithm


1. Input radius r and circle center (xc , yc ), then set the coordinates for the
first point on the circumference of a circle centered on the origin as
(x0 , y0 ) = (0, r )
2. Calculate the initial value of the decision parameter as

   p0 = 5/4 − r
3. At each xk position, starting at k = 0, perform the following test: If
pk < 0, the next point along the circle centered on (0, 0) is (xk+1 , yk ) and
pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk + 1, yk − 1) and
pk+1 = pk + 2xk+1 + 1 − 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path
centered at (xc , yc ) and plot the coordinate values as follows:
x = x + xc , y = y + yc
6. Repeat steps 3 through 5 until x ≥ y.

EXAMPLE 2 Midpoint Circle Drawing


Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by
determining positions along the circle octant in the first quadrant from x = 0
to x = y. The initial value of the decision parameter is
p0 = 1 − r = −9
For the circle centered on the coordinate origin, the initial point is (x0 , y0 ) =
(0, 10), and initial increment terms for calculating the decision parameters
are
2x0 = 0, 2y0 = 20
Successive midpoint decision parameter values and the corresponding coordi-
nate positions along the circle path are listed in the following table:

k      pk      (xk+1, yk+1)      2xk+1      2yk+1

0      −9      (1, 10)            2          20
1      −6      (2, 10)            4          20
2      −1      (3, 10)            6          20
3       6      (4, 9)             8          18
4      −3      (5, 9)            10          18
5       8      (6, 8)            12          16
6       5      (7, 7)            14          14

A plot of the generated pixel positions in the first quadrant is shown in Figure 15.

FIGURE 15  Pixel positions (solid circles) along a circle path centered on the origin and with radius r = 10, as calculated by the midpoint circle algorithm. Open (“hollow”) circles show the symmetry positions in the first quadrant.

The following code segment illustrates procedures that could be used to implement the midpoint circle algorithm. Values for a circle radius and for the center coordinates of the circle are passed to procedure circleMidpoint. A pixel position along the circular path in the first octant is then computed and passed to procedure circlePlotPoints. This procedure sets the circle color in the frame buffer for all circle symmetry positions with repeated calls to the setPixel routine, which is implemented with the OpenGL point-plotting functions.

#include <GL/glut.h>

class screenPt
{
private:
    GLint x, y;

public:
    /* Default constructor: initializes coordinate position to (0, 0). */
    screenPt ( ) {
        x = y = 0;
    }
    void setCoords (GLint xCoordValue, GLint yCoordValue) {
        x = xCoordValue;
        y = yCoordValue;
    }
    GLint getx ( ) const {
        return x;
    }
    GLint gety ( ) const {
        return y;
    }
    void incrementx ( ) {
        x++;
    }
    void decrementy ( ) {
        y--;
    }
};

void setPixel (GLint xCoord, GLint yCoord)
{
    glBegin (GL_POINTS);
        glVertex2i (xCoord, yCoord);
    glEnd ( );
}

void circlePlotPoints (GLint, GLint, screenPt);   // Forward declaration.

void circleMidpoint (GLint xc, GLint yc, GLint radius)
{
    screenPt circPt;

    GLint p = 1 - radius;             // Initial value of the midpoint parameter.

    circPt.setCoords (0, radius);     // Set coordinates for top point of circle.

    /* Plot the initial point in each circle quadrant. */
    circlePlotPoints (xc, yc, circPt);

    /* Calculate next point and plot in each octant. */
    while (circPt.getx ( ) < circPt.gety ( )) {
        circPt.incrementx ( );
        if (p < 0)
            p += 2 * circPt.getx ( ) + 1;
        else {
            circPt.decrementy ( );
            p += 2 * (circPt.getx ( ) - circPt.gety ( )) + 1;
        }
        circlePlotPoints (xc, yc, circPt);
    }
}

void circlePlotPoints (GLint xc, GLint yc, screenPt circPt)
{
    setPixel (xc + circPt.getx ( ), yc + circPt.gety ( ));
    setPixel (xc - circPt.getx ( ), yc + circPt.gety ( ));
    setPixel (xc + circPt.getx ( ), yc - circPt.gety ( ));
    setPixel (xc - circPt.getx ( ), yc - circPt.gety ( ));
    setPixel (xc + circPt.gety ( ), yc + circPt.getx ( ));
    setPixel (xc - circPt.gety ( ), yc + circPt.getx ( ));
    setPixel (xc + circPt.gety ( ), yc - circPt.getx ( ));
    setPixel (xc - circPt.gety ( ), yc - circPt.getx ( ));
}
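To complete the picture, a minimal GLUT driver that could display one circle with these procedures is sketched below. It is not part of the original listing; the window size, world-coordinate range, circle parameters, and colors are arbitrary choices.

void displayFcn (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glColor3f (0.0, 0.0, 1.0);            // Plot the circle in blue.
    circleMidpoint (200, 150, 100);       // Center (200, 150), radius 100.
    glFlush ( );
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize (400, 300);
    glutCreateWindow ("Midpoint Circle Algorithm");

    glClearColor (1.0, 1.0, 1.0, 0.0);    // White display window.
    glMatrixMode (GL_PROJECTION);
    gluOrtho2D (0.0, 400.0, 0.0, 300.0);  // Match world to window coordinates.

    glutDisplayFunc (displayFcn);
    glutMainLoop ( );
}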

5 Ellipse-Generating Algorithms
Loosely stated, an ellipse is an elongated circle. We can also describe an ellipse as a modified circle whose radius varies from a maximum value in one direction to a minimum value in the perpendicular direction. The straight-line segments through the interior of the ellipse in these two perpendicular directions are referred to as the major and minor axes of the ellipse.

Properties of Ellipses
A precise definition of an ellipse can be given in terms of the distances from any point on the ellipse to two fixed positions, called the foci of the ellipse. The sum of these two distances is the same value for all points on the ellipse (Figure 16).

FIGURE 16  Ellipse generated about foci F1 and F2.

If the distances to the two focus positions from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as

d1 + d2 = constant          (34)

Expressing distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2), we have

√((x − x1)² + (y − y1)²) + √((x − x2)² + (y − y2)²) = constant          (35)

By squaring this equation, isolating the remaining radical, and squaring again, we can rewrite the general ellipse equation in the form

Ax² + By² + Cxy + Dx + Ey + F = 0          (36)
