CGV Module 2
Fill Area Primitives
2.1.1 Introduction
A useful construct for describing components of a picture is an area that is filled with some solid color or pattern.
A picture component of this type is typically referred to as a fill area or a filled area.
Although any fill-area shape is possible, graphics libraries generally do not support specifications for arbitrary fill shapes.
Figure below illustrates a few possible fill-area shapes.
Graphics routines can more efficiently process polygons than other kinds of fill shapes
because polygon boundaries are described with linear equations.
Below figure shows the side and top surfaces of a metal cylinder approximated in an
outline form as a polygon mesh.
Displays of such figures can be generated quickly as wire-frame views, showing only the
polygon edges to give a general indication of the surface structure
Objects described with a set of polygon surface patches are usually referred to as standard
graphics objects, or just graphics objects.
A polygon is a plane figure specified by a set of three or more coordinate positions, called vertices, that are connected in sequence by straight-line segments, called the edges or sides of the polygon.
It is required that the polygon edges have no common point other than their endpoints.
Thus, by definition, a polygon must have all its vertices within a single plane and there
can be no edge crossings
Examples of polygons include triangles, rectangles, octagons, and decagons
Any plane figure with a closed-polyline boundary is referred to as a polygon, and one with no crossing edges is referred to as a standard polygon or a simple polygon.
Problem:
For a computer-graphics application, it is possible that a designated set of polygon vertices do not all lie exactly in one plane.
This is due to roundoff error in the calculation of numerical values, to errors in selecting coordinate positions for the vertices, or, more typically, to approximating a curved surface with a set of polygonal patches.
Solution:
To divide the specified surface mesh into triangles
Polygon Classifications
Polygons are classified into two types
✓ Convex Polygon and
✓ Concave Polygon
Convex Polygon:
The polygon is convex if all interior angles of a polygon are less than or equal to 180◦,
where an interior angle of a polygon is an angle inside the polygon boundary that is
formed by two adjacent edges
An equivalent definition of a convex polygon is that its interior lies completely on one
side of the infinite extension line of any one of its edges.
Also, if we select any two points in the interior of a convex polygon, the line segment
joining the two points is also in the interior.
Concave Polygon:
A polygon that is not convex is called a concave polygon. The figure below shows a convex and a concave polygon.
The term degenerate polygon is often used to describe a set of vertices that are collinear or that have repeated coordinate positions.
Identification algorithm 1
Identifying a concave polygon by calculating cross-products of successive pairs of edge
vectors.
If we set up a vector for each polygon edge, then we can use the cross-product of adjacent
edges to test for concavity. All such vector products will be of the same sign (positive or
negative) for a convex polygon.
Therefore, if some cross-products yield a positive value and some a negative value, we
have a concave polygon
Identification algorithm 2
Look at the polygon vertex positions relative to the extension line of any edge.
If some vertices are on one side of the extension line and some vertices are on the other
side, the polygon is concave.
Vector method
First need to form the edge vectors.
Given two consecutive vertex positions, Vk and Vk+1, we define the edge vector between
them as
Ek = Vk+1 – Vk
Calculate the cross-products of successive edge vectors in order around the polygon
perimeter.
If the z component of some cross-products is positive while other cross-products have a
negative z component, the polygon is concave.
We can apply the vector method by processing edge vectors in counterclockwise order. If any cross-product has a negative z component (as in the figure below), the polygon is concave and we can split it along the line of the first edge vector in the cross-product pair.
E1 = (1, 0, 0) E2 = (1, 1, 0)
E3 = (1, −1, 0) E4 = (0, 2, 0)
E5 = (−3, 0, 0) E6 = (0, −2, 0)
E1 × E2 = (0, 0, 1)    E2 × E3 = (0, 0, −2)
E3 × E4 = (0, 0, 2)    E4 × E5 = (0, 0, 6)
E5 × E6 = (0, 0, 6)    E6 × E1 = (0, 0, 2)
Since the cross-product E2 × E3 has a negative z component, we split the polygon along
the line of vector E2.
The line equation for this edge has a slope of 1 and a y intercept of −1 . No other edge
cross-products are negative, so the two new polygons are both convex.
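The following is a minimal C++ sketch of this cross-product test (not part of the original notes; the struct and function names are illustrative):

#include <vector>

struct Pt2 { double x, y; };   // a 2D vertex

// Returns true if the polygon (vertices listed in order around the boundary)
// is concave: the z components of successive edge-vector cross-products
// do not all have the same sign.
bool isConcave (const std::vector<Pt2>& v)
{
   int n = v.size();
   bool hasPositive = false, hasNegative = false;
   for (int k = 0; k < n; k++) {
      // Edge vectors Ek = V(k+1) - Vk and Ek+1 = V(k+2) - V(k+1), with wraparound.
      double e1x = v[(k + 1) % n].x - v[k].x;
      double e1y = v[(k + 1) % n].y - v[k].y;
      double e2x = v[(k + 2) % n].x - v[(k + 1) % n].x;
      double e2y = v[(k + 2) % n].y - v[(k + 1) % n].y;
      double crossZ = e1x * e2y - e1y * e2x;   // z component of Ek x Ek+1
      if (crossZ > 0.0) hasPositive = true;
      if (crossZ < 0.0) hasNegative = true;
   }
   return hasPositive && hasNegative;          // mixed signs => concave
}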
Rotational method
Proceeding counterclockwise around the polygon edges, we
shift the position of the polygon so that each vertex Vk
in turn is at the coordinate origin.
We rotate the polygon about the origin in a clockwise
direction so that the next vertex Vk+1 is on the x axis.
If the following vertex, Vk+2, is below the x
axis, the polygon is concave.
We then split the polygon along the x axis to form two new polygons, and we repeat the concave test for each of the two new polygons.
Inside-Outside Tests
Also called the odd-parity rule or the even-odd rule.
Draw a line from any position P to a distant point outside the coordinate extents of the
closed polyline.
Then we count the number of line-segment crossings along this line.
If the number of segments crossed by this line is odd, then P is considered to be an interior point. Otherwise, P is an exterior point.
We can use this procedure, for example, to fill the interior region between two concentric circles or two concentric polygons with a specified color.
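A small C++ sketch of the odd-even test (assuming the polygon is stored as an ordered vertex list; the names are illustrative):

#include <vector>

struct Pt2 { double x, y; };

// Odd-even (crossing-number) rule: cast a ray from (px, py) to the right and
// count the polygon edges it crosses; an odd count means the point is interior.
bool insideOddEven (const std::vector<Pt2>& v, double px, double py)
{
   int n = v.size();
   int crossings = 0;
   for (int k = 0; k < n; k++) {
      const Pt2& a = v[k];
      const Pt2& b = v[(k + 1) % n];
      if ((a.y > py) != (b.y > py)) {          // edge spans the ray's y value
         double xCross = a.x + (py - a.y) * (b.x - a.x) / (b.y - a.y);
         if (xCross > px) crossings++;         // crossing to the right of P
      }
   }
   return (crossings % 2) == 1;
}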
Nonzero Winding-Number Rule
Another way to define interior regions is the nonzero winding-number rule, which counts the number of times that the polygon boundary "winds" around a particular point (the winding number). The nonzero winding-number rule tends to classify as interior some areas that the odd-even rule deems to be exterior.
Variations of the nonzero winding-number rule can be used to define interior regions in other ways: for example, we could define a point to be interior if its winding number is positive or if it is negative, or we could use any other rule to generate a variety of fill shapes.
Boolean operations can also be used to specify a fill area as a combination of two regions.
One way to implement Boolean operations is with a variation of the basic winding-number rule: considering the direction for each boundary to be counterclockwise, the union of two regions would consist of those points whose winding number is positive.
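A companion sketch for the nonzero winding-number rule (same assumptions as the previous sketch): upward edge crossings add 1 to the winding number and downward crossings subtract 1, so the union of two counterclockwise regions is the set of points with a positive winding number.

#include <vector>

struct Pt2 { double x, y; };

// Winding number of point (px, py) with respect to the polygon boundary:
// nonzero (or, for the union rule above, positive) means interior.
int windingNumber (const std::vector<Pt2>& v, double px, double py)
{
   int n = v.size();
   int winding = 0;
   for (int k = 0; k < n; k++) {
      const Pt2& a = v[k];
      const Pt2& b = v[(k + 1) % n];
      if ((a.y > py) != (b.y > py)) {
         double xCross = a.x + (py - a.y) * (b.x - a.x) / (b.y - a.y);
         if (xCross > px)
            winding += (b.y > a.y) ? 1 : -1;   // upward edge: +1, downward: -1
      }
   }
   return winding;
}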
Polygon Tables
The objects in a scene are described as sets of polygon surface facets
The description for each object includes coordinate information specifying the geometry for
the polygon facets and other surface parameters such as color, transparency, and light-
reflection properties.
The data of the polygons are placed into tables that are to be used in the subsequent
processing, display, and manipulation of the objects in the scene
These polygon data tables can be organized into two groups:
Geometric tables and
Attribute tables
Geometric data tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.
Attribute information for an object includes parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics
Geometric data for the objects in a scene are arranged conveniently in three lists: a vertex
table, an edge table, and a surface-facet table.
Coordinate values for each vertex in the object are stored in the vertex table.
The edge table contains pointers back into the vertex table to identify the vertices for
each polygon edge.
And the surface-facet table contains pointers back into the edge table to identify the edges
for each polygon
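One possible C++ layout of these three tables, using indices as the "pointers" described above (illustrative only, not from the original notes):

#include <vector>

struct Vertex { double x, y, z; };             // vertex table entry
struct Edge   { int v1, v2; };                 // indices into the vertex table
struct Facet  { std::vector<int> edges; };     // indices into the edge table

struct PolygonMesh {
   std::vector<Vertex> vertexTable;
   std::vector<Edge>   edgeTable;
   std::vector<Facet>  facetTable;
};

// Example: a single triangular facet with vertices V1, V2, V3 and edges
// E1 = (V1, V2), E2 = (V2, V3), E3 = (V3, V1).
PolygonMesh makeTriangleMesh ()
{
   PolygonMesh m;
   m.vertexTable = { {0.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {0.0, 1.0, 0.0} };
   m.edgeTable   = { {0, 1}, {1, 2}, {2, 0} };
   m.facetTable  = { { {0, 1, 2} } };
   return m;
}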
The object can be displayed efficiently by using data from the edge table to identify polygon boundaries.
An alternative arrangement is to use just two tables: a vertex table and a surface-facet table. However, this scheme is less convenient, and some edges could get drawn twice in a wire-frame display.
Another possibility is to use only a surface-facet table, but this duplicates coordinate information, since explicit coordinate values are listed for each vertex in each polygon facet. Also, the relationship between edges and facets would have to be reconstructed from the vertex listings in the surface-facet table.
We could expand the edge table to include forward pointers into the surface-facet table so that a common edge between polygons could be identified more rapidly. Similarly, the vertex table could be expanded to reference corresponding edges, for faster information retrieval.
Because the geometric data tables may contain extensive listings of vertices and edges for complex objects and scenes, it is important that the data be checked for consistency and completeness.
Some of the tests that could be performed by a graphics package are (1) that every vertex is listed as an endpoint for at least two edges, (2) that every edge is part of at least one polygon, (3) that every polygon is closed, (4) that each polygon has at least one shared edge, and (5) that, if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.
Plane Equations
Each polygon in a scene is contained within a plane of infinite extent.
The general equation of a plane is
Ax + B y + C z + D = 0
Where,
(x, y, z) is any point on the plane, and
The coefficients A, B, C, and D (called plane parameters) are constants
describing the spatial properties of the plane.
We can obtain the values of A, B, C, and D by solving a set of three plane equations using the coordinate values for three noncollinear points in the plane. For this purpose, we can select three successive convex-polygon vertices, (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), in a counterclockwise order and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D, and C/D:
(A/D)xk + (B/D)yk + (C/D)zk = −1,    k = 1, 2, 3
The solution to this set of equations can be obtained in determinant form, using Cramer's rule. Expanding the determinants, we can write the calculations for the plane coefficients in the form
A = y1 (z2 − z3) + y2 (z3 − z1) + y3 (z1 − z2)
B = z1 (x2 − x3) + z2 (x3 − x1) + z3 (x1 − x2)
C = x1 (y2 − y3) + x2 (y3 − y1) + x3 (y1 − y2)
D = −x1 (y2 z3 − y3 z2) − x2 (y3 z1 − y1 z3) − x3 (y1 z2 − y2 z1)
It is possible that the coordinates defining a polygon facet may not be contained within a single plane.
We can solve this problem by dividing the facet into a set of triangles; or we could find
an approximating plane for the vertex list.
One method for obtaining an approximating plane is to divide the vertex list into subsets, where each subset contains three vertices, and calculate plane parameters A, B, C, and D for each subset.
Any point that is not on the plane and that is visible to the front face of a polygon surface
section is said to be in front of (or outside) the plane, and, thus, outside the object.
And any point that is visible to the back face of the polygon is behind (or inside) the
plane.
Plane equations can be used to identify the position of spatial points relative to the
polygon facets of an object.
For any point (x, y, z) not on a plane with parameters A, B, C, D, we have
Ax + B y + C z + D != 0
Thus, we can identify the point as either behind or in front of a polygon surface contained
within that plane according to the sign (negative or positive) of
Ax + By + Cz + D:
if Ax + B y + C z + D < 0, the point (x, y, z) is behind the plane
if Ax + B y + C z + D > 0, the point (x, y, z) is in front of the plane
Orientation of a polygon surface in space can be described with the normal vector for
theplane containing that polygon
The normal vector points in a direction from inside the plane to the outside; that is, from the back face of the polygon to the front face.
As an example, consider the right face of a unit cube, which lies in the plane x − 1 = 0, so that A = 1, B = C = 0, and D = −1. Thus, the normal vector for this plane is N = (1, 0, 0), which is in the direction of the positive x axis.
That is, the normal vector is pointing from inside the cube to the outside and is perpendicular to the plane x = 1.
The elements of a normal vector can also be obtained using a vector cross-product calculation.
If we have a convex-polygon surface facet and a right-handed Cartesian system, we again select any three vertex positions, V1, V2, and V3, taken in counterclockwise order when viewing from outside the object toward the inside.
Forming two vectors, one from V1 to V2 and the second from V1 to V3, we calculate N as the vector cross-product:
N = (V2 − V1) × (V3 − V1)
This generates values for the plane parameters A, B, and C. We can then obtain the value for parameter D by substituting these values and the coordinates of one of the polygon vertices into
Ax + By + Cz + D = 0
and solving for D.
The plane equation can be expressed in vector form using the normal N and the position
P of any point in the plane as
N·P = −D
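A short C++ sketch of these calculations, assuming three counterclockwise vertices of a facet (names are illustrative): it forms N = (V2 − V1) × (V3 − V1), derives D, and classifies a test point by the sign of Ax + By + Cz + D.

struct Vec3 { double x, y, z; };

static Vec3 subtract (const Vec3& a, const Vec3& b)
{
   return { a.x - b.x, a.y - b.y, a.z - b.z };
}

static Vec3 crossProduct (const Vec3& a, const Vec3& b)
{
   return { a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x };
}

// Plane parameters from three counterclockwise vertices of a facet.
void planeParameters (const Vec3& v1, const Vec3& v2, const Vec3& v3,
                      double& A, double& B, double& C, double& D)
{
   Vec3 n = crossProduct (subtract (v2, v1), subtract (v3, v1));   // N = (A, B, C)
   A = n.x;  B = n.y;  C = n.z;
   D = -(A * v1.x + B * v1.y + C * v1.z);    // from A x1 + B y1 + C z1 + D = 0
}

// Negative => behind (inside) the plane, positive => in front of (outside) it.
double planeSide (double A, double B, double C, double D, const Vec3& p)
{
   return A * p.x + B * p.y + C * p.z + D;
}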
OpenGL Polygon Fill-Area Functions
A rectangle fill area can be displayed with the function
glRect* (x1, y1, x2, y2);
where one corner of the rectangle is at (x1, y1) and the opposite corner is at (x2, y2). Suffix codes on glRect specify the coordinate data type and whether the coordinates are given as arrays. These codes are i (for integer), s (for short), f (for float), d (for double), and v (for vector).
Example
glRecti (200, 100, 50, 250);
If we put the coordinate values for this rectangle into arrays, we can generate the
same square with the following code:
int vertex1 [ ] = {200, 100};
int vertex2 [ ] = {50, 250};
glRectiv (vertex1, vertex2);
Polygon
With the OpenGL primitive constant GL_POLYGON, we can display a single polygon fill area.
Each of the points is represented as an array of (x, y) coordinate values:
glBegin (GL_POLYGON);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glEnd ( );
A polygon vertex list must contain at least three vertices. Otherwise, nothing is displayed.
✓ (a) A single convex polygon fill area generated with the primitive constant GL_POLYGON.
✓ (b) Two unconnected triangles generated with GL_TRIANGLES.
✓ (c) Four connected triangles generated with GL_TRIANGLE_STRIP.
✓ (d) Four connected triangles generated with GL_TRIANGLE_FAN.
Triangles
With the OpenGL primitive constant GL_TRIANGLES, we can display a set of unconnected triangles, as in (b) above.
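The vertex list is not reproduced in these notes; assuming six coordinate arrays p1 through p6 as in the GL_POLYGON example, a plausible call sequence for the two unconnected triangles of (b) is:

glBegin (GL_TRIANGLES);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p6);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );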
In this case, the first three coordinate points define the vertices for one triangle, the next
three points define the next triangle, and so forth.
For each triangle fill area, we specify the vertex positions in a counterclockwise order
Triangle Strip
With the OpenGL primitive constant GL_TRIANGLE_STRIP, we can display a set of connected triangles; a sketch of the corresponding vertex list follows the ordering discussion below.
Each successive triangle shares an edge with the previously defined triangle, so the
ordering of the vertex list must be set up to ensure a consistent display.
For example, our first triangle (n = 1) would be listed as having vertices (p1, p2, p6). The
second triangle (n = 2) would have the vertex ordering (p6, p2, p3). Vertex ordering for
the third triangle (n = 3) would be (p6, p3, p5). And the fourth triangle (n = 4) would be
listed in the polygon tables with vertex ordering (p5, p3, p4).
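Based on this ordering, the strip itself would presumably be specified with the vertex list (p1, p2, p6, p3, p5, p4):

glBegin (GL_TRIANGLE_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p6);
   glVertex2iv (p3);
   glVertex2iv (p5);
   glVertex2iv (p4);
glEnd ( );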
Triangle Fan
Another way to generate a set of connected triangles is to use the "fan" approach:
glBegin (GL_TRIANGLE_FAN);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glEnd ( );
For N vertices, we again obtain N − 2 triangles, provided no vertex positions are repeated, and at least three vertices must be listed, in the proper order, to define front and back faces for each triangle correctly.
Therefore, triangle 1 is defined with the vertex list (p1, p2, p3); triangle 2 has the vertex
ordering (p1, p3, p4); triangle 3 has its vertices specified in the order (p1, p4, p5); and
triangle 4 is listed with vertices (p1, p5, p6).
Quadrilaterals
OpenGL provides for the specification of two types of quadrilaterals.
With the GL_QUADS primitive constant and the following list of eight vertices, specified as two-dimensional coordinate arrays, we can generate the display shown in Figure (a):
glBegin (GL_QUADS);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glVertex2iv (p7);
glVertex2iv (p8);
glEnd ( );
Rearranging the vertex list in the previous quadrilateral code example and changing the primitive constant to GL_QUAD_STRIP, we can obtain the set of connected quadrilaterals shown in Figure (b):
glBegin (GL_QUAD_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p4);
glVertex2iv (p3);
glVertex2iv (p5);
glVertex2iv (p6);
glVertex2iv (p8);
glVertex2iv (p7);
glEnd ( );
For a list of N vertices, we obtain N/2 − 1 quadrilaterals, provided that N ≥ 4. Thus, our first quadrilateral (n = 1) is listed as having a vertex ordering of (p1, p2, p3, p4). The second quadrilateral (n = 2) has the vertex ordering (p4, p3, p6, p5), and the vertex ordering for the third quadrilateral (n = 3) is (p5, p6, p7, p8).
Fill-Area Attributes
We can also fill selected regions of a scene using various brush styles, color-blending combinations, or textures.
For polygons, we could show the edges in different colors, widths, and styles; and we can
select different display attributes for the front and back faces of a region.
Fill patterns can be defined in rectangular color arrays that list different colors for
different positions in the array.
An array specifying a fill pattern is a mask that is to be applied to the display area.
The mask is replicated in the horizontal and vertical directions until the display area is filled with nonoverlapping copies of the pattern.
This process of filling an area with a rectangular pattern is called tiling, and a rectangular fill pattern is sometimes referred to as a tiling pattern. Predefined fill patterns, such as the hatch fill patterns, are available in some systems.
Hatch fill can be applied to regions by drawing sets of line segments to display either single hatching or cross-hatching.
Soft Fill
The linear soft-fill algorithm repaints an area that was originally painted by merging a foreground color F with a single background color B, where F != B.
The current color P of each pixel within the area to be refilled is some linear combination of F and B:
P = tF + (1 − t)B
Where the transparency factor t has a value between 0 and 1 for each pixel.
For values of t less than 0.5, the background color contributes more to the interior color of the region than does the fill color.
If our color values are represented using separate red, green, and blue (RGB) components, each component of the colors, with P = (PR, PG, PB), F = (FR, FG, FB), and B = (BR, BG, BB), is used in the calculation.
We can thus calculate the value of parameter t using one of the RGB color components as follows:
t = (Pk − Bk) / (Fk − Bk)
where k = R, G, or B, and Fk != Bk.
When two background colors B1 and B2 are mixed with foreground color F, the resulting pixel color P is
P = t0F + t1B1 + (1 − t0 − t1)B2
Where the sum of the color-term coefficients t0, t1, and (1 − t0 − t1) must equal 1.
With three background colors and one foreground color, or with two background and two foreground colors, we need all three RGB equations to obtain the relative amounts of the four colors.
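A minimal C++ sketch of the single-background case described above (illustrative names; color components assumed to be in the range 0 to 1): it recovers t from the current pixel color and then repaints the pixel with a new fill color while preserving the blend with the background.

#include <cmath>

struct RGB { double r, g, b; };

// Recover the transparency factor t from P = t F + (1 - t) B, using the
// color component with the largest |Fk - Bk| (Fk != Bk is required).
double blendFactor (const RGB& P, const RGB& F, const RGB& B)
{
   double dr = F.r - B.r, dg = F.g - B.g, db = F.b - B.b;
   if (std::fabs (dr) >= std::fabs (dg) && std::fabs (dr) >= std::fabs (db))
      return (P.r - B.r) / dr;
   if (std::fabs (dg) >= std::fabs (db))
      return (P.g - B.g) / dg;
   return (P.b - B.b) / db;
}

// Repaint the pixel with a new foreground color, keeping the same blend factor.
RGB softRepaint (const RGB& P, const RGB& F, const RGB& B, const RGB& newF)
{
   double t = blendFactor (P, F, B);
   return { t * newF.r + (1.0 - t) * B.r,
            t * newF.g + (1.0 - t) * B.g,
            t * newF.b + (1.0 - t) * B.b };
}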
General Scan-Line Polygon-Fill Algorithm
The figure illustrates the basic scan-line procedure for a solid-color fill of a polygon.
For each scan line that crosses the polygon, the edge intersections are sorted from left to right, and then the pixel positions between, and including, each intersection pair are set to the specified fill color. In the figure, the fill color is applied to the five pixels from x = 10 to x = 14 and to the seven pixels from x = 18 to x = 24.
Whenever a scan line passes through a vertex, it intersects two polygon edges at that
point.
In some cases, this can result in an odd number of boundary intersections for a scan line.
Scan line y' intersects an even number of edges, and the two pairs of intersection points along this scan line correctly identify the interior pixel spans.
But scan line y intersects five polygon edges.
Thus, as we process scan lines, we need to distinguish between these cases.
For scan line y, the two edges sharing an intersection vertex are on opposite sides of the
scan line.
But for scan line y’, the two intersecting edges are both above the scan line.
Thus, a vertex that has adjoining edges on opposite sides of an intersecting scan line
should be counted as just one boundary intersection point.
If the three endpoint y values of two consecutive edges monotonically increase or
decrease, we need to count the shared (middle) vertex as a single intersection point
for the scan line passing through that vertex.
Otherwise, the shared vertex represents a local extremum (minimum or maximum) on the
polygon boundary, and the two edge intersections with the scan line passing through
that vertex can be added to the intersection list.
One method for implementing the adjustment to the vertex-intersection count is to
shorten some polygon edges to split those vertices that should be counted as one
intersection.
We can process nonhorizontal edges around the polygon boundary in the order specified,
either clockwise or counterclockwise.
Adjusting endpoint y values for a polygon, as we process edges in order around the polygon perimeter. The edge currently being processed is indicated as a solid line.
In (a), the y coordinate of the upper endpoint of the current edge is decreased by 1.
In (b), the y coordinate of the upper endpoint of the next edge is decreased by 1.
The slope of this edge can be expressed in terms of the scan-line intersection coordinates:
m = (yk+1 − yk) / (xk+1 − xk)
Because the change in y coordinates between the two scan lines is simply
yk+1 − yk = 1
the x-intersection value xk+1 on the upper scan line can be determined from the x-intersection value xk on the preceding scan line as
xk+1 = xk + 1/m
Each successive x intercept can thus be calculated by adding the inverse of the slope and
rounding to the nearest integer.
Along an edge with slope m, the intersection xk value for scan line k above the initial scan
line can be calculated as
xk = x0 + k/m
where m is the ratio of two integers.
To perform a polygon fill efficiently, we can first store the polygon boundary in a sorted
edge table that contains all the information necessary to process the scan lines efficiently.
Proceeding around the edges in either a clockwise or a counterclockwise order, we can
use a bucket sort to store the edges, sorted on the smallest y value of each edge, in the
correct scan-line positions.
Only nonhorizontal edges are entered into the sorted edge table.
Each entry in the table for a particular scan line contains the maximum y value for that edge, the x-intercept value (at the lower vertex) for the edge, and the inverse slope of the edge. For each scan line, the edges are in sorted order from left to right.
We process the scan lines from the bottom of the polygon to its top, producing an active
edge list for each scan line crossing the polygon boundaries.
The active edge list for a scan line contains all edges crossed by that scan line, with
iterative coherence calculations used to obtain the edge intersections
Implementation of edge-intersection calculations can be facilitated by storing Δx and Δy values in the sorted edge list.
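The following C++ sketch is a simplified illustration of the scan-line procedure, not the full sorted-edge-table implementation: for each scan line it gathers the x intersections of the nonhorizontal edges, sorts them, and fills between pairs. Treating each edge as half-open in y (its upper endpoint excluded) implements the single-count rule for shared vertices described above. The setPixel callback is assumed to be supplied by the caller.

#include <vector>
#include <algorithm>
#include <cmath>

struct Pt2 { double x, y; };

void scanLineFill (const std::vector<Pt2>& v, void (*setPixel)(int x, int y))
{
   int n = v.size();
   double yMin = v[0].y, yMax = v[0].y;
   for (const Pt2& p : v) { yMin = std::min (yMin, p.y); yMax = std::max (yMax, p.y); }

   for (int y = (int) std::ceil (yMin); y <= (int) std::floor (yMax); y++) {
      std::vector<double> xs;
      for (int k = 0; k < n; k++) {
         const Pt2& a = v[k];
         const Pt2& b = v[(k + 1) % n];
         if (a.y == b.y) continue;                        // skip horizontal edges
         double lo = std::min (a.y, b.y), hi = std::max (a.y, b.y);
         if (y >= lo && y < hi)                           // half-open interval [lo, hi)
            xs.push_back (a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y));
      }
      std::sort (xs.begin (), xs.end ());
      for (std::size_t i = 0; i + 1 < xs.size (); i += 2)  // fill between intersection pairs
         for (int x = (int) std::ceil (xs[i]); x <= (int) std::floor (xs[i + 1]); x++)
            setPixel (x, y);
   }
}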
OpenGL Fill-Area Attribute Functions
To fill polygons with a pattern in OpenGL, we define the pattern as a 32 × 32 bit mask. Once we have set a mask, we can establish it as the current fill pattern with the function
glPolygonStipple (fillPattern);
We need to enable the fill routines before we specify the vertices for the polygons that are
to be filled with the current pattern
glEnable (GL_POLYGON_STIPPLE);
Similarly, we turn off pattern filling with
glDisable (GL_POLYGON_STIPPLE);
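A sketch of how a mask might be defined and enabled (the checkerboard pattern here is only an example; the stipple mask is a 32 × 32 bit pattern stored in 128 bytes, and the GL headers are assumed to be included):

void setCheckerStipple ( )
{
   GLubyte fillPattern [128];                  // 32 rows x 4 bytes per row
   for (int row = 0; row < 32; row++)
      for (int b = 0; b < 4; b++)              // each byte covers 8 pixels
         fillPattern [row * 4 + b] = (((row / 8) + b) % 2 == 0) ? 0xFF : 0x00;

   glPolygonStipple (fillPattern);             // the pattern is copied into GL state
   glEnable (GL_POLYGON_STIPPLE);
   /* Specify the polygons to be filled with the pattern, then:
      glDisable (GL_POLYGON_STIPPLE);  */
}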
In addition to pattern filling, we can display a polygon with only its edges or only its vertex points. The display mode is selected with the function
glPolygonMode (face, displayMode);
Parameter face is assigned the OpenGL constant GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK. If we want only the polygon edges displayed, we assign the constant GL_LINE to parameter displayMode.
To plot only the polygon vertex points, we assign the constant GL_POINT to parameter displayMode.
Another option is to display a polygon with both an interior fill and a different color or pattern for its edges.
The following code section fills a polygon interior with a green color, and then the edges are assigned a red color:
glColor3f (0.0, 1.0, 0.0);
/* Invoke polygon-generating routine. */
glColor3f (1.0, 0.0, 0.0);
glPolygonMode (GL_FRONT, GL_LINE);
/* Invoke polygon-generating routine again. */
For a three-dimensional polygon (one that does not have all vertices in the xy plane), this method for displaying the edges of a filled polygon may produce gaps along the edges.
This effect is sometimes referred to as stitching.
One way to eliminate the gaps along displayed edges of a three-dimensional polygon is to shift the depth values calculated by the fill routine so that they do not overlap with the edge depth values for that polygon.
We do this with the following two OpenGL functions:
glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (factor1, factor2);
The first function activates the offset routine for scan-line filling, and the second function is
used to set a couple of floating-point values factor1 and factor2 that are used to
calculate the amount of depth offset.
The calculation for this depth offset is
depthOffset = factor1 · maxSlope + factor2 · const
Where,
maxSlope is the maximum slope of the polygon and
const is an implementation constant
As an example of assigning values to offset factors, we can modify the previous code
segment as follows:
glColor3f (0.0, 1.0, 0.0);
glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (1.0, 1.0);
/* Invoke polygon-generating routine. */
glDisable (GL_POLYGON_OFFSET_FILL);
glColor3f (1.0, 0.0, 0.0);
glPolygonMode (GL_FRONT, GL_LINE);
/* Invoke polygon-generating routine again. */
Another method for eliminating the stitching effect along polygon edges is to use the OpenGL stencil buffer to limit the polygon interior filling so that it does not overlap the edges.
To display a concave polygon using OpenGL routines, we must first split it into a set of convex polygons.
We typically divide a concave polygon into a set of triangles, and then we could display the triangles.
Dividing a concave polygon (a) into a set of triangles (b) produces triangle edges (dashed) that
are interior to the original polygon.
Fortunately, OpenGL provides a mechanism that allows us to eliminate selected edges from a wire-frame display: each polygon vertex can be given a one-bit edge flag that indicates whether the edge following that vertex is a boundary edge.
So all we need do is set that bit flag to "off" and the edge following that vertex will not be displayed.
We set this flag for an edge with the following function:
glEdgeFlag (flag);
To indicate that a vertex does not precede a boundary edge, we assign the OpenGL
constant GL_FALSE to parameter flag.
This applies to all subsequently specified vertices until the next call to glEdgeFlag is made.
The OpenGL constant GL_TRUE turns the edge flag on again, which is the default.
As an illustration of the use of an edge flag, the following code displays only two edges of the defined triangle.
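A sketch of such a code section, assuming three vertex arrays v1, v2, and v3 and a wire-frame display mode:

glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
glBegin (GL_POLYGON);
   glVertex3fv (v1);          /* edge from v1 to v2 is displayed */
   glEdgeFlag (GL_FALSE);
   glVertex3fv (v2);          /* edge from v2 to v3 is suppressed */
   glEdgeFlag (GL_TRUE);
   glVertex3fv (v3);          /* edge from v3 back to v1 is displayed */
glEnd ( );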
Edge flags can also be specified in an array, to be used in conjunction with a vertex array:
glEnableClientState (GL_EDGE_FLAG_ARRAY);
glEdgeFlagPointer (offset, edgeFlagArray);
Module 2 2D Viewing
The geometric-transformation functions that are available in all graphics packages are
those for translation, rotation, and scaling.
Two-Dimensional Translation
We perform a translation on a single coordinate point by adding offsets to its
coordinates so as to generate a new coordinate position.
We are moving the original point position along a straight-line path to its new location.
To translate a two-dimensional position, we add translation distances tx and ty to the original coordinates (x, y) to obtain the new coordinate position (x', y'), as shown in the figure:
x' = x + tx,    y' = y + ty
The translation distance pair (tx, ty) is called a translation vector or shift vector.
In column-vector representation, the positions P and P' and the translation vector T are written as two-element column matrices. This allows us to write the two-dimensional translation equations in the matrix form
P' = P + T
Code:
class wcPt2D {
   public:
      GLfloat x, y;
};
void translatePolygon (wcPt2D * verts, GLint nVerts, GLfloat tx, GLfloat ty)
{
   GLint k;
for (k = 0; k < nVerts; k++) {
verts [k].x = verts [k].x + tx;
verts [k].y = verts [k].y + ty;
}
glBegin (GL_POLYGON);
for (k = 0; k < nVerts; k++)
glVertex2f (verts [k].x, verts [k].y);
glEnd ( );
}
Two-Dimensional Rotation
We generate a rotation transformation of an object by specifying a rotation axis and a
rotation angle.
A positive value for the angle θ defines a counterclockwise rotation about the pivot point, as in the figure above, and a negative value rotates objects in the clockwise direction.
The angular and coordinate relationships of the original and transformed point positions are shown in the figure.
In this figure, r is the constant distance of the point from the origin, angle φ is the original angular position of the point from the horizontal, and θ is the rotation angle.
We can express the transformed coordinates in terms of angles θ and φ as
x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ
y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ
With x = r cos φ and y = r sin φ, the rotation equations relative to the origin become
x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
The transformation equations for rotation of a point about any specified rotation position (xr, yr) are:
x' = xr + (x − xr) cos θ − (y − yr) sin θ
y' = yr + (x − xr) sin θ + (y − yr) cos θ
Code:
class wcPt2D {
public:
GLfloat x, y;
};
void rotatePolygon (wcPt2D * verts, GLint nVerts, wcPt2D pivPt, GLdouble theta)
{
   /* Requires <cmath> for cos and sin. */
   wcPt2D * vertsRot = new wcPt2D [nVerts];   // storage for the rotated vertices
   GLint k;
   for (k = 0; k < nVerts; k++) {
      vertsRot [k].x = pivPt.x + (verts [k].x - pivPt.x) * cos (theta) - (verts [k].y - pivPt.y) * sin (theta);
      vertsRot [k].y = pivPt.y + (verts [k].x - pivPt.x) * sin (theta) + (verts [k].y - pivPt.y) * cos (theta);
   }
   glBegin (GL_POLYGON);
      for (k = 0; k < nVerts; k++)
         glVertex2f (vertsRot [k].x, vertsRot [k].y);
   glEnd ( );
   delete [ ] vertsRot;
}
Two-Dimensional Scaling
To alter the size of an object, we apply a scaling transformation.
A simple two-dimensional scaling operation is performed by multiplying object positions (x, y) by scaling factors sx and sy to produce the transformed coordinates (x', y'):
x' = x · sx,    y' = y · sy
The basic two-dimensional scaling equations can also be written in the matrix form
P' = S · P
where S is the 2 × 2 scaling matrix with the scaling factors sx and sy on its diagonal.
Unequal values for sx and sy result in a differential scaling that is often used in design
applications.
In some systems, negative values can also be specified for the scaling parameters.
This not only resizes an object, it also reflects it about one or more of the coordinate axes.
Figure below illustrates scaling of a line by assigning the value 0.5 to both sx and sy
We can control the location of a scaled object by choosing a position, called the fixed
point, that is to remain unchanged after the scaling transformation.
Coordinates for the fixed point, (xf, yf), are often chosen at some object position, such as its centroid, but any other spatial position can be selected.
For a coordinate position (x, y), the scaled coordinates (x', y') are then calculated from the following relationships:
x' − xf = (x − xf) sx,    y' − yf = (y − yf) sy
We can rewrite these equations to separate the multiplicative and additive terms:
x' = x · sx + xf (1 − sx)
y' = y · sy + yf (1 − sy)
where the additive terms xf (1 − sx) and yf (1 − sy) are constants for all points in the object.
Code:
class wcPt2D {
public:
GLfloat x, y;
};
void scalePolygon (wcPt2D * verts, GLint nVerts, wcPt2D fixedPt, GLfloat sx, GLfloat sy)
{
   wcPt2D * vertsNew = new wcPt2D [nVerts];   // storage for the scaled vertices
   GLint k;
   for (k = 0; k < nVerts; k++) {
      vertsNew [k].x = verts [k].x * sx + fixedPt.x * (1 - sx);
      vertsNew [k].y = verts [k].y * sy + fixedPt.y * (1 - sy);
   }
   glBegin (GL_POLYGON);
      for (k = 0; k < nVerts; k++)
         glVertex2f (vertsNew [k].x, vertsNew [k].y);
   glEnd ( );
   delete [ ] vertsNew;
}
Matrix Representations and Homogeneous Coordinates
Each of the basic two-dimensional transformations can be expressed in the general matrix form
P' = M1 · P + M2
with coordinate positions P and P' represented as column vectors; matrix M1 contains the multiplicative terms and M2 contains the translational terms.
Homogeneous Coordinates
Multiplicative and translational terms for a two-dimensional geometric transformation
can be combined into a single matrix if we expand the representations to 3 × 3 matrices
We can use the third column of a transformation matrix for the translation terms, and all transformation equations can be expressed as matrix multiplications.
We also need to expand the matrix representation for a two-dimensional coordinate position to a three-element column matrix: a position (x, y) is represented with the homogeneous-coordinate triple (xh, yh, h), where x = xh / h and y = yh / h. A convenient choice is simply h = 1, so that each two-dimensional position is represented by the column matrix (x, y, 1).
With h = 1, translation in homogeneous coordinates becomes
[x']   [1  0  tx] [x]
[y'] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]
This translation operation can be written in the abbreviated form
P' = T(tx, ty) · P
where T(tx, ty) is the 3 × 3 translation matrix above.
Similarly, rotation about the coordinate origin is written as P' = R(θ) · P, where the rotation transformation operator R(θ) is the 3 × 3 matrix with rotation parameter θ:
       [cos θ   −sin θ   0]
R(θ) = [sin θ    cos θ   0]
       [0        0       1]
and scaling relative to the origin is P' = S(sx, sy) · P, with S(sx, sy) containing the scaling factors sx and sy on its diagonal.
Inverse Transformations
For translation, we obtain the inverse matrix by negating the translation distances: T^−1(tx, ty) = T(−tx, −ty).
An inverse rotation is accomplished by replacing the rotation angle by its negative. A two-dimensional rotation through an angle θ about the coordinate origin has the inverse transformation matrix
           [ cos θ   sin θ   0]
R^−1(θ) =  [−sin θ   cos θ   0]
           [ 0       0       1]
We form the inverse matrix for any scaling transformation by replacing the scaling parameters with their reciprocals: the inverse of S(sx, sy) is the scaling matrix S(1/sx, 1/sy).
Composite Transformations
Using matrix representations, we can set up a sequence of transformations as a composite transformation matrix by calculating the product of the individual transformations, with coordinate positions expressed as column vectors.
If two successive translation vectors (t1x, t1y) and (t2x, t2y) are applied to a position P, the composite transformation matrix for this sequence of translations is
T(t2x, t2y) · T(t1x, t1y) = T(t1x + t2x, t1y + t2y)
which demonstrates that two successive translations are additive.
By multiplying the two rotation matrices, we can verify that two successive rotations are additive:
R(θ2) · R(θ1) = R(θ1 + θ2)
So that the final rotated coordinates of a point can be calculated with the composite
rotation matrix as
P’ = R(θ1 + θ2) · P
We can generate a two-dimensional rotation about any other pivot point (xr , yr )by
performing the following sequence of translate-rotate-translate operations:
Translate the object so that the pivot-point position is moved to the coordinate origin.
Rotate the object about the coordinate origin.
Translate the object so that the pivot point is returned to its original position.
The composite transformation matrix for this sequence is obtained with the concatenation
T(xr, yr) · R(θ) · T(−xr, −yr) = R(xr, yr, θ)
where T(−xr, −yr) = T^−1(xr, yr).
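A C++ sketch of building this composite matrix explicitly with 3 × 3 matrix multiplications (illustrative names; <cmath> is assumed):

#include <cmath>

typedef double Mat3 [3][3];

// result = a * b
void matMult (const Mat3 a, const Mat3 b, Mat3 result)
{
   for (int i = 0; i < 3; i++)
      for (int j = 0; j < 3; j++) {
         result [i][j] = 0.0;
         for (int k = 0; k < 3; k++)
            result [i][j] += a [i][k] * b [k][j];
      }
}

// Composite matrix R(xr, yr, theta) = T(xr, yr) . R(theta) . T(-xr, -yr)
void pivotRotationMatrix (double xr, double yr, double theta, Mat3 result)
{
   Mat3 toOrigin = { {1.0, 0.0, -xr}, {0.0, 1.0, -yr}, {0.0, 0.0, 1.0} };
   Mat3 rotate   = { {cos (theta), -sin (theta), 0.0},
                     {sin (theta),  cos (theta), 0.0},
                     {0.0,          0.0,         1.0} };
   Mat3 back     = { {1.0, 0.0, xr}, {0.0, 1.0, yr}, {0.0, 0.0, 1.0} };
   Mat3 temp;
   matMult (rotate, toOrigin, temp);    // R(theta) . T(-xr, -yr)
   matMult (back, temp, result);        // T(xr, yr) . (R . T)
}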
To produce a two-dimensional scaling with respect to a selected fixed position (xf, yf) when we have a function that can scale relative to the coordinate origin only, we can use the following sequence:
1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin.
3. Use the inverse of the translation in step (1) to return the object to its original position.
Concatenating the matrices for these three operations produces the required scaling matrix:
T(xf, yf) · S(sx, sy) · T(−xf, −yf) = S(xf, yf, sx, sy)
The composite matrix resulting from the product of these three transformations is
                    [sx   0    xf (1 − sx)]
S(xf, yf, sx, sy) = [0    sy   yf (1 − sy)]
                    [0    0    1          ]
Matrix Concatenation Properties
Property 1:
Multiplication of matrices is associative.
For any three matrices M1, M2, and M3, the matrix product M3 · M2 · M1 can be performed by first multiplying M3 and M2 or by first multiplying M2 and M1:
M3 · M2 · M1 = (M3 · M2) · M1 = M3 · (M2 · M1)
We can construct a composite matrix either by multiplying from left to right
(premultiplying) or by multiplying from right to left (postmultiplying)
Property 2:
Transformation products, on the other hand, may not be commutative. The matrix product M2 · M1 is not equal to M1 · M2, in general.
This means that if we want to translate and rotate an object, we must be careful about the
order in which the composite matrix is evaluated
Reversing the order in which a sequence of transformations is performed may affect the
transformed position of an object. In (a), an object is first translated in the x direction,
then rotated counterclockwise through an angle of 45◦. In (b), the object is first rotated
45◦ counterclockwise, then translated in the x direction.
General Composite Transformations
A general two-dimensional transformation, representing a combination of translations, rotations, and scalings, can be expressed as
[x']   [rsxx   rsxy   trsx] [x]
[y'] = [rsyx   rsyy   trsy] [y]
[1 ]   [0      0      1   ] [1]
The four elements rsjk are the multiplicative rotation-scaling terms in the transformation, which involve only rotation angles and scaling factors. If an object is to be scaled and rotated about its centroid coordinates (xc, yc) and then translated, the values for the elements of the composite transformation matrix are
T(tx, ty) · R(xc, yc, θ) · S(xc, yc, sx, sy) =
[sx cos θ   −sy sin θ   xc (1 − sx cos θ) + yc sy sin θ + tx]
[sx sin θ    sy cos θ   yc (1 − sy cos θ) − xc sx sin θ + ty]
[0           0          1]
Although the above matrix requires nine multiplications and six additions, the explicit calculations for the transformed coordinates are
x' = x · rsxx + y · rsxy + trsx,    y' = x · rsyx + y · rsyy + trsy
We need actually perform only four multiplications and four additions to transform
coordinate positions.
Because rotation calculations require trigonometric evaluations and several
multiplications for each transformed point, computational efficiency can become an
important consideration in rotation transformations
If we are rotating in small angular steps about the origin, for instance, we can set cos θ to
1.0 and reduce transformation calculations at each step to two multiplications and two
additions for each set of coordinates to be rotated.
These rotation calculations are
x' = x − y sin θ,    y' = x sin θ + y
A transformation matrix involving only translations and rotations describes a rigid-body transformation and has the general form
[rxx   rxy   trx]
[ryx   ryy   try]
[0     0     1  ]
where the four elements rjk are the multiplicative rotation terms, and the elements trx and try are the translational terms.
A rigid-body change in coordinate position is also sometimes referred to as a rigid-
motion transformation.
In addition, the above matrix has the property that its upper-left 2 × 2 submatrix is an
orthogonal matrix.
If we consider each row (or each column) of the submatrix as a vector, then the two row
vectors (rxx, rxy) and (ryx, ryy) (or the two column vectors) form an orthogonal set of unit
vectors.
Such a set of vectors is also referred to as an orthonormal vector set. Each vector has unit length,
rxx² + rxy² = ryx² + ryy² = 1
and the two vectors are perpendicular to each other:
rxx ryx + rxy ryy = 0
Therefore, if these unit vectors are transformed by the rotation submatrix, then the vector
(rxx, rxy) is converted to a unit vector along the x axis and the vector (ryx, ryy) is
transformed into a unit vector along the y axis of the coordinate system
For example, the following rigid-body transformation first rotates an object through an angle θ about a pivot point (xr, yr) and then translates the object:
T(tx, ty) · R(xr, yr, θ) =
[cos θ   −sin θ   xr (1 − cos θ) + yr sin θ + tx]
[sin θ    cos θ   yr (1 − cos θ) − xr sin θ + ty]
[0        0       1]
Here, the orthogonal unit vectors in the upper-left 2 × 2 submatrix are (cos θ, −sin θ) and (sin θ, cos θ).
The rotation matrix for revolving an object from position (a) to position (b) can be constructed
with the values of the unit orientation vectors u’ and v’ relative to the original orientation.
Reflection
A transformation that produces a mirror image of an object is called a reflection.
For a two-dimensional reflection, this image is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
Reflection about the line y = 0 (the x axis) is accomplished with the transformation matrix
[1    0   0]
[0   −1   0]
[0    0   1]
This transformation retains x values, but "flips" the y values of coordinate positions.
The resulting orientation of an object after it has been reflected about the x axis is shown in the figure.
A reflection about the line x = 0 (the y axis) flips x coordinates while keeping y coordinates the same. The matrix for this transformation is
[−1   0   0]
[ 0   1   0]
[ 0   0   1]
The figure below illustrates the change in position of an object that has been reflected about the line x = 0.
We flip both the x and y coordinates of a point by reflecting relative to an axis that is perpendicular to the xy plane and that passes through the coordinate origin. The matrix representation for this reflection is
[−1    0   0]
[ 0   −1   0]
[ 0    0   1]
If we choose the reflection axis as the diagonal line y = x (figure below), the reflection matrix is
[0   1   0]
[1   0   0]
[0   0   1]
To obtain a transformation matrix for reflection about the diagonal y = −x, we could concatenate matrices for the transformation sequence:
clockwise rotation by 45°,
reflection about the y axis, and
counterclockwise rotation by 45°.
The resulting transformation matrix is
[ 0   −1   0]
[−1    0   0]
[ 0    0   1]
Shear
A transformation that distorts the shape of an object such that the transformed shape
appears as if the object were composed of internal layers that had been caused to slide
over each other is called a shear.
Two common shearing transformations are those that shift coordinate x values and those that shift y values. An x-direction shear relative to the x axis is produced with the transformation matrix
[1   shx   0]
[0   1     0]
[0   0     1]
which transforms coordinate positions as
x' = x + shx · y,    y' = y
Any real number can be assigned to the shear parameter shx. Setting parameter shx to the value 2, for example, changes a square into a parallelogram, as shown below. Negative values for shx shift coordinate positions to the left.
A unit square (a) is converted to a parallelogram (b) using the x-direction shear with shx = 2.
A y-direction shear relative to the line x = xref is generated with the transformation matrix
[1     0   0          ]
[shy   1   −shy · xref]
[0     0   1          ]
which generates the transformed coordinate positions
x' = x,    y' = y + shy (x − xref)
Raster Methods for Geometric Transformations
Moving a block of pixel values from one position to another is termed a block transfer, a bitblt, or a pixblt.
The figure below illustrates a two-dimensional translation implemented as a block transfer of a refresh-buffer area.
Translating an object from screen position (a) to the destination position shown in (b) by moving
a rectangular block of pixel values. Coordinate positions Pmin and Pmax specify the limits of the
rectangular block to be moved, and P0 is the destination reference position.
For array rotations that are not multiples of 90◦, we need to do some extra processing.
The general procedure is illustrated in Figure below.
Each destination pixel area is mapped onto the rotated array, and the amount of overlap with the rotated pixel areas is calculated.
A color for a destination pixel can then be computed by averaging the colors of the overlapped source pixels, weighted by their percentage of area overlap.
Pixel areas in the original block are scaled, using specified values for sx and sy, and then mapped onto a set of destination pixels.
The color of each destination pixel is then assigned according to its area of overlap with the scaled pixel areas.
OpenGL Raster Transformations
A block of RGB color values in a buffer can be saved in an array with the function
glReadPixels (xmin, ymin, width, height, GL_RGB, GL_UNSIGNED_BYTE, colorArray);
If color-table indices are stored at the pixel positions, we replace the constant GL_RGB with GL_COLOR_INDEX.
To rotate the color values, we rearrange the rows and columns of the color array, as described in the previous section. Then we put the rotated array back in the buffer with
glDrawPixels (width, height, GL_RGB, GL_UNSIGNED_BYTE, colorArray);
We can also combine raster transformations with logical operations, such as the exclusive-or operator, to produce various effects.
OpenGL Functions for Two-Dimensional Geometric Transformations
A 4 × 4 translation matrix is constructed with the routine
glTranslate* (tx, ty, tz);
Translation parameters tx, ty, and tz can be assigned any real-number values, and the single suffix code to be affixed to this function is either f (float) or d (double).
For two-dimensional applications, we set tz = 0.0; and a two-dimensional
position is represented as a four-element column matrix with the z
component equal to 0.0.
example: glTranslatef (25.0, -10.0, 0.0);
Similarly, a 4 × 4 rotation matrix is generated with
glRotate* (theta, vx, vy, vz);
where the vector v = (vx, vy, vz) can have any floating-point values for its
components.
This vector defines the orientation for a rotation axis that passes through
the coordinate origin.
If v is not specified as a unit vector, then it is normalized automatically
before the elements of the rotation matrix are computed.
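For example, the pivot-point rotation discussed earlier could be set up with these functions for a pivot at (50, 50) (a sketch; OpenGL postmultiplies the current matrix, so the calls appear in the reverse of the order in which they are applied to coordinate positions):

glMatrixMode (GL_MODELVIEW);
glLoadIdentity ( );
glTranslatef (50.0, 50.0, 0.0);        /* move the pivot point back to its position */
glRotatef (45.0, 0.0, 0.0, 1.0);       /* rotate 45 degrees about the z axis */
glTranslatef (-50.0, -50.0, 0.0);      /* translate the pivot point to the origin */
/* Invoke the polygon-generating routine. */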
OpenGL Matrix Operations
The matrix to which OpenGL applies subsequent transformation operations is selected with
glMatrixMode (GL_MODELVIEW);
which designates the 4 × 4 modelview matrix as the current matrix.
Two other modes that we can set with the glMatrixMode function are the texture
mode and the color mode.
The texture matrix is used for mapping texture patterns to surfaces, and the color
matrix is used to convert from one color model to another.
The default argument for the glMatrixMode function is GL_MODELVIEW.
With the following function, we assign the identity matrix to the current matrix:
glLoadIdentity ( );
Alternatively, we can assign other values to the elements of the current matrix using
glLoadMatrix* (elements16);
A single-subscripted, 16-element array of floating-point values is specified with
parameter elements16, and a suffix code of either f or d is used to designate the data type
The elements in this array must be specified in column-major order
To illustrate this ordering, we initialize the modelview matrix with the following code:
glMatrixMode (GL_MODELVIEW);
GLfloat elems [16];
GLint k;
for (k = 0; k < 16; k++)
elems [k] = float (k);
glLoadMatrixf (elems);
which produces the matrix
[0.0   4.0    8.0   12.0]
[1.0   5.0    9.0   13.0]
[2.0   6.0   10.0   14.0]
[3.0   7.0   11.0   15.0]
We can also concatenate a specified matrix with the current matrix as follows:
glMultMatrix* (otherElements16);
Again, the suffix code is either f or d, and parameter otherElements16 is a 16-element,
single-subscripted array that lists the elements of some other matrix in column-major
order.
Thus, assuming that the current matrix is the modelview matrix, which we designate as M, then the updated modelview matrix is computed as
M = M · M'
The glMultMatrix function can also be used to set up any transformation sequence with
individually defined matrices.
For example,
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ( ); // Set current matrix to the identity.
glMultMatrixf (elemsM2); // Postmultiply identity with matrix M2.
glMultMatrixf (elemsM1); // Postmultiply M2 with matrix M1.
produces the following current modelview matrix:
M = M2 · M1
By changing the position of a viewport, we can view objects at different positions on the display area of an output device.
Usually, clipping windows and viewports are rectangles in standard position, with the rectangle edges parallel to the coordinate axes.
We first consider only rectangular viewports and clipping windows, as illustrated in the figure.
Viewing Pipeline
The mapping of a two-dimensional, world-coordinate scene description to device coordinates is called a two-dimensional viewing transformation.
This transformation is simply referred to as the window-to-viewport transformation or the
windowing transformation
We can describe the steps for two-dimensional viewing as indicated in Figure
Some systems use normalized coordinates in the range from 0 to 1, and others use a normalized range from −1 to 1.
At the final step of the viewing transformation, the contents of the viewport are transferred to positions within the display window.
Clipping is usually performed in normalized coordinates.
This allows us to reduce computations by first concatenating the various transformation matrices.
We must set the parameters for the clipping window as part of the projection transformation.
Function:
glMatrixMode (GL_PROJECTION);
We can also set the initialization as
glLoadIdentity ( );
This ensures that each time we enter the projection mode, the matrix will be reset
to the identity matrix so that the new viewing parameters are not combined with the
previous ones
The clipping-window limits can then be set with the GLU function
gluOrtho2D (xwmin, xwmax, ywmin, ywmax);
Normalized coordinates in the range from −1 to 1 are used in the OpenGL clipping routines.
Objects outside the normalized square (and outside the clipping window) are eliminated from the scene to be displayed.
If we do not specify a clipping window in an application program, the default coordinates are (xwmin, ywmin) = (−1.0, −1.0) and (xwmax, ywmax) = (1.0, 1.0).
Thus the default clipping window is the normalized square centered on the coordinate origin with a side length of 2.0.
OpenGL Viewport Function
A viewport is specified with the function
glViewport (xvmin, yvmin, vpWidth, vpHeight);
where xvmin and yvmin give the integer screen position of the lower-left corner of the viewport relative to the lower-left corner of the display window, and vpWidth and vpHeight are the pixel width and height of the viewport.
Coordinates for the upper-right corner of the viewport are calculated for this transformation in terms of the viewport width and height:
xvmax = xvmin + vpWidth,    yvmax = yvmin + vpHeight
Creating a GLUT Display Window
GLUT is first initialized with the function
glutInit (&argc, argv);
We have three functions in GLUT for defining a display window and choosing its dimensions and position:
1. glutInitWindowPosition (xTopLeft, yTopLeft);
gives the integer, screen-coordinate position for the top-left corner of the display
window, relative to the top-left corner of the screen
7. glClearIndex (index);
This function sets the display-window color using color-index mode, where parameter index is assigned an integer value corresponding to a position within the color table.
The current display window can be repositioned with the function
glutPositionWindow (xNewTopLeft, yNewTopLeft);
Similarly, the following function resets the size of the current display window:
glutReshapeWindow (dwNewWidth, dwNewHeight);
With the following command, we can expand the current display window to fill the
screen:
glutFullScreen ( );
Whenever the size of a display window is changed, its aspect ratio may change and
objects may be distorted from their original shapes. We can adjust for a change in
display-window dimensions using the statement
glutReshapeFunc (winReshapeFcn);
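A sketch of such a reshape callback (the world-coordinate limits here are only an example): it resets the viewport to the new window size and restores the projection so that the clipping window keeps its original extents.

void winReshapeFcn (GLint newWidth, GLint newHeight)
{
   glViewport (0, 0, newWidth, newHeight);     /* use the full new window as the viewport */

   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, 200.0, 0.0, 150.0);        /* example clipping-window limits */

   glMatrixMode (GL_MODELVIEW);
}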
We use the following routine to convert the current display window to an icon in the form
of a small picture or symbol representing the window:
glutIconifyWindow ( );
The label on this icon will be the same name that we assigned to the window, but we can
change this with the following command:
glutSetIconTitle ("Icon Name");
We also can change the name of the display window with a similar command:
glutSetWindowTitle ("New Window Name");
We can choose any display window to be in front of all other windows by first
designating it as the current window, and then issuing the “pop-window” command:
glutSetWindow (windowID);
glutPopWindow ( );
In a similar way, we can “push” the current display window to the back so that it is
behind all other display windows. This sequence of operations is
glutSetWindow (windowID);
glutPushWindow ( );
We can also take the current window off the screen with
glutHideWindow ( );
In addition, we can return a “hidden” display window, or one that has been converted to
an icon, by designating it as the current display window and then invoking the function
glutShowWindow ( );
GLUT Subwindows
Within a selected display window, we can set up any number of second-level display
windows, which are called subwindows.
We create a subwindow with the following function:
glutCreateSubWindow (windowID, xBottomLeft, yBottomLeft, width, height);
Parameter windowID identifies the display window in which we want to set up the
subwindow.
Subwindows are assigned a positive integer identifier in the same way that first-level
display windows are numbered, and we can place a subwindow inside another
subwindow.
Each subwindow can be assigned an individual display mode and other parameters. We
can even reshape, reposition, push, pop, hide, and show subwindows
Viewing Graphics Objects in a GLUT Display Window
After we have created a display window and selected its position, size, color, and other
characteristics, we indicate what is to be shown in that window
Then we invoke the following function to assign something to that window:
glutDisplayFunc (pictureDescrip);
This routine, called pictureDescrip for this example, is referred to as a callback function
because it is the routine that is to be executed whenever GLUT determines that the
display-window contents should be renewed.
We may need to call glutDisplayFunc after the glutPopWindow command if the display
window has been damaged during the process of redisplaying the windows.
In this case, the following function is used to indicate that the contents of the current
display window should be renewed:
glutPostRedisplay ( );
Various display-window parameters can be retrieved with the query function
glutGet (stateParameter);
For example:
GLUT_WINDOW_X: obtains the x-coordinate position for the top-left corner of the current display window.
GLUT_WINDOW_WIDTH or GLUT_SCREEN_WIDTH: retrieves the current display-window width or the screen width, respectively.
2.3.3 Questions
1. With an example, explain the vector method for splitting concave polygons.
2. Explain the splitting of a convex polygon using inside-outside tests.
3. Explain the plane equation and its use for identification of polygon faces.
4. List the OpenGL polygon fill-area functions with examples.
5. Give the OpenGL functions for the following:
a. Fill pattern function
b. Texture and interpolation patterns
6. Explain the 2D reflection and shear transformations with examples.
7. Explain the 2D viewing transformation pipeline.
8. Explain 2D viewing functions in OpenGL.
9. With the help of matrix representations, explain 2D composite translation, rotation, and scaling.
10. Explain the general scan-line polygon fill algorithm.