A. Graphs and Charts: Computer Graphics and Visualization (18CS62)
Computer graphics is the art of drawing pictures, lines, charts, etc. on computers with the help
of programming. A computer-graphics image is made up of a number of pixels. A pixel is the smallest
addressable graphical unit represented on the computer screen.
a. Graphs and Charts
An early application of computer graphics was the display of simple data graphs, usually
plotted on a character printer. Data plotting is still one of the most common graphics
applications.
Graphs & charts are commonly used to summarize functional, statistical, mathematical,
engineering and economic data for research reports, managerial summaries and other
types of publications.
Prepared By: Shatananda Bhat P, Asst Prof., Dept. of CSE, CEC Page 4
Computer Graphics and Visualization (18CS62) Module 1
Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs,
contour plots, and other displays showing relationships between multiple parameters in
two dimensions, three dimensions, or higher-dimensional spaces.
b. Computer-Aided Design
With virtual-reality systems, designers and others can move about and interact with
objects in various ways.
Architectural designs can be examined by taking a simulated “walk” through the rooms or
around the outsides of buildings to better appreciate the overall effect of a particular
design.
With a special glove, we can even “grasp” objects in a scene and turn them over or move
them from one place to another.
d. Data Visualizations
Producing graphical representations for scientific, engineering, and medical data sets and
processes is another fairly new application of computer graphics, generally
referred to as scientific visualization. The term business visualization is used in
connection with data sets related to commerce, industry, and other nonscientific areas.
There are many different kinds of data sets and effective visualization schemes depend on
the characteristics of the data. A collection of data can contain scalar values, vectors or
higher-order tensors.
e. Education and Training
Computer generated models of physical, financial, political, social, economic & other
systems are often used as educational aids.
Models of physical processes, physiological functions, and equipment, such as the color-coded
diagram shown in the figure, can help trainees understand the operation of a system.
For some training applications, special hardware systems are designed. Examples of such
specialized systems are the simulators for practice sessions by aircraft pilots and air-traffic-control
personnel.
Some simulators have no video screens, for example, a flight simulator with only a control panel
for instrument flying.
f. Computer Art
The picture is usually painted electronically on a graphics tablet using a stylus, which can
simulate different brush strokes, brush widths and colors.
Fine artists use a variety of other computer technologies to produce images. To create
pictures the artist uses a combination of 3D modeling packages, texture mapping,
drawing programs and CAD software etc.
Commercial art also uses these “painting” techniques for generating logos and other
designs, page layouts combining text and graphics, TV advertising spots, and other
applications.
A common graphics method employed in many television commercials is morphing,
where one object is transformed into another.
g. Entertainment
Television production, motion pictures, and music videos routinely use computer-graphics
methods.
Sometimes graphics images are combined with live actors and scenes, and sometimes the
films are completely generated using computer rendering and animation techniques.
Some television programs also use animation techniques to combine computer generated
figures of people, animals, or cartoon characters with the actor in a scene or to transform an
actor’s face into another shape.
h. Image Processing
Image processing methods are often used in computer graphics, and computer graphics
methods are frequently applied in image processing.
Medical applications also make extensive use of image-processing techniques for picture
enhancements in tomography and in simulations of surgical operations.
It is also used in computed X-ray tomography (CT), positron emission
tomography (PET), and computed axial tomography (CAT).
https://www.youtube.com/watch?v=0ZuSu44-WeE&list=PL338D19C40D6D1732&index=2
A beam of electrons, emitted by an electron gun, passes through focusing and deflection
systems that direct the beam toward specified positions on the phosphor-coated screen.
The phosphor then emits a small spot of light at each position contacted by the electron
beam and the light emitted by the phosphor fades very rapidly.
One way to maintain the screen picture is to store the picture information as a charge
distribution within the CRT in order to keep the phosphors activated.
The most common method now employed for maintaining phosphor glow is to redraw
the picture repeatedly by quickly directing the electron beam back over the same screen
points. This type of display is called a refresh CRT.
The frequency at which a picture is redrawn on the screen is referred to as the refresh
rate.
Each entry in the bitmap is a single bit that determines whether the corresponding pixel is
on (1) or off (0).
Case 2: Color systems
On color systems, the frame buffer storing the values of the pixels is called a pixmap
(Though now a days many graphics libraries name it as bitmap too).
Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a
true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue
channel; each channel has 2^8 = 256 levels of intensity, i.e., 256 voltage settings for each
of the red/green/blue electron guns).
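The storage requirement that follows from these bit depths can be sketched with a small helper (the function name and resolution are illustrative, not from the text):

```c
#include <stdio.h>

/* Sketch: frame-buffer storage in bytes for a display of
   width x height pixels at bitsPerPixel bits per entry. */
int frameBufferBytes (int width, int height, int bitsPerPixel)
{
    return width * height * bitsPerPixel / 8;   /* 8 bits per byte */
}
```

For example, a 640 x 480 true-color (24-bit) pixmap needs 640 * 480 * 24 / 8 = 921,600 bytes, while a 1-bit bitmap at the same resolution needs only 38,400 bytes.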
Refresh rate on a random-scan system depends on the number of lines to be displayed on that
system.
Multiuser environments & computer networks are now common elements in many
graphics applications.
Various resources, such as processors, printers, plotters and data files can be distributed
on a network & shared by multiple users.
A graphics monitor on a network is generally referred to as a graphics server.
The computer on a network that is executing a graphics application is called the client.
A workstation that includes processors, as well as a monitor and input devices can
function as both a server and a client.
Graphics on Internet
A great deal of graphics development is now done on the Internet.
Computers on the Internet communicate using TCP/IP.
Resources such as graphics files are identified by URL (Uniform resource locator).
The World Wide Web provides a hypertext system that allows users to locate and view
documents, audio, and graphics.
Each resource is identified by a URL, sometimes also called a universal resource locator.
The URL contains two parts: the protocol for transferring the document, and the server that
contains the document.
Graphics Software
There are two broad classifications for computer-graphics software: special-purpose packages and general programming packages.
2. Attribute Functions:
Attributes are the properties of the output primitives; that is, an attribute describes how a
particular primitive is to be displayed.
They include color specifications, line styles, text styles, and area-filling patterns.
6. Input functions:
Interactive graphics applications use various kinds of input devices, such as a mouse, a
tablet, or a joystick.
Input functions are used to control and process the data flow from these interactive
devices.
7. Control operations:
Finally, a graphics package contains a number of housekeeping tasks, such as clearing a
screen display area to a selected color and initializing parameters. We can lump the
functions for carrying out these chores under the heading control operations.
OpenGL basic(core) library :-A basic library of functions is provided in OpenGL for
specifying graphics primitives, attributes, geometric transformations, viewing
transformations, and many other operations.
The OpenGL functions also expect specific data types. For example, an OpenGL function
parameter might expect a value that is specified as a 32-bit integer. But the size of an
integer specification can be different on different machines.
To indicate a specific data type, OpenGL uses special built-in data-type names, such as
GLbyte, GLshort, GLint, GLfloat, GLdouble, and GLboolean.
Related Libraries
In addition to OpenGL basic(core) library(prefixed with gl), there are a number of
associated libraries for handling special operations:-
1) OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up viewing
and projection matrices, describing complex objects with line and polygon approximations,
displaying quadrics and B-splines using linear approximations, processing the surface-rendering
operations, and other complex tasks.
-Every OpenGL implementation includes the GLU library
2) Open Inventor:- Provides routines and predefined object shapes for interactive three-dimensional
applications; this toolkit is written in C++.
3) Window-system libraries:- To create graphics we need a display window. We cannot create
the display window directly with the basic OpenGL functions, since the core library contains only
device-independent graphics functions, while window-management operations are device-dependent.
Web Link:https://www.youtube.com/watch?v=rf0LmaZIGXA
Header Files
In all graphics programs, we will need to include the header file for the OpenGL core
library.
In windows to include OpenGL core libraries and GLU we can use the following header
files:-
#include <windows.h> // precedes the other header files; Microsoft Windows version of the OpenGL libraries
#include <GL/gl.h>
#include <GL/glu.h>
The above lines can be replaced by the GLUT header file, which ensures that gl.h and glu.h are
included correctly:
#include <GL/glut.h> //GL in windows
In Apple OS X systems, the header file inclusion statement will be,
#include <GLUT/glut.h>
Web Link:https://www.youtube.com/watch?v=rf0LmaZIGXA
We can state that a display window is to be created on the screen with a given caption for
the title bar. This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");
where the single argument for this function can be any character string that we want to use for
the display-window title.
Step 3: Specification of the display window
Then we need to specify what the display window is to contain.
For this, we create a picture using OpenGL functions and pass the picture definition to
the GLUT routine glutDisplayFunc, which assigns our picture to the display window.
Example: suppose we have the OpenGL code for describing a line segment in a
procedure called lineSegment.
Then the following function call passes the line-segment description to the display
window:
glutDisplayFunc (lineSegment);
Step 4: One more GLUT function
But the display window is not yet on the screen.
We need one more GLUT function to complete the window-processing operations.
After execution of the following statement, all display windows that we have created,
including their graphic content, are now activated:
glutMainLoop ( );
This function must be the last one in our program. It displays the initial graphics and puts the
program into an infinite loop that checks for input from devices such as a mouse or keyboard.
GLUT Function 1:
We use the glutInitWindowPosition function to give an initial location for the upper left
corner of the display window.
GLUT Function 2:
After the display window is on the screen, we can reposition and resize it.
GLUT Function 3:
We can also set a number of other options for the display window, such as buffering and
a choice of color modes, with the glutInitDisplayMode function.
Arguments for this routine are assigned symbolic GLUT constants.
Example: the following command specifies that a single refresh buffer is to be used for
the display window and that we want to use the color mode which uses red, green, and
blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
The values of the constants passed to this function are combined using a logical or
operation.
Actually, single buffering and RGB color mode are the default options.
But we will use the function now as a reminder that these are the options that are set for
our display.
Later, we discuss color modes in more detail, as well as other display options, such as
double buffering for animation applications and selecting parameters for viewing
three-dimensional scenes.
Step 1: Set the background color
For the display window, we can choose a background color.
Using RGB color values, we set the background color for the display window to be
white, with the OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);
The first three arguments in this function set the red, green, and blue component colors to
the value 1.0, giving us a white background color for the display window.
If, instead of 1.0, we set each of the component colors to 0.0, we would get a black
background.
The fourth parameter in the glClearColor function is called the alpha value for the
specified color. One use for the alpha value is as a “blending” parameter
When we activate the OpenGL blending operations, alpha values can be used to
determine the resulting color for two overlapping objects.
An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0
indicates an opaque object.
For now, we will simply set alpha to 0.0.
Although the glClearColor command assigns a color to the display window, it does not
put the display window on the screen.
glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );
Screen coordinates:
Locations on a video monitor are referenced in integer screen coordinates, which
correspond to the integer pixel positions in the frame buffer.
Scan-line algorithms for the graphics primitives use the coordinate descriptions to
determine the locations of pixels
Example: given the endpoint coordinates for a line segment, a display algorithm must
calculate the positions for those pixels that lie along the line path between the endpoints.
Since a pixel position occupies a finite area of the screen, the finite size of a pixel must
be taken into account by the implementation algorithms.
For the present, we assume that each integer screen position references the centre of a
pixel area.
Once pixel positions have been identified, the color values must be stored in the frame
buffer.
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
The display window will then be referenced by coordinates (xmin, ymin) at the lower-
left corner and by coordinates (xmax, ymax) at the upper-right corner, as shown in
Figure below
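Assembling the steps above into one piece, a complete first program can be sketched as follows. The window position, window size, and line color below are illustrative choices (not given in the text); the header line follows the Windows form shown earlier:

```c
#include <GL/glut.h>   /* on Apple OS X use: #include <GLUT/glut.h> */

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);    /* white display window */
    glMatrixMode (GL_PROJECTION);         /* set up 2D world-coordinate */
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);  /*   reference frame          */
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);   /* clear display window */
    glColor3f (0.0, 0.4, 0.2);       /* illustrative line color */
    glBegin (GL_LINES);
        glVertex2i (180, 15);        /* specify line-segment geometry */
        glVertex2i (10, 145);
    glEnd ( );
    glFlush ( );                     /* process all OpenGL routines */
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);                        /* initialize GLUT */
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);  /* display-mode options */
    glutInitWindowPosition (50, 100);              /* illustrative position */
    glutInitWindowSize (400, 300);                 /* illustrative size */
    glutCreateWindow ("An Example OpenGL Program");
    init ( );
    glutDisplayFunc (lineSegment);   /* assign picture to display window */
    glutMainLoop ( );                /* display everything and wait */
}
```

Note how glutMainLoop is the last call: everything before it only sets state, and the picture appears when the event loop starts.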
Geometric Primitives:
It includes points, line segments, polygons, etc.
These primitives pass through the geometric pipeline, which decides whether a primitive
is visible, how it should appear on the screen, and so on.
Geometric transformations such as rotation and scaling can be applied to the
primitives displayed on the screen. The programmer can create geometric
primitives as shown below:
where glBegin indicates the beginning of the object that has to be displayed, and glEnd
indicates the end of the primitive.
Case 2:
We could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};
and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );
Case 2: GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the final vertex is not
connected to the initial vertex.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Case 3: GL_LINE_LOOP:
Successive vertices are connected using line segments to form a closed path or loop i.e., final
vertex is connected to the initial vertex.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Point Attributes
Basically, we can set two attributes for points: color and size.
In a state system: the displayed color and size of a point are determined by the current
values stored in the attribute list.
Color components are set with RGB values or an index into a color table.
For a raster system: Point size is an integer multiple of the pixel size, so that a large point is
displayed as a square block of pixels
Size:
We set the size for an OpenGL point with
glPointSize (size);
and the point is then displayed as a square block of pixels.
Example program:
Color attributes such as glColor may be listed inside or outside of a glBegin/glEnd pair,
but glPointSize may not be called between glBegin and glEnd, so the point size must be set
before each glBegin.
Example: the following code segment plots three points in varying colors and sizes.
The first is a standard-size red point, the second is a double-size green point, and the
third is a triple-size blue point:
Ex:
glColor3f (1.0, 0.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glEnd ( );
glPointSize (2.0);
glColor3f (0.0, 1.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (75, 150);
glEnd ( );
glPointSize (3.0);
glColor3f (0.0, 0.0, 1.0);
glBegin (GL_POINTS);
glVertex2i (100, 200);
glEnd ( );
Pattern:
Parameter pattern is used to reference a 16-bit integer that describes how the line should
be displayed.
A 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel
position.
The pattern is applied to the pixels along the line path starting with the low-order bits in
the pattern.
The default pattern is 0xFFFF (each bit position has a value of 1),which produces a solid
line.
repeatFactor
Integer parameter repeatFactor specifies how many times each bit in the pattern is to be
repeated before the next bit in the pattern is applied.
The default repeat value is 1.
Example:
For line style, suppose parameter pattern is assigned the hexadecimal representation
0x00FF and the repeat factor is 1
This would display a dashed line with eight pixels in each dash and eight pixel positions
that are “off” (an eight-pixel space) between two dashes.
Also, since low order bits are applied first, a line begins with an eight-pixel dash starting
at the first endpoint.
This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth,
until the second endpoint position is reached.
glDisable (GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern (solid lines).
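Putting the pattern and repeat factor together, the dashed-line example above can be sketched as a small display routine; GL_LINE_STIPPLE must be enabled before the pattern takes effect, and the endpoint coordinates here are only illustrative:

```c
void dashedLine (void)
{
    glEnable (GL_LINE_STIPPLE);
    glLineStipple (1, 0x00FF);   /* repeat factor 1, 8-on/8-off pattern */
    glBegin (GL_LINES);
        glVertex2i (20, 100);    /* illustrative endpoints */
        glVertex2i (180, 100);
    glEnd ( );
    glDisable (GL_LINE_STIPPLE); /* restore the default solid pattern */
}
```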
Example Code:
typedef struct { float x, y; } wcPt2D;
wcPt2D dataPts [5];
void linePlot (wcPt2D dataPts [5])
{
int k;
glBegin (GL_LINE_STRIP);
for (k = 0; k < 5; k++)
glVertex2f (dataPts [k].x, dataPts [k].y);
glEnd ( );
}
linePlot (dataPts);
glDisable (GL_LINE_STIPPLE);
Curve Attributes
Parameters for curve attributes are the same as those for straight-line segments.
We can display curves with varying colors, widths, dot-dash patterns, and available pen
or brush options.
Methods for adapting curve-drawing algorithms to accommodate attribute selections are
similar to those for line drawing.
Raster curves of various widths can be displayed using the method of horizontal or
vertical pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot vertical spans;
Case 2: when the slope magnitude |m| > 1.0, we plot horizontal spans.
Method 2: Another method for displaying thick curves is to fill in the area between two parallel
curve paths, whose separation distance is equal to the desired width. We could do this using the
specified curve path as one boundary and setting up the second boundary either inside or outside
the original path.
Method 3: The pixel masks discussed for implementing line-style options could also be used in
raster curve algorithms to generate dashed or dotted patterns.
Method 4: Pen (or brush) displays of curves are generated using the same techniques discussed
for straight-line segments.
y = m * x + b ----------------> (1)
with m as the slope of the line and b as the y intercept.
Given that the two endpoints of a line segment are specified at positions (x0, y0) and
(xend, yend), as shown in the figure.
We determine values for the slope m and y intercept b with the following equations:
m = (yend - y0) / (xend - x0) ----------------> (2)
b = y0 - m * x0 ----------------> (3)
Algorithms for displaying straight lines are based on the line equation (1) and the calculations
given in eqs. (2) and (3).
For a given x interval δx along a line, we can compute the corresponding y interval δy from
eq. (2) as
δy = m * δx ----------------> (4)
Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy / m ----------------> (5)
These equations form the basis for determining deflection voltages in analog displays,
such as vector-scan system, where arbitrarily small changes in deflection voltage are
possible.
Web Links:https://www.youtube.com/watch?v=m5YbqpL7BIY
https://www.youtube.com/watch?v=iP2LEde_epc
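Equations (2) and (3) can be sketched as a small routine (the function name is illustrative; a non-vertical line with xEnd ≠ x0 is assumed):

```c
/* Sketch: slope m (eq. 2) and y intercept b (eq. 3) for a
   non-vertical line through (x0, y0) and (xEnd, yEnd). */
void lineCoefficients (float x0, float y0, float xEnd, float yEnd,
                       float *m, float *b)
{
    *m = (yEnd - y0) / (xEnd - x0);   /* eq. (2) */
    *b = y0 - (*m) * x0;              /* eq. (3) */
}
```

For instance, the line through (2, 3) and (6, 11) has slope m = 8/4 = 2 and intercept b = 3 - 2*2 = -1.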
Case 1:
if m < 1, x increments in unit intervals,
i.e., xk+1 = xk + 1
then m = (yk+1 - yk) / (xk+1 - xk)
m = yk+1 - yk
yk+1 = yk + m ----------------> (1)
where k takes integer values starting from 0 for the first point and increases by 1 until the
final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each
calculated y value must be rounded to the nearest integer pixel position.
Case 2:
if m > 1, y increments in unit intervals,
i.e., yk+1 = yk + 1
then m = (yk+1 - yk) / (xk+1 - xk), with yk+1 - yk = 1, so
m (xk+1 - xk) = 1
xk+1 = xk + (1/m) ----------------> (2)
Case 3:
if the line is processed from right to left, x decrements in unit intervals,
i.e., xk+1 = xk - 1, so that (for slope magnitude less than 1)
yk+1 = yk - m ----------------> (3)
or (when the slope is greater than 1) we have δy = -1 with
xk+1 = xk - (1/m) ----------------> (4)
Similar calculations are carried out using equations (1) through (4) to determine the pixel
positions along a line with negative slope. Thus, if the absolute value of the slope is less
than 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with eq. (1).
When the starting endpoint is at the right (for the same slope), we set δx = -1 and obtain y
positions using eq. (3).
This algorithm is summarized in the following procedure, which accepts as input two integer
screen positions for the endpoints of a line segment.
if m < 1, where x is incremented by 1:
yk+1 = yk + m
So initially k = 0; assuming (x0, y0) as the initial point, assign x = x0, y = y0, which is the
starting point.
Illuminate pixel (x, round(y))
x1 = x + 1, y1 = y + m
Illuminate pixel (x1, round(y1))
x2 = x1 + 1, y2 = y1 + m
Illuminate pixel (x2, round(y2))
... and so on, till the final point is reached.
if m > 1, where y is incremented by 1:
xk+1 = xk + (1/m)
So initially k = 0; assuming (x0, y0) as the initial point, assign x = x0, y = y0, which is the
starting point.
Illuminate pixel (round(x), y)
x1 = x + (1/m), y1 = y + 1
Illuminate pixel (round(x1), y1)
x2 = x1 + (1/m), y2 = y1 + 1
Illuminate pixel (round(x2), y2)
... and so on, till the final point is reached.
The DDA algorithm is a faster method for calculating pixel positions than one that directly
implements eq. (1).
It eliminates the multiplication by making use of raster characteristics, so that appropriate
increments are applied in the x or y directions to step from one pixel position to another
along the line path.
The accumulation of round off error in successive additions of the floating point
increment, however can cause the calculated pixel positions to drift away from the true
line path for long line segments. Furthermore ,the rounding operations and floating point
arithmetic in this procedure are still time consuming.
We can improve the performance of the DDA algorithm by separating the increments m and 1/m
into integer and fractional parts, so that all calculations are reduced to integer operations.
#include <stdlib.h>
#include <math.h>
inline int round (const float a)
{
return int (a + 0.5);
}
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
int dx = xEnd - x0, dy = yEnd - y0, steps, k;
float xIncrement, yIncrement, x = x0, y = y0;
if (fabs (dx) > fabs (dy))
steps = fabs (dx);
else
steps = fabs (dy);
xIncrement = float (dx) / float (steps);
yIncrement = float (dy) / float (steps);
setPixel (round (x), round (y));
for (k = 0; k < steps; k++) {
x += xIncrement;
y += yIncrement;
setPixel (round (x), round (y));
}
}
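The DDA stepping can be exercised standalone by substituting a recording setPixel (a testing assumption, since the real routine writes to the frame buffer):

```c
#include <stdlib.h>

/* positions recorded by the test version of setPixel */
static int px[64], py[64], n = 0;
static void setPixel (int x, int y) { px[n] = x; py[n] = y; n++; }

static int roundToInt (float a) { return (int) (a + 0.5f); }

/* DDA line sketch, following the procedure in the text */
static void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = (float) x0, y = (float) y0;

    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);
    xIncrement = (float) dx / (float) steps;
    yIncrement = (float) dy / (float) steps;

    setPixel (roundToInt (x), roundToInt (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;        /* step by 1 or 1/|m| in x */
        y += yIncrement;        /* step by m or 1 in y */
        setPixel (roundToInt (x), roundToInt (y));
    }
}
```

For the line from (0, 0) to (4, 2), steps = 4 and yIncrement = 0.5, so five pixels are illuminated: (0,0), (1,1), (2,1), (3,2), (4,2).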
Bresenham’s Algorithm:
It is an efficient raster line-generating algorithm that uses only incremental integer
calculations.
To illustrate Bresenham’s approach, we first consider the scan-conversion process for
lines with positive slope less than 1.0.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0, y0) of a given line, we step to each successive
column(x position) and plot the pixel whose scan-line y value is closest to the line path.
Consider the equation of a straight line y=mx+c where m=dy/dx
Code:
#include <stdlib.h>
#include <math.h>
/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
int x, y;
/* Determine which endpoint to use as start position. */
if (x0 > xEnd)
{
x = xEnd; y = yEnd; xEnd = x0;
}
else
{
x = x0; y = y0;
}
setPixel (x, y);
while (x < xEnd)
{
x++;
if (p < 0)
p += twoDy;
else {
y++; p += twoDyMinusDx;
}
setPixel (x, y);
}
}
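The procedure above can be checked standalone by substituting a recording setPixel (a testing assumption); the endpoints below, (20, 10) to (30, 18), give a slope of 0.8:

```c
#include <stdlib.h>

/* positions recorded by the test version of setPixel */
static int px[32], py[32], n = 0;
static void setPixel (int x, int y) { px[n] = x; py[n] = y; n++; }

/* Bresenham sketch for lines with positive slope |m| < 1.0 */
static void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dy - dx;                       /* initial decision value */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    if (x0 > xEnd) { x = xEnd; y = yEnd; xEnd = x0; }  /* start at left */
    else           { x = x0;   y = y0; }

    setPixel (x, y);
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        /* stay on same scan line */
        else {
            y++; p += twoDyMinusDx;            /* step up to next line */
        }
        setPixel (x, y);
    }
}
```

Running lineBres(20, 10, 30, 18) illuminates the 11 pixels (20,10), (21,11), (22,12), (23,12), (24,13), (25,14), (26,15), (27,16), (28,16), (29,17), (30,18).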
Properties of Circles
A circle is defined as the set of points that are all at a given distance r from a center
position (xc , yc ).
For any circle point (x, y), this distance relationship is expressed by the Pythagorean
theorem in Cartesian coordinates as
(x - xc)^2 + (y - yc)^2 = r^2
We could use this equation to calculate the position of points on a circle circumference
by stepping along the x axis in unit steps from xc - r to xc + r and calculating the
corresponding y values at each position as
y = yc ± sqrt(r^2 - (x - xc)^2)
One problem with this approach is that it involves considerable computation at each step.
Moreover, the spacing between plotted pixel positions is not uniform.
We could adjust the spacing by interchanging x and y (stepping through y values and
calculating x values) whenever the absolute value of the slope of the circle is greater than
1; but this simply increases the computation and processing required by the algorithm.
Another way to eliminate the unequal spacing is to calculate points along the circular
boundary using polar coordinates r and θ.
Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ
y = yc + r sin θ
Computer Graphics and Visualization (18CS62) Module 2
A useful construct for describing components of a picture is an area that is filled with
some solid color or pattern.
A picture component of this type is typically referred to as a fill area or a filled area.
Although any fill-area shape is possible, graphics libraries generally do not support
specifications for arbitrary fill shapes.
Figure below illustrates a few possible fill-area shapes.
Graphics routines can more efficiently process polygons than other kinds of fill shapes
because polygon boundaries are described with linear equations.
When lighting effects and surface-shading procedures are applied, an approximated
curved surface can be displayed quite realistically.
Approximating a curved surface with polygon facets is sometimes referred to as surface
tessellation, or fitting the surface with a polygon mesh.
Below figure shows the side and top surfaces of a metal cylinder approximated in an
outline form as a polygon mesh.
Displays of such figures can be generated quickly as wire-frame views, showing only the
polygon edges to give a general indication of the surface structure
Objects described with a set of polygon surface patches are usually referred to as standard
graphics objects, or just graphics objects.
Problem:
For a computer-graphics application, it is possible that a designated set of polygon
vertices does not all lie exactly in one plane.
This is due to round-off error in the calculation of numerical values, to errors in selecting
coordinate positions for the vertices, or, more typically, to approximating a curved
surface with a set of polygonal patches.
Solution:
To divide the specified surface mesh into triangles
Polygon Classifications
Polygons are classified into two types
1. Convex Polygon and
2. Concave Polygon
Convex Polygon:
The polygon is convex if all interior angles of a polygon are less than or equal to 180◦,
where an interior angle of a polygon is an angle inside the polygon boundary that is
formed by two adjacent edges
An equivalent definition of a convex polygon is that its interior lies completely on one
side of the infinite extension line of any one of its edges.
Also, if we select any two points in the interior of a convex polygon, the line segment
joining the two points is also in the interior.
Concave Polygon:
A polygon that is not convex is called a concave polygon.
The below figure shows convex and concave polygon
The term degenerate polygon is often used to describe a set of vertices that are collinear
or that have repeated coordinate positions.
Identification algorithm 1
Identifying a concave polygon by calculating cross-products of successive pairs of edge
vectors.
If we set up a vector for each polygon edge, then we can use the cross-product of adjacent
edges to test for concavity. All such vector products will be of the same sign (positive or
negative) for a convex polygon.
Therefore, if some cross-products yield a positive value and some a negative value, we
have a concave polygon
Identification algorithm 2:
Look at the polygon vertex positions relative to the extension line of any edge.
If some vertices are on one side of the extension line and some vertices are on the other
side, the polygon is concave.
Vector method
First need to form the edge vectors.
Given two consecutive vertex positions, Vk and Vk+1 , we define the edge vector between
them as
Ek = V k+1 – Vk
Calculate the cross-products of successive edge vectors in order around the polygon
perimeter.
If the z component of some cross-products is positive while other cross-products have a
negative z component, the polygon is concave.
We can apply the vector method by processing edge vectors in counterclockwise order. If
any cross-product has a negative z component (as in the figure below), the polygon is
concave, and we can split it along the line of the first edge vector in the cross-product pair.
E1 = (1, 0, 0) E2 = (1, 1, 0)
E3 = (1, -1, 0) E4 = (0, 2, 0)
E5 = (-3, 0, 0) E6 = (0, -2, 0)
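The vector method can be sketched as a small routine (type and function names are illustrative): for 2D vertices, only the z component of each cross-product matters, and mixed signs around the perimeter indicate a concave polygon.

```c
/* Sketch: concavity test by cross-products of successive edge vectors. */
typedef struct { double x, y; } Vec2;

/* z component of the cross-product of two 2D edge vectors */
static double crossZ (Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

/* vertices given in counterclockwise order; returns 1 if concave */
int isConcave (const Vec2 v[], int n)
{
    int k, pos = 0, neg = 0;
    for (k = 0; k < n; k++) {
        Vec2 e1 = { v[(k + 1) % n].x - v[k].x,
                    v[(k + 1) % n].y - v[k].y };
        Vec2 e2 = { v[(k + 2) % n].x - v[(k + 1) % n].x,
                    v[(k + 2) % n].y - v[(k + 1) % n].y };
        double z = crossZ (e1, e2);
        if (z > 0) pos++;
        if (z < 0) neg++;
    }
    return pos && neg;   /* mixed signs -> concave */
}
```

A square yields cross-products of one sign only (convex), while a polygon with a notch yields mixed signs.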
Rotational method:
Proceeding counterclockwise around the polygon edges, we shift the position of the
polygon so that each vertex Vk in turn is at the coordinate origin.
We rotate the polygon about the origin in a clockwise direction so that the next vertex
Vk+1 is on the x axis.
If the following vertex, Vk+2, is below the x axis, the polygon is concave.
We then split the polygon along the x axis to form two new polygons, and we repeat the
concave test for each of the two new polygons
Inside-Outside Tests:
Odd-even rule (also called the odd-parity rule or the even-odd rule):
Draw a line from any position P to a distant point outside the coordinate extents of the
closed polyline.
Then we count the number of line-segment crossings along this line.
If the number of segments crossed by this line is odd, then P is considered to be an
interior point; otherwise, P is an exterior point.
We can use this procedure, for example, to fill the interior region between two concentric
circles or two concentric polygons with a specified color.
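The odd-even rule can be sketched as a standard crossing-count routine (names illustrative): a horizontal ray from P is tested against each polygon edge, and the parity of the crossing count flips at every crossing.

```c
/* Sketch: odd-even (even-odd) rule inside test. */
typedef struct { double x, y; } Pt;

int insideOddEven (Pt p, const Pt v[], int n)
{
    int i, j, inside = 0;
    for (i = 0, j = n - 1; i < n; j = i++) {
        /* does edge (v[j], v[i]) cross the horizontal ray from p? */
        if (((v[i].y > p.y) != (v[j].y > p.y)) &&
            (p.x < (v[j].x - v[i].x) * (p.y - v[i].y) /
                   (v[j].y - v[i].y) + v[i].x))
            inside = !inside;   /* parity flips at each crossing */
    }
    return inside;
}
```

An odd number of crossings classifies P as interior (returns 1); an even count classifies it as exterior (returns 0).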
The nonzero winding-number rule tends to classify as interior some areas that the odd-
even rule deems to be exterior.
Variations of the nonzero winding-number rule can be used to define interior regions in
other ways: for example, we could define a point to be interior if its winding number is
positive or if it is negative, or we could use any other rule, to generate a variety of fill shapes.
Boolean operations are used to specify a fill area as a combination of two regions
One way to implement Boolean operations is by using a variation of the basic winding-
number rule
If we consider the direction for each boundary to be counterclockwise, the union of two regions
would consist of those points whose winding number is positive.
The intersection of two regions with counterclockwise boundaries would contain those
points whose winding number is greater than 1.
To set up a fill area that is the difference of two regions (say, A - B), we can enclose
region A with a counterclockwise border and B with a clockwise border
Polygon Tables:
The objects in a scene are described as sets of polygon surface facets
The description for each object includes coordinate information specifying the geometry
for the polygon facets and other surface parameters such as color, transparency, and light
reflection properties.
The data of the polygons are placed into tables that are to be used in the subsequent
processing, display, and manipulation of the objects in the scene
These polygon data tables can be organized into two groups:
1. Geometric tables and
2. Attribute tables
Geometric data tables contain vertex coordinates and parameters to identify the spatial
orientation of the polygon surfaces.
Attribute information for an object includes parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics
Geometric data for the objects in a scene are arranged conveniently in three lists: a vertex
table, an edge table, and a surface-facet table.
Coordinate values for each vertex in the object are stored in the vertex table.
The edge table contains pointers back into the vertex table to identify the vertices for
each polygon edge.
And the surface-facet table contains pointers back into the edge table to identify the edges
for each polygon
The object can be displayed efficiently by using data from the edge table to identify
polygon boundaries.
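The three-table organization can be sketched with simple structures (names and the single-triangle example are illustrative): the edge table indexes the vertex table, and the facet table indexes the edge table.

```c
/* Sketch of the three geometric data tables. */
typedef struct { float x, y, z; } Vertex;
typedef struct { int v1, v2; } Edge;          /* vertex-table indices */
typedef struct { int e1, e2, e3; } Facet;     /* edge-table indices (triangle) */

/* Example: one triangular facet S1 with edges E1: V1-V2, E2: V2-V3, E3: V3-V1 */
static Vertex vertexTable[3] = { {0,0,0}, {1,0,0}, {0,1,0} };
static Edge   edgeTable[3]   = { {0,1}, {1,2}, {2,0} };
static Facet  facetTable[1]  = { {0, 1, 2} };
```

Displaying the wire-frame view then only requires walking the edge table, and a shared edge between two facets is stored once.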
An alternative arrangement is to use just two tables: a vertex table and a surface-facet
table. This scheme is less convenient, and some edges could get drawn twice in a
wireframe display.
Another possibility is to use only a surface-facet table, but this duplicates coordinate
information, since explicit coordinate values are listed for each vertex in each polygon
facet. Also the relationship between edges and facets would have to be reconstructed
from the vertex listings in the surface-facet table.
We could expand the edge table to include forward pointers into the surface-facet table so
that a common edge between polygons could be identified more rapidly. Similarly, the vertex
table could be expanded to reference corresponding edges, for faster information retrieval.