Module 1 Complete (CG)
OF IMAGE PROCESSING
21CS63
COMPUTER GRAPHICS
INTRODUCTION
✓ Medical applications also make extensive use of image-processing techniques for picture
enhancement in tomography and in simulations of surgical operations.
✓ It is also used in computed X-ray tomography (CT), positron emission tomography (PET), and
computed axial tomography (CAT).
• 9. Graphical User Interfaces
✓ It is common now for applications software to provide a
graphical user interface (GUI).
✓ A major component of graphical interface is a window
manager that allows a user to display multiple, rectangular
screen areas called display windows.
✓ Each screen display area can contain a different process,
showing graphical or nongraphical information, and various
methods can be used to activate a display window.
✓ Using an interactive pointing device, such as a mouse, we can
activate a display window on some systems by positioning the
screen cursor within the window display area and pressing the
left mouse button.
Typical application areas are:
• GUI
• Plotting in business
• Web/business/commercial publishing and advertisements
• CAD/CAM design (VLSI, construction, circuits)
• Scientific visualization
• Entertainment (movies, TV advertisements, games, etc.)
• Cartography
• Multimedia
• Virtual reality
• Process monitoring
Common GUI components:
• Menus
• Buttons
• Icons
• Valuators
• Cursors
• Grids
• Core graphics
• GKS
• SRGP
• X11-based systems.
On various platforms, such as DOS, Windows, Linux, OS/2,
SGI, SunOS, Solaris, HP-UX, Mac, and DEC-OSF.
Various utilities and tools available for
web-based design include: Java, XML, VRML and
GIF animators.
• User Interface.
Computer graphics systems can be active or passive.
[Figure: architecture of a simple raster system — the host computer sends display commands to the display processor and receives interaction data; the display processor drives the video controller, which refreshes the CRT.]
[Figure: shadow-mask CRT — red, green, and blue electron guns, deflection yoke, shadow mask, and red, green, and blue phosphor dots on the screen.]
Beam Penetration Method
• The screen is coated with layers of different colored phosphors
(an outer red layer and an inner green layer).
✓ A beam of slow electrons excites only the outer red layer;
a beam of very fast electrons penetrates through the red layer
and excites the inner green layer.
✓ (Drawback) The number of colors is limited, since only two
phosphor layers are used.
✓ (Drawback) Picture quality is not as good as with the
shadow-mask method.
• Dot Pitch –the spacing between pixels
on a CRT, measured in millimeters.
Generally, the lower the number, the
more detailed the image.
Shadow Mask Method
• Uses 3 phosphor color dots at each pixel position, each emitting
one of red, green, and blue light.
• It has 3 electron guns, one for each color dot, and a shadow-mask
grid just behind the phosphor-coated screen.
• The shadow mask contains a series of holes aligned with the phosphor-dot
patterns, one hole for each phosphor triad.
• When the 3 beams pass through a hole in the shadow mask,
they activate a dot triangle, which appears as small color spot
on the screen. The phosphor dots in the triangles are arranged
so that each electron beam can activate only its corresponding
color dot when it passes through the shadow mask.
• The number of electrons in each beam controls the amount of
red, blue and green light generated by the triad.
• Another configuration is the in-line arrangement, in which the
electron guns and corresponding color dots are aligned along
one scan line.
• Color CRTs in graphics systems are designed as RGB monitors
• We obtain color variations in a shadow-mask CRT by varying the intensity levels of the three
electron beams.
• When all three dots are activated with equal beam intensities, we see a white color. Yellow is
produced with equal intensities from the green and red dots only.
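The mixing rules above can be sketched as a tiny, purely illustrative classifier; the function name, the 0–255 intensity range, and the category strings are assumptions for this sketch, not part of any graphics API:

```c
/* Purely illustrative classifier for the triad mixing rules above;
   the name, 0-255 range, and strings are assumptions for this sketch. */
const char *triad_color(int red, int green, int blue)
{
    if (red == green && green == blue && red > 0)
        return "white/gray";   /* equal intensities on all three dots */
    if (red > 0 && green > 0 && blue == 0)
        return "yellow";       /* red and green dots on, blue off */
    if (red > 0 && green == 0 && blue == 0)
        return "red";
    return "other";
}
```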
• Some inexpensive home-computer systems and video games have been designed for use with a
color TV set and a radio-frequency (RF) modulator.
• The purpose of the RF modulator is to simulate the signal from a broadcast TV station.
• This means that the color and intensity information of the picture must be combined and
superimposed on the broadcast-frequency carrier signal that the TV requires as input.
• Then the circuitry in the TV takes this signal from the RF modulator, extracts the picture
information, and paints it on the screen.
• Composite monitors are adaptations of TV sets that allow bypass of the broadcast circuitry.
• These display devices still require that the picture information be combined, but no carrier signal is
needed.
• Since picture information is combined into a composite signal and then separated by the monitor,
the resulting picture quality is still not the best attainable.
Flat Panel Display
• The term flat-panel display refers to a class of video devices that have reduced
volume, weight, and power requirements and are thinner than CRTs (they can be
hung on walls or worn on wrists).
• We can separate flat-panel displays into two categories:
1. Emissive displays (Emitters): - convert electrical energy into light.
Eg. Plasma panel,
Thin film electroluminescent displays
Light emitting diodes(LED).
2. Non emissive displays (Non Emitters): - use optical effects to convert sunlight or
light from some other source into graphics patterns.
Eg. Liquid Crystal Display(LCD).
Plasma Panel Displays
• This is also called gas discharge displays.
• It is constructed by filling the region between two
glass plates with a mixture of gases that usually
includes neon.
• A series of vertical conducting ribbons is placed
on one glass panel and a set of horizontal ribbon
is built into the other glass panel.
• Firing voltages applied to a pair of horizontal
and vertical conductors cause the gas at the
intersection of the two conductors to break down
into a glowing plasma of electrons and ions.
• Refresh rate: 60 times per second.
• Separation between pixels is provided by the
electric field of conductor.
• Disadvantage: strictly monochromatic device
Thin Film Electroluminescent Displays
• It is similar to a plasma-panel display, but the region
between the glass plates is filled with a phosphor,
such as zinc sulfide doped with manganese, instead
of gas.
• When a sufficiently high voltage is applied, the phosphor
becomes a conductor in the area of intersection of the
two electrodes.
• Electrical energy is then absorbed by the
manganese atoms which then release the energy as
a spot of light similar to the glowing plasma effect
in plasma panel.
• It requires more power than plasma panel.
• In this good color displays are difficult to achieve.
Light Emitting Diode (LED)
• In this display a matrix of multi-color light emitting diode is arranged
to form the pixel position in the display and the picture definition is
stored in refresh buffer.
• Similar to scan line refreshing of CRT information is read from the
refresh buffer and converted to voltage levels that are applied to the
diodes to produce the light pattern on the display.
Liquid Crystal Display (LCD)
• This non-emissive device produces a picture by passing polarized light from
the surroundings or from an internal light source through a liquid-crystal
material that can be aligned to either block or transmit the light.
• The term liquid crystal refers to the fact that these compounds have a crystalline
arrangement of molecules, yet they flow like a liquid.
• It consists of two glass plates, each with a light polarizer at right angles to
the other, sandwiching the liquid-crystal material between the plates.
• Rows of horizontal transparent conductors are built into one glass plate,
and columns of vertical conductors are put into the other plate.
• The intersection of two conductors defines a pixel position.
• Passive-matrix LCD:
• In the ON state polarized light
passing through material is twisted
so that it will pass through the
opposite polarizer, the light is then
reflected back to the viewer.
• In the OFF state, voltage applied to
the two intersecting conductors
align the molecules so that the light
is not twisted.
• Active-matrix LCD:
• A transistor is placed at each pixel
location, using thin-film transistor
technology; it controls the voltage
at the pixel location and prevents
charge from gradually leaking out of
the liquid-crystal cell.
Three-Dimensional Viewing Devices
• Graphics monitors for the display of three-dimensional scenes have been devised using a technique
that reflects a CRT image from a vibrating, flexible mirror
• As the varifocal mirror vibrates, it changes focal length.
• These vibrations are synchronized with the display of an object on a CRT so that each point on the
object is reflected from the mirror into a spatial position corresponding to the distance of that point
from a specified viewing location.
• This allows us to walk around an object or scene and view it from different sides.
• In addition to displaying three-dimensional images, these systems are often capable of displaying
two-dimensional cross-sectional “slices” of objects selected at different depths, such as in medical
applications to analyze data from ultrasonography and CAT scan devices, in geological
applications to analyze topological and seismic data, in design applications involving solid objects,
and in three-dimensional simulations of systems, such as molecules and terrain.
Stereoscopic and Virtual-Reality Systems
• Another technique for representing a three-dimensional object is to display stereoscopic views of
the object.
• This method does not produce true three dimensional images, but it does provide a three-
dimensional effect by presenting a different view to each eye of an observer so that scenes do
appear to have depth.
• When we simultaneously look at the left view with the left eye and the right view with the right
eye, the two views merge into a single image and we perceive a scene with depth.
• One way to produce a stereoscopic effect on a raster system is to display each of the two views on
alternate refresh cycles.
• The screen is viewed through glasses, with each lens designed to act as a rapidly alternating shutter
that is synchronized to block out one of the views.
• One such design uses liquid-crystal shutters and an infrared emitter that synchronizes the glasses
with the views on the screen.
• Stereoscopic viewing is also a component in virtual-reality systems, where users can step into a
scene and interact with the environment.
• A headset containing an optical system to generate the stereoscopic views can be used in
conjunction with interactive input devices to locate and manipulate objects in the scene.
• A sensing system in the headset keeps track of the viewer’s position, so that the front and back of
objects can be seen as the viewer “walks through” and interacts with the display.
• Another method for creating a virtual-reality environment is to use projectors to generate a scene
within an arrangement of walls, where a viewer interacts with a virtual display using stereoscopic
glasses and data gloves
Raster-Scan Systems
• Interactive raster-graphics systems typically employ several processing units.
• In addition to the central processing unit (CPU), a special-purpose processor, called the video
controller or display controller, is used to control the operation of the display device.
• A fixed area of the system memory is reserved for the frame buffer, and the video controller is
given direct access to the frame-buffer memory.
• The basic refresh operations of the video controller are diagrammed.
• Two registers are used to store the coordinate values for the screen pixels. Initially, the x register is
set to 0 and the y register is set to the value for the top scan line.
• The contents of the frame buffer at this pixel position are then retrieved and used to set the
intensity of the CRT beam.
• Then the x register is incremented by 1, and the process is repeated for the next pixel on the top
scan line.
• This procedure continues for each pixel along the top scan line.
• After the last pixel on the top scan line has been processed, the x register is reset to 0 and the y
register is set to the value for the next scan line down from the top of the screen.
• Pixels along this scan line are then processed in turn, and the procedure is repeated for each
successive scan line.
• After cycling through all pixels along the bottom scan line, the video controller resets the registers
to the first pixel position on the top scan line and the refresh process starts over.
• The screen must be refreshed at a rate of at least 60 frames per second
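The register-driven refresh procedure above can be sketched in software. A real video controller implements this in hardware; the array sizes, names, and the `set_beam_intensity` stand-in below are illustrative assumptions:

```c
#define FRAME_WIDTH  8                   /* illustrative resolution */
#define FRAME_HEIGHT 4

static unsigned char frame_buffer[FRAME_HEIGHT][FRAME_WIDTH];
static unsigned char last_intensity;     /* stands in for the CRT beam */

static void set_beam_intensity(unsigned char v)
{
    last_intensity = v;  /* a real controller drives the electron beam */
}

/* One full refresh pass: the y "register" starts at the top scan line
   (highest y here) and the x "register" at 0; each stored frame-buffer
   value sets the beam intensity in turn, scan line by scan line. */
void refresh_frame(void)
{
    for (int y = FRAME_HEIGHT - 1; y >= 0; --y)   /* top scan line first */
        for (int x = 0; x < FRAME_WIDTH; ++x)     /* left to right */
            set_beam_intensity(frame_buffer[y][x]);
}
```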
• To speed up pixel processing, video controllers can retrieve multiple pixel values from
the refresh buffer on each pass.
• The multiple pixel intensities are then stored in a separate register and used to control the
CRT beam intensity for a group of adjacent pixels.
• When that group of pixels has been processed, the next block of pixel values is retrieved
from the frame buffer.
• A video controller can be designed to perform a number of other operations.
• For various applications, the video controller can retrieve pixel values from different
memory areas on different refresh cycles. In some systems, for example, multiple frame
buffers are often provided so that one buffer can be used for refreshing while pixel values
are being loaded into the other buffers. Then the current refresh buffer can switch roles
with one of the other buffers. This provides a fast mechanism for generating real-time
animations.
• Another video-controller task is the transformation of blocks of pixels, so that screen areas can be
enlarged, reduced, or moved from one location to another during the refresh cycles.
• In addition, the video controller often contains a lookup table, so that pixel values in the frame
buffer are used to access the lookup table instead of controlling the CRT beam intensity directly.
• This provides a fast method for changing screen intensity values.
• Finally, some systems are designed to allow the video controller to mix the frame-buffer image with
an input image from a television camera or other input device
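The lookup-table mechanism described above can be sketched as follows; the table size, the `rgb_t` layout, and the function name are assumptions for illustration. Changing one table entry instantly changes the displayed color of every pixel whose frame-buffer value indexes that entry, which is why this is a fast way to alter screen intensities:

```c
/* Sketch of a video lookup table: frame-buffer values index a table of
   full RGB intensities instead of driving the beam directly.
   Sizes and the rgb_t layout are illustrative assumptions. */
typedef struct { unsigned char r, g, b; } rgb_t;

#define LUT_SIZE 256
static rgb_t lookup_table[LUT_SIZE];

/* The value stored at a pixel position selects a table entry. */
rgb_t pixel_color(unsigned char frame_buffer_value)
{
    return lookup_table[frame_buffer_value];
}
```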
Display Processor
• Also called either a Graphics Controller or Display Co-Processor
• Specialized hardware to assist in scan converting output primitives
into the frame buffer.
• Fundamental difference among display systems is how much the
display processor does versus how much must be done by the
graphics subroutine package executing on the general-purpose CPU.
Architecture of a raster-graphics system with a
display processor
• Major task of the display processor is digitizing a picture definition given in an application
program into a set of pixel values for storage in the frame buffer.
• This digitization process is called scan conversion.
• Graphics commands specifying straight lines and other geometric objects are scan converted into a
set of discrete points, corresponding to screen pixel positions.
• Scan converting a straight-line segment, for example, means that we have to locate the pixel
positions closest to the line path and store the color for each position in the frame buffer
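The text does not name a specific line algorithm; as one possible illustration, a simple DDA (digital differential analyzer) sketch that locates the pixels closest to the line path and "stores" them in a toy frame buffer. All names and sizes here are assumptions:

```c
/* Illustrative DDA scan conversion of a line segment into a toy buffer. */
#include <stdlib.h>                /* abs */

#define W 16
#define H 16
static int buffer[H][W];           /* toy frame buffer */

static void set_pixel(int x, int y)
{
    if (x >= 0 && x < W && y >= 0 && y < H)
        buffer[y][x] = 1;          /* "store the color" for this position */
}

/* Step along the major axis and round to the nearest pixel.
   (Rounding via +0.5 assumes nonnegative coordinates, as used here.) */
void dda_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    float xinc = steps ? (float)dx / steps : 0.0f;
    float yinc = steps ? (float)dy / steps : 0.0f;
    float x = (float)x0, y = (float)y0;
    for (int k = 0; k <= steps; ++k) {
        set_pixel((int)(x + 0.5f), (int)(y + 0.5f));
        x += xinc;
        y += yinc;
    }
}
```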
• Similar methods are used for scan converting other objects in a picture definition.
• Characters can be defined with rectangular pixel grids; character grids can vary from
about 5 by 7 to 9 by 12 or more for higher-quality displays.
• A character grid is displayed by superimposing the rectangular grid pattern into the frame buffer at
a specified coordinate position.
• For characters that are defined as outlines, the shapes are scan-converted into the frame buffer by
locating the pixel positions closest to the outline.
• Display processors are also designed to perform a number of additional operations.
• These functions include generating various line styles (dashed, dotted, or solid), displaying color
areas, and applying transformations to the objects in a scene.
• Also, display processors are typically designed to interface with interactive input devices.
[Figure: a character defined as an outline shape.]
• In an effort to reduce memory requirements in raster systems, methods have been devised for
organizing the frame buffer as a linked list and encoding the color information; for example, each
scan line can be stored as a set of number pairs.
• The first number in each pair can be a reference to a color value, and the second number can
specify the number of adjacent pixels on the scan line that are to be displayed in that color.
• This technique, called run-length encoding, can result in a considerable saving in storage space if
a picture is to be constructed mostly with long runs of a single color each.
• A similar approach can be taken when pixel colors change linearly
• Another approach is to encode the raster as a set of rectangular
areas (cell encoding).
• The disadvantages of encoding runs are that color changes are
difficult to record and storage requirements increase as the
lengths of the runs decrease.
• In addition, it is difficult for the display controller to process
the raster when many short runs are involved.
• Moreover, the size of the frame buffer is no longer a major
concern, because of sharp declines in memory costs.
• Nevertheless, encoding methods can be useful in the digital
storage and transmission of picture information.
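The run-length scheme described above can be sketched as a small encoder; the `run_t` pair layout and function name are illustrative assumptions:

```c
/* Sketch of run-length encoding one scan line as (color, count) pairs. */
#include <stddef.h>

typedef struct { unsigned char color; unsigned short count; } run_t;

/* Encode n pixels into out[] (out must hold up to n runs);
   returns the number of runs written. */
size_t rle_encode(const unsigned char *pixels, size_t n, run_t *out)
{
    size_t runs = 0;
    for (size_t i = 0; i < n; ) {
        size_t j = i + 1;
        while (j < n && pixels[j] == pixels[i])   /* extend the run */
            ++j;
        out[runs].color = pixels[i];
        out[runs].count = (unsigned short)(j - i);
        ++runs;
        i = j;
    }
    return runs;
}
```

Note how the disadvantage mentioned above shows up directly: if every pixel differs from its neighbor, the output holds one pair per pixel and storage doubles.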
Graphics workstations and viewing systems
• Most graphics monitors today operate as raster-scan
displays, and both CRT and flat panel systems are in
common use.
• Graphics workstations range from small general-purpose
computer systems to multi-monitor facilities, often with
ultra-large viewing screens.
• High-definition graphics systems, with resolutions up to
2560 by 2048, are commonly used in medical imaging,
air-traffic control, simulation, and CAD.
[Figure: a high-resolution (2048 by 2048) graphics monitor.]
• Many high-end graphics workstations also include large
viewing screens, often with specialized features.
Graphics Software
• Two broad classifications of computer-graphics
software
1. Special-purpose packages: designed for nonprogrammers.
Example: packages for generating pictures, graphs, or charts, and
painting programs or CAD systems in some application area,
where users need not worry about the underlying graphics procedures.
2. General programming packages: general programming package
provides a library of graphics functions that can be used in a
programming language such as C, C++, Java, or FORTRAN.
Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality
Modeling Language), Java 2D And Java 3D
• A set of graphics functions is often called a computer-graphics
application programming interface (CG API)
Coordinate Representations
• To generate a picture, the geometric descriptions of
the objects that are to be displayed (Location,
Shapes).
Eg: Box, Sphere etc
• If coordinate values for a picture are given in some
other reference frame (spherical, hyperbolic, etc.),
they must be converted to Cartesian coordinates.
• Several different Cartesian reference frames are used
in the process of constructing and displaying a scene.
• First we define the shapes of individual objects, such
as trees or furniture. These reference frames are
called modeling coordinates or local coordinates(Eg:
Bicycle)
• Then we place the objects into appropriate locations
within a scene reference frame called world
coordinates.
• After all parts of a scene have been specified, it is
processed through various output-device reference
frames for display. This process is called the viewing
pipeline.
• The scene is then stored in normalized coordinates, which
range from −1 to 1 or from 0 to 1. Normalized coordinates
are also referred to as normalized device coordinates.
• The coordinate systems for display devices are
generally called device coordinates, or screen
coordinates.
• Geometric descriptions in modeling coordinates and
world coordinates can be given in floating-point or
integer values.
• Example: Figure briefly illustrates the sequence of coordinate
transformations from modeling coordinates to device coordinates for
a display
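The final stages of this pipeline can be sketched as plain coordinate mappings: world coordinates are normalized into [0, 1], and normalized values are then mapped to integer device (screen) coordinates. The window bounds, rounding rule, and names below are illustrative assumptions, not any particular package's API:

```c
/* Sketch of world -> normalized -> device coordinate mapping. */
typedef struct { float x, y; } point2f;
typedef struct { int x, y; } point2i;

/* Map a world point inside [xmin,xmax] x [ymin,ymax] to normalized [0,1]. */
point2f world_to_normalized(point2f p, float xmin, float xmax,
                            float ymin, float ymax)
{
    point2f n = { (p.x - xmin) / (xmax - xmin),
                  (p.y - ymin) / (ymax - ymin) };
    return n;
}

/* Map normalized coordinates to a device resolution of width x height,
   rounding to the nearest pixel. */
point2i normalized_to_device(point2f n, int width, int height)
{
    point2i d = { (int)(n.x * (width - 1) + 0.5f),
                  (int)(n.y * (height - 1) + 0.5f) };
    return d;
}
```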
Graphics Functions
• General-purpose graphics package provides users
with a variety of functions for creating and
manipulating pictures
• Graphics input, Output, attributes, transformations,
Viewing, subdividing pictures etc…
• The basic building blocks for pictures are referred to
as graphics output primitives (Straight line, Curved
line, Sphere, Cones etc)
• Attributes are properties of the output primitives.
• We can change the size, position, or orientation of an
object using geometric transformations
• Modeling transformations, which are used to
construct a scene.
• Viewing transformations are used to select a view of
the scene, the type of projection to be used and the
location where the view is to be displayed.
• Input functions are used to control and process the data
flow from interactive devices such as a mouse, tablet, or
joystick.
• In addition, a graphics package contains a number of
housekeeping tasks.
Software Standards
• The primary goal of standardized graphics software is
portability.
• In 1984, the Graphical Kernel System (GKS) was adopted
as the first graphics software standard by the
International Organization for Standardization (ISO).
• The second software standard to be developed and
approved by the standards organizations was
Programmer’s Hierarchical Interactive Graphics
System (PHIGS).
• Extension of PHIGS, called PHIGS+, was developed to provide 3-D
surface rendering capabilities not available in PHIGS.
• The graphics workstations from Silicon Graphics, Inc. (SGI), came with
a set of routines called GL (Graphics Library)
Other Graphics Packages
• Many other computer-graphics programming libraries
have been developed for general graphics routines
• Some are aimed at specific applications (animation,
virtual reality, etc.) Example: Open Inventor Virtual-
Reality Modeling Language (VRML).
• We can create 2-D scenes with in Java applets
(java2D, Java 3D)
Graphics packages
• A set of libraries that provide programmatically access to
some kind of graphics 2D functions.
• Types
1. GKS (Graphical Kernel System) – first graphics package –
accepted by ISO & ANSI
2. PHIGS (Programmer’s Hierarchical Interactive Graphics
System) – accepted by ISO & ANSI
3. PHIGS+ (expanded package)
4. Silicon Graphics GL (Graphics Library)
5. OpenGL
6. Pixar RenderMan interface
7. PostScript interpreters
8. Painting, drawing, and design packages
OpenGL Basic(core) library
A basic library of functions which is provided in OpenGL for specifying graphics primitives,
attributes, geometric transformations, viewing transformations, and many other operations.
Basic OpenGL Syntax
➢Function names in the OpenGL basic library (also called the OpenGL core
library) are prefixed with gl, and the first letter of each component word is capitalized.
➢For eg:- glBegin, glClear, glCopyPixels, glPolygonMode
➢Symbolic constants that are used with certain functions as parameters are all
in capital letters, preceded by GL, and component words are separated by
underscores.
➢For eg:- GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.
➢The OpenGL functions also expect specific data types.
➢For example, an OpenGL function parameter might expect a value that is
specified as a 32-bit integer. But the size of an integer specification can be
different on different machines.
➢To indicate a specific data type, OpenGL uses special built-in, data-type names,
such as GLbyte, GLshort, GLint, GLfloat, GLdouble, GLboolean.
Related Libraries
In addition to OpenGL basic(core) library(prefixed with gl), there are a number of
associated libraries for handling special operations:-
1) OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up viewing
and projection matrices, describing complex objects with line and polygon
approximations, processing the surface-rendering operations, and other complex
tasks. -Every OpenGL implementation includes the GLU library
2) Open Inventor:- provides routines and predefined object shapes for interactive
three dimensional applications which are written in C++.
3) Window-system libraries:- To create graphics we need display window. We cannot
create the display window directly with the basic OpenGL functions since it contains
only device-independent graphics functions, and window-management operations
are device-dependent. However, there are several window-system libraries that
supports OpenGL functions for a variety of machines. Eg:- Apple GL(AGL), Windows-
to-OpenGL(WGL), Presentation Manager to OpenGL(PGL), GLX.
4) OpenGL Utility Toolkit(GLUT):- provides a library of functions which acts as interface
for interacting with any device specific screen-windowing system, thus making our
program device-independent. The GLUT library functions are prefixed with “glut”.
Header Files
• In all graphics programs, we will need to include the header file for
the OpenGL core library.
• In Windows, to include the OpenGL core library and GLU we can use the
following header files:
#include <windows.h> // precedes other header files for including the Microsoft Windows version of the OpenGL libraries
#include<GL/gl.h>
#include<GL/glu.h>
• The above lines can be replaced by using GLUT header file which
ensures gl.h and glu.h are included correctly,
#include <GL/glut.h> //GL in windows
• In Apple OS X systems, the header file inclusion statement will be,
#include<GLUT/glut.h>
Display-Window Management Using GLUT
➢ We can consider a simplified example, minimal number of operations for displaying a
picture.
Step 1: initialization of GLUT
➢We are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
➢This initialization function could also process any command line arguments, but we will
not need to use these parameters for our first example programs.
➢We perform the GLUT initialization with the statement
glutInit (&argc, argv);
Step 2: title
➢We can state that a display window is to be created on the screen with a given caption
for the title bar. This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");
➢where the single argument for this function can be any character string that we want to
use for the display-window title.
Step 3: Specification of the display window
Then we need to specify what the display window is to contain. For this, we create a
picture using OpenGL functions and pass the picture definition to the GLUT routine
glutDisplayFunc, which assigns our picture to the display window
Example: suppose we have the OpenGL code for describing a line segment in a procedure
called lineSegment.
➢Then the following function call passes the line-segment description to the display
window:
glutDisplayFunc (lineSegment);
Step 4: one more GLUT function
➢But the display window is not yet on the screen.
➢We need one more GLUT function to complete the window-processing operations.
➢After execution of the following statement, all display windows that we have created,
including their graphic content, are now activated:
glutMainLoop ( );
➢This function must be the last one in our program. It displays the initial graphics and puts
the program into an infinite loop that checks for input from devices such as a mouse or
keyboard.
Step 5: setting display-window parameters using additional GLUT functions
➢ Although the display window that we created will be in some default location and size,
we can set these parameters using additional GLUT functions.
GLUT Function 1:
➢We use the glutInitWindowPosition function to give an initial location
for the upper left corner of the display window.
➢This position is specified in integer screen coordinates, whose origin
is at the upper-left corner of the screen.
• glutInitWindowPosition(50,100);
GLUT Function 2:
➢We use the glutInitWindowSize function to set the initial pixel dimensions
(width and height) of the display window. (Once the display window is on
the screen, it can later be repositioned and resized.)
glutInitWindowSize(400,300);
GLUT Function 3:
➢We can also set a number of other options for the display window, such as
buffering and a choice of color modes, with the glutInitDisplayMode
function.
Arguments for this routine are assigned with symbolic GLUT constants.
Example: the following command specifies that a single refresh buffer is to
be used for the display window and that we want to use the color mode
which uses red, green, and blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
The values of the constants passed to this function are combined using a
logical or operation.
Actually, single buffering and RGB color mode are the default options.
A Complete OpenGL Program
Step 1: to set background color
➢For the display window, we can choose a background color.
➢Using RGB color values, we set the background color for the display
window to be white, with the OpenGL function: glClearColor (1.0, 1.0,
1.0, 0.0);
➢The first three arguments in this function set the red, green, and blue
component colors to the value 1.0, giving us a white background color
for the display window.
➢If, instead of 1.0, we set each of the component colors to 0.0, we
would get a black background
➢The fourth parameter in the glClearColor function is called the alpha
value for the specified color.
➢One use for the alpha value is as a “blending” parameter
➢When we activate the OpenGL blending operations, alpha values can
be used to determine the resulting color for two overlapping objects.
➢An alpha value of 0.0 indicates a totally transparent object, and an
alpha value of 1.0 indicates an opaque object.
➢For now, we will simply set alpha to 0.0.
➢Although the glClearColor command assigns a color to the display
window, it does not put the display window on the screen.
Step 2: to set window color
To get the assigned window color displayed, we need to invoke the following
OpenGL function: glClear (GL_COLOR_BUFFER_BIT);
The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant
specifying that it is the bit values in the color buffer (refresh buffer) that are
to be set to the values indicated in the glClearColor function. (OpenGL has
several different kinds of buffers that can be manipulated.)
Step 3: to set color to object
In addition to setting the background color for the display window, we can
choose a variety of color schemes for the objects we want to display in a
scene.
For our initial programming example, we will simply set the object color to
red: glColor3f (1.0, 0.0, 0.0);
The suffix 3f on the glColor function indicates that we are specifying the
three RGB color components using floating-point (f) values.
This function requires that the values be in the range from 0.0 to 1.0, and we
have set red = 1.0, green = 0.0, and blue = 0.0.
Example program
• For our first program, we simply display a two-
dimensional line segment.
• To do this, we need to tell OpenGL how we want to
“project” our picture onto the display window
because generating a two-dimensional picture is
treated by OpenGL as a special case of three-
dimensional viewing.
• So, although we only want to produce a very simple
two-dimensional line, OpenGL processes our picture
through the full three-dimensional viewing
operations.
• We can set the projection type (mode) and other
viewing parameters that we need with the following
two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
• This specifies that an orthogonal projection is to be
used to map the contents of a two dimensional
rectangular area of world coordinates to the screen,
and that the x- coordinate values within this rectangle
range from 0.0 to 200.0 with y-coordinate values
ranging from 0.0 to 150.0.
• Whatever objects we define within this world-
coordinate rectangle will be shown within the display
window.
• Anything outside this coordinate range will not be
displayed.
• Therefore, the GLU function gluOrtho2D defines the
coordinate reference frame within the display window
to be (0.0, 0.0) at the lower-left corner of the display
window and (200.0, 150.0) at the upper-right window
corner.
• Finally, we need to call the appropriate OpenGL
routines to create our line segment.
• The following code defines a two-dimensional, straight-line segment with
integer Cartesian endpoint coordinates (180, 15) and (10, 145).
• glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );
The first OpenGL program is organized into three functions.
init: We place all initializations and related one-time parameter settings in
function init.
lineSegment: Our geometric description of the “picture” that we want to
display is in function lineSegment, which is the function that will be referenced
by the GLUT function glutDisplayFunc.
main function: main function contains the GLUT functions for setting up the
display window and getting our line segment onto the screen.
glFlush: This is simply a routine to force execution of our OpenGL functions,
which are stored by computer systems in buffers in different locations,
depending on how OpenGL is implemented.
The procedure lineSegment that we set up to describe our picture is referred to
as a display callback function.
This procedure is described as being “registered” by glutDisplayFunc as the
routine to invoke whenever the display window might need to be redisplayed.
#include<GLUT/glut.h> // (or others, depending on the system in use)
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}
• Absolute coordinate:
➢ So far, the coordinate references that we have discussed are stated as absolute coordinate values.
➢ This means that the values specified are the actual positions within the coordinate system in use.
• Relative coordinates:
➢ However, some graphics packages also allow positions to be specified using relative coordinates.
➢ This method is useful for various graphics applications, such as producing drawings with pen plotters,
artist’s drawing and painting systems, and graphics packages for publishing and printing applications.
➢ Taking this approach, we can specify a coordinate position as an offset from the last position that was
referenced (called the current position).
Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL
➢ The gluOrtho2D command is a function we can use to set up any two-dimensional Cartesian reference frame.
➢ The arguments for this function are the four values defining the x and y coordinate limits for the
picture we want to display.
➢ Since the gluOrtho2D function specifies an orthogonal projection, we need also to be sure that the
coordinate values are placed in the OpenGL projection matrix.
➢ In addition, we could assign the identity matrix as the projection matrix before defining the world-
coordinate range.
➢ This would ensure that the coordinate values were not accumulated with any values we may have
previously set for the projection matrix.
➢ Thus, for our initial two-dimensional examples, we can define the coordinate frame for the screen
display window with the following statements
• glMatrixMode (GL_PROJECTION);
• glLoadIdentity ( );
• gluOrtho2D (xmin, xmax, ymin, ymax);
The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and by
coordinates (xmax, ymax) at the upper-right corner, as shown in Figure below
OpenGL Functions
Geometric Primitives:
• It includes points, line segments, polygon etc. These primitives pass
through geometric pipeline which decides whether the primitive is
visible or not and also how the primitive should be visible on the
screen etc.
• Geometric transformations such as rotation and scaling can be
applied to the primitives that are displayed on the screen. The
programmer can create geometric primitives as shown below:
Case 3:
GL_LINE_LOOP:
Successive vertices are connected
using line segments to form a
closed path or loop i.e., final vertex
is connected to the initial vertex.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
OpenGL Point-Attribute Functions
Color:
• The displayed color of a designated point position is controlled by the current
color values in the state list.
• Also, a color is specified with either the glColor function or the glIndex function.
Size:
• We set the size for an OpenGL point with glPointSize (size); and the point is then
displayed as a square block of pixels.
• Parameter size is assigned a positive floating-point value, which is rounded to an
integer.
• The number of horizontal and vertical pixels in the display of the point is
determined by parameter size.
• Thus, a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a
2×2 pixel array.
• If we activate the antialiasing features of OpenGL, the size of a displayed block of
pixels will be modified to smooth the edges.
• The default value for point size is 1.0.
The following code segment plots three points in varying colors and
sizes: a standard-size red point, a double-size green point, and a
triple-size blue point. Attribute functions may be listed inside or
outside of a glBegin/glEnd pair.
glColor3f (1.0, 0.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glPointSize (2.0);
glColor3f (0.0, 1.0, 0.0);
glVertex2i (75, 150);
glPointSize (3.0);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (100, 200);
glEnd ( );
Line-Attribute Functions OpenGL
• In OpenGL, a straight-line segment has three attribute settings: line color, line width, and
line style.
• OpenGL provides a function for setting the width of a line and another function for
specifying a line style, such as a dashed or dotted line.
Line attributes:
• Type (style)
• Width
• Color
• Pen and brush
Style:
➢Solid
➢Dotted – very short dashes with spacing equal to or greater than the dash itself
➢Dashed – displayed by generating an interdash spacing
➢Dash-dotted – a combination of the two
⦁ For lines with slope magnitude less than 1, we can modify a
line-drawing routine to display thick lines by plotting a
vertical span of pixels at each x position along the line. The
number of pixels in each span is set equal to the integer
magnitude of parameter lw.
⦁ For lines with slope magnitude greater than 1, we can plot
thick lines with horizontal spans, alternately picking up
pixels to the right and left of the line path.
⦁ We can adjust the shape of the line ends to give them a better
appearance by adding line caps.
⦁ One kind of line cap is the butt cap obtained by adjusting the
end positions of the component parallel lines so that the
thick line is displayed with square ends that are perpendicular
to the line path.
⦁ If the specified line has slope m, the square end of the thick
line has slope - l / m .
⦁ Another line cap is the round cap obtained by adding a filled
semicircle to each butt cap.
⦁ The circular arcs are centered on the line endpoints and have a
diameter equal to the line thickness.
⦁ A third option is the projecting square cap: we simply extend the
line and add butt caps that are positioned one-half of the line
width beyond the specified endpoints.
⦁ We can generate thick polylines that are smoothly joined at the
cost of additional processing at the segment endpoints. Three common join methods are:
⦁ miter join
⦁ round join
⦁ bevel join.
⦁ A miter join is accomplished by
extending the outer boundaries of each
of the two lines until they meet.
⦁ A round join is produced by capping the
connection between the two segments
with a circular boundary whose diameter
is equal to the line width.
⦁ A bevel join is generated by displaying
the line segments with butt caps and
filling in the triangular gap where the
segments meet.
Line-Attribute Functions OpenGL
OpenGL Line-Width Function
Line width is set in OpenGL with the function Syntax:
glLineWidth (width);
OpenGL Line-Style Function
We set a current display style for lines with the OpenGL function Syntax:
glLineStipple (repeatFactor, pattern);
Where Pattern:
Parameter pattern references a 16-bit integer that describes how the
line should be displayed: a 1 bit in the pattern denotes an “on” pixel position,
and a 0 bit indicates an “off” pixel position.
repeatFactor
Integer parameter repeatFactor specifies how many times each bit in the pattern
is to be repeated before the next bit in the pattern is applied. The default repeat
value is 1.
Activating line style:
• Before a line can be displayed in the current line-style pattern, we
must activate the linestyle feature of OpenGL.
glEnable (GL_LINE_STIPPLE);
• If we forget to include this enable function, solid lines are displayed;
that is, the default pattern 0xFFFF is used to display line segments.
• At any time, we can turn off the line-pattern feature with
glDisable (GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern
(solid lines).
Example Code
typedef struct { float x, y; } wcPt2D;
wcPt2D dataPts [5];
void linePlot (wcPt2D dataPts [5])
{
int k;
glBegin (GL_LINE_STRIP);
for (k = 0; k < 5; k++)
glVertex2f (dataPts [k].x, dataPts [k].y);
glEnd ( );
glFlush ( );
}
/* Invoke a procedure here to draw coordinate axes. */
glEnable (GL_LINE_STIPPLE);
/* Input first set of (x, y) data values. */
glLineStipple (1, 0x1C47); /* Plot a dash-dot, standard-width polyline. */
linePlot (dataPts);
/* Input second set of (x, y) data values. */
glLineStipple (1, 0x00FF); /* Plot a dashed, double-width polyline. */
glLineWidth (2.0);
linePlot (dataPts);
/* Input third set of (x, y) data values. */
glLineStipple (1, 0x0101); /* Plot a dotted, triple-width polyline. */
glLineWidth (3.0);
linePlot (dataPts);
glDisable (GL_LINE_STIPPLE);
• Equations (1) and (2) are based on the assumption that lines are to be processed from the left
endpoint to the right endpoint. If this processing is reversed, so that the starting endpoint is at the
right, then either we have δx = −1 and
• yk+1 = yk − m (3)
• or (when the slope is greater than 1) we have δy = −1 with xk+1 = xk − (1/m) (4)
Summary of the DDA:
if |m| ≤ 1, x is incremented by 1 at each step and
yk+1 = yk + m
if |m| > 1, y is incremented by 1 at each step and
xk+1 = xk + (1/m)
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
int dx = xEnd - x0, dy = yEnd - y0, steps, k;
[Figure: at pixel column xk+1, the two candidate pixel rows yk and yk+1; d1 is the vertical distance from yk up to the line position y, and d2 is the distance from y up to yk+1.]
Bresenham’s Line Algorithm (cont.)
y = m (xk + 1) + b
d1 = y – yk = m (xk + 1) + b - yk
d2 = (yk+1) – y = yk+1 - m (xk + 1) - b
d1 – d2 = 2m*xk + 2m – 2yk + 2b -1
d1 – d2 = 2m (xk + 1) – 2yk + 2b -1
Bresenham’s Line Algorithm (cont.)
pk = ∆x (d1 – d2) = 2∆y · xk – 2∆x · yk + c,
where c = 2∆y + ∆x (2b – 1)
• p0 = 2∆y – ∆x
Bresenham’s Line Algorithm (cont.)
1. Input two end points and store (x0, y0) in the frame buffer.
2. plot (x0, y0) to be the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y – 2∆x, and obtain the
starting value for the decision parameter as p0 = 2∆y – ∆x.
4. At each xk along the line, starting at k = 0, perform the
following test.
If pk < 0, plot (xk+1, yk) and pk+1 = pk + 2∆y
Otherwise,
plot (xk+1, yk+1) and pk+1 = pk + 2∆y - 2∆x.
• Using Bresenham’s line-drawing algorithm, digitize the line with endpoints (20, 10) and (30, 18).
• ∆y = 18 – 10 = 8
• ∆x = 30 – 20 = 10
• m = ∆y / ∆x = 0.8
• p0 = 2∆y – ∆x = 6
k pk (xk+1, yk+1) | k pk (xk+1, yk+1)
0 6 (21,11) | 5 6 (26,15)
1 2 (22,12) | 6 2 (27,16)
2 -2 (23,12) | 7 -2 (28,16)
3 14 (24,13) | 8 14 (29,17)
4 10 (25,14) | 9 10 (30,18)
Example (cont.)
Bresenham’s Line Algorithm (cont.)
• Notice that Bresenham’s algorithm works on lines with slope in the range 0 < m < 1.
• To draw lines with slope > 1, interchange the roles of x and y directions.
Bresenham’s Line-Drawing Algorithm
for |m| < 1.0
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting
value for the decision parameter as p0 = 2∆y −∆x
4. At each xk along the line, starting at k = 0, perform the following test: If pk <
0, the next point to plot is (xk + 1, yk ) and pk+1 = pk + 2∆y Otherwise, the
next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2∆y − 2∆x
5. Repeat step 4 ∆x − 1 more times. Note: If |m|>1.0 Then p0 = 2∆x −∆y and
If pk < 0, the next point to plot is (xk , yk +1) and pk+1 = pk + 2∆x Otherwise, the
next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2∆x − 2∆y
#include <stdlib.h>
#include <math.h>
/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
int dx = fabs (xEnd - x0), dy = fabs(yEnd - y0);
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx); int x, y;
/* Determine which endpoint to use as start position. */
if (x0 > xEnd)
{
x = xEnd; y = yEnd; xEnd = x0;
}
else
{
x = x0; y = y0;
}
setPixel (x, y);
while (x < xEnd)
{
x++;
if (p < 0)
p += twoDy;
else
{
y++;
p += twoDyMinusDx;
}
setPixel (x, y);
}
}