CG 18CS62 Module1

This document provides an introduction to the course "Computer Graphics and Visualization" (18CS62). It discusses key topics that will be covered, including computer graphics hardware and software, 2D and 3D graphics primitives, geometric transformations, color models, and computer animation. The course aims to explain core computer graphics concepts and demonstrate how to design and implement graphics algorithms and interactive 3D scenes using OpenGL. It will consist of 5 modules covering topics like 2D drawing, 3D transformations, illumination models, and visible surface detection.

COMPUTER GRAPHICS AND

VISUALIZATION
(18CS62)
INTRODUCTION TO COMPUTER GRAPHICS
• Graphics are visual images or designs on a surface, such as a wall, canvas, screen, paper, or stone, created to inform, illustrate, or entertain.
• Computer graphics are pictures created using
computers.
• The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter.
• Computer graphics is concerned with all aspects
of producing pictures or images using a computer.
IMPORTANCE OF COMPUTER GRAPHICS
• One of the most useful ways of presenting information processed by a computer system.
• We humans rely on the visual system more than any other sense to perceive the world.
• Interaction with the computer is possible with the help of GUIs
(Graphical User Interfaces).
• Computer graphics provide a way of visualizing different types of complex information in a form that is easily understood by humans.
COURSE OBJECTIVES
• Explain hardware, software and OpenGL Graphics
Primitives.
• Illustrate interactive computer graphic using the OpenGL.
• Design and implement algorithms for 2D graphics primitives and attributes.
• Demonstrate Geometric transformations, viewing on both 2D
and 3D objects.
• Infer the representation of curves, surfaces, Color and
Illumination models.
OVERVIEW OF MODULES

MODULE     DESCRIPTION
Module-1   Overview: Computer Graphics and OpenGL
Module-2   Fill Area Primitives, 2D Geometric Transformations and 2D Viewing
Module-3   Clipping, 3D Geometric Transformations, Color and Illumination Models
Module-4   3D Viewing and Visible Surface Detection
Module-5   Input & Interaction, Curves and Computer Animation

COURSE OUTCOMES
• Understand the basics of computer graphics and OpenGL
• Apply the concepts of geometric and viewing transformations
on 2D objects
• Apply the concepts of clipping, 3D viewing and Illumination
models
• Understand three-dimensional Viewing and Visible Surface
Detection
• Determine various inputs to the graphics system and user
interactions with it
TEXTBOOKS
1. Donald Hearn & Pauline Baker: Computer Graphics with OpenGL Version, 3rd/4th Edition, Pearson Education, 2011
2. Edward Angel: Interactive Computer Graphics - A Top-Down Approach with OpenGL, 5th Edition, Pearson Education, 2008
REFERENCE BOOKS
1. James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes: Computer Graphics with OpenGL, Pearson Education
2. Xiang, Plastock: Computer Graphics, Schaum's Outline Series, 2nd Edition, TMH
3. Kelvin Sung, Peter Shirley, Steven Baer: Interactive Computer Graphics - Concepts and Applications, Cengage Learning
4. M. M. Raiker: Computer Graphics using OpenGL, Fillip Learning/Elsevier
Module-1
OVERVIEW: COMPUTER
GRAPHICS AND OPENGL
Computer Graphics
Deals with all aspects of producing images or pictures using a computer.
APPLICATIONS OF COMPUTER
GRAPHICS
• Graphs and Charts
• Computer-Aided Design
• Virtual Reality Environments
• Data Visualizations
• Education and Training
• Computer Art
• Entertainment
• Image Processing
• Graphical User Interfaces
Graphs and Charts

• An early application of computer graphics was the display of simple data graphs.
• Used to summarize financial, statistical, mathematical, scientific, engineering and economic data for research reports and managerial summaries.
Computer-Aided Design
• A major use of computer graphics is in design processes –
engineering and architectural systems.
• Used in the design of buildings, automobiles, aircraft, spacecraft.
• Software packages for CAD applications typically provide the
designer with multi window environment, which show enlarged
sections or different views of objects.
Virtual-Reality Environments
• User can interact with the objects in a three-dimensional scene.
• Specialized hardware devices provide three-dimensional viewing
effects and allow the user to “pick up” objects in the scene.
• Animations in virtual reality environments are often used to train
heavy equipment operators.
Data Visualization
• Producing graphical representations for scientific, engineering and
medical data sets and processes – scientific visualization.
• Data sets related to commerce, industry, and other non-scientific areas – business visualization.
• Effective visualization depends on the characteristics of the data.
• A data set can be distributed over a two-dimensional region of space, a three-dimensional region, or a higher-dimensional space.
Medical Imaging
Business Visualization
Education and Training
• Computer generated models are often used as educational
aids.
• Models of physical processes or equipment can help trainees to understand the operation of a system.
• For some training applications, special hardware systems
are designed.
• Simulators for practice sessions or training of ship captains, aircraft pilots, heavy-equipment operators, and air traffic control personnel.
Flight Simulator
Diagram used to explain the operation of
a nuclear reactor
Naval Simulator
Computer Art
• Variety of computer methods and tools are available that provide
facilities for designing object shapes and specifying object
motions.
• Pictures can be painted electronically on a graphics tablet using a
stylus, which can simulate different brush strokes, brush widths,
and colors.
• Commercial art also uses these “painting” techniques for
generating logos and other designs.
• Computer generated animations are also used in producing
television commercials.
Computer Generated Art
Entertainment
• Television productions, motion pictures, and music videos
routinely use computer-graphics methods.
• Graphics are combined with live actors and scenes or the films are
completely generated using computer-rendering and animation
techniques.
Movies
Image Processing
• The modification or interpretation of existing pictures, such as
photographs – image processing.
• Computer Graphics – a computer is used to create a picture.
• Image Processing – used to improve picture quality, analyze
images, and use it for specific applications.
• Image Processing methods are often used in computer graphics
and computer graphics methods are frequently applied in image
processing.
Graphical User Interface
• A major component of a graphical interface is a window manager
that allows a user to display multiple, rectangular screen areas
called display windows.
• Each screen display area can contain graphical and non-graphical information.
• Interfaces also display menus and icons for selection of a display
window, a processing option, or a parameter value.
User interfaces
Video Display Devices
• Primary output device in a graphics system is a video monitor.
• The operation of most video monitors is based on the standard
cathode-ray tube.
• Several other technologies exist, and solid-state monitors have eventually come to predominate.
Refresh Cathode-Ray Tube
• A beam of electrons (cathode rays), emitted by an electron gun,
passes through focusing and deflection systems that direct the
beam towards specified positions on the phosphor-coated screen.
• The phosphor then emits a small spot of light at each position
contacted by the electron beam.
• Because the light emitted by the phosphor fades very rapidly,
some method is needed for maintaining the screen picture.
• The most common method is to redraw the picture repeatedly by
quickly directing the electron beam back over the same screen
points.
• This type of display is called a refresh CRT, and the frequency at
which a picture is redrawn on the screen is referred to as the
refresh rate.
• The primary components of an electron gun in a CRT are the
heated metal cathode and a control grid.
• Heat is supplied to the cathode by directing a current through a
coil of wire, called the filament, inside the cylindrical cathode
structure.
• This causes electrons to be “boiled off” the hot cathode surface.
• In the vacuum inside the CRT envelope, the free, negatively
charged electrons are then accelerated towards the phosphor
coating by a high positive voltage.
• The accelerating voltage can be generated with a positively
charged metal coating on the inside of the CRT envelope near the
phosphor screen, or an accelerating anode.
• Intensity of the electron beam is controlled by the voltage at the
control grid.
• The amount of light emitted by the phosphor coating depends on
the number of electrons striking the screen.
• The focusing system in a CRT forces the electron beam to converge
to a small cross section as it strikes the phosphor.
• Otherwise, the electrons would repel each other, and the beam
would spread out as it approaches the screen.
• Focusing is accomplished with either electric or magnetic fields.
• Deflection of the electron beam can be controlled with either
electric or magnetic fields.
• CRTs are now commonly constructed with magnetic-deflection coils mounted on the outside of the CRT envelope.
• Two pairs of coils are used for this purpose.
• One pair is mounted on the top and bottom of the CRT neck, and
the other pair is mounted on the opposite sides of the neck.
• Different kinds of phosphors are available for use in CRTs.
• Besides color, the major difference between phosphors is their persistence.
• Persistence - how long they continue to emit light after the CRT
beam is removed.
• Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker.
• Resolution – the maximum number of points that can be displayed
without overlap on a CRT.
RASTER-SCAN DISPLAYS
• In a raster-scan system, the electron beam is swept across the
screen, one row at a time, from top to bottom.
• Each row is referred to as a scan line.
• As the electron beam moves across a scan line, the beam intensity
is turned on and off to create a pattern of illuminated spots.
• Picture definition is stored in a memory area called the refresh
buffer or frame buffer, where the term frame refers to the total
screen area.
• Memory area holds the set of color values for the screen points.
• These stored color values are then retrieved from the refresh
buffer and used to control the intensity of the electron beam as it
moves from spot to spot across the screen.
• In this way, the picture is “painted” on the screen one scan line at a
time.
• Refresh buffer is used to store the set of screen color values - color
buffer.
• Other kinds of pixel information, besides color, are stored in buffer
locations, so all the different buffer areas are sometimes referred
to collectively as the - “frame buffer.”
• Printers and home television sets are other examples of devices using raster-scan methods.
Retracing
• At the end of each scan line, the electron beam returns to the left
side of the screen to begin displaying the next scan line- horizontal
retracing.
• At the end of each frame, the electron beam returns to the
top left corner of the screen – vertical retracing.
• Another property of the video monitor is its aspect ratio.
• Aspect Ratio – defined as the number of pixel columns divided by
the number of scan lines that can be displayed by the system.
• The range of colors or shades of gray that can be displayed on a
raster system depends on the types of phosphor used in the CRT
and the number of bits per pixel available in the frame buffer.
• For a black-and-white system, each screen point is either on or off – only one bit per pixel is needed to control the intensity of the screen positions.
• Up to 24 bits per pixel are included in high-quality systems.
RANDOM-SCAN DISPLAYS
• CRT has the electron beam directed only to those parts of the
screen where a picture is to be displayed.
• Pictures are generated as line drawings, with the electron beam
tracing out the component lines one after the other.
• Also referred to as vector displays or stroke-writing displays or
calligraphic displays.
• The component lines of a picture can be drawn and refreshed by a
random-scan system in any specified order.
• Refresh rate on a random-scan system depends on the number of
lines to be displayed on that system.
• Picture definition is now stored as a set of line-drawing commands
in an area of memory referred to as the display list, refresh
display file, vector file, or display program.
• Designed for line-drawing applications, such as architectural and
engineering layouts, and they cannot display realistic shaded
scenes.
Random Scan Display vs Raster Scan Displays
• In a random-scan display, picture definition is stored as a set of line-drawing instructions rather than a set of intensity values for all screen points.
• Random-scan displays have higher resolution than raster systems.
• Random-scan displays produce smooth line drawings because the CRT beam directly follows the line path.
• Raster systems produce jagged lines that are plotted as discrete point sets.
COLOR CRT MONITORS
• A CRT monitor displays color pictures by using a combination of
phosphors that emit different-colored light.
• The emitted light from the different phosphors merges to form a
single perceived color, which depends on the particular set of
phosphors that have been excited.
Beam-Penetration Method
• One way to display color pictures is to coat the screen with layers
of different colored phosphors.
• The emitted color depends on how far the electron beam
penetrates into the phosphor layers.
• Typically, only two phosphor layers are used: red and green.
Shadow-mask Methods
• Commonly used in raster-scan systems (including color TV).
• Produce a much wider range of colors than the beam penetration
method.
• Approach is based on the way that we seem to perceive colors as
combinations of red, green, and blue components, called the RGB
color model.
Shadow-Mask CRT
• A shadow-mask CRT uses three phosphor color dots at each pixel
position.
• One phosphor dot emits a red light, another emits a green light,
and the third emits a blue light.
• This type of CRT has three electron guns, one for each color dot,
and a shadow-mask grid just behind the phosphor-coated screen.
• The three electron beams are deflected and focused as a group
onto the shadow mask, which contains a series of holes aligned
with the phosphor-dot patterns.
• The light emitted from the three phosphors results in a small spot
of color at each pixel position, since our eyes tend to merge the
light emitted from the three dots into one composite color.
Flat-Panel Displays
• Class of video devices that have reduced volume, weight and
power requirements compared to a CRT.
• Two types –
• Emissive displays
• Non-emissive displays
• Emissive displays are devices that convert electrical energy into light. Ex: plasma panels, light-emitting diodes.
• Non-emissive displays use optical effects to convert sunlight or light from some other source into a graphics pattern. Ex: liquid-crystal devices.
Plasma Panels
• Constructed by filling the region between two glass plates with a
mixture of gases that usually includes neon.
• A series of vertical conducting ribbons is placed on one glass panel, and
a set of horizontal conducting ribbons is built into the other glass panel.
• Firing voltages applied to an intersecting pair of horizontal and vertical
conductors cause the gas at the intersection of two conductors to break
down into a glowing plasma of electrons and ions.
• Picture definition is stored in a refresh buffer, and the firing voltages are
applied to refresh the pixel positions 60 times per second.
Fig.10: Basic design of a plasma-panel display device
• Disadvantage
• Early panels were strictly monochromatic devices, but systems are now available with multicolor capabilities.
Thin Film Electroluminescent Displays
• It is similar to a plasma-panel display, but the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with manganese, instead of gas.
• When sufficient voltage is applied, the phosphor becomes a conductor in the area of intersection of the two electrodes.
• Electrical energy is then absorbed by the manganese atoms, which release the energy as a spot of light, similar to the glowing plasma effect in a plasma panel.
• It requires more power than a plasma panel.
• Good color displays are harder to achieve.
Fig.11: Basic design of a thin-film electroluminescent display device
Light- Emitting Diode
• A matrix of diodes is arranged to form pixel positions in the
displays
• The picture definition is stored in a refresh buffer.
• Information is read from the refresh buffer and converted to
voltage levels that are applied to the diodes to produce light
patterns in the display.
Liquid-Crystal Displays
• Used in small systems, such as laptop computers and calculators.
• The term liquid crystal refers to the fact that these compounds
have a crystalline arrangement of molecules, yet they flow like a
liquid.
• Produce picture by passing polarized light from the surroundings
or from an internal light source through a liquid-crystal material
that can be aligned to either block or transmit light.
Fig12: The light-twisting, shutter effect used in the design of most LCD devices
RASTER-SCAN SYSTEMS
• Interactive raster-graphics systems typically employ several
processing units.
• In addition to the central processing unit (CPU), a special-purpose
processor, called the video controller or display controller, is
used to control the operation of the display device.
• Here, the frame buffer can be anywhere in the system memory,
and the video controller accesses the frame buffer to refresh the
screen.
• In addition to the video controller, more sophisticated raster
systems employ other processors as coprocessors and
accelerators to implement various graphics operations.
VIDEO CONTROLLER
• The figure shows a commonly used organization for raster
systems.
• A fixed area of the system memory is reserved for the frame buffer,
and the video controller is given direct access to the frame-buffer
memory.
• Frame-buffer locations, and the corresponding screen positions,
are referenced in Cartesian coordinates.
• In an application program, we use the commands within a
graphics software package to set coordinate positions for
displayed objects relative to the origin of the Cartesian reference
frame.
• Often, the coordinate origin is referenced at the lower-left corner of the screen display area by a software command.
• The screen is then represented as the first quadrant of a two-
dimensional system, with positive x values increasing from left to
right and positive y values increasing from the bottom of the
screen to the top.
• The pixel positions are then assigned integer x values that range from 0 to xmax across the screen, left to right, and integer y values that vary from 0 to ymax, bottom to top.
• Hardware processes reference the pixel positions from the top-left
corner of the screen.
• The figure shows the basic video-controller refresh operation.
• Two registers are used to store the coordinate values for the
screen pixels.
• Initially, the x register is set to 0 and the y register is set to the
value for the top scan line.
• The contents of the frame buffer at this pixel position are then
retrieved and used to set the intensity of the CRT beam.
• Then the x register is incremented by 1, and the process is
repeated for the next pixel on the top scan line.
• This procedure continues for each pixel along the top scan line.
• After the last pixel on the top scan line has been processed, the x
register is set to 0 and the y register is set to the value for the next
scan line down from the top of the screen.
• After cycling through all pixels along the bottom scan line, the
video controller resets the registers to the first pixel position on
the top scan line and the refresh process starts over.
RASTER-SCAN DISPLAY PROCESSOR
• Figure shows one way to organize the components of a raster
system that contains a separate display processor, sometimes
referred to as a graphics controller or a display coprocessor.
• The purpose of the display processor is to free the CPU from the
graphics chores. In addition to the system memory, a separate
display-processor memory area can be provided.
• A major task of the display processor is digitizing a picture
definition given in an application program into a set of pixel values
for storage in the frame buffer. This digitization process is called
scan conversion.
• Graphics commands specifying straight lines and other geometric
objects are scan converted into a set of discrete points,
corresponding to screen pixel positions.
• Characters can be defined with rectangular pixel grids, as in the
figure, or they can be defined with outline shapes, as in next figure.
The array size for character grids can vary from about 5 by 7 to 9
by 12 or more for higher-quality displays.
• A character grid is displayed by superimposing the rectangular
grid pattern into the frame buffer at the specified coordinate
position.
• For characters that are defined as outlines, the shapes are scan
converted into the frame buffer by locating the pixel positions
closest to the outline.
• Display processors perform additional operations
• Generating various line styles
• Displaying color areas
• Applying transformations to the objects in a scene
• Interface with interactive input devices, such as mouse
INPUT DEVICES
• Graphics workstations can make use of various devices for data
input.
• Most systems have a keyboard and one or more additional devices
specifically designed for interactive input. These include a mouse,
trackball, spaceball, and joystick.
• Some other input devices used in particular applications are
digitizers, dials, button boxes, data gloves, touch panels, image
scanners, and voice systems.
Touch Panel, Scanner, Digitizer, Hand Gloves, Joy Stick, Track Ball
Keyboards, Button Boxes, Dials, Mouse
Light Pen, Voice System
GRAPHICS SOFTWARE
• There are two broad classifications for computer-graphics
software:
• Special-purpose packages
• General programming packages.
• Special-purpose packages are designed for non-programmers
who want to generate pictures, graphs, or charts in some
application area without worrying about the graphics procedures
that might be needed to produce such displays.
• The interface to a special-purpose package is typically a set of
menus that allow users to communicate with the programs in their
own terms.
• Examples of such applications include artists' painting programs and
various architectural, business, medical, and engineering CAD systems.
• A general programming package provides a library of graphics
functions that can be used in a programming language such as C, C++,
Java, or Fortran.
• Basic functions in a typical graphics library include those for specifying
picture components (straight lines, polygons, spheres, and other
objects), setting color values and applying rotations or other
transformations.
• Some examples of general graphics programming packages are GL
(Graphics Library), OpenGL, VRML (Virtual-Reality Modeling
Language), Java 2D, and Java 3D.
• A set of graphics functions is often called a computer-graphics
application programming interface (CG API) because the library
provides a software interface between a programming language (such
as C++) and the hardware.
COORDINATE REPRESENTATIONS
• To generate a picture using a programming package, first we give
geometric descriptions of the objects that are to be displayed -
determine the locations and shapes of the objects.
• With few exceptions, general graphics packages require geometric
descriptions to be specified in a standard, right-handed, Cartesian-
coordinate reference frame.
• In general, several different Cartesian reference frames are used in
the process of constructing and displaying a scene.
• We can define the shapes of individual objects, such as trees or
furniture, within a separate reference frame for each object. These
reference frames are called modeling coordinates, or sometimes
local coordinates or master coordinates.
• After the individual object shapes have been specified, we can
“model” a scene by placing the objects into appropriate locations
within a scene reference frame called world coordinates.
• After all parts of a scene have been specified, the overall world-
coordinate description is processed through various routines onto
one or more output-device reference frames for display. This
process is called the viewing pipeline.
• World coordinate positions are first converted to viewing
coordinates corresponding to the view we want of a scene, based
on the position and orientation of the hypothetical camera.
• Later object locations are transformed to a two-dimensional (2D)
projection of the scene, which corresponds to what we will see on
the output device.
• The scene is then stored in normalized coordinates, where each
coordinate value is in the range from −1 to 1 or in the range from 0
to 1, depending on the system.
• Normalized coordinates are also referred to as normalized device
coordinates.
• Finally, the picture is scan-converted into the refresh buffer of a
raster system for display. The coordinate systems for display
devices are generally called device coordinates, or screen
coordinates in the case of a video monitor.
• Device coordinates (xdc, ydc) are integers within the range (0, 0)
to (xmax, ymax) for a particular output device.
GRAPHICS FUNCTIONS
• A general-purpose graphics package provides users with a variety
of functions for creating and manipulating pictures.
• These routines can be broadly classified according to whether they
deal with graphics output, input, attributes, transformations,
viewing, subdividing pictures, or general control.
• The basic building blocks for pictures are referred to as graphics
output primitives.
• They include character strings and geometric entities, such as
points, straight lines, curved lines, filled color areas (usually
polygons), and shapes defined with arrays of color points.
• Some graphics packages provide functions for displaying more
complex shapes such as spheres, cones, and cylinders.
• Routines for generating output primitives provide the basic tools for
constructing pictures.
• Attributes are properties of the output primitives. An attribute
describes how a particular primitive is to be displayed. This includes
color specifications, line styles, text styles, and area-filling patterns.
• We can change the size, position, or orientation of an object within a
scene using geometric transformations.
• After a scene has been constructed, a graphics package projects a view
of the picture onto an output device.
• Viewing transformations are used to select a view of the scene, the
type of projection to be used, and the location on a video monitor where
the view is to be displayed.
• Interactive graphics applications use various kinds of input devices,
including a mouse, a tablet, and a joystick. Input functions are used to
control and process the data flow from these interactive devices.
• Graphics package contains a number of housekeeping tasks, such as
clearing a screen display area to a selected color and initializing
parameters. We can put the functions for carrying out these chores
under the heading control operations.
SOFTWARE STANDARDS
• The primary goal of standardized graphics software is portability.
• Without standards, programs designed for one hardware system
often cannot be transferred to another system without extensive
rewriting of the programs.
• International and national standards-planning organizations in
many countries have cooperated in an effort to develop a generally
accepted standard for computer graphics.
SOFTWARE STANDARD                                        PURPOSE
Graphical Kernel System (GKS)                            Adopted as the first graphics software standard by the ISO and by various national standards organizations, including ANSI.
Programmer’s Hierarchical Interactive Graphics System    Provides increased capabilities for hierarchical object modeling, color specifications, surface rendering and picture manipulation.
(PHIGS)

• As the GKS and PHIGS packages were being developed, the graphics
workstations from Silicon Graphics, Inc. (SGI), became increasingly
popular. These workstations came with a set of routines called GL
(Graphics Library).
• The GL routines were designed for fast, real-time rendering, and soon this
package was being extended to other hardware systems. As a result,
OpenGL was developed as a hardware independent version of GL in the
early 1990s.
• This graphics package is now maintained and updated by the
OpenGL Architecture Review Board, which is a consortium of
representatives from many graphics companies and organizations.
• Graphics functions in any package are typically defined as a set of
specifications independent of any programming language.
• A language binding is then defined for a particular high-level
programming language.
• This binding gives the syntax for accessing the various graphics
functions from that language.
• The OpenGL bindings for the C and C++ languages are the same.
Other OpenGL bindings are also available, such as those for Java
and Python.
INTRODUCTION TO OPENGL
• Basic library of functions in OpenGL - graphics primitives,
attributes, geometric transformation, viewing transformation etc.
• OpenGL is hardware independent – many operations such as input
and output routines are not included in the basic library.
• Input and output routines and many additional functions are available in auxiliary libraries that have been developed for OpenGL programs.
Basic OpenGL Syntax
• Function names in the OpenGL basic library are prefixed with gl. Ex:
glBegin, glClear.
• Certain functions require arguments to be assigned a symbolic constant specifying a parameter name, a value for a parameter, or a particular mode.
• All such constants begin with uppercase letters GL.
• Component words within a constant name are written in capital letters,
and the underscore is used as a separator between all component
words in the name. Ex: GL_POLYGON
• To indicate a specific data type, OpenGL uses special built-in data type names such as GLshort, GLint, GLfloat, GLdouble.
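These naming conventions can be seen together in a short fragment (an illustration only; it assumes an OpenGL rendering context already exists, and the coordinate and color values are arbitrary):

```c
#include <GL/glut.h>   /* pulls in gl.h, which defines the names below */

void example(void) {
    /* Built-in type names: GL + lowercase base-type name */
    GLint   x = 100, y = 50;
    GLfloat red = 1.0f, green = 0.0f, blue = 0.0f;

    /* Symbolic constants: GL_ prefix, capital letters, underscores */
    glClear(GL_COLOR_BUFFER_BIT);

    /* Function names: gl prefix + capitalized component words */
    glColor3f(red, green, blue);
    glBegin(GL_POINTS);
        glVertex2i(x, y);
    glEnd();
}
```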
Related Libraries
• The OpenGL Utility library (GLU) provides routines for setting up viewing and projection matrices, describing complex objects with line and polygon approximations, and other complex tasks.
• Every OpenGL implementation includes the GLU library, and all
GLU function names start with the prefix glu.
• To create a graphics display using OpenGL, we first need to set up
a display window on our video screen.
• We cannot create the display window directly with the basic OpenGL
functions, since this library contains only device-independent graphics
functions, and the window-management operations depend on the
computer we are using.
• OpenGL Utility Toolkit (GLUT) provides a library of functions for
interacting with any screen-windowing system.
• The GLUT library functions are prefixed with glut; this library also
contains methods for describing and rendering quadric curves and
surfaces.
Header Files
• If we use GLUT to handle the window-managing operations, we do not
need to include gl.h and glu.h because GLUT ensures that these will be
included correctly.
• We can replace the header files for OpenGL and GLU with
#include<GL/glut.h>
• We could include gl.h and glu.h as well, but doing so would be
redundant and could affect program portability.
• In addition, we will often need to include header files that are required
by the C++ code
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
Display-Window Management using GLUT
• First step is to initialize GLUT. This initialization function could
process any command-line arguments.
• We perform the GLUT initialization with the statement:
glutInit (&argc, argv);
• We can state that a display window is to be created on the screen
with a given caption for a title bar using the function
glutCreateWindow(“An example OpenGL program”);
• Then we need to specify what the display window is to contain.
• For this, we create a picture using OpenGL functions and pass the
picture definition to the GLUT routine glutDisplayFunc, which
assigns our picture to the display window.
glutDisplayFunc(linesegment);
• But the display window is not yet on the screen. We need one
more function to complete the window-processing operations.
glutMainLoop();
• This function must be the last one in our program.
• It displays the initial graphics and puts the program into an
infinite loop that checks for input from devices such as mouse or
keyboard.
• Although the display window will be in some default position and
size, we set these parameters using additional GLUT functions.
• We use glutInitWindowPosition function to give an initial location
for the top left corner of the display window.
• glutInitWindowPosition(50,100) specifies that the top-left corner
of the display window should be placed 50 pixels to the right of
the left edge of the screen and 100 pixels down from the top edge
of the screen.
• glutInitWindowSize function is used to set the initial pixel width
and height of the display window.
• We can set a number of other options for the display window, such as
buffering and choice of color modes, with the glutInitDisplayMode
function.
• Arguments for this routine are symbolic GLUT constants.
• Using RGB color values, we set the background color for the display
window to be white, with the OpenGL function
glClearColor(1.0, 1.0, 1.0, 0.0);
• The first three arguments in this function set each of the red, green and
blue component colors to the value 1.0.
• The fourth parameter in the glClearColor function is called the alpha
value for the specified color.
• When we activate the OpenGL blending operations, alpha values can be
used to determine the resulting color of two overlapping objects.
• An alpha value of 0.0 indicates a totally transparent object; a value of
1.0 indicates an opaque object.
• Although the glClearColor assigns color to the display window, it
does not put it on the screen.
• To get the assigned window color displayed, we need to invoke the
following OpenGL function
glClear(GL_COLOR_BUFFER_BIT);
• GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant
specifying that the bit values in the color buffer are to be set to the
values indicated in the glClearColor function.
Chapter -2
GRAPHICS OUTPUT
PRIMITIVES
Coordinate Reference Frames
• To describe a picture, we first decide upon a convenient Cartesian
coordinate system, called the world-coordinate reference frame,
which could be either two dimensional or three-dimensional.
• We then describe the objects in our picture by giving their geometric
specifications in terms of positions in world coordinates.
• For instance, we define a straight-line segment with two end point
positions, and a polygon is specified with a set of positions for its
vertices.
• These coordinate positions are stored in the scene description along
with other information about the objects, such as their color and their
coordinate extents, which are the minimum and maximum x, y, and z
values for each object.
• Objects are then displayed by passing the scene information to the
viewing routines, which identify visible surfaces and ultimately map
the objects to positions on the video monitor.
• The scan-conversion process stores information about the scene,
such as color values, at the appropriate locations in the frame buffer,
and the objects in the scene are displayed on the output device.
Screen Coordinates
• Locations on a video monitor are referenced in integer screen
coordinates, which correspond to the pixel positions in the frame
buffer.
• Pixel coordinate values give the scan line number (the y value) and
the column number (the x value along a scan line).
• Hardware processes, such as screen refreshing, typically address
pixel positions with respect to the top-left corner of the screen.
• Scan lines are then referenced from 0, at the top of the screen, to
some integer value, ymax, at the bottom of the screen, and pixel
positions along each scan line are numbered from 0 to xmax, left to
right.
• However, with software commands, we can set up any convenient
reference frame for screen positions.
• The coordinate values we use to describe the geometry of a scene are
then converted by the viewing routines to integer pixel positions
within the frame buffer.
• Scan-line algorithms for the graphics primitives use the defining
coordinate descriptions to determine the locations of pixels that are
to be displayed.
• For example, given the endpoint coordinates for a line segment, a
display algorithm must calculate the positions for those pixels that lie
along the line path between the endpoints.
• We assume that each integer screen position references the center of
a pixel area.
• Once pixel positions have been identified for an object, the
appropriate color values must be stored in the frame buffer.
• For this purpose, we will assume that we have a low-level procedure
of the form setPixel (x, y);
• This procedure stores the current color setting into the frame buffer
at integer position(x, y), relative to the selected position of the
screen-coordinate origin
• We sometimes also will want to be able to retrieve the current frame-
buffer setting for a pixel location.
• So we will assume that we have the following low-level function for
obtaining a frame-buffer color value: getPixel (x, y, color);
• In this function, parameter color receives an integer value
corresponding to the combined red, green, and blue (RGB) bit codes
stored for the specified pixel at position (x, y).
• Although we only specify color values at (x, y) positions for a two
dimensional picture, additional screen-coordinate information is
needed for three-dimensional scenes.
• In this case, screen coordinates are stored as three dimensional
values, where the third dimension references the depth of object
positions relative to a viewing position.
Absolute and Relative Coordinate
Specifications
• Absolute coordinate values - the values specified are the actual
positions within the coordinate system in use.
• Relative coordinate values - we can specify a coordinate position
as an offset from the last position that was referenced (called the
current position).
• Useful for various graphics applications, such as producing
drawings with pen plotters, artist’s drawing and painting systems,
and graphics packages for publishing and printing applications.
• For example, if location (3, 8) is the last position that has been
referenced in an application program, a relative coordinate
specification of (2,−1) corresponds to an absolute position of (5, 7).
• Options can be provided in a graphics system to allow the
specification of locations using either relative or absolute
coordinates.
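The offset arithmetic above can be sketched as a small helper; the names Point and relativeToAbsolute are illustrative assumptions, not part of any graphics API:

```c
typedef struct { int x, y; } Point;

/* Convert a relative offset into an absolute position,
   given the current (last-referenced) position. */
Point relativeToAbsolute (Point current, Point offset)
{
    Point absolute = { current.x + offset.x, current.y + offset.y };
    return absolute;
}
```

With the current position (3, 8), the relative specification (2, −1) yields the absolute position (5, 7), matching the example above.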
Specifying A 2D World-Coordinate Reference Frame
in OpenGL
• The gluOrtho2D command is a function we can use to set up any two
dimensional Cartesian reference frame.
• The arguments for this function are the four values defining the x and y
coordinate limits for the picture we want to display.
• Since the gluOrtho2D function specifies an orthogonal projection, we
also need to be sure that the coordinate values are placed in the
OpenGL projection matrix.
• In addition, we could assign the identity matrix as the projection matrix
before defining the world-coordinate range to ensure that the
coordinate values were not accumulated with any values we may have
previously set for the projection matrix.
• We can define the coordinate frame for the screen display window
with the following statements:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
• The display window will then be referenced by coordinates(xmin,
ymin) at the lower-left corner and by coordinates(xmax, ymax) at
the upper-right corner.
• If the coordinate extents of a primitive are within the coordinate
range of the display window, all of the primitive will be displayed.
• Otherwise, only those parts of the primitive within the display-
window coordinate limits will be shown.
• When we set up the geometry describing a picture, all positions for
the OpenGL primitives must be given in absolute coordinates, with
respect to the reference frame defined in the gluOrtho2D function.
OpenGL Point Functions
• To specify the geometry of a point, we simply give a coordinate
position in the world reference frame.
• Then this coordinate position, along with other geometric
descriptions we may have in our scene, is passed to the viewing
routines.
• Unless we specify other attribute values, OpenGL primitives are
displayed with a default size and color.
• The default color for primitives is white, and the default point size
is equal to the size of a single screen pixel.
• We use the following OpenGL function to state the coordinate values for a
single position:
glVertex* ( );
• The asterisk (*) indicates that suffix codes are required for this function
which are used to identify the spatial dimension, the numerical data type
to be used for the coordinate values, and a possible vector form for the
coordinate specification.
• Calls to glVertex functions must be placed between a glBegin function and a
glEnd function.
• The argument of the glBegin function is used to identify the kind of output
primitive that is to be displayed, and glEnd takes no arguments.
• For point plotting, the argument of the glBegin function is the symbolic
constant GL_POINTS.
• Coordinate positions in OpenGL can be given in two, three, or four
dimensions.
• We use a suffix value of 2, 3, or 4 on the glVertex function to indicate
the dimensionality of a coordinate position.
• Because OpenGL treats two dimensions as a special case of three
dimensions, any (x, y) coordinate specification is equivalent to the
three-dimensional specification (x, y, 0).
• To state which data type is to be used for the numerical value
specifications of the coordinates we use second suffix code on the
glVertex function.
• Suffix codes for specifying a numerical data type are i (integer), s
(short), f (float), and d (double).
• Coordinate values can be listed explicitly in the glVertex function, or a
single argument can be used that references a coordinate position as
an array.
• If we use an array specification for a coordinate position, we need to
append v (for “vector”) as a third suffix code.
OpenGL Line Functions
• Graphics packages typically provide a function for specifying one
or more straight-line segments, where each line segment is
defined by two endpoint coordinate positions.
• In OpenGL, we select a single endpoint coordinate position using the
glVertex function, just as we did for a point position.
• And we enclose a list of glVertex functions between the glBegin /
glEnd pair.
• There are three symbolic constants in OpenGL that we can use to
specify how a list of end point positions should be connected to form
a set of straight-line segments.
• By default, each symbolic constant displays solid, white lines.
• A set of straight-line segments between each successive pair of end
points in a list is generated using the primitive line constant GL_LINES.
• With the OpenGL primitive constant GL_LINE_STRIP, we obtain a
polyline.
• In this case, the display is a sequence of connected line segments
between the first end point in the list and the last end point.
• The first line segment in the polyline is displayed between the first
endpoint and the second endpoint; the second line segment is
between the second and third end points; and so forth, up to the last
line endpoint.
• The third OpenGL line primitive is GL_LINE_LOOP, which produces
a closed polyline.
• Lines are drawn as with GL_LINE_STRIP, but an additional line is
drawn to connect the last coordinate position and the first
coordinate position.
LINE-DRAWING ALGORITHMS
• A straight-line segment in a scene is defined by coordinate
positions for the endpoints of the segment.
• To display the line on a raster monitor, the graphics system must
first project the endpoints to integer screen coordinates and
determine the nearest pixel positions along the line path between
the two endpoints.
• Next the line color is loaded into the frame buffer at the
corresponding pixel coordinates.
• A computed line position of (10.48, 20.51) is converted to the pixel
position (10, 21).
• This rounding of coordinates values to integers causes all but
horizontal and vertical lines to be displayed with a stair-step
appearance ("the jaggies").
Line Equations
• Determine the pixel position along a straight-line path from the
geometric properties of the line.
• The Cartesian slope-intercept equation for a straight line is
y = m . x + b (1)
with m as the slope of the line and b as the y intercept.
• Given the two endpoints of a line segment at positions (x0, y0) and
(xend, yend), we can determine values for the slope m and y intercept b
with the following calculations:
m = (yend − y0) / (xend − x0) (2)
b = y0 − m · x0 (3)
• For any given x interval δx along a line, we can compute the
corresponding y and interval δy from equation 2 as
δy = m. δx (4)
• Similarly, we can obtain the x interval δx corresponding to a specified
δy as
δx = δy/m (5)
• These equations form the basis for determining deflection voltages in
analog displays, such as vector-scan system, where arbitrarily small
changes in deflection voltage are possible.
• On raster systems, lines are plotted with pixels, and the step sizes in
the horizontal and vertical directions are constrained by pixel
separations.
• That is, we must "sample" a line at discrete positions and determine
the nearest pixel to the line at each sample position.
• The scan-conversion process for straight lines is illustrated in fig.
with discrete sample positions along the x -axis.
Straight –line segment
with five sampling
positions along the x
axis between x0 and
xend
DDA Algorithm
• Digital Differential Analyzer is a scan–conversion line algorithm
based on calculating either δx or δy using eq 4 and 5.
• A line is sampled at unit intervals in one coordinate and the
corresponding integer values nearest the line path are determined
for the other coordinate.
• If the slope is less than or equal to 1, we sample at unit x intervals (δx
= 1) and compute successive y values as
yk+1 = yk + m (6)
• Subscript k takes integer values starting from 0, for the first point and
increases by 1 until the final endpoint is reached.
• Since m can be any real number between 0.0 and 1.0, each calculated
y value must be rounded to the nearest integer corresponding to a
screen pixel position in the x column we are processing.
• For lines with positive slope greater than 1.0, we reverse the roles of x
and y.
• We sample at unit y intervals (δy = 1) and calculate the consecutive x
values as
x k+1 = xk + 1/m (7)
• In this case, each computed x value is rounded to the nearest pixel
position along the current y scan line.
• Equation (6) and (7) are based on the assumption that lines are to be
processed from the left endpoint to the right endpoint.
• If this processing is reversed, so that the starting endpoint is at the
right, then either we have δx = -1 and
y k+1 = yk – m (8)
• Or when the slope is greater than 1 we have δy = -1 with
x k+1 = xk - 1/m (9)
• This algorithm accepts as input two integer screen positions for the
endpoints of a line segment.
• Horizontal and vertical differences between the endpoint positions are
assigned to parameters dx and dy. The difference with the greater
magnitude determines the value of parameter steps.
• We draw the starting pixel at position (x0, y0), and then draw the
remaining pixels iteratively, adjusting x and y at each step to obtain the
next pixel position before drawing it.
• If the magnitude of dx is greater than the magnitude of dy and x0 is less
than xEnd, the values for the increments in the x and y directions are 1 and
m, respectively.
• If the greater change is in the x direction, but x0 is greater than xEnd, then
the decrements −1 and −m are used to generate each new point on the line.
• Otherwise, we use a unit increment (or decrement) in the y direction and
an x increment (or decrement) of 1/m
#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    if (fabs (dx) > fabs (dy))
        steps = fabs (dx);
    else
        steps = fabs (dy);
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}
• Faster method for calculating pixel position than one that directly
implements eq 1 .
• Eliminates the multiplication by making use of appropriate
increments applied in the x or y directions to step from one pixel
position to another along the line path.
• Round off error in successive additions of the floating point
increment, can cause the calculated pixel positions to drift away from
the true line path for long line segments.
• Rounding operations and floating point arithmetic in this procedure
are still time consuming.
• Example: Consider the two endpoints (2, 3) and (12, 8), and digitize the
line between them using the DDA algorithm.
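As an illustrative sketch of this example, the DDA sampling loop can collect the rounded pixel positions into arrays rather than calling setPixel; the name ddaPixels and the array-output form are assumptions for illustration, not the textbook procedure verbatim:

```c
#include <stdlib.h>

/* DDA sampling: store the rounded pixel positions for the line from
   (x0, y0) to (xEnd, yEnd) into px[]/py[]; returns the pixel count.
   (Rounding by adding 0.5 assumes non-negative coordinates.) */
int ddaPixels (int x0, int y0, int xEnd, int yEnd, int px[], int py[])
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k, n = 0;
    float xInc, yInc, x = (float) x0, y = (float) y0;

    /* The larger coordinate difference determines the number of steps. */
    steps = (abs (dx) > abs (dy)) ? abs (dx) : abs (dy);
    xInc = (float) dx / (float) steps;
    yInc = (float) dy / (float) steps;

    px[n] = (int) (x + 0.5f);  py[n] = (int) (y + 0.5f);  n++;
    for (k = 0; k < steps; k++) {
        x += xInc;
        y += yInc;
        px[n] = (int) (x + 0.5f);  py[n] = (int) (y + 0.5f);  n++;
    }
    return n;
}
```

For the endpoints (2, 3) and (12, 8) the slope is 0.5, so x steps in units of 1 and y in increments of 0.5, producing 11 pixels from (2, 3) through (3, 4), (4, 4), ... up to (12, 8).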
Bresenham’s Line Drawing Algorithm
• An efficient raster scan line-generating algorithm that uses only
incremental integer calculations.
• Can be adapted to display circles and other curves.
• The vertical axes show scan line positions, and the horizontal axes
identify pixel columns.
• Sampling at unit x intervals, we need to decide which of the two pixel
positions is closer to the line path at each sample step.
• Starting from the left end point we need to determine at the next
sample position whether to plot the pixel at position (11,11) or at
(11,12).
• Fig shows a negative slope line path starting from the left end point at
pixel position (50,50).
• In this case, do we select the next pixel position as (51, 50) or as
(51, 49)?
• These questions are answered by the Bresenham’s line drawing
algorithm by testing the sign of an integer parameter whose value is
proportional to the difference between the vertical separations of the
two pixel positions from the actual line path.
• To illustrate Bresenham’s approach, we first consider the scan
conversion process for lines with positive slope less than 1.0.
• Pixel positions along a line path are then determined by sampling at
unit x intervals.
• Starting from the left endpoint (x0, y0) of a given line, we step to each
successive column (x position) and plot the pixel whose scan-line y
value is closest to the line path.
• The figure demonstrates the kth step in this process.
• Assuming we have determined that the pixel at (xk,yk) is to be
displayed, we next need to decide which pixel to plot in column xk+1=
xk+1.
• Our choices are the pixels at positions (xk+1, yk) and (xk+1, yk+1).
• At sampling position xk+1, we label vertical pixel separations from the
mathematical line path as dupper and dlower
• The y coordinate on the mathematical line at pixel column position
xk + 1 is calculated as
y = m (xk + 1) + b (10)
• To determine which of the two pixels is closest to the line path, we
can set up an efficient test that is based on the difference between the
two pixel separations:
dlower = y − yk = m (xk + 1) + b − yk (11)
dupper = (yk + 1) − y = yk + 1 − m (xk + 1) − b (12)
dlower − dupper = 2m (xk + 1) − 2yk + 2b − 1 (13)
• A decision parameter pk for the kth step in the line algorithm can be
obtained by rearranging Equation 13 so that it involves only integer
calculations.
• We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the
vertical and horizontal separations of the endpoint positions, and
defining the decision parameter as
pk = Δx (dlower − dupper) = 2Δy · xk − 2Δx · yk + c (14)
• The sign of pk is the same as the sign of dlower −dupper, because Δx
>0 for our example.
• Parameter c is constant and has the value 2Δy + Δx(2b − 1), which is
independent of the pixel position and will be eliminated in the
recursive calculations for pk .
• If the pixel at yk is “closer” to the line path than the pixel at yk + 1
(that is, dlower < dupper), then decision parameter pk is negative.
• In that case, we plot the lower pixel; otherwise, we plot the upper
pixel.
• Coordinate changes along the line occur in unit steps in either the x
and y directions.
• We can obtain the values of successive decision parameters using
incremental integer calculations.
• At step k + 1, the decision parameter is evaluated from (14) as
pk+1 = 2Δy · xk+1 − 2Δx · yk+1 + c
• Subtracting (14) from the preceding equation, we have
pk+1 − pk = 2Δy (xk+1 − xk) − 2Δx (yk+1 − yk)
• However, xk+1 = xk + 1, so that
pk+1 = pk + 2Δy − 2Δx (yk+1 − yk)
where the term yk+1 − yk is either 0 or 1, depending on the sign of
parameter pk.
• Bresenham’s Line-Drawing Algorithm for |m| < 1.0
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first
point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and obtain the
starting value for the decision parameter as p0 = 2Δy − Δx
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk ) and
pk+1 = pk + 2Δy
Otherwise, the next point to plot is (xk+ 1, yk + 1) and
pk+1= pk + 2Δy − 2Δx
5. Repeat step 4 Δx − 1 more times.
• Example: Digitize the line with endpoints (20, 10) and (30, 18), which has a slope of 0.8.
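A sketch of the five algorithm steps above for this case (0 < m < 1, left endpoint given first); the name bresenhamPixels and the array-output form are illustrative assumptions, with pixels collected into arrays instead of being plotted:

```c
/* Bresenham line sampling for 0 < m < 1, left endpoint first;
   pixel positions are stored in px[]/py[]; returns the pixel count. */
int bresenhamPixels (int x0, int y0, int xEnd, int yEnd, int px[], int py[])
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;                    /* p0 = 2Δy − Δx */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x = x0, y = y0, n = 0;

    px[n] = x;  py[n] = y;  n++;            /* plot the first point */
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                     /* stay on the same scan line */
        else {
            y++;                            /* step up one scan line */
            p += twoDyMinusDx;
        }
        px[n] = x;  py[n] = y;  n++;
    }
    return n;
}
```

For the endpoints (20, 10) and (30, 18), Δx = 10, Δy = 8, and p0 = 6; successive decision parameters select the pixels (20, 10), (21, 11), (22, 12), (23, 12), (24, 13), (25, 14), (26, 15), (27, 16), (28, 16), (29, 17), (30, 18).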
• For a line with positive slope greater than 1.0, we interchange the
roles of the x and y directions.
• We step along the y direction in unit steps and calculate successive x
values nearest the line path.
• If the initial position for a line with positive slope is the right end
point, both x and y decrease as we step from right to left.
• If dlower = dupper, we always choose the upper (or lower) of the two
candidate pixels.
• For negative slopes, the procedures are similar, except that now one
coordinate decreases as the other increases.
• Horizontal lines (Δy = 0), vertical lines (Δx = 0) and diagonal lines (Δx
= Δy) can be loaded directly into the frame buffer without processing
the line plotting algorithm.
Circle-Generating Algorithms
• A circle is defined as the set of points that are all at a given distance
r from a center position (xc, yc)
• For any circle point (x, y) this distance relationship is expressed by
the Pythagorean theorem in Cartesian coordinates as
(x − xc)² + (y − yc)² = r² (26)
• We could use this equation to calculate the position of points on a
circle circumference by stepping along the x axis in unit steps from
xc –r to xc +r and calculating the corresponding y values at each
position.
• But this not the best method for generating the circle.
• Problem with this approach is that it involves considerable
computation at each step and the spacing between the plotted pixel
positions is not uniform.
• One way to eliminate the unequal spacing is to calculate points along
the circular boundary using polar coordinates r and ϴ.
• Expressing the circle equation in parametric polar form yields the
pair of equations
x = xc + r cos ϴ
y = yc + r sin ϴ
• When a display is generated with these equations using a fixed
angular step size, a circle is plotted with equally spaced points along
the circumference.
• Although the polar coordinates provide equal point spacing, the
trigonometric calculations are still time consuming.
• For any of the previous circle-generating methods, we can reduce the
computations by considering the symmetry of circles.
• The shape of the circle is similar in each quadrant.
• Therefore, if we determine the curve positions in the first quadrant,
we can generate the circle section in the second quadrant of the xy
plane by noting that the two circle sections are symmetric with
respect to the y axis.
• Circle sections in the third and fourth quadrants can be obtained from
sections in the first and second quadrants by considering symmetry
about the x axis.
• Noting the symmetry between octants (about the 45° line y = x), we
need only calculate pixel positions for one octant.
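The eight-way symmetry can be sketched as a small helper that expands one computed position into all eight symmetric positions (circleSymmetryPoints is an illustrative name, not a library routine):

```c
/* Given one computed position (x, y) on a circle centered at (xc, yc),
   fill sx[]/sy[] with the eight symmetric pixel positions. */
void circleSymmetryPoints (int xc, int yc, int x, int y,
                           int sx[8], int sy[8])
{
    sx[0] = xc + x;  sy[0] = yc + y;    /* octant containing (x, y)   */
    sx[1] = xc - x;  sy[1] = yc + y;    /* mirror about the y axis    */
    sx[2] = xc + x;  sy[2] = yc - y;    /* mirror about the x axis    */
    sx[3] = xc - x;  sy[3] = yc - y;    /* mirror about both axes     */
    sx[4] = xc + y;  sy[4] = yc + x;    /* mirror about the line y = x */
    sx[5] = xc - y;  sy[5] = yc + x;
    sx[6] = xc + y;  sy[6] = yc - x;
    sx[7] = xc - y;  sy[7] = yc - x;
}
```

So each position computed in one octant yields eight frame-buffer pixels, which is why the midpoint algorithm only iterates from x = 0 to x = y.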
• Determining the pixel positions along a circle circumference using
symmetry still requires a good deal of computation.
• The Cartesian equation involves multiplication and square root
calculations, while parametric equations contain multiplications and
trigonometric calculations.
• More efficient circle algorithms are based on incremental calculation
of decision parameters, as in Bresenham line algorithm which
involves only simple integer operations.
Midpoint Circle Algorithm
• Bresenham’s line algorithm for raster displays is adapted to circle
generation by setting up decision parameter for finding the closest
pixel to the circumference at each sampling step.
• The basic idea in this approach is to test halfway position between
two pixels to determine if this midpoint is inside or outside the
circle boundary.
• As in raster line algorithm, we sample at unit intervals and determine
the closest pixel position to the specified circle path at each step.
• To apply the midpoint method, we define the circle function as
fcirc (x, y) = x² + y² − r² (29)
• The relative position of any point (x, y) can be determined by checking
the sign of the circle function:
fcirc (x, y) < 0 if (x, y) is inside the circle boundary
fcirc (x, y) = 0 if (x, y) is on the circle boundary
fcirc (x, y) > 0 if (x, y) is outside the circle boundary
• Any point (x,y) on the boundary of the circle with radius r satisfies
the equation fcirc(x,y)=0.
• If the point is in the interior of the circle, the circle function is
negative.
• If the point is outside the circle, the function is positive.
• The figure shows the midpoint between the two candidate pixels at
sampling position xk+1 .
• Assuming that we have just plotted the pixel at (xk, yk), we next
need to determine whether the pixel at (xk +1, yk ) or the one at
position (xk +1, yk -1) is closer to the circle.
• Our decision parameter is the circle function (29) evaluated at the
midpoint between these two pixels:
pk = fcirc (xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r²
• If pk<0, this midpoint is inside the circle and pixel on scan line yk is
closer to the circle boundary.
• Otherwise the mid position is outside or on the circle boundary, and
we select the pixel on scan line yk -1.
• Successive decision parameters are obtained using incremental
calculations.
• The initial decision parameter is obtained by evaluating the circle
function at the start position (x0, y0) = (0, r):
p0 = fcirc (1, r − 1/2) = 1 + (r − 1/2)² − r² = 5/4 − r
• If the radius r is specified as an integer, we can simply round p0 to
p0 = 1 − r
since all increments are integers.
Algorithm
1. Input radius r and circle center (xc , yc ), then set the coordinates for
the first point on the circumference of a circle centered on the
origin as (x0 , y0) = (0, r)
2. Calculate the initial value of the decision parameter as
p0 = 5/4 − r
3. At each xk position, starting at k = 0, perform the following test:
• If pk < 0, the next point along the circle centered on (0, 0)
is (xk+1, yk ) and pk+1 = pk+ 2xk+1 + 1
• Otherwise, the next point along the circle is
(xk+1, yk − 1) and pk+1 = pk + 2xk+1 + 1 − 2yk+1, where 2xk+1 = 2xk + 2
and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path
centered at (xc ,yc) and plot the coordinate values as follows
x = x + xc , y = y + yc
6. Repeat steps 3 through 5 until x ≥ y.
• EXAMPLE: Given a circle radius r = 10, we demonstrate the
midpoint circle algorithm by determining positions along the
circle octant in the first quadrant from x = 0 to x = y.
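A sketch of the algorithm above for this example, restricted to a circle centered at the origin; the name midpointCircleOctant and the array-output form are illustrative assumptions:

```c
/* Midpoint circle sampling: positions along the octant from x = 0 to
   x = y for a circle of radius r centered at the origin; the positions
   are stored in px[]/py[], and the pixel count is returned. */
int midpointCircleOctant (int r, int px[], int py[])
{
    int x = 0, y = r, n = 0;
    int p = 1 - r;                  /* rounded initial value of 5/4 − r */

    px[n] = x;  py[n] = y;  n++;    /* first point (0, r) */
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;         /* midpoint inside: keep scan line  */
        else {
            y--;                    /* midpoint outside: step down      */
            p += 2 * x + 1 - 2 * y;
        }
        px[n] = x;  py[n] = y;  n++;
    }
    return n;
}
```

For r = 10, the initial decision parameter is p0 = 1 − 10 = −9, and the loop generates the octant positions (0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), (7, 7); each of these would then be expanded by symmetry and shifted to the circle center (xc, yc).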
Character Primitives
• Routines for generating character primitives are available in most
graphics packages.
• Letters, numbers and other characters can be displayed in a
variety of sizes and styles.
• The overall design style for a set (family) of characters is called
typeface.
• Two different representations are used for storing computer fonts.
• A simple method is to set up a pattern of binary values on a
rectangular grid - bitmap font (Raster font).
• Another more flexible scheme is to describe the character shapes
using straight lines and curve sections – outline font (Stroke font).
Bitmap Font
• Simplest to define and display: we just need to map the character
grids to a frame buffer position.
• Require more storage space since each variation (size and format)
must be saved in a font cache.
• It is possible to generate different sizes and other variations, such
as bold and italic from one bitmap font set, but this often does not
produce good results.
• We can increase or decrease the size of the character bitmap only in
integer multiples of the pixel size.
• To double the size of the character, we need to double the number of
pixels in the bitmap.
• This just increases the ragged appearance of its edges.
Outline Fonts
• In contrast to the bitmap fonts, outline fonts can be increased in
size without distorting the character shapes.
• Requires less storage because each variation does not require a
distinct font cache.
• We can produce boldface, italic or different sizes by manipulating
the curve definitions for the character outlines.
• However, it does take more time to process outline fonts, because
they must be scan-converted into the frame buffer.
OpenGL Character Functions
• The GLUT library contains routines for displaying both bitmapped
and outline fonts.
• Bitmapped GLUT fonts are rendered using OpenGL glBitmap
function.
• The outline fonts are generated with polyline (GL_LINE_STRIP)
boundaries.
• We can display the bitmap GLUT character with
glutBitmapCharacter( font, character);
where parameter font is assigned a symbolic GLUT constant
identifying a particular set of typefaces, and parameter character is
assigned either an ASCII code or the specific character we wish to
display.
• We can select fixed width font by assigning either
GLUT_BITMAP_8_BY_13 or GLUT_BITMAP_9_BY_15.
• We can select a 10 point proportionally spaced font with either
GLUT_BITMAP_TIMES_ROMAN_10 or GLUT_BITMAP_HELVETICA_10.
• A 12 point Times-Roman font is also available as well as 18 point
Helvetica fonts.
• Each character generated by glutBitmapCharacter is displayed so that
the origin (lower left corner ) of the bitmap is at the current raster
position.
• After the character bitmap is loaded into the refresh buffer, an offset
equal to the width of the character is added to the x coordinate for the
current raster position.
• An outline character is displayed with the following function call:
glutStrokeCharacter(font, character);
• We can assign parameter font either the value
GLUT_STROKE_ROMAN, which displays a proportionally spaced font,
or the value GLUT_STROKE_MONO_ROMAN, which displays a font
with constant spacing.
• We can control the size and position of these characters by specifying
transformation operations.