Computer Graphics Module1
Syllabus:
Overview: Computer Graphics and OpenGL: Computer Graphics: Basics of computer graphics,
Application of Computer Graphics, Video Display Devices: Random Scan and Raster Scan displays,
color CRT monitors, Flat panel displays. Raster-scan systems: video controller, raster scan Display
processor, graphics workstations and viewing systems, Input devices, graphics networks, graphics on
the internet, graphics software. OpenGL: Introduction to OpenGL, coordinate reference frames,
specifying two-dimensional world coordinate reference frames in OpenGL, OpenGL point functions,
OpenGL line functions, point attributes, line attributes, curve attributes, OpenGL point attribute
functions, OpenGL line attribute functions, Line drawing algorithms (DDA, Bresenham’s), circle
generation algorithms (Bresenham’s).
Applications of Computer Graphics
a. Graphs and Charts
✓ An early application of computer graphics was the display of simple data graphs, usually plotted on a
character printer. Data plotting is still one of the most common graphics applications.
✓ Graphs & charts are commonly used to summarize functional, statistical, mathematical, engineering
and economic data for research reports, managerial summaries and other types of publications.
✓ Typically examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots
and other displays showing relationships between multiple parameters in two dimensions, three
dimensions, or higher-dimensional spaces.
b. Computer-Aided Design
✓ A major use of computer graphics is in design processes-particularly for engineering and architectural
systems.
✓ CAD, computer-aided design or CADD, computer-aided drafting and design methods are now
routinely used in the automobiles, aircraft, spacecraft, computers, home appliances.
✓ Circuits and networks for communications, water supply or other utilities are constructed with repeated
placement of a few geographical shapes.
✓ Animations are often used in CAD applications. Real-time, computer animations using wire-frame
shapes are useful for quickly testing the performance of a vehicle or system.
c. Virtual-Reality Environments
d. Data Visualizations
✓ There are many different kinds of data sets and effective visualization schemes depend on the
characteristics of the data. A collection of data can contain scalar values, vectors or higher-order tensors.
e. Education and Training
✓ Computer generated models of physical, financial, political, social, economic & other systems are
often used as educational aids.
✓ Models of physical processes, physiological functions, and equipment, such as the color-coded diagram
shown in the figure, can help trainees to understand the operation of a system.
✓ For some training applications, special hardware systems are designed. Examples of such specialized
systems are the simulators for practice sessions of aircraft pilots and air-traffic control personnel.
✓ Some simulators have no video screens, e.g., a flight simulator with only a control panel for instrument
flying.
f. Computer Art
✓ The picture is usually painted electronically on a graphics tablet using a stylus, which can simulate
different brush strokes, brush widths and colors.
✓ Fine artists use a variety of other computer technologies to produce images. To create pictures the
artist uses a combination of 3D modeling packages, texture mapping, drawing programs and CAD
software etc.
✓ Commercial art also uses these “painting” techniques for generating logos and other designs, page
layouts combining text and graphics, TV advertising spots, and other applications.
✓ A common graphics method employed in many television commercials is morphing, where one object
is transformed into another.
g. Entertainment
✓ Television production, motion pictures, and music videos routinely use computer graphics methods.
✓ Sometimes graphics images are combined with live actors and scenes, and sometimes films are
completely generated using computer rendering and animation techniques.
✓ Some television programs also use animation techniques to combine computer generated figures of
people, animals, or cartoon characters with the actor in a scene or to transform an actor’s face into another
shape.
h. Image Processing
✓ The modification or interpretation of existing pictures, such as photographs and TV scans is called
image processing.
✓ Although methods used in computer graphics and image processing overlap, the two areas are concerned
with fundamentally different operations.
✓ Image processing methods are used to improve picture quality, analyze images, or recognize visual
patterns for robotics applications.
✓ Image processing methods are often used in computer graphics, and computer graphics methods are
frequently applied in image processing.
✓ Medical applications also make extensive use of image-processing techniques for picture enhancements
in tomography and in simulations of surgical operations.
✓ It is also used in computed X-ray tomography (CT), positron emission tomography (PET), and computed
axial tomography (CAT).
Video Display Devices: Refresh Cathode-Ray Tubes (CRT)
✓ A beam of electrons, emitted by an electron gun, passes through focusing and deflection systems that
direct the beam toward specified positions on the phosphor-coated screen.
✓ The phosphor then emits a small spot of light at each position contacted by the electron beam and the
light emitted by the phosphor fades very rapidly.
✓ One way to maintain the screen picture is to store the picture information as a charge distribution
within the CRT in order to keep the phosphors activated.
✓ The most common method now employed for maintaining phosphor glow is to redraw the picture
repeatedly by quickly directing the electron beam back over the same screen points. This type of display
is called a refresh CRT.
✓ The frequency at which a picture is redrawn on the screen is referred to as the refresh rate.
✓ The primary components of an electron gun in a CRT are the heated metal cathode and a
control grid.
✓ The heat is supplied to the cathode by directing a current through a coil of wire, called the filament,
inside the cylindrical cathode structure.
✓ This causes electrons to be “boiled off” the hot cathode surface.
✓ Inside the CRT envelope, the free, negatively charged electrons are then accelerated toward the
phosphor coating by a high positive voltage.
✓ Intensity of the electron beam is controlled by the voltage at the control grid.
✓ Since the amount of light emitted by the phosphor coating depends on the number of electrons striking
the screen, the brightness of a display point is controlled by varying the voltage on the control grid.
✓ The focusing system in a CRT forces the electron beam to converge to a small cross section as it strikes
the phosphor and it is accomplished with either electric or magnetic fields.
✓ With electrostatic focusing, the electron beam is passed through a positively charged metal cylinder
so that electrons along the center line of the cylinder are in equilibrium position.
✓ Deflection of the electron beam can be controlled with either electric or magnetic fields.
✓ Cathode-ray tubes are commonly constructed with two pairs of magnetic-deflection coils
✓ One pair is mounted on the top and bottom of the CRT neck, and the other pair is mounted on opposite
sides of the neck.
✓ The magnetic field produced by each pair of coils results in a transverse deflection force that is
perpendicular to both the direction of the magnetic field and the direction of travel of the electron beam.
✓ Horizontal and vertical deflections are accomplished with these pairs of coils.
Electrostatic deflection of the electron beam in a CRT
✓ When electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope
where, one pair of plates is mounted horizontally to control vertical deflection, and the other pair is
mounted vertically to control horizontal deflection.
✓ Spots of light are produced on the screen by the transfer of the CRT beam energy to the phosphor.
✓ When the electrons in the beam collide with the phosphor coating, they are stopped and their kinetic
energy is absorbed by the phosphor.
✓ Part of the beam energy is converted by friction into heat energy, and the remainder causes
electrons in the phosphor atoms to move up to higher quantum-energy levels.
✓ After a short time, the “excited” phosphor electrons begin dropping back to their stable ground state,
giving up their extra energy as small quanta of light energy called photons.
✓ What we see on the screen is the combined effect of all the electron light emissions: a glowing spot
that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
✓ The frequency of the light emitted by the phosphor is proportional to the energy difference between
the excited quantum state and the ground state.
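This proportionality is just Planck's relation (standard physics background, not spelled out in the notes): the photon's energy equals the quantum energy gap,

```latex
\Delta E = h f \quad\Longrightarrow\quad f = \frac{\Delta E}{h},
```

where \(\Delta E\) is the energy difference between the excited state and the ground state and \(h \approx 6.626 \times 10^{-34}\,\mathrm{J\,s}\) is Planck's constant. Phosphors with larger energy gaps therefore emit higher-frequency (bluer) light.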
✓ Persistence is the time it takes the light emitted from the screen to decay to one-tenth of its original
intensity.
✓ Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without
flicker.
✓ The maximum number of points that can be displayed without overlap on a CRT is referred to as the
resolution.
✓ Resolution of a CRT is dependent on the type of phosphor, the intensity to be displayed, and the
focusing and deflection systems.
✓ High-resolution systems are often referred to as high-definition systems.
Random-Scan Displays
✓ When operated as a random-scan display, a CRT has the electron beam directed only to those parts of
the screen where a picture is to be displayed; the picture definition is stored as a set of line-drawing
commands in an area of memory referred to as the display file (or display list).
✓ To display a specified picture, the system cycles through the set of commands in the display file,
drawing each component line in turn.
✓ After all line-drawing commands have been processed, the system cycles back to the first line
command in the list.
✓ Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each
second, with up to 100,000 “short” lines in the display list.
✓ When a small set of lines is to be displayed, each refresh cycle is delayed to avoid very high refresh
rates, which could burn out the phosphor.
Difference between Raster scan system and Random scan system
Color CRT Monitors: Shadow-Mask Method
✓ This technique is generally used in raster-scan displays, including color TV.
✓ In this technique the CRT has three phosphor color dots at each pixel position:
one dot for red, one for green, and one for blue light. This pattern is commonly known as a dot triangle.
✓ Here the CRT has three electron guns, one for each color dot, and a shadow-mask grid
just behind the phosphor-coated screen.
✓ The shadow-mask grid consists of a series of holes aligned with the phosphor dot patterns.
✓ Three electron beams are deflected and focused as a group onto the shadow mask and when they pass
through a hole they excite a dot triangle.
✓ In dot triangle three phosphor dots are arranged so that each electron beam can activate only its
corresponding color dot when it passes through the shadow mask.
✓ A dot triangle, when activated, appears as a small spot on the screen whose color is the combination of
the three small dots in the dot triangle.
✓ By changing the intensity levels of the three electron beams we can obtain different colors in the shadow-
mask CRT; for example, firing only the red and green guns at full intensity produces yellow, while equal
full intensities of all three produce white.
Flat-Panel Displays
a. Plasma Panels
✓ A firing voltage applied to a pair of horizontal and vertical conductors causes the gas at the
intersection of the two conductors to break down into a glowing plasma of electrons and ions.
✓ Picture definition is stored in a refresh buffer and the firing voltages are applied to refresh the pixel
positions, 60 times per second.
✓ Alternating current methods are used to provide faster application of firing voltages and thus brighter
displays.
✓ Separation between pixels is provided by the electric fields of the conductors.
✓ One disadvantage of early plasma panels was that they were strictly monochromatic devices, showing
only one color against black, like a black-and-white display.
b. Thin-Film Electroluminescent Displays
✓ It is similar to a plasma-panel display, but the region between the glass plates is filled with a phosphor,
such as one doped with manganese, instead of gas.
✓ When a sufficiently high voltage is applied, the phosphor becomes a conductor in the area of intersection
of the two electrodes.
✓ Electrical energy is then absorbed by the manganese atoms which then release the energy as a spot of
light similar to the glowing plasma effect in plasma panel.
✓ It requires more power than a plasma panel.
✓ Good color and gray-scale displays are difficult to achieve with this technology.
c. Liquid-Crystal Displays (LCDs)
✓ It consists of two glass plates, each with a light polarizer at right angles to the other, sandwiching the
liquid-crystal material between the plates.
✓ Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical
conductors are put into the other plate.
✓ The intersection of two conductors defines a pixel position.
✓ In the ON state polarized light passing through material is twisted so that it will pass through the
opposite polarizer.
✓ In the OFF state, the light is reflected back toward the source.
1.4 Raster-Scan Systems
➔ Here, the frame buffer can be anywhere in the system memory, and the video controller accesses the
frame buffer to refresh the screen.
➔ In addition to the video controller, raster systems employ other processors as coprocessors and
accelerators to implement various graphics operations.
1.4.1 Video controller:
✓ The figure below shows a commonly used organization for raster systems.
✓ A fixed area of the system memory is reserved for the frame buffer, and the video controller is given
direct access to the frame-buffer memory.
✓ Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian
coordinates.
✓ The screen surface is then represented as the first quadrant of a two-dimensional system with positive
x and y values increasing from left to right and bottom of the screen to the top respectively.
✓ Pixel positions are then assigned integer x values that range from 0 to xmax across the screen, left to
right, and integer y values that vary from 0 to ymax, bottom to top.
Basic Video Controller Refresh Operations
✓ The basic refresh operations of the video controller are diagrammed.
✓ Initially, the x register is set to 0 and the y register is set to the value for the top scan line.
✓ The contents of the frame buffer at this pixel position are then retrieved and used to set the intensity
of the CRT beam.
✓ Then the x register is incremented by 1, and the process is repeated for the next pixel on the top scan
line.
✓ This procedure continues for each pixel along the top scan line.
✓ After the last pixel on the top scan line has been processed, the x register is reset to 0 and the y register
is set to the value for the next scan line down from the top of the screen.
✓ The procedure is repeated for each successive scan line.
✓ After cycling through all pixels along the bottom scan line, the video controller resets the registers to
the first pixel position on the top scan line and the refresh process starts over.
a.Speed up pixel position processing of video controller:
✓ Since the screen must be refreshed at a rate of at least 60 frames per second, the simple procedure
illustrated in the figure above cannot be accommodated by RAM chips whose cycle time is too slow.
✓ To speed up pixel processing, video controllers can retrieve multiple pixel values from the refresh
buffer on each pass.
✓ When a group of pixels has been processed, the next block of pixel values is retrieved from the frame
buffer.
Advantages of video controller:
✓ A video controller can be designed to perform a number of other operations.
✓ For various applications, the video controller can retrieve pixel values from different memory areas
on different refresh cycles.
✓ This provides a fast mechanism for generating real-time animations.
✓ Another video-controller task is the transformation of blocks of pixels, so that screen areas can be
enlarged, reduced, or moved from one location to another during the refresh cycles.
✓ In addition, the video controller often contains a lookup table, so that pixel values in the frame buffer
are used to access the lookup table. This provides a fast method for changing screen intensity values.
✓ Finally, some systems are designed to allow the video controller to mix the frame-buffer image with an
input image from a television camera or other input device.
1.4.2 Raster-Scan Display Processor
✓ Figure shows one way to organize the components of a raster system that contains a separate display
processor, sometimes referred to as a graphics controller or a display coprocessor.
✓ The purpose of the display processor is to free the CPU from the graphics chores.
✓ In addition to the system memory, a separate display-processor memory area can be provided.
Scan conversion:
✓ A major task of the display processor is digitizing a picture definition given in an application program
into a set of pixel values for storage in the frame buffer.
✓ This digitization process is called scan conversion.
Example 1: displaying a line
➔ Graphics commands specifying straight lines and other geometric objects are scan converted into a
set of discrete points, corresponding to screen pixel positions.
➔ Scan converting a straight-line segment.
Example 2: displaying a character
➔ Characters can be defined with rectangular pixel grids
➔ The array size for character grids can vary from about 5 by 7 to 9 by 12 or more for higher-quality
displays.
➔ A character grid is displayed by superimposing the rectangular grid pattern onto the frame buffer at a
specified coordinate position.
Using outline:
➔ For characters that are defined as outlines, the shapes are scan-converted into the frame buffer by
locating the pixel positions closest to the outline.
Graphics Workstations and Viewing Systems
✓ A large, curved-screen system can be useful for viewing by a group of people studying a particular
graphics application.
✓ A 360-degree paneled viewing system is used in the NASA control-tower simulator for training
and for testing ways to solve air-traffic and runway problems at airports.
1.5 Input Devices
➢ Graphics workstations make use of various devices for data input. Most systems have a keyboard and
a mouse, while some other systems also have trackballs, spaceballs, joysticks, button boxes, touch panels,
image scanners, and voice systems.
Keyboard:
➢ The keyboard on a graphics system is used for entering text strings, issuing certain commands, and
selecting menu options.
➢ Keyboards can also be provided with features for entry of screen coordinates, menu selections, or
graphics functions.
➢ General purpose keyboard uses function keys and cursor-control keys.
➢ Function keys allow user to select frequently accessed operations with a single keystroke. Cursor-
control keys are used for selecting a displayed object or a location by positioning the screen cursor.
Button Boxes and Dials:
➢ Buttons are often used to input predefined functions .Dials are common devices for entering scalar
values.
➢ Numerical values within some defined range are selected for input with dial rotations.
Mouse Devices:
➢ A mouse is a hand-held device, usually moved around on a flat surface to position the screen
cursor. Wheels or rollers on the bottom of the mouse are used to record the amount and direction of
movement.
➢ Some mice use optical sensors, which detect movement across horizontal and vertical
grid lines.
➢ Since a mouse can be picked up and put down, it is used for making relative changes in the position of
the screen cursor.
➢ Most general purpose graphics systems now include a mouse and a keyboard as the primary input
devices.
Trackballs and Spaceballs:
➢ A trackball is a ball device that can be rotated with the fingers or palm of the hand to produce screen
cursor movement.
➢ Some laptop keyboards are equipped with a trackball to eliminate the extra space required by a mouse.
➢ A spaceball is an extension of the two-dimensional trackball concept.
➢ Spaceballs are used for three-dimensional positioning and selection operations in virtual reality
systems, modeling, animation, CAD and other applications.
Joysticks:
➢ A joystick is used as a positioning device; it uses a small vertical lever (stick) mounted on a base. It
is used to steer the screen cursor around and select screen positions with the stick movement.
➢ A push or pull on the stick is measured with strain gauges and converted to movement of the screen
cursor in the direction of the applied pressure.
Data Gloves:
➢ A data glove can be used to grasp a virtual object. The glove is constructed with a series of sensors that
detect hand and finger motions.
➢ Input from the glove is used to position or manipulate objects in a virtual scene.
Digitizers:
➢ A digitizer is a common device for drawing, painting, or selecting positions.
➢ A graphics tablet is one type of digitizer, which is used to input two-dimensional coordinates by
activating a hand cursor or stylus at selected positions on a flat surface.
➢ A hand cursor contains cross hairs for sighting positions, and a stylus is a pencil-shaped device that is
pointed at positions on the tablet.
Image Scanners:
➢ Drawings, graphs, photographs, or text can be stored for computer processing with an image scanner
by passing an optical scanning mechanism over the information to be stored.
➢ Once we have the representation of the picture, we can apply various image-processing methods
to modify the representation, and various editing operations can be performed on the stored documents.
Touch Panels:
➢ Touch panels allow displayed objects or screen positions to be selected with the touch of a finger.
➢ Touch panel is used for the selection of processing options that are represented as a menu of graphical
icons.
➢ An optical touch panel uses LEDs along one vertical edge and one horizontal edge of the frame.
➢ Acoustical touch panels generate high-frequency sound waves in horizontal and vertical directions
across a glass plate.
Light Pens:
➢ Light pens are pencil-shaped devices used to select positions by detecting the light coming from points
on the CRT screen.
➢ To select positions in any screen area with a light pen, we must have some nonzero light intensity
emitted from each pixel within that area.
➢ Light pens sometimes give false readings due to background lighting in a room.
Voice Systems:
➢ Speech recognizers are used with some graphics workstations as input devices for voice commands.
The voice system input can be used to initiate operations or to enter data.
➢ A dictionary is set up by speaking the command words several times; the system then analyzes each
word and stores its pattern, and later voice input is matched against the stored patterns.
Introduction to OpenGL: Basic OpenGL Syntax
➔ Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with
gl, and the first letter of each component word is capitalized.
➔ For eg:- glBegin, glClear, glCopyPixels, glPolygonMode
➔ Symbolic constants that are used with certain functions as parameters are all in capital letters, preceded
by “GL”, and component words are separated by underscores.
➔ For eg:- GL_2D, GL_RGB, GL_CCW, GL_POLYGON
➔ The OpenGL functions also expect specific data types. For example, an OpenGL function parameter
might expect a value that is specified as a 32-bit integer. But the size of an integer specification can be
different on different machines.
➔ To indicate a specific data type, OpenGL uses special built-in data-type names, such as GLbyte,
GLshort, GLint, GLfloat, GLdouble, GLboolean.
Related Libraries
➔ In addition to OpenGL basic(core) library(prefixed with gl), there are a number of associated libraries
for handling special operations:-
1) OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up viewing and
projection matrices, describing complex objects with line and polygon approximations, displaying
quadrics and B-splines using linear approximations, processing the surface-rendering operations, and
other complex tasks. -Every OpenGL implementation includes the GLU library
2) Open Inventor:- provides routines and predefined object shapes for interactive threedimensional
applications which are written in C++.
3) Window-system libraries:- To create graphics we need a display window. We cannot create the display
window directly with the basic OpenGL functions, since the core library contains only device-independent
graphics functions, and window-management operations are device-dependent. However, there are several
window-system libraries that support OpenGL functions for a variety of machines.
Eg:- Apple GL(AGL), Windows-to-OpenGL(WGL), Presentation Manager to OpenGL(PGL), GLX.
4) OpenGL Utility Toolkit(GLUT):- provides a library of functions which acts as interface for
interacting with any device specific screen-windowing system, thus making our program device-
independent. The GLUT library functions are prefixed with “glut”.
Header Files
✓ In all graphics programs, we will need to include the header file for the OpenGL core library.
✓ In windows to include OpenGL core libraries and GLU we can use the following header
files:-
#include <windows.h> //precedes other header files for including Microsoft windows version of OpenGL
libraries
#include<GL/gl.h>
#include <GL/glu.h>
✓ The above lines can be replaced by using the GLUT header file, which ensures that gl.h and glu.h are
included correctly:
✓ #include <GL/glut.h> // GLUT header on Windows and Linux
✓ In Apple OS X systems, the header file inclusion statement will be,
✓ #include <GLUT/glut.h>
Display-Window Management Using GLUT
✓ We can consider a simplified example that uses a minimal number of operations for displaying a picture.
Step 1: initialization of GLUT
Since we are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
This initialization function could also process any command line arguments, but we will not need to
use these parameters for our first example programs.
We perform the GLUT initialization with the statement
glutInit (&argc, argv);
Step 2: title
We can state that a display window is to be created on the screen with a given caption for the title bar.
This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");
where the single argument for this function can be any character string that we want to use for the
display-window title.
Step 3: Specification of the display window
Then we need to specify what the display window is to contain.
For this, we create a picture using OpenGL functions and pass the picture definition to the GLUT
routine glutDisplayFunc, which assigns our picture to the display window.
Example: suppose we have the OpenGL code for describing a line segment in a procedure called
lineSegment.
Then the following function call passes the line-segment description to the display window:
glutDisplayFunc (lineSegment);
Step 4: one more GLUT function
But the display window is not yet on the screen.
We need one more GLUT function to complete the window-processing operations.
After execution of the following statement, all display windows that we have created, including their
graphic content, are now activated:
glutMainLoop ( );
This function must be the last one in our program. It displays the initial graphics and puts the program
into an infinite loop that checks for input from devices such as a mouse or keyboard.
Step 5: setting display-window parameters using additional GLUT functions
Although the display window that we created will be in some default location and size, we can set
these parameters using additional GLUT functions.
GLUT Function 1:
➔ We use the glutInitWindowPosition function to give an initial location for the upper left corner of the
display window.
➔ This position is specified in integer screen coordinates, whose origin is at the upper-left corner of the
screen.
GLUT Function 2:
After the display window is on the screen, we can reposition and resize it (for example, with the GLUT
routines glutPositionWindow and glutReshapeWindow).
GLUT Function 3:
➔ We can also set a number of other options for the display window, such as buffering and a choice of
color modes, with the glutInitDisplayMode function.
➔ Arguments for this routine are assigned symbolic GLUT constants.
➔ Example: the following command specifies that a single refresh buffer is to be used for the display
window and that we want to use the color mode which uses red, green, and blue (RGB) components to
select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
➔ The values of the constants passed to this function are combined using a logical OR operation.
➔ Actually, single buffering and RGB color mode are the default options.
➔ But we will use the function now as a reminder that these are the options that are set for our display.
➔ Later, we discuss color modes in more detail, as well as other display options, such as double buffering
for animation applications and selecting parameters for viewing three-dimensional scenes.
A Complete OpenGL Program
➔ There are still a few more tasks to perform before we have all the parts that we need for a complete
program.
Step 1: to set background color
➔ For the display window, we can choose a background color.
➔ Using RGB color values, we set the background color for the display window to be white, with the
OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);
➔ The first three arguments in this function set the red, green, and blue component colors to the value
1.0, giving us a white background color for the display window.
➔ If, instead of 1.0, we set each of the component colors to 0.0, we would get a black background.
➔ The fourth parameter in the glClearColor function is called the alpha value for the specified color.
One use for the alpha value is as a “blending” parameter
➔ When we activate the OpenGL blending operations, alpha values can be used to determine the
resulting color for two overlapping objects.
➔ An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0 indicates an
opaque object.
➔ For now, we will simply set alpha to 0.0.
➔ Although the glClearColor command assigns a color to the display window, it does not put the display
window on the screen.
Step 2: to set window color
➔ To get the assigned window color displayed, we need to invoke the following OpenGL function:
glClear (GL_COLOR_BUFFER_BIT);
➔ The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit
values in the color buffer (refresh buffer) that are to be set to the values indicated in the glClearColor
function. (OpenGL has several different kinds of buffers that can be manipulated.)
Step 3: to set color to object
➔ In addition to setting the background color for the display window, we can choose a variety of color
schemes for the objects we want to display in a scene.
➔ For our initial programming example, we will simply set the object color to be a dark green, with the
OpenGL function glColor3f (0.0, 0.4, 0.2);
➔ The procedure lineSegment that we set up to describe our picture is referred to as a display callback
function.
➔ And this procedure is described as being “registered” by glutDisplayFunc as the routine to invoke
whenever the display window might need to be redisplayed.
Example: if the display window is moved.
Following program to display window and line segment generated by this program:
#include <GL/glut.h> // (or others, depending on the system in use)
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}
void lineSegment (void)
{
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 0.4, 0.2); // Set line-segment color to dark green.
glBegin (GL_LINES);
glVertex2i (180, 15); // Specify line-segment geometry.
glVertex2i (10, 145);
glEnd ( );
glFlush ( ); // Process all OpenGL routines as quickly as possible.
}
int main (int argc, char** argv)
{
glutInit (&argc, argv); // Initialize GLUT.
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition (50, 100); // Set top-left display-window position.
glutInitWindowSize (400, 300); // Set display-window width and height.
glutCreateWindow ("An Example OpenGL Program"); // Create display window.
init ( ); // Execute initialization procedure.
glutDisplayFunc (lineSegment); // Send graphics to display window.
glutMainLoop ( ); // Display everything and wait.
}
1.13 Coordinate Reference Frames
To describe a picture, we first decide upon a convenient Cartesian coordinate system, called the
world-coordinate reference frame, which could be either 2D or 3D.
We then describe the objects in our picture by giving their geometric specifications in terms of
positions in world coordinates.
Example: We define a straight-line segment with two endpoint positions, and a polygon is specified
with a set of positions for its vertices.
These coordinate positions are stored in the scene description along with other info about the objects,
such as their color and their coordinate extents
Coordinate extents: Coordinate extents are the minimum and maximum x, y, and z values for each object.
A set of coordinate extents is also described as a bounding box for an object.
Example: For a 2D figure, the coordinate extents are sometimes called its bounding rectangle.
Objects are then displayed by passing the scene description to the viewing routines which identify
visible surfaces and map the objects to the frame buffer positions and then on the video monitor.
The scan-conversion algorithm stores info about the scene, such as color values, at the appropriate
locations in the frame buffer, and then the scene is displayed on the output device.
Screen co-ordinates:
✓ Locations on a video monitor are referenced in integer screen coordinates, which correspond to the
integer pixel positions in the frame buffer.
✓ Scan-line algorithms for the graphics primitives use the coordinate descriptions to determine the
locations of pixels
✓ Example: given the endpoint coordinates for a line segment, a display algorithm must calculate the
positions for those pixels that lie along the line path between the endpoints.
✓ Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be taken into
account by the implementation algorithms.
✓ For the present, we assume that each integer screen position references the centre of a pixel area.
✓ Once pixel positions have been identified the color values must be stored in the frame buffer
Assume we have available a low-level procedure of the form
i) setPixel (x, y);
• stores the current color setting into the frame buffer at integer position(x, y), relative to the position of
the screen-coordinate origin
ii) getPixel (x, y, color);
• Retrieves the current frame-buffer setting for a pixel location;
• Parameter color receives an integer value corresponding to the combined RGB bit codes stored for the
specified pixel at position (x,y).
• Additional screen-coordinate information is needed for 3D scenes.
• For a two-dimensional scene, all depth values are 0.
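As a concrete sketch of these two routines, the following C fragment mimics setPixel and getPixel over a small in-memory frame buffer. The buffer dimensions, the setColor helper, and the storage layout are illustrative assumptions, not part of any real graphics package.

```c
/* Hypothetical low-level sketch: a 200 x 150 frame buffer storing one
   combined RGB code per pixel, with a current-color setting. */
#define FB_WIDTH  200
#define FB_HEIGHT 150

static int frameBuffer[FB_HEIGHT][FB_WIDTH]; /* combined RGB code per pixel */
static int currentColor = 0;                 /* current color setting */

void setColor (int rgbCode) { currentColor = rgbCode; }

/* Store the current color at integer screen position (x, y). */
void setPixel (int x, int y) {
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        frameBuffer[y][x] = currentColor;
}

/* Retrieve the stored RGB code for the pixel at position (x, y). */
void getPixel (int x, int y, int *color) {
    *color = frameBuffer[y][x];
}
```

Here getPixel simply reads back whatever RGB code setPixel last stored at that position.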
Absolute and Relative Coordinate Specifications
Absolute coordinate:
➢ So far, the coordinate references that we have discussed are stated as absolute coordinate values.
➢ This means that the values specified are the actual positions within the coordinate system in use.
Relative coordinates:
➢ However, some graphics packages also allow positions to be specified using relative coordinates.
➢ This method is useful for various graphics applications, such as producing drawings with pen plotters,
artist’s drawing and painting systems, and graphics packages for publishing and printing applications.
➢ Taking this approach, we can specify a coordinate position as an offset from the last position that was
referenced (called the current position).
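As a minimal illustration of this approach (the function names moveTo and relativeMove are hypothetical, not from any standard graphics API), tracking a current position and applying relative offsets can be sketched in C as:

```c
/* Illustrative current-position tracker for relative coordinates. */
static int curX = 0, curY = 0;

/* Move to an absolute position, updating the current position. */
void moveTo (int x, int y) { curX = x; curY = y; }

/* Interpret (dx, dy) as an offset from the current position and return
   the resulting absolute position through outX and outY. */
void relativeMove (int dx, int dy, int *outX, int *outY) {
    curX += dx;
    curY += dy;
    *outX = curX;
    *outY = curY;
}
```

For example, after moveTo (50, 100), a relative offset of (10, -5) references the absolute position (60, 95).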
Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL
➢ The gluOrtho2D command is a function we can use to set up any 2D Cartesian reference frame.
➢ The arguments for this function are the four values defining the x and y coordinate limits for the
picture we want to display.
➢ Since the gluOrtho2D function specifies an orthogonal projection, we also need to ensure that the
coordinate values are placed in the OpenGL projection matrix.
➢ In addition, we could assign the identity matrix as the projection matrix before defining the world-
coordinate range.
➢ This would ensure that the coordinate values were not accumulated with any values we may have
previously set for the projection matrix.
➢ Thus, for our initial two-dimensional examples, we can define the coordinate frame for the screen
display window with the following statements
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
➢ The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and
by coordinates (xmax, ymax) at the upper-right corner, as shown in Figure below.
➢ We can then designate one or more graphics primitives for display using the coordinate reference
specified in the gluOrtho2D statement.
➢ If the coordinate extents of a primitive are within the coordinate range of the display window, all of
the primitive will be displayed.
➢ Otherwise, only those parts of the primitive within the display-window coordinate limits will be
shown.
➢ Also, when we set up the geometry describing a picture, all positions for the OpenGL primitives must
be given in absolute coordinates, with respect to the reference frame defined in the gluOrtho2D function.
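The effect of the gluOrtho2D limits can be illustrated with a small helper that maps a world-coordinate point into a display window of a given pixel size. worldToScreen is a hypothetical name for this sketch, not an OpenGL call; it assumes the simple linear mapping implied by the orthographic coordinate limits.

```c
/* Sketch (not an OpenGL routine): map a world-coordinate point (x, y),
   given the gluOrtho2D limits, into a winWidth-by-winHeight window. */
void worldToScreen (double x, double y,
                    double xmin, double xmax, double ymin, double ymax,
                    int winWidth, int winHeight,
                    int *sx, int *sy) {
    /* Linear interpolation from world range to pixel range. */
    *sx = (int) ((x - xmin) / (xmax - xmin) * winWidth);
    *sy = (int) ((y - ymin) / (ymax - ymin) * winHeight);
}
```

With gluOrtho2D (0.0, 200.0, 0.0, 150.0) and a 400 x 300 window, the world point (100, 75) lands at the window center.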
1.14 OpenGL Functions
Geometric Primitives:
➢ It includes points, line segments, polygon etc.
➢ These primitives pass through geometric pipeline which decides whether the primitive is visible or
not and also how the primitive should be visible on the screen etc.
➢ Geometric transformations such as rotation and scaling can be applied to the primitives that are
displayed on the screen.
➢ The programmer creates a geometric primitive by enclosing a list of glVertex calls between
glBegin (symbolicConstant) and glEnd ( ),
where:
glBegin indicates the beginning of the object that has to be displayed, with the symbolic constant selecting the primitive type
Dept. Of CSE /BNMIT 26
Computer Graphics-Module 1 17CS62
➢ With GL_LINES, we obtain one line segment between the first and second coordinate positions and
another line segment between the third and fourth positions.
➢ If the number of specified endpoints is odd, the last coordinate position is ignored.
Case 1: Lines
glBegin (GL_LINES);
glVertex2iv (p1); glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Case 2: GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the final vertex is not connected to the
initial vertex.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Case 3: GL_LINE_LOOP:
Successive vertices are connected using line segments to form a closed path or loop, i.e., the final
vertex is connected back to the initial vertex.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
➔ Some implementations of the line-width function might support only a limited number of widths, and
some might not support widths other than 1.0.
➔ That is, the magnitude of the horizontal and vertical separations of the line endpoints, deltax and
deltay, are compared to determine whether to generate a thick line using vertical pixel spans or horizontal
pixel spans.
OpenGL Line-Style Function
➔ By default, a straight-line segment is displayed as a solid line.
➔ But we can also display dashed lines, dotted lines, or a line with a combination of dashes and dots.
➔ We can vary the length of the dashes and the spacing between dashes or dots.
➔ We set a current display style for lines with the OpenGL function:
Syntax: glLineStipple (repeatFactor, pattern);
Pattern:
➔ Parameter pattern is used to reference a 16-bit integer that describes how the line should be displayed.
➔ 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel position.
➔ The pattern is applied to the pixels along the line path starting with the low-order bits in the pattern.
➔ The default pattern is 0xFFFF (each bit position has a value of 1), which produces a solid line.
repeatFactor
➔ Integer parameter repeatFactor specifies how many times each bit in the pattern is to be repeated
before the next bit in the pattern is applied.
➔ The default repeat value is 1.
Polyline:
➔ With a polyline, a specified line-style pattern is not restarted at the beginning of each segment.
➔ It is applied continuously across all the segments, starting at the first endpoint of the polyline and
ending at the final endpoint for the last segment in the series.
Example:
➔ For line style, suppose parameter pattern is assigned the hexadecimal representation 0x00FF and the
repeat factor is 1.
➔ This would display a dashed line with eight pixels in each dash and eight pixel positions that are “off”
(an eight-pixel space) between two dashes.
➔ Also, since low order bits are applied first, a line begins with an eight-pixel dash starting at the first
endpoint.
➔ This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth, until the
second endpoint position is reached.
Activating line style:
➢ Before a line can be displayed in the current line-style pattern, we must activate the linestyle feature
of OpenGL.
glEnable (GL_LINE_STIPPLE);
➢ If we forget to include this enable function, solid lines are displayed; that is, the default pattern 0xFFFF
is used to display line segments.
➢ At any time, we can turn off the line-pattern feature with
glDisable (GL_LINE_STIPPLE);
➢ This replaces the current line-style pattern with the default pattern (solid lines).
Example Code:
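The example code itself is not reproduced in this copy. As an illustration of the stipple rule just described (low-order pattern bits applied first, each bit repeated repeatFactor times), the following C helper, with the hypothetical name stipplePixelOn, predicts whether a given pixel along the line path is drawn:

```c
/* Illustrative helper (not an OpenGL routine): returns 1 if pixel number i
   along a stippled line is "on" for the given 16-bit pattern and repeat
   factor. Low-order bits are applied first, each used repeatFactor times,
   and the pattern repeats after all 16 bits have been consumed. */
int stipplePixelOn (unsigned short pattern, int repeatFactor, int i) {
    int bit = (i / repeatFactor) % 16;   /* which pattern bit applies */
    return (pattern >> bit) & 1;         /* 1 = on pixel, 0 = off pixel */
}
```

For pattern 0x00FF with repeat factor 1, pixels 0 through 7 are on and pixels 8 through 15 are off, matching the dashed-line example described above.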
Method 5: Painting and drawing programs allow pictures to be constructed interactively by using a
pointing device, such as a stylus and a graphics tablet, to sketch various curve shapes.
1.19 Line Drawing Algorithm
✓ A straight-line segment in a scene is defined by coordinate positions for the endpoints of
the segment.
✓ To display the line on a raster monitor, the graphics system must first project the endpoints to integer
screen coordinates and determine the nearest pixel positions along the line path between the two endpoints;
the line color is then loaded into the frame buffer at the corresponding pixel coordinates.
✓ The Cartesian slope-intercept equation for a straight line is
y = m * x + b ------------>(1)
with m as the slope of the line and b as the y intercept.
✓ Given that the two endpoints of a line segment are specified at positions (x0,y0) and (xend, yend) ,as
shown in fig.
✓ We determine values for the slope m and y intercept b with the following equations:
m = (yend - y0)/(xend - x0) ----------------->(2)
b = y0 - m * x0 -------------->(3)
✓ Algorithms for displaying straight line are based on the line equation (1) and calculations given in
eq(2) and (3).
✓ For a given x interval δx along a line, we can compute the corresponding y interval δy from eq. (2) as
δy = m * δx ----------------->(4)
✓ Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy/m ------------------>(5)
✓ These equations form the basis for determining deflection voltages in analog displays, such as vector-
scan system, where arbitrarily small changes in deflection voltage are possible.
✓ For lines with slope magnitudes
➔ |m|<1, δx can be set proportional to a small horizontal deflection voltage with the corresponding
vertical deflection voltage set proportional to δy from eq.(4)
➔ |m|>1, δy can be set proportional to a small vertical deflection voltage with the corresponding
horizontal deflection voltage set proportional to δx from eq.(5)
➔ |m|=1, δx=δy and the horizontal and vertical deflections voltages are equal
DDA Algorithm (DIGITAL DIFFERENTIAL ANALYZER)
➔ The DDA is a scan-conversion line algorithm based on calculating either δy or δx.
➔ A line is sampled at unit intervals in one coordinate and the corresponding integer values nearest the
line path are determined for the other coordinate
➔ The DDA algorithm has three cases, based on the slope equation m = (yk+1 - yk)/(xk+1 - xk):
Case1:
if m < 1, x increments in unit intervals
i.e., xk+1 = xk + 1
then, m = (yk+1 - yk)/(xk+1 - xk)
m = yk+1 - yk
yk+1 = yk + m ------------>(1)
➔ where k takes integer values starting from 0 for the first point and increases by 1 until the
final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each calculated y value
must be rounded to the nearest integer corresponding to a screen pixel position.
Case2:
if m > 1, y increments in unit intervals
i.e., yk+1 = yk + 1
then, m = (yk+1 - yk)/(xk+1 - xk)
m(xk+1 - xk) = 1
xk+1 = xk + (1/m) -----------------(2)
Case3:
if m = 1, both x and y increment in unit intervals
i.e., xk+1 = xk + 1 and yk+1 = yk + 1
Equations (1) and (2) are based on the assumption that lines are to be processed from the left endpoint to
the right endpoint. If this processing is reversed, so that the starting endpoint is at the right, then either
we have δx=-1 and
yk+1 = yk - m-----------------(3)
or(when the slope is greater than 1)we have δy=-1 with
xk+1 = xk - (1/m)----------------(4)
➔ Similar calculations are carried out using equations (1) through (4) to determine the pixel positions
along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting
endpoint is at the left, we set δx = 1 and calculate y values with eq. (1).
➔ When the starting endpoint is at the right (for the same slope), we set δx = -1 and obtain y positions
using eq. (3).
➔ This algorithm is summarized in the following procedure, which accepts as input two integer screen
positions for the endpoints of a line segment.
➔ If m < 1, x is incremented by 1 and yk+1 = yk + m.
➔ So, assuming (x0, y0) as the initial point, we assign x = x0, y = y0 as the starting point.
Illuminate pixel (x, round(y))
x1 = x + 1, y1 = y + m
Illuminate pixel (x1, round(y1))
x2 = x1 + 1, y2 = y1 + m
Illuminate pixel (x2, round(y2))
Continue until the final endpoint is reached.
➔ If m > 1, y is incremented by 1 and xk+1 = xk + (1/m).
➔ So, assuming (x0, y0) as the initial point, we assign x = x0, y = y0 as the starting point.
Illuminate pixel (round(x), y)
x1 = x + (1/m), y1 = y + 1
Illuminate pixel (round(x1), y1)
x2 = x1 + (1/m), y2 = y1 + 1
Illuminate pixel (round(x2), y2)
Continue until the final endpoint is reached.
Bresenham's Line-Drawing Algorithm for |m| < 1.0:
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and obtain the starting value for
the decision parameter as
p0 = 2Δy −Δx
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk ) and
pk+1 = pk + 2Δy
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2Δy − 2Δx
5. Repeat step 4 Δx − 1 more times.
Note:
If |m|>1.0
Then
p0 = 2Δx −Δy
and
If pk < 0, the next point to plot is (xk , yk +1) and
pk+1 = pk + 2Δx
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2Δx − 2Δy
Code:
#include <stdlib.h>
#include <math.h>
/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
int x, y;
/* Determine which endpoint to use as start position. */
if (x0 > xEnd) {
x = xEnd;
y = yEnd;
xEnd = x0;
}
else {
x = x0;
y = y0;
}
setPixel (x, y);
while (x < xEnd) {
x++;
if (p < 0)
p += twoDy;
else {
y++;
p += twoDyMinusDx;
}
setPixel (x, y);
}
}
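The lineBres procedure above handles only |m| < 1.0. Following the Note after the algorithm steps, a companion routine for |m| > 1.0 samples along y instead of x; recordPixel is a stand-in for setPixel that stores the plotted positions for inspection, and the endpoint handling mirrors the |m| < 1.0 version under the assumption of a positive slope.

```c
#include <stdlib.h>   /* abs */

/* Stand-in for the frame-buffer setPixel: record plotted positions. */
#define MAX_BPTS 256
static int bx[MAX_BPTS], by[MAX_BPTS], nBres = 0;

static void recordPixel (int x, int y) { bx[nBres] = x; by[nBres] = y; nBres++; }

/* Bresenham line-drawing sketch for |m| > 1.0: step in unit y intervals
   and use the decision parameter to choose between xk and xk + 1. */
void lineBresSteep (int x0, int y0, int xEnd, int yEnd) {
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dx - dy;                       /* p0 = 2*deltax - deltay */
    int twoDx = 2 * dx, twoDxMinusDy = 2 * (dx - dy);
    int x, y;

    /* Start from the lower endpoint so y always increases. */
    if (y0 > yEnd) {
        x = xEnd; y = yEnd; yEnd = y0;
    } else {
        x = x0; y = y0;
    }
    recordPixel (x, y);
    while (y < yEnd) {
        y++;
        if (p < 0)
            p += twoDx;                        /* keep the same column */
        else {
            x++;                               /* move one column right */
            p += twoDxMinusDy;
        }
        recordPixel (x, y);
    }
}
```

For the line from (0, 0) to (2, 4), five pixels are plotted, one per scan line, ending at (2, 4).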
Properties of Circles
➔ A circle is defined as the set of points that are all at a given distance r from a center position (xc , yc ).
➔ For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in
Cartesian coordinates as
(x - xc)^2 + (y - yc)^2 = r^2
➔ We could use this equation to calculate the position of points on a circle circumference by stepping
along the x axis in unit steps from xc - r to xc + r and calculating the corresponding y values at each
position as
y = yc ± √(r^2 - (x - xc)^2)
➔ One problem with this approach is that it involves considerable computation at each step. Moreover,
the spacing between plotted pixel positions is not uniform.
➔ We could adjust the spacing by interchanging x and y (stepping through y values and calculating x
values) whenever the absolute value of the slope of the circle is greater than 1; but this simply increases
the computation and processing required by the algorithm.
➔ Another way to eliminate the unequal spacing is to calculate points along the circular boundary using
polar coordinates r and θ.
➔ Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ
y = yc + r sin θ
➔ Consider the circle centered at the origin: if the point (x, y) is on the circle, then by symmetry we can
compute seven other points on the circle, as shown in the above figure.
➔ Our decision parameter is the circle function fcirc(x, y) = x^2 + y^2 - r^2 evaluated at the midpoint
between these two pixels:
pk = fcirc(xk + 1, yk - 1/2) = (xk + 1)^2 + (yk - 1/2)^2 - r^2
➔ The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0)
= (0, r):
p0 = fcirc(1, r - 1/2) = 1 + (r - 1/2)^2 - r^2 = 5/4 - r
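Carrying this derivation through gives the midpoint circle algorithm, sketched below in C under the standard simplification of rounding p0 = 5/4 - r to the integer 1 - r; plotPoint stands in for setPixel and records positions so the result can be checked, and circlePlotPoints applies the eight-way symmetry described above.

```c
/* Stand-in for the frame-buffer setPixel: record plotted positions. */
#define MAX_CPTS 512
static int cx[MAX_CPTS], cy[MAX_CPTS], nCirc = 0;

static void plotPoint (int x, int y) { cx[nCirc] = x; cy[nCirc] = y; nCirc++; }

/* Plot the eight symmetric points corresponding to an octant point (x, y). */
static void circlePlotPoints (int xc, int yc, int x, int y) {
    plotPoint (xc + x, yc + y); plotPoint (xc - x, yc + y);
    plotPoint (xc + x, yc - y); plotPoint (xc - x, yc - y);
    plotPoint (xc + y, yc + x); plotPoint (xc - y, yc + x);
    plotPoint (xc + y, yc - x); plotPoint (xc - y, yc - x);
}

/* Midpoint circle algorithm with integer decision-parameter updates. */
void circleMidpoint (int xc, int yc, int r) {
    int x = 0, y = r;
    int p = 1 - r;               /* p0 = 5/4 - r, rounded to an integer */
    circlePlotPoints (xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;      /* midpoint inside: keep y */
        else {
            y--;
            p += 2 * (x - y) + 1; /* midpoint outside: decrement y */
        }
        circlePlotPoints (xc, yc, x, y);
    }
}
```

Every plotted point stays close to the true circle: for r = 10 centered at the origin, the value x^2 + y^2 deviates from r^2 = 100 by at most r.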