CGV - Module-1 Notes


COMPUTER GRAPHICS AND VISUALIZATION

Module-1

Computer Graphics and OpenGL

C.K.SRINIVAS
Associate Professor
Dept. of CS&E
BITM, Ballari.
Cell: 9886684832
Email: [email protected]

What is Computer Graphics?

• Graphics is the key technology for communicating ideas, data, and trends in most areas of
commerce, science, engineering, and education. Graphics provides one of the natural means
for communicating with the computer.

• Graphics refers to picture objects such as sketches of buildings or bridges, flowcharts, control-flow
diagrams, bar charts, pie charts, and so on.

• Computer Graphics (CG) means the creation, storage, and manipulation of models and images of
picture objects with the aid of computers.

• Such models come from a diverse and expanding set of fields, including physical, mathematical,
artistic, biological, and even conceptual (abstract) structures.

CG includes almost everything on computers that is not text or sound. Today almost every computer can
do some graphics, and people have even come to expect to control their computer through icons and
pictures rather than just by typing.

Classification of CG

• Computer Graphics is broadly classified into three categories.

 Based on Type of Object

 2-dimensional Graphics (Pixel, Line, Circle)

 3-dimensional Graphics (Cube, Polyhedron)

 Based on User Interaction

 Interactive Computer Graphics (ICG)

 Non-Interactive Computer Graphics: the user/observer has no control over the
pictures/images on the screen.

Ex: titles displayed on a TV system or other computer art.

 Based on Applications

 Display of Information, Design

 Simulation - Animation and User Interfaces


Application of Computer Graphics


Computer Graphics has numerous applications, some of which are listed below:

Computer-aided drafting and design: In computer-aided design (CAD), interactive graphics is used to
design components and systems of mechanical, electrical, electromechanical, and electronic devices,
including structures such as buildings, automobile bodies, airplane and ship hulls, very large-scale
integrated (VLSI) chips, optical systems, and telephone and computer networks.

Simulation and animation for scientific visualization and entertainment:


Once graphics systems evolved to be capable of generating sophisticated images in real time,
engineers and researchers began to use them as simulators. One of the most important uses has been in
the training of pilots. Graphical flight simulators have been shown to increase safety and to reduce training
expenses. Computer-produced animated movies and displays of the time-varying behavior of real and
simulated objects are becoming increasingly popular for scientific and engineering visualization.

Office automation and electronic publishing: The use of graphics for the creation and dissemination
of information has increased enormously since the advent of desktop publishing on personal computers.
Many organizations whose publications used to be printed by outside specialists can now produce
printed materials in-house. Office automation and electronic publishing can produce both traditional
printed (hardcopy) documents and electronic (softcopy) documents, and systems that allow browsing of
networks of interlinked multimedia documents are proliferating.

(Interactive) plotting in business, science and technology: The next most common use of graphics
today is probably to create 2D and 3D graphs of mathematical, physical, and economic functions;
histograms, bar and pie charts; task-scheduling charts; inventory and production charts, and the like. All
these are used to present meaningfully and concisely the trends and patterns gleaned from data, so as to
clarify complex phenomena and to facilitate informed decision making.

Cartography: Computer graphics is used to produce both accurate and schematic representations of
geographical and other natural phenomena from measurement data. Examples include geographic maps,
relief maps, exploration maps for drilling and mining, oceanographic charts, weather maps, contour
maps, and population-density maps.

Process control: Whereas flight simulators or arcade games let users interact with a simulation of a real
or artificial world, many other applications enable people to interact with some aspect of the real world
itself. Status displays in refineries, power plants, and computer networks show data values from sensors
attached to critical system components, so that operators can respond to problematic conditions. For
example, military commanders view field data – number and position of vehicles, weapons launched,
troop movements, casualties – on command and control displays to revise their tactics as needed; flight
controllers at airports see computer-generated identification and status information for the aircraft blips on
their radar scopes, and can thus control traffic more quickly and accurately.

Medical imaging: Medical imaging poses interesting and important data-analysis problems. Modern
imaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound,
and positron emission tomography (PET) generate three-dimensional data that must be subjected to
algorithmic manipulation to provide useful information.

User interfaces
Computer graphics helps users build graphical user interfaces (GUIs) for most application
programs, which include windows, icons, menus, and a pointing device such as a mouse. Most
operating systems, such as Microsoft Windows and Macintosh OS, provide a GUI for interaction. More
recently, millions of people have become Internet users mainly due to graphical network browsers such as
Firefox and Internet Explorer. Word-processing, spreadsheet, and desktop-publishing programs are
typical applications that take advantage of user-interface techniques.

Define the following terms


a) Pixel   b) Raster   c) Scan line   d) Frame buffer   e) Interlacing
f) Resolution   g) Aspect ratio   h) Scan conversion   i) Depth of frame buffer

The pixel (a word invented from "picture element") is the basic unit of programmable color on a
computer display or in a computer image. A pixel is the smallest element P(x, y) of a display that can be
assigned a color.

Raster: A rectangular array of points or dots; the area of a video display that is covered by sweeping the
electron beam of the display in a series of horizontal lines from top to bottom.

Scan line: A row of pixels; a horizontal line of pixels generated by a single horizontal sweep of the beam
from a monitor's electron gun.

Frame buffer The pixels are stored in a part of memory called frame buffer. A frame buffer may be
thought of as computer memory organized as a two dimensional array with each (x, y) addressable
location corresponding to one pixel. The frame buffer usually is implemented with special types of
memory chips that enable fast redisplay.

In very simple systems the frame buffer holds only the colored pixels that are displayed on the screen. In most
systems, the frame buffer holds far more information, which is needed for creating images from 3D data.


Bit planes or bit depth is the number of bits corresponding to each pixel. A typical frame-buffer
resolution might be

640 x 480 x 8
1280 x 1024 x 8
1280 x 1024 x 24

Depending on the type of pixmap being stored, the values can take on various sizes. A black-and-white
raster image (a true bitmap) needs only one bit per pixel - either on or off. A true-color image (24-bit
color) requires 24 bits per pixel and is composed of three one-byte values (red, green, and blue). 8-bit
grayscale images are another common form. The number of bits used per pixel is called the pixel depth
or color depth; n bits represent 2^n possible values.
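As a rough memory check (a simple calculation, not from the notes), frame-buffer storage is just
width x height x bits per pixel; the short C sketch below works this out for one of the example
resolutions listed above.

#include <stdio.h>

/* Frame-buffer storage = width x height x bits per pixel.
   The resolution below is one of the example configurations listed above. */
int main(void)
{
    long width = 1280, height = 1024, bitsPerPixel = 24;
    long bits = width * height * bitsPerPixel;
    double megabytes = bits / 8.0 / (1024.0 * 1024.0);

    printf("%ld x %ld x %ld needs %.2f MB of frame-buffer memory\n",
           width, height, bitsPerPixel, megabytes);   /* prints 3.75 MB */
    return 0;
}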

Interlacing: In raster systems the graphics system takes pixels from the frame buffer and displays them as
points on the surface of the display in one of two fundamental ways.

In a noninterlaced or progressive display, the pixels are displayed row by row, or scan line by scan line,
at the refresh rate.

In an interlaced display, odd rows and even rows are refreshed alternately. Interlaced displays are used
in commercial TV. Increasing the refresh rate decreases flickering, reducing eye strain, but few people
notice any change above 60-72 Hz.

Resolution: The maximum number of pixels that can be displayed on a monitor, expressed as (number
of horizontal pixels) x (number of vertical pixels), e.g., 1024 x 768. The ratio of horizontal to vertical
resolution is usually 4:3, the same as that of conventional television sets.

Aspect ratio: An output device's width-to-height ratio. In these notes it is expressed as the ratio of the
maximum number of pixels in the vertical direction to the maximum number in the horizontal direction;
an aspect ratio of 3/4 therefore corresponds to the conventional 4:3 display.

Scan conversion. The conversion of geometric entities to pixel colors and location in the frame buffer is
known as Rasterization or scan conversion.


Depth of frame buffer: The depth, or precision, of the frame buffer is defined as the number of bits that are
used for each pixel. For example, a 1-bit-deep frame buffer allows only two colors, whereas an 8-bit-
deep frame buffer allows 2^8 = 256 colors. In full-color systems there are 24 (or more) bits per pixel; these
are also called true-color systems or RGB-color systems.

Video Display Devices


Typically, the primary output device in a graphics system is a video monitor. Historically, the operation
of most video monitors was based on the standard cathode-ray tube (CRT) design, but several other
technologies exist. In recent years, flat-panel displays have become significantly more popular due to
their reduced power consumption and thinner designs.

Refresh Cathode-Ray Tubes


Figure 1 illustrates the basic operation of a CRT. A beam of electrons (cathode rays) emitted by an
electron gun passes through focusing and deflection systems that direct the beam toward specified
positions on the phosphor-coated screen. The phosphor then emits a small spot of light at each position
contacted by the electron beam.

Because the light emitted by the phosphor fades very rapidly, the picture is redrawn by quickly directing
the electron beam back over the same screen points. This type of display is called a refresh CRT.

The primary components of an electron gun in a CRT are the heated metal cathode and a control grid.
Heat is supplied to the cathode by directing a current through a coil of wire, called the filament.

Intensity of the electron beam is controlled by the voltage at the control grid, which is a metal cylinder
that fits over the cathode. The focusing system in a CRT forces the electron beam to converge to a small
cross section as it strikes the phosphor.

Two pairs of magnetic-deflection coils are mounted on the outside of the CRT. One pair is mounted on
the top and bottom of the CRT neck and the other pair is mounted on opposite sides of the neck.
Horizontal deflection is accomplished with one pair of coils, and vertical deflection with the other pair.
The proper deflection amounts are attained by adjusting the current through the coils.

Figure 1: Basic design of a magnetic-deflection CRT.


Figure 2 illustrates the delta-delta shadow-mask method, commonly used in color CRT systems.

Shadow-mask methods are commonly used in raster-scan systems because they produce a much wider
range of colors than the beam-penetration method. This approach is based on the way we perceive colors as
combinations of red, green, and blue components, called the RGB color model. Thus, a shadow-mask
CRT uses three phosphor color dots at each pixel position. One phosphor dot emits a red light, another
emits a green light, and the third emits a blue light. This type of CRT has three electron guns, one for
each color dot, and a shadow-mask grid just behind the phosphor-coated screen.

The light emitted from the three phosphors results in a small spot of color at each pixel position. The
three electron beams are deflected and focused as a group onto the shadow mask, which contains a series
of holes aligned with the phosphor-dot patterns. When the three beams pass through a hole in the
shadow mask, they activate a dot triangle. The phosphor dots in the triangles are arranged so that each
electron beam can activate only its corresponding color dot when it passes through the shadow mask.

Flat-Panel Displays
Although most graphics monitors are still constructed with CRTs, other technologies are emerging that
may soon replace CRT monitors.

The term flat-panel display refers to a class of video devices that have reduced volume, weight, and
power requirements compared to a CRT. A significant feature of flat-panel displays is that they are
thinner than CRTs. Some additional uses for flat-panel displays are as small TV monitors, calculator
screens, pocket video-game screens, laptop computer screens, armrest movie-viewing stations on
airlines, advertisement boards in elevators, and graphics displays in applications requiring rugged,
portable monitors.

The flat-panel displays are separated into two categories: emissive displays and nonemissive displays.
The emissive displays (or emitters) are devices that convert electrical energy into light. Plasma panels,
thin-film electroluminescent displays, and light-emitting diodes are examples of emissive displays.


Nonemissive displays (or nonemitters) use optical effects to convert sunlight or light from some other
source into graphics patterns. The liquid-crystal device is an example of a nonemissive display.

Plasma panels, also called gas-discharge displays, are constructed by filling the region between two
glass plates with a mixture of gases that usually includes neon.

Figure 3: Basic design of a plasma-panel display device.
Figure 4: Basic design of a thin-film electroluminescent display device.

A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal conducting
ribbons is built into the other glass panel. Firing voltages applied to an intersecting pair of horizontal and
vertical conductors cause the gas at the intersection of the two conductors to break down into glowing
plasma of electrons and ions. Picture definition is stored in a refresh buffer, and the firing voltages are
applied to refresh the pixel positions 60 times per second.
One disadvantage of plasma panels has been that they were strictly monochromatic devices, but systems
are now available with multicolor capabilities.

Thin-film electroluminescent displays are similar in construction to plasma panels. The difference is
that the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with
manganese, instead of a gas. When a sufficiently high voltage is applied to a pair of crossing electrodes,
the phosphor becomes a conductor in the area of the intersection of the two electrodes. Electrical energy
is absorbed by the manganese atoms, which then release the energy as a spot of light.
Electroluminescent displays require more power than plasma panels, and good color displays are harder
to achieve.

A third type of emissive device is the light-emitting diode (LED). A matrix of diodes is arranged to
form the pixel positions in the display, and picture definition is stored in a refresh buffer. As in scan-line
refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are
applied to the diodes to produce the light patterns in the display.

Liquid-crystal displays (LCDs) are commonly used in small systems, such as laptop computers and
calculators. These nonemissive devices produce a picture by passing polarized light from the
surroundings or from an internal light source through a liquid-crystal material that can be aligned to
either block or transmit the light.
Advantages of LCD are low weight, small size, low power consumption and low cost.

Random-Scan Displays

Figure 5: A random-scan system draws the component lines of an object in any specified order.

When operated as a random-scan display unit, a CRT has the electron beam directed only to those parts
of the screen where a picture is to be displayed. Pictures are generated as line drawings, with the
electron beam tracing out the component lines one after the other. For this reason, random-scan monitors
are also referred to as vector displays (or stroke-writing displays or calligraphic displays). The
component lines of a picture can be drawn and refreshed by a random-scan system in any specified order
as shown in the figure.

The refresh rate on a random-scan system depends on the number of lines to be displayed on that system.
Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as
the display list, refresh display file, vector file, or display program.

Random-scan systems were designed for line-drawing applications, such as architectural and
engineering layouts, and they cannot display realistic shaded scenes. Since picture definition is stored as
a set of line-drawing instructions rather than as a set of intensity values for all screen points, vector
displays generally have higher resolutions than raster systems. Also, vector displays produce smooth
line drawings because the CRT beam directly follows the line path.


Raster-Scan Systems
Interactive raster-graphics systems typically employ several processing units. In addition to the central
processing unit (CPU), a special-purpose processor, called the video controller or display controller, is
used to control the operation of the display device.

The organization of a simple raster system is shown in Figure 6. Here, the frame buffer can be anywhere in
the system memory, and the video controller accesses the frame buffer to refresh the screen. In addition
to the video controller, more sophisticated raster systems employ other processors as coprocessors and
accelerators to implement various graphics operations.

Video Controller

Figure 6 shows a commonly used organization for raster systems. A fixed area of the system memory is
reserved for the frame buffer, and the video controller is given direct access to the frame-buffer memory.
Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian coordinates.

Figure 6: Raster systems. Architecture of a raster system with a fixed portion of the system memory reserved for the
frame buffer

The basic refresh operations of the video controller are diagrammed in the below figure. Two registers
are used to store the coordinate values for the screen pixels. Initially, the x register is set to 0 and the y
register is set to the value for the top scan line. The contents of the frame buffer at this pixel position are
then retrieved and used to set the intensity of the CRT beam. Then the x register is incremented by 1,
and the process is repeated for the next pixel on the top scan line. This procedure continues for each
pixel along the top scan line.

After the last pixel on the top scan line has been processed, the x register is reset to 0 and the y register
is set to the value for the next scan line down from the top of the screen. Pixels along this scan line are
then processed in turn, and the procedure is repeated for each successive scan line. After cycling through
all pixels along the bottom scan line, the video controller resets the registers to the first pixel position on
the top scan line and the refresh process starts over.
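The register behaviour just described can be summarised by the following C-style sketch; the screen
size, the frame-buffer array, and the beam routine are hypothetical placeholders, not part of the notes.

/* Sketch of the video-controller refresh cycle described above. WIDTH, HEIGHT,
   frameBuffer, and setBeamIntensity are hypothetical placeholders. */
#define WIDTH  8
#define HEIGHT 6

static int frameBuffer[HEIGHT][WIDTH];                  /* pixel intensities          */

static void setBeamIntensity(int x, int y, int value)   /* stand-in for the CRT beam  */
{
    (void)x; (void)y; (void)value;
}

void refreshScreen(void)
{
    /* The y register starts at the top scan line; the x register starts at 0. */
    for (int y = HEIGHT - 1; y >= 0; y--) {
        for (int x = 0; x < WIDTH; x++)
            setBeamIntensity(x, y, frameBuffer[y][x]);
    }
    /* After the bottom scan line, both registers are reset to the first pixel
       of the top scan line and the refresh process starts over. */
}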


Figure 7: Basic video-controller refresh operations

Raster-Scan Display Processor

The figure below shows the components of a raster system that contains a separate display processor,
sometimes referred to as a graphics controller or a display coprocessor. The purpose of the display
processor is to free the CPU from the graphics chores. In addition to the system memory, a separate
display-processor memory area can be provided.

A major task of the display processor is digitizing a picture definition given in an application program
into a set of pixel values for storage in the frame buffer. This digitization process is called scan
conversion.

Figure 8: Architecture of a raster-graphics system with a display processor


Display processors are also designed to perform a number of additional operations. These functions
include generating various line styles (dashed, dotted, or solid), displaying color areas, and applying
transformations to the objects in a scene. Also, display processors are typically designed to interface
with interactive input devices, such as a mouse.

In an effort to reduce memory requirements in raster systems, methods have been devised for organizing
the frame buffer as a linked list and encoding the color information. One organization scheme is to store
each scan line as a set of number pairs. The first number in each pair can be a reference to a color value,
and the second number can specify the number of adjacent pixels on the scan line that are to be
displayed in that color. This technique is called run-length encoding.

A similar approach can be taken when pixel colors change linearly. Another approach is to encode the
raster as a set of rectangular areas (cell encoding). The disadvantages of encoding runs are that color
changes are difficult to record and storage requirements increase as the lengths of the runs decrease.
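A minimal sketch of the run-length idea for a single scan line is given below; the pixel array, pair
layout, and function names are illustrative assumptions, not part of the notes.

#include <stdio.h>

/* Encode one scan line as (color value, run length) pairs, as described above.
   Returns the number of pairs written. */
int encodeScanLine(const int *pixels, int width, int pairs[][2])
{
    int count = 0;
    for (int x = 0; x < width; ) {
        int color = pixels[x], run = 1;
        while (x + run < width && pixels[x + run] == color)
            run++;
        pairs[count][0] = color;   /* reference to a color value                */
        pairs[count][1] = run;     /* number of adjacent pixels with that color */
        count++;
        x += run;
    }
    return count;
}

int main(void)
{
    int scanLine[10] = { 7, 7, 7, 7, 2, 2, 9, 9, 9, 9 };
    int pairs[10][2];
    int n = encodeScanLine(scanLine, 10, pairs);
    for (int i = 0; i < n; i++)
        printf("(color %d, run %d) ", pairs[i][0], pairs[i][1]);
    /* prints: (color 7, run 4) (color 2, run 2) (color 9, run 4) */
    return 0;
}

Note that if every pixel differed from its neighbour, each run would have length 1 and the encoding
would need more storage than the raw scan line, which is the disadvantage mentioned above.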

Three-Dimensional Viewing Devices

Figure 9: Operation of a three-dimensional display system using a vibrating mirror that changes focal length to match
the depths of points in a scene.

Graphics monitors for the display of three-dimensional scenes have been devised using a technique that
reflects a CRT image from a vibrating, flexible mirror. As the mirror vibrates, it changes focal length; the
vibrations are synchronized with the display of an object on the CRT. Each point on the object is
reflected from the mirror into a spatial position corresponding to the distance of that point from a
specified viewing location.

This allows walking around an object or scene and viewing it from different sides. In addition to
displaying three-dimensional images, these systems are often capable of displaying two-dimensional
cross-sectional “slices” of objects selected at different depths.


Applications of 3D viewing devices


 In medicine, to analyze data from ultrasonography and CAT-scan devices,
 In geology, to analyze topological and seismic data,
 In design applications involving solid objects, and
 In three-dimensional simulations of systems, such as molecules and terrain.

Input Devices

Keyboards
An alphanumeric keyboard is used primarily as a device for entering text strings, issuing certain
commands, and selecting menu options. The keyboard is an efficient device for inputting such
nongraphic data as picture labels. Keyboards can also be provided with features to facilitate entry of
screen coordinates, menu selections, or graphics functions.

Cursor-control keys and function keys are common features on general purpose keyboards. Function
keys allow users to select frequently accessed operations with a single keystroke, and cursor-control
keys are convenient for selecting a displayed object or a location by positioning the screen cursor. A
keyboard can also contain other types of cursor-positioning devices, such as a trackball or joystick,
along with a numeric keypad for fast entry of numeric data.

Mouse Devices
A mouse is a small handheld unit that is usually moved around on a flat surface to position the screen
cursor. Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of
movement. Another method for detecting mouse motion is with an optical sensor. The mouse is moved
over a special mouse pad that has a grid of horizontal and vertical lines.
One, two, three, or four buttons are included on the top of the mouse for signaling the execution of
operations, such as recording cursor position or invoking a function.

Trackballs and Spaceballs


A trackball is a ball device that can be rotated with the fingers or palm of the hand to produce screen-
cursor movement. Potentiometers, connected to the ball, measure the amount and direction of rotation. A
trackball also can be mounted on other devices, or it can be obtained as a separate add-on unit that
contains two or three control buttons.
An extension of the two-dimensional trackball concept is the spaceball, which provides six degrees of
freedom. Unlike the trackball, a spaceball does not actually move. Strain gauges measure the amount of
pressure applied to the spaceball to provide input for spatial positioning and orientation as the ball is
pushed or pulled in various directions. Spaceballs are used for three-dimensional positioning and
selection operations in virtual-reality systems, modeling, animation, CAD, and other applications.

Joysticks
A joystick consists of a small, vertical lever (called the stick) mounted on a base. Most joysticks select
screen positions with actual stick movement; others respond to pressure on the stick. The distance that
the stick is moved in any direction from its center position corresponds to the relative screen-cursor
movement in that direction.


Potentiometers mounted at the base of the joystick measure the amount of movement, and springs return
the stick to the center position when it is released. One or more buttons can be programmed to act as
input switches to signal actions that are to be executed once a screen position has been selected.

In another type of movable joystick, the stick is used to activate switches that cause the screen cursor to
move at a constant rate in the direction selected. Eight switches, arranged in a circle, are sometimes
provided so that the stick can select any one of eight directions for cursor movement. Pressure-sensitive
joysticks, also called isometric joysticks, have a non-movable stick. A push or pull on the stick is
measured with strain gauges and converted to movement of the screen cursor in the direction of the
applied pressure.

Data Gloves
A data glove is a device that fits over the user’s hand and can be used to grasp a “virtual object.” The
glove is constructed with a series of sensors that detect hand and finger motions. Electromagnetic
coupling between transmitting antennas and receiving antennas is used to provide information about
the position and orientation of the hand. The transmitting and receiving antennas can each be structured
as a set of three mutually perpendicular coils, forming a three-dimensional Cartesian reference system.

Input from the glove is used to position or manipulate objects in a virtual scene. A two-dimensional
projection of the scene can be viewed on a video monitor, or a three-dimensional projection can be
viewed with a headset.

Digitizers
A common device for drawing, painting, or interactively selecting positions is a digitizer. These devices
can be designed to input coordinate values in either a two-dimensional or a three-dimensional space.

One type of digitizer is the graphics tablet (also referred to as a data tablet), which is used to input two-
dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.

A hand cursor contains crosshairs for sighting positions, while a stylus is a pencil-shaped device that is
pointed at positions on the tablet. Many graphics tablets are constructed with a rectangular grid of wires
embedded in the tablet surface. Electromagnetic pulses are generated in sequence along the wires, and
an electric signal is induced in a wire coil in an activated stylus or hand-cursor to record a tablet
position. Depending on the technology, signal strength, coded pulses, or phase shifts can be used to
determine the position on the tablet.

Image Scanners
Drawings, graphs, photographs, or text can be stored for computer processing with an image scanner by
passing an optical scanning mechanism over the information to be stored. The gradations of grayscale or color are then recorded and
stored in an array. Scanners are available in a variety of sizes and capabilities, including small handheld
models, drum scanners, and flatbed scanners.


Touch Panels
Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. A
typical application of touch panels is for the selection of processing options that are represented as a
menu of graphical icons. Touch input can be recorded using optical, electrical, or acoustical methods.

Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one vertical edge and
along one horizontal edge of the frame.

An electrical touch panel is constructed with two transparent plates separated by a small distance. One
of the plates is coated with a conducting material, and the other plate is coated with a resistive material.

In acoustical touch panels, high-frequency sound waves are generated in horizontal and vertical
directions across a glass plate. Touching the screen causes part of each wave to be reflected from the
finger to the emitters. The screen position at the point of contact is calculated from a measurement of the
time interval between the transmission of each wave and its reflection to the emitter.

Light Pens
Light pen devices are used to select screen positions by detecting the light coming from points on the
CRT screen. They are sensitive to the short burst of light emitted from the phosphor coating at the
instant the electron beam strikes a particular point. An activated light pen, pointed at a spot on the screen
as the electron beam lights up that spot, generates an electrical pulse that causes the coordinate position
of the electron beam to be recorded. Also, light pens require special implementations for some
applications since they cannot detect positions within black areas. To be able to select positions in any
screen area with a light pen, we must have some nonzero light intensity emitted from each pixel within
that area.

Voice Systems
Speech recognizers are used with some graphics workstations as input devices for voice commands. The
voice-system input can be used to initiate graphics operations or to enter data, using a predefined dictionary of words and phrases. A
dictionary is set up by speaking the command words several times. The system then analyzes each word
and establishes a dictionary of word frequency patterns, along with the corresponding functions that are
to be performed. Later, when a voice command is given, the system searches the dictionary for a
frequency-pattern match. A separate dictionary is needed for each operator using the system. Input for a
voice system is typically spoken into a microphone mounted on a headset.


GRAPHICS SOFTWARE

There are two broad classifications for computer-graphics software:


I. Special purpose packages and
II. General programming packages.

Special-purpose packages are designed for nonprogrammers who want to generate pictures, graphs, or
charts. The interface to a special purpose package is typically a set of menus that allows users to
communicate with the programs. Examples of such applications include artist’s painting programs and
various architectural, business, medical, and engineering CAD systems.

A general programming package provides a library of graphics functions that can be used in a
programming language such as C, C++, Java, or Fortran. Basic functions in a typical graphics library
include those for specifying picture components (straight lines, polygons, spheres, and other objects),
setting color values, selecting views of a scene, and applying rotations or other transformations. Some
examples of general graphics programming packages are GL (Graphics Library), OpenGL, VRML
(Virtual-Reality Modeling Language), Java 2D, and Java 3D.

A set of graphics functions is often called a computer-graphics application programming interface
(CG API), because the library provides a software interface between a programming language (such as
C++) and the hardware.

Coordinate Representations

FIGURE 10: Coordinate Representations


In general, several different Cartesian reference frames are used in the process of constructing and
displaying a scene. In each of these frames, a vertex has different coordinates. The following is the order
in which these coordinate systems occur in the pipeline.
1. Object Coordinates
2. World coordinates
3. Eye Coordinates
4. Clip coordinates
5. Normalized device coordinates
6. Window or Screen coordinates

Object Coordinates: In most applications, we tend to specify or use an object with a convenient size,
orientation, and location in its own frame called the model or object frame. The coordinates in the
corresponding function calls are in object model coordinates.

World coordinates: After a graphics object is defined in its own modeling coordinate system, the object
is transformed to where it belongs in the scene. This is called the model transformation, and the single
coordinate system that describes the position of every object in the scene is called the world coordinate
system.

Eye Coordinates: Once the 3D world has been created, an application programmer would like the
freedom to be able to view it from any location. But graphics viewing models typically require a specific
orientation and/or position for the eye at this stage.

For example, the system might require that the eye position be at the origin, looking in –Z (or
sometimes +Z). So the next step in the pipeline is the viewing transformation, in which the coordinate
system for the scene is changed to satisfy this requirement. The result is the 3D eye coordinate system.

Clip coordinates: In OpenGL, the model and viewing transformations are concatenated together and
specified by a single model-view matrix. Once objects are in eye coordinates, OpenGL
must check whether they lie within the view volume. Any object (or part of an object) that does not lie
within the view volume is clipped from the scene before rasterization. This is done by bringing all the
objects into a cube centered at the origin in clip coordinates.

Normalized device coordinates: After this transformation, vertices are still in homogeneous coordinates.
The division by the w component, called perspective division, yields three-dimensional representations
in normalized device coordinates.


Window or screen coordinates: The final step in the pipeline is to change units so that the object is in a
coordinate system appropriate for the display device. Because the screen is a digital device, this requires
that the real-number coordinates be converted to integers that represent screen coordinates. This is done
with a proportional mapping followed by a truncation of the coordinate values. It is called the
window-to-viewport mapping, and the new coordinate space is referred to as screen coordinates, or
display coordinates.
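In OpenGL's fixed-function pipeline these stages correspond roughly to the model-view matrix, the
projection matrix, and the viewport. The following sketch shows typical calls an application might make;
the eye position, view volume, and viewport values are arbitrary examples, not taken from the notes.

#include <GL/glut.h>

/* Illustrative only: typical calls that realize the coordinate pipeline. */
void setupCamera(void)
{
    glMatrixMode(GL_MODELVIEW);           /* object -> world -> eye coordinates */
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,              /* eye position                       */
              0.0, 0.0, 0.0,              /* look-at point                      */
              0.0, 1.0, 0.0);             /* up direction                       */

    glMatrixMode(GL_PROJECTION);          /* eye -> clip coordinates            */
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -1.5, 1.5, 1.0, 10.0);

    /* Clipping, perspective division to normalized device coordinates, and the
       window-to-viewport mapping to screen coordinates are then carried out by
       OpenGL; the viewport itself is set with: */
    glViewport(0, 0, 400, 300);
}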

Graphics Functions
A general-purpose graphics package provides users with a variety of functions for creating and
manipulating pictures. These routines can be broadly classified according to whether they deal with
graphics output, input, attributes, transformations, viewing, subdividing pictures, or general control.

Graphics output primitives:

The basic building blocks for pictures are referred to as graphics output primitives. They include
character strings and geometric entities, such as points, straight lines, curved lines, filled color areas
(usually polygons), and shapes defined with arrays of color points. In addition, some graphics packages
provide functions for displaying more complex shapes such as spheres, cones, and cylinders.

Attributes:
Attributes are properties of the output primitives that describe how a particular primitive is to be
displayed. This includes color specifications, line styles, text styles, and area-filling patterns.

Geometric transformations:
Geometric transformation functions allow the user to carry out transformations of objects such as rotation,
translation, and scaling; that is, the user can change the size, position, or orientation of an object within a
scene.
Some graphics packages provide an additional set of functions for performing modeling transformations,
which are used to construct a scene where individual object descriptions are given in local coordinates.

Viewing transformations:
Viewing-transformation functions allow us to specify various views (top, front, side, back) in order to select a view
of the scene, the type of projection to be used, and the location on a video monitor where the view is to
be displayed. Other routines are available for managing the screen display area by specifying its
position, size, and structure.

Input functions
Input functions allow programmers to design the interactive programs that characterize modern
graphics systems. These functions deal with devices such as keyboards, mice, and data tablets. Input
functions are used to control and process the data flow from these interactive devices.


Control operations:
The control functions enable us to communicate with the window system, to initialize programs, and
to deal with any errors that take place during the execution of programs. Some graphics packages also
provide routines for subdividing a picture description into a named set of component parts. Finally, a
graphics package provides routines for a number of housekeeping tasks, such as clearing a screen display
area to a selected color and initializing parameters.
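As a concrete (if simplified) illustration, the routine below touches several of these categories: a control
operation, an attribute setting, geometric transformations, and an output primitive. The particular values
are only examples and assume a GLUT window and 2D projection have already been set up, as in the
program shown later.

#include <GL/glut.h>

/* Illustrative sketch only: example values, not from the notes. */
void drawScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);        /* control operation: clear the display */
    glColor3f(1.0, 0.0, 0.0);            /* attribute: draw in red               */

    glMatrixMode(GL_MODELVIEW);          /* geometric transformations            */
    glLoadIdentity();
    glTranslatef(50.0, 25.0, 0.0);
    glRotatef(30.0, 0.0, 0.0, 1.0);

    glBegin(GL_POLYGON);                 /* output primitive: a filled triangle  */
        glVertex2f( 0.0,  0.0);
        glVertex2f(40.0,  0.0);
        glVertex2f(20.0, 30.0);
    glEnd();

    glFlush();                           /* control operation: force execution   */
}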

Display-Window Management Using GLUT

Following is a main program that works for most noninteractive applications:
#include<GL/glut.h>
void main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(400, 300);
glutInitWindowPosition(50, 100);
glutCreateWindow("An Example OpenGL Program");
glutDisplayFunc(display);
myinit();
glutMainLoop();
}
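The main program above registers a callback named display and calls myinit, neither of which is defined
at this point in the notes. A minimal sketch of what they might contain is shown below (both would
appear above main in the same file); the drawing itself is an arbitrary example.

/* Hypothetical versions of the routines referenced above. */
void myinit(void)
{
    glClearColor(1.0, 1.0, 1.0, 0.0);           /* white background            */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 400.0, 0.0, 300.0);         /* 2D world-coordinate frame   */
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0, 0.0, 1.0);                   /* draw one blue line segment  */
    glBegin(GL_LINES);
        glVertex2i(50, 50);
        glVertex2i(350, 250);
    glEnd();
    glutSwapBuffers();     /* the display mode above requested GLUT_DOUBLE     */
}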

FIGURE 11: A 400 by 300 display window at position (50, 100) relative to the top-left corner of the video
display.


Before we can open a window there must be interaction between the windowing system and OpenGL. In
GLUT, this interaction is initialized by the following function call.

glutInit(int *argc, char **argv)

Initializes GLUT and processes any command line arguments. glutInit() should be called before
any other GLUT routine. This function initializes the toolkit.

glutInitDisplayMode(unsigned int mode)

Specifies whether to use an RGBA or color-index color model. For example, if you want a
window with double buffering, the RGBA color model, and a depth buffer, you might call
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH).

glutInitWindowPosition(int x, int y)

Specifies the screen location for the upper-left corner of your window. For example
glutInitWindowPosition(50, 100) specifies window’s upper left corner should be positioned on the screen
50 pixels from the left edge and 100 pixels down from the top.

glutInitWindowSize(int width, int height)

Specifies the size, in pixels, of your window. For example glutInitWindowSize (400, 300)
specifies that the screen window should initially be 400 pixels wide by 300 pixels high.

int glutCreateWindow(char *string)

Creates a window with an OpenGL context. It returns a unique identifier for the new window.
For example, glutCreateWindow("An Example OpenGL Program") opens a screen window and puts the
title "An Example OpenGL Program" in its title bar. Until glutMainLoop() is called, the window is not
actually displayed.

The Display Callback

glutDisplayFunc(void (* func)(void))
Is the first and most important event callback function. Whenever GLUT determines
the contents of the window need to be redisplayed, the callback function registered by glutDisplayFunc()
is executed. Therefore, you should put all the routines you need to redraw the scene in the display
callback function.

glutMainLoop()
Whose execution will cause the program to begin an event-processing loop. If there are no
events to process, the program will sit in a wait state, with our graphics on the screen, until we terminate
the program through some external means, say by hitting a special key or combination of keys, such as
Control-C, that terminates the execution of the program.


A Complete OpenGL Program


#include <GL/glut.h>

void init (void)


{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)


{
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 0.4, 0.2); // Set line segment color to green.
glBegin (GL_LINES);
glVertex2i (180, 15); // Specify line-segment geometry.
glVertex2i (10, 145);
glEnd ( );
glFlush ( ); // Process all OpenGL routines as quickly as possible.
}

void main (int argc, char** argv)


{
glutInit (&argc, argv); // Initialize GLUT.
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition (50, 100); // Set top-left display-window position.
glutInitWindowSize (400, 300); // Set display-window width and height.
glutCreateWindow ("An Example OpenGL Program"); // Create display window.
init ( ); // Execute initialization procedure.
glutDisplayFunc (lineSegment); // Send graphics to display window.
glutMainLoop ( ); // Display everything and wait.
}

FIGURE 12: The display window and line segment produced by the example program.


SPECIFYING A TWO-DIMENSIONAL WORLD-COORDINATE REFERENCE FRAME IN OpenGL

FIGURE 13: World-coordinate limits for a display window, as specified in the gluOrtho2D function.

gluOrtho2D is a function that can be used to set up any two-dimensional Cartesian reference frame. The
arguments for this function are the four values defining the x and y coordinate limits for the display. Because the
gluOrtho2D function specifies an orthogonal projection, we must make sure that the coordinate values are
placed in the OpenGL projection matrix.

In addition, we assign the identity matrix as the projection matrix before defining the world-coordinate
range. This ensures that the coordinate values are not accumulated with any values we may have
previously set for the projection matrix. Thus, for a two-dimensional display, we define the coordinate frame
for the screen display window with the following statements.

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);

The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and by
coordinates (xmax, ymax) at the upper-right corner, as shown in Fig.13.

OpenGL POINT FUNCTIONS


To specify the geometry of a point, simply give a coordinate position in the world reference frame. Then
this coordinate position, along with other geometric descriptions is passed to the viewing routines.
OpenGL primitives are displayed with a default size and color. The default color for primitives is white,
and the default point size is equal to the size of one screen pixel.

The following OpenGL function is used to state the coordinate values for a single position
glVertex* ( );
where the asterisk (*) indicates that suffix codes are required for this function. These suffix codes are
used to identify the spatial dimension, the numerical data type to be used for the coordinate values, and a
possible vector form for the coordinate specification. A glVertex function must be placed between a
glBegin function and a glEnd function. The argument of the glBegin function is used to identify the
kind of output primitive that is to be displayed, and glEnd takes no arguments.
For point plotting, the argument of the glBegin function is the symbolic constant GL_POINTS. Thus,
the form for an OpenGL specification of a point position is
glBegin (GL_POINTS);
glVertex* ( );
glEnd ( );

We use a suffix value of 2, 3, or 4 on the glVertex function to indicate the dimensionality of a
coordinate position. A four-dimensional specification indicates a homogeneous-coordinate
representation, where the homogeneous parameter h (the fourth coordinate) is a scaling factor for the
Cartesian-coordinate values.

Since OpenGL treats two dimensions as a special case of three dimensions, any (x, y) coordinate
specification is equivalent to (x, y, 0) with h = 1. We need to state also which data type is to be used for
the numerical-value specifications of the coordinates. This is accomplished with a second suffix code on
the glVertex function. Suffix codes for specifying a numerical data type are i (integer), s (short), f
(float), and d (double).
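For instance, since the fourth (homogeneous) coordinate simply scales the Cartesian values, the
following calls (illustrative values) all specify the same two-dimensional point:

glBegin (GL_POINTS);
    glVertex2i (50, 100);                  /* (50, 100); z = 0 is implied          */
    glVertex3i (50, 100, 0);               /* same point, with z given explicitly  */
    glVertex4f (100.0, 200.0, 0.0, 2.0);   /* homogeneous: (100/2, 200/2, 0/2)     */
glEnd ( );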

Finally, the coordinate values can be listed explicitly in the glVertex function, or a single argument can
be used that references a coordinate position as an array. If we use an array specification for a coordinate
position, we need to append a third suffix code: v (for “vector”). In the following example, three equally
spaced points are plotted along a two-dimensional straight-line path with a slope of 2 (Fig. 14).
Coordinates are given as integer pairs.
glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );

FIGURE 14 Display of three point positions generated with glBegin (GL_POINTS).

Alternatively, we could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};


and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );
And here is an example of specifying two point positions in a three dimensional world reference frame.
In this case, we give the coordinates as explicit floating-point values.
glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );
We could also define a C++ class or structure (struct) for specifying point positions in various
dimensions. For example,
class wcPt2D {
public:
GLfloat x, y;
};
Using this class definition, we could specify a two-dimensional, world-coordinate point position with
the statements
wcPt2D pointPos;
pointPos.x = 120.75;
pointPos.y = 45.30;
glBegin (GL_POINTS);
glVertex2f (pointPos.x, pointPos.y);
glEnd ( );

OpenGL LINE FUNCTIONS

In OpenGL, a set of straight-line segments between each successive pair of endpoints in a list is
generated using the primitive line constant GL_LINES. In general, this will result in a set of
unconnected lines unless some coordinate positions are repeated. Nothing is displayed if only one
endpoint is specified, and the last endpoint is not processed if the number of endpoints listed is odd.
For example, if we have five coordinate positions, labeled p1 through p5, each represented as a two-
dimensional array, then the following code could generate the display shown in Fig. 15(a).
glBegin (GL_LINES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );


Thus, we obtain one line segment between the first and second coordinate positions, and another line
segment between the third and fourth positions. In this case, the number of specified endpoints is odd, so
the last coordinate position is ignored.

FIGURE 15 Line segments that can be displayed in OpenGL using a list of five endpoint coordinates.
(a) An unconnected set of lines generated with the primitive line constant GL_LINES.
(b) A polyline generated with GL_LINE_STRIP.
(c) A closed polyline generated with GL_LINE_LOOP.

With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline.

In this case, the display is a sequence of connected line segments between the first endpoint in the list
and the last endpoint. The first line segment in the polyline is displayed between the first endpoint and
the second endpoint; the second line segment is between the second and third endpoints; and so forth,
up to the last line endpoint. Nothing is displayed if we do not list at least two coordinate positions. Using
the same five coordinate positions as in the previous example, we obtain the display in Fig. 15(b) with
the code
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. An additional
line is added to the line sequence from the previous example, so that the last coordinate endpoint in the
sequence is connected to the first coordinate endpoint of the polyline. Figure 15(c) shows the display of
our endpoint list when we select this line option.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

OpenGL CURVE FUNCTIONS


Routines for generating basic curves, such as circles and ellipses, are not included as primitive functions
in the OpenGL core library. But this library does contain functions for displaying Bezier splines, which
are polynomials that are defined with a discrete point set.

Using rational B-splines, we can display circles, ellipses, and other two-dimensional quadrics. The
OpenGL Utility library (GLU) has routines for three-dimensional quadrics, such as spheres and cylinders, as
well as routines for producing rational B-splines, which are a general class of splines that includes the
simpler Bezier curves. In addition, there are routines in the OpenGL Utility Toolkit (GLUT) that we can
use to display some three-dimensional quadrics, such as spheres and cones, and some other shapes.

Another method we can use to generate a display of a simple curve is to approximate it using a polyline.
We just need to locate a set of points along the curve path and connect the points with straight-line
segments. The more line sections we include in the polyline, the smoother the appearance of the curve.

A third alternative is to write our own curve-generation functions based on the algorithms presented in
the following sections.

FIGURE 3-15 A circular arc approximated with (a) three straight-line segments, (b) six line segments, and
(c) twelve line segments.
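A hedged sketch of the polyline approach is given below: it approximates a full circle with N
straight-line segments using GL_LINE_LOOP. The centre, radius, and segment count are arbitrary
example values.

#include <GL/glut.h>
#include <math.h>

/* Approximate a circle by a closed polyline of N segments. */
void drawCircleApprox (GLfloat cx, GLfloat cy, GLfloat r, int N)
{
    glBegin (GL_LINE_LOOP);                      /* closed polyline */
    for (int i = 0; i < N; i++) {
        GLfloat theta = 2.0f * 3.1415926f * i / N;
        glVertex2f (cx + r * cosf (theta), cy + r * sinf (theta));
    }
    glEnd ( );
}

The more segments we request, the smoother the result: drawCircleApprox(100.0, 75.0, 50.0, 12) looks
visibly faceted, while drawCircleApprox(100.0, 75.0, 50.0, 100) appears smooth.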

OpenGL POINT-ATTRIBUTE FUNCTIONS

The displayed color of a designated point position is controlled by the current color values in the state
list. A color is specified with either the glColor function or the glIndex function.

The OpenGL function glPointSize (size) sets the size for a point, which is displayed as a square block of
pixels. Parameter size is assigned a positive floating-point value, which is rounded to an integer. The
number of horizontal and vertical pixels in the display of the point is determined by parameter size. Thus
a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a 2 by 2 pixel array.

The default value for point size is 1.0. Attribute functions may be listed inside or outside of a
glBegin/glEnd pair. For example, the following code segment plots three points in varying colors and
sizes. The first is a standard-size red point, the second is a double-size green point, and the third is a
triple-size blue point.
glColor3f (1.0, 0.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glPointSize (2.0);
glColor3f (0.0, 1.0, 0.0);
glVertex2i (75, 150);
glPointSize (3.0);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (100, 200);
glEnd ( );

OpenGL LINE-ATTRIBUTE FUNCTIONS

In OpenGL, the appearance of a straight-line segment can be controlled with three attribute settings:
line color, line width, and line style. OpenGL provides a function for setting the width of a line and
another function for specifying a line style, such as a dashed or dotted line.

OpenGL Line-Width Function


Line width is set in OpenGL with the function
glLineWidth (width);
The value assigned to parameter width is a floating-point number, and it is rounded to the nearest
nonnegative integer. If the input value rounds to 0.0, the line is displayed with a standard width of 1.0,
which is the default width.

OpenGL Line-Style Function


By default, a straight-line segment is displayed as a solid line. However, we can also display dashed lines,
dotted lines, or a line with a combination of dashes and dots, and we can vary the length of the dashes and
the spacing between dashes or dots. We set a current display style for lines with the OpenGL function:
glLineStipple (repeatFactor, pattern);
Parameter pattern is used to reference a 16-bit integer that describes how the line should be displayed.
A 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel position. The
pattern is applied to the pixels along the line path starting with the low-order bits in the pattern. The
default pattern is 0xFFFF (each bit position has a value of 1), which produces a solid line. Integer
parameter repeatFactor specifies how many times each bit in the pattern is to be repeated before the
next bit in the pattern is applied. The default repeat value is 1.

As an example of specifying a line style, suppose parameter pattern is assigned the hexadecimal
representation 0x00FF and the repeat factor is 1. This would display a dashed line with eight pixels in
each dash and eight pixel positions that are “off” (an eight-pixel space) between two dashes.
Also, since low order bits are applied first, a line begins with an eight-pixel dash starting at the first
endpoint. This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth, until
the second endpoint position is reached.


Before a line can be displayed in the current line-style pattern, we must activate the line-style feature of
OpenGL. This is accomplished with the following function.
glEnable (GL_LINE_STIPPLE);

At any time, we can turn off the line-pattern feature with glDisable (GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern (solid lines).

In the following program outline, we illustrate use of the OpenGL line attribute functions by plotting
three line graphs in different styles and widths. Figure 16 shows the data plots that could be generated
by this program.

FIGURE 16 Plotting three data sets with three different OpenGL line styles and line widths: single-width dash-dot
pattern, double-width dash pattern, and triple-width dot pattern.
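The program outline itself is not reproduced in the notes; a minimal sketch of a routine that could
produce a plot like Figure 16 is given below. The three data arrays (dataA, dataB, dataC) are
hypothetical, and the stipple patterns are plausible choices for dash-dot, dash, and dot styles rather than
values taken from the notes.

/* Sketch only: dataA, dataB, and dataC are hypothetical arrays of 2D points. */
GLint dataA[5][2], dataB[5][2], dataC[5][2];

void plotLines (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glEnable (GL_LINE_STIPPLE);

    glLineStipple (1, 0x1C47);            /* single-width dash-dot pattern */
    glLineWidth (1.0);
    glBegin (GL_LINE_STRIP);
    for (int k = 0; k < 5; k++) glVertex2iv (dataA[k]);
    glEnd ( );

    glLineStipple (1, 0x00FF);            /* double-width dash pattern */
    glLineWidth (2.0);
    glBegin (GL_LINE_STRIP);
    for (int k = 0; k < 5; k++) glVertex2iv (dataB[k]);
    glEnd ( );

    glLineStipple (1, 0x0101);            /* triple-width dot pattern */
    glLineWidth (3.0);
    glBegin (GL_LINE_STRIP);
    for (int k = 0; k < 5; k++) glVertex2iv (dataC[k]);
    glEnd ( );

    glDisable (GL_LINE_STIPPLE);
    glFlush ( );
}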


Digital Differential Analyzer (DDA) Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either
∆y or ∆x. We sample the line at unit intervals in one coordinate and determine the corresponding
integer values nearest the line path for the other coordinate.

For a line with positive slope, if the slope is less than or equal to 1, we sample at unit x intervals
(∆x = 1) and compute each successive y value as

yk+1 = yk + m --------------- (1)

Subscript k takes integer values starting from 1 for the first point and increases by 1 until the final
endpoint is reached. Since m can be any real number between 0 and 1, the calculated y values must be
rounded to the nearest integer.

For lines with a positive slope greater than 1, we reverse the roles of x and y; that is, we sample at unit
y intervals (∆y = 1) and calculate each succeeding x value as

xk+1 = xk + (1/m) -------------- (2)

Equations (1) and (2) are based on the assumption that lines are to be processed from the left endpoint
to the right endpoint.

If this processing is reversed, so that the starting endpoint is at the right, then ∆x = -1 and

yk+1 = yk - m ------------- (3)

Similarly, when the slope is greater than 1 and ∆y = -1,

xk+1 = xk - (1/m) ------------- (4)

If the absolute value of the slope is less than 1 and the start endpoint is at the left, we set ∆x = 1 and
calculate y values with Eq. (1)

When the start endpoint is at the right (for the same slope), we set ∆x = -1 and obtain y positions from
Eq. (3). Similarly, when the absolute value of a negative slope is greater than 1, we use ∆y = -1 and Eq.
(4) or we use ∆y = 1 and Eq. (2).


DDA Line Algorithm

#define ROUND(a) ((int)(a + 0.5))

/* DDA scan conversion of the line from (x1, y1) to (x2, y2).
   setpixel is assumed to plot one frame-buffer position.      */
void lineDDA (int x1, int y1, int x2, int y2)
{
   int dx = x2 - x1, dy = y2 - y1, steps, k;
   float xIncrement, yIncrement, x = x1, y = y1;

   if (abs (dx) > abs (dy))            /* sample along the coordinate with the */
      steps = abs (dx);                /* greater change                       */
   else
      steps = abs (dy);
   xIncrement = dx / (float) steps;
   yIncrement = dy / (float) steps;

   setpixel (ROUND (x), ROUND (y));    /* plot the first endpoint              */
   for (k = 0; k < steps; k++)
   {
      x += xIncrement;
      y += yIncrement;
      setpixel (ROUND (x), ROUND (y));
   }
}
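
The routine setpixel used above is not an OpenGL function; these notes assume it plots a single
frame-buffer position. A minimal sketch of such a helper, using an OpenGL point primitive, could be:

void setpixel (GLint x, GLint y)
{
   /* plot one pixel position as an OpenGL point (assumes an orthographic
      projection in which one unit corresponds to one screen pixel)        */
   glBegin (GL_POINTS);
      glVertex2i (x, y);
   glEnd ( );
}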

Problems
1. Obtain the coordinate points for a straight line whose endpoints are (0, 0) and (6, 18).

Solution:
Coordinate points for a straight line whose endpoints are (0, 0) and (6, 18).
Given:
x1 = 0, y1 = 0, x2 = 6, y2 = 18
Calculate dx = x2 - x1 = 6 - 0 = 6
          dy = y2 - y1 = 18 - 0 = 18
Since dy > dx, steps = 18,

xinc = dx/steps = 6/18 = 0.33
yinc = dy/steps = 18/18 = 1

The first point plotted is (0, 0); the remaining points are


k    X = x + xinc          Round(X)   Y = y + yinc   Plotted point (x, y)
1    0 + 0.33 = 0.33       0          0 + 1 = 1      (0, 1)
2    0.33 + 0.33 = 0.66    1          1 + 1 = 2      (1, 2)
3    0.66 + 0.33 = 0.99    1          2 + 1 = 3      (1, 3)
4    0.99 + 0.33 = 1.32    1          4              (1, 4)
5    1.32 + 0.33 = 1.65    2          5              (2, 5)
6    1.65 + 0.33 = 1.98    2          6              (2, 6)
7    1.98 + 0.33 = 2.31    2          7              (2, 7)
8    2.31 + 0.33 = 2.64    3          8              (3, 8)
9    2.64 + 0.33 = 2.97    3          9              (3, 9)
10   2.97 + 0.33 = 3.30    3          10             (3, 10)
11   3.30 + 0.33 = 3.63    4          11             (4, 11)
12   3.63 + 0.33 = 3.96    4          12             (4, 12)
13   3.96 + 0.33 = 4.29    4          13             (4, 13)
14   4.29 + 0.33 = 4.62    5          14             (5, 14)
15   4.62 + 0.33 = 4.95    5          15             (5, 15)
16   4.95 + 0.33 = 5.28    5          16             (5, 16)
17   5.28 + 0.33 = 5.61    6          17             (6, 17)
18   5.61 + 0.33 = 5.94    6          18             (6, 18)


2. Obtain the coordinate points for a straight line whose endpoints are (0, 0) and (4, 6).

Solution:
Coordinate points for a straight line whose endpoints are (0, 0) and (4, 6).
Given:
x1 = 0, y1 = 0, x2 = 4, y2 = 6

Calculate dx = x2 - x1 = 4 - 0 = 4
          dy = y2 - y1 = 6 - 0 = 6

Since dy > dx, steps = 6,

xinc = dx/steps = 4/6 = 0.67
yinc = dy/steps = 6/6 = 1

Assign X <- x1 and Y <- y1

k   X = x + xinc          Round(X)   Y = y + yinc   Plotted point (x, y)
1   0 + 0.67 = 0.67       1          0 + 1 = 1      (1, 1)
2   0.67 + 0.67 = 1.34    1          1 + 1 = 2      (1, 2)
3   1.34 + 0.67 = 2.01    2          2 + 1 = 3      (2, 3)
4   2.01 + 0.67 = 2.68    3          4              (3, 4)
5   2.68 + 0.67 = 3.35    3          5              (3, 5)
6   3.35 + 0.67 = 4.02    4          6              (4, 6)
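
As a quick check of the tables above, the DDA loop can be traced with a few lines of C (a standalone
sketch using only standard I/O; the endpoints are those of Problem 2). Because xinc is kept as
4/6 = 0.6667 rather than being rounded to 0.67, the fractional x values differ slightly from the hand
computation, but the rounded pixel positions are the same.

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
   int x1 = 0, y1 = 0, x2 = 4, y2 = 6;                  /* endpoints of Problem 2 */
   int dx = x2 - x1, dy = y2 - y1, k;
   int steps = (abs (dx) > abs (dy)) ? abs (dx) : abs (dy);
   float xInc = dx / (float) steps, yInc = dy / (float) steps;
   float x = x1, y = y1;

   printf ("start: plot (%d, %d)\n", (int)(x + 0.5), (int)(y + 0.5));
   for (k = 1; k <= steps; k++) {
      x += xInc;
      y += yInc;
      printf ("k=%d  x=%.2f  y=%.2f  plot (%d, %d)\n",
              k, x, y, (int)(x + 0.5), (int)(y + 0.5));
   }
   return 0;
}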

Limitations of the DDA algorithm:
It still uses floating-point values and the round() operation inside the loop.

How can we get rid of these? The solution is Bresenham's line algorithm.


Bresenham’s Line Algorithm

FIGURE 17: A section of the screen showing a pixel in column xk on scan line yk that is to be plotted along a line
segment with slope 0 < m < 1.

Bresenham developed an accurate and efficient raster line-generating algorithm that uses only
incremental integer calculations.
To illustrate Bresenham's approach, consider the scan-conversion process for lines with positive slope
less than 1.
Pixel positions along the line path can then be plotted by taking unit steps in the x direction and
determining the y-coordinate value of the nearest pixel to the line at each step.

Consider the situation as shown below

FIGURE 18: Vertical distances between pixel positions and the line y coordinate at sampling position xk+1.

We assume that the pixel position (xk, yk) has been plotted, and we now need to decide which pixel to
plot next. The two choices for the next pixel position are (xk+1, yk) and (xk+1, yk+1).

The vertical distances between the centers of these two candidate pixels and the line are labeled
d1 and d2.
The y coordinate on the mathematical line at pixel column xk+1 is calculated as

y = m(xk+1) + b --------- (1)

Then
d1 = y - yk = m(xk+1) + b - yk
d2 = (yk + 1) - y = yk + 1 - m(xk+1) - b

To determine which of the two pixels is closer to the line path, we use an efficient test based on the
difference between the two pixel separations:

d1 - d2 = 2m(xk+1) - 2yk + 2b - 1 ------------ (2)

A decision parameter Pk for the kth step in the line algorithm can be obtained by rearranging equation
(2): we substitute m = ∆y/∆x, where ∆x and ∆y are the horizontal and vertical separations of the
endpoint positions, and define the decision parameter as

Pk = ∆x (d1 - d2) = 2∆y xk - 2∆x yk + c ----------- (3)

The sign of Pk is the same as the sign of d1 - d2, since ∆x > 0.

Parameter c is a constant with the value 2∆y + ∆x(2b - 1), which is independent of the pixel position
and is eliminated in the recursive calculations for Pk.

If Pk is negative (i.e. d1 < d2), the pixel at yk is closer to the line than the pixel at yk + 1, so we plot
the lower pixel; otherwise, we plot the upper pixel at yk + 1.

We obtain the values of successive decision parameters using incremental integer calculations. At step
k + 1, the decision parameter is evaluated from equation (3) as

Pk+1 = 2∆y xk+1 - 2∆x yk+1 + c

Subtracting equation (3) from the preceding equation gives

Pk+1 - Pk = 2∆y (xk+1 - xk) - 2∆x (yk+1 - yk)

But xk+1 = xk + 1, so that

Pk+1 = Pk + 2∆y - 2∆x (yk+1 - yk) ------- (4)

where the term yk+1 - yk is either 0 or 1, depending on the sign of parameter Pk.

This recursive calculation of the decision parameter is performed at each integer x position, starting at
the left coordinate endpoint of the line.

The first parameter, P0, is evaluated from equation (3) at the starting pixel position (x0, y0), with m
evaluated as ∆y/∆x:

P0 = 2∆y - ∆x ------------- (5)


Bresenham's Line-Drawing Algorithm for |m| < 1

1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y - 2∆x, and obtain the starting value for the decision
   parameter as P0 = 2∆y - ∆x.
4. At each xk along the line, starting at k = 0, perform the following test:
   If Pk < 0, the next point to plot is (xk+1, yk) and Pk+1 = Pk + 2∆y;
   otherwise, the next point to plot is (xk+1, yk+1) and Pk+1 = Pk + 2∆y - 2∆x.

5. Perform step 4 ∆x times.

Implementation of Bresenham's Line-Drawing Algorithm

/* Bresenham scan conversion of the line from (x1, y1) to (x2, y2), for positive slope |m| < 1.
   setPixel is assumed to plot one frame-buffer position.                                       */
void lineBres (int x1, int y1, int x2, int y2)
{
   int dx = abs (x2 - x1), dy = abs (y2 - y1);
   int p  = 2 * dy - dx;            /* initial decision parameter P0 = 2dy - dx  */
   int k1 = 2 * dy;                 /* increment used when p < 0                 */
   int k2 = 2 * (dy - dx);          /* increment used when p >= 0                */
   int x, y;

   if (x1 > x2)                     /* determine which point to use as start, which as end */
   {
      x = x2;  y = y2;  x2 = x1;
   }
   else
   {
      x = x1;  y = y1;
   }
   setPixel (x, y);                 /* plot the first point                      */

   while (x < x2)
   {
      x++;
      if (p < 0)
         p += k1;
      else
      {
         y++;
         p += k2;
      }
      setPixel (x, y);
   }
}
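
A short usage sketch (assuming a GLUT display callback and a pixel-plotting helper such as the
setpixel sketch given after the DDA routine, here named setPixel to match the code above):

void displayFcn (void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glColor3f (0.0, 0.0, 1.0);    /* draw in blue                                        */
   lineBres (0, 2, 4, 5);        /* endpoints of the worked problem below: (0,2)-(4,5)  */
   glFlush ( );
}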


Problem

1. Use Bresenham's line algorithm to digitize a line from point (0, 2) to point (4, 5).
Solution

Given x0 = 0, y0 = 2, x1 = 4, y1 = 5

Calculate dx = x1 - x0 = 4 - 0 = 4
          dy = y1 - y0 = 5 - 2 = 3

Slope m = ∆y/∆x = 3/4 < 1

Now calculate the successive decision parameters Pk and the corresponding pixel positions (xk+1, yk+1)
closest to the line path, as follows.

The initial decision parameter has the value

P0 = 2*dy - dx = 2*3 - 4 = 6 - 4 = 2 >= 0

Since P0 >= 0, the next pixel is (xk+1, yk+1):
x1 = x0 + 1 = 0 + 1 = 1
y1 = y0 + 1 = 2 + 1 = 3
and Pk+1 = Pk + 2*(dy - dx):

P1 = P0 + 2*(3 - 4) = 2 + (-2) = 0 >= 0

Since P1 >= 0, the next pixel is again (xk+1, yk+1):
x2 = x1 + 1 = 1 + 1 = 2
y2 = y1 + 1 = 3 + 1 = 4
and

P2 = P1 + 2*(3 - 4) = 0 + (-2) = -2 < 0

Since P2 < 0, choose (xk+1, yk) and Pk+1 = Pk + 2*dy,


that is, x3 = x2 + 1 = 2 + 1 = 3, y3 = y2 = 4, and

P3 = P2 + 2*dy = -2 + 2*3 = -2 + 6 = 4 > 0

Since P3 > 0, choose (xk+1, yk+1):
x4 = x3 + 1 = 3 + 1 = 4
y4 = y3 + 1 = 4 + 1 = 5

We plot the initial point (x0, y0) = (0, 2), and determine successive pixel positions along the line path from the
decision parameter as

k     Pk         Pixel plotted (xk+1, yk+1)
                 x0 = 0    y0 = 2    (start pixel)
0     P0 = 2     x1 = 1    y1 = 3
1     P1 = 0     x2 = 2    y2 = 4
2     P2 = -2    x3 = 3    y3 = 4
3     P3 = 4     x4 = 4    y4 = 5    (end pixel)


Midpoint circle Algorithm

FIGURE 19: Midpoint between candidate pixels at sampling position xk+1 along a circular path.

To apply the midpoint method, we define a circle function as

fcircle(x, y) = x^2 + y^2 - r^2

Any point (x, y) on the boundary of the circle with radius r satisfies the equation fcircle(x, y) = 0.
If the point is in the interior of the circle, the circle function is negative, and if the point is outside
the circle, the circle function is positive:

fcircle(x, y) < 0, if (x, y) is inside the circle boundary
              = 0, if (x, y) is on the circle boundary
              > 0, if (x, y) is outside the circle boundary

The circle function is the decision parameter in the midpoint algorithm.

Figure 19 shows the midpoint between the two candidate pixels at sampling position xk+1. Assuming
the pixel at (xk, yk) has just been plotted, we next need to determine whether the pixel at position
(xk+1, yk) or the one at position (xk+1, yk-1) is closer to the circle.

The decision parameter is the circle function evaluated at the midpoint between these two pixels:

Pk = fcircle(xk + 1, yk - 1/2)
   = (xk + 1)^2 + (yk - 1/2)^2 - r^2

If Pk < 0, this midpoint is inside the circle and the pixel on scan line yk is closer to the circle boundary.
Otherwise, the midpoint is outside or on the circle boundary, and we select the pixel on scan line yk - 1.

Successive decision parameters are obtained using incremental calculations. We obtain a recursive
expression for the next decision parameter by evaluating the circle function at sampling position
xk+1 + 1 = xk + 2:

Pk+1 = fcircle(xk+1 + 1, yk+1 - 1/2)
     = [(xk + 1) + 1]^2 + (yk+1 - 1/2)^2 - r^2

or

Pk+1 = Pk + 2(xk + 1) + (yk+1^2 - yk^2) - (yk+1 - yk) + 1

where yk+1 is either yk or yk - 1, depending on the sign of Pk.

Increments for obtaining Pk+1 are either 2xk+1 + 1 (if Pk is negative) or 2xk+1 + 1 - 2yk+1.


Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as
2xk+1 = 2xk + 2
2yk+1 = 2yk - 2

At the start position (0, r), these two terms have the values 0 and 2r, respectively. Each successive value
for the 2xk+1 term is obtained by adding 2 to the previous value, and each successive value for the
2yk+1 term is obtained by subtracting 2 from the previous value.

The initial decision parameter is obtained by evaluating the circle function at the start position
(x0, y0) = (0, r):

P0 = fcircle(1, r - 1/2)
   = 1 + (r - 1/2)^2 - r^2

or

P0 = (5/4) - r

If the radius r is specified as an integer, we can simply round P0 to P0 = 1 - r (for r an integer).

Midpoint circle Algorithm


1. Input radius r and circle center (xc,yc) and obtain the first point on the circumference of the circle
centered on the origin as (x0,y0) = (0,r)
2. Calculate the initial value of the decision parameter as P0=(5/4)-r
3. At each xk position, starting at k=0, perform the following test.
If Pk <0 the next point along the circle centered on (0,0) is (xk+1,yk) and Pk+1=Pk+2xk+1+1
Otherwise
the next point along the circle is (xk+1,yk-1) and Pk+1=Pk+2xk+1+1-2 yk+1
Where 2xk+1=2xk+2 and 2yk+1=2yk-2
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered at (xc, yc) and plot the
coordinate values: x = x + xc, y = y + yc.
6. Repeat steps 3 through 5 until x >= y.

Use the midpoint circle algorithm to obtain the raster pixel positions of a circle of radius r = 10 with
center (3, 4).

Given a circle of radius r = 10,

determine the pixel positions along the circle octant in the first quadrant from x = 0 to x = y.
The initial value of the decision parameter is
P0 = 1 - r = 1 - 10 = -9

For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and


Successive midpoint decision parameter values and the corresponding coordinate positions along the
circle path are listed in the following table.

k   Pk    (xk, yk)   (xk+1, yk+1)   Pk+1 = Pk + 2(xk + 1) + (yk+1^2 - yk^2) - (yk+1 - yk) + 1

0   -9    (0, 10)    (1, 10)        = -9 + 2(1) + 0 - 0 + 1 = -6
1   -6    (1, 10)    (2, 10)        = -6 + 2(2) + 0 - 0 + 1 = -1
2   -1    (2, 10)    (3, 10)        = -1 + 2(3) + 0 - 0 + 1 = 6
3    6    (3, 10)    (4, 9)         = 6 + 2(4) + (9*9 - 10*10) - (9 - 10) + 1 = -3
4   -3    (4, 9)     (5, 9)         = -3 + 2(5) + 0 - 0 + 1 = 8
5    8    (5, 9)     (6, 8)         = 8 + 2(6) + (8*8 - 9*9) - (8 - 9) + 1 = 5
6    5    (6, 8)     (7, 7)         algorithm terminates, since xk+1 >= yk+1

Plot all the generated pixel positions, with center (xc, yc) = (3, 4):

(x +xc, y+yc) (-x +xc, -y+yc) (-x +xc, y+yc) (x +xc, -y+yc)
0+3, 10+4 0+3, -10+4 0+3, 10+4 0+3, -10+4
1+3, 10+4 -1+3, -10+4 -1+3, 10+4 1+3, -10+4
2+3, 10+4 -2+3, -10+4 -2+3, 10+4 2+3, -10+4
3+3, 10+4 -3+3, -10+4 -3+3, 10+4 3+3, -10+4
4+3, 9+4 -4+3, -9+4 -4+3, 9+4 4+3, -9+4
5+3, 9+4 -5+3, -9+4 -5+3, 9+4 5+3, -9+4
6+3, 8+4 -6+3, -8+4 -6+3, 8+4 6+3, -8+4
7+3, 7+4 -7+3, -7+4 -7+3, 7+4 7+3, -7+4
(y +xc, x+yc) (-y +xc, -x+yc) (-y +xc, x+yc) (y +xc, -x+yc)
8+4, 6+3 -8+4, -6+3 -8+4, 6+3 8+4, -6+3
9+4, 5+3 -9+4, -5+3 -9+4, 5+3 9+4, -5+3
9+4, 4+3 -9+4, -4+3 -9+4, 4+3 9+4, -4+3
10+4, 3+3 -10+4, -3+3 -10+4, 3+3 10+4, -3+3
10+4, 2+3 -10+4, -2+3 -10+4, 2+3 10+4, -2+3
10+4, 1+3 -10+4, -1+3 -10+4, 1+3 10+4, -1+3
10+4, 0+3 -10+4, -0+3 -10+4, 0+3 10+4, -0+3


Implementation of Midpoint Circle Algorithm

void circlePlotPoints (int xCenter, int yCenter, int x, int y);   /* forward declaration */

/* Midpoint scan conversion of a circle of the given radius, centered at (xCenter, yCenter).
   setpixel is assumed to plot one frame-buffer position.                                    */
void circleMidpoint (int xCenter, int yCenter, int radius)
{
   int x = 0;
   int y = radius;
   int p = 1 - radius;                          /* initial decision parameter   */

   circlePlotPoints (xCenter, yCenter, x, y);   /* plot the first set of points */

   while (x < y)
   {
      x++;
      if (p < 0)
         p += 2 * x + 1;
      else
      {
         y--;
         p += 2 * (x - y) + 1;
      }
      circlePlotPoints (xCenter, yCenter, x, y);
   }
}

/* Plot the eight symmetry points of (x, y) about the circle center. */
void circlePlotPoints (int xCenter, int yCenter, int x, int y)
{
   setpixel (xCenter + x, yCenter + y);
   setpixel (xCenter - x, yCenter + y);
   setpixel (xCenter + x, yCenter - y);
   setpixel (xCenter - x, yCenter - y);
   setpixel (xCenter + y, yCenter + x);
   setpixel (xCenter - y, yCenter + x);
   setpixel (xCenter + y, yCenter - x);
   setpixel (xCenter - y, yCenter - x);
}
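
A usage sketch for the worked example above (radius 10, center (3, 4)), again assuming the setpixel
helper sketched earlier and an OpenGL display callback:

glColor3f (1.0, 0.0, 0.0);       /* draw the circle points in red                  */
circleMidpoint (3, 4, 10);       /* center (3, 4), radius 10, as in the example    */
glFlush ( );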


MODULE – 1: Computer Graphics & OpenGL Question Bank (marks indicated in parentheses)

1. What is computer graphics? Mention the list of applications. How are they classified? (08)
2. Explain, with a neat block diagram, the working of a CRT. (08)
3. Explain the following terms: a) Pixel b) Raster c) Scan line d) Frame buffer e) Interlacing f) Resolution
   g) Aspect ratio h) Scan conversion i) Depth of frame buffer j) Refresh rate k) Display list l) Interlaced display. (10)
4. Explain the working of the following input devices: i. Mouse ii. Light pen iii. Tablet iv. Touch panel v. Joystick (10)
5. With a block diagram, explain the simple raster display system architecture. (08)
6. Explain the logical organization of the video controller. (08)
7. Explain, with a neat block diagram, a random display system. (10)
8. List out the differences between random display and raster display. (06)
9. Explain the raster system architecture with a peripheral display processor. (08)
10. Briefly describe how mapping takes place from world coordinates to raster coordinates. (08)
11. Explain, with the library organization, the OpenGL interface. OR Explain any five OpenGL functions that
    initiate interaction with the windowing system. (08)
12. Explain the major groups of graphics functions for creating and manipulating pictures. (08)
13. Briefly explain OpenGL primitives and attributes with suitable examples. (08)
14. Using the DDA algorithm, calculate the pixel positions that will be chosen while scan converting a line from the
    following screen coordinates: 1. (1, 1) to (8, 5) 2. (5, 8) to (9, 11) 3. (3, 2) to (9, 6) 4. (3, 7) to (9, 11)
    5. (0, 0) to (-6, -6) 6. (0, 0) to (-8, -4) (08)
15. Using Bresenham's algorithm, calculate the pixel positions that will be chosen while scan converting a line from the
    following screen coordinates (slope less than 1): 1. (1, 1) to (8, 5) 2. (5, 8) to (9, 11) 3. (3, 2) to (9, 6)
    4. (3, 7) to (9, 11) 5. (20, 10) to (30, 18) (08)
16. Using Bresenham's algorithm, calculate the pixel positions that will be chosen while scan converting a line from the
    following screen coordinates (slope greater than 1): 1. (5, 5) to (8, 10) 2. (0, 0) to (4, 6) 3. (3, 2) to (6, 9)
    4. (7, 3) to (11, 9) 5. (0, 0) to (5, 7) (08)
17. Explain the DDA algorithm for calculating pixel positions along a line. (08)
18. Explain Bresenham's line-drawing algorithm. How is it advantageous compared to other existing methods? (08)
19. Explain the midpoint circle algorithm to scan convert a circle. Sketch the decision-parameter figures. (08)
20. Draw the circle with (5, 5) as center and 8 as radius.
