
COMPUTER GRAPHICS AND FUNDAMENTALS OF IMAGE PROCESSING
21CS63

Faculty Name: Dr. DEEPA S. R.
Professor & Head, Department of CSD
KSIT, Bangalore
K S INSTITUTE OF TECHNOLOGY

• VISION
• "To impart quality technical education with ethical values, employable skills and research to achieve excellence"

• MISSION
• To attract and retain highly qualified, experienced & committed faculty.
• To create relevant infrastructure.
• To network with industry & premier institutions to encourage the emergence of new ideas by providing research & development facilities, and to strive for academic excellence.
• To inculcate professional & ethical values among young students, with employable skills & knowledge acquired to transform society.
COMPUTER GRAPHICS

Introduction to Computer Graphics

Computer graphics involves the display, manipulation and storage of pictures and experimental data for proper visualization using a computer.

Computer graphics is the art of drawing pictures, lines, charts, etc. using computers with the help of programming. A computer graphics image is made up of a number of pixels.

A pixel is the smallest addressable graphical unit represented on the computer screen.
• A typical graphics system comprises a host computer with a fast processor, large memory, and frame buffer, together with:
• Display devices (color monitors),
• Input devices (mouse, keyboard, joystick, touch screen, trackball),
• Output devices (LCD panels, laser printers, color printers, plotters, etc.),
• Interfacing devices such as video I/O, TV interface, etc.
Applications of Computer Graphics

1. Graphs and Charts

✓ An early application of computer graphics was the display of simple data graphs, usually plotted on a character printer. Data plotting is still one of the most common graphics applications.
✓ Graphs & charts are commonly used to summarize functional, statistical, mathematical, engineering and economic data for research reports, managerial summaries and other types of publications.
2. Computer-Aided Design

✓ A major use of computer graphics is in design processes, particularly for engineering and architectural systems.
✓ CAD (computer-aided design) or CADD (computer-aided drafting and design) methods are now routinely used in the design of automobiles, aircraft, spacecraft, computers, home appliances, etc.
✓ Circuits and networks for communications, water supply or other utilities are constructed with repeated placement of a few graphical shapes.
✓ Animations are often used in CAD applications. Real-time computer animations using wire-frame shapes are useful for quickly testing the performance of a vehicle or system.
3. Virtual-Reality Environments

✓ Animations in virtual-reality environments are often used to train heavy-equipment operators or to analyze the effectiveness of various cabin configurations and control placements.
✓ With virtual-reality systems, designers and others can move about and interact with objects in various ways. Architectural designs can be examined by taking a simulated "walk" through the rooms or around the outsides of buildings to better appreciate the overall effect of a particular design.
✓ With a special glove, we can even "grasp" objects in a scene and turn them over or move them from one place to another.
4. Data Visualization

✓ Producing graphical representations for scientific, engineering and medical data sets and processes is another fairly new application of computer graphics, generally referred to as scientific visualization. The term business visualization is used in connection with data sets related to commerce, industry and other nonscientific areas.
✓ There are many different kinds of data sets, and effective visualization schemes depend on the characteristics of the data. A collection of data can contain scalar values, vectors or higher-order tensors.
5. Education and Training

✓ Computer-generated models of physical, financial, political, social, economic & other systems are often used as educational aids.
✓ Models of physical processes, physiological functions, and equipment, such as the color-coded diagram shown in the figure, can help trainees to understand the operation of a system.
✓ For some training applications, special hardware systems are designed. Examples of such specialized systems are the simulators for practice sessions of aircraft pilots and air traffic control personnel.
✓ Some simulators have no video screens, e.g., a flight simulator with only a control panel for instrument flying.
6. Computer Art

✓ The picture is usually painted electronically on a graphics tablet using a stylus, which can simulate different brush strokes, brush widths and colors.
✓ Fine artists use a variety of other computer technologies to produce images. To create pictures the artist uses a combination of 3D modeling packages, texture mapping, drawing programs, CAD software, etc.
✓ Commercial art also uses these "painting" techniques for generating logos & other designs, page layouts combining text & graphics, TV advertising spots & other applications.
✓ A common graphics method employed in many television commercials is morphing, where one object is transformed into another.
7. Entertainment

✓ Television production, motion pictures, and music videos routinely use computer graphics methods.
✓ Sometimes graphics images are combined with live actors and scenes, and sometimes the films are completely generated using computer rendering and animation techniques.
✓ Some television programs also use animation techniques to combine computer-generated figures of people, animals, or cartoon characters with the actors in a scene, or to transform an actor's face into another shape.
8. Image Processing

✓ The modification or interpretation of existing pictures, such as photographs and TV scans, is called image processing.
✓ Although methods used in computer graphics and image processing overlap, the two areas are concerned with fundamentally different operations.
✓ Image processing methods are used to improve picture quality, analyze images, or recognize visual patterns for robotics applications.
✓ Image processing methods are often used in computer graphics, and computer graphics methods are frequently applied in image processing.

Image Processing Contd.

✓ Medical applications also make extensive use of image processing techniques, for picture enhancements in tomography and in simulations of surgical operations.
✓ It is also used in computed X-ray tomography (CT), positron emission tomography (PET) and computed axial tomography (CAT).
9. Graphical User Interfaces

✓ It is common now for applications software to provide a graphical user interface (GUI).
✓ A major component of a graphical interface is a window manager that allows a user to display multiple rectangular screen areas called display windows.
✓ Each screen display area can contain a different process, showing graphical or nongraphical information, and various methods can be used to activate a display window.
✓ Using an interactive pointing device, such as a mouse, we can activate a display window on some systems by positioning the screen cursor within the window display area and pressing the left mouse button.
Typical application areas are:

• GUI
• Plotting in business
• Office automation
• Desktop publishing
• Plotting in science and technology
• Web/business/commercial publishing and advertisements
• CAD/CAM design (VLSI, construction, circuits)
• Scientific visualization
• Entertainment (movies, TV advertisements, games, etc.)
• Simulation studies
• Simulators
• Cartography
• Multimedia
• Virtual reality
• Process monitoring
• Digital image processing
• Education and training

GUI – Graphical User Interface

Typical components used:

• Menus
• Buttons
• Icons
• Valuators
• Cursors
• Grids
• Dialog boxes
• Sketching
• Scroll bars
• 3-D interface

GKS – Graphics Kernel System, by ISO (International Standards Organization) & ANSI (American National Standards Institute)
SRGP – Simple Raster Graphics Package
PHIGS – Programmer's Hierarchical Interactive Graphics System
Various application packages and standards are available:

• Core Graphics
• GKS
• SRGP
• PHIGS, SPHIGS and PEX 3D
• OpenGL (with ActiveX and Direct3D)
• X11-based systems

These run on various platforms, such as DOS, Windows, Linux, OS/2, SGI, SunOS, Solaris, HP-UX, Mac, DEC-OSF.

Various utilities and tools available for web-based design include Java, XML, VRML and GIF animators.

Certain compilers, such as Visual C/C++, Visual Basic, Borland C/C++, Borland Pascal, Turbo C, Turbo Pascal, Gnu C/C++ and Java, provide their own graphical libraries, APIs, support and help for programming 2-D/3-D graphics.

Some of these systems are:
• device-independent (X11, OpenGL)
• device-dependent (Solaris, HP-AGP)


Basic output primitives (or elements) for drawing pictures:

• POLYLINE
• Filled POLYGONS (regions)
• ELLIPSE (ARC)
• TEXT
• IMAGE

Four major areas of computer graphics are:

• Display of information,
• Design/Modeling,
• Simulation and
• User Interface.
Computer graphics systems can be active or passive.

In both cases, the input to the system is the scene description and the output is a static or animated scene to be displayed.

In the case of active systems, the user controls the display with the help of a GUI, using an input device.

Computer graphics is nowadays a significant component of almost all systems and applications of computers in every field of life.
CRT DISPLAY DEVICES

Video Display Devices

➢The primary output device in a graphics system is a video monitor.
➢Historically, the operation of most video monitors was based on the standard cathode ray tube (CRT) design, but several other technologies exist.
➢In recent years, flat-panel displays have become significantly more popular due to their reduced power consumption and thinner designs.
Refresh Cathode-Ray Tubes
➢A beam of electrons, emitted by an electron gun, passes through
focusing and deflection systems that direct the beam toward
specified positions on the phosphor-coated screen.
➢The phosphor then emits a small spot of light at each position
contacted by the electron beam and the light emitted by the
phosphor fades very rapidly.
➢One way to maintain the screen picture is to store the picture
information as a charge distribution within the CRT in order to keep
the phosphors activated.
➢The most common method now employed for maintaining phosphor
glow is to redraw the picture repeatedly by quickly directing the
electron beam back over the same screen points. This type of display
is called a refresh CRT.
➢ The frequency at which a picture is redrawn on the screen is referred
to as the refresh rate.
Operation of an electron gun with an accelerating
anode
➢The primary components of an electron gun in a CRT are the heated metal
cathode and a control grid.
➢The heat is supplied to the cathode by directing a current through a coil of
wire, called the filament, inside the cylindrical cathode structure.
➢This causes electrons to be “boiled off” the hot cathode surface.
➢Inside the CRT envelope, the free, negatively charged electrons are then
accelerated toward the phosphor coating by a high positive voltage.
➢Intensity of the electron beam is controlled by the voltage at the control
grid.
➢ Since the amount of light emitted by the phosphor coating depends on
the number of electrons striking the screen, the brightness of a display
point is controlled by varying the voltage on the control grid.
➢The focusing system in a CRT forces the electron beam to converge to a
small cross section as it strikes the phosphor and it is accomplished with
either electric or magnetic fields.
➢With electrostatic focusing, the electron beam is passed through a positively charged metal cylinder so that electrons along the center line of the cylinder are in an equilibrium position.
➢Deflection of the electron beam can be controlled with either electric or magnetic fields.
➢Cathode-ray tubes are commonly constructed with two pairs of magnetic-deflection coils.
➢One pair is mounted on the top and bottom of the CRT neck, and the other pair is mounted on opposite sides of the neck.
➢The magnetic field produced by each pair of coils results in a transverse deflection force that is perpendicular to both the direction of the magnetic field and the direction of travel of the electron beam.
➢Horizontal and vertical deflections are accomplished with these pairs of coils.
Cathode Ray Tube

➢What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
➢The frequency of the light emitted by the phosphor is proportional to the energy difference between the excited quantum state and the ground state.
➢Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker.
➢The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
➢The resolution of a CRT depends on the type of phosphor, the intensity to be displayed, and the focusing and deflection systems.
➢High-resolution systems are often referred to as high-definition systems.
Raster-Scan Displays and Random-Scan Displays

1. Raster-Scan Displays

✓The electron beam is swept across the screen one row at a time, from top to bottom. As it moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots.
✓This scanning process is called refreshing. Each complete scanning of a screen is normally called a frame.
✓The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, also described as 60 Hz to 80 Hz.
✓Picture definition is stored in a memory area called the frame buffer. This frame buffer stores the intensity values for all the screen points. Each screen point is called a pixel (picture element).
✓A property of raster scan is the aspect ratio, defined as the number of pixel columns divided by the number of scan lines that can be displayed by the system. For example, a system with 1280 pixel columns and 1024 scan lines has an aspect ratio of 1280/1024 = 5/4.
Case 1: Black-and-white systems
• On black-and-white systems, the frame buffer storing the values of the pixels is called a bitmap.
• Each entry in the bitmap is 1 bit of data, which determines whether the pixel's intensity is on (1) or off (0).

Case 2: Color systems
• On color systems, the frame buffer storing the values of the pixels is called a pixmap (though nowadays many graphics libraries name it a bitmap too).
• Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue channel; each channel has 2^8 = 256 levels of intensity value, i.e., 256 voltage settings for each of the red/green/blue electron guns).
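
As a quick illustration of what these bit depths mean for memory, the sketch below (in C, with an assumed 1024 x 768 resolution that is not from the slides) computes the frame-buffer storage needed for a bitmap and for a true-color pixmap:

#include <stdio.h>

/* Sketch: frame-buffer memory for a given resolution and bit depth.
   The 1024 x 768 resolution is an assumed example value. */
int main(void)
{
    long width = 1024, height = 768;

    long bitmapBits = width * height * 1;   /* 1 bit per pixel (black & white) */
    long pixmapBits = width * height * 24;  /* 24 bits per pixel (true color)  */

    printf("Bitmap: %ld KB\n", bitmapBits / 8 / 1024);   /* 96 KB   */
    printf("Pixmap: %ld KB\n", pixmapBits / 8 / 1024);   /* 2304 KB */
    return 0;
}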
[Figure: Raster display architecture. The host computer sends display commands to a display processor and receives interaction data back; the display processor writes into the frame buffer, and the video controller reads the frame buffer to refresh the monitor.]
Basic definitions

◼ Video raster devices display an image by sequentially drawing out the pixels of the scan lines that form the raster.
• Raster: a rectangular array of points or dots.
• Pixel (pel, picture element): one dot or picture element of the raster, defined as the smallest addressable area on screen.
• Scan line: a row of pixels.
• Resolution: the number of pixel positions that can be plotted.
• Aspect ratio: the number of horizontal points to vertical points (or vice versa).
• Depth: the number of bits per pixel in a frame buffer.
• Bitmap: a frame buffer with one bit per pixel.
• Pixmap: a frame buffer with multiple bits per pixel.
• Horizontal retrace: the return of the beam to the left of the screen after refreshing each scan line.
• Vertical retrace: the electron beam returns to the top-left corner of the screen to begin the next frame.
• Refresh rates are described in units of cycles per second, or Hertz (Hz).
2. Random-Scan Displays
➢When operated as a random-scan display unit, a CRT has the electron
beam directed only to those parts of the screen where a picture is to
be displayed.
➢ Pictures are generated as line drawings, with the electron beam
tracing out the component lines one after the other.
➢ For this reason, random-scan monitors are also referred to as vector
displays (or stroke writing displays or calligraphic displays).
➢ The component lines of a picture can be drawn and refreshed by a random-scan system in any specified order.
Difference between raster-scan and random-scan systems

Electron beam:
• Raster scan: The electron beam is swept across the screen, one row at a time, from top to bottom.
• Random scan: The electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution:
• Raster scan: Resolution is poor, because the raster system produces zigzag lines that are plotted as discrete point sets.
• Random scan: Resolution is good, because this system produces smooth line drawings; the CRT beam directly follows the line path.

Picture definition:
• Raster scan: Picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area.
• Random scan: Picture definition is stored as a set of line-drawing instructions in a display file.

Realistic display:
• Raster scan: The capability of this system to store intensity values for pixels makes it well suited for the realistic display of scenes containing shadow and color patterns.
• Random scan: These systems are designed for line drawing and cannot display realistic shaded scenes.

Drawing an image:
• Raster scan: Screen points/pixels are used to draw an image.
• Random scan: Mathematical functions are used to draw an image.
Color CRT Monitors

• A CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light.
• The emitted light from the different phosphors merges to form a single perceived color, which depends on the particular set of phosphors that have been excited.

[Figure: Color CRT with electron guns (red, green and blue inputs), a deflection yoke, and a shadow mask in front of the phosphor-coated screen.]
Beam Penetration Method

• Coat the screen with layers of different-colored phosphors.
✓A beam of slow electrons excites only the outer red layer; a beam of very fast electrons penetrates through the red layer and excites the inner green layer.
✓(Drawback) The number of colors is limited, since only two phosphor layers are used.
✓(Drawback) Picture quality is not as good as with the shadow-mask method.

• Dot pitch: the spacing between pixels on a CRT, measured in millimeters. Generally, the lower the number, the more detailed the image.
Shadow Mask Method

• Uses 3 phosphor color dots at each pixel position, each emitting one of red, blue and green light.
• It has 3 electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen.
• The shadow mask contains a series of holes aligned with the phosphor-dot patterns, one hole for each phosphor triad.
• When the 3 beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. The phosphor dots in the triangles are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.
• The number of electrons in each beam controls the amount of red, blue and green light generated by the triad.
• Another configuration is the in-line arrangement, in which the electron guns and corresponding color dots are aligned along one scan line.
• Color CRTs in graphics systems are designed as RGB monitors.
• We obtain color variations in a shadow-mask CRT by varying the intensity levels of the three electron beams.
• When all three dots are activated with equal beam intensities, we see a white color. Yellow is
produced with equal intensities from the green and red dots only.
• Some inexpensive home-computer systems and video games have been designed for use with a
color TV set and a radio-frequency (RF) modulator.
• The purpose of the RF modulator is to simulate the signal from a broadcast TV station.
• This means that the color and intensity information of the picture must be combined and
superimposed on the broadcast-frequency carrier signal that the TV requires as input.
• Then the circuitry in the TV takes this signal from the RF modulator, extracts the picture
information, and paints it on the screen.
• Composite monitors are adaptations of TV sets that allow bypass of the broadcast circuitry.
• These display devices still require that the picture information be combined, but no carrier signal is
needed.
• Since picture information is combined into a composite signal and then separated by the monitor,
the resulting picture quality is still not the best attainable.
Flat-Panel Displays

• The term flat-panel display refers to a class of video devices that have reduced volume, weight and power requirements and are thinner than CRTs (so they can be hung on walls or worn on wrists).
• We can separate flat-panel displays into two categories:
1. Emissive displays (emitters): convert electrical energy into light.
   E.g., plasma panels, thin-film electroluminescent displays, light-emitting diodes (LEDs).
2. Non-emissive displays (non-emitters): use optical effects to convert sunlight or light from some other source into graphics patterns.
   E.g., liquid-crystal displays (LCDs).
Plasma-Panel Displays

• These are also called gas-discharge displays.
• A plasma panel is constructed by filling the region between two glass plates with a mixture of gases that usually includes neon.
• A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal ribbons is built into the other glass panel.
• Firing voltage applied to a pair of horizontal and vertical conductors causes the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions.
• Refresh rate: 60 times per second.
• Separation between pixels is provided by the electric field of the conductors.
• Disadvantage: strictly a monochromatic device.
Thin-Film Electroluminescent Displays

• Similar to a plasma-panel display, but the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with manganese, instead of a gas.
• When a sufficiently high voltage is applied, the phosphor becomes a conductor in the area of intersection of the two electrodes.
• Electrical energy is then absorbed by the manganese atoms, which release the energy as a spot of light, similar to the glowing plasma effect in a plasma panel.
• It requires more power than a plasma panel.
• Good color displays are difficult to achieve with this technology.
Light-Emitting Diode (LED)

• In this display, a matrix of multi-color light-emitting diodes is arranged to form the pixel positions, and the picture definition is stored in a refresh buffer.
• Similar to the scan-line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern on the display.
Liquid-Crystal Display (LCD)

• This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
• The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
• An LCD consists of two glass plates, each with a light polarizer at right angles to the other, which sandwich the liquid-crystal material between them.
• Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
• The intersection of two conductors defines a pixel position.
• Passive-matrix LCD:
  • In the ON state, polarized light passing through the material is twisted so that it will pass through the opposite polarizer; the light is then reflected back to the viewer.
  • In the OFF state, voltage applied to the two intersecting conductors aligns the molecules so that the light is not twisted.
• Active-matrix LCD:
  • A transistor is placed at each pixel location, using thin-film transistor technology; it controls the voltage at pixel locations and prevents charge from gradually leaking out of the liquid-crystal cells.
Three-Dimensional Viewing Devices
• Graphics monitors for the display of three-dimensional scenes have been devised using a technique
that reflects a CRT image from a vibrating, flexible mirror
• As the varifocal mirror vibrates, it changes focal length.
• These vibrations are synchronized with the display of an object on a CRT so that each point on the
object is reflected from the mirror into a spatial position corresponding to the distance of that point
from a specified viewing location.
• This allows us to walk around an object or scene and view it from different sides.
Three-Dimensional Viewing Devices
• In addition to displaying three-dimensional images, these systems are often capable of displaying
two-dimensional cross-sectional “slices” of objects selected at different depths, such as in medical
applications to analyze data from ultrasonography and CAT scan devices, in geological
applications to analyze topological and seismic data, in design applications involving solid objects,
and in three-dimensional simulations of systems, such as molecules and terrain.
Stereoscopic and Virtual-Reality Systems

• Another technique for representing a three-dimensional object is to display stereoscopic views of the object.
• This method does not produce true three-dimensional images, but it does provide a three-dimensional effect by presenting a different view to each eye of an observer, so that scenes appear to have depth.
• When we simultaneously look at the left view with the left eye and the right view with the right eye, the two views merge into a single image and we perceive a scene with depth.
• One way to produce a stereoscopic effect on a raster system is to display each of the two views on alternate refresh cycles.
• The screen is viewed through glasses, with each lens designed to act as a rapidly alternating shutter that is synchronized to block out one of the views.
• One such design uses liquid-crystal shutters and an infrared emitter that synchronizes the glasses with the views on the screen.
Stereoscopic and Virtual-Reality Systems
• Stereoscopic viewing is also a component in virtual-reality systems, where users can step into a
scene and interact with the environment.
• A headset containing an optical system to generate the stereoscopic views can be used in
conjunction with interactive input devices to locate and manipulate objects in the scene.
• A sensing system in the headset keeps track of the viewer's position, so that the front and back of objects can be seen as the viewer "walks through" and interacts with the display.
• Another method for creating a virtual-reality environment is to use projectors to generate a scene within an arrangement of walls, where a viewer interacts with a virtual display using stereoscopic glasses and data gloves.
Raster-Scan Systems
• Interactive raster-graphics systems typically employ several processing units.
• In addition to the central processing unit (CPU), a special-purpose processor, called the video
controller or display controller, is used to control the operation of the display device.

Architecture of simple Raster system


• Here, the frame buffer can be anywhere in the system memory, and the video controller accesses
the frame buffer to refresh the screen.
• In addition to the video controller, more sophisticated raster systems employ other processors as
coprocessors and accelerators to implement various graphics operations.
• Access the frame buffer to refresh the screen
• Control the operation for display Video Controller
• Color look-up table

A fixed area of the system memory is reserved for the frame buffer, and the video controller is
given direct access to the frame-buffer memory.
• The basic refresh operations of the video controller are diagrammed.
• Two registers are used to store the coordinate values for the screen pixels. Initially, the x register is
set to 0 and the y register is set to the value for the top scan line.
• The contents of the frame buffer at this pixel position are then retrieved and used to set the
intensity of the CRT beam.
• Then the x register is incremented by 1, and the process is repeated for the next pixel on the top
scan line.
• This procedure continues for each pixel along the top scan line.
• After the last pixel on the top scan line has been processed, the x register is reset to 0 and the y
register is set to the value for the next scan line down from the top of the screen.
• Pixels along this scan line are then processed in turn, and the procedure is repeated for each
successive scan line.
• After cycling through all pixels along the bottom scan line, the video controller resets the registers
to the first pixel position on the top scan line and the refresh process starts over.
• The screen must be refreshed at a rate of at least 60 frames per second. A condensed sketch of this basic refresh loop is shown below.
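
The register-driven refresh procedure described in these bullets can be condensed into a short C sketch. The frameBuffer array and the setBeamIntensity routine are hypothetical stand-ins used only for illustration, not part of any real video-controller interface:

#include <stdio.h>

#define XMAX 3   /* assumed tiny resolution so the sketch is self-contained */
#define YMAX 2

static int frameBuffer[(YMAX + 1) * (XMAX + 1)];

/* Stand-in for driving the CRT beam at the current pixel. */
static void setBeamIntensity(int value) { printf("%d ", value); }

/* The y register starts at the top scan line; the x register sweeps
   each line left to right, then resets (horizontal retrace). */
static void refreshFrame(void)
{
    for (int y = YMAX; y >= 0; y--) {
        for (int x = 0; x <= XMAX; x++)
            setBeamIntensity(frameBuffer[y * (XMAX + 1) + x]);
        printf("\n");   /* horizontal retrace */
    }
    /* vertical retrace: the whole process repeats at 60+ frames per second */
}

int main(void) { refreshFrame(); return 0; }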
• To speed up pixel processing, video controllers can retrieve multiple pixel values from
the refresh buffer on each pass.
• The multiple pixel intensities are then stored in a separate register and used to control the
CRT beam intensity for a group of adjacent pixels.
• When that group of pixels has been processed, the next block of pixel values is retrieved
from the frame buffer.
• A video controller can be designed to perform a number of other operations.
• For various applications, the video controller can retrieve pixel values from different
memory areas on different refresh cycles. In some systems, for example, multiple frame
buffers are often provided so that one buffer can be used for refreshing while pixel values
are being loaded into the other buffers. Then the current refresh buffer can switch roles
with one of the other buffers. This provides a fast mechanism for generating real-time
animations.
• Another video-controller task is the transformation of blocks of pixels, so that screen areas can be
enlarged, reduced, or moved from one location to another during the refresh cycles.
• In addition, the video controller often contains a lookup table, so that pixel values in the frame
buffer are used to access the lookup table instead of controlling the CRT beam intensity directly.
• This provides a fast method for changing screen intensity values.
• Finally, some systems are designed to allow the video controller to mix the frame-buffer image with
an input image from a television camera or other input device
Display Processor
• Also called either a Graphics Controller or Display Co-Processor
• Specialized hardware to assist in scan converting output primitives
into the frame buffer.
• Fundamental difference among display systems is how much the
display processor does versus how much must be done by the
graphics subroutine package executing on the general-purpose CPU.
Architecture of a raster-graphics system with a
display processor
• The major task of the display processor is digitizing a picture definition given in an application program into a set of pixel values for storage in the frame buffer.
• This digitization process is called scan conversion.
• Graphics commands specifying straight lines and other geometric objects are scan converted into a
set of discrete points, corresponding to screen pixel positions.
• Scan converting a straight-line segment, for example, means that we have to locate the pixel
positions closest to the line path and store the color for each position in the frame buffer
• Similar methods are used for scan converting other objects in a picture definition.
• Characters can be defined with rectangular pixel grids; character grid sizes can vary from about 5 by 7 up to 9 by 12 or more for higher-quality displays.
• A character grid is displayed by superimposing the rectangular grid pattern onto the frame buffer at a specified coordinate position.
• For characters that are defined as outlines, the shapes are scan-converted into the frame buffer by
locating the pixel positions closest to the outline.
• Display processors are also designed to perform a number of additional operations.
• These functions include generating various line styles (dashed, dotted, or solid), displaying color
areas, and applying transformations to the objects in a scene.
• Also, display processors are typically designed to interface with interactive input devices.
[Figure: A character defined as an outline shape.]
• In an effort to reduce memory requirements in raster systems, methods have been devised for organizing the frame buffer as a linked list and encoding the color information.
• With run-length encoding, each scan line is stored as a set of number pairs: the first number in each pair is a reference to a color value, and the second number specifies the number of adjacent pixels on the scan line that are to be displayed in that color.
• This technique can result in a considerable saving in storage space if a picture is to be constructed mostly with long runs of a single color (a minimal sketch of run-length encoding follows after this list).
• A similar approach can be taken when pixel colors change linearly
• Another approach is to encode the raster as a set of rectangular
areas (cell encoding).
• The disadvantages of encoding runs are that color changes are
difficult to record and storage requirements increase as the
lengths of the runs decrease.
• In addition, it is difficult for the display controller to process
the raster when many short runs are involved.
• Moreover, the size of the frame buffer is no longer a major
concern, because of sharp declines in memory costs.
• Nevertheless, encoding methods can be useful in the digital
storage and transmission of picture information.
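
A minimal run-length encoding sketch, as promised above; the 12-pixel scan line of color indices is an assumed example:

#include <stdio.h>

/* Encode one scan line as (color, run-length) pairs. */
int main(void)
{
    int scanline[] = {7, 7, 7, 7, 7, 2, 2, 2, 9, 9, 9, 9};
    int n = sizeof scanline / sizeof scanline[0];

    for (int i = 0; i < n; ) {
        int color = scanline[i], run = 0;
        while (i < n && scanline[i] == color) { run++; i++; }
        printf("(%d, %d) ", color, run);   /* pair: color value, run length */
    }
    printf("\n");   /* output: (7, 5) (2, 3) (9, 4) */
    return 0;
}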
Graphics Workstations and Viewing Systems

• Most graphics monitors today operate as raster-scan displays, and both CRT and flat-panel systems are in common use.
• Graphics workstations range from small general-purpose computer systems to multi-monitor facilities, often with ultra-large viewing screens.
• High-definition graphics systems, with resolutions up to 2560 by 2048, are commonly used in medical imaging, air-traffic control, simulation, and CAD.
• Many high-end graphics workstations also include large viewing screens, often with specialized features.

[Figure: A high-resolution (2048 by 2048) graphics monitor.]
Graphics Software

• There are two broad classifications of computer-graphics software:
1. Special-purpose packages: designed for nonprogrammers.
   Example: packages that generate pictures, graphs or charts, painting programs, or CAD systems for some application area, without the user worrying about the underlying graphics procedures.
2. General programming packages: a general programming package provides a library of graphics functions that can be used in a programming language such as C, C++, Java, or FORTRAN.
   Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling Language), Java 2D and Java 3D.
• A set of graphics functions is often called a computer-graphics application programming interface (CG API).
Coordinate Representations

• To generate a picture, we must give the geometric descriptions of the objects that are to be displayed: their locations and shapes. E.g., a box, a sphere, etc.
• If coordinate values for a picture are given in some other reference frame (spherical, hyperbolic, etc.), they must be converted to Cartesian coordinates.
• Several different Cartesian reference frames are used in the process of constructing and displaying a scene:
• First we define the shapes of individual objects, such as trees or furniture. These reference frames are called modeling coordinates or local coordinates (e.g., for a bicycle).
• Then we place the objects into appropriate locations within a scene reference frame called world coordinates.
• After all parts of a scene have been specified, the scene is processed through various output-device reference frames for display. This process is called the viewing pipeline.
• The scene is then stored in normalized coordinates, which range from -1 to 1 or from 0 to 1. Normalized coordinates are also referred to as normalized device coordinates.
• The coordinate systems for display devices are generally called device coordinates, or screen coordinates.
• Geometric descriptions in modeling coordinates and world coordinates can be given in floating-point or integer values.
• Example: the figure briefly illustrates the sequence of coordinate transformations from modeling coordinates to device coordinates for a display.
Graphics Functions

• A general-purpose graphics package provides users with a variety of functions for creating and manipulating pictures: graphics input, output, attributes, transformations, viewing, subdividing pictures, etc.
• The basic building blocks for pictures are referred to as graphics output primitives (straight lines, curved lines, spheres, cones, etc.).
• Attributes are properties of the output primitives.
• We can change the size, position, or orientation of an object using geometric transformations.
• Modeling transformations are used to construct a scene.
• Viewing transformations are used to select a view of the scene, the type of projection to be used, and the location where the view is to be displayed.
• Input functions are used to control and process the data flow from interactive devices such as a mouse, tablet or joystick.
Software Standards
• The primary goal of standardized graphics software is
portability.
• In 1984, Graphical Kernel System (GKS) was adopted
as the first graphics software standard by the
International Standards Organization (ISO)
• The second software standard to be developed and
approved by the standards organizations was
Programmer’s Hierarchical Interactive Graphics
System (PHIGS).
• Extension of PHIGS, called PHIGS+, was developed to provide 3-D
surface rendering capabilities not available in PHIGS.
• The graphics workstations from Silicon Graphics, Inc. (SGI), came with
a set of routines called GL (Graphics Library)
Other Graphics Packages

• Many other computer-graphics programming libraries have been developed for general graphics routines.
• Some are aimed at specific applications (animation, virtual reality, etc.). Examples: Open Inventor, Virtual-Reality Modeling Language (VRML).
• We can create 2-D and 3-D scenes within Java applets (Java 2D, Java 3D).
Graphics Packages

• A set of libraries that provides programmatic access to some kind of 2D graphics functions.
• Types:
1. GKS (Graphics Kernel System) – the first graphics package – accepted by ISO & ANSI
2. PHIGS (Programmer's Hierarchical Interactive Graphics Standard) – accepted by ISO & ANSI
3. PHIGS+ (expanded package)
4. Silicon Graphics GL (Graphics Library)
5. OpenGL
6. Pixar RenderMan interface
7. PostScript interpreters
8. Painting, drawing and design packages
OpenGL Basic (Core) Library

A basic library of functions is provided in OpenGL for specifying graphics primitives, attributes, geometric transformations, viewing transformations, and many other operations.
Basic OpenGL Syntax

➢Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with gl, and the first letter of each component word is capitalized.
➢E.g., glBegin, glClear, glCopyPixels, glPolygonMode.
➢Symbolic constants that are used with certain functions as parameters are all in capital letters, preceded by "GL", with component words separated by underscores.
➢E.g., GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.
➢The OpenGL functions also expect specific data types.
➢For example, an OpenGL function parameter might expect a value that is specified as a 32-bit integer, but the size of an integer specification can be different on different machines.
➢To indicate a specific data type, OpenGL uses special built-in data-type names, such as GLbyte, GLshort, GLint, GLfloat, GLdouble, GLboolean.
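
A short sketch of these type names in use; the variable names and values are illustrative only:

#include <GL/gl.h>

/* Illustrative declarations using OpenGL's built-in data types, whose
   sizes are fixed regardless of the host machine's native int/float sizes. */
GLint     vertexCount = 3;        /* 32-bit signed integer            */
GLfloat   pointSize   = 2.0f;     /* 32-bit floating point            */
GLdouble  xMin        = 0.0;      /* 64-bit floating point            */
GLboolean useRGB      = GL_TRUE;  /* 8-bit boolean (GL_TRUE/GL_FALSE) */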
Related Libraries

In addition to the OpenGL basic (core) library (prefixed with gl), there are a number of associated libraries for handling special operations:
1) OpenGL Utility (GLU): prefixed with "glu". It provides routines for setting up viewing and projection matrices, describing complex objects with line and polygon approximations, processing surface-rendering operations, and other complex tasks. Every OpenGL implementation includes the GLU library.
2) Open Inventor: provides routines and predefined object shapes for interactive three-dimensional applications; it is written in C++.
3) Window-system libraries: to create graphics we need a display window. We cannot create the display window directly with the basic OpenGL functions, since the core library contains only device-independent graphics functions and window-management operations are device-dependent. However, there are several window-system libraries that support OpenGL functions for a variety of machines. E.g., Apple GL (AGL), Windows-to-OpenGL (WGL), Presentation Manager to OpenGL (PGL), and GLX.
4) OpenGL Utility Toolkit (GLUT): provides a library of functions which acts as an interface for interacting with any device-specific screen-windowing system, thus making our programs device-independent. The GLUT library functions are prefixed with "glut".
Header Files

• In all graphics programs, we will need to include the header file for the OpenGL core library.
• In Windows, to include the OpenGL core library and GLU we can use the following header files:

#include <windows.h> // precedes other header files; required for the Microsoft Windows version of the OpenGL libraries
#include <GL/gl.h>
#include <GL/glu.h>

• The above lines can be replaced by using the GLUT header file, which ensures gl.h and glu.h are included correctly:

#include <GL/glut.h> // GL in Windows

• In Apple OS X systems, the header-file inclusion statement is:

#include <GLUT/glut.h>
Display-Window Management Using GLUT
➢ We can consider a simplified example: the minimal number of operations for displaying a picture.

Step 1: Initialization of GLUT
➢Since we are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
➢This initialization function could also process any command-line arguments, but we will not need to use these parameters for our first example programs.
➢We perform the GLUT initialization with the statement
glutInit (&argc, argv);
Step 2: Title
➢We can state that a display window is to be created on the screen with a given caption for the title bar. This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");
➢where the single argument for this function can be any character string that we want to use for the display-window title.
Step 3: Specification of the display window
➢Then we need to specify what the display window is to contain. For this, we create a picture using OpenGL functions and pass the picture definition to the GLUT routine glutDisplayFunc, which assigns our picture to the display window.
➢Example: suppose we have the OpenGL code for describing a line segment in a procedure called lineSegment.
➢Then the following function call passes the line-segment description to the display window:
glutDisplayFunc (lineSegment);
Step 4: One more GLUT function
➢But the display window is not yet on the screen.
➢We need one more GLUT function to complete the window-processing operations.
➢After execution of the following statement, all display windows that we have created, including their graphic content, are activated:
glutMainLoop ( );
➢This function must be the last one in our program. It displays the initial graphics and puts the program into an infinite loop that checks for input from devices such as a mouse or keyboard.

Step 5: Setting window parameters with additional GLUT functions
➢Although the display window that we created will be in some default location and size, we can set these parameters using additional GLUT functions.
GLUT Function 1:
➢We use the glutInitWindowPosition function to give an initial location for the upper-left corner of the display window.
➢This position is specified in integer screen coordinates, whose origin is at the upper-left corner of the screen.
glutInitWindowPosition(50,100);
GLUT Function 2:
➢We use the glutInitWindowSize function to set the initial pixel dimensions (width and height) of the display window. After the display window is on the screen, it can be repositioned and resized.
glutInitWindowSize(400,300);
GLUT Function 3:
➢We can also set a number of other options for the display window, such as buffering and a choice of color modes, with the glutInitDisplayMode function.
➢Arguments for this routine are assigned symbolic GLUT constants.
➢Example: the following command specifies that a single refresh buffer is to be used for the display window and that we want to use the color mode which uses red, green, and blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
➢The values of the constants passed to this function are combined using a logical OR operation.
➢Actually, single buffering and RGB color mode are the default options.
A Complete OpenGL Program
Step 1: Setting the background color
➢For the display window, we can choose a background color.
➢Using RGB color values, we set the background color for the display window to be white with the OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);
➢The first three arguments in this function set the red, green, and blue
component colors to the value 1.0, giving us a white background color
for the display window.
➢If, instead of 1.0, we set each of the component colors to 0.0, we
would get a black background
➢The fourth parameter in the glClearColor function is called the alpha
value for the specified color.
➢One use for the alpha value is as a “blending” parameter
➢When we activate the OpenGL blending operations, alpha values can
be used to determine the resulting color for two overlapping objects.
➢An alpha value of 0.0 indicates a totally transparent object, and an
alpha value of 1.0 indicates an opaque object.
➢For now, we will simply set alpha to 0.0.
➢Although the glClearColor command assigns a color to the display
window, it does not put the display window on the screen.
Step 2: Displaying the window color
To get the assigned window color displayed, we need to invoke the following OpenGL function:
glClear (GL_COLOR_BUFFER_BIT);
The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit values in the color buffer (refresh buffer) that are to be set to the values indicated in the glClearColor function. (OpenGL has several different kinds of buffers that can be manipulated.)

Step 3: Setting the object color
In addition to setting the background color for the display window, we can choose a variety of color schemes for the objects we want to display in a scene.
For our initial programming example, we will simply set the object color to be red:
glColor3f (1.0, 0.0, 0.0);
The suffix 3f on the glColor function indicates that we are specifying the three RGB color components using floating-point (f) values.
This function requires that the values be in the range from 0.0 to 1.0, and we have set red = 1.0, green = 0.0, and blue = 0.0.
Example program

• For our first program, we simply display a two-dimensional line segment.
• To do this, we need to tell OpenGL how we want to "project" our picture onto the display window, because generating a two-dimensional picture is treated by OpenGL as a special case of three-dimensional viewing.
• So, although we only want to produce a very simple two-dimensional line, OpenGL processes our picture through the full three-dimensional viewing operations.
• We can set the projection type (mode) and other viewing parameters that we need with the following two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
• This specifies that an orthogonal projection is to be used to map the contents of a two-dimensional rectangular area of world coordinates to the screen, and that the x-coordinate values within this rectangle range from 0.0 to 200.0, with y-coordinate values ranging from 0.0 to 150.0.
• Whatever objects we define within this world-coordinate rectangle will be shown within the display window.
• Anything outside this coordinate range will not be displayed.
• Therefore, the GLU function gluOrtho2D defines the coordinate reference frame within the display window to be (0.0, 0.0) at the lower-left corner of the display window and (200.0, 150.0) at the upper-right window corner.
• Finally, we need to call the appropriate OpenGL routines to create our line segment.
• The following code defines a two-dimensional straight-line segment with integer Cartesian endpoint coordinates (180, 15) and (10, 145):
glBegin (GL_LINES);
   glVertex2i (180, 15);
   glVertex2i (10, 145);
glEnd ( );
The first OpenGL program is organized into three functions.
init: We place all initializations and related one-time parameter settings in
function init.
lineSegment: Our geometric description of the “picture” that we want to
display is in function lineSegment, which is the function that will be referenced
by the GLUT function glutDisplayFunc.
main function: main function contains the GLUT functions for setting up the
display window and getting our line segment onto the screen.
glFlush: This is simply a routine to force execution of our OpenGL functions,
which are stored by computer systems in buffers in different locations,
depending on how OpenGL is implemented.
The procedure lineSegment that we set up to describe our picture is referred to as a display callback function, and it is "registered" by glutDisplayFunc as the routine to invoke whenever the display window might need to be redisplayed.
#include<GLUT/glut.h> // (or others, depending on the system in use)
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)


{
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 1.0, 0.0); // Set line segment color to green.
glBegin (GL_LINES);
glVertex2i (180, 15); // Specify line-segment geometry.
glVertex2i (10, 145);
glEnd ( );
glFlush ( ); // Process all OpenGL routines as quickly as possible.
}
void main (int argc, char** argv)
{
glutInit (&argc, argv); // Initialize GLUT.
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition (50, 100); // Set top-left display-window position.
glutInitWindowSize (400, 300); // Set display-window width and height.
glutCreateWindow ("An Example OpenGL Program"); // Create display window.
init ( ); // Execute initialization procedure.
glutDisplayFunc (lineSegment); // Send graphics to display window.
glutMainLoop ( ); // Display everything and wait.
}
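
As a usage note: on a typical Linux system with the freeglut development package installed, a program like the one above can usually be compiled with a command along the lines of gcc example.c -o example -lglut -lGLU -lGL (an assumed setup; library names and flags vary by platform and OpenGL implementation).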
Coordinate Reference Frames

To describe a picture, we first decide upon a convenient Cartesian coordinate system, called the world-coordinate reference frame, which could be either 2D or 3D.
We then describe the objects in our picture by giving their geometric specifications in terms of positions in world coordinates.
Example: we define a straight-line segment with two endpoint positions, and a polygon is specified with a set of positions for its vertices.
These coordinate positions are stored in the scene description along with other information about the objects, such as their color and their coordinate extents. (Coordinate extents are the minimum and maximum x, y, and z values for each object.)
A set of coordinate extents is also described as a bounding box for an object. For a 2D figure, the coordinate extents are sometimes called its bounding rectangle. A minimal sketch of computing such extents follows below.
Objects are then displayed by passing the scene description to the viewing routines, which identify visible surfaces and map the objects to frame-buffer positions and then onto the video monitor.
The scan-conversion algorithm stores information about the scene, such as color values, at the appropriate locations in the frame buffer, and then the scene is displayed on the output device.
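
As promised above, a minimal C sketch of computing the coordinate extents, i.e., the bounding rectangle, of a 2D polygon; the triangle's vertices are assumed example values:

#include <float.h>
#include <stdio.h>

typedef struct { float x, y; } Point2;

int main(void)
{
    Point2 verts[] = {{180, 15}, {10, 145}, {100, 60}};
    int n = sizeof verts / sizeof verts[0];

    /* Track the minimum and maximum x and y over all vertices. */
    float xmin = FLT_MAX, ymin = FLT_MAX, xmax = -FLT_MAX, ymax = -FLT_MAX;
    for (int i = 0; i < n; i++) {
        if (verts[i].x < xmin) xmin = verts[i].x;
        if (verts[i].x > xmax) xmax = verts[i].x;
        if (verts[i].y < ymin) ymin = verts[i].y;
        if (verts[i].y > ymax) ymax = verts[i].y;
    }
    printf("extents: x in [%g, %g], y in [%g, %g]\n", xmin, xmax, ymin, ymax);
    return 0;
}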
Screen co-ordinates:
Locations on a video monitor are referenced in integer screen coordinates,
which correspond to the integer pixel positions in the frame buffer.
Scan-line algorithms for the graphics primitives use the coordinate
descriptions to determine the locations of pixels
Example: given the endpoint coordinates for a line segment, a display
algorithm must calculate the positions for those pixels that lie along the line
path between the endpoints.
Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be taken into account by the implementation algorithms.
For the present, we assume that each integer screen position references the centre of a pixel area.
Once pixel positions have been identified, the color values must be stored in the frame buffer; a minimal sketch of this low-level operation follows.
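
The sketch below shows the kind of low-level frame-buffer access that scan-line algorithms assume. The setPixel/getPixel names and the tiny buffer are hypothetical, used only for illustration:

#include <stdio.h>

#define WIDTH  4
#define HEIGHT 4
static int frameBuffer[HEIGHT][WIDTH];

/* Store a color value at integer screen position (x, y). */
static void setPixel(int x, int y, int color) { frameBuffer[y][x] = color; }

/* Retrieve the stored color value at (x, y). */
static int getPixel(int x, int y) { return frameBuffer[y][x]; }

int main(void)
{
    setPixel(2, 1, 7);              /* plot one pixel */
    printf("%d\n", getPixel(2, 1)); /* prints 7       */
    return 0;
}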
Absolute and Relative Coordinate Specifications

• Absolute coordinates:
➢ So far, the coordinate references that we have discussed have been stated as absolute coordinate values.
➢ This means that the values specified are the actual positions within the coordinate system in use.

• Relative coordinates:
➢ However, some graphics packages also allow positions to be specified using relative coordinates.
➢ This method is useful for various graphics applications, such as producing drawings with pen plotters, artists' drawing and painting systems, and graphics packages for publishing and printing applications.
➢ Taking this approach, we can specify a coordinate position as an offset from the last position that was referenced (called the current position), as sketched below.
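
A minimal sketch of the relative-coordinate idea; the moveRelative name and the offsets are illustrative, not a real graphics-package API:

#include <stdio.h>

static int curX = 0, curY = 0;   /* the current position */

/* Interpret (dx, dy) as an offset from the current position. */
static void moveRelative(int dx, int dy)
{
    curX += dx;
    curY += dy;
    printf("current position: (%d, %d)\n", curX, curY);
}

int main(void)
{
    moveRelative(10, 5);    /* now at absolute (10, 5) */
    moveRelative(-3, 20);   /* now at absolute (7, 25) */
    return 0;
}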
Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL

➢ The gluOrtho2D command is a function we can use to set up any 2D Cartesian reference frame.
➢ The arguments for this function are the four values defining the x and y coordinate limits for the picture we want to display.
➢ Since the gluOrtho2D function specifies an orthogonal projection, we also need to be sure that the coordinate values are placed in the OpenGL projection matrix.
➢ In addition, we could assign the identity matrix as the projection matrix before defining the world-coordinate range.
➢ This would ensure that the coordinate values were not accumulated with any values we may have previously set for the projection matrix.
➢ Thus, for our initial two-dimensional examples, we can define the coordinate frame for the screen display window with the following statements:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
➢ The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and by coordinates (xmax, ymax) at the upper-right corner, as shown in the figure below.
OpenGL Functions

Geometric Primitives:
• These include points, line segments, polygons, etc. The primitives pass through a geometric pipeline, which decides whether a primitive is visible, how it should appear on the screen, and so on.
• Geometric transformations such as rotation, scaling, etc. can be applied to the primitives displayed on the screen. The programmer creates a geometric primitive with a bracketing pair of calls (the vertex list shown is a generic placeholder):

glBegin (type);
   /* a list of vertex specifications, e.g., glVertex2i (x, y); */
glEnd ( );

• where glBegin indicates the beginning of the object that has to be displayed and glEnd indicates the end of the primitive.
OpenGL Point Functions

➢The type within glBegin() specifies the type of the object; for points its value is GL_POINTS.
➢Each vertex is displayed as a point.
➢The size of the point is at least one pixel.
➢This coordinate position, along with any other geometric descriptions we may have in our scene, is passed to the viewing routines.
➢Unless we specify other attribute values, OpenGL primitives are displayed with a default size and color.
➢The default color for primitives is white, and the default point size is equal to the size of a single screen pixel.

Case 1: specifying points with explicit integer coordinates:

glBegin (GL_POINTS);
   glVertex2i (50, 100);
   glVertex2i (75, 150);
   glVertex2i (100, 200);
glEnd ( );

Case 2: we could instead specify the coordinate values for the preceding points in arrays such as

int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};

and call the OpenGL functions for plotting the three points as

glBegin (GL_POINTS);
   glVertex2iv (point1);
   glVertex2iv (point2);
   glVertex2iv (point3);
glEnd ( );

Case 3: specifying two point positions in a three-dimensional world reference frame. In this case, we give the coordinates as explicit floating-point values:

glBegin (GL_POINTS);
   glVertex3f (-78.05, 909.72, 14.60);
   glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );
Pixels
• The term "pixel" is actually short for "Picture Element."
• These small dots are what make up the images on computer displays,
whether flat-screen (LCD) or tube (CRT) monitors.
• The screen is divided up into a matrix of thousands or even millions
of pixels.
OpenGL LINE FUNCTIONS

➢The primitive type is GL_LINES.
➢Successive pairs of vertices are taken as endpoints and are connected
to form individual line segments.
➢Note that successive segments are usually disconnected because the
vertices are processed on a pairwise basis.
➢We obtain one line segment between the first and second coordinate
positions and another line segment between the third and fourth
positions.
➢If the number of specified endpoints is odd, the last coordinate
position is ignored.
Case 1:
GL_LINES:
glBegin (GL_LINES);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);   /* with five endpoints, the last one is ignored */
glEnd ( );

Case 2:
GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the
final vertex is not connected to the initial vertex.
glBegin (GL_LINE_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );

Case 3:
GL_LINE_LOOP:
Successive vertices are connected using line segments to form a
closed path or loop, i.e., the final vertex is connected to the
initial vertex.
glBegin (GL_LINE_LOOP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );
Opengl Point-Attribute Functions
Color:
• The displayed color of a designated point position is controlled by the current
color values in the state list.
• Also, a color is specified with either the glColor function or the glIndex function.
Size:
• We set the size for an OpenGL point with glPointSize (size); the point
is then displayed as a square block of pixels.
• Parameter size is assigned a positive floating-point value, which is
rounded to an integer.
• The number of horizontal and vertical pixels in the display of the point is
determined by parameter size.
• Thus, a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a
2×2 pixel array.
• If we activate the antialiasing features of OpenGL, the size of a displayed block of
pixels will be modified to smooth the edges.
• The default value for point size is 1.0.
The following code segment plots three points in varying colors and
sizes: a standard-size red point, a double-size green point, and a
triple-size blue point. Color functions such as glColor3f may be listed
inside or outside a glBegin/glEnd pair, but glPointSize is not allowed
between glBegin and glEnd, so each size change is made outside its own
point batch:

glColor3f (1.0, 0.0, 0.0);      /* red, standard size (1.0) */
glBegin (GL_POINTS);
   glVertex2i (50, 100);
glEnd ( );

glPointSize (2.0);              /* double-size green point */
glColor3f (0.0, 1.0, 0.0);
glBegin (GL_POINTS);
   glVertex2i (75, 150);
glEnd ( );

glPointSize (3.0);              /* triple-size blue point */
glColor3f (0.0, 0.0, 1.0);
glBegin (GL_POINTS);
   glVertex2i (100, 200);
glEnd ( );
Line-Attribute Functions OpenGL

• In OpenGL, a straight-line segment has three attribute settings: line color, line width, and line style.

• OpenGL provides a function for setting the width of a line and another function for specifying a line style, such as a dashed or dotted line.

In general, the line attributes are:
• Style
• Width
• Color
• Pen & Brush
Style:
➢Solid
➢Dotted – very short dashes with spacing equal to or greater than the dash itself
➢Dashed – displayed by generating an interdash spacing
➢Dash-dotted – a combination of the two earlier styles
⦁ For lines with slope magnitude less than 1, we can modify a
line-drawing routine to display thick lines by plotting a
vertical span of pixels at each x position along the line. The
number of pixels in each span is set equal to the integer
magnitude of the line-width parameter lw (a sketch of this
span approach follows below).
⦁ For lines with slope magnitude greater than 1, we can plot
thick lines with horizontal spans, alternately picking up
pixels to the right and left of the line path.
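A minimal sketch of the vertical-span idea for |m| < 1, assuming an available setPixel routine and an integer line width lw (both assumptions, for illustration only):

/* Plot a column of lw pixels centered on the line position (x, y). */
void plotVerticalSpan (int x, int y, int lw)
{
   int i;
   for (i = -(lw / 2); i <= (lw - 1) / 2; i++)
      setPixel (x, y + i);
}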
⦁ We can adjust the shape of the line ends to give them a better
appearance by adding line caps.

⦁ One kind of line cap is the butt cap obtained by adjusting the
end positions of the component parallel lines so that the
thick line is displayed with square ends that are perpendicular
to the line path.

⦁ If the specified line has slope m, the square end of the thick
line has slope -1/m.
⦁ Another line cap is the round cap obtained by adding a filled
semicircle to each butt cap.

⦁ The circular arcs are centered on the line endpoints and have a
diameter equal to the line thickness.

⦁ A third type of line cap is the projecting square cap.

⦁ Here, we simply extend the line and add butt caps that are
positioned one-half of the line width beyond the specified
endpoints.
⦁ We can generate thick polylines that are smoothly joined at the
cost of additional processing at the segment endpoints.

⦁ There are three possible methods for smoothly joining two line
segments:
⦁ miter join
⦁ round join
⦁ bevel join
⦁ A miter join is accomplished by
extending the outer boundaries of each
of the two lines until they meet.
⦁ A round join is produced by capping the
connection between the two segments
with a circular boundary whose diameter
is equal to the line width.
⦁ A bevel join is generated by displaying
the line segments with butt caps and
filling in the triangular gap where the
segments meet.
OpenGL Line-Width Function
Line width is set in OpenGL with the function:
   glLineWidth (width);
OpenGL Line-Style Function
We set a current display style for lines with the OpenGL function:
   glLineStipple (repeatFactor, pattern);
Pattern:
Parameter pattern references a 16-bit integer that describes how the
line should be displayed: a 1 bit in the pattern denotes an "on" pixel
position, and a 0 bit indicates an "off" pixel position.
repeatFactor:
Integer parameter repeatFactor specifies how many times each bit in the
pattern is to be repeated before the next bit in the pattern is applied.
The default repeat value is 1.
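For example (the pattern is applied starting with its low-order bit): glLineStipple (1, 0x00FF) turns the first 8 pixels on and the next 8 off, giving a dashed line with 8-pixel dashes separated by 8-pixel gaps; a repeatFactor of 2 would stretch both to 16 pixels.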
Activating line style:
• Before a line can be displayed in the current line-style pattern, we
must activate the linestyle feature of OpenGL.
glEnable (GL_LINE_STIPPLE);
• If we forget to include this enable function, solid lines are displayed;
that is, the default pattern 0xFFFF is used to display line segments.
• At any time, we can turn off the line-pattern feature with
glDisable (GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern
(solid lines).
Example Code
typedef struct { float x, y; } wcPt2D;

wcPt2D dataPts [5];

void linePlot (wcPt2D dataPts [5])
{
   int k;
   glBegin (GL_LINE_STRIP);
      for (k = 0; k < 5; k++)
         glVertex2f (dataPts [k].x, dataPts [k].y);
   glEnd ( );
   glFlush ( );   /* placed after glEnd: glFlush is not valid inside glBegin/glEnd */
}

/* Invoke a procedure here to draw coordinate axes. */

glEnable (GL_LINE_STIPPLE);

/* Input first set of (x, y) data values. */
glLineStipple (1, 0x1C47);   /* Plot a dash-dot, standard-width polyline. */
linePlot (dataPts);

/* Input second set of (x, y) data values. */
glLineStipple (1, 0x00FF);   /* Plot a dashed, double-width polyline. */
glLineWidth (2.0);
linePlot (dataPts);

/* Input third set of (x, y) data values. */
glLineStipple (1, 0x0101);   /* Plot a dotted, triple-width polyline. */
glLineWidth (3.0);
linePlot (dataPts);

glDisable (GL_LINE_STIPPLE);
Curve Attributes
• Parameters for curve attributes are the same as those for straight-line
segments.
• We can display curves with varying colors, widths, dot-dash patterns,
and available pen or brush options.
• Methods for adapting curve-drawing algorithms to accommodate
attribute selections are similar to those for line drawing.
• Raster curves of various widths can be displayed using the method of
horizontal or vertical pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot
vertical spans;
Case 2: when the slope magnitude |m| > 1.0, we plot horizontal spans.
Different methods to draw a curve:
Method 1:
Using the circle symmetry property, we generate the circle path with
vertical spans in the octant from x = 0 to x = y, and then reflect pixel
positions about the line y = x to obtain the remainder of the curve.
Method 2:
Another method for displaying thick curves is to fill in the area between
two Parallel curve paths, whose separation distance is equal to the desired
width. We could do this using the specified curve path as one boundary and
setting up the second boundary either inside or outside the original curve
path. This approach, however, shifts the original curve path either inward or
outward, depending on which direction we choose for the second
boundary.
Method 3:
The pixel masks discussed for implementing line-style options could
also be used in raster curve algorithms to generate dashed or dotted
patterns
Method 4:
Pen (or brush) displays of curves are generated using the same
techniques discussed for straight-line segments.
Method 5:
Painting and drawing programs allow pictures to be constructed
interactively by using a pointing device, such as a stylus and a graphics
tablet, to sketch various curve shapes.
Line Drawing Algorithm
A straight-line segment in a scene is defined by coordinate positions for the
endpoints of the segment.
To display the line on a raster monitor, the graphics system must first
project the endpoints to integer screen coordinates and determine the
nearest pixel positions along the line path between the two endpoints;
then the line color is loaded into the frame buffer at the corresponding
pixel coordinates.
The Cartesian slope-intercept equation for a straight line is
   y = m · x + b   (1)
with m as the slope of the line and b as the y intercept.
Given that the two endpoints of a line segment are specified at positions
(x0, y0) and (xend, yend), as shown in the figure, we determine values for
the slope m and y intercept b with the following equations:
   m = (yend - y0) / (xend - x0)   (2)
   b = y0 - m · x0   (3)
Algorithms for displaying straight lines are based on the line equation (1)
and the calculations given in equations (2) and (3).
For a given x interval δx along a line, we can compute the corresponding
y interval δy from equation (2) as
   δy = m · δx   (4)
Similarly, we can obtain the x interval δx corresponding to a specified
δy as
   δx = δy / m   (5)
For lines with slope magnitudes:
➢|m| < 1, δx can be set proportional to a small horizontal deflection
voltage, with the corresponding vertical deflection voltage set
proportional to δy from equation (4).
➢|m| > 1, δy can be set proportional to a small vertical deflection
voltage, with the corresponding horizontal deflection voltage set
proportional to δx from equation (5).
➢|m| = 1, δx = δy, and the horizontal and vertical deflection voltages
are equal.
Example 1: The endpoints of the line are (0, 0) and (6, 18). Compute
each value of y as x steps from 0 to 6 and plot the result.
Solution: The equation of the line is y = mx + b.
   m = (y2 - y1) / (x2 - x1) = (18 - 0) / (6 - 0) = 3
Next, the y intercept b is found by plugging y1 and x1 into the equation
y = 3x + b:
   0 = 3(0) + b. Therefore b = 0, and the equation for the line is y = 3x.
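Completing the example by stepping x from 0 to 6 in y = 3x:

   x : 0   1   2   3   4   5   6
   y : 0   3   6   9   12  15  18

so the plotted positions are (0,0), (1,3), (2,6), (3,9), (4,12), (5,15), (6,18). Because the slope is greater than 1, stepping in unit x intervals leaves vertical gaps; sampling in unit y intervals (as the DDA does for m > 1) avoids this.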
DDA Algorithm (DIGITAL DIFFERENTIAL ANALYZER)
• The DDA is a scan-conversion line algorithm based on calculating
either δy or δx.
• A line is sampled at unit intervals in one coordinate and the
corresponding integer values nearest the line path are determined for
the other coordinate
• The DDA algorithm has three cases, based on the slope equation
   m = (yk+1 - yk) / (xk+1 - xk)
Case 1:
If m < 1, x increments in unit intervals, i.e., xk+1 = xk + 1.
Then m = yk+1 - yk, so
   yk+1 = yk + m   (1)
where k takes integer values starting from 0 for the first point and
increases by 1 until the final endpoint is reached. Since m can be any
real number between 0.0 and 1.0, each calculated y value must be rounded
to the nearest integer.
Case 2:
If m > 1, y increments in unit intervals, i.e., yk+1 = yk + 1.
Then m (xk+1 - xk) = 1, so
   xk+1 = xk + (1/m)   (2)
Case 3:
If m = 1, both x and y increment in unit intervals,
i.e., xk+1 = xk + 1 and yk+1 = yk + 1.

• Equations (1) and (2) are based on the assumption that lines are to be
processed from the left endpoint to the right endpoint. If this
processing is reversed, so that the starting endpoint is at the right,
then either we have δx = -1 and
   yk+1 = yk - m   (3)
or (when the slope is greater than 1) we have δy = -1 with
   xk+1 = xk - (1/m)   (4)
Summary of the DDA:
If m < 1 (x incrementing by 1): yk+1 = yk + m.
Assuming (x0, y0) as the initial point, assign x = x0, y = y0.
o Illuminate pixel (x, round(y))
   x1 = x + 1, y1 = y + m
o Illuminate pixel (x1, round(y1))
   x2 = x1 + 1, y2 = y1 + m
o Illuminate pixel (x2, round(y2))
o Continue until the final point is reached.
If m > 1 (y incrementing by 1): xk+1 = xk + (1/m).
Assuming (x0, y0) as the initial point, assign x = x0, y = y0.
o Illuminate pixel (round(x), y)
   x1 = x + (1/m), y1 = y + 1
o Illuminate pixel (round(x1), y1)
   x2 = x1 + (1/m), y2 = y1 + 1
o Illuminate pixel (round(x2), y2)
o Continue until the final point is reached.
#include <stdlib.h>
#include <math.h>

inline int round (const float a)
{
   return int (a + 0.5);
}

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
   int dx = xEnd - x0, dy = yEnd - y0, steps, k;
   float xIncrement, yIncrement, x = x0, y = y0;

   /* Sample along the coordinate with the larger extent. */
   if (fabs (dx) > fabs (dy))
      steps = fabs (dx);
   else
      steps = fabs (dy);
   xIncrement = float (dx) / float (steps);
   yIncrement = float (dy) / float (steps);

   setPixel (round (x), round (y));
   for (k = 0; k < steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (round (x), round (y));
   }
}
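Both this listing and the Bresenham listing later in this section assume a setPixel routine. A minimal sketch in OpenGL immediate mode (an assumption supplied for completeness, not part of the original listing):

void setPixel (GLint xCoord, GLint yCoord)
{
   glBegin (GL_POINTS);
      glVertex2i (xCoord, yCoord);
   glEnd ( );
}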
Algorithm:
(x1,y1) (x2,y2) are the end points and dx, dy are the
float variables.
Where dx= abs(x2-x1) and dy= abs(y2-y1)
(i) If dx >=dy then
length = dx
else
length = dy
endif
(ii) xinc = (x2-x1)/length
(iii) yinc = (y2-y1)/length
(iv) x = x1, y = y1, i = 0
(v) Plot (round(x), round(y))
(vi) x = x + xinc
y = y + yinc
(vii) i=i+1
(viii) If i < length then go to step (v)
(ix) Stop
Example 2 Scan convert a line having end points
(3,2) & (4,7) using DDA.
Solution: dx= x2 - x1 = 4-3 = 1
dy= y2 - y1 = 7-2 = 5
As, dx < dy then
length = y2-y1 = 5
xinc = (x2-x1)/ length = 1/5 =0.2
yinc = (y2-y1)/ length = 5/5 = 1
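Continuing the iteration from the starting point (3, 2) and rounding each position:

   (x, y) = (3.0, 2) → plot (3, 2)
   (3.2, 3) → plot (3, 3)
   (3.4, 4) → plot (3, 4)
   (3.6, 5) → plot (4, 5)
   (3.8, 6) → plot (4, 6)
   (4.0, 7) → plot (4, 7)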
Limitations of DDA:
(1) The rounding operations and floating-point arithmetic are
time-consuming procedures.
(2) Round-off error can cause the calculated pixel positions to drift
away from the true line path for long line segments.
Bresenham’s Algorithm:
➢It is an efficient raster scan-conversion algorithm that uses only
incremental integer calculations.
➢To illustrate Bresenham’s approach, we first consider the scan-
conversion process for lines with positive slope less than 1.0.
➢Pixel positions along a line path are then determined by sampling at
unit x intervals. Starting from the left endpoint (x0, y0) of a given line,
we step to each successive column (x position) and plot the pixel
whose scan-line y value is closest to the line path.
➢Consider the equation of a straight line y=mx+c where m=dy/dx
Bresenham’s Line Algorithm

• It is an efficient algorithm for line drawing.
• When we are at the point (xk, yk), we must decide whether to draw the
point (xk+1, yk) or (xk+1, yk+1).
• Note that in all cases we move to xk+1; we must only decide whether to
stay at yk or move to yk+1.

[Figure: at sampling position xk + 1, the true line has y value y; the vertical distances to the two candidate pixels are d1 = y - yk (to pixel yk) and d2 = (yk + 1) - y (to pixel yk + 1).]
Bresenham’s Line Algorithm (cont.)

• After calculating d1 and d2, we choose yk or yk+1:

   y = m (xk + 1) + b

   d1 = y - yk = m (xk + 1) + b - yk
   d2 = (yk + 1) - y = yk + 1 - m (xk + 1) - b

   d1 - d2 = 2m · xk + 2m - 2yk + 2b - 1
           = 2m (xk + 1) - 2yk + 2b - 1
Bresenham’s Line Algorithm (cont.)

• Now we define the decision parameter pk as:

   pk = ∆x (d1 - d2) = 2∆y · xk - 2∆x · yk + c

   where c = 2∆y + ∆x (2b - 1), and k represents the kth step.

➢ If pk < 0 (i.e., d1 < d2), we plot the pixel (xk+1, yk);
   otherwise, we plot the pixel (xk+1, yk+1).
Bresenham’s Line Algorithm (cont.)

• To get the next decision parameter pk+1 from pk:

   pk+1 = pk + 2∆y - 2∆x (yk+1 - yk)

   where yk+1 - yk is either 0 or 1, depending on the sign of pk.

• The first decision parameter is p0 = 2∆y - ∆x.
Bresenham’s Line Algorithm (cont.)
1. Input the two endpoints and store (x0, y0) in the frame buffer.
2. Plot (x0, y0) as the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y - 2∆x, and obtain the
starting value for the decision parameter as p0 = 2∆y - ∆x.
4. At each xk along the line, starting at k = 0, perform the
following test:
   If pk < 0, plot (xk+1, yk) and pk+1 = pk + 2∆y.
   Otherwise, plot (xk+1, yk+1) and pk+1 = pk + 2∆y - 2∆x.
5. Perform step 4 ∆x - 1 times.


Bresenham’s Line Algorithm ( Example)

• Note: this form of Bresenham's algorithm is used when the slope is <= 1.

• Using Bresenham's line-drawing algorithm, digitize the line with endpoints (20, 10) and (30, 18).

   ∆y = 18 - 10 = 8
   ∆x = 30 - 20 = 10
   m = ∆y / ∆x = 0.8

• Plot the first point (x0, y0) = (20, 10).

• p0 = 2∆y - ∆x = 2 · 8 - 10 = 6. Since p0 >= 0, the next point is (21, 11).
Example (cont.)

k    pk    (xk+1, yk+1)
0     6    (21, 11)
1     2    (22, 12)
2    -2    (23, 12)
3    14    (24, 13)
4    10    (25, 14)
5     6    (26, 15)
6     2    (27, 16)
7    -2    (28, 16)
8    14    (29, 17)
9    10    (30, 18)
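As a check on the table: here 2∆y = 16 and 2∆y - 2∆x = -4, so each entry follows from pk+1 = pk + 16 when pk < 0 and pk+1 = pk - 4 otherwise (e.g., p2 = 2 - 4 = -2, then p3 = -2 + 16 = 14).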
Bresenham’s Line Algorithm (cont.)

• Notice that this version of Bresenham's algorithm works on lines with slope in the range 0 < m < 1.
• We draw from left to right.
• To draw lines with slope > 1, interchange the roles of the x and y directions.
Bresenham’s Line-Drawing Algorithm
for |m| < 1.0
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting
value for the decision parameter as p0 = 2∆y −∆x
4. At each xk along the line, starting at k = 0, perform the following test:
   If pk < 0, the next point to plot is (xk + 1, yk) and pk+1 = pk + 2∆y.
   Otherwise, the next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2∆y - 2∆x.
5. Repeat step 4 ∆x - 1 more times.
Note: If |m| > 1.0, then p0 = 2∆x - ∆y, and:
   If pk < 0, the next point to plot is (xk, yk + 1) and pk+1 = pk + 2∆x.
   Otherwise, the next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2∆x - 2∆y.
#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
   int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
   int p = 2 * dy - dx;
   int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
   int x, y;

   /* Determine which endpoint to use as start position. */
   if (x0 > xEnd) {
      x = xEnd;
      y = yEnd;
      xEnd = x0;
   }
   else {
      x = x0;
      y = y0;
   }
   setPixel (x, y);

   while (x < xEnd) {
      x++;
      if (p < 0)
         p += twoDy;            /* next pixel is (x, y) */
      else {
         y++;
         p += twoDyMinusDx;     /* next pixel is (x, y + 1) */
      }
      setPixel (x, y);
   }
}
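As a usage sketch, a display callback could reproduce the worked example above with lineBres (20, 10, 30, 18), assuming the two-dimensional world-coordinate frame has already been set with gluOrtho2D as described earlier.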
