Unit-1 - Merged Cgip
More recently, millions of people have become users of the Internet. Their access is through
graphical network browsers, such as Firefox, Chrome, Safari, and Internet Explorer that use
these same interface tools.
7. Image processing
Processing of existing images into refined ones for better interpretation is one of the many
applications of computer graphics.
Computer graphics techniques are also used in the field of photography, so that different types
of pictures can be improved or their shortcomings overcome; this is called image processing.
An image can also be edited using graphics software: it can be modified as needed, for example
by increasing or decreasing its brightness, among many other changes.
8. Visualization
Visualization is used by scientists, doctors, and engineers to study large amounts of data.
The weather department also uses visualization to obtain weather information, so that the
data for a field of study can be examined properly.
A Graphics System
A computer graphics system is a computer system with all the components of a general-purpose
computer system as shown in the block diagram in Figure 1.1.
There are six major elements in our system:
1. Input devices
2. Central Processing Unit
3. Graphics Processing Unit
4. Memory
5. Frame buffer
6. Output devices
Input Devices
Most graphics systems provide a keyboard and at least one other input device. The most
common input devices are the mouse, the joystick, and the data tablet.
Because each provides positional information to the system, they are often called pointing
devices. Each pointing device is usually equipped with one or more buttons that send signals
to the processor.
The 3D location of points on a real-world object can be obtained with laser range finders and
acoustic sensors.
Higher-dimensional data can be obtained from devices such as data gloves, which may include
many sensors, and from computer vision systems.
The CPU and the GPU
In a simple system, there may be only one processor, the central processing unit (CPU) of
the system, which must do both the normal processing and the graphical processing. The main
graphical function of the processor is to take specifications of graphical primitives (such as lines,
circles, and polygons) generated by application programs and to assign values to the pixels in the
frame buffer that best represent these entities. The conversion of geometric entities to pixel colors
and locations in the frame buffer is known as rasterization, or scan conversion.
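As a rough illustration of scan conversion, the following C sketch rasterizes a line segment into a toy frame buffer using the DDA method. The one-byte-per-pixel, row-major buffer layout and the function names are assumptions chosen for illustration; real frame-buffer organization is more complex.

```c
#include <stdlib.h>

#define FB_W 16
#define FB_H 16

/* Toy frame buffer: one byte per pixel, row-major (illustrative layout). */
static unsigned char framebuffer[FB_W * FB_H];

/* Round to the nearest integer (avoids needing libm). */
static int iround(float v) { return (int)(v + (v >= 0.0f ? 0.5f : -0.5f)); }

static void set_pixel(int x, int y, unsigned char color) {
    if (x >= 0 && x < FB_W && y >= 0 && y < FB_H)
        framebuffer[y * FB_W + x] = color;
}

/* Rasterize the segment (x0,y0)-(x1,y1) with the DDA method: step one
 * pixel at a time along the major axis and round the other coordinate. */
static void raster_line(int x0, int y0, int x1, int y1, unsigned char color) {
    int dx = x1 - x0, dy = y1 - y0;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    float xinc = steps ? (float)dx / steps : 0.0f;
    float yinc = steps ? (float)dy / steps : 0.0f;
    float x = (float)x0, y = (float)y0;
    for (int i = 0; i <= steps; i++) {
        set_pixel(iround(x), iround(y), color);
        x += xinc;
        y += yinc;
    }
}
```

Calling raster_line with a primitive's endpoint coordinates fills in exactly the frame-buffer pixels that best represent the segment, which is the job the text assigns to the rasterization stage.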
In early graphics systems, the frame buffer was part of the standard memory that could be
directly addressed by the CPU. Today, virtually all graphics systems are characterized by special-
purpose graphics processing units (GPUs), custom-tailored to carry out specific graphics
functions. The GPU can be either on the motherboard of the system or on a graphics card. The
frame buffer is accessed through the graphics processing unit.
We can use the terms frame buffer and color buffer synonymously without confusion.
Output Devices
For many years, the type of display (or monitor) has been the cathode-ray tube (CRT).
Components of CRT:
1. Electron Gun:
• The electron gun consists of a series of elements, primarily a heating filament (heater)
and a cathode.
• The electron gun creates a source of electrons which are focused into a narrow beam
directed at the face of the CRT.
2. Control Electrode: It is used to turn the electron beam on and off.
3. Focusing system: It is used to create a clear picture by focusing the electrons into a narrow
beam.
4. Deflection Yoke:
• It is used to control the direction of the electron beam.
• It creates an electric or magnetic field which will bend the electron beam as it passes
through the area. In a conventional CRT, the yoke is linked to a sweep or scan
generator.
• The deflection yoke which is connected to the sweep generator creates a fluctuating
electric or magnetic potential.
5. Phosphorus-coated screen:
• The inside front surface of every CRT is coated with phosphors.
• Phosphors glow when a high-energy electron beam hits them.
• Phosphorescence is the term used to characterize the light given off by a phosphor after
it has been exposed to an electron beam.
Operation:
1. A beam of electrons (cathode rays) is emitted by an electron gun.
2. It then passes through focusing and deflection systems that direct the beam toward
specified positions on the phosphor-coated screen.
3. The phosphor then emits a small spot of light at each position contacted by the electron
beam. The color you view on the screen is produced by a blend of red, blue and green light.
4. Because the light emitted by the phosphor fades very rapidly, some method is needed for
maintaining the screen picture.
5. One way to do this is to store the picture information as a charge distribution within the
CRT, which is used to keep the phosphors activated.
6. However, the most common method now employed for maintaining phosphor glow is to
redraw the picture repeatedly by quickly directing the electron beam back over the same
screen points. This type of display is called a refresh CRT.
7. The frequency at which a picture is redrawn on the screen is referred to as the Refresh
Rate.
The idea behind an electron gun is to create electrons and then accelerate them to a very
high speed.
The primary components of an electron gun in a CRT are:
o The Heated Metal Cathode and
o A Control Grid.
Heated Metal Cathode:
Heat is supplied to the cathode by directing a current through a coil of wire, called the
filament, inside the cylindrical cathode structure.
The electron gun starts with a small heater, which is a lot like the hot, bright filament of a
regular light bulb. It heats a cathode, which emits a cloud of electrons. Two anodes turn
the cloud into an electron beam:
The accelerating anode attracts the electrons and accelerates them toward the
screen.
The focusing anode turns the stream of electrons into a very fine beam.
Control Grid:
The control grid surrounds the cathode. The grid is cylindrical in shape and made of
metal.
The grid has a hole at one end, through which the electrons escape.
The control grid is kept at a lower potential than the cathode, so that an electrostatic
field is created.
This field directs the electrons through a point source, so the process of focusing is
simplified.
Focusing System
The purpose of the focusing system is to create a clear picture by focusing the electrons
into a narrow beam. Otherwise, the electrons would repel each other and the beam would
spread out as it approaches the screen.
Focusing is accomplished with either electric or magnetic fields.
Deflection System
Deflection of the electron beam can be controlled by either electric fields or magnetic
fields.
In the case of a magnetic field, two pairs of coils are used: one for horizontal deflection
and the other for vertical deflection.
In the case of an electric field, two pairs of parallel plates are used: one for horizontal
deflection and the other for vertical deflection, as shown in the figure above.
CRT Screen
The inside of the large end of a CRT is coated with a fluorescent material that gives off
light when struck by electrons.
When the electrons in the beam collide with the phosphor-coated screen, they are stopped
and their kinetic energy is absorbed by the phosphor.
Part of the beam energy is converted into heat, and the remainder causes the electrons in
the phosphor atoms to move up to higher energy levels.
Persistence
Persistence is the time a phosphor continues to emit light after the CRT beam is removed.
It is usually defined as the time it takes the emitted light from the screen to decay to
one-tenth of its original intensity.
Lower-persistence phosphors require higher refresh rates to maintain a picture on the
screen without flicker.
A phosphor with low persistence is useful for animation; a high-persistence phosphor is
useful for displaying highly complex, static pictures.
Resolution
The number of points per centimeter that can be plotted horizontally and
vertically, or the total number of points in each direction.
The resolution of a CRT depends on:
the type of phosphor
the intensity to be displayed
Aspect Ratio
It is the ratio of vertical points to horizontal points (or vice versa) necessary to produce
equal-length lines in both directions.
Example: An aspect ratio of 3/4 means that a vertical line plotted with three points has the
same length as a horizontal line plotted with four points.
Raster-Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan display,
based on television technology.
In a raster-scan system, the electron beam is swept across the screen, one row at a time,
from top to bottom. Each row is referred to as a scan line. As the electron beam moves
across a scan line, the beam intensity is turned on and off to create a pattern of illuminated
spots.
Picture definition is stored in a memory area called the refresh buffer or frame buffer,
where the term frame refers to the total screen area.
This memory area holds the set of color values for the screen points which are then retrieved
from the refresh buffer and used to control the intensity of the electron beam as it moves from
spot to spot across the screen.
In this way, the picture is “painted” on the screen one scan line at a time, as demonstrated in
Figure above.
Each screen spot that can be illuminated by the electron beam is referred to as a pixel or pel
(shortened forms of picture element).
Since the refresh buffer is used to store the set of screen color values, it is also sometimes
called a color buffer. Also, other kinds of pixel information, besides color, are stored in
buffer locations, so all the different buffer areas are sometimes referred to collectively as the
“frame buffer.”
Beam retracing is of two types: horizontal retrace and vertical retrace.
At the end of each scan line, the beam returns to the left side of the screen to begin the
next line; this is called horizontal retrace. When the beam reaches the bottom-right corner
after the last scan line, it returns to the top-left corner of the screen; this is called
vertical retrace, as shown in the figure.
Important Definitions:
Raster systems are commonly characterized by their resolution, which is the number of
pixel positions that can be plotted.
Another property of video monitors is aspect ratio, which is now often defined as the
number of pixel columns divided by the number of scan lines that can be displayed by the
system.
Aspect ratio can also be described as the number of horizontal points to vertical points (or
vice versa) necessary to produce equal-length lines in both directions on the screen. In other
words, it can be defined to be the width of the rectangle divided by its height.
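As a small worked example of the pixel-based definition, this C sketch reduces a resolution (pixel columns by scan lines) to its aspect ratio in lowest terms; the function names are ours, chosen for illustration.

```c
/* Greatest common divisor by Euclid's algorithm. */
static int gcd(int a, int b) {
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

/* Reduce a resolution of w pixel columns by h scan lines to the
 * aspect ratio w:h in lowest terms. */
static void aspect_ratio(int w, int h, int *aw, int *ah) {
    int g = gcd(w, h);
    *aw = w / g;
    *ah = h / g;
}
```

For instance, a 640 x 480 raster reduces to the familiar 4:3 aspect ratio.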
The range of colors or shades of gray that can be displayed on a raster system depends on
both the types of phosphor used in the CRT and the number of bits per pixel available in the
frame buffer.
black and white: 1 bit per pixel
gray scale: 1 byte per pixel (256 gray levels)
true color (R, G, B): 3 bytes = 24 bits per pixel (2^24 colors)
The number of bits per pixel in a frame buffer is sometimes referred to as either the depth of
the buffer area or the number of bit planes.
A frame buffer with one bit per pixel is commonly called a bitmap, and a frame buffer with
multiple bits per pixel is a pixmap.
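The storage implied by these depths can be checked with a one-line calculation; the function below is a hypothetical helper for illustration, not part of any graphics API.

```c
#include <stddef.h>

/* Bytes of frame-buffer storage for a given resolution and depth
 * (number of bit planes). Assumes the total bit count divides evenly
 * into bytes, which holds for common configurations. */
static size_t framebuffer_bytes(size_t xres, size_t yres, size_t bits_per_pixel) {
    return xres * yres * bits_per_pixel / 8;
}
```

A 640 x 480 bitmap (1 bit per pixel) needs 38,400 bytes, while a 1024 x 768 true-color pixmap (24 bits per pixel) needs about 2.25 MB.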
In raster scan systems refreshing is done at a rate of 60-80 frames per second. Refresh rates
are also sometimes described in units of cycles per second / Hertz (Hz).
Beam-Penetration Method:
The beam-penetration method for displaying color pictures has been used with random-scan
monitors (also called vector or calligraphic displays). Two layers of phosphor, red and
green, are coated onto the inside of the CRT screen. Slow electrons excite only the outer
red layer, while a beam of high-speed electrons penetrates through to the inner green
layer; thus the screen shows a green color.
The speed of the electrons, and hence the screen color at any point, is controlled by the
beam acceleration voltage.
Advantages:
1. Inexpensive
Disadvantages:
1. Only four colors are possible
2. The quality of pictures is not as good as with other methods.
Shadow-Mask Method:
Shadow Mask Method is commonly used in Raster-Scan System because they produce a
much wider range of colors than the beam-penetration method.
It is used in the majority of color TV sets and monitors.
Construction:
A shadow mask CRT has 3 phosphor color dots at each pixel position.
One phosphor dot emits: red light
Another emits: green light
Third emits: blue light
This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid just
behind the phosphor coated screen.
Shadow mask grid is pierced with small round holes in a triangular pattern.
Figure shows the delta-delta shadow mask method commonly used in color CRT system.
Working:
1. Triad arrangement of red, green, and blue guns.
The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3
electron beams are deflected and focused as a group onto the shadow mask, which
contains a sequence of holes aligned with the phosphor- dot patterns.
When the three beams pass through a hole in the shadow mask, they activate a dot
triangle, which appears as a small color spot on the screen.
The phosphor dots in the triangles are organized so that each electron beam can activate
only its corresponding color dot when it passes through the shadow mask.
2. Inline arrangement:
Another configuration for the 3 electron guns is an inline arrangement, in which the 3
electron guns, and the corresponding red-green-blue color dots on the screen, are aligned
along one scan line rather than in a triangular pattern.
This inline arrangement of electron guns is easier to keep in alignment and is commonly
used in high-resolution color CRTs.
Flat-Panel Displays
The term flat-panel display refers to a class of video devices that have reduced volume,
weight, and power requirements compared to a CRT.
Example: Small T.V. monitor, calculator, pocket video games, laptop computers, an
advertisement board in elevator.
1. Emissive Display: The emissive displays are devices that convert electrical energy into light.
Examples are Plasma Panel, thin film electroluminescent display and LED (Light Emitting
Diodes).
2. Non-Emissive Display: The non-emissive displays use optical effects to convert sunlight or
light from some other source into graphics patterns. An example is the LCD (Liquid Crystal Display).
Liquid Crystal Displays are the devices that produce a picture by passing polarized light from
the surroundings or from an internal light source through a liquid-crystal material that
transmits the light.
An LCD places the liquid-crystal material between two glass plates. One glass plate contains
rows of conductors arranged in the vertical direction; the other contains rows of conductors
arranged in the horizontal direction, so the two sets of conductors are at right angles to
each other. A pixel position is determined by the intersection of a vertical and a horizontal
conductor; this position is an active part of the screen.
A liquid crystal display is temperature dependent, operating between zero and seventy degrees
Celsius. It is flat and requires very little power to operate.
Graphics monitors for the display of 3-D scenes have been devised using a technique that
reflects a CRT image from a vibrating, flexible mirror (Fig. 14).
As the varifocal mirror vibrates, it changes focal length. These vibrations are synchronized with
the display of an object on a CRT so that each point on the object is reflected from the mirror
into a spatial position corresponding to the distance of that point from a specified viewing
location.
This allows us to walk around an object or scene and view it from different sides.
In addition to displaying 3-D images, these systems are often capable of displaying 2-D cross-
sectional “slices” of objects selected at different depths, such as in medical applications to
analyze data from ultrasonography.
Stereoscopic system:
A stereoscopic system does not produce true 3-D images, but it produces a 3-D effect by
presenting a different view to each eye of an observer, so that the scene appears to have depth.
To obtain this, we first need two views of the object, generated from viewing directions
corresponding to each eye.
We can construct the two views as computer-generated scenes with different viewing positions,
or we can use a stereo camera pair to photograph an object or scene.
When we simultaneously see the left view with the left eye and the right view with the right
eye, the two views merge into a single image that appears to have depth.
One way to produce the stereoscopic effect is to display each of the two views on a raster
system on alternate refresh cycles.
The screen is viewed through glasses in which each lens is designed to act as a rapidly
alternating shutter, synchronized to block out one of the views.
Virtual-reality:
Virtual reality is a system that produces images in such a way that the viewer feels immersed
in the surroundings shown on the display devices, even though they are not real.
In virtual reality, the user can step into a scene and interact with the environment.
A headset containing an optical system to generate the stereoscopic views is commonly used
in conjunction with interactive input devices to locate and manipulate objects in the scene.
A sensor in the headset keeps track of the viewer's position, so that the front and back of
objects can be seen as the viewer "walks through" and interacts with the display.
Virtual reality can also be produced with stereoscopic glasses and a video monitor instead of
a headset, which provides a low-cost virtual-reality system.
A sensor on the display screen tracks the head position and adjusts the image depth accordingly.
Raster graphics system with a fixed portion of the system memory reserved
for the frame buffer:
A fixed area of the system memory is reserved for the frame buffer, and the video controller
can directly access that frame-buffer memory. Frame-buffer locations and screen positions are
referenced in Cartesian coordinates. For many graphics monitors, the coordinate origin is
defined at the lower-left screen corner. The screen surface is then represented as the first
quadrant of a two-dimensional system, with positive x values increasing from left to right and
positive y values increasing from bottom to top.
Two registers, X and Y, are used to store the coordinates of the screen pixels. Initially, X is
set to 0 and Y is set to Ymax. The value stored in the frame buffer for this pixel is retrieved
and used to set the intensity of the CRT beam. The X register is then incremented by one, and
this procedure is repeated until X equals Xmax. Then X is reset to 0, Y is decremented by one,
and the procedure is repeated for the next scan line.
This whole procedure is repeated until Y reaches 0, which completes one refresh cycle. The
controller then resets the registers to the top-left corner (X = 0, Y = Ymax), and the refresh
process starts for the next cycle. Since the screen must be refreshed at a rate of 60 frames per
second, this simple procedure cannot be accommodated by typical RAM chips.
To speed up pixel processing, the video controller retrieves multiple pixel values at a time,
using more registers, and refreshes a block of pixels simultaneously. In this way it can
accommodate refresh rates of 60 frames per second or more.
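The register procedure above can be mimicked in plain C. This sketch records the order in which pixels would be visited during one refresh cycle; the screen dimensions and names are illustrative, and no actual beam or memory timing is modeled.

```c
#define XMAX 3   /* illustrative 4-column x 3-line "screen" */
#define YMAX 2

/* Fill order[][2] with (x, y) pairs in the order the video controller
 * reads them: X runs 0..XMAX along each scan line, Y runs Ymax down
 * to 0. Returns the number of pixels visited in one refresh cycle. */
static int refresh_order(int order[][2]) {
    int n = 0;
    for (int y = YMAX; y >= 0; y--) {       /* one scan line at a time */
        for (int x = 0; x <= XMAX; x++) {   /* left to right           */
            order[n][0] = x;
            order[n][1] = y;
            n++;
        }
    }
    return n;
}
```

The first pair recorded is (0, Ymax), the top-left corner, and the last is (Xmax, 0), the bottom-right, matching the scan order described above.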
To reduce memory requirements in raster-scan systems, methods have been devised for
organizing the frame buffer as a linked list and encoding the color information.
One way to do this is to store each scan line as a set of integer pairs. The first number in
each pair can be a reference to a color value, and the second number can specify the number of
adjacent pixels on the scan line that are to be displayed in that color. This technique is
called run-length encoding. A similar approach can be taken when pixel colors change linearly.
Another approach is to encode the raster as a set of rectangular areas (cell encoding).
The disadvantages of encoding runs are that color changes are difficult to record and
storage requirements increase as the lengths of the runs decrease.
It is also difficult for the display controller to process the raster when many short runs
are involved.
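A minimal C sketch of run-length encoding one scan line follows; the array-based interface is an assumption for illustration, not a standard format.

```c
/* Encode a scan line of n pixels into (color, count) run pairs.
 * colors[] and counts[] must each have room for up to n runs.
 * Returns the number of runs produced. */
static int rle_encode(const unsigned char *line, int n,
                      unsigned char colors[], int counts[]) {
    int runs = 0;
    int i = 0;
    while (i < n) {
        int j = i;
        while (j < n && line[j] == line[i])
            j++;                     /* extend the current run          */
        colors[runs] = line[i];      /* the run's color value           */
        counts[runs] = j - i;        /* adjacent pixels in that color   */
        runs++;
        i = j;
    }
    return runs;
}
```

A line of mostly uniform color compresses to a few pairs, while a line whose color changes at every pixel produces one pair per pixel, illustrating the disadvantage noted above for short runs.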
Coordinate Representations
To generate a picture using a programming package, we first need to give the geometric
descriptions of the objects that are to be displayed. These descriptions determine the locations
and shapes of the objects.
Except for a few specialized packages, general graphics packages require geometric descriptions
to be specified in a standard, right-handed, Cartesian-coordinate reference frame. If coordinate
values for a picture are given in some other reference frame, they must be converted to
Cartesian coordinates before they can be input to the graphics package.
Some packages that are designed for specialized applications may allow use of other
coordinate frames that are appropriate for those applications.
In general, several different Cartesian reference frames are used in the process of constructing
and displaying a scene.
First, we can define the shapes of individual objects, within a separate reference frame for each
object. These reference frames are called modeling coordinates, or sometimes local
coordinates or master coordinates. Once the individual object shapes have been specified,
we can construct (“model”) a scene by placing the objects into appropriate locations within a
scene reference frame called world coordinates.
Generally, a graphics system first converts world-coordinate positions to normalized device
coordinates, where each coordinate value is in the range from −1 to 1 or in the range from 0 to
1, depending on the system.
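For the [−1, 1] convention, the conversion is a one-line linear mapping; the helper below is hypothetical, written only to make the step concrete.

```c
/* Map a world-coordinate value in [vmin, vmax] to the normalized
 * range [-1, 1], as done before device mapping on many systems. */
static float world_to_ndc(float v, float vmin, float vmax) {
    return -1.0f + 2.0f * (v - vmin) / (vmax - vmin);
}
```

Applying it separately to x and y (and z in 3D) carries a world-coordinate position into normalized device coordinates.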
Finally, the picture is scan-converted into the refresh buffer of a raster system for display. The
coordinate systems for display devices are generally called device coordinates, or screen
coordinates in the case of a video monitor.
Figure above briefly illustrates the sequence of coordinate transformations from modeling
coordinates to device coordinates for a display that is to contain a view of two three-
dimensional (3D) objects.
An initial modeling-coordinate position (xmc , ymc , zmc ) in this illustration is transferred to
world coordinates, then to viewing and projection coordinates, then to left-handed normalized
coordinates, and finally to a device-coordinate position (xdc , ydc ) with the sequence:
Graphics Functions
A general-purpose graphics package provides users with a variety of functions for creating and
manipulating pictures.
These routines can be broadly classified according to whether they deal with graphics output,
input, attributes, transformations, viewing, subdividing pictures, or general control.
The basic building blocks for pictures are referred to as graphics output primitives. They
include character strings and geometric entities, such as points, straight lines, curved lines,
filled color areas (usually polygons), and shapes defined with arrays of color points.
In addition, some graphics packages provide functions for displaying more complex shapes
such as spheres, cones, and cylinders.
Attributes are properties of the output primitives; that is, an attribute describes how a
particular primitive is to be displayed. This includes color specifications, line styles, text styles,
and area-filling patterns.
We can change the size, position, or orientation of an object within a scene using geometric
transformations.
Some graphics packages provide an additional set of functions for performing modeling
transformations. Such packages usually provide a mechanism for describing complex objects.
Viewing transformations are used to select a view of the scene, the type of projection to be
used, and the location on a video monitor where the view is to be displayed.
Interactive graphics applications use various kinds of input devices, including a mouse, a tablet,
and a joystick. Input functions are used to control and process the data flow from these
interactive devices.
Finally, a graphics package contains a number of housekeeping tasks, such as clearing a screen
display area to a selected color and initializing parameters. We use control functions for
these operations.
Introduction to OpenGL
A basic library of functions is provided in OpenGL for specifying graphics primitives, attributes,
geometric transformations, viewing transformations, and many other operations.
Our basic model of a graphics package is a black box, a term that engineers use to denote a
system whose properties are described only by its inputs and outputs (we may know nothing
about its internal workings).
We can take the simplified view of inputs as function calls and outputs as primitives displayed
on our monitor, as shown in Figure.
An API for interfacing with this system can contain hundreds of individual functions. It will be
helpful to divide these functions into seven major groups:
1. Primitive functions
2. Attribute functions
3. Viewing functions
4. Transformation functions
5. Input functions
6. Control functions
7. Query functions
1. Primitive functions:
The primitive functions define the low-level objects or atomic entities that our system
can display. Depending on the API, the primitives can include points, line segments, polygons,
pixels, text, and various types of curves and surfaces.
2. Attribute functions
If primitives are the what of an API, then attributes are the how. That is, the attributes
govern the way that a primitive appears on the display.
Attribute functions allow us to perform operations ranging from choosing the color
with which we display a line segment, to picking a pattern with which to fill the inside of a
polygon, to selecting a typeface for the titles on a graph.
3. Viewing functions
The viewing functions allow us to specify various views, although APIs differ in the
degree of flexibility they provide in choosing a view.
4. Transformation functions
Transformation functions allow the user to carry out transformations of objects, such
as rotation, translation, and scaling.
5. Input functions
For interactive applications, an API must provide a set of input functions to allow us to
deal with the diverse forms of input that characterize modern graphics systems. We need
functions to deal with devices such as keyboards, mice, and data tablets.
6. Control functions
The control functions enable us to communicate with the window system, to initialize
our programs, and to deal with any errors that take place during the execution of our
programs.
7. Query functions
Within our applications we can often use other information within the API, including
camera parameters or values in the frame buffer. A good API provides this information
through a set of query functions.
Most of our applications will be designed to access OpenGL directly through functions in 3
libraries:
• OpenGL core library (GL)
-OpenGL32 on Windows
-GL on most Unix/Linux systems
-All functions in the GL library begin with gl.
• OpenGL Utility Library (GLU)
-Contains the code for creating common objects & simplifying viewing
-All functions in this library are created from the core GL library
-All functions in the GLU library begin with glu.
• OpenGL Utility Toolkit (GLUT)
-To interface with the window system and to get input from external devices into
our programs, we need at least one more library.
-For the X Window System, this library is called GLX; for Windows, it is wgl; and for
the Macintosh, it is agl. Rather than using a different library for each system, we use
a readily available library called the OpenGL Utility Toolkit (GLUT).
Figure 2.4 shows the organization of the libraries for an X Window System environment.
For this window system, GLUT will use GLX and the X libraries. The application program, however,
can use only GLUT functions and thus can be recompiled with the GLUT library for other window
systems.
OpenGL makes heavy use of defined constants to increase code readability; these constants are
defined in header (.h) files. In most implementations, one of the include lines
#include <GL/glut.h>
or
#include <glut.h>
is sufficient to read in glu.h and gl.h as well.
Certain functions require that one (or more) of their arguments be assigned a symbolic
constant.
All such constants begin with the uppercase letters GL. In addition, component words within a
constant name are written in capital letters, and the underscore ( _ ) is used as a separator
between all component words in the name.
Example: GL_RGB, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.
The OpenGL functions also expect specific data types. To indicate a specific data type, OpenGL
uses special built-in data-type names, such as GLbyte, GLshort, GLint, GLfloat, GLdouble, and
GLboolean.
Each data-type name begins with the capital letters GL, and the remainder of the name is a
standard data-type designation written in lowercase letters.
In addition to the OpenGL basic (core) library, the OpenGL Utility (GLU) provides routines for
setting up viewing and projection matrices, describing complex objects with line and polygon
approximations, displaying quadrics and other complex tasks.
Every OpenGL implementation includes the GLU library, and all GLU function names start with
the prefix glu.
To get started, we can consider a simplified, minimal number of operations for displaying a
picture.
Step-1: Initialization of GLUT.
Since we are using the OpenGL Utility Toolkit, this is done using the statement:
glutInit (&argc, argv);
The function glutMainLoop must be the last one in our program. It displays the initial
graphics and puts the program into an infinite loop that checks for input from devices such
as a mouse or keyboard.
Our first example will not be interactive, so the program will just continue to display our
picture until we close the display window.
Step-5: Additional GLUT Functions:
Although the display window that we created will be in some default location and size, we can
set these parameters using additional GLUT functions.
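Putting the GLUT steps together, a minimal non-interactive program might look like the following sketch. It assumes a working OpenGL/GLUT installation; the window title, position, and size are arbitrary illustrative choices, and the program simply clears the display window to white.

```c
#include <GL/glut.h>

/* Display callback: clear the window to the current clear color. */
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);                      /* Step 1: initialize GLUT   */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition(50, 100);            /* optional placement        */
    glutInitWindowSize(400, 300);               /* optional size             */
    glutCreateWindow("An Example OpenGL Program");
    glClearColor(1.0, 1.0, 1.0, 0.0);           /* white background          */
    glutDisplayFunc(display);                   /* register display routine  */
    glutMainLoop();                             /* must be the last call     */
    return 0;
}
```

The display window stays open, redrawing through the registered callback, until the user closes it.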
To describe a picture, we first decide upon a convenient Cartesian coordinate system, called the
world-coordinate reference frame, which could be either 2D or 3D. We then describe the
objects in our picture by giving their geometric specifications in terms of positions in world
coordinates.
Example: We define a straight-line segment with two endpoint positions, and a polygon is
specified with a set of positions for its vertices.
These coordinate positions are stored in the scene description along with other info about the
objects, such as their color and their coordinate extents.
Coordinate extents: Coordinate extents are the minimum and maximum x, y, and z values for
each object. A set of coordinate extents is also described as a bounding box for an object.
Example: For a 2D figure, the coordinate extents are sometimes called its bounding rectangle.
Objects are then displayed by passing the scene description to the viewing routines which
identify visible surfaces and map the objects to the frame buffer positions and then on the
video monitor.
Screen Coordinates
Locations on a video monitor are referenced in integer screen coordinates, which correspond
to the integer pixel positions in the frame buffer.
Scan-line algorithms for the graphics primitives use the coordinate descriptions to determine
the locations of pixels.
Example: given the endpoint coordinates for a line segment, a display algorithm must calculate
the positions for those pixels that lie along the line path between the endpoints.
Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be taken
into account by the implementation algorithms.
For the present, we assume that each integer screen position references the centre of a pixel
area. Once pixel positions have been identified the color values must be stored in the frame
buffer.
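That final world-to-screen step can be sketched as a small C helper, assuming each integer screen position references the centre of a pixel area; the name and interface are illustrative.

```c
/* Map a world coordinate v in the range [wmin, wmax) to an integer
 * screen coordinate in [0, pixels - 1], where each integer position
 * references the centre of a pixel area. */
static int world_to_screen(float v, float wmin, float wmax, int pixels) {
    float s = (v - wmin) / (wmax - wmin) * (float)pixels;
    int p = (int)s;              /* truncation == floor for s >= 0 */
    if (s < 0.0f) p = 0;         /* clamp to the screen edges      */
    if (p > pixels - 1) p = pixels - 1;
    return p;
}
```

Each world coordinate thus selects the pixel cell it falls into, and values on the window boundary are clamped to the edge pixels.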
Absolute Coordinate values means that the values specified are the actual positions within the
coordinate system in use.
However, some graphics packages also allow positions to be specified using Relative Coordinates. Taking this approach, we can specify a coordinate position as an offset from the last position that was referenced (called the current position). This method is useful for various graphics applications, such as producing drawings with pen plotters, artists' drawing and painting systems, and graphics packages for publishing and printing applications.
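As an illustrative sketch (in Python; the function name is ours, not part of any graphics API), relative coordinates can be resolved to absolute positions by accumulating each offset onto the current position:

```python
def to_absolute(start, relative_moves):
    """Convert relative-coordinate offsets into absolute positions.

    Each (dx, dy) offset is applied to the current position, which is
    then updated, mirroring how a pen plotter interprets relative moves.
    """
    positions = [start]
    x, y = start
    for dx, dy in relative_moves:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions
```

Starting from current position (2, 3), the relative moves (4, -1) and (0, 5) visit the absolute positions (6, 2) and then (6, 7).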
The gluOrtho2D command is a function we can use to set up any two-dimensional Cartesian reference frame. Because gluOrtho2D specifies an orthogonal projection, we must also make sure that the coordinate values are placed in the OpenGL projection matrix.
In addition, we could assign the identity matrix as the projection matrix before defining the
world-coordinate range.
This would ensure that the coordinate values were not accumulated with any values we may
have previously set for the projection matrix.
Thus, for our initial two-dimensional examples, we can define the coordinate frame for the
screen display window with the following statements:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D (xmin, xmax, ymin, ymax);
The display window will then be referenced by coordinates (xmin, ymin) at the lower-left
corner and by coordinates (xmax, ymax) at the upper-right corner, as shown in Figure 2.
We can then designate one or more graphics primitives for display using the coordinate
reference specified in the gluOrtho2D statement.
If the coordinate extents of a primitive are within the coordinate range of the display window,
all of the primitive will be displayed. Otherwise, only those parts of the primitive within the
display-window coordinate limits will be shown.
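The visibility rule above can be sketched as a small classifier (a Python sketch with illustrative names; in OpenGL the actual clipping is performed by the pipeline). It compares a primitive's coordinate extents against the display-window limits, using the same (xmin, xmax, ymin, ymax) ordering as gluOrtho2D:

```python
def classify_extents(extents, window):
    """Classify a primitive's bounding box against the display window.

    Both arguments are (xmin, xmax, ymin, ymax).  Returns 'inside'
    (the whole primitive is displayed), 'outside' (nothing is shown),
    or 'partial' (only the overlapping part is shown).
    """
    exmin, exmax, eymin, eymax = extents
    wxmin, wxmax, wymin, wymax = window
    # No overlap at all: the primitive lies wholly off-window.
    if exmax < wxmin or exmin > wxmax or eymax < wymin or eymin > wymax:
        return 'outside'
    # Bounding box entirely contained in the window.
    if wxmin <= exmin and exmax <= wxmax and wymin <= eymin and eymax <= wymax:
        return 'inside'
    return 'partial'
```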
Geometric primitives include points, line segments, polygons, curves, and surfaces.
These primitives pass through a geometric pipeline, where they are subject to a series of
geometric operations that determine whether a primitive is visible.
The basic OpenGL geometric primitives are specified by sets of vertices. Thus, the programmer
defines the objects with sequences of the following:
For example, we can specify two point positions in a three-dimensional world reference frame, giving the coordinates as explicit floating-point values.
Case-1: With the OpenGL primitive constant GL_LINES, we obtain a set of unconnected straight-line segments between successive pairs of vertices.
Case-2: With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline.
Case-3: With the OpenGL primitive constant GL_LINE_LOOP, we obtain a closed polyline.
COLOR
Color is one of the most interesting aspects of both human perception and computer
graphics.
A visible color can be characterized by a function C(λ) defined over wavelengths from about 350 to 780 nm.
The human visual system has three types of cones responsible for color vision. Hence, our
brains do not receive the entire distribution C(λ) for a given color but rather three values—the
tristimulus values—that are the responses of the three types of cones to the color. This leads to
the basic tenet of three-color theory.
A consequence of this tenet is that, in principle, a display needs only three primary colors to
produce the three tristimulus values needed for a human observer.
The CRT is one example of the use of additive color, where the primary colors add together to give the perceived color. Other examples that use additive color include projectors and slide (positive) film. In such systems, the primaries are usually red, green, and blue. With additive color, primaries add light to an initially black display, yielding the desired color.
For processes such as commercial printing and painting, a subtractive color model is
more appropriate. Here we start with a white surface, such as a sheet of paper. Colored pigments
remove color components from light that is striking the surface. In subtractive systems, the
primaries are usually the complementary colors: cyan, magenta, and yellow (CMY; Figure above).
There are two different approaches to handle color in a graphics system from the
programmer’s perspective—that is, through the API. They are
1. RGB-Color Model
2. Indexed-Color Model
RGB-Color Model:
In modern systems RGB color has become the norm. In a three-primary-color, additive-
color RGB system, there are conceptually separate buffers for red, green, and blue images. Each
pixel has separate red, green, and blue components that correspond to locations in memory as
shown in the figure below. The depth of the frame buffer varies from system to system; some provide as
few as 4 bits per color. Because our API should be independent of the particulars of the hardware,
we would like to specify a color independently of the number of bits in the frame buffer. A natural
technique is to use the color cube and to specify color components as numbers between 0.0 and
1.0, where 1.0 denotes the maximum (or saturated value) of the corresponding primary and 0.0
denotes a zero value of that primary.
In OpenGL, for the RGB-color model, we set the current color with a glColor* function call, for example: glColor3f(r, g, b);
Indexed-Color Model:
Indexed color provided a solution that allowed systems with limited frame-buffer depth to display a wide range of colors. The same technique can also be implemented within an application.
We can select colors by interpreting our limited-depth pixels as indices into a table of colors
rather than as color values. Suppose that our frame buffer has k bits per pixel. Each pixel value or
index is an integer between 0 and 2^k − 1. Suppose that we can display each color component with a
precision of m bits; that is, we can choose from 2^m reds, 2^m greens, and 2^m blues. Hence, we can
produce any of 2^3m colors on the display, but the frame buffer can specify only 2^k of them. We
handle the specification through a user-defined color-lookup table of size 2^k × 3m, as shown
in the figure.
Once the user has constructed the table, she can specify a color by its index, which points to
the appropriate entry in the color-lookup table shown in the figure below.
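The counting argument can be checked with a tiny sketch (Python; the function name is ours, not part of any API):

```python
def color_capacity(k, m):
    """For a frame buffer with k bits per pixel and a color-lookup
    table holding m bits per color component:
    - the frame buffer can address 2^k table entries (colors shown
      simultaneously), while
    - the table entries may name any of 2^(3m) producible colors.
    """
    simultaneous = 2 ** k
    producible = 2 ** (3 * m)
    return simultaneous, producible
```

For example, k = 8 and m = 8 gives 256 simultaneously displayable colors drawn from a palette of 16,777,216.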
One difficulty arises if the window system and underlying hardware support only a limited number of colors: the window system may have only a single color table that must be shared among all the windows on the screen.
OpenGL supports this mode, called color-index mode. The current color index is selected with glIndexi(index), and, under GLUT, table entries can be set with glutSetColor(index, red, green, blue).
Point Attributes
Basically, we can set two attributes for points: color and size.
In a state system, the displayed color and size of a point are determined by the current values stored in the attribute list.
For a raster system, point size is an integer multiple of the pixel size, so a large point is displayed as a square block of pixels.
1. Color: components are set with RGB values or an index into a color table:
glColor3f(r, g, b); (floating-point components in the range 0.0 to 1.0)
glColor3ub(r, g, b); (unsigned-byte components in the range 0 to 255)
2. Point Size: we can set size of our rendered point by using
glPointSize(float size)- default is 1 pixel wide
Color attribute functions such as glColor* may be listed inside or outside of a glBegin/glEnd pair, but glPointSize must be set outside the pair. For example, we can plot three points in varying colors and sizes: a standard-size red point, a double-size green point, and a triple-size blue point.
Line Attributes
A straight-line segment can be displayed with three basic attributes:
1. Color
2. Width
3. Style
1. Line color: components are set with RGB values or an index into a color table:
glColor3f(r, g, b); (floating-point components in the range 0.0 to 1.0)
glColor3ub(r, g, b); (unsigned-byte components in the range 0 to 255)
2. Line Width:
Line width is set in OpenGL with the function:
glLineWidth (width);
We assign a floating-point value to parameter width, and this value is rounded to the
nearest nonnegative integer.
By default, a line is displayed with a standard width of 1.0.
3. Line Style:
By default, a straight-line segment is displayed as a solid line.
However, we can also display dashed lines, dotted lines, or a line with a combination of
dashes and dots, and we can vary the length of the dashes and the spacing between dashes
or dots.
We set a current display style for lines with the OpenGL function:
glLineStipple(int factor,short pattern);
• The pattern is a 16-bit sequence of 1s and 0s,
e.g. 1110111011101110
• The factor is a repeat count for each bit of the pattern (it stretches the pattern),
e.g. factor = 2 turns the above pattern into:
11111100111111001111110011111100
• The pattern can also be expressed in hexadecimal notation,
e.g. 0xEECC = 1110111011001100
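The effect of the factor can be sketched as follows (a Python sketch following the bit-string presentation above; note that actual OpenGL consumes the stipple pattern starting from its least-significant bit):

```python
def expand_stipple(pattern, factor):
    """Expand a 16-bit line-stipple pattern by an integer factor,
    repeating each bit `factor` times, as glLineStipple's factor
    parameter does."""
    bits = format(pattern & 0xFFFF, '016b')  # 16-bit binary string
    return ''.join(b * factor for b in bits)
```

With pattern 0xEEEE (1110111011101110) and factor 2, this yields the 32-bit expansion shown in the bullet list above.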
Line Stippling- Stippling means to add a pattern to a simple line or the filling of a polygon.
OpenGL allows stippling to be performed using bit patterns.
To Enter into and exit from the stipple mode we use:
Turn stippling on with: glEnable(GL_LINE_STIPPLE);
Turn off with: glDisable(GL_LINE_STIPPLE);
Example: to draw dashed lines in red, with line width 2:
glColor3f(1.0, 0.0, 0.0);
glLineWidth(2);
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0xF0F0);
glBegin(GL_LINE_LOOP);
glVertex2f(x1, y1);
glVertex2f(x2, y2);
glVertex2f(x3, y3);
glEnd();
glDisable(GL_LINE_STIPPLE);
Curve Attributes
Parameters for curve attributes are the same as those for straight-line segments.
We can display curves with varying colors, widths, dot-dash patterns, and available pen or
brush options.
OpenGL does not consider curves to be drawing primitives in the same way that it considers
points and lines to be primitives.
Curves can be drawn in several ways in OpenGL.
Perhaps the simplest approach is to approximate the shape of the curve using short line
segments.
Alternatively, curved segments can be drawn using splines.
Line-Drawing Algorithms
A straight-line segment in a scene is defined by the coordinate positions for the endpoints of
the segment.
To display the line, first project the endpoints to integer screen coordinates and determine the
nearest pixel positions along the line path between the two endpoints.
This process digitizes the line into a set of discrete integer positions that, in general, only
approximates the actual line path.
A computed line position of (10.48, 20.51), for example, is converted to pixel position (10, 21).
This rounding of coordinate values to integers causes all but horizontal and vertical lines to be
displayed with a stair-step appearance (known as “the jaggies”), as shown in Figure.
An algorithm is needed to determine which intermediate pixels lie on the line path; more effective techniques for smoothing a raster line adjust pixel intensities along the line path (antialiasing). A good line-drawing algorithm should satisfy the following criteria:
1. Lines should appear straight: The pixels selected to represent the line should lie as close as possible to the ideal line path.
2. Lines should terminate accurately: Unless lines are plotted accurately, they may terminate at
the wrong place.
3. Lines should have constant density: Line density is proportional to the no. of dots displayed
divided by the length of the line. To maintain constant density, dots should be equally spaced.
4. Line density should be independent of line length and angle: This can be done by
computing an approximating line-length estimate and to use a line-generation algorithm that
keeps line density constant to within the accuracy of this estimate.
5. Lines should be drawn rapidly: This computation should be performed by special-purpose
hardware.
Line Equations:
The Cartesian slope-intercept equation for a straight line is:
With m as the slope of the line and b as the y intercept. Given that the two endpoints of a line
segment are specified at positions (x0, y0) and (xend, yend), as shown in Figure, we can
determine values for the slope m and y intercept b with the following calculations:
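These calculations can be written out directly (a Python sketch; valid for non-vertical segments):

```python
def slope_intercept(x0, y0, xend, yend):
    """Slope m and y-intercept b of the line through (x0, y0) and
    (xend, yend):  m = (yend - y0) / (xend - x0),  b = y0 - m * x0.
    Requires xend != x0 (a non-vertical segment)."""
    m = (yend - y0) / (xend - x0)
    b = y0 - m * x0
    return m, b
```

For the endpoints (2, 3) and (6, 8) used in the illustration below, m = 5/4 = 1.25 and b = 3 - 1.25*2 = 0.5.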
Illustration:
If a line is drawn from (2, 3) to (6, 8) with use of DDA, How many points will needed to
generate such line?
Solution:
Step1: Given (x0,y0)=(2,3) and (x1,y1)=(6,8)
Step2: dx=x1-x0=6-2=4
dy=y1-y0=8-3=5
Step3: here, dy>dx, therefore, steps=5
Step4: xincrement=dx/steps=4/5=0.8
yincrement=dy/steps=5/5=1
Step5: start at x=x0=2, y=y0=3, giving the first point p0=(2,3). Repeating the increments for the remaining 5 steps gives 6 points in total: (2,3), (3,4), (4,5), (4,6), (5,7), (6,8).
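The DDA steps above can be collected into a runnable sketch (Python; halves are rounded upward to match the worked examples, which assumes nonnegative coordinates and distinct endpoints):

```python
def dda_line(x0, y0, x1, y1):
    """Digital Differential Analyzer: sample the line at unit steps
    along the major axis and round each sample to the nearest pixel
    (halves rounded up, as in the worked examples)."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))       # number of unit steps
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    pixels = [(int(x + 0.5), int(y + 0.5))]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        pixels.append((int(x + 0.5), int(y + 0.5)))
    return pixels
```

For the illustration above, dda_line(2, 3, 6, 8) produces the 6 points (2,3), (3,4), (4,5), (4,6), (5,7), (6,8).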
Advantages:
It is a simple algorithm.
It is easy to implement.
It avoids using the multiplication operation which is costly in terms of time complexity.
Disadvantages:
There is an extra overhead of using round off( ) function.
Using round off( ) function increases time complexity of the algorithm.
Resulted lines are not smooth because of round off( ) function.
The points generated by this algorithm are not accurate.
Explanation:
Assuming that we have determined that the pixel at (xk, yk) is to be displayed, we next need to
decide which pixel to plot in the next column, at x-coordinate xk + 1.
Our choices are the pixels at positions (xk + 1, yk) and (xk + 1, yk + 1).
At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path
as dlower and dupper (Figure 7).
The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as :
y=m(xk + 1)+b (1)
Then
dlower = y − yk
= m(xk + 1) + b − yk (2)
and
dupper = (yk + 1) − y
= yk + 1 − m(xk + 1) − b (3)
To determine which of the two pixels is closest to the line path, we can set up an efficient test
that is based on the difference between the two pixel separations as follows:
dlower − dupper = (m(xk + 1) + b − yk) − (yk + 1 − m(xk + 1) − b)
= 2m(xk + 1) − 2yk + 2b − 1
= 2(dy/dx)(xk + 1) − 2yk + 2b − 1     (4)
We rearrange Equation 4 (multiplying both sides by dx) so that it involves only integer
calculations, and define the decision parameter pk = dx(dlower − dupper). Then
dx(dlower − dupper) = 2·dy·(xk + 1) − 2·dx·yk + 2·dx·b − dx
= 2·dy·xk + 2·dy − 2·dx·yk + 2·dx·b − dx
= 2·dy·xk − 2·dx·yk + c,
where c = 2·dy + 2·dx·b − dx is independent of the pixel position and will be
eliminated in the recursive calculations.
Thus the decision parameter is pk = 2·dy·xk − 2·dx·yk + c.
If the pixel at yk is “closer” to the line path than the pixel at yk + 1(that is, dlower < dupper), then
decision parameter pk is negative. In that case, we plot the lower pixel; otherwise, we plot the upper
pixel.
Summary:
Algorithm
bresenham (x1, y1, x2, y2)
{
  x = x1;
  y = y1;
  dx = x2 - x1; dy = y2 - y1;
  p = 2*dy - dx;
  while (x <= x2)
  {
    putpixel(x, y); x++;
    if (p < 0)
      p = p + 2*dy;
    else
    {
      p = p + 2*dy - 2*dx;
      y++;
    }
  }
}
(This version assumes a slope between 0 and 1 and x1 < x2.)
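The pseudocode can also be expressed as a runnable sketch (Python; same assumptions of slope between 0 and 1 and x1 < x2):

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's line algorithm for slopes 0 < m < 1 with x1 < x2.
    The integer decision parameter p chooses, at each column, between
    staying on the current scan line and stepping up by one."""
    x, y = x1, y1
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx                  # initial decision parameter p0
    pixels = [(x, y)]
    while x < x2:
        x += 1
        if p < 0:
            p += 2 * dy              # keep the same y
        else:
            p += 2 * dy - 2 * dx     # step up to the next scan line
            y += 1
        pixels.append((x, y))
    return pixels
```

Running it on the endpoints of the illustration below, (20, 10) to (30, 18), reproduces the tabulated pixels.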
Illustration
Illustrate Bresenham’s algorithm for a line with endpoints (20, 10) and (30, 18).
Solution:
x=20, y=10
dx=30-20=10; dy=18-10=8
p=2*8-10=6
twody=2*8=16
twodyminusdx=2*(8-10)=-4
plot(20,10)
k | sign of pk | pk+1 | plotted pixel (xk+1, yk+1)
0 | p > 0 | 2 | (21, 11)
1 | p > 0 | −2 | (22, 12)
2 | p < 0 | 14 | (23, 12)
3 | p > 0 | 10 | (24, 13)
4 | p > 0 | 6 | (25, 14)
5 | p > 0 | 2 | (26, 15)
6 | p > 0 | −2 | (27, 16)
7 | p < 0 | 14 | (28, 16)
8 | p > 0 | 10 | (29, 17)
9 | p > 0 | 6 | (30, 18)
Questions on Unit-1
1) Define the following:
a) Pixel b) Resolution c) Bit Plane
d) Raster e) Depth of the frame Buffer f) Refresh Rate
g) Frame Buffer h) Rasterization i) Aspect Ratio
Important Definitions

DDA vs. Bresenham's Algorithm:
Basis of comparison | DDA | Bresenham's Algorithm
Method | Uses multiplication and division (floating-point arithmetic). | Uses only addition and subtraction (integer arithmetic).
Cost | Computationally expensive. | Cheaper and faster.

Random Scan vs. Raster Scan:
Random scan | Raster scan
The resolution of random scan is higher than that of raster scan. | The resolution of raster scan is lower than that of random scan.
A mathematical function is used to render an image or picture. | Pixels are used to render an image or picture.
Lower refresh rate, about 30 to 60 times per second. | Higher refresh rate, about 60 to 80 times per second.
Solved Example:
Q.1. Consider a line AB with A(0,0) and B(8,4) apply a simple DDA algorithm to
calculate the pixels on this line.
Solution:
1. A(0,0) B(8,4) x1=0 y1=0 x2=8 y2=4
2. dx = 8-0 = 8, dy = 4-0 = 4
3. Steps=8, dx>dy
4. xincr = 1
yincr = 0.5
k | x | y | plotted pixel
– | 0 | 0 | (0, 0)
0 | 1 | 0.5 | (1, 1)
1 | 2 | 1 | (2, 1)
2 | 3 | 1.5 | (3, 2)
3 | 4 | 2 | (4, 2)
4 | 5 | 2.5 | (5, 3)
5 | 6 | 3 | (6, 3)
6 | 7 | 3.5 | (7, 4)
7 | 8 | 4 | (8, 4)
Q.2. Use DDA Line drawing algorithm draw a line AB for the endpoints A (1,1) and
B(5,3)
Solution:
1. A(1,1) and B(5,3), x1=1 y1=1 x2= 5 y2=3
2. dx=5-1 = 4 , dy =3-1 =2
3. Steps= 4, dx>dy
4. xincr = 1
yincr = 0.5
k | x | y | plotted pixel
– | 1 | 1 | (1, 1)
0 | 2 | 1.5 | (2, 2)
1 | 3 | 2 | (3, 2)
2 | 4 | 2.5 | (4, 3)
3 | 5 | 3 | (5, 3)
Q.3. Consider a line AB with A (2,3) and B (6,8). Apply a simple DDA algorithm and
calculate the pixels on the line
Solution:
1. A(2,3) and B(6,8) x1=2, y1 =3 x2=6 y2 =8
2. dx = x2-x1 = 6-2 =4 dy = y2-y1 = 8-3 =5
3. Steps=5,dy>dx
4. xincr = 0.8
yincr = 1
k | x | y | plotted pixel
– | 2 | 3 | (2, 3)
0 | 2.8 | 4 | (3, 4)
1 | 3.6 | 5 | (4, 5)
2 | 4.4 | 6 | (4, 6)
3 | 5.2 | 7 | (5, 7)
4 | 6 | 8 | (6, 8)