CS8092 Notes Computer Graphics and Multimedia
1.1 Introduction
The term computer graphics includes almost everything on computers that is not text or sound. Today almost every computer can do some graphics, and people have even come to expect to control their computer through icons and pictures rather than just by typing. Here in our lab at the Program of Computer Graphics, we think of computer graphics as drawing pictures on computers, also called rendering. The pictures can be photographs, drawings, movies, or simulations: pictures of things which do not yet exist and maybe could never exist, or pictures from places we cannot see directly, such as medical images from inside your body. We spend much of our time improving the way computer pictures can simulate real-world scenes. We want images on computers to not just look more realistic, but also to be more realistic in their colors, the way objects and rooms are lighted, and the way different materials appear. We call this work "realistic image synthesis".
A high-level conceptual model for interactive graphics is shown in Figure 1.1. At the hardware level (not shown in the picture), a computer receives input from interaction devices and outputs images to a display device. The software has three components. The first is the application program; it creates, stores into, and retrieves from the second component, the application model, which represents the graphic primitives to be shown on the screen. The application program also handles user input. It produces views by sending graphics output commands to the third component, the graphics system, which actually produces the picture. Thus the graphics system is a layer between the application program and the display hardware that effects an output transformation from objects in the application model to a view of the model.
[Application Model <-> Application Program <-> Graphics System]

Figure 1.1: Conceptual model for interactive graphics
The objective of the application model is to capture all the data, objects, and relationships among them that are relevant to the display and interaction part of the application program and to any nongraphical postprocessing modules.
Non-interactive (passive) computer graphics, by contrast, operates automatically and without operator intervention. It involves one-way communication between the computer and the user: the picture is produced on the monitor, and the user does not have any control over the produced picture.
Graphics provides one of the most natural means of communicating with a computer, since we perceive and process pictorial data far more rapidly and efficiently than we do textual or symbolic representations.
Creating and reproducing pictures, however, presented technical problems that stood
in the way of their widespread use. Thus, the ancient Chinese proverb “a picture is
worth ten thousand words” became a cliché in our society only after the advent of
inexpensive and simple technology for producing pictures—first the printing press,
then photography.
Interactive computer graphics is the most important means of producing pictures since the invention of photography and television; it has the added advantage that, with the computer, we can make pictures not only of concrete, "real-world" objects but also of abstract, synthetic objects, such as mathematical surfaces in 4D, and of data that have no inherent geometry, such as survey results. Furthermore, we are not confined to static images. Although static pictures are a good means of communicating information, dynamically varying pictures are frequently even better for showing time-varying phenomena, both real (e.g., growth trends, such as nuclear energy use in the United States, or population movement from cities to suburbs and back to the cities) and abstract. Thus, a movie can show changes over time more graphically than can a sequence of slides.
Thus, a sequence of frames displayed on a screen at more than 15 frames per second can convey smooth motion or changing form better than can a jerky sequence with several seconds between individual frames. The use of dynamics is especially effective when the user can control the animation by adjusting the speed, the portion of the total scene in view, the amount of detail shown, the geometric relationship of the objects to one another, and so on. Much of interactive graphics technology therefore contains hardware and software for user-controlled motion dynamics and update dynamics.
With motion dynamics, objects can be moved and tumbled with respect to a stationary observer. The objects can also remain stationary while the viewer moves around them, pans to select the portion in view, and zooms in or out for more or less detail, as though looking through the viewfinder of a rapidly moving video camera. In many cases, both the objects and the camera are moving. A typical example is the flight simulator, which combines a mechanical platform supporting a mock cockpit with display screens for windows. Computers control platform motion, gauges, and the simulated world of both stationary and moving objects through which the pilot flies. Motion dynamics can also animate a collection of objects whose movements are related to one another. For example, a complex mechanical linkage, such as the linkage on a steam engine, can be animated by moving or rotating all the pieces appropriately.
Update dynamics is the actual change of the shape, color, or other properties of the
objects being viewed. For instance, a system can display the deformations of an
airplane structure in flight or the state changes in a block diagram of a nuclear reactor
in response to the operator’s manipulation of graphical representations of the many
control mechanisms. The smoother the change, the more realistic and meaningful the
result. Dynamic interactive graphics offers a large number of user-controllable modes
with which to encode and communicate information: the 2D or 3D shape of objects in
a picture, their gray scale or color, and the time variations of these properties. With
the recent development of digital signal processing (DSP) and audio synthesis chips,
audio feedback can now be provided to augment the graphical feedback and to make
the simulated environment even more realistic.
Because interacting with pictures is natural and efficient, graphics makes possible higher-quality and more precise results or products, greater productivity, and lower analysis and design costs.

The modern graphic display is very simple in construction. It consists of three components:

(1) Frame buffer: a digital memory that stores the image to be displayed as a pattern of binary values.
(2) Monitor: like a TV set without the tuning and receiving electronics.
(3) Display controller: it passes the contents of the frame buffer to the monitor.
[Frame buffer contents shown as rows of binary values, read by the display controller and converted into an image on the monitor]

Figure 1.2
Inside the frame buffer the image is stored as a pattern of binary digital numbers, which represent an array of picture elements, or pixels. In the simplest case, where you want to store only black and white images, you can represent black pixels by 1s and white pixels by 0s in the frame buffer. Therefore, an array of 16 x 16 black and white pixels could be represented by 32 bytes stored in the frame buffer.

The display controller reads each successive byte of data from the frame buffer and converts its 0s and 1s into the corresponding video signal. This signal is then fed to the monitor, producing a black and white image on the screen. The display controller repeats this operation 30 times a second to maintain a steady picture on the monitor. If you want to change the image, then you need to modify the frame buffer's contents to represent the new pattern of pixels.
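A minimal sketch of such a one-bit-per-pixel frame buffer in C follows. The names (frameBuffer, setPixel, clearScreen) and the MSB-first bit order are illustrative assumptions, not part of any standard API.

    #include <stdint.h>
    #include <string.h>

    #define WIDTH  16
    #define HEIGHT 16

    /* 16 x 16 pixels at 1 bit each = 256 bits = 32 bytes, as noted above. */
    static uint8_t frameBuffer[WIDTH * HEIGHT / 8];

    /* Turn one pixel black by setting its bit (MSB-first within each byte). */
    void setPixel(int x, int y)
    {
        int bit = y * WIDTH + x;                 /* linear bit index */
        frameBuffer[bit / 8] |= (uint8_t)(0x80 >> (bit % 8));
    }

    /* Clear the whole image to white (all 0s). */
    void clearScreen(void)
    {
        memset(frameBuffer, 0, sizeof frameBuffer);
    }

A display controller would then read these 32 bytes thirty times per second and convert each bit into the video signal for one pixel.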
Classification of Applications
The diverse uses of computer graphics listed in the previous section differ in a variety of ways, and a useful classification is by the type (dimensionality) of the object to be represented and the kind of picture to be produced. The range of possible combinations is indicated in Table 1.1.
Type of Object    Pictorial Representation
2D                Line drawing
                  Gray-scale image
                  Color image
3D                Line drawing (or wire-frame)
                  Line drawing, with various effects
                  Shaded, color image with various effects

Table 1.1
Cartography: computer graphics is used to produce both accurate and schematic representations of geographical and other natural phenomena, such as weather maps and population-density maps.

User interfaces: most applications on personal computers and workstations, and even those that run on terminals attached to time-shared computers and network compute servers, have user interfaces that rely on desktop window systems to manage multiple simultaneous activities, and on point-and-click facilities that let users select menu items, icons, and objects on the screen.
Such user-interface techniques have practical advantages: the authors of this book used such programs to create both the text and the figures; the publisher and their contractors then produced the book using similar typesetting and drawing software.
Plotting in business, science, and technology: graphics is used to plot 2D and 3D graphs of mathematical, physical, and economic functions; histograms; bar and pie charts; task-scheduling charts; inventory and production charts, and the like. All these are used to present meaningfully and concisely the trends and patterns gleaned from data, so as to clarify complex phenomena and to facilitate informed decision making.
Office automation and electronic publishing: The use of graphics for the creation and dissemination of information has increased enormously since the advent of desktop publishing on personal computers. Many organizations whose publications used to be printed by outside specialists can now produce printed materials in-house. Office automation and electronic publishing can produce both traditional printed (hardcopy) documents and electronic (softcopy) documents; systems that allow browsing of networks of interlinked multimedia documents are proliferating.
Computer-aided drafting and design: Color Plate 1.8 shows an example of such a 3D design program, intended for nonprofessionals: a "customize your own patio deck" program used in lumber yards. Once a design is complete, utility programs can postprocess the design database to make parts lists, to process 'bills of materials', to define numerical control tapes for cutting or drilling parts, and so on.
Simulation and animation: computer-produced animated movies and displays of the time-varying behavior of real and simulated objects are becoming increasingly popular for scientific and engineering visualization. We can use them to study abstract mathematical entities as well as mathematical models of such phenomena as fluid flow, relativity, nuclear and chemical reactions, physiological system and organ function, and deformation of mechanical structures under various kinds of loads.
Another advanced-technology area is interactive cartooning. The simpler kinds of systems for producing "flat" cartoons are becoming cost-effective in creating routine "in-between" frames that interpolate between two explicitly specified "key frames". Cartoon characters will increasingly be modeled in the computer as 3D shape descriptions whose movements are controlled by computer commands, rather than by the figures being drawn manually by cartoonists. Television commercials featuring flying logos and more exotic visual trickery have become common.
Art and commerce: Overlapping the previous categories is the use of computer graphics in art and advertising; here, computer graphics is used to produce pictures that express a message and attract attention. Personal computers and Teletext and Videotex terminals in public places, as well as in private homes, offer much simpler but still informative pictures that let users orient themselves, make choices, or even "teleshop" and conduct other business transactions. Finally, slide production for commercial, scientific, or educational presentations is another cost-effective use of graphics, given the steeply rising labor costs of the traditional means of creating such material.
Process control: Whereas flight simulators or arcade games let users interact with a simulation of a real or artificial world, many other applications enable people to interact with some aspect of the real world itself. Status displays in refineries, power plants, and computer networks show data values from sensors attached to critical system components, so that operators can respond to problematic conditions. For example, air traffic controllers at display consoles see computer-processed radar data that lets them monitor and control aircraft more safely and accurately than they could with the unprocessed radar data alone; spacecraft controllers monitor telemetry data and take corrective action as needed.
1.7 Summary
Computer graphics includes the process and outcomes associated with using
computer technology to convert created or collected data into visual
representations.
Graphical interfaces have replaced textual interfaces as the standard means for user-computer interaction. Graphics has also become a key technology for communicating ideas, data, and trends in most areas of commerce, science, engineering, and education. With graphics, we can create artificial realities, visualizations of objects and worlds that need not exist physically.

Until the late eighties, the bulk of computer-graphics applications dealt with 2D objects; today much of the work in a graphics application, especially in 3D, lies in modeling the objects whose images we want to produce. The graphics system acts as the intermediary between the application model and the output device. The application program is responsible for creating and updating the model based on user interaction; the graphics system does the best-understood, most routine part of the job when it creates views of objects and passes user events to the application.
1.8 Keywords
Computer Graphics, Interactive Graphics, Passive Graphics
1.9 Self Assessment Questions (SAQ)

4. Consider an application that could use either textual or graphical computer output. Explain when and why graphics output would be more appropriate in this application.
5. Explain briefly the classification of computer graphics.
1.10 References/Suggested Readings

1. Computer Graphics, Principles and Practice, Second Edition, by James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley
2. Computer Graphics, Second Edition, by Pradeep K. Bhatia, I.K. International Publisher
3. High Resolution Computer Graphics using Pascal/C, by Ian O. Angell and Gareth Griffith, John Wiley & Sons
8. Principles of Interactive Computer Graphics, Newman, TMH
2.0 Objectives

At the end of this chapter the reader will be able to:

• Describe and distinguish raster and random scan displays
• Describe various display devices
• Describe how colour CRT works
Structure

2.1 Introduction
2.2 Refresh CRT
2.3 Random-Scan and Raster Scan Monitor
2.4 Color CRT Monitors
2.5 Direct-View Storage Tubes (DVST)
2.6 Flat-Panel Displays
2.7 Light-emitting Diode (LED) and Liquid-crystal Displays (LCDs)
2.8 Hard Copy Devices
2.9 Summary
2.10 Key Words
2.11 Self Assessment Questions (SAQ)
2.12 References/Suggested Readings
2.1 Introduction

The principle of producing images as collections of discrete points set to appropriate colours is now widespread throughout all fields of image production. The most common graphics output device is the video monitor, which is based on the standard cathode ray tube (CRT) design, but several other technologies exist, and solid-state monitors may eventually predominate.
2.2 Refresh CRT

Figure 2.1 illustrates the basic operation of a CRT. A beam of electrons (cathode rays), emitted by an electron gun, passes through focusing and deflection systems that direct the beam toward specified positions on the phosphor-coated screen.
The phosphor then emits a small spot of light at each position contacted by the electron beam. Because the light emitted by the phosphor fades very rapidly, some method is needed for maintaining the screen picture. One way to keep the phosphor glowing is to redraw the picture repeatedly by quickly directing the electron beam back over the same points. This type of display is called a refresh CRT.
Working
The beam passes between two pairs of metal plates, one vertical and the other horizontal. A voltage difference is applied to each pair of plates according to the amount that the beam is to be deflected in each direction. As the electron beam passes between each pair of plates, it is bent towards the plate with the higher positive voltage. In figure 2.2 the beam is first deflected towards one side of the screen. Then, as the beam passes through the horizontal plates, it is deflected towards the top or bottom of the screen. To get the proper deflection, the current through coils placed around the outside of the CRT neck is adjusted. The primary components of an electron gun in a CRT are the heated metal cathode and a control grid (Fig. 2.2). Heat is supplied to the cathode by directing a current through a coil of wire, called the filament, inside the cylindrical cathode structure. This causes electrons to be "boiled off" the hot cathode surface. In the vacuum inside the CRT envelope, the free, negatively charged electrons are then accelerated toward the phosphor coating by a high positive voltage. The accelerating voltage can be generated with a positively charged metal coating on the inside of the CRT envelope near the phosphor screen, or an accelerating anode can be used, as in Fig. 2.2. Sometimes the electron gun is built to contain the accelerating anode and focusing system within the same unit.
The focusing system in a CRT is needed to force the electron beam to converge into a small spot as it strikes the phosphor. Otherwise, the electrons would repel each other, and the beam would spread out as it approaches the screen. Focusing is accomplished with either electric or magnetic fields. Electrostatic focusing is commonly used in television and computer graphics monitors. With electrostatic focusing, the electron beam passes through a positively charged metal cylinder that forms an electrostatic lens, as shown in Fig. 2.3. The action of the electrostatic lens focuses the electron beam at the center of the screen, in exactly the same way that an optical lens focuses a beam of light at a particular focal distance. Similar lens focusing effects can be accomplished with a magnetic field set up by a coil mounted around the outside of the CRT envelope. Magnetic lens focusing produces the smallest spot size on the screen and is used in special-purpose devices.
As with focusing, deflection of the electron beam can be controlled either with electric fields or with magnetic fields. Cathode-ray tubes are now commonly constructed with magnetic deflection coils mounted on the outside of the CRT envelope, as illustrated in Fig. 2.1. Two pairs of coils are used, with the coils in each pair mounted on opposite sides of the neck of the CRT envelope. One pair is mounted on the top and bottom of the neck, and the other pair is mounted on opposite sides of the neck. The magnetic field produced by each pair of coils results in a transverse deflection force that is perpendicular both to the direction of the magnetic field and to the direction of travel of the electron beam. Horizontal deflection is accomplished with one pair of coils, and vertical deflection by the other pair. The proper deflection amounts are attained by adjusting the current through the coils. When electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope. One pair of plates is mounted horizontally to control the vertical deflection, and the other pair is mounted vertically to control horizontal deflection (Fig. 2.3).
Spots of light are produced on the screen by the transfer of the CRT beam energy to the phosphor. When the electrons in the beam collide with the phosphor coating, they are stopped and their kinetic energy is absorbed by the phosphor. Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons in the phosphor atoms to move up to higher quantum-energy levels. After a short time, the "excited" phosphor electrons begin dropping back to their stable ground state, giving up their extra energy as small quanta of light energy. What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level. The frequency (or color) of the light emitted by the phosphor is proportional to the energy difference between the excited quantum state and the ground state.
Figure 2.4 shows the intensity distribution of a spot on the screen. The intensity is greatest at the center of the spot, and decreases with a Gaussian distribution out to the edges of the spot. This distribution corresponds to the cross-sectional electron density distribution of the CRT beam.

Figure 2.4: Intensity distribution of an illuminated phosphor spot on a CRT screen

Resolution
The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution. A more precise definition of resolution is the number of points per centimeter that can be plotted horizontally and vertically, although it is often simply stated as the total number of points in each direction. Resolution depends on the type of phosphor used and on the focusing and deflection system.
Aspect Ratio

Another property of video monitors is aspect ratio. This number gives the ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions on the screen. (Sometimes aspect ratio is stated in terms of the ratio of horizontal to vertical points.) An aspect ratio of 3/4 means that a vertical line plotted with three points has the same length as a horizontal line plotted with four points.
2.3 Random-Scan and Raster Scan Monitor

A random scan system uses an electron beam which operates like a pencil to create a line image on the CRT. The image is constructed out of a sequence of straight line segments. Each line segment is drawn on the screen by directing the beam to move from one point on the screen to the next, where each point is defined by its x and y coordinates. After drawing the picture, the system cycles back to the first line and redraws all the lines of the picture 30 to 60 times each second. When operated as a random-scan display unit, a CRT has the electron beam directed only to the parts of the screen where a picture is to be drawn. Random-scan monitors draw a picture one line at a time and for this reason are also referred to as vector displays (or stroke-writing or calligraphic displays), Fig. 2.5. A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device.
Figure 2.5: A random-scan system draws the component lines of an object in any order specified
Random-scan systems are designed for line-drawing applications and cannot display realistic shaded scenes. Since picture definition is stored as a set of line-drawing instructions and not as a set of intensity values for all screen points, vector displays generally have higher resolution than raster systems. Also, vector displays produce smooth line drawings because the CRT beam directly follows the line path.
A raster-scan display can control the intensity of each screen point, and hence subtle shading as well as brightness. In a raster-scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time (Fig. 2.6). Each screen point is referred to as a pixel or pel (shortened forms of picture element). The capability of a raster-scan system to store intensity information for each screen point makes it well suited for the realistic display of scenes containing subtle shading and color patterns. Home television sets and printers are examples of other systems using raster-scan methods.
The intensity range for pixel positions depends on the capability of the raster system. In a simple black-and-white system, each screen point is either on or off, so only one bit per pixel is needed to control the intensity of screen positions. For a bilevel system, a bit value of 1 indicates that the electron beam is to be turned on at that position, and a value of 0 indicates that the beam intensity is to be off. Additional bits are needed when color and intensity variations can be displayed. On some raster-scan systems (and in TV sets), each frame is displayed in two passes using an interlaced refresh procedure. In the first pass, the beam sweeps across every other scan line from top to bottom. Then, after the vertical retrace, the beam sweeps out the remaining scan lines (Fig. 2.7). Interlacing the scan lines in this way allows us to see the entire screen displayed in one-half the time it would have taken to sweep across all the lines at once from top to bottom. Interlacing is primarily used with slower refresh rates. On an older, 30 frame-per-second, noninterlaced display, for instance, some flicker is noticeable. But with interlacing, each of the two passes can be accomplished in 1/60th of a second, which brings the refresh rate nearer to 60 frames per second. This is an effective technique for avoiding flicker, provided that adjacent scan lines contain similar display information.
Figure 2.7: Interlacing scan lines on a raster-scan display. First, all points on the even-numbered (solid) scan lines are displayed; then all points along the odd-numbered (dashed) lines are displayed.
2.4 Color CRT Monitors

The beam-penetration method for displaying color pictures has been used with random-scan monitors. Two layers of phosphor, usually red and green, are coated onto the inside of the CRT screen, and the displayed color depends on how far the electron beam penetrates into the phosphor layers. A beam of slow electrons excites only the outer red layer. A beam of very fast electrons penetrates through the red layer and excites the inner green layer. At intermediate beam speeds, combinations of red and green light are emitted to show two additional colors, orange and yellow. The speed of the electrons, and hence the screen color at any point, is controlled by the beam-acceleration voltage. Beam penetration has been an inexpensive way to produce color in random-scan monitors, but only four colors are possible, and the quality of pictures is not as good as with other methods.

Shadow-mask methods are commonly used in raster-scan systems because they produce a much wider range of colors than the beam-penetration method. A shadow-mask CRT has three phosphor color dots at each pixel position.
One phosphor dot emits a red light, another emits a green light, and the third emits a blue light. This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen. Figure 2.8 illustrates the delta-delta shadow-mask method, commonly used in color CRT systems. The three electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot patterns. When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. The phosphor dots in the triangles are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask. Another configuration for the three electron guns is an in-line arrangement in which the three electron guns, and the corresponding red-green-blue color dots on the screen, are aligned along one scan line instead of in a triangular pattern. This in-line arrangement of electron guns is easier to keep in alignment and is commonly used in high-resolution color CRTs.
Figure 2.8: Operation of a delta-delta, shadow-mask CRT. Three electron guns, aligned with the triangular color-dot patterns on the screen, are directed to each dot triangle by a shadow mask.

The light emitted by the three phosphor dots blends into one composite color. The color we see depends on the amount of excitation of the red, green, and blue phosphors. A white (or gray) area is the result of activating all three dots with equal intensity. Yellow is produced with the green and red dots only, magenta is produced with the blue and red dots, and cyan shows up when blue and green are activated equally. In some low-cost systems, the electron beam can only be set to on or off, limiting displays to eight colors. More sophisticated systems can set intermediate intensity levels for the electron beams, allowing several million different colors to be generated.
2.5 Direct-View Storage Tubes (DVST)

An alternative method for maintaining a screen image is to store the picture information inside the CRT instead of refreshing the screen. A direct-view storage tube (DVST) stores the picture information as a charge distribution just behind the phosphor-coated screen. Two electron guns are used in a DVST. One, the primary gun, is used to store the picture pattern; the second, the flood gun, maintains the picture display. A DVST monitor has both disadvantages and advantages compared to the refresh CRT. Because no refreshing is needed, very complex pictures can be displayed at very high resolutions without flicker. Disadvantages of DVST systems are that they ordinarily do not display color and that selected parts of a picture cannot be erased. To eliminate a picture section, the entire screen must be erased and the modified picture redrawn. The erasing and redrawing process can take several seconds for a complex picture. For these reasons, storage displays have been largely replaced by raster systems.
2.6 Flat-Panel Displays
The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT. A significant feature of flat-panel displays is that they are thinner than CRTs, and we can hang them on walls or wear them on our wrists. Since we can even write on some flat-panel displays, they will soon be available as pocket notepads. Current uses for flat-panel displays include small TV monitors, calculators, pocket video games, laptop computers, armrest viewing of movies on airlines, advertisement boards in elevators, and graphics displays in applications requiring rugged, portable monitors.
We can separate flat-panel displays into two categories: emissive displays and nonemissive displays. The emissive displays (or emitters) are devices that convert electrical energy into light. Plasma panels, thin-film electroluminescent displays, and light-emitting diodes are examples of emissive displays. Flat CRTs have also been devised, in which electron beams are accelerated parallel to the screen, then deflected 90° to the screen. But flat CRTs have not proved to be as successful as other emissive devices. Nonemissive displays (or nonemitters) use optical effects to convert sunlight or light from some other source into graphics patterns. The most important example of a nonemissive flat-panel display is a liquid-crystal device.
2.7 Light-emitting Diode (LED) and Liquid-crystal Displays (LCDs)

In an LED display, a matrix of diodes is arranged to form the pixel positions in the display, and picture definition is stored in a refresh buffer. Information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light patterns in the display.
Liquid-crystal displays (LCDs) are commonly used in small systems, such as calculators and portable laptop computers. These nonemissive devices produce a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid. Flat-panel displays commonly use nematic (threadlike) liquid-crystal compounds that tend to keep the long axes of the rod-shaped molecules aligned. A flat-panel display can then be constructed with a nematic liquid crystal, as demonstrated in Fig. 2.9. Two glass plates, each containing a light polarizer at right angles to the other plate, sandwich the liquid-crystal material. Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate. The intersection of two conductors defines a pixel position. Normally, the molecules are aligned as shown in the "on state" of Fig. 2.9. Polarized light passing through the material is twisted so that it will pass through the opposite polarizer. The light is then reflected back to the viewer. To turn off the pixel, we apply a voltage to the two intersecting conductors to align the molecules so that the light is not twisted. This type of flat-panel device is referred to as a passive-matrix LCD. Picture definitions are stored in a refresh buffer, and the screen is refreshed at the rate of 60 frames per second, as in the emissive devices.
Back lighting is also commonly applied using solid-state electronic devices, so that the system is not completely dependent on outside light sources. Colors can be displayed by using different materials or dyes and by placing a triad of color pixels at each screen location. Another method for constructing LCDs is to place a transistor at each pixel location, using thin-film transistor technology. The transistors are used to control the voltage at pixel locations and to prevent charge from gradually leaking out of the liquid-crystal cells. These devices are called active-matrix displays.
Figure 2.9: The light-twisting, shutter effect used in the design of most liquid-crystal display devices

2.8 Hard Copy Devices

Two important factors that determine the quality of pictures produced by a printer are the individual dot size on the paper and the number of dots per inch.
We can obtain hard-copy output for our images in several formats. For presentations or archiving, we can send image files to devices or service bureaus that will produce 35-mm slides or overhead transparencies. To put images on film, we can simply photograph a scene displayed on a video monitor. And we can put our pictures on paper by directing graphics output to a printer or plotter.

Printers produce output by either impact or nonimpact methods. Impact printers press formed character faces against an inked ribbon onto the paper. A line printer is an example of an impact device, with the typefaces mounted on bands, chains, drums, or wheels. Nonimpact printers and plotters use laser techniques, ink-jet sprays, xerographic processes (as used in photocopying machines), electrostatic methods, and electrothermal methods to get images onto paper.
In a laser device, a laser beam creates a charge distribution on a rotating drum coated with a photoelectric material, such as selenium. Toner is applied to the drum and then transferred to paper. Figure 2.10 shows examples of desktop laser printers with a resolution of 360 dots per inch.
Figure 2.10: Small-footprint laser printers

Ink-jet methods produce output by squirting ink in horizontal rows across a roll of paper wrapped on a drum. The electrically charged ink stream is deflected by an electric field to produce dot-matrix patterns.
2.9 Summary

• Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of its original intensity.
• This chapter has surveyed the major hardware features of graphics display systems.
• The dominant graphics display device is the raster refresh monitor, based on television technology. A raster system uses a frame buffer to store intensity information for each screen position (pixel). Pictures are then painted on the screen by retrieving this information from the frame buffer as the electron beam in the CRT sweeps across each scan line, from top to bottom. Older vector displays construct pictures by drawing lines between specified line endpoints. Picture information is then stored as a set of line-drawing instructions.
• Various other video display devices are available. In particular, flat-panel display technology is developing at a rapid rate, and these devices may largely replace raster displays in the near future. At present, flat-panel displays are commonly used in small systems and in special-purpose systems. Flat-panel displays include plasma panels and liquid-crystal devices. Although vector monitors can be used to display high-quality line drawings, improvements in raster display technology have caused vector monitors to be largely replaced with raster systems.
• Hard-copy devices for graphics workstations include standard printers and plotters, in addition to devices for producing slides, transparencies, and film output. Printing methods include dot matrix, laser, ink jet, electrostatic, and electrothermal. Plotter methods include pen plotting and combination printer-plotter devices.

2.10 Key Words

Random scan display, raster scan display, CRT, persistence, aspect ratio
2.12 References/Suggested Readings

1. Computer Graphics, Principles and Practice, Second Edition, by James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley
2. Computer Graphics, Second Edition, by Pradeep K. Bhatia, I.K. International Publisher
3. High Resolution Computer Graphics using Pascal/C, by Ian O. Angell and Gareth Griffith, John Wiley & Sons
4. Computer Graphics (C Version), Donald Hearn and M. Pauline Baker, Prentice Hall
5. Advanced Animation and Rendering Techniques, Theory and Practice, Alan Watt and Mark Watt, Addison-Wesley
3.0 Objectives

At the end of this chapter the reader will be able to:

• Describe scan conversion
• Describe how to scan convert basic graphic primitives like point, line, circle, ellipse
Structure

3.1 Introduction
3.2 Scan-converting a Point
3.3 Scan-converting a Straight Line
3.4 Scan-converting a Circle
3.5 Scan-converting an Ellipse
3.6 Summary
3.7 Key Words
3.8 Self Assessment Questions (SAQ)
3.1 Introduction

We have studied various display devices in the previous chapter. It is clear that these devices need special procedures for displaying any graphic object: lines, circles, curves, and even characters. Irrespective of the procedures used, the system can generate the images on these raster devices by turning the pixels on or off. The process in which an object is represented as a collection of discrete pixels is called scan conversion.

The video output circuitry of a computer is capable of converting binary values stored in its display memory into pixel-on, pixel-off information that can be used by a raster output device to display a point. This ability allows graphics computers to display models composed of discrete dots.

Although almost any model can be reproduced with a sufficiently dense matrix of dots (pointillism), most human operators generally think in terms of more complex graphics objects such as points, lines, circles, and ellipses. Since the inception of computer graphics, many algorithms have been developed to provide human users with fast, memory-efficient routines that generate higher-level objects of this kind. However, regardless of what routines are developed, the computer can produce images on raster devices only by turning the appropriate pixels on or off. Many scan-conversion algorithms of this kind are presented in this chapter.

3.2 Scan-converting a Point
We have already noted that a pixel covers a small region of the screen, a collection of points; thus it does not represent a single mathematical point. Suppose we wish to display a point C(5.4, 6.5). This means we wish to illuminate the pixel which contains the point C. Refer to figure 3.1, which shows the pixel corresponding to point C. What happens if we try to display C'(5.3, 6.4)? Well, it also corresponds to the same pixel as C(5.4, 6.5). Thus we can say that a point C(x, y) is represented by the integer part of x and the integer part of y, and it is plotted by turning on the pixel at those integer coordinates.
Figure 3.1: Scan-converting a point
We normally use a right-handed Cartesian coordinate system, whose origin is at the bottom. However, in the case of a computer system, due to the memory organization, the system turns out to be a left-handed Cartesian system. Thus there is a difference between the actual representation and the way in which we work with the points.

The basic steps involved in converting a point from Cartesian coordinates to a form the display system understands are:
Step 1: Identify the starting address corresponding to the line on which the point is to be displayed.
Step 2: Find the byte address in which the point is to be displayed.
Step 3: Compute the value for the byte that represents the point.
Step 4: Logically OR the calculated value with the present value of the byte.
Step 5: Store the value found in step 4 in the byte found in steps 1 and 2.
Step 6: Stop.
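A minimal sketch of these steps in C follows. The memory layout (one bit per pixel, 80 bytes per scan line) and the names displayMemory and plotPoint are illustrative assumptions, not a real device's memory map.

    #include <stdint.h>

    #define BYTES_PER_LINE 80                /* e.g., 640 pixels / 8 bits */
    #define NUM_LINES      480

    static uint8_t displayMemory[BYTES_PER_LINE * NUM_LINES];

    /* Scan-convert the point (x, y) following steps 1-6 above. */
    void plotPoint(int x, int y)
    {
        int lineStart = y * BYTES_PER_LINE;         /* step 1: line address */
        int byteAddr  = lineStart + x / 8;          /* step 2: byte address */
        uint8_t value = (uint8_t)(0x80 >> (x % 8)); /* step 3: byte value   */
        displayMemory[byteAddr] |= value;           /* steps 4-5: OR, store */
    }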
3.3 Scan-converting a Straight Line

Scan conversion of a line locates the coordinates of the pixels which lie on or near an ideal straight line superimposed on a 2D raster grid. Before discussing the various methods, let us see what the characteristics of a line are. One expects a line to appear straight, to start and end accurately, to have constant brightness along its length, and to be drawn rapidly.

Even though the rasterization tries to generate a completely straight line, in a few cases we may not get equal brightness. Basically, lines which are horizontal, vertical, or oriented at 45° have equal brightness. But for lines with larger lengths and different orientations, we need complex computations in our algorithms. This may reduce the speed of line generation. Thus we make some sort of compromise while generating the lines, such as:

1. Calculate only the approximate line length.
A straight line may be defined by two endpoints and an equation (figure 3.2). In figure 3.2 the two endpoints are described by (x1, y1) and (x2, y2). The equation of the line is used to describe the x, y coordinates of all the points that lie between these two endpoints. Using the equation of a straight line, y = mx + b, where m = Δy/Δx and b = the y intercept, we can find values of y by incrementing x from x = x1 to x = x2. By scan-converting these calculated (x, y) values, we represent the line as a sequence of pixels.
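As a baseline, here is this direct method as a small C routine. It is a sketch only: it assumes x1 < x2, a slope |m| <= 1, and a pixel-plotting routine setPixel such as the one sketched in section 3.2.

    #include <math.h>

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    /* Scan-convert a line by direct evaluation of y = m*x + b.
       Assumes x1 < x2 and |m| <= 1 so that no gaps appear. */
    void lineByEquation(int x1, int y1, int x2, int y2)
    {
        double m = (double)(y2 - y1) / (double)(x2 - x1);
        double b = (double)y1 - m * (double)x1;
        for (int x = x1; x <= x2; x++) {
            double y = m * (double)x + b;
            setPixel(x, (int)floor(y + 0.5));   /* round to nearest pixel */
        }
    }

The floating-point multiplication and rounding in every iteration are exactly the costs that the DDA and Bresenham algorithms below are designed to avoid.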
While this method of scan-converting a straight line is adequate for many graphics applications, interactive graphics systems require a much faster response than the method described above can provide. Interactive graphics is a graphics system in which the user dynamically controls the presentation of graphics models on a computer display.

Figure 3.2
If |m| ≤ 1, then for every integer value of x between and excluding x'1 and x'2, calculate the corresponding value of y using the equation and scan-convert (x, y). If |m| > 1, then for every integer value of y between and excluding y'1 and y'2, calculate the corresponding value of x using the equation and scan-convert (x, y).

This direct method requires arithmetic operations (multiplication and addition) in every step that uses the line equation, since m and b are generally real numbers. The challenge is to find a way to achieve the same goal as quickly as possible.
A raster device can intensify pixels only at integer coordinates; the pixels which are intensified are those which lie very close to the line path, if not exactly on it, which happens only for perfectly horizontal, vertical, or 45° lines. Standard algorithms are available to determine which pixels provide the best approximation to the desired line; one such algorithm is the DDA (Digital Differential Analyser) algorithm. Before going into the details of the algorithm, let us discuss some general appearances of the line segment, because the respective appearance decides which pixels are to be intensified. It is also obvious that only those pixels that lie very close to the line path are to be intensified, because they are the ones which best approximate the line. Apart from the exact cases (slope zero, infinite, or one), we may also face a situation where the slope of the line is > 1 or < 1.
In Figure 3.3, there are two lines: line 1 (slope < 1) and line 2 (slope > 1). Now let us discuss the general mechanism of construction of these two lines with the DDA algorithm. As the slope of the line is a crucial factor in its construction, let us consider the algorithm in two cases depending on whether the slope of the line is > 1 or < 1.

Case 1: slope (m) of line is < 1 (i.e., line 1): In this case, to plot the line we have to move in the x direction by 1 unit every time, then hunt for the pixel value in the y direction which best suits the line, and lighten that pixel in order to plot the line.

So, in Case 1, i.e., 0 < m < 1, x is increased by 1 unit every time and the proper y is approximated.
Case 2: slope (m) of line is > 1 (i.e., line 2): If m > 1, i.e., the case of line 2, then the most appropriate strategy would be to move in the y direction by 1 unit every time, determine the pixel in the x direction which best suits the line, and get that pixel lightened to plot the line.

So, in Case 2, i.e., ∞ > m > 1, y is increased by 1 unit every time and the proper x is approximated. A DDA routine covering both cases is sketched below.
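The following is a minimal DDA sketch in C, assuming the setPixel routine from section 3.2. It handles both cases by stepping along whichever axis changes fastest.

    #include <stdlib.h>

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    /* DDA line scan conversion: step along the axis of greatest change
       (case 1: x, case 2: y) and interpolate the other coordinate. */
    void lineDDA(int x1, int y1, int x2, int y2)
    {
        int dx = x2 - x1, dy = y2 - y1;
        int steps = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);
        if (steps == 0) { setPixel(x1, y1); return; }   /* degenerate line */
        float xInc = (float)dx / (float)steps;
        float yInc = (float)dy / (float)steps;
        float x = (float)x1, y = (float)y1;
        for (int i = 0; i <= steps; i++) {
            setPixel((int)(x + 0.5f), (int)(y + 0.5f)); /* nearest pixel */
            x += xInc;
            y += yInc;
        }
    }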
Bresenham's line algorithm avoids even the floating-point arithmetic of the DDA. The algorithm begins at the starting coordinate of the line, and each iteration of the algorithm increments the pixel one unit along the major, or x-, axis. The pixel is incremented along the minor, or y-, axis only when a decision variable (based on the slope of the line) changes sign. A key feature of the algorithm is that it requires only integer data and simple arithmetic. This makes the algorithm very efficient and fast.
Figure 3.4

The algorithm assumes the line has positive slope less than one, but a simple change of variables can modify the algorithm for any slope value.
Figure 3.4 shows a line segment superimposed on a raster grid with horizontal axis X and vertical axis Y. Note that xi and yi are the integer abscissa and ordinate respectively of each pixel location on the grid. Given (xi, yi) as the previously plotted pixel location for the line segment, the next pixel to be plotted is either (xi + 1, yi) or (xi + 1, yi + 1). Bresenham's algorithm determines which of these two pixel locations is nearer to the actual line by calculating the distance from each pixel to the line, and plotting the pixel with the smaller distance. Using the familiar equation of a straight line, y = mx + b, the y value corresponding to xi + 1 is y = m(xi + 1) + b. The two distances are then calculated as:
d1 = y - yi = m(xi + 1) + b - yi
d2 = (yi + 1) - y = (yi + 1) - m(xi + 1) - b

and, subtracting,

d1 - d2 = 2m(xi + 1) - 2yi + 2b - 1

Multiplying this result by the constant dx, defined by the slope of the line m = dy/dx, the equation becomes:

dx(d1 - d2) = 2dy(xi) - 2dx(yi) + c
where c is the constant 2dy + 2dxb - dx. Of course, if d2 > d1, then (d1 - d2) < 0, or conversely, if d1 > d2, then (d1 - d2) > 0. Therefore, a parameter pi can be defined such that

pi = dx(d1 - d2)
Figure 3.5

pi = 2dy(xi) - 2dx(yi) + c
If pi > 0, then d1 > d2 and yi + 1 is chosen, so that the next plotted pixel is (xi + 1, yi + 1). Otherwise, if pi < 0, then d2 > d1 and (xi + 1, yi) is plotted. (See Figure 3.5.) Similarly, for the next iteration, pi+1 can be calculated and compared with zero to determine the next pixel to plot. If pi+1 < 0, then the next plotted pixel is at (xi+1 + 1, yi+1); if pi+1 ≥ 0, then the next point is (xi+1 + 1, yi+1 + 1). Note that in the equation for pi+1, xi+1 = xi + 1.

pi+1 = 2dy(xi+1) - 2dx(yi+1) + c

Subtracting pi from pi+1, we get the recursive equations: if pi < 0, then yi+1 = yi, and

pi+1 = pi + 2dy

or, if pi > 0, then yi+1 = yi + 1, and

pi+1 = pi + 2(dy - dx)
To further simplify the iterative algorithm, constants c1 and c2 can be initialized at the beginning of the program such that c1 = 2dy and c2 = 2(dy - dx). Thus, the actual meat of the algorithm is a loop of length dx, containing only a few integer additions and two compares (Figure 3.5).
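Putting the pieces together, here is a minimal C sketch of Bresenham's algorithm for the case derived above (0 ≤ slope ≤ 1, x1 < x2), again assuming a setPixel routine. The initial decision value 2dy - dx follows from evaluating pi at the first step.

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    /* Bresenham line for 0 <= slope <= 1 and x1 < x2: integer-only loop. */
    void lineBresenham(int x1, int y1, int x2, int y2)
    {
        int dx = x2 - x1, dy = y2 - y1;
        int c1 = 2 * dy;               /* increment when pi < 0  */
        int c2 = 2 * (dy - dx);        /* increment when pi >= 0 */
        int p  = 2 * dy - dx;          /* initial decision value */
        int x = x1, y = y1;
        setPixel(x, y);
        while (x < x2) {
            x++;
            if (p < 0)
                p += c1;               /* keep y                 */
            else {
                y++;                   /* step up to yi + 1      */
                p += c2;
            }
            setPixel(x, y);
        }
    }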
3.4 Scan-converting a Circle

The circle is one of the basic graphic components, so in order to understand its generation, let us go through its properties first. A circle is a symmetrical figure. Any circle-generating algorithm can take advantage of the circle's symmetry to plot eight points for each value that the algorithm calculates. Eight-way symmetry is used by reflecting each calculated point around each 45° axis. For example, if point 1 in Fig. 3.6 were calculated with a circle algorithm, seven more points could be found by reflection: reversing the x, y coordinates as in point 2, reflecting about the y axis as in points 3 and 4, switching the signs of both x and y as in points 5 and 6, and reflecting about the x axis as in points 7 and 8.
To summarize:

P1 = (x, y)      P5 = (-x, -y)
P2 = (y, x)      P6 = (-y, -x)
P3 = (-y, x)     P7 = (y, -x)
P4 = (-x, y)     P8 = (x, -y)
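In C, this symmetry is typically captured in a helper that plots all eight points at once. The following sketch generalizes (as an assumption) to a circle centered at (h, k), with setPixel as before.

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    /* Plot the eight symmetric points P1..P8 for one computed (x, y). */
    void plotCirclePoints(int h, int k, int x, int y)
    {
        setPixel(h + x, k + y);   /* P1 */
        setPixel(h + y, k + x);   /* P2 */
        setPixel(h - y, k + x);   /* P3 */
        setPixel(h - x, k + y);   /* P4 */
        setPixel(h - x, k - y);   /* P5 */
        setPixel(h - y, k - x);   /* P6 */
        setPixel(h + y, k - x);   /* P7 */
        setPixel(h + x, k - y);   /* P8 */
    }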
There are two standard methods of mathematically defining a circle centered at the origin. The first method defines a circle with the second-order polynomial equation (see Fig. 3.7)

y² = r² - x²

where x = the x coordinate, y = the y coordinate, and r = the circle radius. With this method, each x coordinate in the sector from 90° to 45° is found by stepping x from 0 to r/√2, and each y coordinate is found by evaluating √(r² - x²) for each step of x. This is a very inefficient method, however, because for each point both x and r must be squared and subtracted from each other; then the square root of the result must be found.
The second method of defining a circle makes use of trigonometric functions (see Fig. 3.8):

x = r cos θ    y = r sin θ

where θ = current angle, r = circle radius, x = x coordinate, and y = y coordinate. By this method, θ is stepped from 0 to π/4, and each value of x and y is calculated. However, computation of the values of sin θ and cos θ is even more time-consuming than the calculations of the first method.
Figures 3.7 & 3.8: Circle defined with a second-degree polynomial equation, and circle defined with trigonometric functions, respectively
If a circle is to be plotted efficiently, the use of trigonometric and power functions must be avoided. And as with the generation of a straight line, it is also desirable to perform the calculations necessary to find the scan-converted points with only integer addition, subtraction, and multiplication by powers of 2. Bresenham's circle algorithm allows these goals to be met.
The best approximation of the true circle will be described by those pixels in the raster that fall the least distance from the true circle. Examine Figs. 3.10(a) and 3.10(b). Notice that if points are generated from 90° to 45°, each new point closest to the true circle can be found by taking either of two actions: (1) move in the x direction one unit, or (2) move in the x direction one unit and in the negative y direction one unit. Therefore, a method of selecting between these two choices is all that is necessary to find the points closest to the true circle.

The process is as follows. Assume that the last scan-converted pixel is P1 [see Fig. 3.10(b)]. Let the distance from the origin to the true circle squared minus the distance to point P3 squared be D(Si), and let the distance from the origin to the true circle squared minus the distance to point P2 squared be D(Ti). As the only possible valid moves are to move either one step in the x direction, or one step in the x direction and one step in the negative y direction, the following expressions can be developed:

D(Si) = (xi-1 + 1)² + (yi-1)² - r²
D(Ti) = (xi-1 + 1)² + (yi-1 - 1)² - r²
Since D(Si) will always be positive and D(Ti) will always be negative, a decision variable di may be defined as follows:

di = D(Si) + D(Ti)

Therefore

di = (xi-1 + 1)² + (yi-1)² - r² + (xi-1 + 1)² + (yi-1 - 1)² - r²

and the starting value of the decision variable is

d1 = 3 - 2r
Figure 3.10

Thereafter, if di < 0, the move is in the x direction only:

xi+1 = xi + 1    yi+1 = yi    di+1 = di + 4xi + 6

Otherwise (di ≥ 0), the move is diagonal:

xi+1 = xi + 1    yi+1 = yi - 1    di+1 = di + 4(xi - yi) + 10
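Collecting these results gives the following C sketch of Bresenham's circle algorithm for one octant, using the plotCirclePoints helper sketched earlier (the center (h, k) is again an assumed generalization).

    extern void plotCirclePoints(int h, int k, int x, int y);

    /* Bresenham circle: integer decision variable d1 = 3 - 2r and the
       increments derived above; generates the 90-45 degree octant. */
    void circleBresenham(int h, int k, int r)
    {
        int x = 0, y = r;
        int d = 3 - 2 * r;
        while (x <= y) {
            plotCirclePoints(h, k, x, y);
            if (d < 0)
                d += 4 * x + 6;          /* horizontal move           */
            else {
                d += 4 * (x - y) + 10;   /* diagonal move: y shrinks  */
                y--;
            }
            x++;
        }
    }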
The midpoint circle algorithm is a slightly different approach. It is based on the following function for testing the spatial relationship between an arbitrary point (x, y) and a circle of radius r centered at the origin:

f(x, y) = x² + y² - r²    where f < 0 if (x, y) is inside the circle,
                                f = 0 if (x, y) is on the circle,
                                f > 0 if (x, y) is outside the circle.
Now consider the coordinates of the point halfway between pixel T and pixel S in Fig. 3.8: (xi + 1, yi - 1/2). This is called the midpoint, and we use it to define a decision parameter:

pi = f(xi + 1, yi - 1/2) = (xi + 1)² + (yi - 1/2)² - r²

If pi is negative, the midpoint is inside the circle, and we choose pixel T. On the other hand, if pi is positive (or equal to zero), the midpoint is outside the circle (or on the circle), and we choose pixel S. Similarly, the decision parameter for the next step is

pi+1 = (xi+1 + 1)² + (yi+1 - 1/2)² - r²

Since xi+1 = xi + 1, subtracting pi from pi+1 gives

pi+1 = pi + 2(xi + 1) + 1 + (y²i+1 - y²i) - (yi+1 - yi)

If pixel T is chosen (meaning pi < 0), we have yi+1 = yi. On the other hand, if pixel S is chosen (meaning pi ≥ 0), we have yi+1 = yi - 1. Thus

pi+1 = pi + 2xi + 3           if pi < 0
pi+1 = pi + 2(xi - yi) + 5    if pi ≥ 0

Finally, we compute the initial value for the decision parameter using the original definition of pi and (x0, y0) = (0, r):

p1 = (0 + 1)² + (r - 1/2)² - r² = 5/4 - r
One can see that this is not really integer computation. However, when r is an integer we can simply set p1 = 1 - r. The resulting error of 1/4, being less than the precise value, does not prevent p1 from getting the appropriate sign, and it does not affect the rest of the scan-conversion process either, because the decision variable is only updated with integer increments in subsequent steps.
The following is a description of this midpoint circle algorithm that generates the pixel coordinates in the 90° to 45° octant (setPixel is the assumed plotting routine used throughout this chapter):

    int x = 0, y = r, p = 1 - r;
    while (x <= y) {
        setPixel(x, y);               /* plot the current octant point */
        if (p < 0)
            p = p + 2 * x + 3;        /* midpoint inside: keep y       */
        else {
            p = p + 2 * (x - y) + 5;  /* midpoint outside: decrement y */
            y--;
        }
        x++;
    }

The remaining seven octants are obtained from these points by eight-way symmetry.
3.5 Scan-converting an Ellipse

The ellipse, like the circle, shows symmetry. In the case of an ellipse, however, symmetry is four- rather than eight-way.

3.5.1 Polynomial Method of Defining an Ellipse

The polynomial method of defining an ellipse (Fig. 3.11) is given by the expression

(x - h)²/a² + (y - k)²/b² = 1
where (h, k) are the coordinates of the ellipse center, a is the length of the major axis, and b is the length of the minor axis. The ellipse is scan-converted by incrementing x from h to h + a and evaluating the expression

y = b √(1 - (x - h)²/a²) + k

This method is very inefficient, however, because the squares of a and (x - h) must be found; then a floating-point division of (x - h)² by a², and a floating-point multiplication of the square root of [1 - (x - h)²/a²] by b, must be performed.
Routines have been found that will scan-convert general polynomial equations, including the ellipse. However, these routines are logic intensive and thus are very slow. A straightforward stepping algorithm for the ellipse, using four-way symmetry, is the following:

1. Set the initial variables: a = length of major axis; b = length of minor axis; (h, k) = coordinates of ellipse center; x = 0; i = step; xend = a.
2. Test to determine whether the entire ellipse has been scan-converted. If x > xend, stop.
3. Compute the value of the y coordinate:

y = b √(1 - x²/a²)

4. Plot the four points, found by symmetry, at the current (x, y) coordinates:

Plot (x + h, y + k)    Plot (-x + h, y + k)
Plot (x + h, -y + k)   Plot (-x + h, -y + k)

5. Increment x: x = x + i.
6. Go to step 2.
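These steps translate directly into C. The following sketch again assumes the setPixel routine, with a unit step:

    #include <math.h>

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    /* Polynomial ellipse scan conversion following steps 1-6 above. */
    void ellipsePolynomial(int h, int k, double a, double b)
    {
        for (double x = 0.0; x <= a; x += 1.0) {           /* steps 1, 2, 5, 6 */
            double y = b * sqrt(1.0 - (x * x) / (a * a));  /* step 3           */
            int xi = (int)(x + 0.5), yi = (int)(y + 0.5);
            setPixel( xi + h,  yi + k);                    /* step 4: symmetry */
            setPixel(-xi + h,  yi + k);
            setPixel( xi + h, -yi + k);
            setPixel(-xi + h, -yi + k);
        }
    }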
3.5.2 Trigonometric Method of Defining an Ellipse

A second method of defining an ellipse makes use of trigonometric functions (see Fig. 3.12). The following equations define an ellipse trigonometrically:

x = a * cos(θ) + h    and    y = b * sin(θ) + k

where (x, y) = the current coordinates, (h, k) = the ellipse center, a and b = the major and minor axis lengths, and θ = current angle.
For the generation of an ellipse using the trigonometric method, the value of θ is varied from 0 to π/2 radians (rad). The remaining points are found by symmetry. While this method is also inefficient and thus generally too slow for interactive applications, a lookup table containing the values of sin(θ) and cos(θ) with θ ranging from 0 to π/2 rad can be used. This method would have been considered unacceptable at one time because of the relatively high cost of the computer memory used to store the values for θ. However, because the cost of computer memory has plummeted in recent years, this method is now quite acceptable. A table-driven sketch follows.

Figure 3.12: Trigonometric description of an ellipse
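A minimal table-driven version in C (the table size and the names are illustrative assumptions):

    #include <math.h>

    #define HALF_PI 1.5707963267948966
    #define STEPS   90                    /* table resolution, illustrative */

    extern void setPixel(int x, int y);   /* assumed plotting routine */

    static double sinTab[STEPS + 1], cosTab[STEPS + 1];

    /* Precompute sin/cos for theta in [0, pi/2], as the text suggests. */
    void initTrigTables(void)
    {
        for (int i = 0; i <= STEPS; i++) {
            double theta = HALF_PI * (double)i / (double)STEPS;
            sinTab[i] = sin(theta);
            cosTab[i] = cos(theta);
        }
    }

    /* Trigonometric ellipse scan conversion; the other three quadrants
       follow by four-way symmetry. */
    void ellipseTrig(int h, int k, double a, double b)
    {
        for (int i = 0; i <= STEPS; i++) {
            int x = (int)(a * cosTab[i] + 0.5);
            int y = (int)(b * sinTab[i] + 0.5);
            setPixel( x + h,  y + k);
            setPixel(-x + h,  y + k);
            setPixel( x + h, -y + k);
            setPixel(-x + h, -y + k);
        }
    }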
Since the ellipse shows four-way symmetry, it can easily be rotated 90°. The new equation is found by trading a and b, the values which describe the major and minor axes. When the polynomial method is used, the equation used to describe the ellipse becomes

(x - h)²/b² + (y - k)²/a² = 1

When the trigonometric method is used, the equations used to describe the ellipse become

x = b * cos(θ) + h    and    y = a * sin(θ) + k

where θ = current angle.
Assume that you would like to rotate the ellipse through an angle other than 90 degrees. It can be seen from Fig. 3.12 that rotation of the ellipse may be accomplished by rotating the x, y coordinates of each scan-converted point, which become

x' = x cos(α) - y sin(α)    y' = x sin(α) + y cos(α)

where α is the angle of rotation.
3.6 Summary
• Scan-converting a point involves illuminating the pixel that contains the point.
• Interactive graphics is a graphics system in which the user dynamically controls
  the picture's content, format, size, or color on a display surface by means of
  interaction devices.
4. What steps are required to plot a line whose slope is between 45° and 90°
   using Bresenham's method?
5. What steps are required to plot a dashed line using Bresenham's method?
6. Show graphically that an ellipse has four-way symmetry by plotting four
   points on the ellipse: x = a·cos(θ) + h, y = b·sin(θ) + k.
7. How must Prob. 3.9 be modified if an ellipse is to be rotated (a) π/4, (b)
   π/9, and (c) π/2 radians?
4.0 Objectives
At the end of this chapter the reader will be able to:
• Describe two dimensional transformations
• Describe and distinguish between two dimensional geometric and coordinate
  transformations
• Describe composite transformations
• Describe shear transformations
Structure
4.1 Introduction
4.2 Geometric Transformations
4.3 Coordinate Transformations
4.4 Composite Transformations
4.5 Shear Transformation
4.6 Summary
4.7 Key Words
4.8 Self Assessment Questions (SAQ)
4.1 Introduction
Transformations are a fundamental part of computer graphics. In order to manipulate
objects in two-dimensional space, we must apply various transformation functions to
them. This allows us to change the position, size, and orientation of the objects.
Transformations are used to position objects, to shape objects, to change viewing
positions, and even to change how something is viewed.
There are two complementary points of view for describing object movement. The
first is that the object itself is moved relative to a stationary coordinate system or
background. The mathematical statement of this viewpoint is described by geometric
transformations applied to each point of the object. The second point of view holds
that the object is held stationary while the coordinate system is moved relative to the
object. This effect is attained through the application of coordinate transformations.
An example involves the motion of an automobile against a scenic background. We
can move the automobile while keeping the backdrop fixed (a geometric
transformation). We can also keep the automobile fixed while moving the backdrop
scenery (a coordinate transformation). In some situations, both methods are
employed.
4.2 Geometric Transformations

An object in the plane can be considered as a collection of points. Every object point
P has coordinates (x, y), and so the object is the sum total of all its coordinate
points. If the object is moved to a new position, it can be regarded as a new object
Obj′, all of whose coordinate points P′ can be obtained from the original points P
of Obj by the application of a geometric transformation.

Figure 4.1
Points in 2-dimensional space will be represented as column vectors. We are
interested in four types of transformation:
• Translation
• Scaling
• Rotation
• Mirror Reflection
4.2.1 Translation
In translation, an object is displaced a given distance and direction from its original
position. If the displacement is given by the vector v = tx·I + ty·J, the new object point
P′(x′, y′) can be found by applying the transformation Tv to P(x, y) (see Fig. 4.1):

P′ = Tv(P)   where x′ = x + tx and y′ = y + ty
4.2.2 Rotation
In rotation, the object is rotated θ° about the origin. The convention is that the
direction of rotation is counterclockwise if θ is a positive angle and clockwise if θ is
a negative angle. The transformation of rotation Rθ is

P′ = Rθ(P)
Figure 4.2
4.2.3 Scaling
Scaling is the process of expanding or compressing the dimensions of an object.
Positive scaling constants sx and sy are used to describe changes in length with
respect to the x direction and y direction, respectively. A scaling constant greater than
one indicates an expansion of length, and less than one, a compression of length. The
scaling transformation Ssx,sy is given by P′ = Ssx,sy(P) where x′ = sx·x and y′ = sy·y.
Notice that after a scaling transformation is performed, the new object is located at a
different position relative to the origin. In fact, in a scaling transformation the only
point that remains fixed is the origin (Figure 4.3).
Figure 4.3
If both scaling constants have the same value s, the scaling transformation is said to
be homogeneous. Furthermore, if s > 1, it is a magnification and for s < 1, a reduction.

4.2.4 Mirror Reflection
If either the x or y axis is treated as a mirror, the object has a mirror image or
reflection. Since the reflection P′ of an object point P is located the same distance
from the mirror as P (Fig. 4.4), the mirror reflection transformation Mx about the x-
axis is given by

P′ = Mx(P)   where x′ = x and y′ = −y

Similarly, the mirror reflection about the y-axis is

P′ = My(P)   where x′ = −x and y′ = y
Figure 4.4
Each geometric transformation has an inverse, which is the transformation in the
opposite direction:

Translation: Tv⁻¹ = T−v
Rotation: Rθ⁻¹ = R−θ
Scaling: Ssx,sy⁻¹ = S1/sx,1/sy
Mirror reflection: Mx⁻¹ = Mx and My⁻¹ = My

4.3 Coordinate Transformations
Suppose that we have two coordinate systems in the plane. The first system is located
at origin O and has coordinate axes xy (Figure 4.5). The second coordinate system is
located at origin O′ and has coordinate axes x′y′. Now each point in the plane has two
coordinate descriptions: (x, y) or (x′, y′), depending on which coordinate system is
used. If we think of the second system x′y′ as arising from a transformation applied
to the first system xy, we say that a coordinate transformation has been applied. We
can describe this transformation by determining how the (x′, y′) coordinates of a point
P are related to the (x, y) coordinates of the same point.
Figure 4.5
4.3.1 Translation
If the xy coordinate system is displaced to a new position, where the direction and
distance of the displacement is given by the vector v = tx·I + ty·J, the coordinates of a
point in both systems are related by the translation transformation Tv:

(x′, y′) = Tv(x, y)   where x′ = x − tx and y′ = y − ty
4.3.2 Rotation
The xy system is rotated θ° about the origin (Figure 4.6). Then the coordinates of a
point in both systems are related by the rotation transformation Rθ:

(x′, y′) = Rθ(x, y)
Figure 4.6

4.3.3 Scaling with Respect to the Origin
Suppose that a new coordinate system is formed by leaving the origin and coordinate
axes unchanged, but introducing different units of measurement along the x and y
axes. If the new units are obtained from the old units by a scaling of sx units along
the x-axis and sy units along the y-axis, the coordinates in the new system are related
to coordinates in the old system through the scaling transformation Ssx,sy:

(x′, y′) = Ssx,sy(x, y)

where x′ = (1/sx)·x and y′ = (1/sy)·y. Figure 4.7 shows a coordinate scaling
transformation using scaling factors sx = 2 and sy = 1/2.
Figure 4.7
4.3.4 Mirror Reflection about an Axis
If the new coordinate system is obtained by reflecting the old system about either x or
ep
y axis, the relationship between coordinates is given by the coordinate transformations
M x and M y : For reflection about the x axis (figure 4.8 (a))
( x' , y ' ) = M x ( x , y )
e
where x' = x and y ' = − y . For reflection about the y axis [figure 4.8(b)]
ad
( x' , y ' ) = M y ( x , y )
Figure 4.8
Notice that the reflected coordinate system is left-handed; thus reflection changes the
orientation of the coordinate system.
Each coordinate transformation has an inverse which can be found by applying the
opposite transformation:

Translation: Tv⁻¹ = T−v (translation in the opposite direction)
Rotation: Rθ⁻¹ = R−θ (rotation in the opposite direction)
Scaling: Ssx,sy⁻¹ = S1/sx,1/sy
4.4 Composite Transformations
More complex geometric and coordinate transformations can be built from the basic
transformations described above by using the process of composition of functions. For
example, such operations as rotation about a point other than the origin or reflection
about lines other than the axes can be constructed from the basic transformations.

Matrix Description of the Basic Transformations
The basic transformations can be expressed in matrix form, for both the geometric
and the coordinate versions:

Scaling:
  Ssx,sy = ⎛ sx  0 ⎞      (coordinate form: ⎛ 1/sx   0  ⎞ )
           ⎝ 0  sy ⎠                        ⎝  0   1/sy ⎠

Mirror reflection:
  Mx = ⎛ 1  0 ⎞           (same in both forms)
       ⎝ 0 −1 ⎠

  My = ⎛ −1 0 ⎞           (same in both forms)
       ⎝  0 1 ⎠
We represent the coordinate pair (x, y) of a point P by the triple (x, y, 1). This is
simply the homogeneous representation of P. Then translation in the direction
v = tx·I + ty·J can be expressed by the matrix function

Tv = ⎛ 1 0 tx ⎞
     ⎜ 0 1 ty ⎟
     ⎝ 0 0 1  ⎠
Then

⎛ 1 0 tx ⎞ ⎛ x ⎞   ⎛ x + tx ⎞
⎜ 0 1 ty ⎟ ⎜ y ⎟ = ⎜ y + ty ⎟
⎝ 0 0 1  ⎠ ⎝ 1 ⎠   ⎝ 1      ⎠
The advantage of introducing a matrix form for translation is that we can now build
complex transformations by multiplying basic matrix transformations. Each of the
other basic 2 × 2 transformation matrices can be put in this 3 × 3 homogeneous form
by adding a third column (0, 0, 1)ᵀ and a third row (0 0 1). That is,

⎛ a b 0 ⎞
⎜ c d 0 ⎟
⎝ 0 0 1 ⎠
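As an illustrative sketch in C (the Mat3 type and function names are ours, not from
any particular library), applying such a 3 × 3 homogeneous matrix to a point looks
like this:

typedef struct { double m[3][3]; } Mat3;

/* Apply a 3x3 homogeneous matrix t to the point (x, y, 1). */
void apply(const Mat3 *t, double x, double y, double *xp, double *yp)
{
    *xp = t->m[0][0] * x + t->m[0][1] * y + t->m[0][2];  /* row 1 . (x, y, 1) */
    *yp = t->m[1][0] * x + t->m[1][1] * y + t->m[1][2];  /* row 2 . (x, y, 1) */
    /* the third row is (0 0 1), so the homogeneous coordinate stays 1 */
}

/* The translation Tv above, as a Mat3. */
Mat3 translation(double tx, double ty)
{
    Mat3 t = {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
    return t;
}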
Problem 1 Derive the transformation that rotates an object point θ° about the origin.
Write the matrix representation for this rotation.
Answer Refer to Fig. 4.9. Definition of the trigonometric functions sin and cos yields

x = r·cos(φ)   and   y = r·sin(φ)

and

x′ = r·cos(θ + φ)   and   y′ = r·sin(θ + φ)

Expanding with the angle-sum identities,

x′ = x·cos(θ) − y·sin(θ)   and   y′ = x·sin(θ) + y·cos(θ)

or P′ = Rθ(P). Writing P′ = (x′, y′)ᵀ, P = (x, y)ᵀ, and

Rθ = ⎛ cos θ  −sin θ ⎞
     ⎝ sin θ   cos θ ⎠

we can express the rotation as P′ = Rθ·P.

Figure 4.9
Problem 2 Write the general form of the matrix for rotation about a point P(h, k).
Answer Translate the point P to the origin, rotate, and translate back, so that
Rθ,P = Tv·Rθ·T−v with v = hI + kJ. Using the homogeneous coordinate form for the
rotation and translation matrices, we have

Rθ,P = ⎛ 1 0 h ⎞ ⎛ cos(θ) −sin(θ) 0 ⎞ ⎛ 1 0 −h ⎞
       ⎜ 0 1 k ⎟ ⎜ sin(θ)  cos(θ) 0 ⎟ ⎜ 0 1 −k ⎟
       ⎝ 0 0 1 ⎠ ⎝ 0       0      1 ⎠ ⎝ 0 0  1 ⎠

     = ⎛ cos(θ) −sin(θ) [−h·cos(θ) + k·sin(θ) + h] ⎞
       ⎜ sin(θ)  cos(θ) [−h·sin(θ) − k·cos(θ) + k] ⎟
       ⎝ 0       0       1                         ⎠
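A hedged C sketch of this composition; the Mat3 type repeats the one sketched
earlier so the fragment is self-contained, and rotate_about is an illustrative name:

#include <math.h>

typedef struct { double m[3][3]; } Mat3;

/* c = a.b for 3x3 matrices. */
static Mat3 mul(Mat3 a, Mat3 b)
{
    Mat3 c;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            c.m[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                c.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return c;
}

/* R(theta, P) = T(h, k) . R(theta) . T(-h, -k), exactly as derived above. */
Mat3 rotate_about(double theta, double h, double k)
{
    Mat3 back = {{{1, 0,  h}, {0, 1,  k}, {0, 0, 1}}};   /* translate back   */
    Mat3 rot  = {{{cos(theta), -sin(theta), 0},
                  {sin(theta),  cos(theta), 0},
                  {0,           0,          1}}};        /* rotate at origin */
    Mat3 to_o = {{{1, 0, -h}, {0, 1, -k}, {0, 0, 1}}};   /* pivot to origin  */
    return mul(back, mul(rot, to_o));
}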
Problem 3
(a) Find the matrix that represents rotation of an object by 30° about the origin.
(b) What are the new coordinates of the point P(2, 4) after the rotation?
Answer
(a)
R30° = ⎛ cos 30° −sin 30° ⎞ = ⎛ √3/2  −1/2 ⎞
       ⎝ sin 30°  cos 30° ⎠   ⎝ 1/2   √3/2 ⎠

(b) The new coordinates are obtained by applying R30° to P:

R30°·⎛ 2 ⎞ = ⎛ 2·(√3/2) − 4·(1/2) ⎞ = ⎛ √3 − 2  ⎞
     ⎝ 4 ⎠   ⎝ 2·(1/2) + 4·(√3/2) ⎠   ⎝ 1 + 2√3 ⎠

So P′ = (√3 − 2, 1 + 2√3).
Problem 4 Perform a 45° rotation of triangle A(0, 0), B(1, 1), C(5, 2) (a) about the
origin and (b) about P(−1, −1).
Answer
We represent the triangle by a matrix formed from the homogeneous coordinates of
the vertices:

          A B C
[ABC] = ⎛ 0 1 5 ⎞
        ⎜ 0 1 2 ⎟
        ⎝ 1 1 1 ⎠
(a) The matrix of rotation about the origin, in homogeneous form, is

R45° = ⎛ cos 45° −sin 45° 0 ⎞   ⎛ √2/2 −√2/2 0 ⎞
       ⎜ sin 45°  cos 45° 0 ⎟ = ⎜ √2/2  √2/2 0 ⎟
       ⎝ 0        0       1 ⎠   ⎝ 0     0    1 ⎠

So the coordinates A′B′C′ of the rotated triangle ABC can be found as

[A′B′C′] = R45°·[ABC] = ⎛ √2/2 −√2/2 0 ⎞ ⎛ 0 1 5 ⎞   ⎛ 0 0  (3/2)√2 ⎞
                        ⎜ √2/2  √2/2 0 ⎟ ⎜ 0 1 2 ⎟ = ⎜ 0 √2 (7/2)√2 ⎟
                        ⎝ 0     0    1 ⎠ ⎝ 1 1 1 ⎠   ⎝ 1 1  1       ⎠

Thus A′ = (0, 0), B′ = (0, √2), and C′ = ((3/2)√2, (7/2)√2).

(b) The rotation matrix about P(−1, −1) is given by R45°,P = Tv·R45°·T−v with
v = −I − J:

R45°,P = ⎛ 1 0 −1 ⎞ ⎛ √2/2 −√2/2 0 ⎞ ⎛ 1 0 1 ⎞   ⎛ √2/2 −√2/2 −1     ⎞
         ⎜ 0 1 −1 ⎟ ⎜ √2/2  √2/2 0 ⎟ ⎜ 0 1 1 ⎟ = ⎜ √2/2  √2/2 √2 − 1 ⎟
         ⎝ 0 0  1 ⎠ ⎝ 0     0    1 ⎠ ⎝ 0 0 1 ⎠   ⎝ 0     0    1      ⎠

Now

[A′B′C′] = R45°,P·[ABC] = ⎛ −1      −1       (3/2)√2 − 1 ⎞
                          ⎜ √2 − 1  2√2 − 1  (9/2)√2 − 1 ⎟
                          ⎝ 1       1        1           ⎠

Thus A′ = (−1, √2 − 1), B′ = (−1, 2√2 − 1), and C′ = ((3/2)√2 − 1, (9/2)√2 − 1).
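The result of part (b) can be checked numerically with a small program; this sketch
assumes the Mat3 type and rotate_about() from the sketch after Problem 2:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double tri[3][2] = {{0, 0}, {1, 1}, {5, 2}};       /* A, B, C */
    Mat3 r = rotate_about(atan(1.0), -1.0, -1.0);      /* 45 degrees about (-1,-1) */
    for (int i = 0; i < 3; i++) {
        double x = tri[i][0], y = tri[i][1];
        printf("(%.4f, %.4f)\n",
               r.m[0][0] * x + r.m[0][1] * y + r.m[0][2],
               r.m[1][0] * x + r.m[1][1] * y + r.m[1][2]);
    }
    return 0;   /* prints (-1, 0.4142), (-1, 1.8284), (1.1213, 5.3640) */
}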
Problem 5 Write the general form of a scaling matrix with respect to a fixed point
P(h, k).
Answer
Following the same general procedure as in Problems 2 and 3, we write the required
transformation with v = hI + kJ as

Sa,b,P = Tv·Sa,b·T−v

       = ⎛ 1 0 h ⎞ ⎛ a 0 0 ⎞ ⎛ 1 0 −h ⎞   ⎛ a 0 −ah + h ⎞
         ⎜ 0 1 k ⎟ ⎜ 0 b 0 ⎟ ⎜ 0 1 −k ⎟ = ⎜ 0 b −bk + k ⎟
         ⎝ 0 0 1 ⎠ ⎝ 0 0 1 ⎠ ⎝ 0 0  1 ⎠   ⎝ 0 0  1      ⎠
Problem 6 Find the transformation that scales (with respect to the origin) by (a) a
units in the X direction, (b) b units in the Y direction, and (c) simultaneously a units
in the X direction and b units in the Y direction.
Answer
(a) The scaling transformation applied to a point P(x, y) produces the point (ax, y).
We can write this in matrix form as Sa,1·P, or

⎛ a 0 ⎞ ⎛ x ⎞   ⎛ ax ⎞
⎝ 0 1 ⎠ ⎝ y ⎠ = ⎝ y  ⎠

(b) As in part (a), the required transformation can be written in matrix form as
S1,b·P. So

⎛ 1 0 ⎞ ⎛ x ⎞   ⎛ x  ⎞
⎝ 0 b ⎠ ⎝ y ⎠ = ⎝ by ⎠

(c) Scaling in both directions is described by the transformation x′ = ax and
y′ = by. Writing this in matrix form as Sa,b·P, we have

⎛ a 0 ⎞ ⎛ x ⎞   ⎛ ax ⎞
⎝ 0 b ⎠ ⎝ y ⎠ = ⎝ by ⎠
Problem 7 An observer standing at the origin sees a point P(1, 1). If the point is
translated one unit in the direction v = I, its new coordinate position is P′(2, 1).
Suppose instead that the observer stepped back one unit along the x axis. What would
be the apparent coordinates of P with respect to the observer?
Answer
The problem can be set up as a transformation of coordinate systems. If we translate
the origin O in the direction v = −I (to a new position at O′), the coordinates of P in
this system can be found by the translation Tv:

Tv·P = ⎛ 1 0 1 ⎞ ⎛ 1 ⎞   ⎛ 2 ⎞
       ⎜ 0 1 0 ⎟ ⎜ 1 ⎟ = ⎜ 1 ⎟
       ⎝ 0 0 1 ⎠ ⎝ 1 ⎠   ⎝ 1 ⎠

So the new coordinates of P with respect to the observer are (2, 1). This has the
following interpretation: a displacement of one unit in a given direction can be
achieved either by moving the object forward or by stepping back from it.
4.5 Shear Transformation
The shear transformation distorts an object by scaling one coordinate using the other.
It distorts the shape of an object in such a way as if the object were composed of
internal layers that have been caused to slide over one another. Two common
shearing transformations are those that shift coordinate x values and those that shift
y values.
2D Shear along X Direction
A shear along the x-direction is given by

x′ = x + h·y,   y′ = y                                    (0.1)

or, in matrix form,

⎛ x′ ⎞   ⎛ 1 h ⎞ ⎛ x ⎞
⎝ y′ ⎠ = ⎝ 0 1 ⎠ ⎝ y ⎠                                    (0.2)

where h is the negative or positive fraction of the Y coordinate of P to be added to
the X coordinate; h can be any real number.

2D Shear along Y Direction
Similarly, shear along the y-direction is given by

x′ = x,   y′ = y + g·x                                    (0.3)

or, in matrix form,

⎛ x′ ⎞   ⎛ 1 0 ⎞ ⎛ x ⎞
⎝ y′ ⎠ = ⎝ g 1 ⎠ ⎝ y ⎠                                    (0.4)

where g is the fraction of the X coordinate to be added to the Y coordinate.

Problem 8
If h = 0.5 and g = 0.8, then the shear along the X direction of the point P(8, 9) is
obtained by substituting these values in (0.1):

x′ = 8 + 0.5 × 9 = 12.5,   y′ = 9,   so P′ = (12.5, 9)

The shear in the Y direction, from (0.3), is

x′ = 8,   y′ = 9 + 0.8 × 8 = 15.4,   so P′ = (8, 15.4)
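The two shears of Problem 8 amount to one multiply-add each; a minimal C check,
using the values computed above:

#include <stdio.h>

/* x-shear: x' = x + h*y, y' = y;   y-shear: x' = x, y' = y + g*x */
int main(void)
{
    double x = 8.0, y = 9.0;
    double h = 0.5, g = 0.8;
    printf("x-shear: (%.1f, %.1f)\n", x + h * y, y);   /* (12.5, 9.0) */
    printf("y-shear: (%.1f, %.1f)\n", x, y + g * x);   /* (8.0, 15.4) */
    return 0;
}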
4.6 Summary
• In translation, an object is displaced a given distance and direction from its
  original position.
• If the new coordinate system is obtained by reflecting the old system about
  either x or y axis, the relationship between coordinates is given by the
  coordinate transformations Mx and My.
• Scaling is the process of expanding or compressing the dimensions of an object.
• Complex transformations can be done by multiplying the basic matrix
  transformations.
1. Prove that the multiplication of the 3 × 3 matrices in 2-D geometry in each of
   the following cases is commutative: (a) two successive rotations, (b) two
   successive translations, (c) two successive scalings.
5. Find the transformation for (a) rotating about the origin and then translating
   in the direction of vector I, and (b) translating and then rotating.
6. Show that reflection about the line y = x is attained by reversing coordinates.
   That is,

   ML(x, y) = (y, x)
References/Suggested Readings
1. Computer Graphics (C Version), Donald Hearn and M. Pauline Baker, Prentice Hall
2. Computer Graphics, Second Edition, Pradeep K. Bhatia, I.K. International Publisher
3. Advanced Animation and Rendering Techniques, Theory and Practice, Alan Watt
   and Mark Watt, ACM Press/Addison-Wesley
4. Graphics Gems I-V, various authors, Academic Press
5. Computer Graphics, Plastok, TMH
5.0 Objectives
At the end of this chapter the reader will be able to:
• Describe windowing concepts
• Describe and distinguish between window and viewport
• Describe Normalized Device Coordinates
• Describe clipping
Structure
5.1 Introduction
5.2 Window-to-Viewport Mapping
5.3 Two-Dimensional Viewing and Clipping
5.4 Point Clipping
5.5 Line Clipping
5.6 Area Clipping
5.7 Text Clipping
5.8 Window-To-Viewport Coordinate Transformation
5.9 Exterior and Interior Clipping
5.10 Summary
5.11 Key Words
5.1 Introduction
Very basic two-dimensional graphics packages usually work in device coordinates. If
any graphics primitive lies partially or completely outside the window, then the
portion outside will not be drawn; it is clipped out of the image. In many situations we
have to draw objects whose dimensions are given in units completely incompatible
with the screen coordinate system. Programming in device coordinates is not very
convenient, since the programmer has to do any required scaling from the coordinates
natural to the application to device coordinates. This has led to two-dimensional
packages being developed which allow the application programmer to work directly
in the coordinate system which is natural to the application. These user coordinates
are usually called World Coordinates (WC). The package then converts the
coordinates to Device Coordinates (DC) automatically. The transformation from WC
to DC is often carried out in two steps: first the Normalisation Transformation and
then the Workstation Transformation. The Viewing Transformation is the process of
going from a window in world coordinates to a viewport in Physical Device
Coordinates (PDC).
5.2 Window-to-Viewport Mapping
A window is specified by four world coordinates: wxmin, wxmax, wymin, and wymax (see
Fig. 5.1). Similarly, a viewport is described by four normalized device coordinates:
vxmin, vxmax, vymin, and vymax. The objective of window-to-viewport mapping is to
convert the world coordinates (wx, wy) of an arbitrary point to its corresponding
normalized device coordinates (vx, vy). In order to maintain the same relative
placement of the point in the viewport as in the window, we require:

vx = (vxmax − vxmin)/(wxmax − wxmin) · (wx − wxmin) + vxmin
vy = (vymax − vymin)/(wymax − wymin) · (wy − wymin) + vymin
Since the eight coordinate values that define the window and the viewport are just
constants, we can express these two formulas for computing (vx, vy) from (wx, wy) in
terms of a translate-scale-translate transformation N:

⎛ vx ⎞       ⎛ wx ⎞
⎜ vy ⎟ = N · ⎜ wy ⎟
⎝ 1  ⎠       ⎝ 1  ⎠

where

N = ⎛ 1 0 vxmin ⎞ ⎛ (vxmax − vxmin)/(wxmax − wxmin)  0                               0 ⎞ ⎛ 1 0 −wxmin ⎞
    ⎜ 0 1 vymin ⎟ ⎜ 0                                (vymax − vymin)/(wymax − wymin) 0 ⎟ ⎜ 0 1 −wymin ⎟
    ⎝ 0 0 1     ⎠ ⎝ 0                                0                               1 ⎠ ⎝ 0 0  1     ⎠
Note that geometric distortions occur (e.g., squares in the window become rectangles
in the viewport) whenever the two scaling constants differ.
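A direct C transcription of the two mapping formulas might look like the following
sketch; the parameter-passing style is ours, and a real package would keep the window
and viewport limits in its viewing state:

/* Map world point (wx, wy) to normalized viewport point (vx, vy). */
void window_to_viewport(double wx, double wy,
                        double wxmin, double wxmax, double wymin, double wymax,
                        double vxmin, double vxmax, double vymin, double vymax,
                        double *vx, double *vy)
{
    double sx = (vxmax - vxmin) / (wxmax - wxmin);   /* x scaling factor */
    double sy = (vymax - vymin) / (wymax - wymin);   /* y scaling factor */
    *vx = sx * (wx - wxmin) + vxmin;
    *vy = sy * (wy - wymin) + vymin;
    /* if sx != sy, squares in the window become rectangles in the viewport */
}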
5.3 Two-Dimensional Viewing and Clipping
Much like what we see in real life through a small window on the wall, or the
viewfinder of a camera, a rectangular window with its edges parallel to the axes of the
WCS is used to select the portion of the scene for which an image is to be generated
(see Fig. 5.2). Sometimes an additional coordinate system called the viewing
coordinate system is introduced to simulate the effect of moving and/or tilting the
camera.
On the other hand, an image representing a view often becomes part of a larger image,
like a photo on an album page, which models a computer monitor's display area.
Since album pages vary and monitor sizes differ from one system to another, we want
to introduce a device-independent tool to describe the display area. This tool is called
the normalized device coordinate system (NDCS), in which a unit (1 × 1) square
whose lower left corner is at the origin of the coordinate system defines the display
area of a virtual display device. A rectangular viewport with its edges parallel to the
axes of the NDCS is used to specify a sub-region of the display area that embodies the
image.
The process that converts object coordinates in WCS to normalized device
coordinates is referred to as the viewing transformation.
When the normalized display area is mapped to a physical device that does not have a
1/1 aspect ratio, it may be mapped to a square sub-region (see Fig. 5.2) so as to avoid
introducing unwanted geometric distortion.
Along with the convenience and flexibility of using a window to specify a localized
view comes the need for clipping, since objects in the scene may be completely inside
the window, completely outside the window, or partially visible through the window.
The clipping operation eliminates objects or portions of objects that are not visible
through the window to ensure the proper construction of the corresponding image.
Note that clipping may occur in the world coordinate or viewing coordinate space,
where the window is used to clip the objects; it may also occur in the normalized
device coordinate space, where the viewport/workstation window is used to clip. In
either case we refer to the window or the viewport/workstation window as the
clipping window.
5.4 Point Clipping
A point (x, y) is considered inside the clipping window when the following
inequalities all evaluate to true:

xmin ≤ x ≤ xmax   and   ymin ≤ y ≤ ymax

where xmin, xmax, ymin, and ymax define the clipping window.
5.5 Line Clipping
Lines that do not intersect the clipping window are either completely inside the
window or completely outside the window. On the other hand, a line that intersects
the clipping window is divided by the intersection point(s) into segments that are
either inside or outside the window. The following algorithms provide efficient ways
to decide the spatial relationship between an arbitrary line and the clipping window.

The Cohen-Sutherland Algorithm
In this algorithm we divide the line clipping process into two phases: (1) identify
those lines which intersect the clipping window and so need to be clipped and (2)
perform the clipping. All lines fall into one of the following clipping categories:

1. Visible – both endpoints of the line lie within the window.
2. Not visible – the line definitely lies outside the window. This will occur if the
   line from (x1, y1) to (x2, y2) satisfies any one of the following four inequalities:

   x1, x2 > xmax;   x1, x2 < xmin;   y1, y2 > ymax;   y1, y2 < ymin

3. Clipping candidate – the line is in neither category 1 nor 2.

In Fig. 5.3, line l1 is in category 1 (visible); lines l2 and l3 are in category 2 (not
visible); and lines l4 and l5 are in category 3 (clipping candidates).
Figure 5.3
The algorithm employs an efficient procedure for finding the category of a line. It
proceeds in two steps:

1. Assign a 4-bit region code to each endpoint of the line, according to which of
   nine regions of the plane the endpoint lies in. Starting from the leftmost bit, each
   bit of the code is set to true (1) or false (0) according to the scheme

   Bit 1 ≡ endpoint is above the window          = sign (y − ymax)
   Bit 2 ≡ endpoint is below the window          = sign (ymin − y)
   Bit 3 ≡ endpoint is to the right of the window = sign (x − xmax)
   Bit 4 ≡ endpoint is to the left of the window  = sign (xmin − x)

   We use the convention that sign(a) = 1 if a is positive, 0 otherwise.

2. The line is visible if both region codes are 0000, not visible if the bitwise
   logical AND of the codes is not 0000, and a candidate for clipping if the
   bitwise logical AND of the region codes is 0000.
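Step 1 reduces to four comparisons per endpoint. A minimal C sketch (the macro
names are illustrative):

#define ABOVE 8   /* bit 1: y > ymax */
#define BELOW 4   /* bit 2: y < ymin */
#define RIGHT 2   /* bit 3: x > xmax */
#define LEFT  1   /* bit 4: x < xmin */

int region_code(double x, double y,
                double xmin, double xmax, double ymin, double ymax)
{
    int code = 0;
    if (y > ymax) code |= ABOVE;
    if (y < ymin) code |= BELOW;
    if (x > xmax) code |= RIGHT;
    if (x < xmin) code |= LEFT;
    return code;
}
/* visible: both codes 0; not visible: (code1 & code2) != 0;
   clipping candidate: (code1 & code2) == 0 with at least one code nonzero */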
Figure 5.4
For a line in category 3 we proceed to find the intersection point of the line with one
of the boundaries of the clipping window, or to be exact, with the infinite extension of
one of the boundaries. We choose an endpoint of the line, say (x1, y1), that is outside
the window, i.e., whose region code is not 0000. We then select an extended boundary
line by observing that those boundary lines that are candidates for intersection are the
ones for which the chosen endpoint must be "pushed across" so as to change a "1" in
its code to a "0" (see Fig. 5.4). This means:

If bit 1 is 1: intersect with line y = ymax.
If bit 2 is 1: intersect with line y = ymin.
If bit 3 is 1: intersect with line x = xmax.
If bit 4 is 1: intersect with line x = xmin.
Consider line CD in Fig. 5.4. If endpoint C is chosen, then the bottom boundary line
y = ymin is selected for computing intersection. On the other hand, if endpoint D is
chosen, then either the top boundary line y = ymax or the right boundary line x = xmax
is used. The coordinates of the intersection point are

xi = xmin or xmax,   yi = y1 + m(xi − x1)     if the boundary line is vertical

xi = x1 + (yi − y1)/m,   yi = ymin or ymax    if the boundary line is horizontal

where m = (y2 − y1)/(x2 − x1) is the slope of the line.
Now we replace endpoint (x1, y1) with the intersection point (xi, yi), effectively
eliminating the portion of the original line that is on the outside of the selected
window boundary. The new endpoint is then assigned an updated region code and the
clipped line re-categorized and handled in the same way. This iterative process
terminates when we finally reach a clipped line that belongs to either category 1
(visible) or category 2 (not visible).
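Putting the pieces together, the iterative procedure can be coded as follows; this
sketch reuses region_code() and the bit macros from above and is one plausible
arrangement, not the only one:

#include <stdbool.h>

/* Iterative Cohen-Sutherland clipping.  Returns true and updates the
   endpoints in place if any part of the segment is visible. */
bool clip_line(double *x1, double *y1, double *x2, double *y2,
               double xmin, double xmax, double ymin, double ymax)
{
    int c1 = region_code(*x1, *y1, xmin, xmax, ymin, ymax);
    int c2 = region_code(*x2, *y2, xmin, xmax, ymin, ymax);
    while (true) {
        if ((c1 | c2) == 0) return true;    /* category 1: visible     */
        if ((c1 & c2) != 0) return false;   /* category 2: not visible */
        /* category 3: pick an endpoint outside the window ...        */
        int out = c1 ? c1 : c2;
        double x, y;
        /* ... and push it across the corresponding boundary line     */
        if (out & ABOVE)      { x = *x1 + (*x2 - *x1) * (ymax - *y1) / (*y2 - *y1); y = ymax; }
        else if (out & BELOW) { x = *x1 + (*x2 - *x1) * (ymin - *y1) / (*y2 - *y1); y = ymin; }
        else if (out & RIGHT) { y = *y1 + (*y2 - *y1) * (xmax - *x1) / (*x2 - *x1); x = xmax; }
        else                  { y = *y1 + (*y2 - *y1) * (xmin - *x1) / (*x2 - *x1); x = xmin; }
        if (out == c1) { *x1 = x; *y1 = y; c1 = region_code(x, y, xmin, xmax, ymin, ymax); }
        else           { *x2 = x; *y2 = y; c2 = region_code(x, y, xmin, xmax, ymin, ymax); }
    }
}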
An alternative way to process a line in category 3 is based on binary search. The line
is divided at its midpoint into two shorter line segments. The clipping categories of
the two new line segments are then determined by their region codes. Each segment in
category 3 is divided again into shorter segments and categorized. This bisection and
categorization process continues until each line segment that spans across a window
boundary (hence encompasses an intersection point) reaches a threshold for line size
and all other segments are either in category 1 (visible) or in category 2 (invisible).
The midpoint coordinates (xm, ym) of a line joining (x1, y1) and (x2, y2) are given by

xm = (x1 + x2)/2      ym = (y1 + y2)/2
The example in Fig. 5.5 illustrates how midpoint subdivision is used to zoom in onto
the two intersection points I1 and I2 with 10 bisections. The process continues until
we reach two line segments that are, say, pixel-sized, i.e. mapped to one single
pixel each in the image space. If the maximum number of pixels in a line is M, this
method will yield a pixel-sized line segment in N subdivisions, where 2^N = M or
N = log2 M. For instance, when M = 1024 we need at most N = log2 1024 = 10
subdivisions.
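A recursive sketch of midpoint subdivision in C, assuming region_code() from the
Cohen-Sutherland discussion and using a one-pixel threshold; draw_segment is a
placeholder for the output routine:

#include <math.h>

void draw_segment(double x1, double y1, double x2, double y2); /* placeholder */

void midpoint_clip(double x1, double y1, double x2, double y2,
                   double xmin, double xmax, double ymin, double ymax)
{
    int c1 = region_code(x1, y1, xmin, xmax, ymin, ymax);
    int c2 = region_code(x2, y2, xmin, xmax, ymin, ymax);

    if ((c1 & c2) != 0)                      /* category 2: wholly invisible */
        return;
    if ((c1 | c2) == 0 ||                    /* category 1: wholly visible,  */
        (fabs(x2 - x1) < 1.0 && fabs(y2 - y1) < 1.0)) {  /* or pixel-sized   */
        draw_segment(x1, y1, x2, y2);
        return;
    }
    /* category 3: bisect and recurse on both halves */
    double xm = (x1 + x2) / 2.0, ym = (y1 + y2) / 2.0;
    midpoint_clip(x1, y1, xm, ym, xmin, xmax, ymin, ymax);
    midpoint_clip(xm, ym, x2, y2, xmin, xmax, ymin, ymax);
}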
Figure 5.5
Problem 1 Express the general window-to-viewport mapping as a single
transformation matrix.
Answer
Using the translate-scale-translate form from Section 5.2, with scaling factors
sx = (vxmax − vxmin)/(wxmax − wxmin) and sy = (vymax − vymin)/(wymax − wymin),
we have

N = ⎛ 1 0 vxmin ⎞ ⎛ sx 0 0 ⎞ ⎛ 1 0 −wxmin ⎞   ⎛ sx 0 −sx·wxmin + vxmin ⎞
    ⎜ 0 1 vymin ⎟ ⎜ 0 sy 0 ⎟ ⎜ 0 1 −wymin ⎟ = ⎜ 0 sy −sy·wymin + vymin ⎟
    ⎝ 0 0 1     ⎠ ⎝ 0 0  1 ⎠ ⎝ 0 0  1     ⎠   ⎝ 0 0  1                 ⎠
Problem 2 Find the normalization transformation that maps a window whose lower
left corner is at (1, 1) and upper right corner is at (3, 5) onto (a) a viewport that is the
entire normalized device screen and (b) a viewport that has lower left corner at (0, 0)
and upper right corner at (1/2, 1/2).
Answer
From Problem 1, we need to identify the appropriate parameters.

(a) The window parameters are wxmin = 1, wxmax = 3, wymin = 1, and wymax = 5. The
viewport parameters are vxmin = 0, vxmax = 1, vymin = 0, and vymax = 1. Then
sx = 1/2, sy = 1/4, and

N = ⎛ 1/2 0   −1/2 ⎞
    ⎜ 0   1/4 −1/4 ⎟
    ⎝ 0   0    1   ⎠

(b) The window parameters are the same as in (a). The viewport parameters are now
vxmin = 0, vxmax = 1/2, vymin = 0, vymax = 1/2. Then sx = 1/4, sy = 1/8, and

N = ⎛ 1/4 0   −1/4 ⎞
    ⎜ 0   1/8 −1/8 ⎟
    ⎝ 0   0    1   ⎠
Problem 3 Find a normalization transformation from the window whose lower left
corner is at (0, 0) and upper right corner is at (4, 3) onto the normalized device screen
so that aspect ratios are preserved.
Answer
The window aspect ratio is aw = 4/3. Unless otherwise indicated, we shall choose a
viewport that is as large as possible with respect to the normalized device screen. To
this end, we choose the x extent from 0 to 1 and the y extent from 0 to 3/4. So

av = 1/(3/4) = 4/3

As in Problem 1, with parameters wxmin = 0, wxmax = 4, wymin = 0, wymax = 3, and
vxmin = 0, vxmax = 1, vymin = 0, vymax = 3/4, we find sx = 1/4, sy = 1/4, and

N = ⎛ 1/4 0   0 ⎞
    ⎜ 0   1/4 0 ⎟
    ⎝ 0   0   1 ⎠
5.6 Area Clipping
Clipping a polygon against a window is more involved than clipping a line, since the
result of clipping a single polygon may be one or more new polygons. All in all, the
task of clipping seems rather complex. Each edge of the polygon must be tested
against each edge of the clip rectangle; new edges must be added, and existing edges
must be discarded, retained, or divided. Multiple polygons may result from clipping a
single polygon. We need an organized way to deal with all these cases.
Figure 5.6
The Sutherland-Hodgman polygon-clipping algorithm uses a divide-and-conquer
strategy: it solves a series of simple and identical problems that, when combined,
solve the overall problem. The simple problem is to clip a polygon against a single
infinite clip edge. Four clip edges, each defining one boundary of the clip rectangle,
successively clip a polygon against a clip rectangle.

Note the difference between this strategy for a polygon and the Cohen-Sutherland
algorithm for clipping a line: the polygon clipper clips against four edges in
succession, whereas the line clipper tests the outcode to see which edge is crossed,
and clips only when necessary.
• Polygons can be clipped against each edge of the window one at a time.
  Window/edge intersections, if any, are easy to find since the X or Y coordinate
  of the clip edge is already known.
• Note that the number of vertices usually changes and will often increase.
• We are using the divide-and-conquer approach.
The clip boundary determines a visible and invisible region. The edges from vertex i
to vertex i+1 can be one of four types:
• Case 1: Wholly inside visible region – save the endpoint
• Case 2: Exit visible region – save the intersection
• Case 3: Wholly outside visible region – save nothing
• Case 4: Enter visible region – save the intersection and the endpoint

Case 1: Wholly inside the visible region, save the endpoint (Fig. 5.7).

Figure 5.7
Case 2: Exit visible region, save the intersection (Figure 5.8).

Figure 5.8
Case 3: Wholly outside visible region, save nothing (Figure 5.9).

Figure 5.9
Case 4: Enter visible region, save the intersection and the endpoint (Figure 5.10).

Figure 5.10
Because clipping against one edge is independent of all others, it is possible to
arrange the clipping stages in a pipeline. The input polygon is clipped against one
edge and any points that are kept are passed on as input to the next stage of the
pipeline. This way four polygons can be at different stages of the clipping process
simultaneously.
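One stage of such a pipeline, clipping against the left boundary x = xmin, might be
sketched in C as follows; the four if-branches correspond exactly to the four cases
above (the names and the caller-supplied output array are our assumptions):

typedef struct { double x, y; } Pt;

/* Is p on the visible side of the clip edge x = xmin? */
static int inside(Pt p, double xmin) { return p.x >= xmin; }

/* Intersection of segment ab with the boundary line x = xmin. */
static Pt intersect(Pt a, Pt b, double xmin)
{
    Pt r;
    r.x = xmin;
    r.y = a.y + (b.y - a.y) * (xmin - a.x) / (b.x - a.x);
    return r;
}

/* One Sutherland-Hodgman stage: clip polygon in[] (n vertices) against the
   left boundary, writing the result to out[]; returns the new vertex count.
   The other three boundaries are handled by analogous stages. */
int clip_left(const Pt *in, int n, Pt *out, double xmin)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Pt s = in[i], e = in[(i + 1) % n];             /* edge from s to e */
        if (inside(s, xmin) && inside(e, xmin))        /* case 1 */
            out[m++] = e;
        else if (inside(s, xmin) && !inside(e, xmin))  /* case 2 */
            out[m++] = intersect(s, e, xmin);
        else if (!inside(s, xmin) && inside(e, xmin)) {/* case 4 */
            out[m++] = intersect(s, e, xmin);
            out[m++] = e;
        }                                              /* case 3: save nothing */
    }
    return m;
}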
5.7 Text Clipping
For text clipping various techniques are available in graphics packages. The clipping
technique used will depend upon the application requirements and how characters are
produced. Normally we have many simple methods for clipping text. One method for
processing character strings relative to a window boundary is to use the all-or-none
string-clipping strategy shown in Figure 5.11 below: if all of the string is inside the
clip window, we keep it; otherwise the string is discarded.
Figure 5.11
In this method, the boundary positions of a rectangle enclosing the string are
compared to the window boundaries, and the string is rejected if there is any overlap.
This method produces the fastest text clipping.
5.8 Window-To-Viewport Coordinate Transformation
Once object descriptions have been transferred to the viewing reference frame, we
choose the window extents in viewing coordinates and select the viewport limits in
normalized coordinates. Object descriptions are then transferred to normalized device
coordinates. We do this using a transformation that maintains the same relative
placement of objects in normalized space as they had in viewing coordinates. If a
coordinate position is at the center of the viewing window, for instance, it will be
displayed at the center of the viewport.
Figure 5.12 illustrates the window-to-viewport mapping. A point at position (xw, yw)
in the window is mapped into position (xv, yv) in the associated viewport. To
maintain the same relative placement in the viewport as in the window, we require
that

(xv − xvmin)/(xvmax − xvmin) = (xw − xwmin)/(xwmax − xwmin)
(yv − yvmin)/(yvmax − yvmin) = (yw − ywmin)/(ywmax − ywmin)
Figure 5.12: A point at position (xw, yw) in a designated window is mapped to
viewport coordinates (xv, yv) so that relative positions in the two areas are the same.
Solving these expressions for the viewport position (xv, yv), we have

xv = xvmin + (xw − xwmin)·sx
yv = yvmin + (yw − ywmin)·sy

where the scaling factors are

sx = (xvmax − xvmin)/(xwmax − xwmin)
sy = (yvmax − yvmin)/(ywmax − ywmin)
The above equations can also be derived with a set of transformations that converts
the window area into the viewport area. This conversion is performed with the
following sequence of transformations:

1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin)
   that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.

Relative proportions of objects are maintained if the scaling factors are the same
(sx = sy); otherwise, world objects will be stretched or contracted in either the x or y
direction when displayed on the output device.
Character strings can be handled in two ways when they are mapped to a viewport.
The simplest mapping maintains a constant character size, even though the viewport
area may be enlarged or reduced relative to the window. This method would be
employed when text is formed with standard character fonts that cannot be changed.
In systems that allow for changes in character size, string definitions can be
windowed the same as other primitives. For characters formed with line segments, the
mapping to the viewport can be carried out as a sequence of line transformations.
From normalized coordinates, object descriptions are mapped to the various display
devices. Any number of output devices can be open in a particular application, and
another window-to-viewport transformation can be performed for each open output
device. This mapping, called the workstation transformation, is accomplished by
selecting a window area in normalized space and a viewport area in the coordinates of
the display device. With the workstation transformation, we gain some additional
control over the positioning of parts of a scene on individual output devices. As
illustrated in Fig. 5.13, we can use workstation transformations to partition a view so
that different parts of normalized space can be displayed on different output devices.
5.9 Exterior and Interior Clipping
So far, we have considered only procedures for clipping a picture to the interior of a
region by eliminating everything outside the clipping region. What is saved by these
procedures is inside the region. In some cases, we want to do the reverse, that is, we
want to clip a picture to the exterior of a specified region. The picture parts to be
saved are those that are outside the region. This is referred to as exterior clipping.
Exterior clipping is used also in other applications that require overlapping pictures.
Examples here include the design of page layouts in advertising or publishing
applications or for adding labels or design patterns to a picture. The technique can
also be used for combining graphs, maps, or schematics. For these applications, we
can use exterior clipping to provide a space for an insert into a larger picture.
Procedures for clipping objects to the interior of concave polygon windows can also
make use of external clipping. Figure 5.14 shows a line P1P2 that is to be clipped to
the interior of a concave window with vertices V1V2V3V4V5. Line P1P2 can be
clipped in two passes: (1) first, P1P2 is clipped to the interior of the convex polygon
V1V2V3V4 to yield the clipped segment P′1P′2; (2) then an external clip of P′1P′2 is
performed against the convex polygon V1V5V4 to yield the final clipped line segment
P″1P′2.
Figure 5.14: Clipping line P1P2 to the interior of a concave polygon with vertices
V1V2V3V4V5 (a), using convex polygons V1V2V3V4 (b) and V1V5V4 (c), to produce
the clipped line P″1P′2
5.10 Summary
• A window is a rectangular region surrounding the object, or a part of it, that we
  wish to draw on the screen.
• A viewport brings the window content to the screen; window coordinates are
  based on the world coordinate system.
• Complex transformations can be done by multiplying the basic matrix
  transformations.
• Shear transformation distorts an object by scaling one coordinate using the
  other, as if the object were composed of internal layers that have been caused to
  slide over each other.

5.11 Key Words
6. Find the workstation transformation that maps the normalized device screen onto a
   physical device whose x extent is 0 to 199 and whose y extent is 0 to 639, where
   the origin is located at the lower left corner.
References/Suggested Readings
1. Computer Graphics (C Version), Donald Hearn and M. Pauline Baker, Prentice Hall
2. Computer Graphics, Second Edition, Pradeep K. Bhatia, I.K. International Publisher
3. Advanced Animation and Rendering Techniques, Theory and Practice, Alan Watt
   and Mark Watt, ACM Press/Addison-Wesley
4. Graphics Gems I-V, various authors, Academic Press
5. Computer Graphics, Plastok, TMH
6. Principles of Interactive Computer Graphics, Newman, TMH
6.0 Objectives
At the end of this chapter the reader will be able to:
• Describe the concept of pointing and positioning
• Describe various interactive graphic devices
• Describe various interactive graphic techniques including positioning,
  constraints, grids, gravity field, inking, painting, dragging, and rubber-band
  techniques

Structure
6.1 Introduction
6.2 Concept of Positioning and Pointing
6.3 Interactive Graphic Devices
6.4 Interactive Graphical Techniques
6.5 Summary
6.6 Key Words
6.7 Self Assessment Questions (SAQ)
6.8 References/Suggested Readings
6.1 Introduction
Hardware cost, long considered the major bottleneck to progress, is plummeting;
attention has therefore shifted to the ways in which people communicate with
computers. For that reason, techniques for developing high-quality user interfaces are
moving to the forefront in computer science and are becoming the "last frontier" in
providing computing to a wide variety of users, as other aspects of technology
continue to improve but the human users remain the same. Interest in the quality of
user-computer interfaces is a recent part of the formal study of computers. The
emphasis until the early 1980s was on optimizing two scarce hardware resources,
computer time and memory. Program efficiency was the highest goal. With today's
plummeting hardware costs and powerful graphics-oriented personal computing
environments, the focus turns to optimizing user efficiency rather than computer
efficiency. Thus, although many of the ideas presented in this chapter require
additional CPU cycles and memory space, the potential rewards in user productivity
and satisfaction well outweigh the modest additional cost of these resources. The
quality of the user interface often determines whether users enjoy or despise a
system, whether the designers of the system are praised or damned, whether a system
succeeds or fails in the market. Indeed, a poor user interface, such as in air traffic
control or in nuclear power plant monitoring, can contribute to disaster. A well-
designed graphical interface, by contrast, requires little typing skill. Most users of
such systems are not computer programmers and have little sympathy for the old-style
difficult-to-learn keyboard-oriented command-language interfaces that many
programmers take for granted. The designer can no longer assume that users will
simply adapt to whatever devices are provided on their terminals. The mouse and
keyboard now predominate, but a wide variety of input devices can be used.

An interaction task is the entry of a unit of information by the user. Basic interaction
tasks are position, text, select, and quantify. The unit of information that is input in a
position interaction task is of course a position; the text task yields a text string; the
select task yields an object identification; and the quantify task yields a numeric
value. A designer begins with the interaction tasks necessary for a particular
application. For each such task, the designer chooses an appropriate interaction device
and interaction technique. Many different interaction techniques can be used for a
given interaction task, and there may be several different ways of using the same
device to perform the same task. For instance, a selection task can be carried out by
using a mouse to select items from a menu, using a keyboard to enter the name of the
selection, pressing a function key, circling the desired command with the mouse, or
even writing the name of the command with the mouse. Interaction tasks are defined
by what the user accomplishes, whereas logical input devices categorize how that task
is accomplished by the application program and the graphics system. Interaction tasks
are user-centered, whereas logical input devices are device-centered.
6.2 Concept of Positioning and Pointing
Most display terminals provide the user with an alphanumeric keyboard with which to
type commands and enter data for the program. For some applications, however, the
keyboard is inconvenient or inadequate. For example, the user may wish to indicate
one of a number of symbols on the screen, in order to erase the symbol. If each
symbol is labeled, he can do so by typing the symbol's name; by pointing at the
symbol, however, he may be able to erase more rapidly, and the extra clutter of labels
can be avoided.
Another problem arises if the user has to add lines or symbols to the picture on the
screen. Although he can identify an item's position by typing coordinates, he can do
so even better by pointing at the screen, particularly if what matters most is the
item's position relative to the rest of the picture.

These two examples illustrate the two basic types of graphical interaction: pointing at
items already on the screen and positioning new items. The need to interact in these
ways has stimulated the development of a number of different types of graphical input
device, some of which are described in this chapter.
Ideally a graphical input device should lend itself both to pointing and to positioning.
In reality there are no devices with this versatility. Most devices are much better at
positioning than at pointing; one device, the light pen, is the exact opposite.
Fortunately, however, we can supplement the deficiencies of these devices by software
and in this way produce a hardware-software system that has both capabilities.
Nevertheless the distinction between pointing and positioning capability is extremely
important.
Another important distinction is between devices that can be used directly on the
screen surface and devices that cannot. The latter might appear to be less useful, but
this is far from true. Radar operators and air-traffic controllers have for years used
devices like the joystick and the tracker ball, neither of which can be pointed at the
screen. The effectiveness of these input devices depends on the use of visual
feedback: the x and y outputs of the device control the movement of a small cross, or
cursor, displayed on the screen. The user of the device steers the cursor around the
screen as if it were a toy boat on the surface of a pond. Although this operation sounds
clumsy, it is in fact quite easy in practice.
The use of visual feedback has an additional advantage: just as in any control system,
it compensates for any lack of linearity in the device. A linear input device is one that
faithfully increases or decreases the input coordinate value in exact proportion to the
user's hand movement. If the device is being used to trace a graph or a map, linearity
is important. A cursor, however, can be controlled quite easily even if the device
behaves in a fairly nonlinear fashion. For example, the device may be much less
sensitive near the left-hand region of its travel: a 1-inch hand movement may
change the x value by only 50 units, whereas the same movement elsewhere may
change x by 60 units. The user will simply change his hand movement to compensate,
often without even noticing the nonlinearity. This phenomenon has allowed simple,
inexpensive devices like the mouse to be used very successfully for graphical input.
6.3 Interactive Graphic Devices
Various devices are available for data input on graphics workstations. Most systems
have a keyboard and one or more additional devices specially designed for interactive
input. These include a mouse, trackball, spaceball, joystick, digitizers, dials, and
button boxes. Some other input devices used in particular applications are data gloves,
touch panels, image scanners, and voice systems.
6.3.1 Keyboards
The well-known QWERTY keyboard has been with us for many years. It is ironic that
this keyboard was originally designed to slow down typists, so that the typewriter
hammers would not be so likely to jam. Studies have shown that the newer Dvorak
keyboard, which places vowels and other high-frequency characters under the home
positions of the fingers, is somewhat faster than the QWERTY design; it has not,
however, been widely accepted. Alphabetically organized keyboards are sometimes
used when many of the users are non-typists, but more and more people are being
exposed to QWERTY keyboards, and experiments have shown no advantage of
alphabetic over QWERTY keyboards. In recent years, the chief force serving to
displace the keyboard has been the shrinking size of computers, with laptops,
notebooks, palmtops, and personal digital assistants. The typewriter keyboard is
becoming the largest component of such pocket-sized devices, and often the main
component standing in the way of reducing their overall size. The chord keyboard has
five keys similar to piano keys, and is operated with one hand, by pressing one or
more keys simultaneously to "play a chord." With five keys, 31 different chords can
be played. Learning to use a chord keyboard (and other similar stenographer-style
keyboards) takes longer than learning the QWERTY keyboard, but skilled users can
type quite rapidly, leaving the second hand free for other tasks. This increased
training time means, however, that such keyboards are not suitable substitutes for
general use of the standard alphanumeric keyboard. Again, as computers become
smaller, the benefit of a keyboard that allows touch typing with only five keys may
come to outweigh the additional difficulty of learning the chords. Other keyboard-
oriented considerations, involving not hardware but software design, are arranging for
a user to enter frequently used punctuation or correction characters without needing
simultaneously to press the control or shift keys, and assigning dangerous actions
(such as delete) to keys that are distant from other frequently used keys.
6.3.2 Touch Panels
As the name implies, touch panels allow displayed objects or screen positions to be
selected with the touch of a finger. A typical application of touch panels is for the
selection of processing options that are represented with graphical icons. Other
systems can be adapted for touch input by fitting a transparent device with a touch-
sensing mechanism over the video monitor screen. Touch input can be recorded using
optical, electrical, or acoustical methods.

Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one
vertical edge and along one horizontal edge of the frame. The opposite vertical and
horizontal edges contain light detectors. These detectors are used to record which
beams are interrupted when the panel is touched. The two crossing beams that are
interrupted identify the horizontal and vertical coordinates of the screen position
selected. Positions can be selected to within a fraction of an inch. With closely
spaced LEDs, it is possible to break two horizontal or two vertical beams
simultaneously. In this case, an average position between the two interrupted beams is
recorded. The LEDs operate at infrared frequencies, so that the light is not visible to a
user.

An electrical touch panel is constructed with two transparent plates separated by
a small distance. One of the plates is coated with a conducting material, and the other
plate is coated with a resistive material. When the outer plate is touched, it is forced
into contact with the inner plate. This contact creates a voltage drop across the
resistive plate that is converted to the coordinate values of the selected screen
position.

In acoustical touch panels, high-frequency sound waves are generated across a glass
plate in the horizontal and vertical directions; the point of contact is calculated from a
measurement of the time interval between the transmission of each wave and its
reflection to the emitter.
6.3.3 Light Pens
These pencil-shaped devices are used to select screen positions by detecting the light
coming from points on the CRT screen. They are sensitive to the short burst of light
emitted from the phosphor coating at the instant the electron beam strikes a particular
point. Other light sources, such as the background light in the room, are usually not
detected by a light pen. An activated light pen, pointed at a spot on the screen as the
electron beam lights up that spot, generates an electrical pulse that causes the
coordinate position of the electron beam to be recorded. As with cursor-positioning
devices, recorded light-pen coordinates can be used to position an object or to select a
processing option. Although light pens are still with us, they are not as popular as they
once were, since they have several disadvantages compared to other input devices that
have been developed. For one, when a light pen is pointed at the screen, part of the
screen image is obscured by the hand and pen. And prolonged use of the light pen can
cause arm fatigue. Also, light pens require special implementation for some
applications because they cannot detect positions within black areas. To be able to
select positions in any screen area with a light pen, we must have some nonzero
intensity assigned to each screen pixel. In addition, light pens sometimes give false
readings due to background lighting levels.
6.3.4 Digitizers
One type of digitizer is the graphics tablet (also referred to as a data tablet), which is
used to input two-dimensional coordinates by activating a hand cursor or stylus at
selected positions on a flat surface. A hand cursor contains cross hairs for sighting
positions, while a stylus is a pencil-shaped device that is pointed at positions on the
tablet. Some styluses are pressure-sensitive; this allows an artist to produce different
brush strokes with different pressures on the tablet surface. Tablet size varies from 12
by 12 inches for desktop models to 44 by 60 inches or larger for floor models.
Graphics tablets provide a highly accurate method for selecting coordinate positions,
with an accuracy that varies from about 0.2 mm on desktop models to about 0.05 mm
or less on larger models. Many graphics tablets are constructed with a rectangular
grid of wires embedded in the tablet surface. Electromagnetic pulses are generated in
sequence along the wires, and an electric signal is induced in a wire coil in an
activated stylus or hand cursor to record a tablet position. Depending on the
technology, either signal strength, coded pulses, or phase shifts can be used to
determine the position on the tablet.
6.3.5 Joysticks
A joystick consists of a small, vertical lever (called the stick) mounted on a base that
is used to steer the screen cursor around. Most joysticks select screen positions with
actual stick movement; others respond to pressure on the stick. The distance that the
stick is moved in any direction from its center position corresponds to screen-cursor
movement in that direction. Potentiometers mounted at the base of the joystick
measure the amount of movement, and springs return the stick to the center position
when it is released. One or more buttons can be programmed to act as input switches
to signal certain actions once a screen position has been selected.
6.3.6 Mouse
A mouse is a small hand-held box used to position the screen cursor. Wheels or rollers
on the bottom of the mouse can be used to record the amount and direction of
movement. Another method for detecting mouse motion is with an optical sensor. For
these systems, the mouse is moved over a special mouse pad that has a grid of
horizontal and vertical lines. The optical sensor detects movement across the lines in
the grid.

Since a mouse can be picked up and put down at another position without change in
cursor movement, it is used for making relative changes in the position of the screen
cursor. One, two, or three buttons are usually included on the top of the mouse for
signaling the execution of some operation, such as recording cursor position or
invoking a function.
6.3.7 Voice Systems
Speech recognizers are used in some graphics workstations as input devices to accept
voice commands. The voice-system input can be used to initiate graphics operations
or to enter data.
A dictionary is set up for a particular operator by having the operator speak the
command words to be used into the system. Each word is spoken several times, and
the system analyzes the word and establishes a frequency pattern for that word in the
dictionary along with the corresponding function to be performed. Later, when a voice
command is given, the system searches the dictionary for a frequency-pattern match.
Voice input is typically spoken into a microphone mounted on a headset. The
microphone is designed to minimize input of other background sounds. If a different
operator is to use the system, the dictionary must be reestablished with that operator's
voice patterns. Voice systems have some advantage over other input devices, since the
attention of the operator does not have to be switched from one device to another to
enter a command.
6.3.8 Other Devices
Here we discuss some of the less common, and in some cases experimental, 2D
interaction devices. Voice recognizers, which are useful because they free the user's
hands for other uses, apply a pattern-recognition approach to the waveforms created
when we speak a word. The waveform is typically separated into a number of
different frequency bands, and the variation over time of the magnitude of the
waveform in each band forms the basis for the pattern matching. However, mistakes
can occur in the pattern matching, so it is especially important that an application
provide convenient feedback and correction; with discrete-word recognizers the user
must also pause briefly after each word to cue the system that a word end has
occurred. The more difficult task of recognizing connected speech from a limited
vocabulary can now be performed by off-the-shelf hardware and software, but with
somewhat less accuracy. As the vocabulary becomes larger, however, artificial-
intelligence techniques are needed to exploit context in resolving ambiguities. Speech
generation is the reverse process: the simplest systems concatenate basic speech
sounds, while phoneme-based systems add inflections. Other systems actually play
back digitized spoken words or phrases. They sound realistic, but require more
memory to store the digitized speech. Speech is best used to augment rather than to
replace visual feedback, and is most effective when used sparingly. For instance, a
training application could show a student a graphic animation of some process, along
with a voice narration describing what is being seen. See the literature for additional
guidelines on the effective application of speech recognition and generation in user-
computer interfaces, for an introduction to speech interfaces, and for speech
recognition technology.
The data tablet has been extended in several ways. Many years ago, Herot and
Negroponte used an experimental pressure-sensitive stylus: high pressure and a slow
drawing speed implied that the user was drawing a line with deliberation, in which
case the line was recorded exactly as drawn; low pressure and fast speed implied that
the line was being drawn quickly, in which case a straight line connecting the
endpoints was recorded. Some commercially available tablets sense not only stylus
pressure but orientation as well. The resulting 5 degrees of freedom reported by the
tablet can be used in various creative ways. For example, Bleser, Sibert, and McGee
implemented the GWPaint system to simulate various artist's tools, such as an italic
pen, that are sensitive to pressure and orientation. An experimental touch tablet,
developed by Buxton and colleagues, can sense multiple finger positions
simultaneously, and can also sense the area covered at each point of contact. The
device is essentially a type of touch panel, but is used as a tablet on the work surface,
not as a touch panel mounted over the screen. The device can be used in a rich variety
of ways. Different finger pressures correlate with the area covered at a point of
contact, and are used to signal user commands: a light pressure causes a cursor to
appear and to track finger movement; increased pressure is used, like a button-push
on a mouse or puck, to begin feedback such as dragging of an object; decreased
pressure causes the dragging to stop.
6.4 Interactive Graphical Techniques
There are several techniques that are incorporated into graphics packages to aid the
interactive construction of pictures. Various input options can be provided, so that
coordinate information entered with locator and stroke devices can be adjusted or
interpreted according to a selected option. For example, we can restrict all lines to be
either horizontal or vertical. Input coordinates can establish the position or boundaries
for objects to be drawn, or they can be used to rearrange previously displayed objects.
6.4.1 Basic Positioning Methods
Coordinate values supplied by locator input are often used with positioning methods
to specify a location for displaying an object or a character string. We interactively
select coordinate positions with a pointing device, usually by positioning the screen
cursor. Just how the object or text-string positioning is performed depends on the
selected options. With a text string, for example, the screen point could be taken as
the center string position, or the start or end position of the string, or any of the other
string-positioning options. For lines, straight line segments can be displayed between
two selected screen positions.

As an aid in positioning objects, numeric values for selected positions can be echoed
on the screen. Using the echoed coordinate values as a guide, we can make
adjustments in the selected location to obtain accurate positioning.
6.4.2 Constraints
A constraint is a rule for altering input-coordinate values to produce a specified
orientation or alignment of the displayed coordinates. There are many kinds of
constraint functions that can be specified, but the most common constraint is a
horizontal or vertical alignment of straight lines. This type of constraint, shown in
Figs. 6.1 and 6.2, is useful in forming network layouts. With this constraint, we can
create horizontal and vertical lines without worrying about precise specification of
endpoint coordinates.
create horizontal and vertical lines without worrying a-bout precise specification of
endpoint coordinates.
www.padeepz.net
www.padeepz.net
t
ne
Figure 6.1: Horizontal line constraint
Figure 6.2: Vertical line constraint
Horizontal or vertical line constraints are implemented by determining whether any
two input coordinate positions are more nearly horizontal or more nearly vertical. If
the difference in y values of the two positions is smaller than the difference in x
values, a horizontal line is displayed. Otherwise, a vertical line is drawn. Other kinds
of constraints can be applied to input coordinates to produce a variety of alignments.
Lines could be constrained to have a particular slant, such as 45°, and input
coordinates could be constrained to lie along predefined paths, such as circular arcs.
6.4.3 Grids
Another kind of constraint is a grid of rectangular lines displayed in some part of the screen area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid lines. Figure 6.3 illustrates line drawing using a grid. Each of the two cursor positions is shifted to the nearest grid intersection point, and the line is drawn between these grid points. Grids facilitate object constructions, because a new line can be joined easily to a previously drawn line by selecting any position near the endpoint grid intersection of one end of the displayed line.
Figure 6.3: Line drawing using a grid
Spacing between grid lines is often an option that can be set by the user. Similarly, grids can be turned on and off, and it is sometimes possible to use partial grids and grids of different sizes in different screen areas.
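The rounding of an input position to the nearest grid intersection is simple to implement. The following is a minimal Python sketch, assuming a uniform grid whose spacing can be set by the user (the function name and default spacing are illustrative):

def snap_to_grid(x, y, spacing=10.0):
    # Round an input coordinate to the nearest intersection of two grid lines.
    gx = round(x / spacing) * spacing
    gy = round(y / spacing) * spacing
    return gx, gy

# Both endpoints of a line are snapped before the line is drawn:
p1 = snap_to_grid(23.0, 47.0)   # -> (20.0, 50.0)
p2 = snap_to_grid(98.0, 12.0)   # -> (100.0, 10.0)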
6.4.4 Gravity Field
In the construction of figures, we sometimes need to connect lines at positions between endpoints. Since exact positioning of the screen cursor at the connecting point can be difficult, graphics packages can be designed to convert any input position near a line to a position on the line.
The conversion is accomplished by creating a gravity field area around the line, illustrated with the shaded boundary shown in Fig. 6.4. Areas around the endpoints are enlarged to make it easier for us to connect lines at their endpoints. Selected positions in one of the circular areas of the gravity field are attracted to the endpoint in that area. The size of gravity fields is chosen large enough to aid positioning, but small enough to reduce chances of overlap with other lines. If many lines are displayed, gravity areas can overlap, and it may be difficult to specify points correctly. Normally, the boundary for the gravity field is not displayed.
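In code, the gravity field test reduces to computing distances from the input point to the endpoints and to the line itself. The sketch below illustrates the idea under the assumptions that the endpoint circles are checked first and that the field radii are fixed; the names and radii are illustrative, not taken from any particular package:

import math

def gravity_snap(px, py, x1, y1, x2, y2, endpoint_r=8.0, line_r=4.0):
    # Attract (px, py) to a segment endpoint, or onto the segment itself,
    # if it falls inside the corresponding gravity area.
    for ex, ey in ((x1, y1), (x2, y2)):
        if math.hypot(px - ex, py - ey) <= endpoint_r:
            return ex, ey                      # attracted to an endpoint
    dx, dy = x2 - x1, y2 - y1
    denom = dx * dx + dy * dy
    if denom > 0:
        t = ((px - x1) * dx + (py - y1) * dy) / denom
        if 0.0 <= t <= 1.0:
            fx, fy = x1 + t * dx, y1 + t * dy  # foot of the perpendicular
            if math.hypot(px - fx, py - fy) <= line_r:
                return fx, fy                  # attracted onto the line
    return px, py                              # outside the gravity field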
Figure 6.4: Gravity field around a line. Any selected point in the shaded area is
shifted to a position on the line
6.4.5 Rubber-Band Methods
Straight lines can be constructed and positioned using rubber-band methods, which stretch out a line from a starting position as the screen cursor is moved. Figure 6.5 demonstrates the rubber-band method. We first select a screen position for one endpoint of the line. Then, as the cursor moves around, the line is displayed from the start position to the current position of the cursor. When we finally select a second screen position, the other line endpoint is set.
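In an event-driven program, the rubber-band method amounts to erasing and redrawing the provisional line on every cursor-motion event. A schematic Python sketch follows; the event names and the draw/erase callbacks are generic placeholders rather than a specific toolkit's API:

def rubber_band_line(events, draw_line, erase_line):
    # Stretch a line from a fixed start position to the moving cursor.
    start = last = None
    for kind, pos in events:          # events yield ("press"|"move"|"release", (x, y))
        if kind == "press":
            start = last = pos
        elif kind == "move" and start is not None:
            erase_line(start, last)   # remove the previous provisional line
            draw_line(start, pos)     # draw the new one
            last = pos
        elif kind == "release" and start is not None:
            return start, pos         # the second endpoint is now set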
Figure 6.5: Rubber-band method for drawing and positioning a straight line
segment
Rubber-band methods are used to construct and position other objects besides straight
lines. Figure 6.6 demonstrates rubber-band construction of a rectangle, and Fig. 6.7
shows a rubber-band circle construction.
Figure 6.6: Rubber-band method for constructing a rectangle
Figure 6.7: Rubber-band method for constructing a circle
6.4.6 Sketching
Options for sketching, drawing, and painting come in a variety of forms. Straight lines, polygons, and circles can be generated with methods discussed in the previous sections. Curve-drawing options can be provided using standard curve shapes, such as circular arcs and splines, or with freehand sketching procedures. Splines are interactively constructed by specifying a set of discrete screen points that give the general shape of the curve. Then the system fits the set of points with a polynomial curve. In freehand drawing, curves are generated by following the path of a stylus on a graphics tablet or the path of the screen cursor on a video monitor. Once a curve is displayed, the designer can alter the curve shape by adjusting the positions of selected points along the curve path.
Figure 6.8: Using rubber-band methods to create objects consisting of connected line segments
Line widths, line styles, and other attribute options are also commonly found in painting and drawing packages. Various brush styles, brush patterns, color combinations, object shapes, and surface-texture patterns are also available on many systems, particularly those designed as artist's workstations. Some paint systems vary the line width and brush strokes according to the pressure of the artist's hand on the stylus.
6.4.7 Dragging
A technique often used in interactive picture construction is to move objects into position by dragging them with the screen cursor. We first select an object, then move the cursor in the direction we want the object to move, and the selected object follows the cursor path. Dragging objects to various positions in a scene is useful in applications where we might want to explore different possibilities before selecting a final location.
6.4.8 Inking
If we sample the position of a stylus or screen cursor and display a dot at each sampled position, a trail of the movement path appears on the screen. This technique, which closely simulates the effect of drawing on paper, is called inking. For many years the main use of inking has been in conjunction with on-line character-recognition programs.
6.4.9 Painting
It is possible to provide a range of tools for painting on a raster display: these tools take the form of brushes that lay down trails of different thicknesses and colors. For example, instead of depositing a single dot at each sampled input position, the program can insert a group of dots so as to fill in a square or circle: the result will be a much thicker trace. On a black-and-white display the user needs brushes that paint in both black and white, so that information can be both added and removed (Figure 6.9). When a color display is used for painting, a menu of different colors can be provided.
6.5 Summary
• An interaction technique is a way of using a physical input/output device to perform a generic task in a human-computer dialogue.
• The basic interaction tasks for interactive graphics are positioning, selecting,
entering text, and entering numeric quantities.
• Input devices can operate in request, sample, or event mode; event mode allows input devices to initiate data entry and control processing of data. Once a mode has been chosen for a logical device class and the particular physical device to be used to enter this class of data, input functions in the program are used to enter data values into the program. An application program can make simultaneous use of several physical input devices operating in different modes.
• Interactive picture-construction methods are commonly used in a variety of applications, including design and painting packages. These methods provide users with the capability to position objects, to constrain figures to predefined orientations or alignments, to sketch figures, and to drag objects around the screen. Grids, gravity fields, and rubber-band methods are used to aid in positioning and other picture-construction operations.
6.6 Key Words
Positioning, Pointing, Interactive Graphics, Inking, Painting, Dragging
6.7 Self Assessment Questions (SAQ)
1. Discuss some hardware interaction devices with their advantages and disadvantages.
2. Define precisely the various interactive techniques.
3. Discuss how pointing techniques are different from positioning techniques.
c) Dragging
d) Constrained painting
6.8 References/Suggested Readings
7.0 Objectives
At the end of this chapter the reader will be able to:
• Describe three-dimensional transformations, viewing, and clipping
• Describe parallel and perspective projections
• Describe the concept of hidden line and surface elimination methods like Z-buffer, scan-line, painter's, and subdivision
• Describe the wireframe model
• Describe Bezier curves and surfaces
Structure
7.1 Introduction
7.2 Three–Dimensional Transformations
7.3 Projections
7.4 Three-Dimensional Viewing and Clipping
7.5 Wireframe Models
7.6 Bezier Curves and Surfaces
ad
7.1 Introduction
We now turn to transformations in three dimensions. In most cases, the mathematics
of linear transformations is easy to extend from two dimensions to three, but the
discussion here demonstrates that certain transformations, most notably rotations, are
more complex in three dimensions because there are more directions about which to
rotate and because the simple terms clockwise and counterclockwise no longer apply.
We start with a short discussion of coordinate systems in three dimensions.
In two dimensions, there is only one Cartesian coordinate system, with two perpendicular axes labeled x and y (actually, the axes don't have to be perpendicular, but this is irrelevant for our discussion of transformations). A coordinate system in three dimensions consists similarly of three perpendicular axes labeled x, y, and z, but there are two such systems, a left-handed and a right-handed (Figure 7.1a), and they are different. A right-handed coordinate system is constructed by the following rule. Align your right thumb with the positive x axis and your right index finger with the positive y axis. Your right middle finger will then point in the direction of positive z. The rule for a left-handed system uses the left hand in a similar manner. It is also possible to define a left-handed coordinate system as the mirror image (reflection) of a right-handed one. Notice that one coordinate system cannot be transformed into the other by translating or rotating it. The difference between left-handed and right-handed coordinate systems becomes important when a three-dimensional object is projected on a two-dimensional screen. We assume that the screen is positioned at the xy plane with its origin (i.e., its bottom left corner) at the origin of the three-dimensional system. We also assume that the object to be projected is located on the positive side of the z axis and the viewer is located on the negative side, looking at the projection of the image on the screen. Figure 7.1b shows that in a left-handed three-dimensional coordinate system, the directions of the positive x and y axes on the screen coincide with those of the three-dimensional x and y axes. In a right-handed system (Figure 7.1c), though, the two-dimensional x axis (on the screen) and the three-dimensional x axis point in opposite directions.
Figure 7.1 Three Dimensional Coordinate System
7.2 Three-Dimensional Transformations
In a geometric transformation, an object is moved relative to a stationary coordinate system. In a coordinate transformation, the object is fixed and the desired transformation of the object is done on the coordinate system itself. These transformations are formed by composing the basic transformations of translation, scaling, and rotation. The construction of complex objects and scenes is facilitated by the use of instance transformations. The concepts and transformations introduced here are direct generalizations of those introduced for two-dimensional transformations.
7.2.1 Geometric Transformations
An object in three-dimensional space can be considered as a set of points:
Obj = { P(x, y, z) }
If the object is moved to a new position, we can regard it as a new object Obj', all of whose coordinate points P'(x', y', z') can be obtained from the original coordinate points P(x, y, z) of Obj through the application of a geometric transformation.
Translation
An object is displaced a given distance and direction from its original position. The direction and displacement of the translation is prescribed by a vector
V = aI + bJ + cK
The new coordinates of a translated point can be calculated by using the transformation
Tv:  x' = x + a,  y' = y + b,  z' = z + c
In homogeneous coordinates, this is expressed with the 4 × 4 translation matrix:
( x' )   ( 1 0 0 a ) ( x )
( y' ) = ( 0 1 0 b ) ( y )
( z' )   ( 0 0 1 c ) ( z )
( 1  )   ( 0 0 0 1 ) ( 1 )
Figure 7.2
Scaling
The process of scaling changes the dimensions of an object. The scale factor s determines whether the scaling is a magnification (s > 1) or a reduction (s < 1). Scaling with respect to the origin, where the origin remains fixed, is effected by the transformation
S(sx, sy, sz):  x' = sx · x,  y' = sy · y,  z' = sz · z
In matrix form:
S(sx, sy, sz) =
( sx  0   0  )
(  0  sy  0  )
(  0   0  sz )
Rotation
Rotation in three dimensions is considerably more complex than rotation in two dimensions. In two dimensions, a rotation is prescribed by an angle of rotation and a centre of rotation. Three-dimensional rotations require the prescription of an angle of rotation and an axis of rotation. The canonical rotations are defined when one of the positive x, y, or z coordinate axes is chosen as the axis of rotation. Then the construction of the rotation transformation proceeds just like that of a rotation in two dimensions about the origin (see Fig. 7.3).
Figure 7.3
Rθ,K:  x' = x cosθ − y sinθ,  y' = x sinθ + y cosθ,  z' = z
Similarly:
Rθ,I:  x' = x,  y' = y cosθ − z sinθ,  z' = y sinθ + z cosθ
Rθ,J:  x' = x cosθ + z sinθ,  y' = y,  z' = −x sinθ + z cosθ
Note that the direction of a positive angle of rotation is chosen in accordance with the right-hand rule with respect to the axis of rotation. In matrix form, the three canonical rotations are:
Rθ,K =
( cosθ  −sinθ   0 )
( sinθ   cosθ   0 )
(   0      0    1 )

Rθ,J =
(  cosθ   0   sinθ )
(    0    1     0  )
( −sinθ   0   cosθ )

Rθ,I =
( 1     0       0   )
( 0   cosθ   −sinθ  )
( 0   sinθ    cosθ  )
The general rotation about an arbitrary axis L can be built up from these canonical rotations using matrix multiplication.
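These canonical matrices translate directly into code. The following minimal NumPy sketch builds the homogeneous translation matrix and the canonical rotation Rθ,K about the z-axis, using the column-vector convention of the matrices above (the function names are illustrative):

import numpy as np

def translation(a, b, c):
    # 4x4 homogeneous translation by the vector V = aI + bJ + cK.
    T = np.eye(4)
    T[:3, 3] = [a, b, c]
    return T

def rotation_z(theta):
    # 4x4 homogeneous form of the canonical rotation R(theta),K.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0, 1.0])                     # the point (1, 0, 0)
p2 = rotation_z(np.pi / 2) @ translation(2, 0, 0) @ p  # translate, then rotate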
7.2.2 Rotation About an Arbitrary Axis in Space
The rotation about an arbitrary axis in space occurs in robotics, animation, and simulation. It is accomplished using a sequence of translations and simple rotations about the coordinate axes (see Figure 7.4). We know the technique for rotation about a coordinate axis. The procedural idea is to make the arbitrary rotation axis coincident with one of the coordinate axes.
Let us assume that an arbitrary axis in space passes through the point (x0, y0, z0) with direction cosines (Cx, Cy, Cz). Rotation about this axis by some angle δ is accomplished using the following procedure:
.p
y y’
w
x’
z’
x
w
z
w
Figure 7.4
1. Translate the body so that the point (x0, y0, z0) lies at the origin of the coordinate system.
2. Do appropriate rotations to make the axis of rotation coincident with the z-axis (the selection of the z-axis is arbitrary; we can choose any axis).
3. Rotate the body about the z-axis by the required angle δ.
4. Apply the inverse of the rotations in step 2.
5. Apply the inverse of the translation in step 1.
Now our aim is to make the arbitrary rotation axis coincident with the z-axis. For this, first rotate about the x-axis and then about the y-axis.
To find the rotation angle α about the x-axis used to place the arbitrary axis in the xz-plane, first project the unit vector along the axis onto the yz-plane, as shown in Fig. 7.5(a).
Figure 7.5: (a) Rotation about the x-axis; (b) rotation about the y-axis
The projected components are Cy and Cz, the direction cosines of the unit vector along the arbitrary axis. From Fig. 7.5(a) we note that
d = √(Cy² + Cz²)      (1)
and
cos α = Cz / d,  sin α = Cy / d      (2)
After the rotation about the x-axis places the axis in the xz-plane, the z-component of the unit vector is d and the x-component is Cx, the direction cosine in the x-direction, as shown in Fig. 7.5(b). Since the length of the unit vector is 1, the rotation angle β about the y-axis required to make the arbitrary axis coincident with the z-axis satisfies
cos β = d,  sin β = Cx      (3)
Now the total transformation or composite transformation is
[T] = [Tr] [Rx] [Ry] [Rδ] [Ry]⁻¹ [Rx]⁻¹ [Tr]⁻¹      (4)
where, using the row-vector convention,
[Tr] =
(   1    0    0   0 )
(   0    1    0   0 )
(   0    0    1   0 )
( −x0  −y0  −z0   1 )      (5)

[Rx] =
( 1     0       0     0 )
( 0    Cz/d   Cy/d    0 )
( 0   −Cy/d   Cz/d    0 )
( 0     0       0     1 )      (6)

[Ry] =
( cos(−β)  0  −sin(−β)  0 )   (  d   0  Cx  0 )
(    0     1      0     0 ) = (  0   1   0  0 )
( sin(−β)  0   cos(−β)  0 )   ( −Cx  0   d  0 )
(    0     0      0     1 )   (  0   0   0  1 )      (7)

and

[Rδ] =
(  cos δ   sin δ   0   0 )
( −sin δ   cos δ   0   0 )
(    0       0     1   0 )
(    0       0     0   1 )      (8)
Now we take a second point on the axis, (x1, y1, z1), to find the direction cosines:
[Cx  Cy  Cz] = [(x1 − x0)  (y1 − y0)  (z1 − z0)] / √((x1 − x0)² + (y1 − y0)² + (z1 − z0)²)
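For reference, the same rotation can be computed directly with Rodrigues' rotation formula instead of composing the seven matrices of equation (4); the two approaches give identical results. A minimal NumPy sketch:

import numpy as np

def rotate_about_axis(p, p0, direction, delta):
    # Rotate point p by angle delta about the axis through p0 with the
    # given direction (Rodrigues' rotation formula).
    k = np.asarray(direction, float)
    k /= np.linalg.norm(k)                             # unit axis (Cx, Cy, Cz)
    v = np.asarray(p, float) - np.asarray(p0, float)   # move axis point to origin
    c, s = np.cos(delta), np.sin(delta)
    v_rot = v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1.0 - c)
    return v_rot + np.asarray(p0, float)               # move back

# Rotating (1, 0, 0) by 180 degrees about the z-axis through the origin:
print(rotate_about_axis([1, 0, 0], [0, 0, 0], [0, 0, 1], np.pi))  # ~(-1, 0, 0)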
7.2.3 Coordinate Transformations
We can also achieve the effects of translation, scaling, and rotation by moving the observer who views the object and by keeping the object stationary. This type of transformation is called a coordinate transformation. We first attach a coordinate system to the observer and then move the observer and the attached coordinate system. Next, we recalculate the coordinates of the observed object with respect to this new observer coordinate system. The new coordinate values will be exactly the same as if the observer had remained stationary and the object had moved, corresponding to a geometric transformation (see Fig. 7.6).
Figure 7.6
For example, the coordinate transformation corresponding to translation by a vector V = aI + bJ + cK is
T̄v:  x' = x − a,  y' = y − b,  z' = z − c
Similar forms hold for the other coordinate transformations.
As in the two-dimensional case, we summarize the relationships between the matrix forms of the coordinate transformations and the geometric transformations:

Coordinate Transformation        Equivalent Geometric Transformation
Translation  T̄v                 T−v
Rotation     R̄θ                 R−θ
Scaling      S̄(sx, sy, sz)       S(1/sx, 1/sy, 1/sz)

In other words, the matrix of a coordinate transformation is the inverse of the matrix of the corresponding geometric transformation:
Tv⁻¹ = T−v,   Rθ⁻¹ = R−θ,   S(sx, sy, sz)⁻¹ = S(1/sx, 1/sy, 1/sz)
.p
More complex geometric and coordinate transformations are formed through the
process of composition of functions. For matrix function, however, the process of
w
www.padeepz.net
www.padeepz.net
t
⎛a b c 0⎞
ne
⎜ ⎟
⎜d e f 0⎟
⎜g h i 0⎟
⎜ ⎟
⎜0 0
⎝ 0 1⎟⎠
z.
These transformations are then applied to points P(x, y, z) expressed in homogeneous form:
( x )
( y )
( z )
( 1 )
7.3 Projections
In a parallel projection, coordinate positions are transformed to the view plane along parallel lines, as in the example of Fig. 7.7. For a perspective projection (Fig. 7.8), object positions are transformed to the view plane along lines that converge to a point called the projection reference point (or center of projection). The projected view of an object is determined by calculating the intersection of the projection lines with the view plane.
Figure 7.7: Parallel projection of an object to the view plane
Figure 7.8: Perspective projection of an object to the view plane
A parallel projection preserves relative proportions of objects, and this is the method used in drafting to produce scale drawings of three-dimensional objects. Accurate views of the various sides of an object are obtained with a parallel projection, but this does not give us a realistic representation of the appearance of a three-dimensional object. A perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. Projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.
Hence a projection in the direction of the z-axis onto the xy plane can be carried out by the transformation scale(1.0, 1.0, 0.0).
For example, in the isometric case the direction of projection must be symmetric with respect to the three coordinate directions to allow equal foreshortening on each axis. This is achieved by using a direction of projection (1, 1, 1).
3. Oblique. It is usual to take the viewplane parallel to a face of the object. This face (and those parallel to it) will remain undistorted (figure 7.9).
Figure 7.9
Assume that the viewplane is the xy plane and that in the required image the foreshortening factor in the z direction is l and lines parallel to the z-axis make an angle of alpha with the x-axis. Thus a cube of side 1 would appear in the image as follows (figure 7.10):
Figure 7.10
In the image the point P is (l·cos(alpha), l·sin(alpha)), but this corresponds to the point (0, 0, −1) in three dimensions. Hence the required transformation, in matrix form using homogeneous coordinates, maps (x, y, z) to (x − z·l·cos(alpha), y − z·l·sin(alpha), 0):
( 1  0  −l·cos(alpha)  0 )
( 0  1  −l·sin(alpha)  0 )
( 0  0        0        0 )
( 0  0        0        1 )
which is a shear transformation combined with projection onto the viewplane. If l = 1 then it is called a Cavalier projection and if l = ½ then it is called a Cabinet projection.
7.3.2 Perspective Projections
In the following it is assumed that the centre of projection is at the origin and that the viewplane is normal to the z-axis at a distance d from the origin (figure 7.11). This is the situation that holds after view orientation and view mapping.
Figure 7.11
By similar triangles, the projection of a point (x, y, z) onto the viewplane z = d is
x' = d·x/z,  y' = d·y/z,  z' = d
In homogeneous coordinates this can be written as the single matrix transformation
( 1  0  0    0 ) ( x )   (  x  )
( 0  1  0    0 ) ( y ) = (  y  )
( 0  0  1    0 ) ( z )   (  z  )
( 0  0  1/d  0 ) ( 1 )   ( z/d )
hence dividing by the homogeneous term W = z/d gives the projected point (d·x/z, d·y/z, d) as previously, and also gives z' = d as required.
From x' = d·x/z and y' = d·y/z it is seen that if x and y remain constant, then x' → 0 and y' → 0 as z → infinity; i.e., lines parallel to the z-axis converge to the vanishing point (0, 0) in the viewplane. If the viewplane cuts two axes then two vanishing points result, while if it cuts three axes then three vanishing points result.
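A minimal sketch of this projection in code, assuming as above that the centre of projection is at the origin and the viewplane is at z = d (the function name is illustrative):

def perspective_project(x, y, z, d):
    # Project (x, y, z) onto the viewplane z = d; centre of projection at origin.
    if z == 0:
        raise ValueError("point lies in the plane of the centre of projection")
    w = z / d                    # the homogeneous term W
    return x / w, y / w, d       # (d*x/z, d*y/z, d)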
7.4 Three-Dimensional Viewing and Clipping
An important step in photography is to position and aim the camera at the scene in order to compose a picture. This parallels the specification of 3D viewing parameters in computer graphics, which prescribe the projector (the center of projection for perspective projection or the direction of projection for parallel projection) along with the view plane.
In addition, a view volume defines the spatial extent that is visible through a rectangular window in the view plane. The bounding surfaces of this view volume are used to tailor/clip the objects that have been placed in the scene via modeling transformations prior to viewing. The clipped objects are then projected into the window area, resulting in a specific view of the 3D scene that can be further mapped to the viewport in the NDCS.
In the following we first discuss these viewing concepts and algorithms. We then summarize the three-dimensional viewing process. Finally, we examine the operation and organization of a typical 3D graphics pipeline.
Three – dimensional viewing of objects requires the specification of a projection
plane (called the view plane), a center of projection, and a view volume in world
coordinates.
Figure 7.12
The view plane coordinate system or viewing coordinate system can be specified as follows: (1) let the reference point R0(x0, y0, z0) be the origin of the coordinate system, and (2) determine the coordinate axes. To do this we first choose a reference vector U called the up vector. A unit vector Jq defines the direction of the positive q axis for the view plane coordinate system. To calculate Jq we proceed as follows: with N being the view plane unit normal vector, let
Uq = U − (N · U)N
Then
Jq = Uq / |Uq|
is the unit vector that defines the direction of the positive q axis (see Fig. 7.13).
Figure 7.13
The unit vector
Ip = (N × Jq) / |N × Jq|
then defines the direction of the positive p axis.
This coordinate system is called the view plane coordinate system or viewing
coordinate system. A left-handed system is traditionally chosen so that, if one thinks
of the view plane as the face of a display device, then with the p and q coordinate axes
superimposed on the display device, the normal vector N will point away from an
observer facing the display. Thus the direction of increasing distance away from the
observer is measured along N [ see Fig. 7.14(a)].
Figure 7.14
The view volume bounds a region in world coordinate space that will be clipped and projected onto the view plane. To define a view volume that projects onto a specified rectangular window defined in the view plane, we use view plane coordinates (p, q)v to locate points on the view plane. Then a rectangular view plane window is defined by prescribing the coordinates of the lower left-hand corner L(pmin, qmin)v and the upper right-hand corner R(pmax, qmax)v (see Fig. 7.15). We can use the vectors Ip and Jq to express points of the view plane in terms of world coordinates.
For a perspective view, the view volume corresponding to the given window is a semi-infinite pyramid with apex at the viewpoint (Fig. 7.16). For views created using parallel projections (Fig. 7.17) the view volume is an infinite parallelepiped with sides parallel to the direction of projection.
Figure 7.15 and Figure 7.16
Figure 7.17
The view volumes created above are infinite in extent. In practice, we prefer to use a finite volume to limit the number of points to be projected. In addition, for perspective views, objects very distant from the view plane appear as indistinguishable spots when projected, while objects very close to the center of projection appear to have disjointed structure. This is another reason for using a finite view volume.
A finite volume is delimited by using front (near) and back (far) clipping planes parallel to the view plane. These planes are specified by giving the front distance f and back distance b relative to the view plane reference point R0 and measured along the normal vector N. The signed distances b and f can be positive or negative (Figs. 7.18 and 7.19).
Figure 7.18
Figure 7.19
Two differing strategies have been devised to deal with the computational effort of three-dimensional clipping:
1. Direct clipping: In this method, as the name suggests, clipping is done directly against the view volume.
2. Canonical clipping: Normalizing transformations are applied to transform the original view volume into a canonical view volume, and clipping is then performed against the canonical view volume.
The canonical view volume for parallel projection is the unit cube whose faces are defined by the planes x = 0, x = 1, y = 0, y = 1, z = 0, and z = 1. The corresponding normalization transformation Npar is constructed accordingly (see Figure 7.20).
Figure 7.20
The canonical view volume for perspective projection is the truncated pyramid whose
faces are defined by the planes x = z, x = -z, y = z , y = -z, z = zf, and z = 1 ( where zf
is to be calculated) (Fig. 7.21).
Figure 7.21
The basis of the canonical clipping strategy is the fact that the computations involved in such operations as finding the intersections of a line segment with the planes forming the faces of the canonical view volume are minimal. This is balanced by the overhead involved in transforming points, many of which will be subsequently clipped.
For perspective views, additional clipping may be required to avoid the perspective
anomalies produced by projecting objects that are behind the viewpoint.
Clipping Algorithms
Three-dimensional clipping algorithms are often direct adaptations of their two-dimensional counterparts. The modifications necessary arise from the fact that we are now clipping against the six faces of the view volume, which are planes, as opposed to the four edges of the two-dimensional window, which are lines. Thus the following operations are required:
1. Finding the intersection of a line and a plane.
2. Assigning region codes to the endpoints of line segments for the Cohen-Sutherland algorithm.
3. Deciding when a point is to the right (also said to be outside) or to the left (inside) of a plane for the Sutherland-Hodgman algorithm.
7.5 Wireframe Models
Figure 7.22
A polygonal net (wireframe) model can be represented in the following ways:
1. Explicit vertex list V = {P0, P1, P2, ..., PN}. The points Pi(xi, yi, zi) are the vertices of the polygonal net, stored in the order in which they would be encountered by traveling around the model. Although this form of representation is useful for single polygons, it is quite inefficient for a complete polygonal net, in that shared vertices are repeated several times. In addition, when displaying the model by drawing the edges, shared edges are drawn several times.
2. Explicit edge listing, which consists of a vertex list, in which each vertex is stored exactly once, and an edge list. Each edge in the edge list points to the two vertices in the vertex list which define that edge. A polygon is now represented as a list of pointers or indices into the edge list. Additional information, such as the polygons sharing a given edge, can also be stored in the edge list. Explicit edge listing can be used to represent the more general wireframe model. The wireframe model is displayed by drawing all the edges, and each edge is drawn only once.
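An explicit edge listing is naturally represented with a vertex array, an edge array of vertex-index pairs, and polygons stored as lists of edge indices. A minimal sketch for a unit square (the variable names are illustrative):

# Each vertex and each edge is stored exactly once.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # V = {P0, ..., P3}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                   # pairs of vertex indices
polygons = [[0, 1, 2, 3]]                                  # lists of edge indices

def draw_wireframe(draw_line):
    # Display the model by drawing each edge exactly once.
    for i, j in edges:
        draw_line(vertices[i], vertices[j])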
7.5.2 Polyhedron
A polyhedron is a closed polygonal net (i.e., one which encloses a definite volume) in which each polygon is planar. The polygons are called the faces of the polyhedron. In modeling, polyhedrons are quite often treated as solid (i.e., block) objects, as opposed to shells or containers.
Wireframe models are used in engineering applications. They are easy to construct and, if they are composed of straight lines, easy to clip and manipulate through the use of geometric and coordinate transformations. However, for building realistic models, especially of highly curved objects, we must use a very large number of polygons to achieve the illusion of roundness and smoothness.
7.6 Bezier Curves and Surfaces
This spline approximation method was developed by the French engineer Pierre Bezier for use in the design of Renault automobile bodies. Bezier splines have a number of properties that make them highly useful and convenient for curve and surface design. They are also easy to implement. For these reasons, Bezier splines are widely available in various CAD systems, in general graphics packages (such as GL on Silicon Graphics systems), and in assorted drawing and painting packages (such as Aldus SuperPaint and Cricket Draw).
Bezier curves are defined in such a manner that they are very useful and convenient for curve and surface design, and are easy to implement. Curves are trajectories of moving points. We will specify them as functions assigning a location of that moving point (in 2D or 3D) to a parameter t, i.e., parametric curves.
Curves are useful in geometric modeling and they should have a shape which bears a clear and intuitive relation to the path of the sequence of control points. One family of curves satisfying this requirement are the Bezier curves.
There is a purely geometric construction for Bezier splines which does not rely on any polynomial formulation and is extremely easy to understand. The de Casteljau method is an algorithm which performs repeated bi-linear interpolation (Figure 7.23).
Figure 7.23
Consecutive control points are joined with line segments. Each of them is then divided in the same ratio t : 1−t, giving rise to another set of points. Again, each consecutive two are joined with line segments, which are subdivided, and so on, until only one point is left. This is the location of our moving point at time t. The trajectory of that point for times between 0 and 1 is the Bezier curve.
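The de Casteljau construction translates into only a few lines of code: repeated linear interpolation between consecutive points until a single point remains. A minimal Python sketch:

def de_casteljau(points, t):
    # Evaluate the Bezier curve defined by the control points at parameter t
    # in [0, 1], by repeatedly dividing each segment in the ratio t : 1 - t.
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Sampling the curve for a four-point control polygon:
curve = [de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], i / 50) for i in range(51)]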
Bezier curves provide a simple method for constructing a smooth curve that follows a control polygon with m − 1 vertices. For small values of m, the Bezier technique works well. However, as m grows large (m > 20), Bezier curves exhibit some undesirable properties.
Figure 7.24: (a) Bezier curve defined by its endpoint vectors; (b) all sorts of curves can be specified with different direction vectors at the endpoints; (c) reflex curves appear when the vectors point in different directions
In general, a Bezier curve section can be fitted to any number of control points. The number of control points to be approximated and their relative positions determine the degree of the Bezier polynomial. As with the interpolation splines, a Bezier curve can be specified with boundary conditions, with a characterizing matrix, or with blending functions.
Two sets of orthogonal Bezier curves can be used to design an object surface by specifying an input mesh of control points. The parametric vector function for the Bezier surface is formed as the Cartesian product of Bezier blending functions:
P(u, v) = Σ (j = 0 to m) Σ (k = 0 to n) P(j,k) BEZ(j,m)(v) BEZ(k,n)(u)
Figure 7.25 illustrates two Bezier surface plots. The control points are connected by dashed lines, and the solid lines show curves of constant u and constant v. Each curve of constant u is plotted by varying v over the interval from 0 to 1, with u fixed at one of the values in this unit interval. Curves of constant v are plotted similarly.
Figure 7.25: Bezier surfaces constructed for (a) m = 3, n = 3, and (b) m = 4, n = 4. Dashed lines connect the control points.
Bezier surfaces have the same properties as Bezier curves, and they provide a convenient method for interactive design applications. For each surface patch, we can select a mesh of control points in the xy "ground" plane, then we choose elevations above the ground plane for the z-coordinate values of the control points. Patches can then be pieced together, with continuity maintained across the section boundaries.
Figure 7.26: A composite Bezier surface constructed with two Bezier sections, joined at the indicated boundary line. The dashed lines connect specified control points. First-order continuity is established by making the ratio of length L1 to length L2 constant for each collinear line of control points across the boundary between the surface sections.
7.7 Hidden Surface Elimination
Opaque objects that are closer to the eye and in the line of sight of other objects will block those objects from view. In fact, some surfaces of these opaque objects themselves are not visible, because they are eclipsed by the objects' visible parts. The surfaces that are blocked or hidden from view must be "removed" in order to construct a realistic view of the 3D scene. The identification and removal of these surfaces is called the hidden-surface problem. The solution involves the determination of the closest visible surface along each projection line.
There are many different hidden-surface algorithms. Each can be characterized as either an image-space method, in which the pixel grid is used to guide the computational activities that determine visibility at the pixel level, or an object-space method, in which surface visibility is determined using continuous models in the object space (or its transformation) without involving pixel-based operations.
Notice that the hidden-surface problem has ties to the calculation of shadows. If we place a light source, such as a bulb, at the viewpoint, all surfaces that are visible from the viewpoint are lit directly by the light source, and all surfaces that are hidden from the viewpoint are in the shadow of some opaque object blocking the light.
Depth Comparisons
We assume that all coordinates (x, y, z) are described in the normalized viewing coordinate system.
Any hidden-surface algorithm must determine which edges and surfaces are visible, either from the center of projection for perspective projections or along the direction of projection for parallel projections.
The question of visibility reduces to this: given two points P1(x1, y1, z1) and P2(x2, y2, z2), does either point obscure the other? This is answered in two steps:
1. Determine whether the two points lie on the same projector.
2. If not, neither point obscures the other. If so, a depth comparison tells us which point is in front of the other.
For an orthographic parallel projection onto the xy plane, P1 and P2 are on the same projector if x1 = x2 and y1 = y2. In this case, depth comparison reduces to comparing z1 and z2. If z1 < z2, then P1 obscures P2 [see Fig. 7.27(a)].
Figure 7.27
For a perspective projection [see Fig. 7.27(b)], the calculations are more complex. However, this complication can be avoided by transforming all three-dimensional objects so that depth comparisons can be carried out as for a parallel projection. If the original object lies in the normalized perspective view volume, the normalized perspective to parallel transform
NTp =
( 1/2    0      1/2           0           )
(  0    1/2     1/2           0           )
(  0     0    1/(1 − zf)   −zf/(1 − zf)   )
(  0     0      1             0           )
Figure 7.28
(where zf is the location of the front clipping plane of the normalized perspective view
volume) transforms the normalized perspective view volume into the unit cube
bounded by 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1. We call this cube the normalized
display space. A critical fact is that the normalized perspective to parallel transform
preserves lines, planes, and depth relationships.
If our display device has display coordinates H × V, application of the scaling matrix
S(H, V, 1) =
( H 0 0 0 )
( 0 V 0 0 )
( 0 0 1 0 )
( 0 0 0 1 )
transforms the normalized display space onto the region 0 ≤ x ≤ H, 0 ≤ y ≤ V, 0 ≤ z ≤ 1. We call this region the display space. The display transform DTp = S(H, V, 1) · NTp transforms the normalized perspective view volume onto the display space.
Clipping must be done against the normalized perspective view volume prior to applying the transform NTp. An alternative to this is to combine NTp with the normalizing transformation Nper, forming the single transformation NT'p = NTp · Nper. Then clipping is done in homogeneous coordinate space.
We now describe several algorithms for removing hidden surfaces from scenes containing objects defined with planar (i.e., flat), polygonal faces. We assume that the display transform DTp has been applied (if a perspective projection is being used).
Although there are special-purpose hidden-line algorithms, each of the following algorithms can be modified to eliminate hidden lines or edges instead. This is especially useful for wireframe polygonal models where the polygons are unfilled. The idea is to use a coloring rule which fills all the polygons with the background color, say black, and draws the edges and lines in a different color, say white. The use of a hidden-surface algorithm then removes all the hidden parts of the lines and edges.
Z-Buffer Algorithm
We say that a point in display space is "seen" from pixel (x, y) if the projection of the point is scan-converted to this pixel. The Z-buffer algorithm essentially keeps track of the smallest z coordinate (also called the depth value) of those points which are seen from pixel (x, y). These z values are stored in what is called the Z-buffer.
Let Zbuf(x, y) denote the current depth value that is stored in the Z-buffer at pixel (x, y). We work with the (already) projected polygons P of the scene to be rendered.
1. Initialize the screen to a background color. Initialize the Z-buffer to the depth of the back clipping plane. That is, set
Zbuf(x, y) = Zback, for every pixel (x, y)
2. Scan-convert each (projected) polygon P in the scene, and during this scan-conversion process, for each pixel (x, y) that lies inside the polygon:
a. Calculate the depth Z(x, y) of the polygon at pixel (x, y).
b. If Z(x, y) < Zbuf(x, y), set Zbuf(x, y) = Z(x, y), and set the pixel value at (x, y) to the color of the polygon P at (x, y). In Fig. 7.29, points P1 and P2 are both scan-converted to pixel (x, y); however, since z1 < z2, P1 will obscure P2 and the P1 z value, z1, will be stored in the Z-buffer.
Figure 7.29
Although the Z-buffer algorithm requires Z-buffer storage proportional to the number of pixels on the screen, it does not require additional memory for storing all the objects comprising the scene. In fact, since the algorithm processes polygons one at a time, the total number of objects in a scene can be arbitrarily large.
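The algorithm above can be written very compactly if we assume that each projected polygon can report, for every pixel it covers, its depth and color there; the scan-conversion details, and the pixels() interface, are assumptions of this sketch:

def z_buffer_render(width, height, polygons, z_back, background):
    # Minimal Z-buffer sketch. Each polygon provides an iterator pixels()
    # yielding (x, y, z, color) for every pixel it covers.
    frame = [[background] * width for _ in range(height)]
    zbuf = [[z_back] * width for _ in range(height)]   # step 1: initialise
    for poly in polygons:                              # step 2: scan-convert
        for x, y, z, color in poly.pixels():
            if z < zbuf[y][x]:                         # step 2b: depth test
                zbuf[y][x] = z
                frame[y][x] = color
    return frame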
Back-Face Removal
Object surfaces that are oriented away from the viewer are called back-faces. The back-faces of an opaque polyhedron are completely blocked by the polyhedron itself and hidden from view. We can therefore identify and remove these back-faces based solely on their orientation, without further processing (projection and scan-conversion) and without regard to other surfaces and objects in the scene.
Let N = (A, B, C) be the normal vector of a planar polygonal face, with N pointing in the direction the polygon is facing. Since the direction of viewing is the direction of the positive z axis (see Fig. 7.30), the polygon is facing away from the viewer when C > 0 (the angle between N and the z axis is less than 90°). The polygon is also classified as a back-face when C = 0, since in this case it is parallel to the line of sight and its projection is either hidden or overlapped by the edge(s) of some visible polygon(s).
Although this method identifies and removes back-faces quickly, it does not handle polygons that face the viewer but are hidden (partially or completely) behind other surfaces. It can be used as a preprocessing step for other algorithms.
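In code, the back-face test is a single comparison on the C component of the face normal. The sketch below computes the normal from three vertices with a cross product; it assumes the vertices are ordered so that the computed normal points in the direction the polygon is facing:

import numpy as np

def is_back_face(v0, v1, v2):
    # True if the face is oriented away from a viewer looking along the
    # positive z axis, i.e. C >= 0 in the normal N = (A, B, C).
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    return n[2] >= 0   # C > 0 faces away; C == 0 is edge-on and also removed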
The Painter's Algorithm
Also called the depth sort or priority algorithm, the painter's algorithm processes polygons as if they were being painted onto the view plane in the order of their distance from the viewer. More distant polygons are painted first. Nearer polygons are painted on or over more distant polygons, partially or totally obscuring them from view. The key to implementing this concept is to find a priority ordering of the polygons.
Any attempt at a priority ordering based on depth sorting alone results in ambiguities that must be resolved in order to correctly assign priorities. For example, when two polygons overlap, how do we decide which one obscures the other? (See Fig. 7.30.)
Figure 7.30
Assigning Priorities
The z extent of a polygon is the region between the planes z = zmin and z = zmax (Fig. 7.31). Here zmin is the smallest of the z coordinates of all the polygon's vertices, and zmax is the largest. Similar definitions hold for the x and y extents of a polygon. The intersection of the x, y, and z extents is called the extent, or bounding box, of the polygon.
Figure 7.31
Polygon P does not obscure polygon Q if any one of the following tests, applied in sequential order, is true.
Test 0: The z extents of P and Q do not overlap, and zQmax of Q is smaller than zPmin of P. Refer to Fig. 7.32.
Test 1: The y extents of P and Q do not overlap. Refer to Fig. 7.33.
Test 2: The x extents of P and Q do not overlap.
Figure 7.32
Figure 7.33
Test 3: All the vertices of P lie on that side of the plane containing Q which is farthest from the viewpoint. Refer to Fig. 7.34.
Figure 7.34
Test 4: All the vertices of Q lie on that side of the plane containing P which is nearest to the viewpoint. Refer to Fig. 7.35.
Figure 7.35
Test 5: The projections of the polygons P and Q onto the view plane do not overlap. This is checked by comparing each edge of one polygon against each edge of the other polygon to search for intersections.
The Algorithm
1. Sort all polygons into a polygon list according to zmax (the largest z coordinate of each polygon's vertices). Starting from the end of the list, assign priorities for each polygon P, in order, as described in steps 2 and 3.
2. Find all polygons Q (preceding P) in the polygon list whose z extents overlap that of P (test 0).
3. For each such Q, perform tests 1 through 5 in order, until one is true. If all tests fail, use the plane containing P to split polygon Q into two polygons, Q1 and Q2. Remove Q from the list and place Q1 and Q2 on the list, in sorted order.
Sometimes the polygons are subdivided into triangles before processing, thus reducing the computational effort for polygon subdivision in step 3.
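Tests 0 through 2 reduce to simple interval-overlap checks on the polygon extents, and are therefore cheap to apply before the more expensive plane-side and projection tests. A minimal sketch, assuming each polygon carries precomputed per-axis bounds (the field names are illustrative):

def extents_overlap(a_min, a_max, b_min, b_max):
    # True if the 1D extents [a_min, a_max] and [b_min, b_max] overlap.
    return a_min <= b_max and b_min <= a_max

def p_may_obscure_q(P, Q):
    # Returns False as soon as one of tests 0-2 shows P cannot obscure Q.
    if Q["zmax"] < P["zmin"]:                                            # test 0
        return False
    if not extents_overlap(P["ymin"], P["ymax"], Q["ymin"], Q["ymax"]):  # test 1
        return False
    if not extents_overlap(P["xmin"], P["xmax"], Q["xmin"], Q["xmax"]):  # test 2
        return False
    return True   # tests 3-5 are still needed to decide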
Scan-Line Algorithm
A scan-line algorithm consists essentially of two nested loops, an x-scan loop nested within a y-scan loop.
y-Scan
For each y value, say y = α, intersect the polygons to be rendered with the scan plane y = α. This scan plane is parallel to the xz plane, and the resulting intersections are line segments in this plane (see Fig. 7.36).
e ep
ad
.p
Figure 7.36
x-scan
1. For each x value, say x = β, intersect the line segments found above with the x-scan line x = β lying on the y-scan plane. This intersection results in a set of points that lie on the x-scan line.
2. Sort these points with respect to their z coordinates. The point (x,y,z) with the
smallest z value is visible, and the color of the polygon containing this point is
the color set at the pixel corresponding to this point.
In order to reduce the amount of calculation in each scan-line loop, we try to take advantage of relationships and dependencies, called coherences, between different elements that comprise a scene.
Types of Coherence
1. Scan-line coherence: If a pixel on a scan line lies within a polygon, pixels near it will most likely lie within the polygon.
2. Edge coherence: If an edge of a polygon intersects a given scan line, it will most likely intersect scan lines near the given one.
3. Area coherence: A small area of an image will most likely lie within a single polygon.
4. Spatial coherence: Whether objects overlap can often be determined by examining the extent of the object, that is, a geometric figure which circumscribes the given object. Usually the extent is a rectangle or rectangular solid (also called a bounding box).
Scan-line coherence and edge coherence are both used to advantage in scan-converting polygons. Spatial coherence is exploited by testing whether the extents of two objects intersect, which is a much simpler problem than testing the objects themselves (see Fig. 7.37). Note in Fig. 7.37 that objects A and B do not intersect; however, objects A and C, and B and C, do intersect. [For example, the edge of object A is at coordinate P3 = (7, 4) and the edge of object B is at coordinate P4 = (7, 3).] Of course, even if the extents intersect, this does not guarantee that the polygons intersect. See Fig. 7.38 and note that the extents A' and B' overlap although the polygons themselves do not.
Figure 7.37
Figure 7.38
In the following algorithm, scan-line and edge coherence are used to enhance the processing done in the y-scan as follows. Since the y-scan loop constructs a list of potentially visible line segments, instead of reconstructing this list each time the y-scan line changes (absolute calculation), we keep the list and update it according to how it has changed (incremental calculation). This processing is facilitated by the use of what is called the edge list, and its efficient construction and maintenance is at the heart of the algorithm. The following data structures are used:
1. The edge list – contains all nonhorizontal edges (horizontal edges are automatically displayed) of the projections of the polygons in the scene. The edges are sorted by the edge's smaller y coordinate (ymin). Each edge entry in the edge list also contains:
a. The larger y coordinate of the edge (ymax).
b. The x coordinate of the edge's endpoint at ymin.
c. The increment Δx = 1/m.
d. A pointer indicating the polygon to which the edge belongs.
2. The polygon list – for each polygon, contains:
a. The equation of the plane within which the polygon lies – used for depth determination, i.e., to find the z value at pixel (x, y).
b. An IN/OUT flag, initialized to OUT (this flag is set depending on whether a given scan line is in or out of the polygon).
c. Color information for the polygon.
I. Initialization.
II. y-scan loop. Activate edges whose ymin is equal to y. Sort active edges in order of increasing x.
III. x-scan loop. Process, from left to right, each active edge as follows:
a. Invert the IN/OUT flag of the polygon in the polygon list which contains the edge. Count the number of active polygons whose IN/OUT flag is set to IN. If this number is 1, only one polygon is visible: all pixel values from this edge and up to the next edge are set to the color of that polygon. If this number is greater than 1, determine the visible polygon by the smallest z value of each polygon at the pixel under consideration; these z values are found from the equation of the plane containing the polygon. The pixels are set to the color of that polygon, unless it becomes obscured by another polygon before the next edge is reached, in which case the remaining pixels are set to the color of the obscuring polygon. If this number is 0, pixels from this edge and up to the next one are left unchanged.
IV. Upon reaching the end of a scan line, update the edge list as follows:
1. Remove those edges for which the value of ymax equals the present scan-line value y. If no edges remain, the algorithm has finished.
2. For each remaining active edge, in order, replace x by x + 1/m. This is the edge intersection with the next scan line y + 1.
Subdivision Algorithm
The subdivision algorithm is a recursive procedure based on a two-step strategy that first decides which projected polygons overlap a given area A on the screen and are therefore potentially visible in that area. Second, in each area these polygons are further tested to determine which ones will be visible within this area and should therefore be displayed. If a visibility decision cannot be made, this screen area, usually a rectangular window, is further subdivided either until a visibility decision can be made, or until the screen area is a single pixel.
Starting with the full screen as the initial area, the algorithm divides an area at each
stage into four smaller areas, thereby generating a quad tree (see Fig. 7.39).
Figure 7.39
The processing exploits area coherence by classifying a polygon P with respect to a given screen area A into the following categories: (1) surrounding polygon – a polygon that completely contains the area [Fig. 7.40(a)]; (2) intersecting polygon – a polygon that intersects the area [Fig. 7.40(b)]; (3) contained polygon – a polygon that is completely contained within the area [Fig. 7.40(c)]; and (4) disjoint polygon – a polygon that is disjoint from the area [Fig. 7.40(d)].
Figure 7.40
The classification of the polygons within a picture is the main computational expense of the algorithm and is analogous to the clipping algorithms discussed earlier. With the use of one of these clipping algorithms, a polygon in category 2 (intersecting polygon) can be clipped into a contained polygon and a disjoint polygon (see Fig. 7.41). Therefore, we could proceed as if category 2 were eliminated.
Figure 7.41
For a given screen area, we keep a potentially visible polygons list (PVPL), containing those polygons in categories 1, 2, and 3 (disjoint polygons are clearly not visible). Also, note that on subdivision of a screen area, surrounding and disjoint polygons remain surrounding and disjoint polygons of the newly formed areas. Therefore, only contained and intersecting polygons need to be reclassified.
Removing Polygons Hidden by a Surrounding Polygon
The key to efficient visibility computation lies in the fact that a polygon is not visible if it is in back of a surrounding polygon. Therefore, it can be removed from the PVPL. To facilitate processing, this list is sorted by zmin, the smallest z coordinate of the polygon within this area. In addition, for each surrounding polygon S, we also record its largest z coordinate, zSmax.
If, for a polygon P on the list, zPmin > zSmax (for a surrounding polygon S), then P is hidden by S and thus is not visible. In addition, all other polygons after P on the list will also be hidden by S, so we can remove these polygons from the PVPL.
Subdivision Algorithm
1. Initialize the area to be the whole screen.
2. Create a PVPL with respect to an area, sorted on zmin (the smallest z coordinate of the polygon within the area). Place the polygons in their appropriate categories. Remove polygons hidden by a surrounding polygon, and remove disjoint polygons.
3. Perform the visibility decision tests:
a. If the list is empty, set all pixels to the background color.
b. If there is exactly one polygon in the list and it is classified as intersecting or contained, color (scan-convert) the polygon and color the remainder of the area in the background color.
c. If there is exactly one polygon on the list and it is a surrounding one, color the area the color of the surrounding polygon.
d. If the area is the pixel (x, y) and neither a, b, nor c applies, compute the z coordinate z(x, y) at pixel (x, y) of all polygons on the PVPL. The pixel is then set to the color of the polygon with the smallest z coordinate.
4. If none of the above cases has occurred, subdivide the screen area into fourths. For each area, go to step 2.
7.8 Summary
• To transform an object from its own model space to a common coordinate space, called world space, in which all objects, light sources, and the viewer coexist, is called the modeling transformation stage.
• The basic three-dimensional geometric transformations are translation, scaling, and rotation.
• The rotation about an arbitrary axis in space occurs in robotics, animation, and simulation.
• To obtain a perspective projection of a three-dimensional object, we transform points along projection lines that meet at the projection reference point.
• Bezier curves and surfaces provide a convenient method for designing a variety of object shapes.
7.9 Key Words
Three dimensional graphics, rotation, scaling, translation, parallel projection, perspective projection, hidden line, surface elimination, viewing, clipping, wireframe, Bezier curves.
7.10 Self Assessment Questions (SAQ)
1. What happens when two polygons have the same z value and the Z-buffer algorithm is used?
2. Show that the alignment transformation satisfies the relation Av⁻¹ = Avᵀ.
3. How many view planes (at the origin) produce isometric projections of an object?
4. Find the equations of the planes forming the view volume for the general parallel projection.
8. Find a normalization transformation from the window whose lower left corner is at (0, 0) and upper right corner is at (4, 3) onto the normalized device screen so that aspect ratios are preserved.
7.11 References/Suggested Readings
4. Advanced Animation and Rendering Techniques, Theory and Practice, Alan Watt and Mark Watt, ACM Press/Addison-Wesley
8.0 Objectives
At the end of this chapter the reader will be able to:
• Describe the concept of multimedia
• Describe multimedia hardware and software
• Describe multimedia authoring tools
• Describe graphic software
• Describe GKS attributes, primitives, window, viewport and display routines
Structure
8.1 Introduction
8.2 What is Multimedia?
8.3 Hardware and Software for Multimedia
8.4 Applications Area for Multimedia
8.5 Components of Multimedia
8.6 Multimedia Authoring Tools
8.7 Computer Graphic Software
8.1 Introduction
As the name suggests, multimedia is a set of more than one media element used to produce a concrete and more structured way of communication. In other words, multimedia is the simultaneous use of data from different sources. These sources in multimedia are known as media elements. With growing and very fast changing information technology, multimedia has become a crucial part of the computer world. Its importance has been realised in almost all walks of life, may it be education, cinema, advertising, or fashion.
Throughout the 1960s, 1970s and 1980s, computers were restricted to dealing with two main types of data - words and numbers. But the cutting edge of information technology introduced faster systems capable of handling graphics, audio, animation and video. And the entire world was taken aback by the power of multimedia.
A computer capable of handling text, graphics, audio, animation and video is called a multimedia computer. If the sequence and timing of these media elements can be controlled by the user, then one can name it as interactive multimedia.
Multimedia Software
Multimedia software comprises a wide variety of software that allows you to combine images with text, music and sound, animation, video, and other special effects. Many kinds of multimedia software are available. Multimedia software is sometimes broadly grouped into authoring tools and presentation software.
Multimedia authoring can be used to create anything from simple slide shows to full-blown games and interactive applications. A professional multimedia development program is called an authoring tool or authoring software. An authoring tool allows you to give the user interactive control over the sequence and timing of videos, graphics and animation. It also provides a scripting language, also called a macro or authoring language, to control the action.
A multimedia authoring tool is a program that helps you write multimedia applications. A multimedia authoring tool enables you to create a final application merely by linking together objects, such as a paragraph of text, an illustration, or a song. Attractive and useful multimedia applications can be produced by defining the objects' relationships to each other, and by sequencing them in an appropriate order. A multimedia authoring tool requires less technical knowledge to master and is used exclusively for applications that present a mixture of textual, graphical, and audio data. Multimedia authoring tools are used to create interactive presentations, screen savers, games, CDs and DVDs. A multimedia authoring tool supports a wide variety of multimedia file formats including images, video, and sound. Clip-art and sound libraries are rarely bundled with a multimedia authoring tool.
Multimedia Presentation Software
Multimedia presentations can run by themselves or wait for their users to click on Next. Unlike the pages of a traditional book, they can talk to their readers too. No matter what your particular need may be, whether to e-mail an electronic photograph album to your relatives, allow users of your web page to download a multimedia catalog, show prospective clients samples of your work or make a dull subject come alive in class, multimedia presentation software can transform your ideas into a complete, professional application. In a live multimedia presentation, the slides are commonly projected onto large screens or monitors.
Multimedia presentation software can also create all sorts of other multimedia applications. Almost everything it can accomplish requires nothing more than a few mouse clicks. It has a short learning curve. Some multimedia presentation software documents are EXE files - they can be opened on any system. They can be freely distributed on disks and CD-ROMs, and over the Internet. Multimedia presentation software has some great features like the ability to:
• Write your own clickable advertisements, to distribute alone or with other products.
• Design training materials that include pictures, sounds and interactive elements.
• Compile a distributable portfolio.
A screen driver capable of displaying more than 16 million colors is needed to get the best out of it.
Multimedia Hardware
For producing multimedia you need hardware, software and creativity. In this section we discuss the hardware requirements.
(a) Central Processing Unit
As you know, the Central Processing Unit (CPU) is an essential part of any computer. It is considered the brain of the computer, where processing and synchronization of all activities take place. The efficiency of a computer is judged by the speed of the CPU: the faster the CPU, the faster the computer will be able to perform. As multimedia involves more than one media element, including high-resolution graphics and high-quality motion video, one needs a faster processor for better performance.
In today's scenario, a Pentium processor with MMX technology and a speed of 166 to 200 MHz (megahertz) is an ideal processor for multimedia. In addition to the processor, one will need a minimum of 16 MB RAM to run WINDOWS and to edit large images or video clips. A 32 or 64 MB RAM enhances the capacity of a multimedia computer.
(b) Monitor
As you know that monitor is used to see the computer output. Generally, it displays
ad
25 rows and 80 columns of text. The text or graphics on a monitor is created as a result of an arrangement of tiny dots, called pixels. Resolution is the amount of detail the monitor can render; it is defined in terms of the horizontal and vertical pixels (picture elements) displayed on the screen. The greater the number of pixels, the better the resolution.
Like any other computer device, the monitor requires a source of input. The signals that the monitor gets from the processor are routed through a graphics card, although there are computers where this card is built into the motherboard. This card is also called the graphics adapter or display adapter. It controls the individual pixels, the tiny points on a screen that make up the image. There are several types of display
adapters available, but the most popular one is the Super Video Graphics Array (SVGA) card, which suits multimedia requirements well. The advantage of having an SVGA card is that the quality of graphics and pictures is better. The PCs now coming to the market are fitted with SVGA graphics cards, which allow images of up to 1024 × 768 pixels to be displayed in up to 16 million colours.
What determines the maximum resolution and colour depth is the amount of memory on the display adapter. Often you can select the amount of memory, such as 512 KB, 1 MB, 2 MB, 4 MB, etc.; the standard multimedia requirement is 2 MB of display memory (or video RAM). One must keep in mind that more display memory does not by itself make the computer faster; rather, it allows more colours and higher resolutions to be displayed. One can easily calculate the minimum amount of memory required for a display adapter as:
(Max. Horizontal Resolution × Max. Vertical Resolution × Colour Depth in Bits) / 8192 = minimum video (or display) memory required in KB
For example, for SVGA resolution (800 × 600) with 65,536 colours (a colour depth of 16 bits), you will need (800 × 600 × 16) / 8192 = 937.5 KB, i.e., approximately 1 MB of display memory.
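The calculation is easy to script. The following is a minimal Python sketch (the function name and the second example resolution are ours, not from the notes) that reproduces the formula above:

def min_display_memory_kb(h_res, v_res, depth_bits):
    # total bits / 8 = bytes; bytes / 1024 = KB, hence the single division by 8192
    return (h_res * v_res * depth_bits) / 8192

print(min_display_memory_kb(800, 600, 16))   # 937.5 KB, i.e. roughly 1 MB
print(min_display_memory_kb(1024, 768, 16))  # 1536.0 KB, i.e. roughly 2 MB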
Another consideration is the refresh rate, i.e., the number of times the image is painted on the screen per second. The higher the refresh rate, the steadier the image. A minimum of 70-72 Hz is often used to reduce eye fatigue; as a matter of fact, higher resolutions require higher refresh rates to prevent screen flicker.
(c) Video Grabbing Card
As we have already discussed, we need to convert the analog video signal into a digital signal for processing in a computer. A normal computer cannot do this alone; it requires a special piece of equipment called a video grabbing card, together with software, to perform the conversion. This card translates the analog signal it receives from conventional sources such as a VCR or a video camera and converts it into digital format. The software supplied with the card captures this digital signal and stores it in a computer file; it also helps to compress the digitized video so that it takes less disk space.
This card is fitted into a free slot on the motherboard inside the computer and gets
connected to an outside source such as TV, VCR or a video camera with the help of a
cable. The card receives both video and audio signals from the outside source, and the conversion from analog to digital takes place there. This process of conversion is known as sampling: it converts the analog signal into digital data streams so that the signal can be stored in binary format as 0s and 1s. The digital data stream is then compressed by the video-capturing software and stored on the hard disk as a file, which can then be incorporated into multimedia. The digitized file can also be edited according to requirements using editing software such as Adobe Premiere.
A number of digitizer or video grabbing cards are available in the market; one from Intel, the Intel Smart Video Recorder III, does a very good job of capturing and compressing video.
(d) Sound Card
Today’s computers are capable of meeting professional multimedia needs. Not only can you use the computer to compose your own music, it can also be used for speech recognition and synthesis; it can even read back an entire document for you. But before any of this happens, we need to convert the conventional sound signal into computer-understandable digital signals. This is done using a special component added to the system called a sound card, which is installed in a free slot on the computer motherboard. As in the case of the video grabber card, the sound card takes sound input from an outside source (such as a human voice, pre-recorded sounds, natural sounds, etc.) and converts it into a digital sound signal of 0s and 1s. The recording software used along with the sound card stores this digitised sound stream in a file, which can later be used with multimedia software. One can even edit the digitised sound file and add special sound effects to it.
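For a sense of scale, this small Python sketch (our own illustration; the CD-quality figures of 44,100 samples per second, 16 bits and two channels are common values, not taken from the notes) estimates the size of one minute of uncompressed digitised sound:

def audio_file_size_mb(sample_rate_hz, depth_bits, channels, seconds):
    # samples per second x bytes per sample x channels x duration, in MB
    return sample_rate_hz * (depth_bits / 8) * channels * seconds / (1024 * 1024)

# One minute of CD-quality stereo sound:
print(round(audio_file_size_mb(44100, 16, 2, 60), 1))  # about 10.1 MB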
The most popular sound cards are from Creative Labs, such as the Sound Blaster 16 and AWE32; the AWE32 sound card supports 16 channels, 32 voices and 128 instruments.
(e) CD-ROM
A CD-ROM is an optical disc of 4.7 inches diameter that can hold up to 680 megabytes of data. It has become a standard in itself, basically for its massive storage capacity and faster data transfer rate. To access a CD-ROM, a special drive known as a CD-ROM drive is required. The term ROM stands for 'Read Only Memory': the material contained on the disc can be read as many times as you like, but the content cannot be changed.
As multimedia involves high-resolution graphics, high-quality video and sound, it requires a large amount of storage space and, at the same time, a medium that supports faster data transfer. The CD-ROM solves this problem by satisfying both requirements. Similar to the hard disk drive, the CD-ROM drive has certain specifications which will help you decide which drive best suits your multimedia requirements.
(i) Transfer Rate
Transfer rate is basically the amount of data the drive is capable of transferring at a sustained rate from the CD to the CPU, measured in KB per second. For example, a 1x drive is capable of transferring 150 KB of data per second from the CD to the CPU; in other words, a 1x CD drive will sustain a transfer rate of 150 KB/sec, where x stands for 150 KB/sec. This is the base measurement, and all higher rates are multiples of this number. The latest CD-ROM drives available are 64x, which means they can sustain a data transfer rate of 64 × 150 = 9600 KB, i.e. about 9.38 MB, per second from the CD to the CPU.
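The multiplier arithmetic can be expressed directly. The short Python sketch below (our own illustration, built on the 150 KB/s base rate quoted above) converts a drive's speed rating into sustained throughput:

def cd_transfer_rate(speed_multiplier):
    # base 1x rate is 150 KB/s; returns (KB/s, MB/s)
    kb_per_s = speed_multiplier * 150
    return kb_per_s, kb_per_s / 1024

print(cd_transfer_rate(1))    # (150, ~0.15 MB/s)
print(cd_transfer_rate(64))   # (9600, ~9.38 MB/s)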
(ii) Average Seek Time
The amount of time that elapses between a request and its delivery is known as the average seek time. The lower the value, the better the result; the time is measured in milliseconds, and a good access time is 150 ms.
Recently computer technology has made tremendous progress. You can now have CDs that can be written and read many times: you write your files onto a blank CD with a laser beam, and the written material can be read many times and can even be erased and re-written again. Basically, these re-writable CDs can be reused over and over.
(f) Scanner
Multimedia requires high-quality images and graphics, and it takes a lot of time to create them. However, there are ready-made sources such as real-life photographs, books, art, etc. from which one can easily digitize the required pictures. To convert these photographs to digital format, one needs a small piece of equipment called a scanner. Scanned images can be saved in a number of common file formats:
File Format - Explanation
PICT - A widely used format compatible with most Macintosh applications
JPEG - Joint Photographic Experts Group; a format that compresses files and lets you choose compression versus quality
TIFF - Tagged Image File Format; a widely used format compatible with both Macintosh and Windows systems
Windows BMP - A format commonly used on MS-DOS and MS-Windows computers
GIF - Graphics Interchange Format; a format used on the Internet, GIF supports only 256 colours
Scanners are available in various shapes and sizes, such as hand-held, feed-in, and flatbed types, for scanning in black-and-white only or in colour. Several reputed brands of scanner are available in the market.
(g) Touchscreen
As the name suggests, a touchscreen is used where the user is required to touch the surface of the screen or monitor. It is basically a monitor that allows the user to interact with the computer by touching the display screen. It uses beams of infrared light that are projected across the screen surface; interrupting the beams generates an electronic signal identifying the location of the touch, and the associated software interprets the signal and performs the required action. For example, touching the screen twice in quick succession works like double-clicking the mouse. Imagine how useful this can be for visually handicapped people who identify things by touching a surface. A touchscreen is normally not used for the development of multimedia; it is rather used in multimedia presentation arenas such as trade shows, information kiosks, etc.
Placing the media in perspective within the instructional process is an important role of the teacher and library professional. Following are possible areas of application of multimedia:
• Can be used as reinforcement
• Can be used to clarify or symbolize a concept
• Creates a positive attitude in individuals toward what they are learning, and the learning process itself can be enhanced
• The content of a topic can be more carefully selected and organized
• The teaching and learning can be more interesting and interactive
• The delivery of instruction can be more standardized
• The length of time needed for instruction can be reduced
• The instruction can be provided when and where desired or necessary
(i) Text
However, to give text special effects, one needs graphics software that supports this kind of job.
(ii) Graphics
Unlike text, which uses the universal ASCII format, graphics has no single agreed format; different formats suit different requirements. The most commonly used format for graphics is .BMP, or bitmap. The size of a graphic depends on the resolution it uses. A computer image is formed from pixels, or dots, on the screen, and these dots, combined with the number of colours and other aspects, make up its resolution. The resolution of an image or graphic is basically its pixel density and the number of colours it uses, and the size of the image depends on its resolution. A standard VGA (Video Graphics Array) screen can display a resolution of 640 × 480 = 307,200 pixels, and a Super VGA screen can display up to 1024 × 768 = 786,432 pixels. While developing multimedia graphics one should always keep in mind the image resolution and the number of colours to be used, as these have a direct bearing on the image size. If the image is bigger, it takes more time to load and also requires more memory for processing and more disk space for storage.
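As a rough illustration of that relationship, here is a Python sketch of our own (uncompressed bitmap sizes only, ignoring any file-header overhead):

def bitmap_size_kb(h_res, v_res, depth_bits):
    # pixels x bits per pixel / 8 bits per byte / 1024 bytes per KB
    return h_res * v_res * depth_bits / 8 / 1024

print(bitmap_size_kb(640, 480, 8))    # VGA with 256 colours: 300.0 KB
print(bitmap_size_kb(1024, 768, 24))  # SVGA with true colour: 2304.0 KB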
(iii) Animation
Moving images have an overpowering effect on human peripheral vision. The following are a few reasons for animation's popularity.
Animation is a set of static states related to each other by transitions. When something has two or more states, changes between states are much easier for users to understand if the transitions are animated instead of being instantaneous. An animated transition allows the user to track the mapping between different subparts through the perceptual system, instead of having to involve the cognitive system to deduce the mappings.
Sometimes opposite animated transitions can be used to indicate movement back and forth along some navigational dimension. One example used in several user interfaces is the use of zooming to indicate that a new object is "grown" from a previous one (e.g., a detailed view or property list opened by clicking on an icon) or that an object is closed or minimized to a smaller representation. Zooming out from the small object back to the full view indicates the reverse operation. Animation also maps naturally to phenomena that change over time; gradual change of land cover, for example, can be illustrated by showing a map with an animation of the covered area changing over time.
Animation can also be used to show multiple information objects in the same space. A typical example is client-side imagemaps with explanations that pop up as the user moves the cursor over each part of the image. Some types of information are easier to visualize with movement than with still pictures: consider, for example, how to visualize the tool used to remove pixels in a graphics application.
As you know, the computer screen is two-dimensional, so users can never get a full understanding of a three-dimensional structure from a single illustration, no matter how well designed. Animation can be used to emphasize the three-dimensional nature of objects and make it easier for users to visualize their spatial structure. The animation need not necessarily spin the object in a full circle; just slowly turning it back and forth a little will often be sufficient. The movement should be slow to allow the user to focus on the structure of the object.
You can also let users move three-dimensional objects themselves, but often it is better to determine in advance how best to animate a movement that provides optimal understanding of the object. This pre-determined animation can then be activated by simply placing the cursor over the object. User-controlled movement, on the other hand, requires the user to understand how to manipulate the object, which is inherently difficult with a two-dimensional control device like the mouse used with most computers (to be honest, 3D is never going to make it big in user interfaces until we get a true 3D control device).
Attracting attention
Finally, there are a few cases where the ability of animation to dominate the user’s visual awareness can be turned to an advantage in the interface. If the goal is to draw the user’s attention to a single element out of several, or to alert the user to updated information, then an animated headline will do the trick. Animated text should be drawn by a one-time animation (e.g., text sliding in from the right, growing from the first character, or smoothly becoming larger) and never by a continuous animation, since moving text is more difficult to read than static text. The user should be drawn to the new text by the initial animation and then left in peace to read it without further distraction.
One excellent piece of software for creating animation is Animator Pro, which provides tools to create impressive animation for multimedia development.
(iv) Video
Besides animation there is one more media element, known as video. With the latest technology it is possible to include video clips of any type in any multimedia application. The video clips may contain dialogue, sound effects and moving pictures, and they can be combined with audio, text and graphics for a multimedia presentation. Incorporating video in a multimedia package is more important, and more complicated, than the other media elements. One can procure video clips from various sources, such as existing video films, or even go for an outdoor video shoot.
Video from conventional sources is in analog format. To make it usable by a computer, the video clips need to be converted into a computer-understandable, i.e. digital, format. A combination of software and hardware makes it possible to convert analog video clips into digital format. This alone does not suffice, as the digitised video clips take a lot of hard disk space to store, depending on the frame rate used for digitisation. The computer reads a particular video clip as a series of still pictures called frames; thus a video clip is made of a series of separate frames, each slightly different from the previous one, and the computer reads each frame as a bitmap image. Generally there are 15 to 25 frames per second so that the movement appears smooth; with fewer frames than this, the movement of the images will not be smooth.
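A quick calculation shows why digitised video consumes disk space so quickly. The Python sketch below is our own illustration; the frame size, colour depth and frame rate are example values consistent with the figures above, not a prescription:

def raw_video_mb_per_second(h_res, v_res, depth_bits, frames_per_second):
    # bytes per frame x frames per second, converted to MB
    bytes_per_frame = h_res * v_res * depth_bits / 8
    return bytes_per_frame * frames_per_second / (1024 * 1024)

# 640 x 480 frames, 24-bit colour, 25 frames per second:
print(round(raw_video_mb_per_second(640, 480, 24, 25), 1))  # about 22.0 MB per second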
To cut down the space required, there are several modern technologies in the Windows environment; essentially, these technologies compress the video image so that less space is required.
The latest video compression software makes it possible to compress digitised video clips substantially, so that they take up less storage space. One more advantage of using digital video is that the quality will not deteriorate from copy to copy, as the digital video signal is made up of digital code rather than an electrical signal. Care should be taken while digitizing video from an analog source to avoid frame drops and distortion, and a good-quality video source should be used for digitization.
(v) Audio
Audio has a great role to play in multimedia development; it gives life to otherwise static media elements. Audio may include human speech, music, instrumental notes, natural sounds and many more. All of these can be used in any combination, as long as they give some meaning to their inclusion in multimedia. There are many ways in which these sounds can be incorporated into the computer.
The Graphics Kernel System (GKS) was developed in response to the need for a standardized method of developing graphics programs. It represents a standard graphics interface with a consistent syntax. Furthermore, GKS was designed so that it may be bound, by means of subroutines, to the most common programming languages, such as C, FORTRAN 77, PASCAL, and BASIC.
GKS resembles a programming language in the sense that it presents the programmer with a consistent set of reserved words, with specific language bindings, used within a specific syntactical structure. For example, the GKS reserved word for plotting points is POLYMARKER. (The reserved word POLYMARKER is followed by its corresponding parameters n, X, Y, where n is the number of points, X is the array of the x coordinates and Y is the array of the y coordinates.) Thus the GKS statements
X(1) = 3
Y(1) = 2
POLYMARKER (1, X, Y)
represent a language-independent command-data list that would plot the point (3, 2).
In an actual implementation with a programming language, in this case FORTRAN, the statements become
X(1) = 3
Y(1) = 2
CALL GPM (1, X, Y)
Because the GKS commands are independent of any particular implementation, the underlying graphics system can change without affecting application programs; for example, a system using the slope method of line plotting could easily be upgraded to make use of Bresenham’s line algorithm.
To avoid confusion, from this point on only the GKS names of commands will be shown. For example, the FORTRAN implementation CALL GPM (n, X, Y) will be shown only as POLYMARKER (n, X, Y).
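Since the notes cite Bresenham's line algorithm as the natural upgrade from slope-based plotting, here is a minimal sketch of the integer-only algorithm in Python (our own illustration; it is not part of GKS itself):

def bresenham(x0, y0, x1, y1):
    # Returns the list of pixels on the line from (x0, y0) to (x1, y1)
    # using only integer additions and comparisons.
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(bresenham(1, 2, 4, 8))  # pixels approximating the line from (1, 2) to (4, 8)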
8.8 GKS Primitives
The Graphics Kernel System is based on four basic primitives: POLYLINE, POLYMARKER, FILL AREA, and TEXT. The syntax of each primitive is as follows:
POLYLINE (n, X, Y)
POLYMARKER (n, X, Y)
FILL AREA (n, X, Y)
TEXT (x, y, "string")
where n = the number of points, X = the X array, Y = the Y array, (x, y) = the starting coordinates for the string, and "string" = the text that is to be output.
Each GKS primitive can have many attributes, which are set with an index number and the SET command. For example, the default value for POLYLINE INDEX is 1, and a POLYLINE INDEX of 1 generates a solid line. Therefore, if the X array = (1, 4) and the Y array = (2, 8), the following would generate a solid line from point (1, 2) to point (4, 8) (see Fig. 8.1):
SET POLYLINE INDEX (1)
POLYLINE (2, X, Y)
Figure 8.1: A solid line from (1, 2) to (4, 8) drawn with POLYLINE INDEX 1
The GKS POLYLINE INDEX of 2 creates a dashed line. Thus, using the same data, X array = (1, 4) and Y array = (2, 8), the following code would generate a dashed line from (1, 2) to (4, 8) (see Fig. 8.2):
SET POLYLINE INDEX (2)
POLYLINE (2, X, Y)
Figure 8.2: A dashed line from (1, 2) to (4, 8) drawn with POLYLINE INDEX 2
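To make the index mechanism concrete, here is a small Python stand-in of our own for the GKS calls used above. GKS itself is bound to languages such as FORTRAN or C; the function names below and the use of matplotlib are our assumptions, chosen purely for illustration:

import matplotlib.pyplot as plt

LINE_STYLES = {1: "-", 2: "--"}   # POLYLINE INDEX 1 = solid, 2 = dashed
current_index = 1

def set_polyline_index(i):
    global current_index
    current_index = i

def polyline(n, xs, ys):
    # Draw a connected line through the first n points with the current style.
    plt.plot(xs[:n], ys[:n], linestyle=LINE_STYLES[current_index], color="k")

X, Y = [1, 4], [2, 8]
set_polyline_index(1); polyline(2, X, Y)   # solid, as in Fig. 8.1
set_polyline_index(2); polyline(2, X, Y)   # dashed, as in Fig. 8.2
plt.show()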
In addition to the POLYLINE INDEX, there are also index settings for POLYMARKER, FILL AREA, and TEXT. In each case, the SET function INDEX is used to set the attributes of the command prior to its use. For example, using the data X array = (2, 6, 6, 2) and Y array = (2, 2, 6, 6) with the commands
POLYMARKER (4, X, Y)
FILL AREA (4, X, Y)
POLYMARKER (4, X, Y)
produces the results shown in Fig. 8.3.
Figure 8.3: The same four data points drawn with POLYMARKER and FILL AREA, panels (a) through (c)
Note that FILL AREA always connects the first and last points in the array (see Fig. 8.3). Other attributes supported by GKS allow the programmer to set the width of lines, change color, select the style of marker used by POLYMARKER, and change the style and pattern used by FILL AREA. For example, the attribute value 3 for POLYMARKER gives an asterisk [see Fig. 8.3(c)].
Text font is selected with the SET TEXT INDEX (n) command. However, text has several other attributes, which can be changed with other SET commands. These SET commands allow the programmer to select character height, slant, color, spacing, and angle. Unique to text are the SET CHARACTER UP VECTOR (X, Y) and the SET TEXT PATH (path) commands. CHARACTER UP VECTOR (X, Y) sets the slope at which text will be printed; here, X represents the change in x, and Y the change in y (see, for example, Fig. 8.4).
Figure 8.4: The string "HELLO" drawn with different CHARACTER UP VECTOR settings
Figure 8.5: Text paths, panels (a) through (d)
The character path attribute sets the direction in which the characters will be printed. The programmer can set the character path UP, DOWN, LEFT, or RIGHT [see Figs. 8.5(a) through 8.5(d)].
GKS distinguishes between the window, which selects the portion of world-coordinate space to be displayed, and the viewport, which specifies the region of the display surface onto which that window is mapped (see Fig. 8.6).
Figure 8.6: A world-coordinate window mapped onto a display viewport
The WINDOW command thus specifies how data will be mapped onto the display. For example, the following would result in Fig. 8.7:
WINDOW (0, 20, 0, 10)
POLYMARKER (1, X, Y)
As GKS supports multiple viewports and windows, a provision to indicate which viewport is to be written to is required; viewport selection is done with the SELECT command.
Figure 8.7: A point plotted in a window whose world coordinates run from (0, 0) to (20, 10)
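The window-to-viewport mapping itself is a simple linear transformation. The following Python sketch (our own illustration, not GKS code; the tuple convention is assumed) shows the arithmetic a graphics system performs for each point:

def window_to_viewport(x, y, win, vp):
    # win and vp are (xmin, xmax, ymin, ymax) tuples; scale each coordinate
    # by the ratio of viewport extent to window extent, then translate.
    wx0, wx1, wy0, wy1 = win
    vx0, vx1, vy0, vy1 = vp
    sx = (vx1 - vx0) / (wx1 - wx0)
    sy = (vy1 - vy0) / (wy1 - wy0)
    return (vx0 + (x - wx0) * sx, vy0 + (y - wy0) * sy)

# A point in a (0, 20, 0, 10) window mapped to a (10, 100, 10, 100) viewport:
print(window_to_viewport(10, 5, (0, 20, 0, 10), (10, 100, 10, 100)))  # (55.0, 55.0)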
While the four primitives (POLYLINE, POLYMARKER, FILL AREA, and TEXT) along with their attributes allow the programmer to construct a wide range of images, most developers would like to have access to some other commonly used primitives. For example, in a charting application, an axis primitive would be convenient. An axis command should allow control over tick mark intervals, length of tick marks, style of tick marks, length of axes, and the axes' intersection point. In the simplest case, no tick marks are used; the axes run the length of the current window and intersect at the origin. The resulting code for simple axes might appear as follows:
X(1) = XMIN
X(2) = XMAX
Y(1) = YMIN
Y(2) = YMAX
XA(1) = X(1)
YA(1) = 0
XA(2) = X(2)
YA(2) = 0
POLYLINE (2, XA, YA)
XA(1) = 0
YA(1) = Y(1)
XA(2) = 0
YA(2) = Y(2)
POLYLINE (2, XA, YA)
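For comparison, the same simple-axes idea can be sketched in Python (our own illustration, using matplotlib rather than GKS; the function name is hypothetical):

import matplotlib.pyplot as plt

def simple_axes(xmin, xmax, ymin, ymax):
    # x-axis: one polyline from (xmin, 0) to (xmax, 0);
    # y-axis: one polyline from (0, ymin) to (0, ymax).
    plt.plot([xmin, xmax], [0, 0], color="k")
    plt.plot([0, 0], [ymin, ymax], color="k")

simple_axes(-10, 10, -5, 5)
plt.show()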
8.12 Summary
4. A hypertext system consists of nodes - which contain the text - and links
between the nodes, which define the paths the user can follow to access the
text in non-sequential ways. The links represent associations of meaning and
can be thought of as cross-references.
8. Multimedia authoring tools, used for the development of multimedia applications, represent an essential part of the system for the organization and arrangement of multimedia project elements, such as graphics, sound, animation and video clips. They are used for designing interactivity and the user interface, for presenting the project on the screen and for synthesizing the multimedia elements into a cohesive project.
9. GKS represents a standard graphics interface with consistent syntax.
10. The Graphics Kernel System is based on four basic primitives: POLYLINE, POLYMARKER, FILL AREA, and TEXT.
11. The GKS WINDOW command specifies how world-coordinate data are mapped onto the display.
14. The GKS clipping function can be set to either of two states, CLIP or NOCLIP.
15. The Graphics Kernel System allows multiple workstations to be used by a single application.
4. How can the original sound wave be reconstructed from sampled data?
5. Explain the concept of video in multimedia.
8. What do the elements in the X and Y arrays represent in the POLYMARKER command?
10. FILL AREA will generate a square with only four instead of five elements in the X and Y arrays. Why?
11. What are some of the types of attributes that can be changed for (a) FILL AREA, (b) POLYMARKER, and (c) POLYLINE?
12. What are some of the types of attributes that can be changed for TEXT?
13. What would the display look like after the following commands?
14. How would the display appear after the following commands were executed?