Computer Graphics Note by Maruti Dhungana

This document provides an introduction and history of computer graphics. It discusses how computer graphics originated in the 1960s and has since become widespread in applications such as CAD, scientific visualization, business visualization, and presentation graphics. The key developments in computer graphics hardware and software are outlined from the early days of plotting to modern interactive graphics using personal computers. Computer graphics is now widely used across many fields such as engineering, science, medicine, education, and entertainment to visualize and analyze data.


Unit 1 (5 Hrs)

Introduction

Computers have become a powerful tool for the rapid and economical production of pictures.
There is virtually no area in which graphical displays cannot be used to some advantage, and so
it is not surprising to find the use of computer graphics so widespread. Computer graphics is the
creation, display, manipulation, and storage of pictures and experimental data for proper
visualization using a computer. It is the pictorial synthesis of real and/or imaginary objects from
their computer-based models (or data sets). William Fetter coined the term "computer graphics" in
1960 to describe the new design methods he was pursuing at Boeing. He created a series of widely
reproduced images on a pen plotter exploring cockpit design, using a 3D model of the human body.

These images of objects are generated from various fields such as science, engineering,
medicine, business, industry, government, art, entertainment, advertising, education, and
training. Computer graphics today is largely interactive, that is, the user controls the contents,
structure, and appearance of images of the objects by using input devices, such as keyboard,
mouse, or touch-sensitive panel on the screen.

History of Computer Graphics

The history of computer graphics can be studied as a chronicle of hardware and software
developments. The evolution of graphics is described in the following points.

- Crude plotting on hardcopy devices such as teletypes and line printers dates from the early days
of computing.

- The Whirlwind computer, developed in 1950 at the Massachusetts Institute of Technology (MIT),
had computer-driven CRT displays for output, both for operator use and for cameras producing
hard copy.

- The SAGE air-defense system, developed in the mid-1950s, was the first to use command-and-control
CRT display consoles on which operators identified targets with light pens.

- The beginnings of modern interactive graphics, however, are found in Ivan Sutherland's
seminal doctoral work on the Sketchpad drawing system. He introduced data structures for
storing symbol hierarchies built up via replication of standard components (used for drawing
circuit symbols). He also developed interaction techniques that use the keyboard and light pen for
making choices, pointing, and drawing, and he formulated many other ideas and techniques that are
still in use today.

- By the mid-sixties, a number of research projects and commercial products had appeared, as the
potential of CAD activities in the computer, automobile, and aerospace industries grew enormously
for automating drafting-intensive activities. The General Motors system for automobile design and

Compiled By : Er. Marut Dhungana

Downloaded from CSIT Tutor


the Itek Digitek system for lens design were pioneers in showing the value of graphical
interaction in the iterative design cycles common in engineering.

- Due to the high cost of graphics hardware, expensive computing resources, the difficulty of
writing large interactive programs, and many other reasons, human-computer interaction was
still done primarily in batch mode using punched cards. After the advent of graphics-based
personal computers such as the Apple Macintosh and the IBM PC, the costs of both hardware and
software were driven down. Millions of graphics computers were sold for use in offices and
homes. Thus interactive graphics, as "the window on the computer," became an integral
part of the PC, featuring graphical interaction (GUI). Interactive graphics first became affordable
with the advent of the direct-view storage tube (DVST), which replaced the buffer-and-refresh
process and eliminated all flicker. Before this, buffer memory and processors were fast
enough only to refresh at 30 Hz, and only a few thousand lines could be drawn without noticeable
flicker.

- Another major hardware advance of the late sixties was attaching the display to a minicomputer,
relieving the central time-sharing computer of the heavy demands of refreshed display devices
(such as user-interaction handling and updating the image on the screen).

- In 1968, another such device was invented: refresh display hardware for geometric
transformations that could scale, rotate, and translate points and lines on the screen in real
time, perform 2D and 3D clipping, and produce parallel and perspective projections.

- The development of inexpensive raster graphics, based on television technology, in the early
seventies contributed even more to the growth of the field. Raster displays store display
primitives (lines, characters, or areas) in a refresh buffer. The development of graphics
cannot be understood apart from the study of graphics input technology. The clumsy, fragile light
pen has been replaced by the mouse, the tablet, the touch panel, digitizers, and other devices.
Interaction with the computer using these devices requires no knowledge of programming and only a
little keyboard use; the user makes choices by selecting menus, icons, and check options, places
predefined symbols on the screen, and draws by indicating consecutive endpoints to be connected
by lines or interpolated by smooth curves, and fills closed areas bounded by polygons or point
contours with shades of gray, colors, or various patterns. Computer graphics has now become an
integral and essential technology in computer systems.

Advantages of Computer Graphics

In every field, we need to analyze large amounts of information and study the behavior of
certain processes. Numerical simulations on supercomputers, satellite cameras, and other sources
are amassing large data files faster than they can be interpreted. Scanning these huge
amounts of data to determine their nature and relationships is a tedious job. But if the data
are converted to a visual form, it is much easier to draw conclusions immediately.
Producing graphical presentations for scientific, engineering, and medical data sets and processes
is generally referred to as scientific visualization. If the data sets are concerned with commerce,
industry and other non-scientific areas, it is called business visualization. Computer graphics has
great advantage in visualizing such data. Today, almost all interactive application programs,
even those for manipulating text (e.g. word processor) or numerical data (e.g. spreadsheet
programs), use graphics extensively in the user interface and for visualizing and manipulating the
application-specific objects.

Even people who do not use computers encounter computer graphics in TV commercials and as
cinematic special effects. Thus computer graphics is an integral part of all computer user
interfaces, and is indispensable for visualizing 2D and 3D objects in almost all areas such as
education, science, engineering, medicine, commerce, the military, advertising, and
entertainment. The theme is that learning how to program and use computers now includes
learning how to use simple 2D graphics. There is virtually no area in which graphical displays
cannot be used to some advantage.

Area of Applications

Computer graphics started with the display of data on hardcopy plotters and CRT screens, and has
grown to include the creation, storage, and manipulation of models and images of objects. We find
computer graphics used in areas as diverse as science, engineering, medicine, business, industry,
government, art, entertainment, education, and others.

COMPUTER AIDED DESIGN (CAD):

A major use of computer graphics is in design processes, particularly for engineering and
architectural systems, but almost all products are now computer designed. Generally referred to
as CAD, computer-aided design methods are now routinely used in the design of buildings,
automobiles, aircraft, watercraft, spacecraft, computers, textiles, and many, many other products.
In CAD, interactive graphics is used to design components and systems of mechanical, electrical,
electromechanical and electronic devices including structures such as buildings, automobile
bodies, airplanes, VLSI chips, optical systems, and telephone and computer networks. The
emphasis is on interacting with a computer-based model of the component or system being
designed in order to test, for example, its structural, electrical, or thermal properties. The
model is interpreted by a simulator that feeds back the behavior of the system to the user for
further interactive design and test cycles. Some mechanical parts are manufactured by describing how
the surfaces are to be formed with machine tools. Numerically controlled machine tools are then
set up to manufacture the parts according to these construction layouts.

Architects use interactive graphics methods to lay out floor plans that show the positioning of
rooms, doors, windows, stairs, shelves, and other building features. An electrical designer can
then try out arrangements for wiring, electrical outlets, and other systems to determine space
utilization in a building. Realistic displays then allow architects and their clients to study
the appearance of a building and even take a simulated "walk" through rooms or around the building.

PRESENTATION GRAPHICS:

Another major application area of computer graphics is presentation graphics.

Presentation graphics is used to produce illustrations for reports or to generate transparencies
for use with projectors. It is commonly used to summarize financial, statistical, mathematical,
scientific, and economic data for research reports, managerial reports, and other types of
reports. Typical examples are bar charts, line graphs, surface graphs, pie charts, and other
displays showing relationships between multiple variables. 3D graphics are often used simply for
effect; they can provide a more diagrammatic or more attractive presentation of data relationships.

Computer Art:

Computer graphics is used to generate art, and is widely used in both fine art and commercial
art applications. Traditional fine art is drawn by the artist's hand, and its quality reflects
the artist's skill. Artists use a variety of computer methods, including special-purpose
hardware, paintbrush programs, and other paint packages. Moreover, artists use a touchpad,
stylus, or digitizer to draw pictures, and the movement of the drawing device is captured by the
input hardware. Computer art is often generated using mathematical functions or algorithms, and
is not as realistic as hand-drawn fine art. Commercial art uses animation to demonstrate or
present commercial products to the public. Fine artists use a variety of computer techniques to
produce images, created using a combination of 3D modeling packages, texture mapping, drawing
programs, and CAD software.

These techniques for generating electronic images are also applied in commercial art for logos
and other designs, page layouts combining text and graphics, TV advertising, sports, and other
areas. Animations are also used frequently in advertising and TV commercials, produced frame
by frame, where each frame of the motion is rendered and saved as an image file. A common
graphics method employed in many commercials is morphing, where one object is transformed
into another.

Education and training:

Computer graphics is used in education and training for making it more effective and more
illustrative. E.g. if a teacher is to teach bonding of molecules or electron jump from higher
energy state to lower energy state or the structure of gene, then he can demonstrate these
concepts using computer graphics software or presentations. Another example could be taken for
surgery. A student can learn surgery using data gloves and realistic computer graphics. The cost
of education as well as the risk to human life is reduced. Other examples are flight simulators
and driving simulators for pilot and driver training. Models of physical systems, physiological
systems, population trends, or equipment, presented as color-coded diagrams, also help trainees
to understand the operation of a system.

Entertainment:

Computer graphics methods are now commonly used in making motion pictures, music videos, and TV
shows. Images are drawn in wire-frame form and then shaded with rendering methods to produce
solid surfaces. Music videos use graphics in several ways: graphics objects can be combined with
live action. Computer graphics is also used to introduce virtual characters into movies, such as
the characters in "Ice Age" and "Avatar".

Visualization:

Scientists, engineers, medical personnel, business analysts, and others often need to analyze large
amounts of information or to study the behavior of certain processes. Numerical simulations
carried out on supercomputers frequently produce data files containing thousands and even
millions of data values. Similarly, satellite cameras and other sources are amassing large data
files faster than they can be interpreted. Scanning these large sets of numbers to determine
trends and relationships is a tedious and ineffective process. But if the data are converted to
a visual form, the trends and patterns are often immediately apparent. Some methods generate
very large amounts of data; for example, a survey of one million people's toothpaste preferences
generates a large amount of data. Analyzing the properties of the whole data set directly is
difficult. Therefore, graphical computer systems are used to visualize large amounts of
information.

Image Processing:

Image processing applies techniques to modify or interpret existing pictures, such as photographs
and TV scans. Two principal applications of image processing are (1) improving picture quality
and (2) machine perception of visual information, as used in robotics. An image can be created
using a simple paint program or can be fed into the computer by scanning an image. These
pictures/images often need to be modified to improve their quality. For image/pattern-recognition
systems, images need to be converted into a specified format so that the system can recognize the
meaning of the picture. For example, scanners with OCR features require letters similar to a
standard font set. Medical
applications also make extensive use of image processing techniques for picture enhancements,
in tomography and in simulations of operations.

Graphical user Interface:

It is common now for software packages to provide a graphical interface. GUIs have become key
factors for the success of the software or operating system. A major component of a graphical
interface is a window manager that allows a user to display multiple-window areas. Each
window can contain a different process that can contain graphical or non-graphical displays. To
make a particular window active, we simply click in that window using an interactive pointing
device. Interfaces also display menus and icons for fast selection of processing options or
parameter values. An icon is a graphical symbol that is designed to look like the processing
option it represents. The advantages of icons are that they take up less screen space than
corresponding textual descriptions and they can be understood more quickly if well designed.
Menus contain lists of textual descriptions and icons. 3D GUIs use graphical objects called
gizmos to represent certain objects or processes involved in human-computer communication in
virtual environments. A great deal of aesthetic (color) and psychological analysis has been done
to create user-friendly GUIs. The most popular GUIs are window-based.

Hardware and Software for Computer Graphics

Hardware

Input Devices:

Input devices are used to feed data or information into a computer system. They provide the
input upon which reactions and outputs are generated. Data input devices such as keyboards are
used to provide additional data to the computer, whereas pointing and selection devices such as
the mouse, light pens, and touch panels provide visual and positional input to the application.

1. Tablet:

A tablet is a digitizer. In general, a digitizer is a device used to scan over an object and to
input a set of discrete coordinate positions. These positions can then be joined with
straight-line segments to approximate the shape of the original object. A tablet digitizes an
object by detecting the position of a movable stylus (a pencil-shaped device) or puck (like a
mouse, with cross hairs for sighting positions) held in the user's hand. A tablet is a flat
surface whose size varies from about 6 by 6 inches up to 48 by 72 inches or more. The accuracy
of a tablet is usually better than 0.2 mm. There are mainly three types of tablets.
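The basic digitizing idea, joining sampled stylus positions with straight-line segments to approximate a shape, can be sketched as follows (a minimal illustration; the sample points and the helper name are made up):

```python
# Approximate a digitized outline by joining successive stylus samples
# with straight-line segments, and measure the resulting polyline.
from math import hypot

def polyline_length(points):
    """Total length of the segments joining successive samples."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Hypothetical stylus samples tracing a unit square.
samples = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(polyline_length(samples))  # 4.0, the perimeter of the square
```

Denser sampling gives a closer approximation of curved outlines, at the cost of more stored coordinates.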

a. Electrical tablet:

A grid of wires on ¼ to ½ inch centers is embedded in the tablet surface and electromagnetic
signals generated by electrical pulses applied in sequence to the wires in the grid induce an
electrical signal in a wire coil in the stylus (or puck). The strength of the signal induced by each
pulse is used to determine the position of the stylus. The signal strength is also used to determine
roughly how far the stylus is from the tablet. When the stylus is within ½ inch from the tablet, it
is taken as "near" otherwise it is either "far" or "touching". When the stylus is "near" or
"touching", a cursor is usually shown on the display to provide visual feedback to the user. A
signal is sent to the computer when the tip of the stylus is pressed against the tablet, or when any
button on the puck is pressed. The tablet reports its information 30 to 60 times per second.

b. Sonic tablet:

The sonic tablet uses sound waves to couple the stylus to microphones positioned on the
periphery of the digitizing area. An electrical spark at the tip of the stylus creates sound bursts.
The position of the stylus or the coordinate values is calculated using the delay between when the
spark occurs and when its sound arrives at each microphone. The main advantage of the sonic
tablet is that it does not require a dedicated working area: the microphones can be placed on any
surface to form the "tablet" work area. This makes it easy to digitize drawings in thick books,
which is inconvenient with an electrical tablet because the stylus cannot get close enough to the
tablet surface.
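The position calculation described above can be sketched for two microphones on a common baseline: each delay gives a distance circle around a microphone, and the stylus sits at the circles' intersection (a simplified illustration; the sound speed, baseline, and delays are assumed values, and a real tablet would also calibrate for temperature):

```python
# Locate the stylus from sound-arrival delays at two microphones.
# Mic A at (0, 0), mic B at (baseline, 0); each delay gives a distance
# r = v * t, and the stylus lies where the two circles intersect
# (taking the y >= 0 solution).
from math import sqrt

SPEED_OF_SOUND = 343_000.0  # mm/s in air at room temperature (approximate)

def stylus_position(t1, t2, baseline_mm):
    r1 = SPEED_OF_SOUND * t1
    r2 = SPEED_OF_SOUND * t2
    x = (r1**2 - r2**2 + baseline_mm**2) / (2 * baseline_mm)
    y = sqrt(max(r1**2 - x**2, 0.0))
    return x, y

# Hypothetical delays for a stylus at roughly (300, 400) mm with
# microphones 500 mm apart.
print(stylus_position(500 / SPEED_OF_SOUND, 447.21 / SPEED_OF_SOUND, 500))
```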

c. Resistive tablet:

The tablet is just a piece of glass coated with a thin layer of conducting material. When a
battery-powered stylus is activated at a certain position, it emits high-frequency radio signals,
which induce signals in the conducting layer. The strength of the signal received at the edges
of the tablet is used to calculate the position of the stylus. Some resistive tablets are
transparent, and thus can be backlit for digitizing X-ray films and photographic negatives. A
resistive tablet can even be used to digitize objects on a CRT, because it can be curved to the
shape of the CRT. The mechanisms used in electrical or sonic tablets can also be used to
digitize 3D objects.

2. Touch panel

The touch panel allows the user to point at the screen directly with a finger to move the cursor
around the screen or to select icons. The following are the most widely used touch panels.

a. Optical touch panel

It uses a series of infrared light-emitting diodes (LEDs) along one vertical edge and one
horizontal edge of the panel. The opposite vertical and horizontal edges contain photodetectors
to form a grid of invisible infrared light beams over the display area. Touching the screen
breaks one or two vertical and horizontal light beams, thereby indicating the finger's position.
The cursor is then moved to this position, or the icon at this position is selected. If two
parallel beams are broken, the finger is presumed to be centered between them; if only one is
broken, the finger is presumed to be on the beam. This is a low-resolution panel, offering 10 to
50 positions in each direction.
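The beam-breaking logic above can be sketched along one axis (a toy illustration; the beam spacing and function name are invented):

```python
# Estimate a finger coordinate along one axis of an optical touch panel
# from the indices of interrupted beams: centered between two broken
# beams, or directly on the beam if only one is broken.
def finger_coordinate(broken, beam_spacing_mm=10.0):
    """broken: sorted indices of interrupted beams along one axis."""
    if not broken:
        return None                                  # no touch on this axis
    if len(broken) == 1:
        return broken[0] * beam_spacing_mm           # finger on the beam
    # Two adjacent beams broken: midpoint of the broken span.
    return (broken[0] + broken[-1]) / 2 * beam_spacing_mm

print(finger_coordinate([4]))      # 40.0 mm: finger on beam 4
print(finger_coordinate([4, 5]))   # 45.0 mm: centered between beams 4 and 5
```

The same calculation is done independently for the vertical and horizontal beam grids to obtain the full (x, y) position.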

b. Sonic panel:

Bursts of high-frequency sound waves traveling alternately horizontally and vertically are
generated at the edge of the panel. Touching the screen causes part of each wave to be reflected
back to its source. The screen position at the point of contact is then calculated using the time
elapsed between when the wave is emitted and when it arrives back at the source. This is a high-
resolution touch panel having about 500 positions in each direction.
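The elapsed-time calculation can be sketched as follows (a simplified illustration; the wave speed is an assumed placeholder, since the actual speed depends on the panel's acoustic medium):

```python
# Sonic touch panel: convert the round-trip echo time of a sound wave
# into the touch position along one axis.
SPEED = 343_000.0  # mm/s; the real surface-wave speed is hardware-specific

def touch_position(elapsed_s):
    # The wave travels to the finger and is reflected back, so the
    # one-way distance is half the round-trip distance.
    return SPEED * elapsed_s / 2

print(touch_position(0.001))  # about 171.5 mm from the emitting edge
```

Alternating horizontal and vertical bursts give the two coordinates of the touch point.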

c. Electrical touch panel:

It consists of two slightly separated transparent plates, one coated with a thin layer of
conducting material and the other with resistive material. When the panel is touched with a
finger, the two plates make contact at the point of touch, thereby registering the touched
position. The resolution of this touch panel is similar to that of the sonic touch panel.

3. Light pen

It is a pencil-shaped device used to determine the coordinates of a point on the screen where it
is activated, for example by pressing its button. In a raster display, Y is set at Ymax and X
changes from 0 to Xmax for the first scan line. For the second line, Y decreases by one and X
again changes from 0 to Xmax, and so on. When the activated light pen "sees" a burst of light at
a certain position as the electron beam hits the phosphor coating at that position, it generates
an electric pulse, which is used to save the video controller's X and Y registers and interrupt
the computer. By reading the saved values, the graphics package can determine the coordinates of
the position seen by the light pen. Because of the following drawbacks, light pens are not
popular nowadays:

 The light pen obscures the screen image as it is pointed at the required spot
 Prolonged use can cause arm fatigue
 It cannot report the coordinates of a point that is completely black; as a remedy, one can
display a dark blue field in place of the regular image for a single frame time
 It sometimes gives false readings due to background lighting in the room
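The register-reading step can be sketched as a small coordinate conversion that follows the scan order described above (a hypothetical 640 x 480 raster is assumed):

```python
# When the light pen fires, the video controller's scan counters are
# latched; converting them to screen coordinates follows the scan order:
# Y starts at Ymax for the top scan line and decreases line by line.
YMAX = 479  # hypothetical 640 x 480 raster

def pen_coordinates(scanline, pixel):
    """scanline: lines completed since top of frame; pixel: X counter."""
    x = pixel
    y = YMAX - scanline   # top scan line corresponds to Ymax
    return x, y

print(pen_coordinates(0, 0))      # (0, 479): top-left corner
print(pen_coordinates(479, 639))  # (639, 0): bottom-right corner
```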

4. Keyboard

A keyboard creates a code, such as ASCII, uniquely corresponding to each pressed key. It usually
consists of alphanumeric keys, function keys, cursor-control keys, and a separate numeric pad.
It is used to move the cursor, to select menu items, and to invoke pre-defined functions. In
computer graphics, the keyboard is mainly used for entering screen coordinates and text, and for
invoking certain functions. Nowadays, ergonomically designed keyboards with removable palm rests
are available; the slope of each half of the keyboard can be adjusted separately.

5. Mouse

A mouse is a small hand-held device used to position the cursor on the screen. Mice are relative
devices; that is, they can be picked up, moved in space, and then put down again without any
change in the reported position. For this, the computer maintains the current mouse position,
which is incremented or decremented by the mouse movements. The following kinds of mice are the
most used in computer graphics.
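The bookkeeping for a relative device can be sketched as follows (a minimal illustration; the screen size is an assumed value):

```python
# A mouse is a relative device: the computer keeps the current cursor
# position and applies each reported (dx, dy) movement, clamped to the
# screen. Lifting and repositioning the mouse produces no movement
# report, so the cursor position is unchanged.
WIDTH, HEIGHT = 640, 480  # hypothetical screen size

def apply_motion(pos, dx, dy):
    x = min(max(pos[0] + dx, 0), WIDTH - 1)
    y = min(max(pos[1] + dy, 0), HEIGHT - 1)
    return (x, y)

pos = (100, 100)
pos = apply_motion(pos, 30, -20)    # normal movement
print(pos)                          # (130, 80)
pos = apply_motion(pos, 10_000, 0)  # large movement is clamped at the edge
print(pos)                          # (639, 80)
```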

a. Mechanical mouse

When a roller in the base of a mechanical mouse is moved, a pair of orthogonally arranged
toothed wheels, each placed between an LED and a photodetector, interrupts the light path. The
number of interrupts so generated is used to report the mouse movements to the computer.

b. Optical mouse

Instead of a moving ball or wheel, an optical mouse uses a tiny camera to send input to a
computer. The first optical mouse projected a beam of light that reflected off of a special mouse
pad onto a sensor. The pad was made of reflective material and covered with a grid of lines that
disrupted the beam of light. A sensor sent the computer a signal every time the beam was
disrupted, and the cursor was then moved to the appropriate position. This kind of mouse didn't
work very well for a few reasons. For one thing, the mouse had to be held at a precise angle to
the mouse pad to align the light, the grid and the sensor correctly. In addition, if the mouse pad
was lost or damaged, the mouse was useless.

Subsequent optical mice have been much more stable. Inside such a mouse, a red light-emitting
diode (LED) shines a light that bounces off of a surface and then gets detected by a
complementary metal-oxide semiconductor (CMOS) sensor, which takes a picture 1,500 times
per second. These pictures are sent to a digital signal processor (DSP). An optical mouse has to
do more thinking than a mechanical mouse, since part of its job is to analyze the pictures it takes
for patterns. In this sense, the mouse itself is actually a simple computer. Based on how the
patterns in the mouse's sample images change from picture to picture, the DSP determines where
the mouse cursor should move and sends those coordinates to the computer. The computer then
moves the cursor accordingly.

(Hard Copy, Display Technologies)

Typically, the primary output device in a graphics system is a video monitor. The operation of
most classic video monitors is based on the standard cathode-ray tube (CRT) design, although
several other technologies exist and solid-state monitors have become predominant.

Cathode Ray Tube (CRT)

CRT is the most common display devices on computer today. A CRT is an evacuated glass tube,
with a heating element on one end and a phosphor-coated screen on the other end. The primary
components of an electron gun in a CRT are the heated metal cathode and a control grid. Heat is
supplied to the cathode by directing a current through a filament (a coil of wire) inside the
cylindrical cathode structure. Heating causes electrons to be "boiled off" the hot cathode
surface. In
the vacuum inside the CRT envelope, the free, negatively charged electrons are then accelerated
toward the phosphor coating by a high positive voltage.

When a current flows through this heating element (filament), it heats up, and electrons are
boiled off the cathode. These electrons are attracted by the strong positive charge on the outer
surface of the focusing anode cylinder. Repelled by the weaker negative charge inside the
cylinder, the electrons are forced into a beam and accelerated along the inner cylinder walls
toward the anode, just as water speeds up when it flows through a small-diameter pipe.

This fast, forward-moving electron beam is called a cathode ray. A cathode ray tube is shown in
the figure below.

There are two sets of weakly charged deflection plates, the plates in each set oppositely
charged, one positive and one negative. The first set displaces the beam up and down, and the
second displaces it left and right. The electrons fly out of the neck of the tube until they
smash into the phosphor coating on the other end. When electrons strike the phosphor coating,
the phosphor emits a small spot of light at each position contacted by the electron beam. The
glowing positions are used to represent the picture on the screen.

The amount of light emitted by the phosphor coating depends on the number of electrons striking
the screen. The brightness of the display is controlled by varying the voltage on the control grid.

Random Scan Display System

Random Scan (vector display system or Stoke writing or Calligraphic systems):

Random scan system uses an electron beam which operates like a pencil to create a line image on
the CRT. The image is constructed out of a sequence of straight line segments. Each line
segment is drawn on the screen by directing the beam to move from one point on screen to the
next, where each point is defined by its x and y coordinates. After drawing the picture, the
system cycles back to the first line and redraws all the lines of the picture 30 to 60 times
each second. When operated as a random-scan display unit, a CRT has the electron beam directed
only to the parts of the screen where a picture is to be drawn. Random-scan monitors draw a
picture one line at a time and for this reason are also referred to as vector displays (or stroke-
writing or calligraphic displays).

Architecture of a simple random scan system

An application program is input and stored in the system memory along with the graphics package.
Graphics commands in the application program are translated by the graphics package into a
display file stored in the system memory. This display file is then accessed by the display
processor to refresh the screen. The display processor in a random-scan system is sometimes
referred to as a display processing unit or a graphics controller. Graphics patterns are drawn
on a random-scan system by directing the electron beam along the component lines of the picture.
The buffer stores the computer-produced display list (or display program), which contains point-
and line-plotting commands with (x, y) or endpoint coordinates, as well as character-plotting
commands. The commands for plotting points, lines, and characters are interpreted by the display
processor. Lines are defined by the values of their coordinate endpoints, and these input
coordinate values are converted to x and y deflection voltages. A scene is then drawn one line
at a time by positioning the beam to fill in the line between specified endpoints. The display
processor sends the digital point coordinates to a vector generator, which converts the digital
coordinate values to analog voltages for the beam-deflection circuits that direct the electron
beam to write on the CRT's phosphor coating.

The main principle of a vector system is that the beam is deflected from endpoint to endpoint as
dictated by the arbitrary order of the display commands; hence the term random scan. Since the
light output of the phosphor decays in tens or hundreds of microseconds, the display processor
must cycle through the display list to refresh the phosphor at least 30 times per second (30 Hz)
to avoid flicker; hence the buffer holding the display list is usually called a refresh buffer.
The CRT beam in this system is adjusted so that the electron beam hits only the spots where
graphics are to be drawn. Thus the refresh rate in this system depends on the number of lines to
be displayed. Random-scan displays are designed to draw all the component lines of a picture 30
to 60 times per second. A pen plotter operates in a similar way and is an example of a
random-scan, hard-copy device.

Advantage

It produces smooth line drawings because the CRT beam directly follows the line path defined
by the stored line-drawing commands. Vector display systems are mostly used for line-drawing
applications.

Disadvantages:

When the number of commands in the buffer grows large, the system takes a long time to
process and draw the picture. It also cannot apply shading features, so it cannot display realistic
shaded scenes.

Raster Graphics

In a raster-scan system, the electron beam is swept across the screen, one row at a time from top
to bottom. As the electron beam moves across each row, the beam intensity is turned on and off
to create a pattern of illuminated spots. Picture definition is stored in a memory area called the
refresh buffer or frame buffer. This memory area holds the set of intensity values for all the


screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on
the screen one row (scan line) at a time. Each screen point is referred to as a pixel. The capability
of a raster-scan system to store intensity information for each screen point makes it well suited
for the realistic display of scenes containing subtle shading and color patterns. Home television
sets and printers are examples of other systems using raster-scan methods. In raster scan, each
frame is displayed in two passes using an interlaced refresh procedure. In the first pass, the beam
sweeps across every other scan line from top to bottom. Then, after the vertical retrace, the
beam sweeps out the remaining scan lines. Interlacing the scan lines in this way allows us to
see the entire screen displayed in one-half the time it would have taken to sweep across all the
lines at once from top to bottom. Interlacing is primarily used with slower refresh rates. On an
older 30 frame-per-second noninterlaced display, for instance, some flicker is noticeable. But
with interlacing, each of the two passes can be accomplished in 1/60th of a second, which brings
the refresh rate nearer to 60 frames per second. This is an effective technique for avoiding
flicker, provided that adjacent scan lines contain similar display information.
white system with one bit per pixel, the frame buffer is commonly called a bitmap. For systems
with multiple bits per pixel, the frame buffer is often referred to as a pixmap. The return to the
left of the screen, after refreshing each scan line is called the horizontal retrace of the electron
beam. And at the end of each frame, the electron beam returns (vertical retrace) to the top left
corner of the screen to begin the next frame.


Interactive raster-graphics systems typically employ several processing units. In addition to the
CPU, a special purpose processor called the video controller or display controller is used to
control the operation of the display device. The figure shows the organization of a raster system.

Architecture of a simple raster system

Organization of a simple raster system is shown in the figure above. Here, the frame buffer can be
anywhere in the system memory, and the video controller accesses the frame buffer to refresh the
screen. In addition to the video controller, more sophisticated raster systems employ other
processors as coprocessors and accelerators to implement various graphics operations.

Video Controller

Architecture of a raster system with a fixed portion of the system memory reserved for the frame
buffer

In some raster scan system a fixed area of the system memory is reserved for the frame buffer,
and the video controller is given direct access to the frame buffer memory. Frame-buffer
locations, and the corresponding screen positions, are referenced in Cartesian coordinates. For
many graphics monitors, the coordinate origin is defined at the lower left screen corner. The
screen surface is then represented as the first quadrant of a two-dimensional system, with
positive x values increasing to the right and positive y values increasing from bottom to top. (On
some personal computers, the coordinate origin is referenced at the upper left corner of the
screen, so the y values are inverted.) Scan lines are then labeled from Ymax at the top of the
screen to 0 at the bottom. Along each scan line, screen pixel positions are labeled from 0 to Xmax.


Basic refresh operations of the video controller

The basic refresh operations of the video controller are diagrammed above. Two registers are
used to store the coordinates of the screen pixels. Initially, the x register is set to 0 and the y
register is set to Ymax. The value stored in the frame buffer for this pixel position is then
retrieved and used to set the intensity of the CRT beam. Then the x register is incremented by 1,
and the process repeated for the next pixel on the top scan line. This procedure is repeated for
each pixel along the scan line. After the last pixel on the top scan line has been processed, the x
register is reset to 0 and the y register is decremented by 1. Pixels along this scan line are then
processed in turn, and the procedure is repeated for each successive scan line. After cycling
through all pixels along the bottom scan line (y = 0), the video controller resets the registers to
the first pixel position on the top scan line and the refresh process starts over.
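The register-stepping refresh pass described above can be sketched in Python. This is an illustrative simulation only: the tiny resolution `XMAX`/`YMAX`, the dictionary frame buffer, and the `emit` callback are assumptions standing in for real controller hardware.

```python
# Sketch of the video controller's refresh traversal described above.
# The frame buffer is modeled as a dict mapping (x, y) to an intensity.

XMAX, YMAX = 3, 2  # tiny illustrative resolution

def refresh_pass(frame_buffer, emit):
    """Visit every pixel in video-controller order: top scan line
    (y = YMAX) first, x running 0..XMAX, then y decremented to 0."""
    y = YMAX
    while y >= 0:
        x = 0
        while x <= XMAX:
            emit(x, y, frame_buffer.get((x, y), 0))  # drive CRT beam intensity
            x += 1
        y -= 1

visited = []
refresh_pass({(0, 2): 1}, lambda x, y, i: visited.append((x, y)))
print(visited[0], visited[-1])  # first pixel is top-left, last is bottom-right
```

Note that the traversal order (x incremented within a scan line, y decremented between scan lines) mirrors the two-register description above exactly.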

The screen must be refreshed at a rate of at least 60 frames per second. To speed up pixel
processing, the video controller can retrieve multiple pixel values from the refresh buffer on each
pass. The multiple pixel intensities are then stored in a separate register and used to control the
CRT beam intensity for a group of adjacent pixels. When that group of pixels has been processed,
the next block of pixel values is retrieved from the frame buffer. Besides these refresh operations,
the video controller can also perform other operations, such as retrieving pixel intensities from
different memory areas on different refresh cycles.

In high-quality systems, for example, two frame buffers are often provided so that one buffer can
be used for refreshing while the other is being filled with intensity values. This provides a fast
mechanism for generating real-time animation scenes, since different views of a moving object
can be successively loaded into the two buffers.

Raster-Scan Display Processor

The purpose of the display processor or graphics controller is to free the CPU from the graphics
chores. In addition to the system memory, a separate display-processor memory area can also be
provided. A major task of the display processor is digitizing a picture definition given in an
application program into a set of pixel-intensity values for storage in the frame buffer. This
digitization process is called scan conversion. Lines and other geometric objects are converted


into set of discrete intensity points. Scan converting a straight-line segment, for example, means
that we have to locate the pixel positions closest to the line path and store the intensity for each
position in the frame buffer. Characters can be defined with rectangular grids, or they can be
defined with curved outlines. Also, display processors are typically designed to interface with
interactive input devices, such as a mouse.

To reduce the memory required to store image information, each scan line is stored as a set of
integer pairs. One number of each pair indicates an intensity value, and the second number
specifies the number of adjacent pixels on the scan line that have that same intensity. This
technique is called run-length encoding. Another approach is to encode the raster as a set of
rectangular areas (cell encoding). The disadvantage of encoding runs is that intensity changes
are difficult to make, and storage requirements actually increase as the length of the runs
decreases.
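The run-length scheme described above can be sketched in Python; the function names `rle_encode`/`rle_decode` are illustrative, not from any particular graphics package.

```python
def rle_encode(scan_line):
    """Encode one scan line as (intensity, run_length) pairs."""
    runs = []
    for intensity in scan_line:
        if runs and runs[-1][0] == intensity:
            runs[-1][1] += 1                 # extend the current run
        else:
            runs.append([intensity, 1])      # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (intensity, run_length) pairs back into pixel intensities."""
    return [intensity for intensity, length in runs for _ in range(length)]

line = [0, 0, 0, 7, 7, 1]
print(rle_encode(line))  # [(0, 3), (7, 2), (1, 1)]
assert rle_decode(rle_encode(line)) == line
```

The last line of the example illustrates the drawback noted above: a scan line whose intensity changes at every pixel, such as `[0, 7, 0, 7]`, encodes to four pairs and so takes more storage than the raw pixels.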

Architecture of a raster-graphics system with a display processor

Graphics Software

In computer graphics, graphics software or image editing software is a program or collection of


programs that enable a person to manipulate visual images on a computer. Computer graphics
can be classified into two distinct categories: raster graphics and vector graphics. Many graphics
programs focus exclusively on either vector or raster graphics, but there are a few that combine
them in interesting ways. It is simple to convert from vector graphics to raster graphics, but
going the other way is harder. Some software attempts to do this. In addition to static graphics,
there are animation and video editing software. Most graphics programs have the ability to
import and export one or more graphics file formats.

Software standards


The primary goal of standardized graphics software is portability. Without standards, programs
designed for one hardware system often cannot be transferred to another system without
extensive rewriting. The Graphical Kernel System (GKS) was adopted as the first graphics
software standard by ISO and by various national standards organizations such as ANSI. The
Programmer's Hierarchical Interactive Graphics System (PHIGS), an extension of GKS, was
developed later on. Standardization for device interface methods is given in the Computer
Graphics Interface (CGI) system. And the Computer Graphics Metafile (CGM) system specifies
standards for archiving and transporting pictures.

Today, OpenGL is used in most graphics software. OpenGL is a software interface to graphics
hardware. This interface consists of about 150 distinct commands that you use to specify the
objects and operations needed to produce interactive three-dimensional applications. OpenGL is
designed as a streamlined, hardware-independent interface to be implemented on many different
hardware platforms. To achieve these qualities, no commands for performing windowing tasks or
obtaining user input are included in OpenGL; instead, you must work through whatever
windowing system controls the particular hardware you’re using. Similarly, OpenGL doesn’t
provide high-level commands for describing models of three-dimensional objects. Such
commands might allow you to specify relatively complicated shapes such as automobiles, parts
of the body, airplanes, or molecules.

With OpenGL, you must build up your desired model from a small set of geometric primitives -
points, lines, and polygons. A sophisticated library that provides these features could certainly be
built on top of OpenGL. The OpenGL Utility Library (GLU) provides many of the modeling
features, such as quadric surfaces and NURBS curves and surfaces. GLU is a standard part of
every OpenGL implementation.

Scan Conversion Algorithms (Line, Circle, Ellipse)

Digital devices display a straight line segment by plotting discrete points between the two
endpoints. Discrete coordinate positions along the line path are calculated from the equation of
the line. For a raster video display, the line color (intensity) is then loaded into the frame buffer
at the corresponding pixel coordinates. Reading from the frame buffer, the video controller then
"plots" the screen pixels. Screen locations are referenced with integer values, so plotted positions
may only approximate actual Line positions between two specified endpoints.

For the time being, we will assume that pixel positions are referenced according to scan-line
number and column number (pixel position across a scan line). Scan lines are numbered
consecutively from 0, starting at the bottom of the screen; and pixel columns are numbered from
0, left to right across each scan line.

Line drawing Algorithm


In the case of a frame buffer, the information about the image to be projected on the screen is
stored in an m × n matrix in the form of 0s and 1s; the 1s stored in the matrix positions are
brightened on the screen and the 0s are not. Each such section, which may or may not be
brightened, is known as a pixel (picture element). This pattern of 0s and 1s gives the required
pattern on the output screen, i.e., the displayed information. With such a buffer, the screen is
also in the form of an m × n matrix, where each section or niche is a pixel (i.e., we have m × n
pixels constituting the output).

Sometimes a line has a slope and intercept such that its information must be stored in more than
one section of the frame buffer, so in order to draw or approximate such a line, two or more
pixels must be made ON. Thus the line information in the frame buffer is displayed as a stair;
this effect of having two or more pixels ON to approximate a line between two points, say A and
B, is known as the staircase effect. The concept is shown in the figure below.

DDA

Line drawing is accomplished by calculating intermediate point coordinates along the line path
between two given endpoints. Since screen pixels are referenced with integer values, the plotted

positions may only approximate the calculated coordinates: the pixels that are intensified
are those lying very close to the line path, if not exactly on it (exact coverage occurs only
for perfectly horizontal, vertical, or 45° lines). Standard algorithms are available to
determine which pixels provide the best approximation to the desired line; one such algorithm is
the DDA (Digital Differential Analyser) algorithm. Before going into the details of the
algorithm, let us discuss some general appearances of the line segment, because the respective
appearance decides which pixels are to be intensified. Only those pixels that lie very close to
the line path should be intensified, because they best approximate the line. Apart from the exact
cases (slope zero, infinity, or one), we may also face situations where the slope of the line is
> 1 or < 1, which is the case shown in the figure below.

In the figure above there are two lines: line 1 (slope < 1) and line 2 (slope > 1). Now let us
discuss the general mechanism of constructing these two lines with the DDA algorithm. As the
slope of a line is a crucial factor in its construction, we consider the algorithm in two cases
depending on whether the slope of the line is < 1 or > 1.

Case 1: slope (m) of the line is < 1 (i.e., line 1): In this case, to plot the line we move in the
x direction by 1 unit every time and then hunt for the pixel value in the y direction that best
suits the line, and lighten that pixel in order to plot the line.
So, in Case 1, i.e., 0 < m < 1, x is increased by 1 unit every time and the proper y is
approximated.

Case 2: slope (m) of the line is > 1 (i.e., line 2): If m > 1, then the most appropriate strategy
is to move in the y direction by 1 unit every time and determine the pixel in the x direction
that best suits the line, and lighten that pixel to plot the line. So, in Case 2, i.e.,
(infinity) > m > 1, y is increased by 1 unit every time and the proper x is approximated.


We have assumed that the line generation through DDA is discussed only for the first quadrant,
if the line lies in any other quadrant then we apply respective transformation.

Complete algorithm

The Cartesian slope equation of a straight line is, y = mx+b ………………….(i)

where m represents the slope of the line and b the y-intercept.

Suppose the two endpoints of a line segment are at positions (x1, y1) and (x2, y2), as shown in the figure.

m = (y2 – y1)/ (x2 –x1) …………………………………………………………………. (ii)

b = y – mx …………….(iii)
For any given x-interval △x along the line, we compute the corresponding y-interval △y
from equation (ii):
△y = m △x …………………(iv)
and likewise,
△x = △y/m …………………(v)
These equations form the basis of determining deflection voltages in analog devices.

Case I:
For |m| < 1,
△x can be set proportional to a small horizontal deflection voltage, and the corresponding
vertical deflection is set proportional to △y as calculated from equation (iv).

Case II:
For |m| > 1,
△y can be set proportional to a small vertical deflection voltage, with the corresponding
horizontal deflection voltage set proportional to △x as calculated from equation (v).


Case III:
When m = 1, △x = △y, and the horizontal and vertical deflection voltages are equal.

Comments:
1. This approach uses multiplication.
2. It uses floating-point operations.

DDA ( Digital differential Analyzer):


This algorithm samples the line at unit intervals in one coordinate and determines the
corresponding integer values nearest the line path for the other coordinate. The equation of the
line is,
y = mx + b ………………….(i)
m = (y2 - y1)/(x2 - x1) ………………..(ii)
For any interval △x, the corresponding interval is given by △y = m △x.

Case I:
|m| < 1: we sample at unit x-intervals, i.e., △x = 1.
xk+1 = xk + 1 ……………..(iii)
Then we compute each successive y-value using △y = m:
yk+1 = yk + m ……………………..(iv)

Case II:
|m| > 1: we sample at unit y-intervals, i.e., △y = 1, and compute each successive x-value.
Therefore, 1 = m △x, so
△x = 1/m
xk+1 = xk + 1/m ……………..(v)
yk+1 = yk + 1 …………….(vi)
The above equations hold for lines processed from the left endpoint to the right endpoint. If the
line is to be processed from right to left, then:

Case: III
For|m| <1, △ x = -1
xk+1 = xk-1 ……………………………….. (vii)
yk+1 = yk- m ……………………………….(viii)

Case IV:
For |m| > 1, △y = -1
yk+1 = yk - 1 ……………………(ix)
xk+1 = xk - 1/m …………….(x)
Therefore, in general,
yk+1 = yk ± m
xk+1 = xk ± 1 for |m| < 1, and
yk+1 = yk ± 1
xk+1 = xk ± 1/m for |m| > 1.

Algorithm:
1. Declare the variables x1, y1, x2, y2, dx, dy, del x, del y as real and k as integer.
2. Perform


dx = x2 - x1
dy = y2 - y1
3. If |dy| < |dx| then steps = |dx|, else steps = |dy|
4. Set del x = dx/steps
del y = dy/steps
x = x1
y = y1
5. Plot (x, y)
6. Do for k = 1 to steps
x = x + del x
y = y + del y
Plot (x, y)
End do.
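The DDA steps above can be sketched in Python; the function name `dda_line` is illustrative, and the pixel rounding follows step 6 directly.

```python
def dda_line(x1, y1, x2, y2):
    """DDA: sample the major axis at unit intervals and round the
    other coordinate to the nearest pixel (steps 1-6 above)."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))        # step 3: |dx| or |dy|
    if steps == 0:
        return [(round(x1), round(y1))]  # degenerate: single point
    del_x, del_y = dx / steps, dy / steps  # step 4: one of these is +/-1
    x, y = x1, y1
    pixels = [(round(x), round(y))]      # step 5: plot the first point
    for _ in range(int(steps)):          # step 6: incremental loop
        x += del_x
        y += del_y
        pixels.append((round(x), round(y)))
    return pixels

print(dda_line(0, 0, 5, 2))
```

Note that `del_x` and `del_y` are floating-point values, which is exactly the cost Bresenham's algorithm (next section) removes.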

Bresenham's Line Algorithm (BLA):

DDA includes calculations involving m and 1/m, which is somewhat costly since it produces
floating-point values. Bresenham's algorithm improves on DDA by involving only integer
calculations. The Bresenham algorithm is an accurate and efficient raster line-generation
algorithm. It scan converts lines using only incremental integer calculations, and these
calculations can also be adapted to display circles and other curves.

Case I:
|m| < 1 and m > 0
Let (xk, yk) be the pixel position determined; then the next pixel to be plotted is
either (xk+1, yk) or (xk+1, yk+1).

Let d1 and d2 be the separation of pixel position (xk+1, yk) and (xk+1, yk+1) from the
actual line path.
y = mx +b


Then, at sampling position (xk +1)


y = m (xk+1) +b

From figure above :


d1 = y – yk
d2 = yk+1- y
d1 - d2 = (y –yk) – (yk+1-y)

Let us define a decision parameter pk for the kth step by


pk = △ x (d1- d2)
Since △x > 0 ,
Therefore, pk <0 if d1 < d2
pk ≥ 0 , if d1 > d2

pk = △x{y - yk - (yk+1 - y)}

= △x{y - yk - yk - 1 + y}                    since yk+1 = yk + 1
= △x{2[m(xk + 1) + b] - 2yk - 1}
pk = △x{2mxk + 2m + 2b - 2yk - 1}            since xk+1 = xk + 1
pk = 2mxk △x - 2yk △x + (2m + 2b - 1) △x
pk = 2(△y/△x) xk △x - 2yk △x + (2m + 2b - 1) △x
pk = 2 △y xk - 2 △x yk + c …………………………….. (i)

where c = (2m + 2b - 1) △x is a constant.

Now, for next step,


pk+1 = 2 △y xk+1 - 2 △x yk+1 + c ………….(ii)
From (i) and (ii),
pk+1 - pk = 2 △y (xk+1 - xk) - 2 △x (yk+1 - yk)
i.e., pk+1 = pk + 2 △y - 2 △x (yk+1 - yk)
where
yk+1 - yk = 0 or 1
If pk < 0, then we plot the lower pixel:
yk+1 = yk ………………(iii)
pk+1 = pk + 2 △y ………….(iv)

If pk ≥ 0 then we plot upper pixel.


Therefore, yk+1 = yk+1…………..(v)

pk+1 = pk + 2 △y – 2 △x …………(vi)


Therefore, the initial decision parameter is

P0 = 2 △y x0 - 2 △x y0 + c                    [from equation (i)]
= 2 △y x0 - 2 △x y0 + (2m + 2b - 1) △x
= 2 △y x0 - 2 △x y0 + 2m △x + 2b △x - △x
= 2 △y x0 - 2 △x y0 + 2(△y/△x) △x + 2(y0 - m x0) △x - △x
= 2 △y x0 - 2 △x y0 + 2 △y + 2 △x y0 - 2 △y x0 - △x
P0 = 2 △y - △x

We can summarize Bresenham line drawing for a line with a positive slope less
than 1 in the following listed steps. The constants 2 △y and 2 △y – 2 △x are
calculated once for each line to be scan converted, so the arithmetic involves only
integer addition and subtraction of these two constants.

Bresenham’s line algorithm: ( For |m|<1)

1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate constants △x, △y , 2 △y and 2 △y – 2 △x and obtain the starting value
for the decision parameter as ;
po = 2 △y – △x
4. At each xk along the line, starting at k= 0 , perform the following test:
If pk <0 the next point to plot is ( xk+1, yk) and pk+1 = pk +2 △y.
otherwise, the next point to plot is ( xk+1, yk+1) and pk+1= pk+ 2△y – 2 △x
5. Repeat step 4 △x times.
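The five steps above can be sketched in Python for a line with slope 0 ≤ m ≤ 1; the endpoints are assumed already ordered left to right, and the function name is illustrative.

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham scan conversion for 0 <= m <= 1 (steps 1-5 above);
    integer arithmetic only."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                      # step 3: initial decision parameter
    two_dy, two_dy_dx = 2 * dy, 2 * (dy - dx)   # constants computed once
    x, y = x0, y0
    pixels = [(x, y)]                    # step 2: plot the first point
    for _ in range(dx):                  # step 5: repeat dx times
        x += 1
        if p < 0:
            p += two_dy                  # step 4: keep the same scan line
        else:
            y += 1                       # step 4: move to the upper pixel
            p += two_dy_dx
        pixels.append((x, y))
    return pixels

print(bresenham_line(20, 10, 30, 18))
```

For the endpoints (20, 10) and (30, 18), dx = 10, dy = 8 and p0 = 2(8) - 10 = 6, so the first step already moves to the upper pixel (21, 11); the loop body adds only the precomputed constants, never multiplying or dividing.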
For |m|>1
Case II:


Let (xk, yk) be the pixel position determined; then the next pixel to be plotted is
either (xk, yk+1) or (xk+1, yk+1).

Let d1 and d2 be the separations of the pixel positions (xk, yk+1) and (xk+1, yk+1) from
the actual line path.

y = mx +b
The actual value of x is given by
x = (y-b)/m

Now, sampling position at yk+1


From the figure,
d1 = x - xk
d2 = xk+1 - x
Let us define a decision parameter pk

pk = △y ( d1- d2)

As before , it is calculated as :

pk = 2 △x yk – 2△y xk + c

Similarly we derive the expression for pk and pk+1

If pk ≥ 0,
pk+1 = pk + 2 △x - 2 △y
and we plot xk+1 = xk + 1, yk+1 = yk + 1.
If pk < 0,
pk+1 = pk + 2 △x
and we plot xk+1 = xk, yk+1 = yk + 1.

For initial parameter, we derive

Po = 2 △x – △y

So we conclude:

For |m| < 1, we take x-direction samples: xk+1 = xk + 1.

For |m| > 1, we take y-direction samples: yk+1 = yk + 1.

For developing algorithm , we assume following :


 Bresenham's algorithm is generalized to lines with arbitrary slopes by considering the
symmetry between the various octants and quadrants of the coordinate system.
 For a line with positive slope m such that m > 1, we interchange the roles of the x and y
directions, i.e., we step along the y direction in unit steps and calculate successive x values
nearest the line path.
 For negative slopes the procedures are similar, except that now one coordinate decreases as
the other increases.

Bresenham’s Complete algorithm:

1. Input the two end points (x1,y1) and (x2,y2)

2. Compute dx= |x2 –x1| and dy = |y2 –y1|

3. If (x2 - x1 < 0 and y2 - y1 > 0) or (x2 - x1 > 0 and y2 - y1 < 0),

then set a = -1; else set a = 1.

4. If dy < dx then,

i. If x1>x2 then, t = x1 ; x1 = x2 ; x2= t

t = y1 ; y1 = y2 ; y2 = t

ii. Find initial decision parameter P = 2dy - dx

iii. Plot the first pixel (x1, y1)

iv. Repeat the following while x1 < x2:

a. If P < 0 then

P = P + 2dy

else P = P + 2dy - 2dx

y1 = y1 + a

b. Increase x1 by 1, i.e., x1 = x1 + 1

c. Plot (x1, y1)

5. Else |m|>1

i. Check if (y1>y2) then,

t = x1; x1 = x2 ; x2 = t


t = y1 , y1 = y2 ; y2 = t.

ii. Find initial decision parameters. P = 2dx – dy

iii. Plot the first point (x1,y1)

iv. Repeat the following while y1 < y2:

a. If P< 0 then , P = P+2dx.

Else, P = P+ 2dx-2dy

x1 = x1 +a

b. Increase y1 by 1. i.e y1 = y1+1

c. Plot (x1,y1)
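The complete algorithm above can be sketched compactly in Python. This version uses the octant symmetry directly (explicit step directions sx, sy) rather than the endpoint-swapping of steps 4.i and 5.i, so it is a compact variant of the listed steps, not a literal transcription.

```python
def bresenham_any_slope(x1, y1, x2, y2):
    """Generalized Bresenham line for arbitrary slope and direction."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    sx = 1 if x2 >= x1 else -1            # step direction along x
    sy = 1 if y2 >= y1 else -1            # step direction along y
    x, y = x1, y1
    pixels = [(x, y)]
    if dy <= dx:                          # |m| <= 1: sample the x direction
        p = 2 * dy - dx
        for _ in range(dx):
            x += sx
            if p < 0:
                p += 2 * dy
            else:
                y += sy
                p += 2 * (dy - dx)
            pixels.append((x, y))
    else:                                 # |m| > 1: sample the y direction
        p = 2 * dx - dy
        for _ in range(dy):
            y += sy
            if p < 0:
                p += 2 * dx
            else:
                x += sx
                p += 2 * (dx - dy)
            pixels.append((x, y))
    return pixels

print(bresenham_any_slope(30, 18, 20, 10)[-1])
```

Because the decision parameter depends only on |dx| and |dy|, the same two branches cover all eight octants once the signs sx and sy are factored out.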

Circle drawing algorithm

Circle: A circle is defined as the set of points that are all at a given distance r from the centre
position (xc, yc). This distance relationship is expressed by the Pythagorean theorem in Cartesian
coordinates as (x - xc)2 + (y - yc)2 = r2.

Methods to draw circle:


a. Direct method
b. Trigonometric method
c. Midpoint Circle method.

Direct method:

x2 + y2 = r2, so y = ±√(r2 - x2)

Trigonometric method:

x = xc + r cosθ, y = yc + r sinθ

Bresenham's midpoint algorithm:

This is also a scan-converting algorithm. The equation of a circle with centre (h, k) is given
by (x - h)2 + (y - k)2 = r2 ……….(i)

When h = 0 and k = 0, the equation of the circle centred at the origin is

x2+y2 = r2 ……………(ii)


Differentiating both sides,

2x+2y dy/dx = 0

Or dy/dx = -x/y , where , dy/dx = slope

Now , if dy/dx = 0, then x = 0.

If, dy/dx = -1, then x = y

Consider the circle section from x = 0 to x = y, where the slope of the curve varies from 0 to
-1. Calculating the circle points (x, y) in this one octant gives the circle points for the other
seven octants by symmetry. To apply the midpoint method we define a circle function as,

fcircle (x,y) = x2+y2 – r2 …………..(iii)

Suppose,

fcircle(x, y) < 0 if (x, y) is inside the circle boundary,

= 0 if (x, y) is on the circle boundary,

> 0 if (x, y) is outside the circle boundary.

The circle function tests are performed for the mid position between pixels near the circle path at
each sampling step.


Figure: Midpoint between candidate pixels at sampling position xk+1 along a circular path.

Assume that we have just plotted the pixel (xk, yk). We next need to determine whether the pixel
at position (xk +1,yk) or the one at the position (xk +1,yk-1) is closer to the circle. The decision
parameter pk is the circle function which we have evaluated in equation (iii) at the midpoint
between these two pixels.

i.e., pk = fcircle(xk + 1, yk - 1/2)

= (xk + 1)2 + (yk - 1/2)2 - r2 …………..(iv)

If pk < 0, this midpoint is inside the circle and the pixel on scan line yk is closer to the circle
boundary. Otherwise the midpoint is outside or on the circle boundary, and we select the pixel on
scan line yk - 1. Successive decision parameters can be obtained similarly using incremental
calculations.

We obtained a recursive expression for the next decision parameter by evaluating the circle
function at sampling position.

xk+1 +1 = xk +2

pk+1 = fcircle (xk+1+1 ,yk+1 – 1/2)

= (xk+1+1)2 + ( yk+1-1/2)2 – r2 ………………(v)

Subtracting equation (iv) from (v).

pk+1 - pk = [(xk + 1) + 1]2 + (yk+1 - 1/2)2 - (xk + 1)2 - (yk - 1/2)2 ….(vi)

pk+1 = pk + 2(xk + 1) + (yk+12 - yk2) - (yk+1 - yk) + 1

where yk+1 is either yk or yk - 1, depending on the sign of pk. The increment for obtaining pk+1 is
either 2xk+1 + 1 (if pk is negative) or 2xk+1 + 1 - 2yk+1. Evaluation of the terms 2xk+1 and
2yk+1 can also be done incrementally as


2xk+1 = 2xk+2

2yk+1= 2yk-2

At the start position (0, r), these two terms have the values 0 and 2r, respectively. Each
successive value is obtained by adding 2 to the previous value of 2x and subtracting 2 from the
previous value of 2y.

The initial decision parameter is obtained by evaluating the circle function at the start position
(x0, y0) = (0, r):

Hence, p0 = fcircle(x0+1, y0- 1/2)

= fcircle( 1, r-1/2)

= 1+ (r-1/2)2 –r2

= 1 + r2 – r + 1/4 – r2

p0 = 5/4 – r

Since all increments are integers, if the radius r is specified as an integer we can simply
round p0 to

p0 = 1 - r (for r an integer) ……….(vii)
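The derivation above (initial parameter p0 = 1 - r and the two increment cases) can be sketched in Python; the eight-way symmetry plotting is folded into the loop, and the function name is illustrative.

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: walk one octant from (0, r) and
    reflect each point into the other seven octants."""
    pixels = set()
    x, y = 0, r
    p = 1 - r                             # rounded initial parameter 5/4 - r
    while x <= y:
        # plot (x, y) and its seven symmetric counterparts
        for sx, sy in [(x, y), (y, x)]:
            pixels.update({(xc + sx, yc + sy), (xc - sx, yc + sy),
                           (xc + sx, yc - sy), (xc - sx, yc - sy)})
        x += 1
        if p < 0:
            p += 2 * x + 1                # midpoint inside: keep scan line y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y        # midpoint outside: drop to y - 1
    return pixels

pts = midpoint_circle(0, 0, 10)
print((10, 0) in pts, (0, 10) in pts)
```

For r = 10 the octant walk produces (0,10), (1,10), (2,10), (3,10), (4,9), (5,9), (6,8), (7,7), with p running -9, -6, -1, 6, -3, 8, 5 — all integer arithmetic.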


Ellipse Algorithm


(x0, y0) for region 2 is the last point plotted in region 1. The pixels for the other quadrants
are determined by symmetry.

Complete Algorithm:
1. Obtain the centre (xc, yc) and the semi-major and semi-minor axis lengths a and b.
2. Set x = 0, y = b.
3. Plot the point (x, y) and its symmetric points at the appropriate positions by x = x + xc, y = y + yc.
4. Compute the initial decision parameter for region 1: P1 = b2 - a2b + a2/4
5. Repeat the following while 2b2x < 2a2y:
i. x = x + 1
ii. Test if P1 < 0 then
P1 = P1 + 2b2x + b2
else
y = y - 1
P1 = P1 + 2b2x - 2a2y + b2
iii. Plot the point (x, y) and its symmetric points at the appropriate positions by x = x + xc, y = y + yc
6. Compute the initial decision parameter for region 2:
P2 = b2(x + 0.5)2 + a2(y - 1)2 - a2b2
7. Repeat the following while y > 0:
i. y = y - 1
ii. Test if P2 > 0 then P2 = P2 - 2a2y + a2
else
x = x + 1
P2 = P2 + 2b2x - 2a2y + a2
iii. Plot the point (x, y) and its symmetric points at the appropriate positions by x = x + xc, y = y + yc
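The two-region procedure above can be sketched in Python; floating-point is kept only for the initial parameters, whose a²/4 and (x + 0.5)² terms are fractional, and the function name is illustrative.

```python
def midpoint_ellipse(xc, yc, a, b):
    """Midpoint ellipse: region 1 until 2*b^2*x >= 2*a^2*y, then
    region 2 down to y = 0; four-way symmetry fills the quadrants."""
    pixels = set()

    def plot4(x, y):
        pixels.update({(xc + x, yc + y), (xc - x, yc + y),
                       (xc + x, yc - y), (xc - x, yc - y)})

    x, y = 0, b
    plot4(x, y)
    p1 = b * b - a * a * b + a * a / 4.0      # initial parameter, region 1
    while 2 * b * b * x < 2 * a * a * y:      # region 1: slope magnitude < 1
        x += 1
        if p1 < 0:
            p1 += 2 * b * b * x + b * b
        else:
            y -= 1
            p1 += 2 * b * b * x - 2 * a * a * y + b * b
        plot4(x, y)

    p2 = (b * (x + 0.5)) ** 2 + (a * (y - 1)) ** 2 - (a * b) ** 2
    while y > 0:                              # region 2: step y down to 0
        y -= 1
        if p2 > 0:
            p2 += -2 * a * a * y + a * a
        else:
            x += 1
            p2 += 2 * b * b * x - 2 * a * a * y + a * a
        plot4(x, y)
    return pixels

pts = midpoint_ellipse(0, 0, 8, 6)
print((8, 0) in pts and (0, 6) in pts)
```

For a = 8, b = 6 the region-1 walk produces (0,6) through (7,3), at which point 2b²x = 504 exceeds 2a²y = 384 and region 2 takes over, finishing at (8,0).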

Filled Area primitives


A standard output primitive in general graphics packages is a solid-color or patterned polygon
area. Other kinds of area primitives are sometimes available, but polygons are easier to process
since they have linear boundaries.
There are two basic approaches to area filling on raster systems. One way to fill an area is to
determine the overlap intervals for scan lines that cross the area. Another method for area
filling is to start from a given interior position and paint outward from this point until a
specified boundary is met.


SCAN-LINE Polygon Fill Algorithm:


In the scan-line polygon fill algorithm, for each scan line crossing a polygon we locate the
intersection points of the scan line with the polygon edges. These intersection points are then
sorted from left to right, and the corresponding frame-buffer positions between each intersection
pair are set to the specified color. In figure i) below, the four pixel intersection positions with
the polygon boundaries define two stretches of interior pixels, from x=10 to x=14 and from x=16
to x=24. Some scan-line intersections at polygon vertices require special handling. A scan
line passing through a vertex intersects two polygon edges at that position, adding two points to
the list of intersections for the scan line.

Fig. i) Interior pixels along a scan line passing through a polygon area.
Fig. ii) Intersection points along scan lines that intersect polygon vertices.

Figure ii) shows two scan lines at positions y and y' that intersect polygon vertices. The scan
line at y intersects five polygon edges. The scan line at y' intersects an even number of edges
(four), even though it also passes through a vertex.

Intersection points along scan line y' correctly identify the interior pixel spans. But with scan
line y, we need to do some additional processing to determine the correct interior points. For
scan line y, the two edges sharing the intersected vertex are on opposite sides of the scan line,
whereas for scan line y' the two edges sharing the intersected vertex are on the same side (above)
of the scan line position. So the vertices whose edges lie on opposite sides of a scan line
require extra processing.
We can identify these vertices by tracing around the polygon boundary in either clockwise or
counterclockwise order and observing the relative changes in vertex y coordinates as we move
from one edge to the next. If the endpoint y values of two consecutive edges monotonically
increase or decrease, we need to count the middle vertex as a single intersection point for any
scan line passing through that vertex. Otherwise, the shared vertex represents a local extremum
(minimum or maximum) on the polygon boundary, and the two edge intersections with the scan
line passing through that vertex can both be counted.


One way to resolve the question of whether we should count a vertex as one intersection or
two is to shorten some polygon edges, splitting those vertices that should be
counted as one intersection. We can process non-horizontal edges around the polygon boundary
in the order specified, either clockwise or counter-clockwise. As we process each edge, we
check whether that edge and the next non-horizontal edge have either monotonically
increasing or decreasing endpoint y values. If so, the lower edge is shortened to ensure that
only one intersection point is generated for the scan line going through the common vertex
joining the two edges. Figure above illustrates the shortening of an edge. When the endpoint y
coordinates of the two edges are increasing, the y value of the upper endpoint of the current
edge is decreased by 1, as in Fig.(a). When the endpoint y values are monotonically decreasing,
as in Fig.(b), we decrease the y coordinate of the upper endpoint of the edge following the
current edge.

We can also use coherence: the properties of one part of a scene are related in some way to
other parts of the scene, so that the relationship can be used to reduce processing.
Coherence methods often involve incremental calculations applied along a single scan line or
between successive scan lines. In determining edge intersections, we can set up incremental
coordinate calculations along any edge by exploiting the fact that the slope of the edge is
constant from one scan line to the next. Figure below shows two successive scan lines crossing a
left edge of a polygon. The slope of this polygon boundary line can be expressed in terms of the
scan-line intersection coordinates:

m = ( yk+1 - yk) / (xk+1 - xk)

Since the change in y coordinates between the two scan lines is simply

yk+1 - yk =1

the x-intersection value xk+1 on the upper scan line can be determined from the x-intersection
value xk on the preceding scan line as

xk+1 = xk + 1/m


Each successive x intercept can thus be calculated by adding the inverse of the slope and
rounding to the nearest integer. An obvious parallel implementation of the fill algorithm is to
assign each scan line crossing the polygon area to a separate processor. Edge-intersection
calculations are then performed independently. Along an edge with slope m, the intersection xk
value for scan line k above the initial scan line can be calculated as

xk= x0+ k/m

slope m is the ratio of two integers:

m = del y/ del x

where del x and del y are the differences between the edge endpoint x and y coordinate values.
Thus, incremental calculations of x intercepts along an edge for successive scan lines can be
expressed as :

xk+1 = xk + del x / del y

Using this equation, we can perform integer evaluation of the x intercepts by initializing a
counter to 0, then incrementing the counter by the value of del x each time we move up to a new
scan line. Whenever the counter value becomes equal to or greater than del y, we increment the
current x intersection value by 1 and decrease the counter by the value del y. This procedure is
equivalent to maintaining integer and fractional parts for x intercepts and incrementing the
fractional part until we reach the next integer value.
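The integer-increment procedure just described can be sketched in C. This is a minimal illustration, not library code: `edge_intercepts` is our own name, and the edge is assumed non-horizontal with y1 > y0.

```c
#include <assert.h>
#include <stdlib.h>

/* Walk a non-horizontal polygon edge from (x0,y0) up to y1, reporting the
   integer x-intercept on each scan line.  counter accumulates |dx| once per
   scan line; whenever it reaches dy, the intercept shifts one pixel and
   counter is reduced by dy, exactly as described above. */
void edge_intercepts(int x0, int y0, int x1, int y1, int out[])
{
    int dy = y1 - y0;               /* scan lines crossed (assumed > 0)  */
    int dx = x1 - x0;
    int step = (dx >= 0) ? 1 : -1;  /* direction the intercept drifts    */
    int adx = abs(dx);
    int x = x0, counter = 0;

    for (int y = y0; y <= y1; y++) {
        out[y - y0] = x;            /* intercept on this scan line       */
        counter += adx;
        while (counter >= dy) {     /* carry whole pixels out of counter */
            x += step;
            counter -= dy;
        }
    }
}
```

For an edge from (0,0) to (4,8), for instance, the successive intercepts are 0, 0, 1, 1, 2, 2, 3, 3, 4.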

Inside-Outside Test:
Area-filling algorithms and other graphics packages often need to identify interior and exterior
regions for a complex polygon in a plane. For example, in the figure below, we need to identify
the interior and exterior regions.


We apply the odd-even rule, also called the odd-parity rule. To determine whether a point P is
interior or exterior, we draw a line from P to a distant point outside the coordinate extents of
the object and count the number of polygon edges crossed by this line. If the number of edge
crossings is odd, P is interior; otherwise P is exterior.
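The odd-even rule can be sketched in C as follows. `is_inside` is an illustrative name; the ray is cast from the test point toward +x, and the polygon vertices are given in order.

```c
/* Odd-even (odd-parity) test: count the polygon edges crossed by a
   horizontal ray from (px,py) toward +x; an odd count means interior.
   vx[], vy[] hold the n polygon vertices in order. */
int is_inside(double px, double py, int n, const double vx[], const double vy[])
{
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Edge (j,i) straddles the horizontal line y = py ... */
        if ((vy[i] > py) != (vy[j] > py)) {
            /* ... and its crossing point lies to the right of px. */
            double x_cross = vx[j] + (py - vy[j]) * (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (x_cross > px)
                crossings++;
        }
    }
    return crossings % 2;   /* 1 = interior, 0 = exterior */
}
```

Note that the straddle test `(vy[i] > py) != (vy[j] > py)` also handles vertices on the ray consistently, counting each shared vertex once, in the spirit of the vertex rule discussed above.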

Scan-Line Fill of Curved Boundary area


Filling a curved-boundary area requires more work than polygon filling, since the intersection
calculations involve a nonlinear boundary. For simple curves such as circles and ellipses,
however, performing a scan-line fill is a straightforward process. We only need to calculate the
two scan-line intersections on opposite sides of the curve, then simply fill the horizontal span
of pixels between the boundary points on opposite sides of the curve. Symmetries between
quadrants are used to reduce the boundary calculations; we can generate the pixel positions along
the curve boundary using the midpoint method.

Boundary-fill Algorithm:
The boundary-fill algorithm starts at a point inside a region and paints the interior outward
toward the boundary. If the boundary is specified in a single color, the fill algorithm proceeds
outward pixel by pixel until the boundary color is reached.
A boundary-fill procedure accepts as input the coordinates of an interior point (x,y), a fill color,
and a boundary color. Starting from (x,y), the procedure tests neighbouring positions to
determine whether they are of the boundary color. If not, they are painted with the fill color, and
their neighbours are tested in turn. This process continues until all pixels up to the boundary
color have been tested. The neighbouring pixels of the current pixel are chosen by one of two methods:
4-connected if they are adjacent horizontally and vertically.
8-connected if they are adjacent horizontally, vertically and diagonally.



 A fill method that applies and tests its 4 neighbouring pixels is called 4-connected.
 A fill method that applies and tests its 8 neighbouring pixels is called 8-connected.
The outline of this algorithm is:

void Boundary_fill4(int x, int y, int b_color, int fill_color)
{
    int value = getpixel(x, y);
    if (value != b_color && value != fill_color)
    {
        putpixel(x, y, fill_color);
        Boundary_fill4(x - 1, y, b_color, fill_color);
        Boundary_fill4(x + 1, y, b_color, fill_color);
        Boundary_fill4(x, y - 1, b_color, fill_color);
        Boundary_fill4(x, y + 1, b_color, fill_color);
    }
}
Boundary fill 8-connected:
void Boundary_fill8(int x, int y, int b_color, int fill_color)
{
    int current = getpixel(x, y);
    if (current != b_color && current != fill_color)
    {
        putpixel(x, y, fill_color);
        Boundary_fill8(x - 1, y, b_color, fill_color);
        Boundary_fill8(x + 1, y, b_color, fill_color);
        Boundary_fill8(x, y - 1, b_color, fill_color);
        Boundary_fill8(x, y + 1, b_color, fill_color);
        Boundary_fill8(x - 1, y - 1, b_color, fill_color);
        Boundary_fill8(x - 1, y + 1, b_color, fill_color);
        Boundary_fill8(x + 1, y - 1, b_color, fill_color);
        Boundary_fill8(x + 1, y + 1, b_color, fill_color);
    }
}
A recursive boundary-fill algorithm does not fill regions correctly if some interior pixels are
already displayed in the fill color. Encountering a pixel with the fill color causes that recursive
branch to terminate, leaving other interior pixels unfilled. To avoid this, we can first change the
color of any interior pixels that are initially set to the fill color before applying the
boundary-fill procedure.

Flood-fill Algorithm:


The flood-fill algorithm is applicable when we want to fill an area that is not defined within a
single color boundary. If the fill area is bounded by regions of different colors, we can paint the
area by replacing a specified interior color instead of searching for a boundary color value. This
approach is called the flood-fill algorithm. We start from a specified interior pixel (x,y) and
reassign all pixel values that are currently set to a given interior color to the desired fill color.

Using either 4-connected or 8-connected neighbours recursively, starting from the input position,
the algorithm fills the area with the desired color.

Algorithm:
void flood_fill4(int x, int y, int fill_color, int old_color)
{
    int current = getpixel(x, y);
    if (current == old_color)
    {
        putpixel(x, y, fill_color);
        flood_fill4(x - 1, y, fill_color, old_color);
        flood_fill4(x, y - 1, fill_color, old_color);
        flood_fill4(x, y + 1, fill_color, old_color);
        flood_fill4(x + 1, y, fill_color, old_color);
    }
}

Similarly flood fill for 8 connected can be also defined.


We can modify procedure flood_fill4 to reduce the storage requirements of the stack by filling
horizontal pixel spans.

Filling Rectangle

Two things to consider:

i. which pixels to fill
ii. with what value to fill

Move along each scan line (from left to right) that intersects the primitive and fill in the pixels
that lie inside. To fill a rectangle with a solid color, set each pixel lying on a scan line running
from the left edge to the right edge to the same pixel value, i.e. fill each span from xmin to xmax:

for (y from ymin to ymax of rectangle) /* scan line */
    for (x from xmin to xmax of rectangle) /* by pixel */
        writePixel(x, y, value);

CLIPPING


Clipping may be described as the procedure that identifies the portions of a picture that lie
inside a specified region, and therefore should be drawn, or outside the specified region, and
hence should not be drawn. The algorithms that perform the job of clipping are called clipping
algorithms; there are various types, such as:

• Line Clipping
• Polygon Clipping
• Curve Clipping

Further, there is a wide variety of algorithms designed to perform particular types of clipping
operations, some of which are discussed in this unit.
Line Clipping Algorithms:
• Cohen Sutherland Line Clippings

Polygon or Area Clipping Algorithm


• Sutherland-Hodgman Algorithm

LINE CLIPPING
A line consists of an infinite number of points, with no space between any two adjacent points,
so the clipping conditions apply to every point on the line to be clipped. A variety of line-clipping
algorithms are available in computer graphics, but we restrict our discussion to the following
algorithm, named after its developers:
Cohen Sutherland algorithm

In this method, every line endpoint is assigned a four-digit binary code (region code) that
identifies the location of the point relative to the boundaries of the clipping window.
For,
b1 : left
b2 : right
b3 : below
b4 : above


A bit value of 1 indicates that the point lies outside the corresponding boundary. If a point is
within the clipping rectangle, its region code is 0000. So,
If x – xwmin < 0 , b1 = 1
If xwmax – x < 0 , b2 = 1
If y – ywmin < 0 , b3 = 1
If ywmax – y < 0 , b4 = 1
If the region codes of both end points are 0000 then we accept the line.

Now, to perform line clipping for line segments that may lie fully or partially inside the window
region, or entirely outside it, we use logical ANDing between the b1b2b3b4 codes of the two
endpoints of the line.
The logical AND (^) operation between respective bits gives: 1 ^ 1 = 1; 1 ^ 0 = 0; 0 ^ 1 = 0;
0 ^ 0 = 0.
Any line whose two endpoint codes have a 1 in the same bit position is rejected, i.e. if
code(A) AND code(B) ≠ 0, the line is completely outside.


The lines which cannot be identified as completely inside or completely outside the window by
these tests are checked for intersection with the window boundary. Such lines may or may not
cross into the window interior.
In the fig. aside , region code of
P1 = 0001
P2 = 1000
P1 AND P2 = 0
So we need further calculation. Starting from P1, the intersection of the line with the left
boundary is calculated.
Region code of P1' = 0000
P1' AND P2 = 0.
The intersection of the line from P2 with the top boundary is then calculated; region code of P2' = 0000.
Since both endpoints now have region code 0000, the portion P1'P2' of the line is saved.
Similarly,
For P3, P4.
P3 = 1000
P4 = 0010
P3 AND P4 = 0
So we need further calculations. Starting from P3, whose region code is 1000, i.e. b4 is set, the
intersection of the line with the top boundary is computed, yielding P3' with region code 1010.
Again P3' AND P4 ≠ 0

So P3 P4 is totally clipped.
The intersection point with a vertical boundary can be obtained by y = y1 + m(x - x1)

where (x1,y1) and (x2,y2) are the endpoints of the line, y is the coordinate value of the
intersection point, x is either xwmin or xwmax, and
m = (y2 - y1) / (x2 - x1).
Similarly, the intersection point with a horizontal boundary is
x = x1 + (y - y1)/m
where y = ywmin or ywmax.
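The endpoint region codes and the trivial accept/reject tests can be sketched in C as follows. The enum values follow the b1..b4 bit assignment above (codes written b4b3b2b1, so 0001 = left); the function names are our own.

```c
/* Region-code bits, following the b1..b4 assignment above. */
enum { LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8 };

/* Four-bit region code of (x,y) relative to the clipping window. */
int region_code(double x, double y,
                double xwmin, double ywmin, double xwmax, double ywmax)
{
    int code = 0;
    if (x < xwmin) code |= LEFT;    /* b1: x - xwmin < 0 */
    if (x > xwmax) code |= RIGHT;   /* b2: xwmax - x < 0 */
    if (y < ywmin) code |= BELOW;   /* b3: y - ywmin < 0 */
    if (y > ywmax) code |= ABOVE;   /* b4: ywmax - y < 0 */
    return code;
}

/* Trivial accept: both endpoint codes are 0000. */
int trivially_accepted(int c1, int c2) { return (c1 | c2) == 0; }

/* Trivial reject: the codes share a set bit (logical AND nonzero). */
int trivially_rejected(int c1, int c2) { return (c1 & c2) != 0; }
```

Lines that are neither trivially accepted nor trivially rejected are then intersected with the window boundaries using the formulas above.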

CLIPPING CIRCLES AND ELLIPSES

To clip a circle against a rectangle, we can first do a trivial accept/reject test by intersecting the
circle's extent (a square of the size of the circle's diameter) with the clip rectangle. If the circle
intersects the rectangle, we divide it into quadrants and do the trivial accept/reject test for each.
These tests may lead in turn to tests for octants. We can then compute the intersection of the
circle and the edge analytically by solving their equations simultaneously, and then scan convert
the resulting arcs using the appropriately initialized algorithm with the calculated (and suitably


rounded) starting and ending points. If scan conversion is fast, or if the circle is not too large, it is
probably more efficient to scissor on a pixel-by-pixel basis, testing each boundary pixel against
the rectangle bounds before it is written. An extent check would certainly be useful in any case.
If the circle is filled, spans of adjacent interior pixels on each scan line can be filled without
bounds checking by clipping each span and then filling its interior pixels.

To clip ellipses, we use extent testing at least down to the quadrant level, as with circles. We can
then either compute the intersections of the ellipse and rectangle analytically and use those (suitably
rounded) endpoints in the appropriately initialized scan-conversion algorithm, or clip as we
scan convert.

Clipping Polygons

A polygon is a surface enclosed by several lines. By considering the polygon as a set of lines, we
can reduce the problem to line clipping; hence the problem of polygon clipping is simplified.
The Sutherland-Hodgman algorithm is one of the standard methods used for clipping arbitrarily
shaped polygons against a rectangular clipping window. It uses a divide-and-conquer technique
for clipping the polygon.

Sutherland-Hodgman Algorithm

Any polygon of arbitrary shape can be described with the help of the set of vertices associated
with it. When we clip the polygon under consideration against a rectangular window, we observe
that the coordinates of each pair of polygon vertices satisfy one of the four cases listed in the
table shown below. Further, this clipping procedure can be simplified by clipping the polygon
edgewise rather than the polygon as a whole. This decomposes the bigger problem into a set of
subproblems, which can be handled separately as per the cases listed in the table below. The
table describes the cases of the Sutherland-Hodgman polygon clipping algorithm.

Thus, in order to clip polygon edges against a window edge, we move from vertex Vi to the next
vertex Vi+1 and decide the output vertex according to the four simple tests, rules, or cases listed
below:


In words, the 4 possible cases listed above for clipping any polygon edge are:
1) If both input vertices are inside the window boundary, only the 2nd vertex is added to the
output vertex list.
2) If the 1st vertex is inside the window boundary and the 2nd vertex is outside, only the
intersection of the edge with the boundary is added to the output vertex list.
3) If both input vertices are outside the window boundary, nothing is added to the output list.
4) If the 1st vertex is outside the window and the 2nd vertex is inside the window, both the
intersection point of the polygon edge with the window boundary and the 2nd vertex are added
to the output vertex list.

So, we can use the rules cited above to clip a polygon correctly. The polygon must be tested
against each edge of the clip rectangle; new edges must be added, and existing edges must be
discarded, retained or divided. This algorithm decomposes the problem of polygon clipping
against a clip window into identical subproblems, where each subproblem clips all polygon
edges (pairs of vertices) in succession against a single infinite clip edge. The output is the set of
clipped edges, or pairs of vertices, that fall on the visible side with respect to the clip edge. This
set of clipped edges or output vertices is then the input for the next subproblem of clipping
against the second window edge. Thus, considering the output of the previous subproblem as the
input, each of the subproblems is solved sequentially, finally yielding the vertices that fall on or
within the window boundary. These vertices, connected in order, form the shape of the clipped
polygon.


Pseudocode for Sutherland – Hodgman Algorithm

Define variables
inVertexArray is the array of input polygon vertices
outVertexArray is the array of output polygon vertices
Nin is the number of entries in inVertexArray
Nout is the number of entries in outVertexArray
n is the number of edges of the clip polygon
ClipEdge[x] is the xth edge of clip polygon defined by a pair of vertices
s, p are the start and end point respectively of current polygon edge
i is the intersection point with a clip boundary
j is the vertex loop counter

Define Functions
AddNewVertex(newVertex, Nout, outVertexArray)
: Adds newVertex to outVertexArray and then updates Nout
InsideTest(testVertex, clipEdge[x])
: Checks whether the vertex lies inside the clip edge; returns TRUE if inside, else returns
FALSE
Intersect(first, second, clipEdge[x])
: Clip polygon edge(first, second) against clipEdge[x], outputs the intersection point

{ : begin main
x=1
while (x ≤ n) : Loop through all the n clip edges
{
Nout = 0 : Flush the outVertexArray
s = inVertexArray[Nin] : Start with the last vertex in inVertexArray


for j = 1 to Nin do : Loop through the Nin polygon vertices (edges)

{
p = inVertexArray[j]
if InsideTest(p, clipEdge[x]) == TRUE then : Cases A and D
if InsideTest(s, clipEdge[x]) == TRUE then
AddNewVertex(p, Nout, outVertexArray) : Case A
else
i = Intersect(s, p, clipEdge[x]) : Case D
AddNewVertex(i, Nout, outVertexArray)
AddNewVertex(p, Nout, outVertexArray)
end if
else : i.e. if InsideTest(p, clipEdge[x]) == FALSE
(Cases B and C)
if InsideTest(s, clipEdge[x]) == TRUE then : Case B
i = Intersect(s, p, clipEdge[x])
AddNewVertex(i, Nout, outVertexArray)
end if : No action for Case C
end if
s = p : Advance to next pair of vertices
j = j + 1
}
x = x + 1 : Proceed to the next ClipEdge[x + 1]
Nin = Nout
inVertexArray = outVertexArray : The output vertex array for the current clip edge becomes the
input vertex array for the next clip edge
} : end while
} : end main


Unit 2 Transformations

Geometrical transformations


Matrix Representation and Homogeneous Co-ordinates

Many graphics applications involve sequences of geometric transformations. An animation, for

example, might require an object to be translated and rotated at each increment of the motion. In
order to combine a sequence of transformations, we have to eliminate the matrix addition used
for translation. We can combine the multiplicative and translational terms for two-dimensional
geometric transformations into a single matrix representation by expanding the 2 by 2 matrix
representations to 3 by 3 matrices. This allows us to express all transformation equations as
matrix multiplications, provided that we also expand the matrix representations for coordinate
positions. To express any two-dimensional transformation as a matrix multiplication, we
represent each Cartesian coordinate position (x,y) with the homogeneous coordinate triple (x,y,h).
To achieve this we represent matrices as 3 x 3 instead of 2 x 2, introducing an additional
dummy coordinate h; points are specified by three numbers instead of two. This coordinate
system is called the homogeneous coordinate system, and it allows transformation equations to
be expressed as matrix multiplications.

Cartesian coordinate position (x,y) is represented as homogeneous coordinate triple(x,y,h)

• Represent coordinates as (x,y,h)


• Actual coordinates drawn will be (x/h,y/h)


For Translation
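As an illustration, translation by (tx, ty) expressed as a single 3x3 homogeneous matrix multiplication can be sketched in C. `apply3` is our own helper name; points are treated as column vectors (x, y, 1).

```c
/* Translation by (tx, ty) as one 3x3 homogeneous matrix:
       | 1  0  tx |   | x |   | x + tx |
       | 0  1  ty | . | y | = | y + ty |
       | 0  0  1  |   | 1 |   |   1    |                          */

/* Apply a 3x3 homogeneous matrix to the point (x, y). */
void apply3(const double m[3][3], double x, double y, double *xp, double *yp)
{
    double h[3] = { x, y, 1.0 }, r[3] = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[i] += m[i][j] * h[j];
    *xp = r[0] / r[2];   /* divide by h to return to Cartesian form */
    *yp = r[1] / r[2];
}
```

Because every transformation is now a matrix product, a sequence of transformations can be composed into one matrix before being applied to the points.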


Other Transformations

1. Reflection
2. Shearing
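As a sketch, with our own type and function names, 2-D reflection and shear can be written as:

```c
typedef struct { double x, y; } P2;

/* Reflection about the x-axis keeps x and negates y. */
P2 reflect_x(P2 p)           { return (P2){  p.x, -p.y }; }

/* Reflection about the y-axis negates x and keeps y. */
P2 reflect_y(P2 p)           { return (P2){ -p.x,  p.y }; }

/* x-direction shear: x values shift in proportion to y, y is unchanged. */
P2 shear_x(P2 p, double shx) { return (P2){ p.x + shx * p.y, p.y }; }
```

Both transformations fit the same 3x3 homogeneous form as translation, rotation and scaling, so they compose freely with them.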


Note: if Sx = Sy, the relative proportions of objects are maintained; otherwise the world object
will be stretched or contracted in the x or y direction when displayed on the output device.


In terms of Geometric transformations the above relation can be interpreted through the following two
steps:


Three-dimensional Graphics:

Three-dimensional object to screen perspective viewing transformation:


Extension of two-dimensional to three-dimensional transformations:

Just as 2-D transformations can be represented by 3x3 matrices using homogeneous
co-ordinates, 3-D transformations can be represented by 4x4 matrices, provided we use
homogeneous co-ordinate representations of points in 3D space as well.
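For instance, a 3-D translation by (tx, ty, tz) is the 4x4 identity matrix with tx, ty, tz in the last column. A sketch of applying such a matrix to a point (`apply4` is our own helper name):

```c
/* Apply a 4x4 homogeneous matrix to the 3-D point p = (x, y, z, 1). */
void apply4(const double m[4][4], const double p[3], double out[3])
{
    double h[4] = { p[0], p[1], p[2], 1.0 }, r[4] = { 0, 0, 0, 0 };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i] += m[i][j] * h[j];
    for (int i = 0; i < 3; i++)
        out[i] = r[i] / r[3];   /* divide by h to return to Cartesian form */
}
```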


Rotation about x-axis:

x’ = x
y’ = ycosθ – zsinθ
z’ = ysinθ + zcosθ


About y-axis:
y’ = y
z’ = zcosθ – xsinθ
x’ = zsinθ + xcosθ


(ii) About any axis:

Rotation about an arbitrary axis is obtained as a composite transformation: translate the object
so that the rotation axis passes through the origin, rotate so that the axis coincides with one of
the coordinate axes, perform the specified rotation about that axis, and then apply the inverse
rotations and the inverse translation. The net transformation is the product of these component
matrices.


Shearing:
In 2D, a shear in the x direction changes x values by an amount proportional to the y value and
leaves y values unchanged. In 3D, a z-axis shear changes x and y values by amounts proportional
to the z value and leaves z values unchanged, i.e.
x’ = x + Shx.z
y’ = y + Shy.z
z’ = z
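A direct transcription of these equations (`shear_z` is our own name):

```c
/* z-axis shear: x and y change in proportion to z, z is unchanged. */
void shear_z(double shx, double shy, double p[3])
{
    p[0] += shx * p[2];   /* x' = x + Shx * z */
    p[1] += shy * p[2];   /* y' = y + Shy * z */
}
```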



Unit 3 : 3D Object Representation

Realization In 3-D Graphics:

Many computer graphics applications involve the display of 3-D objects and scenes. For example,
CAD systems allow users to manipulate models of machine components, automobile bodies
and aircraft parts. Simulation systems present a continuously moving picture of a 3-D world to the
pilot of a ship or aircraft. These applications differ from 2-D applications not only in the added
dimension: they also require concern for realism in the display of objects.

Producing a realistic image of a 3-D scene on a 2-D display presents many problems. For
example, depth in the third dimension must be conveyed on the screen, and 3-D objects must be
modelled in the computer so that images can be generated. There are a number of techniques for
achieving realism.

The basic problem addressed by visualization techniques is sometimes called depth cueing. When
a 3-D scene is projected onto a 2-D display, information about the depth of objects in the
image tends to be reduced or lost entirely. Techniques that provide depth cues are designed
to restore or enhance the communication of depth to the observer. The different techniques for
achieving realism are:

- Parallel projection
- Perspective projection
- Intensity cues
- Stereoscopic views
- Kinetic depth effect
- Hidden-line elimination
- Shading with hidden surfaces removed
- 3-D images

Modelling 3-D scenes:

The techniques used to generate different kinds of 3-D scenes all start from a model of the
scene. The model is needed for two purposes:

1. It is used by viewing algorithm together with information about the location of the viewer.

2. It is used to modify and analyze the objects in the scene, activities usually considered part of
the application program.

The information in a model of a 3-D scene can be divided into two important classes: geometry
and topology. Geometry is concerned with measurements, such as the location of a point or the
dimensions of an object. Topological information records the structure of a scene: how points are
combined to form polygons, how polygons form objects and how objects form scenes.


3D Object Representation

Graphics scenes can contain many different kinds of objects: trees, flowers, clouds, rocks, water,
bricks, rubber, paper, marble, steel, glass, plastic, and cloth, just to mention a few. To produce
realistic displays of scenes, we need to use representations that accurately model object
characteristics.

 Polygon and quadric surfaces provide precise descriptions for simple Euclidean objects
such as polyhedrons and ellipsoids;
 Spline surfaces and construction techniques are useful for designing aircraft wings, gears,
and other engineering structures with curved surfaces;
 Procedural methods, such as fractal constructions and particle systems, allow us to give
accurate representations for clouds, clumps of grass, and other natural objects;
 Physically based modeling methods using systems of interacting forces can be used to
describe the nonrigid behavior of a piece of cloth or a glob of jelly;
 Octree encodings are used to represent internal features of objects, such as those obtained
from medical CT images;
 Isosurface displays, volume renderings, and other visualization techniques are applied to
three-dimensional discrete data sets to obtain visual representations of the data.

Representation schemes for solid objects are often divided into two broad categories:

1. Boundary representations: (B-reps) describe a three-dimensional object as a set of surfaces

that separate the object interior from the environment. Typical examples of boundary
representations are polygon facets and spline patches.

2. Space-partitioning representations: these are used to describe interior

properties by partitioning the spatial region containing an object into a set of small,
non-overlapping, contiguous solids (usually cubes). A common space-partitioning description for a
three-dimensional object is an octree representation.

Boundary representations

Polygon surface:

The most commonly used boundary representation for a 3-D graphics object is a set of surface
polygons that enclose the object interior. Many graphics systems store all object descriptions as
sets of surface polygons. This simplifies and speeds up the surface rendering and display of an
object, since all surfaces are described with linear equations. A polygon representation of a
polyhedron precisely defines the surface features of the object. But for other objects, surfaces are
tessellated (or tiled) to produce a polygon-mesh approximation. A wireframe display can be
done quickly to give a general indication of the surface structure. Realistic scenes are then produced
by interpolating shading patterns across the polygon surfaces to illuminate them.


Polygon Table

A polygon surface is specified with a set of vertex coordinates and associated attribute
parameters. As information for each polygon is input, the data are placed into tables that are to
be used in the subsequent processing, display, and manipulation of the objects in a scene.
Polygon data tables can be organized into two groups: geometric tables and attribute tables.

(a) Geometric table: It contains vertex co-ordinates and parameters to identify the spatial
orientation of the polygon surfaces.

(b) Attribute table: It gives attribute information for an object (degree of transparency, surface
reflectivity, etc.)

(a) Geometric table : A convenient organization for storing geometric data is to create three lists:
a vertex table, an edge table, and a polygon table.

(i) Vertex table: It stores co-ordinate values for each vertex of the object.

(ii) Edge table: It stores the endpoint information for each edge of the polygon facets, as
pointers back into the vertex table that identify the two vertices of each edge.

Edge Vertex
E1 (v1,v2)
E2 (v2,v3)
E3 (v3,v1)
E4 (v1,v5)
E5 (v5,v4)
E6 (v4,v3)
Surface table:
The polygon-surface table stores the surface information, i.e. each surface is represented by the
list of edges that bound it. The object contains the polygonal facets shown in the figure.


Surface Edge
S1 (E1,E2,E3)
S2 (E3,E4,E5,E6)

Rules for creating the geometric tables:


1. Every vertex is listed as an endpoint for at least two edges.
2. Every edge is part of at least one polygon.
3. Each polygon has at least one shared edge.
4. Every surface is closed.
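The tables for the two-facet object above can be sketched as C structures. This uses 0-based indices, and the type and field names are our own:

```c
typedef struct { double x, y, z; } Vertex;
typedef struct { int v0, v1; } Edge;             /* indices into the vertex table */
typedef struct { int edges[8]; int n; } Surface; /* indices into the edge table   */

/* Edge table for the example: E1..E6 become indices 0..5,
   and vertices v1..v5 become indices 0..4. */
const Edge edge_table[6] = {
    {0, 1},   /* E1 = (v1, v2) */
    {1, 2},   /* E2 = (v2, v3) */
    {2, 0},   /* E3 = (v3, v1) */
    {0, 4},   /* E4 = (v1, v5) */
    {4, 3},   /* E5 = (v5, v4) */
    {3, 2},   /* E6 = (v4, v3) */
};

/* Surface table: S1 = (E1, E2, E3), S2 = (E3, E4, E5, E6). */
const Surface surface_table[2] = {
    { {0, 1, 2},    3 },
    { {2, 3, 4, 5}, 4 },
};
```

Edge E3 appearing in both surfaces illustrates rule 3: each polygon shares at least one edge, and the shared edge is stored once.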

Polygon Meshes:

A polygon mesh is a collection of edges, vertices and polygons connected such that each edge is
shared by at most two polygons.
i) An edge connects two vertices, and a polygon is a closed sequence of edges.
ii) An edge can be shared by two polygons, and a vertex is shared by at least two edges.

When an object surface is to be tiled, it is more convenient to specify the surface facets with a
mesh function. One type of polygon mesh is the triangle strip: given n vertices, this function
produces n-2 connected triangles.

Fig: A triangle strip formed with 11 triangles connecting 13 vertices.

Another similar function is the quadrilateral mesh, which generates a mesh of (n-1) by (m-1)
quadrilaterals, given the co-ordinates for an n x m array of vertices.

Fig: A quadrilateral mesh containing 12 quadrilaterals constructed from a 5 by 4 input vertex
array.
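The triangle-strip and quadrilateral-mesh counts quoted above are easy to verify with small index generators (a minimal sketch; the functions return index tuples into a vertex list):

```python
def triangle_strip(vertices):
    """Index triples for a triangle strip: n vertices -> n - 2 triangles,
    each triangle sharing an edge with its predecessor."""
    return [(i, i + 1, i + 2) for i in range(len(vertices) - 2)]

def quad_mesh(n, m):
    """Quad indices for an n x m vertex grid: (n-1)*(m-1) quadrilaterals."""
    return [(i * m + j, i * m + j + 1, (i + 1) * m + j + 1, (i + 1) * m + j)
            for i in range(n - 1) for j in range(m - 1)]

tris = triangle_strip(list(range(13)))   # 13 vertices, as in the figure
assert len(tris) == 11                   # -> 11 connected triangles
assert tris[0] == (0, 1, 2)

assert len(quad_mesh(5, 4)) == 12        # 5 x 4 vertices -> 12 quads
```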

Plane Equation


For some of the surface-rendering procedures, we need information about the spatial orientation
of the individual surface components of the object. This information is obtained from the vertex
coordinate values and the equations that describe the polygon planes.

The equation of the plane is

Ax+By+Cz +D = 0

Solution of plane Co-efficient:

1. Algebraic Approach.

2. Vertex Approach.

We derive the solution for the coefficients A, B, C and D using the algebraic approach. If (x1,y1,z1),
(x2,y2,z2) and (x3,y3,z3) are three successive vertices of the polygon, then

Ax +By+Cz = -D

(A/D)x +(B/D)y+(C/D)z = -1

(A/D)x1 +(B/D)y1+(C/D)z1 = -1 ……….(i)


(A/D)x2 +(B/D)y2+(C/D)z2 = -1 ……….(ii)
(A/D)x3 +(B/D)y3+(C/D)z3 = -1 ………(iii)

By Cramer's rule:

Expanding the determinants, we can write:

A = y1(z2 − z3) + y2(z3 − z1) + y3(z1 − z2)
B = z1(x2 − x3) + z2(x3 − x1) + z3(x1 − x2)
C = x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)
D = −x1(y2z3 − y3z2) − x2(y3z1 − y1z3) − x3(y1z2 − y2z1)


As vertex values and other information are entered into the polygon data structure, values for A,
B, C and D are computed for each polygon and stored with the other polygon data.

The plane equation is used to determine the position of spatial points relative to the plane
surfaces of an object. If P(x, y, z) is any point, its signed distance from the plane is proportional
to d = Ax + By + Cz + D:

If d = 0, the point is on the plane.
If d < 0, the point is inside the plane.
If d > 0, the point is outside the plane.

The side of the plane that faces the object interior is the 'inside' face, and the visible face is the
'outside' face. If polygon vertices are specified in a counterclockwise direction when viewing the
outer side of the plane in a right-handed coordinate system, the direction of the normal vector
will be from inside to outside.
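The plane coefficients can also be obtained from the cross product of two edge vectors of the polygon, which gives the same normal direction as the determinant expansion; a minimal sketch:

```python
def plane_coefficients(p1, p2, p3):
    """A, B, C, D for the plane through three non-collinear vertices.
    (A, B, C) is the normal vector, computed as (p2 - p1) x (p3 - p1)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    A = (y2 - y1) * (z3 - z1) - (z2 - z1) * (y3 - y1)
    B = (z2 - z1) * (x3 - x1) - (x2 - x1) * (z3 - z1)
    C = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    D = -(A * x1 + B * y1 + C * z1)
    return A, B, C, D

def classify(point, coeffs):
    """Sign of Ax + By + Cz + D: 0 on, < 0 inside, > 0 outside."""
    A, B, C, D = coeffs
    x, y, z = point
    return A * x + B * y + C * z + D

# Plane z = 0 through three vertices listed counterclockwise (normal +z).
coeffs = plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0))
assert classify((5, 5, 0), coeffs) == 0   # on the plane
assert classify((0, 0, -1), coeffs) < 0   # inside (behind the normal)
assert classify((0, 0, 2), coeffs) > 0    # outside (in front)
```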
Representing Curves Line and Surfaces

Displays of three dimensional curved lines and surfaces can be generated from an input set of
mathematical functions defining the objects or from a set of user specified data points. When
functions are specified, a package can project the defining equations for a curve to the display
plane and plot pixel positions along the path of the projected function. For surfaces, a functional
description is often tessellated to produce a polygon-mesh approximation to the surface. Usually,
this is done with triangular polygon patches to ensure that all vertices of any polygon are in one
plane. Polygons specified with four or more vertices may not have all vertices in a single plane.

Examples of display surfaces generated from functional descriptions include the quadrics and the
super quadrics. When a set of discrete coordinate points is used to specify an object shape, a
functional description is obtained that best fits the designated points according to the constraints
of the application. Spline representations are examples of this class of curves and surfaces. These
methods are commonly used to design new object shapes, to digitize drawings, and to describe
animation paths. Curve-fitting methods are also used to display graphs of data values by fitting
specified curve functions to the discrete data set, using regression techniques such as the least-
squares method. Curve and surface equations can be expressed in either a parametric or a
nonparametric form.

QUADRIC SURFACE

A frequently used class of objects is the quadric surfaces, which are described with second-degree
equations (quadratics). A quadric is a generalization of a conic section to 3D. They


include spheres, ellipsoids, tori, paraboloids, and hyperboloids. Quadric surfaces, particularly
spheres and ellipsoids, are common elements of graphics scenes, and they are often available in
graphics packages as primitives from which more complex objects can be constructed.

There are six basic types of quadric surfaces, which depend on the signs of the parameters. They
are the ellipsoid, hyperboloid of one sheet, hyperboloid of two sheets, elliptic cone, elliptic
paraboloid, and hyperbolic paraboloid (saddle). All but the hyperbolic paraboloid may be
expressed as a surface of revolution.

Sphere

In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is
defined as the set of points (x, y, z) that satisfy the equation

x^2 + y^2 + z^2 = r^2
Ellipsoid

An ellipsoidal surface can be described as an extension of a spherical surface, where the radii in
three mutually perpendicular directions can have different values. In Cartesian coordinates, it is
the set of points satisfying

(x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1

Torus

A torus is a doughnut-shaped object, as shown in the figure. It can be generated by rotating a
circle or other conic about a specified axis. The Cartesian representation for points over the
surface of a torus can be written in the form

[r − sqrt((x/rx)^2 + (y/ry)^2)]^2 + (z/rz)^2 = 1

where r is any given offset value.


BLOBBY OBJECTS

Some objects do not maintain a fixed shape, but change their surface characteristics in certain
motions or when in proximity to other objects. Examples in this class of objects include
molecular structures, water droplets and other liquid effects, melting objects, and muscle shapes
in the human body. These objects can be described as exhibiting "blobbiness" and are often
simply referred to as blobby objects, since their shapes show a certain degree of fluidity. A
molecular shape, for example, can be described as spherical in isolation, but this shape changes
when the molecule approaches another molecule. These characteristics cannot be adequately
described simply with spherical or elliptical shapes. Similarly, muscle shapes in the human arm
exhibit similar characteristics. In this case, we want to model surface shapes so that the total
volume remains constant. Several models have been developed for representing blobby objects
as distribution functions over a region of space. One way to do this is to model objects as
combinations of Gaussian density functions, or "bumps". A surface function is then defined as

f(x, y, z) = Σk bk e^(−ak·rk^2) − T = 0

where rk^2 = xk^2 + yk^2 + zk^2, T is the threshold, and the parameters ak and bk are used to
adjust the amount of blobbiness of the individual object components.

Other methods for generating blobby objects use density functions that fall off to 0 in a finite
interval, rather than exponentially. The "metaball" model describes composite objects as
combinations of quadratic density functions of the form

f(r) = b(1 − 3r^2/d^2),      0 < r ≤ d/3
f(r) = (3b/2)(1 − r/d)^2,    d/3 < r ≤ d
f(r) = 0,                    r > d

and the "soft object" model uses the function

f(r) = 1 − (22r^2)/(9d^2) + (17r^4)/(9d^4) − (4r^6)/(9d^6),   0 < r ≤ d

Parametric Curves

Curves and surfaces can have explicit, implicit, and parametric representations.


There are multiple ways to represent curves in two dimensions:


i) Explicit: y = f(x), given x, find y.
Example:
The explicit form of a line is y = mx + b.
ii) Implicit: f(x, y) = 0
Example:
The implicit equation for a circle of radius r and center pc = (xc, yc) is
(x − xc)2 + (y − yc)2 = r2,

iii) Parametric: P = P(u)
Example:
The parametric form of a line segment from P0 to P1 is P(u) = P0 + u(P1 − P0), 0 ≤ u ≤ 1.

In mathematics, a parametric equation is a method of defining a relation using parameters, like u
in the example. Parametric representations are the most common in computer graphics.
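The implicit and parametric forms can be compared on a circle (a minimal sketch; the radius and center values are arbitrary). Every point generated by the parametric form satisfies the implicit equation:

```python
import math

r, xc, yc = 2.0, 1.0, -1.0  # radius and center (example values)

def implicit(x, y):
    """Implicit form: f(x, y) = 0 exactly on the circle."""
    return (x - xc) ** 2 + (y - yc) ** 2 - r ** 2

def parametric(u):
    """Parametric form: one point for each parameter u in [0, 1)."""
    theta = 2.0 * math.pi * u
    return (xc + r * math.cos(theta), yc + r * math.sin(theta))

# Every parametrically generated point lies on the implicit circle.
for k in range(8):
    x, y = parametric(k / 8.0)
    assert abs(implicit(x, y)) < 1e-9
print("parametric points lie on the implicit circle")
```

The contrast illustrates the bullet points that follow: the parametric form is bounded (u runs over a finite interval) and has no infinite-slope problem at the circle's sides, whereas the explicit form y = f(x) cannot even represent the full circle single-valuedly.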

Advantages of parametric forms


• More degrees of freedom
• Directly transformable
• Dimension independent
• No infinite slope problems
• Separates dependent and independent variables
• Inherently bounded
• Easy to express in vector and matrix form
• Common form for many curves and surfaces

Parametric cubic curves are commonly used in graphics because curves of lower order
commonly have too little flexibility, while curves of higher order are usually considered
unnecessarily complex and make it easy to introduce undesired wiggles.
A parametric cubic curve in 3D is defined by Q(u) = [x(u) y(u) z(u)], where each coordinate
function is a cubic polynomial in the parameter u, 0 ≤ u ≤ 1.

Here, we will discuss only some curves that are generally used in computer graphics to represent
3D objects, such as spline curves and Bezier curves.

Spline Representation
A spline is a flexible strip used to produce a smooth curve through a designated set of points; a
curve drawn through these points is a spline curve. Mathematically, splines are described as
piecewise cubic polynomial functions whose first and second derivatives are continuous across
the various curve sections. In computer graphics, the term spline curve now refers to any composite curve


formed with polynomial sections satisfying specified continuity conditions at the boundary of the
pieces. A spline surface can be described with two sets of orthogonal spline curves.

There are several different kinds of spline specifications that are used in graphics applications.
Each individual specification simply refers to a particular type of polynomial with certain
specified boundary conditions. Splines are used in graphics applications to design and digitize
drawings for storage in the computer and to specify animation paths. Typical CAD applications
for splines include the design of automobile bodies and aircraft and spacecraft surfaces.

There are three equivalent methods for specifying a particular spline representation:
(1) We can state the set of boundary conditions that are imposed on the spline; or
(2) we can state the matrix that characterizes the spline; or
(3) we can state the set of blending functions (or basis functions) that determine how specified
geometric constraints on the curve are combined to calculate positions along the curve path.

CUBIC SPLINE INTERPOLATION METHODS


This class of splines is most often used to set up paths for object motions or to provide a
representation for an existing object or drawing, but interpolation splines are also used
sometimes to design object shapes. Cubic polynomials offer a reasonable compromise between
flexibility and speed of computation. Compared to higher-order polynomials, cubic splines
require less calculations and memory and they are more stable. Compared to lower-order
polynomials, cubic splines are more flexible for modeling arbitrary curve shapes. Given a set of
control points, cubic interpolation splines are obtained by fitting the input points with a
piecewise cubic polynomial curve that passes through every control point.

Suppose we have n + 1 control points specified with coordinates

pk = (xk, yk, zk),   k = 0, 1, 2, ..., n

A cubic interpolation fit passes a curve section through each pair of adjacent control points.

We can describe the parametric cubic polynomial that is to be fitted between each pair of control
points with the following set of parametric equations:

x(u) = ax·u^3 + bx·u^2 + cx·u + dx
y(u) = ay·u^3 + by·u^2 + cy·u + dy
z(u) = az·u^3 + bz·u^2 + cz·u + dz,   0 ≤ u ≤ 1
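A single coordinate of such a section, x(u) = ax·u^3 + bx·u^2 + cx·u + dx, can be evaluated efficiently with Horner's rule (a minimal sketch; the coefficient values below are hypothetical):

```python
def cubic(u, a, b, c, d):
    """Evaluate a*u**3 + b*u**2 + c*u + d via Horner's rule
    (three multiplications and three additions)."""
    return ((a * u + b) * u + c) * u + d

# Hypothetical coefficients for the x(u) section of one curve piece.
ax, bx, cx, dx = 2.0, -3.0, 0.0, 1.0
assert cubic(0.0, ax, bx, cx, dx) == 1.0   # x(0) = dx
assert cubic(1.0, ax, bx, cx, dx) == 0.0   # x(1) = ax + bx + cx + dx
```

Fitting the coefficients so that the sections meet with continuous first and second derivatives is what distinguishes one interpolation spline (natural, Hermite, cardinal, ...) from another.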


Bezier Curves
Bezier curves are used in computer graphics to produce curves which appear reasonably smooth
at all scales. This spline approximation method was developed by French engineer Pierre Bezier
for automobile body design. Bezier splines were designed in such a manner that they are very
useful and convenient for curve and surface design, and are easy to implement. Curves are
trajectories of moving points; we will specify them as functions assigning a location of that
moving point (in 2D or 3D) to a parameter t, i.e., parametric curves. Curves are useful in
geometric modeling, and they should have a shape that bears a clear and intuitive relation to the
path of the sequence of control points. One family of curves satisfying this requirement is the
Bezier curves.
The Bezier curve requires only two end points and other points that control the endpoint tangent
vector.

A Bezier curve is defined by a sequence of n + 1 control points, P0, P1, ..., Pn. The curve can be
evaluated with de Casteljau's algorithm, based on recursive splitting of the intervals joining the
consecutive control points. In general, a Bezier curve can be fitted to any number of control points;
the number of control points to be approximated and their relative positions determine the degree
of the Bezier polynomial. The Bezier curve can be specified with boundary conditions, with a
characterizing matrix, or with blending functions; the blending-function specification is generally
the most convenient.

The Bezier blending functions BEZk,n(u) are the Bernstein polynomials

BEZk,n(u) = C(n, k) u^k (1 − u)^(n−k),   where C(n, k) = n! / [k!(n − k)!]

so that curve positions are given by the vector equation

P(u) = Σ (k = 0 to n) pk BEZk,n(u),   0 ≤ u ≤ 1   ...(1)

The vector equation (1) represents a set of three parametric equations for the individual curve
coordinates.


A Bezier curve is a polynomial of degree one less than the number of control points: 3 points
generate a parabola, 4 points a cubic curve, and so on.
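The recursive-splitting evaluation named above (de Casteljau's algorithm) can be sketched in a few lines; the control points here are an arbitrary three-point (parabola) example:

```python
def de_casteljau(points, u):
    """Evaluate a Bezier curve at parameter u by repeated linear
    interpolation of the control polygon (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Replace each adjacent pair by its interpolated point.
        pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Three control points generate a parabola; four would generate a cubic.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
assert de_casteljau(ctrl, 0.0) == (0.0, 0.0)   # passes through p0
assert de_casteljau(ctrl, 1.0) == (2.0, 0.0)   # passes through pn
assert de_casteljau(ctrl, 0.5) == (1.0, 1.0)   # apex of this parabola
```

The endpoint checks are exactly property 1 below: the curve always passes through the first and last control points.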

Properties of Bezier Curve:

1. It always passes through the initial and final control points, i.e., p(0) = p0 and p(1) = pn.

2. The values of the parametric first derivatives of a Bezier curve at the end points can be
calculated from the control points as

p'(0) = −n·p0 + n·p1 = n(p1 − p0)
p'(1) = −n·pn−1 + n·pn = n(pn − pn−1)
3. The slope at the beginning of the curve is along the line joining the first two points and slope at the
end of curve is along the line joining last two points.

4. The parametric second derivatives of a Bezier curve at the end points are

p"(0) = n(n − 1)[(p2 − p1) − (p1 − p0)]
p"(1) = n(n − 1)[(pn−2 − pn−1) − (pn−1 − pn)]

Another important property of any Bezier curve is that it lies within the convex hull (convex polygon
boundary) of the control points. This follows from the properties of Bezier blending functions: They are
all positive and their sum is always 1,

so that any curve position is simply the weighted sum of the control-point positions. The convex-hull
property for a Bezier curve ensures that the polynomial will not have erratic oscillations.

Solid Modeling

Surface representation is the logical evolution of modeling using faces (surfaces), edges and
vertices. In this sequence of developments, solid modeling uses topological information in
addition to the geometrical information to represent the object unambiguously and completely.

A solid model of an object is a more complete representation than its surface model. Unlike
surface representations, which contain only geometrical data, a solid model also carries
topological information, which helps to represent the solid unambiguously and completely.
Solid models result in accurate designs and help to further the goals of CAD/CAM and flexible
manufacturing, leading to better automation of the manufacturing process.

Geometry: The graphical information of dimension, length, angle, area and transformations


Topology: The invisible information about the connectivity, neighborhood, associativity, etc.

Various methods are described below.

SWEEP REPRESENTATIONS

Sweep representations are useful for constructing three-dimensional objects that possess
translational, rotational, or other symmetries. We can represent such objects by specifying a
two-dimensional shape and a sweep that moves the shape through a region of space. A set of
two-dimensional primitives, such as circles and rectangles, can be provided for sweep
representations as menu options.

Sweep Volume:

Sweeping a 2D area along a trajectory creates a new 3D object

• Translational Sweep: 2D area swept along a linear trajectory normal to the 2D plane

• Tapered Sweep: scale area while sweeping

• Slanted Sweep: trajectory is not normal to the 2D plane

• Rotational Sweep: 2D area is rotated about an axis

• General Sweep: object swept along any trajectory and transformed along the sweep

Other methods for obtaining two-dimensional figures include closed spline curve constructions
and cross-sectional slices of solid objects.


Boundary Representations (B-rep)


B-rep describes a solid in terms of its surface boundaries: vertices, edges, and faces. Curved
faces can be approximated by polygons or represented by parametric or implicit surfaces. A
closed 2D surface defines a 3D object, and at each point on the boundary there is an "in" side
and an "out" side. Boundary representations can be defined in two ways:
1) Primitive based: A collection of primitives forming the boundary (polygons, for example)
2) Freeform based (splines, parametric surfaces, implicit forms)

Primitive based :
A polyhedron is a solid bounded by a set of polygons. It is constructed from:
– Vertices V
– Edges E
– Faces F
Each edge must connect two vertices and be shared by exactly two faces. At least three edges
must meet at each vertex. A simple polyhedron is one that can be deformed into a sphere (it
contains no holes), and it must satisfy Euler's formula: V − E + F = 2

Euler’s formula can be generalized to a polyhedron with holes and multiple components
V-E+F-H=2(C-G)
Where:
• H is the number of holes in the faces
• C is the number of separate components
• G is the number of pass-through holes (genus if C=1)
• V, E and F are respectively vertices, edges and faces
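Both forms of Euler's formula are easy to check numerically; a small sketch (the counts for the torus-like mesh are illustrative of a quadrilateral mesh wrapped into a torus):

```python
def is_simple_polyhedron(v, e, f):
    """Euler's formula for a simple polyhedron: V - E + F = 2."""
    return v - e + f == 2

assert is_simple_polyhedron(8, 12, 6)        # cube
assert is_simple_polyhedron(4, 6, 4)         # tetrahedron
assert not is_simple_polyhedron(16, 32, 16)  # torus mesh: V - E + F = 0

def generalized_euler(v, e, f, h, c, g):
    """Generalized formula: V - E + F - H = 2(C - G)."""
    return v - e + f - h == 2 * (c - g)

# A 4x4 quad mesh closed into a torus: one component, genus 1,
# no holes in individual faces -- the generalized formula holds.
assert generalized_euler(16, 32, 16, 0, 1, 1)
```

Checks like these are used to validate that an edited B-rep data structure still describes a legal solid.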


Spatial Partitioning Representation

In spatial-partitioning representations, a solid is decomposed into a collection of adjoining,
nonintersecting solids that are more primitive than the original solid. Primitives may vary in
type, size, position, parameterization, and orientation. Forms of spatial-partitioning
representations:

• Cell decomposition
• Spatial-occupancy enumeration
• Octrees
• Binary space-partitioning trees
We will only explain octrees here:

OCTREES

Hierarchical tree structures, called octrees, are used to represent solid objects in some graphics
systems. Medical imaging and other applications that require displays of object cross sections
commonly use octree representations. The tree structure is organized so that each node
corresponds to a region of three-dimensional space. This representation for solids takes
advantage of spatial coherence to reduce storage requirements for three-dimensional objects. It
also provides a convenient representation for storing information about object interiors.

The octree encoding procedure for a three-dimensional space is an extension of an encoding
scheme for two-dimensional space, called quadtree encoding. Quadtrees are generated by
successively dividing a two-dimensional region (usually a square) into quadrants.

Each node in the quadtree has four data elements, one for each of the quadrants in the region. If
all pixels within a quadrant have the same color (a homogeneous quadrant), the corresponding
data element in the node stores that color. In addition, a flag is set in the data element to indicate
that the quadrant is homogeneous. Otherwise, the quadrant is said to be heterogeneous, and that
quadrant is itself divided into quadrants (see figure below). The corresponding data element in
the node now flags the quadrant as heterogeneous and stores the pointer to the next node in the
quadtree.


An octree encoding scheme divides regions of three-dimensional space (usually cubes) into
octants and stores eight data elements in each node of the tree (see figure below). Individual
elements of a three-dimensional space are called volume elements, or voxels. When all voxels in
an octant are of the same type, this type value is stored in the corresponding data element of the
node. Empty regions of space are represented by voxel type "void." Any heterogeneous octant is
subdivided into octants, and the corresponding data element in the node points to the next node
in the octree.Voxels in each octant is tested, and octant subdivisions continue until the region of
space contains only homogeneous octants. Each node in the octree can now have from zero to
eight immediate descendants.

Algorithms for generating octrees can be structured to accept definitions of objects in any form,
such as a polygon mesh, curved surface patches, or solid geometry constructions. Using the
minimum and maximum coordinate values of the object, we can define a box (parallelepiped)
around the object. This region of three-dimensional space containing the object is then tested,
octant by octant, to generate the octree representation. Once an octree representation has been
established for a solid object, various manipulation routines can be applied to the solid. An
algorithm for performing set operations can be applied to two octree representations for the same
region of space. The new octree is then formed by either storing the octants where the two
objects overlap or the region occupied by one object but not the other. Three-dimensional octree
rotations are accomplished by applying the transformations to the occupied octants. Visible-
surface identification is carried out by searching the octants from front to back. The first object
detected is visible, so that information can be transferred to a quadtree representation for display.

BSP TREES

This representation scheme is similar to octree encoding, except we now divide space into two
partitions instead of eight at each step. With a binary space-partitioning (BSP) tree, we subdivide
a scene into two sections at each step with a plane that can be at any position and orientation. In
an octree encoding, the scene is subdivided at each step with three mutually perpendicular planes
aligned with the Cartesian coordinate planes.


For adaptive subdivision of space, BSP trees can provide a more efficient partitioning since we
can position and orient the cutting planes to suit the spatial distribution of the objects. This can
reduce the depth of the tree representation for a scene, compared to an octree, and thus reduce
the time to search the tree. In addition, BSP trees are useful for identifying visible surfaces and
for space partitioning in ray-tracing algorithms.
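The two-way recursive partitioning can be sketched on a handful of 2D points. A real BSP tree may orient its cutting planes arbitrarily; this minimal sketch uses alternating axis-aligned splitting lines, and the point data are illustrative:

```python
def build_bsp(points, depth=0):
    """Tiny BSP sketch for 2D points: split the set in two with a line,
    alternating the x and y axes at successive tree levels."""
    if len(points) <= 1:
        return points                      # leaf: at most one point
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    split = pts[len(pts) // 2][axis]       # splitting coordinate
    front = [p for p in pts if p[axis] < split]
    back = [p for p in pts if p[axis] >= split]
    return {"axis": axis, "split": split,
            "front": build_bsp(front, depth + 1),
            "back": build_bsp(back, depth + 1)}

tree = build_bsp([(0, 0), (4, 1), (2, 3)])
assert tree["axis"] == 0 and tree["split"] == 2   # first split on x = 2
assert tree["front"] == [(0, 0)]                  # one point in front
```

Choosing the split from the data, rather than from a fixed grid as in an octree, is what lets a BSP tree stay shallow when objects are unevenly distributed.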

DISPLAYING LIGHT INTENSITIES

Values of intensity calculated by an illumination model must be converted to one of the
allowable intensity levels for the particular graphics system in use. Some systems are capable of
displaying several intensity levels, while others are capable of only two levels for each pixel (on
or off). In the first case, we convert intensities from the lighting model into one of the available
levels for storage in the frame buffer. For bi-level systems, we can convert intensities into
halftone patterns.

Assigning Intensity Levels

We first consider how grayscale values on a video monitor can be distributed over the range
between 0 and 1 so that the distribution corresponds to our perception of equal intensity
intervals. We perceive relative light intensities the same way that we perceive relative sound
intensities: on a logarithmic scale. This means that if the ratio of two intensities is the same as
the ratio of two other intensities, we perceive the difference between each pair of intensities to be
the same.

As an example, we perceive the difference between intensities 0.20 and 0.22 to be the same as
the difference between 0.80 and 0.88. Therefore, to display n + 1 successive intensity levels with
equal perceived brightness, the intensity levels on the monitor should be spaced so that the ratio
of successive intensities is constant:

I1/I0 = I2/I1 = ... = In/In−1 = r

Here, we denote the lowest level that can be displayed on the monitor as I0 and the highest as In.
Any intermediate intensity can then be expressed in terms of I0 as

Ik = r^k · I0

We can calculate the value of r, given the values of I0 and n for a particular system, by
substituting k = n in the preceding expression. Since In = 1, we have

r = (1/I0)^(1/n)

Thus, the calculation for Ik is

Ik = I0^((n−k)/n)
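The level computation follows directly from the formulas above (I0 = 0.01 and n = 4 are illustrative values, not system constants):

```python
def intensity_levels(i0, n):
    """Intensities I0..In spaced with the constant ratio
    r = (1/I0)**(1/n), so successive levels have equal
    perceived (logarithmic) brightness steps."""
    r = (1.0 / i0) ** (1.0 / n)
    return [i0 * r ** k for k in range(n + 1)]

levels = intensity_levels(0.01, 4)
assert abs(levels[0] - 0.01) < 1e-12   # lowest displayable level I0
assert abs(levels[-1] - 1.0) < 1e-9    # highest level In = 1

# The ratio between successive levels is constant, as required.
ratios = [b / a for a, b in zip(levels, levels[1:])]
assert all(abs(x - ratios[0]) < 1e-9 for x in ratios)
```

This matches the perception example in the text: the step from 0.20 to 0.22 and the step from 0.80 to 0.88 have the same ratio (1.1), so they look equally large.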


Color Models:

A color model is an abstract mathematical model describing the way colors can be represented as
tuples of numbers, typically as three or four values or color components. When this model is
associated with a precise description of how the components are to be interpreted (viewing
conditions, etc.), the resulting set of colors is called color space. This section describes ways in
which human color vision can be modeled.

RGB

Media that transmit light (such as television) use additive color mixing with primary colors of
red, green, and blue, each of which stimulates one of the three types of the eye's color receptors
with as little stimulation as possible of the other two. This is called "RGB" color space. Mixtures
of light of these primary colors cover a large part of the human color space and thus produce a
large part of human color experiences. This is why color television sets or color computer
monitors need only produce mixtures of red, green and blue light. Other
primary colors could in principle be used, but with red, green and blue the largest portion of the
human color space can be captured. Unfortunately there is no exact consensus as to what loci in
the chromaticity diagram the red, green, and blue colors should have, so the same RGB values
can give rise to slightly different colors on different screens.

HSV

Recognizing that the geometry of the RGB model is poorly aligned with the color-making
attributes recognized by human vision, computer graphics researchers developed two alternate
representations of RGB, HSV and HSL (hue, saturation, value and hue, saturation, lightness), in
the late 1970s. HSV and HSL improve on the color cube representation of RGB by arranging
colors of each hue in a radial slice, around a central axis of neutral colors which ranges from
black at the bottom to white at the top. The fully saturated colors of each hue then lie in a circle,
a color wheel. HSV models itself on paint mixture, with its saturation and value dimensions
resembling mixtures of a brightly colored paint with, respectively, white and black. HSL tries to
resemble more perceptual color models such as NCS or Munsell. It places the fully saturated
colors in a circle of lightness ½, so that lightness 1 always implies white, and lightness 0 always
implies black. HSV and HSL are both widely used in computer graphics, particularly as color
pickers in image editing software. The mathematical transformation from RGB to HSV or HSL
could be computed in real time, even on computers of the 1970s, and there is an easy-to-
understand mapping between colors in either of these spaces and their manifestation on a
physical RGB device.

The HSV (Hue, Saturation, Value) model, also called HSB (Hue, Saturation, Brightness), defines
a color space commonly used in graphics applications. Hue value ranges from 0 to 360,
Saturation and Brightness values range from 0 to 100%. The RGB (Red, Green, Blue) is also


used, primarily in web design. When written, RGB values are commonly specified using three
integers between 0 and 255, each representing red, green, and blue intensities.

The RGB model's approach to colors is important because it corresponds directly to the way
display hardware produces color, even when users prefer to specify colors in another color
model.

In the model, a color is described by specifying the intensity levels of the colors red, green, and
blue. The typical range of intensity values for each color, 0 - 255, is based on taking a binary
number with 32 bits and breaking it up into four bytes of 8 bits each. 8 bits can hold a value from
0 to 255. The fourth byte is used to specify the "alpha", or the opacity, of the color. Opacity
comes into play when layers with different colors are stacked. If the color in the top layer is less
than fully opaque (alpha < 255), the color from underlying layers "shows through". In the RGB
model, hues are represented by specifying one color as full intensity (255), a second color with a
variable intensity, and the third color with no intensity (0).

HSV

The HSV, or HSB, model describes colors in terms of hue, saturation, and value (brightness).
Note that the range of values for each attribute is arbitrarily defined by various tools or
standards. Be sure to determine the value ranges before attempting to interpret a value. Hue
corresponds directly to the concept of hue in the Color Basics section.

The advantages of using hue are:

• The relationship between tones around the color circle is easily identified.
• Shades and tints can be adjusted easily without affecting the hue.

Saturation corresponds directly to the concept of tint in the Color Basics section, except that full
saturation produces no tint, while zero saturation produces white, a shade of gray, or black.
Value corresponds directly to the concept of intensity in the Color Basics section. Tones are
created by specifying a hue with less than full saturation and full value.

The advantage of HSV is that each of its attributes corresponds directly to the basic color
concepts, which makes it conceptually simple. The perceived disadvantage of HSV is that the
saturation attribute corresponds to tinting, so de-saturated colors have increasing total intensity.
For this reason, the CSS3 standard plans to support RGB and HSL but not HSV.
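Python's standard colorsys module implements the RGB-HSV mapping (all components in [0, 1], with hue as a fraction of the 360-degree circle), which makes the relationships above easy to check:

```python
import colorsys

# Pure red: hue 0, full saturation, full value.
assert colorsys.rgb_to_hsv(1.0, 0.0, 0.0) == (0.0, 1.0, 1.0)

# Pure green sits a third of the way around the hue circle (120 of 360).
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
assert abs(h * 360 - 120.0) < 1e-9 and s == 1.0 and v == 1.0

# Gray has zero saturation; its value is just the gray level.
h, s, v = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)
assert s == 0.0 and v == 0.5

# The mapping is invertible (up to floating-point rounding).
back = colorsys.hsv_to_rgb(*colorsys.rgb_to_hsv(0.2, 0.4, 0.6))
assert all(abs(a - b) < 1e-9 for a, b in zip(back, (0.2, 0.4, 0.6)))
```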



Unit 4 Visible Surface Determination

Visible Surface Determination

For the generation of realistic graphics display, we wish to determine which lines or surfaces of the
objects are visible, either from the COP (for perspective projections) or along the direction of projection
(for parallel projections), so that we can display only the visible lines or surfaces. For this, we need to
conduct visibility tests. Visibility tests are conducted to determine the surface that is visible from a given
viewpoint. This process is known as visible-line or visible-surface determination, or hidden-line or
hidden-surface elimination. Visible-surface detection, or hidden-surface removal, is a major
concern in realistic graphics: identifying those parts of a scene that are visible from a chosen
viewing position.

Depending on the specified viewing position, particular edges are eliminated in the graphics display. For
example Figure (a) represents more complex model and Figure (b) is a realistic view of the object, after
removing hidden lines or edges.

There are numerous algorithms for identification of visible objects for different types of applications.
Some methods require more memory, some involve more processing time, and some apply only to special
types of objects. Deciding upon a method for a particular application can depend on such factors as the
complexity of the scene, type of objects to be displayed, available equipment, and whether static or
animated displays are to be generated. These requirements have encouraged the development of carefully
structured visible surface algorithms.

The two approaches are

Object-Space methods: An object space method compares objects and parts of objects to each other
within scene definition to determine which surfaces are visible.

Image-Space methods: In an image-space method, visibility is decided point by point at each
pixel position on the projection plane.

Object-space methods are implemented in the physical coordinate system in which the objects are defined,
whereas image-space methods are implemented in the screen coordinate system in which the objects are
viewed. Most visible-surface algorithms use the image-space approach, although object-space methods
can be used effectively to locate visible surfaces in some cases. Line-display algorithms, for instance,
generally use object-space methods to identify visible lines in wire-frame displays.

Compiled by : Er. Marut Dhungana

Downloaded from CSIT Tutor


Unit 4 Visible Surface Determination

Back Face detection:

Back-face detection is a fast and simple object-space method for identifying the back faces of a
polyhedron, based on the "inside-outside" test: a polygon is a back face, and therefore hidden, if the
viewing direction V and the outward surface normal N satisfy V · N > 0, i.e. if the normal points away
from the viewer.
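As a minimal sketch of this test: with viewing along the negative z-axis, V · N > 0 reduces to checking the sign of the normal's z component. The function name and the default viewing direction below are assumptions for illustration, not notation from the notes:

```python
def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Return True if a polygon faces away from the viewer.

    normal: outward surface normal (Nx, Ny, Nz).
    view_dir: viewing direction V (assumed along -z by default, the
    common convention where the viewer looks toward decreasing z).
    """
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    # Back face if V . N > 0: the normal points away from the viewer.
    return nx * vx + ny * vy + nz * vz > 0
```

With the default viewing direction, `is_back_face((0, 0, -1))` reports a back face, while a normal pointing toward the viewer, `(0, 0, 1)`, does not.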


Scan-Line method

In contrast to the z-buffer method, where we consider one surface at a time, the scan-line
method deals with multiple surfaces. As it processes one scan line at a time, all
polygons intersected by that scan line are examined to determine which surfaces are
visible. The visibility test compares the depths of the overlapping
surfaces to determine which one is closer to the view plane; the closest surface
is declared visible, and its intensity values at the positions along the
scan line are entered into the refresh buffer.
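The per-pixel depth comparison described above can be sketched as follows. The polygon representation here (the 'span', 'depth', and 'color' fields) is a hypothetical simplification; real scan-line implementations use edge tables and coherence between adjacent pixels rather than testing every pixel independently:

```python
def scan_line_visibility(polygons, scan_y, width):
    """For one scan line, pick the visible polygon at each pixel.

    polygons: list of dicts with hypothetical fields:
      'span'  -> (x_left, x_right) covered on this scan line (inclusive),
      'depth' -> a function depth(x, y) returning z at a pixel,
      'color' -> the intensity to enter into the refresh buffer.
    Returns a list of colors (None where no polygon covers the pixel).
    """
    line = [None] * width
    zline = [float('inf')] * width            # larger z = farther here
    for poly in polygons:
        x_left, x_right = poly['span']
        for x in range(max(0, x_left), min(width, x_right + 1)):
            z = poly['depth'](x, scan_y)
            if z < zline[x]:                  # closer to the view plane
                zline[x] = z
                line[x] = poly['color']
    return line
```

Where two polygons overlap on the scan line, the one whose depth function returns the smaller z wins the pixel.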


List Priority Algorithms

• Determine a visibility ordering for the objects, ensuring that a correct image results if the objects are
rendered in that order.
• Use a combination of object-precision and image-precision operations.
• Object precision: depth comparisons and object splitting.
• Image precision: scan conversion.
• The list of sorted objects is created with object precision.
• Two examples: the depth-sort algorithm (Painter's algorithm) and BSP trees.

DEPTH SORTING ALGORITHM:


This method uses both object-space and image-space operations. The surfaces of the 3D objects
are sorted in order of decreasing depth from the viewer, and the sorted surfaces
are then scan-converted in order, starting with the surface of greatest depth.
The conceptual steps performed in the depth-sort algorithm are:
1. Sort all polygon surfaces according to the smallest (farthest) z-coordinate of each.
2. Resolve any ambiguities that arise when the polygons' z-extents overlap, splitting
polygons if necessary.
3. Scan-convert each polygon in ascending order of smallest z-coordinate, i.e. farthest surface
first (back to front).
In this method, each newly displayed surface partly or completely obscures the previously
displayed surfaces. Essentially, we are sorting the surfaces into priority order, such that surfaces
with lower priority (lower z, far objects) can be obscured by those with higher priority (higher z,
near objects).


This algorithm is also called the "Painter's Algorithm", as it simulates how a painter typically
produces a painting: starting with the background and then progressively adding new
(nearer) objects to the canvas.
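The back-to-front ordering can be sketched as below; step 2 (overlap tests and polygon splitting) is deliberately omitted, and the 'z_min' and 'draw' fields are hypothetical placeholders for a polygon's farthest z extent and its scan-conversion routine:

```python
def painters_algorithm(polygons):
    """Render polygons back to front (depth-sort sketch).

    Each polygon is a dict with a hypothetical 'z_min' key (its
    smallest, i.e. farthest, z-coordinate) and a 'draw' callable.
    Ambiguity resolution for overlapping z-extents is not handled
    in this sketch.
    """
    # Sort by farthest z first, so nearer surfaces are drawn later
    # and obscure the farther ones.
    for poly in sorted(polygons, key=lambda p: p['z_min']):
        poly['draw']()
```

Because Python's `sorted` is stable, polygons with equal farthest depth keep their original relative order, which is harmless for non-overlapping surfaces but is exactly the case where step 2 would be needed in a full implementation.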

Illumination and Shading

Light Source


Polygon Rendering Methods:


(i) Constant Intensity Shading Method
(ii) Gouraud Shading Method (Intensity Interpolation)
(iii) Phong Shading Method (Normal Vector Interpolation)

(i) Constant Intensity shading Method.

The simplest shading model for a polygon is constant-intensity shading, also
called faceted shading or flat shading. This approach applies an illumination
model once to determine a single intensity value, which is then used to render the
entire polygon. Constant shading is useful for quickly displaying the general
appearance of a curved surface.



Even when not all of the conditions for accurate flat shading hold, we can still reasonably
approximate surface-lighting effects by using small polygon facets with flat shading and
calculating the intensity for each facet, say at the centre of the polygon. Of course, constant
shading does not produce the variations in shade across the polygon that should occur.

Disadvantage:
- Intensity discontinuities occur at the borders between adjacent surfaces.
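As an illustration of applying the illumination model only once per polygon, the sketch below evaluates a simple ambient-plus-diffuse (Lambert) model for a single facet normal; the function name, parameter names, and coefficient values are assumptions for the example, not values from the notes:

```python
import math

def flat_shade(normal, light_dir, ka=0.1, kd=0.7, ambient=1.0, light=1.0):
    """Single intensity for a whole facet (constant / flat shading).

    Evaluates an ambient + diffuse (Lambert) model once, e.g. with
    the normal taken at the centre of the polygon, and returns one
    intensity value used for every pixel of the facet.
    """
    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    n, l = normalize(normal), normalize(light_dir)
    # Lambert's cosine law, clamped so surfaces facing away from the
    # light receive only the ambient contribution.
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return ka * ambient + kd * light * ndotl
```

A facet lit head-on gets the full ambient plus diffuse contribution, while a facet whose normal points away from the light falls back to the ambient term alone, which is what produces the intensity jumps at facet borders noted above.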


Phong Shading
A more accurate method for rendering a polygon surface is to interpolate normal
vectors and then apply an illumination model at each surface point. This method is
called Phong shading, or the normal-vector interpolation method. It
displays more realistic highlights and greatly reduces the Mach-band effect.
A polygon surface is rendered with Phong shading by carrying out the following
calculations:
1. Determine the average unit normal vector at each vertex of the polygon.
2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line, calculating pixel intensities
from the interpolated normal vectors.
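A minimal sketch of the inner loop: interpolating a normal across one scan-line span and re-normalising it before each illumination call. The function names and the pluggable `illum` model are assumptions for illustration; a full implementation would also interpolate the normals down the polygon edges to obtain the span endpoints:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def phong_shade_span(n_left, n_right, steps, illum):
    """Shade one scan-line span by normal-vector interpolation.

    n_left / n_right: unit normals at the two ends of the span;
    illum: any illumination model mapping a unit normal to an
    intensity. The interpolated normal is re-normalised at every
    sample before the model is applied, which is what distinguishes
    Phong shading from intensity (Gouraud) interpolation.
    """
    out = []
    for i in range(steps + 1):
        t = i / steps
        n = tuple(a + t * (b - a) for a, b in zip(n_left, n_right))
        out.append(illum(normalize(n)))
    return out
```

Because the illumination model is evaluated per pixel with a true unit normal, a specular highlight that falls in the middle of a polygon is still rendered, whereas intensity interpolation would miss it if it missed the vertices.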
