Computer Graphics Notes
1. Computer Graphics: Principles and Practice, J.D. Foley, A. van Dam, S.K. Feiner, Addison-Wesley.
2. Procedural Elements of Computer Graphics, David Rogers, TMH.
3. Computer Graphics: Algorithms and Implementations, D.P. Mukherjee, D. Jana, PHI.
4. Computer Graphics, Z. Xiang, R.A. Plastock, Schaum's Outlines, McGraw-Hill.
5. Computer Graphics, S. Bhattacharya, Oxford University Press.
Module:01
Syllabus
Module – I (12 hours)
Overview of Graphics System: Video Display Units, Raster-Scan and
Random Scan Systems, Graphics Input and Output Devices. Output
Primitives: Line drawing Algorithms: DDA and Bresenham’s Line
Algorithm, Circle drawing Algorithms: Midpoint Circle Algorithm and
Bresenham’s Circle drawing Algorithm. Two Dimensional Geometric
Transformation: Basic Transformation (Translation, Rotation, Scaling)
Matrix Representation, Composite Transformations, Reflection, Shear,
Transformation between coordinate systems.
Introduction
A computer is an information-processing machine. The user needs to communicate with the computer, and computer graphics is one of the most effective and commonly used ways of communicating with the user.
It displays information in the form of graphical objects such as pictures, charts, diagrams and graphs.
Graphical objects convey more information in less time and in an easily understandable format, for example the statistical graphs shown in a stock exchange.
In computer graphics a picture or graphics object is presented as a collection of discrete pixels.
We can control the intensity and color of each pixel, which decides how the picture looks.
The special procedure that determines which pixels provide the best approximation to the desired picture or graphics object is known as Rasterization.
The process of representing a continuous picture or graphics object as a collection of discrete pixels is called Scan Conversion.
Display devices
Display devices are also known as output devices.
Most commonly used output device in a graphics system is a video monitor.
Cathode-ray-tubes
The display image is stored in the form of 1s and 0s in the refresh buffer.
The video controller reads this refresh buffer and produces the actual image on the screen.
It scans one line at a time from top to bottom and then returns to the top.
Fig. 1.4: - Raster-scan pattern: the beam is ON during left-to-right sweeps and OFF during horizontal and vertical retrace.
In this method the horizontal and vertical deflection signals are generated to move the beam all over the screen in the pattern shown in fig. 1.4.
Here the beam is swept back and forth from left to right.
When the beam moves from left to right it is ON.
When the beam moves from right to left it is OFF, and this process of moving the beam from right to left after completion of a row is known as Horizontal Retrace.
When the beam reaches the bottom of the screen it is made OFF and rapidly retraced back to the top left to start again; this process of moving back to the top is known as Vertical Retrace.
The screen image is maintained by repeatedly scanning the same image. This process is known as Refreshing of the Screen.
In raster-scan displays a special area of memory is dedicated to graphics only. This memory is called the Frame Buffer.
The frame buffer holds the set of intensity values for all the screen points.
The intensity values are retrieved from the frame buffer and displayed on the screen one row at a time.
Each screen point is referred to as a pixel or pel (picture element).
Each pixel can be specified by its row and column numbers.
The display can be a simple black-and-white system or a color system.
In a simple black-and-white system each pixel is either ON or OFF, so only one bit per pixel is needed.
Additional bits are required when color and intensity variations are to be displayed; up to 24 bits per pixel are included in high-quality display systems.
On a black-and-white system with one bit per pixel the frame buffer is commonly called a Bitmap.
For systems with multiple bits per pixel, the frame buffer is often referred to as a Pixmap.
Resolution: A raster system has poor resolution because, in contrast, it produces zigzag lines that are plotted as discrete point sets, whereas a random-scan system has good resolution because it produces smooth line drawings, the CRT beam directly following the line path.
The low-speed electrons then penetrate the storage grid and strike the phosphor coating without affecting the positive charge pattern on the storage grid.
During this process the collector just behind the storage grid smooths out the flow of flood electrons.
Advantage of DVST
Refreshing of CRT is not required.
Very complex pictures can be displayed at very high resolution without flicker.
Flat screen.
Disadvantage of DVST
They do not display color and are available with only a single level of line intensity.
Erasing requires removal of the charge on the storage grid, so the erase-and-redraw process takes several seconds.
Erasing a selective part of the screen is not possible.
They cannot be used for dynamic graphics applications, since erasing produces an unpleasant flash over the entire screen.
They have poor contrast as a result of the comparatively low accelerating potential applied to the flood electrons.
The performance of the DVST is somewhat inferior to that of the refresh CRT.
Fig. 1.10: - Light twisting shutter effect used in design of most LCD.
It is generally used in small systems such as calculators and portable laptops.
This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules yet flow like a liquid.
It consists of two glass plates, each with a light polarizer at right angles to the other, sandwiching the liquid-crystal material between them.
Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
The intersection of two conductors defines a pixel position.
In the ON state polarized light passing through the material is twisted so that it will pass through the opposite polarizer.
In the OFF state it is reflected back towards the source.
We apply a voltage to the two intersecting conductors to align the molecules so that the light is not twisted.
This type of flat-panel device is referred to as a passive-matrix LCD.
In an active-matrix LCD transistors are used at each (x, y) grid point.
The transistors cause the crystals to change their state quickly and also control the degree to which the state has been changed.
A transistor can also serve as a memory for the state until it is changed.
So the transistors keep a cell ON at all times, giving a brighter display than it would be if it had to be refreshed periodically.
Stereoscopic systems
Stereoscopic viewing does not produce true three-dimensional images, but it produces a 3D effect by presenting a different view to each eye of an observer so that the scene appears to have depth.
To obtain this we first need two views of an object, generated from viewing directions corresponding to each eye.
We can construct the two views as computer-generated scenes with different viewing positions, or we can use a stereo camera pair to photograph some object or scene.
When we simultaneously see the left view with the left eye and the right view with the right eye, the two views merge and produce an image which appears to have depth.
One way to produce the stereoscopic effect is to display each of the two views on a raster system on alternate refresh cycles.
The screen is viewed through glasses, with each lens designed in such a way that it acts as a rapidly alternating shutter synchronized to block out one of the views.
Virtual-reality
Virtual reality is a system that produces images in such a way that we feel our surroundings are what is shown on the display devices, although in actuality they are not.
In virtual reality the user can step into a scene and interact with the environment.
A headset containing an optical system to generate the stereoscopic views is commonly used in conjunction with interactive input devices to locate and manipulate objects in the scene.
A sensor in the headset keeps track of the viewer's position so that the front and back of objects can be seen as the viewer "walks through" and interacts with the display.
Virtual reality can also be produced with stereoscopic glasses and a video monitor instead of a headset. This provides a low-cost virtual-reality system.
A sensor on the display screen tracks head position and adjusts image depth accordingly.
Raster graphics systems have an additional processing unit called the video controller or display controller.
Here the frame buffer can be anywhere in the system memory, and the video controller accesses it to refresh the screen.
In addition to the video controller, more processors are used as co-processors to accelerate the system in sophisticated raster systems.
Fig. 1.15: - Architecture of a raster graphics system with a fixed portion of the system
memory reserved for the frame buffer.
A fixed area of the system memory is reserved for the frame buffer, and the video controller can directly access that frame-buffer memory.
Frame-buffer locations and screen positions are referred to in Cartesian coordinates.
For many graphics monitors the coordinate origin is defined at the lower-left screen corner.
The screen surface is then represented as the first quadrant of a two-dimensional system, with positive x values increasing from left to right and positive y values increasing from bottom to top.
Fig. 1.16: - Basic video controller refresh operation.
Two registers are used to store the coordinates of the screen pixels: an X register and a Y register.
Initially X is set to 0 and Y is set to Ymax.
The value stored in the frame buffer for this pixel position is retrieved and used to set the intensity of the CRT beam.
After this the X register is incremented by one.
This procedure is repeated until X becomes equal to Xmax.
Then X is set to 0, Y is decremented by one, and the above procedure is repeated.
This whole procedure is repeated until Y becomes equal to 0, which completes one refresh cycle. Then the controller resets the registers to the top-left corner, i.e. X = 0 and Y = Ymax, and the refresh process starts for the next refresh cycle.
Since the screen must be refreshed at a rate of 60 frames per second, the simple procedure illustrated in the figure cannot be accommodated by typical RAM chips.
To speed up pixel processing, the video controller retrieves multiple pixel values at a time using more registers and simultaneously refreshes a block of pixels.
In this way it can speed up the process and accommodate refresh rates of 60 frames per second or more.
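A rough software sketch of the refresh cycle just described (real video controllers implement this in hardware); readFrameBuffer() and emitToBeam() are hypothetical stand-ins for the frame-buffer access and the beam-intensity circuitry.

    /* Sketch of one refresh cycle of the video controller. */
    extern int readFrameBuffer(int x, int y);   /* assumed: intensity stored for pixel (x, y) */
    extern void emitToBeam(int intensity);      /* assumed: drive the CRT beam intensity       */

    void refreshCycle(int xmax, int ymax)
    {
        /* start at the top-left pixel: x = 0, y = ymax */
        for (int y = ymax; y >= 0; y--) {       /* step down one scan line at a time  */
            for (int x = 0; x <= xmax; x++) {   /* left to right along the scan line  */
                emitToBeam(readFrameBuffer(x, y));
            }
        }
        /* the registers are then reset to (0, ymax) and the next cycle begins */
    }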
Raster-graphics system with a display processor
Fig.: - Architecture of a raster-graphics system with a display processor (CPU, display processor, display-processor memory, frame buffer, video controller and monitor connected through the system bus to the I/O devices).
An application program is input and stored in the system memory along with a graphics package.
Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory.
This display file is used by the display processor to refresh the screen.
The display processor goes through each command in the display file once during every refresh cycle.
Sometimes the display processor in a random-scan system is also known as a display processing unit or a graphics controller.
In this system graphics patterns are drawn on a random-scan system by directing the electron beam along the component lines of the picture.
Lines are defined by coordinate endpoints.
These input coordinate values are converted to x and y deflection voltages.
A scene is then drawn one line at a time.
Keyboards
Keyboards are used for entering text strings. They are efficient devices for inputting non-graphics data such as picture labels.
Cursor-control keys and function keys are common features on general-purpose keyboards.
In everyday computer graphics work the keyboard is also used for commanding and controlling applications.
Mouse
A mouse is a small hand-held box used to position the screen cursor.
A wheel, roller or optical sensor directs the pointer according to the movement of the mouse.
Three buttons are placed on the top of the mouse for signaling the execution of some operation.
Nowadays more advanced mice are available which are very useful in graphics applications, for example the Z-mouse.
Joysticks
A joystick consists of a small vertical lever mounted on a base that is used to steer the screen cursor around.
Most joysticks select screen positions according to the actual movement of the stick (lever).
Some joysticks work on the pressure applied to the stick.
Sometimes the joystick is mounted on a keyboard and sometimes it is used alone.
Movement of the stick defines the movement of the cursor.
In a pressure-sensitive stick the pressure applied to the stick decides the movement of the cursor. This pressure is measured using a strain gauge.
These pressure-sensitive joysticks are also called isometric joysticks, and they are non-movable sticks.
Data glove
A data glove is used to grasp virtual objects.
The glove is constructed with a series of sensors that detect hand and finger motions.
Electromagnetic coupling between transmitter and receiver antennas is used to provide the position and orientation of the hand.
The transmitter and receiver antennas can each be structured as a set of three mutually perpendicular coils, forming a 3D Cartesian coordinate system.
Input from the glove can be used to position or manipulate objects in a virtual scene.
Digitizer
A digitizer is a common device for drawing, painting, or interactively selecting coordinate positions on an object.
One type of digitizer is the graphics tablet, which inputs two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.
A stylus is a flat pencil-shaped device that is pointed at positions on the tablet.
Image Scanner
An image scanner scans drawings, graphs, color or black-and-white photos, or text, which can be stored for computer processing by passing an optical scanning mechanism over the information to be stored.
Once we have an internal representation of a picture we can apply transformations to it.
We can also apply various image-processing methods to modify the picture.
For scanned text we can apply modification operations.
Touch Panels
As the name suggests, touch panels allow displayed objects or screen positions to be selected with the touch of a finger.
A typical application is selecting processing options shown as graphical icons.
Some systems, such as plasma panels, are designed with touch screens.
Other systems can be adapted for touch input by fitting a transparent touch-sensing mechanism over the screen.
Touch input can be recorded with following methods.
1. Optical methods
2. Electrical methods
3. Acoustical methods
Optical method
An optical touch panel employs a line of infrared LEDs along one vertical and one horizontal edge.
The edges opposite the edges containing the LEDs contain light detectors.
When we touch a particular position, the light paths crossing that position are broken, and from the broken paths the coordinate values are measured.
If two lines are cut, the average of the two pixel positions is taken.
The LEDs operate at infrared frequencies, so the light is not visible to the user.
Electrical method
An electrical touch panel is constructed with two transparent plates separated by a small distance.
One plate is coated with a conducting material and the other with a resistive material.
When the outer plate is touched, it comes into contact with the inner plate.
When both plates touch, a voltage drop is created across the resistive plate, which is converted into the coordinate values of the selected position.
Acoustical method
In an acoustical touch panel, high-frequency sound waves are generated in the horizontal and vertical directions across a glass plate.
When we touch the screen, the waves along those lines are reflected by the finger.
These reflected waves reach the transmitter position again; the time difference between sending and receiving is measured and converted into coordinate values.
Light pens
Light pens are pencil-shaped devices used to select positions by detecting the light coming from points on the CRT screen.
An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electrical pulse that causes the coordinate position of the electron beam to be recorded.
Voice systems
Voice systems are used to accept voice commands in some graphics workstations.
They are used to initiate graphics operations.
A voice system matches the input against a predefined dictionary of words and phrases.
The dictionary is set up for a particular operator by recording his or her voice.
Each word is spoken several times; the system then analyzes the word and establishes a frequency pattern for it, along with the corresponding function to be performed.
When the operator speaks a command it is matched against the predefined dictionary and the desired action is performed.
Coordinate representations
Except for a few, all general-purpose packages are designed to be used with Cartesian coordinate specifications.
If coordinate values for a picture are specified in some other reference frame, they must be converted to Cartesian coordinates before being given as input to the graphics package.
Special-purpose packages may allow the use of other coordinate systems that suit the application.
In general, several different Cartesian reference frames are used to construct and display a scene.
We construct the shapes of individual objects in separate coordinate systems called modeling coordinates, or sometimes local coordinates or master coordinates.
Once individual object shapes have been specified, we place the objects into appropriate positions in a scene reference frame called world coordinates.
Finally the world-coordinate description of the scene is transferred to one or more output-device reference frames for display. These display coordinate systems are referred to as device coordinates or screen coordinates.
Generally a graphics system first converts world-coordinate positions to normalized device coordinates, in the range from 0 to 1, before the final conversion to specific device coordinates.
An initial modeling-coordinate position (Xmc, Ymc) in this illustration is transferred to a device-coordinate position (Xdc, Ydc) with the sequence (Xmc, Ymc) → (Xwc, Ywc) → (Xnc, Ync) → (Xdc, Ydc).
Graphic Function
A general-purpose graphics package provides the user with a variety of functions for creating and manipulating pictures.
The basic building blocks for pictures are referred to as output primitives. They include character strings and geometric entities such as points, straight lines, curved lines, filled areas, and shapes defined with arrays of color points.
Input functions are used to control and process the various input devices such as a mouse, tablet, etc.
Control operations are used for controlling and housekeeping tasks such as clearing the display screen.
All such built-in functions which we can use for our purposes are known as graphics functions.
Fig. 2.1: - Stair step effect produced when line is generated as a series of pixel positions.
The stair-step shape is noticeable in low-resolution systems, and we can improve its appearance somewhat by displaying the lines on high-resolution systems.
More effective techniques for smoothing raster lines are based on adjusting pixel intensities along the line paths.
For the raster device-level algorithms discussed here, object positions are specified directly in integer device coordinates.
Pixel positions are referenced according to scan-line number and column number, as illustrated by the following figure.
Fig. 2.2: - Pixel positions referenced by scan-line number and column number.
Similarly, to retrieve the current frame-buffer intensity we assume we have a procedure 𝑔𝑒𝑡𝑝𝑖𝑥𝑒𝑙(𝑥, 𝑦).
Fig. 2.3: - Line path between endpoint positions (x1, y1) and (x2, y2).
DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line-drawing algorithm based on calculating either ∆𝑦 or ∆𝑥 from the line equation.
We sample the line at unit intervals in one coordinate and find the corresponding integer values nearest the line path for the other coordinate.
Consider first a line with positive slope less than or equal to 1:
We sample at unit x intervals (∆𝑥 = 1) and calculate each successive y value as follows:
𝑦 = 𝑚𝑥 + 𝑏
𝑦1 = 𝑚(𝑥 + 1) + 𝑏
In general 𝑦𝑘 = 𝑚(𝑥 + 𝑘) + 𝑏, and
𝑦𝑘+1 = 𝑚(𝑥 + 𝑘 + 1) + 𝑏
Now write this equation in the form:
𝑦𝑘+1 − 𝑦𝑘 = (𝑚(𝑥 + 𝑘 + 1) + 𝑏) − (𝑚(𝑥 + 𝑘) + 𝑏)
𝑦𝑘+1 = 𝑦𝑘 + 𝑚
This is computed quickly in a computer, as addition is faster than multiplication.
In the above equation 𝑘 takes integer values starting from 1 and increases by 1 until the final endpoint is reached.
As 𝑚 can be any real number between 0 and 1, the calculated 𝑦 values must be rounded to the nearest integer.
Consider next a line with a positive slope greater than 1:
We interchange the roles of 𝑥 and 𝑦, that is, we sample at unit 𝑦 intervals (∆𝑦 = 1) and calculate each succeeding 𝑥 value as:
𝑥 = (𝑦 − 𝑏)/𝑚
𝑥1 = ((𝑦 + 1) − 𝑏)/𝑚
In general 𝑥𝑘 = ((𝑦 + 𝑘) − 𝑏)/𝑚, &
𝑥𝑘+1 = ((𝑦 + 𝑘 + 1) − 𝑏)/𝑚
Now write this equation in form:
𝑥𝑘+1 − 𝑥𝑘 = (((𝑦 + 𝑘 + 1) − 𝑏)/𝑚) – (((𝑦 + 𝑘) − 𝑏)/𝑚)
𝑥𝑘+1 = 𝑥𝑘 + 1/𝑚
Both of the above equations are based on the assumption that lines are to be processed from the left endpoint to the right endpoint.
If we process the line from the right endpoint to the left endpoint, then:
If ∆𝑥 = −1 the equation becomes:
𝑦𝑘+1 = 𝑦𝑘 − 𝑚
If ∆𝑦 = −1 the equation becomes:
𝑥𝑘+1 = 𝑥𝑘 − 1/𝑚
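A C sketch of the DDA procedure; it samples along whichever coordinate changes faster, which covers the slope cases above in one routine. setpixel(x, y) is an assumed frame-buffer routine, not something defined in these notes.

    #include <stdlib.h>                     /* for abs()                        */

    extern void setpixel(int x, int y);     /* assumed frame-buffer primitive   */

    void ddaLine(int x1, int y1, int x2, int y2)
    {
        int dx = x2 - x1, dy = y2 - y1;
        /* sample along the coordinate that changes faster */
        int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
        if (steps == 0) { setpixel(x1, y1); return; }    /* degenerate line     */

        float xInc = (float)dx / steps;     /* +/-1 or +/-1/m                   */
        float yInc = (float)dy / steps;     /* +/-m or +/-1                     */
        float x = (float)x1, y = (float)y1;

        setpixel(x1, y1);
        for (int k = 0; k < steps; k++) {
            x += xInc;
            y += yInc;
            setpixel((int)(x + 0.5f), (int)(y + 0.5f));  /* round to nearest pixel */
        }
    }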
Fig. 2.4: - Section of a display screen where a straight line segment is to be plotted, starting from the pixel at column 10 on scan line 11.
Fig. 2.5: - Section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.
The vertical axes show scan-line positions and the horizontal axes identify pixel columns.
Sampling at unit 𝑥 intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit 𝑥 intervals.
Starting from the left endpoint (𝑥0, 𝑦0) of a given line, we step to each successive column and plot the pixel whose scan-line 𝑦 value is closest to the line path.
Assuming we have determined that the pixel at (𝑥𝑘, 𝑦𝑘) is to be displayed, we next need to decide which pixel to plot in column 𝑥𝑘 + 1.
Our choices are the pixels at positions (𝑥𝑘 + 1, 𝑦𝑘) and (𝑥𝑘 + 1, 𝑦𝑘 + 1).
Let us see the mathematical calculation used to decide which pixel position is lit up.
We know that equation of line is:
𝑦 = 𝑚𝑥 + 𝑏
Now for position 𝑥𝑘 + 1.
𝑦 = 𝑚(𝑥𝑘 + 1) + 𝑏
Now calculate the distance between the actual line's 𝑦 value and the lower pixel as 𝑑1, and the distance between the actual line's 𝑦 value and the upper pixel as 𝑑2.
𝑑1 = 𝑦 − 𝑦𝑘
d1 = m(xk + 1) + b − yk ......................................................................................................................................... (1)
𝑑2 = (𝑦𝑘 + 1) − 𝑦
𝑑2 = (𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏..…………………………………………………………………………………………………………(2)
Now calculate 𝑑1 − 𝑑2 from equation (1) and (2).
𝑑1 − 𝑑2 = (𝑦 – 𝑦𝑘) – ((𝑦𝑘 + 1) – 𝑦)
𝑑1 − 𝑑2 = {𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 } − {(𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏}
𝑑1 − 𝑑2 = {𝑚𝑥𝑘 + 𝑚 + 𝑏 − 𝑦𝑘 } − {𝑦𝑘 + 1 − 𝑚𝑥𝑘 − 𝑚 − 𝑏}
𝑑1 − 𝑑2 = 2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1……………………………………………………………………………….……………..(3)
Now substitute 𝑚 = ∆𝑦/∆𝑥 in equation (3):
𝑑1 − 𝑑2 = 2(∆𝑦/∆𝑥)(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1 ……………………………………………………………………………………(4)
Now the decision parameter 𝑝𝑘 for the 𝑘th step in the line algorithm is given by:
𝑝𝑘 = ∆𝑥(𝑑1 − 𝑑2)
𝑝𝑘 = ∆𝑥(2∆𝑦/∆𝑥(𝑥𝑘 + 1) – 2𝑦𝑘 + 2𝑏 – 1)
𝑝𝑘 = 2∆𝑦𝑥𝑘 + 2∆𝑦 − 2∆𝑥𝑦𝑘 + 2∆𝑥𝑏 − ∆𝑥
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥 ……………………………………………………….………………………(5)
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 𝐶 (𝑊ℎ𝑒𝑟𝑒 𝐶𝑜𝑛𝑠𝑡𝑎𝑛𝑡 𝐶 = 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥) ...................................(6)
The sign of 𝑝𝑘 is the same as the sign of 𝑑1 − 𝑑2, since ∆𝑥 > 0 for our example.
Parameter 𝐶 is a constant, independent of pixel position, and it is eliminated in the recursive calculation for 𝑝𝑘.
Now if 𝑝𝑘 is negative we plot the lower pixel, otherwise we plot the upper pixel.
So we obtain successive decision parameters using incremental integer calculations as:
𝑝𝑘+1 = 2∆𝑦𝑥𝑘+1 − 2∆𝑥𝑦𝑘+1 + 𝐶
Now subtract 𝑝𝑘 from 𝑝𝑘+1:
𝑝𝑘+1 − 𝑝𝑘 = 2∆𝑦𝑥𝑘+1 − 2∆𝑥𝑦𝑘+1 + 𝐶 − 2∆𝑦𝑥𝑘 + 2∆𝑥𝑦𝑘 − 𝐶
𝑝𝑘+1 − 𝑝𝑘 = 2∆𝑦(𝑥𝑘+1 − 𝑥𝑘) − 2∆𝑥(𝑦𝑘+1 − 𝑦𝑘)
But 𝑥𝑘+1 = 𝑥𝑘 + 1, so that (𝑥𝑘+1 − 𝑥𝑘) = 1:
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦 − 2∆𝑥(𝑦𝑘+1 − 𝑦𝑘)
where the term 𝑦𝑘+1 − 𝑦𝑘 is either 0 or 1, depending on the sign of parameter 𝑝𝑘.
This recursive calculation of decision parameters is performed at each integer 𝑥 position, starting at the left coordinate endpoint of the line.
The first decision parameter 𝑝0 is calculated using equation (5); as this is the first step, we need to take the constant part into account, so:
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
Now 𝑆𝑢𝑏𝑠𝑡𝑖𝑡𝑢𝑡𝑒 𝑏 = 𝑦0 – 𝑚𝑥0
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − 𝑚𝑥0 ) − ∆x
Now Substitute 𝑚 = ∆𝑦/𝛥𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − (∆𝑦/∆𝑥)𝑥0) − ∆x
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑦0 − 2∆𝑦𝑥0 − ∆x
𝑝0 = 2∆𝑦 − ∆x
Let’s see Bresenham’s line drawing algorithm for |𝑚| < 1
1. Input the two line endpoints and store the left endpoint in (𝑥0 , 𝑦0 ).
2. Load (𝑥0 , 𝑦0 ) into the frame buffer; that is, plot the first point.
3. Calculate constants ∆𝑥, ∆𝑦, 2∆𝑦, and 2∆𝑦 − 2∆𝑥, and obtain the starting
value for the decision parameter as
𝑝0 = 2∆𝑦 − ∆𝑥
4. At each 𝑥𝑘 along the line, starting at 𝑘 = 0, perform
the following test: If 𝑝𝑘 < 0, the next point to plot is
(𝑥𝑘 + 1, 𝑦𝑘 ) and
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦
Otherwise, the next point to plot is (𝑥𝑘 + 1, 𝑦𝑘 + 1) and
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦 − 2∆𝑥
5. Repeat step-4 ∆𝑥 times.
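A direct transcription of steps 1–5 into C for 0 ≤ m < 1, with the endpoints already ordered left to right; setpixel(x, y) is an assumed frame-buffer routine.

    extern void setpixel(int x, int y);     /* assumed frame-buffer primitive   */

    /* Bresenham's line algorithm for slopes 0 <= m < 1. */
    void bresenhamLine(int x0, int y0, int xEnd, int yEnd)
    {
        int dx = xEnd - x0, dy = yEnd - y0;
        int p = 2 * dy - dx;                /* p0 = 2*dy - dx                   */
        int twoDy = 2 * dy;
        int twoDyMinusDx = 2 * (dy - dx);
        int x = x0, y = y0;

        setpixel(x, y);                     /* plot the first point             */
        while (x < xEnd) {                  /* repeat dx times                  */
            x++;
            if (p < 0) {
                p += twoDy;                 /* next point is (x+1, y)           */
            } else {
                y++;
                p += twoDyMinusDx;          /* next point is (x+1, y+1)         */
            }
            setpixel(x, y);
        }
    }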
Bresenham's algorithm is generalized to lines with arbitrary slope by considering symmetry between the various octants and quadrants of the 𝑥𝑦 plane.
For lines with positive slope greater than 1 we interchange the roles of the 𝑥 and 𝑦 directions.
We can also revise the algorithm to draw lines from the right endpoint to the left endpoint; in that case both 𝑥 and 𝑦 decrease as we step from right to left.
When 𝑑1 − 𝑑2 = 0 we can choose either the lower or the upper pixel, but once we choose the lower pixel we must choose it in all such cases for that line, and likewise if we choose the upper pixel.
For negative slopes the procedure is similar, except that now one coordinate decreases as the other increases.
Special cases are handled separately: a horizontal line (∆𝑦 = 0), a vertical line (∆𝑥 = 0), and a diagonal line with |∆𝑥| = |∆𝑦| can each be loaded directly into the frame buffer without processing them through the line-plotting algorithm.
Circle
A circle is defined as the set of points that are all at a given distance r from a center position say (𝑥𝑐 , 𝑦𝑐 ).
Properties of Circle
The distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as:
(𝑥 − 𝑥𝑐)2 + (𝑦 − 𝑦𝑐)2 = 𝑟2
We could use this equation to calculate circular boundary points by incrementing 𝑥 by 1 at each step from 𝑥𝑐 − 𝑟 to 𝑥𝑐 + 𝑟 and calculating the corresponding 𝑦 values at each position as:
(𝑥 − 𝑥𝑐)2 + (𝑦 − 𝑦𝑐)2 = 𝑟2
(𝑦 − 𝑦𝑐)2 = 𝑟2 − (𝑥 − 𝑥𝑐)2
(𝑦 − 𝑦𝑐 ) = ±√𝑟 2 − (𝑥𝑐 − 𝑥)2
Fig. 2.8: - Positive half of a circle, showing non-uniform spacing between calculated pixel positions.
We can adjust the spacing by stepping through 𝑦 values and calculating 𝑥 values whenever the absolute value of the slope of the circle is greater than 1, but this increases the computation and processing requirement.
Another way to eliminate the non-uniform spacing is to draw the circle using polar coordinates 𝑟 and 𝜃.
The circle boundary in polar coordinates is given by the pair of equations:
𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃
𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃
When the display is produced using these equations with a fixed angular step size, the circle is plotted with uniform spacing.
The step size for 𝜃 is chosen according to the application and the display device.
For a more continuous boundary on a raster display we can set the step size at 1/𝑟. This plots pixel positions that are approximately one unit apart.
Computation can be reduced by considering the symmetry property of circles: the shape of the circle is similar in each quadrant.
We can obtain pixel positions in the second quadrant from the first quadrant using reflection about the 𝑦 axis, and similarly for the third and fourth quadrants from the second and first quadrants respectively.
There is also symmetry between the two octants of a quadrant, about the 45° line dividing the two octants.
This symmetry condition is shown in the figure below, where a point (𝑥, 𝑦) on one circle sector is mapped into the other seven sectors of the circle.
Taking advantage of this symmetry property of the circle, we can generate all pixel positions around the circle by calculating only the points of one octant.
Fig. 2.9: - Symmetry of a circle.
Bresenham's circle algorithm avoids these square-root calculations by comparing the squares of the pixel separation distances.
The midpoint method uses a direct distance comparison: it tests the midpoint between two candidate pixels to determine whether this midpoint is inside or outside the circle boundary.
This method is easily applied to other conics as well.
The midpoint approach generates the same pixel positions as Bresenham's circle algorithm.
The error involved in locating pixel positions along any conic section using the midpoint test is limited to one-half the pixel separation.
The circle boundary is described by 𝒙𝟐 + 𝒚𝟐 − 𝒓𝟐 = 𝟎.
Fig.: - Midpoint between the candidate pixels at (𝑥𝑘 + 1, 𝑦𝑘) and (𝑥𝑘 + 1, 𝑦𝑘 − 1) at sampling position 𝑥𝑘 + 1.
The decision parameter is this circle function evaluated at the midpoint between the two candidate pixels:
𝑝𝑘 = (𝑥𝑘 + 1)2 + (𝑦𝑘 − 1/2)2 − 𝑟2
If 𝑝𝑘 < 0 this midpoint is inside the circle and the pixel on the scan line 𝑦𝑘 is
closer to circle boundary. Otherwise the midpoint is outside or on the
boundary and we select the scan line 𝑦𝑘 − 1.
Algorithm for Midpoint Circle Generation
1. Input radius 𝑟 and circle center (𝑥𝑐 , 𝑦𝑐 ), and obtain the first point on the
circumference of a circle centered on the origin as
(𝑥0 , 𝑦0 ) = (0, 𝑟)
2. Calculate the initial value of the decision parameter as
𝑝0 = 5/4 − 𝑟
3. At each 𝑥𝑘 position, starting at 𝑘 = 0, perform the following test:
If 𝑝𝑘 < 0, the next point along the circle centered on (0, 0) is (𝑥𝑘 + 1, 𝑦𝑘 ) &
𝑝𝑘+1 = 𝑝𝑘 + 2𝑥𝑘+1 + 1
Otherwise, the next point along the circle is (𝑥𝑘 + 1, 𝑦𝑘 − 1) &
𝑝𝑘+1 = 𝑝𝑘 + 2𝑥𝑘+1 + 1 − 2𝑦𝑘+1
Where 2𝑥𝑘+1 = 2𝑥𝑘 + 2, & 2𝑦𝑘+1 = 2𝑦𝑘 − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (𝑥, 𝑦) onto the circular path centered on
(𝑥𝑐 , 𝑦𝑐 ) and plot the coordinate values:
𝑥 = 𝑥 + 𝑥𝑐 , 𝑦 = 𝑦 + 𝑦 𝑐
6. Repeat steps 3 through 5 until 𝑥 ≥ 𝑦.
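The steps above collected into a C sketch; setpixel(x, y) is an assumed frame-buffer routine, and p is started at the integer value 1 − r, the usual integer form of p0 = 5/4 − r.

    extern void setpixel(int x, int y);     /* assumed frame-buffer primitive   */

    static void plotCirclePoints(int xc, int yc, int x, int y)
    {
        /* steps 4 and 5: the seven symmetry points, moved onto the circle
           centred at (xc, yc), plus the calculated point itself            */
        setpixel(xc + x, yc + y);  setpixel(xc - x, yc + y);
        setpixel(xc + x, yc - y);  setpixel(xc - x, yc - y);
        setpixel(xc + y, yc + x);  setpixel(xc - y, yc + x);
        setpixel(xc + y, yc - x);  setpixel(xc - y, yc - x);
    }

    void midpointCircle(int xc, int yc, int r)
    {
        int x = 0, y = r;                   /* first point (0, r)               */
        int p = 1 - r;                      /* integer form of p0 = 5/4 - r     */

        plotCirclePoints(xc, yc, x, y);
        while (x < y) {                     /* repeat until x >= y              */
            x++;
            if (p < 0)
                p += 2 * x + 1;             /* next point (x+1, y)              */
            else {
                y--;
                p += 2 * x + 1 - 2 * y;     /* next point (x+1, y-1)            */
            }
            plotCirclePoints(xc, yc, x, y);
        }
    }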
2D Transformation
Transformation
Changing Position, shape, size, or orientation of an object on display is known as transformation.
Basic Transformation
Basic transformation includes three transformations Translation, Rotation, and Scaling.
These three transformations are known as basic transformation because with
combination of these three transformations we can obtain any transformation.
Translation
Fig.: - Translating a point from position (𝒙, 𝒚) to position (𝒙′, 𝒚′) with translation distances 𝒕𝒙 and 𝒕𝒚.
It is a transformation that repositions an object along a straight-line path from one coordinate location to another.
We translate a point by adding the translation distances 𝒕𝒙 and 𝒕𝒚 to the original coordinates:
𝒙′ = 𝒙 + 𝒕𝒙 , 𝒚′ = 𝒚 + 𝒕𝒚
In matrix form 𝑷′ = 𝑷 + 𝑻, where 𝑻 is the column vector of translation distances (𝒕𝒙, 𝒕𝒚).
Example: - Translate the triangle [A (10, 10), B (15, 15), C (20, 10)] with translation distances 𝑡𝑥 = 2 and 𝑡𝑦 = 1.
Final coordinates after translation are [A' (12, 11), B' (17, 16), C' (22, 11)].
Rotation
It is a transformation that is used to reposition an object along a circular path in the XY plane.
To generate a rotation we specify a rotation angle 𝜽 and the position of the rotation point (pivot point) (𝒙𝒓, 𝒚𝒓) about which the object is to be rotated.
A positive value of the rotation angle defines a counterclockwise rotation and a negative value defines a clockwise rotation.
We first find the equation of rotation when the pivot point is at the coordinate origin (𝟎, 𝟎).
Fig.: - Rotation of a point from (𝒙, 𝒚) to (𝒙′, 𝒚′) through an angle 𝜽 about the origin.
( x′ )   ( cos θ   −sin θ ) ( x )
( y′ ) = ( sin θ    cos θ ) ( y )
Fig. 3.3: - Rotation about a pivot point (𝒙𝒓, 𝒚𝒓).
Transformation equation for rotation of a point about pivot point (𝒙𝒓, 𝒚𝒓 ) is:
𝒙′ = 𝒙𝒓 + (𝒙 − 𝒙𝒓 ) 𝐜𝐨𝐬 𝜽 − (𝒚 − 𝒚𝒓 ) 𝐬𝐢𝐧 𝜽
𝒚′ = 𝒚𝒓 + (𝒙 − 𝒙𝒓) 𝐬𝐢𝐧 𝜽 + (𝒚 − 𝒚𝒓) 𝐜𝐨𝐬 𝜽
These equations differ from rotation about the origin, and the matrix representation is also different.
The matrix equation can be obtained by a simple method that we will discuss later in this chapter.
Rotation is also rigid body transformation so we need to rotate each point of object.
Example: - Locate the new position of the triangle [A (5, 4), B (8, 3), C (8, 8)]
after its rotation by 90o clockwise about the origin.
As rotation is clockwise we will take 𝜃 = −90°.
𝑃′ = 𝑅 ∙ 𝑃
Final coordinates after rotation are [A’ (4, -5), B’ (3, -8), C’ (8, -8)].
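A small C sketch of the pivot-point rotation equations above; with pivot (0, 0) and θ = −90° (about −1.5708 rad) it reproduces the example result. The function name is illustrative only.

    #include <math.h>

    /* Rotate the point (x, y) by angle theta (in radians) about the
       pivot point (xr, yr), using the rotation equations above.       */
    void rotatePoint(double xr, double yr, double theta,
                     double x, double y, double *xNew, double *yNew)
    {
        *xNew = xr + (x - xr) * cos(theta) - (y - yr) * sin(theta);
        *yNew = yr + (x - xr) * sin(theta) + (y - yr) * cos(theta);
    }
    /* e.g. rotatePoint(0, 0, -1.5707963, 5, 4, &x, &y) gives approximately (4, -5) */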
Scaling
It is a transformation that changes the size of an object by multiplying the object's coordinate values by scaling factors 𝒔𝒙 and 𝒔𝒚: 𝒙′ = 𝒙 ∙ 𝒔𝒙 , 𝒚′ = 𝒚 ∙ 𝒔𝒚.
The point that remains unchanged after the scaling is called the fixed point of the transformation.
Example: - Scale the square [A (2, 2), B (6, 2), C (6, 6), D (2, 6)] by 𝑠𝑥 = 0.5 and 𝑠𝑦 = 0.5 with the fixed point at the origin.
Final coordinates after scaling are [A' (1, 1), B' (3, 1), C' (3, 3), D' (1, 3)].
𝑷′ = 𝑻(𝒕𝒙, 𝒕𝒚) ∙ 𝑷
[𝒙′]   [𝟏  𝟎  𝒕𝒙] [𝒙]
[𝒚′] = [𝟎  𝟏  𝒕𝒚] [𝒚]
[𝟏 ]   [𝟎  𝟎  𝟏 ] [𝟏]
𝑷′ = 𝑹(𝜽) ∙ 𝑷
[𝒙′]   [𝐜𝐨𝐬 𝜽  −𝐬𝐢𝐧 𝜽  𝟎] [𝒙]
[𝒚′] = [𝐬𝐢𝐧 𝜽   𝐜𝐨𝐬 𝜽  𝟎] [𝒚]
[𝟏 ]   [𝟎        𝟎      𝟏] [𝟏]
NOTE: - Inverse of rotation matrix is obtained by
replacing 𝜽 by −𝜽.
Scaling
𝑷′ = 𝑺(𝒔𝒙, 𝒔𝒚) ∙ 𝑷
[𝒙′]   [𝒔𝒙  𝟎   𝟎] [𝒙]
[𝒚′] = [𝟎   𝒔𝒚  𝟎] [𝒚]
[𝟏 ]   [𝟎   𝟎   𝟏] [𝟏]
NOTE: - Inverse of scaling matrix is obtained by replacing 𝒔𝒙 & 𝒔𝒚 by 𝟏/𝒔𝒙 & 𝟏/𝒔𝒚.
Composite Transformation
We can set up a matrix for any sequence of transformations as a composite
transformation matrix by calculating the matrix product of individual
transformation.
For column matrix representation of coordinate positions, we form composite
transformations by multiplying matrices in order from right to left.
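A C sketch of how such composite matrices can be built with 3×3 homogeneous matrices; multiplying a·b and applying the result to a column vector corresponds to applying b first and then a, matching the right-to-left order just described. The function names are illustrative only.

    /* result = a * b for 3x3 homogeneous transformation matrices. */
    void matMul3(const double a[3][3], const double b[3][3], double result[3][3])
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                result[i][j] = 0.0;
                for (int k = 0; k < 3; k++)
                    result[i][j] += a[i][k] * b[k][j];
            }
    }

    /* Apply a homogeneous transformation matrix to the point (x, y). */
    void transformPoint(const double m[3][3], double x, double y,
                        double *xNew, double *yNew)
    {
        *xNew = m[0][0] * x + m[0][1] * y + m[0][2];
        *yNew = m[1][0] * x + m[1][1] * y + m[1][2];
    }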
Translations
Two successive translations are performed as:
𝑷′ = 𝑻(𝒕𝒙𝟐, 𝒕𝒚𝟐) ∙ {𝑻(𝒕𝒙𝟏, 𝒕𝒚𝟏) ∙ 𝑷}
𝑷′ = {𝑻(𝒕𝒙𝟐, 𝒕𝒚𝟐) ∙ 𝑻(𝒕𝒙𝟏, 𝒕𝒚𝟏)} ∙ 𝑷
       [𝟏  𝟎  𝒕𝒙𝟐] [𝟏  𝟎  𝒕𝒙𝟏]
𝑷′ = [𝟎  𝟏  𝒕𝒚𝟐] [𝟎  𝟏  𝒕𝒚𝟏] ∙ 𝑷
       [𝟎  𝟎  𝟏  ] [𝟎  𝟎  𝟏  ]
       [𝟏  𝟎  𝒕𝒙𝟏 + 𝒕𝒙𝟐]
𝑷′ = [𝟎  𝟏  𝒕𝒚𝟏 + 𝒕𝒚𝟐] ∙ 𝑷
       [𝟎  𝟎  𝟏          ]
𝑷′ = 𝑻(𝒕𝒙𝟏 + 𝒕𝒙𝟐, 𝒕𝒚𝟏 + 𝒕𝒚𝟐) ∙ 𝑷
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
This concept can be extended for any number of successive translations.
Rotations
Two successive Rotations are performed as:
𝑷′ = 𝑹(𝜽𝟐) ∙ {𝑹(𝜽𝟏) ∙ 𝑷}
𝑷′ = {𝑹(𝜽𝟐) ∙ 𝑹(𝜽𝟏)} ∙ 𝑷
𝑷′ = 𝑹(𝜽𝟏 + 𝜽𝟐) ∙ 𝑷
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
This concept can be extended for any number of successive rotations.
Scalings
Two successive scalings are performed as:
𝑷′ = 𝑺(𝒔𝒙𝟐, 𝒔𝒚𝟐) ∙ {𝑺(𝒔𝒙𝟏, 𝒔𝒚𝟏) ∙ 𝑷}
𝑷′ = {𝑺(𝒔𝒙𝟐, 𝒔𝒚𝟐) ∙ 𝑺(𝒔𝒙𝟏, 𝒔𝒚𝟏)} ∙ 𝑷
       [𝒔𝒙𝟐  𝟎    𝟎] [𝒔𝒙𝟏  𝟎    𝟎]
𝑷′ = [𝟎    𝒔𝒚𝟐  𝟎] [𝟎    𝒔𝒚𝟏  𝟎] ∙ 𝑷
       [𝟎    𝟎    𝟏] [𝟎    𝟎    𝟏]
       [𝒔𝒙𝟏 ∙ 𝒔𝒙𝟐  𝟎          𝟎]
𝑷′ = [𝟎          𝒔𝒚𝟏 ∙ 𝒔𝒚𝟐  𝟎] ∙ 𝑷
       [𝟎          𝟎          𝟏]
𝑷′ = 𝑺(𝒔𝒙𝟏 ∙ 𝒔𝒙𝟐, 𝒔𝒚𝟏 ∙ 𝒔𝒚𝟐) ∙ 𝑷
Here 𝑷′ and 𝑷 are the column vectors of the final and initial point coordinates respectively.
This concept can be extended for any number of successive scalings.
       [6  0  0] [2  8]   [12  48]
𝑃′ = [0  6  0] ∙ [2  8] = [12  48]
       [0  0  1] [1  1]   [1    1 ]
Final coordinates after the scalings are 𝑝′ (12, 12) and 𝑞′ (48, 48).
Fig.: - General fixed-point scaling: (a) original position of object and fixed point, (b) translate object so that the fixed point (𝒙𝒇, 𝒚𝒇) is at the origin, (c) scale object with respect to the origin, (d) translate object so that the fixed point is returned to position (𝒙𝒇, 𝒚𝒇).
𝑷′ = 𝑻(𝒙𝒇, 𝒚𝒇) ∙ 𝑺(𝒔𝒙, 𝒔𝒚) ∙ 𝑻(−𝒙𝒇, −𝒚𝒇) ∙ 𝑷
Example: - Scale the square [A (2, 2), B (6, 2), C (6, 6), D (2, 6)] to half its size, keeping the fixed point (4, 4) unchanged.
As we want half the size, the scale factors are 𝑠𝑥 = 0.5 and 𝑠𝑦 = 0.5, and the coordinates of the square are [A (2, 2), B (6, 2), C (6, 6), D (2, 6)].
𝑃′ = 𝑆(𝑥𝑓 , 𝑦𝑓 , 𝑠𝑥 , 𝑠𝑦 ) ∙ 𝑃
       [𝑠𝑥  0   𝑥𝑓(1 − 𝑠𝑥)] [2  6  6  2]
𝑃′ = [0   𝑠𝑦  𝑦𝑓(1 − 𝑠𝑦)] [2  2  6  6]
       [0   0   1          ] [1  1  1  1]
       [0.5  0    4(1 − 0.5)] [2  6  6  2]
𝑃′ = [0    0.5  4(1 − 0.5)] [2  2  6  6]
       [0    0    1          ] [1  1  1  1]
       [0.5  0    2] [2  6  6  2]
𝑃′ = [0    0.5  2] [2  2  6  6]
       [0    0    1] [1  1  1  1]
       [3  5  5  3]
𝑃′ = [3  3  5  5]
       [1  1  1  1]
Final coordinate after scaling are [A’ (3, 3), B’ (5, 3), C’ (5, 5), D’ (3, 5)]
Other Transformation
Some packages provide a few additional transformations that are useful in certain applications. Two such transformations are reflection and shear.
Reflection
Fig.: - Reflection about the x axis.
This transformation keeps the x values the same, but flips (changes the sign of) the y values of coordinate positions.
1 0 0
[0 −1 0]
0 0 1
Fig. 3.10: - Reflection about the y axis.
This transformation keeps the y values the same, but flips (changes the sign of) the x values of coordinate positions.
−1 0 0
[ 0 1 0]
0 0 1
Fig.: - Reflection about the origin.
−1 0 0
[ 0 −1 0]
0 0 1
Fig.: - Reflection about the line y = x.
The transformation matrix for reflection about the line y = x is:
[0  1  0]
[1  0  0]
[0  0  1]
Fig.: - Reflection about the line y = −x.
0 −1 0
[−1 0 0]
0 0 1
Example: - Find the coordinates after reflection of the triangle [A (10, 10), B
(15, 15), C (20, 10)] about x axis.
       [1  0   0] [10  15  20]
𝑃′ = [0  −1  0] [10  15  10]
       [0  0   1] [1   1   1 ]
       [10   15   20 ]
𝑃′ = [−10  −15  −10]
       [1    1    1  ]
Final coordinate after reflection are [A’ (10, -10), B’ (15, -15), C’ (20, -10)]
Shear
Shear in 𝒙 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏 .
Fig. 3.13: - Shear in the x direction.
Shear relative to 𝑥 − 𝑎𝑥𝑖𝑠 that is 𝑦 = 0 line can be produced by following equation:
𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ 𝒚 , 𝒚′ = 𝒚
Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 𝟎
[𝟎 𝟏 𝟎]
𝟎 𝟎 𝟏
Here 𝒔𝒉𝒙 is shear parameter. We can assign any real value to 𝒔𝒉𝒙.
We can generate 𝑥 − 𝑑𝑖𝑟𝑒𝑐𝑡𝑖𝑜𝑛 shear relative to other reference line 𝑦 = 𝑦𝑟𝑒𝑓 with
following equation:
𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ (𝒚 − 𝒚𝒓𝒆𝒇) , 𝒚′ = 𝒚
Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 −𝒔𝒉𝒙 ∙ 𝒚𝒓𝒆𝒇
[𝟎 𝟏 𝟎 ]
𝟎 𝟎 𝟏
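Before the worked example, a small C sketch of these x-direction shear equations; the function name is illustrative only.

    /* x-direction shear relative to the reference line y = yref:
       x' = x + shx * (y - yref),  y' = y
       (use yref = 0 for shear relative to the x axis)             */
    void shearX(double shx, double yref, double x, double y,
                double *xNew, double *yNew)
    {
        *xNew = x + shx * (y - yref);
        *yNew = y;
    }
    /* e.g. shearX(0.5, -1.0, 0, 0, &x, &y) gives (0.5, 0), as in the example below */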
Example: - Shear the unit square in x direction with shear parameter ½
relative to line 𝑦 = −1. Here 𝑦𝑟𝑒𝑓 = −1 and 𝑠ℎ𝑥 = 0.5
Coordinates of unit square are [A (0, 0), B (1, 0), C (1, 1), D (0, 1)].
       [1  𝑠ℎ𝑥  −𝑠ℎ𝑥 ∙ 𝑦𝑟𝑒𝑓] [0  1  1  0]
𝑃′ = [0  1    0            ] [0  0  1  1]
       [0  0    1            ] [1  1  1  1]
       [1  0.5  −0.5 ∙ (−1)] [0  1  1  0]
𝑃′ = [0  1    0            ] [0  0  1  1]
       [0  0    1            ] [1  1  1  1]
       [1  0.5  0.5] [0  1  1  0]
𝑃′ = [0  1    0  ] [0  0  1  1]
       [0  0    1  ] [1  1  1  1]
       [0.5  1.5  2  1]
𝑃′ = [0    0    1  1]
       [1    1    1  1]
Final coordinates after the shear are [A' (0.5, 0), B' (1.5, 0), C' (2, 1), D' (1, 1)].
Shear in 𝒚 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏
Fig.: - Shear in the y direction.
Shear relative to a reference line 𝑥 = 𝑥𝑟𝑒𝑓 is produced by the equations:
𝒙′ = 𝒙 , 𝒚′ = 𝒚 + 𝒔𝒉𝒚 ∙ (𝒙 − 𝒙𝒓𝒆𝒇)
Example: - Shear the unit square [A (0, 0), B (1, 0), C (1, 1), D (0, 1)] in the y direction with shear parameter 𝑠ℎ𝑦 = 0.5 relative to the line 𝑥 = −1.
       [1    0  0  ] [0  1  1  0]
𝑃′ = [0.5  1  0.5] [0  0  1  1]
       [0    0  1  ] [1  1  1  1]
       [0    1  1  0  ]
𝑃′ = [0.5  1  2  1.5]
       [1    1  1  1  ]
Final coordinates after the shear are [A' (0, 0.5), B' (1, 1), C' (1, 2), D' (0, 1.5)].
Module:02
Fig. 3.1: - A viewing transformation using standard rectangles for the window and viewport.
Now we see steps involved in viewing pipeline.
MC → WC → VC → NVC → DC: construct the world-coordinate scene using modeling-coordinate transformations, convert world coordinates to viewing coordinates, map viewing coordinates to normalized viewing coordinates using the window-viewport specification, and finally map the normalized viewport to device coordinates.
Fig. 3.2: - 2D viewing pipeline.
As shown in the figure above, first of all we construct the world-coordinate scene using modeling-coordinate transformations.
After this we convert world coordinates to viewing coordinates.
Then we map viewing coordinates to normalized viewing coordinates, in which we obtain values between 0 and 1.
At last we convert normalized viewing coordinates to device coordinates using device-driver software which provides the device specification.
Finally the device coordinates are used to display the image on the display screen.
By changing the viewport position on the screen we can see the image at different places on the screen.
By changing the size of the window and the viewport we can obtain zoom-in and zoom-out effects as required.
A fixed-size viewport with a small window gives a zoom-in effect, and a fixed-size viewport with a larger window gives a zoom-out effect.
Viewports are generally defined within the unit square so that the graphics package is more device-independent; these are called normalized viewing coordinates.
Viewing Coordinate Reference Frame
Fig. 3.3: - A viewing-coordinate frame is moved into coincidence with the world frame in
two steps: (a) translate the viewing origin to the world origin, and then (b) rotate to align
the axes of the two systems.
We can obtain reference frame in any direction and at any position.
For handling such condition first of all we translate reference frame origin to standard
reference frame origin and then we rotate it to align it to standard axis.
In this way we can adjust window in any reference frame.
This is illustrated by the following transformation matrix:
𝑀𝑤𝑐,𝑣𝑐 = 𝑅 ∙ 𝑇
Where T is translation matrix and R is rotation matrix.
Window-To-Viewport Coordinate Transformation
Mapping of window coordinate to viewport is called window to viewport transformation.
We do this using transformation that maintains relative position of window coordinate into
viewport.
That means center coordinates in window must be remains at center position in viewport.
We find the relative position by the following equations:
(𝐱𝐯 − 𝐱𝐯𝐦𝐢𝐧) / (𝐱𝐯𝐦𝐚𝐱 − 𝐱𝐯𝐦𝐢𝐧) = (𝐱𝐰 − 𝐱𝐰𝐦𝐢𝐧) / (𝐱𝐰𝐦𝐚𝐱 − 𝐱𝐰𝐦𝐢𝐧)
(𝐲𝐯 − 𝐲𝐯𝐦𝐢𝐧) / (𝐲𝐯𝐦𝐚𝐱 − 𝐲𝐯𝐦𝐢𝐧) = (𝐲𝐰 − 𝐲𝐰𝐦𝐢𝐧) / (𝐲𝐰𝐦𝐚𝐱 − 𝐲𝐰𝐦𝐢𝐧)
Solving by making viewport position as subject we obtain:
𝐱 𝐯 = 𝐱 𝐯𝐦𝐢𝐧 + (𝐱 𝐰 − 𝐱 𝐰𝐦𝐢𝐧 )𝐬𝐱
𝐲𝐯 = 𝐲𝐯𝐦𝐢𝐧 + (𝐲𝐰 − 𝐲𝐰𝐦𝐢𝐧 )𝐬𝐲
Where the scaling factors are:
𝐬𝐱 = (𝐱𝐯𝐦𝐚𝐱 − 𝐱𝐯𝐦𝐢𝐧) / (𝐱𝐰𝐦𝐚𝐱 − 𝐱𝐰𝐦𝐢𝐧)
𝐬𝐲 = (𝐲𝐯𝐦𝐚𝐱 − 𝐲𝐯𝐦𝐢𝐧) / (𝐲𝐰𝐦𝐚𝐱 − 𝐲𝐰𝐦𝐢𝐧)
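The mapping equations above collected into a small C routine (a sketch; the parameter names simply mirror the window and viewport extents used here).

    /* Map a world point (xw, yw) inside the window to viewport coordinates,
       using the scaling factors sx and sy derived above.                     */
    void windowToViewport(double xwmin, double xwmax, double ywmin, double ywmax,
                          double xvmin, double xvmax, double yvmin, double yvmax,
                          double xw, double yw, double *xv, double *yv)
    {
        double sx = (xvmax - xvmin) / (xwmax - xwmin);
        double sy = (yvmax - yvmin) / (ywmax - ywmin);
        *xv = xvmin + (xw - xwmin) * sx;
        *yv = yvmin + (yw - ywmin) * sy;
    }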
We can also map window to viewport with the set of transformation, which include
following sequence of transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
To maintain relative proportions we take sx = sy. If they are not equal, the picture is stretched or contracted in either the x or the y direction when displayed on the output device.
Characters are handled in two different ways: one way is simply to maintain relative position like other primitives, and the other is to maintain a standard character size even when the viewport is enlarged or reduced.
A number of display devices can be used in an application, and for each we can use a different window-to-viewport transformation. This mapping is called the workstation transformation.
Point Clipping
A point (𝒙, 𝒚) is saved for display if it satisfies the following inequalities:
𝒙𝒘𝒎𝒊𝒏 ≤ 𝒙 ≤ 𝒙𝒘𝒎𝒂𝒙
𝒚𝒘𝒎𝒊𝒏 ≤ 𝒚 ≤ 𝒚𝒘𝒎𝒂𝒙
If both inequalities are satisfied then the point is inside; otherwise the point is outside the clipping window.
Line Clipping
Line clipping involves several possible cases.
1. Completely inside the clipping window.
2. Completely outside the clipping window.
3. Partially inside and partially outside the clipping window.
Fig. 3.5: - Line clipping against a rectangular window.
A line which is completely inside is displayed completely. A line which is completely outside is eliminated from the display. For a partially inside line we need to calculate the intersections with the window boundary and find which part is inside the clipping boundary and which part is eliminated.
For line clipping several approaches have been developed to solve this clipping procedure; some of them are discussed below.
Cohen-Sutherland Line Clipping
This is one of the oldest and most popular line-clipping procedures.
Algorithm
Step-1:
Assign region code to both endpoint of a line depending on the position where the line endpoint is
located.
Step-2:
If both endpoints have the code '0000',
then the line is completely inside.
Otherwise,
perform a logical AND operation between the two region codes; if the result is not '0000' the line is completely outside the clipping window, and if the result is '0000' the line is partially inside and must be clipped against the window boundaries.
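A C sketch of steps 1 and 2: computing the 4-bit region code for an endpoint and performing the two trivial tests. The bit assignment used here (top, bottom, right, left) is one common convention; the helper names are assumptions, not a standard API.

    /* Region-code bits (one common assignment). */
    #define TOP    8   /* 1000 */
    #define BOTTOM 4   /* 0100 */
    #define RIGHT  2   /* 0010 */
    #define LEFT   1   /* 0001 */

    int regionCode(double x, double y,
                   double xwmin, double xwmax, double ywmin, double ywmax)
    {
        int code = 0;                          /* 0000 means inside the window  */
        if (x < xwmin) code |= LEFT;
        else if (x > xwmax) code |= RIGHT;
        if (y < ywmin) code |= BOTTOM;
        else if (y > ywmax) code |= TOP;
        return code;
    }

    /* Trivial tests on a line with endpoint codes code1, code2:
       returns 1 = completely inside, -1 = completely outside, 0 = must be clipped. */
    int trivialTest(int code1, int code2)
    {
        if ((code1 | code2) == 0) return 1;    /* both codes are 0000           */
        if ((code1 & code2) != 0) return -1;   /* logical AND is non-zero       */
        return 0;
    }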
Fig. 3.12: - Processing the vertices of the polygon through boundary clipper.
There are four possible cases when processing vertices in sequence around the perimeter of a
polygon.
Fig. 3.13: - Clipping a polygon against successive window boundaries.
As shown in case 1: if both vertices are inside the window, we add only the second vertex to the output list.
In case 2: if the first vertex is inside the boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
In case 3: if both vertices are outside the window boundary, nothing is added to the output list.
In case 4: if the first vertex is outside and the second vertex is inside the boundary, then we add both the intersection point with the window boundary and the second vertex to the output list.
When polygon clipping is done against one boundary then we clip against next window boundary.
We illustrate this method by simple example.
Fig.: - Example of clipping a polygon (vertices v1 to v6) against a window; primed labels mark the computed intersection points.
As shown in figure we start from v1 and move clockwise towards v2 and add intersection
point and next point to output list by following polygon boundary, then from v2 to v3 we
add v3 to output list.
From v3 to v4 we calculate intersection point and add to output list and follow window boundary.
Similarly from v4 to v5 we add intersection point and next point and follow the polygon
boundary, next we move v5 to v6 and add intersection point and follow the window
boundary, and finally v6 to v1 is outside so no need to add anything.
This way we get two separate polygon section after clipping.
ALIASING
Aliasing is a effect of displaying a high resolution image in a low
resolution display. The aliasing effect are also called artifact or distortion.
The following effect may occur due to aliasing:
a) Jagged Profile: it is especially
noticed where there is a high
contrast between the interior and
exterior of the object.
b) Picket Fence Problem: it occurs
when a user attempts to scan
convert an object that will not fit
exactly in the raster.
c) Staircase Artifact: in a low-resolution display the slanted lines may have unequal brightness, because the distance between two diagonal (corner) neighbour pixels is about 1.4, whereas the distance between two horizontal or vertical neighbours is 1.
ANTIALIASING
The aliasing effect can be reduced by adjusting the intensities of the pixels along the line. The process of adjusting intensities of the pixels along the line to minimize the effect of aliasing is called antialiasing.
Methods of antialiasing:
I. Increasing Resolution
II. Unweighted Area Sampling
III. Weighted Area Sampling
I. INCREASING RESOLUTION:
Aliasing can be minimized by increasing the resolution of the raster display. By increasing the resolution and making it twice the original, the line passes through twice as many columns of pixels and therefore has twice as many jags, but each jag is half as large in the x and y directions.
II. UNWEIGHTED AREA SAMPLING:
In unweighted area sampling, the intensity of a pixel is proportional to the amount of line area occupied by the pixel. This technique produces noticeably better results than full/zero intensity pixels.
III. WEIGHTED AREA SAMPLING:
Equal areas contribute unequally, i.e. a small area closer to the pixel center has greater intensity than one at a greater distance. Here the intensity depends on the area occupied as well as the distance from the pixel's center.
HALFTONING:
It is a technique for obtaining increased visual resolution with a minimum number of intensity levels. It decreases the overall resolution of the image and is more suitable where the resolution of the original image is less than that of the output device but the image has more intensity levels than the output device.
THRESHOLDING:
Halftoning results in a loss of spatial resolution, which is visible when displaying a low-resolution image. To improve this, thresholding is used. In thresholding the output image has the same size as the original image, but has only two intensity levels:
If I(x, y) ≥ T then I1(x, y) = white, else I1(x, y) = black,
where I is the original image and I1 is the thresholded image.
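A minimal C sketch of this thresholding rule for a grayscale image stored as a flat array; the 0–255 intensity range and the function name are assumptions for illustration.

    /* Threshold a grayscale image I of size w x h into a two-level image I1.
       Pixels at or above the threshold T become white (255), others black (0). */
    void thresholdImage(const unsigned char *I, unsigned char *I1,
                        int w, int h, unsigned char T)
    {
        for (int i = 0; i < w * h; i++)
            I1[i] = (I[i] >= T) ? 255 : 0;
    }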
DITHERING:
Dithering is the process by which we create the illusion of colors that are not actually present. It is done by a patterned or random arrangement of pixels. It is a digital halftoning process used to approximate a color that cannot be displayed with the uniform dots of the display device.
The classical dithering algorithms are:
Filled-Area Primitives
In practice we often use polygons which are filled with some color or pattern inside.
There are two basic approaches to area filling on raster systems.
One way to fill an area is to determine the overlap intervals for scan lines that cross the area.
Another method is to start from a given interior position and paint outwards from this point until we encounter the boundary.
Scan-Line Polygon Fill Algorithm
Figure below shows the procedure for scan-line filling algorithm.
Fig. 2.17: - Interior pixels along a scan line passing through a polygon area.
For each scan line crossing a polygon, the algorithm locates the intersection points of the scan line with the polygon edges.
These intersection points are sorted from left to right.
Frame-buffer positions between each pair of intersection points are set to the specified fill color.
Scan lines that intersect at a vertex position require special handling.
For a vertex we must look at the other endpoints of the two line segments of the polygon which meet at this vertex.
If these points lie on the same side (up or down) of the scan line, then that vertex counts as two intersection points.
If they lie on opposite sides of the scan line, then the vertex is counted as a single intersection.
This is illustrated in figure below
Fig. 2.18: - Intersection points along the scan line that intersect polygon vertices.
As shown in the Fig. 2.18, each scan line intersects the vertex or vertices of the polygon.
For scan line 1, the other end points (B and D) of the two line segments of the polygon lie
on the same side of the scan
line, hence there are two intersections resulting two pairs: 1 -2 and 3 - 4. Intersections points
2 and 3 are actually same Points. For scan line 2 the other endpoints (D and F) of the two
line segments of the Polygon lie on the opposite sides of the scan line, hence there is a
single intersection resulting two pairs: l - 2 and 3 - 4. For scan line 3, two vertices are the
intersection points"
For vertex F the other end points E and G of the two line segments of the polygon lie on
the same side of the scan line whereas for vertex H, the other endpoints G and I of the
two line segments of the polygon lie on the opposite side of the scan line. Therefore, at
vertex F there are two intersections and at vertex H there is only one intersection. This
results two pairs: 1 - 2 and 3 - 4 and points 2 and 3 are actually same points.
Coherence methods often involve incremental calculations applied along a single scan line or
between successive scan lines.
In determining edge intersections, we can set up incremental coordinate calculations along
any edge by exploiting the fact that the slope of the edge is constant from one scan line to
the next.
Figure below shows three successive scan-lines crossing the left edge of polygon.
For the above figure we can write the slope equation for the polygon boundary as follows:
𝑚 = (𝑦𝑘+1 − 𝑦𝑘) / (𝑥𝑘+1 − 𝑥𝑘)
Since the change in 𝑦 coordinates between the two scan lines is simply
𝑦𝑘+1 − 𝑦𝑘 = 1
the slope equation can be modified as follows:
𝑚 = 1 / (𝑥𝑘+1 − 𝑥𝑘)
𝑥𝑘+1 − 𝑥𝑘 = 1/𝑚
𝑥𝑘+1 = 𝑥𝑘 + 1/𝑚
Each successive 𝑥 intercept can thus be calculated by adding the inverse of the slope and rounding to the nearest integer.
For parallel execution of this algorithm we assign each scan line to a separate processor; in that case, instead of using the previous 𝑥 value for the calculation, we use the initial 𝑥 value, with the equation:
𝑥𝑘 = 𝑥0 + 𝑘/𝑚
Using this equation we can perform integer evaluation of the 𝑥 intercept by initializing a counter to 0, then incrementing the counter by the value of ∆𝑥 each time we move up to a new scan line.
When the counter value becomes equal to or greater than ∆𝑦, we increment the current 𝑥 intersection value by 1 and decrease the counter by the value ∆𝑦.
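A C sketch of this counter-based integer update for one polygon edge, assuming the edge's ∆x and ∆y (with ∆y > 0) have already been set up; the function name is illustrative.

    /* Advance the x intercept of one edge by one scan line, using only
       integer arithmetic, as described above.                            */
    void advanceEdge(int *xIntercept, int *counter, int dx, int dy)
    {
        *counter += dx;                 /* add dx each time we move up a scan line */
        while (*counter >= dy) {        /* counter reached dy: move the intercept  */
            (*xIntercept)++;
            *counter -= dy;
        }
    }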
This procedure is illustrated by the following figure.
Fig. 2.19: - Line with slope 7/3 and its integer calculation using the equation 𝑥𝑘+1 = 𝑥𝑘 + ∆𝑥/∆𝑦.
Steps for the above procedure (suppose m = 7/3, so ∆𝑥 = 3 and ∆𝑦 = 7):
1. Initially, set the counter to 0 and the increment to 3 (which is ∆𝑥).
2. Moving to successive scan lines, the counter becomes 3, then 6, then 9.
3. Since 9 is greater than ∆𝑦 = 7, we increment the 𝑥 intercept (in other words, the 𝑥 intercept for this scan line is one more than for the previous scan line) and decrease the counter by 7, leaving 2.
4. The process then continues in the same way for the remaining scan lines.
Fig. 2.20: - A polygon and its sorted edge table.
Each entry in the table for a particular scan line contains the maximum 𝑦 value for that edge, the 𝑥-intercept value for the edge, and the inverse slope of the edge.
For each scan line the edges are in sorted order from left to right.
Then we process the scan lines from the bottom to the top of the polygon and produce an active edge list for each scan line crossing the polygon boundaries.
The active edge list for a scan line contains all edges crossed by that scan line, with iterative coherence calculations used to obtain the edge intersections.
Inside-Outside Tests
In area filling and other graphics operations it is often required to find whether a particular point is inside or outside the polygon.
For finding which region is inside and which is outside, most graphics packages use either the odd-even rule or the nonzero winding-number rule.
The odd-even rule is also called the odd-parity rule or the even-odd rule: we conceptually draw a line from the position 𝑝 in question to a distant point outside the coordinate extents of the object and count the edge crossings along the line; an odd count means 𝑝 is an interior point.
To obtain an accurate edge count we must make sure that the selected line does not pass through any vertices.
This is shown in figure 2.21(a).
For the nonzero winding-number rule, one way to determine a directional edge crossing is to take the vector cross product of a vector 𝑈 along the line from 𝑝 to the distant point with the edge vector 𝐸 for each edge crossed; if the cross product is positive the edge crosses from right to left and we add 1 to the winding number, otherwise the edge crosses from left to right and we subtract 1 from the winding number.
This is shown in figure 2.21(b).
Fig. 2.21: - Identifying interior and exterior regions for a self-intersecting polygon.
Fig. 2.24: - Boundary fill across pixel spans for a 4-connected area.
Flood-Fill Algorithm
Sometimes it is required to fill in an area that is not defined within a single color boundary.
In such cases we can fill areas by replacing a specified interior color instead of searching
for a boundary color.
This approach is called a flood-fill algorithm. Like boundary fill algorithm, here we start
with some seed and examine the neighbouring pixels.
However, here pixels are checked for a specified interior color instead of boundary color
and they are replaced by new color.
Using either a 4-connected or 8-connected approach, we can step through pixel positions
until all interior point have been filled.
The following procedure illustrates the recursive method for filling 4-connected region
using flood-fill algorithm.
Procedure:
    void flood_fill4(int x, int y, int new_color, int old_color)
    {
        if (getpixel(x, y) == old_color)
        {
            putpixel(x, y, new_color);                      /* recolor the current pixel */
            flood_fill4(x + 1, y, new_color, old_color);    /* right neighbour  */
            flood_fill4(x, y + 1, new_color, old_color);    /* upper neighbour  */
            flood_fill4(x - 1, y, new_color, old_color);    /* left neighbour   */
            flood_fill4(x, y - 1, new_color, old_color);    /* lower neighbour  */
        }
    }
Note: 'getpixel' function gives the color of .specified pixel and 'putpixel' function draws the
pixel with specified color.
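The recursive routine can exhaust the call stack on large interior regions. A common alternative (not part of these notes) keeps an explicit stack of pixel coordinates; a sketch using the same getpixel/putpixel primitives, which, like the recursive version, assumes getpixel returns something other than old_color outside the drawable area:

    #define STACK_MAX 100000

    extern int getpixel(int x, int y);
    extern void putpixel(int x, int y, int color);

    void flood_fill4_iterative(int x, int y, int new_color, int old_color)
    {
        static int sx[STACK_MAX], sy[STACK_MAX];
        int top = 0;
        sx[top] = x; sy[top] = y; top++;                 /* push the seed pixel */

        while (top > 0) {
            top--;
            x = sx[top]; y = sy[top];                    /* pop the next pixel  */
            if (getpixel(x, y) != old_color) continue;
            putpixel(x, y, new_color);
            if (top + 4 <= STACK_MAX) {                  /* push 4-connected neighbours */
                sx[top] = x + 1; sy[top] = y;     top++;
                sx[top] = x - 1; sy[top] = y;     top++;
                sx[top] = x;     sy[top] = y + 1; top++;
                sx[top] = x;     sy[top] = y - 1; top++;
            }
        }
    }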
Spline.
Fig. 4.9: - Interpolation spline.
Fig. 4.10: - Approximation spline.
Approximation Spline: - When curve section follows general control point path without
necessarily passing through any control point, the resulting curve is said to approximate
the set of control points and that curve is known as Approximation Spline.
Spline curve can be modified by selecting different control point position.
We can apply transformation on the curve according to need like translation scaling etc.
The convex polygon boundary that encloses a set of control points is called convex hull.
Fig. 4.11: -convex hull shapes for two sets of control points.
A poly line connecting the sequence of control points for an approximation spline is
usually displayed to remind a designer of the control point ordering. This set of connected
line segment is often referred as control graph of the curve.
Control graph is also referred as control polygon or characteristic polygon.
Fig. 4.12: -Control-graph shapes for two different sets of control points.
Fig. 4.13: - Piecewise construction of a curve by joining two curve segments uses different
orders of continuity: (a) zero-order continuity only, (b) first-order continuity, and (c) second-
order continuity.
First-order continuity is often sufficient for general applications, but some graphics applications such as CAD require second-order continuity for accuracy.
We say a curve has C2 continuity when the first and second parametric derivatives of adjacent curve sections are the same at the control points.
A natural cubic spline is a mathematical representation of the original drafting spline.
For n+1 control points we have n curve sections and 4n polynomial coefficients to determine.
For all interior control points we have four boundary conditions: the two curve sections on either side of a control point must have the same first and second order derivatives at that control point, and each curve section passes through that control point.
We get two more conditions from 𝑝0 (the first control point) and 𝑝𝑛 (the last control point), which are the endpoints of the curve.
One way to obtain the remaining two conditions is to add one extra dummy point at each end, i.e. we add 𝑝−1 and 𝑝𝑛+1, so that all the original control points become interior points with the required boundary conditions.
Each cubic curve section can be written as 𝑝(𝑢) = 𝑎𝑢3 + 𝑏𝑢2 + 𝑐𝑢 + 𝑑.
Now the derivative of 𝑝(𝑢) is 𝑝′(𝑢) = 3𝑎𝑢2 + 2𝑏𝑢 + 𝑐.
The matrix form of 𝑝′(𝑢) is:
𝑝′(𝑢) = [3𝑢2  2𝑢  1  0] ∙ [𝑎  𝑏  𝑐  𝑑]T
Now substitute end point value of u as 0 & 1 in above equation & combine all four
parametric equations
in matrix form:
[𝑝𝑘   ]   [0  0  0  1] [𝑎]
[𝑝𝑘+1 ] = [1  1  1  1] [𝑏]
[𝑑𝑝𝑘  ]   [0  0  1  0] [𝑐]
[𝑑𝑝𝑘+1]   [3  2  1  0] [𝑑]
Now solving it for the polynomial coefficients:
[𝑎]   [ 2  −2   1   1] [𝑝𝑘   ]          [𝑝𝑘   ]
[𝑏] = [−3   3  −2  −1] [𝑝𝑘+1 ] = 𝑀𝐻 ∙ [𝑝𝑘+1 ]
[𝑐]   [ 0   0   1   0] [𝑑𝑝𝑘  ]          [𝑑𝑝𝑘  ]
[𝑑]   [ 1   0   0   0] [𝑑𝑝𝑘+1]          [𝑑𝑝𝑘+1]
Now put the value of the above equation into the equation for 𝑝(𝑢):
              [ 2  −2   1   1] [𝑝𝑘   ]
𝑝(𝑢) = [𝑢3  𝑢2  𝑢  1] [−3   3  −2  −1] [𝑝𝑘+1 ]
              [ 0   0   1   0] [𝑑𝑝𝑘  ]
              [ 1   0   0   0] [𝑑𝑝𝑘+1]
Bezier Curves
A Bezier curve section can be fitted to any number of control points.
The number of control points and their relative positions determine the degree of the Bezier
polynomial.
As with interpolation splines, a Bezier curve can be specified with boundary conditions
or with blending functions.
The most convenient method is to specify a Bezier curve with blending functions.
Consider that we are given n+1 control point positions from p0 to pn, where pk = (xk, yk, zk).
These points are blended to give the position vector p(u), which describes the path of the
approximating Bezier curve:

$$p(u) = \sum_{k=0}^{n} p_k \, BEZ_{k,n}(u), \qquad 0 \le u \le 1$$

The Bezier blending functions always sum to one,

$$\sum_{k=0}^{n} BEZ_{k,n}(u) = 1$$

so any curve position is simply the weighted sum of the control point positions.
Bezier curve smoothly follows the control points without erratic oscillations.
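The blending-function form translates almost directly into code. The sketch below (a minimal illustration, assuming NumPy; the control points are arbitrary sample values) evaluates a Bezier curve of any degree using the Bernstein blending functions BEZ_{k,n}(u) = C(n,k) u^k (1-u)^(n-k).

```python
import numpy as np
from math import comb

def bezier_point(control_points, u):
    """Weighted sum of control points with Bernstein blending functions."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1                       # degree of the Bezier polynomial
    blend = np.array([comb(n, k) * u**k * (1 - u)**(n - k) for k in range(n + 1)])
    return blend @ P                     # sum_k BEZ_{k,n}(u) * p_k

# four sample control points (a cubic Bezier curve)
pts = [(0, 0), (1, 2), (3, 3), (4, 0)]
curve = [bezier_point(pts, u) for u in np.linspace(0.0, 1.0, 11)]
print(curve[0], curve[-1])   # the curve passes through the first and last control points
```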
Fig. 4.21: -A closed Bezier Curve generated by specifying the first and last
control points at the same location.
If we specify multiple control points at the same position, that position gets more weight
and the curve is pulled towards it.
Fig. 4.22: -A Bezier curve can be made to pass closer to a given coordinate position by
assigning multiple control points at that position.
A Bezier curve can be fitted to any number of control points, but this requires
higher-order polynomial calculations.
A complicated Bezier curve can instead be generated by dividing the whole curve into several
lower-order polynomial curve sections, which gives better control over the shape of each
small region.
Since a Bezier curve passes through its first and last control points, it is easy to join
two curve sections with zero-order parametric continuity (C0).
For first-order continuity we place the end point of the first curve and the start point of the
second curve at the same position, and require the last two control points of the first curve
and the first two control points of the second curve to be collinear. The second control point
of the second curve is then at the position
p_n + (p_n − p_{n−1})
so that the control points on either side of the joint are equally spaced.
Fig. 4.23: -Zero and first order continuous curve by putting control point at
proper place.
Similarly, for second-order continuity the third control point of the second curve is placed,
in terms of the positions of the last three control points of the first curve section, at
p_{n−2} + 4(p_n − p_{n−1})
C2 continuity can be unnecessarily restrictive, especially for cubic curves, since it leaves
only one control point for adjusting the shape of the curve.
For a cubic Bezier curve (four control points) the matrix form is:

$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \cdot M_{BEZ} \cdot \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix}$$

where

$$M_{BEZ} = \begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$
We can add additional parameter like tension and bias as we did with the
interpolating spline.
Bezier Surfaces
Two sets of orthogonal Bezier curves can be used to design an object surface from
an input mesh of control points.
By taking the Cartesian product of the Bezier blending functions we obtain the parametric vector
function:

$$p(u,v) = \sum_{j=0}^{m} \sum_{k=0}^{n} p_{j,k} \, BEZ_{j,m}(v) \, BEZ_{k,n}(u)$$
Fig. 4.25: -Bezier surfaces constructed for (a) m=3, n=3, and (b) m=4, n=4.
Dashed line connects the control points.
Each curve of constant u is plotted by varying v over the interval 0 to 1, and
similarly we can plot curves of constant v.
Bezier surfaces have the same properties as Bezier curves, so they can be used in
interactive design applications.
For each surface patch we first select a mesh of control points in the XY plane and then select
the elevations in the Z direction.
We can put two or more surface patches together to form the required surface, using a
method similar to joining curve sections, with C0, C1, or C2 continuity as needed.
B-Spline Curves
The general expression for a B-Spline curve in terms of blending functions is:

$$p(u) = \sum_{k=0}^{n} p_k \, B_{k,d}(u)$$

The matrix formulation for a periodic cubic B-Spline section with four control points is:

$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \cdot M_B \cdot \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix}$$

Where

$$M_B = \frac{1}{6}\begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \end{bmatrix}$$
We can also modify the B-Spline equation to include a tension parameter t.
The periodic cubic B-Spline with tension matrix then has the form:
$$M_{Bt} = \frac{1}{6}\begin{bmatrix} -t & 12-9t & 9t-12 & t \\ 3t & 12t-18 & 18-15t & 0 \\ -3t & 0 & 3t & 0 \\ t & 6-2t & t & 0 \end{bmatrix}$$

When t = 1, M_{Bt} = M_B.
We can obtain the cubic B-Spline blending functions for the parametric range 0 to 1 by
converting the matrix representation into polynomial form. For t = 1 we have:
$$B_{0,3}(u) = \frac{1}{6}(1-u)^3$$

$$B_{1,3}(u) = \frac{1}{6}(3u^3 - 6u^2 + 4)$$

$$B_{2,3}(u) = \frac{1}{6}(-3u^3 + 3u^2 + 3u + 1)$$

$$B_{3,3}(u) = \frac{1}{6}u^3$$
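These four blending polynomials can be checked numerically. The sketch below (a small illustration, assuming NumPy; the four control points are sample values) evaluates one periodic cubic B-Spline section both from the blending functions and from the matrix form with M_B, and the two results agree.

```python
import numpy as np

def bspline_blend(u):
    """Cubic B-Spline blending functions B0,3 .. B3,3 for 0 <= u <= 1."""
    return np.array([
        (1 - u)**3 / 6.0,
        (3*u**3 - 6*u**2 + 4) / 6.0,
        (-3*u**3 + 3*u**2 + 3*u + 1) / 6.0,
        u**3 / 6.0,
    ])

M_B = (1.0 / 6.0) * np.array([[-1,  3, -3, 1],
                              [ 3, -6,  3, 0],
                              [-3,  0,  3, 0],
                              [ 1,  4,  1, 0]], dtype=float)

# sample control points for one curve section
P = np.array([(0, 0), (1, 2), (2, 2), (3, 0)], dtype=float)

u = 0.4
from_blending = bspline_blend(u) @ P
from_matrix = np.array([u**3, u**2, u, 1.0]) @ M_B @ P
print(np.allclose(from_blending, from_matrix))   # True: both forms give the same point
```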
B-Spline Surfaces
B-Spline surface formation is similar to Bezier surfaces: an orthogonal set of
curves is used, and to connect two surfaces we use the same method that is
used for Bezier surfaces.
The vector equation of a B-Spline surface is given by the Cartesian product of B-Spline
blending functions:

$$p(u,v) = \sum_{k_1=0}^{n_1} \sum_{k_2=0}^{n_2} p_{k_1,k_2} \, B_{k_1,d_1}(u) \, B_{k_2,d_2}(v)$$
Generation of Fractals
Fractals can be generated by repeating the same shape over and over again, as shown in the following
figure. Figure (a) shows an equilateral triangle. In figure (b), we can see that the triangle is repeated to
create a star-like shape. In figure (c), we can see that the star shape in figure (b) is repeated again and
again to create a new shape.
We can do an unlimited number of iterations to create a desired shape. In programming terms, recursion is
used to create such shapes.
Geometric Fractals
Geometric fractals deal with shapes found in nature that have non-integer or fractal dimensions. To
geometrically construct a deterministic (nonrandom) self-similar fractal, we start with a given
geometric shape, called the initiator. Subparts of the initiator are then replaced with a pattern, called
the generator.
As an example, if we use the initiator and generator shown in the above figure, we can construct a
pattern by repeating the replacement. Each straight-line segment in the initiator is replaced with four
equal-length line segments at each step. The scaling factor is 1/3, so the fractal dimension is
D = ln 4 / ln 3 ≈ 1.2619.
Also, the length of each line segment increases by a factor of 4/3 at each step, so the
length of the fractal curve tends to infinity as more detail is added to the curve, as shown in the
following figure.
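The generator replacement described above is naturally coded as a recursion. The sketch below (a minimal illustration using only the standard library; the start and end points are made-up values) subdivides every segment of a Koch-style initiator into four segments of one-third length.

```python
import math

def koch(p1, p2, depth):
    """Return the list of points of a Koch curve between p1 and p2."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)                      # 1/3 of the way along the segment
    b = (x1 + 2*dx, y1 + 2*dy)                  # 2/3 of the way along the segment
    # apex of the equilateral bump raised on the middle third
    ang = math.atan2(dy, dx) + math.pi / 3.0
    seg = math.hypot(dx, dy)
    c = (a[0] + seg*math.cos(ang), a[1] + seg*math.sin(ang))
    # recurse on the four replacement segments and join the results
    pts = []
    for q1, q2 in [(p1, a), (a, c), (c, b), (b, p2)]:
        part = koch(q1, q2, depth - 1)
        pts.extend(part[:-1])                   # avoid duplicating shared points
    pts.append(p2)
    return pts

curve = koch((0.0, 0.0), (1.0, 0.0), depth=3)
print(len(curve))        # 4^3 = 64 segments -> 65 points
```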
Module:03
Similar to 2D translation, which used 3x3 matrices, 3D translation uses 4x4 matrices (X, Y, Z, h).
In 3D translation a point (X, Y, Z) is translated by amounts tx, ty and tz to the location (X', Y', Z'):
x' = x + tx
y' = y + ty
z' = z + tz
P' = T · P
Let's see the matrix equation:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Example : - Translate the given point P(10, 10, 10) in 3D space with translation factor T(10, 20, 5).

P' = T · P

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 10 \\ 10 \\ 10 \\ 1 \end{bmatrix} = \begin{bmatrix} 20 \\ 30 \\ 15 \\ 1 \end{bmatrix}$$

The final coordinate after translation is P'(20, 30, 15).
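The same worked example can be reproduced in a few lines. This sketch (assuming NumPy; it simply re-runs the numbers from the example above) builds the homogeneous 4x4 translation matrix and applies it to the point P.

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    T = np.identity(4)
    T[:3, 3] = [tx, ty, tz]
    return T

P = np.array([10, 10, 10, 1])            # point in homogeneous coordinates
T = translation_matrix(10, 20, 5)
print(T @ P)                              # -> [20. 30. 15.  1.]
```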
Rotation
For 3D rotation we need to pick an axis to rotate about.
The most common choices are the X-axis, the Y-axis, and the Z-axis
Coordinate-Axes Rotations
Fig: Rotations about the three coordinate axes (X, Y, and Z).
Z-Axis Rotation
Two-dimensional rotation equations can easily be converted into 3D Z-axis rotation equations;
for rotation about the z axis we leave the z coordinate unchanged:
x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
z' = z
where the parameter θ specifies the rotation angle.
P' = Rz(θ) · P
The matrix equation is written as:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
X-Axis Rotation
The transformation equations for x-axis rotation are obtained from the z-axis rotation equations
by replacing x → y → z → x cyclically, as shown here; for rotation about the x axis we leave
the x coordinate unchanged:
y' = y cos θ − z sin θ
z' = y sin θ + z cos θ
x' = x
where the parameter θ specifies the rotation angle.
P' = Rx(θ) · P
The matrix equation is written as:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Y-Axis Rotation
The transformation equations for y-axis rotation are obtained from the x-axis rotation equations
by replacing x → y → z → x cyclically, as shown here; for rotation about the y axis we leave
the y coordinate unchanged:
z' = z cos θ − x sin θ
x' = z sin θ + x cos θ
y' = y
where the parameter θ specifies the rotation angle.
P' = Ry(θ) · P
The matrix equation is written as:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Example: - Rotate the point P(5, 5, 5) by 90° about the Z axis.

P' = Rz(θ) · P

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos 90° & -\sin 90° & 0 & 0 \\ \sin 90° & \cos 90° & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} -5 \\ 5 \\ 5 \\ 1 \end{bmatrix}$$

The final coordinate after rotation is P'(−5, 5, 5).
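The rotation example can be checked numerically as well. The sketch below (assuming NumPy) builds Rz(θ) for θ = 90° and applies it to P(5, 5, 5).

```python
import numpy as np

def rotation_z(theta_deg):
    """Homogeneous 4x4 rotation about the z axis."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

P = np.array([5, 5, 5, 1])
print(np.round(rotation_z(90) @ P))   # -> [-5.  5.  5.  1.]
```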
Fig: A rotation axis defined by the points P1 and P2.
Rotation about an arbitrary axis is carried out in the following steps.
1) Translate the object so that the rotation axis passes through the coordinate origin. If the axis is
defined by points P1(x1, y1, z1) and P2(x2, y2, z2), we translate P1 to the origin; the translation
matrix for the same is as below:

$$T = \begin{bmatrix} 1 & 0 & 0 & -x_1 \\ 0 & 1 & 0 & -y_1 \\ 0 & 0 & 1 & -z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
2) Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
This task can be completed by two rotations: the first rotation about the x-axis and the second
rotation about the y-axis.
But here we do not know the rotation angles, so we will use the dot product and the cross product.
The rotation axis vector is V = (x2 − x1, y2 − y1, z2 − z1).
The unit vector along the rotation axis is obtained by dividing V by its magnitude:

$$u = \frac{V}{|V|} = \left( \frac{x_2 - x_1}{|V|}, \frac{y_2 - y_1}{|V|}, \frac{z_2 - z_1}{|V|} \right) = (a, b, c)$$
u’ u
α
X
uz
𝒖′ ∙ 𝒖𝒛 = |𝒖′||𝒖𝒛| 𝐜𝐨𝐬 𝜶
Coordinate of ‘u’’ is (0,b,c) as we will take projection on YZ-plane x value is zero.
𝒖 ′ ∙ 𝒖𝒛
𝐜𝐨𝐬 𝜶 = (𝟎, 𝒃, 𝒄)(𝟎, 𝟎, 𝟏) 𝒄 √𝒃𝟐 + 𝒄𝟐
|𝒖
𝒘𝒉𝒆𝒓𝒆
= =
|𝒖𝒛|
𝒅=
′|
And
√𝒃𝟐 + 𝒄𝟐 𝒅
𝒖′ × 𝒖𝒛 = 𝒖𝒙|𝒖′||𝒖𝒛| 𝐬𝐢𝐧 𝜶 = 𝒖𝒙 ∙ 𝒃
𝒖𝒙|𝒖′||𝒖𝒛| 𝐬𝐢𝐧 𝜶 = 𝒖𝒙 ∙ 𝒃
|𝒖′||𝒖𝒛| 𝐬𝐢𝐧 𝜶 = 𝒃
Comparing magnitude
𝟎𝐜𝐨𝐬 𝜶 − 𝐬𝐢𝐧 𝜶𝟎
𝑹 �(𝜶) =𝟎[ 𝐬𝐢𝐧 𝜶 𝐜𝐨𝐬 𝜶 𝟎 ]
𝟎 𝟎 𝟎 𝟏
𝟏 𝟎 𝟎 𝟎
𝒄 𝒃
𝟎 𝟎
𝒅 𝒅
𝑹𝒙(𝜶) =
−
𝟎𝒅 𝒅 𝟎
� �
[𝟎 𝟎 𝟎 𝟏 ]
After performing the above rotation, u is rotated into u'' in the XZ plane with coordinates
(a, 0, √(b²+c²)). As we know, rotation about the x axis leaves the x coordinate unchanged; u'' lies
in the XZ plane so its y coordinate is zero, and its z component equals the magnitude of u', i.e. d.
Now rotate u'' about the Y axis so that it coincides with the Z axis.
Fig: The vector u'' in the XZ plane makes angle β with the z axis (uz).

$$\cos\beta = \frac{u'' \cdot u_z}{|u''||u_z|} = \frac{(a, 0, \sqrt{b^2+c^2})\cdot(0,0,1)}{1} = \sqrt{b^2+c^2} = d, \qquad \text{where } d = \sqrt{b^2+c^2}$$

(Comparing cross-product magnitudes, as before, gives sin β = −a.)

$$R_y(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} d & 0 & -a & 0 \\ 0 & 1 & 0 & 0 \\ a & 0 & d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Now, by combining both rotations, we can make the rotation axis coincide with the Z axis.
3) Perform the specified rotation about that coordinate axis.
Since we have aligned the rotation axis with the Z axis, the matrix for the specified rotation about the z axis is:

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
4) Apply inverse rotations to bring the rotation axis back to its original orientation. This step is
the inverse of step 2.
5) Apply the inverse translation to bring the rotation axis back to its original position. This step is
the inverse of step 1.
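Putting the five steps together, the sketch below (a minimal illustration, assuming NumPy; P1, P2 and θ are sample values) builds the composite matrix T⁻¹ · Rx⁻¹(α) · Ry⁻¹(β) · Rz(θ) · Ry(β) · Rx(α) · T for rotation about the axis through P1 and P2, using the a, b, c, d quantities derived above.

```python
import numpy as np

def rotate_about_axis(p1, p2, theta_deg):
    """Composite 4x4 matrix for rotation by theta about the axis p1 -> p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a, b, c = (p2 - p1) / np.linalg.norm(p2 - p1)       # unit axis (a, b, c)
    d = np.hypot(b, c)                                   # d = sqrt(b^2 + c^2)

    T = np.identity(4);  T[:3, 3] = -p1                  # step 1: move P1 to the origin
    Rx = np.identity(4)                                  # step 2a: rotate axis into the xz plane
    if d > 1e-12:                                        # (skip if the axis already lies on x)
        Rx[1:3, 1:3] = np.array([[c, -b], [b, c]]) / d
    Ry = np.array([[ d, 0, -a, 0],                       # step 2b: rotate axis onto the z axis
                   [ 0, 1,  0, 0],
                   [ a, 0,  d, 0],
                   [ 0, 0,  0, 1]])
    t = np.radians(theta_deg)
    Rz = np.array([[np.cos(t), -np.sin(t), 0, 0],        # step 3: rotation about z
                   [np.sin(t),  np.cos(t), 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]])
    Tinv = np.identity(4);  Tinv[:3, 3] = p1             # steps 4 and 5: undo alignment and translation
    return Tinv @ Rx.T @ Ry.T @ Rz @ Ry @ Rx @ T

# sample axis along the main diagonal of the unit cube
M = rotate_about_axis((0, 0, 0), (1, 1, 1), 120)
print(np.round(M @ np.array([1, 0, 0, 1])))   # 120 degrees about the diagonal maps the x axis to the y axis
```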
Scaling
It is used to resize the object in 3D space.
We can apply uniform as well as non-uniform scaling by selecting the proper scaling factors.
Scaling in 3D is similar to scaling in 2D; only one extra coordinate needs to be considered.
P' = S · P
Simple coordinate-axis scaling can be performed as below:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Example: - Scale the line AB with end point coordinates A(10, 20, 10) and B(20, 30, 30) with
scale factor S(3, 2, 4).

P' = S · P

$$\begin{bmatrix} A'_x & B'_x \\ A'_y & B'_y \\ A'_z & B'_z \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 10 & 20 \\ 20 & 30 \\ 10 & 30 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 30 & 60 \\ 40 & 60 \\ 40 & 120 \\ 1 & 1 \end{bmatrix}$$

The final coordinates after scaling are A'(30, 40, 40) and B'(60, 60, 120).
Fig: Scaling with respect to a selected fixed point (xf, yf, zf).
Scaling with respect to a selected fixed point (xf, yf, zf) is obtained by translating the fixed point
to the origin, scaling, and then translating the fixed point back to its original position:

$$P' = \begin{bmatrix} 1 & 0 & 0 & x_f \\ 0 & 1 & 0 & y_f \\ 0 & 0 & 1 & z_f \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 & -x_f \\ 0 & 1 & 0 & -y_f \\ 0 & 0 & 1 & -z_f \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot P$$

$$P' = \begin{bmatrix} s_x & 0 & 0 & (1-s_x)x_f \\ 0 & s_y & 0 & (1-s_y)y_f \\ 0 & 0 & s_z & (1-s_z)z_f \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot P$$
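The composite fixed-point scaling matrix can be assembled exactly as written above. The sketch below (assuming NumPy; the fixed point and scale factors are sample values) shows that translate, scale, translate-back equals the single matrix with the (1 − s) terms, and that the fixed point itself does not move.

```python
import numpy as np

def scale_about_point(sx, sy, sz, fixed):
    """4x4 scaling matrix about a fixed point (xf, yf, zf)."""
    xf, yf, zf = fixed
    T_back = np.identity(4); T_back[:3, 3] = [xf, yf, zf]     # translate fixed point back
    S = np.diag([sx, sy, sz, 1.0])                            # scale about the origin
    T_to = np.identity(4);   T_to[:3, 3] = [-xf, -yf, -zf]    # move fixed point to origin
    return T_back @ S @ T_to

M = scale_about_point(2, 3, 4, fixed=(1, 1, 1))
print(M)                                   # last column holds the (1 - s) * fixed-point terms
print(M @ np.array([1, 1, 1, 1]))          # the fixed point stays at (1, 1, 1)
```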
Other Transformations
Reflections
Reflection produces the mirror image obtained when a mirror is placed at the required position.
When the mirror is placed in the XY plane, we obtain the coordinates of the image by simply
changing the sign of the z coordinate.
The transformation matrix for reflection about the XY plane is given below:

$$RF_z = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Similarly, the transformation matrix for reflection about the YZ plane is:

$$RF_x = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Similarly, the transformation matrix for reflection about the XZ plane is:

$$RF_y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Shears
Shearing transformation can be used to modify object shapes.
They are also useful in 3D viewing for obtaining general projection transformations.
Here we use shear parameters 'a' and 'b'.
The shear matrix for the Z axis is given below:

$$SH_z = \begin{bmatrix} 1 & 0 & a & 0 \\ 0 & 1 & b & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Similarly, the shear matrix for the X axis is:

$$SH_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ a & 1 & 0 & 0 \\ b & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Similarly, the shear matrix for the Y axis is:

$$SH_y = \begin{bmatrix} 1 & a & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & b & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Viewing Pipeline
Viewing Co-ordinates.
Generating a view of an object is similar to photographing the object.
We can take photograph from any side with any angle & orientation of camera.
Similarly we can specify viewing coordinate in ordinary direction.
Fig. 5.9: -A right handed viewing coordinate system, with axes Xv, Yv, and Zv, relative to a
world- coordinate scene.
Fig. 5.10: -Viewing scene from different direction with a fixed view-reference point.
Fig. 5.11: - Aligning a viewing system with the world-coordinate axes using a sequence of translate-
rotate transformations.
As shown in the figure, the steps of the transformation are as follows.
Consider that the view reference point in the world coordinate system is at position (x0, y0, z0);
then, to align the view reference point with the world origin, we perform a translation with the matrix:

$$T = \begin{bmatrix} 1 & 0 & 0 & -x_0 \\ 0 & 1 & 0 & -y_0 \\ 0 & 0 & 1 & -z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Now we require a rotation sequence of up to three coordinate-axis rotations, depending on the
direction we choose for N.
Another method for generating the rotation transformation matrix is to calculate the unit uvn
vectors and form the composite rotation matrix directly.
Here

$$n = \frac{N}{|N|} = (n_1, n_2, n_3)$$

$$u = \frac{V \times N}{|V \times N|} = (u_1, u_2, u_3)$$

$$v = n \times u = (v_1, v_2, v_3)$$
This method also automatically adjusts the direction for u so that v is perpendicular to n.
Then the composite rotation matrix for the viewing transformation is:

$$R = \begin{bmatrix} u_1 & u_2 & u_3 & 0 \\ v_1 & v_2 & v_3 & 0 \\ n_1 & n_2 & n_3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
This aligns u to Xw axis, v to Yw axis and n to Zw axis.
Finally, the composite matrix for the world-to-viewing coordinate transformation is given by:
M_{WC,VC} = R · T
This transformation is applied to the object coordinates to transfer them to the viewing reference
frame.
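A minimal sketch of this construction is shown below (assuming NumPy; the view reference point, the view-plane normal N and the view-up vector V are sample values). It forms the unit uvn vectors, builds R and T, and composes M_WC,VC = R · T.

```python
import numpy as np

def world_to_viewing(view_ref_point, N, V):
    """Composite world-to-viewing-coordinate matrix M = R . T built from the uvn vectors."""
    n = np.asarray(N, float); n /= np.linalg.norm(n)        # n = N / |N|
    u = np.cross(V, n);       u /= np.linalg.norm(u)        # u = (V x N) / |V x N| (n is parallel to N)
    v = np.cross(n, u)                                       # v = n x u
    R = np.identity(4)
    R[0, :3], R[1, :3], R[2, :3] = u, v, n                  # rows align u->Xw, v->Yw, n->Zw
    T = np.identity(4)
    T[:3, 3] = -np.asarray(view_ref_point, float)           # move view reference point to origin
    return R @ T

M = world_to_viewing(view_ref_point=(5, 5, 5), N=(0, 0, 1), V=(0, 1, 0))
print(M @ np.array([5, 5, 5, 1]))    # the view reference point maps to the viewing origin
```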
Projections
Once world-coordinate descriptions of the objects in a scene are converted to viewing
coordinates, we can project the three-dimensional objects onto the two-dimensional view
plane.
The process of converting three-dimensional coordinates into a two-dimensional scene is known as
projection.
There are two projection methods namely.
1. Parallel Projection.
2. Perspective Projection.
Let's discuss each one.
Parallel Projections
Fig: Parallel projection of points P1 and P2 onto the view plane (P1', P2') along parallel projection lines.
Fig: Orthographic projection of a point (x, y, z) onto the view plane at (x, y).
Fig: Oblique projection of the point (x, y, z) to view-plane position (xp, yp); the projection line makes
angle α with the line L joining (xp, yp) to the orthographic projection point (x, y), and L makes
angle φ with the xv axis in the view plane.
From this geometry the oblique projection equations are:
xp = x + L cos φ
yp = y + L sin φ
The length L depends on the angle α and the z coordinate of the point to be projected:

$$\tan\alpha = \frac{z}{L}$$

$$L = \frac{z}{\tan\alpha} = z L_1, \qquad \text{where } L_1 = \frac{1}{\tan\alpha}$$

Now put the value of L in the projection equations:

$$x_p = x + z L_1 \cos\phi$$

$$y_p = y + z L_1 \sin\phi$$
Now we will write the transformation matrix for these equations:

$$M_{parallel} = \begin{bmatrix} 1 & 0 & L_1\cos\phi & 0 \\ 0 & 1 & L_1\sin\phi & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

This equation can be used for any parallel projection. For an orthographic projection L1 = 0,
and so the whole term multiplying the z component is zero.
When value of 𝐭𝐚𝐧 𝜶 = 𝟏 projection is known as Cavalier projection.
When value of 𝐭𝐚𝐧 𝜶 = 𝟐 projection is known as Cabinet projection.
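The matrix M_parallel above is easy to exercise. The sketch below (assuming NumPy; the point and the angles are sample values) projects a point with a cavalier projection (tan α = 1, so L1 = 1) and with an orthographic projection (L1 = 0).

```python
import numpy as np

def parallel_projection(L1, phi_deg):
    """Oblique parallel projection matrix; L1 = 1/tan(alpha), phi is the angle in the view plane."""
    phi = np.radians(phi_deg)
    return np.array([[1, 0, L1*np.cos(phi), 0],
                     [0, 1, L1*np.sin(phi), 0],
                     [0, 0, 0,              0],
                     [0, 0, 0,              1]])

P = np.array([2.0, 3.0, 4.0, 1.0])
cavalier = parallel_projection(L1=1.0, phi_deg=45)     # tan(alpha) = 1
ortho    = parallel_projection(L1=0.0, phi_deg=45)     # orthographic: z is simply dropped
print(cavalier @ P)    # x and y are shifted by z*L1*cos(phi) and z*L1*sin(phi)
print(ortho @ P)       # -> [2. 3. 0. 1.]
```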
Perspective Projection
Fig: Perspective projection of points P1 and P2 to positions P1' and P2' on the view plane; the
projection lines converge at the projection reference point.
Fig: Perspective projection of the point P = (x, y, z) to the position (xp, yp, zvp) on the view plane,
with the projection reference point at zprp on the zv axis.
Writing parametric equations for a point (x', y', z') along the projection line from P to the
projection reference point:
x' = x − xu
y' = y − yu
z' = z − (z − zprp)u, where u varies from 0 to 1.
On the view plane z' = zvp, so

$$z_{vp} = z - (z - z_{prp})u \qquad\Rightarrow\qquad u = \frac{z_{vp} - z}{z_{prp} - z}$$

Now substituting this value of u in the equations for x' and y' we obtain:

$$x_p = x\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) = x\left(\frac{d_p}{z_{prp} - z}\right)$$

$$y_p = y\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) = y\left(\frac{d_p}{z_{prp} - z}\right), \qquad \text{where } d_p = z_{prp} - z_{vp}$$
Using three-dimensional homogeneous-coordinate representations, we can write the perspective
projection transformation in matrix form as:

$$\begin{bmatrix} x_h \\ y_h \\ z_h \\ h \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -\dfrac{z_{vp}}{d_p} & z_{vp}\left(\dfrac{z_{prp}}{d_p}\right) \\ 0 & 0 & -\dfrac{1}{d_p} & \dfrac{z_{prp}}{d_p} \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

In this representation the homogeneous factor is

$$h = \frac{z_{prp} - z}{d_p}$$

and xp = xh / h, yp = yh / h.
There are a number of special cases of the perspective transformation equations.
If the view plane is taken to be the uv plane, then zvp = 0 and the projection coordinates are:

$$x_p = x\left(\frac{z_{prp}}{z_{prp} - z}\right) = x\left(\frac{1}{1 - z/z_{prp}}\right)$$

$$y_p = y\left(\frac{z_{prp}}{z_{prp} - z}\right) = y\left(\frac{1}{1 - z/z_{prp}}\right)$$
If we take the projection reference point at the origin, then zprp = 0 and the projection coordinates are:

$$x_p = x\left(\frac{z_{vp}}{z}\right) = x\left(\frac{1}{z/z_{vp}}\right)$$

$$y_p = y\left(\frac{z_{vp}}{z}\right) = y\left(\frac{1}{z/z_{vp}}\right)$$
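A small numeric sketch of the homogeneous perspective matrix above, followed by the divide by h, is shown below (assuming NumPy; zprp, zvp and the test point are sample values).

```python
import numpy as np

def perspective_matrix(z_prp, z_vp):
    """Homogeneous perspective projection onto the plane z = z_vp from the PRP at z = z_prp."""
    dp = z_prp - z_vp
    return np.array([[1, 0, 0,         0],
                     [0, 1, 0,         0],
                     [0, 0, -z_vp/dp,  z_vp*(z_prp/dp)],
                     [0, 0, -1/dp,     z_prp/dp]])

M = perspective_matrix(z_prp=10.0, z_vp=0.0)
P = np.array([4.0, 2.0, -10.0, 1.0])        # a sample point behind the view plane
xh, yh, zh, h = M @ P
print(xh/h, yh/h, zh/h)                      # projected (xp, yp, zvp); here h = (zprp - z)/dp = 2
```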
The vanishing point for any set of lines that are parallel to one of the principal axes of an
object is referred to as a principal vanishing point
We control the number of principal vanishing points (one, two, or three) with the
orientation of the projection plane, and perspective projections are accordingly classified as
one-point, two-point, or three- point projections.
Perspective projection
This method generates a view of a 3D object by projecting its points onto the display plane along
converging paths.
Fig. 4.2: - perspective projection
This displays an object smaller when it is far from the view plane and at nearly its true size
when it is close to the view plane.
It produces a more realistic view, as this is the way our eye forms an image.
Depth cueing
Many times depth information is important so that, for a particular viewing direction, we can
identify which are the front surfaces and which are the back surfaces of a displayed object.
A simple method to do this is depth cueing, in which we assign higher intensity to closer objects
and lower intensity to farther objects.
Depth cueing is applied by choosing maximum and minimum intensity values and a range of
distances over which the intensities are to vary.
Another application is to model the effect of the atmosphere.
Back-Face Detection
Back-face detection is a simple and fast object-space method.
It identifies the back faces of a polyhedron based on inside-outside tests.
A point (x, y, z) is inside a polygon surface if Ax + By + Cz + D < 0, where A, B, C, and D are the
plane parameters; this is nothing but the plane equation of the polygon surface.
We can simplify the test by taking the normal vector N = (A, B, C) of the polygon surface and a
vector V in the viewing direction from the eye, as shown in the figure: the polygon is a back face
if V · N > 0.
Fig. 6.2:-view of a concave polyhedron with one face partially hidden by other faces.
Depth-Buffer Method
For each position on each polygon surface, compare depth values to the previously stored values in
the depth buffer to determine visibility.
Calculate the depth z for each (x, y) position on the polygon.
If z > depth(x, y), then set:
depth(x, y) = z, refresh(x, y) = Isurf(x, y)
where Ibackgnd is the value for the background intensity, and Isurf(x, y) is the projected intensity
value for the surface at pixel position (x, y).
Fig. 6.3:- At view plane position (x, y), surface s1 has smallest depth from the view plane and
so is visible at that position.
We start with a pixel position on the view plane and a particular surface of the object.
If we take the orthographic projection of any point (x, y, z) of the surface onto the view plane, we
get the two-dimensional coordinates (x, y) for displaying that point.
Here we take an (x, y) position on the plane and find at what depth the particular surface lies.
We can implement the depth-buffer algorithm in normalized coordinates so that z values
range from 0 at the back clipping plane to zmax at the front clipping plane.
The zmax value can be 1 (for a unit cube) or the largest value that can be stored.
Two buffers are required: a depth buffer to store the depth value of each (x, y) position
and a refresh buffer to store the corresponding intensity values.
Initially the depth-buffer values are 0 and the refresh-buffer values are the background intensity.
Each polygon surface is then processed, one scan line at a time.
The depth z at each (x, y) pixel position is calculated from the plane equation of the surface,
z = (−Ax − By − D)/C.
If the calculated depth value is greater than the value stored in the depth buffer, it is replaced
with the new depth value and the refresh buffer is updated with the intensity of that surface.
Fig. 6.4:-From position (x, y) on a scan line, the next position across the line has coordinates
(x+1, y), and the position immediately below on the next line has coordinates (x, y−1).
For the next pixel along a horizontal scan line, the z value can be calculated by putting x' = x + 1
in the above equation:

$$z' = \frac{-A(x+1) - By - D}{C}$$

$$z' = z - \frac{A}{C}$$
Similarly, for the pixel immediately below on the next scan line we have y' = y − 1, so its z value
can be calculated as follows:

$$z' = \frac{-Ax - B(y-1) - D}{C}$$

$$z' = z + \frac{B}{C}$$
If we move along a polygon boundary, this incremental calculation improves performance
by eliminating extra computation.
Moving from top to bottom along a polygon edge we get x' = x − 1/m and y' = y − 1, so the z value
is obtained as follows:

$$z' = \frac{-A(x - 1/m) - B(y-1) - D}{C}$$

$$z' = z + \frac{A/m + B}{C}$$
Alternately we can use midpoint method to find the z values.
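The depth-buffer idea above can be summarized in a few lines. The sketch below (a minimal illustration, assuming NumPy; the "scene" is just two made-up constant-depth rectangles rather than real scan-converted polygons with per-pixel plane-equation depths) keeps the nearer surface at each pixel, using the convention described above that larger z means closer to the view plane.

```python
import numpy as np

WIDTH, HEIGHT = 8, 6
depth   = np.zeros((HEIGHT, WIDTH))            # initialised to 0 (back clipping plane)
refresh = np.full((HEIGHT, WIDTH), 0.1)        # initialised to the background intensity

def draw_rect(x0, x1, y0, y1, z, intensity):
    """Scan the rectangle and keep it only where it is closer than what is already stored."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z > depth[y, x]:                # larger z = closer to the view plane
                depth[y, x] = z
                refresh[y, x] = intensity

draw_rect(0, 6, 0, 4, z=0.3, intensity=0.5)    # far surface
draw_rect(3, 8, 2, 6, z=0.7, intensity=0.9)    # near surface, overlapping the first
print(refresh)                                  # the overlap region shows the near surface
```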
Module:04
Illumination Models: Basic Models, Displaying Light
Intensities. Surface Rendering Methods: Polygon Rendering
Methods: Gouraud Shading, Phong Shading. Computer
Animation: Types of Animation, Key frame Vs. Procedural
Animation, Methods of Controlling Animation, Morphing.
Introduction to Virtual Reality and Augmented Reality.
ILLUMINATION AND SHADING
Light Source: Light sources that illuminate an object are of two types:
Light emitting sources: bulb, sun, etc.
Light reflecting sources: wall of a room, etc.
i. Point Source: The dimensions of the light source are much smaller than the size of
the object.
ii. Distributed Light: The dimensions of the light source and the object are
approximately the same.
iii. Light Source Attenuation: A basic property of light is that it loses intensity the further it
travels from its source. The intensity of light from the sun changes in proportion to
the distance from the sun. The technical name for this is light attenuation.
The illumination equation is then
I = Ia·Ka + Ip·Kd (N · L)
where Ia = intensity of ambient (background) light, Ka = ambient reflection coefficient,
Ip = intensity of light coming from the point source, and Kd = diffuse reflectivity.
iv. Ambient Light: If, instead of self-luminosity, there is a diffuse, non-directional source
of light, then the product of the multiple reflections of light from the many surfaces present
in the environment is called ambient light.
Specular Reflection
When we illuminate a shiny surface such as polished metal, we observe a highlight or bright spot
on the shiny surface. This phenomenon of reflection of incident light in a concentrated region
around the specular reflection angle is called specular reflection.
N: unit normal vector to the surface.
R: unit vector in the direction of ideal specular reflection.
L: unit vector towards the point light source.
V: unit vector pointing to the viewer from the surface.
Light source
When we see any object we see reflected light from that object. Total reflected light is the
sum of contribution from all sources and reflected light from other object that falls on the
object.
So a surface that is not directly exposed to a light source may still be visible if a nearby object
is illuminated.
The simplest model for light source is point source. Rays from the source then follows
radial diverging paths from the source position.
Ambient Light
This is a simple way to model combination of light reflection from various surfaces to
produce a uniform illumination called ambient light, or background light.
Ambient light has no directional properties. The amount of ambient light incident on all
the surfaces and object are constant in all direction.
If we consider ambient light of intensity Ia, and each surface is illuminated with intensity Ia,
then the resulting reflected light is constant for all surfaces.
Diffuse Reflection
When some intensity of light falls on an object surface and the surface reflects the light in all
directions in equal amounts, the resulting reflection is called diffuse reflection.
Ambient light reflection is approximation of global diffuse lighting effects.
Diffuse reflections are constant over each surface independent of our viewing direction.
The amount of reflected light depends on the parameter Kd, the diffuse reflection
coefficient or diffuse reflectivity.
Kd is assigned a value between 0 and 1 depending on the reflecting properties: a shiny surface
reflects more light, so Kd is given a larger value, while a dull surface is given a smaller value.
If surface is exposed to only ambient light we calculate ambient diffuse reflection as:
𝐼𝑎𝑚𝑏𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑎
where Ia is the intensity of the ambient light falling on the surface.
In practice, most of the time each object is also illuminated by a light source, so we now discuss
the diffuse reflection intensity for a point source.
We assume that the diffuse reflections from the source are scattered with equal intensity in all
directions, independent of the viewing direction; such a surface is sometimes referred to as an
ideal diffuse reflector or Lambertian reflector.
This is modelled by Lambert's cosine law, which states that the radiant energy from any small
surface area dA in any direction φn relative to the surface normal is proportional to cos φn.
Fig. 6.6:- Radiant energy from a surface area dA in direction Φn relative to the surface normal
direction.
As shown, the reflected light intensity does not depend on the viewing direction, so for
Lambertian reflection the intensity of light is the same in all viewing directions.
Even though a perfect reflector distributes light equally in all directions, the brightness of a
surface does depend on the orientation of the surface relative to the light source.
As the angle between the surface normal and the incident light direction increases, the light
falling on the surface decreases.
Fig. 6.7:- An illuminated area projected perpendicular to the path of the incoming light rays.
If we denote the angle of incidence between the incoming light and the surface normal as θ,
then the projected area of a surface patch perpendicular to the light direction is
proportional to cos θ.
If 𝐼𝑙 is the intensity of the point light source, then the diffuse reflection
equation for a point on the surface can be written as
𝐼𝑙,𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑙 𝑐𝑜𝑠𝜃
A surface is illuminated by a point source only if the angle of incidence is in the range 0° to 90°;
for other values of θ the light source is behind the surface.
Fig. 6.8:-Angle of incidence 𝜃 between the unit light-source direction vector L and the unit
surface normal N.
As shown in the figure, N is the unit normal vector to the surface and L is the unit vector in the
direction of the light source; their dot product gives:
𝑁 ∙ 𝐿 = cos 𝜃
And
𝐼𝑙,𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑙 (𝑁 ∙ 𝐿)
Now in practical ambient light and light source both are present and so total
diffuse reflection is given by:
𝐼𝑑𝑖𝑓𝑓 = 𝐾𝑎 𝐼𝑎 + 𝐾𝑑 𝐼𝑙 (𝑁 ∙ 𝐿)
A separate ambient reflection coefficient Ka is used in many graphics packages, so here we use
Ka instead of Kd for the ambient term.
Objects other than ideal reflectors exhibit specular reflection over a finite range of viewing
positions around the specular reflection direction. In the simplified (Phong) model this is
computed with the term (N · H)^ns, where H is the unit halfway vector between L and V, N is the
unit normal vector, and V is the unit vector in the viewing direction.
The total intensity is obtained by adding the intensities due to both reflections:
I = Idiff + Ispec
𝐼 = 𝐾𝑎 𝐼𝑎 + 𝐾𝑑 𝐼𝑙 (𝑁 ∙ 𝐿) + 𝐾𝑠 𝐼𝑙 (𝑁 ∙ 𝐻)𝑛𝑠
And for multiple sources we can extend this equation as follows:

$$I = K_a I_a + \sum_{i=1}^{n} I_{l_i} \left[ K_d (N \cdot L_i) + K_s (N \cdot H_i)^{n_s} \right]$$
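The full intensity equation above maps directly to code. The sketch below (a minimal illustration, assuming NumPy; all coefficients, positions and the light are sample values, and H is taken as the unit halfway vector between L and V) evaluates I = Ka·Ia + Il[Kd(N·L) + Ks(N·H)^ns] for a single point light.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def illumination(point, normal, eye, light_pos, Il, Ia,
                 Ka=0.1, Kd=0.6, Ks=0.3, ns=32):
    """Ambient + diffuse + specular intensity at a surface point for one point source."""
    N = normalize(normal)
    L = normalize(light_pos - point)            # unit vector towards the light
    V = normalize(eye - point)                  # unit vector towards the viewer
    H = normalize(L + V)                        # halfway vector used in the specular term
    diff = max(np.dot(N, L), 0.0)               # no diffuse term if the source is behind the surface
    spec = max(np.dot(N, H), 0.0) ** ns
    return Ka * Ia + Il * (Kd * diff + Ks * spec)

I = illumination(point=np.array([0.0, 0.0, 0.0]),
                 normal=np.array([0.0, 0.0, 1.0]),
                 eye=np.array([0.0, 0.0, 5.0]),
                 light_pos=np.array([2.0, 2.0, 4.0]),
                 Il=1.0, Ia=1.0)
print(I)
```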
Properties of Light
Light is an electromagnetic wave. Visible light occupies a narrow band of the electromagnetic
spectrum: wavelengths of roughly 400 nm to 700 nm are visible, and other bands are not visible
to the human eye.
We can find the relation between frequency f and wavelength λ as: c = λf.
Frequency stays constant in all materials, but the speed of light and the wavelength are material
dependent.
To produce white light, a source emits all visible frequencies.
When light strikes an object, some frequencies are reflected and some are absorbed by the object.
The reflected frequencies determine the color we see; the dominant reflected frequency is called
the dominant frequency (hue), and the corresponding wavelength is called the dominant wavelength.
Other property are purity and brightness. Brightness is perceived intensity of light.
Intensity is the radiant energy emitted per unit time, per unit solid angle and per unit
projected area of the source.
Purity or saturation of the light describes how washed out or how “pure” the color of the light
appears.
Dominant frequency and purity both collectively refers as chromaticity.
If two color source combined to produce white light they are called complementary color of
each other. For example red and cyan are complementary color.
Typical color models that are uses to describe combination of light in terms of dominant
frequency use three colors to obtain reasonable wide range of colors, called the color
gamut for that model.
Two or three colors used to obtain the other colors in the range are called primary colors.
The human eye contains three types of color-sensitive cones; these visual pigments have peak
sensitivities at red, green and blue.
So combining these three colors we can obtain wide range of color this concept is used
in RGB color model.
SHADING:
We can shade any surface by calculating surface normal at each visible point and applying
the describe illumination model at that point.
Different Types of Shading
I. Constant intensity shading
II. Gouraud shading
III. Phong shading
IV. Halftone shading
Gouraud Shading:
1. It is also called interpolated shading.
2. The polygon surface is displayed by linearly interpolating intensity values across the surface.
3. The intensity value of each polygon is matched with the values of adjacent polygons along
the common edges.
It needs the following calculations:
1. Determine the average normal vector at each polygon vertex.
2. Apply an illumination model at each polygon vertex to determine the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.
Let N1, N2, N3 be the normals of the three surfaces sharing vertex V. Then

$$N_v = \frac{\sum_i N_i}{\left|\sum_i N_i\right|} = \frac{N_1 + N_2 + N_3}{|N_1 + N_2 + N_3|}$$
The intensities at points a and b on a scan line are obtained by interpolating along the polygon
edges, and the intensity at a point P on the scan line is then obtained by interpolating between
Ia and Ib:

$$I_a = \frac{Y_a - Y_2}{Y_1 - Y_2} I_1 + \frac{Y_1 - Y_a}{Y_1 - Y_2} I_2$$

$$I_b = \frac{Y_b - Y_2}{Y_3 - Y_2} I_3 + \frac{Y_3 - Y_b}{Y_3 - Y_2} I_2$$

$$I_p = \frac{X_b - X_p}{X_b - X_a} I_a + \frac{X_p - X_a}{X_b - X_a} I_b$$
Phong Shading:
It is also called normal-vector interpolation shading. Here we interpolate the normal vector rather
than the intensity. It proceeds as follows:
1. Determine the average unit normal vector at each polygon vertex.
2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to determine projected pixel intensities
for the surface points.
$$N = \frac{Y - Y_2}{Y_1 - Y_2} N_1 + \frac{Y_1 - Y}{Y_1 - Y_2} N_2$$

Halftone Shading:
Many display devices can produce only two intensity levels.
In such cases we can create an apparent increase in the number of available intensities. This is
achieved by incorporating multiple pixel positions into the display of each intensity value.
When we view a small area from a large distance, our eyes average the fine detail within the small
area and record only the overall intensity of the area.
This phenomenon of apparently increasing the number of available intensities by considering the
combined intensity of multiple pixels is known as halftoning.
ANIMATION:
Literally, animation means "giving life to". It generally refers to any time sequence of visual
changes in a scene. The changes in the scene are made by transformations (translation, scaling,
rotation). One application of animation is entertainment, such as cartoons. We can also produce
animation by changing lighting effects or other parameters and procedures associated with
illumination and rendering.
PRODUCTION TECHNIQUE:
The overall animation of an entire object is referred to as a production. A production is broken
into major parts referred to as sequences. A sequence is usually identified by an associated
strategy area. A production usually consists of one to dozens of sequences.
Each sequence is broken down into one or more shots. Each shot is a continuous camera
recording. Each shot is broken down into the individual frames of film. A frame is a single
recorded image.
to worry about this detail. Actors can communicate with other actors by sending messages and so
can synchronize their movements. This is similar to the behavior of objects in object-oriented
languages.
2. Procedural Animation:
Scripting systems were the earliest type of motion control systems; the animator writes a script
in the animation language, so the user must learn this language. In procedural animation,
procedures are used that define movement over time. These might be procedures that use the laws
of physics (physically based modeling) or animator-generated methods. An example is a motion that
is the result of some other action (this is called a "secondary action"), for example throwing a
ball which hits another object and causes the second object to move.
3. Representational Animation:
This technique allows an object to change its shape during the animation. There are three
subcategories. The first is the animation of articulated objects, i.e., complex objects composed
of connected rigid segments. The second is soft object animation, used for deforming and animating
the deformation of objects, e.g. skin over a body or facial muscles. The third is morphing, which
is the changing of one shape into another quite different shape. This can be done in two or three
dimensions.
4. Stochastic Animation:
This uses stochastic processes to control groups of objects, such as in particle systems.
Examples are fireworks, fire, waterfalls, etc.
5. Behavioral Animation:
Objects or "actors" are given rules about how they react to their environment. Examples are
schools of fish or flocks of birds, where each individual behaves according to a set of rules
defined by the animator.
MORPHING
Morphing is a familiar technology used to produce special effects in images or videos. Morphing
is common in the entertainment industry.
Morphing is widely used in movies, animation, games, etc. In addition to its use in the
entertainment industry, morphing can be used in computer-based training, electronic book
illustrations, presentations, education, etc. Morphing software is widely available on the internet.
The animation industry is looking for advanced technology to produce special effects in its
movies. The growing audience of the animation industry is not satisfied with movies with simple
animation. Here comes the significance of morphing.
The word "morphing" comes from the word "metamorphosis", which means a change of shape,
appearance or form. Morphing is done by coupling image warping with color interpolation.
Morphing is the process in which the source image is gradually distorted and vanishes while the
target image is produced. So earlier images in the sequence are similar to the source image and
later images are similar to the target image. The middle image of the sequence is the average of
the source image and the target image.
Introduction to Virtual Reality and Augmented Reality:
Virtual Reality: Today virtual reality (VR) technology is applied to advanced fields of medicine,
engineering, education, design, training, and entertainment. VR is a computer interface which
tries to mimic the real world beyond the flat monitor to give an immersive 3D (three-dimensional)
visual experience. Often it is hard to reconstruct the scales and distances between objects in
static 2D images, so the third dimension helps bring depth to objects.
Virtual reality (VR) is a computer-generated scenario that simulates experience through the
senses and perception. The immersive environment can be similar to the real world or it can be
fantastical, creating an experience not possible in our physical reality. Augmented reality
systems may also be considered a form of VR that layers virtual information over a live camera
feed into a headset, or through a smartphone or tablet device, giving the user the ability to
view three-dimensional images.
Current VR technology most commonly uses virtual reality headsets or multi-projected
environments, sometimes in combination with physical environments or props, to generate
realistic images, sounds and other sensations that simulate a user's physical presence in a
virtual or imaginary environment. A person using virtual reality equipment is able to "look
around" the artificial world, move around in it, and interact with virtual features or items.
The effect is commonly created by VR headsets consisting of a head-mounted display with a small
screen in front of the eyes, but can also be created through specially designed rooms with
multiple large screens.
VR systems that include transmission of vibrations and other
sensations to the user through a game controller or other devices are known
as haptic systems. This tactile information is generally known as force
feedback in medical, video gaming and military training applications.
Augmented Reality: Augmented Reality (AR) is a general term for a collection of technologies used
to blend computer-generated information with the viewer's natural senses. A simple example of AR
is using a spatial display (digital projector) to augment a real-world object (a wall) for a
presentation. As you can see, it's not a new idea, but a real revolution has come with advances
in mobile personal computing such as tablets and smartphones.
Since mobile 'smart' devices have become ubiquitous, 'Augmented Reality Browsers' have been
developed to run on them. AR browsers utilise the device's sensors (camera input, GPS, compass,
et al.) and superimpose useful information in a layer on top of the image from the camera which,
in turn, is viewed on the device's screen.
AR Browsers can retrieve and display graphics, 3D objects, text,
audio, video, etc., and use geospatial or visual ‘triggers’ (typically images, QR
codes, point cloud data) in the environment to initiate the display.
AR is being used in an increasing variety of ways, from providing
point-of-sale information to shoppers, tourist information on landmarks,
computer enhancement of traditional printed media, service information for
on-site engineers; the number of applications is huge.
There are a number of different development platforms on the market. The main mobile device
platforms are Junaio (now withdrawn), Aurasma and Layar. There is also a plethora of different
apps with novel ideas for AR applications if you search for them with your app provider, from
interactive museum displays to overlaying medical information over a patient.
There are some exciting emergent display technologies which are
nearing commercialization. A series of ‘Head Up Display’ (HUD) devices are
coming to market which will provide a ‘hands-free’ projection of AR
information via devices integrated in spectacle-like screens – examples
include Google’s Project Glass, which is now in beta release to selected
users in the States.
Time will tell the level of adoption such hardware will reach, since the wearer looks somewhat
unusual wearing the device, but it won't be long before they become more discreet; the 'bionic
contact lens' is already in development.
Augmented Reality vs. Virtual Reality
Augmented reality and virtual reality are inverse reflections of one another in what each
technology seeks to accomplish and deliver for the user. Virtual reality offers a digital
recreation of a real-life setting, while augmented reality delivers virtual elements as an
overlay to the real world.
How are Virtual Reality and Augmented Reality Similar?
Technology
Augmented and virtual realities both leverage some of the same types of technology, and they each
exist to serve the user with an enhanced or enriched experience.
Entertainment
Both technologies enable experiences that are becoming more commonly expected and sought after
for entertainment purposes. While in the past they seemed merely a figment of a science fiction
imagination, new artificial worlds come to life under the user's control, and deeper layers of
interaction with the real world are also achievable. Leading tech moguls are investing in and
developing new adaptations and improvements, and releasing more and more products and apps that
support these technologies for increasingly savvy users.
Science and Medicine
Additionally, both virtual and augmented realities have great potential in changing the landscape
of the medical field by making things such as remote surgeries a real possibility. These
technologies have already been used to treat and heal psychological conditions such as Post
Traumatic Stress Disorder (PTSD).
How do Augmented and Virtual Realities Differ?
Purpose
Augmented reality enhances experiences by adding virtual components such as digital images,
graphics, or sensations as a new layer of interaction with the real world. Contrastingly, virtual
reality creates its own reality that is completely computer generated and driven.
Delivery Method
Virtual Reality is usually delivered to the user through a head-mounted, or
hand-held controller. This equipment connects people to the virtual reality, and
allows them to control and navigate their actions in an environment meant to
simulate the real world.
Augmented reality is being used more and more in mobile devices such as laptops, smartphones, and
tablets to change how the real world and digital images and graphics intersect and interact.
How do they work together?
It is not always virtual reality vs. augmented reality; they do not always operate independently
of one another, and in fact they are often blended together to generate an even more immersive
experience. For example, haptic feedback, which is the vibration and sensation added to
interaction with graphics, is considered an augmentation. However, it is commonly used within a
virtual reality setting in order to make the experience more lifelike through touch.
Virtual reality and augmented reality are great examples of experiences and interactions fueled
by the desire to become immersed in a simulated land for entertainment and play, or to add a new
dimension of interaction between digital devices and the real world. Alone or blended together,
they are undoubtedly opening up worlds, both real and virtual alike.